# Localised Adaptive Spatial-Temporal Graph Neural Network

Wenying Duan, Xiaoxi He, Zimu Zhou, Lothar Thiele, Hong Rao

Published 2023-06-12 · http://arxiv.org/abs/2306.06930v2
###### Abstract.
Spatial-temporal graph models are prevailing for abstracting and modelling spatial and temporal dependencies. In this work, we ask the following question: _whether and to what extent can we localise spatial-temporal graph models?_ We limit our scope to adaptive spatial-temporal graph neural networks (ASTGNNs), the state-of-the-art model architecture. Our approach to localisation involves sparsifying the spatial graph adjacency matrices. To this end, we propose Adaptive Graph Sparsification (AGS), a graph sparsification algorithm which successfully enables the localisation of ASTGNNs to an extreme extent (full localisation). We apply AGS to two distinct ASTGNN architectures and nine spatial-temporal datasets. Intriguingly, we observe that spatial graphs in ASTGNNs can be sparsified by over 99.5% without any decline in test accuracy. Furthermore, even when ASTGNNs are fully localised, becoming graph-less and purely temporal, we record no drop in accuracy for the majority of tested datasets, with only minor accuracy deterioration observed in the remaining datasets. However, when the partially or fully localised ASTGNNs are reinitialised and retrained on the same data, there is a considerable and consistent drop in accuracy. Based on these observations, we reckon that _(i)_ in the tested data, the information provided by the spatial dependencies is primarily included in the information provided by the temporal dependencies and, thus, can be essentially ignored for inference; and _(ii)_ although the spatial dependencies provide redundant information, it is vital for the effective training of ASTGNNs and thus cannot be ignored during training. Furthermore, the localisation of ASTGNNs holds the potential to reduce the heavy computation overhead required on large-scale spatial-temporal data and further enable the distributed deployment of ASTGNNs.
graph sparsification, spatial-temporal graph neural network, spatial-temporal data
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeX Templates
+
Footnote †: journal: Acment of LaTeTe Templates
+
Footnote †: journal: Acment of LaTeTe Templates
+
Footnote †: journal: Acment of LaTeTe Templates
+
Footnote †: journal: Acment of LaTeTe Templates
+
Footnote †: journal: Acment of LaTeTe Templates
+
Footnote †: journal: Acment of LaTeTe Templates
+
Footnote †: journal: Acment of LaTe Templates
+
Footnote †: journal: Acment of LaTe Templates
+
Footnote †: journal: Acment of LaTe Templates
+
Footnote †: journal: Acment of LaTe Templates
+
Footnote †: journal: Acment of LaTe Templates
+
Footnote †: journal: Acment of LaTe Templates
+
Footnote †: journal: Acment of LaTe Templates
+
Footnote †: journal: Acment of LaTe Templates
+
Footnote †: journal: Acment of LaTe Templates
+
Footnote †: journal: Acment of LaTe Templates
+
Footnote †: journal: Acment of LaTeTe Templates
+
Footnote †: journal: Acment of LaTe Templates
+
Footnote †: journal: Acment of LaTe Templates
## 1. Introduction

In this work, the localisation of an ASTGNN refers to the sparsification of only the adjacency matrices capturing the spatial dependencies. It should not be confused with sparsifying other weight matrices, such as those used in the temporal modules, as often seen in GNN sparsification with pre-defined graph architectures [5, 29, 16, 27]. An initial clue indicating that the localisation of ASTGNNs could be feasible, possibly even to an extreme extent, is shown in Figure 1. Here we show the distribution of the elements in the spatial graph's adjacency matrix from an ASTGNN trained on the PeMSD8 dataset [4]. Evidently, most edges in the spatial graph have weights close to zero.
We are interested in the localisation of spatial-temporal graph models for the following reasons:
* _A deeper understanding of the spatial and temporal dependencies in the data._ Although it is commonly accepted that both spatial and temporal dependencies are vital for inference, it is unclear whether and to what extent the information provided by these dependencies overlaps. If the localisation induces a marginal accuracy drop, then the information provided by the spatial dependencies is already largely included in the information contained in the temporal dependencies and, thus, unnecessary for inference.
* _Resource-efficient ASTGNN designs._ ASTGNNs are notoriously computation-heavy, as the size of the spatial graph grows quadratically with the number of vertices, thus limiting their usage on large-scale data and applications. The localisation of ASTGNNs may significantly reduce the resource requirements of these spatial-temporal graph models and enable new spatial-temporal applications.
* _Distributed deployment of spatial-temporal graph models._ In many cases, the data to construct the spatial-temporal graph models are collected via distributed sensing systems, _e.g._, sensor networks. However, making predictions for each vertex using these models requires the history of other vertices, thereby involving data exchange between sensor nodes. Localisation of spatial-temporal graph models may enable individual sensor nodes to make predictions autonomously without communicating with each other, which saves bandwidth and protects privacy in a distributed system.
We explore the localisation of ASTGNNs via Adaptive Graph Sparsification (AGS), a novel algorithm dedicated to the sparsification of adjacency matrices in ASTGNNs. The core of AGS is a differentiable approximation of the \(L_{0}\)-regularization of a mask matrix, which allows the back-propagation to go past the regularizer and thus enables progressive sparsification while training. We apply AGS to two representative ASTGNN architectures and nine different spatial-temporal datasets. The experiment results are surprising. _(i)_ The spatial adjacency matrices can be sparsified by over 99.5% without deterioration in test accuracy on all datasets. _(ii)_ Even fully localised ASTGNNs, which effectively degenerate to purely temporal models, can still provide decent test accuracy (no deterioration on most tested datasets and only minor accuracy drops on the rest). _(iii)_ When we reinitialise the weights of the localised ASTGNNs and retrain them on the spatial-temporal datasets, we cannot reinstate the same inference accuracy. Figure 2 summarises our experiments and observations.
Our empirical study implies two hypotheses. _(i)_ In the tested spatial-temporal datasets, the information provided by the spatial dependencies is primarily included in the information provided by the temporal dependencies. Therefore, **the spatial dependencies can be safely ignored for inference** without a noteworthy loss of accuracy. _(ii)_ Although the information contained in the spatial and temporal dependencies overlaps, such overlapping provides the vital redundancy necessary for properly training a spatial-temporal graph model. Thus, **the spatial dependencies cannot be ignored during training**.
Our main contributions are summarised as follows:
* To the best of our knowledge, this is the first study on the localisation of spatial-temporal graph models. We surprisingly observed that spatial dependencies could be largely ignored during inference without losing accuracy. Extensive experiments on common spatial-temporal datasets and representative ASTGNN architectures demonstrated that only a few edges (less than 0.5% on all tested datasets) are required to maintain the inference accuracy. More surprisingly, when the spatial dependencies are completely ignored, _i.e._, the ASTGNNs are fully localised, they can still maintain a decent inference accuracy (no deterioration on most tested datasets, minor drops on the rest).
* With further investigations, we suggest the hypothesis that, although spatial dependencies can be primarily ignored during inference, they can drastically improve training effectiveness. This is supported by the observation that, if we reinitialise all parameters in the sparsified ASTGNNs and retrain them with the same data, the retrained networks yield considerably and consistently worse accuracy.
* To enable the localisation of ASTGNNs, we propose Adaptive Graph Sparsification (AGS), a novel graph sparsification algorithm dedicated to ASTGNNs. The core of AGS is a differentiable approximation of the \(L_{0}\)-regularization of a mask matrix, which allows the back-propagation to go past the regularizer and thus enables a progressive sparsification while training.
Figure 1. Histogram of edge weights of the spatial graph from an ASTGNN trained on the PeMSD8 dataset.
## 2. Related Work
Our work is relevant to the following threads of research.
### Spatial-Temporal Graph Neural Networks
Spatial-temporal graph neural networks (STGNNs) play an essential role in spatial-temporal data analysis for their ability to learn hidden patterns of spatially irregular signals varying across time (Sutskever et al., 2017). These models often combine graph convolutional networks and recurrent neural networks. For example, Graph Convolutional Recurrent Network (GCRN) (Krizhevsky et al., 2014) combines an LSTM with ChebNet. Diffusion Convolutional Recurrent Neural Network (Krizhevsky et al., 2014) incorporates a proposed diffusion graph convolutional layer into GRU in an encoder-decoder manner to make multi-step predictions. Alternatively, CNN-based models can represent the temporal relations in spatial-temporal data in a non-recursive manner. For instance, GCCN (Ghezani et al., 2017) combines 1D convolutional layers with GCN layers. ST-GCN (Ghezani et al., 2017) composes a spatial-temporal model for skeleton-based action recognition using a 1D convolutional layer and a Partition Graph Convolution (PGC) layer. More recent proposals such as ASTGCN (Krizhevsky et al., 2014), STG2Seq (Bengio et al., 2015), and LSGCN (Krizhevsky et al., 2014) further employ attention mechanisms to model dynamic spatial and temporal dependencies. In addition, some researchers consider the out-of-distribution generalisation of STGNNs and propose a domain generalisation framework based on hypernetworks to solve this problem (Krizhevsky et al., 2014). However, these models adopt a predefined graph structure, which may not reflect the complete spatial dependency.
To capture the dynamics in graph structures of spatial-temporal data, an emerging trend is to utilize adaptive spatial-temporal graph neural networks (ASTGNNs). Graph WaveNet (Ghezani et al., 2017) proposes an AGCN layer to learn a normalized adaptive adjacency matrix without a pre-defined graph. ASTGAT introduces a network generator model that generates an adaptive discrete graph with the Gumbel-Softmax technique (Ghezani et al., 2017); the network generator can adaptively infer the hidden correlations from data. AGCRN (Bengio et al., 2015) designs a Node Adaptive Parameter Learning enhanced AGCN (NAPL-AGCN) to learn node-specific patterns. Due to its state-of-the-art performance, NAPL-AGCN has been integrated into various recent models such as Z-GCNETs (Chen et al., 2017), STG-NCDE (Chen et al., 2017), and TAMP-S2GCNets (Chen et al., 2017).
Despite the superior performance of ASTGNNs, they incur tremendous computation overhead, mainly because _(i)_ learning an adaptive adjacency matrix involves calculating the edge weight between each pair of nodes, and _(ii)_ the aggregation phase is computationally intensive. We aim at efficient ASTGNN inference, particularly for large graphs.
### Graph Sparsification for GNNs
With graphs rapidly growing, the training and inference cost of GNNs has become increasingly expensive. This prohibitive cost has motivated growing interest in graph sparsification, whose purpose is to extract a small sub-graph from the original large one. SGCN (Ghezani et al., 2017) is the first to investigate graph sparsification for GNNs, _i.e._, pruning input graph edges, learning an extra DNN surrogate. NeuralSparse (Srivastava et al., 2015) prunes task-irrelevant edges from downstream supervision signals to learn robust graph representations. More recent works such as UGS (Chen et al., 2017) and GBET (Ghezani et al., 2017) explore graph sparsification from the perspective of the lottery ticket hypothesis.
The aforementioned works only explore graph sparsification for vanilla GNNs and non-temporal data with pre-defined graphs. Our work differs by focusing on _spatial-temporal_ GNNs with _adaptive_ graph architectures.
## 3. Preliminaries
This section provides a quick review of the representative architectures of ASTGNNs.
Figure 2. An overview of our experiments and observations. We train ASTGNNs on spatial-temporal datasets, achieving baseline accuracies. Then we localise the ASTGNNs with the proposed algorithm AGS, achieving accuracies comparable to the dense graph baselines. Finally, we reinitialise the localised ASTGNNs and retrain them on the same datasets, resulting in considerably and consistently deteriorated accuracies.
### Spatial-Temporal Data as Graph Structure
Following the conventions in spatial-temporal graph neural network research (Beng et al., 2017; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019), we represent the spatial-temporal data as a sequence of discrete frames \(\mathcal{X}=\{\mathbf{X}_{1},\mathbf{X}_{2},\ldots,\mathbf{X}_{T}\}\) over a graph \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\). The graph \(\mathcal{G}\) is also known as the spatial network, which consists of a set of nodes \(\mathcal{V}\) and a set of edges \(\mathcal{E}\). Let \(|\mathcal{V}|=N\). The edges are represented by an adjacency matrix \(A\in\mathbb{R}^{N\times N}\), and \(\mathbf{X}_{t}\in\mathbb{R}^{N\times C}\) is the node feature matrix with feature dimension \(C\) at timestep \(t\), for \(t=1,\ldots,T\).
Given the graph \(\mathcal{G}\) and \(\mathcal{T}\) historical observations \(\mathcal{X}^{\mathcal{T}}=\{\mathbf{X}_{t-\mathcal{T}},\ldots,\mathbf{X}_{t-1}\}\in\mathbb{R}^{\mathcal{T}\times N\times C}\), we aim to learn a function \(\mathcal{F}\) which maps the historical observations into the future observations in the next \(\mathcal{H}\) timesteps:
\[\left\{\mathbf{X}_{t},\ldots,\mathbf{X}_{t+\mathcal{H}}\right\}=\mathcal{F}(\mathbf{X}_{t-\mathcal{T}},\ldots,\mathbf{X}_{t-1};\theta,\mathcal{G}) \tag{1}\]
where \(\theta\) denotes all the learnable parameters.
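To fix the shapes implied by Eq. (1), here is a minimal sketch; the sizes, the `forecast` name, and the mean-repeat baseline are illustrative placeholders, not the paper's model.

```python
import torch

T_in, H_out, N, C = 12, 12, 170, 3   # illustrative sizes (e.g. PeMSD8 has N = 170)
X_hist = torch.randn(T_in, N, C)     # {X_{t-T}, ..., X_{t-1}}

def forecast(x):
    # stands in for F(.; theta, G); a real model would also use the graph G
    return x.mean(dim=0, keepdim=True).repeat(H_out, 1, 1)

Y = forecast(X_hist)
assert Y.shape == (H_out, N, C)      # {X_t, ..., X_{t+H}}
```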
### Modeling the Spatial Network \(\mathcal{G}\)
Since we focus on the role of spatial dependencies, we explain the state-of-the-art modelling of the spatial network in spatial-temporal graph neural networks (STGNNs).
The basic method to model the spatial network \(\mathcal{G}\) at timestep \(t\) with its node feature matrix \(\mathbf{X}_{t}\) in representative STGNNs (Beng et al., 2017; Chen et al., 2019; Chen et al., 2019) is the Graph Convolutional Network (GCN). A one-layer GCN can be defined as:
\[Z_{t}=\sigma\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}\mathbf{X}_{t}W\right) \tag{2}\]
where \(\tilde{A}=A+I_{N}\) is the adjacency matrix of the graph with added self-connections. \(I_{N}\) is the identity matrix. \(\tilde{D}\) is the degree matrix. \(W\in\mathbb{R}^{C\times F}\) is a trainable parameter matrix. \(\sigma(\cdot)\) is the activation function. \(Z_{t}\in\mathbb{R}^{N\times F}\) is the output. All information regarding the input \(\mathbf{X}_{t}\) at timestep \(t\) is aggregated in \(Z_{t}\).
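To make Eq. (2) concrete, the following is a minimal PyTorch sketch of one GCN propagation step; the random graph and the feature sizes are placeholder assumptions.

```python
import torch

def gcn_layer(A, X, W):
    # Z = sigma(D^{-1/2} (A + I) D^{-1/2} X W)
    A_tilde = A + torch.eye(A.shape[0])           # add self-connections
    deg = A_tilde.sum(dim=1)                      # degrees of A_tilde (>= 1)
    D_inv_sqrt = torch.diag(deg.pow(-0.5))        # D^{-1/2}
    return torch.relu(D_inv_sqrt @ A_tilde @ D_inv_sqrt @ X @ W)

N, C, F = 170, 3, 64                              # illustrative sizes
A = (torch.rand(N, N) > 0.9).float()              # placeholder pre-defined graph
X_t, W = torch.randn(N, C), torch.randn(C, F)
Z_t = gcn_layer(A, X_t, W)                        # (N, F)
```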
A key improvement to model the spatial network is to adopt Adaptive Graph Convolution Networks (AGCNs) to capture the dynamics in the graph \(\mathcal{G}\), which leads to the adaptive spatial-temporal graph neural networks (ASTGNNs) (Chen et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019). In the following, we briefly explain Adaptive Graph Convolutional Recurrent Network (AGCRN) (Chen et al., 2019) and the extension with transformers, denoted as AGFormer, two representative ASTGNN models.
* **AGCRN.** It enhances the GCN layer by combining the normalized self-adaptive adjacency matrix with Node Adaptive Parameter Learning (NAPL), which is known as NAPL-AGCN (a code sketch of this computation follows the list): \[\mathbf{A}_{adp}=\mathrm{SoftMax}\left(\mathrm{ReLU}\left(\mathbf{E}\mathbf{E}^{T}\right)\right),\quad Z_{t}=\sigma\left(\mathbf{A}_{adp}\mathbf{X}_{t}\mathbf{E}W_{\beta}\right) \tag{3}\] where \(W_{\beta}\in\mathbb{R}^{d\times C\times F}\) and \(\mathbf{E}W_{\beta}\in\mathbb{R}^{N\times C\times F}\). \(\mathbf{A}_{adp}\in\mathbb{R}^{N\times N}\) is the normalized self-adaptive adjacency matrix (Chen et al., 2019), and \(d\ll N\) is the embedding dimension. Each row of \(\mathbf{E}\in\mathbb{R}^{N\times d}\) is the embedding of a node; during training, \(\mathbf{E}\) is updated to learn the spatial dependencies among all nodes. Instead of directly learning the matrix \(W\) in (2) shared by all nodes, NAPL-AGCN uses \(\mathbf{E}W_{\beta}\) to learn node-specific parameters: from the view of node \(i\), \(\mathbf{E}_{i}W_{\beta}\) are its node-specific parameters according to its node embedding \(\mathbf{E}_{i}\). Finally, to capture both spatial and temporal dependencies, AGCRN integrates NAPL-AGCN and Gated Recurrent Units (GRU) by replacing the MLP layers in GRU with NAPL-AGCN.
* **AGFormer.** It extends AGCRN by modelling the temporal dependencies with transformers (Kipf and Welling, 2017; Chen et al., 2019). A Transformer is a stack of transformer blocks. One block contains a multi-head self-attention mechanism and a fully connected feedforward network. We replace the MLP layers in the multi-head self-attention mechanism with NAPL-AGCN to construct a transformer-based ASTGNN model, which we call AGFormer.
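To illustrate the NAPL-AGCN computation in Eq. (3), here is a minimal sketch; the einsum-based broadcasting of the node-specific parameters \(\mathbf{E}W_{\beta}\) is our reading of the shapes stated above, and the sizes are placeholders.

```python
import torch

N, d, C, F = 170, 10, 3, 64                       # d << N
E = torch.randn(N, d)                             # node embeddings
W_beta = torch.randn(d, C, F)                     # shared weight pool
X_t = torch.randn(N, C)

A_adp = torch.softmax(torch.relu(E @ E.T), dim=1) # (N, N), Eq. (3)
W_node = torch.einsum('nd,dcf->ncf', E, W_beta)   # E W_beta: (N, C, F)
H = A_adp @ X_t                                   # aggregate neighbours: (N, C)
Z_t = torch.relu(torch.einsum('nc,ncf->nf', H, W_node))  # node-specific transform
```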
Modelling the spatial network with NAPL-AGCN achieves state-of-the-art performance on multiple benchmarks, and it has been widely adopted in diverse ASTGNN variants (Chen et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al., 2019). However, NAPL-AGCN is much less efficient than GCNs, since \(\mathbf{A}_{adp}\) is a dense matrix with no zero entries, while the adjacency matrix of a pre-defined graph in GCNs is far sparser. This motivates us to explore the localisation of spatial-temporal graph models, taking ASTGNNs as examples.
## 4. Adaptive Graph Sparsification
This section presents Adaptive Graph Sparsification (AGS), a new algorithm dedicated to the sparsification of adjacency matrices in ASTGNNs.
**Formulation.** The NAPL-AGCN-based ASTGNNs with normalized self-adaptive adjacency matrix \(\mathbf{A}_{adp}\) can be trained using the following objective:
\[\mathcal{L}(\theta,\mathbf{A}_{adp})=\frac{\sum_{\tau\in\Gamma}\sum_{v\in\mathcal{V}}\left\|\mathbf{y}^{(\tau,v)}-\hat{\mathbf{y}}^{(\tau,v)}\right\|_{1}}{|\mathcal{V}|\times|\Gamma|} \tag{4}\]
where \(\Gamma\) is the training set, \(\tau\) is a training sample, and \(\mathbf{y}^{(\tau,v)}\) is the ground truth of node \(v\) in \(\tau\).
Given a pre-trained model \(\mathcal{F}(\cdot;\theta,\mathbf{A}_{adp})\), we introduce a mask \(\mathbf{M}_{\mathbf{A}}\), of the same shape as \(\mathbf{A}_{adp}\), to prune the adjacency matrix. Specifically, given \(\mathcal{F}(\cdot;\theta,\mathbf{A}_{adp})\), we obtain \(\mathbf{M}_{\mathbf{A}}\) by optimizing the following objective:
\[\mathcal{L}_{AGS}=\mathcal{L}(\theta,\mathbf{A}_{adp}\odot\mathbf{M}_{\mathbf{A}})+\lambda\left\|\mathbf{M}_{\mathbf{A}}\right\|_{0},\quad\left\|\mathbf{M}_{\mathbf{A}}\right\|_{0}=\sum_{i=1}^{N}\sum_{j=1}^{N}m_{(i,j)},\;m_{(i,j)}\in\{0,1\} \tag{5}\]
where \(\odot\) is the element-wise product, \(m_{(i,j)}\) is a binary "gate" indicating whether an edge is pruned, and \(\lambda\) is a weighting factor for the \(L_{0}\)-regularization of \(\mathbf{M}_{\mathbf{A}}\).
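A compact sketch of the objective in Eq. (5), assuming the MAE task loss of Eq. (4); the value of `lam` is illustrative, and with the hard concrete gates introduced below the penalty term becomes differentiable.

```python
import torch

def ags_loss(y_hat, y, M_A, lam=1e-4):
    task = (y - y_hat).abs().mean()   # MAE over nodes and samples, Eq. (4)
    return task + lam * M_A.sum()     # M_A.sum() equals ||M_A||_0 for binary gates
```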
**Sparsification Algorithm.** An intuitive way to get \(\mathbf{M}_{\mathbf{A}}\) is to initialize a trainable weight matrix \(\mathbf{U}\in\mathbb{R}^{N\times N}\) and map each entry \(u_{(i,j)}\in\mathbf{U}\) into a binary "gate" using a Bernoulli distribution: \(m_{(i,j)}=\mathcal{B}\left(u_{(i,j)}\right)\). However, directly introducing \(\mathbf{U}\) into the model \(\mathcal{F}(\cdot;\theta,\mathbf{A}_{adp})\) has two problems.
* It may be unscalable for large-scale graphs.
* The \(L_{0}\) sparsity penalty is non-differentiable.
For the scalability issue, we adopt a node adaptive weight learning technique to reduce the computation cost by simply generating \(\mathbf{\mathbf{U}}\) with the node embedding \(\mathbf{\mathbf{E}}\):
\[\mathbf{\mathbf{U}}=\mathbf{\mathbf{E}}W_{\mathbf{\mathbf{E}}} \tag{6}\]
where \(W_{\mathbf{E}}\in\mathbb{R}^{d\times N}\) is a trainable matrix.
For the non-differentiable issue, we introduce the hard concrete distribution instead of the Bernoulli distribution (Berridge et al., 2016), which is a continuous relaxation of a discrete distribution and can approximate binary values.
Accordingly, the computation of binary "gates" \(m_{(i,j)}\) can be formulated as:
\[\begin{split}& z\sim\mathcal{U}(0,1),\quad s_{(i,j)}=Sig\left(\left(\log z-\log(1-z)+\log u_{(i,j)}\right)/\beta\right)\\ &\tilde{s}_{(i,j)}=s_{(i,j)}(\zeta-\gamma)+\gamma,\quad m_{(i,j)}=\min\left(1,\max\left(0,\tilde{s}_{(i,j)}\right)\right)\end{split} \tag{7}\]
where \(z\) is sampled from a uniform distribution, \(Sig\) is the sigmoid function, \(\beta\) is a temperature value, and the stretch interval \((\zeta-\gamma)\) is set with \(\zeta<0\) and \(\gamma>1\); we use \(\zeta=-0.1\) and \(\gamma=1.1\) in practice. Then, \(\mathbf{M}_{\mathbf{A}}\) is applied to prune the lowest-magnitude entries in \(\mathbf{A}_{adp}\) _w.r.t._ a pre-defined ratio \(p_{g}\).
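A sketch of the gate computation in Eq. (7); the value of `beta` is an assumption, ζ and γ follow the constants above, and we feed \(\mathbf{U}=\mathbf{E}W_{\mathbf{E}}\) from Eq. (6) directly as the log-odds of each gate.

```python
import torch

def hard_concrete_gate(log_u, beta=0.5, zeta=-0.1, gamma=1.1):
    # z ~ U(0,1); clamp away from {0,1} so the logs stay finite
    z = torch.rand_like(log_u).clamp(1e-6, 1 - 1e-6)
    s = torch.sigmoid((z.log() - (1 - z).log() + log_u) / beta)
    s_bar = s * (zeta - gamma) + gamma    # stretch beyond [0, 1]
    return s_bar.clamp(0.0, 1.0)          # hard-clip back into [0, 1]

N, d = 170, 10
E = torch.randn(N, d, requires_grad=True)
W_E = torch.randn(d, N, requires_grad=True)
M_A = hard_concrete_gate(E @ W_E)         # (N, N) mask, differentiable in E and W_E
```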
Alg. 1 outlines the procedure of AGS. Pruning begins after the network is pre-trained (Line 5). First, we sort the edges in the adjacency matrix by magnitude in ascending order (Line 6). Then we perform pruning iteratively (Line 7): the lowest-magnitude \(p_{g}\) fraction of the edges is removed and the rest are retained; we mark each remaining edge by setting its corresponding entry in \(\mathbf{M}_{\mathbf{A}}\) to \(1\) in each iteration (Line 8). The pruned network is then updated by optimizing Eq. (5) (Line 9).
```
Input:  X: input data; F(·; θ, A_adp): spatial-temporal GNN with initialised
        self-adaptive adjacency matrix A_adp; N1: number of pre-training
        iterations; N2: number of sparsification iterations; s_g: pre-defined
        sparsity level for the graph.
Output: F(·; θ, A_adp ⊙ M_A)
 1: while iteration i < N1 do
 2:     Forward to compute the loss in Eq. (4).
 3:     Back-propagate to update θ and A_adp.
 4: end while
 5: Obtain pre-trained F(·; θ, A_adp).
 6: Sort entries in A_adp by magnitude in ascending order, obtaining the list K = {k_i}, i = 1, ..., N×N.
 7: while iteration i < N2 and 1 − ||M_A||_0 / ||A_adp||_0 < s_g do
 8:     Set M_A^(i,j) = 1 if A_adp^(i,j) ∉ K_{N²·s_g}.
 9:     Forward to compute the loss in Eq. (5).
10:     Back-propagate to update θ and A_adp ⊙ M_A.
11: end while
```
**Algorithm 1**Adaptive Graph Sparsification (AGS)
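The magnitude-based masking step inside the loop of Alg. 1 can be sketched as follows; the full procedure interleaves this with forward/backward passes on Eq. (5), which this fragment omits.

```python
import torch

def magnitude_mask(A_adp, s_g):
    # keep only the (1 - s_g) fraction of highest-magnitude edges
    n_prune = int(s_g * A_adp.numel())
    thresh = A_adp.abs().flatten().sort().values[n_prune - 1]
    return (A_adp.abs() > thresh).float()     # M_A: 1 = keep, 0 = prune

A_adp = torch.softmax(torch.relu(torch.randn(170, 170)), dim=1)
M_A = magnitude_mask(A_adp, s_g=0.995)        # ~0.5% of edges survive
A_pruned = A_adp * M_A                        # A_adp ⊙ M_A, used in Eq. (5)
```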
**Discussions.** We make two notes on our AGS algorithm.
* AGS differs from prior magnitude-based pruning methods designed for graphs (Zhu et al., 2017; Wang et al., 2018) in that (_i_) it sparsifies _adaptive graphs_ in spatial-temporal GNNs rather than vanilla GNNs and non-temporal data with pre-defined graphs; and (_ii_) it does not require iterative retraining on the pruned graphs to restore model accuracy.
* Pruning the adjacency matrix with AGS notably reduces the complexity of ASTGNNs for inference. The inference time complexity of unpruned NAPL-AGCN layers is \(\mathcal{O}(N^{2}d+LN\mathcal{T}F^{2}+L\mathcal{T}\left\|\mathbf{A}_{adp}\right\|_{0}F+NdF)\). After sparsification, it becomes \(\mathcal{O}(N^{2}d+LN\mathcal{T}F^{2}+L\mathcal{T}\left\|\mathbf{A}_{adp}\odot\mathbf{M}_{\mathbf{A}}\right\|_{0}F+NdF)\), where \(N^{2}d\) is the cost of computing the adaptive adjacency matrix, \(d\) is the embedding dimension, \(N\) is the number of nodes, \(\left\|\mathbf{A}_{adp}\odot\mathbf{M}_{\mathbf{A}}\right\|_{0}\) is the number of remaining edges, \(F\) is the representation dimension of node features, \(\mathcal{T}\) is the input length, and \(L\) is the number of layers.
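For a rough sense of scale under assumed values (N = 170 as in PeMSD8, and 99.5% sparsity), the aggregation term shrinks as
\[L\mathcal{T}\left\|\mathbf{A}_{adp}\right\|_{0}F=L\mathcal{T}\cdot 28{,}900\cdot F\quad\longrightarrow\quad L\mathcal{T}\left\|\mathbf{A}_{adp}\odot\mathbf{M}_{\mathbf{A}}\right\|_{0}F\approx L\mathcal{T}\cdot 145\cdot F,\]
a roughly 200\(\times\) reduction of that term, while the \(N^{2}d\) cost of computing the adaptive adjacency matrix itself is unchanged.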
## 5. Experiments
To answer the question of whether and to what extent we can localise a spatial-temporal graph model, we conducted extensive experiments explained in this section.
### Neural Network Architecture
We evaluate the performance of AGS on two representative NAPL-AGCN-based ASTGNN architectures: **AGCRN** (Chen et al., 2017) and its extension **AGFormer**. AGCRN is a state-of-the-art ASTGNN architecture combining AGCN and RNN layers: AGCN layers capture the spatial dependencies, while RNN layers model the temporal dependencies. AGFormer can be regarded as a variant of AGCRN in which the RNN layers are substituted by Transformer layers. We intentionally chose these two ASTGNN architectures, which share the same spatial module but use different temporal modules, to show that both the effectiveness of AGS and our observations on the learned spatial dependencies are orthogonal to the temporal modules involved.
### Datasets and Configurations
The localisation of ASTGNNs is evaluated on nine real-world spatial-temporal datasets from three application domains: transportation, blockchain and biosurveillance. Table 1 summarizes the specifications of the datasets used in our experiments. The detailed datasets and configurations are provided in Appendix A.1. The details on tuning hyperparameters are provided in Appendix A.2.
### Main Experimental Results
Our major experiments are illustrated in Figure 2. We first train AGCRNs and AGFormers on the nine spatial-temporal datasets,
| Datasets | #Nodes | Range |
| --- | --- | --- |
| PeMSD3 | 358 | 09/01/2018 - 30/11/2018 |
| PeMSD4 | 307 | 01/01/2018 - 28/02/2018 |
| PeMSD7 | 883 | 01/07/2017 - 31/08/2017 |
| PeMSD8 | 170 | 01/07/2016 - 31/08/2016 |
| Bytom | 100 | 27/07/2017 - 07/05/2018 |
| Decentraland | 100 | 14/10/2017 - 07/05/2018 |
| Golem | 100 | 18/02/2017 - 07/05/2018 |
| CA | 55 | 01/02/2020 - 31/12/2020 |
| TX | 251 | 01/02/2020 - 31/12/2020 |
Table 1. Summary of datasets used in experiments.
achieving baseline accuracies. Then we conduct localisation of these well-trained AGCRNs and AGFormers using AGS. Finally, we reinitialise all weights in the localised AGCRNs and retrain them on the original datasets with the same training settings.
The experiment results are organised as follows:
* The test accuracies of the non-localised AGCRNs and AGFormers and those with localisation degrees up to 99% are collected in Figure 3 (transportation datasets), Figure 4 (biosurveillance datasets) and Figure 5 (blockchain datasets). These figures contain the test accuracies at graph sparsities of 0%, 30%, 50%, 80% and 99%.
* The test accuracies of AGCRNs with localisation degrees between 99.1% and 100% are illustrated in Figure 6, shown as dotted green curves.
* The test accuracies of localised AGCRNs that are reinitialised and retrained are also collected in Figure 6, shown as solid purple curves.
Error bars in these figures show the standard deviation of five runs. We made the following observations from these results:
* **The localisation of AGCRNs and AGFormers is possible.** Applying AGS on AGCRNs and AGFormers and localising them to a localisation degree of 99% incurs no performance degradation across all datasets. On the contrary, in many experiments, the test accuracy keeps improving until 99%-localisation. Further localisation of AGCRNs up to 99.5% still induces no accuracy drop against the non-localised baselines.
* **Full localisation of AGCRNs is still practical.** Even when we fully localise the AGCRNs, which in effect turns them into independent RNNs ignoring all spatial dependencies, they can still provide decent test accuracies. As shown in Figure 6, on transportation datasets (PeMSD3, PeMSD4, PeMSD7 and PeMSD8), only minor drops are observed. On blockchain datasets (Bytom, Decentraland and Golem) and biosurveillance datasets (CA & TX), the test accuracy at 100% sparsity is no worse than the non-localised baselines.
* **Localised AGCRNs cannot be relearned without the dense spatial graphs.** As shown in Figure 6, when we reinitialise the partially or fully localised AGCRNs and then
Figure 3. Test accuracies of original and localised (up to 99%) AGCRNs and AGFormers, tested on transportation datasets (PeMSD3, PeMSD4, PeMSD7 and PeMSD8). Horizontal dashed lines represent the baselines of non-localised AGCRNs and AGFormers.
retrain them on the nine datasets, we can observe a consistent and considerable drop in inference accuracy.
Upon these observations, we suggest the following hypotheses:
* In many spatial-temporal datasets, the information provided by the spatial dependencies is primarily included in the information provided by the temporal dependencies. Therefore, the spatial dependencies can be safely ignored for inference without a noteworthy loss of accuracy.
* Although the information contained in the spatial and temporal dependencies overlaps, such overlapping provides the vital redundancy necessary for properly training a spatial-temporal graph model. Thus, the spatial dependencies cannot be ignored during training.
## 6. Ablation Studies
### Impact on Resource Efficiency
As mentioned in Sec. 1, one of the reasons we are particularly interested in the localisation of ASTGNNs is its resource efficiency. Non-localised ASTGNNs usually learn spatial graphs that are complete; therefore, the number of edges, and consequently the computation overhead, grows quadratically with the number of vertices. Localisation of ASTGNNs is equivalent to pruning edges in the learned spatial graph, which could dramatically reduce the computation overhead associated with the spatial dependencies and thus improve resource efficiency. To this end, we calculated the amount of computation during inference of 99%-localised AGCRNs and AGFormers, measured in FLOPs, and summarised the results in Table 2. We can see that the localisation of AGCRNs and AGFormers effectively reduces the amount of computation required for inference. The acceleration is more prominent for AGCRNs than for AGFormers because a larger portion of the total computation required by AGFormers is spent on their temporal module (transformer layers), whereas AGCRNs use much lighter temporal modules (RNN layers).
### Localised AGCRNs vs. Other Non-Localised ASTGNNs
In Figures 3, 4 and 5, we can clearly see that localisation up to 99% slightly improves the test accuracy. For example, 99%-localised AGCRNs outperform non-localised AGCRNs by decreasing the RMSE/MAE/MAPE by 3.6%/3.7%/2.0% on PeMSD3. Such improvement is consistently observed across both AGCRNs and AGFormers and all tested datasets. We reckon that this improvement is caused by the regularisation effect of sparsifying the spatial graph, which may suggest that the non-localised AGCRNs and AGFormers all suffer from overfitting to a certain degree.
Recent works on ASTGNNs proposed improved architectures of AGCRN, including Z-GCNETs (Zhou et al., 2017), STG-NCDE (Chen et al., 2018), and TAMP-S2GCNets (Chen et al., 2018) for different applications. We are therefore curious about how our localised AGCRNs compare to these variants, and hence compare the test accuracy of 99%-localised AGCRNs with these architectures. Results are shown in Table 3, Table 4 and Table 5. We can see that our localised AGCRNs generally provide competitive inference performance, even against state-of-the-art architectures. This observation also agrees with our first hypothesis in Sec. 5.3: in many spatial-temporal datasets, the information provided by the spatial dependencies is primarily included in the information provided by the temporal dependencies. Therefore, different spatial modules, given that the temporal modules are properly trained, may not make a significant difference in inference performance.
### Localisation of Non-temporal Graphs
To further investigate spatial dependencies and indirectly test our hypotheses, we conduct additional experiments and extend AGS to non-temporal graphs. We attempt to sparsify the spatial graphs pre-given by non-temporal datasets, including Cora, CiteSeer, and PubMed (Pubmed, 2018). On these datasets, we train two non-temporal graph neural network architectures, GCN (Wang et al., 2019) and GAT (Wang et al., 2019). Since GCN and GAT do not have node embeddings \(\mathbf{E}\), we use the representation \(\mathbf{H}\) learned by the pre-trained GCN or GAT to replace \(\mathbf{E}\) in (6), and then prune the edges as in Alg. 1, where the weighting factor \(\lambda\) controls the graph sparsity.
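A minimal sketch of this substitution, assuming \(\mathbf{H}\) has been extracted from the pre-trained model; the shapes are illustrative, and the gates are then computed from \(\mathbf{U}\) exactly as in Section 4.

```python
import torch

N, h = 2708, 16                 # e.g. Cora has 2708 nodes; h is the hidden width
H = torch.randn(N, h)           # representations from a pre-trained GCN/GAT
W_H = torch.randn(h, N, requires_grad=True)   # analogue of W_E in Eq. (6)
U = H @ W_H                     # replaces E W_E; feed into the hard concrete gates
```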
Table 6 shows the accuracy of localised GCN and GAT on these non-temporal datasets. The pre-defined graphs of Cora, CiteSeer, and PubMed are sparsified to 30%, 50%, 80% and 100%. We can observe a significant drop in the test accuracy among all
Figure 4. Test accuracies of original and localised (up to 99%) AGCRNs and AGFormers, tested on biosurveillance datasets (CA and TX). Horizontal dashed lines represent the baselines of non-localised AGCRNs and AGFormers.
localised non-temporal graph models. This indicates that, in the absence of temporal dependencies, the information provided by spatial dependencies plays a major role during inference and thus cannot be ignored via localisation.
Figure 5. Test accuracies of original and localised (up to 99%) AGCRNs and AGFormers, tested on blockchain datasets (Bytom, Decentraland and Golem). Horizontal dashed lines represent the baselines of non-localised AGCRNs and AGFormers.
Figure 6. Test accuracies of localised (99.1% to 100%) AGCRNs and their reinitialised & retrained counterparts, tested on all datasets. Horizontal dashed lines represent the baselines of non-localised AGCRNs.
## 7. Conclusion
In this paper, we ask the following question: whether and to what extent can we localise spatial-temporal graph models? To facilitate our investigation, we propose AGS, a novel algorithm dedicated to the sparsification of adjacency matrices in ASTGNNs. We use AGS to localise two ASTGNN architectures, AGCRN and AGFormer, and conduct extensive experiments on nine different spatial-temporal datasets. Primary experimental results showed that the spatial adjacency matrices could be sparsified by over 99.5% without deterioration in test accuracy on all datasets. Furthermore, when the ASTGNNs are fully localised, we still observe no accuracy drop on the majority of the tested datasets, with only minor accuracy deterioration on the remaining datasets. Based on these observations, we suggest two hypotheses regarding spatial and temporal dependencies: _(i)_ in the tested data, the information provided by the spatial dependencies is primarily included in the information provided by the temporal dependencies and, thus, can be essentially ignored for inference; and _(ii)_ although the spatial dependencies provide redundant information, they are vital for the effective training of ASTGNNs and thus cannot be ignored during training. Last but not least, we conduct additional ablation studies to show the practical impact of ASTGNN localisation on resource efficiency and to further verify our hypotheses from different angles.
## 8. Acknowledgement
The authors are grateful to the anonymous KDD reviewers for many insightful suggestions and engaging discussions, which improved the quality of the manuscript.
| Sparsity (%) | Cora GCN | Cora GAT | Citeseer GCN | Citeseer GAT | PubMed GCN | PubMed GAT |
| --- | --- | --- | --- | --- | --- | --- |
| 0% | 80.20 | 82.10 | 69.40 | 72.52 | 78.90 | 79.00 |
| 30% | 80.35 | 83.17 | 69.23 | 72.31 | 79.14 | 79.23 |
| 50% | 72.73 | 75.40 | 69.37 | 72.70 | 78.82 | 79.31 |
| 80% | 65.19 | 70.81 | 58.47 | 63.18 | 68.37 | 77.03 |
| 100% | 56.22 | 63.29 | 53.13 | 57.50 | 61.02 | 64.25 |
Table 6. Classification accuracy (%) of localised GCN and GAT on citation graph datasets.
Table 4. Performance of 99%-localised AGCRNs compared with other non-localised ASTGNN architectures on biosurveillance datasets.
| Methods | PeMSD3 (MAE / RMSE / MAPE) | PeMSD4 (MAE / RMSE / MAPE) | PeMSD7 (MAE / RMSE / MAPE) | PeMSD8 (MAE / RMSE / MAPE) | Average (MAE / RMSE / MAPE) |
| --- | --- | --- | --- | --- | --- |
| AGCRN | 15.98 / 28.25 / 15.23% | 19.83 / 32.30 / 12.97% | 22.37 / 36.55 / 9.12% | 15.95 / 25.22 / 10.09% | 18.53 / 30.58 / 11.85% |
| Z-GCNETs | 16.64 / 28.15 / 16.39% | 19.50 / 31.61 / 12.78% | 21.77 / 35.17 / 9.25% | 15.76 / 25.11 / 10.01% | 18.42 / 30.01 / 12.11% |
| STG-NCDE | 15.57 / **27.09** / 15.06% | **19.21** / **31.09** / 12.76% | **20.53** / **33.84** / 8.80% | **15.45** / 24.81 / 9.92% | **17.69** / **29.21** / 11.64% |
| TAMP-S2GCNets | 16.03 / 28.28 / 15.37% | 19.58 / 31.64 / 13.22% | 22.16 / 36.24 / 9.20% | 16.17 / 25.75 / 10.18% | 18.49 / 30.48 / 11.99% |
| **Localised AGCRN** | **15.41** / 27.21 / **14.93%** | 19.55 / 31.88 / **12.70%** | 21.03 / 34.56 / **8.53%** | 15.63 / **24.78** / **9.78%** | 17.91 / 29.61 / **11.49%** |
Table 3. Performance of 99%-localised AGCRNs compared with other non-localised ASTGNN architectures on transportation datasets.
Table 2. Computation cost during inference on original and 99%-localised AGCRNs and AGFormers. The amount of computation is measured in MFLOPs, and acceleration factors are given in round brackets.
# Drug Interaction Vectors Neural Network: DrIVeNN

Natalie Wang, Casey Overby Taylor

Published 2023-08-26 · http://arxiv.org/abs/2308.13891v1
###### Abstract.
Polypharmacy, the concurrent use of multiple drugs to treat a single condition, is common in patients managing multiple or complex conditions. However, as more drugs are added to the treatment plan, the risk of adverse drug events (ADEs) rises rapidly. Many serious ADEs associated with polypharmacy only become known after the drugs are already in use. It is impractical to test every possible drug combination during clinical trials. This issue is particularly prevalent among older adults with cardiovascular disease (CVD) where polypharmacy and ADEs are commonly observed. In this research, our primary objective was to identify key drug features and build and evaluate a model for modeling polypharmacy ADEs. Our secondary objective was to assess our model on a domain-specific case study.
We developed a two-layer neural network that incorporated drug features such as molecular structure, drug-protein interactions, and mono drug side effects (DrIVeNN). We assessed DrIVeNN using publicly available side effect databases and determined Principal Component Analysis (PCA) with a variance threshold of 0.95 as the most effective feature selection method. DrIVeNN performed moderately better than state-of-the-art models like RESCAL, DEDICOM, DeepWalk, Decagon, DeepDDI, KGDDI, and KGNN in terms of AUROC for the drug-drug interaction prediction task.
We also conducted a domain-specific case study centered on the treatment of cardiovascular disease (CVD). When the best performing model architecture was applied to the CVD treatment cohort, there was a significant increase in performance from the general model. We observed an average AUROC for CVD drug pair prediction increasing from 0.826 (general model) to 0.975 (CVD specific model). Our findings indicate the strong potential of domain-specific models for improving the accuracy of drug-drug interaction predictions. In conclusion, this research contributes to the advancement of predictive modeling techniques for polypharmacy ADEs.
polypharmacy prediction, neural networks, adverse drug events
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
+
Footnote †: journal: Computer Science and Technology
Our overall approach expands on existing work by integrating diverse sources of drug-related data and a new model structure. The Drug Interaction Vectors Neural Network (DrIVeNN) model we build uses graph neural networks to learn complex patterns from datasets consisting of known drug interactions, drug targets, and molecular structures. We also drew inspiration from a study conducted by Chen et al (2021), who developed a deep learning method called MUFFIN that incorporates chemical structures, drug-target interactions, and known drug side effects to predict DDIs.[(1)] Their results demonstrate the effectiveness of multi-modal data integration for the DDI prediction task and motivated us to include these same data types for our model. Lastly, DrIVeNN incorporates an ensemble method for prediction, inspired by the works of Masumashah et al (2021).[(14)]
## 2. Objectives
* Build and evaluate the DrIVeNN model with diverse drug feature vectors for the drug-drug interaction prediction task.
* Evaluate DrIVeNN for the DDI prediction task in the CVD domain, as a case study for domain-specific DDI prediction tasks.
## 3. Methods
### Datasets
To address our first objective, we employed the datasets compiled and preprocessed by Zitnik et al[(26)] which contained drug-drug interactions, drug-protein interactions, and mono drug side effects. The authors used the STITCH (Search Tool for InTeractions of CHemicals) database as a primary source for drug-protein interactions. STITCH is a comprehensive database of protein-chemical interaction data with original data sources for each interaction.[(23)] To capture mono drug side effects, Zitnik et al[(26)] used two distinct databases: SIDER (Side Effect Resource) and OFFSIDES. SIDER contains drug-side effect associations obtained from drug label text, while OFFSIDES contains drug-side effect associations generated from adverse event reporting systems.[(9; 24)] To obtain the structural attributes of drugs, we used the SMILES (Simplified Molecular Input Line Entry System) representation of each drug to construct a molecular graph and employed a graph neural network to generate structural representations.
To address our second objective, we utilized the Inxight Drugs API to conduct a comprehensive search for drugs prescribed for three major cardiovascular diseases: myocardial infarction (MI), congestive heart failure (CHF), and coronary artery disease (CAD).[(22)] Through this analysis, we found 30 distinct principal forms of drugs for MI, 22 for CHF, and 18 for CAD. To integrate this data with our existing datasets, we gathered Unique Ingredient Identifiers (UNIIs) and InChIKeys for each cardiovascular disease treatment drug. We chose these because they were the most frequently occurring in the dataset. For drugs in our other drug datasets, which were identified solely by PubChem IDs, we used a two-step process to obtain their corresponding UNIIs. First, we downloaded UNII drug records from open.fda.gov and matched records that contained both UNII and PubChem IDs, thus obtaining UNIIs for the drugs in our dataset. However, approximately 300 drugs from our original dataset still lacked UNII matches so we manually used the PubChem Lookup tool to find to find their drug names and InChIKeys, then the Global Substance Registration System (GSRS) to search for their UNIIs based on their drug names and InChIKeys.[(8; 20)] There is an overview of this process shown in Figure 1. The CVD treatment drugs we identified that were also present in our DDI datasets were Enoxaparin Sodium, Niacin, Aspirin, Carvediolol, Metoprolol, Nitroglycerin, Ramipril, Valsartan, Amlodipine, Ticlopidine, Chlorothizide, Ethacrynic Acid, Indapamide, Metolazone, Ramipril, and Spironolactone.
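A hedged sketch of the two-step UNII mapping described above is given below; the records, identifiers, and column names are illustrative stand-ins, and the manual PubChem/GSRS lookup is only indicated, not automated.

```python
import pandas as pd

# Hypothetical stand-ins for the open.fda.gov UNII records and our drug list.
fda = pd.DataFrame({"pubchem_id": [2244, 4091, 54675776],
                    "unii": ["R16CO5Y76E", "EXAMPLE_UNII_2", "EXAMPLE_UNII_3"]})
drugs = pd.DataFrame({"name": ["aspirin", "metformin", "example_drug"],
                      "pubchem_id": [2244, 4091, 99999999]})

# Step 1: join on PubChem ID to attach UNIIs where an FDA record exists.
drugs = drugs.merge(fda, on="pubchem_id", how="left")

# Step 2: drugs still lacking a UNII go to manual lookup, i.e. the PubChem
# Lookup tool (drug name, InChIKey) followed by a GSRS search.
unmatched = drugs[drugs["unii"].isna()]
print(unmatched["name"].tolist())  # -> ['example_drug']
```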
### Data-Driven Motivation for this Approach
During our exploratory data analysis, we made several observations that served as our motivation for creating a domain-specific model.
First, we noticed a difference between the ten most common mono side effects of all drugs and the ten most common mono side effects of drugs identified as CVD treatment drugs. Only two side effects, "emotional distress" and "blood creatinine increased", appeared in both groups. In the context of this study, a mono side effect refers to an ADE that is known to be caused by a single drug (as opposed to a drug-drug interaction, which is an ADE known to be caused by two drugs). This observation suggests potential differences in the drug features of CVD treatment drugs compared to other drugs, indicating the need for a specialized model for this domain.
We also examined the median number of drug-drug interactions (DDIs) for different drug sets. The median number of DDIs across all drug pairs was found to be 53. In contrast, drug pairs that contained at least one CVD treatment drug exhibited a higher median number of DDIs, 65 interactions, and drug pairs that consisted of two CVD drugs reached a median of 124 interactions. This notable increase in the likelihood of ADEs for CVD treatment drugs further motivated us to explore this domain.
### Data Processing
Each drug was associated with three distinct features: drug structure features, drug-protein interaction features, and mono drug side effect features. We created two drug feature datasets, one with drug structure features and one without. To extract the drug structure features, we utilized dgllife, a Python package designed for deep learning on graphs.[(12)] We used a message-passing graph neural network, MPNN, that was pretrained for molecules to extract corresponding drug structure embeddings from our drug dataset. For the drug-protein interaction features and mono drug side effect features, we applied Principal Component Analysis (PCA) as a feature extraction method[(18)]. We also applied normalization techniques and UMAP (Uniform Manifold Approximation and Projection) to address sparsity in the data[(18)]. To capture the full representation of each drug, we concatenated the three types of features into a single drug feature vector (Fig. 2). To represent a drug-drug pair, we summed the individual drug features following protocols from similar studies (Fig. 3a)[(14)].
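A minimal sketch of this feature-construction step follows; the array shapes and random inputs are illustrative stand-ins, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_drugs = 645
mol_emb = rng.normal(size=(n_drugs, 128))            # stand-in for MPNN output
protein_feats = rng.integers(0, 2, (n_drugs, 3000)).astype(float)
side_effect_feats = rng.integers(0, 2, (n_drugs, 5000)).astype(float)

# PCA(0.95) keeps the fewest components explaining 95% of the variance.
protein_red = PCA(n_components=0.95).fit_transform(protein_feats)
side_effect_red = PCA(n_components=0.95).fit_transform(side_effect_feats)

# One vector per drug: concatenate the three feature blocks (Fig. 2).
drug_features = np.hstack([mol_emb, protein_red, side_effect_red])

# One vector per drug pair: element-wise sum of the two drug vectors (Fig. 3a).
pair_vec = drug_features[10] + drug_features[42]
```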
To construct our classification dataset, we implemented negative sampling following the established protocol from other studies.[(16; 25; 26)] For each side effect, there are corresponding drug-drug pairs that were identified as causative factors; these are our positive edges. To generate negative edges, we randomly selected drug pairs
that were not associated with the given side effect. This approach allowed us to create a balanced dataset that included both positive and negative instances, helping with the classification task and ensuring more complete coverage of potential drug-drug interactions.
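A sketch of this sampling step for one side effect is shown below; the positive pairs passed in are hypothetical.

```python
import numpy as np

def sample_negative_pairs(pos_pairs, n_drugs, seed=0):
    """Draw as many random drug pairs as there are positive pairs,
    excluding pairs already known to cause the side effect."""
    rng = np.random.default_rng(seed)
    known = {frozenset(p) for p in pos_pairs}
    negatives = []
    while len(negatives) < len(pos_pairs):
        i, j = rng.integers(0, n_drugs, size=2)
        if i != j and frozenset((i, j)) not in known:
            negatives.append((i, j))
            known.add(frozenset((i, j)))  # avoid duplicate negatives
    return np.array(negatives)

neg = sample_negative_pairs([(0, 1), (2, 5), (3, 7)], n_drugs=645)
```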
### Model Training
For every type of side effect, we partitioned the corresponding drug-drug pairs into train, validation, and test sets using an 80/10/10 ratio, respectively. This partitioning ensured the model was trained on a sufficiently large dataset while also providing separate datasets for validation and final evaluation. During the training phase, our objective was to minimize the binary cross-entropy loss function. Each training run consisted of 50 epochs, allowing the model to learn and adjust its parameters over multiple iterations. We used a pretrained GNN for the data extraction step, so our model training consists of learning weights for our predictive feed-forward neural network.
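A hedged sketch of the per-side-effect training setup in Keras follows; the data are random placeholders, and the 300/200/100 hidden layers correspond to the baseline architecture described in the Experiments subsection below rather than the final tuned model.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Hypothetical pair features and binary labels for one side effect.
X = np.random.rand(2000, 825).astype("float32")
y = np.random.randint(0, 2, 2000).astype("float32")

# 80/10/10 split: hold out 20%, then split the holdout in half.
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.2, random_state=1)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5,
                                            random_state=1)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(300, activation="relu"),
    tf.keras.layers.Dense(200, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auroc")])
model.fit(X_tr, y_tr, validation_data=(X_val, y_val), epochs=50, verbose=0)
```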
### Experiments
In our initial experiment, we examined different feature selection methods to address sparsity observed in the dataset and improve computational efficiency. We evaluated various levels of Principal Component Analysis (PCA) and a combination of normalization and Uniform Manifold Approximation and Projection (UMAP). PCA was chosen as a dimensionality reduction technique to retain the most informative components of the drug features while mitigating the effect of sparse data on our analysis. We utilized normalization and UMAP as complementary approaches. Because our drug feature vectors were derived from multiple sources, we first applied normalization to ensure features were on the same scale for fair comparisons. We chose UMAP due to its well-known effectiveness at preserving both local and global structure within the data while reducing its dimensionality.(Han et al., 2015) For this experiment, we used the model architecture developed by Masumashah et al, a three-layer neural network with 300, 200, and 100 nodes for each respective layer(Masumashah et al., 2016). This architecture has demonstrated promising performance in previous studies and served as a foundation for our experiments. One key difference between that study and ours is the incorporation of molecular drug data into the model.
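A minimal sketch of this comparison is shown below; it assumes the umap-learn package for UMAP, and the matrix shape is an illustrative stand-in.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import umap  # provided by the umap-learn package

X = np.random.rand(645, 4000)  # stand-in for the raw drug feature matrix

reduced = {f"PCA({v})": PCA(n_components=v).fit_transform(X)
           for v in (0.85, 0.90, 0.95, 0.99)}

# Normalise first so features from different sources share a scale,
# then embed with UMAP (2 components, as in Table 1).
reduced["Norm+UMAP"] = umap.UMAP(n_components=2).fit_transform(
    StandardScaler().fit_transform(X))

for name, Z in reduced.items():
    print(name, Z.shape)
```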
In the second experiment, we performed hyperparameter tuning on the model using the Hyperband algorithm, a bandit-based approach for efficiently exploring the hyperparameter space.(Masumashah et al., 2016) We considered variables such as the number of layers, neurons per layer, the inclusion of batch normalization, and dropout regularization. Dropout was considered to prevent overfitting and promote generalization, while batch normalization was incorporated to normalize inputs within each batch, which can lead to faster convergence.
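A hedged sketch of such a Hyperband search using the KerasTuner library is given below; the search ranges and dropout rate are illustrative assumptions, not the exact grid used in the study.

```python
import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    model = tf.keras.Sequential()
    for i in range(hp.Int("num_layers", 1, 3)):
        model.add(tf.keras.layers.Dense(
            hp.Int(f"units_{i}", 100, 300, step=100), activation="relu"))
        if hp.Boolean("batch_norm"):
            model.add(tf.keras.layers.BatchNormalization())
        if hp.Boolean("dropout"):
            model.add(tf.keras.layers.Dropout(0.3))
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auroc")])
    return model

tuner = kt.Hyperband(build_model, objective=kt.Objective("val_auroc", "max"),
                     max_epochs=50, project_name="ddi_tuning")
# tuner.search(X_tr, y_tr, validation_data=(X_val, y_val))
```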
For our third experiment, we conducted training using our best-performing feature selection methods and model architecture on our cardiovascular disease treatment dataset. This dataset comprises drug pairs where at least one drug was specifically used for cardiovascular disease treatment. By focusing on this domain-specific dataset, we aimed to assess the performance and applicability of our approach within a specific medical context.
## 4. Evaluation and Performance
For each model examined in our study, we evaluated its performance using the following metrics: AUROC (Area Under ROC Curve) and AUPR (Area Under the Precision-Recall Curve). We used AUROC to provide an assessment of the model's discriminatory power and AUPR to capture the trade-off between precision and recall. We believe AUROC is a robust evaluation metric because our test set is balanced. As mentioned in the data processing step, we employed negative sampling techniques to get a balanced dataset. We calculated the average of these metrics across the different side effects
Figure 1. DrIVeNN Cardiovascular Disease Data Pipeline.
to provide a more comprehensive evaluation of the model's performance. We also evaluated each model on training time.
Additionally, we investigated side effects for which the model performed exceptionally well and those where its performance was comparatively poorer. To further understand the significance of these side effects, we employed Saedr scores as a means of assessing the severity of side effects grouped by model prediction accuracy. Saedr scores, developed by Lavertu et al, are designed to quantify the severity of adverse drug events using social media network analysis.(Lavertu et al., 2017) We categorized side effects into three distinct bins based on the model prediction accuracy: AUROC 0.85-0.90, AUROC 0.90-0.95, and AUROC 0.95-0.99. Then we evaluated and compared the average Saedr scores associated with each of these bins.
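The binning-and-averaging step can be sketched as follows; the per-side-effect predictions and Saedr values here are random stand-ins.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Random stand-ins: per-side-effect labels/scores and Saedr severity values.
per_se, saedr = {}, {}
for k in range(60):
    t = rng.integers(0, 2, 300)
    per_se[f"se_{k}"] = (t, t + rng.normal(0, 0.5, 300))
    saedr[f"se_{k}"] = rng.random()

auroc = {se: roc_auc_score(t, s) for se, (t, s) in per_se.items()}

# Bin side effects by model AUROC and average severity within each bin.
for lo, hi in [(0.85, 0.90), (0.90, 0.95), (0.95, 0.99)]:
    in_bin = [se for se, a in auroc.items() if lo <= a < hi]
    if in_bin:
        mean_sev = np.mean([saedr[se] for se in in_bin])
        print(f"AUROC [{lo}, {hi}): n={len(in_bin)}, mean Saedr={mean_sev:.3f}")
```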
## 5. Results
### Feature Selection
The initial dimensions of our drug feature matrix, before any feature selection, were (645, 18279) across 964 side effects. During our first experiment, we assessed the impact of different feature selection methods on the performance of our model (Table 1). We observed that as the variance captured by PCA increased and as molecular embeddings were incorporated into the input matrix, there was a moderate improvement in AUROC and AUPRC. We chose PCA(0.95) with molecular embeddings as the feature selection method for our subsequent experiments. In the future, it may also be interesting to look more closely at the performance of some of the PCA datasets with lower variance, as they would utilize less computational resources and time.
### Hyperparameter Tuning
Upon completion of the hyperparameter tuning process, we analyzed the results obtained for each side effect. To ensure a more generalizable model, we selected the overall best model parameters by identifying the most common parameters across all side effects. This approach aimed to find a balance between individual side effect performance and the ability to capture patterns across collective characteristics. The selected hyperparameters, which reflect our overall best model configuration, are highlighted in Table 2.
Figure 2. Drug feature matrix creation for drugs \(d_{1},d_{2},...,d_{n}\). This summarizes the drug feature matrix creation process which includes: 1. Applying MPNN, a graph neural network, to the drug structure dataset, 2. Training and applying PCA(0.95) to the drug-protein dataset and mono drug side effect dataset, and 3. Concatenating the three separate processed datasets into the drug feature matrix where each row represents one drug.
### Overall Model Performance
In our analysis, we found that our hyperparameter-tuned model, DrIVeNN, slightly outperformed some commonly used baselines for AUROC (Table 3). Please note that the performance values for AUROC and AUPRC for other models were obtained from previously published works.(Kumar et al., 2017) On CPU, the DrIVeNN model took about 2 hours and 30 minutes to train.
### Domain-Specific Results
The performance evaluation of our domain-specific model, specifically focused on cardiovascular disease treatment drug pairs, yielded promising results. We observed a significant increase in AUROC and AUPRC values, reaching 0.975 and 0.952 respectively (Table 4). For comparison, we evaluated the performance of the general DrIVeNN_all model on the same datasets. The general model had an
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & & \multicolumn{2}{c}{With Molecular Embeddings} & \multicolumn{2}{c}{Without Molecular Embeddings} \\ \hline Feature Selection & Drug Feature Dimensions & AUROC & AUPRC & AUROC & AUPRC \\ \hline UMAP + Norm & \((645,2)\) & 0.516 & 0.522 & 0.532 & 0.545 \\ PCA(0.85) & \((645,667)\) & 0.901 & 0.824 & 0.897 & 0.819 \\ PCA(0.90) & \((645,733)\) & 0.900 & 0.823 & 0.901 & 0.821 \\ PCA(0.95) & \((645,825)\) & 0.903 & 0.826 & 0.902 & 0.825 \\ PCA(0.99) & \((645,972)\) & 0.906 & 0.829 & 0.905 & 0.825 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Evaluation of Feature Selection Methods
\begin{table}
\begin{tabular}{c c c} \hline \hline Model & AUROC & AUPRC \\ \hline RESCAL & 0.693 & 0.613 \\ DEDICOM & 0.705 & 0.637 \\ DeepWalk & 0.761 & 0.737 \\ DeepDDI & 0.830 & 0.503 \\ Decagon & 0.874 & 0.825 \\ KGDDI & 0.891 & 0.653 \\ KGNN & 0.896 & 0.658 \\ DrIVeNN_all & **0.901** & 0.821 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Evaluation of General Model Performance
Figure 3. Drug Pair Prediction: **3a.** Drug pair representation for drugs \((d_{i},d_{j})\). Element-wise summation was performed on the individual drug representations to create one representative vector. This process was done for all drug pairs in the dataset. **3b.** DDI prediction process for a single drug pair and for a single side effect.
average AUROC of 0.901 for the test set created from all drug pairs, and only 0.826 for the test set created from CVD drug pairs (Table 4). These findings show our domain-specific model performs significantly better than a general model on domain-specific drug pairs, and they highlight the potential of domain-specific models to increase predictive performance for drug-drug interaction prediction. Additionally, on CPU, our domain-specific model trained in under 20 minutes.
### Side Effect Severity
We conducted an analysis of side effects associated with CVD-related treatment polypharmacy, specifically focusing on the prediction AUROC of these side effects from our model, DrIVeNN_cvd. To address the severity of these side effects, we utilized Saedr scores as a metric. Out of the 3,196 unique side effects caused by CVD-related treatment polypharmacy, we were able to obtain Saedr scores for 2,794 side effects. This subset of side effects became the focus of our analysis. In the future, we would like to explore additional severity metrics and work on obtaining a larger domain-specific dataset to allow for a more comprehensive analysis of drug-drug interactions within the domain.
After excluding the first group (\(\sim\)0.85 AUROC) from our analysis because it contained only one side effect, we observed that, on average, DrIVeNN_cvd demonstrated slightly higher AUROC scores for side effects with higher severity scores (Table 5). This finding suggests that our model exhibits improved performance on side effects that are associated with greater severity, as measured by Saedr scores. To provide illustrative examples, we have included a selection of side effects from each group in the table below to showcase the variation in severity and corresponding AUROC scores.
## 6. Discussion
The drug-drug interaction (DDI) prediction task is a critical area of research that plays a role in ensuring patient safety and healthcare management. Traditionally, DDI prediction has been approached in two distinct ways: predicting whether a side effect will occur as a result of drug interactions and predicting the specific side effect that may occur. Our study focuses on the second aspect. By targeting the prediction of specific side effects, we take a more granular approach to understanding the potential outcomes of polypharmacy. In our evaluation, we introduced Saedr scores as a metric to assess the severity of side effects with lower and higher prediction accuracy to gain insights into the potential impact and clinical relevance of this model. This approach allows us to identify where areas of improvement in prediction accuracy may have significant clinical implications.
This study presented DrIVeNN, a novel approach to DDI prediction that incorporated diverse drug features. In our methodology, we used PCA with a variance threshold of 0.95 for feature selection then for each drug pair, a representation was created by summing the selected features. This served as the input for our model which had a performance comparable to state-of-the-art baselines. Additionally, our findings indicate that our domain-specific model, DrIVeNN_cvd, outperformed our general model in DDI prediction. This indicates the potential for improved accuracy to predict DDIs with domain-specific models. To the best of our knowledge, DrIVeNN_cvd represents the first domain-specific model developed for the DDI prediction task. Exploring additional domains and developing tailored models could lead to further improvements in predictive accuracy.
The first notable limitation is the size of our domain-specific dataset. Although we made substantial effort to collect relevant data for cardiovascular disease treatment, the dataset's size may restrict the full exploration of the DDIs in the domain. Future work may include compiling a more comprehensive and expansive domain-specific dataset. In addition, it is worth exploring other medical domains where polypharmacy is common, for example, the mental health domain. This could offer insights into the generalizability and effectiveness of domain-specific models. Another limitation of our study is that we evaluated our findings solely on the dataset of known side effects. While this dataset provides a foundation for initial model evaluation, it does not fully encompass the complexities of real-world patient data. To better understand the potential to use this model to predict DDIs in a real-world clinical setting, we believe that evaluations on patient data sources, such as electronic health records (EHRs) are needed. Doing so would provide more insight into the applicability of the model in clinical settings and an evaluation framework for other models.
Other potential avenues for future research include a more comprehensive exploration of both the time and space complexities of the model. Additionally, trying local Principal Component Analysis (PCA) could prove beneficial. While global PCA was employed due to its widespread application in contexts such as ours, it is worth noting that its use comes with strong linearity assumptions. Consequently, local PCA might offer a better fit for representing the data.
The potential impact of DrIVeNN extends to both drug development and clinical decision-making. Accurate DDI prediction can assist healthcare professionals in proactively mitigating adverse effects, optimizing drug combinations, and maximizing therapeutic outcomes. Additionally, pharmaceutical companies may benefit from using our model to optimize drug discovery pipelines, prioritize candidate drugs with lower interaction risks, and minimize costly experimental testing.
The main contributions of this research paper can be summarized as follows:
* Novel Drug Feature Vectors and Model: Different from baseline models, we used drug feature vectors including drug structures, drug-protein interactions, and mono drug side effects. We also propose a feature selection method and model architecture for the drug-drug interaction task.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & Test Set & AUROC & AUPRC \\ \hline DrIVeNN\_all & All Drug Pairs & 0.901 & 0.821 \\ DrIVeNN\_all & CVD Drug Pairs & 0.826 & 0.802 \\ DrIVeNN\_cvd & All Drug Pairs & 0.701 & 0.683 \\ DrIVeNN\_cvd & CVD Drug Pairs & **0.975** & **0.952** \\ \hline \hline \end{tabular}
\end{table}
Table 4. Evaluation of Domain-Specific Models
* Domain-Specific Model Performance: We evaluated our model in two ways: first, with all available drug pairs (as others have) and, second, with a focus on polypharmacy in CVD treatment (new in this study). The experimental results obtained for this evaluation are noteworthy, as they highlight the promising potential of domain-specific models in DDI prediction.
## 7. Conclusion
In conclusion, our research has highlighted the potential of domain-specific models for drug-drug interaction (DDI) prediction. By introducing DrIVeNN as a new model that utilizes diverse drug features and feature selection techniques, we have demonstrated competitive performance for the DDI prediction task, comparable to existing state-of-the-art models in terms of AUROC and AUPRC. We also have preliminary results suggesting our domain-specific model specifically tailored for cardiovascular disease treatment (DrIVeNN_cvd), exhibits potential advantages in predictive accuracy. Furthermore, we observed that on average, DrIVeNN_cvd performs better on more severe side effects, as quantified by the Saedr score system. This finding may suggest other underlying factors related to severity that influence predictive accuracy. Exploring these factors may provide more insights into the nature of drug-drug interactions.
|
2306.05587 | MC-NN: An End-to-End Multi-Channel Neural Network Approach for
Predicting Influenza A Virus Hosts and Antigenic Types | Influenza poses a significant threat to public health, particularly among the
elderly, young children, and people with underlying diseases. The
manifestation of severe conditions, such as pneumonia, highlights the
importance of preventing the spread of influenza. An accurate and
cost-effective prediction of the host and antigenic subtypes of influenza A
viruses is essential to addressing this issue, particularly in
resource-constrained regions. In this study, we propose a multi-channel neural
network model to predict the host and antigenic subtypes of influenza A viruses
from hemagglutinin and neuraminidase protein sequences. Our model was trained
on a comprehensive data set of complete protein sequences and evaluated on
various test data sets of complete and incomplete sequences. The results
demonstrate the potential and practicality of using multi-channel neural
networks in predicting the host and antigenic subtypes of influenza A viruses
from both full and partial protein sequences. | Yanhua Xu, Dominik Wojtczak | 2023-06-08T23:14:39Z | http://arxiv.org/abs/2306.05587v4 | MC-NN: An End-to-End Multi-Channel Neural Network Approach for Predicting Influenza A Virus Hosts and Antigenic Types
###### Abstract
Influenza poses a significant threat to public health, particularly among the elderly, young children, and people with underlying diseases. The manifestation of severe conditions, such as pneumonia, highlights the importance of preventing the spread of influenza. An accurate and cost-effective prediction of the host and antigenic subtypes of influenza A viruses is essential to addressing this issue, particularly in resource-constrained regions. In this study, we propose a multi-channel neural network model to predict the host and antigenic subtypes of influenza A viruses from hemagglutinin and neuraminidase protein sequences. Our model was trained on a comprehensive data set of complete protein sequences and evaluated on various test data sets of complete and incomplete sequences. The results demonstrate the potential and practicality of using multi-channel neural networks in predicting the host and antigenic subtypes of influenza A viruses from both full and partial protein sequences.
Influenza, CNN, BiGRU, Transformer, Deep Learning
## 1 Introduction
The impact of influenza viruses on respiratory diseases worldwide is substantial, leading to severe infections in the lower respiratory tract, hospitalisations,
and mortality. An estimated \(\sim\)5 million hospitalisations occur annually due to influenza-related respiratory illnesses [1]. The incidence of severe influenza-associated diseases and hospitalisation is highest among individuals at the extremes of age and those with pre-existing medical conditions. The virus spreads primarily through droplets, aerosols or direct contact, and up to 50% of infections are asymptomatic [2; 3]. The influenza virus can cause various complications associated with high fatality rates, including secondary bacterial pneumonia, primary viral pneumonia, chronic kidney disease, acute renal failure, and heart failure [4; 5; 6].
The influenza virus genome comprises single-stranded ribonucleic acid (RNA) segments. It is classified into four genera differentiated primarily by the antigenic properties of the nucleoprotein (NP) and matrix (M) proteins [7]. Currently, the influenza virus has four types: A (IAV), B (IBV), C (IVC), and D (IVD). Among them, IAV is the most widespread and virulent, capable of triggering major public health disruptions and pandemics, as demonstrated by the Spanish Flu of 1918-1919 that resulted in an estimated 20-100 million deaths [8]. IAV is further subtyped by the antigenic properties of its hemagglutinin (HA) and neuraminidase (NA) surface glycoproteins, with 18 HA and 11 NA subtypes currently known [9]. The avian influenza viruses, including H5N1, H5N2, H5N8, H7N7, and H9N2, can also spread from birds to humans with potentially deadly consequences, although this rarely occurs.
The HA and NA proteins of the influenza virus play a crucial role in its ability to infect host cells by allowing it to recognise and attach to specific receptors on host epithelial cells, followed by replication and release into neighbouring cells through the action of NA [10]. The immune system can respond to the virus by attacking and destroying infected tissue, although death can sometimes result from organ failure or secondary infections. The continuous evolution of the virus through point mutations in the genes encoding HA and NA can result in antigenic drift, leading to seasonal influenza, or the rarer antigenic shift, resulting in the emergence of new viruses with a significant change in HA and NA production that can trigger pandemics [11].
In this study, we aim to predict IAV subtypes and hosts using a multi-channel neural network (MC-NN) approach comprising a combination of convolutional neural networks (CNNs), bidirectional gated recurrent units (BiGRUs), and transformer models. The models are trained on a large-scale integrated protein sequence data set collected before 2020 and evaluated on both a post-2020 data set and a data set containing incomplete sequences. The study includes a broad range of hosts. Its results demonstrate the superiority of our multi-channel approach, with the transformer model achieving 83.39%, 99.91% and 99.87% F\({}_{1}\) scores for the host, HA subtype and NA subtype prediction, respectively, in the post-2020 data set. Furthermore, its performance on incomplete sequences reached 76.13%, 95.37% and 96.37% F\({}_{1}\) scores for the host, HA subtype and NA subtype prediction, respectively.
## 2 Related Work
The detection of IAV hosts and subtypes can enhance the surveillance of influenza and mitigate its spread. However, traditional methods for virus subtyping, such as nucleic acid-based tests (NATs), are labour-intensive and time-consuming [12]. To address this issue, researchers have explored various supervised machine learning-based methods for predicting IAV hosts or subtypes. These include using CNNs [11; 13; 14], support vector machines (SVM) [15; 16; 17], decision trees (DT) [15; 18], and random forests (RF) [17; 19; 20].
In order to train machine learning models, the protein sequences need to be transformed into numerical vectors. This transformation has been achieved through various methods, including one-hot encoding [11; 19; 21], pre-defined binary encoding schemes [22], ASCII codes [13], Word2Vec [16], and the use of physicochemical features [23; 24; 25; 20]. However, using handcrafted feature sets or physicochemical features requires a feature selection process, which can be time-consuming. This study used word embedding to allow the models to learn features from the training data since this approach is more convenient and efficient. Previous studies have focused on either higher classification (i.e. avian, swine, or human) or a single class of hosts from a single database. In contrast, this study collects data from multiple databases and focuses on a broad range of hosts.
MC-NNs have been used in various applications, such as face detection [26], relation extraction [27], entity alignment [28], emotion recognition [29], and haptic material classification [30]. To our knowledge, few studies have used MC-NNs for infectious disease predictions. In this study, we propose using three MC-NN architectures to simultaneously predict IAV hosts and subtypes rather than training separate models for each task.
## 3 Materials and Methods
### Data Preparation
#### Hemagglutinin and Neuraminidase Protein Sequences
Complete hemagglutinin (HA) and neuraminidase (NA) sequences were acquired from two sources: the Influenza Research Database (IRD) [31] and the Global Initiative on Sharing Avian Influenza Data (GISAID) [32]. The initial data collection process yielded 381,369 HA sequences and 338,631 NA sequences (completed on 13 December 2022). To maintain the uniqueness of each strain, redundant and multi-label sequences were filtered, resulting in a unique HA and NA sequence pair for each strain in the final data set. To prevent duplicates, the integration process involved removing sequences from GISAID if they were already present in IRD. Additionally, strains belonging to the H0N0 subtype, which have an uncleaved HA0 protein that is not infectious, were also removed from the data set. The process of data curation also involved eliminating sequences with erroneous or ambiguous metadata
labels. For example, A/American Pelican/Kansas/W22-200/2022 (isolate ID: EPI_ISL_14937098) was inaccurately labelled as 'host'. Subsequently, the final outcome comprised 46,172 unique pairs of complete and partial HA and NA sequences.
A sequence was considered complete if its length was equivalent to that of the actual genomic sequence [32] or the complete coding region defined by the National Center for Biotechnology Information (NCBI) [31]. The completeness annotation cannot be explicitly obtained from the metadata of the strain. Therefore, incomplete sequences were obtained by filtering the complete sequences out of the full influenza database, which comprises both complete and incomplete sequences (\(all\ sequences=complete\ sequences\cup incomplete\ sequences\)).
The models were trained using a training data set comprising sequences of strains isolated before 2020. Conversely, the sequences of strains isolated from 2020 to 2022 were used solely to evaluate the performance of the models during testing; the testing data set also included incomplete sequences. The characteristics of the data sets used in this study are presented in Table 1.
#### 3.1.2 Label Reassignment
While the GISAID and IRD databases recorded \(\sim\)300 hosts, only 30% were consistent across both databases. This issue could be attributed to the blended use of animals' common and scientific names. We regrouped the viral hosts into 25 categories based primarily on the biological family classification of the animals; the distribution of reassigned hosts is presented in Fig. 1. We also merged a few subtypes in the data set (i.e. H15, H17, H18, N10, and N11) into other subtypes, as shown in Fig. 2.
### Protein Sequence Representation
Neural networks are mathematical operators that operate on inputs and generate numerical outputs. However, the raw input sequences must be represented as numerical vectors before the neural network can process them. One popular method of vectorising sequences is one-hot encoding. In natural language processing (NLP), the length of the one-hot vector for each word is determined by the size of the vocabulary, which comprises all unique words or tokens in the data. When representing amino acids, the length of the one-hot vector for each amino acid depends on the number of unique amino acids. This results in a sparse matrix for large vocabularies, which is computationally inefficient. An
\begin{table}
\begin{tabular}{c c c c} \hline Data Set (_alias_) & \# Total Pairs & \# Seqs from IRD & \# Seqs from GISAID \\ \hline \(<\) 2020 (_pre-20_) & 33,159 & 41,940 & 24,378 \\
2020 - 2022 (_post-20_) & 4,488 & 3,232 & 5,744 \\ Incomplete (_incomplete_) & 8,525 & 11,111 & 5,939 \\ \hline \end{tabular}
\end{table}
Table 1: Summary statistics of data sets.
alternative and more powerful approach is to represent each word as a dense vector through word embedding. Word embedding learns the representation of a word by considering its context, allowing similar words to have similar representations. It has been used successfully in the extraction of features from biological sequences [33].
The word embedding process can be incorporated into a deep learning model without relying on manually-crafted feature extraction techniques. A protein's amino acid sequence is usually written as a string of letters but can also be represented as a set of tripeptides, also known as 3-grams. In NLP, _N_-grams refer to \(N\) consecutive words in a text, and similarly, _N_-grams of a protein sequence refer to \(N\) consecutive amino acids. For example, the 3-grams of the sequence "AAADADTICIG" would be 'AAA', 'AAD', 'ADA', 'DAD', 'ADT', 'DTI', 'TIC', 'ICI', and 'CIG'. N was set to 3 based on previous research findings [34; 35].
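For instance, the trigram extraction can be written in a few lines; the example sequence matches the one above.

```python
def ngrams(seq: str, n: int = 3) -> list:
    """Overlapping n-grams of an amino-acid sequence."""
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

print(ngrams("AAADADTICIG"))
# ['AAA', 'AAD', 'ADA', 'DAD', 'ADT', 'DTI', 'TIC', 'ICI', 'CIG']
```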
## 4 Neural Network Architectures
In this study, we propose a multi-channel neural network (MC-NN) architecture that incorporates two inputs, namely HA trigrams and NA trigrams, and produces three outputs, specifically host, HA subtypes, and NA subtypes. The neural network models utilised in this research encompass bidirectional
Figure 1: Data distribution (hosts)
Figure 2: Data distribution (subtypes)
gated recurrent unit (BiGRU), convolutional neural network (CNN), and transformer.
### Bidirectional Gated Recurrent Unit
The Bidirectional Gated Recurrent Unit (BiGRU) is a model designed to handle sequential data by considering both past and future information at each time step. This model is composed of two separate Gated Recurrent Unit (GRU) layers, one for processing the input sequence in the forward direction and the other for processing the input sequence in the backward direction. The outputs of these two layers are then concatenated and utilised for prediction purposes.
GRUs, similar to Long Short-Term Memory (LSTM) units, possess a reset gate and an update gate [36]. The reset gate determines the amount of previous information that needs to be forgotten, while the update gate decides the proportion of information to discard and the proportion of new information to incorporate. Due to fewer tensor operations, GRUs are faster in terms of training speed when compared to LSTMs.
The utilisation of a BiGRU provides the advantage of considering both the past and future context at each time step, thereby leading to more informed predictions. This is particularly useful in sequential data processing where context plays a crucial role in prediction accuracy.
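A minimal Keras sketch of such a channel encoder is given below; the vocabulary size and sequence length are illustrative placeholders.

```python
import tensorflow as tf

# Embed trigram token IDs, then apply a bidirectional GRU; the forward and
# backward outputs are concatenated (Keras' default merge mode).
seq_len, vocab_size = 600, 9000   # illustrative values
inputs = tf.keras.Input(shape=(seq_len,))
x = tf.keras.layers.Embedding(vocab_size, 128)(inputs)
x = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64))(x)
encoder = tf.keras.Model(inputs, x)
```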
### Transformer
The Transformer neural network architecture has had a significant impact in the field of NLP [37]. It was initially designed to facilitate machine translation, however, the scope of its application can be broadened to encompass other areas such as addressing protein folding dilemmas [38]. The Transformer architecture serves as the cornerstone for the advancement of contemporary natural language processing models, including BERT [39], T5 [40], and GPT-3 [41]. One of the most significant benefits that a Transformer possesses over conventional Recurrent Neural Networks (RNNs) is its capability to process data in a parallel manner. This attribute allows for the utilisation of Graphics Processing Units (GPUs) to optimise the speed of processing and effectively handle extensive text sequences.
The Transformer neural network presents a breakthrough in the field of deep learning through its incorporation of positional encoding and self-attention mechanism. The positional encoding feature serves as a means of preserving the word order information in the data, thereby enabling the neural network to learn and understand the significance of the order. The attention mechanism, on the other hand, allows the model to effectively translate words from the source text to the target text by determining their relative importance. The self-attention mechanism, as implied by its name, allows the neural network to focus on its own internal operations and processes. Through this mechanism, the neural network can comprehend the contextual
meaning of words by analysing their relationships and interactions with surrounding words. Furthermore, the self-attention mechanism enables the neural network to not only differentiate between words but also reduce computational requirements, thus improving its efficiency.
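A hedged sketch of one such encoder layer, built from Keras' multi-head attention, is shown below; the head count and dimensions are illustrative, and the positional encoding applied to the inputs is omitted here.

```python
import tensorflow as tf

def encoder_block(x, num_heads=2, key_dim=64, ff_dim=128):
    """One Transformer encoder layer: multi-head self-attention and a
    position-wise feed-forward network, each followed by a residual
    connection and layer normalisation."""
    attn = tf.keras.layers.MultiHeadAttention(num_heads, key_dim)(x, x)
    x = tf.keras.layers.LayerNormalization()(x + attn)
    ff = tf.keras.layers.Dense(ff_dim, activation="relu")(x)
    ff = tf.keras.layers.Dense(x.shape[-1])(ff)
    return tf.keras.layers.LayerNormalization()(x + ff)
```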
### Convolutional Neural Network
A Convolutional Neural Network (CNN) was designed to work with image and video data. It is an artificial neural network that uses convolutional layers to extract features from raw data. These convolutional layers analyse the spatial relationship between pixels and learn to recognise patterns in the data. The concept behind Convolutional Neural Networks (CNNs) is based on the visual processing mechanism of the human brain, where neurons are selectively activated in response to various features present in an image, such as edges. In CNNs, two primary types of layers are utilised, namely convolution layers and pooling layers. Convolution layers are the core of the CNN architecture, performing convolution operations on the input image and filters. On the other hand, pooling layers perform down-sampling on the image in order to minimise the number of learnable parameters. This study implements one-dimensional convolution layers to process sequence data.
## 5 Implementation and Evaluation Methods
All of the models in this study were built using Keras and trained on pre-20 data sets. They were then tested on both post-20 and incomplete data sets. The architecture of the multi-channel neural network used in this study is illustrated in Figure 3. The Transformer architecture used here is the encoder presented in [37].
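A minimal sketch of the two-input, three-output MC-NN in the Keras functional API follows, shown here with CNN channels; the sequence lengths, vocabulary size, and HA/NA class counts are illustrative placeholders (only the 25 host categories are stated above), and a BiGRU or Transformer encoder would slot into the same position.

```python
import tensorflow as tf

def channel(seq_len, vocab_size=9000, emb_dim=128):
    """One input channel: embedded trigrams -> 1-D convolution -> pooling."""
    inp = tf.keras.Input(shape=(seq_len,))
    x = tf.keras.layers.Embedding(vocab_size, emb_dim)(inp)
    x = tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu")(x)
    x = tf.keras.layers.GlobalMaxPooling1D()(x)
    return inp, x

ha_in, ha_feat = channel(seq_len=600)
na_in, na_feat = channel(seq_len=500)
shared = tf.keras.layers.Dense(128, activation="relu")(
    tf.keras.layers.concatenate([ha_feat, na_feat]))

host = tf.keras.layers.Dense(25, activation="softmax", name="host")(shared)
ha_sub = tf.keras.layers.Dense(16, activation="softmax", name="ha")(shared)
na_sub = tf.keras.layers.Dense(9, activation="softmax", name="na")(shared)

model = tf.keras.Model([ha_in, na_in], [host, ha_sub, na_sub])
model.compile(optimizer="adam",
              loss={k: "sparse_categorical_crossentropy"
                    for k in ("host", "ha", "na")})
```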
In some cases, there is confusion regarding the role of validation and test sets, leading to the tuning of model hyperparameters using the testing set instead of a separate validation set. This increases the risk of data leakage and reduces the credibility of the results. To avoid this issue, nested cross-validation (CV) is used instead of classic K-fold CV. In nested CV, an outer CV is used to estimate the generalisation error of the model and an inner CV is used for model selection and hyperparameter tuning. The outer CV splits the data into a training\({}_{\text{outer}}\) set and a testing set, while the inner CV splits the training\({}_{\text{outer}}\) set into a training\({}_{\text{inner}}\) set and a validation set. The model is trained only on the training\({}_{\text{inner}}\) set, its hyperparameters are tuned based on its performance on the validation set, and its overall performance is evaluated on the testing set. In this study, the outer fold \(k_{outer}\) was set to 5 and the inner fold \(k_{inner}\) was set to 4. The hyperparameter settings for the neural network architectures used in this study are presented in Table 2.
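A minimal sketch of the 5-outer by 4-inner nested CV loop with scikit-learn is shown below; the data and the model-fitting steps are placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.random.rand(200, 10)
y = np.random.randint(0, 2, 200)
outer = KFold(n_splits=5, shuffle=True, random_state=0)
inner = KFold(n_splits=4, shuffle=True, random_state=0)

for tr_out, te in outer.split(X):
    for tr_in, va in inner.split(tr_out):
        tr_idx, va_idx = tr_out[tr_in], tr_out[va]
        # fit each hyperparameter candidate on X[tr_idx], y[tr_idx]
        # and score it on X[va_idx], y[va_idx]
        pass
    # refit the selected configuration on X[tr_out] and
    # evaluate it once on the untouched X[te]
```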
The present study utilises data sets that exhibit a high degree of imbalance, and as such, the application of conventional evaluation metrics such as accuracy and receiver operating characteristic (ROC) curves can lead to misleading results, as demonstrated in prior research [42; 43]. Precision-recall
curve (PRC), on the other hand, has been demonstrated to be more informative when addressing highly imbalanced data sets and has been widely adopted in the research [44; 45; 46; 47].
The utilisation of linear interpolation to calculate the area under the precision-recall curve (AUPRC) has been shown to be inappropriate [43]. An alternative approach that has been demonstrated to be effective in such cases is the calculation of the average precision (AP) score[48]. Furthermore, this study also employs conventional evaluation metrics F\({}_{1}\) score, with the formulas for these metrics provided below:
\[Precision=\frac{TP}{TP+FP} \tag{1}\]
\[Recall=\frac{TP}{TP+FN} \tag{2}\]
\begin{table}
\begin{tabular}{l l} \hline \hline
**Models** & **Hyperparameters** \\ \hline \multirow{3}{*}{CNN} & kernel size = 3, 4, 5 \\ & embedding size = 50, 100, 150, 200 \\ & learning rate = 0.01, 0.005, 0.001, 0.0001 \\ \hline \multirow{2}{*}{BiGRU} & embedding size = 50, 100, 150, 200 \\ & learning rate = 0.01, 0.005, 0.001, 0.0001 \\ \hline \multirow{3}{*}{Transformer} & embedding size = 32, 64, 128 \\ & learning rate = 0.01, 0.005, 0.001, 0.0001 \\ & num heads = 1, 2, 3, 4, 5 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Hyperparameter Settings
Figure 3: The multi-channel neural network architecture: positional encoding is only employed along with Transformer.
\[F_{1}=2\times\frac{Precision\times Recall}{Precision+Recall} \tag{3}\] \[AP=\sum_{n}(Recall_{n}-Recall_{n-1})\,Precision_{n} \tag{4}\]
where TP, FP, TN, FN stand for true positive, false positive, true negative and false negative. If positive data is predicted as negative, then it counts as FN, and so on for TN, TP and FP.
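As a hedged illustration, the one-vs-all adaptation of these metrics used for the multi-class results can be computed with scikit-learn; the labels, class count, and probabilities below are synthetic.

```python
import numpy as np
from sklearn.metrics import average_precision_score, f1_score
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)
n_classes = 4                                    # illustrative
y_true = rng.integers(0, n_classes, 500)
proba = rng.dirichlet(np.ones(n_classes), 500)   # mock class probabilities

# One-vs-all: binarise the labels, then average per-class AP (eq. 4).
Y = label_binarize(y_true, classes=np.arange(n_classes))
macro_ap = np.mean([average_precision_score(Y[:, k], proba[:, k])
                    for k in range(n_classes)])
macro_f1 = f1_score(y_true, proba.argmax(axis=1), average="macro")
print(f"macro AP = {macro_ap:.3f}, macro F1 = {macro_f1:.3f}")
```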
The evaluation of the overall performance of the models was conducted using the results obtained from the Basic Local Alignment Search Tool (BLAST) as a baseline because BLAST is a commonly employed benchmark in computational biology and bioinformatics.
## 6 Results
### Overall Performance
The model's performance on various data sets is shown in Figures 4 to 6. The metrics used, such as average precision (AP), have been developed for binary classification but can be adapted to multi-class classification using a one-vs-all approach. This approach involves designating one class as positive and all others as negative. AP, F\({}_{1}\) score, precision, and recall values were used to compare the models to a baseline model, the Basic Local Alignment Search Tool (BLAST), with its default parameters. The BLAST results were obtained through five-fold cross-validation and are indicated by the solid black line in the figures. All models outperformed the baseline model, with the MC-BiGRU and MC-CNN models achieving particularly notable results. The results also showed that the host classification task was more challenging than the subtype classification task with all models.
The models were trained solely on the pre-20 data set and tested on the post-20 and incomplete data sets. The pre-20 and post-20 data sets only contained complete sequences, while the incomplete data set contained both complete and incomplete sequences. There was no significant difference in the performance of all models on the pre-20 data set. The best-performing model was the MC-CNN, achieving an AP of 94.61% (94.22%, 94.99%) and an F\({}_{1}\) score of 93.20% (92.86%, 93.54%) on the post-20 data set. The MC-Transformer performed best on the incomplete data set, achieving an AP of 91.63% (91.41%, 91.85%) and an F\({}_{1}\) score of 89.29% (88.80%, 89.78%).
### Performance on Single Sequence Input
The proposed MC-NN uses two inputs. However, it cannot be guaranteed that the required HA and NA pairs will always be obtainable for every strain. We conducted additional experiments on two data sets, one comprising 23,802 HA protein sequences and the other 5,142 NA protein sequences. The results of these experiments are presented in Table 3. The results indicated reduced
performance for all models when corresponding H/N sequence pairs were missing. However, the MC-Transformer model outperformed the MC-CNN and MC-BiGRU models on both data sets.
## 7 Conclusion and Discussion
The rapid mutation of influenza viruses leads to frequent seasonal outbreaks, although they infrequently result in pandemics. However, these viruses can exacerbate underlying medical conditions, elevating the risk of mortality. In this study, we present a novel approach to predict the viral host at a lower taxonomic level and subtype of the Influenza A virus (IAV) by utilising multi-channel neural networks.
Figure 4: Comparison of Overall Performance Between Models (Hosts): the baseline results with BLAST are framed by the black solid line.
Figure 5: Comparison of Overall Performance Between Models (HA subtypes): the baseline results with BLAST are framed by the black solid line.
Figure 6: Comparison of Overall Performance Between Models (NA Subtypes): the baseline results with BLAST are framed by the black solid line.
Our approach differs from traditional methods, as it employs a neural network architecture that can learn the embedding of protein trigrams instead of manually encoding protein sequences into numerical vectors. The multi-channel nature of our network eliminates the need for separate models for similar tasks, as it can take multiple inputs and produce multiple outputs. We evaluated the performance of our approach using various algorithms, including CNN, BiGRU, and Transformer, and found that Transformer performed better than the other algorithms. In addition to our previous experiments, we carried out further evaluations to assess the performance of the models in the absence of matching H/N sequence pairs. The results showed that the MC-Transformer model consistently displayed superior performance.
This method could greatly benefit resource-poor regions where laboratory experiments are cost-prohibitive. However, our approach is limited by its reliance on supervised learning algorithms and the need for correctly labelled data, which may result in the poor predictive ability for labels with insufficient data. Further research is needed to address these limitations, including the prediction of cross-species transmissibility and leveraging insufficient data.
Acknowledgments. The work is supported by the University of Liverpool.
## Declarations
**Conflict of interest** The authors have no conflicts of interest to declare. All co-authors have seen and agreed with the contents of the manuscript. We certify that the submission is original work and is not under review at any other publication.
**Ethics approval** This article does not contain any studies with human participants or animals performed by any of the authors.
|
2306.05593 | Localized Neural Network Modelling of Time Series: A Case Study on US
Monetary Policy | In this paper, we investigate a semiparametric regression model under the
context of treatment effects via a localized neural network (LNN) approach. Due
to a vast number of parameters involved, we reduce the number of effective
parameters by (i) exploring the use of identification restrictions; and (ii)
adopting a variable selection method based on the group-LASSO technique.
Subsequently, we derive the corresponding estimation theory and propose a
dependent wild bootstrap procedure to construct valid inferences accounting for
the dependence of data. Finally, we validate our theoretical findings through
extensive numerical studies. In an empirical study, we revisit the impacts of a
tightening monetary policy action on a variety of economic variables, including
short-/long-term interest rate, inflation, unemployment rate, industrial price
and equity return via the newly proposed framework using a monthly dataset of
the US. | Jiti Gao, Fei Liu, Bin Peng, Yanrong Yang | 2023-06-08T23:41:06Z | http://arxiv.org/abs/2306.05593v2 | # A Localized Neural Network with Dependent Data: Estimation and Inference
###### Abstract
In this paper, we propose a localized neural network (LNN) model and then develop the LNN based estimation and inferential procedures for dependent data in both cases with quantitative/qualitative outcomes. We explore the use of identification restrictions from a nonparametric regression perspective, and establish an estimation theory for the LNN setting under a set of mild conditions. The asymptotic distributions are derived accordingly, and we show that LNN automatically eliminates the dependence of data when calculating the asymptotic variances. The finding is important, as one can easily use different types of wild bootstrap methods to obtain valid inference practically. In particular, for quantitative outcomes, the proposed LNN approach yields closed-form expressions for the estimates of some key estimators of interest. Last but not least, we examine our theoretical findings through extensive numerical studies.
_Keywords_: Activation Function; Binary Structure; Deep Learning; Kernel Regression; Time Series
_JEL classification_: C14, C22, C45
## 1 Introduction
Neural network (NN) architecture has received increasing attention over the last several decades. On relevant topics, a large number of papers have been published in different journals such as _Econometrica_, _The Annals of Statistics_, _Journal of Machine Learning Research_, _Neural Networks_, _Neurocomputing_, etc. by experts from different disciplines. Apparently, we cannot exhaust the literature, but refer the interested reader to Bartlett et al. (2021) and Fan et al. (2021) for extensive reviews from a methodological point of view.
NN usually includes three ingredients: input layer, hidden layer(s), and output layer. We now briefly comment on them one by one. The input layer is possibly the easiest one to understand, as it includes regressors only. Because NN usually concerns a generic unknown function of multiple regressors of interest, it is unnecessary to involve any interaction terms among these regressors in the input layer. The hidden layer(s) include the parameters which need to be determined by training data. Once the parameters are estimated, one can load the test dataset in order to evaluate the performance of the newly constructed NN. Finally, the output layer receives the estimation/decision/prediction, which varies with the context. Usually, there are two types of data for the output layer: quantitative outcomes (e.g., \(y\) is continuous), and qualitative outcomes (e.g., \(y\) is binary).
That said, in this paper we account for the features of input and output layers, and propose to consider two specific models as follows:
\[y =g(\mathbf{x})+\varepsilon\quad\text{for continuous outcome}, \tag{1.1}\] \[y =\left\{\begin{array}{ll}1&g(\mathbf{x})-\varepsilon\geq 0 \\ 0&\text{otherwise}\end{array}\right.\quad\text{for binary outcome}, \tag{1.2}\]
in which \(\mathbf{x}=(x_{1},\ldots,x_{d})^{\top}\) includes \(d\) regressors with \(d\) being finite, and \(\varepsilon\) is the idiosyncratic error component. Further, we suppose that \(g\,:\,[-a,a]^{d}\rightarrow\mathbb{R}\), where \(a\) is fixed. The main goal of this paper is to infer \(g(\cdot)\) through a neural network architecture for both models. In addition, for Model (1.2) only, we suppose that the respective probability density function (PDF) and the cumulative distribution function (CDF) of \(\varepsilon\) are known, and denote the PDF and CDF by \(\phi_{\varepsilon}(\cdot)\) and \(\Phi_{\varepsilon}(\cdot)\) respectively. Here, the information about \(\phi_{\varepsilon}(\cdot)\) and \(\Phi_{\varepsilon}(\cdot)\) is necessary for carrying out likelihood estimation. Possibly, one may further merge both models into one framework (say, \(y=g(\mathbf{x},\varepsilon)\)), and consider a generic loss function, as in Chen and White (1999) and Section 1 of Bartlett et al. (2021), which however will involve some high level conditions to facilitate the development. As we will derive some results which are for Model (1.2) in particular and
will also show that Model (1.1) may enjoy some closed-form estimation procedures, we do not further pursue a more general setup in this paper, but would like to investigate the general setup in future research.
Until very recently, the investigation of NN architectures has mainly focused on some fixed designs (e.g., Cybenko, 1989 and many follow-up studies since then), or used independent and identically distributed (i.i.d.) data (e.g., Kohler and Krzyzak, 2017, Bauer and Kohler, 2019, Schmidt-Hieber, 2020, and many references therein). There are only limited studies available for us to understand NN with dependent data from a theoretical perspective (see, Chen and Shen, 1998, Chen, 2007, for example), although NN based methods have been widely used to study different types of time series data in practice (e.g., Hill et al., 1996, Chen et al., 2001, Gu et al., 2021, Gu et al., 2020, just to name a few). We would like to contribute along this line of research, so throughout we assume that the following time series data are observable:
\[\{(y_{t},\mathbf{x}_{t})\,|\,t\in[T]\}, \tag{1.3}\]
where, for a positive integer \(T\), \([T]\) stands for \(\{1,\ldots,T\}\). The values of \(\{y_{t}\}\) vary with respect to the model being studied. We shall not mention it again unless misunderstanding may arise.
Up to this point, it is worth further commenting on the hidden layer, which usually involves lots of neurons. Here, a neuron stands for an activation function, which maps a linear combination of the regressors from the input layer to a certain range of the real line. As a consequence, lots of parameters are actually involved in the hidden layer. The literature seems to agree that in order to keep the number of effective parameters within a reasonable range, sparsity should be imposed (e.g., Schmidt-Hieber, 2020, Wang and Lin, 2021, Fan and Gu, 2022, Zhong et al., 2022, just to name a few), and in particular Schmidt-Hieber (2020) writes "_the network sparsity assumes that there are only few non-zero/active network parameters_", which essentially is reflected by the number of active neurons. In this paper we consider a Sigmoidal function, \(\sigma(\cdot):\mathbb{R}\rightarrow[0,1]\), as the activation function, and acknowledge the growing literature on the non-sigmoidal rectified linear unit (ReLU) function. We conjecture that, using the ReLU function, one may derive results similar to those presented below, but this requires some careful investigation in future research.
All things considered, we draw Figure 1 for the purpose of illustration, which presents a neural network architecture with only a small number of active neurons. Specifically, the elements of \(\mathbf{x}\) form the input layer, while, in the hidden layer, only neurons in the dark area are activated. Eventually, a final result is passed on to the output layer. To understand Figure 1 fully, a few questions arise naturally:
1. Provided a set of dependent time series data, how do we train a neural network, such as that in Figure 1?
2. Why are there only a small number of neurons getting activated?
3. Given a set of activated neurons, can we further reduce the number of parameters? In other words, how does sparsity come to play just among the activated neurons?
4. When the least absolute shrinkage and selection operator (i.e., LASSO) is employed, how do we define the set of true parameters?
5. Can any inference (such as a confidence interval) be established? and so forth.
In Figure 1, \(\widehat{g}(\cdot)\) is an estimated version of \(g(\cdot)\).
With respect to the existing literature, it seems that so far the entire literature has been focusing on prediction errors, and barely talks about how to build feasible inferential procedures, such as constructing confidence intervals. A few exceptions known to us are Du et al. (2021), Farrell et al. (2021) and Chen et al. (2022), for example, on estimation and inference for the average treatment effect rather than on \(g(\cdot)\), which is our object of interest in this paper. In other words, to the best of our knowledge, there have been neither asymptotic distribution results available for \(\widehat{g}(\cdot)\) nor inferential results available for \(g(\cdot)\). Therefore, the main objective and contribution of this paper is that we develop the LNN based approach to addressing important estimation and inferential issues for \(g(\cdot)\) associated with time series data.
Another challenge which arises with the complexity of the NN architecture is the transparency of algorithms. It is actually associated with our discussion about the lack of understanding on how to establish a feasible version practically. Although a variety of software packages (e.g., Gunther and Fritsch, 2010 and references therein) have been well adopted, the detailed implementation is still very vague in our view. When writing up this paper, we could not find any existing paper using the NN approach on social science topics that provides a description of its numerical implementation. While we agree with Athey (2019) that "_machine learning approaches will have dramatic impacts on different fields of social science within a short time frame_", a transparent algorithm is much needed to ensure that findings are relevant and useful in practice.

Figure 1: A Neural Network with Sparsity
Below, we provide some examples to illustrate our concern about the transparency of algorithms further. In the literature, the "neuralnet" R package (see Gunther and Fritsch, 2010 for a detailed illustration) has been well adopted. When training a NN, a key parameter is called "hidden", which the R documentation describes as _a vector of integers specifying the number of hidden neurons in each layer_, and the package refers to Murata et al. (1994) regarding the choice of the number of neurons. However, it is worth pointing out that Murata et al. (1994) use a modified AIC criterion to investigate the case with the number of neurons (as well as the number of parameters) being finite, which is reflected in their asymptotic development given in the appendix. As a consequence, the arguments made in Murata et al. (1994) no longer hold when the number of neurons is diverging. As we are about to show below, having a diverging number of neurons is the minimum requirement to achieve asymptotic consistency, and the rate of divergence is also associated with the sample size under a set of minor conditions. Similar issues also apply to the "deepnet" package of R, "torch.nn.Linear" and "tfl.layers.Linear" of Python, "feedforwardnet" of Matlab, etc. All in all, the fundamental question behind these is how to establish NN based estimation and inference for practical implementation.
Having said that, our contributions are four-fold. First, we explore the use of identification restrictions from a nonparametric regression perspective, and establish the LNN based estimation method for the effective parameters under a set of mild conditions. Second, asymptotic distributions are derived accordingly for inferential purposes, and we show that the LNN method automatically eliminates the dependence of data when calculating the asymptotic variances. The finding is important, as one can easily use different types of wild bootstrap methods to obtain valid inference practically. Also, it is worth emphasizing that for the first time in the relevant literature, we are able to rigorously establish asymptotic distributions for the proposed estimators of the unknown functions using the LNN approach. Third, for quantitative outcomes, the proposed LNN approach yields closed-form expressions for the estimators of the parameters of interest. Last but not least, we examine our theoretical findings through extensive numerical studies.
The rest of the paper is organised as follows. Section 2 provides some basic assumptions with justifications. Section 3 first introduces the LNN architecture from a nonparametric regression
perspective in Section 3.1, and then establishes asymptotic results associated with Models (1.1) and (1.2) in Section 3.2 and Section 3.3 respectively. Section 3.4 summarizes some key points outlined in this paper, and further discusses some important issues associated with LNN. We provide extensive simulation studies in Section 4 to examine the theoretical findings. Section 5 includes two empirical studies using Model (1.1) and Model (1.2) respectively. Section 6 concludes. We provide some additional information about the Sigmoidal squasher in Appendix A.1, some extra simulation results in Appendix A.2, an important extension from \(\mathbf{x}\in[0,1]^{d}\) to \(\mathbf{x}\in\mathbb{R}^{d}\) in Appendix A.3, important lemmas in Appendix A.4, and all the necessary proofs in Appendix A.5.
## 2 Notation and Assumptions
In this section, we introduce some notation that will be repeatedly used throughout the rest of this paper, and lay out some necessary assumptions.
We start from mathematical symbols. Vectors and matrices are always expressed in bold font. Further, \(I(\cdot)\) defines the indicator function; \(\|\cdot\|\) denotes the Euclidean norm of a vector or the Frobenius norm of a matrix; \(\mathbf{0}_{a}\) and \(\mathbf{1}_{a}\) are respectively \(a\times 1\) vectors of zeros and ones for \(a\in\mathbb{N}\); for a vector of nonnegative integers \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{d})^{\top}\in\mathbb{N}_{0}^{d}\) in which \(\mathbb{N}_{0}=\{0\}\cup\mathbb{N}\), let \(\boldsymbol{\alpha}!=\alpha_{1}!\cdots\alpha_{d}!\); \(\mathtt{c}\), \(\mathtt{C}\) and \(O(1)\) always stand for fixed constants, and may be different at each appearance; \(\rightarrow_{P}\) and \(\rightarrow_{D}\) stand for convergence in probability and convergence in distribution, respectively. For \(g(\mathbf{x})\) defined in Model (1.1) and Model (1.2), if the partial derivative of \(g(\mathbf{x})\) exists, we write \(\frac{\partial^{|\boldsymbol{\alpha}|}g(\mathbf{x})}{\partial\mathbf{x}^{\boldsymbol{\alpha}}}=\frac{\partial^{|\boldsymbol{\alpha}|}g(\mathbf{x})}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{d}^{\alpha_{d}}}\) for short, where \(|\boldsymbol{\alpha}|=\sum_{j=1}^{d}\alpha_{j}\). We let \(\|g\|_{\infty}=\sup_{\mathbf{x}\in[-a,a]^{d}}|g(\mathbf{x})|\).
Having these symbols in hand, we are ready to proceed. First, we regulate the function of interest and the activation function. Below, we present a definition, and then formally state the first assumption of the paper.
**Definition 2.1** (Continuity).: _Let \(p=q+s\) for some \(q\in\mathbb{N}\) and \(0<s\leq 1\). A function \(m\::[-a,a]^{d}\mapsto\mathbb{R}\) is called \((p,\mathscr{C})\)-smooth, if for every \(\boldsymbol{\alpha}\in\mathbb{N}_{0}^{d}\) with \(|\boldsymbol{\alpha}|=q\) the partial derivative \(\frac{\partial^{|\boldsymbol{\alpha}|}m(\mathbf{x})}{\partial\mathbf{x}^{ \boldsymbol{\alpha}}}\) exists and satisfies that for all \(\mathbf{x},\mathbf{z}\in[-a,a]^{d}\),_
\[\left|\frac{\partial^{|\boldsymbol{\alpha}|}m(\mathbf{x})}{\partial\mathbf{x}^ {\boldsymbol{\alpha}}}-\frac{\partial^{|\boldsymbol{\alpha}|}m(\mathbf{z})}{ \partial\mathbf{z}^{\boldsymbol{\alpha}}}\right|\leq\mathscr{C}\cdot\| \mathbf{x}-\mathbf{z}\|^{s}.\]
Definition 2.1 basically defines a family of sufficiently smooth functions.
**Assumption 1**.:
1. _Let_ \(g(\mathbf{x})\) _be_ \((p,\mathscr{C})\)_-smooth as in Definition_ 2.1_, and_ \(\max_{|\boldsymbol{\alpha}|\leq q}\|\frac{\partial^{|\boldsymbol{\alpha}|}g( \mathbf{x})}{\partial\mathbf{x}^{\boldsymbol{\alpha}}}\|_{\infty}\leq\mathtt{c}\)_._
2. _Let the Sigmoidal function_ \(\sigma(\cdot)\,:\,\mathbb{R}\to[0,1]\) _satisfy that: (a)_ \(\sigma(\cdot)\) _is at least_ \(q+1\) _times continuously differentiable with bounded derivatives; (b) a point_ \(u_{\sigma}\in\mathbb{R}\) _exists at which all derivatives up to the order_ \(q\) _of_ \(\sigma(\cdot)\) _are different from zero._
Assumption 1.1 is widely adopted in the literature of nonparametric regression (e.g., Li and Racine, 2007). The main point is that each component in the Taylor expansion of \(g(\cdot)\) is bounded and also sufficiently smooth.
Assumption 1.2 imposes very limited conditions on the activation function, and nests a wide class of activation functions commonly used in the relevant literature as special cases, e.g., Sigmoidal squasher (i.e., \(\sigma(x)=\frac{1}{1+\exp(-x)}\)), Error function (i.e., \(\sigma(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}\exp(-w^{2})dw\)), etc. We refer the interested reader to Dubey et al. (2022) for a comprehensive review on different activation functions. In addition, we note that the activation function is usually chosen by users, so in general it can be much smoother than the function of interest (i.e., \(g(\mathbf{x})\)). A detailed example is provided in the online supplementary Appendix A.1 to show Assumption 1.2 can be easily fulfilled in practice. Obviously, Assumption 1.2 rules out ReLU function, which will be left for future research as explained in the introduction.
We then regulate the behaviour of the time series involved in (1.3).
**Assumption 2**.:
1. \(\{(\mathbf{x}_{t},\varepsilon_{t})\,|\,t\in[T]\}\) _are strictly stationary and_ \(\alpha\)_-mixing with mixing coefficient_ \[\alpha(t)=\sup_{A\in\mathcal{F}^{0}_{-\infty},B\in\mathcal{F}^{\infty}_{t}}|P(A)P(B)-P(AB)|\] _satisfying_ \(\sum_{t=1}^{\infty}\alpha^{\nu/(2+\nu)}(t)<\infty\) _for some_ \(\nu>0\)_, where_ \(\nu\) _is the same as that involved in Assumption_ 2.3 _below, and where_ \(\mathcal{F}^{0}_{-\infty}\) _and_ \(\mathcal{F}^{\infty}_{t}\) _are the_ \(\sigma\)_-algebras generated by_ \(\{(\mathbf{x}_{s},\varepsilon_{s}):s\leq 0\}\) _and_ \(\{(\mathbf{x}_{s},\varepsilon_{s}):s\geq t\}\)_, respectively._
2. _The density function_ \(f_{\mathbf{x}}(\cdot)\) _of_ \(\mathbf{x}_{1}\) _is Lipschitz continuous on_ \([-a,a]^{d}\)_, and is also bounded away from 0 below and bounded away from_ \(\infty\) _above on_ \([-a,a]^{d}\)_._
3. _Model (_1.1_):_ \(E[\varepsilon_{1}\,|\,\mathbf{x}_{1}]=0\)_,_ \(E[\varepsilon_{1}^{2}\,|\,\mathbf{x}_{1}]=\sigma_{\varepsilon}^{2}\) _almost surely (a.s.), and_ \(E[|\varepsilon_{1}|^{2+\nu}\,|\,\mathbf{x}_{1}]\leq\mathsf{c}\) _a.s._ _Model (_1.2_):_ \(\phi_{\varepsilon}(\cdot)\) _and_ \(\Phi_{\varepsilon}(\cdot)\) _are known, and_ \([1-\Phi_{\varepsilon}(g(\mathbf{x}))]\Phi_{\varepsilon}(g(\mathbf{x}))\neq 0\) _uniformly on_ \([-a,a]^{d}\)_._
Assumption 2 is standard in the literature of time series regression (Fan and Yao, 2003; Gao, 2007). As we shall embed the regressors \(\{\mathbf{x}_{t}\}\) in the activation function under consideration
that is bounded from both below and above, we do not impose many assumptions on the behaviour of \(\{\mathbf{x}_{t}\}\). In a sense, LNN automatically rescales the regressors by design, which somewhat simplifies our asymptotic analysis. Also, we note that the uncorrelated structure imposed between \(\{\mathbf{x}_{t}\}\) and \(\{\varepsilon_{t}\}\) is not necessary in this paper. For Model (1.1), we in fact can allow for heterogeneity, such as
\[E[\varepsilon_{1}^{2}\,|\,\mathbf{x}_{1}]=\psi(\mathbf{x}_{1}),\]
which however makes notation even more complicated than what we need to involve. Similarly, for Model (1.2), we may also introduce heterogeneity in a more general setup such as
\[\Pr(y_{1}=1\,|\,\mathbf{x}_{1})=\Phi_{\varepsilon}(g(\mathbf{x}_{1})\,|\, \widetilde{\psi}),\]
in which \(\widetilde{\psi}\) stands for an unknown variance of \(\varepsilon_{1}\) to be determined by data. Again, it would complicate our discussion and deviate from our main goal.
Finally, we would like to point out that \(\{\mathbf{x}_{t}\}\) do not have to be strictly stationary, and can have other more complex structures such as linear processes, locally stationarity, heterogeneity, deterministic trends, etc. Surely, the corresponding development and asymptotic results will need to be modified accordingly, but it will not add too many extra credits to the original idea of this paper. Therefore, we adopt the current setting throughout the rest of this paper.
We now move on to construct the so-called LNN architecture for both models, and will provide some necessary comments along the way.
## 3 Methodology and Theory
In this section, we first introduce the LNN architecture in Section 3.1, and then establish asymptotic results for both models in Sections 3.2 and 3.3 respectively. Section 3.4 further discusses some issues which may guide our future research.
One important goal of this paper is to understand the logic of NN. Therefore, it is worth being more specific about how the NN architecture works conceptually. In the literature of machine learning, one prefers to target the entire area that the unknown function is defined on, and normally uses the training set to pre-specify a large number of parameters, which do not change with respect to different observations in the test set. By doing so, one just needs to pay some price when calculating the parameters for the first time, and no longer needs to update them with respect to different values in the test set. With this in mind, we are ready to consider the estimation of both Models (1.1) and (1.2).
### The Localized Neural Network Architecture
In contrast to the current literature that usually allows for all parameters to be estimated from data, we start by presenting some identification conditions, which can help reduce the number of effective parameters significantly. In a sense, we bring in sparsity from a different perspective.
In view of the literature (e.g., Bauer and Kohler, 2019; Schmidt-Hieber, 2020), one key idea behind the NN architecture is that it can approximate polynomial terms, whose linear combinations, as is well understood, can further approximate unknown functions by standard nonparametric analysis. In this paper, we do the same, but the difference is that we introduce a bandwidth parameter \(h\) below, which serves the same purpose as that employed in nonparametric kernel estimation. This is basically why we define the neural network under consideration as a localized neural network (LNN). More importantly, the length of \(h\) directly controls which neurons in the hidden layer are activated. We shall be clear on this point when presenting the estimation method in Sections 3.2 and 3.3.
To proceed further, we develop several new and useful results in Lemmas 3.1-3.3 to show how to approximate \(g(\cdot)\) by a localized neural network system.
**Lemma 3.1** (Identification).: _Suppose that Assumption 1.2 holds. For \(\forall x_{0}\in\mathbb{R}^{d}\), we can find \(\boldsymbol{\gamma}=(\gamma_{1},\ldots,\gamma_{q+1})^{\top}\) and \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{q+1})^{\top}\) such that_
\[\sup_{|x-x_{0}|\leq h}\left|\sum_{k=1}^{q+1}\gamma_{k}\sigma(\beta_{k}\cdot(x- x_{0})+u_{\sigma})-(x-x_{0})^{q}\right|=O\left(h^{q+1}\right)\]
_holds, where \(\mathtt{C}\) is a constant, \(h\) is a bandwidth, and \(\gamma_{k}=\frac{(-1)^{q+k-1}\mathtt{C}^{q}}{\sigma^{(q)}(u_{\sigma})}\binom{q }{k-1}\) and \(\beta_{k}=\frac{k-1}{\mathtt{C}}\) for \(k\in[q+1]\)._
**Remark 3.1**.:
1. Note that Lemma 3.1 is independent of data. Only the number of polynomial terms to be approximated depends on the smoothness of \(g(\cdot)\). In fact, even the smoothness is predetermined by users, which is equivalent to choosing among local constant, local linear, and local higher order polynomials in the literature of kernel regression (Fan and Gijbels, 1996). That said, \(\boldsymbol{\gamma}\), \(\boldsymbol{\beta}\) and \(u_{\sigma}\) are fully decided by the activation function and the polynomial terms, so they are known prior to regression. The constant \(\mathtt{C}\) raises an issue of identifiability, so we simply let \(\mathtt{C}=1\) throughout. Also, as required by Assumption 1.2, \(\sigma^{(q)}(u_{\sigma})\neq 0\), which can be easily realized in view of Figure A.1 of the online supplementary appendix.
2. \(h\) is equivalent to the bandwidth of kernel regression, so it will be decided by the sample size eventually. Later on, we show that \(h\) reflects the relationship between the sample size and the total number of neurons, and also controls how many neurons are activated.
3. The term \(k-1\) in \(\beta_{k}\) is chosen to invoke the property of the Stirling numbers of the second kind in the development. As a result, the LNN architecture with a suitable Sigmoidal function can approximate the polynomial terms, whose linear combination further ensures that an unknown smooth function can be approximated.
4. Lemma 3.1 in fact yields a recursive relationship, as one can repeatedly invoke Lemma 3.1 to replace \((x-x_{0})\) inside the activation. It will then yield a LNN architecture with multiple hidden layers. However, we do not see any benefit of doing so unless \(g(\cdot)\) in both models has certain specific structure. That said, we focus on LNN with one hidden layer in this study.
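For intuition, the following is a minimal numerical sketch (ours, not part of the paper) of Lemma 3.1 with the Sigmoidal squasher, \(\mathtt{C}=1\) and \(q=3\); the helper `sigmoid_deriv` hard-codes the logistic derivatives up to the third order, and the sup-error should shrink at roughly the rate \(h^{q+1}\) as \(h\) halves.

```python
# A minimal numerical check of Lemma 3.1 (our illustration, not the authors'
# code): with C = 1 and the Sigmoidal squasher, the combination
# sum_k gamma_k * sigma(beta_k (x - x0) + u_sigma) reproduces (x - x0)^q on
# |x - x0| <= h, with sup-error of order h^{q+1}.
import numpy as np
from math import comb

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(u, q):
    """q-th derivative of the logistic function at u, hard-coded up to q = 3."""
    s = sigmoid(u)
    d1 = s * (1 - s)
    d2 = d1 * (1 - 2 * s)
    d3 = d2 * (1 - 2 * s) - 2 * d1 ** 2
    return [s, d1, d2, d3][q]

q, u_sigma, x0 = 3, 0.5, 0.2        # sigma^{(3)}(0.5) != 0, so Assumption 1.2 holds
gamma = np.array([(-1) ** (q + k - 1) * comb(q, k - 1) / sigmoid_deriv(u_sigma, q)
                  for k in range(1, q + 2)])
beta = np.arange(q + 1)             # beta_k = k - 1 with C = 1

for h in (0.2, 0.1, 0.05):
    x = np.linspace(x0 - h, x0 + h, 201)
    approx = sum(g * sigmoid(b * (x - x0) + u_sigma) for g, b in zip(gamma, beta))
    print(f"h = {h:4.2f}: sup-error = {np.max(np.abs(approx - (x - x0) ** q)):.1e}")
```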
To carry on, we further introduce a few more symbols. Denote that
\[\mathscr{P}_{q}=\left\{\text{Linear span of the monomials }\prod_{k=1}^{d}x_{k}^{n_{k}} \text{ with }0\leq|\mathbf{n}|\leq q\right\}, \tag{3.1}\]
where \(\mathbf{n}=(n_{1},\ldots,n_{d})^{\top}\in\mathbb{N}_{0}^{d}\) and \(|\mathbf{n}|=\sum_{i=1}^{d}n_{i}\). The dimension of \(\mathscr{P}_{q}\) is equivalent to the number of distinctive terms in the expansion of \((1+x_{1}+\cdots+x_{d})^{q}\), so
\[\text{dim}\mathscr{P}_{q}=\binom{d+q}{d}:=d_{q},\]
where \(d_{q}\) is introduced for notational simplicity, and is fixed due to the fact that both \(d\) and \(q\) are finite. Furthermore, we define
\[\mathbf{m}(\mathbf{x}\,|\,\mathbf{x}_{0})=(m_{1}(\mathbf{x}\,|\,\mathbf{x}_{0 }),\ldots,m_{d_{q}}(\mathbf{x}\,|\,\mathbf{x}_{0}))^{\top}, \tag{3.2}\]
where \(m_{j}(\mathbf{x}\,|\,\mathbf{x}_{0})\)'s are the basis monomials (centred at \(\mathbf{x}_{0}\)) in \(\mathscr{P}_{q}\). Also, we denote a set \(C_{\mathbf{x}_{0},h}\):
\[C_{\mathbf{x}_{0},h}=\{\mathbf{x}\,|\,|x_{j}-x_{0,j}|\leq h\text{ for }j\in[d]\}, \tag{3.3}\]
where \(x_{j}\) and \(x_{0,j}\) are the \(j^{th}\) elements of \(\mathbf{x}\) and \(\mathbf{x}_{0}\) respectively.
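As a side illustration (ours, with hypothetical helper names), the basis \(\mathbf{m}(\mathbf{x}\,|\,\mathbf{x}_{0})\) in (3.2) and the cube \(C_{\mathbf{x}_{0},h}\) in (3.3) can be written out explicitly; for \(d=2\) and \(q=2\) one has \(d_{q}=\binom{4}{2}=6\) monomials.

```python
# A small illustration (ours) of the monomial basis m(x | x0) in (3.2) and the
# cube C_{x0,h} in (3.3), for d = 2 and q = 2, so that d_q = binom(d + q, d) = 6.
import numpy as np
from itertools import product
from math import comb

d, q = 2, 2
# exponent vectors n with 0 <= |n| <= q: the monomials spanning P_q
exponents = [n for n in product(range(q + 1), repeat=d) if sum(n) <= q]
assert len(exponents) == comb(d + q, d)        # d_q = 6

def m_basis(x, x0):
    """m(x | x0): the d_q monomials in (x - x0), centred at x0."""
    z = np.asarray(x, dtype=float) - np.asarray(x0, dtype=float)
    return np.array([np.prod(z ** np.array(n)) for n in exponents])

def in_cube(x, x0, h):
    """Indicator of C_{x0,h} = {x : |x_j - x0_j| <= h for all j}."""
    return bool(np.all(np.abs(np.asarray(x) - np.asarray(x0)) <= h))

x0 = np.array([0.0, 0.0])
print(m_basis([0.1, -0.2], x0))                # one value per exponent vector
print(in_cube([0.1, -0.2], x0, h=0.5))         # True
```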
With these symbols in hand, we present the next lemma, which demonstrates the feasibility of the LNN architecture.
**Lemma 3.2** (Feasibility).: _Suppose that Assumption 1.2 holds. For \(\forall\mathbf{x}_{0}\in\mathbb{R}^{d}\), we define \(p(\mathbf{x}\,|\,\mathbf{x}_{0},\boldsymbol{\lambda})=\boldsymbol{\lambda}^{ \top}\mathbf{m}(\mathbf{x}\,|\,\mathbf{x}_{0}),\) where \(\boldsymbol{\lambda}=(\lambda_{1},\ldots,\lambda_{d_{q}})^{\top}\) with \(\|\boldsymbol{\lambda}\|\leq\mathbf{c}\). Then a LNN of the type_
\[s(\mathbf{x}\,|\,\mathbf{x}_{0},\widetilde{\boldsymbol{\lambda}})=( \widetilde{\boldsymbol{\lambda}}\otimes\boldsymbol{\gamma})^{\top}\boldsymbol {\sigma}(\mathbf{x}\,|\,\mathbf{x}_{0})\quad\text{with}\quad\boldsymbol{ \sigma}(\mathbf{x}\,|\,\mathbf{x}_{0})=\{\sigma\left([1,\mathbf{x}^{\top}- \mathbf{x}_{0}^{\top}]\boldsymbol{\pi}_{j}\right)\}_{d_{q}(q+1)\times 1}\]
_exists such that_
\[\sup_{\mathbf{x}\in C_{\mathbf{x}_{0},h}}|s(\mathbf{x}\,|\,\mathbf{x}_{0},\widetilde {\boldsymbol{\lambda}})-p(\mathbf{x}\,|\,\mathbf{x}_{0},\boldsymbol{\lambda})|=O( h^{q+1}),\]
_where \(\widetilde{\boldsymbol{\lambda}}=\mathbf{D}^{\top}\boldsymbol{\lambda}\) with \(\mathbf{D}\) being a rotation matrix, and_
\[(\boldsymbol{\pi}_{1},\ldots,\boldsymbol{\pi}_{d_{q}(q+1)})=\frac{\operatorname {diag}\{h,\mathbf{I}_{d}\}\mathbf{W}(\mathbf{I}_{d_{q}}\otimes\boldsymbol{ \beta}^{\top})}{d+1}+\begin{bmatrix}u_{\sigma}\\ \mathbf{0}_{d}\end{bmatrix}\otimes\mathbf{1}_{d_{q}(q+1)}^{\top},\]
_in which \(\mathbf{W}\) is a user chosen matrix satisfying that \(\max_{j}\|\mathbf{w}_{j}\|\leq\sqrt{d+1}\) and \(\mathbf{w}_{j_{1}}\neq\mathbf{w}_{j_{2}}\) for any given \(j_{1},j_{2}\in[d_{q}]\), and \(\mathbf{w}_{j}\) stands for the \(j^{\text{th}}\) column of \(\mathbf{W}\)._
**Remark 3.2**.:
1. Note that we use vector form to rewrite each element of \(\boldsymbol{\sigma}(\mathbf{x}\,|\,\mathbf{x}_{0})\), i.e., \[\sigma\left([1,\mathbf{x}^{\top}-\mathbf{x}_{0}^{\top}]\boldsymbol{\pi}_{j}\right)\] which is exactly what a Sigmoidal activation function does (i.e., mapping a linear combination of regressors plus a location parameter to \([0,1]\)). By design, \(\boldsymbol{\sigma}(\mathbf{x}\,|\,\mathbf{x}_{0})\) naturally and automatically explores different interaction terms of the regressors.
2. By choosing \(\boldsymbol{\lambda}\) and \(\mathbf{x}_{0}\), \(p(\mathbf{x}\,|\,\mathbf{x}_{0},\boldsymbol{\lambda})\) can approximate \(g(\mathbf{x})\) reasonably well in a small neighbourhood of \(\mathbf{x}_{0}\). This is not hard to see, as the leading terms of the \(q^{th}\) order Taylor expansion of \(g(\mathbf{x})\) at \(\mathbf{x}_{0}\) can be written as follows: \[g(\mathbf{x})\simeq\sum_{0\leq|\mathbf{J}|\leq q}\frac{1}{\mathbf{J}!}\cdot \frac{\partial^{|\mathbf{J}|}g(\mathbf{x}_{0})}{\partial\mathbf{x}^{\mathbf{J} }}(\mathbf{x}-\mathbf{x}_{0})^{\mathbf{J}}:=\boldsymbol{\lambda}^{\top} \mathbf{m}(\mathbf{x}\,|\,\mathbf{x}_{0}):=p(\mathbf{x}\,|\,\mathbf{x}_{0}, \boldsymbol{\lambda}),\] where the definition of \(\boldsymbol{\lambda}\) should be obvious in view of the definition of \(\mathbf{m}(\mathbf{x}\,|\,\mathbf{x}_{0})\) according to (3.2). As a result, \(s(\mathbf{x}\,|\,\mathbf{x}_{0},\widetilde{\boldsymbol{\lambda}})\) essentially approximates \(g(\mathbf{x})\) but in a small neighbourhood only, and the rate produced in Lemma 3.2 is proportional to \(h^{q+1}\) as a bias term.
3. Lemma 3.2 infers that the LNN architecture rotates the parameters of interest (i.e., \(\boldsymbol{\lambda}\)) with a pre-determined \(d_{q}\times d_{q}\) full rank matrix \(\mathbf{D}\), so the new parameters of interest become \(\widetilde{\boldsymbol{\lambda}}\). Therefore, accounting for identification conditions properly ensures that the number of effective parameters to be estimated is far smaller than it appears. Without addressing identification issues, one has to determine \(d_{q}(q+1)(d+2)\) parameters. In other words, utilizing identification conditions automatically reduces the number of effective parameters which need to be estimated.
4. Without loss of generality, we can let \(\mathbf{w}_{j}=\frac{\sqrt{d+1}}{q}\mathbf{w}_{j}^{*}\), where \(\{\mathbf{w}_{j}^{*}\}\) are the vectors corresponding to the powers of the distinctive terms in the expansion of \((1+x_{1}+\cdots+x_{d})^{q}\). Thus, \(\|\mathbf{w}_{j}\|\leq\frac{\sqrt{d+1}}{q}|\mathbf{w}_{j}^{*}|=\sqrt{d+1}\), which further ensures Lemma 3.1 can be invoked.
Using Lemma 3.2, the final form of LNN requires us to focus on a compact subspace of \(\mathbb{R}^{d}\) (i.e., \([-a,a]^{d}\)), as we need to partition it into lots of small cubes. As discussed in Appendix A.3 below, the main results remain valid when allowing \(a\to\infty\) along with the sample size. These small cubes go by different names in the literature. For example, they are referred to as (hyper-)cubes in Bauer and Kohler (2019) and Schmidt-Hieber (2020), and are referred to as localization in Farrell et al. (2021). In our view, each cube corresponds to an effective sample set, so it is very similar to the effective sample range in nonparametric kernel regression, which is jointly determined by the point of interest and the bandwidth.
That said, for a given integer \(M\geq 1\), we subdivide \([-a,a]^{d}\) into \(M^{d}\) cubes, each of half-width
\[h=\frac{a}{M},\]
and for comprehensibility, we number these cubes by \(C_{\mathbf{x}_{0i},h}\) with \(\mathbf{i}\in[M]^{d}\), where \(\mathbf{x}_{0i}\) represents the center of \(C_{\mathbf{x}_{0i},h}\). Mathematically, we can write
\[C_{\mathbf{x}_{0\mathbf{i}},h}=\{\mathbf{x}\,|\,|x_{j}-x_{0\mathbf{i},j}|\leq h\text{ for }j\in[d]\}, \tag{3.4}\]
where \(x_{j}\) and \(x_{0i,j}\) are the \(j^{th}\) elements of \(\mathbf{x}\) and \(\mathbf{x}_{0i}\) respectively. Then the following lemma holds.
**Lemma 3.3** (NN Architecture).: _Suppose that Assumption 1 holds. There exists an LNN approximation of \(g(\mathbf{x})\) such that_
\[\|g(\mathbf{x})-\widetilde{s}(\mathbf{x}\,|\,\widetilde{\boldsymbol{\Lambda}} )\|_{\infty}=O(h^{p}),\]
_where \(\widetilde{\boldsymbol{\Lambda}}=\{\widetilde{\boldsymbol{\lambda}}_{ \mathbf{i}}\,|\,\mathbf{i}\in[M]^{d}\}\), and_
\[\widetilde{s}(\mathbf{x}\,|\,\widetilde{\boldsymbol{\Lambda}})=\sum_{ \mathbf{i}\in[M]^{d}}I_{\mathbf{i},h}(\mathbf{x})\cdot s(\mathbf{x}\,|\, \mathbf{x}_{0i},\widetilde{\boldsymbol{\lambda}}_{\mathbf{i}})\quad\text{ with}\quad I_{\mathbf{i},h}(\mathbf{x})=I(\mathbf{x}\in C_{\mathbf{x}_{0i},h}).\]
**Remark 3.3**.:
1. LNN considers the estimation of \(\{\widetilde{\boldsymbol{\lambda}}_{\mathbf{i}}\,|\,\mathbf{i}\in[M]^{d}\}\) simultaneously, where \(\widetilde{\boldsymbol{\lambda}}_{\mathbf{i}}\) corresponds to \(\boldsymbol{\lambda}_{\mathbf{i}}\) up to a rotation matrix \(\mathbf{D}\), and \(\boldsymbol{\lambda}_{\mathbf{i}}\) is decided by the Taylor expansion of \(g(\mathbf{x})\) at the point \(\mathbf{x}_{0i}\).
2. The use of the indicator function is consistent with the sparsity setting of Schmidt-Hieber (2020), in which the author argues that "_the network sparsity assumes that there are only few non-zero/active network parameters_". In fact, our study further clarifies the definition of "_the network sparsity_" by providing two categories: (1) defining non-active neurons (such as those in Figure 1) which is realised through the use of indicator function \(I_{\mathbf{i},h}(\mathbf{x})\), and (2) pointing out the number of effective parameters. In particular, the second point is important in our view, as one can now define a true set of parameters when using thresholding techniques as in Wang and Lin (2021), and Fan and Gu (2022).
3. The bandwidth \(h\) (i.e., the number of cubes) is data driven. This point will become obvious when we move on to Sections 3.2 and 3.3 below.
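Before moving to the estimation, here is a small sketch (our own bookkeeping, with hypothetical helper names) of the partition just described: cube \(k\) along each axis covers \([-a+2kh,\,-a+2(k+1)h]\) with centre \(-a+(2k+1)h\).

```python
# A sketch (ours) of the partition of [-a, a]^d into M^d cubes of
# half-width h = a / M, with centres x_{0i} as in (3.4).
import numpy as np

def cube_index(x, a, M):
    """Multi-index i of the cube C_{x0i,h} containing the point x."""
    h = a / M
    idx = np.floor((np.asarray(x, dtype=float) + a) / (2 * h)).astype(int)
    return np.clip(idx, 0, M - 1)            # points on the upper boundary

def cube_center(i, a, M):
    """Centre x_{0i} of the cube with multi-index i."""
    h = a / M
    return -a + (2 * np.asarray(i, dtype=float) + 1) * h

a, M = 3.0, 5                                 # h = 0.6; 5^2 = 25 cubes for d = 2
i = cube_index([1.3, -2.9], a, M)
print(i, cube_center(i, a, M))                # [3 0] [ 1.2 -2.4]
```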
### On Model (1.1)
In this subsection, we consider Model (1.1) only. As explained previously, LNN targets the entire area that \(g(\mathbf{x})\) is defined on, and uses all training data to pre-specify a large number of parameters which do not change with respect to the test set.
In view of Lemma 3.3, we define the following set of LNN candidates:
\[\mathcal{S}=\left\{\widetilde{s}(\mathbf{x}\,|\,\mathbf{\Theta})=\sum_{ \mathbf{i}\in[M]^{d}}I_{\mathbf{i},h}(\mathbf{x})\cdot s(\mathbf{x}\,|\, \mathbf{x}_{0\mathbf{i}},\boldsymbol{\theta}_{\mathbf{i}})\,\Big{|}\,\| \boldsymbol{\theta}_{\mathbf{i}}\|<\infty\right\}, \tag{3.5}\]
in which \(\mathbf{\Theta}=\{\boldsymbol{\theta}_{\mathbf{i}}\,|\,\mathbf{i}\in[M]^{d}\}\) with \(\boldsymbol{\theta}_{\mathbf{i}}\)'s being \(d_{q}\times 1\) vectors. The objective function is defined as
\[Q(\mathbf{\Theta})=\sum_{t=1}^{T}[y_{t}-\widetilde{s}(\mathbf{x}_{t}\,|\, \mathbf{\Theta})]^{2}. \tag{3.6}\]
Accordingly the OLS estimator of \(\widetilde{\boldsymbol{\Lambda}}\) defined in Lemma 3.3 is obtained by
\[\widehat{\mathbf{\Theta}}=\operatorname*{argmin}_{\widetilde{s}\in\mathcal{S }}Q(\mathbf{\Theta})\quad\text{with}\quad\widehat{\mathbf{\Theta}}=\{ \widehat{\boldsymbol{\theta}}_{\mathbf{i}}\,|\,\mathbf{i}\in[M]^{d}\} \tag{3.7}\]
and, for \(\forall\mathbf{x}_{0}\in[-a,a]^{d}\), the estimator of \(g(\mathbf{x}_{0})\) is then defined by
\[\widehat{g}(\mathbf{x}_{0})=\widetilde{s}(\mathbf{x}_{0}\,|\,\widehat{ \mathbf{\Theta}}). \tag{3.8}\]
By design, LNN accounts for the points on the boundary of \([-a,a]^{d}\) automatically.
**Remark 3.4**.: Our investigation assumes that the number of regressors is correctly specified. When they are over-specified, sparsity naturally arises. It is now even clearer why thresholding methods (such as those in Wang and Lin, 2021; Fan and Gu, 2022) should be adopted, and what
they are really penalizing. In a sense, identification restrictions bridge LNN and thresholding approaches. In this work, we do not further explore regression with penalty terms in order not to deviate from our main goal.
Equation (3.7) admits a closed-form estimator for each \(\mathbf{\widehat{\theta}_{\mathbf{i}}}\). To see this, we write
\[\frac{\partial Q(\mathbf{\Theta})}{\partial\mathbf{\theta}_{\mathbf{i}}} =-2\sum_{t=1}^{T}[y_{t}-\widetilde{s}(\mathbf{x}_{t}\,|\,\mathbf{ \Theta})]\cdot I_{\mathbf{i},h}(\mathbf{x}_{t})\cdot\frac{\partial s(\mathbf{x} _{t}\,|\,\mathbf{x}_{0\mathbf{i}},\mathbf{\theta}_{\mathbf{i}})}{\partial\mathbf{ \theta}_{\mathbf{i}}}\] \[=-2\sum_{t=1}^{T}[y_{t}-\widetilde{\mathbf{x}}_{\mathbf{i},t}^{ \top}\mathbf{\theta}_{\mathbf{i}}]\cdot\widetilde{\mathbf{x}}_{\mathbf{i},t},\]
where \(\widetilde{\mathbf{x}}_{\mathbf{i},t}=I_{\mathbf{i},h}(\mathbf{x}_{t})( \mathbf{I}_{d_{q}}\otimes\mathbf{\gamma}^{\top})\mathbf{\sigma}(\mathbf{x}_{t}\,|\, \mathbf{x}_{0\mathbf{i}})\), and the second equality follows from the fact that \(I_{\mathbf{i},h}(\mathbf{x}_{t})I_{\mathbf{j},h}(\mathbf{x}_{t})=0\) for \(\mathbf{i}\neq\mathbf{j}\). Thus, for \(\forall\mathbf{i}\), the first order condition yields
\[\mathbf{\widehat{\theta}_{\mathbf{i}}}=\left(\sum_{t=1}^{T}\widetilde{\mathbf{x}} _{\mathbf{i},t}\widetilde{\mathbf{x}}_{\mathbf{i},t}^{\top}\right)^{-1}\sum_{t =1}^{T}\widetilde{\mathbf{x}}_{\mathbf{i},t}y_{t}. \tag{3.9}\]
**Remark 3.5**.: We note that by (A.10) and (A.11) of the online supplementary appendix:
\[\sup_{\mathbf{x}\in C_{\mathbf{x}_{0\mathbf{i}},h}}\left\|\mathbf{D}^{-1} \mathbf{m}(\mathbf{x}\,|\,\mathbf{x}_{0\mathbf{i}})-(\mathbf{I}_{d_{q}} \otimes\mathbf{\gamma}^{\top})\mathbf{\sigma}(\mathbf{x}\,|\,\mathbf{x}_{0\mathbf{i}}) \right\|=O(h^{q+1}),\]
where \(\mathbf{D}\) is a full rank predetermined rotation matrix, and has been explained in Remark 3.3. In connection with the definition of \(\widetilde{\mathbf{x}}_{\mathbf{i},t}\), we immediately obtain
\[\left\|\mathbf{D}^{-1}I_{\mathbf{i},h}(\mathbf{x}_{t})\cdot\mathbf{m}( \mathbf{x}_{t}\,|\,\mathbf{x}_{0\mathbf{i}})-\widetilde{\mathbf{x}}_{\mathbf{ i},t}\right\|=O_{P}(h^{q+1}).\]
Thus, \(\widetilde{\mathbf{x}}_{\mathbf{i},t}\) is equivalent to \(I_{\mathbf{i},h}(\mathbf{x}_{t})\cdot\mathbf{m}(\mathbf{x}_{t}\,|\,\mathbf{x} _{0\mathbf{i}})\) up to a predetermined rotation matrix \(\mathbf{D}\). The finding is consistent with Remark 3.2, and ensures the invertibility of \(\sum_{t=1}^{T}\widetilde{\mathbf{x}}_{\mathbf{i},t}\widetilde{\mathbf{x}}_{ \mathbf{i},t}^{\top}\), at least asymptotically.
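To fix ideas, the following is a schematic implementation (ours, not the authors' code) of the piecewise estimator (3.7)-(3.9) for \(d=1\). Following Remark 3.5, we regress on the local monomials \(\mathbf{m}(x\,|\,x_{0\mathbf{i}})\) directly: they coincide with the LNN features up to the fixed rotation \(\mathbf{D}\), and the fitted values are invariant to that rotation. The helpers `fit_lnn` and `predict` are our own naming.

```python
# A schematic per-cube OLS estimator (ours) for d = 1, implementing (3.7)-(3.9)
# with the local monomials standing in for the (rotation-equivalent) LNN features.
import numpy as np

def fit_lnn(x, y, a, M, q):
    h = a / M
    centers = -a + (2 * np.arange(M) + 1) * h
    idx = np.clip(np.floor((x + a) / (2 * h)).astype(int), 0, M - 1)
    theta = {}
    for i in range(M):
        mask = idx == i                       # only observations in cube i
        if not mask.any():
            continue
        X = np.vander(x[mask] - centers[i], q + 1, increasing=True)  # 1, z, ..., z^q
        theta[i] = np.linalg.lstsq(X, y[mask], rcond=None)[0]        # (3.9)
    return theta, centers

def predict(x0, theta, centers, a, M, q):
    h = a / M
    i = int(np.clip(np.floor((x0 + a) / (2 * h)), 0, M - 1))
    z = (x0 - centers[i]) ** np.arange(q + 1)
    return z @ theta[i]                       # (3.8) evaluated at x0

rng = np.random.default_rng(0)
T, a, M, q = 2000, 3.0, 10, 2
x = rng.uniform(-a, a, T)
y = 1 + np.sin(x) + 0.5 * rng.standard_normal(T)    # a d = 1 version of (1.1)
theta, centers = fit_lnn(x, y, a, M, q)
print(predict(0.7, theta, centers, a, M, q), 1 + np.sin(0.7))
```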
After carefully studying (3.9) for each \(\mathbf{i}\) and repeatedly invoking \(I_{\mathbf{i},h}(\mathbf{x}_{t})I_{\mathbf{j},h}(\mathbf{x}_{t})=0\) for \(\mathbf{i}\neq\mathbf{j}\), the first theorem of this paper is stated as follows.
**Theorem 3.1**.: _Suppose that Assumptions 1 and 2 hold. For \(\forall\mathbf{x}_{0}\in[-a,a]^{d}\),_
\[\sqrt{Th^{d}}\widehat{\sigma}_{\mathbf{x}_{0}}^{-1}(\widehat{g}(\mathbf{x}_{0 })-g(\mathbf{x}_{0})+O_{P}(h^{p}))\rightarrow_{D}N(0,1),\]
_where \(\widehat{\sigma}_{\mathbf{x}_{0}}^{2}=\sigma_{\varepsilon}^{2}\mathbf{m}( \mathbf{x}_{0}\,|\,\mathbf{x}_{0})^{\top}\mathbf{H}\mathbf{\Sigma}_{\mathbf{ x}_{0}}^{-1}\mathbf{H}\mathbf{m}(\mathbf{x}_{0}\,|\,\mathbf{x}_{0})\), \(\mathbf{\Sigma}_{\mathbf{x}_{0}}=f_{\mathbf{x}}(\mathbf{x}_{0})\int_{[-1,1]^{d }}\mathbf{m}(\mathbf{x}\,|\,\mathbf{0})\mathbf{m}(\mathbf{x}\,|\,\mathbf{0}) ^{\top}\mathrm{d}\mathbf{x}\), \(\mathbf{H}=\mathrm{diag}\{H_{1},\ldots,H_{d_{q}}\}\) with \(H_{j}=\prod_{k=1}^{d}h^{-n_{j,k}}\), and \(\mathbf{n}_{j}=(n_{j,1},\ldots,n_{j,d})^{\top}\) includes the corresponding power terms of \(m_{j}(\mathbf{x}\,|\,\mathbf{x}_{0})\) defined in (3.2)._
It is noted that our LNN based estimation method is simple and easy to implement. Moreover, as pointed out in the introduction, we are probably among the first to rigorously establish the asymptotic normality of the LNN based estimator \(\widehat{g}(\mathbf{x}_{0})\), as given in Theorem 3.1.
It is also worth noting that our simulation studies in Appendix A.2 show that \(\widehat{g}(\mathbf{x}_{0})\) has better finite-sample properties than those of the conventional local-constant kernel estimation method. A likely explanation for the finite-sample advantages is that the LNN based estimation method naturally makes the best use of all possible linear combinations of \(\mathbf{x}_{t}\), as explained in Remark 3.2.1 above. By contrast, the conventional local-constant kernel estimation method only evaluates the estimates at the single data points \(\mathbf{x}_{t}\).
The same comments apply to Theorem 3.3 below.
**Remark 3.6**.:
1. When establishing the asymptotic distribution, the terms \(E[\varepsilon_{1}\varepsilon_{1+t}]\) for \(t\geq 1\) all vanish in the asymptotic covariance matrix due to the partition of (3.4) and the use of the indicator function. Here, the indicator function is a special nonparametric kernel function, so it is not very surprising that the finding is consistent with Chapter 5 of Fan and Yao (2003), for example, in the literature of nonparametric kernel regression with dependent data. As a result, when constructing confidence intervals, one can turn to a wild bootstrap procedure directly, which also bypasses the rotation matrix \(\mathbf{D}\) (see Theorem 3.2 below for details).
2. Ideally, we would like to replace the indicator function by a general kernel function such as \(K(\cdot)\) in (3.5). However, by doing so, \(s(\mathbf{x}\,|\,\mathbf{x}_{0},\widetilde{\boldsymbol{\lambda}})\) will not be able to approximate \(g(\mathbf{x}_{0})\) using Lemma 3.2.
3. We do require \(g(\mathbf{x})\) to be defined on a compact set by the design of LNN, but do not impose restriction on the range of \(\{\mathbf{x}_{t}\}\). In Appendix A.3, we further explain how to get a valid result for \(\forall\mathbf{x}_{0}\in\mathbb{R}^{d}\) from a nonparametric regression perspective.
To close this subsection, we propose the following bootstrap procedure to establish inference in practice.
1. By (3.8), we calculate \(\widehat{\varepsilon}_{t}=y_{t}-\widehat{g}(\mathbf{x}_{t})\) for \(t\in[T]\).
2. Collect i.i.d. draws of \(\{\eta_{t}\,|\,t\in[T]\}\) from \(N(0,1)\), and construct the bootstrap version of the dependent variables as follows: \(y_{t}^{*}=\widehat{g}(\mathbf{x}_{t})+\widehat{\varepsilon}_{t}\eta_{t}\). We re-estimate \(g(\cdot)\) using \(\{(y_{t}^{*},\mathbf{x}_{t})\,|\,t\in[T]\}\) as in (3.8), and denote the estimate as \(\widehat{g}^{*}(\cdot)\).
3. Repeat Step 2 \(R\) times, where \(R\) is sufficiently large.
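For concreteness, the following is a minimal sketch (ours) of Steps 1 to 3, reusing the hypothetical `fit_lnn` and `predict` helpers from the sketch after Remark 3.5 and returning a percentile-type interval in the spirit of Theorem 3.2.

```python
# A minimal wild bootstrap sketch (ours) for Model (1.1), following Steps 1-3.
import numpy as np

def wild_bootstrap_ci(x, y, x0, a, M, q, R=200, level=0.95, seed=1):
    rng = np.random.default_rng(seed)
    theta, centers = fit_lnn(x, y, a, M, q)
    g_hat = np.array([predict(xt, theta, centers, a, M, q) for xt in x])
    resid = y - g_hat                          # Step 1: residuals
    draws = np.empty(R)
    for r in range(R):                         # Steps 2-3: R bootstrap samples
        y_star = g_hat + resid * rng.standard_normal(len(y))
        th_s, c_s = fit_lnn(x, y_star, a, M, q)
        draws[r] = predict(x0, th_s, c_s, a, M, q)
    g0 = predict(x0, theta, centers, a, M, q)
    q_lo, q_hi = np.quantile(draws - g0, [(1 - level) / 2, (1 + level) / 2])
    return g0 - q_hi, g0 - q_lo                # interval for g(x0)
```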
For the bootstrap procedure, the following result holds immediately.
**Theorem 3.2**.: _Let the conditions of Theorem 3.1 hold. Suppose further that \(Th^{d+2p}\to 0\). For \(\forall\mathbf{x}_{0}\in[-a,a]^{d}\), we have_
\[\sup_{w}|\Pr^{*}(\sqrt{Th^{d}}\widehat{\sigma}_{\mathbf{x}_{0}}^{-1}[ \widehat{g}^{*}(\mathbf{x}_{0})-\widehat{g}(\mathbf{x}_{0})]\leq w)-\Pr(\sqrt {Th^{d}}\widehat{\sigma}_{\mathbf{x}_{0}}^{-1}[\widehat{g}(\mathbf{x}_{0})-g( \mathbf{x}_{0})]\leq w)|=o_{P}(1).\]
_where \(\Pr^{*}\) is the probability measure induced by the bootstrap procedure._
The condition \(Th^{d+2p}\to 0\) is required to eliminate the bias terms. For the LNN estimators, the bias terms arise from two sources: (1) LNN approximates each polynomial term, and (2) the polynomial terms approximate \(g(\cdot)\). As a result, we could not derive the detailed form of the biases, and provide their order only. That said, to ensure that the bootstrap procedure can provide a valid confidence interval in practice, we require \(Th^{d+2p}\to 0\).
### On Model (1.2)
In this subsection, we investigate Model (1.2), which has been widely adopted for a variety of decision making processes. See Athey (2019) for discussions on different examples.
Direct calculation shows that
\[\Pr(y=1\,|\,\mathbf{x}) =\Phi_{\varepsilon}(g(\mathbf{x})),\] \[\Pr(y=0\,|\,\mathbf{x}) =1-\Phi_{\varepsilon}(g(\mathbf{x})),\]
which yields \(E[y\,|\,\mathbf{x}]=\Phi_{\varepsilon}(g(\mathbf{x}))\). Thus, provided a set of time series observations given in (1.3), the likelihood function is specified as follows:
\[L(g)=\prod_{t=1}^{T}[1-\Phi_{\varepsilon}(g(\mathbf{x}_{t}))]^{1-y_{t}}\cdot \Phi_{\varepsilon}(g(\mathbf{x}_{t}))^{y_{t}}.\]
Accordingly, the log-likelihood function is defined below:
\[\log L(g) =\sum_{t=1}^{T}l_{t}(g(\mathbf{x}_{t}))\] \[=\sum_{t=1}^{T}\left\{(1-y_{t})\cdot\log[1-\Phi_{\varepsilon}(g( \mathbf{x}_{t}))]+y_{t}\cdot\log\Phi_{\varepsilon}(g(\mathbf{x}_{t}))\right\}, \tag{3.10}\]
where the definition of \(l_{t}(\cdot)\) is obvious.
Our interest is still inferring \(g(\cdot)\). We construct the neural network as in (3.5), and consider the following objective function:
\[\log L(\widetilde{s}(\cdot\,|\,\boldsymbol{\Theta}))=\sum_{t=1}^{T}\log l_{t}( \widetilde{s}(\mathbf{x}_{t}\,|\,\boldsymbol{\Theta}))\]
\[=\sum_{t=1}^{T}\left\{(1-y_{t})\cdot\log[1-\Phi_{\varepsilon}(\widetilde{s}( \mathbf{x}_{t}\,|\,\mathbf{\Theta}))]+y_{t}\cdot\log\Phi_{\varepsilon}( \widetilde{s}(\mathbf{x}_{t}\,|\,\mathbf{\Theta}))\right\},\]
which yields the following maximum likelihood estimator:
\[\widehat{\mathbf{\Theta}}=\operatorname*{argmax}_{\widetilde{s}\in\mathcal{S}} \log L(\widetilde{s}(\cdot\,|\,\mathbf{\Theta})). \tag{3.11}\]
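As an aside, the objective above is straightforward to code; the sketch below (ours) takes \(\Phi_{\varepsilon}\) to be the logistic CDF purely for concreteness, since the paper only requires the error CDF to be known, and `s_vals` collects the LNN outputs \(\widetilde{s}(\mathbf{x}_{t}\,|\,\mathbf{\Theta})\).

```python
# A sketch (ours) of the negative log-likelihood built from (3.10), with the
# logistic CDF standing in for Phi_eps.
import numpy as np

def neg_log_lik(s_vals, y):
    p = 1.0 / (1.0 + np.exp(-np.asarray(s_vals)))   # Phi_eps(s~(x_t | Theta))
    p = np.clip(p, 1e-12, 1 - 1e-12)                # guard the logs numerically
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
```

In practice such an objective can be handed, cube by cube, to a generic optimiser (e.g., `scipy.optimize.minimize`), which mirrors the role "fminunc" plays in Section 4.2.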
Note that our LNN method involves a general unknown functional form. As a result, the classical results of likelihood estimation, such as those in Newey and McFadden (1994), no longer hold. Therefore, before establishing an asymptotic distribution using \(\widehat{\mathbf{\Theta}}\), we state a lemma to show the overall feasibility of the LNN architecture when modelling binary outcomes.
**Lemma 3.4**.: _Under Assumptions 1 and 2,_
\[\frac{1}{T}\sum_{t=1}^{T}[\Phi_{\varepsilon}(g(\mathbf{x}_{t}))-\Phi_{ \varepsilon}(\widetilde{s}(\mathbf{x}_{t}\,|\,\widehat{\mathbf{\Theta}}))]^ {2}=o_{P}(1).\]
Lemma 3.4 provides the consistency, and also bridges the likelihood estimation and the nonlinear least squares approach to some extent. To be precise, Lemma 3.4 does not provide any specific consistency result for each \(\widehat{\boldsymbol{\theta}}_{\mathbf{i}}\). Instead, it evaluates the overall performance of LNN. More importantly, it says that, when modelling a binary outcome, the likelihood estimation using the LNN architecture is approximately equivalent to implementing a nonlinear least squares method, provided the distribution of \(\varepsilon_{t}\) is correctly specified. In addition, Lemma 3.4 further infers that
\[\frac{1}{T}\sum_{t=1}^{T}[g(\mathbf{x}_{t})-\widetilde{s}(\mathbf{x}_{t}\,|\, \widehat{\mathbf{\Theta}})]^{2}=o_{P}(1),\]
which is nice, as many remarks made in Section 3.2 can be directly applied. Last but not least, Lemma 3.4 facilitates numerical implementation in practice, which will be further discussed in Section 4.2.
Below, we establish the following asymptotic distribution for Model (1.2).
**Theorem 3.3**.: _Suppose that Assumptions 1 and 2 hold. For \(\forall\mathbf{x}_{0}\in[-a,a]^{d}\),_
\[\sqrt{Th^{d}}\widetilde{\sigma}_{\mathbf{x}_{0}}^{-1}(\widetilde{g}(\mathbf{x }_{0})-g(\mathbf{x}_{0})+O_{P}(h^{p}))\rightarrow_{D}N(0,1),\]
_where \(\widetilde{g}(\mathbf{x})\) is defined in the same form as (3.8), \(\widetilde{\sigma}_{\mathbf{x}_{0}}^{2}=\mathbf{m}(\mathbf{x}_{0}\,|\, \mathbf{x}_{0})^{\top}\mathbf{H}\widetilde{\Sigma}_{\mathbf{x}_{0}}^{-1} \mathbf{H}\mathbf{m}(\mathbf{x}_{0}\,|\,\mathbf{x}_{0})\), and_
\[\widetilde{\mathbf{\Sigma}}_{\mathbf{x}_{0}}=\frac{f_{\mathbf{x}}(\mathbf{x }_{0})\phi_{\varepsilon}(g(\mathbf{x}_{0}))^{2}}{[1-\Phi_{\varepsilon}(g( \mathbf{x}_{0}))]\Phi_{\varepsilon}(g(\mathbf{x}_{0}))}\int_{[-1,1]^{d}} \mathbf{m}(\mathbf{x}\,|\,\mathbf{0})\mathbf{m}(\mathbf{x}\,|\,\mathbf{0})^{ \top}\mathrm{d}\mathbf{x}.\]
Remark 3.6 can be applied to Theorem 3.3 with some obvious modifications.
In light of Kline and Santos (2012), we then propose a score-based wild bootstrap approach for inferential purposes as follows.
1. For each bootstrap replication, we collect i.i.d. draws of \(\{\eta_{t}\,|\,t\in[T]\}\) from \(N(0,1)\), and calculate \[\widehat{\boldsymbol{\theta}}_{\mathbf{i}}^{*}=\widehat{\boldsymbol{\theta}}_{ \mathbf{i}}+\left(\sum_{t=1}^{T}\frac{\partial^{2}\log l_{t}(\widetilde{s}( \mathbf{x}_{t}\,|\,\widehat{\boldsymbol{\Theta}}))}{\partial\boldsymbol{\theta }_{\mathbf{i}}\partial\boldsymbol{\theta}_{\mathbf{i}}^{\top}}\right)^{-1} \sum_{t=1}^{T}\frac{\partial\log l_{t}(\widetilde{s}(\mathbf{x}_{t}\,|\, \widehat{\boldsymbol{\Theta}}))}{\partial\boldsymbol{\theta}_{\mathbf{i}}} \eta_{t},\] (3.12) where \(l_{t}(\cdot)\) is defined in (3.10).
2. Repeat Step 1 \(R\) times, where \(R\) is sufficiently large.
It is worth pointing out that the above procedure is computationally efficient in the sense that the right hand side of (3.12) enjoys a closed-form expression, which, to save space, is given in (A.28) and (A.29). Practically, the bootstrap procedure may require much less time compared with the estimation of (3.11) itself. We will further comment on the computational issues associated with Model (1.2) in Section 4.2.
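To illustrate the closed form, here is a compact sketch (ours) of one bootstrap draw in (3.12) for a single cube, again with the logistic \(\Phi_{\varepsilon}\) so that the per-observation score and Hessian take the familiar forms \((y_{t}-p_{t})\mathbf{m}_{t}\) and \(-p_{t}(1-p_{t})\mathbf{m}_{t}\mathbf{m}_{t}^{\top}\); the paper's exact expressions are those in (A.28) and (A.29).

```python
# A compact sketch (ours) of one score-based wild bootstrap draw per (3.12),
# for one cube and the logistic error CDF.
import numpy as np

def score_bootstrap_draw(theta_hat, Mfeat, y, rng):
    """theta* per (3.12); Mfeat is the T x d_q matrix of local features."""
    p = 1.0 / (1.0 + np.exp(-Mfeat @ theta_hat))
    score = Mfeat * (y - p)[:, None]                     # rows: d log l_t / d theta
    hess = -(Mfeat * (p * (1 - p))[:, None]).T @ Mfeat   # summed Hessian
    eta = rng.standard_normal(len(y))                    # i.i.d. N(0,1) multipliers
    return theta_hat + np.linalg.solve(hess, score.T @ eta)
```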
The following theorem holds for the above bootstrap procedure.
**Theorem 3.4**.: _Let the conditions of Theorem 3.3 hold. Suppose further that \(Th^{d+2p}\to 0\). For \(\forall\mathbf{x}_{0}\in[-a,a]^{d}\), we have_
\[\sup_{w}|\Pr^{*}(\sqrt{Th^{d}}\widehat{\sigma}_{\mathbf{x}_{0}}^{-1}[ \widehat{g}^{*}(\mathbf{x}_{0})-\widehat{g}(\mathbf{x}_{0})]\leq w)-\Pr(\sqrt {Th^{d}}\widehat{\sigma}_{\mathbf{x}_{0}}^{-1}[\widehat{g}(\mathbf{x}_{0})-g( \mathbf{x}_{0})]\leq w)|=o_{P}(1),\]
_where \(\Pr^{*}\) is the probability measure induced by the bootstrap procedure, and \(\widehat{g}^{*}(\cdot)\) is yielded by the bootstrap draws in an obvious manner._
The comments and discussion below Theorem 3.2 apply to Theorem 3.4 with some minor modifications, so we do not repeat them here.
### Further Discussion
Up to this point, we would like to point out that the questions raised in Section 1 have all been answered, so Figure 1 can now be understood better. We summarize some key points which have been discussed previously here and there, and further discuss some remaining issues.
**Neurons** -- Having established the results in Sections 3.2 and 3.3, it is now clear that for LNN, the total number of neurons is
\[M^{d}\cdot d_{q}(q+1)=\left(\frac{2a}{h}\right)^{d}\cdot d_{q}(q+1),\]
of which only \(d_{q}(q+1)\) neurons are activated when loading test data. Among the activated neurons, the number of effective parameters is only \(d_{q}\), while the rest of the parameters are predetermined.
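As a worked instance of this count (our numbers, under hypothetical choices of \(a\) and \(h\)): with \(d=2\), \(q=3\), \(a=3\) and \(h=0.3\), one has \(d_{q}=\binom{5}{2}=10\), so the full LNN carries \(20^{2}\cdot 10\cdot 4=16{,}000\) neurons, of which only \(40\) are activated for any given test point, with only \(10\) effective parameters among them.

```python
# Worked neuron count (ours): total = (2a/h)^d * d_q * (q+1), active = d_q * (q+1).
from math import comb

d, q, a, h = 2, 3, 3.0, 0.3
d_q = comb(d + q, d)
print(d_q, (2 * a / h) ** d * d_q * (q + 1), d_q * (q + 1))   # 10 16000.0 40
```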
**Multiple Hidden Layers** -- Lemma 3.1 yields a recursive relationship, as one can repeatedly invoke Lemma 3.1 to replace \((x-x_{0})\) inside the activation. It will then yield a LNN architecture with multiple hidden layers. Although having multiple hidden layers is achievable, at this stage it is not clear to us why we should do so. As discussed in Remark 3.1, this step is completely independent of data, so we do not see any benefit of doing so unless \(g(\cdot)\) in both models has certain specific structure. Under some extra structure on \(g(\cdot)\), however, the necessity of developing a LNN architecture with multiple hidden layers deserves extra attention in future research.
**Dependence** -- As shown for both models, LNN automatically eliminates the correlation of observations from different time periods when establishing the asymptotic distribution. This finding is important, as one can easily use different types of wild bootstrap methods to obtain valid inference.
**ReLU Function** -- The current study focuses on Sigmoidal activation function. We conjecture for ReLU function a corresponding LNN approach can be constructed similarly, which, however, requires some careful consideration.
**Thresholding** -- Our investigation assumes that the number of regressors is correctly specified. When they are over-specified, sparsity naturally arises. In this case, it is clear that why a thresholding method (such as those in Wang and Lin, 2021; Fan and Gu, 2022) should be adopted, and what it is really penalizing. Utilizing identification conditions allows one to define the true set of parameters, so dimension reduction techniques such as LASSO (Tibshirani, 1996) and bridge estimation (Huang et al., 2008) techniques can be adopted straight away.
**Trending** -- In this paper, we assume that the regressors \(\{\mathbf{x}_{t}\,|\,t\in[T]\}\) are strictly stationary and mixing. In fact, they can have other more complex structures, such as linear processes, locally stationarity, heterogeneity, deterministic trends, etc. As a result, many climate models (such as those in Mudelsee, 2019) may be better captured. For such cases, one may need to revise the assumptions and proofs accordingly depending on detailed research questions.
## 4 Simulation
In this section we conduct simulations to examine the theoretical findings. We consider Models (1.1) and (1.2) in Sections 4.1 and 4.2 respectively.
Before proceeding further, we would like to point out that the following simulation design focuses on coverage rates for the entire test set. Also, no statistical software package is required, as the algorithm becomes obvious after presenting Section 3. We now provide some details of the numerical implementation. As shown in Section 3, many parameters are involved in the LNN architecture. It would be extremely difficult to systematically check every single one in one paper, so we have to be selective. That said, the following quantities are pre-fixed without loss of generality.
* Throughout, we use the Sigmoidal squasher, \(\sigma(w)=1/(1+\exp(-w))\), as the activation function. The derivatives of Sigmoidal squasher are provided in Appendix A.1 of the online supplementary file for the sake of space.
* \(\boldsymbol{\pi}_{j}\)'s are generated in exactly the same way as mentioned in Remark 3.2.
* Let \(R=200\) for the bootstrap procedure.
* Let \(g(\mathbf{x})=1+\sin(\mathbf{x}^{\top}\mathbf{1}_{d}/d)\).
Surely, we can explore different settings, but these are less important based on the development of Section 3. Also, we are constrained by computing power.
### Model (1.1)
Consider the following regression model:
\[y_{t}=g(\mathbf{x}_{t})+\varepsilon_{t}, \tag{4.1}\]
where \(\varepsilon_{t}=0.5\varepsilon_{t-1}+N(0,0.75)\), and the \(j^{th}\) element of \(\mathbf{x}_{t}\) is generated as \(x_{t,j}\sim U(-a,a)\). The bandwidth \(h\) is set as \(h=a/M\), where \(M\) is the integer closest to \(a/h_{1}\) with \(h_{1}=2.5\cdot T^{-1/(d+2p-0.5)}\). Here, \(-0.5\) is to ensure \(\sqrt{T}h^{p+d/2}\to 0\) holds. In fact, \(h\) is very close to \(h_{1}\), and the current setup is simply to guarantee \(M\) is a large positive integer. When designing the simulations, our impression is that the results are not sensitive to the choices of \(g(\mathbf{x})\) and the bandwidth. Therefore, in what follows, we only vary the values of
\[(q,d,u_{\sigma}). \tag{4.2}\]
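A sketch (ours) of this data generating process and bandwidth rule follows; we read \(N(0,0.75)\) as the innovation variance and take \(p=q+1\) (i.e., \(s=1\)), both of which are our assumptions rather than statements from the paper.

```python
# A sketch (ours) of the DGP (4.1) and the bandwidth rule h = a / M,
# with M the integer closest to a / h1 and h1 = 2.5 * T^{-1/(d+2p-0.5)}.
import numpy as np

def simulate(T, d, a=3.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-a, a, size=(T, d))
    eps = np.zeros(T)
    for t in range(1, T):                      # AR(1) errors
        eps[t] = 0.5 * eps[t - 1] + rng.normal(0.0, np.sqrt(0.75))
    y = 1 + np.sin(x.sum(axis=1) / d) + eps    # g(x) = 1 + sin(x'1_d / d)
    return x, y

def bandwidth(T, d, p, a=3.0):
    h1 = 2.5 * T ** (-1.0 / (d + 2 * p - 0.5))
    M = max(1, round(a / h1))                  # integer closest to a / h1
    return a / M, M

x, y = simulate(T=1600, d=2)
print(bandwidth(T=1600, d=2, p=4))             # (h, M) for q = 3
```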
To measure the finite sample performance, we select \(L^{d}\) test points from \([-a,a]^{d}\) as follows:
\[\mathbf{x}_{L\mathbf{j}}=\left(-a+\frac{2a}{L-1}(j_{1}-1),\ldots,-a+\frac{2a} {L-1}(j_{d}-1)\right)^{\top}, \tag{4.3}\]
where \(\mathbf{j}\in[L]^{d}\). With each dataset, we first estimate all \(g(\mathbf{x}_{L,\mathbf{j}})\) using the approach of Section 3.2, and then construct the corresponding 95% confidence interval using the bootstrap procedure documented in Theorem 3.2 for each point. After \(n\) replications, we report
\[\text{RMSE}_{g}=\left\{\frac{1}{nL^{d}}\sum_{i=1}^{n}\sum_{\mathbf{j}\in[L]^{d }}[\widehat{g}_{i}(\mathbf{x}_{L,\mathbf{j}})-g(\mathbf{x}_{L,\mathbf{j}})]^{ 2}\right\}^{1/2}\]
and
\[\text{CR}_{g}=\frac{1}{nL^{d}}\sum_{i=1}^{n}\sum_{\mathbf{j}\in[L]^{d}}I( \widehat{g}_{i}(\mathbf{x}_{L,\mathbf{j}})-g(\mathbf{x}_{L,\mathbf{j}})\in \text{CI}_{i,\mathbf{j}}),\]
where \(\widehat{g}_{i}(\cdot)\) stands for the estimate of \(g(\cdot)\) at the \(i^{th}\) replication, and \(\text{CI}_{i,\mathbf{j}}\) is the 95% confidence interval of \(\widehat{g}_{i}^{*}(\mathbf{x}_{L,\mathbf{j}})-\widehat{g}_{i}(\mathbf{x}_{L,\mathbf{j}})\) based on the bootstrap draws from the \(i^{th}\) replication. We set \(T\in\{800,1600,2400\}\) to examine the convergence of the NN approach, and let\({}^{2}\) \(u_{\sigma}\in\{-0.5,0.5\}\) and \(d\in\{2,3\}\). We further set \(a=3\), \(L=20\) and \(n=200\) without loss of generality.
Footnote 2: We provide some extra simulation results with larger \(d\) values in the online supplementary appendices of the paper, and comment on some issues related to computational overhead.
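For completeness, a sketch (ours) of the test grid (4.3) and the \(\text{RMSE}_{g}\) metric is given below; `hat_g` can be any fitted predictor, e.g. the per-cube OLS sketch from Section 3.2.

```python
# A sketch (ours) of the test grid (4.3) and the RMSE_g metric.
import numpy as np
from itertools import product

def test_grid(a, L, d):
    axis = np.linspace(-a, a, L)               # -a + 2a (j - 1) / (L - 1)
    return np.array(list(product(axis, repeat=d)))

def rmse(hat_g, g, grid):
    pred = np.array([hat_g(x) for x in grid])
    true = np.array([g(x) for x in grid])
    return np.sqrt(np.mean((pred - true) ** 2))
```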
We now present the results. First, we draw some plots for the case with \(d=2\). In both Figures 2 and 3, the first sub-plot is always the true \(g(\mathbf{x})\). Each of the remaining sub-plots has three layers. The middle one is the average of the estimates over \(n\) replications. The top and bottom layers are the averages of the bootstrap draws corresponding to the 97.5% and 2.5% quantiles respectively over \(n\) replications. A few facts emerge. Overall, the LNN approach can recover the unknown function reasonably well. When \(q=4\), both figures have smoother plots, which should be expected given that \(q\) is considered fixed. Also, both figures are very similar, so the results are not sensitive to the choice of \(u_{\sigma}\), as explained in Remark 3.1.
More detailed numbers are summarized in Table 1. As expected, as \(T\) goes up, \(\text{RMSE}_{g}\) converges to 0 and \(\text{CR}_{g}\) converges to 0.95. In particular, for \(\text{CR}_{g}\), when the sample size is not large enough, we tend to obtain a narrower CI rather than a wider one. Also, we note that when \(d\) increases, \(\text{RMSE}_{g}\) increases rather significantly, which reflects the curse of dimensionality. The curse is also visible in \(\text{CR}_{g}\), but the results remain acceptable in our view. Recall that Figures 2 and 3 suggest that the plots are much smoother when \(q=4\), yet Table 1 shows that larger \(q\) corresponds to larger \(\text{RMSE}_{g}\). However, changing \(q\) does not alter \(\text{CR}_{g}\) much, which indicates a certain robustness of the bootstrap procedure suggested in Section 3.2. Finally, we note that the time-consuming part of this simulation design comes from generating inference for the large number of points defined in (4.3) using the proposed bootstrap procedure.
Figure 2: Simulation Results of Example 1.1 (\(u_{\sigma}=-0.5,d=2\))
Figure 3: Simulation Results of Example 1.1 (\(u_{\sigma}=0.5,d=2\))
### Model (1.2)
Next, we consider the following data generating process:
\[y_{t}=\left\{\begin{array}{ll}1&g(\mathbf{x}_{t})-\varepsilon_{t}\geq 0\\ 0&\mbox{otherwise}\end{array}\right.,\]
in which \(\{\mathbf{x}_{t}\}\), \(\{\varepsilon_{t}\}\) and \(g(\cdot)\) are exactly the same as those in Section 4.1.
We first note a computational issue. We rely on the "fminunc" function of Matlab to find the solution of
\[\operatorname*{argmin}_{\widetilde{g}\in\mathcal{S}}[-\log L(\widetilde{g}(\cdot\,|\,\boldsymbol{\Theta}))],\]
which is the same as that in (3.11). To invoke the minimization process in any statistical software (including R, Matlab, etc.), one needs to provide initial values for the parameters under estimation. As a consequence, the numbers reported below are affected to some extent by the initial values. Although it is not our intention to tackle this complicated computational issue in this paper, Lemma 3.4 becomes useful in this case. Recall that Lemma 3.4 bridges the log-likelihood estimation and the nonlinear least squares estimation. Therefore, for each generated dataset \(\{(y_{t},\mathbf{x}_{t})\,|\,t\in[T]\}\), we first conduct an OLS estimation using the approach of Section 4.1 and use it as the initial value of \(\boldsymbol{\Theta}\). We then invoke the log-likelihood estimation to obtain our final estimate of
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
& & & \multicolumn{2}{c}{RMSE\({}_{g}\)} & \multicolumn{2}{c}{CR\({}_{g}\)} \\
& & \(T\setminus d\) & 2 & 3 & 2 & 3 \\
\(u_{\sigma}=-0.5\) & \(q=3\) & 800 & 0.282 & 0.637 & 0.894 & 0.871 \\
& & 1600 & 0.261 & 0.543 & 0.925 & 0.884 \\
& & 2400 & 0.211 & 0.356 & 0.929 & 0.916 \\
& \(q=4\) & 800 & 0.324 & 0.932 & 0.917 & 0.837 \\
& & 1600 & 0.222 & 0.529 & 0.927 & 0.896 \\
& & 2400 & 0.180 & 0.408 & 0.934 & 0.916 \\
\(u_{\sigma}=0.5\) & \(q=3\) & 800 & 0.291 & 0.608 & 0.912 & 0.873 \\
& & 1600 & 0.261 & 0.388 & 0.917 & 0.907 \\
& & 2400 & 0.236 & 0.293 & 0.918 & 0.910 \\
& \(q=4\) & 800 & 0.327 & 0.126 & 0.915 & 0.837 \\
& & 1600 & 0.226 & 0.579 & 0.922 & 0.891 \\
& & 2400 & 0.184 & 0.433 & 0.927 & 0.911 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Simulation Results of Example 1.1
\(\mathbf{\Theta}\) for each dataset. Even in this case, the computation is rather slow, and the computational time increases dramatically when the number of parameters goes up.
That said, due to the limitation of computing power, we consider \(d=2\) only in this section. The remaining settings are the same as those in Section 4.1. On top of the criteria \(\text{RMSE}_{g}\) and \(\text{CR}_{g}\) introduced above, we further introduce
\[\text{RMSE}_{g}^{*}=\left\{\frac{1}{\sharp\text{Q}_{L,\mathbf{j}}\cdot L^{d}} \sum_{\mathbf{j}\in[L]^{d}}\sum_{\widehat{g}_{i}(\mathbf{x}_{L,\mathbf{j}}) \in\text{Q}_{L,\mathbf{j}}}[\widehat{g}_{i}(\mathbf{x}_{L,\mathbf{j}})-g( \mathbf{x}_{L,\mathbf{j}})]^{2}\right\}^{1/2},\]
where \(\text{Q}_{L,\mathbf{j}}\) denotes the set of values between the first and third quartiles of \(\{\widehat{g}_{i}(\mathbf{x}_{L,\mathbf{j}})\,|\,i\in[n]\}\). By doing so, we remove some of the impact of the initial-value issues associated with "fminunc". A similar treatment can be found in the simulation study of Bauer and Kohler (2019), in which certain quantiles of the simulation results are reported in order to eliminate the impact of outliers.
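A minimal sketch of the interquartile-trimmed criterion, under the same hypothetical array layout as before; masking by the per-point quartiles plays the role of \(\text{Q}_{L,\mathbf{j}}\).

```python
import numpy as np

def trimmed_rmse(g_hat, g_true):
    """RMSE_g^*: at each test point, keep only the estimates lying between
    the first and third quartiles across replications, which mitigates the
    initial-value outliers of the optimizer."""
    q1 = np.quantile(g_hat, 0.25, axis=0)
    q3 = np.quantile(g_hat, 0.75, axis=0)
    keep = (g_hat >= q1) & (g_hat <= q3)
    err2 = (g_hat - g_true[None, :]) ** 2
    return np.sqrt(err2[keep].mean())
```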
Again, we draw a few plots analogous to Figures 2 and 3. In this case, to minimize the impact of the initial-value issues caused by the program, we take the median of the simulation results when plotting each layer. As shown in Figures 4 and 5 below, the results are clearly not as good as those for Model 1.1 due to the lack of closed-form solutions. Still, we are able to see the consistency as the sample size increases. Both figures show patterns very similar to those in Figures 2 and 3, so we omit the discussion here.
We further summarize the detailed numbers in Table 2. A few facts should be mentioned. First, the coverage rates are reasonably good. Second, the values of \(\text{RMSE}_{g}^{*}\) are comparable to those reported in Table 1, in which the numbers are generated from closed-form estimates. Third, the results do not change much with respect to the value of \(u_{\sigma}\), which again confirms our argument in Remark 3.1.
Figure 4: Simulation Results of Example 1.2 (\(u_{\sigma}=-0.5,d=2\))
Figure 5: Simulation Results of Example 1.2 (\(u_{\sigma}=0.5,d=2\))
## 5 Empirical Study
In this section, we consider two empirical examples to illustrate the two models that have been investigated in Section 3.
### On Climate Data
First, recall that, as mentioned in Section 2, we assume \(\{\mathbf{x}_{t}\,|\,t\in[T]\}\) are strictly stationary for simplicity. In practice, it is not surprising that data may exhibit a certain time trend or seasonality, as we shall show below. Such features do not alter our theoretical results fundamentally. For example, we can suppose that
\[\mathbf{x}_{t}=\mathbf{u}(\tau_{t})+\mathbf{s}_{t}+\mathbf{v}_{t}, \tag{5.1}\]
where \(\tau_{t}=\frac{t}{T}\), \(\mathbf{u}(\cdot)\) is a vector of deterministic trending functions, \(\mathbf{s}_{t}\) captures some seasonal effects, and \(\mathbf{v}_{t}\) captures the randomness. With some obvious modifications of Assumption 2, all the results established previously still hold.
The data to be investigated are collected from the Weather Underground API, and have also been extensively studied on Kaggle\({}^{3}\) by researchers from very diverse backgrounds. This dataset includes four time series (temperature, humidity, wind speed, atmospheric pressure)
\begin{table}
\begin{tabular}{c c r r r r} \hline \hline
& & \(T\) & RMSE\({}_{g}\) & RMSE\({}_{g}^{*}\) & CR\({}_{g}\) \\
\(u_{\sigma}=-0.5\) & \(q=3\) & 800 & 1.212 & 0.790 & 0.904 \\
& & 1600 & 1.042 & 0.537 & 0.912 \\
& & 2400 & 0.967 & 0.428 & 0.917 \\
& \(q=4\) & 800 & 1.378 & 0.813 & 0.915 \\
& & 1600 & 0.797 & 0.405 & 0.923 \\
& & 2400 & 0.525 & 0.282 & 0.934 \\
\(u_{\sigma}=0.5\) & \(q=3\) & 800 & 1.106 & 0.600 & 0.898 \\
& & 1600 & 1.023 & 0.577 & 0.905 \\
& & 2400 & 0.795 & 0.449 & 0.907 \\
& \(q=4\) & 800 & 1.110 & 0.616 & 0.906 \\
& & 1600 & 0.687 & 0.384 & 0.919 \\
& & 2400 & 0.480 & 0.278 & 0.930 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Simulation Results of Example 1.2
from \(1^{st}\) January 2013 to \(24^{th}\) April 2017 for the city of Delhi, India. As a routine exercise, we normalize each time series to have sample mean 0 and sample standard deviation 1. Typically, researchers use the period from \(1^{st}\) January 2017 to \(24^{th}\) April 2017 as the test data, and the rest as the training data. We do the same, so we have \(T=1461\) observations to train our LNN, and the size of the test data is \(T^{*}=114\).
We plot the normalized time series in Figure 6 below. In each sub-plot, the observations on the left-hand side of the vertical line are the training data, while those on the right-hand side are the test data. Temperature and pressure move in opposite directions, while humidity and wind speed look more or less like white noise. Although wind speed has occasional spikes, it is rather stationary overall. When training our LNN, we consider Model 1.1 and fit temperature at period \(t\) (i.e., \(y_{t}\)) on humidity, wind speed and atmospheric pressure at period \(t-1\) (i.e., \(\mathbf{x}_{t}\)).
In view of the data range, we specifically consider the following settings when running the regression:
\[a=2,\quad q\in\{3,4,5,6\},\quad u_{\sigma}\in\{-0.5,0.5\} \tag{5.2}\]
The remaining settings are the same as those in the simulation studies. First, we report the RMSE on the test set. Specifically, after training our LNN, we calculate
\[\text{RMSE}=\left\{\frac{1}{T^{*}}\sum_{t=1}^{T^{*}}[y_{t}-\widehat{g}( \mathbf{x}_{t})]^{2}\right\}^{1/2}, \tag{5.3}\]
where \((y_{t},\mathbf{x}_{t})\) are from the test set, and \(\widehat{g}(\cdot)\) is obtained using the training set.
Figure 6: Climate Data of Delhi
We summarize the RMSE in Table 3 below, in which the results are very close to one another. Regardless of the value of \(u_{\sigma}\), as \(q\) increases the RMSE always shows a U shape, which can be used as a criterion to pick the "optimal" \(q\) in practice. In what follows, we focus on the cases \((u_{\sigma},q)=(0.5,5)\) and \((u_{\sigma},q)=(-0.5,4)\), as both combinations correspond to the bottom points of their U-curves.
In both Figures 7 and 8, the black dotted line stands for the true data, the red dash-dotted line stands for the estimated values, and the blue solid lines present the 95% CI based on the bootstrap draws. Figure 7 shows that the LNN approach captures the trend of the entire test period reasonably well overall, and Figure 8 provides a zoomed-in version focusing on the test period only. Both subplots are almost identical, although the values of \((u_{\sigma},q)\) differ. Therefore, we conclude that the numerical results show a certain robustness of the LNN architecture.
### On Clean Energy Index
The second empirical example concerns the ECO index of WiderShares, LLC, which is a modified equal-weighted index comprised of companies that are publicly traded in the United States and engaged in the business of the advancement of cleaner energy and conservation. A very detailed data description of ECO can be found at [https://wildshares.com/about.php](https://wildshares.com/about.php), so we omit it here for simplicity. In connection with the recent focus on climate change, understanding the clean energy sector is more important than ever. Moreover, for the finance discipline, studying the energy sector is always of topical interest (e.g., Jin and Jorion, 1996; El-Sharif et al., 2005; and many follow-up studies).
In what follows, we focus on the following model:
\[y_{t}=\left\{\begin{array}{ll}1&g(\mathbf{x}_{t})-\varepsilon_{t}\geq 0\\ 0&\mbox{otherwise}\end{array}\right.,\]
where \(y_{t}=1\) if the ECO index return is positive at time \(t\), and \(y_{t}=0\) otherwise; and \(\mathbf{x}_{t}\) includes the oil price change rate, the gas price change rate, and the CBOE volatility index (VIX) change rate, all at time \(t-1\). The choices of regressors are based on Section III of Jin and Jorion (1996) and Section 7 of Christoffersen and Diebold (2006). One can certainly consider more choices and different forms of regressors, which may lead to another research paper with a comprehensive empirical study. Again, we normalize the observations of each regressor to ensure mean 0 and standard deviation 1. Finally, we present the data in Figure 9.
The entire time period we consider covers \(4^{th}\) January 2010 to \(31^{st}\) December 2019, in which we use the period from \(1^{st}\) July 2019 to \(31^{st}\) December 2019 as the test period. As a result, we have 2369 observations as training data (i.e., \(T=2369\)), and 146 observations as test data (i.e., \(T^{*}=146\)).
Figure 8: Plot of the Fitted Curve of the Test Period
After training our LNN, we calculate the mean absolute error (MAE) as follows:
\[\text{MAE}=\frac{1}{T^{*}}\sum_{t=1}^{T^{*}}|y_{t}-\widehat{y}_{t}|,\]
where \(\widehat{y}_{t}=1\) if \(\widehat{g}(\mathbf{x}_{t})\geq 0.5\), and \(\widehat{y}_{t}=0\) otherwise. Here, we arbitrarily use \(0.5\) as the threshold; one may explore other options in practice. We consider different choices of \(u_{\sigma}\) and \(q\), and summarize the results in Table 4 below. Regardless of the value of \(u_{\sigma}\), we obtain the minimum MAE with \(q=4\). In the best-case scenario, the success rate is around \(59\%\), which might be further improved by accounting for volatility as well as other possible explanatory variables through a comprehensive empirical investigation.
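The MAE computation is straightforward; a short sketch (array names ours):

```python
import numpy as np

def mae_binary(g_hat, y_true, threshold=0.5):
    """Classify y_hat = 1 when g_hat >= threshold, then report the mean
    absolute error over the test set."""
    y_hat = (np.asarray(g_hat) >= threshold).astype(int)
    return np.mean(np.abs(np.asarray(y_true) - y_hat))
```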
In what follows, we focus on the results with \(q=4\). In Figure 10, using the test data, we plot the estimated \(\widehat{g}(\cdot)\) and the corresponding probabilities (i.e., \(\Phi_{\varepsilon}(\widehat{g}(\cdot))\)), respectively. In each sub-figure, the dash-dotted line corresponds to the choice \((u_{\sigma}=-0.5,q=4)\), while the
\begin{table}
\begin{tabular}{l r r r} \hline \hline
\(u_{\sigma}\setminus q\) & 3 & 4 & 5 \\
-0.5 & 0.5068 & **0.4384** & 0.5137 \\
0.5 & 0.5205 & **0.4110** & 0.4589 \\ \hline \hline
\end{tabular}
\end{table}
Table 4: MAE for the Test Set
Figure 9: Clean Energy Index Data
dotted line corresponds to the choice \((u_{\sigma}=0.5,q=4)\). The results are almost identical as \(u_{\sigma}\) varies, which again supports our argument on the choice of \(u_{\sigma}\) and the robustness of the approach investigated above.
Finally, we point out that Christoffersen and Diebold (2006) write "_As volatility moves, so too does the probability of a positive return: the higher the volatility, the lower the probability of a positive return_". With good sign prediction, we can then remove the "_bad volatility_" and keep the "_good volatility_" (e.g., Patton and Sheppard, 2015). This is important: traditionally, a primary goal of portfolio analysis has been to minimize volatility (e.g., Engle et al., 2019), so many good volatilities are wiped out by classic approaches. Model (1.2), combined with the LNN approach, naturally marries the above studies and improves the prediction of positive returns, so that a better portfolio strategy can be implemented in practice. We do not proceed further along this line, and leave a systematic comparison of different portfolio construction approaches to another research paper.
## 6 Conclusion
NNs have gained considerable attention over the past few decades. Yet, questions such as those stated in Section 1 have not been answered well in the literature. In this paper, we bring identification restrictions into the LNN framework from a nonparametric regression perspective, and consider LNN-based estimation with dependent data for both quantitative and qualitative outcomes. We then establish the LNN-based estimation theory under a set of minor conditions. The asymptotic distributions are derived accordingly, and we show that the LNN automatically eliminates the dependence of the data when calculating the asymptotic variances.
Figure 10: Plot of \(\widehat{g}(\mathbf{x})\) and associated Probabilities of the Test Period
This finding is important, as one can easily use different types of wild bootstrap methods to obtain valid inference in practical implementations. In particular, for quantitative outcomes, the LNN approach yields closed-form expressions for the estimates of some key parameters of interest. Last but not least, we examine our theoretical findings through extensive numerical studies.
Several major comments have been made throughout Section 3, and some future research directions have been acknowledged along the way. Finally, we hope the current article sheds light on how to produce transparent algorithms, so that our research findings are relevant and useful for practical implementations and applications.
## 7 Acknowledgements
Gao, Peng and Yang acknowledge financial support from the Australian Research Council Discovery Grants Program under Grant Numbers: DP200102769, DP210100476 and DP230102250, respectively.
|
2306.09381 | Spatiotemporal-Augmented Graph Neural Networks for Human Mobility
Simulation | Human mobility patterns have shown significant applications in
policy-decision scenarios and economic behavior researches. The human mobility
simulation task aims to generate human mobility trajectories given a small set
of trajectory data, which have aroused much concern due to the scarcity and
sparsity of human mobility data. Existing methods mostly rely on the static
relationships of locations, while largely neglect the dynamic spatiotemporal
effects of locations. On the one hand, spatiotemporal correspondences of visit
distributions reveal the spatial proximity and the functionality similarity of
locations. On the other hand, the varying durations in different locations
hinder the iterative generation process of the mobility trajectory. Therefore,
we propose a novel framework to model the dynamic spatiotemporal effects of
locations, namely SpatioTemporal-Augmented gRaph neural networks (STAR). The
STAR framework designs various spatiotemporal graphs to capture the
spatiotemporal correspondences and builds a novel dwell branch to simulate the
varying durations in locations, which is finally optimized in an adversarial
manner. The comprehensive experiments over four real datasets for the human
mobility simulation have verified the superiority of STAR to state-of-the-art
methods. Our code is available at https://github.com/Star607/STAR-TKDE. | Yu Wang, Tongya Zheng, Shunyu Liu, Zunlei Feng, Kaixuan Chen, Yunzhi Hao, Mingli Song | 2023-06-15T11:47:45Z | http://arxiv.org/abs/2306.09381v3 | # Spatiotemporal-Augmented Graph Neural Networks for Human Mobility Simulation
###### Abstract
Human mobility patterns have shown significant applications in policy-decision scenarios and economic behavior researches. The human mobility simulation task aims to generate human mobility trajectories given a small set of trajectory data, which have aroused much concern due to the scarcity and sparsity of human mobility data. Existing methods mostly rely on the static relationships of locations, while largely neglect the dynamic spatiotemporal effects of locations. On the one hand, spatiotemporal correspondences of visit distributions reveal the spatial proximity and the functionality similarity of locations. On the other hand, the varying durations in different locations hinder the iterative generation process of the mobility trajectory. Therefore, we propose a novel framework to model the dynamic spatiotemporal effects of locations, namely **S**patio**T**emporal-**A**ugmented g**R**aph neural networks (STAR). The STAR framework designs various spatiotemporal graphs to capture the spatiotemporal correspondences and builds a novel dwell branch to simulate the varying durations in locations, which is finally optimized in an adversarial manner. The comprehensive experiments over four real datasets for the human mobility simulation have verified the superiority of STAR to state-of-the-art methods. Our code is available at https://github.com/Star607/STAR-TKDE.
of locations; MoveSim [13] takes a step further to introduce the structure prior of locations with an attention layer. However, these methods focus on the static relationships of locations, while the dynamic spatiotemporal effects of locations are largely under-explored.
As shown in Figure 1, the spatiotemporal effects of locations can be observed from two aspects: the spatiotemporal correspondences and the varying durations. On the one hand, Figure 1 (a) depicts that various locations (Home, Bar, Cafe, Pizza) get busy when people get off work. It indicates the spatiotemporal correspondences between these locations, which can reflect both the spatial proximity and the functionality similarity. On the other hand, Figure 1 (b) depicts the varying durations in different locations of an illustrative trajectory, which describes the spatiotemporal continuity of human mobility behaviors. Existing methods generate the simulation trajectories without considering the varying durations, where locations with short dwell time will undoubtedly get neglected in the optimization goals.
Therefore, in this paper, we propose a novel **S**patio**T**emporal-**A**ugmented g**R**raph neural networks (STAR) to model the spatiotemporal effects of locations in a generator-discriminator paradigm. Firstly, we construct various kinds of spatiotemporal graphs to capture the spatiotemporal correspondences of locations and obtain the location embeddings with the multi-channel embedding module. Secondly, we build a dual-branch decision generator module to capture the varying durations in different locations, where the exploration branch accounts for the diverse transitions of locations and the dwell branch accounts for the staying patterns of locations. After generating a complete human trajectory iteratively, the proposed STAR is optimized by the policy gradient strategy with rewards from the policy discriminator module, playing a min-max game with the discriminator [15]. We have conducted comprehensive experiments on four real datasets for the human mobility simulation task. Results on various real-world datasets validate the superiority of our proposed STAR framework to the _state-of-the-art_ methods. In summary, our contributions can be summarized as follows:
* We innovatively build the spatiotemporal graphs of locations to model the dynamic spatiotemporal effects among locations for the human mobility simulation task.
* A novel framework STAR is proposed to handle the spatiotemporal correspondences and the varying durations with the multi-channel embedding module and the dual-branch decision generator module, respectively.
* Extensive experiments on various real-world datasets demonstrate that the proposed STAR consistently outperforms the _state-of-the-art_ baselines in human mobility simulation with high fidelity. The ablation studies further reveal the working mechanisms of the spatiotemporal graphs and the dwell branch.
## 2 Related Works
In this section, we briefly review the most-related literatures along the following lines of fields: (1) human mobility simulation and (2) graph neural networks.
### _Human Mobility Simulation_
The human mobility simulation task aims to generate artificial mobility trajectories with realistic mobility patterns given a small set of human mobility data [13, 16, 17, 18, 19, 20]. The generated artificial trajectories must reproduce a set of spatial and temporal mobility patterns, such as the distribution of characteristic distances and the predictability of human whereabouts. Temporal patterns usually include the number and sequence of visited locations together with the time and duration of the visits, which involves balancing an individual's routine and sporadic out-of-routine mobility patterns. Spatial patterns include the preference for short distances [1, 21], the tendency to split into returnees and explorers [22], and the fact that individuals visit a constant number of locations over time [23].
In the early stage, Markov-based models dominated the task of human mobility simulation. For example, the first-order MC [24] defines the state as the visited location and assumes that the next location depends only on the current one, thus constructing a transition matrix to capture the first-order transition probabilities among locations. The HMM [25] is established with a discrete emission probability and optimized by the Baum-Welch algorithm. IO-HMM [11] further extends Markov-based models by introducing more annotation information, which also improves interpretability. However, Markov-based models are limited in capturing long-term dependencies and incorporating individual preferences.
To make up for the deficiencies of Markov-based models, a large body of mechanistic methods has emerged, which can reproduce basic temporal, spatial, and social patterns of human mobility [16, 17, 18, 19, 26]. For example, in the Exploration and Preferential Return (EPR) model [27], an agent can select a new location that has never been visited before via a random walk process with a power-law jump-size distribution, or return to a previously visited location based on its visiting frequency. Several studies have since enhanced the EPR model by incorporating increasingly elaborate spatial or social mechanisms [22, 23, 28, 29, 30]. EPR and its extensions primarily focus on the spatial patterns of human mobility and neglect the temporal mechanisms. TimeGeo [31] and DITRAS [26] improve the temporal mechanism by integrating a data-driven model into an EPR-like model to capture both routine and out-of-routine circadian preferences. Despite the interpretability of mechanistic methods, their realism is limited by the simplicity of the implemented mechanisms.
The limitations mentioned above can be addressed by deep learning generative paradigms such as recurrent neural networks (RNNs) and generative adversarial networks (GANs), which can learn the distribution of data by capturing complex and non-linear relationships in data and generate mobility trajectories from the same distribution. As a result, the deep learning approaches for human mobility simulation can generate more realistic data than traditional methods. RNN-based models prefer to maximize the prediction likelihood of the next location likelihood [32, 33], resulting in ignoring the long-term influence and the so-called _exposure bias_. SeqGAN [33], the pioneering work of
sequence generation based on Generative Adversarial Networks (GAN) [15], proposes to optimize the generator of discrete tokens by a policy gradient technique. Based on SeqGAN, MoveSim [13] further introduces the location structure as a prior for human mobility simulation. ActSTD [34] improves the dynamic modeling of individual trajectories by the neural ordinary equation. It is worth noting that the purpose of human mobility simulation differs from that of human mobility prediction [35, 36, 37]. The former aims to produce results that reflect the characteristics of real-world data, which should not be too similar to real data so as to protect user privacy. Conversely, the latter emphasizes testing the model's ability to recover the real data. Despite deep learning approaches proposed for human mobility prediction [38, 39], simulating daily mobility has been underexplored.
### _Graph Neural Networks_
Recently, the rapid development of deep learning [40, 41] has inspired research on Graph Neural Networks (GNNs) [42, 43, 44, 45], which attract much attention from various fields due to their widespread applications to non-grid data. Graph Convolutional Networks (GCNs) [43] advance the node classification task by a large margin compared to previous methods, constituting a pioneering attempt at GNNs; GraphSAGE [44] aims to perform graph learning on large-scale datasets by sampling recursively expanded ego-graphs; the Graph Attention Network (GAT) [45] weights the neighborhood importance via an adaptive attention mechanism [46]. Owing to this significant progress, GNNs have been applied in various fields such as social networks, molecular structures, traffic flows, and so on.
Among the numerous fields in which GNNs are applied, urban computing is the most relevant to the human mobility simulation task in this paper. Urban computing aims to understand urban patterns and dynamics from the different application domains where big data explodes, such as transportation, environment, security, etc. [47]. Due to the spatio-temporal characteristics of typical urban data, such as traffic network flow [48, 49, 50, 51], crowd flow [52], and environmental monitoring data, some previous works combine graph neural networks with various temporal learning networks to capture the dynamics in the spatial and temporal dimensions [53]. This hybrid neural network architecture is collectively referred to as the spatio-temporal graph neural network (STGNN).
The basic STGNN framework for predictive learning is composed of three modules: the Data Processing Module (DPM), which constructs the spatio-temporal graph from raw data; the Spatio-Temporal Graph Learning Module (STGLM), which extracts hidden spatio-temporal dependencies within complex social systems; and the Task-Aware Prediction Module (TPM), which maps the spatio-temporal hidden representation from the STGLM into the space of downstream prediction tasks. As the most crucial part of an STGNN, the STGLM combines spatial learning networks, such as spectral graph convolutional networks (Spectral GCNs), spatial graph convolutional networks (Spatial GCNs) or GATs, with temporal learning networks, such as RNNs, temporal convolutional networks (TCNs) or temporal self-attention networks (TSANs), through a certain spatio-temporal fusion neural architecture.
Therefore, almost all current research focuses on the design of the neural architectures in the STGLM, and there are many frontier methods for improving the learning of spatio-temporal dependencies. For example, THINK [54] and DMGCRN [55] perform hyperbolic graph neural networks on the Poincaré ball to directly capture multi-scale spatial dependencies. ASTGCN [56] employs a typical three-branch architecture for multi-granularity temporal learning, where the data processed by the multiple GCNs and attention networks of the three branches are finally fused by a learnable weight matrix. STSGCN [57] fuses spatio-temporal dependencies by constructing a spatio-temporal synchronous graph. Based on STGNN, STFGNN [58] introduces a topology-based graph and a similarity-based graph simultaneously to construct the spatio-temporal synchronous graph, making it more informative. STAT [59] proposes a spatio-temporal synchronous transformer framework to enhance the learning capability with attention mechanisms.
However, there are two main differences between our work and existing research on STGNNs. First, in previous studies, the topological structure of the spatio-temporal graph (e.g., a road network) is fixed, while ours is self-constructed and closely related to the order of the locations visited by individuals. Second, existing STGNN-based methods are mostly used for predictive learning tasks in urban computing, whereas our approach focuses on the simulation task, which emphasizes the effective capture of overall patterns rather than the accurate prediction of a single entity.
## 3 Problem Statement
A human mobility trajectory can be defined as a spatiotemporal sequence \(s=[\tau_{1},\tau_{2},\cdots,\tau_{L}]\). The \(l\)-th visit record \(\tau_{l}\) represents a tuple \((p_{l},t_{l})\), where \(p_{l}\) records the location ID and \(t_{l}\) records the visit timestamp. The human mobility simulation problem is thus defined as follows.
**Definition 1** **(Human Mobility Simulation)**: Given a real-world mobility trajectory dataset \(\mathcal{S}=\{s_{1},s_{2},\cdots,s_{m}\}\), our goal is learning to simulate human mobility behaviors in order to generate an artificial trajectory \(\hat{s}=[\hat{\tau}_{1},\hat{\tau}_{2},\ldots,\hat{\tau}_{L}]\) with fidelity and utility. \(\Box\)
Directly maximizing the likelihood of the sequence generation model would undoubtedly cause the _exposure bias_[32, 33] towards the training data, resulting in poor generalization abilities. Therefore, it is useful to advance the human mobility simulation under the framework of SeqGAN [15, 33], which coordinates the optimization of the generator (sequence generation model) and a discriminator in a min-max game, written as
\[\min_{\theta}\max_{\phi}\mathbb{E}_{\mathbf{x}\sim\pi_{d}}[\log D_{\phi}(\mathbf{x})]+\mathbb{E}_{\mathbf{x}\sim\pi}\left[\log\left(1-D_{\phi}(\mathbf{x})\right)\right], \tag{1}\]
where \(\mathbf{x}\) is the sampled mobility trajectory, \(\mathbb{E}_{\pi}\) represents the expected reward of the sequences under the policy \(\pi\), and \(\pi_{d}\) samples \(\mathbf{x}\) from the ground-truth trajectory data \(\mathcal{S}\).
## 4 Methods
The proposed **S**patio**T**emporal-**A**ugmented g**R**aph neural networks (STAR) aim to capture the spatiotemporal correspondences among locations and the varying durations in different locations to advance the human mobility simulation task in a spatiotemporal way. As illustrated in Figure 2, the STAR framework consists of three key components: (i) the multi-channel embedding module learns the spatiotemporal semantics of locations through dense representations based on the proposed multi-channel location graphs built from the partially observed human trajectories; (ii) the decision generator module tackles the varying-duration problem with an exploration branch and a dwell branch, alleviating the effects of highly repetitive patterns in mobility trajectories; (iii) the policy discriminator module provides the rewards for the generator at each step based on the policy gradient technique.
### _Multi-channel Embedding Module_
#### 4.1.1 Location Graphs
Let \(\mathcal{P}=\{p_{0},p_{1},\cdots,p_{N}\}\) be the set of locations. The spatiotemporal correspondences among locations \(p_{i},p_{j}\in\mathcal{P}\) can be well modeled by their correspondence scores \(\epsilon(p_{i},p_{j})\) with a correspondence function \(\epsilon(\cdot,\cdot)\), resulting in a location-location adjacency graph \(G=(\mathcal{P},\mathcal{E})\). The edge set \(\mathcal{E}\) of the graph contains pairwise correspondences of locations. For example, if the function \(\epsilon(\cdot,\cdot)\) measures the geographical proximity of locations, the constructed graph \(G\) represents a \(K\)-nearest neighbor spatial graph of all locations. Based on the spatial graph, our model will generate human mobility trajectories in consideration of geographical effects.
Therefore, to go beyond geographical effects, we first build a Spatial Distance Graph (SDG) and a Temporal Transition Graph (TTG) based on the observed human mobility trajectories, constructed from the spatial and temporal perspectives, respectively. Second, we propose a SpatioTemporal Graph (STG) that measures the Wasserstein distance [60] between location visit distributions to capture the spatiotemporal effects, combining the spatial and temporal dynamics of locations simultaneously.
**Spatial Distance Graph.** The proposed SDG is constructed based on the spatial proximity of a location pair \((p_{i},p_{j})\), which indicates the spatial cooperation effects in human trajectories. Letting \(w_{ij}\) be the spatial proximity score \(\epsilon(p_{i},p_{j})\), we build the Spatial Distance Graph (SDG) out of the Cartesian score set of \(\mathcal{P}\times\mathcal{P}\), written as
\[G_{\mathrm{SDG}}=\begin{cases}w_{ij},&w_{ij}\in\text{top}_{k}(\epsilon(p_{i}, \cdot)),\\ 0,&\text{Otherwise},\end{cases} \tag{2}\]
which retains the top-\(k\) neighbors among all locations. The spatial proximity function is usually implemented with the Euclidean distance for simplicity.
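A minimal NumPy sketch of the SDG construction follows; the proximity score \(1/(1+\text{dist})\) is our own illustrative choice, since the text only requires a score that decreases with Euclidean distance.

```python
import numpy as np

def spatial_distance_graph(coords, k=10):
    """Keep the top-k spatially nearest neighbours of each location.
    coords: (N, 2) array of location coordinates."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)            # exclude self loops
    adj = np.zeros_like(dist)
    nn = np.argsort(dist, axis=1)[:, :k]      # k nearest per location
    rows = np.repeat(np.arange(len(coords)), k)
    adj[rows, nn.ravel()] = 1.0 / (1.0 + dist[rows, nn.ravel()])
    return adj
```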
**Temporal Transition Graph.** The SDG can reveal the spatial proximity of locations, but ignores the mobility patterns of human trajectories. We further propose the Temporal Transition Graph (TTG) to encode the mobility patterns based on the partial observed human trajectories. Let \(s=[\cdots,\tau_{l},\tau_{l+1},\cdots]\) be a human trajectory and \(\tau_{l}=(p_{i},t_{l}),\tau_{l+1}=(p_{j},t_{l+1})\). We record the proximity
Fig. 2: The overall framework of STAR. Firstly, given an observed human trajectory, the multi-channel embedding module generates location embeddings based on the proposed multi-channel spatiotemporal graphs. Secondly, the decision generator module predicts the future trajectory by balancing the exploration branch which is prone to another location and the dwell branch which decides whether to stay at the previous location. Finally, STAR is optimized in an adversarial manner by the policy discriminator module to alleviate the _exposure bias_ of the maximum likelihood manner.
score between \(p_{i}\) and \(p_{j}\) as \(\epsilon(p_{i},p_{j})=1\) and obtain the summed scores by \(w_{ij}=\sum_{(p_{i},p_{j})\in\mathcal{S}}\epsilon(p_{i},p_{j})\). The graph is thus formulated as
\[G_{\text{TTG}}=\begin{cases}w_{ij},&w_{ij}>0,\\ 0,&\text{Otherwise}.\end{cases} \tag{3}\]
The obtained TTG describes the mobility patterns in detail from the temporal transition perspective.
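A corresponding sketch for the TTG simply counts consecutive transitions over the observed trajectories:

```python
import numpy as np

def temporal_transition_graph(trajectories, n_locations):
    """trajectories: iterable of location-ID sequences. Each consecutive
    pair (p_i -> p_j) contributes a unit score, summed as in Eq. (3)."""
    w = np.zeros((n_locations, n_locations))
    for seq in trajectories:
        for a, b in zip(seq[:-1], seq[1:]):
            w[a, b] += 1.0
    return w
```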
**SpatioTemporal Graph.** The SDG and TTG describe the location relationships from the spatial and temporal perspectives, respectively, and thus each ignores the other aspect to some extent. We further characterize location functionality with a SpatioTemporal Graph (STG) built on each location's visit distribution over time, which captures spatial proximity through spatial cooperation effects and temporal proximity through visit distribution similarity. For instance, cafes often open from morning to evening, while bars keep open till midnight. The visit distribution \(F_{p_{i}}\) of a location \(p_{i}\) is calculated as its normalized visit counts in the observed records \(\mathcal{S}_{p_{i}}\) by discretizing the timestamps into \(T\) time slots: \(F_{p_{i}}(t)=\frac{1}{|\mathcal{S}_{p_{i}}|}\sum_{t_{k}\in\mathcal{S}_{p_{i}}} \mathbbm{1}[t_{k}=t]\). The obtained visit distribution \(F_{p_{i}}\) can well describe the location from both the spatial and the temporal aspects. On the one hand, neighborhood locations often share similar visit distributions due to geographical effects, which have been widely adopted in location-based services. On the other hand, locations far apart can also share similar visit distributions if they provide peer functionalities like eating or drinking. Thus, the Wasserstein distance [60] is employed here to measure the distances between the visit distributions of locations to differentiate their individual functionalities, defined as follows:
\[d(p_{i},p_{j})=\inf_{\pi\in\prod[F_{p_{i}},F_{p_{j}}]}\int_{x} \int_{y}\pi(x,y)|x-y|\mathrm{d}x\mathrm{d}y, \tag{4}\] \[\text{s.t.}\ \int\pi(x,y)\mathrm{d}y=F_{p_{i}}(x),\int\pi(x,y) \mathrm{d}x=F_{p_{j}}(y),\]
where \(\pi(x,y)\) is a joint distribution of \(F_{p_{i}}\) and \(F_{p_{j}}\). The proximity score can then be defined simply as \(\epsilon(p_{i},p_{j})=1-d(p_{i},p_{j})\in[0,1]\). Letting \(w_{ij}=\epsilon(p_{i},p_{j})\), the graph can be formulated as
\[G_{\text{STG}}=\begin{cases}w_{ij},&w_{ij}\in\text{top}_{k}(\epsilon(p_{i}, \cdot)),\\ 0,&\text{Otherwise}.\end{cases} \tag{5}\]
The proposed STG can capture the spatiotemporal dynamics of locations in a distribution way.
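The STG can be sketched with SciPy's one-dimensional Wasserstein distance; we scale the time support to \([0,1]\) so that the distance, and hence the proximity score \(1-d\), stays in \([0,1]\) (an implementation assumption on our part). The quadratic pairwise loop is kept only for clarity.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def spatiotemporal_graph(visit_dist, k=10):
    """visit_dist: (N, T) rows summing to one over T time slots.
    Returns a top-k graph weighted by the proximity score 1 - d."""
    N, T = visit_dist.shape
    support = np.arange(T) / T                 # scaled time support
    score = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            d = wasserstein_distance(support, support,
                                     visit_dist[i], visit_dist[j])
            score[i, j] = score[j, i] = 1.0 - d
    adj = np.zeros_like(score)
    top = np.argsort(-score, axis=1)[:, :k]    # k most similar locations
    rows = np.repeat(np.arange(N), k)
    adj[rows, top.ravel()] = score[rows, top.ravel()]
    return adj
```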
Overall, the above multi-channel location graphs are denoted by \(\mathcal{G}=\{G_{\text{SDG}},G_{\text{TTG}},G_{\text{STG}}\}\). The connection edges of a location pair \((p_{i},p_{j})\in G\) are naturally weighted with their proximity scores, indicating the spatiotemporal correspondences given by the different measurements. However, these scores are not aligned with the human mobility simulation task, which can cause inconsistency between the spatiotemporal graphs and the simulation task. To bridge this gap, we alternatively propose a _Vanilla_ version based on the _Weighted_ version, written as
\[\hat{\epsilon}(p_{i},p_{j})=\begin{cases}1,&(p_{i},p_{j})\in G,\\ 0,&\text{Otherwise}.\end{cases} \tag{6}\]
The _Vanilla_ graph treats all neighbors equally, which benefits model optimization by discarding the computed weights of the multi-channel graphs.
#### 4.1.2 Location Embeddings
After establishing the above graphs, location embeddings can be obtained by the following procedure.
First, we implement the features of the location set \(\mathcal{P}\) at the input layer with a learnable embedding matrix optimized end-to-end, following the common training paradigm of human mobility simulation. Second, to adaptively aggregate useful information in the presence of graph heterogeneity, we adopt the graph attention layer [45] to perform multi-channel attention over all spatiotemporal graphs, namely \(\mathcal{G}=\{G_{\text{SDG}},G_{\text{TTG}},G_{\text{STG}}\}\). Given location embeddings \(\mathbf{H}_{G}^{l-1}\) from the previous \((l-1)\)-th layer (the initialized embeddings at the input layer), our spatiotemporal attention layer computes the node embeddings in a multi-head manner by
\[\mathbf{h}_{i}^{l}=\Big{\|}_{k=1}^{K}\text{ReLU}\left(\sum_{j\in\mathcal{N}_{ i}}\alpha_{ij}^{k}W^{k}\mathbf{h}_{j}^{l-1}\right), \tag{7}\]
where \(\alpha_{ij}^{k}\) is the \(k\)-th attention scores of the edge \((p_{i},p_{j})\), \(\mathcal{N}_{i}\) provides the neighbors of location \(p_{i}\), \(K\) is the number of attention heads, \(\|\) is the concatenation operation, \(W^{k}\) is the transformation matrix of \(k\)-th attention head, and ReLU is a non-linear activation function.
For simplicity, the \(l\)-th layer location embeddings are summed over all graphs: \(\mathbf{H}^{l}=\sum_{G\in\mathcal{G}}\mathbf{H}_{G}^{l}\). The final location embeddings are denoted by \(\mathbf{H}_{\mathcal{P}}\).
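A compact PyTorch Geometric sketch of this module, assuming one GAT layer per graph and summation across channels as in the text; the class and variable names are ours, and applying the ReLU to the concatenated heads is a simplification of Eq. (7).

```python
import torch
from torch_geometric.nn import GATConv

class MultiChannelEmbedding(torch.nn.Module):
    """One attention layer per graph in {SDG, TTG, STG}; the per-graph
    outputs are summed, i.e. H^l = sum_G H_G^l."""
    def __init__(self, n_locations, dim=32, heads=4, n_graphs=3):
        super().__init__()
        self.emb = torch.nn.Embedding(n_locations, dim)  # learnable inputs
        self.convs = torch.nn.ModuleList(
            [GATConv(dim, dim // heads, heads=heads) for _ in range(n_graphs)])

    def forward(self, edge_indices):
        # edge_indices: list of (2, E) LongTensors, one per graph
        x = self.emb.weight
        return sum(torch.relu(conv(x, ei))
                   for conv, ei in zip(self.convs, edge_indices))
```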
### _Decision Generator Module_
The multi-channel embedding module provides location embeddings via our dedicated spatiotemporal graphs, enhancing the representation quality used in the following decision generator module. The decision generator module predicts the next location based on the observed partial mobility trajectory, which is itself generated by the decision generator sequentially. Besides the common exploration branch of GANs, we further propose the dwell branch to cope with the varying durations of locations shown in Figure 1 (b), which also reduces the optimization difficulty to some extent. Since the policy discriminator does not give classification results until the trajectory sequence is finished, we adopt a Monte-Carlo strategy to iteratively predict the next location until the required length is reached.
#### 4.2.1 Exploration Branch
Suppose that a partial mobility trajectory \(s_{1:l}=\{(p_{1},t_{1}),(p_{2},t_{2}),\cdots,(p_{l},t_{l})\}\) is observed; the exploration branch aims to predict the next location \(\hat{p}_{l+1}\) that maximizes the decision reward given by the policy discriminator module, where \(t_{l+1}\) is assumed to be uniformly spaced from \(t_{l}\) for simplicity. Given the location embeddings \(\mathbf{H}_{\mathcal{P}}\) from the multi-channel embedding module, we first fetch the corresponding input embeddings of the trajectory, denoted by \(\mathbf{h}_{s}=\{\mathbf{h}_{p_{0}},\mathbf{h}_{p_{1}},\cdots,\mathbf{h}_{p_{l}}\}\). To facilitate sequence modeling and training efficiency, we adopt the Gated Recurrent Unit [61] as our sequence model, written as
\[\mathbf{h}_{l},z_{l}=\mathrm{GRU}(\mathbf{h}_{p_{l}},z_{l-1}), \tag{8}\]
where the hidden state \(z_{l-1}\) is initialized as zero vectors in the first step. The obtained sequence representation \(\mathbf{h}_{l}\) is fed into a linear layer to predict the probability of the next locations by
\[\hat{p}_{l+1}=\mathrm{softmax}(\mathbf{h}_{l}\times\mathbf{W}_{p}+\mathbf{b}_{p}). \tag{9}\]
Then the outputs of the exploration branch \(\hat{p}_{l+1}\) together with the outputs of the dwell branch determine the next location iteratively until the sequence is finished.
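A minimal PyTorch sketch of Eqs. (8)-(9); shapes and names are ours.

```python
import torch

class ExplorationBranch(torch.nn.Module):
    """GRU over the embeddings of the observed prefix, then a linear
    head with a softmax over all locations."""
    def __init__(self, n_locations, dim=32):
        super().__init__()
        self.gru = torch.nn.GRU(dim, dim, batch_first=True)
        self.head = torch.nn.Linear(dim, n_locations)

    def forward(self, h_seq, z=None):
        # h_seq: (B, L, dim) embeddings of the observed trajectory
        out, z = self.gru(h_seq, z)
        probs = torch.softmax(self.head(out[:, -1]), dim=-1)
        return probs, z
```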
#### 4.2.2 Dwell Branch
Motivated by the varying durations of locations in mobility trajectories, we specially design a dwell branch to predict the probability of staying at the previous location, which informs the model whether to _dwell_ or _explore_. Intuitively, the duration at a specific location is determined by the spatiotemporal context and the cumulative duration of the trajectory, where the former can be well captured by the exploration branch while the latter lacks specific modeling. For a highly repetitive trajectory dataset, the optimization goal will be overwhelmed by the repetitive locations, leading to the ignorance of diverse behaviors. Therefore, it is necessary to build the dwell branch to predict whether to stay at the previous location, alleviating the learning difficulties of the exploration branch. Taking the sequence representation \(\mathbf{h}_{l}\) for the dwell branch classification is straightforward. However, it ignores the saturability of human mobility behaviors [1], i.e., that the durations at locations are usually upper-bounded to some extent. We thus combine the sigmoid function with an exponentially decaying coefficient to reduce the effect of a long-duration location, written as
\[\hat{y}_{l+1}^{d}=\mathrm{sigmoid}(\mathbf{h}_{l}\mathbf{W}_{d}+\mathbf{b}_{ d})\cdot\exp(-\beta\cdot\mathrm{C}(s_{1:l},p_{l})), \tag{10}\]
where the hyper-parameter \(\beta\) adjusts the decaying rate and \(\mathrm{C}(s_{1:l},p_{l})=\sum_{p_{k}\in s_{1:l}}\mathbbm{1}\left[p_{k}=p_{l}\right]\) counts the frequency of \(p_{l}\) in the observed trajectory. \(\beta\) is set to \(1\) by default.
Finally, the decision generator module balances the location predictions \(\hat{p}_{l+1}\) of the exploration branch and the dwell predictions \(\hat{y}_{l+1}^{d}\) by sampling from the output distributions, written as
\[p_{l+1}=\begin{cases}p_{l},&l>1\text{ and }\Psi(\hat{y}_{l+1}^{d})=1,\\ \Psi(\hat{p}_{l+1}),&\text{Otherwise},\end{cases} \tag{11}\]
where \(\Psi(.)\) samples a location by polynomial sampling from a given probability distribution.
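Eqs. (10)-(11) can be sketched as follows, with the dwell decision drawn from a Bernoulli distribution with probability \(\hat{y}_{l+1}^{d}\) and \(\Psi\) realized by multinomial sampling; all names are illustrative.

```python
import math
import torch

def dwell_probability(h_l, W_d, b_d, count_prev, beta=1.0):
    """Eq. (10): sigmoid head scaled by exp(-beta * C(s_{1:l}, p_l)),
    where count_prev is the frequency of the previous location in the
    observed prefix (a plain integer)."""
    return torch.sigmoid(h_l @ W_d + b_d) * math.exp(-beta * count_prev)

def next_location(p_prev, p_dist, y_dwell, step):
    """Eq. (11): for step > 1, stay at p_prev with probability y_dwell;
    otherwise sample from the exploration distribution p_dist."""
    if step > 1 and torch.bernoulli(y_dwell).item() == 1:
        return p_prev
    return torch.multinomial(p_dist, 1).item()
```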
### _Policy Discriminator Module_
As discussed in [32, 33], maximizing the likelihood of sequence models would suffer from the _exposure bias_ of the training data, thus generalizing poorly to the testing data. Recently, variants of GANs [13, 33] have shown their successful applications in sequence generation tasks by back-propagating the policy gradients into the generator. Therefore, we follow the generator-discriminator paradigm to perform human mobility simulation in an adversarial manner.
By generating and sampling iteratively in the decision generator module, a set of generated trajectories \(\mathcal{T}_{G}\) are fed into the policy discriminator module \(D_{\phi}\). Since a powerful discriminator might hinder the optimization of the generator \(G_{\theta}\), we build a simple yet efficient discriminator \(D_{\phi}\) that contains an embedding matrix of locations and classifies the trajectory sequence as real data or fake data based on a GRU layer, written as
\[\mathcal{L}_{D}=\mathbb{E}_{\mathbf{x}\in\mathcal{T}_{R}}\log D_{\phi}(\mathbf{x})+\mathbb{E}_{\mathbf{x}\in\mathcal{T}_{G}}\log\left(1-D_{\phi}(\mathbf{x})\right), \tag{12}\]
where \(\mathcal{T}_{R}\) and \(\mathcal{T}_{G}\) are the real and generated trajectories, respectively. The step-by-step predictions of the decision generator module \(G_{\theta}\) receive the policy gradients from the discriminator \(D_{\phi}\) by unfolding the classification reward recursively following the REINFORCE algorithm [62], written as
\[\nabla_{\theta}=\nabla_{\theta}\mathbb{E}_{P_{\theta}(\mathbf{x})}[R(\mathbf{ x})]=\mathbb{E}_{P_{\theta}(\mathbf{x})}\left[R(\mathbf{x})\nabla_{\theta}\log P _{\theta}(\mathbf{x})\right], \tag{13}\]
where \(P_{\theta}\) denotes the generated distribution by the decision generator \(G_{\theta}\), \(\mathbf{x}\) is the generated mobility trajectory, and the reward \(R(\mathbf{x})\) is the classification loss from the discriminator \(D_{\phi}\). The generator \(G_{\theta}\) can be optimized in this way.
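A bare-bones REINFORCE step for Eq. (13); the reward shaping (e.g., Monte-Carlo rollouts that assign step-wise rewards from the discriminator) is omitted here and the tensor names are ours.

```python
import torch

def reinforce_update(log_probs, rewards, optimizer):
    """log_probs: (L,) log-probabilities of the sampled actions along one
    generated trajectory; rewards: (L,) discriminator-derived returns."""
    loss = -(rewards.detach() * log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```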
## 5 Experiment
In this section, we conduct experiments for the human mobility simulation task on four real-world datasets to evaluate the performance of our proposed STAR framework. We first briefly introduce the four datasets, baseline methods, and evaluation metrics. Then, we compare STAR with the _state-of-the-art_ baselines for the human mobility simulation task and present the experiment results. Furthermore, we analyze the impact of different modules and hyperparameters on model performance. Finally, we present the geographical visualization results of our proposed STAR method and the baseline methods.
We aim to answer the following key research questions:
* **RQ1**: How does STAR perform compared with other _state-of-the-art_ methods for the human mobility simulation task?
* **RQ2**: How do different modules (different kinds of spatiotemporal graphs and the dwell branch) affect STAR?
* **RQ3**: How do hyper-parameter settings (the depth of layer and the number of attention heads) influence the performance of STAR?
* **RQ4**: How do we qualitatively assess the quality of human trajectories simulated by different methods with regard to the real trajectories?
### _Datasets_
To ensure reproducibility and facilitate fair comparisons with previous works, we evaluate the human mobility of four cities (i.e., New York, Tokyo, Moscow and Singapore)
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
Dataset & Timespan & \#Users & \#Locations & \#Visits \\ \hline
NYC & Apr. 2012 - Feb. 2013 & 1,083 & 38,333 & 227,428 \\
TKY & Apr. 2012 - Feb. 2013 & 2,293 & 61,858 & 573,703 \\
Moscow & Apr. 2012 - Sep. 2013 & 10,464 & 88,036 & 806,196 \\
Singapore & Apr. 2012 - Sep. 2013 & 8,784 & 45,525 & 397,873 \\ \hline \hline
\end{tabular}
\end{table} TABLE I: The statistics of four datasets.
extracted from the publicly available Foursquare dataset, in line with previous studies [14, 64]. The timespan and the numbers of users, locations and visit records in each dataset are shown in Table I. The wide timespan and the large-scale visit records allow a thorough comparison of the STAR framework against the baseline methods. In our experiments, we use the original raw datasets, which only contain the GPS coordinates of each location and the user check-in records, and pre-process them following the protocol of the human mobility simulation task.
Specifically, we split each dataset into three parts: a training set for training the generative model, a validation set for finding the best model parameters, and a testing set for the final evaluation of the various metrics. The partition of all four datasets is set as 7:1:2. Besides, we set the basic time slot to an hour of the day for convenience and universality of modeling. Finally, to ensure the effectiveness and accuracy of modeling, we only retain the trajectories with more than eight visit records per day.
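The per-day filtering and hourly discretization can be sketched with pandas; the column names ('user', 'location', 'timestamp') are our assumptions about the raw check-in format.

```python
import pandas as pd

def preprocess(df):
    """Assign hour-of-day time slots and keep only user-days with more
    than eight visit records."""
    df = df.copy()
    ts = pd.to_datetime(df["timestamp"])
    df["slot"] = ts.dt.hour
    df["day"] = ts.dt.date
    counts = df.groupby(["user", "day"])["location"].transform("size")
    return df[counts > 8]
```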
### _Baseline Methods and Evaluation Metrics_
We compare the performance of the STAR framework with the following _state-of-the-art_ baseline methods.
* **Markov**[10] is a well-known probability method describing the state transitions, which treats the locations as states and calculates the transition probability of locations.
* **IO-HMM**[11] fits the probability model with annotated user activities as its latent states and then generates human trajectories based on the hidden Markov model.
* **LSTM**[63] improves the classical RNN by the dedicated memory cell and the forget gate to enhance the long-term dependency modeling.
* **DeepMove**[12] learns the periodic patterns of mobility by the recurrent network and the neural attention layer.
* **GAN**[15] uses two LSTMs as the generator and the discriminator in our settings, respectively.
* **SeqGAN**[33] solves the discrete sequence generation problem based on the policy gradient technique with GAN, which is suitable for trajectory generation.
* **MoveSim**[13] simulates mobility trajectory by augmenting the prior knowledge of the generator and regularizing the periodicity of sequences.
* **CGE**[14] generates location sequences based on a newly constructed static graph from historical visit records.
Following the common practice in previous works [13, 65], we adopt six metrics to evaluate the quality of generated data by comparing the distributions of important mobility patterns between the simulated trajectories and the real trajectories from different perspectives.
* **Distance**: The moving distance among locations in individuals' trajectories is a metric from the spatial perspective.
* **Radius**: Radius of gyration is the root mean square distance of all locations from the central one, which represents the spatial range of individual daily movement.
* **Duration**: Dwell duration among locations in mobility trajectories is a metric from the temporal perspective.
* **DailyLoc**: Daily visited locations are calculated as the number of visited locations per day for each user.
* **G-rank**: The number of visits per location is calculated as the visiting frequency of the top-100 locations.
* **I-rank**: It is an individual version of G-rank.
Specifically, we use Jensen-Shannon divergence (JSD) [66] to measure the discrepancy of the distributions between
\begin{table}
\begin{tabular}{l|c c c c c|c c c c c} \hline \hline & \multicolumn{4}{c}{**NYC**} & \multicolumn{4}{c}{**TKY**} \\ \hline
**Metrics(JSD)** & Distance & Radius & Duration & DailyLoc & G-rank & I-rank & Distance & Radius & Duration & DailyLoc & G-rank & I-rank \\ \hline Markov [10] & 0.1256 & 0.3742 & 0.0037 & 0.2667 & 0.1119 & 0.0686 & 0.1397 & 0.4246 & 0.0077 & 0.2615 & 0.0413 & 0.0659 \\ IQHM [11] & 0.4129 & 0.6022 & 0.0046 & 0.1071 & 0.6931 & 0.0750 & 0.3368 & 0.5438 & 0.0059 & 0.1732 & 0.6931 & 0.0577 \\ LSTM [63] & 0.1239 & 0.3862 & 0.0014 & 0.0776 & 0.0772 & 0.0728 & 0.1334 & 0.4479 & 0.0019 & 0.0415 & 0.0350 & 0.0615 \\ DeepMove [12] & 0.4068 & 0.5851 & 0.0064 & 0.3139 & 0.4030 & 0.0593 & 0.3349 & 0.5374 & 0.0010 & 0.0478 & 0.2926 & 0.0534 \\ GAN [15] & 0.4405 & 0.6298 & 0.0026 & 0.0839 & 0.2418 & **0.0364** & 0.3644 & 0.5653 & 0.0046 & 0.1881 & 0.1959 & 0.0557 \\ SeqGAN [33] & 0.1774 & 0.4374 & 0.0043 & 0.0803 & 0.1159 & 0.0528 & 0.1260 & 0.4193 & 0.0011 & 0.0452 & 0.0267 & 0.0591 \\ MoveSim [13] & 0.4571 & 0.5977 & **0.0013** & 0.0885 & 0.2474 & 0.0624 & 0.3459 & 0.5511 & 0.0306 & 0.1655 & 0.0777 & 0.0416 \\ CCE [14] & 0.4154 & 0.6338 & 0.1101 & 0.3925 & 0.3172 & 0.0476 & 0.4151 & 0.5816 & 0.0041 & 0.1792 & 0.1820 & **0.0457** \\ \hline STAR (Ours) & **0.1105** & **0.3636** & 0.0011 & **0.0580** & **0.0359** & 0.0624 & **0.1196** & **0.4178** & **0.0005** & **0.0402** & **0.0259** & 0.0693 \\ \hline \multicolumn{10}{c}{**Moscow**} & \multicolumn{4}{c}{**Singapore**} \\ \hline
**Metrics(JSD)** & Distance & Radius & Duration & DailyLoc & G-rank & I-rank & Distance & Radius & Duration & DailyLoc & G-rank & I-rank \\ \hline Markov [10] & 0.0083 & 0.0558 & 0.1036 & 0.1729 & 0.1229 & 0.0694 & 0.0078 & 0.0409 & 0.0060 & 0.1427 & 0.2505 & 0.0797 \\ IO-HMM [11] & 0.1050 & 0.0610 & 0.0137 & 0.2832 & 0.1639 & 0.0733 & 0.0447 & 0.0456 & 0.0053 & 0.1729 & 0.1839 & 0.0693 \\ LSTM [63] & 0.0127 & 0.0208 & 0.0192 & 0.4412 & 0.3621 & 0.0626 & 0.0066 & 0.0097 & 0.0777 & 0.6457 & 0.2965 & 0.0596 \\ DeepMove [12] & 0.1098 & 0.0528 & 0.0046 & 0.1592 & 0.3814 & 0.0532 & 0.0365 & 0.0279 & 0.0018 & 0.0405 & 0.2931 & 0.0459 \\ GAN [15] & 0.1073 & 0.0735 & 0.0098 & 0.2511 & 0.2158 & 0.0755 & 0.0494 & 0.0438 & 0.0042 & 0.1608 & 0.3325 & 0.0577 \\ SeqGAN [33] & 0.0123 & 0.0207 & 0.0036 & **0.0434** & 0.0307 & 0.0593 & 0.0106 & 0.0106 & 0.0028 & 0.0696 & 0.0294 & **0.0451** \\ MoveSim [13] & 0.0235 & 0.0179 & 0.0457 & 0.4915 & 0.1252 & 0.0523 & 0.0996 & 0.0099 & 0.0016 & 0.5643 & 0.2459 & 0.0487 \\ CGE [14] & 0.0328 & 0.0693 & 0.0062 & 0.2322 & 0.2170 & 0.0659 & 0.1798 & 0.1008 & 0.0785 & 0.5164 & 0.3254 & 0.0581 \\ \hline STAR (Ours) & **0.0049** & **0.0152** & **0.0023** & 0.0497 & **0.0294** & **0.0485** & **0.0062** & **0.0095** & **0.0014** & **0.0371** & **0.0287** & 0.0485 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Performance of the proposed STAR framework and baselines in terms of JSD for human mobility trajectory simulation. A lower JSD value indicates a better performance. **Bold** and underline denote the best and the second-best results, respectively.
the generated data and real-world data. The JSD metric is defined as follows:
\[\mathrm{JSD}(p\|q)=H((p+q)/2)-\frac{1}{2}(H(p)+H(q)) \tag{14}\]
where \(p\) and \(q\) are two distributions for comparison, and \(H\) is the Shannon entropy. A lower JSD denotes a closer match to the statistical characteristics and thus indicates a better generation result.
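As a concrete illustration, a minimal NumPy implementation of Eq. (14) for two discrete distributions might look as follows (names are illustrative):

```python
import numpy as np

def shannon_entropy(p, eps=1e-12):
    """Shannon entropy H(p) of a discrete distribution, in bits."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                       # normalise to a probability vector
    return -(p * np.log2(p + eps)).sum()  # eps guards against log(0)

def jsd(p, q):
    """Jensen-Shannon divergence: H((p+q)/2) - (H(p) + H(q)) / 2."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    m = 0.5 * (p + q)
    return shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))
```

Here \(p\) and \(q\) would be histograms of a given mobility statistic (e.g., moving distance) computed on the simulated and the real trajectories over a common binning.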
### _Experimental Settings_
For a fair comparison, all methods use the following settings: the hidden dimension of the embeddings is set to 32, the number of epochs to 50, and the batch size to 32. The other hyper-parameters of the baselines follow the best settings reported in their papers. The proposed STAR framework is implemented in PyTorch [67]. By default, the learning rate is set to 0.01 and the dropout ratio to 0.6. The number of attention layers is searched over {1, 2}, and the number of attention heads over {1, 2, 4}.
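The hyper-parameter search described above amounts to a simple grid loop; the sketch below is illustrative, with `train_and_evaluate` a hypothetical helper (not part of the released code) returning a validation JSD:

```python
import itertools

def train_and_evaluate(cfg):
    """Hypothetical helper: train STAR with `cfg` and return a validation JSD."""
    raise NotImplementedError

search_space = {"num_layers": [1, 2], "num_heads": [1, 2, 4]}

best_cfg, best_jsd = None, float("inf")
for num_layers, num_heads in itertools.product(*search_space.values()):
    cfg = dict(hidden_dim=32, epochs=50, batch_size=32, lr=0.01,
               dropout=0.6, num_layers=num_layers, num_heads=num_heads)
    jsd_val = train_and_evaluate(cfg)
    if jsd_val < best_jsd:
        best_cfg, best_jsd = cfg, jsd_val
```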
### _RQ1: Performance Comparison_
Table II presents the performance in retaining data fidelity for our framework and the eight competitive baselines on four real-world datasets of different scales. The results reveal the following findings:
* **Our framework steadily achieves the best performance.** STAR ranks first on nineteen and second on two of the twenty-four metrics across the four datasets. On the nineteen metrics where it ranks first, our method reduces the JSD by up to 53.50% compared with the best baseline. On the remaining five metrics, namely _Duration_ on the NYC dataset, _DailyLoc_ on the Moscow dataset, and _I-rank_ on the NYC, TKY and Singapore datasets, our framework also obtains performance competitive with the best baseline.
* **Model-based methods are limited in simulating human mobility.** Markov performs worse on the time-dependent metrics (i.e., _Duration_ and _DailyLoc_) but better on the distance-based metrics (i.e., _Distance_ and _Radius_), because it chooses the next location from the distribution of historical transition probabilities, which is in line with the intuition that individuals tend to move to nearby areas. The performance of IO-HMM is also unsatisfactory because its modeling relies on extensive manual labeling, which places higher demands on data quality. For example, a lack of records at home interferes with the annotation of the Home label, which degrades the predictive performance of IO-HMM. In addition, the sparsity of the data introduces errors into the labeling of dwell time.
* **Deep learning methods designed for mobility prediction perform poorly on the human mobility simulation task.** LSTM and DeepMove are both trained with a short-term goal (i.e., next-location prediction) and thus do not perform particularly well on the simulation task. Unlike human mobility simulation, which emphasizes reproducing the various mobility patterns of real trajectories, human mobility prediction focuses on recovering the next location in the real data and lacks the learning of global patterns.
* **Generative networks fail to generate realistic human trajectories without trajectory pre-training.** GAN performs the worst across almost all metrics, indicating that it is difficult to capture the hidden patterns of human mobility when learning directly from noisy and inaccurate raw data. SeqGAN is pretrained on a human mobility modeling task, so it yields much better results than GAN.
* **It is essential to model dynamic spatiotemporal dependencies among locations.** Although CGE leverages a graph structure for data augmentation, its graph is static and the node embeddings generated by Word2vec cannot be learned and updated during mobility simulation, so it performs well on only a few metrics and poorly overall. MoveSim introduces an urban-structure modeling component specifically for locations, so it achieves the best result on the _Duration_ metric of the NYC dataset and the second-best result on the _I-rank_ metric of the TKY and Moscow datasets, the _Radius_ metric of the Moscow dataset, and the _Duration_ metric of the Singapore dataset.
### _RQ2: Ablation Study of STAR_
In this part, we investigate the effectiveness of the different modules in the STAR framework.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline Dataset & Method & Distance & Radius & Duration & DailyLoc & G-rank & I-rank \\ \hline \multirow{3}{*}{NYC} & w/o DB & 0.1152 & 0.3717 & 0.0013 & 0.0582 & 0.0376 & 0.0659 \\ & STAR & 0.1105 & 0.3636 & 0.0011 & 0.0580 & 0.0359 & 0.0624 \\ & \% Improv. & 4.08\% & 2.18\% & 15.38\% & 0.34\% & 4.52\% & 5.31\% \\ \hline \multirow{3}{*}{TKY} & w/o DB & 0.1237 & 0.4293 & 0.0004 & 0.0433 & 0.0343 & 0.0705 \\ & STAR & 0.1196 & 0.4178 & 0.0005 & 0.0402 & 0.0259 & 0.0693 \\ & \% Improv. & 3.31\% & 2.68\% & -25.00\% & 7.16\% & 24.49\% & 1.70\% \\ \hline \multirow{3}{*}{MSC} & w/o DB & 0.0076 & 0.0180 & 0.0027 & 0.0510 & 0.0297 & 0.0585 \\ & STAR & 0.0049 & 0.0152 & 0.0023 & 0.0487 & 0.0294 & 0.0485 \\ & \% Improv. & 35.53\% & 15.56\% & 14.81\% & 4.51\% & 1.01\% & 17.09\% \\ \hline \multirow{3}{*}{SGP} & w/o DB & 0.0065 & 0.0101 & 0.0016 & 0.0384 & 0.0230 & 0.0867 \\ & STAR & 0.0062 & 0.0095 & 0.0014 & 0.0391 & 0.0287 & 0.0848 \\ & \% Improv. & 4.62\% & 5.94\% & 12.50\% & 3.39\% & 10.31\% & 2.41\% \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Ablation study on the dwell branch in STAR in terms of JSD. DB, MSC and SGP are short for the dwell branch, Moscow and Singapore, respectively. A lower JSD value indicates a better performance.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline Dataset & Method & Distance & Radius & Duration & DailyLoc & G-rank & I-rank \\ \hline \multirow{3}{*}{NYC} & Weighted & 0.1117 & 0.3665 & 0.0013 & 0.0613 & 0.0383 & 0.0628 \\ & Vanilla & 0.1105 & 0.3636 & 0.0011 & 0.0580 & 0.0359 & 0.0624 \\ & \% Improv. & 1.07\% & 0.29\% & 15.38\% & 5.38\% & 6.27\% & 0.64\% \\ \hline \multirow{3}{*}{TKY} & Weighted & 0.1247 & 0.4220 & 0.0006 & 0.0438 & 0.0288 & 0.0697 \\ & Vanilla & 0.1196 & 0.4178 & 0.0005 & 0.0402 & 0.0259 & 0.0693 \\ & \% Improv. & 4.09\% & 1.00\% & 16.67\% & 8.22\% & 13.09\% & 0.57\% \\ \hline \multirow{3}{*}{MSC} & Weighted & 0.0081 & 0.0201 & 0.0020 & 0.0501 & 0.0335 & 0.0720 \\ & Vanilla & 0.0049 & 0.0152 & 0.0023 & 0.0497 & 0.0294 & 0.0485 \\ & \% Improv. & 39.51\% & 24.38\% & 20.69\% & 0.80\% & 12.24\% & 32.64\% \\ \hline \multirow{3}{*}{SGP} & Weighted & 0.0080 & 0.0115 & 0.0012 & 0.0038 & 0.0295 & 0.0494 \\ & Vanilla & 0.0062 & 0.0095 & 0.0014 & 0.0391 & 0.0287 & 0.0485 \\ & \% Improv. & 36.73\% & 17.39\% & -16.67\% & 3.39\% & 2.71\% & 1.82\% \\ \hline \hline \end{tabular}
\end{table} TABLE III: Ablation study on edge type of GNNs in STAR in terms of JSD. MSC and SGP are short for Moscow and Singapore. A lower JSD value indicates a better performance.
#### 5.5.1 Designs of Multi-channel Graph
In order to learn the spatiotemporal correspondence among locations, we design three kinds of spatiotemporal graphs in the multi-channel embedding module to represent the node embeddings separately, and obtain the final embeddings with a fusion layer. To verify the effectiveness of the three graphs in STAR, we remove each of them in turn (i.e., without the SpatioTemporal Graph, without the Spatial Distance Graph, and without the Temporal Transition Graph), learn the node embeddings, and perform human mobility simulation. The comparison of the simulated results is shown in Figure 3.
Compared with the results obtained by removing any of the three graphs, the fused embedding achieves optimal performance on almost all metrics over the four datasets. In addition, removing the SpatioTemporal Graph yields the worst average performance on the six metrics, especially on the _Distance_, _Duration_, _DailyLoc_ and _G-rank_ metrics, which measure the effectiveness of the simulated trajectories from both the spatial and the temporal perspective. This indicates that relying only on the static distance and transition information of locations is not enough for accurate trajectory simulation.
Eliminating the Spatial Distance Graph results in a decline of performance on the distance-based metrics (i.e., _Distance_ and _Radius_), in line with our hypothesis. However, it scores well on the preference-based metrics (i.e., _G-rank_ and _I-rank_), which is attributed to the SpatioTemporal Graph and the Temporal Transition Graph effectively learning the frequently visited locations in the trajectories. Furthermore, removing the Temporal Transition Graph results in inferior performance compared to retaining all three graphs, but surpasses removing either of the other graphs overall, which suggests that the SpatioTemporal Graph can aptly supplement the temporal transition information of trajectories.
Fig. 4: Effects of the number of layers and attention heads in STAR.
Fig. 3: Ablation study on the channels of graphs. STG, SDG and TTG represent SpatioTemporal Graph, Spatial Distance Graph and Temporal Transition Graph respectively. MSC and SGP are short for Moscow and Singapore. A lower JSD value indicates a better performance.
#### 5.5.2 Designs of Edge Type in GNNs
In order to explore the effect of edge weights in GNNs on model performance, we define two edge types for the model, _Weighted_ and _Vanilla_. _Weighted_ retains the weights of the edges in the three graphs, i.e., the three graphs before binarization, while _Vanilla_ uses only the adjacency relationships of the three graphs to learn the node embeddings. Their performance is listed in Table III.
It can be seen from the results that _Vanilla_ outperforms _Weighted_ on almost all metrics of the four datasets; on the _Distance_ metric of the Moscow and Singapore datasets in particular, the improvement reaches 39.51% and 36.73%, respectively. The reasons may be as follows. First, _Vanilla_ edges are simpler to learn from, whereas _Weighted_ edges increase the training time and degrade performance. Second, _Weighted_ edges may assign significant weights to noisy or irrelevant features, reducing model performance. In contrast, _Vanilla_ edges only consider the presence or absence of an edge, which is more resilient to noise.
The six metrics can be classified into three categories based on the level of improvement of the _Vanilla_ graph over the _Weighted_ graph. The distance-based metrics (i.e., _Distance_ and _Radius_) exhibit the largest improvement, whereas the time-dependent metrics (i.e., _Duration_ and _DailyLoc_) show the least; the preference-based metrics (i.e., _G-rank_ and _I-rank_) show moderate improvements. The underlying reason is that our proposed multi-channel graphs explicitly encode the distances between locations and the user preferences, while lacking explicit time-dependent constraints. Additionally, the long-tailed weight distribution of the _Weighted_ version might exaggerate the uneven distribution of location frequencies.
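For reference, the _Vanilla_ graphs correspond to thresholding the weighted adjacency matrices before learning; a minimal sketch of this binarization step (illustrative; the zero threshold and the dropping of self-loops are our assumptions):

```python
import numpy as np

def binarize(weighted_adj, threshold=0.0):
    """Keep only the adjacency relationship: 1 if an edge exists, else 0."""
    adj = (np.asarray(weighted_adj) > threshold).astype(np.float32)
    np.fill_diagonal(adj, 0.0)  # optionally drop self-loops
    return adj
```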
#### 5.5.3 Designs of Dwell Branch
The dwell branch, which allows individuals to remain at the current location with a learnable probability, aims to adaptively capture the varying dwell durations in mobility trajectories. We conduct ablation experiments on this component to verify its effectiveness; the results are shown in Table IV.
As we can observe, the dwell branch improves performance on twenty-three of the twenty-four metrics across the four datasets, with the most significant improvement observed on the _Distance_ metric of the Moscow dataset, reaching 35.53%. This implies that enabling individuals to dwell at a specific location with a learnable probability yields a more faithful representation of complex real-world behavior, consequently improving the metrics from different perspectives.
Specifically, there are two reasons for the performance improvement. First, staying at the current location with a certain probability reduces the likelihood of long-distance movement for an individual. As a result, the dwell branch facilitates the acquisition of distance-related patterns and features, improving the distance-based and time-dependent metrics. Second, individuals tend to visit several specific locations frequently, so the learnable stay probability can increase an individual's visit probability at frequently visited places, which effectively captures the mobility preferences of real trajectories from both the collective and the individual view.
It is worth noting that the dwell branch achieves its most significant improvement on the Moscow dataset, as expected: owing to the high frequency of staying at the previous location in the Moscow dataset, the learnable stay probability is essential for modelling the repeated movement patterns within the trajectories.
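The mechanism can be sketched in PyTorch as follows (an illustrative sketch only: module and tensor names are ours, and the actual STAR generator is more elaborate). At each step the output distribution is a mixture of "stay at the current location" with a learned probability and the exploration branch's distribution:

```python
import torch
import torch.nn as nn

class DualBranchDecision(nn.Module):
    """Schematic dual-branch generator step: dwell vs. explore."""

    def __init__(self, hidden_dim, num_locations):
        super().__init__()
        self.explore_head = nn.Linear(hidden_dim, num_locations)  # exploration branch
        self.dwell_head = nn.Linear(hidden_dim, 1)                # learnable stay prob.

    def forward(self, state, current_loc):
        # state: (B, hidden_dim) trajectory representation; current_loc: (B,)
        p_stay = torch.sigmoid(self.dwell_head(state))             # (B, 1)
        explore = torch.softmax(self.explore_head(state), dim=-1)  # (B, L)
        one_hot = nn.functional.one_hot(current_loc, explore.size(-1)).float()
        # Stay at the current location with prob. p_stay, otherwise explore.
        return p_stay * one_hot + (1.0 - p_stay) * explore
```

Because the output is a convex combination of two valid distributions, it remains a valid distribution over locations, and `p_stay` can adapt per step to the repeated movement patterns discussed above.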
### _RQ3: Parameter Sensitivity of STAR_
Parameter sensitivity analysis helps us understand how changes in the model's parameters affect its performance, which is useful for optimizing the model, identifying the key parameters with the largest impact on the results, and assessing the reliability and robustness of the model. Figure 4 shows the sensitivity of STAR to two key hyper-parameters, the number of GAT layers and the number of attention heads, which determine the graph convolution architecture in the STAR framework.
We plot the grid-search results over \(\{1,2\}\) GAT layers and \(\{1,2,4\}\) attention heads to thoroughly test the STAR framework on the twenty-four metrics over the four datasets. As can be seen from Figure 4, the performance of STAR is robust under different hyper-parameter settings, indicating that the choice of hyper-parameter values does not affect its superiority over the baselines.
Specifically, the effect of the hyper-parameters varies from metric to metric. On the distance-based metrics (i.e., _Distance_ and _Radius_), even a simple model with a small number of GAT layers and attention heads achieves good performance. As the number of layers and attention heads increases, performance on the distance-based metrics deteriorates due to overfitting. This phenomenon emerges because moving distance is the most conspicuous pattern of trajectories, so the distance-related features can be effectively modeled with fewer GAT layers and attention heads. On the time-dependent metrics (i.e., _Duration_ and _DailyLoc_), the performance exhibits minimal fluctuations as the number of GAT layers and attention heads changes, which demonstrates the robustness of our proposed method.
On the preference-based metrics (i.e., _G-rank_ and _I-rank_), a simple model with few GAT layers and attention heads performs poorly. On the one hand, deeper GAT stacks improve the performance of the model, because deeper layers can effectively capture complex, high-level graph structure. With multiple GAT layers, the model can handle increasingly abstract representations of the input graphs, learn higher-level features, and produce better simulations. In addition, deeper GAT stacks improve the model's ability to learn relationships between remote nodes, which is important for capturing the dynamic spatiotemporal dependencies among locations. On the other hand, increasing the number of attention heads also improves the model's performance, because more attention heads improve the quality of the learned node representations. Each attention head learns different attention weights, enabling the model to capture different and potentially complementary information from the neighborhood of each node. By integrating information from multiple attention heads, the model learns a richer representation of the graph and produces better simulations. Furthermore, more attention heads can make the model more resilient to different types of input graphs, improving its adaptability to unseen graph structures.
However, when the number of GAT layers is 2 and the number of attention heads increases to 4, the performance decreases and is even worse than that of the model with one layer and four attention heads. As mentioned above, too many attention heads tend to overfit, so the extra heads no longer improve, and may even degrade, the performance of the model.
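To make the two hyper-parameters concrete, the following is a minimal dense-adjacency multi-head graph-attention layer in PyTorch (an illustrative sketch, not the paper's implementation; it assumes every node has at least one neighbour in `adj`):

```python
import torch
import torch.nn as nn

class MultiHeadGATLayer(nn.Module):
    """Minimal dense-adjacency GAT layer with `num_heads` attention heads."""

    def __init__(self, in_dim, out_dim, num_heads):
        super().__init__()
        self.num_heads, self.out_dim = num_heads, out_dim
        self.W = nn.Linear(in_dim, num_heads * out_dim, bias=False)
        self.a_src = nn.Parameter(torch.randn(num_heads, out_dim))
        self.a_dst = nn.Parameter(torch.randn(num_heads, out_dim))

    def forward(self, x, adj):                 # x: (N, in_dim), adj: (N, N)
        N = x.size(0)
        h = self.W(x).view(N, self.num_heads, self.out_dim)   # (N, H, D)
        e_src = (h * self.a_src).sum(-1)                      # (N, H)
        e_dst = (h * self.a_dst).sum(-1)                      # (N, H)
        e = e_src.unsqueeze(1) + e_dst.unsqueeze(0)           # (N, N, H)
        e = torch.nn.functional.leaky_relu(e, 0.2)
        e = e.masked_fill(adj.unsqueeze(-1) == 0, float("-inf"))
        alpha = torch.softmax(e, dim=1)                       # attend over neighbours
        out = torch.einsum("ijh,jhd->ihd", alpha, h)          # (N, H, D)
        return out.reshape(N, -1)                             # concatenate heads
```

Stacking one or two such layers and varying `num_heads` over {1, 2, 4} reproduces the grid explored in Figure 4.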
### _RQ4: Visualization_
The geographical visualization results of our proposed STAR method and two baseline methods on the TKY dataset are presented in Figure 5. The side colorbar indicates the frequency of location visits, with brighter colors denoting higher visit frequencies. In the highlighted areas along the diagonal, STAR produces results closer to the real visit frequencies than the other two methods, CGE and IO-HMM, which are representative _state-of-the-art_ deep learning and statistical modeling methods for trajectory simulation, respectively. Additionally, STAR also attends to the less-visited areas in the upper-left corner, whereas the other two methods lack precision in modeling the long-tailed location-visit patterns.
## 6 Conclusion
In this paper, we propose a novel framework for human mobility simulation, namely **S**patio**T**emporal-**A**ugmented g**R**aph neural networks (STAR), to model the spatiotemporal effects among locations. On the one hand, STAR designs various kinds of spatiotemporal graphs to incorporate the spatiotemporal correspondences of locations, revealing the spatial proximity and functionality similarity together with the temporal visit distribution. On the other hand, STAR builds the dual-branch decision generator module to balance the diverse transitions generated by the exploration branch and the repetitive patterns generated by the dwell branch. The STAR framework is optimized based on the classification rewards of the policy discriminator module after iteratively generating a complete trajectory sequence. We have conducted comprehensive experiments on the human mobility simulation task, verifying the superiority of the STAR framework over the _state-of-the-art_ methods. Ablation studies on the multi-channel embedding module reveal that different datasets prefer different spatiotemporal graphs. Elaborate experiments further verify the effectiveness of the _Vanilla_ edges and the dwell branch.
However, deep-learning-based models often suffer from data scarcity and sparsity, which easily leads to overfitting on small training sets. In the future, we will explore the commonalities of different human mobility data and attempt to transfer human mobility models from high-resource locations to low-resource locations. Another interesting direction is introducing massive external resources, such as daily social tweets, to build a large location graph, which could largely alleviate the data sparsity problem.
|
2308.10368 | Prediction of Pneumonia and COVID-19 Using Deep Neural Networks | Pneumonia, caused by bacteria and viruses, is a rapidly spreading viral
infection with global implications. Prompt identification of infected
individuals is crucial for containing its transmission. This study explores the
potential of medical image analysis to address this challenge. We propose
machine-learning techniques for predicting Pneumonia from chest X-ray images.
Chest X-ray imaging is vital for Pneumonia diagnosis due to its accessibility
and cost-effectiveness. However, interpreting X-rays for Pneumonia detection
can be complex, as radiographic features can overlap with other respiratory
conditions. We evaluate the performance of different machine learning models,
including DenseNet121, Inception Resnet-v2, Inception Resnet-v3, Resnet50, and
Xception, using chest X-ray images of pneumonia patients. Performance measures
and confusion matrices are employed to assess and compare the models. The
findings reveal that DenseNet121 outperforms other models, achieving an
accuracy rate of 99.58%. This study underscores the significance of machine
learning in the accurate detection of Pneumonia, leveraging chest X-ray images.
Our study offers insights into the potential of technology to mitigate the
spread of pneumonia through precise diagnostics. | M. S. Haque, M. S. Taluckder, S. B. Shawkat, M. A. Shahriyar, M. A. Sayed, C. Modak | 2023-08-20T21:26:37Z | http://arxiv.org/abs/2308.10368v1 | # Prediction of Pneumonia and COVID-19 Using Deep Neural Networks
###### Abstract
Pneumonia, caused by bacteria and viruses, is a rapidly spreading viral infection with global implications. Prompt identification of infected individuals is crucial for containing its transmission. This study explores the potential of medical image analysis to address this challenge. We propose machine-learning techniques for predicting Pneumonia from chest X-ray images. Chest X-ray imaging is vital for Pneumonia diagnosis due to its accessibility and cost-effectiveness. However, interpreting X-rays for Pneumonia detection can be complex, as radiographic features can overlap with other respiratory conditions. We evaluate the performance of different machine learning models, including DenseNet121, Inception Resnet-v2, Inception Resnet-v3, Resnet50, and Xception, using chest X-ray images of pneumonia patients. Performance measures and confusion matrices are employed to assess and compare the models. The findings reveal that DenseNet121 outperforms other models, achieving an accuracy rate of 99.58%. This study underscores the significance of machine learning in the accurate detection of Pneumonia, leveraging chest X-ray images. Our study offers insights into the potential of technology to mitigate the spread of pneumonia through precise diagnostics.
_Keywords:_ X-ray, Pneumonia, Fever, Neural Networks, Artificial Intelligence, Machine Learning
Footnote †: 2023 Copyright reserved by authors; publication right reserved by ICE3IS 2023
## I Introduction
Pneumonia is a respiratory infection characterized by inflammation of the air sacs (alveoli) in one or both lungs, caused by viral or bacterial agents, or both. The infection leads to symptoms such as a productive cough with phlegm or pus, elevated body temperature (fever), chills, and respiratory distress due to the accumulation of fluid or pus within the air sacs. Pneumonia is brought about by a variety of microorganisms, encompassing fungi, viruses, and bacteria, and its severity ranges from mild forms to life-threatening conditions. Individuals aged over 65, individuals with pre-existing health conditions, and those with compromised immune systems are particularly susceptible to severe manifestations of pneumonia. In contrast to bacterial pneumonia, which often deteriorates before recovery, viral pneumonia frequently undergoes spontaneous resolution. Pneumonia can involve one or both lungs, depending on the extent of the infection and its location within the respiratory system [1].
The manifestation of pneumonia can range from severe enough to necessitate hospitalization to so mild that it goes almost unnoticed. Among the different types of pneumonia, bacterial pneumonia is the most prevalent and typically presents with more pronounced symptoms than other forms, often demanding immediate medical intervention. Bacterial pneumonia can have a sudden or gradual onset; its effects may include a dangerous fever of up to 105 degrees Fahrenheit, accompanied by rapidly elevated breathing and heart rates and profuse sweating. Reduced oxygen levels in the blood can cause a bluish discoloration of the lips and nailbeds, and the patient's cognitive state may become confused or irrational. In viral pneumonia, symptoms typically develop over several days and generally worsen within a day or two, with an intensified cough, difficulty breathing, and increased muscular pain; high fever and bluish discoloration of the lips can also occur.
Scientists have come up with a new and affordable way to detect pneumonia, a lung infection, from chest X-ray images [1,2,3]. Since COVID-19 pneumonia resembles viral pneumonia in appearance on radiographs, its radiological detection can be challenging [1, 4]. With the high number of suspected cases daily, distinguishing between the two may not always be possible, as it requires specific expertise. Researchers have used automation and machine learning to close this gap. The
purpose of this research is to examine how machine learning algorithms can be used to identify pneumonia in chest X-ray images. We discuss the performance of various machine learning algorithms and their implications for diagnosing pneumonia with limited resources. The findings from this study can be greatly beneficial for developing precise and cost-effective methods for detecting pneumonia. We demonstrate that the use of the proposed machine learning algorithms on radiological images can significantly improve the accuracy of pneumonia detection.
## II Literature Review
Rajasenbagam et al. [5] proposed detecting pneumonia infection in the lung from chest X-ray images using a deep convolutional neural network. The proposed Deep CNN models were trained on the Pneumonia Chest X-ray Dataset, containing 12,000 images of infected and uninfected chest X-rays. Augmented images were created using basic manipulation techniques and a Deep Convolutional Generative Adversarial Network (DCGAN). The proposed Deep CNN model was built on the VGG19 network and achieved a classification accuracy of 99.34% on unseen chest X-ray images. Its performance was compared with cutting-edge transfer learning methods such as AlexNet, VGG16Net, and InceptionNet; the comparison shows that the proposed Deep CNN model outperformed the other methods in classification performance.
K. S. and D. Radha [6] proposed using X-rays, one of the medical imaging techniques for examining patients with lung inflammation, to identify COVID-19 and pneumonia patients. An appropriate convolutional neural network model is chosen for the identified dataset and applied to a real dataset of lung X-ray images. Images are pre-processed and then trained for the Normal, COVID-19, and Pneumonia classes; after pre-processing, appropriate features are selected from the images of each dataset to detect the disease. The results indicate that COVID-19 and pneumonia were accurately identified, with COVID versus Normal more accurate than COVID versus Pneumonia. With accuracies of 80% and 91.46%, respectively, this method identifies not only COVID or pneumonia but also the pneumonia subtypes, bacterial and viral. Using the proposed model, COVID-19, bacterial pneumonia, and viral pneumonia can be identified and distinguished quickly, facilitating prompt and appropriate interventions.
D. Varshni et al. [7] evaluated pre-trained CNN models used as feature extractors together with various classifiers for distinguishing abnormal from normal chest X-rays, selecting the most suitable CNN model analytically. The statistical results demonstrate that pre-trained CNN models combined with supervised classifiers can be very helpful for analysing chest X-ray images, particularly for detecting pneumonia.
Kareem et al. [8] survey and examine the use of computer-aided methods for pneumonia detection and propose a hybrid model that uses real-time medical image data in a privacy-preserving manner to detect pneumonia effectively. The paper reviews how X-rays and various preprocessing methods can identify and classify a wide range of diseases, and how machine learning technologies such as k-nearest neighbors (KNN), convolutional neural networks (CNN), ResNet, CheXNet, DECNET, and ANN can be used to find pneumonia. An extensive literature review is conducted to ascertain the possibility of amalgamating hospital and medical institution datasets in order to enhance the training of machine learning models for improved disease detection accuracy and efficiency.
A. Pant et al. [9] were motivated by identifying pneumonia from patients' X-ray images alone, since doctors otherwise need to run many specific tests to determine whether a patient has pneumonia. To address this challenge, the researchers devised an ensemble of two deep-learning models. The ensemble streamlines doctors' work and achieves notable accuracy on unseen data and in comparison with related research. The study culminates in an ensemble deep learning model that offers a comprehensive solution to the problem at hand.
M. Yaseliani et al. [10] propose a novel hybrid convolutional neural network (CNN) model with three classification methods. In the first, CXR images are classified using fully connected (FC) layers; the model is trained for several epochs and the weights with the highest classification accuracy are saved. In the second, the trained optimized weights are used to extract the most representative CXR image features, which are then classified with machine learning (ML) classifiers. In the third, CXR images are classified by an ensemble of the proposed classifiers. With an accuracy of 98.55%, the proposed ensemble classifier employing Support Vector Machine (SVM), Radial Basis Function (RBF), and Logistic Regression (LR) classifiers performs best. This model is finally used to build a web-based CAD system that can help radiologists detect pneumonia with high accuracy.
## III Methodology
In this section, we discuss the machine-learning-based approaches that we trained for detecting pneumonia. The models used in our prediction are Inception-ResNet-V2, Inception-V3, ResNet-50, Xception, and DenseNet-121. All models give good results, but one model gives the best result for the prediction of pneumonia, as discussed below.
### Inception-ResNet-V2
The Inception-ResNet-v2 convolutional architecture adds residual connections to the Inception family of architectures. Its fundamental building block is the residual Inception block: after each block, a \(1\times 1\) convolution filter-expansion layer scales up the dimensionality of the filter bank before the addition, to match the depth of the input. In this architecture, batch normalization is used only on top of the traditional layers. The Inception-ResNet-V2 input image size is \(299\times 299\), and the network is 164 layers deep. The residual Inception block incorporates convolutional filters of multiple sizes with residual connections [13, 16]. Thanks to the residual connections, this architecture reduces training time and avoids the degradation issue associated with deep networks. Our refined Inception-ResNet-V2 model for COVID-19 and pneumonia classification is shown in Figure 1.
### Inception-V3
The Inception-V3 model used in this study comprises 11 inception modules spread across 48 layers and takes input images of size \(299\times 299\). Each module consists of convolution filters, pooling layers, and the ReLU activation function [1, 19]. By factorizing convolutions, InceptionV3 reduces the number of parameters without affecting the efficiency of the network. In addition, InceptionV3 proposed a novel way of reducing the number of features. Our refined InceptionV3 model for COVID-19 and pneumonia classification is shown in Figure 2.
An equal number of samples was randomly selected for training and validation, constituting the pneumonia dataset. Within this sample pool, 20% of the cases were allocated for validation, leaving the remaining 80% for training. To improve testing results and prevent overfitting on the pneumonia cases, the remaining samples were held back. Irrespective of the cases present in the test dataset, training and validation were executed in a balanced manner to achieve optimal accuracy, since previous studies have shown that a well-balanced training dataset is necessary to attain precise outcomes.
### Dense-Net-121
The DenseNet121 model accepts RGB images of at least \(224\times 224\) pixels. The model's 121 layers comprise more than 8 million parameters. Dense blocks, the core of DenseNet121, change the number of filters while maintaining the dimensions of the feature map; the transition layers between the blocks use batch normalization for downsampling. In this study, the final fully connected layer with Softmax activation is replaced by a custom classifier, as depicted in Figure 3.
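A typical way to realise this replacement with a pre-trained backbone in Keras looks roughly as follows (an illustrative sketch: the dropout layer and the number of output classes are our assumptions, since the exact classifier head is not specified beyond the Softmax activation):

```python
import tensorflow as tf

num_classes = 3  # assumption: set to match the label set (e.g., normal / pneumonia / COVID-19)

base = tf.keras.applications.DenseNet121(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),                          # illustrative regularisation
    tf.keras.layers.Dense(num_classes, activation="softmax"),  # custom classifier head
])
```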
### Res-Net-50
The ResNet50 residual-network architecture consists of 48 convolutional layers, one max-pooling layer, and one average-pooling layer. Each convolution block in ResNet50 has three convolutional layers, and each identity block likewise has three convolutional layers. The model has over 23 million trainable parameters. In this study, the ResNet50 model was adapted for COVID-19 and pneumonia classification, as depicted in Figure 4.
### Xception
In 2016, Chollet, the creator of the Keras library, proposed a machine-learning model named Xception. It adapts the Inception architectures by replacing the Inception modules with depthwise separable convolutions. On the ImageNet dataset, Xception outperformed the conventional InceptionV3, attaining higher Top-1 and Top-5 accuracy, while having roughly the same number of parameters as InceptionV3 (around 23 million). Our refined Xception model for COVID-19 and pneumonia classification is shown in Figure 5.
### Model Training and validation
This research employed five distinct machine learning models (namely, Inception-ResNet-V2, Inception-V3, DenseNet121, ResNet50, and Xception). To ensure uniformity, all images in the dataset were resized to \(224\times 224\) pixels. The architecture was designed and the convolutional neural networks (CNNs) were implemented using TensorFlow 2.4 and the Keras API. The models were trained on a 12 GB NVIDIA Tesla K80 GPU. Throughout training, model performance was assessed by the capacity to predict probabilities aligned with the ground truth, using the categorical cross-entropy loss function.
In this investigation, we amassed a compilation of images from various openly accessible databases: a total of 3,525 pneumonia images and 1,123 COVID-19 images served as the basis for training, validation, and testing of the models. All images were standardized by resizing to \(224\times 224\) pixels, irrespective of their original sizes. Within the COVID-19 samples, a random selection of 10% was earmarked for testing, while the remaining images were partitioned into 80% for training and 20% for validation.
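A minimal training sketch consistent with this setup might look as follows (illustrative: `x`/`y` stand for the preprocessed image array and one-hot labels, `model` is the network defined earlier, and the optimizer choice and epoch count are our assumptions; the categorical cross-entropy loss is the one stated above):

```python
from sklearn.model_selection import train_test_split

# x: (N, 224, 224, 3) float array of resized images; y: one-hot labels (assumed inputs).
x_trainval, x_test, y_trainval, y_test = train_test_split(x, y, test_size=0.10)
x_train, x_val, y_train, y_val = train_test_split(x_trainval, y_trainval,
                                                  test_size=0.20)

model.compile(optimizer="adam",                  # assumed optimizer
              loss="categorical_crossentropy",   # loss used during training
              metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          batch_size=32, epochs=50)              # illustrative values
```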
## IV Result and Discussion
Table 1, Chart 1, and Figures 6, 7, 8, 9, and 10 provide visual representations of the accuracy and loss metrics for each refined model during training and validation. The table presents the performance metrics of the different models on the classification task: the evaluated models are DenseNet-121, ResNet-50, Xception, Inception-V3, and Inception ResNet-V2, and the measured metrics are accuracy, precision, recall, and the F1 score.
DenseNet-121 has the highest accuracy of 99.48 percent when looking at the values for accuracy, indicating that it can correctly classify instances. With an accuracy of 99.26 percent, ResNet-50 closely follows. Both Inception-V3 and Inception ResNet-V2 have high levels of accuracy, reaching 98.96 percent and 98.21 percent, respectively. On the other hand, Xception has a slightly lower accuracy of 98.34 percent.
The recall, also known as sensitivity or the true positive rate, measures the model's ability to correctly identify positive instances. DenseNet-121 excels with a recall of 99.38 percent, followed by ResNet-50 at 99.31 percent and Xception at 99.01 percent. Inception-V3 and Inception ResNet-V2 achieve recalls of 98.16 percent and 97.06 percent, respectively.
The F1 score combines precision and recall and thus provides a balanced evaluation of the model's performance. DenseNet-121 has the highest F1 score, 99.42 percent, indicating the best overall classification performance, with ResNet-50 close behind at 99.30 percent. Inception-V3 and Xception display F1 scores of 97.12 percent and 98.89 percent, respectively, and Inception ResNet-V2 scores 98.06 percent.
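The reported measures can be reproduced from the test-set predictions with scikit-learn (an illustrative sketch continuing the training snippet above; the macro averaging is our assumption):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

y_true = np.argmax(y_test, axis=1)                  # from one-hot labels
y_pred = np.argmax(model.predict(x_test), axis=1)   # predicted class indices

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("f1 score :", f1_score(y_true, y_pred, average="macro"))
print(confusion_matrix(y_true, y_pred))
```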
## V Conclusion and Future Work
In our study, we considered several advanced machine learning algorithms and compared their performance. Among all models, DenseNet-121 achieves the highest accuracy, precision, recall, and F1 score; ResNet-50 performs similarly well on most metrics, while Inception ResNet-V2 scores slightly lower than the robustly performing Inception-V3 and Xception. Overall, DenseNet-121's capacity to accurately identify pneumonia from chest X-rays epitomizes the prowess of machine learning in healthcare. This
Fig. 8: Graph depicting the accuracy of the DenseNet model.
Fig. 6: Graph illustrating the accuracy of the Inception-ResNet-V2 model.
Fig. 7: Graph depicting the accuracy of the Inception-V3 model.
Fig. 9: Graph depicting the accuracy of the ResNet-50 model.
underscores the significance of ongoing research and innovation in this realm. While further research is necessary to validate DenseNet-121's efficacy on a larger scale, the initial findings are highly promising. The potential of DenseNet-121 to revolutionize pneumonia detection through chest X-rays could significantly impact the diagnosis and treatment of this disease, particularly in regions with limited access to medical resources and testing.
|
2308.14085 | Sampling with flows, diffusion and autoregressive neural networks: A
spin-glass perspective | Recent years witnessed the development of powerful generative models based on
flows, diffusion or autoregressive neural networks, achieving remarkable
success in generating data from examples with applications in a broad range of
areas. A theoretical analysis of the performance and understanding of the
limitations of these methods remain, however, challenging. In this paper, we
undertake a step in this direction by analysing the efficiency of sampling by
these methods on a class of problems with a known probability distribution and
comparing it with the sampling performance of more traditional methods such as
the Monte Carlo Markov chain and Langevin dynamics. We focus on a class of
probability distribution widely studied in the statistical physics of
disordered systems that relate to spin glasses, statistical inference and
constraint satisfaction problems.
We leverage the fact that sampling via flow-based, diffusion-based or
autoregressive networks methods can be equivalently mapped to the analysis of a
Bayes optimal denoising of a modified probability measure. Our findings
demonstrate that these methods encounter difficulties in sampling stemming from
the presence of a first-order phase transition along the algorithm's denoising
path. Our conclusions go both ways: we identify regions of parameters where
these methods are unable to sample efficiently, while that is possible using
standard Monte Carlo or Langevin approaches. We also identify regions where the
opposite happens: standard approaches are inefficient while the discussed
generative methods work well. | Davide Ghio, Yatin Dandi, Florent Krzakala, Lenka Zdeborová | 2023-08-27T12:16:33Z | http://arxiv.org/abs/2308.14085v1 | # Sampling with flows, diffusion and autoregressive neural networks: A spin-glass perspective
###### Abstract
Recent years witnessed the development of powerful generative models based on flows, diffusion or autoregressive neural networks, achieving remarkable success in generating data from examples with applications in a broad range of areas. A theoretical analysis of the performance and understanding of the limitations of these methods remain, however, challenging. In this paper, we undertake a step in this direction by analysing the efficiency of sampling by these methods on a class of problems with a known probability distribution and comparing it with the sampling performance of more traditional methods such as the Monte Carlo Markov chain and Langevin dynamics. We focus on a class of probability distribution widely studied in the statistical physics of disordered systems that relate to spin glasses, statistical inference and constraint satisfaction problems.
We leverage the fact that sampling via flow-based, diffusion-based or autoregressive networks methods can be equivalently mapped to the analysis of a Bayes optimal denoising of a modified probability measure. Our findings demonstrate that these methods encounter difficulties in sampling stemming from the presence of a first-order phase transition along the algorithm's denoising path. Our conclusions go both ways: we identify regions of parameters where these methods are unable to sample efficiently, while that is possible using standard Monte Carlo or Langevin approaches. We also identify regions where the opposite happens: standard approaches are inefficient while the discussed generative methods work well.
## I Introduction
The field of machine learning recently witnessed the development of powerful generative models able to produce new data-samples based on learning on datasets of existing samples. Among the most prominent ones achieving recent successes are flow-based models [1; 2; 3; 4; 5], diffusion-based models [6; 7; 8; 9] and generative autoregressive neural networks [10; 11; 12; 13]. These approaches are achieving remarkable success in diverse areas such as image generation [14], language modelling [15], generation of molecules [16; 17] or theoretical physics [18]. A theoretical understanding of the capabilities of these models, of their limitations and performance, remains, however, a challenge. A major aspect of these techniques is in the learning of the probability measure or its representation from examples. In this paper, we focus instead on a restricted setting where the probability distribution we aim to sample from is known beforehand. Our main goal is to contribute to setting the theoretical understanding of the capabilities and limitations of these powerful generative models.
When applied to parametric probability distributions, the generative models are indeed designed with the goal to provide samples uniform at random from the distribution. Here, we study whether they are able to _efficiently_ sample from types of probability distributions that are encountered in the study of mean-field spin glasses and related statistical inference problems [19; 20; 21; 22; 23; 24].
There are several benefits to studying this class of probability distributions. On the one hand, due to numerous studies in the statistical physics of disordered systems, we possess a comparatively good grasp of parameter regions where traditional sampling methods--like Monte Carlo sampling or Langevin dynamics--are effective and where they are not [25; 26; 27; 28; 29; 30]. On the other hand, the tools available for outlining the phase diagrams of these problems [20; 21; 31; 32] turn out to be highly effective in analytically describing the performance of generative techniques such as flow-based, diffusion-based, or autoregressive networks as samplers for the respective probability measures.
The above-mentioned tools, along with their mathematically rigorous counterparts, have recently been applied to the analysis and design of sampling algorithms in mean-field spin glass models in the context of stochastic localization [33; 34; 35; 36; 37; 38]. This was later found to have a close relationship with diffusion-based models [37; 38]. In particular, [36; 37] showed how one can turn the message-passing denoising algorithms into samplers, a technique we shall use as well in the present study. While we could not find similar studies for flow-based models, closely related work on autoregressive networks exists on the decimation of message-passing algorithms used for finding solutions of the random K-satisfiability problem in [39; 40]. Here, we build on these works and bring the following contributions:
* Using the formalism of stochastic interpolants [4; 41], we analyse sampling with flow-based methods, which leads to Bayesian denoising with additive white Gaussian noise (AWGN); a minimal scalar instance of this denoising problem is sketched after this list. This turns out to be equivalent to what arises in the analysis of diffusion-based sampling and stochastic localization derived in [36; 38; 42].
* In the case of autoregressive generative networks the analysis leads instead to a Bayesian denoising problem now correcting erased variables (the Binary Erasure Channel [43]) as in [40].
* Focusing on prototypical exactly solvable models where one can perform asymptotic analysis, we then study the phase diagrams of the corresponding Bayesian denoising problems as a function of the strength of the signal-to-noise ratio.
* We investigate the feasibility of sampling using these techniques. We posit that these methods are capable of efficient sampling, provided there is no range in noise amplitude exhibiting the metastability characteristic of first-order phase transitions. If such metastability exists, the denoising becomes computationally hard.
* We locate these metastable regions in prototypical models, specifically the Ising and spherical \(p\)-spin models, the bicoloring of random hypergraphs problem, and a matrix estimation problem with a sparse spike.
* We also provide a GitHub repository with our numerical experiments in [44], see [https://github.com/IdePHICS/DiffSamp](https://github.com/IdePHICS/DiffSamp).
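To make the mapping to Bayesian denoising concrete in the simplest setting, consider binary spins observed through an AWGN channel, \(y=x+\sqrt{\Delta}\,z\) with \(z\) standard Gaussian and \(x_{i}=\pm 1\) i.i.d. uniform; the Bayes-optimal (posterior-mean) denoiser is then componentwise \(\mathbb{E}[x_{i}|y_{i}]=\tanh(y_{i}/\Delta)\). The sketch below is only an illustration of this scalar case (it is not taken from the paper's repository):

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn_observe(x, delta):
    """Observe y = x + sqrt(delta) * z, with z standard Gaussian."""
    return x + np.sqrt(delta) * rng.standard_normal(x.shape)

def bayes_denoise_binary(y, delta):
    """Posterior mean E[x | y] for i.i.d. uniform x_i = +/-1 under AWGN."""
    return np.tanh(y / delta)

# Tiny demo: mean squared error of the denoiser at a given noise level.
x = rng.choice([-1.0, 1.0], size=10_000)
y = awgn_observe(x, delta=0.5)
mse = np.mean((bayes_denoise_binary(y, 0.5) - x) ** 2)
```

In the structured models studied here, this factorised posterior mean is replaced by AMP/TAP estimates, and the hardness discussed above manifests itself as a first-order transition in the corresponding denoising error as the noise amplitude (or the interpolation time) is varied.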
In terms of comparison between the flow-based, diffusion-based or autoregressive generative models with more traditional Monte Carlo Markov chains (MCMC) and Langevin sampling procedures, the following picture emerges (summarized in Fig. 1) for two classes of models.
* Disordered models that exhibit a phase diagram of the random-first-order-theory (RFOT) type, also often called discontinuous one-step replica symmetry breaking [46; 19], are typical in the mean-field theory of the glass transition [28], but they also appear in a variety of random constraint satisfaction problems [22]. For such models, there exists a so-called dynamical temperature \(T_{d}\) such that Monte Carlo Markov chains or Langevin algorithms are predicted to sample efficiently for \(T>T_{d}\)[25; 26; 27; 28; 29] while the sampling problem is computationally hard for \(T<T_{d}\). Recent work [47] showed empirically that the region \(T<T_{d}\) is also hard when sampling with autoregressive neural networks. Our analysis of the currently used flow-based, diffusion-based and autoregressive-network-based sampling procedures goes further and reveals analytically that they perform _worse_ than the traditional techniques: there is another temperature \(T_{\text{tri}}\), which depends on the details of the method, such that for \(T_{d}<T<T_{\text{tri}}\) these generative methods do not sample efficiently while traditional procedures do.
* A second class of problems are the noisy random statistical inference problems with a hard phase and a statistical-to-computational gap [23; 24; 45; 48]. In such problems, a good statistical inference can only be performed below a critical value of the noise (or inverse signal-to-noise ratio) \(\Delta_{\text{IT}}\), at least with infinite computational power. It turns out, however, that the best algorithms we know for such problems (in particular message-passing ones) are only able to work below a second threshold \(\Delta_{\text{alg}}\) and fail to learn in the "hard" region \(\Delta_{\rm alg}<\Delta<\Delta_{\rm IT}\). Existing literature predicts an even _more significant gap_ to exist in those cases for MCMC or Langevin-based samplers [30; 49; 50], down to a noise amplitude \(\Delta_{\rm MCMC}<\Delta_{\rm alg}\). In this setting, however, it appears that flow-based, diffusion-based, and autoregressive network-based methods outperform the standard approaches and sample efficiently as soon as \(\Delta<\Delta_{\rm alg}\).

Figure 1: Schematic summary of the comparison of the efficiency of sampling with flow-based, diffusion-based and autoregressive methods versus Langevin or Monte-Carlo approaches in spin glass models with a random first-order transition (top), and in statistical inference models with a computationally hard phase (bottom). The computationally hard phase in inference problems appears for \(\Delta_{\text{alg}}<\Delta<\Delta_{\text{IT}}\), where efficient algorithms achieving close-to-optimal estimation error are not known and conjectured not to exist [45].
### Further related works
The present study of sampling algorithms owes a lot to recent lines of work. In particular, we shall follow the helpful approach of continuous-time flows based on stochastic interpolants [4; 41] as the starting point. Additionally, our way of setting up the statistical physics equivalent problem for a continuous-time flow method closely follows the one pioneered recently in [36; 42] for stochastic localization and generalized in [37] for diffusion models.
The difficulty in denoising we uncover turns out to be connected to an easy-hard transition arising in statistical inference problems [23], often called the statistical-to-computational gap [48; 24]. For these models, mean-field algorithms from spin glass theory, such as the Belief-Propagation (BP) equations [21] and the Approximate-Message-Passing (AMP) [31], or the equivalent Thouless-Anderson-Palmer (TAP) [51] approach, are believed to be among the most efficient ones [52]. Using AMP for sampling via the diffusion method (or via stochastic localization) was discussed recently in [36; 37]. The resulting algorithm turns out to have very close connections to the so-called reinforcement BP [53], and a direct connection exists between the analysis in the present paper and the reinforced and decimated versions of message-passing algorithms [54; 55; 39; 56].
Finally, we note that since we assume that the model is known, we are not discussing here the limitations of training denoiser models from a limited number of data points. This difficulty was discussed recently in [57] in the context of statistical physics models.
## II Sampling with flow and diffusion-based models
We start by discussing continuous-time flow-based models [3; 4; 5]. The goal is to sample a vector \(\mathbf{x}_{0}\in\mathbb{R}^{N}\) from a so-called target distribution \(P_{0}(\mathbf{x}_{0})\). Continuous-time flow-based models [3; 4; 5; 58] achieve this by constructing an ODE (flow) that continuously transforms Gaussian noise \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbb{I}_{N})\) to a sample \(\mathbf{x}_{0}\sim P_{0}\). One way to obtain such a flow is by inverting the time-evolving law of stochastic or deterministic processes bridging from a sample \(\mathbf{x}_{0}\sim P_{0}\) to a Gaussian noise \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbb{I}_{N})\). Such a continuous-time reverse process underlies a wide range of flow models [4; 5] as well as the score-based diffusion models [9]. In practice, one would only observe samples from the distribution \(P_{0}(\mathbf{x}_{0})\); since in this paper we aim to focus on the limitations of these procedures, we will assume that the distribution \(P_{0}(\mathbf{x}_{0})\) is known, as this can only render the task easier than when only samples from \(P_{0}(\mathbf{x}_{0})\) are available.
To present this method clearly, we find it convenient to use the formalism of stochastic linear interpolants introduced in [41; 4]. Starting from a vector \(\mathbf{x}_{0}\) sampled from \(P_{0}\) and a Gaussian vector \(\mathbf{z}\), we consider the process --or _one-sided stochastic interpolant_-- defined by
\[\mathbf{y}(t)=\alpha(t)\mathbf{x}_{0}+\beta(t)\mathbf{z},\quad t\in\left[0,1 \right], \tag{1}\]
where the functions \(\alpha\) and \(\beta\) are generic, but constrained by the following relations:
\[\alpha(0)=\beta(1)=1;\quad\alpha(1)=\beta(0)=0; \tag{2}\] \[\forall t\in\left[0,1\right]:\alpha(t)\geq 0,\ \dot{\alpha}(t)\leq 0,\ \beta(t)\geq 0,\ \dot{\beta}(t)\geq 0 \tag{3}\]
such that indeed we have \(\mathbf{y}(0)=\mathbf{x}_{0}\) and \(\mathbf{y}(1)=\mathbf{z}\). Suppose that \(P_{0}(\mathbf{x}_{0})\) admits a density \(\rho_{0}\) w.r.t the Lebesgue measure. It can be shown [41], see also Appendix E, that the probability density \(\rho(\mathbf{y}(t))\) associated to the measure \(P_{t}\) of the random variable \(\mathbf{y}(t)\) satisfies the following transport equation:
\[\partial_{t}\rho(\mathbf{y},t)+\nabla\cdot\left(b(\mathbf{y},t)\rho(\mathbf{y },t)\right)=0\,, \tag{4}\]
where we defined the _velocity field_
\[b(\mathbf{y},t)=\mathbb{E}[\partial_{t}\mathbf{y}(t)|\mathbf{y}(t)=\mathbf{y}]=\mathbb{E}[\dot{\alpha}(t)\mathbf{x}_{0}+\dot{\beta}(t)\mathbf{z}|\mathbf{y}(t)=\mathbf{y}]\,. \tag{5}\]
Indeed, \(b(\mathbf{y},t)\) is simply the expected velocity of \(\mathbf{y}(t)\) conditioned on being at \(\mathbf{y}\) at time \(t\). In Appendix E, we also provide a formal definition of Eq. (5) based on the analysis in [4], along with a discussion of the case when the measure \(P_{0}(\mathbf{x}_{\mathbf{0}})\) is discrete.
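As a concrete example, one admissible choice satisfying Eqs. (2)-(3) is the linear interpolant
\[\alpha(t)=1-t\,,\qquad\beta(t)=t\,,\]
for which the velocity field of Eq. (5) reduces to \(b(\mathbf{y},t)=\mathbb{E}[\mathbf{z}-\mathbf{x}_{0}|\mathbf{y}(t)=\mathbf{y}]\); any other pair of functions obeying the constraints works equally well in what follows.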
Equation (1) defines a forward process interpolating from \(\mathbf{x_{0}}\) to \(\mathbf{z}\). Equation (4) further reveals that, in law, \(\mathbf{y}(t)\) can be obtained by applying the vector field (Eq. (5)) starting from \(\mathbf{x_{0}}\) at time \(t=0\). The algorithm proposed by [4] relies on applying the velocity field in the reverse direction of time. Concretely, it relies on approximating the unique solution to the following ordinary differential equation starting from a random Gaussian initial condition from \(t=1\), back to \(t=0\):
\[\frac{d\mathbf{Y}(t)}{dt}=b(\mathbf{Y}(t),t)\,. \tag{6}\]
Eq. (4) then implies that the random variable defined by Eq. (6) solved backwards in time from the final value \(\mathbf{Y}_{t=1}=\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbb{I}_{N})\) is also distributed according to \(\rho(\mathbf{y}(t))\) (which is nothing but the continuity equation for the flow defined by Eq. (6)) and, in particular, is distributed as the desired target \(P_{0}(\mathbf{x}_{0})\) at time \(t=0\). If we can numerically solve this ODE, then we have a sampling algorithm for \(P_{0}(\mathbf{x}_{0})\).
In order to do that, we discretize Eq. (6) using the forward Euler method (in reverse time) to write
\[\mathbf{Y}_{t-\delta}=\mathbf{Y}_{t}-\delta\,b(\mathbf{Y}_{t},t)\,. \tag{7}\]
Noticing that we can rewrite the vector field using Eq. (1) as
\[b(\mathbf{Y}_{t},t) = \mathbb{E}\left[\dot{\alpha}(t)\mathbf{x}_{0}+\frac{\dot{\beta}( t)}{\beta(t)}\beta(t)\mathbf{z}\bigg{|}\mathbf{y}(t)=\mathbf{Y}_{t}\right] \tag{8}\] \[= \mathbb{E}\left[\dot{\alpha}(t)\mathbf{x}_{0}+\frac{\dot{\beta}( t)}{\beta(t)}\left(\mathbf{y}(t)-\alpha(t)\mathbf{x}_{0}\right)\bigg{|} \mathbf{y}(t)=\mathbf{Y}_{t}\right]\] \[= \frac{\dot{\beta}(t)}{\beta(t)}\mathbf{Y}_{t}+\left(\dot{\alpha} (t)-\frac{\dot{\beta}(t)\alpha(t)}{\beta(t)}\right)\mathbb{E}\left[\mathbf{x} _{0}|\mathbf{y}(t)=\mathbf{Y}_{t}\right]\,,\]
we put back Eq. (8) in Eq. (7) to reach
\[\mathbf{Y}_{t-\delta}\!=\!\left(\!1-\delta\frac{\dot{\beta}(t)}{\beta(t)} \!\right)\!\mathbf{Y}_{t}-\delta\left(\dot{\alpha}(t)-\frac{\dot{\beta}(t) \alpha(t)}{\beta(t)}\right)\mathbb{E}\left[\mathbf{x}_{0}|\mathbf{y}(t)= \mathbf{Y}_{t}\right] \tag{9}\]
which, given the initial condition \(\mathbf{Y}_{t=1}=\mathbf{z}\), will evolve back to a sample from the target \(\mathbf{Y}_{t=0}\sim P_{0}\), provided that we are able to estimate \(\mathbb{E}\left[\mathbf{x}_{0}|\mathbf{y}(t)=\mathbf{Y}_{t}\right]\)_well_ at each time \(t\). In Algorithm 1 we report a schematic implementation of the resulting flow-based sampling technique. In Appendix E, we further discuss the effect of discretization and the associated sampling guarantees under access to a perfect denoiser.
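To fix ideas, the following is a minimal Python sketch of this scheme (the analogue of Algorithm 1). The function `denoiser`, which should return an estimate of \(\mathbb{E}\left[\mathbf{x}_{0}|\mathbf{y}(t)=\mathbf{Y}_{t}\right]\), is a placeholder for whatever (approximate) Bayes-optimal denoiser is available for the target measure, e.g. AMP in the models studied below.

```python
import numpy as np

def flow_sampler(denoiser, N, n_steps=1000,
                 alpha=lambda t: 1.0 - t, beta=lambda t: t, rng=None):
    """Euler scheme of Eq. (9), integrated backwards from t=1 to t~0.

    `denoiser(Y, t)` must return an estimate of E[x_0 | y(t) = Y].
    The defaults are the linear interpolant; any pair (alpha, beta)
    satisfying Eqs. (2)-(3) can be passed instead.
    """
    rng = np.random.default_rng() if rng is None else rng
    delta = 1.0 / n_steps
    eps = 1e-7                              # step for numerical derivatives
    Y = rng.standard_normal(N)              # Y_{t=1} = z ~ N(0, I_N)
    for k in range(n_steps - 1):            # stop before t=0, where beta(t) -> 0
        t = 1.0 - k * delta
        a, b = alpha(t), beta(t)
        da = (alpha(t + eps) - alpha(t - eps)) / (2 * eps)
        db = (beta(t + eps) - beta(t - eps)) / (2 * eps)
        x_hat = denoiser(Y, t)              # Bayes-optimal posterior mean
        # Eq. (9): Y_{t-delta} = (1 - delta*db/b) Y_t - delta*(da - db*a/b) x_hat
        Y = (1.0 - delta * db / b) * Y - delta * (da - db * a / b) * x_hat
    return Y                                # at small t, Y is close to alpha(t) x_0
```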
### Diffusion-based models and SDEs
The flow-based algorithm [4] (Alg. 1) relies on replicating the law of the interpolant \(\mathbf{y}(t)\) defined by Eq. (1) through the deterministic ordinary differential equation (ODE) defined by Eq. (6). Interestingly, however, the _same_ law for \(\mathbf{y}(t)\) can be obtained via a stochastic differential equation (SDE). Similarly, when \(\mathbf{y}(t)\) is generated through a forward diffusion process applied to the data, the law can be generated through an associated time-reversed SDE (and an ODE) [9; 59]. It turns out that both flow-based (ODE) methods and diffusion-based (SDE) ones associated to the above processes (as well as stochastic localization [33; 34; 35]) require estimating the posterior-mean \(\mathbb{E}\left[\mathbf{x}_{0}|\mathbf{y}(t)=\mathbf{Y}_{t}\right]\), i.e. averages w.r.t a "tilted" probability measure. Utilization of such an evolving tilted measure for sampling was done first in the stochastic-localization-based sampling algorithm in [36; 37] and related to generic diffusion models in [38]. The reliance of both the ODE and SDE-based approaches on the same tilted measure is a consequence of the posterior mean (or equivalently the score) yielding both the velocity field for the ODE defined by Eq. (6) and the drift term for the equivalent SDE as noted in [4; 9; 41]. Since the law of \(\mathbf{y}(t)\) remains the same with the ODE or SDE-based approaches, the deterministic evolution of the "tilted" measure \(P(\mathbf{x}|\mathbf{y}(t)=\mathbf{Y}_{t})\) matches in law with that of the associated stochastic localization process [38]. This is convenient for our analysis since this means that _all_ the phase transitions discussed in our work apply to flow-based methods, diffusion-based ones and stochastic localization approaches.
### The posterior average and Bayes-optimal denoising
The key difficulty is thus to estimate the average \(\mathbb{E}\left[\mathbf{x}_{0}|\mathbf{y}(t)=\mathbf{Y}_{t}\right]\). This is nothing but the Bayes-optimal denoising where \(\mathbf{Y}_{t}\) is a noisy observation of \(\mathbf{x}_{0}\) corrupted by additive white Gaussian noise in the form of Eq. (1).
The Bayes-optimal denoising estimator \(\hat{\mathbf{x}}\) (optimal in the sense of minimizing the mean-squared error between the estimator and \(\mathbf{x}_{0}\)) is the posterior mean \(\hat{\mathbf{x}}=\mathbb{E}\left[\mathbf{x}_{0}|\mathbf{y}(t)=\mathbf{Y}_{t}\right]\)[60]. Determining this average can be computationally hard, as it involves an integral in high dimension. In practical machine learning setups, one learns the denoiser using a neural network trained on data coming from the distribution. The main narrative of the diffusion and flow-based methods is thus that if one has access to a good denoiser, then one can turn it into a sampler.
In this paper, we aim to focus on the limitations of this procedure, and we shall hence assume that the task of obtaining a denoiser has been completed to near perfection, by focusing on target distributions \(P_{0}(\mathbf{x}_{0})\) stemming from problems studied in statistical physics for which the Bayes-optimal denoising problem, including its algorithmic feasibility, has been extensively investigated. Before turning to these models, we write the denoising problem in terms that are more familiar in statistical physics and information theory.
Let us consider again the process defined by Eq. (1). Using the Bayes theorem, we can see that the posterior distribution of the sample conditioned on the observation \(\mathbf{y}(t)=\mathbf{Y}_{t}\) is given by
\[P(\mathbf{x}|\mathbf{y}(t)=\mathbf{Y}_{t})=\frac{P_{0}(\mathbf{x})P(\mathbf{y}(t)=\mathbf{Y}_{t}|\mathbf{x})}{P(\mathbf{y}(t)=\mathbf{Y}_{t})}=\frac{1}{Z(\mathbf{Y}_{t})}\exp\left(\frac{\alpha(t)}{\beta(t)^{2}}\langle\mathbf{Y}_{t},\mathbf{x}\rangle-\frac{\alpha(t)^{2}}{2\beta(t)^{2}}\|\mathbf{x}\|^{2}\right)P_{0}(\mathbf{x})\,, \tag{10}\]
where in the last equality we put all the terms not depending on \(\mathbf{x}\) inside the normalization, a.k.a. the partition function, \(Z\).
We now recall that the law of \(\mathbf{Y}_{t}\) can be obtained through an observation of a sample from the target distribution \(\mathbf{x}_{0}\sim P_{0}\) through the AWGN channel rescaled by the factors \(\alpha(t)\) and \(\beta(t)\) as in Eq. (1).
Here we denote by \(\mathbf{x}_{0}\) the "ground-truth" signal, to be distinguished from the dummy variable \(\mathbf{x}\) in the integral. We can hence rewrite the measure in Eq. (10) as the _tilted measure_

\[P_{\gamma}(\mathbf{x}|\mathbf{x}_{0},\mathbf{z})\propto P_{0}(\mathbf{x})\,e^{\gamma(t)^{2}\langle\mathbf{x},\mathbf{x}_{0}\rangle+\gamma(t)\langle\mathbf{z},\mathbf{x}\rangle-\frac{\gamma(t)^{2}}{2}\|\mathbf{x}\|^{2}}\,, \tag{11}\]

where we defined the effective signal-to-noise ratio

\[\gamma(t)\equiv\frac{\alpha(t)}{\beta(t)}\,. \tag{12}\]

## III Sampling with autoregressive networks

A different strategy is followed by autoregressive models, which construct a sample by sampling each coordinate conditioned on its "parent" coordinates [61]. In the simplest case, the distribution of each coordinate is assumed to depend on all previous coordinates. Thus, after sampling \(x_{1}\), for each subsequent node one looks at the distribution \(P_{0}(\mathbf{x}|x_{1})\), considers the marginal distribution of \(x_{2}\), etc. Of course, marginalization of a high-dimensional probability distribution is in general hard, and the strategy used in autoregressive networks [62] is to directly _learn from data_ a probability distribution written in the (autoregressive) form \(P_{0}(\mathbf{x})=P_{0}(x_{1})P_{0}(x_{2}|x_{1})\ldots P_{0}(x_{N}|x_{1},\ldots,x_{N-1})\), with each term being represented via a neural network. While this decomposition works for any ordering of the components, in practical applications the order may be relevant (this is an important point in ancestral sampling). In the present paper, we will consider the order to be random.
We showed in the previous section that sampling through diffusion in our formalism boils down to the performance of a sequence of denoising problems, where in information-theoretic terms the signal is observed through an additive white Gaussian noise channel. Analogously, we can interpret the autoregressive networks, or their ideal version, sequential sampling, as the estimation of the marginal when a fraction \(\theta\) of the entries of one configuration sampled uniformly at random from the target \(P_{0}\) is revealed exactly. To come back to the sampling scheme, reversing this process is equivalent to fixing one variable at a time from the marginal conditioned on the variables previously fixed, until all variables are fixed. In statistical physics, this procedure has been studied and analysed under the name decimation [40]. In Algorithm 2 we report a schematic implementation of the autoregressive-based sampling technique.
In information-theoretic terms, this corresponds to a denoising problem under the so-called binary erasure channel (BEC) in which a transmitter sends a bit and the receiver either receives the bit correctly or with some probability \(1-\theta\) receives a message that the bit was not received but "erased" instead.
Repeating the way of thinking we used for sampling with diffusion, we now want to analyse the denoising for such models and write the modified measure
\[P_{\theta}(\mathbf{x}|\mathbf{x}_{0},S_{\theta})\propto P_{0}(\mathbf{x})\prod_{i\in S_{\theta}}\delta(x_{i}-[\mathbf{x}_{0}]_{i}) \tag{13}\]
where \(S_{\theta}\) is the set of revealed variables of size \(\theta N\). We can also think of these variables as pinned to their ground truth value \(\mathbf{x}_{0}\) and consequently call the measure \(P_{\theta}(\mathbf{x}|\mathbf{x}_{0},S_{\theta})\) the pinning measure. From a statistical physics point of view, the new measure can again be thought of as the original one, but with an "infinite" magnetic field pointing to the direction of the components of \(\mathbf{x}_{0}\) for an expected fraction \(\theta\) of components. We are again interested in the marginals of \(P_{\theta}\) that we will denote \(\hat{\mathbf{x}}(\theta)\). The evolution of the pinning measure can also be interpreted as a coordinate-by-coordinate stochastic localization process [35].
```
Input: BEC denoiser
for l = 1, ..., N do
    Compute the posterior marginal P_l(x_l | x_1, ..., x_{l-1}) with the BEC denoiser, Eq. (15)
    Assign x_l ~ P_l(x_l | x_1, ..., x_{l-1})
end for
Return x
```
**Algorithm 2** Autoregressive-network-based sampling algorithm
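For concreteness, a minimal Python sketch of Algorithm 2 for binary variables follows; `bec_denoiser` is a placeholder for the routine returning the single-coordinate marginal of the pinning measure of Eq. (13) (computed, e.g., with belief propagation or a learned network).

```python
import numpy as np

def decimation_sampler(bec_denoiser, N, rng=None):
    """Sequential (decimation) sampling, Algorithm 2, for x in {-1,+1}^N.

    `bec_denoiser(pinned, l)` must return P(x_l = +1 | {x_i fixed, i in pinned}),
    i.e. the marginal of the pinning measure of Eq. (13). Coordinates are
    visited in a random order, as assumed in the main text.
    """
    rng = np.random.default_rng() if rng is None else rng
    pinned = {}                          # index -> fixed value in {-1, +1}
    for l in rng.permutation(N):         # random decimation order
        p_plus = bec_denoiser(pinned, l)
        pinned[int(l)] = 1 if rng.random() < p_plus else -1
    return np.array([pinned[i] for i in range(N)])
```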
## IV Properties of Bayes-optimal denoising
In both cases -- diffusion and autoregressive -- one has to perform an optimal denoising based on an observation \(\mathbf{y}\). In the case of diffusion, we have access to an observation where a sample from the target \(\mathbf{x}_{0}\sim P_{0}\) is polluted by an AWGN channel:
\[\mathbf{y}=\alpha\mathbf{x}_{0}+\beta\mathbf{z}\,, \tag{14}\]
with \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbb{I}_{N})\), while for the autoregressive case it is polluted by the BEC channel where, for every \(i=1,\ldots,N\) independently,
\[y_{i}=\begin{cases}[\mathbf{x}_{0}]_{i}&\text{with probability }\theta\\ *&\text{otherwise}\end{cases} \tag{15}\]
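For concreteness, the two observation channels of Eqs. (14) and (15) can be simulated as follows (a minimal sketch; `np.nan` plays the role of the erasure symbol \(*\)).

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn_channel(x0, alpha, beta):
    """Eq. (14): y = alpha * x0 + beta * z, with z ~ N(0, I_N)."""
    return alpha * x0 + beta * rng.standard_normal(x0.shape)

def bec_channel(x0, theta):
    """Eq. (15): each entry is revealed with probability theta, erased otherwise."""
    y = x0.astype(float)
    y[rng.random(x0.shape) >= theta] = np.nan   # nan stands for the erasure symbol *
    return y
```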
Our goal is now to study the properties of such channels and the corresponding Bayes-optimal denoisers. A crucial point is that these Bayes-optimal estimation problems lead to the _Nishimori identities_ and the _single state_ (or
replica symmetric in the spin glass jargon) properties of the measures \(P_{\gamma}\), Eq. (11), and \(P_{\theta}\), Eq. (13), see e.g. the review [23]. While these are classical properties, we recall their derivations in the appendix and state their form and main consequences informally here.
Concretely, we shall be interested in the evolution in time (or equivalently in \(\gamma\in[0,\infty[\) (AWGN) and \(\theta\in[0,1]\) (BEC)) of the following _order parameters_, or _overlaps_:
\[\mu(\gamma) \equiv \frac{1}{N}\mathbb{E}[\hat{\mathbf{x}}(\gamma)\cdot\mathbf{x}_{0}]\,, \tag{16}\] \[\chi(\gamma) \equiv \frac{1}{N}\mathbb{E}[\|\hat{\mathbf{x}}(\gamma)\|^{2}]\,. \tag{17}\]
The same definitions hold if we replace \(\gamma\) by \(\theta\), i.e. for the autoregressive process instead of the diffusive. The expectations are over the disorder \(\mathbf{x}_{0}\) and \(\mathbf{z}\). The single state property relates to the self-averaging of these order parameters: Almost anywhere in \(\gamma\) (AWGN) or \(\theta\) (BEC), the equilibrium measure \(P_{\gamma}\) and \(P_{\theta}\) is a single "phase" probability measure, where overlap and order parameter are self-averaging, i.e. concentrate to a single scalar quantity as \(N\to\infty\). Note that the "almost anywhere" is not a void statement: for instance, if one is sampling a complicated glassy problem such as, say, the Sherrington-Kirkpatrick model [20], strictly at \(\gamma=0\), the Boltzmann measure is complex and overlaps are not self-averaging. For almost all values of \(\gamma\) (that is, except at some critical threshold values) the measure then corresponds to a much simpler single-phase problem, and in particular, there is no phenomenon akin to a static glassy transition or replica-symmetry-breaking, thus the replica symmetric assumption is sufficient to describe the thermodynamic behaviour of these measures. This will be instrumental in the exact asymptotic analysis.
While such results have a long history [63; 64], their proof in the present context follows from the analysis of optimal Bayesian denoising, and in particular the I-MMSE theorem and the fluctuation-dissipation one [65; 66; 32; 67], for the AWGN channel, and by the study of the so-called pinning lemma [68; 69] in the BEC channel, that also has roots in the study of the glass transition [29; 70].
A second property, often called the Nishimori symmetry in physics [23; 64], follows from Bayes theorem and states that for optimal Bayesian denoising one has \(\mu(\gamma)=\chi(\gamma)\) in the diffusion setting and \(\mu(\theta)=\chi(\theta)\) in the autoregressive one.
Note that while \(\mu(\gamma)\) cannot be computed in practice (it requires knowing \(\mathbf{x}_{0}\)), we can instead compute numerically \(\chi(\gamma)\), which is just the rescaled squared norm of the estimator. The study of phase transitions in such problems is thus reduced to the behaviour of a single scalar quantity.
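The Nishimori identity \(\mu=\chi\) is easy to check numerically in a toy case where the posterior mean is explicit: for a Gaussian target \(P_{0}=\mathcal{N}(\mathbf{0},\mathbb{I}_{N})\) (an illustration only, not one of the models studied below), the Bayes-optimal denoiser of the channel in Eq. (14) is \(\hat{\mathbf{x}}=\alpha\mathbf{y}/(\alpha^{2}+\beta^{2})\), and both overlaps equal \(\alpha^{2}/(\alpha^{2}+\beta^{2})\).

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_samples = 1000, 200
alpha, beta = 0.7, 0.5

mu, chi = 0.0, 0.0
for _ in range(n_samples):
    x0 = rng.standard_normal(N)                      # x_0 ~ P_0 = N(0, I_N)
    y = alpha * x0 + beta * rng.standard_normal(N)   # AWGN channel, Eq. (14)
    x_hat = alpha * y / (alpha**2 + beta**2)         # exact posterior mean here
    mu += x_hat @ x0 / N
    chi += x_hat @ x_hat / N
print(mu / n_samples, chi / n_samples, alpha**2 / (alpha**2 + beta**2))
# the three numbers agree up to Monte Carlo fluctuations
```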
## V Prototypical exactly analysable models
We will now analyse the properties of the tilted measure \(P_{\gamma}\) and the pinning measure \(P_{\theta}\) for several concrete cases of the target measure \(P_{0}\). This will be possible exactly in the thermodynamic limit \(N\to\infty\). We shall focus on several classical problems from spin glass theory, statistical inference, and constraint optimization problems, but analogous analysis of the tilted and pinning measures can be done for many other problems for which the phase diagram was obtained via the replica or the cavity method [20; 21].
First, we shall start by studying models that present a so-called random first-order transition [71; 28], or in replica parlance, a discontinuous one-step replica symmetry breaking phenomenology [20]. There are two crucial temperatures in RFOT systems. The so-called Kauzmann temperature \(T_{K}\) below which the system behaves as an ideal glass and the so-called dynamical temperature \(T_{d}\) that is defined as the temperature at which the point-to-set correlation length diverges and consequently Monte Carlo or Langevin dynamics equilibration time diverges as well [28; 29]. It is a widely accepted conjecture that no efficient method can sample such models in their low temperature \(T<T_{d}\) glassy phase (and in fact, one can prove such hardness for classes of algorithms [72; 36]). We shall focus on the "paramagnetic" phases of these models \(T>T_{d}\) for which the Monte Carlo or Langevin equilibration time is known to be finite and hence efficient sampling with MCMC or Langevin is possible. Specifically, we shall consider the following models:
* **The Ising and spherical \(p\)-spin glass**[19], whose Hamiltonian reads (for \(p=3\)): \[\mathcal{H}(\mathbf{x})=-\frac{\sqrt{3}}{N}\sum_{i<j<k}J_{ijk}x_{i}x_{j}x_{k}\,,\] (18) with \(J_{ijk}\sim\mathcal{N}(0,1)\). The Boltzmann distribution is then \(P_{0}(\mathbf{x})\propto\exp(-\beta\mathcal{H}(\mathbf{x}))\) with \(\beta=1/T\). In the Ising case, we take \(x_{i}=\pm 1\), for \(i=1,\dots,N\), in the spherical case \(\mathbf{x}\in\mathcal{S}^{N-1}\) (even though, since we are discussing
the high-temperature phase, we shall use the equivalent Gaussian model). This is one of the most studied models in spin glass theory.
* **NAE-SAT, or bicoloring**: Another class of popular models arises in the context of constraint satisfaction problems, e.g. random satisfiability problem or random graph coloring [22]. Here we shall focus on a prototypical case from this class, the problem of coloring random \(k\)-hypergraphs with two colours. This model was studied using statistical physics techniques in [73; 74; 75]. Numerous rigorous results for this model were also established e.g. in [76; 77]. The probability distribution is the following: \[P_{0}(\mathbf{x})\propto\prod_{a=1}^{M}\omega(\mathbf{x}_{\partial a})\,,\quad \mathbf{x}\in\{-1,+1\}^{N}\] (19) \[\alpha=\frac{M}{N}\,,\quad\omega(x_{1},\ldots,x_{k})=\begin{cases}0&\text{if } \sum_{i=1}^{k}x_{i}=\pm k\\ 1&\text{otherwise}\end{cases}\] where by \(\mathbf{x}_{\partial a}\) we refer to the group of \(k\) variables entering into the clause \(a\). Again, this model presents a RFOT phenomenology, with a dynamical/clustering transition \(\alpha_{d}\). The literature supports the property that MCMC is able to sample efficiently for \(\alpha<\alpha_{d}\) and is not for \(\alpha>\alpha_{d}\)[22; 78; 29].
In these models, by studying the tilted/pinning measures, we shall see the limitations of flow-based, diffusion and autoregressive sampling methods that will _fail_ at temperature \(T_{d}<T<T_{\text{tri}}\) and constraint density \(\alpha_{\text{tri}}<\alpha<\alpha_{d}\). These methods thus turn out to be not as effective as MCMC/Langevin approaches above the dynamical transition. We expect this to be the case for any model with RFOT phenomenology. Every cloud, however, has its silver lining, and we shall also note that there is a class of models where flow, diffusion or autoregressive approaches outperform MCMC/Langevin. This will be the case in statistical inference problems presenting a hard phase, i.e. presenting a sharply defined region of the noise \(\Delta_{\text{IT}}>\Delta>\Delta_{\text{alg}}\) that is computationally hard for message-passing algorithms and conjectured hard for any other efficient algorithm [45]. A recent line of work [80; 30; 49; 50; 79] argues that when it comes to sampling algorithms such as Langevin or other algorithms walking in the space of configurations, such as gradient descent, the hardness actually extends even beyond the threshold \(\Delta_{\text{alg}}\) up to some \(\Delta_{\text{MCMC}}\) that depends on the specific algorithm. Yet, diffusion models were shown to be able to sample down to \(\Delta_{\text{alg}}\)[37]. We will also illustrate this here by studying the tilted and pinning measures of the following prototypical model:
* **Sparse rank-one matrix factorization**: The sparse _spiked Wigner_ model and its phase diagram were discussed e.g. in [81; 82; 83; 84]. This model is a variation of the "planted" Sherrington-Kirkpatrick model in spin glass physics. In such models, one is given a "spiked" version of a random symmetric matrix with a rank-one perturbation with a "planted" vector \(\mathbf{x}^{*}\) and aims at finding back \(\mathbf{x}^{*}\) from the matrix. This is done by sampling from the posterior probability, and therefore one considers the probability distribution: \[P_{0}(\mathbf{x})\propto\prod_{i}P_{X}(x_{i})\prod_{i<j}\exp \left(\frac{1}{\Delta\sqrt{N}}J_{ij}x_{i}x_{j}-\frac{1}{2\Delta N}x_{i}^{2}x_ {j}^{2}\right)\,,\] (20) \[J_{ij}=\frac{x_{i}^{*}x_{j}^{*}}{\sqrt{N}}+z_{ij}\,,\quad z_{ij} =z_{ji}\sim\mathcal{N}(0,\Delta)\,;\] \[P_{X}(x)=(1-\rho)\delta_{x,0}+\frac{\rho}{2}\left(\delta_{x,+1}+ \delta_{x,-1}\right)\,,\quad x_{i}^{*}\sim P_{X}\;\forall\,i.\]
## VI Phase diagrams of the tilted and pinning measures
We first discuss sampling in the spherical \(p\)-spin model, Eq. (18). We focus in particular on sampling in its paramagnetic phase, i.e. \(T>T_{d}\), where MCMC and Langevin algorithms are predicted to work efficiently. In order to analyse flow-based, diffusion-based and autoregressive sampling, we need to consider the corresponding denoising problems. For flow-based and diffusion-based sampling, this leads to the tilted measure Eq. (11), with \(P_{0}\) now given by Eq. (18). Taken together, this defines a variant of the \(p\)-spin model with a particular random field.
While the tilted measure may look complicated at first sight, because of the random field in the direction of a particular "equilibrium" direction \(\mathbf{x}_{0}\), it turns out that measures of this type were already studied in the literature.
We refer the reader to Appendix A for details and only briefly sketch the reasoning here. The trick (also used in e.g. [36, 85]) is to notice that \(\forall T>T_{K}\) the original \(p\)-spin model is contiguous to its "planted" version, where a vector \(\mathbf{x}_{0}\) has been hidden beforehand as an equilibrium configuration (the spike tensor model [86]). The model is thus equivalent for all practical purposes to its planted version, with the tilted field now acting in the direction of the planted configuration.
Computing the phase diagram of the model associated with the tilted measure thus requires computing the free entropy of the planted model with an additional side Gaussian information. This is the same computation as needed in various mathematical works based on Guerra's interpolation technique where the planted model is observed together with an additional Gaussian channel, e.g. in [32]. This, and the single-state property of Bayes-optimal denoising discussed in the former section (that ensures replica symmetry), allows us to obtain the phase diagram of the tilted measure.
For the present spherical \(p\)-spin model, we show in Appendix B that the equilibrium properties for \(T>T_{\mathrm{K}}\) are given by
\[\chi^{*} =\operatorname*{argmax}_{\chi}\Phi_{\mathrm{RS}}(\chi) \tag{21}\] \[\Phi_{\mathrm{RS}}(\chi) =\frac{\widetilde{\chi}}{2}+\frac{1}{2}\log\left(\frac{2\pi}{ \widetilde{\chi}+1}\right)-\frac{1}{2T^{2}}\chi^{3}\,,\] \[\widetilde{\chi} =\frac{3}{2T^{2}}\chi^{2}+\gamma^{2}\,.\]
Solving the above maximization problem is easily done. One observes that, depending on the range of parameters \(T\) and \(\gamma\), up to two local maxima of \(\Phi_{\rm RS}\) can be found for this model. In Fig. 2 (a, top) we depict in green the regions of \(T,\gamma\) where \(\Phi_{\rm RS}\) has a unique maximizer. The orange region is where two maximizers co-exist with the global one having the smaller value of the order parameter \(\chi\), and in the red region the global maximizer has the larger value of \(\chi\). Such a phenomenology is familiar in first-order phase transitions, where the red and orange phases in Fig. 2 correspond to the phase coexistence region.

Figure 2: Phase diagrams for the tilted measure \(P_{\gamma}\) (top), and the pinning measure \(P_{\theta}\) (bottom): (a) the spherical \(p\)-spin model, (b) the Ising \(p\)-spin model, (c) the sparse rank-one matrix estimation, (d) the bicoloring problem on hypergraphs (NAE-SAT). The x-axis is the temperature \(T=1/\beta\) in (a) and (b), the inverse-SNR \(\Delta/\rho^{2}\) in (c) and the clauses-to-variables ratio \(\alpha\) in (d), while the y-axis shows the SNR ratio \(\gamma^{2}=\alpha^{2}/\beta^{2}\) (top) and the decimated ratio \(\theta\) (bottom). In the green phase there is a single maximum of the free entropy. The red and orange regions display a phase coexistence with two maxima. In the red region efficient denoising is predicted to be algorithmically hard. In (a), for the spherical \(p\)-spin at \(\gamma=\theta=0\), the dynamical threshold is \(T_{d}=\sqrt{3/8}\) and the Kauzmann transition \(T_{\mathrm{K}}\approx 0.58\), while the tri-critical point is at \(T_{\mathrm{tri}}=2/3\) for diffusion and \(T_{\mathrm{tri}}=\sqrt{1/2}\) for autoregressive. In (b), for the Ising \(p\)-spin, the values are: \(T_{d}\approx 0.682\), \(T_{\mathrm{K}}\approx 0.652\), \(T_{\mathrm{tri}}\approx 0.741\) for diffusion, and \(T_{\mathrm{tri}}\approx 0.759\) for autoregressive. In (c), for the sparse rank-one matrix estimation at \(\rho=0.08\), we have \(\Delta_{d}/\rho^{2}\approx 1.041\), \(\Delta_{\mathrm{K}}/\rho^{2}\approx 1.029\), and \(\Delta_{\mathrm{alg}}/\rho^{2}\approx 0.981\). The tri-critical points are at \(\Delta_{\mathrm{tri}}/\rho^{2}\approx 1.08\) for diffusion, and \(\Delta_{\mathrm{tri}}/\rho^{2}\approx 1.069\) for autoregressive. In (d), for the bicoloring, the values are \(\alpha_{d}\approx 9.465\), \(\alpha_{\mathrm{K}}\approx 10.3\). The tri-critical points are \(\alpha_{\mathrm{tri}}\approx 8.4\) for both diffusion and autoregressive. The curves for bicoloring were obtained by a polynomial fit, while in all the other cases we represent directly the data points.
Now, a crucial point for the follow-up discussion of the flow- and diffusion-based sampling is that it is widely believed to be _algorithmically hard_ to obtain the Bayes-optimal denoiser in the red region for all efficient algorithms (even if \(P_{0}\) is known) [45, 48, 52, 23]. This is nothing else than the well-known metastability problem in thermodynamics, as one is "trapped" in the wrong maximum of Eq. (21).
The evidence for denoising being algorithmically hard in the red region goes beyond mere physical analogies and is an intense subject of studies in computer science, with the study of a variety of techniques such as low degree polynomial or message passing algorithms (see e.g. [48]). Note that while denoising is hard in the red region, direct sampling with MCMC is hard already in the orange one.
To explain this further, let us discuss a concrete implementation of a denoiser. Computing the marginals for the tilted measure is a classical topic in spin glass and estimation theory, and the best-known algorithm to do it efficiently is the so-called mean-field Thouless-Anderson-Palmer equations [51], or --to use their modern counterpart-- the iterative approximate message passing (AMP) [87, 31]. The AMP algorithm is an iterative update procedure on the estimates of the posterior means \(\hat{x}_{i}\) and the covariances \(\sigma_{i}\), and for the spherical 3-spin model it reads

\[\begin{cases}B_{i}^{t}=\frac{\sqrt{3}\,\beta}{N}\sum_{j<k}J_{ijk}\widehat{x}_{j}^{t}\widehat{x}_{k}^{t}-\frac{3\beta^{2}}{N}\,\widehat{x}_{i}^{t-1}\,\sigma^{t}\,\widehat{\mathbf{x}}^{t}\cdot\widehat{\mathbf{x}}^{t-1}\\ \widehat{x}_{i}^{t+1}=\dfrac{B_{i}^{t}+\frac{\alpha(t)}{\beta(t)^{2}}[\mathbf{Y}_{t}]_{i}}{3\beta^{2}\|\widehat{\mathbf{x}}^{t}\|_{2}^{4}/(2N^{2})+\gamma^{2}+1}\,,\qquad\sigma^{t+1}=\dfrac{1}{3\beta^{2}\|\widehat{\mathbf{x}}^{t}\|_{2}^{4}/(2N^{2})+\gamma^{2}+1}\,.\end{cases}\]
The virtue of AMP is that its performance can be tracked rigorously over iteration time, and in fact, one can show that the overlap \(\chi^{t}\) defined by the AMP estimates obeys the following _state evolution_:
\[\chi^{t+1}=\frac{\widetilde{\chi}^{t}}{1+\widetilde{\chi}^{t}}\,,\quad \widetilde{\chi}^{t}\equiv\frac{3}{2T^{2}}(\chi^{t})^{2}+\gamma^{2}\,, \tag{22}\]
which is nothing but the fixed point equation of Eq. (21). We see now how the presence of multiple fixed points can trap the algorithm in the wrong maximizer.
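This trapping mechanism is easy to explore numerically: iterating Eq. (22) from an uninformed initialization (i.e. running AMP from scratch) and from an informed one (starting at the equilibrium/planted configuration), and comparing \(\Phi_{\rm RS}\) of Eq. (21) at the two fixed points, classifies a point \((T,\gamma)\) into the green/orange/red regions of Fig. 2(a). A minimal sketch (parameter values are illustrative):

```python
import numpy as np

def phi_rs(chi, T, gamma):
    """RS free entropy of the tilted spherical 3-spin model, Eq. (21)."""
    chi_t = 3 * chi**2 / (2 * T**2) + gamma**2
    return chi_t / 2 + 0.5 * np.log(2 * np.pi / (chi_t + 1)) - chi**3 / (2 * T**2)

def se_fixed_point(chi0, T, gamma, n_iter=20000):
    """Iterate the state evolution, Eq. (22), to its fixed point."""
    chi = chi0
    for _ in range(n_iter):
        chi_t = 3 * chi**2 / (2 * T**2) + gamma**2
        chi = chi_t / (1 + chi_t)
    return chi

def classify(T, gamma, tol=1e-6):
    chi_unif = se_fixed_point(0.0, T, gamma)   # uninformed init: AMP from scratch
    chi_info = se_fixed_point(1.0, T, gamma)   # informed init: equilibrium config
    if abs(chi_unif - chi_info) < tol:
        return "green: single fixed point"
    if phi_rs(chi_unif, T, gamma) >= phi_rs(chi_info, T, gamma):
        return "orange: coexistence, uninformed branch is the global maximum"
    return "red: hard, global maximum unreachable from the uninformed init"

# vertical scan in gamma at fixed T_d < T = 0.64 < T_tri, cf. Fig. 2(a), top
for gamma in (0.1, 0.3, 0.6):
    print(gamma, classify(T=0.64, gamma=gamma))
```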
Turning an AMP-denoiser into a sampler was precisely the idea introduced in [42, 37] under the framework of stochastic localization [33, 34, 35]. While we derived the equation for a flow-based approach, the conclusion of [42, 37] remains. In particular, leveraging the rigorous analysis of the asymptotic error obtained by AMP [88, 89], they prove that AMP can approximate optimal denoising throughout the interpolation path in the regime where the global maximum for \(\Phi_{\rm RS}\) is the first one reached by the state evolution [37]. Furthermore, using the local convexity of the TAP free energy [90, 36], they show that the AMP iterates satisfy Lipschitz-continuity w.r.t the observation \(\mathbf{Y}_{t}\). This is crucial for the SDE-based sampling in [36, 37] as well as the continuous flow-based sampler in our case, since both require control of the discretization errors. We discuss this further in Appendix E. However, AMP and, conjecturally [45], any other polynomial-time denoiser (and in particular any neural-network denoiser learned from data) fail to return the correct marginal whenever the global maxima of \(\Phi_{\rm RS}\) is not the first one encountered by the state evolution, i.e. in the red region in Fig. 2.
### Failure of generative models while MCMC succeeds
Recall now how the flow- or diffusion-based methods use the denoiser (the marginal of the tilted measure) to produce samples from \(P_{0}\). They start with a Gaussian noise at \(\gamma=0\) and increase \(\gamma\) gradually while transforming the Gaussian noise towards the direction of the marginal of the tilted measure. To do this for a specific temperature \(T\), one needs to be able to denoise optimally on a vertical line at _all_ values of \(\gamma\in[0,\infty[\). However, for temperatures below the tri-critical point \(T_{\rm tri}=2/3\) (this value is for diffusion, in the spherical \(p\)-spin model) we see that we encounter the first-order phase transition, and the metastable region (red in Fig. 2) where optimal denoising is computationally hard, and hence so is uniform sampling using this strategy. Another, less critical, problem is the fact that the measure will change drastically at the phase transition (as the value \(\chi^{*}\) jumps discontinuously), which means one has to be very careful with the discretization of the diffusion process.
Based on this reasoning, we interpret the phase diagram of the tilted measure as a representation of the presence of a fundamental barrier for sampling by flow- and diffusion-based methods: The sampling scheme corresponds to starting from \(\gamma=0\) and going upwards. If this path intersects the hard phase (red) at any time, it means that the Bayesian denoising cannot be performed optimally in an efficient way, thus preventing the sampling
scheme from working efficiently in consequence. In particular, for the spherical \(p\)-spin model, we computed the curves analytically, see Appendix B, and found that this hurdle is present at temperatures up to the tri-critical point \(T_{\rm tri}=2/3\), strictly larger than the dynamical temperature \(T_{d}=\sqrt{3/8}\) (threshold between the orange and red region at \(\gamma=0\)) down to which MCMC and Langevin algorithms are predicted to sample efficiently in the literature.
Indeed, one can write exact equations describing the Langevin dynamics [25] (and prove them [27]). The analysis shows that Langevin is efficient at sampling for all \(T>T_{d}\)[91]. On the other hand, for \(T<T_{d}\), we are in the so-called "dynamical" spin glass phase and Langevin fails to sample in linear time, this is the ageing regime [25; 26]. In fact, all polynomial algorithms are conjectured to fail to sample efficiently, as can be proven for any "stable" algorithm [72].
The situation for sampling with autoregressive network, analysed via the phase diagram of the pinning measure, is very analogous, see the lower row of Fig. 2. The pinning measure again defines a variation of the original \(p\)-spin model, and as shown in the appendix we can compute properties at equilibrium by solving \(\chi^{*}=\operatorname{argmax}_{\chi}\Phi_{\rm RS}(\chi)\) where the RS free entropy reads
\[\Phi_{\rm RS}(\chi) =\frac{\widetilde{\chi}}{2}+\frac{1-\theta}{2}\log\left(\frac{2 \pi}{\widetilde{\chi}+1}\right)-\theta\log(\sqrt{2\pi}e)-\frac{\chi^{3}}{2T^{2 }}\,,\] \[\widetilde{\chi} =\frac{3\chi^{2}}{2T^{2}}\,. \tag{23}\]
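For completeness, differentiating Eq. (23) with respect to \(\chi\) gives the fixed-point condition of the corresponding state evolution (a one-line computation analogous to the one leading to Eq. (22)):
\[\chi=\frac{\theta+\widetilde{\chi}}{1+\widetilde{\chi}}\,,\qquad\widetilde{\chi}=\frac{3\chi^{2}}{2T^{2}}\,,\]
which reduces to Eq. (22) at \(\theta=0\), \(\gamma=0\), and correctly enforces \(\chi=1\) when all variables are pinned (\(\theta=1\)).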
The same reasoning can be applied to this phase diagram (and in fact the so-called decimated versions of message-passing algorithms were proposed as early as in [54; 55]). Concerning the efficiency of sampling, the same phenomenology appears again when interpreting this phase diagram. In fact, for the spherical \(p\)-spin model, it turns out to be even worse because the temperature where the difficulty arises for the autoregressive method is \(T_{\rm tri}=\sqrt{1/2}\), which is larger than the one for flow-based and diffusion models \(T_{\rm tri}=2/3\). So in this case, autoregressive-based sampling algorithms perform worse than flow- or diffusion-based.
### Other models
Fig. 2 then evaluates the phase diagrams of the tilted and pinning measures for three other models - the Ising \(p\)-spin (b), the rank-one matrix estimation (c), and the bicoloring problem on sparse hypergraphs (d). For the bicoloring problem, defined on random sparse hypergraphs, we can use the belief propagation equations as the Bayesian denoiser [73; 74; 75]. The resulting equations are quite long, and we defer their presentation to the appendix, along with their derivation using the cavity method.
We observe the same phenomenology with tri-critical points causing hurdles for flow-, diffusion- and autoregressive methods reaching out to the phase where traditional approaches based on MCMC or Langevin work efficiently. In fact, we expect this picture to always appear for any model with the RFOT phenomenology where the dynamical temperature is distinct from the ideal glass or Kauzmann temperature. Such a phenomenology was described in many problems far beyond those we picked to study, and we can hence anticipate follow-up studies identifying analogous phase diagrams in many other problems of interest.
Finally, we notice that depending on the model, the position of the tri-critical point for flow- and diffusion-based methods is better (e.g. for the spherical 3-spin, Ising 3-spin) or worse (e.g. for the sparse rank-one matrix estimation) than for the autoregressive methods. In any case, the position of the tri-critical point does depend on the noise channel to which the generative model maps. This leaves open the question of which channel one should use, for each model, to minimize the range of values for which generative-model-based sampling is suboptimal compared to MCMC techniques that fail at the dynamical threshold \(T_{d}\). It is not inconceivable that \(T_{\rm tri}\) can be brought very close to (or maybe even down to) \(T_{d}\) by optimizing over different distributions in the linear interpolant [41], or using non-linear maps [37]. This is left for future studies.
### Outperforming MCMC in inference models with a hard phase
We now discuss a situation more advantageous for modern techniques. Column (c) of Fig. 2 depicts the phase diagram of the sparse rank-one matrix estimation problem that presents an additional interesting feature: here there is a planted "hidden" signal that we seek to recover. At large values of the noise the signal is hidden and the RFOT-type phenomenology reappears so that the high noise behaviour is identical to the high-temperature models of the other models.
At low noise values, however, there is another phase transition at \(\gamma=\theta=0\) denoted as \(\Delta_{\rm alg}\) below which the AMP algorithm solves the estimation problems optimally and above which it does not up to the value \(\Delta_{\rm IT}\) (the hard region, in red, is delimited by these two values). Going vertically up in the phase diagram in \(\gamma\) or \(\theta\) for \(\Delta<\Delta_{\rm alg}\) does not cause any encounter of the hard (red) phase, and thus sampling based on flow or diffusion or autoregressive networks works. This has been proven rigorously recently in [92] (with some technical regularity assumptions on the denoiser [90]).
Yet existing literature collects evidence that in inference problems that present such a hard phase, local dynamics algorithms such as MCMC and Langevin are _not able_ to sample efficiently until some yet lower values of noise \(\Delta_{\rm MCMC}\). In particular, this suggestion was put forward indirectly in [30] by arguing that the metastable phase in the hard region is glassy, and this glassy nature extends well beyond the region that is hard for message passing algorithms such as AMP. This was then shown explicitly in follow-up works starting with an analysis of the dynamics in a mixed spiked matrix-tensor model in [49], in the phase retrieval problem [79], the planted coloring [50] and on a rigorous basis in the planted clique problem [80]. Works such as [93] suggest that over-parametrization observed in modern neural networks mitigates those hurdles, and this may be one of the reasons why over-parametrization is beneficial.
In light of these works, it is interesting to note that sampling based on flow, diffusion or autoregressive networks also avoids hurdles stemming from the glassiness of the hard phase and rather effortlessly so by working in the space of marginals rather than configurations directly. The phase diagram presented in Fig. 2 indicates that both diffusion and autoregressive networks sample the \(P_{0}\) efficiently for any \(\Delta<\Delta_{\rm alg}\). This poses an intriguing question for future work of whether over-parametrization of neural networks that are learning the denoisers from data would still be so beneficial in these methods.
Finally, we comment on the relevance of these findings beyond the specific model discussed here, and in particular for the study of physical objects in finite dimension with short-range interaction. Do we expect a problem embedded in finite dimension to suffer the same fate as the one discussed here? While similar phase diagrams as Fig. 2 have been observed in finite dimension in e.g. [94; 70], the phenomenology of first-order transition is different. From nucleation arguments, the exponentially hard denoising phase is not expected to exist in finite dimension [95]. Indeed, it can be proven rigorously that for any graph that can be embedded in a finite-dimensional lattice, an efficient algorithm exists [96]. In this case, the analysis of whether a good denoiser can be learned with a neural network will require a finer study, depending on the discretization of the ODE close to the transition, and on the number of points in the data set (perhaps in the vein of [57]). These are interesting potential new directions of research.
## Conclusion
Our investigation into the efficiency of sampling with modern generative models, in comparison with traditional methods, reveals distinct strengths and weaknesses for each. By examining a specific class of probability distributions from statistical physics, we identified parameters where either method excels or falls short. Significantly, our approach highlighted challenges stemming from a first-order discontinuous transition for generative models-based sampling techniques even in regions of parameters where traditional samplers work efficiently. While generative models have shown promise across various applications, it is crucial to understand their potential pitfalls and advantages in specific contexts, and our paper makes a key step in this direction.
###### Acknowledgements.
We acknowledge funding from the Swiss National Science Foundation grants OperaGOST and SMArtNet. We also thank Ahmed El Alaoui, Hugo Cui, and Eric Vanden-Eijnden for enlightening discussions on these problems.
## Appendix A Computing free entropies: the planting trick
The main technical difficulty is the study of the tilted or pinned measure and its correlation with an equilibrium configuration. Here, we explain how this difficulty is avoided. Given a probability measure \(P_{0}\), the tilted measure reads
\[P_{\gamma}(\mathbf{x}|\mathbf{x}_{0},\mathbf{z})\propto e^{\gamma(t)^{2}\langle\mathbf{x},\mathbf{x}_{0}\rangle+\gamma(t)\langle\mathbf{z},\mathbf{x}\rangle-\frac{\gamma(t)^{2}}{2}\left\|\mathbf{x}\right\|^{2}}P_{0}(\mathbf{x})\,. \tag{10}\]
We shall restrict ourselves to "planted" models with a hidden assignment in the following. While this is the case for actual statistical inference models (such as the sparse Wigner model we consider), it turns out that the \(p\)-spin [82] and the NAE-SAT model [97] are contiguous to their planted version as long as \(T>T_{K}\) (for the \(p\)-spin) or \(\alpha<\alpha_{K}\) (for NAE-SAT). For the \(p\)-spin models, such contiguity has also been rigorously established in certain setups [98; 86; 99].
For all practical purposes, we shall thus study planted models. In this case, the difficulty is greatly simplified, as the joint distribution of the planted vector and the disorder equals the joint distribution of the disorder and an equilibrium configuration. This equivalence, along with the contiguity with the planted models for non-inference problems, allows us to replace the equilibrium configuration \(\mathbf{x_{0}}\) in the tilted measure with the planted vector.
For the case of the Sherrington-Kirkpatrick (SK) model, such an equivalence was rigorously proven and utilized in [36] using the contiguity between the planted and unplanted models. Concretely, Proposition 4.2 in [36] shows the equivalence between the following two methods for generating the tilted measure:
1. Sample \(\mathbf{x_{0}}\) uniformly. Then sample the interaction matrix \(J\) as \(J=\frac{\beta}{n}\mathbf{x_{0}}\mathbf{x_{0}}^{\top}+W\) where \(W\sim\text{GOE}\).
2. Sample \(J\sim\text{GOE}\). Then sample \(\mathbf{x_{0}}\sim P_{\text{SK}}(J)\),
where \(P_{\text{SK}}(J)\) denotes the Boltzmann Sherrington-Kirkpatrick measure with interaction matrix \(J\). Similarly, based on the contiguity between planted and unplanted models, we assume that the above equivalence holds for \(T>T_{K}\) (for the \(p\)-spin) and for \(\alpha<\alpha_{K}\) (for NAE-SAT). We note that for inference problems, i.e. when \(J\) is planted for both points (1), (2) above, the equivalence holds directly based on the definition of the posterior measure.
Additionally, the tilted measure can then be seen, in terms of free entropy, as the measure associated with an inference problem. Consider for concreteness the planted \(p\)-spin, also called the spike-tensor model [86; 87]: one extracts a random vector \(X^{0}\in\mathbb{R}^{N}\) from a Gaussian or a Rademacher distribution, and then one aims at recovering \(X^{0}\) from the observation of a) a Gaussian measurement \(Y=\alpha X^{0}+\beta Z\) and b) a noisy tensor measurement \(J_{ijk}=X^{0}_{i}X^{0}_{j}X^{0}_{k}+\frac{1}{T}\xi_{ijk}\ \forall\,i<j<k\). The Bayesian posterior of this model is nothing but the tilted measure (10) applied to the tensor spike model.
Interestingly, such considerations are not new. Nishimori [63] and later Iba [64] already used a similar trick, and [29; 78; 85] used it to discuss the dynamics starting from equilibrium conditions, while [100] used it in the context of error correction, all for the \(p\)-spin model. Recently, the same technique was used to prove the clustering property in the \(p\)-spin model [72].
Adding a Gaussian measurement to an inference problem is also a classical trick used when proving free energies in the mathematical physics literature, especially in the context of Guerra interpolation [101; 102] for Bayes optimal models, see e.g. [66; 32; 67; 103], and thus such free energies have been solved rigorously as well.
How does this change for the decimated problem? In this case, the additional Gaussian channel is replaced by an erasure channel. This is precisely one of the alternatives used as well in the mathematical physics literature! Indeed, the pinning lemma [68] (see also [69]) is often used instead of the Gaussian channel.
These considerations are really helpful, as we can now solve these problems as a simple variant of problems already solved in the literature, often rigorously (in the case of the \(p\)-spin we refer to [83] and [49], and for the spike Wigner model to [81; 84; 104]).
For the sparse case, a rigorous control is harder, and thus we shall simply stay at the level of rigour of the cavity method [20] and use the results of [74]. Note however that the method is trustworthy [97].
Finally, since these are variations of known inference problems, we can leverage on the existing work on approximate message passing [31] for the \(p\)-spin model [83; 87] and the spike model [81], see in particular [86] for a detailed presentation.
## Appendix B Asymptotic solutions and Phase Diagrams
In this section, we present how each of the phase diagrams we presented in the main text can be produced. Specifically, we first remind our definition of tilted and pinning measures and of the order parameters we are
going to use for the analysis. After defining the probability distribution associated to each problem, we give the expression of the RS free entropy, from which one can compute the values of the parameters of the problem at which the _spinodal points_ are located, i.e. the points at which the potential develops a second maximum by continuous deformation, and also the point corresponding to the _IT transition_, i.e. where the two maxima exchange the roles of global and local maximum.
We then report the expression of the denoisers (AMP/BP) used in the sampling schemes illustrated in the main text, with their associated self-consistent asymptotic equations (state evolution for AMP and Cavity equations for BP). Looking at the fixed points of these equations, starting from an uninformed and informed initialization, we can plot the difference between the values of the order parameter reached at the fixed point in these two cases, which allows detecting the phases in which multiple fixed points are present.
In Eq. (10) we have recalled the expression for the tilted measure characterizing the flow-based sampling scheme. As we already mentioned, \(\gamma\) is nothing but a rescaled sampling time, such that studying the properties of the tilted measure varying \(\gamma\) allows us to characterize the properties of the Bayesian denoising problem at all times during sampling.
In the same way, let us remind the pinning measure, which we use to analyse the autoregressive-based sampling procedure:
\[P_{\theta}(\mathbf{x}|\mathbf{x}_{0},S_{\theta})\propto P_{0}(\mathbf{x})\prod_{i\in S_{\theta}}\delta(x_{i}-[\mathbf{x}_{0}]_{i}) \tag{11}\]
where \(\theta\) is the fraction of pinned variables.
Finally, let us remind that we shall study the evolution in time (or equivalently in \(\gamma\in[0,\infty[\) (AWGN) and \(\theta\in[0,1]\) (BEC)) of the following order parameters:
\[\mu(\gamma) \equiv \frac{1}{N}\mathbb{E}[\hat{\mathbf{x}}(\gamma)\cdot\mathbf{x}_{0 }]\,, \tag{12}\] \[\chi(\gamma) \equiv \frac{1}{N}\mathbb{E}[\|\hat{\mathbf{x}}(\gamma)\|^{2}]\,, \tag{13}\]
and analogously we can define \(\mu(\theta)\) and \(\chi(\theta)\). Concretely, we will consider only cases in which the Nishimori identities hold, such that these two quantities always coincide, and thus we will restrict our analysis to \(\chi\).
Let us now go through each one of the models mentioned in the main text.
### Sparse rank-one matrix factorization
We consider the Bayes-Optimal rank-one matrix estimation (or rank-one matrix factorization) problem:
Given a hidden vector \(\mathbf{x}^{*}\), sampled from the so-called _Rademacher-Bernoulli_ prior distribution
\[P_{X}(x)=(1-\rho)\delta_{x,0}+\frac{\rho}{2}\left(\delta_{x,+1}+\delta_{x,-1} \right)\,,\quad x_{i}^{*}\sim P_{X}\ \forall\,i\,.\]
one has access to noisy observations, that is a matrix \(J_{ij}\) is composed by a rank-one spike plus i.i.d. Gaussian noise:
\[J_{ij}=\frac{x_{i}^{*}x_{j}^{*}}{\sqrt{N}}+\widetilde{z}_{ij}\,,\quad \widetilde{z}_{ij}=\widetilde{z}_{ji}\sim\mathcal{N}(0,\Delta)\,;\]
and the goal is to infer \(\mathbf{x}^{*}\) in the best way possible. There are many important problems in statistics and machine learning that can be expressed in this way [105], and this model has been the subject of many works both from the statistics [106, 107, 108, 81, 104] and the statistical physics communities [109, 83]. The presentation of this problem closely follows the study presented in [83].
From the Bayesian point of view, the problem amounts to sampling from the posterior. One way to introduce the model is through the following probability distribution:
\[P_{0}(\mathbf{x})\propto\prod_{i}P_{X}(x_{i})\prod_{i<j}\exp\left(\frac{1}{ \Delta\sqrt{N}}J_{ij}x_{i}x_{j}-\frac{1}{2\Delta N}x_{i}^{2}x_{j}^{2}\right)\,. \tag{14}\]
With this distribution, \(\mathbf{x}\) will be a random vector with, on average, a fraction \(\rho\) of components that are Ising spin variables (i.e. each \(x_{i}\) takes values \(\pm 1\)) and the remaining entries set to zero, so that the parameter \(\rho\) controls the sparsity of the vector we want to retrieve.
As discussed in Appendix A, the **tilted measure** with an equilibrium vector \(\mathbf{x}_{0}\) is equivalent to the one where the vector \(\mathbf{x}_{0}\) is planted. Therefore, in what follows, we shall assume that \(\mathbf{x}_{0}\) corresponds to the planted configuration. We thus obtain the following tilted measure for diffusion and flow-based models:
\[P_{\gamma}(\mathbf{x})\ =\ \frac{1}{Z_{\gamma}}\left(\prod_{i}P_{X}(x_{i})\right)e^{\gamma(t)^{2}\langle\mathbf{x},\mathbf{x}_{0}\rangle+\gamma(t)\langle\mathbf{z},\mathbf{x}\rangle-\frac{\gamma(t)^{2}}{2}\|\mathbf{x}\|^{2}}\left(\prod_{i<j}e^{\frac{1}{\Delta\sqrt{N}}\widetilde{z}_{ij}x_{i}x_{j}+\frac{1}{\Delta N}x_{i}x_{j}[\mathbf{x}_{0}]_{i}[\mathbf{x}_{0}]_{j}-\frac{1}{2\Delta N}x_{i}^{2}x_{j}^{2}}\right) \tag{10}\]
With respect to the original model, the tilted measure simply includes, in addition, a field in the planted direction, a random field, and a renormalization of the constant in front of the quadratic part. As mentioned in Appendix A, this corresponds to an equivalent inference problem with an additional Gaussian measurement.
The **pinned measure** is slightly different. In this case, it modifies the original problems as (denoting the pinned list as \(S_{\theta}\)):
\[P_{\theta}(\mathbf{x})\ =\ \frac{1}{Z_{\theta}}\left(\prod_{i\notin S_{\theta}}P_{X}(x_{i})\right)\left(\prod_{i\in S_{\theta}}\delta(x_{i}-x_{i}^{*})\right)\left(\prod_{i<j}e^{\frac{1}{\Delta\sqrt{N}}\widetilde{z}_{ij}x_{i}x_{j}+\frac{1}{\Delta N}x_{i}x_{j}[\mathbf{x}_{0}]_{i}[\mathbf{x}_{0}]_{j}-\frac{1}{2\Delta N}x_{i}^{2}x_{j}^{2}}\right) \tag{11}\]
#### B.1.1 Replica free entropy
The replica formula for such problems can be found in many places, and we refer to the mathematical literature for the detailed rigorous statements: [81, 84, 103, 104, 108, 110]. In particular, [103] gives a generic proof using the adaptive interpolation method when a Gaussian channel is added, which turns out to give the same measure as the tilted one, while [104] uses instead a pinned measure. In both cases, we can thus adapt the results in the literature: the asymptotic free entropy is given by the maximum of the so-called replica symmetric potential [20]:
\[\frac{1}{N}\mathbb{E}_{\mathbf{x}_{0},\mathbf{z},\tilde{\mathbf{z}}}\log Z_{\gamma}\ \xrightarrow[N\to\infty]{}\ \max_{m}\Phi_{RS}(m) \tag{12}\] \[\Phi_{RS}(m)\quad=\quad\mathbb{E}_{w,x_{0}}\left[\log Z_{x}\left(\frac{m}{\Delta},\frac{m}{\Delta}x_{0}+\sqrt{\frac{m}{\Delta}}w\right)\right]-\frac{m^{2}}{4\Delta} \tag{13}\]
where \(m\) is the order parameter of the problem, \(x_{0}\sim P_{X}\), \(w\sim\mathcal{N}(0,1)\) while \(Z_{x}\) depends on the specific measure considered. The same is valid for the pinned measure, as long as one substitutes \(Z_{\gamma}\) with \(Z_{\theta}\).
_Tilted measure:_ In the case of the **tilted measure** (10), we have
\[Z_{x}(A,B;x_{0}) =\int\mathrm{d}xP_{X}(x)\exp(\gamma^{2}xx_{0}+\gamma wx-\gamma^{2 }x^{2}/2)\exp(Bx-Ax^{2}/2) \tag{14}\] \[=\rho e^{-(A+\gamma^{2})/2}\cosh(B+\gamma w+\gamma^{2}x_{0})+(1-\rho)\]
that we can put into Eq. (13) to get
\[\Phi_{\mathrm{RS}}(\chi)= \rho\mathbb{E}_{w}\left[\log\left((1-\rho)+\rho e^{-\widetilde{ \chi}/2}\cosh\left(\widetilde{\chi}+\sqrt{\widetilde{\chi}}w\right)\right)\right] \tag{15}\] \[+(1-\rho)\mathbb{E}_{w}\left[\log\left((1-\rho)+\rho e^{- \widetilde{\chi}/2}\cosh\left(\sqrt{\widetilde{\chi}}w\right)\right)\right]- \frac{\chi^{2}}{4\Delta}\,,\quad\widetilde{\chi}=\frac{\chi}{\Delta}+\gamma^ {2}\,.\]
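To make this concrete, the following minimal Python sketch (our own; the function name and quadrature order are arbitrary choices, not from any particular codebase) evaluates the RS potential of Eq. (15) by Gauss-Hermite quadrature, so that its local maxima, and hence the spinodal and IT points, can be located numerically on a grid.

```python
import numpy as np

# Gauss-Hermite quadrature for E_w[f(w)] with w ~ N(0,1):
# E_w[f(w)] ≈ sum_k gw_k f(gx_k), with gx = sqrt(2)*nodes, gw = weights/sqrt(pi).
nodes, weights = np.polynomial.hermite.hermgauss(101)
gx, gw = np.sqrt(2.0) * nodes, weights / np.sqrt(np.pi)

def phi_rs_tilted(chi, rho, delta, gamma2):
    """RS potential of Eq. (15) for the tilted sparse rank-one model."""
    chi_t = chi / delta + gamma2
    def avg_log(shift):
        v = (1 - rho) + rho * np.exp(-chi_t / 2.0) * np.cosh(shift + np.sqrt(chi_t) * gx)
        return np.dot(gw, np.log(v))
    return rho * avg_log(chi_t) + (1 - rho) * avg_log(0.0) - chi**2 / (4.0 * delta)

rho = 0.08
delta = 1.05 * rho**2
chis = np.linspace(0.0, rho, 400)
phis = [phi_rs_tilted(c, rho, delta, gamma2=0.2) for c in chis]
# local maxima of `phis` give the candidate fixed points; tracking them as
# (delta, gamma2) vary reproduces the spinodal and IT lines of Fig. 3 below
```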
_Pinning measure:_ Meanwhile, considering the **pinning measure** (15) leads to
\[Z_{x}(A,B;x_{0}) =\begin{cases}\int\mathrm{d}xP_{X}(x)\delta_{x,x_{0}}\exp(Bx-Ax^{2 }/2)&\text{with probability }\theta\\ \int\mathrm{d}xP_{X}(x)\exp(Bx-Ax^{2}/2)&\text{with probability }1-\theta \end{cases} \tag{16}\] \[=\begin{cases}P_{X}(x_{0})\exp\left(Bx_{0}-Ax_{0}^{2}/2\right)& \text{with probability }\theta\\ \rho\exp(-A/2)\cosh(B)+(1-\rho)&\text{with probability }1-\theta\end{cases}\]
which in turn gives
\[\Phi_{\mathrm{RS}}(\chi)= \theta\left(\rho\log\rho+(1-\rho)\log(1-\rho)+\frac{\rho}{2} \widetilde{\chi}\right) \tag{17}\] \[+(1-\theta)\bigg{[}\rho\mathbb{E}_{w}\left[\log\left((1-\rho)+ \rho e^{-\widetilde{\chi}/2}\cosh\left(\widetilde{\chi}+\sqrt{\widetilde{\chi}} w\right)\right)\right]\] \[+(1-\rho)\mathbb{E}_{w}\left[\log\left((1-\rho)+\rho e^{- \widetilde{\chi}/2}\cosh\left(\sqrt{\widetilde{\chi}}w\right)\right)\right] \bigg{]}-\frac{\chi^{2}}{4\Delta}\,,\quad\widetilde{\chi}=\frac{\chi}{\Delta}\,.\]
#### B.1.2 Message-passing algorithm
The derivation of the AMP algorithm for this problem has a long history, and is connected to the Thouless-Anderson-Palmer (TAP) equations [51]. For this problem, the introduction of TAP as an iterative algorithm is due to Bolthausen [111] and has been adapted to the present situation in [107, 81].
_Tilted measure:_ The equivalence between the tilted and the planted measure with external field allows us to reduce the AMP iterations for the tilted measure to the ones for an associated inference problem. In [83], the authors provided a framework for deriving the AMP iterates for such an inference problem involving pair-wise interactions between spins. While the tilted measure defined by Eq. (14) involves additional random and planted fields, the generality of the derivation allows them to be straightforwardly incorporated into the single-site factors \(P_{X}(x_{i})\). Adapting the derivation of Equations 66 and 67 in [83], we obtain:
\[\begin{cases}\widehat{x}_{i}^{t+1}=\frac{\rho\tanh\left(B_{i}^{t}+\frac{\alpha(t)}{\beta(t)^{2}}[\mathbf{Y}_{t}]_{i}\right)}{\rho+\frac{(1-\rho)\exp\left((A^{t}+\gamma^{2})/2\right)}{\cosh\left(B_{i}^{t}+\frac{\alpha(t)}{\beta(t)^{2}}[\mathbf{Y}_{t}]_{i}\right)}},\quad\sigma_{i}^{t+1}=\rho\,\frac{\rho+(1-\rho)e^{(A^{t}+\gamma^{2})/2}\cosh\left(B_{i}^{t}+\frac{\alpha(t)}{\beta(t)^{2}}[\mathbf{Y}_{t}]_{i}\right)}{\left(\rho\cosh\left(B_{i}^{t}+\frac{\alpha(t)}{\beta(t)^{2}}[\mathbf{Y}_{t}]_{i}\right)+(1-\rho)\exp\left((A^{t}+\gamma^{2})/2\right)\right)^{2}}\\ A^{t}=\frac{\|\widehat{\mathbf{x}}^{t}\|_{2}^{2}}{\Delta N};\quad B_{i}^{t}=\frac{1}{\Delta\sqrt{N}}\mathbf{J}_{i}\cdot\widehat{\mathbf{x}}^{t}-\frac{1}{N\Delta}\widehat{x}_{i}^{t-1}\sum_{k}\sigma_{k}^{t}\end{cases} \tag{15}\]
where \(\alpha(t)\) and \(\beta(t)\) are the functions defining the interpolant process, fixed at the start, and \(\mathbf{Y}_{t}\) is the value of the noisy observation at time \(t\).
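For illustration, here is a minimal Python sketch of these iterations (our own; the function name, initialization scale, and stopping criterion are arbitrary choices). It assumes a symmetric \(N\times N\) observation matrix `J` and takes as input the precomputed tilt field \((\alpha(t)/\beta(t)^{2})\,\mathbf{Y}_{t}\).

```python
import numpy as np

def amp_tilted(J, field, rho, delta, gamma2, n_iter=500, tol=1e-10, rng=None):
    """AMP iterations of Eq. (15) for the tilted sparse rank-one model (a sketch).
    J: symmetric N x N observation matrix; field: (alpha/beta^2) * Y_t per site."""
    rng = rng or np.random.default_rng()
    N = J.shape[0]
    x_hat = 1e-3 * rng.standard_normal(N)      # uninformed initialization
    x_prev = np.zeros(N)
    sigma = np.full(N, rho)                    # prior variance of the estimates
    for _ in range(n_iter):
        A = x_hat @ x_hat / (delta * N)
        B = (J @ x_hat) / (delta * np.sqrt(N)) - x_prev * sigma.sum() / (delta * N)
        h = B + field                          # effective local field
        eA = np.exp((A + gamma2) / 2.0)
        x_new = rho * np.tanh(h) / (rho + (1 - rho) * eA / np.cosh(h))
        sigma = rho * (rho + (1 - rho) * eA * np.cosh(h)) \
                / (rho * np.cosh(h) + (1 - rho) * eA) ** 2
        if np.mean(np.abs(x_new - x_hat)) < tol:
            x_hat = x_new
            break
        x_prev, x_hat = x_hat, x_new
    return x_hat, sigma
```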
_Pinning measure:_ When considering the pinning measure (14), the AMP equations are only a slight variation of the ones presented for the flow-based case.
Specifically, in autoregressive-based sampling we choose a fraction \(\theta\) of the variables, for which we fix \(\widehat{x}_{i}^{t}=[\mathbf{x}_{0}]_{i},\;\sigma_{i}^{t}=0\); this is due to the fact that their posterior means are completely polarized on the solution. For the rest of the variables, a fraction \(1-\theta\), the AMP equations are exactly the ones for diffusion in Eq. (15), at \(\gamma=0\).
The resulting algorithm is thus
\[\begin{cases}\quad\widehat{x}_{i}^{t+1}=\begin{cases}[\mathbf{x}_{0}]_{i}&\text{if }i\in S_{\theta}\\ \frac{\rho\tanh(B_{i}^{t})}{\rho+\frac{(1-\rho)\exp(A^{t}/2)}{\cosh(B_{i}^{t})}},&\text{otherwise}\end{cases},\quad\sigma_{i}^{t+1}=\begin{cases}0&\text{if }i\in S_{\theta}\\ \rho\,\frac{\rho+(1-\rho)e^{A^{t}/2}\cosh(B_{i}^{t})}{\left(\rho\cosh(B_{i}^{t})+(1-\rho)\exp(A^{t}/2)\right)^{2}}&\text{otherwise}\end{cases}\\ A^{t}=\frac{\|\widehat{\mathbf{x}}^{t}\|_{2}^{2}}{\Delta N};\quad B_{i}^{t}=\frac{1}{\Delta\sqrt{N}}\mathbf{J}_{i}\cdot\widehat{\mathbf{x}}^{t}-\frac{1}{N\Delta}\widehat{x}_{i}^{t-1}\sum_{k}\sigma_{k}^{t}\end{cases} \tag{16}\]
#### B.1.3 State evolution equations
The advantage of AMP is that it can be rigorously tracked by the State Evolution equations [111, 107, 31] that turn out to be nothing but the fixed point equations of the associated replica free entropy. Again, we can use the generic results reported in [83] to get
\[m^{t+1}=\mathbb{E}_{x_{0},w}\left[f_{\text{in}}\left(\frac{m^{t}}{\Delta}, \frac{m^{t}}{\Delta}x_{0}+\sqrt{\frac{m^{t}}{\Delta}}w\right)x_{0}\right] \tag{17}\]
where \(x_{0}\sim P_{X},\,w\sim\mathcal{N}(0,1)\) and \(f_{\text{in}}\) is the input channel and depends on the specific problem.
_Tilted measure:_ For the **tilted measure** (13) we get
\[\begin{split} f_{\text{in}}(A,B;x_{0})&=\frac{\int\mathrm{d}x\,x\,P_{X}(x)\exp(\gamma^{2}xx_{0}+\gamma wx-\gamma^{2}x^{2}/2)\exp(Bx-Ax^{2}/2)}{\int\mathrm{d}x\,P_{X}(x)\exp(\gamma^{2}xx_{0}+\gamma wx-\gamma^{2}x^{2}/2)\exp(Bx-Ax^{2}/2)}\\ &=\frac{\rho\tanh(B+\gamma w+\gamma^{2}x_{0})}{\rho+\frac{(1-\rho)\exp((A+\gamma^{2})/2)}{\cosh(B+\gamma w+\gamma^{2}x_{0})}}\end{split} \tag{18}\]
which leads to the state evolution equations:
\[\chi^{t+1}=\rho^{2}\mathbb{E}_{w}\left[\frac{\tanh(\widetilde{\chi}^{t}+\sqrt{ \widetilde{\chi}^{t}}w)}{\rho+\frac{(1-\rho)\exp(\widetilde{\chi}^{t}/2)}{ \cosh(\widetilde{\chi}^{t}+\sqrt{\widetilde{\chi}^{t}}w)}}\right]\,,\quad \widetilde{\chi}^{t}\equiv\frac{\chi^{t}}{\Delta}+\gamma^{2}\,,\quad w\sim \mathcal{N}(0,1) \tag{19}\]
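Concretely, Eq. (19) can be iterated as in the following sketch (our own; the quadrature order and tolerance are arbitrary choices). Running it from an uninformed and an informed initialization and comparing the two fixed points is precisely how the coexistence regions of the phase diagrams below are detected.

```python
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(201)
gx, gw = np.sqrt(2.0) * nodes, weights / np.sqrt(np.pi)

def se_fixed_point(chi0, delta, rho, gamma2, n_iter=5000, tol=1e-12):
    """Iterate the state evolution of Eq. (19) starting from chi0."""
    chi = chi0
    for _ in range(n_iter):
        ct = chi / delta + gamma2
        h = ct + np.sqrt(ct) * gx
        f = rho * np.tanh(h) / (rho + (1 - rho) * np.exp(ct / 2.0) / np.cosh(h))
        chi_new = rho * np.dot(gw, f)
        if abs(chi_new - chi) < tol:
            break
        chi = chi_new
    return chi

rho = 0.08
delta, gamma2 = 1.05 * rho**2, 0.5
chi_uninf = se_fixed_point(1e-8, delta, rho, gamma2)  # uninformed initialization
chi_inf = se_fixed_point(rho, delta, rho, gamma2)     # informed: chi = rho
# chi_inf - chi_uninf > 0 signals coexisting fixed points (coloured regions)
```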
_Pinning measure:_ In the same way, the **pinning measure** (14) is associated with
\[f_{\text{in}}(A,B;x_{0}) =\begin{cases}\frac{P_{X}(x_{0})\,x_{0}\exp\left(Bx_{0}-Ax_{0}^{2}/2\right)}{P_{X}(x_{0})\exp\left(Bx_{0}-Ax_{0}^{2}/2\right)}&\text{with probability }\theta\\ \frac{\int\mathrm{d}x\,x\,P_{X}(x)\exp\left(Bx-Ax^{2}/2\right)}{\int\mathrm{d}x\,P_{X}(x)\exp\left(Bx-Ax^{2}/2\right)}&\text{with probability }1-\theta\end{cases} \tag{15}\] \[=\begin{cases}x_{0}&\text{with probability }\theta\\ \rho\tanh(B)/(\rho+(1-\rho)\exp\left(A/2\right)/\cosh(B))&\text{with probability }1-\theta\end{cases}\]
and consequently to the fixed point equations
\[\frac{\chi^{t+1}}{\rho}=\theta+(1-\theta)\mathbb{E}_{w}\left[\frac{\rho\tanh(\widetilde{\chi}^{t}+\sqrt{\widetilde{\chi}^{t}}w)}{\rho+\frac{(1-\rho)\exp(\widetilde{\chi}^{t}/2)}{\cosh(\widetilde{\chi}^{t}+\sqrt{\widetilde{\chi}^{t}}w)}}\right]\,,\quad\widetilde{\chi}^{t}\equiv\frac{\chi^{t}}{\Delta}\,,\quad w\sim\mathcal{N}(0,1)\,. \tag{16}\]
#### B.1.4 Phase diagrams
In Fig. 3 we present the phase diagrams for the sparse rank-one matrix factorization problem, choosing \(\rho=0.08\) as the value of the sparsity. We recall that \(\rho\) must be small enough to observe a first-order phenomenology [108].
The section of the parameter space displayed is the same as in the plots presented in the main text, but here we display directly the difference \(\chi_{\text{inf}}-\chi_{\text{uninf}}\), so that the coloured zones of the plots are the ones displaying multiple fixed points, as opposed to the black ones. We furthermore draw as white dashed lines the spinodal points, and as a black dashed line the IT threshold, both defined at the beginning of Appendix B. For the flow-based plot, we also show explicitly in Fig. 4 and Fig. 5 the behaviour of the free entropy functional for \(\Delta/\rho^{2}=1.05\) and \(\Delta/\rho^{2}=0.98\). For Fig. 4 the first order transition is apparent, while for Fig. 5 the transition is found to be continuous.
_Autoregressive networks vs flows:_ As we can see from the plots, for this model the tri-critical point for flow-based sampling is at \(\Delta_{\text{tri}}/\rho^{2}\approx 1.08\), while for autoregressive-based sampling it is at \(\Delta_{\text{tri}}/\rho^{2}\approx 1.069\), meaning that the gap with MCMC and Langevin sampling is smaller in the latter case. In other words, there is a range of values of the inverse SNR \(\Delta\) for which autoregressive-network-based sampling (along with MCMC and Langevin) is efficient, while flow-based sampling is not.
This behaviour is not specific to this particular value of \(\rho\): over the whole range of sparsity in which the model presents a first-order phase transition, autoregressive-based sampling appears to be more efficient than flow-based sampling, in the sense explained above, as shown in Fig. 6.
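A crude numerical estimate of the flow-based tri-critical point can be obtained by scanning \(\Delta/\rho^{2}\) and testing for coexisting fixed points, reusing the `se_fixed_point` sketch above (the grid resolutions and threshold are arbitrary choices):

```python
import numpy as np

# estimate Delta_tri/rho^2 by scanning downwards for coexisting fixed points
# (reuses `se_fixed_point` from the sketch above)
rho = 0.08
g2_grid = np.linspace(0.0, 2.0, 200)
for ratio in np.linspace(1.12, 0.95, 35):
    delta = ratio * rho**2
    gap = max(se_fixed_point(rho, delta, rho, g2)
              - se_fixed_point(1e-8, delta, rho, g2) for g2 in g2_grid)
    if gap > 1e-6:                        # first ratio showing two fixed points
        print("Delta_tri/rho^2 ≈", ratio) # compare with ~1.08 quoted above
        break
```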
Figure 3: Phase diagrams for flow-based sampling (left) and autoregressive-based sampling (right) for the _sparse rank-one_ model, with Rademacher-Bernoulli prior and sparsity \(\rho=0.08\). On the x-axis we put the rescaled signal-to-noise-ratio \(\Delta/\rho^{2}\) and on the y-axis the ratio \(\gamma^{2}=\alpha^{2}/\beta^{2}\) (left) and the decimated ratio \(\theta\) (right). We compute the order parameter \(\chi/\rho\), defined in (12), both from an uninformed and an informed initialization, and we plot the difference between the two. The dashed white lines are the _spinodal lines_, while the dashed black one is the _IT threshold_. Note that for the flow-based case (left panel), we show explicitly in Fig. 4 and Fig. 5 the behaviour of the free entropy functional for \(\Delta/\rho^{2}=1.05\) and \(\Delta/\rho^{2}=0.98\). Here in both plots (left and right) we have that the dynamical transition is at \(\Delta_{d}/\rho^{2}\approx 1.041\), the IT/Kauzmann transition is at \(\Delta_{IT}/\rho^{2}\approx 1.029\), while the tri-critical points are at \(\Delta_{\text{tri}}/\rho^{2}\approx 1.08\) for flow-based and \(\Delta_{\text{tri}}/\rho^{2}\approx 1.069\) for autoregressive based sampling.
Figure 5: The free entropy function \(\Phi_{\rm RS}(\chi)\) in the region where the flow-based model succeeds, as the position of the maximum is always unique.
Figure 6: **Flows vs autoregressive in the sparse rank-one problem.** We plot the values of the tri-critical point for flow-based (orange dots) and autoregressive-based (blue crosses) sampling on the sparse rank-one model, when varying the sparsity parameter \(\rho\). Specifically, we plot \(\Delta_{\rm tri}/\rho\) for the two cases, comparing them also to the Dynamical transition value \(\Delta_{d}/\rho\) (red line) and the IT transition value \(\Delta_{\rm IT}/\rho\) (green line). Finally, we also put a black cross at the point \((\rho_{\rm max},\Delta_{\rm max}/\rho_{\rm max})\), taken from [83], which corresponds to the maximum value of \(\rho\) at which there is a first order phase transition.
_The easy phase (and a subtle point):_ For the expert reader, it is worth pointing out a subtle difference between what happens at \(\Delta=\rho^{2}\) and \(\Delta=\Delta_{\rm alg}<\rho^{2}\) (see also [83]). Indeed, we use here the definition of the easy phase as the absence of a metastable maximum trapping the dynamics. In other words, the state evolution of AMP initialized at the uninformed fixed point finds the global maximum. This is indeed what is going on for \(\Delta=\Delta_{\rm alg}\), as illustrated in Figure 5.
However, it is worth mentioning that already below \(\Delta=\rho^{2}\) the fixed point at zero is unstable, so that there are two fixed points: the "correct" one at large \(\chi\), and the one found by AMP at low but non-zero \(\chi\). This phenomenon, usually called the Baik-Ben Arous-Peche (BBP) transition, is illustrated in Figure 7. In this phase, while AMP, for instance (but also a standard spectral method such as PCA [83, 112]), would find an estimator correlated with the ground truth (and thus \(\chi^{*}>0\) even at \(\gamma=0\)), there is still a first-order phase transition and AMP would not be optimal. Since there is a discontinuity, the flow-based method suffers from the same problem and fails to sample as well.
### Ising \(p\)-spin model
We consider now the \(p\)-spin model (here with \(p=3\)), which is one of the most important statistical physics problems in spin glass theory. First, let us look at the Ising version, defined by the following Hamiltonian:
\[\mathcal{H}(\mathbf{x})=-\frac{\sqrt{3}}{N}\sum_{i<j<k}J_{ijk}x_{i}x_{j}x_{k}\,, \tag{101}\]
where \(J_{ijk}\sim\mathcal{N}(0,1)\) and the variables \(x_{i}\) are constrained to be \(\pm 1\). This version of the model was introduced in [113], and solved with the replica method in [19]. It is hard to overstate its importance in spin glass and glass theory, as it is the prototype of the mean-field random first-order model [114, 115, 116, 117, 46].
For temperatures higher than the spin glass, or Kauzmann, temperature, the model can be proven [98, 86, 99] to be contiguous to its "planted version", the tensor factorization problem [87]. This is nothing but the tensor generalization of the matrix estimation problem of the preceding section. We shall report the derivations presented in [86], which considers the planted version of the problem but which (thanks to contiguity) also describes the unplanted model; the mapping between the two can be done using \(\Delta=\frac{2}{3}T^{2}\).
As before, from a Bayesian perspective, the problem boils down to sampling the following posterior
\[P_{0}(\mathbf{x})\propto\prod_{i}P_{X}(x_{i})\prod_{i<j<k}e^{\frac{\sqrt{3} }{N}J_{ijk}x_{i}x_{j}x_{k}}\,, \tag{102}\]
where the prior \(P_{X}(x)=\delta_{x,-1}/2+\delta_{x,+1}/2\) constrains the variables to be \(\pm 1\).
As we recalled for the previous model, when considering the tilted measure (100) we will again exploit the fact that considering an equilibrium configuration \(\mathbf{x}_{0}\) is equivalent to taking as \(\mathbf{x}_{0}\) the planted configuration, as long as we are above the spin glass temperature.
We shall thus consider the following **tilted measure** for diffusion and flow-based sampling:
\[P_{\gamma}(\mathbf{x})=\frac{1}{Z_{\gamma}}\left(\prod_{i}P_{X}(x_{i})\right)e^{\gamma(t)^{2}\langle\mathbf{x},\mathbf{x}_{0}\rangle+\gamma(t)\langle\mathbf{z},\mathbf{x}\rangle-\frac{\gamma(t)^{2}}{2}\|\mathbf{x}\|^{2}}\prod_{i<j<k}e^{\frac{\sqrt{3}\beta}{N}J_{ijk}x_{i}x_{j}x_{k}} \tag{103}\]
Figure 7: Illustration of the BBP [112] transition around \(\Delta=\rho^{2}\), where the spurious maximum stops being at zero (see the difference between the zoomed panels (a) and (c)), but the correct, non-spurious maximum is still at a larger value of \(\chi\) (see the unzoomed panels (b) and (d)). This is still a hard phase for inference and sampling, and the problem remains so until \(\Delta<\Delta_{\rm alg}\).
As before, we can notice that, with respect to the original measure in (111), the tilting adds a field in the planted direction, a random field, and a renormalization of the constant in front of the quadratic part.
For the **pinned measure** defined in (118) the original problem becomes (denoting as \(S_{\theta}\) the pinned list):
\[P_{\theta}(\mathbf{x})=\frac{1}{Z_{\theta}}\left(\prod_{i\notin S_{\theta}}P_{X}(x_{i})\right)\left(\prod_{i\in S_{\theta}}\delta(x_{i}-x_{i}^{*})\right)\prod_{i<j<k}e^{\frac{\sqrt{3}\beta}{N}J_{ijk}x_{i}x_{j}x_{k}} \tag{119}\]
#### B.2.1 Replica free entropy
The replica formula for this model has been studied extensively in the literature, and can also be proven rigorously, see [86]. Again, the asymptotic free entropy is given by the maximum of the so-called replica symmetric potential [20]:
\[\frac{1}{N}\mathbb{E}_{\mathbf{x}_{0},\mathbf{z},J}\log Z_{\gamma}\ \xrightarrow[N\to\infty]{}\ \max_{m}\Phi_{RS}(m) \tag{120}\] \[\Phi_{RS}(m) = \mathbb{E}_{w,x_{0}}\left[\log Z_{x}\left(\frac{m^{2}}{\Delta},\frac{m^{2}}{\Delta}x_{0}+\sqrt{\frac{m^{2}}{\Delta}}w\right)\right]-\frac{m^{3}}{3\Delta} \tag{121}\]
where \(m\) is the order parameter of the problem, \(x_{0}\sim P_{X}\), \(w\sim\mathcal{N}(0,1)\) while \(Z_{x}\) depends on the specific measure considered. The same is valid for the pinned measure, as long as one substitutes \(Z_{\gamma}\) with \(Z_{\theta}\).
In the following, since we will be interested in the unplanted model, we will use the temperature \(T=\sqrt{3\Delta/2}\) as the signal-to-noise parameter.
_Tilted measure:_ In the case of the **tilted measure** (117) we have
\[Z_{x}(A,B;x_{0}) =\int\mathrm{d}xP_{X}(x)\exp(\gamma^{2}xx_{0}+\gamma wx-\gamma^{2 }x^{2}/2)\exp(Bx-Ax^{2}/2) \tag{122}\] \[=e^{-(A+\gamma^{2})/2}\cosh(B+\gamma w+\gamma^{2}x_{0})\]
that can be put into Eq. (120) to get
\[\Phi_{\mathrm{RS}}(\chi)=-\frac{\widetilde{\chi}}{2}+\mathbb{E}_{w}\left[\log \cosh\left(\widetilde{\chi}+\sqrt{\widetilde{\chi}}w\right)\right]-\frac{ \chi^{3}}{2T^{2}}\,,\quad\widetilde{\chi}=\frac{3\chi^{2}}{2T^{2}}+\gamma^{2}\,. \tag{123}\]
_Pinning measure:_ Meanwhile, considering the **pinning measure** (118) leads to
\[Z_{x}(A,B;x_{0}) =\begin{cases}\int\mathrm{d}xP_{X}(x)\delta_{x,x_{0}}\exp(Bx-Ax^ {2}/2)&\text{with probability }\theta\\ \int\mathrm{d}xP_{X}(x)\exp(Bx-Ax^{2}/2)&\text{with probability }1-\theta \end{cases} \tag{124}\] \[=\begin{cases}P_{X}(x_{0})\exp\left(Bx_{0}-Ax_{0}^{2}/2\right)& \text{with probability }\theta\\ \exp(-A/2)\cosh(B)&\text{with probability }1-\theta\end{cases}\]
which in turn gives
\[\Phi_{\mathrm{RS}}(\chi)=\frac{2\theta-1}{2}\widetilde{\chi}+(1-\theta)\mathbb{ E}_{w}\left[\log\cosh\left(\widetilde{\chi}+\sqrt{\widetilde{\chi}}w\right) \right]-\frac{\chi^{3}}{2T^{2}}\,,\quad\widetilde{\chi}=\frac{3\chi^{2}}{2T^{ 2}}\,. \tag{125}\]
#### B.2.2 Message-passing algorithm
As for the SK model and the sparse rank-one matrix factorization problem, the AMP iterates and the associated Thouless-Anderson-Palmer (TAP) equations [51] have been widely studied for the \(p\)-spin models [117]. Here we consider the formalism of [86], which presents the AMP iterates for the spike tensor model, and we use contiguity to derive equations valid for the unplanted model with the tilting (or pinning) field. We recall again that the mapping between the two models is given by \(\Delta=\frac{2T^{2}}{3}\).
_Tilted measure:_ Due to contiguity with the planted model, and thanks to the equivalence between the tilted measure and an associated measure with a planted field described in Section A, the AMP iterations for the tilted measure can be obtained through the ones for the associated inference problem described in [83]. This yields
\[\begin{cases}\widehat{x}_{i}^{t+1}=\tanh\left(B_{i}^{t}+\frac{\alpha(t)}{\beta(t)^{2}}[\mathbf{Y}_{t}]_{i}\right),\quad\sigma_{i}^{t+1}=\cosh\left(B_{i}^{t}+\frac{\alpha(t)}{\beta(t)^{2}}[\mathbf{Y}_{t}]_{i}\right)^{-2}\\ B_{i}^{t}=\frac{\sqrt{3}\beta}{N}\sum_{j<k}J_{ijk}\widehat{x}_{j}^{t}\widehat{x}_{k}^{t}-\frac{3}{N}\beta^{2}\widehat{x}_{i}^{t-1}\,\widehat{\mathbf{x}}^{t}\cdot\widehat{\mathbf{x}}^{t-1}\sum_{k}\sigma_{k}^{t}/N\end{cases} \tag{121}\]
where \(\alpha(t)\) and \(\beta(t)\) are the functions defining the interpolant process, fixed at the start, and \(\mathbf{Y}_{t}\) is the value of the noisy observation at time \(t\).
_Pinning measure:_ The AMP equations for the pinning measure (109) are a slight variation of the ones just presented for the tilted measure.
Specifically, in the autoregressive-based sampling scheme, for a fraction \(\theta\) of the variables we fix \(\widehat{x}_{i}^{t}=[\mathbf{x}_{0}]_{i},\;\sigma_{i}^{t}=0\), since their posterior means are totally polarized on the solution. For the remaining fraction \(1-\theta\) of the variables, the AMP equations are the same as those presented for diffusion in Eq. (121), provided that we fix \(\gamma=0\).
The resulting equations are:
\[\begin{cases}&\widehat{x}_{i}^{t+1}=\begin{cases}[\mathbf{x}_{0}]_{i}&\text{ if }i\in S_{\theta}\\ \tanh(B_{i}^{t}),&\text{otherwise}\end{cases},\quad\sigma_{i}^{t+1}=\begin{cases} 0&\text{if }i\in S_{\theta}\\ \cosh(B_{i}^{t})^{-2}&\text{otherwise}\end{cases}\\ &B_{i}^{t}=\frac{\sqrt{3}\beta}{N}\sum_{j<k}J_{ijk}\widehat{x}_{j}^{t}\widehat {x}_{k}^{t}-\frac{3}{N}\beta^{2}\widehat{x}_{i}^{t-1}\widehat{\mathbf{x}}^{t} \cdot\widehat{\mathbf{x}}^{t-1}\sum_{k}\sigma_{k}^{t}/N\end{cases} \tag{122}\]
#### B.2.3 State evolution equations
The advantage of AMP is that it can be rigorously tracked by the State Evolution equations [31], that can be proven to be nothing but the fixed point of the replica potential (107). We can use the formalism in [86] to get
\[m^{t+1}=\mathbb{E}_{x_{0},w}\left[f_{\text{in}}\left(\frac{(m^{t})^{2}}{ \Delta},\frac{(m^{t})^{2}}{\Delta}x_{0}+\sqrt{\frac{(m^{t})^{2}}{\Delta}}w \right)x_{0}\right] \tag{123}\]
where \(x_{0}\sim P_{X}\), \(w\sim\mathcal{N}(0,1)\) and \(f_{in}\) is the input channel and depends on the specific problem. Again, we will state our results using the temperature \(T=\sqrt{3\Delta/2}\).
_Tilted measure:_ For the **tilted measure** (108) we have
\[\begin{split} f_{\text{in}}(A,B;x_{0})&=\frac{\int \mathrm{d}xxP_{X}(x)\exp(\gamma^{2}xx_{0}+\gamma wx-\gamma^{2}x^{2}/2)\exp( Bx-Ax^{2}/2)}{\int\mathrm{d}xP_{X}(x)\exp(\gamma^{2}xx_{0}+\gamma wx-\gamma^{2}x^{2}/2) \exp(Bx-Ax^{2}/2)}\\ &=\tanh(B+\gamma w+\gamma^{2}x_{0})\end{split} \tag{124}\]
which leads to the State Evolution equations
\[\begin{split}\chi^{t+1}=\mathbb{E}_{w}\left[\tanh(\widetilde{ \chi}^{t}+\sqrt{\widetilde{\chi}^{t}}w)\right]\,,\quad\widetilde{\chi}^{t} \equiv\frac{3(\chi^{t})^{2}}{2T^{2}}+\gamma^{2}\,,\quad w\sim\mathcal{N}(0,1) \end{split} \tag{125}\]
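As a concrete illustration, here is a minimal Python sketch (our own; the quadrature order and tolerance are arbitrary choices) iterating this state evolution from the two initializations used to draw the phase diagrams.

```python
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(201)
gx, gw = np.sqrt(2.0) * nodes, weights / np.sqrt(np.pi)

def se_ising_pspin(chi0, T, gamma2, n_iter=5000, tol=1e-12):
    """Iterate the state evolution above for the Ising 3-spin, from chi0."""
    chi = chi0
    for _ in range(n_iter):
        ct = 3.0 * chi**2 / (2.0 * T**2) + gamma2
        chi_new = np.dot(gw, np.tanh(ct + np.sqrt(ct) * gx))
        if abs(chi_new - chi) < tol:
            break
        chi = chi_new
    return chi

# comparing the two initializations detects the coexistence region of Fig. 8
T, gamma2 = 0.70, 0.3
print(se_ising_pspin(1e-8, T, gamma2), se_ising_pspin(1.0, T, gamma2))
```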
_Pinning measure:_ In the same way, for the **pinning measure** (109) we get
\[\begin{split} f_{\text{in}}(A,B;x_{0})&=\begin{cases}\frac{P_{X}(x_{0})\,x_{0}\exp(Bx_{0}-Ax_{0}^{2}/2)}{P_{X}(x_{0})\exp(Bx_{0}-Ax_{0}^{2}/2)}&\text{with probability }\theta\\ \frac{\int\mathrm{d}x\,x\,P_{X}(x)\exp(Bx-Ax^{2}/2)}{\int\mathrm{d}x\,P_{X}(x)\exp(Bx-Ax^{2}/2)}&\text{with probability }1-\theta\end{cases}\\ &=\begin{cases}x_{0}&\text{with probability }\theta\\ \tanh(B)&\text{with probability }1-\theta\end{cases}\end{split} \tag{126}\]
and thus the fixed point equations
\[\chi^{t+1}=\theta+(1-\theta)\mathbb{E}_{w}\left[\tanh(\widetilde{\chi}^{t}+ \sqrt{\widetilde{\chi}^{t}}w)\right]\,,\quad\widetilde{\chi}^{t}\equiv\frac{ 3(\chi^{t})^{2}}{2T^{2}}\,,\quad w\sim\mathcal{N}(0,1) \tag{127}\]
#### B.2.4 Phase diagrams
The phase diagrams for the Ising \(p\)-spin are presented in Figure 8, in the same style as the previous section. In this case, notably, we observe that the flow-based method is advantageous with respect to the autoregressive one, in the sense that the former shows a smaller gap \(T_{\rm tri}-T_{d}\) compared to the latter. This is the opposite situation compared to the previous problem.
### Spherical p-spin model
The spherical \(p\)-spin model is a variation of the Ising version in which the configurations are constrained to lie on the sphere \(\mathbf{x}\in\mathcal{S}^{N-1}\) [117]. Again, we shall use the planted version, which at high temperature is asymptotically equivalent to the unplanted problem with independent Gaussian couplings. The Hamiltonian thus still reads:
\[\mathcal{H}(\mathbf{x})=-\frac{\sqrt{3}}{N}\sum_{i<j<k}J_{ijk}x_{i}x_{j}x_{k}\,, \tag{100}\]
where \(J_{ijk}\sim\mathcal{N}(0,1)\) and now \(P_{X}(x_{i})=\mathcal{N}(0,1)\).
As for its Ising version, for temperatures higher than the Kauzmann temperature the model can be proven [86] to be contiguous to its "planted version", the tensor factorization problem [87]. Here we report the derivations presented in [86]; the mapping between the two is again \(\Delta=\frac{2}{3}T^{2}\).
From a Bayesian point of view, the posterior measure we want to sample is the following:
\[P_{0}(\mathbf{x})\propto\prod_{i}P_{X}(x_{i})\prod_{i<j<k}e^{\frac{\sqrt{3}}{ N}J_{ijk}x_{i}x_{j}x_{k}}\,, \tag{101}\]
where \(P_{X}(x)=\mathcal{N}(0,1)\), which in high dimensions constrains the variables to lie on the sphere.
We first consider the following **tilted measure**, arising in diffusion and flow-based sampling:
\[P_{\gamma}(\mathbf{x})=\frac{1}{Z_{\gamma}}\left(\prod_{i}P_{X}(x_{i})\right)e^{\gamma(t)^{2}\langle\mathbf{x},\mathbf{x}_{0}\rangle+\gamma(t)\langle\mathbf{z},\mathbf{x}\rangle-\frac{\gamma(t)^{2}}{2}\|\mathbf{x}\|^{2}}\prod_{i<j<k}e^{\frac{\sqrt{3}\beta}{N}J_{ijk}x_{i}x_{j}x_{k}} \tag{102}\]
Again, with respect to the original measure in Eq. (101), this measure presents an additional field in the planted direction, a random field, and a constant factor depending on the \(\ell_{2}\) norm.
Figure 8: Phase diagrams for flow-based sampling (left) and autoregressive-based sampling (right) for the _Ising \(p\)-spin_ model. On the x-axis we put the temperature \(T\) and on the y-axis the ratio \(\gamma^{2}=\alpha^{2}/\beta^{2}\) (left) and the decimated ratio \(\theta\) (right). We compute the order parameter \(\chi\), defined in (100), both from an uninformed and an informed initialization, and we plot the difference between the two. The dashed white lines are the _spinodal lines_, while the dashed black one is the _IT threshold_, both defined at the beginning of the section. In both plots we have that the dynamical transition is at \(T_{d}\approx 0.682\), the Kauzmann transition is at \(T_{K}\approx 0.652\), while the tri-critical points are at \(T_{\rm tri}\approx 0.741\) for flow-based and \(T_{\rm tri}\approx 0.759\) for autoregressive-based sampling.
For the **pinned measure** defined in (111) the original problem becomes (denoting as \(S_{\theta}\) the pinned list):
\[P_{\theta}(\mathbf{x})=\frac{1}{Z_{\theta}}\left(\prod_{i\notin S_{\theta}}P_{X}(x_{i})\right)\left(\prod_{i\in S_{\theta}}\delta(x_{i}-x_{i}^{*})\right)\prod_{i<j<k}e^{\frac{\sqrt{3}\beta}{N}J_{ijk}x_{i}x_{j}x_{k}} \tag{112}\]
#### B.3.1 Replica free entropy
The replica free entropy for this model has been studied extensively in the spin-glass literature, and can also be proven rigorously, see e.g. [86].
As for the previous models, the asymptotic free entropy is given by the maximum of the so-called replica symmetric potential [20]:
\[\frac{1}{N}\mathbb{E}_{\mathbf{x}_{0},\mathbf{z},J}\log Z_{\gamma}\ \xrightarrow[N\to\infty]{}\ \max_{m}\Phi_{RS}(m) \tag{113}\] \[\Phi_{RS}(m) = \mathbb{E}_{w,x_{0}}\left[\log Z_{x}\left(\frac{m^{2}}{\Delta},\frac{m^{2}}{\Delta}x_{0}+\sqrt{\frac{m^{2}}{\Delta}}w\right)\right]-\frac{m^{3}}{3\Delta} \tag{114}\]
where \(m\) is the order parameter of the problem, \(x_{0}\sim P_{X}\), \(w\sim\mathcal{N}(0,1)\) and \(Z_{x}\) depends on the specific measure considered. The same is valid for the pinned measure, as long as one substitutes \(Z_{\gamma}\) with \(Z_{\theta}\).
Using the same derivation, in the following we will employ \(T=\sqrt{3\Delta/2}\) as the signal-to-noise parameter, since we are interested in studying the unplanted model.
_Tilted measure:_ In the case of the **tilted measure** (110) we have
\[Z_{x}(A,B;x_{0}) =\int\mathrm{d}xP_{X}(x)\exp(\gamma^{2}xx_{0}+\gamma wx-\gamma^{2 }x^{2}/2)\exp(Bx-Ax^{2}/2) \tag{115}\] \[=\sqrt{\frac{2\pi}{A+\gamma^{2}+1}}\exp\left(\frac{(B+\gamma^{2} x_{0}+\gamma w)^{2}}{2(A+\gamma^{2}+1)}\right)\]
which leads to
\[\Phi_{\text{RS}}(\chi)=\frac{\widetilde{\chi}}{2}+\frac{1}{2}\log\left(\frac{2 \pi}{\widetilde{\chi}+1}\right)-\frac{1}{2T^{2}}\chi^{3}\,,\quad\widetilde{ \chi}=\frac{3\chi^{2}}{2T^{2}}+\gamma^{2}\,. \tag{116}\]
_Pinning measure:_ Meanwhile, considering the **pinning measure** (111) leads to
\[Z_{x}(A,B;x_{0})=\begin{cases}P_{X}(x_{0})\exp\left(Bx_{0}-\frac{A}{2}x_{0}^{ 2}\right)&\text{with probability }\theta\\ \sqrt{\frac{2\pi}{A+1}}\exp\left(\frac{B^{2}}{2(A+1)}\right)&\text{with probability }1-\theta\end{cases} \tag{117}\]
which in turn gives
\[\Phi_{\text{RS}}(\chi)=\frac{\widetilde{\chi}}{2}+\frac{1-\theta}{2}\log \left(\frac{2\pi}{\widetilde{\chi}+1}\right)-\theta\log(\sqrt{2\pi}e)-\frac{ \chi^{3}}{2T^{2}}\,,\quad\widetilde{\chi}=\frac{3\chi^{2}}{2T^{2}}\,. \tag{118}\]
#### B.3.2 Message-passing algorithm
As already mentioned for the Ising version of the model, the Thouless-Anderson-Palmer (TAP) equations [51] have been widely studied for the \(p\)-spin models [117]. We use the results of [86], which presents the AMP algorithm for the Spike Tensor model, and we use contiguity to derive equations valid for the unplanted model with the tilting (or pinning) field. The mapping between the two models is again given by \(\Delta=\frac{2T^{2}}{3}\).
_Tilted measure:_ The equivalence between the tilted and the planted measure with external field allows us to map the AMP iterations for the tilted measure to the ones for an associated inference problem. Here we consider the formalism of [86], such that the resulting equations are:
\[\begin{cases}B_{i}^{t}=\frac{\sqrt{3}\beta}{N}\sum_{j<k}J_{ijk}\widehat{x}_{j}^{t}\widehat{x}_{k}^{t}-\frac{3}{N}\beta^{2}\sigma^{t}\widehat{x}_{i}^{t-1}\,\widehat{\mathbf{x}}^{t}\cdot\widehat{\mathbf{x}}^{t-1}\\ \widehat{x}_{i}^{t+1}=\frac{B_{i}^{t}+\frac{\alpha(t)}{\beta(t)^{2}}[\mathbf{Y}_{t}]_{i}}{3\beta^{2}\|\widehat{\mathbf{x}}^{t}\|_{2}^{2}/(2N)+\gamma^{2}+1}\,,\quad\sigma^{t+1}=\frac{1}{3\beta^{2}\|\widehat{\mathbf{x}}^{t}\|_{2}^{2}/(2N)+\gamma^{2}+1}\end{cases} \tag{119}\]
where \(\alpha(t)\) and \(\beta(t)\) are the functions defining the interpolant process, fixed at the start, and \(\mathbf{Y}_{t}\) is the value of the noisy observation at time \(t\).
_Pinning measure:_ When considering the pinning measure (144), the AMP equations are only a slight variation of the ones presented for the flow-based case.
Specifically, in autoregressive-based sampling we choose a fraction \(\theta\) of the variables, for which we fix \(\widehat{x}_{i}^{t}=[\mathbf{x}_{0}]_{i},\;\sigma_{i}^{t}=0\), which stems from the fact that their posterior means are completely polarized on the solution. For the remaining fraction \(1-\theta\) of the variables, the AMP equations are exactly the ones reported in Eq. (130), provided we fix \(\gamma=0\).
The resulting algorithm is the following:
\[\left\{\begin{array}{ll}\widehat{x}_{i}^{t+1}=\begin{cases}[\mathbf{x}_{0}]_{i}&\text{if }i\in S_{\theta}\\ \frac{B_{i}^{t}}{3\beta^{2}\|\widehat{\mathbf{x}}^{t}\|_{2}^{2}/(2N)+1},&\text{otherwise}\end{cases},\quad\sigma_{i}^{t+1}=\begin{cases}0&\text{if }i\in S_{\theta}\\ \frac{1}{3\beta^{2}\|\widehat{\mathbf{x}}^{t}\|_{2}^{2}/(2N)+1}&\text{otherwise}\end{cases}\\ B_{i}^{t}=\frac{\sqrt{3}\beta}{N}\sum_{j<k}J_{ijk}\widehat{x}_{j}^{t}\widehat{x}_{k}^{t}-\frac{3}{N}\beta^{2}\widehat{x}_{i}^{t-1}\,\widehat{\mathbf{x}}^{t}\cdot\widehat{\mathbf{x}}^{t-1}\sum_{k}\sigma_{k}^{t}/N\end{array}\right. \tag{131}\]
#### B.3.3 State evolution equations
As mentioned earlier, AMP iterations possess the salient property of being rigorously tracked by the State Evolution equations, which turn out to be the fixed point equations of the replica potential in Eq. (130). Here we again follow the results presented in [86], so that, as for the Ising version of the model, we have
\[m^{t+1}=\mathbb{E}_{x_{0},w}\left[f_{\text{in}}\left(\frac{(m^{t})^{2}}{\Delta },\frac{(m^{t})^{2}}{\Delta}x_{0}+\sqrt{\frac{(m^{t})^{2}}{\Delta}}w\right)x_ {0}\right] \tag{132}\]
where \(x_{0}\sim P_{X}\), \(w\sim\mathcal{N}(0,1)\) and \(f_{\text{in}}\) is the input channel, which depends on the specific problem. We also recall the mapping \(T=\sqrt{3\Delta/2}\), which we will use in the following presentation.
_Tilted measure:_ For the **tilted measure** (144) we have
\[\begin{split} f_{\text{in}}(A,B;x_{0})&=\frac{\int \mathrm{d}xxP_{X}(x)\exp(\gamma^{2}xx_{0}+\gamma wx-\gamma^{2}x^{2}/2)\exp( Bx-Ax^{2}/2)}{\int\mathrm{d}xP_{X}(x)\exp(\gamma^{2}xx_{0}+\gamma wx-\gamma^{2}x^{2}/2) \exp(Bx-Ax^{2}/2)}\\ &=\frac{B+\gamma^{2}x_{0}+\gamma w}{A+\gamma^{2}+1}\end{split} \tag{133}\]
which leads to the State Evolution equations
\[\chi^{t+1}=\frac{\widetilde{\chi}^{t}}{1+\widetilde{\chi}^{t}}\,,\quad \widetilde{\chi}^{t}\equiv\frac{3}{2T^{2}}(\chi^{t})^{2}+\gamma^{2}\,. \tag{134}\]
_Pinning measure:_ In the same way, for the **pinning measure** (144) we find
\[\begin{split} f_{\text{in}}(A,B;x_{0})&=\begin{cases}\frac{P_{X}(x_{0})\,x_{0}\exp\left(Bx_{0}-Ax_{0}^{2}/2\right)}{P_{X}(x_{0})\exp\left(Bx_{0}-Ax_{0}^{2}/2\right)}&\text{with probability }\theta\\ \frac{\int\mathrm{d}x\,x\,P_{X}(x)\exp(Bx-Ax^{2}/2)}{\int\mathrm{d}x\,P_{X}(x)\exp(Bx-Ax^{2}/2)}&\text{with probability }1-\theta\end{cases}\\ &=\begin{cases}x_{0}&\text{with probability }\theta\\ \frac{B}{A+1}&\text{with probability }1-\theta\end{cases}\end{split} \tag{135}\]
and thus the fixed point equations are
\[\chi^{t+1}=\theta+(1-\theta)\frac{\widetilde{\chi}^{t}}{1+\widetilde{\chi}^{t }}\,,\quad\widetilde{\chi}^{t}\equiv\frac{3(\chi^{t})^{2}}{2T^{2}} \tag{136}\]
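Since for the spherical model the state evolution is available in closed form, the whole phase diagram can be scanned cheaply. The following sketch (our own; grid ranges and tolerances are arbitrary choices) covers both the tilted and the pinned cases.

```python
import numpy as np

def se_spherical(chi0, T, gamma2, theta=0.0, n_iter=20000, tol=1e-14):
    """Closed-form state evolution for the spherical 3-spin (a sketch):
    tilted case for gamma2 > 0, theta = 0; pinned case for gamma2 = 0, theta > 0."""
    chi = chi0
    for _ in range(n_iter):
        ct = 3.0 * chi**2 / (2.0 * T**2) + gamma2
        chi_new = theta + (1.0 - theta) * ct / (1.0 + ct)
        if abs(chi_new - chi) < tol:
            break
        chi = chi_new
    return chi

# gap between informed and uninformed fixed points over a (T, gamma^2) grid
Ts = np.linspace(0.45, 0.90, 100)
g2s = np.linspace(0.0, 1.5, 100)
gap = np.array([[se_spherical(0.999, T, g2) - se_spherical(1e-9, T, g2)
                 for T in Ts] for g2 in g2s])
# nonzero entries of `gap` reproduce the coexistence region of Fig. 9
```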
#### B.3.4 Phase diagrams
The phase diagrams for the spherical \(p\)-spin are presented in Figure 9, and we plot the same quantities as for the previous models. We observe again that the flow-based method is advantageous with respect to the autoregressive one.
### NAE-SAT model
#### B.4.1 Target model definition
The \(k\)-hypergraph bicoloring (or \(k\)-NAESAT) problem is a prototypical constraint satisfaction problem defined on hypergraphs. An instance of the problem is determined by a hypergraph \(G=(V,E)\), where \(V\) is the set of \(N\) vertices and \(E\) the set of \(M\) hyperedges, each containing exactly \(k\) vertices.
Each vertex \(i\in V\) is associated with an Ising-spin variable \(x_{i}=\pm 1\), while each hyperedge \(a\in E\) is associated with a constraint involving the \(k\) vertices entering \(a\) (in the following, we will use \(\mathbf{x}_{\partial a}\) to indicate this set of variables).
For bicoloring, the \(a\)-th constraint is satisfied if there is at least one \(+1\) and one \(-1\) among the \(k\) variables of \(\mathbf{x}_{\partial a}\). In terms of probability distribution, this translates to
\[P_{0}(\mathbf{x})=\frac{1}{Z(G)}\prod_{a=1}^{M}\omega(\mathbf{x}_{\partial a} )\,,\quad\omega(x_{1},\ldots,x_{k})=\begin{cases}0&\text{if }\sum_{i=1}^{k}x_{i}=\pm k\\ 1&\text{otherwise}\end{cases} \tag{100}\]
In the following, we will focus on the asymptotic limit where both \(N\) and \(M\) go to infinity at constant ratio \(\alpha=M/N\).
This model again has the advantage of being contiguous to its planted version [69]. We refer to [76, 77, 69] for rigorous results, and to [73, 74, 75, 118] for results within the cavity approach.
#### B.4.2 BP equations and Bethe free entropy
To analyse the properties of the model, one can use the cavity method [21] from statistical physics, which allows one to derive the BP equations for the problem (see e.g. [118]); these can be written in terms of _cavity messages_ as
\[h_{i\to a}=f(\{u_{b\to i}\}_{b\in\partial_{i}\setminus a})\,,\quad u_{a\to i}= g(\{h_{j\to a}\}_{j\in\partial_{a}\setminus i}) \tag{101}\]
where
\[f(u_{1},\ldots,u_{d})=\frac{\prod_{i=1}^{d}(1+u_{i})-\prod_{i=1}^{d}(1-u_{i})} {\prod_{i=1}^{d}(1+u_{i})+\prod_{i=1}^{d}(1-u_{i})} \tag{102}\]
Figure 9: Phase diagrams for flow-based sampling (left) and autoregressive-based sampling (right) for the _Spherical p-spin_ model. On the x-axis we put the temperature \(T\) and on the y-axis the ratio \(\gamma^{2}=\alpha^{2}/\beta^{2}\) (left) and the decimated ratio \(\theta\) (right). We compute the order parameter \(\chi\), defined in (101), both from an uninformed and an informed initialization, and we plot the difference between the two. The dashed white lines are the _spinodal lines_, while the dashed black one is the _IT threshold_, both defined at the beginning of the section. In both plots we have that the dynamical transition is at \(T_{d}=\sqrt{3/8}\), the Kauzmann transition is at \(T_{K}\approx 0.58\), while the tri-critical points are at \(T_{\text{tri}}=2/3\) for flow-based and \(T_{\text{tri}}=\sqrt{1/2}\) for autoregressive based sampling.
is the function defining the messages going from variable nodes to factor nodes and
\[g(h_{1},\ldots,h_{k-1})=\frac{\sum_{x_{1},\ldots,x_{k}}\omega(x_{1},\ldots,x_{k})x _{k}\prod_{i=1}^{k-1}(1+h_{i}x_{i})}{\sum_{x_{1},\ldots,x_{k}}\omega(x_{1}, \ldots,x_{k})\prod_{i=1}^{k-1}(1+h_{i}x_{i})} \tag{100}\]
is the one defining the messages going from factor nodes to variable nodes.
Once a fixed point of the BP equations is reached, one can compute the free entropy from the resulting BP marginals. In its general form, it can be written as
\[\frac{1}{N}\ln Z(G)=\frac{1}{N}\sum_{i=1}^{N}\ln\mathcal{Z}_{0}^{\mathrm{v}}( \{u_{a\to i}\}_{a\in\partial i})+\frac{1}{N}\sum_{a=1}^{M}\ln\mathcal{Z}_{0}^ {\mathrm{c}}(\{h_{i\to a}\}_{i\in\partial a})-\frac{1}{N}\sum_{(i,a)}\ln \mathcal{Z}_{0}^{\mathrm{e}}(h_{i\to a},u_{a\to i})\, \tag{101}\]
where the last sum runs over the edges of the factor graph, and the local partition functions are defined as:
\[\mathcal{Z}_{0}^{\mathrm{v}}(u_{1},\ldots,u_{d}) =\sum_{x}\prod_{i=1}^{d}\left(\frac{1+xu_{i}}{2}\right)\, \tag{102}\] \[\mathcal{Z}_{0}^{\mathrm{c}}(h_{1},\ldots,h_{k}) =\sum_{x_{1},\ldots,x_{k}}\omega(x_{1},\ldots,x_{k})\prod_{i=1}^{k }\left(\frac{1+x_{i}h_{i}}{2}\right)\,\] (103) \[\mathcal{Z}_{0}^{\mathrm{e}}(h,u) =\sum_{x}\left(\frac{1+xh}{2}\right)\left(\frac{1+xu}{2}\right). \tag{104}\]
_Replica symmetric cavity equations:_ The simplest version of the cavity method relies on the assumption of Replica Symmetry (RS). This is rigorously justified in our case thanks to the fact that we are considering a Bayes-optimal model.
In this case, the resulting self-consistent equations are relatively simple:
\[\mathcal{P}^{RS}(h) =\sum_{d=0}^{\infty}p_{d}\int\left(\prod_{i=1}^{d}\mathrm{d}u_{i }\widehat{\mathcal{P}}^{RS}(u_{i})\right)\,\delta(h-f(u_{1},\ldots,u_{d}))\, \tag{105}\] \[\widehat{\mathcal{P}}^{RS}(u) =\int\left(\prod_{i=1}^{k-1}\mathrm{d}h_{i}\mathcal{P}^{RS}(h_{i })\right)\,\delta(u-g(h_{1},\ldots,h_{k-1}))\.\]
where \(\mathcal{P}^{RS}\) and \(\widehat{\mathcal{P}}^{RS}\) are probability distributions defined on the messages \(h\) and \(u\) respectively.
Since these are self-consistency equations defined on probability distributions, an analytical solution can be computed only in very restricted cases, while in practice one needs numerical techniques to find approximate solutions.
Still, _population dynamics_ techniques (see for example [21]) have been shown to provide very good approximations when one is interested in observables defined as average quantities over \(\mathcal{O}(N)\) cavity messages; a minimal sketch of such a scheme is given below.
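The sketch below is our own illustration (population size, sweep count, and initialization are arbitrary choices): it takes variable degrees Poisson(\(k\alpha\)), as appropriate for random \(k\)-uniform hypergraphs with \(M=\alpha N\) constraints. The tilted and pinned versions are obtained by replacing \(f\) with the \(f^{\mathrm{tilt}}\) or \(f^{\mathrm{pinn}}\) defined in the last subsection below.

```python
import numpy as np

rng = np.random.default_rng(0)

def g_naesat(h):
    """Factor-to-variable message u = g(h_1, ..., h_{k-1}) for bicoloring."""
    p, m = np.prod(1.0 + h), np.prod(1.0 - h)
    return (m - p) / (2.0 ** (h.size + 1) - p - m)

def f_naesat(u):
    """Variable-to-factor message h = f(u_1, ..., u_d); f of no inputs is 0."""
    p, m = np.prod(1.0 + u), np.prod(1.0 - u)
    return (p - m) / (p + m)

def population_dynamics(k=5, alpha=9.0, pop=10_000, sweeps=100):
    """RS population dynamics (a sketch): degrees are Poisson(k * alpha)."""
    H = rng.uniform(-1.0, 1.0, size=pop)   # population of fields h
    U = np.zeros(pop)                      # population of messages u
    for _ in range(sweeps):
        for _ in range(pop):
            U[rng.integers(pop)] = g_naesat(H[rng.integers(pop, size=k - 1)])
            d = rng.poisson(k * alpha)
            H[rng.integers(pop)] = f_naesat(U[rng.integers(pop, size=d)])
    return H, U

H, U = population_dynamics()
chi = np.mean(H**2)   # order parameter, cf. the expression at the end of B.4.3
```

Which fixed point the population converges to depends on the initialization, mirroring the informed/uninformed distinction used throughout this appendix.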
When the density of interactions \(\alpha\) becomes large, the Replica Symmetric assumption breaks down, and we have to consider the Replica Symmetry Breaking (RSB) phenomenon [119].
#### B.4.3 1RSB and tree reconstruction equations
The 1RSB cavity method aims to compute the potential
\[\Phi_{1}(m)=\lim_{N\to\infty}\frac{1}{N}\log\left(\sum_{\gamma}Z_{\gamma}^{m}\right) \tag{106}\]
where \(m\) is the so-called Parisi parameter.
In their general formulation, the 1RSB equations are self-consistent equations defined on distributions over probability distributions. Focusing on the case \(m=1\) simplifies the equations considerably, and the resulting formulas correspond to the so-called _Tree Reconstruction_ equations [120]. Moreover, the problem we are considering has a global spin-flip symmetry, which allows us to simplify the equations further; they can finally be written as
\[Q_{+}^{(t+1)}(h) =\sum_{d=0}^{\infty}p_{d}\int\left(\prod_{i=1}^{d}\mathrm{d}u_{i} \widehat{Q}_{+}^{(t)}(u_{i})\right)\,\delta(h-f(u_{1},\ldots,u_{d}))\, \tag{111}\] \[\widehat{Q}_{+}^{(t)}(u) =\sum_{x_{1},\ldots,x_{k-1}}\tilde{p}(x_{1},...,x_{k-1}|+)\int \left(\prod_{i=1}^{k-1}\mathrm{d}h_{i}Q_{+}^{(t)}(h_{i})\right)\delta(u-g(x_{1} h_{1},\ldots,x_{k-1}h_{k-1}))\,\]
where
\[\tilde{p}(x_{1},\ldots,x_{k-1}|+)=\frac{\omega(x_{1},\ldots,x_{k-1},+)}{\sum \limits_{x_{1}^{\prime},\ldots,x_{k-1}^{\prime}}\omega(x_{1}^{\prime},\ldots, x_{k-1}^{\prime},+)}=\frac{\sum\limits_{p=0}^{k-1}\omega_{p}\operatorname{ \mathbb{I}}\left[\sum\limits_{i=1}^{k-1}x_{i}=k-1-2p\right]}{\sum\limits_{p=0 }^{k-1}\binom{k-1}{p}\omega_{p}}. \tag{112}\]
As before, we can compute the RS free entropy for the planted problem as
\[\begin{split}\Phi_{\mathrm{RS}}=&\sum_{d=0}^{\infty }p_{d}\int\left(\prod_{i=1}^{d}\mathrm{d}u_{i}\widehat{Q}_{+}(u_{i})\right) \ln\mathcal{Z}_{0}^{\mathrm{v}}(u_{1},\ldots,u_{d})+\alpha\int\left(\prod_{i=1 }^{k}\mathrm{d}h_{i}Q_{+}(h_{i})\right)\ln\mathcal{Z}_{0}^{\mathrm{c}}(h_{1}, \ldots,h_{k})\\ &-\alpha k\int\mathrm{d}h\,\mathrm{d}uQ_{+}(h)\widehat{Q}_{+}(u )\ln\mathcal{Z}_{0}^{\mathrm{e}}(h,u)\,\end{split} \tag{113}\]
where \(\mathcal{Z}_{0}^{\mathrm{c}}\), \(\mathcal{Z}_{0}^{\mathrm{e}}\) and \(\mathcal{Z}_{0}^{\mathrm{v}}\) are the local partitions reported in Eq. (110), (112) and (111) respectively.
Finally, the order parameter for the problem, corresponding to the definition in Eq. (109), can be computed from the population dynamics simulations as
\[\chi=\int\mathrm{d}hQ_{+}(h)h^{2}\,. \tag{114}\]
#### B.4.4 Tilted and pinned measures
In the preceding section, we presented the classical \(k\)-NAESAT model and how its properties can be studied using the cavity method. Now, let us consider the tilted and pinning measures and see how they change the previous equations.
_Tilted measure:_ Starting with the **tilted measure**, the added tilting field enters directly the function defining the messages going from variable nodes to factor nodes, which is modified to
\[f^{\mathrm{tilt}}(u_{1},\ldots,u_{d})=\frac{e^{2\gamma(\gamma+z)}\prod\limits _{i=1}^{d}(1+u_{i})-\prod\limits_{i=1}^{d}(1-u_{i})}{e^{2\gamma(\gamma+z)} \prod\limits_{i=1}^{d}(1+u_{i})+\prod\limits_{i=1}^{d}(1-u_{i})}\,,\quad z \sim\mathcal{N}(0,1)\,, \tag{115}\]
while the function \(g\), defined in (111), is not modified. In terms of the partition function, the only local term that is modified is given by
\[\mathcal{Z}_{0}^{\mathrm{v}}(u_{1},\ldots,u_{d})=e^{\gamma(\gamma+z)}\prod \limits_{i=1}^{d}\left(\frac{1+u_{i}}{2}\right)+e^{-\gamma(\gamma+z)}\prod \limits_{i=1}^{d}\left(\frac{1-u_{i}}{2}\right)\,,\quad z\sim\mathcal{N}(0,1)\,. \tag{116}\]
while the other two terms remain the same.
_Pinning measure:_ For the **pinning measure**, again the function \(g\) remains the same, but now we have
\[f^{\mathrm{pinn}}(u_{1},\ldots,u_{d})=\begin{cases}1&\text{with probability $\theta$}\\ \frac{\prod\limits_{i=1}^{d}(1+u_{i})-\prod\limits_{i=1}^{d}(1-u_{i})}{\prod \limits_{i=1}^{d}(1+u_{i})+\prod\limits_{i=1}^{d}(1-u_{i})}&\text{with probability $1-\theta$}\,\end{cases} \tag{117}\]
and again the only contribution to the partition function that is modified is
\[\mathcal{Z}_{0}^{\mathrm{v}}(u_{1},\ldots,u_{d})=\begin{cases}\prod_{i=1}^{d} \left(\frac{1+u_{i}}{2}\right)&\text{with probability }\theta\\ \prod_{i=1}^{d}\left(\frac{1+u_{i}}{2}\right)+\prod_{i=1}^{d}\left(\frac{1-u_{ i}}{2}\right)&\text{with probability }1-\theta\end{cases}\,. \tag{101}\]
_Phase diagrams:_ In Fig. 10 we present the phase diagrams for the \(k\)-NAESAT problem, considering the case \(k=5\). Compared to the plots reported in the main text, here we display directly the difference \(\chi_{\mathrm{inf}}-\chi_{\mathrm{uninf}}\), so that the coloured zones of the plots are the ones displaying multiple fixed points, as opposed to the black ones. We furthermore draw the spinodal points as white dashed lines, and the IT threshold as a black dashed line, both defined at the beginning of Appendix B.
Compared to the previous models, which are defined on dense graphs, here we do not have fixed-point equations for an \(O(1)\) number of scalar parameters, but rather self-consistent equations on probability distributions. This clearly makes it harder to obtain a precise estimate of the tri-critical points, both for the tilted and the pinned measures. Within the precision we were able to achieve, the tri-critical point appears to be around \(\alpha_{\mathrm{tri}}\approx 8.4\) both for flow-based and for autoregressive-based sampling, and we are not able to state whether one of the two methods has a smaller gap than the other, i.e. whether there is a range of \(\alpha\) where one can sample efficiently while the other cannot. A more careful analysis is needed to clarify this point.
## Appendix C Sampling simulations for Algorithm 1
We now show how the phenomenology described by the previous phase diagrams influences the performance of the sampling algorithm in practice. In Fig. 11 we report numerical simulations in which we compare the asymptotic curves derived through state evolution with empirical simulations implementing flow-based sampling for the spherical \(p\)-spin model.
Specifically, we compare what happens at a high value of temperature \(T>T_{\mathrm{tri}}\), where we expect the sampling scheme to work, to a value of \(T\) in the interval \([T_{d},T_{\mathrm{tri}}]\), where, as previously explained, we predict the algorithm to fail.
In the first case, shown in the left part of the plot, we see that the finite-size simulations follow the theoretical prediction very well, already at a rather small number of variables. Conversely, in the second case, reported in the right part, the situation is different, and the curves show a gap.
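For reference, here is a minimal Euler-discretization sketch of a flow-based sampler (our own; it assumes the linear interpolant \(\alpha(t)=1-t\), \(\beta(t)=t\) and access to a posterior-mean denoiser such as the AMP of Eq. (119); numerical stability very close to \(t=0\) is not addressed).

```python
import numpy as np

def flow_sample(posterior_mean, N, n_steps=1000, rng=None):
    """Euler discretization of the probability-flow ODE for the linear
    interpolant alpha(t) = 1 - t, beta(t) = t (a sketch).
    `posterior_mean(y, t)` must return E[x0 | Y_t = y], e.g. an AMP fixed
    point run at gamma = (1 - t) / t."""
    rng = rng or np.random.default_rng()
    dt = 1.0 / n_steps
    y = rng.standard_normal(N)              # at t = 1, Y_1 = z is pure noise
    for k in range(n_steps, 1, -1):         # integrate from t = 1 down to t = dt
        t = k * dt
        x_hat = posterior_mean(y, t)        # E[x0 | Y_t]
        z_hat = (y - (1.0 - t) * x_hat) / t # E[z | Y_t] from Y_t = (1-t)x0 + t z
        velocity = -x_hat + z_hat           # alpha'(t) x_hat + beta'(t) z_hat
        y = y - dt * velocity               # Euler step backwards in t
    return y                                # approximate sample from P_0
```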
Moreover, we check when the Nishimori conditions are satisfied for different values of temperature \(T\) in the
Figure 10: Phase diagrams for flow-based sampling (left) and autoregressive-based sampling (right) for the \(k\)-NAESAT model. On the x-axis we put the constraints to variables ratio \(\alpha=M/N\) and on the y-axis the ratio \(\gamma^{2}=\alpha^{2}/\beta^{2}\) (left) and the decimated ratio \(\theta\) (right). We compute the order parameter \(\chi\), defined in (100), both from an uninformed and an informed initialization, and we plot the difference between the two. The dashed white lines are the _spinodal lines_, while the dashed black one is the _IT threshold_, both defined at the beginning of the section. In both plots we have that the dynamical transition is at \(\alpha_{d}\approx 9.465\), the Kauzmann (or condensation) transition is at \(\alpha_{K}\approx 10.3\), while the tri-critical points are at \(\alpha_{\mathrm{tri}}\approx 8.4\) for both flow-based and autoregressive-based sampling. Thus, within our numerical precision, the two methods seem to be equally efficient.
spherical \(p\)-spin. We do this by analysing the following observable:
\[\text{OV}(\gamma)\equiv\frac{1}{N}\mathbb{E}\left[\mathbf{Y}_{t}\cdot\langle\mathbf{x}\rangle\right] \tag{104}\]
which is the overlap between the observation vector \(\mathbf{Y_{t}}\) and the average magnetization \(\langle\mathbf{x}\rangle\).
Indeed, by definition of the observation process we can write
\[\text{OV}(\gamma)=\frac{\alpha(t)}{N}\mathbb{E}\left[\mathbf{x}_{0}\cdot\langle\mathbf{x}\rangle\right]+\frac{\beta(t)}{N}\mathbb{E}\left[\mathbf{z}\cdot\langle\mathbf{x}\rangle\right] \tag{105}\]
and furthermore by using Stein's lemma [121] we can write it as
\[\begin{split}\text{OV}(\gamma)&=\frac{\alpha(t)}{N}\mathbb{E}\left[\mathbf{x}_{0}\cdot\langle\mathbf{x}\rangle\right]+\frac{\beta(t)}{N}\mathbb{E}\left[\sum_{i}\partial_{z_{i}}\langle x_{i}\rangle\right]\\ &=\frac{\alpha(t)}{N}\mathbb{E}\left[\mathbf{x}_{0}\cdot\langle\mathbf{x}\rangle\right]+\frac{\beta(t)}{N}\mathbb{E}\left[\frac{\alpha(t)}{\beta(t)}\langle\|\mathbf{x}\|_{2}^{2}\rangle-\frac{\alpha(t)}{\beta(t)}\|\langle\mathbf{x}\rangle\|_{2}^{2}\right]\\ &=\alpha(t)\frac{1}{N}\mathbb{E}\left[\mathbf{x}_{0}\cdot\langle\mathbf{x}\rangle+\langle\|\mathbf{x}\|_{2}^{2}\rangle-\|\langle\mathbf{x}\rangle\|_{2}^{2}\right]\approx\alpha(t)+\alpha(t)\frac{1}{N}\mathbb{E}\left[\mathbf{x}_{0}\cdot\langle\mathbf{x}\rangle-\|\langle\mathbf{x}\rangle\|_{2}^{2}\right]\,.\end{split} \tag{106}\]
where in the last step we used the fact that \(\lim_{N\to\infty}\|\mathbf{x}\|_{2}^{2}/N=1\). Now, _if_ the Nishimori conditions are satisfied, we have \(\mathbb{E}\left[\mathbf{x}_{0}\cdot\langle\mathbf{x}\rangle\right]=\mathbb{E}\left[\|\langle\mathbf{x}\rangle\|_{2}^{2}\right]\). Plugging this equivalence back into (106), we see that in this case OV coincides with the function \(\alpha(t)\) that defines the interpolant process.
Figure 11: **Spherical p-spin: \(\chi(\gamma)\) for flow-based sampling.** We compare the results for the order parameter \(\chi\) computed from the State Evolution equations (black lines) to finite-size implementations of the sampling algorithm in two regimes. Left: \(T=\sqrt{\frac{3}{4}}>T_{\text{tri}}\); for all values of \(\gamma^{2}\), the SE equations have a unique fixed point, and thus the initialization plays no role. The resulting curve (black continuous line) is compared to algorithmic implementations of the flow-based sampling algorithm, for sizes \(N=200,400,800\). These curves, shown in different colours above, match the asymptotic prediction. Right: \(T_{d}<T=\frac{9\sqrt{2}}{20}<T_{\text{tri}}\); there is a range of values of \(\gamma^{2}\) for which the SE equations have two distinct fixed points. More precisely, for all values between the IT point (reported as a dotted line) and the informed spinodal point the model presents an algorithmically hard phase. The uninformed/informed state evolution curves (black continuous/dashed lines respectively) are compared to algorithmic implementations of the flow-based sampling algorithm, for sizes \(N=200,400,800\). Importantly, we show that in this regime there is an evident mismatch with the asymptotic prediction.
In Fig. 12, we use this equivalence and compare the behaviour of OV with \(\alpha(t)=1-t\), for \(T=\sqrt{\frac{3}{4}}>T_{\rm tri}\) and for \(T_{d}<T=\frac{9\sqrt{2}}{20}<T_{\rm tri}\), showing that in the first case we are Bayes optimal for all values of \(\gamma\), and thus at each sampling step. Instead, in the second case, even if at the beginning the two curves coincide, around the IT threshold they develop a gap, and after this point they follow two different paths. This behaviour is consistent with what we observed from the study of the order parameter \(\chi\), for which a gap with the state evolution equations also develops around the IT threshold.
## Appendix D Bayes optimal inference: Concentration, Replica Symmetry and Nishimori identities
We briefly recall here some important properties of the measure associated with optimal Bayesian denoising that we use in the main text. All of these properties are well known in the literature, but we recall them for completeness.
The first are the well-known Nishimori identities [63, 64, 23], which are valid for any Bayesian posterior estimation problem where one observes a variable \(Y\) sampled from \(P(Y|X)\) and attempts to reconstruct \(X\) by computing the posterior average. We reproduce the theorem and its proof here.
**Theorem 1** (Nishimori Identity).: _Let \(X^{(1)},\ldots,X^{(k)}\) be \(k\) i.i.d. samples (given \(Y\)) from the distribution \(P(X=\cdot\,|\,Y)\). Denote by \(\left\langle\cdot\right\rangle\) the "Boltzmann" expectation, that is, the average with respect to \(P(X=\cdot\,|\,Y)\), and by \(\mathbb{E}\left[\cdot\right]\) the "disorder" expectation, that is, with respect to \((X^{*},Y)\). Then for all continuous bounded functions \(f\) we can switch one of the copies for \(X^{*}\):_
\[\mathbb{E}\left[\left\langle f\left(Y,X^{(1)},\ldots,X^{(k-1)},X^{(k)}\right) \right\rangle_{k}\right]=\mathbb{E}\left[\left\langle f\left(Y,X^{(1)},\ldots,X^{(k-1)},X^{*}\right)\right\rangle_{k-1}\right] \tag{12}\]
Proof.: The proof is a consequence of Bayes' theorem and of the fact that both \(X^{*}\) and any of the copies \(X^{(k)}\) are distributed according to the posterior distribution. Denoting more explicitly the Boltzmann average over \(k\) copies for any
Figure 12: **Spherical p-spin: Checking the Nishimori conditions for flow-based sampling.** We compare the results for the overlap OV, defined in (12) and computed from finite-size (\(N=200,400,800\)) implementations of the sampling algorithm, to the behaviour with \(\gamma\) of the function \(\alpha(t)=1-t\) in two regimes. Left: \(T=\sqrt{\frac{3}{4}}>T_{\rm tri}\). The simulations, shown in different colours above, are very close to the asymptotic prediction, even if the sizes are relatively limited. Right: \(T_{d}<T=\frac{9\sqrt{2}}{20}<T_{\rm tri}\). Here, after the IT threshold there is an evident mismatch between the two curves, due to the departure of the algorithm from the Bayes-optimality regime.
function \(g\) as
\[\left\langle g(X^{(1)},\ldots,X^{(k)})\right\rangle_{k}:=\int\prod_{i=1}^{k}\mathrm{d}x_{i}\,P(x_{i}|Y)\,g(x_{1},\ldots,x_{k}) \tag{104}\]
we have, starting from the right-hand side
\[\mathbb{E}_{Y,X^{*}}\left[\left\langle f\left(Y,X^{(1)},\ldots,X^{(k-1)},X^{*}\right)\right\rangle_{k-1}\right]\] \[=\int\mathrm{d}x^{*}\,\mathrm{d}Y\,P(x^{*}|Y)P(Y)\left\langle f\left(Y,X^{(1)},\ldots,X^{(k-1)},x^{*}\right)\right\rangle_{k-1}\] \[=\mathbb{E}_{Y}\left[\int\mathrm{d}x^{(k)}\,P(x^{(k)}|Y)\left\langle f\left(Y,X^{(1)},\ldots,X^{(k-1)},x^{(k)}\right)\right\rangle_{k-1}\right]\] \[=\mathbb{E}_{Y}\left[\left\langle f\left(Y,X^{(1)},\ldots,X^{(k-1)},X^{(k)}\right)\right\rangle_{k}\right]\]
In particular, we have the relation \(\mu=\chi\), as stated in the main text.
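As a quick numerical illustration of this relation, the following scalar Monte Carlo sketch (our own; the value of \(\gamma\) and sample size are arbitrary) checks \(\mu=\chi\) for Rademacher denoising, where the posterior mean has the closed form \(\tanh(\gamma y)\).

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo check of the Nishimori identity mu = chi for scalar
# Rademacher denoising: Y = gamma*X* + Z with Z ~ N(0,1), X* = +-1.
gamma, n = 0.7, 1_000_000
x_star = rng.choice([-1.0, 1.0], size=n)
y = gamma * x_star + rng.standard_normal(n)
post_mean = np.tanh(gamma * y)            # <X> for the uniform +-1 prior
mu = np.mean(post_mean * x_star)          # E[<X> X*]
chi = np.mean(post_mean**2)               # E[<X>^2]
print(mu, chi)                            # agree up to Monte Carlo error
```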
We now move to the specific case of Gaussian denoising, and in particular to the measure:
\[P_{\gamma}(\mathbf{x})=\frac{1}{Z_{n}}\exp\left(\gamma^{2}\langle\mathbf{x_{0 }},\mathbf{x}\rangle+\gamma\langle\mathbf{z},\mathbf{x}\rangle-\frac{\gamma^ {2}}{2}\|\mathbf{x}\|^{2}\right)P_{0}(\mathbf{x}) \tag{105}\]
The first identity states that the derivative of the free entropy associated with this problem is simply \(\gamma\) times the expected overlap \(\mu\) (in a slightly different context, this is often called the I-MMSE theorem [122]), and that the second derivative is the (Boltzmann) variance of the overlap (often called the fluctuation-dissipation theorem in statistical mechanics):
**Lemma 1** (First (I-MMSE theorem [65]) and second (FDT theorem [123]) derivatives of the free entropy).: _Consider the free entropy density associated with the measure (105):_
\[f_{N}=\frac{1}{N}\mathbb{E}[\log Z_{n}(\gamma)] \tag{106}\]
_then_
\[\partial_{\gamma}f_{N}=\gamma\mu(\gamma)\equiv\frac{\gamma}{N}\mathbb{E}[\langle\mathbf{x}(\gamma)\rangle\cdot\mathbf{x}_{0}] \tag{107}\]
_and_
\[\partial_{\gamma}^{2}f_{N}=N\gamma^{2}\,\mathbb{E}\left[\left\langle\left(\frac{\mathbf{x}(\gamma)\cdot\mathbf{x}_{0}}{N}\right)^{2}\right\rangle-\left\langle\frac{\mathbf{x}(\gamma)\cdot\mathbf{x}_{0}}{N}\right\rangle^{2}\right]=N\gamma^{2}\,\mathbb{E}\left[\mathrm{var}\left(\frac{\mathbf{x}\cdot\mathbf{x}_{0}}{N}\right)\right] \tag{108}\]
Proof.: The proof is a direct application of the Nishimori identities together with Stein's lemma (which states that \(\mathbb{E}[Zg(Z)]=\mathbb{E}[g^{\prime}(Z)]\) for a standard Gaussian random variable \(Z\)), which we reproduce here:
\[\begin{split}\partial_{\gamma}f_{N}&=\frac{1}{N}\,\mathbb{E}\int\mathrm{d}\mathbf{x}\,P_{\gamma}(\mathbf{x})\left(2\gamma\,\mathbf{x}_{0}\cdot\mathbf{x}+\mathbf{z}\cdot\mathbf{x}-\gamma\,\mathbf{x}\cdot\mathbf{x}\right)\\ &=2\gamma\mu-\gamma\,\mathbb{E}\left\langle\frac{\mathbf{x}\cdot\mathbf{x}}{N}\right\rangle+\frac{1}{N}\,\mathbb{E}\int\mathrm{d}\mathbf{x}\,P_{\gamma}(\mathbf{x})\,\mathbf{x}\cdot\mathbf{z}\\ &=2\gamma\mu-\gamma\,\mathbb{E}\left\langle\frac{\mathbf{x}\cdot\mathbf{x}}{N}\right\rangle+\gamma\,\mathbb{E}\left\langle\frac{\mathbf{x}\cdot\mathbf{x}}{N}\right\rangle-\frac{\gamma}{N}\,\mathbb{E}\left[\langle\mathbf{x}\rangle\cdot\langle\mathbf{x}\rangle\right]\\ &=2\gamma\mu-\gamma\chi=\gamma\mu \tag{109}\end{split}\]
where we used Stein's lemma in the third line and the Nishimori identity in the last line. The identity for the second derivative is obtained along the same lines by differentiating twice with respect to \(\gamma\).
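As a simple numerical sanity check of the first identity (not part of the original derivation), one can take a factorized Rademacher prior \(P_{0}=\mathrm{Unif}(\{-1,+1\}^{N})\), for which the measure (105) factorizes over coordinates and everything reduces to a scalar computation; the script below compares a finite-difference estimate of \(\partial_{\gamma}f\) with \(\gamma\mu(\gamma)\) by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 2_000_000
z = rng.standard_normal(M)   # Gaussian noise; by symmetry we may fix x0 = +1

def free_entropy(g):
    # f(gamma) = E[log Z(gamma)] for one Rademacher coordinate:
    # Z = exp(-gamma^2/2) * cosh(gamma^2 * x0 + gamma * z), with x0 = 1
    h = g**2 + g * z
    return np.mean(-0.5 * g**2 + np.logaddexp(h, -h) - np.log(2.0))

def overlap(g):
    # mu(gamma) = E[<x> x0] = E[tanh(gamma^2 + gamma * z)]
    return np.mean(np.tanh(g**2 + g * z))

for g in (0.5, 1.0, 2.0):
    eps = 1e-4
    lhs = (free_entropy(g + eps) - free_entropy(g - eps)) / (2 * eps)
    print(f"gamma={g}: d_gamma f ~ {lhs:.5f}  vs  gamma*mu = {g * overlap(g):.5f}")
```

Using the same noise samples for both finite-difference evaluations keeps the Monte Carlo error from dominating the comparison.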
This lemma in turn implies the concentration of the overlap \(\mu\), a trick often used in the mathematical physics literature when proving the replica equations (see e.g. [67]):
**Theorem 2** (Concentration of overlaps).: _Almost everywhere in \(\gamma\), we have that_
\[\mathbb{E}[\mathrm{var}\left(\frac{\mathbf{x}\cdot\mathbf{x}_{0}}{N}\right)] \rightarrow_{N\rightarrow\infty}0\]
Proof.: From the derivative, we have that
\[\int_{\gamma_{1}}^{\gamma_{2}}\gamma^{2}\,\mathbb{E}\left[\mathrm{var}_{\gamma}\left(\frac{\mathbf{x}\cdot\mathbf{x}_{0}}{N}\right)\right]\mathrm{d}\gamma=\frac{1}{N}\left(\gamma_{2}\,\mu(\gamma_{2})-\gamma_{1}\,\mu(\gamma_{1})\right)\leq\frac{K}{N} \tag{101}\]
where we have used that \(\mu\) is bounded by some constant \(K\) (which is the case for the discrete variables discussed in this paper, where \(-1\leq\mu\leq 1\)). As a consequence, almost everywhere in \(\gamma\), the variance of the overlap must vanish as \(N\to\infty\).
Additionally, one can also prove the concentration of the variance with respect to the disorder, see [67].
Finally, similar results exist in the case of "pinning" [68; 69].
## Appendix E Analysis of Algorithm 1
In this section, we provide a theoretical analysis of the performance of Algorithm 1 in the "efficient" regimes of Figure 1. In these regimes, we conjecture that perfect denoising can be approximated along the interpolation path. Therefore, to simplify our analysis, we assume access to a perfect denoiser and primarily focus on the validity of the continuous-time limit and the effect of discretization. Additional approximation and discretization errors due to the denoiser, such as AMP, can be incorporated into our analysis in a straightforward manner. For a rigorous analysis of the approximation error and Lipschitzness of the AMP iterates in the case of the SK and spiked matrix models, we refer the reader to [36; 37].
Lastly, throughout the analysis, we shall assume that Algorithm 1 is run from time \(t=1\) to \(t=\epsilon\) for some \(\epsilon>0\). This ensures that we can avoid singularities at \(t=0\), while choosing \(\epsilon\) arbitrarily close to \(0\) allows us to approximate the target measure up to arbitrary accuracy.
Recall the definition of the "tilted" posterior measure at time \(t\):
\[P(\mathbf{x}|\mathbf{y}(t)=\mathbf{Y}_{t})=\frac{1}{Z(\mathbf{Y}_{t})}\exp \left(\frac{\alpha(t)}{\beta(t)^{2}}\langle\mathbf{Y}_{t},\mathbf{x}\rangle- \frac{\alpha(t)^{2}}{2\beta(t)^{2}}||\mathbf{x}||^{2}\right)P_{0}(\mathbf{x}). \tag{102}\]
Here the factor \(\frac{1}{Z(\mathbf{Y}_{t})}\exp\left(\frac{\alpha(t)}{\beta(t)^{2}}\langle \mathbf{Y}_{t},\mathbf{x}\rangle-\frac{\alpha(t)^{2}}{2\beta(t)^{2}}||\mathbf{x}||^{2}\right)\) is interpreted as the Radon-Nikodym derivative of \(P(\mathbf{x}|\mathbf{y}(t))\) w.r.t \(P_{0}\). Throughout the present section, we shall denote \(P(\mathbf{x}|\mathbf{y}(t)=\mathbf{Y}_{t})\) by \(P_{t,\mathbf{Y}_{t}}\). We shall assume that \(\alpha(t),\beta(t)\) are continuously differentiable.
### Fluctuation-dissipation and Lipschitzness
A crucial quantity related to the well-posedness of the continuity equation (Eq. (4)) and the validity of Algorithm 1 is the Lipschitz constant of the vector field. We therefore start by presenting a preliminary result, which can be interpreted as an instance of the fluctuation-dissipation theorem:
**Lemma 2**.: _Suppose that the measure \(P_{0}\) has compact support. Then the Jacobian \(\frac{\partial b(\mathbf{y},t)}{\partial\mathbf{y}}\) of the vector field is related to the covariance of the tilted measure \(P_{t,\mathbf{Y}_{t}}=P(\mathbf{x}|\mathbf{y}(t)=\mathbf{Y}_{t})\) as follows:_
\[\frac{\partial b(\mathbf{y},t)}{\partial\mathbf{y}}\Big|_{\mathbf{y}=\mathbf{Y}_{t}}=\frac{\alpha(t)}{\beta(t)^{2}}\left(\dot{\alpha}(t)-\frac{\dot{\beta}(t)\alpha(t)}{\beta(t)}\right)\mathrm{Covar}[P_{t,\mathbf{Y}_{t}}]+\frac{\dot{\beta}(t)}{\beta(t)}\,\mathbb{I}_{N} \tag{103}\]
Proof.: We have, from Eq. (5):
\[\begin{split} b(\mathbf{y},t)&=\mathbb{E}[\dot{\alpha}(t)\mathbf{x}_{0}+\dot{\beta}(t)\mathbf{z}\,|\,\mathbf{y}(t)=\mathbf{y}] \\ &=\mathbb{E}\left[\dot{\alpha}(t)\mathbf{x}_{0}+\frac{\dot{\beta}(t)}{\beta(t)}(\mathbf{y}(t)-\alpha(t)\mathbf{x}_{0})\,\Big|\,\mathbf{y}(t)=\mathbf{y}\right]\\ &=\left(\dot{\alpha}(t)-\frac{\dot{\beta}(t)\alpha(t)}{\beta(t)}\right)\mathbb{E}[\mathbf{x}_{0}|\mathbf{y}(t)=\mathbf{y}]+\frac{\dot{\beta}(t)}{\beta(t)}\mathbf{y}.\end{split} \tag{104}\]
where we used the relation in Eq. (1). Next, we have through the expression for the posterior measure \(P_{t,\mathbf{Y}_{t}}\):
\[\mathbb{E}[\mathbf{x_{0}}|\mathbf{y}(t)=\mathbf{y}]=\mathbb{E}_{P_{0}}\left[ \frac{1}{Z(\mathbf{y})}\mathbf{x}\exp\left(\frac{\alpha(t)}{\beta(t)^{2}} \langle\mathbf{y},\mathbf{x}\rangle-\frac{\alpha(t)^{2}}{2\beta(t)^{2}}|| \mathbf{x}||^{2}\right)\right]. \tag{100}\]
Using the boundedness of the support of \(P_{0}\) and the dominated convergence theorem, we may differentiate the right-hand side inside the expectation. We obtain that for all \(t\in(0,1]\), \(\mathbb{E}[\mathbf{x_{0}}|\mathbf{y}(t)=\mathbf{Y}_{t}]\) is differentiable w.r.t \(\mathbf{Y}_{t}\) with Jacobian given by \(\frac{\alpha(t)}{\beta(t)^{2}}\operatorname{Covar}[P_{t,\mathbf{Y}_{t}}]\). Substituting into Eq. (104) completes the proof.
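Explicitly, since the \(\mathbf{y}\)-dependence enters only through the linear term \(\frac{\alpha(t)}{\beta(t)^{2}}\langle\mathbf{y},\mathbf{x}\rangle\) in the exponent (both in the numerator and in \(Z(\mathbf{y})\)), differentiating the quotient yields

\[\frac{\partial}{\partial\mathbf{y}}\,\mathbb{E}[\mathbf{x}_{0}|\mathbf{y}(t)=\mathbf{y}]=\frac{\alpha(t)}{\beta(t)^{2}}\left(\mathbb{E}_{P_{t,\mathbf{y}}}[\mathbf{x}\mathbf{x}^{\top}]-\mathbb{E}_{P_{t,\mathbf{y}}}[\mathbf{x}]\,\mathbb{E}_{P_{t,\mathbf{y}}}[\mathbf{x}]^{\top}\right)=\frac{\alpha(t)}{\beta(t)^{2}}\operatorname{Covar}[P_{t,\mathbf{y}}].\]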
### Derivation of the ODE (Eq. (6))
We start by presenting an informal derivation of the continuity equation, borrowed from [124]. Recall that, by definition, \(\mathbf{y}(t)=\alpha(t)\mathbf{x}_{0}+\beta(t)\mathbf{z}\). The density \(\rho(\mathbf{y},t)\) can be expressed as:
\[\rho(\mathbf{y},t)=\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\delta(\mathbf{y}-\mathbf{y}(t))\,\rho_{0}(\mathbf{x}_{0})\,\rho_{\gamma}(\mathbf{z})\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}\mathbf{z}, \tag{101}\]
where \(\rho_{\gamma}\) denotes the density of the standard Gaussian measure \(\gamma=\mathcal{N}(\mathbf{0},\mathbb{I}_{N})\). Differentiating both sides of Equation (101) yields:
\[\begin{split}\partial_{t}\rho(\mathbf{y},t)&=-\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\nabla\delta(\mathbf{y}-\mathbf{y}(t))\cdot\partial_{t}\mathbf{y}(t)\,\rho_{0}(\mathbf{x}_{0})\,\rho_{\gamma}(\mathbf{z})\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}\mathbf{z}\\ &=-\nabla\cdot\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\delta(\mathbf{y}-\mathbf{y}(t))\,\partial_{t}\mathbf{y}(t)\,\rho_{0}(\mathbf{x}_{0})\,\rho_{\gamma}(\mathbf{z})\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}\mathbf{z}\\ &=-\nabla\cdot\left(\left(\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\delta(\mathbf{y}-\mathbf{y}(t))\,\partial_{t}\mathbf{y}(t)\,\frac{\rho_{0}(\mathbf{x}_{0})\,\rho_{\gamma}(\mathbf{z})}{\rho(\mathbf{y},t)}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}\mathbf{z}\right)\rho(\mathbf{y},t)\right)\end{split} \tag{102}\]
Eq. (4) is then obtained by noticing that the term inside the inner parentheses equals \(b(\mathbf{y},t)\) as defined by (5).
We refer to [124, 41] for the complete derivation based on the above approach.
We next prove that the pushforward measure obtained after applying the flow defined by the ODE (6) to initial Gaussian noise \(\mathbf{z}\) results in a sample from the target measure \(P_{0}\). For the sake of completeness, we include an alternative derivation for Eq. (4) in our proof, based on the weak form of the continuity equation. Our result allows \(P_{0}\) to be a discrete measure, by restricting the time to \(t\in(0,1)\).
**Lemma 3**.: _Let \(\epsilon\) be an arbitrarily-fixed real in \((0,1)\). Let \(\mathbf{Y_{t}}(\mathbf{z})\) denote the flow associated with the ODE in Eq. (6), run from \(t=1\) to \(t=\epsilon\), with \(\mathbf{z}\sim\gamma\), where \(\gamma\) denotes the standard Gaussian measure \(\gamma=\mathcal{N}(\mathbf{0},\mathbb{I}_{N})\). Suppose that \(P_{0}\) has bounded support. Then, the pushforward measure \(\mathbf{Y_{t}}_{\#}\gamma\) at any time \(t\in[\epsilon,1]\) equals the measure corresponding to the law \(P_{t}\) of the interpolant \(\mathbf{y}(t)\) defined by equation (1)._
Proof.: Let \(\psi\in C_{c}^{\infty}(\mathbb{R}^{N})\) be an arbitrary test function. Using the change of variables formula, the expectation of \(\psi\) w.r.t the measure \(P_{t}\) at time \(t\) can be expressed as:
\[\int_{\mathbb{R}^{N}}\psi(\mathbf{y})dP_{t}(\mathbf{y})=\int_{\mathbb{R}^{N} \times\mathbb{R}^{N}}\psi(\mathbf{y}(t))dP_{0}(\mathbf{x}_{0})d\gamma_{z}( \mathbf{z}), \tag{103}\]
where \(\mathbf{y}(t)\) is defined as a measurable function of \(\mathbf{x}_{0},\mathbf{z}\) through Eq. (1).
Therefore, \(P_{t}\) evolves in a distributional sense as follows:
\[\int_{\mathbb{R}^{N}}\psi(\mathbf{y})\partial_{t}dP_{t}(\mathbf{y})=\frac{d \int_{\mathbb{R}^{N}}\psi(\mathbf{y})dP_{t}(\mathbf{y})}{dt}=\int_{\mathbb{R}^ {N}\times\mathbb{R}^{N}}\nabla\psi(\mathbf{y}(t))\cdot\partial_{t}\mathbf{y}( t)dP_{0}(\mathbf{x}_{0})d\gamma_{z}(\mathbf{z}). \tag{104}\]
Recall that:
\[b(\mathbf{y},t)=\mathbb{E}[\partial_{t}\mathbf{y}(t)|\mathbf{y}(t)=\mathbf{y}]. \tag{105}\]
Using the definition of the conditional expectation, and the change of variables formula, we have:
\[\begin{split}\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\nabla\psi(\mathbf{y}(t))\cdot\partial_{t}\mathbf{y}(t)\,dP_{0}(\mathbf{x}_{0})\,d\gamma_{z}(\mathbf{z})&=\int_{\mathbb{R}^{N}}\nabla\psi(\mathbf{y})\cdot\mathbb{E}[\partial_{t}\mathbf{y}(t)|\mathbf{y}(t)=\mathbf{y}]\,dP_{t}(\mathbf{y})\\ &=\int_{\mathbb{R}^{N}}\nabla\psi(\mathbf{y})\cdot b(\mathbf{y},t)\,dP_{t}(\mathbf{y})\\ &=-\int_{\mathbb{R}^{N}}\psi(\mathbf{y})\,\nabla\cdot(b(\mathbf{y},t)P_{t}(\mathbf{y}))\,d\mathbf{y},\end{split}\]
where we used the compactness of the support of \(\psi\) and the distributional definition of the divergence operator \(\nabla\cdot\). Substituting in Eq. (100), we obtain:
\[\int_{\mathbb{R}^{N}}\psi(\mathbf{y})\partial_{t}dP_{t}(\mathbf{y})=-\int_{ \mathbb{R}^{N}}\psi(\mathbf{y})\nabla.(b(\mathbf{y},t)P_{t}(\mathbf{y}))d \mathbf{y}. \tag{101}\]
Furthermore, for all \(t>\epsilon\) with fixed \(\epsilon>0\), we have \(\beta(t)>0\). Lemma 2 then implies that \(b(\mathbf{y},t)\) is Lipschitz w.r.t \(\mathbf{y}\). Thus, the probability flow \(\mathbf{Y_{t}}\) for \(t\in[\epsilon,1]\) exists and is unique. Both the push-forward measure \(\mathbf{Y_{t}}_{\#}\gamma\) and the law of \(\mathbf{y}(t)\), i.e. \(P_{t}\), satisfy the continuity equation with velocity field \(b(\mathbf{y},t)\) (Lemma 4.1.1. in [125]). By the uniqueness of the solution of the continuity equation (see e.g. [126]), we conclude that \(\mathbf{Y_{t}}_{\#}\gamma=P_{t}\).
### Sampling Guarantees
In this section, we quantify the effect of discretization errors in the velocity field, leading to sampling guarantees for Algorithm 1. Again, we fix a parameter \(\epsilon\) lying in \((0,1)\). We rely on the following assumptions:
**Assumption 1**.: _The measure \(P_{0}\) has compact support lying on a sphere of radius \(\mathcal{O}(\sqrt{N})\):_
\[\mathrm{Supp}(P_{0})\subseteq\mathrm{S}^{N-1}(\sqrt{N}R), \tag{102}\]
_for some \(R\) independent of \(N\)._
**Assumption 2**.: _The spectral norm of the vector field's Jacobian \(\frac{\partial b(\mathbf{Y},t)}{\partial\mathbf{Y}}\) w.r.t \(\mathbf{Y}\) is uniformly bounded, i.e. \(\|\frac{\partial b(\mathbf{Y},t)}{\partial\mathbf{Y}}\|_{2}\leq L,\forall t\in(\epsilon,1),\forall\mathbf{Y}\in\mathbb{R}^{N}\), for some \(L\geq 0\)._
**Assumption 3**.: \(\|\frac{\partial b(\mathbf{Y},t)}{\partial t}\|\) _is uniformly bounded by \(M\sqrt{N}\) in \(\mathbf{Y},t\) for \(t\in(\epsilon,1)\), for some \(M\geq 0\)._
We have the following error bounds for the discretization error associated with the flow:
**Lemma 4**.: _Consider the ODE defined by Eq. (6), i.e. \(\frac{d\mathbf{Y}}{dt}=b(\mathbf{Y},t)\), with \(\mathbf{Y}\in\mathbb{R}^{N}\). Let \(\mathbf{Y}_{t}(\mathbf{z})\) denote the flow associated to the above ODE at time \(t\in(0,1]\) starting from some fixed \(\mathbf{z}\in\mathbb{R}^{N}\) at time \(t=1\). Let \(\mathbf{Y}_{\delta,t}(\mathbf{z})\) denote the iterates of the forward Euler method applied to the above ODE (in reverse time) starting from the same initialization \(\mathbf{z}\) with step-size \(\delta\), i.e., for \(i\in\mathbb{N}\):_
\[\mathbf{Y}_{\delta,\delta(i-1)}(\mathbf{z})=\mathbf{Y}_{\delta,\delta i}( \mathbf{z})-\delta b(\mathbf{Y}_{\delta,\delta i}(\mathbf{z}),\delta i), \tag{103}\]
_with \(\mathbf{z}\) fixed. Under Assumptions 1,2,3, there exists a constant \(A(\epsilon)\) such that, for all \(k\in\mathbb{N},\mathbf{z}\in\mathbb{R}^{N}\) with \(k\delta\geq\epsilon\):_
\[\|\mathbf{Y}_{\delta,k\delta}(\mathbf{z})-\mathbf{Y}_{k\delta}(\mathbf{z})\|_ {2}\leq(\frac{M+AL}{L})\sqrt{N}e^{Lk\delta}\delta, \tag{104}\]
_for small enough \(\delta\)._
Proof.: We first note that Assumption 1 and Lemma 2 imply that \(b(\mathbf{Y},t)\) is uniformly bounded for any fixed \(\mathbf{z}\). The bound then follows from the standard analysis of the forward Euler method; see for example Chapter 7 in [127].
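To make the discretization of Eq. (103) concrete, the following sketch integrates the probability-flow ODE backwards from \(t=1\) to \(t=\epsilon\) for a factorized Rademacher prior, where the posterior mean \(\mathbb{E}[\mathbf{x}_{0}|\mathbf{y}(t)=\mathbf{Y}]\) is available in closed form as \(\tanh(\alpha(t)\mathbf{Y}/\beta(t)^{2})\) and no AMP denoiser is needed; the schedule \(\alpha(t)=1-t\), \(\beta(t)=t\) is an illustrative choice rather than the one used in the paper's experiments:

```python
import numpy as np

def sample_rademacher_flow(N=1000, n_steps=1000, eps=1e-2, seed=0):
    """Forward-Euler integration of dY/dt = b(Y, t) in reverse time,
    for P_0 = Unif({-1,+1}^N) with alpha(t) = 1 - t, beta(t) = t."""
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal(N)            # initialization Y_1 = z ~ N(0, I_N)
    ts = np.linspace(1.0, eps, n_steps + 1)
    for t, t_next in zip(ts[:-1], ts[1:]):
        a, b = 1.0 - t, t                 # alpha(t), beta(t)
        da, db = -1.0, 1.0                # their time derivatives
        denoiser = np.tanh(a * Y / b**2)  # exact E[x0 | y(t) = Y] in this toy case
        vel = (da - db * a / b) * denoiser + (db / b) * Y
        Y = Y + (t_next - t) * vel        # Euler step; t_next - t = -delta
    return np.sign(Y)

x = sample_rademacher_flow()
print("fraction of +1 coordinates:", (x == 1.0).mean())  # close to 0.5
```

The stopping time \(\epsilon>0\) avoids the singularity of the velocity field at \(t=0\), in line with the analysis above.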
**Remark**: The Lipschitzness of the posterior mean (optimal denoiser) w.r.t \(\mathbf{Y}_{t}\) is expected to hold for the setups considered in our work in light of similar results proven in [36, 42]. Furthermore, as Lemma 2 shows, it is equivalent to the boundedness of the covariance of the tilted measure. Similarly, Lemma 1 provides control over \(\frac{\partial b(\mathbf{Y},t)}{\partial t}\) through the variance of the overlap.
We now prove that the proposed algorithm, with a sufficiently small step-size for the discretized ODE, produces a distribution close to the target distribution in Wasserstein distance. The obtained bounds on the error between the ODE and the algorithm iterates can be related to the Wasserstein distance between the corresponding pushforward measures.
**Theorem 3**.: _Let \(P_{\mathrm{alg,N_{steps}}}\) denote the measure of the output produced by Algorithm 1 after \(N_{\mathrm{steps}}\) steps. Suppose that the vector field satisfies Assumptions 1-3. Then, for any \(\eta>0\), there exists \(N_{\mathrm{steps}}\), independent of \(N\), such that the normalized Wasserstein distance satisfies \(\frac{1}{\sqrt{N}}W_{2}(P_{\mathrm{alg,N_{steps}}},P_{0})<\eta\)._
Proof.: Let \(0<\epsilon<1\) be arbitrary. By definition, \(\mathbf{Y}_{\delta,\epsilon}\) has law \(P_{\mathrm{alg,N_{steps}}}\) while, from Lemma 3, \(\mathbf{Y}_{\epsilon}\) equals \(\mathbf{y}(\epsilon)\) in law. Setting the same initialization \(\mathbf{z}\) induces a coupling between the two measures. Recall that the 2-Wasserstein distance [125, 128] between two measures \(\mu,\nu\) on \(\mathbb{R}^{N}\) is defined as \(W_{2}^{2}(\mu,\nu)=\inf_{\Gamma(\mu,\nu)}\mathbb{E}\|\mathbf{X}-\mathbf{Y}\|^{2}\), where \(\Gamma(\mu,\nu)\) denotes the set of couplings between \(\mu\) and \(\nu\), with \(\mathbf{X},\mathbf{Y}\) distributed as \(\mu,\nu\) respectively. Therefore, we obtain:
\[W_{2}^{2}(P_{\mathrm{alg,N_{steps}}},P_{\epsilon})\leq\mathbb{E}\|\mathbf{Y} _{\delta,\epsilon}(\mathbf{z})-\mathbf{Y}_{\epsilon}(\mathbf{z})\|^{2}. \tag{120}\]
Lemma 4 then implies that by choosing a small enough step-size \(\delta\), or equivalently, a large enough \(N_{\mathrm{steps}}\), \(W_{2}(P_{\mathrm{alg,N_{steps}}},P_{\epsilon})\) can be made arbitrarily small for any fixed \(\epsilon\).
We further have that \(W_{2}(P_{0},P_{\epsilon})\to 0\) as \(\epsilon\to 0\). By the triangle inequality for \(W_{2}\)[125, 128], we obtain:
\[W_{2}(P_{\mathrm{alg,N_{steps}}},P_{0})\leq W_{2}(P_{\mathrm{alg,N_{steps}}},P_{\epsilon})+W_{2}(P_{0},P_{\epsilon}). \tag{121}\]
Now, first pick \(\epsilon>0\) such that \(\frac{1}{\sqrt{N}}W_{2}(P_{0},P_{\epsilon})\leq\eta/2\). Subsequently, we pick \(N_{\mathrm{steps}}\) such that \(\frac{1}{\sqrt{N}}W_{2}(P_{\mathrm{alg,N_{steps}}},P_{\epsilon})\leq\eta/2\) to complete the proof.
The above result establishes that Algorithm 1 samples from a distribution approximating the target distribution up to any desired accuracy, with a finite number of forward Euler steps (\(N_{\mathrm{steps}}\)) independent of the dimension \(N\).
**Remark:** In the presence of a first-order phase transition during the interpolation path, \(\|\frac{\partial\mathbf{b}(\mathbf{Y},t)}{\partial t}\|\) may grow as \(\omega(\sqrt{N})\). This is apparent from Lemma 1, which relates \(\frac{\partial\mathbf{b}(\mathbf{Y},t)}{\partial t}\) to the variance of the overlap. This corresponds to \(M\) in Lemma 4 growing with \(N\). Lemma 4 reveals that as long as this growth is polynomial in \(N\), the discretization error can still be controlled by choosing \(\delta\) to be \(1/\mathrm{poly}(N)\). This would lead only to an additional polynomial factor in the time complexity of the algorithm. Therefore, with oracle access to the denoiser, Algorithm 1 might be able to efficiently sample even in the "inefficient" phase. We believe this to be an interesting question for future research.
|
2308.04296 | Unsteady Cylinder Wakes from Arbitrary Bodies with Differentiable
Physics-Assisted Neural Network | This work delineates a hybrid predictive framework configured as a
coarse-grained surrogate for reconstructing unsteady fluid flows around
multiple cylinders of diverse configurations. The presence of cylinders of
arbitrary nature causes abrupt changes in the local flow profile while globally
exhibiting a wide spectrum of dynamical wakes fluctuating in either a periodic
or chaotic manner. Consequently, the focal point of the present study is to
establish predictive frameworks that accurately reconstruct the overall fluid
velocity flowfield such that the local boundary layer profile, as well as the
wake dynamics, are both preserved for long time horizons. The hybrid framework
is realized using a base differentiable flow solver combined with a neural
network, yielding a differentiable physics-assisted neural network (DPNN). The
framework is trained using bodies with arbitrary shapes, and then it is tested
and further assessed on out-of-distribution samples. Our results indicate that
the neural network acts as a forcing function to correct the local boundary
layer profile while also remarkably improving the dissipative nature of the
flowfields. It is found that the DPNN framework clearly outperforms the
supervised learning approach while respecting the reduced feature space
dynamics. The model predictions for arbitrary bodies indicate that the Strouhal
number distribution with respect to spacing ratio exhibits similar patterns
with existing literature. In addition, our model predictions also enable us to
discover similar wake categories for flow past arbitrary bodies. For the
chaotic wakes, the present approach predicts the chaotic switch in gap flows up
to the mid-time range. | Shuvayan Brahmachary, Nils Thuerey | 2023-08-08T14:41:26Z | http://arxiv.org/abs/2308.04296v2 | # Unsteady Cylinder Wakes from Arbitrary Bodies with Differentiable Physics-Assisted Neural Network
###### Abstract
This work delineates a hybrid predictive framework configured as a coarse-grained surrogate for reconstructing unsteady fluid flows around multiple cylinders of diverse configurations. The presence of cylinders of arbitrary nature causes abrupt changes in the local flow profile while globally exhibiting a wide spectrum of dynamical wakes fluctuating in either a periodic or chaotic manner. Consequently, the focal point of the present study is to establish predictive frameworks that accurately reconstruct the overall fluid velocity flowfield such that the local boundary layer profile, as well as the wake dynamics, are both preserved for long time horizons. The hybrid framework is realized using a base differentiable flow solver combined with a neural network, yielding a differentiable physics-assisted neural network (DPNN). The framework is trained using bodies with arbitrary shapes, and then it is tested and further assessed on out-of-distribution samples. Our results indicate that the neural network acts as a forcing function to correct the local boundary layer profile while also remarkably improving the dissipative nature of the flowfields. It is found that the DPNN framework clearly outperforms the supervised learning approach while respecting the reduced feature space dynamics. The model predictions for arbitrary bodies indicate that the Strouhal number distribution with respect to spacing ratio exhibits similar patterns with existing literature. In addition, our model predictions also enable us to discover similar wake categories for flow past arbitrary bodies. For the chaotic wakes, the present approach predicts the chaotic switch in gap flows up to the mid-time range.
keywords: Differentiable physics, unsteady cylinder wakes, arbitrary flows, spatio-temporal predictions
Footnote †: journal: Journal of Computational Physics
## 1 Introduction
Machine learning has drawn significant attention towards the field of fluid simulations (Kutz, 2017; Brunton et al., 2020), particularly in the last two decades. Owing to its successful inception in the field of turbulence modeling (Ling et al., 2016; Wu et al., 2018; Duraisamy et al., 2019; Srinivasan et al., 2019), it has spawned multiple avenues such as super-resolution of fluid flows (Xie et al., 2018; Fukami et al., 2019), detection of turbulent interface (Li et al., 2020), active flow control (Pino et al., 2023), fluid-particle interaction (Davydzenka and Tahmasebi, 2022), reduced-order modeling (ROM) for predicting the dynamical state of a fluid system (Xiao et al., 2015; Pawar
et al., 2019), aerodynamic shape optimization (Chen et al., 2021), among others. To incorporate human knowledge in the form of physics models, methods were introduced that integrate them into the training procedure, _e.g.,_ physics-informed neural networks (Raissi et al., 2019; Sun et al., 2020), graph neural networks (Belbute-Peres et al., 2020; Brandstetter et al., 2022), and generative adversarial networks (Cheng et al., 2020) with physical loss functions (Lee and You, 2019). While custom loss functions satisfy the boundary conditions and minimize the residual of the underlying governing partial differential equations, they only allow for partial coupling of the flow solver with the neural network. Another alternative that allows full integration of the flow solver within the training loop is via differentiable models, _e.g.,_ differentiable physics for turbulence modeling (List et al., 2022). This approach has demonstrated its effectiveness as a cost-efficient strategy for turbulence modeling compared to cost-intensive direct numerical simulations. Such approaches have also shown the potential to serve as coarse-grained surrogates for unsteady fluid flows. For instance, the works of Um et al. (2020) and Kochkov et al. (2021) demonstrated that differentiable flow solvers coupled with a neural network can yield satisfactory reconstruction of flowfields for representative test cases. However, the full merits of the hybrid strategy as a generic and accurate approach are far from established.
A primary task when evaluating the merits of a predictive framework is to identify a fluid test problem that is of significant practical interest and complexity. One such problem that has been of significant interest to the fluid dynamics community is that of flow past a body (Hasegawa et al., 2020) or multiple bodies (Wan and Sapsis, 2018). Typical factors and challenges that plague the analysis of such fluid flows include the computational (turn-around) time, small to large temporal scales in the flow, unsteadiness in the flow, the presence of multiple bodies, a large spectrum of wake dynamics, etc. While coarse-grained surrogates reduce the computational burden (Stachenfeld et al., 2021), the effectiveness of such a strategy to reliably reconstruct the flowfields across multiple wake regimes, besides ensuring accuracy in local boundary profiles and global wake dynamics, is yet to be examined. For instance, the authors of Morimoto et al. (2022) examine flow past two side-by-side cylinders to qualitatively evaluate the generalizability of their models without considering the local fluid behavior around the body boundary. In addition, the work by Hasegawa et al. (2020) examined flow past a single bluff body of various shapes using a convolutional neural network-based auto-encoder (CNN-AE) model with an emphasis on global fluid dynamics. In a similar vein, the investigation by Lee and You (2019) points out that their model predictions for flow past a single cylinder exhibited the largest errors in either the body boundary or the wake region. Clearly, this presents a significant limitation that should be addressed by advancing performance-measuring metrics based on both the local as well as the global fluid properties.
Flow past objects is of significant interest to the engineering community from the hydrodynamics as well as the structural design point of view. The potential implications of the underlying instabilities (McKinley et al., 1993), wake dynamics (Williamson, 1996), wake-induced vibration (Bearman, 2011) and its control (Choi et al., 2008) have led to a significant body of research. One of the focal points of this research is understanding the spectrum of wake flows exhibited by multiple cylinders placed relative to each other. Some of the investigations that present a comprehensive study for two cylinders are (Zravkovich, 1987; Papaioannou et al., 2006; Sumner, 2010), and (Lam and Cheung, 1988; Guillaume and LaRue, 1999; Bao et al., 2010; Zheng et al., 2016) for three or more cylinders. These investigations point towards the direct dependency of the resulting wake regimes on the spacing ratio \(L/D\) (_i.e.,_ the length \(L\) to diameter \(D\) ratio). More recently, the investigation by (Chen et al., 2020) showed the presence of up to nine
distinct flow regimes for cylinders arranged in an equilateral-triangle position at various spacing ratios, \(1\leq L/D\leq 6\), for multiple Reynolds numbers \(50\leq Re\leq 175\). While these investigations provide fundamental insights into wake flow regimes and flow states, their analysis is limited to equi-diameter circular cylinders or rectangular cylinders with varying aspect ratios. Consequently, this begs the question of whether the observations related to universal non-dimensional parameters, such as the Strouhal number and the critical spacing ratio (as reported by (Chen et al., 2020)), still hold for arbitrarily shaped cylinders.
Analyzing large datasets of wake flow exhibited by arbitrarily shaped complex configurations offers potential for novel physical insights; however, employing traditional body-conforming continuum computational fluid dynamics (CFD) flow solvers can be time-consuming and warrants special treatment for catering to changing geometries via the underlying computational grid. While non-conformal immersed boundary (IB) approaches (Mittal and Iaccarino, 2005; Brahmachary, 2019) allow for the use of a fixed Cartesian mesh, exploring a large design space of possible spacing ratios could significantly ramp up the turn-around time. Consequently, neural network-based coarse-grained surrogate models for unsteady fluid flows provide a cost-effective alternative approach (Stachenfeld et al., 2021).
In this work, we build on the ideas of Um et al. (2020) and present a hybrid Differentiable Physics-Assisted Neural Network (or DPNN) framework that acts as a coarse-grained surrogate for accurate unsteady fluid simulations of low Reynolds number flows past multiple bodies. This framework requires a base solver integrated into the neural network (NN) along with ground truth data to train the resulting hybrid framework. We employ a Cartesian grid-based immersed boundary method (_i.e.,_ FoamExtend) as the _Reference_ solver. In addition, we employ an in-house developed differentiable flow solver (_i.e.,_ PhiFlow) as the _Source_ solver. For faster training and inference, the _Source_ intentionally resorts to first-order spatial accuracy along with a masked stair-step representation of the underlying body boundary. This inherently reduces the computational burden of the base differentiable flow solver. In contrast, the _Reference_ solver utilizes an accurate local algebraic reconstruction of the fluid flow around the body boundary with second-order spatial accuracy. Given this stark contrast between the _Reference_ and _Source_ solvers, besides the highly multi-modal nature of the present problem, the NN faces a very challenging task: it must act as a forcing function that learns the non-trivial corrections. We comprehensively evaluate the hybrid approach for multiple wake categories while drawing necessary comparisons with various benchmarks. Moreover, our approach of utilizing arbitrarily shaped bodies in a staggered equilateral triangle position allows for a large data diversity and sheds light on the physics of the wake dynamics resulting from the arbitrary cylindrical configurations.
Our focus is thus twofold. First, we train and test the DPNN framework on a dataset comprising unsteady flow past arbitrary bodies placed in a staggered equilateral triangle position, each at a unique spacing ratio. These tests were performed once the _Reference_ solver was thoroughly validated and verified to generate accurate solutions. Second, after establishing the effectiveness of our approach using multiple performance-measuring metrics, we evaluate the out-of-distribution generalization capabilities of our framework. These evaluations are based on configurations that align closely with the current state of the existing literature. Throughout the study, we strive to understand the physical implications of the solutions derived from the predictive framework, examining both local boundary layer profiles and global wake dynamics. We also highlight the framework's performance in both physical and reduced feature spaces to demonstrate its long-term temporal stability.
## 2 Numerical setup and validation
In this section, we illustrate the computational setup underpinning the entire hybrid framework. This is followed by validation test cases undertaken to affirm the correctness of the setup as well as verify the ability to render accurate solutions.
### Reference solver
The present study employs the open source FoamExtend V 4.0 immersed boundary (IB) approach as the _Reference_ solver. As mentioned earlier, the IB-based non-conformal approach serves a dual purpose. Firstly, it greatly simplifies the use of a fixed Cartesian mesh for all underlying configurations, irrespective of the geometric complexity. This methodology circumvents the necessity for labor-intensive re-meshing for each geometric configuration and the corresponding coordinate transformation from physical to computational space. Secondly, the use of a fixed Cartesian mesh ensures seamless integration into the deep-learning pipeline, wherein one can leverage the benefits of the Eulerian (_i.e.,_ fixed cell in space) viewpoint. FoamExtend utilizes a discrete forcing approach for enforcing the boundary conditions on the immersed bodies and has been built on top of the well-established OpenFoam (Jasak and Tukovic, 2015) framework.
In the present work, we consider the 2D Navier-Stokes equations given below:
\[\begin{cases}\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}.\nabla)\mathbf{ u}=-\frac{1}{\rho}\nabla p+\nu\nabla^{2}\mathbf{u}\\ \nabla.\mathbf{u}=0\end{cases} \tag{1}\]
where \(\mathbf{u}=(u,v)\) is the velocity field, \(\rho\) is the density, \(p\) is the pressure, and \(\nu\) is the kinematic viscosity. The first equation represents momentum conservation, whereas the second equation represents the continuity equation, which also serves as a constraint on the velocity field, otherwise known as the "divergence-free" constraint.
Figure 1: Computational domain including the three bodies placed in an equilateral triangle position (not to scale)
The flow scenario describes the incompressible flow past cylinders at a moderate Reynolds number \(\mathrm{Re}\approx 100\). For the present case, the Reynolds number is evaluated based on the height of the primary upstream cylinder as \(Re=\frac{UH}{\nu}\), where \(U\) and \(H\) represent the freestream velocity and the height of the upstream cylinder, respectively. For the present setup, \(\nu\)=0.01 \(m^{2}/\mathrm{s}\) and \(U\)=1 m/s are used. A second-order accurate Crank-Nicolson scheme is used to march the solutions in time, while a second-order limited linear (central-differencing) scheme is used for the convective flux discretization. The Pressure-Implicit with Splitting of Operators (PISO) algorithm (Issa, 1986) is employed to solve the discretized momentum and continuity equations in one predictor and two corrector steps. In particular, a preconditioned conjugate gradient (PCG) solver is used for the \(p\) linear equations along with a diagonal-based incomplete Cholesky (DIC) preconditioner for symmetric matrices. In addition, for the \(U\) linear equations, BiCGSTAB is used along with a diagonal-based incomplete LU (DILU) preconditioner for asymmetric matrices.
### Validation: Strouhal number and force characteristics
We execute a comprehensive grid-independence study to corroborate the _Reference_ solver used to acquire the ground truth data. We initiate our numerical experiments with flow past a single cylinder at Reynolds number \(Re\)=100. A computational domain of size [0,24D] \(\times\) [0,16D] is chosen using a uniformly spaced Cartesian grid. In particular, we choose medium and fine spatial resolutions to undertake the grid-independence study, _i.e.,_\(\Delta x/D\) = \(\Delta y/D\) \(\in\) {1/32, 1/50}, where \(D\) represents the diameter of the cylinder. This leads to a blockage ratio \(B\)=\(D/H\) = 0.0625, where \(H\) here denotes the domain height. Further, we use a constant time-step \(\Delta t\) = 0.1 s for this validation test case and perform the simulations until a total time \(T\) = 200 s.
The time-averaged drag coefficient \(\bar{C_{d}}\) as well as the root mean square (r.m.s) lift coefficient \(C_{l}^{{}^{\prime}}\) obtained from the _Reference_ flow solver are compared with existing results. Specifically, Table 1 shows the \(\bar{C_{d}}\), \(C_{l}^{{}^{\prime}}\), and Strouhal number \(St\) values for flow past a single cylinder at various grid resolutions. It is found that both \(\bar{C_{d}}\) and \(C_{l}^{{}^{\prime}}\) obtained from the medium as well as the fine spatial
\begin{table}
\begin{tabular}{l c c c} \(Re\)=100 (single cylinder) & \(\bar{C_{d}}\) & \(C_{l}^{{}^{\prime}}\) & \(St\) \\ Constant et al. (2017) (\(\Delta x=\Delta y=0.02D\)) & 1.38 & - & 0.165 \\ Constant et al. (2017) (\(\Delta x=\Delta y=0.010D\)) & 1.37 & - & 0.165 \\ Present (\(\Delta x=\Delta y=0.0312D\)) & 1.412 & 0.276 & 0.156 \\ Present (\(\Delta x=\Delta y=0.02D\)) & 1.403 & 0.285 & 0.156 \\ \end{tabular}
\end{table}
Table 1: Coefficient of lift \(C_{l}\), drag \(C_{d}\), and Strouhal number \(St\) for flow past a single cylinder at \(Re\)=100
\begin{table}
\begin{tabular}{l c c c c c c} \(Re\)=100 (three cylinders, \(L/D=2.5\)) & \(\bar{C}_{d,1}\) & \(\bar{C}_{l,1}\) & \(C^{\prime}_{l,1}\) & \(\bar{C}_{d,2}\) & \(\bar{C}_{l,2}\) & \(C^{\prime}_{l,2}\) \\ Zheng et al. (2016) & 1.23 & 0.0 & -0.002 & 1.53 & -0.087 & 0.335 \\ Present (\(\Delta x=\Delta y=0.0312D\)) & 1.249 & 0.0 & 0.0 & 1.561 & 0.166 & 0.433 \\ \end{tabular}
\end{table}
Table 2: Coefficients of lift \(C_{l}\) and drag \(C_{d}\) for flow past three cylinders at \(L/D\)=2.5
resolutions agree very well with the existing solutions of Constant et al. (2017), who also employ a non-conformal immersed boundary-based numerical solver for their computations. It must be noted that a change in the underlying grid from the medium to the fine spatial resolution results in an increase in the total number of control volumes \(n_{c}\) from 393,216 to 960,000. Consequently, to allow for a reasonable computational time while retaining accuracy, we choose the medium grid for the remainder of the computations in this section as well as for obtaining the ground truth solution.
Table 2 presents the mean as well as the r.m.s values of the coefficients of drag and lift for the upstream cylinder and one of the downstream cylinders for flow past three cylinders placed in an equilateral triangle position at spacing ratio \(L/D\)=2.5. It is found that the present solutions agree very well with the ones obtained by Zheng et al. (2016), who employ a body-conformal finite-volume approach using a time step size \(\Delta t\)=0.05 s. The influence of the time-step size is also analyzed in Table 3. It is found that the choice of time step size results in only a marginal difference in the force quantities as well as the Strouhal number. Consequently, a moderate value of \(\Delta t\)=0.1 s is adopted for the present study. It can now be remarked that the grid resolution and time-step size chosen for the present study allow for the accurate computation of flow past multiple bodies. In addition, we also qualitatively verify the wake regimes produced using the flow solver, as highlighted in Appendix B.
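For reference, the Strouhal numbers reported in Tables 1-3 can be extracted from the lift-coefficient history via a simple spectral analysis; the sketch below is illustrative (the signal and parameter names are placeholders, not from the original solver setup), picking the dominant shedding frequency from an FFT of \(C_{l}(t)\):

```python
import numpy as np

def strouhal_number(cl, dt=0.1, D=1.0, U=1.0, discard=1500):
    """Dominant shedding frequency of a lift-coefficient series as St = f*D/U.
    `cl` is sampled every `dt` seconds; initial transients are discarded."""
    sig = cl[discard:] - np.mean(cl[discard:])     # remove the mean before the FFT
    freqs = np.fft.rfftfreq(sig.size, d=dt)
    spectrum = np.abs(np.fft.rfft(sig))
    f_shed = freqs[1:][np.argmax(spectrum[1:])]    # skip the zero-frequency bin
    return f_shed * D / U

t = np.arange(3000) * 0.1
cl_demo = 0.28 * np.sin(2 * np.pi * 0.156 * t)     # synthetic signal at St = 0.156
print(strouhal_number(cl_demo))                    # ~0.156, up to frequency resolution
```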
### Differentiable flow solver
The present study uses the open source simulation toolkit Phiflow (\(\phi_{\text{flow}}\)) as the base differentiable flow solver (or _Source_). It leverages automatic differentiation to enable an end-to-end recurrent training paradigm. While it is crucial that the base solver (_i.e.,_ the _Source_) be differentiable, the _Reference_ solver (_i.e.,_ FoamExtend in this case) need not be differentiable. We employ the projection method to decouple the momentum and continuity equations in a two-step predictor-corrector fashion. Firstly, in the predictor phase, we use operator splitting of the diffusion, advection, and pressure terms; specifically, a MacCormack advection step advects the velocity fields. Secondly, in the corrector phase, the spatial gradients of the pressure are used to compute the divergence-free velocity field (Chorin, 1968).
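For illustration, one operator-split step of the kind described above can be written with PhiFlow's high-level API roughly as follows; this is a minimal sketch assuming the PhiFlow 2.x names `advect.mac_cormack`, `diffuse.explicit`, and `fluid.make_incompressible`, with placeholder grid dimensions and obstacle geometry (exact constructor signatures vary between PhiFlow versions):

```python
from phi.flow import *  # PhiFlow 2.x namespace (assumed)

DT, NU = 0.1, 0.01
velocity = StaggeredGrid((1.0, 0.0), extrapolation.BOUNDARY,
                         x=96, y=64, bounds=Box(x=24, y=16))
obstacle = Obstacle(Sphere(x=8.0, y=8.0, radius=0.5))   # placeholder cylinder

def step(v, p=None):
    v = advect.mac_cormack(v, v, DT)                    # MacCormack advection
    v = diffuse.explicit(v, NU, DT)                     # viscous diffusion
    v, p = fluid.make_incompressible(v, [obstacle])     # pressure projection (Chorin)
    return v, p
```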
Finally, the choice of Cartesian grids in both _Reference_ and _Source_ solver allows for a one-to-one mapping between each control cell. This is especially important given that the use of masked stair-step representation of the body boundary by the _Source_ results in a very approximate reconstruction of near-body fluid properties. This allows for a training setup wherein the network learns to faithfully reconstruct the local boundary flow profile based on the accurate _Reference_ solver.
### Cylinder layout
We carefully choose a setup for the cylinder cluster arrangement that allows us to mimic the traditional approach adopted in the open literature. However, as we seek to generate and exploit a
\begin{table}
\begin{tabular}{l c c c c c c c} \(Re\)=100 (three cylinders, \(L/D=2.5\)) & \(\bar{C}_{d,1}\) & \(\bar{C}_{l,1}\) & \(C^{\prime}_{l,1}\) & \(\bar{C}_{d,2}\) & \(\bar{C}_{l,2}\) & \(C^{\prime}_{l,2}\) & \(St\) \\ Present (\(\Delta t\) = 0.25) & 1.254 & 0.0 & 0.0 & 1.584 & 0.172 & 0.49 & 0.148 \\ Present (\(\Delta t\) = 0.1) & 1.249 & 0.0 & 0.0 & 1.561 & 0.166 & 0.433 & 0.156 \\ Present (\(\Delta t\) = 0.05) & 1.244 & 0.0 & 0.0 & 1.545 & 0.166 & 0.359 & 0.156 \\ \end{tabular}
\end{table}
Table 3: Coefficient of lift \(C_{l}\), drag \(C_{d}\), and Strouhal number \(St\) for flow past three cylinders at various time-step sizes
multitude of complexities in the training data to harness the full capabilities of the present hybrid learning strategy, we introduce some randomness to the setup. Some specifications related to the cylinder arrangements are as follows:
1. **Cylinder arrangement**: We place three cylinders in an equilateral triangle position, _i.e.,_ one upstream cylinder followed by two downstream cylinders, placed apart from each other via the spacing ratio \(L/D\). This setup mimics the "regular triangular" cluster setup used in Zravkovich (1987) and has also been investigated recently by Chen et al. (2020). Additionally, this arrangement allows us to investigate _proximity_ interference (P), _wake_ interference (W), a combination of (P+W) interference as well as no-interference.
2. **Upstream composite cylinder**: The upstream cylinder (cylinder \(C_{1}\)) is framed as a composite body consisting of a base rectangle (at its center) and a secondary rectangle or semi-circle on each of its sides (see Fig. 1). The selection between a secondary rectangle and a secondary semi-circle for a given side of the base rectangle is random. We define functions \(f_{i}:R_{\text{u},i}\to S\) that map random integers \(R_{\text{u},i}\in\{0,1\}\) to geometric shapes \(S\in\{\text{semi-circle, rectangle}\}\), where \(i\in\{1,2,3,4\}\) indexes the sides. The outcome can be represented as follows: \[R_{\text{u},i}=\begin{cases}0&f_{i}=\text{semi-circle}\\ 1&f_{i}=\text{rectangle}\end{cases}\] The lengths of the base rectangle, as well as those of the secondary rectangles, are randomly chosen between fixed upper and lower limits (see Table 4). The diameter of the semi-circle is equal to the edge length of the base rectangle to which it is attached.
3. **Downstream cylinders**: Each downstream cylinder is either a rectangular cylinder of a certain length and height or a circular cylinder of a certain diameter. This selection is partially random. We define a function \(f_{2,3}:R_{\text{d}}\to S\) that maps a random integer \(R_{\text{d}}\in\{0,1\}\) to geometric shapes \(S\in\{\text{circle, rectangle}\}\). Once the upper downstream cylinder (cylinder \(C_{2}\)) is chosen (say, a rectangular cylinder), the lower downstream cylinder (cylinder \(C_{3}\)) is fixed (a circular cylinder). This outcome can be represented as shown below.
Figure 2: Boundary reconstruction approach used by (a) FoamExtend (_Reference_ solver) (b) Phiflow (_Source_ solver)
\[R_{\rm d}=\begin{cases}0&f_{2}=\text{circle};\ \ f_{3}=\text{rectangle}\\ 1&f_{2}=\text{rectangle};\ \ f_{3}=\text{circle}\end{cases}\]
This forces the training dataset to always contain a rectangle and a circular cylinder as the two downstream cylinders. In addition, once the height of the downstream rectangular cylinder, \(H_{\rm C2}\) (or the diameter of the downstream circular cylinder, \(D_{\rm C3}\)) is chosen, the dimension (_i.e.,_ diameter or cylinder height) of the other downstream cylinder is fixed to satisfy the following criterion.
\[H_{\rm C2}+D_{\rm C3}=2H_{\rm C1} \tag{2}\]
This prevents the cylinders from overlapping. The dimensions of the rectangle and the circular cylinder are randomly chosen between fixed upper and lower limits (see Table 4).
4. **Spacing ratio**: The centre-to-centre distance between the downstream cylinders, as well as between the downstream and upstream cylinders, is controlled by the spacing ratio \(L/D\). The \(L/D\) values are chosen such that they cover a wide spectrum of possible wake flow regimes (_i.e.,_\(1.2\leq L/D\leq 5.5\)). Care is taken to avoid overlapping bodies for low \(L/D\) values. Finally, to ensure an unbiased selection of the spacing ratio, Latin hypercube sampling (LHS) is chosen as the design-of-experiments method.
Points 2 and 3 above intentionally introduce arbitrariness to the training data to serve a dual purpose. Firstly, we analyze the role of arbitrariness in the wake flow regime to offer new insight into the present body of knowledge around flow past equi-diameter cylinders. Secondly, this introduces data diversity while also allowing us to evaluate the generalizability of the hybrid learning framework to more representative problems. Further, point 4 allows us to uncover various wake categories that have been introduced recently by Chen et al. (2020) for similar spacing ratios.
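A minimal sketch of this randomized sampling procedure is given below; the helper function is hypothetical, the limits for the downstream bodies are placeholders standing in for the (partially unavailable) Table 4 values, and the base rectangle height is taken as \(H_{\rm C1}\) in Eq. (2):

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(42)

def sample_configuration(l_over_d):
    """One random three-cylinder configuration at a given spacing ratio L/D."""
    # Point 2: upstream composite cylinder (base rectangle + 4 random sides)
    base_L, base_H = rng.uniform(0.4, 0.5), rng.uniform(0.4, 0.5)
    sides = ["semicircle" if rng.integers(2) == 0 else "rectangle"
             for _ in range(4)]                       # outcomes f_i, i = 1..4
    # Point 3: downstream pair is always one rectangle and one circle
    shapes = ("circle", "rectangle") if rng.integers(2) == 0 else ("rectangle", "circle")
    H_c2 = rng.uniform(0.3, 0.7)                      # placeholder limits (Table 4)
    D_c3 = 2.0 * base_H - H_c2                        # Eq. (2), taking H_C1 ~ base_H
    return dict(L_D=l_over_d, base=(base_L, base_H), sides=sides,
                downstream=(shapes, H_c2, D_c3))

# Point 4: unbiased spacing ratios in [1.2, 5.5] via Latin hypercube sampling
sampler = qmc.LatinHypercube(d=1, seed=42)
spacing = qmc.scale(sampler.random(n=100), 1.2, 5.5).ravel()
configs = [sample_configuration(ld) for ld in spacing]
```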
### Wake categories
This section introduces the various wake categories obtained from the _Reference_ solver at multiple spacing ratios \(L/D\) for equi-diameter cylinders placed in an equilateral triangle position. These representative spacing ratios for equi-diameter cylinders are for illustration purposes and are not part of the training data. We purposely choose equi-diameter circular cylinders so that fair comparisons can be drawn with Chen et al. (2020) who also evaluate it on identical conditions.
**Single bluff-body wake:** At an extremely small spacing ratio, _i.e.,_\(L/D=1.2\), the three cylinders are so close to each other that they shed vortices as if they were a single bluff body. This is demonstrated pictorially in Fig. 30 in Appendix B, along with other representative cases. This kind of wake is periodic in nature.
**Deflected gap wake:** At a slightly larger spacing ratio, _i.e.,_\(L/D=1.5\), the fluid in between the two downstream cylinders (the gap flow) deflects towards either the upper cylinder (_i.e.,_ cylinder 2) or the lower cylinder (_i.e.,_ cylinder 3), hence the name "deflected gap". This deflection remains constant over time. This kind of wake is periodic, with some modulation over time.
**Flip-flopping wake:** For a marginally larger spacing ratio, _i.e.,_\(L/D=2.25\), the gap flow in between the two downstream cylinders switches direction erratically between the upper cylinder (_i.e.,_ cylinder 2) and the lower cylinder (_i.e.,_ cylinder 3), hence the name "flip-flopping". This chaotic switching is a challenging case for any predictive framework, and this kind of wake is irregular in nature.
**Anti-phase wake:** For a moderately larger spacing ratio, _i.e.,_\(L/D\) = 3.5, the vortices shed by the two downstream cylinders rotate in opposing directions, hence the name "anti-phase". This wake category is periodic in nature and exhibits symmetry about the streamwise centerline. For arbitrarily shaped bodies, this symmetry is lost; the corresponding wake is consequently referred to in this work as a "quasi anti-phase" wake.
**Fully developed in-phase wake:** For a larger spacing ratio, _i.e.,_\(L/D\) = 5, the vortices shed by the two downstream cylinders rotate in the same direction, hence the name "in-phase". Further, the upstream cylinder also sheds vortices, which merge with the vortices shed by the downstream cylinders, hence the term "fully developed". This kind of wake is periodic in nature. For arbitrarily shaped bodies, this is referred to in this work as a "quasi in-phase" wake.
\begin{table}
\begin{tabular}{l l c c c c c c} & & \multicolumn{2}{c}{Length (\(L\))} & \multicolumn{2}{c}{Height (\(H\))} & \multicolumn{2}{c}{Radius (\(R\))} \\ & Component & lower limit & upper limit & lower limit & upper limit & lower limit & upper limit \\ Upstream Cylinder & Base rectangle (primary) & 0.4 & 0.5 & 0.4 & 0.5 & - & - \\ & Rectangle (secondary) & & & & & & \\ Downstream Cylinder & Rectangle & & & & & & \\ & Circular cylinder & & & & & & \\ \end{tabular}
\end{table}
Table 4: Upper and lower limits for the dimensions of the cylinders
Figure 3: DPNN-based hybrid predictive framework
## 3 Hybrid predictive framework
This section introduces the methodology behind the DPNN-based hybrid framework used in the present work. As alluded to earlier, such a strategy couples the base differentiable flow solver Phiflow with a neural network (NN) architecture, mimicking the _solver-in-the-loop_ strategy previously employed by Um et al. (2020). Such a strategy reaps the benefit of long-term training feedback while enabling an end-to-end training paradigm. We build on the above by enabling a robust and efficient coarse-grained surrogate model for unsteady flows, investigating one of the core fluid flow problems, _i.e.,_ wake flows caused by arbitrarily shaped objects. This serves a dual purpose: firstly, the arbitrarily shaped bodies allow for a rich and diverse dataset; secondly, while it is well known that the shape of the body determines the resulting wake characteristics, to the best of our knowledge this issue has not previously prompted an investigation across a large spectrum of spacing ratios.
### Neural network architecture
In this section, we highlight the neural network (NN) architectures explored in the present study. We explore a range of popular architectures while keeping the total number of parameters in each network within reasonable limits to facilitate a fair comparison.
The network takes an input of channel size 3, _i.e.,_ the velocity fields \(\mathbf{u}=u,v\) along with the shape mask or marker field \(m_{f}\in R^{n_{x}\times n_{y}}\), where \(n_{x}\) and \(n_{y}\) represent the number of grid points along the longitudinal and lateral directions, respectively. The shape mask \(m_{f}\) uniquely identifies the shape of the enclosed body and is fixed for a given test configuration. While the ground truth data is obtained on a high-resolution grid of size \(n_{x},n_{y}\) = (\(768\times 512\)), the network only receives downsampled velocity and shape-mask inputs (\(96\times 64\), _i.e.,_ a factor of 8 in each direction) for faster training. For a training sample, there are a total of \(t_{f}\) = 3000 frames or snapshots (of which the first 1500 are discarded so that only statistically stationary snapshots are considered), each representing velocity fields generated sequentially using a constant time-step of \(\Delta t\) = 0.1 s (or a total time \(t\) = 300 s). The entire dataset consists of 100 different experiments, each at a unique spacing ratio, resulting in 100 unique shape masks \(m_{f}\). As mentioned in Section 2.4 and Fig. 1, to allow for a fair comparison, we employ the same cylinder cluster arrangement as employed by Chen et al. (2020), with the equi-diameter cylinders of Chen et al. (2020) replaced by arbitrarily shaped cylinders in the present work. Of the entire dataset, 50 experiments are used for training, and the remaining 50 are used for testing. The training is executed using a batch size of 50 for 50 epochs with Adam (Kingma and Ba, 2014) as the optimizer. A variable learning rate \(\eta\) is employed, which is sequentially reduced with the epochs. The output of the network is constrained to be of channel size 2, _i.e.,_ the velocity field at the next time step. In this work, we predominantly employ residual neural network (ResNet)-based architectures, besides other popular designs. Appendix C provides the details of each architecture.
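In code, assembling the network input amounts to stacking the two velocity components with the static shape mask and coarsening to the training resolution; a PyTorch-style sketch with illustrative array names:

```python
import torch
import torch.nn.functional as F

def make_network_input(u, v, mask):
    """Stack (u, v, m_f) into a 3-channel tensor and coarsen 768x512 -> 96x64.
    u, v, mask: float tensors of shape (768, 512)."""
    x = torch.stack([u, v, mask]).unsqueeze(0)        # shape (1, 3, 768, 512)
    x = F.interpolate(x, size=(96, 64), mode="area")  # 8x area-weighted coarsening
    return x
```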
### Present hybrid framework
The DPNN-based hybrid predictive framework is a _recursive_ learning strategy wherein the base flow solver Phiflow (\(\phi_{\text{flow}}\)) is coupled with the neural network, as shown in Fig. 3. In the present study, the _Reference_ solver (_i.e.,_ FoamExtend) provides the pre-computed ground truth data. The hybrid framework receives an initial state \(\mathbf{r}^{t}\) from the _Reference_ solver and feeds it to the base solver (_i.e.,_ Phiflow). The base solver then advances the velocity field to the next time-step, yielding \(\mathbf{u}^{*}\). At this
stage, no learning is involved. The output from PhiFlow is then fed to the NN architecture as input, _i.e.,_ the velocity fields \(u,v\) and the shape marker field \(m_{f}\). The output from the NN, \(\mathbf{f}(\mathbf{u}^{*};\theta_{W})\), acts as a forcing function which corrects the output of the base solver PhiFlow (\(\theta_{W}\) represents the network weights), _i.e.,_\(\mathbf{u}^{t+1}=\mathbf{u}^{*}+\mathbf{f}(\mathbf{u}^{*};\theta_{W})\). This step is made recursive for \(m\) solver unrollment steps, signifying as many forward steps of the base solver in time, by feeding the corrected output back to the base solver.
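Schematically, the recursive corrected step at inference time reads as follows, where `source_step` stands in for one PhiFlow update and `net` for the trained network (both are placeholders):

```python
def rollout(u0, mask, net, source_step, n_frames):
    """Autoregressive DPNN prediction: u^{t+1} = u* + f(u*; theta_W),
    with u* the base-solver update of the previous corrected state."""
    u, frames = u0, []
    for _ in range(n_frames):
        u_star = source_step(u)            # base solver advances one step
        u = u_star + net(u_star, mask)     # network correction as forcing term
        frames.append(u)
    return frames
```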
We remark that both the _Reference_ and the base solvers are subject to identical values of the kinematic viscosity \(\nu\), time step size \(\Delta t\), and freestream conditions, besides incorporating the same object under consideration. The critical difference lies in their respective solution strategies and their techniques for handling the body boundary conditions: while the former is spatially second-order accurate and algebraically reconstructs the fluid properties along the body boundary to locally satisfy the no-slip boundary condition, the latter is spatially first-order accurate and resorts to a masked stair-step representation for computational efficiency. This fact serves as the key motivation for learning the global wake dynamics as well as the local boundary representation via the loss formulation, as discussed in the next section.
### Loss formulation
The loss formulation encodes the central goal of the learning process; in this study, this means recovering the ground truth velocity flowfields as accurately as possible. The trainable parameters of the model are iteratively updated by minimizing the loss function. This allows the optimizer to navigate the non-convex energy landscape with the goal of arriving at a global or local minimum. Typical choices of a loss function \(\mathcal{L}\) use some error norm to compare the network prediction \(\mathbf{u}\) with the ground truth \(\mathbf{r}\). For instance, the \(\mathcal{L}_{2}\) norm for purely supervised training is given as:
\[\mathcal{L}=\mathcal{L}_{2}(\mathbf{r}-\mathbf{u}) \tag{3}\]
Such a formulation minimizes the Euclidean distance between \(\mathbf{u}\) and \(\mathbf{r}\). Although popular, such a loss is devoid of additional information from the physics point of view. For instance, Um et al. (2020) showed that one can formulate physics-based loss functions by taking advantage of temporal unrolling during training. This was followed by an independent investigation (List et al., 2022), wherein a custom loss function was built to accurately predict turbulent flows. Consequently, a greater emphasis is placed on rendering a loss that yields a physically consistent solution. Such a physics-based loss could take the form:
\[\mathcal{L}=\frac{1}{m}\sum_{i=1}^{m}\mathcal{L}_{2}(\mathbf{r}^{t+i}-\mathbf{u}^{t+i}) \tag{4}\]
where \(m\) represents the number of unrollment steps used during training and \(t\) represents the instant of time. Interested readers are referred to the webbook Thuerey et al. (2021) for further details.
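Concretely, the unrolled loss of Eq. (4) chains \(m\) differentiable solver steps with the network correction and accumulates the per-step error; a schematic PyTorch-style sketch, where `diff_solver_step` is a placeholder for one differentiable PhiFlow update:

```python
def unrolled_loss(u_t, refs, mask, net, diff_solver_step, m):
    """Eq. (4): mean L2 error over m unrolled, corrected solver steps.
    refs[i] holds the reference state r^{t+i+1}; gradients flow through
    both the network and the differentiable base solver."""
    loss, u = 0.0, u_t
    for i in range(m):
        u_star = diff_solver_step(u)       # differentiable base-solver step
        u = u_star + net(u_star, mask)     # corrected prediction u^{t+i+1}
        loss = loss + ((u - refs[i]) ** 2).mean()
    return loss / m
```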
### Benchmarks
In addition to our present DPNN framework, we also compare against existing benchmarks. As part of this comparative evaluation, we employ the data-driven supervised learning (SL) method. In addition, we evaluate results produced by running only the base solver, _i.e.,_ the _Source_. For the SL approach, the network directly receives the input \(\mathbf{r}^{t}\) from the _Reference_ solver and produces the
output as the flowfield for the subsequent time step, _i.e.,_\(\mathbf{f}(\mathbf{r}^{t};\theta_{W})\). As this SL framework does not contain an integrated _Source_ solver, the output generated by the network serves as the final prediction for the next time step (_i.e.,_\(\mathbf{u}^{t+1}=\mathbf{f}(\mathbf{r}^{t};\theta_{W})\)). This makes the SL approach faster. Notably, to ensure a fair comparison, the DPNN and SL frameworks utilize identical underlying networks and receive the same input during training/testing. For the output produced by the _Source_ solver, the next time step prediction can be written as \(\mathbf{u}^{t+1}=\mathbf{u}^{*}\). Finally, for reasons of completeness and validation, we also juxtapose our predictions with those from the _Reference_ solver. Comparisons with the _Source_ solver highlight the starting point from which the DPNN learns to apply its corrections. Further details related to the network architectures may be found in Appendix C.
## 4 Results and Discussions
In this section, we present a detailed analysis of the results obtained from the present hybrid learning framework as well as other benchmarks. As mentioned earlier, we employ multiple neural network architectures within the hybrid learning approach. Henceforth, we begin by undertaking an analysis that compares various network architectures for a common problem in the following section.
### Role of Network Parameters in Model Performance
We start by presenting statistical evaluations of the models built using different network architectures for all the testing samples. The distribution of model performance across a large spectrum of testing samples is effectively represented via kernel density estimation (KDE). Besides highlighting the model confidence, it further allows for the potential identification of outliers. The KDE is evaluated based on the mean absolute error \(\mu\) of the model predictions compared to the ground truth data, for the 100th testing frame. The mean absolute error \(\mu\) is evaluated as follows:
\[\mu=\frac{1}{n_{c}}\sum_{i=1}^{n_{c}}\left(|u_{\text{ref},i}-u_{\text{baseline},i}|+|v_{\text{ref},i}-v_{\text{baseline},i}|\right) \tag{5}\]
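As a sketch of how this statistical evaluation can be assembled (the sample data and array shapes below are placeholders, not prescribed by the paper):

```python
# Sketch of the statistical evaluation: the mean absolute error of Eq. (5)
# per testing sample, followed by a kernel density estimate over all samples.
import numpy as np
from scipy.stats import gaussian_kde

def mae(u_ref, v_ref, u_base, v_base):
    # Eq. (5): cell-averaged sum of absolute errors in both velocity components
    return np.mean(np.abs(u_ref - u_base) + np.abs(v_ref - v_base))

rng = np.random.default_rng(0)
# placeholder: 50 testing samples of (u_ref, v_ref, u_base, v_base) fields
test_samples = [tuple(rng.random((64, 128)) for _ in range(4)) for _ in range(50)]
errors = np.array([mae(*s) for s in test_samples])
kde = gaussian_kde(errors)                     # approximate density as in Fig. 4
grid = np.linspace(errors.min(), errors.max(), 200)
density = kde(grid)
```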
\begin{table}
\begin{tabular}{c c c c} Architecture & No. of parameters & Mean, \(\mu_{N}\) & Standard deviation, \(\sigma_{N}\) \\
CNN & 546,478 & 0.03124 & 0.0082 \\
ResNet & 516,674 & **0.02626** & 0.00657 \\
Unet (with DS) & 520,342 & 0.07693 & 0.0138 \\
Unet & 520,342 & 0.06067 & 0.0152 \\
Dil-ResNet (with DS) & 548,066 & 0.07266 & 0.0211 \\
Dil-ResNet & 548,066 & 0.04303 & 0.01101 \\
ResNeXt & 524,322 & 0.03871 & 0.00787 \\
DenseNet & 566,578 & 0.04258 & **0.0047** \\
\end{tabular}
\end{table}
Table 5: Comparative performance among various NN architectures in terms of mean \(\mu_{N}\) and standard deviation \(\sigma_{N}\) (across 50 testing samples, at the end of the 100th testing frame)
\begin{table}
\begin{tabular}{c c c} Random seed number & Mean, \(\mu\) & Standard deviation, \(\sigma\) \\
1 & 0.03029 & 0.00818 \\
2 & 0.02643 & 0.00747 \\
3 & **0.02483** & **0.00706** \\
4 & 0.02991 & 0.00896 \\
42 & 0.02626 & 0.0082 \\ \end{tabular}
\end{table}
Table 6: Comparative performance of ResNet architecture for different random seeds
Figure 4: Comparison of the mean absolute error \(\mu\) at the end of the 100th testing frame for 50 testing samples obtained from different NN architectures
Figure 4 presents the approximate probability distribution via the KDE for the 50 testing samples from different network architectures. It is found that most of the approximate probability distributions are indicative of a Gaussian-like distribution. These include the KDE obtained from the convolutional neural network (CNN) and the residual network-based models, _i.e.,_ ResNet (He et al., 2016), together with the newer variants ResNeXt (Xie et al., 2017), DenseNet (Huang et al., 2017), and the dilated residual network (Dil-ResNet) from Stachenfeld et al. (2021). On the contrary, the Unet (Ronneberger et al., 2015) exhibits multiple peaks in the distribution while also incurring a greater sample mean error \(\mu_{N}\) (see Table 5). While attributing higher error to specific architectures is not straightforward, strikingly, both the Unet with downsampling (with DS) and the Dil-ResNet (with DS) architectures employ encoder blocks that compress the input data to a reduced dimension via the stride operation. This is not entirely surprising, as the model struggles to retain local features of the input upon downsampling. We remark that identical choices of activation function, optimizer, learning rate \(\eta\), initial weights \(\theta_{\text{W}}\), loss function, etc. have been used for all the architectures. Consequently, models without compression or convolutional downsampling serve as a better alternative for the present choice of problems.
Neural network-based learning is a stochastic, non-linear optimization process that aims to iteratively improve the model predictions on training data while managing uncertainty and navigating through a complex, multidimensional solution space. In this context, using a gradient-based optimizer like Adam may result in a locally-optimal solution with some sensitivity to the initial guess. To check the robustness of the model, the influence of the random number seed must be evaluated. We investigate the performance of the ResNet-based architecture for multiple seeds. Figure 5 shows the approximate probability density based on the mean absolute error \(\mu\) obtained using ResNet-based architectures for different values of random number seed for all the testing samples. It can be observed that the Gaussian-like distribution of the error \(\mu\) is retained for different random seed values, besides showing consistency in the model performance in terms of the sample mean \(\mu_{N}\) (see Table 6). In light of these discussions, it can be remarked that the performance is
Figure 5: Influence of random number seed on the performance of residual network (ResNet) architecture
independent of the random number seed. Finally, we also determine the influence of the adaptive learning rate \(\eta\) that determines the convergence and consequently the nature of solutions obtained. This is briefly discussed in Appendix C.1. The remainder of the study is based on the ResNet architecture.
### Qualitative comparison of predictive models across wake categories
In this section, we compare various benchmark models, assessing their capability to accurately reproduce the dynamic wake characteristics for new, unseen testing samples with different spacing ratios. This analysis is fundamentally important because, intriguingly, different spacing ratios between the cylinders yield diverse wake behaviors, each of which necessitates faithful reproduction. For instance, the deflected gap wake should exhibit a narrow and a wide wake immediately downstream of the two subsequent cylinders, with the gap flow between these cylinders being deflected towards the narrow wake region. This deflection should remain constant with time. In contrast, for the flip-flopping or the chaotic wakes, the gap flow switches direction in an erratic manner. As such, while some wake patterns exhibit a periodic, uniform repetition, others veer into the realm of chaos. It is, therefore, a fascinating problem that tests the predictive capabilities of the models.
Five representative testing samples are chosen from the available 50 experiments, each exhibiting a unique spacing ratio and the corresponding wake dynamics. We present vorticity flowfields generated entirely from the network's output. This allows us to highlight the regions of interest, _i.e.,_ cylinder wake and near body boundary, besides also indicating the direction of rotation of the vortices. Figure 6 compares the vorticity flowfield obtained from the various benchmarks, including
Figure 6: Comparison of vorticity flowfields obtained using various benchmarks for different time instances for the spacing ratio \(L/D=1.26\) (single bluff-body wake, 17th testing sample)
the _Reference_ solver at the spacing ratio \(L/D\) = 1.26. The vorticity flowfield obtained from the _Reference_ exhibits a single bluff-body wake with a certain shedding frequency. It is found that while all the baselines can reproduce the correct category of the wake, the _Source_ and the supervised learning (SL) approaches very quickly deviate from the correct vortex-shedding cycle. However, the present DPNN approach compares remarkably well with the _Reference_ data even for long time horizons.
Figure 7 presents the comparison of the vorticity flowfield obtained at the spacing ratio \(L/D\) = 1.57, for which the _Reference_ solver portrays a deflected gap wake. As alluded to earlier, the deflected gap flow shifts toward the narrower wake region (_i.e.,_ the lower downstream cylinder in this case), with the direction of the gap flow remaining stable over time. Consequently, the predictive models should preserve this phenomenon while accurately predicting the vortex-shedding cycle. It is found from Fig. 7 that the prediction from _Source_ quickly transitions to a single bluff-body wake, whereas the vorticity contours obtained from the SL approach transform into a hybrid quasi in-phase wake. In contrast, the current DPNN approach aligns impressively with the _Reference_ data across extended time horizons. This observation holds true for the deflected gap flow direction and the overall vortex shedding cycle.
Figure 8 showcases a comparison of the vorticity flowfield at a spacing ratio \(L/D\) = 2.34, where the _Reference_ solver displays a flip-flopping (or chaotic) wake, _i.e.,_ the gap flow in between the two downstream cylinders chaotically switches direction. This is evident in Fig. 8 wherein the gap flow pointing downwards from the frame at \(t=60\Delta t\) has transitioned to the top at \(t=120\Delta t\). The
Figure 7: Comparison of vorticity flowfields obtained using various benchmarks for different time instances for the spacing ratio \(L/D\) = 1.57 (deflected gap wake, 39th testing sample)
predictions from DPNN seem to capture this switch correctly up to frame \(t=180\Delta t\), whereas, for long temporal horizons, both the gap flow direction and the vortex shedding cycles show visible differences. These differences are even more pronounced for the vorticity flowfields obtained from the _Source_ and SL approaches.
Figure 9 exhibits a comparative analysis of vorticity flowfields at a spacing ratio of \(L/D\) = 3.37, which, as per the _Reference_ solver, corresponds to a fully developed quasi in-phase wake. The clockwise-rotating vortices (highlighted in blue) emitted from the freestream side of the upper cylinder and the gap side of the lower cylinder appear to be quasi in-phase. The same can be said for the anti-clockwise-rotating vortices (highlighted in red) shed from the freestream side of the lower cylinder and the gap side of the upper cylinder. It is found that among all the baselines, both the DPNN and SL approaches perform well, with the former preserving the vortex structure quite remarkably for long-time horizons. On the other hand, the _Source_ depicts a chaotic wake and fails to reproduce the actual wake dynamics for the given spacing ratio and geometric configuration.
Figure 10 highlights the vorticity flowfields obtained at the spacing ratio of \(L/D\) = 3.89 from various benchmarks. The flowfield obtained from the _Reference_ solver indicates a fully-developed quasi anti-phase wake. It is observed that the vortices emitted from the freestream sides of the two downstream cylinders shed in opposite directions. This dynamic nature of the wake is accurately retained by the present DPNN approach. On the contrary, the wake obtained from the SL approach transitions into a hybrid wake, whereas the _Source_ yields a chaotic wake.
Figure 8: Comparison of vorticity flowfields obtained using various benchmarks for different time instances for the spacing ratio \(L/D\) = 2.34 (chaotic wake, 41st testing sample)
This exercise underscores the crucial need for developing robust and precise models, which should accurately identify the underlying wake category and deliver sustained temporal fidelity over extended periods. While the current section provides a qualitative comparison of the individual baselines across multiple wake categories, the subsequent section presents a quantitative analysis of the key variables of interest.
### Quantitative comparisons in physical and reduced feature space
In the preceding section, we employed velocity and vorticity flowfields as a qualitative measure of performance accuracy. The network used in our study is trained purely in physical space, specifically on the velocity flowfields as a collection of control volumes with staggered faces. Through the actions of the kernels, the convolutions transform these physical quantities into an equivalent-dimensional feature space, achieved through zero-padding. This process poses an engaging question: does the network retain its performance in Fourier space or in the reduced feature space offered by the first three principal components?
We begin our investigation by adopting the 39th testing sample, which depicts a deflected gap wake, as discussed in Section 4.2. In Fig. 11, we present the temporal variation of the velocity components \(u,v\) at multiple spatial probes using different baselines. These probes are placed in the wake of the downstream cylinders along the centerline axis, _i.e._, \((n_{\mathrm{px}},n_{\mathrm{py}})\) = (48,32), (64,32), (80,32), where \(n_{\mathrm{px}},n_{\mathrm{py}}\) represent the nodal positions in the \(x\) and \(y\) directions, respectively. It must be noted here that the present DPNN approach from here on refers to \(m\)=4 steps of unrolling during training. Thus, an evaluation period up to \(t=300\Delta t\) is a substantial period wherein multiple
Figure 9: Comparison of vorticity \(\omega\) flowfields obtained using various benchmarks for different time instances for the spacing ratio \(L/D\) = 3.37 (fully developed quasi in-phase wake, 26th testing sample)
vortex-shedding cycles are captured. It is found that the present DPNN framework results in very good agreement with the modulated signals obtained for both \(u,v\) velocity components, with the agreement seemingly better for probes placed further away from the bodies. The minor offset with the data obtained from the _Reference_ solver tends to happen for higher time values \(t\). On the contrary, however, this offset is rather early for the predictions obtained from the _Source_ solver. Moreover, for higher values of time \(t\), the predictions significantly differ from the _Reference_.
Figure 12 displays the distributions of \(u\)-velocity across several cross-sections aligned in the longitudinal direction (along the rows) at multiple instances of time (along the columns), obtained from multiple benchmarks. Barring some minor differences, we find that the current framework aligns exceptionally well with data obtained from the _Reference_ solver across various spatial locations and at different time instances. Similar observations can also be drawn from Fig. 13, where the \(u\)-velocity distributions are plotted along multiple horizontal cross-sections at different instances of time. Naturally, these observations are also reflected in Fig. 14, where the mean absolute error \(\mu\) (calculated based on Eq. 5) for the recursive predictions with respect to the ground truth data is shown. It is found that the present DPNN-based approach yields the lowest cumulative error over time, whereas the data from the _Source_ results in the highest growth of error over time. As such, it can be mentioned that the DPNN approach has successfully enabled a hybrid framework that can faithfully reproduce the dynamical behavior of the physical quantity (_i.e.,_ velocity flowfields).
We now compute the power spectral density (or PSD) based on the velocity flowfields obtained from the predictive frameworks. Figure 15(a)-(c) presents the PSD obtained from the velocity measurements at the \(n_{\mathrm{px}},n_{\mathrm{py}}\) = (64,32) probe location with respect to different time ranges. It is
Figure 10: Comparison of vorticity \(\omega\) flowfields obtained using various benchmarks for different time instances for the spacing ratio \(L/D\) = 3.89 (fully developed quasi anti-phase wake, 32nd testing sample)
Figure 11: Comparison of velocity downstream of the bodies and its variation with time (39th testing sample).
Figure 12: Comparison of the \(u\)-velocity at multiple vertical cross-sections (39th testing sample)
Figure 14: Variation of the mean absolute error from various frameworks with testing frames for the 39th testing sample
Figure 13: Comparison of the \(u\)-velocity at multiple horizontal cross-sections (39th testing sample)
found that for a relatively small time range of up to 100 \(\Delta t\), the PSD vs. frequency obtained from the DPNN aligns exceptionally well with the data obtained from the _Reference_ solver, signifying that the present framework is able to faithfully reconstruct small- as well as large-scale structures in the velocity flowfield. However, minor differences in the high-frequency range are found for long recursive predictions (_i.e.,_ up to 300 \(\Delta t\)), indicating small-scale differences in the velocity flowfields. This is consistent with the observations in Fig. 11, where observable differences are found for longer recursive predictions.
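A sketch of the PSD evaluation at a probe follows; the Welch estimator is one standard choice, and the signal, time step, and segment length below are placeholders (the paper does not prescribe the estimator).

```python
# Sketch of the PSD evaluation at a probe location; all inputs are placeholders.
import numpy as np
from scipy.signal import welch

dt = 0.05                                             # placeholder time step
t = np.arange(300) * dt
u_probe = np.sin(2 * np.pi * 0.2 * t)                 # placeholder probe signal
freq, psd = welch(u_probe, fs=1.0 / dt, nperseg=128)  # PSD vs. frequency
```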
We perform principal component analysis (PCA) on the predicted velocity fields obtained from the baselines to evaluate the principal components (PCs) and their variation with time \(t\). Figure 16 presents the first three PCs and their distribution with time. It is found that the temporal variation of PC 1 and PC 2 indicates a periodicity in the data obtained from the _Reference_ solver, which is captured correctly in the predictions obtained by DPNN. Moreover, the temporal variations in the PC 2, PC 3, and PC 3, PC 1 components are also in good overall agreement with the _Reference_ solver. The protracted vortex-shedding cycle of the _Source_, observed in Fig. 11, is also reflected in Fig. 16 as an open-ended arc indicating a longer vortex-shedding period. As a result of these investigations, it can be remarked that the performance of the DPNN approach demonstrates consistency in both the physical and reduced feature space.
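A sketch of this PCA evaluation, assuming each snapshot is flattened into a row; the snapshot array below is a placeholder.

```python
# Sketch of the PCA evaluation: the first three principal components of the
# flattened snapshots are tracked over time; data is a placeholder.
import numpy as np
from sklearn.decomposition import PCA

snapshots = np.random.default_rng(0).random((300, 64 * 128))  # [time, cells]
pcs = PCA(n_components=3).fit_transform(snapshots)            # [time, 3]
pc1, pc2, pc3 = pcs.T                                         # as in Fig. 16
```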
### Reconstructing flowfields: global and local perspectives
As we delve deeper into investigating the present coarse-grained surrogate model, it becomes increasingly clear that both local and global fluid structures are crucial for the overall representation of the flow physics. Specifically, precisely reproducing the local boundary layer phenomena and the global wake dynamics becomes essential. The intersection of these scales, the local and the global, is where true accuracy in understanding and predicting flow dynamics lies. To that extent, we investigate the ability of the learned models to correctly reconstruct the local mean velocity boundary layer profile (with momentum thickness \(\theta\)) as well as the mean gap flow \(\bar{U_{\text{G}}}\), while also evaluating the global fluid variables in terms of kinetic energy \(KE\) and enstrophy \(\Omega\). This step is not trivial
Figure 15: Comparison of the power spectral density of the streamwise velocity for multiple time ranges (39th testing sample)
Figure 16: Comparison of the first three principal components and its variation with time (39th testing sample).
given that the _Source_ solver employs a masked stair-step representation for the underlying body boundary for computational efficiency.
It is also crucial to remark here that, while three-level coarsening will naturally influence the local resolution of the boundary layer profile, it still retains a parabolic profile for the upstream body, as will be shown later on. Thus, at a given level of grid resolution, the goal is to mitigate the difference between the under-resolved boundary layer of the _Source_ and the resolved boundary layer of the _Reference_ transferred to the _Source_ mesh. Hence, our goal is not a super-resolution task, i.e., transferring a low-resolution solution to a higher resolution, but instead to improve the solution on a fixed computational mesh.
The momentum thickness, \(\theta\) is evaluated as below,
\[\theta(x)=\int_{0}^{\delta_{\max}}\frac{\bar{u}(x,y)}{\bar{u}_{e}(x)}\left(1-\frac{\bar{u}(x,y)}{\bar{u}_{e}(x)}\right)dy \tag{6}\]
The time- and space-averaged streamwise velocity at the gap (Chen et al., 2020), \(U_{\text{G}}/U_{\infty}\), is evaluated as follows:
\[\frac{U_{\text{G}}}{U_{\infty}}=\frac{1}{U_{\infty}}\int_{-0.5}^{0.5}\bar{u} \ d\left(\frac{y}{G}\right) \tag{7}\]
where, \(\bar{u}\) is the time-mean streamwise velocity at the gap. The kinetic energy \(KE\) and enstrophy \(\Omega\) are calculated as follows,
\[KE(t,U)=\frac{1}{2}\int_{A}U(t)^{2}\,dA \tag{8}\]
\[\Omega(t,U)=\frac{1}{2}\int_{A}\omega(t)^{2}\,dA \tag{9}\]
While KE is a popular choice for statistical evaluation of fluid flows, enstrophy represents the rotational energy of the flow and corresponds to the dissipative effects of the flow (Tong et al., 2015).
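As an illustration, Eqs. (6), (8), and (9) can be discretized with simple quadrature on a uniform grid; all fields, grid spacings, and the edge velocity below are placeholders, not taken from the paper.

```python
# Sketch of the integral diagnostics via simple quadrature on a uniform grid.
import numpy as np

dy = 0.05
dA = dy * dy
u_bar = np.linspace(0.0, 1.0, 40)      # placeholder mean-velocity profile
u_e = u_bar[-1]                        # edge velocity
theta = np.trapz((u_bar / u_e) * (1.0 - u_bar / u_e), dx=dy)   # Eq. (6)

rng = np.random.default_rng(0)
U = rng.random((64, 128))              # placeholder velocity magnitude field
omega = rng.random((64, 128))          # placeholder vorticity field
KE = 0.5 * np.sum(U ** 2) * dA         # Eq. (8)
enstrophy = 0.5 * np.sum(omega ** 2) * dA  # Eq. (9)
```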
Figure 17 presents the boundary layer profile in the lateral \(y\) direction just above the body boundary along the cross-section drawn at the upstream cylinder for three distinct testing samples.
Figure 17: Comparison of the time-averaged boundary layer profile along the lateral \(y\) direction of the upstream body for (a) 28th (b) 40th (c) 46th testing samples
Despite the fact that downsampling high-resolution data obtained from the _Reference_ solver can distort the actual boundary profile, reconstructing the steep rise in velocity remains a critical criterion to be preserved. It is evident that the inadequate treatment of the body boundary results in a much thicker boundary layer profile from the _Source_. This results in an enlarged effective diameter of the body, subsequently changing the equivalent effective spacing ratio. Such an alteration points to a potential difference in the wake category (single bluff-body wake instead of deflected gap wake from _Source_ in Fig. 7). On the contrary, the boundary layer profile obtained from the DPNN approach agrees quite well with the _Reference_ data. This is also observed in Table 7, where the momentum thickness \(\theta/D\) has been reported for three representative testing samples.
Figure 18 shows the streamwise mean gap velocity profile distribution between the two downstream regions. The prominent peaks adjacent to the parabolic profile near the center are caused by the interaction of the separated shear layer from the upstream cylinder with the gap side shear
\begin{table}
\begin{tabular}{c c c c} & \multicolumn{2}{c}{Mean streamwise velocity at the gap, \(\frac{U_{\text{G}}}{U_{\infty}}\)} \\ Testing samples no & _Reference_ & _Source_ & DPNN \\
28 & 0.896 & 0.530 & 0.857 \\
40 & 0.956 & 0.657 & 0.958 \\
46 & 0.958 & 0.728 & 0.946 \\ \end{tabular}
\end{table}
Table 8: Comparison of the time and space averaged streamwise velocity at the gap \(\frac{U_{\text{G}}}{U_{\infty}}\) for multiple testing samples
\begin{table}
\end{table}
Table 7: Comparison of the boundary layer momentum thickness \(\theta/D\) for multiple testing samples
Figure 18: Comparison of the time-averaged streamwise gap velocity profile in between the two downstream bodies for (a) 28th (b) 40th (c) 46th testing samples
layers of the downstream cylinders. This interaction penetrates the gap flow. The mean streamwise velocity profile at this gap obtained from the DPNN approach agrees very well with that of the _Reference_, whereas the thicker boundary layer profile exhibited by the _Source_ results in a narrower gap profile. Finally, we also quantify the time- and space-averaged streamwise velocity at the gap \(\frac{U_{\text{G}}}{U_{\infty}}\) in Table 8, where very good agreement between the DPNN and _Reference_ data is noted.
To enable a comparison between global variables that represent interesting fluid phenomena, we focus on assessing the enstrophy, symbolized by \(\Omega\), and the kinetic energy, represented by \(KE\). Tables 9 and 10 present the time-averaged enstrophy \(\Omega\) and kinetic energy \(KE\) obtained from multiple benchmarks. It is found that the flowfields obtained from _Source_ exhibit considerably lower values of \(\Omega\) and \(KE\), indicating loss of rotational and kinetic energy over time. In other words, the flowfields obtained from _Source_ result in significant decay of kinetic and rotational energy due to stronger dissipation effects resulting from the coarse grid simulations. In contrast, however, the time-averaged enstrophy and kinetic energy obtained from DPNN are in excellent agreement with the data obtained from the _Reference_ solver. Given the fact that the same numerical schemes are employed in the base solver for both the _Source_ and the DPNN (i.e., the solver embedded in the loop), the network thus acts as a forcing function that counters the strong numerical dissipation introduced by the _Source_ solver. This exercise, thus, clearly points towards the ability of the DPNN approach to yield predictions that preserve the local boundary layer profile along with the global wake dynamics in a way that results in lower numerical dissipation, as achieved by the _Reference_ solver. So far, we have demonstrated the reliability of the model at an individual prediction level. Additional evaluations for all the 50 previously unseen test samples have also been performed and are shown in Appendix A, supporting the conclusions drawn above.
\begin{table}
\begin{tabular}{c c c c} & \multicolumn{3}{c}{Mean kinetic energy, \(KE/KE_{0}\)} \\ Testing samples no & _Reference_ & _Source_ & DPNN \\
28 & 1.000 & 0.924 & 0.991 \\
40 & 1.000 & 0.929 & 0.995 \\
46 & 1.000 & 0.927 & 0.996 \\ \end{tabular}
\end{table}
Table 10: Comparison of the normalized time averaged kinetic energy \(KE/KE_{0}\) for multiple testing samples
\begin{table}
\begin{tabular}{c c c c} & \multicolumn{3}{c}{Mean enstrophy, \(\Omega/\Omega_{0}\)} \\ Testing samples no & _Reference_ & _Source_ & DPNN \\
28 & 1.001 & 0.926 & 0.985 \\
40 & 0.998 & 0.944 & 0.979 \\
46 & 0.999 & 0.988 & 0.997 \\ \end{tabular}
\end{table}
Table 9: Comparison of the normalized time averaged enstrophy \(\Omega/\Omega_{0}\) for multiple testing samples
### Insights through model predictions
A key aspect of our research is the model's ability to produce not only robust but also practical and valuable predictions. As we advance, we will undertake a comprehensive evaluation to further substantiate the model's reliability in producing results comparable to established literature. Specifically, we will compare non-dimensional parameters such as the Strouhal number (\(St\)) and spacing ratio (\(L/D\)), which are typically associated with the flow around two or more cylinders. This broadens our evaluation beyond individual testing samples, as was conducted in previous sections. To our knowledge, this marks the first occasion when such an investigation has been carried out for arbitrarily shaped bodies.
Despite the fact that the present dataset corresponds to flowfields obtained for flow past arbitrary configurations, one can still compare the distributions of the Strouhal number \(St\), based on the dominant frequency of one of the two downstream cylinders, with the spacing ratio \(L/D\). The diameter in the spacing ratio \(L/D\) is simply the projected height of the upstream cylinder. This results in the distribution of the non-dimensional numbers (_i.e.,_ Strouhal number and spacing ratio), which allows for comparison with (Chen et al., 2020), who report the data obtained for flow past equi-diameter cylinders computed using an immersed boundary-based flow solver. Figure 19 compares the Strouhal number based on the dominant frequency of the downstream cylinder (upper) as a function of the spacing ratio \(L/D\). Figure 19 offers multiple insights, viz., besides the observable oscillations in the Strouhal number, a good overall fit between the solutions from the two different methodologies is noted. This illustrates that despite the arbitrary nature of the embedded bodies, the underlying frequency of vortex shedding (non-dimensionalized by the corresponding diameter and freestream velocity) for a certain body at a given spacing ratio exhibits similar patterns. Besides, it is also noted that the transition from the low-frequency single bluff-body wake to the deflected gap wake and subsequently to the high-frequency, chaotic wake happens at nearly the same spacing ratio (as compared to Chen et al. (2020)). This observation is especially important given that the Strouhal number accuracy depends crucially on long-term recursive predictions, where deviations cause significant differences in the vortex shedding frequency (Hasegawa et al., 2020), _e.g.,_ from _Source_ (not shown here). Consequently, it can be remarked that the present DPNN-based predictive framework allows for useful and reliable predictions with long-term temporal accuracy. While multiple factors, such as shear layer growth, instabilities, gap flow, etc., contribute to the frequency
Figure 19: Strouhal number along with the spacing ratio for the arbitrarily shaped cylinders and its comparison with Chen et al. 2020. Strouhal number is evaluated based on the dominant frequency of the downstream (upper) cylinder obtained for 300 frames
of vortex shedding, further investigations are needed to probe the cause of the fluctuations.
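A sketch of the Strouhal number evaluation follows: the dominant shedding frequency is taken from the FFT peak of a probe signal and non-dimensionalized by the projected diameter \(D\) and the freestream velocity; the signal, time step, and scales below are placeholders.

```python
# Sketch of the Strouhal number from the dominant FFT peak of a probe signal.
import numpy as np

dt, D, U_inf = 0.05, 1.0, 1.0                           # placeholder scales
signal = np.sin(2 * np.pi * 0.2 * np.arange(300) * dt)  # placeholder probe
spec = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=dt)
f_dom = freqs[np.argmax(spec)]                          # dominant frequency
St = f_dom * D / U_inf
```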
In prior sections, we emphasized the necessity of evaluating the performance of the predictive model using test samples representative of different wake categories. In this section, we expand on this by presenting additional metrics of interest. This exercise allows for inferring greater physical insight from the predictions obtained so far, thus offering a unique perspective on the solutions obtained from the predictive framework. Figure 20 presents the time-averaged enstrophy \(\Omega\) evaluated over the spatial domain for 300 testing frames for all testing samples, color-marked by the category of the wake. It is found that there exist clear strata or bands of regions belonging to certain wake categories, _e.g._, single bluff-body, deflected gap, and chaotic wake, whereas these bands overlap for the fully developed quasi in/anti-phase wake categories. This indicates that the mean rotational energy of the flowfields is lowest for the single bluff-body wake and highest for the fully-developed quasi in-phase wake. This is not entirely surprising, as the vortices shed from the upstream cylinder in the fully developed wake category penetrate the gap region while merging with vortices shed from the downstream cylinder, thereby increasing the rotational energy of the flow.
\[L_{2,\phi}=\sqrt{\frac{\sum_{i=1}^{n_{c}}\left(\phi_{\text{Ref},i}^{j}-\phi_{\text{baseline},i}^{j}\right)^{2}}{n_{x}\times n_{y}}} \tag{10}\]
Figure 21 illustrates the time-averaged \(L_{2}\) norm error computed using Eq. 10 for each testing
Figure 21: Comparison of the \(L_{2}\) norm error of the predictions obtained from DPNN
Figure 20: Comparison of the enstrophy \(\Omega\) of the predictions obtained from DPNN color-coded by the wake category. The color bars represent the region within the maximum and the minimum \(\Omega\) for each wake category.
sample based on the velocity field up to \(t_{f}\)=300 frames. The variables \(n_{x}\), \(n_{y}\) are the numbers of computational cells in the streamwise and lateral directions, and \(\phi\) represents the quantity of interest. The samples are color-marked according to the wake category they exhibit. Unsurprisingly, it is found that the testing samples belonging to the chaotic wakes incur the highest error. Additionally, the testing samples corresponding to the deflected gap wake result in the lowest error from both DPNN and _Source_, which can be attributed to the stable gap flow (see Fig. 22). On the contrary, the testing samples exhibiting the fully developed quasi in-phase wake result in the highest error from the _Source_. To probe further, in Fig. 23, we compare the normalized enstrophy \(\Omega\) for the flowfields obtained from _Source_ and the corresponding \(L_{2}\) norm error, each in ascending order. It is found that there exists a clear relationship between the dissipation resulting from the higher enstrophy in the flowfields and the corresponding error. This suggests that the error in predictions obtained from _Source_ is predominantly influenced by numerical dissipation.
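A sketch of the time-averaged \(L_{2}\) norm error of Eq. (10); the reference and baseline field arrays below are placeholders.

```python
# Sketch of the per-frame L2 error of Eq. (10), averaged over testing frames.
import numpy as np

def l2_error(phi_ref, phi_base):
    nx, ny = phi_ref.shape
    return np.sqrt(np.sum((phi_ref - phi_base) ** 2) / (nx * ny))

rng = np.random.default_rng(0)
frames_ref = rng.random((300, 64, 128))    # placeholder reference fields
frames_base = rng.random((300, 64, 128))   # placeholder baseline fields
mean_l2 = np.mean([l2_error(r, b) for r, b in zip(frames_ref, frames_base)])
```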
### Generalizability towards out-of-distribution cases
A crucial component in determining the performance of a neural network-based predictive model is its generalizability to new, unseen datasets. As such, an evaluation with test data precludes inherent biases of the training dataset. Evaluations on out-of-distribution (OOD) datasets allow for estimating the robustness of the model while potentially identifying the model's inherent bias towards certain kinds of data. In addition, such practices hint towards the applicability of the model to real-world data, which are likely to be OOD. We have carefully designed experiments for OOD datasets that allow us to assess the following:
Figure 23: Comparison of the enstrophy with \(L_{2}\) norm of the predictions from _Source_ solver, sorted in ascending manner
Figure 22: Comparison of the \(L_{2}\) norm error of the predictions obtained from _Source_
* **Three equi-diameter cylinders**. This setup mimics the existing one as shown in Fig. 1 with the exception of the geometric symmetry induced by three equi-diameter cylinders at spacing ratios of \(L/D\) = [1.5, 3.5, 5.0] with each cylinder having unit diameter. This serves as an interesting problem considering that the model was never trained for symmetry (the equi-diameter cylinders result in symmetric distribution about the x-axis). Additionally, this allows for a direct comparison of the wake category with that obtained by Zheng et al. (2016). For the chosen spacing ratio, the flowfield should exhibit deflected gap wake, anti-phase wake, and fully-developed in-phase wake, respectively.
* **Two side-by-side cylinders**. This layout chooses two side-by-side equi-diameter cylinders which is contrary to the present training setup to assess the performance of the model for an entirely alternative problem of practical relevance. The spacing ratios chosen for this case are \(L/D\) = [1.5, 2.5, 4.0], which allow for direct comparison with Bao et al. (2013) and should yield flip-flopping (or chaotic) wake, in-phase wake, and anti-phase wake, respectively.
#### 4.6.1 Test case 1: three cylinders
Figure 24 presents the velocity flowfields obtained from the DPNN approach and their comparison with the data obtained from the _Reference_ solver at multiple time instances for three equi-diameter cylinders arranged in an equilateral triangle layout at the spacing ratio \(L/D\) = 1.5. The _Reference_ (or ground truth) solver depicts a deflected gap wake, with the gap flow deflected towards the narrow wake region (upper cylinder), consistent with the observations by Zheng et al. (2016). It is found that the predictions obtained by the present DPNN approach result in the same wake dynamics. In addition, the vortex-shedding cycle is in excellent agreement with the data obtained from the _Reference_ solver, which instils further confidence in the long-term temporal predictions.
Figure 25 presents the velocity flowfields obtained from the present approach and its comparison with the data obtained from the _Reference_ solver at the spacing ratios \(L/D\) = 3.5 and 5, up to a total time of \(t=500\Delta t\). The wake exhibited at these spacing ratios clearly depicts anti-phase and fully developed in-phase patterns, respectively, as reported by Zheng et al. (2016). The comparison shows that DPNN is able to preserve the overall wake structures remarkably well for long time horizons, irrespective of either in-phase or fully developed anti-phase wake.
Figure 24: Comparison of velocity flowfield obtained from DPNN and _Reference_ solver at multiple time instances for flow past the three-cylinder case at \(L/D\) = 1.5
#### 4.6.2 Test case 2: two cylinders
Figure 26 highlights the velocity flowfield obtained from the present approach and its comparison with the corresponding ground truth data at multiple time instances, _i.e.,_ two side-by-side cylinders at a spacing ratio \(L/D=1.5\). The data obtained from the _Reference_ solver exhibits a flip-flopping wake pattern wherein the gap flow switches direction in a chaotic manner, which agrees well with the observations by Bao et al. (2013). This can be clearly seen in Fig. 26, _e.g.,_ the gap flow at \(t=1\Delta t\) (directed towards the lower cylinder) switches direction at \(t=30\Delta t\) (directed towards the upper cylinder). Addressing such a tumultuous shift poses a significant challenge for any predictive framework (Vlachas et al. (2018); Doan et al. (2021)). For instance, while DPNN accurately predicts the first two shifts, its effectiveness diminishes when it comes to long-term forecasting, particularly struggling with the accurate reconstruction of chaotic wake dynamics. Nevertheless, considering that the proposed framework yields satisfactory performance for predicting the chaotic dynamics over mid-time horizons, this acts as a promising direction for future research.
Figure 27 presents the velocity flowfield obtained from the present DPNN approach and its comparison with the _Reference_ data for spacing ratios \(L/D=2.5\) and \(4\) at multiple time instances. The wake dynamics exhibit in-phase and anti-phase wakes, respectively, which agrees well with the observations made by Bao et al. (2013). It is found that the predictions from the present approach compare very well with the data obtained from the _Reference_ solver, especially for the anti-phase wake at \(L/D\)=4, where it is seen to preserve the symmetry of the flow about the centerline \(x\)-axis. For the in-phase wake obtained at \(L/D\)=2.5, the flowfields diverge for long-time predictions, _i.e.,_ \(t=500\Delta t\). Therefore, given these observations, it is encouraging to note that the present
Figure 25: Comparison of velocity flowfield obtained from DPNN and _Reference_ solver at multiple time instances for flow past the three-cylinder case at \(L/D=3.5\) and \(5\)
Figure 27: Comparison of velocity flowfield obtained from DPNN and _Reference_ solver at multiple time instances for flow past the two cylinder case at \(L/D=2.5\) and \(4\)
Figure 26: Comparison of velocity flowfield obtained from DPNN and _Reference_ solver at multiple time instances for flow past the two cylinder case at \(L/D=1.5\)
approach is successful in faithfully reconstructing wake dynamics for periodic flows (both in-phase and anti-phase) over extended time horizons. Although the accuracy of predictions for chaotic wakes is limited to mid-time horizons, the progress made so far offers a solid foundation for further refinement and optimization.
## 5 Conclusions
This work implements a differentiable physics-assisted neural network (DPNN) approach as a coarse-grained surrogate for unsteady incompressible flow past arbitrarily shaped bodies at a low Reynolds number of around 100. The bodies under consideration are arranged in a staggered equilateral triangle manner at multiple spacing ratios \(1.2\leq L/D\leq 5.5\) to produce wakes of varying nature. The main conclusions from the work can be summarized as follows:
1. The DPNN-based predictive framework outperforms the standard data-driven supervised learning approach and the _Source_ as a robust coarse-grained surrogate for unsteady fluid flows past arbitrary bodies. This has been qualitatively and quantitatively verified using multiple performance-measuring indices that indicate accurate reconstruction of local (boundary layer) and global (wake dynamics) fluid phenomena, both for the physical quantities and in Fourier or reduced feature space.
2. The framework has also been evaluated on out-of-distribution test samples for cases that assess the ability of the predictive framework to preserve symmetry (_i.e.,_ anti-phase wake), stable gap flow (_i.e.,_ deflected gap wake), long-term temporal stability, and chaotic dynamics (_i.e.,_ flip-flopping wake). It is found that the framework is largely wake agnostic, _i.e.,_ it renders a good prediction irrespective of the wake category. However, for the chaotic or flip-flopping wake, the framework can only accurately maintain the chaotic switching of the gap flow up to a certain mid-time range.
3. The \(L_{2}\) errors computed on the predictions obtained from DPNN and _Source_ are lowest for the deflected gap wake owing to its stable gap flow as well as long vortex formation length, the latter of which occupies the bulk of the computational space. The DPNN-based predictions seem to suffer from the onset of chaotic dynamics, which later manifest into inaccurate wake dynamics. The flowfields from _Source_ seem to suffer from large dissipation due to the underlying coarse computational grid, which results in the continuous decay of kinetic/rotational energy.
4. The Strouhal number distribution over the spacing ratio for the arbitrary bodies is in good agreement with the literature based on flow past equi-diameter bodies. Also, the transition in the wake category from a single bluff-body wake to a deflected gap wake and eventually to a chaotic wake occurs at nearly the same spacing ratios. Moreover, we discover similar categories and patterns in the wake exhibited by the arbitrary bodies, except that in-phase and anti-phase wakes yield quasi in/anti-phase wakes. Additionally, the arbitrary nature of the bodies promotes early vortex shedding by the upstream cylinder and precludes pure in-phase wakes from occurring at all.
5. The enstrophy calculated based on the flowfields obtained from DPNN exhibit clear strata of test samples belonging to unique wake categories. This is particularly noticeable for single bluff-body, deflected gap, and chaotic wakes, whereas these delineations tend to overlap for the fully developed quasi in/anti-phase wake categories.
In view of these discussions, we postulate that a strategy incorporating both a low-fidelity solver and a neural network in a hybrid predictive framework derives merit from each component. Such a strategy paves the way for the low-fidelity solver to incorporate as much of the underlying physics as possible, while the network learns the difference between the low-fidelity solver and the high-fidelity data. This specific attribute is what makes such a framework more promising than purely data-driven supervised learning.
The present approach also shows potential merits as a generic reconstruction strategy for near-body boundary fluid properties. This is especially true for non body-conformal sharp interface immersed boundary approaches for high Mach and Reynolds number flows (Brahmachary et al., 2021) that are prone to suffer from heat flux reconstruction issues at the body boundary. Moreover, the present learning framework would also be an interesting avenue for flows over rough surfaces (Lee et al., 2022; Jouybari et al., 2021) with practical relevance. In addition, the generalizability of the approach opens up potential directions for learning the kinematics of dispersed spherical particles (Ozaki and Aoyagi, 2022).
## Funding
This work was supported by the European Research Council Consolidator Grant _SpaTe_ (CoG-2019-863850).
## Data availability statement
The data and code supporting this study will be disclosed upon publication.
## Declaration of interests
The authors report no conflict of interest.
|
2304.08779 | Error bounds for maxout neural network approximations of model
predictive control | Neural network (NN) approximations of model predictive control (MPC) are a
versatile approach if the online solution of the underlying optimal control
problem (OCP) is too demanding and if an exact computation of the explicit MPC
law is intractable. The drawback of such approximations is that they typically
do not preserve stability and performance guarantees of the original MPC.
However, such guarantees can be recovered if the maximum error with respect to
the optimal control law and the Lipschitz constant of that error are known. We
show in this work how to compute both values exactly when the control law is
approximated by a maxout NN. We build upon related results for ReLU NN
approximations and derive mixed-integer (MI) linear constraints that allow a
computation of the output and the local gain of a maxout NN by solving an MI
feasibility problem. Furthermore, we show theoretically and experimentally that
maxout NN exist for which the maximum error is zero. | Dieter Teichrib, Moritz Schulze Darup | 2023-04-18T07:29:37Z | http://arxiv.org/abs/2304.08779v2 | # Error bounds for maxout neural network approximations of model predictive control
###### Abstract
Neural network (NN) approximations of model predictive control (MPC) are a versatile approach if the online solution of the underlying optimal control problem (OCP) is too demanding and if an exact computation of the explicit MPC law is intractable. The drawback of such approximations is that they typically do not preserve stability and performance guarantees of the original MPC. However, such guarantees can be recovered if the maximum error with respect to the optimal control law and the Lipschitz constant of that error are known. We show in this work how to compute both values exactly when the control law is approximated by a maxout NN. We build upon related results for ReLU NN approximations and derive mixed-integer (MI) linear constraints that allow a computation of the output and the local gain of a maxout NN by solving an MI feasibility problem. Furthermore, we show theoretically and experimentally that maxout NN exist for which the maximum error is zero.
## 1 Introduction
Model predictive control (MPC) (see, e.g., (Rawlings et al., 2017)) has become a standard tool for the control of dynamical systems with state and input constraints and has been successfully applied in different industrial fields (see, e.g., (Qin and Badgwell, 2003) for an overview). In the classical setup, MPC requires solving an optimization problem (OP) in every time step. For systems with a short sampling period as, e.g., in power electronics (Karamanakos et al., 2020), this can be challenging because the OP may be too complex to be solved within the sampling period. If we consider a linear discrete-time prediction model in combination with a quadratic cost function, the resulting OP is a quadratic program (QP). In principle, we can compute the solution of the parametric QP offline for all feasible states. This results in an explicit control law with a piecewise affine (PWA) input-output relation defined on a polyhedral partition of the state space (Bemporad et al., 2002). Given the explicit control law, the online computational effort reduces to the evaluation of the PWA function. However, the number of polyhedral regions may grow exponentially with the state dimension and the number of constraints in the OP. Hence, exactly computing the explicit MPC law becomes intractable for complex systems. As a consequence, various techniques have been developed to approximate the control law in MPC (Jones and Morari, 2009; Bemporad and Filippi, 2003). In this context, neural networks (NN) are very popular (Chen et al., 2018; Karg and Lucia, 2020; Chen et al., 2022) since they can approximate a large class of functions, including PWA functions, with arbitrary accuracy (Hornik et al., 1989). In addition, besides their computationally demanding offline training, NN are typically fast to evaluate online, which is essential if they are used as controllers. Moreover, some types of NN share the PWA structure of the control law (Schulze Darup, 2020; Hanin, 2017; Arora et al., 2016), making them the perfect choice for approximating MPC. Unfortunately, in general, stability cannot be guaranteed for the approximated controllers. One possibility to recover stability and recursive feasibility is to project the output of the NN onto a suitable set as, e.g., in (Paulson and Mesbah, 2020; Chen et al., 2018). Alternatively, the output of the NN can be used as an initial guess for a solver and not directly for control (Chen et al., 2022). The drawback of both approaches is that they require an additional optimization-based computation step online. Ideally, the NN can be used as a controller without additional online computation, while still providing stability guarantees. In (Fabiani and Goulart, 2022), it is proven that this is possible if the maximum error with respect to the optimal control law and the Lipschitz constant of the corresponding error function are known. Both values can indeed be computed exactly, provided that the output and the local gain of the NN can be computed by solving a mixed-integer (MI) feasibility problem. This is known to be possible for NN using rectified linear units (ReLU) as activation functions (Fabiani and Goulart, 2022, Thm. 6.1).
In the work at hand, we will extend the result of (Fabiani and Goulart, 2022) by showing that the maximum error and the Lipschitz constant of the error can also be computed exactly for a controller approximation based on a maxout NN. Since maxout NN include other NN with PWA input-output relation such as, e.g., ReLU and leaky ReLU, as a special case, the results provide a generalization to a broader class of PWA NN. Furthermore, we use the PWA structure of maxout NN to compute NN that exactly describe MPC control laws and validate experimentally that these exact maxout NN indeed lead to a maximum error and Lipschitz constant of zero.
The paper is organized as follows. In the remainder of this section, we introduce relevant notation. In Section 2, we summarize some basics on MPC as well as PWA NN and describe concepts
for approximating MPC in more depth. Section 3 is devoted to our main result, i.e., the computation of the maximum error and the Lipschitz constant of the error function related to maxout NN approximating MPC. The obtained method is applied to various maxout NN approximations of MPC laws in Section 4. Finally, conclusions and an outlook are given in Section 5.
### Notation
We will denote the index set containing \(p_{i}\in\mathbb{N}\) integers starting at \(p_{i}(l-1)+1,\;l\in\mathbb{N}\) by
\[\mathcal{A}_{l}^{(i)}:=\{p_{i}(l-1)+1,\ldots,p_{i}l\}.\]
For vectors \(\mathbf{x}\in\mathbb{R}^{n}\) we denote the \(i\)-th element by \(\mathbf{x}_{i}\) and the elements between the indices \(a_{1}\) and \(a_{2}>a_{1}\) by \(\mathbf{x}_{a_{1}:a_{2}}\). For matrices \(\mathbf{K}\in\mathbb{R}^{m\times n}\) we denote the element in the \(i\)-th row and \(j\)-th column by \(\mathbf{K}_{i,j}\), and the \(i\)-th row and \(j\)-th column by \(\mathbf{K}_{i,\cdot}\) and \(\mathbf{K}_{\cdot,j}\), respectively. If we only write \(\mathbf{K}_{i}\), then we refer to the \(i\)-th row. A block diagonal matrix is defined as
\[\mathrm{diag}(\mathbf{\alpha}_{1},\ldots,\mathbf{\alpha}_{w_{i}}):=\begin{pmatrix}\mathbf{\alpha}_{1}&\mathbf{0}&\ldots&\mathbf{0}\\ \mathbf{0}&\ddots&&\vdots\\ \vdots&&\ddots&\mathbf{0}\\ \mathbf{0}&\ldots&\mathbf{0}&\mathbf{\alpha}_{w_{i}}\end{pmatrix},\]
with \(\mathbf{\alpha}\in\mathbb{R}^{w_{i}\times p_{i}}\). A continuous function \(\mathbf{F}(\mathbf{x}):\mathcal{P}\subset\mathbb{R}^{n}\to\mathbb{R}^{m}\) of the form
\[\mathbf{F}(\mathbf{x})=\left\{\begin{array}{ccc}\mathbf{G}^{(1)}\mathbf{x}+\mathbf{g}^{(1)}&\text{if}&\mathbf{x}\in\mathcal{P}^{(1)},\\ \vdots&&\vdots\\ \mathbf{G}^{(r)}\mathbf{x}+\mathbf{g}^{(r)}&\text{if}&\mathbf{x}\in\mathcal{P}^{(r)},\end{array}\right. \tag{1}\]
with a polyhedral partition \(\mathcal{P}=\cup_{i=1}^{r}\mathcal{P}^{(i)}\) and \(\mathrm{int}(\mathcal{P}^{(i)})\cap\mathrm{int}(\mathcal{P}^{(j)})=\emptyset\;\forall i\neq j\) is denoted as a piecewise affine (PWA) function. We further define the local gain \(\mathbf{K}(\mathbf{x}):\cup_{i=1}^{r}\mathrm{int}(\mathcal{P}^{(i)})\to\mathbb{R}^{m\times n}\) of a PWA function as
\[\mathbf{K}(\mathbf{x}):=\left\{\begin{array}{ccc}\mathbf{G}^{(1)}&\text{if}&\mathbf{x}\in\mathrm{int}(\mathcal{P}^{(1)}),\\ \vdots&&\vdots\\ \mathbf{G}^{(r)}&\text{if}&\mathbf{x}\in\mathrm{int}(\mathcal{P}^{(r)}).\end{array}\right. \tag{2}\]
## 2 Fundamentals of MPC and NN
### Model predicitive control
Model predictive control (MPC) for linear discrete-time systems builds on solving an optimal control problem (OCP) of the form
\[V_{N}(\mathbf{x}):= \min_{\hat{\mathbf{x}}(0),\ldots,\hat{\mathbf{x}}(N)\atop\hat{\mathbf{u}}(0),\ldots,\hat{\mathbf{u}}(N-1)}\varphi(\hat{\mathbf{x}}(N))+\sum_{\kappa=0}^{N-1} \ell(\hat{\mathbf{x}}(\kappa),\hat{\mathbf{u}}(\kappa))\] (3) s.t. \[\hat{\mathbf{x}}(0) =\mathbf{x},\] \[\hat{\mathbf{x}}(\kappa+1) =\mathbf{A}\,\hat{\mathbf{x}}(\kappa)+\mathbf{B}\hat{\mathbf{u}}(\kappa), \forall\kappa\in\{0,...,N-1\},\] \[(\hat{\mathbf{x}}(\kappa),\hat{\mathbf{u}}(\kappa)) \in\mathcal{X}\times\mathcal{U}, \forall\kappa\in\{0,...,N-1\},\] \[\hat{\mathbf{x}}(N) \in\mathcal{T}\]
in every time step \(k\in\mathbb{N}\) for the current state \(\mathbf{x}=\mathbf{x}(k)\). Here, \(N\in\mathbb{N}\) refers to the prediction horizon and
\[\varphi(\mathbf{x}):=\mathbf{x}^{\top}\mathbf{P}\mathbf{x}\quad\text{and}\quad\ell(\mathbf{x},\bm {u}):=\mathbf{x}^{\top}\mathbf{Q}\mathbf{x}+\mathbf{u}^{\top}\mathbf{Ru} \tag{4}\]
denote the terminal and stage cost, respectively, where the weighting matrices \(\mathbf{P}\), \(\mathbf{Q}\), and \(\mathbf{R}\) are positive (semi-) definite. The dynamics of the linear prediction model are described by \(\mathbf{A}\in\mathbb{R}^{n\times n}\) and \(\mathbf{B}\in\mathbb{R}^{n\times m}\). State and input constraints can be incorporated via the polyhedral sets \(\mathcal{X}\) and \(\mathcal{U}\). Finally, the terminal set \(\mathcal{T}\) allows to enforce closed-loop stability (see (Mayne et al., 2000) for details). The resulting control law \(\mathbf{\pi}:\mathcal{F}_{N}\to\mathcal{U}\) is defined as
\[\mathbf{\pi}(\mathbf{x}):=\hat{\mathbf{u}}^{*}(0), \tag{5}\]
where \(\mathcal{F}_{N}\) denotes the feasible set of (3) and where \(\hat{\mathbf{u}}^{*}(0)\) refers to the first element of the optimal input sequence. For the considered setup it is well known that \(\mathbf{\pi}(\mathbf{x})\) is a PWA function (Bemporad et al., 2002, Thm. 4) of the form (1) with \(\mathbf{G}^{(i)}=\mathbf{K}^{(i)},\mathbf{g}^{(i)}=\mathbf{b}^{(i)}\), \(r=r_{\text{MPC}}\), polyhedral sets \(\mathcal{P}^{(i)}=\mathcal{R}^{(i)}\;\forall i\in\{1,\ldots,r_{\text{MPC}}\}\) and local gain \(\mathbf{K}_{\text{MPC}}(\mathbf{x})\).
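For concreteness, the following is a minimal cvxpy sketch of the OCP (3) for a hypothetical double-integrator example; the system matrices, weights, horizon, box constraints, and the simplification of the terminal set \(\mathcal{T}\) to the origin are all illustrative assumptions, not taken from the paper.

```python
# Minimal cvxpy sketch of the OCP (3); all numerical values are placeholders.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q, R, P = np.eye(2), np.eye(1), np.eye(2)
N, x0 = 5, np.array([1.0, 0.0])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost = cp.quad_form(x[:, N], P)                     # terminal cost of Eq. (4)
constr = [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)  # stage cost
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
               cp.abs(u[:, k]) <= 1.0, cp.abs(x[:, k]) <= 5.0]   # X x U
constr += [x[:, N] == 0]                            # simplified terminal set
cp.Problem(cp.Minimize(cost), constr).solve()
pi_x0 = u[:, 0].value                               # the MPC law (5) at x0
```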
### Neural networks with piecewise affine activations
In general, a feed-forward-NN with \(\ell\in\mathbb{N}\) hidden layers and \(w_{i}\) neurons in layer \(i\) can be written as a composition of the form
\[\mathbf{\Phi}(\mathbf{x})=\mathbf{f}^{(\ell+1)}\circ\mathbf{g}^{(\ell)}\circ\mathbf{f}^{(\ell)} \circ\cdots\circ\mathbf{g}^{(1)}\circ\mathbf{f}^{(1)}(\mathbf{x}). \tag{6}\]
Here, the functions \(\mathbf{f}^{(i)}:\mathbb{R}^{w_{i-1}}\to\mathbb{R}^{p_{i}w_{i}}\) for \(i\in\{1,\ldots,\ell\}\) refer to preactivations, where the parameter \(p_{i}\in\mathbb{N}\) allows to consider "multi-channel" preactivations as required for maxout (see (Goodfellow et al., 2013)). Moreover, \(\mathbf{g}^{(i)}:\mathbb{R}^{p_{i}w_{i}}\to\mathbb{R}^{w_{i}}\) stand for activation functions and \(\mathbf{f}^{(\ell+1)}:\mathbb{R}^{w_{\ell}}\to\mathbb{R}^{w_{\ell+1}}\) reflects postactivation. The functions \(\mathbf{f}^{(i)}\) are typically affine, i.e.,
\[\mathbf{f}^{(i)}(\mathbf{y}^{(i-1)})=\mathbf{W}^{(i)}\mathbf{y}^{(i-1)}+\mathbf{b}^{(i)}, \tag{7}\]
where \(\mathbf{W}^{(i)}\in\mathbb{R}^{p_{i}w_{i}\times w_{i-1}}\) is a weighting matrix, \(\mathbf{b}^{(i)}\in\mathbb{R}^{p_{i}w_{i}}\) is a bias vector, and \(\mathbf{y}^{(i-1)}\) denotes the output of the previous layer with \(\mathbf{y}^{(0)}:=\mathbf{x}\in\mathbb{R}^{n}\).
Now, various activation functions have been proposed. As already stated in the introduction, we here focus on PWA activation functions, i.e., we consider the ReLU activation function
\[\mathbf{g}^{(i)}_{\text{ReLU}}(\mathbf{z}^{(i)})=\max\left\{\mathbf{0},\mathbf{z}^{(i)}\right\}: =\begin{pmatrix}\max\left\{0,\mathbf{z}^{(i)}_{1}\right\}\\ \vdots\\ \max\left\{0,\mathbf{z}^{(i)}_{w_{i}}\right\}\end{pmatrix} \tag{8}\]
and the maxout activation function
\[\mathbf{g}^{(i)}_{\text{max}}(\mathbf{z}^{(i)})=\begin{pmatrix}\max\limits_{1\leq j\leq p_{i}}\left\{\mathbf{z}^{(i)}_{j}\right\}\\ \vdots\\ \max\limits_{p_{i}(w_{i}-1)+1\leq j\leq p_{i}w_{i}}\left\{\mathbf{z}^{(i)}_{j}\right\}\end{pmatrix}, \tag{9}\]
where we use the shorthand notation
\[\max\limits_{1\leq j\leq p_{i}}\left\{\mathbf{z}^{(i)}_{j}\right\}:=\max\left\{\mathbf{z}^ {(i)}_{1},\ldots,\mathbf{z}^{(i)}_{p_{i}}\right\}.\]
We will refer to the resulting NN as ReLU NN and maxout NN, respectively. The proof of (Goodfellow et al., 2013, Thm. 4.3) shows that maxout NN are PWA functions of the form (1) with \(\mathbf{G}^{(i)}=\mathbf{K}^{(i)}_{\text{NN}}\), \(\mathbf{g}^{(i)}=\mathbf{b}^{(i)}_{\text{NN}}\), \(r=r_{\text{NN}}\), polyhedral sets \(\mathcal{P}^{(i)}=\mathcal{R}^{(i)}_{\text{NN}}\;\forall i\in\{1,\ldots,r_{ \text{NN}}\}\) and local gain \(\mathbf{K}_{\text{NN}}(\mathbf{x})\). The number of parameters needed to describe a NN is
\[\#_{p}:=\sum_{i=1}^{\ell}(w_{i-1}+1)p_{i}w_{i}+(w_{\ell}+1)w_{\ell+1},\]
where \(\ell\), \(p_{i}\) with \(i\in\{1,\ldots,\ell\}\) and \(w_{i}\) with \(i\in\{1,\ldots,\ell+1\}\) describe the topology of the NN.
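To make the notation concrete, the following minimal NumPy sketch (our illustration, not code from the paper) evaluates a maxout NN of the form (6)-(9); the shapes of `Ws[i]` and `bs[i]` follow (7), and the contiguous grouping of the \(p_i\) segments per neuron follows (9).

```python
import numpy as np

def maxout_forward(x, Ws, bs, W_out, b_out, ps):
    """Evaluate the maxout NN (6): affine preactivations (7) followed by
    the maxout activation (9), then an affine postactivation.

    Ws[i] has shape (p_i * w_i, w_{i-1}) and bs[i] shape (p_i * w_i,);
    neuron s of layer i owns the contiguous segments p_i*(s-1)+1 .. p_i*s.
    """
    y = x
    for W, b, p in zip(Ws, bs, ps):
        z = W @ y + b                        # preactivation (7)
        y = z.reshape(-1, p).max(axis=1)     # maxout activation (9)
    return W_out @ y + b_out                 # affine postactivation f^(l+1)

# ReLU is the special case p_i = 2 with every second affine segment fixed
# to zero, so that max{z_j, 0} is recovered (cf. (8)).
```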
### Approximate MPC
The use of approximate MPC is particularly useful if the OCP (3) is too complex to be solved online and the explicit solution has too many regions and thus a large memory footprint (Kvasnica and Fikar, 2012). In this case, the exact control law may be approximated by a function that is fast to evaluate and has a small memory footprint. A promising candidate for such a function is an NN, as it combines both required properties and is a universal function approximator (Hornik et al., 1989, Thm. 2.4). In addition, due to the common PWA structure of some NN (Hanin, 2017, Thm. 2), (Arora et al., 2016, Thm. 2.1) and the control law (Bemporad et al., 2002, Thm. 4), they seem to be a natural choice. In fact, for a suitable choice of the weighting matrices and bias vectors, ReLU NN can represent the control law exactly (Schulze Darup, 2020, Thm. 1), (Karg and Lucia, 2020, Thm. 1). Unfortunately, despite the ability of NN to represent the control law exactly, the approximated version of the control law typically does not preserve desirable properties of the MPC, such as stability and performance. Crucial for preserving these properties is the error function
\[\mathbf{e}(\mathbf{x}):=\mathbf{\pi}(\mathbf{x})-\mathbf{\Phi}(\mathbf{x}). \tag{10}\]
More precisely we have to compute the maximum error
\[\overline{e}_{\alpha}:=\max_{x\in\mathcal{X}}\left\|\mathbf{e}(\mathbf{x})\right\|_{\alpha} \tag{11}\]
and the \(\alpha\)-Lipschitz constant of the error
\[\mathcal{L}_{\alpha}(\mathbf{e},\mathcal{X}):=\sup_{\mathbf{x}\neq\mathbf{y}\in\mathcal{X }}\frac{\left\|\mathbf{e}(\mathbf{x})-\mathbf{e}(\mathbf{y})\right\|_{\alpha}}{\left\|\mathbf{x}- \mathbf{y}\right\|_{\alpha}}.\]
According to (Gorokhovic et al., 1994, Prop. 3.4), the \(\alpha\)-Lipschitz constant of a PWA function is equal to the maximum \(\alpha\)-norm of the local gain. If we consider a PWA NN, the error, being the difference of two PWA functions, is also PWA (Gorokhovic et al., 1994, Prop. 1.1), and the \(\alpha\)-Lipschitz constant is thus
\[\mathcal{L}_{\alpha}(\mathbf{e},\mathcal{X})=\max_{\mathbf{x}\in\mathcal{X}}\left\|\bm {K}_{\text{MPC}}(\mathbf{x})-\mathbf{K}_{\text{NN}}(\mathbf{x})\right\|_{\alpha}. \tag{12}\]
Now, if \(\overline{e}_{\alpha}\) and \(\mathcal{L}_{\alpha}(\mathbf{e},\mathcal{T})\) are below certain values, specified in (Fabiani and Goulart, 2022, Eq. (24)-(25)), then the closed-loop system with the approximated NN controller converges exponentially to the origin according to (Fabiani and Goulart, 2022, Thm. 3.4). The problem at this point is that during the training of an NN, the error (10) is only evaluated at discrete samples \((\mathbf{x}_{i}^{\top}\;\mathbf{\pi}(\mathbf{x}_{i})^{\top})\) of the control law, and the parameters of the NN are chosen such that the mean squared error (MSE)
\[\hat{e}^{2}:=\frac{1}{D}\sum_{i=1}^{D}\left\|\mathbf{\pi}(\mathbf{x}_{i})-\mathbf{\Phi}( \mathbf{x}_{i})\right\|_{2}^{2} \tag{13}\]
over the \(D\in\mathbb{N}\) training samples is minimized. We thus do not have any guarantees for the error at points \(\mathbf{x}\in\mathcal{F}_{N}\) not included in the training samples. Moreover, a low MSE does not necessarily mean that the values of (11) and (12) are low. Therefore, to certify stability of the closed-loop system with a pre-trained NN, we need a way to compute these values exactly. For \(\alpha\in\{1,\infty\}\) this is possible by solving a mixed-integer linear program (MILP) (Fabiani and Goulart, 2022, Thm. 6.1) if both the output \(\mathbf{\Phi}(\mathbf{x})\) and the local gain \(\mathbf{K}_{\text{NN}}(\mathbf{x})\) of the NN can be computed by solving an MI feasibility problem, which is proven for ReLU NN in (Fabiani and Goulart, 2022, Thm. 6.1). In the remainder of this paper, we extend these results to maxout NN.
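To make the limitation of sample-based evaluation explicit, the sketch below (ours; `pi` and `phi` are hypothetical callables for the MPC law and the NN) only *lower-bounds* (11), which is why the exact MILP-based computation is needed for certification.

```python
import numpy as np

def sampled_max_error(pi, phi, X_samples, ord=np.inf):
    """Sample-based LOWER bound on the maximum error (11).

    pi, phi: callables returning the MPC input and the NN output for a
    state x. Sampling cannot certify stability -- the exact value of (11)
    requires solving the MILP discussed above (cf. Theorem 4).
    """
    return max(np.linalg.norm(np.atleast_1d(pi(x) - phi(x)), ord)
               for x in X_samples)
```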
## 3 Maxout neural networks for approximate MPC
In most of the recent work where the control law (5) is approximated by an NN, ReLU NN are used as function approximators (see, e.g., (Chen et al., 2022; Drummond et al., 2022; Karg and Lucia, 2020)). Maxout NN are rarely considered in this context, although they offer a number of advantages. First, the fact that ReLU NN can represent every PWA function exactly is often used as justification for their use to approximate the PWA control law. However, most ReLU NN that allow an exact description are based on the representation of PWA functions as a sum of the type
\[\hat{F}(\mathbf{x})=\sum_{i=1}^{M}\sigma_{i}\max_{1\leq j\leq J}\{\mathbf{\beta}^{(i)}_ {j}\mathbf{x}+\gamma^{(i)}_{j}\}. \tag{14}\]
From (Wang and Sun, 2005) and (Kripfganz and Schulze, 1987) it is known for which \(M\) and \(J\) we can find parameters \(\sigma_{i}\), \(\mathbf{\beta}^{(i)}_{j}\), and \(\gamma^{(i)}_{j}\) such that \(\hat{F}(\mathbf{x})=F(\mathbf{x})\) holds for \(m=1\). Since (14) is a maxout NN with \(\ell=1\), \(w_{1}=M\), \(p_{1}=J\), \(w_{2}=1\), these results can be used directly to find suitable topologies for maxout NN that allow an exact description of the control law. For ReLU NN these results are not directly applicable. Therefore, (14) is decomposed in, e.g., (Hanin, 2017; Arora et al., 2016) to find a ReLU topology that allows an exact description. Such a topology is used in, e.g., (Karg and Lucia, 2020, Thm. 1) to represent the control law. The decomposition step typically leads to a more conservative ReLU topology in terms of the number of layers \(\ell\) and neurons per layer \(w_{i}\) compared to a maxout NN that directly represents (14). Another advantage of the maxout activation is that it trivially includes the ReLU activation as a special case. In fact, a ReLU activation is a maxout activation with \(p_{i}=2\) where every second affine segment is set to zero (cf. (8) and (9)). Thus an approach that allows the computation of (11) and (12) for maxout NN is also applicable to ReLU NN and hence extends the known results.
### Exact maxout neural networks
**Corollary 1**.: _Let \(F(\mathbf{x})\) be an arbitrary PWA function of the form (1) with one-dimensional output, i.e., \(m=1\). Then for a maxout NN \(\Phi(\mathbf{x})\) with \(\ell=1\), \(w_{2}=1\) and \(\mathbf{b}^{(2)}=0\) there exist parameters_
1. \(p_{1}\!\in\!\mathbb{N}\)_,_ \(\mathbf{W}^{(1)}\!\in\!\mathbb{R}^{2p_{1}\times n}\)_,_ \(\mathbf{b}^{(1)}\!\in\!\mathbb{R}^{2p_{1}\times 1}\) _and_ \(\mathbf{W}^{(2)}\!\in\!\mathbb{R}^{1\times 2}\)__
2. \(w_{1}\!\in\!\mathbb{N}\)_,_ \(\mathbf{W}^{(1)}\!\in\!\mathbb{R}^{w_{1}(n+1)\times n}\)_,_ \(\mathbf{b}^{(1)}\!\in\!\mathbb{R}^{w_{1}(n+1)\times 1}\) _and_ \(\mathbf{W}^{(2)}\!\in\!\mathbb{R}^{1\times w_{1}}\)__
_with \(\Phi(\mathbf{x})=F(\mathbf{x})\)._
Proof.: Since the resulting maxout NN are of the form (14) with \(J=p_{1}\), \(M=2\) for (i) and \(J=n+1\), \(M=w_{1}\) for (ii), the proof follows from (Kripfganz and Schulze, 1987, Lem. 1) and (Wang and Sun, 2005, Thm. 1), respectively.
Corollary 1 provides two different topologies (i) and (ii) for maxout NN with one hidden layer that can represent every PWA function with one-dimensional output exactly, if \(p_{1}\) and \(w_{1}\), respectively, are chosen large enough. Although the results are formulated for \(m=1\), they can also be applied to PWA functions with \(m>1\) by applying Corollary 1 to every dimension of the output individually. Thus maxout NN can represent the PWA control law (5) for arbitrary state and input dimensions.
### Maxout neural networks as MILP
In this section, we derive our main result, namely the computation of the output and the local gain of NN with maxout activations by solving an MI feasibility problem. This result then allows us to compute (11) and (12) for the case where the control law (5) is approximated by a maxout NN. According to the descriptions in Section 2.3, both values are sufficient to prove stability of the closed-loop system. Moreover, these values provide a more profound way to evaluate the success of the training than just considering the error at the training samples (13) as in, e.g., (Karg and Lucia, 2020; Teichrib and Schulze Darup, 2021). We derive our results based on the observation that the output of a maxout NN with \(\ell\) hidden layers can be modeled by the recursion
\[\mathbf{y}^{(0)} =\mathbf{x},\] \[\mathbf{y}^{(i)} =\Delta^{(i)}(\mathbf{W}^{(i)}\mathbf{y}^{(i-1)}+\mathbf{b}^{(i)}),\;1\leq i \leq\ell,\] \[\mathbf{\Phi}(\mathbf{x}) =\mathbf{W}^{(\ell+1)}\mathbf{y}^{(\ell)}+\mathbf{b}^{(\ell+1)} \tag{15}\]
where \(\Delta^{(i)}\) is a block diagonal matrix of the form
\[\Delta^{(i)}:=\operatorname{diag}(\mathbf{\delta}^{(i)}_{1:p_{i}},\ldots,\mathbf{\delta}^{(i)}_{p_{i}(w_{i}-1)+1:p_{i}w_{i}}) \tag{16}\]
with binary variables \(\mathbf{\delta}^{(i)}\in\mathbb{R}^{1\times p_{i}w_{i}}\). The matrix \(\Delta^{(i)}\) is such that for all elements \(\mathbf{\delta}^{(i)}_{j}\) the logical implication
\[[\mathbf{\delta}^{(i)}_{k_{s}}=1]\Longleftrightarrow[\mathbf{W}^{(i)}_{j} \mathbf{y}^{(i-1)}+\mathbf{b}^{(i)}_{j}\leq\mathbf{W}^{(i)}_{k_{s}}\mathbf{y}^{(i-1)}+\mathbf{b}^{ (i)}_{k_{s}},\] \[\qquad\qquad\qquad\forall j\in\mathcal{A}^{(i)}_{s}\setminus k_ {s},\;k_{s}\in\mathcal{A}^{(i)}_{s}],\] \[\forall s\in\{1,\ldots,w_{i}\},\;\forall i\in\{1,\ldots,\ell\}. \tag{17}\]
holds. Thus, for every neuron in layer \(i\) the matrix \(\Delta^{(i)}\) selects the largest affine segment among all \(p_{i}\) segments. With the results from (Fischetti and Jo, 2018, Sec. 2) we can model the logical implication (17) by the following MI linear constraints
\[\mathbf{q}^{(i)}_{s} \leq\mathbf{W}^{(i)}_{j}\mathbf{q}^{(i-1)}+\mathbf{b}^{(i)}_{j}+\overline{b}^{(i)}(1-\mathbf{\delta}^{(i)}_{j}),\] \[-\mathbf{q}^{(i)}_{s} \leq-\mathbf{W}^{(i)}_{j}\mathbf{q}^{(i-1)}-\mathbf{b}^{(i)}_{j}-\varepsilon(1-\mathbf{\delta}^{(i)}_{j}),\] \[\mathbf{q}^{(0)} =\mathbf{x},\] \[\sum_{j\in\mathcal{A}^{(i)}_{s}}\mathbf{\delta}^{(i)}_{j}=1,\] \[\qquad\forall j\in\mathcal{A}^{(i)}_{s},\;\forall s\in\{1,\ldots,w_{i}\},\;\forall i\in\{1,\ldots,\ell\}, \tag{18}\]
with a constant upper bound \(\overline{b}^{(i)}\in\mathbb{R}\) and a small \(\varepsilon\geq 0\). The variables \(\mathbf{q}^{(i)}\in\mathbb{R}^{w_{i}}\) and \(\mathbf{\delta}^{(i)}\) are real and binary optimization variables, respectively.
**Lemma 2**.: _Let \(\mathbf{q}^{(i)}\) and \(\mathbf{\delta}^{(i)}\) be such that the constraints (18) with \(\varepsilon=0\) hold. Then, the output of the maxout NN (6) is given by_
\[\mathbf{\Phi}(\mathbf{x})=\mathbf{W}^{(\ell+1)}\mathbf{q}^{(\ell)}+\mathbf{b}^{(\ell+1)}. \tag{19}\]
Proof.: If we can show that
\[\mathbf{q}^{(i)}=\mathbf{y}^{(i)} \tag{20}\]
holds for all \(i\in\{0,\ldots,\ell\}\), then (19) holds according to (6) and (7). We prove this by induction. The base case \(i=0\) is true by construction since we have \(\mathbf{q}^{(0)}=\mathbf{x}=\mathbf{y}^{(0)}\). Moreover, the constraints (18) are such that for every index set \(\mathcal{A}^{(i)}_{s}\) there exists exactly one \(k_{s}\in\mathcal{A}^{(i)}_{s}\) with \(\mathbf{\delta}^{(i)}_{k_{s}}=1\) and \(\mathbf{\delta}^{(i)}_{j}=0,\forall j\in\mathcal{A}^{(i)}_{s}\setminus k_{s}\). Thus, if we assume that the induction hypothesis (20) is true for some \(i=t\), we obtain for \(i=t+1\) the constraints
\[\mathbf{q}^{(t+1)}_{s} =\mathbf{W}^{(t+1)}_{k_{s}}\mathbf{y}^{(t)}+\mathbf{b}^{(t+1)}_{k_{s}},\;k_{s}\in\mathcal{A}^{(t+1)}_{s},\] \[\mathbf{q}^{(t+1)}_{s} \leq\mathbf{W}^{(t+1)}_{j}\mathbf{y}^{(t)}+\mathbf{b}^{(t+1)}_{j}+\overline{b}^{(t+1)},\;\forall j\in\mathcal{A}^{(t+1)}_{s}\setminus k_{s},\] \[\mathbf{q}^{(t+1)}_{s} \geq\mathbf{W}^{(t+1)}_{j}\mathbf{y}^{(t)}+\mathbf{b}^{(t+1)}_{j},\;\forall j\in\mathcal{A}^{(t+1)}_{s}\setminus k_{s},\] \[\forall s \in\{1,\ldots,w_{t+1}\}.\]
The second constraint always holds for a sufficiently large \(\overline{b}^{(t+1)}\) and is thus inactive. Hence the constraints (18) imply
\[\mathbf{q}^{(t+1)}_{s}=\mathbf{W}^{(t+1)}_{k_{s}}\mathbf{y}^{(t)}+\mathbf{b}^{(t+ 1)}_{k_{s}}\geq\mathbf{W}^{(t+1)}_{j}\mathbf{y}^{(t)}+\mathbf{b}^{(t+1)}_{j},\] \[\forall j\in\mathcal{A}^{(t+1)}_{s}\setminus k_{s},\;\forall s\in\{1, \ldots,w_{t+1}\}.\]
This is exactly the relation (17) for \(i=t+1\), i.e., the largest affine segment of the \(s\)-th neuron is the \(k_{s}\)-th segment. Thus the relation
\[\mathbf{q}^{(t+1)}=\Delta^{(t+1)}\left(\mathbf{W}^{(t+1)}\mathbf{y}^{(t)}+\mathbf{b}^{(t+1)}\right)=\mathbf{y}^{(t+1)}\]
holds according to (15), which proves the induction step and completes the proof.
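The selection logic of Lemma 2 can be checked numerically. The following self-contained sketch (ours, with an arbitrary random topology) builds \(\Delta^{(1)}\) via an argmax, mirrors the recursion (15)-(16), and confirms that the selected segments reproduce the maxout output.

```python
import numpy as np

rng = np.random.default_rng(0)
n, w1, p1 = 2, 3, 4                           # small hypothetical topology
W1 = rng.standard_normal((p1 * w1, n)); b1 = rng.standard_normal(p1 * w1)
W2 = rng.standard_normal((1, w1));      b2 = rng.standard_normal(1)
x = rng.standard_normal(n)

z = (W1 @ x + b1).reshape(w1, p1)             # the p1 segments of each neuron
delta = np.zeros_like(z)
delta[np.arange(w1), z.argmax(axis=1)] = 1.0  # Delta^(1) selects the max segment (17)
q1 = (delta * z).sum(axis=1)                  # q^(1) from the recursion (15)
assert np.allclose(q1, z.max(axis=1))         # equals the maxout output (9)
print(W2 @ q1 + b2)                           # Phi(x) as in (19)
```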
Thus the solution exists and is unique for \(\mathbf{x}\in\mathbb{R}^{n}\setminus\overline{\mathbf{B}}\). Next, we prove that (24) holds for feasible \(\mathbf{x}\), starting with the case \(\mathbf{\delta}_{h}^{(i)}=0\), where we have
\[\tilde{\mathbf{\xi}}_{h,r}^{(i)}=0\text{ and }\underline{w}_{h,r}^{(i)}\leq\tilde{\mathbf{W} }_{h,r}^{(i)}\leq\overline{w}_{h,r}^{(i)}\]
and for \(\mathbf{\delta}_{h}^{(i)}=1\)
\[\underline{w}_{h,r}^{(i)}\leq\tilde{\mathbf{\xi}}_{h,r}^{(i)}\leq \overline{w}_{h,r}^{(i)}\text{ and }\tilde{\mathbf{\xi}}_{h,r}^{(i)}=\tilde{\mathbf{W}}_{h,r}^{(i)},\] \[\forall h\in\{1,\ldots,p_{i}w_{i}\},\;\forall r\in\{1,\ldots,n\}, \;\forall i\in\{1,\ldots,\ell\}.\]
We can now rewrite the equality constraint in the third line of (23) as follows
\[\mathbf{\xi}_{1,r}^{(i)}=\sum_{h=1}^{p_{i}}\mathbf{\delta}_{h}^{(i)} \tilde{\mathbf{W}}_{h,r}^{(i)}=\Delta_{1,:}^{(i)}\tilde{\mathbf{W}}_{:,r}^{(i)},\] \[\vdots\] \[\mathbf{\xi}_{w_{i},r}^{(i)}=\sum_{h=p_{i}(w_{i}-1)+1}^{p_{i}w_{i}} \mathbf{\delta}_{h}^{(i)}\tilde{\mathbf{W}}_{h,r}^{(i)}=\Delta_{w_{i},:}^{(i)}\tilde{ \mathbf{W}}_{:,r}^{(i)},\] \[\forall r\in\{1,\ldots,n\},\;\forall i\in\{1,\ldots,\ell\}.\]
This can be written in a more compact form as matrix multiplication
\[\mathbf{\xi}^{(i)}=\Delta^{(i)}\tilde{\mathbf{W}}^{(i)}=\Delta^{(i)}\mathbf{W}^{(i)}\mathbf{ \xi}^{(i-1)},\;\forall i\in\{1,\ldots,\ell\}.\]
Starting with \(\mathbf{\xi}^{(0)}=\mathbf{I}\), this recursion leads to
\[\mathbf{\xi}^{(\ell)}=\prod_{i=1}^{\ell}\Delta^{(i)}\mathbf{W}^{(i)}.\]
By substituting the former relation in (21) we can show that the local gain is indeed given by (24), which completes the proof.
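The product structure of the local gain can likewise be evaluated directly. This sketch (ours, assuming \(\mathbf{x}\) lies in the interior of a region so that all argmaxima are unique) accumulates \(\Delta^{(i)}\mathbf{W}^{(i)}\) layer by layer, yielding the local gain \(\mathbf{W}^{(\ell+1)}\prod_{i=1}^{\ell}\Delta^{(i)}\mathbf{W}^{(i)}\) as in (24).

```python
import numpy as np

def local_gain(x, Ws, bs, W_out, ps):
    """Local gain K_NN(x) = W^(l+1) * prod_i Delta^(i) W^(i) (cf. (24)).

    Assumes x lies in the interior of a polyhedral region, i.e. every
    neuron has a unique maximizing segment. Shapes as in maxout_forward.
    """
    y = x
    xi = np.eye(len(x))                          # xi^(0) = I
    for W, b, p in zip(Ws, bs, ps):
        z = (W @ y + b).reshape(-1, p)
        idx = z.argmax(axis=1)                   # active segment per neuron
        rows = np.arange(z.shape[0]) * p + idx   # rows of W selected by Delta^(i)
        xi = W[rows] @ xi                        # Delta^(i) W^(i) xi^(i-1)
        y = z.max(axis=1)
    return W_out @ xi
```

For a sanity check, this matrix can be compared against a finite-difference Jacobian of the forward pass at the same point.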
The constraints (18) and (23) can now be used to model a maxout NN by MI linear constraints and to compute the output (19) and the local gain (24) by solving an MI feasibility problem. If we further combine the results of Lemmas 2 and 3, we can compute the maximum error and the \(\alpha\)-Lipschitz constant of the error.
**Theorem 4**.: _Let \(\mathbf{\Phi}(\mathbf{x})\) be a maxout NN and \(\alpha\in\{1,\infty\}\). Then the maximum error (11) and the \(\alpha\)-Lipschitz constant of the error (12) can be computed by solving an MILP._
Proof.: The proof follows from (Fabiani and Goulart, 2022, Thm. 6.1), where it is shown that both values can be computed by solving an MILP if the output \(\mathbf{\Phi}(\mathbf{x})\) and the local gain \(\mathbf{K}_{\text{NN}}(\mathbf{x})\) of the NN can be computed by solving an MI feasibility problem. This is always possible for maxout NN according to Lemmas 2 and 3. Thus, by replacing the MI linear constraints for the ReLU NN in (Fabiani and Goulart, 2022, Thm. 6.1) with the constraints (18) and (23), we can compute (11) and (12) for a maxout NN by solving an MILP.
## 4 Numerical examples
We consider two simple examples to highlight our theoretical findings. In both examples, the maximum error (11) and the \(\alpha\)-Lipschitz constant of the error (12) are computed by solving the MILP according to Theorem 4 with the MOSEK optimization toolbox for MATLAB (see (MOSEK ApS, 2022)). The NN are implemented in Python with Keras (Chollet et al., 2015) and Tensorflow (Abadi et al., 2015). During the training, the MSE (13) is minimized with respect to the weighting matrices and bias vectors of the NN using stochastic gradient descent.
### Example system with \(n=1\)
We consider the system from (Schulze Darup and Cannon, 2016, Ex. 2) with the dynamics
\[x(k+1)=\tfrac{6}{5}x(k)+u(k)\]
and the constraints \(\mathcal{X}=[-10,10]\) and \(\mathcal{U}=[-1,1]\). As in (Schulze Darup and Cannon, 2016), we choose \(Q=19/5\), \(R=1\), \(P=5\), and \(\mathcal{T}=[-1,1]\). Finally, we select \(N=2\) for illustration purposes here. Explicitly solving the OCP (3) then leads to the control law
\[\pi(x)=\left\{\begin{array}{ccc}1&\text{if}&x\in\left[-\tfrac{20}{9},-1\right],\\ -x&\text{if}&x\in\left[-1,1\right],\\ -1&\text{if}&x\in\left[1,\tfrac{20}{9}\right].\end{array}\right. \tag{27}\]
For a maxout NN with the topology from Corollary 1 (i) with \(p_{1}=2\),
\[\mathbf{W}^{(1)} =\begin{pmatrix}-1&0&-1&0\end{pmatrix}^{\top},\,\mathbf{b}^{(1)}= \begin{pmatrix}0&-1&-1&0\end{pmatrix}^{\top}\text{ and }\] \[\mathbf{W}^{(2)} =\begin{pmatrix}1&-1\end{pmatrix} \tag{28}\]
we have
\[\Phi(x)=\max\{-x,-1\}-\max\{-x-1,0\}=\pi(x). \tag{29}\]
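This identity is easy to verify numerically. The sketch below (our check, using the fact that (27) is the saturated linear law \(\operatorname{clip}(-x,-1,1)\) on the feasible set) compares (29) against the explicit control law on a dense grid of states.

```python
import numpy as np

# Maxout NN (28) with p1 = 2, written as in (29), versus the explicit
# control law (27), which equals clip(-x, -1, 1) on the feasible set.
def phi(x): return max(-x, -1.0) - max(-x - 1.0, 0.0)   # NN output (29)
def pi(x):  return float(np.clip(-x, -1.0, 1.0))        # control law (27)

xs = np.linspace(-20 / 9, 20 / 9, 1001)
print(max(abs(phi(x) - pi(x)) for x in xs))             # 0.0 up to round-off
```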
We implemented the constraints (18) and (23) with \(\overline{b}^{(i)}=\overline{w}_{h,r}^{(i)}=-\underline{w}_{h,r}^{(i)}=10^{4}\), \(\varepsilon=10^{-5}\), and used them to compute (11) and (12) with \(\alpha=\infty\) according to Theorem 4. For a maxout NN with the parameters (28), which exactly represents the control law, we computed \(\overline{e}_{\infty}=4.5\times 10^{-16}\) and \(\mathcal{L}_{\infty}(e,\mathcal{T})=0\). This can be seen as experimental evidence that the implementation correctly computes the maximum error and the \(\infty\)-Lipschitz constant of the error.
To obtain the data in Table 1, we randomly sampled \(1000\) points from the control law (27) and trained different maxout NN for \(1000\) epochs.
The results show that a maxout NN with \(w_{1}=2\) and \(p_{1}=2\), which can represent the control law exactly (cf. (29)), is sufficient to get a good approximation in terms of the maximum error and the \(\infty\)-Lipschitz constant of the error. We only get an approximation with a relatively high error for the first and last topology of Table 1. This is not surprising since for \(w_{1}=1\), \(p_{1}=4\) the maxout NN is a convex PWA function and for \(w_{1}=4\), \(p_{1}=1\) it is an affine function. In both cases, it is not possible to find an approximation of (27) with a maximum error close to zero.
### Example system with \(n=2\)
We consider the OCP (3) with
\[\mathbf{A} =\begin{pmatrix}1&1\\ 0&1\end{pmatrix},\quad\mathbf{B}=\begin{pmatrix}0.5\\ 1\end{pmatrix},\] \[\mathcal{X} =\{\mathbf{x}\in\mathbb{R}^{2}\mid|\mathbf{x}_{1}|\leq 25,\,|\mathbf{x}_{2}| \leq 5\},\] \[\mathcal{U} =\{u\in\mathbb{R}\mid|u|\leq 1\},\]
and choose \(\mathbf{Q}=\mathbf{I}\), \(R=1\), \(N=3\), \(\mathbf{P}\) as the solution of the discrete-time algebraic Riccati equation and \(\mathcal{T}\) as the maximal output admissible set (see (Gilbert and Tan, 1991) for details). Explicitly solving the OCP leads to a control law with \(29\) regions. Using the procedure described in (Kripfganz and Schulze, 1987, Sec. 1), we find a maxout NN of the type (i) from Corollary 1 with \(p_{1}=38\) and \(231\) parameters, that exactly describes the control law. For this NN we computed \(\overline{e}_{\infty}=2.68\times 10^{-12}\) and \(\mathcal{L}_{\infty}(e,\mathcal{T})=3.11\times 10^{-5}\). A ReLU NN that can exactly represent the control law has according to (Karg and Lucia, 2020, Thm. 1) \(457\) parameters. This shows that, as already stated in Section 3, ReLU NN for the exact representation of PWA functions are more conservative compared to maxout NN.
Table 2 summarizes the training results for different maxout NN trained for \(1000\) epochs on \(10^{4}\) samples of the control law. The values of (11) and (12) are computed as in the first example, except that in this example \(\varepsilon=10^{-3}\) is chosen.
The experimental data indicate that the first maxout NN is the best choice. Note that this is a maxout NN with the topology (i), which theoretically allows an exact representation of the control law, trained on samples of the control law. The seventh maxout NN is of the type (ii), where \(w_{1}\) is chosen such that the number of parameters is equal to that of the first maxout NN. It has a comparable maximum error but an \(\infty\)-Lipschitz constant that is about \(183\) times higher. This observation may be explained by the fact that maxout NN with more neurons represent PWA functions with more regions compared to maxout NN with fewer neurons (see (Montufar et al., 2021, Thm. 3.6)). This makes it more likely that there exists one region for which the deviation between the local gains of the NN and the control law is larger, resulting in a larger \(\infty\)-Lipschitz constant (cf. (12)). The increasing \(\infty\)-Lipschitz constant with the number of neurons \(w_{1}\) in Table 2 supports this assumption.
## 5 Conclusions and outlook
We presented a method to compute two values, i.e., the maximum error (11) with respect to the optimal control law and the Lipschitz constant (12) of the error function (10), which are crucial to certify stability of the closed-loop system with a maxout NN controller that approximates the optimal control law of MPC. Our results are derived by showing how the output and the local gain of maxout NN can be computed by solving an MI feasibility problem (cf. Lems. 2 and 3). The combination of both lemmas leads to Theorem 4, which states that the maximum error and the \(\alpha\)-Lipschitz constant of the error function can be computed by solving an MILP. The computation of both values has been successfully applied to a number of realizations, including maxout NN which exactly describe the control law.
An interesting direction for future research is the combination of the proposed method with the results from (Teichrib and Schulze Darup, 2022), where maxout NN are designed that allow an exact description of the piecewise quadratic optimal value function in MPC. Such a combination may provide a new method to analyze maxout NN approximations of the optimal value function.
|
2305.09028 | SKI to go Faster: Accelerating Toeplitz Neural Networks via Asymmetric
Kernels | Toeplitz Neural Networks (TNNs) (Qin et al. 2023) are a recent sequence
model with impressive results. They require O(n log n) computational complexity
and O(n) relative positional encoder (RPE) multi-layer perceptron (MLP) and
decay bias calls. We aim to reduce both. We first note that the RPE is a
non-SPD (symmetric positive definite) kernel and the Toeplitz matrices are
pseudo-Gram matrices. Further 1) the learned kernels display spiky behavior
near the main diagonals with otherwise smooth behavior; 2) the RPE MLP is slow.
For bidirectional models, this motivates a sparse plus low-rank Toeplitz matrix
decomposition. For the sparse component's action, we do a small 1D convolution.
For the low rank component, we replace the RPE MLP with linear interpolation
and use asymmetric Structured Kernel Interpolation (SKI) (Wilson et al. 2015)
for O(n) complexity: we provide rigorous error analysis. For causal models,
"fast" causal masking (Katharopoulos et. al. 2020) negates SKI's benefits.
Working in the frequency domain, we avoid an explicit decay bias. To enforce
causality, we represent the kernel via the real part of its frequency response
using the RPE and compute the imaginary part via a Hilbert transform. This
maintains O(n log n) complexity but achieves an absolute speedup. Modeling the
frequency response directly is also competitive for bidirectional training,
using one fewer FFT. We set a speed state of the art on Long Range Arena (Tay
et al. 2020) with minimal score degradation. | Alexander Moreno, Jonathan Mei, Luke Walters | 2023-05-15T21:25:35Z | http://arxiv.org/abs/2305.09028v2 | # SKI to go Faster: Accelerating Toeplitz Neural Networks via Asymmetric Kernels
###### Abstract
Toeplitz Neural Networks (TNNs) [29] are a recent impressive sequence model requiring \(O(n\log n)\) computational complexity and \(O(n)\) relative positional encoder (RPE) multi-layer perceptron (MLP) and decay bias calls. We aim to reduce both. We first note that the RPE is a non-symmetric-positive-definite (non-SPD) kernel and the Toeplitz matrices are pseudo-Gram matrices. Further, 1) the learned kernels display spiky behavior near the main diagonals with otherwise smooth behavior; 2) the RPE MLP is slow. For bidirectional models, this motivates a sparse plus low-rank Toeplitz matrix decomposition. For the sparse component's action, we do a small 1D convolution. For the low rank component, we replace the RPE MLP with linear interpolation and use Structured Kernel Interpolation (SKI) [39] for \(O(n)\) complexity. For causal models, "fast" causal masking [16] negates SKI's benefits. Working in the frequency domain, we avoid an explicit decay bias. To enforce causality, we represent the kernel via the real part of its frequency response using the RPE and compute the imaginary part via a Hilbert transform. This maintains \(O(n\log n)\) complexity but achieves an absolute speedup. Modeling the frequency response directly is also competitive for bidirectional training, using one fewer FFT. We improve on speed and sometimes score on the Long Range Arena (LRA) [35].
Figure 1: (a) In LRA, our approaches, SKI and FD-TNN are faster than TNNs for 1d tasks with strong LRA scores. Bubble sizes denote training model memory. (b) Our approach, FD-TNN, achieves substantial speed ups in iterations/sec for pre-training both causal and bidirectional models. Note that we do not include SKI-TNN in this plot as it does not use an MLP based RPE.
## 1 Introduction
Sequence modeling is important in natural language processing, where sentences are represented as a sequence of tokens. Successful sequence modeling typically involves token and channel mixing. Token mixing combines representations of different sequence parts, while channel mixing combines the information across different dimensions of embedding vectors used to encode tokens. Transformers [37] are arguably the most successful technique for sequence modeling, and variants including [14; 9] have achieved state of the art performance on natural language tasks. They use self-attention for token mixing and feedforward networks for channel mixing.
Recently, [29] proposed Toeplitz Neural Networks (TNN) using Toeplitz matrices for token mixing. They use a learned neural similarity function, the Relative Positional Encoder (RPE), to form the Toeplitz matrices. Toeplitz matrix vector multiplication can be performed with sub-quadratic complexity using the Fast Fourier Transform (FFT), giving the TNN token mixing layer a total \(O(dn\log n)\) computational complexity, where \(d\) is the embedding dimension and \(n\) is the sequence length. This achieved state of the art predictive performance and nearly state of the art speed for the Long Range Arena (LRA) benchmark [35]. They also showed strong performance pre-training on Wikitext-103 [21] and on the GLUE benchmark [38]. Despite strong empirical speed performance, TNNs have two fundamental efficiency limitations: 1) super-linear computational complexity; 2) many calls to the RPE: for each layer, one call per relative position.
In this paper, we interpret the RPE as a non-SPD kernel and note 1) the learned kernels are discontinuous near the main diagonals but otherwise smooth globally; 2) the ReLU RPE learns 1D piecewise linear functions: an MLP is slower than necessary. For bidirectional models, this motivates a sparse plus low-rank decomposition. We apply the sparse component's action via a small 1D convolution. For the low rank component, we replace the RPE MLP with linear interpolation at a set of inducing points and an asymmetric extension of Structured Kernel Interpolation (SKI) [39] for \(O(n)\) complexity. Further, using an inverse time warp, we can extrapolate beyond sequence lengths observed during training. For causal models, even "fast" causal masking [16] negates the speed and memory benefits from SKI. Thus, we instead represent the real part of the kernel's frequency response using the RPE MLP, and evaluate the RPE with finer frequency resolution to extrapolate to longer sequence lengths in the time domain. From the real part, we compute the imaginary part via a Hilbert transform during the forward pass to enforce causality. In the bidirectional setting, we remove the causality constraint and represent the complex frequency response of the kernel with the RPE MLP. Levels of smoothness in frequency response imply decay rates in the time domain: thus we model the decay bias implicitly. This maintains \(O(n\log n)\) complexity but achieves an absolute speedup. Further, it often leads to better predictive performance on LRA tasks.
This paper has three primary contributions: 1) a TNN sparse plus low rank decomposition, extending SKI to TNNs for the low rank part. We replace the RPE MLP with linear interpolation and apply inverse time warping to efficiently train bidirectional TNNs. We provide rigorous error analysis for our asymmetric SKI application; 2) alternatively, for both causal and bidirectional models, we work directly in the frequency domain and use the Hilbert transform to enforce causality in the autoregressive setting. We prove that different activation choices for an MLP modeling the discrete time Fourier transform (DTFT) lead to different decay rates in the original kernel. 3) Empirical results: we demonstrate that our approaches show dramatically improved computational efficiency, setting a new speed state of the art on LRA [35] on the 1D tasks, with strong LRA score. In section 2 we describe related work. In section 3 we propose our new modeling approaches. In section 4 we state several theoretical results regarding our modeling approaches. In section 5 we extend the empirical results of [29], showing our speed gains with minimal prediction deterioration. We conclude in section 6.
## 2 Related
The most related papers use Toeplitz matrices for sequence modeling [29; 18; 27]. We build off of [29] and introduce several techniques to improve on their speed results. [18] took a similar approach, but applied Toeplitz matrices to self-attention rather than departing from it. [27] is also similar, using alternating Toeplitz and diagonal matrices as a replacement for self-attention within a Transformer. While we focus on the setting of [29] as it was released first, our approach is applicable to [27].
Also related are kernel-based xFormers, particularly those using the Nystrom method [23; 1]. The most related work is [41], which adapts a matrix Nystrom method for asymmetric matrices [22] to self-attention. We instead adapt this along with SKI [39] to Toeplitz matrices. [6] extends [41] by embedding the self-attention matrix into a larger PSD kernel matrix and approximating the larger matrix instead. Their final approximate matrix has lower spectral error compared to [41] and higher average validation accuracy on LRA [35]. However, their method is slightly slower. Also somewhat related are random feature self-attention approximations [26; 8]. These extend [30], but use different random features that better approximate self-attention than random Fourier or binning features.
Sparse transformers are also relevant. [7] proposed using strided and fixed patterns. [2] alternated between sparse locally banded and dense attention. Finally, [42] proposed combining random attention, window attention and global attention. Our use of a short convolutional filter is most similar to window attention. The space of efficient transformers is huge and there are many models that we haven't covered that may be relevant. [36] provides an excellent survey.
Other successful long sequence approaches include state space models [12; 34; 10], long convolution [32; 11], adding moving averages to gated attention [19] and more [17].
## 3 Modeling Approach
We review Toeplitz neural networks (TNNs) in section 3.1. We next speed up the TNN's Toeplitz neural operator (TNO). We discuss using Nystrom and SKI approaches to bidirectional training in 3.2. We discuss frequency based approaches, particularly for causal training in 3.3.
### Preliminaries: Toeplitz matrices and Toeplitz Neural Networks
TNNs [29] replace self-attention, which computes the action of self-attention matrices that encode the similarity between both observation values and absolute positions, with the action of Toeplitz matrices that encode similarity only based on _relative_ positions. Toeplitz matrices have, for each diagonal, the same entries from left to right. That is, \(\mathbf{T}_{ij}=t_{i-j},\mathbf{T}\in\mathbb{R}^{n\times n}\). Unlike self-attention matrices, which require \(O(n^{2})\) memory, a Toeplitz matrix has \(2n-1\) unique elements and requires \(O(n)\) memory. Due to close connections with discrete-time convolution, \(\mathbf{T}\mathbf{x}\) can be computed in \(O(n\log n)\) time by embedding \(\mathbf{T}\) in a circulant matrix and applying FFT.
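For concreteness, here is a minimal NumPy sketch of the circulant-embedding trick (our illustration; the ordering of the \(2n-1\) kernel values is a convention of this snippet):

```python
import numpy as np

def toeplitz_matvec(t, x):
    """Compute T x in O(n log n), where T_{ij} = t_{i-j}.

    t: length 2n-1 array holding t_{-(n-1)}, ..., t_0, ..., t_{n-1}
    (so t[n-1] is the main-diagonal value t_0).
    """
    n = len(x)
    # First column of the 2n x 2n circulant embedding (one zero pad).
    c = np.concatenate([t[n - 1:], [0.0], t[:n - 1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, 2 * n))
    return y[:n].real
```

For small \(n\), the result can be checked against an explicitly constructed Toeplitz matrix (e.g. via `scipy.linalg.toeplitz`).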
A TNN [29] has multiple sequence modeling blocks, which we show in Figure 3 in Appendix A. Each block has a Gated Toeplitz Unit (GTU), which does both token and channel mixing, followed by a Gated Linear Unit (GLU) [33], which does channel mixing. The core of the GTU is the Toeplitz Neural Operator (TNO), which does token mixing and is the part of the architecture that we modify.
We now describe the TNO, shown in Figure 2(b) of Appendix A. Given a sequence \(\mathbf{X}\in\mathbb{R}^{n\times d}\) of length \(n\) and dimension \(d\) in discrete time, there are \(2n-1\) unique relative positions/times \(i-j\) for \(i,j=1,\ldots,n\). An RPE \(:\mathbb{Z}\rightarrow\mathbb{R}^{d}\) neural network maps each relative position to a \(d\)-dimensional embedding. These embeddings are used to construct Toeplitz matrices \(\mathbf{T}^{l}\) for \(l=1,\ldots,d\) using
\[\mathbf{T}^{l}_{ij}=\lambda^{|i-j|}\text{RPE}_{l}(i-j).\]
RPE\({}_{l}(i-j)\) is a learned similarity between positions for dimension \(l\), while \(\lambda^{|i-j|}\) with \(\lambda\in(0,1)\) is an exponential decay bias penalizing far away tokens to be dissimilar. We can interpret \(\mathbf{T}^{l}_{ij}\) as evaluating a stationary non-SPD kernel \(k_{l}(i-j)=\lambda^{|i-j|}\text{RPE}_{l}(i-j)\). Thus \(\mathbf{T}^{l}\) can be interpreted as a pseudo or generalized Gram matrix. Letting \(\mathbf{x}^{l}\) be the \(l\)th column of \(\mathbf{X}\), the TNO outputs
\[\text{TNO}(\mathbf{X})=(\mathbf{T}^{1}\mathbf{x}^{1}\ldots\mathbf{T}^{d} \mathbf{x}^{d})\in\mathbb{R}^{n\times d}\]
where each \(\mathbf{T}^{l}\mathbf{x}^{l}\) is computed via the FFT as described above.
The main costs are the RPE's MLP, the FFT, and the decay bias. We aim to eliminate the MLP and decay bias when possible. In the bidirectional setting, we use SKI to apply the FFT using a much smaller Toeplitz matrix. In a separate model we learn the RPE's frequency response directly. In the bidirectional setting, this allows us to both avoid explicitly modeling the decay bias and use one fewer FFT. In the causal setting, it allows us to avoid explicitly modeling the decay bias.
### SKI Based Approaches for Bidirectional Training
For a given Toeplitz matrix \(\mathbf{T}\), we assume it admits a decomposition that we can approximate with a sparse+low-rank representation, \(\mathbf{T}=\mathbf{T}_{\text{sparse}}+\mathbf{T}_{\text{smooth}}\approx\mathbf{T}_{\text{sparse}}+\mathbf{T}_{\text{low}}\). Our bidirectional training thus consists of three primary components. The first, the sparse component \(\mathbf{T}_{\text{sparse}}\), is straightforward: applying the action \(\mathbf{T}_{\text{sparse}}\mathbf{X}\) of \(\mathbf{T}_{\text{sparse}}\in\mathbb{R}^{n\times n}\) with \(m\) non-zero diagonals is equivalent to applying a 1D convolution layer with filter size \(m\). We then discuss our asymmetric SKI for \(\mathbf{T}_{\text{low}}\) in section 3.2.1. Finally, we discuss how we handle sequence lengths not observed in training for \(\mathbf{T}_{\text{low}}\) via an inverse time warp in section 3.2.2. Algorithm 1 summarizes our TNO based on these techniques.
```
Given sequence \(\mathbf{X}\in\mathbb{R}^{n\times d}\) with columns \(\mathbf{x}^{l}\) Hyperparameters rank \(r\ll n\), sparse filter size \(m\), interpolation degree \(N\), decay parameter \(\lambda\) Compute inducing points \(p_{1},\dots,p_{r}\) evenly spaced on \([0,n]\) for\(l=1,\dots,d\)do Compute \(\mathbf{T}_{\text{sparse}}^{l}\mathbf{x}^{l}\) with a 1D convolutional filter, size \(m\). Let \(x(t)=\text{sign}(t)\lambda^{|t|}\). Form \(\mathbf{A}^{l}\in\mathbb{R}^{r\times r}\) with entries \(\mathbf{A}^{l}_{ij}=k_{l}(p_{i}-p_{j})=\text{RPE}_{l}(x(p_{i}-p_{j}))\) Form \(\mathbf{W}^{l}\in\mathbb{R}^{n\times r}\) degree \(N\) polynomial interpolation matrix Compute \(\mathbf{T}_{\text{low}}^{l}\mathbf{x}^{l}\) with \(\mathbf{T}_{\text{low}}^{l}=\mathbf{W}^{l}\mathbf{A}^{l}\mathbf{W}^{l\top}\) endfor Return \(\text{TNO}(\mathbf{X})=(\mathbf{T}_{\text{sparse}}^{1}\mathbf{x}^{1}+\mathbf{ T}_{\text{low}}^{1}\mathbf{x}^{1},\dots,\mathbf{T}_{\text{sparse}}^{d} \mathbf{x}^{d}+\mathbf{T}_{\text{low}}^{d}\mathbf{x}^{d})\)
```
**Algorithm 1** Sparse Plus Low Rank Bidirectional TNO with Asymmetric SKI
#### 3.2.1 SKI For Asymmetric Nystrom
Given an asymmetric stationary kernel \(k:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\), we wish to approximate the (pseudo) Gram matrix \(\mathbf{T}\in\mathbb{R}^{n\times n}\) using a low-rank approximation based on a smaller Gram matrix \(\mathbf{A}\in\mathbb{R}^{r\times r}\), with \(r\ll n\). In context, \(\mathbf{A}\) is formed using relative positions between a set of inducing points \(p_{1},\dots,p_{r}\) instead of the full set \(1,\dots,n\) that is used for \(\mathbf{T}\). That is,
\[\mathbf{T}_{ij}=k(i-j)\qquad\text{and}\qquad\mathbf{A}_{ij}=k(p_{i}-p_{j}).\]
In our case, the inducing points are uniformly spaced. Some submatrices of \(\mathbf{A}\) may be submatrices of \(\mathbf{T}\) (if inducing points are also observation points). To derive the Nystrom approximation, we form an
Figure 2: Our SKI-TNO and FD-TNO modifications: (a) We decompose Toeplitz matrices into sums of sparse + smooth components. Additionally, we use interpolation instead of an MLP to learn the RPE. (b) We use a 1D convolution to apply the sparse component and SKI as a low-rank approximation to the smooth component. (c) For the causal case, we use frequency domain RPE with a Hilbert Transform to enforce causality. (d) Our FD-TNO also is competitive in the bidirectional case, with one fewer FFT than TNO.
augmented Gram matrix \(\mathbf{K}\in\mathbb{R}^{(n+r)\times(n+r)}\) in block form as
\[\mathbf{K}=\begin{pmatrix}\mathbf{A}&\mathbf{B}\\ \mathbf{F}&\mathbf{T}\end{pmatrix},\]
where \(\mathbf{B}\in\mathbb{R}^{r\times n}\) and \(\mathbf{F}\in\mathbb{R}^{n\times r}\) are respectively the upper right and lower left partitions of the large Gram matrix \(\mathbf{K}\). Explicitly,
\[\mathbf{B}_{ij}=k(p_{i}-j)\qquad\text{and}\qquad\mathbf{F}_{ij}=k(i-p_{j}).\]
Extending [22] to allow singular \(\mathbf{A}\),
\[\widehat{\mathbf{K}}=\begin{pmatrix}\mathbf{A}\\ \mathbf{F}\end{pmatrix}\mathbf{A}^{\dagger}\begin{pmatrix}\mathbf{A}&\mathbf{B }\end{pmatrix}=\begin{pmatrix}\mathbf{A}&\mathbf{A}\mathbf{A}^{\dagger} \mathbf{B}\\ \mathbf{F}\mathbf{A}^{\dagger}\mathbf{A}&\mathbf{F}\mathbf{A}^{\dagger} \mathbf{B}\end{pmatrix}\]
where \(\mathbf{A}^{\dagger}\) is the Moore-Penrose pseudo-inverse satisfying \(\mathbf{A}\mathbf{A}^{\dagger}\mathbf{A}=\mathbf{A}\) (but not necessarily \(\mathbf{A}\mathbf{A}^{\dagger}=\mathbf{I}\) as in [22], which shows up in our different expressions for off-diagonal blocks of \(\widehat{\mathbf{K}}\)). Following structured kernel interpolation (SKI) [39], we approximate \(\mathbf{F}\) and \(\mathbf{B}\) using interpolation. Specifically,
\[\mathbf{F}\approx\mathbf{W}\mathbf{A}\qquad\text{and}\qquad\mathbf{B}\approx \mathbf{A}\mathbf{W}^{\top}\]
where \(\mathbf{W}\in\mathbb{R}^{n\times r}\) is a matrix of sparse interpolation weights with up to two non-zero entries per row for linear interpolation or up to four for cubic. These weights can be computed in closed form from the inducing points \(p_{i}\) and the observation points \(i\). Thus we have
\[\mathbf{T} \approx\mathbf{F}\mathbf{A}^{\dagger}\mathbf{B}\approx\mathbf{ W}\mathbf{A}\mathbf{A}^{\dagger}\mathbf{A}\mathbf{W}^{\top}=\mathbf{W}\mathbf{A} \mathbf{W}^{\top}\] \[\Rightarrow\widehat{\mathbf{T}} =\mathbf{W}\mathbf{A}\mathbf{W}^{\top}\]
as desired. We can set \(\mathbf{T}_{\text{low}}=\widehat{\mathbf{T}}\) and compute \(\widehat{\mathbf{T}}\mathbf{x}\) by first applying \(\mathbf{W}^{\top}\mathbf{x}\), which is an \(O(n)\) operation due to \(\mathbf{W}\in\mathbb{R}^{n\times r}\) having sparse rows. Next, we apply \(\mathbf{A}(\mathbf{W}^{\top}\mathbf{x})\). Since \(\mathbf{A}\) is a Toeplitz matrix, this is \(O(r\log r)\) as per Section 3.1. Finally, \(\mathbf{W}(\mathbf{A}\mathbf{W}^{\top}\mathbf{x})\), the action of \(\mathbf{W}\), is again an \(O(n)\) operation. Thus computing \(\widehat{\mathbf{T}}\mathbf{x}\) requires \(O(n+r\log r)\) computation. On a GPU, this factorization achieves a speedup from having small \(r\) and being able to leverage efficient parallelized matrix multiplication on specialized hardware. However, in PyTorch [25], we note that for medium sized matrices up to \(n=512\), the time required for data movement in order to perform sparse-dense matrix multiplications can be higher than that of simply performing dense matrix multiplication. This means that in practice, we may instead choose to perform batched dense matrix multiplication, which yields an absolute speedup but a worse asymptotic complexity of \(O(nr^{2}+r\log r)\).
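Putting the pieces together, the following sketch (ours; `k` is assumed to be a vectorized kernel function operating on NumPy arrays, and \(\mathbf{A}\) is formed densely for clarity rather than exploiting its Toeplitz structure) applies \(\widehat{\mathbf{T}}\mathbf{x}=\mathbf{W}\mathbf{A}\mathbf{W}^{\top}\mathbf{x}\) with linear-interpolation weights:

```python
import numpy as np

def ski_matvec(k, inducing, obs, x):
    """Approximate T x ~ W A W^T x (asymmetric SKI, linear interpolation).

    k: vectorized stationary kernel k(t); inducing: r sorted inducing
    points; obs: n observation points; x: length-n vector.
    """
    r, n = len(inducing), len(obs)
    A = k(inducing[:, None] - inducing[None, :])     # r x r pseudo-Gram matrix
    # Each row of W has at most two non-zeros: linear interpolation weights.
    j = np.clip(np.searchsorted(inducing, obs) - 1, 0, r - 2)
    lam = (obs - inducing[j]) / (inducing[j + 1] - inducing[j])
    W = np.zeros((n, r))
    W[np.arange(n), j] = 1.0 - lam
    W[np.arange(n), j + 1] = lam
    return W @ (A @ (W.T @ x))                       # O(n) interpolation steps
```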
#### 3.2.2 Inverse Time Warp
TNNs use \(k_{l}(i-j)=\lambda^{|i-j|}\text{RPE}_{l}(i-j)\), where \(\text{RPE}_{l}(i-j)\) is an MLP. There are two issues: 1) the sequential computations required for an MLP are slow, and we only need to evaluate at \(2r-1\) points using SKI instead of \(2n-1\) to produce the full matrix; 2) extrapolation is used in extending to longer sequence lengths than the MLP was trained on, which is generally less reliable than interpolation.
In Proposition 1, we note that an MLP \(f:\mathbb{R}\rightarrow\mathbb{R}^{d}\) with ReLU activations and layer normalization is \(d\) piecewise linear functions. As we only need to evaluate at \(2r-1\) points, we could let \(\text{RPE}_{l}\) be a piecewise linear function with \(r\) grid points. However, we still need to handle extrapolation. We use an inverse time warp and let \(\text{RPE}_{l}\) linearly interpolate on \([-1,1]\) with the constraint \(\text{RPE}_{l}(0)=0\) and define \(x(t)=\text{sign}(t)\lambda^{|t|}\) for some \(0<\lambda<1\). We then let \(k_{l}(i-j)=\text{RPE}_{l}(x(i-j))\).
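A sketch of this warped evaluation (ours; `grid_vals` stands in for the learned interpolation values, and we assume an odd number of grid points so that \(0\) lies on the grid):

```python
import numpy as np

def warped_rpe(rel_pos, grid_vals, lam=0.99):
    """k(i - j) = RPE(sign(t) * lam**|t|) with t = i - j, where RPE is a
    piecewise-linear interpolant on [-1, 1] pinned to RPE(0) = 0."""
    grid_vals = np.asarray(grid_vals, dtype=float).copy()
    r = len(grid_vals)
    grid = np.linspace(-1.0, 1.0, r)
    grid_vals[r // 2] = 0.0                   # pin RPE(0) = 0 (odd r assumed)
    t = np.asarray(rel_pos, dtype=float)
    x = np.sign(t) * lam ** np.abs(t)         # inverse time warp into [-1, 1]
    return np.interp(x, grid, grid_vals)      # interpolation, never extrapolation
```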
### Frequency Based Approaches
#### 3.3.1 Causal Training
The SKI approach allows training bidirectional TNNs with linear complexity. However, fast causal masking negates SKI's benefits (see Appendix B). Thus we need an alternate causal speedup. We use an MLP in the Fourier domain to avoid an explicit time domain decay bias, and use the Hilbert transform to enforce causality. We now describe how we can learn a causal kernel when working in frequency domain (FD). We first define the discrete Hilbert transform, the key tool for achieving this.
**Definition 1**.: _The **discrete Hilbert transform** of the discrete Fourier transform \(\hat{k}\) is given by_
\[\mathcal{H}\{\hat{k}\}=\hat{k}*h\]
_where \(*\) denotes convolution and_
\[h[l]=\begin{cases}0,\,l\text{ even}\\ \frac{2}{\pi l},\,l\text{ odd}\end{cases}\]
The real and imaginary parts of the Fourier transform of a causal function are related to each other through the Hilbert transform. Thus, in order to represent a causal signal, we can model only the real part and compute the corresponding imaginary part. That is, we first estimate an even real function \(\hat{k}\) (symmetric about \(0\)) using an MLP. We then take \(\hat{k}_{\text{causal}}(\omega)=\hat{k}(\omega)-i\mathcal{H}\{\hat{k}\}( \omega)\).
The inverse Fourier transform \(k_{\text{causal}}\) of \(\hat{k}_{\text{causal}}\) will thus be causal. For a discussion of why this ensures causality, see [24]. See Algorithm 2 for TNO pseudocode using this approach. Different choices for the smoothness of the frequency domain MLP will lead to different decay rates in time domain, so that smoothness in frequency domain essentially serves the same purpose as the decay bias in [29]. We discuss this theoretically in Section 4.2. Note that we also find that working directly in the frequency domain for bidirectional models (without the Hilbert transform) is often competitive with SKI for speed (despite being \(O(n\log n)\) instead of \(O(n+r\log r)\)) due to needing one fewer FFT.
```
Given sequence \(\mathbf{X}\in\mathbb{R}^{n\times d}\) with columns \(\mathbf{x}^{l}\) Hyperparameters activation function for\(l=1,\ldots,d\)do \(\hat{\mathbf{x}}^{l}\leftarrow\mathcal{F}\{\mathbf{x}^{l}\}\), where \(\mathcal{F}\) is the rFFT. Compute even real function \(\hat{k}^{l}=\text{RPE}_{l}(\omega)\), \(\omega=\frac{m\pi}{n},m=0,\ldots,n\). Take discrete Hilbert transform \(\mathcal{H}\{\hat{k}^{l}\}\) via the rFFT and irFFT. Compute \(\hat{k}^{l}_{\text{causal}}(\omega)=\hat{k}^{l}(\omega)-i\mathcal{H}\{\hat{k }^{l}\}(\omega)\) for \(\omega=\frac{m\pi}{n},m=0,\ldots,n\). \(\mathbf{y}^{l}\leftarrow\mathcal{F}^{-1}\{\hat{k}^{l}_{\text{causal}} \odot\hat{\mathbf{x}}^{l}\}\), where \(\mathcal{F}^{-1}\) is the irFFT and \(\odot\) denotes an element-wise product. endfor Return \(\text{TNO}(\mathbf{X})=(\mathbf{y}^{1},\ldots,\mathbf{y}^{d})\)
```
**Algorithm 2** Causal TNO via Discrete Hilbert Transform
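The causality enforced by Algorithm 2 can be realized equivalently in the time domain: subtracting \(i\mathcal{H}\{\hat{k}\}\) from an even real frequency response amounts to zeroing the "negative-time" half of the even inverse transform and doubling the positive half. The sketch below (ours; it assumes an even length \(n\) and a kernel supported on \([0,n/2]\)) makes this explicit.

```python
import numpy as np

def causal_kernel_from_real_response(k_hat_real, n):
    """Causal length-n kernel whose DFT has real part k_hat_real.

    k_hat_real: n//2 + 1 samples of the even real response on [0, pi],
    e.g. an RPE evaluated at omega = m*pi/(n/2). This is equivalent to
    forming k_hat - i*H{k_hat} in the frequency domain and inverting.
    """
    assert n % 2 == 0
    k_even = np.fft.irfft(k_hat_real, n)   # real kernel, even in time
    w = np.zeros(n)
    w[0] = 1.0
    w[1:n // 2] = 2.0                      # double the positive times
    w[n // 2] = 1.0                        # midpoint sample is its own mirror
    return k_even * w                      # negative times zeroed -> causal
```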
#### 3.3.2 Bidirectional Training with FD TNN
We extend the FD approach to bidirectional training by removing the causality constraint and model the complex frequency response of real valued time domain kernels directly. To do so we simply double the output width of the RPE and allocate each half for the real and imaginary parts of the kernel frequency responses, while explicitly forcing real-valued responses at \(\omega=0\) and \(\pi\). While increasing the complexity of the RPE slightly, we achieve the speed ups in Figure 1 by eliminating the FFTs for the kernels and causality constraint, in addition to the decay bias.
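A compact sketch of the bidirectional FD-TNO application (ours; `re` and `im` are hypothetical names for the two halves of the widened RPE output, given in rFFT format on \([0,\pi]\)):

```python
import numpy as np

def fd_bidirectional_apply(re, im, x):
    """Apply one FD-TNO kernel channel: build the complex frequency
    response from the RPE's real/imaginary halves and multiply in the
    frequency domain -- no FFT of the kernel itself is needed."""
    n = len(x)                             # assumes len(re) == len(im) == n//2 + 1
    im = np.asarray(im, dtype=float).copy()
    im[0] = 0.0
    im[-1] = 0.0                           # force a real response at omega = 0, pi
    k_hat = np.asarray(re, dtype=float) + 1j * im
    return np.fft.irfft(k_hat * np.fft.rfft(x), n)
```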
## 4 Theory
We show in Proposition 1 that an MLP mapping from scalars with layer norm and ReLU activations is piecewise linear and continuous, suggesting that using an MLP that we only need to evaluate at a small number of points may be overparametrized, justifying the use of interpolated piecewise linear functions. In section 4.1 we analyze the spectral norm of the matrix approximation error for SKI. We assume the sparse component is exactly identifiable and bound the error of approximating the smooth term via a low-rank SKI factorization. We leave the problem of relaxing this assumption to future work. In section 4.2, we analyze how by using different activations with different smoothness when learning the DTFT of the kernel, we obtain corresponding decay rates for the time domain signal.
**Proposition 1**.: _A ReLU MLP \(f:\mathbb{R}\rightarrow\mathbb{R}^{d}\) with layer norm and no activation on its output is \(d\) piecewise linear continuous functions._
Proof.: See Appendix C.
### Matrix Approximation Spectral Norm Error
We give our main error bound for our SKI based low rank approximation. Note that this requires that our kernel is \(N+1\) times continuously differentiable, while the kernel we use in practice uses a piecewise linear function and is thus non-differentiable. In theory, we would need a smoother kernel, adding additional computation overhead. However, we find that empirical performance is still strong and thus we simply use piecewise linear kernels but include the error bound for completeness. Our result depends on the Nystrom error \(\mathbf{E}_{nyst}\): its \(l^{2}\) norm is bounded in [22].
**Theorem 1**.: _Assume that \(\mathbf{A}\) is non-singular and \(k:[p_{1},p_{r}]\rightarrow\mathbb{R}\) is an \(N+1\) times continuously differentiable function, where \(p_{1}\) is the smallest inducing point and \(p_{r}\) is the largest. Let \(\mathbf{T}_{r,opt}\) be the optimal rank \(r\) approximation to \(\mathbf{T}\) and let_
\[\mathbf{E}_{SKI}=\mathbf{W}\mathbf{A}\mathbf{W}^{\top}-\mathbf{T}_{r,opt}\]
_be the difference between the SKI approximation using linear interpolation and the optimal one, while_
\[\mathbf{E}_{nyst}=\mathbf{F}\mathbf{A}^{-1}\mathbf{B}-\mathbf{T}_{r,opt}\]
_is the difference between the Nystrom approximation and the optimal one. Then_
\[\|\mathbf{E}_{SKI}\|_{2}\leq\sqrt{nr}\max_{p_{n_{1}}\leq i\leq p_{n_{N}}} \frac{|\psi_{N}(i)|}{(N+1)!}L\left((N+1)\sqrt{n}+\frac{\min(\sigma_{1}(\mathbf{ F}),\sigma_{1}(\mathbf{B}))}{\sigma_{r}(\mathbf{A})}\right)+\|\mathbf{E}_{nyst}\|_{2}.\]
_where \(\psi_{N}(i)=\prod_{j=1}^{N}(i-p_{n_{j}})\) with \(p_{n_{j}}\) being the \(j\)th closest inducing point to \(i\), \(L\) is an upper bound on the \(N+1\)th derivative of \(k\), and \(\sigma_{i}(\mathbf{M})\) denotes the \(i\)th largest singular value of matrix \(\mathbf{M}\)._
Proof.: See Appendix D.1.
For linear interpolation \(\frac{|\psi_{N}(i)|}{(N+1)!}\leq\frac{h^{2}}{8}\), where \(h\) is the spacing between two neighboring inducing points. We have considered the sparse component of the Toeplitz matrix to be identifiable and focused on the error of approximating the smooth component. While there are potential approaches to relaxing this assumption [31; 3; 44; 20; 4; 5; 43], they must be adapted properly to the Toeplitz setting. Thus, this additional analysis is outside the scope of this paper and a fruitful direction for future work.
### Smoothness in Fourier Domain Implies Decay in Time Domain
We now discuss activation function choices when directly learning the discrete time Fourier transform (DTFT) \(\hat{k}\) as an MLP. In practice, we sample the DTFT to obtain the actually computable discrete Fourier transform (DFT) by evaluating the MLP with uniform spacing. Different levels of smoothness of the MLP \(\hat{k}\) imply different decay rates of the signal \(k\). One can think of the choice of activation function as a parametric form for the decay bias. For an MLP, using a GeLU activation implies super-exponential time domain decay. Using SiLU implies super-polynomial time domain decay. For ReLU the signal is square summable. While this subsection focuses on the theoretical relationship between smoothness and decay, in Appendix E.3 we show visualizations demonstrating that these relationships are observed in practice. We first define the DTFT and its inverse.
**Definition 2**.: _The **discrete time Fourier transform**[28; 24]\(\hat{k}\) or \(\mathcal{F}\{k\}\) of \(k\) is given by_
\[\hat{k}(\omega)\equiv\sum_{m=-\infty}^{\infty}k[m]\exp(-i\omega m)\]
**Definition 3**.: _The **inverse discrete time Fourier transform** of the DTFT \(\hat{k}\) is given by_
\[\mathcal{F}^{-1}\{\hat{k}\}[n]\equiv\frac{1}{2\pi}\int_{-\pi}^{\pi}\hat{k}( \omega)\exp(i\omega n)d\omega\]
We now give three theorems relating smoothness of the DTFT to decay of the signal (its inverse).
**Theorem 2**.: _Using a GeLU MLP for the DTFT \(\hat{k}\), for all \(a>0\), the signal \(k[n]\) will have decay_
\[k[n]=O(\exp(-an)).\]
Proof.: See Appendix E.1.
**Theorem 3**.: _Using a SiLU MLP for the DTFT \(\hat{k}\), the signal \(k[n]\) will have decay_
\[|k[n]|\leq\frac{1}{2\pi|n|^{N}}\big{\|}\hat{k}^{(N)}\big{\|}_{1}\]
_for all \(n\neq 0,N\in\mathbb{N}\)._
Proof.: See Appendix E.2.
**Theorem 4**.: _Using a ReLU MLP for the DTFT \(\hat{k}\) implies \(\|k\|_{2}<\infty\) (the signal is square summable)._
Proof.: Note that \(\hat{k}\in L^{2}[-\pi,\pi]\) since it is continuous. Then apply Parseval's theorem.
## 5 Experiments
We perform experiments in two areas: pre-training a causal language model on Wikitext-103 [21] and training bidirectional models on Long-Range Arena. We start with the repositories of the TNN paper2 and use their training and hyper-parameter settings unless indicated otherwise. We use A100 and V100s for training, and a single A100 for timing experiments.
Footnote 2: [https://github.com/OpenNLPLab/Tnn](https://github.com/OpenNLPLab/Tnn)
### Pre-training on Wikitext-103
In the causal case we aim to predict the next token, conditional on a fixed length sequence of previous tokens. Table 1 compares FD-TNN's causal pre-training perplexity [21] to existing models: it almost exactly matches that of TNNs. Our approach is faster for the same capacity: at sequence length 512 with 6 layer RPEs (as in the TNN paper), FD TNN is 15% faster than the baseline TNN on a single A100 GPU. When both use a three layer RPE, FD TNN is 10% faster. We provide some additional details for this experiment as well as for bidirectional pre-training (we see larger speed gains) in Appendix F.
### Long-Range Arena
The Long-Range Arena (LRA) is a benchmark with several long sequence datasets. The goal is to achieve both high LRA score (predictive performance) and training steps per second. Following [29], we take the TNN architecture and their tuned hyperparameter (HP) configurations3, simply replacing their TNO module with our SKI-TNO module with \(r=64\) and \(m=32\). We use \(\lambda=0.99\) where they set \(\lambda=1\), but otherwise perform _no additional HP tuning_ on 1D tasks and use smaller layers \(r=32\) and \(m=16\) for the 2D tasks. For FD-TNN, we simply use a same-sized RPE for all tasks except a 3-layer RPE for the CIFAR task. We could potentially achieve even higher accuracy with more comprehensive tuning on the 2D tasks or _any_ tuning for the 1D tasks. We select the checkpoint with the highest validation accuracy and report the corresponding test accuracy. SKI-TNN achieves similar average accuracy to TNN at a smaller size, while FD-TNN achieves _higher_ accuracy. We suspect that for some of these problems, the square summable signal implied by ReLU in frequency domain is a better parametric form than applying exponential decay bias. We show our results in Table 2.
Footnote 3: [https://github.com/OpenNLPLab/lra](https://github.com/OpenNLPLab/lra)
We additionally perform timing and memory profiling tests on a single 1x A100 instance, keeping the per-GPU batch size constant as in the training runs. In Figure 1(a), we plot for each 1D task the percentage of TNN accuracy achieved vs the percentage speedup relative to TNN, with the size of the marker corresponding to the peak memory usage measured. We highlight the 1D tasks because they required no tuning, and they represent the longest sequences at lengths ranging from \(1024\) to \(4096\), whereas the 2D tasks are treated as separate 1D sequences in each dimension, so that a \(32\times 32\) image is seen as alternating length \(32\) sequences. We note that because the effective sequence lengths are shorter, there is less benefit from using our methods over the baseline TNN.
## 6 Conclusion
In this paper, we note that [29]'s Toeplitz neural networks essentially apply the action of a generalized Gram matrix (the Toeplitz matrix) for an asymmetric kernel (the RPE times decay bias) as their main computationally expensive operation. The visualized learned Gram matrices motivate a sparse and low rank decomposition. We thus propose two different approaches to improve efficiency. In the bidirectional setting, we extend SKI to the asymmetric setting and use linear interpolation over a small set of inducing points to avoid the MLP entirely, while using an inverse time warp to handle extrapolation to time points not observed during training. This approach reduces the mathematical complexity from \(O(n\log n)\) to \(O(n+r\log r)\), where \(r\) is the number of inducing points. However, in practice we do not actually use \(O(n+r\log r)\) code, because a reshape required for sparse tensors makes them actually _slower_ than dense tensors. Thus we actually use \(O(nr^{2}+r\log r)\) in code: still much faster than baseline TNN for small \(r\). For causal training, as causal masking negates SKI's benefits, we instead eliminate the explicit decay bias. We do this by working directly in the frequency domain, enforcing causality via the Hilbert transform and enforcing decay in time domain via smoothness. For the bidirectional case, we eliminate the FFT applied to the kernels. While this maintains \(O(n\log n)\) computational complexity, it leads to a substantial speedup in practice and beats TNNs on LRA score.
| Architecture | PPL (val) | PPL (test) | Params (m) |
| --- | --- | --- | --- |
| _(Attn-based)_ | | | |
| Trans | 24.40 | 24.78 | 44.65 |
| LS | 23.56 | 24.05 | 47.89 |
| Flash | 25.92 | 26.70 | 42.17 |
| \(1+\)elu | 27.44 | 28.05 | 44.65 |
| Performer | 62.50 | 63.16 | 44.65 |
| Cosformer | 26.53 | 27.06 | 44.65 |
| _(MLP-based)_ | | | |
| Syn(D) | 31.31 | 32.43 | 46.75 |
| Syn(R) | 33.68 | 34.78 | 44.65 |
| gMLP | 28.08 | 29.13 | 47.83 |
| _(SS-based)_ | | | |
| S4 | 38.34 | 39.66 | 45.69 |
| DSS | 39.39 | 41.07 | 45.73 |
| GSS | 29.61 | 30.74 | 43.84 |
| _(TNN-based)_ | | | |
| TNN (reproduced, 3 layers) | 23.98 (23.96) | 24.67 (24.61) | 48.68 (48.59) |
| FD-TNN: Ours, 3 layers | 23.97 | 24.56 | 48.58 |

Table 1: **Performance on Wikitext-103, Causal Language Model**. We reproduce [29]'s table except for the bottom two rows, corresponding to the baseline TNN and our FD-TNN; for both we use the same RPE config with 3 layers. The baseline TNN results that we reproduced are added in parentheses. We achieve nearly the same perplexity as the baseline TNN, and our approach is faster: at sequence length 512 with a six-layer RPE (as in the TNN paper), FD-TNN is 15% faster than the baseline TNN; with a three-layer RPE, it is 10% faster.
| Architecture | Text | ListOps | Retrieval | Pathfinder | Image | Avg |
| --- | --- | --- | --- | --- | --- | --- |
| TNN | **86.39** | 47.33 | 89.40 | **73.89** | 77.84 | 74.97 |
| SKI-TNN | 83.19 | 45.31 | 88.73 | 68.30 | 76.46 | 72.40 |
| FD-TNN | 85.00 | **55.21** | **90.26** | 69.45 | **84.12** | **76.81** |

Table 2: **Performance on Long Range Arena**. We reproduce experiments and train our proposed variants using tuned hyperparameters from [29]. The best result in each task is in **bold**. Our proposed SKI-TNN and FD-TNN achieve similar overall performance with _no additional hyperparameter tuning_ on the 1D LRA tasks and a minimal amount of tuning on the 2D tasks.
## Acknowledgments
This project was funded by Luminous Computing. We thank Yiran Zhong and the team from [29] for helpful discussions about, and updates to, their code base. We also thank Tri Dao and Michael Poli for helpful questions and comments. Finally, we thank David Scott for going over the math of the paper.
|
2307.13609 | Dendritic Integration Based Quadratic Neural Networks Outperform
Traditional Artificial Ones | Incorporating biological neuronal properties into Artificial Neural Networks
(ANNs) to enhance computational capabilities poses a formidable challenge in
the field of machine learning. Inspired by recent findings indicating that
dendrites adhere to quadratic integration rules for synaptic inputs, we propose
a novel ANN model, Dendritic Integration-Based Quadratic Neural Network
(DIQNN). This model shows superior performance over traditional ANNs in a
variety of classification tasks. To reduce the computational cost of DIQNN, we
introduce the Low-Rank DIQNN and find that it retains the performance of
the original DIQNN. We further propose a margin to characterize the
generalization error and theoretically prove that this margin increases
monotonically during training, and we show the consistency between
generalization and our margin using numerical experiments. Finally, by
integrating this margin into the loss function, the change of test accuracy is
indeed accelerated. Our work contributes a novel, brain-inspired ANN model that
surpasses traditional ANNs and provides a theoretical framework to analyze the
generalization error in classification tasks. | Chongming Liu, Songting Li, Douglas Zhou | 2023-05-25T13:06:49Z | http://arxiv.org/abs/2307.13609v1 | # Dendritic Integration Based Quadratic Neural Networks Outperform Traditional Artificial Ones
###### Abstract
Incorporating biological neuronal properties into Artificial Neural Networks (ANNs) to enhance computational capabilities poses a formidable challenge in the field of machine learning. Inspired by recent findings indicating that dendrites adhere to quadratic integration rules for synaptic inputs, we propose a novel ANN model, the Dendritic Integration-Based Quadratic Neural Network (DIQNN). This model shows superior performance over traditional ANNs in a variety of classification tasks. To reduce the computational cost of DIQNN, we introduce the Low-Rank DIQNN and find that it retains the performance of the original DIQNN. We further propose a margin to characterize the generalization error and theoretically prove that this margin increases monotonically during training; numerical experiments show the consistency between generalization and our margin. Finally, by integrating this margin into the loss function, the change of test accuracy is indeed accelerated. Our work contributes a novel, brain-inspired ANN model that surpasses traditional ANNs and provides a theoretical framework to analyze the generalization error in classification tasks.
## 1 Introduction
While the artificial neural network (ANN) framework has made significant advances toward solving complex tasks, it still struggles with problems that are rudimentary for real brains [2]. A notable distinction between the modern ANN framework and the human brain is that the former relies on a large number of training samples and consumes large amounts of energy, whereas the latter runs on extremely low power (<20 watts) and possesses a strong generalization capability based on few-shot learning. Studies have demonstrated that incorporating dendritic features into ANNs can alleviate these issues and enhance overall performance [27; 20; 14]. However, it is difficult to quantify the nonlinear integration of dendrites, an essential property that allows individual neurons to perform complex computations [24; 25]. Consequently, simple nonlinear functions like ReLU and Sigmoid are often used in dendritic-inspired models [10; 19]. Moreover, theoretical analysis is crucial for understanding how dendritic-inspired properties can enhance the performance of ANNs, e.g., achieve a small generalization error in classification tasks.
To address these issues, we propose a novel ANN model, i.e., Dendritic Integration-Based Quadratic Neural Network (DIQNN), based on the recent studies indicating that the somatic response of biological neurons obeys a quadratic integration rule when multiple synaptic inputs are received on
the dendrites [7; 13]. Our model replaces the linear integration and nonlinear activation function with a simple quadratic integration, as shown below:
\[f(x)=\sigma(w\cdot x+b)\to f(x)=x^{T}Ax. \tag{1}\]
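As a concrete illustration, a classifier built from such quadratic units can be written in a few lines of PyTorch. This is a minimal sketch under our own naming and initialization choices, not the authors' released code:

```python
import torch
import torch.nn as nn

class DIQNNClassifier(nn.Module):
    """Quadratic classifier: one symmetric matrix A_k per class, logit_k = x^T A_k x."""
    def __init__(self, in_dim, n_classes):
        super().__init__()
        w = torch.randn(n_classes, in_dim, in_dim) / in_dim
        self.A = nn.Parameter((w + w.transpose(1, 2)) / 2)   # symmetric initialization

    def forward(self, x):                                    # x: (batch, in_dim)
        return torch.einsum("bi,kij,bj->bk", x, self.A, x)   # batched x^T A_k x

logits = DIQNNClassifier(784, 10)(torch.randn(32, 784))      # e.g. flattened MNIST digits
```

Because each \(A_k\) is initialized symmetric and the gradient of a quadratic form with respect to \(A_k\) is the symmetric matrix \(xx^T\), the weights stay symmetric under gradient descent.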
Using various classification datasets, we demonstrate that DIQNN significantly improves classification performance compared to traditional ANNs. To further reduce the computational cost of DIQNN, we introduce the Low-Rank DIQNN, which maintains the performance of the original DIQNN. To theoretically understand why Low-Rank DIQNN performs well, we present a normalized margin to characterize the generalization error and prove that the margin increases monotonically during the training process. We show the consistency between generalization and our margin on datasets such as MNIST [12], FMNIST [28] and CIFAR-10. Finally, by explicitly adding our margin to the loss function, we indeed observe a noticeable acceleration in the change of test accuracy in Low-Rank DIQNN.
The remainder of this paper is organized as follows. Section 2 provides an overview of previous works related to dendritic-inspired models, quadratic neural networks, and ANN classification margin. In Section 3, we present DIQNN's performance and the low-rank property in classification tasks. This motivates us to propose Low-Rank DIQNN in Section 4. We further propose a new margin and analyze the classification performance of Low-Rank DIQNN. Section 5 contains conclusions.
## 2 Related Work
**Dendritic-inspired computational model.** It is widely acknowledged that dendrites play a pivotal role in the nonlinear integration of neural systems, enabling individual neurons to execute intricate tasks [24; 25]. Recently, numerous studies have explored the implementation of dendritic features in ANNs from different aspects, yielding encouraging results. Refs. [27; 10; 19] examined the incorporation of dendritic morphology in ANNs and achieved higher test accuracy than traditional ANNs. Dendritic plasticity rules were used in [20; 17] to design learning algorithms that replace the non-biological backpropagation algorithm, obtaining enhanced performance on classification tasks. In [14], dendritic nonlinearity was represented by replacing the standard linear summation with a polynomial summation, with reported improvements in approximation and classification experiments.
**Quadratic neural networks (QNNs).** Previous works have examined quadratic integration as an alternative to the traditional linear summation. Table 1 presents several commonly studied quadratic neuron models and their corresponding references. A rank-one format of quadratic neurons is used in [5; 3; 4; 1; 29], with enhanced performance reported on classification tasks. In [9; 16; 31], quadratic neurons are used only in the convolution layers, leading to test accuracy improvements. In [21], a single-layer network with quadratic format is applied to solve the XOR problem. In contrast, Table 1 shows that our Low-Rank DIQNN differs from all these previous models, and we additionally provide a theoretical framework for analyzing the generalization error of our model.
**ANN classification margin.** Margin is a useful metric for describing both the generalization error and the robustness of models [23; 18]. However, how to accurately calculate the margin of an ANN
| Work reference | Quadratic neuron format | Layers using quadratic neurons | Model structure | Theory for generalization |
| --- | --- | --- | --- | --- |
| [5], [3], [4], [1], [29] | \(f(x)=(w_{a}x)\cdot(w_{b}x)\) | Convolution & Classifier | Various | N |
| [21] | \(f(x)=x^{T}Wx\) | Classifier | 1-layer | N |
| [9], [16], [31] | \(f(x)=x^{T}Wx\) | Convolution | Various | N |
| Our DIQNN | \(f(x)=x^{T}Wx\) | Classifier | Various | Y |
| Our Low-Rank DIQNN | \(f(x)=\sum_{i}(w_{i}x)\cdot(w_{i}x)\) | Classifier | Various | Y |

Table 1: Overview of current QNN works.
model remains an open question [6]. In [11; 26], a margin is defined and explicitly added to the loss function, achieving enhanced model performance (generalization error, robustness, etc.). Meanwhile, local linearization is used in [30; 23] to efficiently calculate the classification margin. However, a precise relationship between margin and generalization is still lacking. In [15], a margin is defined for homogeneous neural networks, together with a theoretical proof of its monotonicity and optimality when the loss is sufficiently small; those results, however, cannot describe the relation between margin and generalization error during the early stages of the training process. To contribute to this area, we propose our margin and theoretically prove that it increases throughout the training process. Numerical experiments further show that the dynamics of the test accuracy and of our margin are consistent.
## 3 Our dendritic integration based quadratic neural network (DIQNN)
All of the numerical experiments in this paper are conducted using Python and executed on a Tesla A100 computing card with a 7nm GA100 GPU, featuring 6,912 CUDA cores and 432 tensor cores.
### Performance of DIQNN on classification tasks
Here, we primarily evaluate DIQNN's performance on two datasets: MNIST and FMNIST. (Supplementary materials include information on experimental setup and results for additional datasets such as Iris.)
**MNIST results.** Our study evaluates a single-layer DIQNN, a single-layer linear net, and a two-layer linear net with the nonlinear activation function ReLU (Equation 1, left side). The test error curves are presented in the left panel of Figure 1. Additionally, Table 2 shows the number of trainable parameters and the final test accuracy for each model. Our results indicate that DIQNN performs significantly better than the linear nets, even with an almost identical number of trainable parameters. These findings underscore the advantages of DIQNN.
**FMNIST results.** We employ a two-layer convolutional neural network (CNN), comprising one convolution layer with a ReLU activation function and one fully connected layer. We examine whether the classifier (the fully connected layer) is linear or quadratic.
| Network structure | Linear net (784-10) | Linear net (784-ReLU(8000)-10) | Single layer DIQNN |
| --- | --- | --- | --- |
| Number of trainable parameters | \(7.84\times 10^{3}\) | \(6.35\times 10^{6}\) | \(6.15\times 10^{6}\) |
| Test accuracy | \(89.8\pm 0.5\%\) | \(93.1\pm 0.4\%\) | **\(98.0\pm 0.1\%\)** |

Table 2: Performance of different models on MNIST.
Figure 1: Performance of DIQNN. Left: Test error curves on MNIST. Right: Test accuracy (with error bar) on FMNIST.
The test accuracy of these two models is presented in the right panel of Figure 1, indicating that DIQNN significantly outperforms the linear net.
### Low rank properties of DIQNN
Although quadratic classifiers offer advantages, their computational cost is large. Therefore, it is imperative to reduce the number of trainable parameters while preserving the quadratic form of DIQNN. To this end, we examine whether the trained network possesses useful structure. First, the weight matrix \(A\) undergoes a spectral decomposition, which can be represented as follows1:
Footnote 1: As \(A\) is initialized as a symmetric matrix, it remains symmetric under the gradient descent algorithm.
\[A=\sum_{i=1}^{n}\lambda_{i}\mathbf{a}_{i}\mathbf{a}_{i}^{T},\ |\lambda_{1}| \geq|\lambda_{2}|\geq|\lambda_{3}|\geq\cdots\geq|\lambda_{n}| \tag{2}\]
Here, a numerical experiment is conducted on MNIST. Figure 2 illustrates the leading eigenvectors of different after-training weight matrices. It can be observed that these vectors are similar to the spike-triggered averages of the output neurons; the term "spike-triggered average" refers to the average input that evokes the highest response from a neuron [22]. This phenomenon implies that the majority of the information in the spike-triggered average could be encoded by the leading eigenvector of the weight matrix \(A\). In addition, we find that when only the first few eigencomponents of the weight matrix \(A\) are kept, DIQNN still achieves high test accuracy. Experiments on FMNIST exhibit similar results; additional details are provided in the supplementary materials. These observations suggest that DIQNN naturally displays low-rank properties after training.
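The truncation experiment described above can be reproduced with a short NumPy routine (our own helper, assuming a symmetric weight matrix as in Equation 2):

```python
import numpy as np

def truncate_rank(A, r):
    """Keep only the r eigencomponents of symmetric A with largest |eigenvalue|."""
    vals, vecs = np.linalg.eigh(A)                 # A = sum_i vals[i] * v_i v_i^T
    keep = np.argsort(-np.abs(vals))[:r]
    return (vecs[:, keep] * vals[keep]) @ vecs[:, keep].T
```

Replacing each class's weight matrix by `truncate_rank(A, r)` for small `r` and re-evaluating the test accuracy implements the low-rank test described above.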
## 4 Low-Rank DIQNN
The low-rank property of DIQNN allows us to explicitly set the weight matrix \(A\) in the form of a sum of rank-one matrices given by the following equation:
\[A=\sum_{i=1}^{r}\mathbf{c}_{i}\mathbf{c}_{i}^{T} \tag{3}\]
where \(r\) is a hyperparameter and \(\mathbf{c}_{i}\) is a column vector. We refer to this model as Low-Rank DIQNN owing to its capability of reducing computational cost, since \(r\) is always small in comparison to the dimension of weight matrix \(A\).
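In code, this parametrization avoids ever forming \(A\): since \(A=\sum_{i}\mathbf{c}_{i}\mathbf{c}_{i}^{T}\) implies \(x^{T}Ax=\sum_{i}(\mathbf{c}_{i}^{T}x)^{2}\), each class needs only \(r\) inner products. A minimal PyTorch sketch (names and initialization scale are our own choices):

```python
import torch
import torch.nn as nn

class LowRankDIQNNClassifier(nn.Module):
    """Rank-r quadratic classifier: logit_k = sum_i (c_{k,i}^T x)^2, i.e.
    x^T A_k x with A_k = sum_i c_{k,i} c_{k,i}^T, without ever forming A_k."""
    def __init__(self, in_dim, n_classes, rank):
        super().__init__()
        self.C = nn.Parameter(torch.randn(n_classes, rank, in_dim) / in_dim ** 0.5)

    def forward(self, x):                               # x: (batch, in_dim)
        proj = torch.einsum("kri,bi->bkr", self.C, x)   # c_{k,i}^T x for all k, i
        return proj.pow(2).sum(dim=-1)                  # cost O(r*d) per class, not O(d^2)
```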
Figure 2: The leading eigenvectors of different after-trained weight matrices.(Totally 10 weight matrices)
### Performance of Low-Rank DIQNN on classification tasks
We next investigate the performance of Low-Rank DIQNN on MNIST, FMNIST, and CIFAR-10. The details of the numerical experiments and results on other datasets such as Iris are provided in supplementary materials.
**MNIST and FMNIST results.** We utilize the same network structure as in the previous section for Low-Rank DIQNN. On MNIST, we employ single-layer Low-Rank DIQNNs with various ranks, while on FMNIST, we utilize a two-layer convolutional neural network that contains one low-rank classifier.
Figure 3 compares the linear net with Low-Rank DIQNN of various ranks. The figure shows that Low-Rank DIQNN with just one rank (\(r=1\) in Equation 3) performs better than the linear net, even though the two have an equal number of trainable parameters (Low-Rank DIQNN with one rank even outperforms a two-layer linear net with ReLU activation function). Moreover, as the rank of Low-Rank DIQNN increases, its performance continues to improve until it nearly saturates.
| Network structure | Residual blocks | Last layer method (Classifier) | Test accuracy |
| --- | --- | --- | --- |
| ResNet 10 | [1,1,1,1] | Linear | \(92.20\pm 0.11\%\) |
| | | Rank one | \(92.32\pm 0.18\%\) |
| | | Rank two | \(92.35\pm 0.09\%\) |
| | | Rank three | \(92.48\pm 0.07\%\) |
| | | Rank four | **\(92.50\pm 0.11\%\)** |
| ResNet 12 | [2,1,1,1] | Linear | \(92.59\pm 0.14\%\) |
| | | Rank one | \(92.63\pm 0.15\%\) |
| | | Rank two | \(92.76\pm 0.08\%\) |
| | | Rank three | \(92.72\pm 0.15\%\) |
| | | Rank four | **\(92.89\pm 0.14\%\)** |
| ResNet 14 | [2,2,1,1] | Linear | \(92.82\pm 0.20\%\) |
| | | Rank one | \(93.00\pm 0.07\%\) |
| | | Rank two | \(92.89\pm 0.17\%\) |
| | | Rank three | \(93.30\pm 0.12\%\) |
| | | Rank four | **\(93.31\pm 0.19\%\)** |
| ResNet 16 | [2,2,2,1] | Linear | \(93.27\pm 0.08\%\) |
| | | Rank one | \(93.26\pm 0.04\%\) |
| | | Rank two | \(93.29\pm 0.05\%\) |
| | | Rank three | \(93.33\pm 0.17\%\) |
| | | Rank four | **\(93.43\pm 0.05\%\)** |
| ResNet 18 | [2,2,2,2] | Linear | \(93.51\pm 0.04\%\) |
| | | Rank one | \(93.46\pm 0.13\%\) |
| | | Rank two | \(93.57\pm 0.03\%\) |
| | | Rank three | **\(93.67\pm 0.13\%\)** |
| | | Rank four | \(93.66\pm 0.06\%\) |

Table 3: Performance of Low-Rank DIQNN on CIFAR-10.
Figure 3: Performance of Low-Rank DIQNN. Left: On MNIST. Right: On FMNIST.
**CIFAR-10 results.** We train ResNets [8] with a low-rank classifier on CIFAR-10. As with the CNN used for FMNIST, we keep the convolution layers of the ResNet intact and only modify the last fully connected layer to the low-rank form. Test accuracy results for different ResNets are presented in Table 3, where the "Residual blocks" column indicates how these ResNets are constructed in terms of their convolution layers [8]. Similar to the results on MNIST and FMNIST, Low-Rank DIQNN performs better than the traditional ANN, and its performance improves with higher rank.
### Analyzing generalization of Low-Rank DIQNN on classification tasks
This section begins by defining the margin for Low-Rank DIQNN. We then utilize it to analyze Low-Rank DIQNN and show the consistency between generalization and our margin. Finally, we show that this margin can be used to accelerate the change of test accuracy in Low-Rank DIQNN. Details of the numerical experiments and the proof of our theorem are provided in the supplementary materials.
#### 4.2.1 Normalized margin
Given training dataset \(\{(x_{n},y_{n})\}_{n=1}^{N}\), each data point \(x_{n}\in\mathbb{R}^{d}\) has a true label \(y_{n}\in[k]\), where \(k\) (\(k\geq 2\)) denotes the total number of classes. Given a certain input \(x_{n}\), the Low-Rank DIQNN produces an output vector \(\Phi(x_{n},\theta)\in\mathbb{R}^{k}\), where \(\theta\) represents the trainable network parameters. Naturally, the margin for data point \(x_{n}\) can be defined as \(s_{n}=(\Phi(x_{n},\theta))_{y_{n}}-(\Phi(x_{n},\theta))_{j},\;j=\arg\max_{i \neq y_{n}}(\Phi(x_{n},\theta))_{i}\). However, this definition does not directly reflect the distance between data point \(x_{n}\) and the classification boundary of the model in input space since it does not obey scale invariance. To address this issue, we propose a normalized margin for single data point \(x_{n}\): \(\frac{s_{n}}{\|\Phi(x_{n},\theta)\|_{2}}\), and the average of this margin across all training data points are taken into consideration:
\[\frac{1}{N}\sum_{n=1}^{N}\frac{s_{n}}{\|\Phi(x_{n},\theta)\|_{2}}=\frac{1}{N} \sum_{n=1}^{N}\frac{(\Phi(x_{n},\theta))_{y_{n}}}{\|\Phi(x_{n},\theta)\|_{2}}- \frac{1}{N}\sum_{n=1}^{N}\frac{(\Phi(x_{n},\theta))_{j}}{\|\Phi(x_{n},\theta) \|_{2}}\triangleq\mu_{1}-\mu_{2}\triangleq\Delta\mu \tag{4}\]
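Equation 4 is straightforward to evaluate from a batch of network outputs; the following PyTorch sketch (our own) computes the average normalized margin:

```python
import torch

def normalized_margin(logits, labels):
    """Average normalized margin of Equation 4. logits: (N, k); labels: (N,)."""
    idx = torch.arange(len(labels))
    true_score = logits[idx, labels]            # (Phi(x_n, theta))_{y_n}
    masked = logits.clone()
    masked[idx, labels] = float("-inf")         # exclude the true class
    runner_up = masked.max(dim=1).values        # j = argmax_{i != y_n}
    return ((true_score - runner_up) / logits.norm(dim=1)).mean()
```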
#### 4.2.2 XOR problem
We employ a single-layer Low-Rank DIQNN with one rank to solve the XOR problem and prove the following theorem (see details in supplementary materials):
**Theorem 1**.: _Training single layer Low-Rank DIQNN with one rank to solve XOR problem under cross-entropy loss \(\mathcal{L}(\theta_{t})\) and gradient flow algorithm as \(\frac{d\theta_{t}}{dt}=-\nabla_{\theta}\mathcal{L}(\theta_{t})\), we have:_
\[\frac{d\Delta\mu}{dt}>0\;\;\text{and}\;\lim_{t\rightarrow\infty}\Delta\mu=1.\]
The above theorem demonstrates that our margin increases monotonically and finally reaches its maximum value. Notably, in Low-Rank DIQNN, one can prove that \(-1\leq\Delta\mu\leq 1\).
We perform numerical experiments, as shown in Figure 4. These figures display the training dynamics under gradient descent (GD). They show that our margin reflects the relative distance between data
Figure 4: Training process (gradient descent) of single layer Low-Rank DIQNN with one rank on XOR problem. From left to right: Initialization, after 12, 36, 84, 396 training steps, respectively. The blue region indicates the model’s classification for the blue class, while the red region indicates classification for the red class. And the title of each picture indicates the value of our margin \(\Delta\mu\) at different training steps.
points and the classification boundary. We observe that Low-Rank DIQNN achieves a smaller generalization error by pushing the classification boundary further away from the data points until it converges to an optimal result, which aligns with our theorem. Therefore, our theorem implies that Low-Rank DIQNN reduces the generalization error monotonically until converging to the optimal solution.
#### 4.2.3 General setup
We next extend our investigation to general datasets and training algorithms (e.g., stochastic gradient descent (SGD)).
**Definition 1**.: _(Homogeneous property) We say a network satisfy the homogeneous property for some integer \(L\) if:_
\[\Phi(x_{n},\theta_{t})=\|\theta_{t}\|_{2}^{L}\Phi(x_{n},\frac{\theta_{t}}{\| \theta_{t}\|_{2}})\]
**Lemma 1**.: _(Homogeneous property) Low-Rank DIQNN satisfy the homogeneous property: \(\Phi(x_{n},\theta_{t})=\|\theta_{t}\|_{2}^{L}\Phi(x_{n},\frac{\theta_{t}}{\| \theta_{t}\|_{2}})\) for \(L=2^{l+1}-2\), where \(l\) is the total number of layers of Low-Rank DIQNN. Moreover, we can prove that \(\langle\partial_{\theta}(\Phi(x_{n},\theta_{t}))_{i},\theta_{t}\rangle=L( \Phi(x_{n},\theta_{t}))_{i}\ \forall i\in[k]\)_
**Definition 2**.: _A set of real numbers \(\{y_{1},y_{2},\cdots,y_{m}\}\) satisfies the \(\varepsilon\)-separated condition if \(y_{j}-\min_{i}(y_{i})\geq 1/\varepsilon,\ \forall j\in[m],\,j\neq\arg\min_{i}(y_{i})\)_
**Definition 3**.: _\(s_{nj}=(\Phi(x_{n},\theta_{t}))_{y_{n}}-(\Phi(x_{n},\theta_{t}))_{j}\); \(j_{n}=\arg\max_{i\neq y_{n}}\left(\Phi(x_{n},\theta_{t})\right)_{i}\); \(s_{n}=s_{nj_{n}}\); \(c\) is the condition number of the matrix \(A=\partial_{\theta}S\partial_{\theta}S^{T}-\frac{L^{2}}{\|\theta_{t}\|_{2}^{2}}SS^{T}\), where \(S=(s_{1},\ldots,s_{N})^{T}\) and \(\partial_{\theta}S=(\partial_{\theta}s_{1},\ldots,\partial_{\theta}s_{N})^{T}\); \(\mathbf{v}=(v_{1},\ldots,v_{N})^{T}\), where \(v_{n}=\frac{e^{(\Phi(x_{n},\theta_{t}))_{j_{n}}}}{\sum_{i=1}^{k}e^{(\Phi(x_{n},\theta_{t}))_{i}}}\). It should be noted that all of these values and vectors are time dependent._
**Theorem 2**.: _Training Low-Rank DIQNN under cross-entropy loss and gradient flow algorithm, if the following three assumptions are satisfied in a small time interval \([t-\Delta t,t+\Delta t]\) for some \(\Delta t>0\):_
* \(\|\Phi(x_{n},\theta_{t})\|_{2}=a_{n}\|\theta_{t}\|_{2}^{L}\)__
* _The set_ \(\{s_{nj}\}_{j=1,j\neq y_{n}}^{k}\) _satisfy the_ \(\varepsilon-\)_separated condition for some_ \(\varepsilon>0\) _(_\(\forall n\in[N]\)_), and_ \(\{\|\partial_{\theta}s_{nj}\|_{2}\}_{n\in[N],j\in[k]}\) _is uniformly bounded with some constant_ \(M\)_._
* \(c-1\leq\frac{2m}{\sqrt{1-m^{2}}}\)_, where_ \(m=\cos(\mathbf{a},\mathbf{v})>0\)_,_ \(\mathbf{a}=(1/a_{1},\ldots,1/a_{N})^{T}\)__
_Then at time \(t\) we have:_
\[\frac{d\Delta\mu}{dt}\geq-\frac{M^{2}(k-2)\|\mathbf{a}\|_{1}}{N\|\theta_{t}\| _{2}^{L}}e^{-\frac{1}{\varepsilon}}.\]
This theorem indicates that, throughout the training process, our defined margin increases in an almost monotonic manner.
Figure 5: Numerical experiments on MNIST(Left), FMNIST(Middle) and CIFAR-10(Right). There are two vertical axes: the left axis represents the test accuracy, and the right axis represents our margin \(\Delta\mu\). The blue solid line indicates test accuracy while the red dash line indicates our margin.
We next perform numerical experiments on MNIST, FMNIST, and CIFAR-10 with the SGD training algorithm, as shown in Figure 5. These figures demonstrate that the margin increases almost monotonically on these datasets, consistent with our theorem. Additionally, the dynamics of the margin closely track those of the test accuracy, indicating that the margin reflects the generalization error of Low-Rank DIQNN well. Our theorem thus implies that the generalization error decreases monotonically, i.e., Low-Rank DIQNN approaches a solution with good generalization.
The monotonic increase of our margin during the training process may imply the existence of an implicit regularization. We therefore explore the impact of explicitly adding a margin regularization term to the loss function. The modified loss function is given as follows:
\[\bar{\mathcal{L}}(\theta)=\mathcal{L}(\theta)-\lambda*\Delta\mu, \tag{5}\]
where \(\mathcal{L}(\theta)\) is the original cross-entropy loss, and \(\lambda\) is a hyperparameter that controls the magnitude of the regularization. Numerical results on MNIST, FMNIST, and CIFAR-10 are provided in Figure 6. We find that the convergence speed is indeed enhanced after the margin regularization is added; here, convergence speed is characterized by test accuracy, since one cares most about the generalization capability of the solutions. This indicates that one can obtain a solution with a small generalization error in a short time by explicitly adding margin regularization.
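A minimal sketch of the modified loss (our own code; the default \(\lambda\) is an arbitrary placeholder, not a value from the paper):

```python
import torch
import torch.nn.functional as F

def margin_regularized_loss(logits, labels, lam=0.1):
    """Equation 5: cross-entropy minus lambda times the average normalized margin."""
    idx = torch.arange(len(labels))
    true_score = logits[idx, labels]
    masked = logits.clone()
    masked[idx, labels] = float("-inf")
    margin = ((true_score - masked.max(dim=1).values) / logits.norm(dim=1)).mean()
    return F.cross_entropy(logits, labels) - lam * margin
```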
## 5 Conclusion
In this paper, we present a novel ANN model inspired by the quadratic integration rule of dendrites, namely DIQNN, which significantly outperforms traditional ANNs. Based on observations of the low-rank structure of DIQNN, we propose Low-Rank DIQNN, which retains the high performance at a low computational cost. Moreover, we provide a theoretical framework to analyze the generalization error in classification tasks, and experimental results on MNIST, FMNIST, and CIFAR-10 exhibit the effectiveness of our framework. Since the quadratic integration rule of neurons could be confined to a few brain areas [7], other integration rules may be discovered in future electrophysiological experiments. Our framework could be extended to design brain-inspired ANN models that incorporate these new integration rules. Moreover, in the design of brain-inspired deep neural networks, it might be possible to follow our framework and incorporate different integration rules in different layers, representing neurons in various brain areas. Additionally, how to theoretically analyze such brain-inspired models developed from new integration rules will be an important issue; we expect that our margin theory can be extended to investigate their generalization error in classification tasks.
|
2307.07654 | Aligned and oblique dynamics in recurrent neural networks | The relation between neural activity and behaviorally relevant variables is
at the heart of neuroscience research. When strong, this relation is termed a
neural representation. There is increasing evidence, however, for partial
dissociations between activity in an area and relevant external variables.
While many explanations have been proposed, a theoretical framework for the
relationship between external and internal variables is lacking. Here, we
utilize recurrent neural networks (RNNs) to explore the question of when and
how neural dynamics and the network's output are related from a geometrical
point of view. We find that training RNNs can lead to two dynamical regimes:
dynamics can either be aligned with the directions that generate output
variables, or oblique to them. We show that the choice of readout weight
magnitude before training can serve as a control knob between the regimes,
similar to recent findings in feedforward networks. These regimes are
functionally distinct. Oblique networks are more heterogeneous and suppress
noise in their output directions. They are furthermore more robust to
perturbations along the output directions. Crucially, the oblique regime is
specific to recurrent (but not feedforward) networks, arising from dynamical
stability considerations. Finally, we show that tendencies towards the aligned
or the oblique regime can be dissociated in neural recordings. Altogether, our
results open a new perspective for interpreting neural activity by relating
network dynamics and their output. | Friedrich Schuessler, Francesca Mastrogiuseppe, Srdjan Ostojic, Omri Barak | 2023-07-14T23:14:50Z | http://arxiv.org/abs/2307.07654v3 | # Aligned and oblique dynamics in recurrent neural networks
###### Abstract
The relation between neural activity and behaviorally relevant variables is at the heart of neuroscience research. When strong, this relation is termed a neural representation. There is increasing evidence, however, for partial dissociations between activity in an area and relevant external variables. While many explanations have been proposed, a theoretical framework for the relationship between external and internal variables is lacking. Here, we utilize recurrent neural networks (RNNs) to explore the question of when and how neural dynamics and the network's output are related from a geometrical point of view. We find that RNNs can operate in two regimes: dynamics can either be aligned with the directions that generate output variables, or oblique to them. We show that the magnitude of the readout weights can serve as a control knob between the regimes. Importantly, these regimes are functionally distinct. Oblique networks are more heterogeneous and suppress noise in their output directions. They are furthermore more robust to perturbations along the output directions. Finally, we show that the two regimes can be dissociated in neural recordings. Altogether, our results open a new perspective for interpreting neural activity by relating network dynamics and their output.
## 1 Introduction
The relation between neural activity and behavioral variables is often expressed in terms of neural representations. Sensory input and motor output have been related to the tuning curves of single neurons [21, 24, 41] and, since the advent of large-scale recordings, to population activity [6, 57, 73]. Both input and output can be decoded from population activity [10, 37], even in real-time, closed-loop settings [56, 74]. However, neural activity is often not fully explained by observable behavioral variables. Some components of the unexplained neural activity have been interpreted as random trial-to-trial fluctuations [15], potentially linked to unobserved behavior [40, 65]. Activity may further be due to other ongoing computations not immediately related to behavior, such as preparatory motor activity in a null-space of the motor readout [22, 29]. Finally, neural activity may partially be due to other constraints, for example related to the underlying connectivity [2, 44], the process of learning [56], or stability, i.e., the robustness of the neural dynamics to perturbations [55].
Here we aim for a theoretical understanding of neural representations: What determines how strongly activity and behavioral output variables are related? To this end, we use trained recurrent neural networks (RNNs). In this setting, output variables are determined by the task at hand, and neural activity can be described by its projection onto the principal components (PCs). We show that these networks can operate between two extremes: an "aligned" regime in which the output weights and the largest PCs are strongly correlated. In the second, "oblique" regime, the output weights and the largest PCs are poorly correlated.
What determines the regime a network operates in? We show that quite general considerations lead to a link between the magnitude of output weights and the regime of the network. As a consequence, we can use output magnitude as a control knob for trained RNNs. Indeed, when we train RNN models on different neuroscience tasks, large output weights led to oblique dynamics, and small output weights to aligned dynamics.
We then considered the functional consequences of the two regimes. Building on the concept of feedback loops driving the network dynamics [50, 68], we show that in the aligned regime, the largest PCs and the output are qualitatively similar. In the oblique regime, in contrast, the two may be qualitatively different. This functional decoupling in oblique networks leaves large freedom for the neural dynamics. Different networks with oblique dynamics thus tend to employ different dynamics for the same tasks. Aligned dynamics, in contrast, are much more stereotypical. Furthermore, as a result of how neural dynamics and output are coupled, oblique and aligned networks react differently to perturbations of the neural activity along the output direction. Further, oblique (but not aligned) networks develop an additional negative feedback loop that suppresses output noise. We finally link our theoretical results to experimental data by showing that these different regimes can be identified in neural recordings from several experiments.
Altogether, our work opens a new perspective relating network dynamics and their output, yielding both important insights for modeling brain dynamics as well as experimentally accessible questions about learning and dynamics in the brain.
## 2 Results
### Aligned and oblique population dynamics
We consider an animal performing a task while both behavior and neural activity are recorded. For example, the task might be to produce a periodic motion, described by the output \(z(t)\) of Fig. 1A. For simplicity, we assume that the behavioral output can be decoded linearly from the neural activity [18, 37, 54, 56, 74]. We can thus write
\[z(t)=\sum_{i=1}^{N}w_{\text{out},i}\,x_{i}(t)=\mathbf{w}_{\text{out}}^{T} \mathbf{x}(t)\,, \tag{1}\]
with readout weights \(\mathbf{w}_{\text{out}}\). The activity of neuron \(i\in\{1,\,\dots,\,N\}\) is given by \(x_{i}(t)\), and we refer to the vector \(\mathbf{x}\) as the state of the network.
Neural activity has to generate the output in some subspace of the state space, where each axis represents the activity of one neuron. In the simplest case (Fig. 1B, top), the output is produced along the largest PCs of activity, as shown by the fact that projecting the neural activity \(\mathbf{x}(t)\) onto the largest PCs returns the target oscillation (Fig. 1D, top). We call such dynamics "aligned" because of the alignment between the subspace spanned by the largest PCs and the output vector (red).
There is, however, another possibility. Neural activity may have many other components not directly related to the output, and these other components may even dominate the overall activity. In this case (Fig. 1B-D, bottom), the two largest PCs are not enough to read out the output, and smaller PCs are needed. We call such dynamics "oblique", because the subspace spanned by the largest PCs and the output vector are poorly aligned.
We consider these two possibilities as distinct regimes, noting that intermediate solutions are also possible. The actual regime of neural dynamics has important consequences for how one interprets
Figure 1: Schematic of aligned and oblique dynamics in recurrent neural networks. **A** Output generated by both networks. **B** Neural activity of aligned (top) and oblique (bottom) dynamics, visualized in the space spanned by three neurons. Here, the activity (green) is three-dimensional, but most of the variance is concentrated along the two largest PCs (blue). For aligned dynamics, the output weights (red) are small and lie in the subspace spanned by the largest PCs; they are hence correlated to the activity. For oblique dynamics, the output weights are large and lie outside of the subspace spanned by the largest PCs; they are hence poorly correlated to the activity. **C** Projection of activity onto the two largest PCs. For oblique dynamics, the output weights are orthogonal to the leading PCs. **D** Evolution of PC projections over time. For aligned dynamics, the projection on the PCs resembles the output \(z(t)\), and reconstructing the output from the largest two components is possible. For the oblique dynamics, such reconstruction is not possible, because the projections oscillate much more slowly than the output.
neural recordings. For aligned dynamics, analyzing the dynamics within the largest PCs may lead to insights about the computations generating the output [73]. For oblique dynamics, such an analysis is hampered by the dissociation between the large PCs and the components generating the output [54].
### Magnitude of output weights controls regime
What determines which regime the network operates in? To probe this, we directly use the notion of alignment between output weights and states. For this, we assume that the output weights are known, and compute their correlation \(\rho\) to the states:
\[\rho(t)=\mathbf{w}_{\text{out}}^{T}\mathbf{x}(t)\,/\left(\left\|\mathbf{w}_{ \text{out}}\right\|\left\|\mathbf{x}(t)\right\|\right). \tag{2}\]
For aligned dynamics, the correlation is large in magnitude, corresponding to the alignment between the large PCs of the neural activity and the output weights (Fig. 1B top). In contrast, for oblique dynamics, this correlation is small (Fig. 1B bottom). Note that the concept of correlation can be generalized to accommodate multiple time points and multidimensional output (see Section 4.3).
This allows us to express the output in terms of this correlation:
\[z(t)=\rho(t)\left\|\mathbf{w}_{\text{out}}\right\|\left\|\mathbf{x}(t)\right\|. \tag{3}\]
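The factorization is elementary to check numerically; the following NumPy sketch (our own) verifies Equations 1-3 on random vectors:

```python
import numpy as np

def rho(w_out, x):
    """Equation 2: cosine of the angle between the readout weights and the state."""
    return (w_out @ x) / (np.linalg.norm(w_out) * np.linalg.norm(x))

rng = np.random.default_rng(0)
w_out, x = rng.normal(size=200), rng.normal(size=200)
z = w_out @ x                                   # Equation 1 (linear readout)
assert np.isclose(z, rho(w_out, x) * np.linalg.norm(w_out) * np.linalg.norm(x))  # Equation 3
```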
This factorization into correlation and magnitudes (as quantified by vector norms) has one immediate consequence. We consider a situation in which a network learns to generate output by adapting its dynamics for given, fixed output weights. The magnitude of the output \(\left\|z\right\|\) is set by the definition of the task. We moreover assume that the magnitude of the activity \(\left\|\mathbf{x}\right\|\) is constrained, which we show
Figure 2: Aligned and oblique dynamics for a cycling task [54]. **A** network with two outputs needed to generate either clockwise or anticlockwise rotations, depending on the context (top). Our model RNN (bottom) received a context input pulse, generated dynamics \(\mathbf{x}(t)\) via recurrent weights \(W\), and yielded output as linear projections of the states. We trained the recurrent weights \(W\) with gradient descent. **B-C** Resulting internal dynamics for two networks with small (top) and large (bottom) output weights, corresponding to aligned and oblique dynamics, respectively. **B** Dynamics projected on the first 2 PCs and the remaining direction \(\mathbf{w}_{\text{out},\perp}\) of the first output vector (for \(z_{1}\)). The output weights are amplified to be visible. Arrowheads indicate the direction of the dynamics. Note that for the large output weights, the dynamics in the first two PCs co-rotated, despite the counter-rotating output. **C** Output reconstructed from the largest PCs, with dimension \(D=2\) (full lines) or \(8\) (dotted). Two dimensions already yield a fit with \(R^{2}=0.99\) for aligned dynamics (top), but almost no output for oblique (bottom, \(R^{2}=0.005\), no arrows shown). For the latter, a good fit with \(R^{2}>90\%\) is only reached with \(D=8\).
below is the case for robust RNNs in presence of noise. In this case, correlation and output weight magnitude must compensate each other. For aligned dynamics, we expect small output weights, because these suffice to generate the output via strongly correlated activity. For oblique dynamics, we need large output weights to amplify small correlation (Fig. 1B bottom).
We tested whether output weights can serve as a control knob to select dynamical regimes using an RNN model trained on an abstract version of the cycling task introduced in Ref. [54]. The networks were trained to generate a 2D signal that rotated in the plane spanned by two outputs \(z_{1}(t)\) and \(z_{2}(t)\) (Fig. 2A). An input pulse at the beginning of each trial indicated the desired direction of rotation. We set up two models with either small or large output weights, and trained the recurrent weights of each with gradient descent.
After both models learned the task (Section 4.1), we projected the network activity into a three-dimensional space spanned by the two largest PCs of the dynamics \(\mathbf{x}(t)\). A third direction, \(\mathbf{w}_{\text{out},\perp}\), spanned the remaining part of the first output vector \(\mathbf{w}_{\text{out},1}\). The resulting plots, Fig. 2B, corroborate our hypothesis: Small output weights led to aligned dynamics with large correlation between the largest PCs and the output weights. In contrast, the large output weights of the second network were almost orthogonal, or oblique, to the two leading PCs. Further qualitative differences between the two solutions in terms of the direction of trajectories will be discussed below.
Another way to quantify these regimes is by the ability to reconstruct the output from the large PCs of neural activity, as quantified by the coefficient of determination \(R^{2}\). For the aligned network, the projection on the two largest PCs (Fig. 2C, solid) already led to a good reconstruction. For the oblique
Figure 3: Testing how the magnitude of output weights determines regimes across multiple neuroscience tasks [54, 52, 37, 69]. **A** Correlation and norms of output weights and neural activity. For each task, we initialized networks with small or large output weights (dark vs light orange). The initial norms \(\|\mathbf{w}_{\text{out}}\|\) are indicated by the dashed lines. Learning does not change the norm dramatically. Note that all y-axes are logarithmically scaled. **B** Variance of \(\mathbf{x}\) explained and \(R^{2}\) of reconstructed output for projections of \(\mathbf{x}\) on increasing number of PCs. Results from one example network trained on the cycling task for each condition are shown. **C** Number of PCs necessary to reach 90% of the variance of \(\mathbf{x}(t)\) or of the \(R^{2}\) of the output reconstruction (top/bottom; dotted lines in **B**). In **A, C**, violin plots show the distribution over 5 sample networks, with vertical bars indicating the mean and the extreme values (where visible).
networks, the two largest PCs were not sufficient, and we needed eight dimensions (Fig. 2C, dashed) to obtain a good reconstruction (\(R^{2}>0.9\)). In contrast to the differences in these fits, the neural dynamics themselves were much more similar between the networks. Specifically, \(90\%\) of the variance was explained by 4 and 5 dimensions for the aligned and oblique networks, respectively.
Can we use the output weights to induce aligned or oblique dynamics in more general settings? We trained RNN models with small or large initial output weights on five different neuroscience tasks. All weights (input, recurrent, and output) were trained using the Adam algorithm (Section 4.1). After training, we measured the three quantities of Eq. (3): the magnitudes of neural activity and output weights, and the correlation between the two. The results in Fig. 3A show that across tasks, initialization with large output weights led to oblique solutions (small correlation), and initialization with small output weights to aligned solutions (large correlation). The output weight magnitude generally increased during training if it was initially small, but the initial difference in scale largely remained.
In Fig. 3B-C, we adopted the perspective of Fig. 2C and quantified how well we can reconstruct the output from a projection of \(\mathbf{x}\) onto its largest \(D\) PCs. As expected, for an increasing number \(D\), both the variance of \(\mathbf{x}\) explained and the quality of the output reconstruction increased (Fig. 3B). The manner in which both quantities increased, however, differed between the two regimes. While the variance explained increased similarly in both cases, the quality of the reconstruction increased much more slowly for the model with large output weights. We quantified this phenomenon by comparing the dimensions at which either the variance of \(\mathbf{x}\) explained or \(R^{2}\) reaches \(90\%\), denoted by \(D_{x,90}\) and \(D_{\mathrm{fit},90}\), respectively.
In Fig. 3C, we compare \(D_{x,90}\) and \(D_{\mathrm{fit},90}\) across multiple networks and tasks. Generally, larger output weights led to larger numbers for both. However, the number of PCs necessary to obtain a good reconstruction increased much more drastically for large output weights than the dimension of the data. Thus, the output was less well represented by the large PCs of the dynamics for networks with large output weights, in accordance with our notion of oblique dynamics.
Importantly, reaching the aligned and oblique regimes relies on ensuring robust and stable solutions, which we achieve by adding noise to the dynamics during training. This yields a similar magnitude of neural activity \(\|\mathbf{x}\|\) across networks and tasks (Fig. 3A). We show in Methods, Section 4.6, that learning in simple, noise-free conditions with large output weights can lead to solutions not captured by either aligned or oblique dynamics; those solutions, however, are unstable. Furthermore, we observed that some of the qualitative differences between aligned and oblique dynamics are less pronounced if we initialized networks with small recurrent weights and initially decaying dynamics (Fig. 18).
Figure 4: Variability between learners for the two regimes. **A-B** Examples of networks trained on the cycling task with small (aligned) or large (oblique) output weights. The top left and central networks, respectively, are the same as those plotted in Fig. 2. **C** Dissimilarity between solutions across different tasks. Aligned solutions (dark) were less dissimilar to each other than oblique ones (light). The violin plots show the distribution over all possible different pairs for five samples (mean and extrema as bars).
### Neural dynamics decouple from output for the oblique regime
What are the functional consequences of the two regimes? A hint might be seen in an intriguing qualitative difference between the aligned and oblique solutions for the cycling task in Fig. 2. For the aligned network, the two trajectories for the two different contexts (green and purple) are counter-rotating (Fig. 2B, top). This agrees with the output, which also counter-rotates as demanded by the task (Fig. 2A). In contrast, the neural activity of the oblique network _co-rotates_ in the leading two PCs (Fig. 2B, bottom). This is despite the counter-rotating output, since this network also solves the task (not shown). This also indicates why reconstructing the output from the leading two PCs is not possible (Fig. 2C). Taken together, aligned and oblique dynamics differ in the coupling between leading neural dynamics and output. For aligned dynamics, the two are strongly coupled. For oblique dynamics, the two decouple qualitatively.
Such a decoupling for oblique, but not aligned, dynamics leads to a prediction regarding the universality of solutions [35, 45, 72]. For aligned dynamics, the coupling implies that the internal dynamics are strongly constrained by the task. We thus expect different learners to converge to similar solutions, even if their initial connectivity is random and unstructured. In Fig. 4A, we show the dynamics of three randomly initialized aligned networks trained on the cycling task, projected onto the three leading PCs. Apart from global rotations, the dynamics in the three networks are very similar.
For oblique dynamics, the task-defined output exerts weaker constraints on the internal dynamics. Any variability experienced during learning can potentially build up, and eventually create qualitatively different solutions. Three examples of oblique networks solving the cycling tasks indeed show visibly different dynamics (Fig. 4B). Further analysis shows that the models also differ in the frequency components in the leading dynamics (Fig. 17).
The degree of variability between learners depends on the task. The differences observable in the PC projections were most striking for the cycling task. For the flipflop task, for example, solutions were generally noisier in the oblique regime than in the aligned one, but did not have observable qualitative differences in either regime (Fig. 22). We quantified the difference between models for the different neuroscience tasks considered before. To compare different neural dynamics, we used a dissimilarity measure invariant under rotation (Section 4.4) [75]. The results are shown in Fig. 4C. Two observations stand out: First, across tasks, the dissimilarity was higher for networks in the oblique regime than for those in the aligned regime. Second, both the overall dissimilarity and the discrepancy between regimes differed strongly between tasks. The largest dissimilarity (for oblique solutions) and the largest discrepancy between regimes were found for the cycling task. The smallest discrepancy between regimes was found for the flipflop task. Such differences between tasks are consistent with the differences in the range of possible solutions for different tasks, as reported in Refs. [35, 72].
What are the underlying mechanisms for the qualitative decoupling in oblique, but not aligned, networks? For aligned dynamics, we saw that the small output weights demand large activity to generate the output. In other words, the activity along the largest PCs must be coupled to the output. For oblique dynamics, this constraint is not present, which opens the possibility for small components outside the largest PCs to generate the output. If this is the case, we have a decoupling, such as the observed co-rotation in the cycling task, and the possible variability between solutions. We discuss this point in more detail in Section 4.9.
In the following two sections, we will explore how the decoupling between neural dynamics and output for oblique, but not aligned, dynamics influences the response to perturbations and the effects of noise during learning.
### Differences in response to perturbations
To understand how networks respond to external perturbations and internal noise requires some understanding how dynamics are generated. Dynamics of trained networks are mostly generated internally, through recurrent interactions. In robust networks, these internally generated dynamics are a prominent part of the largest PCs (among input-driven components; Sections 4.6 and 4.8). Internally-generated dynamics are sustained by positive feedback loops, through which neurons excite each other. Those loops are low-dimensional, with activity along a few directions of the dynamics being amplified and fed back along the same directions. This results in dynamics being driven by effective feedback loops along the
largest PCs (Fig. 5A). As shown above, the largest PCs can either be aligned, or not aligned, with the output weights. This leads to predictions about how aligned and oblique networks differentiate in their responses to perturbations along different directions.
We apply perturbations to the neural activity at a single point in time: \(\mathbf{x}(t)\) evolves undisturbed until time \(t_{p}\). At that point, it is shifted to \(\mathbf{x}(t_{p})+\Delta\mathbf{x}\). After the perturbation, we let the network evolve freely and compare this evolution to that of an unperturbed copy. Such a perturbation mimics a very short optogenetic perturbation applied to a selected neural population [13, 42].
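In code, the protocol amounts to running a perturbed and an unperturbed copy of the network side by side. A sketch (our own; the `step` callable standing in for one integration step of the network is hypothetical):

```python
import numpy as np

def perturbation_response(step, x0, t_p, delta_x, T):
    """Simulate free and perturbed copies of an autonomous network; `step`
    maps the state x(t) to x(t+1), and delta_x is added once, at time t_p."""
    x_free, x_pert = x0.copy(), x0.copy()
    free, pert = [], []
    for t in range(T):
        if t == t_p:
            x_pert = x_pert + delta_x
        x_free, x_pert = step(x_free), step(x_pert)
        free.append(x_free)
        pert.append(x_pert)
    return np.array(free), np.array(pert)
```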
Our intuition about feedback loops suggests that networks respond strongly to a perturbation that is aligned with the directions contributing to the feedback loop, but weakly to a perturbation that is orthogonal to them. In particular, if a perturbation is applied along the output weights, aligned and oblique dynamics should dissociate, with a strong disruption of dynamics for aligned, but not for oblique dynamics (Fig. 5A).
To test this, we compare the response to perturbations along the output direction and along the largest PCs. In Fig. 5B, we show the output after such perturbations for an aligned (top) and an oblique network (bottom) trained on the cycling task. The time point and amplitude are the same for both directions and networks. For each network and type of perturbation, there is an immediate deflection and a long-term response. For both networks, perturbing along the PCs (blue) leads to a long-term phase shift. Only in the aligned network, however, perturbation along the output direction (red) leads to a visible long-term response. In the oblique network, the amplitude of the immediate response is larger, but the long-term response is _smaller_. Our results for the oblique network, but not for the aligned, agree with simulations of networks generating EMG data from the cycling experiments [58].
To quantify the relative long-term susceptibility of networks to perturbations along output weights or PCs, we sampled from different times \(t_{p}\) and different directions in the 2D subspaces spanned either by the two output vectors or the two largest PCs. For each perturbation, we measured the loss of the
Figure 5: Perturbations differentially affect dynamics in the aligned and oblique regimes. **A** Cartoon illustrating the relation between perturbations along output weights or PCs and the feedback loops driving autonomous dynamics. **B** Output after perturbation for aligned (top) and oblique (bottom) networks trained on the cycling task. The unperturbed network (light red line) yields a sine wave along the first output direction \(z_{1}\). At \(t_{p}=9\), a perturbation with amplitude \(\|\Delta\mathbf{x}\|=34\) is applied along the output weights (dashed red) or the first PC (dashed-dotted blue). The perturbations only differ in the directions applied. While the immediate response for the oblique network to a perturbation along the output weights is much larger, \(z_{1}(t_{p})\approx 80\), the long-term dynamics yield the same output as the unperturbed network. See also Fig. 19 for more details. **C** Loss for perturbations of different amplitudes for the two networks in **B**. Lines and shades are means and std devs over different perturbation times \(t_{p}\in[5,15]\) and random directions spanned by the output weights (red) or the two largest PCs (blue). The loss is the mean squared error between output and target for \(t>20\). Gray dot indicates example in **B**. **D** Relative susceptibility of networks to perturbation directions for different tasks and dynamical regimes. We measured the area under the curve (AUC) of loss over perturbation amplitude for perturbations along the output weights or two largest PCs. The relative susceptibility is the ratio between the two AUCs. The example in **C** is indicated by gray triangles.
perturbed networks on the original task (excluding the five time points after \(t_{p}\) that correspond to the immediate deflection after the perturbation). Fig. 5C shows that the aligned network is almost equally susceptible to perturbations along the PCs and the output weights. In contrast, the oblique network is much more susceptible to perturbations along the PCs.
We repeated this analysis for oblique and aligned networks trained on five different tasks. We computed the area under the curve (AUC) for the two loss profiles in Fig. 5C. We then defined the "relative susceptibility" as the ratio \(\text{AUC}_{\mathbf{w}_{\text{out}}}/\text{AUC}_{\text{PC}}\) (Fig. 5D). For aligned networks (red), the relative susceptibility was close to 1, indicating similarly strong responses to both types of perturbations. For oblique networks (yellow), it was much smaller than 1, indicating that long-term responses to perturbations along the output direction were weaker than those to perturbations along the PCs.
### Noise suppression for oblique dynamics
In the oblique regime, the output weights are large. To produce the correct output (and not a too large one), the large PCs of the dynamics are almost orthogonal to the output weights. The large output weights, however, pose a robustness problem: Small noise in the direction of the output weights is also amplified at the level of the readout. We show that learning leads to a slow process of sculpting noise statistics to avoid this effect (Fig. 11). Specifically, a negative feedback loop is generated that suppresses fluctuations along the output direction (Fig. 6A, Fig. 10). Because the positive feedback loop that gives rise to the large PCs is mostly orthogonal to the output direction, it remains unaffected by this additional negative feedback loop. A detailed analysis of how learning is affected by noise shows that, for large output weights, the network first learns a solution that is not robust to noise. This solution is then transformed to increasingly stable and oblique dynamics over longer time scales (Sections 4.7 and 4.8).
To illustrate the effect of the negative feedback loop, we consider the fluctuations around trial averages. We take a collection of states \(\mathbf{x}(t)\) and then subtract the task-conditioned averages \(\bar{\mathbf{x}}(t)\) to compute \(\delta\mathbf{x}(t)=\mathbf{x}(t)-\bar{\mathbf{x}}(t)\). We then project \(\delta\mathbf{x}(t)\) onto three different direction categories: the largest PCs of
Figure 6: Noise suppression along the output direction in the oblique regime. **A** A cartoon of the feedback loop structure for aligned (top) and oblique (bottom) dynamics. The latter develops a negative feedback loop which suppresses fluctuations along the output direction. **B** Comparing the distribution of variance of mean-subtracted activity along different directions for a network trained on the cycling task (see Fig. 20): PCs of trial-averaged activity (blue), readout (red), and random (grey) directions. For the PCs and output weights, we sampled 100 normalized combinations of either the first two PCs or the two output vectors. For the random directions, we drew 1000 random vectors in the full, \(N\)-dimensional space. **C** Noise compression across tasks as measured by the ratio between variance along output and random directions. The dashed line indicates neither compression nor expansion. Black markers indicate the values for the two examples in **B-C**. Note the log-scales in **B-C**.
the averaged data \(\bar{\mathbf{x}}(t)\), the output directions, or randomly drawn directions.
How strongly the activity fluctuates along each direction is quantified by the variance of the projections (Fig. 6B). For both aligned and oblique dynamics, the variance is much larger along the PCs than along random directions. This is not necessarily expected, because the PCA was performed on the _averaged_ activity, hence without the fluctuations. Instead, it is a dynamical effect: the same positive feedback that generates the autonomous dynamics also amplifies the noise (Section 4.8).
The two network regimes, however, dissociate when considering the variance along the output direction. For aligned dynamics, there is no negative feedback loop, and \(\mathbf{w}_{\text{out}}\) is correlated with the PCs. The variance along the output direction is hence similar to that along the PCs, and larger than along random directions. For oblique dynamics, the negative feedback loop suppresses the fluctuations along the output direction, so that they become weaker than along random directions.
In Fig. 6C, we quantify this dissociation across different tasks. We measured the ratio between variance along output and random directions. Aligned networks have a ratio much larger than one, indicating that the fluctuations along the output direction are increased due to the autonomous dynamics along the PCs. In contrast, oblique networks have a ratio smaller than one for all tasks, which indicates noise compression along the output.
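The variance ratio used in Fig. 6C can be estimated as in the sketch below, assuming `dx` holds the mean-subtracted activity \(\delta\mathbf{x}(t)\) as a (samples x N) array and `w_out` one output vector:

```python
import numpy as np

def noise_compression_ratio(dx, w_out, n_random=1000, seed=0):
    """Variance along the output direction relative to random directions;
    values below 1 indicate noise compression along the output."""
    rng = np.random.default_rng(seed)
    u = w_out / np.linalg.norm(w_out)
    var_out = np.var(dx @ u)
    dirs = rng.standard_normal((dx.shape[1], n_random))
    dirs /= np.linalg.norm(dirs, axis=0, keepdims=True)
    var_rand = np.mean(np.var(dx @ dirs, axis=0))
    return var_out / var_rand
```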
### Aligned and oblique dynamics in experimental settings
For the cycling task, we observed that solutions were qualitatively different for the two regimes, with trajectories either counter- or co-rotating (Fig. 2B). Interestingly, the experimental results of Russo et al. [54] matched the oblique, but not the aligned dynamics. The authors observed co-rotating dynamics in the leading PCs of motor cortex activity despite counter-rotating activity of simultaneously recorded muscle activity. Here we test whether our theory can help to more clearly and quantitatively distinguish between the two regimes in experimental settings.
In typical experimental settings, we do not have direct access to output weights. We can, however,
Figure 7: Quantifying aligned and oblique dynamics in experimental data [11, 20, 23, 47, 54]. **A** Cartoon of the two types of experimental data considered. In motor control experiments (top), we first needed to obtain the output weights \(\mathbf{w}_{\text{out}}\) via linear regression. We then computed the correlation \(\rho\) and the reconstruction dimension \(D_{\text{fit},90}\), i.e. the number of PCs of \(\mathbf{x}\) necessary to obtain a coefficient of determination \(R^{2}>90\%\). In BCI experiments (bottom), the output (cursor velocity) is generated from neural activity \(\mathbf{x}(t)\) via output weights \(\mathbf{w}_{\text{out}}\) defined by the experimenter. This allowed us to directly compute correlation and fitting dimension. **B** Correlation \(\rho\) (top) and relative fitting dimension \(D_{\text{fit},90}/D_{x,90}\) (bottom) for a number of publicly available data sets. The cycling task data (purple) were trial-conditioned averages, the BCI experiments (red) and NLB tasks (yellow) single-trial data. Results for the full data sets are shown as dots, violin plots indicate results for 20 random subsets of 25% of the data points in each data set (see small text below x-axis).
approximate these by fitting neural data to simultaneously recorded behavioral output, such as hand velocity in a motor control experiment (Fig. 7A top). Following the model above, where the output is a weighted average of the states, we reconstruct the output from the neural activity with linear regression. To quantify the dynamical regime, we then compute the correlation \(\rho\) between the weights from fitting and the neural data. Additionally, we can also quantify the regime by the "relative fitting dimension" \(D_{\text{fit},90}/D_{x,90}\), where \(D_{\text{fit},90}\) is the number of PCs necessary to recover the output and \(D_{x,90}\) the number of PCs necessary to represent 90% of the variance of the neural data. We computed both the correlation and the relative fitting dimension for different publicly available data sets (Fig. 7B). For details, see Section 4.5.
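The relative fitting dimension follows directly from its definition. Below is a minimal sketch with scikit-learn, where `X` is the neural activity (time x neurons) and `y` the behavioral output; the exact estimator of \(\rho\) is specified in Section 4.5 and omitted here:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def relative_fitting_dimension(X, y, r2_target=0.90, var_target=0.90):
    """D_fit,90 / D_x,90 from neural activity X and behavioral output y."""
    pca = PCA().fit(X)
    scores = pca.transform(X)
    # D_x,90: number of PCs capturing 90% of the variance of the neural data
    cum_var = np.cumsum(pca.explained_variance_ratio_)
    d_x = int(np.searchsorted(cum_var, var_target)) + 1
    # D_fit,90: number of leading PCs needed to reconstruct y with R^2 > 90%
    d_fit = scores.shape[1]
    for k in range(1, scores.shape[1] + 1):
        reg = LinearRegression().fit(scores[:, :k], y)
        if reg.score(scores[:, :k], y) > r2_target:
            d_fit = k
            break
    return d_fit / d_x
```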
We started with data sets from two monkeys performing the cycling task [54]. The data contained motor cortex activity, hand movement, and EMG from the arms, all averaged over multiple trials of the same condition. In Fig. 7B, we show results for reconstructing the hand velocity. The correlation was small \(\rho\in\{0.10,0.12\}\). To obtain a good reconstruction, we needed a substantial fraction of the dimension of the neural data: The relative fitting dimension was \(D_{\text{fit},90}/D_{x,90}\in\{0.57,0.48\}\). Both quantities are consistent with oblique dynamics, as suggested by the qualitative agreement between oblique RNN solutions and experimental data. Fitting other behavioral outputs yielded similar results (Fig. 23). Our results are in agreement with previous studies, showing that the best decoding directions for EMG are only weakly correlated with the leading PCs of motor cortex activity [59].
We also analyzed data made available through the Neural Latents Benchmark (NLB) [47]. In two different tasks, monkeys needed to perform movements along a screen. In a maze task, the monkeys were trained to follow a trajectory through a maze with their hand [9]. In a random target task (RTT), the monkeys had to reach for a randomly generated target position on a screen, with a successive target point generated once the previous one was reached [36]. In both cases, we reconstructed the output from neural activity on single trials. The correlation was slightly smaller than for the cycling task, \(\rho\in\{0.07,0.08\}\), and the relative fitting dimension was higher, \(D_{\text{fit},90}/D_{x,90}\in\{0.65,0.62\}\). These values indicate that the neural dynamics for these two tasks are also oblique.
Do we always observe oblique dynamics in experimental data? One setting in which this may not be the case is brain-computer interface (BCI) experiments [56]. In these experiments, monkeys were trained to control a cursor on a screen via activity read out from their motor cortex (Fig. 7A bottom). The output weights were set by the experimenter, so no fitting was needed. Importantly, the output weights were typically chosen to be spanned by the largest PCs (the "neural manifold"), suggesting aligned dynamics. We tested this hypothesis on three example data sets [11, 20, 23]. We obtained higher correlation values, \(\rho=0.2\) (Fig. 7B). The relative fitting dimension was much smaller than for the non-BCI data sets, especially for the two largest data sets, where \(D_{\text{fit},90}/D_{x,90}\in\{0.03,0.04\}\). Both measures are thus consistent with the hypothesis that the neural dynamics arising in BCI experiments are more aligned than those in non-BCI settings.
The results above show that our measures can indeed be used to distinguish dynamical regimes in neural data. It would be interesting to test the differences between BCI and non-BCI data on larger data sets, and different experiments with different dimensions of neural data [19, 64]. Finally, we hypothesize that larger and out-of-manifold weights in the BCI setting, if learnable [43], would lead to qualitatively different solutions. One interesting scenario would be to train a monkey on the cycling task [54] but using a BCI either with small, inside-manifold or large, outside-manifold weights.
## 3 Discussion
We analyzed the relationship between neural dynamics and behavior, asking to which extent a network's output is represented in its dynamics. We identified two different limiting regimes: aligned dynamics, in which most of the neural activity in a network is related to its output, and oblique dynamics, where the output is only a small modulation on top of the dominating dynamics. We demonstrated that these two regimes have different functional implications. We also showed how they might arise through learning, and how they relate to experimental findings.
Linking neural activity to external variables is one of the core challenges of neuroscience [24]. In most cases, however, such links are far from perfect. The activity of single neurons can be related in a nonlinear, mixed manner to task variables [49]. Even when considering populations of neurons, a large fraction of neural activity is not easily explained by external variables [1]. Various explanations have been proposed for this disconnect. In the visual cortex, activity has been shown to be related to "irrelevant" external variables, such as body movements [65]. Follow-up work showed that, in primates, some of these effects can be explained by the induced changes on retinal images [71], but this study still explained only half of the neural variability. An alternative explanation hinges on the redundancy of the neural code, which allows for "null spaces" that activity can visit without affecting behavior [28, 29, 51]. While such accounts can explain why irrelevant activity is _allowed_, they do not explain whether and why it is _necessary_. Here, we showed that merely ensuring stability of neural dynamics can lead to the oblique regime with a decoupling between output and dynamics.
We showed theoretically and in simulations that, when training recurrent neural networks, the magnitude of output weights is a central parameter that controls which regime is reached. This finding is vital for the use of RNNs as hypothesis generators [3, 67, 73], where it is often implicitly assumed that training results in universal solutions [35] (even though biases in the distribution of solutions have been discussed [70]). Here, we show that a specific control knob allows to move between qualitatively different solutions of the same task, thereby expanding the control over the hypothesis space [45, 72]. Note in particular that the default initialization in standard learning frameworks has large output weights, which results in oblique dynamics (or unstable solutions if training without noise, see Methods, Section 4.6).
The role of the magnitude of output weights is also discussed in machine learning settings, where different learning regimes have been found [8, 25, 26, 39]. In particular, "lazy" solutions were observed for large output weights in feed-forward networks. We show in Methods, Section 4.6, that these are unstable for recurrent networks, and are replaced in a second phase of learning by oblique solutions. It would be interesting to see whether an analog to the oblique regime also exists in feed-forward settings. On a broader scale, which learning regime is relevant when modeling biological learning is an open question that is only beginning to be explored [14].
The particular control knob we studied has an analog in the biological circuit - the synaptic weights. We can thus use experimental data to study whether the brain might rely on oblique or aligned dynamics. Existing experimental work has partially addressed this question. In particular, the work by Russo et al. [54] has been a major inspiration for our study. Our results share some of the key findings from that paper - the importance of stability leading to "untangled" dynamics [66] and a dissociation between hidden dynamics and output. In addition, we suggest a specific mechanism to reach oblique dynamics - via large output weights. Furthermore, we characterize the aligned and oblique regimes along experimentally accessible axes.
We see three avenues for exploring our results experimentally. First, it would be interesting to test whether simultaneous measurements of neural dynamics and muscle activity, such as in Ref. [54], support the idea of noise compression along the output direction. We make a suggestion on how to test this in Fig. 6C. Second, we show how the dynamical regimes dissociate under perturbations along specific directions. Experiments along these lines have recently become possible [7, 13, 53]. Future work is left to combine our model with biological constraints that induce additional effects during perturbations, e.g., through non-normal synaptic connectivity [5, 30, 34, 42]. Third, our work connects to the setting of brain-computer interfaces [20, 56, 74]. In this case, the output weights are directly controlled by the experimenter, hence allowing to test the effect of output weights on network dynamics.
Overall, our results provide an explanation for the plethora of relationships between neural activity and external variables.
## Acknowledgements
FS was supported by the Deutsche Forschungsgemeinschaft (German Research Foundation) under Germany's Excellence Strategy - EXC 2002/1 "Science of Intelligence" - project no. 390523135. OB was supported by the Israeli Science Foundation (grant 1442/21) and an HFSP research grant (RGP0017/2021). SO was supported by the program "Ecoles Universitaires de Recherche" ANR-17-EURE-0017.
## Author contributions
Conceptualization, F.S., F.M., S.O., and O.B.; Methodology, F.S.; Formal Analysis, F.S.; Investigation, F.S.; Writing - Original Draft, F.S., F.M., S.O., and O.B.; Writing - Review & Editing, F.S., F.M., S.O., and O.B.; Supervision, F.M., S.O., O.B..
## Declaration of Interests
The authors declare no competing interests.
## Data availability
The code for all simulations and generation of figures is available on github, [https://github.com/frschu/aligned_oblique_in_rnns/](https://github.com/frschu/aligned_oblique_in_rnns/). |
2301.08831 | Explainable Multilayer Graph Neural Network for Cancer Gene Prediction | The identification of cancer genes is a critical yet challenging problem in
cancer genomics research. Existing computational methods, including deep graph
neural networks, fail to exploit the multilayered gene-gene interactions or
provide limited explanation for their predictions. These methods are restricted
to a single biological network, which cannot capture the full complexity of
tumorigenesis. Models trained on different biological networks often yield
different and even opposite cancer gene predictions, hindering their
trustworthy adaptation. Here, we introduce an Explainable Multilayer Graph
Neural Network (EMGNN) approach to identify cancer genes by leveraging multiple
genegene interaction networks and pan-cancer multi-omics data. Unlike
conventional graph learning on a single biological network, EMGNN uses a
multilayered graph neural network to learn from multiple biological networks
for accurate cancer gene prediction. Our method consistently outperforms all
existing methods, with an average 7.15% improvement in area under the
precision-recall curve (AUPR) over the current state-of-the-art method.
Importantly, EMGNN integrated multiple graphs to prioritize newly predicted
cancer genes with conflicting predictions from single biological networks. For
each prediction, EMGNN provided valuable biological insights via both
model-level feature importance explanations and molecular-level gene set
enrichment analysis. Overall, EMGNN offers a powerful new paradigm of graph
learning through modeling the multilayered topological gene relationships and
provides a valuable tool for cancer genomics research. | Michail Chatzianastasis, Michalis Vazirgiannis, Zijun Zhang | 2023-01-20T23:57:12Z | http://arxiv.org/abs/2301.08831v2 | # Explainable Multilayer Graph Neural Network for Cancer Gene Prediction
###### Abstract
The identification of cancer genes is a critical, yet challenging problem in cancer genomics research. Recently, several computational methods have been developed to address this issue, including deep neural networks. However, these methods fail to exploit the multilayered gene-gene interactions and provide little to no explanation for their predictions. Results: In this study, we propose an Explainable Multilayer Graph Neural Network (EMGNN) approach to identify cancer genes by leveraging multiple gene-gene interaction networks and multi-omics data. Compared to conventional graph learning methods, EMGNN learned complementary information in multiple graphs to accurately predict cancer genes. Our method consistently outperforms existing approaches while providing valuable biological insights into its predictions. We further release our novel cancer gene predictions and connect them with known cancer patterns, aiming to accelerate the progress of cancer research.
## 1 Introduction
Understanding the precise function and disease pathogenicity of a gene is dependent on the target gene's properties, as well as its interaction partners in a disease-specific context [1; 2; 3]. High-throughput experiments, such as whole-genome sequencing and RNA sequencing of bulk and single-cell assays, have enabled unbiased profiling of genetic and molecular properties for all genes across the genome. Experimental methods to probe both physical [4; 5] and genetic interactions [6; 7] provide valuable insights of the functional relevance between a pair of genes. Based on these data, computational methods have been developed to predict gene functions for understudied and uncharacterized genes by combining the gene's properties with its network connectivity patterns [8; 9]. However, the prediction of gene pathogenicity in disease-specific contexts is challenging. Functional assays describing the gene and its gene network are relevant to disease only to the degree to which the measured property correlates with disease physiology [10], and our understanding of complex disease physiology remains poor, even for diseases with large sample sizes and many data modalities, such as cancer [11].
As the completeness of known cancer genes is questioned, predicting novel cancer genes remains a crucial task in cancer genomics research. These genes, which are often mutated or over-expressed in cancer cells, play a key role in the development and progression of the disease [12]. Large-scale cancer sequencing consortia projects have generated genomic, molecular and histological profiling data for a variety of cancer types, providing an information-rich resource for identifying novel cancer genes. Building on the hypothesis that different genetic and molecular modalities provide complementary information to cancer gene pathogenicity, a pioneering work EMOGI [13] innovatively modeled the multi-omics features of cancer genes in Protein-Protein interaction (PPI) networks to predict novel cancer genes. To
address the challenge of functional properties irrelevant to cancer disease physiology, EMOGI featurized each gene by a vector summarizing multi-omics data levels across various cancer types in The Cancer Genome Atlas (TCGA) [14]. EMOGI then modeled the gene-gene interactions in pre-defined generic PPI networks using a Graph Convolutional Network (GCN), trained on a set of high-confidence cancer and non-cancer genes. EMOGI identified \(165\) novel cancer genes that lack recurrent alterations but interact with known cancer genes.
A major limitation of EMOGI is that it did not address whether the pre-defined graph topology and connectivity patterns are relevant to disease physiology. EMOGI employed six different pre-defined graphs, including genetic-focused networks such as Multinet [15], and generic protein interaction networks such as STRING-db [16]. Among EMOGI models trained on different PPI networks, we found an average standard deviation of \(25.2\%\) in unlabelled cancer gene predictions, demonstrating that the predicted novel cancer genes were different when using different PPI networks. For these genes, a trustworthy adaptation of the method's output is challenging when conflicting prediction results are present. Because cancer disease physiology is complex, using a single predefined graph to represent the gene-gene relationships cannot fully capture its molecular landscape; therefore, more sophisticated, data-driven methods are needed to decipher the gene relationships in disease-specific contexts.
To alleviate this issue, we propose a novel graph learning framework, EMGNN (Explainable Multilayer Graph Neural Network), for predicting gene pathogenicity based on multiple input graphs. EMGNN maximizes the concordance of functional gene relationships with the unknown disease physiology by jointly modeling a multilayered graph structure. We evaluated the performance of EMGNN in predicting cancer genes using the same compiled datasets as EMOGI and showed that our proposed method achieves state-of-the-art performance by combining information from all six PPI networks. Furthermore, we explained EMGNN's prediction by both model-level integrated gradients and molecular-level gene pathways. By examining novel cancer genes predicted by EMGNN, we demonstrated novel biological insights by leveraging the complementary information in different types of biological networks. Overall, EMGNN provides a powerful new paradigm of graph learning through modeling the multilayered topological gene relationships. Our key contributions can be summarized as follows:
* We propose an Explainable Multilayer Graph Neural Network (EMGNN) approach to identify cancer genes by leveraging multiple protein-protein interaction networks and multi-omics data.
* Our method demonstrates superior performance compared to existing approaches as quantified by a significant increase in the AUPRC across six PPI networks. The average improvement in performance is 7.15% over the current state-of-the-art method, EMOGI.
* We identify the most important multi-omics features for the prediction of each cancer gene, as well as the most influential PPI networks, using model interpretation strategies.
* EMGNN identifies novel cancer genes by integrating multiple PPI networks, providing a unified and robust prediction for novel cancer gene discovery. Our code is publicly available on GitHub: [https://github.com/zhanglab-aim/EMGNN](https://github.com/zhanglab-aim/EMGNN)
## 2 Materials and Methods
### Datasets
We trained the proposed model with six PPI Networks: CPDB [17], Multinet [15], PCNet [18], STRING-db [16], Iref [19] and its newest version Iref(2015). As node features, we used mutation, copy number, DNA methylation and gene expression data of \(29,446\) samples from TCGA [14], from \(16\) different cancer types.
### Multilayer Graph Neural Network
**GNNs.** Let a graph be denoted by \(G=(V,E)\), where \(V=\{v_{1},\ldots,v_{N}\}\) is the set of vertices and \(E\) is the set of edges. Let \(\mathbf{A}\in\mathbb{R}^{N\times N}\) denote the adjacency matrix, \(\mathbf{X}=[x_{1},\ldots,x_{N}]^{T}\in\mathbb{R}^{N\times d_{I}}\) denote the node features and \(\mathbf{Y}=[y_{1},\ldots,y_{N}]^{T}\in\mathbb{N}^{N}\) denote the label vector. Graph neural networks have been successfully applied to many graph-structured problems [20; 21], as they can effectively leverage both the network structure and node features. They typically employ a message-passing scheme, which consists of the following two steps. In the first step, every node aggregates the representations of its neighbors using a permutation-invariant function. In the second step, each node updates its own representation by combining the aggregated message from the neighbors with its own previous
representation,
\[m_{u}^{(l)} =\text{Aggregate}^{(l)}\left(\left\{\mathbf{h}_{v}^{(l-1)}:v\in \mathcal{N}(u)\right\}\right), \tag{1}\] \[\mathbf{h}_{u}^{(l)} =\text{Combine}^{(l)}\left(\mathbf{h}_{u}^{(l-1)},\mathbf{m}_{u} ^{(l)}\right), \tag{2}\]
where \(h_{u}^{(l)}\) represents the hidden representation of node \(u\) at the \(l^{\text{th}}\) layer of the GNN architecture. Many choices for the _Aggregate_ and _Combine_ functions have been proposed in recent years, as they have a huge impact on the representational power of the model [22]. Among the most popular architectures are Graph Convolutional Networks (GCNs) [23] and Graph Attention Networks (GAT) [24]. In GCN, each node aggregates the feature vectors of its neighbors with fixed weights inversely proportional to the central and neighboring node degrees, \(\mathbf{h}_{u}^{\prime}=\mathbf{W}^{\top}\sum_{v\in\mathcal{N}(u)\cup\{u\}}\frac{\mathbf{h}_{v}}{\sqrt{\hat{d}_{v}\hat{d}_{u}}}\), with \(\hat{d}_{i}=1+\sum_{j\in\mathcal{N}(i)}1\). In GAT, each node aggregates the messages from its neighbors using learnable weighted scores: \(\mathbf{h}_{u}^{\prime}=\alpha_{u,u}\mathbf{W}\mathbf{h}_{u}+\sum_{v\in\mathcal{N}(u)}\alpha_{u,v}\mathbf{W}\mathbf{h}_{v}\), where the attention coefficients \(\alpha_{u,v}\) are computed as
\(\alpha_{u,v}=\frac{\exp\left(\text{LeakyReLU}\left(\mathbf{a}^{\top}\left[\mathbf{W}\mathbf{h}_{u}\,\|\,\mathbf{W}\mathbf{h}_{v}\right]\right)\right)}{\sum_{k\in\mathcal{N}(u)\cup\{u\}}\exp\left(\text{LeakyReLU}\left(\mathbf{a}^{\top}\left[\mathbf{W}\mathbf{h}_{u}\,\|\,\mathbf{W}\mathbf{h}_{k}\right]\right)\right)}\), where \(\mathbf{a}\) is a learnable attention vector and \(\|\) denotes concatenation.
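As an illustration, both layer types are available in PyTorch Geometric. The following minimal sketch (our assumption of library and layer count, not the authors' released code) stacks three such layers, matching the depth used later for the first-stage GNN:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, GATConv

class StackedGNN(torch.nn.Module):
    """Three message-passing layers; `conv` selects GCN or GAT aggregation."""
    def __init__(self, in_dim, hidden=64, n_layers=3, conv=GCNConv):
        super().__init__()
        dims = [in_dim] + [hidden] * n_layers
        self.convs = torch.nn.ModuleList(
            [conv(dims[i], dims[i + 1]) for i in range(n_layers)]
        )

    def forward(self, x, edge_index):
        for layer in self.convs:
            x = F.relu(layer(x, edge_index))  # aggregate + combine in one call
        return x
```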
**Multilayer Graph Construction.** Extending graph neural networks to handle multiple networks is not a trivial task, as they are designed to operate on a single graph. Next, we describe our method, which can accurately learn node representations using graph neural networks, from multilayer graphs.
Let \(N\) be the total number of genes, each associated with a feature vector \(x_{j}\in R^{d}\). Let also \(K\) be the number of gene-gene interaction networks. We represent each graph \(G^{(i)}\) with an adjacency matrix \(A^{(i)}\in\mathbf{Z}^{N_{i}\times N_{i}}\) and feature matrix \(X^{(i)}\in R^{N_{i}\times d}\), where \(N_{i}\) is the number of genes in the \(i\)-th network. Since some genes are not present in all the graphs, we have \(N_{i}\leq N\), \(i\in\{0,1,\dots,K-1\}\).
In the first step, for each graph \(G^{(i)}\), we apply a graph neural network \(f_{1}\) that performs message passing and updates the node representation matrix \(H^{(i)}=f_{1}(X^{(i)},A^{(i)})\), for \(i\in\{0,1,\dots,K-1\}\). We set \(f_{1}\) to be shared across all graphs. This design allows us to handle a variable number of graphs while keeping the number of trainable parameters fixed. Next, we construct a meta graph \(G_{meta,j}\) for each gene/node \(j\), where the copies of gene \(j\) across all graphs are connected to a meta node \(v_{j}\). We initialize the features of the meta node \(v_{j}\) with the initial features of the corresponding gene \(j\).
Figure 1: An illustration of our proposed Explainable Multilayer Graph Neural Network (EMGNN) approach. The model consists of three main steps: (1) apply a shared GNN to update the node representation matrix of each input graph, (2) construct a meta graph for each gene, where the same genes across all graphs are connected to a meta node, and update the representation of the meta nodes with a second GNN (Meta GNN), and (3) use a multi-layer perceptron to predict the class of each meta node.
In the next step, we apply a second GNN \(f_{2}\) to update the representation of the meta node \(v_{j}\), \(H_{meta,j}=f_{2}(X_{meta,j},A_{meta,j})\), where \(X_{meta,j}\) contains the features of gene \(j\) from all the networks and \(A_{meta,j}\) is the adjacency matrix of the meta graph \(G_{meta,j}\), \(j\in\{0,1,\dots,N\}\). We set \(f_{2}\) to be shared across all genes. Therefore, in this stage, the model combines and exchanges information between the different networks. Finally, a multi-layer perceptron \(f_{3}\) predicts the class of the meta node \(j\), \(\hat{y_{j}}=f_{3}(H_{meta,j})\). An illustration of the proposed model can be found in Figure 1.
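The three steps can be wired up as in the sketch below. This is a hypothetical implementation outline, not the authors' released code: `graphs` holds one `(x_i, edge_index_i, gene_idx_i)` triple per network, with `gene_idx_i` mapping local nodes to global gene indices, and the meta-node features are assumed to already share the width of \(f_{1}\)'s output.

```python
import torch

def emgnn_forward(f1, f2, f3, graphs, meta_x):
    """Step 1: shared GNN per graph; step 2: meta GNN; step 3: MLP classifier."""
    h_graphs = [f1(x, ei) for x, ei, _ in graphs]
    srcs, dsts, feats = [], [], [meta_x]      # meta nodes occupy indices 0..N-1
    offset = meta_x.shape[0]
    for h, (_, _, gene_idx) in zip(h_graphs, graphs):
        srcs.append(torch.arange(h.shape[0]) + offset)  # copy of gene j in graph i
        dsts.append(gene_idx)                           # -> its meta node v_j
        feats.append(h)
        offset += h.shape[0]
    edge_index = torch.stack([torch.cat(srcs), torch.cat(dsts)])  # directed edges
    h_meta = f2(torch.cat(feats, dim=0), edge_index)[: meta_x.shape[0]]
    return f3(h_meta)
```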
**Experimental Details.** To ensure a fair comparison with previous work, we adopted the experimental setup used in EMOGI. Specifically, we randomly divided the data for each testing graph into a 75% training set and a 25% testing set using stratified sampling, ensuring that the proportion of known cancer and non-cancer genes in both sets was equal. Given that our model uses multiple graphs as input for each experiment, we retained the test nodes of one graph as the test set, and added 90% of the remaining nodes from the other graphs to the training set and 10% to the validation set. The model was trained for \(2000\) epochs, using the cross-entropy loss function and the Adam optimizer [25] with a learning rate of \(0.001\). The initial GNN had three layers with a hidden dimension of \(64\), while the meta-GNN had a single layer with a hidden dimension of \(64\).
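A sketch of the corresponding setup, assuming scikit-learn for the stratified split and PyTorch for optimization; `labeled_idx`, `labels`, and `model` are hypothetical placeholders:

```python
import torch
from sklearn.model_selection import train_test_split

# 75/25 stratified split of the labeled genes of the test graph
train_idx, test_idx = train_test_split(
    labeled_idx, test_size=0.25, stratify=labels, random_state=0
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()  # trained for 2000 epochs
```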
### Model interpretation
Captum is a tool for understanding and interpreting the decision-making process of machine learning models [26]. It offers a range of interpretability methods that allow users to analyze the predictions made by their models and understand how different input variables contribute to these predictions. We used the integrated gradient (IG) module in Captum to assign an importance score to each input feature. IG interprets the decisions of neural networks by estimating the contribution of each input feature to the final prediction. The integrated gradient approximates the integral of gradients of the model's output with respect to the inputs along a straight line path from a specific baseline input to the current input. The baseline input is typically chosen to be a neutral or meaningless input, such as an all-zero vector or random noise. Formally, let \(F(x)\) be the function of a neural network, where \(x\) is the input and \(\hat{x}\) is the baseline input. The integrated gradients for input \(x\) and baseline \(\hat{x}\) along the \(i\)-th dimension are defined as: \(\texttt{IntegratedGrads}_{i}(x)=(x_{i}-\hat{x}_{i})\int_{\alpha=0}^{1}\frac{\partial F(\hat{x}+\alpha(x-\hat{x}))}{\partial x_{i}}d\alpha\), where the integral is taken along the straight line path from \(\hat{x}\) to \(x\) and \(\partial F(x)/\partial x_{i}\) is the gradient of \(F(x)\) along the \(i\)-th dimension.
However, the traditional integrated gradient method, which is designed for single-input models, is not directly applicable to graph neural networks, as they have two distinct inputs, namely node features and network connectivity. This necessitates a modified approach for computing integrated gradients in graph neural networks that considers both inputs. To this end, we decompose the problem into two parts: identifying the most important node features and, separately, identifying the most crucial edges in the network. Since we predict the class of each gene from the meta-node representations, which combine all the graphs, we apply the interpretation analysis only to the meta-nodes.
**Node feature interpretation analysis.** We analyze the contribution of node features to the predictions of the GNN by using the traditional integrated gradient method while keeping the edges in the network fixed. Specifically, we interpolate between the current node features input and a baseline input where the node features are zero: \(Attribution_{x_{i}}=(x_{i}-\hat{x}_{i})\int_{\alpha=0}^{1}\frac{\partial F(\hat{x}+\alpha(x-\hat{x}),A)}{\partial x_{i}}d\alpha\), where \(A\) are the adjacency matrices of the graphs. Since the prediction for each gene is also based on the features of surrounding genes in the graphs, we extract attribution values for the \(k\)-hop neighbor genes as well, where \(k\) is equal to the number of message-passing layers in the first GNN. Therefore, the output of the attribution method for each node \(u\) is a matrix \(\mathbf{K}^{(u)}\in\mathbb{R}^{N\times d}\). Each entry \(K_{ij}\) of the matrix corresponds to the attribution of the feature \(j\) of node \(i\) to the target node \(u\). From this matrix, we select the row that corresponds to the feature attributions of the corresponding meta node.
**Edge feature interpretation analysis.** To analyze the contribution of edges in the meta-graph to the predictions of the GNN, we use the integrated gradient method for the edges while keeping the node features fixed. Specifically, we interpolate between the current edge input and a baseline input where the weights of the edges are zero: \(Attribution_{e_{i}}=\int_{\alpha=0}^{1}\frac{\partial F(X,A_{\alpha})}{\partial w _{e_{i}}}d\alpha\), where \(A_{\alpha}\) corresponds to the graphs with the edge weights equal to \(\alpha\). We further normalize the attribution values of each meta node by dividing them by their maximum value, resulting in a range of [0, 1] for each edge. This explanation technique allows us to understand which edges in the meta-graph are crucial for the model's decision-making process, and therefore which input PPI networks are important for each gene prediction.
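A sketch of the node-feature variant with Captum is shown below; the model wrapper, the target class index, and the loop over interpolated inputs are our assumptions for adapting IG to a two-input GNN, not Captum built-ins. `model(x, edge_index)` is assumed to return (N, num_classes) logits over meta nodes.

```python
import torch
from captum.attr import IntegratedGradients

def node_feature_attributions(model, x, edge_index, target_node, cls=1):
    """Returns the (N, d) matrix K^(u) of feature contributions to one gene."""
    def forward_fn(x_batched):
        # Captum passes a batch of inputs interpolated between baseline and x;
        # the GNN is evaluated once per interpolation step, edges held fixed.
        outs = [model(xb, edge_index)[target_node, cls] for xb in x_batched]
        return torch.stack(outs)

    ig = IntegratedGradients(forward_fn)
    attr = ig.attribute(x.unsqueeze(0), baselines=torch.zeros_like(x).unsqueeze(0))
    return attr.squeeze(0)
```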
### Novel cancer gene discovery
We applied the trained EMGNN model that combined all six individual PPI networks to predict novel cancer genes among the \(n=14019\) unlabeled genes. We ranked these genes by their predicted cancer gene probability to identify potential novel predicted cancer genes (NPCG) in this study. For each unlabeled gene, we also applied EMOGI models trained on individual PPI networks to predict the probability of it being a cancer gene. After an inner join across all EMOGI models on the six individual PPI networks, we analyzed the results of \(n=6591\) unlabelled genes whose predictions were available for all models.
### Gene set enrichment analysis
To understand the biological mechanisms of EMGNN's cancer gene prediction, we employed gene set enrichment analysis (GSEA) to analyze the functional enrichment of important gene features in curated cancer pathway annotations. Specifically, to determine the importance of neighboring gene nodes, we aggregated the maximum feature importance of each node using Captum's feature explanation results. Genes with zero importance were excluded in this analysis as they did not contribute to the prediction of this target gene. We then ranked the neighboring gene nodes based on their importance, and used this ranked gene list as input for GSEA. The enrichment p-value and multiple testing corrected FDR were computed by GSEA python package [27] against cancer hallmark gene sets [28].
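A minimal sketch of this analysis with the gseapy package is given below; `node_importance` holds hypothetical values mapping gene symbols to aggregated attributions (zero-importance genes removed), and the gene-set library name is an assumption:

```python
import gseapy as gp
import pandas as pd

node_importance = {"COL5A1": 0.91, "FN1": 0.42}  # hypothetical example values
rnk = pd.Series(node_importance, name="score").sort_values(ascending=False)
res = gp.prerank(
    rnk=rnk.reset_index(),             # two columns: gene, score
    gene_sets="MSigDB_Hallmark_2020",  # cancer hallmark gene sets
    outdir=None,                       # keep results in memory
    seed=0,
)
print(res.res2d.head())                # enriched terms with NES and FDR
```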
## 3 Results
### Overview of EMGNN framework
To model multilayered graph structures, we developed a graph neural network model EMGNN (Figure 1). The input for EMGNN is a feature vector for each gene and multiple graphs, where each graph describes gene-gene relationships by an adjacency matrix. EMGNN first updates the graph representation within each graph layer by a shared message passing operator. As different graphs have distinct connectivity patterns, node representations will be updated differently in each layer. The shared message passing operator allows the model to incorporate new graphs while keeping the model's trainable parameters fixed.
EMGNN then introduces a meta graph layer that combines the layer-wise node representations of the same genes across the graphs, with a second GNN, referred to as the Meta GNN (Figure 1). Meta GNN enables directed message-passing to combine and exchange information from the different networks to the meta nodes, which will contain the final representations of the genes. A multi-layer perceptron (MLP) takes as input the meta node representations and performs the final node classification task.
Figure 2: Test AUPRC values of EMGNN(GCN) with respect to the number of input PPI networks. Each line represents a test set of cancer and non-cancer genes held-out in a specific PPI network. When there was only one input network, the corresponding PPI network was used to train an EMGNN model; then, the remaining PPI networks were added subsequently in the order of CPDB, Multinet, PCNet, STRING-db, Iref, and Iref(2015), excluding the first PPI network.
Notably, our EMGNN model is a generalized, multilayered form of single graph GNN. In the special cases where multilayered graphs have identical adjacency matrices or only one graph is provided as input, EMGNN reduces to a standard single graph GNN, where the shared message passing operators are standard GNN operators, and the meta GNN reduces to an identity operation. Thus, EMGNN generalizes single graph GNN by capitalizing on the complementary information stored in multiple graphs.
### Multilayered graph improves EMGNN performance
We applied EMGNN to predict cancer genes using a previously compiled dataset (Methods). Briefly, this dataset consisted of a total of 887 labeled cancer genes, 7753 non-cancer genes and 14019 unlabeled genes. Six PPI networks were binarized to keep only high-confidence edges based on previously reported criteria.
We demonstrated that the integration of multiple graphs leads to an improvement in the performance of EMGNN. Specifically, we trained EMGNN models and evaluated the testing performance with respect to different numbers of PPI networks. As shown in Figure 2, the performance increased for each of the six PPI-network derived testing datasets, as the number of input networks increased. For the CPDB, Multinet, STRING-db, and IRefIndex networks, the incorporation of more graphs steadily increased the performance without reaching a plateau. For PCNet, EMGNN achieved the best testing performance by combining five networks. This behavior is largely consistent with previously reported benchmarking results, which suggest that performance scales with network size [18].
EMGNN trained by incorporating all six graphs achieved state-of-the-art performance for all test sets (Table 1). Each test set was an independent set of held-out labeled cancer and non-cancer genes from each network, and we kept the set identical to previous reports (Methods). For most test sets, EMGNN outperformed EMOGI by a margin of over 5% AUPRC, with the largest gain of 11.1% in performance observed in the older version of Iref. The smallest gain was observed in PCNet, likely because PCNet is already an expert-assembled graph combining the information from the other five graphs [18]. Nevertheless, for the PCNet test set, EMGNN combining six graphs is significantly more accurate than EMOGI using PCNet (p-value=0.012, t-test). This aligns with our hypothesis that incorporating information from multiple networks leads to enhanced predictive power for gene pathogenicity prediction.
| **Method** | **CPDB** | **Multinet** | **PCNet** | **STRING-db** | **Iref** | **Iref(2015)** |
| --- | --- | --- | --- | --- | --- | --- |
| Random | 0.27 | 0.18 | 0.14 | 0.24 | 0.17 | 0.28 |
| 20/20+ | 0.66 | 0.62 | 0.55 | 0.67 | 0.61 | 0.65 |
| MutSigCV | 0.38 | 0.33 | 0.27 | 0.41 | 0.35 | 0.43 |
| HotNet2 diffusion | 0.62 | 0.56 | 0.48 | 0.50 | 0.45 | 0.65 |
| DeepWalk+features RF | 0.74 | 0.71 | 0.72 | 0.71 | 0.66 | 0.71 |
| PageRank | 0.59 | 0.53 | 0.54 | 0.44 | 0.42 | 0.62 |
| GCN without omics | 0.57 | 0.53 | 0.47 | 0.39 | 0.37 | 0.64 |
| DeepWalk + SVM | 0.73 | 0.51 | 0.63 | 0.52 | 0.62 | 0.66 |
| RF | 0.60 | 0.59 | 0.51 | 0.61 | 0.54 | 0.62 |
| MLP | 0.58 | 0.63 | 0.47 | 0.63 | 0.55 | 0.64 |
| EMOGI [13] | 0.74 | 0.74 | 0.68 | 0.76 | 0.67 | 0.75 |
| EMOGI [29] | 0.775 ± 0.003 | 0.732 ± 0.003 | 0.745 ± 0.002 | 0.763 ± 0.003 | 0.701 ± 0.004 | 0.757 ± 0.001 |
| **EMGNN(GCN)** | **0.809 ± 0.006** | **0.854 ± 0.007** | **0.761 ± 0.001** | **0.856 ± 0.002** | **0.822 ± 0.002** | **0.800 ± 0.010** |
| **EMGNN(GAT)** | 0.776 ± 0.018 | 0.796 ± 0.034 | 0.730 ± 0.031 | 0.805 ± 0.307 | 0.739 ± 0.033 | 0.773 ± 0.049 |

Table 1: Test AUPRC values and standard deviations across different PPI networks over five different runs.
| **Method** | **CPDB** | **Multinet** | **PCNet** | **STRING-db** | **Iref** | **Iref(2015)** |
| --- | --- | --- | --- | --- | --- | --- |
| **Random Features** | 0.703 ± 0.001 | 0.727 ± 0.002 | 0.615 ± 0.009 | 0.745 ± 0.002 | 0.674 ± 0.001 | 0.697 ± 0.005 |
| **All-one Features** | 0.726 ± 0.002 | 0.769 ± 0.001 | 0.657 ± 0.010 | 0.779 ± 0.010 | 0.710 ± 0.015 | 0.725 ± 0.013 |
| **Edge Removal (0.2)** | 0.800 ± 0.007 | 0.841 ± 0.016 | 0.746 ± 0.017 | 0.841 ± 0.009 | 0.796 ± 0.005 | 0.786 ± 0.011 |
| **Edge Removal (0.4)** | 0.795 ± 0.004 | 0.834 ± 0.009 | 0.743 ± 0.003 | 0.828 ± 0.004 | 0.790 ± 0.012 | 0.802 ± 0.006 |

Table 2: Test AUPRC and standard deviation of EMGNN(GCN) for different input perturbation methods across three different runs.
### Evaluating the performance of different GNN architectures and graph ablations.
Next, we performed an ablation study to assess the performance of our EMGNN model using different GNN architectures and input perturbations. Specifically, we compared the Graph Convolutional Network (GCN) and the Graph Attention Network (GAT) [24]. Using the best identified GNN architecture, we further perturbed the input by randomly permuting the node features or setting all node features to one, as well as by randomly removing 20% and 40% of the edges in the input graphs.
As shown in Table 1, the GNN architecture played an essential role in EMGNN testing performance. We observed that GCN is the best-performing GNN architecture on all the datasets. Our findings demonstrated that the choice of GNN architecture has a significant impact on the performance of our model. Therefore, EMGNN refers to EMGNN(GCN) throughout the paper unless specified otherwise.
We then sought to answer the two following research questions: How robust is EMGNN to node feature and graph structure perturbations? Are the node features and the graph structure both crucial for the prediction of cancer genes? Biologically, the EMGNN node features and edges are determined using high-throughput assays that inherently have measurement errors. To this end, we examined the performance of EMGNN with GCN architecture under different types of input perturbations (Table 2). Specifically, we removed the node features and instead added uninformative vectors (random or constant).
We found that EMGNN decreased in performance for both random and all-one node features, suggesting that the node features derived from TCGA consortia were informative and highly relevant to cancer physiology and cancer gene pathogenicity. For edge ablations, we randomly removed 20% and 40% of the edges of each PPI network. The removal of edges slightly decreased EMGNN performance. This demonstrates the robustness to connectivity perturbations gained by jointly modeling a multilayered graph topology. Overall, EMGNN effectively leveraged both node features and edges to achieve accurate predictions.
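The edge ablation amounts to dropping a random subset of edges before training; a minimal sketch, assuming edges are stored once per undirected pair:

```python
import torch

def drop_edges(edge_index, p=0.2, seed=0):
    """Randomly remove a fraction p of the edges of one PPI network."""
    g = torch.Generator().manual_seed(seed)
    keep = torch.rand(edge_index.shape[1], generator=g) >= p
    return edge_index[:, keep]
```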
### Explaining EMGNN reveals biological insights of cancer gene pathogenicity
Explainable and trustworthy models are essential for understanding the biological mechanisms of known cancer genes and facilitating the discovery of novel cancer genes. Therefore, we applied Captum [26] to compute node and edge attributions for EMGNN (Methods). We focused our analysis on the relative contributions from each PPI network to the known cancer gene predictions, as measured by the importance of meta edges (Figure 3A). To examine whether some PPI networks were statistically more important contributors than others, we performed an ANOVA test. We observed a significant difference in contributions from the different PPI networks (P-value=\(1.2e-65\)), suggesting certain PPI networks were more informative for cancer gene prediction. Among pairwise comparisons, Iref(2015) achieved a significantly higher contribution than Iref (P-value=\(1.3e-50\), t-test), which consolidated our observation that the incorporation of other PPI networks substantially improved upon the model using only Iref. We examined the relative
Figure 3: Explanation of each PPI network's contribution to cancer gene predictions. A) Overall distribution of meta-edge feature importance for all known cancer genes across six PPI networks. Meta-edge feature importance was normalized to 1 (see Methods for details). B) Representative PPI network contributions in known cancer genes and novel predicted cancer genes. CSMD3 and RSPO3 are known cancer genes; COL5A1 and MSLN are novel predicted cancer genes. A contribution of zero suggests the gene was missing in the corresponding PPI network.
contributions for two known cancer genes (CSMD3 and RSPO3) and two novel predicted cancer genes (COL5A1 and MSLN; see Figure 3B). Notably, different genes were predicted as cancer genes leveraging evidence from different PPI networks. For example, Multinet contributed to CSMD3, but not for MSLN; while COL5A1 combined all six PPI networks. Thus, EMGNN successfully learned complementary information from the connectivity patterns in each layer of the multilayer graph, as shown by the similar overall contributions from individual graphs and gene-specific variations.
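The statistical comparison described above can be reproduced as in the sketch below, where `edge_imp` is a hypothetical mapping from PPI network name to the array of meta-edge importances over all known cancer genes:

```python
from scipy.stats import f_oneway, ttest_ind

_, p_anova = f_oneway(*edge_imp.values())  # differences across the six networks
_, p_pair = ttest_ind(edge_imp["Iref(2015)"], edge_imp["Iref"])  # pairwise test
```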
As EMGNN's node features were derived from multi-omic cancer datasets, we next assessed whether certain types of omic data were informative to cancer gene predictions. Our model explanation results on node features indicated that single nucleotide variation (SNV) features were significantly less informative than other types of features, which is consistent with previous reports that copy number aberrations (CNAs) were more detrimental to cancer progression than SNVs [30]. In contrast, we observed that DNA methylation was significantly more important for known cancer gene prediction than other omics data (P-value<0.01 for all three pairwise t-tests of other omics against DNA methylation). We further examined the node feature importance of the same four genes as in Figure 3B. The omics feature contributions were highly gene-specific. In general, DNA methylation had a moderate contribution across all four genes (Figure 4B). As DNA methylation is a reversible epigenetic modification, this may suggest potential novel therapeutic targets for certain cancer genes mediated by DNA methylation [31].
### EMGNN identifies novel cancer genes by integrating multilayer graphs
To gain biological insights for precision oncology, we applied the trained EMGNN model to predict cancer genes among the unlabeled genes. We discovered a non-trivial number of unlabeled genes with a high predicted probability of being cancer genes (n=108 genes with over 95% predicted cancer gene probability), consistent with the previous report that machine learning predictions can augment the completeness of cancer gene catalogs [13].
Compared to models trained using single PPI networks, EMGNN achieved an accurate and unified novel cancer gene prediction by integrating multilayer graphs. Indeed, we observed a substantial divergence among the predictions of EMOGI models trained on individual PPI networks, with an average of \(29\%\) and \(63\%\) difference between the highest and lowest predictions for the top-100 and all unlabelled nodes, respectively. Furthermore, we found an average standard deviation of 25.2% in unlabelled cancer gene predictions of EMOGI, demonstrating the predicted novel cancer genes were different when using different PPI networks.
As a case study, we analyzed the predictions of an NPCG, COL5A1 (Figure 5). For this gene, the EMOGI model trained on STRINGdb predicted a non-cancer gene with high confidence, the EMOGI models trained on IRefIndex, CPDB and PCNet predicted a cancer gene with high confidence, while the models trained on Multinet and IRefIndex2015 predicted a cancer gene with moderate likelihood. The fact that STRINGdb was the best performer among models trained on individual PPI networks further complicated the decision of whether COL5A1 should be considered a cancer or non-cancer gene. This level of divergence in predictions hinders a trustworthy adaptation of model predictions in clinical and pragmatic settings. In contrast, EMGNN integrated the information from each individual PPI network in a data-driven approach and provided more accurate, unified predictions of cancer genes (Table 1). EMGNN predicted
Figure 4: Explanations of multi-omic node feature importance in cancer gene predictions. A) Overall distribution of node feature importance grouped by omic feature types, including single-nucleotide variants (MF), DNA methylation (METH), gene expression (GE) and copy number aberrations (CNA), for known cancer genes. B) Detailed node feature importance for the four genes analyzed in Figure 3B. X-axis labels were color-coded to match the omic feature types in panel A. Individual tumor types were coded according to TCGA study abbreviations [14].
COL5A1 as a cancer gene with high confidence (Figure 5A). Importantly, we also found that all individual PPI networks contributed similarly to the final EMGNN prediction (Figure 3B).
Leveraging the explanation results of node contributions for COL5A1 (Figure 4), we further illustrated the potential biological mechanisms of COL5A1 by a gene set enrichment analysis (Methods). We discovered that three cancer hallmark gene sets, i.e. apical junction, coagulation, and complement system (part of the innate immune system), were significantly enriched in COL5A1 neighboring genes (Figure 5B). These neighboring genes' contributions were important for EMGNN's prediction of whether COL5A1 is a cancer gene. For example, the apical junction cancer hallmark gene set contained genes annotated to function in cell-cell adhesion among epithelial cells, many of which were enriched among the top contributors (Figure 5C). This was further supported by mouse studies where loss of COL5A1 triggered skin cancer [32], a type of cancer with a strong epithelial cell origin. Therefore, we demonstrated how molecular mechanisms of novel predicted cancer genes can be interpreted and discovered using the explainable EMGNN framework.
## 4 Discussion
The biomedical and biological domain contains a wealth of information, which is often represented and analyzed using graph structures to reveal relationships and patterns in complex data sets. Indeed, various gene interaction and protein-protein interaction networks describe the functional relationships of genes and proteins, respectively. These gene-gene relationships are often described in generic cellular contexts and/or by integrating different, heterogeneous sources of information. Therefore, a single graph often struggles to best match disease-specific conditions, and different graph construction and integration methods render distinct predictive powers [33; 34]. Substantial efforts have been devoted to developing integrated [33; 34] and tissue-specific graphs [35]. Here, we took a complementary approach and developed a new graph learning framework, EMGNN, to jointly model multilayered graphs. Applying EMGNN to predict cancer driver genes demonstrated its superior performance over previous graph neural networks trained on single graphs. We also employed model explanation techniques to assess both node and edge feature importance. Our results showed that EMGNN leveraged the complementary information from different graph layers and omics features to predict cancer genes. Importantly, we found that cancer genes that have conflicting predictions based on different single graphs, or are missed by previous state-of-the-art predictors, can be recovered effectively using EMGNN. This demonstrates the robustness of EMGNN predictions gained by jointly modeling the multilayered graphs.
The EMGNN model can be viewed as a data-driven, gradient-enabled integration method for multiple graphs. By providing multiple PPI networks as input, EMGNN learns from the different connectivity patterns that represent complementary information to predict cancer genes. Since all PPI networks share the same type of nodes and edges, EMGNN currently integrates homogeneous, undirected graphs; however, the EMGNN framework can be extended to various types of graphs and to perform cross-data modality integration. In biology and biomedicine, hierarchical graphs
Figure 5: EMGNN predicts COL5A1 as a novel cancer gene and reveals biological insights. A) A comparison of predicted cancer gene probability from EMGNN and EMOGI models trained on single PPI networks. As a probability of 50% equalled random guessing between cancer vs non-cancer gene, the bar heights reflected the prediction confidence. B) Three cancer hallmark genes were significantly enriched in the important neighboring genes of COL5A1 as revealed by interpreting EMGNN model. C) Enrichment of apical junction cancer hallmark genes in COL5A1 neighboring genes. The neighboring genes of COL5A1 were ranked by their EMGNN node importance on the x-axis, where each blue bar represented a gene in the apical junction geneset. A strong left-shifted curve demonstrates enrichment of apical junction geneset in the top important genes to predict COL5A1 as a cancer gene.
and heterogeneous graphs are particularly prevalent, such as Gene Ontology[36]. For example, biomedical data is often organized in hierarchical levels, starting with genes and molecules, moving on to cells and tissues, and finally reaching the level of individual patients and populations. Therefore, an interesting future direction is to apply EMGNN to model multiple graphs with more heterogeneous node and edge types, and with more complex inter-graph structures.
DNA methylation and gene expression aberrations are major contributors to EMGNN's cancer gene predictions, emerging as important features in the explanations of its omics node features. Unlike single nucleotide variations and copy number alterations, which introduce permanent mutations to DNA, epigenetic and transcriptomic alterations of cancer genes are potentially reversible by targeted therapies. Model explanations for EMGNN revealed molecular aberrations that may be leveraged for the screening and re-purposing of drugs, especially for previously less well-characterized, novel predicted cancer genes. This highlights the importance of model explanations for gaining biological and biomedical insights when developing deep learning models to predict gene pathogenicity.
In summary, we present a novel deep learning approach for the prediction of cancer genes by integrating multiple gene-gene interaction networks. By applying graph neural networks to each individual network and then combining the representations of the same genes across networks through a meta-graph, our model is able to effectively integrate information from multiple sources. We demonstrate the effectiveness of our approach through experiments on benchmark datasets, achieving state-of-the-art performance. Furthermore, the ability to interpret the model's decision-making process through the use of integrated gradients allows for a better understanding of the contribution of different multi-omic features and PPI networks. Overall, our approach presents a promising avenue for the prediction of novel cancer genes.
## Acknowledgements
We thank all members of the Zhang laboratory for helpful discussions. This work was performed at the high-performance computing resources at Cedars-Sinai Medical Center and the Simons Foundation.
## Funding
This work has been supported by an institutional commitment fund from Cedars-Sinai Medical Center to ZZ.
|
2310.02491 | DON-LSTM: Multi-Resolution Learning with DeepONets and Long Short-Term
Memory Neural Networks | Deep operator networks (DeepONets, DONs) offer a distinct advantage over
traditional neural networks in their ability to be trained on multi-resolution
data. This property becomes especially relevant in real-world scenarios where
high-resolution measurements are difficult to obtain, while low-resolution data
is more readily available. Nevertheless, DeepONets alone often struggle to
capture and maintain dependencies over long sequences compared to other
state-of-the-art algorithms. We propose a novel architecture, named DON-LSTM,
which extends the DeepONet with a long short-term memory network (LSTM).
Combining these two architectures, we equip the network with explicit
mechanisms to leverage multi-resolution data, as well as capture temporal
dependencies in long sequences. We test our method on long-time-evolution
modeling of multiple non-linear systems and show that the proposed
multi-resolution DON-LSTM achieves significantly lower generalization error and
requires fewer high-resolution samples compared to its vanilla counterparts. | Katarzyna Michałowska, Somdatta Goswami, George Em Karniadakis, Signe Riemer-Sørensen | 2023-10-03T23:43:16Z | http://arxiv.org/abs/2310.02491v1 | # DON-LSTM: Multi-Resolution Learning with DeepONets and Long Short-Term Memory Neural Networks
###### Abstract
Deep operator networks (DeepONets, DONs) offer a distinct advantage over traditional neural networks in their ability to be trained on multi-resolution data. This property becomes especially relevant in real-world scenarios where high-resolution measurements are difficult to obtain, while low-resolution data is more readily available. Nevertheless, DeepONets alone often struggle to capture and maintain dependencies over long sequences compared to other state-of-the-art algorithms. We propose a novel architecture, named DON-LSTM, which extends the DeepONet with a long short-term memory network (LSTM). Combining these two architectures, we equip the network with explicit mechanisms to leverage multi-resolution data, as well as capture temporal dependencies in long sequences. We test our method on long-time-evolution modeling of multiple non-linear systems and show that the proposed multi-resolution DON-LSTM achieves significantly lower generalization error and requires fewer high-resolution samples compared to its vanilla counterparts.
## Introduction
Modeling the temporal evolution of dynamical systems is of paramount importance across various scientific and engineering domains, including physics, biology, climate science, and industrial predictive maintenance. Creating such models enables future predictions and process optimization, and provides insights into the underlying mechanisms governing these systems. Nevertheless, obtaining precise and comprehensive data for modeling poses a challenge in practical contexts. Real-world scenarios bring along the issue of data limitations, where data is typically more readily available at lower resolutions: historical measurements are stored with reduced granularity, vast quantities of legacy data originate from an outdated technology, or the costs associated with obtaining precise measurements on a large scale are prohibitive.
Multi-resolution learning algorithms serve as a solution for effectively integrating information from data collected over diverse temporal and spatial scales. Beyond addressing data availability constraints, multi-resolution methods can offer a reduction of computational costs during training, initially leveraging only lower-resolution data to capture general dynamics, and subsequently fine-tuning the models on high-resolution data.
To this end, a wide array of approaches has been proposed to tackle multi-resolution, multi-scale, or discretization-invariant learning. Among these, a natural framework that has gained prominence is neural operators, which are neural networks that learn mappings between function spaces, e.g., DeepONet, Fourier neural operator and others (Lu et al., 2021; Li et al., 2020; Cao et al., 2023; Wang and Golland, 2022; Ronneberger et al., 2015; Seidman et al., 2022). Alternative methods for multi-resolution learning include encoder-decoder-based architectures (Ong et al., 2022; Aboutalebi et al., 2022), graph and message-passing networks (Equer et al., 2023; Liu et al., 2021), and combinations of the above (Yang et al., 2022). In a related vein, another body of research deals with conceptually similar multi-fidelity learning in DeepONets, which tackles the problem of combining data of varying quality, where high-quality samples are sparse (Lu et al., 2022; Howard et al., 2022; De et al., 2023).
In this work, we approach multi-resolution learning through the innate discretization-invariance property of DeepONets. We further propose a new architecture, DON-LSTM, which extends the architecture of the DeepONet with a long short-term memory network (LSTM), in order to capture temporal patterns in sequential outputs of the DeepONet. The main purpose of combining these two architectures is to leverage data of different resolutions in training, effectively increasing the feasible training set, as well as to assist the modeling of time-dependent evolution through explicit mechanisms of the LSTM.
The remainder of this paper is structured as follows. First, we formulate the learning problem at hand. Next, we describe our proposed architecture and the training procedure. Finally, we present and discuss our experimental results on four non-linear partial differential equations (PDEs) and low- and high-resolution training sets of various sizes. Our main findings show that our proposed multi-resolution DON-LSTM achieves lower generalization error than its single-resolution counterparts, which require a much larger high-resolution sample size in order to achieve similar precision.
## 1 Problem statement
In this study, we learn the operator \(\mathcal{N}\) that defines the evolution of a system over time starting from any given initial condition, i.e.:
\[\mathcal{N}:u(x,t=0)\to G(u(y)), \tag{1}\]
where \(u(x,t=0)\) is the function defining the initial condition and \(G(u(y))\) is the operator describing the evolution over time for any \(y=(x,t)\), where \(x\) and \(t\) are the spatial and temporal coordinates of the system's trajectory.
In practice, \(G(u(y))\) is observed at discretized fixed locations \(\{(x_{1},t_{1}),...,(x_{m},t_{n})\}\), resulting in a matrix \([u(x_{1},t_{1}),...,u(x_{m},t_{n})]\in\mathbb{R}^{m\times n}\), where \(m\) is the total number of spatial discretization points and \(n\) is the total number of temporal discretization points that define the full trajectory. For the purpose of this study, we assume that for each system we have two datasets:
* High-resolution set \(D_{H}\) with \(N_{H}\) samples of time resolution \(\Delta t_{H}\),
* Low-resolution set \(D_{L}\) of size \(N_{L}\) samples of time resolution \(\Delta t_{L}\).
For the four considered PDE-based examples in this work, we set \(N_{L}=4\times N_{H}\) and \(\Delta t_{L}=5\times\Delta t_{H}\), while the spatial discretization is the same in both datasets. The multi-resolution network is trained on both datasets. The sizes of the training data are dependent on the complexity of the problem.
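To make this setup concrete, the following sketch builds matching two-resolution datasets; the stand-in integrator, grid sizes, and sample counts are illustrative assumptions rather than the exact data-generation pipeline of this work.

```python
import numpy as np

def solve_pde(u0, n_steps=100):
    """Stand-in for the numerical integrator (illustrative only)."""
    return np.repeat(u0[:, None], n_steps, axis=1)   # (m, n) trajectory

rng = np.random.default_rng(0)
m, N_H = 128, 100
N_L = 4 * N_H                                        # low-resolution data is more plentiful
# High-resolution trajectories keep every timestep; low-resolution ones keep
# every 5th, matching Delta t_L = 5 * Delta t_H on the same spatial grid.
D_H = np.stack([solve_pde(rng.standard_normal(m)) for _ in range(N_H)])
D_L = np.stack([solve_pde(rng.standard_normal(m))[:, ::5] for _ in range(N_L)])
print(D_H.shape, D_L.shape)                          # (100, 128, 100) (400, 128, 20)
```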
## 2 Proposed Multi-resolution DON-LSTM (ours)
The schematic representation of the proposed architecture is shown in Figure 1. The architecture consists of a DeepONet followed by a reshaping layer, an LSTM layer, and a final feed-forward layer which brings the output back to the predefined spatial dimension of the solution. The architecture is set up such that the DeepONet outputs are fed as inputs to the LSTM. The DeepONet approximates the solution at locations that are determined by its inputs, which enables the initial discretization-invariant training. During the training of the LSTM, we impose the sequential nature of the outputs through these inputs and use the LSTM to process them as temporal sequences.
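A minimal PyTorch sketch of this composition is given below; the branch/trunk widths, hidden size, and the dense branch architecture are illustrative assumptions, and the branch-trunk combination follows Eq. (2) introduced in the next subsection.

```python
import torch
import torch.nn as nn

class DONLSTM(nn.Module):
    """Sketch of the DON-LSTM composition; all layer sizes are illustrative."""
    def __init__(self, m=128, p=100, hidden=128):
        super().__init__()
        self.m = m
        self.branch = nn.Sequential(nn.Linear(m, 128), nn.Tanh(), nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(2, 128), nn.Tanh(), nn.Linear(128, p))
        self.lstm = nn.LSTM(input_size=m, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, m)       # back to the spatial dimension

    def forward(self, u0, y):
        # u0: (B, m) initial conditions; y: (n*m, 2) coordinates (x, t) on a fixed grid
        b = self.branch(u0)                    # (B, p) branch embedding
        t = self.trunk(y)                      # (n*m, p) trunk embedding
        G = torch.einsum("bp,qp->bq", b, t)    # DeepONet output, cf. Eq. (2)
        seq = G.view(u0.shape[0], -1, self.m)  # (B, n, m): temporal sequence
        h, _ = self.lstm(seq)                  # (B, n, hidden) hidden states
        return self.head(h)                    # (B, n, m) predicted solution

out = DONLSTM()(torch.randn(4, 128), torch.rand(20 * 128, 2))  # -> (4, 20, 128)
```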
### Deep operator network (DeepONet)
The DeepONet is an architecture based on the universal approximation theorem for operators (Chen & Chen, 1995) which combines the outputs of two deep neural networks (the "branch" and "trunk"
networks). It is formulated as (Lu et al., 2021):
\[G(u)(y)\approx\sum_{k=1}^{p}\underbrace{b_{k}(u(x_{1}),u(x_{2}),\ldots,u(x_{m}))} _{\text{branch}}\odot\underbrace{t_{k}(y)}_{\text{trunk}}\, \tag{2}\]
where \(b_{k}\) and \(t_{k}\) denote the output embeddings of the branch and the trunk network, respectively, and \(\odot\) denotes the element-wise multiplication of the outputs of the two networks. The inputs to the branch network are the initial conditions, \(u(t=0)\), discretized at \(m\) spatial sensor locations \(\{x_{1},x_{2},\ldots,x_{m}\}\), and for the trunk network they are the locations, \(y=(x,t)\), at which the operator \(G(u)\) is evaluated. Additionally, the trunk network can incorporate periodic boundary conditions through simple feature expansion that consists of applying Fourier basis on the provided spatial locations, i.e., \(x\rightarrow\cos(\frac{2\pi x}{P})\), and \(x\rightarrow\sin(\frac{2\pi x}{P})\), where \(P\) is the period (Lu et al., 2022a).
### Long short-term memory network (LSTM)
LSTM is a type of recurrent neural network (RNN), which is a class of specialized neural networks for processing long sequences. In contrast to traditional feed-forward networks, RNNs employ a hidden state, which persists as the network iterates through data sequences and propagates the information from the previous states forward. LSTMs were developed as an extension to RNNs, aimed at mitigating the problem of vanishing gradients encountered in RNNs. Vanishing gradients occur in long sequences, as the hidden state is updated at every timestep, which leads to a large number of multiplications causing the gradients to approach zero. To address this, LSTM is equipped with a set of gates: the _input gate_ regulates the inflow of new information to the maintained hidden state, the _forget gate_ determines what proportion of the information should be discarded, and the _output gate_ controls the amount of information carried forward.
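For reference, the gating mechanism described above is conventionally written as follows; this is the standard formulation, not notation specific to this paper:

\[i_{t}=\sigma(W_{i}x_{t}+U_{i}h_{t-1}+b_{i}),\qquad f_{t}=\sigma(W_{f}x_{t}+U_{f}h_{t-1}+b_{f}),\qquad o_{t}=\sigma(W_{o}x_{t}+U_{o}h_{t-1}+b_{o}),\]

\[c_{t}=f_{t}\odot c_{t-1}+i_{t}\odot\tanh(W_{c}x_{t}+U_{c}h_{t-1}+b_{c}),\qquad h_{t}=o_{t}\odot\tanh(c_{t}),\]

where \(i_{t}\), \(f_{t}\), and \(o_{t}\) are the input, forget, and output gates, \(c_{t}\) is the cell state that carries information across long sequences, and \(h_{t}\) is the hidden state propagated to the next timestep.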
### Self-adaptive loss function
In solving time-dependent PDEs, the main challenge lies in preserving the precision of the modeled solution over a long-time horizon. Introducing appropriately structured non-uniform penalization parameters can address this aspect. In all the experiments, we equip the networks with self-adaptive weights in the loss function, first introduced by McClenny & Braga-Neto (2020), which adapt during training and force the network to focus on the most challenging regions of the solution. We take inspiration from Kontolati et al. (2022), where self-adaptive weights considerably improve the prediction accuracy for discontinuities or non-smooth features in the solution. The self-adaptive weights are defined for every spatial and temporal location, and they are updated along with the
Figure 1: The proposed architecture. In the first phase, the DeepONet maps the solution operator from the initial condition \(u_{t=0}\) to solutions at later timesteps. During the DeepONet training, the output \(u\) can be defined at arbitrary temporal locations. In the next two training stages, the outputs must have a fixed time interval \(\Delta t\) to be compatible with the LSTM. The DeepONet output solutions are reshaped to be represented in a temporally sequential manner and processed by an LSTM. The LSTM lifts the dimension to the specified number of neurons and returns all hidden states. The last layer is a fully-connected dense layer bringing the embedding to the original size of the solution.
network parameters during optimization. Specifically, the training loss is defined as:
\[\mathcal{L}(\mathbf{\theta},\mathbf{\lambda})=\frac{1}{N}\sum_{i=1}^{N}g(\lambda_{i})|u(x_{i},t_{i})-\hat{u}(x_{i},t_{i})|^{2}, \tag{3}\]
where \((u(x_{i},t_{i})-\hat{u}(x_{i},t_{i}))^{2}\) is the squared error between the reference \(u\) and predicted value \(\hat{u}\), and \(g(\lambda)\) is a non-negative, strictly increasing self-adaptive mask function, and \(\mathbf{\lambda}=\{\lambda_{1},\lambda_{2},\cdots\lambda_{j}\}\) are \(j\) self-adaptive parameters, and \(j\) is the total number of evaluation points. Typically, in a neural network, we minimize the loss function with respect to the network parameters, \(\mathbf{\theta}\). However, in this approach, we additionally maximize the loss function with respect to the trainable hyper-parameters using a gradient descent/ascent procedure. The modified objective function is defined as:
\[\min_{\mathbf{\theta}}\max_{\mathbf{\lambda}}\mathcal{L}(\mathbf{\theta},\mathbf{\lambda}). \tag{4}\]
The self-adaptive weights are updated using gradient ascent, such that
\[\mathbf{\lambda}^{k+1}=\mathbf{\lambda}^{k}+\eta_{\mathbf{\lambda}}\nabla_{\mathbf{\lambda}}\mathcal{L}(\mathbf{\theta},\mathbf{\lambda}), \tag{5}\]
where \(\eta_{\mathbf{\lambda}}\) is the learning rate of the self-adaptive weights and
\[\nabla_{\lambda_{i}}\mathcal{L}=\left[g^{\prime}(\lambda_{i})(u_{i}(\xi)- \mathcal{G}_{\theta}(\mathbf{v}_{i})(\xi))^{2}\right]^{T}. \tag{6}\]
Therefore, if \(g^{\prime}(\lambda_{i})>0\), \(\nabla_{\lambda_{i}}\mathcal{L}\) would be zero only if the term \((u_{i}(\xi)-\mathcal{G}_{\theta}(\mathbf{v}_{i})(\xi))\) is zero. In this work, the self-adaptive weights are normalized to sum up to one after each epoch. The weights \(\mathbf{\lambda}\) are updated alongside the weights of the network during training.
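As one possible realization of the min-max training in Eqs. (3)-(5), the following PyTorch sketch performs a descent step on the network weights and an ascent step on the self-adaptive weights; the choice of softplus for \(g(\cdot)\) and the optimizers are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn.functional as F

def self_adaptive_step(model, opt_theta, lam, opt_lam, x, u_true):
    """One min-max step on Eq. (3): descent on theta, ascent on lambda."""
    w = F.softplus(lam)                      # g(lambda): positive, strictly increasing
    loss = (w * (u_true - model(x)) ** 2).mean()
    opt_theta.zero_grad(); opt_lam.zero_grad()
    loss.backward()
    opt_theta.step()                         # minimize w.r.t. network weights
    lam.grad.neg_()                          # flip the sign -> gradient ascent on lambda
    opt_lam.step()
    # (the paper additionally renormalizes the weights to sum to one once per epoch)
    return loss.item()

# usage sketch: lam = torch.zeros(n_points, requires_grad=True)
#               opt_lam = torch.optim.Adam([lam], lr=1e-2)
```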
### Training procedure
Training is performed iteratively over a predefined number of epochs and in minibatches, i.e., in each epoch the training data is divided into multiple smaller subsets. After calculating the loss on each batch, the gradients of the loss are computed w.r.t. the trainable network parameters (weights), which are then updated. The optimization is performed using the Adam optimizer and a predefined learning rate. The training of the multi-resolution DON-LSTM is performed in three stages:
**Step 1:**: DeepONet training:
1. Initialize the DeepONet with a set of weights \(\mathbf{\theta}_{DON}\).
2. Iteratively update \(\mathbf{\theta}_{DON}\) on low-resolution data \(D_{L}\) with a learning rate \(lr_{1}\), saving weights \(\mathbf{\theta}_{DON_{i}}\) at every \(n_{freq}\) epochs.
3. Compute the MSE loss on the predictions on validation data and choose the set of weights that correspond to the lowest loss.
**Step 2:**: LSTM training:
1. Freeze \(\mathbf{\theta}_{DON}\) and initialize the LSTM with a set of weights \(\mathbf{\theta}_{LSTM}\).
2. Iteratively update \(\mathbf{\theta}_{LSTM}\) on high-resolution data \(D_{H}\), with a learning rate \(lr_{1}\), saving models at every \(n_{freq}\) epochs.
3. Compute the MSE loss on the predictions on validation data and choose the set of weights that correspond to the lowest loss.
**Step 3:**: DON-LSTM training:
1. Unfreeze \(\mathbf{\theta}_{DON}\) (set as trainable parameters).
2. Iteratively update \(\mathbf{\theta}_{DON}\) and \(\mathbf{\theta}_{LSTM}\) on high-resolution data \(D_{H}\) with a learning rate \(lr_{2}\), where \(lr_{2}<lr_{1}\), saving models at every \(n_{freq}\) epochs.
3. Compute the MSE loss on the predictions on validation data and choose the set of weights that correspond to the lowest loss.
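A compact sketch of this schedule is given below, reusing the `DONLSTM` module from the earlier sketch; the loaders (`loader_L`, `loader_H`), epoch counts, and learning rates are assumed names, and checkpoint selection by validation MSE is omitted for brevity.

```python
import torch

def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad = flag

def train(forward_fn, params, loader, lr, epochs):
    opt = torch.optim.Adam([p for p in params if p.requires_grad], lr=lr)
    for _ in range(epochs):
        for u0, y, target in loader:                 # assumed batch layout
            loss = torch.nn.functional.mse_loss(forward_fn(u0, y), target)
            opt.zero_grad(); loss.backward(); opt.step()

def deeponet_only(u0, y):                            # Step-1 forward: DeepONet output alone
    b, t = net.branch(u0), net.trunk(y)
    return torch.einsum("bp,qp->bq", b, t).view(u0.shape[0], -1, net.m)

# Step 1: train the DeepONet on low-resolution data D_L (LSTM untouched).
set_trainable(net, False); set_trainable(net.branch, True); set_trainable(net.trunk, True)
train(deeponet_only, net.parameters(), loader_L, lr=lr1, epochs=E1)
# Step 2: freeze the DeepONet, train the LSTM layers on high-resolution D_H.
set_trainable(net, False); set_trainable(net.lstm, True); set_trainable(net.head, True)
train(net, net.parameters(), loader_H, lr=lr1, epochs=E2)
# Step 3: unfreeze everything; fine-tune end-to-end with a smaller rate lr2 < lr1.
set_trainable(net, True)
train(net, net.parameters(), loader_H, lr=lr2, epochs=E3)
```

Note how Step 1 exploits the DeepONet's discretization invariance: `deeponet_only` accepts coordinate sets `y` of any length, so the low-resolution loader can supply fewer temporal locations without changing the model.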
## 3 Problems considered and data generation
The performance of the proposed multi-resolution models is showcased on four infinite-dimensional non-linear dynamical systems. The numerical solutions to the Korteweg-de Vries, Benjamin-Bona-Mahony and Cahn-Hilliard equations were obtained through schemes that are second order in both space and time; the equations are spatially discretized using central finite differences and integrated in time using the implicit midpoint method. The solutions of the Burgers' equation were generated using PDEBench (Takamoto et al., 2022). All PDEs are evaluated on the one-dimensional spatial domain \(\Omega=[0,P]\), for different \(P\), with periodic boundary conditions \(u(P,t)=u(0,t)\) for all \(t\geq 0\). The training set for each PDE consists of data obtained from \(N=N_{H}+N_{L}\) different initial conditions integrated over time \(t=T\) with step size \(\Delta t\). Example data are visualized in Figure 3 and details of the time and space domains are given in Table 2, both in Appendix A.
### Korteweg-de Vries equation
The Korteweg-de Vries (KdV) equation (Korteweg & De Vries, 1895) is a non-linear dispersive PDE that describes the evolution of small-amplitude, long-wavelength systems in a variety of physical settings, such as shallow water waves, ion-acoustic waves in plasmas, and certain types of nonlinear optical waves. We consider a one-dimensional unforced case, which is given by:
\[\frac{\partial u}{\partial t}+\gamma\frac{\partial^{3}u}{\partial x^{3}}- \eta u\frac{\partial u}{\partial x}=0, \tag{7}\]
where \(u:=u(x,t)\) is the height of the wave at position \(x\) at time \(t\), and \(\eta=6\) and \(\gamma=1\) are chosen real-valued scalar parameters.
The initial conditions are each a sum of two solitons (solitary waves), _i.e._:
\[u(x,0)=\sum_{i=1}^{2}2k_{i}^{2}\text{sech}^{2}\bigg{(}k_{i}((x+\frac{P}{2}-Pd_{ i})\%P-\frac{P}{2})\bigg{)}, \tag{8}\]
where \(\text{sech}\) stands for hyperbolic secant (\(\frac{1}{\cosh}\)), \(P\) is the period in space, \(\%\) is the modulo operator, \(i=\{1,2\}\), \(k_{i}\in(0.5,1.0)\) and \(d_{i}\in(0,1)\) are coefficients that determine the height and location of the peak of a soliton, respectively. For each initial state \(t=0\) in the training data, these coefficients are drawn randomly from their distribution.
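A small NumPy sketch of sampling Eq. (8) is shown below; the grid size and period are illustrative choices.

```python
import numpy as np

def kdv_two_soliton_ic(x, P, rng):
    """Sample a two-soliton initial condition following Eq. (8)."""
    u0 = np.zeros_like(x)
    for _ in range(2):
        k = rng.uniform(0.5, 1.0)                 # height coefficient k_i
        d = rng.uniform(0.0, 1.0)                 # location coefficient d_i
        arg = (x + P / 2 - P * d) % P - P / 2     # periodic re-centering
        u0 += 2 * k**2 / np.cosh(k * arg) ** 2    # 2 k^2 sech^2(k * arg)
    return u0

x = np.linspace(0.0, 20.0, 128, endpoint=False)   # illustrative grid, P = 20
u0 = kdv_two_soliton_ic(x, P=20.0, rng=np.random.default_rng(0))
```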
### Benjamin-Bona-Mahony equation
The Benjamin-Bona-Mahony (BBM) equation was derived as a higher-order improvement on the KdV equation and includes both non-linear and dispersive effects (Peregrine, 1966; Benjamin et al., 1972). It is used for studying a broader range of wave behaviors, including wave breaking and dispersion, and is given by:
\[\frac{\partial u}{\partial t}-\frac{\partial^{3}u}{\partial x^{2}\partial t}+ \frac{\partial}{\partial x}\left(u+\frac{1}{2}u^{2}\right)=0. \tag{9}\]
The chosen initial conditions are superpositions of two soliton waves of the following shape:
\[u(x,0)=\sum_{i=1}^{2}3(c_{i}-1)\text{sech}^{2}\bigg{(}\frac{1}{2}\sqrt{1-\frac{1}{c_{i}}}\Big{(}(x+\frac{P}{2}-Pd_{i})\%P-\frac{P}{2}\Big{)}\bigg{)}, \tag{10}\]
where \(c_{i}\in(1,3)\) and \(d_{i}\in(0,1)\) are coefficients that determine the height and location of the peak of a soliton, respectively.
### Cahn-Hilliard equation
The Cahn-Hilliard equation is used to describe phase separation with applications to materials science and physics (Cahn & Hilliard, 1958). It is expressed as:
\[\frac{\partial u}{\partial t}-\frac{\partial^{2}}{\partial x^{2}}(\nu u+ \alpha u^{3}+\mu\frac{\partial^{2}u}{\partial x^{2}})=0, \tag{11}\]
where we set \(\nu=-0.01\), \(\alpha=0.01\) and \(\mu=-0.00001\).
The initial conditions are superpositions of sine and cosine waves, i.e. \(u(x,0)=u_{1}+u_{2}\) with:
\[u_{i}(x,0)=a_{i}\text{sin}(k_{i}\frac{2\pi}{P}x)+b_{i}\text{cos}(j_{i}\frac{2 \pi}{P}x), \tag{12}\]
where \(a_{i},b_{i}\in(0,0.2)\) and \(k_{i},j_{i}\) are integers and \(j_{i},k_{i}\in[1,6]\).
### Viscous Burgers' equation
The viscous Burgers' equation (Bateman, 1915; Burgers, 1948) describes the behavior of waves and shocks in a viscous fluid or gas. It is given by:
\[\frac{\partial u}{\partial t}+\frac{\partial}{\partial x}\left(\frac{u^{2}}{2}\right)=\frac{\nu}{\pi}\frac{\partial^{2}u}{\partial x^{2}}, \tag{13}\]
where \(x\in(0,1)\), \(t\in(0,2]\), and \(\nu=0.001\) is the diffusion coefficient.
The initial conditions are given by the superposition of sinusoidal waves:
\[u(x,0)=\sum_{k_{i}=k_{1},\ldots,k_{N}}A_{i}\sin(k_{i}x+\phi_{i}), \tag{14}\]
where \(k_{i}=2\pi n_{i}/P\) are the wave numbers, with \(n_{i}\) arbitrarily selected integers in \([1,n_{max}]\). \(N\) is the integer determining how many waves are added, \(A_{i}\) is a random float number uniformly chosen in \((0,1)\), and \(\phi_{i}\) is the randomly chosen phase in \((0,2\pi)\).
## 4 Experimental results
We compare the performance of our multi-resolution DON-LSTM against the five benchmark networks described below, using the relative squared error (RSE), the mean absolute error (MAE) and the root mean squared error (RMSE), described in Appendix E. The average values of these error metrics are presented in Table 1, and the log values of RSE against the increasing number of high-resolution training samples are shown in Figure 2. Due to space considerations, detailed tables with the evaluation of the models trained on distinct sample sizes are available in Appendix B. All evaluations are performed on high-resolution test samples of size \(N_{test}=1000\).
### Benchmark models
The benchmark models used to evaluate the performance of the multi-resolution DON-LSTM (DON-LSTM (\(D_{H}\), \(D_{L}\))) are the following:
* **DON** (\(D_{L}\)): The vanilla DeepONet trained on \(N_{L}\) low-resolution data,
* **DON** (\(D_{H}\)): The vanilla DeepONet trained on \(N_{H}\) high-resolution data,
* **DON** (\(D_{H},D_{L}\)): The vanilla DeepONet trained on \(N_{L}+N_{H}\) multi-resolution data (both datasets),
* **LSTM** (\(D_{H}\)): An architecture trained on \(N_{H}\) high-resolution data, consisting of one dense layer lifting the input to a dimension [-1, \(mn\)], a reshaping layer (into [-1, \(m\), \(n\)]) followed by an LSTM layer, where \(m\) is the spatial and \(n\) is the temporal dimension,
* **DON-LSTM** (\(D_{H}\)): The proposed architecture trained only on \(N_{H}\) high-resolution data.
The models vary by the amount and granularity of the training data. We take advantage of the discretization-invariance of deep neural operators and train the vanilla DeepONet and DON-LSTM on multi-resolution data (in the case of DON-LSTM only the DeepONet layers are trained with multi-resolution). In this problem formulation, the LSTM implicitly learns \(\Delta t\) and therefore has to be trained at the resolution used in testing (here, high-resolution). In the following, we refer to specific models by the data resolution used in their training and the models' name, e.g., _high-resolution DON_.
### Generalization performance
We evaluate the performance of our models grouping them by the number of samples used in their training (Figure 2). As expected, increasing the sample size leads to a reduction in the generalization error for all models. In nearly all cases, we also see that the multi-resolution DON-LSTM achieves the lowest error, followed by the multi-resolution DON for the KdV, BBM and Cahn-Hilliard equations, and the high-resolution LSTM for the Burgers' equation.
Our main findings can be summarized into the following:
1. The multi-resolution DON-LSTM generally achieves the lowest generalization error out of the five benchmarks.
2. In order to achieve similar accuracy with single-resolution methods (such as the vanilla LSTM) we need significantly more high-resolution training samples than for multi-resolution DON-LSTM.
3. In multiple cases the DON trained on a larger amount of lower-resolution data obtains better results than the DON trained on fewer samples of high-resolution data.
4. While the DeepONet itself achieves reasonable performance, the time-dependent architecture is crucial for capturing long-time dynamics, as is evident by the superior performance of DON-LSTM and the vanilla LSTM.
\begin{table}
\begin{tabular}{l l l l l} \hline
**Model** & **Resolution** & **MAE** & **RMSE** & **RSE** \\ \hline \multicolumn{5}{c}{**Korteweg-de Vries equation**} \\ \hline
[MISSING_PAGE_POST]
**DON-LSTM** & \(\Delta t=0.02\) (high) & 0.016\(\pm\)0.006 & 0.036\(\pm\)0.013 & 0.005\(\pm\)0.004 \\ \hline \end{tabular}
\end{table}
Table 1: The mean and the standard deviation of the prediction errors of all models. Each reported value aggregates the mean error values across all models trained on different sample sizes, where each model has been trained five times (i.e., the mean and standard deviation of the values reported in Appendix B).
## 5 Discussion
In all our experiments, the multi-resolution DON-LSTM achieved the lowest generalization error, while requiring fewer high-resolution training samples than its benchmarks. This advantage stems from the combination of two factors: the utilization of a larger training dataset, which encompasses both high- and low-resolution samples, and the integration of LSTM mechanisms that facilitate capturing the temporal evolution of the systems. We specifically observe that the inclusion of LSTM mechanisms is beneficial, as evidenced by the multi-resolution DON-LSTM outperforming the vanilla DON trained on multi-resolution data. Additionally, the inclusion of low-resolution data in early training contributes to the improvement of the prediction, as seen in the superior performance of multi-resolution DON-LSTM as compared to the vanilla LSTM and single-resolution DON-LSTM, as well as the superior performance of the multi-resolution DON in comparison to both single-resolution DONs.
When low-resolution data was not used in training, the comparison between the architectures was inconclusive, i.e., the high-resolution DON-LSTM outperformed its vanilla counterparts only in two experiments. We attribute this to the fact that DON-LSTM comprises a larger number of parameters, and a small training sample is not sufficient to effectively train the network, leading to under- or over-fitting. We can also see that the vanilla DON struggles to adjust all its parameters on a small sample, which becomes apparent from the fact that the model trained on low-resolution data achieves better performance than when trained on high-resolution data (regardless of being tested on high resolution). This means that the inclusion of low-resolution data in early training is essential for the good performance of the proposed architecture.
## 6 Limitations and Future work
While DeepONets possess the discretization-invariance property in the output function, they require the input data to be defined at fixed locations. This problem is addressed in the literature through the employment of an encoder-decoder architecture integrated with the DeepONet (Ong et al., 2022; Zhang et al., 2022). We also note that a framework limited to discretization-invariant output
Figure 2: Model performance vs. amount of training samples. The \(y\)-axis shows the relative squared error (log values) for predictions on high-resolution test data, while the \(x\)-axis indicates the number of high- and low-resolution training samples. The figure compares the performance of three models: DON (red) and DON-LSTM (blue) and LSTM (green), with variations of training data denoted by \(D_{\square}\) and the line pattern (\(D_{H}\) and solid line for high-resolution, \(D_{L}\) and dotted line for low-resolution, and \(D_{H}\), \(D_{L}\) and dash-dotted line for multi-resolution). The error bars indicate the \(95\%\) confidence interval calculated over the five trained models (two standard deviations of the error).
is sufficient for applications where the system behaviour is modeled from a single-resolution input, e.g., an initial condition. For cases when multi-resolution input is desired, we highlight the existence of other neural operator architectures such as the Fourier neural operator (Li et al., 2020), or the Laplace neural operator (Cao et al., 2023).
In addition, we note that LSTM is specifically suited and limited to sequential data. We see this as an opportunity for future studies, in which the DeepONet can be extended with architectures appropriate for different types of data, for example convolutional neural networks in case of image data, or transformers for data governed by less-regular temporal patterns.
## 7 Conclusions
Our proposed architecture, DON-LSTM, seamlessly integrates the strengths of its two composites: the discretization invariance of the DeepONet and the improved sequential learning with the memory-preserving mechanisms of the LSTM. We have demonstrated that these properties can be leveraged to incorporate multi-resolution data in training, as well as to capture the intricate dynamics of time-dependent systems, leading to significantly improved predictive accuracy in modeling of the systems' evolution over long-time horizons. Our experiments clearly demonstrate the efficacy of our approach in creating accurate, high-resolution models even with limited training data available at fine-grained resolution. Moreover, the synergistic effect of our proposed architecture makes it an apt choice for real-world scenarios, promising substantial enhancements in prediction quality. This work not only advances the understanding and utilization of multi-resolution data in sequential analysis but also provides valuable insights for future research and applications.
## 8 Acknowledgement
For KM and SRS, this work is based upon the support from the Research Council of Norway under project SFI NorwAI 309834. Furthermore, KM is also supported by the PhD project grant 323338 Stipendiatstilling 17 SINTEF (2021-2023). For SG and GEK, this work was supported by the U.S. Department of Energy, Advanced Scientific Computing Research program, under the Scalable, Efficient and Accelerated Causal Reasoning Operators, Graphs and Spikes for Earth and Embedded Systems (SEA-CROGS) project, DE- SC0023191. Furthermore, the authors would like to acknowledge the computing support provided by the computational resources and services at the Center for Computation and Visualization (CCV), Brown University where all the experiments were carried out, as well as the invaluable contribution in generating the training data by Solve Eidnes.
## 9 Reproducibility
The code for reproducing the results is available in GitHub repository (Michalowska, 2023). The code includes the default parameters to generate the models and the data processing pipeline used in this paper. The details of the used architectures, training setup and data processing steps are also specified in Appendix C and Appendix D.
|
2309.01945 | On-Chip Hardware-Aware Quantization for Mixed Precision Neural Networks | Low-bit quantization emerges as one of the most promising compression
approaches for deploying deep neural networks on edge devices. Mixed-precision
quantization leverages a mixture of bit-widths to unleash the accuracy and
efficiency potential of quantized models. However, existing mixed-precision
quantization methods rely on simulations in high-performance devices to achieve
accuracy and efficiency trade-offs in immense search spaces. This leads to a
non-negligible gap between the estimated efficiency metrics and the actual
hardware that makes quantized models far away from the optimal accuracy and
efficiency, and also causes the quantization process to rely on additional
high-performance devices. In this paper, we propose an On-Chip Hardware-Aware
Quantization (OHQ) framework, performing hardware-aware mixed-precision
quantization on deployed edge devices to achieve accurate and efficient
computing. Specifically, for efficiency metrics, we built an On-Chip
Quantization Aware pipeline, which allows the quantization process to perceive
the actual hardware efficiency of the quantization operator and avoid
optimization errors caused by inaccurate simulation. For accuracy metrics, we
propose Mask-Guided Quantization Estimation technology to effectively estimate
the accuracy impact of operators in the on-chip scenario, getting rid of the
dependence of the quantization process on high computing power. By synthesizing
insights from quantized models and hardware through linear optimization, we can
obtain optimized bit-width configurations to achieve outstanding performance on
accuracy and efficiency. We evaluate inference accuracy and acceleration with
quantization for various architectures and compression ratios on hardware. OHQ
achieves 70% and 73% accuracy for ResNet-18 and MobileNetV3, respectively, and
can reduce latency by 15~30% compared to INT8 on real deployment. | Wei Huang, Haotong Qin, Yangdong Liu, Jingzhuo Liang, Yulun Zhang, Ying Li, Xianglong Liu | 2023-09-05T04:39:34Z | http://arxiv.org/abs/2309.01945v5 | # OHQ: On-chip Hardware-aware Quantization
###### Abstract
Quantization emerges as one of the most promising approaches for deploying advanced deep models on resource-constrained hardware. Mixed-precision quantization leverages multiple bit-width architectures to unleash the accuracy and efficiency potential of quantized models. However, existing mixed-precision quantization suffers exhaustive search space that causes immense computational overhead. The quantization process thus relies on separate high-performance devices rather than locally, which also leads to a significant gap between the considered hardware metrics and the real deployment. In this paper, we propose an **On-chip Hardware-aware Quantization** (OHQ) framework that performs hardware-aware mixed-precision quantization without accessing online devices. First, we construct the _On-chip Quantization Awareness_ (OQA) pipeline, enabling perception of the actual efficiency metrics of the quantization operator on the hardware. Second, we propose the _Mask-guided Quantization Estimation_ (MQE) technique to efficiently estimate the accuracy metrics of operators under the constraints of on-chip-level computing power. By synthesizing network and hardware insights through linear programming, we obtain optimized bit-width configurations. Notably, the quantization process occurs on-chip entirely without any additional computing devices and data access. We demonstrate accelerated inference after quantization for various architectures and compression ratios, achieving 70% and 73% accuracy for ResNet-18 and MobileNetV3, respectively. OHQ improves latency by 15\(\sim\)30% compared to INT8 on deployment.
1Beihang University, 2ETH Zurich, 3The University of Hong Kong
[email protected], [email protected], [email protected]
[email protected], [email protected], [email protected], [email protected]
## 1 Introduction
Recently, the deep neural network (DNN) has achieved significant development and shown its great potential in various fields, such as computer vision and natural language processing. The advancement of DNN can be mainly attributed to the rapid expansion of parameters and the increasing depth of the model. Although over-parameterization has enhanced the performance of DNNs, it has simultaneously posed a serious challenge when deploying real-time inference in resource-constrained edge devices. Edge chips impose stricter constraints on memory footprint, energy consumption, and latency[18]. An intuitive solution to mitigate this issue is quantizing the full-precision 32-bit weights and activations of DNNs to low-bit integers or binaries [12, 13, 14, 15]. The basic quantization involves uniformly compressing all layers of a DNN to a consistent bit-width [16, 17], which overlooks the computational redundancies inherent in different operator layers within the network and subsequently neglects the varying impact on overall performance. To address this limitation, it becomes imperative to adopt variable bit-widths for distinct layers, a concept known as mixed-precision quantization [13, 14, 15]. The mixed-precision quantization considers the operators with different bit widths in search space optimization, which significantly pushes the accuracy and efficiency limits of quantized NNs. However, it also causes significant computation overhead in the quantization process. In an \(L\)-layer DNN, \(S\) bit-width candidates result in a computation
Figure 1: Off-chip _vs._ on-chip quantization. The left shows the traditional off-chip quantization framework involving quantization analysis and deployment steps. The right part is our OHQ framework which is fully integrated on-chip.
with O(\(S^{L}\)) complexity. This exemplifies an exponential increase in computational demand for mixed-precision strategies, especially pronounced in deeper architectures.
These facts introduce strong motivation to construct an on-chip mixed-precision quantization framework that harmoniously integrates the quantization algorithm with real hardware. The challenges in constructing this quantization framework stem from two main aspects: (1) Hardware awareness: Since the accuracy and speed of quantized models depend on the hardware on which they are deployed, this framework should perceive, on-chip, various metrics of operators on real hardware, including but not limited to latency and memory usage. This framework should achieve hardware awareness for the deployed devices. (2) Lightweight process: Given the necessity to thoroughly consider efficiency-oriented hardware metrics, the expanded search space requires an efficient yet resource-lightweight algorithm for on-chip quantization processes. This ensures that the quantization process does not rely on external computational and data resources. Despite the presence of certain hardware awareness and mixed-precision quantization techniques that emerged earlier [4, 13, 14, 15, 16], they still exhibit some significant issues that cannot be overlooked. These issues include the computationally expensive nature of the process and reliance on inaccurately estimated hardware metrics, among others. These factors hinder their progression toward becoming the ideal on-chip quantization methods.
In this work, we propose an On-chip Hardware-aware Quantization (OHQ) framework (see Fig. 1) to overcome the above-mentioned issues. The proposed OHQ mainly relies on two novel techniques: first, the _On-chip Quantization Awareness_ (OQA) pipeline perceives the actual efficiency metrics of the quantization operator on the hardware, using synthetic data as input to obtain the latency, memory usage, and power metrics on-chip. Second, the _Mask-guided Quantization Estimation_ (MQE) technique efficiently estimates the accuracy metrics of operators under the constraints of on-chip-level computing power; the search for optimized bit-width configurations then simplifies to linear programming. Our comprehensive experiments show that OHQ outperforms existing mixed-precision quantization methods in accuracy and efficiency by a substantial margin. We also demonstrate the effectiveness of our OHQ on various networks, such as ResNet-18/50 and MobileNetV2/V3, highlighting its versatility across architectures. We summarize our contributions as follows:
* We propose a mixed-precision quantization framework OHQ, which is a totally on-chip design from quantization computation to deployment. By selecting Field Programmable Gate Arrays (FPGAs) as co-processors to aid the on-chip Central Processing Units (CPUs) in quantization computing, we effectively eliminate the necessity for supplementary computing devices.
* A hardware-aware method OQA that operates at the Intellectual Property (IP) core granularity is first proposed. This approach utilizes the chip's clock cycle consumption per computational layer as a metric for hardware, while simultaneously considering the constraints imposed by the available computational power.
* We develop an enhanced and statistically robust sensitivity metric MQE for performing small-batch distilled data inference on edge devices, which support data-free sensitivity feedback and quantization.
* We additionally furnish comprehensive empirical findings for ResNet18/50 [11], MobileNetV2/V3 [12]. These findings delineate the state-of-the-art quantization outcomes achievable in real-world edge scenarios.
## Related Work
### Mixed-Precision Quantization
Quantization maps parameters \(\mathbf{x}\) to fixed-point ranges using the following equation:
\[\mathrm{quantize}(\mathbf{x})=\mathrm{round}\left(\mathbf{x}/S\right)-\mathbf{z}, \tag{1}\]
where \(\mathrm{quantize}\) denotes quantization function, \(S\) and \(\mathbf{z}\) denote the scaling factor and zero-point, respectively. Existing quantization approaches can be divided into quantization-aware training (QAT) and post-training quantization (PTQ). QAT quantizes the network throughout the training process with the original dataset, resulting in accurate quantization while markedly computation overhead [13, 14, 15]. PTQ operates as an offline algorithm, it relies on a few real or synthetic samples to calibrate the quantization functions of weights and activations, thus just utilizing much less computation in the quantization process compared to QAT [13, 14, 15].
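A minimal NumPy sketch of this mapping and its inverse is given below; the min-max calibration of \(S\) and \(\mathbf{z}\) is one common choice, assumed here for illustration.

```python
import numpy as np

def quantize(x, bits=8):
    """Uniform quantization following Eq. (1); min-max calibration assumed."""
    qmin, qmax = 0, 2**bits - 1
    S = (x.max() - x.min()) / (qmax - qmin)       # scaling factor
    z = np.round(x.min() / S) - qmin              # zero-point
    q = np.clip(np.round(x / S) - z, qmin, qmax).astype(np.int32)
    return q, S, z

def dequantize(q, S, z):
    return S * (q + z)

x = np.random.randn(4, 4).astype(np.float32)
q, S, z = quantize(x, bits=4)
print(np.abs(x - dequantize(q, S, z)).max())      # worst-case rounding error
```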
To exploit the accuracy and efficiency potential of quantized models, mixed-precision quantization emerges as a promising approach, which enables accuracy-sensitive layers to retain high precision (i.e., more bit-widths) while others maintain low precision [4, 13, 14, 15, 16]. Nevertheless, a major challenge associated with this strategy lies in identifying the optimized mixed-precision configuration, since the search space exhibits an exponential relationship with the number of layers, particularly in resource-limited scenarios. Thus, various mixed-precision quantization methods have been proposed to improve this. Dong et al. use a second-order Hessian matrix as an accuracy sensitivity metric for each layer [4] and also apply the mean of the eigenvalues of the Hessian to mitigate computational overhead [4]. [15] and [14] propose data-free methods to get rid of the reliance on data resources on the edge. However, it is hard for existing methods to achieve on-chip mixed-precision quantization accurately and efficiently.
### Hardware-Aware Quantization
The accuracy sensitivity of quantized layers can frequently be described as the influence of quantization on the model accuracy. However, the computational expenditure of network operators on physical hardware should also serve as
a constraint for bit-width configuration, incorporating aspects such as latency and energy consumption. The heavily-consuming layers should be quantized to low precision, while the light-consuming ones should be retained at high precision. Wang _et al_. sensed bit-width allocation under different hardware architectures through a reinforcement learning model, a black-box strategy based on hardware perception [22]. However, this exploration approach is computationally complex and ill-suited to offline quantization. Yao _et al_. sensed the computation time of each layer as a bit-width allocation constraint by deploying the quantized network to a Graphics Processing Unit (GPU) [23]. But the roughly obtained computation time is affected by software, the operating system (OS), and network transmission, and cannot accurately reflect the on-chip computational efficiency.
We select FPGA to implement and evaluate our framework since it serves as a flexible and reliable hardware platform for DNN deployment[15, 24] and validating Application-Specific Integrated Circuit (ASIC) designs through accurate flow verification methodologies[16, 17, 18, 19]. We proposed a fine-grained hardware awareness by clock cycles and energy, which can achieve a more accurate representation of hardware computation consumption at the IP core level. This methodology enables a closer examination of the chip's underlying layer, thereby facilitating a realistic construction of the computational connection between the DNNs and the chips.
## Methodology
In this section, we present our On-Chip Hardware-Aware Quantization (OHQ) framework (see Fig. 2), including the On-Chip Quantization Awareness (OQA) pipeline and Mask-Guided Quantization Estimation (MQE) technique.
### On-Chip Quantization Awareness
#### Generated Synthetic Data
The statistics of the BN layers in the full-precision model, namely the mean and standard deviation, correspond to the real dataset used in training. Consequently, the majority of data-free quantization schemes[16, 1, 17, 18, 19] employ BN statistical losses to capitalize on the information present in the BN layer. The subsequent optimization objective facilitates the congruence of the synthetic data distribution \(x^{d}\) with the BN statistics:
\[\min_{x^{d}}\mathcal{L}_{\text{distill}}=\sum_{i=1}^{L}\parallel\tilde{u}_{i}^ {d}-u_{i}\parallel_{2}^{2}+\parallel\tilde{\sigma}_{i}^{d}-\sigma_{i} \parallel_{2}^{2}, \tag{2}\]
where \(u_{i}\) and \(\sigma_{i}\) are the mean and standard deviation parameters of the pre-trained model's BN layers. \(\tilde{u}_{i}^{d}\) and \(\tilde{\sigma}_{i}^{d}\) represent the mean and standard deviation, respectively, of the feature matrix generated by \(x^{d}\) at the \(i\)-th BN layer. Eq. 2 minimizes the statistical loss \(\mathcal{L}_{distill}\) in each BN layer of the original model, ultimately generating synthetic data that matches the input data distribution. Our distilled data is generated via the embedded CPU on the edge platform and subsequently fed directly into the FPGA, which underpins the following hardware awareness and MQE technique. This approach bolsters the operational efficiency of the proposed framework.
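A compact PyTorch sketch of this BN-statistics matching is shown below; the image shape, step count, and learning rate are illustrative assumptions.

```python
import torch

def distill_data(model, shape=(32, 3, 224, 224), steps=500, lr=0.1):
    """Optimize random inputs so their BN statistics match the pre-trained
    model (Eq. (2)); shapes and hyper-parameters are illustrative."""
    model.eval()                                       # keep running stats fixed
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    bns = [m for m in model.modules() if isinstance(m, torch.nn.BatchNorm2d)]
    feats = []
    hooks = [bn.register_forward_hook(lambda m, i, o, f=feats: f.append(i[0]))
             for bn in bns]                            # capture each BN input
    for _ in range(steps):
        feats.clear()
        model(x)
        loss = sum(((f.mean(dim=(0, 2, 3)) - bn.running_mean) ** 2).sum()
                   + ((f.std(dim=(0, 2, 3)) - bn.running_var.sqrt()) ** 2).sum()
                   for f, bn in zip(feats, bns))
        opt.zero_grad(); loss.backward(); opt.step()   # only x is updated
    for h in hooks:
        h.remove()
    return x.detach()
```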
#### Hardware Awareness
While previous study[23] has recommended the use of cloud GPU running time as a hardware-aware constraint, this coarse-grained metric is influenced by the OS[16, 17], and application layers latency and does not provide a comprehensive reflection of the DNN's performance on the physical chip. In order to comprehend the finer-grained hardware constraints, we observed the IP core-level operations on FPGAs and we name it as OQA.
To facilitate NN on FPGAs, we quantize and compile the model following its acquisition, subsequently converting the layers into corresponding arithmetic module implementations, while providing a central controller for regulation. In this process, we employ the following techniques:
1. Img2Col: Converts layer data from discontinuous storage to continuous storage, streamlining transmission and computation.
2. Multiplication and addition tree: Utilizes parallel stacking and cascading multiplication and addition mechanisms to enable simultaneous pointwise multiplication computation across vast amounts of data.
3. Sub-matrix slicing transmission and computation: Divides a large matrix into multiple tiles for transmission to the chip, performing computation and splicing on these tiles to alleviate FPGA resource pressure.
Figure 2: The overview of OHQ framework. This proposed OHQ obtains chip-level sensing parameters and layer-wise differences through a physical deployment (OQA and MQE are respectively described in detail in Fig. 3 and Fig. 4).
4. BRAM large bandwidth fill: Uses on-chip Block Random Access Memory (BRAM) as the primary storage mechanism, divided into rows to facilitate reading and writing large bit-width data simultaneously, ensuring computation and access remain logically and physically coherent.
The above methods enable efficient parallel computation of networks, and a reasonable quantized deployment scheme is formulated with full consideration of the resource and power constraints of the hardware.
Our proposed clock-aware approach, predicated on the interaction between the IP core and the BRAM, is illustrated in Fig. 3. During runtime, the IP core on the Programmable Logic (PL) side--which is responsible for four steps including computation of weight and feature map, data transfer, data write-back, and data post-process--automatically collects the number of running clock cycles and stores them in the target BRAM. Then, the FPGA-compatible transmission relays the collected clock cycles to the Processing System (PS) side. This can be denoted as \([c_{1},\ldots,c_{i},\ldots,c_{L}]\), where \(c_{i}\) represents the total number of clock cycles in the \(i\)-th layer. Concurrently, the power consumption of the IP core during computation is recorded as \([e_{1},\ldots,e_{i},\ldots,e_{L}]\).
Note that OQA is the first pipeline to propose fine-grained sensing at the chip IP-core level. By obtaining the clock information of the four computation steps from the PL, we capture the exact on-chip cost of each layer's operations.
### Mask-Guided Quantization Estimation
#### Layerwise Sensitivity on Hardware
DNN models comprise \(L\) layers of computational units, which can be represented as \([U_{1},U_{2},...,U_{L}]\), wherein \(U_{i}\) denotes the \(i\)-th layer computational unit. The learnable parameters (e.g., weights) are denoted as \([\theta_{1},\theta_{2},...,\theta_{L}]\), where \(\theta\in\mathbb{R}^{n}\) is float32 type data in full-precision models. Layer-wise sensitivity typically corresponds to the impact of distinct layers within a network on the output results. As previously discussed, sensitivity measurement using the Hessian matrix can be computationally intensive. An alternative approach involves quantizing \(U_{i}\) to 4/8 bits while maintaining full precision for the remaining layers:
\[\theta_{i}^{q}=\mathrm{quantize}(\theta_{i}), \tag{3}\]
\[\begin{cases}\mathcal{M}=F(U_{1}(\theta_{1});\ldots;U_{i}(\theta_{i});\ldots;U _{L}(\theta_{L})),\\ \mathcal{M}_{i}^{q}=F(U_{1}(\theta_{1});\ldots;U_{i}^{q}(\theta_{i}^{q});\ldots ;U_{L}(\theta_{L})),\end{cases} \tag{4}\]
where \(\theta_{i}^{q}\) is the quantized parameters in \(i\)-th layer and the bit width, \(q\) is the selected bit-widths, \(F\) denotes the general function of NN model, \(\mathcal{M}\) represents the full-precision model, \(\mathcal{M}_{i}^{q}\) denotes the quantized model with quantized layer \(U_{i}^{q}\). Then the performance difference is calculated by the following equation:
\[\omega_{i}=\frac{1}{N}\sum_{j=1}^{N}f(\mathcal{M}(x_{j}),\mathcal{M}_{i}^{q}( x_{j})), \tag{5}\]
where \(\omega_{i}\) indicates the sensitivity value of the \(i\)-th layer, \(N\) is the batch size of distilled data used for inference, \(f(\cdot,\cdot)\) denotes the sensitivity calculation function that compares the output of \(\mathcal{M}\) and \(\mathcal{M}_{i}^{q}\), and \(x\) is the input data.
Nonetheless, this method necessitates quantizing the network \(L\) times, resulting in considerable computational overhead. Consequently, this sensitivity calculation presents challenges for on-chip deployment. Therefore, we introduce a mask-guided technique (MQE), displayed in Fig. 4, which is more efficient and suitable for on-chip implementation:
\[\tilde{\theta}_{i}^{q}=g(\alpha,\theta_{i}^{q}), \tag{6}\]
\[\begin{cases}\mathcal{M}^{q}=\ F(U_{1}^{q}(\theta_{1}^{q});\ldots;U_{i}^{q}( \theta_{i}^{q});\ldots;U_{L}^{q}(\theta_{L}^{q})),\\ \tilde{\mathcal{M}}_{i}^{q}=\ F(U_{1}^{q}(\theta_{1}^{q});\ldots;\tilde{U}_{i}^{ q}(\tilde{\theta}_{i}^{q});\ldots;U_{L}^{q}(\theta_{L}^{q})),\end{cases} \tag{7}\]
where \(g(\cdot,\cdot)\) denotes the masking operator, \(\alpha\) is the mask ratio, and \(\tilde{\theta}_{i}^{q}\) is the mask result of \(\theta_{i}^{q}\). To facilitate the on-chip test of FPGA, the parameters must first undergo integer
Figure 4: Illustration of MQE for ResNet18. Specifically, we feed synthesized data into on-chip models. The figure shows the model with the \(5\)-th layer specifically masked out.
Figure 3: The workflow of OQA. (Top) The PL part samples time, power, and other information of four main steps for awareness while computing, which use BRAM to optimize matrix multiplication and data transfer. (Bottom) The PS part controls the whole situation, including accessing data, organizing the network, and instructing IP cores.
bit-wise quantization (in this case, \(q=8\)). Subsequently, the parameters of the target layer are randomly masked by setting them to 0, with \(\alpha\) selected as \(0.5\) to map the information loss of moving from \(8\) bits to \(4\) bits. In MQE, only **1** quantization of the DNN is necessary. Despite the requirement for \(L\) masking operations, their computational consumption is considerably lower than that of quantization. Then, we use the KL divergence:
\[\begin{split} D_{kl}(p\parallel q)&=-\sum_{x}p(x)\log q(x)+\sum_{x}p(x)\log p(x),\\ &=\sum_{x}p(x)\log\frac{p(x)}{q(x)},\end{split} \tag{8}\]
where \(p(x)\) and \(q(x)\) denote two probability distributions of the original model and masked model. KL divergence can determine the disparity in output distribution between the original and masked models, thereby effectively measuring the information entropy between the matrices. Then, we update Eq. 5 to get our sensitivity:
\[\omega_{i}=\frac{1}{N}\sum_{j=1}^{N}D_{kl}(\mathcal{M}^{q}(x_{j})\parallel\tilde{\mathcal{M}}_{i}^{q}(x_{j})), \tag{9}\]
Our proposed MQE is computed by inference on edge devices, which reflects the layer-wise behaviour of the NN in real hardware operation better than simulation experiments on servers and GPUs. Moreover, the masking operations are negligible and can be swiftly executed by the CPU embedded within the edge device.
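The following PyTorch sketch illustrates the MQE loop of Eqs. (6)-(9) for a classification model; the layer-name handles, the softmax output assumption, and the quantized-model object are illustrative, not the authors' implementation.

```python
import copy
import torch
import torch.nn.functional as F

def mqe_sensitivity(q_model, layer_names, x, alpha=0.5):
    """Per-layer sensitivity via Eqs. (6)-(9): mask a fraction alpha of one
    layer's (already 8-bit-quantized) weights and measure the KL shift."""
    with torch.no_grad():
        p = F.softmax(q_model(x), dim=1)               # reference output distribution
    omega = []
    for name in layer_names:
        masked = copy.deepcopy(q_model)
        w = dict(masked.named_modules())[name].weight
        with torch.no_grad():
            w.mul_((torch.rand_like(w) >= alpha).float())  # zero ~alpha of the weights
            log_q = F.log_softmax(masked(x), dim=1)
        omega.append(F.kl_div(log_q, p, reduction="batchmean").item())  # Eq. (9)
    return omega
```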
#### Mixed-Precision Quantization
We reveal that the aforementioned correlation is not entirely linear and exhibits fluctuations, as displayed in Fig. 6. This observation underscores the capability of our proposed OHQ framework in discerning the genuine performance of both the network and hardware, thereby highlighting the significance of on-chip hardware awareness for quantization. Inspired by this, we would like to keep the layers with high sensitivity at a higher precision, while compressing the layers with high clock and energy consumption to a lower precision. We therefore design a quantitative constraint function by combining OQA and MQE:
\[1=\beta+\gamma, \tag{10}\]
\[\Omega_{i}=\beta\hat{\omega}_{i}-\frac{\gamma}{2}(\hat{c}_{i}+\hat{e}_{i}), \tag{11}\]
where \(\beta,\gamma\) are two hyper-parameters used to control the proportion of the sensitivity and hardware resources; in order to fairly consider this hardware awareness, we set \(\beta\) and \(\gamma\) to 0.5 in the subsequent experiments. The hyper-parameters can be manually modified to satisfy the user's personalized needs for precision and hardware compression rate. \(\hat{\omega}_{i}\), \(\hat{c}_{i}\) and \(\hat{e}_{i}\) are scaled values of \(\omega_{i}\), \(c_{i}\) and \(e_{i}\), which ensures that the different initial awareness values lie in the same range \([0,1]\), and \(\Omega_{i}\) is the optimal factor of the \(i\)-th layer. Ultimately, we maximize the sum of \(\Omega_{i}\) in the network through an integer linear programming (ILP) model:
\[\begin{split}\text{Objective}:&\max_{\{b_{i}\}_{i= 1}^{L}}\sum_{i=1}^{L}(b_{i}\Omega_{i}),\\ \text{Constraint}:&\sum_{i=1}^{L}M_{i}^{b_{i}}\leq \text{Model Size Limit},\end{split} \tag{12}\]
\(M_{i}^{b_{i}}\) denotes the parameter size of the \(i\)-th layer under \(b_{i}\)-bit quantization, and the size of the target compressed model can be set by the user. Since the selectable bit widths are 4 and 8, the target size should lie between the uniform 4-bit model and the uniform 8-bit model. It is worth noting that, compared to the RL method proposed by Wang et al.[20] and the Hessian matrix computation proposed by Dong et al.[19], the ILP model only takes about 1 second to obtain the optimal bit-width configuration given the hardware-aware inputs. This computation is very efficient and can be done on the embedded CPU of the edge platform.
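As a sketch, Eq. (12) with the 4/8-bit candidate set reduces to a small binary program that off-the-shelf solvers handle in well under a second; the PuLP formulation below is an illustration under that assumption, not the authors' implementation.

```python
import pulp

def allocate_bits(omega, params, size_limit_bits):
    """Solve Eq. (12) for the 4/8-bit candidate set with an off-the-shelf
    ILP solver; x[i] = 1 selects 8 bits for layer i, so b_i = 4 + 4*x[i]."""
    L = len(omega)
    prob = pulp.LpProblem("bitwidth", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(L)]
    prob += pulp.lpSum((4 + 4 * x[i]) * omega[i] for i in range(L))            # objective
    prob += pulp.lpSum((4 + 4 * x[i]) * params[i] for i in range(L)) <= size_limit_bits
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [8 if x[i].value() > 0.5 else 4 for i in range(L)]

# usage sketch: bits = allocate_bits(Omega, layer_param_counts,
#                                    size_limit_bits=6 * sum(layer_param_counts))
```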
## Experiment
In this section, a comprehensive array of experiments is undertaken to empirically validate the performance of OHQ. We first describe the datasets employed and the specific models selected for the experimental evaluations. The ablation experiments then decompose the contributions of OQA and MQE. Finally, a detailed comparison juxtaposes the performance of different models under the OHQ framework. The results of this comparison conspicuously highlight the merits inherent
Figure 5: Comparison of on-chip perceptual parameters of MobileNetV3. From left to right, the sensitivity-accuracy, clock-parameter counts, and power consumption-parameter comparison curves at different layers are shown.
in our proposed approach, manifesting in superior compression rates, heightened operational performance, and enhanced quantization efficiency.
### Implementation Details
We validate the experiments on the ImageNet[4] dataset. The ImageNet training data (1.5M images) was deliberately left unutilized in our study. Instead, only 32 distilled images are generated, dedicated to computing the perceptual outcomes of OQA and MQE during forward inference. Owing to the wide use of residual structures in DNNs, we chose the ResNet18 and ResNet50 networks. In addition, we also conduct experiments on MobileNetV2 and MobileNetV3.
Our experimentation and assessments are exclusively executed on an ECE-EMBD development board, housing components from the ZYNQ chip series. This development board amalgamates a dual-core ARM Cortex-A9 processor endowed with 512MB of DDR3 memory on the PS side. The memory subsystem is defined by the MT41K256M16TW model, with a 16-bit width. The PL domain encompasses the XC7Z020-CLG400-1 chip, characterized by an assembly of 85K logic resources and 140 BRAMs with a cumulative capacity of 4.9 Mb. Notably, the on-chip hardware sensing leverages a quantization strategy spanning from 4 bits to 8 bits, judiciously bounding matrix parallel operations to a maximum size of 128 data elements on a single facet. This prudent limitation is designed to preempt an over-allocation of BRAM resources and thus preserve adherence to prescribed quotas. We also validated the performance of the OHQ method for QAT, where we used a GTX 1080 Ti to perform fine-tuning computations on selected models; the results are presented in Table 3.
### Ablation Results
#### Hardware-aware Parameters
We present an analytical exposition of statistical insights in Fig. 5 for MobileNetV3. We unveil the sensitivities exhibited across layers (left). Notably, higher sensitivities exert a profound influence on accuracy, underscoring their pivotal role in shaping model performance. We found that the first and last layers exhibit higher sensitivities in each network: the first layer directly processes original images or feature maps with larger spatial extents, while the last layer performs the hidden-layer classification computation that determines the output; both are prone to accumulating errors and are sensitive to weight changes in on-chip low-precision quantized inference scenarios. The middle panel demonstrates the change of clocks; it can be seen that on-chip latency is not strongly correlated with the parameter count but rather with the size of each layer's input feature map, and the computation time of depthwise convolution correlates more strongly with the number of channels than that of ordinary convolution. Power consumption (right) is predominantly shaped by the scale of parallel computation, indicating a reduced rate of consumption increase at larger sizes when considering power and computation efficiency.
#### Optimization Factor Deconstruction
The present investigation undertakes an in-depth analysis of compression and performance dynamics under diverse model volume constraints. HAWQ-v3 is selected as the vanilla method, which also conducts 4/8-bit mixed-precision quantization. The findings, showcased in Table 1, distinctly highlight the absolute performance advantage achieved through MQE-based sensitivity. While OQA-guided hardware-aware constraints exhibit an appreciable augmentation in compression rates, their linkage primarily centers on the dimensions and channels of the feature map; it is noteworthy that adopting OQA as the sole optimization factor yields elevated accuracy loss in bit-width configurations. As elucidated by Eq. 11, the amalgamation of MQE and OQA constraints engenders a synergistic effect, yielding an elevated compression rate while ensuring accuracy, in stark comparison to the outcomes derived from individual parameters. This multifaceted analysis underscores the nuanced interplay between compression, performance, and model constraints, facilitating informed decisions regarding optimization strategies and trade-offs in resource-constrained scenarios.
### Comparison Results
The important feature of the OHQ proposed in this paper is that it is on-chip. In order to implement the models efficiently, we performed PTQ on ResNet18/50, MobileNetV2, and MobileNetV3, and compared it with previous methods ([11, 12, 13, 14, 15, 16, 17, 18, 19]).
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \multirow{2}{*}{Arch} & \multirow{2}{*}{Ratio} & \multirow{2}{*}{W/A} & \multicolumn{2}{c}{Vanilla} & \multicolumn{2}{c}{MQE} & \multicolumn{2}{c}{OQA} & \multicolumn{2}{c}{MQE + OQA} \\ \cline{4-11} & & & Size (Mb) & Top-1 (\%) & Size (Mb) & Top-1 (\%) & Size (Mb) & Top-1 (\%) & Size (Mb) & Top-1 (\%) \\ \hline \hline ResNet18 & 0.25 & */* & 7.3 & 70.01 & 6.90 & 70.23 & 6.20 & 69.97 & 6.90 & 70.23 \\ & 0.50 & */* & 7.9 & 70.50 & 8.30 & 71.31 & 7.30 & 70.05 & 7.70 & 71.25 \\ & 0.75 & */* & 9.9 & 71.20 & 9.70 & 71.79 & 8.20 & 70.13 & 8.90 & 71.56 \\ & 0.80 & */* & 10.7 & 71.91 & 10.00 & 72.53 & 10.00 & 71.12 & 10.00 & 72.53 \\ \hline MobileNetV2 & 0.25 & */* & 7.3 & 70.01 & 6.90 & 70.23 & 6.20 & 69.97 & 6.90 & 70.23 \\ & 0.50 & */* & 7.9 & 70.50 & 8.30 & 71.31 & 7.30 & 70.05 & 7.70 & 71.25 \\ & 0.75 & */* & 9.9 & 71.20 & 9.70 & 71.79 & 8.20 & 70.13 & 8.90 & 71.56 \\ & 0.80 & */* & 10.7 & 71.91 & 10.00 & 72.53 & 10.00 & 71.12 & 10.00 & 72.53 \\ \hline \end{tabular}
\end{table}
Table 1: Ablation experiments of different optimization factor calculations on ResNet18 and MobileNetV2. We select HAWQ-v3 as our Vanilla baseline, which is a Hessian-based method. The red data indicates the best performance and cyan data is the second best. W/A is the bit-width of weight and activation. * represents mixed precision.
Table 2 underscores OHQ as the optimal 8-bit quantization strategy across diverse networks; the optimal trade-off between precision and compression ratio in mixed-precision quantization is also shown. On ResNet18, an accuracy of 70.08% is attained, harmoniously juxtaposed with a compact footprint of 5.8 Mb and a latency of 63.5 ms. Meanwhile, the ResNet50 model, compressed to a size of 17.8 Mb with a speed of 147.8 ms, exhibits an accuracy of 77.55%. The highest accuracy with the lowest size and latency is achieved on MobileNetV2 (71.63%, 1.7 Mb, 70.1 ms). Notably, the MobileNetV3 network attains an accuracy of 73.01% while preserving a remarkably small size of 2.4 Mb and a latency of 73.4 ms.
We also combine QAT and OHQ, since QAT can improve the accuracy of quantized models through data retraining. On the ResNet18 network we achieve 70.23% accuracy with a model of only 6.9 Mb, which is ahead of HAWQ-v3 in terms of both compression and performance. Notably, PACT does not quantize the input and output layers, and the activations are retained at 32 bits in HAQ and LQ-Nets. For ResNet50, OHQ shows the best results (76.64%, 16.5 Mb, 135.9 ms). Moreover, in MobileNetV3, the accuracy of OHQ is 0.08% and 0.46% higher than that of HAQ and GZNQ, respectively, and the compression rate of the model is 20% higher than that of GZNQ, which still shows state-of-the-art quantization performance. Notably, all the OHQ-quantized models have the fastest inference on FPGA.
## Conclusion
In this paper, our proposed OHQ introduces an innovative and effective solution for hardware-aware mixed-precision quantization, offering a substantial stride toward efficient and accurate deployment of DNNs on resource-constrained chips. Firstly, the OQA pipeline furnishes an avenue to comprehend the true efficiency metrics of quantization operators within the hardware ecosystem and yields insights that inform subsequent optimization steps. Secondly, the MQE technique is meticulously designed to efficiently gauge accuracy metrics for operators while adhering to on-chip computational constraints. Synthesizing network and hardware insights through linear programming, we derive optimal bit-width configurations. A remarkable facet of our approach is that the entire quantization process unfolds on-chip.
\begin{table}
\begin{tabular}{l l c c c c c c c} Arch & Method & Int-Only & Uniform & W/A & Data & Size (Mb) & Latency (ms) & Top-1 (\%) \\ \hline \hline \multirow{6}{*}{ResNet18} & Baseline & \(\times\) & - & 32/32 & 1.2E6 & 44.6 & 39.6\({}^{\star}\) & 73.21 \\ & Min\&Max & \(\times\) & \(\checkmark\) & 8/8 & 1.2E6 & 11.1 & 78.3 & 71.38 \\ & **OHQ (ours)** & \(\checkmark\) & \(\checkmark\) & 8/8 & 32\({}^{\dagger}\) & 11.1 & 78.3 & 71.52 \\ & ZeroQ & \(\checkmark\) & \(\checkmark\) & */* & 32\({}^{\dagger}\) & 5.8 & 67.9 & 21.20 \\ & BRECQ & \(\checkmark\) & \(\checkmark\) & */* & 1024 & 5.8 & 67.6 & 69.32 \\ & **OHQ (ours)** & \(\checkmark\) & \(\checkmark\) & */* & 32\({}^{\dagger}\) & 5.8 & 63.5 & **70.08** \\ \hline \multirow{6}{*}{ResNet50} & Baseline & \(\times\) & - & 32/32 & 1.2E6 & 97.8 & 80.2\({}^{\star}\) & 77.72 \\ & Min\&Max & \(\times\) & \(\checkmark\) & 8/8 & 1.2E6 & 24.5 & 182.6 & 77.70 \\ & **OHQ (ours)** & \(\checkmark\) & \(\checkmark\) & 8/8 & 32\({}^{\dagger}\) & 24.5 & 182.6 & 77.72 \\ & OCS & \(\checkmark\) & \(\checkmark\) & 6/6 & 1.2E6 & 18.4 & 159.6 & 74.80 \\ & ZeroQ & \(\checkmark\) & \(\checkmark\) & */6 & 32\({}^{\dagger}\) & 18.3 & 160.3 & 77.43 \\ & **OHQ (ours)** & \(\checkmark\) & \(\checkmark\) & */* & 32\({}^{\dagger}\) & 17.8 & 147.8 & 77.55 \\ \hline \multirow{6}{*}{MobileNetV2} & Baseline & \(\times\) & - & 32/32 & 1.2E6 & 13.4 & 11.3\({}^{\star}\) & 73.03 \\ & Min\&Max & \(\times\) & \(\checkmark\) & 8/8 & 1.2E6 & 3.3 & 100.7 & 70.29 \\ & DFQ & \(\checkmark\) & \(\checkmark\) & 8/8 & 32\({}^{\dagger}\) & 3.3 & 100.7 & 71.20 \\ & **OHQ (ours)** & \(\checkmark\) & \(\checkmark\) & 8/8 & 32\({}^{\dagger}\) & 3.3 & 100.7 & 73.00 \\ & FracBits & \(\checkmark\) & \(\checkmark\) & */* & 32\({}^{\dagger}\) & 1.8 & 85.4 & 69.90 \\ & **OHQ (ours)** & \(\checkmark\) & \(\checkmark\) & */* & 32\({}^{\dagger}\) & 1.7 & 70.1 & 71.63 \\ \hline \multirow{4}{*}{MobileNetV3} & Baseline & \(\times\) & - & 32/32 & 1.2E6 & 15.3 & 10.6\({}^{\star}\) & 74.32 \\ & Min\&Max & \(\times\) & \(\checkmark\) & 8/8 & 32\({}^{\dagger}\) & 3.9 & 85.0 & 72.98 \\ & **OHQ (ours)** & \(\checkmark\) & \(\checkmark\) & 8/8 & 32\({}^{\dagger}\) & 3.9 & 85.0 & 74.29 \\ & **OHQ (ours)** & \(\checkmark\) & \(\checkmark\) & */* & 32\({}^{\dagger}\) & 2.4 & 73.4 & 73.01 \\ \hline \end{tabular}
\end{table}
Table 2: Results of PTQ methods with ResNet18, ResNet50, MobileNetV2, and MobileNetV3. \(\dagger\) indicates using distilled data in the quantization process. \({}^{\star}\) means the latency results are tested on CPU, others are deployed on FPGA (batch=1).
\begin{table}
\begin{tabular}{l l c c c c c} Arch & Method & W/A & Int-Only & Size (Mb) & Latency (ms) & Top-1 (\%) \\ \hline \hline \multirow{3}{*}{ResNet18} & PACT\({}^{\ddagger}\) & 5/5 & \(\times\) & 7.2 & 70.4 & 69.80 \\ & HAWQ-v3 & */* & \(\checkmark\) & 7.3 & 72.9 & 70.01 \\ & **OHQ (ours)** & */* & \(\checkmark\) & 6.9 & 68.3 & 70.23 \\ \hline \multirow{2}{*}{ResNet50} & HAWQ-v3 & */* & \(\checkmark\) & 18.7 & 172.1 & 75.39 \\ & **OHQ (ours)** & */* & \(\checkmark\) & 16.5 & 135.9 & 76.64 \\ \hline \multirow{3}{*}{MobileNetV2} & GZNQ & 6/6 & \(\checkmark\) & 2.5 & 81.7 & 71.10 \\ & HAQ & */* & \(\times\) & 1.8 & 75.5 & 71.47 \\ & **OHQ (ours)** & */* & \(\checkmark\) & 1.7 & 70.1 & 72.56 \\ \hline \end{tabular}
\end{table}
Table 3: Results of QAT with ResNet18/50 and MobileNetV2. \(\ddagger\) means not quantizing the first and last layers. |
2302.03213 | LUT-NN: Empower Efficient Neural Network Inference with Centroid Learning and Table Lookup | On-device Deep Neural Network (DNN) inference consumes significant computing resources and development efforts. To alleviate that, we propose LUT-NN, the first system to empower inference by table lookup, to reduce inference cost. LUT-NN learns the typical features for each operator, named centroids, and precomputes the results for these centroids to save in lookup tables. During inference, the results of the closest centroids with the inputs can be read directly from the table, as the approximated outputs without computations. LUT-NN integrates two major novel techniques: (1) differentiable centroid learning through backpropagation, which adapts three levels of approximation to minimize the accuracy impact by centroids; (2) table lookup inference execution, which comprehensively considers different levels of parallelism, memory access reduction, and dedicated hardware units for optimal performance. LUT-NN is evaluated on multiple real tasks, covering image and speech recognition, and natural language processing. Compared to related work, LUT-NN improves accuracy by 66% to 92%, achieving a similar level with the original models. LUT-NN reduces the cost at all dimensions, including FLOPs ($\leq$ 16x), model size ($\leq$ 7x), latency ($\leq$ 6.8x), memory ($\leq$ 6.5x), and power ($\leq$ 41.7%). | Xiaohu Tang, Yang Wang, Ting Cao, Li Lyna Zhang, Qi Chen, Deng Cai, Yunxin Liu, Mao Yang | 2023-02-07T02:51:10Z | http://arxiv.org/abs/2302.03213v2 |
# LUT-NN: Towards Unified Neural Network Inference by Table Lookup
###### Abstract
DNN inference requires huge system-development effort and resource cost. This drives us to propose LUT-NN, the first trial towards empowering deep neural network (DNN) inference by table lookup, to eliminate the diverse computation kernels as well as to save running cost. Based on the feature similarity of each layer, LUT-NN can learn the typical features, named centroids, of each layer from the training data, precompute them with the model weights, and save the results in tables. For a future input, the results of the closest centroids to the input features can be directly read from the table, as the approximation of the layer output.
We propose a novel centroid learning technique for DNNs, which enables centroid learning through backpropagation and adapts three levels of approximation to minimize the model loss. By this technique, LUT-NN achieves comparable accuracy (<5% difference) with the original models on real, complex datasets, including CIFAR, ImageNet, and GLUE. LUT-NN simplifies the computing operators to only two: closest centroid search and table lookup. We implement them for Intel and ARM CPUs. The model size is reduced by up to \(3.5\times\) for CNN models and \(7\times\) for BERT. Latency-wise, the real speedup of LUT-NN is up to \(7\times\) for BERT and \(2\times\) for ResNet, much lower than the theoretical results because of the current hardware design, which is unfriendly to table lookup. We expect first-class table lookup support in the future to unleash the full potential of LUT-NN.
+
Footnote †: Contribution during internship at Microsoft Research
## 1 Introduction
Current DNN inference faces two critical issues. Firstly, the development of inference software (e.g., ONNX Runtime [35], TensorFlow [14], and TVM [6]) and hardware (e.g., Google TPU [26], Alibaba HanGuang [25], and Intel Spring Hill [41]) takes huge effort and often lags behind DNN innovation. Computing operators are optimized for different operator sizes, types, algorithms, and backends. To take convolution as an example, an inference system normally has to support direct, Winograd, im2col, depthwise, and group convolution for CPU, CUDA, OpenCL, WebAssembly, etc. Hardware-wise, inference accelerators tend to be customized for different computing operators. For example, the ones designed for convolution, such as HanGuang, cannot well support an NLP model. Secondly, inference consumes more and more computation and storage resources as models get larger (e.g., ResNet-50 [18] 98 M, BERT\({}_{\textit{LARGE}}\)[11] 340 M, GPT-2 [37] 1.5 B).
These issues drive us to ask whether it is possible to eliminate these computing operators for inference. If so, the development of inference systems can be much simplified, accelerated, and decoupled from the rapid evolution of DNN algorithms. A new model could be directly deployed after training. Besides, computation resources can be saved.
To explore the possibility, we rethink the essence of DNN inference. The computation of each DNN layer is to output another level of features given the input features. For example, the front layers of a CNN model output low-level features (e.g., edges and lines), and subsequent layers output high-level features (e.g., faces and objects). Though the input images for a model are diverse, the input features for each layer have semantic similarity, and result in similar outputs. A vivid example is that though cats and horses are different inputs for a model, the output of the ear feature could be similar for the layer that extracts it. Same for language tasks, similar words of different sentences should have similar embeddings and output results.
Because of the similarity, the research question of this work is whether the typical features among clusters of similar features can be learned for each layer, so that the computation outputs of these typical features can approximate the outputs of diverse input features and achieve similar model accuracy.
Towards answering this question, we propose LUT-NN. The system learns the typical features, named _centroid_, for each layer, and precompute the results for these features to save in lookup tables. During inference, the results of the closest centroids with the input features can be read directly from the table as the output of this layer. Fig. 1 shows an overview of LUT-NN using real data samples of a model. LUT-NN is the first to empower end-to-end DNN inference by table lookup. It achieves comparable accuracy with the original models on real and complex datasets and tasks, including CIFAR [29] and ImageNet [10] for image tasks, and GLUE [39] for language tasks.
The major challenge for LUT-NN is how to learn the centroids for minimum accuracy loss. MADDNESS [4] initiated the work towards learning centroids for matrix multiplication (MM), leveraging the technique of Product Quantization (PQ). PQ is an effective vector quantization algorithm widely used for dataset compression [24, 16]. It clusters the vectors in the dataset and learns the centroids to represent the vectors in each cluster. By using PQ, MADDNESS learns the centroids from the training dataset and precomputes the MM results for the centroids, as the approximation for future input matrices.
However, as we will show in the paper, directly applying MADDNESS to the computation layers of a DNN model results in poor accuracy (an 80% accuracy drop on CIFAR-10). A similar conclusion is also reported by McCarter _et al._[33].
Inspired by this, we first analyze and expose the key factor for DNN centroid learning compared to a plain MM. We find that the main reason for the poor result of MADDNESS on DNNs is the different optimization goals of centroid learning for PQ and for DNNs. The goal of PQ is to learn the centroids that minimize the total error/distance between centroids and vectors, while the goal of DNN training is to minimize the final loss function. Without considering the loss function, the errors introduced by centroids in MADDNESS accumulate layer by layer, resulting in poor accuracy.
Therefore, the key for DNN centroid learning is to pass the model loss to each layer through backpropagation and iteratively adjust the centroids by the gradients to minimize the loss. However, this is not possible for PQ, since the function that encodes a vector to the closest centroid, e.g., argmin or hashing in MADDNESS, is not differentiable.
To solve this, we propose the novel LUT-NN centroid learning technique for DNNs. It adapts three levels of approximation by three methods through backpropagation, to minimize the model loss. First, to adapt the approximation introduced by centroids, LUT-NN uses the _soft-PQ_ method. It uses the continuous approximation of argmax, i.e., softmax, for the backward pass, to enable gradient calculation and centroid update. The forward pass still uses argmin for model loss calculation, since argmin will be used for inference. Second, to adapt the approximation introduced by softmax, LUT-NN uses the _learned temperature_ method, which learns the hyperparameter temperature of softmax for each layer during backpropagation, as the tradeoff between model loss reduction and learning convergence. Third, to reduce memory cost, LUT-NN reduces the size of lookup tables by scalar quantization. To adapt the approximation introduced by this quantization, LUT-NN uses _quantization-aware training_, which uses quantized tables in the forward pass and real-value tables in the backward pass.
Empowered by this centroid learning technique, LUT-NN can reduce DNN computations to table lookup, and unify the diverse computation kernels to only two, i.e., _closest centroid search_ and _table lookup_. It can greatly simplify the inference software and hardware design.
LUT-NN is best fit for a system that takes table lookup as a first-class operation, but current systems are all designed for complex computations. Even so, we still implement the two kernels of LUT-NN by a tensor compiler, on commodity Intel and ARM CPUs. Our design carefully aligns the number of centroids and feature length with the width of SIMD instructions. We properly order the loop to improve the cache locality of distance calculation and fuse it with argmin to reduce memory accesses. Even with all these optimizations, the implementation still cannot fully unleash the potential of LUT-NN. For example, the argmin has to be mostly sequentially executed, the configurations of centroids are limited by SIMD width, and hashing cannot accelerate centroid search compared with distance calculation.
Based on the current implementation, LUT-NN achieves comparable accuracy, i.e., within 5% difference, for CIFAR, ImageNet, and GLUE, with model size (in FP32) reduced by 2.3-3.5\(\times\) for CNN ResNet models and 7\(\times\) for BERT, since it only needs to save the centroid results. We also count the theoretical FLOPs of LUT-NN by using distance calculation compared to MM. The FLOPs for CNNs can be reduced by 2-4\(\times\), and by 16\(\times\) for BERT. The real speedup is less than this, due to the issues above. The speedup for BERT is up to 5.4\(\times\) compared to current inference systems. The speedup for larger ResNets, e.g., ResNet-18 for ImageNet, can achieve 2\(\times\). However, for smaller ones, e.g., ResNet-20 for CIFAR, LUT-NN can even be slower than the baseline.
We will discuss the hardware implications of LUT-NN in the paper. With first-class table lookup and hashing support, LUT-NN is expected to approach even higher theoretical speedup.
To sum up, the key contributions of this paper are as follows.
* LUT-NN is the first to empower DNN inference by table lookup to simplify inference systems and save cost.
* It can achieve comparable model accuracy by the novel centroid learning technique for DNN, which can adjust approximation through backpropagation to reduce model loss.
* It automatically generates kernels for table lookups to enable this new inference paradigm on commodity CPUs.
* We implement the whole learning and inference pipelines. Results show up to 5.4\(\times\) speedup and 7\(\times\) model size reduction for BERT, and 2\(\times\) speedup and 3.5\(\times\) model size reduction for ResNet.

Figure 1: LUT-NN transforms model linear-computation layers to table lookup for inference.
## 2 Background and Motivation
Each data sample in the training set can be viewed as a vector. This section will first introduce the concept of vector quantization and its efficient solution, PQ, and then show the poor results of directly applying PQ to DNN inference.
### Product Quantization
Quantization has been well studied in information theory [15]. It reduces the cardinality of a dataset by using _centroids_ to represent the data samples. The set of centroids is called a _codebook_. For vector quantization, the dataset is composed of \(D\)-dimension vectors. The vector quantizer can encode each vector to a centroid in the codebook. However, the complexity of learning and storing the centroids increases exponentially with the vector dimension.
As a result, PQ is proposed to address this complexity issue. The essence of it is to decompose the high-dimensional vector space into the Cartesian product of sub-vector spaces and then quantize these sub-vector spaces separately. As shown in Fig. 2(a), it splits the \(D\)-dimension vector into \(C\) distinct \(V\)-dimension sub-vectors (\(D=C\!\cdot\!V\)). The sub-vectors are quantized separately using \(C\) codebooks. The quantization result of the vector is the concatenation of the \(C\) centroids. By PQ, the complexity of learning and storing the centroids increases linearly with the vector dimension.
**Centroid learning** We now formalize the PQ process. To quantize a \(D\)-dimension vector \(a\in\mathbb{R}^{D}\), PQ needs to learn the centroids from a training dataset \(\hat{A}\in\mathbb{R}^{\hat{N}\times D}\) composed of vectors with the same distribution as \(a\). PQ first decomposes the vectors in the dataset into \(C\) distinct \(V\)-dimension sub-vectors, notated as \(\hat{A}^{c}\in\mathbb{R}^{\hat{N}\times V}\) (marked in different colors in Fig. 2). To make the quantization optimal, the centroid learning process finds the \(K\) centroids \(P^{c}\) (i.e., the \(c^{th}\) codebook) for \(\hat{A}^{c}\) by \(k\)-means [30], minimizing the distance sum of each sub-vector \(\hat{A}^{c}_{i}\) and its nearest centroid \(P^{c}_{k}\), as shown in Eq. 1.
\[\operatorname*{arg\,min}_{P}\sum_{c}\sum_{i}\left\|\hat{A}^{c}_{i}-P^{c}_{k} \right\|^{2} \tag{1}\]
**Sub-vector encoding** With the learned centroids \(P\), for an input vector \(a\), PQ can encode it as the concatenation of the nearest centroids for each sub-vector. The encoding function for a sub-vector is shown in Eq. 2. By this vector decomposition method, the centroids can represent \(K^{C}\) different vectors by only \(K\times C\) memory cost.
\[g^{c}(a^{c})=\operatorname*{arg\,min}_{k}\left\|a^{c}-P^{c}_{k}\right\|^{2} \tag{2}\]
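To make Eqs. 1 and 2 concrete, here is a minimal NumPy sketch of vanilla PQ (our own helper names and a plain Lloyd loop, not the authors' code): it learns one codebook per sub-vector space and encodes a vector as its nearest-centroid indices.

```python
import numpy as np

def kmeans(X, K, iters=25, seed=0):
    """Plain Lloyd's k-means over (N, V) sub-vectors; returns (K, V) centroids."""
    rng = np.random.default_rng(seed)
    P = X[rng.choice(len(X), size=K, replace=False)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - P[None, :, :]) ** 2).sum(-1)  # (N, K) distances
        assign = d.argmin(axis=1)
        for k in range(K):
            members = X[assign == k]
            if len(members):                 # leave empty clusters in place
                P[k] = members.mean(axis=0)
    return P

def learn_codebooks(A_hat, C, K):
    """Eq. 1: split D-dim training vectors into C sub-spaces, K centroids each."""
    V = A_hat.shape[1] // C
    return [kmeans(A_hat[:, c * V:(c + 1) * V], K) for c in range(C)]

def encode(a, codebooks):
    """Eq. 2: map each sub-vector of `a` to the index of its nearest centroid."""
    V = len(a) // len(codebooks)
    return [int(((a[c * V:(c + 1) * V] - P) ** 2).sum(axis=1).argmin())
            for c, P in enumerate(codebooks)]
```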
**Hashing for encoding acceleration** Learning centroids is an NP-hard problem. Vanilla PQ uses \(k\)-means to learn centroids and encode sub-vectors. \(k\)-means satisfies Lloyd's optimality conditions [30, 24] and can reach a locally optimal quantization error. However, \(k\)-means encoding is costly: it computes the Euclidean distance of each sub-vector with each centroid, as shown in Eq. 2.
To reduce the encoding cost, some works propose hashing methods to encode sub-vectors [4, 17], at the cost of higher quantization error. Hashing maps a sub-vector to one of the \(K\) buckets. For example, MADDNESS selects a 4-level balanced binary regression tree from the hashing function family, with each leaf as a hash bucket. A sub-vector is encoded by traversing the tree from the root, moving to the left or right child depending on whether the value at certain indices is below or above a threshold, as sketched below.
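A minimal sketch of such a hash encoder is shown below (ours); the split indices and per-node thresholds are assumed to come from MADDNESS's offline tree-learning procedure, which is not reproduced here.

```python
def hash_encode(x, split_idx, thresholds):
    """MADDNESS-style 4-level hash. `split_idx[l]` is the feature compared at
    level l; `thresholds[l]` holds one threshold per node at that level
    (2**l values). Returns a bucket id in [0, 16)."""
    node = 0
    for level in range(4):
        right = x[split_idx[level]] > thresholds[level][node]
        node = 2 * node + int(right)   # descend to left (0) or right (1) child
    return node
```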
This paper will evaluate both distance-based encoding and hashing-based encoding methods.
### PQ for AMM
PQ can be used for AMM (approximated matrix multiplication) [4]. The essence is to approximate the matrix multiplication by the centroids' multiplication.
To formalize it, for a matrix multiplication \(A\times B^{T}\), \(a\) and \(b\) are the rows of \(A\) and \(B\), respectively. The centroid codebooks for \(A\) are \(P\). For a layer of a DNN, \(A\) can be the input feature maps, and \(B\) can be the weights (note that convolution can be computed as matrix multiplication too). Since \(B\) is constant, the multiplication of all the centroids and \(B\) can be precomputed to construct a lookup table, as shown in Fig. 2(b). The table construction function for \(b^{c}\) is shown in Eq. 3.
\[h^{c}(b^{c})=[P^{c}_{0}\!\cdot\!b^{c},P^{c}_{1}\!\cdot\!b^{c},\cdots,P^{c}_{K- 1}\!\cdot\!b^{c}] \tag{3}\]
The matrix multiplication can then be approximated by looking up and aggregating the results of the nearest centroids in the precomputed table, as formulated in Eq. 4. Here, \(g^{c}(a^{c})\) takes the _onehot_ representation of _argmin_, i.e., the nearest centroid is marked as 1 and all others as 0.
Figure 2: Product Quantization for AMM. Each color marks a sub-vector dataset.
\[\begin{split} g^{c}(a^{c})&=onehot(\operatorname*{arg\,min}_{k}\left\|a^{c}-P_{k}^{c}\right\|^{2})=(0,...,0,1,0,...,0),\\ a\cdot b&=\sum_{c}a^{c}b^{c}\approx\sum_{c}g^{c}(a^{c})\cdot h^{c}(b^{c})\end{split} \tag{4}\]
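Putting Eqs. 3 and 4 together, a PQ-based AMM needs only the precomputed tables and the encoder. The sketch below is ours (with `codebooks` as produced by the PQ sketch above): it builds one \((K,M)\) table per codebook from the constant weight matrix \(B\) and approximates one row product by \(C\) lookups and additions.

```python
import numpy as np

def build_tables(B, codebooks):
    """Eq. 3: precompute h^c(b^c), the dot products of every centroid with
    the matching weight sub-vectors. B has shape (M, D), rows playing b."""
    V = B.shape[1] // len(codebooks)
    # Each table is (K, M): tables[c][k, m] = P_k^c . b_m^c
    return [P @ B[:, c * V:(c + 1) * V].T for c, P in enumerate(codebooks)]

def amm_row(a, codebooks, tables):
    """Eq. 4: approximate a @ B.T for one input row by lookups and adds."""
    V = len(a) // len(codebooks)
    out = np.zeros(tables[0].shape[1])
    for c, P in enumerate(codebooks):
        k = ((a[c * V:(c + 1) * V] - P) ** 2).sum(axis=1).argmin()
        out += tables[c][k]            # read the precomputed partial result
    return out
```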
### Poor results of PQ for DNN
Since DNN models are composed of MMs, a natural thought is to replace the MMs in a DNN model with the PQ-based AMM. However, results show that directly applying it to a DNN model leads to poor accuracy.
Fig. 3 shows the accuracy of using PQ-based AMM for ResNet-20 on CIFAR-10, as well as the Mean Square Error (MSE) between the replaced model and the original model. We replace the MMs from the last to the first layer by PQ-based AMM. Fig. 3(a) uses vanilla PQ with \(k\)-means for encoding, while Fig. 3(b) uses MADDNESS with hashing for encoding.
Results show the accuracy keeps dropping while more layers are replaced by AMM, because the error of the AMMs is accumulated. As expected, vanilla PQ shows better results than MADDNESS, since \(k\)-means introduces smaller quantization error than hashing. MADDNESS can maintain accuracy when only the last layer is replaced. This is consistent with the MADDNESS paper, which only replaces the last layer, i.e., fully-connect, by AMM. However, if we replace the last two layers, the accuracy sharply drops by 30%, and finally ends in 10% accuracy. Vanilla PQ drops by 30% when the last six layers are replaced, and also ends in 10% accuracy.
**Reason for poor accuracy** We expose that the key reason for the poor model accuracy of using PQ-based AMM is that _the optimization goals of PQ and DNN learning are different._ As shown in Eq. 1, the goal of PQ is to minimize the quantization error, i.e., to learn the centroids that minimize the distance of each sub-vector to its nearest centroid. On the other hand, the learning goal of a DNN is to minimize the final loss function, iteratively adjusting the model parameters of each layer through backpropagation. Without considering the loss function, the approximation error accumulates as more layers use AMM, as shown in Fig. 3. To get better accuracy, it is necessary to learn the centroids by the DNN training process.
However, the challenge is that the PQ centroid learning and encoding functions shown in Eqs. 1 and 2 are not differentiable and cannot use backpropagation to calculate gradients. This paper thus proposes the soft-PQ technique, which empowers centroid learning for a DNN by backpropagation and gradient descent.
## 3 Differentiable Centroid Learning for DNN
It is essential to learn the centroids of each layer through backpropagation to minimize the model loss. However, the argmin function in vanilla PQ is not differentiable. Our key technique is to leverage the continuous and differentiable approximation of the argmax function, softmax, for backpropagation [23], which approximates the _onehot_ centroid result as the weighted sum of all centroid results (Sec. 3.1). There is prior work using softmax for differentiable PQ, but again only for input embedding compression [7].
The challenge for approximating argmin by softmax for every layer of a model is again the accumulated approximation error and reduced model accuracy. Since the hyperparameter of softmax, temperature, adjusts the approximation error with argmin, we propose learnable temperature to learn the temperature of each layer's softmax through backpropagation (Sec. 3.2).
To reduce the memory and table lookup cost, we propose scalar-quantized lookup tables. However, this again introduces a level of approximation. Similarly, we quantize the lookup tables during centroid learning (Sec. 3.3).
The two major factors of LUT-NN are the number of centroids and the length of sub-vectors, which are a tradeoff between cost and accuracy. We theoretically analyze the FLOPs and disk cost of LUT-NN with the two factors (Sec. 3.4). Since hashing can speed up encoding but at the cost of higher quantization error, we also explore the potential of hashing for LUT-NN (Sec. 3.5).
### Backpropagation through soft-PQ
As introduced in Sec. 2.1, vanilla PQ uses \(k\)-means to learn centroids, and the encoding function \(g^{c}(a^{c})\) uses argmin to encode a sub-vector as the nearest centroid, e.g., the onehot representation \((0,...,1,...,0)\) shown in Fig. 4. The sub-vector AMM result can then be read directly from the lookup table by \(g^{c}(a^{c})\cdot h^{c}(b^{c})\).
Figure 3: Model accuracy keeps decreasing because MSE keeps increasing, while each MM (from the last to the first) is replaced by PQ-based AMM, for ResNet20 on CIFAR-10. Encoding uses (a) \(k\)-means from vanilla PQ and (b) hashing from MADDNESS. MSE is computed as MSE(replaced model, original model).
However, to apply PQ to the whole DNN model and minimize the model loss, the centroids for each layer are required to be learned from backpropagation and gradient descent. We therefore use the smooth argmax function, softmax, as the encoding function for backpropagation, shown in Eq. 5. \(t\) is the temperature hyperparameter, which will be discussed in Sec. 3.2.
\[\tilde{g}^{c}(a^{c})=\text{softmax}(-\left\|a^{c}-P_{k}^{c}\right\|^{2}/\,t) \tag{5}\]
For the \(K\) centroids in a codebook, softmax takes a vector of \(K\) distance results between sub-vector \(a^{c}\) and each centroid \(P_{k}^{c}\) as the input, and normalizes the input to the probability distribution adding up to 1. According to the definition of softmax, each probability is proportional to the exponent of the distance, i.e., \(exp(-\left\|a^{c}-P_{k}^{c}\right\|^{2}/t)\). The nearer the centroid is, the higher the probability is. The encoding for a sub-vector is then changed from a deterministic onehot vector into the probability vector. Illustrated in Fig. 4 for example, the softmax encoding function output is \((0.12,...,0.64,...)\), and 0.64 is the probability of the nearest centroid. The sub-vector AMM result is now the dot product of the probability vector and the table entries of the lookup table.
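As a minimal sketch (ours, NumPy), Eq. 5 is a softmax over negative squared distances, with the temperature interpolating between the uniform distribution and the hard onehot argmin:

```python
import numpy as np

def soft_encode(a_c, P_c, t=1.0):
    """Eq. 5: probability over the K centroids of one codebook.
    t -> 0 recovers onehot(argmin); t -> inf tends to uniform."""
    d = ((a_c - P_c) ** 2).sum(axis=1)   # (K,) squared distances
    z = -d / t
    z -= z.max()                          # stabilize the exponentials
    p = np.exp(z)
    return p / p.sum()

# The soft AMM term is then soft_encode(a_c, P_c, t) @ table_c, i.e. the
# probability-weighted sum of table rows instead of a single row read.
```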
**Soft-PQ centroid learning** By using softmax, the centroid learning process of our soft-PQ for the whole model is shown in Fig. 4. In the forward pass, onehot argmin is the encoding function to calculate the model output and loss, since the model inference will also use argmin for execution simplicity. The backward pass uses softmax as the encoding function to calculate gradients, adjust centroids by gradient descent, and rebuild lookup tables with the adjusted centroids for the next training iteration. Based on Eq. 4, the sub-vector AMM in soft-PQ is formulated as Eq. 6.
\[a^{c}b^{c}=\tilde{g}^{c}(a^{c})\cdot h^{c}(b^{c})-\text{sg}(\tilde{g}^{c}(a^{c })\cdot h^{c}(b^{c})-g^{c}(a^{c})\cdot h^{c}(b^{c})) \tag{6}\]
Here \(sg\) is the _stop gradient_ operator. It is an identity function in the forward pass, so that the forward value is produced by the argmin encoding \(g^{c}(a^{c})\). It drops the gradient inside it in the backward pass, so that gradients are generated through the softmax encoding \(\tilde{g}^{c}(a^{c})\).
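In an autograd framework, Eq. 6 is one line with the stop-gradient (detach) trick. The PyTorch-style sketch below is ours (tensor shapes and names are assumptions): the forward value equals the hard table lookup, while gradients flow through the softmax path; making \(t\) a learnable parameter yields the learned temperature of Sec. 3.2.

```python
import torch

def soft_pq_term(a_c, P_c, table_c, t):
    """One sub-vector AMM term with the straight-through trick of Eq. 6.
    a_c: (V,), P_c: (K, V), table_c: (K, M), t: scalar temperature."""
    d = ((a_c.unsqueeze(0) - P_c) ** 2).sum(dim=-1)            # (K,)
    soft = torch.softmax(-d / t, dim=-1)                        # \tilde{g}^c
    hard = torch.nn.functional.one_hot(d.argmin(),
                                       num_classes=P_c.shape[0]).float()
    soft_out = soft @ table_c                                   # backward path
    hard_out = hard @ table_c                                   # forward value
    return soft_out - (soft_out - hard_out).detach()            # Eq. 6
```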
As the initial value is critical for learning convergence and accuracy, we use the \(k\)-means learned centroids from vanilla PQ to initialize centroids and lookup tables.
### Learned temperature
The temperature hyperparameter \(t\) of softmax [20] controls the approximation error of softmax to argmax. As shown in Fig. 5(a), since \(softmax(x)_{i}=\frac{exp(x_{i}/t)}{\sum_{k=1}^{K}exp(x_{k}/t)}\), when \(t\rightarrow\infty\), \(softmax(x)_{i}\rightarrow\frac{1}{K}\), i.e., the output probability distribution approaches the uniform distribution. When \(t\to 0\), \(softmax(x)\rightarrow onehot(argmax(x))\), i.e., the probability of the largest \(x\) approaches 1 and the others approach 0.
Therefore, there is a tradeoff between small and large temperature. For small temperature, softmax is close to onehot argmax, but the training is difficult since the variance of gradients is large. For larger temperature, the approximation error is increased, but the variance of gradients is smaller.
Previous works normally set \(t\) to a fixed value (mostly 1), or anneal it from a large number to a small one during training [20, 23], but never analyze how to set it properly. This is because, in a typical DNN model, softmax is only used by the output layer to produce class probabilities, or by the input layer to produce symbol embeddings, so the approximation error barely impacts the model accuracy. However, our soft-PQ for DNNs employs softmax in every layer; the accumulated error can obviously decrease accuracy without proper \(t\) settings.
We thus propose to learn the temperature for each layer, also through backpropagation during centroid learning. Fig. 5(b) shows the learned \(t\) for each layer of ResNet18. The value differs per layer and is thus not practical to tune by hand. According to the CIFAR-10 accuracy experiments, training with the learned-temperature technique spends only \(\frac{1}{10}\) of the iterations of training with the temperature set to 1 to achieve 85% accuracy. Detailed results are shown in Sec. 6.
### Scalar quantized lookup table
Lookup tables are the main disk and memory cost. We reduce the table size by scalar quantization (e.g., FP32 to INT8). We leverage the classic range-based linear quantization. The formula is \(r=s(q-z)\)[21], in which \(r\) is the real value, \(s\) is the scaling factor, \(q\) is the quantized value, and \(z\) is the zero point. We use symmetric quantization, so \(z\) is forced to be 0, and the quantized range is \([-2^{n-1},2^{n-1}-1]\). The scaling factor \(s\) is calculated as the max absolute value in the table divided by half of the range, i.e., \(s=\frac{\max(\text{valuel})}{2^{n-1}-1}\).
Figure 4: Soft PQ learns centroids by backpropagation. The forward pass uses argmin as the encoding function, and backward uses softmax. Each color marks a sub-vector dataset using one codebook. \(P_{k}^{c}\) is the \(k^{th}\) centroid for sub-vector \(a^{c}\).
Quantized lookup tables introduce another level of approximation. Similar to temperature, we thus quantize the tables during centroid learning, to minimize the loss function. Inspired by Jacob _et al_. [21], the backpropagation uses lookup tables in real values, so that they can be adjusted in small amounts. The forward pass uses quantized lookup tables as in the inference to calculate the loss. Results show that by this learning method, the quantized lookup table has little impact on the model accuracy. We have also tried to quantize the centroids, which reduces the accuracy more significantly. Therefore, we do not quantize centroids in this paper.
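This quantization-aware training can be sketched as a "fake-quantize" operator: the forward pass sees the dequantized INT8 table, while gradients pass straight through to the real-valued table. Below is a minimal PyTorch sketch of ours under the symmetric per-table scaling described above, not the exact training code.

```python
import torch

def fake_quant_table(table, n_bits=8):
    """Range-based symmetric quantization r = s * q with z = 0.
    Forward uses the dequantized INT8 table; backward is straight-through."""
    qmax = 2 ** (n_bits - 1) - 1
    s = table.abs().max() / qmax                # s = max(|value|) / (2^{n-1}-1)
    q = torch.clamp(torch.round(table / s), min=-qmax - 1, max=qmax)
    return table + (s * q - table).detach()     # value: s*q, gradient: identity
```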
### Cost analysis of LUT-NN
According to the output formula (Eq. 4) of a PQ-based AMM in LUT-NN, the main cost is the encoding function \(g^{c}(a^{c})\), which calculates the Euclidean distance of the sub-vector with each centroid. After that, the cost is the table lookup with the encoding result (i.e., the index of the closest centroid), and the result aggregation over sub-vectors. For file size, the major cost is the lookup tables, which save the dot-product results of each centroid with the corresponding sub-vectors of the weight matrix. The size of the codebooks is relatively small, since the sub-vectors on the same columns share one codebook.
Therefore, we analyze the FLOPs of encoding, table lookup and aggregation, as well as the size of lookup tables as the cost of a LUT-NN AMM, to compare it with normal MM in Table 1. Since convolution can be transformed to MM by im2col, its cost also follows these formulas. For a convolution, \(M\) is the number of output channels, \(D\) is the number of input channels \(\times\) filter size\({}^{2}\), and \(N\) is height \(\times\) width.
The number of centroids \(K\) and the sub-vector length \(V\) are two hyperparameters of LUT-NN. They trade off accuracy against cost (refer to Sec. 6.3). More centroids \(K\) and shorter sub-vectors \(V\) may lead to higher accuracy but increase the cost of LUT-NN. Table 2 shows the GFLOPs and model size calculated by the formulas in Table 1 for different models, using typical \((K,V)\) settings in LUT-NN. These typical settings can achieve comparable accuracy with the original model and also align with the SIMD width for high performance. Similar to other hyperparameters in DNN training, \(K\) and \(V\) can be set by grid search, evolutionary search, or other popular methods considering the cost budget.
It is clear that LUT-NN can achieve both computation and model size savings. The FLOPs saving arises because \(K\) is normally smaller than \(M\). For example, the number of output channels, i.e., \(M\), for ResNet50 is normally 128, 256, or 512, so the FLOPs can be reduced by \(4\times\) when \(K=8\). For ResNet20, the numbers of output channels are 16, 32, and 64, so the FLOPs are reduced by \(2\times\) when \(K=8\). We will discuss the hashing cost in the next section.
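The formulas in Table 1 are easy to probe numerically; the small helper below (ours) returns the FLOPs and size reduction ratios for a given layer shape and \((K,V)\) setting.

```python
def lutnn_ratios(N, D, M, K, V):
    """FLOPs and size reduction of a LUT-NN AMM over a plain MM (Table 1)."""
    flops_lut = N * D * K + N * M * D / V   # encoding + lookup/aggregation
    flops_mm = N * D * M
    size_lut = D * M * K / V                # INT8 table entries (bytes)
    size_mm = 4 * D * M                     # FP32 weights (bytes)
    return flops_mm / flops_lut, size_mm / size_lut

# e.g. a late ResNet 3x3 conv with M = 512 output channels, K = 8, V = 9:
# print(lutnn_ratios(N=196, D=4608, M=512, K=8, V=9))
```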
### Hashing for faster encoding
As introduced in Sec. 2.1, hashing avoids the Euclidean distance calculation of \(k\)-means encoding, but at the cost of higher quantization error (refer to Fig. 3(b)).
We also evaluate the potential of hashing for DNN inference. Since hashing is not differentiable, LUT-NN applies hashing after the centroids are learned. Our current results show that, to achieve similar model accuracy as Euclidean-distance encoding, we have to use a 12-level decision tree for hashing. Table 2 lists the theoretical FLOPs reduction of using this 12-level decision tree. Compared to distance calculation, it can further reduce FLOPs by 30% to \(3\times\). As the encoding cost is reduced, the table lookup and aggregation FLOPs become the bottleneck.
\begin{table}
\begin{tabular}{|l|c|c|} \hline \multicolumn{3}{|l|}{\(A\in\mathbb{R}^{N\times D}\): input matrix; \(B\in\mathbb{R}^{D\times M}\): weight matrix; \(V\): the length of a sub-vector \(a^{c}\); \(K\): the number of centroids in a codebook for \(a^{c}\)} \\ \hline & **Ours** & **MM** \\ \hline **FLOPs** & \(N\cdot D\cdot K+N\cdot M\cdot D/V\) & \(N\cdot D\cdot M\) \\ \hline **Size** & \(D\cdot M\cdot K/V\) & \(4\cdot D\cdot M\) \\ \hline \end{tabular}
\end{table}
Table 1: FLOPs and disk size of a LUT-NN AMM compared to MM (FP32).
\begin{table}
\begin{tabular}{|c|c c c c|} \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{GFLOPs} \\ \cline{2-5} & original & (8, 9) & (16, 9) & (DT, 9) \\ \hline ResNet20 & 0.041 & 0.017 & 0.029 & 0.006 \\ ResNet32 & 0.069 & 0.028 & 0.048 & 0.011 \\ ResNet18 & 1.814 & 0.412 & 0.515 & 0.208 \\ ResNet50 & 4.089 & 1.015 & 1.175 & 0.775 \\ \hline & original & (16, 32) & (16, 16) & (DT, 32) \\ \hline BERT & 2.759 & 0.169 & 0.254 & 0.127 \\ \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{Disk Size (MB)} \\ \cline{2-5} & original & (8, 9) & (16, 9) & \\ \hline ResNet20 & 1.02 & 0.40 & 0.80 & \\ ResNet32 & 1.76 & 0.69 & 1.37 & \\ ResNet18 & 44.55 & 12.57 & 23.16 & \\ ResNet50 & 97.29 & 42.18 & 76.52 & \\ \hline & original & (16, 32) & (16, 16) & \\ \hline BERT & 162.26 & 23.05 & 43.30 & \\ \hline \end{tabular}
\end{table}
Table 2: Theoretical GFLOPs and model size of typical (\(K,V\)) for different models, calculated by formulas in Table 1. “DT” stands for hashing by decision tree to encode sub-vectors. Hashing does not change disk size.
Figure 5: (a) The output probability distribution of \(\operatorname{softmax}(\frac{x}{t})\) at different temperature, for four centroids as an example. \(t\rightarrow\infty\) approaches uniform distribution, and \(t\to 0\) approaches argmax. (b) The learned \(t\) for each layer of ResNet18.
On current commodity hardware without direct hashing support, hashing costs more than Euclidean distance calculation, since the tree traversal is sequential and has no SIMD support. Therefore, we still use Euclidean distance for encoding in LUT-NN and will discuss the potential hardware design for hashing in Sec. 5.
## 4 Model Inference Design and Optimization
In this section, we present the design and optimization of the inference system to support LUT-NN. LUT-NN unifies neural network operators to table lookups, and each operator can be represented by closest centroid search and table lookup. We design the LUT-NN model inference architecture in Fig. 6, which includes the Closest Centroid Search Operator and the Table Lookup Operator.
### Closest Centroid Search Operator
The centroid search is the most computation-intensive operation in LUT-NN, and a high-performance centroid search is what lets LUT-NN outperform conventional computational methods. The **Closest Centroid Search Operator** first computes the distance between input tensors and centroids, which can be expressed as matrix multiplications of input tensors and centroid matrices. Then, it searches for the centroid with the shortest distance for each input sub-vector.
However, it is challenging to implement the centroid search in LUT-NN. First, the LUT-NN distance computation is irregular-shaped (tall-and-skinny) and is difficult to optimize with BLAS libraries. The height of the input tensor (\(N\)) is usually much larger than the length of a sub-vector (\(N\gg V\)) and the number of centroids (\(N\gg K\)) in each codebook. Thus, the operation intensity of the distance calculation can be approximated by \(\frac{2NVK}{NV+KV+NK}\approx\frac{2}{1/K+1/V}\) FLOP/Byte. Since the length of the sub-vector and the number of centroids are small in LUT-NN, the operation intensity \(\frac{2}{1/K+1/V}\) is also small, making the distance computation a memory-intensive matrix multiplication.
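A quick calculation illustrates why this stage is memory-bound; the snippet below (ours) evaluates the approximate operation intensity \(2/(1/K+1/V)\) for the typical settings in this paper.

```python
def op_intensity(K, V):
    """Approximate FLOP/Byte of the distance stage for large N (Sec. 4.1)."""
    return 2.0 / (1.0 / K + 1.0 / V)

# (K=16, V=9) -> ~11.5 and (K=16, V=4) -> ~6.4 FLOP/Byte, both well below the
# compute-to-bandwidth ratio of typical CPUs, so the kernel is memory-bound.
```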
Therefore, we focus primarily on optimizing memory access for centroid distance computations (see Fig. 6). To reduce memory access overhead, we keep frequently accessed data in registers and caches as much as possible. Since centroid matrices are small, we design a centroid-stationary computation scheme that keeps centroid matrices resident in registers and reorders centroid-matrix loads in the inner loop to keep them in cache as long as possible. The centroid-stationary computation keeps the \(K\cdot V\) centroid elements of each codebook in cache, so these centroids are read from DRAM only once. Consequently, the \(N\cdot V\) input tensor and the \(V\cdot M\) centroids are also read from DRAM only once, which alleviates memory bandwidth costs and improves performance.
Second, after computing the centroid distances, the Closest Centroid Search Operator needs to find the centroid with the shortest distance for each input sub-vector and produce the centroid index (see Fig. 6). This can be represented by an argmin function, which returns the index with the shortest distance. However, optimizing the closest-centroid search is still challenging. First, searching for the nearest centroid of each input sub-vector is a data-dependent operation: to find the closest one, we must compare the distances sequentially, which is RAW (Read After Write) dependent and hard to parallelize on CPUs. Moreover, each input tensor has to go through both the distance computation stage and the closest-distance search stage to obtain the resulting centroid index, with data-dependent memory reads and writes between these two stages.
We propose intra-codebook parallelism to optimize the Closest Centroid Search Operators. Intra-codebook parallelism searches the nearest centroid for the input sub-vector on a codebook in parallel. We slice a codebook into multiple sub-codebooks and compare each distance between sub-vector and centroids in each sub-codebook. We merge the compared distances by reduction and find the index corresponding to the closest centroid. It leverages instruction-level parallelism to improve hardware utilization and performance.
### Table Lookup Operator
LUT-NN obtains the indices of the nearest centroids for the input sub-vectors after the closest centroid search, and it leverages the **Table Lookup Operator** to compute the final results. The Table Lookup Operator first reads the precomputed results from the corresponding lookup table through the indices (see Fig. 6) and completes the computation by an accumulation operation (see Fig. 6). For example, convolution operators directly read out the filter's outputs from lookup tables and accumulate each input channel's result into the output channels.
However, table lookup and accumulation introduce additional overhead in model inference. First, table lookup is difficult to parallelize and introduces additional indirect memory accesses, which exacerbate the memory overhead of the lookup tables. Since we have quantized the lookup tables into INT8 in Sec. 3.3, we leverage widely supported SIMD shuffle instructions (x86: pshufb and ARM: tbl) to achieve parallel and efficient table lookup. We demonstrate the implementation of table lookup using shuffle instructions in Fig. 6. The shuffle instruction [5] permutes each byte of a vector based on an index vector and stores the shuffled bytes in the result vector register in each clock cycle. On 128-bit wide SIMD, a vectorized table lookup instruction handles 16 sub-vector lookups (\(128/8=16\)) on 16 results (\(128/8=16\)) simultaneously, greatly simplifying table lookup and reducing overheads.
Second, the accumulation still has comparable computation costs to the entire process. For example, when a codebook handles \(N\) index lookups on a \(K\cdot M\) lookup table, it costs \(N\cdot M\) table lookups (\(K=16\)) and \(N\cdot M\) accumulation adds.
Therefore, the vectorized lookup table leaves the accumulation operations as the performance bottleneck of table lookup. Since a higher number of lanes leads to higher throughput for the same SIMD width (e.g., the vectorized INT16 add instruction performs twice the operations of INT32 on a 128-bit SIMD), we maximize accumulation throughput by mixed-precision accumulation: we first accumulate results in INT16 to utilize more SIMD lanes and then gather INT16 into INT32 to avoid overflow.
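The lookup-accumulate path can be mimicked in NumPy to show the mixed-precision scheme: INT8 table rows are gathered per codebook (playing the role of pshufb/tbl), partial sums are kept in INT16, and partials are widened to INT32 periodically so the INT16 accumulators cannot overflow. This is a behavioral sketch only (shapes and the widening period are our assumptions), not the SIMD kernel.

```python
import numpy as np

def lut_accumulate(indices, tables, widen_every=16):
    """indices: (N, C) uint8 centroid ids; tables: (C, K, M) int8 entries.
    Returns the (N, M) int32 aggregated outputs (Sec. 4.2)."""
    N, C = indices.shape
    M = tables.shape[2]
    acc32 = np.zeros((N, M), dtype=np.int32)
    for c0 in range(0, C, widen_every):
        part16 = np.zeros((N, M), dtype=np.int16)   # cheap INT16 adds
        for c in range(c0, min(c0 + widen_every, C)):
            part16 += tables[c][indices[:, c]]      # per-codebook row gather
        acc32 += part16                             # widen to INT32
    return acc32
```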
## 5 Gaps of LUT-NN
As the first trial towards unified DNN by table lookup, LUT-NN has several gaps to be improved.
**Hardware implication** Potentially, LUT-NN can greatly reduce FLOPs, e.g., \(16\times\) for BERT. However, as we will show in Sec. 6.2, the real speedup of LUT-NN is only \(5.4\times\) for BERT. For small models, e.g., ResNet20 with \(2\times\) FLOPs reduction, LUT-NN may even be slower. The reason is the unfriendly hardware support: LUT-NN has to run argmin partially sequentially to return the index of the nearest centroid, and then look up the table, followed by aggregation. Compared to the direct Multiply-Add support for MM in hardware, the execution of LUT-NN is very inefficient. Besides, the SIMD width limits the number of centroids.
To this end, an accelerator or function unit supporting the first-class parallel table-lookup pipeline could approach the theoretical speedup of LUT-NN, with no limitation on the number of centroids. What is more, hardware can also integrate hashing units for further speedup.
**Layers sensitive to LUT-NN** For CNNs, we find that replacing the first layer by table lookup can lead to an obvious accuracy drop. For example, replacing the first layer of ResNet20 results in a ~7% accuracy drop on CIFAR-10. As explained by Zhou _et al._[44], the layer interacting with the model input can cause a larger accuracy drop. This issue is even more serious for BERT: replacing the first two layers results in 80% accuracy loss (refer to Fig. 11). Therefore, we do not replace these layers by table lookup. Solutions need to be further explored.
**Operators not replaced by LUT-NN** Currently, LUT-NN only replaces linear computation operators with weights, including MM, convolution, and fully-connected layers. The scaled dot-product attention (\(<2\%\) of total latency) in attention layers has no weights, and we do not replace it by lookup tables in this work. Even with no weights, it could be possible to replace it by lookup tables built from products of centroids. Non-linear operators, such as the activation functions, are not replaced by LUT-NN right now; the possibility of replacing non-linear operators will be future work.
**Learning for hashing** We have explored applying hashing after centroids are learned. It could be possible to also integrate it into the backpropagation to learn the hashing functions. This could reduce the depth of the tree for hashing and further reduce the encoding cost.
## 6 Evaluation
### Experiment methodologies and settings
**Dataset and Models** In our experiments, we evaluate LUT-NN on both vision and NLP tasks to demonstrate its effectiveness. Specifically, we evaluate the ResNet family on CIFAR-10 [29] and the large-scale ImageNet [10] dataset. The CIFAR-10 dataset consists of 50\(K\) training images and 10\(K\) validation images in 10 classes. The ImageNet dataset consists of 1.28\(M\) training images and 50\(K\) validation images. For NLP tasks, we evaluate the popular BERT model on the GLUE [39] benchmark.
**Baselines** We compare CNN model accuracy with two baselines: (1) MADDNESS [4] and (2) BNN (binary neural network) [44], where weights and activations are stored in 1 bit and 2 bits, respectively. BNN is a highly optimized model from traditional model compression methods to avoid computation.
For NLP tasks, the baselines are the BERT-base with 12 layers, and another 6-layer BERT, marked as BERT-half. BERT-half is initialized with the weights of the first 6 layers from BERT-base and trained with the same settings as BERT-base. Its precision is illustrated in Table 5.
**Soft-PQ training** Table 3 lists the detailed training hyperparameter settings. Except for the initial learning rate, all other training recipes follow the standard practices [19, 11]. Since the learned temperature requires a larger learning rate to converge quickly to the optimal value, we use different learning rates for centroid and temperature learning. Before soft-PQ training, we initialize the centroids by \(k\)-means clustering. Specifically, we run the original model forward on a randomly sampled sub-dataset (i.e., 1024 training samples) and collect each layer's inputs. Each layer's inputs are clustered by the \(k\)-means algorithm to get the initial centroids.
**LUT-NN settings** To trade off accuracy and cost, we can specify: the layers to be replaced by lookup table, the number of centroids in a codebook \(K\), and the length of the sub-vector \(V\) (Table 1). The default setting in the paper aligns with both the feature size and the length of SIMD instructions. We set \((K,V)=(16,9)\) for \(3\times 3\) convolution, \((K,V)=(16,4)\) for \(1\times 1\) convolution, and \((K,V)=(16,32)\) for BERT. We evaluate different settings in Sec. 6.3.

Figure 6: Model inference design and optimization in LUT-NN.
As explained in Sec. 5, for CNNs, we replace all the convolution layers except the first layer. For BERT, we replace the fully-connected operators of the last 6 layers to compare with BERT-half. We will also evaluate the accuracy with different number of replaced layers in Sec. 6.3.
**Evaluation platforms** LUT-NN is implemented with ARM NEON and Intel SIMD instructions, using a single thread and batch size 1. We evaluate LUT-NN on an ARM CPU, the Cortex-X1 at 2.80 GHz of Google Pixel 6, and an Intel server CPU, the Xeon Silver 4210 at 2.20 GHz. We compare LUT-NN with other high-performance inference systems, including TVM v0.9.0 [6] and ONNX Runtime v1.12.1 [35]. We tune the TVM baselines per kernel using AutoScheduler and AutoTVM with 1500 iterations.
### Accuracy and latency evaluation
**Accuracy** Tables 4 and 5 summarize the accuracy achieved by the different methods. Remarkably, LUT-NN achieves comparable accuracy with the original models, with only a 2.03% accuracy drop on CIFAR-10, \(<5\)% drop on ImageNet, and \(\sim\)3.3% drop on GLUE, suggesting that LUT-NN makes it possible to unify DNN inference by table lookups. In contrast, BNN and MADDNESS experience significant accuracy drops. Specifically, BNN has a 20% accuracy drop on ImageNet, and MADDNESS only performs random prediction (\(\sim\)10% on CIFAR-10 and \(\sim\)0.1% on ImageNet).
**Kernel speedup** According to Table 1, the FLOPs of our proposed LUT-NN is \(ND(K+M/V)\), and the FLOPs of a normal MM is \(NDM\). The FLOPs reduction can thus be denoted as \(M/(K+M/V)\); the LUT-NN kernel achieves lower computation cost when \(M\gg K\) and \(V\gg 1\). As shown in Fig. 7(a) and 7(d), the kernel speedup gradually improves as the layer index increases. The reason is that the number of output channels increases from 64 to 128, 256, and 512 as the layer index increases; the increased number of output channels \(M\) contributes to the speedup in these layers.
We observe that some kernels have smaller speedups in these two figures, and all of these layers are \(1\times 1\) convolutions. Compared with \(3\times 3\) convolutions (\(V=9\)), we currently use shorter sub-vectors (\(V=4\)) for \(1\times 1\) convolutions, which limits the speedup of these layers. Similar results can also be observed for ResNet20 (Fig. 7(c) and Fig. 7(f)). The first six kernels of ResNet20 are even slower than the baselines; the reason is that \(M=16\) for these layers, so LUT-NN has more FLOPs than the original kernels. BERT kernels achieve higher speedups due to the larger \(M=768\) or 3072 and the longer sub-vector \(V=32\). The best speedups on BERT kernels are 12.5\(\times\) (ARM CPU) and 10.3\(\times\) (x86 CPU).
**Model speedup** Fig. 8 shows the normalized end-to-end model throughput. LUT-NN achieves 1.37-2.3\(\times\) end-to-end speedup on ResNet18 and ResNet50 compared to TVM and ONNX Runtime; the best speedup is 2.3\(\times\) for ResNet18 on the ARM CPU over ONNX Runtime. However, since ResNet20 and ResNet32 have fewer output channels \(M\), these models achieve only 1.04\(\sim\)1.51\(\times\) speedups and show no speedup over TVM. The BERT model has higher throughput, with speedups of 5.4\(\times\) and 3.8\(\times\) on the ARM CPU and the x86 CPU, respectively. Compared to the CNN models (\(M\leq 512\)), the BERT model has larger input tensors (\(M=768,3072\)) and a longer sub-vector length (\(V=32\) versus \(V\leq 9\)), which yield better performance gains.
### Ablation Study
We evaluate the effectiveness of the learned temperature and the hyperparameters in this ablation study. The hyperparameters include the number of centroids, the length of the sub-vector, and the number of replaced layers.
**Learned Temperature** In Sec. 3.2, we use gradient descent to learn the temperature.
| Model | ResNet20 (CIFAR-10) | ResNet32 (CIFAR-10) | ResNet18 (ImageNet) | ResNet50 (ImageNet) |
|---|---|---|---|---|
| Original | 91.73 | 92.63 | 69.76 | 76.13 |
| BNN | 84.87 | 86.74 | 51.20 | 55.80 |
| MADDNESS | 10.85 | 10 | 0.1 | 0.1 |
| LUT-NN | 89.94 | 90.60 | 67.38 | 71.58 |

Table 4: LUT-NN achieves comparable accuracy (%) on CIFAR-10 and ImageNet with the original models, much higher than BNN and MADDNESS.
| Dataset | Model | Centroid learning rate | Temperature learning rate | Weight decay | Batch size | Epochs | Optimizer | LR scheduler |
|---|---|---|---|---|---|---|---|---|
| CIFAR-10 | ResNet20, ResNet32 | 1e-3 | — | 0 | 256 | 400 | Adam | Cosine annealing |
| ImageNet | ResNet18, ResNet50 | 1e-1 | — | 0 | 256 | 150 | Adam | Cosine annealing |
| GLUE | BERT | {5e-5, 4e-5, 3e-5, 2e-5} | — | 1e-2 | 32 | 3 | AdamW | Constant |

Table 3: Soft-PQ training settings for centroid and temperature learning. We choose the best fine-tuning learning rate among {5e-5, 4e-5, 3e-5, 2e-5} based on the validation-set accuracy for BERT, following [11].
To evaluate the effectiveness of our method, we compare three temperature tuning strategies: the learned temperature, statically setting the temperature to 1, and annealing the temperature from 1 to 0.1. We compare the training curves in Fig. 9. The figure shows that our proposed learned temperature reaches the highest accuracy (89.94%) and outperforms the static setting (86.85%) and the annealed temperature (89.01%). In addition, the learned temperature converges faster.
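A minimal PyTorch sketch of the idea is shown below; the module name, the log-parameterized temperature, and the distance-softmax form are our illustrative choices rather than the paper's exact formulation from Sec. 3.2:

```python
import torch

class SoftAssign(torch.nn.Module):
    """Soft centroid assignment for one codebook with a learnable temperature."""
    def __init__(self, K=16, V=9, init_T=1.0):
        super().__init__()
        self.centroids = torch.nn.Parameter(torch.randn(K, V))
        # Parameterize log T so the learned temperature stays positive.
        self.log_T = torch.nn.Parameter(torch.log(torch.tensor(init_T)))

    def forward(self, x):                        # x: (batch, V) sub-vectors
        d = torch.cdist(x, self.centroids)       # (batch, K) distances
        return torch.softmax(-d / self.log_T.exp(), dim=-1)

m = SoftAssign()
# Separate parameter groups let the temperature use a larger learning rate
# than the centroids, as described above.
opt = torch.optim.Adam([
    {"params": [m.centroids], "lr": 1e-3},
    {"params": [m.log_T], "lr": 1e-2},
])
```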
**Impact of centroid number and sub-vector length** The centroid number and sub-vector length affect not only the inference throughput but also the model accuracy. We present an ablation study of these two hyperparameters, following the same experimental setting as Sec. 6.1 on ResNet20 and BERT.
Figs. 10(a) and 10(b) report accuracy and FLOPs for varying centroid numbers and sub-vector lengths in ResNet20 and BERT. For ResNet20, the sub-vector length significantly affects the model accuracy, which worsens as the sub-vector length increases: each codebook has to handle higher-dimensional sub-vectors as the length grows, and these cannot be accurately classified, which harms the model accuracy.
The centroid number also affects accuracy and performance. As the centroid number increases, each codebook can classify sub-vectors at a finer granularity, which improves the accuracy of ResNet20. To balance model accuracy with performance, we set \(K=16\) and \(V=9\), which achieves good accuracy with fewer GFLOPs than the original ResNet20.
Figure 8: Normalized end-to-end throughput of LUT-NN (higher is better).
Figure 7: LUT-NN layerwise kernel speedup over ONNX Runtime (ORT) and TVM.
Tasks are grouped as Single-Sentence (CoLA, SST-2), Similarity and Paraphrase (MRPC, STS-B, QQP), and Natural Language Inference (MNLI-m, MNLI-mm, QNLI, RTE); the MNLI training and test sets are shared between the matched and mismatched columns.

| | CoLA | SST-2 | MRPC | STS-B | QQP | MNLI-m | MNLI-mm | QNLI | RTE | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| Training dataset size | 8.5k | 67k | 3.7k | 7k | 364k | 393k | 393k | 105k | 2.5k | |
| Test dataset size | 1k | 1.8k | 1.7k | 1.4k | 391k | 20k | 20k | 5.4k | 3k | |
| BERT base (%) | 52.1 | 93.5 | 88.9 | 85.8 | 71.2 | 84.6 | 83.4 | 90.5 | 66.4 | 79.6 |
| BERT half (%) | 32.9 | 91.3 | 85.5 | 80.8 | 69.2 | 81.0 | 80.3 | 87.4 | 65.2 | 74.8 |
| LUT-NN (%) | 43.5 | 92.4 | 85.1 | 83.2 | 69.6 | 81.3 | 79.9 | 87.4 | 64.7 | 76.3 |

Table 5: LUT-NN achieves comparable accuracy on GLUE with BERT-base on all tasks. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks.
However, the accuracy of LUT-NN on the BERT model rises and then falls as the sub-vector length and centroid number increase. This indicates that the latency-accuracy trade-off depends on the hyperparameters, the model structure, and the dataset. For the BERT model, we set the centroid number \(K=16\) and sub-vector length \(V=32\), which obtains the highest accuracy at lower cost.
**The number of replaced layers** As LUT-NN introduces quantization errors, the number of replaced layers in a model also affects its accuracy. To investigate this effect, we gradually replace more layers of the BERT model on the Semantic Textual Similarity Benchmark (STS-B) task. Since quantization errors propagate through the forward pass, we replace each layer of BERT with LUT-NN from the last layer towards the front, and observe in Fig. 11 that accuracy gradually drops, particularly for the last three replaced layers. The experiment shows that the number of replaced layers is an essential hyperparameter for model accuracy.
## 7 Related Work
### Approximated Matrix Multiplication
Traditional approximate matrix multiplication works focus on minimizing the difference between the ground truth and the approximated output. [31] used random projections to map the two matrices to a lower-dimensional subspace before multiplying. [36] treated matrix multiplication as a sum of outer products and computed it with the Fast Fourier Transform. SVD [28] can also be used to accelerate matrix multiplication: one input matrix is decomposed into a form that is more efficient to compute, which is applicable when that matrix is known in the initialization stage and can be decomposed in advance. [1, 12] approximated matrix multiplication with the Column-Row-Sampling method: they first sample \(k\) columns from the left operand and the corresponding \(k\) rows from the right operand, then multiply the sampled columns and rows to obtain the approximate result. [4] used product quantization to compress one input matrix and precomputed the products of the compressed centroids with the other input matrix to form a lookup table; matrix multiplication is then transformed into lookups in this table. In summary, traditional methods suffer from severe accuracy loss when embedded in neural network inference if the approximation is too harsh. Our method avoids this defect by integrating itself into the end-to-end training process of the neural network.
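For concreteness, a rough NumPy sketch of the PQ-based lookup-table approach of [4] follows; the function names and the naive nearest-centroid encoder are illustrative, and real implementations use much faster encoding:

```python
import numpy as np

def pq_amm(A, B, centroids, encode):
    """Approximate A @ B in the style of [4]: encode rows of A as centroid
    indices per sub-vector, precompute centroid-times-B products into a
    lookup table, then replace the multiplication with lookups and adds.
    centroids: (S, K, V); encode(block, codebook) -> (n,) centroid indices."""
    n, D = A.shape
    S, K, V = centroids.shape
    lut = np.einsum('skv,svm->skm', centroids,
                    B.reshape(S, V, -1))                  # (S, K, M)
    out = np.zeros((n, B.shape[1]))
    for s in range(S):
        idx = encode(A[:, s * V:(s + 1) * V], centroids[s])
        out += lut[s, idx]                                # lookups + adds
    return out

def nearest(block, codebook):                             # naive encoder
    return np.argmin(((block[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
```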
Figure 11: The accuracy of LUT-NN based BERT with respect to the number of layers to replace. Replacing 7 layers brings a negligible decrease in accuracy.
Figure 10: The scaling of centroids number and sub-vector length on ResNet20 and BERT for accuracy and FLOPs.
Figure 9: Learning curves of LUT-NN-based ResNet20 on the CIFAR10 dataset with temperature tuning methods. “Annealing Temperature” (Orange) refers to manually annealing temperature from 1 to 1e-1. Our proposed learned temperature technique reaches a higher 89.94% accuracy than the annealing temperature’s 89.01% accuracy and setting temperature as 1’s 86.85% accuracy.
### Product Quantization for DNN
Product Quantization [32] is a popular and successful method for large-scale approximate nearest neighbor search [24]. In recent years, several works have integrated product quantization into neural networks for different purposes. For example, [42, 43, 22, 27] incorporated product quantization as a layer of a convolutional neural network to obtain compact and discriminative image representations. [7] utilized product quantization to compress the embedding layer in NLP models. [38, 13, 8] compress the weight matrices of neural networks with product quantization. Our method differs from the above in that our purpose is end-to-end neural network inference acceleration: we use differentiable product quantization to replace all layers of the neural network rather than a single layer, and we quantize the input and output feature maps of operators rather than the weight matrices to achieve acceleration.
### Scalar Quantization
Scalar quantization methods aim to reduce the bit width of scalars in neural networks, e.g., from float to INT8 representation, which both compresses the network and accelerates inference. [21] proposed an INT8 quantization method with both a training scheme and an efficient kernel implementation. [9, 45, 2, 3, 34] further showed that 1/2/4-bit quantization suffices for image classification. Recently, [40] proposed a quantization method that reduces weights to less than one bit. However, extremely low-bit quantization methods, such as 1/2/4-bit or sub-one-bit, usually require specially designed hardware like Tensor Cores. Our work provides a less-than-one-bit quantization method that depends only on SIMD instructions available on common Intel and ARM CPU platforms.
## 8 Conclusion
This paper takes the first step towards unifying DNN inference by table lookup. This new paradigm potentially brings significant benefits to the DNN inference ecosystem: it simplifies inference software and hardware design, and decouples them from DNN algorithm updates. With the centroid learning technique for DNNs, LUT-NN achieves comparable accuracy on complex tasks at lower resource cost. However, LUT-NN still has room for improvement, such as better learning techniques to improve accuracy and hardware support for table lookup, which call for future efforts from the community.
|
2302.01404 | Provably Bounding Neural Network Preimages | Most work on the formal verification of neural networks has focused on
bounding the set of outputs that correspond to a given set of inputs (for
example, bounded perturbations of a nominal input). However, many use cases of
neural network verification require solving the inverse problem, or
over-approximating the set of inputs that lead to certain outputs. We present
the INVPROP algorithm for verifying properties over the preimage of a linearly
constrained output set, which can be combined with branch-and-bound to increase
precision. Contrary to other approaches, our efficient algorithm is
GPU-accelerated and does not require a linear programming solver. We
demonstrate our algorithm for identifying safe control regions for a dynamical
system via backward reachability analysis, verifying adversarial robustness,
and detecting out-of-distribution inputs to a neural network. Our results show
that in certain settings, we find over-approximations over 2500x tighter than
prior work while being 2.5x faster. By strengthening robustness verification
with output constraints, we consistently verify more properties than the
previous state-of-the-art on multiple benchmarks, including a large model with
167k neurons in VNN-COMP 2023. Our algorithm has been incorporated into the
$\alpha,\!\beta$-CROWN verifier, available at https://abcrown.org. | Suhas Kotha, Christopher Brix, Zico Kolter, Krishnamurthy Dvijotham, Huan Zhang | 2023-02-02T20:34:45Z | http://arxiv.org/abs/2302.01404v4 | # Provably Bounding Neural Network Preimages
###### Abstract
Most work on the formal verification of neural networks has focused on bounding forward images of neural networks, i.e., the set of outputs of a neural network that correspond to a given set of inputs (for example, bounded perturbations of a nominal input). However, many use cases of neural network verification require solving the inverse problem, i.e, over-approximating the set of inputs that lead to certain outputs. In this work, we present the first efficient bound propagation algorithm, INVPROP, for verifying properties over the preimage of a linearly constrained output set of a neural network, which can be combined with branch-and-bound to achieve completeness. Our efficient algorithm allows multiple passes of intermediate bound refinements, which are crucial for tight inverse verification because the bounds of an intermediate layer depend on relaxations both before and after this layer. We demonstrate our algorithm on applications related to quantifying safe control regions for a dynamical system and detecting out-of-distribution inputs to a neural network. Our results show that in certain settings, we can find over-approximations that are over \(2500\times\) tighter than prior work while being \(2.5\times\) faster on the same hardware.
## 1 Introduction
Applications of neural networks to safety-critical settings often require reasoning about the set of inputs that will produce a particular set of outputs. For example, for a physical system controlled by a policy parameterized as a neural network, it is of interest to understand the states that would cause the policy to choose control actions that are outside acceptable limits (for example torques that are beyond the safe operating range of a motor), or that would result in a future state that violates safety constraints. Another example is quantifying the out-of-distribution (OOD) detection behavior of a neural network: here, it is of interest to bound the set of inputs that leads to a classifier making a confident prediction, i.e, where the highest scoring logit exceeds the second highest scoring logit by a large margin. Given this set, one can assess whether there are inputs far from the training data that are assigned high confidence predictions, which would be an undesirable property.
Formal verification of neural networks seeks to provide provable guarantees demonstrating that networks satisfy formal specifications on their inputs and outputs. Practical algorithms often rephrase the above as finding provably correct bounds on the inputs or outputs of a neural network subject to specific constraints. Most work to date has focused on developing algorithms that can bound the outputs of a neural network given constraints on the inputs, which is challenging due to high
dimensionality and non-convexity. For example, to analyze the robustness of a neural network to perturbations of a given input, these algorithms compute bounds on the outputs of the network given that the inputs are bounded perturbations of a nominal input (Wong and Kolter, 2018; Dvijotham et al., 2018; Zhang et al., 2018; Raghunathan et al., 2018; Gehr et al., 2018).
In this work, we address the inverse problem, motivated by the use-cases outlined above. Our goal is to find bounds on the pre-image of a neural network: given a set of outputs \(\mathcal{S}_{\text{out}}\) (described by linear constraints on the output of a neural network), we seek to find a set that provably contains all inputs that lead to such outputs. The verification problem is already challenging due to non-convexity and high dimensionality, and this inverse problem is even more difficult since neural networks are not invertible.
Specifically, representative efficient verifiers (such as state of the art bound-propagation-based methods (Zhang et al., 2022)) can only compute bounds in the forward direction, and the tightness of the linear programming relaxations they rely on to perform formal verification critically depends on having tight bounds on intermediate activations. In the setting of this paper, however, the bounds available on the intermediate activations derived from the input constraints are weak, since the only constraints on the input are that it should be from the valid input domain (for example, normalized pixels from an image lie between \(0\) and \(1\)). In fact, in some applications like control, the input domain may be unbounded. Given this, the bounds on intermediate activations derived from the input constraints may be vacuous or very loose, impacting the tightness of the intermediate activation relaxations. We efficiently solve this problem by significantly generalizing the existing bound propagation-based verification framework. Our contributions are as follows:
\(\bullet\) We formulate the _inverse verification problem_ for neural networks, i.e, the problem of over-approximating the set of inputs that leads to a given set of outputs, with a mixed integer linear programming formulation. This motivates our development of an efficient bound propagation method.
\(\bullet\) We develop an effective bound propagation framework, Inverse Propagation for Neural Network Verification (INVPROP), which computes provable over-approximations of neural network preimages. We study the convex outer approximation of this problem which leads to a novel inverse bound propagation method for verification. We also unify INVPROP and forward verification bound propagation into a more general verification framework, allowing us to connect our method to standard tools and benefit from prior progress in the verification community, such as efficient GPU acceleration.
\(\bullet\) We demonstrate that tight inverse verification requires multiple refinement passes of intermediate bounds. This is unique to the inverse verification problem because bounds for an intermediate layer depend on not only intermediate bounds before this layer, but also constraints imposed on downstream layers or the output of the network. In other words, bounds cannot be simply propagated in a single forward pass first tightening all neurons in a given layer and then tightening the next layer based on the previous layer bounds. Instead, inverse verification requires an iterative process where, in each pass, all bounds are tightened based on previous bounds (including output constraints), and the iterative process continues until bounds converge.
\(\bullet\) We improve the state-of-the-art on a control benchmark Rober et al. (2022a,b) by providing \(2500\times\) tighter bounds with \(2.5\times\) faster computation. We also demonstrate applicability in OOD detection.
## 2 Setup
### Notation
Throughout this paper, we will use \([L]\) for \(L\in\mathbb{N}\) to refer to the set \(\{1,2,\ldots,L\}\), \(\mathbf{W}_{:,j}^{(i)}\) to refer to column \(j\) of the matrix \(\mathbf{W}^{(i)}\), \([\cdot]_{+}\) to refer to \(\max(0,\cdot)\), and \([\cdot]_{-}\) to refer to \(-\min(0,\cdot)\). We will also boldface symbols for vectors and matrices (such as \(\mathbf{x}^{(i)}\) and \(\mathbf{W}^{(i)}\)) and use regular symbols for scalars (such as \(x_{j}^{(i)}\)). We use \(\mathbf{x}\odot\mathbf{y}\) to denote element-wise multiplication of vectors \(\mathbf{x},\mathbf{y}\).
We define an \(L\) layer ReLU neural network by its weight matrices \(\mathbf{W}^{(i)}\) and bias vectors \(\mathbf{b}^{(i)}\) for \(i\in[L]\). The output of the neural network for the input \(\hat{\mathbf{x}}^{(0)}\) from a bounded input domain \(\mathcal{X}\) is computed by alternately applying linear layers \(\mathbf{x}^{(i)}=\mathbf{W}^{(i)}\hat{\mathbf{x}}^{(i-1)}+\mathbf{b}^{(i)}\) and ReLU layers
\(\hat{\mathbf{x}}^{(i)}=\max(\mathbf{0},\mathbf{x}^{(i)})\) until we receive the output \(\mathbf{x}^{(L)}\) (which we refer to as the logits). Note that we treat softmax as a component of the loss function, not the neural network.
The inputs to our network will lie in the Euclidean space \(\mathbb{R}^{\text{in}}\) and the outputs will lie in \(\mathbb{R}^{\text{out}}\). For any subset \(S\) of \(\mathbb{R}^{d}\) (\(d\) dimensional Euclidean space), \(\operatorname{CONV}(S)\) denotes the convex hull of \(S\), i.e, the smallest convex set containing \(S\).
### Problem Statement
Given a neural network \(f:\mathcal{X}\subseteq\mathbb{R}^{\text{in}}\to\mathbb{R}^{\text{out}}\) and an output constraint \(\mathcal{S}_{\text{out}}\subseteq\mathbb{R}^{\text{out}}\), we want to compute \(f^{-1}(\mathcal{S}_{\text{out}})\subseteq\mathcal{X}\). Precisely computing or expressing \(f^{-1}(\mathcal{S}_{\text{out}})\) is an intractable problem in general. Therefore, we strive to compute a tight over-approximation \(\mathcal{S}_{\text{over}}\) such that \(f^{-1}(\mathcal{S}_{\text{out}})\subseteq\mathcal{S}_{\text{over}}\). In particular, we will develop an algorithm to compute \(\mathcal{S}_{\text{over}}=\operatorname{CONV}\left(f^{-1}(\mathcal{S}_{\text {out}})\right)\), the convex hull of the pre-image, via a cutting-plane representation. In this paper, \(\mathcal{S}_{\text{out}}\) will always be defined by a set of linear constraints parameterized by \(\mathbf{H}f\left(\mathbf{x}\right)+\mathbf{d}\leq\mathbf{0}\).
### Applications
We outline two concrete applications of our approach.
**Backward Reachability Analysis for Neural Feedback Loops.** For control problems involving linear dynamical systems with quadratic cost functions, the optimal feedback policies can be proven to be linear Bertsekas (2000). However, many tasks require non-quadratic cost functions (for example, tasks requiring driving a system towards a desired safe set). In this case, linear feedback policies are no longer sufficient. Neural networks constitute a flexible class of nonlinear feedback policies. However, establishing safety guarantees for neural network policies is a challenging task. In particular, one problem of interest is to find a set of initial states that does not reach a particular set of future states under the neural network policy. This can be helpful in collision avoidance or control with safety constraints.
Under linear time-invariant dynamics, the state of a system is determined by the state transition function
\[\mathbf{x}_{t+1}=f(\mathbf{x}_{t})=\mathbf{A}\mathbf{x}_{t}+\mathbf{B}\pi(\mathbf{x}_{t})+ \mathbf{c}\]
where for \(x_{n},x_{u}\in\mathbb{N}\), \(\mathbf{x}_{t}\in\mathcal{X}\subseteq\mathbb{R}^{x_{n}}\) is the state at discrete timestep \(t\), \(\mathbf{A}\in\mathbb{R}^{x_{n}\times x_{n}}\) and \(\mathbf{B}\in\mathbb{R}^{x_{n}\times x_{u}}\) are known system matrices, \(\mathbf{c}\in\mathbb{R}^{x_{n}}\) is a known exogenous input, and \(\pi:\mathcal{X}\to\mathbb{R}^{x_{u}}\) is the state-feedback control policy (the decision the model takes given the state).
In the controls benchmark we use for our evaluation Hu et al. (2020); Everett et al. (2021, 2022), a robot is moving on the floor of a room with coordinates spanning \(\mathcal{X}=[-5,5]\times[-5,5]\). The position of the robot at time \(t+1\) can be directly computed based on the position at time \(t\), and follows the equation
\[\mathbf{x}_{t+1}=f(\mathbf{x}_{t})=\begin{bmatrix}1&1\\ 0&1\end{bmatrix}\mathbf{x}_{t}+\begin{bmatrix}0.5\\ 1\end{bmatrix}\pi(\mathbf{x}_{t})\]
with \(\pi:\mathbb{R}^{2}\to\mathbb{R}\). There is an obstacle in the room covering the region \([4.5,5.0]\times[-0.25,0.25]\) that the robot needs to avoid. It is of interest to understand whether, starting in a given state, the system will enter the unsafe region in the next time-step. We can represent this obstacle set with the linear constraints
\[\mathcal{S}_{\text{out}}=\left\{\mathbf{x}_{t+1}:\begin{bmatrix}-1&0\\ 0&-1\\ 1&0\\ 0&1\end{bmatrix}\mathbf{x}_{t+1}+\begin{bmatrix}4.5\\ -0.25\\ -5.0\\ -0.25\end{bmatrix}\leq\begin{bmatrix}0\\ 0\\ 0\\ 0\end{bmatrix}\right\}\]
\(f^{-1}(\mathcal{S}_{\text{out}})\) denotes the set of states \(\mathbf{x_{t}}\) such that \(\mathbf{x_{t+1}}\) lies in the unsafe region given the control policy \(\pi\). Overapproximating this set allows us to define the set of states to avoid one timestep in advance. We can compose \(f\) with itself \(t\) times to obtain the set of initial states where \(\pi\) would drive the system to the unsafe set after \(t\) steps.
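These dynamics are straightforward to roll out numerically; the following sketch (with a hypothetical placeholder policy standing in for the trained MLP \(\pi\)) checks whether a trajectory enters the obstacle set:

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])

def step(x, pi):
    """One step of the benchmark dynamics x_{t+1} = A x_t + B pi(x_t)."""
    return A @ x + B @ np.atleast_1d(pi(x))

def in_obstacle(x):
    return 4.5 <= x[0] <= 5.0 and -0.25 <= x[1] <= 0.25

pi = lambda x: -0.5 * x[1]          # placeholder policy, not the trained MLP
x = np.array([3.0, 0.5])
for t in range(10):
    x = step(x, pi)
    print(t + 1, x, in_obstacle(x))
```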
**Computing the set of inputs that leads to confident predictions.** Training a classifier that knows when it doesn't know is a challenging problem. A simple and effective approach is to train a classifier with an additional logit representing out-of-distribution-ness (OOD-ness). Labeled data for this additional class is generated via outlier exposure (Hendrycks et al., 2018), i.e., adding synthetic training data with features drawn from an OOD distribution and labels corresponding to the additional OOD logit (Chen et al., 2021).
Consider the logits \(y\) produced by a classifier trained in this manner for a binary classification task. The classifier can classify in-distribution data by comparing \(y_{0}\) and \(y_{1}\), the logits corresponding to the two in-distribution labels. The quantity \(\max(y_{0},y_{1})-y_{2}\), known as the logit gap, can be used to identify out-of-distribution data (Fig. 1).
A natural question to ask is: Can it correctly identify datapoints far from the training data as being OOD? In order to do so, one needs to compute the preimage of the set
\[\mathcal{S}_{\text{out}}=\{\mathbf{y}:\max(y_{0},y_{1})\geq y_{2}\}\]
We detail how to model this set with linear constraints in Section D.1. The algorithm INVPROP we develop in this paper enables us to compute an overapproximation of \(f^{-1}(\mathcal{S}_{\text{out}})\), thereby answering the question of whether the classifier learned to correctly identify OOD data.
## 3 Approach
We first provide an algorithm to find convex over-approximations of feed-forward ReLU networks. We then extend our method to non-convex over-approximations of general computational graphs.
### Convex over-approximation
Suppose we want to find a convex over-approximation of \(f^{-1}(\mathcal{S}_{\text{out}})\). The tightest such set is its convex hull, which is the intersection of all half-spaces that contain \(f^{-1}(\mathcal{S}_{\text{out}})\)(Boyd et al., 2004). This intersection is equivalent to \(\bigcap_{\mathbf{c}\in\mathbb{R}^{n}}\{\mathbf{x}:\mathbf{c}^{\top}\mathbf{x}\geq\min_{f(\mathbf{ x}^{\prime})\in\mathcal{S}_{\text{out}}}\mathbf{c}^{\top}\mathbf{x}^{\prime}\}\), which means that we can build an over-approximation by taking this intersection for finitely many \(\mathbf{c}\). Furthermore, replacing the minimization problem with a lower bound to its value still yields a valid half-space, where a tighter bound yields a tighter approximation.
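Concretely, given any sound routine for lower bounding \(\min_{\mathbf{x}}\mathbf{c}^{\top}\mathbf{x}\) over the constrained set, the over-approximation can be assembled as follows (a sketch for two-dimensional inputs; `lower_bound` is a placeholder for the INVPROP bound developed in the next section):

```python
import numpy as np

def halfspace_overapprox(lower_bound, num_dirs=40):
    """Assemble a polytope over-approximation of the preimage from a sound
    routine lower_bound(c) returning a certified lower bound on min c^T x
    subject to x in X and f(x) in S_out. Returns (C, b) such that the
    preimage is contained in {x : C x >= b}. Two-dimensional inputs only."""
    angles = np.linspace(0.0, 2 * np.pi, num_dirs, endpoint=False)
    C = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    b = np.array([lower_bound(c) for c in C])
    return C, b
```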
We focus on convex over-approximations for two reasons: First, checking that the pre-image satisfies a linear constraint \(\mathbf{c}^{\top}\mathbf{x}+d\geq 0\) is equivalent to minimizing the linear function \(\mathbf{x}\mapsto\mathbf{c}^{\top}\mathbf{x}\) over the pre-image, which is in turn equivalent to minimizing the linear function over \(\operatorname{CONV}\left(f^{-1}(\mathcal{S}_{\text{out}})\right)\)(Boyd et al., 2004). Second, the convex hull and its tractable over-approximations are conveniently represented as intersections of linear constraints on the pre-image, each of which can be quickly computed after a single precomputation phase to tighten bounds on intermediate neurons using the INVPROP algorithm outlined in the next section.
Figure 1: To improve our model’s ability to detect data outside the training distribution (of green and red points), we randomly sample points that are far from every training point. In the right contour, we see that the model’s confidence of in-distribution increases as we approach the centers of the distribution, demonstrating our model is more calibrated.
For the above reasons, we will focus on solving the following constrained optimization problem.
\[\min_{\mathbf{x}} \mathbf{c}^{\top}\mathbf{x}\] (1a) s.t. \[\mathbf{x}\in\mathcal{X};\quad f\left(\mathbf{x}\right)\in\mathcal{S}_{ \text{out}} \tag{1b}\]
Note that this differs from the forward verification problem widely studied in the literature, which can be phrased as
\[\min_{\mathbf{x}\in\mathcal{S}_{\text{in}}}\quad\mathbf{c}^{\top}f\left(\mathbf{x}\right)\]
where \(\mathcal{S}_{\text{in}}\) is a set representing constraints on the input, and the goal of the verification is to establish that the output \(f\left(\mathbf{x}\right)\) satisfies a linear constraint of the form \(\mathbf{c}^{\top}f\left(\mathbf{x}\right)\geq d\) for all \(x\in\mathcal{S}_{\text{in}}\).
### The INVPROP Algorithm
As a brief overview of our method, we first construct the Mixed Integer Linear Program (MILP) for solving this optimization problem, which generates the over-approximation \(\mathcal{S}_{\text{MILP}}\) when solved for finitely many \(\mathbf{c}\). Then, we relax the MILP to a Linear Program (LP), which can construct the over-approximation \(\mathcal{S}_{\text{LP}}\). Finally, we relax the LP via its Lagrangian dual, which will be used to construct our over-approximation \(\mathcal{S}_{\text{over}}\). This chain of relaxations is visualized in Figure 2. Though the diagram displays looseness, in the coming sections we provide a comprehensive methodology to arbitrarily reduce error in all three relaxations.
**The Mixed Integer Linear Programming (MILP) Formulation.** For feed-forward neural networks with ReLU non-linearities, this problem admits a MILP encoding similar to prior work in adversarial robustness Tjeng et al. (2017). The weights and biases at layer \(i\) are denoted by \(\mathbf{W}^{(i)},\mathbf{b}^{(i)}\). Let \(\hat{\mathbf{x}}^{(i)}\) denote the post-activations (following the application of the ReLU nonlinearity) at layer \(i\) and \(\mathbf{x}^{(i)}\) denote the pre-activations. Then, problem (1) is equivalent to:
\[\min_{\mathbf{x},\hat{\mathbf{x}},\mathbf{x}} \mathbf{c}^{\top}\mathbf{x}\] (2a) s.t. \[\mathbf{x}\in\mathcal{X};\hat{\mathbf{x}}^{(0)}=\mathbf{x};\quad\mathbf{H} \mathbf{x}^{(L)}+\mathbf{d}\leq\mathbf{0} \tag{2b}\] \[\mathbf{x}^{(i)}=\mathbf{W}^{(i)}\hat{\mathbf{x}}^{(i-1)}+\mathbf{b}^{(i )};\quad i\in[L],\] (2c) \[\hat{\mathbf{x}}^{(i)}=\max(0,\mathbf{x}^{(i)});\quad i\in[L-1] \tag{2d}\]
We note that, as opposed to prior work on verifying forward properties of neural networks (Zhang et al., 2022), our approach seeks to bound a linear function of the _inputs_ to the neural network, subject to constraints on the _output_ of the neural network. This makes it particularly challenging to directly apply bound propagation algorithms derived in prior work, as these algorithms rely on propagating bounds on the inputs to obtain tight bounds on intermediate activations. In our problem, we need to modify these algorithms to leverage constraints on the outputs of downstream layers to obtain bounds on intermediate or input neurons.
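For small networks, the MILP (2) can be written down directly. Below is a sketch for a one-hidden-layer ReLU network using the PuLP modeling library; the encoding follows (2), but the helper itself is illustrative and not our implementation:

```python
import pulp

def preimage_lb(c, W1, b1, W2, b2, H, d, lo, hi, l1, u1):
    """Lower-bound min c^T x s.t. lo <= x <= hi and H f(x) + d <= 0 for a
    one-hidden-layer ReLU net f(x) = W2 relu(W1 x + b1) + b2, following the
    MILP encoding (2). l1, u1 are valid hidden pre-activation bounds."""
    m, n = W1.shape
    prob = pulp.LpProblem("inverse_verification", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{j}", lo[j], hi[j]) for j in range(n)]
    xh = [pulp.LpVariable(f"xh{i}", 0) for i in range(m)]       # post-ReLU
    z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(m)]
    prob += pulp.lpSum(float(c[j]) * x[j] for j in range(n))    # objective
    for i in range(m):
        pre = pulp.lpSum(float(W1[i, j]) * x[j] for j in range(n)) + float(b1[i])
        prob += xh[i] >= pre
        prob += xh[i] <= pre - float(l1[i]) * (1 - z[i])        # big-M upper
        prob += xh[i] <= float(u1[i]) * z[i]
    y = [pulp.lpSum(float(W2[p, i]) * xh[i] for i in range(m)) + float(b2[p])
         for p in range(W2.shape[0])]
    for k in range(H.shape[0]):                                 # output set
        prob += pulp.lpSum(float(H[k, p]) * y[p] for p in range(len(y))) + float(d[k]) <= 0
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    return pulp.value(prob.objective)
```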
Figure 2: Visualization of relaxations. The inner green region depicts the true \(f^{-1}(\mathcal{S}_{\text{out}})\), the blue relaxation depicts the intersection of finite half-spaces solved via MILP, the red relaxation displays the same via LP, and the purple relaxation displays the same via INVPROP.
**The Linear Programming (LP) Formulation.** Unfortunately, finding an exact solution to this MILP is NP-complete Katz et al. (2017). To sidestep the intractability of exact verification, we can compute lower bounds for this program via its convex relaxation. In order to do so, we begin by finding bounds on the outputs of intermediate layers:
\[l_{j}^{(i)}\leq x_{j}^{(i)}\leq u_{j}^{(i)}\]
Based on these bounds, we can rewrite the last constraint of (2) as follows:
\[0\leq\hat{x}_{j}^{(i)} \leq u_{j}^{(i)}z_{j}^{(i)}\] \[x_{j}^{(i)}\leq\hat{x}_{j}^{(i)} \leq x_{j}^{(i)}-l_{j}^{(i)}(1-z_{j}^{(i)})\]
Relaxing the constraints on \(z\) to \(z_{j}^{(i)}\in[0,1]\), we obtain the following LP relaxation of (2):
\[\min_{\mathbf{x},\hat{\mathbf{x}},\mathbf{z}} \mathbf{c}^{\top}\mathbf{x}\] (3a) s.t. \[\mathbf{x}\in\mathcal{X};\;\mathbf{l}^{(0)}\leq\mathbf{x}\leq\mathbf{u}^{(0)};\;\hat{\mathbf{x}}^{(0)}=\mathbf{x}\] (3b) \[\mathbf{H}\mathbf{x}^{(L)}+\mathbf{d}\leq\mathbf{0}\] (3c) \[\mathbf{x}^{(i)}=\mathbf{W}^{(i)}\hat{\mathbf{x}}^{(i-1)}+\mathbf{b}^{(i)};\quad i\in[L]\] (3d) \[\mathbf{0}\leq\hat{\mathbf{x}}^{(i)}\leq\mathbf{u}^{(i)}\odot\mathbf{z}^{(i)}\quad i\in[L-1]\] (3e) \[\mathbf{x}^{(i)}\leq\hat{\mathbf{x}}^{(i)}\leq\mathbf{x}^{(i)}-\mathbf{l}^{(i)}\odot\left(\mathbf{1}-\mathbf{z}^{(i)}\right)\quad i\in[L-1]\] (3f) \[\mathbf{0}\leq\mathbf{z}^{(i)}\leq\mathbf{1}\quad i\in[L-1]\] (3g)
where the bounds \(\mathbf{l}^{(0)}\leq\mathbf{x}\leq\mathbf{u}^{(0)}\) are either the bounds already implicit in \(\mathcal{X}\), or a refinement of these obtained via previous rounds of bound propagation or input branching (see Algorithm 1).
Most efficient neural network verifiers do not solve an LP formulation of verification directly because LP solvers are often slow for problem instances involving neural networks. Instead, the bound propagation framework (Zhang et al., 2018; Wang et al., 2021; Zhang et al., 2022) is a practical and efficient way to lower bound the LP relaxation of the forward verification problem. However, there are _two major roadblocks_ to applying existing methods here: First, typical methods cannot directly handle the output constraints (Eq. 3c) and the objective involving input layer variables (Eq. 3a); second, the intermediate bounds \(\mathbf{l}\) and \(\mathbf{u}\) must be computed considering the output constraint (Eq. 3c), which is impossible with existing bound propagation methods, as discussed below.
**Example 1**.: Consider the toy neural network in Figure 3 under \(\mathcal{X}=[-2,2]\) and \(\mathcal{S}_{\text{out}}=\{y:1\leq y\leq 1.02\}\). Suppose we wanted to find the tightest \([l_{0}^{(1)},u_{0}^{(1)}]\) (bounds on \(x_{0}^{(1)}\)). If we only enforce the constraints preceding the ReLU (standard bound propagation), we get \([-2,2]\). If we only enforce the constraints following the ReLU, we get \((-\infty,1.02]\). However, when we utilize all of the constraints, we find that the true intermediate bounds are \([0,0.01]\), which is over \(100\) times tighter than the intersection of the two previous methods (all derivations provided in Appendix C). Therefore, new techniques are necessary for optimizing intermediate bounds in this setting.
**The Inverse Propagation (INVPROP) Formulation.** By changing the LP in Section 3.2 to optimize the quantity \(\hat{x}_{j}^{(0)}\) or \(x_{j}^{(i)}\) for \(i\in[L-1]\), the bounds for the \(j\)th neuron of layer \(i\) can be iteratively tightened in separate LP calls. However, this highly constrained program is too expensive to run multiple times for each neuron in a neural network.
Inspired by the success of CROWN-family robustness verifiers Zhang et al. (2018); Xu et al. (2021); Wang et al. (2021); Zhang et al. (2022), we efficiently lower bound the program by optimizing its Lagrangian dual. This dual is highly structured, allowing us to formulate quantities such as the input cutting plane and intermediate bounds as closed-form expressions of the dual variables. Our main generalization of the existing method is the ability to optimize variables with respect to constraints that are sequentially **after** them in the neural network. We present this contribution in the following theorem.

Figure 3: A simple neural network from \(\mathbb{R}\) to \(\mathbb{R}\). Every node is the sum of its incoming nodes unless the edge is labeled ReLU.
**Theorem 1** (Bounding cutting plane).: _Given an output set \(\mathcal{S}_{\text{out}}=\{\mathbf{y}:\mathbf{H}\mathbf{y}+\mathbf{d}\leq\mathbf{0}\}\) and vector \(\mathbf{c}\), \(g_{\mathbf{c}}(\mathbf{\alpha},\mathbf{\gamma})\) is a lower bound to the linear program in (3) for \(\mathbf{0}\leq\mathbf{\alpha}\leq\mathbf{1}\), \(\mathbf{\gamma}\geq\mathbf{0}\), and \(g_{\mathbf{c}}\) defined via_
\[g_{\mathbf{c}}(\mathbf{\alpha},\mathbf{\gamma})=\mathsf{ReLU}\left(\mathbf{c}-\mathbf{\nu}^{(1)\top}\mathbf{W}^{(1)}\right)\mathbf{l}^{(0)}-\mathsf{ReLU}\left(-\mathbf{c}+\mathbf{\nu}^{(1)\top}\mathbf{W}^{(1)}\right)\mathbf{u}^{(0)}-\sum_{i=1}^{L}\mathbf{\nu}^{(i)\top}\mathbf{b}^{(i)}+\sum_{i=1}^{L-1}\sum_{j\in\mathcal{I}^{\pm(i)}}\frac{u_{j}^{(i)}l_{j}^{(i)}[\hat{\nu}_{j}^{(i)}]_{+}}{u_{j}^{(i)}-l_{j}^{(i)}}\]
_where every term can be directly recursively computed via_
\[\mathcal{I}^{-(i)} =\{j:u_{j}^{(i)}\leq 0\}\text{ (Provably Off ReLUs)}\] \[\mathcal{I}^{+(i)} =\{j:l_{j}^{(i)}\geq 0\}\text{ (Provably On ReLUs)}\] \[\mathcal{I}^{\pm(i)} =\{j:l_{j}^{(i)}<0<u_{j}^{(i)}\}\text{ (Ambiguous ReLUs)}\] \[\mathbf{\nu}^{(L)} =-\mathbf{\gamma}\] \[\hat{\mathbf{\nu}}^{(i)} =\mathbf{W}^{(i+1)\top}\mathbf{\nu}^{(i+1)}\quad\forall i\in[L-1]\] \[\nu_{j}^{(i)} =\left\{\begin{array}{ll}\hat{\nu}_{j}^{(i)},&j\in\mathcal{I}^{+(i)}\\ 0,&j\in\mathcal{I}^{-(i)}\\ \frac{u_{j}^{(i)}}{u_{j}^{(i)}-l_{j}^{(i)}}[\hat{\nu}_{j}^{(i)}]_{+}-\alpha_{j}^{(i)}[\hat{\nu}_{j}^{(i)}]_{-},&j\in\mathcal{I}^{\pm(i)}\end{array}\right.\]
Proof.: Full proof is presented in Appendix B.
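To make the recursion concrete, the following NumPy sketch assembles \(g_{\mathbf{c}}\) for a feedforward ReLU network by backward propagation. It follows the statement of Theorem 1; setting `nu_L` from the output dual \(\mathbf{\gamma}\) is left to the caller, and the sketch is illustrative rather than our GPU implementation:

```python
import numpy as np

def g_lower_bound(c, Ws, bs, l0, u0, ls, us, alphas, nu_L):
    """Assemble the Theorem 1 lower bound g_c(alpha, gamma) by backward
    propagation. Ws[i], bs[i] hold W^(i+1), b^(i+1); ls[i], us[i] bound the
    pre-activations x^(i+1); alphas[i] in [0, 1] are relaxation slopes;
    nu_L is nu^(L), supplied from the output dual gamma."""
    relu = lambda v: np.maximum(v, 0.0)
    L = len(Ws)
    nu = nu_L
    g = -nu @ bs[L - 1]
    for i in range(L - 1, 0, -1):             # compute nu^(i) from nu^(i+1)
        nu_hat = Ws[i].T @ nu
        l, u, a = ls[i - 1], us[i - 1], alphas[i - 1]
        amb = (l < 0) & (u > 0)               # ambiguous ReLUs
        denom = np.where(amb, u - l, 1.0)     # safe denominator off the mask
        nu = np.where(l >= 0, nu_hat, 0.0)    # on: pass through; off: zero
        nu = np.where(amb, (u / denom) * relu(nu_hat) - a * relu(-nu_hat), nu)
        g += np.sum(np.where(amb, u * l * relu(nu_hat) / denom, 0.0))
        g -= nu @ bs[i - 1]
    coef = c - Ws[0].T @ nu                   # final coefficient on the input
    return g + relu(coef) @ l0 - relu(-coef) @ u0
```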
In Appendix A, we show how to also compute bounds on the values of intermediate neurons in the network using a similar approach. Since the intermediate bounds might depend on bounds on neurons in subsequent layers (due to the output constraint), we cannot simply optimize bounds in a single forward pass layer by layer, unlike prior work (Zhang et al., 2022). Instead, we must iteratively tighten these bounds, where in each pass all bounds are tightened given the tightest bounds on all neurons computed thus far. This iterative approach is able to tighten the bounds by several orders of magnitude, as shown in Figure 5. After performing this procedure once, the intermediate bounds can be used to tightly lower bound \(\mathbf{c}^{\top}\mathbf{x}\) for any \(\mathbf{c}\) via Theorem 1. Therefore, this computation can be shared across all the constraints \(\mathbf{c}\) we use to describe \(\mathcal{S}_{\text{over}}\). Our algorithm can be expressed in terms of forward/backward passes through layers of the neural network and implemented via standard deep learning modules in libraries like PyTorch (Paszke et al., 2019). Since all of the operations are auto-differentiable, we can tighten our lower bound using standard gradient ascent (projected by the dual variable constraints).
**Connection to forward verification.** Our bound in Theorem 1 introduces the dual variable \(\mathbf{\gamma}\), which enforces the output constraint during optimization. In fact, we can use this variable to obtain a better conceptual interpretation of our result. Consider the optimization problem in (1). Dualizing the output constraint \(\mathbf{H}f\left(\mathbf{x}\right)+\mathbf{d}\leq\mathbf{0}\) yields the following lower bound:
\[\max_{\mathbf{\gamma}}\min_{\mathbf{x}} \mathbf{c}^{\top}\mathbf{x}+\mathbf{\gamma}^{\top}\left(\mathbf{H}f\left(\mathbf{ x}\right)+\mathbf{d}\right)\] s.t. \[\mathbf{x}\in\mathcal{X};\quad\mathbf{\gamma}\geq 0\]
The objective of the inner minimization can be represented as minimizing a linear function of the output of a residual neural network with a skip connection from the input to the output layer, subject to constraints on the input. This is precisely the standard forward verification problem. This unifies the inverse verification problem with the forward verification problem, connecting our method to standard tools Brix et al. (2023); Xu et al. (2021); Ferrari et al. (2022). For example, as long as \(f\) is a
general computation graph with ReLU activations, Auto-LiRPA Xu et al. (2020) can be used to lower bound the objective function with a closed form expression parameterized by relaxation variables \(\mathbf{\alpha}\). With this, we can directly optimize our lower bound with respect to \(\mathbf{\alpha},\mathbf{\gamma}\).
When \(f\) is a feedforward ReLU network, the lower bound described in this section is precisely the same as Theorem 1 (since both are solving the dual of the same linear program). This shows that the introduction of \(\mathbf{\gamma}\) constitutes our generalization to the bound propagation framework.
### Branch and Bound
Our current formulation still faces two sources of looseness: the gap in \(\mathcal{S}_{\text{LP}}\supseteq\mathcal{S}_{\text{MILP}}\) and the gap in \(\mathcal{S}_{\text{MILP}}\supseteq f^{-1}(\mathcal{S}_{\text{out}})\). To overcome both of these issues, we can make use of branching. While there are several possibilities here, we focus on input branching, which gave the biggest empirical gains in our experiments described in Section 4. More concretely, we divide the input domain \(\mathcal{X}=[\mathbf{l}^{(0)},\mathbf{u}^{(0)}]\) into several regions by choosing a coordinate \(i\) and computing
\[\mathcal{X}_{a}=\mathcal{X}\cap\{\mathbf{x}:\mathbf{x}_{i}\geq s_{i}\},\mathcal{X}_{b} =\mathcal{X}\cap\{\mathbf{x}:\mathbf{x}_{i}\leq s_{i}\}\]
so that \(\mathcal{X}=\mathcal{X}_{a}\cup\mathcal{X}_{b}\). Doing this recursively, we obtain several regions and can compute an overapproximation of \(f^{-1}(\mathcal{S}_{\text{out}})\) when the input is in each of those regions, and take a union of the resulting sets. The finer the input domain we compute the over-approximation over, the tighter the approximation is.
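A sketch of this recursive splitting (where `overapprox` stands in for a run of INVPROP restricted to a cell):

```python
def branch_and_bound(box, overapprox, depth=3):
    """Recursively split the input box along its widest coordinate and
    over-approximate the preimage in each cell; the union of the results
    over-approximates the full preimage. overapprox(lo, hi) stands in for
    a run of INVPROP restricted to the cell [lo, hi]."""
    lo, hi = box
    if depth == 0:
        return [overapprox(lo, hi)]
    i = max(range(len(lo)), key=lambda j: hi[j] - lo[j])
    s = 0.5 * (lo[i] + hi[i])
    lo_a, hi_a = list(lo), list(hi); hi_a[i] = s
    lo_b, hi_b = list(lo), list(hi); lo_b[i] = s
    return (branch_and_bound((lo_a, hi_a), overapprox, depth - 1)
            + branch_and_bound((lo_b, hi_b), overapprox, depth - 1))
```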
## 4 Results
We demonstrate the impact of our approach on a controls benchmark Rober et al. (2022a,b) and OOD detection. For the established controls benchmark, we significantly outperform the state-of-the-art.
We measure the tightness of an over-approximation using its Approximation Ratio, defined as \(\frac{\text{vol}(\mathcal{S}_{\text{over}})}{\text{vol}(f^{-1}(\mathcal{S}_{\text{out}}))}\). Both of these volumes are heuristically estimated with 1 million evenly distributed samples of the input space.
While INVPROP allows the optimization of arbitrary cutting planes, all experiments were performed using 40 planes with slopes of equally distributed angles. All implementation details are described in Section D.
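The Approximation Ratio estimate itself is a straightforward Monte Carlo computation; a sketch with hypothetical membership predicates:

```python
import numpy as np

def approx_ratio(in_preimage, in_overapprox, lo, hi, n=1_000_000, seed=0):
    """Monte Carlo estimate of vol(S_over) / vol(f^{-1}(S_out)) from uniform
    samples of the input box [lo, hi]."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n, len(lo)))
    hits_true = sum(in_preimage(x) for x in X)
    hits_over = sum(in_overapprox(x) for x in X)
    return hits_over / max(hits_true, 1)
```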
### Backward Reachability Analysis for Neural Feedback Loops
After combining (see Section D.2) the benchmark's state-feedback control policy \(\pi\) with the state transition function (see Section 2.3), we get a three-layer MLP with \(12\), \(7\), and \(2\) neurons, which is typical for applications in this setting. We model multiple time steps by composing copies of this function; for example, the \(10\) time step transition function can be represented as a \(21\) layer MLP (after fusing consecutive linear layers in the composition). Moreover, we leverage the fact that intermediate bounds for the \(t\) time step transition can be reused for the \(t+1\) time step transition.
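The fusion of consecutive linear layers used here follows from \(\mathbf{W}_{2}(\mathbf{W}_{1}\mathbf{x}+\mathbf{b}_{1})+\mathbf{b}_{2}=(\mathbf{W}_{2}\mathbf{W}_{1})\mathbf{x}+(\mathbf{W}_{2}\mathbf{b}_{1}+\mathbf{b}_{2})\), for example:

```python
import numpy as np

def fuse_affine(W2, b2, W1, b1):
    """Fuse y = W2 (W1 x + b1) + b2 into the single affine map
    y = (W2 W1) x + (W2 b1 + b2)."""
    return W2 @ W1, W2 @ b1 + b2
```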
We find that INVPROP is significantly tighter and faster than ReBreach-LP, the state-of-the-art method for this problem Rober et al. (2022a,b). As evident in Figure 4, ReBreach-LP suffers from increasingly weak bounds as \(t\) increases, whereas our approach computes tight bounds for all \(t\). Unlike their implementation, we are able to factor the output constraint into the intermediate neuron bounds (see Figure 6), and we can iteratively tighten these bounds with respect to each other. Even though we optimize many more quantities, our efficient bound propagation allows us to solve the problem faster overall. Figure 5 visualizes the improvement of the approximation ratio and intermediate bounds over time. We also support the partitioning of the input space introduced by Rober et al. 2022a and the polytope bounds introduced by Everett et al. 2022. Both our increased tightness and speed are quantified in Table 1.
### OOD Detection
Consider the calibrated OOD detector presented in Figure 1, encoded by a four layer MLP with \(200\), \(200\), \(3\), and \(2\) neurons. We compute the set of inputs which induce a sufficiently high ID confidence (measured by \(\max\{y_{0},y_{1}\}>y_{2}\)), pictured in green in Figure 7. This set is non-convex, making the convex hull a poor over-approximation. With some simple input space branching, we can get a much tighter over-approximation, as shown in the right plot. We compare the performance of our approach with and without branching over the input space with the MILP baseline (see Table 2).
| Method | Input Branching | Approx Ratio | Time (sec) |
|---|---|---|---|
| ReBreach-LP | yes | 4021.47 | \(42.86\pm 0.04\) |
| INVPROP | no | 1.46 | \(17.89\pm 0.03\) |

Table 1: Comparison of our method vs SOTA in bounding the inputs that will collide with an obstacle in 10 time steps (as shown in Figure 4). We observe that we are significantly tighter and faster than their method.
Figure 4: In both of the above plots, we over-approximate the set of states that would collide with the obstacle (black square) at \([4.5,5.0]\times[-0.25,0.25]\). We carry out this task for up to \(10\) steps back in time. Each green region is an approximation of the preimages sets via \(1\) million samples. The error of ReBreach-LP exponentially blows up further back in time while our bounding polytopes of 40 edges continue to stay at near perfect tightness.
## 5 Related Work
**Formally Verified Neural Networks.** There has been a large body of work on the formal verification of neural networks, tackling the problem from the angles of Interval Bound Propagation Gowal et al. (2018); Gehr et al. (2018); Convex Relaxations Wong and Kolter (2018); Shiqi et al. (2018); Salman et al. (2019); Dvijotham et al. (2018); Abstract Interpretation Singh et al. (2018); Linear Programming Ehlers (2017); Semidefinite Programming Raghunathan et al. (2018); Satisfiability Modulo Theories Katz et al. (2017); and Mixed Integer Linear Programming Tjeng et al. (2017). However, most of this work is in the different setting of forward verification. Our work strictly generalizes the bound propagation framework presented in Xu et al. 2021, since setting \(\mathbf{\gamma}=\mathbf{0}\) in Theorem 1 recovers its results.
**Formal Verification for Neural Feedback Loops.** Our work was motivated by the growing body of work on backward reachability analysis and over-approximating states that result in a target set Everett et al. (2021). The original method solely utilized input constraints for deriving intermediate bounds. The later development of Everett et al. 2022 improved upon this by optimizing \(\mathbf{l}^{(0)}\) and \(\mathbf{u}^{(0)}\) with respect to output bounds. The work of Vincent and Schwager 2021 explores utilizing an LP with complete neuron branching to verify control safety, which can be viewed as a domain-specific implementation of our MILP formulation of the inverse verification problem. Both approaches share the ability to be terminated at any time for a looser approximation.

Figure 5: Our approach enables us to iteratively tighten the bounds of all layers, with each iteration allowing for a smaller approximation ratio with respect to the true preimage. Green: sum of bound intervals for all neurons in the third layer (second hidden layer); blue: resulting approximation ratio. Measured for step \(t=10\) of the control benchmark. One iteration updates all bounds of layers not previously optimized in step \(t=9\) as well as all cutting hyperplanes. Note that the improvement from iterative tightening is two orders of magnitude for intermediate bounds, and four orders of magnitude for the approximation ratio.

Figure 6: Comparison of constraints enforced in forward propagation, SOTA controls methods, and our approach. Forward propagation (blue arrows) cannot tighten intermediate bounds with respect to an output constraint. ReBreach-LP and DRIP (blue and red arrows) use the output constraint to tighten the input bounds, and use these input bounds to tighten the intermediate bounds. We (blue, red, and green arrows) tighten all intermediate bounds with respect to input and output constraints simultaneously.
**Certified OOD detection.** There is a wide variety of OOD detection systems Yang et al. (2021); Salehi et al. (2022). Commonly, they are evaluated empirically based on known OOD data Wang et al. (2022); Liang et al. (2018); Chen et al. (2021), and therefore fail to provide verified guarantees of their effectiveness. In fact, many OOD detection systems are susceptible to adversarial attacks Sehwag et al. (2019); Chen et al. (2022). Meinke et al. (2021) show how to verify robustness to adversarial examples around given input samples. Berrada et al. (2021) develop a general framework for certifying properties of output distributions of neural networks given constraints on the input distribution; they study distributionally robust OOD detection, where OOD data can be perturbed by noise drawn from a collection of distributions. However, this work is still constrained to verifying properties of the outputs given (albeit probabilistic) constraints on the inputs. In contrast, INVPROP is able to certify arbitrary regions of the input space that lead to confident predictions.
## 6 Discussion
In this work we present the challenge of over-approximating neural network preimages and provide an efficient algorithm to solve it. By doing so, we demonstrate strong performance on multiple application areas. This work provides a first attempt at this problem and we believe there is a large scope for future investigation and new applications of our contribution.
| Method | Input Branching | Approx Ratio | Time (sec) |
|---|---|---|---|
| MILP | no | 1.47 | 1562.26 |
| INVPROP | no | 4.39 | \(3.77\pm 0.02\) |
| INVPROP | yes | 1.14 | \(12.02\pm 0.06\) |

Table 2: Comparison of methods for over-approximating the preimage of the OOD region shown in Figure 1 using 40 half-spaces.
Figure 7: In both plots, the green regions approximate inputs which the model believes are in-distribution (generated from 1 million samples). On the left, the blue border traces out the intersection of the half-spaces we tightened without branching. On the right, we do the same with input space branching and display the union of multiple sets. We observe that branching produced a tighter and more expressive over-approximation.
**Limitations.** Our current implementation does not support neuron and output branching, and the input branching strategy we implemented will be less effective for high-dimensional inputs.
**Future Work.** Scaling this verification to higher-dimensional tasks such as standard image datasets would require addressing additional challenges: the preimage may not be well represented by an intersection of half-spaces, and the difficulty of uniformly sampling cutting planes \(\mathbf{c}\) will complicate measuring the efficacy of our methods.
**Potential Negative Social Impact.** Our work improves reliable ML by facilitating the deployment of systems that provably align with practitioner expectations. As such, we expect INVPROP to have positive societal impact. We acknowledge that our improvements to verification may be repurposed to improve attacks given access to a model, though we believe the positive use cases of our technique greatly outweigh current speculation of misuse.
## 7 Acknowledgements
We thank Michael Everett and Nicholas Rober for helpful discussion and feedback on the paper. |
2303.16251 | Function Approximation with Randomly Initialized Neural Networks for
Approximate Model Reference Adaptive Control | Classical results in neural network approximation theory show how arbitrary
continuous functions can be approximated by networks with a single hidden
layer, under mild assumptions on the activation function. However, the
classical theory does not give a constructive means to generate the network
parameters that achieve a desired accuracy. Recent results have demonstrated
that for specialized activation functions, such as ReLUs and some classes of
analytic functions, high accuracy can be achieved via linear combinations of
randomly initialized activations. These recent works utilize specialized
integral representations of target functions that depend on the specific
activation functions used. This paper defines mollified integral
representations, which provide a means to form integral representations of
target functions using activations for which no direct integral representation
is currently known. The new construction enables approximation guarantees for
randomly initialized networks for a variety of widely used activation
functions. | Tyler Lekang, Andrew Lamperski | 2023-03-28T18:55:48Z | http://arxiv.org/abs/2303.16251v2 | # Function Approximation with Randomly Initialized Neural Networks
###### Abstract
Classical results in neural network approximation theory show how arbitrary continuous functions can be approximated by networks with a single hidden layer, under mild assumptions on the activation function. However, the classical theory does not give a constructive means to generate the network parameters that achieve a desired accuracy. Recent results have demonstrated that for specialized activation functions, such as ReLUs, high accuracy can be achieved via linear combinations of _randomly initialized_ activations. These recent works utilize specialized integral representations of target functions that depend on the specific activation functions used. This paper defines _mollified integral representations_, which provide a means to form integral representations of target functions using activations for which no direct integral representation is currently known. The new construction enables approximation guarantees for randomly initialized networks using any activation for which there exists an established base approximation, which may not be constructive. We extend the results to the supremum norm and show how this enables application to an extended, approximate version of (linear) model reference adaptive control.
## I Introduction
Adaptive control lies at the intersection of machine learning methods and the control of dynamical systems [1], and has seen considerable development in the controls literature [2, 3, 4]. Deep neural network based machine learning approaches have achieved unprecedented levels of performance and utility in areas such as language processing [5], medical computer vision [6], and reinforcement learning [7]. In this paper, we aim to contribute novel ideas to function approximation techniques and apply them to an approximate extension of Model Reference Adaptive Control (MRAC) established in [8].
A key challenge in function approximation work is that many theorems describe the existence of approximations that are linear combinations of some activation function (such as a sigmoid), but are not constructive. In [9], many of these results are reviewed, including classic results on the universal approximation properties of continuous sigmoidal activations [10, 11, 12], as well as another classic result by [13] for arbitrary sigmoidal activations, which was expanded to arbitrary hinge functions by [14].
Random approximation provides a key way to avoid this non-constructiveness: if the target function of interest has an integral representation, the internal parameters of the activations can be randomly sampled from a known set. However, determining such representations for general classes of target functions and activations is a nontrivial task. [15] gives a constructive method, but only for target and activation functions in \(L_{1}\). [16] and [17] propose constructive methods for a class of target functions with unit step and ReLU activations, respectively. In [18], functions are approximated using trigonometric polynomial ridge functions, which can then be shown, in expectation, to be equivalent to randomly initialized ReLU activations.
There are several interesting and important results in the literature having to do with neural networks with random (typically Gaussian) parameter initialization, sometimes as a consequence of using randomly initialized gradient descent for training the network. A classic result by [19] shows that the output of a single hidden-layer network with Gaussian randomly initialized parameters goes to a Gaussian Process as the width goes to infinity. Similar results are then achieved in [20] for deep fully-connected networks as all hidden layer widths go to infinity. Also for deep fully-connected networks, in [21] the authors define the Neural Tangent Kernel and propose that its limit, as the hidden layer widths go to infinity, can be used to study the timestep evolution and dynamics of the parameters, and the corresponding network output function, in gradient descent. In [22], the authors show that single hidden-layer networks cannot achieve the same rates of increase in a measure of curvature produced by the network output, as deep networks can, with parameters Gaussian randomly initialized and bounded activation functions. In [23], the authors show that single hidden-layer networks of a sufficient width can use Gaussian randomly initialized gradient descent on values of a target function, and achieve guaranteed generalization to the entire function. And in [24], the authors show that deep fully-connected networks, where each hidden layer width meets a sufficient size, can be trained with Gaussian randomly initialized gradient descent and be guaranteed to reach the global minimum at a linear rate.
Our primary contributions in this paper are: developing a novel method for bridging convex combinations of activation functions (e.g., step and ReLU) with unknown parameters to approximations using randomly initialized parameters, by means of a mollified integral representation; obtaining a main overall result bounding the approximation error of the random approximations in the \(L_{2}\) norm; extending these results to the \(L_{\infty}\) norm under certain conditions; and then applying those results to an approximate extension of MRAC adaptive control.
The organization of the remaining parts of the paper is as follows. Section II presents preliminary notation. Section III presents background on function approximation with neural networks. Section IV presents theoretical results on mollified approximations, while Section V gives an overall approximation error bound in \(L_{2}\) norm for randomly initialized approximations. Section VI presents an extension to \(L_{\infty}\) norm bounds and an application to approximate MRAC, and we provide closing remarks in Section VII.
## II Notation
We use \(\mathbb{R},\mathbb{N}\) to denote the real and natural numbers. \(\mathbb{I}\) denotes the indicator function, while \(\mathbb{E}\) denotes the expected value. We use \(\mathcal{B},\mathcal{K}\) to denote bounded and compact convex subsets of \(\mathbb{R}^{n}\). The integer set \(\{1,\ldots,k\}\) is denoted by \([k]\). \(B_{x}(r)\subset\mathbb{R}^{n}\) denotes the radius \(r\) Euclidean ball centered at \(x\in\mathbb{R}^{n}\). We interpret \(w,x\in\mathbb{R}^{n}\) as column vectors and denote their inner product as \(w^{\top}x\). Similarly, we denote the product of matrix \(W\in\mathbb{R}^{n\times N}\) with vector \(x\) as \(W^{\top}x\), which is a length \(N\) vector where the \(i\)th element is the inner product \(W^{\top}_{i}x\). Subscripts on vectors and matrices denote the row index, for example \(W^{\top}_{i}\) is the \(i\)th row of \(W^{\top}\). The standard Euclidean norm is denoted \(\|w\|\). We use subscripts on time-dependent variables to reduce parentheses, for example \(\phi(x(t))\) is instead denoted \(\phi(x_{t})\). The \(i\)th row of a time-varying vector or matrix is then \(x_{t,i}\).
## III Background on Function Approximation with Neural Networks
In this paper, we consider the approximation of target functions \(f:\mathbb{R}^{n}\to\mathbb{R}\) that are continuous on a bounded set \(\mathcal{B}\subset\mathbb{R}^{n}\) containing the origin, which we denote as \(f\in C(\mathcal{B})\).
Formally, for a given target function \(f\in C(\mathcal{B})\), positive integer \(N\in\mathbb{N}\), and positive scalar \(\varepsilon_{N}>0\) (which may depend on \(N\)) there exists some vector \(\theta\in\mathbb{R}^{N}\) and nonlinear, continuous almost everywhere basis functions \(\mathcal{B}_{1},\ldots,\mathcal{B}_{N}:\mathbb{R}^{n}\to\mathbb{R}\) such that the approximation
\[\widehat{f}(x)\ :=\ \sum_{i=1}^{N}\theta_{i}\,\mathcal{B}_{i}(x) \tag{1}\]
satisfies
\[\left\|\widetilde{f}(x)-\widehat{f}(x)\right\|_{\mathcal{B}}\ \leq\ \varepsilon_{N}\ . \tag{2}\]
Here, we denote the \(L_{2}(\mathcal{B},\bar{\mu}_{\mathcal{B}})\) norm

\[\left\|\widetilde{f}(x)\right\|_{\mathcal{B}}\ :=\ \left(\int_{\mathcal{B}}|\widetilde{f}(x)|^{2}\,\bar{\mu}_{\mathcal{B}}(\mathrm{d}x)\right)^{\frac{1}{2}}\]

with \(\bar{\mu}_{\mathcal{B}}\) as the uniform probability measure over \(\mathcal{B}\) such that \(\int_{\mathcal{B}}\bar{\mu}_{\mathcal{B}}(\mathrm{d}x)=1\), and some _affine shift_ \(\widetilde{f}(x)=f(x)-a^{\top}x-\ell\), for some \(a\in\mathbb{R}^{n}\) and \(\ell\in\mathbb{R}\).
We will assume that the basis function is parameterized \(\mathcal{B}(x\,;\gamma)\) and is a composition of some nonlinear, continuous almost everywhere function \(\sigma:\mathbb{R}\to\mathbb{R}\) together with an affine transformation of \(\mathbb{R}^{n}\) into \(\mathbb{R}\) according to the _parameter_\(\gamma\in\Upsilon\). We define each point \(\gamma:=[w^{\top}\ b]^{\top}=[w_{1}\ \ldots\ w_{n}\ b]^{\top}\) with \(w\in\mathbb{R}^{n}\backslash\{0_{n}\}\) as the _weight vector_ and \(b\in\mathbb{R}\) as the _bias term_. We assume the _parameter set_\(\Upsilon\) is a bounded subset of \(\mathbb{R}^{n}\backslash\{0_{n}\}\times\mathbb{R}\). And so, we set \(\mathcal{B}(x\,;\gamma_{i})=\sigma(w^{\top}_{i}x+b_{i})=\sigma(\gamma^{\top}_ {i}X)\), where \(X:=[x_{1}\ \cdots\ x_{n}\ 1]^{\top}\) and we denote \(\gamma_{i}=[w^{\top}_{i}\ b_{i}]^{\top}=[w_{i,1}\ \ldots\ w_{i,n}\ b_{i}]^{\top}\), for all \(i\in[N]\).
We will use the term _base approximation_ to define, for some vector \(\theta\in\mathbb{R}^{N}\) and _fixed_ parameters \(\gamma_{1},\ldots,\gamma_{N}\in\Upsilon\), for any integer \(N\in\mathbb{N}\), the sum
\[\widehat{f}_{N}(x)=\sum_{i=1}^{N}\theta_{i}\,\sigma(\gamma^{\top}_{i}X)\ \, \tag{3}\]
and _random approximation_ to define, for some vector \(\vartheta\in\mathbb{R}^{R}\) and _randomly sampled_ parameters \(\gamma_{1},\ldots,\gamma_{R}\in\Upsilon\), for any integer \(R\in\mathbb{N}\), the sum
\[\widehat{\mathcal{H}}_{R}(x\,;\gamma_{1},\ldots,\gamma_{R})=\sum_{j=1}^{R} \vartheta_{j}\,\sigma(\boldsymbol{\gamma}^{\top}_{j}X)\ . \tag{4}\]
Note that these linear combinations are equivalent to a feedforward neural network with a single hidden layer, as depicted in Figure 1, where \(x_{0}=1\). This is why we refer to \(\sigma\) as an _activation function_: the output of each neuron is the activation of the weighted inputs (plus bias term) to the neuron. We will assume that the activation function \(\sigma\) is bounded (growth) as follows.
**Assumption 1**: The nonlinear, continuous almost everywhere activation function \(\sigma:\mathbb{R}\to\mathbb{R}\) is bounded (growth) over all feasible inputs, such that either of
1. \(|\sigma(u)-\sigma(v)|\ \leq\ \mathcal{D}_{\sigma}\)
2. \(|\sigma(u)-\sigma(v)|\ \leq\ \mathcal{L}_{\sigma}\,|u-v|\)
_hold for all \(u,v\in\mathcal{I}:=\{\gamma^{\top}X\mid(\gamma,X)\in\Upsilon\times(\mathcal{B}\times\{1\})\}\), with positive scalar \(\mathcal{D}_{\sigma}\) or \(\mathcal{L}_{\sigma}\)._
If we shift the activation to the origin as \(\breve{\sigma}(y):=\sigma(y)-\sigma(0)\), this maintains the assumption with the same constant in either case. The (scaled) step function \(\sigma_{c\mathcal{S}}\) and the ReLU function \(\sigma_{\mathcal{R}}\), defined as
\[\sigma_{c\mathcal{S}}(y) =\begin{cases}0&\text{if }y\leq 0\\ c&\text{if }y>0\end{cases}\qquad\text{(scaled step)} \tag{5}\] \[\sigma_{\mathcal{R}}(y) =\begin{cases}0&\text{if }y\leq 0\\ y&\text{if }y>0\end{cases}\qquad\text{(ReLU)}\ \, \tag{6}\]
are examples meeting Assumption 1, with \(\mathcal{D}_{\sigma}=c\) and \(\mathcal{L}_{\sigma}=1\) respectively, where \(c>0\) is an arbitrary positive scalar.
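To ground these definitions, the following minimal sketch (our own illustration, not from the paper: the target \(f(x)=\sin(3x)\), the sampling ranges, and the least-squares fit of the outer coefficients are all arbitrary choices) builds a random approximation of the form (4) with ReLU activations, drawing the internal parameters \(\boldsymbol{\gamma}_{j}=(w_{j},b_{j})\) at random and fitting only the outer coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function on B = [-1, 1] (an illustrative choice).
f = lambda x: np.sin(3.0 * x)
x = np.linspace(-1.0, 1.0, 400)

# Randomly sample R internal parameters gamma_j = (w_j, b_j) from a bounded set.
R = 200
w = rng.uniform(-5.0, 5.0, size=R)
b = rng.uniform(-5.0, 5.0, size=R)

# Random ReLU features: Phi[k, j] = sigma(w_j * x_k + b_j).
Phi = np.maximum(w[None, :] * x[:, None] + b[None, :], 0.0)

# Only the outer coefficients are fit (here by least squares, rather than by the
# integral-representation construction developed below); the internal
# parameters stay at their random initialization.
theta, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)

rmse = np.sqrt(np.mean((Phi @ theta - f(x)) ** 2))
print(f"RMS error of the random approximation: {rmse:.2e}")
```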
Fig. 1: Single hidden-layer neural network with \(\sigma\) activation function.
## IV Mollified Approximations
In this section, we define the _mollified approximation_\(\widehat{f}_{\lambda,N}\) using a novel method of forming a _mollified integral representation_ for a base approximation \(\widehat{f}_{N}\) to a target function \(f\in C(\mathcal{B})\), and bound their difference in the \(L_{2}(\mathcal{B},\bar{\mu}_{\mathcal{B}})\) norm.
### _Functions with Integral Representation_
If a target function \(f:\mathbb{R}^{n}\to\mathbb{R}\) has an _integral representation_ over the parameters set \(\Upsilon\) with some activation function \(\sigma:\mathbb{R}\to\mathbb{R}\), then
\[f(x)\ =\ \int_{\Upsilon}\,g(\gamma)\,\sigma(\gamma^{\top}X)\,\mathrm{d}\gamma \tag{7}\]
holds with some bounded, continuous _coefficient function_\(g:\Upsilon\to\mathbb{R}\). This is a version of (3) that is continuous in the parameters \(\gamma_{i}\) over \(\Upsilon\). Indeed, if we define \(\gamma_{1},\ldots,\gamma_{N}\) to be a sequence of points that uniformly grids \(\Upsilon\) with volume \(\mathrm{d}_{N}\) between any neighboring points, then as \(N\to\infty\) (and \(\mathrm{d}_{N}\to 0\)) we have
\[\lim_{N\to\infty}\sum_{i=1}^{N}g(\gamma_{i})\,\sigma(\gamma_{i}^{\top}X)\, \mathrm{d}_{N}=\int_{\Upsilon}\,g(\gamma)\,\sigma(\gamma^{\top}X)\,\mathrm{d} \gamma=f(x)\.\]
This implies that the (infinite) sum of the absolute values of the coefficients is finite, with
\[\lim_{N\to\infty}\sum_{i=1}^{N}|g(\gamma_{i})|\,\mathrm{d}_{N}\ =\ \int_{\Upsilon}|g(\gamma)|\,\mathrm{d}\gamma\ <\ \infty\]
holding since \(g\) is bounded and the integral is over the bounded set \(\Upsilon\).
However, finding such integral representations (7) for general classes of functions \(f\) and activation functions \(\sigma\) is a nontrivial task. If _both_ are \(L_{1}\) (absolutely integrable over \(\mathbb{R}^{n}\) and \(\mathbb{R}\) respectively), then Theorem 1 in [15] shows that the integral representation can always be constructed with \(g\) related to the Fourier transforms of \(f\) and \(\sigma\). But this is a limiting restriction, since none of the typical activation functions like sigmoids, steps, ReLUs, and their variants meet this requirement. Even so, in [12] this was used to prove the universal approximation properties of sigmoid activations.
### _Mollified Integral Representation_
Now we will propose a novel method of forming an integral representation for _any_ function \(f\in C(\mathcal{B})\), so long as a base approximation \(\widehat{f}_{N}\) of the form (3) exists and the bounded parameter set \(\Upsilon\subset\mathbb{R}^{n+1}\) and a bound \(\mathcal{S}_{N}\) on the coefficient size \(\sum_{i=1}^{N}|\theta_{i}|\) are known.
For any fixed parameters \(\gamma_{1},\ldots,\gamma_{N}\in\Upsilon\), this approximation \(\widehat{f}_{N}\) can be defined as
\[\widehat{f}_{N}(x) =\ \sum_{i=1}^{N}\theta_{i}\,\sigma(\gamma_{i}^{\top}X)\] \[=\ \int_{\Upsilon}\,\sum_{i=1}^{N}\theta_{i}\,\delta^{n+1}( \gamma-\gamma_{i})\ \sigma(\gamma^{\top}X)\,\mathrm{d}\gamma\ \, \tag{8}\]
where \(\delta^{n+1}\) is the \((n+1)\)-dimensional Dirac delta which satisfies for any \(\gamma_{i}\in\Upsilon\) that \(\int_{\Upsilon}\,\delta^{n+1}(\gamma-\gamma_{i})\,\sigma(\gamma^{\top}X)\, \mathrm{d}\gamma=\sigma(\gamma_{i}^{\top}X)\.\)
Thus, the RHS of (8) is like an integral representation (7) of \(\widehat{f}_{N}\) with a coefficient mapping \(g(\gamma)=\sum_{i=1}^{N}\theta_{i}\,\delta^{n+1}(\gamma-\gamma_{i})\). However, this mapping is not continuous and bounded as required, and the Dirac delta is not even a function in the regular sense. Indeed, if the overall coefficients of a random approximation \(\widehat{\mathcal{H}}_{R}\) were set as \(\vartheta_{j}=g(\boldsymbol{\gamma}_{j})\) for each randomly sampled \(\boldsymbol{\gamma}_{1},\ldots,\boldsymbol{\gamma}_{R}\in\Upsilon\), then they would almost surely all be zero.
We propose the _mollified integral representation_ to solve this issue, where the Dirac delta in (8) is replaced with the _mollified_ delta function \(\hat{\delta}_{\lambda}^{n+1}\). Recalling that \(\gamma=[w_{1}\ \cdots\ w_{n}\ b]^{\top}\), this is defined as the \((n+1)\)-dimensional bump function

\[\hat{\delta}_{\lambda}^{n+1}(\gamma)\ :=\ \begin{cases}(\eta\lambda)^{n+1}\,\prod_{i=1}^{n}\exp\left(\dfrac{-1}{1-\lambda^{2}w_{i}^{2}}\right)\exp\left(\dfrac{-1}{1-\lambda^{2}b^{2}}\right)&\gamma\in\left(-\frac{1}{\lambda},\frac{1}{\lambda}\right)^{n+1}\\[1ex] 0&\text{otherwise}\end{cases} \tag{9}\]

where \(\eta\) is the normalizing constant of the one-dimensional unit bump function, chosen so that \(\int_{\mathbb{R}^{n+1}}\hat{\delta}_{\lambda}^{n+1}(\gamma)\,\mathrm{d}\gamma=1\).
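As a quick numerical sanity check (our own sketch, not from the paper; the quadrature grids and the choice \(\lambda=4\) are arbitrary), the snippet below evaluates the one-dimensional unit bump, computes the normalizing constant \(\eta\), and confirms that the scaled bump integrates to one:

```python
import numpy as np

def unit_bump(y):
    """exp(-1 / (1 - y^2)) on (-1, 1), zero elsewhere."""
    out = np.zeros_like(y)
    inside = np.abs(y) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - y[inside] ** 2))
    return out

# eta normalizes the unit bump to integrate to one (simple Riemann sum).
y = np.linspace(-1.0, 1.0, 400001)
eta = 1.0 / (unit_bump(y).sum() * (y[1] - y[0]))

# Check: eta * lam * bump(lam * t) integrates to one for any mollification factor lam.
lam = 4.0
t = np.linspace(-1.0 / lam, 1.0 / lam, 400001)
mass = (eta * lam * unit_bump(lam * t)).sum() * (t[1] - t[0])
print(f"eta = {eta:.6f}, integral of the mollified delta = {mass:.6f}")
```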
For the corresponding \(\Upsilon_{\lambda}\), we define the diameter \(D_{\Upsilon_{\lambda}}:=\sup\{\|\boldsymbol{\gamma}-\phi\|_{2}\mid\boldsymbol{ \gamma},\phi\in\Upsilon_{\lambda}\}\) and largest point \(\gamma_{\lambda\max}:=\sup\{\|\boldsymbol{\gamma}\|_{2}\mid\boldsymbol{\gamma} \in\Upsilon_{\lambda}\}\), and for the selected \(P_{\lambda}\) we define the minimum \(P_{\lambda\min}:=\inf\{P_{\lambda}(\boldsymbol{\gamma})\mid\boldsymbol{\gamma} \in\Upsilon_{\lambda}\}\).
Then, note that the RHS of (11) is equivalently
\[\widehat{f}_{\lambda,N}(x)\ =\ \int_{\Upsilon_{\lambda}}\frac{g_{\lambda}(\boldsymbol{\gamma})}{P_{\lambda}(\boldsymbol{\gamma})}\,\sigma(\boldsymbol{\gamma}^{\top}X)\,P_{\lambda}(\boldsymbol{\gamma})\,\mathrm{d}\boldsymbol{\gamma}\ =\ \mathop{\mathbb{E}}_{\boldsymbol{\gamma}\sim P_{\lambda}}\left[\frac{g_{\lambda}(\boldsymbol{\gamma})}{P_{\lambda}(\boldsymbol{\gamma})}\,\sigma(\boldsymbol{\gamma}^{\top}X)\right]\ .\]
Thus, a random approximation \(\widehat{\mathcal{H}}_{R}\) exists which satisfies (16) by setting its coefficients, for all \(j\in[R]\), as
\[\vartheta_{j}=\frac{g_{\lambda}(\boldsymbol{\gamma}_{j})}{P_{\lambda}( \boldsymbol{\gamma}_{j})\,R}\ =\ \frac{\sum_{i=1}^{N}\theta_{i}\,\hat{\delta}_{\lambda}^{n+1}(\boldsymbol{\gamma }_{j}-\boldsymbol{\gamma}_{i})}{P_{\lambda}(\boldsymbol{\gamma}_{j})\,R}\ \.\]
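This coefficient rule is an importance-sampling (Monte Carlo) estimate of the integral representation. A minimal one-dimensional sketch (our own illustration; the coefficient function \(g(\gamma)=\cos(\gamma)\), the uniform density, and the evaluation point are arbitrary demo choices, and no mollification is involved) shows the random sum converging to the integral:

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda u: np.maximum(u, 0.0)

# Demo coefficient function g on the parameter set Upsilon = [-2, 2].
g = lambda gam: np.cos(gam)
x = 0.7  # a fixed evaluation point

# Reference value of int_{-2}^{2} g(gamma) relu(gamma x) d gamma by quadrature.
grid = np.linspace(-2.0, 2.0, 400001)
exact = (g(grid) * relu(grid * x)).sum() * (grid[1] - grid[0])

# Random approximation: sample gamma_j ~ P (uniform, P = 1/4) and set
# vartheta_j = g(gamma_j) / (P(gamma_j) R), following the rule above.
R = 200000
gam = rng.uniform(-2.0, 2.0, size=R)
P = 1.0 / 4.0
vartheta = g(gam) / (P * R)
estimate = (vartheta * relu(gam * x)).sum()
print(f"quadrature: {exact:.4f}, random approximation: {estimate:.4f}")
```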
### _Random Approximations to Base Approximations_
We now state and prove an overall bound on the approximation error in \(L_{2}(\mathcal{B},\bar{\mu}_{\mathcal{B}})\) norm, between a base approximation \(\widehat{f}_{N}\) of the form (3) to a target function \(f\in\mathcal{C}(\mathcal{B})\) and a random approximation \(\widehat{\mathcal{H}}_{R}(x\,;\boldsymbol{\gamma}_{1},\ldots,\boldsymbol{ \gamma}_{R})\) of the form (4).
**Theorem 2**: _Let there exist a base approximation \(\widehat{f}_{N}\) of the form (3), for some integer \(N\in\mathbb{N}\), to a target function \(f\in\mathcal{C}(\mathcal{B})\), with unknown parameters \(\gamma_{1},\ldots,\gamma_{N}\) in a known bounded parameter set \(\Upsilon\subset\mathbb{R}^{n+1}\), unknown coefficients \(\theta\in\mathbb{R}^{N}\) with a known bound on the coefficient size \(\sum_{i=1}^{N}|\theta_{i}|\leq\mathcal{S}_{N}\), and using an activation function \(\sigma\) satisfying Assumption 1. Assume that \(\left\|\widetilde{f}(x)-\widehat{f}_{N}(x)\right\|_{\mathcal{B}}\leq\varepsilon_{N}\) holds for some \(\varepsilon_{N}>0\) which may depend on \(N\), for an affine shift \(\widetilde{f}(x)=f(x)-a^{\top}x-\xi\) with some \(a\in\mathbb{R}^{n}\) and \(\xi\in\mathbb{R}\). The mollified approximation \(\widehat{f}_{\lambda,N}\) as in (11) satisfies the results of Theorem 1 for any mollification factor \(\lambda>0\). Let the random parameters \(\boldsymbol{\gamma}_{1},\ldots,\boldsymbol{\gamma}_{R}\) be sampled iid according to any nonzero density \(P_{\lambda}\) over the \(\lambda\)-expanded set \(\Upsilon_{\lambda}\), which has diameter \(D_{\Upsilon_{\lambda}}\) and farthest point \(\gamma_{\lambda\max}\), and let \(E_{\mathcal{B}}\) be as in (13). Then for any \(\nu\in(0,1)\), with probability greater than \(1-\nu\) there exists a random approximation \(\widehat{\mathcal{H}}_{R}(x\,;\boldsymbol{\gamma}_{1},\ldots,\boldsymbol{\gamma}_{R})\) as in (4) with \(|\vartheta_{j}|\leq\frac{g_{\lambda\max}}{P_{\lambda\min}R}\) for all \(j\in[R]\) such that either of the following hold:_
\[\left\|\,\widetilde{f}(x)-\widehat{\mathcal{H}}_{R}(x\,;\boldsymbol{\gamma}_{1},\ldots,\boldsymbol{\gamma}_{R})\right\|_{\mathcal{B}}\ \leq\ \begin{cases}i)&\varepsilon_{N}+\mathcal{S}_{N}\mathcal{D}_{\sigma}\left(1+\dfrac{\lambda^{n+1}}{\sqrt{R}}\,K_{\mathcal{D}_{\sigma}}(\nu)\right)\\[1ex] ii)&\varepsilon_{N}+\mathcal{S}_{N}\mathcal{L}_{\sigma}\left(\dfrac{\sqrt{n+1}}{\lambda}\,E_{\mathcal{B}}+\dfrac{\lambda^{n+1}}{\sqrt{R}}\,K_{\mathcal{L}_{\sigma}}(\nu)\right)\end{cases}\]

_The coefficients \(K_{\mathcal{D}_{\sigma}}\) and \(K_{\mathcal{L}_{\sigma}}\) are defined in terms of the selected \(\nu\) as:_

\[K_{\mathcal{D}_{\sigma}}(\nu)\ :=\ \frac{\eta^{n+1}}{e^{n+1}P_{\lambda\min}}\left(1+\frac{|\sigma(0)|}{\mathcal{D}_{\sigma}}+\left(3+2\frac{|\sigma(0)|}{\mathcal{D}_{\sigma}}\right)\sqrt{\frac{1}{2}\log\left(\frac{1}{\nu}\right)}\right)\]
\[K_{\mathcal{L}_{\sigma}}(\nu)\ :=\ \frac{\eta^{n+1}}{e^{n+1}P_{\lambda\min}}\left(\gamma_{\lambda\max}E_{\mathcal{B}}+\frac{|\sigma(0)|}{\mathcal{L}_{\sigma}}+\left(\left(D_{\Upsilon_{\lambda}}+2\gamma_{\lambda\max}\right)E_{\mathcal{B}}+2\frac{|\sigma(0)|}{\mathcal{L}_{\sigma}}\right)\sqrt{\frac{1}{2}\log\left(\frac{1}{\nu}\right)}\right)\]
_Proof:_ Given in Appendix IV.
_Proof Sketch:_ We closely follow the strategy in Lemma 4 of [25], which uses McDiarmid's inequality to obtain the final result.
We start by defining function \(h_{\mathcal{B}}:\Upsilon_{\lambda}^{R}\to\mathbb{R}\), which captures the norm difference between the random approximation and its expected value over the iid draws of \((\boldsymbol{\gamma}_{1},\ldots,\boldsymbol{\gamma}_{R})\in\Upsilon_{\lambda}^ {R}\), as
\[h_{\mathcal{B}}(\boldsymbol{\gamma}_{1},\ldots,\boldsymbol{\gamma}_{R})\ :=\\ \left\|\,\widehat{\mathcal{H}}_{R}(x\,;\boldsymbol{\gamma}_{1}, \ldots,\boldsymbol{\gamma}_{R})-\mathop{\mathbb{E}}_{\boldsymbol{\gamma}_{j} \sim P_{\lambda}}\left[\widehat{\mathcal{H}}_{R}(x\,;\boldsymbol{\gamma}_{1}, \ldots,\boldsymbol{\gamma}_{R})\right]\right\|_{\mathcal{B}}\ \.\]
Thus, we can use Lemma 1 to show that there exists \(\widehat{\mathcal{H}}_{R}\) such that (18) is equivalently
\[h_{\mathcal{B}}(\boldsymbol{\gamma}_{1},\ldots,\boldsymbol{\gamma}_{R})\ =\ \left\|\,\widehat{f}_{\lambda,N}(x)-\widehat{ \mathcal{H}}_{R}(x\,;\boldsymbol{\gamma}_{1},\ldots,\boldsymbol{\gamma}_{R}) \right\|_{\mathcal{B}}\.\]
We then can bound the expected value
\[\mathop{\mathbb{E}}_{\boldsymbol{\gamma}_{j}\sim P_{\lambda}}\Big{[}h_{ \mathcal{B}}(\boldsymbol{\gamma}_{1},\ldots,\boldsymbol{\gamma}_{R})\Big{]}\]
and the absolute value of the difference between any
\[\left|h_{\mathcal{B}}(\boldsymbol{\gamma}_{1},\ldots,\boldsymbol{\gamma}_{j}, \ldots,\boldsymbol{\gamma}_{R})-h_{\mathcal{B}}(\boldsymbol{\gamma}_{1},\ldots, \widehat{\boldsymbol{\gamma}}_{j},\ldots,\boldsymbol{\gamma}_{R})\right|\]
that differ only at one coordinate, over each \(j\in[R]\). These two bounds then allow us to apply the results of McDiarmid's inequality, thus bounding (in high probability)
\[\left\|\,\widehat{f}_{\lambda,N}(x)-\widehat{\mathcal{H}}_{R}(x\,;\boldsymbol{ \gamma}_{1},\ldots,\boldsymbol{\gamma}_{R})\right\|_{\mathcal{B}}\.\]
The proof is then completed by using the triangle inequality, with
\[\begin{split}&\left\|\,\widetilde{f}(x)-\widehat{\mathcal{H}}_{R}(x\,; \boldsymbol{\gamma}_{1},\ldots,\boldsymbol{\gamma}_{R})\right\|_{\mathcal{B}}\ \leq\ \left\|\widetilde{f}(x)-\widehat{f}_{N}(x)\right\|_{\mathcal{B}}+\\ &\left\|\widehat{f}_{N}(x)-\widehat{f}_{\lambda,N}(x)\right\|_{ \mathcal{B}}+\left\|\widehat{f}_{\lambda,N}(x)-\widehat{\mathcal{H}}_{R}(x\,; \boldsymbol{\gamma}_{1},\ldots,\boldsymbol{\gamma}_{R})\right\|_{\mathcal{B}}\.\end{split}\]
where the first term is assumed bounded by \(\varepsilon_{N}\) for the base approximation \(\widehat{f}_{N}\) and the second term is bounded by Theorem 1.
**Remark 2**: _Classic results from [13] and [14] prove the existence of convex combinations of step (5) and ReLU (6) activations that approximate (affine shifts of) functions in specific classes arbitrarily well, with an approximation error proportional to \(\varepsilon_{N}=O\left(\frac{1}{\sqrt{N}}\right)\). Importantly, the convexity property allows the length (number of terms) \(N\) to grow while the coefficient size bound \(\mathcal{S}_{N}\) remains fixed._
### _Extending the Error Bound to \(L_{\infty}\) for Lipschitz Functions on Convex Compact Sets_
We assume that \(\mathcal{K}\) is full-dimensional, with nonzero Lebesgue measure (volume) \(\mathcal{V}=\mu(\mathcal{K})>0\) and diameter \(\mathcal{D}>0\). It also must hold that \(\mathcal{K}\subseteq B_{0}(\rho)\) for a sufficiently large radius \(0<\rho\leqslant\mathcal{D}\).
Then, for any approximation \(\widehat{f}_{N}\) of the form (3) using an activation function \(\sigma\) satisfying Assumption 1, we have that
\[\begin{split}\left|\sum_{i=1}^{N}\theta_{i}\,\sigma(\gamma_{i}^{\top}X)-\sum_{i=1}^{N}\theta_{i}\,\sigma(\gamma_{i}^{\top}Y)\right|&\leq\sum_{i=1}^{N}\left|\theta_{i}\right|\left|\sigma(\gamma_{i}^{\top}X)-\sigma(\gamma_{i}^{\top}Y)\right|\leq\sum_{i=1}^{N}\left|\theta_{i}\right|\mathcal{L}_{\sigma}\left|\gamma_{i}^{\top}X-\gamma_{i}^{\top}Y\right|\\&=\mathcal{L}_{\sigma}\sum_{i=1}^{N}\left|\theta_{i}\right|\left|w_{i}^{\top}(x-y)\right|\ \leq\ \mathcal{L}_{\sigma}\sum_{i=1}^{N}\left|\theta_{i}\right|\left\|w_{i}\right\|_{2}\left\|x-y\right\|_{2}\end{split}\]
holds for all \(x,y\in\mathbb{R}^{n}\), recalling that \(X:=[x_{1}\ \cdots\ x_{n}\ 1]^{\top}\) and \(Y:=[y_{1}\ \cdots\ y_{n}\ 1]^{\top}\), since \(\left|\sigma(u)-\sigma(v)\right|\leqslant\mathcal{L}_{\sigma}\left|u-v\right|\) for any \(u,v\in\mathbb{R}\) by the Lipschitz condition of Assumption 1. Thus, such approximations are \(\widehat{\mathcal{L}}\)-Lipschitz with constant
\[\widehat{\mathcal{L}}\ =\ \mathcal{L}_{\sigma}\sum_{i=1}^{N}\left|\theta_{i} \right||w_{i}\big{|}_{2}\ \ . \tag{19}\]
A similar calculation gives that such approximations are \(\widehat{\mathcal{D}}\)-bounded with constant
\[\widehat{\mathcal{D}}\ =\ \mathcal{D}_{\sigma}\sum_{i=1}^{N}\left|\theta_{i}\right| \tag{20}\]
if \(\left|\sigma(u)-\sigma(v)\right|\leqslant\mathcal{D}_{\sigma}\) for any \(u,v\in\mathbb{R}\) by the bounded condition of Assumption 1.
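For a concrete network, both constants are immediate to evaluate. The sketch below (our own, with randomly generated weights purely for illustration) computes \(\widehat{\mathcal{L}}\) from (19) for a ReLU network (\(\mathcal{L}_{\sigma}=1\)) and \(\widehat{\mathcal{D}}\) from (20) for a unit-step network (\(\mathcal{D}_{\sigma}=1\)):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 50, 3
theta = rng.normal(size=N)      # outer coefficients theta_i (demo values)
W = rng.normal(size=(N, n))     # weight vectors w_i as rows (demo values)

# Eq. (19): L_hat = L_sigma * sum_i |theta_i| * ||w_i||_2, with L_sigma = 1 for ReLU.
L_hat = np.sum(np.abs(theta) * np.linalg.norm(W, axis=1))

# Eq. (20): D_hat = D_sigma * sum_i |theta_i|, with D_sigma = 1 for the unit step.
D_hat = np.sum(np.abs(theta))

print(f"L_hat = {L_hat:.3f}, D_hat = {D_hat:.3f}")
```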
Let an affine shift to function \(f\) be defined as
\[\widetilde{f}(x)\ :=\ f(x)-a^{\top}x-\xi \tag{21}\]
for any \(a\in\mathbb{R}^{n}\) and \(\xi\in\mathbb{R}\). Thus, if \(f\) is \(\mathcal{L}\)-Lipschitz, then
\[\left|\widetilde{f}(x)-\widetilde{f}(y)\right|\ =\ \left|f(x)-f(y)-a^{\top}(x-y)\right|\ \leq\ \left|f(x)-f(y)\right|+\left|a^{\top}(x-y)\right|\ \leq\ \mathcal{L}\left\|x-y\right\|_{2}+\left\|a\right\|_{2}\left\|x-y\right\|_{2}\]
holds for all \(x,y\in\mathbb{R}^{n}\). And so, \(\widetilde{f}\) is also Lipschitz, with constant \(\widetilde{\mathcal{L}}=\mathcal{L}+\left\|a\right\|_{2}\).
**Lemma 2**: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be \(\mathcal{L}\)-Lipschitz on a convex compact set \(\mathcal{K}\subset\mathbb{R}^{n}\) containing the origin and differentiable at the origin. Let there also be an approximation \(\widehat{f}:\mathbb{R}^{n}\to\mathbb{R}\) which is \(\widehat{\mathcal{D}}\)-bounded or \(\widehat{\mathcal{L}}\)-Lipschitz on \(\mathcal{K}\) and such that the approximation error \(\bar{f}(x)=\widetilde{f}(x)-\widehat{f}(x)\), with \(\widetilde{f}\) as in (21), satisfies the \(L_{2}(\mathcal{K},\bar{\mu}_{\mathcal{K}})\) norm bound_

\[\left\|\bar{f}\right\|_{\mathcal{K}}=\sqrt{\int_{\mathcal{K}}\left|\bar{f}(x)\right|^{2}\bar{\mu}_{\mathcal{K}}(\mathrm{d}x)}\ \leq\ \varepsilon_{N}\]
_where the bound \(\varepsilon_{N}>0\) is a finite constant and \(\bar{\mu}_{\mathcal{K}}(\mathsf{d}x)\) is the uniform probability measure on \(\mathcal{K}\) with \(\int_{\mathcal{K}}\bar{\mu}_{\mathcal{K}}(\mathsf{d}x)=1\)._
_Assume \(\mathcal{K}\) is full dimension with diameter \(\mathcal{D}>0\), and it holds that \(\mathcal{K}\subseteq B_{0}(\rho)\) for a ball of sufficiently large radius \(0<\rho\leqslant\mathcal{D}\). Then, the approximation error also satisfies the \(L_{\infty}(\mathcal{K})\) norm bound_
\[\sup_{x\in\mathcal{K}}\left|\bar{f}(x)\right|\ \leq\ 2\left(\frac{r}{\mathcal{D}(r+\sqrt{r^{2}+\mathcal{D}^{2}})}\right)^{\frac{-n}{n+2}}K(\varepsilon_{N}) \tag{22}\]
_where_
\[K(\varepsilon_{N})\ =\ \begin{cases}i)&\left(\left(\mathcal{L}+\left\|a\right\|_{2}\right)^{n}\varepsilon_{N}^{2}\right)^{\frac{1}{n+2}}+\widehat{\mathcal{D}}\\[1ex] ii)&\left(\left(\mathcal{L}+\left\|a\right\|_{2}+\widehat{\mathcal{L}}\right)^{n}\varepsilon_{N}^{2}\right)^{\frac{1}{n+2}}\end{cases}\]
_and \(r\) is the radius of the largest ball within \(\mathcal{K}\) centered at its centroid._
_Proof:_ Given in Appendix V-B.
### _Application to MRAC_
In Chapter 12 of [8], an approximate extension to the general linear MRAC setup (see Chapter 9) is introduced which allows for nonlinearities \(f:\mathbb{R}^{n}\to\mathbb{R}^{\ell}\) as \(f(x)\ =\ \Theta^{\top}\Psi(x)+\epsilon_{f}(x)\) such that the plant is given by
\[\dot{x}_{t}\ =\ A\,x_{t}+B\big{(}u_{t}+f(x_{t})\big{)}\ =\ A\,x_{t}+B\big{(}u_{t}+\Theta^{\top}\Psi(x_{t})+\epsilon_{f}(x_{t})\big{)} \tag{23}\]
where \(A\) is a known \(n\times n\) state matrix for the plant state \(x_{t}\in\mathbb{R}^{n}\), \(B\) is a known \(n\times\ell\) input matrix for the input \(u_{t}\in\mathbb{R}^{\ell}\), and \(\Theta\) is an unknown \(N\times\ell\) matrix which linearly parameterizes the known vector function \(\Psi:\mathbb{R}^{n}\to\mathbb{R}^{N}\). We assume \((A,B)\) is controllable. The general setup also includes an unknown diagonal scaling matrix \(\Lambda\), such that the overall input matrix is \(B\Lambda\), and assumes that \(A\) is unknown. For simplicity, we assume \(A\) is known and omit \(\Lambda\).
It is assumed that there exists an \(n\times\ell\) matrix of feedback gains \(K_{x}\) and an \(\ell\times\ell\) matrix of feedforward gains \(K_{r}\) satisfying the _matching conditions_
\[A+BK_{x}^{\top} =A_{r}\] \[BK_{r}^{\top} =B_{r}\]
to a controllable, linear reference model
\[\dot{x}_{t}^{r}=A_{r}\,x_{t}^{r}+B_{r}\,r_{t}\ \,\]
where \(A_{r}\) is a known Hurwitz \(n\times n\) reference state matrix for the reference state \(x_{t}^{r}\in\mathbb{R}^{n}\), \(B_{r}\) is a known \(n\times\ell\) reference input matrix, and \(r_{t}\in\mathbb{R}^{\ell}\) is a bounded reference input. Here, we assume that \(K_{x}\) and \(K_{r}\) can be directly calculated from known \(A\) and \(B\), and used directly in the control law.
It is required that the nonlinearity satisfy the bound
\[\left\|\epsilon_{f}(x)\right\|_{2}\ \leqslant\ \bar{\varepsilon} \tag{24}\]
for some constant \(\bar{\varepsilon}>0\) and for all \(x\in B_{0}(r)\) with some radius \(r>0\). Then, the adaptive control law
\[u_{t}\ =\ K_{x}^{\top}x_{t}-\hat{\Theta}_{t}^{\top}\Psi(x_{t})+\left(1-\mu(x_{t})\right)K_{r}^{\top}r_{t}+\mu(x_{t})\,\widetilde{u}(x_{t})\]
is shown to stabilize the state tracking error \(e_{t}=x_{t}-x_{t}^{r}\) down to a compact set about the origin, where the \(N\times\ell\) matrix of parameter estimates \(\hat{\Theta}_{t}\) is dynamically updated with the update rule
\[\dot{\hat{\Theta}}_{t}=\Gamma\,\Psi(x_{t})\,e_{t}^{\top}P_{x}B\ .\]
The scalar function \(\mu:\mathbb{R}^{n}\rightarrow[0,1]\) transitions the control law from tracking the reference input to simply returning the plant state \(x_{t}\) to the bounded region \(B_{0}(r)\) within which (24) is valid, using an appropriately defined \(\widetilde{u}(x_{t})\) based on assumptions about the growth of \(\epsilon_{f}(x)\) outside of \(B_{0}(r)\). (See Chapter 12 of [8] for details.)
And so, if the vector function \(\Psi(x)\) is constructed as
\[\Psi(x)=\begin{bmatrix}\sigma(\gamma_{1}^{\top}X)\\ \vdots\\ \sigma(\gamma_{N}^{\top}X)\end{bmatrix}\, \tag{25}\]
with an activation function \(\sigma\) satisfying the bounded (growth) conditions of Assumption 1 with constant \(\mathcal{D}_{\sigma}\) or \(\mathcal{L}_{\sigma}\), then the above setup is valid for any nonlinearity \(f=[f_{1}\ \cdots\ f_{\ell}]\) where we can show that
\[\sup_{x\in\mathcal{K}}\left|f_{i}(x)-\Theta_{i}^{\top}\Psi(x)\right|\ \leqslant\ \bar{ \varepsilon}_{i}\]
holds for some constants \(\bar{\varepsilon}_{1},\ldots,\bar{\varepsilon}_{\ell}>0\). Here, \(f_{1},\ldots,f_{\ell}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) and each \(\Theta_{i}^{\top}\in\mathbb{R}^{N}\) is the corresponding row of the true parameters \(\Theta^{\top}\).
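As a rough illustration of how the pieces fit together, the following toy simulation (our own sketch, not from [8]: the scalar plant, all numerical values, the random ReLU features, and the omission of the \(\mu\)-blending term are simplifying assumptions) integrates the plant, the reference model, and the update rule with forward Euler:

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar plant (n = l = 1): xdot = A x + B (u + f(x)); all values are demo choices.
A, B = 1.0, 1.0
A_r, B_r = -2.0, 2.0
K_x, K_r = (A_r - A) / B, B_r / B   # matching conditions: A + B K_x = A_r, B K_r = B_r

# Random ReLU features Psi(x), as in Eq. (25), with a "true" Theta defining f.
N = 20
w = rng.uniform(-2.0, 2.0, N)
b = rng.uniform(-2.0, 2.0, N)
Psi = lambda x: np.maximum(w * x + b, 0.0)
Theta_true = 0.1 * rng.normal(size=N)
f = lambda x: Theta_true @ Psi(x)

# P from the scalar Lyapunov equation 2 A_r P = -Q with Q = 1; Gamma is the gain.
P, Gamma = 1.0 / (-2.0 * A_r), 5.0

dt, T = 1e-3, 10.0
x, x_r, Theta_hat = 0.5, 0.0, np.zeros(N)
for k in range(int(T / dt)):
    r = np.sin(0.5 * k * dt)                     # bounded reference input
    u = K_x * x - Theta_hat @ Psi(x) + K_r * r   # adaptive law (mu-blending omitted)
    e = x - x_r
    # Forward-Euler integration of plant, reference model and update rule.
    x += dt * (A * x + B * (u + f(x)))
    x_r += dt * (A_r * x_r + B_r * r)
    Theta_hat += dt * (Gamma * Psi(x) * e * P * B)
print(f"final tracking error |e| = {abs(x - x_r):.4f}")
```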
## VII Conclusion and Future Directions
In this paper, we developed a novel method for bridging approximations using activation functions with unknown parameters to approximations using randomly initialized parameters, with the mollified integral representation. We showed that this could be extended to the supremum norm for use with an extended, approximate version of the linear MRAC setup.
We see two immediate paths for extending this work and its application. 1) Investigating whether "approximate persistency of excitation" is possible in the extended, approximate version of the linear MRAC setup, similar to what was achieved in [26], but with the parameter estimates only converging to a compact set about the origin instead of being asymptotically stable. 2) Extending the work of [15] towards the determination of integral representations for more general classes of functions; in particular, using linear combinations of step functions, which cannot be trained using backpropagation in the modern paradigm of training deep neural networks. Yet in the 1D case, we can easily show that the step function is superior to the ReLU function at uniformly approximating functions. In fact, the step can handle discontinuous (regulated) functions, while the ReLU can only deal with continuous functions.
|
2303.02542 | Physics-informed neural network for friction-involved nonsmooth dynamics
problems | Friction-induced vibration (FIV) is very common in engineering areas.
Analysing the dynamic behaviour of systems containing a multiple-contact point
frictional interface is an important topic. However, accurately simulating
nonsmooth/discontinuous dynamic behaviour due to friction is challenging. This
paper presents a new physics-informed neural network approach for solving
nonsmooth friction-induced vibration or friction-involved vibration problems.
Compared with schemes of the conventional time-stepping methodology, in this
new computational framework, the theoretical formulations of nonsmooth
multibody dynamics are transformed and embedded in the training process of the
neural network. Major findings include that the new framework not only can
perform accurate simulation of nonsmooth dynamic behaviour, but also eliminate
the need for extremely small time steps typically associated with the
conventional time-stepping methodology for multibody systems, thus saving much
computation work while maintaining high accuracy. Specifically, four kinds of
high-accuracy PINN-based methods are proposed: (1) single PINN; (2) dual PINN;
(3) advanced single PINN; (4) advanced dual PINN. Two typical dynamics problems
with nonsmooth contact are tested: one is a 1-dimensional contact problem with
stick-slip, and the other is a 2-dimensional contact problem considering
separation-reattachment and stick-slip oscillation. Both single and dual PINN
methods show their advantages in dealing with the 1-dimensional stick-slip
problem, which outperforms conventional methods across friction models that are
difficult to simulate by the conventional time-stepping method. For the
2-dimensional problem, the capability of the advanced single and advanced dual
PINN on accuracy improvement is shown, and they provide good results even in
the cases when conventional methods fail. | Zilin Li, Jinshuai Bai, Huajing Ouyang, Saulo Martelli, Jun Zhao, Ming Tang, Yang Yang, Hongtao Wei, Pan Liu, Wei-Ron Han, Yuantong Gu | 2023-03-05T01:27:11Z | http://arxiv.org/abs/2303.02542v5 | # Physics-informed neural network for friction-involved nonsmooth dynamics problems
###### Abstract
Nonsmooth, discontinuous dynamic behaviour of systems containing multiple contact points is an open research problem. Accurately simulating the constantly nonsmooth/discontinuous dynamic behaviour is challenging. This study presents novel physics-informed neural network (PINN) frameworks for solving nonsmooth friction-induced, or friction-involved, vibration (FIV) problems, where the training of the neural network is guided by the relevant physics laws. Four kinds of high-accuracy PINN-based methods are proposed: (1) single PINN; (2) dual PINN; (3) advanced single PINN; (4) advanced dual PINN. The single PINN method uses the nonsmooth dynamic formulation for multibody systems, as physical constraints, substituting the time iteration loop in the conventional dynamic analysis. For the dual PINN method, in conjunction with the single PINN, a new PINN is proposed as an alternative to traditional numerical methods for solving linear complementary problems (LCP). The advanced single/dual PINN methods improve the corresponding single/dual PINN methods by incorporating an interpolation technique during the training process. The success of the proposed PINN-based methods in predicting the nonlinear vibration of two typical nonsmooth dynamics problems is verified: one is a 1-dimensional system with direct rigid contact considering stick-slip oscillation, and the other is a 2-dimensional system with spring contact considering separation-reattachment and stick-slip oscillation. Overall, the
developed PINN-based methods are accurate in predicting the nonsmooth dynamic behaviour and, unlike the conventional time-stepping methods, do not require a tiny time step.
Key words: Physics-informed neural network, Nonsmooth dynamics, Stick-slip, Contact loss, Friction-induced vibration
## 1 Introduction
Friction is ubiquitous in daily life as well as in engineering. For centuries, humans have exploited friction and attempted to control it. When friction is involved in a dynamic environment, the energy-dissipating property of friction is generally used to reduce motion or vibration. However, in certain conditions, the stability of a system counterintuitively degrades due to friction [1-3]. Friction-induced vibration (FIV) [4-6] is one kind of self-excited vibration that is associated with a variety of engineering issues, including squealing brakes [4], squeaky joints [7], chattering of cutting tools [8], inaccurate positioning of robots [9], etc. The dynamics of systems with friction are protean and complex. There is a lack of comprehensive understanding of the dynamic characteristics when friction is involved. The manifestations of nonsmoothness that could be directly attributed to friction are: (1) the direction-changing feature of friction force with respect to the relative velocity; (2) stick-slip vibration, which is known as one of the main mechanisms of FIV [10]; (3) separation-reattachment events [7] resulting from unstable FIV. As the contact states change during vibration in a dynamic environment, even systems that are linear within each motion regime (e.g., stick or slip) become nonlinear and time-varying.
Research on FIV has focused on two kinds of systems: (1) low-degrees-of-freedom theoretical models and (2) complicated structures. As for the former kind, models are usually nonlinear and often even nonsmooth. Stick-slip is an essential mechanism of friction-induced vibration and exists in many applications [11, 12]. Popp and Stelter [13] pointed out that even in a one-degree-of-freedom slider-belt model, dry friction-induced stick-slip vibration could be periodic, quasi-periodic and even chaotic. Leine et al. [14] proposed a switching method that is well suited to stick-slip analysis of the slider-belt model with a single contact point. With the development of unstable vibration, loss of contact may occur. Li et al. [15] used a slider-belt system with an inclined spring to reveal that separation and reattachment events could happen when unstable vibration was caused by mode-coupling instability. In their follow-up work [16], a slider moving on an elastic disc also exhibited stick-slip vibration and the separation and reattachment phenomenon. The
conventional approach for determining FIV of a theoretical model that considers nonsmooth features such as stick-slip or separation-reattachment involves tracking the motion states (e.g., stick or slip, contact or separation), which is tedious and time-consuming. For example, Pascal [17] studied the stick-slip motion of a two-degrees-of-freedom lumped-mass-belt model when two sliders were involved, and the stick-slip transition was monitored and captured to ensure the equations of motion with respect to the right motion states were used.
Detailed finite element models of complicated structures, such as automotive brakes, are often linearised [18] for stability analysis. Such a finite element model can have many contact points. Chen et al. [19] found that in a finite element model with multiple contact points, the separation took place between various contact nodes in different regions of the contact interface. However, stick-slip oscillation could not be included in their simulation, as modelling strategies suitable for problems with very few contact points are no longer valid for many contact points [14]. Dealing with the contact and friction for large contact areas is difficult [20, 21, 22, 23]. There is extensive research concerning discontinuity in multibody systems, such as robotics or rotor systems [24]. Moreau [25] proposed the differential measure and sweeping process theories as foundations for nonsmooth dynamic analysis. The unilateral contact problem of multibody systems can be transformed into a linear complementary problem (LCP). Several methods based on the LCP formulations for systems with rigid or elastic unilateral contact have been developed [24, 26, 27]. Two general approaches in multibody dynamics are the event-driven method and the time-stepping method [21, 22, 23]. Li et al. [28] found that for the friction-induced vibration problem with multiple contact points, the system displays repeated separation-reattachment and stick-slip events. The complex nonsmooth dynamics are hard to capture unless a tiny time step is used.
In other research lines, several novel approaches have been introduced. One such approach is the moving least squares material point method (MLS-MPM) developed by Hu et al. [29], which considers displacement discontinuity and frictionless boundaries. Based on this method, a real-time simulator for soft robotics was also developed [3]. Another approach was proposed by Peng et al. [30], who suggested using a method based on the symplectic discrete format to tackle nonsmooth problems with friction and impact. This approach enabled accurate simulations with a larger time step. Fazeli et al. [31], on the other hand, developed a hierarchical learning approach to teach manipulation strategies that consider friction. The development of simulators that possess accuracy, efficiency, and robustness remains a significant area of interest.
Physics-informed neural network (PINN), which was proposed by Raissi et al. [32] in 2019, is considered a promising method for finding the solution of the partial differential equations (PDEs) of complex systems. PDEs, initial and boundary conditions can be applied to guide the training of deep learning neural networks. Karniadakis et al. [33] and Bai et al. [34] reviewed the development of PINN in various fields. It has been demonstrated that, combining the advantages of neural networks and physics laws, PINN is effective for dealing with problems with insufficient data and thus has the prospect of a wide range of applications. Samaniego [35] presented an energy-based PINN framework, which first demonstrated the effectiveness of PINN for mechanics applications on hyperelasticity, crack propagation, and piezoelectric beam and plate problems. Gu's team [36, 37] developed a series of computing and modelling methods based on PINN for hydrodynamics and topology optimisation. Pfrommer et al. [38] solved the contact-induced dynamic discontinuous problem of a rigid body using a neural network. In that work, a comprehensive loss function involving the governing equation, the unilateral constraint law and the maximum energy dissipation law was defined. The new method could represent realistic impact and stick motion even when sparse training data are used.
Accurately predicting nonsmooth dynamic phenomena from contact is a fundamental problem in many applications. The primary goal of this research is to propose a new methodology for the solution in the time domain of nonsmooth dynamics problems by combining neural networks and the theories of nonsmooth dynamics into physics-informed neural network frameworks. The paper is organised as follows: (1) Section 2 presents the mix-level time-stepping modelling method; (2) Section 3 introduces the frameworks of PINN for solving LCP, and the single and dual PINN for the dynamic simulation of the complex systems with friction and contact loss; (3) Section 4 illustrates the application of the proposed single PINN and dual PINN frameworks to the FIV problem with stick-slip. Finally, the accuracy of the single/dual PINN framework in solving the FIV problem with separation-reattachment and stick-slip is determined and presented.
## 2 Theoretical Approach and Numerical Implementation
### The linear complementary problem in unilateral contact
Mathematically, unilateral contact can be described by linear complementary conditions. Specifically, Fig.1(a) shows the complementary relations for single-point rigid contact in the normal
direction. The distance between the object of interest and the other contact surface is denoted by \(g_{\rm Ni}\), and the contact force by \(\lambda_{\rm Ni}\).
These two quantities satisfy the relationships that if \(g_{\mathrm{N}i}>0\), then \(\lambda_{\mathrm{N}i}=0\); otherwise, if \(g_{\mathrm{N}i}=0\), then \(\lambda_{\mathrm{N}i}\geq 0\). For systems with \(m\) contact points, \(g_{\mathrm{N}i}\) (\(i\)=1, 2,..., \(m\)) is the \(i\)th element of vector \(\mathbf{g}_{\mathrm{N}}\) and \(\lambda_{\mathrm{N}i}\) (\(i\)=1, 2,..., \(m\)) is the \(i\)th element of vector \(\boldsymbol{\lambda}_{\mathrm{N}}\). The complementary relationship is given in the vector form:
\[0\leq\mathbf{g}_{\mathrm{N}}\perp\boldsymbol{\lambda}_{\mathrm{N}}\geq 0 \tag{1}\]
Additionally, for the spring contact shown in Fig. 1(b), \(\Omega_{\mathrm{N}i}=k_{\mathrm{c}}g_{\mathrm{N}i}+\lambda_{\mathrm{N}i}\) is introduced, where \(k_{\mathrm{c}}\) is the contact stiffness. The following revised relationship now applies: if \(\lambda_{\mathrm{N}i}=0\), then \(\Omega_{\mathrm{N}i}\geq 0\); otherwise, if \(\lambda_{\mathrm{N}i}>0\), then \(\Omega_{\mathrm{N}i}=0\). For multiple spring contacts, \(\Omega_{\mathrm{N}i}\) (\(i\)=1, 2,..., \(m\)) is the \(i\)th element of vector \(\boldsymbol{\Omega}_{\mathrm{N}}\). The complementary relationship is written as:
\[0\leq\boldsymbol{\Omega}_{\mathrm{N}}\perp\boldsymbol{\lambda}_{\mathrm{N}}\geq 0 \tag{2}\]
When friction is considered in the tangential direction, a general friction law can be described by a function of the friction force \(\lambda_{\mathrm{T}i}\) with respect to the relative velocity \(\gamma_{\mathrm{T}i}\), as shown in Figure 1(b). It is generally assumed that the sliding friction force \(\lambda_{\mathrm{T}i}\) follows Coulomb's law of friction, expressed as \(\lambda_{\mathrm{T}i}=\mu\lambda_{\mathrm{N}i}\), where \(\mu\) is the coefficient of friction. Following the notation rules stated for the normal contact, \(\lambda_{\mathrm{T}i}\) and \(\gamma_{\mathrm{T}i}\) (\(i\)=1, 2,..., \(m\)) are the elements of the vectors \(\boldsymbol{\lambda}_{\mathrm{T}}\) and \(\boldsymbol{\gamma}_{\mathrm{T}}\) describing the friction forces and relative velocities of the multi-point contact problem. Through mathematical transformation, the friction law can be decomposed into two complementary relations:
\[0\leq\boldsymbol{\lambda}_{\mathrm{R}}\perp\boldsymbol{\gamma}_{\mathrm{R}}\geq 0\,,\qquad 0\leq\boldsymbol{\lambda}_{\mathrm{L}}\perp\boldsymbol{\gamma}_{\mathrm{L}}\geq 0 \tag{3}\]
where \(\boldsymbol{\gamma}_{\mathrm{R}}=\frac{1}{2}(|\boldsymbol{\gamma}_{\mathrm{T}}|+\boldsymbol{\gamma}_{\mathrm{T}})\), \(\boldsymbol{\gamma}_{\mathrm{L}}=\frac{1}{2}(|\boldsymbol{\gamma}_{\mathrm{T}}|-\boldsymbol{\gamma}_{\mathrm{T}})\), \(\boldsymbol{\lambda}_{\mathrm{R}}=\boldsymbol{\mu}\boldsymbol{\lambda}_{\mathrm{N}}+\boldsymbol{\lambda}_{\mathrm{T}}\) and \(\boldsymbol{\lambda}_{\mathrm{L}}=\boldsymbol{\mu}\boldsymbol{\lambda}_{\mathrm{N}}-\boldsymbol{\lambda}_{\mathrm{T}}\), with the relationships

\[\boldsymbol{\lambda}_{\mathrm{R}}\geq 0\,,\quad\boldsymbol{\lambda}_{\mathrm{L}}\geq 0\,,\quad\boldsymbol{\gamma}_{\mathrm{T}}=\boldsymbol{\gamma}_{\mathrm{R}}-\boldsymbol{\gamma}_{\mathrm{L}}\quad\text{and}\quad\boldsymbol{\lambda}_{\mathrm{R}}=2\boldsymbol{\mu}\boldsymbol{\lambda}_{\mathrm{N}}-\boldsymbol{\lambda}_{\mathrm{L}} \tag{4}\]
or between the velocity and impulse of the friction force:
\[0\leq\boldsymbol{\Lambda}_{\mathrm{R}}\perp\boldsymbol{\gamma}_{\mathrm{R}}\geq 0\,,\qquad 0\leq\boldsymbol{\Lambda}_{\mathrm{L}}\perp\boldsymbol{\gamma}_{\mathrm{L}}\geq 0 \tag{5}\]

where \(\boldsymbol{\Lambda}_{\mathrm{R}}=\int\boldsymbol{\lambda}_{\mathrm{R}}\,\mathrm{d}t\) and \(\boldsymbol{\Lambda}_{\mathrm{L}}=\int\boldsymbol{\lambda}_{\mathrm{L}}\,\mathrm{d}t\).
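Conditions of this form define a linear complementary problem: find \(\mathbf{x}\geq 0\) such that \(\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{b}\geq 0\) and \(\mathbf{x}^{\top}\mathbf{y}=0\). As a minimal illustration (our own sketch, not necessarily the solver used in this paper; projected Gauss-Seidel converges only for suitable matrices, e.g. symmetric positive definite \(\mathbf{A}\), and dedicated solvers such as Lemke's algorithm are common alternatives):

```python
import numpy as np

def pgs_lcp(A, b, iters=500):
    """Projected Gauss-Seidel for the LCP: 0 <= x  perp  y = A x + b >= 0."""
    x = np.zeros(len(b))
    for _ in range(iters):
        for i in range(len(b)):
            # Zero the i-th residual, then project onto the nonnegative orthant.
            x[i] = max(0.0, x[i] - (A[i] @ x + b[i]) / A[i, i])
    return x

# Small demo with a symmetric positive definite matrix.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, 2.0])
x = pgs_lcp(A, b)
y = A @ x + b
print("x =", x, " y =", y, " x.y =", x @ y)  # complementarity: x.y ~ 0
```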
### Equation of motion of the multibody system with unilateral contact
For the dynamic modelling of the multibody systems with unilateral contact, the equation of motion in matrix form is generally written as Eq. (6):
\[\mathbf{M}\ddot{\mathbf{q}}-\mathbf{h}-\mathbf{W}_{\mathrm{N}}\boldsymbol{\lambda}_{\mathrm{N}}-\mathbf{W}_{\mathrm{T}}\boldsymbol{\lambda}_{\mathrm{T}}=\mathbf{0} \tag{6}\]
in which \(\mathbf{M}\) is the mass matrix, \(\mathbf{h}\) is the force vector for the system regardless of the unilateral conditions, and \(\boldsymbol{\lambda}_{\mathrm{N}}\) and \(\boldsymbol{\lambda}_{\mathrm{T}}\) are the normal and tangential contact force vectors, respectively. \(\mathbf{q}\) is the displacement vector defined in the global coordinates. The constraint functions in the normal direction and tangential direction are expressed as \(\mathbf{g}_{\mathrm{N}}=\mathbf{W}_{\mathrm{N}}^{\top}\mathbf{q}\) and \(\mathbf{g}_{\mathrm{T}}=\mathbf{W}_{\mathrm{T}}^{\top}\mathbf{q}\). \(\mathbf{W}_{\mathrm{N}}\) and \(\mathbf{W}_{\mathrm{T}}\) can be obtained by \(\mathbf{W}_{\mathrm{N}}=\left(\frac{\partial\mathbf{g}_{\mathrm{N}}}{\partial\mathbf{q}}\right)^{\top}\) and \(\mathbf{W}_{\mathrm{T}}=\left(\frac{\partial\boldsymbol{\gamma}_{\mathrm{T}}}{\partial\mathbf{u}}\right)^{\top}\).
Then, taking the first derivatives of \(\mathbf{g}_{\mathrm{N}}\) and \(\boldsymbol{\gamma}_{\mathrm{T}}\) with respect to time, the normal velocity and tangential acceleration can be expressed as:

\[\boldsymbol{\gamma}_{\mathrm{N}}=\mathbf{W}_{\mathrm{N}}^{\top}\dot{\mathbf{q}}+\bar{\omega}_{\mathrm{N}}\,,\qquad\dot{\boldsymbol{\gamma}}_{\mathrm{T}}=\mathbf{W}_{\mathrm{T}}^{\top}\ddot{\mathbf{q}}+\frac{\partial\mathbf{W}_{\mathrm{T}}^{\top}}{\partial t}\dot{\mathbf{q}}+\dot{\bar{\omega}}_{\mathrm{T}} \tag{7}\]

in which \(\bar{\omega}_{\mathrm{N}}=\frac{\partial\mathbf{g}_{\mathrm{N}}}{\partial t}\) and \(\dot{\bar{\omega}}_{\mathrm{T}}=\frac{\partial^{2}\mathbf{g}_{\mathrm{T}}}{\partial t^{2}}\).
By discretising the motion states over time interval \(\Delta t=t_{\rm E}-t_{\rm A}\), one obtained the time-domain discretised equation of motion and the state vector:
\[\begin{cases}{\bf M}^{\rm(A)}\Delta{\bf u}-{\bf h}^{\rm(A)}\Delta t-{\bf W}_{ \rm N}^{\rm(A)}\Lambda_{\rm N}-{\bf W}_{\rm T}^{\rm(A)}\Lambda_{\rm T}={\bf 0} \\ \Delta{\bf q}=({\bf u}^{\rm(A)}+\Delta{\bf u})\Delta t\end{cases} \tag{8}\]
in which \(\Lambda_{\rm N}=\lambda_{\rm N}\Delta t\), \(\Lambda_{\rm T}=\lambda_{\rm T}\Delta t\) and superscript (A) represents the corresponding physical quantity at instant \(t_{\rm A}\).
The increments of the constraint function \(\Delta\mathbf{g}_{\mathrm{N}}\) and the relative velocity \(\Delta\boldsymbol{\gamma}_{\mathrm{T}}\) over \(\Delta t\) can be obtained according to Eq. (7):

\[\Delta\mathbf{g}_{\mathrm{N}}=\mathbf{W}_{\mathrm{N}}^{\top(\mathrm{A})}\Delta\mathbf{q}+\bar{\omega}_{\mathrm{N}}\Delta t\,,\qquad\Delta\boldsymbol{\gamma}_{\mathrm{T}}=\mathbf{W}_{\mathrm{T}}^{\top}\Delta\mathbf{u}+\frac{\partial\mathbf{W}_{\mathrm{T}}^{\top}}{\partial\mathbf{q}}\Delta\mathbf{q}+\dot{\bar{\omega}}_{\mathrm{T}}\Delta t \tag{9}\]
In conjunction with Eq. (8), the motion states and the constraint functions at the next time instant \(t_{\mathrm{E}}\) are derived:

\[\mathbf{g}_{\mathrm{N}}^{(\mathrm{E})}=\mathbf{G}_{\mathrm{NN}}\boldsymbol{\Lambda}_{\mathrm{N}}^{(\mathrm{E})}\Delta t+\mathbf{G}_{\mathrm{NT}}\boldsymbol{\Lambda}_{\mathrm{T}}^{(\mathrm{E})}\Delta t+\mathbf{g}_{\mathrm{N}}^{(\mathrm{A})}+\mathbf{W}_{\mathrm{N}}^{\top(\mathrm{A})}\mathbf{u}^{(\mathrm{A})}\Delta t+\mathbf{G}_{\mathrm{N}}\mathbf{h}^{(\mathrm{A})}\Delta t+\bar{\omega}_{\mathrm{N}}\Delta t \tag{10}\]

\[\boldsymbol{\gamma}_{\mathrm{T}}^{(\mathrm{E})}=\mathbf{G}_{\mathrm{TN}}\boldsymbol{\Lambda}_{\mathrm{N}}^{(\mathrm{E})}+\mathbf{G}_{\mathrm{TT}}\boldsymbol{\Lambda}_{\mathrm{T}}^{(\mathrm{E})}+\boldsymbol{\gamma}_{\mathrm{T}}^{(\mathrm{A})}+\left(\mathbf{G}_{\mathrm{T}}\mathbf{h}^{(\mathrm{A})}+\frac{\partial\mathbf{W}_{\mathrm{T}}^{\top(\mathrm{A})}}{\partial\mathbf{q}}\mathbf{u}^{(\mathrm{A})}+\dot{\bar{\omega}}_{\mathrm{T}}\right)\Delta t \tag{11}\]

in which \(\mathbf{G}_{\mathrm{N}}=\mathbf{W}_{\mathrm{N}}^{\top(\mathrm{A})}\mathbf{M}^{-1(\mathrm{A})}\), \(\mathbf{G}_{\mathrm{NN}}=\mathbf{W}_{\mathrm{N}}^{\top(\mathrm{A})}\mathbf{M}^{-1(\mathrm{A})}\mathbf{W}_{\mathrm{N}}^{(\mathrm{A})}\), \(\mathbf{G}_{\mathrm{NT}}=\mathbf{W}_{\mathrm{N}}^{\top(\mathrm{A})}\mathbf{M}^{-1(\mathrm{A})}\mathbf{W}_{\mathrm{T}}^{(\mathrm{A})}\), \(\mathbf{G}_{\mathrm{T}}=\mathbf{W}_{\mathrm{T}}^{\top(\mathrm{A})}\mathbf{M}^{-1(\mathrm{A})}+\frac{\partial\mathbf{W}_{\mathrm{T}}^{\top}}{\partial\mathbf{q}}\mathbf{M}^{-1(\mathrm{A})}\Delta t\), \(\mathbf{G}_{\mathrm{TN}}=\mathbf{G}_{\mathrm{T}}\mathbf{W}_{\mathrm{N}}^{(\mathrm{A})}\) and \(\mathbf{G}_{\mathrm{TT}}=\mathbf{G}_{\mathrm{T}}\mathbf{W}_{\mathrm{T}}^{(\mathrm{A})}\).
There are six unknown vectors in the four equations in Eqs. (8), (10) and (11). To solve the problem, another two complementary equations need to be introduced. In the following, the theoretical formulations for two contact conditions are presented.
#### 2.2.1 Time-stepping formulations for the unilateral rigid contact at displacement-velocity level
Unilateral rigid contact can be enforced using a complementary relation between \(\mathbf{g}_{\mathrm{N}}\) and \(\boldsymbol{\lambda}_{\mathrm{N}}\). The state solutions in the normal direction are independent of the friction and follow the complementary relation given in Eq. (1). According to Eq. (4), \(\boldsymbol{\gamma}_{\mathrm{T}}^{(\mathrm{E})}\) and \(\boldsymbol{\Lambda}_{\mathrm{T}}^{(\mathrm{E})}\) in Eq. (11) can be replaced by \(\boldsymbol{\gamma}_{\mathrm{L}}^{(\mathrm{E})}\), \(\boldsymbol{\gamma}_{\mathrm{R}}^{(\mathrm{E})}\) and \(\boldsymbol{\Lambda}_{\mathrm{L}}^{(\mathrm{E})}\). Then, by introducing \(\boldsymbol{\Lambda}_{\mathrm{R}}=2\boldsymbol{\mu}\boldsymbol{\Lambda}_{\mathrm{N}}-\boldsymbol{\Lambda}_{\mathrm{L}}\) according to Eq. (4), the LCP equations in matrix form can be obtained:
\[\underbrace{\begin{bmatrix}\mathbf{g}_{\mathrm{N}}^{(\mathrm{E})}\\ \boldsymbol{\gamma}_{\mathrm{L}}^{(\mathrm{E})}\Delta t\\ \boldsymbol{\Lambda}_{\mathrm{R}}^{(\mathrm{E})}\Delta t\end{bmatrix}}_{\mathbf{y}}=\underbrace{\begin{bmatrix}\mathbf{G}_{\mathrm{NN}}+\mathbf{G}_{\mathrm{NT}}\boldsymbol{\mu}&-\mathbf{G}_{\mathrm{NT}}&\mathbf{0}\\ -(\mathbf{G}_{\mathrm{TN}}+\mathbf{G}_{\mathrm{TT}}\boldsymbol{\mu})&\mathbf{G}_{\mathrm{TT}}&\mathbf{I}\\ 2\boldsymbol{\mu}&-\mathbf{I}&\mathbf{0}\end{bmatrix}}_{\mathbf{A}}\underbrace{\begin{bmatrix}\boldsymbol{\Lambda}_{\mathrm{N}}^{(\mathrm{E})}\Delta t\\ \boldsymbol{\Lambda}_{\mathrm{L}}^{(\mathrm{E})}\Delta t\\ \boldsymbol{\gamma}_{\mathrm{R}}^{(\mathrm{E})}\Delta t\end{bmatrix}}_{\mathbf{x}}+\underbrace{\begin{bmatrix}\mathbf{g}_{\mathrm{N}}^{(\mathrm{A})}+\mathbf{W}_{\mathrm{N}}^{\top(\mathrm{A})}\mathbf{u}^{(\mathrm{A})}\Delta t+\mathbf{G}_{\mathrm{N}}\mathbf{h}^{(\mathrm{A})}\Delta t+\bar{\omega}_{\mathrm{N}}\Delta t\\ -\boldsymbol{\gamma}_{\mathrm{T}}^{(\mathrm{A})}\Delta t-\left(\mathbf{G}_{\mathrm{T}}\mathbf{h}^{(\mathrm{A})}+\frac{\partial\mathbf{W}_{\mathrm{T}}^{\top}}{\partial\mathbf{q}}\mathbf{u}^{(\mathrm{A})}+\dot{\bar{\omega}}_{\mathrm{T}}\right)\Delta t^{2}\\ \mathbf{0}\end{bmatrix}}_{\mathbf{b}} \tag{12}\]
with
\[0\leq\mathbf{g}_{{}_{\mathrm{N}}}^{{\mathrm{(E)}}}\perp\mathbf{\Lambda}_{{}_{ \mathrm{N}}}^{{\mathrm{(E)}}}\geq 0,\ 0\leq\mathbf{\gamma}_{{}_{\mathrm{L}}}^{{\mathrm{(E)}}}\perp\mathbf{\Lambda}_{{}_{ \mathrm{L}}}^{{\mathrm{(E)}}}\geq 0\ \text{and}\ 0\leq\mathbf{\gamma}_{{}_{\mathrm{R}}}^{{\mathrm{(E)}}}\perp\mathbf{\Lambda}_{{}_{ \mathrm{R}}}^{{\mathrm{(E)}}}\geq 0\]
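To make the structure of the time-stepping scheme concrete, the following sketch (our own reduction, not from the paper) specialises it to a single frictionless rigid contact, a point mass falling onto rigid ground, for which the LCP becomes scalar and admits a closed-form solution (the contact is completely inelastic in this formulation):

```python
import numpy as np

m, grav = 1.0, 9.81
dt, T = 1e-3, 2.0
q, u = 1.0, 0.0          # height g_N = q and velocity at instant t_A

heights = []
for _ in range(int(T / dt)):
    h = -m * grav                    # force vector h (gravity only)
    # Scalar version of Eq. (10): g_N(E) = (dt/m) * Lambda_N + c.
    c = q + u * dt + (h / m) * dt**2
    # The scalar LCP 0 <= g_N(E) perp Lambda_N >= 0 has the closed-form solution:
    Lam = max(0.0, -c * m / dt)
    # Eq. (8): impulse balance and state update.
    u += (h * dt + Lam) / m
    q += u * dt
    heights.append(q)

print(f"min height = {min(heights):.2e} (stays >= 0 up to round-off)")
```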
#### 2.2.2 Time-stepping formulations for the unilateral spring contact at the displacement-velocity level
In contrast with direct rigid contact, the complementary condition of the spring contact is between \(\boldsymbol{\Omega}_{\mathrm{N}}\) and \(\boldsymbol{\lambda}_{\mathrm{N}}\), as given in Eq. (2). Meanwhile, the friction force depends on the state solutions in the normal direction when contact is maintained. By substituting Eq. (9) into \(\boldsymbol{\Omega}_{\mathrm{N}}\), one gets:
\[\boldsymbol{\Omega}_{\mathrm{N}}^{(\mathrm{E})}=\left(\mathbf{I}/\Delta t^{2}+k_{\mathrm{c}}\mathbf{G}_{\mathrm{NN}}+k_{\mathrm{c}}\mathbf{G}_{\mathrm{NT}}\boldsymbol{\mu}\right)\boldsymbol{\Lambda}_{\mathrm{N}}^{(\mathrm{E})}\Delta t-k_{\mathrm{c}}\mathbf{G}_{\mathrm{NT}}\boldsymbol{\Lambda}_{\mathrm{L}}^{(\mathrm{E})}\Delta t+k_{\mathrm{c}}\left(\mathbf{g}_{\mathrm{N}}^{(\mathrm{A})}+\mathbf{W}_{\mathrm{N}}^{\top(\mathrm{A})}\mathbf{u}^{(\mathrm{A})}\Delta t+\mathbf{G}_{\mathrm{N}}\mathbf{h}^{(\mathrm{A})}\Delta t+\bar{\omega}_{\mathrm{N}}\Delta t\right) \tag{13}\]
Collecting Eqs. (11), (13) and \(\boldsymbol{\Lambda}_{\mathrm{R}}=2\boldsymbol{\mu}\boldsymbol{\Lambda}_{\mathrm{N}}-\boldsymbol{\Lambda}_{\mathrm{L}}\) in matrix form gives the LCP formulation of the whole system:
\[\underbrace{\begin{bmatrix}\boldsymbol{\Omega}_{\mathrm{N}}^{(\mathrm{E})}\\ \boldsymbol{\gamma}_{\mathrm{L}}^{(\mathrm{E})}\Delta t\\ \boldsymbol{\Lambda}_{\mathrm{R}}^{(\mathrm{E})}\Delta t\end{bmatrix}}_{\mathbf{y}}=\underbrace{\begin{bmatrix}\mathbf{I}/\Delta t^{2}+\mathbf{k}_{\mathrm{c}}\mathbf{G}_{\mathrm{NN}}+\mathbf{k}_{\mathrm{c}}\mathbf{G}_{\mathrm{NT}}\boldsymbol{\mu}&-\mathbf{k}_{\mathrm{c}}\mathbf{G}_{\mathrm{NT}}&\mathbf{0}\\ -\left(\mathbf{G}_{\mathrm{TN}}+\mathbf{G}_{\mathrm{TT}}\boldsymbol{\mu}\right)&\mathbf{G}_{\mathrm{TT}}&\mathbf{I}\\ 2\boldsymbol{\mu}&-\mathbf{I}&\mathbf{0}\end{bmatrix}}_{\mathbf{A}}\underbrace{\begin{bmatrix}\boldsymbol{\Lambda}_{\mathrm{N}}^{(\mathrm{E})}\Delta t\\ \boldsymbol{\Lambda}_{\mathrm{L}}^{(\mathrm{E})}\Delta t\\ \boldsymbol{\gamma}_{\mathrm{R}}^{(\mathrm{E})}\Delta t\end{bmatrix}}_{\mathbf{x}}+\underbrace{\begin{bmatrix}\mathbf{k}_{\mathrm{c}}\left(\mathbf{g}_{\mathrm{N}}^{(\mathrm{A})}+\mathbf{W}_{\mathrm{N}}^{(\mathrm{A})\mathrm{T}}\mathbf{u}^{(\mathrm{A})}\Delta t+\mathbf{G}_{\mathrm{N}}\mathbf{h}^{(\mathrm{A})}\Delta t+\hat{\boldsymbol{\omega}}_{\mathrm{N}}\Delta t\right)\\ -\boldsymbol{\gamma}_{\mathrm{T}}^{(\mathrm{A})}\Delta t-\left(\mathbf{G}_{\mathrm{T}}\mathbf{h}^{(\mathrm{A})}+\frac{\partial\mathbf{W}_{\mathrm{T}}^{\mathrm{T}}}{\partial\mathbf{q}}\mathbf{u}^{(\mathrm{A})}+\dot{\hat{\boldsymbol{\omega}}}_{\mathrm{T}}\right)\Delta t^{2}\\ \mathbf{0}\end{bmatrix}}_{\mathbf{b}} \tag{14}\]
Eventually, Eqs. (8) and (14) together form the equations for the nonsmooth dynamic analysis of a complex frictional system with multiple contact points. This mixed-level time-stepping method is called the conventional method in the following.
### Physics-informed neural network for nonsmooth dynamics
Compared to the event-detection strategy, the time-stepping strategy is more suitable for problems with a considerable number of contact points, as it avoids the repeated detection of state transitions. However, as precision cannot be compromised, a tiny time step is required for solving complex nonlinear vibrations, at the cost of a large computation time.
To achieve high-accuracy calculation of nonsmooth dynamics, we propose a new approach utilising the physics-informed neural network (PINN). A PINN consists of a feedforward neural network (FNN) and a physics-informed loss function [34], as shown in Figure 2. In a PINN, the FNN is used to capture the relationship between the input data and the output data, while the physics-informed loss function embeds the physical laws and quantifies the performance of the FNN [39].
As observed from Figure 2, an FNN comprises three parts: the input layer, the hidden layers and the output layer. When using an FNN, input data are fed into the input layer and propagated forward to the next adjacent layer. The final predictions of an FNN are output from the output layer. An \(L\)-layer FNN can be mathematically expressed as
\[{}^{0}\mathbf{a}=\mathbf{x},\qquad{}^{l}\mathbf{a}=\sigma\left({}^{l-1}\mathbf{w}\cdot{}^{l-1}\mathbf{a}+{}^{l-1}\mathbf{b}\right),\qquad{}^{L}\mathbf{a}={}^{L-1}\mathbf{w}\cdot{}^{L-1}\mathbf{a}, \tag{15}\]
where \({}^{0}\mathbf{a}\) and \({}^{L}\mathbf{a}\) denote the input layer (the 0th layer) and the output layer (the \(L\)th layer), respectively; \(\mathbf{x}\) is the input data, and \(l=1,2,\ldots,L-1\) indexes the hidden layers. \(\mathbf{w}\) and \(\mathbf{b}\) are the weights and biases of the FNN. \(\sigma\) denotes the activation function that adds nonlinearity to the FNN [40]; more details regarding the activation function are given below. On the right-hand side of Figure 2, the outputs of the FNN are used to formulate the physics-informed loss function. It is worth noting that, when solving partial differential equations (PDEs) via FNNs, the partial-derivative terms in the target PDEs can be obtained analytically through automatic differentiation [41]. Through training algorithms [42], the parameters of the FNN are iteratively modified to decrease the value of the physics-informed loss function. The training process converges when the given criteria are satisfied.
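To make the forward pass of Eq. (15) concrete, a minimal PyTorch sketch is given below; the layer sizes and the tanh activation are illustrative choices made here, not prescriptions from the text.

```python
import torch
import torch.nn as nn

class FNN(nn.Module):
    """L-layer feedforward network of Eq. (15): nonlinear activation on the
    hidden layers, linear output layer."""
    def __init__(self, sizes, activation=torch.tanh):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(m, n) for m, n in zip(sizes[:-1], sizes[1:]))
        self.activation = activation

    def forward(self, a):
        for layer in self.layers[:-1]:
            a = self.activation(layer(a))   # a^l = sigma(w a^{l-1} + b)
        return self.layers[-1](a)           # linear output layer of Eq. (15)

net = FNN([1, 20, 20, 20, 20, 20, 1])       # sizes chosen for illustration only
```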
Figure 2: An example of a physics-informed neural network (PINN).
To solve the nonsmooth dynamic problem, two fundamental sub-problems are involved: the solution of the LCP equation and the calculation of the dynamic response. In what follows, the neural network structures for the LCP equation and for the dynamic response are introduced in detail.
#### 2.3.1 PINN for solving LCP equations
This part presents the idea of combining the neural network with the theoretical description of the LCP. The general LCP equation has the form of Eq. (16):
\[\begin{cases}\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{b}\\ \mathbf{y}^{\top}\mathbf{x}=0\\ \mathbf{x}\geq\mathbf{0},\ \mathbf{y}\geq\mathbf{0}\end{cases} \tag{16}\]
in which \(\mathbf{x}\) and \(\mathbf{y}\) are vectors.
Instead of using conventional methods (pivoting or the quadratic programming (QP) method) to obtain the solutions, we define two residual functions:
\[\mathbf{f}=\mathbf{y}-\mathbf{A}\mathbf{x}-\mathbf{b} \tag{17}\]
\[\mathbf{r}=\mathbf{y}\circ\mathbf{x} \tag{18}\]
A loss function of the following form is defined:
\[\mathcal{L}=\frac{\sum\limits_{i=1}^{N}f_{i}^{2}+\sum\limits_{i=1}^{N}r_{i}^{2} }{N} \tag{19}\]
in which \(f_{i}\) and \(r_{i}=y_{i}x_{i}\) are the elements of the vectors \(\mathbf{f}\) and \(\mathbf{r}\), respectively, and \(N\) is the number of elements of each vector. The physics-informed loss function \(\mathcal{L}\) serves as the optimisation objective in the training process of the neural network.
The architecture of the PINN for solving the LCP equation (Eq. (16)) is shown in Figure 3. Starting with \(a_{0}\), \(a_{1}\), ..., \(a_{n}\) as the input, the neural network predicts the output data \(\mathbf{x}\) and \(\mathbf{y}\). In this procedure, several design choices need to be made. One critical choice is the activation function between the layers. Several nonlinear activation functions can be used, such as tanh, mish and ReLU, shown in Figure 4.
As \(\mathbf{x}\) and \(\mathbf{y}\) in the LCP must be either positive or zero, the ReLU function shown in Figure 4 can provide an output satisfying the LCP condition. However, the output of the ReLU function has a flat area when the inputs lie to the left of the origin. In this case, the training of the neural network may become trapped in the flat area, resulting in more computational time to converge. To alleviate this problem, a new activation function is designed based on the ReLU function, of the following form:
\[g=\max\left(0,a+c_{1}\right)+c_{2} \tag{20}\]
where \(c_{1}\) and \(c_{2}\) are correction factors. The modified ReLU function adjusts the data near the origin, as shown in Figure 4 (d). With the modified ReLU function, the output of the initialised neural network is more likely to fall outside the flat area, which largely reduces the training time.
After obtaining the outputs \(\mathbf{x}\) and \(\mathbf{y}\), the optimisation of the physics-informed loss function is accomplished by the L-BFGS algorithm. As shown in the overall procedure of the LCP PINN (Figure 3), training ends when the error falls below the tolerance.
Figure 3: The process of the PINN framework for LCP.
Several examples are calculated by the LCP PINN framework, as well as by two conventional methods (the pivoting and QP methods) for comparison, with \(\mathbf{A}=\begin{bmatrix}1&-1\\ -1&0\end{bmatrix}\) and \(\mathbf{b}=\begin{bmatrix}-0.009&0.02\end{bmatrix}^{\mathrm{T}}\). For the sake of conciseness, the solutions are listed in Table 1. This example uses a 3-layer neural network with 5 neurons in each layer. The results demonstrate that the proposed LCP PINN framework gives accurate solutions.
\begin{table}
\begin{tabular}{c c c c c} \hline & Accurate result & PINN framework & Pivoting method & QP method \\ \hline \(\mathbf{x}\) & [0.009 0]\({}^{\mathrm{T}}\) & [0.009 0]\({}^{\mathrm{T}}\) & [0.009 0]\({}^{\mathrm{T}}\) & [0.0090004 4.8204583e-09]\({}^{\mathrm{T}}\) \\ \(\mathbf{y}\) & [0 0.011]\({}^{\mathrm{T}}\) & [0 0.011]\({}^{\mathrm{T}}\) & [0 0.011]\({}^{\mathrm{T}}\) & [4.0007060e-07 0.0109996]\({}^{\mathrm{T}}\) \\ \hline \end{tabular}
\end{table}
Table 1: The solutions by different methods.
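For illustration, below is a minimal PyTorch sketch of the LCP PINN of this section applied to the example of Table 1. The 3-layer, 5-neuron network follows the text; the fixed dummy input, the tanh hidden activations, and the values of the correction factors \(c_{1}\), \(c_{2}\) are assumptions made for this sketch, not details stated in the paper.

```python
import torch
import torch.nn as nn

A = torch.tensor([[1., -1.], [-1., 0.]])
b = torch.tensor([-0.009, 0.02])

def modified_relu(a, c1=0.1, c2=0.0):       # Eq. (20); c1, c2 illustrative
    return torch.relu(a + c1) + c2

net = nn.Sequential(nn.Linear(2, 5), nn.Tanh(),
                    nn.Linear(5, 5), nn.Tanh(),
                    nn.Linear(5, 4))        # outputs [x1, x2, y1, y2]
inp = torch.ones(2)                         # fixed dummy input a_0, a_1

def loss_fn():
    out = modified_relu(net(inp))           # nonnegative outputs, Eq. (20)
    x, y = out[:2], out[2:]
    f = y - A @ x - b                       # residual of y = Ax + b, Eq. (17)
    r = y * x                               # elementwise r_i = y_i x_i, Eq. (18)
    return (f.pow(2).sum() + r.pow(2).sum()) / 2   # Eq. (19) with N = 2

opt = torch.optim.LBFGS(net.parameters(), max_iter=500)
def closure():
    opt.zero_grad()
    loss = loss_fn()
    loss.backward()
    return loss
opt.step(closure)

out = modified_relu(net(inp)).detach()
print('x =', out[:2], 'y =', out[2:])       # expect x ~ [0.009, 0], y ~ [0, 0.011]
```

Writing \(r_{i}=y_{i}x_{i}\) elementwise lets a single quadratic loss drive both the linear relation and the complementarity toward zero simultaneously.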
Figure 4: The nonlinear activation functions: (a) tanh function; (b) mish function; (c) ReLU function; (d) the modified ReLU function.
#### 2.3.2 PINN for vibration of multibody systems with unilateral contact
For the initial-value problem, the second-order differential equation has the general form \(\ddot{q}=f\left(t,q,\dot{q}\right)\), with initial conditions \(q\left(t_{0}\right)=q_{0}\) and \(\dot{q}\left(t_{0}\right)=\dot{q}_{0}\). The implicit Runge-Kutta method is a high-order integration method for solving the initial-value problem. The \(R\)th-order implicit Runge-Kutta method can be expressed in the following transformed form, given in Eqs. (21) and (22):
\[\dot{q}_{k}^{(i)}=\dot{q}^{(i+c_{k})}-\Delta t\sum_{r=1}^{R}a_{k,r}\,f\left(t^{(i)}+c_{r}\Delta t,\ \dot{q}^{(i+c_{r})}\right),\ k=1,2,\ldots,R \tag{21}\]

\[\dot{q}^{(i)}=\dot{q}^{(i+1)}-\Delta t\sum_{r=1}^{R}b_{r}\,f\left(t^{(i)}+c_{r}\Delta t,\ \dot{q}^{(i+c_{r})}\right) \tag{22}\]
in which \(\Delta t\) is one time increment over discrete time, and \(a_{k,r}\), \(b_{r}\) and \(c_{r}\) are the coefficients. In Eqs. (21) and (22), the known term at time step \(i\) is expressed by the unknown terms at time step \(i\)+1. When a neural network is introduced, using \(q\) as the input data, \([\dot{q}^{(i+c_{1})},\dot{q}^{(i+c_{2})},\ldots,\dot{q}^{(i+1)}]\) can be predicted as the output data. Subsequently, the neural network is trained until \(\dot{q}_{k}^{(i)}\) and \(\dot{q}^{(i)}\) obtained from Eqs. (21) and (22) converge to the corresponding known values.
For multi-degree-of-freedom systems, the general form of the equation of motion is expressed as:
\[\mathbf{M}\ddot{\mathbf{q}}+\mathbf{C}\dot{\mathbf{q}}+\mathbf{K}\mathbf{q}= \mathbf{f}_{\text{e}} \tag{23}\]
in which \(\mathbf{M}\), \(\mathbf{K}\) and \(\mathbf{C}\) are the mass, stiffness and damping matrices, \(\mathbf{q}\) is the displacement vector and \(\mathbf{f}_{\text{e}}\) is the external force; \(\mathbf{v}\) denotes \(\dot{\mathbf{q}}\). The conventional way to solve this second-order differential equation of a dynamical system is to use numerical integration methods, such as the explicit/implicit Runge-Kutta (RK) methods. Figure 5 shows the framework of a PINN for the dynamic simulation of multi-degree-of-freedom systems. Although the time-integration idea of the current PINN is based on the implicit RK method, as introduced above it actually differs from the conventional implicit RK method.
In the framework, the input data are the elements of \(\mathbf{q}=[q_{1},\ldots,q_{n},\ldots,q_{N}]^{\mathrm{T}}\) (\(n=1,2,\ldots,N\)), and the network is designed to output \(N\times R\) values, where \(N\) is the number of degrees of freedom of the system and \(R\) is the order used for the time integration. The output data are then reshaped into \(N\) vectors. Each vector is assigned to the velocity \(\mathbf{v}_{n}^{(i+1)}=[\dot{q}_{n}^{(i+c_{1})},\dot{q}_{n}^{(i+c_{2})},\ldots,\dot{q}_{n}^{(i+1)}]^{\mathrm{T}}\) corresponding to the stage quantities of each degree of freedom. After obtaining the physical quantities \(\mathbf{v}_{n}^{(i+1)}\), the displacement and the acceleration at time step \(i\)+1 can be obtained through the dynamic equations (Eqs. (i)-(iii)) shown in Figure 5. \(\mathbf{R}\) is the coefficient matrix formed from \(a_{k,r}\) and \(b_{r}\). Thus, according to the idea of Eqs. (21) and (22), written in matrix form as Eq. (iv) in Figure 5, the velocity vectors \(\mathbf{v}_{n}^{(i)}=[\mathrm{v}_{1,n},\ldots,\mathrm{v}_{r,n},\ldots,\mathrm{v}_{R+1,n}]\) can be obtained. The displacement and velocity can be learned by optimising a loss function of the following form:
\[\mathcal{L}_{\mathrm{D}}=\sum_{n=1}^{N}\sum_{r=1}^{R+1}\left(\mathrm{v}_{r,n}-\mathrm{v}_{n}^{(i)}\right)^{2} \tag{24}\]
The coefficient tables from the 2nd-order to the 200th-order implicit Runge-Kutta method can be obtained from [https://github.com/maziarraissi/PINNs](https://github.com/maziarraissi/PINNs) [32].
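As a sketch of how Eqs. (21) and (22) enter a loss, the snippet below loads a Butcher-coefficient file and forms the per-stage residuals; the file path and layout (the \(a_{k,r}\) rows stacked over the \(b_{r}\) row, followed by the stage times) are assumptions about how the linked repository stores its weights.

```python
import numpy as np
import torch

R = 4                                            # order of the implicit RK scheme
# Butcher coefficients; path and layout assumed to follow the repository above
tmp = np.loadtxt('IRK_weights/Butcher_IRK%d.txt' % R)
W = torch.tensor(tmp[: R * (R + 1)].reshape(R + 1, R), dtype=torch.float32)
c = torch.tensor(tmp[R * (R + 1):], dtype=torch.float32)  # stage times c_r

def irk_residual(v_out, f_stages, v_known, dt):
    """v_out: (R+1,) network outputs [v^(i+c_1), ..., v^(i+c_R), v^(i+1)];
    f_stages: (R,) right-hand sides at the stage states; v_known: the known
    v^(i).  Rows 1..R implement Eq. (21) and the last row Eq. (22): every
    row should reconstruct v^(i), so the squared residuals form the loss
    of Eq. (24)."""
    return (v_out - dt * (W @ f_stages)) - v_known
```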
#### 2.3.3 Single PINN for the transient dynamic analysis of nonsmooth vibration
When unilateral contact is considered, nonsmooth characteristics are involved in the dynamic system. Figure 6 shows the framework of the corresponding PINN, referred to as the single PINN framework in the following. Compared with the PINN for dynamic simulation given in Figure 5, the equations of motion Eqs. (b) and (c) used in the neural network are:
\[\mathbf{h}_{n}^{(i+1)}=\mathbf{K}_{\mathrm{s}}\mathbf{q}_{n}^{(i+1)}+\mathbf{C}_{\mathrm{s}}\mathbf{v}_{n}^{(i+1)}+\mathbf{F}_{\mathrm{e}} \tag{25}\]

\[\mathbf{a}_{n}^{(i+1)}=\mathbf{M}^{-1}\mathbf{h}_{n}^{(i+1)}+\mathbf{W}_{\mathrm{N}}\boldsymbol{\lambda}_{\mathrm{N}}+\mathbf{W}_{\mathrm{T}}\boldsymbol{\lambda}_{\mathrm{T}} \tag{26}\]
Figure 5: The framework of PINN for dynamic simulation.
in which \(\mathbf{K}_{\mathrm{s}}\) and \(\mathbf{C}_{\mathrm{s}}\) are the stiffness and damping matrices when the unilateral constraints are removed, and the contact force \(\boldsymbol{\lambda}_{\mathrm{N}}\) and friction force \(\boldsymbol{\lambda}_{\mathrm{T}}\) are time-varying variables, which can be determined by the LCP equations introduced in Section 2.2.1 or 2.2.2.
The calculation process of the single PINN framework is: (1) use the system information, such as the system parameters and the initial conditions, to calculate \(\boldsymbol{\lambda}_{\mathrm{N}}\) and \(\boldsymbol{\lambda}_{\mathrm{T}}\) with the time-stepping method according to the contact type; (2) transfer \(\boldsymbol{\lambda}_{\mathrm{N}}\) and \(\boldsymbol{\lambda}_{\mathrm{T}}\) to the dynamic-equation part of the PINN for the dynamic calculation. The operating mechanism of the PINN for the dynamic calculation has been explained in the previous section. In terms of the neural network design, the activation function for the dynamic calculation is tanh. The dynamic response can be learned by optimising the loss function \(\mathcal{L}_{\mathrm{D}}\) given in Eq. (24).
#### 2.3.4 Dual PINN for the transient dynamic analysis of nonsmooth vibration
On the basis of the LCP PINN for the linear complementarity problem proposed in Section 2.3.1 and the PINN for multibody systems with unilateral contact, a new physics-informed neural network is proposed, shown in Figure 7. In this method, two physics-informed networks are involved; thus, this method is referred to as the dual PINN framework.
As the normal contact and the tangential contact can both be enforced through the LCP, the first neural network is basically the LCP PINN. Compared with the PINN for the LCP in Section 2.3.1, the outputs of the
Figure 6: The framework of the single PINN.
FNN for the LCP are assigned to physical variables. It should be noted that the physical variables differ between contact types: (1) for direct rigid contact (Section 2.2.1), the outputs are assigned to \(\boldsymbol{\Lambda}_{\mathrm{N}}^{(\mathrm{E})}\), \(\boldsymbol{\Lambda}_{\mathrm{L}}^{(\mathrm{E})}\), \(\boldsymbol{\gamma}_{\mathrm{R}}^{(\mathrm{E})}\), \(\mathbf{g}_{\mathrm{N}}^{(\mathrm{E})}\), \(\boldsymbol{\gamma}_{\mathrm{L}}^{(\mathrm{E})}\) and \(\boldsymbol{\Lambda}_{\mathrm{R}}^{(\mathrm{E})}\); (2) for rigid contact with springs (Section 2.2.2), the outputs are assigned to \(\boldsymbol{\Lambda}_{\mathrm{N}}^{(\mathrm{E})}\), \(\boldsymbol{\Lambda}_{\mathrm{L}}^{(\mathrm{E})}\), \(\boldsymbol{\gamma}_{\mathrm{R}}^{(\mathrm{E})}\), \(\boldsymbol{\Omega}_{\mathrm{N}}^{(\mathrm{E})}\), \(\boldsymbol{\gamma}_{\mathrm{L}}^{(\mathrm{E})}\) and \(\boldsymbol{\Lambda}_{\mathrm{R}}^{(\mathrm{E})}\). Then \(\mathbf{x}\) and \(\mathbf{y}\) are formed as in Eq. (12) or Eq. (14) and substituted into the physics-informed loss function of Eqs. (17)-(19).
When the contact force \(\boldsymbol{\lambda}_{\mathrm{N}}\) and friction force \(\boldsymbol{\lambda}_{\mathrm{T}}\) are output, they are transferred to the physics part of the second PINN for the dynamic analysis of the multibody system with unilateral contact. The basic idea of this PINN has been introduced in Section 2.3.2. Eventually, the dynamic response over an arbitrary time duration can be learned through training.
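The data flow of the single and dual PINN frameworks can be summarised by the structural sketch below; every helper function in it is hypothetical and merely stands in for the corresponding component described in the text.

```python
# Structural sketch of one analysis step; all helpers below are hypothetical.
def analysis_step(state, dt, framework='dual'):
    if framework == 'single':
        # Section 2.3.3: contact/friction forces from the conventional
        # time-stepping LCP solve of Eq. (12) or Eq. (14)
        lam_N, lam_T = solve_lcp_conventional(state, dt)
    else:
        # Section 2.3.4: first network, the LCP PINN of Section 2.3.1,
        # whose outputs are assigned to the physical LCP variables
        lam_N, lam_T = train_lcp_pinn(state, dt)
    # Second network: the dynamics PINN of Figures 5-6; its physics part
    # uses Eqs. (25)-(26) with the transferred forces, loss is Eq. (24)
    return train_dynamics_pinn(state, lam_N, lam_T, dt)
```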
## 3 One-dimensional stick-slip vibration with direct contact
This section presents the application of the PINN frameworks to the stick-slip vibration problem of Figure 8. It is assumed that contact in the normal direction is maintained during vibration. The vertical position of the slider mass is a constant value of 0, and \(\lambda_{\mathrm{N}}\) is known in advance, being the compression force. The equations of motion for systems with friction contact reduce to:
\[\mathbf{M}\ddot{\mathbf{x}}+\mathbf{h}+\mathbf{W}_{\mathrm{T}}\lambda_{\mathrm{T}}=\mathbf{0} \tag{27}\]
The LCP equation given in Eq. (12) degenerates to:
\[\underbrace{\begin{bmatrix}\boldsymbol{\gamma}_{\mathrm{L}}^{(\mathrm{E})}\Delta t\\ \boldsymbol{\Lambda}_{\mathrm{R}}^{(\mathrm{E})}\Delta t\end{bmatrix}}_{\mathbf{y}}=\underbrace{\begin{bmatrix}\mathbf{G}_{\mathrm{TT}}&\mathbf{I}\\ -\mathbf{I}&\mathbf{0}\end{bmatrix}}_{\mathbf{A}}\underbrace{\begin{bmatrix}\boldsymbol{\Lambda}_{\mathrm{L}}^{(\mathrm{E})}\Delta t\\ \boldsymbol{\gamma}_{\mathrm{R}}^{(\mathrm{E})}\Delta t\end{bmatrix}}_{\mathbf{x}}+\underbrace{\begin{bmatrix}-\left(\mathbf{G}_{\mathrm{TN}}+\mathbf{G}_{\mathrm{TT}}\boldsymbol{\mu}\right)\boldsymbol{\Lambda}_{\mathrm{N}}^{(\mathrm{E})}\Delta t-\boldsymbol{\gamma}_{\mathrm{T}}^{(\mathrm{A})}\Delta t-\left(\mathbf{G}_{\mathrm{T}}\mathbf{h}^{(\mathrm{A})}+\frac{\partial\mathbf{W}_{\mathrm{T}}^{\mathrm{T}}}{\partial\mathbf{q}}\mathbf{u}^{(\mathrm{A})}+\dot{\hat{\boldsymbol{\omega}}}_{\mathrm{T}}\right)\Delta t^{2}\\ 2\boldsymbol{\mu}\boldsymbol{\Lambda}_{\mathrm{N}}^{(\mathrm{E})}\Delta t\end{bmatrix}}_{\mathbf{b}} \tag{28}\]
with \(0\leq\gamma_{{}_{\rm L}}^{\rm(E)}\perp\Lambda_{\rm L}^{\rm(E)}\geq 0\) and \(0\leq\gamma_{{}_{\rm R}}^{\rm(E)}\perp\Lambda_{\rm R}^{\rm(E)}\geq 0\).
### Mechanical model
The novel PINN approach proposed here is demonstrated by simulating a common one-dimensional stick-slip friction and vibration problem representing a slider-belt system (Figure 8). The system is modelled as a point mass \(m\) held by a linear spring and a viscous damper, supported by a rigid belt moving at a constant velocity \(v_{0}\), where \(k_{0}\) is the spring stiffness, \(c_{0}\) is the damping coefficient, \(F_{\mathrm{n}}\) is the normal compression force, and \(\mu\) is the coefficient of friction. The coordinate \(x\) describes the horizontal displacement. The vertical position in this model is a constant value assumed to be 0, and \(\lambda_{\mathrm{N}}=F_{\mathrm{n}}\) is known in advance. The constraint function in the tangential direction is expressed as \(g_{\mathrm{T}}=x-v_{0}t\). According to the definitions, one gets: \(W_{\mathrm{T}}=1\), \(\hat{w}_{\mathrm{N}}=\dot{\hat{w}}_{\mathrm{N}}=\hat{w}_{\mathrm{T}}=0\), \(\dot{\hat{w}}_{\mathrm{T}}=-v_{0}\).
The two kinds of Coulomb-Stribeck laws of friction are (Figure 9):
\[\mu=\frac{\mu_{\mathrm{s}}}{1+\delta\left|v_{\mathrm{rel}}\right|}\,,\qquad\mu=\mu_{\mathrm{k}}+\left(\mu_{\mathrm{s}}-\mu_{\mathrm{k}}\right)e^{-\alpha\left|v_{\mathrm{rel}}\right|} \tag{29}\]
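Both laws are straightforward to evaluate. In the sketch below, \(\mu_{\mathrm{s}}\) and \(\mu_{\mathrm{k}}\) are read as the static and kinetic friction coefficients and \(v_{\mathrm{rel}}\) as the relative velocity; these symbol readings and the numerical values are assumptions of this sketch, not values from the paper.

```python
import numpy as np

def mu_slow(v_rel, mu_s=0.3, delta=1.0):
    """Law with a slowly decreasing rate (Eq. (29), left); values illustrative."""
    return mu_s / (1.0 + delta * np.abs(v_rel))

def mu_sharp(v_rel, mu_s=0.3, mu_k=0.15, alpha=50.0):
    """Law with a sharp decrease near zero relative velocity (Eq. (29), right)."""
    return mu_k + (mu_s - mu_k) * np.exp(-alpha * np.abs(v_rel))
```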
### PINN scheme
When the system is nonlinear/nonsmooth, the requirement for accuracy is stricter, as the result of the next step is sensitive to the result of the previous step. To explore the applicability and accuracy of the new PINN strategy, the two specific PINN frameworks, the single and dual PINN strategies, are used for the stick-slip simulation. Only the friction force is calculated, either by the conventional LCP algorithm involved in the single PINN framework or by the LCP PINN part of the dual PINN framework. The LCP formulation is now Eq. (28).
Figure 8: Model I: the single mass-belt system with friction.
### Numerical Simulation
For the single-point frictional oscillator with stick-slip motion, numerical methods such as the smoothing method or the bisection method can provide very accurate results. Here, the switching-model method gives the ground-truth results. In this section, the mixed-level time-stepping method presented in Section 2 is used to simulate stick-slip and is called the conventional method. In the numerical simulations, the system parameters are \(m=1\), \(k_{0}=1\), \(c_{0}=0\).
#### 3.3.1 Friction law with a slowly decreasing rate
Firstly, the PINN frameworks are examined in a situation where the friction force decays gently with the relative velocity. The nonlinear friction law employed has been given in Eq. (29) and is depicted in Figure 9. Figure 10 shows the phase plots of the vibration calculated by the two PINN strategies. The black line with triangle markers shows the ground-truth results calculated by the switching method, the green line with square markers shows the result of the single PINN framework, and the blue dashed line with circle markers shows the result of the dual PINN framework. The straight line in the phase plot indicates stick motion, and the curve represents slip motion. The vibration of the system alternates between the stick phase and the slip phase.
The comparison in Figure 10 shows that the results from both the single PINN framework and dual PINN framework are identical to the ground truth results. This indicates that the idea of employing nonsmooth dynamics theory for the multi-point contact problem in a neural network works very well
Figure 10: The phase plot of stick-slip vibration.
in dealing with nonlinear stick-slip vibration. The 4th-order PINN and the 10th-order PINN give very close results; thus, only one group of results is provided.
The dynamic behaviour of nonlinear systems with stick-slip or loss of contact (which would appear in 2-DoF models) depends on the initial conditions. Hence, capturing the transition point, which provides the initial conditions for the next phase, is important. The time-stepping method does not require a transition or event check, but the price is a sufficiently small time step for high accuracy. Figures 11-12 illustrate the comparisons between the conventional time-stepping methods and the two PINN frameworks when the time step is \(\Delta t\)=0.01. The calculations of the conventional LCP method and the conventional RK4 LCP method are based on numerical integration; the latter is a high-order numerical method. Figure 11 shows that when \(\Delta t\)=0.01, the conventional LCP method can still predict the transitions between stick and slip motion. However, the higher-order RK4 LCP method is, in this case, not stable enough to predict the correct stick-slip. Notably, both the single and dual PINN frameworks are high-order methods and produce the correct stick-slip limit cycle. Furthermore, Figure 12 shows that the accuracy of the PINN frameworks at both the displacement and the velocity level is very good. The conventional RK4 LCP method initially predicts the correct results until it fails to detect the second transition from slip to stick around 18 s (illustrated in Figure 12 (b)). As high-order algorithms, the PINN frameworks are more reliable.
Figure 11: The phase plot of stick-slip vibration when \(\Delta t\)=0.01.
The frequency spectrum shown in Figure 13 indicates that the system with stick-slip motion has one primary vibration frequency, one frequency-doubled component and one frequency-tripled component. From the frequency point of view, when a method (the conventional RK4 LCP method) is incapable of simulating the correct stick-slip transitions, it not only predicts distorted time-history responses but also provides incorrect vibration frequencies. Stick-slip vibration thus places higher requirements on numerical methods.
When the time step increases to \(\Delta t\)=0.08, Figure 14 shows that the two PINN frameworks could maintain the correct prediction of stick-slip vibration. However, the two conventional methods could
Figure 12: Time history of the displacement and the velocity when \(\Delta t\)=0.01.
Figure 13: The frequency spectrum of stick-slip vibration when \(\Delta t\)=0.01.
not give accurate results. Figure 14 (b) shows the time history of the velocity, in which a constant velocity means the system is in the stick phase; otherwise, the slider is slipping. Figure 14 (b) also explains the reason for the failure of the conventional methods: with the large time step, they cannot identify the transition instants between stick and slip motion.
#### 3.3.2 Friction law with a sharply decreasing rate
In this section, simulations using the friction law shown in Figure 9 (b) are conducted. The sharp decrease of the friction force near zero relative velocity reflects the friction behaviour of many real frictional pairs. This kind of friction law causes more difficulties in numerical calculations. On the one hand, a smaller time step ensures the detection of stick-slip transitions, which is known to be a key factor for the accuracy of nonsmooth vibration. On the other hand, the time step cannot be too small, considering the expense of the numerical calculation. In the following examples, the network used in the PINN for dynamic simulation is a 7-layer feedforward neural network with 20 neurons in each layer. The network used in the PINN for the LCP is a 3-layer feedforward neural network with 5 neurons in each layer.
Figure 15 shows the results of the different methods at the time steps at which they predict correct stick-slip vibration. As illustrated in Figure 15 (b), the conventional high-order method needs a very small time step compared with the PINN frameworks. The dual PINN framework allows a time step that is 50 times larger than that of the conventional LCP method. Although the advantage of the single PINN
Figure 14: The time response of stick-slip vibration when \(\Delta t\)=0.08.
framework in terms of the time step is not as pronounced as that of the dual PINN framework, it can predict stick-slip vibration accurately provided the time step is not too large.
Figure 16 compares the time-history results of the different LCP-based methods for multi-contact systems when \(\Delta t\)=0.005. Figure 16 (b) shows that the single PINN framework and the two conventional methods at this time step fail to switch to stick motion after the slider starts to slide on the belt. Consequently, an incorrect displacement track develops, as shown in Figure 16 (a). These results indicate the importance of the numerical method for correct state detection, and the excellent performance of the dual PINN.
Figure 15: The correct phase plot of stick-slip by different methods: (a) conventional LCP when \(\Delta t\)=0.0001; (b) conventional RK4 LCP when \(\Delta t\)=10\({}^{-6}\); (c) single PINN when \(\Delta t\)=0.0001; (d) dual PINN when \(\Delta t\)=0.005.
## 4 Two-dimensional nonsmooth FIV with spring contact
### Mechanical model
Figure 17 shows a classic, minimal model exhibiting stability bifurcation, known as mode-coupling instability. It is a 2-DoF model consisting of a mass-spring-damper system sliding on a rigid moving belt. In the horizontal direction, the mass \(m\) is held by a spring \(k_{1}\) and a damper \(c_{1}\). In the vertical direction, \(m\) is linked to the ground by a damper \(c_{2}\). Additionally, \(m\) is constrained by an inclined spring \(k_{3}\), which introduces an asymmetric stiffness that accounts for mode-coupling instability. Between the mass and the belt, the contact stiffness is \(k_{2}\). An assumed massless slider is in contact with the moving belt. A preload \(F_{\text{p}}\) is applied to bring the slider into contact with the belt before the belt starts to move. The friction at the interface is assumed to follow the Coulomb-Stribeck friction law. The sliding friction force \(F_{\text{T}}\) is proportional to the normal contact force.
The coefficient of friction \(\mu\) affects the natural frequencies of the 2-DoF system [35, 36]. As \(\mu\) increases, the two natural frequencies of the system move closer and eventually merge, causing instability of the system. As the horizontal vibration grows, the interface between the slider and the belt opens due to the vertical motion, before contact is restored at a later time. This transition of states in the normal direction is called the separation-reattachment phenomenon. Additionally, when the Coulomb-Stribeck friction law is considered, the slider undergoes horizontal stick-slip vibration
Figure 16: The time response of stick-slip vibration with friction law when \(\Delta t\)=0.005: (a) the time history of the displacement; (b) the time history of the velocity.
at the interface, potentially promoting further instability of the system. Both types of nonsmooth phenomena can be present in this model, which is unlike the one-degree-of-freedom model in Section 3.
Figure 17: Model II: 2-DoF slider-belt model with dry friction.
To apply the single/dual PINN frameworks to this two-dimensional problem, an interpolation technique is used. According to the order of the numerical integration method, the force vectors \(\lambda_{\mathrm{T}}^{(i)}\) and \(\lambda_{\mathrm{N}}^{(i)}\) of the conventional methods are transformed into the matrices \(\lambda_{\mathrm{T\text{-}RK}}^{(i)}\) and \(\lambda_{\mathrm{N\text{-}RK}}^{(i)}\) based on the following equations:
\[\lambda_{\mathrm{T\text{-}RK}}^{(i)}=\lambda_{\mathrm{T}}^{(i-1)}+\mathbf{s}\left(\lambda_{\mathrm{T}}^{(i)}-\lambda_{\mathrm{T}}^{(i-1)}\right) \tag{30}\]

\[\lambda_{\mathrm{N\text{-}RK}}^{(i)}=\lambda_{\mathrm{N}}^{(i-1)}+\mathbf{s}\left(\lambda_{\mathrm{N}}^{(i)}-\lambda_{\mathrm{N}}^{(i-1)}\right) \tag{31}\]
in which \(\mathbf{s}\) is the vector of time fractions calculated from the coefficients \(c_{k}\) (\(k=1,2,\ldots,q\)) of the \(q\)th-order RK method. Then \(\lambda_{\mathrm{T\text{-}RK}}^{(i)}\) and \(\lambda_{\mathrm{N\text{-}RK}}^{(i)}\) are passed into the physics part of the PINN for the dynamic response, replacing \(\lambda_{\mathrm{T}}^{(i)}\) and \(\lambda_{\mathrm{N}}^{(i)}\) in Eq. (c) of Figures 5 and 6. The new methods are referred to as the advanced single PINN and the advanced dual PINN in the following. The solution procedure of the two advanced PINN frameworks is shown in Figure 18.
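A minimal sketch of the interpolation of Eqs. (30) and (31) is given below, with the stage fractions passed in as the vector \(\mathbf{s}\); the function name and array shapes are choices made here for illustration.

```python
import numpy as np

def interpolate_forces(lam_prev, lam_curr, s):
    """Eqs. (30)-(31): interpolate a force between steps i-1 and i at the
    RK stage fractions collected in s (built from the c_k).
    lam_prev, lam_curr: force vectors at steps i-1 and i; s: (q,)."""
    lam_prev, lam_curr = np.asarray(lam_prev), np.asarray(lam_curr)
    s = np.asarray(s)[:, None]                 # one row per RK stage
    return lam_prev + s * (lam_curr - lam_prev)
```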
### Numerical simulation
Although stick-slip and separation-reattachment are allowed in this problem, the occurrence of these phenomena depends on the system parameters. When mode-coupling instability happens, the critical coefficient of friction \(\mu_{\mathrm{c}}\) at the bifurcation can be obtained via eigenvalue analysis. Figure 19 shows the change of the real and imaginary parts of the eigenvalues during sliding contact with the coefficient
Figure 18: Framework of the advanced Single/Dual PINN for the 2D nonsmooth problem.
of friction for Example 1 (see Section 4.3.1). Instability occurs at \(\mu_{\mathrm{c}}=0.83\) (Figure 19). Two examples, with parameters \(\mu=0.4\) and \(\mu=1.2\), are investigated.
#### 4.3.1 Example 1
Using the single PINN and dual PINN schemes, numerical simulations are completed with the initial conditions \(x(t=0)=y(t=0)=-10\) and \(v_{x}(t=0)=v_{y}(t=0)=v_{0}\). The size of the neural network is identical to that used for the problem in Section 3. The results of the root-shooting method are used as the ground truth. In Figures 20 and 21, the solid and hollow squares denote the advanced single PINN method with 4th-order and 10th-order integration formulations in the PINN for dynamic simulation, while the solid triangles and hollow circles denote the advanced dual PINN method with 4th-order and 10th-order integration formulations.
Figure 20 shows the time history of the contact force. In this example, the contact force alternates between positive values, which represent contact between the slider and the belt, and zero, which represents loss of contact. Thus, separation and reattachment happen during the vibration, which introduces vertical nonsmoothness. From the results, one can see that both the advanced single PINN and the advanced dual PINN framework produce very good results for a problem in which repeated separation-reattachment events happen. As illustrated in the enlarged plot of the contact force (Figure 20 (b)), when the 4th-order PINN is used, the result calculated by the dual PINN framework is not as good as that of the single PINN. When the 10th-order integration method is used, the accuracy of the dual PINN framework (red circles) is improved over that of the 4th-order dual PINN
Figure 19: The evolution of the real part (left) and imaginary part (right) of the eigenvalue with the friction coefficient.
(blue triangles). Similarly, the accuracy of the single PINN framework can also be improved by using a high-order integration method.
Figure 21 illustrates that the displacement and velocity responses calculated by the two advanced PINN strategies are very accurate compared with the root-shooting results. As shown by the time history of the velocity given in Figure 21 (c)-(d), stick-slip vibration does not appear in this case. These results enable comparisons between the PINN strategies, which employ the dynamic equations with unilateral contact, and the ground-truth results calculated by the root-shooting method for the single-point contact problem. In the following, comparisons between the conventional numerical methods for the multi-point contact problem and the PINN strategies are made.
Figure 20: The time history of the contact force. (a) \(t\)=[0\(\sim\)10]s; (b) \(t\)=[9\(\sim\)10]s.
Table 2 shows the root mean squares (RMS) of the contact force, the displacement and the velocity calculated by the different methods. The difference between the conventional LCP method and the RK4 LCP method is that the former uses Moreau's algorithm, while the latter adopts the idea of 4th-order RK. Single PINN and dual PINN are the methods without interpolation introduced in Sections 2.3.3 and 2.3.4. To give a clearer illustration of the data in Table 2, Figure 22 shows the RMS errors of the horizontal and vertical displacement. Using the results of the root-shooting method as the reference value, the absolute errors of the corresponding RMS results of the six methods for the nonsmooth simulation of multi-point contact problems are calculated. One finds that: (1) with the interpolation technique, the advanced PINN frameworks improve the accuracy of the PINN frameworks; (2) compared with the conventional methods, the advanced PINN frameworks have an advantage in accuracy when the same time step is used (\(\Delta t\)=0.001). The accuracy of the conventional methods can be improved with a smaller time step (\(\Delta t\)=0.0005), which however means lower efficiency. As shown by the velocity data listed in Table 2, the proposed advanced single and dual PINN methods produce accurate results.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Method / RMS & Contact force & Horizontal displacement & Vertical displacement & Horizontal velocity & Vertical velocity \\ \hline \hline \end{tabular}
\end{table}
Table 2: RMS results of different numerical methods.
Figure 21: The time history of displacement and velocity: (a) horizontal displacement vs time; (b) vertical displacement vs time; (c) horizontal velocity vs time; (d) vertical velocity vs time.
Figure 22: The RMS error of the displacement response: (a) horizontal displacement; (b) vertical displacement.
#### 4.3.2 Example 2
Figures 23 and 24 show the response of the contact force and the horizontal velocity when \(\mu=1.2\) and the initial conditions are \(x(t=0)=y(t=0)=-1\) and \(v_{x}(t=0)=v_{y}(t=0)=v_{0}\). The first observation is that separation and reattachment happen in this example, as the contact force repeatedly alternates between positive values and zero, representing contact and loss of contact, respectively. From Figure 24, it can be seen that stick-slip vibration in the horizontal direction occurs as well. More clearly, Figure 24 (b) shows that when both normal and tangential nonsmoothness are involved in a problem, the state changes are hard to predict. Figure 23 shows that both the advanced single PINN and the advanced dual PINN framework can accurately detect the separation-reattachment events; a zoomed-in view is presented in Figure 23 (b). Comparing with the results of the 10th-order advanced single and dual PINN cases, adding a second PINN can lower the accuracy, as also happens in Example 1. However, when a higher-order numerical integration scheme is used, the accuracy of both the single and dual PINN frameworks is improved, so this is not a major concern. As shown in Figure 24, stick-slip vibration occurs from around 4.6 s. Consistent with the contact-force results, when the order of the numerical method is ten, accurate stick-slip vibration is produced by both the advanced single and dual PINN frameworks. Even though the dynamic response is highly nonlinear, the PINN frameworks work very well.
In the following, the results of the new PINN frameworks and of the conventional methods for the nonsmooth calculation of multi-point contact systems are compared. In Table 3, the numerical calculations of the two conventional methods are carried out with different time steps (\(\Delta t\)=0.001, \(\Delta t\)=0.0005 and \(\Delta t\)=0.0001). \(\times\) denotes a failure to detect the stick-slip transitions, which from the viewpoint of nonsmooth simulation is unacceptable. When \(\Delta t\)=0.0001, the conventional methods give satisfactory results. With respect to the PINN frameworks, when the 4th-order methods are used, a smaller time step is needed, as \(\Delta t\)=0.001 does not correctly predict the stick-slip transition at 8.88 s (see Figure 24 (b)). Furthermore, the 10th-order single and dual PINN frameworks make accurate transition detections at \(\Delta t\)=0.001, which is 10 times larger than the time step of the conventional methods, and the accuracy of the overall time-domain response is high.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Method & Time step & Contact force & Horizontal displacement & Vertical displacement \\ \hline Root-shooting method & & 224.449 & 0.3997 & 0.7629 \\ \hline Conventional LCP method & \(\Delta t=5\times 10^{-4}\) & \(\times\) & \(\times\) & \(\times\) \\ & \(\Delta t=10^{-4}\) & 222.331 (0.94\%) & 0.3980 (0.43\%) & 0.7530 (1.30\%) \\ \hline \hline \end{tabular}
\end{table}
Table 3: RMS results of different numerical methods. Cross (\(\times\)) denotes results that fail to present correct stick-slip transitions.
Figure 24: The time history of the velocity: (a) \(t\)=[0\(\sim\)10] s; (b) \(t\)=[8.7\(\sim\)9.7] s, in which the dotted zone represents the sticking state, the grey zone the slipping state, and the white zone the separation state.
Through the above numerical analysis of Examples 1 and 2, one can conclude that when determining the vibration of a system with separation-reattachment and stick-slip, the numerical method and the time step must be chosen very carefully. The error in predicting nonsmooth vibration comes not only from the discernible error accumulation of the numerical integration but also from the unpredictable failure to detect nonsmooth transitions, which is a fatal issue. The new PINN frameworks proposed in this work are shown to be well capable of dealing with nonsmoothness in two dimensions (the normal and tangential directions). Moreover, compared with the conventional methods, the advanced single and dual PINN frameworks are less dependent on the time-step length to give good predictions of the stick-slip and separation-reattachment transitions. This is because the PINN frameworks embed the conventional physical equations, so that the numerical error of the traditional numerical methods can be offset through the training of the neural network. The accuracy of the PINN frameworks can be improved in two ways: decreasing the time step and increasing the order of the numerical integration formulations.
## 5 Conclusions
This work proposes novel physics-informed neural network (PINN) frameworks for simulating the vibration of frictional systems with multiple contact points. Firstly, a tailored PINN based on the mathematical expressions of the linear complementarity problem (LCP) is proposed. Then, a PINN for the dynamic simulation of complex systems based on numerical integration formulations of the dynamic equations is established. Furthermore, based on these two PINNs, four PINN-based methods are proposed to replace conventional numerical methods for solving the vibration of frictional systems with multiple contact points: the single PINN framework, the dual PINN framework, and the advanced single and dual PINN frameworks.
The applications of these frameworks to the direct contact problem with only stick-slip vibration (1-D problem) and to the spring contact problem with both separation-reattachment and stick-slip (2-D problem) lead to the following findings: (1) all four PINN frameworks can accurately detect stick-slip and separation-reattachment transition events in both 1-D and 2-D nonsmooth problems; (2) compared with the conventional numerical methods, the PINN frameworks do not necessitate a tiny time step; high-order PINN frameworks allow a larger time step with better accuracy and greater stability than the conventional methods; (3) the advanced single/dual PINN frameworks outperform the single/dual PINN frameworks in identifying state transitions, which is important for achieving high accuracy; (4) the accuracy of the PINN strategies can be improved by simply increasing the numerical integration order. Overall, the newly developed methods have illustrated the potential to accurately predict nonsmooth dynamic behaviour, indicating encouraging prospects for future research.
## Acknowledgements
The authors are grateful for the financial support from Postdoctoral Research Foundation of China (No. 2019M652564). Support from the Australian Research Council to SM and YG (FT180100338; IC190100020), and the Songshan Laboratory Project (No: 221100211000-01) are also gratefully acknowledged.
|
2308.07984 | Anaphoric Structure Emerges Between Neural Networks | Pragmatics is core to natural language, enabling speakers to communicate
efficiently with structures like ellipsis and anaphora that can shorten
utterances without loss of meaning. These structures require a listener to
interpret an ambiguous form - like a pronoun - and infer the speaker's intended
meaning - who that pronoun refers to. Despite potential to introduce ambiguity,
anaphora is ubiquitous across human language. In an effort to better understand
the origins of anaphoric structure in natural language, we look to see if
analogous structures can emerge between artificial neural networks trained to
solve a communicative task. We show that: first, despite the potential for
increased ambiguity, languages with anaphoric structures are learnable by
neural models. Second, anaphoric structures emerge between models 'naturally'
without need for additional constraints. Finally, introducing an explicit
efficiency pressure on the speaker increases the prevalence of these
structures. We conclude that certain pragmatic structures straightforwardly
emerge between neural networks, without explicit efficiency pressures, but that
the competing needs of speakers and listeners condition the degree and nature
of their emergence. | Nicholas Edwards, Hannah Rohde, Henry Conklin | 2023-08-15T18:34:26Z | http://arxiv.org/abs/2308.07984v1 | # Anaphoric Structure Emerges Between Neural Networks
###### Abstract
Pragmatics is core to natural language, enabling speakers to communicate efficiently with structures like ellipsis and anaphora that can shorten utterances without loss of meaning. These structures require a listener to interpret an ambiguous form--like a pronoun--and infer the speaker's intended meaning--who that pronoun refers to. Despite potential to introduce ambiguity, anaphora is ubiquitous across human language. In an effort to better understand the origins of anaphoric structure in natural language, we look to see if analogous structures can emerge between artificial neural networks trained to solve a communicative task. We show that: first, despite the potential for increased ambiguity, languages with anaphoric structures are learnable by neural models. Second, anaphoric structures emerge between models "naturally" without need for additional constraints. Finally, introducing an explicit efficiency pressure on the speaker increases the prevalence of these structures. We conclude that certain pragmatic structures straightforwardly emerge between neural networks, without explicit efficiency pressures, but that the competing needs of speakers and listeners conditions the degree and nature of their emergence.
**Keywords:** language emergence; pronouns; ellipsis; pragmatics; neural networks
## Introduction
When we communicate, we often leave out material that is recoverable from the context. Linguistically, such omission or shortening presents a challenge for the listener--both in identifying that something has been omitted and recovering the intended meaning in context. Certain linguistic structures signal that part of what's being said is redundant and can be recovered: ellipsis enables speakers to signal repeated meaning by omitting words [12] and pronominal anaphora signals the re-mention of a discourse referent (see Figure 1 for examples). Both structures employ _anaphors_ that refer back to meaning mentioned elsewhere in the context--_antecedents_. While this is particularly clear in the case of pronouns, work on ellipsis has suggested that there is similar anaphoric behaviour at ellipsis sites [13, 14, 15, 16], even when the missing material is not replaced with an overt marker, like in cases of null anaphora, or _pro-drop_[17]. We take both pronouns and ellipsis as examples of the broader class of pragmatic structures in natural language. We look at what conditions are needed for structures analogous to these to arise between neural networks in an effort to better understand why pragmatic structure may have emerged across human languages despite its potential for ambiguity.
A growing body of work makes the case that natural language has evolved to enable efficient communication between humans, with the competing needs of speakers and listeners as major factors shaping the structure that emerges [1, 18, 19]. This is perhaps most famously demonstrated by Zipf's Law [20], which observes that word frequency is inversely correlated with word length: more frequent words (e.g., _a_) are shorter than less frequent words (_e.g., electroencephalograph_), which helps minimise the production effort required by a speaker [16]. Compressing semantically redundant information could be argued to achieve a similar goal. However, compression risks introducing greater ambiguity into communication, increasing a listener's uncertainty about the intended meaning [19]. Despite such a risk of miscommunication, the existence of these structures across the world's languages [10] suggests that affordances to the speaker outweigh potential communicative failure. Taken together, these observations illustrate how language balances the needs of the speaker by minimising the costs of production while allowing the listener to recover the meaning behind what is said [18, 19, 20].
Recent work also looks at languages that emerge between neural agents trained to solve a communicative task [13, 14]. The task is modelled after a Lewisian signalling game [17] (Figure 2)
Figure 1: Examples of sentences with anaphoric structure used in our experiments. The first example illustrates _verb phrase ellipsis_, where the repetition of verb meaning is signalled by _does too_. The second example illustrates _pronominal anaphora_, where the pronoun _she_ can be used to signal the re-mention of a previous discourse referent, _Mary_. In some languages, this pronoun can be omitted.
where agents need to communicate about a meaning space but are given no supervision about how to do so. During a run of the model, a language which maps meanings to signals emerges, enabling communication between agents. Consequently, the languages are shaped by the biases of the networks and the objective with which they're trained [15]. Recent work in this area has largely been concerned with investigating and identifying the conditions required for the emergence of syntactic structure [13, 14, 15, 16]. Other work has examined contact linguistics, showing, for example, that creole-like languages can emerge in populations of agents tasked with playing a simple reference game [1]. In our work, we instead study the emergence of pragmatic structure through the lens of multi-agent communication, which has so far received little attention from the emergence and efficient communication literature. We consider whether anaphoric structure analogous to that found in human language can emerge and what conditions might be required for this to happen. Neural networks have been shown to lack human inductive biases on an array of linguistic tasks [1, 17, 18], which makes them an interesting testing ground for efficiency-driven accounts of anaphora. If anaphoric structure can emerge in a population of speakers that don't share our cognitive endowment, this could support the idea that these structures emerge predominantly as a result of pressures arising from the competing communicative needs of speaker and listener.
Prior work investigating signal compressibility at the language level [1] found that networks by default preferred signals that were _anti_-efficient, where encodings of more frequent meanings were longer rather than shorter. Only when an explicit cost encouraging brevity was introduced did signals conform to Zipf's Law. This suggests that the agents opted for a strategy which maximised listener discrimination, with the setup skewing in favour of listener-oriented pressures by default. In both that work and ours, the objective used to train the model optimises for as little ambiguity as possible in the listener's reconstruction of a meaning. With that in mind, we similarly look to see if anaphoric structure can emerge in the default (anti-efficient) case, or if an explicit efficiency pressure is required.
We put forward a set of quantitative measurements designed to capture three high-level characteristics of anaphoric structure, so we can identify if and when it emerges, given that the emerging structures may not necessarily appear as straightforward'she' or 'did too' anaphora. Our measures assess: the uniqueness of structures used to signal meaning redundancy ('signal uniqueness'), signal ambiguity, and signal length. Before looking at whether these structures emerge between neural agents, we first train a single 'listener' agent on handcrafted languages designed to mimic three attested types of anaphoric structures in natural language. We show that all three types of structures are learnable, but differ in the speed at which they're learned and the degree of ambiguity they impose on the listener. Then, in a set of multi-agent experiments we show that structures akin to anaphora in natural language emerge between neural agents in every run of every condition of our model--even without an explicit efficiency pressure on the'speaker' agent. By introducing an efficiency pressure, we increase the degree of anaphoric structure that emerges. None of the languages that emerge show substantial evidence of elided structures like those seen in pro-drop languages, which reaffirms that models like ours are optimised for minimal ambiguity. Taken together, our results suggest that while efficiency pressures on a speaker condition the anaphoric structure that emerges, such structure can emerge wherever there's redundancy in what's communicated. Such a finding points to the importance of the semantics-pragmatics interface, in addition to communicative needs, in providing an account of the origins of anaphoric structure.
## Methodology
### The Game
We use a _reconstruction_ game in the style of Lewis et al. (2017), with a Sender ('speaker' agent) and a Receiver ('listener' agent), both neural networks. In each round:
1. The Sender receives a meaning \(m_{i}\in M\), drawn from the meaning space \(M\), as input.
2. The Sender generates a signal \(s_{i}\) of maximum length \(n\), one character \(c\) at a time from an alphabet \(C\) of size \(|C|\).
3. The Receiver receives the Sender's signal \(s_{i}\) as input and predicts the corresponding meaning \(\hat{m_{i}}\).
4. The round is successful if \(m_{i}=\hat{m_{i}}\).
### The Meaning Space
To allow for meanings with repetition, each meaning \(m_{i}\) is the concatenation of 5 \(\langle role,word\rangle\) pairs representing a 'sentence', where each role represents an element of the sentence, like a subject or verb, and can be realised as a particular word: e.g., \(\langle subj_{1},John\rangle\), \(\langle verb_{1},walks\rangle\), \(\langle conj,and\rangle\), etc. Each sentence is grammatically structured as two conjoined one-place predicates with roles \((subj_{1},verb_{1},conj,subj_{2},verb_{2})\), as in: \(walks^{\prime}(John)\)\(\wedge\)\(smiles^{\prime}(Mary)\). This yields three different kinds of meanings with respect to redundancy:
Figure 2: Illustration of the Lewisian signalling game. The Sender generates a signal representing the meaning, and the Receiver guesses the original meaning by decoding the signal. Here, the Receiver correctly reconstructs the meaning, so the round is successful.
* **Non-redundant:** nothing is repeated--e.g., _John walks and Mary smiles_ \((subj_{1}\neq subj_{2}\wedge verb_{1}\neq verb_{2})\).
* **Partially Redundant:** either the subject or verb is repeated--e.g., _Mary walks and Mary smiles_ \((subj_{1}=subj_{2}\lor verb_{1}=verb_{2})\).
* **Fully Redundant:** if both the subject and verb are repeated--e.g., _Mary smiles and Mary smiles_ \((subj_{1}=subj_{2}\wedge verb_{1}=verb_{2})\).
Here each _role_ is realised as one of 15 _words_--apart from the conjunction which can only be 'and'--resulting in a meaning space size of 20,000. We use the same meaning space for both the single-agent and multi-agent experiments.
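A short sketch of this meaning space and its redundancy classes follows; the placeholder word list and the full cross product are assumptions made here, since the text does not state how the 20,000 meanings are drawn from the space of role fillers.

```python
import itertools

WORDS = [f'w{i}' for i in range(15)]          # 15 placeholder fillers per role

def redundancy(meaning):
    """Classify a meaning by which of its roles repeat."""
    subj1, verb1, _, subj2, verb2 = meaning
    if subj1 == subj2 and verb1 == verb2:
        return 'fully redundant'
    if subj1 == subj2 or verb1 == verb2:
        return 'partially redundant'
    return 'non-redundant'

# full cross product of role fillers; the paper's exact sampling is not
# specified, so this enumeration is illustrative only
meanings = [(s1, v1, 'and', s2, v2)
            for s1, v1, s2, v2 in itertools.product(WORDS, repeat=4)]
```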
### Implementation
The game is implemented in PyTorch Paszke et al. (2019), using portions of code from the EGG ('Emergence of language in Games') toolkit Kharitonov et al. (2019). The Sender is comprised of a linear layer which maps the meaning to a hidden representation of size 250 and a single-layer Gated Recurrent Unit (GRU) Cho et al. (2014) that produces the variable-length signal a character at a time. The Receiver architecture is the inverse with a GRU mapping the signal to a hidden representation, and a linear layer mapping that to a predicted meaning. The Sender is limited to a maximum length, but may produce a signal of any length up to that bound.
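A minimal PyTorch sketch of the Receiver side is shown below; the hidden size of 250 is from the text, while the embedding layer and its size are assumptions of this sketch (the Sender is the mirror image, generating one character at a time from its GRU).

```python
import torch
import torch.nn as nn

class Receiver(nn.Module):
    """GRU encodes the signal; a linear layer maps the final hidden state
    to a predicted meaning, as described in the text."""
    def __init__(self, vocab_size, meaning_size, hidden=250):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)   # size assumed here
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, meaning_size)

    def forward(self, signal):              # signal: (batch, length) symbol ids
        _, h = self.gru(self.embed(signal))
        return self.out(h.squeeze(0))       # predicted meaning logits
```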
### Optimisation
A hybrid approach is used to train the agents. Since the loss is differentiable, the Receiver can be trained using standard stochastic gradient descent. Due to the discrete signal, the Sender is trained using the policy-gradient method REINFORCE Williams (1992). Both are optimised using the Adam optimiser Kingma & Ba (2014), with learning rate 0.001.1
Footnote 1: Our code and data, along with a full set of hyperparameters, can be found at: [https://github.com/hcoxec/emerge](https://github.com/hcoxec/emerge).
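The hybrid objective can be sketched as below. Argument shapes and names are assumptions, not the EGG API, and the baseline subtraction and entropy regularisation used in practice are omitted.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(sender_log_probs, receiver_logits, target_roles):
    """Cross-entropy for the Receiver (differentiable) plus REINFORCE for
    the Sender (discrete signal). receiver_logits and target_roles are
    per-role lists; sender_log_probs has shape (batch, signal_length)."""
    ce = sum(F.cross_entropy(logits, tgt)           # per-role reconstruction
             for logits, tgt in zip(receiver_logits, target_roles))
    reward = -ce.detach()                           # reward = -loss
    reinforce = -(reward * sender_log_probs.sum(-1)).mean()
    return ce + reinforce
```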
## Neural agents can learn languages with anaphoric structure
Before seeing if anaphoric structure emerges between agents, we start by showing that the agents used in the reconstruction game are capable of learning these structures if presented with a pre-existing language containing them. We design three languages for comparison, and train only the Receiver via supervised learning to map signals to meanings. The languages are:
1. **No Elision**: each meaning role is mapped to one or more characters in the signal, even when there is redundancy.
2. **Pronoun**: redundant roles are mapped to a unique anaphoric token--repeated nouns to a 'pronoun' character, and repeated verbs to a 'did too' character.
3. **Pro-drop**: whenever there is a redundant role, the corresponding signal is _not_ appended, shortening the overall signal. The language is named after the similar linguistic phenomenon of _pro-drop_ Chomsky (1981).
We use a miniature Receiver--given the simplicity of the task--with a hidden state of size 64, trained on each language for 50 epochs with a learning rate of \(5\times 10^{-4}\). Results are averaged over 10 runs and summarised in Table 1.
All three languages are consistently learned by the Receiver, with the No Elision language learned faster than Pronoun (\(t(18)=-4.35\), \(p<0.05\)), and the Pronoun language learned faster than Pro-drop (\(t(18)=-6.19\), \(p<0.05\)). This finding means the Receiver can successfully learn that in the signals corresponding to _Mary smiles and she dances_ and _Ada smiles and she dances_ the same token 'she' maps to different nouns: Mary and Ada.
| | |
|:---|:---|
| _Sentence:_ | John smiles and John smiles. |
| _Logical Form:_ | \(\textit{smiles}^{\prime}(John)\wedge\textit{smiles}^{\prime}(John)\) |
| _Productions:_ | _John_ \(\longrightarrow\) 12; _smiles_ \(\longrightarrow\) 34; _and_ \(\longrightarrow\) 13; _'pronoun'_ \(\longrightarrow\) 1; _'did too'_ \(\longrightarrow\) 2; _EOS_ \(\longrightarrow\) 0 |
| _Signals:_ | **No Elision:** 12 34 13 12 34 0; **Pronoun:** 12 34 13 1 2 0; **Pro-drop:** 12 34 13 0 |

Figure 3: An example with corresponding signals for each handcrafted language. _EOS_ is the end-of-sentence token, which the Sender is required to output as the final symbol.

Table 1: Average number of epochs to reach 100% accuracy on test set.

### Predictive Ambiguity

Anaphoric forms like _she_ or _did too_ can be ambiguous because their intended referent is determined by context. In the next set of experiments where we look to see if these structures emerge between networks, it would be useful to quantify how ambiguous a given signal is for the Receiver. The emergence of communicatively successful, but more ambiguous, signals may be an indication that those signals contain emergent anaphors. In information-theoretic terms, more ambiguity means more uncertainty, quantifiable in terms of _entropy_ Shannon (1948). The Receiver \(\mathcal{R}\), when given a signal \(s_{i}\), predicts a distribution over _words_ for each role \(r\in roles\), parameterising the distribution \(\mathcal{R}(words|s_{i},role)\). If the Receiver is certain the first subject is Mary, then Mary will have probability 1.0, resulting in an entropy of 0.0. If the Receiver finds the meaning totally ambiguous with respect to the first subject, then we would expect that distribution over words to be uniform, resulting in higher entropy. We quantify this as _predictive ambiguity_ (PA), defined for each role as the average entropy over words in a role for a set of \(M\) meanings:
\[PA(M,role)=\frac{\sum_{i=1}^{|M|}\mathcal{H}(\mathcal{R}(words|s_{i},role))}{|M|} \tag{1}\]
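Equation (1) translates directly into code. A minimal sketch, assuming `receiver_dists(s)[role]` returns the Receiver's probability vector over words for that role (a hypothetical accessor, not the actual interface):

```python
import torch

def predictive_ambiguity(receiver_dists, signals, role):
    """Eq. (1): average entropy of the Receiver's per-role word
    distribution over a set of signals."""
    entropies = []
    for s in signals:
        p = receiver_dists(s)[role]
        entropies.append(-(p * torch.log(p + 1e-12)).sum())
    return torch.stack(entropies).mean()
```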
Predictive ambiguity is shown for each of the 3 languages in Figure 4, computed separately over meanings without redundancy, redundant subjects, or redundant verbs. Redundant roles (panels (b)-(c)) show an increase in ambiguity in the Pronoun and Pro-drop languages, but no increase in the No Elision language. Conversely, for non-redundant meanings (panel (a)) ambiguity is similar for all positions and comparable across all three languages. These findings are in line with what we might expect, with anaphoric structures resulting in increased ambiguity and increased training time. They also highlight the desirability of overt anaphoric forms in a language: while serving a speaker's need for efficiency, they introduce relatively little ambiguity, allowing a listener to recover the intended meaning.
Importantly though, all three languages are learnable, meaning if a language with anaphoric structure emerged among neural agents, it could be maintained by agents exposed to it. While the languages vary in learnability--with the No Elision language the most learnable, followed by Pronoun, and finally Pro-drop--this difference is small--the Receiver only requires on average 13 additional epochs of training to successfully learn the Pro-drop language.
## Languages with anaphoric structure emerge between neural agents
Having shown that neural agents reliably acquire anaphoric structure, we move now to see if such structure emerges naturally between agents, or if explicit pressures related to efficiency are required. As a proxy for effort-minimisation pressures, we add an explicit term to the loss function (following Chaabouni et al. (2019)) to penalise the Sender for sending longer signals:
\[\mathcal{L}_{\text{Sender}}=\mathcal{L}_{\text{REINFORCE}}+\alpha\times|m| \tag{2}\]
\(\mathcal{L}_{\text{REINFORCE}}\) represents the standard Sender loss obtained with the REINFORCE objective, while \(\alpha\) is a hyperparameter controlling how strong the efficiency pressure is, and \(|m|\) is the length of the signal generated by the Sender. Across all experiments using a length cost, \(\alpha=0.15\) is used. Results are reported for two conditions: a) with length cost (**+Efficiency**), and b) without length cost (**Control**). For each condition we run 10 different initialisations, with each run having 3000 interactions between Sender and Receiver. Agents communicate about the same meaning space used in the last experiment, but here start with a random mapping from meanings to signals rather than a language of our design. We set the maximum signal length to \(n=10\) and the alphabet size to \(|C|=26\).
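The length cost of Eq. (2) amounts to one extra term. A sketch, assuming `signal_lengths` is a tensor of per-example character counts before EOS:

```python
def length_penalised_loss(reinforce_loss, signal_lengths, alpha=0.15):
    """Eq. (2): the Sender's REINFORCE loss plus alpha * |m|, averaged
    over the batch."""
    return reinforce_loss + alpha * signal_lengths.float().mean()
```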
### Identifying Anaphoric Structure
In order to find evidence of anaphoric structure in the emergent languages, we use a set of three quantitative measurements which look for high-level characteristics of anaphoric structures in natural language. With each measurement, we compare the signals produced for different 'meaning groups': those that are fully, partially, and non-redundant.
Figure 4: Predictive ambiguity with 95% confidence intervals for single agent experiments. Meanings are either a) non-redundant, b) partially redundant (redundant subjects), or c) partially redundant (redundant verbs) (computed at the earliest epoch when perfect test accuracy is achieved on all languages). Position 3 corresponding with _and_ is omitted here and in Figure 5—due to its appearance in each meaning, corresponding entropy is always low and of little interest.
**Signal Uniqueness.** In natural language, words used to express semantic redundancy are partially unique, since some words are used only for this purpose (e.g., _she_, _they_), whereas others are syntactically and semantically context-dependent (e.g., _did_, _too_). Emergent languages with anaphoric structure should mirror this tendency, with a subset of strings which appear only with redundant meanings--despite the fact that words in our meaning space are equally likely to appear in redundant or non-redundant contexts. We can quantify this using _Jaccard similarity_ [10] to determine the overlap between _n-grams_ (signal substrings) used with redundant vs. non-redundant meanings. For a set of signals \(S\) we define signal uniqueness \(SU\) as the difference between a control--the Jaccard similarity for n-grams used in two random (mutually exclusive) samples of non-redundant signals \(S^{1}_{\textit{non-red}}\) and \(S^{2}_{\textit{non-red}}\)--and the Jaccard similarity for n-grams used in a sample of redundant signals \(S_{\textit{red}}\) and non-redundant signals \(S_{\textit{non-red}}\):
\[SU(S)=\frac{\big|\,S^{1}_{\textit{non-red}}\cap S^{2}_{\textit{non-red}}\big|}{\big|\,S^{1}_{\textit{non-red}}\cup S^{2}_{\textit{non-red}}\big|}-\frac{\big|\,S_{\textit{red}}\cap S_{\textit{non-red}}\big|}{\big|\,S_{\textit{red}}\cup S_{\textit{non-red}}\big|} \tag{3}\]
An emergent language with anaphoric structure should have a higher \(SU\) value than one without, i.e., the overlap between n-grams used in redundant and non-redundant meanings should be smaller than the overlap between two random samples of non-redundant meanings. We show in Table 2 that this holds for the handcrafted languages used in the preceding experiments. We compute signal uniqueness for unigrams, bigrams and trigrams in the signals respectively.
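A minimal sketch of this computation, with function names of our own choosing; each sample is assumed to be a list of signals (sequences of characters):

```python
def ngrams(signal, n):
    return {tuple(signal[i:i + n]) for i in range(len(signal) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def signal_uniqueness(non_red_1, non_red_2, red, non_red, n=2):
    """Eq. (3): control Jaccard between two disjoint non-redundant samples
    minus the Jaccard between redundant and non-redundant samples."""
    grams = lambda sigs: set().union(*(ngrams(s, n) for s in sigs))
    return (jaccard(grams(non_red_1), grams(non_red_2))
            - jaccard(grams(red), grams(non_red)))
```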
**Signal Length.** In natural language, anaphors often shorten signals, either by fully removing redundant material as in some kinds of ellipsis (e.g., gapping) and pro-dropping, or by using short, frequent anaphoric forms. By measuring the mean length of signals for different meaning groups in a given emergent language, we can see if redundant meanings are consistently shorter, indicating the use of structures analogous to gapping or pro-drop. This may not capture instances of overt anaphor usage given that agents may not necessarily use anaphors that are shorter than the forms they replace--we use signal uniqueness to identify anaphors independent of their length.
**Predictive Ambiguity.** As demonstrated in the previous section, the Receiver's predictive ambiguity is indicative of a language's anaphoric structure, with the No Elision language resulting in less predictive ambiguity than either of the languages with anaphoric structure. In the following emergent experiments, if the emergent languages do develop anaphoric structure then we would similarly expect higher predictive ambiguity--i.e., higher uncertainty--about the intended meaning for redundant roles than for non-redundant ones. An important caveat here is that higher predictive ambiguity must be coupled with high communicative success given that anaphorically structured languages are more ambiguous but still communicatively useful. A completely random language which does not enable the agents to solve the task will likely be highly ambiguous, but that in and of itself should not be taken as evidence of human-analogous anaphoric structure. Fortunately, all runs of our model achieve near-perfect communicative accuracy, so this issue is not a concern for our results.
### Results
All conditions achieve near-perfect communicative success on both the training data and a held-out test set. Because these results are the same across all conditions they are omitted here for brevity. We review our three measures for evidence of anaphoric structure in both the +Efficiency and Control conditions--with and without additional pressure for speaker efficiency. We find evidence that anaphoric structure emerges in all conditions. With an additional speaker-oriented pressure imposed on the system, our measures for anaphoric structure are amplified, highlighting how constraints on speaker efficiency condition the structures that emerge--even if they are not required for its emergence.

Figure 5: Predictive ambiguity in the multi-agent experiments (95% confidence intervals).
**Signal Uniqueness.** In each condition of our setup we see a unique repertoire of n-grams used to refer exclusively to redundant meanings. This is indicative of anaphoric structure of some kind having emerged, given that the model has specialised ways of conveying semantically redundant information. Additionally, this suggests that anaphoric structure emerges 'naturally' without requiring any pressure for speaker brevity. Results in Table 2 also highlight the amplifying effect of a length cost: for bigrams and trigrams, signal uniqueness is higher in +Efficiency.
**Signal Length.** Overall, the mean length of all signals is very close to the maximum of 10 (see Table 3), with lengths in the +Efficiency condition lower than for Control (\(t(18)=2.47,\ p<0.05\)). Moreover, when a cost is applied, signal length decreases more for redundant than non-redundant meanings, and more so when both subject and verb are redundant, although this reduction is not significant (\(t(18)=-0.683,\ p=0.503\)). As such, the emergent languages for both +Efficiency and Control may develop some anaphoric structure but are unlikely to include structures analogous to elision which would more directly reduce length. In future, the prevalence of ellipsis-like structure may be increased by introducing greater efficiency costs or incentivising the model to develop other cues that can help resolve ambiguity (e.g., verbal morphology in natural languages with pro-drop may help listeners recover an antecedent when no overt pronoun is present).
**Predictive Ambiguity.** We observe a numeric trend towards increased predictive ambiguity in the redundant meanings compared with the non-redundant ones. For each role in the meaning and across different meaning groups we also see that predictive ambiguity tends to be higher for +Efficiency than for Control. These observations are not statistically significant, which could be due to high variance as a result of each run using a different network initialisation. In Figure 5, the redundant subject and verb meanings (panels (b)-(c)) experience predictive ambiguity 'spikes' in +Efficiency--we also see evidence of this for meanings with redundant subjects in Control.2 These results, coupled with each run's high communicative success, further suggest that the emergent languages in all conditions contain some kind of anaphoric structure, with a speaker-oriented pressure magnifying the Receiver's uncertainty about the intended meaning for redundant roles, without degrading communicative success.
Footnote 2: The spikes appear in the non-redundant position instead of the redundant one, e.g., we see high predictive ambiguity in Subject 2 when the verb is redundant. This may arise because signal ambiguity has an effect across the whole meaning.
Overall, these results provide compelling evidence that anaphoric structure emerges between neural networks. We find that, across all conditions, languages use unique n-grams that exclusively refer to redundancy and increase ambiguity about the intended meaning. These measures are further amplified when a pressure for brevity is imposed on the Sender. The minimal change in signal length with the additional pressure suggests the resulting structures resemble overt anaphors, rather than elided ones. While these results should not be interpreted as providing definitive evidence of anaphoric structure analogous to anaphora in natural language, they point towards two hypotheses about the origins of anaphoric structure: firstly, anaphoric structure does not require explicit constraints to emerge, instead emerging 'for free' depending on the semantic context; secondly, pressures like efficiency condition the emergence of anaphoric structure but are not a prerequisite for it. Encouraging the emergence of elided structures may require greater efficiency pressures, or a different semantic context.
## Conclusion
In our experiments we have shown that neural agents are able to acquire languages containing anaphoric structure with ease, and that languages with overt anaphoric structure straightforwardly emerge in a communicative setting. While our evidence suggests efficiency pressures on the speaker amplify the degree of anaphoric structure--in line with expectations--strong efficiency pressures do not appear to be a precondition for its emergence. This points to the importance of the semantics-pragmatics interface, in addition to communicative needs, in offering an explanatory account of the origins of anaphoric structure.
| | Unigram | Bigram | Trigram |
|:---|:---|:---|:---|
| No Elision | 0.0 | 0.0 | 0.0 |
| Pronoun | 0.0 | 0.265 | 0.297 |
| Pro-drop | 0.0 | 0.0 | 0.119 |
| Control | 0.0 \(\pm\) 0.0 | 0.148 \(\pm\) 0.0660 | 0.183 \(\pm\) 0.0541 |
| +Efficiency | 0.0 \(\pm\) 0.0 | 0.210 \(\pm\) 0.0397 | 0.198 \(\pm\) 0.0691 |

Table 2: Signal uniqueness for each condition (95% confidence intervals). 0.0 indicates no difference in n-grams used for redundant vs. non-redundant meanings. Higher numbers indicate a larger set of n-grams are used exclusively with redundant meanings. Values for the handcrafted languages in the previous section are provided for reference. Unigrams are 0 as individual characters are used across meaning types.
| | Control | +Efficiency |
|:---|:---|:---|
| All | 9.92 \(\pm\) 0.041 | 9.58 \(\pm\) 0.244 |
| Partially Redundant | 9.91 \(\pm\) 0.043 | 9.51 \(\pm\) 0.255 |
| Fully Redundant | 9.92 \(\pm\) 0.041 | 9.35 \(\pm\) 0.260 |
| Non-redundant | 9.92 \(\pm\) 0.041 | 9.63 \(\pm\) 0.242 |

Table 3: Mean signal lengths (95% confidence intervals). |
2301.07531 | Safety Verification of Neural Network Control Systems Using Guaranteed
Neural Network Model Reduction | This paper aims to enhance the computational efficiency of safety
verification of neural network control systems by developing a guaranteed
neural network model reduction method. First, a concept of model reduction
precision is proposed to describe the guaranteed distance between the outputs
of a neural network and its reduced-size version. A reachability-based
algorithm is proposed to accurately compute the model reduction precision.
Then, by substituting a reduced-size neural network controller into the
closed-loop system, an algorithm to compute the reachable set of the original
system is developed, which is able to support much more computationally
efficient safety verification processes. Finally, the developed methods are
applied to a case study of the Adaptive Cruise Control system with a neural
network controller, which is shown to significantly reduce the computational
time of safety verification and thus validate the effectiveness of the method. | Weiming Xiang, Zhongzhu Shao | 2023-01-17T03:25:57Z | http://arxiv.org/abs/2301.07531v1 | Safety Verification of Neural Network Control Systems Using Guaranteed Neural Network Model Reduction
###### Abstract
This paper aims to enhance the computational efficiency of safety verification of neural network control systems by developing a guaranteed neural network model reduction method. First, a concept of model reduction precision is proposed to describe the guaranteed distance between the outputs of a neural network and its reduced-size version. A reachability-based algorithm is proposed to accurately compute the model reduction precision. Then, by substituting a reduced-size neural network controller into the closed-loop system, an algorithm to compute the reachable set of the original system is developed, which is able to support much more computationally efficient safety verification processes. Finally, the developed methods are applied to a case study of the Adaptive Cruise Control system with a neural network controller, which is shown to significantly reduce the computational time of safety verification and thus validate the effectiveness of the method.
## I Introduction
Neural networks are currently widely used in various fields, such as image processing [1], pattern recognition [2], adaptive control [3], unmanned vehicles [4] and aircraft collision avoidance systems [5], etc., demonstrating their powerful capabilities in solving complex and challenging problems that traditional approaches fail to address. As neural networks are further investigated, the size and complexity of their models continue to increase in order to improve their performance and accuracy to cope with complex and difficult tasks and changing environments. However, more complex large-scale neural network models also imply larger computational resources, such as larger memory, higher computational power and more energy consumption in applications [6]. As a result, many neural network model reduction methods have been developed, such as parameter pruning and sharing, low-rank factorization, transfer/compact convolution filters, and knowledge distillation [7]. More results on neural network model reduction can be found in a recent survey [8].
On the other hand, due to the black-box nature of neural networks, neural networks are vulnerable to interference and attacks. It has been observed that neural networks trained on large amounts of data are sometimes sensitive to updates and react to even small changes in parameters in unexpected and incorrect ways [9]. When neural networks are applied as controllers onto dynamical systems, they will inevitably suffer from safety problems due to the inevitable disturbances and uncertainties in the control process, further affecting the stability and safety of the whole closed-loop system. Therefore, when integrating neural networks into safety-critical control systems, the safety of the neural network needs to be guaranteed at all times, i.e., the safety verification of the neural network needs to be implemented. However, due to the sensitivity of neural networks to perturbations and the complex structure of neural networks, the verification of neural networks is extremely difficult. It has been demonstrated that verifying even simple properties of a small-scale neural network is a non-deterministic polynomial-time (NP) complete problem [10]. A few results have been reported in the literature for the formal verification of systems consisting of neural networks; readers are referred to the recent survey [11]. Specifically, reachability analysis is one of the promising safety verification tools [12, 13, 14, 15, 16]. For instance, a simulation-based approach has been proposed that transforms the difficulty of over-approximating the neural network's output set into a problem of estimating the neural network's maximal sensitivity, which is formulated as a series of convex optimization problems [13, 14]. Polytope-operation-based approaches were developed in [12, 15, 16] for dealing with a class of neural networks with activation functions of Rectified Linear Units (ReLU). However, the scalability issue is the major barrier preventing the application of these methods to large-scale neural networks, as well as to neural network control systems active over a long period of time, which requires a large amount of reachable set computation during the time of interest.
In this paper, we propose a guaranteed model reduction method for neural network controllers based on neural network reachability analysis and apply it to enhance the scalability of the reachability-based safety verification of closed-loop systems. Firstly, a concept of model reduction precision is proposed to accurately measure the distance between the outputs of an original neural network and its reduced-size version, and an approach to compute the model reduction precision is proposed, which ensures that the difference between the outputs obtained from the two neural networks, for any identical input chosen from a given input interval, is within the model reduction precision. This algorithm is then applied to the model reduction of the neural network control system, enabling computationally efficient verification processes based on the reduced-size neural network controller. Finally, the correctness and feasibility of our approach are verified by applying it to the safety verification through the Adaptive Cruise Control (ACC) case study.
The remainder of the paper is organized as follows: Preliminaries are given in Section II. The guaranteed model
reduction of neural networks is presented in Section III. The reachable set computation and safety verification algorithm for the neural network control system are presented in Section IV. The evaluation on the adaptive cruise control system is given in Section V. The conclusion is given in Section VI.
## II Preliminaries
In this paper, we consider a class of continuous-time nonlinear systems in the form of
\[\begin{cases}\dot{\mathbf{x}}(t)=f(\mathbf{x}(t),\mathbf{u}(t))\\ \mathbf{y}(t)=h(\mathbf{x}(t))\end{cases} \tag{1}\]
where \(\mathbf{x}(t)\in\mathbb{R}^{n_{x}}\) is the state vector, \(\mathbf{u}(t)\in\mathbb{R}^{n_{u}}\) is the control input and the \(\mathbf{y}(t)\in\mathbb{R}^{n_{y}}\) is the output vector. In general, the control input is in the form of
\[\mathbf{u}(t)=\gamma(\mathbf{y}(t),\mathbf{r}(t),t) \tag{2}\]
where \(\mathbf{r}(t)\in\mathbb{R}^{n_{r}}\) is the reference input for the controller.
To avoid the difficulties in the controller design when system models are complex or even unavailable, one effective method is to use input-output data to train neural networks capable of generating appropriate control input signals to achieve control objectives. The neural network controller is in the form of
\[\mathbf{u}(t)=\Phi(\mathbf{y}(t),\mathbf{r}(t)) \tag{3}\]
where \(\Phi\) denotes the neural network mapping output and reference signals to control input.
In actual applications, the neural network receives input and generates output in a fraction of the computation time, so the control input generated by the neural network is generally discrete, generated only at each sampling time point \(t_{k},k\in\mathbb{N}\), and then remains a constant value between two successive sampling time instants. Therefore, the continuous-time nonlinear dynamical system with a neural network controller with sampling actions can be expressed in the following form of
\[\begin{cases}\dot{\mathbf{x}}(t)=f(\mathbf{x}(t),\Phi(\boldsymbol{\tau}(t_{k}) ))\\ \mathbf{y}(t)=h(\mathbf{x}(t))\end{cases},\quad t\in[t_{k},t_{k+1}) \tag{4}\]
where \(\boldsymbol{\tau}(t_{k})=[\mathbf{y}^{\top}(t_{k}),\mathbf{r}^{\top}(t_{k})] ^{\top}\).
In this work, we consider feedforward neural networks for controllers in the form of \(\Phi:\mathbb{R}^{n_{0}}\rightarrow\mathbb{R}^{n_{L}}\) defined by the following recursive equations in the form of
\[\begin{cases}\boldsymbol{\eta}_{\ell}=\phi_{\ell}(\mathbf{W}_{\ell}\boldsymbol {\eta}_{\ell-1}+\mathbf{b}_{\ell}),\ \ell=1,\ldots,L\\ \boldsymbol{\eta}_{L}=\Phi(\boldsymbol{\eta}_{0})\end{cases} \tag{5}\]
where \(\boldsymbol{\eta}_{\ell}\) denotes the output of the \(\ell\)-th layer of the neural network, and in particular \(\boldsymbol{\eta}_{0}\in\mathbb{R}^{n_{0}}\) is the input to the neural network and \(\boldsymbol{\eta}_{L}\in\mathbb{R}^{n_{L}}\) is the output produced by the neural network, respectively. \(\mathbf{W}_{\ell}\in\mathbb{R}^{n_{\ell}\times n_{\ell-1}}\) and \(\mathbf{b}_{\ell}\in\mathbb{R}^{n_{\ell}}\) are weight matrices and bias vectors for the \(\ell\)-th layer. \(\phi_{\ell}=[\psi_{\ell},\cdots,\psi_{\ell}]\) is the concatenation of activation functions of the \(\ell\)-th layer in which \(\psi_{\ell}:\mathbb{R}\rightarrow\mathbb{R}\) is the activation function.
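For reference, the recursion (5) amounts to a few lines of code; a minimal NumPy sketch, where the ReLU shown is one common choice of \(\psi_{\ell}\):

```python
import numpy as np

def ffnn(Ws, bs, phis, eta0):
    """Forward pass of the feedforward network (5): Ws, bs and phis are
    per-layer weight matrices, bias vectors and activation functions."""
    eta = eta0
    for W, b, phi in zip(Ws, bs, phis):
        eta = phi(W @ eta + b)
    return eta

relu = lambda z: np.maximum(z, 0.0)
```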
In this paper, we aim at reducing the computational cost of safety verification of neural network control systems in the framework of reachable set computation.
**Definition 1**: _Given a neural network in the form of (5) and an input set \(\mathcal{U}\), the following set_
\[\mathcal{Y}=\{\boldsymbol{\eta}_{L}\in\mathbb{R}^{n_{L}}\mid\boldsymbol{\eta }_{L}=\Phi(\boldsymbol{\eta}_{0}),\ \boldsymbol{\eta}_{0}\in\mathcal{U}\} \tag{6}\]
_is called the output set of neural network (5)._
**Definition 2**: _A set \(\mathcal{Y}_{e}\) is called an output reachable set over-approximation of neural network (5), if \(\mathcal{Y}\subseteq\mathcal{Y}_{e}\) holds, where \(\mathcal{Y}\) is the output reachable set of neural network (5)._
**Definition 3**: _Given a neural network control system in the form of (1) and (3) with initial set \(\mathcal{X}_{0}\) and input set \(\mathcal{V}\), the reachable set at time \(t\) is_
\[\mathcal{R}(t)=\{\mathbf{x}(t;\mathbf{x}_{0},\mathbf{r}(\cdot))\in\mathbb{R}^{ n_{x}}\mid\mathbf{x}_{0}\in\mathcal{X}_{0},\ \mathbf{r}(t)\in\mathcal{V}\} \tag{7}\]
_and the union of \(\mathcal{R}(t)\) over \([t_{0},t_{f}]\) defined by_
\[\mathcal{R}([t_{0},t_{f}])=\bigcup_{t\in[t_{0},t_{f}]}\mathcal{R}(t) \tag{8}\]
_is the reachable set over time interval \([t_{0},t_{f}]\)._
**Definition 4**: _A set \(\mathcal{R}_{e}(t)\) is an over-approximation of \(\mathcal{R}(t)\) at time \(t\) if \(\mathcal{R}(t)\subseteq\mathcal{R}_{e}(t)\) holds. Moreover, \(\mathcal{R}_{e}([t_{0},t_{f}])=\bigcup_{t\in[t_{0},t_{f}]}\mathcal{R}_{e}(t)\) is an over-approximation of \(\mathcal{R}([t_{0},t_{f}])\) over time interval \([t_{0},t_{f}]\)._
**Definition 5**: _Safety specification \(\mathcal{S}\) formalizes the safety requirements for state \(\mathbf{x}(t)\) of neural network control system (1), and is a predicate over state \(\mathbf{x}(t)\) of neural network control system (1). The neural network control system (1) is safe over time interval \([t_{0},t_{f}]\) if the following condition is satisfied:_
\[\mathcal{R}_{e}([t_{0},t_{f}])\cap\neg\mathcal{S}=\emptyset \tag{9}\]
_where \(\neg\) is the symbol for logical negation._
As indicated in [12, 13, 14, 15, 16], the computation cost for reachable set computation heavily relies on the size of neural networks, i.e., numbers of layers and neurons. In this paper, we aim to reduce the size of the neural network controller and rigorously compute the model reduction error, i.e., guaranteed neural network model reduction, so that the reachable set computation can be efficiently performed on a significantly reduced-size neural network and then mapped back to the original neural network to reach safety verification conclusions.
## III Guaranteed Neural Network Model Reduction
Given a large-scale neural network \(\Phi\), there exist a large number of neural network model reduction methods as in survey paper [8] to obtain its reduced-size version \(\hat{\Phi}\) as below:
\[\begin{cases}\hat{\boldsymbol{\eta}}_{\ell}=\hat{\phi}_{\ell}(\hat{\mathbf{W}}_{\ell}\hat{\boldsymbol{\eta}}_{\ell-1}+\hat{\mathbf{b}}_{\ell}),\ \ell=1,\ldots,\hat{L}\\ \hat{\boldsymbol{\eta}}_{\hat{L}}=\hat{\Phi}(\hat{\boldsymbol{\eta}}_{0})\end{cases} \tag{10}\]
To enable guaranteed neural network model reduction, the key is how to rigorously compute the output difference between the original neural network \(\Phi\) and its reduced-size version \(\hat{\Phi}\). Without loss of generality, the following
assumption is given for neural network \(\Phi\) and its reduced-size version \(\hat{\Phi}\).
**Assumption 1**: _The following assumptions hold for neural network \(\Phi\) and its reduced-size version \(\hat{\Phi}\):_
1. _The number of inputs of the two neural networks is the same, i.e.,_ \(n_{0}=\hat{n}_{0}\)_;_
2. _The number of outputs of the two neural networks is the same, i.e.,_ \(n_{L}=n_{\hat{L}}\)_;_
3. _The number of hidden layers of neural network_ \(\Phi\) _is greater than or equal to the number of hidden layers of neural network_ \(\hat{\Phi}\)_, i.e.,_ \(L\geq\hat{L}\)_._
To characterize the output difference between \(\Phi\) and \(\hat{\Phi}\), we define the following metric for model reduction precision.
**Definition 6**: _Consider neural network \(\Phi\) and its reduced-size version \(\hat{\Phi}\) with one same input set \(\mathcal{U}\) and their corresponding output sets \(\mathcal{Y}\) and \(\hat{\mathcal{Y}}\), we define the distance between the outputs of \(\Phi\) and \(\hat{\Phi}\) with respect to the input set \(\mathcal{U}\) by_
\[\rho(\Phi,\hat{\Phi},\mathcal{U})=\sup_{\mathbf{\eta}_{0}=\hat{\mathbf{\eta}}_{0},\bm {\eta}_{0},\hat{\mathbf{\eta}}_{0}\in\mathcal{U}}\left\|\Phi(\mathbf{\eta}_{0})-\hat{ \Phi}(\hat{\mathbf{\eta}}_{0})\right\| \tag{11}\]
_where \(\rho(\Phi,\hat{\Phi},\mathcal{U})\) is called model reduction precision._
In the framework of reachability analysis of neural networks, the following theorem presents a numerically tractable method to compute model reduction precision \(\rho(\Phi,\hat{\Phi},\mathcal{U})\).
**Theorem 1**: _Given neural network \(\Phi\) and its reduced-size version \(\hat{\Phi}\) with input set \(\mathcal{U}\), the model reduction precision \(\rho(\Phi,\hat{\Phi},\mathcal{U})\) can be computed by_

\[\rho(\Phi,\hat{\Phi},\mathcal{U})=\sup_{\tilde{\boldsymbol{\eta}}_{0}\in\mathcal{U}}\left\|\tilde{\Phi}(\tilde{\boldsymbol{\eta}}_{0})\right\| \tag{12}\]
_where neural network \(\tilde{\Phi}\) is an augmented neural network of \(\Phi\) and \(\hat{\Phi}\) defined as follows:_
\[\begin{cases}\tilde{\mathbf{\eta}}_{\ell}=\tilde{\phi}_{\ell}(\tilde{\mathbf{W}}_ {\ell}\tilde{\mathbf{\eta}}_{\ell-1}+\tilde{\mathbf{b}}_{\ell}),\ \ell=1,\dots,L+1\\ \tilde{\mathbf{\eta}}_{L+1}=\tilde{\Phi}(\tilde{\mathbf{\eta}}_{0})\end{cases} \tag{13}\]
_in which input \(\tilde{\mathbf{\eta}}_{0}=\mathbf{\eta}_{0}=\hat{\mathbf{\eta}}_{0}\) and_
\[\tilde{\mathbf{W}}_{\ell}=\begin{cases}\begin{bmatrix}\mathbf{W}_{1}\\ \hat{\mathbf{W}}_{1}\end{bmatrix},&\ell=1\\ \begin{bmatrix}\mathbf{W}_{\ell}&\mathbf{0}_{n_{\ell}\times\hat{n}_{\ell-1}}\\ \mathbf{0}_{\hat{n}_{\ell}\times n_{\ell-1}}&\hat{\mathbf{W}}_{\ell}\end{bmatrix},&1<\ell\leq\hat{L}-1\\ \begin{bmatrix}\mathbf{W}_{\ell}&\mathbf{0}_{n_{\ell}\times\hat{n}_{\hat{L}-1}}\\ \mathbf{0}_{\hat{n}_{\hat{L}-1}\times n_{\ell-1}}&\mathbf{I}_{\hat{n}_{\hat{L}-1}}\end{bmatrix},&\hat{L}\leq\ell\leq L-1\\ \begin{bmatrix}\mathbf{W}_{L}&\mathbf{0}_{n_{L}\times\hat{n}_{\hat{L}-1}}\\ \mathbf{0}_{\hat{n}_{\hat{L}}\times n_{L-1}}&\hat{\mathbf{W}}_{\hat{L}}\end{bmatrix},&\ell=L\\ \begin{bmatrix}\mathbf{I}_{n_{L}}&-\mathbf{I}_{n_{L}}\end{bmatrix},&\ell=L+1\end{cases} \tag{14}\]

\[\tilde{\mathbf{b}}_{\ell}=\begin{cases}\begin{bmatrix}\mathbf{b}_{\ell}\\ \hat{\mathbf{b}}_{\ell}\end{bmatrix},&1\leq\ell\leq\hat{L}-1\\ \begin{bmatrix}\mathbf{b}_{\ell}\\ \mathbf{0}_{\hat{n}_{\hat{L}-1}\times 1}\end{bmatrix},&\hat{L}\leq\ell\leq L-1\\ \begin{bmatrix}\mathbf{b}_{L}\\ \hat{\mathbf{b}}_{\hat{L}}\end{bmatrix},&\ell=L\\ \mathbf{0}_{n_{L}\times 1},&\ell=L+1\end{cases} \tag{15}\]

\[\tilde{\phi}_{\ell}(\cdot)=\begin{cases}\begin{bmatrix}\phi_{\ell}(\cdot)\\ \hat{\phi}_{\ell}(\cdot)\end{bmatrix},&1\leq\ell\leq\hat{L}-1\\ \begin{bmatrix}\phi_{\ell}(\cdot)\\ \mathsf{purelin}(\cdot)\end{bmatrix},&\hat{L}\leq\ell\leq L-1\\ \begin{bmatrix}\phi_{L}(\cdot)\\ \hat{\phi}_{\hat{L}}(\cdot)\end{bmatrix},&\ell=L\\ \mathsf{purelin}(\cdot),&\ell=L+1\end{cases} \tag{16}\]
where \(\mathsf{purelin}(\cdot)\) is linear transfer function, i.e., \(x=\mathsf{purelin}(x)\).
Given an input \(\boldsymbol{\eta}_{0}\in\mathcal{U}\) with \(\tilde{\boldsymbol{\eta}}_{0}=\hat{\boldsymbol{\eta}}_{0}=\boldsymbol{\eta}_{0}\), and considering layers \(1\leq\ell\leq\hat{L}-1\) of \(\tilde{\Phi}\), we have

\[\tilde{\boldsymbol{\eta}}_{\hat{L}-1}=\begin{bmatrix}\phi_{\hat{L}-1}\circ\cdots\circ\phi_{1}(\mathbf{W}_{1}\boldsymbol{\eta}_{0}+\mathbf{b}_{1})\\ \hat{\phi}_{\hat{L}-1}\circ\cdots\circ\hat{\phi}_{1}(\hat{\mathbf{W}}_{1}\hat{\boldsymbol{\eta}}_{0}+\hat{\mathbf{b}}_{1})\end{bmatrix} \tag{17}\]
Specifically, we consider \(\ell=1\) such that
\[\tilde{\mathbf{\eta}}_{1}=\begin{bmatrix}\phi_{1}(\mathbf{W}_{1}\mathbf{\eta}_{0}+ \mathbf{b}_{1})\\ \hat{\phi}_{1}(\hat{\mathbf{W}}_{1}\hat{\mathbf{\eta}}_{0}+\hat{\mathbf{b}}_{1}) \end{bmatrix}=\begin{bmatrix}\mathbf{\eta}_{1}\\ \hat{\mathbf{\eta}}_{1}\end{bmatrix} \tag{18}\]
Moreover, when \(1<\ell\leq\hat{L}-1\), it leads to
\[\tilde{\mathbf{W}}_{\ell}\tilde{\mathbf{\eta}}_{\ell-1}+\tilde{\mathbf{b}}_{\ell}= \begin{bmatrix}\mathbf{W}_{\ell}\mathbf{\eta}_{\ell-1}+\mathbf{b}_{\ell}\\ \hat{\mathbf{W}}_{\ell}\hat{\mathbf{\eta}}_{\ell-1}+\hat{\mathbf{b}}_{\ell}\end{bmatrix},\ 1<\ell\leq\hat{L}-1 \tag{19}\]
Starting from \(\ell=1\) and recursively substituting (19) into (17), one can obtain
\[\tilde{\boldsymbol{\eta}}_{\hat{L}-1}=\begin{bmatrix}\boldsymbol{\eta}_{\hat{L}-1}\\ \hat{\boldsymbol{\eta}}_{\hat{L}-1}\end{bmatrix} \tag{20}\]
Furthermore, when \(\hat{L}\leq\ell\leq L-1\), one can derive
\[\tilde{\mathbf{W}}_{\ell}\tilde{\boldsymbol{\eta}}_{\ell-1}+\tilde{\mathbf{b}}_{\ell}=\begin{bmatrix}\mathbf{W}_{\ell}\boldsymbol{\eta}_{\ell-1}+\mathbf{b}_{\ell}\\ \hat{\boldsymbol{\eta}}_{\hat{L}-1}\end{bmatrix},\ \hat{L}\leq\ell\leq L-1 \tag{21}\]
and
\[\boldsymbol{\eta}_{L-1}=\phi_{L-1}\circ\cdots\circ\phi_{\hat{L}}(\mathbf{W}_{\hat{L}}\boldsymbol{\eta}_{\hat{L}-1}+\mathbf{b}_{\hat{L}})\]
which leads to
\[\tilde{\mathbf{\eta}}_{L-1}=\begin{bmatrix}\mathbf{\eta}_{L-1}\\ \hat{\mathbf{\eta}}_{\hat{L}-1}\end{bmatrix} \tag{22}\]
Then, when \(\ell=L\), it yields that
\[\tilde{\mathbf{W}}_{L}\tilde{\mathbf{\eta}}_{L-1}+\tilde{\mathbf{b}}_{L}= \begin{bmatrix}\mathbf{W}_{L}\mathbf{\eta}_{L-1}+\mathbf{b}_{L}\\ \hat{\mathbf{W}}_{\hat{L}}\hat{\mathbf{\eta}}_{\hat{L}-1}+\hat{\mathbf{b}}_{\hat{L }}\end{bmatrix} \tag{23}\]
Thus, one can obtain
\[\tilde{\mathbf{\eta}}_{L}=\begin{bmatrix}\mathbf{\eta}_{L}\\ \hat{\mathbf{\eta}}_{\hat{L}}\end{bmatrix} \tag{24}\]
Finally, when \(\ell=L+1\), the following result can be obtained

\[\tilde{\boldsymbol{\eta}}_{L+1}=\tilde{\mathbf{W}}_{L+1}\tilde{\boldsymbol{\eta}}_{L}+\tilde{\mathbf{b}}_{L+1}=\begin{bmatrix}\mathbf{I}_{n_{L}}&-\mathbf{I}_{n_{L}}\end{bmatrix}\begin{bmatrix}\boldsymbol{\eta}_{L}\\ \hat{\boldsymbol{\eta}}_{\hat{L}}\end{bmatrix}\]

which implies that \(\tilde{\boldsymbol{\eta}}_{L+1}=\boldsymbol{\eta}_{L}-\hat{\boldsymbol{\eta}}_{\hat{L}}\). Therefore, we can conclude that \(\tilde{\Phi}(\tilde{\boldsymbol{\eta}}_{0})=\Phi(\boldsymbol{\eta}_{0})-\hat{\Phi}(\hat{\boldsymbol{\eta}}_{0})\) as long as \(\tilde{\boldsymbol{\eta}}_{0}=\boldsymbol{\eta}_{0}=\hat{\boldsymbol{\eta}}_{0}\), and

\[\rho(\Phi,\hat{\Phi},\mathcal{U})=\sup_{\tilde{\boldsymbol{\eta}}_{0}\in\mathcal{U}}\left\|\tilde{\Phi}(\tilde{\boldsymbol{\eta}}_{0})\right\| \tag{25}\]
The proof is complete.
**Remark 1**: _In the process of augmenting neural networks \(\Phi\) and \(\hat{\Phi}\) into \(\tilde{\Phi}\), the case of \(\ell=1\) in (14)-(16) ensures that augmented neural network \(\tilde{\Phi}\) takes the one same input \(\boldsymbol{\eta}_{0}\) for the subsequent computations involving both \(\Phi\) and its reduced-size version \(\hat{\Phi}\). Then, for \(1<\ell\leq\hat{L}-1\), augmented neural network \(\tilde{\Phi}\) conducts the computation of \(\Phi\) and its reduced-size version \(\hat{\Phi}\) in parallel for the hidden layers of \(1<\ell\leq\hat{L}-1\). When \(\hat{L}\leq\ell\leq L-1\), the hidden layers of reduced-size neural network \(\hat{\Phi}\), which has fewer hidden layers, are expanded to match the number of layers of the original neural network \(\Phi\) with a larger number of hidden layers, but the expanded layers are forced to pass the information to subsequent layers without any changes, i.e., the weight matrices of the expanded hidden layers are identity matrices, and the bias vectors are zero vectors. This expansion is formalized as in the case of \(\hat{L}\leq\ell\leq L-1\) in (14)-(16). Moreover, as \(\ell=L\), this layer is a combination of output layers of both \(\Phi\) and \(\hat{\Phi}\) to generate the same outputs of \(\Phi\) and \(\hat{\Phi}\). At last, a comparison layer \(L+1\) is added to compute the exact difference between the original neural network \(\Phi\) and its reduced-size version \(\hat{\Phi}\)._
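The block-matrix construction of Theorem 1 is mechanical enough to sketch directly. The following NumPy sketch assembles \(\tilde{\mathbf{W}}_{\ell}\) and \(\tilde{\mathbf{b}}_{\ell}\) from per-layer weight/bias lists, assuming \(2\leq\hat{L}\leq L\); the activation bookkeeping of (16) is left to the reachability tool.

```python
import numpy as np
from scipy.linalg import block_diag

def augment(Ws, bs, Ws_hat, bs_hat):
    """Weights/biases of the augmented network of Theorem 1 (Eqs. 14-15).
    Ws/bs are per-layer lists for Phi; Ws_hat/bs_hat for Phi_hat."""
    L, L_hat = len(Ws), len(Ws_hat)
    n_hat = Ws_hat[L_hat - 2].shape[0]          # \hat{n}_{\hat{L}-1}
    W_tilde, b_tilde = [], []
    for ell in range(1, L + 1):
        if ell == 1:                            # shared input: stack rows
            W = np.vstack([Ws[0], Ws_hat[0]])
            b = np.concatenate([bs[0], bs_hat[0]])
        elif ell <= L_hat - 1:                  # both branches in parallel
            W = block_diag(Ws[ell - 1], Ws_hat[ell - 1])
            b = np.concatenate([bs[ell - 1], bs_hat[ell - 1]])
        elif ell <= L - 1:                      # reduced branch just copies
            W = block_diag(Ws[ell - 1], np.eye(n_hat))
            b = np.concatenate([bs[ell - 1], np.zeros(n_hat)])
        else:                                   # ell == L: both output layers
            W = block_diag(Ws[L - 1], Ws_hat[L_hat - 1])
            b = np.concatenate([bs[L - 1], bs_hat[L_hat - 1]])
        W_tilde.append(W)
        b_tilde.append(b)
    n_L = Ws[L - 1].shape[0]                    # comparison layer L+1
    W_tilde.append(np.hstack([np.eye(n_L), -np.eye(n_L)]))
    b_tilde.append(np.zeros(n_L))
    return W_tilde, b_tilde
```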
**Remark 2**: _As shown in Theorem 1, the key of computing model reduction precision \(\rho(\Phi,\hat{\Phi},\mathcal{U})\) is to compute the maximal output value of augmented neural network \(\tilde{\Phi}\) with respect to input set \(\mathcal{U}\). This can be efficiently done by neural network reachability analysis. For instance, as in NNV neural network reachability analysis tool, the reachable sets are in the form of a family of polyhedral sets [16], and in the IGNNV tool, the output reachable set is a family of interval sets [13, 14]. With the reachable set \(\tilde{\mathcal{Y}}\), the model reduction precision \(\rho(\Phi,\hat{\Phi},\mathcal{U})\) can be easily obtained by searching for the maximal value of \(\|\tilde{\mathbf{\eta}}_{L+1}\|\) in \(\tilde{\mathcal{Y}}\), e.g., testing throughout a finite number of vertices in polyhedral sets._
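Remark 2 leaves the choice of reachability tool open. As a coarse, self-contained stand-in, plain interval bound propagation already gives a sound (if conservative) upper bound on \(\rho(\Phi,\hat{\Phi},\mathcal{U})\) over a box input set. The sketch assumes ReLU hidden layers and linear (purelin) output layers; note that for ReLU networks the pass-through units of layers \(\hat{L},\ldots,L-1\) carry ReLU outputs, which are nonnegative, so clipping them at zero changes nothing.

```python
import numpy as np

def interval_forward(W_tilde, b_tilde, lb, ub, n_affine_tail=2):
    """Interval bound propagation through the augmented network; the last
    n_affine_tail layers (output layer L and comparison layer L+1) are
    treated as affine, all earlier layers as ReLU."""
    depth = len(W_tilde)
    for i, (W, b) in enumerate(zip(W_tilde, b_tilde)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lb, ub = Wp @ lb + Wn @ ub + b, Wp @ ub + Wn @ lb + b
        if i < depth - n_affine_tail:          # ReLU on hidden layers only
            lb, ub = np.maximum(lb, 0.0), np.maximum(ub, 0.0)
    return lb, ub

def rho_upper_bound(W_tilde, b_tilde, lb, ub):
    # The sup of the norm over a box is attained coordinatewise at the
    # endpoint of larger magnitude.
    lo, hi = interval_forward(W_tilde, b_tilde, lb, ub)
    return float(np.linalg.norm(np.maximum(np.abs(lo), np.abs(hi))))
```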
## IV Safety Verification of Neural Network Control Systems
In this section, we apply neural network model reduction and model reduction precision to a neural network control system. By replacing the original neural network controller with a reduced-size neural network, the computational cost of safety verification can be significantly reduced. Moreover, the model reduction precision allows an over-estimation of the difference in behavior between the original neural network and the reduced-size one. For the reachability analysis of neural networks, the following result can be obtained.
**Proposition 1**: _Given neural network \(\Phi\), its reduced-size version \(\hat{\Phi}\) with output set \(\hat{\mathcal{Y}}\), and model reduction precision \(\rho(\Phi,\hat{\Phi},\mathcal{U})\), the output reachable set of original neural network \(\Phi\) satisfies_
\[\mathcal{Y}\subseteq\hat{\mathcal{Y}}\oplus\mathcal{B}(\mathbf{0}_{n_{L}\times 1}, \frac{\rho(\Phi,\hat{\Phi},\mathcal{U})}{2}) \tag{26}\]
_where \(\mathcal{B}(\mathbf{0}_{n\times 1},r)\) denotes a ball centered at \(\mathbf{0}_{n\times 1}\) with a radius of \(r\), and \(\oplus\) denotes the Minkowski sum._
This can be obtained straightforwardly by the definition of model reduction precision \(\rho(\Phi,\hat{\Phi},\mathcal{U})\) which characterizes the maximal difference between the outputs of \(\Phi\) and \(\hat{\Phi}\). The proof is complete.
The reachable set estimation for a sampled-data neural network control system in the form of (4) generally involves two parts: 1) Output set computation for neural network controllers denoted by
\[\mathcal{Y}=\texttt{reachNN}(\Phi,\mathcal{U}) \tag{27}\]
which can be efficiently obtained by neural network reachability tools such as [13, 16] and (26), and 2) Reachable set computation of system (1). For the reachable set computation of systems described by ODEs, there exist a variety of approaches and tools such as those well-developed in [17, 18, 19, 20]. The following functions are given to denote the reachable set estimation for sampled data ODE models during \([t_{k},t_{k+1}]\),
\[\mathcal{R}_{e}([t_{k},t_{k+1}]) =\texttt{reachODEx}(f,\mathcal{U}(t_{k}),\mathcal{R}_{e}(t_{k})) \tag{28}\] \[\mathcal{Y}_{e}(t_{k}) =\texttt{reachODEy}(h,\mathcal{R}_{e}(t_{k})) \tag{29}\]
where \(\mathcal{U}(t_{k})\) is the input set for sampling interval \([t_{k},t_{k+1}]\). \(\mathcal{R}_{e}(t_{k})\) and \(\mathcal{R}_{e}([t_{k},t_{k+1}])\) are the estimated reachable sets
```
Input : System dynamics f, h; reduced-size neural network \(\hat{\Phi}\); model reduction
        precision \(\rho(\Phi,\hat{\Phi},\mathcal{U})\); initial set \(\mathcal{X}_{0}\); input set \(\mathcal{V}\)
Output : Reachable set estimation \(\mathcal{R}_{e}([t_{0},t_{f}])\)

Function reachNNCS
    /* Initialization */
    \(k\gets 0\); \(t_{K+1}\gets t_{f}\); \(\mathcal{R}_{e}(t_{0})\leftarrow\mathcal{X}_{0}\)
    /* Iteration over all sampling intervals */
    while \(k\leq K\) do
        \(\mathcal{Y}_{e}(t_{k})\leftarrow\texttt{reachODEy}(h,\mathcal{R}_{e}(t_{k}))\)
        \(\mathcal{H}\leftarrow\mathcal{Y}_{e}(t_{k})\times\mathcal{V}\)
        \(\hat{\mathcal{U}}_{e}(t_{k})\leftarrow\texttt{reachNN}(\hat{\Phi},\mathcal{H})\)
        \(\mathcal{U}_{e}\leftarrow\hat{\mathcal{U}}_{e}(t_{k})\oplus\mathcal{B}(\mathbf{0}_{n_{L}\times 1},\frac{\rho(\Phi,\hat{\Phi},\mathcal{U})}{2})\)
        \(\mathcal{R}_{e}([t_{k},t_{k+1}])\leftarrow\texttt{reachODEx}(f,\mathcal{U}_{e},\mathcal{R}_{e}(t_{k}))\)
        \(k\gets k+1\)
    end while
    return \(\mathcal{R}_{e}([t_{0},t_{f}])\leftarrow\bigcup_{k=0,1,\ldots,K}\mathcal{R}_{e}([t_{k},t_{k+1}])\)
```
**Algorithm 1** Reachable Set Computation for Neural Network Control Systems (4)
for state \(\mathbf{x}(t)\) at sampling instant \(t_{k}\) and interval \([t_{k},t_{k+1}]\), respectively. \(\mathcal{Y}_{e}(t_{k})\) is the estimated reachable set for output \(\mathbf{y}(t_{k})\). With the results in Proposition 1, we can use the reduced-size neural network to compute the output set \(\hat{\mathcal{U}}_{e}(t_{k})\), which is much more computationally efficient due to its smaller size, and replace the output set \(\mathcal{U}(t_{k})\) by \(\hat{\mathcal{U}}_{e}(t_{k})\oplus\mathcal{B}(\mathbf{0}_{n_{L}\times 1},\frac{\rho(\Phi,\hat{\Phi},\mathcal{U})}{2})\), where \(\rho(\Phi,\hat{\Phi},\mathcal{U})\) is normally obtained through a one-time offline computation. The reachable set computation process is shown in Algorithm 1.
Based on the estimated reachable set obtained by Algorithm 1, the safety property can be examined with the existence of intersections between the estimated reachable set and unsafe region \(\neg\mathcal{S}\).
**Proposition 2**: _Consider a neural network control system in the form of (4) with a safety specification \(\mathcal{S}\), the system is safe in \([t_{0},t_{f}]\), if \(\mathcal{R}_{e}([t_{0},t_{f}])\cap\neg\mathcal{S}=\emptyset\), where \(\mathcal{R}_{e}([t_{0},t_{f}])\) is an estimated reachable set obtained by Algorithm 1._
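For concreteness, Algorithm 1 transcribes directly into the loop below. The four set-valued callbacks stand in for reachability-tool calls (e.g., CORA for the ODE steps and IGNNV for the neural network step); their names and signatures are placeholders, not actual tool APIs.

```python
def reach_nncs(f, h, phi_hat, rho, X0, V, K,
               reach_ode_x, reach_ode_y, reach_nn, inflate):
    """Python transcription of Algorithm 1."""
    R_k = X0                                   # R_e(t_0) <- X_0
    tube = []
    for k in range(K + 1):
        Y_k = reach_ode_y(h, R_k)              # output set at t_k
        H = (Y_k, V)                           # H <- Y_e(t_k) x V
        U_hat = reach_nn(phi_hat, H)           # reduced controller output set
        U_k = inflate(U_hat, rho / 2.0)        # Minkowski sum with B(0, rho/2)
        seg = reach_ode_x(f, U_k, R_k)         # reachable set over [t_k, t_{k+1}]
        tube.append(seg)
        R_k = seg.at_final_time()              # hypothetical accessor for R_e(t_{k+1})
    return tube                                # union over k gives R_e([t_0, t_f])
```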
## V Evaluation on Adaptive Cruise Control Systems
In this section, our approach will be evaluated by the safety verification of an Adaptive Cruise Control (ACC) system equipped with a neural network controller as depicted in Fig. 1. The system dynamics is in the form of
\[\begin{cases}\dot{x}_{l}(t)=v_{l}(t)\\ \dot{v}_{l}(t)=\gamma_{l}(t)\\ \dot{\gamma}_{l}(t)=-2\gamma_{l}(t)+2\alpha_{l}(t)-\mu v_{l}^{2}(t)\\ \dot{x}_{e}(t)=v_{e}(t)\\ \dot{v}_{e}(t)=\gamma_{e}(t)\\ \dot{\gamma}_{e}(t)=-2\gamma_{e}(t)+2\alpha_{e}(t)-\mu v_{e}^{2}(t)\end{cases} \tag{30}\]
where \(x_{l}(x_{e})\), \(v_{l}(v_{e})\) and \(\gamma_{l}(\gamma_{e})\) are the position, velocity and actual acceleration of the lead (ego) car, respectively. \(\alpha_{l}(\alpha_{e})\) is the acceleration control input applied to the lead (ego) car, and \(\mu=0.001\) is the friction parameter. The ACC controller we consider here is a \(5\times 20\) feed-forward neural network with ReLU as its activation functions. The sampling scheme is periodic sampling every 0.01 seconds, i.e., \(t_{k+1}-t_{k}=0.01\) seconds.
The sampled-data neural network controller for the acceleration control of the ego car is in the form of
\[\alpha_{e}(t)=\Phi(v_{set}(t_{k}),t_{gap},v_{e}(t_{k}),d_{rel}(t_{k}),v_{rel} (t_{k})) \tag{31}\]
in which \(t\in[t_{k},t_{k+1}]\). The threshold of the safe distance between the two cars is defined below in the form of
\[d_{safe}>d_{thold}=d_{def}+t_{gap}\cdot v_{e} \tag{32}\]
where \(d_{safe}\) is the safe distance between the ego car and lead car, \(d_{thold}\) is the threshold of the safe distance, \(d_{def}\) is the standstill default spacing, and \(t_{gap}\) is the time gap between the vehicles. The safety verification scenario we consider is that the lead car decelerates with \(\alpha_{l}=-2\) to reduce its speed as an emergency braking occurs. We expect that the ego car guided by a neural network control system is able to maintain a safe relative distance to the lead car to avoid a collision.
The safety specification parameters we consider in the simulation are \(t_{gap}=1.4\) seconds and \(d_{def}=10\). The time horizon that we want to verify is 3 seconds, i.e., 300 sampling intervals, after the emergency braking comes into play. The initial sets are \(x_{l}(0)\in[94,96]\), \(v_{l}(0)\in[30,30.2]\), \(\gamma_{l}(0)=0\), \(x_{e}(0)\in[10,11]\), \(v_{e}(0)\in[30,30.2]\), and \(\gamma_{e}(0)=0\).
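On interval-valued reachable sets, the check against (32) reduces to comparing the worst-case relative distance with the worst-case threshold. A sketch, with the interval bounds as assumed inputs:

```python
def is_safe_at_t(x_l, x_e, v_e, d_def=10.0, t_gap=1.4):
    """Interval version of the safety check of Eq. (32): x_l, x_e, v_e are
    (lower, upper) bounds extracted from the reachable set at one time
    instant; safe iff the smallest d_rel exceeds the largest threshold."""
    d_rel_min = x_l[0] - x_e[1]            # smallest possible d_rel
    d_thold_max = d_def + t_gap * v_e[1]   # largest possible threshold
    return d_rel_min > d_thold_max
```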
As mentioned above, the size of the hidden layers of the original neural network controller is \(5\times 20\), i.e., 5 layers with 20 neurons in each layer. Through neural network model reduction, we replace the original neural network controller with a reduced-size neural network of hidden layer size \(2\times 5\), i.e., 2 layers with 5 neurons in each layer, combined with a model reduction precision \(\rho=1.0234\). For the continuous-time nonlinear dynamics, we use CORA [18] to do the reachability analysis for the time interval between two sampling instants, and IGNNV in [13] is used for neural network reachability analysis.
The output reachable set of the ACC system for the relative distance between the lead car and the ego car over time can be shown in Figs. 2 and 3. Notably, the computation time has been significantly reduced from 12.355 seconds to 1.099 seconds when using a reduced-size neural network as shown
Fig. 1: Adaptive cruise control system model [13]
Fig. 2: Reachable set of ACC systems. The green (blue) pipe indicates the output reachable set of the dynamic system with the original (reduced-size) neural network. It is obvious that the blue pipe completely wraps around the green pipe. The yellow (red) pipe denotes the safe distance threshold region for the dynamical system with the original (reduced-size) neural network.
in Fig. 2 and Table I. The system is safe when the output reachable set of relative distances does not intersect with the safe distance threshold region.
In summary, the simulations show that the closed-loop system with the reduced-size neural network can be used for safety verification of the original system as long as the model reduction precision can be provided. The reduced-size neural network can significantly reduce the computational time of the entire process of solving the neural network control system for the output reachable set.
## VI Conclusions
This paper investigates the problem of simplifying the safety verification of neural network control systems. It proposes a concept of model reduction precision that characterizes a guaranteed upper bound on the output distance between a neural network and its reduced-size version, and proposes an algorithm to calculate this precision. By using a reduced-size neural network as the controller and introducing the model reduction precision into the computation of the output reachable set, combined with the calculation of reachable sets for dynamical systems, we give a reachable set computation algorithm based on model reduction of neural network control systems. In this way, we can obtain an over-approximated output reachable set of the original neural network control system with less computation time and enable a simplification of safety verification processes. The developed results are applied to the ACC system to verify their effectiveness and feasibility.
|
2301.03019 | Equivariant and Steerable Neural Networks: A review with special
emphasis on the symmetric group | Convolutional neural networks revolutionized computer vision and natural
language processing. Their efficiency, as compared to fully connected neural
networks, has its origin in the architecture, where convolutions reflect the
translation invariance in space and time in pattern or speech recognition
tasks. Recently, Cohen and Welling have put this in the broader perspective of
invariance under symmetry groups, which leads to the concept of group
equivariant neural networks and more generally steerable neural networks. In
this article, we review the architecture of such networks including equivariant
layers and filter banks, activation with capsules and group pooling. We apply
this formalism to the symmetric group, for which we work out a number of
details on representations and capsules that are not found in the literature. | Patrick Krüger, Hanno Gottschalk | 2023-01-08T11:05:31Z | http://arxiv.org/abs/2301.03019v1 | # Equivariant and Steerable Neural Networks
###### Abstract
Convolutional neural networks revolutionized computer vision and natural language processing. Their efficiency, as compared to fully connected neural networks, has its origin in the architecture, where convolutions reflect the translation invariance in space and time in pattern or speech recognition tasks. Recently, Cohen and Welling have put this in the broader perspective of invariance under symmetry groups, which leads to the concept of group equivariant neural networks and more generally steerable neural networks. In this article, we review the architecture of such networks including equivariant layers and filter banks, activation with capsules and group pooling. We apply this formalism to the symmetric group, for which we work out a number of details on representations and capsules that are not found in the literature.
**Keywords:** Group equivariant neural networks, steerable neural networks,
symmetric group
**MSC(2020).** 68T07, 68T45.
## 1 Introduction
Neural networks are machine learning algorithms that are used in a wide variety of applications, for example in image recognition and language processing. Among them are automated communication in the service sector, perception for self-driving cars and the interpretation of medical images in healthcare. In many machine learning problems, it is desirable to make the predictions of a network _invariant_ to certain transformations of the input. This means that, for instance in image classification, we want an image that was transformed by a rotation by some angle to still be classified with the same label: A picture of a cat rotated by 90 degrees is still a picture of a cat.
An important step towards learning these _invariant representations_ was made by the introduction of convolutional neural networks by LeCun et al. for image classification of handwritten digits [1, 2]. These networks rely on the mathematical _convolution operation_ to achieve higher efficiency by _parameter sharing_, as well as _equivariance to translations_. Equivariance means that a shift in the input data in some direction will be carried through all but the ultimate fully connected layers of the network and result in a similar shift in further deep layers, which can then be used to achieve translation invariant representations.
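The translation equivariance of the convolution operation can be checked numerically. The following sketch uses circular padding so that the shift commutes exactly (up to floating-point error) with the convolution; it is an illustration, not part of any cited implementation.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 16, 16)   # a random single-channel "image"
w = torch.randn(1, 1, 3, 3)     # a random 3x3 filter

def conv(t):
    # circular padding avoids boundary effects, making equivariance exact
    return F.conv2d(F.pad(t, (1, 1, 1, 1), mode="circular"), w)

shift = lambda t: torch.roll(t, shifts=(2, 3), dims=(2, 3))

# shifting then convolving equals convolving then shifting
assert torch.allclose(conv(shift(x)), shift(conv(x)), atol=1e-5)
```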
While convolutional neural networks thus yield some of the desired invariance, their performance still decreases when the data is transformed by other symmetries, e.g. rotations or reflections. While this can be avoided by data augmentation, another compelling approach is to extend the translation equivariance of convolutional networks to a wider class of transformations. This is realized by group equivariant neural networks, or, in short, \(G\)-CNNs, which were introduced by Cohen & Welling in [3]. \(G\)-CNNs use group theory to generalize the convolution operation of conventional CNNs to _group convolution_, which then yields equivariance not only to translations, but to a wider group \(G\) of transformations, e.g. compositions of translations, reflections and rotations. These \(G\)-CNNs are again generalized by steerable CNNs (Cohen & Welling, [4]) which, instead of modifying the convolution operation, instantiate _equivariant filter banks_ and _steerable feature spaces_ by making use of the theory of _group representations_. They thereby achieve a more general and versatile concept of group equivariance.
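As a concrete illustration of how group convolution extends ordinary convolution, the following minimal sketch implements a lifting layer for the group \(p4\) (translations and 90-degree rotations) by correlating the input with all four rotated copies of the filter. It is a sketch of the idea, not the reference implementation of [3].

```python
import torch
import torch.nn.functional as F

def p4_lifting_conv(x, w):
    """Lift a planar feature map to a function on p4 by correlating with
    the filter in all four rotated orientations.
    x: (B, C, H, W); w: (C_out, C, k, k); returns (B, C_out, 4, H', W')."""
    stack = [F.conv2d(x, torch.rot90(w, r, dims=(2, 3))) for r in range(4)]
    return torch.stack(stack, dim=2)    # rotation index becomes a new axis
```

Rotating the input by 90 degrees then permutes (and spatially rotates) the four orientation channels of the output, which is exactly the equivariance property exploited by \(G\)-CNNs.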
The aim of this article is to give an overview of the theory of equivariant networks. For this, some understanding of the mathematical theory of groups and group representations is needed, which we provide in chapter 3.
Chapter 4 will then be concerned with the theory of equivariant networks. After a short introduction on conventional, fully connected networks in section 4.1, basic knowledge about convolutional neural networks is provided in section 4.2 by explaining the core concepts of feature maps, filters, the convolution operation and the translation equivariance resulting from it, as well as parameter sharing. In section 4.3, we will explain how a modification of the conventional
convolution operation leads to \(G\)-CNNs and how these thereby achieve equivariance to groups of transformations. The more general theory of steerable CNNs will then be presented in section 4.4. We will furthermore recapitulate how steerable CNNs are in fact a direct generalization of \(G\)-CNNs by explaining the equivalence of the latter to a special case of the former. The second aim of this article is to give some contributions by applications of \(G\)-CNNs and steerable CNNs for the symmetric group. This will be the focus of chapter 5.
## 2 Related Work
Steerable CNNs have besides [4] been investigated by several other publications. Weiler et al. ([5]) discuss 3D-steerable CNNs, i.e. networks which process volumetric data with domain \(\mathbb{R}^{3}\). In [6], Cohen et al. give a more abstract approach to intertwiners in steerable CNNs and it is shown that layers of an equivariant network in fact _need_ to transform according to an induced group representation. A more general theory of steerable CNNs on homogeneous spaces is discussed by the same authors in [7], showing that linear maps between feature spaces are in one-to-one correspondence to convolutions with equivariant kernels. \(E(2)\)-equivariant steerable CNNs, which are equivariant to isometries of the plane \(\mathbb{R}^{2}\) in the form of continuous rotations, reflections and translations, are discussed in [8]. Furthermore, gauge equivariant networks, which enable equivariance not just to global symmetries, but also to local transformations, are discussed by Cohen et al. in [9].
\(G\)-CNNs, i.e. networks that implicitly or explicitly rely on the regular representation of their respective group have also been studied several times. In [10], Gens & Domingos present symnets, which are a framework for networks with feature maps over arbitrary symmetry groups. Kanazawa et al. present in [11] a way to learn scale invariant representations while keeping parameter cost low. Dieleman et al. discuss exploiting rotation symmetry to predict galaxy morphology in [12] and expand their work to cyclic symmetries in [13]. Another network architecture that relies on regular representations is given by scattering networks, which were defined by Mallat et al. in [14] for compact groups, as well as for the Euclidean group in [15] and [16].
Another important part of research concerning neural networks in general and equivariant networks specifically is the development of _universal approximation theorems_. Many of these have been proved for different kinds of networks in varying levels of generality, with some of the first being [17] and [18]. Generally speaking, all approximation theorems state that neural networks can, under certain conditions, approximate any function from a predefined function space with arbitrary precision. In particular, it was proved by Leshno et al. in [19] that multilayer feedforward networks can approximate any continuous function as long as non-polynomial activation functions are used.
Petersen and Voigtlaender showed in [20] that fully connected feedforward
networks can under minimal conditions be translated into convolutional (i.e. translation equivariant) networks, thereby enabling the application of most approximation theorems for the former to the latter. For group equivariant maps, Kumagai & Sannai ([21]) also employ a conversion theorem from feedforward networks to CNNs to establish a method for obtaining universal approximation theorems for group equivariant convolutional networks.
## 3 Mathematical Preliminaries
In this section, we explain some algebraic concepts that will be used and referenced throughout this article.
First, we introduce group actions and semidirect products.
Secondly, we briefly recall the representation theory of finite groups.
Lastly, we give a short summary of the explicit representation theory of \(S_{n}\), the symmetric group of \(n\) letters.
### Semidirect Products
We define the core mathematical concepts needed for the theory of equivariant networks, starting with definitions of group actions and semidirect products. While this section is kept quite short, there exist many introductions to this topic, see e.g. [22] or [23].
**Definition 1**: Let \((G,\circ)\) be a group and \(X\) be some set. A _(left) action_ of \(G\) on \(X\) is a binary operation
\[\cdot:G\times X\to X,\,(g,x)\mapsto g\cdot x, \tag{1}\]
such that the following requirements are met:
1. \(e\cdot x=x\;\forall x\in X\), with \(e\) being the neutral element of \(G\).
2. \((g\circ h)\cdot x=g\cdot(h\cdot x)\;\forall g,h\in G,x\in X\).
A group action is also often interpreted as a map
\[\phi_{G}:G\rightarrow\mathrm{Aut}(X)=\{f:X\to X\mid f\text{ is bijective }\},\;g\mapsto\phi_{g}. \tag{2}\]
Then, the two conditions above translate to the following:
1. \(\phi_{e}=id\)
2. \(\phi_{g}\circ\phi_{h}=\phi_{gh}\;\forall g,h\in G\).
**Definition 2**: Let \((H,\circ_{H})\) and \((N,\circ_{N})\) be two groups and let furthermore \(\phi_{H}:H\rightarrow\mathrm{Aut}(N)\) be a group action of \(H\) on \(N\). Then, we can construct the _outer_ semidirect product \((N\rtimes H,\bullet)\) by setting the cartesian product \(N\times H\) as the group's underlying set and defining the group operation as follows:
\[\bullet:(N\rtimes H)\times(N\rtimes H)\to N\rtimes H, \tag{3}\] \[(n_{1},h_{1})\bullet(n_{2},h_{2})=(n_{1}\circ_{N}\phi_{H}(h_{1})n_{2},\,h_{1}\circ_{H}h_{2}).\]
With this operation, \(H\cong\{(e_{N},h)\mid h\in H\}\) and \(N\cong\{(n,e_{H})\mid n\in N\}\) become subgroups of \((N\rtimes H)\), with the latter being a normal subgroup as
\[(n,h)\bullet(n^{\prime},e)=(n\circ_{N}\phi(h)n^{\prime}\circ_{N}n^{-1},e) \bullet(n,h)\;\forall(n,h)\in N\rtimes H,(n^{\prime},e)\in N. \tag{4}\]
The neutral element is given by \((e_{N},e_{H})\) and we have \((n,h)^{-1}=(\phi_{H}(h^{-1})n^{-1},h^{-1})\) for any \((n,h)\in N\rtimes H\). Furthermore, using above isomorphisms, any element \((n,h)\in N\rtimes H\) can be written as a unique product
\[(n,h)=(n,e_{H})\bullet(e_{N},h). \tag{5}\]
As \(N\cap H=(e_{N},e_{H})\) holds trivially, the outer semidirect product thus has all properties of the inner semidirect product with respective isomorphic subgroups.
Example 1: We name two explicit examples of semidirect product groups and their respective actions here. These examples will be referenced in later parts of this article.
1. The group \(p4\) of compositions of rotations and translations of the square grid \(\mathbb{Z}^{2}\) is the semidirect product of \(\mathbb{Z}^{2}\) and \(C_{4}\), the group of 90-degree-rotations around any origin. Elements of \(p4\) can be parametrized by \(3\times 3\) matrices depending on a rotational coordinate \(r\in\{0,1,2,3\}\) and two translational coordinates \((u,v)\in\mathbb{Z}^{2}\): \[g(r,u,v)=\begin{pmatrix}\cos(\frac{r\pi}{2})&-\sin(\frac{r\pi}{2})&u\\ \sin(\frac{r\pi}{2})&\cos(\frac{r\pi}{2})&v\\ 0&0&1\end{pmatrix}\] (6) \(p4\) now acts on points \(x=(u^{\prime},v^{\prime})\in\mathbb{Z}^{2}\) by matrix multiplication from the left, after adding a homogeneous coordinate to \(x\), i.e. \(x=(u^{\prime},v^{\prime})\mapsto(u^{\prime},v^{\prime},1)\).
2. The group \(p4m\) of compositions of rotations, mirror reflections and translations of \(\mathbb{Z}^{2}\) is the semidirect product of \(\mathbb{Z}^{2}\) and \(D_{4}\), which is the group of 90-degree-rotations and reflections about any origin. Elements of this group can be parametrized similarly to \(p4\), with the difference of adding a reflection coordinate \(m\): \[g(m,r,u,v)=\begin{pmatrix}(-1)^{m}\cos(\frac{r\pi}{2})&-(-1)^{m}\sin(\frac{r\pi}{2})&u\\ \sin(\frac{r\pi}{2})&\cos(\frac{r\pi}{2})&v\\ 0&0&1\end{pmatrix}\] (7) \(p4m\) then acts on \(\mathbb{Z}^{2}\) analogously to \(p4\).
In section 5, we will investigate steerable CNNs that rely on the semidirect product of the symmetric group \(S_{n}\) and \(\mathbb{Z}^{n}\). For this, some preliminary definitions shall also be given here.
**Definition 3**:
1. The symmetric group \(S_{n}\) is the group of permutations of \(n\) letters, i.e. \[S_{n}=\{\sigma:\{1,..,n\}\to\{1,..,n\}\ |\ \sigma\ \text{is bijective}\},\] with the group operation being defined by composition of elements, i.e. functions.
2. For \(r\leq n\), an _r-cycle_ is an element \(\sigma\in S_{n}\), such that there exist \(x_{1},..,x_{r}\in\{1,..,n\}\) with \[\sigma(x_{1})=x_{2},\sigma(x_{2})=x_{3},..,\sigma(x_{r-1})=x_{r},\sigma(x_{r})=x_{1},\] \[\sigma(x_{k})=x_{k}\ \forall\ x_{k}\notin\{x_{1},..,x_{r}\}.\] An \(r\)-cycle as above is often denoted \((x_{1}\,x_{2}\,..\,x_{r})\). We call this the _cycle notation_, and we call \(r\) the _cycle length_ of \(\sigma\).
3. We obtain a (left) action of \(S_{n}\) on \(\mathbb{Z}^{n}\) for any \(n\in\mathbb{N}\) by letting \(\sigma\in S_{n}\) permute the coordinates of \(x=(x_{1},..,x_{n})\in\mathbb{Z}^{n}\): \[\sigma\circ(x_{1},..,x_{n})=(x_{\sigma^{-1}(1)},..,x_{\sigma^{-1}(n)})\] (8) The inverse in the indices ensures the compatibility \((\sigma\tau)\circ x=\sigma\circ(\tau\circ x)\), i.e. that this is indeed a left action.
_Example 2_: With above action, we can construct the outer semidirect product \(\mathbb{Z}^{n}\rtimes S_{n}\) of products \(t\sigma\) of translations \(t\) and coordinate permutations \(\sigma\). This group can now be parameterized and made to act on \(\mathbb{Z}^{n}\) in analogy to example 1 by \((n+1)\times(n+1)\) matrices with the upper left \(n\times n\) block being a permutation matrix representing \(\sigma\) and a translation vector \(t\in\mathbb{Z}^{n}\) in the first \(n\) entries of the last column. Two examples for \(n=2\) and \(n=3\) shall be given here:
\[g_{2}(((12),(3,3)))=\begin{pmatrix}0&1&3\\ 1&0&3\\ 0&0&1\end{pmatrix},g_{3}(((123),(1,2,2)))=\begin{pmatrix}0&0&1&1\\ 1&0&0&2\\ 0&1&0&2\\ 0&0&0&1\end{pmatrix} \tag{9}\]
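To make the matrix parametrization concrete, the following NumPy sketch (the function names are ours, purely for illustration) builds the matrix of equation (9) for a pair \((\sigma,t)\) and checks that matrix multiplication realizes the semidirect product law (3):

```python
import numpy as np

def perm_matrix(sigma):
    """Permutation matrix P with P e_j = e_{sigma(j)}, for sigma stored
    as a tuple with sigma[j] = image of j (0-indexed)."""
    n = len(sigma)
    P = np.zeros((n, n), dtype=int)
    for j in range(n):
        P[sigma[j], j] = 1
    return P

def semidirect_matrix(sigma, t):
    """(n+1)x(n+1) matrix of the element (t, sigma) of Z^n x| S_n,
    acting on Z^n in homogeneous coordinates as in equation (9)."""
    n = len(sigma)
    g = np.eye(n + 1, dtype=int)
    g[:n, :n] = perm_matrix(sigma)
    g[:n, n] = t
    return g

# g_3 from (9): the 3-cycle (123), i.e. 0->1, 1->2, 2->0, with t = (1, 2, 2).
print(semidirect_matrix((1, 2, 0), (1, 2, 2)))

# Matrix multiplication realizes the semidirect product law (3):
# (t1, s1)(t2, s2) = (t1 + s1.t2, s1 o s2).
s1, t1 = (1, 2, 0), np.array([1, 2, 2])
s2, t2 = (0, 2, 1), np.array([0, 0, 1])
comp = tuple(s1[s2[j]] for j in range(3))          # s1 o s2
t_comp = perm_matrix(s1) @ t2 + t1                 # t1 + s1 . t2
lhs = semidirect_matrix(s1, t1) @ semidirect_matrix(s2, t2)
assert np.array_equal(lhs, semidirect_matrix(comp, t_comp))
```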
### An Introduction to Representation Theory of Finite Groups
**Definition 4**: Let \(G\) be a group.
1. A _representation_\((V,\rho)\) of \(G\) consists of a vector space \(V\), together with a group homomorphism \(\rho:G\to GL(V)\) from \(G\) to the general linear group of \(V\), i.e. the group of invertible linear maps from \(V\) to \(V\), such that: \[\rho(gh)=\rho(g)\rho(h)\,\forall g,h\in G.\] (10) The _dimension_ or _degree_ of the representation is defined as the dimension of its vector space \(V\).
2. A _subrepresentation_ of a representation \((V,\rho)\) is a subspace \(W\subseteq V\) which is _G-invariant_, meaning \(\rho(g)w\in W\) for all \(g\in G,w\in W\). The subrepresentation is then denoted by \((W,\rho\upharpoonright_{W})\), and we have \[\rho\upharpoonright_{W}(g)=\rho(g)\upharpoonright_{W}.\] (11) Any representation \((V,\rho)\) always has at least two subrepresentations: Itself and \(\{0\}\).
3. If a representation \((V,\rho)\) with \(V\neq\{0\}\) has no subrepresentations except itself and \(\{0\}\), it is called _irreducible_. Otherwise, it is called _reducible_. Throughout this article, we will often refer to irreducible representations as _irreps_.
4. For two representations \((V_{1},\rho_{1})\) and \((V_{2},\rho_{2})\) of \(G\) of respective degrees \(n_{1}\) and \(n_{2}\) we can define the _direct sum_ of representations, \((V_{1}\oplus V_{2},(\rho_{1}\oplus\rho_{2}))\), with \((\rho_{1}\oplus\rho_{2})(g)(v,w)=(\rho_{1}(g)v,\rho_{2}(g)w)\). This is again a representation of \(G\) and has degree \(n_{1}+n_{2}\).
**Example 3** (**The Quotient Representation**):
Given the quotient space (or quotient group) \(G/H\) of a group \(G\) and some subgroup \(H\subseteq G\), we define the _quotient representation_\((V_{\text{quot}},\rho_{\text{quot}})\) by associating a basis vector with every coset in \(G/H\):
\[V_{\text{quot}}=\langle\{e_{gH}\mid gH\in G/H\}\rangle \tag{12}\]
and define \(\rho_{\text{quot}}\) using the action of \(G\) on cosets:
\[\rho_{\text{quot}}(g^{\prime})e_{gH}=e_{g^{\prime}gH}\;\forall g^{\prime}\in G,\;\forall e_{gH}\in V_{\text{quot}}. \tag{13}\]
As this representation just permutes the basis vectors, which here correspond to cosets, it can also be realized by permutation matrices. If we choose \(H=e\), and thus receive \(G/H=G\), this yields the so-called _regular representation_ which permutes basis vectors \(e_{g}\) for each \(g\in G\).
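As a concrete illustration, a small NumPy sketch (assuming the cyclic group \(C_{4}\) for concreteness, encoded additively mod 4) builds the regular representation as permutation matrices and verifies the homomorphism property (10):

```python
import numpy as np

def rho_reg(g, n=4):
    """Regular representation of C_n: rho(g) permutes basis vectors
    e_h -> e_{g*h}, where the group product is addition mod n."""
    P = np.zeros((n, n), dtype=int)
    for h in range(n):
        P[(g + h) % n, h] = 1
    return P

# Homomorphism property rho(g) rho(h) = rho(gh) for all g, h in C_4:
for g in range(4):
    for h in range(4):
        assert np.array_equal(rho_reg(g) @ rho_reg(h), rho_reg((g + h) % 4))
```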
**Definition 5**:
Let \((V_{1},\rho_{1})\) and \((V_{2},\rho_{2})\) be two representations of some group \(G\) and let \(f:V_{1}\to V_{2}\) be a linear map.
1. \(f\) is called an _intertwiner_ between \(\rho_{1}\) and \(\rho_{2}\), if \[f(\rho_{1}(g)(v))=\rho_{2}(g)(f(v)),\,\forall g\in G,\,\forall v\in V_{1}.\]
2. \(\rho_{1}\) and \(\rho_{2}\) are called _equivalent_ or _isomorphic_, if there exists an intertwiner \(f:V_{1}\to V_{2}\) between them, which also is a vector space isomorphism, i.e. an invertible linear map. We then often write \(V_{1}\simeq V_{2}\) or \(\rho_{1}\simeq\rho_{2}\).
3. The condition in i) is linear in \(f\), thus any linear combination of intertwiners between representations \(\rho_{1}\) and \(\rho_{2}\) is again an intertwiner. We hence receive a vector space of intertwiners between \(\rho_{1}\) and \(\rho_{2}\), denoted \(\text{Hom}_{G}(\rho_{1},\rho_{2})\).
**Theorem 1** (**Schur's Lemma**): _Let \((V_{1},\rho_{1})\) and \((V_{2},\rho_{2})\) be irreducible representations of \(G\)._
1. _If_ \(V_{1}\) _and_ \(V_{2}\) _are not isomorphic, then dim_ \(\mbox{Hom}_{G}(\rho_{1},\rho_{2})=0\)_, i.e. the only intertwiner between_ \(\rho_{1}\) _and_ \(\rho_{2}\) _is the zero map._
2. _If_ \(V_{1}\) _and_ \(V_{2}\) _are isomorphic, then dim_ \(\mbox{Hom}_{G}(\rho_{1},\rho_{2})=1\)_, and all maps intertwining_ \(\rho_{1}\) _and_ \(\rho_{2}\) _are scalar multiples of the identity map._
**Theorem 2**:
1. _Any finite dimensional representation_ \((V,\rho)\) _of a finite group can be decomposed into a direct sum of irreducible representations:_ \[(V,\rho)\sim(\oplus V_{i},\oplus\rho_{i}),\] _where_ \((V_{i},\rho_{i})\) _are irreps of_ \(G\)_._
2. _The irreps of_ \(G\) _are uniquely determined up to isomorphism, and there are finitely many of them._
See [24], section 1.4, Theorem 2 for i) and section 2.5, Theorem 7 for ii). \(\Box\)
**Definition 6**: Let \(Irr(G)\) be the set of nonisomorphic irreducible representations of \(G\). Then, any \((V_{i},\rho_{i})\in\mbox{Irr}(G)\) can occur in the decomposition of some finite dimensional representation \((V,\rho)\) any number of times. A list of integers \(m_{\rho_{i}}(\rho)\geq 0\) corresponding to the multiplicity of each irrep \((V_{i},\rho_{i})\) in \((V,\rho)\) is called the _type_ of \(\rho\). Representations are uniquely determined up to isomorphism by their types, meaning that if two representations have the same type, then they are isomorphic and that isomorphic representations always have the same type.
## 4 The Theory of Equivariant Networks
In this section, the general theory behind convolutional neural networks is explained. We cover three types of convolutional networks which are subsequent generalizations of each other. In section 4.1, we give a brief overview of how non-convolutional, fully connected networks work. Section 4.2 will then cover the basics of standard _translation equivariant_ convolutional networks. Key concepts, such as _feature maps_, _filters_, convolutional layers and the convolution operation, as well as activation and pooling layers, are discussed. In section 4.3, we elaborate on the generalization of convolutional networks to group equivariant networks (G-CNNs). All concepts from the previous section are suitably generalized; e.g., \(G\)-feature maps and group convolution will be touched on, and it will be shown how the resulting networks have a more general form of \(G\)-equivariance. Lastly, in section 4.4, we introduce steerable convolutional networks, which use representation theory to yield a more efficient and versatile way to define group equivariant networks where the group acts on fibers of feature maps, also called filter banks. It should be noted that while G-CNNs and steerable CNNs can be realized for many kinds of groups, the sections 4.3 and 4.4 will focus on networks that use split groups, i.e. groups that are constructed as a semidirect product. Explicitly, we will consider networks on \(p4\) and \(p4m\), which were defined in example 1. However, the concepts can easily be generalized to other split groups. A \(G\)-CNN that relies on a non-split group will be covered in section 5.
### Fully Connected Neural Networks
Deep neural networks describe a wide range of computing systems in the domain of machine learning that are meant to loosely resemble the human brain by using interconnected layers, indexed in the following by \(l=1,...,L\), of _neurons_. Each neuron \(N_{i}^{l},i=1,...,K_{l}\) at each layer \(l\) of the network can be thought of as a simple unit that holds a number, usually referred to as its _activation_, and which is connected to all neurons of the next layer. Therefore, this kind of network is often called _fully connected_. Each of these connections is determined by a _weight_ and a _bias_. We denote by \(\omega_{i,j}^{l}\) and \(b_{i,j}^{l}\) the weight and bias that connect the \(i\)th neuron of layer \(l\), \(N_{i}^{l}\), to the \(j\)th neuron \(N_{j}^{l+1}\) of layer \(l+1\). In the so-called _forward propagation step_ of a network, starting from the input layer \(l=1\), each neuron of the subsequent layer is updated in the following way:
\[N_{i}^{l}=\sigma_{l}\left(\sum_{j=1}^{K_{l-1}}\omega_{j,i}^{l-1}N_{j}^{l-1}+b_ {j,i}^{l-1}\right), \tag{14}\]
where \(\sigma_{l}\) is some _non-linear activation function_; these will be discussed in section 4.2.5. This process is then repeated sequentially for all remaining layers, until the final layer \(l=L\) is reached. This _output layer_ consists of one neuron for each possible label of the classification problem at hand. The output neuron with the highest activation is returned as the network's predicted label for the given input.
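A minimal NumPy sketch of the forward pass (14) may make this concrete; as a simplification of the per-connection biases \(b_{i,j}^{l}\) above, it assumes one bias per neuron, and it uses ReLU (introduced in section 4.2.5) as the activation. All names and layer sizes are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def forward(x, weights, biases):
    """weights[l] has shape (K_{l+1}, K_l); biases[l] has shape (K_{l+1},).
    Each step implements equation (14) with a per-neuron bias."""
    for W, b in zip(weights, biases):
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(0)
layers = [8, 16, 16, 3]                      # K_1, ..., K_L
weights = [rng.normal(size=(m, n)) for n, m in zip(layers, layers[1:])]
biases = [np.zeros(m) for m in layers[1:]]
print(forward(rng.normal(size=8), weights, biases))  # output layer activations
```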
To train a deep neural network, a training dataset with inputs that are already labeled with their respective classifications is needed. During the training phase, after each forward pass, the (initially large) error of the network, i.e. the difference between the model's prediction and the actual labels of the training data is quantified by a _loss function_ which takes as inputs the weights and biases of the network. A _backpropagation_ algorithm then produces the gradients of this loss function with respect to its variables (i.e. the weights and biases), which are often called the _parameters_ of the networks. After obtaining the gradients, an optimization algorithm, the most used one being
_stochastic gradient descent_ ([25]), is used to modify the network's parameters, thereby slightly optimizing the loss function. This training process of forward pass and backpropagation is now repeated until the error is sufficiently small. Then, the backpropagation step is no longer needed and the network can be used to classify unlabeled data. The reader interested in a more thorough introduction to feedforward networks is referred to chapter 6 of [26].
### Basics on Convolutional Neural Networks
Convolutional Neural Networks (CNNs) are a powerful tool in machine learning, mainly used for pattern recognition and classification of certain input data. They can be used on a variety of data, for example sound signatures (one-dimensional data) and 2D and/or 3D images. Just as fully connected networks, CNNs work in layers, in each of which a set of _filters_ representing certain features to look for in the input data is _convolved_ with stacks of so-called _feature maps_. After application of certain operations known as _nonlinearities_ and _pooling_, this yields a new set of feature maps, which is then convolved with a new set of filters in the next layer. A core feature of CNNs is the _equivariance to translation_ of their convolutional layers. This means that shifting the input data of a convolutional layer will result in a similar shift in that layer's output, allowing us to detect features independent of their location. In the next few paragraphs we will explain these terms in more detail. We use a network operating on 2D black-and-white images, i.e. an input feature map with one channel, as an example.
#### Feature Maps and Filters
Feature maps are mathematical functions used to describe the input data, as well as the outputs of each convolutional layer. Depending on the type of the input, their domain can vary. In image recognition, the domain of a feature map usually is the two-dimensional pixel grid \(\mathbb{Z}^{2}\). In our example of a black and white image, an input-level feature map \(f:\mathbb{Z}^{2}\rightarrow\mathbb{R}\) simply returns the grey-value at each pixel \(x\in\mathbb{Z}^{2}\). In a coloured image, the function would instead be of the form \(f:\mathbb{Z}^{2}\rightarrow\mathbb{R}^{3}\), returning a vector consisting of the color channel values at each pixel coordinate. Since images are bound in size, a feature map is usually said to just return zero everywhere outside of a certain subdomain of pixels. Since layers of convolutional networks usually contain more than one feature map, one often speaks of 'stacks' of feature maps \(f^{j}:\mathbb{Z}^{2}\rightarrow\mathbb{R}^{K},j=1,..,n\), where the index is often omitted for simplicity.
**Filters**
Filters are used to extract certain characteristic patterns in our data by convolving them with feature maps, which will be described in the next paragraph. Mathematically, they are also described as functions. For convolution to work,
it is important that these functions share the domain and image space of the feature maps that they are to be convolved with. In our example, a one-channel filter looking for diagonal lines (top left to bottom right) could look like this:
\[\psi:\mathbb{Z}^{2}\to\mathbb{R};\qquad\quad\psi(x)=\begin{cases}1&x\in\{(-1,1),( 0,0),(1,-1)\}\\ 0&\text{elsewhere}.\end{cases} \tag{15}\]
The support, i.e. the non-zero domain, of filters is usually much smaller than that of the feature maps, with usual sizes being \(3\times 3\) or \(5\times 5\) squares centered at the origin. While classical computer vision used handcrafted filters as in the above example, CNN based computer vision uses learnable filters \(\psi=(\psi_{i,j})_{i,j=-s,\ldots,s}\), \(s=1,2\). The values \(\psi_{i,j}\) are the learned parameters of the network. As will be elaborated in section 4.2.4, this drastically reduces the parameter cost of CNNs in comparison to fully connected networks.
#### The Convolution Operation
The core building block of regular CNNs are the so called _convolutional layers_. In each of these layers, a stack of feature maps \(f:\mathbb{Z}^{2}\to\mathbb{R}^{K}\) is _convolved_ with a set of filters \(\psi:\mathbb{Z}^{2}\to\mathbb{R}^{K}\), producing a new feature map which will then be used in the next layer. The convolution operation is mathematically defined as follows:
\[f\star\psi:\mathbb{Z}^{2}\to\mathbb{R};\qquad\quad[f\star\psi](x)=\sum_{y\in \mathbb{Z}^{2}}\sum_{k=1}^{K}f_{k}(y)\psi_{k}(y-x) \tag{16}\]
While this may look complicated at first glance, it can be interpreted as simply sliding our small filter window over the domain of the image or feature map and summing up the element-wise products of the filter's values and the feature map's values lying "under" them at each output channel \(k\) and each respective position \(x\) of the filter. To demonstrate this further, we can interpret our example filter \(\psi\) from (15), as well as a feature map \(f\) as matrices. The feature map and filter, as well as the result of the convolution operation of said elements is illustrated in figure 1.
Here, as in all following figures throughout this article, black pixels represent one, grey pixels zero and white pixels represent minus one.
Recall that, even though the domain of \(f\) and \(\psi\) is infinite, both functions just return \(0\) everywhere outside of the depicted areas.
Convolving a feature map with a filter yields a new feature map \(f\star\psi:\mathbb{Z}^{2}\to\mathbb{R}\), describing how well the feature described by the filter fits at different positions \(x\in\mathbb{Z}^{2}\) in the image.
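The following Python sketch implements (16) directly for one-channel maps, using the diagonal-line filter from (15). Feature maps are stored sparsely as dictionaries over \(\mathbb{Z}^{2}\), and, for brevity, the output is only evaluated on the support of \(f\); the "V"-shaped input is purely illustrative.

```python
def conv(f, psi):
    """[f * psi](x) = sum_y f(y) psi(y - x), cf. (16).
    f, psi: dicts mapping (u, v) in Z^2 to values, zero elsewhere.
    For brevity, the output is evaluated only on the support of f."""
    return {x: sum(f[y] * psi.get((y[0] - x[0], y[1] - x[1]), 0.0)
                   for y in f)
            for x in f}

# The diagonal-line filter from (15):
psi = {(-1, 1): 1.0, (0, 0): 1.0, (1, -1): 1.0}
# A small "V"-shaped feature map:
f = {p: 1.0 for p in [(0, 0), (1, -1), (2, -2), (3, -1), (4, 0)]}
print(max(conv(f, psi).items(), key=lambda kv: kv[1]))  # ((1, -1), 3.0)
```

The maximal response appears where the filter lies entirely on the descending stroke of the "V", mirroring figure 1.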
As announced, we will from now on often omit the summation over channels
in (16) for simplicity and pretend to have feature maps and filters with just one channel, as this will not harm any arguments made in the following paragraphs.
#### Translation Equivariance
An important property that makes CNNs such powerful tools, for example in image recognition, is their equivariance to translations of the input at each layer. Roughly speaking, this means that a CNN is able to detect features in an image (or other data) regardless of their specific position, without requiring additional parameters. To be more precise, we define a translation of a feature map mathematically:
\[[T_{t}f](x)=f(t^{-1}x)=f(x-t);\;\;x,t\in\mathbb{Z}^{2} \tag{17}\]
A feature map \(f\) is hence transformed by a translation \(t\in\mathbb{Z}^{2}\) by looking up the value of \(f\) at the point \(t^{-1}x=x-t\) and moving it to the position \(x\), resulting in the \(t\)-transformed feature map \(T_{t}f\). Visually, this translates to a shift of the patterns of a given input by the respective horizontal and vertical coordinates of \(t\). For an illustration, see the left side of figure 2. The underlying concept of a group action of \(\mathbb{Z}^{2}\) on itself is also used by the convolution operation itself: In (16), filters are transformed by \(T_{x}\) when calculating the output of the convolution at position \(x\), representing a shift of said filter to that position.
Equivariance to translations in CNNs now means that applying a translation \(t\) to a feature map \(f\), followed by a convolution with a filter \(\psi\) yields the same
Figure 1: A feature map \(f\) representing the letter “V” is convolved with a filter detecting diagonal edges from top left to bottom right, yielding a new feature map \(f\star\psi\).
result as convolving \(f\) with \(\psi\) first and then applying the translation:
\[\begin{split}[[T_{t}f]\star\psi](x)&=\sum_{y\in\mathbb{ Z}^{2}}f(t^{-1}y)\psi(x^{-1}y)\\ &=\sum_{y\in\mathbb{Z}^{2}}f(y-t)\psi(y-x)\\ &=\sum_{y\in\mathbb{Z}^{2}}f(y)\psi(y+t-x)\\ &=\sum_{y\in\mathbb{Z}^{2}}f(y)\psi(y-(x-t))\\ &=\sum_{y\in\mathbb{Z}^{2}}f(y)\psi((t^{-1}x)^{-1}y)\\ &=[T_{t}[f\star\psi]](x).\end{split} \tag{18}\]
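Equation (18) can also be verified numerically. The sketch below uses SciPy's `correlate2d`, which implements the sliding-window sum of (16); since the map is zero outside a central window, circular shifts via `np.roll` act as genuine translations and the identity holds exactly.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(1)
f = np.pad(rng.normal(size=(8, 8)), 8)         # 24x24 map, zero near the border
psi = rng.normal(size=(3, 3))                  # a 3x3 filter
t = (2, -1)                                    # a translation

shift = lambda a: np.roll(a, t, axis=(0, 1))
lhs = correlate2d(shift(f), psi, mode="same")  # shift the input, then convolve
rhs = shift(correlate2d(f, psi, mode="same"))  # convolve, then shift the output
assert np.allclose(lhs, rhs)                   # equation (18)
```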
#### 4.2.4 Parameter Sharing
In comparison to conventional, fully connected neural networks (section 4.1), convolutional neural networks increase the efficiency of their parameters by utilizing the convolution operation to achieve _parameter sharing_ across layers.
In fully connected layers of a conventional neural network, each neuron is
Figure 2: A translation equivariant convolution. Here, the feature maps are transformed by \(t=(-2,1)\) and the diagram commutes.
connected to the neurons of the next layer by an individual connection consisting of a weight and a bias, which together make up the parameters of the network. Each of these connections is modified individually to achieve the best possible classification result. In image recognition for instance, one can view each pixel of the input image as a neuron, and each of these is connected via individual parameters to the neurons of the subsequent layer. As layers are stacked on top of each other, one can imagine that the number of connections, and thus of parameters will grow quite large.
For convolutional neural networks, consider each channel \(f_{l}^{k^{\prime}},\,k^{\prime}=1,..,K^{l}\) of a feature map \(f_{l}:\mathbb{Z}^{2}\rightarrow\mathbb{R}^{K^{l}}\) with \(K^{l}\) output channels in a given layer \(l\). Each of these channels is obtained by convolving a small filter patch \(\psi^{k^{\prime}}\) of size \(s\times s\times K^{l-1}\) with the stack of feature maps \(f_{l-1}^{k}\) of the previous layer. Here, \(k=1,..,K^{l-1}\) denotes the individual channels and \(K^{l-1}\) is the number of channels of the feature map of layer \(l-1\). All values \(f_{l}^{k^{\prime}}(x)\) of an output channel \(k^{\prime}\in\{1,..,K_{l}\}\) for each \(x\in\mathbb{Z}^{2}\) are thus obtained from the same filter, evaluated at different spatial coordinates, and hence _share_ the same \(s^{2}K^{l-1}\) parameters. For the whole layer \(l\), this then amounts to \(s^{2}K^{l-1}K^{l}\) parameters. For comparison, as each pixel would need its own parameters, a fully connected layer processing the same data would need around \(K^{l-1}K^{l}(Z^{l-1})^{2}(Z^{l})^{2}\) parameters, with \(Z^{l-1}\) and \(Z^{l}\) being the spatial extent of the non-zero domains of the feature maps \(f_{l-1}\) and \(f_{l}\), which is substantially larger than \(s^{2}\), with \(s\) usually being 7 or smaller.
#### 4.2.5 Nonlinearities and Pooling
**Nonlinearities**
The convolutional layers of a CNN are interspersed with non-linear layers. Such a layer operates by applying a non-linear function to its input; this is needed to enhance the expressive power of CNNs, since hardly any phenomenon that CNNs are meant to observe can be described by linear functions. Various _universal approximation theorems_, for instance in [19] and [20, 27], prove that (feedforward and convolutional) networks can approximate almost any function with arbitrary levels of precision, as long as non-polynomial nonlinearities are used.
One of the most prominent examples for a nonlinearity is the _rectified linear unit_ ReLU:
\[\text{ReLU}(x):=\max(x,0) \tag{19}\]
This function has empirically been proven to be the most efficient choice in many cases, as its computation is faster than most other non-linear functions and it is less prone to plateaus in the training process of the network.
Note that ReLU is applied _element-wise_, meaning that if a multi-channel feature map \(f:\mathbb{Z}^{2}\rightarrow\mathbb{R}^{K}\) is given, ReLU is evaluated at each output channel
\(f_{k}(x)\), \(k=1,..,K\), individually. There are other forms of _fiber-wise_ nonlinearities, which will be touched on in section 4.4.
#### Pooling
In CNNs, convolutional layers are often interspersed with so called _pooling layers_, with one of the first examples being [28]. In these layers, a pooling operation is performed on a given set of feature maps. The main goal of this operation is to cut out superfluous information, reducing data size by slightly reducing locational precision. This can be done because a CNN usually does not require the exact location of a feature to function properly. There are several different pooling operations, one of the most frequently used is _max pooling_:
\[P:\mathbb{Z}^{2}\to\mathbb{R};\qquad\quad Pf(x):=\max_{x^{\prime}\in U(x)}\!\!f( x^{\prime}), \tag{20}\]
where \(f\) is a feature map and \(U(x)\) is some neighbourhood of \(x\) in \(\mathbb{Z}^{2}\), in this case usually being a small square with center \(x\). The pooling operation thus works by finding the highest activation in said neighbourhood. Average pooling, which is another frequently used operation, would find the average of all activations in the pooling window.
To actually reduce the size of the feature map, pooling needs to be performed with a _stride_, meaning that \(P\) is not evaluated on every point in the base space, but only on points of certain distance to each other. For example, a stride of 2 would mean that in (20), \(P\) is only evaluated at points in \(2\mathbb{Z}^{2}=\{2x\,:\,x\in\mathbb{Z}^{2}\}\), while the neighbourhoods \(U(x)\) remain subsets of \(\mathbb{Z}^{2}\).
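A minimal NumPy sketch of max pooling (20) with a \(2\times 2\) window and stride 2 (all names illustrative):

```python
import numpy as np

def max_pool(f, size=2, stride=2):
    """Max pooling (20) with square windows U(x) and a stride,
    so P is only evaluated on the subgrid stride * Z^2."""
    H, W = f.shape
    out = np.empty((H // stride, W // stride))
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            out[i // stride, j // stride] = f[i:i + size, j:j + size].max()
    return out

f = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool(f))   # [[ 5.  7.] [13. 15.]]
```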
### Group Equivariant CNNs
#### 4.3.1 Motivation
We can interpret convolutional networks from section 4.2 as already being G-CNNs with \(G\) the group of translations, \(\mathbb{Z}^{2}\). Our convolution operation \([f\star\psi](x)\) basically returns the activation of \(\psi\), transformed under the action of the group element \(x\), i.e. a translation. For a general G-CNN, one would like to find a way to replace the underlying group of the network with some other group \(G\), containing more transformations than just translations, for example rotations and reflections, and still have equivariance of convolution with respect to every element of \(G\).
#### 4.3.2 Group Convolution, Feature Maps and Equivariance
We first establish the action of the group \(G\). The convolution operation from equation (16) is modified in two steps. First, we consider the initial convolutional layer of the network, i.e. where the image \(f:\mathbb{Z}^{2}\to\mathbb{R}\) is convolved with
the first set of filters:
\[f\star\psi:G\to\mathbb{R};\qquad\quad[f\star\psi](g)=\sum_{y\in\mathbb{Z}^{2}}f(y) \psi(g^{-1}y) \tag{21}\]
Two things should be noted: First, \(g\in G\) again transforms feature maps (or filters) via a group action on the domain of said maps:
\[[T_{g}f](x)=f(g^{-1}x);\;\;x\in\mathbb{Z}^{2},\,g\in G. \tag{22}\]
Specifically for \(G=p4\) or \(G=p4m\), \(g\in G\) transforms \(f(x)\) by moving its output from \(x\in\mathbb{Z}^{2}\) to the pixel at \(g^{-1}x\) under the actions defined in example 1.
Secondly, it should be noted that the convolution operation \(f\star\psi\) itself now yields a function on the group \(G\) instead of \(\mathbb{Z}^{2}\), thus requiring filters in the subsequent layer to also be functions on \(G\), yielding yet another slightly different convolution operation:
\[f\star\psi:G\to\mathbb{R};\qquad\quad[f\star\psi](g)=\sum_{h\in G}f(h)\psi(g^{ -1}h) \tag{23}\]
As one might notice, the transformation of feature maps and filters by \(G\) is again slightly different:
\[[T_{g}f](h)=f(g^{-1}h);\;\;h,g\in G. \tag{24}\]
Hence, the output of \(f\) at a point \(h\in G\) of the domain (which also is just \(G\)) is moved to the point \(g^{-1}h\), which is obtained by the canonical action of \(G\) on itself by its group operation.
While it is easy to imagine feature maps \(f:\mathbb{Z}^{2}\to\mathbb{R}\) simply as images sampled on a pixel grid, feature maps with base space \(G\) might not be as intuitive. For \(G=p4\), a feature map \(f:G\to\mathbb{R}\) can be imagined as a graph with four patches, of which each corresponds to one of the four rotations \(C_{4}=\{e,r,r^{2},r^{3}\}\). Each pixel now has a rotational coordinate, corresponding to the patch in which it appears, and two translational coordinates specifying its position in the respective patch. These coordinates also coincide with the semidirect product structure of \(p4\), enabling us to write any element \(g\in p4\) as a product \(g=tr,\;t\in\mathbb{Z}^{2},r\in C_{4}\).
The rotation \(r\) for instance now acts on this graph by moving each patch along the arrows indicated in figure 3, and by also rotating each of the patches themselves by \(90\) degrees, as is also shown in figure 3.
The convolution operation of the first layer (21) and the convolution operations of the subsequent layers (23) are now not only equivariant to translations, but to all transformations from the group \(G\), expanding translation equivariance to e.g. compositions of translations and rotations for \(p4\) and to compositions of translations with rotations and reflections for \(p4m\):
\[[[T_{u}f]\star\psi](g)=[T_{u}[f\star\psi]](g)\;\forall\,u,g\in G. \tag{25}\]
For higher layer convolutions, this is derived in complete analogy to (18), this time using the fact that \(uG=G\;\forall u\in G\) and thus substituting \(uh\in G\) for \(h\), again keeping the overall sum the same. Similarly, for first layer convolutions, we use the analogous fact that \(g\mathbb{Z}^{2}=\mathbb{Z}^{2}\;\forall g\in G\), as \(G\) acts transitively on \(\mathbb{Z}^{2}\), meaning that any point in the pixel grid can be reached from any other point by a transformation from \(G\).
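The first-layer \(p4\) convolution (21) and its equivariance (25) can be made concrete in a few lines of NumPy: the output is stored with an extra rotation axis, `np.rot90` plays the role of the action of \(r\) on (odd-sized, centred) arrays, and rotating the input permutes the rotation axis while rotating each plane. This is a sketch under those indexing conventions, not a full implementation.

```python
import numpy as np
from scipy.signal import correlate2d

def p4_lift_conv(f, psi):
    """First-layer p4 convolution (21): f is (H, W), psi is (s, s), s odd.
    Returns shape (4, H, W): one plane per rotation r^k."""
    return np.stack([correlate2d(f, np.rot90(psi, k), mode="same")
                     for k in range(4)])

rng = np.random.default_rng(2)
f, psi = rng.normal(size=(9, 9)), rng.normal(size=(3, 3))
out = p4_lift_conv(f, psi)

# Equivariance (25) for the generator r: rotating the input cyclically
# shifts the rotation axis and rotates each output plane.
lhs = p4_lift_conv(np.rot90(f), psi)
rhs = np.stack([np.rot90(out[(k - 1) % 4]) for k in range(4)])
assert np.allclose(lhs, rhs)
```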
#### 4.3.3 Group Pooling and Equivariant Nonlinearities
**Equivariance to element-wise nonlinearities**
Element-wise nonlinearities \(\nu:\mathbb{R}\to\mathbb{R}\) can be used in G-CNNs without restriction, as post-composing them with a feature map preserves the equivariance to pre-compositions with group transformations:
Let \(\nu:\mathbb{R}\to\mathbb{R}\) be an element-wise nonlinearity such as e.g. ReLU, and define the post-composition, i.e. the application of such a function to a feature map \(f\):
\[C_{\nu}(f(g))=[\nu\circ f](g)=\nu(f(g)).\]
Let \(T_{g}\) be the left transformation operator from (22) or (24). As \(T_{g}\) is realized by _pre_-composition with \(f\) and \(C_{\nu}\) by _post_-composition, these actions commute and we get
\[C_{\nu}T_{g}f=\nu\circ[f\circ g^{-1}]=[\nu\circ f]\circ g^{-1}=T_{g}C_{\nu}f. \tag{26}\]
**Group Pooling**
As in regular CNNs, convolutional layers of a G-CNN are often followed by a
Figure 3: A \(p4\) feature map (left) and its rotation by \(r\) (right).
pooling layer. As we no longer need to have the simple pixel grid \(\mathbb{Z}^{2}\) as base space, we need to define the notions of pooling and stride for the more general case of the base space being a group \(G\). To simplify the process, we split it into two steps, namely the pooling step, which is performed without stride, and the subsampling step which can then realize any notion of stride, if desired. In the first step, the max pooling operation for instance becomes
\[P:G\rightarrow\mathbb{R};\qquad\quad Pf(g):=\max_{x\in gU}f(x). \tag{27}\]
Here, \(gU:=\{gu\mid u\in U\}\) is some \(g\)-transformed neighbourhood \(U\) of the identity element in G. In \(\mathbb{Z}^{2}\), this would correspond to a square around the origin that is moved across the images by translations \(t\in\mathbb{Z}^{2}\). This operation is now equivariant to \(G\):
\[\begin{split} PT_{h}f(g)&=\max_{k\in gU}T_{h}f(k)\\ &=\max_{k\in gU}f(h^{-1}k)\\ &=\max_{hk\in gU}f(k)\\ &=\max_{k\in h^{-1}gU}f(k)\\ &=Pf(h^{-1}g)\\ &=T_{h}Pf(g)\end{split} \tag{28}\]
The arguments behind these equations are as follows: The first two equations are just the definitions of group pooling (27) and the left transformation of feature maps (24), respectively. In the third line, we substitute \(k\) for \(hk\) using the fact that taking the maximum of \(f(h^{-1}k)\) over \(k\in gU\) is the same as taking the maximum of \(f(k)\) with \(k\) such that \(hk\) is in \(gU\). In the fourth line we use that this is again the same as taking \(k\in h^{-1}gU\), which can be seen by just multiplying both sides with \(h^{-1}\). The last two lines are then obtained by just resubstituting the definitions of group pooling and \(T_{h}\).
Any stride could now be realized in the second step by subsampling the pooled feature map over a subgroup \(H\subseteq G\), i.e. evaluating just on points \(h\in H\) instead of all \(g\in G\). However, a feature map subsampled in this way would no longer be equivariant to all of \(G\), but only to \(H\). As we wish to maintain equivariance to the whole group \(G\), this form of stride is usually not used in practical \(G\)-CNN applications.
Instead, to preserve \(G\)-equivariance throughout the network, one can use _coset pooling_ by choosing the pooling neighbourhood \(U\) in the first step to be itself a subgroup \(H\) of \(G\). The resulting transformed pooling regions then are the non-overlapping _cosets_ of \(H\) in \(G\) which are either disjoint or equal for any \(gH\) and \(g^{\prime}H\). Because of this, any element of each of the distinct cosets can then
be chosen as a representative to subsample on. The resulting pooled feature map can then be interpreted as a map on the quotient space \(G/H\), which can then be acted upon by \(G\) similar to (24) by utilizing the general action of \(G\) on its quotient spaces from example 3, preserving equivariance as shown in (28):
\[T_{g^{\prime}}f(gH)=f((g^{\prime-1}g)H),\;g^{\prime}\in G,gH\in G/H \tag{29}\]
As an example of this, consider a \(p4\)-feature map that is pooled over the group of rotations, \(C_{4}=\{e,r,r^{2},r^{3}\}\). Visually (see figure 4), this equates to checking the four rotational outputs of each pixel coordinate \(x\in\mathbb{Z}^{2}\) and choosing the one with the highest activation as representative for the coset \(\{x,xr,xr^{2},xr^{3}\}\). The resulting feature map is then of domain \(p4/C_{4}=\mathbb{Z}^{2}\) and thus transforms in the same way as an input feature map in (22).
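In the array convention of the earlier lifting sketch (rotation axis first), coset pooling over \(H=C_{4}\) is just a maximum over that axis:

```python
import numpy as np

def coset_pool_c4(f_p4):
    """Coset pooling over C4 for a p4 feature map of shape (4, H, W):
    the maximum over the four rotation channels at each pixel yields
    a map on p4 / C4 = Z^2, cf. figure 4."""
    return f_p4.max(axis=0)

f_p4 = np.random.default_rng(3).normal(size=(4, 9, 9))
print(coset_pool_c4(f_p4).shape)     # (9, 9)
```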
#### 4.3.4 Implementation
\(G\)-Convolution can be implemented rather easily, at least for so-called _split_ groups. By exploiting this property, we can just use a standard convolution routine with an expanded filter bank, which will be described shortly.
Recall that a group being _split_ means that any element \(g\in G\) can be written as a product \(g=ts\). For \(p4\) and \(p4m\), \(t\in\mathbb{Z}^{2}\) would be a translation, and \(s\) would be a transformation from the stabilizer group \(H=C_{4}\) or \(H=D_{4}\) that leaves the origin invariant, i.e. a rotation or roto-reflection around the origin. This, together with \(T_{t}T_{s}=T_{ts}\) for the action of G allows us to rewrite the
Figure 4: Coset pooling by \(H=C_{4}\) of a \(p4\) feature map. A single coset \(((-2,2)C_{4})\) is highlighted.
definition of \(G\)-convolution as follows:
\[f\star\psi(g)=f\star\psi(ts)=\sum_{x\in X}f(x)T_{t}[T_{s}\psi(x)] \tag{30}\]
with \(X=\mathbb{Z}^{2}\) in layer one and \(X=G\) in subsequent layers, thus allowing us to precompute the transformed filters \(T_{s}\psi\) for all transformations of the stabilizer and then convolve them with the input using a fast planar convolution routine.
The set of untransformed filters at some layer \(l\) can be sorted in an array \(F\) of shape \(K^{l}\times K^{l-1}\times S^{l-1}\times n\times n\). Here, \(K^{l-1}\) denotes the number of input channels, \(K^{l}\) is the number of output channels i.e. the number of distinct filters, and \(n\times n\) denotes the spatial extent of the filters. Furthermore, \(S^{l-1}\) is the size of the stabilizer group, i.e. the number of transformations of \(G\) that fix the origin in the base space of the feature maps that \(F\) is to be convolved with.
Each transformation \(T_{s}\) now "acts" on F by permuting the scalar entries of each of the \(K^{l}\times K^{l-1}\) distinct "filter blocks" of shape \(S^{l-1}\times n\times n\). If \(S^{l}\) transformations are applied, this leads to an extended array of shape \(K^{l}\times S^{l}\times K^{l-1}\times S^{l-1}\times n\times n\).
The permutations themselves can be realized by implementing an invertible map \(g\) which yields the group element corresponding to an index from the array of shape \(S^{l-1}\times n\times n\), represented as matrices. For example, for \(p4\) this map would be the following, as was described in example 1:
\[g(s,u,v)=\begin{pmatrix}\cos(\frac{s\pi}{2})&-\sin(\frac{s\pi}{2})&u\\ \sin(\frac{s\pi}{2})&\cos(\frac{s\pi}{2})&v\\ 0&0&1\end{pmatrix} \tag{31}\]
We then set
\[F^{+}[i,s^{\prime},j,s,u,v]=F[i,j,\bar{s},\bar{u},\bar{v}] \tag{32}\]
with
\[(\bar{s},\bar{u},\bar{v})=g^{-1}(g(s^{\prime},0,0)^{-1}g(s,u,v)). \tag{33}\]
To use \(F^{+}\) in a planar convolution routine, we exploit the fact that the sum over \(X\) in (30) includes a sum over the stabilizer, again allowing us to rewrite the equation as
\[\sum_{x\in X}f(x)T_{t}[T_{s}\psi(x)]=\sum_{x\in\mathbb{Z}^{2}}\sum_{k=1}^{S^{ l-1}K^{l-1}}f_{k}(x)T_{t}[T_{s}\psi_{k}(x)]. \tag{34}\]
We can now reshape \(F^{+}\) into an array of shape \(S^{l}K^{l}\times S^{l-1}K^{l-1}\times n\times n\) which can then be applied to similarly reshaped feature maps in a planar convolution routine.
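A sketch of the filter expansion (32)-(33) for \(G=p4\) at a higher layer: the input rotation coordinate is cyclically shifted and each filter plane is rotated, with `np.rot90` standing in for the spatial part of \(T_{s}\) (the direction convention depends on the parametrization (31); all names are ours).

```python
import numpy as np

def expand_p4_filters(F):
    """Expand filters F of shape (K_out, K_in, 4, n, n) to the array F+
    of shape (K_out, 4, K_in, 4, n, n), following (32)-(33): for output
    rotation s', the input rotation axis is shifted by s' and each
    spatial plane is rotated by r^{s'}."""
    K_out, K_in, S, n, _ = F.shape            # S == 4 for p4
    F_plus = np.empty((K_out, S, K_in, S, n, n))
    for sp in range(S):                        # output rotation s'
        for s in range(S):                     # input rotation s
            F_plus[:, sp, :, s] = np.rot90(
                F[:, :, (s - sp) % S], sp, axes=(2, 3))
    return F_plus

F = np.random.default_rng(4).normal(size=(8, 4, 4, 3, 3))
F_plus = expand_p4_filters(F)
# Reshape for a fast planar convolution routine, as in (34):
print(F_plus.reshape(8 * 4, 4 * 4, 3, 3).shape)   # (32, 16, 3, 3)
```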
### Steerable CNNs
#### 4.4.1 Motivation
It was shown how G-CNNs achieve group equivariance by expanding the domain of the feature maps and filters. The magnitude of the expansion depends on the size of the stabilizer group of the origin. Thus, larger groups lead to larger expansions, which in turn lead to a proportionally increasing computing cost.
Steerable CNNs are a generalization of G-CNNs which achieve equivariance by defining filter banks as _intertwiners_, which are morphisms between _group representations_, through which feature spaces become _G-steerable_. For a classical, unrestricted filter bank to be an intertwiner, it needs to satisfy an _equivariance constraint_, which depends on the representations that it is meant to intertwine. Thus, while G-CNNs achieve equivariance by expanding arbitrary filter banks, steerable CNNs achieve it by restricting the space of available filters, thus decoupling the required computational power from the size of the group.
#### 4.4.2 Feature Spaces, Fibers and Steerability
We once again consider 2D signals \(f:\mathbb{Z}^{2}\to\mathbb{R}^{K}\) with \(K\) channels. These can be added, as well as multiplied by scalars and therefore form a vector space, often also called _feature space_, which we will denote by \(\mathcal{F}_{l}\). The layer index \(l\) will often be omitted for simplicity.
Given a group \(G\) acting on \(\mathbb{Z}^{2}\), we are able to transform signals \(f\in\mathcal{F}_{0}\):
\[[\pi_{0}(g)f](x)=f(g^{-1}x). \tag{35}\]
\(\pi_{0}(g):\mathcal{F}_{0}\to\mathcal{F}_{0}\) is then a linear map \(\forall g\in G\) that furthermore satisfies
\[\pi_{0}(gh)=\pi_{0}(g)\pi_{0}(h)\,\forall g,h\in G, \tag{36}\]
yielding a group homomorphism \(\pi_{0}:G\to GL(\mathcal{F}_{0})\). A vector space such as \(\mathcal{F}_{0}\), together with a map such as \(\pi_{0}\) that satisfies (36) fulfills the conditions of a group representation defined in definition 4. It shall be denoted by \((\mathcal{F}_{0},\pi_{0})\). Oftentimes we will just call \(\pi_{0}\) (or \(\pi_{l}\)) a representation, if the nature of \(\mathcal{F}_{0}\) (\(\mathcal{F}_{l}\)) is clear. As will be explained later, the representations \(\pi_{l}\) transforming higher layer feature spaces \(\mathcal{F}_{l}\) might look slightly different than in (35). Nevertheless, they will always satisfy (36).
While in most other deep learning publications some \(f\in\mathcal{F}\) is usually considered as a stack of \(K\) feature maps \(f_{k}:\mathbb{Z}^{2}\to\mathbb{R}\), \(k=1,..,K\), it is useful for
the theory of steerable CNNs to consider another decomposition: Instead of splitting the feature space "horizontally" into one-dimensional planar feature maps, it can be decomposed into _fibers_\(\mathcal{F}_{x}\), one of which being located at each "base point" \(x\in\mathbb{Z}^{2}\). Each fiber consists of the \(K\)-dimensional vector space representing all channels at the given position. \(f\in\mathcal{F}\) is therefore comprised of feature vectors \(f(x)\in\mathcal{F}_{x}\). See figure 5 for an illustration.
Let \(\mathcal{F},\mathcal{F}^{\prime}\) be two feature spaces, and let \(\pi:G\to GL(\mathcal{F})\) be a group homomorphism, such that \((\mathcal{F},\pi)\) is a group representation. Let furthermore \(\Phi\) describe a convolutional layer of a network, i.e.
\[\Phi:\mathcal{F}\rightarrow\mathcal{F}^{\prime};\qquad\quad\Phi(f)=f\star \Psi\;\forall f\in\mathcal{F} \tag{37}\]
for some filter bank \(\Psi:\mathbb{Z}^{2}\rightarrow\mathbb{R}^{K}\).
We now say \(\mathcal{F}^{\prime}\) is _steerable_ w.r.t. \(G\), if there exists another function \(\pi^{\prime}:G\to GL(\mathcal{F}^{\prime})\) transforming \(\mathcal{F}^{\prime}\) such that
\[\Phi\pi(g)f=\pi^{\prime}(g)\Phi f\;\forall g\in G, \tag{38}\]
meaning that transforming the input by \(\pi(g)\) yields the same result under \(\Phi\) as transforming the output of \(\Phi\) by \(\pi^{\prime}(g)\). The following equation shows that this condition implies that \(\pi^{\prime}:G\to GL(\mathcal{F}^{\prime})\) is also a group homomorphism, and thus \((\mathcal{F}^{\prime},\pi^{\prime})\) is also a group representation:
\[\pi^{\prime}(gh)\Phi f=\Phi\pi(gh)f=\Phi\pi(g)\pi(h)f=\pi^{\prime}(g)\Phi\pi(h )f=\pi^{\prime}(g)\pi^{\prime}(h)\Phi f. \tag{39}\]
Note that (39) only implies the desired property of \(\pi^{\prime}\) for the span of the image of \(\Phi\). However, this is enough for our purposes, as any features in \(\mathcal{F}^{\prime}\) that are
Figure 5: The decomposition of \(\mathcal{F}\) into stacks of feature maps (left), and into fibers (right). A single fiber is highlighted.
transformed by \(\pi^{\prime}\) result from applying a convolutional layer, i.e. lie in said image.
#### 4.4.3 The Equivariance Constraint on Filter Banks
Filter banks are arrays of shape \(K^{\prime}\times K\times s\times s\), where \(K^{\prime}\) denotes the number of output channels, i.e. the number of distinct filters that are to be convolved with the input. Each of those \(K^{\prime}\) filters then has the shape \(K\times s\times s\), where \(K\) is the number of input channels, and \(s\) denotes the spatial extent, i.e. the support/non-zero domain of the filter, usually being \(3\times 3\) or \(5\times 5\), though other sizes are possible.
Assuming inductively that such a filter bank \(\Psi\) is applied to a steerable feature space \((\mathcal{F},\pi)\), we need it to be an \(H\)-intertwiner between \(\pi\) and \(\rho\), i.e. to satisfy the _equivariance constraint_ with respect to \(H\), in order for the output of the convolution to be steerable:
\[\rho(h)\Psi=\Psi\pi(h)\,\forall h\in H. \tag{40}\]
This means that \(\Psi\) applied to a feature map that was transformed by \(\pi(h)\) has to yield the same result as applying a _fiber representation_\(\rho\) to the fiber that results from applying \(\Psi\) to the untransformed image, as illustrated in figure 6.
Two things should be noted at this point:
Firstly, \(\rho\) does not act on a feature space (or on a filter, equivalently) as \(\pi_{0}\) does in (35), but on individual fibers \(\mathcal{F}^{\prime}_{x}\cong\mathbb{R}^{K^{\prime}}\). Formally, a fiber representation is thus just a \(K^{\prime}\)-dimensional representation \((\mathbb{R}^{K^{\prime}},\rho)\) of the stabilizer group \(H\). In section 4.4.5, it will be explained how a \(G\)-representation acting on a
Figure 6: A \(H\)-equivariant filter bank \(\Psi\). Here, \(\rho\) represents the rotation \(r\) by cyclicly permuting the channels in each fiber. For \(\Psi\) to be equivariant, this diagram needs to commute for any element of the considered group. Note that in this figure, we have \(K=1\) and \(K^{\prime}=4\).
full feature space can be _induced_ from the \(H\)-representation \(\rho\).
Secondly, the equivariance constraint only needs to be fulfilled for all \(h\in H\), thus excluding translations. This is because translations would be able to move patterns out of the receptive field of single fibers (see figure 7). This is relevant, as for the equivariance calculations we interpret the action of \(\pi(h)\) on \(\mathcal{F}\) for \(h\in H\) as the equivalent action of \(\pi(h^{-1})\) on the filters. Full \(G\)-equivariance will then be achieved by the induced representation.
The equivariance constraint is linear in \(\Psi\), meaning that any linear combination \(\sum\lambda_{i}\Psi_{i}\) of intertwiners again satisfies (40) and thus is an intertwiner itself. We thus obtain a vector space of intertwiners, denoted \(\mathrm{Hom}_{H}(\pi,\rho)\).
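For the simplest case of figure 6 (one input channel, \(\rho\) the regular representation of \(C_{4}\) cyclically permuting \(K^{\prime}=4\) output channels), a filter bank built from the four rotations of one base filter satisfies (40); the following NumPy sketch checks this numerically for the generator \(r\).

```python
import numpy as np

rng = np.random.default_rng(5)
base = rng.normal(size=(3, 3))
# Filter bank: the four rotated copies of one base filter, shape (4, 3, 3).
Psi = np.stack([np.rot90(base, k) for k in range(4)])

def apply_bank(Psi, patch):
    """Psi viewed as a linear map from a 3x3 input patch to a fiber in R^4."""
    return np.array([(p * patch).sum() for p in Psi])

patch = rng.normal(size=(3, 3))
lhs = apply_bank(Psi, np.rot90(patch))      # Psi . pi(r), rotated input
rhs = np.roll(apply_bank(Psi, patch), 1)    # rho(r) . Psi, permuted fiber
assert np.allclose(lhs, rhs)                # the constraint (40) for h = r
```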
#### 4.4.4 The Space of Intertwiners
To better understand the construction of \(\mathrm{Hom}_{H}(\pi,\rho)\), we consider an example: Let \(\mathcal{F}_{0}\) be the space of \(3\times 3\) filters with one channel, i.e. functions \(\psi:\mathbb{Z}^{2}\to\mathbb{R}\) which return zero outside of the \(3\times 3\) square around the origin. As these functions, just like feature maps, can be added and multiplied by scalars, \(\mathcal{F}_{0}\) is a vector space of dimension \(9\) (or \(Ks^{2}\) for arbitrary spatial extents and numbers of channels). \(H\) can act on this space via \(\pi_{0}\) from (35) in the same way as it acts on feature maps:
\[\pi_{0}(h)\psi(x)=\psi(h^{-1}x) \tag{41}\]
and, as mentioned in the previous section, transforming a patch from a feature map that \(\psi\) is evaluated on by \(h\in H\) is the same as transforming \(\psi\) with \(h^{-1}\). Staying with the example, \((\mathcal{F}_{0},\pi_{0})\) is a \(9\)-dimensional group representation of \(H\). The canonical basis for this space is shown in figure 8.
Furthermore, an example for the transformation of a \(\mathcal{C}\)-linear combination of basis filters is depicted in figure 9.
Figure 7: Elements of the stabilizer group \(H\) (top) do not shift patterns out of the receptive fields of fibers, while translations (bottom) do, thus making full \(G\)-equivariance of filter banks impossible.
However, as explained in section 3.2, \(\mathcal{F}_{0}\) can be decomposed into irreducible representations, i.e. subspaces \(V\subseteq\mathcal{F}_{0}\), that are uniquely determined up to isomorphism (i.e. change of basis), and are \(H\)-invariant, meaning
\[\pi_{0}(h)V\subseteq V\ \forall h\in H. \tag{42}\]
For \(G=p4m\) the irreducible decomposition of \((\mathcal{F}_{0},\pi_{0})\) for the stabilizer group \(H=D_{4}\) is shown in table 1.
As depicted, the group \(H=D_{4}\) has five distinct irreps, which are characterized by the way in which \(\pi_{0}(h)\) acts on them for different \(h\in H\). For instance (see figure 11), \(\pi_{0}(h)\) acts trivially on A1-filters, as rotating or reflecting has no effect on them, while \(\pi_{0}(r)\) acts by multiplication by \([-1]\) on B1-filters.
To obtain the irreducible decomposition of \(\pi_{0}\), one can use a simplified _character formula_ ([29],[24]):
\[m_{\rho_{i}}(\pi_{0})=\frac{1}{|H|}\sum_{h\in H}\chi_{\pi_{0}}(h)\chi_{\rho_{i }}(h). \tag{43}\]
The characters, which are just the traces of the representing matrices evaluated at each \(h\in H\), can be obtained from table 1. For other groups, such
**Figure 8**: The canonical Basis of \((\mathcal{F}_{0},\pi_{0})\).
**Figure 9**: \(\pi_{0}(r)\) acting on a \(\mathcal{C}\)-linear combination.
character tables are also available in literature. To determine the traces of \(\pi_{0}(h)\) for \(h\in H\), one can look at the canonical basis \(\mathcal{C}\) of \(\mathcal{F}_{0}\) (figure 8) and how the individual vectors of it transform under \(\pi_{0}\). The trace \(\chi_{\pi_{0}}(h)\) of any representation matrix corresponding to \(\pi_{0}(h)\) is then just the number of basis vectors of \(\mathcal{C}\) that are left unchanged by its action. Thus, for instance, we receive \(\chi_{\pi_{0}}(e)=9\), as any representation evaluated at the neutral element always acts as the identity matrix, and \(\chi_{\pi_{0}}(m)=3\).
The latter is visualized in figure 10.
\(\pi_{0}\) turns out to have type \((3,0,1,1,2)\) (see definition 6), meaning that there are 3 copies of A1, i.e. 3 \(H\)-invariant one-dimensional subspaces with exactly the transformation properties of this irrep. Furthermore, there is one copy of each of B1 and B2, and two copies of the two-dimensional irrep E.
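These multiplicities can be checked with a few lines of Python. The class sizes and irrep characters below are the standard \(D_{4}\) character table (the assignment of the two reflection classes to B1/B2 is convention-dependent, but does not affect the result here, since both classes fix three pixels of the \(3\times 3\) grid).

```python
import numpy as np

# Conjugacy classes of D4: e, {r, r^3}, r^2, axis reflections, diagonal
# reflections. chi_pi0 counts the basis pixels of the 3x3 grid fixed by
# a representative of each class.
class_sizes = np.array([1, 2, 1, 2, 2])
chi_pi0 = np.array([9, 1, 1, 3, 3])
irreps = {                                  # rows of the D4 character table
    "A1": [1,  1,  1,  1,  1],
    "A2": [1,  1,  1, -1, -1],
    "B1": [1, -1,  1,  1, -1],
    "B2": [1, -1,  1, -1,  1],
    "E":  [2,  0, -2,  0,  0],
}
for name, chi in irreps.items():
    # The multiplicity formula (43), summed over conjugacy classes.
    m = (class_sizes * chi_pi0 * np.array(chi)).sum() / class_sizes.sum()
    print(name, int(m))   # prints the multiplicities 3, 0, 1, 1, 2
```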
Decomposing \((\mathcal{F}_{0},\pi_{0})\) also makes \(\pi_{0}(h)\) block diagonal for any \(h\in H\), after applying a change of basis matrix \(A\) that is constructed from the basis filters shown in column two of table 1. The same matrix \(A\) block diagonalizes \(\pi_{0}(h)\) for all \(h\in H\) simultaneously and the elements on the diagonal are the irrep matrices shown in the third to last columns of table 1, with their respective multiplicities. For instance, for \(h=r\), we have:
\[\pi_{0}(r)\,\psi_{1}=[1]\cdot\psi_{1},\qquad\qquad\pi_{0}(r)\,\psi_{4}=[-1]\cdot\psi_{4},\]
where \(\psi_{1}\) denotes an A1 basis filter and \(\psi_{4}\) a B1 basis filter (cf. figure 11), and in the new basis
**Figure 11**: A rotation acting trivially on an A1 basis element (left), and as [-1] on a B1 basis element (right).
Figure 10: The canonical basis \(\mathcal{C}\) of \((\mathcal{F}_{0},\pi_{0})\) (top) is transformed by the reflection \(\pi_{0}(m)\) (bottom). The invariant vectors are highlighted.
\[A\pi_{0}(r)A^{-1}=\text{block\_diag}([1],[1],[1],[-1],[-1],[0,-1;\ 1,0],[0,-1;\ 1,0]) \tag{44}\]
To get an intuition for \(\text{Hom}_{H}(\pi,\rho)\), we also need to look at the fiber representation \(\rho\), which itself also has a decomposition into irreps and thus a type. In fact, \(\rho\) can theoretically just be chosen by picking an arbitrary set of integers \((m_{1},..,m_{n})\) as multiplicities for the irreps (\(n=5\) for \(D_{4}\)), as well as some basis \(A\in\mathbb{R}^{K^{\prime}\times K^{\prime}}\). However, the choice of a basis is made obsolete later, when nonlinearities and _capsules_ are discussed. The number of output channels, i.e. the size of the fiber \(\mathcal{F}_{x}\) that \(\rho\) can act upon, is also determined by the irreps:
\[K^{\prime}=\sum_{i=1}^{n}m_{i}\dim\rho_{i}. \tag{45}\]
For the sake of this example, let \(\rho\) be of type \((2,1,1,1,1)\). Then, \(K^{\prime}=\sum_{i=1}^{5}m_{i}\dim\rho_{i}=7\), thus \(\rho\) acts on seven-dimensional fibers \(\mathcal{F}_{x}\in\mathbb{R}^{7}\), for instance as
\[\rho(r)=A^{-1}\text{block\_diag}([1],[1],[1],[-1],[-1],[0,-1;\ 1,0])A, \tag{46}\]
i.e. a block diagonal matrix (after change of basis) with the corresponding matrices from column "r" of table 1 on the diagonal. The first two channels now transform by A1, the third by A2, and so on.
As the number of output channels \(K^{\prime}\) equals the number of "distinct filters" that are convolved with the input, a filter bank \(\Psi\) intertwining \(\pi_{0}\) and \(\rho\) must consist of seven filter units of shape \(K\times s\times s\), i.e. \(3\times 3\) in this case. Schur's Lemma (theorem 1) now determines which basis elements of \(\mathcal{F}_{0}\) can be used to construct each of the individual filters. According to the lemma, the intertwiner space of two irreps \(\text{Hom}_{H}(\rho_{i},\rho_{j})\) is either zero, or one-dimensional iff \(\rho_{i}\) and \(\rho_{j}\) are isomorphic. It follows that each individual filter can only be constructed from the basis elements in \(\mathcal{F}_{0}\) that transform under the same irrep as said filter's output channel.
To finish the example, consider again the type of \(\rho\). There are two A1-channels in the fiber, i.e. two distinct filters that need to be A1-equivariant. These filters can be constructed independently from one another, using the three basis elements in \(\mathcal{F}_{0}\) that span the three A1-subspaces, yielding six parameters in total. There is one A2-channel, yet there are no basis elements in \(\mathcal{F}_{0}\) that transform according to A2, thus this filter is simply zero, adding no additional dimensions. While the B1- and B2-filters are obtained in the same way as for A1, each yielding one independent parameter, there also is a copy of the two-dimensional irrep E in \(\rho\). While this irrep therefore has two
channels, these do not transform independently from one another, but are multiplied by two-dimensional matrices, which are given in the last row of table 1. Thus, sets of two E-basis filters in \(\mathcal{F}_{0}\) (of which there also are two) have to share one parameter, yielding two parameters in total. An illustration of this construction is given by figure 12.
The process demonstrated in this example can be generalized to intertwiner spaces \(\mathrm{Hom}_{H}(\pi,\rho)\) for arbitrary representations of respective types \((m_{1},..,m_{n})\) and \((m^{\prime}_{1},..,m^{\prime}_{n})\), where \(\pi\) is the feature space representation that transforms feature maps and filters and \(\rho\) is the fiber representation for the next layer. As mentioned before, \(\pi\) might act on the feature space (and thus on filters) in a slightly different way than \(\pi_{0}\) (35) used in this example. The reason for this will be explained in the next section. We receive the following formula for the dimension of said intertwiner spaces, i.e. for the number of parameters required by any given layer:
\[\mathrm{dim}\ \mathrm{Hom}_{H}(\pi,\rho)=\sum_{i}m_{i}m^{\prime}_{i}. \tag{47}\]
Figure 12: The construction of an equivariant filter bank. The basis elements connected by the dotted lines must share a parameter to preserve equivariance. There are seven output channels.
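As a sanity check of (47) against the example: with \(\pi_{0}\) of type \((3,0,1,1,2)\) and \(\rho\) of type \((2,1,1,1,1)\), the count \(6+0+1+1+2=10\) from the construction above agrees with the formula. A one-line helper (the naming is our own):

```python
def dim_hom(type_pi, type_rho):
    # number of free parameters of an equivariant filter bank, eq. (47)
    return sum(m * mp for m, mp in zip(type_pi, type_rho))

print(dim_hom((3, 0, 1, 1, 2), (2, 1, 1, 1, 1)))  # -> 10
```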
#### Generating Steerable Feature Spaces
What is left to be shown is how \(H\)-steerability of individual fibers \(\mathcal{F}^{\prime}_{x}\) leads to steerability of the whole feature space \(\mathcal{F}^{\prime}\) with respect to all of \(G\). To derive this, we use the induced representation of \(H\) on \(G\): by convolving a transformed signal \(\pi(g)f,\,f\in\mathcal{F}\), with an equivariant filter bank \(\Psi\in\mathrm{Hom}_{H}(\pi,\rho)\), we obtain the transformation law that is imposed on the output space \(\mathcal{F}^{\prime}\), which is exactly the formula for the induced representation. While the following paragraphs again give the computations for the explicit groups \(p4\) and \(p4m\), the arguments used can be made in complete analogy for any other semidirect product \(G=N\rtimes H\).
Points \(x\in\mathbb{Z}^{2}\) can be interpreted either as a point (e.g. when looked at as a pixel in the base space of feature maps), or as a translation. To make this interpretation explicit, we denote \(x\) as \(\bar{x}\) when we interpret it as a translation.
Translation equivariant convolution of \(\Psi\) with \(f\) can then be defined as
\[[f\star\Psi](x)=\Psi[\pi(\bar{x})^{-1}f], \tag{48}\]
for translations \(\bar{x}\in\mathbb{Z}^{2}\), with \(\pi\) acting on \(\mathcal{F}\) as described in (35).
We now utilize the semidirect product structure of \(G\) (definition 2), which enables us to write any element \(g\in G\) as a product \(g=th\) where \(t\in\mathbb{Z}^{2}\) is a translation and \(h\in H\) is an element from the stabilizer group, i.e. some rotation from \(C_{4}\) or rotation-flip from \(D_{4}\) for \(p4\) or \(p4m\), respectively. It is useful to look at an explicit matrix representation of \(G\). We can write any element of \(G\) as
\[g=th=\begin{bmatrix}I&T\\ 0&1\end{bmatrix}\begin{bmatrix}R&0\\ 0&1\end{bmatrix}=\begin{bmatrix}R&T\\ 0&1\end{bmatrix}. \tag{49}\]
Here, \(R\) is a matrix representation of \(h\in H\), for instance a \(2\times 2\) rotation or roto-reflection matrix for \(p4\) or \(p4m\), and T is the translation vector representing \(t\). With this form of representation, a translation \(\bar{x}\in\mathbb{Z}^{2}\) then amounts to
\[\bar{x}=\begin{bmatrix}I&x\\ 0&1\end{bmatrix}. \tag{50}\]
It is furthermore useful to make the difference between the action of \(G\) on itself by group operation and the action of \(G\) on \(\mathbb{Z}^{2}\) visible, thus we will use \(gh\) for \(g,h\in G\) for the former and \(g\cdot x\) for \(g\in G,x\in\mathbb{Z}^{2}\) for the latter.
Applying \(th\) to \(f\) via \(\pi\) before convolving with \(\Psi\) then yields:
\[\begin{split}[\Psi\star[\pi(th)f]](x)&=\Psi\pi(\bar{x}^{-1})\pi(th)f\\ &=\Psi\pi(\bar{x}^{-1}th)f\\ &=\Psi\pi(hh^{-1}\bar{x}^{-1}th)f\\ &=\Psi\pi(h)\pi(h^{-1}\bar{x}^{-1}th)f\\ &=\rho(h)\Psi\pi(h^{-1}\bar{x}^{-1}th)f\\ &=\rho(h)\Psi\pi(\overline{((th)^{-1}\cdot x)}^{-1})f\\ &=\rho(h)[\Psi\star f]((th)^{-1}\cdot x).\end{split} \tag{51}\]
The justifications for these equations are as follows: In the first line, we use the definition of convolution (48). In the second to fourth line, we use \(hh^{-1}=e\), thus multiplying by one, and the fact that \(\pi\) is a group homomorphism, i.e. \(\pi(gg^{\prime})=\pi(g)\pi(g^{\prime})\)\(\forall g,g^{\prime}\in G\). The equivariance constraint (40) is then used in the fifth line. In line six we use the fact that \(h^{-1}\bar{x}^{-1}th=\overline{((th)^{-1}\cdot x)}^{-1}\), which can be verified via the representation matrices presented in (49). Finally, the definition of the convolution operation is applied backwards.
We can now define \(\pi^{\prime}\) by
\[[\pi^{\prime}(th)f](x):=\rho(h)[f((th)^{-1}x)], \tag{52}\]
yielding \(\Psi\star\pi(g)f=\pi^{\prime}(g)\Psi\star f\), thus giving us the desired steerability of \(\mathcal{F}^{\prime}\) with respect to all of \(G\).
Comparing \(\pi\) (35) and \(\pi^{\prime}\) (52), one notices that they only differ in the factor \(\rho(h)\) which permutes the individual fibers after they have been moved from \(x\) to \((th)^{-1}x\). This kind of action characterizes \(\pi^{\prime}\) as the aforementioned _induced representation_ of \(\rho\) (of \(H\)) on \(G\), often also denoted as \(\mathrm{Ind}_{H}^{G}(\rho)\). While there exists extensive knowledge about this concept, for instance in [30] and [31], for our purpose it suffices to know that induced representations transport fibers to new locations and then transform them via \(\rho\), the representation from which \(\pi^{\prime}\) is induced.
Furthermore, any representation of any subgroup of \(G\) can induce a representation on \(G\), allowing us to freely choose fiber representations when constructing filter banks. The representation \(\pi_{0}\) from (35) without any factor permuting the fibers can also be seen as being induced from the trivial representation, which acts as the identity matrix for all \(h\in H\).
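For intuition, the induced action (52) can be implemented in a few lines of numpy for \(p4\) acting on a feature map carrying one regular \(C_{4}\)-capsule. Note that this sketch assumes periodic boundary conditions (translations act cyclically), a simplification not made in the text, and the function name is our own:

```python
import numpy as np

def induced_p4(f, k, t):
    """Apply [Ind_{C4}^{p4} rho_reg]((t, r^k)) to a feature map f of shape
    (4, H, W): one channel per element of C4, periodic boundaries."""
    g = np.rot90(f, k=k, axes=(1, 2))             # fibers move under the rotation
    g = np.roll(g, shift=tuple(t), axis=(1, 2))   # ...and then under the translation
    return np.roll(g, shift=k, axis=0)            # rho_reg(r^k) permutes the channels

f = np.random.rand(4, 5, 5)
# composing the action of r with itself equals acting by r^2:
assert np.allclose(induced_p4(induced_p4(f, 1, (0, 0)), 1, (0, 0)),
                   induced_p4(f, 2, (0, 0)))
```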
It should also be noted that the content of section 4.4.4, which is using the decomposition of the trivially induced representation \(\pi_{0}\), can also be applied
to induced representations \(\pi^{\prime}=\operatorname{Ind}_{H}^{G}(\rho)\). When investigating the action of such a representation \(\pi^{\prime}\) on basis filters to determine its trace, one just has to include the factor of \(\rho\), i.e. the fiber-permutation that is applied additionally.
Further layers can now be added to the network by choosing a new representation \(\rho^{\prime}\) of \(H\) to act on the fibers of the next output space and compute \(\operatorname{Hom}_{H}(\pi^{\prime},\rho^{\prime})\) for \(\pi^{\prime}\) restricted to \(H\).
#### 4.4.6 Commutation with Nonlinearities via Capsules
We have now seen how the convolutional layers of steerable CNNs are built and how their equivariance to group transformations is achieved. In particular, it was shown that only the basis-independent type of the input and output representations is relevant for the parameter count of a filter bank, i.e.
\[\dim\operatorname{Hom}_{H}(\pi,\rho)=\dim\operatorname{Hom}_{H}(\pi,A^{-1} \rho A)\;\forall A\in GL(\mathbb{R}^{\dim\;\rho}).\]
However, when considering nonlinearities, the choice of the basis becomes relevant, which we shall demonstrate with a small example:
Consider the classical ReLU function and let \(g=(12)\in S_{2}\):
\[\rho(g)=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\text{ and }\rho^{\prime}(g)=A^{-1}\rho(g)A=\begin{pmatrix}-1&0\\ 3&1\end{pmatrix}\text{ for }A=\begin{pmatrix}1&1\\ 2&1\end{pmatrix}\]
a change of basis. \(\rho(g)\), and thus \(\rho^{\prime}(g)\), correspond to a representation of \(S_{2}\) (both matrices square to the identity). But, e.g. for \(x=(-2,5)^{T}\) and \(x^{\prime}=A^{-1}x=(7,-9)^{T}\) we have:
\[\operatorname{ReLU}(\rho(g)(x)) =\operatorname{ReLU}(\begin{pmatrix}5\\ -2\end{pmatrix})=\begin{pmatrix}5\\ 0\end{pmatrix}\] \[\neq A\operatorname{ReLU}(\rho^{\prime}(g)(x^{\prime}))=A \operatorname{ReLU}(\begin{pmatrix}-7\\ 12\end{pmatrix})=A\begin{pmatrix}0\\ 12\end{pmatrix}=\begin{pmatrix}12\\ 12\end{pmatrix}.\]
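The inequality above is easily verified numerically (a minimal sketch with the same matrices; the inverse of \(A\) is exact since \(\det A=-1\)):

```python
import numpy as np

A = np.array([[1, 1], [2, 1]])
A_inv = np.array([[-1, 1], [2, -1]])   # exact inverse of A
rho = np.array([[0, 1], [1, 0]])       # rho((12)), an involution
rho_p = A_inv @ rho @ A                # [[-1, 0], [3, 1]]

relu = lambda v: np.maximum(v, 0)
x = np.array([-2, 5])
x_p = A_inv @ x                        # (7, -9)

# the activations do not correspond under the change of basis:
print(relu(rho @ x))                   # [5 0]
print(A @ relu(rho_p @ x_p))           # [12 12] != [5 0]
```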
Thus, different bases can lead to different results under nonlinear activation, even though the underlying type of the representation is the same. To efficiently tackle this problem, the concept of _capsules_ is introduced:
A \(\rho\)-_capsule_ \((\mathbb{R}^{K},\rho,\mathcal{B})\) is defined as a representation of the stabilizer group \((\mathbb{R}^{K},\rho)\) with a fixed basis \(\mathcal{B}\) of \(\mathbb{R}^{K}\). By doing this, the representation matrices \(\rho(h)\) also become fixed, which allows us to _realize_ them as matrices; this will help in obtaining equivariance of nonlinearities, as will be explained in the following. In practice, capsules are typically held relatively low-dimensional, rarely exceeding the size of the stabilizer group, \(|H|\). In the following, a capsule will often just be denoted by \(\rho\), omitting the specified basis \(\mathcal{B}\) and simply assuming that it is chosen in a way that yields a certain structure for the
representation matrices.
Just like a convolutional layer, a nonlinearity layer must be equivariant to the group \(G\), which is realized by guaranteeing fiber-wise equivariance first and then "inducing" feature space equivariance afterwards.
We define an element-wise nonlinearity \(\nu:\mathbb{R}\to\mathbb{R}\), or a fiber-wise nonlinearity \(\nu:\mathbb{R}^{K}\to\mathbb{R}^{K^{\prime}}\), to be _admissible_ for an input representation \(\rho\), iff there exists an output capsule \(\rho^{\prime}\), such that
\[\nu\rho(h)=\rho^{\prime}(h)\nu\,\forall h\in H, \tag{53}\]
i.e. transforming input fibers by \(\rho\) before applying \(\nu\) is the same as transforming the nonlinearity's output by \(\rho^{\prime}\). We call \(\rho^{\prime}\) the _post-activation capsule_ corresponding to \(\rho\) and denote it by \(\mathrm{Act}_{\nu}\rho\).
Given an admissible nonlinearity \(\nu\) and corresponding pre- and post activation capsules \(\rho\) and \(\rho^{\prime}=\mathrm{Act}_{\nu}\rho\) such that (53) holds, feature space equivariance to the respective induced representation is derived as follows:
\[\begin{split}\nu([\mathrm{Ind}_{H}^{G}\rho](th)f)(x)&=\nu(\rho(h)f((th)^{-1}x))\\ &=\rho^{\prime}(h)\nu(f((th)^{-1}x))\\ &=\rho^{\prime}(h)\nu(f)((th)^{-1}x)\\ &=[\mathrm{Ind}_{H}^{G}\rho^{\prime}](th)\nu(f)(x)\end{split} \tag{54}\]
Instead of building a layer by choosing multiplicities of irreps for a fiber representation, one now chooses multiplicities \(m_{i}\geq 0,i=1,..,n\) corresponding to a set of predefined capsules \(\rho_{1},..,\rho_{n}\). The chosen capsules are then concatenated into a fiber of dimension \(\sum m_{i}\dim\rho_{i}\), which is then acted upon by \(\rho=\bigoplus m_{i}\rho_{i}\), i.e. the block diagonal representation with \(m_{i}\) copies of \(\rho_{i}\) on the diagonal. Due to this block diagonal structure, capsules are _disentangled_, i.e. channels of one capsule do not mix with channels belonging to other capsules during transformations.
When it comes to finding explicit pairs of capsules and corresponding admissible nonlinearities, one needs not only to look at the irrep type of a capsule's underlying representation, but also at the individual form of the representation matrices \(\rho(g)\) which, as stated, differs not only depending on the choice of irreps, but also on the choice of basis for the individual representation spaces. Below, we will name the most prominent choices of capsules. For a more extensive list, the reader is referred to [8].
First, any capsule \(\rho\) which can be realized by permutation matrices will be compatible with any element-wise nonlinearity. _Element-wise_ means that instead of acting on a whole feature vector \(f(x)=(f_{1}(x),..,f_{K}(x))\), the
nonlinearity transforms each element \(f_{i}(x),\,i=1,..,K\), individually. The most prominent example for such a nonlinearity is the ReLU function (see (19)). The most frequently used capsules that are realizable by permutation matrices are _regular_ capsules, i.e. feature vectors that transform according to the regular representation from example 3. While regular capsules have been shown to work very well in practice ([3],[4]), they have the drawback of leading to relatively high dimensional feature spaces, as each capsule has to span \(|H|\) channels.
A possible fix for the high dimensionality of regular capsules is given by _quotient_ capsules, in which the feature vectors transform according to the quotient representation (example 3) of \(H\) by some (normal) subgroup \(K\). Quotient capsules have the advantage of requiring \(\frac{|H|}{|K|}\) instead of \(|H|\) channels. If \(K\) is normal, the features represented by such capsules become not just equivariant, but invariant to the symmetries of \(K\):
\[\rho_{\text{quot}}(k)e_{hK}=e_{khK}=e_{kKh}=e_{Kh}=e_{hK}. \tag{55}\]
Hence, performance can be improved by reducing parameter cost and feature size through the use of \(H/K\) quotient capsules if the \(K\)-invariant patterns are prevalent in the input data. On the other hand, quotient capsules can also severely harm performance if the patterns are not found in the data or irrelevant to the learning task at hand. It should furthermore be noted, that quotient representations can also be obtained from non-normal subgroups. In that case, the features need not be necessarily invariant under \(K\), but can perform differently. An example for this, as well as some visual intuition for quotient features in general, is given in appendix C of [8].
A second class of representation matrices are _monomial_ matrices, which have the same structure as permutation matrices, but also allow for entries of minus one instead of just one. Given a representation that is fully realized by monomial matrices, all _concatenated_ nonlinearities will be admissible. A concatenated nonlinearity is defined as evaluating an element-wise nonlinearity on an element \(x\) and also on \(-x\). For instance, the concatenated ReLU function is defined as
\[\text{CReLU}(x):=(\text{ReLU}(x),\text{ReLU}(-x)). \tag{56}\]
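A minimal implementation of (56), together with a toy check that applying a monomial (signed permutation) matrix before CReLU merely permutes the outputs:

```python
import numpy as np

def crelu(x):
    # concatenated ReLU, eq. (56): doubles the number of channels
    return np.concatenate([np.maximum(x, 0), np.maximum(-x, 0)])

M = np.array([[0, -1], [1, 0]])   # a monomial (signed permutation) matrix
x = np.random.randn(2)

# crelu(M x) is a permutation of the entries of crelu(x):
assert sorted(crelu(M @ x)) == sorted(crelu(x))
```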
Trivially, regular and quotient capsules will be compatible with concatenated nonlinearities. Also, many irreps can be realized (through a suitable basis) by monomial matrices, thus allowing for low dimensional irrep capsules.
The third class of examples is given by _orthogonal_ matrices characterized by the fact that they preserve the norm (length) \(|x|\) of vectors that they act upon. Capsules of this kind will be compatible with norm nonlinearities,
which only act on the norm of any given feature vector (hence do not act element-wise), but not on its orientation. A norm nonlinearity can generally be written in the following form:
\[\nu_{\text{norm}}:\mathbb{R}^{K}\rightarrow\mathbb{R}^{K},\qquad \quad f(x)\mapsto\eta(|f(x)|)\frac{f(x)}{|f(x)|} \tag{57}\]
Here, \(\eta:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) describes a non-linear function that is to act on the feature vectors norms. A prominent example of this would be _Norm-ReLUs_, with
\[\eta(|f(x)|)=\text{ReLU}(|f(x)|-b), \tag{58}\]
with \(b\) being a learned bias. Amongst others, Norm-ReLUs were used in [32]. The advantage of these nonlinearities is their broad compatibility, as any representation of any finite group can be made orthogonal, by choosing an orthogonal change of basis. Further subclasses of norm nonlinearities, such as _gated_ and _squashed_ nonlinearities are mentioned in [33] and [5], respectively.
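A sketch of the norm nonlinearity (57) with the Norm-ReLU choice (58), including a numerical equivariance check against an orthogonal matrix (the bias value is an arbitrary placeholder of ours):

```python
import numpy as np

def norm_relu(f, b=0.5):
    # norm nonlinearity, eqs. (57)-(58): acts on the norm, keeps orientation
    n = np.linalg.norm(f)
    return np.maximum(n - b, 0) / n * f if n > 0 else f

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthogonal, norm-preserving
f = np.array([1.0, 2.0])
assert np.allclose(norm_relu(R @ f), R @ norm_relu(f))
```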
#### A Remark on Pooling and Group Pooling in Steerable CNNs
As far as the literature on steerable CNNs ([4],[5],[8]) is concerned, pooling operations are, if at all, applied fiber- or capsule-wise. This means that, instead of conventional pooling with stride, i.e. subsampling the feature map on a subgroup of \(\mathbb{Z}^{2}\) and thereby reducing equivariance of the network to that subgroup, each copy of a capsule located at each point \(x\in\mathbb{Z}^{2}\) is searched for the maximal activation (in the case of max pooling), which is then set as the output of the pooling operation. Thus, pooling operations of this nature in steerable CNNs can be interpreted as nonlinearities, which were discussed in the previous section, and hence also need to satisfy an equivariance constraint that is analogous to (53).
Let \(P:\mathbb{R}^{K}\rightarrow\mathbb{R}\) be a pooling operation such as max pooling or average pooling, performed on a \(K\)-dimensional capsule \(\rho\). For \(G\)-equivariance to be maintained, we need the pooling operation to be \(H\)-equivariant, i.e.
\[P\rho(h)=\rho^{\prime}(h)P\,\forall h\in H. \tag{59}\]
The choice of compatible capsules \(\rho,\rho^{\prime}\) is quite limited here, as they must not alter the numeric value of the feature vectors on which the pooling operation is performed, or the result of the pooling operation. Therefore, any input capsule \(\rho\) that is to commute with \(P\) has to be realized by permutation matrices, as taking the maximum (or average) value of a vector with permuted entries still yields the same result. Furthermore, \(\rho^{\prime}\) needs to be the trivial representation, i.e. \(\rho^{\prime}(h)=[1]\,\forall h\in H\), as otherwise equivariance could again be broken, for
instance when the result is multiplied by [-1], as e.g. by the irrep A2 in table 1. The most common choices for \(\rho\) therefore are regular capsules, quotient capsules, as well as irrep capsules that are realized by permutation matrices.
As an exception to the choice of \(\rho^{\prime}\) which was, to our knowledge, not yet discussed in the literature, one can implement an equivalent of coset pooling from 4.3.3. This will especially make sense taking into account that regular steerable CNNs are shown to be equivalent to \(G\)-CNNs in section 4.4.9. Suppose a subgroup \(K\subseteq H\) is given, yielding a quotient space \(H/K\). Setting the input capsule as the regular representation of \(H\) and the output capsule as the quotient representation of \(H/K\), we can define quotient pooling as follows:
\[P_{\text{quot}}:\mathbb{R}^{|H|}\to\mathbb{R}^{|H/K|},\;P_{\text{quot}}(f(x))= \bigoplus_{hK\in H/K}\max_{h^{\prime}\in hK}f(x_{h^{\prime}}). \tag{60}\]
This means that for each of the outputs \(\max_{h^{\prime}\in hK}f(x_{h^{\prime}})\), \(P_{\text{quot}}\) only looks at the coordinates \(f(x_{h^{\prime}})\) of \(f(x)\) that correspond to the elements of the respective coset that is being pooled over. This process is repeated \(|H/K|\) times, i.e. for each coset. The individual results are then combined into a \(|H/K|\)-dimensional vector that then transforms according to the quotient representation of \(H/K\).
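In code, quotient pooling (60) is a segment-wise maximum over the channels of a regular capsule, grouped by cosets. The coset index lists below encode a hypothetical ordering of the elements of \(H=D_{4}\) with \(K=C_{4}\):

```python
import numpy as np

def quotient_pool(fiber, cosets):
    """fiber: length-|H| vector of one regular capsule at a point x;
    cosets: list of index lists, one per coset hK in H/K (eq. (60))."""
    return np.array([max(fiber[i] for i in idx) for idx in cosets])

# e.g. H = D4 (indices 0..7), K = C4 = {e, r, r^2, r^3} -> two cosets:
cosets_D4_C4 = [[0, 1, 2, 3], [4, 5, 6, 7]]
print(quotient_pool(np.arange(8.0), cosets_D4_C4))  # -> [3. 7.]
```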
#### Reduction of Parameters and Implementation
By the use of steerable filter banks, the parameters of steerable CNNs are utilized several times more efficiently than the parameters of regular convolutional networks. As was shown in section 4.4.4, an equivariant filter bank \(\Psi\) intertwining representations \(\pi\) and \(\rho\) has a parameter cost of dim \(\text{Hom}_{H}(\pi,\rho)=\sum m_{i}m_{i}^{\prime}\), depending on the irrep multiplicities \(m_{i}\) and \(m_{i}^{\prime}\) of \(\pi\) and \(\rho\), respectively. A non-equivariant filter bank of the same size, i.e. same filter width \(s\times s\) and same number of input/output channels \(K/K^{\prime}\) (which is also determined by the dimensions of \(\pi\) and \(\rho\)), would instead have a parameter cost of dim \(\pi\cdot\text{dim}\ \rho=s^{2}\cdot K\cdot K^{\prime}\).
Thus, the parameter efficiency of a steerable filter bank in comparison to a conventional filter bank is expressed as follows:
\[\mu=\frac{\text{dim}\ \pi\cdot\text{dim}\ \rho}{\text{dim}\ \text{Hom}_{H}(\pi, \rho)}. \tag{61}\]
For effective network architectures, this value usually lies around \(|H|\), i.e. the size of the stabilizer group. For instance, we have \(\mu=8\) for \(H=D_{4}\), meaning that a \(p4m\)-steerable CNN's convolutional layer uses its parameters eight times more efficiently than a layer of the same size in a conventional CNN. Efficiency is further increased by the fact that the representations \(\pi\) and \(\rho\) in each convolutional layer have a block-diagonal structure, i.e. consisting of
direct sums of distinct, disentangled and lower-dimensional capsules. Because of this, an intertwiner \(\Psi\) between said representations will also have a certain block structure:
For this, let \(\pi=\pi_{1}\oplus...\oplus\pi_{p}\) and \(\rho=\rho_{1}\oplus...\oplus\rho_{r}\). An intertwiner then is a matrix \(\Psi\) of shape \(K^{\prime}\times Ks^{2}\), with \(K^{\prime}=\sum_{i=1}^{r}\dim\,\rho_{i}\) and \(Ks^{2}=\sum_{i=1}^{p}\dim\,\pi_{i}\), and has the following block structure:
\[\begin{bmatrix}h_{11}\in\operatorname{Hom}_{H}(\pi_{1},\rho_{1})&\cdots&h_{p 1}\in\operatorname{Hom}_{H}(\pi_{p},\rho_{1})\\ \vdots&\ddots&\vdots\\ h_{1r}\in\operatorname{Hom}_{H}(\pi_{1},\rho_{r})&\cdots&h_{pr}\in \operatorname{Hom}_{H}(\pi_{p},\rho_{r})\end{bmatrix} \tag{62}\]
Here, each subblock \(h_{ij}\in\operatorname{Hom}_{H}(\pi_{i},\rho_{j})\) is itself an intertwiner between the capsules \(\pi_{i}\) and \(\rho_{j}\). In many implementations, the same capsule is used several, or even all the times, allowing to compute many or all of the \(h_{ij}\) using the same intertwiner basis, thus drastically reducing computational cost. Ordering the individual capsules such that equivalent capsules are adjacent to each other then leads to superblocks \(H_{ij}\) of shape \(m_{i}\dim\pi_{i}\times n_{j}\dim\rho_{j}\), with \(m_{i},n_{j}\) being the respective multiplicities of the two capsules of \(H_{ij}\). The superblock itself is then filled with the subblocks \(h_{ij}\) of shape \(\dim\,\pi_{i}\times\,\dim\,\rho_{j}\).
In practice, when a list of capsules \(\rho_{i}\) and corresponding lists of post-activation capsules \(\operatorname{Act}_{\nu}\rho_{i}\) (depending on the nonlinearity) are given, the induced representations \(\pi_{i}=\operatorname{Ind}_{H}^{G}\operatorname{Act}_{\nu}\rho_{i}\), as well as the bases for the intertwiner spaces \(\operatorname{Hom}_{H}(\pi_{i},\rho_{j})\) for all pairs \(i,j\) are computed offline. The bases are stored as matrices \(\psi_{ij}\) of shape \(\dim\,\pi_{i}\cdot\dim\,\rho_{j}\times\,\dim\,\operatorname{Hom}_{H}(\pi_{i}, \rho_{j})\). After that, a list of input multiplicities \(m_{i}\) and output multiplicities \(n_{j}\) provided by the user is put into a parameter matrix \(\Phi_{i,j}\) of shape \(\dim\,\operatorname{Hom}_{H}(\pi_{i},\rho_{j})\times m_{i}n_{j}\) and the aforementioned superblocks \(H_{ij}\) are obtained by matrix multiplication of \(\psi_{ij}\Phi_{i,j}\). After all superblocks are obtained, \(\Psi\) is reshaped to \(K^{\prime}\times K\times s\times s\) and can then be convolved with the input.
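Schematically, the online part of this procedure is a single matrix multiplication followed by a reshape; the sketch below (the names and the exact reshape order are our own assumptions) illustrates the expansion of one superblock:

```python
import numpy as np

def expand_filter_bank(psi_ij, phi_ij, n_out, n_in, s):
    """psi_ij: stored intertwiner basis of shape (dim pi_i * dim rho_j, d);
    phi_ij: learned parameters of shape (d, m_i * n_j).
    Returns the superblock H_ij reshaped into convolution-kernel layout;
    the exact reshape order depends on the chosen channel ordering."""
    H_ij = psi_ij @ phi_ij              # linear combination of basis intertwiners
    return H_ij.reshape(n_out, n_in, s, s)
```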
#### On the Equivalency of G-CNNs and Regular Steerable CNNs
It was mentioned before that steerable CNNs are a direct generalization of \(G\)-CNNs from section 4.3. To be precise, this means that \(G\)-CNNs are equivalent to steerable CNNs with regular capsules. To see this, first consider the feature maps of the respective architectures:
\[f_{G}:G\to\mathbb{R}^{K_{G}},\qquad\quad f_{\text{steer}}:\mathbb{Z}^{2}\to \mathbb{R}^{K_{s}}. \tag{63}\]
In \(G\)-CNNs, all feature maps except the input image are functions on a group \(G\), which in the cases that were considered here has \(\mathbb{Z}^{2}\) as a normal subgroup. Equivariance is achieved by modifying the domain of the convolution operation, leading to \(G\)-convolution (23) as described in section 4.3.2. On the other
hand, the feature maps of steerable CNNs keep the domain of \(\mathbb{Z}^{2}\) in all layers and achieve equivariance by restricting the space of filter banks, as explained in detail in the sections 4.4.2 to 4.4.5.
To understand the equivalence, we first look at how \(f_{G}\) transforms under actions of \(G\). For this, we again use the example of a \(p4\)-feature map (figure 3). A generalization to other groups is easily done. The group \(p4\) consists of unique tuples \(x=(t,r)\) with \(t\in\mathbb{Z}^{2}\) and \(r\in C_{4}=\{e,r,r^{2},r^{3}\}\). We can thus interpret a \(p4\)-feature map as a map on \(\mathbb{Z}^{2}\), which returns four "rotational coordinates" at each point \(t\in\mathbb{Z}^{2}\), corresponding to the elements \(e,r,r^{2},r^{3}\), as is depicted in figure 13, which gives a visualization of this slightly different, but equivalent interpretation of a \(p4\)-feature map. When being acted upon by another element \(x^{\prime}=(t^{\prime},r^{\prime})\in p4\), the \(\mathbb{Z}^{2}\)-pixel coordinate \(t\) of \(x\) and its rotational outputs are moved to \({x^{\prime}}^{-1}t\). Simultaneously, the rotational coordinates are permuted by \(r^{\prime}\).
In steerable CNNs, all feature maps and filters are functions on \(\mathbb{Z}^{2}\). The equivalent of the rotational outputs of \(G\)-CNNs is realized by the channels of the regular representation of \(C_{4}\), which has a dimensionality of \(|C_{4}|=4\), thus consisting of one channel per element of \(C_{4}\). As the transformation law of the regular representation on these four channels is defined by the group's action on itself by composition, they transform in exactly the same way as the rotational outputs of a \(G\)-CNN. The induced representation \(\mathrm{Ind}_{C_{4}}^{p4}\rho_{\mathrm{reg}}\) of this regular representation lastly ensures that the respective pixel coordinates \(t\in\mathbb{Z}^{2}\) transform in the same way in both cases, as can be checked by comparison of the equations (22) and (24) with (35) and (52). Hence, we receive the desired equivalence of \(G\)-CNNs and regular steerable CNNs. As stated, these arguments can be generalized to any semidirect product group \(G^{\prime}=N\rtimes H\) instead of \(\mathbb{Z}^{2}\rtimes C_{4}\), as the regular representation behaves in the same way for
Figure 13: The transformation of a single pixel \(x\in\mathbb{Z}^{2}\) by the rotation \(r\in C_{4}\) (top), and the permutation of that pixel’s four rotational outputs (bottom).
any finite stabilizer group \(H\). In any case, one channel of a \(G\)-feature map in a layer of a \(G\)-CNN equates to one regular capsule, and thus to \(|H|\) channels in the equivalent regular steerable CNN.
## 5 Steerable CNNs on \(\mathbb{Z}^{n}\rtimes S_{n}\)
In this section, possible applications of the theory of equivariant networks to the symmetric group \(S_{n}\) will be explored. As \(G\)-CNNs are equivalent to a special case of steerable CNNs, this chapter will stay in the more general language of the latter.
As in example 2, one can build semidirect product groups \(\mathbb{Z}^{n}\rtimes S_{n}\), with the most practical use cases likely being \(n=2\), for instance for 2D-images and \(n=3\) for volumetric data, as for instance in [5]. One use case is given by interpretation of street scenes by one neural network based on the input of two redundant camera sensors that are installed at nearby positions. The signal of these sensors can be considered to be equivalent, but not necessarily equal, as one camera sensor could be disturbed by dirt, but not the other. These networks will be very similar to the ones discussed before, with the difference that \(S_{n}\) is used as the stabilizer group instead of \(C_{4}\) or \(D_{4}\).
### Steerable Feature Spaces and Filter Banks for \(\mathbb{Z}^{n}\rtimes S_{n}\)
\(\mathbb{Z}^{n}\rtimes S_{n}\) (see example 2) acts on functions on \(\mathbb{Z}^{n}\) in a similar way to the product groups \(p4\) and \(p4m\) by compositions of translations and coordinate permutations. Hence, the theory of steerable CNNs from section 4.4 can be applied for \(\mathbb{Z}^{n}\rtimes S_{n}\) in near complete analogy. In this section, we will nevertheless summarize the process of obtaining steerable feature spaces and filter banks. We also highlight the key differences, which lie in representation theory, such as in basis filters for intertwiner spaces, as these depend on the irreps of the underlying group. When discussing concrete examples, we will confine to \(S_{2}\) and \(S_{3}\), for visualization purposes. However, the general procedure of constructing steerable CNNs on \(\mathbb{Z}^{n}\rtimes S_{n}\) for any \(n\in\mathbb{N}\) becomes apparent.
**Feature Spaces**
We again deal with feature spaces \(\mathcal{F}_{l}\) of functions \(f:\mathbb{Z}^{n}\to\mathbb{R}^{K_{l}}\) with \(K_{l}\) output channels in each layer \(l\), which transform by a representation \(\pi_{l}\) analogously to (35) for \(l=0\) and (52) in higher layers:
\[[\pi_{0}(t,\sigma)f](x)=f((t\sigma)^{-1}x)=f(\sigma^{-1}(x-t)),\,t\in\mathbb{Z }^{n},\sigma\in S_{n} \tag{64}\]
\[[\pi_{l+1}(t,\sigma)f](x)=\rho_{l}(\sigma)[f((t\sigma)^{-1}x)],\,t\in\mathbb{Z }^{n},\sigma\in S_{n}. \tag{65}\]
In (65), \(\rho_{l}\) is the fiber representation from the previous layer, from which \(\pi_{l+1}=\mathrm{Ind}_{S_{n}}^{\mathbb{Z}^{n}\rtimes S_{n}}\rho_{l}\) is induced, and with respect to which the filter banks \(\Psi\in\)
\(\mathrm{Hom}_{S_{n}}(\pi_{l},\rho_{l})\) of layer \(l\) must be equivariant:
\[\rho_{l}(\sigma)\Psi=\Psi\pi_{l}(\sigma)\;\forall\sigma\in S_{n}. \tag{66}\]
**Equivariant Filter Banks**
The symmetric group on two elements has two distinct irreps: the trivial representation \(\mathrm{id}\) and the sign representation \(\mathrm{sgn}\), both of which are one-dimensional. By restricting \(\pi_{0}\) from (64) to \(S_{2}\) and letting it act on \(3\times 3\) filters with one channel, we once again receive a nine-dimensional representation \((\mathcal{F}_{0}^{S_{2}},\pi_{0})\). One now obtains the irrep decomposition of \(\pi_{0}\) by applying the character formula for \(\chi_{\mathrm{id}}\) and \(\chi_{\mathrm{sgn}}\):
\[m_{\pi_{0}}(\mathrm{id})=\frac{1}{|S_{2}|}\sum_{\sigma\in S_{2}}\chi_{\pi_{0} }(\sigma)\chi_{\mathrm{id}}(\sigma)=\frac{1}{2}(\chi_{\pi_{0}}(e)\chi_{ \mathrm{id}}(e)+\chi_{\pi_{0}}((12))\chi_{\mathrm{id}}((12)))=6. \tag{67}\]
While \(\chi_{\mathrm{id}}(e)\) and \(\chi_{\mathrm{id}}((12))\) can be looked up from table 2, it is useful for \(\chi_{\pi_{0}}\) to consider the canonical basis vectors of \((\mathcal{F}_{0}^{S_{2}},\pi_{0})\) in figure 8 and how \(\pi_{0}(e)\) and \(\pi_{0}((12))\) transform them. The representation matrix for the identity element is always the identity matrix for any representation, hence we receive \(\chi_{\pi_{0}}(e)=9\). On the other hand, \(\pi_{0}((12))\) swaps the coordinates of each pixel in each basis element, thus only leaving the third, fifth, and seventh filter invariant, leading to \(\chi_{\pi_{0}}((12))=3\).
Since there is only one other possible irrep, we immediately get \(m_{\pi_{0}}(\mathrm{sgn})=3\). With this, we have the full irreducible decomposition of \((\mathcal{F}_{0}^{S_{2}},\pi_{0})\), which is depicted with an exemplary set of basis filters in table 2.
For \(S_{3}\) and \(\mathbb{Z}^{3}\), we have one extra dimension for the base of feature spaces and filters, hence leading to an increase in dimensionality of the space of filters when looking at the \(S_{3}\)-equivalent of \((\mathcal{F}_{0}^{S_{2}},\pi_{0})\). Instead of \(3\times 3\) filters, we now have \(3\times 3\times 3\) filters, leading to a representation \((\mathcal{F}_{0}^{S_{3}},\pi_{0})\) of degree 27. The canonical basis for the space \(\mathcal{F}_{0}^{S_{3}}\) is depicted in figure 14.
We next determine the irreducible decomposition of \(\pi_{0}\) in \(\mathcal{F}_{0}^{S_{3}}\). While the characters of the three irreducible representations of \(S_{3}\) are given in table 3, it might look tedious at first to determine the character of \(\pi_{0}\). However, this becomes quite easy if one again looks at which basis elements from \(\mathcal{C}_{S_{3}}\) (figure 14) are left invariant by each group element of \(S_{3}\). As always, \(\pi_{0}(e)\) leaves every element invariant, thus yielding \(\chi_{\pi_{0}}(e)=27\). For the other elements, it is helpful to identify each basis vector with the coordinates of its non-zero entry (i.e. the black cube in each filter), e.g. the first vector with \((-1,1,1)\), the second with \((0,1,1)\) and the fourth with \((-1,1,0)\).
Any of the transpositions, i.e. 2-cycles \((12),(23),(13)\in S_{3}\), fixes all basis vectors whose two permuted coordinates are equal, e.g. \(\{(x_{1},x_{1},x_{2})\mid x_{1},x_{2}\in\{-1,0,1\}\}\) for \((12)\). Thus, each transposition fixes 9 elements, so we have \(\chi_{\pi_{0}}((12))=\chi_{\pi_{0}}((23))=\chi_{\pi_{0}}((13))=9\). The same argument can be used for the two 3-cycles. Each of them fixes the same 3 vectors of which all three non-zero coordinates are equal, hence we have \(\chi_{\pi_{0}}((123))=\chi_{\pi_{0}}((132))=3\).
Plugging these results into the character formula now yields
\[m_{\pi_{0}}(\mathrm{id})=\frac{1}{|S_{3}|}\sum_{\sigma\in S_{3}}\chi_{\pi_{0}} (\sigma)\chi_{\mathrm{id}}(\sigma)=\frac{1}{6}(27+3\cdot 9+2\cdot 3)=10,\]
\[m_{\pi_{0}}(\mathrm{sgn})=\frac{1}{|S_{3}|}\sum_{\sigma\in S_{3}}\chi_{\pi_{0 }}(\sigma)\chi_{\mathrm{sgn}}(\sigma)=\frac{1}{6}(27+3\cdot(9\cdot(-1))+2\cdot 3 )=1,\]
\[m_{\pi_{0}}(V_{s})=\frac{1}{|S_{3}|}\sum_{\sigma\in S_{3}}\chi_{\pi_{0}}( \sigma)\chi_{V_{s}}(\sigma)=\frac{1}{6}(27\cdot 2+3\cdot(9\cdot 0)+2\cdot(3 \cdot(-1)))=8. \tag{68}\]
Thus, the type (i.e. the multiplicities of irreps) of \(\pi_{0}\) is \((10,1,8)\) for the trivial representation \(\mathrm{id}\), the sign representation \(\mathrm{sgn}\) and the two-dimensional standard representation \(V_{s}\), respectively. The basis of this irreducible decomposition is depicted in figure 15. As one can verify, these filters transform under \(\pi_{0}(\sigma)\) for \(\sigma\in S_{3}\) by multiplication with the representation matrices for the respective group element, which are given in table 3.
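The multiplicities (67)-(68) can again be checked numerically with the character formula; the \(\chi_{\pi_{0}}\)-values are those derived above:

```python
from fractions import Fraction

def multiplicities(chi_pi0, irrep_chars, order):
    return {k: Fraction(sum(a * b for a, b in zip(chi_pi0, chi)), order)
            for k, chi in irrep_chars.items()}

# S2 on 3x3 filters: elements (e, (12))
print(multiplicities([9, 3], {"id": [1, 1], "sgn": [1, -1]}, 2))  # {id: 6, sgn: 3}
# S3 on 3x3x3 filters: elements (e, (12), (13), (23), (123), (132))
print(multiplicities([27, 9, 9, 9, 3, 3],
                     {"id":  [1, 1, 1, 1, 1, 1],
                      "sgn": [1, -1, -1, -1, 1, 1],
                      "V_s": [2, 0, 0, 0, -1, -1]}, 6))  # {id: 10, sgn: 1, V_s: 8}
```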
Figure 14: Some canonical basis elements of \(\mathcal{C}_{S_{3}}\) of \((\mathcal{F}_{0}^{S_{3}},\pi_{0})\). As always, entries of one are black and entries of zero are grey.
Equivariant filter banks and steerable feature spaces are now obtained in analogy to the sections 4.4.4 and 4.4.5. Given some list of output multiplicities \((m_{\mathrm{id}},m_{\mathrm{sgn}},m_{V_{s}})\) (or just \((m_{\mathrm{id}},m_{\mathrm{sgn}})\) for \(S_{2}\)) of some fiber representation \(\rho\), the filter bank \(\Psi\in\mathrm{Hom}_{H}(\pi_{0},\rho)\) is constructed by linearly combining the irrep basis filters from figure 15 for \(S_{3}\) or from table 2 for \(S_{2}\) in the same way as is depicted (for \(p4m\)) in figure 12, i.e. by only combining filters that correspond to the irrep that each of the respective output channels is transformed by.
### Capsules, Nonlinearities and Pooling for \(\mathbb{Z}^{n}\rtimes S_{n}\)
As was explained in the sections 4.4.6 and 4.4.7, certain requirements have to be fulfilled for a layer (realized by a concatenation of capsules as before) to commute with fiber-wise nonlinearities and pooling. A layer is thus not built by choosing multiplicities for the irreps directly, but by choosing copies of
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} Irrep & \(e\) & (12) & (13) & (23) & (123) & (132) \\ \hline id & \([1]\) & \([1]\) & \([1]\) & \([1]\) & \([1]\) & \([1]\) \\ sgn & \([1]\) & \([-1]\) & \([-1]\) & \([-1]\) & \([1]\) & \([1]\) \\ \(V_{s}\) & \(\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\) & \(\begin{bmatrix}1&0\\ -1&-1\end{bmatrix}\) & \(\begin{bmatrix}-1&-1\\ 0&1\end{bmatrix}\) & \(\begin{bmatrix}0&1\\ 1&0\end{bmatrix}\) & \(\begin{bmatrix}-1&-1\\ 1&0\end{bmatrix}\) & \(\begin{bmatrix}0&1\\ -1&-1\end{bmatrix}\) \\ \end{tabular}
\end{table}
Table 3: The representation matrices of \(\pi_{0}\) for the basis filters of the irrep decomposition of \((\mathcal{F}_{0}^{S_{3}},\pi_{0})\).
Figure 15: The basis vectors for the irrep decomposition of \((\mathcal{F}_{0}^{S_{3}},\pi_{0})\). The reader may verify that these vectors transform under \(\pi_{0}\) by the respective matrices given in table 3. Again, grey and transparent entries count as zeros, black entries as one and white entries as minus one.
capsules to be concatenated into a fiber. We now list some relevant capsules for steerable CNNs with \(H=S_{2}\) or \(H=S_{3}\)
As for any finite group, we can look at regular capsules which transform under the regular representation \(\rho_{\text{reg}}\). For \(S_{2}\), this is the two-dimensional representation of type \((1,1)\) and for \(S_{3}\), we have the four-dimensional representation of type \((1,1,2)\). As all regular capsules, these are realized by permutation matrices and thus commute with all nonlinearities discussed in 4.4.6, including ReLU. Furthermore, fiber-wise max-pooling can be applied. To obtain quotient capsules (which have the same compatibility properties as regular capsules), subgroups are needed. Of these there are none except the group itself and \(\{e\}\) for \(S_{2}\), which would lead to a trivial capsule and a regular capsule, respectively. For \(S_{3}\), we have (besides the aforementioned ones) one normal subgroup, which is the _alternating_ group containing all even permutations, \(A_{3}=\{e,(123),(132)\}\). With this, we have \(S_{3}/A_{3}=S_{2}\), hence this quotient capsule would behave like a regular capsule of \(S_{2}\). \(S_{3}\) furthermore has \(S_{2}\) as non-normal subgroup, allowing for \(S_{3}/S_{2}\)-quotient capsules. The two subgroups can also be used to implement coset pooling as described in 60.
## 6 Conclusion
In this review article, we have seen several relevant approaches to achieve invariant representations in machine learning. After establishing the mathematical preliminaries from group theory and representation theory, we discussed translation equivariant convolutional networks. We then saw how group equivariant neural networks generalize CNNs and thus allow for representations that are invariant to groups of transformations that are more general than just translations. This concept was then generalized once more by introducing steerable CNNs, which by the use of group representations allow for different, more nuanced ways to express equivariance to a group.
We furthermore presented an application of the theory to the symmetric group, resulting in a steerable CNN architecture on \(\mathbb{Z}^{n}\rtimes S_{n}\).
**Acknowledgement.** Helpful discussions with Matthias Rottmann are gratefully acknowledged.
|
2303.03660 | ECG Classification System for Arrhythmia Detection Using Convolutional
Neural Networks | Arrhythmia is just one of the many cardiovascular illnesses that have been
extensively studied throughout the years. Using multi-lead ECG data, this
research describes a deep learning (DL) pipeline technique based on
convolutional neural network (CNN) algorithms to detect cardiovascular
arrhythmia in patients. The suggested model architecture has hidden layers with
a residual block in addition to the input and output layers. In this study, the
classification of the ECG signals into five main groups, namely: Left Bundle
Branch Block (LBBB), Right Bundle Branch Block (RBBB), Atrial Premature
Contraction (APC), Premature Ventricular Contraction (PVC), and Normal Beat
(N), are performed. Using the MIT-BIH arrhythmia dataset, we assessed the
suggested technique. The findings show that our suggested strategy classified
15,000 cases with a high accuracy of 98.2% | Aryan Odugoudar, Jaskaran Singh Walia | 2023-03-07T05:48:28Z | http://arxiv.org/abs/2303.03660v2 | # ECG Classification System for Arrhythmia Detection Using Convolutional Neural Network
###### Abstract
Arrhythmia is just one of the many cardiovascular illnesses that have been extensively studied throughout the years. Using multi-lead ECG data, this research describes a deep learning (DL) technique based on a convolutional neural network (CNN) algorithm to detect cardiovascular arrhythmia in patients. The suggested CNN model has six layers total, two convolution layers, two pooling layers, and two fully linked layers within a residual block, in addition to the input and output layers. In this study, the main goal is the classification of the ECG signals into five groups: Left Bundle Branch Block (LBBB), Right Bundle Branch Block (RBBB), Atrial Premature Contraction (APC), Premature Ventricular Contraction (PVC), and Normal Beat (N).
Using the MIT-BIH arrhythmia dataset, we assessed the suggested technique. The findings show that our suggested strategy classified 15,000 cases with an average accuracy of 98.2%.
## Introduction
Since cardiovascular diseases account for 10% of all diseases and 30% of all deaths worldwide, they constitute a challenge for global public health.
The conventional CVD diagnosis paradigm is based on the clinical examinations and medical history of a specific patient. The patients are categorised based on these findings using a taxonomy of medical illnesses and a set of quantitative medical indicators.
Unfortunately, a vast volume of heterogeneous data renders the conventional rule-based diagnosis paradigm ineffective, necessitating extensive analysis and specialised medical knowledge to attain appropriate accuracy. In poor nations where there is a shortage of medical professionals and medical equipment, the issue is more severe.
The most popular method for monitoring heart activity is the electrocardiogram (ECG), which is both accessible and non-invasive. Different types of heartbeats may typically be distinguished by meticulously evaluating ECG morphology. ECG is not typically helpful for non-stationary signals, though: the morphology of the latter changes over time, and these changes can be observed both across different patients and within the same patient. The ability of skilled medical professionals to decipher the features of ECG signals is crucial for the early detection of arrhythmia.
Doctors must have a high level of professional expertise.
Consideration of computer-aided detection and diagnosis in ECG signals for cardiovascular illnesses is growing. However, it is quite challenging to create and choose the best diagnostic model with clinical implications.
As a result, a variety of ECG heartbeat recognition and classification algorithms have been created using various methods, including wavelet transform, hidden Markov models, support vector machines, and artificial neural networks. Due to the significance of accuracy in the medical field, the bulk of these ECG beat categorization techniques perform well during training but provide poor results in testing. Deep learning, particularly CNNs, has recently drawn a lot of attention for its outstanding abilities in image processing and natural signal processing, as well as its enormous potential for signal recognition. In this article, a deep-learning ECG classification system based on a convolutional neural network (CNN) is therefore proposed to categorise five different types of heartbeat.
## Working On The Model
Preprocessing, heartbeat segmentation, feature extraction, and classification are the four stages of the ECG heartbeat classification process. The goal of the signal preprocessing is to remove various types of noise from the ECG signal, such as artefacts and baseline drift. A variety of techniques for ECG signal denoising have been reported in the literature, including traditional filtering techniques such as low-pass filters, Wiener filters, adaptive filters, and filter banks. Many researchers have also worked on the classification of ECG signals and feature extraction using conventional machine learning techniques. Additionally, certain statistical techniques were utilised to extract features from ECG data, including principal component analysis (PCA), the higher-order statistics (HOS) methodology, and linear discriminant analysis (LDA).
The majority of studies confirmed that the wavelet transform (WT), which can concurrently extract frequency and time information, gives satisfactory results for ECG signal feature extraction; five different beat types have been classified using the WT technique with an accuracy of 97.29%.
Recently, the majority of classification techniques merged the two phases of feature extraction and classification by using a deep learning model. Acharya et al. [1] used a nine-layer CNN to classify arrhythmia heartbeats and achieved accuracies of 94.03% and 93.47% with original and denoised signals, respectively. Note that the learning capability of a CNN model improves as the number of network layers increases; however, increasing the number of layers alone does not necessarily improve accuracy. This issue is known as vanishing gradients.
Figure 1: Model structure.
Normalized initialization and intermediate normalization approaches have mostly been used to address it [17], [18]. In fact, even with these methods, the training of deep neural networks still suffers from the previously described problem of accuracy loss as network depth increases. On the other hand, in [19], a straightforward CNN model with only five layers achieved a 97.5% accuracy rate.
In this work, we start from the latter approach [19] and add two convolutional layers. To avoid the issue of vanishing/exploding gradients, these two hidden convolutional layers are placed inside a residual block, and the resulting model is used to identify five different types of heartbeats. As a result, we obtain better accuracy.
## 2 Methodology
### 2.0.1 Method Overview
In order to categorise the input ECG signal into 5 groups, the recordings are initially filtered by a moving average filter and an eight-level Daubechies 4 wavelet transform. Based on the MIT-BIH annotations, each record is segmented into beats of 200 samples. Then, prior to training, each segment is reduced to 180 samples.
Finally, to achieve the feature extraction and classification of ECG signals, the processed heartbeat segments are employed directly as input data of the CNN model. Figure 1 provides a general overview of the suggested strategy.
**2.Data**
PhysioNet ([http://www.physionet.org](http://www.physionet.org)) [21] hosts the MIT-BIH database, from which we pulled data for this study. This database contains 48 recordings of two-channel dynamic ECGs. Each recording has a maximum duration of 30 minutes and a sampling frequency of 360 Hz. Each beat is annotated by the MIT-BIH to indicate the class to which it belongs. Table I displays the number of beats per class in the MIT-BIH database. To train and test the viability of our technique, 44 ECG records from lead II (MLII) in the database were chosen.
Four records (102, 104, 107, and 217) were disregarded due to their poor signal quality for post-processing, as stated by the AAMI standard. Our data was split into a training set (50%) and a test set (50%). There are 13,200 non-duplicate occurrences in each set. The total number of data chosen for each class is shown in Table I.
\begin{tabular}{c c c} \multicolumn{3}{c}{**Table I: Classes of Arrhythmia**} \\ \hline Arrhythmia Type & Annotation & Total \\ \hline Normal Rhythm (NOR) & N & 73324 \\ \hline Left Bundle Branch Block (LBBB) & L & 8071 \\ \hline Right Bundle Branch Block (RBBB) & R & 6858 \\ \hline Premature Ventricular Contraction (PVC) & V & 6767 \\ \hline Atrial Premature Contraction (APC) & A & 1161 \\ \hline \end{tabular}
**3.Data Processing**
In general, several interference noises can easily be mixed in during the collection process, due to the weakness of the ECG signal and the effects of the acquisition equipment. These noises make it very difficult to analyse ECG readings. Therefore, proper preprocessing of ECG signals is a crucial step prior to classification. Power-line interference, baseline drift, and electromyographic interference are three common ECG interference noises. A moving average filter and the Daubechies 4 wavelet transform are both used to denoise the ECG signal.
In signal processing, valuable signals typically appear as low-frequency or smooth components, while noise typically appears as high-frequency components. When a noisy signal is decomposed by the wavelet transform, the noise is concentrated in the high-frequency wavelet coefficients. These coefficients are then threshold-processed to remove power-line and electromyographic interference. Finally, the inverse wavelet transform is used to reconstruct the signal. The baseline drift is removed by the moving average filter.
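A hedged sketch of this preprocessing chain using the PyWavelets library; the thresholding rule and the moving-average window length are illustrative choices of ours, as the text does not specify them:

```python
import numpy as np
import pywt

def denoise(ecg, wavelet="db4", level=8, maf_window=71):
    # wavelet thresholding removes high-frequency noise components
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise-level estimate
    thr = sigma * np.sqrt(2 * np.log(len(ecg)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    filtered = pywt.waverec(coeffs, wavelet)[: len(ecg)]
    # a long moving average estimates the baseline drift, which is subtracted
    baseline = np.convolve(filtered, np.ones(maf_window) / maf_window, "same")
    return filtered - baseline
```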
**4.CNN Model Architecture**
Different manually created features are used by traditional machine learning techniques to obtain representations of input data. Deep learning involves an autonomous learning process that progresses from low-level representations acquired over numerous layers to higher abstract representations.
One of the most popular varieties of artificial neural networks is the CNN. A CNN is conceptually similar to a multilayer perceptron (MLP). When the network contains more than one hidden layer, an MLP becomes a deep MLP. Since each perceptron in an MLP is connected with every other perceptron, there is a risk that the total number of parameters increases significantly. Due to the large degree of redundancy, this is inefficient. Another drawback is its disregard for spatial information, since it accepts flattened vectors as inputs. The CNN model fixes these issues by accounting for local connectivity. Additionally, all layers are only loosely rather than completely connected.
The convolution layer, pooling layer, and fully-connected layer are the three fundamental layers of the CNN architecture. A CNN also consists of two parts: a feature extractor that automatically learns the features from the raw input data, and a fully connected multi-layer perceptron (MLP). The convolution layer and the pooling layer form the feature extractor. The convolution layer scans the input along its dimensions using filters and convolution operations; the stride and the filter size are two of its hyper-parameters. A bias is added to the output before the activation function is applied, creating a feature map for the following layer. This output is known as a feature map or activation map. Let \(x^{0}=[x_{1},x_{2},\ldots,x_{n}]\) be the input vector of beat samples, where \(n\) is the number of samples per beat.
The output of the convolution layer is:
\[c_{i}^{l,j}=\sigma\Big(b_{j}+\sum_{m=1}^{M}w_{m}^{j}\,x_{i+m-1}^{0}\Big),\]
where wmj is the weight for the jth feature map and mth filter index, l is the layer index, is the activation function, b is the bias term for the jth feature map, M is the kernel/filter size. The pooling layer comes just after the convolution layer. It is a process of downsampling. It helps to shrink the size of the activation map, which produces medium-level features. A layer's pooling of a feature map is determined by
\[p_{i}^{l,j}=\max_{r\in R}\;c_{i\times T+r}^{l,j},\]
where \(T\) is the pooling stride and \(R\) is the size of the pooling window. The final layer is the fully connected layer (FC). It takes a flattened input in which every neuron is connected to each input. Each neural network uses an activation function, a mathematical function that defines the output of a neuron and decides whether or not to activate it, based on whether the input from that neuron is important to the prediction made by the model. In this study, ReLU serves as the activation function. It has become the default activation function for many different kinds of neural networks, and it involves no complicated computation.
As a result, the model can train and run faster. It is described mathematically as \(y=\max(0,x)\). A straightforward softmax classifier is utilised for beat classification and is positioned at the end of the CNN design. Softmax is a mathematical function that turns a vector of real numbers into a vector of probabilities, where the probability of each value is proportional to its relative scale in the vector. Once the predicted output is obtained through forward propagation, the prediction error is determined using the loss function. Back-propagation is then used to update the weights by computing the gradients of the convolutional weights. During this process, the prediction error propagates back through each parameter of each layer. Forward and backward propagation are repeated until a given number of epochs is reached.
An important factor influencing the final classification and recognition results is the depth of the deep learning network. The general idea is to make the neural network architecture as deep as possible. However, increasing the depth will eventually hurt the network's performance: vanishing/exploding gradients are a problem that makes network training more challenging. The residual block was used to overcome this difficulty. It is a stack of layers configured so that each layer's output is added to a layer further down the stack. By using "shortcut connections" that skip several network layers, it is an enhanced deep learning building block for CNNs that prevents these issues.
The proposed model improves on the model first put forth in [19] by adding two convolutional layers; these two layers are placed in a residual block to get around the aforementioned problem. The input layer of our suggested model architecture takes a segment of ECG data with 180 sample points. The network has two convolutional layers, two pooling layers, two fully connected layers, and one softmax layer. The kernel size, stride, and number of filters of each layer are three, two, and eighteen, respectively. A rectified linear unit (ReLU) follows each convolutional layer. A residual block with two convolutional layers (18 convolution kernels of length 7 and stride 2) is included.
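A possible Keras realisation of the described architecture is sketched below. It reflects our reading of the text: the dense-layer width is an assumption of ours, and the residual-block convolutions here use stride 1 (rather than the stated stride 2) so that the shortcut addition is shape-compatible without a projection.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(n_classes=5, n_samples=180):
    inp = layers.Input(shape=(n_samples, 1))
    x = layers.Conv1D(18, 3, strides=2, activation="relu", padding="same")(inp)
    x = layers.MaxPooling1D(2)(x)
    # residual block: two conv layers whose output is added to a shortcut
    shortcut = x
    r = layers.Conv1D(18, 7, strides=1, activation="relu", padding="same")(x)
    r = layers.Conv1D(18, 7, strides=1, padding="same")(r)
    x = layers.ReLU()(layers.Add()([shortcut, r]))
    x = layers.Conv1D(18, 3, strides=2, activation="relu", padding="same")(x)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)   # width 64 is an assumption
    out = layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inp, out)
```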
## Experimental Results
I trained our model on a computer with 8GB of RAM and a Mac M1 processor. The experimental data came from the international standard MIT-BIH ECG database, which is frequently used in ECG research and has precise, thorough professional annotation. This data was split into training and testing sets, each with 13200 instances. Training ran for 300 epochs with a batch size of 32, covering all input data in each epoch. The learning rate was set to 0.001. Prior to training, the signals were rescaled to the [-1,1] range, which yields greater accuracy than training without normalisation.
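A sketch of this training configuration (300 epochs, batch size 32, learning rate 0.001, inputs rescaled to [-1,1]) follows; the optimizer is not named in the text, so Adam is assumed, and `model`, `x_train` and `x_test` refer to the model sketched above and the two MIT-BIH splits, assumed loaded elsewhere:

```python
import tensorflow as tf

def rescale(x):
    """Rescale a beat segment to the [-1, 1] range, as described above."""
    return 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0

# `x_train`, `y_train`, `x_test`, `y_test`: the 13200-instance splits, each beat
# segment already rescaled with `rescale`; `model` comes from the previous sketch.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # optimizer is an assumption
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(x_train, y_train, epochs=300, batch_size=32,
                    validation_data=(x_test, y_test))
```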
As shown by equations (3), (4), and (5), where TP stands for true positive, TN for true negative, FP for false positive, and FN for false negative, we employed the following metrics to assess the performance of our model: accuracy, specificity, and sensitivity. After experimental verification, the suggested CNN model achieved 97.8% accuracy, 97.0% sensitivity, and 97.32% specificity.
\[accuracy=\frac{TP+TN}{TN+FP+TP+FN}\times 100 \tag{3}\]
\[specificity=\frac{TN}{TN+FP}\times 100 \tag{4}\]
\[sensitivity=\frac{TP}{TP+FN}\times 100 \tag{5}\]
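A direct Python transcription of equations (3)-(5); the counts in the example call are illustrative only, not the paper's confusion matrix:

```python
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tn + fp + tp + fn) * 100      # equation (3)

def specificity(tn, fp):
    return tn / (tn + fp) * 100                       # equation (4)

def sensitivity(tp, fn):
    return tp / (tp + fn) * 100                       # equation (5)

# Illustrative counts only
print(accuracy(970, 978, 27, 30), specificity(978, 27), sensitivity(970, 30))
```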
By employing a moving average filter and wavelet transform in the preprocessing phase and a 1D-CNN with a residual block for classification, we can demonstrate that the suggested technique outperforms the other proposed methods in terms of ECG classification accuracy. The study's five heartbeat types are "N, L, R, A, V"; each corresponds to a distinct arrhythmia signal. Under the AAMI standard guidelines, the five forms of ECG signals are normal beats (N), supraventricular ectopic beats (S), ventricular ectopic beats (V), fusion beats (F), and unclassifiable beats (Q).
## Conclusion
The suggested model is sound and could be used in clinical settings as an auxiliary tool to assist cardiologists in identifying patients' cardiac arrhythmia. In clinical settings, this strategy will cut down on hospital ECG signal processing expenses and patient wait times. It is important to stress that a model that is highly accurate in detecting cardiovascular disease will lower medical errors. In this study, I used wavelet transform 4 in 8 levels and a moving average filter to denoise the signal. The CNN model has six layers with a residual block, in addition to input and output layers, to classify five different types of heartbeats. In the experiments, I tested the trained model on the standard MIT-BIH database (lead II), where it achieved a 97.8% accuracy rate. In future work, I want to increase accuracy by utilising residual architectures such as ResNet or DenseNet.
|
2302.10253 | Multiobjective Evolutionary Pruning of Deep Neural Networks with
Transfer Learning for improving their Performance and Robustness | Evolutionary Computation algorithms have been used to solve optimization
problems in relation with architectural, hyper-parameter or training
configuration, forging the field known today as Neural Architecture Search.
These algorithms have been combined with other techniques such as the pruning
of Neural Networks, which reduces the complexity of the network, and the
Transfer Learning, which lets the import of knowledge from another problem
related to the one at hand. The usage of several criteria to evaluate the
quality of the evolutionary proposals is also a common case, in which the
performance and complexity of the network are the most used criteria. This work
proposes MO-EvoPruneDeepTL, a multi-objective evolutionary pruning algorithm.
MO-EvoPruneDeepTL uses Transfer Learning to adapt the last layers of Deep
Neural Networks, by replacing them with sparse layers evolved by a genetic
algorithm, which guides the evolution based in the performance, complexity and
robustness of the network, being the robustness a great quality indicator for
the evolved models. We carry out different experiments with several datasets to
assess the benefits of our proposal. Results show that our proposal achieves
promising results in all the objectives, and direct relation are presented
among them. The experiments also show that the most influential neurons help us
explain which parts of the input images are the most relevant for the
prediction of the pruned neural network. Lastly, by virtue of the diversity
within the Pareto front of pruning patterns produced by the proposal, it is
shown that an ensemble of differently pruned models improves the overall
performance and robustness of the trained networks. | Javier Poyatos, Daniel Molina, Aitor Martínez, Javier Del Ser, Francisco Herrera | 2023-02-20T19:33:38Z | http://arxiv.org/abs/2302.10253v2 | Multiobjective Evolutionary Pruning of Deep Neural Networks with Transfer Learning for improving their Performance and Robustness
###### Abstract
Evolutionary Computation algorithms have been used to solve optimization problems related to architectural, hyper-parameter or training configuration, forging the field known today as Neural Architecture Search. These algorithms have been combined with other techniques such as the pruning of Neural Networks, which reduces the complexity of the network, and Transfer Learning, which allows importing knowledge from another problem related to the one at hand. Using several criteria to evaluate the quality of the evolutionary proposals is also common, with the performance and complexity of the network being the most used criteria. This work proposes MO-EvoPruneDeepTL, a multi-objective evolutionary pruning algorithm. MO-EvoPruneDeepTL uses Transfer Learning to adapt the last layers of Deep Neural Networks, replacing them with sparse layers evolved by a genetic algorithm, which guides the evolution based on the performance, complexity and robustness of the network, with robustness being a strong quality indicator for the evolved models. We carry out different experiments with several datasets to assess the benefits of our proposal. Results show that our proposal achieves promising results in all the objectives, and direct relations are presented among them. The experiments also show that the most influential neurons help us explain which parts of the input images are the most relevant for the prediction of the pruned neural network. Lastly, by virtue of the diversity within the Pareto front of pruning patterns produced by the proposal, it is shown that an ensemble of differently pruned models improves the overall performance and robustness of the trained networks.
keywords: Evolutionary Deep Learning, Multi-Objective Algorithms, Pruning, Out of Distribution Detection, Transfer Learning +
Footnote †: journal: Applied Soft Computing
## 1 Introduction
Evolutionary Computation (EC) refers to a family of global optimization algorithms inspired by biological evolution (Back & Schwefel, 1996, May). EC algorithms such as Evolutionary Algorithms (EAs) (Back et al., 1997) have been used to solve several complex optimization problems which cannot be solved analytically in polynomial time. In many real-world optimization problems there is not just one criterion or objective to improve, but several objectives to consider. Multi-Objective Evolutionary Algorithms (MOEAs) are a family of EAs capable of efficiently tackling optimization problems comprising several goals (Deb, 2011).
Structural search and, in particular, Neural Architecture Search (NAS), is one of the non-polynomial problems which has been approached with EAs over the years (Martinez et al., 2021). This problem consists of looking for neural network configurations that best fit a given dataset by optimizing the performance or loss of the network according to the selected evaluation metric (Stanley and Miikkulainen, 2002). NAS has been applied, with one or more objectives, to several well-known Neural Networks (NN) and, in particular, to their Deep Learning (DL) counterparts, known as Deep Neural Networks (DNN) (Pham et al., 2018, July; Yang et al., 2020, June).
Among other decision variables considered in NAS, this area has also approached the improvement of a NN by optimally pruning its neural connections. Pruning techniques seek to reduce the number of parameters of the network, targeting network architectures with lower complexity, usually at the cost of lower network performance. When a new learning task arises, one way to compensate for the lack of quality data is Transfer Learning (TL), whose most straightforward approach is using a network pre-trained on very large datasets (Krizhevsky et al., 2017) for feature extraction, followed by a specialization of the last layers of the network. Because the number of trainable parameters in these last layers is lower, early overfitting of the network, which can happen when few examples are available for large models, can be avoided. In those cases, the search for optimal pruning patterns using evolutionary NAS is performed on these last layers (Poyatos et al., 2023).
Robustness is one of the unavoidable requirements to ensure proper performance in scenarios where risks must be controlled and certain guarantees are needed (ISO, 2021a,b). Combining EAs with techniques that evaluate the robustness of models paves the way towards better models for all types of problems. Incorporating robustness as a target could be useful but, unfortunately, it has rarely been considered an objective (Wang et al., 2021). Robustness can be measured in several ways for a DNN model, one being the performance in Out-of-Distribution (OoD) detection problems (Hendrycks and Gimpel, 2016). This problem consists of detecting whether a new test instance queried to the model belongs to the distribution underlying the learning dataset, i.e., the In-Distribution (InD) dataset, or instead belongs to a different distribution (correspondingly, the Out-Distribution dataset, OoD).
The natural extension of NAS is the development of proposals with several objectives. In this scenario, MOEAs come into play, as they evolve the networks while optimizing several objectives (Lu et al., 2022b; Elsken et al., 2019). The term MONAS arises as the union of multi-objective algorithms and NAS problems. MONAS algorithms usually rely on several objectives, the performance of the network being a standard one. The complexity of the network is a common second objective, which can be modeled as the number of parameters pruned from the network, network compression, or other alternatives. More sophisticated proposals consider an additional objective based on energy consumption or the hardware device in use, among others (Chitty-Venkata and Somani, 2022; Wei et al., 2022). Adding robustness as an additional objective, via an OoD detection technique applied to the DL model being optimized, unleashes a new vision for MONAS proposals.
The main hypothesis is the convenience of using a MOEA to evolve, via a sparse representation, the pruning patterns of the fully-connected layers of a neural network, simultaneously optimizing the generalization performance of the network, its complexity, and the robustness of an OoD detection technique that relies on the activation signals inside the network when processing samples that may or may not belong to the distribution of the training data.
In this work, a novel MOEA is proposed based on the evolution of the pruning patterns of fully-connected layers, which we hereafter refer to as Multi-Objective Evolutionary Pruning for Deep Transfer Learning (MO-EvoPruneDeepTL). MO-EvoPruneDeepTL uses TL to focus the evolutionary search for pruning patterns on the last layers of the neural network, so as to adapt them to the problem at hand. This work is inspired by recent work in (Poyatos et al., 2023), in which dense layers are pruned using a configuration that defines the active neurons; that configuration evolves using a binary genetic algorithm guided by the performance of the network. In this manuscript, the previous problem is reformulated to optimize the pruning patterns with a MOEA, in which the search is guided by three objectives: the performance of the network, its complexity, and the robustness measured by the capability of an OoD detection technique to distinguish InD and OoD samples. The insight behind these objectives is that a heavily pruned network may lose performance and degrade the robustness of the OoD detector that works with the activations of this pruned network. For that reason, a minimum number of neurons needs to remain active to achieve models that balance
performance and robustness. This OoD context falls under the umbrella of Open-World Learning (OWL) (Parmar et al., 2022; Zhou, 2022). The aim of OWL is learning in non-controlled environments, so that the Artificial Intelligence (AI) agent (in this study, the evolved trained models) becomes more and more knowledgeable as it is queried with new data. OWL is also included under the umbrella of General Purpose Artificial Intelligence (GPAI), a relevant and important term nowadays. The basis of GPAI lies in AI generating AI (Clune, 2019; Real et al., 2020, July), and this work fits that view, as it uses an EA that learns on the basis of DL models.
To assess the quality of MO-EvoPruneDeepTL, different experiments have been designed that allow inspecting several aspects of the performance of MO-EvoPruneDeepTL from different perspectives. To that end, the main purpose of the experimental setup is to provide an informed answer to the following research questions (RQ):
* How are the approximated Pareto fronts produced by the proposal in each of the considered datasets?
* Is there any remarkable pruning pattern that appears in all the solutions of the Pareto front?
* Do our models achieve an overall improvement in performance when combined through ensemble modeling?
A general insight from these experiments is the achievement of networks optimized in these objectives; moreover, the evolutionary process gives rise to pruning patterns that maintain relevant neurons carrying information about the input of the model, and it motivates the use of ensembles to further improve modeling performance in terms of generalization and robustness to OoD.
The rest of the article is structured as follows: Section 2 briefly overviews background literature related to the proposal. Section 3 shows the details of the proposed MO-EvoPruneDeepTL model. Section 4 presents the experimental framework designed to thoroughly examine the behavior of MO-EvoPruneDeepTL with respect to the RQ formulated above. In Section 5, we show and discuss in depth the results obtained by MO-EvoPruneDeepTL in the different experiments. Several indicators are presented to show the quality of MO-EvoPruneDeepTL. Finally, Section 6 draws the main conclusions from this study, as well as future research lines stimulated by our findings.
## 2 Related Work
The aim of this section is to review contributions to the literature on the key elements of this study: Neural Architecture Search (Subsection 2.1), Transfer Learning and Pruning of Convolutional Neural Networks (CNN, Subsection 2.2), and OoD detection (Subsection 2.3). The last paragraph of this section summarizes the benefits of MO-EvoPruneDeepTL.
### Neural Architecture Search
Designing the NN that best fits the problem at hand is a challenging task. The search for the best design of the network is itself another optimization problem, as it is necessary to find the architecture that optimally fits the data. In this context, NAS has achieved great importance. The main purpose of NAS proposals is the search for the best NN design to solve the considered problem.
The first NAS-based proposals emerged at the beginning of this century. NEAT, presented in (Stanley and Miikkulainen, 2002), was a pioneering proposal on how EAs, specifically a Genetic Algorithm (GA), can be used to evolve NNs. It showed that a constructive modeling of the NN combined with the benefits of the GA can lead to optimized NN topologies. The natural extension of this seminal work, allowing for the evolution of DNNs, was presented years later in (Miikkulainen et al., 2019), in which the authors use a co-evolutionary algorithm based on a co-operation scheme to evolve DNNs.
In recent years, more NAS proposals have been developed. One of them is EvoDeep (Martin et al., 2018), in which the authors create an EA with specific operators to create and evolve DL models from scratch. More examples of the importance of NAS come with the following proposals. In (Dufourq and Bassett, 2017, November), the authors propose another EA to perform the evolution of NNs, similar to the previous proposal but with a difference in the fitness function, which is influenced by the accuracy and complexity of the network. Another example is presented in (Assuncao et al., 2019), in which the evolution covers two aspects: the topology and the parameters of Convolutional Neural Networks (CNN). In (Trivedi et al., 2018), the authors propose a GA that evolves the weights of the softmax layer to improve the performance of the NN. Suganuma et al. propose in (Suganuma et al., 2020) a \((1+\lambda)\) evolutionary strategy to evolve DNNs. In 2020, (Real et al., 2020, July) presented an advanced technique that automatically searches for the best model, operating from scratch and obtaining good performance on the problems at hand. NAS has also been applied in other areas, such as Reinforcement Learning (RL); a good example is (Zoph and Le, 2016), in which the authors use a recurrent network (RNN) to generate model descriptions of NNs and train this RNN with RL to maximize the expected accuracy of the generated architectures on a validation set.
There are more examples of NAS in the literature, such as the NAS algorithm comprising two surrogates through a supernet, with the objective of improving gradient-descent training efficiency (Lu et al., 2020, August). Another NAS proposal comes in (Lu et al., 2022a), in which the authors present a pipeline with a surrogate NAS applied to real-time semantic segmentation, managing to convert the original NAS task into an ordinary multi-objective optimization problem.
Lastly, there are more advanced combinations of NAS and EAs, given by (Real et al., 2019, January), in which a new model for evolving a classifier is presented, and by (Real et al., 2020, July), in which the authors propose AutoML-Zero, an evolutionary search that builds a model from scratch (with low-level primitives for feature combination and neuron training) and achieves great performance on the addressed problem.
The main characteristic of the previous NAS proposals is the evolution of the DL model guided by a single objective, usually the accuracy or another metric that measures the performance of the network. The following proposals share a common aspect: the evolution of the model is done using more than one objective. This leads to the algorithms in the field of MONAS.
We can find several MONAS approaches that have been applied to diverse fields with great results. One of them is presented in (Elsken et al., 2019, May); this work proposes a MONAS that allows approximating the Pareto front of all the architectures. In the medical imaging area, (Baldeon-Calisto and Lai-Yuen, 2020) propose a MONAS that evolves both accuracy and model size. Continuing this research on medical images, in (Baldeon Calisto and Lai-Yuen, 2020) the authors use a multi-objective evolutionary algorithm that minimizes both the expected segmentation error and the number of parameters in the network. Another interesting work is presented in (Calisto and Lai-Yuen, 2020), in which a pipeline is created for the automatic design of neural architectures while optimizing the network's accuracy and size.
Typically, MONAS evaluate two or three objectives. One is usually the performance of the network; the other objectives relate to the complexity of the network and other empirical, measurable criteria. In (Lu et al., 2021), the authors propose a multi-objective evolutionary algorithm for the design of DNNs for image classification, adopting the classification performance and the number of floating-point operations as its objectives. Another example is DeepMaker (Loni et al., 2020), a multi-objective evolutionary approach that considers both the accuracy and the size of the network to evolve robust DNN architectures for embedded devices.
There are some well-known MOEAs in the literature; one of them is NSGA-II. A version of it adapted to NAS has been developed (Lu et al., 2019, July), called NSGA-Net. This proposal looks for the best architecture through a three-step search: an initialization step, followed by an exploration step that applies the EA operators to create new architectures, and an exploitation step that uses the previous knowledge of all the evaluated architectures.
### Transfer Learning and CNN Pruning
One of the objectives that EAs used for NAS usually aim to optimize is the complexity of the network. NNs are structures with a great number of parameters. These networks are composed of two main parts: the first extracts the main features of the problem, i.e., learns to distinguish the patterns of the images (when working with image classification), and the second classifies these patterns into several classes.
In this context, TL helps the learning process when there are few data, i.e., it prevents overfitting when the set of input examples is not large (Pan and Yang, 2010). TL is a DL mechanism encompassing a broad family of techniques. The most common method of TL with DL is reusing a network structure with parameters pre-trained on a problem similar to the task at hand, trained on huge datasets like (Krizhevsky et al., 2017). This involves using a DL model with fixed, pre-trained weights in the convolutional layers and then adding and training several layers to adapt the network to a different classification problem (Khan et al., 2019).
Another technique to reduce the complexity of networks is pruning. Pruning a CNN model consists of reducing the parameters of the model, although it may lead to a decrease in its performance. An example of the fusion of EAs, DNNs and pruning is shown in (Wang et al., 2020), which proposes a novel approach based on pruning CNNs through sparse layers (layers with fewer connections between neurons) guided by a genetic algorithm. The main consequence of that study is the removal of a great fraction of the network, at the penalty of a lower generalization performance.
Following the idea of EAs and DNNs, in (Poyatos et al., 2023) the authors also propose a combination of sparse layers and a genetic algorithm. They show that pruning can be done in a TL scheme with sparse layers and EAs. Their proposal is guided only by the performance of the model, yet it also achieves a great reduction in the optimized sparse layers.
### Out of Distribution Detection
Robustness is a term that has been used with related yet different meanings in the Machine Learning (ML) literature. In this work we refer to the model's ability to handle the unknown, i.e., to detect whether it has been queried with an example from a distribution it has not learned, and therefore to refuse to make the classification it has been trained to do. This is precisely what the Out-of-Distribution detection framework measures.
In this problem, a model learns to classify instances into different classes from a training dataset sampled from a distribution, namely the In-Distribution. After this process, the model is asked to correctly distinguish between test examples drawn equally from either the in-distribution or from a semantically different distribution, the Out-Distribution dataset (Yang et al., 2021). The term semantically different refers to the fact that the classes contained in this foreign distribution are distinct from the ones present in the InD. As ML and DNN models are not natively prepared for this task, an OoD detection technique is wrapped around the model to allow this behavior. Typically, these techniques create a score for every example processed by the model, such that the score obtained by an OoD instance is significantly different from the one obtained by an InD example. Then, by simply defining a threshold on this score, the model can decide whether an instance is from the in- or out-distribution.
A great variety of methods exists in the literature, starting with Hendrycks and Gimpel (2016), where the so-called _baseline_ method was introduced. It relies on the simple observation that InD instances tend to have a greater Maximum Softmax Probability, the softmax probability of the predicted class. By simply applying a threshold to this score, they achieved acceptable performance on many classification problems. In Liang et al. (2018, Aprila), this idea was refined by applying temperature scaling to the softmax probabilities, which further separates the score distributions of in- and out-distribution samples. The authors also implemented an input preprocessing pipeline that slightly enhanced the performance by adding a small quantity of gradient- and softmax-dependent noise. The paper presented in Lee et al. (2018), instead of using the softmax probabilities, exploits the feature space of the layer right before the softmax and assumes that it follows a multivariate Gaussian distribution, enabling the calculation of its mean and variance for every sample. After creating a class-conditional distribution utilizing the training samples, the score for every test sample is the closest Mahalanobis distance between the sample and the calculated Gaussian class-conditional distributions.
The technique proposed in (Hendrycks et al., 2019, May), in contrast to previous works, focuses on modifying the model's training by adding a term to the loss function (which depends on the classification task, density estimation, etc.), helping the model learn heuristics that will improve the performance of other OoD methods applied afterwards. This new term needs to be trained with OoD data, which can
be obtained by leveraging the large amount of data publicly available on the internet. The authors prove that the heuristics learned on arbitrary OoD datasets generalize well to other unseen OoD data. Thereafter, (Liu et al., 2020) based its detector on what they called the free energy function, which combines concepts of energy-based models with modern neural networks and their capability of assigning a scalar to every instance fed to the model without changing its parametrization. Specifically, the free energy function is based on the logits of the network, and the work empirically demonstrates that OoD instances tend to have higher energy, enabling the distinction between InD and OoD data. A later work (Lin et al., 2021, June) exploited the idea that easy OoD samples can be detected by leveraging low-level statistics. On this basis, several intermediate classifiers are trained at different depths and each example is outputted through one of them depending on its complexity. To measure complexity, a function based on the number of bits used to encode the compressed image is harnessed. The OoD scoring function employed is the above-presented energy function adapted to the corresponding depth.
Although only a few research contributions are presented in this work, it must be noted that the OoD problem has been widely studied in the literature (Salehi et al., 2021), with proposals ranging from complex, well-performing ones to simpler yet effective ones. Since the aim of this paper is to show how robustness in the OoD sense can be affected when the network is pruned, the OoD detection method is selected to be computationally cheap yet effective, so as not to add computational complexity to the MOEA.
In this section, the related work has been reviewed from three different perspectives. Terms like MONAS and MOEA are important, as this work presents a new contribution on these topics. Moreover, it is based on a TL scheme in which an evolutionary pruning of the last layers is performed. In recent years, several proposals have been published on these topics, i.e., MOEAs that search for the best architecture according to one or more objectives, as well as pruning approaches for CNNs. However, this study introduces a new way to guide the evolutionary pruning of the models through the usage of an OoD mechanism. MO-EvoPruneDeepTL tackles the problem of achieving robust models with high performance and the fewest active neurons. This scheme, a MOEA that performs pruning in the last layers (TL paradigm) with three objectives, is a new contribution to all these fields at the same time.
## 3 Multi-Objective Evolutionary Pruning for Deep Transfer Learning
This section explains the details of MO-EvoPruneDeepTL. First, Subsection 3.1 presents the formulation of the problem at hand. In Subsection 3.2, we describe the objectives used to guide the search. Then, Subsection 3.3 explains the OoD detector. The DL and network schemes are shown in Subsection 3.4. Finally, Subsection 3.5 describes the evolutionary components of MO-EvoPruneDeepTL.
### Problem formulation
This section aims at defining and explaining the mathematical components that circumscribe MO-EvoPruneDeepTL. We explore different concepts needed to fully understand the basics of our study.
We define the concept of dataset. Mathematically, we define a training dataset \(\mathcal{D}\doteq\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\) composed by \(N\) (instance, label) pairs. Such a dataset is split in training, validation and test partitions, such that \(\mathcal{D}=\mathcal{D}_{tr}\cup\mathcal{D}_{val}\cup\mathcal{D}_{test}\) with \(|\mathcal{D}_{tr}|=N_{tr}\).
Another important concept to keep in mind is the model. We define a model \(M_{\theta}\) to represent the relationship between its input \(\mathcal{X}\) and its corresponding output \(y\in\{1,\ldots,Y\}\), where \(Y\) denotes the number of classes present in \(\mathcal{D}\). Learning the parameter values \(\theta^{*}\) that best model this relationship can be accomplished by using a learning algorithm \(\theta^{*}=\text{ALG}(M;\mathcal{D}_{tr})\) that aims to minimize a measure of the difference (_loss_) between the model's output and its ground truth over the training instances in \(\mathcal{D}_{tr}\) (e.g. gradient backpropagation in neural networks). In what follows \(M_{\theta}\) is assumed to be a NN, so that \(\theta\) represent the totality of trainable weights in its neural connections.
In the context of TL for classification tasks, the NN \(M_{\theta}\) is assumed to be composed by a pre-trained feature extractor \(F_{\phi}(\mathbf{x})\) (whose parameters \(\phi\) are kept fixed while ALG(\(\cdot\)) operates), and a dense (i.e.
fully-connected) part \(G_{\theta}(\cdot)\) that maps the output of the feature extractor to the class/label to be predicted. Therefore, after tuning the trainable parameters of the network as \(\theta^{*}=\text{ALG}(G;\mathcal{D}_{tr})\), the class \(\widehat{y}\in\{1,\dots,Y\}\) predicted for an input instance \(\mathbf{x}\) is given by:
\[\widehat{y}=(F\circ G_{\theta^{*}})(\mathbf{x})=G_{\theta^{*}}(F(\mathbf{x})), \tag{1}\]
where \(\circ\) denotes composition of functions. When predictions are issued over the validation partition \(\mathcal{D}_{val}\), a measure of accuracy can be obtained by simply accounting for the ratio of correct predictions to the overall size of the set, i.e. \(\text{ACC}_{val}=(1/N_{val})\cdot\sum_{i\in\mathcal{D}_{val}}\mathbb{I}( \widehat{y}_{i}=y_{i})\), where \(\mathbb{I}(\cdot)\) equals 1 if its argument is true (0 otherwise).
Bearing this notation in mind, pruning can be defined as a binary vector \(\mathbf{p}=\{p_{j}\}_{j=1}^{P}\), where \(P\) denotes the length of the feature vectors extracted by \(F_{\phi}(\mathbf{x})\) for every input instance to the network. As such, \(p_{j}=0\) indicates that the neural links connecting the \(j\)-th component of the feature vector to the rest of the neurons in the dense part \(G_{\theta}(\cdot)\) of the network are disconnected, so that all trainable parameters from this disconnected input to the output of the overall model are pruned. Conversely, if \(p_{j}=1\) the \(j\)-th input neuron is connected to the densely connected layers of the neural hierarchy. By extending the previous notation, the training algorithm is redefined as \(\theta^{*}(\mathbf{p})=\text{ALG}(G;\mathcal{D}_{tr},\mathbf{p})\) to account for the fact that the network has been pruned as per \(\mathbf{p}\). This dependence of the trained parameters on the pruning vector propagates to the measure of accuracy over the validation instances, yielding \(\text{ACC}_{val}(\mathbf{p})\). Likewise, a measure of the reduction of the number of trainable parameters can also be computed for a given pruning vector \(\mathbf{p}\) relative to the case when no pruning is performed (i.e., \(\mathbf{p}=\mathbf{1}\doteq\{1\}_{j=1}^{P}\)) as \(\text{MEM}(\mathbf{p})=|\theta(\mathbf{p})|/|\theta(\mathbf{1})|\).
Intuitively, a good pruning strategy should balance the reduction in the number of trainable parameters against its impact on the accuracy of the modeling task at hand. Reducing the number of parameters to be stored has practical benefits in terms of memory space, and can yield a lower inference latency when the trained model is queried.
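For illustration, a minimal sketch of the pruning mask and of \(\text{MEM}(\mathbf{p})\) follows, assuming a head with a single hidden layer of 512 neurons in \(G_{\theta}(\cdot)\); the class count is a placeholder, and only the input-to-hidden weights are taken to depend on \(\mathbf{p}\):

```python
import numpy as np

def apply_pruning(features, p):
    """Disconnect the j-th extracted feature wherever p_j = 0 (element-wise mask)."""
    return features * p

def mem_ratio(p, hidden=512, n_classes=5):
    """MEM(p) = |theta(p)| / |theta(1)| for a head with one hidden layer.
    `hidden` and `n_classes` are illustrative assumptions; only the
    input-to-hidden weights depend on the pruning vector p."""
    P = len(p)
    pruned = int(p.sum()) * hidden + hidden + hidden * n_classes + n_classes
    full = P * hidden + hidden + hidden * n_classes + n_classes
    return pruned / full

p = np.random.randint(0, 2, size=2048)   # a random pruning vector over P = 2048 features
print(mem_ratio(p))
```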
A third dimension of the network that can be affected by pruning is its capacity to detect Out of Distribution (OoD) instances. A significant fraction of the techniques proposed so far for identifying query samples that deviate from the distribution of the training data rely on the network dynamics between neurons while the instance flows through the network. This is the case of ODIN (Liang et al., 2018, Aprila), BASELINE (Hendrycks and Gimpel, 2016) and ENERGY (Liu et al., 2020), among others. To quantify the capability of a pruned network \(M_{\theta^{*}(\mathbf{p})}\) to detect OoD instances, we utilize other datasets \(\boldsymbol{\mathcal{D}}^{\prime}=\{\mathcal{D}_{d}^{\prime}\}_{d=1}^{D_{OoD}}\) different from \(\mathcal{D}\), whose instances \((\mathbf{x}^{\prime},y^{\prime})\) are assumed to be representative of the OoD test instances with which the model can be queried in practice. An OoD detection technique \(T_{OoD}(\mathbf{x};M_{\theta^{*}(\mathbf{p})})\equiv T_{OoD}(\mathbf{x})\) processes the activations triggered by \(\mathbf{x}\) throughout the trained pruned model \(M_{\theta^{*}(\mathbf{p})}\) so as to decide whether \(\mathbf{x}\) is an In-Distribution (\(T_{OoD}(\mathbf{x})=0\)) or an OoD instance (corr. \(T_{OoD}(\mathbf{x})=1\)). This being said, true positive and false positive rates can be computed for \(T(\mathbf{x})\) over the test subset \(\mathcal{D}_{test}\) of \(\mathcal{D}\) and random \(N_{val}/D_{OoD}\)-sized samples drawn from every other dataset \(\mathcal{D}_{d}^{\prime}\), which can be aggregated in a compound performance metric. Among other choices for this purpose, we consider the AUROC measure AUROC(\(\mathbf{p}\)), which measures the ability of \(T(\cdot)\) to discriminate between positive and negative examples. This measure is made dependent on \(\mathbf{p}\) in accordance with previous notation, as \(T(\mathbf{x})\) operates on the neural activations stimulated by \(\mathbf{x}\).
### Objectives of MO-EvoPruneDeepTL
This section introduces the objectives that guide MO-EvoPruneDeepTL during its evolutionary process. We define them using the notation previously introduced in Subsection 3.1.
The optimization problem addressed in this work aims to discover the set of Pareto-optimal pruning vectors \(\{\mathbf{p}_{k}^{opt}\}_{k=1}^{K}\) that best balance between three objectives:
1. The modeling performance of the pruned model over dataset \(\mathcal{D}\). This performance is measured with the accuracy over the test dataset (\(\mathcal{D}_{test}\)), i.e., the percentage of correctly classified images out of the total set of images.
2. The number of active neurons left after the pruning operation. The number of active neurons corresponds with the remaining active connections after the pruning and evolutionary process.
3. The capability of an OoD detection technique to discriminate between Out-of-Distribution and In-Distribution data by inspecting the activations inside the pruned model.
Mathematically:
\[\{\mathbf{p}_{k}^{opt}\}_{k=1}^{K} =\arg_{\mathbf{p}\in\{0,1\}^{P}}\left[\max\text{ACC}_{val}(\mathbf{ p}),\min\text{MEM}(\mathbf{p}),\max\text{AUROC}(\mathbf{p})\right],\] (2) s.t. \[\mathcal{D}:\text{In-distribution dataset}, \tag{3}\] \[\mathcal{D}^{\prime}_{1},\ldots,\mathcal{D}^{\prime}_{D_{OoD}}: \text{Out-of-distribution datasets},\] (4) \[F_{\phi}(\mathbf{x}):\text{Pre-trained feature extractor},\] (5) \[T(\mathbf{x}):\text{Out-of-distribution detection technique}. \tag{6}\]
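As a reference, the non-domination relation implicit in equation (2) can be sketched as follows, with solutions encoded as (ACC, MEM, AUROC) tuples under the formulation's sign conventions (maximise ACC and AUROC, minimise MEM). The `pareto_front` helper is a naive illustrative filter, not NSGA-II's actual fast non-dominated sorting:

```python
def dominates(a, b):
    """True if solution a dominates b: no worse in every objective of
    equation (2) and strictly better in at least one.
    a, b are (acc, mem, auroc) tuples."""
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] >= b[2]
    better = a[0] > b[0] or a[1] < b[1] or a[2] > b[2]
    return no_worse and better

def pareto_front(solutions):
    """Non-dominated subset of a list of (acc, mem, auroc) tuples."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Example: the second solution is dominated by the first
print(pareto_front([(0.95, 0.10, 0.90), (0.90, 0.20, 0.85), (0.96, 0.30, 0.80)]))
```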
### Out of Distribution detector of MO-EvoPruneDeepTL
In this subsection, the technique selected to assess the OoD performance of the pruned models is presented, along with a clarification of the metric used to measure it.
Since every new child in the population of the evolutionary algorithm must have its OoD performance assessed, the chosen method should not entail a big computational burden while remaining sufficiently effective at detecting OoD samples. The technique presented in (Liang et al., 2018, Aprilb), ODIN, fulfills these requirements and is the selected one.
Before explaining ODIN, the performance metric used in this study must be clarified, namely the AUROC or _Area Under the Receiver Operating Characteristic curve_. It is a threshold-independent metric for binary classification that can be interpreted as the probability that the model ranks a random positive example higher than a random negative example. The ROC curve plots the TPR against the FPR, which stand for True Positive Rate and False Positive Rate respectively and can be computed as \(\text{TPR}=\text{TP}/(\text{TP}+\text{FN})\) and \(\text{FPR}=\text{FP}/(\text{FP}+\text{TN})\). Therefore, in order to compute the AUROC, the FPR value for every TPR needs to be calculated. In this work, TP refers to an in-distribution sample correctly classified as such, whereas a TN represents an OoD sample correctly detected by the OoD detector.
The basic principle of ODIN is to use the maximum softmax probability with temperature scaling as the OoD score for every sample, defined by the expression
\[f_{ODIN}(\mathbf{x};T)=\max_{i}(S_{i}(\mathbf{x};T))=S_{\hat{y}}(\mathbf{x};T), \tag{7}\]
where \(S_{i}(\mathbf{x};T)\) is the softmax probability of the \(i\)-th class for the input instance \(\mathbf{x}\), scaled by a temperature parameter \(T\in\mathbb{R}^{+}\). This scaled softmax can be calculated as:

\[S_{i}(\mathbf{x};T)=\frac{\exp(h_{i}(\mathbf{x})/T)}{\sum_{j=1}^{Y}\exp(h_{j}(\mathbf{x})/T)}, \tag{8}\]
where \(h_{i}(\mathbf{x})\), \(i\in\{1,\ldots,Y\}\), are the logits, i.e., the values prior to the softmax activation function. Then, in accordance with the notation presented in Subsection 3.1, the OoD detection technique \(T_{OoD}\), ODIN in this case (\(T_{ODIN}\)), outputs 1 if the instance's score is below a defined threshold, indicating that it is considered an out-of-distribution sample, and 0 otherwise:
\[\mathbf{x}\text{ belongs to}\begin{cases}\text{in-distribution}&\text{if }T_{ODIN}( \mathbf{x};T;\lambda)=0\iff f_{ODIN}(\mathbf{x};T)\geq\lambda,\\ \text{out-distribution}&\text{if }T_{ODIN}(\mathbf{x};T;\lambda)=1\iff f_{ODIN}( \mathbf{x};T)<\lambda.\end{cases} \tag{9}\]
It is important to remark that ODIN also uses an input preprocessing pipeline to further improve its performance in the OoD detection problem, but in this study it is discarded for the sake of reducing the computational burden of the algorithm.
To implement ODIN, the steps presented below must be followed. First, the model \(M_{\theta}\) must be trained using the training set \(\mathcal{D}_{tr}\) of the in-distribution dataset \(\mathcal{D}\). Then the logits of the instances of the test set \(\mathcal{D}_{test}\) must be extracted in order to calculate the temperature-scaled softmax outputs using equation (8). The OoD score of each input instance, \(f_{ODIN}(\mathbf{x};T)\), is the maximum of these scaled softmax outputs, i.e., the value corresponding to the predicted class, as expression (7) indicates.
Next, the same operation must be repeated with the out-of-distribution detection set, composed of samples drawn from every other dataset \(\mathcal{D}^{\prime}_{d}\) as indicated in Subsection 3.1. In this manner, two distributions of OoD scores are created: one for the in-distribution samples of \(\mathcal{D}_{test}\) and another for the out-of-distribution ones. The threshold on the score for each TPR is then defined by using the score distribution of test instances and equation (9). The corresponding FPR for each TPR is computed by employing the OoD distribution and the defined threshold, obtaining a set of [TPR, FPR] values that compose the ROC curve. Finally, from this curve the AUROC can be computed, thereby obtaining the desired robustness score for the model \(M_{\theta}\) and in-distribution dataset \(\mathcal{D}\).
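A compact sketch of this scoring and evaluation procedure (equations (7)-(9)) is shown below; `logits_ind` and `logits_ood` are assumed to hold the logits extracted from \(\mathcal{D}_{test}\) and from the OoD detection set, and scikit-learn's `roc_auc_score` replaces the explicit threshold sweep described above:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def odin_scores(logits, T=1000.0):
    """Equations (7)-(8): maximum temperature-scaled softmax probability per sample.
    `logits` has shape (n_samples, Y); T corresponds to Tem_ODIN in Table 2."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)            # for numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

# InD test samples are the positive class (1), OoD samples the negative class (0)
scores_ind = odin_scores(logits_ind)                # logits of D_test, assumed available
scores_ood = odin_scores(logits_ood)                # logits of the OoD set, assumed available
labels = np.concatenate([np.ones_like(scores_ind), np.zeros_like(scores_ood)])
auroc = roc_auc_score(labels, np.concatenate([scores_ind, scores_ood]))
```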
In this study, a practical approach is used to evaluate the robustness of each model: the OoD detector is applied using the datasets not covered in the training phase. However, the design of the algorithm accommodates any other dataset as an OoD evaluation dataset.
### Network characteristics of MO-EvoPruneDeepTL
In this subsection, we introduce the characteristics of the network used in MO-EvoPruneDeepTL. In our study, we follow the TL paradigm, i.e., the weights of the convolutional phase are imported, and kept fixed, from another network trained on a similar task. For that reason, the DL model works as a feature extractor: the images pass through the network, which extracts their main features. These features correspond to the neurons that are evolved by the evolutionary components of MO-EvoPruneDeepTL. In Subsection 4.2, we give more details about the feature extractor and its characteristics.
These features are the input to the last layers of our model. The classifier consists of a single fully-connected layer with 512 neurons, followed by the output layer. The input of the model corresponds to the features extracted by the network.
The fully-connected layer is converted into a sparse layer, meaning that the layer has fewer active connections. In our study, the chromosome is decoded into an adjacency matrix that defines the connections of the neurons in the layer. MO-EvoPruneDeepTL creates the architecture of the model using this matrix and then performs the pruning over the connections that constitute the input of each neuron.
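A minimal sketch of this decoding step is given below, implemented as a fixed binary mask on the input features of the dense layer (masking the rows of the dense kernel would be an equivalent option); the feature and hidden sizes follow the text, while the class count is dataset-dependent:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def build_pruned_head(chromosome, n_features=2048, hidden=512, n_classes=5):
    """Decode a binary chromosome into the sparse classifier head.
    Gene j = 0 disconnects input feature j from all hidden neurons; here this
    is realised as a fixed binary mask applied to the input vector."""
    mask = tf.constant(np.asarray(chromosome, dtype="float32"))
    inp = layers.Input(shape=(n_features,))
    x = layers.Lambda(lambda t: t * mask)(inp)        # prune inactive input neurons
    x = layers.Dense(hidden, activation="relu")(x)    # single 512-neuron layer
    out = layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inp, out)

head = build_pruned_head(np.random.randint(0, 2, 2048))
```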
### Evolutionary components of MO-EvoPruneDeepTL
In this section, we introduce the evolutionary components of MO-EvoPruneDeepTL. It is a MOEA based on the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) (Deb et al., 2002). The population of networks is evolved using the common operators of this GA but, in this case, only two individuals are used as parents in each evolution step, producing two offspring individuals. MO-EvoPruneDeepTL uses a binary encoding strategy, which represents whether a neuron is active or not: a neuron is active if its gene is 1 and inactive if it is 0. Thanks to this direct encoding, each gene uniquely determines a neuron in the decoded network.
The chromosomes are initialized with random binary values in \(\{0,1\}\), selection is done using a binary tournament method, and the replacement strategy is the dual strategy of Pareto dominance ranks and crowding distance of NSGA-II. Finally, the crossover and mutation operators are outlined below:
**Crossover**: the crossover operator used in MO-EvoPruneDeepTL is the uniform crossover. This operator defines two new individuals from two parents. Mathematically, given two parents \(\mathbf{p}=\{p_{i}\}_{i=1}^{P}\) and \(\mathbf{q}=\{q_{i}\}_{i=1}^{P}\) of length \(P\), the resulting offspring \(\mathbf{p}\)' \(=\{p^{\prime}_{i}\}_{i=1}^{P}\) and \(\mathbf{q}\)' \(=\{q^{\prime}_{i}\}_{i=1}^{P}\) (also of length \(P\)) are generated using these equations:
\[\begin{split} p^{\prime}_{i}&=\left\{\begin{array}{ ll}p_{i}&\text{if }r\leq 0.5\\ q_{i}&\text{otherwise}\end{array}\right.\\ q^{\prime}_{i}&=\left\{\begin{array}{ll}q_{i}&\text{if }r\leq 0.5\\ p_{i}&\text{otherwise}\end{array}\right.\end{split} \tag{10}\]
where \(r\) is the realization of a continuous random variable with support over the range \([0.0,1.0]\), drawn for each gene position \(i\). This operator creates two individuals using information from the genes of both parents: each position \(i\) of the new individual takes the value of the gene of \(\mathbf{p}\) or \(\mathbf{q}\) until the offspring is fully created.
**Mutation**: the mutation performed by MO-EvoPruneDeepTL is the bit flip mutation. This operator needs a mutation probability, \(mut_{p}\). For each chromosome, every gene may be flipped: if the mutation is actually performed, the value of the gene changes from active to inactive or vice versa. The parameter that controls whether a gene is flipped is \(mut_{p}\). A minimal sketch of both operators is given below.
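The sketch assumes, as is standard for uniform crossover, one independent draw of \(r\) per gene position:

```python
import numpy as np

rng = np.random.default_rng()

def uniform_crossover(p, q):
    """Equation (10): each offspring gene comes from p or q with probability 0.5."""
    r = rng.random(len(p))                 # one realization of r per gene position i
    child1 = np.where(r <= 0.5, p, q)
    child2 = np.where(r <= 0.5, q, p)
    return child1, child2

def bit_flip_mutation(chromosome, mut_p):
    """Flip each gene (active <-> inactive) independently with probability mut_p."""
    flips = rng.random(len(chromosome)) < mut_p
    return np.where(flips, 1 - chromosome, chromosome)

# Example with P = 2048 genes and mut_p = 1/P, as in Table 2
P = 2048
p, q = rng.integers(0, 2, P), rng.integers(0, 2, P)
c1, c2 = uniform_crossover(p, q)
c1 = bit_flip_mutation(c1, mut_p=1.0 / P)
```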
Next, we give a brief explanation of the process that MO-EvoPruneDeepTL performs. The data required by MO-EvoPruneDeepTL consists of the dataset for training the network and its test dataset (the InD data), together with the OoD data, so that each model can also be tested on it. Lastly, the configurations of the GA and of the network are also required.
Once all the data is gathered, the evolutionary process takes place. Algorithm 1 shows the pseudocode of MO-EvoPruneDeepTL. The process begins with the standard initialization and evaluation of the initial population (lines 1 and 2). Then, the evolution is performed: in each generation, the operators are executed sequentially. The parents are selected using the selection operator (line 4) and generate their offspring using the crossover operator (line 5), which are then mutated using the mutation operator (line 6). Both children are evaluated to obtain the values of the objectives that guide the evolutionary process. Thus, for each child, its chromosome is decoded into a sparse network (line 8), which is trained using the train set of the InD data (line 9). The information contained in the logits is then passed through the OoD detector, which determines the robustness of the child using the AUROC metric (line 10). The accuracy is calculated using the test set of the InD data (line 11), and the complexity of the network is obtained from the number of active neurons in the child chromosome (line 12). Finally, the objective values are retained as part of the information of the child for further generations (line 13).
```
Input  : InD dataset, OoD dataset, configuration of the GA and configuration of the network
Output : Evolved pruned network
 1  Initialization of the individuals of the population using the initialization operator;
 2  Evaluation of the initial population (see lines 9-14);
 3  while evaluations < max_evals do
 4      Parent selection using binary tournament;
 5      Generate offspring using uniform crossover;
 6      Mutation of individuals using the bit flip mutation;
 7      for each child p in the children population do
 8          SparseNetwork_p <- Decodification of the child chromosome;
 9          SparseTrainedNetwork_p <- Train SparseNetwork_p using the train set of the InD data;
10          AurocChild_p <- Robustness metric of the child using OoD data;
11          AccChild_p <- Accuracy of SparseTrainedNetwork_p evaluated on the test set of the InD data;
12          ComplexChild_p <- Number of active neurons in SparseNetwork_p;
13          SolutionVector(AccChild, ComplexChild, AurocChild)_p;
14          evaluations += 1;
15      end for
16      Replacement Strategy;
17  end while
```
**Algorithm 1** MO-EvoPruneDeepTL
## 4 Experimental Framework
This section describes the framework surrounding the experiments conducted in this study. Subsection 4.1 gives a detailed description of the datasets. Then, Subsection 4.2 shows the parameter values and the network setup used by MO-EvoPruneDeepTL in the experiments.
### Dataset information
In this study we have selected several datasets which fit our working environment. These datasets are a good choice for TL approaches due to their size, as the training and inference times are lower; they are thus suitable for problems involving population-based metaheuristics, in which a large number of individuals must be evaluated. We present a brief description of each dataset:
* CATARACT ([dataset]Sungjoon Choi, 2020) is a dataset related to the medical domain. It classifies different types of eye diseases.
* LEAVES ([dataset]Hafiz Tayyab Rauf et al., 2019) is a dataset composed of images of different types of leaves, from healthy to unhealthy, with different shades of green.
* PAINTING is related to the painting environment ([dataset]Virtual Russian Museum, 2018). This dataset is composed of images which represent different types of paintings.
* PLANTS is a dataset presenting a great variety of leaves and plants, ranging from tomato or corn plants to other leaves, among others ([dataset] Singh et al., 2020, May).
* RPS ([dataset]Laurence Moroney, 2019) is a dataset whose purpose is to distinguish the gesture of the hands in the popular Rock Paper Scissors game from artificially-created images with different positions and skin colors.
* SRSMAS is based on the marine world; its aim is to classify different coral reef types ([dataset] Gomez-Rios et al., 2019).
Fig. 1 shows some examples from several of the above datasets.
Finally, we highlight the main characteristics of each dataset in quantitative terms: instances, classes, and metrics obtained with non-pruned networks. Table 1 shows these numbers.
Figure 1: Images of datasets. Left: LEAVES examples. Middle: RPS examples. Right: SRSMAS examples.
### Training and network setup
In this subsection, we describe both the training and network setup of MO-EvoPruneDeepTL. First, we explain how our datasets are split. Then, the network setup is presented. Lastly, we discuss the parameters of MO-EvoPruneDeepTL.
In this study, we use six different datasets in our experiments. We need to split the images of these datasets into train and test subsets, as the evaluation of MO-EvoPruneDeepTL requires it. For the datasets without predefined partitions we created a 5-fold cross-validation evaluation environment, while for the rest of the datasets the train and test subsets had already been predefined.
Another component of MO-EvoPruneDeepTL is the network used throughout the experiments. In our case, we have chosen ResNet-50 as the pre-trained network. This choice was made to balance the number of features, which determines the size of the combinatorial search space, against the performance obtained in the TL process. The combinatorial problem can be huge for the feature extractor sizes commonly used in problems where TL is applied. Using this network yields feature vectors \(F_{\phi}(\mathbf{x})\) comprising \(P=2,048\) components, leading to a total of \(2^{2,048}\approx 3.23\cdot 10^{616}\) possible pruning patterns. Furthermore, the evaluation of pruned networks during the search requires repeated training over the instances in the training subset, which can be computationally expensive. Note that, although a CNN model is used in our experiments, the pruning can also be performed with other types of architectures, such as Long Short-Term Memory (LSTM) networks (Wang et al., 2019).
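A sketch of this feature-extraction stage with a frozen ImageNet-pretrained ResNet-50 follows; the global average pooling that yields the \(P=2,048\)-dimensional vectors is an assumption consistent with the stated feature size:

```python
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

# Frozen ImageNet-pretrained ResNet-50 acting as the feature extractor F_phi
extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")
extractor.trainable = False

def extract_features(images):
    """images: float tensor of shape (n, H, W, 3), e.g. (n, 256, 256, 3) for CATARACT.
    Returns the P = 2048-dimensional feature vectors that the GA prunes."""
    return extractor(preprocess_input(images), training=False)
```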
These extracted features are passed through the last layers, which are the layers to be trained. The model with the highest accuracy on the training set is saved. The optimizer of the training environment is standard SGD. The parameters of MO-EvoPruneDeepTL are shown in Table 2. The maximum number of training epochs is 600, but the training phase stops if no improvement is achieved in ten consecutive rounds. The last important parameter appears in the OoD phase: it is called _Tem\({}_{ODIN}\)_ and it controls how the softmax values are computed from the logits of the InD and OoD data. Several values were tested for this parameter, and the best one according to the authors was selected.
The last contribution of this section is the discussion of the parameters of MO-EvoPruneDeepTL. The maximum number of evaluations of MO-EvoPruneDeepTL is set to 200, and the size of the population of networks for each generation is 30. Table 3 shows the evaluation time for each individual, so that the total time
| **Dataset** | **Image Size** | \(L\) (**# classes**) | **# Instances (train / test)** | **Accuracy (No Pruning)** | **AUROC (No Pruning)** |
|---|---|---|---|---|---|
| CATARACT | (256, 256) | 4 | 480 / 121 | 0.732 | 0.870 |
| LEAVES | (256, 256) | 4 | 476 / 120 | 0.935 | 0.960 |
| PAINTING | (256, 256) | 5 | 7721 / 856 | 0.951 | 0.990 |
| PLANTS | (100, 100) | 27 | 2340 / 236 | 0.480 | 0.820 |
| RPS | (300, 300) | 3 | 2520 / 372 | 0.954 | 0.934 |
| SRSMAS | (299, 299) | 14 | 333 / 76 | 0.885 | 0.999 |

Table 1: Datasets used in the experiments.
| **Parameter** | **Value** |
|---|---|
| Maximum Evals | 200 |
| # Runs | 10 |
| Population size | 30 |
| \(p_{mut}\) | \(1/P\) |
| Batch Size | 32 |
| Tem\({}_{ODIN}\) | 1000 |

Table 2: Parameters of MO-EvoPruneDeepTL.
of execution is the time in the first column multiplied by the number of evaluations. Each OoD detection requires one minute but, in the datasets with 5-fold cross-validation, this time reaches five minutes. We also indicate the inference time for test and the time required to calculate the AUROC metric in the OoD phase. Those times force us to keep a low number of runs and evaluations to meet a computationally affordable balance between the performance of our models and the high execution times required for our simulations. Moreover, although statistical tests are important to assess the significance of the differences in the results, due to this limited number of runs we cannot apply them, as a large number of runs is required to achieve statistically reliable insights.
The experiments have been carried out using Python 3.6 and a Keras/Tensorflow implementation deployed and running on a Tesla V100-SXM2 GPU.
## 5 Results and Discussion
This section analyzes the behavior of MO-EvoPruneDeepTL. To this end, we define three research questions (RQ), which are answered in the following subsections with diverse experiments over the previous datasets. We show and analyze several plots to illustrate the benefits of MO-EvoPruneDeepTL. The RQ can be stated as follows:
* How are the approximated Pareto fronts produced by the proposal in each of the considered datasets? The Pareto front can be defined as the set of non-dominated solutions, chosen as optimal because no objective can be improved without sacrificing at least one other objective. The problem at hand is approached with a multi-objective formulation. For that reason, we want to check not only that we have promising solutions at the extreme values of the Pareto front, but also that we have a wide population of diverse solutions along the whole front. To answer this RQ, we analyze the Pareto front for each dataset and whether there exists any direct connection between the objectives of the study: accuracy, complexity of the network, and robustness.
* Is there any remarkable pruning pattern that appears in all the solutions of the Pareto front? We compare the pruning patterns of all the models in the Pareto fronts of MO-EvoPruneDeepTL to show whether there are important patterns that help identify the most relevant zones of the input images. We employ a well-known technique called Grad-CAM (Selvaraju et al., 2017, October), which uses the gradient of the classification score with respect to the convolutional features of the network to determine which parts of the image are most important for the classification task. Grad-CAM belongs to the group of Explainable Artificial Intelligence (XAI) techniques, as it makes it easy to understand which neurons are the relevant ones in all the experiments (Barredo Arrieta et al., 2020). These neurons map to specific pixels, or groups of pixels, in the original images that are passed through the network.
| Dataset | Total | Evaluation | Training and Inference | OoD Detection |
| --- | --- | --- | --- | --- |
| CATARACT | 332 min | 1.66 min | 0.66 min | 1 min |
| LEAVES | 1600 min | 8 min | 3 min | 5 min |
| PAINTING | 1700 min | 8.5 min | 7.5 min | 1 min |
| PLANTS | 800 min | 4 min | 3 min | 1 min |
| RPS | 900 min | 4.5 min | 3.5 min | 1 min |
| SRSMAS | 1500 min | 7.5 min | 2.5 min | 5 min |

Table 3: Average time in evaluations of MO-EvoPruneDeepTL.
* Do our models achieve an overall improvement in performance when merged through ensemble modelling? MO-EvoPruneDeepTL trains a great variety of models, which leads to a wide diversity of models in the Pareto front for each dataset. The aim of this RQ is to check whether the diversity of pruning patterns in the Pareto front can be exploited to improve our DL models through ensemble strategies. We check whether an ensemble of differently pruned models yields more accurate predictions, leading to better overall performance than its constituent models in isolation. Beyond improving accuracy through ensembles, we also explore whether ensemble modeling produces more robust models, so that fewer OoD samples are wrongly predicted as InD by the network.
This section is organized as follows: Section 5.1 analyzes the Pareto fronts for each considered dataset in order to answer RQ1. Next, Section 5.2 examines the pruning patterns of our models. Specifically, we look for the neurons that appear in most of them and highlight the essential zones of the input images, which is the key to answering RQ2. Lastly, Section 5.3 discusses the benefits of the diversity of our models under ensemble modeling, showing whether an improvement in terms of accuracy and AUROC is achieved, which is the focus of RQ3.
### Answering RQ1: Analyzing the Pareto fronts of MO-EvoPruneDeepTL
The objective of this section is to answer RQ1 by performing a complete analysis of the Pareto fronts of MO-EvoPruneDeepTL. This analysis focuses on two important aspects: i) what do the Pareto fronts look like for each dataset? and ii) what do the projections onto each objective look like for each dataset? Note that for each dataset, the Pareto front shown is the so-called super Pareto front, obtained by pooling the solutions of the Pareto front of each run and computing the Pareto front over these pooled solutions. Moreover, these Pareto fronts include the results of the non-pruned network for each dataset, i.e., solutions with all neurons active and the accuracy and AUROC shown in Table 1.
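As an illustration of how such a super Pareto front can be computed, the following Python sketch pools the solutions of all runs and applies a straightforward non-dominated filter. The three-objective encoding and the helper name are assumptions made here for clarity, not the authors' implementation.

```python
import numpy as np

def super_pareto_front(solutions):
    """Pool the solutions of all runs and keep only the non-dominated ones.
    Each row is (accuracy, AUROC, active_neuron_ratio): the first two
    objectives are maximised and the third is minimised."""
    sols = np.asarray(solutions, dtype=float)
    obj = sols * np.array([1.0, 1.0, -1.0])  # turn everything into maximisation
    keep = [
        i for i, s in enumerate(obj)
        if not any(np.all(o >= s) and np.any(o > s)
                   for j, o in enumerate(obj) if j != i)
    ]
    return sols[keep]

# Example: the second solution is dominated by the first and is filtered out.
front = super_pareto_front([(0.95, 0.90, 0.10), (0.90, 0.85, 0.20)])
```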
With these plots we analyze the quality of each Pareto front and, in particular, the full spectrum of solutions that can be achieved in each of them. Moreover, we study whether there is a direct relationship between any of the objectives formulated in the previous sections. To produce these plots, we collected all the solutions of the super Pareto front (called Pareto front from now on), selecting the 10% best solutions for each objective in order to compute their projections.
These Pareto fronts are presented in Figures 2 and 3. We can observe the diversity of pruning patterns produced by MO-EvoPruneDeepTL. Moreover, another insight from these Pareto fronts emerges when we inspect the extreme values of each objective, as they systematically achieve good results in each dataset. Most
Figure 2: Pareto fronts of MO-EvoPruneDeepTL. Left: CATARACT dataset. Middle: RPS dataset. Right: PAINTING dataset.
of the solutions obtain high values of accuracy and robustness while the number of remaining active neurons is kept low.
First, we focus on the central part of the 3D projections, in which we visualize the three objectives. Our goal is to detect whether some kind of relationship exists between them. We clearly see that, in all the Pareto fronts, the solutions concentrate towards the upper corner, where the percentage of active neurons is low but accuracy and AUROC are high. Moreover, the distribution of the points in both groups of figures indicates a tendency of the solutions towards the plane in which the number of active neurons is low.
Analyzing the two-dimensional planes, there is no clear relationship between performance and robustness. Nonetheless, the common element of these projections, namely the complexity of the network, can be related to performance and robustness separately. In both cases, a minimum number of active neurons is needed before good results in each objective start to appear. For both objectives, there is a certain optimal range of active neurons in which each of them obtains its best values.
The Pareto fronts shown in this section have allowed us to obtain valuable information on the different executions of MO-EvoPruneDeepTL. The configuration of MO-EvoPruneDeepTL has produced a fairly diverse set of solutions, with competitive solutions at the extreme values of the different objectives of the study. A second conclusion drawn from this analysis is the existence of a direct relationship between the complexity of the network and its performance, and between the complexity and its robustness, whereas no such relationship appears to exist between performance and robustness.
### Answering RQ2: Remarkable pruning patterns in the Pareto fronts of MO-EvoPruneDeepTL
This RQ aims to analyze whether certain pruning patterns, shared across the different trained networks, allow detecting the regions of the input images that are important to the pruned networks.
In order to answer this RQ, we must identify the relevant neurons that appear in most of the pruning patterns in the Pareto fronts produced by MO-EvoPruneDeepTL. To do so, we resort to an XAI technique called Grad-CAM (Selvaraju et al., 2017, October), which localizes the regions within the image that are responsible for a class prediction. Thanks to Grad-CAM, we can work backwards from the neurons of the solutions and highlight these key pixel regions. For each dataset, we depict several query images and report the 10 most relevant neurons as per Grad-CAM, together with their distribution across the three objectives. In the following figures, the central sections are relevance heatmaps obtained by Grad-CAM, highlighting the most influential zones of the input images with warmer colors. In addition to the heatmaps, we also present two more plots. The first one, at the top left, shows as a bar diagram the indices of the 10 most relevant neurons, i.e., those active in most of the solutions of the Pareto front, with their relative frequency. The second one, on the right, shows the distribution of the objective values for these representative neurons. In this chart, a boxplot is shown for each objective and neuron: from left to right, accuracy, percentage
Figure 3: Pareto fronts of MO-EvoPruneDeepTL. Left: LEAVES dataset. Middle: PLANTS dataset. Right: SRSMAS dataset.
of active neurons and AUROC, respectively. The border colors of the heatmaps match the colors of the bars, so that the reader can associate each heatmap with the corresponding neuron and its frequency of appearance in the Pareto front.
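For reference, the following sketch shows the usual Keras formulation of Grad-CAM used to produce such heatmaps. The model, the convolutional layer name, and the helper name are hypothetical, and the authors' actual implementation may differ in its details.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index):
    """Grad-CAM: weight each feature map of the chosen convolutional layer by
    the spatially averaged gradient of the class score, then apply ReLU."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))          # GAP over height and width
    cam = tf.nn.relu(
        tf.reduce_sum(weights[:, None, None, :] * conv_out, axis=-1))[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()    # heatmap normalised to [0, 1]
```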
Figure 4 shows this information for the CATARACT dataset. In the barplot, we can see that the least frequent of these neurons appears in 60% of the solutions of the Pareto front, while the most frequent one appears in more than 80% of them. The boxplots show the distribution of the objectives for the solutions that contain these relevant neurons: such solutions exhibit low complexity together with high accuracy and AUROC. Lastly, the heatmaps for this dataset show the pruning patterns that MO-EvoPruneDeepTL achieves during its evolutionary process. These patterns let us recognize how the network determines the class of each image thanks to these ten most important neurons.
Figure 5 shows the results for the RPS dataset. The bar graph shows that these neurons appear in more than 80% of the solutions of the Pareto front, and the boxplots confirm that the solutions in which these neurons are present achieve, in most cases, less than 10% of active neurons, accuracies near 90%, and AUROC around 80%. The example images shown in the heatmaps illustrate the effect of these important neurons. As previously noted, the keys to recognizing the images are the position of the fingers and even the separation between them, as warmer colors are concentrated there.
The next dataset is PAINTING; Figure 6 shows the corresponding set of graphics. The relevant neurons for PAINTING appear in at least 70% of the solutions of the Pareto front, with a significant gap in frequency between the most relevant neuron and the rest. These solutions present almost 20% of active neurons, but also high performance both in accuracy and AUROC, between 90% and 100%, which indicates a high level of uniformity in the robustness for this dataset. These neurons help us analyze the images of this dataset. The third image presents a woman and, looking closely at the heatmaps, we see that the network first recognizes the face and then the outer parts, such as the arms and the hair. Another interesting image is the fifth one, where the network recognizes the chest as well as the arms and the other body extremities.
We continue our analysis of the pruning patterns obtained by MO-EvoPruneDeepTL with the PLANTS dataset, shown in Figure 7. The most important neurons have an appearance rate between 60% and 80% across the solutions of the Pareto front. The distribution of the objectives reveals a very low network complexity, near 10% on average, with good results for this dataset both in accuracy and AUROC. This dataset contains images of leaves and plants of fruits and vegetables and, for that reason, our network focuses on recognizing the shape of these leaves, as shown in the three bottom images of the figure.
We continue this analysis with the LEAVES dataset; Figure 8 shows its three graphics. The first one, related to the relevance of the neurons, exhibits two neurons that appear in all the solutions of the Pareto front and another two that appear in almost 100% of them. For those neurons, the boxplot chart reports similar distributions because, in all cases, the number of remaining active neurons is kept low while accuracy and AUROC are high. Lastly, the images from this dataset show both diseased and healthy leaves. The pruning patterns achieved by MO-EvoPruneDeepTL are able to distinguish the healthy from the diseased leaves (last image versus the third one from the top), and then the type of disease.
The last dataset is SRSMAS, whose charts are presented in Figure 9. The most relevant neurons appear in at least 60% of the solutions of the Pareto front, which has been a constant across all the datasets. Moreover, the distribution of the objectives for the solutions in which these neurons are active shares a common trait: high values of both performance and robustness together with low network complexity. These neurons draw pruning patterns that identify the class of the input images. As an example, in the fourth image it is only necessary to recognize the silhouette of the coral reef, but in the fifth one the network needs to understand the central part of the coral reef and then its extremities.
Across these figures, we have seen datasets in which the appearance rates of the most important neurons are close to each other (RPS and PLANTS), but also datasets in which the difference between the most and least relevant neurons reaches up to 20%. Moreover, the distribution of the objectives for each dataset gives good insights into the uniformity of performance and robustness in most of the datasets.
The good work done by MO-EvoPruneDeepTL in training the models has made it possible to achieve
Figure 4: Bars, boxplots and heatmaps of CATARACT.
Figure 5: Bars, boxplots and heatmaps of RPS.
Figure 6: Bars, boxplots and heatmaps of PAINTING.
Figure 7: Bars, boxplots and heatmaps of PLANTS.
Figure 8: Bars, boxplots and heatmaps of LEAVES.
Figure 9: Bars, boxplots and heatmaps of SRSMAS.
remarkable pruning patterns. These have helped us not only to identify the neurons that have been key throughout the training and inference process, but also to locate in the input images the groups of influential pixels that have been decisive in determining the class of each image.
### Answering RQ3: Quality of the models through ensemble modeling
This last section formally answers RQ3, which asks whether the diversity of the models trained by MO-EvoPruneDeepTL lets us obtain new models that enjoy the benefits of ensembling. These benefits are models with better predictions, and better performance, than any single contributing model, as well as a reduced dispersion of the predictions and of the model performance.
In our study, we selected three of the six datasets for the experiments that answer this RQ: CATARACT, PAINTING, and RPS. The objective of the following experiments is to show how ensemble modeling performs in terms of making better predictions and how it helps misidentify fewer OoD samples as InD, which leads to an increase in AUROC. In this section, we analyze the behavior of the ensemble by quantiles, in order to give more information about the models in different ranges of accuracy and AUROC.
Before showing the graphs that result from the ensembles, we explain how to read them so that the meaning of each one can be clearly understood. We selected quantiles for the minimum and maximum individual accuracy/AUROC, ranging from 50 to 95. We then divided each graph into 8 zones, each defined by a pair of minimum and maximum quantiles \((Q_{min},Q_{max})\), where \(Q_{min}=50\%,55\%,60\%,\ldots,85\%\) and \(Q_{max}=60\%,65\%,70\%,\ldots,95\%\). This division yields the intervals \((50\%,60\%),(55\%,65\%),\ldots,(85\%,95\%)\), which delimit the accuracy or AUROC of the models that participate in the ensemble. Each interval carries three symbols: a rectangle showing the distribution of the individual accuracy/AUROC values, a square marking the best individual result for the measure at hand, and a star representing the accuracy/AUROC of the ensemble.
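The following sketch illustrates how such quantile-interval ensembles could be assembled. Soft voting (averaging the softmax outputs of the member models) is assumed here, since the exact combination rule is not detailed above, and all names are illustrative.

```python
import numpy as np

def quantile_interval_ensembles(scores, probs, y_true):
    """For each (Q_min, Q_max) zone, ensemble the models whose individual
    score lies inside the interval and report the resulting accuracy.
    `scores` holds one accuracy (or AUROC) value per model; `probs` holds the
    per-model softmax outputs, shape (n_models, n_inputs, n_classes)."""
    results = {}
    for q_lo in range(50, 90, 5):                  # 50, 55, ..., 85
        q_hi = q_lo + 10                           # 60, 65, ..., 95
        lo, hi = np.percentile(scores, [q_lo, q_hi])
        members = [i for i, s in enumerate(scores) if lo <= s <= hi]
        if members:
            avg = np.mean([probs[i] for i in members], axis=0)  # soft voting
            results[(q_lo, q_hi)] = np.mean(avg.argmax(axis=1) == y_true)
    return results
```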
With this explanation, we can interpret the two groups of graphs that result from the ensembles. The first one is related to the accuracy of the network and is shown in Figure 10, which presents three charts: the first corresponds to CATARACT, the second to RPS, and the third to PAINTING. Each of them shows, for each quantile interval, the distribution of individual accuracies, the maximum of the distribution, and the accuracy of the ensemble.
The overall performance in the three cases is positive, since the diversity of the models allows us to find new models that improve the accuracy in each quantile interval, except for RPS, where we only have one model in the interval \((60\%,70\%)\). In the RPS case, we have models near 90% accuracy and the ensemble produces a new model with almost 96%, which is a great result. Moreover, ensembles of models with higher accuracy (96% or more) achieve close to 100% accuracy. For PAINTING (the chart on the right), most of the ensemble models reach 95% accuracy, whereas their individual constituent models present lower accuracy values. As a result, these charts show the benefits of ensemble modeling for the accuracy objective.
The second part of this section replicates the previous experiment for the case of OoD detection, in order to check whether the AUROC improves under ensemble modeling. If so, we can confirm that the new model misidentifies fewer OoD samples as InD, which improves the associated metric.
The interpretation of this set of charts (Figure 11) is the same as in the previous case: the ensemble is built from the models in each interval, and each chart shows the distribution of individual AUROC values, its maximum, and, marked with a star, the AUROC of the ensemble. The CATARACT case shows an improvement in AUROC in all intervals but one. For the RPS case (middle chart), the \((85\%,95\%)\) interval achieves almost 95% AUROC, while the individual values reach a maximum of 87%. The PAINTING dataset also presents outstanding results: the minimum ensemble AUROC across all intervals is above 98.5%, whereas the lowest value among individual models is below 97%. The results obtained from these charts are similar to those obtained for accuracy, since the ensembles improve on the individual results in the vast majority of the intervals.
In this section, we have conducted two experiments involving ensemble modeling of the models trained by MO-EvoPruneDeepTL. The ensembles have been built taking into account the performance of the
Figure 11: Ensemble modeling of the models trained by MO-EvoPruneDeepTL in terms of OoD detection. Left: CATARACT dataset. Middle: RPS dataset. Right: PAINTING dataset.
Figure 10: Ensemble modeling of the models trained by MO-EvoPruneDeepTL in terms of accuracy. Left: CATARACT dataset. Middle: RPS dataset. Right: PAINTING dataset.
network and its robustness, with the freedom to choose the interval of values for each measure. The results drawn from these charts show that both objectives have been improved. Performing a MO search not only provides the user with a wide range of models that balance the three stated objectives, but also achieves more diversity among the models, so that they can be ensembled to achieve even higher performance and robustness.
## 6 Conclusions
This paper has introduced MO-EvoPruneDeepTL, a MONAS model that evolves the sparse layers of a DL model instantiated using the TL paradigm. MO-EvoPruneDeepTL uses a MOEA to evolve these sparse layers, obtaining pruned layers adapted to the problem at hand and deciding which neurons need to be active or inactive.
MO-EvoPruneDeepTL operates on the features extracted by the pre-trained network in order to train the last layers to tackle the considered problem. Our results support two conclusions about the Pareto fronts: there exists a great diversity in the solutions, and they establish promising values for the objectives at their extreme values. Moreover, the projections onto each objective shed light on the existence of a direct relationship between the complexity of the network and each of the other two objectives (performance and robustness), whereas there is no direct relationship between the latter two. This work falls under the umbrella of OWL because the evolved models are queried with new data, namely the OoD datasets. Moreover, OWL is related to GPAI and, in particular, the experiments in this manuscript have shown the capability of AI generating AI, as the MOEA has learnt from the trained DL models.
The models trained by MO-EvoPruneDeepTL lead to several pruning patterns in which certain neurons appear in most of the best solutions of the Pareto front. These patterns help us recognize the key regions of the input images that our models consider most important when assigning a class to an input image at inference time.
The diversity of the models of MO-EvoPruneDeepTL has shown that ensemble modeling is able to increase the overall quality of the models, both in network performance and robustness, in most of the quantile intervals of minimum and maximum objective values considered.
The evolved trained models have shown great performance with a minimal number of active neurons, and the results also highlight the important contribution of robustness, as each DL model is tested with data it has not previously seen. Moreover, the objectives of the MOEA have been performance, complexity, and robustness, but other alternatives can be formulated as objectives, such as the latency or GPU energy consumption during inference of the pruned model, or the epistemic uncertainty level.
## Acknowledgments
F. Herrera, D. Molina and J. Poyatos are supported by the Andalusian Excellence project P18-FR-4961, the infrastructure project with reference EQC2018-005084-P and the R&D and Innovation project with reference PID2020-119478GB-I00 granted by the Spain's Ministry of Science and Innovation and European Regional Development Fund (ERDF). Aitor Martinez and Javier Del Ser would like to thank the Basque Government for the funding support received through the EMAITEK and ELKARTEK programs, as well as the Consolidated Research Group MATHMODE (IT1456-22) granted by the Department of Education of this institution.
|
2308.01311 | TEASMA: A Practical Methodology for Test Adequacy Assessment of Deep
Neural Networks | Successful deployment of Deep Neural Networks (DNNs) requires their
validation with an adequate test set to ensure a sufficient degree of
confidence in test outcomes. Although well-established test adequacy assessment
techniques have been proposed for DNNs, we still need to investigate their
application within a comprehensive methodology for accurately predicting the
fault detection ability of test sets and thus assessing their adequacy. In this
paper, we propose and evaluate TEASMA, a comprehensive and practical
methodology designed to accurately assess the adequacy of test sets for DNNs.
In practice, TEASMA allows engineers to decide whether they can trust
high-accuracy test results and thus validate the DNN before its deployment.
Based on a DNN model's training set, TEASMA provides a procedure to build
accurate DNN-specific prediction models of the Fault Detection Rate (FDR) of a
test set using an existing adequacy metric, thus enabling its assessment. We
evaluated TEASMA with four state-of-the-art test adequacy metrics:
Distance-based Surprise Coverage (DSC), Likelihood-based Surprise Coverage
(LSC), Input Distribution Coverage (IDC), and Mutation Score (MS). Our
extensive empirical evaluation across multiple DNN models and input sets such
as ImageNet, reveals a strong linear correlation between the predicted and
actual FDR values derived from MS, DSC, and IDC, with minimum R^2 values of
0.94 for MS and 0.90 for DSC and IDC. Furthermore, a low average Root Mean
Square Error (RMSE) of 9% between actual and predicted FDR values across all
subjects, when relying on regression analysis and MS, demonstrates the latter's
superior accuracy when compared to DSC and IDC, with RMSE values of 0.17 and
0.18, respectively. Overall, these results suggest that TEASMA provides a
reliable basis for confidently deciding whether to trust test results for DNN
models. | Amin Abbasishahkoo, Mahboubeh Dadkhah, Lionel Briand, Dayi Lin | 2023-08-02T17:56:05Z | http://arxiv.org/abs/2308.01311v4 | TEASMA: A Practical Approach for the Test Assessment of Deep Neural Networks using Mutation Analysis
###### Abstract.
Successful deployment of Deep Neural Networks (DNNs), particularly in safety-critical systems, requires their validation with an adequate test set to ensure a sufficient degree of confidence in test outcomes. Mutation analysis, one of the main techniques for measuring test adequacy in traditional software, has been adapted to DNNs in recent years. This technique is based on generating mutants that aim to be representative of actual faults and thus can be used for test adequacy assessment. In this paper, we investigate for the first time whether mutation operators that directly modify the trained DNN model (i.e., post-training) can be used for reliably assessing the test inputs of DNNs. We propose and evaluate _TEASMA_, an approach based on post-training mutation for assessing the adequacy of DNN's test sets. In practice, _TEASMA_ allows engineers to decide whether they will be able to trust test results and thus validate the DNN before its deployment. Based on a DNN model's training set, _TEASMA_ provides a methodology to build accurate prediction models of the Fault Detection Rate (FDR) of a test set from its mutation score, thus enabling its assessment. Our large empirical evaluation, across multiple DNN models, shows that predicted FDR values have a strong linear correlation (\(R^{2}\geq 0.94\)) with actual values. Consequently, empirical evidence suggests that _TEASMA_ provides a reliable basis for confidently deciding whether to trust test results or improve the test set.
Deep Neural Network, Test Assessment, Mutation Analysis
## 1. Introduction
Effective testing of Deep Neural Networks (DNNs) is essential to ensure they can be trusted, particularly as they are increasingly deployed in safety-critical systems. However, to rely on testing outcomes, it is crucial to assess the adequacy of the test set with which the DNN is to be validated. In the context of DNN testing, several techniques have been proposed for test adequacy assessment, most of which rely on coverage criteria (Louis et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019), inspired by practice in testing traditional software. However, these coverage criteria have shown no significant correlation with the number of mispredicted inputs or detected faults in a test set (Li et al., 2019; Li et al., 2019; Li et al., 2019), thus calling into question whether they are reliable criteria for test assessment of DNN models.
Mutation analysis, one of the most used techniques for test assessment in traditional software (Dadkhah and Kast, 2018; Li et al., 2019), has been adapted to DNN models in recent years. DNN-specific Mutation Operators (MO) have been proposed (Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019) since traditional MOs are not directly applicable to DNNs (Li et al., 2019). Indeed, developing a DNN model entails applying a training process relying on both a training program and a training set. Therefore, researchers have proposed MOs that fall into two primary categories: post-training MOs (Li et al., 2019; Li et al., 2019), which mutate the DNN model obtained after the training process, and pre-training MOs (Li et al., 2019; Li et al., 2019), which mutate either the training program or the training set and use these mutated versions in the training process to generate mutated models.
Post-training MOs were introduced by mutation testing frameworks for DNN models: MuNN (Li et al., 2019) and DeepMutation (Li et al., 2019). Additionally, DeepMutation introduced a set of pre-training MOs. However, DeepMutation's authors later introduced DeepMutation++ (Li et al., 2019), a mutation testing tool exclusively implementing the post-training MOs. Hembatova _et al._(Hembatova et al., 2019) further expanded the concept of pre-training MOs by investigating real faults in DNN models (Li et al., 2019) and proposing corresponding, more realistic pre-training MOs. They also introduced a statistical approach to determine whether a mutant generated by pre-training MOs is killed (Li et al., 2019). However, this approach requires generating many instances of the original DNN model and each mutated model by repeating the training process many times for each model. Such an expensive procedure makes the cost of mutation testing impractical in many situations.
The use of mutation analysis in traditional software is motivated by the strong association between mutants and faults reported in the literature (Li et al., 2019; Li et al., 2019; Li et al., 2019). Regarding DNNs, the ability of MOs to generate killable and non-trivial mutants has been thoroughly investigated (Li et al., 2019; Li et al., 2019). However, the association between the mutants and DNN faults, and thus the ability to assess a test set based on its Mutation Score (MS), has never been studied. In this paper, we address this gap by investigating such an association for mutants generated from post-training MOs, motivated by their much lower cost compared to pre-training MOs. For this purpose we investigate, across a number of models and input subsets from training sets, the
relationship between MS and Fault Detection Rates (FDR) considering various MS definitions proposed in the literature (Beng et al., 2017; Chen et al., 2018). We perform our experiments using post-training MOs implemented by DeepMutation++, a State-of-The-Art (SoTA) mutation testing tool for Feed-Forward Neural Networks (FNNs). We show that there is a strong correlation between FDR and MS indicating that mutants generated by post-training MOs are highly associated with faults.
Building on correlation results, in this paper, we propose _TEASMA_, a practical approach to assess the adequacy of test sets for DNN models using mutation analysis based on post-training MOs. Following _TEASMA_, engineers can develop a specific FDR prediction model, along with Prediction Intervals (PI), for each DNN model based on its training set. This prediction model can be used by engineers to predict a test set's FDR based on its MS and decide, also considering the prediction's PI, whether a test set is adequate for validation in their context. Developing _TEASMA_ is further motivated by the costly and time-consuming process of preparing large test sets, whose labeling usually requires domain experts. Using _TEASMA_, a test set is assessed before labeling and execution, solely based on the training set used for training the DNN model. Test inputs can then be labeled and executed once the test set is deemed adequate, thus preventing the waste of testing resources. Further, if sufficiently fast and when MS is shown to accurately predict FDR, _TEASMA_ could be used to automatically guide test selection.
We evaluated _TEASMA_ on five widely used DNN models and four image recognition input sets. We compared, for a large number of test subsets, their predicted FDR with the actual one and we could observe, in all cases, a very high linear correlation (\(R^{2}\geq 0.94\) ) between the two, with a slope close to 1 for the regression line. We also show that our prediction models have narrow PIs, implying that the predicted FDR can confidently be used as a basis for deciding about the adequacy of a test set. The key contributions of our paper are summarized as follows:
* An empirical analysis of SoTA post-training MOs showing that the MS calculated based on these mutants is strongly correlated with FDR.
* _TEASMA_, a practical approach utilizing post-training MOs for assessing a test set before validation and deployment of a DNN, which is based on regression analysis using the training set with the objective to predict the FDR of test sets as a basis for assessing their adequacy.
* An empirical evaluation of _TEASMA_ on five widely used DNN models and four image input sets, demonstrating its ability to accurately predict a test set's FDR based on its MS, relying on a prediction model built during training.
The remainder of this paper is structured as follows: Section 2 provides background and discussions of the related work directly relevant to our research questions. Section 3 presents our proposed approach for test adequacy assessment. Section 4 describes the experiments we performed to assess our approach. Section 5 presents and discusses the results for each research question, and Section 6 concludes the paper.
## 2. Background and Related Work
This section is divided into four parts. In the first part, we briefly describe the proposed mutant generation processes for DNNs. The second part summarizes different ways of identifying killed mutants and calculating MS. In the third part, we discuss the association between mutants and faults and point out the challenge of investigating such association in DNNs. The last part describes a SoTA approach that can be leveraged to overcome this challenge and perform our experiments.
### Mutant Generation
Applying mutation testing to DNN-based systems requires defining specific operators for DNN models (Krizhevsky et al., 2014). The proposed MOs can be broadly classified into two primary categories, namely pre-training (Krizhevsky et al., 2014; Chen et al., 2018; Chen et al., 2018) and post-training (Beng et al., 2017; Chen et al., 2018; Chen et al., 2018) operators, as illustrated in Figure 1. Pre-training MOs, also called source-level MOs, slightly modify the original version of either the program or the input set that has been used to train the original DNN model under test. These modified versions are then used by the training process to generate mutated models (i.e., mutants). Mutation testing using pre-training MOs is computationally expensive since it entails repeating the training process for generating each mutant. On the other hand, post-training MOs, also referred to as model-level MOs, modify the already trained original model, eliminating the need for additional training to generate mutants.
Ma _et al._(Ma et al., 2018) proposed DeepMutation, the first mutation testing framework for DNNs that includes both pre- and post-training MOs. Pre-training operators are classified into data-level and program-level operators whereas post-training operators are classified into weight-level, neuron-level, and layer-level operators, depending on the specific component of the DNN model that they modify. The DeepMutation++ tool (Beng et al., 2017) was later introduced by the same authors implementing only the post-training operators. Shen _et al._(Shen et al., 2018) proposed the MuNN framework, including five post-training operators that modify neurons, activation function, bias values, and weight values of a DNN model. Since post-training MOs directly modify the DNN model, not all of them are applicable to any DNN model. For example, applying the Layer Deactivation (\(LD\)) operator of DeepMutation (Ma et al., 2018) on some layers may break the model's structure. Therefore, DeepMutation restricts the application of this operator to only those layers whose input and output shapes are consistent, if the model has such layers.
Humbatova _et al._(Humbatova et al., 2018) proposed DeepCrime, a mutation testing tool that includes 24 pre-training mutation operators inspired by real faults in deep learning systems (Krizhevsky et al., 2014; Chen et al., 2018; Chen et al., 2018). The main goal was to obtain more realistic mutation operators. They also proposed a statistical approach (Krizhevsky et al., 2014) for identifying killed mutants that requires multiple instances of each mutant. Therefore, DeepCrime repeats the training process on the original and modified versions of the
Figure 1. Pre-training and post-training mutation operators
training set and program many times, resulting in many instances of the original model and each mutant. This further increases the cost of mutation testing, which requires not only repeating the training process to generate more mutants, but also executing each test set on all mutant instances.
Most of the MOs proposed for DNNs (Humbs, 2017; Zhang et al., 2018; Zhang et al., 2018) have parameters that define the scale of modifications by that MO and need to be configured before their application. For example, operators that modify training sets could be configured by defining the percentage of inputs to be modified. Therefore, each MO can be applied to the DNN with multiple configurations, some of them resulting in generating mutants with undesirable characteristics: equivalent, trivial, and redundant. It is essential to identify and filter such mutants since they not only increase the overall execution time of mutation analysis but may also affect the mutation score. Different approaches have been proposed to address this issue (Humbs, 2017; Zhang et al., 2018; Zhang et al., 2018). In the DeepMutation++ tool (Humbs, 2017), mutants with prediction accuracy lower than a specified threshold are eliminated. Then, for each input set, mutants with high error rates (i.e., killed by most of the inputs), are filtered out to prevent inflating the MS with easy-to-kill mutants. The DeepMutation++ tool (Humbs, 2017) sets the accuracy threshold to be 90% of the original model's accuracy, and the error rate threshold to be 20% of the input set size.
The high computational cost of mutation testing is its main drawback in practice. This issue is even more critical in the case of DNNs considering large input sets, which is usual in practice. In this context, the substantial computational overhead of mutant generation using pre-training MOs, due to the need to perform training on each mutant, is a significant drawback when compared with post-training MOs. This is particularly true for DeepCrime which involves generating many instances of each mutant, even if we only consider the most effective configurations (Humbs, 2017) for each MO to create non-equivalent and non-trivial mutants 1. Therefore, it is essential to investigate if the more practical post-training MOs are also able to generate mutants associated with faults that can support our objective, assess the adequacy of test sets. In this paper, we perform such an investigation and propose _TEASMA_, a practical approach for test set adequacy assessment in DNNs based on mutation analysis.
Footnote 1: To give a concrete idea, in our experiments, DeepCrime took about 20 to 30 times the amount of computation time of DeepMutation++. This ratio would of course increase with larger models and input sets.
### Mutation Score Calculation
Similar to traditional software, killed DNN mutants can be identified by comparing the output of a mutant \(M_{i}\) with the original model \(N\) and the corresponding MS can be calculated as:
\[StandardMS(T,M)=\frac{\sum_{M_{i}\in M}killed(T,M_{i})}{|M|} \tag{1}\]
where \(M\) is the set of mutants, \(M_{i}\in M\) is \(killed\) by test set \(T\) (\(killed(T,M_{i})=1\)) if there is at least one input \(t\in T\) that is correctly classified by \(N\) and is not correctly classified by mutant \(M_{i}\). DeepMutation (Zhang et al., 2018), however, calculates MS in a different way. For a classification problem with \(k\) classes \(C=\{C_{1},...,C_{k}\}\), a test input \(t\in T\) kills class \(C_{j}\in C\) of mutant \(M_{i}\in M\) if \(t\) is correctly classified as \(C_{j}\) by the original model \(N\) and is not classified as \(C_{j}\) by mutant \(M_{i}\). Accordingly, MS is calculated for a test set \(T\) as:
\[DeepMutationMS(T,M)=\frac{\sum_{M_{i}\in M}killedClasses(T,M_{i})}{|M|\times|C|} \tag{2}\]
where \(killedClasses(T,M_{i})\) is the number of classes of mutant \(M_{i}\) killed by inputs in \(T\) (Zhang et al., 2018). The difference when calculating MS for the same test set using \(StandardMS\) based on Equation (1) and \(DeepMutationMS\) based on Equation (2) can be very large. For instance, consider a test set \(TS\) from the input set of handwritten digits (MNIST). Each input \(t\in TS\) is an image belonging to a class \(C_{j}\). Let the test set \(TS\) be represented by its correct input labels \(label(TS)=\{3,5,3,7\}\), including two different images from the same class representing digit 3 but also two images from classes 5 and 7. Further, consider that all images are correctly predicted by \(N\). If we have a set of mutants \(M=\{M_{1},M_{2}\}\), where the first two test inputs of \(TS\) are mispredicted by \(M_{1}\) and all test inputs are mispredicted by \(M_{2}\), then \(DeepMutationMS(TS,M)=\frac{2+3}{2\times 10}=0.25\) while \(StandardMS(TS,M)=1\). The score calculated by \(StandardMS\) for a test set \(T\) is higher since killing a mutant \(M_{i}\) with \(T\) requires only one test input \(t\in T\) to be mispredicted by \(M_{i}\).
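The following Python sketch reproduces this toy example together with both score definitions; encoding inputs, the original model, and the mutants as simple functions is done purely for illustration.

```python
def standard_ms(test_set, mutants, original):
    """Equation (1): a mutant is killed if at least one input is correctly
    classified by the original model but not by the mutant."""
    killed = sum(
        any(original(t) == y and m(t) != y for t, y in test_set)
        for m in mutants)
    return killed / len(mutants)

def deepmutation_ms(test_set, mutants, original, num_classes):
    """Equation (2): count, per mutant, the classes killed by the test set."""
    killed_classes = sum(
        len({y for t, y in test_set if original(t) == y and m(t) != y})
        for m in mutants)
    return killed_classes / (len(mutants) * num_classes)

# Toy example: labels {3, 5, 3, 7}, all correctly predicted by N;
# M1 mispredicts the first two inputs, M2 mispredicts all four.
labels = [3, 5, 3, 7]
test_set = list(enumerate(labels))
original = lambda t: labels[t]
m1 = lambda t: -1 if t < 2 else labels[t]
m2 = lambda t: -1
assert standard_ms(test_set, [m1, m2], original) == 1.0
assert deepmutation_ms(test_set, [m1, m2], original, num_classes=10) == 0.25
```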
Hu _et al._(Humbs, 2017) defined a killing score metric for each test input \(t\) as the proportion of mutants in \(M\) whose output is different from \(N\).
\[KillingScore(t,M,N)=\frac{|\{M_{i}\in M\mid N(t)\neq M_{i}(t)\}|}{|M|} \tag{3}\]
Humbatova _et al._(Humbs, 2017) used this metric to calculate the MS of a test set \(T\) on mutants generated by DeepMutation++ (Humbs, 2017):
\[KillingScoreBasedMS(T,M)=\frac{\sum_{t\in T}KillingScore(t,M,N)}{|T|} \tag{4}\]
However, calculating MS as the average \(KillingScore\) of test inputs in a test set may not indicate the ability of the entire test set to kill mutants. A test set cannot achieve such a high MS unless many of its test inputs are able to kill most of the mutants. This is visible in results reported by Humbatova _et al._(Humbs, 2017) where the MS calculated by \(KillingScoreBasedMS\) for entire test sets ranges from 0.059 to 0.33. Humbatova _et al._(Humbs, 2017) further proposed a statistical approach for identifying killed mutants when applying pre-training MOs in DeepCrime where the training process is repeated \(n\) times, leading to \(n\) instances of the original model and mutants being generated (Section 2.1). This further increases the cost of mutant generation using pre-training MOs and makes them computationally challenging for practical use in our context. Note that this statistical approach for identifying killed mutants is not applicable to post-training MOs, since they directly modify an already trained DNN model.
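For completeness, a corresponding sketch of \(KillingScoreBasedMS\), on the same toy example as above, shows how averaging per-input killing scores yields yet another value (0.75) for the same test set and mutants:

```python
def killing_score_based_ms(test_set, mutants, original):
    """Equations (3) and (4): average, over test inputs, of the fraction of
    mutants whose output differs from the original model."""
    per_input = [
        sum(m(t) != original(t) for m in mutants) / len(mutants)
        for t, _ in test_set]
    return sum(per_input) / len(per_input)

# Toy example: inputs 0 and 1 disagree with both mutants (score 1.0);
# inputs 2 and 3 disagree only with M2 (score 0.5).
labels = [3, 5, 3, 7]
test_set = list(enumerate(labels))
original = lambda t: labels[t]
m1 = lambda t: -1 if t < 2 else labels[t]
m2 = lambda t: -1
assert killing_score_based_ms(test_set, [m1, m2], original) == 0.75
```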
Mutation analysis involves identifying test sets with high MS based on the key assumption that MS captures a test set's capability to detect faults. In DNNs, however, various MS formulas lead to different results and which test sets lead to high MS varies according to which formula is used. It is evident that more test sets can achieve a high MS using \(StandardMS\) (Equation (1)) than when using \(DeepMutationMS\) (Equation (2)). We also expect that not many test sets can achieve a high MS using \(KillingScoreBasedMS\)
(Equation (4)). Therefore, because we cannot a priori determine which MS formula satisfies our assumption to the largest extent, it is essential to consider all of them when experimenting with mutation analysis for DNNs.
### Mutants and Faults
In traditional software, MOs are defined to introduce simple syntactical changes representing real mistakes often made by programmers into the source code of a program. Similarly, DeepCrime defined a set of pre-training MOs based on an analysis of real faults (Deng et al., 2015; Wang et al., 2016; Wang et al., 2017) related to the training set or program used to train a DNN model. However, the high cost of generating mutants using pre-training MOs hinders their application in practice. Post-training MOs, in contrast, apply modifications directly to the original model but have been criticized for performing modifications that are not realistic (Wang et al., 2016). Nevertheless, for mutation analysis to be effective in assessing the adequacy of test sets, regardless of how mutation operators are defined, it is important to generate mutants whose corresponding MS is, for a test set, a good predictor of fault detection. Therefore, considering the strong practical advantages of post-training MOs, despite their presumed lack of realism, they should be carefully investigated to build predictors of fault detection before being dismissed as inadequate.
The use of mutation analysis for assessing the quality of test sets in traditional software is supported by the observed association between mutants and real faults, as reported in the literature (Wang et al., 2016; Wang et al., 2017; Wang et al., 2017). This association has been investigated by performing correlation analysis (Wang et al., 2016; Wang et al., 2017; Wang et al., 2017) between the MS of a test set and its capability to detect faults. Regression analysis has also been applied for modeling this relationship to build a prediction model for FDR (Wang et al., 2016; Wang et al., 2017). In DNNs, however, the association between mutants and faults has never been investigated. The main challenge in such investigation is that, in contrast to traditional software where it is possible to identify real faults by isolating the faulty statement that caused a failure, identifying faults in DNNs is not a straightforward task due to the complexity and non-linear nature of the DNNs. However, a recent approach proposed by Aghababaeyan _et al._(Aghababaeyan et al., 2016) for estimating faults is a possible, practical, and automated solution. We describe next how we rely on this approach to investigate the relationship between mutants and faults, and how MS can help predict FDR.
### Fault Estimation
Although the number of mispredicted test inputs is a well-known metric in evaluating DNN testing approaches (Wang et al., 2016; Wang et al., 2017; Wang et al., 2017), it is not adequate in our experiments. Indeed, a single fault within a DNN model can make the model mispredict multiple test inputs. Consequently, a test set that identifies a considerable number of mispredictions may actually detect only a few underlying faults, whereas another test set with the same number of mispredictions could detect a higher number of distinct faults. Therefore, assessing test sets based on mispredictions, and thus investigating the relationship between the number of killed mutants and mispredictions, can be misleading. Consequently, we rely on a recent approach introduced by Aghababaeyan _et al._(Aghababaeyan et al., 2016) for estimating faults in a DNN model, which are defined as distinct root causes of mispredictions (Bahdan et al., 2016). They cluster mispredicted inputs that exhibit similar features in three key steps: feature extraction, dimensionality reduction, and density-based clustering. Initially, VGG-16 (Vogtani et al., 2016) is employed to extract the feature matrix from the mispredicted inputs, serving as the foundation for clustering. Subsequently, dimensionality reduction is applied to enhance clustering performance within the inherently high dimensional feature space. The HDBSCAN algorithm (Bahdan et al., 2016) is then utilized to cluster the mispredicted inputs based on the extracted features. In their empirical analysis of the resulting clusters, Aghababaeyan _et al._(Aghababaeyan et al., 2016) have shown that inputs in each cluster are mispredicted due to the same fault in the model and inputs in different clusters are mispredicted due to distinct faults. Therefore, if a test set contains even a single test input from a cluster, it is able to detect the underlying fault of that cluster.
Suppose we identify a set of \(k\) Clusters of Mispredicted Inputs \(CMI=\{CMI_{1},...,CMI_{k}\}\); we can then assume that each cluster \(CMI_{i}\) contains inputs that are mispredicted due to the same root cause (i.e., the underlying fault \(F_{i}\)). Therefore, we say that test set \(T\) _detects_ a fault \(F_{i}\in F\) if there is at least one test input \(t\in T\) such that \(t\in CMI_{i}\). Finally, we calculate the FDR for a test set \(T\) as:
\[FDR(T,F)=\frac{\sum_{F_{i}\in F}detect(T,F_{i})}{|F|} \tag{5}\]
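A minimal sketch of this computation, assuming each cluster is given as a set of input identifiers (the names are illustrative):

```python
def fault_detection_rate(test_inputs, clusters):
    """Equation (5): fault F_i is detected if the test set contains at least
    one input belonging to the corresponding cluster CMI_i."""
    detected = sum(
        any(t in cluster for t in test_inputs) for cluster in clusters)
    return detected / len(clusters)

# Example: two of the three estimated faults are detected, so FDR = 2/3.
clusters = [{0, 4}, {7}, {9, 12}]
fdr = fault_detection_rate({4, 9, 20}, clusters)
```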
In the next section, we propose a practical solution for test set adequacy assessment using mutation analysis. The objective is to predict the expected FDR for a given test set and DNN, based on mutation and regression analysis performed during training.
## 3. Practical Approach
In this section, we present _TEASMA_, a practical solution for assessing test sets for DNNs using mutation analysis. As mentioned earlier, the basic idea is to rely on mutation and regression analysis based on the training set to accurately predict the FDR of test sets, which can then be used for evaluating their adequacy before proceeding with labeling and execution. In subsequent sections, we will empirically evaluate the accuracy of _TEASMA_ in predicting FDR and its applicability. The process to be followed by engineers in practice is illustrated in Figure 2 and includes three steps:
**a) Calculating MS and FDR for subsets of the training set.** In this step, only post-training MOs are used to generate mutants as described in Section 2.1, since in our application context pre-training MOs are not a practical option. The training set is executed against the original DNN model and mispredicted inputs are clustered, as described in Section 2.4, to identify faults. Then, a large number of subsets of varying sizes are randomly sampled from the training set, and their MS and FDR are calculated to be used in the next step.
**b) Performing regression analysis.** In this step, regression analysis between FDR and MS is performed to obtain the best model for predicting FDR based on MS. The best shape for the regression model is determined empirically, prioritizing simpler linear models when they fare well. But because we can expect non-linear relationships, for reasons explained later, we also consider quadratic and exponential regression models, as well as regression trees when no regression function fits well. Regression trees enable the optimal partition of the MS range into optimal sub-ranges with similar FDR values. Note that the best model shapes may differ across DNNs
because of differences in their architecture and training sets. Regardless of its shape, the regression model must be sufficiently accurate to enable practically useful FDR predictions. This prediction model is then used in the next step to assess the adequacy of test sets.
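As an illustration of this step, the following sketch fits a polynomial regression of FDR on MS and derives prediction intervals with statsmodels. The helper names and the restriction to plain OLS (omitting the exponential and regression-tree alternatives) are assumptions made for brevity.

```python
import numpy as np
import statsmodels.api as sm

def _design(ms, degree):
    # Polynomial design matrix with an intercept column: [1, x, x^2, ...].
    return np.vander(np.asarray(ms, dtype=float), degree + 1, increasing=True)

def fit_fdr_model(ms, fdr, degree=1):
    """Fit FDR as a polynomial function of MS (degree 1 = linear,
    degree 2 = quadratic) by ordinary least squares."""
    return sm.OLS(np.asarray(fdr, dtype=float), _design(ms, degree)).fit()

def predict_fdr(model, ms_new, degree=1, alpha=0.05):
    """Predicted FDR with a (1 - alpha) prediction interval per MS value."""
    frame = model.get_prediction(_design(ms_new, degree)).summary_frame(alpha=alpha)
    return frame[["mean", "obs_ci_lower", "obs_ci_upper"]]
```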
**c) Assessing test set adequacy.** In this step, given a test set, its MS is calculated using the original DNN model and previously generated mutants. Then, the prediction model obtained from the previous step is used to predict the test set's FDR (\(\widehat{FDR}\)), including PIs. Given a certain percentage of confidence (e.g., 95% in our experiments), such intervals specify the range of potential actual FDR values for test sets achieving a specific MS value and thus provide engineers with valuable insights into the uncertainty of FDR predictions. Depending on the context, one can then decide how conservative should decision-making be by considering the lower or higher bounds of the PI. The \(\widehat{FDR}\) of a test set, if deemed sufficiently accurate based on its PI, can then be used to assess its capability to detect faults and therefore whether, in a given context, it is adequate to validate the DNN model.
We believe that our proposed solution--assuming an accurate FDR prediction model can be built using the training set--is practical. Indeed, we use post-training MOs which are directly applied to the DNN model and do not entail any re-training, thus avoiding the main challenge with pre-training MOs. In practice, such assessment helps guide the selection of test inputs and significantly reduces test costs by only labeling a test set once it is considered adequate. However, since we build the FDR prediction model using the training set and use it to determine the adequacy of a test set, we implicitly assume that the two sets have similar distributions. This is, however, a common assumption for the training of DNNs to be considered adequate.
## 4. Experimental Procedure
Our main goal is to evaluate the accuracy of _TEASMA_ in predicting a test set's FDR using post-training MOs and a regression model built entirely based on the training set. To accomplish this, for each subject, we performed the following high-level steps:
1. Select a large number of input subsets of diverse sizes from the training set (Section 4.3).
2. Generate and select mutants using SoTA mutation operators and mutation testing tools (Section 4.4).
3. Identify distinct faults in the DNN models based on the models' mispredicted inputs in the training set (Section 4.5).
4. Conduct experiments using the mutants, faults, and training input subsets to answer the research questions presented next (Section 4.6).
### Research Questions
Our experiments are designed to answer the following research questions:
**RQ1: Can an accurate regression model be built to explain FDR as a function of MS?**
In this question, we investigate the output of the second step of _TEASMA_ in Section 3. We select a large number of input subsets from the training set, calculate their MS and FDR using the faults estimated in the original model as described in Section 4.5, and investigate, for mutants derived from post-training MOs, if the MS of an input subset is significantly correlated with its FDR. In that case, we resort to regression analysis to model the relationship between MS and FDR and investigate if an accurate regression model can be built to predict a subset's FDR from its MS. If that is the case, the predicted FDR can serve as a good basis on which to decide whether a test set provides sufficient confidence about the validation of the DNN before its deployment. Though such questions have been investigated for traditional software (Brock et al., 2018; Chen et al., 2019), that has not been the case for DNNs.
**RQ2: Can a test set's FDR be accurately predicted from its MS using the regression model built on the training set?**
In this question, we evaluate the last step of _TEASMA_ in Section 3 and investigate if we can accurately predict a test set's FDR using the regression model built in the previous step on the training set. For this purpose, we select a large number of input subsets from the test set and calculate their MS and FDR using the faults identified in the original model. Then, we use the regression model to predict FDR for each subset and compare it with the actual one, to assess whether, as expected, they have a linear relationship with a high correlation and a slope close to 1.
Figure 2. The process of test adequacy assessment with _TEASMA_
### Subjects
We performed our study on different combinations of input sets and models. Since we rely on a SoTA fault estimation approach for DNNs that is tailored for image inputs, we selected a set of widely-used publicly available image input sets including MNIST (Huang et al., 2017), Fashion-MNIST (Huang et al., 2017), Cifar-10 (Chen et al., 2017), and SVHN (Zhuang et al., 2017). Some of these input sets have also been used for evaluating SoTA mutation testing tools and operators (Fashion-MNIST, 2017; Dosovitskiy et al., 2018; Dosovitskiy et al., 2018; Dosovitskiy et al., 2018).
MNIST is a set of images of handwritten digits and Fashion-MNIST contains images of clothes that are associated with fashion and clothing items. Each one of MNIST and Fashion-MNIST represents a 10-class classification problem and consists of 70,000 grayscale images of a standardized 28x28 size. Cifar-10 is another widely-used input set that includes a collection of images from 10 different classes (e.g., cats, dogs, airplanes, cars). SVHN (Zhuang et al., 2017), a set of real-world house numbers collected by Google Street View, can be considered similar to MNIST since each image represents a sequence of digits. The Cifar-10 and SVHN input sets contain 32x32 cropped colored images.
In our experiments, we trained five SoTA DNN models using the above input sets: LeNet5, LeNet4, ResNet20, VGG16, and an 8-layer Convolutional Neural Network (Conv-8). Since not all of the post-training MOs are applicable to all DNN models, as described in Section 2.1, we selected DNN models with different internal architectures. All the feasible combinations of DNN models and input sets we experiment with are referred to as subjects. Overall, we perform our experiment on six subjects including a wide range of diverse image inputs and DNN architectures. Table 1 lists the details of each subject including the size of test and training datasets, the number of epochs used to train the model, the accuracy of the DNN model on the training and test input sets, and the number of identified faults based on the training set. Note that we reused the training and test sets as defined in the original sources (Chen et al., 2017; Huang et al., 2017; Zhuang et al., 2017).
### Sampling Input Subsets
To evaluate _TEASMA_, we select input subsets from the training set, calculate the MS and FDR of each subset, and build a regression model. For this purpose, we require a large number of input subsets that possess diverse MS and FDR values. Therefore, we start sampling input subsets with sizes 100, 300, and 500 and continue with larger or smaller sizes where needed, until the entire spectrum of FDR with minimum and maximum values (\(0\leq FDR\leq 1\)) is covered. For each subset size, we sample 300 subsets with replacement. Since one of the MS formulas we use (Equation (2)) considers the number of images from each class \(C_{i}\) in an input subset, we complement the conventional approach of creating randomly sampled subsets by creating a set of uniformly sampled subsets. Random subsets may contain any number of input images from each class \(C_{i}\), whereas uniform subsets contain an equal number of images from each class \(C_{i}\). Uniform subsets may achieve higher MS values than random subsets based on Equation (2). Therefore, we perform our experiments on both types of input subsets and empirically determine which one serves our purpose better.
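For illustration, this sampling procedure could be sketched as follows in Python; the function and argument names are ours, and the use of NumPy for index sampling is an implementation assumption rather than a detail given above.

```python
import numpy as np

def sample_subsets(labels, size, n_subsets=300, uniform=False, seed=0):
    """Sample n_subsets input subsets of the given size, with replacement.

    uniform=False: random subsets with any class composition.
    uniform=True: an equal number of inputs drawn from each class C_i.
    Returns index arrays into the training set.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    subsets = []
    for _ in range(n_subsets):
        if uniform:
            per_class = size // len(classes)
            idx = np.concatenate([
                rng.choice(np.flatnonzero(labels == c), per_class, replace=True)
                for c in classes])
        else:
            idx = rng.choice(len(labels), size, replace=True)
        subsets.append(idx)
    return subsets
```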
### Generating and filtering Mutants
We use DeepMutation++ (Fashion-MNIST, 2017), the most recent SoTA mutation testing tool for DNNs that implements eight post-training MOs suitable for FNNs. Some of the implemented MOs need to be configured by setting values for their parameters. For example, the Neuron Effect Blocking (NEB) operator is configured by determining the number of neurons it affects. By default, the DeepMutation++ tool configures this operator to sample a predefined ratio of 1% of the total neurons in a DNN model and limits the total number of generated mutants to 50. In our experiments, we used the default configuration for generating mutants. Furthermore, we perform the same procedure proposed by the tool and use its default thresholds to filter out some of the generated mutants. For example, we exclude mutants that are either too similar to the original model or too easy to kill. Specifically, we exclude mutants when they yield more than 90% of the original model's accuracy or mispredict more than 20% of the inputs correctly predicted by the original model, as reported in the original papers (Fashion-MNIST, 2017; Dosovitskiy et al., 2018). Further, in our experiments, we perform an additional step to filter out equivalent mutants since DeepMutation++ does not have a mechanism to do so. To calculate MS precisely, we execute the entire training set against the mutants, and if a mutant cannot be killed by any training input, it is considered equivalent and hence excluded.
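A minimal sketch of this filtering step is given below; the helper `predict`, the argument names, and the exact order of the checks are assumptions, while the thresholds follow the description above.

```python
import numpy as np

def filter_mutants(mutants, original, x_train, y_train, predict,
                   acc_ratio=0.9, kill_rate=0.2):
    """Filter generated mutants: drop equivalent mutants (killed by no
    training input), mutants too similar to the original model, and
    mutants that are too easy to kill. `predict(model, x)` is assumed
    to return predicted labels."""
    orig_pred = predict(original, x_train)
    orig_acc = np.mean(orig_pred == y_train)
    correct = orig_pred == y_train      # inputs correctly predicted originally
    kept = []
    for m in mutants:
        m_pred = predict(m, x_train)
        if np.all(m_pred == orig_pred):
            continue  # equivalent mutant: no training input kills it
        if np.mean(m_pred == y_train) > acc_ratio * orig_acc:
            continue  # too similar to the original model
        if np.mean(m_pred[correct] != y_train[correct]) > kill_rate:
            continue  # too easy to kill
        kept.append(m)
    return kept
```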
### Estimating Faults
In order to evaluate _TEASMA_, we estimate faults in the DNN model using the approach described in Section 2.4 based on the mispredicted inputs of the training set. We execute the entire training set against the original DNN model, extract the mispredicted inputs, and apply the fault identification approach (Chen et al., 2017) on these mispredicted inputs. Some of the subjects used in our experiment include original models that achieve very high accuracy (e.g., 98%). As a result, the number of inputs mispredicted by the original model is a very small percentage of the entire training set and the number of estimated faults is very low. In such cases, we need more faults to be able to investigate the ability of a test set to detect faults. For this purpose, we use snapshot models (Fashion-MNIST, 2017), the intermediate models obtained during the training process of the original DNN model. Once the original model training is completed, we execute the entire training set against all snapshot models and identify mispredicted inputs. The clustering approach (Chen et al., 2017) is then applied to mispredicted inputs to identify all the faults that were observed during the training process.
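The fault estimation step could be sketched as follows, assuming the `hdbscan` package and that a feature vector has already been extracted for each mispredicted input (the feature extraction itself is not shown, and `min_cluster_size` is an assumed parameter).

```python
import hdbscan  # density-based hierarchical clustering used for fault estimation

def estimate_faults(features, min_cluster_size=5):
    """Cluster feature vectors of mispredicted training inputs (gathered
    from the original model and its training snapshots) into faults.
    Each cluster is treated as one distinct fault; noise points
    (label -1) are ignored."""
    clusterer = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size)
    labels = clusterer.fit_predict(features)
    n_faults = labels.max() + 1  # clusters are labelled 0..n-1
    return labels, n_faults, clusterer
```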
### Experiments
This section explains how we conduct our experiments to evaluate _TEASMA_ and answer the research questions described in Section 4.1. To answer RQ1, we estimate faults of the DNN model based on the training set as described in Section 4.5, generate mutants using DeepMutation++, and calculate FDR and MS for a large number of diverse input subsets sampled from the training set. Subsequently, we measure the correlation between FDR and MS using the non-parametric Spearman coefficient based on all subsets, since we cannot assume linearity. We also measure the correlation for mutants generated by each MO separately and then attempt to find
subsets of operators leading to higher correlation. Before proceeding to the next step, we compare the highest correlation achieved by subsets of MOs with that obtained with all MOs. If the correlation of the former is higher than the latter, then we only consider mutants generated by the optimal subset with the highest correlation.
Next, using all the training subsets, we perform regression analysis, prioritizing simpler linear models but relying on non-linear models where needed to model the relationships between MS and FDR, and inspect the resulting coefficient of determination (\(R^{2}\)) to measure the goodness of fit. Furthermore, we report two common measures for evaluating the accuracy of predictions, namely the Mean Magnitude of Relative Error (MMRE) and the Root Mean Square Error (RMSE) to assess how accurately we can predict FDR. To obtain more realistic results, we perform a \(K\)-fold cross-validation procedure with \(K=5\) and report the average of the \(R^{2}\), MMRE, and RMSE across all folds. Based on these metrics, we determine for each subject, the best regression model for predicting FDR based on MS. To summarize, the experiment that we conducted using the training set to answer RQ1 included the following steps:
1. Calculate MS and FDR for each input subset
2. Measure the Spearman correlation between FDR and MS
3. Perform regression analysis between FDR and MS using linear and non-linear models (quadratic, exponential, and regression trees)
4. Evaluate regression models by performing \(K\)-fold cross-validation and calculating \(R^{2}\), MMRE, and RMSE to identify the best regression model
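Steps 3 and 4 could be realized, for the regression-tree case, along the following lines; scikit-learn is an assumed implementation choice, `ms` and `fdr` are 1-D NumPy arrays over all training subsets, and the metric formulas follow their standard definitions.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeRegressor

def evaluate_regression(ms, fdr, max_depth=5, k=5):
    """5-fold cross-validation of a regression-tree model of FDR vs. MS,
    reporting the average R^2, MMRE and RMSE across folds."""
    r2s, mmres, rmses = [], [], []
    for train_idx, val_idx in KFold(n_splits=k, shuffle=True).split(ms):
        model = DecisionTreeRegressor(max_depth=max_depth)
        model.fit(ms[train_idx].reshape(-1, 1), fdr[train_idx])
        pred = model.predict(ms[val_idx].reshape(-1, 1))
        err = fdr[val_idx] - pred
        ss_tot = np.sum((fdr[val_idx] - fdr[val_idx].mean()) ** 2)
        r2s.append(1 - np.sum(err ** 2) / ss_tot)
        mmres.append(np.mean(np.abs(err) / np.maximum(fdr[val_idx], 1e-8)))
        rmses.append(np.sqrt(np.mean(err ** 2)))
    return np.mean(r2s), np.mean(mmres), np.mean(rmses)
```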
To address RQ2, we sample a large number of input subsets from the test set, calculate their MS using previously generated mutants, and leverage the previously built and selected regression model to predict FDR (\(\widehat{FDR}(T)\)). We should note that we determine killed mutants by comparing their output with that of the original model and thus the actual labels of test inputs are not required. To estimate the actual FDR (\(ActualFDR\)) of each test subset, we use the same fault clusters identified based on the training set as described in Section 4.5 and we assign each mispredicted test input to one of the clusters. Since the fault estimation approach (Section 2.4) employs the HDBSCAN algorithm (Brandt, 2002), a density-based hierarchical clustering method where each cluster is characterized by a number of core points, we assign each mispredicted test input to the cluster featuring a core point that is closest to the input. To ensure an accurate calculation of \(ActualFDR\), we only consider in the denominator of the Equation (5) the number of fault clusters that are detectable by the test set i.e., clusters that have at least one mispredicted test input assigned to them. Subsequently, we are able to compare for each test subset, the predicted FDR with the actual one and investigate if the former is closely aligned with the latter and can thus be trusted to make decisions about the test set. Therefore, to answer RQ2, we went through the following steps:
1. Randomly sample a large number of input subsets of diverse sizes from the test set
2. Calculate MS and FDR (\(ActualFDR\)) for each test subset
3. Use previously selected prediction model to compute \(\widehat{FDR}\) based on MS for each test subset
4. Based on all test subsets, measure the linear correlation (\(R^{2}\)) between \(\widehat{FDR}\) and \(ActualFDR\) and check the slope of the regression line
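The \(ActualFDR\) computation of step 2 could be sketched as follows; all names are illustrative, and measuring closeness to cluster core points with the Euclidean distance is an assumption.

```python
import numpy as np

def actual_fdr(misp_feats, core_points, core_labels, detectable_clusters):
    """Assign each mispredicted test input to the fault cluster of its
    nearest core point, then compute ActualFDR as the fraction of
    detectable clusters that received at least one input."""
    dists = np.linalg.norm(
        misp_feats[:, None, :] - core_points[None, :, :], axis=-1)
    assigned = core_labels[np.argmin(dists, axis=1)]
    detected = np.intersect1d(np.unique(assigned), detectable_clusters)
    return len(detected) / len(detectable_clusters)
```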
As described in Section 2.2, different methods have been proposed to determine whether a mutant is killed and therefore to calculate MS. To fully address our research questions, we ran a comprehensive analysis by considering all the MS calculation variants described in Section 2.2. As a result, we ran three variations of our experiments that only differ in the way they calculate MS: (E1) \(DeepMutationMS\), (E2) \(StandardMS\), and (E3) \(KillingScoreBasedMS\).
Experiments on all subjects took about a week using a cloud computing environment provided by the Digital Research Alliance of Canada (Cowell et al., 2017), on the Cedar cluster with a total of 1352 _NVIDIA P100 Pascal_ GPUs with 8 to 16 GB of memory.
## 5. Results
In this section, we report results pertaining to our research questions and discuss their practical implications.
### RQ1: Can an accurate regression model be built to explain FDR as a function of MS?
To answer this question, we first investigate the extent of the correlation between MS and FDR. We perform our experiment on a large number of randomly and uniformly sampled input subsets from the training set, also considering different MS calculation variants. The correlation analysis results using Spearman correlation coefficients for experiment variants E1 and E2 are shown in Table 2. Given space constraints, we only report results for random subsets as they yielded slightly higher correlations. The best correlation achieved for each subject across experiment variants, given in each column of Table 2, is highlighted in bold font. We did not report the results for experiment variant E3 since all of the correlation values for both random and uniform subsets across all subjects are less than 0.1, indicating a complete absence of correlation. This can be explained by the fact that MS is determined by computing the average \(KillingScore\) (Equation (3)) of all inputs within a subset. Consequently, for subsets of different sizes, MS calculated based on \(KillingScoreBasedMS\) (Equation (4)) falls within a similar range (0 to 0.1). As discussed in Section 2.2, this confirms that calculating the MS of a subset as the average capability of its inputs to kill mutants cannot be used as an indicator of the capability of the entire subset to kill mutants.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline ID & Input set & Training set & Test set & Model & Epochs & Train accuracy & Test accuracy & Number of mispredictions (training) & Number of faults (training) \\ \hline S1 & MNIST & 60,000 & 10,000 & LeNet5 & 12 & 99\% & 98\% & 2462 & 351 \\ \hline S2 & Cifar-10 & 50,000 & 10,000 & Conv-8 & 20 & 94\% & 85\% & 2507 & 354 \\ \hline \end{tabular}
\end{table}
Table 1. Details of each subject, including the sizes of the training and test sets, the number of training epochs, model accuracy, and the number of mispredicted training inputs and identified faults
E1 shows the strongest correlation between MS and FDR, above 0.98 across all subjects. In this experiment, we calculate MS as proposed originally by DeepMutation (Equation (2)). In contrast, for experiment E2, the correlation coefficient ranges from 0.51 to 0.92, which is still strong. The method for identifying killed mutants and calculating MS therefore has a substantial impact on the results of mutation analysis. Given the strong correlations achieved in E1 and E2 across all subjects, we conclude that mutants generated by DeepMutation's post-training MOs are highly associated with faults. Given the above results, we focus solely on \(DeepMutationMS\) and random subsets from now on. Before proceeding to the next steps, we calculated the correlation between FDR and MS individually for each MO as well as for different subsets of operators. However, none of them achieved a significantly higher correlation than the correlation obtained with all MOs. Consequently, we proceeded to the next step considering, for each subject, all of the applicable MOs and performed regression analysis to model the relationship
Figure 3. Selected regression models across subjects based on the training set
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Accuracy & \multicolumn{6}{c|}{Subjects} \\ \cline{2-7} metric & S1 & S2 & S3 & S4 & S5 & S6 \\ \hline \(R^{2}\) & 0.98 & 0.99 & 0.98 & 0.98 & 0.99 & 0.98 \\ \hline MMRE & 0.19 & 0.13 & 0.15 & 0.14 & 0.1 & 0.15 \\ \hline RMSE & 0.05 & 0.04 & 0.04 & 0.05 & 0.03 & 0.04 \\ \hline Shape* & RT & RT & RT & E & RT & RT \\ \hline \multicolumn{7}{l}{*Selected regression model shapes include Exponential (E) and Regression Tree (RT)} \\ \end{tabular}
\end{table}
Table 3. Accuracy and shape of the selected regression models when predicting FDR with MS, based on the training set and using a 5-fold cross-validation procedure
between MS and FDR on the training set. For each subject, we examined multiple regression models, starting with a simple linear regression model and then using more complex non-linear models (quadratic, exponential, and regression trees) when needed. We then performed a 5-fold cross-validation procedure to calculate \(R^{2}\), MMRE, and RMSE, and selected the most accurate model for each subject. Due to space limitations, we only report the results for the selected regression model for each subject in Table 3. These results show a minimum \(R^{2}\) value of 0.98 across all subjects, indicating that MS explains most of the variance in FDR. The average RMSE across all \(K\)-folds for all subjects ranges from 0.03 to 0.05, highlighting the high prediction accuracy. Additionally, the average MMRE varies between 0.1 and 0.19, indicating a low relative error. This suggests that MS can be used to very accurately predict FDR in the training set.
The final row of Table 3 reports the shape of the selected regression model on each subject. Regression trees are the best for all subjects but S4. This suggests that standard regression functions are often not a good fit and regression trees offer more flexibility by optimally partitioning the MS range and computing averages within partitions. To prevent overfitting of the regression trees, which would yield poor results on test sets, we limited their maximum depth to 5.
To visualize our results, we present for each subject in Figure 3 the scatterplots and fitted regression curves between FDR and MS along with 95% prediction intervals (PI), based on all input subsets sampled from the training set. We compute PIs for regression tree models based on a bootstrapping method for non-parametric regression models.
One internal threat to validity relates to the randomly sampled input subsets. To address this, we selected diverse input subsets, employing both random and uniform sampling techniques and varying sample sizes. Another internal threat to validity stems from the selected parameters used throughout the experiments. This encompasses parameters for clustering mispredicted inputs to estimate faults and for configuring MOs. To mitigate this concern, regarding clustering parameters, we explored different settings for each subject and opted for the best configuration based on the evaluation of the Silhouette index (Srivastava et al., 2017) and DBCV scores (Srivastava et al., 2017), as suggested by Aghababaeyan _et al._
## Acknowledgements
This work was supported by a research grant from Huawei Technologies Canada Co., Ltd, as well as by the Mitacs Accelerate Program. The experiments conducted in this work were enabled in part by support provided by the Digital Research Alliance of Canada 2.
Footnote 2: [https://alliancecan.ca](https://alliancecan.ca)
|
2301.12100 | Reachability Analysis of Neural Network Control Systems | Neural network controllers (NNCs) have shown great promise in autonomous and
cyber-physical systems. Despite the various verification approaches for neural
networks, the safety analysis of NNCs remains an open problem. Existing
verification approaches for neural network control systems (NNCSs) either can
only work on a limited type of activation functions, or result in non-trivial
over-approximation errors with time evolving. This paper proposes a
verification framework for NNCS based on Lipschitzian optimisation, called
DeepNNC. We first prove the Lipschitz continuity of closed-loop NNCSs by
unrolling and eliminating the loops. We then reveal the working principles of
applying Lipschitzian optimisation on NNCS verification and illustrate it by
verifying an adaptive cruise control model. Compared to state-of-the-art
verification approaches, DeepNNC shows superior performance in terms of
efficiency and accuracy over a wide range of NNCs. We also provide a case study
to demonstrate the capability of DeepNNC to handle a real-world, practical, and
complex system. Our tool \textbf{DeepNNC} is available at
\url{https://github.com/TrustAI/DeepNNC}. | Chi Zhang, Wenjie Ruan, Peipei Xu | 2023-01-28T05:57:37Z | http://arxiv.org/abs/2301.12100v1 | # Reachability Analysis of Neural Network Control Systems
###### Abstract
Neural network controllers (NNCs) have shown great promise in autonomous and cyber-physical systems. Despite the various verification approaches for neural networks, the safety analysis of NNCs remains an open problem. Existing verification approaches for neural network control systems (NNCSs) either can only work on a limited type of activation functions, or result in non-trivial over-approximation errors with time evolving. This paper proposes a verification framework for NNCS based on Lipschitzian optimisation, called DeepNNC. We first prove the Lipschitz continuity of closed-loop NNCSs by unrolling and eliminating the loops. We then reveal the working principles of applying Lipschitzian optimisation on NNCS verification and illustrate it by verifying an adaptive cruise control model. Compared to state-of-the-art verification approaches, DeepNNC shows superior performance in terms of efficiency and accuracy over a wide range of NNCs. We also provide a case study to demonstrate the capability of DeepNNC to handle a real-world, practical, and complex system. Our tool **DeepNNC** is available at [https://github.com/TrustAI/DeepNNC](https://github.com/TrustAI/DeepNNC).
## Introduction
Neural network controllers have gained increasing interest in the autonomous industry and cyber-physical systems because of their excellent capacity of representation learning [14, 15, 16]. Compared to conventional knowledge-based controllers, learning-based NNCs not only simplify the design process, but also show superior performance in various unexpected scenarios [11, 12]. Since they have already been applied in safety-critical circumstances, including autonomous driving [13, 14, 15] and air traffic collision avoidance systems [13], verification on NNCSs plays a crucial role in ensuring their safety and reliability before their deployment in the real world. Essentially, we can achieve verification on NNCSs through estimating the system's reachable set. As Figure 1 shows, given the neural network controller, the dynamic model of the target plant, and the range \(X_{0}\) of initial states, we intend to estimate the reachable sets of the system, that is, the output range of state variables over a finite time horizon \([0,t_{n}]\). If the reachable sets \(X_{t1}\), \(X_{t2}\),... \(X_{tn}\) have _no intersection_ with the avoid set and \(X_{tn}\) reaches the goal set, we can verify that the NNCS is safe.
Compared to verification on NNCSs, verification on neural networks (NNs) is a relatively well-explored area [14, 15, 16, 17, 18, 19, 20, 21]. Representative solutions can be categorised into constraint satisfaction based approaches [13, 14] and approximation-based approaches [15, 16, 17, 18]. However, these verification approaches cannot be applied directly to NNCS. They are not workable in the scenario where the neural network is in close conjunction with other subsystems. A speciality of NNCSs is the association with control time steps. Moreover, approximation-based verification inevitably results in the accumulation of the over-approximation error with the control time evolved.
Recently, some pioneering research has emerged to verify NNCSs. They first estimate the output range of the controller and then feed the range to a hybrid system verification tool. Although both groups of tools can generate a tight estimate on their isolated models, direct combination results in a loose estimate of NNCS reachable sets after a few control steps, known as the _wrapping effect_[11].
To resolve the wrapping effect, researchers develop various
Figure 1: NNCS verification through reachable set estimation. _We estimate the reachable sets \(X_{t1},X_{t2},X_{t3}\)..., \(X_{tn}\) at \(t_{1},t_{2},t_{3}\)..., \(t_{n}\). If all reachable sets do not intersect with the grey avoid set and \(X_{tn}\) reaches the blue goal set, the NNCS is verified to be safe._
methods to analyse NNCSs as a whole. Verisig [14] transforms the neural network into an equivalent hybrid system for verification, which can only work on the Sigmoid activation function. NNV [13] uses a variety of set representations, e.g., polyhedra and zonotopes, for both the controller and the plant. Other researchers are inspired by the fact that the Taylor model approach can analyse hybrid systems with traditional controllers [1]. Thus, they applied the Taylor model to the verification of NNCSs. Representatives are ReachNN [10], ReachNN* [13], Verisig 2.0 [14] and Sherlock [15]. However, verification approaches based on Taylor polynomials are limited to certain types of activation function, and their over-approximation errors still exist and accumulate over time.
To alleviate the above weakness, this paper proposes a novel verification method for NNCSs, called DeepNNC, taking advantage of the recent advance in Lipschitzian optimisation. As shown in Table 1, compared to the state-of-the-art NNCS verification, DeepNNC can solve linear and nonlinear NNCS with a wide range of activation functions. Essentially, we treat the reachability problem as a black-box optimisation problem and verify the NNCS as long as it is Lipschitz continuous. DeepNNC is the only model-agnostic approach, which means that access to the inner structure of NNCSs is not required. DeepNNC has the ability to work on _any Lipschitz continuous complex system_ to achieve efficient and tight reachability analysis. The contributions of this paper are summarised below:
* This paper proposes a novel framework for the verification of NNCS, called DeepNNC. It is the first model-agnostic work that can deal with a broad class of hybrid systems with linear or nonlinear plants and neural network controllers with various types of activation functions such as ReLU, Sigmoid, and Tanh activation.
* We theoretically prove that Lipschitz continuity holds on closed-loop NNCSs. DeepNNC constructs the reachable set estimation as a series of independent global optimisation problems, which can significantly reduce the over-approximation error and the wrapping effect. We also provide a theoretical analysis on the soundness and completeness of DeepNNC.
* Our intensive experiments show that DeepNNC outperforms state-of-the-art NNCS verification approaches on various benchmarks in terms of both accuracy and efficiency. On average, DeepNNC is **768 times faster** than ReachNN* [13], **37 times faster** than Sherlock [15], and **56 times faster** than Verisig 2.0 [14] (see Table 2).
## Related Work
We review the related works from the following three aspects.
**SMC-based approaches** transform the problem into an SMC problem [15]. First, it partitions the safe workspace, i.e., the safe set, into imaging-adapted sets. After partitioning, this method utilises an SMC encoding to specify all possible assignments of the activation functions for the given plant and NNC. However, it can only work on discrete-time linear plants with ReLU neural controller.
**SDP-based and LP-based approaches** The SDP-based approach [16] uses a semidefinite program (SDP) for reachability analysis of NNCSs. It abstracts the nonlinear components of the closed-loop system by quadratic constraints and computes the approximate reachable sets via SDP. It is limited to linear systems with NNCs. The LP-based approach [17] provides a linear programming-based formulation of NNCSs. It demonstrates higher efficiency and scalability than the SDP methods; however, its estimation results are less tight than those of SDP-based methods.
**Representation-based approaches** use representation sets to serve as the input and output domains for both the controller and the hybrid system. The representative method is NNV [13], which has integrated various representation sets, such as polyhedra [13, 14], star sets [13] and zonotopes [15]. For a linear plant, NNV can provide exact reachable sets, while it can only achieve an over-approximated analysis for a nonlinear plant.
**Taylor model based approaches** approximate the reachable sets of NNCSs with the Taylor model (TM). Representative methods are Sherlock [15] and Verisig 2.0 [14]. The core concept of the Taylor model is to approximate a function with a polynomial and a worst-case error bound. TM approximation has shown impressive results in the analysis
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline & **Plant Dynamics** & **Discrete/Continuous** & **Workable Activation Functions** & **Core Techniques** & **Model-Agnostic Solution** \\ \hline
**SMC** [15] & Linear & Discrete & ReLU & SMC encoding + solver & ✗ \\ \hline
**Verisig** [16] & Linear, Nonlinear & Discrete, Continuous & Sigmoid, Tanh & TM + Equivalent hybrid & ✗ \\ \hline
**ReachNN** [17] & Linear, Nonlinear & Discrete, Continuous & ReLU, Sigmoid, Tanh & TM+ Bernstein polynomial & ✗ \\ \hline
**Sherlock** [17] & Linear, Nonlinear & Discrete, Continuous & ReLU & TM + MILP & ✗ \\ \hline
**ReachNN*** [18] & Linear, Nonlinear & Discrete, Continuous & ReLU, Sigmoid, Tanh & TM + Bernstein polynomial & ✗ \\ \hline
**Verisig 2.0** [19] & Linear, Nonlinear & Discrete, Continuous & Tanh, Sigmoid & TM + Shrink wrapping & ✗ \\ \hline
**DeepNNC (Our work)** & Any Lipschitz & Continuous & Any Lipschitz continuous activations & \multirow{2}{*}{Lipschitz optimisation} & ✓ \\ & continuous systems & & e.g., ReLU, Sigmoid, Tanh, etc. & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with existing NNCS verification methods from multiple aspects
of the reachability of hybrid systems with traditional controllers [3, 13], and the corresponding tool Flow* [3] is widely applied. Verisig [12] makes use of the tool Flow* [3] by transforming the NNC into a regular hybrid system without a neural network. It proves that the Sigmoid is the solution to a quadratic differential equation and applies Flow* [3] on the reachability analysis of the equivalent system. Instead of directly using the TM verification tool, ReachNN [13], ReachNN* [13] and Verisig 2.0 [12] maintain the structure of the NNCS and approximate the input and output of the NNC by a polynomial and an error band, with a structure similar to the TM model. Therefore, the reachable sets of the controller and plant share the same format and can be transmitted and processed in the closed loop.
Table 1 presents a detailed comparison of DeepNNC with its related work. The core technique of our method is significantly different from existing solutions, enabling DeepNNC to work on any neural network controlled system as long as it is Lipschitz continuous.
## Problem Formulation
This section first describes the NNCS model and then formulates the NNCS reachability problem. We deal with a closed-loop neural network controlled system as shown in Figure 2. It consists of two parts, a plant represented by a continuous system \(\dot{x}=f(x,u)\) and a neural network controller \(u=\sigma(y)\). The system works in a time-triggered manner with the control step size \(\delta\).
In control systems, the plant is usually modelled in the form of an ordinary differential equation (ODE). To guarantee the existence of a unique solution in ODE, the function \(f\) to model the plant is required to be Lipschitz continuous in \(x\) and \(u\)[10].
**Definition 2** (Neural Network Controller).: _The neural network controller is a \(k\)-layer feedforward neural network with one or multiple types of activation functions, such as Sigmoid, Tanh, and ReLU activations, which can be defined as:_
\[\sigma(y)=\sigma_{k}\circ\sigma_{k-1}\circ...\circ\sigma_{1}(y). \tag{3}\]
As shown in Figure 2, the system transmits the measurements \(y\) to the NNC, forming the control input through \(u=\sigma(y)\). The measurement function \(y=h(x)\) is usually Lipschitz continuous, and some systems use \(y=x\) for simplicity [10].
**Definition 3** (Neural Network Controlled System).: _The neural network controlled system \(\phi\) is a closed-loop system composed of a plant \(P\) and a neural network controller \(\sigma\). In a control time step, the closed-loop system is characterised as:_
\[\begin{split}\phi(x(i\delta),\Delta t)\equiv\,x(i\delta+\Delta t )\\ =x(i\delta)+\int_{i\delta}^{i\delta+\Delta t}f(x(\tau),\sigma(h(x(i \delta))))\,d\tau\end{split} \tag{4}\]
_with \(i=0,1,2,3...\), and \(\Delta t\in[0,\delta]\). For a given initial state \(x_{0}\in\mathcal{X}_{0}\) at \(t=0\) and a specified NNCS \(\phi\), the state \(x\) at time \(t\geq 0\) is denoted as \(x(t)\equiv\phi(x_{0},t)\)._
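For concreteness, the closed-loop evolution in Definition 3 can be simulated with a standard ODE solver; the following Python sketch assumes SciPy, with `f`, `controller`, and `h` standing for the plant dynamics \(f\), the NNC \(\sigma\), and the measurement function \(h\).

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_nncs(f, controller, h, x0, delta, n_steps):
    """Simulate the closed-loop NNCS: the control input is recomputed
    from the measurement h(x) once per control step of size delta and
    held constant while the plant ODE x' = f(x, u) evolves."""
    x = np.asarray(x0, dtype=float)
    trajectory = [x]
    for _ in range(n_steps):
        u = controller(h(x))                        # u = sigma(h(x(i * delta)))
        sol = solve_ivp(lambda t, s: f(s, u), (0.0, delta), x)
        x = sol.y[:, -1]                            # state at the end of the step
        trajectory.append(x)
    return np.array(trajectory)
```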
**Definition 4** (Reachable Set of NNCS).: _Given a range \(\mathcal{X}_{0}\) for initial states and a neural network controlled system \(\phi\), we define the region \(S_{t}\) a reachable set at time \(t\geq 0\) if all reachable states can be found in \(S_{t}\)._
\[\forall x_{0}\in\mathcal{X}_{0},\,\exists\,\phi(x_{0},t)\in S_{t} \tag{5}\]
It is possible that an over-approximation of reachable states exists in the reachable set. In other words, all reachable states of time \(t\) stay in \(S_{t}\), while not all states in \(S_{t}\) are reachable. Furthermore, we can formulate the estimation of the reachable set of NNCS as an optimisation problem:
**Definition 5** (Reachability of NNCS).: _Let the range \(\mathcal{X}_{0}\) be initial states \(x_{0}\) and \(\phi\) be a NNCS. The reachability of NNCS at a predefined time point \(t\) is defined as the reachable set \(S_{t}(\phi,X_{0},\epsilon)=[l,u]\) of NNCS \(\phi\) under an error tolerance \(\epsilon\geq 0\) such that_
\[\begin{split}\inf_{x_{0}\in X_{0}}\phi(x_{0},t)-\epsilon\leq l \leq\inf_{x_{0}\in X_{0}}\phi(x_{0},t)+\epsilon\\ \sup_{x_{0}\in X_{0}}\phi(x_{0},t)-\epsilon\leq u\leq\sup_{x_{0} \in X_{0}}\phi(x_{0},t)+\epsilon\end{split} \tag{6}\]
As discussed in multiple works [13, 14], the reachability problem in neural networks is extremely challenging; it is an NP-complete problem.
## Reachability Analysis via Lipschitzian Optimisation
This section presents a novel solution taking advantage of recent advances in Lipschitzian optimisation [15, 16, 17]. But before employing Lipschitzian optimisation, we need to theoretically prove the Lipschitz continuity of the NNCS.
Figure 2: Architecture of a neural network controlled system. _Neural network controller \(\sigma\) takes measurements \(y\) as input and initiates the control input \(u\). The plant updates its states according to its dynamic \(\dot{x}=f(x,u)\) and generates new measurements of the states to be fed to the controller._
### Lipschitz Continuity of NNCSs
**Theorem 1** (Lipschitz Continuity of NNCS).: _Given an initial state \(X_{0}\) and a neural network controlled system \(\phi(x,t)\) with controller \(\sigma(x)\) and plant \(f(x,u)\), if \(f\) and \(\sigma\) are Lipschitz continuous, then the system \(\phi(x,t)\) is Lipschitz continuous in initial states \(\mathcal{X}_{0}\). A real constant \(K\geq 0\) exists for any time point \(t>0\) and for all \(x_{0},y_{0}\in\mathcal{X}_{0}\):_
\[\left|\phi(x_{0},t)-\phi(y_{0},t)\right|\leq K\left|x_{0}-y_{0}\right| \tag{7}\]
_The smallest \(K\) is the best Lipschitz constant for the system, denoted \(K_{best}\)._
Proof.: (sketch) The essential idea is to open the control loop and further demonstrate that the system \(\phi(x_{0},t)\) is Lipschitz continuous on \(x_{0}\). We duplicate the NNC and plant blocks to build an equivalent open-loop control system. Based on Definition 3, we have
\[\begin{split}&\left|\phi(x_{0},t)-\phi(y_{0},t)\right|\leq\left|x_ {0}-y_{0}\right|\\ +&\int_{t_{0}}^{t}\left|\,f(x(\tau),u_{x}(\tau))\ -\ f(y(\tau),u_{y}(\tau))\right|d\tau\end{split} \tag{8}\]
We add and subtract the term \(f(x(\tau),u_{y}(\tau))\) on the right side and assume that the Lipschitz constants of \(f\) on \(x\) and \(u\) are \(L_{x}\) and \(L_{u}\), respectively, and that the Lipschitz constant of the NN is \(L_{n}\); we can then transform Equation (8) into the following form:
\[\begin{split}&\left\|\phi(x_{0},t)-\phi(y_{0},t)\right\|\leq\\ &\left(L_{u}L_{n}(t-t_{0})+1\right)\left\|x_{0}-y_{0}\right\|+L_{ x}\int_{t_{0}}^{t}\left\|\,x(\tau)-y(\tau)\ \right\|\ d\tau.\end{split} \tag{9}\]
Based on Gronwall's theory of integral inequality (Gronwall, 1919), we further have the following equation:
\[\begin{split}&\left\|\phi(x_{0},t)-\phi(y_{0},t)\right\|\leq \left(L_{u}L_{n}(t-t_{0})+1\right)\left\|x_{0}-y_{0}\right\|e^{L_{x}(t-t_{0})}.\end{split} \tag{10}\]
Hence, we have demonstrated the Lipschitz continuity of the closed-loop system on its initial states with respect to the time interval \([0,\delta]\). We repeat this process through all the control steps and prove the Lipschitz continuity of NNCSs on initial states. See **Appendix-A1** for a detailed proof.
Footnote 1: All appendixes of this paper can be found at [https://github.com/TrustAI/DeepNNC/blob/main/appendix.pdf](https://github.com/TrustAI/DeepNNC/blob/main/appendix.pdf)
Based on Theorem 1, we can easily have the following lemma about the local Lipschitz continuity of the NNCS.
**Lemma 1** (Local Lipschitz Continuity of NNCS).: _If NNCS \(\phi(x,t)\) is Lipschitz continuous throughout \(X_{0}\), then it is also locally Lipschitz continuous in its sub-intervals. There exists a real constant \(k_{i}\geq 0\) for any time point \(t>0\), and \(\forall x_{0},y_{0}\in[a_{i},a_{i+1}]\subset X_{0}\), we have \(\left\|\phi(x_{0},t)-\phi(y_{0},t)\right\|\leq k_{i}\left\|x_{0}-y_{0}\right\|\)._
Based on Lemma 1, we develop a fine-grained strategy to estimate local Lipschitz constants, leading to faster convergence in Lipschitz optimisation.
### Lipschitzian Optimisation
In the reachability analysis, the primary aim is to provide the lower bound \(R\) of the minimal value \(\phi(x),x\in X_{0}\). We achieve tighter bounds by reasonably partitioning the initial input region \(X_{0}\) and evaluating the points at the edges of the partition. The pseudocode of the algorithm can be found in **Appendix-B**. Here, we explain the partition strategy and the construction of \(R\) in the \(k\)-th iteration.
1. We sort the input sub-intervals \(D_{0},D_{1},D_{2},...,D_{n}\), with \(D_{i}=[a_{i},a_{i+1}]\). In the first iteration, we only have one sub-interval with \(D_{0}=X_{0}=[a_{0},a_{1}]\).
2. We evaluate the system output \(\phi_{i}=\phi_{t}(a_{i})\) at the edge points \(a_{0},...a_{n}\), and define \(R_{i}\) as the character value of the interval \(D_{i}\) with the corresponding input \(x_{i}^{R}\): \[R_{i}=\frac{\phi_{i}+\phi_{i+1}}{2}+l_{i}\frac{a_{i}-a_{i+1}}{2}\] (11) \[x_{i}^{R}=\frac{\phi_{i}-\phi_{i+1}}{2l_{i}}+\frac{a_{i+1}+a_{i}}{2}.\] (12) Both \(R_{i}\) and \(x_{i}^{R}\) are determined by the information of the edge point and local Lipschitz constant \(l_{i}\).
3. We search for the interval with a minimum character value and minimal evaluated edge points: \[\begin{split}& R_{min}^{k}=\min\{R_{0},...,R_{n}\},\ j=\arg\min\{R_{0},...,R_{n}\}\\ & D_{j}=[a_{j},a_{j+1}],\ \hat{\phi}_{min}^{k}=\min\{\phi_{0},\phi_{1},...\phi_{n}\}.\end{split}\] (13) If \(\hat{\phi}_{min}^{k}-R_{min}^{k}\leq\epsilon\), the optimisation terminates, and we return \(R_{j}\) for the lower bound. Otherwise, we add a new edge point \(a_{new}=x_{j}^{R}\) to divide \(D_{j}\).
As illustrated in Figure 3, blue points mark the interval edges and red points indicate the characteristic values \(R\). Both \(R_{min}^{k}\) and \(\hat{\phi}_{min}^{k}\) approach the real minimum value \(\phi_{min}\). The optimisation ends when \(\hat{\phi}_{min}^{k}-R_{min}^{k}\leq\epsilon\), which implies \(R_{min}^{k}\leq\phi_{min}\leq R_{min}^{k}+\epsilon\).
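A minimal Python sketch of this one-dimensional procedure (Equations (11)–(13)) is given below; using a single, externally supplied Lipschitz constant `L` is a simplification of the dynamic estimation discussed later, and the function names are ours.

```python
import numpy as np

def lipschitz_minimise(phi, a, b, L, eps=1e-3, k_max=10_000):
    """Piyavskii/Shubert-style lower-bound optimisation on [a, b].
    Returns a lower bound R with R <= min phi <= R + eps on convergence."""
    xs = [a, b]
    ys = [phi(a), phi(b)]
    for _ in range(k_max):
        # Character value R_i (Eq. (11)) for every sub-interval
        R = [(ys[i] + ys[i + 1]) / 2 + L * (xs[i] - xs[i + 1]) / 2
             for i in range(len(xs) - 1)]
        j = int(np.argmin(R))
        if min(ys) - R[j] <= eps:      # phi_hat_min - R_min <= eps
            return R[j]
        # Split D_j at x_j^R (Eq. (12)) and evaluate the new edge point
        x_new = (ys[j] - ys[j + 1]) / (2 * L) + (xs[j + 1] + xs[j]) / 2
        xs.insert(j + 1, x_new)
        ys.insert(j + 1, phi(x_new))
    # Anytime fallback: the current minimum character value is still
    # a sound lower bound of min phi
    return min((ys[i] + ys[i + 1]) / 2 + L * (xs[i] - xs[i + 1]) / 2
               for i in range(len(xs) - 1))
```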
To extend the above strategy into a multi-dimensional case, we use the nested optimisation scheme to solve the problem in a recursive way.
\[\min_{x\in[p_{i},q_{i}]^{n}}\ \phi(x)=\min_{x_{1}\in[p_{1},q_{1}]}\ \cdots\min_{x_{n}\in[p_{n},q_{n}]}\phi(x_{1},...,x_{n}) \tag{14}\]
Figure 3: Demonstration of the optimisation process: _\(|\hat{\phi}_{min}^{k}-R_{min}^{k}|\) decreases with iteration \(k\); optimisation converges when \(|\hat{\phi}_{min}^{k}-R_{min}^{k}|<\epsilon\)_
We define the \(d\)-th level optimisation sub-problem,
\[w_{d}(x_{1},...,x_{d})=\min_{x_{d+1}\in[p_{d+1},q_{d+1}]}w_{d+1}(x_{1},...,x_{d+1}) \tag{15}\]
and for \(d=n\), \(w_{n}(x_{1},...,x_{n})=\phi(x_{1},x_{2},...,x_{n})\). Thus, we have \(\min_{x\in[p_{i},q_{i}]^{n}}\ \phi(x)=\min_{x_{1}\in[p_{1},q_{1}]}w_{1}(x_{1})\), which is actually a one-dimensional optimisation problem.
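The nested scheme of Equations (14)–(15) can then be sketched recursively, reusing the one-dimensional routine from the sketch above; sharing one Lipschitz constant `L` across all levels is a simplifying assumption.

```python
def nested_minimise(phi, bounds, L, fixed=(), eps=1e-3):
    """Minimise phi over a box given as a list of (p, q) pairs.
    Each evaluation of the outer 1-D problem is itself a nested
    minimisation over the remaining coordinates (Eq. (15))."""
    (p, q), rest = bounds[0], bounds[1:]
    if not rest:
        w = lambda x: phi(fixed + (x,))                 # w_n = phi
    else:
        w = lambda x: nested_minimise(phi, rest, L, fixed + (x,), eps)
    return lipschitz_minimise(w, p, q, L, eps)
```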
### Convergence Analysis
We discuss the convergence analysis in two circumstances, i.e., the one-dimensional problem and the multidimensional problem. In the one-dimensional problem, we divide the sub-interval \(D_{j}=[a_{j},a_{j+1}]\) into \(D^{1}_{new}=[a_{j},x_{j}^{R}]\) and \(D^{2}_{new}=[x_{j}^{R},a_{j+1}]\) in the \(k\)-th iteration. The new character values satisfy \(R^{1}_{new}-R_{j}=l_{j}\frac{a_{j+1}-x_{j}^{R}}{2}-\frac{\phi_{j+1}-\phi(x_{j}^{R})}{2}>0\) and \(R^{2}_{new}-R_{j}=l_{j}\frac{x_{j}^{R}-a_{j}}{2}+\frac{\phi(x_{j}^{R})-\phi_{j}}{2}>0\). Since \(R^{k+1}_{min}=\min(\{R_{0},...,R_{n}\}\setminus\{R_{j}\}\cup\{R^{1}_{new},R^{2}_{new}\})\), we confirm that \(R^{k}_{min}\) increases monotonically and is bounded.
In the multidimensional problem, the problem is transformed to a one-dimensional problem with nested optimisation. We introduce Theorem 2 for the inductive step.
**Theorem 2**.: _In optimisation, if \(\forall x\in\mathbb{R}^{d}\), \(\lim_{k\rightarrow\infty}R^{k}_{min}=\inf_{x\in[a,b]^{d}}\phi(x)\) and \(\lim_{k\rightarrow\infty}(\hat{\phi}^{k}_{min}-R^{k}_{min})=0\) are satisfied, then \(\forall x\in\mathbb{R}^{d+1}\), \(\lim_{k\rightarrow\infty}R^{k}_{min}=\inf_{x\in[a,b]^{d+1}}\phi(x)\) and \(\lim_{k\rightarrow\infty}(\hat{\phi}^{k}_{min}-R^{k}_{min})=0\) hold._
Proof.: (sketch) By the nested optimisation scheme, we have
\[\min_{x\in[a_{i},b_{i}]^{d+1}}\phi(x)=\min_{x\in[a,b]}W(x);\ \ W(x)=\min_{y\in[a_{i},b _{i}]^{d}}\phi(x,y) \tag{16}\]
Since \(\min_{y\in[a_{i},b_{i}]^{d}}\phi(x,y)\) is bounded by an interval error \(\epsilon_{\mathbf{y}}\), then we have \(|W(x)-W^{*}(x)|\leq\epsilon_{\mathbf{y}},\forall x\in[a,b]\), where \(W^{*}(x)\) is the accurate evaluation of the function.
For the inaccurate evaluation case, we have \(W_{min}=\min_{x\in[a,b]}W(x)\), \(R^{k}_{min}\) and \(\widehat{W}^{k}_{min}\). The termination criteria for both cases are \(|\widehat{W}^{k}_{min}-R^{k}_{min}|\leq\epsilon_{x}\) and \(|\widehat{W}^{k*}_{min}-R^{k}_{min}|\leq\epsilon_{x}\), and \(w^{*}\) represents the ideal global minimum. Based on the definition of \(R^{k}_{min}\), we have \(w^{*}-R^{k}_{min}\leq\epsilon_{\mathbf{y}}+\epsilon_{x}\). By analogy, we get \(\widehat{W}^{k}_{min}-w^{*}\leq\epsilon_{\mathbf{y}}+\epsilon_{x}\). Thus, the accurate global minimum is bounded. Theorem 2 is proved. See **Appendix C** for a detailed proof.
### Estimation of Lipschitz Constant
To enable a fast convergence of the optimisation, we need to provide a Lipschitz constant for NNCS. As we aim for a model-agnostic reachability analysis for NNCSs, we further propose two practical variants to estimate the Lipschitz constant for a black-box NNCS.
**Dynamic Estimation of Global Lipschitz Constant.** In the optimisation process, the entire input range \(X\) of the function \(\phi_{t}(x)\) is divided into finitely many sub-intervals \(D_{0},D_{1},D_{2},...,D_{n}\), where \(D_{i}=[a_{i},a_{i+1}]\). We update the global Lipschitz constant \(L\) according to the interval partitions:
\[L=r\cdot\max_{i}\bigg{|}\frac{\phi_{t}(a_{i+1})-\phi_{t}(a_{i})}{a_{i+1}-a_{i}}\bigg{|} \tag{17}\]
To avoid an underestimate of \(L\), we choose \(r>1\) and have
\[\lim_{j\rightarrow\infty}r\cdot\max_{i=1,...,j-1}\bigg{|}\frac{\phi_{t}(a_{i+ 1})-\phi_{t}(a_{i})}{a_{i+1}-a_{i}}\bigg{|}=r\cdot\sup_{a\in X}\frac{d\phi_{t }}{da}>K_{best}. \tag{18}\]
The dynamic update of \(L\) will approximate the best Lipschitz constant of the NNCS.
**Local Lipschitz Constant Estimation.** An alternative to improve efficiency is the adoption of local Lipschitz constants \(l_{i}\). We adopt the local adjustment (Sergeyev 1995) for local Lipschitz estimation, which considers not only global information, but also neighbouring intervals. For each sub-interval, we introduce \(m_{i}\):
\[m_{i}=|\phi_{t}(a_{i+1})-\phi_{t}(a_{i})|/|a_{i+1}-a_{i}|,M=\max\ m_{i} \tag{19}\]
We calculate sub-interval sizes \(d_{i}\) and select the largest \(D\).
\[d_{i}=|a_{i+1}-a_{i}|\,,D=\max\ d_{i} \tag{20}\]
Equation (21) estimates the local Lipschitz constant.
\[l_{i}=r\cdot\max\left\{m_{i-1},m_{i},m_{i+1},M\cdot d_{i}/D\right\} \tag{21}\]
It balances the local and global information. When the sub-interval is small, that is, \(M\cdot d_{i}/D\to 0\), \(l_{i}\) is decided by local information.
\[\lim_{d_{i}\to 0}l_{i}=r\cdot\max\left\{m_{i-1},m_{i},m_{i+1}\right\}=r\cdot\sup_{a\in[a_{i},a_{i+1}]}\frac{d\phi_{t}}{da}>k_{i}^{best}\]
When the sub-interval is large, that is, \(d_{i}\to D\), local information is not reliable and \(l_{i}\) is determined by global information.
\[\lim_{d_{i}\to D}l_{i}=r\cdot M=r\cdot\sup_{a\in X}\frac{d\phi_{t}}{da}>K_{ best}>k_{i}^{best} \tag{22}\]
Local Lipschitz constants provide a _more accurate_ polyline approximation to the original function, leading to faster convergence.
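A sketch of Equations (19)–(21) is given below; clipping the neighbour slopes at the boundary intervals is an implementation assumption, and `xs`, `ys` denote the current partition edges and their evaluations.

```python
import numpy as np

def local_lipschitz(xs, ys, r=1.5):
    """Local Lipschitz estimates l_i per sub-interval, balancing local
    slopes m_{i-1}, m_i, m_{i+1} against the global term M * d_i / D."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    m = np.abs(np.diff(ys)) / np.abs(np.diff(xs))   # slopes m_i, Eq. (19)
    d = np.abs(np.diff(xs))                         # interval sizes d_i, Eq. (20)
    M, D = m.max(), d.max()
    l = np.empty_like(m)
    for i in range(len(m)):
        neigh = m[max(i - 1, 0):i + 2]              # m_{i-1}, m_i, m_{i+1} (clipped)
        l[i] = r * max(neigh.max(), M * d[i] / D)   # Eq. (21)
    return l
```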
### Soundness and Completeness
**Theorem 3** (Soundness).: _Given a box-constrained initial input range \(X_{0}\) and a dangerous set \(S_{d}\), the verification of an NNCS via DeepNNC is sound anytime. DeepNNC can return an overestimation \(S\) of the real reachable set \(X\) at any iteration, even if it has not reached convergence._
Proof.: When we interrupt the optimisation at any iteration \(k\), lower bound \(R^{k}_{min}\) is returned for verification. We assume that the real minimal value \(\phi_{min}\) is located in the
Figure 4: \(X_{0}\) is the initial state set and \(S_{d}\) is the avoid set. DeepNNC can return an overestimation \(S_{any}\) in any iteration. Verification is sound but not complete by \(S_{any}\), and Verification is both sound and complete by \(S_{\epsilon\to 0}\).
sub-interval \([a_{i},a_{i+1}]\), i.e. \(\phi_{min}=\phi(x_{min})\) with \(x_{min}\in[a_{i},a_{i+1}]\).
\[R_{i}-\phi_{min}=\frac{\phi_{i}+\phi_{i+1}}{2}+l_{i}\frac{a_{i}-a_{i+1}}{2}-\phi_{min}\] \[=\frac{\phi_{i}-\phi_{min}}{2}+l_{i}\frac{a_{i}-x_{min}}{2}+\frac{\phi_{i+1}-\phi_{min}}{2}+l_{i}\frac{x_{min}-a_{i+1}}{2}\leq 0 \tag{23}\]
Thus, \(R_{min}^{k}-\phi_{min}\leq R_{i}-\phi_{min}\leq 0\) holds. When using \(R_{min}^{k}\) to estimate the output range \(S_{any}\), \(R_{min}^{k}\leq\phi_{min}\) ensures \(X\subset S_{any}\), where \(X\) is the real reachable set. Given an avoid set \(S_{d}\), if \(S_{any}\cap S_{d}=\emptyset\), then \(X\cap S_{d}=\emptyset\), the NNCS is safe. Thus, the approach DeepNNC is sound.
**Theorem 4** (Completeness).: _Given a box-constrained initial input range \(X_{0}\) and a dangerous set \(S_{d}\), verification via DeepNNC is complete only when optimisation reaches convergence with the overestimation error \(\epsilon\to 0\)._
Proof.: As demonstrated in the convergence analysis, with iteration number \(k\rightarrow\infty\) the optimisation can achieve convergence with error \(\epsilon\to 0\). In such circumstances, the lower bound \(R_{min}^{k}\rightarrow\phi_{min}\) results in an estimation \(S_{\epsilon\to 0}\to X\) with ignorable estimation error. If \(S_{\epsilon\to 0}\cap S_{d}\neq\emptyset\), then \(X\cap S_{d}\neq\emptyset\), and the NNCS is not safe.
## Experiments
We first compare DeepNNC with some state-of-the-art baselines. We then analyse the influences of parameter \(k_{max}\) and \(\epsilon\). Finally, we provide a case study on an airplane model. An additional case study of an adaptive cruise control system is presented in **Appendix D**.
### Comparison with State-of-the-art Methods
We test DeepNNC and the baseline methods on six benchmarks. See **Appendix-E** for the details of the benchmarks. We compare our method with ReachNN* [12], Sherlock [10] and Verisig 2.0 [2] in terms of efficiency. Regarding the accuracy comparison, we compare our method with Verisig [2] instead of Sherlock [10], since Sherlock's estimates loosen quickly as the number of control steps increases.
The time consumption of different approaches for estimating the reachable set at a predefined time is demonstrated in Table 2 and Figure 5. Our method can be applied to all benchmarks and has relatively better efficiency, especially for low-dimensional problems.
Figure 7 shows the estimated reachable sets at the beginning of each control time step and the simulated trajectories (red). Reachable sets of DeepNNC (blue) can cover the trajectories with a small overestimation error.
### Impact of \(\epsilon\) and \(k_{max}\)
Two parameters \(\epsilon\) and \(k_{max}\) are involved in our approach. \(\epsilon\) indicates the allowable error, and \(k_{max}\) represents the maximum number of iterations in each dimension. The iteration stops either when the difference between the upper bound (red square) and the lower bound (blue square) is smaller than \(\epsilon\) or when the iteration number in one dimension reaches \(k_{max}\). In Figure 8, we fix \(\epsilon=0.0005\) and change \(k_{max}=3,5,10,10000\). The estimation becomes tighter with a larger iteration number. In Figure 9 we study the influence of \(\epsilon\) by fixing the maximum iteration number \(k_{max}=10^{4}\) and changing \(\epsilon=0.5,0.05,0.005,0.0005\). The accuracy of the estimate increases with smaller \(\epsilon\).
### Case Study: Flying Airplane
We analyse a complex control system of a flying airplane, which is a benchmark in ARCH-COMP21 (Johnson et al., 2021). The system contains 12 states \([x,y,z,u,v,w,\alpha,\beta,\gamma,r,p,q]\); the technical details are in **Appendix-F**. Initial states are \(x=y=z=r=p=q=0\), \([u,v,w,\alpha,\beta,\gamma]\in[0,1]^{6}\). The goal set is \(y\in[-0.5,0.5]\) and \([\alpha,\beta,\gamma]\in[-1,1]^{3}\) for \(t<2s\). The controller here takes an input of 12 dimensions and generates a six-dimensional control input \(F_{x},F_{y},F_{z},M_{x},M_{y},M_{z}\). For simplicity, we choose a small subset of the initial input, \([u,v,w,\alpha,\beta,\gamma]\in[0.9,1]^{6}\); the estimated reachable sets of \(\alpha\) and \(y\) in the time range \([0s,2s]\) are presented in Figure 10(a). Both \(\alpha\) and \(y\) are above the safe range. Figure 10 also shows the reachable sets of \(\alpha\) and \(\gamma\), which are likewise above the safe region. The airplane NNCS is unsafe.
## Conclusion
We develop an NNCS verification tool, DeepNNC, that is applicable to a wide range of neural network controllers, as long as the whole system is Lipschitz-continuous. We treat NNCS verification as a series of optimisation problems, and the estimated reachable set bears a tight bound even in a long control time horizon. The efficiency of DeepNNC is mainly influenced by two factors, the Lipschitz constant estimation and the dimension of the input. For the first, we have adopted dynamic local Lipschitzian optimisation to improve efficiency. Regarding the high-dimensional reachability problem, a possible solution is to transform the high-dimension problem into a one-dimensional problem using a space-filling curve (Lera and Sergeyev, 2015). This idea can be explored in future work.
Figure 8: Reachable sets of B3 with ReLU at \(t=0.01s\), 0.1s, 0.2s, 0.4s, 1s, 3s, 6s, with \(\epsilon=0.0005\) and (a) \(k_{max}=3\); (b) \(k_{max}=5\); (c) \(k_{max}=10\); (d) \(k_{max}=10^{4}\)
Figure 6: Reachable sets at a predefined time point estimated by different approaches. The estimated reachable set at a time point is a polygon. The size of the area of the reachable set indicates the precision of the estimation results.
Figure 7: (a)B1 Sigmoid (b) B2 Sigmoid (c) B5 Sigmoid (d) B5 Tanh. Red trajectories are the simulated results. Blue reachable sets are generated from our approach. The yellow and green reachable sets are from the baseline methods.
## Acknowledgements
This work is supported by Partnership Resource Fund of ORCA Hub via the UK EPSRC under project [EP/R026173/1].
|
2310.15648 | Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio
Models | The introduction of large-scale audio datasets, such as AudioSet, paved the
way for Transformers to conquer the audio domain and replace CNNs as the
state-of-the-art neural network architecture for many tasks. Audio Spectrogram
Transformers are excellent at exploiting large datasets, creating powerful
pre-trained models that surpass CNNs when fine-tuned on downstream tasks.
However, current popular Audio Spectrogram Transformers are demanding in terms
of computational complexity compared to CNNs. Recently, we have shown that, by
employing Transformer-to-CNN Knowledge Distillation, efficient CNNs can catch
up with and even outperform Transformers on large datasets. In this work, we
extend this line of research and increase the capacity of efficient CNNs by
introducing dynamic CNN blocks, constructed of dynamic non-linearities, dynamic
convolutions and attention mechanisms. We show that these dynamic CNNs
outperform traditional efficient CNNs, in terms of the performance-complexity
trade-off and parameter efficiency, at the task of audio tagging on the
large-scale AudioSet. Our experiments further indicate that the introduced
dynamic CNNs achieve better performance on downstream tasks and scale up well,
attaining Transformer performance and even outperforming them on AudioSet and
several downstream tasks. | Florian Schmid, Khaled Koutini, Gerhard Widmer | 2023-10-24T09:08:20Z | http://arxiv.org/abs/2310.15648v1 | # Dynamic Convolutional Neural Networks
###### Abstract
The introduction of large-scale audio datasets, such as AudioSet, paved the way for Transformers to conquer the audio domain and replace CNNs as the state-of-the-art neural network architecture for many tasks. Audio Spectrogram Transformers are excellent at exploiting large datasets, creating powerful pre-trained models that surpass CNNs when fine-tuned on downstream tasks. However, current popular Audio Spectrogram Transformers are demanding in terms of computational complexity compared to CNNs. Recently, we have shown that, by employing Transformer-to-CNN Knowledge Distillation, efficient CNNs can catch up with and even outperform Transformers on large datasets. In this work, we extend this line of research and increase the capacity of efficient CNNs by introducing dynamic CNN blocks, constructed of dynamic non-linearities, dynamic convolutions and attention mechanisms. We show that these dynamic CNNs outperform traditional efficient CNNs, in terms of the performance-complexity trade-off and parameter efficiency, at the task of audio tagging on the large-scale AudioSet. Our experiments further indicate that the introduced dynamic CNNs achieve better performance on downstream tasks and scale up well, attaining Transformer performance and even outperforming them on AudioSet and several downstream tasks.
Dynamic Convolutional Neural Networks, Dynamic ReLU, Dynamic Convolution, Coordinate Attention, Audio Spectrogram Transformer, Audio Classification, Pre-trained Audio Models, Knowledge Distillation.
## I Introduction
Pre-trained deep neural networks have emerged as a pivotal paradigm in the field of machine learning over the last few years. Leveraging transfer learning techniques, pre-trained models can significantly enhance the performance on downstream tasks, in particular, when the training data is insufficient to learn an end-to-end model from scratch. Such models are typically pre-trained in a supervised or self-supervised fashion on large datasets, such as ImageNet [1] in the vision domain, or AudioSet [2] in the audio domain. While convolutional neural networks (CNNs) have been the method of choice during the past years to create pre-trained models in both the audio and the vision domain, Transformers [3] recently surpassed them due to their ability to scale up and exploit large datasets [4, 5]. With superior performance on the large pre-training datasets, Transformers then overtook CNNs also on many downstream tasks with smaller datasets. However, Transformers are computationally costly in training and inference, as the attention operation scales quadratically with respect to the processed sequence length. For this reason, CNNs maintain their dominance on resource-constrained platforms, such as mobile devices.
Recently, it has been shown in the audio domain that efficient CNNs can attain and even outperform Transformers on large-scale datasets when they are trained using Knowledge Distillation (KD) [6, 7, 8] from a Transformer ensemble. Concretely, Schmid et al. [9] train MobileNetV3s (MNs) [10] on AudioSet using offline KD from an ensemble consisting of 9 different Patchout FaSt Spectrogram Transformer (PaSST) [11] models. The resulting efficient pre-trained MNs have been shown to extract high-quality general-purpose audio embeddings that can generalize to downstream tasks in various audio domains such as music, speech and environmental sounds [12]. Compared to Transformers, the quality of extracted audio embeddings is comparable while the computational cost of inference is much lower. Although the MNs achieve excellent performance on AudioSet [9] and serve as high-quality audio embeddings extractors [12], they remain to be tested in end-to-end fine-tuning, in which a pre-trained model is directly fine-tuned on a downstream task.
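For intuition, an offline distillation objective of this kind could look as follows in PyTorch; this is a sketch rather than the exact recipe of [9], and the weighting \(\lambda\), the use of pre-computed ensemble logits, and sigmoid soft targets are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, labels, teacher_logits, lam=0.1):
    """Weighted sum of the label loss and a distillation loss towards
    stored teacher-ensemble logits. AudioSet tagging is multi-label,
    hence binary cross-entropy on sigmoid targets."""
    label_loss = F.binary_cross_entropy_with_logits(student_logits, labels)
    dist_loss = F.binary_cross_entropy_with_logits(
        student_logits, torch.sigmoid(teacher_logits))
    return lam * label_loss + (1 - lam) * dist_loss
```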
In this work, we extend this line of research and propose computationally efficient pre-trained audio models, obtained by integrating dynamic components into MobileNetV3s (DyMNs). We use the Knowledge Distillation pre-training setup of [9] and show that the proposed DyMNs achieve substantially higher pre-training performance on AudioSet compared to MNs. We train MNs and DyMNs of different complexity levels on the downstream tasks of polyphonic musical instrument recognition (OpenMic dataset [13]), environmental sound classification (ESC50 dataset [14]), acoustic scene classification (TAU Urban Acoustic Scenes 2020 challenge [15]), and sound event tagging (FSD50K [16]). The results show that MNs and DyMNs can attain or even surpass the performance of a single teacher model PaSST [11] on many downstream tasks, while being much more computationally efficient. The proposed DyMNs outperform MNs for the majority of downstream tasks and complexity levels.
The motivation for integrating dynamic components into MNs is threefold. Firstly, instead of scaling CNNs by network width and depth, which increases the computational complexity substantially, a variety of lightweight dynamic components such as dynamic non-linearities [17, 18], convolutions [19, 20] and attention mechanisms [21, 22, 23, 24, 25, 26, 27] have been proposed. These dynamic components have been shown
to increase the performance while only marginally adding to the computational complexity in terms of consumed multiply-accumulate operations (MACs) at inference time. Secondly, the success of Transformers is largely based on self-attention, a highly dynamic operation that adapts its attention weights based on input data. Thirdly, an ablation study conducted in [9] revealed that Squeeze-and-Excitation [21], a component that dynamically computes channel attention weights based on input data, is an integral part of MNs to achieve high performance on AudioSet [2].
Concretely, our proposed DyMN block deviates from the traditional residual inverted bottleneck block [28] in MNs in three ways: we include dynamic convolution [19], replace the non-linearity after the depthwise convolution with dynamic ReLU [17], and make use of Coordinate Attention [22] instead of Squeeze-and-Excitation [21]. Since all these dynamic components perform operations based on the context of the input sample, we compute the context once and share it across all dynamic components in a block.
To summarize our contribution, we extend the work of Schmid et al. [9] and introduce a dynamic CNN block to further boost the pre-training performance of efficient CNNs on AudioSet [2]. We show that the proposed dynamic CNNs outperform traditional efficient CNNs on downstream tasks in an end-to-end fine-tuning setting. The proposed dynamic CNNs attain and even outperform current Audio Spectrogram Transformers on several downstream tasks while being more computationally efficient.
The remainder of this paper is structured as follows. Related work is reviewed in Section II, followed by the introduction of the proposed dynamic model in Section III. The pre-training setup and results are presented in Section IV; Section V then shows the experiments and results on the downstream tasks. A detailed systematic configuration study and an analysis of the dynamic components are performed in Sections VI and VII, respectively. The paper is concluded in Section VIII.
## II Related Work
In this section, the related work is covered. We start with a general recap on efficient CNN architectures and popular dynamic CNN components that were introduced to further increase a CNN's computational efficiency. We then cover the literature on pre-trained audio models to set the stage for introducing our dynamic CNN serving as an efficient, pre-trained audio model.
### _Efficient CNN Architectures_
Much effort has been invested in research on designing efficient CNNs, such as the series of MobileNets [28, 29, 10], EfficientNets [30, 31] or ShuffleNets [32, 33]. MobileNetV1 [29] substantially reduces the computational complexity of conventional convolution layers by factorizing them into a depthwise and a 1x1 pointwise convolution. On top of this, MobileNetV2 [28] introduces inverted residual blocks with linear bottlenecks, leading to better accuracy and computational efficiency. MobileNetV3 [10] additionally adds Squeeze-and-Excitation [21] layers after the depthwise filters, upgrades activation functions using swish non-linearity [34] and optimizes the global network structure using platform-aware network architecture search [35]. EfficientNet [30] builds on the MobileNetV2 inverted residual block and introduces compound scaling laws of depth, width and input resolution. Similar to MobileNetV3, EfficientNetV2 [31] performs a neural architecture search to optimize parameter efficiency and training speed. Originally introduced in the vision domain, MobileNets and EfficientNets have been shown to provide a good performance-complexity trade-off also in the audio domain [36, 37, 38, 9].
### _Dynamic CNN Components_
While scaling CNNs by width and depth typically improves performance, it significantly increases the model's complexity. In particular, the computational demand of a CNN scales with the square of the model's width. As an alternative strategy, a lot of research has focused on using a fixed number of channels more efficiently by introducing dynamic components to CNNs. While the majority of work proposes attention mechanisms [21, 22, 23, 24, 25, 26, 27], others have used dynamic convolutions [19, 20, 39, 40, 41] and dynamic non-linearities [17, 18].
**Attention Mechanisms:** The most prominent instance of a CNN attention mechanism is Squeeze-and-Excitation (SE) [21] being integrated into MobileNetV3 [10] and EfficientNets [30, 31]. As shown in Eq. 1, SE applies a squeeze (\(F_{sq}\)) and an excitation (\(F_{ex}\)) operation on a feature map to obtain channel recalibration weights \(\mathbf{s}\). \(F_{sq}\) applies global average pooling (GAP) to collect contextual information while \(F_{ex}\) captures channel-wise dependencies via a learnable non-linear transformation followed by a sigmoid activation to compute weights.
\[\mathbf{s}=F_{ex}(F_{sq}(\mathbf{x}),\mathbf{W}) \tag{1}\]
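To make the squeeze and excitation steps concrete, the following is a minimal PyTorch sketch of Eq. 1; the reduction ratio of 4 and the two-layer MLP with ReLU are common choices and should be read as assumptions rather than the exact configuration of [21].

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Minimal SE sketch: GAP squeeze, MLP excitation, channel recalibration."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (B, C, F, T)
        s = x.mean(dim=(2, 3))               # F_sq: global average pooling
        s = self.excite(s)                   # F_ex: learnable gating weights
        return x * s[:, :, None, None]       # channel-wise recalibration
```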
Other attention mechanisms differ mainly in how they realize \(F_{sq}\) and \(F_{ex}\). The Style-based Recalibration Module (SRM) [23] extends SE by combining GAP and global standard pooling to realize \(F_{sq}\), followed by a channel-wise fully connected layer, batch normalization and a sigmoid activation function for \(F_{ex}\). CBAM [24] additionally computes attention weights for spatial locations using a channel pooling operation to squeeze contextual information. Coordinate Attention (CA) [22] factorizes the attention mechanism into two separate context encoding processes of the spatial dimensions. Features are aggregated by GAP along the spatial dimensions, processed separately and then used as spatially aware recalibration weights. Triplet Attention (TA) [26] reduces the three dimensions of a feature map by average and max pooling, processes the three 2-dimensional slices separately, and recalibrates the feature map with the resulting three sets of recalibration weights. Global Context (GC) blocks [27] differ from the aforementioned attention mechanisms by performing additive instead of multiplicative recalibration. The recently introduced Global Response Normalization (GRN) [42] serves as a particularly lightweight attention module by recalibrating
the channels based on their L2-norms and adding only two learnable parameters, a scale and a shift.
**Dynamic Convolution:** In contrast to standard convolution layers, dynamic convolutions adapt the kernel weights based on global context information extracted from an input. Early approaches in the vision domain have explored generating the kernels directly [43, 44], resulting in a substantial increase in complexity as the number of parameters of convolution kernels is large. A more lightweight approach is to predict coefficients to linearly combine a fixed set of kernels. In this spirit, CondConv [20] computes a linear mixture of \(K\) distinct trainable kernel weights. As shown in Eq. 2, only a single convolution (\(*\)) with the mixed kernel has to be performed, as this is equivalent to combining the results of \(K\) individual convolutions. The weights \(\alpha_{i}\) per filter are computed dynamically from the input using GAP, a linear transformation, and a sigmoid activation.
\[\begin{split}\alpha_{1}(W_{1}*x)+\alpha_{2}(W_{2}*x)+...+\alpha _{K}(W_{K}*x)=\\ (\alpha_{1}W_{1}+\alpha_{2}W_{2}+...+\alpha_{K}W_{K})*x\end{split} \tag{2}\]
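The equivalence stated in Eq. 2 can be verified directly; the snippet below (with made-up shapes and random attention weights in place of the predicted ones) checks that one convolution with the mixed kernel matches the mixture of \(K\) individual convolutions up to floating-point error.

```python
import torch
import torch.nn.functional as F

K, C_in, C_out = 4, 8, 16
x = torch.randn(1, C_in, 32, 32)
kernels = [torch.randn(C_out, C_in, 3, 3) for _ in range(K)]
alpha = torch.softmax(torch.randn(K), dim=0)  # stands in for predicted weights

# right-hand side of Eq. 2: mix the kernels, then convolve once
mixed = sum(a * W for a, W in zip(alpha, kernels))
y_one_conv = F.conv2d(x, mixed, padding=1)
# left-hand side of Eq. 2: convolve with each kernel, then mix the outputs
y_k_convs = sum(a * F.conv2d(x, W, padding=1) for a, W in zip(alpha, kernels))

assert torch.allclose(y_one_conv, y_k_convs, atol=1e-4)
```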
DyNet [41] proposes a similar dynamic convolution mechanism, motivated from the perspective of extracting noise-irrelevant features, showing that dynamic filters generate more diverse feature maps with lower cross-correlation. A common global context is shared across CNN blocks, e.g., the global context extracted by GAP from a block input is used to parameterize the three dynamic convolutions in an inverted bottleneck block as used in MobileNetV2s [28]. The dynamic convolution introduced in [19] compresses the kernel space by introducing the constraint \(\sum_{k}\alpha_{k}=1\) on the computed kernel attention weights. As a result, a smaller number of kernels \(K\) can be used, saving computations and parameters. The constraint is enforced by applying a softmax instead of a sigmoid activation. Temperature scaling is used before the softmax to ensure near-uniform attention in early epochs.
Besides dynamic convolution in the vision domain, some works have explored generating input-aware convolutional filters based on the input sentences in NLP [39, 45, 46]. In the audio domain, temporal dynamic convolutions (TDY) [47] and frequency dynamic convolutions (FDY) [48] have gained popularity recently. TDY dynamically adapts the filters along the time axis to consider time-varying characteristics of speech; FDY has been shown to improve sound event detection by dynamically adapting the filters along the frequency axis, addressing the fact that the frequency dimension is not shift invariant. However, both TDY and FDY execute \(K\) parallel convolutions before combining the results, which leads to considerable computational overhead during inference.
**Dynamic Activation Function:** Less explored in comparison to CNN attention mechanisms and dynamic convolutions is the use of dynamic activation functions. Si et al. [18] dynamically adapt the threshold value of ReLU activations in an MLP network. Chen et al. extend this line of research by introducing dynamic ReLU (Dy-ReLU) [17], which works as a dynamic and efficient version of Maxout [49]. A hyperfunction, similar to Squeeze-and-Excitation [21], is used to dynamically generate coefficients for \(M\) linear mappings. After normalizing the coefficients (slopes and intercepts) to specific ranges, the element-wise maximum is applied across the \(M\) different mappings. Most commonly, \(M=2\) is used and linear mappings are spatially shared but differ across channels.
### _Pre-trained Audio Models_
Models in the audio domain are typically pre-trained in a supervised or self-supervised way on large-scale datasets, such as AudioSet [2]. AudioSet consists of around 2 million weakly labeled 10-second audio snippets downloaded from YouTube. The audio clips were manually annotated with 527 different event classes hierarchically sorted in an ontology.
PANNs [36] introduced a series of AudioSet-pre-trained CNNs of varying complexities and architectures, which are widely used for downstream applications in audio-related domains such as sound event detection [50], automated audio captioning [51], language-based audio retrieval [52], emotion recognition [53], or even optical fiber sensing [54]. The prevalence of PANNs across many different downstream application areas underlines the community's interest in pre-trained audio models for end-to-end fine-tuning. The study presented in [36] includes MobileNetV2 [28] which shows a good performance-complexity trade-off but falls behind CNN14 [36] in terms of performance. Gong et al. [37] improve over PANNs in terms of performance and complexity by using an EfficientNet-B2 [30] pre-trained on ImageNet [1], balanced sampling, and label enhancement. ERANNs [55] improve the performance on AudioSet further while controlling efficiency by temporal downsampling via strided convolutions. However, despite all these improvements, CNNs have been substantially outperformed by supervised [5, 56, 11, 38] and self-supervised [57, 58, 59] Transformer models.
Recently, it has been shown that the sharp increase in audio tagging performance on AudioSet achieved by Transformers can be exploited and transferred to efficient CNNs using Knowledge Distillation (KD) [6, 8]. In this context, Gong et al. [38] found that Transformers and CNNs are good teachers for each other, improving the performance of both models, and Schmid et al. [9] use Transformer-to-CNN KD to match the performance of a PaSST [11] Transformer model with a MobileNetV3 having only 6% of the parameters and requiring 100 times fewer MACs. These efficient, pre-trained CNNs have been shown to extract high-quality audio representations [12] and have a high potential to be fine-tuned for low-complexity on-device applications. In parallel to our work, which focuses on architectural improvements of efficient CNNs with dynamic components, Dinkel et al. [60] recently improved the KD setup introduced in [9] by using consistent ensemble distillation and an improved teacher model.
## III Proposed dynamic model
This section introduces the proposed dynamic model consisting of dynamic ReLU (Dy-ReLU) [17], dynamic convolutions (Dy-Conv) [19], and Coordinate Attention (CA) [22]. This design decision is based on our belief that the three dynamic methods are complementary: Dy-Conv can extract
noise-invariant features [41], CA detects important channels, time frames and frequency bins, and Dy-ReLU increases the model's expressiveness by applying a dynamic non-linear function. These dynamic components are integrated into efficient inverted residual blocks (IR blocks) [28], as our focus is on creating efficient pre-trained audio models. In particular, we use the global network design of MobileNetV3-Large (MN) [10] and scale the models by network width using a width multiplier \(\alpha\). MN is optimized towards latency and has been shown to provide an excellent performance-complexity trade-off on AudioSet [9].
In the following, we describe our proposed dynamic IR block in a top-down manner. We start by reviewing the conventional IR block in Section III-A, and introduce the modifications that lead to the dynamic IR block in Section III-B. We then zoom into the four central components of the dynamic IR block: the Context Generation Module (Section III-C), Dy-ReLU (Section III-D), Dy-Conv (Section III-E) and CA (Section III-F). For all these additional components, we will identify the computationally most costly part in terms of multiply-accumulate operations (MACs) and compare it to the cost of convolutions in the conventional IR block.
### _Inverted Residual Block_
IR blocks [28] are constructed of (1) a pointwise channel expansion convolution projecting the number of channels from \(C_{\textit{IN}}\) to \(C_{\textit{EXP}}\) using 1x1 kernels, (2) a depthwise convolution operating on each of the \(C_{\textit{EXP}}\) channels independently, and (3) a pointwise projection convolution projecting the channels from \(C_{\textit{EXP}}\) to \(C_{\textit{OUT}}\) with 1x1 kernels. For most blocks, it holds that \(C_{\textit{IN}}=C_{\textit{OUT}}\). However, transition blocks increase the number of channels, leading to \(C_{\textit{OUT}}>C_{\textit{IN}}\). Each convolution is followed by batch normalization and ReLU activation, except for the linear bottleneck after the last convolution. The depthwise convolution can be strided to downsample the spatial dimensions. If the spatial dimensions and the number of channels match, a residual connection from block input to block output is used.
The pointwise convolutions are the computationally most expensive operations in an IR block. Specifically, given a block with \(C_{\textit{OUT}}\) output channels, \(C_{\textit{EXP}}\) channels in the expanded channel representation and spatial output dimensions of sizes \(T_{\textit{OUT}}\) and \(F_{\textit{OUT}}\), the final pointwise convolution performs \(C_{\textit{EXP}}*C_{\textit{OUT}}*T_{\textit{OUT}}*F_{\textit{OUT}}\) MACs.
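As a rough illustration of this cost accounting, the helpers below count the MACs of the convolutions in an IR block; the concrete channel counts and grid size are made-up example values, not taken from the actual MN architecture.

```python
def pointwise_conv_macs(c_in: int, c_out: int, t: int, f: int) -> int:
    # 1x1 convolution: each of the c_out * t * f outputs needs c_in MACs
    return c_in * c_out * t * f

def depthwise_conv_macs(c: int, k: int, t: int, f: int) -> int:
    # kxk depthwise convolution: one k*k filter per channel
    return c * k * k * t * f

# hypothetical block with C_IN = C_OUT = 80, C_EXP = 480 and a 10x32 grid
expand = pointwise_conv_macs(80, 480, 10, 32)   # first pointwise convolution
depth = depthwise_conv_macs(480, 3, 10, 32)     # 3x3 depthwise convolution
project = pointwise_conv_macs(480, 80, 10, 32)  # final pointwise convolution
print(expand, depth, project)                   # the pointwise convs dominate
```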
### _Dynamic Inverted Residual Block_
Starting from the conventional IR block, we apply the following modifications to integrate the dynamic components:
1. **Conv \(\rightarrow\) Dy-Conv**: We replace all three convolutions in the IR block with Dy-Conv [19].
2. **ReLU \(\rightarrow\) Dy-ReLU**: The ReLU activation function after the depthwise convolution is replaced by Dy-ReLU [17]. As will be shown in Section VI, using additional Dy-ReLUs after the two pointwise convolutions does not yield further improvements.
3. **Squeeze-and-Excitation \(\rightarrow\) CA**: Instead of SE [21], CA [22] is used as the attention mechanism. As will be shown in Section VI, this replacement yields substantial performance gains.
Fig. 1 shows the overall structure of the dynamic IR block. All three types of dynamic components have in common that they require statistics extracted from the input sample for dynamic parameterization and reweighting. We share the computation of these statistics across all dynamic components in a common Context Generation Module (CGM), which will be explained in detail in Section III-C. The CGM operates on the input of the IR block and outputs an embedded time and frequency sequence, i.e., two separate lists of embeddings for time frames and frequency bins (depicted as green blocks in Fig. 1). The following sections on Dy-ReLU (III-D), Dy-Conv (III-E) and CA (III-F) will explain in detail how these sequences are processed for dynamic parameterization and reweighting.

Fig. 1: Dynamic inverted residual block: Starting from the conventional inverted residual (IR) block, all convolutions are replaced by dynamic convolutions (Dy-Conv) [19]; Coordinate Attention (CA) [22] is used instead of Squeeze-and-Excitation [21]; and dynamic ReLU (Dy-ReLU) [17] replaces the non-linear activation function after the depthwise convolution. The Context Generation Module (CGM) operates on the block input and extracts embedded time and frequency sequences used to parameterize Dy-ReLU, Dy-Convs and CA. Dynamic components are depicted in red, and context generation is shown in green. The shape of the input feature map is denoted in terms of _batch size \(\times\) channels \(\times\) frequency bands \(\times\) time frames_.

Fig. 2: Context Generation Module (CGM): a zoom into the green parts of Fig. 1. The CGM operates on the input of an IR block and outputs a time and frequency sequence embedded in a reduced channel dimension of size \(H\). These sequences are used to parameterize Dy-ReLU [17] and Dy-Convs [19] and to compute the channel-time and channel-frequency recalibration weights for CA. The shape of the input feature map is denoted in terms of _batch size \(\times\) channels \(\times\) frequency bands \(\times\) time frames_.
### _Context Generation Module_
The goal of the CGM is to collect informative statistics from an input sample that can be used to parameterize the dynamic components without creating substantial computational overhead. Fig. 2 depicts the details of this process. The CGM transforms the block input feature map into embedded time (\(S_{T}\)) and frequency (\(S_{F}\)) sequences. Inspired by the original CA module [22], introduced in the vision domain, the input feature map is pooled separately across the two spatial dimensions to retain positional information. The resulting time and frequency sequences are then processed by a shared transformation consisting of a linear layer, a batch-norm [61] and hardswish activation [10]. The linear layer embeds the channel dimension of size \(C_{\textit{IN}}\) into a reduced space with \(H\) dimensions. We set \(H\) to a fraction of the expanded channel representation, such that \(H=C_{\textit{EXP}}/r\), where \(r=4\) in our experiments.
The computationally most expensive operation in the CGM is the linear layer, which performs \(C_{\textit{IN}}*H*(T_{\textit{IN}}+F_{\textit{IN}})\) MACs. Compared to the first pointwise convolution in the IR block, which requires \(C_{\textit{IN}}*C_{\textit{EXP}}*T_{\textit{IN}}*F_{\textit{IN}}\) MACs, the computational demand of the CGM is insignificant.
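A possible PyTorch sketch of the CGM is given below; the choice of mean pooling and the exact placement of the batch normalization follow our reading of Fig. 2 and should be treated as assumptions.

```python
import torch
import torch.nn as nn

class ContextGenerationModule(nn.Module):
    """CGM sketch: pool over each spatial axis, embed channels into size H."""
    def __init__(self, c_in: int, h: int):
        super().__init__()
        # shared transformation applied to the time and frequency sequences
        self.embed = nn.Linear(c_in, h)
        self.norm = nn.BatchNorm1d(h)
        self.act = nn.Hardswish()

    def _transform(self, s):                     # s: (B, L, C_IN)
        s = self.embed(s)                        # (B, L, H)
        s = self.norm(s.transpose(1, 2)).transpose(1, 2)
        return self.act(s)

    def forward(self, x):                        # x: (B, C_IN, F, T)
        s_t = x.mean(dim=2).transpose(1, 2)      # (B, T, C_IN): pool over freq
        s_f = x.mean(dim=3).transpose(1, 2)      # (B, F, C_IN): pool over time
        return self._transform(s_t), self._transform(s_f)
```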
### _Dy-ReLU_
Our specific implementation of dynamic ReLU is the spatially-shared and channel-wise Dy-ReLU-B introduced in [17]. Applying Dy-ReLU-B to a feature map requires predicting \(M\) linear mappings for each channel. The output of Dy-ReLU-B is then computed by taking the elementwise maximum across the \(M\) linear mappings. Specifically, given \(x_{c}\), the input plane of the channel with index \(c\), and the predicted coefficients \(\alpha_{c}^{m}\) (slope) and \(\beta_{c}^{m}\) (intercept) for that channel, the output plane \(y_{c}\) is calculated as follows:
\[y_{c}=\max_{1\leq m\leq M}\{\alpha_{c}^{m}x_{c}+\beta_{c}^{m}\} \tag{3}\]
Dy-ReLU-B requires in total \(2*M*C\) dynamic coefficients, resulting from slope and intercept for \(M\) linear mappings for each of the \(C\) channels.
We apply the transformation shown in Eq. 4 to predict the dynamic coefficients from the CGM output sequences \(S_{T}\) and \(S_{F}\). Firstly, the two sequences are concatenated and pooled over the sequence length dimension, resulting in a vector of size \(H\). Secondly, a trainable linear transformation with parameters \(W\in\mathbb{R}^{2*M*C_{\textit{EXP}}*H}\) and \(b\in\mathbb{R}^{2*M*C_{\textit{EXP}}}\) is used to predict the Dy-ReLU coefficients collected in the vector \(C_{\text{dyrelu}}\). Most commonly, \(M=2\) linear mappings are used [17], which is also the default value in our setup.
\[C_{\text{dyrelu}}=\mathrm{Pool}(\mathrm{Concat}[S_{T},S_{F}])W^{T}+b \tag{4}\]
The computational cost of Dy-ReLU is dominated by the calculation of elementwise linear mappings (\(\alpha_{c}^{m}x_{c}+\beta_{c}^{m}\)) as part of Eq. 3. The total cost of computing \(M\) linear mappings is \(M*C_{\textit{EXP}}*T_{\textit{OUT}}*F_{\textit{OUT}}\) MACs. Since \(M=2\) is much smaller than the number of block output channels \(C_{\textit{OUT}}\), the cost of Dy-ReLU is insignificant compared to the cost of the final pointwise convolution (\(C_{\textit{OUT}}*C_{\textit{EXP}}*T_{\textit{OUT}}*F_{\textit{OUT}}\)) as discussed in Section III-A.
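Putting Eqs. 3 and 4 together, a channel-wise Dy-ReLU-B driven by the CGM sequences could be sketched as follows; [17] normalizes slopes and intercepts to specific ranges, and the scaled-tanh normalization around an identity initialization used here is only an illustrative assumption.

```python
import torch
import torch.nn as nn

class DyReLUB(nn.Module):
    """Sketch of spatially-shared, channel-wise Dy-ReLU-B (Eqs. 3 and 4)."""
    def __init__(self, h: int, channels: int, m: int = 2):
        super().__init__()
        self.channels, self.m = channels, m
        self.coefs = nn.Linear(h, 2 * m * channels)  # slopes and intercepts

    def forward(self, x, s_t, s_f):        # x: (B, C, F, T); s_*: (B, L, H)
        ctx = torch.cat([s_t, s_f], dim=1).mean(dim=1)   # Eq. 4 pooling: (B, H)
        theta = self.coefs(ctx).view(-1, 2, self.m, self.channels)
        # assumed normalization: slopes near 1 and intercepts near 0 at init
        alpha = 1.0 + 0.5 * torch.tanh(theta[:, 0])      # (B, M, C)
        beta = 0.5 * torch.tanh(theta[:, 1])             # (B, M, C)
        y = alpha[..., None, None] * x.unsqueeze(1) + beta[..., None, None]
        return y.max(dim=1).values   # Eq. 3: elementwise max over M mappings
```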
### _Dy-Conv_
Our implementation of Dy-Conv is based on the dynamic convolution introduced by Chen et al. [19]. \(K\) different kernels are kept in parallel and are aggregated based on predicted kernel attention weights \(\tilde{\alpha}_{k}\). The sum of the kernel attention weights is normalized to 1 using a softmax function with temperature scaling:
\[\alpha_{k}=\frac{\exp(\tilde{\alpha}_{k}/\tau)}{\sum_{k}^{K}\exp(\tilde{\alpha }_{k}/\tau)} \tag{5}\]
The aggregated dynamic kernel \(W\) is then constructed as a weighted sum of the \(K\) individual kernels: \(W=\sum_{k}^{K}\alpha_{k}W_{k}\). By default, we stick with the settings recommended in [19] and use \(K=4\), and we linearly anneal the temperature \(\tau\) from 30 to 1 over the first epochs of training.

Fig. 3: Coordinate Attention (CA): CA highlights important channels, frequency bins and time frames by recalibrating the feature map by elementwise multiplication with attention weights. It operates on the output sequences of the CGM and transforms them separately into the channel-time \(\tilde{S}_{T}\) and channel-frequency \(\tilde{S}_{F}\) attention weights. Average Pooling with a kernel size of 3 and a stride of 2 is used in case of a strided IR block. The linear transformation upsamples the number of channels from \(H\) to \(C_{\textit{EXP}}\) to match the dimensionality of the feature map after the depthwise convolution.
To obtain predictions for the \(K\) kernel attention weights from the CGM output sequences, we apply the transformation shown in Eq. 6. The only difference from the Dy-ReLU coefficient prediction lies in the shape of the trainable linear transformation and of the resulting coefficient vector \(C_{\mathrm{dyconv}}\). Specifically, we use \(W\in\mathbb{R}^{K\times H}\) and \(b\in\mathbb{R}^{K}\), resulting in a vector \(C_{\mathrm{dyconv}}\) of size \(K\).
\[C_{\mathrm{dyconv}}=\mathrm{Pool}(\mathrm{Concat}[S_{T},S_{F}])W^{T}+b \tag{6}\]
The computationally most expensive operation is the aggregation of the \(K\) kernels requiring at most \(K*C_{EXP}*C_{OUT}\) MACs for the final pointwise convolution. Since \(K=4\) is much smaller than \(T_{OUT}*F_{OUT}\), constructing the dynamic kernel is insignificant compared to the convolution itself.
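A sketch of such a Dy-Conv layer, reusing the pooled CGM context as in Eq. 6, is shown below; the per-sample loop is kept for readability, whereas an efficient implementation would typically batch the mixed kernels via a grouped convolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DyConv2d(nn.Module):
    """Sketch of dynamic convolution (Eqs. 5 and 6) with K mixed kernels."""
    def __init__(self, h: int, c_in: int, c_out: int, kernel_size: int = 1,
                 k: int = 4, tau: float = 1.0):
        super().__init__()
        self.tau = tau
        self.weight = nn.Parameter(
            0.02 * torch.randn(k, c_out, c_in, kernel_size, kernel_size))
        self.attn = nn.Linear(h, k)     # predicts the K kernel attention logits

    def forward(self, x, s_t, s_f):     # x: (B, C_IN, F, T)
        ctx = torch.cat([s_t, s_f], dim=1).mean(dim=1)            # (B, H)
        alpha = torch.softmax(self.attn(ctx) / self.tau, dim=-1)  # Eq. 5
        out = []
        for b in range(x.size(0)):      # one mixed kernel per sample
            w = (alpha[b, :, None, None, None, None] * self.weight).sum(dim=0)
            out.append(F.conv2d(x[b:b + 1], w, padding=w.shape[-1] // 2))
        return torch.cat(out, dim=0)
```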
### _Coordinate Attention_
The purpose of CA [22] is to emphasize important spatial positions and channels by recalibrating a feature map with channel-time and channel-frequency weights. As depicted in Fig. 3, CA takes as input the embedded time and frequency sequences as produced by the CGM, performs separate transformations per sequence, and outputs the respective attention weights. Eq. 7 shows the transformation from the embedded sequences \(S_{\{T,F\}}\) to the respective attention weights \(\tilde{S}_{\{T,F\}}\). To match the dimensions of the feature map, an average pooling operation with kernel size 3 and a stride of 2 is applied in case of a strided depthwise convolution in the IR block. The result is processed by a trainable linear layer with parameters \(W\in\mathbb{R}^{C_{EXP}*H}\) and \(b\in\mathbb{R}^{C_{EXP}}\) to match the channel dimension of the feature map. The final sigmoid function converts the resulting sequences into attention weights that are used to recalibrate the feature map via elementwise multiplications.
\[\tilde{S}_{\{T,F\}}=\mathrm{Sigmoid}(\mathrm{Pool}(S_{\{T,F\}})W^{T}+b) \tag{7}\]
The computationally most expensive operations in CA are the linear layers, which jointly perform \(H*C_{EXP}*(T_{OUT}+F_{OUT})\) MACs. Compared to the final pointwise convolution in the IR block, which requires \(C_{OUT}*C_{EXP}*T_{OUT}*F_{OUT}\) MACs, the computational demand of CA is negligible since \(C_{OUT}\approx H\) and the product of the spatial dimensions is typically much larger than their sum.
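The following sketch shows how Eq. 7 could be realized on top of the CGM outputs; the use of two separate linear projections for the time and frequency sequences and the padding in the strided pooling are our assumptions.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Sketch of CA (Eq. 7) operating on the embedded CGM sequences."""
    def __init__(self, h: int, c_exp: int, strided: bool = False):
        super().__init__()
        self.pool = (nn.AvgPool1d(3, stride=2, padding=1)
                     if strided else nn.Identity())
        self.proj_t = nn.Linear(h, c_exp)
        self.proj_f = nn.Linear(h, c_exp)

    def _weights(self, s, proj):            # s: (B, L, H)
        s = self.pool(s.transpose(1, 2)).transpose(1, 2)  # match strided dims
        return torch.sigmoid(proj(s))       # (B, L, C_EXP)

    def forward(self, x, s_t, s_f):         # x: (B, C_EXP, F, T)
        w_t = self._weights(s_t, self.proj_t).transpose(1, 2)  # (B, C_EXP, T)
        w_f = self._weights(s_f, self.proj_f).transpose(1, 2)  # (B, C_EXP, F)
        return x * w_f[:, :, :, None] * w_t[:, :, None, :]
```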
## IV Pre-Training on large-scale Audio Tagging
In this section, we report the pre-training results of the introduced dynamic CNNs on the task of large-scale audio tagging on AudioSet [2]. That is, models need to assign one or multiple labels out of 527 classes to 10-second audio clips. Since AudioSet must be downloaded from YouTube, different proportions of the dataset can be successfully downloaded. In this regard, our setup is strictly comparable to the dataset used in [11] and [9]. The proposed DyMN is scaled to three different complexities using width multipliers \(\alpha\in\{0.4,1,2\}\). Adapting the width multiplier changes the number of channels in the IR blocks while keeping the total number of blocks constant. We denote the resulting models as DyMN small (DyMN-S), DyMN medium (DyMN-M) and DyMN large (DyMN-L). By default, we replace all 15 IR blocks with their dynamic counterparts; however, we will also discuss the effect of applying the dynamic IR blocks selectively in Section VI. The results will be compared in terms of parameter and computational efficiency to other models pre-trained on AudioSet. In particular, we are interested in a comparison to the non-dynamic counterpart of our proposed DyMNs, the efficient MNs used in [9].
### _Pre-Training Setup_
#### Iv-A1 Preprocessing and Augmentation
To pre-train our models on AudioSet, we match the preprocessing used in [11] and [9]. We use mono audio with a sampling rate of 32 kHz and apply an STFT with a window length of 25 ms and a hop size of 10 ms. Mel spectrograms are computed using a Mel filterbank with 128 frequency bins, and the minimum and maximum frequencies are randomly perturbed within ranges of 10 Hz and 2 kHz, respectively. Mixup [62] with a mixing coefficient of 0.3 is the only spectrogram-level data augmentation used, since we use offline KD as described in Section IV-B and it has been shown that consistent KD is beneficial [9, 63].
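A torchaudio-based sketch of this feature extraction (without the random filterbank perturbation) is given below; the STFT parameters are derived from the stated 25 ms window and 10 ms hop at 32 kHz.

```python
import torchaudio

# 25 ms window and 10 ms hop at 32 kHz correspond to 800 and 320 samples
mel_extractor = torchaudio.transforms.MelSpectrogram(
    sample_rate=32000,
    n_fft=800,
    win_length=800,
    hop_length=320,
    n_mels=128,
)
# mel = mel_extractor(waveform)  # waveform: (channels, samples), mono audio
```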
#### Iv-A2 Training
Models are trained for a total of 200 epochs, and 100,000 samples are drawn at random without replacement from the full AudioSet in each epoch. We use a learning rate scheduler consisting of an exponential warmup phase until epoch 8, followed by a constant peak learning rate phase, 95 epochs of linear rampdown and 25 epochs of fine-tuning with 1% of the peak learning rate. The peak learning rates are set to \(2\times 10^{-3}\), \(1\times 10^{-3}\) and \(5\times 10^{-4}\) for DyMN-S/M/L, respectively. We use the Adam optimizer [64] with a batch size of 120. We adopt the importance sampling strategy based on label frequency from [11] to counter the long tail of infrequent classes. The results presented in this section are achieved by DyMNs pre-trained on ImageNet [1], which has been shown to improve performance substantially [37].
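The schedule can be expressed as a multiplicative factor on the peak learning rate; in the sketch below, the length of the constant peak phase (epochs 8 to 80) is inferred from the stated phase lengths summing to 200 epochs, and the exponential warmup shape is an assumption.

```python
def lr_factor(epoch: int) -> float:
    """Assumed schedule: exponential warmup (epochs 0-8), constant peak
    (8-80), linear rampdown over 95 epochs, then 25 epochs at 1%."""
    if epoch < 8:
        return 0.01 ** (1.0 - epoch / 8)        # exponential warmup to 1.0
    if epoch < 80:
        return 1.0                              # constant peak phase
    if epoch < 175:
        return 1.0 - 0.99 * (epoch - 80) / 95   # linear rampdown to 0.01
    return 0.01                                 # final fine-tuning phase
```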
#### Iv-A3 Dynamic Component Settings
The temperature \(\tau\) used in the context of Dy-Convs (see Eq. 5) is linearly annealed from 30 to 1 in the first 30 epochs of training. The sequence embedding dimension \(H\) as defined in the CGM (Fig. 2) is set to \(C_{EXP}/r\) with \(r=4\). However, we additionally restrict its size to between 32 and 128 to ensure that the sequences can capture enough information about the feature map in early layers and do not become unnecessarily complex in the final layers. These bounds are scaled accordingly with the model's width multiplier \(\alpha\).
### _Offline Knowledge Distillation_
We adopt the Transformer-to-CNN KD method introduced in [9] to train our DyMNs. Specifically, the DyMNs act as students in the KD setup and optimize the loss given in Eq. 8. The loss is a weighted sum of label loss \(L_{l}\) and distillation loss \(L_{kd}\) traded off by a hyperparameter \(\lambda\). \(y\) denotes the AudioSet labels, \(z_{S}\) and \(z_{T}\) are the student and teacher logits,
respectively, \(\delta\) is the sigmoid activation function, and Binary-Cross-Entropy is applied for \(L_{l}\) and \(L_{kd}\).
\[Loss=\lambda L_{l}(\delta(z_{S}),y)+(1-\lambda)L_{kd}(\delta(z_{S}),\delta(z_{T})) \tag{8}\]
The teacher logits \(z_{T}\) are constructed by ensembling the logits of 9 different Patchout FaSt Spectrogram Transformer (PaSST) models [11] achieving a mean average precision (mAP) of 49.5 on the AudioSet evaluation set. Aligned with [9], we pre-compute the ensemble logits for all recordings in the training set to speed up the training of the DyMN students and use \(\lambda=0.1\) to emphasize the distillation loss.
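In code, the loss of Eq. 8 with pre-computed teacher logits reduces to a few lines; the sketch below assumes multi-hot label tensors and raw (pre-sigmoid) logits for both student and teacher.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, lam=0.1):
    """Eq. 8: weighted sum of BCE label loss and BCE distillation loss."""
    l_label = F.binary_cross_entropy_with_logits(student_logits, labels)
    l_kd = F.binary_cross_entropy_with_logits(
        student_logits, torch.sigmoid(teacher_logits))  # soft teacher targets
    return lam * l_label + (1.0 - lam) * l_kd
```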
### _Results on AudioSet_
In the following, we will compare the computational efficiency and the parameter efficiency across different models trained on AudioSet. In particular, the DyMNs are trained in the same setup as the MNs in [9], which permits a fair comparison in assessing the effect of the dynamic components. All results presented throughout this paper are averages over at least 3 independent runs and over the last 10 epochs of training.
#### Iv-C1 Computational Complexity
The number of MACs is calculated on 10-second audio recordings using the model profiler contained in Microsoft's DeepSpeed framework [67]. The results, plotting the performance of different models against the consumed MACs, are shown in Fig. 4. We compare our DyMNs (red stars) to other popular CNNs (circles) and Transformers (crosses) trained on AudioSet. The plot shows that DyMNs and MNs [9] (green line) trained in the Transformer-to-CNN KD setup [9] outperform other CNNs in terms of both prediction performance and computational efficiency. The diagram also confirms that the number of consumed MACs is indeed only marginally increased by the dynamic components while they can boost the performance substantially. DyMN-S improves over its static counterpart MN with a matching width multiplier (\(\alpha=0.4\)) by almost 2 points in mAP. DyMN-M (\(\alpha=1.0\)) almost matches the performance of the MN with twice the width (\(\alpha=2.0\)), and DyMN-L (\(\alpha=2.0\)) outperforms even the largest MN (\(\alpha=4.0\)) while requiring approximately 4 times fewer MACs. The crosses depict the Transformer models PaSST-S [11], which is used as the teacher for MNs and DyMNs;1 BEATs [58], which achieves the best performance on AudioSet; and HTS-AT [56], a particularly efficient implementation based on the Swin Transformer [68]. While both DyMNs and MNs can outperform a single teacher model, the PaSST ensemble teacher performance of 49.5 mAP is not reached. However, DyMN-L outperforms even the best-performing Transformer model BEATs [58] while requiring less than 5% of its MACs.
Footnote 1: More precisely: we show here one single Transformer model from the teacher ensemble, in order to only have comparable single models in the plot. The 9-model PaSST ensemble would figure at \((6.4*10^{2},49.5)\) in Fig. 4; in Fig. 5, it would completely distort the plot, with a parameter complexity of 775.3 million.
#### Iv-C2 Parameter Complexity
The parameter efficiency on AudioSet is compared across different CNNs (circles) and Transformers (crosses) in Fig. 5. Both MNs and DyMNs outperform a variety of Transformers and CNNs in terms of the performance-complexity trade-off. BEATs [58] is the only model that outperforms the best MNs, but DyMN-L achieves a higher mAP with less than half the number of parameters. The introduced DyMNs outperform the MNs in terms of parameter efficiency, although the dynamic convolutions require more than \(K\) times as many parameters as conventional convolution layers and the fully-connected layers predicting the Dy-ReLU activation coefficients are non-negligible in terms of parameters. However, scaling the width of a network by a factor \(\alpha\) increases the number of parameters approximately by a factor of \(\alpha^{2}\). This shows that introducing dynamic IR blocks to MNs instead of scaling by network width can also be more efficient in terms of parameters.
Fig. 4: The plot compares the performance – computational complexity trade-off across different single (i.e., non-ensemble) models on AudioSet. CNNs (MNs [9], PSLA [37], CNN14 [36], ResNet38 [36], CMKD-CNN [38], ConvNet [65]) are shown as circles, Transformer models (PaSST-S [11], BEATs [58] and HTS-AT [56]) are depicted as crosses, the width-scaled MobileNets introduced in [9] are connected by the green line for better visibility, and our proposed DyMNs are denoted as red stars. The computational complexity is measured in terms of multiply-accumulate operations (MACs) plotted in log scale on the x-axis.
Fig. 5: The plot compares the parameter efficiency across multiple different single models on AudioSet [2]. CNNs (MNs [9], PSLA [37], ERANN [55], Wavegram-Logmel CNN [36], CNN14 [36], ResNet38 [36], CMKD-CNN [38], ConvNet [65], EAT-M [56]) are denoted as circles. Transformers (Audio-MAE [57], HTS-AT [56], PaSST-S [11], PaSST-S-L [11], AST [5], CMKD-AST [38], BEATs [58]) are denoted as crosses. The green line connects the series of width-scaled MobileNets introduced in [9] and the red stars indicate our proposed DyMNs.
## V Experiments on Downstream Tasks
The previous section has shown that the introduction of dynamic components to MNs increases the pre-training performance on large-scale AudioSet. However, the main question is whether the performance gain during pre-training can be carried over to downstream tasks. We fine-tune AudioSet-pre-trained MNs and DyMNs on the tasks of polyphonic musical instrument recognition, environmental sound classification, sound event tagging, and acoustic scene classification, and compare their performance against each other, the pre-training teacher Transformer PaSST [11] and the state of the art on the respective tasks. Furthermore, we share the same fine-tuning pipeline across all downstream tasks and only adapt the learning rate to show that no extensive hyperparameter tuning is required for high performance on downstream tasks.
### _Tasks_
#### V-A1 Polyphonic Musical Instrument Recognition
This task is to recognize all instruments present in an audio clip. It is based on the OpenMIC dataset [13] which consists of 20,000 10-second audio clips. Each clip is annotated by multiple tags out of 20 different classes. The performance metric is mean average precision (mAP). The state of the art for this task is the Transformer model PaSST [11], which surpassed receptive-field-regularized CNNs [69] as the previous state-of-the-art method.
#### V-A2 Environmental Sound Classification
This task is to classify 5-second audio recordings into one out of 50 different classes. It is based on the ESC50 [14] dataset consisting of 2,000 environmental sound recordings. The performance metric for this task is accuracy and we report the results in terms of averages of the 5 official folds [14]. The state-of-the-art model on this dataset is the Transformer BEATs [58]; in terms of CNNs the fine-tuned audio encoder of CLAP [70] has the lead.
#### V-A3 Sound Event Tagging
The FSD50K dataset [16] consists of 51,197 recordings that are annotated with 200 event classes taken from the AudioSet [2] ontology. It is the second-largest publicly available general-purpose sound event tagging dataset after AudioSet, consisting of 100 hours of audio. FSD50K is separated into training, validation and evaluation splits. We use the validation split to set up our fine-tuning pipeline that is shared across all models and downstream tasks. The performance is reported in terms of mAP on the evaluation set. The multi-modal giant-size ONE-PEACE [71] Transformer with 4B parameters achieves state-of-the-art results on this dataset. On the CNN side, the fine-tuned audio encoder of CLAP [70] achieves the highest mAP.
#### V-A4 Acoustic Scene Classification
This task is to classify 10-second audio recordings into one out of ten different acoustic scenes. The TAU Urban Acoustic Scenes 2020 Mobile dataset [15] has been used in the DCASE 2020 challenge Task 1 and consists of 13,965 recordings in the train set and 2,979 in the test set. The performance is measured in terms of accuracy. It is particularly difficult to find models that generalize well on this dataset since it is recorded with a limited number of microphones, some of which are completely unseen during training and cause a distribution shift at test time. PaSST-S [11] is the state-of-the-art method on this dataset, outperforming the top CNN [72] from the DCASE 2020 challenge [15].
### _Fine-Tuning Setup_
For fine-tuning models on downstream tasks, the pre-processing of audio recordings applied in the pre-training stage, as discussed in Section IV-A, is matched. Except for the learning rate, we share the fine-tuning pipeline across all tasks and models. We use the Adam optimizer and train for 80 epochs. The learning rate schedule includes an exponential warmup phase for 10 epochs, followed by a linear rampdown for 65 epochs and 5 final epochs with 1% of the peak learning rate. The temperature \(\tau\) used to compute the attention weights for Dy-Conv is fixed to 1. As for data augmentation, we randomly roll the waveform over time within a maximum range of \(\pm\)125 ms. We use two-level mixup [62], both on the raw waveforms and on the spectrogram level, and the gain of the audio waveform is randomly changed by up to \(\pm\)7 dB. The min and max frequencies of the mel filterbank are randomly perturbed within ranges of 10 Hz and 2 kHz, respectively. Interestingly, a critical performance factor on the downstream tasks is the weight decay, which must be set to 0 to achieve high performance.
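The waveform-level augmentations (time roll and gain change) can be sketched as follows; interpreting the dB range as a uniformly sampled amplitude scaling is our reading of the setup.

```python
import torch

def augment_waveform(wav: torch.Tensor, sr: int = 32000) -> torch.Tensor:
    """Random time roll within +/-125 ms and random gain within +/-7 dB."""
    max_shift = int(0.125 * sr)
    shift = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    wav = torch.roll(wav, shifts=shift, dims=-1)
    gain_db = (2 * torch.rand(1) - 1).item() * 7.0
    return wav * (10.0 ** (gain_db / 20.0))
```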
### _Results_
The results on the four downstream tasks are given in Table I. For each of the tasks, we specify the baseline performance (Baseline), global state of the art (SOTA), state of the art among CNNs (SOTA CNN), and the performance of the AudioSet teacher model PaSST-S [11]. We also compare our proposed DyMNs to the MNs with matching width multiplier and add a MN with increased width of \(\alpha=3.0\).
#### V-B1 DyMNs vs. MNs
The DyMNs outperform the MNs with matching width across all tasks and model sizes, with the exception of \(\alpha=1.0\) on FSD50K. DyMN-L even outperforms MN (\(\alpha=3.0\)) on OpenMic, ESC-50 and DCASE20 while being on par on FSD50K. These results underline that dynamic
components can increase channel efficiency and generalization performance.
#### V-C2 DyMNs vs. PaSST
DyMN-L outperforms the pre-training teacher model PaSST-S [11] on all four downstream tasks while requiring less than half of its parameters and less than 3% of its MACs for computing the predictions for a 10-second audio recording. DyMN-M achieves comparable performance on OpenMic and ESC-50, being 8 times smaller and requiring less than 1% of the number of MACs compared to PaSST.
#### V-C3 DyMNs vs. SOTA CNN
DyMN-L beats the state-of-the-art CNN performance on all four downstream tasks. On FSD50K even the most lightweight models DyMN-S and MN (\(\alpha=0.4\)) outperform the top CNNs such as PSLA [37] (56.7), CMKD-CNN [38] (58.2) or CLAP [70] (58.6). While CMKD-CNN is the smallest of the aforementioned CNNs with 8M parameters, MN (\(\alpha=0.4\)) and DyMN-S are below 1M and 2M parameters, respectively.
#### V-C4 DyMNs vs. SOTA
On OpenMic, DyMN-M and DyMN-L outperform the state-of-the-art performance held by the Transformer model PaSST [11]. On ESC-50, DyMN-L lags slightly behind the top method BEATs [58] but outperforms other recent Transformers such as AST [5] (95.7), PaSST [11] (96.8) or HTS-AT [56] (97.0). However, DyMN-L is much more lightweight compared to BEATs, having less than half of its parameters and less than 5% of its MACs. On FSD50K, besides the giant-size model ONE-PEACE [71] with 4B parameters, DyMN-L outperforms other recent Transformers such as PaSST [11] (65.3) and CMKD-AST [38] (61.7) and on DCASE20 DyMN-L lags only behind a specific version of PaSST (PaSST-B) that uses no patchout (76.6).
## VI Systematic Configuration Study
The purpose of this section is to justify the design decisions that led to the final dynamic IR block presented in Section III, and to show that the proposed variant has turned out to be beneficial across a variety of other configurations.
In the following, we present configuration studies for the proposed dynamic IR block in Section VI-A. We then delve into the details of the individual dynamic components in Sections VI-B, VI-C and VI-D, followed by a study on different context generation variants in Section VI-E. All experiments are conducted on AudioSet [2] using DyMN-M and MN (\(\alpha=1.0\)) without ImageNet [1] pre-training. In the following tables, the default values in our setup are indicated in bold.
### _Dynamic IR block_
In this section, we perform a configuration study based on the proposed dynamic IR block. We investigate the effect of the individual dynamic components, the impact of applying the dynamic IR blocks selectively, and the effect of applying Dy-Conv and Dy-ReLU at different positions in the block.
#### Vi-A1 Importance of Dynamic Components
Table II presents the results for the proposed DyMN, the MobileNetV3 [10] Baseline (MN Baseline) from [9], a fully static MN with no Squeeze-and-Excitation [21] (MN Static) and all other combinations of the three dynamic components in DyMNs.
The results show that dynamic input-dependent processing is important. The proposed DyMN improves the performance by 7% over the static MN. While Dy-ReLU and CA are of equal importance, Dy-Conv leads to the smallest improvements. However, all three dynamic components are beneficial for the overall performance and improve over MN Baseline.
#### Vi-A2 Selectively applying the dynamic IR block
The purpose of this experiment, with results summarized in Table III, is to determine at which positions in the model the dynamic blocks have the highest impact. MN has in total 15 IR blocks, all of which are replaced by dynamic IR blocks in the proposed DyMN. In this study, we replace only the first, middle and last 5 blocks in the MN with dynamic IR blocks and keep conventional IR blocks at the remaining positions. Additionally, the setting _Replace SE_ uses the dynamic IR block only at the positions at which the original MN uses SE, resulting in 8 out of the 15 IR blocks being dynamic.
Table III shows that dynamic blocks are beneficial at different positions in the model. Each selective variant outperforms the MN Baseline from [9]. The best choice among the versions applying the 5 dynamic blocks is to make the last 5 blocks dynamic. Replacing only SE blocks with dynamic blocks comes closest to the fully dynamic model in terms of performance and can be seen as a lightweight alternative.
#### Vi-A3 Effects of Dy-Conv and Dy-ReLU positions
The proposed dynamic IR block replaces all convolution layers with Dy-Conv and uses Dy-ReLU only after the depthwise convolution. Table IV shows the results for applying Dy-Conv and Dy-ReLU at alternative positions. Pos. 1, 2 and 3 describe the first, second and third convolution in the dynamic IR block (shown in Fig. 1) and the activation functions that follow them. In case of Dy-ReLU at Pos. 3, we add an additional Dy-ReLU after the final pointwise convolution.
The results show that replacing all convolution layers with Dy-Conv is beneficial and the proposed Dy-ReLU variant, where we have a single Dy-ReLU at Pos. 2, achieves the best performance. In particular, adding additional Dy-ReLUs does not improve results further.
### _Attention Mechanism_
This section presents a study on the choice of attention mechanism and the impact of channel-frequency and channel-time recalibration in CA.
#### Vi-B1 Choice of Attention Mechanism
Table V shows the results for integrating different popular attention mechanisms (CA [22], TA [26], SRM [23], GRN [42], SE [21], CBAM [24], and GC [27]) into MN. All attention mechanisms are integrated before the final pointwise convolution into all 15 IR blocks.
While a number of different attention methods are capable of achieving substantial improvements over the static MN with no attention mechanism, CA leads to the largest improvement and is therefore the attention mechanism of choice for our proposed dynamic IR block.
#### Vi-B2 Channel-Time and Channel-Frequency Recalibration in CA
CA performs recalibration of the feature map with channel-time and channel-frequency attention weights. The results given in Table VI assess the importance of these two recalibration steps. While the channel-frequency and channel-time weights are equally important, using both of them leads to the best results.
### _Dy-Conv_
In this section, the impact of the two hyperparameters of Dy-Conv, the number of kernels \(K\) and the temperature \(\tau\), is assessed.
#### Vi-C1 Number of dynamic kernels \(K\)
The number of kernels \(K\) specifies how many different kernels are aggregated in each Dy-Conv layer. Table VII shows the results for \(K\in\{2,4,6\}\). The performance improves only marginally from \(K=2\) to \(K=4\) kernels and plateaus for larger values of \(K\).
#### Vi-C2 Temperature \(\tau\)
The temperature \(\tau\) affects the computation of kernel attention weights as shown in Eq. 5. Aligned with [19], by default we use a temperature schedule for \(\tau\) and anneal it from 30 to 1 over the first 30 epochs of training. This ensures near-uniform attention weights in the first epochs to properly update all kernels. Table VIII compares the temperature schedule to the results of using a constant temperature (\(\tau\in\{1,10,30\}\)). The results show that the performance is stable across different constant temperature values in our setup. \(\tau=10\) achieves the same performance as the temperature schedule. However, keeping the temperature constant and setting it to non-optimal values (\(\tau=1\) or \(\tau=30\)) leads to a slight performance decrease, underlining the advantage of using a temperature schedule.
### _Dy-ReLU_
An important hyperparameter of Dy-ReLU is the number of linear mappings \(M\) that the max operation, shown in Eq. 3, acts on. The results for \(M\in\{1,2,3\}\) are shown in Table IX. \(M=1\) results in a dynamic linear function while \(M=2\) and \(M=3\) are dynamic non-linear functions. The non-linear functions outperform the linear function and aligned with the findings in [17], \(M=2\) and \(M=3\) achieve a similar performance.
### _Context Generation_
In this section, different variants of context generation are studied. In particular, architectural variants are discussed in Section VI-E1 and modifications of the context size \(H\) are investigated in Section VI-E2.
#### Vi-E1 Different architectural variants for context generation
Table X contains results for the following modifications of the context generation process:
* _no shared context_: Refers to a setting in which Dy-Conv and Dy-ReLU extract their own context by GAP and a learnable non-linear transformation, as originally proposed in [19] and [17], respectively. In this case, Dy-Conv and Dy-ReLU do not make use of the CGM output sequences. This experiment tests whether the shared CGM is capable of extracting a sufficiently rich global context that can be used to parameterize all dynamic components in a block.
* _no shared seq. parameters_: Indicates that, in contrast to the CGM setup in Fig. 2, the linear layer and batchnorm parameters are not shared across the sequences. Instead, two sets of parameters are learned to transform the time and frequency sequences. The motivation for this experiment is to decouple transformations involving time and frequency information, which, in contrast to the height and width of an image, encode different physical properties.
* _concat pooled seq._: Describes a setting for which Equations 4 and 6 are modified. Specifically, the sequences \(S_{T}\) and \(S_{F}\) are pooled separately and the vectors of size \(H\) are concatenated, resulting in a context vector of size \(2*H\). Aligned with the motivation of the last experiment, with this experiment, we try to avoid mixing time and frequency information.
The results presented in Table X show that all of these modifications lead to a slight decrease in performance, despite all of them increasing the number of parameters. The proposed design of the context generation is based on the findings of these experiments; the CGM follows the feature encoding process used in CA [22], and the dynamic coefficients of Dy-ReLU and Dy-Conv are derived as defined in Eqs. 4 and 6, respectively.
#### Vi-E2 Varying the sequence embedding size \(H\)
As stated in Section IV-A3, the embedding dimension for the time and frequency sequences, computed by the CGM, is defined as \(H=C_{\mathit{EXP}}/r\). Additionally, \(H\) is clipped between a lower bound (\(H_{\mathit{MIN}}=32\)) and an upper bound (\(H_{\mathit{MAX}}=128\)) that are scaled accordingly with the model's width \(\alpha\). Table XI shows the results for different values of \(r\), \(H_{\mathit{MIN}}\) and \(H_{\mathit{MAX}}\). The results indicate that reducing the sequence embedding dimension \(H\) by either increasing \(r\) or decreasing \(H_{\mathit{MIN}}\) and \(H_{\mathit{MAX}}\) leads to a decrease in mAP. This shows that assigning a certain capacity to the global context is important for the dynamic components to exploit their full potential. However, increasing \(H\) by increasing the lower and upper bounds (\(H_{\mathit{MIN}}=64\) and \(H_{\mathit{MAX}}=256\)) does not yield further performance improvements and shows that the performance saturates after a certain context size is reached.
## VII Inspecting the Dynamic Components
In this section, we analyze the dynamic components and investigate whether the CA attention weights and predicted coefficients for Dy-ReLU and Dy-Conv are indeed input-dependent. In contrast to Section VI, the study conducted in this section is performed using a single well-trained model
on AudioSet with ImageNet pre-training. In the following, we will discuss the dynamic behaviour of CA, Dy-Conv and Dy-ReLU in Sections VII-A, VII-B and VII-C. All results are presented in Table XII. The inspection method _context shuffle_ is shared across the three dynamic components and describes the following setting: The attention weights for CA and the coefficients for Dy-ReLU and Dy-Conv are applied to an input recording \(\mathbf{x}\) but computed based on a different recording \(\mathbf{\tilde{x}}\). In particular, _context shuffle_ serves as the main analysis to determine whether the dynamic components perform input-dependent transformations.
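Conceptually, _context shuffle_ amounts to permuting the batch before the context-generating forward pass; the sketch below assumes a block interface exposing the CGM and a hypothetical `forward_with_context` helper, mirroring the module sketches from Section III.

```python
import torch

@torch.no_grad()
def probe_context_shuffle(block, x):
    """Apply dynamic coefficients computed from a different recording.

    `block.cgm` and `block.forward_with_context` are hypothetical handles to
    the context generation and the remaining block computation, respectively.
    """
    x_tilde = x[torch.randperm(x.size(0))]   # mismatched context source
    s_t, s_f = block.cgm(x_tilde)            # context from the wrong recordings
    return block.forward_with_context(x, s_t, s_f)
```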
### _Coordinate Attention_
For _context shuffle_, the performance decreases by more than 50%, which clearly shows that the dynamic recalibration weights of CA are input-dependent. To assess the importance of channel, time and frequency attention, we randomly shuffle the recalibration weights along (1) the channel dimension (_channel shuffle_), both spatial dimensions (_spatial shuffle_), the time dimension (_time shuffle_) and the frequency dimension (_frequency shuffle_). The results in Table XII show that the channel attention mechanism is the most important and attention over frequency bins is more important than attention over time frames. Surprisingly, _channel shuffle_ and _spatial shuffle_ lead to a more severe performance drop than _context shuffle_, which indicates that, besides dynamic input-dependent processing, a shared prior across the samples is learned in CA.
### _Dy-Conv_
In addition to _context shuffle_, we probe for input-dependency of Dy-Conv by randomly shuffling the \(K\) entries in the attention weight vectors (_attention shuffle_). Both methods lead to a moderate performance decrease, showing that Dy-Conv is the least input-dependent operation of the three dynamic components in our setup. We further investigate the diversity of the \(K\) dynamic kernels learned by constructing the kernel \(W\) using _uniform attention_ weights (\(W=\sum_{k}^{K}W_{k}/K\)) and selecting the kernel \(W\) based on the _max attention_ weight (\(W=W_{\mathrm{argmax}_{k}(\alpha_{k})}\)). Since the performance in case of _uniform attention_ drops only by 2.5 points in mAP, we conclude that the learned kernels are similar to each other. However, when we conduct the same experiment for a model using Dy-Conv as the only dynamic component (such as the setting - _CA_, _Dy-ReLU_ in Table II), we find that the performance when using _context shuffle_ or _uniform attention_ drops to a value close to random guessing. This indicates that Dy-ReLU and CA already perform much of the capabilities of Dy-Conv. This is also aligned with the finding that Dy-Conv leads to the smallest performance improvement among the three dynamic methods, as shown in Table II.
### _Dy-ReLU_
In case of Dy-ReLU, _context shuffle_ leads to a substantial performance drop of 15.3 points mAP. However, if we shuffle the predicted coefficients randomly across channels (_channel shuffle_), the performance drop is much more severe (44.3 points mAP). These results show that Dy-ReLU learns a prior for the channel coefficients shared across the samples, aligned with the results of CA.
Fig. 6 provides additional insights into the behaviour of Dy-ReLU. It shows the Dy-ReLU input-to-output mapping of several blocks at different depths in a well-trained DyMN-M. The input-to-output mappings are collected from 10,000 randomly drawn samples from the AudioSet evaluation set. The dashed red line indicates the conventional static ReLU function. We made sure that the patterns described in the following hold across several trained DyMNs and not only for the model used to create Fig. 6. An interesting pattern can be detected in Block 1: the input values are exclusively within a small range of positive values and the Dy-ReLU approximates an identity function. In general, Dy-ReLUs in early blocks tend to learn to map points onto lines with specific slopes. For instance, the shape of the mappings in Block 3 resembles the absolute value function. In contrast, Dy-ReLUs in later blocks, such as Blocks 13 and 15, show a highly dynamic behaviour, mapping specific input values to a wide range of different output values. Unlike the conventional ReLU, Dy-ReLU maps a lot of negative input values to positive activations.

Fig. 6: The figure shows the input-to-output mapping of Dy-ReLU for blocks at different depths in a well-trained DyMN-M. To generate the plots, we use 10,000 randomly selected samples from the AudioSet evaluation set. Block 1 corresponds to the first block and Block 15 corresponds to the final block in DyMN. The dashed red line depicts the conventional ReLU function.
## VIII Conclusion
In this work, we proposed dynamic convolutional neural networks as efficient pre-trained audio models. We integrated dynamic convolutions, dynamic ReLU, and Coordinate Attention into efficient inverted residual blocks and shared the computation of a global context for dynamic parameterization across all dynamic modules in a block. The resulting models, named DyMNs, are pre-trained on AudioSet at three different complexity levels using Transformer-to-CNN Knowledge Distillation. DyMNs show a beneficial performance-complexity trade-off compared to their non-dynamic counterparts and to other Transformers and CNNs. Specifically, DyMN-L achieves a pre-training performance of 49.0 mAP on AudioSet, outperforming current popular Audio Spectrogram Transformers. Experiments on downstream tasks indicate that the proposed DyMNs outperform other CNNs by a large margin and are highly competitive compared to Audio Spectrogram Transformers while being much more computationally efficient. Furthermore, we show that DyMNs are suitable for simple task-specific fine-tuning by sharing the same fine-tuning pipeline across all downstream tasks. In short, DyMNs are efficient, high-performing, easy-to-fine-tune audio models that can have a large impact on the audio community, especially in the context of resource-critical applications.
## IX Acknowledgment
The computational results presented were achieved in part using the Linz Institute of Technology (LIT) AI Lab Cluster. The LIT AI Lab is supported by the Federal State of Upper Austria. Gerhard Widmer's work is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No 101019375 (Whither Music?).
|
2308.13073 | SurGNN: Explainable visual scene understanding and assessment of
surgical skill using graph neural networks | This paper explores how graph neural networks (GNNs) can be used to enhance
visual scene understanding and surgical skill assessment. By using GNNs to
analyze the complex visual data of surgical procedures represented as graph
structures, relevant features can be extracted and surgical skill can be
predicted. Additionally, GNNs provide interpretable results, revealing the
specific actions, instruments, or anatomical structures that contribute to the
predicted skill metrics. This can be highly beneficial for surgical educators
and trainees, as it provides valuable insights into the factors that contribute
to successful surgical performance and outcomes. SurGNN proposes two concurrent
approaches -- one supervised and the other self-supervised. The paper also
briefly discusses other automated surgical skill evaluation techniques and
highlights the limitations of hand-crafted features in capturing the
intricacies of surgical expertise. We use the proposed methods to achieve
state-of-the-art results on EndoVis19, and custom datasets. The working
implementation of the code can be found at https://github.com/<redacted>. | Shuja Khalid, Frank Rudzicz | 2023-08-24T20:32:57Z | http://arxiv.org/abs/2308.13073v1 | SurGNN: Explainable visual scene understanding and assessment of surgical skill using graph neural networks
###### Abstract
This paper explores how graph neural networks (GNNs) can be used to enhance visual scene understanding and surgical skill assessment. By using GNNs to analyze the complex visual data of surgical procedures represented as graph structures, relevant features can be extracted and surgical skill can be predicted. Additionally, GNNs provide interpretable results, revealing the specific actions, instruments, or anatomical structures that contribute to the predicted skill metrics. This can be highly beneficial for surgical educators and trainees, as it provides valuable insights into the factors that contribute to successful surgical performance and outcomes. SurGNN proposes two concurrent approaches - one supervised and the other self-supervised. The paper also briefly discusses other automated surgical skill evaluation techniques and highlights the limitations of hand-crafted features in capturing the intricacies of surgical expertise. We use the proposed methods to achieve state-of-the-art results on EndoVis19 and custom datasets. The working implementation of the code can be found at https://github.com/<redacted>.
## 1 Introduction
Explainable visual scene understanding for surgical skill assessment using graph neural networks (GNNs) involves developing models that can interpret and reason about the complex visual data generated during surgical procedures. By representing the procedure as a graph structure and training a GNN on this data, we can model the dependencies and relationships between different surgical actions, instruments, and anatomical structures [34]. This allows us to extract relevant features from the visual data and predict various aspects of surgical skill, such as the overall quality of the procedure or the proficiency of individual actions [32]. This can help surgical educators and trainees better understand the underlying factors that contribute to successful surgical outcomes and improve training and performance [25; 5; 28].
Traditional assessment methods often rely on subjective expert ratings or global performance metrics [10; 26; 13] which can be biased, inconsistent, or difficult to interpret [13; 36; 31; 14]. By contrast, GNN-based models can provide a more objective and fine-grained analysis of surgical performance by interpreting the rich visual data available from surgical videos [34; 32]. GNN-based models can also provide explanations for their predictions, which can help build trust in the models and facilitate learning and improvement [38; 15; 20]. Additionally, we use 3D representations in our analysis. To the best of our knowledge, we are the first paper to consider explainable 3D representations for surgical skill analysis directly from visual cues.
## 2 Related work
**Automated surgical skill evaluation** Automated surgical skill evaluation is an active research area, with various approaches proposed for analyzing surgical data using computer vision and machine learning [40; 7; 21; 16; 3]. A popular approach is to use hand-crafted features, such as motion analysis or tool usage, to train classifiers for surgical skill assessment. For example, Gao _et al_[9] developed a feature-based model that used motion analysis to classify suturing and knot-tying skill in surgical videos. However, these hand-crafted features are often limited in their ability to capture the complex and subtle movements that characterize surgical expertise.
To address these limitations, researchers have also explored deep learning techniques, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), for automated surgical skill evaluation. For example, Cheng _et al_[4] developed a CNN-based model that used visual cues to assess the proficiency of laparoscopic suturing and knot-tying in a simulator. Similarly, Zia _et al_[41] proposed an RNN-based model that used motion analysis to classify surgical skill in a simulation environment. However, these deep learning models often lack interpretability, making it difficult to understand the underlying factors that contribute to their predictions. Anastasiou _et al_[1] described an action-aware Transformer with multi-head attention producing inter-video contrastive features to regress the manually-assigned skill scores. This approach, although novel, requires consensus on what is considered good performance and has only been tested in the JIGSAWS [9] dataset, which consists of very short, simulated procedures.
To address this interpretability issue, recent research focuses on explainable AI, including attention mechanisms and GNNs, for automated surgical skill evaluation. For example, Hira _et al_[12] developed an attention-based model that used a hierarchical CNN to identify relevant visual features for surgical skill assessment. Meanwhile, Ban _et al_[2] proposed a GNN-based model that represented surgical actions as nodes in a graph structure, allowing for a more fine-grained analysis of surgical skill. These approaches represent promising directions towards automated surgical skill evaluation systems that are both accurate and interpretable.
**Scene understanding using graph neural networks** Scene understanding using GNNs is an active research area, with numerous approaches for leveraging the rich visual data available in complex scenes [24; 6]. One popular application of GNNs in scene understanding is semantic segmentation, where the goal is to assign a semantic label to each pixel in an image [35]. For example, Wang _et al_[35] proposed a GNN-based model that used graph convolutions to incorporate contextual information from neighboring pixels, improving the accuracy of semantic segmentation in complex scenes.
Another popular application of GNNs in scene understanding is object detection and recognition, where the goal is to identify and localize objects in an image. Li _et al_[22] developed a GNN-based model that used spatial and semantic information to perform object detection and semantic segmentation simultaneously. Qi _et al_[30] proposed a GNN-based model that used message-passing to refine object proposals and improve object detection performance.
GNNs have also been used for more complex tasks in scene understanding, such as 3D object detection and reconstruction. For example, Zhang _et al_[39] developed a GNN-based model that used graph convolutions to reason about the 3D geometry of objects and their relationships in a scene. Similarly, Yin _et al_[37] proposed a GNN-based model that used message passing to refine 3D object proposals and improve 3D object detection performance.
Overall, these works demonstrate the versatility and effectiveness of GNNs in scene understanding, and highlight their potential for improving the performance of computer vision systems in a wide range of applications, including surgical skill evaluation.
## 3 SurGNN: The model
Our architecture consists of two self-supervised graph attention layers followed by a global mean pooling layer and, finally, a linear classification layer for training the model in a supervised manner with the available labels. To generate the graph embeddings, we consider the following approaches: **2D** We follow the feature extraction approach presented in [16], where each instrument is segmented and tracked over the length of a clip. A set of features is calculated from this temporal data to generate the node embeddings. **3D** Traditional approaches, discussed in Section 2 of this paper, use either 2D representations or robotic kinematic data for 3D analysis. Since kinematic ground-truth data is not available for laparoscopic videos, we use neural radiance fields [8; 29; 23; 27; 17; 18] to generate dynamic scene renderings. The neural scene renderings are advantageous as they allow for fixed-camera renderings. By fixing the position of the camera, we aim to remove the relative motion of the camera with respect to the scene. Using the same approach presented in [16], we generate 3D representations without the additional noise associated with extraneous camera motion.
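To make the architecture concrete, the following is a minimal sketch of such a model in PyTorch Geometric; the layer widths (hidden size 32, matching the embedding length mentioned below), the ELU nonlinearity, and the three skill classes are illustrative assumptions rather than the exact configuration used in this work.

```python
# Minimal sketch: two graph-attention layers, global mean pooling, linear head.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv, global_mean_pool

class SkillGNN(torch.nn.Module):
    def __init__(self, in_dim=16, hidden=32, num_classes=3):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden)
        self.gat2 = GATConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        # Two attention layers produce per-node embeddings of length `hidden`.
        x = F.elu(self.gat1(x, edge_index))
        x = F.elu(self.gat2(x, edge_index))
        # Pool node embeddings into one graph-level embedding per video clip.
        g = global_mean_pool(x, batch)
        return self.head(g)  # class logits for the skill categories
```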
### Self-supervised Architecture
The self-supervised architecture is inspired by Tsitsulin _et al._[33], who used a spectral loss, commonly employed in GNNs to optimize the spectral properties of the learned representation. The spectral loss is defined as the Frobenius norm of the difference between the spectral embeddings of the original graph and the reconstructed graph, and is typically used in conjunction with other loss functions, such as cross-entropy loss. This is expressed mathematically in the following equations:
The normalized Laplacian matrix L is calculated as follows:
\[L=I-D^{-1/2}AD^{-1/2} \tag{1}\]
We compute the eigenvectors \(\{v_{1},v_{2},\ldots,v_{n}\}\) and eigenvalues \(\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\}\) of the Laplacian matrix \(L\).
\[Lv_{i}=\lambda_{i}v_{i},\quad i=1,2,\ldots,n \tag{2}\]
For the self-supervised task, we randomly mask some nodes or edges in the graph. The goal is to predict the missing parts of the graph from its spectral representation:
The original Laplacian matrix and the Laplacian matrix reconstructed from the spectral representation are

\[L_{\text{original}}=L \tag{3}\]

\[L_{\text{reconstructed}}=\text{GNN\_Decoder}(v_{1},v_{2},\ldots,v_{n}) \tag{4}\]
The spectral loss can be defined using the mean squared error (MSE) between the original and reconstructed Laplacian matrices.
\[\text{Spectral Loss}=\frac{1}{n}\sum_{i=1}^{n}\|L_{\text{original}}^{(i)}-L_{ \text{reconstructed}}^{(i)}\|_{F}^{2} \tag{5}\]
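As a rough illustration of Eqs. (1)-(5), the sketch below computes the normalized Laplacian of an adjacency matrix and the mean squared Frobenius distance between original and reconstructed Laplacians in PyTorch; the reconstruction itself (the GNN decoder) is left abstract, and tensor shapes are assumptions.

```python
import torch

def normalized_laplacian(adj):
    # L = I - D^{-1/2} A D^{-1/2}  (Eq. 1); isolated nodes get a zero degree term.
    deg = adj.sum(dim=-1)
    d_inv_sqrt = torch.where(deg > 0, deg.pow(-0.5), torch.zeros_like(deg))
    D = torch.diag_embed(d_inv_sqrt)
    I = torch.eye(adj.size(-1), device=adj.device)
    return I - D @ adj @ D

def spectral_loss(L_original, L_reconstructed):
    # Squared Frobenius norm of the difference, averaged over the batch (Eq. 5).
    diff = L_original - L_reconstructed
    return (diff ** 2).sum(dim=(-2, -1)).mean()
```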
The first part of the model architecture in Figure 1 serves as the encoder, which takes the graph representation of the input and creates an embedding of length 32. Masking out each embedding and
Figure 1: Architecture of the SurGNN model. Feature nodes are extracted in the pre-processing stage and the adjacency matrix is generated. The resulting model is trained with a supervised objective using the provided labels. The self-supervised approach uses a _spectral_ objective.
using this loss function for training encourages the embeddings to capture the spectral properties of the graph, such as smoothness and locality, while also preserving the modularity structure of the network. By using this loss, we are able to systematically train the model without labels.
## 4 Experiments
### Datasets
**EndoVis19** The EndoVis19 dataset was created for endoscopic video analysis in surgical scenarios. The dataset consists of over 60 hours of endoscopic video recordings of 10 different surgical procedures, including cholecystectomy, appendectomy, and hernia repair. The videos were captured using various endoscopic cameras, and include both raw and annotated videos, as well as surgical tool and instrument segmentation masks. It is notable for its relatively large size and diverse set of surgical procedures, which enables the development and evaluation of algorithms for a wide range of endoscopic surgical scenarios. The dataset also includes expert annotations of surgical phases and actions, which can be used for developing and evaluating algorithms for automated surgical phase detection and action recognition.
**Custom** Our original dataset contains 309 sample clips, evenly distributed among the three major categories of _novice_, _intermediate_, and _expert_. This dataset was procured as part of data-sharing agreements that [ANON] has with its partner institutions and is limited to laparoscopic cholecystectomy procedures. The videos are double-rated and annotated by skilled clinical analysts.
### Pre-processing
We direct the reader to Figure 4 and describe the steps to create a surgical graph from a surgical video:
Video segmentation: Each surgical video needs to be segmented into different phases or steps, such as the initial incision, tissue dissection, and wound closure. This can be done manually or using automated techniques such as motion detection or deep learning algorithms. For the purposes of this paper, this was done manually and each procedure was broken down into two major components - the calot and the dissection phases.
Graph construction: Once the different surgical phases have been identified, they are represented as nodes in a graph, and the relationships between them are captured as edges. For example, an edge created between calot and dissection phases indicates that the tissue has been cut and is about to be dissected.
Feature extraction: Various features can be extracted from the surgical graph, such as the frequency and duration of each action, the number of edges between different actions, and the overall structure of the graph. We obtain this information by extracting instrument motion statistics
Figure 2: Our content extraction pipeline for defining the nodes of the proposed GNN architecture.
across frames, such as the position and speed of the instrument over time, as illustrated in Figure 2. For the purposes of this paper, we compare and contrast two approaches, namely segmentation-assisted 2D [16] and radiance-field-assisted 3D feature extraction [8; 29; 23; 27; 17; 18], as described in Figure 3. For the 3D case, video clips were limited to 1-minute segments because of the extremely computationally intensive nature of the reconstructions. Please refer to the Appendix for further details.
Data balancing: The extracted features and their associated labels are balanced by creating synthetic data using the ADASYN [11] algorithm with seven \(k\)-neighbours (a minimal sketch follows after this list).
Graph-based analysis: The features extracted from the surgical graph can be used to assess various aspects of surgical skill, such as the speed and accuracy of the surgeon's movements, the level of coordination between different actions, and the complexity of the procedure.
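A minimal sketch of the data-balancing step above, using the ADASYN implementation from the imbalanced-learn package with seven k-neighbours; the feature matrix and the class counts are placeholders standing in for the extracted graph features and the novice/intermediate/expert labels.

```python
import numpy as np
from imblearn.over_sampling import ADASYN

# Placeholder features and deliberately imbalanced placeholder labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(309, 12))
labels = np.array([0] * 150 + [1] * 100 + [2] * 59)

# ADASYN generates synthetic samples for the minority classes.
sampler = ADASYN(n_neighbors=7, random_state=0)
features_bal, labels_bal = sampler.fit_resample(features, labels)
```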
### Training
**Supervised learning** The architecture is shown in Figure 1. We use the Adam optimizer [19] with an initial learning rate of 0.0025, which decays at 200 and 400 epochs, and a batch size of 32. We train each model for a total of 1000 epochs on one Nvidia RTX 3090 card.
**Self-supervised learning** One approach to training graph neural networks (GNNs) is to use spectral self-supervised learning. This technique involves using the graph Laplacian matrix to generate embeddings for each node in the graph, which can then be used to train the GNN.
To implement spectral self-supervised learning for GNNs, we can use the following steps:
* Construct the adjacency matrix \(A\) for the graph of interest.
* Compute the graph Laplacian matrix \(L\) from the adjacency matrix using the formula \(L=D-A\), where \(D\) is the diagonal matrix of node degrees.
* Compute the eigenvectors and eigenvalues of the Laplacian matrix \(L\).
* Select the \(k\) smallest eigenvalues and corresponding eigenvectors to form the spectral embedding matrix \(H\).
* Train the GNN using the spectral embedding matrix \(H\) as illustrated in equation 5.
where \(\mathbf{X}_{i}\) and \(\mathbf{X}_{j}\) are the node features of nodes \(i\) and \(j\) in the graph, \(w_{ij}\) is the weight of the edge connecting nodes \(i\) and \(j\), and \(\mathbf{Z}_{i}\) and \(\mathbf{Z}_{j}\) are the corresponding embeddings generated by the GNN. The spectral loss function aims to minimize the distance between the node embeddings in the learned feature space while taking the graph structure into account.
An important advantage of spectral self-supervised learning is that it can be used to learn embeddings that capture the global structure of the graph, rather than just local node features. This can be particularly useful in domains where the relationships between nodes are complex and non-local, as in the case of surgical videos.
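The steps listed above can be illustrated with a short NumPy sketch; the toy graph and the embedding dimension k are assumptions for illustration only.

```python
import numpy as np

def spectral_embedding(adj, k):
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                         # L = D - A
    eigvals, eigvecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    return eigvecs[:, :k]                   # k smallest eigenvectors -> H

# Toy 4-node path graph standing in for a surgical graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = spectral_embedding(A, k=2)  # spectral embedding matrix used to train the GNN
```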
Figure 3: **3D representations**: We include the reconstructed images at times t=0 _left_ and t=1 _right_ for 3 cases, along with the corresponding instrument segmentation masks and depth masks. Note: Since this is a fixed camera view rendering, the ground truth image corresponds to the image at t=0.
## 5 Explainability
Explainability is the ability of a model to provide understandable and transparent reasoning for its predictions. In the case of GNNs, explainability is particularly important because the structure of the input data is often essential to the prediction. Therefore, understanding how the model arrives at its prediction can provide valuable insights into the underlying data and the decision-making process. We visualize the learned node embeddings or feature representations, which are the intermediate outputs of SurGNN. By visualizing these embeddings, we gain insights into the relationships between nodes in the graph and the importance of different features for the prediction. We visualize the structure of the embeddings and project them in 2D space in Figure 4. The assigned classifications are thus accompanied by supporting visualizations which can be invaluable for surgeons interested in improving their skills by comparing their performance to that of experienced surgeons.
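A minimal sketch of such a 2D projection with scikit-learn; the paper does not name the projection method, so t-SNE is an assumption, and the embedding matrix is a placeholder for the learned 32-dimensional representations.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(309, 32))  # placeholder learned embeddings

# Project the embeddings into 2D for visual inspection of clusters.
coords_2d = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(embeddings)
```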
## 6 Results and discussion
We present our findings on two datasets - the custom dataset and the EndoVis19 dataset - in Tables 1 and 2 respectively. In both, each method is evaluated for different categories, and we report performance metrics such as Pearson correlation coefficients, Spearman rank correlation coefficients, Kendall's tau, precision, recall, and F1 scores. For each dataset, the baseline results are calculated by sampling from a Gaussian distribution and we average over 10 unique runs. To the best of our knowledge, there aren't any published baselines in literature that we can compare our approach against. We thus present the results of our supervised and self-supervised approaches.
**[ANON] dataset** The baseline method performs poorly in terms of Spearman and Kendall correlation coefficients, with a mean value of \(0.045\) for the overall score category. We also include "SurGNN (SS)", the self-supervised approach. In contrast, our proposed method, represented by "SurGNN (S)" in the table, shows a significant improvement across all performance metrics and outperforms the baseline method in terms of precision, recall, and F1.
**EndoVis19 dataset** The results show that our proposed method, represented by "SurGNN (S)", again outperforms the baseline method across all performance metrics. For instance, for the Overall category, our proposed method achieves Pearson, Spearman, and Kendall's tau coefficients of 0.709, 0.711, and 0.678, respectively, while the baseline method has a Spearman correlation of only \(0.088\). Similarly, for the Time and Motion category, our proposed method outperforms the baseline method across all metrics.
Overall, our results suggest that our proposed method, "SurGNN (S)", is effective in improving performance on both datasets for most categories and performance metrics. However, the method's performance may vary depending on the category and the dataset. For example, the 2D analysis shows significantly better results than the 3D analysis. This could be a result of a few factors: the 3D analysis is limited to a length of 30s, or approximately 60 frames in total (sampling 2 frames/s). The 30s segment chosen is also a potential source of error, as it was chosen using clinical judgement. Longer samples were shown to lead to image degeneration, decreasing the quantity of available samples to 211 and 50 for the ANON and EndoVis19 datasets, respectively.
### Limitations and future directions
We aim to provide a structured basis for research in visual scene understanding for surgical applications. Below are some items that should be considered before widespread application of these systems.
Inter-observer variability: Different expert surgeons may have varying opinions on what constitutes good or bad surgical skill, leading to inconsistent ratings across different evaluators. Each dataset used in this analysis has been rated by multiple analysts and has thus gone through a rigorous quality assurance process.
Lack of granularity: Subjective labels may provide only a high-level assessment of surgical performance, such as overall performance or proficiency in a particular task, without providing detailed feedback on specific actions or movements.
Bias: Human evaluators may unconsciously or consciously introduce bias into their ratings, for example, by favoring or discriminating against certain individuals or groups based on factors such as gender, race, or ethnicity.
Limited feedback: Subjective labels may not provide specific feedback on how to improve performance, which can be crucial for surgical trainees to learn and improve.
Time-consuming: Collecting subjective ratings from expert evaluators can be time-consuming and expensive, especially when evaluating large datasets.
Concurrent validity: The datasets presented in this paper consist primarily of cholecystectomy procedures. Testing the efficacy of our proposed method on datasets with different types of demographics is an important line of future work.
To address these issues, objective and quantitative measures of surgical performance, such as motion analysis or tool usage, have been proposed. These measures can provide more granular and unbiased assessments of surgical skill, and can be automated using computer vision and machine learning techniques. Additionally, such objective measures can provide specific feedback on how to improve performance and can facilitate more efficient evaluation of large datasets.
## 7 Conclusion
We present two distinct approaches using graph neural networks (GNNs) for explainable visual scene understanding in surgical skill assessment. We demonstrate that, by representing surgical procedures
\begin{table}
\begin{tabular}{l l c c c c c c c c} \hline \hline
Method & Category & Mode & N & Pearson & Spearman & Kendall & Precision & Recall & F1 \\ \hline
- & Overall (baseline) & - & - & - & 0.045 & - & 0.140 & 0.140 & 0.140 \\
SurGNN (S) & Overall & 2D & 309 & 0.724 & 0.717 & 0.671 & 0.690 & 0.690 & 0.680 \\
SurGNN (S) & Precision of Operating technique & 2D & 309 & 0.731 & 0.724 & 0.675 & 0.670 & 0.680 & 0.675 \\
SurGNN (S) & Economy of movements & 2D & 309 & 0.682 & 0.669 & 0.642 & 0.681 & 0.659 & 0.670 \\
SurGNN (SS) & Overall & 2D & 309 & 0.424 & 0.419 & 0.411 & 0.530 & 0.580 & 0.550 \\
SurGNN (SS) & Precision of Operating technique & 2D & 309 & 0.444 & 0.413 & 0.397 & 0.580 & 0.520 & 0.550 \\
SurGNN (SS) & Economy of movements & 2D & 309 & 0.415 & 0.412 & 0.406 & 0.560 & 0.540 & 0.550 \\
SurGNN (S) & Overall & 3D & 211 & 0.355 & 0.325 & 0.316 & 0.430 & 0.440 & 0.435 \\
SurGNN (S) & Precision of Operating technique & 3D & 211 & 0.328 & 0.315 & 0.321 & 0.430 & 0.440 & 0.435 \\
SurGNN (S) & Economy of movements & 3D & 211 & 0.409 & 0.401 & 0.389 & 0.410 & 0.410 & 0.410 \\
SurGNN (SS) & Overall & 3D & 211 & 0.288 & 0.273 & 0.279 & 0.320 & 0.320 & 0.320 \\
SurGNN (SS) & Precision of Operating technique & 3D & 211 & 0.261 & 0.254 & 0.255 & 0.300 & 0.310 & 0.300 \\
SurGNN (SS) & Economy of movements & 3D & 211 & 0.214 & 0.197 & 0.196 & 0.310 & 0.310 & 0.310 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Model performance on [ANON] dataset. _SurGNN (S)_ refers to our results using the supervised approach whereas _SurGNN (SS)_ refers to the proposed Self-supervised clustering approach detailed in the previous section.
\begin{table}
\begin{tabular}{l l c c c c c c c c} \hline \hline
Method & Category & Mode & N & Pearson & Spearman & Kendall & Precision & Recall & F1 \\ \hline
- & Overall (baseline) & - & - & - & 0.088 & - & 0.180 & 0.180 & 0.180 \\
SurGNN (S) & Overall & 2D & 69 & 0.709 & 0.711 & 0.678 & 0.700 & 0.700 & 0.690 \\
SurGNN (S) & Time and Motion & 2D & 69 & 0.712 & 0.705 & 0.686 & 0.800 & 0.800 & 0.790 \\
SurGNN (S) & Instrument Handling & 2D & 69 & 0.663 & 0.677 & 0.628 & 0.660 & 0.670 & 0.650 \\
SurGNN (SS) & Overall & 2D & 69 & 0.409 & 0.394 & 0.381 & 0.500 & 0.520 & 0.510 \\
SurGNN (SS) & Time and Motion & 2D & 69 & 0.387 & 0.368 & 0.365 & 0.470 & 0.550 & 0.510 \\
SurGNN (SS) & Instrument Handling & 2D & 69 & 0.430 & 0.381 & 0.675 & 0.510 & 0.520 & 0.520 \\
SurGNN (S) & Overall & 3D & 50 & 0.311 & 0.297 & 0.281 & 0.360 & 0.350 & 0.360 \\
SurGNN (S) & Time and Motion & 3D & 50 & 0.324 & 0.312 & 0.309 & 0.360 & 0.360 & 0.360 \\
SurGNN (S) & Instrument Handling & 3D & 50 & 0.317 & 0.316 & 0.297 & 0.320 & 0.320 & 0.320 \\
SurGNN (SS) & Overall & 3D & 50 & 0.156 & 0.144 & 0.121 & 0.250 & 0.250 & 0.250 \\
SurGNN (SS) & Time and Motion & 3D & 50 & 0.169 & 0.154 & 0.142 & 0.275 & 0.275 & 0.275 \\
SurGNN (SS) & Instrument Handling & 3D & 50 & 0.165 & 0.155 & 0.140 & 0.270 & 0.270 & 0.270 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Model performance on EndoVis19 dataset - _SurGNN (S)_ refers to our results using the supervised approach whereas _SurGNN (SS)_ refers to the proposed self-supervised clustering approach detailed in the previous section.
as graph structures and training them using supervised and self-supervised techniques, relevant features can be extracted from the complex visual data and used to predict various aspects of surgical skill. GNNs also allow for interpretable results by identifying the specific actions, instruments, or anatomical structures that contribute to the predicted skill metrics. We highlight the limitations of traditional assessment methods and how GNN-based models can provide a more objective and fine-grained analysis of surgical performance. Explainable visual scene understanding models for surgical skill assessment represents an exciting opportunity to leverage cutting-edge AI techniques to improve surgical education, training, and outcomes.
Figure 4: Explainable clusters of surgical procedures allow surgeries to be assessed and ranked. |
2304.11758 | Improving Classification Neural Networks by using Absolute activation
function (MNIST/LeNET-5 example) | The paper discusses the use of the Absolute activation function in
classification neural networks. Examples are shown of using this activation
function in simple and more complex problems. Using as a baseline LeNet-5
network for solving the MNIST problem, the efficiency of Absolute activation
function is shown in comparison with the use of Tanh, ReLU and SeLU
activations. It is shown that in deep networks Absolute activation does not
cause vanishing and exploding gradients, and therefore Absolute activation can
be used in both simple and deep neural networks. Due to high volatility of
training networks with Absolute activation, a special modification of ADAM
training algorithm is used, that estimates lower bound of accuracy at any test
dataset using validation dataset analysis at each training epoch, and uses this
value to stop/decrease learning rate, and re-initializes ADAM algorithm between
these steps. It is shown that solving the MNIST problem with the LeNet-like
architectures based on Absolute activation allows to significantly reduce the
number of trained parameters in the neural network while improving the
prediction accuracy. | Oleg I. Berngardt | 2023-04-23T22:17:58Z | http://arxiv.org/abs/2304.11758v1 | # Improving Classification Neural Networks by using Absolute activation function
###### Abstract
The paper discusses the use of the Absolute activation function in classification neural networks. Examples are shown of using this activation function in simple and more complex problems. Using the LeNet-5 network for solving the MNIST problem as a baseline, the efficiency of the Absolute activation function is shown in comparison with the use of Tanh, ReLU and SeLU activations. It is shown that in deep networks Absolute activation does not cause vanishing and exploding gradients, and therefore Absolute activation can be used in both simple and deep neural networks. Due to the high volatility of training networks with Absolute activation, a special modification of the ADAM training algorithm is used that estimates the lower bound of accuracy on any test dataset using validation dataset analysis at each training epoch, uses this value to stop training or decrease the learning rate, and reinitializes the ADAM algorithm between these steps. It is shown that solving the MNIST problem with LeNet-like architectures based on Absolute activation makes it possible to significantly reduce the number of trained parameters in the neural network while improving the prediction accuracy.
## 1 Introduction and basic idea
Many tasks have recently been effectively solved by neural networks. Deep neural networks are the most actively studied; the intensive progress in their development is associated with the successful solution of the problem of vanishing and exploding gradients in deep networks.
In practical use, the problem often arises of constructing, from an already trained deep network, a smaller network that solves the problem with similar quality [Hinton et al.(2015)]. The main theoretical basis for the possibility of creating small networks are the theorems of Kolmogorov-Arnold [Kolmogorov(1957), Arnold(1963)], Cybenko [Cybenko(1989)], Funahashi [Funahashi(1989)], and Hornik [Hornik et al.(1989), Hornik(1991)] (also known as the universal approximation theorem).
In accordance with the Kolmogorov-Arnold theorem, the approximation of an unknown function of N variables can be represented as:
\[f(\overrightarrow{x})=\sum_{q=0}^{2N}\Phi_{q}\left(\sum_{p=1}^{N}\phi_{q,p} \left(x_{p}\right)\right) \tag{1}\]
The theorem is not constructive, and proves the existence of optimal functions of one argument \(\Phi_{q}\left(x\right),\phi_{q,p}\left(x\right)\) that provide such an approximation, but does not specify their specific shape.
The theorems [Cybenko(1989), Funahashi(1989), Hornik et al.(1989), Hornik(1991), Sonoda and Murata(2017)] specify the types of these functions and prove the possibility of using various functions \(\Phi_{q}\left(x\right),\phi_{q,p}\left(x\right)\). Thus, theoretically, most of the practical problems should be solved by a neural network with one hidden layer by choosing correct functions \(\Phi_{q}\left(x\right),\phi_{q,p}\left(x\right)\).
Traditionally, however, for large input vectors the size of this hidden layer and the number of unknown functions become as large as \(O(N^{2})\), where N is the dimension of the input vector. Therefore the solution of many problems is reduced to the construction of neural networks with a larger number of hidden layers and a smaller number of neurons in each layer ('thin networks'), which efficiently decreases the number of unknown (trained) parameters. However, there are still problems for which a large number of hidden layers is not required, and the problem can be solved with a simpler network.
The simplest form of the function \(\phi_{q,p}\) is linear:
\[\phi_{q,p}(x)=A_{p,q}x+B_{p,q} \tag{2}\]
and the \(\Phi_{q}\) function is the neuron activation function. This allows one to build neural networks with one hidden layer to solve some problems by choosing corresponding activation function and fitting coefficients \(A_{p,q},B_{p,q}\).
A difficult task is to choose the function \(\Phi_{q}\), the activation function of the hidden layer. The theorem [Hornik(1991)] proves that a fairly arbitrary function - continuous and bounded - can be used in this role. However, even unbounded functions can be used, for example the widely used ReLU [Agarap(2018)], which satisfies weaker conditions [Sonoda and Murata(2017)] and can also be used in universal approximators (networks).
The only problem is to find the coefficients \(A_{p,q},B_{p,q}\). Today this problem is solved by gradient-descent methods. Modern experience in building deep neural networks shows that, in order to avoid vanishing and exploding gradients [Bengio et al.(1994)] when searching for optimal network coefficients using the gradient descent method, only some activation functions are beneficial. This greatly limits the activation functions suitable for building deep neural networks. For optimal activation functions, the derivative of the function should be as close as possible to 1. This is one of the reasons for replacing Sigmoid (logistic function) with Tanh (hyperbolic tangent) in the LeNet[Lecun et al.(1998)] network, replacing Tanh with ReLU (Rectified Linear Unit) in the AlexNet[Krizhevsky et al.(2017)] network, and the emergence
of Residual blocks and ResNet[He et al.(2015)] networks, which are widely used today.
Therefore, the closeness of the derivative of the activation function to 1 is a very important requirement when choosing activation functions for deep neural networks. An ideal function satisfying this condition is the linear one. This function is not useful, however, since it does not allow building deep networks effectively: a superposition of layers with linear activation functions is equivalent to a single layer. Thus, we face a contradiction, which is usually resolved by using ReLU and similar right-hand-linear functions (SeLU, SWISH, MISH, APTx, etc.) or Residual blocks.
However, this contradiction is only apparent: for the gradient not to vanish or explode, a slightly different condition should be met. Namely, not the derivative, but the modulus of the derivative of the activation function must be equal to 1. This property is possessed not only by a linear function, but also by many other real-valued functions whose derivative modulus equals 1 almost everywhere. We can classify these functions by the number of points where their derivative is not defined. The simplest function with none of these points is linear, and the function with a single such point is the absolute value (modulus). So we can use the absolute value function as an activation function (Absolute activation function, Abs), and \(\Phi_{q}\) becomes:
\[\Phi_{q}(y)=W_{q}|y| \tag{3}\]
It should be noted that most of the popular activation functions (Sigmoid, Tanh, ReLU, SeLU, LeakyReLU, etc.) are monotonically nondecreasing. The Absolute activation function Abs differs significantly from them: it is nonmonotone, but continuous, like the SWISH, MISH and APTx functions[Kumar(2022)]. The Absolute activation function has already been mentioned by researchers [Karnewar(2018), Apicella et al.(2021)], but its efficiency has not been demonstrated in detail. The Absolute activation function satisfies the conditions of [Sonoda and Murata(2017)] and therefore could be used in universal approximators (neural networks).
A neural network with one hidden layer with an N-dimensional input vector and Absolute activation will implement the operation:
\[f(\overrightarrow{x})=\sum_{q=0}^{2N}W_{q}\left|\sum_{p=1}^{N}\left(A_{p,q}x_ {p}+B_{p,q}\right)\right| \tag{4}\]
It can be seen from eq.(4) that the network is implemented by two layers of neurons: a hidden fully connected layer with the Absolute activation function (Abs, \(\varphi(y)=|y|\)) and 2N+1 neurons, where N is the dimension of the input vector, and a decision layer that is standard for the given problem: with a linear activation function in the case of the approximation problem, as in (4), or with a Softmax function in the case of the classification problem:
\[f_{c}(\overrightarrow{x})=Softmax_{c}\left(\sum_{q=0}^{2N}W_{q,c}\left|\sum_{p=1 }^{N}\left(A_{p,q}x_{p}+B_{p,q}\right)\right|\right) \tag{5}\]
where c is the class number and \(W_{q,c}\) are the coefficients of the decision layer.
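As a minimal illustration of eq.(5), the following PyTorch sketch builds the classifier with one fully connected hidden layer of 2N+1 neurons and Absolute activation; the Softmax is left implicit, as it is typically folded into the cross-entropy loss.

```python
import torch

class AbsNet(torch.nn.Module):
    def __init__(self, n_inputs, n_classes):
        super().__init__()
        # Hidden layer of 2N+1 neurons, as suggested by the Kolmogorov-Arnold form.
        self.hidden = torch.nn.Linear(n_inputs, 2 * n_inputs + 1)
        self.out = torch.nn.Linear(2 * n_inputs + 1, n_classes)

    def forward(self, x):
        # Absolute activation; returns logits (apply Softmax for probabilities).
        return self.out(torch.abs(self.hidden(x)))

model = AbsNet(n_inputs=2, n_classes=2)  # e.g. the 2D two-class problems below
```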
## 2 Simplest 2D classification problems
Let us demonstrate the possibility of using the Absolute activation function for effectively solving simple classification problems. Let us compare three networks: a fully-connected network with a single hidden layer with ReLU activations, a fully-connected network with two hidden layers with ReLU activations, and a fully-connected network with a single hidden layer with Abs activations. We will use two-dimensional input vectors (N=2). Three synthetic two-class classification problems were solved: a) linear separation, b) "cross" type separation, c) circular area separation. These tasks are shown in Fig.1.
In accordance with (4) and the Kolmogorov-Arnold theorem (1), we choose the number of neurons in each layer equal to 2N+1=5. The number of training epochs is 1000, loss function is cross-entropy, the optimum search method is ADAM [Kingma and Ba(2014)] with learning rate \(10^{-3}\). The first network has two hidden layers and 57 unknown parameters, the second and third networks have one hidden layer and 27 unknown parameters.
The architectures of the studied networks and training processes are shown in Fig.2. Fig.2 also shows the results of training the networks on these synthetic test datasets (A-C).
It should be noted that the solution of datasets (B-C) by the network with a single hidden ReLU layer looks unstable. In most cases it produces an accurate classification, but in some cases an inaccurate one (see Fig.2B for datasets B and C). Experiments show that for dataset B inaccurate training (final accuracy less than 0.8) occurs in about 9% of all cases, and for dataset C in about 16% of all cases. For the two-hidden-layer ReLU network the probability of inaccurate training is about 3% of the B-C cases (possibly due to the small number of training epochs), while for the single-hidden-layer Abs network there are no inaccurate runs.
This demonstrates a dependence of the single-hidden-layer fully-connected ReLU network on the initial conditions (initial coefficient values), and a weaker dependence of the single-hidden-layer Abs network's results on initial conditions.
Fig.2 shows that the neural network with a single hidden layer and Abs activations is no less efficient than the network with a single ReLU hidden layer, though less efficient than the network with two ReLU hidden layers. At the same time, it has the same number of free parameters as the network with one hidden ReLU layer and is less dependent on initial conditions during training. So, at first view, the Abs activation function can be effectively used in classification
Figure 1: Synthetic datasets for the simplest classification: A) linear separation; B) “cross” separation; C) circular area
Figure 2: The architecture of the studied simple networks: a) two hidden layers with ReLU activations, b) single hidden layer with ReLU activations c) single hidden layer with Abs activations. Results of training three networks on datasets (A-C, as shown in Fig.1). For each dataset, the dependence of the loss function on the training epoch on the training and validation datasets is shown.
networks and may improve their efficiency. Let us study the use of Absolute activation in more complex classification problems.
## 3 LeNet-kind solution of MNIST problem
### Preliminary analysis
Let us consider the solution of the MNIST problem - the task of classifying handwritten digital characters into 10 classes[Simard et al.(2003)].
When solving the MNIST problem, the LeNet-5[Lecun et al.(1998)] network was frequently used as a baseline solution, being quite compact and fast. To demonstrate the effectiveness of Absolute activation, the network was analyzed in two variants: the standard one (with Tanh activation functions, LeNet-5) and its modernization, obtained from LeNet-5 by replacing all activation functions with Abs (referred to as LeNet+Abs).
Both networks have 368,426 free parameters; the network architectures are similar and shown in Fig.3. Significant differences are highlighted in bold. More efficient networks also exist for solving the MNIST problem, for example those listed in [Apicella et al.(2021)], but they usually contain a larger number of free parameters (for example as many as 2,888,000, as in [Simard et al.(2003)]). Therefore, to demonstrate the effectiveness of the Absolute activation function, we will use the relatively simple LeNet-5 as the baseline solution.
Fig.3 shows the dependence of the loss function and accuracy of both networks over 100 training epochs on identical MNIST training and validation datasets (80% training data followed by 20% validation data). The optimization algorithm is ADAM with batch size 128 and learning rate \(10^{-3}\).
It can be seen from Fig.3 that the resulting accuracy of the LeNet+Abs network looks higher than the accuracy of the standard LeNet-5, and with an accurate training stop it should provide better accuracy and a smaller value of the loss function on the validation dataset than the standard LeNet-5 does. Thus, using the Abs activation function in this problem appears, at first view, more profitable than using Tanh activations due to the greater accuracy achieved.
### Training process details
It can be seen from Fig.3 that the dependence of the LeNet+Abs accuracy and loss function on the training epoch is not smooth, but much more volatile than for LeNet-5. There seem to be two reasons for this. First, when training the network, by default the epoch accuracy is calculated over the last batch, i.e. not over the entire training dataset. Second, the derivative of the Absolute activation function, indicating the direction of the gradient, is discontinuous at zero argument, so small changes in the argument can lead to sharp changes in the gradient; as a result, the ADAM algorithm does not provide a smooth enough descent, and one should use lower learning rates, slowing down the training process.
Figure 3: Tested network architectures: A) standard LeNet-5; B) LeNet+Abs (with Absolute activation functions). The results of training the two networks, from top to bottom: loss on the training and validation datasets, accuracy on the training and validation datasets.
To find the optimally trained model we should find the epoch with maximal accuracy on an unknown test dataset. We do not use the test dataset during training, so we may only estimate this accuracy. To prevent overtraining, we should not use training dataset information; only the validation dataset could be used.
An optimally trained network should provide good accuracy, exceeding a given limit, on almost any test dataset. After the network is trained, this minimal accuracy bound should be as high as possible.
To estimate the lower bound of accuracy on any test dataset we should know the distribution of network accuracy over different validation datasets. We have only a single validation dataset, so the lower bound could be estimated by the bootstrap algorithm [1]. But to use bootstrap we should accurately choose the significance level, which defines how far the lower bound over the test dataset should be below the mean validation accuracy. As experiments show, more accurate results can be obtained when we simply choose the minimal accuracy over the two halves of the validation dataset:
\[ACC_{expected@test}=\min(ACC(ValidationData1),ACC(ValidationData2)) \tag{6}\]
Another training problem is choosing the right learning rate so that the training process is fast and accurate. We achieve this by step-by-step decreasing the learning rate, reinitializing ADAM between steps. The ADAM algorithm is known to be very effective, but it has hidden parameters optimized during training; reinitialization resets these hidden parameters to their initial values. The training process at a given learning rate stops if the expected accuracy lower bound has not increased for 10 epochs since the epoch of the previous increase. During training we use learning rates stepping from \(10^{-3}\) down to \(10^{-6}\) with a 10-times decrease between steps. This approach is close to the well-known ReduceLROnPlateau algorithm with patience 10, monitoring \(ACC_{expected@test}\) and reinitializing ADAM before each reduction.
The training algorithm is shown as Algorithm 1
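A minimal PyTorch sketch of this training scheme: the learning rate steps from \(10^{-3}\) down to \(10^{-6}\), ADAM is re-initialized at each step (resetting its internal moment estimates), and training at a given rate stops after 10 epochs without improvement of the expected accuracy of eq.(6). The helpers `train_epoch` and `accuracy` are assumed, not defined in the paper.

```python
import copy
import torch

def fit(model, train_loader, val_half1, val_half2):
    best_acc, best_state = -1.0, copy.deepcopy(model.state_dict())
    for lr in (1e-3, 1e-4, 1e-5, 1e-6):
        # Re-initializing ADAM resets its internal moment estimates.
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        epochs_without_improvement = 0
        while epochs_without_improvement < 10:
            train_epoch(model, optimizer, train_loader)      # assumed helper
            acc_expected = min(accuracy(model, val_half1),   # eq.(6)
                               accuracy(model, val_half2))   # assumed helper
            if acc_expected > best_acc:
                best_acc = acc_expected
                best_state = copy.deepcopy(model.state_dict())
                epochs_without_improvement = 0
            else:
                epochs_without_improvement += 1
        # Continue the next learning-rate step from the best weights so far.
        model.load_state_dict(best_state)
    return model
```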
Table 1 shows the accuracy achieved and the total number of epochs during training of the standard LeNet-5 model and the LeNet+Abs model (both marked as "Not degraded" in Table 1). The table shows that the accuracy of LeNet+Abs exceeds the LeNet-5 accuracy, and the number of errors on the test dataset has fallen from 1.14% (for LeNet-5) to 0.56% (for LeNet+Abs), i.e. almost 2 times. The error level of 0.56% for LeNet+Abs also looks better than the lowest error level of 0.7% reported in [Lecun et al.(1998)] even for boosted variants of LeNet. One can also see that training LeNet+Abs is about 1.5 times slower than training LeNet-5 (100 vs. 68 training epochs, respectively).
### Gradient vanishing robustness
It is known that an important problem in training deep and recurrent networks is gradient vanishing and explosion - the loss of the network's ability to train
layers with an increase of their number [Bengio et al.(1994)]. Recently, Residual blocks[He et al.(2015)] have been found to solve this problem. Let us demonstrate that the Absolute activation function is resistant to gradient vanishing and explosion, no worse than the popular ReLU and SeLU functions.
To show this, 20 intermediate (disturbing) layers were placed inside each of the networks (LeNet-5 and LeNet+Abs) as the last hidden layers, complicating the training of the network through gradient vanishing. To the LeNet-5 and LeNet+Abs architectures, 20 layers with Tanh, ReLU, SeLU, or Abs activations were added, producing 8 final architectures: Lenet-5+20DTanh, Lenet-5+20DReLU, Lenet-5+20DSeLU, Lenet-5+20DAbs, Lenet+Abs+20DTanh, Lenet+Abs+20DReLU, Lenet+Abs+20DSeLU, Lenet+Abs+20DAbs. Examples of these architectures are shown in Fig.4.
All 8 resulting (deep) networks have 511226 free coefficients, about 143 thousand of which belong to the last (disturbing) layers; these networks differ only in the activation functions in their layers.
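A minimal sketch of how such disturbing layers can be appended in PyTorch; the layer width is an illustrative assumption chosen only to show the construction (it must match the base network's output size), not the exact dimensions used here.

```python
import torch

class Abs(torch.nn.Module):
    """Absolute activation: phi(y) = |y|."""
    def forward(self, x):
        return torch.abs(x)

def add_disturbing_layers(base, width=64, activation=Abs, n_layers=20):
    # Append n_layers fully connected layers with the chosen activation.
    layers = [base]
    for _ in range(n_layers):
        layers += [torch.nn.Linear(width, width), activation()]
    return torch.nn.Sequential(*layers)
```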
The training results are shown in Table 1.
It can be seen from Table 1 that adding 20 more hidden layers with Abs activation at the end still makes it possible to train LeNet+Abs+20DAbs, although with somewhat lower accuracy than in the absence of these layers, i.e. the LeNet+Abs network (99.05% vs. 99.44%, respectively). This confirms our theoretical expectation that the Absolute activation function does not cause significant gradient vanishing or explosion, and therefore can be effectively used
Figure 4: Degraded versions of LeNet-5 and LeNet+Abs networks to test the gradient vanishing and explosion effect: LeNet-5+20DTanh (A), LeNet+Abs+20DAbs (B), LeNet-5+20DReLU (C), LeNet+Abs+20DReLU (D). Architecture and quality of training.
in both simple and deep neural networks. It can be seen from Table 1 that Abs causes less gradient vanishing/explosion than ReLU and is as good as SeLU activation in this respect. This allows the use of Abs in deep networks.
### Reducing the LeNet-5 network size without loss of accuracy
The good stability of the Absolute activation function against gradient vanishing suggests that the original LeNet-5 neural network could be too complex for its reported accuracy. Let us reduce its size without losing accuracy. The most obvious way is to remove the last hidden convolutional layer from LeNet+Abs. The resulting network is referred to as SmallLeNet+Abs and is shown in Fig.5A. The TinyLeNet+Abs model is made from the SmallLeNet+Abs model by reducing the number of convolutions in the second convolution layer from 16 to 3. This new architecture is also shown in Fig.5B. Bold marks important changes.
Table 2 shows the accuracies achieved by these networks using training Algorithm 1. In addition, the number of trained model coefficients, the total number of training epochs, and the bootstrap confidence interval on the test dataset are shown (with a standard confidence level of 0.95).
Table 2 shows that the best accuracy on the test dataset is provided by the LeNet+Abs model. The SmallLeNet+Abs model with 52 thousand trained parameters and the TinyLeNet+Abs model with 10.6 thousand trained parameters also exceed the initial LeNet-5 (with 368 thousand parameters) accuracy in
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Network & Accuracy & Trained Epoches & Conf.interval \\ \hline \hline LeNet-5 architecture & & & \\ \hline \hline Not degraded & 98.86 & 68 & (98.65, 99.07) \\ \hline +20DTanh & 98.41 & 80 & (98.17,98.66) \\ \hline +20DReLU & 98.32 & 129 & (98.07,98.57) \\ \hline +20DSeLU & 98.75 & 80 & (98.53,98.97) \\ \hline +20DAbs(*) & 98.71 & 66 & (98.49,98.93) \\ \hline \hline LeNet+Abs architecture & & & \\ \hline
**Not degraded** & **99.44** & 100 & (99.30,99.59) \\ \hline +20DTanh(*) & 97.51 & 80 & (96.90,97.54) \\ \hline +20DReLU & 98.92 & 118 & (98.72,99.12) \\ \hline +20DSeLU & 99.03 & 98 & (98.84,99.22) \\ \hline **+20DAbs** & **99.05** & 112 & (98.86,99.24) \\ \hline \end{tabular}
\end{table}
Table 1: Accuracy (%) of various networks - standard LeNet-5 and LeNet+Abs, degraded by the last 20 hidden fully connected layers with different activation functions. *) marks the cases when the learning rate \(10^{-3}\) is not enough to start training and we start from \(10^{-4}\) instead. The bootstrap confidence interval over the test dataset at the standard confidence level 0.95 is also shown.
Figure 5: Smaller models A)SmallLeNet+Abs; B)TinyLeNet+Abs; C)LeNet+Conv120+Abs; D)LeNet+Conv64+Conv120+Abs. Bold highlights the main difference from the previous model.
most cases.
A more accurate network is the extended LeNet architecture, Lenet+Conv120+Abs, created by adding Conv120 and AveragePooling layers to LeNet (it has 76226 parameters and is shown in Fig.5C); it provides even better accuracy than the LeNet+Abs architecture (99.51%) while having 4.8 times fewer parameters.
Thus, replacing the Tanh activation functions with Abs makes it possible to reduce the size of the original LeNet-5 model by more than 4.8 and 7 times while improving its accuracy (the LeNet+Conv120+Abs and SmallLeNet+Abs variants, respectively), and provides better accuracy than the ReLU and SeLU variants. It can be seen from the bootstrap confidence intervals that even the TinyLeNet+Abs model, with 35 times fewer parameters than LeNet-5, provides an accuracy that is no worse than the accuracy of the original LeNet-5 with Tanh, ReLU, or SeLU activations.
As we have shown, even standard LeNet-5 with Abs activation (LeNet+Abs)
\begin{table}
\begin{tabular}{|l|c|c|l|l|} \hline
 & Accuracy & Trained epochs & Conf.interval & \# coefs \\ \hline \hline
\multicolumn{4}{|l|}{LeNet architecture} & 368426 \\ \hline
+Tanh (LeNet-5) & 98.86 & 68 & (98.65, 99.07) & 368426 \\ \hline
+ReLU & 99.13 & 72 & (98.95, 99.31) & - \\ \hline
+SeLU & 99.03 & 62 & (98.84, 99.23) & - \\ \hline
**+Abs** & **99.44** & 100 & (99.30, 99.58) & - \\ \hline \hline
\multicolumn{4}{|l|}{SmallLeNet architecture} & 51890 \\ \hline
+Tanh & 99.03 & 64 & (98.84, 99.22) & 51890 \\ \hline
+ReLU & 99.10 & 88 & (98.91, 99.28) & - \\ \hline
+SeLU & 99.10 & 59 & (98.92, 99.28) & - \\ \hline
**+Abs** & **99.28** & 116 & (99.12, 99.44) & - \\ \hline \hline
\multicolumn{4}{|l|}{TinyLeNet architecture} & 10615 \\ \hline
+Tanh & 98.76 & 75 & (98.54, 98.97) & 10615 \\ \hline
+ReLU & 98.94 & 71 & (98.74, 99.14) & - \\ \hline
+SeLU & 98.65 & 74 & (98.42, 98.88) & - \\ \hline
**+Abs** & **99.08** & 87 & (98.90, 99.27) & - \\ \hline \hline
\multicolumn{4}{|l|}{LeNet+Conv120 architecture} & 76226 \\ \hline
+Tanh & 99.09 & 72 & (98.91, 99.27) & - \\ \hline
+ReLU & 99.28 & 80 & (99.12, 99.45) & - \\ \hline
+SeLU & 99.26 & 65 & (99.09, 99.43) & - \\ \hline
**+Abs** & **99.51** & 103 & (99.38, 99.65) & - \\ \hline \hline
\end{tabular}
\end{table}
Table 2: The best accuracy obtained on the test dataset for different architectures with different activation functions. The number of trained model parameters and the bootstrap confidence interval (standard confidence level 0.95) over the test dataset are also shown. Bold marks the best activation function for a given architecture. \(ACC_{expected@test}\) is computed according to eq.6.
can provide very good accuracy (99.44%), comparable with top solutions for fixed shape activation functions [Apicella et al.(2021)]. Extended LeNet versions (Lenet+Conv120+Abs) reach accuracy of up to 99.51%, very close to the top solutions for fixed shape activation functions [Apicella et al.(2021)], while having fewer than 77,000 trained parameters.
## 4 More efficient accuracy predictions for training
As we have shown above, the networks with Abs activation look better than the networks with ReLU, SeLU and Tanh activations. But two problems arise.
Problem 1. The stability of training the network. We use random initial network parameters and no regularization, so the loss function minimum found is not the global one. We need to check how initialization and training affect the trained network accuracy.
Problem 2. The stop condition. As a stop condition we use a prediction of network accuracy on an unknown test dataset, computed from the validation dataset. It is obvious that different prediction algorithms lead to different network accuracy.
To study these problems we created 3 ensembles of LeNet+Conv120 networks (LeNet+Conv120+Abs, LeNet+Conv120+ReLU, LeNet+Conv120+SeLU, LeNet+Conv120+Tanh variants, which differ only in the activation function: Abs, ReLU, SeLU, Tanh). Each of the 3 ensembles is trained using a different model for predicting the accuracy on the test dataset from the validation dataset ( \(ACC_{expected@test}\) in Algorithm 1 ):
\[ACC_{expected@test,1}=\min(ACC(ValidationData1),ACC(ValidationData2)) \tag{7}\]
\[ACC_{expected@test,2}=MEAN_{bootstrap}(ValidationData)-STD_{bootstrap}( ValidationData) \tag{8}\]
\[ACC_{expected@test,3}=ACC_{expected@test,1}-STD_{bootstrap}(ValidationData) \tag{9}\]
The first prediction method was discussed earlier with Algorithm 1; the second is based on the bootstrap estimate of the lower bound of the prediction interval over the validation dataset with significance level 0.68; the last is a combination of the first and the second, lowering the expected bound.
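A minimal NumPy sketch of the three predictors of eqs.(7)-(9), applied to a 0/1 vector of per-sample correctness on the validation dataset; the number of bootstrap resamples is an assumption.

```python
import numpy as np

def expected_test_accuracy(correct, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    half = len(correct) // 2
    # Accuracies over the two halves of the validation dataset.
    acc1, acc2 = correct[:half].mean(), correct[half:].mean()
    # Bootstrap distribution of validation accuracy.
    boot = np.array([rng.choice(correct, size=len(correct), replace=True).mean()
                     for _ in range(n_boot)])
    pred1 = min(acc1, acc2)           # eq.(7)
    pred2 = boot.mean() - boot.std()  # eq.(8)
    pred3 = pred1 - boot.std()        # eq.(9)
    return pred1, pred2, pred3
```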
Each ensemble of trained networks was used to produce 2 values, shown in Table 3. The first one is the accuracy over the test dataset by majority ensemble voting, where the predicted label is the label predicted by the maximal number of networks (marked as M.V. - majority voting). The other measure is the prediction interval over the test dataset across the ensemble of trained networks (marked as C.I. - confidence interval).
Different purposes require different M.V. and C.I. values. For example, for majority ensemble voting the networks should be overtrained and M.V. should be as high as possible. As shown in Table 3, for the Abs network the best M.V. is provided by using the \(ACC_{expected@test,1}\) or \(ACC_{expected@test,3}\) accuracy prediction variant during training.
When only a single network is planned for prediction, we need the highest confidence interval. In this case using the \(ACC_{expected@test,3}\) prediction variant is preferable. For both cases, using the \(ACC_{expected@test,3}\) accuracy prediction for training LeNet+Conv120+Abs is preferable.
To improve the result, a LeNet+Conv64+Conv120+Abs network was made: a wider and deeper LeNet-like network, having more convolutions in the top layer than LeNet+Conv120+Abs has. Its architecture is shown in Fig.5D.
The model has 227474 parameters, which is also smaller than the number of parameters in the original LeNet-5 network, and provides better accuracy than LeNet+Conv120+Abs. Its ensemble training using the \(ACC_{expected@test,3}\) test-dataset accuracy prediction shows its increased accuracy, also exceeding its ReLU, SeLU and Tanh variants (shown in Table 3).
## 5 Conclusion
In this paper we discuss the Absolute activation function for classification neural networks. Examples of using this activation function in simple and complex classification problems are presented. In solving the MNIST problem with the LeNet-5 network, the efficiency of Abs is shown in comparison with Tanh, ReLU and SeLU activations. It allows reaching 99.44% accuracy with the standard LeNet-5 network by only changing all activations to Abs. It is shown that its use practically does not lead to gradient vanishing/explosion, and therefore Absolute activation can be used in both small and deep neural networks. It is shown that in solving the MNIST problem with the LeNet-5 architecture, the use of Absolute activation helps to significantly reduce the size of the neural network (up to 2 orders of magnitude in the number of trained parameters) and improve the accuracy of the solution. The mean reached accuracy is about 99.51% ([99.41%...99.59%] over different trained variants) with a decrease of the network size from 368 thousand to 76 thousand coefficients (LeNet+Conv120+Abs), and about 99.53% ([99.45%...99.60%] over different trained variants) with a decrease of the network size from 368 thousand to 227 thousand coefficients (LeNet+Conv64+Conv120+Abs). Therefore these networks with Absolute activations could be close to the best solution accuracy for fixed shape activation functions, reached by more complicated networks [Apicella et al.(2021)].
It is demonstrated that the training curve (accuracy vs. training epoch) when using the Abs activation is not smooth, so one should use a more complex technique for changing the learning rate and stopping training. We do this by estimating the lower bound of the network accuracy on an unseen test dataset using accuracies calculated
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
 & \(ACC_{expected@test,1}\) & \(ACC_{expected@test,2}\) & \(ACC_{expected@test,3}\) \\ \hline \hline
\multicolumn{4}{|l|}{LeNet+Conv120 architecture} \\ \hline
+Tanh C.I. & [98.95, 99.20] & [98.93, 99.21] & [99.03, 99.22] \\ \hline
+Tanh M.V. & 99.33 & 99.32 & 99.36 \\ \hline
+ReLU C.I. & [99.12, 99.38] & [99.16, 99.37] & [99.13, 99.37] \\ \hline
+ReLU M.V. & 99.52 & 99.52 & 99.48 \\ \hline
+SeLU C.I. & [99.19, 99.36] & [99.17, 99.39] & [99.20, 99.35] \\ \hline
+SeLU M.V. & 99.54 & 99.55 & 99.54 \\ \hline \hline
+Abs C.I. & [99.39, 99.56] & [99.35, 99.57] & [**99.41, 99.59**] \\ \hline
+Abs M.V. & **99.64** & 99.61 & **99.64** \\ \hline \hline
\multicolumn{4}{|l|}{LeNet+Conv64+Conv120 architecture} \\ \hline
+Tanh C.I. & & & [98.87, 99.15] \\ \hline
+Tanh M.V. & & & 99.37 \\ \hline
+ReLU C.I. & & & [99.24, 99.50] \\ \hline
+ReLU M.V. & & & 99.55 \\ \hline
+SeLU C.I. & & & [99.29, 99.44] \\ \hline
+SeLU M.V. & & & 99.56 \\ \hline
+Abs C.I. & & & **[99.45, 99.60]** \\ \hline
+Abs M.V. & & & **99.64** \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Accuracy results for ensembles of 20 training runs, producing 20 trained variants of the LeNet+Conv120 and LeNet+Conv64+Conv120 networks, with different algorithms for predicting the test dataset accuracy from the validation dataset during training, Eqs. 7-9. M.V. rows: majority-voting ensemble accuracy over 20 networks on the test dataset; C.I. rows: accuracy limits over the ensemble on the test dataset. Best results are marked in bold.
over the halves of the validation dataset and by the bootstrap method, and by combining these predictions.
Thus, the high efficiency of the Absolute activation function has been demonstrated, which makes it possible to improve the accuracy of current classification neural networks even without changing their architecture.
## Acknowledgements
The data analysis was performed in part on the equipment of the Bioinformatics Shared Access Center, the Federal Research Center Institute of Cytology and Genetics of Siberian Branch of the Russian Academy of Sciences (ICG SB RAS). The work has been done under financial support of the Ministry of Science and Higher Education of the Russian Federation (Subsidy No.075-GZ/C3569/278).
|
2303.07189 | Optimizing Convolutional Neural Networks for Chronic Obstructive
Pulmonary Disease Detection in Clinical Computed Tomography Imaging | We aim to optimize the binary detection of Chronic Obstructive Pulmonary
Disease (COPD) based on emphysema presence in the lung with convolutional
neural networks (CNN) by exploring manually adjusted versus automated
window-setting optimization (WSO) on computed tomography (CT) images. 7,194 CT
images (3,597 with COPD; 3,597 healthy controls) from 78 subjects (43 with
COPD; 35 healthy controls) were selected retrospectively (10.2018-12.2019) and
preprocessed. For each image, intensity values were manually clipped to the
emphysema window setting and a baseline 'full-range' window setting.
Class-balanced train, validation, and test sets contained 3,392, 1,114, and
2,688 images. The network backbone was optimized by comparing various CNN
architectures. Furthermore, automated WSO was implemented by adding a
customized layer to the model. The image-level area under the Receiver
Operating Characteristics curve (AUC) [lower, upper limit 95% confidence] was
utilized to compare model variations. Repeated inference (n=7) on the test set
showed that the DenseNet was the most efficient backbone and achieved a mean
AUC of 0.80 [0.76, 0.85] without WSO. Comparably, with input images manually
adjusted to the emphysema window, the DenseNet model predicted COPD with a mean
AUC of 0.86 [0.82, 0.89]. By adding a customized WSO layer to the DenseNet, an
optimal window in the proximity of the emphysema window setting was learned
automatically, and a mean AUC of 0.82 [0.78, 0.86] was achieved. Detection of
COPD with DenseNet models was improved by WSO of CT data to the emphysema
window setting range. | Tina Dorosti, Manuel Schultheiss, Felix Hofmann, Johannes Thalhammer, Luisa Kirchner, Theresa Urban, Franz Pfeiffer, Florian Schaff, Tobias Lasser, Daniela Pfeiffer | 2023-03-13T15:30:28Z | http://arxiv.org/abs/2303.07189v3 | Optimizing Convolutional Neural Networks for Chronic Obstructive Pulmonary Disease Detection in Clinical Computed Tomography Imaging
###### Abstract
Chronic Obstructive Pulmonary Disease (COPD) is a leading cause of death worldwide, yet early detection and treatment can prevent the progression of the disease. In contrast to the conventional method of detecting COPD with spirometry tests, X-ray Computed Tomography (CT) scans of the chest provide a measure of morphological changes in the lung. It has been shown that automated detection of COPD can be performed with deep learning models. However, the potential of incorporating optimal window setting selection, typically carried out by clinicians during examination of CT scans for COPD, is generally overlooked in deep learning approaches. We aim to optimize the binary classification of COPD with densely connected convolutional neural networks (DenseNets) through implementation of manual and automated Window-Setting Optimization (WSO) steps. Our dataset consisted of 78 CT scans from the Klinikum rechts der Isar research hospital. Repeated inference on the test set showed that without WSO, the plain DenseNet resulted in a mean slice-level AUC of 0.80\(\pm\)0.05. With input images manually adjusted to the emphysema window setting, the plain DenseNet model predicted COPD with a mean AUC of 0.86\(\pm\)0.04. By automating the WSO through addition of a customized layer to the DenseNet, an optimal window setting in the proximity of the emphysema window setting was learned and a mean AUC of 0.82\(\pm\)0.04 was achieved. Detection of COPD with DenseNet models was optimized by WSO of CT data to the emphysema window setting range, demonstrating the importance of implementing optimal window setting selection in the deep learning pipeline.
Convolutional Neural Network (CNN) emphysema lung Window-Setting Optimization (WSO) X-ray Computed Tomography (CT)
## 1 Introduction
Chronic Obstructive Pulmonary Disease (COPD) refers to a group of respiratory diseases that reduce the exchange of oxygen and carbon-dioxide in the lung. COPD chronically impairs the structure of the lung by narrowing the airways and damaging the air sacs. One of the common diseases associated with COPD is emphysema, which is most often observed in smokers [1]. Although disease progression can be prevented with early detection, COPD is among the
leading causes of death worldwide, with 3.23 million deaths recorded across the globe in 2019 [2]. In addition to increased mortality rates in direct correlation with the disease, patients with COPD are likely to develop comorbid diseases, such as cardiovascular diseases, mental disorders, and other respiratory diseases [3]. Furthermore, with respect to the recent outbreak of the Severe Acute Respiratory Syndrome Coronavirus type 2 (SARS-CoV-2) in 2019 and the associated Coronavirus disease (COVID-19), current research suggests that all-cause mortality risks are higher for individuals with COPD preconditions [4],[5]. With early detection and intervention, the prevalence and negative impacts of COPD can be decreased [3].
A readily-available procedure for the detection of COPD is the spirometry test, whereby a measure of forced exhalation volume less than a reference value is indicative of COPD. The Global Initiative for Chronic Obstructive Lung Disease (GOLD) committee has defined a four-stage progression scale for the diagnosis of COPD based on spirometry measurements, staging from mild (I) to very severe (IV) [1]. Although spirometry reliably detects advanced stages of COPD, results for patients at an early stage of COPD can still be negative despite the presence of chronic symptoms, such as a persistent cough and shortness of breath [6],[1]. Moreover, patients categorized in the same GOLD stage have shown drastic morphological differences in the lung structure [7].
X-ray Computed Tomography (CT) scanning of the chest is an alternative method for the detection of COPD. Chest CT scans provide detailed three-dimensional morphological information about the lung structure. Each volume element is characterized by its Hounsfield Unit (HU) value, a measure of the local attenuation coefficient for X-rays. The three-dimensional information obtained about phenotypic abnormalities and pattern of morphological changes reflecting emphysema allows for detection and control of disease progression even in early stages. In 2015, the Fleischner Society introduced a disease progression scale based on pattern of abnormalities present in CT data that correspond to COPD and emphysema sub-types [7].
In recent years, large scale studies with publicly-available datasets, such as the Evaluation of COPD Longitudinally to Identify Predictive Surrogate End-points (ECLIPSE) and the COPDGene study, have been carried out to investigate the association of COPD with biomarkers, genetic risk factors, and epidemiologic indicators [8],[9]. In parallel, improvements in computation power and machine learning algorithms have made Convolutional Neural Networks (CNN) a popular tool for automated classification and detection tasks in medical imaging. CNNs are a specialized case of machine learning algorithms that can extract features from image data and are often applied to computer vision tasks [10]. Consequently, with rising prevalence of COPD, large imaging datasets, and technological advancements in the field of machine learning, CNNs have been applied to automate binary classification of COPD and have shown promising results [11],[12],[13],[14]. However, model outcomes are still not ready for integration into a computer-aided clinical workflow for efficient and cost-effective COPD diagnosis. CNN research in other areas of medical data analysis, such as e.g. intracranial hemorrhage classification, has shown that the incorporation of clinically-relevant steps in the model workflow can improve the output. In particular, it was found that optimizing clinical window-setting parameters of input CT images greatly affects the output quality. [15],[16],[17].
Since the extant literature on COPD detection using CNNs predominantly focuses on fine-tuning deep learning algorithms, the potential of implementing preprocessing steps to more closely adapt clinical workflow processes has thus far not been explored in detail. Furthermore, relevant literature mainly benefits from the ECLIPSE and COPDGene study datasets, where ground truth labels for the scans are given on the GOLD standard progression scale, instead of the phenotypic-relevant scale introduced by the Fleischner Society [11],[12],[13],[14]. In this work, we aim to optimize the detection of COPD based on emphysema presence in the lung with densely connected CNNs (DenseNets). We adapted a routine clinical-workflow procedure for a total of 78 chest CT scans obtained from the Klinikum rechts der Isar research hospital of the Technical University of Munich. In doing so, the effects of manually-adjusted versus automated window-setting optimization on the binary classification task for differentiation between healthy and COPD patient slices were explored in detail. Our findings demonstrate that diligent preprocessing based on existing radiological knowledge, as well as selecting phenotypically representative ground truth labels positively impact the outcome of COPD detection with CNN models.
## 2 Methods
### Dataset
A total of 78 patients with contrast enhanced chest CTs were retrospectively selected from our picture archiving and communication system at the Klinikum rechts der Isar research hospital between October, 2018 and December, 2019. CT scans included those of patients suffering from different COPD stages (n = 43) and healthy controls (n = 35). Scans that presented conditions or pathologies that did not correspond to COPD, yet could influence lung parenchyma, such as pulmonary congestion or lung cancer, were excluded in the selection process. Imaging was carried out with an IQon Spectral CT scanner (Royal Philips, Netherlands). The CT scans were first anonymized and then graded by three
expert radiologists with four to 12 years of experience. Patient-level grading was based on the six Fleischner Score (FS) categories of absent (FS = 0), trace (FS = 1), mild (FS = 2), moderate (FS = 3), confluent (FS = 4), and advanced destructive (FS = 5) emphysema, as per the Fleischner Society's statement [7]. Scans with FS \(>\) 2 were considered as the COPD class for our binary classification task. Patients at a moderate COPD stage exhibit many defined areas of low attenuation in the CT scan, covering over 5% of the lung region [7]. Therefore, to distinguish slices presenting COPD from other slices, CT scans with an FS = 3 were further annotated on a slice-level basis by a radiologist not involved in the initial FS grading process. Datasets were selected on a slice-level basis as follows: the training and validation sets included 3,392 and 1,114 slices, respectively. A total of 2,688 slices were reserved for the test set. All sets contained an equal number of slices from the COPD and the no COPD class.
### Data preprocessing
As an initial preprocessing step, each slice was multiplied by its corresponding lung segmentation mask, generated with a commercially available software (IntelliSpace, Royal Philips, Netherlands). The 256x256 pixel images were clipped to the respective window setting and normalized to values between zero and one. A window setting is given by the window width and the window level (WW, WL) in HU as standardized by radiologists. Note that the WL defines the mid-point value of a window setting. Here, the emphysema window (124, -962) HU was used to clip the CT images for classification of COPD. Furthermore, a 'full-range' windowing (2048, 0) HU was applied to introduce a base-line intensity range for all slices. The (WW\({}_{\text{full-range}}\), WL\({}_{\text{full-range}}\)) values were set based on the minimum, -1024 HU, and the maximum, 966 HU, intensity values recorded over all slices. Figure 1 shows an example slice from the no COPD class in (a, b), and a slice from the COPD class in (c-f). The slices were preprocessed to the full-range window setting in Figure 1(a, c, e) and to the emphysema window setting in (b, d, f). The nonhomogeneous patches of low attenuation corresponding to emphysema are emphasized with a stronger contrast in Figure 1(d) compared to the full-range window setting in Figure 1(c).
Figure 1: Example slices from the test set corresponding to (a, b) a patient (FS = 0) in the no COPD class and (c–f) a patient (FS = 4) in the COPD class. Slices are clipped to (a, c, e) the full-range and (b, d, f) the emphysema window settings. The segmented lung region is shown in (a–d).
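The clipping-and-normalisation step just described can be sketched in a few lines (a minimal NumPy sketch; the random slice is a placeholder for real CT data):

```python
import numpy as np

def apply_window(img_hu, ww, wl):
    """Clip a CT slice (in HU) to window (WW, WL) and normalise to [0, 1]."""
    lo, hi = wl - ww / 2.0, wl + ww / 2.0
    return (np.clip(img_hu, lo, hi) - lo) / (hi - lo)

slice_hu = np.random.uniform(-1024, 966, size=(256, 256))  # placeholder slice
emphysema = apply_window(slice_hu, ww=124, wl=-962)
full_range = apply_window(slice_hu, ww=2048, wl=0)
```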
### Implementation of DenseNets
Three DenseNets with minor differences in their architectures and training process were compared to examine the effects of window setting on COPD detection. The models were implemented with the TensorFlow platform (version 2.4.0) [18].
#### 2.3.1 Plain DenseNet
The DenseNet with 121 layers (plain DenseNet) was chosen as it has been shown to outperform other CNNs such as VGGNet, AlexNet, ResNet, and DenseNet201 for COPD classification [19]. Figure 2 depicts the architecture of the DenseNet model used in this work as introduced by [20] with a growth rate of 32. All models were compiled with a binary cross entropy loss and the Adam optimizer [21]. Early stopping and reduced learning rate were scheduled for the training process over 50 and 15 epochs respectively. A learning rate of 0.01 was initially set and reduced by a factor of 10 after 15 epochs if the validation loss did not reduce. The aforementioned parameters were set empirically. To analyze the influence of preprocessed input slices to different window settings on classification of COPD, the plain DenseNet model was trained and tested on images linearly clipped to the full-range and the emphysema window settings.
#### 2.3.2 DenseNet with Added Window-Setting Optimization Layer (DenseNet\({}_{\text{WSO}}\))
A window-setting optimization (WSO) layer was added to the aforementioned DenseNet as suggested by [16]. Both a Rectified Linear Unit (ReLU) function and a sigmoid function were initially considered for the WSO layer. The ReLU variant consistently outperformed the sigmoid variant on the validation set. Therefore, only DenseNet\({}_{\text{WSO}}\) models with a ReLU activation function in the WSO layer are considered here. As depicted in Figure 2, the WSO layer consisted of a 1x1 convolution layer followed by a ReLU activation function. The ReLU hereby acts as a windowing function, trained
Figure 2: DenseNet architecture used for binary classification of COPD. The model constituents, Dense Block (DB), Dense Layer (DL), and Transition Layer (TL), are expanded in detail. The convolution (conv) and pooling (pool) layers are described by their stride (s) and padding (p) parameters. DenseNet-characteristic skip connections are shown in the DB and DL. The model had a growth rate of 32. The window-setting optimization (WSO) layer consisted of a 1x1 convolution layer followed by a ReLU activation and was used for automatic optimization of the window setting in the DenseNetwso and DenseNetNF implementations. The architecturally specific vertical digits for each box represent the side length dimensions, and the numbers over each block correspond to the number of filters. Adapted from [22].
to find an optimal window setting for the classification task. The WW and WL values were related to the learnable weight (w) and bias (b) parameters of the ReLU function, taken from [16] with correction,
\[f_{\mathrm{ReLU}}(x)=max(min(\mathrm{w}x+\mathrm{b},\mathrm{U}),0),\quad\mathrm{ where}\quad\mathrm{w}=\frac{\mathrm{U}}{\mathrm{WW}},\quad\mathrm{b}=\frac{ \mathrm{U}}{\mathrm{WW}}(\frac{\mathrm{WW}}{2}-\mathrm{WL}). \tag{1}\]
The parameter U sets the upper bound for the ReLU windowing function. Therefore, to obtain learned window settings that range between zero and one, the upper limit was set to \(\mathrm{U}=1\). The DenseNet\({}_{\text{WSO}}\) model was trained to converge to an optimal window setting after being initialized to the full-range and the emphysema window settings, while simultaneously adjusting the learnable parameters of the DenseNet block for classification of COPD. Initialization of the WSO layer was carried out by defining the learnable parameters for each window setting respectively. To do so, all input slices for DenseNet\({}_{\text{WSO}}\) were given in the full-range window setting and normalized. The optimal window settings learned by the model were calculated using (1). All other architectural elements and training processes were identical to those of the plain DenseNet.
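A minimal Keras sketch of such a WSO layer, assuming a single-channel input so that the 1x1 convolution reduces to a scalar weight and bias; this is our illustration, not the authors' exact code. Initialisation from a window setting follows Eq. (1), and `window()` inverts it to read off the learned (WW, WL):

```python
import tensorflow as tf

class WSOLayer(tf.keras.layers.Layer):
    """1x1 convolution followed by the upper-bounded ReLU of Eq. (1)."""
    def __init__(self, ww_init, wl_init, upper=1.0, **kwargs):
        super().__init__(**kwargs)
        self.upper = upper
        # Initialise (w, b) from the given window setting via Eq. (1).
        self.w0 = upper / ww_init
        self.b0 = (upper / ww_init) * (ww_init / 2.0 - wl_init)

    def build(self, input_shape):
        init = tf.keras.initializers.Constant
        self.w = self.add_weight(name="w", shape=(1,), initializer=init(self.w0))
        self.b = self.add_weight(name="b", shape=(1,), initializer=init(self.b0))

    def call(self, x):
        # f_ReLU(x) = max(min(w*x + b, U), 0)
        return tf.clip_by_value(self.w * x + self.b, 0.0, self.upper)

    def window(self):
        # Invert Eq. (1) to recover the learned (WW, WL).
        w, b = float(self.w.numpy()[0]), float(self.b.numpy()[0])
        ww = self.upper / w
        return ww, ww / 2.0 - b * ww / self.upper
```

For the FNF schedule described next, the same two weights can be frozen and unfrozen between training phases, for instance via the layer's `trainable` attribute.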
#### 2.3.3 DenseNet\({}_{\text{WSO}}\) with Sequentially Trained WSO Parameters (DenseNet\({}_{\text{FNF}}\))
The window setting learned by the DenseNet\({}_{\text{WSO}}\) model after initialization to the full-range window setting exhibited large standard deviations over seven runs, as described in subsection 3.2. In an attempt to stabilize the learned window setting over all runs, the DenseNet\({}_{\text{WSO}}\) model was first trained with the learnable parameters of the WSO layer, w and b, frozen. In doing so, the clipping parameters of the WSO layer were fixed to the initialized settings. Then, the model was further trained with the WSO layer unfrozen, which allowed its parameters to adjust towards the optimal window setting. Additionally, the same model was trained for a third round, again with the learnable parameters of the WSO layer frozen. We refer to this sequence of training with frozen, not-frozen, and frozen (FNF) WSO-layer parameters as DenseNet\({}_{\text{FNF}}\). As with DenseNet\({}_{\text{WSO}}\), all input slices for DenseNet\({}_{\text{FNF}}\) were clipped to the full-range windowing and normalized. All other architectural elements and training processes were identical to those of the plain DenseNet.
### Evaluation Metrics
Training was repeated seven times, and inference on the test data was performed once per run. The Receiver Operating Characteristics (ROC) curve and the area under the ROC curve (AUC) were used to assess the performance of the DenseNet models with different window settings on the binary classification task. Since the AUC aggregates over different threshold choices, this metric was preferred over the conventional accuracy metric. Additionally, for smaller sample sizes, the choice of maximizing sensitivity or (\(1-\)specificity) becomes ambiguous since there is an inherent trade-off between the two. Therefore, the AUC was taken as the evaluation metric: it resolves ambiguities in the choice of threshold by providing the highest value for the best observer while being independent of any particular threshold [23]. The Scikit-learn library (version 1.2.0) was used to generate the ROC curves, choose optimal thresholds for each curve, and calculate the respective AUC values [24].
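With scikit-learn, the per-run evaluation reduces to a few calls (a sketch; how the optimal threshold is picked is not detailed in the text, so Youden's J statistic is used here as one common choice):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true = np.random.randint(0, 2, size=2688)   # placeholder slice labels
y_score = np.random.rand(2688)                # placeholder model outputs

fpr, tpr, thr = roc_curve(y_true, y_score)
print("AUC:", auc(fpr, tpr))
print("optimal threshold (Youden's J):", thr[np.argmax(tpr - fpr)])
```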
## 3 Results
This section presents the classification results of the three DenseNet variants.
### Manually-adjusted WSO
Using the plain DenseNet model, slice-level ROC plots with corresponding AUC values were compared between full-range and emphysema window settings in Figure 3. Since the test set had balanced slices from both classes of no COPD and COPD, the chance diagonal was used as a visual guide to mark the AUC value of 0.5. We see in Figure 3 that clipping data to emphysema window setting allows the model to consistently achieve better results (mean AUC = 0.86\(\pm\)0.04) in comparison to the full-range window setting (mean AUC = 0.80\(\pm\)0.05). The single highest AUC value of 0.91 corresponded to the plain DenseNet model with the input slices preprocessed to the emphysema window setting.
### Automatically-adjusted WSO
The window-setting values in Table 1 correspond to the mean and standard deviation values for WW and WL over the seven runs of each arrangement. The information in Table 1 is independent of the inference data set, as the learned window-setting values are fixed model-specific parameters after a completed training run. The learned WW and WL
parameters were calculated from the weights and bias values of the WSO layer using (1). Figure 4 shows the learned and the corresponding initialization window setting for each WSO model. Note that the window settings used for initialization of WSO models were the same as the parameters used for clipping the inputs to the plain DenseNet.
We notice a shift towards the lower end of the HU range in all learned window settings, as given by Table 1 and Figure 4. The mean learned WL decreased more drastically for models initialized to the full-range window setting. The observed trends suggest a convergence of the learned WW and WL parameters towards the standard emphysema window setting for both DenseNet\({}_{\text{WSO}}\) and DenseNet\({}_{\text{FNF}}\) when initialized to the full-range window setting. Between the two models, DenseNet\({}_{\text{FNF}}\) learned a window setting closer to the emphysema window setting regardless of the initialization window setting. However, when initialized to the full-range window, DenseNet\({}_{\text{FNF}}\) arrived at the mean WW and WL parameters over seven runs with less deviation than when initialized to the emphysema window. Overall, we see that the closer the learned window setting is to the standard emphysema window, the better the mean AUC values achieved.
Figures 5 and 6 depict the ROC curves for the DenseNet\({}_{\text{WSO}}\) and DenseNet\({}_{\text{FNF}}\) models, respectively. We observe that initialization to the emphysema windowing results in more consistent AUC values over seven runs compared to the full-range window setting for the DenseNet\({}_{\text{WSO}}\) model. Conversely, for the DenseNet\({}_{\text{FNF}}\) model, initialization to the full-range window generates more consistent AUC values over seven runs compared to the emphysema window setting. These results are in agreement with the standard deviation values given in Table 1. The highest AUC value achieved by the DenseNet\({}_{\text{WSO}}\) and DenseNet\({}_{\text{FNF}}\) models was 0.91, corresponding to the emphysema window setting initialization of the DenseNet\({}_{\text{FNF}}\) model.
### Optimal Window Setting for COPD Detection
The mean AUC values for all model and window setting combinations for inference on the same test set are provided in Table 2. Overall, the plain DenseNet with input slices clipped to the emphysema window setting resulted in the best mean AUC value of 0.86\(\pm\)0.04 over all runs. Implementation of the WSO layer in the DenseNet\({}_{\text{WSO}}\) and DenseNet\({}_{\text{FNF}}\) models did not drastically enhance the AUC compared to the results obtained with the plain DenseNet. Taking the plain DenseNet model with full-range input images as baseline, the DenseNet\({}_{\text{FNF}}\) model generated slightly better AUC values when initialized to either window setting. However, the optimal window setting for the COPD detection task was the standard emphysema window setting of (124, -962) HU and not a window setting learned by either of the automated WSO models.
## 4 Discussion
Results in Table 2 indicate that adjusting input slices to different window settings directly impacts the binary classification of COPD. Looking closer at the plain DenseNet model, clipping the input data to different window settings influenced the AUC values: with the AUC of the plain DenseNet on full-range input data as baseline, clipping the input to the emphysema window increased the AUC from 0.80 to 0.86.
The AUC values for the DenseNet\({}_{\text{WSO}}\) model in Table 2 suggest the shortcoming of the model in simultaneously detecting COPD and converging to optimal windowing parameters when initialized to the full-range window setting. Furthermore, the windowing parameters learned with this setup suffered from large standard deviations between the seven runs, as seen in Table 1. Lower standard deviations in learned window settings were observed when the WSO layer was trained with periodically frozen learnable parameters, as implemented in DenseNet\({}_{\text{FNF}}\) initialized to the full-range window setting.
Through automatic WSO, only minimal improvement in AUC value was observed with the DenseNet\({}_{\text{WSO}}\) and the DenseNet\({}_{\text{FNF}}\) models initialized to emphysema window setting, in comparison to the baseline. The window settings learned by these two models were in the vicinity of the standard emphysema window at the lower ranges of the HU scale. However, neither DenseNet\({}_{\text{WSO}}\) nor DenseNet\({}_{\text{FNF}}\) outperformed the mean AUC values obtained with the plain DenseNet model with images clipped to the standard emphysema window setting. A possible explanation is that although the single WSO layer was effective in converging to the optimal emphysema window setting, it was not sufficiently complex for the task of optimal window setting selection.
The standard emphysema windowing is tailored to present high contrast between healthy and emphysematous tissue in the lung. Therefore, as the ground truth labels for our dataset were graded based on the severity of emphysema, the results were in line with the hypothesis that slices clipped directly to the standard emphysema windowing, or automatically clipped with the WSO layer to learned window-setting parameters in the proximity of standard emphysema window, would improve classification of COPD with DenseNets. Optimizing for window setting as a means of increasing contrast in slices was effective for the detection task because the Fleischner Score (FS) ground-truth labels were directly based on disease-relevant morphological changes in the lung. Therefore, for models trained with spirometry-based GOLD standard COPD stages as ground truth label for lung CTs, adjusting input data based on window setting could potentially not be as effective. Utilizing FS as ground-truth labels also enabled us to achieve comparable results to related works in the literature, despite our use of a considerably smaller dataset [12],[13],[14].
The main limitation of this work was its small dataset, giving rise to intra-slice correlation for slice-level evaluations. Additionally, all patients were examined at the same hospital. Extendability of our findings to a larger, more diverse dataset should be further explored. In the context of COPD, future works could further explore categorical classification of the disease based on the progression scale introduced by the Fleischner society.
We showed that optimizing for a task-specific window-setting improved CNN outcome by enhancing disease-relevant information from the input data. Our findings can be extended to a range of computer vision tasks in medicine focusing on X-ray and CT data. Specifically, when disease relevant window settings are commonly used by radiologists, that information can be extended to the deep learning pipeline.
\begin{table}
\begin{tabular}{l c c} \hline \hline
Model / Windowing & Full-range & Emphysema \\ \hline
Plain DenseNet & 0.80\(\pm\)0.05 & **0.86\(\pm\)0.04** \\
DenseNet\({}_{\text{WSO}}\) & 0.79\(\pm\)0.07 & 0.81\(\pm\)0.04 \\
DenseNet\({}_{\text{FNF}}\) & 0.81\(\pm\)0.03 & 0.82\(\pm\)0.04 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Mean and standard deviation of AUC values from test-set inference for each model and window-setting combination, calculated over seven runs.
## 5 Conclusion
Our results demonstrate the importance of incorporating the common clinical workflow process of window setting selection for CT scans into the CNN pipeline for binary classification of COPD. Specifically, preprocessing CT images to the emphysema window setting improved the plain DenseNet model's performance. Furthermore, we showed that when information regarding the optimal window setting for classification of COPD is unknown, learning an optimal windowing through the addition of a WSO layer, as in the DenseNet\({}_{\text{FNF}}\) model, results in higher AUC values compared to training a plain DenseNet model with full-range normalized slices.
|
2301.07928 | Hamiltonian Neural Networks with Automatic Symmetry Detection | Recently, Hamiltonian neural networks (HNN) have been introduced to
incorporate prior physical knowledge when learning the dynamical equations of
Hamiltonian systems. Hereby, the symplectic system structure is preserved
despite the data-driven modeling approach. However, preserving symmetries
requires additional attention. In this research, we enhance HNN with a Lie
algebra framework to detect and embed symmetries in the neural network. This
approach allows to simultaneously learn the symmetry group action and the total
energy of the system. As illustrating examples, a pendulum on a cart and a
two-body problem from astrodynamics are considered. | Eva Dierkes, Christian Offen, Sina Ober-Blöbaum, Kathrin Flaßkamp | 2023-01-19T07:34:57Z | http://arxiv.org/abs/2301.07928v2 | # Hamiltonian Neural Networks with Automatic Symmetry Detection
###### Abstract
Recently, Hamiltonian neural networks (HNN) have been introduced to incorporate prior physical knowledge when learning the dynamical equations of Hamiltonian systems. Hereby, the symplectic system structure is preserved despite the data-driven modeling approach. However, preserving symmetries requires additional attention. In this research, we enhance the HNN with a Lie algebra framework to detect and embed symmetries in the neural network. This approach allows to simultaneously learn the symmetry group action and the total energy of the system. As illustrating examples, a pendulum on a cart and a two-body problem from astrodynamics are considered.
Hamiltonian neural networks, Hamiltonian systems
Our approach is based on learning the Hamiltonian along with a spanning set of generators of invariant vector fields and testing via a loss function whether the derivatives of the Hamiltonian \(H\) along the invariant vector fields vanish. Utilising Noether's theorem, the two ingredients, symmetries and symplectic structure, then allow us to identify integrals of motion. We exemplify the concept by the cart-pendulum and the two-body example, for which we train a symmetry-symplecticity-preserving neural network.
## II Preliminaries
To introduce notation and our main examples, we summarize preliminaries from the referred research fields, namely geometric mechanics with symmetries and neural network learning. We refer to Marsden and Abraham (1978); Marsden and Ratiu (1999); Olver (1986); Goodfellow, Bengio, and Courville (2016) for details.
### Symplectic maps and Hamiltonian dynamics
Let \(Q\) denote an open, non-empty subset of \(\mathbb{R}^{n}\), which we refer to as _configuration space_. The cotangent bundle \(\mathcal{M}=T^{*}Q\) can be identified with an open subset of \(\mathbb{R}^{2n}\cong\mathbb{R}^{n}\times\mathbb{R}^{n}\) with (Darboux) coordinates \(z=(q,p)\in\mathcal{M}\). The manifold \(\mathcal{M}\) is called _phase space_. A diffeomorphism \(\Psi\colon\mathcal{M}\to\mathcal{M}\) is _symplectic_ if \(\mathrm{D}\Psi(z)^{\top}J\,\mathrm{D}\Psi(z)=J\) for all \(z\in\mathcal{M}\), where \(\mathrm{D}\Psi(z)\) is the Jacobi matrix of \(\Psi\) at \(z\) and
\[J=\begin{pmatrix}0&-I_{n}\\ I_{n}&0\end{pmatrix}\in\mathbb{R}^{2n}\]
is the symplectic structure matrix of \(\mathcal{M}\). Here, \(I_{n}\) denotes the \(n\)-dimensional identity matrix.
Symplectic maps on \(\mathcal{M}\) can be constructed as cotangent lifts of diffeomorphisms and as flows of Hamiltonian vector fields: given a diffeomorphism \(\phi\colon\mathcal{Q}\to Q\) we can obtain a symplectic map \(\Psi\colon\mathcal{M}\to\mathcal{M}\) as \(\Psi(q,p)=(\phi^{-1}(q),\mathrm{D}\phi(q)^{\top}p)\). The map \(\Psi\colon\mathcal{M}\to\mathcal{M}\) is called _the cotangent lift of \(\phi\)_.
For any scalar valued map \(H\colon\mathcal{M}\to\mathbb{R}\) we consider the _Hamiltonian vector field_\(X_{H}(z)=J^{-1}\nabla H(z)\). _Hamilton's equations_ are given as the flow equations to \(X_{H}\), i.e.
\[\dot{z}=X_{H}(z)=J^{-1}\nabla H(z)\quad\text{or}\quad\begin{cases}\dot{q}&= \nabla_{p}H(q,p)\\ \dot{p}&=-\nabla_{q}H(q,p)\end{cases}. \tag{1}\]
For \(\tau\in\mathbb{R}\), the flow map \(\Psi_{\tau}\) assigns to \(z_{0}\in\mathcal{M}\) the solution of (1) at time \(\tau\) subject to the initial condition \(z(0)=z_{0}\) (if it exists). On its domain of definition \(\mathcal{M}_{\tau}\subset\mathcal{M}\), the map \(\Psi_{\tau}\colon\mathcal{M}_{\tau}\to\Psi_{\tau}(\mathcal{M}_{\tau})\) is symplectic. Symplectic maps on \(\mathcal{M}\) form a group under composition, which we denote by \(\mathrm{Symp}(\mathcal{M})\).
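In code, the Hamiltonian vector field of any scalar \(H\) is obtained directly by automatic differentiation; a minimal PyTorch sketch for a single phase-space point (function names are ours):

```python
import torch

def hamiltonian_vector_field(H, z):
    """X_H(z) = J^{-1} grad H(z) for z = (q, p) in R^{2n}."""
    z = z.clone().requires_grad_(True)
    (dH,) = torch.autograd.grad(H(z), z)
    n = z.shape[-1] // 2
    return torch.cat([dH[n:], -dH[:n]])  # (q_dot, p_dot) = (dH/dp, -dH/dq)

# Example: harmonic oscillator H = (q^2 + p^2)/2.
H = lambda z: 0.5 * (z ** 2).sum()
print(hamiltonian_vector_field(H, torch.tensor([1.0, 0.0])))  # -> (0., -1.)
```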
### Lie group actions
Let us briefly introduce Lie group actions and invariant vector fields. For details we refer to Marsden and Ratiu (1999).
Lie groups are algebraic groups which are also smooth manifolds. For a Lie group \(G\), the Lie algebra \(\mathfrak{g}\) is the tangent space of \(G\) at the neutral element \(e\), i.e.,

\[\mathfrak{g}=\left\{v=\left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}\gamma(t)\,\middle|\,\gamma\colon(-\varepsilon,\varepsilon)\to G\text{ smooth with }\gamma(0)=e,\ \varepsilon>0\right\}.\]
Denote by \(\exp\colon\mathfrak{g}\to G\) the _exponential map_: in case \(G\) is a subgroup of the group of invertible matrices \(\mathrm{GL}(\mathbb{R},n)=\{A\in\mathbb{R}^{n\times n}|\det(A)\neq 0\}\), the map \(\exp\) is given by the matrix exponential.
A _Lie group action_ on a manifold \(\mathcal{M}\) is a group homomorphism \(L\colon G\to\mathrm{Diff}(\mathcal{M})\), \(g\mapsto L_{g}\), i.e. for all \(g,h\in G\) the map \(L_{g}\colon\mathcal{M}\to\mathcal{M}\) is a diffeomorphism and \(L_{g\cdot h}=L_{g}\circ L_{h}\). If the manifold \(\mathcal{M}\) is symplectic and each \(L_{g}\) is a symplectic map, then the Lie group action is called _symplectic_.
To each Lie algebra element \(v\in\mathfrak{g}\) we associate the _fundamental vector field_\(\widehat{v}\) defined by
\[\widehat{v}_{z}=\left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}L_{\exp(tv)}(z)\in T_{z}\mathcal{M},\quad z\in\mathcal{M}.\]
These vector fields can be thought of as infinitesimal actions of the Lie group \(G\) on \(\mathcal{M}\).
Of central importance for the applications considered in this article are actions of subgroups of the group of affine linear transformations, which are introduced in the following.
**Example II.1** (Affine linear transformations).: Let \(Q=\mathbb{R}^{n}\) and \(\mathcal{M}=T^{*}Q\cong\mathbb{R}^{2n}\) with Darboux coordinates \((q,p)\). The group of affine transformations \(G=\mathrm{Aff}(Q)\) on \(Q\) can be represented as
\[G=\mathrm{Aff}(Q)=\left\{\begin{pmatrix}A&b\\ 0&1\end{pmatrix}\right|A\in\mathrm{Gl}(\mathbb{R},n),\,b\in\mathbb{R}^{n} \right\},\]
where the group operation is matrix multiplication and \(\mathrm{Gl}(\mathbb{R},n)\) denotes the group of invertible matrices. Its Lie algebra is
\[\mathfrak{g}=\mathrm{aff}(Q)=\left\{\begin{pmatrix}M&b\\ 0&0\end{pmatrix}\right|M\in\mathbb{R}^{n\times n},\,b\in\mathbb{R}^{n}\right\}.\]
As a shorthand for \(\begin{pmatrix}A&b\\ 0&1\end{pmatrix}\in G\) and for \(\begin{pmatrix}M&b\\ 0&0\end{pmatrix}\in\mathfrak{g}\) we may write \((A,b)\in G\) or \((M,b)\in\mathfrak{g}\), respectively. Later, it will be handy to consider \(\mathfrak{g}=\mathrm{aff}(Q)\) as an inner product space. For this we can consider the inner product
\[\begin{split}\langle v^{(1)},v^{(2)}\rangle_{\mathrm{aff}(Q)}& =\langle(M^{(1)},b^{(1)}),(M^{(2)},b^{(2)})\rangle_{\mathrm{aff}(Q)}\\ &=\langle M^{(1)},M^{(2)}\rangle+\langle b^{(1)},b^{(2)}\rangle\end{split} \tag{2}\]
for \(v^{(j)}=(M^{(j)},b^{(j)})\in\mathfrak{g}\). Here \(\langle M^{(1)},M^{(2)}\rangle\) denotes the Frobenius inner product \(\langle M^{(1)},M^{(2)}\rangle=\sum_{i,j=1}^{n}M^{(1)}_{ij}M^{(2)}_{ij}\) on \(\mathbb{R}^{n\times n}\) and \(\langle b^{(1)},b^{(2)}\rangle\) the Euclidean inner product on \(\mathbb{R}^{n}\). This definition of \(\langle v^{(1)},v^{(2)}\rangle_{\text{aff}(Q)}\) corresponds to the Frobenius inner product of the matrices \(\begin{pmatrix}M^{(1)}&b^{(1)}\\ 0&0\end{pmatrix},\begin{pmatrix}M^{(2)}&b^{(2)}\\ 0&0\end{pmatrix}\in\mathbb{R}^{(n+1)\times(n+1)}\). The induced norm on \(\mathfrak{g}\) is given as \(\|v\|_{\text{aff}(Q)}=\sqrt{\langle v,v\rangle_{\text{aff}(Q)}}\) for \(v\in\mathfrak{g}\).
The exponential map \(\exp\colon\mathfrak{g}\to G\) is given as the matrix exponential. The group \(G\) acts on \(Q\) by affine linear transformations \((A,b)\mapsto\ell_{(A,b)}\) with \(\ell_{(A,b)}(q)=Aq+b\). Furthermore, the group \(G\) acts on \(\mathcal{M}=T^{*}Q\) symplectically with \((A,b)\mapsto L_{(A,b)}\), where \(L_{(A,b)}\colon\mathcal{M}\to\mathcal{M}\) is the cotangent lift of \(\ell_{(A,b)}\colon Q\to Q\), i.e.
\[L_{(A,b)}(q,p)=\left(A^{-1}(q-b),\ A^{\top}p\right).\]
For \(v=(M,b)\in\mathfrak{g}\) the invariant vector field \(\widehat{v}=\widehat{(M,b)}\) at \((q,p)\in\mathcal{M}\) is given as
\[\widehat{v}_{(q,p)}=\widehat{(M,b)}_{(q,p)}=\left(-Mq-b,\ M^{\top}p\right).\]
Vector fields can be interpreted as derivations: if \(\widehat{v}=\widehat{(M,b)}_{(q,p)}\) is applied to \(H\colon\mathcal{M}\to\mathbb{R}\) we obtain the directional derivative
\[\widehat{v}_{(q,p)}(H) =\widehat{(M,b)}_{(q,p)}(H)\] \[=(-Mq-b)^{\top}\nabla_{q}H(q,p)+(M^{\top}p)^{\top}\nabla_{p}H(q,p). \tag{3}\]
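Equation (3) is mechanical to evaluate with automatic differentiation. A minimal PyTorch sketch for a single point \((q,p)\) (function names are ours):

```python
import torch

def affine_fundamental_field(M, b, q, p):
    """v_hat at (q, p) for v = (M, b) in aff(Q)."""
    return -M @ q - b, M.T @ p

def v_hat_of_H(H, M, b, q, p):
    """Directional derivative v_hat(H)(q, p) as in Eq. (3)."""
    q = q.clone().requires_grad_(True)
    p = p.clone().requires_grad_(True)
    dHdq, dHdp = torch.autograd.grad(H(q, p), (q, p))
    vq, vp = affine_fundamental_field(M, b, q, p)
    return (vq * dHdq).sum() + (vp * dHdp).sum()
```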
### Symmetry actions and conserved quantities
Consider a Hamiltonian \(H\colon\mathcal{M}\to\mathbb{R}\) on the symplectic manifold \(\mathcal{M}=T^{*}Q\), where \(Q\) is an open, non-empty subset of \(\mathbb{R}^{n}\). A symplectic Lie group action \(L\colon G\to\text{Symp}(\mathcal{M})\) is a _symmetry_ if \(H\circ L_{g}=H\) for all \(g\in G\). This has important consequences for the dynamical system defined by the Hamiltonian vector field \(X_{H}\): if \(\gamma\colon[a,b]\to\mathcal{M}\) is a motion, i.e. \(\gamma\) fulfills Hamilton's equations (1), then \(L_{g}\circ\gamma\) is a motion as well. Moreover, if \(v\in\mathfrak{g}\) such that \(\widehat{v}\) is the Hamiltonian vector field to a smooth map \(I\colon\mathcal{M}\to\mathbb{R}\) then \(I\) is a conserved quantity of the system, i.e. \(I\circ\gamma=I\) for all motions \(\gamma\) (Noether's theorem).
**Example II.2**.: If a Hamiltonian \(H\) is invariant under a translation in the direction of \(b\in\mathbb{R}^{n}\), i.e. the directional derivative \(\nabla_{(b,0)}H(q,p)=\sum_{j=1}^{n}b^{j}\frac{\partial H}{\partial q^{j}}(q,p)\) vanishes for all \((q,p)\in\mathcal{M}\), then the quantity \(I(q,p)=b^{\top}p\) is conserved under motions. More generally, if \((M,b)\in\mathfrak{g}\) is an element of the Lie algebra of affine linear transformations and \(\widehat{(M,b)}_{(q,p)}(H)=0\) for all \((q,p)\in\mathcal{M}\) (see (3)) then
\[I(q,p)=p^{\top}(-Mq-b) \tag{4}\]
is a conserved quantity.
**Remark II.1**.: If for a Lie group action on a Hamiltonian system with Hamiltonian \(H\) the fundamental vector fields \(\widehat{v}^{1},\widehat{v}^{2}\) to Lie algebra elements \(v^{1},v^{2}\in\mathfrak{g}\) generate infinitesimal symmetries, i.e. \(\widehat{v}^{1}(H)=0\) and \(\widehat{v}^{2}(H)=0\), then the commutator of the vector fields \([\widehat{v}^{1},\widehat{v}^{2}]\) is the fundamental vector field to a Lie algebra element \([v^{1},v^{2}]\in\mathfrak{g}\) and \(\widehat{[v^{1},v^{2}]}(H)=0\).
The following Examples II.3 and II.4 introduce the two main examples to which we will apply our framework.
**Example II.3** (Pendulum on a cart).: The proposed framework will be applied to the example of a planar pendulum mounted on a cart (cf. e.g. Bloch, 2003). The generalized coordinates are \(q=(s,\varphi)\), where \(s\) is the position of the cart and \(\varphi\) is the angle to the upper vertical of the pendulum. Since this system is well studied, we can use the true Hamiltonian for, firstly, generating data points and, secondly, evaluating the performance of our data-based approach. The Hamiltonian is given by
\[H(\varphi,p_{s},p_{\varphi})=\frac{ap_{s}^{2}+2bp_{s}p_{\varphi}\cos\varphi+cp_{\varphi}^{2}}{2ac-b^{2}\cos^{2}\varphi}-D\cos\varphi, \tag{5}\]
with the constants \(a=ml^{2}\), \(b=ml\), \(c=m_{0}+m\) and \(D=-mgl\). Here \(l\) is the length and \(m\) the mass of the pendulum, \(m_{0}\) corresponds to the mass of the cart and \(g=9.81\) is the gravitational acceleration. In our numerical experiments \(m=m_{0}=l=1\).
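Transcribed directly from Eq. (5) with the stated parameter values, the true Hamiltonian reads as follows in NumPy (a sketch for data generation and evaluation; the function name is ours):

```python
import numpy as np

m, m0, l, g = 1.0, 1.0, 1.0, 9.81
a, b, c, D = m * l**2, m * l, m0 + m, -m * g * l

def H_cart_pendulum(phi, p_s, p_phi):
    # Eq. (5); note that H does not depend on the cart position s
    num = a * p_s**2 + 2 * b * p_s * p_phi * np.cos(phi) + c * p_phi**2
    return num / (2 * a * c - b**2 * np.cos(phi)**2) - D * np.cos(phi)
```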
The Hamiltonian \(H\) does not explicitly depend on the cart's position \(s\). Therefore, \(H\) is invariant under translations in \(s\), i.e. under the symplectic transformations \((s,\varphi,p_{s},p_{\varphi})\mapsto(s+b_{1},\varphi,p_{s},p_{\varphi})\) for all \(b_{1}\in\mathbb{R}\). This can be interpreted as actions by the following subgroup of \(\text{Aff}(Q)\) (see Example II.1):
\[G=\left\{(I_{2},b)=\begin{pmatrix}I_{2}&b\\ 0&1\end{pmatrix}\middle|b=\begin{pmatrix}b_{1}\\ 0\end{pmatrix},b_{1}\in\mathbb{R}\right\}\subset\text{Aff}(Q).\]
Here \(I_{2}\) is the 2-dimensional identity matrix. Its Lie algebra \(\mathfrak{g}\) is given as the following sub-Lie algebra of \(\text{aff}(Q)\)
\[\mathfrak{g}=\left\{\left.(0_{2},b)=\begin{pmatrix}0_{2}&b\\ 0&0\end{pmatrix}\middle|b=\begin{pmatrix}b_{1}\\ 0\end{pmatrix},b_{1}\in\mathbb{R}\right\}\subset\text{aff}(Q). \tag{6}\]
Here \(0_{2}\) is the 2-dimensional zero matrix. We have \(\widehat{v}(H)=\widehat{(0_{2},(b_{1},0)^{\top})}(H)=-b_{1}\frac{\partial H}{\partial s}=0\) for all \(v\in\mathfrak{g}\) (see (3)). It follows that \(I(s,\varphi,p_{s},p_{\varphi})=-p_{s}\) is a conserved quantity.
**Example II.4** (Two-body Example).: Another example is a point mass orbiting a fixed point, e.g. a satellite rotating about a planet. We consider this example in Cartesian coordinates, with the origin of the coordinate frame in the center of the bigger point mass (e.g. the planet). The true Hamiltonian is given by
\[H(x,y,p_{x},p_{y})=\frac{1}{2}p^{\top}M^{-1}p-\frac{k}{\|q\|}, \tag{7}\]
with \(q^{\top}=\begin{pmatrix}x&y\end{pmatrix}\) the position of the small mass (satellite) and \(p^{\top}=\begin{pmatrix}p_{x}&p_{y}\end{pmatrix}\) the corresponding conjugate momenta, \(M=m\cdot I_{2}\) the mass matrix and \(k\) the gravitational constant. In our numerical examples \(m=1\) and \(k\approx 1016.895\).
Symmetry transformations for the two-body example are given by rotations around the origin, i.e. \((q,p)\mapsto(A^{\top}q,A^{\top}p)\) for a matrix \(A\in SO(2)=\{A\in\mathrm{Gl}(2,\mathbb{R})\,|\,A^{\top}A=I_{2},\det(A)=1\}\). These transformations can be interpreted as actions by the following subgroup of \(\text{Aff}(Q)\) (see Example II.1):
\[G= \left\{(A,0)=\begin{pmatrix}A&0\\ 0&1\end{pmatrix}\right|A\in\mathbb{R}^{2\times 2},A^{\top}A=I_{2},\det(A)=1\right\}\] \[\subset\text{Aff}(Q).\]
Its Lie algebra \(\mathfrak{g}\) is given as the following sub-Lie algebra of \(\text{aff}(Q)\)
\[\mathfrak{g}=\left\{(M,0)=\begin{pmatrix}M&0\\ 0&0\end{pmatrix}\right|M\in\mathbb{R}^{2\times 2},M^{\top}=-M\right\}\subset \text{aff}(Q).\]
For \((M,0)\in\mathfrak{g}\) the fundamental vector field is given as \(\widehat{(M,0)}_{(q,p)}=(-Mq,M^{\top}p)\) and we have \(\widehat{(M,0)}_{(q,p)}(H)=(-Mq)^{\top}\nabla_{q}H(q,p)+(M^{\top}p)^{\top}\nabla_{p}H(q,p)=0\). This implies that \(I(q,p)=-p^{\top}Mq\) is a conserved quantity of the dynamics for all \(M\in\text{so}(2)\). Here, \(\text{so}(2)\subset\mathbb{R}^{2\times 2}\) is the Lie algebra of \(SO(2)\); it consists of all skew-symmetric \(2\times 2\) matrices.
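Both the Hamiltonian (7) and the conserved quantity \(I(q,p)=-p^{\top}Mq\) can be written down directly; a minimal NumPy sketch with the paper's constants (function names are ours):

```python
import numpy as np

m, k = 1.0, 1016.895

def H_two_body(q, p):
    # Eq. (7) with mass matrix m * I_2
    return 0.5 * (p @ p) / m - k / np.linalg.norm(q)

M_so2 = np.array([[0.0, -1.0], [1.0, 0.0]])  # basis element of so(2)

def conserved_quantity(q, p):
    # I(q, p) = -p^T M q, the angular momentum up to sign and scaling
    return -p @ (M_so2 @ q)
```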
### Deep Learning for structure-preserving system identification
Deep learning methods are widely used for data-based identification tasks. We introduce the basic concepts needed for our framework and refer to Goodfellow, Bengio, and Courville (2016) as a detailed textbook on deep learning for further information.
A neural network is a network of artificial neurons, which are connected by affine linear transformations connected with (nonlinear) activation functions. The neurons can be represented as nodes and are grouped into layers. There always is an input layer, whose shape is defined by the data set, and an output layer, whose shape is defined by the task to be learned. Between those two layers, the user can introduce any number of hidden layers.
For feedforward networks, the connections are only in the direction of the output layer, i.e. no backward connection and no connection within a layer are allowed. Due to the nesting of the nodes, arbitrary functions can be approximated (see Hornik, Stinchcombe, and White, 1989). The network parameters \(\theta\), given by the connection of nodes, are fitted during the learning process to minimize a user defined _loss function_, usually using a variant of gradient descent method. The loss function defines the learning task. In the case of data fitting, a variant of the mean-squared error (MSE) between the data set and the model output is often used.
Physics-based learning (as proposed by Raissi, Perdikaris, and Karniadakis (2019)) was introduced to enhance neural networks with physical properties. Here, the loss function represents the known physics of the system, and the model output are the parameters to be learned. For this approach, a physical representation needs to be known.
If only a more general system structure is known beforehand, this structure can be preserved by special formulations of the loss function. As a part of physics-informed learning, Lagrangian neural networks (LNN) by Cranmer _et al._ (2020) and Hamiltonian neural networks (HNN) by Greydanus, Dzamba, and Yosinski (2019) aim to preserve the underlying system structure. This is achieved by learning the Lagrangian or Hamiltonian function of the system from data without assumptions on the specific form of the function. This needs to be contrasted to learning the vector field or flow map of the system directly, which does not enforce physical structure on the learned system.
Our proposed approach is based on HNN, which proposes to learn a Hamiltonian \(H\colon\mathcal{M}\to\mathbb{R}\) modeled as a neural network based on observations \((\hat{z}^{(k)})_{k=1}^{N}\) of a Hamiltonian vector field \(X_{H}\in\mathfrak{X}(\mathcal{M})\) at positions \((z^{(k)})_{k=1}^{N}\subset\mathcal{M}\). The loss function involves the error of the Hamiltonian equation (see (1)) for all observations,
\[\ell_{\text{dynamics}}=\sum_{k=1}^{N}\|\hat{z}^{(k)}-X_{H}(z^{(k)})\|_{T \mathcal{M}}^{2}.\]
The derivatives needed to obtain \(X_{H}(z^{(k)})\) are computed using algorithmic differentiation (see Griewank and Walther, 2008). Further details on the methods used for (stochastic) gradient descent, the choices of activation functions, and the hyperparameters are given in the course of studying the numerical examples in Section IV.2.
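A compact PyTorch sketch of this setup; the network width, depth, and activation below are illustrative choices, not the hyperparameters reported in Section IV.2:

```python
import torch
import torch.nn as nn

class HNN(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def vector_field(self, z):
        # X_H = J^{-1} grad H, with H given by the network
        z = z.clone().requires_grad_(True)
        dH = torch.autograd.grad(self.net(z).sum(), z, create_graph=True)[0]
        n = z.shape[-1] // 2
        return torch.cat([dH[..., n:], -dH[..., :n]], dim=-1)

def dynamics_loss(model, z, z_dot):
    # l_dynamics: squared residual of Hamilton's equations on the data
    return ((z_dot - model.vector_field(z)) ** 2).sum(dim=-1).mean()
```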
Based on the HNN approach, Dierkes and Flasskamp (2021) propose to include the known symmetry of the system when learning the Hamiltonian. Other works, such as Lishkova _et al._ (2022), learn an underlying symmetry group alongside a discrete Lagrangian from a single trajectory.
## III Simultaneous learning of symmetry actions and Hamiltonians
Assume we are faced with an identification problem for a dynamical system known to be Hamiltonian and to possess symmetry, but without knowledge of the symmetry's Lie group action or the corresponding conserved quantity. Then it is desirable to reveal both the Hamiltonian dynamics and the symmetry by data-based learning. For a symplectic action of a group \(G\) on the phase space (such as actions by affine linear transformations, for instance) we propose to learn a basis \(v^{(1)},\dots,v^{(K)}\) of a subspace \(V\) of the Lie algebra \(\mathfrak{g}\) such that actions by elements \(g\in\exp(V)\) are symmetries of the dynamical system. The basis elements \(v^{(1)},\dots,v^{(K)}\) are learned simultaneously with the Hamiltonian \(H\) from observations of the vector field. Here \(K\) is an estimate of the dimension of the subgroup of \(G\) which acts by symmetries. One may start with \(K=1\) (or \(K=0\)) and successively increase \(K\) until either \(K=\dim(G)\) or the minimisation of the loss function introduced in the following fails to fit another symmetry.
An identification of \(V\) also reveals conserved quantities of the dynamical system as outlined in Section II.3.
### Loss function for symmetry terms
Let \(\mathcal{M}^{\circ}\subset\mathcal{M}\) be a section of the phase space \(\mathcal{M}\) containing the parts of interest. \(\mathcal{M}^{\circ}\) is required to be topologically open and its closure to be a compact subset of \(\mathcal{M}\). Let \(\mathrm{dvol}\) denote the volume form on \(\mathcal{M}\), i.e. \(\mathrm{dvol}=\mathrm{d}q^{1}\wedge\mathrm{d}p_{1}\wedge\ldots\wedge\mathrm{ d}q^{n}\wedge\mathrm{d}p_{n}\) if \(\mathcal{M}\) is represented as an open subset of \(\mathbb{R}^{2n}\cong T^{*}\mathbb{R}^{n}\).
For \(k=1,\ldots,K\) the symmetry loss term for each basis element \(v^{(k)}\) is defined by
\[\ell_{\mathrm{sym}}^{(k)}=\frac{1}{\mathrm{dvol}(\mathcal{M}^{\circ})\|v^{(k) }\|}\int_{\mathcal{M}^{\circ}}|\widehat{v}^{(k)}(H)|^{2}\mathrm{dvol}. \tag{8}\]
Here, \(\widehat{v}^{(k)}\) denotes the invariant vector field to \(v^{(k)}\). The term \(\ell_{\mathrm{sym}}^{(k)}\) measures how invariant \(H\) is under actions by group elements of \(\{\exp(tv^{(k)})\,|\,t\in\mathbb{R}\}\). The factor \(\frac{1}{\mathrm{dvol}(\mathcal{M}^{\circ})\|v^{(k)}\|}\) is a normalisation by the volume of \(\mathcal{M}^{\circ}\) and the norm of \(v^{(k)}\). Equip \(\mathfrak{g}\) with an inner product \(\langle\cdot,\cdot\rangle\) and norm \(\|\cdot\|\). Given weights \(\alpha^{(k)},\beta^{(k)}>0\), the combined symmetry loss function is
\[\ell_{\mathrm{sym}}=\sum_{k=1}^{K}\left(\ell_{\mathrm{sym}}^{(k)}+\alpha^{(k)}\bigl(\|v^{(k)}\|-1\bigr)^{2}+\beta^{(k)}\sum_{s=1}^{k-1}\langle v^{(k)},v^{(s)}\rangle\right). \tag{9}\]
The last two terms of \(\ell_{\mathrm{sym}}\) measure the orthonormality of the spanning set \(v^{(1)},\ldots,v^{(K)}\) while \(\sum_{k=1}^{K}\ell_{\mathrm{sym}}^{(k)}\) measures how well infinitesimal actions by elements of \(V=\mathrm{span}(v^{(1)},\ldots,v^{(K)})\) preserve \(H\).
**Example III.1**.: If we look for a 1-dimensional subgroup of the group of affine linear transformations as in Example II.1, then for the Lie algebra element \(v^{(1)}=(M^{(1)},b^{(1)})\in\mathfrak{g}\) we have
\[\ell_{\mathrm{sym}}=\ell_{\mathrm{sym}}^{(1)}+\alpha^{(1)}\bigl|\|v^{(1)}\|_{\mathrm{aff}(Q)}-1\bigr|^{2}=\ell_{\mathrm{sym}}^{(1)}+\alpha^{(1)}\bigl|\|M^{(1)}\|_{\mathrm{F}}+\|b^{(1)}\|_{2}-1\bigr|^{2}\]
with Frobenius norm \(\|M^{(1)}\|_{F}\) (see Example II.1) and Euclidean norm \(\|b^{(1)}\|_{2}\) and with
\[\ell_{\mathrm{sym}}^{(1)} =\Theta\int_{\mathcal{M}^{\circ}}\Big{|}(-M^{(1)}q-b^{(1)})^{\top }\nabla_{q}H(q,p) \tag{10}\] \[\qquad\qquad+(M^{(1)^{\top}}p)^{\top}\nabla_{p}H(q,p)\Big{|}^{2} \mathrm{d}q\mathrm{d}p,\]
with normalisation factor \(\Theta=(\mathrm{dvol}(\mathcal{M}^{\circ})\|v^{(1)}\|_{\mathrm{diff}(Q)})^{-1}\) and with \(\mathrm{d}q\mathrm{d}p=\mathrm{d}q^{1}\ldots\mathrm{d}q^{n}\mathrm{d}p_{1}\ldots \mathrm{d}p_{n}\). The elements of \(M^{(1)}\in\mathbb{R}^{n\times n}\) and \(b^{(1)}\in\mathbb{R}^{n}\) are learnable parameters.
If we look for a 2-dimensional subgroup, then
\[\ell_{\mathrm{sym}}=\ell_{\mathrm{sym}}^{(1)}+\ell_{\mathrm{sym}}^{(2)}+\alpha^{(1)}\bigl|\|M^{(1)}\|_{\mathrm{F}}+\|b^{(1)}\|_{2}-1\bigr|^{2}+\alpha^{(2)}\bigl|\|M^{(2)}\|_{\mathrm{F}}+\|b^{(2)}\|_{2}-1\bigr|^{2}+\beta^{(2)}\bigl(\langle M^{(2)},M^{(1)}\rangle+\langle b^{(2)},b^{(1)}\rangle\bigr). \tag{11}\]
Here \(\langle M^{(2)},M^{(1)}\rangle\) is the Frobenius inner product of the matrices \(M^{(2)},M^{(1)}\) (see Example II.1) and \(\langle b^{(2)},b^{(1)}\rangle\) the Euclidean inner product. \(\alpha^{(1)},\alpha^{(2)},\beta^{(2)}>0\) are weights. The symmetry loss \(\ell_{\mathrm{sym}}^{(2)}\) is defined in analogy to \(\ell_{\mathrm{sym}}^{(1)}\). The learnable symmetry parameters are given as \(M^{(1)},M^{(2)}\in\mathbb{R}^{n\times n}\) and \(b^{(1)},b^{(2)}\in\mathbb{R}^{n}\).
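Assuming the `HNN` sketch above, a Monte-Carlo version of \(\ell_{\mathrm{sym}}^{(1)}\) from (10) together with the norm penalty might look as follows (normalisation constants are folded into the weights; names are ours):

```python
import torch

def sym_loss_affine(model, M, b, q, p):
    """Monte-Carlo estimate of Eq. (10) over sampled points (q, p)."""
    q = q.clone().requires_grad_(True)
    p = p.clone().requires_grad_(True)
    H = model.net(torch.cat([q, p], dim=-1)).sum()
    dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
    vq = -q @ M.T - b   # rows: -M q - b
    vp = p @ M          # rows: M^T p
    resid = (vq * dHdq).sum(dim=-1) + (vp * dHdp).sum(dim=-1)
    return (resid ** 2).mean()

def norm_penalty(M, b, alpha=1.0):
    """alpha * (||v||_aff(Q) - 1)^2 for v = (M, b)."""
    norm = torch.sqrt((M ** 2).sum() + (b ** 2).sum())
    return alpha * (norm - 1.0) ** 2
```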
### Training
The Hamiltonian \(H\) and the spanning set \(v^{(1)},\ldots,v^{(K)}\) can now be learned using the combined loss function
\[\ell=\ell_{\mathrm{dynamics}}+\gamma\;\ell_{\mathrm{sym}}, \tag{12}\]
where \(H\) is modeled as a neural network, \(v^{(1)},\ldots,v^{(K)}\) are additional free parameters and \(\gamma>0\) is a scaling weight. We denote a model using (12) as a loss function by SymHNN.
Rather than training the network and identifying all symmetry parameters at once, the training can be performed with a low value for \(K\) first. (For \(K=0\) our method corresponds to HNN.) Then \(K\) is increased, and the training is repeated using the pre-trained network and \(v^{(1)},\ldots,v^{(K-1)}\) as priors.
As learning a good Hamiltonian is the primary goal, it is helpful to pre-train the network without the symmetry term first and to learn the symmetry only once a good approximation of the data is achieved. Adding the symmetry loss to the overall loss function may otherwise lead to high values of the training loss, as the Hamiltonian learned so far does not satisfy the symmetry requirement of the randomly initialized basis element.
The weight \(\gamma>0\) in (12) may vary with the number of epochs trained. Pre-training the neural network without the symmetry loss term to achieve a good approximation of the Hamiltonian seems a reasonable approach. Furthermore, slowly increasing \(\gamma\) over some epochs avoids large jumps in the learning curve, as the optimisation process can first adapt the randomly initialized \(v^{(k)}\) to the symmetry of the currently learned Hamiltonian and then slightly modify the Hamiltonian to minimise the symmetry loss. This avoids the Hamiltonian being driven towards a symmetric function whose symmetry corresponds to the random initialization of \(v^{(k)}\).
The integral in (8) can be approximated by averaging the integrand over a few points in the phase space \(\mathcal{M}^{\circ}\), randomly drawn in each epoch of the minimization procedure with respect to a uniform probability distribution on \(\mathcal{M}^{\circ}\).
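Both ingredients, the ramped weight \(\gamma\) and the per-epoch uniform resampling of \(\mathcal{M}^{\circ}\), are straightforward to realise; the schedule below is illustrative, as the paper does not fix exact values:

```python
import torch

def gamma_schedule(epoch, warmup=200, ramp=300, gamma_max=1.0):
    # Pre-train without the symmetry loss, then ramp gamma up linearly.
    if epoch < warmup:
        return 0.0
    return gamma_max * min(1.0, (epoch - warmup) / ramp)

def sample_phase_space(n_points, low, high):
    # Uniform samples from a box-shaped M° for the Monte-Carlo estimate of (8).
    low, high = torch.as_tensor(low), torch.as_tensor(high)
    return low + (high - low) * torch.rand(n_points, low.numel())
```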
### Extensions and further remarks
**Remark III.1**.: As justified by Remark II.1, when Lie algebra elements \(v^{(1)},v^{(2)}\) have been identified in the learning process of Section III.2, the Lie bracket \([v^{(1)},v^{(2)}]\) can be used as a prior for \(v^{(3)}\) (unless it vanishes or is close to zero). Moreover, at any stage of the learning process with \(K\geq 2\), (higher-order iterations of) Lie brackets of already identified elements \(v^{(1)},\ldots,v^{(K-1)}\) can be used to obtain initial guesses for \(v^{(K)}\).
**Remark III.2**.: Under mild assumptions, for a Lie group action of a Lie group \(G\) on a symplectic manifold \(\mathcal{M}\) there exists a _momentum map_\(\mu\). To be more precise: denote the pairing of \(\mathfrak{g}\) and its dual space \(\mathfrak{g}^{*}\) by \(\langle\cdot,\cdot\rangle_{\text{pair}}\). The momentum map \(\mu:\mathcal{M}\to\mathfrak{g}^{*}\) is a smooth map defined by \(\mu(0)=0\) and by the relation \(X_{\langle\mu,v\rangle}=\widehat{v}\) for all \(v\in\mathfrak{g}\), where \(\widehat{v}\) denotes the fundamental vector field to \(v\) and \(X_{\langle\mu,v\rangle}\) the Hamiltonian vector field to the function \(\langle\mu,v\rangle:\mathcal{M}\to\mathbb{R}\).
Provided that the learned Hamiltonian \(H\) accurately describes the true Hamiltonian of the system and \(\ell_{\text{sym}}=0\) (see (9)) the learned Lie algebra elements \(v^{(1)},\ldots,v^{(K)}\) found by the described procedure yield the following functionally independent conserved quantities \(\mu^{(j)}=\langle\mu,v^{(j)}\rangle_{\text{pair}}\) of the system's Hamiltonian motions. In the example of affine linear transformations, the expressions \(\mu^{(j)}\) recover the conserved quantities presented in Example II.2.
**Remark III.3**.: Our framework can be generalised directly to general symplectic group actions, i.e. for actions which are not cotangent lifted actions. Indeed, the symplectic manifold \((\mathcal{M},\omega)\) does not need to have the structure of a cotangent bundle.
## IV Numerical results
For the two examples, the pendulum on a cart (Example II.3) and the 2-body example (Example II.4), numerical results are presented in this section. We use the library _PyTorch_ by Paszke _et al._ (2019) to set up and train our models; the code for the implementation and the experiments can be found in the git repository [email protected]:eva-dierkes/HNN_withSymmetries.git. More details on how we construct the data, how we choose the hyperparameters of the networks, and the results are discussed in the following.
### Generating data for academic examples
In order to generate data to train and test our network, we sample initial points in the area of interest, i.e. \(\mathcal{M}^{\circ}\), and simulate each of them with the true dynamics for a short period of time, using a fourth-order Runge-Kutta scheme (cf. Runge, 1895) with a low error tolerance of \(1\mathrm{e}{-10}\). Each point of these trajectories is extended by the vector field evaluation to obtain \((\dot{q},\dot{p})\), and Gaussian noise is added if stated. As our proposed model does not require trajectory input, each sample of the data set is an individual training/validation/testing example. The data set is split into \(70\,\%\) training, \(15\,\%\) validation, and \(15\,\%\) test data. The choice of \(\mathcal{M}^{\circ}\) highly depends on the application, as do the length of the trajectories, the sampling rate, and the number of trajectories.
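A minimal sketch of this data pipeline is given below; the Hamiltonian vector field `true_vector_field` and the sampling box are placeholders for the system-specific definitions, and a fixed-step integrator stands in for the adaptive scheme used in the paper.

```python
import numpy as np

def rk4_step(f, z, dt):
    """One classical fourth-order Runge-Kutta step for z' = f(z)."""
    k1 = f(z)
    k2 = f(z + 0.5 * dt * k1)
    k3 = f(z + 0.5 * dt * k2)
    k4 = f(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def make_dataset(true_vector_field, lo, hi, n_traj, t_end, rate, sigma2=0.0, rng=None):
    rng = rng or np.random.default_rng(0)
    dt, steps = 1.0 / rate, int(t_end * rate)
    samples = []
    for _ in range(n_traj):
        z = rng.uniform(lo, hi)                         # initial point in M°
        for _ in range(steps):
            samples.append((z, true_vector_field(z)))  # state and its time derivative
            z = rk4_step(true_vector_field, z, dt)
    states, derivs = map(np.array, zip(*samples))
    if sigma2 > 0:                                      # optional Gaussian noise
        states += rng.normal(0, np.sqrt(sigma2), states.shape)
        derivs += rng.normal(0, np.sqrt(sigma2), derivs.shape)
    idx = rng.permutation(len(states))                  # 70/15/15 split
    n_tr, n_va = int(0.7 * len(idx)), int(0.15 * len(idx))
    return (idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]), states, derivs

# Toy usage on a pendulum-like system with state z = (q, p).
field = lambda z: np.array([z[1], -np.sin(z[0])])
splits, Z, dZ = make_dataset(field, [-1, -1], [1, 1], n_traj=10, t_end=3.0, rate=15, sigma2=1e-2)
```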
For the numerical comparison, a dense neural network (BaseNN), without learning the Hamiltonian but directly learning the vector field, and an HNN without symmetry information are trained.
The ground truth solution \(\gamma^{*}\) is generated by integrating a randomly chosen initial point (in the area of interest) with a symplectic midpoint rule (see Hairer, Lubich, and Wanner, 2006). For this, the true system, i.e. the true Hamiltonian and symmetry, is evaluated. Trajectories that are obtained by symplectic integration are denoted by \(\gamma\). Note that learned models and symmetries are denoted by a subscript \((\cdot)_{\theta}\) or by a label corresponding to the modelling approach used (BaseNN, HNN or SymHNN), and the ground truth is labelled by a superscript \((\cdot)^{*}\). An overview of the abbreviations used and their explanations can be found in Table 1.
### Net architecture and choice of hyperparameters
The chosen network is a fully connected feedforward network, as proposed by Greydanus, Dzamba, and Yosinski (2019) for the HNN. We take three layers of 256 neurons each; as an activation function, we choose \(\mathrm{Softplus}(x)=\log(1+\exp(x))\), as it outperformed the use of a hyperbolic tangent.
The _Adam_ optimizer (see Kingma and Ba, 2015) is used to train the models with an initial learning rate of \(5\mathrm{e}-3\). Additionally, a _reduce on plateau_ learning rate scheduler is used, which decreases the learning rate by a factor of \(0.95\) if the Hamiltonian loss \(\ell_{\text{dynamics}}\) does not improve over \(50\,\mathrm{epochs}\) on the validation data set. The learning is performed for \(20\,000\,\mathrm{epochs}\), but stops early if \(\ell_{\text{dynamics}}\) has not decreased on the validation data set over the last \(10\,000\,\mathrm{epochs}\).
For the SymHNN models, the symmetry term is smoothly added to the loss function (as proposed in Section III.2). The first \(100\,\mathrm{epochs}\) are trained without any symmetry term, then for the next \(100\,\mathrm{epochs}\) the weight \(\gamma\) from (12) is linearly increased from \(0\) to \(0.5\).
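In PyTorch, the described optimiser configuration and warm-up schedule can be sketched as follows. This is a self-contained toy version with random stand-in data; the actual code monitors \(\ell_{\text{dynamics}}\) on the validation set rather than the training loss shown here.

```python
import torch

# Toy stand-ins for the (q, p) -> (q_dot, p_dot) samples (4 phase-space dimensions).
z, dz = torch.randn(512, 4), torch.randn(512, 4)

model = torch.nn.Sequential(
    torch.nn.Linear(4, 256), torch.nn.Softplus(),
    torch.nn.Linear(256, 256), torch.nn.Softplus(),
    torch.nn.Linear(256, 256), torch.nn.Softplus(),
    torch.nn.Linear(256, 1),                            # scalar Hamiltonian H(q, p)
)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.95, patience=50)

def gamma_schedule(epoch, start=100, length=100, gamma_max=0.5):
    """Symmetry weight in (12): zero while pre-training, then a linear ramp."""
    return 0.0 if epoch < start else min(gamma_max, gamma_max * (epoch - start) / length)

def dynamics_loss():
    zg = z.clone().requires_grad_(True)
    grad = torch.autograd.grad(model(zg).sum(), zg, create_graph=True)[0]
    pred = torch.cat([grad[:, 2:], -grad[:, :2]], dim=1)  # X_H = (dH/dp, -dH/dq)
    return ((pred - dz) ** 2).mean()

best, best_epoch = float("inf"), 0
for epoch in range(20_000):
    optimizer.zero_grad()
    loss = dynamics_loss()   # + gamma_schedule(epoch) * symmetry loss in the full model
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())
    if loss.item() < best:
        best, best_epoch = loss.item(), epoch
    if epoch - best_epoch >= 10_000:   # early stopping after 10 000 stagnant epochs
        break
```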
\begin{table}
\begin{tabular}{l l} \(H^{*}\) & True Hamiltonian, see (5) \\ \(H_{\theta}\) & Learned Hamiltonian \\ \(H_{\text{HNN}}\) & Learned Hamiltonian by an HNN \\ \(H_{\text{SymHNN}}\) & Learned Hamiltonian by a SymHNN \\ \hline \(\gamma^{*}\) & True trajectory; (5) integrated by symplectic midpoint rule \\ \(\gamma_{\theta}\) & Trajectory using a learned model integrated by symplectic midpoint rule \\ \(\gamma_{\text{BaseNN}}\) & Trajectory using a directly learned vector field integrated by symplectic midpoint rule \\ \(\gamma_{\text{HNN}}\) & Trajectory using an HNN and a symplectic midpoint rule \\ \(\gamma_{\text{SymHNN}}\) & Trajectory using a SymHNN integrated by symplectic midpoint rule \\ \hline \(v^{*}\) & True symmetry of the system, see (6) \\ \(v_{\text{SymHNN}}\) & Learned symmetry of the system using a SymHNN \\ \end{tabular}
\end{table}
Table 1: Explanation for the used symbols and abbreviations.
### Pendulum on a cart
To generate a data set for Example II.3, 1000 random initial values with \(|s|<5\), \(|\varphi|<\pi\), \(|p_{s}|<1\) and \(|p_{\varphi}|<\pi\) are sampled. Each initial value is integrated for 3 s with a sampling rate of 15 Hz. A fourth-order Runge-Kutta scheme is used (for data generation only) to integrate the exact vector field induced by \(H^{*}\). To each sample of the resulting data set, Gaussian noise with \(\sigma^{2}=1\mathrm{e}{-2}\) is added.
For the training of our proposed SymHNN we set \(K=1\) and the scaling factors in (9), \(\alpha^{(k)}\) and \(\beta^{(k)}\), are set to 1 for all \(k\).
The resulting trajectories \(\gamma\) for a long-time evaluation, obtained with a symplectic midpoint rule, are shown in Figure 1 for the (randomly chosen) initial configuration \([q,p]=[0,2.34,0.49,1.54]\). The true Hamiltonian (5), denoted by \(H^{*}\), evaluated along the states of Figure 1 and shifted by the initial value \(H^{*}(\gamma_{\theta}(0))\), is shown in Figure 2. Since the system dynamics depend on derivatives of \(H\) only, this constant offset is irrelevant. The evaluation of \(H^{*}\) along the SymHNN trajectory stays almost constant and close to the reference. The Hamiltonian for the BaseNN grows quickly, and for the HNN, it oscillates with a high amplitude.
It should be highlighted that SymHNN correctly learns (up to numerical precision) \(b^{(1)}=[1.0000,-1.0348\mathrm{e}{-05}]\) and
\[M^{(1)}=\begin{pmatrix}-1.21\mathrm{e}{-6}&-1.91\mathrm{e}{-6}\\ 9.30\mathrm{e}{-9}&-4.22\mathrm{e}{-6}\end{pmatrix},\]
and thus values close to \([1,0]\) and the zero matrix \(0_{2}\), respectively. Since this example is known to be invariant in the position \(s\), the momentum \(p_{s}\) is conserved, which leads to a constant \(p_{s}\) along each trajectory. In Figure 1 it can be seen that the momentum \(p_{s}\) is not conserved for the BaseNN and the HNN, whereas it is preserved effectively for our proposed SymHNN. To confirm this, the values of \(\ell_{\mathrm{sym}}^{(k)}\) are computed with \(v^{*}\) and 1000 randomly sampled grid points (in
Figure 1: Computed trajectories \(\gamma_{\theta}\) for different (learned) models for the cart-pendulum obtained with the implicit midpoint rule. Note: the exact trajectory is almost covered by \(\gamma_{\mathrm{SymHNN}}\).
Figure 4: Level sets of (learned) cart-pendulum Hamiltonian for constant values \((p_{s},p_{\varphi})=(0,0)\) on the left and \((s,\varphi)=(0,0)\) on the right. The gray box indicates the area where training examples are sampled.
Figure 3: Conserved quantity \(I\) (see (4)) evaluated for the pendulum on a cart states shown in Figure 1 and the learned Hamiltonian.
Figure 2: True cart-pendulum Hamiltonian (5) evaluated along the trajectories shown in Figure 1 shifted by \(H^{*}(\gamma_{\theta}(0))\).
an area which is broader than the training data area). An improvement by a factor of more than 100 is observed (HNN: 3.1478 and SymHNN: 0.0136). The value for the SymHNN model with its learned symmetry \(v_{\mathrm{SymHNN}}\) is 0.0121, which indicates that the model preserves the learned symmetry slightly better than the true one.
However, since the integration in (8) is not performed exactly and the symmetry losses \(\ell_{\text{sym}}^{(k)}\) are not precisely zero, the conjugate momentum \(p_{s}\) is not exactly but only approximately conserved.
Figure 3 shows the conserved quantity (see Example II.2) evaluated along the trajectories shown in Figure 1 with the true symmetry \(v^{*}\) or the learned symmetry \(v_{\text{SymHNN}}\). \(I\) increases quickly for the BaseNN model; for the HNN, it oscillates and slightly drifts away, whereas for the SymHNN it only slightly drifts away without high oscillations. The level sets of the Hamiltonian are shown in Figure 4 for constant values \((p_{s},p_{\varphi})=(0,0)\) on the left and for \((s,\varphi)=(0,0)\) on the right. Since the BaseNN does not learn a Hamiltonian, it is not shown in this figure. The gray box indicates the area in which data samples were provided. The level sets inside the box match the true ones rather well for both the HNN and the SymHNN model. Outside the gray box, both models start deviating from the true solution. Together with the trajectories from Figure 1, we conclude that the conservation of \(p_{s}\) generalizes outside the range of provided data samples.
In Figure 5, the value of \(\widehat{v}\) applied to the learned Hamiltonian is plotted for 1000 points. From the colour scale it can be observed that, overall, the symmetry error is smaller for the SymHNN models than for the HNN, and the difference is even larger for the SymHNN with its own learned \(\widehat{v}_{\text{SymHNN}}\). For the SymHNN models, a slight horizontal line pattern can be seen as the points exceed the training area; this again hints at the \(s\)-invariance of the system.
### Two-body example in cartesian coordinates
In order to generate data for the two-body example (Example II.4), points with a radius between 5 and 10 are sampled, and the momentum \(p\) is constructed to be orthogonal to \(q\) with \(\|p\|=(1\pm 0.3)\left(\pm\sqrt{\frac{k}{\|q\|}}\right)\). Here, \(\|p\|=\pm\sqrt{\frac{k}{\|q\|}}\) leads to perfect circular trajectories, i.e. it solves the _vis viva_ equation (cf. Vallado, 2001).
5000 trajectories of 10 s length each are generated using
Figure 5: Median of \(\widehat{v}\) evaluated for 1000 randomly sampled cart-pendulum points (as for the training data but within a broader area), projected onto the \(q\)-plane. Note: the scaling of the colour bars differ.
Figure 6: Trajectories of different (learned) models for the cartesian two-body example.
Figure 7: True two-body Hamiltonian (7) evaluated for the states shown in Figure 6 shifted by \(H^{*}(\gamma_{\theta}(0))\).
a fourth-order Runge-Kutta method with a sampling rate of \(1\,\mathrm{Hz}\) and Gaussian noise with \(\sigma^{2}=1\mathrm{e}{-}2\) is added. The constructed data set is split into training (\(70\,\%\)), validation (\(15\,\%\)) and test data (\(15\,\%\)).
Figure 6 shows the long-time trajectories \(\gamma_{\theta}\) that are obtained using the learned models and integration by an implicit midpoint rule for the initial point \([7.0,0.0,1.72,12.05]\). None of the models precisely maps the true solution.
The true Hamiltonian \(H^{*}\), see (7), is evaluated along the trajectories from Figure 6, which is shown in Figure 7 (shifted by the initial value \(H^{*}(\gamma_{\theta}(0))\)). The Hamiltonian for the exact model oscillates with a small amplitude due to the symplectic midpoint rule; this cannot be observed in Figure 7 as the amplitude is about \(1\mathrm{e}{-5}\). The energy for the BaseNN drops immediately, slightly swings up for the HNN, and, apart from oscillations of varying magnitude, stays at the same level for the SymHNN.
The conserved quantities, recall (4), are shown in Figure 10 for the different combinations of trajectories and symmetries. The solid lines show the conserved quantity for the true symmetry, the dashed orange line shows the conserved quantity for the SymHNN trajectory with the learned symmetry, and the dotted line shows the true trajectory with the learned symmetry \(v_{\mathrm{SymHNN}}\). The BaseNN and the HNN are not capable of conserving \(I\), whereas for the SymHNN the conserved quantity oscillates. The orange dotted line indicates how well the learned symmetry is respected by the true trajectory; it oscillates only with a small amplitude.
The projections of the trajectories from Figure 6 onto the \((x,y)\)-plane are shown in Figure 8. The exact system is completely integrable, i.e. the number of degrees of freedom equals the number of independent, commuting conserved quantities. Compact level sets of the conserved quantities form topological tori (Liouville tori), and Hamiltonian motions are quasi-periodic motions on these tori (Libermann and Marle, 1987). The regularity in the plots corresponding to the HNN and SymHNN suggests that the motions of these models are (almost) quasi-periodic motions on some modified tori in the
Figure 8: Two-body states shown in Figure 6 projected on \((x,y)\)-plane.
Figure 10: Conserved quantity \(I\) (see (4)) evaluated for states of the two-body example shown in Figure 6.
Figure 9: \(\hat{v}\) of the two-body example evaluated for 1000 samples and projected on the q-plane. In the upper row the exact symmetry \(v^{*}\) is chosen and in the lower the learned symmetry \(v_{\theta}\) is used. The left figures evaluate the HNN model and the right the SymHNN model.
phase space. This suggests that a completely integrable structure is approximately present in the learned models. In contrast, the unstructured motions exhibited by the BaseNN do not appear to follow completely integrable dynamics.
In Figure 9, the values of \(\widehat{v}_{(q,p)}(H_{\theta})\) are shown for 1000 randomly sampled pairs \((q,p)\), projected onto the \((x,y)\)-plane. The samples are taken from a slightly broader area than the training examples. \(\widehat{v}_{(q,p)}(H)\) vanishes if \(H\) fulfills the symmetry given by \(v\). Comparing the middle figure to the left one indicates that the symmetry is better conserved by the SymHNN, as fewer samples with high \(\widehat{v}_{(q,p)}\)-values are present. Furthermore, the values in the right figure are even smaller, indicating that the SymHNN model preserves the learned symmetry best.
Figure 12 confirms this result. It shows the kernel density, median and quartiles for the same points as in Figure 9, compared across the different models. The violin on the right is the smallest, meaning that the learned symmetry is preserved best by the SymHNN model.
The level sets of the (learned) Hamiltonian are shown in Figure 11 for different constant values of the coordinates. The two figures in the middle show that the level sets for \((p_{x},p_{y})=(0,0)\) and for \((x,y)=(3,3)\) are approximated quite precisely by both the HNN and the SymHNN. The four other images at the border (further away from the training data) show that the HNN has more difficulty mapping the Hamiltonian accurately.
## V Conclusion and Future Work
We propose a neural network approach for simultaneously learning a system's Hamiltonian and its symmetries based on snapshot data. To preserve the symplectic structure encoded in the data, the NN is trained to learn the Hamiltonian, since the vector field can then be generated via automatic differentiation (cf. Greydanus, Dzamba, and Yosinski, 2019; Dierkes and Flasskamp, 2021). Extending this approach, known as the HNN method, we simultaneously identify inherent symmetries. For this, we make an ansatz for a symplectic Lie-group action on the phase space (e.g. affine linear transformations) and identify a subgroup whose actions leave the Hamiltonian invariant. The identified symmetries then correspond to conserved quantities of the Hamiltonian motion, such as conserved momentum in the studied cart-pendulum example.
The presented numerical examples demonstrate that the proposed framework is capable of learning a Hamiltonian and identifying symmetries of the system. Moreover, incorporating the learned symmetries into the learned model improves qualitative aspects of the predicted motions.
A discrete-time variant of this approach might directly learn
Figure 11: Level sets of the different learned two-body Hamiltonian for fixed values for some coordinates (as indicated by the respective titles).
Figure 12: Mean and quartiles for the values of \(\tilde{v}\) shown in Figure 9.
a modified Hamiltonian and discrete-time symmetries (cf. Offen and Ober-Blobaum, 2022; Ober-Blobaum and Offen, 2022). Symmetries in Hamiltonian systems give rise to relative equilibria, to which the learning framework can be extended in future work.
###### Acknowledgements.
E. Dierkes acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG), project number 281474342. C. Offen acknowledges funding by the Ministerium für Kultur und Wissenschaft des Landes Nordrhein-Westfalen.
|
2307.06760 | Privacy-Utility Trade-offs in Neural Networks for Medical Population
Graphs: Insights from Differential Privacy and Graph Structure | We initiate an empirical investigation into differentially private graph
neural networks on population graphs from the medical domain by examining
privacy-utility trade-offs at different privacy levels on both real-world and
synthetic datasets and performing auditing through membership inference
attacks. Our findings highlight the potential and the challenges of this
specific DP application area. Moreover, we find evidence that the underlying
graph structure constitutes a potential factor for larger performance gaps by
showing a correlation between the degree of graph homophily and the accuracy of
the trained model. | Tamara T. Mueller, Maulik Chevli, Ameya Daigavane, Daniel Rueckert, Georgios Kaissis | 2023-07-13T13:59:54Z | http://arxiv.org/abs/2307.06760v1 | Privacy-Utility Trade-offs in Neural Networks for Medical Population Graphs: Insights from Differential Privacy and Graph Structure
###### Abstract
We initiate an empirical investigation into differentially private graph neural networks on population graphs from the medical domain by examining privacy-utility trade-offs at different privacy levels on both real-world and synthetic datasets and performing auditing through membership inference attacks. Our findings highlight the potential and the challenges of this specific DP application area. Moreover, we find evidence that the underlying graph structure constitutes a potential factor for larger performance gaps by showing a correlation between the degree of graph homophily and the accuracy of the trained model.
## 1 Introduction
Graph neural networks (GNNs) are powerful methods for applying deep learning (DL) to non-Euclidean datasets, like graphs or meshes [1]. A graph \(G:=(V,E)\) is defined as a set of vertices/nodes \(V\) and a set of edges \(E\) connecting nodes. The neighbourhood of a node \(v\in V\) contains all nodes \(u\in V\) for which an edge \(e_{uv}\) from \(u\) to \(v\) exists. GNNs follow a _message passing_ scheme, where node features are shared and aggregated across neighbourhoods of \(n\) hops [2; 3; 4; 5]. An \(n\)-layer GNN propagates the information that is stored in the node features across \(n\)-hop neighbourhoods. The utilisation of GNNs has shown improved performance even on datasets not exhibiting an intrinsic graph structure, e.g. 3D point clouds [6]. The application of GNNs has therefore been expanded to datasets which require constructing a graph structure prior to learning. One such example is population graphs [7]. Here, a cohort is represented by one (typically large) graph, in which nodes represent subjects and edges connect similar subjects. The construction of the graph's structure is an important step in this pipeline, since a graph of poor structural quality can substantially hinder graph learning [8; 9]. This has been attributed to several different graph properties, one of them being _homophily_. Homophily measures the ratio between same-labelled and differently labelled neighbours in a graph. A high homophily indicates that, across the neighbourhoods in the graph, the majority of neighbours share the same label as the node of interest.
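To make the message passing concrete, a single graph convolution step in the style of GCNs [2] can be sketched in a few lines of NumPy; the feature, adjacency, and weight matrices below are toy placeholders.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph convolution: aggregate neighbour features with the
    symmetrically normalised adjacency, then apply a linear map and ReLU."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # D^{-1/2} (A + I) D^{-1/2}
    return np.maximum(A_norm @ X @ W, 0.0)      # message passing + ReLU

# Toy population graph: 4 subjects, 3 features each.
X = np.random.randn(4, 3)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
W = np.random.randn(3, 8)
H = gcn_layer(X, A, W)   # (4, 8) node embeddings after one hop
```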
Motivation and Prior WorkThe application of differential privacy (DP) to graph neural networks for node classification tasks presents two main challenges: (1) The connection and information exchange between data points requires specific definitions of DP for graph-structured data; (2) In addition, the unboundedness of neighbourhoods in a graph makes privacy amplification by sub-sampling non-trivial.
In tabular datasets, individual data points can be treated separately. This is not the case in graph learning settings, where nodes are connected to each other and share information [10]. This contradicts the principle of _per-sample gradients_ in DP methods and thus requires specialised notions of DP on graphs [10]. In this work, we focus on _node-level DP_, which protects the sensitive information stored in the node features of population graphs as well as the connections between neighbouring nodes. So far, only few works have investigated DP training of GNNs. Daigavane et al. [11] were among the first to introduce a privacy amplification by sub-sampling technique for multi-layer GNN training with DP-stochastic gradient descent (DP-SGD). In order to enable a sensitivity analysis for DP-SGD in multi-layer GNNs, the authors apply a graph neighbourhood sampling scheme. Here, the number of \(k\)-hop neighbours is bounded to a maximum node degree. This ensures that the learned feature embeddings over the course of training are influenced by at most a bounded number of nodes. Furthermore, the standard privacy amplification by sub-sampling technique for DP-SGD is extended, such that a gradient can depend on multiple subjects in the dataset: First, a local \(k\)-hop neighbourhood of each node with a bounded number of neighbours is sampled. Next, a subset
\(\mathcal{B}_{t}\) of \(n\) sub-graphs is chosen uniformly at random from the set of sub-graphs that constitute the training set. On these sub-samples ("mini-batches"), standard DP-SGD is applied by clipping the gradients, adding noise and using the noisy gradients for the update steps. The noise is hereby calibrated to the sensitivity with respect to any individual node, which has been bounded via sub-sampling the input graph. The authors of [11] show generally good performance at various privacy levels, motivating us to adopt their technique for this work. A different line of work by Sajadmanesh et al. [12] also introduces a method for DP training of multi-layer GNNs via _aggregation perturbation_, i.e. by adding noise to the aggregation function of the GNN. This method can be used to ensure either node-level or edge-level DP and also ensures privacy guarantees at inference. We intend to explore this technique as part of ongoing work.
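The sampling-clipping-noising loop described above can be sketched as follows. This is a simplified illustration of the scheme of [11] with the loss and subgraph sampler left as assumed helpers; in particular, the full sensitivity calibration, which additionally accounts for the maximum number of sampled subgraphs any single node can occur in, is omitted here.

```python
import torch

def dp_sgd_gnn_step(model, loss_fn, optimizer, subgraphs, clip_norm, noise_multiplier):
    """One DP-SGD step over a mini-batch of bounded-degree k-hop subgraphs.

    Each subgraph is rooted at one training node; bounding the node degree
    during sampling bounds how many gradients any single node can influence.
    """
    params = list(model.parameters())
    accum = [torch.zeros_like(p) for p in params]
    for sg in subgraphs:                                  # per-subgraph gradients
        loss = loss_fn(model, sg)
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (float(norm) + 1e-12))  # clip each contribution
        for acc, g in zip(accum, grads):
            acc.add_(g, alpha=scale)
    optimizer.zero_grad()
    for p, acc in zip(params, accum):                     # calibrated Gaussian noise
        noise = torch.randn_like(acc) * noise_multiplier * clip_norm
        p.grad = (acc + noise) / len(subgraphs)
    optimizer.step()
```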
DP with sufficiently strong guarantees naturally protects against membership inference attacks (MIAs), which aim to infer whether a certain individual was part of the training set or not. There are a few works investigating MIAs on GNNs [13; 14; 15; 16]. These and works on other privacy attacks on GNNs such as _link stealing_ attacks [17; 18] and _inference attacks_[19; 20] highlight an increased vulnerability of GNNs compared to non-graph machine learning applications. In this work, we extend the state-of-the art MIA technique by [21] to GNNs for the purpose of empirically validating the privacy guarantees.
ContributionsIn this work, we investigate the privacy-utility trade-offs of DP GNN training on medical population graphs. Our contributions are as follows: (1) To the best of our knowledge, our work demonstrates the first successful application of DP-SGD to multi-hop GNNs in medical population graphs; (2) we empirically investigate the success of membership inference attacks (MIAs) at different levels of privacy protection and (3) analyse the interplay between graph structure and the accuracy of DP GNNs, highlighting homophily as a key factor influencing model utility.
## 2 Experiments and Results
All experiments are performed based on the node-level DP GNN implementation of [11], using graph convolutional networks (GCNs) [2] and the transductive learning approach. We recall that transductive learning means that all node features and edges are included in the forward pass, but only the training labels are used for backpropagation. As a baseline, we also train a multi-layer perceptron (MLP) for specific experiments below. We use three medical datasets which are frequently used in the context of population graphs [7]: The TADPOLE dataset studies Alzheimer's disease and functions as a benchmark dataset for population graphs [7; 22]. We also use an in-house COVID dataset as a realistic, noisy medical dataset, with the task of predicting whether a COVID patient will require intensive care unit (ICU) treatment, and the ABIDE dataset from the autism brain imaging data exchange [23], where we perform a binary classification task. The ABIDE dataset is highly challenging and therefore lends itself to investigating the impact of the graph structure on our experiments. We therefore report results on the ABIDE dataset for two different graph structures, constructed using either \(5\) or \(30\) neighbours. Furthermore, we evaluate our experiments on a synthetically generated binary classification dataset to investigate the impact of different graph structures on the performance of DP population graphs under controlled conditions. All graph structures are generated using the \(k\)-nearest neighbour approach [24]. Here, \(k\) is a hyperparameter which specifies how many neighbours each node has; each node is connected to its \(k\) most similar nodes. Details about the datasets as well as all \(\delta\) values used for DP-SGD training are summarised in Table 1.
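Such a \(k\)-nearest-neighbour population graph can be built, for instance, with scikit-learn; the feature matrix below is an illustrative stand-in for the subjects' (e.g. imaging-derived) features.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.random.randn(871, 32)   # one row of subject features (placeholder values)
k = 5                           # number of neighbours per node, a hyperparameter
A = kneighbors_graph(X, n_neighbors=k, mode="connectivity", include_self=False)
A = A.maximum(A.T)              # symmetrise: keep an edge if either node selects it
```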
### DP Training of GNNs on Population Graphs
We summarise the results of non-DP and DP training at different privacy budgets in Table 2.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Dataset** & **\# Nodes** & **Homophily** & \(\delta\) \\ \hline TADPOLE & \(1\,277\) & \(0.7392\) & \(1.31\cdot 10^{-4}\) \\ COVID & \(65\) & \(0.7569\) & \(2.78\cdot 10^{-3}\) \\ ABIDE (\(k\)=5) & \(871\) & \(0.6009\) & \(1.92\cdot 10^{-4}\) \\ Synthetic & \(1\,000\) & varying & \(1.79\cdot 10^{-4}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the homophily of the utilised graph structures as well as the \(\delta\) values for all datasets.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Dataset** & **Non-DP** & **Sub-graphing** & **DP (\(\varepsilon=20\))** & **DP (\(\varepsilon=15\))** & **DP (\(\varepsilon=10\))** & **DP (\(\varepsilon=5\))** \\ \hline TADPOLE & \(72.73\pm 1.39\) & \(\mathbf{76.69\pm 1.73}\) & \(72.42\pm 0.94\) & \(71.02\pm 1.22\) & \(70.39\pm 0.43\) & \(69.45\pm 1.82\) \\ Covid & \(72.31\pm 1.151\) & \(\mathbf{73.85\pm 3.77}\) & \(69.23\pm 8.43\) & \(69.23\pm 1.287\) & \(66.15\pm 10.43\) & \(56.92\pm 1.287\) \\ ABIDE (\(k\)=5) & \(85.86\pm 0.81\) & \(\mathbf{66.14\pm 2.37}\) & \(57.83\pm 2.02\) & \(55.54\pm 2.62\) & \(53.71\pm 2.73\) & \(54.17\pm 2.97\) \\ ABIDE (\(k\)=30) & \(\mathbf{68.51\pm 2.75}\) & \(65.83\pm 3.57\) & \(53.49\pm 4.27\) & \(53.37\pm 1.47\) & \(51.89\pm 4.40\) & \(51.43\pm 3.47\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Test set accuracy (%) of non-DP and DP models at different \(\varepsilon\)-values across five random seeds.
As expected, a stronger privacy guarantee results in lower model performance. For the TADPOLE dataset, a DP guarantee of \(\varepsilon=20\) achieves performance comparable to non-DP training, and even at \(\varepsilon=10\), performance is only about two percent lower than non-DP results. We attribute this to the informative underlying graph structure of the TADPOLE dataset, which stabilises graph learning. For the ABIDE dataset, non-DP performance is better for the graph structure that uses \(30\) neighbours (\(k\)=\(30\)) compared to only \(5\) neighbours. However, larger neighbourhoods lead to more noise being added during DP-SGD training, which impacts the privacy-utility trade-off on this graph. For all datasets apart from the ABIDE dataset with 30 neighbours (\(k\)=\(30\)), the model trained without DP, but employing sub-graph sampling ("sub-graphing") and gradient clipping, outperforms the non-DP model trained without these techniques. We attribute this to the regularising effect of both aforementioned methods. As seen, the ABIDE dataset has much lower accuracy overall; simultaneously, its homophily is the lowest among all datasets. We therefore further investigate the impact of the homophily of the graph structure on the performance of DP population graphs in Section 2.3.
### Membership Inference Attacks
The dependencies between graph elements render GNNs more vulnerable to MIA [25]. Moreover, in the transductive setting of graph learning, test node features are included in the forward pass, which facilitates MIA [14]. To empirically audit the privacy leakage of sensitive patient data from our GNN models, we employ the MIA implementation of Carlini et al. [21]. We perform these experiments on the GNNs trained on the TADPOLE dataset, as it is known that higher model accuracy improves MIA success [21]. The adversary/auditor in this membership inference scenario has full access to the trained model \(f_{\theta}\), its architecture, and the graph, including its ground-truth labels [21]. We trained \(128\) shadow models to estimate the models' output logit distributions and create a classifier that predicts whether a specific example was used as training data for the model \(f_{\theta}\).
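The attack of [21] fits per-example Gaussians to logit-scaled confidences of shadow models trained with and without the target example and scores membership by a likelihood ratio. A condensed sketch of the scoring step (shadow-model training omitted) could look as follows:

```python
import numpy as np
from scipy.stats import norm

def logit_confidence(p_correct, eps=1e-8):
    """Map the model's confidence on the true label to logit scale."""
    p = np.clip(p_correct, eps, 1 - eps)
    return np.log(p / (1 - p))

def lira_score(conf_target, confs_in, confs_out):
    """Likelihood-ratio membership score for one example.

    conf_target: logit confidence of the attacked model on the example.
    confs_in / confs_out: logit confidences of shadow models that did /
    did not train on the example (here, from 128 shadow models).
    """
    mu_in, sd_in = confs_in.mean(), confs_in.std() + 1e-8
    mu_out, sd_out = confs_out.mean(), confs_out.std() + 1e-8
    return norm.logpdf(conf_target, mu_in, sd_in) - norm.logpdf(conf_target, mu_out, sd_out)
```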
In Figure 1(a), we report the log-scale receiver operating characteristic (ROC) curves of the attacks and the true positive rate (TPR) at three fixed, low false positive rates (FPRs) (\(0.1\%,0.5\%,1\%\)). The attacks' success rates are also summarised in Table 3.
Furthermore, we derive the maximum TPR (i.e. power) that is theoretically achievable for a given \((\varepsilon,\delta)\) setting through the duality between \((\varepsilon,\delta)\)-DP and hypothesis testing DP [26]. We will refer to this maximum achievable TPR as the adversary's _supremum power_\(\mathcal{P}\).
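Concretely, the hypothesis-testing characterisation of \((\varepsilon,\delta)\)-DP [26] bounds the power of any test at FPR \(\alpha\) by \(\mathcal{P}(\alpha)=\min\{1,\;\delta+e^{\varepsilon}\alpha,\;1-e^{-\varepsilon}(1-\delta-\alpha)\}\), which can be evaluated directly:

```python
import numpy as np

def supremum_power(fpr, eps, delta):
    """Maximum TPR achievable by any membership test against an
    (eps, delta)-DP mechanism at a given FPR, via the DP hypothesis-testing
    trade-off."""
    fpr = np.asarray(fpr, dtype=float)
    return np.minimum.reduce([
        np.ones_like(fpr),
        delta + np.exp(eps) * fpr,
        1.0 - np.exp(-eps) * (1.0 - delta - fpr),
    ])

# eps = 5, delta = 1.31e-4, FPR = 0.001 gives ~0.1485 (cf. Table 3).
print(supremum_power(0.001, 5.0, 1.31e-4))
```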
As seen, for FPR values \(<0.001\), the MIA is unsuccessful. As the FPR tolerance is increased, models trained with weaker privacy guarantees (\(\varepsilon\in\{20,15\}\)) yield a positive TPR when attacked, with TPR values approaching those of models trained without DP guarantees (_Non-DP_ and _Sub-graphing_ variants) in the case of \(\varepsilon=20\). Interestingly, the model trained at \(\varepsilon=5\) successfully resists membership inference even at an FPR value of \(0.01\). Moreover, we observe that the GNN trained with clipped gradients is less vulnerable to membership inference than the GNNs trained without gradient clipping. This is in line with the finding in [21] that clipping the gradients during training offers some (empirical) protection against MIAs.
### Impact of Graph Structure on Performance
The interaction between memorisation and generalisation in neural networks is of particular interest to the privacy community. Feldman [27] hypothesises that especially noisy and atypical data from the long tail of the data distribution requires memorisation, which increases the negative impact of DP training for those samples. To investigate the applicability of the long-tail hypothesis to graph datasets, we evaluate the impact of graph structure, measured in terms of homophily, on accuracy. Concretely, we hypothesise that graphs with low homophily are "noisier" and therefore suffer more from DP training. For example, at a homophily of \(0.5\) on a binary node classification dataset, on average, half of all neighbours have the same label and the other half the opposite label. Thus, when applying message passing on such a graph, the node features are averaged over an approximately equal number of nodes from both labels. This makes it nearly impossible to learn meaningful node feature embeddings, such that learning likely relies almost exclusively on memorisation.
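The homophily values reported in Table 1 correspond to the fraction of edges whose endpoints share a label, which is straightforward to compute:

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges whose endpoints share a label.

    edges: array of shape (num_edges, 2) with node index pairs.
    labels: array of shape (num_nodes,) with class labels.
    """
    u, v = edges[:, 0], edges[:, 1]
    return float(np.mean(labels[u] == labels[v]))

edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0]])
labels = np.array([0, 0, 1, 1])
print(edge_homophily(edges, labels))   # 0.5 for this toy ring
```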
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline
**Model** & **Variant** & \multicolumn{2}{c}{\(<\)**0.001 FPR**} & \multicolumn{2}{c}{\(<\)**0.005 FPR**} & \multicolumn{2}{c}{\(<\)**0.01 FPR**} \\ & & **TPR** & \(\mathcal{P}\) & **TPR** & \(\mathcal{P}\) & **TPR** & \(\mathcal{P}\) \\ \hline MLP & - & 0.0115 & - & 0.0138 & - & 0.0184 & - \\ \hline GNN & Non-DP & 0.0092 & - & 0.0092 & - & 0.0230 & - \\
\end{table}
Table 3: MIA results on the MLP and the non-DP and DP GNNs at different privacy budgets.
The results using a synthetic dataset with different levels of homophily are summarised in Table 4 and visualised in Figure 1(b).
The generalisation gap is especially large on low-homophily graphs, indicating over-fitting in the non-DP setting (not shown in the Table). Model accuracy in the non-DP setting profits more from the regularising effects of clipping and sub-graphing in graphs with lower homophily (\(0.6\)) compared to high homophily (\(0.9\)). We note that, in a binary classification task, homophily values are symmetric about \(0.5\).
As expected, at the lowest homophily of \(0.5\), learning is severely compromised even without DP, and the regularising effect of clipping and sub-graphing actually harms accuracy. Moreover, learning is nearly impossible with DP, corroborating that, in this setting, the accuracy benefit of non-DP learning is mostly due to memorisation. In addition, under DP, graphs with high homophily (\(0.9\)) suffer a lower performance decrease compared to graphs with low homophily, likely due to the graph structure being favourable for the learning task, i.e. not requiring strong memorisation.
## 3 Discussion, Conclusion, and Future Work
In this work, we investigate the practicality and challenges of differentially private (DP) graph neural networks (GNNs) applied to medical population graphs. The utilisation of population graphs in medicine has shown promising performance improvements for disease prediction [7] and age estimation [22]. However, it comes with an additional challenge, namely the requirement of an explicit graph construction step. This can lead to poor graph structures with respect to the homogeneity of neighbourhoods, which can be measured by the homophily metric. Population graphs contain sensitive medical data of several subjects, which requires protection when applying DL methods to these graphs. Applying DP to GNNs requires specialised formulations of DP concepts like privacy amplification techniques and DP-SGD methods [11]. We here evaluate privacy-utility trade-offs of DP GNNs trained on medical population graphs and reveal interesting correlations between the graph structure and the performance of DP GNNs. When the underlying graph structure of a dataset has low homophily (indicating diverse neighbourhoods with differing labels), DP has a stronger negative impact on model performance compared to datasets with high homophily. This finding, and its possible connection to the long-tail hypothesis [27], is a promising direction for future work to potentially improve DP methods for GNNs by improving the underlying graph structure. Moreover, homophily is not the only measure of the "quality" of a graph structure. Further metrics should be evaluated, which may shed more light on the impact of graph structure on the performance of DP GNNs.
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline
**Hom.** & **Non-DP** & **Clipping** & **Sub-graphing** & **Subg. + Clip.** & **DP (\(\epsilon=20\))** & **DP (\(\epsilon=15\))** & **DP (\(\epsilon=10\))** & **DP (\(\epsilon=5\))** \\ \hline
0.9 & 99.00 \(\pm\) 2.00 & **100.0 \(\pm\) 0.00** & 99.80 \(\pm\) 0.40 & 99.00 \(\pm\) 0.00 & 99.00 \(\pm\) 4.17 & 96.00 \(\pm\) 2.49 & 93.10 \(\pm\) 1.36 & 88.70 \(\pm\) 3.06 \\
0.8 & 99.62 \(\pm\) 0.37 & 99.90 \(\pm\) 0.00 & **100.0 \(\pm\) 0.00** & 99.80 \(\pm\) 2.45 & 82.80 \(\pm\) 7.83 & 80.60 \(\pm\) 10.8 & 81.30 \(\pm\) 6.40 & 80.00 \(\pm\) 8.76 \\
0.7 & 96.50 \(\pm\) 1.23 & **99.70 \(\pm\) 0.40** & 99.84 \(\pm\) 0.51 & 98.20 \(\pm\) 0.50 & 79.10 \(\pm\) 3.00 & 76.30 \(\pm\) 3.10 & 74.80 \(\pm\) 4.55 & 70.30 \(\pm\) 5.64 \\
0.6 & 73.60 \(\pm\) 0.97 & **91.00 \(\pm\) 2.16** & 83.10 \(\pm\) 3.10 & 85.40 \(\pm\) 3.54 & 57.72 \(\pm\) 2.38 & 55.50 \(\pm\) 1.45 & 54.80 \(\pm\) 3.87 & 57.90 \(\pm\) 1.32 \\
0.5 & **66.10 \(\pm\) 1.361** & 53.60 \(\pm\) 1.16 & 57.00 \(\pm\) 1.76 & 58.10 \(\pm\) 2.03 & 52.71 \(\pm\) 3.34 & 51.82 \(\pm\) 3.43 & 51.70 \(\pm\) 2.38 & 50.70 \(\pm\) 3.87 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of the experiments on the synthetic dataset at different homophily values; All results refer to test accuracy (%), with the best ones highlighted in bold. Compare Section 2 for details. Hom.: Homophily, Subg.: Sub-graphing.
Figure 1: (a) Log-scale ROC curves of the membership inference attacks on the TADPOLE models at different privacy levels; (b) test accuracy on the synthetic dataset at different homophily values.
2304.14152 | Spiking Neural Network Decision Feedback Equalization for IM/DD Systems | A spiking neural network (SNN) equalizer with a decision feedback structure
is applied to an IM/DD link with various parameters. The SNN outperforms linear
and artificial neural network (ANN) based equalizers. | Alexander von Bank, Eike-Manuel Edelmann, Laurent Schmalen | 2023-04-27T12:49:31Z | http://arxiv.org/abs/2304.14152v1 | # Spiking Neural Network Decision Feedback Equalization for IM/DD Systems
###### Abstract
A spiking neural network (SNN) equalizer with a decision feedback structure is applied to an IM/DD link with various parameters. The SNN outperforms linear and artificial neural network (ANN) based equalizers. (c) 2023 The Author(s)
+
Footnote †: This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101001899). Parts of this work were carried out in the framework of the CELTIC-NEXT project AI-NET-ANTILLAS (C2019/3-3) funded by the German Federal Ministry of Education and Research (BMBF) (grant agreement 16KIS1316).
## 1 Introduction
Machine-learning-based equalizers have proven to enhance system performance for non-linear optical channels. However, the performance of most equalizers depends on their complexity, leading to power-hungry receivers when implemented on digital hardware. Compared to conventional digital hardware, neuromorphic hardware can massively reduce energy consumption when solving the same tasks [1]. Spiking neural networks (SNNs) implemented on neuromorphic hardware mimic the human brain's behavior and promise energy-efficient, low-latency processing [2]. In [3], an SNN-based equalizer with a decision feedback structure (SNN-DFE) has been proposed for equalization and demapping based on future and currently received symbols as well as already decided symbols. For different multipath scenarios, i.e., linear channels, the SNN-DFE performs similarly to the classical decision feedback equalizer (CDFE) and artificial neural network (ANN) based equalizers. For 4-fold pulse amplitude modulation (PAM4) transmitted over an intensity modulation / direct detection (IM/DD) link suffering from _chromatic dispersion_ (CD) and non-linear impairments, [4] proposes an SNN that estimates the transmit symbols from the received symbols without feedback, termed no-feedback SNN (NF-SNN) in the following. In simulations and experiments using the neuromorphic BrainScaleS-2 system, [4] outperforms linear minimum mean square error (LMMSE) equalizers and draws level with ANN-based equalizers. However, the approach of [4] lacks feedback of already decided symbols, which can enhance equalization. In this paper, we apply the SNN-DFE of [3] to the channel model of [4]. By incorporating decisions, the SNN-DFE outperforms the approach proposed by [4]. As the chromatic dispersion grows, the SNN-DFE benefits more from its decision feedback structure. The code is available at [https://github.com/kit-cel/snn-dfe](https://github.com/kit-cel/snn-dfe).
## 2 Spiking-Neural-Network-based Decision Feedback Equalizer and Demapper
Like ANNs, SNNs are networks of interconnected neurons; however, SNNs represent information using pulses (spikes) instead of numbers. SNN neurons have an internal membrane state \(v(t)\), which depends on the time-variant input spike trains \(s_{j}(t)\in\{0,1\}\), \(j\in\mathbb{N}\), emitted by the \(j\)-th upstream connected neuron. The spike trains \(s_{j}(t)\) cause the synaptic current \(i(t)\), which charges the neuron's state \(v(t)\). If \(v(t)\) surpasses a predefined threshold \(v_{\text{th}}\), i.e., if \(v(t)>v_{\text{th}}\), the neuron emits an output spike. Afterward, the neuron resets its state to \(v(t)=v_{\text{r}}\). A common neuron model is the leaky-integrate-and-fire (LIF) model. It is described by [2] \(\frac{\mathrm{d}v(t)}{\mathrm{d}t}=-\frac{v(t)-v_{\text{r}}}{\tau_{\text{m}}}+i(t)+\Theta(v(t)-v_{\text{th}})(v_{\text{r}}-v_{\text{th}})\) and \(\frac{\mathrm{d}i(t)}{\mathrm{d}t}=-\frac{i(t)}{\tau_{\text{s}}}+\sum_{j}w_{j}s_{j}(t)\), where \(\tau_{\text{m}}\) and \(\tau_{\text{s}}\) are the time constants of \(v(t)\) and \(i(t)\), \(\Theta(\cdot)\) denotes the Heaviside function, and \(w_{j}\in\mathbb{R}\) are the input weights. The PyTorch-based SNN deep learning library Norse [5] provides approximate solutions of these equations as well as the backpropagation-through-time (BPTT) algorithm with surrogate gradients [2], which is used for updating the network's weights.
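A forward-Euler discretisation of these LIF dynamics is easy to write down; the sketch below, with arbitrary parameter values, is an illustration of the neuron model rather than the Norse implementation used here.

```python
import numpy as np

def lif_simulate(spike_trains, w, dt=1e-3, tau_m=1e-2, tau_s=5e-3, v_th=1.0, v_r=0.0):
    """Euler-discretised LIF neuron driven by binary input spike trains.

    spike_trains: array (T, J) of 0/1 input spikes; w: weights of shape (J,).
    Returns the output spike train of shape (T,).
    """
    v, i = v_r, 0.0
    out = np.zeros(spike_trains.shape[0])
    for t, s in enumerate(spike_trains):
        i += dt * (-i / tau_s) + float(w @ s)   # leaky current; spikes enter as jumps
        v += dt * (-(v - v_r) / tau_m + i)      # leaky membrane integration
        if v > v_th:                            # threshold crossing: spike and reset
            out[t] = 1.0
            v = v_r
    return out

rng = np.random.default_rng(0)
spikes = (rng.random((200, 8)) < 0.1).astype(float)
print(lif_simulate(spikes, w=rng.normal(0, 0.5, 8)).sum(), "output spikes")
```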
Inspired by a DFE, we proposed an SNN-based equalizer and demapper (SNN-DFE) in [3], whose architecture can be seen in Fig. 1. The received symbol \(y[k]\), where \(k\) denotes the discrete time, is translated into a spike signal using ternary encoding [3] with \(M_{\text{t}}=8\) input neurons per sample. Based on the last \(n=\left\lceil\frac{n_{\text{tap}}}{2}\right\rceil\) received symbols and the last \(m=\left\lfloor\frac{n_{\text{tap}}}{2}\right\rfloor\) estimated symbols, the transmit symbol class estimate \(\hat{a}[k]\) is obtained, where \(n_{\text{tap}}\) is the number of significant channel taps when sampling the channel at the symbol rate. Finally, a bit mapper translates the estimate \(\hat{a}[k]\) into the corresponding bit sequence \(\hat{\mathbf{b}}[k]\).
Figure 1: Proposed SNN-DFE structure
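The sliding decision-feedback window of Fig. 1 can be illustrated schematically as follows, with `snn_step` as a toy stand-in for the trained SNN:

```python
import numpy as np

def dfe_equalize(y, n_tap, snn_step, constellation):
    """Sequentially estimate symbols from n received and m decided symbols."""
    n = int(np.ceil(n_tap / 2))    # current + future received symbols
    m = n_tap // 2                 # already decided (fed-back) symbols
    decisions = []
    for k in range(len(y) - n + 1):
        received = y[k:k + n]
        fed_back = decisions[-m:] if m else []
        feedback = np.pad(fed_back, (m - len(fed_back), 0))  # zero-pad at start-up
        scores = snn_step(received, feedback)                # class scores for a_hat[k]
        decisions.append(constellation[int(np.argmax(scores))])
    return np.array(decisions)

# Toy stand-in for the trained SNN: nearest-constellation-point scores.
constellation = np.array([-3.0, -1.0, 1.0, 3.0])
snn_step = lambda rec, fb: -np.abs(constellation - rec[0])
y = np.array([-2.9, 1.2, 0.8, -1.1, 3.1, -0.9])
print(dfe_equalize(y, n_tap=3, snn_step=snn_step, constellation=constellation))
```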
## 3 Results and Conclusion
To benchmark the SNN-DFE against the approach of [4], we implemented the IM/DD link as given by [4, Fig. 2A] with the same parameters (channel A). With altered parameters, a second reference link (channel B) with stronger CD is created. Shared parameters are a fiber length of \(5\,\mathrm{km}\), a pulse shaping roll-off factor of \(\beta=0.2\), and a direct current block prior to equalization. Channel-specific parameters for channel A are (100 GBd, 1270 nm, \(-5\) ps\(\,\mathrm{nm}^{-1}\,\mathrm{km}^{-1}\), a bias of 2.25 after pulse shaping, \(n_{\text{tap}}=17\), \(\mathcal{C}=\{-3,-1,1,3\}\)) and for channel B (50 GBd, 1550 nm, \(-17\) ps\(\,\mathrm{nm}^{-1}\,\mathrm{km}^{-1}\), a bias of 0.25 after pulse shaping, \(n_{\text{tap}}=41\), \(\mathcal{C}=\{0,1,\sqrt{2},\sqrt{3}\}\)), where \(\mathcal{C}\) is the set of transmit symbols with Gray mapping. To benchmark our SNN-DFE (using ternary encoding [3]), we implemented the SNN and reference ANN of [4] (using log-scale encoding) with \(n_{\text{tap}}\) taps. Since both approaches lack decision feedback, they are referred to as NF-SNN and NF-ANN. Furthermore, we benchmark against the CDFE and the LMMSE equalizer with \(n_{\text{tap}}\) taps. To compare the SNN-DFE with an ANN-based approach, we replace the SNN of Fig. 1 with an ANN (ANN-DFE) and omit the encoding (real-valued inputs instead of encoded inputs). All networks have \(N_{\text{o}}=4\) output neurons and \(N_{\text{h}}=40\) hidden neurons for channel A; for channel B, \(N_{\text{h}}=80\), since the task of equalization and demapping is more demanding under strong CD. The networks are trained with a learning rate of \(10^{-3}\) using \(10\,000\) independent batches with \(200\,000\) transmit symbols each. The noise power during training is experimentally chosen to be \(\sigma^{2}=-17\,\mathrm{dB}\), corresponding to an ANN-DFE training \(\text{SER}\approx 10^{-2}\). For the left plot of Fig. 2, we include the results of [4] for their proposed SNN (ref SNN) and reference ANN (ref ANN1). Since NF-SNN and NF-ANN outperform ref SNN and ref ANN1, which are of equal architecture, we conclude that our proposed training setup can boost performance. As shown in the left and middle plots of Fig. 2, the DFE-based structures outperform the NF-structures for both channels; furthermore, the SNNs we studied performed better than the corresponding ANNs. The right plot of Fig. 2 reveals that training all networks at a specific noise power (t @ 17 dB) results in better performance than training the networks for each \(\sigma^{2}\) (t @ e dB). We conclude that for low \(\sigma^{2}\), the error magnitude is insufficient for proper learning; adequately trained networks can generalize across different \(\sigma^{2}\). Figure 3 depicts the results for channel B at \(\sigma^{2}=21\) dB and varying fiber lengths. Again, the networks are trained individually for each length at a \(\sigma^{2}\) corresponding to an ANN-DFE \(\text{SER}\approx 10^{-2}\). As the fiber length increases, the impact of CD grows, and DFE-based structures (especially the SNN-DFE) outperform NF-structures. We conclude that for an IM/DD link, the SNN-DFE can exploit the decision feedback to combat CD for the investigated \(\sigma^{2}\). It should be noted that the SNN-DFE suffers from error propagation with increasing SER. The SNN-DFE outperforms all ANN-based equalizers and the equalizer proposed in [4] in highly dispersive regimes, enabling powerful and energy-efficient equalization and demapping.
|
2303.16912 | Training Feedforward Neural Networks with Bayesian Hyper-Heuristics | The process of training feedforward neural networks (FFNNs) can benefit from
an automated process where the best heuristic to train the network is sought
out automatically by means of a high-level probabilistic-based heuristic. This
research introduces a novel population-based Bayesian hyper-heuristic (BHH)
that is used to train feedforward neural networks (FFNNs). The performance of
the BHH is compared to that of ten popular low-level heuristics, each with
different search behaviours. The chosen heuristic pool consists of classic
gradient-based heuristics as well as meta-heuristics (MHs). The empirical
process is executed on fourteen datasets consisting of classification and
regression problems with varying characteristics. The BHH is shown to be able
to train FFNNs well and provide an automated method for finding the best
heuristic to train the FFNNs at various stages of the training process. | Arné Schreuder, Anna Bosman, Andries Engelbrecht, Christopher Cleghorn | 2023-03-29T11:40:28Z | http://arxiv.org/abs/2303.16912v1 | # \({}^{\star}\)Training Feedforward Neural Networks with Bayesian Hyper-Heuristics
###### Abstract
The process of training _feedforward neural networks_ (FFNNs) can benefit from an automated process where the best heuristic to train the network is sought out automatically by means of a high-level probabilistic-based heuristic. This research introduces a novel population-based _Bayesian hyper-heuristic_ (BHH) that is used to train _feedforward neural networks_ (FFNNs). The performance of the BHH is compared to that of ten popular low-level heuristics, each with different search behaviours. The chosen heuristic pool consists of classic gradient-based heuristics as well as _meta-heuristics_ (MHs). The empirical process is executed on fourteen datasets consisting of classification and regression problems with varying characteristics. The BHH is shown to be able to train FFNNs well and provide an automated method for finding the best heuristic to train the FFNNs at various stages of the training process.
keywords: hyper-heuristics, meta-learning, feedforward neural networks, supervised learning, Bayesian statistics
Footnote †: journal: Information Sciences
## 1 Introduction
A popular field of focus for studying _artificial neural networks_ (ANNs) is the process by which these models are trained. ANNs are trained by optimisation algorithms known as heuristics. Many different heuristics have
been developed and used to train ANNs [1]. Each of these heuristics has different search behaviours, characteristics, strengths and weaknesses. It is necessary to find the best heuristic to train ANNs in order to yield optimal results. This process is often non-trivial and time-consuming. Selection of the best heuristic to train ANNs is often problem specific [2].
A recent suggestion related to the field of _meta-learning_ is to dynamically select and/or adjust the heuristic used throughout the training process. This approach focuses on the hybridisation of learning paradigms. One such form is the hybridisation of different _heuristics_ as they are applied to some optimisation problem [3]. These methods are referred to as _hyper-heuristics_ (HHs) and focus on finding the best heuristic in _heuristic space_ to solve a specific problem.
In the general context of optimisation, many different types of HHs have been implemented and applied to many different problems [3]. However, research on the application of HHs in the context of ANN training is scarce. Nel [4] provides some of the first research in this field, applying a HH to _feedforward neural network_ (FFNN) training.
This research takes a particular interest in developing a population-based selection HH that makes use of probability theory and Bayesian statistical concepts to guide the heuristic selection process. This paper presents a novel _Bayesian hyper-heuristic_ (BHH), a new high-level heuristic that utilises a statistical approach, referred to as _Bayesian analysis_, which combines prior information with new evidence to update the parameters of a selection probability distribution. This selection probability distribution is the mechanism by which the HH selects appropriate heuristics to train FFNNs during the training process.
The selection mechanism implemented by the BHH is different from the _multialgorithm, genetically adaptive multiobjective_ (AMALGAM) and _biobjective hyperheuristic training algorithm_ (BOHTA) methods used by Nel [4], as well as the _hyper-heuristic Bayesian optimisation algorithm_ (HHBOA) proposed by Oliva and Martins [5]. The key differences include that the BHH does not follow an evolutionary approach to the selected low-level heuristics. As such the population does not generate offspring, but rather reuses entities in the population. Furthermore, the BHH implements a discrete credit assignment mechanism, not making use of pareto fronts as in the AMALGAM and BOHTA methods.
The remainder of this article is structured as follows: Section 2 provides background information on ANNs. Section 3 provides details on various
types of heuristics that have been used to train FFNNs. Section 4 presents background information on HHs and meta-learning. Section 5 presents background information on probability theory. Section 6 presents the developed BHH. Section 7 presents a detailed description of the empirical process and the setup of each experiment. Section 8 provides and discusses the results of the empirical study. Section 9 summarises the research that is done along with a brief overview of the findings.
## 2 Artifical Neural Networks
This research focuses on a particular type of ANN, referred to as _feedforward neural networks_ (FFNNs). FFNNs were the first and simplest type of ANNs developed [6] and implement an architecture consisting of input, hidden and output layers by arranging them in sequential order. Furthermore, FFNNs implement fully connected topologies, where each _artificial neuron_ (AN) in one layer is connected to all the ANs in the next, without any cycles [7]. In FFNNs, information moves forward, in one direction, from the input nodes, through the hidden nodes and finally to the output nodes.
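The forward pass of such a fully connected FFNN amounts to a sequence of affine maps and activations; a minimal NumPy sketch with arbitrary layer sizes:

```python
import numpy as np

def ffnn_forward(x, weights, biases):
    """Propagate input through fully connected layers; sigmoid hidden activations."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = 1.0 / (1.0 + np.exp(-(a @ W + b)))   # hidden layer
    return a @ weights[-1] + biases[-1]          # linear output layer

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]                             # input, two hidden, output layers
weights = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]
print(ffnn_forward(rng.normal(size=(5, 4)), weights, biases).shape)  # (5, 3)
```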
_Training_ is the process whereby the weights of the FFNN are systematically changed with the aim of improving the _performance_ of the FFNN. Finding the optimal weights that produce the best performance on a given task is an optimisation problem. The optimisation algorithm used to find the optimal weights is referred to as a _heuristic_. Heuristics search for possible solutions in the solution-space and make use of information from the search space to guide to process.
During the training process, the FFNN is exposed to data while trying to produce some target outcome. The degree to which the produced outcome differs from the target outcome is referred to as _loss_. Since training of FFNNs is an optimisation problem, the goal of the training process is to minimise the loss. The loss is calculated using an error function.
## 3 Heuristics
A heuristic refers to an algorithmic search technique that serves as a guide to a search process where good solutions to an optimisation problem are being sought out. Many different techniques have been used to train FFNNs [8]. At the time of writing, the majority of work that is published on the training of FFNNs involves the use of gradient-based techniques [4].
Gradient-based heuristics are optimisation techniques that make use of derivatives obtained from evaluating the ANN error function. In the context of supervised learning, loss functions produce a scalar value that represents the error between the output of the ANN and the desired output. When using _gradient descent_ (GD) to train ANNs, the gradients of the loss function are used to adjust the weights of the ANN in order to minimise the error [9].
There are many variants of gradient-based heuristics. However, they all fundamentally apply the same generic GD framework that propagates the error signal backwards through the ANN. This algorithm, known as _back-propagation_ (BP), was popularised by Werbos [10].
The simplest type of GD algorithm is referred to as _stochastic gradient descent_ (SGD), which implements a gradient-based weight update step for each training pattern. In the context of this research, the implementation of SGD refers to the mini-batch training implementation of GD, where a small batch of training patterns are fed to the FFNN at once and the error function is aggregated across all training patterns.
Alternative variants have been proposed that lead to better control over the convergence characteristics of SGD. This research focuses on a number of these variants that include _momentum_ (Momentum) [11], _Nesterov accelerated gradients_ (NAG) [12], _adaptive gradients_ (Adagrad) [13], Adadelta [14], _root mean squared error propagation_ (RMSProp) [15] and _adaptive moment estimation_ (Adam) [8].
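All of these variants modify the same basic weight update rule. As a minimal illustration (not tied to any specific implementation in this work), plain SGD and the momentum variant differ only in how the step is accumulated:

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    """Plain (mini-batch) SGD: step directly along the negative gradient."""
    return w - lr * grad

def momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
    """Momentum: accumulate an exponentially decaying average of past gradients."""
    velocity = beta * velocity - lr * grad
    return w + velocity, velocity

w, v = np.zeros(3), np.zeros(3)
for _ in range(100):                 # toy quadratic loss: 0.5 * ||w - 1||^2
    grad = w - np.ones(3)
    w, v = momentum_step(w, grad, v)
print(w)                             # converges towards [1, 1, 1]
```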
Gradient-based heuristics are sensitive to the problem that they are applied to, with hyper-parameter selection often dominating the research focus [16]. Blum and Roli [17] mention that since the 1980s, a new kind of approximate algorithm has emerged which tries to combine basic heuristic methods in higher level frameworks aimed at efficiently and effectively exploring a search space. These methods are referred to as MHs.
The biggest difference between MHs and gradient-based heuristics is that MHs make use of meta-information obtained as a result of evaluating the FFNN during training and are not limited to information about the search space [17]. This also means that MHs do not necessarily require the error function to be differentiable. Blum and Roli [17] provide advantages of MHs that include the following: they are easy to implement; they are problem independent and do not require problem-specific knowledge; and they are generally designed to find global optima, whereas gradient-based approaches can more often get stuck in local optima. Similar to gradient-based heuristics, a number of different meta-heuristics have been used to successfully train
FFNNs [1; 18; 19]. This research takes a particular interest in population-based MHs that have been used to train FFNNs. These include _particle swarm optimisation_ (PSO) [20], _differential evolution_ (DE) [21] and _genetic algorithms_ (GAs) [22].
## 4 Hyper-Heuristics
Burke et al. [23] define HHs as search methods or learning mechanisms for selecting or generating heuristics to solve computational search problems. Burke et al. [24] mention that a HH is a high-level heuristic approach that, given a particular problem instance and a number of low-level heuristics, can select and apply an appropriate low-level heuristic at each decision point. HHs implement a form of _meta-learning_ that is concerned with the selection of the best heuristic from a pool of heuristics to solve a given problem. It can be said that HHs are concerned with finding the best heuristic in _heuristic space_, while the underlying low-level heuristics find solutions in the feasible _search/solution space_.
Burke et al. [23] proposed a classification scheme used to classify HHs. According to the proposed classification scheme, HHs are classified in two dimensions. These include the _source of feedback_ used during learning and the nature of the _heuristic search space_. For the dimension that involves the source of feedback, HHs can be classified as either _no learning_, _online learning_ or _offline learning_. For the dimension that involves the nature of the _heuristic search space_, HHs can be classified as either _heuristic selection_ or _heuristic generation_. Further distinction is made between _construction_ of heuristics and _perturbation_ of heuristics.
In the general context of optimisation, many different types of HHs have been implemented and applied to many different problems. Some notable examples include [23; 25; 26; 27]. Research on the application of HHs in the context of FFNN training is still scarce. Nel [4] provides some of the first research in this field, applying BOHTA, a novel adaptation of an evolutionary-based HH, known as the AMALGAM HH [28], to FFNN training. Furthermore, Oliva and Martins [5] provide HHBOA, the first use of Bayesian optimisation in a HH context. The method proposed by Oliva and Martins [5] uses a Bayesian selection operator to evolve combinations of low-level heuristics while looking for good problem solutions to a benchmark of optimisation functions, but does not apply a HH to the training of FFNNs.
This research takes a particular interest in a population-based, selection approach for HHs, with the particular intent of training FFNNs. In the context of population-based HHs, an entity pool exists that represents a pool of candidate solutions to the given problem. Each entity in the entity pool is assigned its own low-level heuristic from the heuristic pool. The selection of the best heuristic to apply to a candidate solution is based on the performance of the heuristic relative to that particular candidate solution at a particular point in the search process. Selection methods often make use of probabilistic approaches.
## 5 Probability
In general, there are two main views of probability and statistics. These are the _frequentist_ and the _Bayesian_ view of statistics. Naturally, Bayesian statistics is based on Bayes' theorem [29].
Bayesian statistics describe the probability of an event in terms of some belief, based on previous knowledge of the event and the conditions under which the event happened [30]. Bayes' theorem expresses how a degree of belief, expressed as a probability, should rationally change to account for the availability of related evidence.
One of the many applications of Bayes' theorem is to do statistical inference. Like FFNNs, Bayesian models need to be _trained_, a process known as _Bayesian analysis_. Bayesian analysis is the process by which prior beliefs are updated as a result of observing new data/evidence.
Bayesian analysis utilises the concept of conjugate priors. Wackerly et al. [31] state that conjugate priors are prior probability distributions that result in posterior distributions that are of the same functional form as the prior, but with different parameter values. The conjugate prior to a Bernoulli probability distribution is the Beta probability distribution and the conjugate prior to a categorical and multinomial probability distribution is the Dirichlet probability distribution [31].
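As a concrete example of conjugacy (a standard result, stated here for later reference): if a Bernoulli success probability \(\psi\) has prior \(\psi\sim Beta(\gamma_{1},\gamma_{0})\) and \(N_{1}\) successes and \(N_{0}\) failures are observed, the posterior is again a Beta distribution,

\[\psi\mid\text{data}\;\sim\;Beta(\gamma_{1}+N_{1},\;\gamma_{0}+N_{0}),\]

so updating a belief amounts to adding observed counts to the concentration parameters. The same additive pattern holds for the Dirichlet prior of a categorical or multinomial distribution.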
## 6 Bayesian Hyper-Heuristics
This section presents the novel BHH. The general concept of the BHH is summarised as follows: the BHH implements a high-level heuristic selection mechanism that learns to select the best heuristic from a pool of low-level heuristics. These low-level heuristics are applied to a population of entities,
each representing a candidate solution to the FFNN being trained. The intent of the BHH is to optimise both the underlying FFNN and the FFNN training process. The BHH does so by learning the probability that a given heuristic will perform well at a given stage in the FFNN training process. These probabilities are then used as heuristic selection probabilities in the next step of the training process.
According to the classification scheme for HHs by Burke et al. [23], the BHH is a population-based, meta-hyper-heuristic that utilises selection and perturbation of low-level heuristics in an online learning fashion. Figure 1 provides an illustration of the high-level architecture of the BHH. Algorithm 1 provides the high-level pseudo-code implementation of the BHH. Discussions follow on the most important components of the BHH.
### Heuristic Pool
Generally speaking, the heuristic pool is a collection of low-level heuristics under consideration by the BHH. The heuristic pool contains the set of low-level heuristics that, together with their performance information, make up the heuristic space. Importantly, the heuristic pool must consist of a diverse set of low-level heuristics with varying capabilities. This research takes an interest in including both gradient-based heuristics as well as MHs in the heuristic pool. This approach is referred to as a _multi-method_ approach.
### Proxies
Heuristics often maintain a set of parameters that are used to control the behaviour of the heuristic. These parameters are referred to as heuristic _state_. The concept of proxies arises from the sparsity of state as maintained by different heuristics. Since heuristics maintain (possibly) different states, there is an uncertainty of state transition when switching between heuristics. A solution to state indifference is to _proxy_ heuristic state update operations. State is then maintained in two parts: primary and proxied state. Primary state refers to the state that is originally maintained by a heuristic. Proxied state refers to the state that is not directly maintained by the heuristic, but can be updated by outsourcing the required state update operation to another heuristic. The BHH thus incorporates a mapping of proxied state update operations as given in the example in Table 1.
From the example given in Table 1, when heuristic A is selected, it will outsource state update operations from heuristic B for state parameter 2. Heuristic B will outsource from heuristic A for state parameter 3. Finally,
Figure 1: An illustration of the architecture and high level components of the _Bayesian hyper-heuristic_ (BHH).
**Algorithm 1** The pseudo-code for the implementation of the _Bayesian hyper-heuristic_ (BHH)

    step ← 0
    select initial heuristics
    initialise population and entities
    evaluate entities' initial position
    update population state
    while stopping condition not met do
        for all entities in entity pool do
            if selected heuristic is gradient-based then
                get gradients
            end if
            apply low-level heuristic and proxy operations
            update population state
            log performance metrics to performance log
            if step < burn-in window size then
                select heuristic
            else
                if step % reanalysis interval = 0 then
                    apply Bayesian analysis
                end if
                if step % reselection interval = 0 then
                    select heuristic
                end if
                if step > replay window size then
                    prune performance log
                end if
            end if
        end for
        step ← step + 1
    end while

| heuristic | state parameter 1 | state parameter 2 | state parameter 3 |
|-----------|-------------------|-------------------|-------------------|
| A         | A                 | B (proxied)       | A                 |
| B         | B                 | B                 | A (proxied)       |
| C         | C                 | A (proxied)       | B (proxied)       |

Table 1: An example of a mapping of proxied state update operations maintained by the BHH.
heuristic C will outsource from heuristic A and B for state parameters 2 and 3 respectively. In this way, all heuristics maintain all the state parameters.
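A minimal sketch of such a proxy table for the hypothetical heuristics A, B and C of Table 1 (the dictionary layout and function names are illustrative, not the actual implementation):

```python
# proxy[h][p] names the heuristic whose update operation heuristic h uses
# for state parameter p; h's own name marks primary (non-proxied) state.
proxy = {
    "A": {"param_1": "A", "param_2": "B", "param_3": "A"},
    "B": {"param_1": "B", "param_2": "B", "param_3": "A"},
    "C": {"param_1": "C", "param_2": "A", "param_3": "B"},
}

def update_entity_state(entity_state, selected, update_ops):
    # Update every state parameter, outsourcing proxied operations to the
    # heuristic that the proxy table points to.
    for param, source in proxy[selected].items():
        entity_state[param] = update_ops[source][param](entity_state)
    return entity_state
```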
### Entity Pool
The entity pool refers to a collection or _population_ of _entities_ that each represent a candidate solution to the underlying FFNN being trained. The BHH selects from the heuristic pool a low-level heuristic to be applied to an individual entity. The outcome of this selection process is a mapping table that tracks which heuristic has been selected for which entity. These heuristic-entity combinations are applied to the underlying FFNN. The BHH tracks the performance of each of these combinations throughout the training process in a performance log.
Entities represent candidate solutions to the model's trainable parameters (weights) and other heuristic-specific state parameters. These state parameters are referred to as _local_ state. Entities are treated as physical _particles_ in a hyper-dimensional search environment. Entities model concepts from physics. For example, the candidate solution is represented as the entity's position. The velocity and acceleration are then analogous to the gradient and momentum of the entity respectively [32]. Examples of entity state parameters, as derived from various low-level heuristics, include entity position, velocity, gradient, position delta, first and second moments of the gradient, the loss, personal best positions, losses, and so on. The entity state parameters are updated by the associated heuristic.
The population state refers to a collection of parameters that are shared between the entities in the population. Population state is also referred to as _global_ state and represents the population's memory. The population state generally contains state parameters that are of importance to multiple heuristics, and usually tracks the state of the population and not of individual entities. Some examples of population state that can arise from different heuristics include the population of entities themselves, the global best entity found so far, the overall best loss achieved thus far, and so on.
### Performance Log
Heuristic selection probability is calculated based on heuristic-entity performance over time. Evidence of heuristic-entity performance is thus required for the BHH to learn. Historical heuristic-entity performance outcomes are stored in a performance log. The performance log tracks information such as the current step, selected heuristic, associated entity, the loss achieved and
so on. Since the performance log can become very big, only a sliding window of the performance history is maintained at each step in the learning process. The sliding window is also referred to as a _replay_ window/buffer.
### Credit Assignment Strategy
The credit assignment strategy is a mechanism that assigns a discrete credit indicator to heuristics that perform well, based on their performance metrics such as loss. The credit assignment strategy implements a component of the "move acceptance" process as proposed by Ozcan et al. [33] and addresses the credit assignment problem as discussed by Burke et al. [23]. A good credit assignment strategy will correctly allocate credit to the appropriate heuristic-entity combination. This research implements the following credit assignment strategies to choose from: _ibest_ (iteration best), _pbest_ (personal best), _gbest_ (global best), _rbest_ (replay window best), and _symmetric_, where credit is assigned to all entity-heuristic combinations, regardless of their performance.
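As an illustrative sketch of the _ibest_ strategy (assuming each performance-log entry for the current iteration is a dictionary with a `loss` field; the layout is hypothetical):

```python
def assign_ibest_credit(iteration_entries):
    # Iteration best: only the heuristic-entity combination with the lowest
    # loss in the current iteration receives credit.
    best = min(iteration_entries, key=lambda entry: entry["loss"])
    for entry in iteration_entries:
        entry["credit"] = 1 if entry is best else 0
    return iteration_entries
```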
### Selection Mechanism
The BHH implements a probabilistic predictive model based on the fundamentals of the Naive Bayes algorithm. The BHH thus distinguishes between the following events: \(\mathbf{H}\), the event of observing _heuristics_, \(\mathbf{E}\), the event of observing _entities_, and \(\mathbf{C}\), the event of observing _credit assignments_ that indicate that the credit assignment _performance criteria_ are met. By Bayes' theorem, the selection mechanism implemented by the BHH is given as
\[P(\mathbf{H}|\mathbf{E},\mathbf{C};\mathbf{\theta},\mathbf{\phi},\mathbf{\psi})\propto P(\mathbf{E}|\mathbf{H};\mathbf{\phi})P(\mathbf{C}|\mathbf{H};\mathbf{\psi})P(\mathbf{H}|\mathbf{\theta}) \tag{1}\]
The predictive model thus models the _proportional_ probability of the event (selection of) heuristic \(\mathbf{H}\), given allocation to entity \(\mathbf{E}\) and credit requirement \(\mathbf{C}\), parameterised by sampled \(\mathbf{\theta}\sim Dir(\mathbf{\alpha};K)\), \(\mathbf{\phi}\sim Dir(\mathbf{\beta};K)^{J}\) and \(\mathbf{\psi}\sim Beta(\gamma_{1},\gamma_{0})\). In the aforementioned, \(K\) is the heuristic pool size and \(J\) is the entity pool size. The parameters \(\mathbf{\alpha}\), \(\mathbf{\beta}\), \(\gamma_{1}\) and \(\gamma_{0}\) are referred to as concentration parameters. The concentration parameters are used to parameterise the prior probability distributions. Appendix A provides mathematical derivations of the predictive model.
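A minimal sketch of the resulting selection step, assuming \(\mathbf{\alpha}\) is a length-\(K\) vector, \(\mathbf{\beta}\) a \(K\times J\) array of per-heuristic concentrations over entities, and \(\gamma_{1}\), \(\gamma_{0}\) length-\(K\) vectors (the shapes are assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng()

def select_heuristic(alpha, beta, gamma1, gamma0, j):
    # Sample the model parameters from their (posterior) distributions.
    K = len(alpha)
    theta = rng.dirichlet(alpha)                                   # P(H)
    phi = np.array([rng.dirichlet(beta[k])[j] for k in range(K)])  # P(E=j | H=k)
    psi = rng.beta(gamma1, gamma0)                                 # P(C=1 | H=k)
    # Proportional posterior of Equation (1), normalised into probabilities.
    scores = phi * psi * theta
    probs = scores / scores.sum()
    return rng.choice(K, p=probs)
```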
### Optimisation Step
The intent of the BHH is to gather evidence that can be used to update prior beliefs about which heuristics perform well during training. These beliefs are represented by the concentration parameters \(\mathbf{\alpha}\), \(\mathbf{\beta}\), \(\gamma_{1}\) and \(\gamma_{0}\). A change in prior beliefs is represented by a change in these concentration parameters. Specifically, it can be said that the optimisation process implemented by the BHH updates _pseudo counts_ of events that are observed in the performance logs. These pseudo counts track the occurrence of a heuristic, an entity, and resulting performance of these two elements. Through the credit assignment strategy, these pseudo counts are biased towards entity-heuristic combinations that meet performance requirements and yield credit allocations.
Generally, there are two different techniques that are used to train Naive Bayes classifiers. The frequentist approach implements _maximum likelihood estimation_ (MLE) and the Bayesian approach implements _maximum a posteriori estimation_ (MAP).
#### 6.7.1 Maximum A Posteriori Estimation
MAP is an approach to optimise the values for \(\hat{\theta}_{k}\), \(\hat{\phi}_{j,k}\) and \(\hat{\psi}_{k}\) by optimising the parameters of their probability distributions. This process is referred to as _Bayesian analysis_. Bayesian analysis makes use of the _posterior_ probability distribution. The concentration update operations yielded by MAP are given as follows:
\[\alpha_{k}(t+1)=N_{k}+\alpha_{k}(t) \tag{2}\]
\[\beta_{j,k}(t+1)=N_{j,k}+\beta_{j,k}(t) \tag{3}\]
\[\gamma_{1,k}(t+1)=N_{1,k}+\gamma_{1,k}(t) \tag{4}\]
\[\gamma_{0,k}(t+1)=N_{0,k}+\gamma_{0,k}(t) \tag{5}\]
where \(N_{k}\) is a summary variable denoting the count of occurrences of heuristic \(k\), \(N_{j}\) is a summary variable denoting the count of occurrences of entity \(j\), \(N_{j,k}\) is a summary variable denoting the count of occurrences of heuristic \(k\) for entity \(j\), and \(N_{1,k}\) and \(N_{0,k}\) are summary variables denoting the count of occurrences where heuristic \(k\) meets performance requirements and where heuristic \(k\) does not meet performance requirements, respectively.
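A minimal sketch of these pseudo-count updates, assuming each performance-log entry exposes the selected heuristic index, the entity index and the assigned credit (field names are illustrative):

```python
def bayesian_analysis(performance_log, alpha, beta, gamma1, gamma0):
    # Accumulate pseudo-counts from the performance log, Equations (2)-(5).
    for entry in performance_log:
        k, j = entry["heuristic"], entry["entity"]
        alpha[k] += 1           # contributes to N_k
        beta[k][j] += 1         # contributes to N_{j,k}
        if entry["credit"]:
            gamma1[k] += 1      # contributes to N_{1,k}
        else:
            gamma0[k] += 1      # contributes to N_{0,k}
    return alpha, beta, gamma1, gamma0
```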
It can be said that the BHH implements a Gaussian process [34]. Since the reselection of heuristics happens at regular intervals, the outcome of a selection in one iteration may influence the outcome of another in the next iteration, making the implementation of the BHH a _hidden Markov model_ (HMM) [35].
### Hyper-Parameters
The following hyper-parameters are implemented by the BHH: the _heuristic pool_ configures the type of heuristics included in the heuristic pool; the _population size_ specifies the number of entities in the entity pool; the _credit assignment strategy_ specifies which credit assignment strategy to use; the _reselection interval_ determines the frequency of heuristic reselection; the _replay window size_ determines the maximum size of the performance log; the _reanalysis interval_ determines the frequency at which Bayesian analysis is applied; the _burn-in window size_ determines the size of an initial window where experience is simply gathered without reanalysis; and finally, the _discounted rewards_ and _normalisation_ flags toggle scaling modifiers on values assigned by the credit assignment strategies, backwards in the performance log.
## 7 Methodology
This section provides the details of the implementation of the empirical process. At a high level, the experimental procedure consists of a comparison between the BHH and standalone low-level heuristics. A number of datasets, models and heuristics are specified. Throughout the empirical process, a BHH baseline configuration is used.
### BHH Baseline
The BHH baseline is a name given to a specific configuration of the BHH which has been found to provide a reasonable baseline performance. The baseline configuration is used as the cornerstone configuration from which all
other heuristics and their configurations are evaluated. The BHH baseline configuration is given in Table 2. In Table 2, the heuristic pool configuration, _all_, refers to a configuration where the heuristic pool contains all the low-level heuristics, including all gradient-based heuristics and MHs.
### BHH vs. Low-Level Heuristics
For the standalone heuristics experimental group, a number of low-level heuristics are used along with their specified hyper-parameters. Each of these standalone low-level heuristics is compared to that of the BHH baseline configuration, across all datasets. The intent of the standalone heuristics experimental group is to determine if the BHH baseline configuration can generalise to multiple problems in comparison to individual low-level heuristics.
Additional to the BHH baseline configuration, two more BHH configurations are included. These include BHH configurations where the heuristic pool only makes use of gradient-based heuristics, and a configuration where the heuristic pool only makes use of MHs. The intent behind the inclusion of these configurations is to determine the effectiveness of multi-method approaches in the heuristic pool applied to training FFNNs.
### Heuristics
Table 3 contains a list of all the standalone, low-level heuristics that are used as well as their hyper-parameter configurations. In some cases, parameters are changed dynamically throughout the training process using a _decay schedule_. Note from Table 3 that values that are configured to make use of a decay schedule are presented with the initial value first and the decay rate in brackets next to it.
The mapping of proxied heuristic state update operations implemented by the BHH in the empirical process is given in Figure 2. In Figure 2, cells containing \(\mathbf{x}\) indicate that the associated heuristic implements that particular state parameter explicitly, and cells containing \(\mathbf{o}\) indicate that the state parameter is implicitly implemented as part of the BHH algorithm.
| heuristic pool | population | burn in | credit | reselection | replay | reanalysis | normalise | discounted rewards |
|----------------|------------|---------|--------|-------------|--------|------------|-----------|--------------------|
| all            | 5          | 0       | ibest  | 10          | 10     | 10         | false     | false              |

Table 2: The BHH baseline configuration as it is used in the empirical study.
| heuristic | configuration | value | citation |
|-----------|---------------|-------|----------|
| sgd | learning rate | 0.1 (0.01) | [12] |
| momentum | learning rate | 0.1 (0.01) | [12] |
|  | momentum | 0.9 |  |
| nag | learning rate | 0.1 (0.01) | [12] |
|  | momentum | 0.9 |  |
| adagrad | learning rate | 0.1 (0.01) | [13] |
|  | epsilon | 1E-07 |  |
| rmsprop | learning rate | 0.1 (0.01) | [15] |
|  | rho | 0.95 |  |
|  | epsilon | 1E-07 |  |
| adadelta | learning rate | 1.0 (0.95) | [14] |
|  | rho | 0.95 |  |
|  | epsilon | 1E-07 |  |
| adam | learning rate | 0.1 (0.01) | [8] |
|  | beta1 | 0.9 |  |
|  | beta2 | 0.999 |  |
|  | epsilon | 1E-07 |  |
| pso | population size | 10 | [36] |
|  | learning rate | 1.0 (0.9) |  |
|  | inertia weight (w) | 0.729844 |  |
|  | cognitive control (c1) | 1.49618 |  |
|  | social control (c2) | 1.49618 |  |
|  | velocity clip min | -1.0 |  |
|  | velocity clip max | 1.0 |  |
| de | population size | 10 | [37] |
|  | selection strategy | best |  |
|  | xo strategy | exp |  |
|  | recombination probability | 0.9 (0.1) |  |
|  | beta | 2.0 (0.1) |  |
| ga | population size | 10 | [38] |
|  | selection strategy | rand |  |
|  | xo strategy | bin |  |
|  | mutation rate | 0.2 (0.05) |  |

Table 3: Low-level heuristics and their hyper-parameter configurations.
Figure 2: Mapping of proxied heuristic state update operations as implemented by the BHH
### Datasets
In the context of training FFNNs, the underlying models are trained across a number of datasets. All the datasets used in the empirical process originate from the UCI Machine Learning Repository [39]. Datasets are grouped by problem type and include seven classification and seven regression datasets. The details around the datasets used can be found in Tables 4 and 5. Each dataset is split into a training set comprising 80% of the data, and a test set comprising 20% of the data.
A number of classification datasets contain an unbalanced representation of classes. This work does not apply mechanisms to cater for class balancing, in order to eliminate as many variables and factors in the empirical process as possible.
### Models
All models trained in the empirical process follow implementations of shallow FFNNs with only one hidden layer. The number of hidden units used was determined empirically. Weights are initialised by means of _Glorot uniform sampling_ [54]. The models and their configuration, as used for each dataset, are given in Table 6.
| dataset | output | types | attributes | classes | instances | batch | steps | citation |
|---|---|---|---|---|---|---|---|---|
| iris | multivariate | real | 4 | 3 | 150 | 16 | 10 | [40] |
| car | multivariate | categorical | 6 | 4 | 1728 | 128 | 14 | [41] |
| abalone | multivariate | categorical, integer, real | 8 | 28 | 4177 | 256 | 17 | [42] |
| wine quality | multivariate | real | 12 | 11 | 4898 | 256 | 20 | [43] |
| mushroom | multivariate | categorical | 22 | 2 | 8214 | 512 | 17 | [44] |
| bank | multivariate | real | 17 | 2 | 45211 | 512 | 89 | [45] |
| diabetic | multivariate | integer | 55 | 3 | 100000 | 1024 | 98 | [46] |

Table 4: Classification datasets
| dataset | output | types | attributes | instances | batch | steps | citation |
|---|---|---|---|---|---|---|---|
| fish toxicity | multivariate | real | 7 | 908 | 64 | 15 | [47] |
| housing | univariate | real | 13 | 506 | 32 | 16 | [48] |
| forest fires | multivariate | real | 13 | 517 | 32 | 17 | [49] |
| student performance | multivariate | integer | 33 | 649 | 32 | 21 | [50] |
| parkinsons | multivariate | integer, real | 26 | 5875 | 256 | 23 | [51] |
| air quality | multivariate, time series | real | 15 | 9358 | 256 | 37 | [52] |
| bike | univariate | integer, real | 16 | 17389 | 256 | 68 | [53] |

Table 5: Regression datasets
### Performance Measures
_Binary cross entropy_ (BinXE) is used for classification problems with two classes and _sparse categorical cross entropy_ (SparseCatXE) is used for classification problems with more than two classes. For the classification problems, accuracy is also measured. For regression problems, the _mean squared error_ (MSE) is used as a performance metric. After training has completed, the _average rank_, based on test loss, for all configurations, is calculated at each mini-batch step.
### Statistical Analysis
Each experiment and configuration is trained for a maximum of 30 epochs and is repeated over 30 independent runs, for each of the datasets. No early-stopping mechanism is used. Statistical analysis is executed on the results from the test datasets. An average rank is calculated across all 30 runs, for both experimental groups and configurations, at each step, for every epoch of training.
The Shapiro-Wilk test for normality [55] (\(\alpha\) = 0.001) is used to determine if the results are normally distributed. Levene's test for equality of variance [56] (\(\alpha\) = 0.001) is used. For experiments with three or more configurations, the ANOVA statistical test [57] (\(\alpha\) = 0.001) is used. The Kruskal-Wallis ranked non-parametric test [58] for statistical significance (\(\alpha\) = 0.001) is used for cases where data is not normally distributed. Finally, a post-hoc Tukey honest significant difference test [59] (\(\alpha\) = 0.001) is used, from which the significant ranking is retrieved.
| dataset | inputs | hidden | output | biases | parameters | topology | l1 activation | l2 activation |
|---|---|---|---|---|---|---|---|---|
| fish toxicity | 6 | 3 | 1 | yes | 25 | dense | LReLU (\(\alpha=0.3\)) | sigmoid |
| iris | 4 | 5 | 3 | yes | 43 | dense | LReLU (\(\alpha=0.3\)) | softmax |
| air quality | 12 | 8 | 1 | yes | 113 | dense | LReLU (\(\alpha=0.3\)) | sigmoid |
| housing | 13 | 8 | 1 | yes | 121 | dense | LReLU (\(\alpha=0.3\)) | sigmoid |
| wine quality | 13 | 10 | 7 | yes | 217 | dense | LReLU (\(\alpha=0.3\)) | softmax |
| parkinsons | 21 | 10 | 1 | yes | 231 | dense | LReLU (\(\alpha=0.3\)) | sigmoid |
| car | 21 | 10 | 4 | yes | 264 | dense | LReLU (\(\alpha=0.3\)) | softmax |
| forest fires | 43 | 16 | 1 | yes | 721 | dense | LReLU (\(\alpha=0.3\)) | sigmoid |
| abalone | 10 | 36 | 28 | yes | 1432 | dense | LReLU (\(\alpha=0.3\)) | softmax |
| bank | 51 | 32 | 1 | yes | 1697 | dense | LReLU (\(\alpha=0.3\)) | softmax |
| bike | 61 | 32 | 1 | yes | 2017 | dense | LReLU (\(\alpha=0.3\)) | sigmoid |
| student performance | 99 | 32 | 1 | yes | 3233 | dense | LReLU (\(\alpha=0.3\)) | sigmoid |
| adult | 108 | 64 | 1 | yes | 7041 | dense | LReLU (\(\alpha=0.3\)) | softmax |
| mushroom | 117 | 64 | 1 | yes | 7617 | dense | LReLU (\(\alpha=0.3\)) | softmax |
| diabetic | 2369 | 32 | 3 | yes | 75939 | dense | LReLU (\(\alpha=0.3\)) | softmax |

Table 6: Model configurations
Descriptive and critical difference plots are then retrieved from these results to provide visual aid.
### Implementation
All implementations are done from first principles in Python 3.9 using Tensorflow 2.7 and Tensorflow Probability 0.15.0. The source code and data for this research is provided and made public at [https://github.com/arneschreuder/masters](https://github.com/arneschreuder/masters).
## 8 Results
This section provides the results of the empirical process that has been conducted. Detailed discussions follow on the outcomes of each experiment.
### Overview
This section provides a brief discussion on the general outcome of the empirical process as a whole and identifies some key aspects to be kept in mind when interpreting the results of the experiments.
Firstly, the BHH applies a form of online learning. As such, the BHH applies the learning mechanism during training in a single run of the training process. The training process is not repeated iteratively as is the case with some HHs.
Most of the training progress is observed to occur within the first five epochs. As a result, the BHH should apply most learning at the early stages of the training process. After five epochs, the training of most of the underlying FFNNs converges and little performance gain is observed after that point. Since this empirical process does not apply early stopping of the training process, the BHH will continue to explore the heuristic space beyond the five epoch mark.
The BHH does not implement a type of move-acceptance strategy where the application of a heuristic to an entity is only accepted if it leads to a better solution. In some cases, the BHH then selects heuristics that yield sub-optimal results, but is shown to mostly return to optimal solutions over a number of steps.
Given the stochastic nature of the heuristic selection mechanism, sufficient samples of the performance of each heuristic-entity combination in the performance log are required for the BHH to learn. This requirement is further strengthened by the Bayesian nature of the probabilistic model
implemented by the BHH. The probabilistic model implements _probability distributions of heuristic selection probabilities_ and, as such, insufficient samples in the performance log could reduce the selection process to a form of random search.
By default, the BHH baseline configuration has a reanalysis interval of 10, and a replay window size of 10, which is a small window to learn from. Despite the small reanalysis interval and the small replay window size, it should be observed that the BHH exploits small performance biases and finds small performance gains throughout.
### BHH vs. Low-Level Heuristics
Tables 7 and 8 provide the empirical results in ranked format. The performance rank is calculated as the average rank produced by each heuristic, across all datasets, for all independent runs and all epochs. The average rank across all epochs produces a view on the performance of the heuristics as it relates to the entire training process. Finally, a normalised average rank is provided for the overall performance of all heuristics at the bottom of the table. The normalised average rank is calculated as a discrete normalisation of the average rank achieved across all datasets, for all independent runs and epochs.
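A minimal sketch of the rank computation, assuming a matrix of test losses with one row per independent run and one column per heuristic (ties are broken arbitrarily here, which is a simplification):

```python
import numpy as np

def average_rank(losses):
    # argsort of argsort yields 0-based ranks per run; +1 makes rank 1 the
    # best (i.e. the lowest test loss).
    ranks = losses.argsort(axis=1).argsort(axis=1) + 1
    return ranks.mean(axis=0)  # average rank per heuristic across runs
```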
Tables 7 and 8 show that the _bhh_gd_ configuration produced the best results of the BHH variants and managed to perform well, producing generally good results across all datasets. The _bhh_gd_ configuration managed to produce results that are comparable to the top three heuristics for each dataset, while the _bhh_all_ and _bhh_mh_ produced average results compared to all the heuristics.
The normalised average ranks provided in Tables 7 and 8 show that the _bhh_gd_ configuration ranked fourth, while the _bhh_all_ and _bhh_mh_ configurations ranked sixth and eighth amongst all thirteen heuristic implementations respectively. These results show that the BHH generally performs well, but is not able to outperform the best heuristic for each dataset.
Figure 3 provides an illustration showing a descriptive plot of the average ranks achieved over all independent runs, for each heuristic, per dataset. The heuristics are ordered according to the normalised ranks presented in Tables 7 and 8. Figure 3 shows that the _bhh_gd_ heuristic achieved the lowest variance in average rank across all datasets, compared to the other heuristics. The aforementioned shows the generalisation capabilities of the BHH to multiple problems.
Figure 4 provides an illustration of the overall critical difference plots that illustrate the statistically significant differences in ranked performance for each heuristic as it relates to all datasets, across all independent runs and epochs. Although the outcomes of the _bhh_all_ and _bhh_mh_ configurations seem to produce average performance results, it should be noted that the performance difference between all heuristics is very small. Furthermore, the best configuration of the BHH, namely the _bhh_gd_ configuration, is statistically
**BHH vs. Low-Level Heuristics - Average Rank (Part A)**

| dataset | adagrad | adam | rmsprop | bhh_gd | nag | bhh_all |
|---|---|---|---|---|---|---|
| abalone | 2.2215 (±15.91) | 2.2980 (±18.87) | 4.6172 (±2.65) | 4.7032 (±2.108) | 4.2731 (±1.542) | 5.9376 (±2.399) |
| air_quality | 3.6109 (±2.259) | 5.4312 (±2.62) | 3.452 (±2.57) | 5.0817 (±2.762) | 3.8191 (±2.229) | 6.686 (±3.061) |
| bank | 2.5495 (±1.598) | 2.0796 (±1.587) | 3.4645 (±2.209) | 4.8828 (±1.702) | 4.2871 (±1.732) | 6.2419 (±2.157) |
| bike | **1.7204 (±1.384)** | 3.6925 (±4.003) | 4.2822 (±4.58) | 3.8441 (±4.398) | 6.4516 (±1.02) | 4.2151 (±4.361) |
| car | 4.7634 (±0.938) | 16.2228 (±1.405) | 2.2308 (±1.409) | 3.3432 (±1.35) | 6.0785 (±0.799) | 3.5624 (±3.185) |
| diabetic | **2.7966 (±1.569)** | 7.1484 (±2.227) | 6.7376 (±2.577) | 5.2269 (±2.186) | **1.8118 (±1.118)** | **9.3968 (±3.022)** |
| fish_toxicity | **4.2645 (±2.614)** | **3.0022 (±2.415)** | **3.3946 (±2.429)** | 5.4118 (±2.65) | 5.8914 (±2.629) | 5.829 (±2.856) |
| forest_fires | 5.1559 (±2.922) | 4.2688 (±2.984) | 5.0355 (±3.143) | 4.6935 (±2.759) | 5.6882 (±2.215) | 5.4839 (±3.107) |
| housing | **3.4481 (±2.022)** | 3.3334 (±1.819) | 3.6946 (±2.166) | 4.4742 (±2.312) | 4.6839 (±2.658) | 4.3763 (±2.438) |
| iris | 6.3346 (±1.6) | 3.5839 (±2.511) | 6.2088 (±1.921) | 4.7473 (±2.275) | 3.5548 (±2.125) | 5.2204 (±1.301) |
| mushroom | 4.6565 (±1.053) | 2.1344 (±1.883) | 2.4656 (±1.530) | 3.8448 (±1.602) | 6.3323 (±0.891) | 3.6688 (±2.469) |
| parkinsons | 2.677 (±1.697) | 2.2333 (±1.74) | 3.5565 (±2.492) | 4.572 (±1.934) | 7.5355 (±1.44) | 4.3839 (±1.861) |
| student_performance | 2.5634 (±1.912) | 11.3978 (±2.178) | 12.4312 (±1.344) | 5.6624 (±3.57) | 3.1895 (±2.12) | 5.8634 (±3.159) |
| wine_quality | 3.2806 (±1.931) | 2.1118 (±1.666) | 3.6301 (±1.731) | 4.7882 (±2.105) | 4.1505 (±1.916) | 5.1925 (±1.951) |
| avg rank | 3.5512 (±2.25) | 3.9314 (±3.423) | 4.5691 (±3.517) | 4.6346 (±2.364) | 4.8394 (±2.384) | 5.4327 (±2.9) |
| normalised avg rank | 1 | 2 | 3 | 4 | 5 | 6 |

Table 7: Empirical results showing normalised average rank and statistics for the top six low-level heuristics and three heuristic pool variants of the BHH baseline configuration, across multiple datasets, for all independent runs and epochs.
[Table 8: BHH vs. Low-Level Heuristics - Average Rank (Part B), covering the remaining heuristics (adadelta, bhh_mh, sgd, pso, ga, momentum and de) across all datasets, independent runs and epochs; the table body is truncated in the source.]
outperformed overall by only Adagrad and Adam, yielding statistically comparable results to RMSProp and NAG. It should be noted that the standalone low-level heuristics already produce good results in general across all datasets. In this particular case, producing better performance outcomes can be hard to achieve. However, as mentioned previously, the BHH offers a generalisation capability across all datasets, which is advantageous.
Another observation that can be made is that the gradient-based heuristics generally performed much better than the MHs on all datasets. State of the art methods for training FFNNs, such as Adam, utilise gradient-based approaches that have been proven to work well on many occasions [8]. Exploration of the heuristic space leads the BHH to consider other heuristics during the training process, which could possibly result in worse performances at times. A suggestion to improve on these results is to include a move-acceptance strategy where heuristic progressions are discarded if they fail to produce better results.
Figure 3: Descriptive plots for the average ranks of all low-level heuristics compared to three heuristic pool variants of the BHH baseline configuration, per dataset, across all independent runs and epochs.
## 9 Conclusion
The research done in this study stems from the difficult and tedious process of selecting the best heuristic for training FFNNs. The research presented in this article identified the possibility of using a different approach, referred to as HHs, to automate the heuristic selection process.
This research set out to develop a novel high-level heuristic that utilises probability theory in an online learning setting to drive the automatic heuristic selection process.
For the experimental group that compares the BHH baseline with a number of low-level heuristics, it was found that the _bhh_gd_ configuration, which contains only gradient-based heuristics in the heuristic pool, performed the best out of the BHH variants, achieving an overall rank of fourth amongst the thirteen heuristics that were implemented and executed on fourteen datasets. The _bhh_gd_ configuration produced performance results close to those of the best low-level heuristics and was statistically outperformed only by the top two low-level heuristics. The _bhh_all_ configuration, which contains both gradient-based heuristics and MHs in the heuristic pool, achieved an overall rank of sixth, and the _bhh_mh_ configuration, which contains only MHs in the heuristic pool, achieved an overall rank of eighth.
Although the _bhh_gd_ configuration produced performance results comparable to the best low-level heuristics, the _bhh_all_ and _bhh_mh_ configurations produced average results. It was found that, in general, gradient-based heuristics produced the best results; as such, it is understandable that the _bhh_gd_ configuration yielded the best performance outcomes among the different BHH variants that were implemented. Although the BHH variants were not able
Figure 4: Critical difference plots for the average ranks of all low-level heuristics compared to three heuristic pool variants of the baseline BHH, across all datasets, runs and epochs.
to produce better results than the top low-level heuristics, the BHH variants still effectively trained the underlying FFNNs and produced good training outcomes overall. It was shown that the _bhh_gd_ configuration produced the lowest variance in rank between datasets out of all of the heuristics implemented, giving the BHH the ability to generalise well to other problems.
Finally, it was shown that the BHH provides a mechanism whereby prior expert knowledge can be injected before training starts. Future research can exploit this knowledge and provide a significant bias towards heuristics that are known to perform well on particular problem types. Future research can also investigate the scalability and effectiveness of the BHH on other model architectures such as _deep neural networks_ (DNNs). Furthermore, the selection mechanism of the BHH can be extended to select not just heuristics from a heuristic pool, but also different model architectures from a model architecture pool.
## Appendix A Naive Bayes
This section aims to dissect the probabilistic model that is presented in Equation (1). The BHH implements a form of Naive Bayes classifier, and thus independence between events can be assumed. The following derived _probability mass functions_ (PMFs) are provided as fundamental building blocks to show the mechanism by which the BHH learns.
The independence between events for class label \(\mathbf{H}\) simply yields the PMF of the Multinomial distribution, as presented below:
\[\begin{split} P(\mathbf{H}|\mathbf{\theta})&\propto\prod_{i=1}^{I}\prod_{k=1}^{K}P(h_{i,k}|\theta_{k})\\ &\propto\prod_{i=1}^{I}\prod_{k=1}^{K}\theta_{k}^{\mathbb{1}_{1}(h_{i,k})}\\ &\propto\prod_{k=1}^{K}\theta_{k}^{\sum_{i=1}^{I}\mathbb{1}_{1}(h_{i,k})}\\ &\propto\prod_{k=1}^{K}\theta_{k}^{N_{k}}\end{split}\tag{A.1}\]
where \(N_{k}\) is a summary variable such that \(N_{k}=\sum_{i=1}^{I}\mathbb{1}_{1}(h_{i,k})\), denoting
the count of occurrences of the event \(h_{i}\) taking on class \(k\) in \(I\) independent, identical runs.
The independence between events \(\mathbf{E}\), given class label \(\mathbf{H}\), is denoted by the likelihood of \(\mathbf{E}\), conditional to the occurrence of heuristic \(k\) and model parameter \(\mathbf{\phi}\) as follows:
\[\begin{split} P(\mathbf{E}|\mathbf{H};\mathbf{\phi})&\propto\prod_{i=1}^{I}\prod_{j=1}^{J}\prod_{k=1}^{K}P(e_{i,j,k}|h_{i,k};\phi_{j,k})\\ &\propto\prod_{i=1}^{I}\prod_{j=1}^{J}\prod_{k=1}^{K}\phi_{j,k}^{\mathbb{1}_{1}(e_{i,j,k})\,\mathbb{1}_{1}(h_{i,k})}\\ &\propto\prod_{j=1}^{J}\prod_{k=1}^{K}\phi_{j,k}^{\sum_{i=1}^{I}\mathbb{1}_{1}(e_{i,j,k})\,\mathbb{1}_{1}(h_{i,k})}\\ &\propto\prod_{j=1}^{J}\prod_{k=1}^{K}\phi_{j,k}^{N_{j,k}}\end{split}\tag{A.2}\]
where \(N_{j,k}\) is a summary variable such that \(N_{j,k}=\sum_{i=1}^{I}\mathbb{1}_{1}(e_{i,j,k})\mathbb{1}_{1}(h_{i,k})\), denoting the count of occurrences of the events \(e_{i}\) taking on class \(j\) and \(h_{i}\) taking on class \(k\), i.e. the count of occurrences of both entity \(j\) and heuristic \(k\) occurring together in \(I\) independent, identical runs.
Finally, the independence between events for the performance criteria \(\mathbf{C}\), given class label \(\mathbf{H}\), is denoted by the likelihood of \(\mathbf{C}\), conditional to the occurrence of heuristic \(k\) and model parameter \(\mathbf{\psi}\) as given below:
\[\begin{split} P(\mathbf{C}|\mathbf{H};\mathbf{\psi})&\propto\prod_{i=1}^{I}\prod_{k=1}^{K}P(c_{i,k}|h_{i,k};\psi_{k})\\ &\propto\prod_{i=1}^{I}\prod_{k=1}^{K}\psi_{k}^{\mathbb{1}_{1}(c_{i,k})\mathbb{1}_{1}(h_{i,k})}(1-\psi_{k})^{\mathbb{1}_{0}(c_{i,k})\mathbb{1}_{1}(h_{i,k})}\\ &\propto\prod_{k=1}^{K}\psi_{k}^{\sum_{i=1}^{I}\mathbb{1}_{1}(c_{i,k})\mathbb{1}_{1}(h_{i,k})}(1-\psi_{k})^{\sum_{i=1}^{I}\mathbb{1}_{0}(c_{i,k})\mathbb{1}_{1}(h_{i,k})}\\ &\propto\prod_{k=1}^{K}\psi_{k}^{N_{1,k}}(1-\psi_{k})^{N_{0,k}}\\ &\propto\prod_{k=1}^{K}\psi_{k}^{N_{1,k}}(1-\psi_{k})^{(N_{k}-N_{1,k})}\end{split}\tag{A.3}\]
where \(N_{k}\) is the same summary variable as described for Equation (A.1). \(N_{1,k}\) is a summary variable such that \(N_{1,k}=\sum_{i=1}^{I}\mathbbm{1}_{1}(c_{i,k})\mathbbm{1}_{1}(h_{i,k})\), denoting the count of occurrences of the events \(c_{i}\) taking on a success (i.e. \(c_{i}=1\)) and \(h_{i}\) taking on class \(k\), i.e. the count of occurrences of both succeeding in the performance criteria and heuristic \(k\) occurring together in \(I\) independent, identical runs. Similarly, \(N_{0,k}=N_{k}-N_{1,k}\) denotes the count of occurrences of the events \(c_{i}\) taking on a failure (i.e. \(c_{i}=0\)) and \(h_{i}\) taking on class \(k\).
Equations (A.1), (A.2) and (A.3) can be substituted into the proportional evaluation of the predictive model as given in Equation (1), resulting in
\[\begin{split} P(\mathbf{H}|\mathbf{E},\mathbf{C};\mathbf{\theta},\mathbf{\phi},\mathbf{\psi})&\propto P(\mathbf{E}|\mathbf{H};\mathbf{\phi})\,P(\mathbf{C}|\mathbf{H};\mathbf{\psi})\,P(\mathbf{H}|\mathbf{\theta})\\ &\propto\left[\prod_{j=1}^{J}\prod_{k=1}^{K}\phi_{j,k}^{N_{j,k}}\right]\left[\prod_{k=1}^{K}\psi_{k}^{N_{1,k}}(1-\psi_{k})^{(N_{k}-N_{1,k})}\right]\left[\prod_{k=1}^{K}\theta_{k}^{N_{k}}\right]\end{split}\tag{A.4}\]
Consider the practical implementation of the predictive model as shown in Equation (A.4). Computationally, Equation (A.4) will underflow on a real computer if the resulting probabilities are very small.
### Numerical Stability
When Equation (A.4) is evaluated, numerical underflow occurs if the resulting probabilities of its parts are very small. Multiplication of multiple fractional parameters leads to an even smaller fractional number, and probabilities might be very low at some points during training. A solution to the aforementioned problem is to apply the _log-sum-exp_ trick. The transformation of Equation (A.4) using the log-sum-exp trick is given as
\[LSE(P(h_{k}|e_{j},c_{1};\mathbf{\theta},\mathbf{\phi},\mathbf{\psi}))=\ln(\exp(\phi_{j,k})+\exp(\psi_{k})+\exp(\theta_{k}))\tag{A.5}\]
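For illustration, the standard way to avoid this underflow is to work in log space: a product of small probabilities becomes a sum of logarithms, and the log-sum-exp function itself is stabilised by shifting by the maximum. A minimal sketch (a generic identity, not the specific BHH implementation):

```python
import numpy as np

def log_product(log_terms):
    # ln(a * b * c) = ln(a) + ln(b) + ln(c): products of probabilities
    # become sums of log-probabilities, which do not underflow.
    return np.sum(log_terms)

def logsumexp(x):
    # Stable ln(sum(exp(x))): shifting by the maximum keeps the
    # exponentials in a representable range.
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))
```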
id: 2306.00010
title: Trainable and Explainable Simplicial Map Neural Networks
authors: Eduardo Paluzo-Hidalgo, Miguel A. Gutiérrez-Naranjo, Rocio Gonzalez-Diaz
published_date: 2023-05-29T19:40:30Z
link: http://arxiv.org/abs/2306.00010v3
abstract: Simplicial map neural networks (SMNNs) are topology-based neural networks with interesting properties such as universal approximation ability and robustness to adversarial examples under appropriate conditions. However, SMNNs present some bottlenecks for their possible application in high-dimensional datasets. First, SMNNs have precomputed fixed weights and no SMNN training process has been defined so far, so they lack generalization ability. Second, SMNNs require the construction of a convex polytope surrounding the input dataset. In this paper, we overcome these issues by proposing an SMNN training procedure based on a support subset of the given dataset and replacing the construction of the convex polytope by a method based on projections to a hypersphere. In addition, the explainability capacity of SMNNs and an effective implementation are also newly introduced in this paper.

# Explainability in Simplicial Map Neural Networks
###### Abstract
Simplicial map neural networks (SMNNs) are topology-based neural networks with interesting properties such as universal approximation capability and robustness to adversarial examples under appropriate conditions. However, SMNNs present some bottlenecks for their possible application in high dimensions. First, no SMNN training process has been defined so far. Second, SMNNs require the construction of a convex polytope surrounding the input dataset. In this paper, we propose an SMNN training procedure based on a support subset of the given dataset and a method based on projection to a hypersphere as a replacement for the convex polytope construction. In addition, the explainability capacity of SMNNs is also introduced for the first time in this paper.
## 1 Introduction
In recent years, Artificial Intelligence (AI) methods in general, and Machine Learning methods in particular, have reached a level of success on real-life problems that was unexpected only a few years ago. Many different areas have contributed to this development. Among them, we can cite research on new theoretical algorithms, the increasing computational power of the latest generation of hardware, and quick access to huge amounts of data. Such a combination of factors has led to the development of more and more complex self-regulated AI methods.
Many of the AI models currently used are based on backpropagation algorithms, which train and regulate themselves to achieve a goal, such as classification, recommendation, or prediction. These self-regulating models achieve some kind of knowledge as they successfully evaluate test data independent of the data used to train them. In general, such knowledge is not understandable by humans in its current representation, since it is implicit in a large number of parameters and their relationships, which are immeasurable to the human mind. Due to its lack of explainability, the application of AI models generates rejection among citizens, and many governments are imposing limits on their use. At the same time, experts in social sciences, medicine, and many other areas do not usually trust the decisions made by AI models since they cannot follow the argument that leads to the final decision.
To fill the gap between the recent development of AI models and their social use, many researchers have focused on the development of Explainable Artificial Intelligence (XAI), which consists of a set of techniques to provide clear, understandable, transparent, intelligible, trustworthy, and interpretable explanations of the decisions, predictions, and reasoning processes made by the AI models, rather than just presenting their output, especially in domains where AI decisions can have significant consequences on human life.
Some of the recent applications of XAI to real-life problems include healthcare [4, 11], laws [2], radiology [8], medical imaging [9], and education [10]. In general, these approaches to XAI follow the main guideline of representing the implicit knowledge in the AI model in a human-readable fashion. For example, some methods try to provide an explanation through data visualization [1] (by plotting feature importance, the local representation of a specific observation, etc.). Other methods try to provide an explanation by using other AI models that are more human-readable (such as decision trees or linear regression [17]). A global taxonomy of interpretable AI, with the aim of unifying terminology to achieve clarity and efficiency in the definition of regulations for the development of ethical and reliable AI, can be found in [7]. A nice introduction and general vision can be found in [12]. Another clarifying paper with
definitions, concepts, and applications of XAI is [13].
In this paper, we provide a contribution to XAI from a topological perspective. In this way, we provide a trainable version of the so-called Simplicial Map Neural Networks (SMNNs), which were introduced in [14] as a constructive approach to the problem of approximating a continuous function on a compact set in a triangulated space. SMNNs were built as two-hidden-layer feedforward networks where the set of weights was precomputed. The architecture of the network and the computation of the set of weights were based on a combinatorial topology tool called simplicial maps and the triangulation of the space. From an XAI point of view, an SMNN is an explainable model, since all the decision steps used to compute the output of the network are understandable, transparent, and therefore trustworthy.
This paper extends the concept of SMNNs, which is based on considering the \(n\)-dimensional space of the input labeled dataset (where \(n\) is the number of features) as a triangulable metric space, building a combinatorial structure (a simplicial complex) on top of the dataset, and constructing a neural network based on a simplicial map defined between such a simplicial complex and a simplex representing the label space (see details below). One of the drawbacks of this approach is the calculation of the exact weights of the neural network, which makes the whole computation extremely expensive. In this paper, we propose a method to approximate such weights based on optimization methods, with competitive results. A second drawback of the definition of SMNNs is the set of points chosen as the vertices of the triangulation. In this paper, we also provide a study on these points (the so-called _support_ points) and propose alternatives to make SMNNs more efficient.
The paper is organized as follows. First, some concepts of computational topology and the definition of SMNNs are recalled in Section 2. Next, Section 3 develops several technical details needed for the SMNN training process, which will be introduced in Section 5. Section 6 is devoted to the explainability of the model. Finally, the paper ends with some experiments and final conclusions.
## 2 Background
In this section, we assume that the reader is familiar with the basic concepts of computational topology. For a comprehensive presentation, we refer to [5].
### Simplicial complexes
Consider a finite set of points \(V=\{v^{1},\ldots,v^{\beta}\}\subset\mathbb{R}^{n}\), whose elements will be called vertices. A subset
\[\sigma=\langle v^{i_{0}},v^{i_{1}},\ldots,v^{i_{d}}\rangle\]
of \(V\) with \(d+1\) vertices (in general position) is called a \(d\)-simplex. The convex hull of the vertices of \(\sigma\) will be denoted by \(|\sigma|\) and corresponds to the set:
\[\left\{x\in\mathbb{R}^{n}:\:x=\sum_{j\in[0,d]}b_{j}(x)v^{i_{j}}\right\}\]
where \([\![0,d]\!]=\{0,1,\ldots,d\}\) and
\[b(x)=(b_{0}(x),b_{1}(x),\ldots,b_{d}(x))\]
are called the barycentric coordinates of \(x\) with respect to \(\sigma\), and satisfy that:
\[\sum_{j\in[0,d]}b_{j}(x)=1\:\:\text{and}\:\:b_{j}(x)\geq 0\:\forall j\in[\![0,d ]\!]\,.\]
For example, let us consider the 1-simplex \(\epsilon=\langle v^{i_{0}},v^{i_{1}}\rangle\) which is composed by two vertices of \(V\). Then \(|\epsilon|\) is the set of points in \(\mathbb{R}^{n}\) corresponding to the edge with endpoints \(v^{i_{0}}\) and \(v^{i_{1}}\), and if, for example, \(b(x)=(\frac{1}{2},\frac{1}{2})\) then \(x\) is the midpoint of \(|\epsilon|\).
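A minimal sketch for computing barycentric coordinates numerically, by solving the linear system formed by the vertex positions together with the sum-to-one constraint (the function name is illustrative):

```python
import numpy as np

def barycentric_coordinates(simplex_vertices, x):
    # Solve sum_j b_j * v^{i_j} = x subject to sum_j b_j = 1.
    V = np.asarray(simplex_vertices, dtype=float)  # shape (d+1, n)
    A = np.vstack([V.T, np.ones(len(V))])          # append the affine constraint
    rhs = np.append(np.asarray(x, dtype=float), 1.0)
    b, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return b  # all entries nonnegative exactly when x lies in |sigma|

# The midpoint of an edge has barycentric coordinates (1/2, 1/2):
print(barycentric_coordinates([[0.0, 0.0], [2.0, 0.0]], [1.0, 0.0]))
```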
A simplicial complex \(K\) with vertex set \(V\) consists of a finite collection of simplices satisfying that if \(\sigma\in K\) then either \(\sigma=\langle v\rangle\) for some \(v\in V\) or any face (that is, nonempty subset) of \(\sigma\) is a simplex of \(K\). In addition, if \(\sigma,\mu\in K\) then, either \(|\sigma|\cap|\mu|=\emptyset\) or \(|\sigma|\cap|\mu|=|\gamma|\) for some \(\gamma\in K\). The set \(\bigcup_{\sigma\in K}|\sigma|\) will be denoted by \(|K|\). A maximal simplex of \(K\) is a simplex that is not the face of any other simplex of \(K\). If the maximal simplices of \(K\) are all \(d\)-simplices then \(K\) is called a pure \(d\)-simplicial complex. See the example shown in Figure 1.
The barycentric coordinates of \(x\) with respect to the simplicial complex \(|K|\) are defined as the barycentric coordinates of \(x\) with respect to \(\sigma\in K\) such that \(x\in|\sigma|\). Let us observe that the barycentric coordinates of \(x\in|K|\) are unique.
An example of simplicial complexes is the Delaunay triangulation \(\operatorname{Del}(V)\) defined from the Voronoi diagram of a given finite set of vertices \(V\). The following lemma extracted from [3, page 48] is just an alternative definition of Delaunay triangulations.
Figure 1: On the left, two triangles that do not intersect in a common face (an edge or a vertex) are shown. On the right, the geometric representation \(|K|\) of a pure 2-simplicial complex \(K\) composed of three maximal \(2\)-simplices (the triangles \(\sigma^{1}\), \(\sigma^{2}\) and \(\sigma^{3}\)). The edge \(\mu^{2}\) is a common face of \(\sigma^{2}\) and \(\sigma^{3}\). The edge \(\mu^{1}\) is a face of \(\sigma^{1}\).
**Lemma 1** (The empty ball property [3]): _Any subset \(\sigma\) of \(V\) is a simplex of \(\operatorname{Del}(V)\) if and only if \(|\sigma|\) has a circumscribing open ball empty of points of \(V\)._
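For illustration, this is precisely the triangulation computed by `scipy.spatial.Delaunay`; a minimal sketch of building \(\operatorname{Del}(V)\) and locating the simplex containing a query point:

```python
import numpy as np
from scipy.spatial import Delaunay

V = np.random.default_rng(0).random((20, 2))  # 20 random vertices in the plane
tri = Delaunay(V)

print(tri.simplices)                 # maximal 2-simplices as vertex-index triples
print(tri.find_simplex([0.5, 0.5]))  # index of the containing simplex, -1 if outside
```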
### Simplicial maps
Let \(K\) be a pure \(n\)-simplicial complex and \(L\) a pure \(k\)-simplicial complex with vertex sets \(V\subset\mathbb{R}^{n}\) and \(W\subset\mathbb{R}^{k}\), respectively. The map \(\varphi^{(0)}:V\to W\) is called a _vertex map_ if it satisfies that the set obtained from \(\left\{\varphi^{(0)}(v^{i_{0}}),\ldots,\varphi^{(0)}(v^{i_{d}})\right\}\) after removing duplicated vertices is a simplex in \(L\) whenever \(\langle v^{i_{0}},\ldots,v^{i_{d}}\rangle\) is a simplex in \(K\). The vertex map \(\varphi^{(0)}\) always induces a continuous function, called a _simplicial map_ \(\varphi:|K|\to|L|\), which is defined as follows. Let \(b(x)=(b_{0}(x),\ldots,b_{n}(x))\) be the barycentric coordinates of \(x\in|K|\) with respect to \(\sigma=\langle v^{i_{0}},\ldots,v^{i_{n}}\rangle\in K\). Then
\[\varphi(x)=\sum_{j\in[\![0,n]\!]}b_{j}(x)\varphi^{(0)}(v^{i_{j}}).\]
Let us observe that \(\varphi(x)=\varphi^{(0)}(x)\) if \(x\in V\).
A special kind of simplicial maps used to solve classification tasks will be introduced in the next subsection.
### Simplicial maps for classification tasks
Next, we will show how a simplicial map can be used to solve a classification problem (see [16] for details). From now on, we will assume that the input dataset is a finite set of points \(V\) in \(\mathbb{R}^{n}\) together with a set of \(k\) labels \(\Lambda\) such that each \(v\in V\) is tagged with a label \(\lambda^{v}\) taken from \(\Lambda\).
The intuition is to add a new label to \(\Lambda\), called _unknown_, and to have a one-hot encoding representation \(W^{k+1}\subset\mathbb{R}^{k+1}\) of these \(k+1\) labels being:
\[W^{k+1}=\left\{\ell^{j}=(0,\overset{j}{\ldots},0,1,0,\overset{k-j}{\ldots},0):\ j\in[\![0,k]\!]\right\},\]
where the one-hot vector \(\ell^{j}\) encodes the \(j\)-th label of \(\Lambda\) for \(j\in[\![1,k]\!]\) and where \(\ell^{0}\) is the one-hot vector encoding the new _unknown_ label. Roughly speaking, the space surrounding the dataset is labeled as _unknown_. Let \(L\) denote the simplicial complex with vertex set \(W^{k+1}\) consisting of only one maximal \(k\)-simplex.
Let us now consider a convex polytope \(\mathcal{P}\) with a vertex set \(P\) surrounding the set \(V\). This polytope always exists since \(V\) is finite. Next, a map \(\varphi^{(0)}:V\cup P\to W^{k+1}\) mapping each vertex \(v\in V\) to the one-hot vector in \(W^{k+1}\) that encodes the label \(\lambda^{v}\) is defined. The vertices of \(P\) are sent to the vertex \(\ell^{0}\) (i.e., classified as _unknown_). Observe that \(\varphi^{(0)}\) is a vertex map.
Then, the Delaunay triangulation \(\operatorname{Del}(V\cup P)\) is computed. Note that \(|\operatorname{Del}(V\cup P)|=\mathcal{P}\). Finally, the simplicial map \(\varphi:\mathcal{P}\to|L|\) is induced by the vertex map \(\varphi^{(0)}\) as explained in Subsection 2.2.
**Remark 1**: _The space \(|L|\) can be interpreted as the discrete probability distribution space \(\Omega^{k+1}\) with \(k+1\) variables._
In Figure 2, on the left, we can see a dataset with four points \(V=\{b,c,k,d\}\), labeled as red and blue. The green points \(P=\{a,e,g,f\}\) are the vertices of a convex polytope \(\mathcal{P}\) containing \(V\) and are sent by \(\varphi^{(0)}\) to the green vertex \(\ell^{0}\) on the right. The simplicial complex \(\operatorname{Del}(V\cup P)\) is drawn on the left and consists of ten maximal 2-simplices. On the right, the simplicial complex \(L\) consists of one maximal 2-simplex. The dotted arrows illustrate some examples of \(\varphi:\mathcal{P}\to|L|\).
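The construction above can be sketched in a few lines of Python (assuming NumPy/SciPy; the helper below is illustrative and not the implementation of [16]). It locates the simplex containing \(x\), recovers the barycentric coordinates from SciPy's stored affine transforms, and combines the one-hot labels of the simplex vertices:

```python
import numpy as np
from scipy.spatial import Delaunay

def simplicial_map(x, points, one_hot_labels):
    # points: rows are the vertices of V u P; one_hot_labels: row t is
    # phi^(0)(omega^t).  Assumes x lies inside |Del(V u P)| = P.
    tri = Delaunay(points)
    s = int(tri.find_simplex(np.atleast_2d(x))[0])  # containing n-simplex
    T = tri.transform[s]                            # stored affine transform
    b = T[:-1] @ (np.asarray(x, dtype=float) - T[-1])
    b = np.append(b, 1.0 - b.sum())                 # barycentric coordinates
    return b @ one_hot_labels[tri.simplices[s]]     # phi(x), a point of |L|
```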
### Simplicial map neural networks
Artificial neural networks can be seen as parametrized real-valued mappings between multidimensional spaces. Such mappings are the composition of several maps (usually many of them) that can be structured in layers. In [16], the simplicial map \(\varphi\) defined above was represented as a two-hidden-layer feed-forward neural network \(\mathcal{N}_{\varphi}\). Such networks are the so-called _simplicial map neural networks_ (SMNNs).
In the original definition, the first hidden layer of SMNN computes the barycentric coordinates of \(\operatorname{Del}(V\cup P)\). However, if we precompute these barycentric coordinates, we can remove the first hidden layer and simplify the architecture of \(\mathcal{N}_{\varphi}\) as follows.
As before, consider an input dataset consisting of a finite set \(V\subset\mathbb{R}^{n}\) endowed with a set of \(k\) labels and a convex polytope \(\mathcal{P}\) with a set of vertices \(P\) surrounding \(V\). Let \(\operatorname{Del}(V\cup P)\) be the Delaunay triangulation with vertex set
\[V\cup P=\{\omega^{1},\ldots,\omega^{\alpha}\}\subseteq\mathbb{R}^{n}\,.\]
Then, \(\varphi^{(0)}:V\cup P\to W^{k+1}\) is a vertex map. Let us assume that, given \(x\in\mathcal{P}\), we precompute the barycentric coordinates \(b(x)=(b_{0}(x),\ldots,b_{n}(x))\in\mathbb{R}^{n+1}\) of \(x\) with respect to an \(n\)-simplex \(\sigma=\langle\omega^{i_{0}},\ldots,\omega^{i_{n}}\rangle\in\operatorname{Del}(V\cup P)\), and we have also precomputed the vector
\[\xi(x)=(\xi_{1}(x),\ldots,\xi_{\alpha}(x))\in\mathbb{R}^{\alpha}\]
Figure 2: Illustration of a simplicial map for a classification task.
satisfying that, for \(t\in[\![1,\alpha]\!]\), \(\xi_{t}(x)=b_{j}(x)\) if \(i_{j}=t\) for some \(j\in[\![0,n]\!]\), and \(\xi_{t}(x)=0\) otherwise.
Then, the SMNN \(\mathcal{N}_{\varphi}\) induced by \(\varphi\) that predicts the \(h\)-label of \(x\), for \(h\in[\![0,k]\!]\), has the following architecture:
* The number of neurons in the input layer is \(\alpha\).
* The number of neurons in the output layer is \(k+1\).
* The set of weights is represented as a \((k+1)\times\alpha\) matrix \(\mathcal{M}\) such that the \(t\)-th column of \(\mathcal{M}\) is \(\varphi^{(0)}(\omega^{t})\) for \(t\in[\![1,\alpha]\!]\).
Then,
\[\mathcal{N}_{\varphi}(x)=\arg\max_{h\in[\![0,k]\!]}\big(\mathcal{M}\cdot\xi(x)\big)_{h}\,.\]
SMNNs have several pros and cons. The advantages are that they are explainable (as we will see in Section 6), they can be transformed to be robust to adversarial examples [16], and their architecture can be reduced while maintaining accuracy [15]. Besides, they are invariant to transformations that preserve the barycentric coordinates (scale, rotation, symmetries, etc.).
Nevertheless, one drawback is that, as defined so far, the SMNN weights are untrainable, which causes overfitting and reduces the generalization capability of SMNNs. Furthermore, the computation of the barycentric coordinates of the points around \(V\) implies the calculation of the convex polytope \(\mathcal{P}\) surrounding \(V\). Finally, the computation of the Delaunay triangulation \(\operatorname{Del}(V\cup P)\) is costly if \(V\cup P\) has many points; its time complexity is \(O(n\log n+n^{\lceil\frac{d}{2}\rceil})\) (see [3, Chapter 4]).
In the next sections, we will propose some techniques to overcome the SMNN drawbacks while maintaining their advantages. We will see that one way to overcome the computation of the convex polytope \(\mathcal{P}\) is to consider a hypersphere \(S^{n}\) instead. We will also see how to avoid the use of the artificially created _unknown_ label. Furthermore, to reduce the cost of Delaunay calculations and add trainability to \(\mathcal{N}_{\varphi}\) to avoid overfitting, a subset \(U\subset V\) will be considered. The set \(V\) will be used to train and test a map \(\varphi_{U}^{(0)}:U\to\mathbb{R}^{k}\). Such a map will induce a continuous function \(\varphi_{U}:B^{n}\to|L|\) which approximates \(\varphi\).
## 3 The _unknown_ boundary and the function \(\varphi_{U}\)
In this section, we will see how to reduce the computation of the Delaunay triangulation used to construct SMNNs, and we will explain how to avoid the calculation of the convex polytope \(\mathcal{P}\) and the consideration of the _unknown_ label.
To reduce the cost of the Delaunay triangulation calculation and to add trainability to SMNNs to avoid overfitting, we will consider a subset \(U=\{u^{1},\ldots,u^{m}\}\subseteq V\). Furthermore, the set \(U\) is translated so that its center of mass is placed at the origin \(o\in\mathbb{R}^{n}\), and we will consider a hypersphere
\[S^{n}=\{w\in\mathbb{R}^{n}:\;||w||=R\}\]
satisfying that \(R>\max\{||v||:\,v\in V\}\). Then,
\[V\subset B^{n}=\{x\in\mathbb{R}^{n}:\;||x||\leq R\}\;\;\text{and}\;\;o\in| \operatorname{Del}(U)|\,.\]
Now, let us define and compute \(\xi(x)\in\mathbb{R}^{m}\) for any \(x\in B^{n}\) as follows.
First, let us consider the boundary of \(\operatorname{Del}(U)\), denoted as \(\delta\operatorname{Del}(U)\), which consists of the set of \((n-1)\)-simplices that are faces of exactly one maximal simplex of \(\operatorname{Del}(U)\).
Second, let \(b(x)=(b_{0}(x),\ldots,b_{n}(x))\in\mathbb{R}^{n+1}\) be the barycentric coordinates of \(x\) with respect to some \(n\)-simplex \(\sigma=\langle\omega^{0},\ldots,\omega^{n}\rangle\), where \(\sigma\) is computed as follows. If \(x\in|\operatorname{Del}(U)|\) then \(\sigma\) is an \(n\)-simplex of \(\operatorname{Del}(U)\) such that \(x\in|\sigma|\). Otherwise, \(\sigma\) is a new \(n\)-simplex defined by the vertices of a simplex of \(\delta\operatorname{Del}(U)\) and a new vertex consisting of the projection of \(x\) to \(S^{n}\). Specifically, if \(x\not\in|\operatorname{Del}(U)|\) then \(\sigma\) is computed in the following way:
1. Consider the set \[\Gamma=\big\{\mu\in\delta\operatorname{Del}(U):\;(N\cdot u^{i_{0}}-c)(N\cdot x-c)<0\big\}\] where \(N\) is the normal vector to the hyperplane containing \(\mu=\langle u^{i_{1}},\ldots,u^{i_{n}}\rangle\), \(c=N\cdot u^{i_{1}}\), and \(u^{i_{0}}\in U\) is such that \(\langle u^{i_{0}},u^{i_{1}},\ldots,u^{i_{n}}\rangle\in\operatorname{Del}(U)\) (i.e., \(x\) and \(u^{i_{0}}\) lie on opposite sides of that hyperplane).
2. Compute the point \(w^{x}=R\frac{x}{||x||}\in S^{n}\).
3. Find \(\sigma=\langle w^{x},u^{i_{1}},\ldots,u^{i_{n}}\rangle\) such that \[\mu=\langle u^{i_{1}},\ldots,u^{i_{n}}\rangle\in\Gamma\;\;\text{and}\;\;x\in| \sigma|.\] Observe that, by construction, \(\mu\) always exists since \(|\operatorname{Del}(U)|\) is a convex polytope.
Then, \(\xi(x)=(\xi_{1}(x),\ldots,\xi_{m}(x))\) is the point in \(\mathbb{R}^{m}\) satisfying that, for \(t\in[\![1,m]\!]\),
\[\xi_{t}(x)=\left\{\begin{array}{cl}b_{j}(x)&\text{if $u^{t}=\omega^{j}$ for some $j\in[\![0,n]\!]$,}\\ 0&\text{otherwise.}\end{array}\right.\]
Observe that \(\xi(x)\) always exists and is unique. An example of points \(x\) and \(w^{x}\) and simplex \(\mu\) is shown in Figure 3.
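In code, once the simplex \(\sigma\) and the coordinates \(b(x)\) have been determined, assembling \(\xi(x)\) is a simple scatter operation; a sketch, with the convention that index \(-1\) marks the auxiliary vertex \(w^{x}\), whose coordinate does not appear in \(\xi(x)\):

```python
import numpy as np

def xi_vector(b, support_ids, m):
    # b: barycentric coordinates of x w.r.t. sigma (length n+1);
    # support_ids[j]: the index t with u^t = omega^j, or -1 for w^x.
    xi = np.zeros(m)
    for j, t in enumerate(support_ids):
        if t >= 0:
            xi[t] = b[j]
    return xi
```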
The following property holds.
**Lemma 2** (Continuity): _Let \(x\in B^{n}\). Then,_
\[\lim_{y\to x}\xi(y)=\xi(x).\]
**Proof.** If \(x\in|\operatorname{Del}(U)|\), then the result holds due to the continuity of the barycentric coordinate transformation. If
\(x\not\in|\operatorname{Del}(U)|\), since \(o\in|\operatorname{Del}(U)|\), then \(||x||\neq 0\). Therefore, for \(y\) close to \(x\), \(||y||\neq 0\) and \(w^{y}=R\frac{y}{||y||}\in\mathbb{R}^{n}\). Besides,
\[\lim_{y\to x}w^{y}=w^{x}\]
and therefore
\[\lim_{y\to x}\xi(y)=\xi(x)\,.\]
\(\square\)
Let us observe that, thanks to the new definition of \(\xi(x)\) for \(x\in B^{n}\), a map \(\varphi_{U}^{(0)}:U\to\mathbb{R}^{k}\) induces a continuous function \(\varphi_{U}:B^{n}\to|L|\) defined for any \(x\in B^{n}\) as:
\[\varphi_{U}(x)=\operatorname{softmax}\big(\sum_{t\in[\![1,m]\!]}\xi_{t}(x)\varphi_{U}^{(0)}(u^{t})\big)\]
where for \(z=(z_{1},\ldots,z_{k})\in\mathbb{R}^{k}\),
\[\operatorname{softmax}(z)=\Big{(}\tfrac{e^{z_{1}}}{\sum_{h\in\llbracket 1,k \rrbracket}e^{z_{h}}},\ldots,\tfrac{e^{z_{k}}}{\sum_{h\in\llbracket 1,k \rrbracket}e^{z_{h}}}\Big{)}.\]
The following result establishes that if the vertices of the convex polytope \(\mathcal{P}\) are _far enough_ from the vertices of \(V\) and \(\varphi_{U}^{{}_{(0)}}(v)=\varphi^{{}_{(0)}}(v)\) for all \(v\in V\), then the behavior of \(\varphi_{U}\) is the same as that of \(\varphi\) inside \(|\operatorname{Del}(V)|\).
**Lemma 3** (Consistency): _Let \(\varphi^{(0)}\) be the map defined in Subsection 2.3. If \(U=V\), \(\operatorname{Del}(V)\subseteq\operatorname{Del}(V\cup P)\) and \(\varphi_{U}^{(0)}=\varphi^{(0)}|_{V}\), then_
\[\varphi_{U}(x)=\varphi(x)\ \ \text{for all}\ x\in|\operatorname{Del}(V)|\,.\]
**Proof.** Observe that if \(U=V\), \(\operatorname{Del}(V)\subseteq\operatorname{Del}(V\cup P)\) and \(\varphi_{U}^{(0)}(v)=\varphi^{(0)}(v)\) for all \(v\in V\), then, for any \(x\in|\operatorname{Del}(V)|\), we have:
\[\varphi(x)=\sum_{t\in[\![1,m]\!]}\xi_{t}(x)\varphi^{(0)}(v^{t})=\sum_{t\in[\![1,m]\!]}\xi_{t}(x)\varphi_{U}^{(0)}(v^{t}).\]
Therefore, \(\varphi_{U}(x)=\operatorname{softmax}\big{(}\varphi(x)\big{)}=\varphi(x)\). \(\square\)
## 4 Measuring the similarity between \(\varphi_{U}\) and \(\varphi\)
One of the keys to our study is the identification of the points of \(\mathbb{R}^{n}\) lying inside a given simplex with the set of all probability distributions with \(n+1\) support values. In this way, the barycentric coordinates of a point can be seen as a probability distribution.
From this point of view, given \(x\in B^{n}\), then \(\varphi(x)\) and \(\varphi_{U}(x)\) are both in the set \(|L|\) of probability distributions with \(k\) support points. This is why the categorical cross-entropy loss function \(\mathcal{L}\) will be used to compare the similarity between \(\varphi\) and \(\varphi_{U}\). Specifically, for \(v\in V\), \(\mathcal{L}\) is defined as:
\[\mathcal{L}(\varphi_{U},\varphi,v)=-\sum_{{}_{h\in\llbracket 1,k\rrbracket}}y_{h} \log(s_{h})\,,\]
where \(\varphi^{{}_{(0)}}(v)=(y_{1},\ldots,y_{k})\) and \(\varphi_{U}(v)=(s_{1},\ldots,s_{k})\).
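A direct sketch of this loss for a single point \(v\), assuming NumPy and a small constant guarding the logarithm:

```python
import numpy as np

def cross_entropy(y, s, eps=1e-12):
    # y = phi^(0)(v): one-hot target; s = phi_U(v): predicted distribution.
    return -np.sum(y * np.log(s + eps))
```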
The following lemma establishes a specific set \(U\subset V\) and a function \(\tilde{\varphi}_{U}\) such that \(\mathcal{L}(\tilde{\varphi}_{U},\varphi,v)=0\) for all \(v\in V\).
**Lemma 4** (\(\mathcal{L}\)**-optimum simplicial map): _Let \(U\subseteq V\) such that for all \(u\in U\), we have that \(\tilde{\varphi}_{U}^{{}_{(0)}}(u)=\varphi^{{}_{(0)}}(u)\) and there exists \(v\in V\cup P\) with \(\varphi^{{}_{(0)}}(v)\neq\varphi^{{}_{(0)}}(u)\) and \(\langle v,u\rangle\in\operatorname{Del}(V)\). Then, \(\mathcal{L}(\tilde{\varphi}_{U},\varphi,v)=0\) for all \(v\in V\)._
**Proof.** As proved in [15], under the assumptions stated in this lemma, we have that \(\tilde{\varphi}_{U}(v)=\varphi^{(0)}(v)\) for all \(v\in V\) and then \(\mathcal{L}(\tilde{\varphi}_{U},\varphi,v)=0\).
\(\square\)
Unfortunately, to compute such a \(\tilde{\varphi}_{U}\) we would need to compute the entire triangulation \(\operatorname{Del}(V)\), which is computationally expensive, as we have already mentioned above.
Thus, the novel idea of this paper is to learn the function \(\varphi_{U}^{(0)}\) using gradient descent, in order to minimize the loss function \(\mathcal{L}(\varphi_{U},\varphi,v)\) for any \(v\in V\). The following result provides an expression of the gradient of \(\mathcal{L}\) in terms of the functions \(\varphi_{U}\) and \(\varphi\), and the set \(V\).
**Theorem 1**: _Let \(U=\{u^{1},\ldots,u^{m}\}\) be a subset with \(m\) elements taken from a finite set of points \(V\subset\mathbb{R}^{n}\) tagged with labels taken from a set of \(k\) labels. Let \(\varphi_{U}:B^{n}\to|L|\) and \(\varphi^{(0)}:V\to W^{k}\). Let us consider that_
\[\big{\{}\,\varphi_{U}^{{}_{(0)}}(u^{t})=(p_{1}^{t},\ldots,p_{k}^{t}):\,t\in \llbracket 1,m\rrbracket\,\big{\}}\]
_is a set of variables. Then, for \(v\in V\),_
\[\tfrac{\partial\mathcal{L}(\varphi_{U},\varphi,v)}{\partial p_{j}^{t}}=(s_{j}- y_{j})\xi_{t}(v)\]
_where \(j\in\llbracket 1,k\rrbracket\), \(t\in\llbracket 1,m\rrbracket\), \(\varphi^{{}_{(0)}}(v)=(y_{1},\ldots,y_{k})\) and \(\varphi_{U}(v)=(s_{1},\ldots,s_{k})\)._
**Proof.** We have:
\[\tfrac{\partial\mathcal{L}(\varphi_{U},\varphi,v)}{\partial p_{j}^{t}} = -\tfrac{\partial\big{(}\sum_{h\in\llbracket 1,k\rrbracket}y_{h}\ \log(s_{h})\big{)}}{ \partial p_{j}^{t}}\] \[= -\sum_{{}_{h\in\llbracket 1,k\rrbracket}}y_{h}\ \tfrac{\partial\log(s_{h})}{ \partial p_{j}^{t}}\] \[= -\sum_{{}_{h\in\llbracket 1,k\rrbracket}}y_{h}\ \tfrac{\partial\log(s_{h})}{ \partial z_{j}}\tfrac{\partial z_{j}}{\partial p_{j}^{t}}\,.\]
Since \(s_{h}=\frac{e^{z_{h}}}{\sum_{t\in[\![1,k]\!]}e^{z_{t}}}\), then
\[\frac{\partial\log(s_{h})}{\partial z_{j}} = \frac{\partial\log\left(\frac{e^{z_{h}}}{\sum_{t\in[\![1,k]\!]}e^{z_{t}}}\right)}{\partial z_{j}} = \frac{\partial\log(e^{z_{h}})}{\partial z_{j}}-\frac{\partial\log\left(\sum_{t\in[\![1,k]\!]}e^{z_{t}}\right)}{\partial z_{j}} = \frac{\partial z_{h}}{\partial z_{j}}-\frac{1}{\sum_{t\in[\![1,k]\!]}e^{z_{t}}}\,\sum_{t\in[\![1,k]\!]}\frac{\partial e^{z_{t}}}{\partial z_{j}} = \delta_{hj}-\frac{e^{z_{j}}}{\sum_{t\in[\![1,k]\!]}e^{z_{t}}}=\delta_{hj}-s_{j}\,.\]
Besides, since \(z_{j}=\sum_{h\in[\![1,m]\!]}\xi_{h}(v)p_{j}^{h}\), then
\[\frac{\partial z_{j}}{\partial p_{j}^{t}}=\sum_{h\in[\![1,m]\!]}\xi_{h}(v)\frac{\partial p_{j}^{h}}{\partial p_{j}^{t}}=\xi_{t}(v)\,.\]
Finally, using that \(\sum_{h\in[\![1,k]\!]}y_{h}=1\),
\[\frac{\partial\mathcal{L}(\varphi_{U},\varphi,v)}{\partial p_{j}^{t}} = -\sum_{h\in[\![1,k]\!]}y_{h}(\delta_{hj}-s_{j})\xi_{t}(v) = -\xi_{t}(v)\big(\sum_{h\in[\![1,k]\!]}y_{h}\delta_{hj}-s_{j}\sum_{h\in[\![1,k]\!]}y_{h}\big) = (s_{j}-y_{j})\xi_{t}(v)\,.\]
\(\square\)
## 5 Training SMNNs
Let us now see how we add trainability to the SMNN \(\mathcal{N}_{\varphi_{U}}\) induced by \(\varphi_{U}\).
First, assuming that \(U=\{u^{1},\ldots,u^{m}\}\) has \(m\) elements, \(\mathcal{N}_{\varphi_{U}}\) is a multiclass perceptron with an input layer with \(m\) neurons that predicts the \(h\)-th label, for \(h\in[\![1,k]\!]\), using the formula:
\[\mathcal{N}_{\varphi_{U}}(x)=\arg\max_{h\in[\![1,k]\!]}\,\operatorname{softmax}\big(\mathcal{M}\cdot\xi(x)\big)_{h}\]
where \(\mathcal{M}=(p_{j}^{t})_{j\in[\![1,k]\!],t\in[\![1,m]\!]}\) is a matrix of weights and \(\xi(x)\in\mathbb{R}^{m}\) is obtained from the barycentric coordinates of \(x\in B^{n}\) as in Section 3. Let us observe that
\[\operatorname{softmax}\big{(}\mathcal{M}\cdot\xi(x)\big{)}\in|L|.\]
The idea is to modify the values of
\[\varphi_{U}^{(0)}(u^{t})=(p_{1}^{t},\ldots,p_{k}^{t})\,\,\,\text{for}\,\,u^{t} \in U\,\,\text{and}\,\,t\in[\![1,m]\!],\]
in order to obtain new values for \(\varphi_{U}(v)\) in a way that the error \(\mathcal{L}(\varphi_{U},\varphi,v)\) decreases. We will do so while avoiding recomputing \(\operatorname{Del}(U)\) or the barycentric coordinates \((b_{0}(v),\ldots,b_{n}(v))\) for each \(v\in V\) during the training process.
This way, given \(v\in V\), if \(v\in|\operatorname{Del}(U)|\), we compute the maximal simplex \(\sigma=\langle u^{i_{0}},\ldots,u^{i_{n}}\rangle\in\operatorname{Del}(U)\) such that \(v\in|\sigma|\) and \(i_{h}\in[\![1,m]\!]\) for \(h\in[\![0,n]\!]\). If \(v\not\in|\operatorname{Del}(U)|\), we compute \(w\in S^{n}\) and the simplex \(\sigma=\langle w,u^{i_{1}},\ldots,u^{i_{n}}\rangle\) such that \(v\in|\sigma|\) and \(i_{h}\in[\![1,m]\!]\) for \(h\in[\![1,n]\!]\). Then, we compute the barycentric coordinates \(b(v)\) of \(v\) with respect to \(\sigma\) and the point \(\xi(v)=(\xi_{1}(v),\ldots,\xi_{m}(v))\in\mathbb{R}^{m}\) as in Section 3.
Using gradient descent, we update the variables \(p_{j}^{t}\) for \(j\in[\![1,k]\!]\) and \(t\in[\![1,m]\!]\) as follows:
\[p_{j}^{t}:=p_{j}^{t}-\eta\frac{\partial\mathcal{L}(\varphi_{U},\varphi,v)}{ \partial p_{j}^{t}}=p_{j}^{t}-\eta(s_{j}-y_{j})\xi_{t}(v).\]
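The whole update for one training point thus amounts to a softmax evaluation followed by a rank-one gradient step; a minimal sketch, storing the variables \(p_{j}^{t}\) as a \(k\times m\) matrix:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())              # shifted for numerical stability
    return e / e.sum()

def train_step(P, xi_v, y, eta=0.1):
    # P: (k x m) matrix of variables p_j^t; xi_v = xi(v); y = phi^(0)(v).
    s = softmax(P @ xi_v)                # phi_U(v)
    P -= eta * np.outer(s - y, xi_v)     # dL/dp_j^t = (s_j - y_j) xi_t(v)
    return P
```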
## 6 Explainability
In this section, we provide insight into the explainability capability of SMNNs.
More specifically, explainability is provided based on similarities and dissimilarities of the point \(x\) to be explained with the points corresponding to the vertices of the simplex \(\sigma\) containing it. The barycentric coordinates of \(x\) with respect to \(\sigma\) can thus be considered indicators of how much each vertex of \(\sigma\) contributes to the prediction for \(x\). Multiplying the barycentric coordinates of \(x\) by the trained map \(\varphi_{U}^{(0)}\) evaluated at the vertices of \(\sigma\) then yields the contribution of the label assigned to each vertex of \(\sigma\) during training to the label assigned to \(x\) by the SMNN.
As an illustration, let us consider the Iris dataset1 as a toy example and split it into a training set (\(75\%\)) and a test set (\(25\%\)). Since this section focuses on explainability, let us take \(U=V\), containing 112 points. Then, we initialize \(p_{j}^{t}\) with a random value in \([0,1]\), for \(j\in[\![1,4]\!]\) and \(t\in[\![1,112]\!]\).
Footnote 1: [https://archive.ics.uci.edu/ml/datasets/iris](https://archive.ics.uci.edu/ml/datasets/iris)
After the training process, the SMNN reached \(92\%\) accuracy on the test set. Once the SMNN is trained, we may be interested in determining why a certain output is given for a specific point \(x\) in the test set.
As previously mentioned, the explanation of why the SMNN assigns a label to \(x\) is based on the vertices of the simplex of \(\operatorname{Del}(U)\) containing \(x\). Therefore, the first step is to find the maximal simplex \(\sigma\) that contains the point \(x\) to be explained. As an example, in Figure 4, the point \(x=(5.5,2.4,3.8,1.1)\in\mathbb{R}^{4}\) in the test set is chosen to be explained, predicted by the SMNN as class 2. The coordinates of the five vertices (\(u^{26}\), \(u^{55}\), \(u^{69}\), \(u^{84}\) and \(u^{95}\)) of the simplex \(\sigma\) containing \(x\), together with the classes they belong to, are shown in the table at the bottom of Figure 4. The contribution of the class assigned to each vertex of \(\sigma\) to the class assigned to \(x\) by the SMNN is displayed in the bar chart, and is measured in terms of \(\xi_{t}(x)\times p_{j}^{t}\) for \(j\in[\![0,2]\!]\) and \(t\in\{26,55,69,84,95\}\). We have noticed that the contributions can be positive or negative. For example, the vertex with the most influence in the classification contributed negatively toward classifying the test point into the first and third classes, but positively toward the second class, which is the correct classification. Let us remark that the Euclidean distance between points is not the only thing that makes the contribution of a vertex of \(\sigma\) higher. During the training process, the weights are adjusted considering the different simplices to which that vertex belongs. Therefore, even if two points are equally close to the point to be explained, they need not contribute the same. In this example, the coordinates of vertices \(84\) and \(95\) are close to the test point, but their contributions are very different in magnitude.
## 7 Experiments
In this section, we provide experiments showing the performance of SMNNs. SMNNs are applied to different datasets and compared with a feedforward neural network.
We split the given dataset into a training set composed of \(75\%\) of the data and a test set of \(25\%\). Then, the training set was used to train a feedforward neural network and an SMNN. In the case of SMNNs, a representative dataset (see [6]) of the training set was used as the \(U\) support set. Let us now describe the methodology more specifically for each dataset used.
The first dataset, used for visualization purposes, is a spiral dataset composed of \(400\) two-dimensional points. Figure 5 shows different sizes of the support subset together with the associated Delaunay triangulation. As can be seen, in this case, the accuracy increases with the size of the support subset. We can also appreciate that the topology of the dataset is captured by the representative support set, leading to successful classification.
The second experiment uses synthetic datasets of size \(5000\) generated with the Scikit-learn implementation. See Figure 6, where a small two-dimensional example is shown. The generated datasets have from \(2\) to \(5\) features, and results are provided in Table 1. SMNNs were trained using the gradient descent algorithm and the cross-entropy loss function for \(500\) epochs, using different representative parameters to choose the support set \(U\). Then, a two-hidden-layer feedforward (\(32\times 16\)) neural network with ReLU activation functions was trained on the same datasets using the Adam training algorithm. The results provided are the mean of \(100\) repetitions. In Table 1, we can see that both neural networks had similar performance, but SMNNs generally reach a lower loss. The variance in the results was of the order of \(10^{-8}\) to \(10^{-5}\) in the case of SMNNs and of \(10^{-5}\) to \(10^{-2}\) in the case of the feedforward neural network.
## 8 Conclusions
Simplicial map neural networks provide a combinatorial approach to artificial intelligence. Their simplicial-based definition provides nice properties such as easy construction and robustness against adversarial examples. In this work, we have extended their definition to provide a trainable version of this architecture and exploited its explainability capability.
In this paper, we only applied standard training algorithms to study its performance. As future work, it would be interesting to study different kinds of training algorithms and choices of the support set. In addition, studies seeking more efficient implementations that avoid Delaunay triangulations should be performed.
Figure 4: Table with the five flowers taken from the training set that influence the classification of a given flower in the test set, i.e. the vertices of \(\sigma\).
## Supplementary material
Code with the experiments and an illustrative jupyter notebook are provided as supplementary material with this submission.
## 9 Acknowledgments
The work was supported in part by European Union HORIZON-CL4-2021-HUMAN-01-01 under grant agreement 101070028 (REXASI-PRO), and by AEI/10.13039/501100011033 under grants PID2019-107339GB-100 and TED2021-129438B-I00/European Union NextGenerationEU/PRTR.
| \(n\) | \(\varepsilon\) | \(m\) | SMNN Acc. | SMNN Loss | NN Acc. | NN Loss |
| --- | --- | --- | --- | --- | --- | --- |
| 2 | 1000 | 3560 | 0.87 | 0.64 | 0.91 | 0.23 |
| 2 | 100 | 1282 | 0.90 | 0.51 | 0.91 | 0.23 |
| 2 | 50 | 626 | 0.90 | 0.42 | 0.91 | 0.23 |
| 2 | 10 | 53 | 0.87 | 0.33 | 0.91 | 0.23 |
| 3 | 1000 | 3750 | 0.76 | 0.66 | 0.80 | 0.61 |
| 3 | 100 | 3664 | 0.76 | 0.66 | 0.80 | 0.61 |
| 3 | 50 | 3252 | 0.77 | 0.65 | 0.80 | 0.61 |
| 3 | 10 | 413 | 0.81 | 0.50 | 0.80 | 0.61 |
| 4 | 50 | 3728 | 0.69 | 0.67 | 0.72 | 0.69 |
| 4 | 10 | 1410 | 0.73 | 0.64 | 0.72 | 0.69 |
| 4 | 5 | 316 | 0.73 | 0.57 | 0.72 | 0.69 |
| 4 | 2 | 26 | 0.72 | 0.56 | 0.72 | 0.69 |
| 5 | 50 | 3743 | 0.77 | 0.66 | 0.80 | 0.91 |
| 5 | 10 | 1699 | 0.81 | 0.63 | 0.80 | 0.91 |
| 5 | 5 | 323 | 0.80 | 0.52 | 0.80 | 0.91 |
| 5 | 2 | 17 | 0.74 | 0.53 | 0.80 | 0.91 |

Table 1: Accuracy score and loss values obtained after training both an SMNN and a feedforward neural network (the NN values are shared across the rows with the same \(n\)). The experiments were repeated \(100\) times, and the results provided are the mean of the accuracy values over the repetitions. The size \(m\) of the subset considered to compute the Delaunay triangulation also varies in each experiment, depending on a representation parameter which is the maximum distance from the origin to the farthest point in the dataset, plus \(\frac{1}{2}\), divided by the value provided in the column \(\varepsilon\). The feedforward neural network is composed of two hidden layers of size \(32\) and \(16\), respectively, with ReLU activation functions and an output layer with a softmax activation function. The datasets used are synthetic datasets for binary classification with \(n\) features.
Figure 5: Spiral dataset for binary classification. On each figure, the support points are the squared-shaped points, the test data is triangle-shaped and the training data is circle-shaped.
Figure 6: Two-dimensional binary classification synthetic dataset. Classes: Blue and yellow. Triangle-shaped points are test data and square-shaped points are representative points used for training. The diamond-shaped point is the vertex on the hypersphere (the blue circumference) used to classify the test point (surrounded by red circumference) outside the triangulation. |
2310.05842 | Robust Angular Synchronization via Directed Graph Neural Networks | The angular synchronization problem aims to accurately estimate (up to a
constant additive phase) a set of unknown angles $\theta_1, \dots,
\theta_n\in[0, 2\pi)$ from $m$ noisy measurements of their offsets
$\theta_i-\theta_j \;\mbox{mod} \; 2\pi.$ Applications include, for example,
sensor network localization, phase retrieval, and distributed clock
synchronization. An extension of the problem to the heterogeneous setting
(dubbed $k$-synchronization) is to estimate $k$ groups of angles
simultaneously, given noisy observations (with unknown group assignment) from
each group. Existing methods for angular synchronization usually perform poorly
in high-noise regimes, which are common in applications. In this paper, we
leverage neural networks for the angular synchronization problem, and its
heterogeneous extension, by proposing GNNSync, a theoretically-grounded
end-to-end trainable framework using directed graph neural networks. In
addition, new loss functions are devised to encode synchronization objectives.
Experimental results on extensive data sets demonstrate that GNNSync attains
competitive, and often superior, performance against a comprehensive set of
baselines for the angular synchronization problem and its extension, validating
the robustness of GNNSync even at high noise levels. | Yixuan He, Gesine Reinert, David Wipf, Mihai Cucuringu | 2023-10-09T16:37:19Z | http://arxiv.org/abs/2310.05842v2 | # Robust Angular Synchronization via Directed Graph Neural Networks
###### Abstract
The angular synchronization problem aims to accurately estimate (up to a constant additive phase) a set of unknown angles \(\theta_{1},\ldots,\theta_{n}\in[0,2\pi)\) from \(m\) noisy measurements of their offsets \(\theta_{i}-\theta_{j}\) mod \(2\pi\). Applications include, for example, sensor network localization, phase retrieval, and distributed clock synchronization. An extension of the problem to the heterogeneous setting (dubbed \(k\)-synchronization) is to estimate \(k\) groups of angles simultaneously, given noisy observations (with unknown group assignment) from each group. Existing methods for angular synchronization usually perform poorly in high-noise regimes, which are common in applications. In this paper, we leverage neural networks for the angular synchronization problem, and its heterogeneous extension, by proposing GNNSync, a theoretically-grounded end-to-end trainable framework using directed graph neural networks. In addition, new loss functions are devised to encode synchronization objectives. Experimental results on extensive data sets demonstrate that GNNSync attains competitive, and often superior, performance against a comprehensive set of baselines for the angular synchronization problem and its extension, validating the robustness of GNNSync even at high noise levels.
## 1 Introduction
The group synchronization problem has received considerable attention in recent years, as a key building block of many computational problems. Group synchronization aims to estimate a collection of group elements, given a small subset of potentially noisy measurements of their pairwise ratios \(\Upsilon_{i,j}=g_{i}\,g_{j}^{-1}\). Typical applications (for different underlying groups) include rotation-averaging in 3D computer vision (Arrigoni & Fusiello, 2020; Janco & Bendory, 2022) (over the group SO(3) of 3D rotations), the molecule problem in structural biology (Cucuringu et al., 2012b) (over SO(3)), solving jigsaw puzzles (Huroyan et al., 2020) (over the group \(\mathbb{Z}_{4}\) of the integers \(\{0,1,2,3\}\) with addition mod 4 as the group operation), recovering a global ranking from pairwise comparisons (He et al., 2022a; Cucuringu, 2016) (over the group \(\mathbb{Z}_{n}\), resp., SO(2)), and sensor network localization (Cucuringu et al., 2012a) (over the Euclidean group of rigid motions \(\text{Euc}(2)=\mathbb{Z}_{2}\times\text{SO(2)}\times\mathbb{R}^{2}\)).
An important special case is _angular synchronization_, also referred to as _phase synchronization_, which can be viewed as group synchronization over SO(2). The angular synchronization problem aims at obtaining an accurate estimation (up to a constant additive phase) for a set of unknown angles \(\theta_{1},\ldots,\theta_{n}\in[0,2\pi)\) from \(m\) noisy measurements of their pairwise offsets
Figure 1: Sensor network localization map.
\(\theta_{i}-\theta_{j}\mod\ 2\pi\). This problem has a wide range of applications, such as distributed clock synchronization over wireless networks (Giridhar & Kumar, 2006), image reconstruction from pairwise intensity differences (Yu, 2009; 2011), phase retrieval (Forstner et al., 2020; Iwen et al., 2020), and sensor network localization (SNL) (Cucuringu et al., 2012a). In engineering, the SNL problem seeks to reconstruct the 2D coordinates of a cloud of points from a sparse set of pairwise noisy Euclidean distances; in typical divide-and-conquer approaches that aid with scalability, one first computes a local embedding of nearby points (denoted as _patches_) and is left with the task of stitching the patches together in a globally consistent embedding (Cucuringu et al., 2012a). Fig. 1 is an example of SNL on the U.S. map, where our method recovers city locations (in blue) and aims to match ground-truth locations (in red). Most works in the SNL literature that focus on the methodology development consider only purely synthetic data sets in their experiments; here we consider a real-world data set (actual 2D layout with different levels of densities of cities across the U.S. map), and add synthetic noise to perturb the local patch embeddings for testing the robustness to noise of the angular synchronization component.
An extension of angular synchronization to the heterogeneous setting is \(k\)-_synchronization_, introduced in Cucuringu & Tyagi (2022), and motivated by real-world graph realization problems (GRP) and ranking. GRP aims to recover coordinates of a cloud of points in \(\mathbb{R}^{d}\), from a sparse subset (edges of a graph) of noisy pairwise Euclidean distances (the case \(d=2\) is the above SNL problem). The motivation for \(k\)-_synchronization_ arises in structural biology, where the distance measurements between pairs of atoms may correspond to \(k\) different configurations of the molecule, in the case of molecules with multiple conformations. In ranking applications, the \(k=2\) sets of disjoint pairwise measurements may correspond to two different judges, whose latent rankings we aim to recover.
A key limitation of existing methods for angular synchronization is their poor performance in the presence of considerable noise. High noise levels are not unusual; measurements in SO(3) can have large outliers in certain biological settings (cryo-EM and NMR spectroscopy), see for example Cucuringu et al. (2012b). Therefore, we need new methods to push the boundary of signal recovery when there is a high level of noise. While neural networks (NNs), in principle, could be trained to address high noise regimes, the angular synchronization problem is not directly amenable to a standard NN architecture due to the directed graph (digraph) structure of the underlying data measurement process and the underlying group structure; hence the need for a customized graph neural network (GNN) architecture and loss function for this task. Here we propose a GNN method called GNNSync for angular synchronization, with a novel cycle loss function, which downweights noisy observations and explicitly enforces cycle consistency as a quality measure. GNNSync's novelty does not lie in simply applying a data-driven NN to this task, but rather in proposing a framework for handling the pairwise comparisons encoded in a digraph, accounting for the underlying SO(2) group structure, and designing a loss function for increased robustness to noise and outliers, with theoretical support.
Our main contributions are summarized as follows.
\(\bullet\) We demonstrate how the angular synchronization problem can be recast as a theoretically-grounded directed graph learning task by first incorporating the inductive biases of classical estimators within the design of a more robust GNN architecture, called GNNsync, and then pairing with a novel training loss that exploits cycle consistency to help disambiguate unknown angles.
\(\bullet\) We perform extensive experiments comparing GNNSync with existing state-of-the-art algorithms from the angular synchronization and \(k\)-synchronization literature, across a variety of synthetic outlier models at various density and noise levels, and on a real-world application. GNNSync attains leading performance, especially in high noise regimes, validating its robustness to noise.
## 2 Related work
### Angular synchronization
The seminal work of Singer (2011) introduced spectral and semidefinite programming (SDP) relaxations for angular synchronization. For the spectral relaxation, the estimated angles are given by the eigenvector corresponding to the largest eigenvalue of a Hermitian matrix \(\mathbf{H}\), whose entries are given by \(\mathbf{H}_{i,j}=\exp(\iota\mathbf{A}_{i,j})\mathbbm{1}\left(\mathbf{A}_{i,j} \neq 0\right)\), where \(\iota\) is the imaginary unit, and \(\mathbf{A}_{i,j}\) is the observed potentially noisy offset \(\theta_{i}-\theta_{j}\) mod \(2\pi\). Singer (2011) also provided an SDP relaxation involving the same matrix \(\mathbf{H}\), and empirically demonstrated that the spectral and SDP relaxations yield similar experimental results. A row normalization was introduced to \(\mathbf{H}\) prior to the eigenvector computation by Cucuringu et al. (2012a), which showed improved results. Cucuringu et al. (2012b) generalized this approach to
the 3D setting \(\text{Euc}(3)=\mathbb{Z}_{2}\times\text{SO}(3)\times\mathbb{R}^{3}\), and incorporated into the optimization pipeline the ability to operate in a semi-supervised setting, where certain group elements are known a-priori. Cucuringu and Tyagi (2022) extended the angular synchronization problem to a heterogeneous setting, to the so-called \(k\)_-synchronization_ problem, whose goal is to estimate \(k\) sets of angles simultaneously, given only the graph union of noisy pairwise offsets, which we also explore in our experiments. The key idea in their work is to estimate the \(k\) sets of angles from the top \(k\) eigenvectors of the angular embedding matrix \(\mathbf{H}\).
Boumal (2016) modeled the angular (phase) synchronization problem as a least-squares non-convex optimization problem, and proposed a modified version of the power method called the Generalized Power Method (GPM), which is straightforward to implement and free of parameter tuning. GPM often attains leading performance among baselines in our experiments, and the iterative steps in the GPM method motivated the design of the projected gradient steps in our GNNSync architecture. However, GPM is not directly applicable to \(k\)-synchronization with \(k>1\) while GNNSync is. For \(k=1,\) GNNSync tends to perform significantly better than GPM at high noise levels. Bandeira et al. (2017) studied the tightness of the maximum likelihood semidefinite relaxation for angular synchronization, where the maximum likelihood estimate is the solution to a nonbipartite Grothendieck problem over the complex numbers. A truncated least-squares approach was proposed by Huang et al. (2017) that minimizes the discrepancy between the estimated angle differences and the observed differences under some constraints. Gao and Zhao (2019) tackled the angular synchronization problem with a multi-frequency approach. Liu et al. (2023) unified various group synchronization problems over subgroups of the orthogonal group. Filbir et al. (2021) provided recovery guarantees for eigenvector relaxation and semidefinite convex relaxation methods for weighted angular synchronization. Lerman and Shi (2022) applied a message-passing procedure based on cycle consistency information, to estimate the corruption levels of group ratios and consequently solve the synchronization problem, but the method is focused on the restrictive setting of adversarial or uniform corruption and sufficiently small noise. In addition, Lerman and Shi (2022) requires post-processing based on the estimated corruption levels to obtain the group elements, while GNNSync is trained end-to-end. Maunu and Lerman (2023) utilized energy minimization ideas, with a variant converging linearly to the ground truth rotations.
### Directed graph neural networks
Digraph node embeddings can be effectively learned via directed graph neural networks (He et al., 2022c). For learning such an embedding, Tong et al. (2020) constructed a GNN using higher-order proximity. Zhang et al. (2021) built a complex Hermitian Laplacian matrix and proposed a spectral digraph GNN. He et al. (2022b) introduced imbalance objectives for digraph clustering. Our GNNSync framework can readily incorporate any existing digraph neural network.
### Relationship with other group synchronization methods
Angular synchronization outputs can be used to obtain global rankings by using a one-dimensional ordering. To this end, recovering rankings of \(n\) objects from pairwise comparisons can be viewed as group synchronization over \(\mathbb{Z}_{n}\). To recover global rankings from pairwise comparisons, GNNRank (He et al., 2022a) adopted an unfolding idea to add an inductive bias from Fogel et al. (2014) to the NN architecture. Inspired by He et al. (2022a), we adapt their framework to borrow strength from solving a related problem. We adapt their "innerproduct" variant to \(k\)-synchronization, remove the 1D ordering at the end of the GNNRank framework, and rescale the estimated quantities to the range \([0,2\pi)\). We also borrow strength from the projected gradient steps in GPM (Boumal, 2016) and add projected gradient steps to our GNNSync architecture. Another key novelty is that we devise novel objectives, which reflect the angular structure of the data, to serve as our training loss functions. The architectures are also very different: While in GNNRank the proximal gradient steps play a vital role from an unrolling perspective, and the whole architecture could be viewed as an unrolling of the SerialRank algorithm, here, although we borrow strength from the GPM method, the whole architecture is different from merely unrolling GPM. Furthermore, the baselines serve as initial guesses for the "proximal baseline" variant in GNNRank, but serve as input node features in our approach.
Other methods have been introduced for group synchronization, but mostly in the context of SO(3). Shi and Lerman (2020) proposed an efficient algorithm for synchronization over SO(3) under high levels of corruption and noise. Shi et al. (2022) provided a novel quadratic programming formulation for estimating the corruption levels, but again its focus is on SO(3). Unrolled algorithms (which are NNs) were introduced for SO(3) in Janco and Bendory (2022). While an adaptation to SO(2) may be possible in principle, as its objective functions are based on the level of agreement between the
estimated angles and ground-truth, its experiments require ground-truth during training, usually not available in practice. In contrast, our GNNsync framework can be trained without any known angles.
## 3 Problem definition
The _angular synchronization_ problem aims at obtaining an accurate estimation (up to a constant additive phase) for a set of \(n\) unknown angles \(\theta_{1},\ldots,\theta_{n}\in[0,2\pi)\) from \(m\) noisy measurements of their offsets \(\theta_{i}-\theta_{j}\) mod \(2\pi\), for \(i,j\in\{1,\ldots,n\}\). We encode the noisy measurements in a digraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where each of the \(n\) elements of the node set \(\mathcal{V}\) has as attribute an angle \(\theta_{i}\in[0,2\pi)\). The edge set \(\mathcal{E}\) represents pairwise measurements of the angular offsets \((\theta_{i}-\theta_{j})\) mod \(2\pi\). The weighted directed graph has a corresponding adjacency matrix \(\mathbf{A}\) with \(\mathbf{A}_{i,j}=(\theta_{i}-\theta_{j})\) mod \(2\pi\geq 0\) where at most one of \(\mathbf{A}_{i,j}\) and \(\mathbf{A}_{j,i}\) can be nonzero by construction; this representation is used to reduce computational complexity. In general, the matrix \(\mathbf{A}\) is not symmetric. Estimating the unknown angles from noisy offsets amounts to assigning an estimate \(r_{i}\in[0,2\pi)\) to each node \(i\in\mathcal{V}\).
An extension of the above problem to the heterogeneous setting is the \(k\)_-synchronization_ problem, which is defined as follows. We are given only the graph union of \(k\) digraphs \(\mathcal{G}_{1},\ldots,\mathcal{G}_{k}\), with the same node set and disjoint edge sets, which encode noisy measurements of \(k\) sets \((\theta_{i,l}-\theta_{j,l})\) mod \(2\pi\), for \(l\in\{1,\ldots,k\},i,j\in\{1,\ldots,n\}\), of angle differences modulo \(2\pi\). Its adjacency matrix is denoted by \(\mathbf{A}\). The problem is to estimate these \(k\) sets of \(n\) unknown angles \(\theta_{i,l}\in[0,2\pi)\), \(\forall l\in\{1,\ldots,k\},i\in\{1,\ldots,n\},\) simultaneously. Note that we are given only \(\mathcal{G}=\mathcal{G}_{1}\cup\cdots\cup\mathcal{G}_{k}\) and the value of \(k\), and each edge in \(\mathcal{G}\) belongs to exactly one of \(\mathcal{G}_{1},\ldots,\mathcal{G}_{k}\). To unify notations, we view the normal angular synchronization problem as a special case of the more general \(k\)-synchronization problem where \(k=1\).
## 4 Loss and evaluation
### Loss and evaluation for angular synchronization
For a vector \(\mathbf{r}=[r_{1},\ldots,r_{n}]^{\top}\) with estimated angles as entries, we define \(\mathbf{T}=[(\mathbf{r}\mathbf{1}^{\top}-\mathbf{1}\mathbf{r}^{\top})\text{ mod }2\pi]\in\mathbb{R}^{n\times n}\). Then \(\mathbf{T}_{i,j}=(r_{i}-r_{j})\) mod \(2\pi\) estimates \(\mathbf{A}_{i,j}\). We only compare \(\mathbf{T}\) with \(\mathbf{A}\) at locations where \(\mathbf{A}\) has nonzero entries. We introduce the residual matrix \(\mathbf{M}\) with entries
\[\mathbf{M}_{i,j}=\min\left((\mathbf{T}_{i,j}-\mathbf{A}_{i,j})\text{ mod }2\pi,(\mathbf{A}_{i,j}-\mathbf{T}_{i,j})\text{ mod }2\pi\right)\]
if \(\mathbf{A}_{i,j}\neq 0\), and \(\mathbf{M}_{i,j}=0\) if \(\mathbf{A}_{i,j}=0\). Then our upset loss is defined as
\[\mathcal{L}_{\text{upset}}=\left\|\mathbf{M}\right\|_{F}/t, \tag{1}\]
where the subscript \(F\) means Frobenius norm, and \(t\) is the number of nonzero elements in \(\mathbf{A}\). Despite the non-differentiability of the loss function, using the concept of a limiting subdifferential from Li et al. (2020) we can give the following theoretical guarantee on the minimization of eq. (1); its proof is in Appendix (App.) A.1, where also the case of general \(k\) is discussed.
**Proposition 1**.: _Every local minimum of eq. (1) is a directional stationary point of eq. (1)._
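A minimal dense NumPy sketch of the loss in eq. (1), treating zero entries of \(\mathbf{A}\) as missing measurements:

```python
import numpy as np

def upset_loss(r, A):
    # r: length-n vector of estimated angles; A: observed offset matrix.
    T = np.mod(r[:, None] - r[None, :], 2 * np.pi)     # estimated offsets
    d = np.mod(T - A, 2 * np.pi)
    M = np.where(A != 0, np.minimum(d, np.mod(A - T, 2 * np.pi)), 0.0)
    return np.linalg.norm(M, "fro") / np.count_nonzero(A)
```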
For _evaluation_, we employ a Mean Square Error (MSE) function with angle corrections, considered in Singer & Shkolnisky (2011). As the offset measurements are unchanged if we shift all angles by a constant, denoting the ground-truth angle vector as \(\mathbf{R}\), this evaluation function can be written as
\[\mathcal{D}_{\text{MSE}}(\mathbf{r},\mathbf{R})=\min_{\theta_{0}\in[0,2\pi)} \sum_{i=1}^{n}[\min(\delta_{i}\text{ mod }2\pi,(-\delta_{i})\text{ mod }2\pi)]^{2}, \tag{2}\]
where \(\delta_{i}=r_{i}+\theta_{0}-\theta_{i},\ \forall i=1,\ldots,n\). Additional implementation details are provided in App. C.4.
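For reference, the minimisation over \(\theta_{0}\) in eq. (2) can be approximated by a brute-force grid search; a sketch (the exact scheme used in the paper is described in App. C.4):

```python
import numpy as np

def mse_with_shift(r, R, grid=3600):
    # r: estimated angles; R: ground-truth angles; both of length n.
    best = np.inf
    for theta0 in np.linspace(0.0, 2 * np.pi, grid, endpoint=False):
        d = np.mod(r + theta0 - R, 2 * np.pi)
        best = min(best, np.sum(np.minimum(d, 2 * np.pi - d) ** 2))
    return best
```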
### Cycle consistency relation
For noiseless observations, every cycle in the angular synchronization problem (\(k=1\)), or every cycle whose edges correspond to the same offset graph \(\mathcal{G}_{l}\) (\(k>1\)), satisfies the _cycle consistency_ relation that the angle sum mod \(2\pi\) is \(0\). For 3-cycles \((i,j,q)\), such that \(\mathbf{A}_{i,j}\cdot\mathbf{A}_{j,q}\cdot\mathbf{A}_{q,i}>0\), this leads to
\[(\mathbf{A}_{i,j}+\mathbf{A}_{j,q}+\mathbf{A}_{q,i})\text{ mod }2\pi=(\theta_{i}- \theta_{j}+\theta_{j}-\theta_{q}+\theta_{q}-\theta_{i})\text{ mod }2\pi=0,\]
as \((a+b)\bmod m=\big((a\bmod m)+(b\bmod m)\big)\bmod m\). Hence we obtain the 3-cycle condition
\[(\mathbf{A}_{i,j}+\mathbf{A}_{j,q}+\mathbf{A}_{q,i})\text{ mod }2\pi=0,\forall(i,j,q)\text{ \ such that }\mathbf{A}_{i,j}\cdot\mathbf{A}_{j,q}\cdot\mathbf{A}_{q,i}>0. \tag{3}\]
With \(\mathbb{T}=\{(i,j,q):\mathbf{A}_{i,j}\cdot\mathbf{A}_{j,q}\cdot\mathbf{A}_{q,i} >0\}\), we define the _cycle inconsistency level_\(\frac{1}{|\mathbb{T}|}\sum_{(i,j,q)\in\mathbb{T}}[(\mathbf{A}_{i,j}+ \mathbf{A}_{j,q}+\mathbf{A}_{q,i})\text{ mod }2\pi]\). We devise a loss function to minimize the cycle inconsistency level with reweighted edges.
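A direct \(O(n^{3})\) reference sketch of this quantity (a practical implementation would enumerate triangles more efficiently):

```python
import numpy as np
from itertools import permutations

def cycle_inconsistency(A):
    # Average (A_ij + A_jq + A_qi) mod 2*pi over the ordered triangles in T.
    n = A.shape[0]
    vals = [np.mod(A[i, j] + A[j, q] + A[q, i], 2 * np.pi)
            for i, j, q in permutations(range(n), 3)
            if A[i, j] * A[j, q] * A[q, i] > 0]
    return float(np.mean(vals)) if vals else 0.0
```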
### Loss and evaluation for general k-synchronization
The upset loss for general \(k\) is defined similarly as in Sec. 4.1. Recall that the observed graph \(\mathcal{G}\) has adjacency matrix \(\mathbf{A}\). Given \(k\) groups of estimated angles \(\{r_{1,l},\ldots,r_{n,l}\}\), \(l=1,\ldots,k\), we define the matrix \(\mathbf{T}^{(l)}\) with entries \(\mathbf{T}^{(l)}_{i,j}=(r_{i,l}-r_{j,l})\) mod \(2\pi\), for \(i,j\in\{1,\ldots,n\},l\in\{1,\ldots,k\}\). We define \(\mathbf{M}^{(l)}\) by \(\mathbf{M}^{(l)}_{i,j}=\min((\mathbf{T}^{(l)}_{i,j}-\mathbf{A}_{i,j})\) mod \(2\pi,(\mathbf{A}_{i,j}-\mathbf{T}^{(l)}_{i,j})\) mod \(2\pi)\) if \(\mathbf{A}_{i,j}\neq 0\), and \(\mathbf{M}^{(l)}_{i,j}=0\) if \(\mathbf{A}_{i,j}=0\). Define \(\mathbf{M}\) by \(\mathbf{M}_{i,j}=\min_{l\in\{1,\ldots,k\}}\mathbf{M}^{(l)}_{i,j}\). The upset loss is as in eq. (1), \(\mathcal{L}_{\text{upset}}=\left\|\mathbf{M}\right\|_{F}/t\).
In addition to \(\mathcal{L}_{\text{upset}}\), we introduce another option as a loss function based on the cycle consistency relation from Sec. 4.2, which adds a regularization that helps in guiding the learning process for certain challenging scenarios (e.g., with sparser \(\mathcal{G}\) or larger \(k\)). Since measurements are typically noisy, we first estimate the corruption level by entries in \(\mathbf{M}\), and use them to construct a confidence matrix \(\mathbf{C}\) by \(\mathbf{C}_{i,j}=\frac{1}{1+\mathbf{M}_{i,j}}\mathbbm{1}\left(\mathbf{A}_{i,j }\neq 0\right)\), then normalize the entries by \(\tilde{\mathbf{C}}_{i,j}=\mathbf{C}_{i,j}\frac{\sum_{u,v}\mathbf{A}_{u,v}}{ \sum_{u,v}\mathbf{A}_{u,v}\cdot\mathbf{C}_{u,v}}\). The normalization is chosen such that \(\sum_{i,j}\mathbf{A}_{i,j}\tilde{\mathbf{C}}_{i,j}=\sum_{u,v}\mathbf{A}_{u,v}\). Keeping the sum of edge weights constant is carried out in order to avoid reducing the cycle inconsistency level by only rescaling edge weights but not their relative magnitudes. Based on the confidence matrix \(\tilde{\mathbf{C}}\), we reweigh edges in \(\mathcal{G}\) to obtain an updated input graph, whose adjacency matrix is the Hadamard product \(\mathbf{A}\odot\tilde{\mathbf{C}}\). This graph attaches larger weights to edges \(\mathbf{A}_{i,j}\) for which \(\mathbf{T}^{(l)}_{i,j}\) is a good estimate when the edge \((i,j)\) belongs to graph \(\mathcal{G}_{l}\).
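A sketch of the confidence construction and its edge-weight-preserving normalisation, assuming dense NumPy arrays:

```python
import numpy as np

def confidence(A, M):
    # C_ij = 1/(1 + M_ij) on observed edges, rescaled so that the
    # reweighted total edge weight sum(A * C) matches sum(A).
    C = (A != 0) / (1.0 + M)
    return C * (A.sum() / (A * C).sum())
```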
As the graph assignment of an edge \((i,j)\) is not known during training, we estimate it by
\[g(i,j)=\arg\min_{l\in\{1,\ldots,k\}}\mathbf{M}^{(l)}_{i,j},\text{ and set }g(j,i)=g(i,j), \tag{4}\]
thus obtaining our estimated graphs \(\tilde{\mathcal{G}}_{1},\ldots,\tilde{\mathcal{G}}_{k}\), which are also edge disjoint. Next, aiming to minimize 3-cycle inconsistency of the updated input graph given our graph assignment estimates, we introduce a loss function denoted as the _cycle inconsistency loss_\(\mathcal{L}_{\text{cycle}}\); for simplicity, we only focus on 3-cycles (triangles). We interpret the matrix \(\tilde{\mathbf{A}}=(\mathbf{A}\odot\tilde{\mathbf{C}}-(\mathbf{A}\odot \tilde{\mathbf{C}})^{\top})\) mod \(2\pi\) as the adjacency matrix of another weighted directed graph \(\tilde{\mathcal{G}}\). The entry \(\tilde{A}_{i,j}\) of the new adjacency matrix approximates angular differences of a reweighted graph, with noisy observations downweighted. Note that we only reweigh the adjacency matrix in the cycle loss definition, but do not update the input graph. The underlying idea is that this updated denoised graph may display higher cycle consistency than the original graph. From our graph assignment estimates, we obtain estimated adjacency matrices \(\tilde{\mathbf{A}}^{(l)}\) for \(l\in\{1,\ldots,k\}\), where \(\tilde{\mathbf{A}}^{(l)}_{i,j}=1(g(i,j)=l)\tilde{\mathbf{A}}_{i,j}\). Let \(\mathbb{T}^{(l)}=\{(i,j,q):\tilde{A}^{(l)}_{i,j}\tilde{A}^{(l)}_{j,q}\tilde{A} ^{(l)}_{q,i}>0\}\) denote the set of all triangles in \(\tilde{\mathcal{G}}_{l}\), and set \(S^{(l)}_{i,j,q}=\tilde{\mathbf{A}}^{(l)}_{i,j}+\tilde{\mathbf{A}}^{(l)}_{j,q} +\tilde{\mathbf{A}}^{(l)}_{q,i}\), for \((i,j,q)\in\mathbb{T}^{(l)}\). We define
\[\mathcal{L}^{(l)}_{\text{cycle}}=\frac{1}{|\mathbb{T}^{(l)}|}\sum_{(i,j,q)\in \mathbb{T}^{(l)}}\min(S^{(l)}_{i,j,q}\text{ mod }2\pi,(-S^{(l)}_{i,j,q})\text{ mod }2\pi) \tag{5}\]
and set \(\mathcal{L}_{\text{cycle}}=\frac{1}{k}\sum_{l=1}^{k}\mathcal{L}^{(l)}_{\text{ cycle}}\). The default training loss for \(k\geqslant 2\) is \(\mathcal{L}_{\text{cycle}}\) or \(\mathcal{L}_{\text{upset}}\) alone; in the experiment section, we also report the performance of a variant based on \(\mathcal{L}_{\text{upset}}+\mathcal{L}_{\text{cycle}}\).
For evaluation, we compute \(\mathcal{D}_{\text{MSE}}\) with eq. (2), for each of the \(k\) sets of angles, and consider the average. As the ordering of the \(k\) sets can be arbitrary, we consider all permutations of \(\{1,\ldots,k\}\), denoted by \(perm(k)\). Denoting the ground-truth angle matrix as \(\mathbf{R}\), whose \((i,l)\) entry is the ground-truth angle \(\theta_{i,l}\), and the \(l\)-th entry of the permutation \(pe\) by \(pe(l)\), the final MSE value is
\[\mathcal{D}_{\text{MSE}}(\mathbf{r},\mathbf{R})=\frac{1}{k}\min_{peperm(k)}\sum _{l=1}^{k}\mathcal{D}_{\text{MSE}}(\mathbf{r}_{\cdot,pe(l)},\mathbf{R}_{ \cdot,l}). \tag{6}\]
Note that the MSE loss is not used during training, as we do not have any ground-truth supervision; the MSE formulation in eq. (2) is only used for evaluation. The lack of ground-truth information in the presence of noise is precisely what renders this problem very difficult. If any partial ground-truth information is available, then it can be incorporated into the loss function.
## 5 GNNSync architecture
### Obtaining directed graph embeddings
For obtaining digraph embeddings, any digraph GNN that outputs node embeddings can be applied, e.g. DIMPA by He et al. (2022b), the inception block model by Tong et al. (2020), and MagNet by
Zhang et al. (2021). Here we employ DIMPA; details are in App. A.2. Denoting the final node embedding matrix by \(\mathbf{Z}\in\mathbb{R}^{n\times kd}\), the embedding vector \(\mathbf{z}_{i}\) for a node \(i\) is \(\mathbf{z}_{i}=(\mathbf{Z})_{(i,:)}\in\mathbb{R}^{kd},\) the \(i^{\text{th}}\) row of \(\mathbf{Z}\).
### Obtaining initial estimated angles
To obtain the initial estimated angles for the angular synchronization problem, we introduce a trainable vector \(\mathbf{a}\) with dimension equal to the embedding dimension, then calculate the unnormalized estimated angles by the inner product of \(\mathbf{z}_{i}\) with \(\mathbf{a}\), plus a trainable bias \(b\), followed by a sigmoid layer to force positive values, and finally rescale the angles to \([0,2\pi)\); in short: \(r_{i}^{(0)}=2\pi\text{ sigmoid}(\mathbf{z}_{i}\cdot\mathbf{a}+b)\).
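A sketch of this read-out in PyTorch for \(k=1\) (the class name is illustrative):

```python
import torch
import torch.nn as nn

class AngleReadout(nn.Module):
    # Implements r_i^(0) = 2*pi*sigmoid(z_i . a + b) with trainable a, b.
    def __init__(self, d):
        super().__init__()
        self.a = nn.Parameter(torch.randn(d))
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, Z):          # Z: (n x d) node embedding matrix
        return 2 * torch.pi * torch.sigmoid(Z @ self.a + self.b)
```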
For general \(k\)-synchronization, we apply independent \(\mathbf{a},b\) values to obtain \(k\) different groups of initial angle estimates based on different columns of the node embedding matrix \(\mathbf{Z}\). In general, denote \(\mathbf{Z}_{i,u:v}\) as the \((v-u+1)\)-vector whose entries are from the \(i\)-th row and the \(u\)-th to \(v\)-th columns of the matrix \(\mathbf{Z}\). With a trainable vector \(\mathbf{a}^{(l)}\) for each \(l\in\{1,\dots,k\}\) with dimension equal to \(d\), we obtain the unnormalized estimated angles by the inner product of \(\mathbf{Z}_{i,(l-1)d+1:ld}\) with \(\mathbf{a}^{(l)}\), plus a trainable bias \(b_{l}\), followed by a sigmoid layer to force positive angle values, and then rescale the angles to \([0,2\pi)\); in short: \(r_{i,l}^{(0)}=2\pi\text{ sigmoid}(\mathbf{Z}_{i,(l-1)d+1:ld}\cdot\mathbf{a}^{(l)}+b_{l})\).
### Projected gradient steps for final angle estimates
Our final angle estimates are obtained after applying several (default: \(\Gamma=5\)) projected gradient steps to the initial angle estimates. In brief, projected gradient descent for constrained optimization problems first takes a gradient step while ignoring the constraints, and then projects the result back onto the feasible set to incorporate the constraints. Here the projected gradient steps are inspired by Boumal (2016). We construct \(\mathbf{H}\) by \(\mathbf{H}_{i,j}=\exp(\iota\mathbf{A}_{i,j})\mathbb{1}(\mathbf{A}_{i,j}\neq 0),\) and update the estimated angles using Algo. 1, where \(\mathbf{r}_{:,l}\) denotes the \(l\)-th column of \(\mathbf{r}\). In Algo. 1, the gradient step is on line 6, while the projection step on line 7 projects the updated matrix to elementwise angles. Fig. 2 shows the GNNSync framework.
If graph assignments can be estimated effectively right after the GNN, one can replace \(\mathbf{H}\) with \(\mathbf{H}^{(l)}\) for each \(l=1,\dots,k\) separately, where \(\mathbf{H}_{i,j}^{(l)}=\exp(\iota\mathbf{A}_{i,j}^{(l)})\mathbb{1}(\mathbf{A}_{i,j}^{(l)}\neq 0),\) and \(\mathbf{A}_{i,j}^{(l)}=\mathbb{1}(g(i,j)=l)\mathbf{A}_{i,j}\) is the estimated adjacency matrix for graph \(\mathcal{G}_{l}\), using the graph assignments \(g(i,j)\) from eq. (4) applied to the initial angle estimates \(\mathbf{r}^{(0)}\). Yet, separate \(\mathbf{H}^{(l)}\)'s may make the architecture sensitive to the accuracy of graph assignments after the GNN, and hence for robustness we simply choose a single \(\mathbf{H}\). We also find in Sec. 6.4 that the use of \(\mathbf{H}\) instead of separate \(\mathbf{H}^{(l)}\)'s is essential for satisfactory performance. Besides, it is possible to make Algo. 1 parameter-free by further fixing the \(\{\alpha_{\gamma}\}\) values (default: \(\alpha_{\gamma}=1,\forall\gamma\)); we find that using fixed \(\{\alpha_{\gamma}\}\) does not strongly affect performance in our experiments. GNNSync executes the projected gradient descent steps at every training iteration as part of a unified end-to-end training process, but one could also use Algo. 1 to post-process predicted angles without putting the steps in the end-to-end framework. We find that putting Algo. 1 in our end-to-end training framework is usually helpful.
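A minimal NumPy sketch of the refinement just described, assuming the default fixed step sizes \(\alpha_{\gamma}=1\) and the single matrix \(\mathbf{H}\), applied column by column to the \(k\) angle sets; details of Algo. 1 not shown in the text are approximated here, and the interface is ours.

```python
import numpy as np

def projected_gradient(A, r0, Gamma=5, alpha=1.0):
    """Sketch of the projected gradient refinement.
    A: (n, n) observed angular offsets, 0 where there is no edge.
    r0: (n, k) initial angle estimates in [0, 2*pi)."""
    H = np.exp(1j * A) * (A != 0)            # H_ij = exp(i A_ij) 1(A_ij != 0)
    r = r0.copy()
    for l in range(r.shape[1]):
        z = np.exp(1j * r[:, l])             # lift angles to the unit circle
        for _ in range(Gamma):
            z = z + alpha * (H @ z)                # gradient step
            z = z / np.maximum(np.abs(z), 1e-12)   # project to unit modulus
        r[:, l] = np.mod(np.angle(z), 2 * np.pi)   # project back to angles
    return r
```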
Figure 2: GNNSync overview: starting from an adjacency matrix \(\mathbf{A}\) encoding (noisy) pairwise offsets and an input feature matrix \(\mathbf{X}\), GNNSync first applies a directed GNN to learn node embeddings \(\mathbf{Z}\). It then calculates the inner product with a learnable vector (or \(k\) learnable vectors for \(k>1\)) to produce the initial estimated angles \(r_{i,l}^{(0)}\in[0,2\pi)\) for \(l\in\{1,\dots,k\}\), after rescaling. It then applies several projected gradient steps to the initial angle estimates to obtain the final angle estimates, \(r_{i,l}\in[0,2\pi)\). Let the ground-truth angle matrix be \(\mathbf{R}\in\mathbb{R}^{n\times k}\). The loss function is applied to the output angle matrix \(\mathbf{r}\), given \(\mathbf{A}\), while the final evaluation is based on \(\mathbf{R}\) and \(\mathbf{r}\). Orange frames indicate trainable vectors/matrices, green squares fixed inputs, the red square the final estimated angles (outputs), and the yellow circles the loss function and evaluation.
### Robustness of GNNSync
Measurement noise that perturbs the edge offsets can significantly impact the performance of group synchronization algorithms. We therefore demonstrate the robustness of GNNSync to such noise perturbations, with the following theoretical guarantee, proved and further discussed in App. A.2.
**Proposition 2**.: _For adjacency matrices \(\mathbf{A},\hat{\mathbf{A}}\), assume their row-normalized variants \(\mathbf{A}_{s},\hat{\mathbf{A}}_{s},\mathbf{A}_{t},\hat{\mathbf{A}}_{t}\) satisfy \(\left\|\mathbf{A}_{s}-\hat{\mathbf{A}}_{s}\right\|_{F}<\epsilon_{s}\) and \(\left\|\mathbf{A}_{t}-\hat{\mathbf{A}}_{t}\right\|_{F}<\epsilon_{t},\) where subscripts \(s,t\) denote source and target, resp. Assume further their input feature matrices \(\mathbf{X},\hat{\mathbf{X}}\) satisfy \(\left\|\mathbf{X}-\hat{\mathbf{X}}\right\|_{F}<\epsilon_{f}\). Then their initial angles \(\mathbf{r}^{(0)},\hat{\mathbf{r}}^{(0)}\) from a trained GNNSync using DIMPA satisfy \(\left\|\mathbf{r}^{(0)}-\hat{\mathbf{r}}^{(0)}\right\|_{F}<B_{s}\epsilon_{s}+B_{t}\epsilon_{t}+B_{f}\epsilon_{f}\), for values \(B_{s},B_{t},B_{f}\) that can be bounded by imposing constraints on model parameters and input._
## 6 Experiments
Implementation details are in App. C and extended results in App. D.
### Data sets and protocol
Previous works in angular synchronization typically only consider synthetic data sets in their experiments, and those applying synchronization to real-world data do not typically publish the data sets. To bridge the gap between synthetic experiments and the real world, we construct synthetic data sets with both correlated and uncorrelated ground-truth rotation angles, using various measurement graphs and noise levels. In addition, we conduct sensor network localization on two data sets.
For synthetic data, we perform experiments on graphs with \(n=360\) nodes for different measurement graphs, with edge density parameter \(p\in\{0.05,0.1,0.15\}\), noise level \(\eta\in\{0,0.1,\ldots,0.9\}\) for \(k=1\), and \(\eta\in\{0,0.1,\ldots,0.7\}\) for \(k\in\{2,3,4\}\). The graph generation procedure is as follows (with further details in App. B.1):

1) Generate \(k\) group(s) of ground-truth angles. One option is to generate each angle from the same Gamma distribution with shape 0.5 and scale \(2\pi\). We denote this option with subscript "1". As angles could be highly correlated in practical scenarios, we introduce a more realistic but challenging option "2", with multivariate normal ground-truth angles. The mean of the ground-truth angles is \(\pi\), with covariance matrix for each \(l\in\{1,\ldots,k\}\) defined by \(\mathbf{ww}^{\top}\), where entries in \(\mathbf{w}\) are generated independently from a standard normal distribution. We explore two more options in the SI. We then apply mod \(2\pi\) to all angles.

2) Generate a noisy background adjacency matrix \(\mathbf{A}_{\text{noise}}\in\mathbb{R}^{n\times n}\).

3) Construct a complete adjacency matrix where an \(\eta\) portion of the entries are noisy and the rest represent true angular differences.

4) Generate a measurement graph and sparsify the complete adjacency matrix by only keeping the edges in the measurement graph.
We construct 3 types of measurement graphs from NetworkX (Hagberg et al., 2008) and use the following notations, where the subscript \(o\in\{1,2,3,4\}\) is the option mentioned in step 1) above:
\(\bullet\) Erdos-Renyi (ER) Outlier model: denoted by \(\text{ERO}_{o}(p,k,\eta)\), using as the measurement graph the ER model from NetworkX, where \(p\) is the edge density parameter for the ER measurement graph;
\(\bullet\) Barabasi Albert (BA) Outlier model: denoted by \(\text{BAO}_{o}(p,k,\eta)\), where the measurement graph is a BA model with the number of edges to attach from a new node to existing nodes equal to \([np/2]\), using the standard implementation from NetworkX (Hagberg et al., 2008); and
\(\bullet\) Random Geometric Graph (RGG) Outlier model: denoted by \(\text{RGGO}_{o}(p,k,\eta)\), with NetworkX parameter "distance threshold value (radius)" \(2p\) for the RGG measurement graph. For \(k=1\), we omit the value \(k\) and subscript \(o\) in the notation, as the two options coincide in this special case.
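Under our reading of the four generation steps above (the precise construction is in App. B.1), a sketch of the \(\text{ERO}(p,\eta)\) model for \(k=1\) could look as follows; all names are ours, and only the NetworkX generators are the library's actual API.

```python
import numpy as np
import networkx as nx

def generate_ero(n=360, p=0.1, eta=0.3, seed=0):
    """Illustrative ERO(p, eta) construction for k = 1."""
    rng = np.random.default_rng(seed)
    # Step 1: Gamma(0.5, 2*pi) ground-truth angles, taken mod 2*pi.
    theta = rng.gamma(shape=0.5, scale=2 * np.pi, size=n) % (2 * np.pi)
    # Step 2: noisy background offsets.
    noise = rng.uniform(0, 2 * np.pi, size=(n, n))
    # Step 3: complete matrix with an eta-fraction of noisy entries.
    diff = (theta[:, None] - theta[None, :]) % (2 * np.pi)
    full = np.where(rng.random((n, n)) < eta, noise, diff)
    # Step 4: sparsify with an Erdos-Renyi measurement graph.
    mask = nx.to_numpy_array(nx.erdos_renyi_graph(n, p, seed=seed)) > 0
    return np.where(mask, full, 0.0), theta
```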
For real-world data, we conduct sensor network localization on the U.S. map and the PACM point cloud data set (Cucuringu et al., 2012a) with a focus on the SO(2) component, as follows, with data processing details provided in App. B.2:

1) Starting with the ground-truth locations of \(n=1097\) U.S. cities (resp., \(n=426\) points), we construct patches using each city (resp., point) as a central node and add its \(50\) nearest neighbors to the corresponding patch.

2) For each patch, we add noise to each node's coordinates independently.

3) We then rotate the patches using random rotation angles (ground-truth angles generated as in step 1 for synthetic models). For each pair of patches that have at least \(6\) overlapping nodes, we apply Procrustes alignment (Gower, 1975) to estimate the rotation angle based on these overlapping nodes and add an edge to the observed measurement adjacency matrix.

4) We perform angular synchronization to obtain the initial estimated angles and update the estimated angles by shifting by the average pairwise differences between the estimated and ground-truth angles, to eliminate the degree of freedom of a global rotation.

5) Finally, we apply the estimated rotations to the noisy patches and estimate node coordinates by averaging the estimated locations for each node from all patches that contain this node.
### Baselines
In our numerical experiments for angular synchronization, we compare against **7 baselines**, where results are averaged over 10 runs: \(\bullet\) Spectral Baseline (Spectral) by Singer (2011), \(\bullet\) Row-Normalized Spectral Baseline (Spectral_RN) by Cucuringu et al. (2012a), \(\bullet\) Generalized Power Method (GPM) by Boumal (2016), \(\bullet\) TranSync by Huang et al. (2017), \(\bullet\) CEMP_GCW, \(\bullet\) CEMP_MST by Lerman & Shi (2022), and \(\bullet\) Trimmed Averaging Synchronization (TAS) by Maunu & Lerman (2023).
For more general \(k\)-synchronization, we compare against two baselines from Cucuringu & Tyagi (2022), which are based on the top \(k\) eigenvectors of the matrix \(\mathbf{H}\) or its row-normalized version. We use names \(\bullet\) Spectral and \(\bullet\) Spectral_RN to denote them as before. To show that GNNSync (as well as the baselines) deviate from trivial or random solutions, we include an additional baseline denoted "Trivial" for each \(k\), where all angles are predicted equal (with value 1, for simplicity).
### Main experimental results
By default, we use the output angles of the baseline "Spectral_RN" as input features for GNNSync, and thus \(d_{\text{in}}=k\). The main experimental results are shown in Fig. 3 for \(k=1\), and Fig. 4 for general \(k\in\{2,3,4\}\), with additional results reported in App. D. For \(k>1\), we use "GNNSync-cycle", "GNNSync-upset" and "GNNSync-sum" to denote GNNSync variants when considering the training loss function \(\mathcal{L}_{\text{cycle}},\mathcal{L}_{\text{upset}}\), and \(\mathcal{L}_{\text{upset}}+\mathcal{L}_{\text{cycle}}\), respectively.
Figure 4: MSE performance on \(k\)-synchronization for \(k\in\{2,3,4\}\). \(p\) is the network density and \(\eta\) is the noise level. Error bars indicate one standard deviation. Dashed lines highlight GNNSync variants.
Figure 3: MSE performance on angular synchronization (\(k=1\)). Error bars indicate one standard deviation. Dashed lines highlight GNNSync variants.
From Fig. 3 (with additional figures in App. D Fig. 6-8), we conclude that GNNSync produces generally the best performance compared to baselines, in angular synchronization (\(k=1\)). From Fig. 4 (see also App. D Fig. 9-17), we again conclude that GNNSync variants attain leading performance for \(k>1\). The first two columns of Fig. 4 compare the performance of the two options of ground-truth angles on RGGO models. In columns 3 and 4, we show the effect of varying density parameter \(p\), and different synthetic models under various measurement graphs.
For \(k>1\), GNNSync-upset performs better than both baselines in most cases, with \(\mathcal{L}_{\text{upset}}\) simple yet effective to train. GNNSync-cycle generally attains the best performance. As the problems become harder (with increasing \(\eta\), decreasing \(p\), increasing \(k\), more complex measurement graph RGG), GNNSync-cycle outperforms both baselines and other GNNSync variants by a larger margin. The performance of GNNSync-sum lies between that of GNNSync-upset and GNNSync-cycle, but is closer to that of GNNSync-upset, see App. D for more discussions on linear combinations of the two losses. We conclude that while GNNSync-upset generally attains satisfactory performance, GNNSync-cycle is more robust to harder problems than other GNNSync variants and the baselines. Accounting for the performance of trivial guesses, we observe that GNNSync variants are more robust to noise, and attain satisfactory performance even when the competitive baselines are outperformed by trivial guesses. We highlight that there is a clear advantage of using cycle consistency in the pipeline, especially when the problem is harder, thus reflecting the angular nature of the problem. For 3-cycle consistency and the cycle loss \(\mathcal{L}_{\text{cycle}}\), gradient descent in principle drives down the (non-negative) values \(S\) of the sum of three predicted angular differences. To minimize the \(S\) values, we encourage a reweighing process of the initial edge weights so that cycle consistency roughly holds. Unlike \(\mathcal{L}_{\text{upset}}\) which explicitly encourages small \(\mathbf{M}_{i,j}\) values for all edges, \(\mathcal{L}_{\text{cycle}}\) only implicitly encourages small \(\mathbf{M}_{i,j}\) values via the confidence matrix reweighing process for edges with relatively small noise. In an ideal case, we only have large \(\mathbf{M}_{i,j}\) values on noisy edges. In this case, the reweighing process would downweight these noisy edges, which results in a smaller value of the cycle loss function. This is also the underlying reason why \(\mathcal{L}_{\text{cycle}}\) is more robust to noise than \(\mathcal{L}_{\text{upset}}\). For \(k>1\), we hence recommend using the more intricate \(\mathcal{L}_{\text{cycle}}\) function as the training loss function, and we will focus on GNNSync-cycle in the ablation study.
From Fig. 1 (see also App. D Tab. 1-4, Fig. 18-27), we observe that GNNSync is able to align patches and recover coordinates effectively, and is more robust to noise than baselines. GNNSync attains competitive MSE values and Average Normalized Error (ANE) results, where ANE (defined explicitly in App. D.1) measures the discrepancy between the predicted locations and the actual locations.
### Ablation study and discussion
In this subsection, we justify several model choices for all \(k\): \(\bullet\) the use of the projected gradient steps; \(\bullet\) an end-to-end framework instead of training first without the projected gradient steps and then applying Algo. 1 as a post-processing procedure; \(\bullet\) fixed instead of trainable \(\{\alpha_{\gamma}\}\) values. For \(k>1\), we also justify the use of the \(\mathbf{H}\) matrix in Algo. 1 instead of separate \(\mathbf{H}^{(l)}\)'s based on estimated graph assignments of the edges. To validate the ability of GNNSync to borrow strength from baselines, we set the input feature matrix \(\mathbf{X}\) as a set of angles that is estimated by one of the baselines (or \(k\) sets of angles estimated by one of the baselines for \(k>1\)) and report the performance.
Due to space considerations, results for the ablation study are reported in App. D. For \(k=1\), Fig. 22-24 report the MSE performance for different GNNSync variants. Improvements over all possible baselines when taking their output as input features for \(k=1\) are reported in Fig. 34-36. For \(k>1\), we report the results when using \(\mathcal{L}_{\text{cycle}}\) as the training loss function in Fig. 25-33. We conclude that Algo. 1 is indeed helpful in guiding GNNSync to attain lower loss values (we omit loss results for space considerations) and better MSE performance, and that end-to-end training usually attains comparable or better performance than using Algo. 1 for post-processing, even when there is no trainable parameter in Algo. 1. Furthermore, we observe across all data sets, that GNNSync usually improves on existing baselines when employing their outputs as input features, and never performs significantly worse than the corresponding baseline; hence, GNNSync can be used to enhance existing methods. Further, setting \(\{\alpha_{\gamma}\}\) values to be trainable does not seem to boost performance much, and hence we stick to fixed \(\{\alpha_{\gamma}\}\) values. For \(k>1\), using separate \(\mathbf{H}^{(l)}\)'s instead of the whole \(\mathbf{H}\) in Algo. 1 harms performance, which can be explained by the fact that learning graph assignments effectively via GNN outputs is challenging.
## 7 Conclusion and outlook
This paper proposed a general NN framework for angular synchronization and a heterogeneous extension. As the current framework is limited to SO(2), we believe that extending our GNN-based framework to the setting of other more general groups is an exciting research direction to pursue, and constitutes ongoing work (for instance, for doing synchronization over the full Euclidean group \(\text{Euc}(2)=\mathbb{Z}_{2}\times\text{SO}(2)\times\mathbb{R}^{2}\)). We also plan to optimize the loss functions under constraints, train our framework with supervision of ground-truth angles (anchor information), and explore the interplay with low-rank matrix completion. Another interesting direction is to extend our SNL example to explore the graph realization problem, of recovering point clouds from a sparse noisy set of pairwise Euclidean distances. |
2310.04584 | An Algorithm to Train Unrestricted Sequential Discrete Morphological
Neural Networks | There have been attempts to insert mathematical morphology (MM) operators
into convolutional neural networks (CNN), and the most successful endeavor to
date has been the morphological neural networks (MNN). Although MNN have
performed better than CNN in solving some problems, they inherit their
black-box nature. Furthermore, in the case of binary images, they are
approximations that loose the Boolean lattice structure of MM operators and,
thus, it is not possible to represent a specific class of W-operators with
desired properties. In a recent work, we proposed the Discrete Morphological
Neural Networks (DMNN) for binary image transformation to represent specific
classes of W-operators and estimate them via machine learning. We also proposed
a stochastic lattice descent algorithm (SLDA) to learn the parameters of
Canonical Discrete Morphological Neural Networks (CDMNN), whose architecture is
composed only of operators that can be decomposed as the supremum, infimum, and
complement of erosions and dilations. In this paper, we propose an algorithm to
learn unrestricted sequential DMNN, whose architecture is given by the
composition of general W-operators. We illustrate the algorithm in a practical
example. | Diego Marcondes, Mariana Feldman, Junior Barrera | 2023-10-06T20:55:05Z | http://arxiv.org/abs/2310.04584v2 | # An Algorithm to Train Unrestricted Sequential Discrete Morphological Neural Networks+
###### Abstract
With the advent of deep learning, there have been attempts to insert mathematical morphology (MM) operators into convolutional neural networks (CNN), and the most successful endeavor to date has been the morphological neural networks (MNN). Although MNN have performed better than CNN in solving some problems, they inherit their black-box nature. Furthermore, in the case of binary images, they are approximations, which lose the Boolean lattice structure of MM operators and, thus, it is not possible to represent a specific class of W-operators with desired properties. In a recent work, we proposed the Discrete Morphological Neural Networks (DMNN) for binary image transformation to represent specific classes of W-operators and estimate them via machine learning. We also proposed a stochastic lattice gradient descent algorithm (SLGDA) to learn the parameters of Canonical Discrete Morphological Neural Networks (CDMNN), whose architecture is composed only of operators that can be decomposed as the supremum, infimum, and complement of erosions and dilations with the same structural element. In this paper, we propose an algorithm to learn unrestricted sequential DMNN (USDMNN), whose architecture is given by the composition of general W-operators. We consider the representation of a W-operator by its characteristic Boolean function, and then learn it via a SLGDA in the Boolean lattice of functions. Although both the CDMNN and USDMNN have the Boolean lattice structure, USDMNN are not as dependent on prior information about the problem at hand, and may be more suitable in instances in which the practitioner does not have strong domain knowledge. We illustrate the algorithm in a practical example.
Keywords: discrete morphological neural networks · image processing · mathematical morphology · U-curve algorithms · stochastic lattice gradient descent
## 1 Introduction
Mathematical morphology is a theory of lattice mappings which can be employed to design nonlinear mappings, the morphological operators, for image processing and computer vision. It originated in the 60s, and its theoretical basis was developed in the 70s and 80s [12, 16, 17]. From the established theory followed the proposal of many families of operators, which identify or transform geometrical and topological properties of images. Their heuristic combination permits to design methods for image analysis. We refer to [5] for more details on practical methods and implementations of the design of mathematical morphology operators.
Since combining basic morphological operators to form a complex image processing pipeline is not a trivial task, a natural idea is to develop methods to automatically design morphological operators based on machine learning techniques [4], what has been extensively done in the literature with great success on solving specific problems. The problems addressed by mathematical morphology concern mainly the processing of binary and gray-scale images, and the learning methods are primarily based on discrete combinatorial algorithms. More details about mathematical morphology in the context of machine learning may be found in [2, 8].
More recently, mathematical morphology methods have been studied in connection with deep neural networks, by either combining convolutional neural networks (CNN) with a morphological operator [10], or by replacing the convolution operations of CNN with basic morphological operators, such as erosions and dilations, obtaining the so-called morphological neural networks (MNN) [7, 9, 13]. A MNN has the general framework of neural networks, and its specificity lies in the fact that its layers realize morphological operations. Although it has been seen empirically that convolutional layers could be replaced by morphological layers [7], and MNN have shown a better performance than CNN in some tasks [9], they are not more interpretable than an ordinary CNN. Interpretability, in turn, is related to the notion of hypothesis space, a key component of machine learning theory.
In this context, in [11] we proposed the Discrete Morphological Neural Networks (DMNN) for binary image transformation to represent translation invariant and locally defined operators, i.e., W-operators, and estimate them via machine learning. A DMNN architecture is represented by a Morphological Computational Graph that combines operators via composition, infimum, and supremum to build more complex operators. In [11] we proposed a stochastic lattice gradient descent algorithm (SLGDA) to train the parameters of Canonical Discrete Morphological Neural Networks (CDMNN) based on a sample of input and output images under the usual machine learning approach. The architecture of a CDMNN is composed only of canonical operations (supremum, infimum, or complement) and operators that can be decomposed as the supremum, infimum, and complement of erosions and dilations with the same structural element. The DMNN is a true mathematical morphology method since it retains the control over the design
and the interpretability of results of classical mathematical morphology methods, which is a relevant advantage over CNN.
In this paper, we propose an algorithm to train unrestricted sequential DMNN (USDMNN), whose architecture is given by the composition of general W-operators. The algorithm considers the representation of a W-operator by its characteristic Boolean function, which is learned via a SLGDA in the Boolean lattice of functions. This SLGDA differs from that of [11], which minimizes an empirical error in a lattice of sets and intervals, namely the structural elements and intervals representing operators such as erosions, dilations, openings, closings, and sup-generating operators.
As opposed to CDMNN, which can be designed from prior information to represent a constrained class of operators, USDMNN are subject to overfitting the data, since it can represent a great class of operators, that may fit the data, but not generalize to new examples. In order to address this issue, we control the complexity of the architecture by selecting from data the window of each W-operator in the sequence (layer). By restricting the window, we create equivalence classes on the characteristic function domain that constrain the class of operators that each layer can represent, decreasing the overall complexity of the operators represented by the architecture and mitigating the risk of overfitting. We propose a SLGDA to select the windows by minimizing a validation error in a lattice of sets within the usual model selection framework in machine learning.
We note that both the CDMNN and the USDMNN are fully transparent, the properties of the operators represented by the trained architectures are fully known, and their results can be interpreted. The advantage of USDMNN is that they are not as dependent on prior information as the CDMNN, which require a careful design of the architecture to represent a _simple_ class of operators with properties necessary to solve the practical problem, as was illustrated in the empirical application in [11]. When there is no prior information about the problem at hand, an USDMNN may be preferable.
In Section 2, we present some notations and definitions, and in Section 3 we formally define the USDMNN that is a particular example of the DMNN proposed by [11]. In Section 4, we present the SLGDAs to learn the windows and the characteristic functions of the W-operators in a USDMNN, and in Section 5 we apply the USDMNN to the same dataset used in [11] in order to compare the results. In Section 6, we present our conclusions.
## 2 W-operators
Let \(E=\mathbb{Z}^{2}\) and denote by \(\mathcal{P}(E)\) the collection of all subsets of \(E\). Denote by \(+\) the vector addition operation. We denote the zero element of \(E\) by \(o\). A _set operator_ is any mapping defined from \(\mathcal{P}(E)\) into itself. We denote by \(\Psi\) the collection of all the operators from \(\mathcal{P}(E)\) to \(\mathcal{P}(E)\). Denote by \(\iota\) the identity set operator: \(\iota(X)=X,X\in\mathcal{P}(E)\).
For any \(h\in E\) and \(X\in\mathcal{P}(E)\), the set \(X+h\coloneqq\{x\in E:x-h\in X\}\) is called the translation of \(X\) by \(h\). We may also denote this set by \(X_{h}\). A set operator
is called _translation invariant_ (t.i.) if, and only if, \(\forall h\in E\), \(\psi(X+h)=\psi(X)+h\) for \(X\in\mathcal{P}(E)\).
Let \(W\) be a finite subset of \(E\). A set operator \(\psi\) is called _locally defined within a window \(W\)_ if, and only if, \(\forall h\in E\), \(h\in\psi(X)\iff h\in\psi(X\cap W_{h})\) for \(X\in\mathcal{P}(E)\). The collection \(\Psi_{W}\) of t.i. operators locally defined within a window \(W\in\mathcal{P}(E)\) inherits the complete lattice structure of \((\mathcal{P}(E),\subseteq)\) by setting, \(\forall\psi_{1},\psi_{2}\in\Psi\),
\[\psi_{1}\leq\psi_{2}\iff\psi_{1}(X)\subseteq\psi_{2}(X),\forall X\in\mathcal{ P}(E). \tag{1}\]
Define by \(\Omega=\cup_{W\in\mathcal{P}(E),|W|<\infty}\Psi_{W}\) the subset of \(\Psi\) of all operators from \(\mathcal{P}(E)\) to \(\mathcal{P}(E)\) that are t.i. and locally defined within some finite window \(W\in\mathcal{P}(E)\). The elements of \(\Omega\) are called _\(W\)-operators_.
Any W-operator can be uniquely determined by its kernel, its basis or its characteristic function (see [3] for more details). In particular, denote by \(\mathfrak{B}_{W}\coloneqq\{0,1\}^{\mathcal{P}(W)}\) the set of all Boolean functions on \(\mathcal{P}(W)\) and consider the mapping \(T\) between \(\Psi_{W}\) and \(\mathfrak{B}_{W}\) defined by
\[T(\psi)(X)=\begin{cases}1,&\text{ if }o\in\psi(X)\\ 0,&\text{ otherwise.}\end{cases}\qquad\qquad\psi\in\Psi_{W},X\in\mathcal{P}(W).\]
The mapping \(T\) constitutes a lattice isomorphism between the complete lattices \((\Psi_{W},\leq)\) and \((\mathfrak{B}_{W},\leq)\), and its inverse \(T^{-1}\) is defined by \(T^{-1}(f)(X)=\{x\in E:f(X_{-x}\cap W)=1\}\) for \(f\in\mathfrak{B}_{W}\) and \(X\in\mathcal{P}(E)\). We denote by \(f_{\psi}\coloneqq T(\psi)\) the characteristic function of \(\psi\).
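To illustrate the correspondence \(T^{-1}(f)(X)=\{x\in E:f(X_{-x}\cap W)=1\}\) computationally, the following sketch applies a W-operator to a finite binary image through its characteristic function; encoding \(W\) as offset pairs and \(f\) as a function on frozen sets of offsets is our illustrative choice, not a prescription from the paper.

```python
import numpy as np

def apply_w_operator(X, W, f):
    """Apply the W-operator with characteristic function f to a binary image.
    X: 2D 0/1 array; W: iterable of (dy, dx) offsets; f maps a frozenset of
    offsets (the points of the translated window hit by X) to {0, 1}."""
    rows, cols = X.shape
    out = np.zeros_like(X)
    for y in range(rows):
        for x in range(cols):
            # X_{-h} intersected with W, for h = (y, x), within image bounds.
            pattern = frozenset(
                (dy, dx) for (dy, dx) in W
                if 0 <= y + dy < rows and 0 <= x + dx < cols
                and X[y + dy, x + dx]
            )
            out[y, x] = f(pattern)
    return out
```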
## 3 Unrestricted Sequential Discrete Morphological Neural Networks
### Sequential Morphological Computational graph
Let \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{C})\) be a _computational graph_, in which \(\mathcal{V}\) is a general set of vertices, \(\mathcal{E}\subset\{(\mathfrak{v}_{1},\mathfrak{v}_{2})\in\mathcal{V}\times \mathcal{V}:\mathfrak{v}_{1}\neq\mathfrak{v}_{2}\}\) is a set of directed edges, and \(\mathcal{C}:\mathcal{V}\to\Omega\cup\{\vee,\wedge\}\) is a mapping that associates each vertex \(\mathfrak{v}\in\mathcal{V}\) to a _computation_ given by either applying a t.i. and locally defined operator \(\psi\in\Omega\) or one of the two basic operations \(\{\vee,\wedge\}\).
Denoting \(\mathcal{V}=\{\mathfrak{v}_{i},\mathfrak{v}_{1},\ldots,\mathfrak{v}_{n}, \mathfrak{v}_{o}\}\) for a \(n\geq 1\), \(\mathcal{G}\) is a sequential morphological computational graph (MCG) if
\[\mathcal{E}=\{(\mathfrak{v}_{i},\mathfrak{v}_{1}),(\mathfrak{v}_{n},\mathfrak{ v}_{o})\}\bigcup\left\{(\mathfrak{v}_{j},\mathfrak{v}_{j+1}):j=1,\ldots,n-1 \right\},\]
\(\mathcal{C}(\mathfrak{v}_{i})=\mathcal{C}(\mathfrak{v}_{o})=\iota\) and \(\mathcal{C}(\mathfrak{v}_{j})\in\Omega,\ j=1,\ldots,n\). This computational graph satisfies the axioms of MCG (see [11] for more details).
In a sequential MCG, the computation of a vertex \(\mathfrak{v}_{j}\) in \(\mathcal{G}\) receives as input the output of the computation of the previous vertex \(\mathfrak{v}_{j-1}\), and the output of its computation will be used as the input of the computation of the vertex \(\mathfrak{v}_{j+1}\). We assume there is an input vertex \(\mathfrak{v}_{i}\), and an output vertex \(\mathfrak{v}_{o}\), that store
the input, which is an element \(X\in\mathcal{P}(E)\), and output of the computational graph, respectively, by applying the identity operator. Furthermore, each vertex computes an operator in \(\Omega\) and there are no vertices computing supremum or infimum operations.
Denote by \(\psi_{\mathcal{G}}(X)\) the output of vertex \(\mathfrak{v}_{o}\) when the input of vertex \(\mathfrak{v}_{i}\) is \(X\in\mathcal{P}(E)\), thereby defining the set operator \(\psi_{\mathcal{G}}:\mathcal{P}(E)\rightarrow\mathcal{P}(E)\) generated by the MCG \(\mathcal{G}\). The operator \(\psi_{\mathcal{G}}\) is actually t.i. and locally defined within a window \(W_{\mathcal{G}}\) (cf. Proposition 5.1 in [11]). We define the sequential Discrete Morphological Neural Network represented by \(\mathcal{G}\) as the translation invariant and locally defined set operator \(\psi_{\mathcal{G}}\).
### Sequential Discrete Morphological Neural Networks Architectures
A triple \(\mathcal{A}=(\mathcal{V},\mathcal{E},\mathcal{F})\), in which \(\mathcal{F}\subseteq\Omega^{\mathcal{V}}\), is a sequential Discrete Morphological Neural Network (SDMNN) architecture if \((\mathcal{V},\mathcal{E},\mathcal{C})\) is a sequential MCG for all \(\mathcal{C}\in\mathcal{F}\). A SDMNN architecture is a collection of sequential MCG with the same graph \((\mathcal{V},\mathcal{E})\) and computation map \(\mathcal{C}\) in \(\mathcal{F}\). Since a SDMNN architecture represents a collection of MCG, it actually represents a collection of t.i. and locally defined set operators that can be represented as the composition of W-operators.
For an architecture \(\mathcal{A}=(\mathcal{V},\mathcal{E},\mathcal{F})\), let \(\mathbb{G}(\mathcal{A})=\{\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{C}): \mathcal{C}\in\mathcal{F}\}\) be the collection of MCG generated by \(\mathcal{A}\). We say that \(\mathcal{G}\in\mathbb{G}(\mathcal{A})\) is a realization of architecture \(\mathcal{A}\) and we define \(\mathcal{H}(\mathcal{A})=\{\psi\in\Omega:\psi=\psi_{\mathcal{G}},\mathcal{G}\in \mathbb{G}(\mathcal{A})\}\) as the collection of t.i. and locally defined set operators that can be realized by \(\mathcal{A}\).
A SDMNN is said to be unrestricted if its interior vertices \(\mathfrak{v}_{j}\) can compute any W-operator locally defined within a window \(W_{j}\), so that \(\mathcal{F}=\{\iota\}\times\Psi_{W_{1}}\times\cdots\times\Psi_{W_{n}}\times\{\iota\}\). We denote by \(n=|\mathcal{V}|-2\) the depth of an unrestricted sequential DMNN (USDMNN) and by \(|W_{j}|\) the width of the layer represented by vertex \(\mathfrak{v}_{j},j=1,\ldots,n\).
As an example, consider an USDMNN with three hidden layers. For fixed windows \(W_{1},W_{2},W_{3}\in\mathcal{P}(E)\), this sequential architecture realizes the operators in \(\Psi_{W}\), with \(W=W_{1}\oplus W_{2}\oplus W_{3}\), in which \(\oplus\) stands for the _Minkowski addition_, that can be written as the composition of operators in \(\Psi_{W_{1}},\Psi_{W_{2}}\) and \(\Psi_{W_{3}}\), that is, \(\mathcal{H}(\mathcal{A})=\{\psi\in\Psi_{W}:\psi=\psi^{W_{3}}\psi^{W_{2}}\psi^{W_ {1}};\psi^{W_{i}}\in\Psi_{W_{i}},i=1,2,3\}\). For a proof of this fact, see [3].
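Since the window of the composed operator is the Minkowski addition of the layer windows, it can be computed directly from the offset representation, as in this small sketch (the `cross` example window is ours):

```python
def minkowski_sum(W1, W2):
    """Minkowski addition of two windows given as sets of (dy, dx) offsets."""
    return {(a1 + a2, b1 + b2) for (a1, b1) in W1 for (a2, b2) in W2}

# Window of a three-layer USDMNN: W = W1 (+) W2 (+) W3.
cross = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}
W = minkowski_sum(minkowski_sum(cross, cross), cross)
```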
### Representation of USDMNN
The USDMNN realized by MCG \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{C})\) can be represented by a sequence \(\{\psi_{1},\ldots,\psi_{n}\}\), \(n\geq 2\), of \(W\)-operators with windows \(W_{1},\ldots,W_{n}\) as the composition
\[\psi_{\mathcal{G}}=\psi_{n}\circ\cdots\circ\psi_{1} \tag{2}\]
in which \(\psi_{j}=\mathcal{C}(\mathfrak{v}_{j}),j=1,\ldots,n\). Based on (2), we propose a representation for the class of operators realized by an USDMNN.
For each \(i=1,\ldots,n\), we fix a \(d_{i}\geq 3\) odd and assume that each window \(W_{i},i=1,\ldots,n\), is a connected subset of \(F_{d_{i}}=\{-(d_{i}-1)/2,\ldots,(d_{i}-1)/2\}^{2}\), the square of side \(d_{i}\) centered at the origin of \(E\). This means that, for every \(w,w^{\prime}\in W_{i}\), there exists a sequence \(w_{0},\ldots,w_{r}\in W_{i},r\geq 1\), such that \(w_{0}=w\), \(w_{r}=w^{\prime}\) and \(\|w_{j}-w_{j+1}\|_{\infty}=1\) for all \(j=0,\ldots,r-1\). Denoting \(\mathcal{C}_{d}=\{W\subseteq F_{d}:W\text{ is connected}\}\), we assume that \(W_{i}\in\mathcal{C}_{d_{i}}\) for all \(i=1,\ldots,n\).
For each \(W\in\mathcal{C}_{d_{i}}\), let \(\mathcal{B}_{W}=\{f:\mathcal{P}(W)\mapsto\{0,1\}\}\) be the set of all binary functions on \(\mathcal{P}(W)\), and define \(\mathcal{F}_{i}=\{(W,f):W\in\mathcal{C}_{d_{i}},f\in\mathcal{B}_{W}\}\) as the collection of \(W\)-operators with window \(W\) in \(\mathcal{C}_{d_{i}}\) and characteristic function \(f\) in \(\mathcal{B}_{W}\), which are completely defined by a pair \((W,f)\). Finally, let \(\Theta=\prod_{i=1}^{n}\mathcal{F}_{i}\) be the Cartesian product of \(\mathcal{F}_{i}\). Observe that an element \(\theta\) in \(\Theta\) is actually a sequence of \(n\)\(W\)-operators with windows in \(\mathcal{C}_{d_{i}},i=1,\ldots,n\), which we denote by \(\theta=\{(W_{1},f_{1}),\ldots,(W_{n},f_{n})\}\). Denoting the \(W\)-operator represented by \((W_{i},f_{i})\) as \(\psi_{i}\), a \(\theta\in\Theta\) generates a USDMNN \(\psi_{\theta}\) via expression (2).
The USDMNN architecture \(\mathcal{A}=(\mathcal{V},\mathcal{E},\mathcal{F})\) with \(n\) hidden layers and \(\mathcal{F}=\{\iota\}\times\Psi_{F_{d_{1}}}\times\cdots\times\Psi_{F_{d_{n}}} \times\{\iota\}\) is such that \(\mathcal{H}(\mathcal{A})=\{\psi_{\theta}:\theta\in\Theta\}\), so \(\Theta\) is a representation for the class of operators generated by the architecture \(\mathcal{A}\).
Since an operator locally defined in \(W\) is also locally defined in any \(W^{\prime}\supset W\) (cf. Proposition 5.1 in [3]) it follows that \(\Theta\) is actually an overparametrization of \(\mathcal{H}(\mathcal{A})\) since there are many representations \((W^{\prime},f_{\psi})\) for a same W-operator locally defined in \(W\subsetneq F_{d_{i}}\). This overparametrization also happens in the representation of CDMNN proposed in [11] and we take advantage of it to propose a SLGDA to train USDMNN and mitigate the risk of overfitting.
## 4 Training USDMNN via the stochastic lattice gradient descent algorithm
The training of a USDMNN is performed by obtaining a sequence \(\hat{\theta}\in\Theta\) of \(W\)-operators via the minimization of an empirical error on a sample \(\{(X_{1},Y_{1}),\ldots,(X_{N},Y_{N})\}\) of \(N\) input images \(X\) and output target transformed images \(Y\). In order to mitigate overfitting the sample, the training of a USDMNN will be performed by a two-step algorithm that, for a fixed sequence of windows, learns a sequence of characteristic functions by minimizing the empirical error \(L_{t}\) in a training sample, and then learns a sequence of windows by minimizing an empirical error \(L_{v}\) in a validation sample over the sequences of windows. More details about the empirical errors \(L_{t}\) and \(L_{v}\) will be given in Section 5.
These two steps are instances of an algorithm for the minimization of a function in a subset of a Boolean lattice. On the one hand, the set \(\mathcal{C}\coloneqq\prod_{i=1}^{n}\mathcal{C}_{d_{i}}\) of all sequences of \(n\) connected windows is a subset of a Boolean lattice isomorphic to \(\prod_{i=1}^{n}\{0,1\}^{F_{d_{i}}}\), so minimizing the validation error over the windows means minimizing a function in a subset of a Boolean lattice. On the other hand, for a fixed sequence of windows \((W_{1},\ldots,W_{n})\in\mathcal{C}\), the set \(\mathcal{B}_{W_{1}}\times\cdots\times\mathcal{B}_{W_{n}}\) of all sequences of characteristic functions with these windows is a Boolean lattice isomorphic to \(\{0,1\}^{\mathcal{P}(W_{1})}\times\cdots\times\{0,1\}^{\mathcal{P}(W_{n})}\), so minimizing the training error in this space means minimizing a function in a Boolean lattice.
The U-curve algorithms [1, 6, 14, 15] were proposed to minimize a U-curve function on a Boolean lattice. In summary, these algorithms perform a greedy search of a Boolean lattice, at each step jumping to the point at distance one with the least value of the function, and stopping when all neighbor points have a function value greater or equal to that of the current point. This greedy search of a lattice, that at each step goes to the direction that minimizes the function, is analogous to the dynamic of the gradient descent algorithm to minimize a function with domain in \(\mathbb{R}^{p}\). Inspired by the U-curve algorithms and by the success of stochastic gradient descent algorithms for minimizing overparametrized functions in \(\mathbb{R}^{p}\), and following the ideas of [11], we propose a stochastic lattice gradient descent algorithm (SLGDA) to train USDMNN. In Figure 1, we present the main ideas of the algorithm, and in Sections 4.1 and 4.2 we formally define it.
### Training a USDMNN with fixed windows
For each \(\mathbf{W}\coloneqq\{W_{1},\ldots,W_{n}\}\in\mathcal{C}\) let \(\Theta_{\mathbf{W}}\coloneqq\{\{(W_{1},f_{1}),\ldots,(W_{n},f_{n})\}:f_{i}\in\mathcal{B}_{W_{i}},i=1,\ldots,n\}\) be all sequences of \(W\)-operators with windows \(\mathbf{W}\). Observe that there is a bijection between \(\Theta_{\mathbf{W}}\) and the Boolean lattice \(\prod_{i=1}^{n}\{0,1\}^{\mathcal{P}(W_{i})}\), and consider the Boolean lattice \((\Theta_{\mathbf{W}},\leq)\).

Figure 1: Illustration of the deterministic version of the SLGDA to (a) learn the USDMNN windows and (b) train a USDMNN with fixed windows. To simplify the illustration, we considered only the possibility of adding a point to a window, and flipping a bit from \(0\) to \(1\) of a characteristic function, at each step, even though a point can be erased from a window, and a flip from \(1\) to \(0\) may happen, if they have the least respective error. The training error \(L_{t}\) or validation error \(L_{v}\) of each point is on top of it.
Denoting by \(d\) the distance in the acyclic directed graph \((\Theta_{\mathbf{W}},\leq)\), we define the neighborhood of a \(\theta\in\Theta_{\mathbf{W}}\) as \(N(\theta)=\{\theta^{\prime}\in\Theta_{\mathbf{W}}:d(\theta,\theta^{\prime})=1\}\). Observe that \(\theta,\theta^{\prime}\in\Theta_{\mathbf{W}}\) are such that \(d(\theta,\theta^{\prime})=1\) if, and only if, all their characteristic functions but one are equal, and in the one in which they differ, the difference is in only one point of their domain. In other words, \(\theta^{\prime}\) is obtained from \(\theta\) by flipping the image of one point of one characteristic function from \(0\) to \(1\) or from \(1\) to \(0\).
The SLGDA for learning the characteristic functions of a USDMNN with fixed windows is formalized in Algorithm 1. The initial point \(\theta\in\Theta_{\mathbf{W}}\), a batch size \(b\), the number \(n\) of neighbors to be sampled at each step, and the number of training epochs are fixed. The initial point is stored as the point with minimum training error visited so far. For each epoch, the training sample is randomly partitioned in \(N/b\) batches, and we denote the training error on batch \(j\) by \(L_{t}^{(j)}\). For each batch, \(n\) neighbors of \(\theta\) are sampled and \(\theta\) is updated to the sampled neighbor with the least training error \(L_{t}^{(j)}\), that is calculated on the sample batch \(j\). Observe that \(\theta\) is updated at each batch, so during an epoch, it is updated \(N/b\) times. At the end of each epoch, the training error \(L_{t}(\theta)\) of \(\theta\) on the whole training sample is compared with the error of the point with the least training error visited so far at the end of an epoch, and it is stored as this point if its training error is lesser. After the predetermined number of epochs, the algorithm returns the point with the least training error on the whole sample visited at the end of an epoch.
Observe that Algorithm 1 has two sources of stochasticity: the sample of neighbors and the sample batches. If \(n=n(\theta)\coloneqq|N(\theta)|\) and \(b=N\), then this algorithm reduces to the deterministic one illustrated in Figure 1. Furthermore, the complexity of the algorithm is controlled by the number \(n\) of sampled neighbors, the batch size \(b\) and the number of epochs. See [11] for a further discussion about a SLGDA.
### Learning the USDMNN windows
In order to learn the windows of a USDMNN, we apply the SLGDA on \(\mathcal{C}\), which is a subset of the Boolean lattice \(\prod_{i=1}^{n}\{0,1\}^{F_{d_{i}}}\). For each \(\mathbf{W}\in\mathcal{C}\), let \(L_{v}(\mathbf{W})\coloneqq L_{v}(\hat{\theta}_{\mathbf{W}})\) be the validation error of the USDMNN realized by \(\hat{\theta}_{\mathbf{W}}\), which was learned by Algorithm 1.
We define the neighborhood of \(\mathbf{W}\) in \(\mathcal{C}\) as \(N(\mathbf{W})=\big{\{}\mathbf{W}^{\prime}\in\mathcal{C},d(\mathbf{W},\mathbf{W}^{\prime})=1 \big{\}}\) in which \(d\) means the distance in the acyclic directed graph \((\mathcal{C},\leq)\). Observe that \(\mathbf{W},\mathbf{W}^{\prime}\in\mathcal{C}\) are such that \(d(\mathbf{W},\mathbf{W}^{\prime})=1\) if, and only if, all their windows but one are equal, and in the one in which they differ, the difference is of one point. In other words, \(\mathbf{W}^{\prime}\) is obtained from \(\mathbf{W}\) by adding or removing one point from one of its windows.
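A sketch of this neighborhood computation, with windows stored as sets of offsets and connectivity checked with respect to \(\|\cdot\|_{\infty}=1\) as in the definition of \(\mathcal{C}_{d}\); all names are ours.

```python
def is_connected(W):
    """Check connectivity of a set of offsets (Chebyshev distance 1)."""
    if not W:
        return False
    seen, stack = set(), [next(iter(W))]
    while stack:
        p = stack.pop()
        if p in seen:
            continue
        seen.add(p)
        stack.extend(q for q in W if q not in seen
                     and max(abs(p[0] - q[0]), abs(p[1] - q[1])) == 1)
    return seen == set(W)

def window_neighbors(W, d):
    """Windows at distance 1 from W in C_d: add or remove one point of the
    d x d square centered at the origin, keeping the result connected."""
    r = (d - 1) // 2
    square = {(i, j) for i in range(-r, r + 1) for j in range(-r, r + 1)}
    added = [W | {p} for p in square - W if is_connected(W | {p})]
    removed = [W - {p} for p in W if is_connected(W - {p})]
    return added + removed
```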
The SLGDA for learning the windows of a USDMNN is formalized in Algorithm 2. The initial point \(\mathbf{W}\in\mathcal{C}\), a batch size \(b\), the number \(n\) of neighbors to
be sampled at each step, and the number of training epochs are fixed. The initial point is stored as the point with minimum validation error visited so far. For each epoch, the validation sample is randomly partitioned in \(N/b\) batches, and we denote the validation error of any \(\mathbf{W}\) on batch \(j\) by \(L_{v}^{(j)}(\mathbf{W})\coloneqq L_{v}^{(j)}(\hat{\theta}_{\mathbf{W}})\), which is the empirical error on the \(j\)-th batch of the validation sample of the USDMNN realized by \(\hat{\theta}_{\mathbf{W}}\), learned by Algorithm 1.
For each batch, \(n\) neighbors of \(\mathbf{W}\) are sampled and \(\mathbf{W}\) is updated to the sampled neighbor with the least validation error \(L_{v}^{(j)}\). At the end of each epoch, the validation error \(L_{v}(\mathbf{W})\) of \(\mathbf{W}\) on the whole validation sample is compared with the error of the point with the least validation error visited so far at the end of an epoch, and it is stored as this point if its validation error is lesser. After the predetermined number of epochs, the algorithm returns the point with the least validation error on the whole sample visited at the end of an epoch.
Algorithms 1 and 2 are analogous and differ only on the lattice (\(\Theta_{\mathbf{W}}\) and \(\mathcal{C}\)) they search and the function they seek to minimize (the training and the validation error).
```
Input: \(\theta\in\Theta_{\mathbf{W}}\), \(n\), \(b\), Epochs
 1: \(L_{min}\leftarrow L_{t}(\theta)\)
 2: \(\hat{\theta}_{\mathbf{W}}\leftarrow\theta\)
 3: for run \(\in\{1,\ldots,\text{Epochs}\}\) do
 4:    ShuffleBatches(\(b\))
 5:    for \(j\in\{1,\ldots,N/b\}\) do
 6:       \(\tilde{N}(\theta)\leftarrow\text{SampleNeighbors}(\theta,n)\)
 7:       \(\theta\leftarrow\theta^{\prime}\) s.t. \(\theta^{\prime}\in\tilde{N}(\theta)\) and \(L_{t}^{(j)}(\theta^{\prime})=\min\{L_{t}^{(j)}(\theta^{\prime\prime}):\theta^{\prime\prime}\in\tilde{N}(\theta)\}\)
 8:    if \(L_{t}(\theta)<L_{min}\) then
 9:       \(L_{min}\leftarrow L_{t}(\theta)\)
10:       \(\hat{\theta}_{\mathbf{W}}\leftarrow\theta\)
11: return \(\hat{\theta}_{\mathbf{W}}\)
```
**Algorithm 1** Stochastic lattice gradient descent algorithm for learning the characteristic functions of a USDMNN with fixed windows \(\mathbf{W}=\{W_{1},\ldots,W_{n}\}\).
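A compact Python sketch of Algorithm 1's control flow, with each characteristic function stored as a dictionary from window patterns to \(\{0,1\}\); the error function `L_t` and the batch handling are left abstract, and all identifiers are ours rather than from the published implementation.

```python
import random

def flip_random_bit(theta):
    """Neighbor at distance 1: flip one characteristic-function value."""
    new = {i: dict(f) for i, f in theta.items()}
    layer = random.choice(list(new))
    pattern = random.choice(list(new[layer]))
    new[layer][pattern] ^= 1
    return new

def slgda_functions(theta, L_t, batches, n_neighbors, epochs):
    """theta: {layer: {pattern: 0/1}}; L_t(theta, batch) is the empirical
    error, with batch=None meaning the whole training sample (our convention)."""
    best, best_err = theta, L_t(theta, None)
    for _ in range(epochs):
        random.shuffle(batches)
        for batch in batches:
            candidates = [flip_random_bit(theta) for _ in range(n_neighbors)]
            theta = min(candidates, key=lambda c: L_t(c, batch))
        if L_t(theta, None) < best_err:
            best, best_err = theta, L_t(theta, None)
    return best
```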
## 5 Application: Boundary recognition of digits with noise
As an example, we treat the problem of boundary recognition in digits with noise. We consider a USDMNN with two layers and windows contained in the \(3\times 3\) square, trained with a training sample of 10 and a validation sample of 10 binary images of size \(56\times 56\). The training and validation samples are the same used in [11], the initial windows were taken as the five-point cross centered at the origin, and the initial characteristic functions for a fixed window were those that minimized the training loss of the neighbor window visited before, except for the window on which they differ, whose characteristic function is initialized randomly. The algorithms were implemented in **python** and are available at
[https://github.com/MarianaFeldman/USDMM](https://github.com/MarianaFeldman/USDMM). The training was performed on a personal computer with a 13th Gen Intel Core i7-1355U x 12 processor and 16 GB of RAM.
We consider the intersection over union (IoU) error, which is more suitable for form or object detection tasks. The IoU error of \(\psi_{\theta}\) in the training and validation sample is, respectively,
\[L_{t}(\theta)=1-\frac{1}{10}\sum_{k=1}^{10}\frac{|Y_{k}\cap\psi_{\theta}(X_{k}) |}{|Y_{k}\cup\psi_{\theta}(X_{k})|}\quad\ L_{v}(\theta)=1-\frac{1}{10}\sum_{k=1 }^{10}\frac{|Y_{k}^{(v)}\cap\psi_{\theta}(X_{k}^{(v)})|}{|Y_{k}^{(v)}\cup\psi_{ \theta}(X_{k}^{(v)})|}\]
that is, the mean proportion of pixels in \(Y\cup\psi_{\theta}(X)\) that are not in \(Y\cap\psi_{\theta}(X)\) among the sample points in the training and validation sample.
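Computed directly from binary arrays, the IoU error above amounts to the following sketch (names are ours):

```python
import numpy as np

def iou_error(preds, targets):
    """1 minus the mean IoU over pairs of predicted/target binary images."""
    errs = []
    for Y_hat, Y in zip(preds, targets):
        inter = np.logical_and(Y, Y_hat).sum()
        union = np.logical_or(Y, Y_hat).sum()
        errs.append(1.0 - inter / union if union > 0 else 0.0)
    return float(np.mean(errs))
```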
The results are presented in Table 1 and Figure 2. We trained the two layer USDMNN with batch sizes 1 and 10, sampling 10 neighbors in the SLGDA for the characteristic functions and considering all neighbors in the SLGDA for the windows. For the batch size 1, we considered 50 epochs to train the windows and 100 epochs to train the characteristic functions, while for the batch size 10 we considered 19 and 50 epochs, respectively. These batch sizes refer to the characteristic function SLGDA, and in both cases we considered a batch size of 10 for the window SLGDA. The best result was obtained with batch size 10, although it took around ten times longer to train. The SLGDA for the characteristic functions took an average of 17 epochs to reach the minimum with batch size 1, and an average of 35 epochs with batch size 10, which is expected since with batch size 1 the characteristic function is actually updated 10 times at each epoch.
The time to train these USDMNN was significantly greater than the time it took to train CDMNN in [11], where the best architecture had a total training time of less than one hour. Moreover, as opposed to what was observed with CDMNN, the best validation error was obtained with a batch size of 10 and was actually
lesser than the minimum validation error obtained in [11] before fine training (0.066). Furthermore, the minimum training loss attained via USDMNN (0.037) was lesser than the minimum training loss obtained in [11] (0.049). Therefore, the USDMNN have obtained similar empirical results as the CDMNN in [11] without any prior information, but at a greater computational cost.
## 6 Conclusion
In this paper, we proposed a hierarchical SLGDA to train USDMNN and illustrated it in a simple example. We obtained empirical results as good as those of [11], but considerably more computational resources were necessary, so high algorithmic complexity was the price paid to obtain good results without considering prior information to design a specific CDMNN architecture.
The example in this paper is only an illustration of the method, and it is necessary to improve the algorithm implementation. We are currently working on an efficient implementation of the hierarchical SLGDA that, we believe, will significantly decrease the training time, so methods based on USDMNN may also be competitive with CDMNN from a computational complexity perspective. With a more efficient algorithm, it will be possible to fine train the USDMNN and
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline b & Min. Train & Learned Train & Min Val & Total time (h) & Time to min. (h) & Epochs to min. (W) & Mean Epochs to min (f) & W visited \\ \hline
1 & 0.050 & 0.090 & 0.1571 & **5.84** & **4.81** & **42** & **17** & 589 \\
10 & **0.037** & **0.044** & **0.060** & 62.39 & 37.95 & 12 & 35 & **252** \\ \hline \end{tabular}
\end{table}
Table 1: Results of the two layer USDMNN trained with batch sizes \(b=1\) and \(b=10\) for the characteristic function SLGDA. We present the minimum training loss observed during training; the training and validation loss of the trained USDMNN; the algorithm total time and time until the minimum validation error; the number of epochs to the minimum validation error and the average number of epochs until the minimum training error with fixed windows; and the number of visited windows.
Figure 2: Result obtained after applying to the validation sample each layer of the USDMNN trained with a batch size of 10.
better study the effect of settings such as the number of layers and the window size on the model performance. An efficient method to train DMNN without the need for strong prior information may help popularize methods based on them among practitioners who do not have strong knowledge of mathematical morphology to design specific CDMNN architectures.
|
2301.08399 | Who Should I Engage with At What Time? A Missing Event Aware Temporal
Graph Neural Network | Temporal graph neural network has recently received significant attention due
to its wide application scenarios, such as bioinformatics, knowledge graphs,
and social networks. There are some temporal graph neural networks that achieve
remarkable results. However, these works focus on future event prediction and
are performed under the assumption that all historical events are observable.
In real-world applications, events are not always observable, and estimating
event time is as important as predicting future events. In this paper, we
propose MTGN, a missing event-aware temporal graph neural network, which
uniformly models evolving graph structure and timing of events to support
predicting what will happen in the future and when it will happen.MTGN models
the dynamic of both observed and missing events as two coupled temporal point
processes, thereby incorporating the effects of missing events into the
network. Experimental results on several real-world temporal graphs demonstrate
that MTGN significantly outperforms existing methods with up to 89% and 112%
more accurate time and link prediction. Code can be found on
https://github.com/HIT-ICES/TNNLS-MTGN. | Mingyi Liu, Zhiying Tu, Xiaofei Xu, Zhongjie Wang | 2023-01-20T02:22:55Z | http://arxiv.org/abs/2301.08399v1 | # Who Should I Engage with At What Time? A Missing Event Aware Temporal Graph Neural Network
###### Abstract
Temporal graph neural network has recently received significant attention due to its wide application scenarios, such as bioinformatics, knowledge graphs, and social networks. There are some temporal graph neural networks that achieve remarkable results. However, these works focus on future event prediction and are performed under the assumption that all historical events are observable. In real-world applications, events are not always observable, and estimating event time is as important as predicting future events. In this paper, we propose MTGN, a missing event-aware temporal graph neural network, which uniformly models evolving graph structure and timing of events to support predicting what will happen in the future and when it will happen. MTGN models the dynamics of both observed and missing events as two coupled temporal point processes, thereby incorporating the effects of missing events into the network. Experimental results on several real-world temporal graphs demonstrate that MTGN significantly outperforms existing methods with up to \(89\%\) and \(112\%\) more accurate time and link prediction. Code can be found on [https://github.com/HIT-ICES/TNNLS-MTGN](https://github.com/HIT-ICES/TNNLS-MTGN).
Temporal Graph Neural Network, Temporal Point Process, Missing Events, Temporal Link Prediction, Event Time Estimation
## I Introduction
Graph structured data has recently received significant attention due to its wide application scenarios in various domains such as social networks, knowledge graphs, and bioinformatics. Graph neural networks (GNNs) are developed to efficiently learn high-dimensional and non-Euclidean graph information from graphs. Most existing GNNs [1, 2] are designed for static graphs. In the real world, graphs tend to evolve continuously. For example, new friendships may be established between people in a social network. Incorporating dynamics into GNNs is a non-trivial problem.
Recently, a few temporal graph neural networks have been developed. These methods can be classified as discrete temporal GNNs and continuous temporal GNNs based on how they represent the temporal graph [3, 4]. The discrete temporal GNNs [5, 6, 7] treat the temporal graph as a sequence of graph snapshots to simplify the model, which results in their inability to capture the fully continuous evolution, as fine-grained temporal information is lost. As a result, the discrete temporal GNNs can only predict what will happen in the future but cannot predict when. The continuous temporal GNNs treat the temporal graph as a stream of events. The continuous temporal GNNs can be roughly divided into temporal point process (TPP) based methods [8, 9, 10] and non-TPP based methods [11, 12, 13]. The non-TPP based methods encode the event time as an auxiliary feature of the temporal graph structure using RNNs or Transformers, with the consequence that they still fail to predict the event time. TPP based methods model the temporal graph as a temporal point process of events and parameterize this process using a neural network. TPP can naturally model the event time so that, theoretically, the TPP based methods can predict the time of the event.
However, most TPP-based methods still treat event time prediction as a "second-class citizen" (e.g., not modeled in the optimization objective) to assist in predicting whether an event will occur. We believe that predicting when an event will occur is as important as predicting whether it will occur, e.g., a task with a 2-hour deadline requires a different response than a task with a 2-day deadline. In short, we believe that a temporal GNN should be able to answer "Who should I engage with at what time" rather than just "Who should I engage with" (as shown in Fig. 1). Predicting events and their corresponding times in one model will broaden the application scenario of temporal GNN.
Additionally, existing work often operates under the assumption that all events in a temporal graph are fully observed, but in many real-world scenarios this assumption hardly holds. We may miss observing events for a variety of reasons (the dashed lines in Fig. 1). For example, a sensor may fail at a certain time, and events are lost during that time; in social networks, we may only observe interactions on Facebook, while interactions on Twitter or offline are missing. Missing events are part of temporal graphs and should be modeled in temporal GNNs to improve their performance.
Therefore, in this work, we propose MTGN, a missing event aware method that uniformly models evolving graph structure and event time to support predicting what will happen and when it will happen. MTGN is a TPP based method that models the dynamics of both observed and missing events as two coupled TPPs. Both the observation of events and the generation of missing events depend on previously observed events as well as previously generated missing events.

Fig. 1: A toy example showing the two core tasks and the missing events phenomenon in a temporal graph. The solid lines are observed events and the dashed lines are missing events.
The main contributions are summarized as follows:
* We point out the importance of modeling the timing of events and the missing events. We present a problem formulation that unifies modeling missing events, the timing of events, and evolving graph structure.
* We propose a novel method called MTGN. MTGN can effectively learn a node's structural and temporal features in a uniform embedding. Additionally, MTGN models the dynamics of observed and missing events as two coupled TPPs parameterized with log-normal mixture distributions, which enables it to effectively estimate event time.
* We conduct extensive experiments on five real-world datasets to demonstrate the superiority of the proposed method MTGN.
The remainder of this paper is organized as follows. Section II discusses the related works. Section III presents related preliminaries. Section IV gives a detailed interpretation of MTGN. Section V shows all experimental results, as well as detailed analysis and discussion. Section VI offers some concluding remarks.
## II Related Works
### _Neural Temporal Point Process_
A temporal point process [14] is a stochastic process whose realizations consist of discrete events in time \(\mathcal{T}=\{t_{1},t_{2},...,t_{T}\},t_{i}\leq t_{i+1}\). There are two common ways to characterize a temporal point process: intensity-based methods and intensity-free methods.
Intensity-based methods [15, 16, 17] are defined in terms of the conditional intensity function \(\lambda(t)\), which specifies the dependency of the next arrival time \(t\) on the history \(\mathcal{H}_{t}=\{t_{i}\in\mathcal{T}|t_{i}<t\}\). With the given conditional intensity function \(\lambda(t)\), we can obtain the conditional probability density function as follows:
\[p(t)=\lambda(t)\underbrace{\exp(-\int_{t_{T}}^{t}\lambda(s)\text{d}s)}_{\text {survival function}} \tag{1}\]
where the survival function of the process [18] denotes that no events happen during \([t_{T},t)\). Intensity-based methods are very flexible, as \(\lambda(t)\) can take different forms (e.g., Poisson, Renewal, Self-correcting, Hawkes, etc.) to capture different interests in different scenarios. However, the drawback of intensity-based methods is also obvious: the integral term involved in the survival function means there is no closed form for Eq. (1), thus requiring Monte Carlo integration.
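To make this cost concrete, the following minimal sketch (NumPy only; the intensity function is an arbitrary illustrative choice) evaluates Eq. (1) by Monte Carlo integration of the survival term:

```
import numpy as np

def tpp_density(intensity, t, t_last, n_mc=10000):
    # Monte Carlo estimate of the integral inside the survival function
    s = np.random.uniform(t_last, t, size=n_mc)
    integral = (t - t_last) * np.mean(intensity(s))
    # Eq. (1): p(t) = lambda(t) * exp(-integral of lambda over [t_last, t))
    return intensity(t) * np.exp(-integral)

lam = lambda s: 0.5 + 0.1 * np.sin(s)   # illustrative intensity function
print(tpp_density(lam, t=2.0, t_last=1.0))
```

Every evaluation of \(p(t)\) requires such a sampling pass, which is precisely the overhead that intensity-free methods avoid.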
To reduce the computational expense of intensity-based methods, intensity-free methods do not directly model the intensity function. For example, Omi et al. [19] introduce a fully neural network (FullyNN) to model the cumulative intensity function \(\Lambda(t)=\int_{t_{T}}^{t}\lambda(s)\text{d}s\). Shchur et al. [20] directly estimate the conditional density \(p(t)\) by utilizing neural density estimation [21]. Similar to [20], Gupta et al. [22] also directly estimate the conditional density, and they are the first to introduce missing events in modeling temporal point processes. Inspired by [22], we introduce missing events into temporal graphs.
For readers who want to learn more about neural temporal point processes, we recommend the survey conducted by Shchur et al. [23], which provides more details about the background, models, application scenarios and limitations of using neural networks to solve temporal point process related problems.
### _Temporal Graph Neural Networks_
#### II-B1 Discrete Temporal Graph Neural Networks
Discrete temporal graph neural networks treat temporal graphs as a sequence of graph snapshots [24]. Most discrete dynamic embedding approaches [25, 26, 27, 28, 29] focus on learning representations of entire temporal graphs rather than node representations. Some approaches [30, 31, 32, 33, 34] are starting to focus on dynamic representations at the node level: they encode each graph snapshot using static embedding approaches [35, 36, 37] to embed each node, and then combine time-series models (e.g., LSTM [38], RNN [39]) per node to model the discrete dynamics.
The main disadvantages of discrete temporal graph neural networks are twofold:
* Graph snapshots are generated by aggregating events over a period of time, and a large amount of fine-grained temporal information is lost in the aggregation process.
* Discrete temporal GNNs simplify the temporal features. For example, they treat the temporal graph as a sequence of graph snapshots consisting of multiple time steps, while this ignores the fact that the time intervals between adjacent snapshots may vary.
These two drawbacks result in discrete temporal GNNs failing to predict event times and generally performing worse than continuous temporal graph neural networks.
#### II-B2 Continuous Temporal Graph Neural Networks
Continuous temporal graph neural networks treat temporal graphs as a stream of observed events. These methods can be roughly divided into two main categories: TPP based methods and non-TPP based methods[3].
For the non-TPP based methods, the embedding of interacting nodes is updated by an RNN/transformer based architecture according to the historical information. Representative works of this type of method are JODIE [40], TGN [41], APAN [11] and Streaming GNN [12]. JODIE is designed for user-item networks and uses two RNNs to maintain the embedding of each node, with one RNN for users and another one for items. Instead of keeping the embedding of nodes directly, TGN calculates the embedding of nodes at different times by introducing message and memory mechanisms. These methods encode the event time as an auxiliary feature of the temporal graph structure, which helps overcome the drawbacks of discrete temporal GNNs but still fails to predict the event time.
Know-Evolve [42] is the pioneer in bringing temporal point processes to dynamic graph representation learning; it models a temporal knowledge graph as multi-relational timestamped edges by parameterizing a TPP with a deep recurrent architecture. DyRep [10] is the successor of Know-Evolve. DyRep extends Know-Evolve by using TPPs to model long-term and short-term events and introduces aggregation mechanisms. LDG [43] argues that long-term events are often specified by humans and can be suboptimal and expensive to obtain. LDG uses the Neural Relational Inference (NRI) model to infer the type of events on the graph and replaces the self-attention originally used in DyRep by generating a temporal attention matrix to better aggregate neighbor information. GH [44] is another TPP based approach, which uses an adapted continuous-time LSTM for the Hawkes process. TREND [8] is a Hawkes process based approach, which captures the individual and collective characteristics of events by integrating event and node dynamics. Theoretically, these TPP based methods can predict the time of events. However, most of them still treat event time prediction as a "second-class citizen" to assist in predicting events, e.g., event time is not included in the optimization objective and no closed-form event time expectation is provided. A very recent work, EvoKG [9], which jointly models the evolving graph structure and the timing of events, has achieved state-of-the-art performance. However, EvoKG learns structural and temporal embeddings separately, which limits its performance and robustness (discussed in detail in Section V-D2).
Finally, it should be noted that none of the temporal GNNs mentioned above model missing events.
## III Preliminaries
### _Notations_
A temporal graph \(\mathcal{G}\) can be represented by a sequence of \(|E|\) discrete events \(\{e_{i}=(u_{i},v_{i},t_{i},k_{i})\}_{i=1}^{|E|}\), where \(u_{i}\) and \(v_{i}\) are the two nodes involved in the event \(e_{i}\), \(t_{i}\) is the time of the event, and \(t_{i}\leq t_{j}\Leftrightarrow i<j\). \(k_{i}\in\{0,1\}\), where \(k_{i}=1\) denotes that the event \(e_{i}\) is observed and \(k_{i}=0\) denotes that the event \(e_{i}\) is missed1. For convenience, we omit \(k\) and use \(\mathcal{O}\) (\(\mathcal{M}\)) to denote the observed (missing) temporal graph, and \(o=(u,v,t)\) (\(m=(u,v,t)\)) to denote an observed (missing) event. We preserve boldface lowercase letters (e.g., \(\mathbf{u}\)) for vectors and boldface capitals for matrices (e.g., \(\mathbf{W}\)). Frequently-used notations have been summarized in Appendix A.
Footnote 1: In this paper, we focus on undirected graphs without attributes, but the method is easily generalized to directed graphs with attributes
### _Problem Definition_
Given a temporal graph \(\mathcal{G}=\mathcal{O}\cup\mathcal{M}\) denoted as a sequence of events, our goal is to model the probability distribution \(p(\mathcal{G})\). We assume that both observed and missing events at time \(t\) depend on the history of previously generated missing events and observed events. Since in practice only the observed events \(\mathcal{O}\) can be evaluated for validation and missing events are intractable, we model the probability distribution as:
\[\begin{split} p_{\emptyset}(\mathcal{O}_{T})&=\prod _{t=1}^{T}\int_{\mathcal{M}_{t}}p_{\theta}(\mathcal{O}_{t}|\mathcal{G}_{t}^{* },\mathcal{M}_{t})p_{\theta}(\mathcal{M}_{t})\mathrm{d}\mathcal{M}_{t}\\ &=\mathbb{E}_{q_{\phi}}\prod_{t=1}^{T}\frac{p_{\theta}(\mathcal{O }_{t}|\mathcal{G}_{t}^{*},\mathcal{M}_{t})p_{\theta}(\mathcal{M}_{t}|\mathcal{G }_{t}^{*})}{q_{\phi}(\mathcal{M}_{t}|\mathcal{O}_{t},\mathcal{G}_{t}^{*})} \end{split} \tag{2}\]
where \(\bar{t}\) is the timestamp of the last observed event, and we assume that events happening in the same time interval \((\bar{t},t]\) are independent of each other; then we have:
\[p_{\theta}(\mathcal{O}_{t}|\mathcal{G}_{\bar{t}}^{*},\mathcal{M}_{t}) =\prod_{(u,v,t)\in\mathcal{O}_{t}}p_{\theta}(u,v,t|\mathcal{G}_{\bar{t}}^{*},\mathcal{M}_{t}) \tag{3}\] \[p_{\theta}(\mathcal{M}_{t}|\mathcal{G}_{\bar{t}}^{*}) =\prod_{(u,v,t)\in\mathcal{M}_{t}}p_{\theta}(u,v,t|\mathcal{G}_{\bar{t}}^{*})\] (4) \[q_{\phi}(\mathcal{M}_{t}|\mathcal{O}_{t},\mathcal{G}_{\bar{t}}^{*}) =\prod_{(u,v,t)\in\mathcal{M}_{t}}q_{\phi}(u,v,t|\mathcal{O}_{t},\mathcal{G}_{\bar{t}}^{*}) \tag{5}\]
where \(\mathcal{G}_{\bar{t}}^{*}\) denotes all observed and generated missing events until time \(\bar{t}\); \(\mathcal{O}_{t}\) and \(\mathcal{M}_{t}\) are the observed and generated missing events in the time interval \((\bar{t},t]\), and for convenience, we let the time of a missing event with the same subscript lag slightly behind that of the observed event; \(q_{\phi}(\cdot)\) is a variational approximation of the posterior distribution, which aims to generate missing events \(\mathcal{M}_{t}\) within the interval \((\bar{t},t)\), based on the historical events as well as the further observed events \(\mathcal{O}_{t}\).
Then our goal is to maximize the marginal log-likelihood of observed events, which can be achieved by maximizing the following evidence lower bound (ELBO) of the log-likelihood of observed events:
\[\begin{split}\mathcal{L}(\theta,\phi;\mathcal{O}_{T})=\mathbb{E}_{q_{\phi}}\sum_{t=1}^{T}\underbrace{\sum_{(u,v,t)\in\mathcal{O}_{t}}\log p_{\theta}(u,v,t|\mathcal{G}_{\bar{t}}^{*},\mathcal{M}_{t})}_{\mathcal{L}_{O}}\\ -\sum_{t=1}^{T}\underbrace{\sum_{(u,v,t)\in\mathcal{M}_{t}}\text{KL}(q_{\phi}(u,v,t|\mathcal{O}_{t},\mathcal{G}_{\bar{t}}^{*})||p_{\theta}(u,v,t|\mathcal{G}_{\bar{t}}^{*}))}_{\mathcal{L}_{M}}\end{split} \tag{6}\]
where \(\text{KL}(\cdot)\) is the Kullback-Leibler divergence. Similar to [9], we further decompose the event joint conditional probability in Eq. (6) as follows:
\[p_{\theta}(u,v,t|\mathcal{G}_{\bar{t}}^{*},\mathcal{M}_{t}) =p_{\theta}(u,v|\mathcal{G}_{\bar{t}}^{*},\mathcal{M}_{t})p_{\theta}(t|u,v,\mathcal{G}_{\bar{t}}^{*},\mathcal{M}_{t}) \tag{7}\] \[p_{\theta}(u,v,t|\mathcal{G}_{\bar{t}}^{*}) =p_{\theta}(u,v|\mathcal{G}_{\bar{t}}^{*})p_{\theta}(t|u,v,\mathcal{G}_{\bar{t}}^{*})\] (8) \[q_{\phi}(u,v,t|\mathcal{O}_{t},\mathcal{G}_{\bar{t}}^{*}) =q_{\phi}(u,v|\mathcal{O}_{t},\mathcal{G}_{\bar{t}}^{*})q_{\phi}(t|u,v,\mathcal{O}_{t},\mathcal{G}_{\bar{t}}^{*}) \tag{9}\]
Then, by the chain rule for relative entropy, the KL divergence term in Eq. (6) can be denoted as:
\[\begin{split}\text{KL}(q_{\phi}(u,v,t|\mathcal{O}_{t},\mathcal{G}_{ \bar{t}}^{*})||p_{\theta}(u,v,t|\mathcal{G}_{\bar{t}}^{*}))=\\ \text{KL}(q_{\phi}(u,v|\mathcal{O}_{t},\mathcal{G}_{\bar{t}}^{*}) ||p_{\theta}(u,v|\mathcal{G}_{\bar{t}}^{*}))\\ +\text{KL}(q_{\phi}(t|u,v,\mathcal{O}_{t},\mathcal{G}_{\bar{t}}^{ *})||p_{\theta}(t|u,v,\mathcal{G}_{\bar{t}}^{*}))\end{split} \tag{10}\]
The above equations suggest the components of our model from two perspectives, corresponding to the challenges mentioned in the Introduction. From the event perspective, the ELBO in Eq. (6) suggests our model consists of three components: one TPP for observed events (\(p_{\theta}(u,v,t|\mathcal{G}_{\bar{t}}^{*},\mathcal{M}_{t})\)), one prior TPP for missing events (\(p_{\theta}(u,v,t|\mathcal{G}_{\bar{t}}^{*})\)) and one posterior TPP for missing events (\(q_{\phi}(u,v,t|\mathcal{O}_{t},\mathcal{G}_{\bar{t}}^{*})\)). From the temporal graph perspective, Eq. (7) - Eq. (9) suggest our model consists of two kinds of components: graph structure modeling components (\(p_{\theta}(u,v|\mathcal{G}_{\bar{t}}^{*},\mathcal{M}_{t})\), \(p_{\theta}(u,v|\mathcal{G}_{\bar{t}}^{*})\) and \(q_{\phi}(u,v|\mathcal{O}_{t},\mathcal{G}_{\bar{t}}^{*})\)) and event time modeling components (\(p_{\theta}(t|u,v,\mathcal{G}_{\bar{t}}^{*},\mathcal{M}_{t})\), \(p_{\theta}(t|u,v,\mathcal{G}_{\bar{t}}^{*})\) and \(q_{\phi}(t|u,v,\mathcal{O}_{t},\mathcal{G}_{\bar{t}}^{*})\)).
## IV Modeling A Temporal Graph
Modeling a temporal graph is equivalent to modeling each observed and missing event in the temporal graph, i.e., Eq. (7)-Eq. (9). To model these events, we learn the observed (missing) embeddings of nodes. Unlike EvoKG [9], which learns the structural embeddings and temporal node embeddings separately, we learn the time-evolving structural dynamics and temporal characteristics of nodes in unified embeddings. The unified embeddings make the model treat events as a whole, allowing the model to have higher consistency between link prediction and time prediction, i.e., the model can obtain optimal link prediction performance and time prediction performance with the same set of parameters.
We utilize the message passing framework to learn node observed (missing) embeddings. Given concurrent observed events \(\mathcal{O}_{t}\), we summarise node \(u\)'s observed embedding as follows:
\[\mathbf{o}^{l+1,t}_{u}=\mathbf{W}^{l}_{s}\mathbf{o}^{l,t}_{u}+\frac{1}{| \mathcal{N}^{\mathcal{O}_{t}}_{u}|}\sum_{(u,v,t)\in\mathcal{O}_{t}}\mathbf{W} ^{l}_{n}\mathbf{o}^{l,t}_{v}+\mathbf{W}^{l}_{t}(t-\bar{t}^{o}_{u,v}) \tag{11}\]
where \(\mathbf{o}^{l+1,t}_{u}\) is the observed embedding of node \(u\) learned by the \(l\)-th layer of the GNN and \(\mathbf{o}^{0,t}_{u}\) is set to the static embedding \(\mathbf{o}_{u}\); \(\mathcal{N}^{\mathcal{O}_{t}}_{u}\) is node \(u\)'s neighbors in \(\mathcal{O}_{t}\) and \(\bar{t}^{o}_{u,v}=\max(\bar{t}^{o}_{u},\bar{t}^{o}_{v})\) is the last time node \(u\) or node \(v\) was involved in an observed event. \(\mathbf{W}^{l}_{\bullet}\) are learnable weights in the \(l\)-th layer of the GNN. Specifically, \(\mathbf{W}^{\bullet}_{s}\) models the self-evolution, which indicates a node evolves with respect to its previous state; \(\mathbf{W}^{\bullet}_{n}\) models the structural message passing from neighborhoods; \(\mathbf{W}^{\bullet}_{t}\) models the event time impact. In this way, Eq. (11) encodes the structural and temporal information of the observed events in a uniform embedding.
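As an illustration, the following minimal PyTorch sketch implements one layer of the update in Eq. (11); the dense per-event loop, the dictionary of last event times and the placement of the \(\mathbf{W}_{t}\) term inside the neighborhood mean are our own simplifying assumptions, not the authors' implementation:

```
import torch

def observed_update(o, events, t, t_last, W_s, W_n, W_t):
    # o: [N, d] node embeddings at layer l; events: list of (u, v) pairs in O_t
    # t_last: dict mapping (u, v) -> last time node u or v was in an observed event
    out = o @ W_s.T                              # self-evolution term W_s o_u
    agg = torch.zeros_like(o)
    deg = torch.zeros(o.size(0), 1)
    for u, v in events:
        # neighbor message plus event time impact (W_t is a d-dimensional vector)
        agg[u] += o[v] @ W_n.T + W_t * (t - t_last[(u, v)])
        deg[u] += 1
    return out + agg / deg.clamp(min=1)          # mean over the neighborhood in O_t
```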
For the generated missing events \(\mathcal{M}_{t}\) (the generation algorithm is described in Section IV-D), we summarise node \(u\)'s missing embedding as follows:
\[\mathbf{m}^{l+1,t}_{u}=\mathbf{U}^{l}_{s}\mathbf{m}^{l,t}_{u}+\frac{1}{|\mathcal{N}^{\mathcal{M}_{t}}_{u}|}\sum_{(u,v,t)\in\mathcal{M}_{t}}\mathbf{U}^{l}_{n}\mathbf{m}^{l,t}_{v}+\mathbf{U}^{l}_{t}(t-\bar{t}^{m}_{u,v}) \tag{12}\]
It is reasonable to model \(\tau\) rather than directly model \(t\), as \(t\) can be very large and thus destabilize the training of the neural network.
### _Parameterization of prior TPP for missing events_
For the graph structure part, as with observed events, we decompose \(p_{\theta}(u,v|\mathcal{G}_{\bar{t}}^{*})\) as follows:
\[p_{\theta}(u,v|\mathcal{G}_{\bar{t}}^{*})=p_{\theta}(u|\mathcal{G}_{\bar{t}}^{* })p_{\theta}(v|u,\mathcal{G}_{\bar{t}}^{*}) \tag{26}\]
and parameterize each component separately:
\[p_{\theta}(u|\mathcal{G}_{\bar{t}}^{*}) =\text{softmax}(\text{MLP}([\bar{\mathbf{g}}_{u}^{\bar{t}};\bar{\mathbf{g}}^{\bar{t}}])) \tag{27}\] \[p_{\theta}(v|u,\mathcal{G}_{\bar{t}}^{*}) =\text{softmax}(\text{MLP}([\bar{\mathbf{g}}_{u}^{\bar{t}};\bar{\mathbf{g}}^{\bar{t}}])) \tag{28}\]
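A possible realization of such a softmax-over-nodes parameterization is sketched below (PyTorch; the MLP width and the broadcasting of the graph-level summary are illustrative assumptions):

```
import torch
import torch.nn as nn

class NodeDistribution(nn.Module):
    # Scores every node against a graph-level summary and normalizes with a
    # softmax over nodes, in the spirit of Eqs. (27)-(28).
    def __init__(self, d):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, g_nodes, g_graph):
        # g_nodes: [N, d] per-node embeddings; g_graph: [d] graph-level summary
        x = torch.cat([g_nodes, g_graph.expand_as(g_nodes)], dim=-1)
        return torch.softmax(self.mlp(x).squeeze(-1), dim=-1)   # [N]
```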
For the event time part, we also use the log-normal mixture distribution to model the PDF of the missing event time. We first construct the context vector \(\mathbf{c}^{\theta}=[\mathbf{g}_{u}^{*,\bar{t}};\mathbf{g}_{v}^{*,\bar{t}}]\) and obtain the log-normal mixture parameters \(\mathbf{\omega}^{\theta},\mathbf{\mu}^{\theta},\mathbf{\sigma}^{\theta}\) in the same way as Eq. (24) with \(\mathbf{c}^{\theta}\) as input. Then the PDF of the time interval \(\Delta=t^{\prime}-\bar{t}\) is as follows:
\[p_{\theta}(\Delta|u,v,\mathcal{G}_{\bar{t}}^{*})=p_{\theta}(\Delta|\mathbf{\omega}^{\theta},\mathbf{\mu}^{\theta},\mathbf{\sigma}^{\theta})=\sum_{k=1}^{K}\frac{\omega_{k}^{\theta}}{\Delta\sigma_{k}^{\theta}\sqrt{2\pi}}\exp\left(-\frac{(\log\Delta-\mu_{k}^{\theta})^{2}}{2(\sigma_{k}^{\theta})^{2}}\right) \tag{29}\]
Modeling the time interval \(\Delta\) ensures that the missing event time is larger than the latest observed time \(\bar{t}\).
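A direct NumPy evaluation of this mixture density might look as follows (a sketch; the parameter values are dummies):

```
import numpy as np

def lognorm_mixture_pdf(delta, w, mu, sigma):
    # Weighted sum of K log-normal component densities, as in Eq. (29)
    delta = np.asarray(delta, dtype=float)[..., None]   # broadcast over K components
    comp = np.exp(-(np.log(delta) - mu) ** 2 / (2 * sigma ** 2))
    comp /= delta * sigma * np.sqrt(2 * np.pi)
    return (w * comp).sum(axis=-1)

w = np.array([0.3, 0.7]); mu = np.array([0.0, 1.0]); sigma = np.array([0.5, 0.8])
print(lognorm_mixture_pdf([0.5, 2.0], w, mu, sigma))
```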
### _Parameterization of posterior TPP for missing events_
For the graph structure part, we decompose \(q_{\phi}(u,v|\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})\) as follows:
\[q_{\phi}(u,v|\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t}) = q_{\phi}(u|\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})q_{\phi}(v |u,\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t}) \tag{30}\]
and then parameterize each component separately:
\[q_{\phi}(u|\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t}) =\text{softmax}(\text{MLP}([\bar{\mathbf{g}}^{\bar{t}};\bar{\mathbf{o}}^{\bar{t}}])) \tag{31}\] \[q_{\phi}(v|u,\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t}) =\text{softmax}(\text{MLP}([\bar{\mathbf{g}}_{u}^{\bar{t}};\bar{\mathbf{g}}^{\bar{t}};\bar{\mathbf{o}}_{u}^{\bar{t}};\bar{\mathbf{o}}^{\bar{t}}])) \tag{32}\]
For the event time part, we construct the context vector \(\mathbf{c}^{\phi}=[\mathbf{g}_{u}^{*,\bar{t}};\mathbf{g}_{v}^{*,\bar{t}};\mathbf{o}_{u}^{*,\bar{t}};\mathbf{o}_{v}^{*,\bar{t}}]\) and obtain the log-normal mixture parameters \(\mathbf{\omega}^{\phi},\mathbf{\mu}^{\phi},\mathbf{\sigma}^{\phi}\) in the same way as Eq. (24) with \(\mathbf{c}^{\phi}\) as input. Then we have the density of the inter-arrival time \(\Delta\) as:
\[q_{\phi}(\Delta|u,v,\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})=q_{\phi}(\Delta|\mathbf{\omega}^{\phi},\mathbf{\mu}^{\phi},\mathbf{\sigma}^{\phi})\odot\mathbb{I}(\Delta<t-\bar{t})=\mathbb{I}(\Delta<t-\bar{t})\odot\sum_{k=1}^{K}\frac{\omega_{k}^{\phi}}{\Delta\sigma_{k}^{\phi}\sqrt{2\pi}}\exp\left(-\frac{(\log\Delta-\mu_{k}^{\phi})^{2}}{2(\sigma_{k}^{\phi})^{2}}\right) \tag{33}\]
where \(\mathbb{I}(\cdot)\) is an indicator function. Unlike the density of the observed events' inter-arrival time (Eq. (25)) and that of the prior missing events' inter-arrival time (Eq. (29)), the density of the posterior missing events' inter-arrival time is a truncated log-normal mixture distribution, which ensures that a missing event time \(t^{\prime}\) sampled from this distribution satisfies \(\bar{t}<t^{\prime}\leq t\).
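Sampling from this truncated mixture can be done, for example, by simple rejection sampling (a sketch, not necessarily the sampler used by MTGN; its efficiency degrades when the truncation bound cuts off most of the probability mass):

```
import numpy as np

def sample_truncated_lognorm_mixture(w, mu, sigma, bound, rng=None):
    # Draw from the untruncated mixture until the sample falls below the bound
    rng = rng or np.random.default_rng()
    while True:
        k = rng.choice(len(w), p=w)                       # pick a mixture component
        delta = rng.lognormal(mean=mu[k], sigma=sigma[k])
        if delta < bound:                                 # enforce t_bar < t' <= t
            return delta
```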
```
Input : Current time \(t\); last observed time \(\bar{t}\); missing events sampling ratio \(Q\).
Output : Generated missing events \(\mathcal{M}_{t}\) and the KL divergence \(\mathcal{L}_{M}\) between the prior and posterior TPPs for missing events.
1  \(\mathcal{M}_{t}\leftarrow\{\}\)
2  \(\mathcal{L}_{M}\leftarrow 0\)
3  /* executed in parallel */
4  for \(i=1,\ldots,Q|\mathcal{O}_{t}|\) do
5      /* Sample node \(u\) */
6      Compute \(p_{\theta}(\cdot|\mathcal{G}_{\bar{t}}^{*})\) based on Eq. (27).
7      Compute \(q_{\phi}(\cdot|\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})\) based on Eq. (31).
8      \(\mathcal{L}_{M}\leftarrow\mathcal{L}_{M}+\text{KL}(q_{\phi}(\cdot|\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})\,||\,p_{\theta}(\cdot|\mathcal{G}_{\bar{t}}^{*}))\)
9      \(u\sim q_{\phi}(\cdot|\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})\)
10     /* Sample node \(v\) */
11     Compute \(p_{\theta}(\cdot|u,\mathcal{G}_{\bar{t}}^{*})\) based on Eq. (28).
12     Compute \(q_{\phi}(\cdot|u,\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})\) based on Eq. (32).
13     \(\mathcal{L}_{M}\leftarrow\mathcal{L}_{M}+\text{KL}(q_{\phi}(\cdot|u,\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})\,||\,p_{\theta}(\cdot|u,\mathcal{G}_{\bar{t}}^{*}))\)
14     \(v\sim q_{\phi}(\cdot|u,\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})\)
15     /* Sample time interval \(\Delta\) */
16     Compute \(p_{\theta}(\cdot|u,v,\mathcal{G}_{\bar{t}}^{*})\) based on Eq. (29).
17     Compute \(q_{\phi}(\cdot|u,v,\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})\) based on Eq. (33).
18     \(\mathcal{L}_{M}\leftarrow\mathcal{L}_{M}+\text{KL}(q_{\phi}(\cdot|u,v,\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})\,||\,p_{\theta}(\cdot|u,v,\mathcal{G}_{\bar{t}}^{*}))\)
19     \(\Delta\sim q_{\phi}(\cdot|u,v,\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})\)
20     \(t^{\prime}\leftarrow\bar{t}+\Delta\)
21     \(\mathcal{M}_{t}\leftarrow\mathcal{M}_{t}\cup\{(u,v,t^{\prime})\}\)
22 end for
23 return \(\mathcal{M}_{t}\), \(\mathcal{L}_{M}\)
```
**Algorithm 1** Missing Events Generation
### _Generating missing events_
Algorithm 1 illustrates the missing events generation steps. For each missing event \((u,v,t^{\prime})\), we sample \(u\), \(v\) and \(\Delta\) in order according to the corresponding posterior probability distribution. Unlike the recursive sampling of missing events in [22], we directly sample \(Q|\mathcal{O}_{t}|\) missing events in parallel to improve model efficiency.
During the generation of missing events, we also calculate the KL divergence term in the ELBO. For the discrete probability distributions \(p\) (\(p_{\theta}(\cdot|\mathcal{G}_{\bar{t}}^{*})\) and \(p_{\theta}(\cdot|u,\mathcal{G}_{\bar{t}}^{*})\)) and \(q\) (\(q_{\phi}(\cdot|\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})\) and \(q_{\phi}(\cdot|u,\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})\)), the KL divergence is defined as:
\[KL(q||p)=\sum_{x\in nodes(\mathcal{G})}q(x)\log\frac{q(x)}{p(x)} \tag{34}\]
For \(\text{KL}(q_{\phi}(\cdot|u,v,\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})||p_{ \theta}(\cdot|u,v,\mathcal{G}_{\bar{t}}^{*}))\), there is no closed-form expression, so we estimate it using Monte Carlo sampling:
\[\text{KL}(q_{\phi}(\cdot|u,v,\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})||p_{\theta}(\cdot|u,v,\mathcal{G}_{\bar{t}}^{*}))\approx\frac{1}{S}\sum_{s=1}^{S}\log\frac{q_{\phi}(\Delta_{s}|u,v,\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t})}{p_{\theta}(\Delta_{s}|u,v,\mathcal{G}_{\bar{t}}^{*})},\quad\Delta_{s}\sim q_{\phi}(\cdot|u,v,\mathcal{G}_{\bar{t}}^{*},\mathcal{O}_{t}) \tag{35}\]
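Reusing the lognorm_mixture_pdf sketch given earlier, the Monte Carlo estimate in Eq. (35) can be computed as follows (for simplicity this sketch ignores the truncation of the posterior density):

```
import numpy as np

def mc_kl_lognorm_mixtures(wq, muq, sq, wp, mup, sp, n_samples=64, rng=None):
    # Draw samples from q and average log q(x) - log p(x), cf. Eq. (35)
    rng = rng or np.random.default_rng()
    k = rng.choice(len(wq), size=n_samples, p=wq)
    x = rng.lognormal(mean=muq[k], sigma=sq[k])
    log_q = np.log(lognorm_mixture_pdf(x, wq, muq, sq))
    log_p = np.log(lognorm_mixture_pdf(x, wp, mup, sp))
    return float(np.mean(log_q - log_p))
```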
### _Parameter Learning_
We optimize MTGN by maximizing the ELBO:
\[\max_{\theta,\phi}\mathcal{L}(\theta,\phi;\mathcal{O}_{T}) \tag{36}\]
The parameter learning algorithm is shown in Algorithm 2. Directly training on the entire event sequence is inefficient due to the following issues:

1. Computing resource issue: maintaining the entire history of each node's embeddings for backpropagation requires high computation and memory costs.

2. Gradient-related issue: MTGN contains GRUs, which may suffer from exploding or vanishing gradients.

To address the above-mentioned issues, truncated backpropagation through time (BPTT) training is conducted: rather than backpropagating over the entire event sequence, model parameters are optimized every \(b\) time steps.
```
Input: Observed events \(\mathcal{O}\); maximum number of epochs \(max\_epochs\); number \(L\) of GNN layers; number of time steps \(b\) for BPTT.
1  for epoch \(=1,\ldots,max\_epochs\) do
2      for \(t\in\textit{Timestep}(\mathcal{O})\) do
3          Compute \(\mathbf{o}_{\bullet}^{L,t}\) and \(\mathbf{o}_{\bullet}^{*,t}\) based on Eq. (11) and Eq. (13).
4          Generate missing events \(\mathcal{M}_{t}\) and compute \(\mathcal{L}_{M}\) based on Algorithm 1.
5          Compute \(\mathbf{m}_{\bullet}^{L,t}\) and \(\mathbf{m}_{\bullet}^{*,t}\) based on Eq. (12) and Eq. (14).
6          Compute \(\mathcal{L}_{O}\) based on Eq. (6), Eq. (21), Eq. (22) and Eq. (25).
7          \(\mathcal{L}\leftarrow\mathcal{L}+\mathcal{L}_{O}+\mathcal{L}_{M}\)
8          Optimize model parameters every \(b\) time steps.
9      end for
10 end for
```
**Algorithm 2** Parameter Learning
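A schematic PyTorch training loop corresponding to Algorithm 2 might look as follows; model.step and model.detach_state are hypothetical interfaces standing in for the per-time-step computations of lines 3-7:

```
import torch

def train_epoch(model, timesteps, optimizer, b=8):
    # One epoch of Algorithm 2 with truncated BPTT every b time steps
    loss = 0.0
    for i, t in enumerate(timesteps, start=1):
        loss_o, loss_m = model.step(t)    # the losses L_O and L_M at time step t
        loss = loss + loss_o + loss_m
        if i % b == 0:
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            model.detach_state()          # cut the gradient graph (hypothetical)
            loss = 0.0
```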
### _Complexity Analysis_
In this section, we analyze the complexity of MTGN. In particular, here \(\mathcal{O}(\cdot)\) stands for the standard complexity notation instead of observed events. We use \(N_{t}\) (\(QN_{t}\)) to denote the number of observed events (generated missing events) at time \(t\in\{1,\ldots,T\}\), and \(|V|\) to denote the number of nodes in the temporal graph. The time complexity of the GNN used to learn node observed (missing) embeddings is \(\mathcal{O}(LN_{max})\) (\(\mathcal{O}(LQN_{max})\)), where \(N_{max}\) is the maximum number of observed events over all timestamps. The time complexity of the GRUs used in Eq. (13) and Eq. (14) is \(\mathcal{O}(|V|)\), although we can update only the involved nodes to reduce the complexity to \(\mathcal{O}(N_{max})\). The time complexity of the missing events generation algorithm at each time is \(\mathcal{O}(LQN_{max})\).
Overall, the time complexity of MTGN is \(\mathcal{O}(T(L+2LQ+1)N_{max})=\mathcal{O}(TN_{max})\). This indicates that MTGN is linear in the total number of observed events, which suggests that MTGN can be applied to large-scale temporal graphs.
## V Experiments
In this section, we present a comprehensive set of experiments to demonstrate the effectiveness of MTGN.
### _Datasets_
We use five publicly available datasets for experiments. The datasets are described as follows:
* **LSED2**. LSED is a dynamic interaction network built upon openly accessible news, aiming to provide a relatively complete and accurate evolution trace of the "Chinese Internet+" service ecosystem. Each node represents a stakeholder or a service, and each edge denotes that two nodes were mentioned in the same news. The missing events in LSED could be unreported news. Our work was motivated while working on this dataset. Footnote 2: [https://github.com/HIT-ICES/LSED](https://github.com/HIT-ICES/LSED)
* **ENRON**[46]. ENRON is a large set of email messages. Each node represents an employee of Enron corporation, and each edge denotes an email sent from one employee to another. The missing events in ENRON could be offline face-to-face proximity.
* **UCI**[47]. UCI is an online community of students from the University of California, Irvine. Each node represents a student, and each edge denotes a message sent or received between users. The missing events in UCI could be offline face-to-face proximity or messages sent or received on other social platforms.
* **HYPERTEXT**[48]. HYPERTEXT is a human contact network of face-to-face proximity. Each node denotes a person who attended the ACM Hypertext 2009 conference. Edges denote that two people had a conversation during a certain time interval.
* **RT-POL**[49]. RT-POL is a network of political communication on Twitter. Nodes are Twitter users and edges denote whether the users have retweeted each other. RT-POL exhibits a highly segregated partisan structure, which is very different from the user communication networks.
The basic statistics of these datasets are summarized in Table I. From Table I, it is clear that the 5 selected datasets represent differently characterized temporal graphs, so the experiments conducted on these 5 datasets are able to demonstrate the generalization capability of MTGN. It should be noted that for events involving the same two nodes in the test set, we only keep the earliest event time for testing.
### _Baselines_
We employ two static GNNs (GCN [1] and GAT [2]), two discrete temporal GNNs (DySAT [50] and EvolveGCN [6]) and two continuous temporal GNNs (KnowEvolve [42] and EvoKG [9]) as baselines. These baselines have been introduced in Section II. Table II compares the baselines with our model.
For the static GNNs, we generate a cumulative graph from observed events in the training set, and edge timestamps are ignored. For the discrete temporal GNNs, we uniformly generate 10 snapshots of the same time span and set the time window to 3.
Please refer to Appendix C for more details on the hyper-parameters of the baselines and MTGN.
### _Tasks and Metrics_
We study the effectiveness of MTGN by evaluating our model and baselines on the following two tasks:
1. **Future link prediction**. For a given test observed event \((u,?,t)\), we replace \(?\) with each node in the graph and compute the score (Eq. (22) for MTGN). Then, we rank all possible nodes in descending order of the score and report the rank of the ground truth node. In this paper, we report HITS@{3,5,10}, which is the percentage of ground truth nodes ranked in the top 3, 5 and 10 predictions.
2. **Event time prediction.** For a given test observed event \((u,v,?)\), the task aims to predict when the event is next observed. The next time point \(\hat{t}\) can be obtained by computing the expected value of Eq. (25) as follows: \[\mathbb{E}_{p_{\theta,\tau}}[\tau]=\sum_{k=1}^{K}\omega_{k}\exp(\mu_{k}+\sigma _{k}^{2}/2)\] (37) In this paper, we report the Mean Absolute Error (MAE), which is the average of the absolute difference between the predicted time and the ground truth. A minimal sketch of this computation is given after this list.
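For reference, the closed-form expectation in Eq. (37) is straightforward to evaluate (NumPy sketch with dummy parameters):

```
import numpy as np

def lognorm_mixture_mean(w, mu, sigma):
    # Eq. (37): E[tau] = sum_k w_k * exp(mu_k + sigma_k^2 / 2)
    return float(np.sum(w * np.exp(mu + sigma ** 2 / 2)))

w = np.array([0.3, 0.7]); mu = np.array([0.0, 1.0]); sigma = np.array([0.5, 0.8])
print(lognorm_mixture_mean(w, mu, sigma))
```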
### _Experiment Results_
#### V-D1 Overall
Table III shows the performance comparison of MTGN with state-of-the-art baselines. All methods are evaluated with 5 independent runs on all datasets.
**Future link prediction task.** Compared with discrete temporal GNNs (DySAT and EvolveGCN), static GNNs (GCN and GAT) often obtain better link prediction performance on datasets with a lower percentage of inductive events (ENRON and HYPERTEXT). However, on datasets with high inductive event percentages, both discrete and continuous temporal GNNs (except KnowEvolve) show an obvious advantage. For example, on RT-POL, a dataset with \(83.1\%\) of inductive events in the test set, GCN and GAT almost completely fail, but temporal GNNs still maintain some performance. This is due to the fact that GCN and GAT have memorized the structure of the cumulative graph, so they tend to predict the events that have been observed in history. Discrete temporal GNNs do capture some temporal information. However, fine-grained temporal information is lost during snapshot generation, which leads them to underperform continuous GNNs. KnowEvolve is a continuous GNN based on the temporal point process, but the method simply treats the temporal graph as a sequence of events without considering the evolving graph structure. Thus it does not work well in the future link prediction task. Among all baselines, EvoKG achieves the best performance on four datasets, as it models event time and evolving graph structure jointly. Results show MTGN significantly outperforms all baselines, with up to \(112.355\%\) more HITS@3 on LSED than the second-best method.
**Event time prediction task.** The MAE metric for static GNNs and discrete GNNs is marked with \(\times\) in Table III as they cannot estimate event time. The MAE metric for KnowEvolve on RT-POL is marked with INF due to a numerical overflow. MTGN significantly outperforms the other two continuous GNNs, with up to \(89.275\%\) less MAE on HYPERTEXT than the second-best method.
#### V-D2 Detailed comparison to EvoKG

In this section, we illustrate the advantage of using unified embeddings to learn the structural dynamics and temporal characteristics of nodes by comparing in detail with EvoKG, which learns the structural embeddings and temporal embeddings separately.
Fig. 2 shows the changes in HITS@5 and MAE performance of EvoKG and MTGN on all datasets as the number of training epochs increases. It can be seen from the results that EvoKG does not optimize the link prediction performance and the event time prediction performance simultaneously, due to learning structural embeddings and temporal embeddings separately. On LSED, the link prediction performance overfits before the event time prediction performance converges. However, on the other datasets (except UCI), the event time prediction performance overfits before the link prediction performance converges.
MTGN addresses the above issues by learning the structural dynamics and temporal characteristics of nodes in unified embeddings. As shown in Fig. 2, MTGN can obtain the best performance of link prediction and event time prediction at the same training epoch. Moreover, event time prediction in MTGN is less likely to overfit.
To further explain the difference between EvoKG and MTGN, we show the learned graph structural embeddings (used for the link prediction task, specifically Eq. (16) in MTGN) and temporal characteristics embeddings (used for event time prediction, specifically Eq. (15) in MTGN) in Fig. 3 by using tSNE [51] to project node embeddings from the high-dimensional space into a 2D space. From the figure, we can find the following differences:
1. The distributions of the structural and temporal embeddings separately learned by EvoKG are significantly different. In contrast, MTGN uses unified embeddings to learn structural and temporal embeddings (the only difference is whether static embeddings are concatenated), so the embeddings used for link prediction and event time prediction are similarly distributed overall.
2. EvoKG's structural embeddings do not distinguish well between nodes that were last observed at different times, as they do not model temporal information. In contrast, both the structural and temporal embeddings learned by MTGN can clearly distinguish nodes that were recently involved in an observed event from other nodes that were not involved in an event for a long time.

TABLE II: Comparison of key properties between MTGN and the baselines.

| Key Properties | MTGN | GCN | GAT | DySAT | EvolveGCN | KnowEvolve | EvoKG |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Model graph structure | \(\surd\) | \(\surd\) | \(\surd\) | \(\surd\) | \(\surd\) | \(\surd\) | \(\surd\) |
| Model event time | \(\surd\) | \(\times\) | \(\times\) | \(\surd\) | \(\surd\) | \(\surd\) | \(\surd\) |
| Predict event time \(t\) | \(\surd\) | \(\times\) | \(\times\) | \(\times\) | \(\times\) | \(\surd\) | \(\surd\) |
| Model missing events | \(\surd\) | \(\times\) | \(\times\) | \(\times\) | \(\times\) | \(\times\) | \(\times\) |

TABLE III: The future link prediction results and event time prediction results. The best result is marked in **bold** and the second best result is marked with an underline. * indicates the best result significantly outperforms the second best result based on a two-tail \(t\)-test (\(p\)-value \(<0.05\))
### _Ablation Study_
Through the detailed comparison with EvoKG in the previous section, we verified the superiority of using unified embeddings to learn nodes' structural information and temporal characteristics. In this section, we demonstrate the superiority of modeling missing events by conducting an ablation study. We perform the following test:
* **MTGN-wo-m**. In this variant, we do not generate missing events (i.e. make \(\mathcal{G}=\mathcal{O}\)).
The results are shown in Fig. 4. We can observe that MTGN consistently outperforms MTGN-wo-m in all cases, except for the HITS@{3,5,10} metrics on the HYPERTEXT dataset, where MTGN is competitive with MTGN-wo-m. Modeling missing events does not bring significant benefits to HYPERTEXT mainly because HYPERTEXT is an offline face-to-face dataset, where the proportion of missing events is relatively small. However, the experimental results also show that modeling missing events does not bring significant disadvantages even under the condition that the observed events are relatively complete. Additionally, we also find that modeling missing events improves the performance of link prediction more than it improves the performance of event time prediction. The link prediction improvement from modeling missing events is more significant for large, sparse graphs than for small networks.
### _Parameter Sensitivity_
The proposed MTGN involves a number of parameters that may affect its performance. In this section, we discuss two parameters that are related to the main idea of this paper: the missing events ratio \(Q\) and the mixture component number \(K\). The impacts of other parameters, such as the number of GNN layers \(L\), are given in Appendix D. To save computation time, we only test on LSED, UCI and HYPERTEXT with 5 independent experiments.
Fig. 3: tSNE visualization of node embeddings on HYPERTEXT and LSED. We use the color map to indicate the last time a node was involved in an observed event.
Fig. 2: Changes in HITS@5 and MAE performance of EvoKG and MTGN on five datasets as the number of training epochs increases
#### V-F1 Impact of missing events ratio \(Q\)
We test the cases where the missing events ratio \(Q\) is {0.2, 0.5, 1, 2, 4, 6, 8, 10}. The results are shown in Fig. 5.
For link prediction, the performance improves as \(Q\) increases. However, when \(Q\) increases beyond an appropriate value, the link prediction performance decreases. Link prediction performance is significantly affected by \(Q\) on datasets with a large number of missing events; for example, the performance gap reaches more than 20% on LSED, but only about 5% on UCI and HYPERTEXT.
For the event time prediction, the performance improves with increasing \(Q\) on the LSED dataset, decreases with increasing \(Q\) on the UCI dataset, and remains relatively stable (about 2% performance fluctuation) on the HYPERTEXT dataset.
In addition, increasing \(Q\) brings additional memory consumption and increases the training time, thus decreasing the efficiency of model training. In real-world applications, we need to choose an appropriate \(Q\) to trade off performance and efficiency.
#### V-F2 Impact of mixture component number \(K\)
We test the cases where the mixture component number \(K\) is {2, 4, 8, 16, 32, 64}. The results are shown in Fig. 6.
For link prediction, the performance on the dataset LSED is significantly affected by the value of \(K\) (up to \(25\%\) performance fluctuation), while on UCI and HYPERTEXT the impact is insignificant. On LSED, the link prediction performance first increases very rapidly and then slows down with increasing \(K\).

Fig. 4: Performance comparison for the ablation study.

Fig. 5: Relative HITS@5 and MAE scores with respect to the missing events ratio \(Q\) on the LSED, UCI and HYPERTEXT datasets.

Fig. 6: Relative HITS@5 and MAE scores with respect to the mixture component number \(K\) on the LSED, UCI and HYPERTEXT datasets.
In contrast to link prediction, for event time prediction the mixture component number \(K\) has a smaller effect on the MAE score on LSED (about \(5\%\) performance fluctuation) and a larger effect on UCI and HYPERTEXT (up to \(25\%\) performance fluctuation). In general, the higher \(K\) is, the better the event time prediction performance.
## VI Conclusion
Many complex systems in the real world can be naturally represented using graph data, and they are continuously evolving. Learning on temporal graphs is therefore critical to leveraging these complex systems to support downstream applications.
In this paper, we propose a missing event aware temporal graph neural network called MTGN. MTGN focuses on solving two challenges that are overlooked in existing temporal graph neural networks: 1) uniformly modeling event time and evolving graph structure; and 2) modeling missing events. For uniformly modeling event time and evolving graph structure, MTGN proposes a variant of the message passing neural network that can learn a node's structural and temporal information in one embedding. For modeling missing events, MTGN treats missing events as a latent variable and connects them to the variational autoencoder framework. MTGN denotes temporal graphs as a sequence of observed/missing events and uses three temporal point processes to model the sequence, which enables MTGN to estimate future event times. Experimental results on several real-world temporal graphs demonstrate the effectiveness of MTGN.
In our future work, we will make every effort to improve MTGN in two aspects: 1) exploring an adaptive missing events ratio \(Q\) instead of setting it manually; and 2) improving the sampling efficiency of missing events.
## Acknowledgment
The research in this paper is partially supported by the National Key Research and Development Program of China (No 2021YFB3300700) and the National Science Foundation of China (61832004, 61832014).
|
2309.01574 | Object-Size-Driven Design of Convolutional Neural Networks: Virtual Axle
Detection based on Raw Data | As infrastructure ages, the need for efficient monitoring methods becomes
increasingly critical. Bridge Weigh-In-Motion (BWIM) systems are crucial for
cost-efficient load and thus residual service life determination of road and
railway infrastructure. However, conventional BWIM systems require additional
sensors for axle detection, which have to be installed in potentially
inaccessible locations or in locations that interfere with bridge operation.
This study addresses this challenge by replacing dedicated axle detectors with
a novel approach to real-time detection of train axles using sensors
arbitrarily placed on bridges. The proposed Virtual Axle Detector with Enhanced
Receptive Field (VADER) has been validated on a single-track railway bridge,
demonstrating that it achieves to detect 99.9% of axles with a spatial error of
3.69cm using only acceleration measurements. Using raw data as input
outperforms the state-of-the-art spectrogram-based method in both speed and
memory usage by 99%, making real-time application feasible for the first time.
Additionally, we introduce the Maximum Receptive Field (MRF) rule, a novel
approach to optimise hyperparameters of Convolutional Neural Networks (CNNs)
based on the size of objects, which in this case relates to the fundamental
frequency of a bridge. The MRF rule effectively narrows the hyperparameter
search space, potentially replacing the need for extensive hyperparameter
tuning. Since the MRF rule is theoretically applicable to all unstructured
data, it could have implications for a wide range of deep learning problems
from earthquake prediction to object recognition. | Henik Riedel, Robert Steven Lorenzen, Clemens Hübler | 2023-09-04T12:53:54Z | http://arxiv.org/abs/2309.01574v3 | # Raw Data Is All You Need: Virtual Axle Detector with Enhanced Receptive Field
###### Abstract
Rising maintenance costs of ageing infrastructure necessitate innovative monitoring techniques. This paper presents a new approach for axle detection, enabling real-time application of Bridge Weigh-In-Motion (BWIM) systems without dedicated axle detectors. The proposed method adapts the Virtual Axle Detector (VAD) model to handle raw acceleration data, which allows the receptive field to be increased. The proposed Virtual Axle Detector with Enhanced Receptive field (VADER) improves the \(F_{1}\) score by 73% and spatial accuracy by 39%, while cutting computational and memory costs by 99% compared to the state-of-the-art VAD. VADER reaches a \(F_{1}\) score of 99.4% and a spatial error of 4.13 cm when using a representative training set and functional sensors. We also introduce a novel receptive field (RF) rule for an object-size driven design of Convolutional Neural Network (CNN) architectures. Based on this rule, our results suggest that models using raw data could achieve better performance than those using spectrograms, offering a compelling reason to consider raw data as input.
keywords: Sound Event Detection, Continuous Wavelet Transformation, Moving Load Localisation, Nothing-on-Road, Free-of-Axle-Detector, Bridge Weigh-In-Motion, Structural Health Monitoring, Field Validation
## 1 Introduction
As infrastructure ages, novel methods are imperative for efficient monitoring. Precise knowledge of the actual stress on infrastructure enables more accurate determination of remaining service life, identification of overloaded vehicles, and more efficient planning for new constructions. Weigh-In-Motion (WIM) systems offer a means to ascertain axle loads during regular traffic flow [1; 2; 3; 4; 5; 6]. Conventional WIM systems necessitate installation directly into roadways or tracks, often requiring traffic disruption. They are therefore expensive and hardly suitable for a nationwide investigation of axle loads. In contrast, Bridge WIM (BWIM) systems are positioned beneath bridge structures, facilitating reuse and repositioning for multiple BWIM measurements [7; 8; 9; 10]. Thus, the installation of a BWIM system is less expensive, and the system can be used for a multitude of time-limited investigations. For large-scale investigations of axle loads, BWIM systems are therefore hardly dispensable. However, effective utilization of BWIM systems typically demands knowledge of axle load positioning [10; 11; 12]. For this purpose, additional sensors currently need to be installed at specific locations (facing the roadway, on the cross girders, or near the supports) that depend on the bridge structure [13; 14; 15; 16]. Only in the case of thin slab bridges can BWIM strain sensors be reliably used for axle detection [17]. However, for a large scale application of BWIM measurements, a method for axle localization that is independent of bridge type and sensor position would be of great benefit, effectively reducing cost and risk.
Therefore, our study aims at an axle detection method using existing sensors of arbitrary BWIM systems. To determine the axle positions, axles are usually detected at at least two locations. With the distance between the locations and the difference in time, the axle position can then be interpolated or extrapolated for any point in time. For detecting events in time series, the term Sound Event Detection (SED) predominates in the literature [18; 19; 20; 21]. One of the tasks of SED is, for example, speech recognition. Here, events refer to words and audible sound is used as input. However, this methodology can also be applied to other vibration signals as input (such as accelerations in bridges) and other events (such as a train axle passing a bridge). For SED problems, time series are typically analyzed in the form of 2D spectrograms using Convolutional Neural Networks (CNNs) [22; 23] or, more recently, Transformers [24], despite the competitive performance of 1D CNNs [25]. Consequently, research often concentrates on identifying the most suitable spectrogram type [26; 27]. Notably, in conventional tasks like speech recognition and acoustic scene classification, log mel spectrograms dominate [19; 28; 24], despite some studies showing better or comparable outcomes using raw data [26; 29; 30; 31]. In the context of axle localization, Continuous Wavelet Transformations (CWTs) have demonstrated their suitability as features for axle detection [32; 33; 34; 12; 35; 36]. Given that CWTs even compete well in speech recognition [26] and log mel spectrograms were tailored to human auditory perception, CWTs were adopted in our earlier work for axle detection [36]. However, it is unclear whether CWTs are actually necessary, whether raw measurement data is sufficient as model input, and how this affects the model's performance.
In this study, we propose a novel approach capable of real-time axle detection (e.g. \(x\) axles were detected) and axle localization (e.g. an axle was located at time \(y\) at location \(z\)) using sensors placed arbitrarily. Consequently, BWIM systems can now conduct real-time axle detection without the need for additional sensors. To validate our approach, we adapted the VAD model [36] to handle raw measurement data and performed several tests on the dataset from Lorenzen et al. [37].
## 2 Methodology
In this section, we outline the methodology used for the Virtual Axle Detector with Enhanced Receptive field (VADER) in comparison to the predecessor model Virtual Axle Detector (VAD). The data set from Lorenzen et al. [37] is briefly summarized. Finally, the training parameters and evaluation metrics are described.
### Data Acquisition
The dataset from Lorenzen et al. [37] was acquired on a single-span steel trough railway bridge situated on a long-distance traffic line in Germany (fig. 1). This bridge is 18.4 meters long with a free span of 16.4 meters and a natural frequency of about 6.9 Hz for the first bending mode. The natural frequency is variable with a non-linear dependence on the load. Measurement data from ten seismic uniaxial accelerometers with a sample rate of 600 Hz installed along the bridge are used as features.
Utilizing a supervised learning approach, our model requires labels as the ground truth from which to learn. For the creation of the labels, four rosette strain gauges were used to ensure that the accuracy is sufficient to evaluate the model. The rosette strain gauges were used to determine the axle positions and velocities, which were then interpolated or extrapolated for the positions of the accelerometers [36]. So that strain gauge measurements are not necessary for the application, two scenarios for labeling are investigated in this study: existing data from a failed axle detector and trains with a differential global positioning system (DGPS). The dataset consists of 3,745 train passages with one label vector per sensor. The label vector contains ones at the time points when a train axle is positioned above the respective sensor, and zeros elsewhere (fig. 2). Therefore, the sum of the label vector yields the number of axles of the corresponding train. The model then only receives the acceleration signals of a single sensor at a time and predicts the times at which an axle is located above this sensor. With two or more acceleration sensors, the axle distances and velocities can thus also be determined. Due to the sampling rate, an inaccuracy of 6-38 cm is assumed for the label creation, depending on the train's velocity and the sensor's position. For more detailed information, please refer to the paper by Lorenzen et al. [36].
Figure 1: Bridge and sensor setup a) side view b) top view with sensor labels, accelerometers \(x\)-ordinate and strain gauge distances c) cross section. [37]
Figure 2: The labels represent the moment when a train axle is located over the respective sensor. Here, as an example, potential label vectors are shown for three fictitious sensors and a train up to the time shown. Derived from Lorenzen et al. [36].
### Data Transformation
The VAD model uses CWTs as features with three mother wavelets and two sets of 16 frequencies per mother wavelet [36]. This results in an input tensor of \(16\times 6\times n_{\mathrm{s}}\), with \(n_{\mathrm{s}}\) denoting the number of samples in the acceleration signal of a sensor. The proposed VADER model is optimized for raw acceleration signals without any preprocessing. Therefore, VAD operates in the frequency-time domain, while VADER operates in the time domain.
### Data Split
The training and validation sets were implemented using a five-fold cross-validation approach with a holdout test set. Initially, one-sixth of the data was separated for the test set. The remaining data was divided into five folds. During training, a different fold was used as the validation set in each iteration, while the other four folds were used for training. Training concluded based on the validation set results, and the model was consistently tested on the same test set. This process allowed the model to be evaluated five times using the same data, providing a more accurate estimation of its generalization capability [38].
To further investigate the applicability of the proposed concept, the dataset was divided into folds based on the number of train axles in two different ways. In the first split, the model is intended to function as a surrogate for a malfunctioning axle detector. For this purpose, the data set was split for all folds and the test set in a **stratified** fashion (fig. 3a), resulting in 3,110 passages with 112,038 individual axles for training and 623 passages with 22,472 individual axles for testing. In the second split, the model was provided only with axle positions from a specific train type equipped with a differential global positioning system (**DGPS**). Since the exact construction series of the trains for determining the train types is not known, the train length in axles was used as a simplified criterion instead. Trains with the most frequent axle count were used for the five-fold cross-validation sets, while all other train lengths were included in the test set (fig. 3b). This results in 1,916 passages with 61,312 individual axles for training and 1,817 passages with 73,198 individual axles for testing. With the stratified splits, training is done with about 5/6 of the data and testing with 1/6 of the data, whereas the splits in the DGPS set for training and testing are about the same size. Thus, in the case of the DGPS splits, in addition to having significantly less data available for training, these data are also not representative of all trains. It should be noted that with non-representative data sets, inadequate results are usually to be expected [38]. Thus, the DGPS splits are suitable for testing the models under particularly difficult conditions.
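For illustration, the stratified variant of this split could be produced with scikit-learn along the following lines (a sketch with dummy axle counts; the tooling actually used in this study is not specified here):

```
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

axle_counts = np.random.randint(8, 64, size=3733)   # dummy axle count per passage
idx = np.arange(len(axle_counts))
# Hold out one sixth of the passages as a test set, stratified by axle count
# (note: stratification requires at least two passages per axle count)
rest, test = train_test_split(idx, test_size=1/6, stratify=axle_counts, random_state=0)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
folds = [(rest[tr], rest[va]) for tr, va in skf.split(rest, axle_counts[rest])]
```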
### Model Definition
The VADER model is a modification of the VAD model designed for one-dimensional instead of two-dimensional inputs, with some minor adjustments to the state-of-the-art. VAD and VADER were implemented as fully convolutional neural networks (FCNs) [39] to handle inputs of arbitrary lengths. This property is particularly advantageous for the detection of train axles, as the crossing time varies significantly based on the train's speed and the number of axles.
The U-Net concept [40] was chosen for the model architecture, originally developed for semantic segmentation of images, but also adaptable for segmenting time series data. The fundamental concept of the U-Net architecture is the encoder path and the decoder path. In the encoder path, the resolution is progressively reduced and information is compressed, while in the decoder path, the resolution is increased back to the original dimensions (fig. 4). The encoder path aims to capture details at various resolutions, while the decoder path integrates these details into the appropriate context. The intermediate results from the encoder and decoder paths at the same resolution are combined through skip connections.
Our implementation of the U-Net comprises a well-established combination of convolution blocks (CBs), residual blocks (RBs) [41], max pooling layers, concatenate layers, and transposed convolution layers (fig. 4, 5). A CB always consists of a convolution layer with ReLU activation and a normalization layer. For the VAD model, batch normalization [42]
was chosen, and for the VADER model, the more recent group normalization [43] with 16 channels per group was employed. Group normalization was shown to be less sensitive to other parameters like batch size and was therefore chosen to effectively reduce the hyperparameter space. The RBs were implemented similarly to the 50-layer or larger variants of ResNet [41]. The final layer in both models is a convolution layer with only one filter and sigmoid activation. In VADER, the first group normalization layer has only one group, making it functionally similar to layer normalization [43]. Additionally, Gaussian noise (GN) with a noise standard deviation of 0.5 was added after the first layer, aiming to aid the model's generalization ability.
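A CB as described could be sketched in Keras as follows (assuming a TensorFlow version that ships layers.GroupNormalization; this is illustrative, not the published implementation):

```
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters, kernel_size=9, groups=16):
    # CB: 1D convolution with ReLU activation followed by group normalization
    # (filters must be divisible by the number of groups)
    x = layers.Conv1D(filters, kernel_size, padding="same", activation="relu")(x)
    return layers.GroupNormalization(groups=groups)(x)
```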
The VAD model (fig. 4) uses the default kernel size of \(3\times 3\) and the default pooling size of 2 for 2D inputs [38]. This halves the temporal resolution with each pooling step. In the bottleneck layers, the temporal resolution is reduced by a factor of 16, giving the model a receptive field of \(16\times 3=48\) samples with a kernel size of 3. Thus, the model can examine at least 48 samples at the same time.
The bridge has a natural frequency of 6.9 Hz (depending on the load), and to distinguish the contribution of the bridge from the axle loads' structural response, a receptive field of at least \(\frac{600\ \mathrm{Hz}}{6.9\ \mathrm{Hz}}\approx 87\) samples would be necessary. Through the CWT transformation, information about frequencies up to approximately 3.4 Hz has been encapsulated into individual sample values. Therefore, the CWT transformation can effectively increase the receptive field to \(\frac{600\ \mathrm{Hz}}{3.4\ \mathrm{Hz}}\approx 176\) samples.
Since the VADER model (fig. 5) processes only 1D data, the same number of parameters are required for a kernel size of \(9\times 1\) as for the kernel size of \(3\times 3\) in the VAD model. As the kernel size is larger, the pooling size can also be increased. The pooling size was chosen such that the receptive field and hence the kernels can cover a frequency of 1 Hz. Therefore, we propose the receptive field (RF) rule to calculate the required size of the largest receptive field in order to capture the lowest frequency of interest
\[y\geq\frac{f_{\mathrm{s}}}{f_{\mathrm{l}}}, \tag{1}\]
with \(f_{\mathrm{s}}\) as the sampling frequency, \(f_{\mathrm{l}}\) as the lowest frequency of interest, and \(y\) as the necessary size of the model's largest receptive field in samples. Hence, to cover signals of 1 Hz at a sampling rate of 600 Hz, the receptive field must span at least 600 samples (eq. 1). With a pooling size of 3, this results in a receptive field of \(3^{4}\times 9=729\) samples. Therefore, in the case of VADER, the actual receptive field is over 15 times larger and the effective receptive field is over 4 times larger than that of VAD, while using the same number of parameters.
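The numbers above can be reproduced with a few lines (a small sketch of eq. 1; the kernel and pooling sizes are taken from the text):

```python
def receptive_field(kernel_size, pooling_size, pooling_steps):
    """Largest receptive field (in input samples) of the U-Net bottleneck:
    kernel_size positions, each pooling_size ** pooling_steps samples apart."""
    return pooling_size ** pooling_steps * kernel_size

f_s, f_l = 600.0, 1.0            # sampling rate and lowest frequency of interest (Hz)
print(f_s / f_l)                 # eq. (1): at least 600 samples required
print(receptive_field(3, 2, 4))  # VAD:   2**4 * 3 =  48 samples
print(receptive_field(9, 3, 4))  # VADER: 3**4 * 9 = 729 samples >= 600
```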
For the VADER model, we were able to double the number of RBs, while doing so with the VAD model led to overfitting and worse results. As a result, the VADER model has approximately twice as many learnable parameters as the VAD model overall.

Figure 3: Train lengths defined by number of axles in both splits for the first combination of folds of a five-fold cross-validation.

Figure 4: Definition of the Virtual Axle Detection model (VAD) with colored boxes corresponding to the following layers: CB (light purple), RB (yellow), max pooling (red), concatenate (green), transposed convolution (blue) and reshaping skip connection (purple arrow). Dimensions of the feature maps are given at the corresponding boxes, with T samples at the bottom right, feature maps at the bottom and frequencies at the left.

Figure 5: Definition of the Virtual Axle Detection with Enhanced Receptive field model (VADER) with colored boxes corresponding to the following layers: CB (light purple), RB (yellow), max pooling (red), concatenate (green), transposed convolution (blue) and reshaping skip connection (purple arrow). Dimensions of the feature maps are given at the corresponding boxes, with T samples at the bottom right, feature maps at the bottom and frequencies at the left.
The TensorFlow library [44] was used for implementation of the model and PlotNeuralNet [45] was used for visualising it.
### Training Parameters
For the loss function, binary focal loss was chosen with an optimized \(\gamma\) value of 2.5 [36]. To evaluate the model's performance, the \(F_{1}\) score (eq. 2) was used as the standard metric for imbalanced data sets [38]:
\[F_{1}=2\times\frac{precision\times recall}{precision+recall} \tag{2}\]
where,
\[precision=\frac{TruePositives}{TruePositives+FalsePositives} \tag{3}\]
\[recall=\frac{TruePositives}{TruePositives+FalseNegatives} \tag{4}\]
To determine when an axle is considered correctly detected (\(TruePositive\)), two spatial threshold values were defined. The first threshold of 200 cm corresponds to the minimum assumed distance between two axles, and the second threshold of 37 cm corresponds to the maximum expected error in label creation. Training was conducted for an arbitrary maximum of 300 epochs with an unchanged batch size of 16 [36]. Within an epoch, all training data is iterated through once. The Adam optimization algorithm, originally proposed by Kingma and Ba [46], was used with the default initial learning rate of 0.001. After two epochs without improvement in the \(F_{1}\) score on the validation set, the learning rate was reduced by a factor of 0.3 (the default factor of 0.1 slowed down training too much), and the training was terminated after four epochs without improvement.
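The reported training setup could be sketched as follows (a runnable sketch with toy data; the validation \(F_{1}\) monitoring used in the paper would require a custom metric, so val_loss is monitored here to keep the sketch self-contained):

```python
import numpy as np
import tensorflow as tf

# Toy signals/labels standing in for the windowed acceleration measurements
# and their per-sample axle labels.
x = np.random.randn(32, 600, 1).astype("float32")
y = (np.random.rand(32, 600, 1) > 0.99).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, 9, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(1, 1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryFocalCrossentropy(gamma=2.5))

callbacks = [
    # Reduce the learning rate by a factor of 0.3 after 2 stagnant epochs.
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.3, patience=2),
    # Terminate training after 4 epochs without improvement.
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=4),
]
model.fit(x, y, validation_split=0.25, epochs=300, batch_size=16,
          callbacks=callbacks, verbose=0)
```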
## 3 Results and Discussion
**Training and validation results:** Generally, using raw data instead of CWTs significantly reduces computation time and memory requirements. For the 6 CWTs with 16 scales each, as proposed in Lorenzen et al. [36], a 12-second traversal requires 1.4 seconds and about 5.7 MB of memory just for the transformation. In contrast, our approach takes only 0.022 seconds from raw data to prediction, with about 61.1 KB of memory usage. Therefore, when using raw data, inference becomes at least 65 times faster while using only about 1 % of the memory for the input. In our case with 10 sensors, the axle detection for a train with 64 axles would take \(1.4\ \mathrm{s}\times 10\ \mathrm{sensors}\times 64\ \mathrm{axles}=896\ \mathrm{s}\approx 15\ \mathrm{min}\) with VAD and \(0.022\ \mathrm{s}\times 10\ \mathrm{sensors}\times 64\ \mathrm{axles}=14.08\ \mathrm{s}\) with VADER. Even for more realistic scenarios with, e.g., 3 sensors, VAD would still need minutes, while VADER takes seconds to detect the axles; this does not even include the other side of the bridge, additional tracks, and the reduced computing power available for on-site evaluation. With trains potentially only a few minutes apart, the higher efficiency of VADER is thus needed for real-world application.
To compare the two models as effectively as possible, an initial investigation was conducted to determine whether GN could also aid the generalization capability of the VAD model. Contrary to expectations, the model's performance actually decreased with the addition of GN, and as a result, it was not further considered in subsequent analyses (fig. 6).
In the case of the DGPS split, the VAD model achieves a similar \(F_{1}\) score on the training set as VADER. However, it can be observed that, on the validation set, the VAD model starts to overfit around the third epoch (fig. 7a). In contrast, the VADER model initially achieves better results on the validation set, and it is only around the 15th epoch that the \(F_{1}\) score on the training set becomes higher. Since the two curves are very close to each other, this is not usually considered overfitting.
Based on the loss, it becomes even more evident that the VAD model starts overfitting around the 6th epoch, as the validation loss begins to rise back to a value similar to that of the initial epochs (fig. 7b). In the case of the VADER model, there is also an indication of a tendency towards overfitting, although the validation loss remains at a low level and increases only slightly above its minimum.
For the VAD model, the \(F_{1}\) score (fig. 7a) and the loss (fig. 7b) develop relatively similarly to each other. In contrast, for the VADER model, the \(F_{1}\) score improves monotonically until it converges, while the loss starts to deteriorate after about half of the epochs. In this context, the loss can be seen as a measure of model variance, since it is calculated on a per-sample basis, counting only exact matches (no spatial error). This makes the loss sensitive to minor deviations from the label. The \(F_{1}\) score, on the other hand, can be regarded as a measure of model bias, as it is calculated for each event (in this case, each train axle) and allows for a certain spatial error (SE). For the SE, the temporal error is calculated first: the number of samples by which the prediction deviates from the label. Using the axle speed, the temporal error is then converted into the SE. The \(F_{1}\) score is a good descriptor of the model's overall performance. Thus, the bias hardly decreases after the halfway point of training, while the variance increases. This suggests that the model is overfitting and that training should be terminated earlier. However, as the increase in variance is only marginal, its influence on this investigation is considered minor.
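The temporal-to-spatial conversion and the threshold test could be sketched as follows (the axle speed is an illustrative value, not from the data set):

```python
def spatial_error_cm(pred_sample, label_sample, speed_m_s, f_s=600.0):
    """Temporal error in samples -> spatial error in cm, via the axle speed."""
    return abs(pred_sample - label_sample) / f_s * speed_m_s * 100.0

def is_true_positive(pred_sample, label_sample, speed_m_s, threshold_cm):
    return spatial_error_cm(pred_sample, label_sample, speed_m_s) <= threshold_cm

print(spatial_error_cm(100, 105, speed_m_s=30.0))  # 5 samples -> 25.0 cm
print(is_true_positive(100, 150, 30.0, 200.0))     # 50 samples -> 250 cm -> False
```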
In the stratified split, the difference between the two models becomes more pronounced. In terms of the \(F_{1}\) score, the VAD model fails to achieve results comparable to the VADER model even on the training data (fig. 7c) and is therefore underfitting. The training of both models is terminated approximately 10 epochs later than in the DGPS split. This suggests that, due to the greater diversity of training data, the models overfit later and/or to a lesser extent. For the VAD model, the overfitting is less pronounced in terms of loss (fig. 7d), and the \(F_{1}\) score exhibits a nearly convergent trend (fig. 7c). For the VADER model, the loss shows a more noticeable increase and overfitting, which could also be attributed to the logarithmic scaling of the figure (fig. 7d). The \(F_{1}\) score of the VADER model follows a similar trend for both splits (fig. 7c and 7a), indicating a lack of significant overfitting.
**Test results:** For the test results of the models, the learned parameters from the epoch with the best \(F_{1}\) score on the validation set were chosen. Using these parameters, the models were then evaluated on the previously unseen test set. This procedure was repeated for all folds, so that both models (VADER and VAD) made predictions for both threshold values (200 cm and 37 cm) and both splits (stratified and DGPS).
The VADER model achieves significantly higher accuracy for both threshold values (200 cm and 37 cm) and both splits (stratified and DGPS) (fig. 8a, 8b). In general, it can be observed that the accuracy of both models is considerably better for the stratified split compared to the DGPS split. In the stratified split, the VADER model achieves such strong results that even the 25th percentile attains an \(F_{1}\) score of 100 %. Notably, for the 200 cm threshold (fig. 8a), VADER attains a comparable accuracy with the more challenging split (DGPS), as VAD does with the simpler split (stratified). However, this observation is not consistent for the 37 cm threshold (fig. 8b).
For each threshold, split and metric, the average (avg) and standard deviation (SD) were calculated for all samples of the corresponding 5 folds. To better assess the risk of using VAD and VADER, the error reduction (ER) was calculated (right column in tab. 1):
\[ER=100\ \%-\frac{E_{\text{VADER}}}{E_{\text{VAD}}}\times 100\ \% \tag{5}\]
where,

\[E_{\text{model}}=\begin{cases}100\;\%-F_{1},&\text{for $F_{1}$}\\ SE,&\text{for $SE$}\end{cases} \tag{6}\]

Figure 6: Comparison of the impact of GN as data augmentation on VAD model performance. \(F_{1}\) scores for training and validation data sets with a single train length with confidence interval determined over the five folds.

Figure 7: Results on training and validation sets per epoch with a single train length with confidence interval determined over the five folds.
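Equations 5 and 6 amount to the following small computation (a sketch; the printed value reproduces the worked example discussed below):

```python
def error_reduction(e_vader, e_vad):
    """Eq. (5): relative error reduction of VADER w.r.t. VAD, in percent."""
    return 100.0 - e_vader / e_vad * 100.0

# F1 scores of 95.7 % (VADER) and 88.3 % (VAD) correspond, via eq. (6),
# to errors of 4.3 and 11.7 percentage points.
print(round(error_reduction(100 - 95.7, 100 - 88.3)))   # -> 63
```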
In 31 out of 32 cases, VADER performs better than VAD (tab. 1). VAD only performs better in the SD of the \(F_{1}\) score with a threshold value of 37 cm and the stratified dataset. However, the error rate (eq. 5) of VADER compared to VAD is reduced on average by:
\[100\;\%-\frac{100\;\%-95.7\;\%}{100\;\%-88.3\;\%}\times 100\;\%=63\;\%\]
This means that the single better SD of the \(F_{1}\) score merely indicates that the VAD model is consistently, and significantly, worse than the VADER model.
On average for all splits and thresholds, VADER reduced the error rate by 64.82 % while also reducing the SE by 37.59 % in comparison to VAD.
The spatial error was converted into a spatial accuracy so that all box plots have the same y-axis. Spatial accuracy is defined as 100 % for exact hits and 0 % for a spatial deviation that corresponds to the threshold value. Generally, VADER performs better than VAD in all quantiles across all plots. It is noteworthy that, for the DGPS splits, the difference from sensor R3 to the other sensors is smaller.
For the \(F_{1}\) score from the DGPS splits with a threshold value of 200 cm, the differences between VAD and VADER are still quite large. Here, the 25 % quantiles of VADER are mostly as good as or better than the 75 % quantiles of VAD (fig. 9b). With the lower threshold value of 37 cm, although the box plots overlap much more, the medians of VADER are usually above 90 %, while the medians of VAD are in the range of 50-60 % (fig. 9d). The VADER model achieves \(F_{1}\) scores on the degraded sensor R3 comparable to those VAD achieves on the remaining sensors (fig. 9b and 9d). A similar trend is seen in the spatial accuracy (fig. 9f).
Without the sensor R3, both models show significant improvement (tab. 2). For a threshold value of 200 cm and the stratified split, VADER achieves an \(F_{1}\) score of 99.4 %, reducing the error rate by 79.9 % compared to the VAD model. Just as before, VADER significantly improves the results in 31 out of 32 cases compared to its predecessor. On average for all splits and thresholds, VADER reduced the error rate by 72.84 % while also reducing the spatial error by 39.3 % for non-degraded sensors.
Instead of considering only the length of the train, we also examined the influence of the position of the axle within the train (fig. 10). Here, it is noticeable that the graphs for the two models and the two splits are roughly parallel to each other. VADER performs just as well or better on the more challenging **DGPS** split than VAD does on the simpler **stratified** split.
Figure 9: Metrics evaluated per sensor.
The FCN can directly learn to filter the data comparably to the CWT when necessary. In order to achieve comparable or better results with raw data, we propose the RF rule for calculating the required size of a model's largest receptive field (eq. 1). Based on our results, we were already able to show the potential of raw data as input when the RF rule is taken into account.
Furthermore, we demonstrated that acceleration data is suitable for axle detection and localization. It is particularly noteworthy that, even when training with only one type of train and thus a non-representative training set, we were able to achieve very good results on other train types, demonstrating that the model generalizes exceptionally well. In this case, VADER was trained solely on trains with 32 axles but could still detect 96.4 % of the axles of all other train types, with an average spatial error of 18.6 cm. Trained on a representative training set, VADER was able to detect 99.4 % of the axles with an error of just 4.13 cm. In addition to improved accuracy, VADER allows for inference that is 65 times faster while using only 1 % of the memory. This makes the use of virtual axle detectors in real time and on edge devices possible for the first time.
With a reduction of 72.84 % in the error rate and 39.3 % in the spatial error despite only minor model modifications, it is evident that there is significant potential yet to be unlocked. For VADER, too, the signals of the acceleration sensors are evaluated individually; a joint evaluation of the signals could further increase the accuracy. Another approach could be a combination of raw input data, an FCN-based spectrogram-like data transformation, and Transformer-based classification. This way, the FCN could learn how to optimally transform the data, while the Transformer model would be utilized to identify even more complex correlations and relationships.
## Author Contributions
**Henrik Riedel:** Conceptualization, investigation, methodology, software, validation, visualization and writing - original draft. **Steven Robert Lorenzen:** Funding, supervision and writing - review & editing. **Clemens Hübler:** Resources, supervision and writing - review & editing.
## Acknowledgments
The research project ZEKISS (www.zekiss.de) is carried out in collaboration with the German railway company DB Netz AG, Wölfel Engineering GmbH and GMG Ingenieurgesellschaft mbH. It is funded by the mFund (mFund, 2020) promoted by the Federal Ministry of Transport and Digital Infrastructure.
The research project DEEB-INFRA (www.deep-infra.de) is carried out in collaboration with the sub-company DB Campus of Deutsche Bahn AG, AIT GmbH, Revotec zt GmbH and iSEA Tec GmbH. It is funded by the mFund (mFund, 2020) promoted by the Federal Ministry of Transport and Digital Infrastructure.
Figure 10: \(F_{1}\) score in relation to the axle number in driving direction for a threshold of 200 cm. Above is a histogram of the axle numbers (shown as steps) and on the right is a kernel density plot of the \(F_{1}\) scores.
## Data and Source Code
The data [47] as well as the source code [48] used in this paper is published and contains:
1. All measurement data
2. Matlab code to label data and save as text files
3. Python code for transformation, training, evaluation and plotting.
|
2307.15907 | An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks | Deep neural networks, (DNNs, a.k.a. NNs), have been widely used in various
tasks and have been proven to be successful. However, the accompanying expensive
computing and storage costs make the deployments in resource-constrained
devices a significant concern. To solve this issue, quantization has emerged as
an effective way to reduce the costs of DNNs with little accuracy degradation
by quantizing floating-point numbers to low-width fixed-point representations.
Quantized neural networks (QNNs) have been developed, with binarized neural
networks (BNNs) restricted to binary values as a special case. Another concern
about neural networks is their vulnerability and lack of interpretability.
Despite the active research on the trustworthiness of DNNs, few approaches have been
proposed for QNNs. To this end, this paper presents an automata-theoretic
approach to synthesizing BNNs that meet designated properties. More
specifically, we define a temporal logic, called BLTL, as the specification
language. We show that each BLTL formula can be transformed into an automaton
on finite words. To deal with the state-explosion problem, we provide a
tableau-based approach in real implementation. For the synthesis procedure, we
utilize SMT solvers to detect the existence of a model (i.e., a BNN) in the
construction process. Notably, synthesis provides a way to determine the
hyper-parameters of the network before training. Moreover, we experimentally
evaluate our approach and demonstrate its effectiveness in improving the
individual fairness and local robustness of BNNs while maintaining accuracy to
a great extent. | Ye Tao, Wanwei Liu, Fu Song, Zhen Liang, Ji Wang, Hongxu Zhu | 2023-07-29T06:27:28Z | http://arxiv.org/abs/2307.15907v1 | # An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks
###### Abstract
Deep neural networks (DNNs, a.k.a. NNs) have been widely used in various tasks and have been proven to be successful. However, the accompanying expensive computing and storage costs make deployments in resource-constrained devices a significant concern. To solve this issue, quantization has emerged as an effective way to reduce the costs of DNNs with little accuracy degradation by quantizing floating-point numbers to low-width fixed-point representations. Quantized neural networks (QNNs) have been developed, with binarized neural networks (BNNs), restricted to binary values, as a special case. Another concern about neural networks is their vulnerability and lack of interpretability. Despite the active research on the trustworthiness of DNNs, few approaches have been proposed for QNNs. To this end, this paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties. More specifically, we define a temporal logic, called BLTL, as the specification language. We show that each BLTL formula can be transformed into an automaton on finite words. To deal with the state-explosion problem, we provide a tableau-based approach in the real implementation. For the synthesis procedure, we utilize SMT solvers to detect the existence of a model (i.e., a BNN) in the construction process. Notably, synthesis provides a way to determine the hyper-parameters of the network before training. Moreover, we experimentally evaluate our approach and demonstrate its effectiveness in improving the individual fairness and local robustness of BNNs while maintaining accuracy to a great extent.
## 1 Introduction
Deep Neural Networks (DNNs) are increasingly used in a variety of applications, from image recognition to autonomous driving, due to their high accuracy in classification and prediction tasks [27, 29]. However, two critical challenges emerge, high cost and a lack of trustworthiness, which impede their further development.
On the one hand, a modern DNN typically contains a large number of parameters, which are usually stored as 32-bit floating-point numbers (e.g., GPT-4 contains about 100 trillion parameters [14]); thus, an inference often demands more than a billion floating-point operations. As a result, deploying a modern DNN requires huge computing and storage resources, which is challenging for resource-constrained embedded devices. To tackle this issue, quantization has been introduced, which compresses a network by converting floating-point numbers to low-width fixed-point representations, so that it can significantly reduce both memory and computing costs using fixed-point arithmetic, with a relatively small side effect on the network's accuracy [23].
On the other hand, neural networks are known to be vulnerable to input perturbations, namely, slight input disturbances may dramatically change their output [12, 3, 28, 4, 35, 5, 6, 7]. In addition, NNs are often treated as black boxes [17], and we truly lack an understanding of the decision-making process inside the "box". As a result, a natural concern is whether NNs can be trustworthy, especially in safety-critical scenarios, where erroneous behaviors might lead to serious consequences. One promising way to tackle this problem is formal verification, which defines properties that we expect the network to satisfy and rigorously checks whether the network meets our expectations. Numerous verification approaches have been proposed recently for this purpose [17]. Nevertheless, these approaches in general ignore rounding errors in quantized computations, making them inapplicable to quantized neural networks (QNNs). It has been demonstrated that specifications that hold for a floating-point DNN may not necessarily hold after quantizing the inputs and/or parameters of the DNN [3, 13]. For instance, a DNN that is robust to given input perturbations might become non-robust after quantization. Compared to DNN verification [17, 18, 20, 21, 19, 36, 15], verifying QNNs is a more challenging and less explored problem. Evidence shows that the verification problem for QNNs is harder than for DNNs [16], and only a few works are specialized for verifying QNNs [1, 8, 13, 16, 24, 26, 32, 33, 34, 31].
In this paper, we concentrate on BNNs (i.e., binarized neural networks), a special type of QNN. Although formal verification has been the primary explored approach to verifying (quantized) neural networks, we pursue another promising line: synthesizing the expected binarized neural networks directly. In other words, we aim to construct a neural network that satisfies the expected properties we specify, rather than verifying an existing network's compliance with those properties. To achieve this, we first propose BLTL, an extension of \(\text{LTL}_{f}\) (namely, LTL defined on finite words), as the specification language. This logic can conveniently describe data-related properties of BNNs. We then provide an approach to converting a BLTL formula to an equivalent automaton. The synthesis task then boils down to finding a path from an initial state to an accepting state in the automaton.
Unfortunately, such a method suffers from the state-explosion problem. To mitigate this issue, we observe that it is not necessary to synthesize the entire BNN, since the desired properties are only related to some specific hyper-parameters of the network. To this end, we propose a tableau-based approach: to judge whether a path has been successfully detected, we check the satisfiability of the associated BLTL formulas and convert the problem into an IDL-solving problem, which can be efficiently solved. Besides, we prove the existence of a tracing-back threshold, which allows us to backtrack earlier and avoid trace searching that is unlikely to lead to a solution. The solution given by the solver provides the hyper-parameters of the BNN, including the length of the network and crucial input-output relations of blocks. Afterwards, one can perform block-wise training to obtain a desired BNN.
We implement a prototype synthesizing tool and evaluate our approach on local robustness and individual fairness. The experiments demonstrate that our approach can effectively improve the network's reliability compared to the baseline, especially for individual fairness.
The main contributions of this work are summarized as follows:
* We present a new temporal logic, called BLTL, for describing properties of BNNs, and provide an approach to transforming BLTL formulas into equivalent finite-state automata.
* We propose an automata-theoretic synthesis approach that determines the hyper-parameters of a BNN model before training.
* We implement a prototype synthesis tool and evaluate the effectiveness on two concerning properties, demonstrating the feasibility of our method.
**Related Work.** For BNNs, several verification approaches have been proposed. Earlier work reduces the BNN verification problem to hardware verification (i.e., verifying combinatorial circuits), for which SAT solvers are harnessed [8]. Following this line, [24] proposes a direct encoding from the BNN verification problem into the SAT problem. [25] studies the effect of BNN architectures on the performance of SAT solvers and uses this information to train SAT-friendly BNNs. [1] provides a framework for approximate quantitative verification of BNNs with PAC-style guarantees via approximate SAT model counting. Another line of BNN verification encodes a BNN and its input region into a binary decision diagram (BDD), and then one can verify some properties of the network by analyzing the BDD. [26] proposes an Angluin-style learning algorithm to compile a BNN on a given input region into a BDD, and utilizes a SAT solver as an equivalence oracle to query. [32] has developed a more efficient BDD-based quantitative verification framework by exploiting the internal structure of BNNs. Few works have been dedicated to QNN verification so far. [13] shows that the properties guaranteed by a DNN are not preserved after quantization. To resolve this issue, they introduce an approach to verifying QNNs by using SMT solvers in bit-vector theory. Later, [16] proves that verifying QNNs with bit-vector specifications is **PSPACE**-hard. More recently, [34, 31] reduce the verification problem to integer linear constraint solving, which is significantly more efficient than the SMT-based approach.
**Outline.** The rest of the paper is organized as follows: In Section 2, we introduce preliminaries. We present the specification language BLTL in Section 3. In Section 4, we show how to translate a BLTL formula into an equivalent automaton, which is the basis of the tableau-based approach for synthesis; technical details are given in Section 5. The proposed approach is implemented and evaluated in Section 6. We conclude the paper in Section 7.
## 2 Preliminaries
We denote by \(\mathbb{R}\), \(\mathbb{N}\), and \(\mathbb{B}\) the set of real numbers, natural numbers, and Boolean domain \(\{0,1\}\), respectively. We use \(\mathbb{R}^{n}\) and \(\mathbb{B}^{n}\) to denote the set of real number vectors and binary vectors with \(n\) elements, respectively. For \(n\in\mathbb{N}\), let \([n]\) be the set \(\{0,1,2,\ldots,n-1\}\). We will interchangeably use the terminologies 0-1 vector and binary vector in this paper. For a binary vector \(\mathbf{b}\), we use \(\mathsf{dec}(\mathbf{b})\) to denote its corresponding decimal number, and conversely let \(\mathsf{bin}(d)\) be the corresponding binary vector which encodes the number \(d\). For example, let \(\mathbf{b}=(0,1,1)^{\mathrm{T}}\), then we have \(\mathsf{dec}(\mathbf{b})=3\). Note that \(\mathsf{bin}(\mathsf{dec}(\mathbf{b}))=\mathbf{b}\) and \(\mathsf{dec}(\mathsf{bin}(d))=d\). For two binary vectors \(\mathbf{a}=\left(a_{0},\ldots,a_{n-1}\right)^{\mathrm{T}}\) and \(\mathbf{b}=\left(b_{0},\ldots,b_{n-1}\right)^{\mathrm{T}}\) with the same length, we denote by \(\mathbf{a}\sim\mathbf{b}\) if \(a_{i}\sim b_{i}\) for all \(i\in[n]\), otherwise \(\mathbf{a}\not\sim\mathbf{b}\), where \(\sim\in\{>,\geq,<,\leq,=\}\). Note that \(\mathbf{a}\neq\mathbf{b}\) if \(a_{i}\neq b_{i}\) for some \(i\in[n]\).
A (vectorized) Boolean function takes a 0-1 vector as input and returns another 0-1 vector. Hence, it is essentially a mapping from integers to integers when each 0-1 vector \(\mathbf{b}\) is viewed as an integer \(\mathsf{dec}(\mathbf{b})\). We denote by \(\mathbf{I}_{n}\) the identity function such that \(\mathbf{I}_{n}\left(\mathbf{b}\right)=\mathbf{b}\), for any \(\mathbf{b}\in\mathbb{B}^{n}\), where the subscript \(n\) may be dropped when it is clear from the context. We use _composition_ operation \(\circ\) to represent the function composition among Boolean functions.
A _binarized neural network_ (BNN) is a feed-forward neural network, composed of several internal blocks and one output block [26, 32]. Each internal block comprises 3 layers and can be viewed as a mapping \(f:\{-1,1\}^{n}\rightarrow\{-1,1\}^{m}\). Slightly different from the internal blocks, the output block outputs the classification label to which the highest activation corresponds, and thus can be seen as a mapping \(\mathsf{out}:\{-1,1\}^{n}\rightarrow\mathbb{R}^{p}\), where \(p\) is the number of classification labels of the network.
Since the binary values \(-1\) and \(+1\) can be represented as their Boolean counterparts 0 and 1 respectively, each internal block can be viewed as a Boolean function \(f:\mathbb{B}^{n}\rightarrow\mathbb{B}^{m}\)[32]. Therefore, ignoring the slight difference in the output block, an \(n\)-block BNN \(\mathcal{N}\) can be encoded via a series of Boolean functions \(f_{i}:\mathbb{B}^{\ell_{i}}\rightarrow\mathbb{B}^{\ell_{i+1}}\) (\(i=0,1,\ldots,n-1\)), and \(\mathcal{N}\) works as the combination of these Boolean functions, namely, it corresponds to the function,
\[f_{\mathcal{N}}=f_{n-1}\circ f_{n-2}\circ\cdots\circ f_{1}\circ f_{0}.\]
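For concreteness, these conventions can be rendered executably as follows (toy blocks, purely illustrative):

```python
from functools import reduce

def dec(bits):
    """MSB-first 0/1 tuple -> integer, e.g. dec((0, 1, 1)) == 3."""
    return int("".join(map(str, bits)), 2)

def bin_(d, width):
    """Integer -> MSB-first 0/1 tuple of the given width."""
    return tuple((d >> i) & 1 for i in reversed(range(width)))

def compose(*fs):
    """f_{n-1} o ... o f_0 as above: f_0 is applied first."""
    return lambda x: reduce(lambda v, f: f(v), fs, x)

f0 = lambda b: tuple(1 - x for x in b)       # toy block: bitwise negation
f1 = lambda b: bin_((dec(b) + 1) % 8, 3)     # toy block: +1 modulo 8
print(compose(f0, f1)((0, 1, 1)))            # f1(f0((0,1,1))) = (1, 0, 1)
```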
_Integer difference logic_ (IDL) is a fragment of linear integer arithmetic, in which atomic formulas must be of the form \(x-y\sim c\) where \(x\) and \(y\) are integer
variables, and \(c\) is an integer constant, \(\sim\in\{\leq,\geq,<,>,=,\neq\}\). All these atomic formulas can be transformed into constraints of the form \(x-y\leq c\)[2]. For example, \(x-y=c\) can be transformed into \(x-y\leq c\wedge x-y\geq c\).
The task of an IDL-problem is to check the satisfiability of an IDL formula in conjunctive normal form (CNF)
\[(x_{1}-y_{1}\leq c_{1})\wedge\cdots\wedge(x_{n}-y_{n}\leq c_{n}),\]
which can in general be converted into the cycle detection problem in a weighted, directed graph with \(O(n)\) nodes and \(O(n)\) edges, and solved by, e.g., Bellman-Ford or Dijkstra's algorithm in \(O(n^{2})\) time [22].
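A compact sketch of this reduction is given below (illustrative only, not the solver used in any actual implementation): each constraint \(x-y\leq c\) becomes an edge \(y\to x\) of weight \(c\), and the system is satisfiable iff the constraint graph has no negative cycle.

```python
def idl_satisfiable(n_vars, constraints):
    """Difference constraints x - y <= c, given as (x, y, c) triples over
    variables 0..n_vars-1. Bellman-Ford with a virtual 0-weight source to
    every node; an extra relaxation pass detects negative cycles."""
    dist = [0.0] * n_vars                 # effect of the virtual source
    edges = [(y, x, c) for (x, y, c) in constraints]
    for _ in range(n_vars):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return True, dist             # dist is a satisfying assignment
    return False, None                    # negative cycle -> unsatisfiable

print(idl_satisfiable(2, [(0, 1, -1), (1, 0, 3)]))    # satisfiable
print(idl_satisfiable(2, [(0, 1, -1), (1, 0, -1)]))   # sum gives 0 <= -2: unsat
```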
## 3 The Temporal Logic BLTL
### Syntax and Semantics of BLTL
Let us fix a signature \(\vec{\Sigma}\), consisting of a set of desired Boolean functions and 0-1 vectors. Particularly, let \(\vec{\Sigma}_{\mathrm{V}}\) be the subset of \(\vec{\Sigma}\) containing only 0-1 vectors.
Terms of BLTL are described via BNF as follows:
\[\vec{t}::=\vec{b}\mid f\left(\vec{t}\right)\mid\rhd^{k}\vec{t}\]
where \(\vec{b}\in\vec{\Sigma}_{\mathrm{V}}\) is a 0-1 vector, called _vector constant_, \(f\in\vec{\Sigma}\setminus\vec{\Sigma}_{\mathrm{V}}\) is a Boolean function, and \(k\in\mathbb{N}\) is a constant, and \(\rhd^{k}\) in \(\rhd^{k}\vec{t}\) denotes \(k\) placeholders for \(k\) consecutive blocks of a BNN (i.e., \(k\) Boolean functions) to be applied onto the term \(\vec{t}\). We remark that \(\rhd^{0}\vec{t}=\vec{t}\).
BLTL formulas are given via the following grammar:
\[\psi::=\top\mid\vec{t}\sim\vec{t}\mid\neg\psi\mid\psi\vee\psi\mid\mathsf{X} \psi\mid\psi\mathsf{U}\psi\]
where \(\sim\in\{\leq,\geq,<,>,=\}\), \(\mathsf{X}\) is the _Next_ operator and \(\mathsf{U}\) is the _Until_ operator.
We define the following derived Boolean operators, quantifiers with finite domain, and temporal operators:
\[\psi_{1}\wedge\psi_{2}\stackrel{{\mathrm{def}}}{{=}}\neg(\neg\psi_{1}\vee\neg\psi_{2})\qquad\mathsf{F}\psi\stackrel{{\mathrm{def}}}{{=}}\top\mathsf{U}\psi\qquad\mathsf{G}\psi\stackrel{{\mathrm{def}}}{{=}}\neg\mathsf{F}\neg\psi\] \[\psi_{1}\rightarrow\psi_{2}\stackrel{{\mathrm{def}}}{{=}}(\neg\psi_{1})\vee\psi_{2}\qquad\psi_{1}\mathsf{R}\psi_{2}\stackrel{{\mathrm{def}}}{{=}}\neg(\neg\psi_{1}\mathsf{U}\neg\psi_{2})\qquad\overline{\mathsf{X}}\psi\stackrel{{\mathrm{def}}}{{=}}\neg\mathsf{X}\neg\psi\] \[\forall\vec{x}\in\mathbb{B}^{k}.\psi\stackrel{{\mathrm{def}}}{{=}}\bigwedge_{\vec{b}\in\mathbb{B}^{k}\cap\vec{\Sigma}_{\mathrm{V}}}\psi[\vec{x}/\vec{b}]\qquad\exists\vec{x}\in\mathbb{B}^{k}.\psi\stackrel{{\mathrm{def}}}{{=}}\neg\forall\vec{x}\in\mathbb{B}^{k}.\neg\psi\]
where \(\psi[\vec{x}/\vec{b}]\) denotes the BLTL formula obtained from \(\psi\) by replacing each occurrence of \(\vec{x}\) with \(\vec{b}\).
The semantics of BLTL formulas is defined w.r.t. a BNN \(\mathcal{N}\) given by the composition of Boolean functions \(f_{\mathcal{N}}=f_{n-1}\circ f_{n-2}\circ\cdots\circ f_{1}\circ f_{0}\), and a position \(i\in\mathbb{N}\). We first define the semantics of terms, which is given by the function \(\llbracket\bullet\rrbracket_{\mathcal{N},i}\), inductively:
* \(\llbracket\vec{b}\rrbracket_{\mathcal{N},i}=\vec{b}\) for each vector constant \(\vec{b}\);
* \(\llbracket f\left(\vec{t}\right)\rrbracket_{\mathcal{N},i}=f\left(\llbracket \vec{t}\rrbracket_{\mathcal{N},i}\right)\);
* \(\llbracket\rhd^{k}\boldsymbol{t}\rrbracket_{\mathcal{N},i}=\left\{\begin{array}{ ll}(f_{i+\mathsf{slen}(\boldsymbol{t})+k-1}\circ\cdots\circ f_{i+\mathsf{slen}( \boldsymbol{t})})(\llbracket\boldsymbol{t}\rrbracket_{\mathcal{N},i}),&\text{if $k\geq 1$};\\ \llbracket\boldsymbol{t}\rrbracket_{\mathcal{N},i},&\text{if $k=0$};\end{array}\right.\)
where \(f_{i}\) is the identity Boolean function \(\boldsymbol{I}\) if \(i\geq n\), \(\mathsf{slen}(\boldsymbol{b})=0\), \(\mathsf{slen}(f(\boldsymbol{t}))=\mathsf{slen}(\boldsymbol{t})+1\) and \(\mathsf{slen}(\rhd^{k}\boldsymbol{t})=\mathsf{slen}(\boldsymbol{t})+k\).
Note that we assume the widths of Boolean functions and their argument vectors are compatible.
Proposition 1: _We have: \(\llbracket\rhd^{k}\rhd^{k^{\prime}}\boldsymbol{t}\rrbracket_{\mathcal{N},i}= \llbracket\rhd^{k+k^{\prime}}\boldsymbol{t}\rrbracket_{\mathcal{N},i}\)._
Subsequently, the semantics of BLTL formulas is characterized via the _satisfaction_ relation \(\models\), inductively:
* \(\mathcal{N},i\models\top\) always holds;
* \(\mathcal{N},i\models\boldsymbol{t}_{1}\sim\boldsymbol{t}_{2}\) iff \(\llbracket\boldsymbol{t}_{1}\rrbracket_{\mathcal{N},i}\sim\llbracket \boldsymbol{t}_{2}\rrbracket_{\mathcal{N},i}\);
* \(\mathcal{N},i\models\neg\varphi\) iff \(\mathcal{N},i\not\models\varphi\);
* \(\mathcal{N},i\models\varphi_{1}\vee\varphi_{2}\) iff \(\mathcal{N},i\models\varphi_{1}\) or \(\mathcal{N},i\models\varphi_{2}\);
* \(\mathcal{N},i\models\mathsf{X}\psi\) iff \(i<n-1\) and \(\mathcal{N},i+1\models\psi\);
* \(\mathcal{N},i\models\psi_{1}\mathsf{U}\psi_{2}\) iff there is \(j\) such that \(i\leq j<n\), \(\mathcal{N},j\models\psi_{2}\) and \(\mathcal{N},k\models\psi_{1}\) for each \(i\leq k<j\);
We may write \(\mathcal{N}\models\psi\) in the case of \(i=0\). In the sequel, we denote by \(\mathscr{L}(\varphi)\) the set of BNNs \(\{\mathcal{N}\mid\mathcal{N}\models\varphi\}\) for each formula \(\varphi\), and write \(\psi_{1}\equiv\psi_{2}\) if \(\mathcal{N},i\models\psi_{1}\Leftrightarrow\mathcal{N},i\models\psi_{2}\) for every BNN \(\mathcal{N}\) and every \(i\).
Proposition 2: _The following statements hold:_
1. \(\mathsf{G}\psi\equiv\bot\mathsf{R}\psi\)_;_
2. \(\mathsf{F}\psi\equiv\psi\vee\mathsf{X}\mathsf{F}\psi\)_;_
3. \(\mathsf{G}\psi\equiv\psi\wedge\overline{\mathsf{X}}\mathsf{G}\psi\)_;_
4. \(\psi_{1}\mathsf{U}\psi_{2}\equiv\psi_{2}\vee(\psi_{1}\wedge\mathsf{X}(\psi_{1} \mathsf{U}\psi_{2}))\)_;_
5. \(\psi_{1}\mathsf{R}\psi_{2}\equiv\psi_{2}\wedge(\psi_{1}\vee\overline{\mathsf{ X}}(\psi_{1}\mathsf{R}\psi_{2}))\)_._
For a BLTL formula \(\varphi\) and a BNN \(\mathcal{N}\), the _model checking_ problem w.r.t. \(\varphi\) and \(\mathcal{N}\) is to decide whether \(\mathcal{N}\models\varphi\) holds.
With the above derived operators, together with the patterns \(\neg\neg\psi\equiv\psi\) and \(\neg(\boldsymbol{t}_{1}\sim\boldsymbol{t}_{2})\equiv\boldsymbol{t}_{1}\not\sim\boldsymbol{t}_{2}\), BLTL formulas can be transformed into _negation normal form_ (NNF) by pushing the negations (\(\neg\)) inward until no negations are involved.
Given two sets of formulas \(\Gamma\) and \(\Gamma^{\prime}\) in NNF, we say that \(\Gamma^{\prime}\) is a _proper closure_ of \(\Gamma\), if the following conditions hold:
* \(\Gamma\subseteq\Gamma^{\prime}\).
* \(\psi_{1}\wedge\psi_{2}\in\Gamma^{\prime}\) implies that both \(\psi_{1}\in\Gamma^{\prime}\) and \(\psi_{2}\in\Gamma^{\prime}\).
* \(\psi_{1}\vee\psi_{2}\in\Gamma^{\prime}\) implies that either \(\psi_{1}\in\Gamma^{\prime}\) or \(\psi_{2}\in\Gamma^{\prime}\).
* \(\psi_{1}\mathsf{U}\psi_{2}\in\Gamma^{\prime}\) implies \(\psi_{2}\vee(\psi_{1}\wedge\mathsf{X}(\psi_{1}\mathsf{U}\psi_{2}))\in\Gamma^{\prime}\).
* \(\psi_{1}\mathsf{R}\psi_{2}\in\Gamma^{\prime}\) implies \(\psi_{2}\wedge(\psi_{1}\vee\overline{\mathsf{X}}(\psi_{1}\mathsf{R}\psi_{2}))\in \Gamma^{\prime}\).
We denote by \(\mathsf{Cl}(\Gamma)\) the set consisting of all proper closures of \(\Gamma\) (note that \(\mathsf{Cl}(\Gamma)\) is a family of formula sets). We also denote by \(\mathsf{Sub}(\psi)\) the set of the subformulas of \(\psi\), except that
* if \(\psi_{1}\mathsf{U}\psi_{2}\in\mathsf{Sub}(\psi)\), then \(\psi_{2}\vee(\psi_{1}\wedge\mathsf{X}(\psi_{1}\mathsf{U}\psi_{2}))\in\mathsf{ Sub}(\psi)\);
* if \(\psi_{1}\mathsf{R}\psi_{2}\in\mathsf{Sub}(\psi)\), then \(\psi_{2}\wedge(\psi_{1}\vee\overline{\mathsf{X}}(\psi_{1}\mathsf{R}\psi_{2})) \in\mathsf{Sub}(\psi)\).
### Illustrating Properties Expressed by BLTL
In this section, we demonstrate the expressiveness of BLTL. Since BLTL can express Boolean logic and arithmetic operations, many properties of concern can be specified in it.
We can partition a vector into segments of varying widths and then define a Boolean function, denoted by \(e_{i}\), that extracts the \(i\)-th segment of width \(n\), namely \(e_{i}:\mathbb{B}^{m}\rightarrow\mathbb{B}^{n}\), where \(m\) is the width of the vector \(\vec{b}\). We write \(\vec{b}[i]\) for \(e_{i}\left(\vec{b}\right)\) in the case that \(e_{i}\left(\vec{b}\right)\in\mathbb{B}\).
Local Robustness. Given a BNN \(\mathcal{N}\) and an \(n\)-width input \(\vec{u}\), \(\mathcal{N}\) is robust w.r.t. \(\vec{u}\) if all inputs in the region \(B\left(\vec{u},\epsilon\right)\) are classified into the same class as \(\vec{u}\) [1]. Here, we consider \(B\left(\vec{u},\epsilon\right)\) as the set of vectors that differ from \(\vec{u}\) in at most \(\epsilon\) positions, i.e., \(\epsilon\) is the maximum number of positions at which the values may differ from those of \(\vec{u}\). Local robustness can be described as follows:
\[\forall\vec{x}\in\mathbb{B}^{n}.\sum_{i=1}^{|\vec{u}|}\left(\vec{x}[i]\oplus \vec{u}[i]\right)\leq\epsilon\rightarrow\mathcal{N}\left(\vec{x}\right)= \mathcal{N}\left(\vec{u}\right)\]
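For small input widths, this property can be checked by brute force, e.g. as sketched below (`net` stands for any classifier on 0-1 tuples; exponential in \(\epsilon\), so illustrative only):

```python
from itertools import combinations

def is_locally_robust(net, u, eps):
    """Flip up to eps bits of u and compare the predicted labels."""
    base = net(u)
    for k in range(1, eps + 1):
        for idx in combinations(range(len(u)), k):
            v = list(u)
            for i in idx:
                v[i] ^= 1
            if net(tuple(v)) != base:
                return False
    return True

parity = lambda b: sum(b) % 2          # toy "network": maximally fragile
print(is_locally_robust(parity, (0, 1, 0, 1), eps=1))   # -> False
```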
Individual Fairness. In the context of a BNN \(\mathcal{N}\) with an input of \(t\) attributes and width \(n\), where the \(s\)-th attribute is considered sensitive, \(\mathcal{N}\) is fair w.r.t. the \(s\)-th attribute when no two input vectors in its domain differ only in the value of the \(s\)-th attribute and yield different outputs [30, 37]. Individual fairness can be formulated as:
\[\forall\vec{a},\vec{b}\in\mathbb{B}^{n}.\left(\neg\left(e_{s}(\vec{a})=e_{s}( \vec{b})\right)\wedge\forall i\in[t]-\{s\}.e_{i}(\vec{a})=e_{i}(\vec{b}) \right)\rightarrow\mathcal{N}\left(\vec{a}\right)=\mathcal{N}\left(\vec{b}\right)\]
where \(e_{i}\) denotes the extraction of the \(i\)-th attribute, \(\mathbb{B}^{n}\) is the domain of \(\mathcal{N}\), and \(\vec{a}\), \(\vec{b}\) are input vectors.
In practice, it is possible to select inputs in \(\mathbb{B}^{n}\) and modify the sensitive attribute to obtain proper pairs, which differ only in the sensitive attribute. For any such pair \((\vec{b},\vec{b}^{\prime})\), we formulate the specification as \(\mathcal{N}(\vec{b})=\mathcal{N}(\vec{b}^{\prime})\).
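Such pairs could be generated and checked as sketched below (the segment indices and the toy classifier are illustrative, not from the paper's benchmarks):

```python
from itertools import product

def unfair_pairs(net, inputs, s_lo, s_hi):
    """Enumerate pairs (b, b') differing only in the sensitive segment
    [s_lo, s_hi) on which net disagrees; net maps 0/1 tuples to labels."""
    witnesses = []
    for b in inputs:
        for bits in product((0, 1), repeat=s_hi - s_lo):
            b2 = b[:s_lo] + bits + b[s_hi:]
            if b2 != b and net(b2) != net(b):
                witnesses.append((b, b2))
    return witnesses

biased = lambda b: b[1]                         # toy net reading the sensitive bit
print(unfair_pairs(biased, [(0, 0, 1)], 1, 2))  # -> [((0, 0, 1), (0, 1, 1))]
```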
Specification for Internal Blocks. BLTL can specify block-level properties. For instance, the formula
\[\forall\vec{x}\in\mathbb{B}^{4}.\mathsf{F}\left(\vec{x}\geq\vec{a} \rightarrow\rhd\vec{x}=\vec{a}\right)\]
states that there exists a block in the network that behaves as follows: for any 4-bit input whose value is greater than or equal to \(\vec{a}\), the corresponding output is equal to \(\vec{a}\).
## 4 From BLTL to Automata
In this section, we present both an explicit and an implicit construction that translate a BLTL formula into an equivalent finite-state automaton. We first show how to eliminate the placeholders \(\rhd^{k}\) in terms \(\rhd^{k}\vec{t}\) and atomic formulas \(\vec{t}_{1}\sim\vec{t}_{2}\).
### Eliminating Placeholders
To eliminate the placeholders \(\rhd^{k}\) in terms \(\rhd^{k}\mathbf{t}\), we define the _apply operator_\([\;]:\mathbf{T}\times\mathbf{\Sigma}\setminus\mathbf{\Sigma}_{\mathbf{V}}\to\mathbf{T}\), where \(\mathbf{T}\) denotes the set of terms. \([\mathbf{t},f]\), written as \(\mathbf{t}[f]\), is called the _application_ of the term \(\mathbf{t}\) w.r.t. the Boolean function \(f\in\mathbf{\Sigma}\), which instantiates the innermost placeholder of the term \(\mathbf{t}\) by the Boolean function \(f\). Below, we give a formal description of the application.
Let us fix a term \(\mathbf{t}\). According to Proposition 1, \(\mathbf{t}\) can be equivalently transformed into the following canonical form
\[\rhd^{\ell_{k}}g_{k-1}\left(\rhd^{\ell_{k-1}}g_{k-2}\left(\cdots g_{0}\left( \rhd^{\ell_{0}}\mathbf{b}\right)\cdots\right)\right)\]
where \(\mathbf{b}\) is a vector constant, \(\ell_{0}\geq 0\) and \(\ell_{i}>0\) for each \(i>0\). Hereafter, we assume that \(\mathbf{t}\) is in the canonical form, and let \(\mathsf{len}(\mathbf{t})=\sum_{i=0}^{k}\ell_{i}\).
When \(\mathbf{t}\) is \(\rhd\)-free, i.e., \(\mathsf{len}(\mathbf{t})=0\), we let \(\mathbf{t}[f]=\mathbf{t}\). When \(\mathsf{len}(\mathbf{t})>0\), we say that the Boolean function \(f\in\mathbf{\Sigma}\) is _applicable_ w.r.t. the term \(\mathbf{t}\), if:
1. \(\mathbf{b}\in\mathsf{dom}\,f\);
2. if \(\ell_{0}=1\), then \(\mathsf{ran}\,f=\mathsf{dom}\,g_{0}\).
Intuitively, the above two conditions ensure that \(f\left(\mathbf{b}\right)\) and \(g_{0}\circ f\) are well-defined.
If \(f\in\mathbf{\Sigma}\) is applicable w.r.t. the term \(\mathbf{t}\), we let \(\mathbf{t}[f]\) be the term:
\[\mathbf{t}[f]=\begin{cases}\rhd^{\ell_{k}}g_{k-1}\left(\rhd^{\ell_{k-1}}g_{k-2} \left(\cdots g_{0}\left(\rhd^{\ell_{0}-1}\mathbf{b}^{\prime}\right)\cdots\right) \right),&\text{if }\ell_{0}>1\\ \rhd^{\ell_{k}}g_{k-1}\left(\rhd^{\ell_{k-1}}g_{k-2}\left(\cdots g_{1}\left( \rhd^{\ell_{1}}\mathbf{b}^{\prime\prime}\right)\cdots\right)\right),&\text{if }\ell_{0}=1 \end{cases}\]
where \(\mathbf{b}^{\prime}=f\left(\mathbf{b}\right)\) and \(\mathbf{b}^{\prime\prime}=\left(g_{0}\circ f\right)\left(\mathbf{b}\right)\).
It can be seen that \(\mathsf{len}(\mathbf{t}[f])=\mathsf{len}(\mathbf{t})-1\). By iteratively applying this operator, the placeholders \(\rhd^{k}\) in the term \(\mathbf{t}\) can be eliminated. For convenience, we write \(\mathbf{t}[f_{0},f_{1},\ldots,f_{i}]\) for the shorthand of
\[\mathbf{t}[f_{0}][f_{1}]\cdots[f_{i}],\]
provided that each Boolean function \(f_{i}\) is applicable w.r.t. \(\mathbf{t}[f_{0}][f_{1}]\cdots[f_{i}]\). Likewise, we call \(\mathbf{t}[f_{0},f_{1},\ldots,f_{i}]\) the _application_ of \(\mathbf{t}\) w.r.t. the Boolean functions \(f_{0},f_{1},\cdots,f_{i}\).
In particular, the _collapsion_ of term \(\mathbf{t}\), denoted by \(\mathbf{t}\downarrow\), is the term \(\mathbf{t}[\underbrace{\mathbf{I},\ldots,\mathbf{I}}_{\mathsf{len}(\mathbf{t})}]\), namely, \(\mathbf{t}\downarrow\) is obtained from \(\mathbf{t}\) w.r.t. \(\mathsf{len}(\mathbf{t})\) identity functions.
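Under the canonical-form assumption above, the apply operator and collapsion admit a small executable rendering (a sketch; vectors are 0/1 tuples, Boolean functions are plain callables, and we assume \(\ell_{0}\geq 1\) whenever the function list is non-empty):

```python
def apply_once(term, f):
    """One application t[f] of a canonical term, represented as
    (b, [(l0, g0), ..., (l_{k-1}, g_{k-1})], lk)."""
    b, pairs, lk = term
    if not pairs:                     # term is  |>^lk b
        return term if lk == 0 else (f(b), [], lk - 1)
    (l0, g0), rest = pairs[0], pairs[1:]
    if l0 > 1:
        return (f(b), [(l0 - 1, g0)] + rest, lk)
    return (g0(f(b)), rest, lk)       # l0 == 1: fold g0 o f into the constant

def collapse(term):
    """t-down: apply the identity len(t) times, yielding a vector constant."""
    while term[1] or term[2] > 0:
        term = apply_once(term, lambda b: b)
    return term[0]

neg = lambda b: tuple(1 - x for x in b)
t = ((0, 1), [(2, neg)], 1)           # the term  |> neg( |>^2 (0,1) )
print(collapse(t))                    # identities only: neg((0,1)) = (1, 0)
```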
We hereafter denote by \(\mathsf{Cons}(\mathbf{\Sigma})\) the set of constraints \(\mathbf{t}_{1}\sim\mathbf{t}_{2}\) over the signature \(\mathbf{\Sigma}\) and lift the apply operator \([\;]\) from terms to atomic formulas \(\mathbf{t}_{1}\sim\mathbf{t}_{2}\). For a constraint \(\gamma=\mathbf{t}_{1}\sim\mathbf{t}_{2}\in\mathsf{Cons}(\mathbf{\Sigma})\), we denote by \(\gamma[f]\) the constraint \(\mathbf{t}_{1}[f]\sim\mathbf{t}_{2}[f]\); and by \(\gamma\downarrow\) the constraint \(\mathbf{t}_{1}\downarrow\sim\mathbf{t}_{2}\downarrow\). Note that the former implicitly assumes that the Boolean function \(f\) is applicable w.r.t. both terms \(\mathbf{t}_{1}\) and \(\mathbf{t}_{2}\) (in this case, we call that \(f\) is applicable w.r.t. \(\gamma\)), whereas the latter requires that the terms \(\mathbf{t}_{1}\downarrow\) and \(\mathbf{t}_{2}\downarrow\) have the same width (we call that \(\mathbf{t}_{1}\) and \(\mathbf{t}_{2}\) are _compatible_ w.r.t. collapsion). In addition, we let \(\mathsf{len}(\gamma)=\max(\mathsf{len}(\mathbf{t}_{1}),\mathsf{len}(\mathbf{t}_{2}))\)
and in the case that \(\mathsf{len}(\gamma)=0\), we let \(\gamma[f]=\top\) (resp. \(\gamma[f]=\bot\)) for any Boolean function \(f\) if \(\gamma\) is evaluated to true (resp. false).
We subsequently extend the above notations to constraint sets. Suppose that \(\Gamma\subseteq\mathsf{Cons}(\boldsymbol{\Sigma})\); we let \(\Gamma[f]\stackrel{{\mathrm{def}}}{{=}}\{\gamma[f]\mid\gamma\in\Gamma\}\), and let \(\Gamma\downarrow\stackrel{{\mathrm{def}}}{{=}}\{\gamma\downarrow\mid\gamma\in\Gamma\}\). Recall that the notation \(\Gamma[f]\) makes sense only if the Boolean function \(f\) is _applicable_ w.r.t. \(\Gamma\), namely \(f\) is applicable w.r.t. each constraint \(\gamma\in\Gamma\). Likewise, the notation \(\Gamma\downarrow\) requires that \(\boldsymbol{t}_{1}\) and \(\boldsymbol{t}_{2}\) are compatible w.r.t. collapsion for each constraint \(\boldsymbol{t}_{1}\sim\boldsymbol{t}_{2}\in\Gamma\).
Theorem 4.1: _For a BNN \(\mathcal{N}\) given by \(f_{\mathcal{N}}=f_{n-1}\circ f_{n-2}\circ\cdots\circ f_{1}\circ f_{0}\), and a constraint \(\gamma\in\mathsf{Cons}(\boldsymbol{\Sigma})\), we have:_
1. \(\mathcal{N},i\models\gamma\) _iff_ \(\mathcal{N},i+1\models\gamma[f_{i}]\) _for each_ \(i<n\)_._
2. \(\mathcal{N},i\models\gamma\) _iff_ \(\mathcal{N},i\models\gamma\downarrow\) _for each_ \(i\geq n\)_._
Indeed, since \(\gamma\downarrow\) must have the form \(\boldsymbol{b}_{1}\sim\boldsymbol{b}_{2}\), where both \(\boldsymbol{b}_{1}\) and \(\boldsymbol{b}_{2}\) are Boolean constants, the truth value of \(\gamma\downarrow\) can always be directly evaluated.
### Automata Construction
Given a BLTL formula \(\varphi\) in NNF, we can construct a finite-state automaton \(\mathcal{A}_{\varphi}=(Q_{\varphi},\boldsymbol{\Sigma},\delta_{\varphi},I_{ \varphi},F_{\varphi})\), where:
* \(Q_{\varphi}=\bigcup_{\Gamma\subseteq\mathsf{Sub}(\varphi)}\mathsf{Cl}(\Gamma)\). Recall that \(\mathsf{Cl}(\Gamma)\subseteq 2^{\mathsf{Sub}(\varphi)}\) if \(\Gamma\subseteq\mathsf{Sub}(\varphi)\), thus each state must be a subset of \(\mathsf{Sub}(\varphi)\).
* For each \(q\in Q_{\varphi}\), let \(\mathsf{Cons}(q)\stackrel{{\mathrm{def}}}{{=}}q\cap\mathsf{Cons}(\boldsymbol{\Sigma})\), let \(q^{\prime}=\{\psi\mid\mathsf{X}\psi\in q\}\) and let \(q^{\prime\prime}=\{\psi\mid\overline{\mathsf{X}}\psi\in q\}\). Then, for each Boolean function \(f\in\boldsymbol{\Sigma}\), we have \[\delta_{\varphi}(q,f)=\begin{cases}\emptyset,&\bot\in q\\ \mathsf{Cl}(q^{\prime}\cup q^{\prime\prime}\cup\mathsf{Cons}(q)[f]),&\bot\not\in q\end{cases}.\]
* \(I_{\varphi}=\{q\in Q_{\varphi}\mid\varphi\in q\}\) is the set of initial states.
* \(F_{\varphi}\) is the set of accepting states such that for every state \(q\in Q_{\varphi}\), \(q\in F_{\varphi}\) only if \(\{\psi\mid\mathsf{X}\psi\in q\}=\emptyset\), \(\bot\not\in q\) and \(\mathsf{Cons}(q)\downarrow\) is evaluated true.
For a BNN \(\mathcal{N}\) given by \(f_{\mathcal{N}}=f_{n-1}\circ f_{n-2}\circ\cdots\circ f_{1}\circ f_{0}\), we denote by \(\mathcal{N}\in\mathscr{L}(\mathcal{A}_{\varphi})\) if the sequence of the Boolean functions \(f_{0},f_{1},\cdots,f_{n-1}\), regarded as a finite word, is accepted by the automaton \(\mathcal{A}_{\varphi}\).
Intuitively, \(\mathcal{A}_{\varphi}\) accepts an input word iff it has an accepting run \(q_{0},q_{1},\cdots,q_{n}\), where each \(q_{i}\) consists of a set of formulas that make the specification \(\varphi\) valid at position \(i\). In this situation, \(I_{\varphi}\) refers to the states involving \(\varphi\), and \(q_{0}\in I_{\varphi}\). For the transition \(q_{i}\stackrel{{ f_{i}}}{{\longrightarrow}}q_{i+1}\), \(q^{\prime}_{i}\) and \(q^{\prime\prime}_{i}\) indicate the sets of formulas which should be satisfied at the next position \(i+1\), according to the semantics of _next_ (\(\mathsf{X}\)) and _weak next_ (\(\overline{\mathsf{X}}\)). Additionally, \(\mathsf{Cons}(q_{i+1})\) is obtained by applying the Boolean function \(f_{i}\) to the constraints in \(q_{i}\).
The following theorem reveals the relationship between \(\varphi\) and \(\mathcal{A}_{\varphi}\).
Theorem 4.2: _Let \(\mathcal{N}\) be a BNN given by a sequence of Boolean functions and let \(\varphi\) be a BLTL formula. We have:_
\[\mathcal{N}\models\varphi\text{ if and only if }\mathcal{N}\in\mathscr{L}( \mathcal{A}_{\varphi})\text{.}\]
The proof is given in Appendix 0.A.1 and an example of the construction is given in Appendix 0.A.2.
### Tableau-Based Construction
We have provided a process for converting a BLTL formula into an automaton on finite words. At first glance, it seems that the model checking problem w.r.t. BNNs can be immediately boiled down to a word problem of finite automata. Nevertheless, a careful analysis shows that this would result in a prohibitively high cost. Actually, for a BLTL formula \(\varphi\), the state set of \(\mathcal{A}_{\varphi}\) is \(\bigcup_{\Gamma\subseteq\mathsf{Sub}(\varphi)}\mathsf{Cl}(\Gamma)\subseteq 2^{\mathsf{Sub}(\varphi)}\); thus, the number of states can be exponential in the number of subformulas of \(\varphi\). To avoid explicit construction, we provide an "on-the-fly" approach when performing synthesis.
Suppose the BLTL formula \(\varphi\) is given in NNF and the BNN \(\mathcal{N}\) is given as a sequence of Boolean functions \(f_{0},f_{1},\ldots,f_{n-1}\). Using the following approach, we may construct a tree \(\mathcal{T}_{\varphi,\mathcal{N}}\) which fulfills the following:
* \(\mathcal{T}_{\varphi,\mathcal{N}}\) is rooted at \(\langle 0,\{\varphi\}\rangle\);
* For an internal node \(\langle i,\Gamma\rangle\) with \(i\leq n\), it has a child \(\langle j,\Gamma^{\prime}\rangle\) only if there is a tableau rule \[\begin{array}{c|c}i&\Gamma\\ \hline j&\Gamma^{\prime}\end{array}\] where \(j\) is either \(i\) or \(i+1\).
* A leaf \(\langle i,\Gamma\rangle\) of \(\mathcal{T}_{\varphi,\mathcal{N}}\) is a (Modal)-node with \(i=n\), where nodes to which only the rule (Modal) can be applied are called (Modal)-nodes.
Tableau rules are listed in Figure 1. For the rule (Modal), we require that \(\Gamma\) consists of atomic formulas being of the form \(\mathbf{t}_{1}\sim\mathbf{t}_{2}\). In the rules (True) and (False), we require that \(\mathsf{len}(\mathbf{t}_{1}\sim\mathbf{t}_{2})=0\) and it is evaluated to true and false, respectively.
Suppose \(\langle n,\Gamma\cup\{\mathsf{X}\psi_{1},\ldots,\mathsf{X}\psi_{m}\}\cup\{ \overline{\mathsf{X}}\varphi_{1},\ldots,\overline{\mathsf{X}}\varphi_{k}\}\rangle\) is a leaf of \(\mathcal{T}_{\varphi,\mathcal{N}}\). We say it is _successful_ if \(m=0\) and \(\Gamma\downarrow\) is evaluated to true. In addition, we say a path of \(\mathcal{T}_{\varphi,\mathcal{N}}\) is _successful_ if it ends with a successful leaf, and no node along this path contains \(\perp\).
In the process of the on-the-fly construction, we start by creating the root node, then apply the tableau rules to rewrite the formulas in the subsequent nodes. In addition, before the rule (Modal) or (Or-\(j\)) is applied, we preserve the set of formulas, which allows us to trace back and construct other parts of the automaton afterward. We exemplify how to achieve the synthesis task via the construction in Section 5.
Theorem 4.3: \(\mathcal{N}\models\varphi\) _if and only if \(\mathcal{T}_{\varphi,\mathcal{N}}\) has a successful path._
Proof: Let \(\mathcal{A}_{\varphi}\) be the automaton corresponding to \(\varphi\). According to Theorem 4.2, it suffices to show that \(\mathcal{N}\in\mathscr{L}(\mathcal{A}_{\varphi})\) iff \(\mathcal{T}_{\varphi,\mathcal{N}}\) has a successful path.
Suppose \(\mathcal{N}\) is accepted by \(\mathcal{A}_{\varphi}\) with the run \(q_{0},q_{1},\ldots,q_{n}\); we also create the root node \(\langle 0,\Gamma_{0}=\{\varphi\}\rangle\). Inductively, we have the following statements for each node \(\langle i,\Gamma_{j}\rangle\) which has already been constructed:
1. \(\Gamma_{j}\subseteq q_{i}\);
2. \(\mathcal{N},i\models\psi\) for each \(\psi\in q_{i}\) (see the proof of Theorem 4.2)
Then, if \(\langle i,\Gamma_{j}\rangle\) is not a leaf, we create a new node \(\langle i^{\prime},\Gamma_{j}^{\prime}\rangle\) in the following way:
* \(i^{\prime}=i\) if \(\langle i,\Gamma_{j}\rangle\) is not a (Modal)-node, otherwise \(i^{\prime}=i+1\);
* if rule (Or-\(k\)) (\(k=1,2\)) is applied to \(\langle i,\Gamma_{j}\rangle\) for some \(\varphi_{1}\vee\varphi_{2}\in\Gamma_{j}\), we require that \(\varphi_{k}\in q_{i}\); for the other cases, \(\Gamma_{j}^{\prime}\) is uniquely determined by \(\Gamma_{j}\) and the tableau rule which is applied.
It can be checked that both Items 1) and 2) still hold at \(\langle i^{\prime},\Gamma_{j}^{\prime}\rangle\). Then, we can see that the path we constructed is successful since \(q_{n}\) is an accepting state of \(\mathcal{A}_{\varphi}\).
For the other way round, suppose that \(\mathcal{T}_{\varphi,\mathcal{N}}\) involves a successful path
\[\begin{array}{l}\langle 0,\Gamma_{0,0}\rangle,\langle 0,\Gamma_{0,1} \rangle,\ldots,\langle 0,\Gamma_{0,\ell_{0}}\rangle,\langle 1,\Gamma_{1,0} \rangle,\langle 1,\Gamma_{1,1}\rangle,\ldots,\langle 1,\Gamma_{1,\ell_{1}} \rangle,\ldots,\\ \langle i,\Gamma_{i,0}\rangle,\langle i,\Gamma_{i,1}\rangle,\ldots,\langle i, \Gamma_{i,\ell_{i}}\rangle,\ldots,\langle n,\Gamma_{n,0}\rangle,\langle n, \Gamma_{n,1}\rangle,\ldots,\langle n,\Gamma_{n,\ell_{n}}\rangle\end{array}\]
then the state sequence \(q_{0},q_{1},\ldots,q_{n}\) yields an accepting run of \(\mathcal{A}_{\varphi}\) on \(\mathcal{N}\), where \(q_{i}=\bigcup_{j=0}^{\ell_{i}}\Gamma_{i,j}\).
## 5 BNN Synthesis
Let us now consider a more challenging task: given a BLTL specification \(\varphi\), find some BNN \(\mathcal{N}\) such that \(\mathcal{N}\models\varphi\). In the synthesis task, the parameters of the desired BNN are not given; we are not even aware of the length (i.e., the number of blocks) of the network. To address this challenge, we leverage the tableau-based method (cf. Section 4.3) to construct the automaton for the given specification \(\varphi\) and check the existence of the desired BNN at the same time. However, when performing the tableau-based rewriting, we need to view each block (i.e., each Boolean function) \(f_{i}\) as an unknown variable (called a _block variable_ in what follows).

Figure 1: Tableau rules for Automata Construction
The construction of the tableau tree starts from the root node \(\langle 0,\{\varphi\}\rangle\). During the construction, for each internal node \(\langle i,\Gamma\rangle\), the following steps are followed: initially, rules other than (Or-\(j\)) and (Modal) are applied to \(\Gamma\) until no further changes occur; then rule (Or-\(j\)) is applied to the disjunctions in the formula set, and we always try rule (Or-1) first; lastly, rule (Modal) is applied to generate the node \(\langle i+1,\Gamma^{\prime}\rangle\), which becomes the next node in the path, and the Boolean function \(f_{i}\) used in the rewriting is a block variable. In particular, we maintain a stack of the formula sets to which either (Or-\(j\)) or (Modal) is applied, for tracing back. Once an X-free (Modal)-node is reached, we verify whether the path is successful. However, since the blocks are no longer concrete in this setting, an atomic formula of the form \(\gamma[f_{i},\ldots,f_{i+k}]\) cannot be evaluated immediately even if it is \(\rhd\)-free. As a result, whether a path is _successful_ cannot be decided directly.
To settle this, we invoke an integer difference logic (IDL) solver to examine the satisfiability of the atomic formulas in the (Modal)-nodes along the path, and we declare success if all of them are satisfiable and the path ends with an X-free (Modal)-node. Meanwhile, the model given by the solver reveals the hyper-parameters of the BNN, which we then adopt to obtain the expected BNN. For a node \(\langle i,\Gamma\rangle\), we call \(i\) the _depth counter_. Once infeasibility is reported by the IDL solver, or some specific depth counter (call it the _threshold_) is reached, a trace-back to the nearest (Or-1) node is required: the nodes under that node are removed, and rule (Or-2) is applied instead; this time we do not push anything onto the stack, because both choices for the disjunctive formula have now been tried. If no (Or-1) node remains in the stack when tracing back, we declare the failure of the synthesis.
There are two issues to deal with during this process: first, how to determine the aforementioned _threshold_; second, how to convert the satisfiability testing into IDL solving.
### The Threshold
A naive bound for the first problem is simply the number of states of \(\mathcal{A}_{\varphi}\). However, this bound is in general not compact (it is doubly exponential in the size of the formula \(\varphi\)), and thus we provide a tighter bound.
We first define the following notion: two (Modal)-nodes \(\langle i,\Gamma\rangle\) and \(\langle j,\Gamma^{\prime}\rangle\) are _isomorphic_, denoted \(\langle i,\Gamma\rangle\cong\langle j,\Gamma^{\prime}\rangle\), if \(\Gamma\) can be transformed into \(\Gamma^{\prime}\) under a (block) variable bijection. The following lemma about isomorphic (Modal)-nodes is straightforward.
Lemma 1: _If \(\langle i,\Gamma\rangle\cong\langle j,\Gamma^{\prime}\rangle\) and \(\langle i,\Gamma\rangle\) could lead to a successful leaf (i.e., satisfiable leaf), then so does \(\langle j,\Gamma^{\prime}\rangle\)._
Thus, given \(\varphi\), the threshold can be taken as the number of equivalence classes w.r.t. \(\cong\). To make the analysis clearer, we introduce some auxiliary notions.
* We call an atomic constraint \(\gamma\) occurring in \(\varphi\) to be an _original constraint_ (or, _non-padded constraint_); and call a formula being of the form \(\gamma[f_{i},\ldots,f_{j}]\)_padded constraint_, where \(f_{i},\ldots,f_{j}\) are block variables.
* A (padded or non-padded) constraint with length \(0\) (i.e., \(\rhd\)-free) is called _saturated_. In general, such a constraint is obtained from a non-padded constraint \(\gamma\) by applying \(k\) block variables, where \(k=\mathsf{len}(\gamma)\).
Theorem 5.1: _Let \(\varphi\) be a closed BLTL formula, and let_
* \(c=\#(\mathsf{Cons}(\boldsymbol{\Sigma})\cap\mathsf{Sub}(\varphi))\)_, i.e., the number of (non-padded) constraints occurring in_ \(\varphi\)_;_
* \(k=\max\{\mathsf{len}(\gamma)\mid\gamma\in\mathsf{Cons}(\boldsymbol{\Sigma}) \cap\mathsf{Sub}(\varphi)\}\)_, i.e., the maximum length of non-padded constraints occurring in_ \(\varphi\)_;_
* \(p\) _be the number of temporal operators in_ \(\varphi\)_;_
_then, \(2^{(k+1)c+p}+1\) is a threshold for synthesis._
The proof is shown in Appendix 0.A.3.
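For concreteness, the threshold is cheap to evaluate; the following small helper (included only for illustration) computes it from \(c\), \(k\), and \(p\):

```python
def synthesis_threshold(c: int, k: int, p: int) -> int:
    """Threshold from Theorem 5.1: 2^((k+1)c + p) + 1.

    c: number of non-padded constraints occurring in phi,
    k: maximum length of those constraints,
    p: number of temporal operators in phi.
    """
    return 2 ** ((k + 1) * c + p) + 1

# e.g. 2 constraints of maximum length 1 and 3 temporal operators:
print(synthesis_threshold(c=2, k=1, p=3))  # 2^(2*2 + 3) + 1 = 129
```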
### Encoding as an IDL Problem
The other problem is how to convert the satisfiability testing into SMT solving. To tackle this, we present a method that transforms BLTL atomic formulas into IDL constraints.
We may temporarily view a Boolean function \(g:\mathbb{B}^{m}\to\mathbb{B}^{n}\) as a (partial) integer function with domain \([2^{m}]\); namely, we equivalently view \(g\) as mapping \(\mathsf{dec}(\boldsymbol{b})\) to \(\mathsf{dec}(g(\boldsymbol{b}))\).
For a \(\rhd\)-free term \(\boldsymbol{t}=(f_{k}\circ f_{k-1}\circ\cdots\circ f_{0})(\boldsymbol{b})\), we say that \((f_{i}\circ f_{i-1}\circ\cdots\circ f_{0})(\boldsymbol{b})\) is an _intermediate term_ of \(\boldsymbol{t}\), where \(i\leq k\). In what follows, we denote by \(\boldsymbol{T}\) the set of all intermediate terms that may occur in the process of SMT solving, i.e., the part of synthesis that checks the satisfiability of atomic formulas in successful leaves.
Recall that in a term or an intermediate term, a function symbol may either be a fixed function or a variable that needs to be determined by the SMT solver (i.e., a _block variable_). To make this clearer, we generally use \(g_{0},g_{1},\ldots\) to designate the former, and \(f_{0},f_{1}\), etc., for the latter.
The theory of IDL is limited to handling difference constraints of the form \(x-y\sim c\), where \(x\), \(y\) are integer variables and \(c\) is an integer constant. However, since functions occur in the terms, the terms cannot be expressed directly in IDL. To this end, we note that we merely care about the partial input-output relations of the functions, which consist of mappings among \(\boldsymbol{T}\); such finite mappings can be expressed by integer constraints. Thus, for each intermediate term \(\boldsymbol{t}\in\boldsymbol{T}\), we introduce an integer variable \(v_{\boldsymbol{t}}\).
Then, all constraints describing the synthesis task are listed as follows.
1. For each BLTL constraint \(\mathbf{t}_{1}\sim\mathbf{t}_{2}\), we have a conjunct \(v_{\mathbf{t}_{1}}\sim v_{\mathbf{t}_{2}}\).
2. For each block variable \(f:\mathbb{B}^{n}\rightarrow\mathbb{B}^{m}\) and each \(f(\mathbf{t})\in\mathbf{T}\), we add the bound constraints \(0\leq v_{f(\mathbf{t})}\) and \(v_{f(\mathbf{t})}\leq 2^{m}\).
3. For each block variable \(f\) and every \(\mathbf{t}_{1},\mathbf{t}_{2}\in\mathbf{T}\), we have \(v_{\mathbf{t}_{1}}=v_{\mathbf{t}_{2}}\to v_{f(\mathbf{t}_{1})}=v_{f(\mathbf{t}_{2})}\), which guarantees \(f\) to be a mapping.
4. For every fixed function \(g\), we impose the constraint \(v_{g(\mathbf{t})}=\mathsf{dec}(g(\mathsf{bin}(v_{\mathbf{t}})))\) for every \(\mathbf{t}\in\mathbf{T}\).
Once satisfiability is reported by the SMT solver, we extract partial mapping information of the \(f_{i}\)'s from the solver's model by analyzing equations of the form \(v_{\mathbf{t}}=c\), where the integer \(c\) is called the value of \(\mathbf{t}\). We iterate over the model and record the values of terms; when we encounter an equation of the form \(v_{f_{i}(\mathbf{t})}=c\), we query the value of \(\mathbf{t}\) and obtain one input-output pair of \(f_{i}\). Eventually, we obtain the essential partial mapping information of such \(f_{i}\)'s.
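To illustrate the encoding, the following self-contained sketch uses Z3's Python API on a toy instance with a single block variable \(f\) and two intermediate terms; the variable names and the example constraint \(\mathbf{t}_{1}>\mathbf{t}_{2}\) are illustrative, not the tool's actual implementation. (Strictly speaking, a difference constraint such as \(v_{\mathbf{t}_{1}}>v_{\mathbf{t}_{2}}\) would be written \(v_{\mathbf{t}_{1}}-v_{\mathbf{t}_{2}}\geq 1\) in IDL; Z3 accepts both forms.)

```python
from z3 import Int, Solver, Implies, sat

# Integer variables for the decimal encodings of the terms.
v_b1, v_b2 = Int('v_b1'), Int('v_b2')   # inputs: dec(b1), dec(b2)
v_t1, v_t2 = Int('v_t1'), Int('v_t2')   # intermediate terms: f(b1), f(b2)

s = Solver()
s.add(v_b1 == 1, v_b2 == 2)             # the concrete inputs
s.add(v_t1 > v_t2)                      # (1) the BLTL constraint t1 > t2
for v in (v_t1, v_t2):                  # (2) bound constraints for f : B^n -> B^2
    s.add(0 <= v, v <= 2 ** 2)
s.add(Implies(v_b1 == v_b2, v_t1 == v_t2))  # (3) f must be a mapping

if s.check() == sat:
    m = s.model()
    # Extraction: each equation v_f(t) = c yields one input-output pair of f.
    partial_f = {m.eval(v_b1).as_long(): m.eval(v_t1).as_long(),
                 m.eval(v_b2).as_long(): m.eval(v_t2).as_long()}
    print(partial_f)   # e.g. {1: 1, 2: 0} -- a partial mapping for block f
```

Constraint (4) is omitted in this toy instance, which involves no fixed functions.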
### Utilizing the Synthesis
A BNN that satisfies the specification can be obtained via block-wise training, namely, training each block independently to fulfill the input-output mapping relations extracted by the SMT solver during the synthesis process. Such training is not only lightweight in general, but is also able to reuse pre-trained blocks.
Let us now consider a more general requirement: we have both a high-level temporal specification (such as fairness or robustness) and data constraints (i.e., labels on a dataset), and we are asked to obtain a BNN that meets all these obligations.
A straightforward idea is to express all data constraints in BLTL and then perform a monolithic synthesis. However, this solution is infeasible in practice: the large number of data constraints usually produces a rather complicated formula, which makes the synthesis extremely difficult.
An alternative approach is to first perform the synthesis w.r.t. the high-level specification and then retrain on the dataset. However, the second phase may distort the result of the first. In general, one needs to conduct an iterative cycle of synthesis-training-verification, yet the convergence of such a process cannot be guaranteed. Thus, we need to make a trade-off between these two types of specifications.
More practically, synthesis can be used as an "enhancement" procedure. Suppose we already have some BNN trained on the given dataset; then we are aware of its hyper-parameters. This time, we have more information when doing synthesis: e.g., the threshold is replaced by the length of the network, and the shape (i.e., the input and output widths) of each block is also given. With this, we can perform a more effective SMT-solving process and then retrain each block individually. Admittedly, this might affect the accuracy of the network, so some compromise must be made.
## 6 Experimental Evaluation
We implement a prototype tool in Python, which uses Z3 [9] as the off-the-shelf IDL solver and PyTorch to train blocks and BNNs. To the best of our knowledge, little existing work on synthesizing BNNs has been done so far. Hence, we mainly investigate the feasibility of our approach by exploring how much the trustworthiness of BNNs can be enhanced, and the corresponding trade-off in accuracy degradation. The first two experiments focus on evaluating the effectiveness of synthesis in enhancing the properties of BNNs. We set BNNs with diverse architectures as baselines and synthesize models via the "enhancement" procedure, wherein the threshold matches the length of the baselines and the shapes of the blocks are constrained to maintain the same architecture as the baselines. Eventually, the blocks are retrained to fulfill the partial mappings, and the synthesized model is obtained through retraining on the dataset. We compare the synthesized models and their baselines on two properties: _local robustness_ and _individual fairness_.
Moreover, we study the potential of our approach to assist in determining the network architecture.
Datasets. We train models and evaluate our approach on two classical datasets, MNIST [10] and UCI Adult [11].
MNIST is a dataset of handwritten digits containing 70,000 gray-scale images in 10 classes; each image has \(28\times 28\) pixels. In the experiments, we downscale the images to \(10\times 10\), binarize the normalized images, and then transform them into 100-width vectors.
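For illustration, a plausible preprocessing pipeline is sketched below; the binarization threshold (0.5) is an assumption, as the exact rule is not specified here:

```python
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((10, 10)),                      # 28x28 -> 10x10
    transforms.ToTensor(),                            # normalize to [0, 1]
    transforms.Lambda(lambda x: (x > 0.5).float()),   # binarize (threshold assumed)
    transforms.Lambda(lambda x: x.view(-1)),          # flatten to 100-width vector
])
mnist = datasets.MNIST(root='./data', train=True, download=True,
                       transform=transform)
x, y = mnist[0]
print(x.shape)  # torch.Size([100])
```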
UCI Adult contains 48,842 entries with 14 attributes, such as age, gender, workclass, and occupation. The classification task on this dataset is to predict whether an individual's annual salary is greater than 50K. We first remove unusable data, retaining 45,221 entries, and then transform the real-valued data into 66-dimensional binarized vectors as input.
Experimental Setup. In the block-wise training, different loss functions are employed for internal and output blocks: the MSE loss for internal blocks and the cross-entropy loss for output blocks. The training process uses a fixed number of epochs: 150 for internal blocks and 30 for output blocks. The experiments are conducted on a 3.6 GHz CPU with 12 cores and 32 GB RAM, and the blocks and BNNs are trained using a single GeForce RTX 3070 Ti GPU.
| **Name** | **Arch** | **Acc** | **Name** | **Arch** | **Acc** |
| --- | --- | --- | --- | --- | --- |
| **R1** | 100-32-10 | 82.62% | **F1** | 66-32-2 | 80.12% |
| **R2** | 100-50-10 | 84.28% | **F2** | 66-20-2 | 79.88% |
| **R3** | 100-50-32-10 | 83.50% | **F3** | 66-32-20-2 | 78.13% |

Table 1: BNN baselines.
Baseline. We use six neural networks with different architectures as baselines: three models (**R1**-**R3**) are trained on MNIST for 10 epochs with a learning rate of \(10^{-4}\) to study local robustness. For individual fairness, we train three models (**F1**-**F3**) on UCI Adult for 10 epochs with a learning rate of \(10^{-3}\), splitting the dataset into a training set and a test set in a 4:1 ratio.
The detailed information is listed in Table 1: Column (Name) indicates the names of the BNNs, and Column (Arch) presents their architectures. The architecture of each network is described by a sequence \(\{n_{i}\}_{i=0}^{s}\), where \(s\) is the number of blocks in the network, and \(n_{i}\) and \(n_{i+1}\) indicate the input and output dimensions of the \(i\)-th block. For instance, 100-32-10 indicates that the BNN has two blocks, the input dimensions of these blocks are 100 and 32 respectively, and the number of classification labels is 10. Column (Acc) shows the accuracy of the models on the test set.
### Local Robustness
In this section, we evaluate the effectiveness of our approach in enhancing the robustness of models in different cases. We use a metric called the Adversarial Attack Success Rate (ASR) to measure a model's resistance to adversarial attacks: ASR is the proportion of perturbed inputs that lead to a prediction different from that of the original input.
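For reference, a minimal sketch of the ASR computation for one case, where `model` is any classifier mapping a batch of binarized vectors to logits (an assumption for illustration):

```python
import torch

def attack_success_rate(model, x, perturbed):
    """x: (d,) original input; perturbed: (k, d) perturbed copies of x."""
    with torch.no_grad():
        y0 = model(x.unsqueeze(0)).argmax(dim=1)   # original prediction
        y = model(perturbed).argmax(dim=1)         # predictions on perturbed inputs
    return (y != y0).float().mean().item()         # fraction of changed predictions
```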
We choose 30 image vectors from the training set, and set the maximum perturbation to four levels, \(\epsilon\in\{1,2,3,4\}\). The value of \(\epsilon\) indicates the number of positions that can be modified in one image vector. One selected input vector, one maximum perturbation \(\epsilon\) and one baseline model constitute a case, resulting in a total of 360 cases.
For each of the 360 cases, we synthesize a model individually and compare its ASR with the corresponding baseline. For the local robustness property (cf. Section 3.2), since the input space is too large to enumerate, we sample inputs within \(B\left(\mathbf{u},\epsilon\right)\) when describing the specification, which is formulated as \(\bigwedge_{i=1}^{k}\left(\mathcal{N}(\mathbf{u})=\mathcal{N}(\mathbf{b}_{i})\right)\), where each \(\mathbf{b}_{i}\) is a sample and \(k\) is the number of samples. We sample 100 points within the maximum perturbation limit \(\epsilon\). The specification is written as \(\bigwedge_{i=1}^{k}\left(\rhd^{n}\mathbf{u}=\rhd^{n}\mathbf{b}_{i}\right)\), where \(n\) is the number of blocks of the baseline. Subsequently, we use the bound constraint (cf. Section 5.2), \(0\leq v_{f_{i}(\mathbf{t})}\leq 2^{m}\), to specify the output range of each block. To make the bound tighter, we record the maximal and minimal activations of each block using calibration data run on the baseline, and take the recorded values as bounds. Eventually, the generated mappings are used in the block-wise training, and the enhanced BNN is obtained through retraining on the MNIST dataset.
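For illustration, the sampling step can be sketched as follows, under the assumption that a perturbation flips exactly \(\epsilon\) randomly chosen bits of a 0/1 image vector:

```python
import torch

def sample_perturbations(u, eps, k=100):
    """Return k copies of u (a 0/1 vector), each with eps random bits flipped."""
    samples = u.repeat(k, 1)
    for i in range(k):
        idx = torch.randperm(u.numel())[:eps]   # eps distinct positions
        samples[i, idx] = 1.0 - samples[i, idx]  # flip 0 <-> 1
    return samples

# e.g. the b_i in the specification: b = sample_perturbations(u, eps=2)
```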
We also take 100 samples for each case and compare the ASR of the baselines and their synthesized counterparts. The results are shown in Figure 2: blue bars represent the baselines, while orange bars represent the synthesized models. We use the sign + to denote the synthesized models. Figure 2(a) (resp. Figure 2(b) and Figure 2(c)) depicts the average ASR of **R1** (resp. **R2** and **R3**) and the counterpart **R1+** (resp. **R2+** and **R3+**) (the vertical axis) under different \(\epsilon\) (1, 2, 3, 4) (the horizontal axis). The results demonstrate a decrease in ASR by an average of 43.45%, 22.12%, and 16.95% for **R1**, **R2**, and **R3**, respectively.
Whilst the models' robustness is enhanced, their accuracy slightly decreases. Table 2 shows the accuracy of the models, where Acc+ represents the average accuracy of the synthesized models with the same architectures.
### Individual Fairness
In this section, we investigate individual fairness w.r.t. two sensitive features, namely sex (Male and Female) and race (White and Black), on the UCI Adult dataset.
We consider **F1**-**F3** as baselines, randomly select 1000 entries for both **F1** and **F2** and 200 entries for **F3** from the training set, and then generate proper pairs by modifying the value of the sensitive attribute while keeping all other attributes the same (for example, changing Male to Female). After forming specifications from the pairs using the approach mentioned in Section 3.2, we proceed with the "enhancement" procedure and retraining to obtain the synthesized models. We then evaluate the models on the test set by measuring the fairness score: we count the number of fair pairs (pairs that differ only in the sensitive attribute yet receive the same prediction), _fair num_, and compute the fairness score \(\frac{\textit{fair num}}{\textit{test size}}\), where _test size_ is the size of the test set.
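For reference, a minimal sketch of this evaluation, where `flip_sensitive` is a hypothetical helper that flips the sensitive attribute of each test entry:

```python
import torch

def fairness_score(model, x_test, flip_sensitive):
    """flip_sensitive: callable returning a copy of a batch with the
    sensitive attribute flipped (e.g., Male <-> Female)."""
    with torch.no_grad():
        y = model(x_test).argmax(dim=1)
        y_flip = model(flip_sensitive(x_test)).argmax(dim=1)
    fair_num = (y == y_flip).sum().item()   # pairs with the same prediction
    return fair_num / x_test.shape[0]       # fair num / test size
```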
| | **R1** | **R2** | **R3** |
| --- | --- | --- | --- |
| Acc | 82.62% | 84.28% | 83.50% |
| Acc+ | 81.33% | 81.72% | 78.75% |

Table 2: The average accuracy of **R1**-**R3** and their synthesized models.
Figure 2: Results of local robustness.
The results are listed in Table 3, where the baselines and the sensitive attributes are shown in Columns 1 and 2. Columns 3 and 4 (Acc/Acc+) give the accuracy of the baselines and synthesized models, and Columns 5 and 6 (Fair/Fair+) show their fairness scores. The table shows that the individual fairness of all models is significantly improved, with some even reaching 100% (Row 2: the fairness score increases from 92.92% to 100%). However, the enhancement is accompanied by a loss of accuracy; Columns 3 and 4 show that all models suffer a certain degree of accuracy decrease. Our tool efficiently synthesizes the hyper-parameters within a few minutes, as shown in Column 7.
Furthermore, we examine the ability of our approach to help determine the architecture of BNNs. For both sex and race, we sample 200 entries from the training set to generate proper pairs, and formulate the specification without using the bound constraints or fixing the number of blocks, as follows:
\[\mathsf{F}\Big(\bigwedge_{i=1}^{k}(\mathbf{x}_{i}=\mathbf{y}_{i})\Big)\wedge\Big(\bigwedge_{i=1}^{k}(\mathbf{x}_{i}=\rhd^{2}\mathbf{a}_{i}\wedge\mathbf{y}_{i}=\rhd^{2}\mathbf{b}_{i})\vee\bigwedge_{i=1}^{k}(\mathbf{x}_{i}=\rhd^{3}\mathbf{a}_{i}\wedge\mathbf{y}_{i}=\rhd^{3}\mathbf{b}_{i})\Big)\]
where \((\mathbf{a}_{i},\mathbf{b}_{i})\) is the proper pair, and \(k\) is the number of samples. The formula indicates the presence of consecutive blocks in the model, with a length of either 2 or 3. For each proper pair \((\mathbf{a}_{i},\mathbf{b}_{i})\), their respective outputs \((\mathbf{x}_{i},\mathbf{y}_{i})\) must be equal.
After synthesizing the partial input-output relations of the \(f_{i}\)'s, we determine the length of the network by selecting the maximum \(i\) among the \(f_{i}\)'s. The dimensions of the blocks are set to the maximum input and output dimensions in the partial relation obtained for the corresponding \(f_{i}\).
We make a slight adjustment to the synthesis framework: when a group of hyper-parameters is found, we continue searching for one more feasible group, resulting
| **Attr** | **Arch** | **Len** | **#Mapping** | **Acc** | **Fair** |
| --- | --- | --- | --- | --- | --- |
| sex | 66-10-10-2 | 3 | 1117 | 74.38% | 99.51% |
| sex | 66-8-2 | 2 | 559 | 74.69% | 99.72% |
| race | 66-9-8-2 | 3 | 952 | 74.38% | 94.59% |
| race | 66-8-2 | 2 | 567 | 74.13% | 99.71% |

Table 4: The synthesized models whose architectures are given by our tool.
| **Model** | **Feature** | **Acc** | **Acc+** | **Fair** | **Fair+** | **Synthesis Time (s)** |
| --- | --- | --- | --- | --- | --- | --- |
| **F1** | sex | 80.12% | 74.53% | 92.91% | 99.94% | 241.67 |
| **F1** | race | 80.12% | 74.54% | 92.92% | 100% | 216.46 |
| **F2** | sex | 79.88% | 75.71% | 95.68% | 97.83% | 215.61 |
| **F2** | race | 79.88% | 75.18% | 94.64% | 98.47% | 212.46 |
| **F3** | sex | 78.13% | 74.48% | 89.67% | 99.83% | 90.39 |
| **F3** | race | 79.88% | 74.09% | 89.16% | 98.27% | 95.75 |

Table 3: Results of individual fairness.
in two groups of hyper-parameters for each of sex and race. We showcase the synthesized models in Table 4. Column 1 indicates the sensitive attribute of interest, and Columns 2 and 3 display the architecture and the length of the BNNs, respectively. Column 4 shows the number of partial mappings obtained in the synthesis task. Our tool successfully generates models with varying architectures and high individual fairness, presented in Columns 5 and 6, respectively.
## 7 Conclusion
In this paper, we have presented an automata-based approach to synthesizing binarized neural networks. By specifying BNNs' properties with the designed logic BLTL and using the tableau-based construction approach, the synthesis framework determines the hyper-parameters of BNNs and the relations among some parameters, after which we may perform block-wise training. We implemented a prototype tool, and the experiments demonstrate the effectiveness of our approach in enhancing the local robustness and individual fairness of BNNs. Although our approach has shown the feasibility of synthesizing trustworthy BNNs, there is still a need to further explore this line of work. In the future, beyond the input-output relations of BNNs, we plan to focus on specifying properties between the intermediate blocks. Additionally, we aim to extend the approach to handle the synthesis task of multi-bit QNNs.
#### Acknowledgements
This work is partially supported by the National Key R & D Program of China (2022YFA1005101), the National Natural Science Foundation of China (61872371, 62072309, 62032024), CAS Project for Young Scientists in Basic Research (YSBR-040), and ISCAS New Cultivation Project (ISCAS-PYFX-202201).
|
2310.01365 | Elephant Neural Networks: Born to Be a Continual Learner | Catastrophic forgetting remains a significant challenge to continual learning
for decades. While recent works have proposed effective methods to mitigate
this problem, they mainly focus on the algorithmic side. Meanwhile, we do not
fully understand what architectural properties of neural networks lead to
catastrophic forgetting. This study aims to fill this gap by studying the role
of activation functions in the training dynamics of neural networks and their
impact on catastrophic forgetting. Our study reveals that, besides sparse
representations, the gradient sparsity of activation functions also plays an
important role in reducing forgetting. Based on this insight, we propose a new
class of activation functions, elephant activation functions, that can generate
both sparse representations and sparse gradients. We show that by simply
replacing classical activation functions with elephant activation functions, we
can significantly improve the resilience of neural networks to catastrophic
forgetting. Our method has broad applicability and benefits for continual
learning in regression, class incremental learning, and reinforcement learning
tasks. Specifically, we achieve excellent performance on the Split MNIST dataset
in just one single pass, without using a replay buffer, task boundary
information, or pre-training. | Qingfeng Lan, A. Rupam Mahmood | 2023-10-02T17:27:39Z | http://arxiv.org/abs/2310.01365v1 | # Elephant Neural Networks: Born to Be a Continual Learner
###### Abstract
Catastrophic forgetting remains a significant challenge to continual learning for decades. While recent works have proposed effective methods to mitigate this problem, they mainly focus on the algorithmic side. Meanwhile, we do not fully understand what architectural properties of neural networks lead to catastrophic forgetting. This study aims to fill this gap by studying the role of activation functions in the training dynamics of neural networks and their impact on catastrophic forgetting. Our study reveals that, besides sparse representations, the gradient sparsity of activation functions also plays an important role in reducing forgetting. Based on this insight, we propose a new class of activation functions, elephant activation functions, that can generate both sparse representations and sparse gradients. We show that by simply replacing classical activation functions with elephant activation functions, we can significantly improve the resilience of neural networks to catastrophic forgetting. Our method has broad applicability and benefits for continual learning in regression, class incremental learning, and reinforcement learning tasks. Specifically, we achieve excellent performance on the Split MNIST dataset in just one single pass, without using a replay buffer, task boundary information, or pre-training.
## 1 Introduction
One of the biggest challenges to achieving continual learning is the decades-old issue of _catastrophic forgetting_(French, 1999). Catastrophic forgetting stands for the phenomenon that artificial neural networks tend to forget prior knowledge drastically when learned with stochastic gradient descent algorithms on non-independent and identically distributed (non-iid) data. In recent years, researchers have made significant progress in mitigating catastrophic forgetting and proposed many effective methods, such as replay methods (Mendez et al., 2022), regularization-based methods (Riemer et al., 2018), parameter-isolation methods (Mendez et al., 2020), and optimization-based methods (Farajtabar et al., 2020). However, instead of alleviating forgetting by designing neural networks with specific properties, most of these methods circumvent the problem by focusing on the algorithmic side. _There is still a lack of full understanding of what properties of neural networks lead to catastrophic forgetting._ Recently, Mirzadeh et al. (2022) found that the width of a neural network significantly affects forgetting and they provided explanations from the perspectives of gradient orthogonality, gradient sparsity, and lazy training regime. Furthermore, Mirzadeh et al. (2022) studied the forgetting problem on large-scale benchmarks with various neural network architectures. They demonstrated that architectures can play a role that is as important as algorithms in continual learning.
_The interactions between continual learning and neural network architectures remain under-explored._ In this work, we aim to better understand and reduce forgetting by studying the impact of various architectural choices of neural networks on continual learning. Specifically, we focus on activation functions, one of the most important elements in neural networks. Experimentally, we study the effect of various activation functions (e.g., Tanh, ReLU, and Sigmoid) on catastrophic forgetting under the setting of continual supervised learning. Theoretically, we investigate the role of activation functions in the training dynamics of neural networks. Our analysis suggests that not only
sparse representations but also sparse gradients are essential for continual learning. Based on this discovery, we design a new class of activation functions, the elephant activation functions. Unlike classical activation functions, elephant activation functions are able to generate both sparse function values and sparse gradients that make neural networks more resilient to catastrophic forgetting. Correspondingly, the neural networks incorporated with elephant activation functions are named _elephant neural networks (ENNs)_. In streaming learning for regression, while many classical activation functions fail to even approximate a simple sine function, ENNs succeed by applying elephant activation functions. In class incremental learning, we show that ENNs achieve excellent performance on Split MNIST dataset in one single pass, without using any replay buffer, task boundary information, or pre-training. In reinforcement learning, ENNs improve the learning performance of agents under extreme memory constraints by alleviating the forgetting issue.
## 2 Investigating catastrophic forgetting via training dynamics
Firstly, we look into the forgetting issue via the training dynamics of neural networks. Consider a simple regression task. Let a scalar function \(f_{\mathbf{w}}(\mathbf{x})\) be represented as a neural network, parameterized by \(\mathbf{w}\), with input \(\mathbf{x}\). \(F(\mathbf{x})\) is the true function and the loss function is \(L(f,F,\mathbf{x})\). For example, for squared error, we have \(L(f,F,\mathbf{x})=(f_{\mathbf{w}}(\mathbf{x})-F(\mathbf{x}))^{2}\). At each time step \(t\), a new sample \(\{\mathbf{x}_{t},F(\mathbf{x}_{t})\}\) arrives. Given this new sample, to minimize the loss function \(L(f,F,\mathbf{x}_{t})\), we update the weight vector by \(\mathbf{w}^{\prime}=\mathbf{w}+\Delta_{\mathbf{w}}\) where \(\Delta_{\mathbf{w}}\) is the weight difference. With the stochastic gradient descent (SGD) algorithm, we have \(\mathbf{w}^{\prime}=\mathbf{w}-\alpha\nabla_{\mathbf{w}}L(f,F,\mathbf{x}_{t})\), where \(\alpha\) is the learning rate. So \(\Delta_{\mathbf{w}}=\mathbf{w}^{\prime}-\mathbf{w}=-\alpha\nabla_{\mathbf{w}}L(f,F,\mathbf{x}_{t}) =-\alpha\nabla_{f}L(f,F,\mathbf{x}_{t})\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}_{t})\). By Taylor expansion,
\[f_{\mathbf{w}^{\prime}}(\mathbf{x})-f_{\mathbf{w}}(\mathbf{x})=-\alpha\nabla_{f}L(f,F,\mathbf{x}_{ t})\left\langle\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}),\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}_ {t})\right\rangle+O(\Delta_{\mathbf{w}}^{2}), \tag{1}\]
where \(\left\langle\cdot,\cdot\right\rangle\) denotes Frobenius inner product or dot product depending on the context. In this equation, since \(-\alpha\nabla_{f}L(f,F,\mathbf{x}_{t})\) is unrelated to \(\mathbf{x}\), we only consider the quantity \(\left\langle\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}),\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x }_{t})\right\rangle\), which is known as the neural tangent kernel (NTK) (Jacot et al., 2018). Without loss of generality, assume that the original prediction \(f_{\mathbf{w}}(\mathbf{x}_{t})\) is wrong, i.e. \(f_{\mathbf{w}}(\mathbf{x}_{t})\neq F(\mathbf{x}_{t})\) and \(\nabla_{f}L(f,F,\mathbf{x}_{t})\neq 0\). To correct the wrong prediction while avoiding forgetting, we expect this NTK to satisfy two properties that are essential for continual learning:
**Property 2.1** (error correction).: _For \(\mathbf{x}=\mathbf{x}_{t}\), \(\left\langle\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}),\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x }_{t})\right\rangle\neq 0\)._
**Property 2.2** (zero forgetting).: _For \(\mathbf{x}\neq\mathbf{x}_{t}\), \(\left\langle\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}),\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x }_{t})\right\rangle=0\)._
In particular, Property 2.1 allows for error correction by optimizing \(f_{\mathbf{w}^{\prime}}(\mathbf{x}_{t})\) towards the true value \(F(\mathbf{x}_{t})\), so that we can learn new knowledge (i.e., update the learned function). If \(\left\langle\nabla_{\mathbf{w}}f(\mathbf{x}),\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}_{t}) \right\rangle=0\), we then have \(f_{\mathbf{w}^{\prime}}(\mathbf{x})-f_{\mathbf{w}}(\mathbf{x})\approx 0\), failing to correct the wrong prediction at \(\mathbf{x}=\mathbf{x}_{t}\). Essentially, Property 2.1 requires the gradient norm to be non-zero. On the other hand, Property 2.2 is much harder to be satisfied, especially for non-linear approximations. Essentially, to make this property hold, except for \(\mathbf{x}=\mathbf{x}_{t}\), the neural network \(f\) is required to achieve zero knowledge forgetting after one step optimization, i.e. \(\forall\mathbf{x}\neq\mathbf{x}_{t},f_{\mathbf{w}^{\prime}}(\mathbf{x})=f_{\mathbf{w}}(\mathbf{x})\). _It is the violation of Property 2.2 that leads to the forgetting issue._ For tabular cases (e.g., \(\mathbf{x}\) is a one-hot vector and \(f_{\mathbf{w}}(\mathbf{x})\) is a linear function), this property may hold by sacrificing the generalization ability of deep neural networks. In order to benefit from generalization, we propose Property 2.3 by relaxing Property 2.2.
**Property 2.3** (local elasticity).: \(\left\langle\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}),\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x }_{t})\right\rangle\approx 0\) _for \(\mathbf{x}\) that is dissimilar to \(\mathbf{x}_{t}\) in a certain sense._
Property 2.3 is known as local elasticity (He and Su, 2020). A function \(f\) is locally elastic if \(f_{\mathbf{w}}(\mathbf{x})\) is not significantly changed, after the function is updated at \(\mathbf{x}_{t}\) that is dissimilar to \(\mathbf{x}\) in a certain sense. For example, we can characterize the dissimilarity with the \(2\)-norm distance. Although He and Su (2020) show that neural networks with nonlinear activation functions are locally elastic in general, there is a lack of theoretical understanding about the connection of neural network architectures and the degrees of local elasticity. In our experiments, we find that the degrees of local elasticity of classical neural networks are not enough to address the forgetting issue, as we will show next.
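As a concrete check, the NTK quantity above can be computed directly with automatic differentiation. Below is a minimal PyTorch sketch (the network width, input values, and activation are illustrative) that evaluates \(\left\langle\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}),\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}_{t})\right\rangle\) for a one-hidden-layer MLP:

```python
import torch
import torch.nn as nn

def ntk(model, x, xt):
    """Empirical NTK value <grad_w f(x), grad_w f(x_t)> via autograd."""
    def grads(inp):
        model.zero_grad()
        model(inp).sum().backward()
        return torch.cat([p.grad.flatten().clone() for p in model.parameters()])
    return torch.dot(grads(x), grads(xt)).item()

f = nn.Sequential(nn.Linear(1, 1000), nn.Tanh(), nn.Linear(1000, 1))
xt = torch.tensor([[0.5]])
print(ntk(f, xt, xt))                      # nonzero: Property 2.1 holds
print(ntk(f, torch.tensor([[1.9]]), xt))   # typically non-negligible for Tanh
```

For a Tanh MLP, the second value is typically far from zero even for distant inputs, which corresponds to the violation of Property 2.3 discussed in the next section.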
## 3 Understanding the success and failure of sparse representations
The above properties can help to deepen our understanding of the success and failure of sparse representations in continual learning. To be specific, we argue that sparse representations are effective to reduce forgetting in linear function approximations, but are less useful in non-linear function approximations.
It is well-known that deep neural networks can automatically generate effective representations (a.k.a. features) to extract key properties from input data. The ability to learn useful features helps deep learning methods achieve great success in many areas (LeCun et al., 2015). In particular, we call a set of representations sparse when only a small part of representations is non-zero for a given input. Sparse representations are shown to help reduce the forgetting problem and the interference issues in both continual supervised learning and reinforcement learning (Shen et al., 2021; Liu et al., 2019). Formally, let \(\mathbf{x}\) be an input and \(\phi\) be an encoder that transforms the input \(\mathbf{x}\) into its representation \(\phi(\mathbf{x})\). The representation \(\phi(\mathbf{x})\) is sparse when most of its elements are zeros.
First, consider the case of linear approximations. A linear function is defined as \(f_{\mathbf{w}}(\mathbf{x})=\mathbf{w}^{\top}\phi(\mathbf{x}):\mathbb{R}^{n}\mapsto\mathbb{R}\), where \(\mathbf{x}\in\mathbb{R}^{n}\) is an input, \(\phi:\mathbb{R}^{n}\mapsto\mathbb{R}^{m}\) is a fixed encoder, and \(\mathbf{w}\in\mathbb{R}^{m}\) is a weight vector. Assume that the representation \(\phi(\mathbf{x})\) is sparse and non-zero (i.e., \(\|\phi(\mathbf{x})\|_{2}>0\)) for \(\mathbf{x}\in\mathbb{R}^{n}\). Next, we show that both Property 2.1 and Property 2.3 are satisfied in this case. Easy to know that \(\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x})=\phi(\mathbf{x})\). Together with Equation (1), we have
\[f_{\mathbf{w}^{\prime}}(\mathbf{x})-f_{\mathbf{w}}(\mathbf{x})=-\alpha\nabla_{f}L(f,F,\mathbf{x}_{ t})\,\phi(\mathbf{x})^{\top}\phi(\mathbf{x}_{t}). \tag{2}\]
By assumption, \(f_{\mathbf{w}}(\mathbf{x}_{t})\neq F(\mathbf{x}_{t})\) and \(\nabla_{f}L(f,F,\mathbf{x}_{t})\neq 0\). Then Property 2.1 holds since \(f_{\mathbf{w}^{\prime}}(\mathbf{x}_{t})-f_{\mathbf{w}}(\mathbf{x}_{t})=-\alpha\nabla_{f}L(f,F, \mathbf{x}_{t})\,\|\phi(\mathbf{x}_{t})\|_{2}^{2}\neq 0\). Moreover, when \(\mathbf{x}\neq\mathbf{x}_{t}\), it is very likely that \(\left\langle\phi(\mathbf{x}),\phi(\mathbf{x}_{t})\right\rangle\approx 0\) due to the sparsity of \(\phi(\mathbf{x})\) and \(\phi(\mathbf{x}_{t})\). Thus, \(f_{\mathbf{w}^{\prime}}(\mathbf{x})-f_{\mathbf{w}}(\mathbf{x})=-\alpha\nabla_{f}L(f,F,\mathbf{x}_ {t})\left\langle\phi(\mathbf{x}),\phi(\mathbf{x}_{t})\right\rangle\approx 0\) and Property 2.3 holds. We conclude that by satisfying Property 2.1 and Property 2.3, sparse representations successfully help mitigate catastrophic forgetting in linear approximations.
However, for non-linear approximations, sparse representations can no longer guarantee Property 2.3. Consider an MLP with one hidden layer \(f_{\mathbf{w}}(\mathbf{x})=\mathbf{u}^{\top}\sigma(\mathbf{V}\mathbf{x}+\mathbf{b}):\mathbb{R}^{n}\mapsto \mathbb{R}\), where \(\sigma\) is a non-linear activation function, \(\mathbf{x}\in\mathbb{R}^{n}\), \(\mathbf{u}\in\mathbb{R}^{m}\), \(\mathbf{V}\in\mathbb{R}^{m\times n}\), \(\mathbf{b}\in\mathbb{R}^{m}\), and \(\mathbf{w}=\{\mathbf{u},\mathbf{V},\mathbf{b}\}\). We compute the NTK in this case, resulting in the following lemma.
**Lemma 3.1** (NTK in non-linear approximations).: _Given a non-linear function \(f_{\mathbf{w}}(\mathbf{x})=\mathbf{u}^{\top}\sigma(\mathbf{V}\mathbf{x}+\mathbf{b}):\mathbb{R}^{n} \mapsto\mathbb{R}\), where \(\sigma\) is a non-linear activation function, \(\mathbf{x}\in\mathbb{R}^{n}\), \(\mathbf{u}\in\mathbb{R}^{m}\), \(\mathbf{V}\in\mathbb{R}^{m\times n}\), \(\mathbf{b}\in\mathbb{R}^{m}\), and \(\mathbf{w}=\{\mathbf{u},\mathbf{V},\mathbf{b}\}\). The NTK of this non-linear function is_
\[\left\langle\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}),\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}_ {t})\right\rangle=\sigma(\mathbf{V}\mathbf{x}+\mathbf{b})^{\top}\sigma(\mathbf{V}\mathbf{x}_{t}+ \mathbf{b})+\mathbf{u}^{\top}\mathbf{u}(\mathbf{x}^{\top}\mathbf{x}_{t}+1)\sigma^{\prime}(\mathbf{V} \mathbf{x}+\mathbf{b})^{\top}\sigma^{\prime}(\mathbf{V}\mathbf{x}_{t}+\mathbf{b}),\]
_where \(\left\langle\cdot,\cdot\right\rangle\) denotes Frobenius inner product or dot product depending on the context._
The proof of this lemma can be found in Appendix B. Note that the encoder \(\phi\) is no longer fixed, and we have \(\phi_{\mathbf{\theta}}(\mathbf{x})=\sigma(\mathbf{V}\mathbf{x}+\mathbf{b})\), where \(\mathbf{\theta}=\{\mathbf{V},\mathbf{b}\}\) are learnable parameters. By Lemma 3.1,
\[\left\langle\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}),\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}_ {t})\right\rangle=\phi_{\mathbf{\theta}}(\mathbf{x})^{\top}\phi_{\mathbf{\theta}}(\mathbf{x}_ {t})+\mathbf{u}^{\top}\mathbf{u}(\mathbf{x}^{\top}\mathbf{x}_{t}+1)\phi^{\prime}_{\mathbf{\theta}}( \mathbf{x})^{\top}\phi^{\prime}_{\mathbf{\theta}}(\mathbf{x}_{t}). \tag{3}\]
Compared with the NTK in linear approximations (Equation (2)), Equation (3) has an additional term \(\mathbf{u}^{\top}\mathbf{u}(\mathbf{x}^{\top}\mathbf{x}_{t}+1)\phi^{\prime}_{\mathbf{\theta}}(\mathbf{x} )^{\top}\phi^{\prime}_{\mathbf{\theta}}(\mathbf{x}_{t})\), due to a learnable encoder \(\phi_{\mathbf{\theta}}\). With sparse representations, we have \(\phi_{\mathbf{\theta}}(\mathbf{x})^{\top}\phi_{\mathbf{\theta}}(\mathbf{x}_{t})\approx 0\). However, it is not necessary true that \(\mathbf{u}^{\top}\mathbf{u}(\mathbf{x}^{\top}\mathbf{x}_{t}+1)\phi^{\prime}_{\mathbf{\theta}}(\mathbf{x })^{\top}\phi^{\prime}_{\mathbf{\theta}}(\mathbf{x}_{t})\approx 0\) even when \(\mathbf{x}\) and \(\mathbf{x}_{t}\) are quite dissimilar, which violates Property 2.3.
To conclude, our analysis indicates that sparse representations alone are not effective enough to reduce forgetting in non-linear approximations.
## 4 Obtaining sparsity with elephant activation functions
Although Lemma 3.1 shows that the forgetting issue can not be fully addressed with sparse representations solely in deep learning methods, it also points out a possible solution: sparse gradients.
With sparse gradients, we could have \(\phi^{\prime}_{\mathbf{\theta}}(\mathbf{x})^{\top}\phi^{\prime}_{\mathbf{\theta}}(\mathbf{x}_{t})\approx 0\). Together with sparse representations, we may still satisfy Property 2.3 in non-linear approximations, and thus reduce more forgetting. Specifically, we aim to design new activation functions to obtain both sparse representations and sparse gradients.
To begin with, we first define the sparsity of an function, which also applies to activation functions.
**Definition 4.1** (sparse function).: For a function \(\sigma:\mathbb{R}\mapsto\mathbb{R}\), we define the sparsity of function \(\sigma\) on input domain \([-C,C]\) as
\[S_{\epsilon,C}(\sigma)=\frac{|\{x\mid|\sigma(x)|\leq\epsilon,x\in[-C,C]\}|}{| \{x\mid x\in[-C,C]\}|}=\frac{|\{x\mid|\sigma(x)|\leq\epsilon,x\in[-C,C]\}|}{2C},\]
where \(\epsilon\) is a small positive number and \(C>0\). As a special case, when \(\epsilon\to 0^{+}\) and \(C\to\infty\), define
\[S(\sigma)=\lim_{\epsilon\to 0^{+}}\lim_{C\to\infty}S_{\epsilon,C}(\sigma).\]
We call \(\sigma\) a \(S(\sigma)\)-sparse function. In particular, \(\sigma\) is called a sparse function if it is a \(1\)-sparse function, i.e. \(S(\sigma)=1\).
**Remark 4.2**.: Easy to verify that \(0\leq S(\sigma)\leq 1\). The sparsity of a function shows the fraction of nearly zero outputs given a symmetric input domain. For example, both \(\mathrm{ReLU}(x)\) and \(\mathrm{ReLU}^{\prime}(x)\) are \(\frac{1}{2}\)-sparse functions. \(\mathrm{Tanh}(x)\) is a \(0\)-sparse function while \(\mathrm{Tanh}^{\prime}(x)\) is a sparse function. In the appendix, we summarize the results for common activation functions in Table 3 and visualize the activation functions with their gradients in Figure 5.
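For intuition, \(S_{\epsilon,C}(\sigma)\) can be approximated numerically on a fine grid; the following sketch (a grid approximation with illustrative \(\epsilon\) and \(C\), not an exact computation) reproduces the values above:

```python
import torch

def sparsity(sigma, eps=1e-3, C=100.0, n=1_000_001):
    """Grid approximation of S_{eps,C}(sigma) from Definition 4.1."""
    x = torch.linspace(-C, C, n)
    return (sigma(x).abs() <= eps).float().mean().item()

print(sparsity(torch.relu))   # ~0.5, matching S(ReLU) = 1/2
print(sparsity(torch.tanh))   # ~0.0, matching S(Tanh) = 0
```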
Next, we propose to use a novel class of bell-shaped activation functions, elephant activation functions 1. Formally, an elephant function is defined as
Footnote 1: We name this bell-shaped activation function as the _elephant_ function since the bell-shape is similar to the shape of an elephant (see Figure 1), honoring the work _The Little Prince_ by Antoine de Saint Exupery. This name also hints that this activation function empowers neural networks with continual learning ability, echoing the old saying that “an elephant never forgets”.
\[\mathrm{Elephant}(x)=\frac{1}{1+\left|\frac{x}{a}\right|^{d}}, \tag{4}\]
where \(a\) controls the width of the function and \(d\) controls the slope. We call a neural network that uses elephant activation functions an _elephant neural network (ENN)_. For example, for MLPs and CNNs, we have _elephant MLPs (EMLPs)_ and _elephant CNNs (ECNNs)_, respectively.
As shown in Figure 1, elephant activation functions have both sparse function values and sparse gradient values, which can also be formally proved.
**Lemma 4.3**.: \(\mathrm{Elephant}(x)\) _and \(\mathrm{Elephant}^{\prime}(x)\) are sparse functions._
Specifically, \(d\) controls the sparsity of gradients for elephant functions. The larger the value of \(d\), the sharper the slope of an elephant function and the sparser the gradient. On the other hand, \(a\) stands for the width of the function, controlling the sparsity of the function itself.
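Equation (4) translates directly into a drop-in activation module. The following minimal PyTorch sketch implements it; the specific values of \(a\) shown are illustrative choices, not the paper's tuned settings:

```python
import torch
import torch.nn as nn

class Elephant(nn.Module):
    """Elephant(x) = 1 / (1 + |x / a|^d), Equation (4)."""
    def __init__(self, a: float = 1.0, d: float = 4.0):
        super().__init__()
        self.a, self.d = a, d

    def forward(self, x):
        return 1.0 / (1.0 + torch.abs(x / self.a) ** self.d)

# An EMLP is an MLP whose hidden activation is Elephant (a=0.1 is illustrative).
emlp = nn.Sequential(nn.Linear(1, 1000), Elephant(a=0.1, d=8), nn.Linear(1000, 1))
```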
Next, we show that Property 2.3 holds under certain conditions for elephant functions.
Figure 1: (a) “My drawing was not a picture of a hat. It was a picture of a boa constrictor digesting an elephant.” _The Little Prince_, by Antoine de Saint Exupery. (b) Elephant functions with \(a=2\) and various \(d\). (c) The gradient of elephant functions with \(a=2\) and various \(d\).
**Theorem 4.4**.: _Define \(f_{\mathbf{w}}(\mathbf{x})\) as in Lemma 3.1. Let \(\sigma\) be the elephant activation function with \(d\to\infty\). When \(|\mathbf{V}(\mathbf{x}-\mathbf{x}_{t})|\succ 2a\mathbf{1}_{m}\), we have \(\langle\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}),\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x}_{t})\rangle=0\), where \(\succ\) denotes element-wise inequality and \(\mathbf{1}_{m}=[1,\cdots,1]^{\top}\in\mathbb{R}^{m}\)._
**Remark 4.5**.: Theorem 4.4 mainly proves that when \(d\to\infty\), Property 2.3 holds for an EMLP with one hidden layer. However, even when \(d\) is a small integer (e.g., 8), EMLPs still exhibit local elasticity, as we will show in the experiment section. The proof is in Appendix B.
## 5 Experiments
In this section, we first perform experiments in a simple regression task in the streaming learning setting. We will show that (1) sparse representations alone are not enough to address the forgetting issue, (2) ENNs can continually learn to solve regression tasks by reducing forgetting, and (3) ENNs are locally elastic even when \(d\) is a small integer. Next, we show that by incorporating elephant functions, ENNs are able to achieve better performance than several baselines in class incremental learning, without utilizing pre-training, replay buffer, or task boundaries. Finally, we apply ENNs in reinforcement learning (RL) and demonstrate ENNs' advantage in reducing catastrophic forgetting.
### Streaming learning for regression
In the real world, regression tasks are everywhere, from house price estimations (Madhuri et al., 2019) to stock predictions (Dase and Pawar, 2010), weather predictions (Ren et al., 2021), and power consumption forecasts (Dmitri et al., 2016). However, most prior continual learning methods are designed for classification tasks rather than regression tasks, although catastrophic forgetting frequently arises in regression tasks as well (He and Sick, 2021). In this section, we conduct experiments on a simple regression task in the streaming learning setting. In this setting, a learning agent is presented with one sample only at each time step and then performs learning updates given this new sample. Moreover, the learning process happens in a single pass of the whole dataset, that is, each sample only occurs once. Furthermore, the data stream is assumed to be non-iid. Finally, the evaluation happens after each new sample arrives, which requires the agent to learn quickly while avoiding catastrophic forgetting. Streaming learning methods enable real-time adaptation; thus they are more suitable in real-world scenarios where data is received in a continuous flow.
We consider streaming learning for approximating a sine function. In this task, there is a stream of data \((x_{1},y_{1}),(x_{2},y_{2}),\cdots,(x_{n},y_{n})\), where \(0\leq x_{1}<x_{2}<\cdots<x_{n}\leq 2\) and \(y_{i}=\sin(\pi x_{i})\). We set \(n=200\) in our experiment. The learning agent \(f\) is an MLP with one hidden layer of size \(1,000\). At each time step \(t\), the learning agent only receives one sample \((x_{t},y_{t})\). We minimize the square error loss \(l_{t}=(f(x_{t})-y_{t})^{2}\) with Adam optimizer (Kingma and Ba, 2015), where \(f(x_{t})\) is the agent's prediction. During an evaluation, the agent performance is measured by the mean square error (MSE) on a test dataset with \(1,000\) samples, where the inputs are evenly spaced over the interval \([0,2]\).
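For reference, this protocol amounts to the following minimal training loop (the learning rate is an illustrative choice; the network reuses the Elephant module sketched in Section 4):

```python
import torch

# Stream of (x_t, sin(pi * x_t)) pairs, x_1 < ... < x_200 in [0, 2].
xs = torch.linspace(0, 2, 200).unsqueeze(1)
ys = torch.sin(torch.pi * xs)
opt = torch.optim.Adam(emlp.parameters(), lr=1e-3)   # emlp: the EMLP sketched above

for x_t, y_t in zip(xs, ys):                         # single pass, one sample per step
    loss = ((emlp(x_t) - y_t) ** 2).sum()            # squared error on the new sample
    opt.zero_grad()
    loss.backward()
    opt.step()
```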
We compare our method (EMLP) with two kinds of baselines. One is an MLP with classical activation functions, including \(\mathrm{ReLU}\), \(\mathrm{Sigmoid}\), \(\mathrm{ELU}\), and \(\mathrm{Tanh}\). The other is the sparse representation neural network (SR-NN) (Liu et al., 2019) which is designed to generate sparse representations. Specifically, we apply various classical activation functions (\(\mathrm{ReLU}\), \(\mathrm{Sigmoid}\), \(\mathrm{ELU}\), and \(\mathrm{Tanh}\)) in SR-NNs. We set \(d=8\) for all elephant functions applied in EMLP.
We summarize test MSEs in Table 1. A lower MSE indicates better performance. Additional training details are included in the appendix. For the SR-NN, we present the best result among the combinations of SR-NNs and classical activation functions. Clearly, our method EMLP achieves the best performance, reaching a very low test MSE compared with baselines. Generally, we find that the SR-NN achieves slightly better performance than MLPs with classical activation functions, showing the benefits of sparse representations. However, compared with EMLP, the test MSE of the SR-NN is still large, indicating that the SR-NN fails to approximate \(\sin(\pi x)\).
Next, we plot the true function \(\sin(\pi x)\), the learned function \(f(x)\), and the NTK function \(\mathrm{NTK}(x)=\langle\nabla_{\mathbf{w}}f_{\mathbf{w}}(x),\nabla_{\mathbf{w}}f_{\mathbf{w}}(x _{t})\rangle\) at different training stages for EMLP and SR-NN in Figure 2. The plots of MLP (\(\mathrm{ReLU}\)), MLP (\(\mathrm{Sigmoid}\)), MLP (\(\mathrm{Tanh}\)), and MLP (\(\mathrm{ELU}\)) are not presented, since they are similar to the plots of the SR-NN. The NTK function \(\mathrm{NTK}(\mathbf{x})\) is normalized such that the function value is in \([-1,1]\). In Figure 2, the plots in the first row show that for EMLP with \(d=8\), \(\mathrm{NTK}(x)\)
quickly decreases to 0 as \(x\) moves away from \(x_{t}\), demonstrating the local elasticity (Property 2.3) of EMLP with a small \(d\). However, SR-NNs (and MLPs with classical activation functions) are not locally elastic; the learned function basically evolves as a linear function, a phenomenon that often appears in over-parameterized neural networks (Jacot et al., 2018; Chizat et al., 2019).
_By injecting local elasticity into a neural network, we can break the inherent global generalization ability of the neural network (Ghiassian et al., 2020), constraining the output changes of the neural network to small local areas. Utilizing this phenomenon, we can update a wrong prediction by "editing" outputs of a neural network nearly point-wisely._ To verify this, we first train a neural network to approximate \(\sin(\pi x)\) well in the traditional supervised learning style. We call the function learned at this stage the old learned function. Now assume that the original \(y\) value of an input \(x\) is changed to \(y^{\prime}\), while the true values of other inputs remain the same. The goal is to update the prediction of the neural network for input \(x\) to this new value \(y^{\prime}\), while keeping the predictions of other inputs close to the original predictions, without expensive re-training on the whole dataset. Note that this requirement is common, especially in RL; we will illustrate it with more details later. For now, we focus on supervised learning for a clearer demonstration. Specifically, we would like to update the output value at \(x=1.5\) of the learned function from \(y=-1.0\) to \(y^{\prime}=-1.5\). We perform experiments on both EMLP and MLP, as shown in Figure 3. Clearly, both neural networks successfully update the prediction at \(x=1.5\) to the new value. However, besides the prediction at \(x=1.5\), the learned function of MLP is changed globally while the changes of EMLP are mainly confined
| Method | Test Performance (MSE) |
| --- | --- |
| MLP (ReLU) | \(0.4729\pm 0.0110\) |
| MLP (Sigmoid) | \(0.4583\pm 0.0008\) |
| MLP (Tanh) | \(0.4461\pm 0.0013\) |
| MLP (ELU) | \(0.4521\pm 0.0019\) |
| SR-NN | \(0.4061\pm 0.0036\) |
| EMLP (Elephant) | \(\mathbf{0.0081\pm 0.0009}\) |

Table 1: The test MSE of various networks in streaming learning for a simple regression task. Lower is better. All results are averaged over \(5\) runs, reported with standard errors.
Figure 2: Plots of the true function \(\sin(\pi x)\), the learned function \(f(x)\), and the NTK function \(\mathrm{NTK}(x)\) at different training stages for EMLP and SR-NN. The NTK function \(\mathrm{NTK}(x)\) of the EMLP quickly reduces to 0 as \(x\) moves away from \(x_{t}\), demonstrating the local elasticity (Property 2.3) of EMLP.
in a small local area around \(x=1.5\). That is, we can successfully correct the wrong prediction nearly point-wisely by "editing" the output value for ENNs, but not for classical neural networks. To conclude, we summarize our findings in the following:
* Sparse representations are useful but are not effective enough to solve streaming regression tasks.
* EMLP is locally elastic even when \(d\) is small (e.g., \(8\)) and it allows "editing" the output value nearly point-wisely.
* EMLP can continually learn to solve regression tasks by reducing forgetting.
### Class incremental learning
In addition to regression tasks, our method can be applied to classification tasks as well, by simply replacing classical activation functions with elephant activation functions in a classification model. Though our model is agnostic to data distributions, we test it in class incremental learning in order to compare it with previous methods. Moreover, we adopt a stricter variation of the continual learning setting by adding the following restrictions: (1) as in streaming learning, each sample only occurs once during training; (2) task boundaries are not provided or inferred (Aljundi et al., 2019); (3) neither pre-training nor a fixed feature encoder is allowed (Wolfe and Kyrillidis, 2022); and (4) no buffer is allowed to store old task information in any form, such as training samples or gradients.
Surprisingly, we find that no methods have been designed for or tested in the above setting. As a variant of EWC (Kirkpatrick et al., 2017), Online EWC (Schwarz et al., 2018) almost meets these requirements, although it still requires task boundaries. To overcome this issue, we propose _Streaming EWC_ as one of the baselines, which updates the Fisher information matrix after every training sample. Streaming EWC can be viewed as a special case of Online EWC, treating each training sample as a new task. Besides Streaming EWC, we consider SDMLP (Bricken et al., 2023) and FlyModel (Shen et al., 2021) as two strong baselines, although they require either task boundary information or multiple data passes. Finally, two naive baselines are included, which train MLPs and CNNs without any techniques to reduce forgetting.
We test various methods on several standard datasets -- Split MNIST (Deng, 2012), Split CIFAR10 (Krizhevsky, 2009), Split CIFAR100 (Krizhevsky, 2009), and Split Tiny ImageNet (Le and Yang, 2015). _We only present results on Split MNIST here due to the page limit and put all other results in Appendix C.2._ For SDMLP and FlyModel, we take results from Bricken et al. (2023) directly. For MLP and CNN, we use \(\mathrm{ReLU}\) by default. For our methods, we apply elephant functions with \(d=4\) in an MLP with one hidden layer and a simple CNN. The resulting neural networks are named EMLP and ECNN, respectively.
The test accuracy is used as the performance metric. A result summary of different methods on Split MNIST is shown in Table 2. The number of neurons refers to the size of the last hidden layer in a neural network, also known as the feature dimension. Overall, FlyModel performs the best, although it requires additional task boundary information. Our methods (EMLP and ECNN) are the
Figure 3: A demonstration of local elasticity of the EMLP. EMLP allows updating the old prediction nearly point-wisely by “editing” the output value of the neural network. However, we cannot achieve this for MLP since it is not locally elastic.
second best without utilizing task boundary information by training for a single pass. Other findings are summarized in the following, reaffirming the findings in Mirzadeh et al. (2022a;b):
* Wider neural networks forget less by using more neurons.
* Neural network architectures can significantly impact learning performance: CNN (ECNN) is better than MLP (EMLP), especially with many neurons.
* Architectural improvement is larger than algorithmic improvement: using a better architecture (e.g., EMLP and ECNN) is more beneficial than incorporating Streaming EWC.
Overall, we conclude that applying elephant activation functions significantly reduces forgetting and boosts performance in class incremental learning under strict constraints.
### Reinforcement Learning
Recently, Lan et al. (2023) showed that the forgetting issue exists even in single RL tasks and it is largely masked by a large replay buffer. Without a replay buffer, a single RL task can be viewed as a series of related but different tasks without clear task boundaries (Dabney et al., 2021). For example, in temporal difference (TD) learning, the true value function is usually approximated by \(v\) with bootstrapping: \(v(S_{t})\gets R_{t+1}+\gamma v(S_{t+1})\), where \(S_{t}\) and \(S_{t+1}\) are two successive states and \(R_{t+1}+\gamma v(S_{t+1})\) is named the TD target. During training, the TD target constantly changes due to bootstrapping, non-stationary state distribution, and changing policy. To speed up learning while reducing forgetting, it is crucial to update \(v(S_{t})\) to the new TD target without changing other state values too much (Lan et al., 2023), where local elasticity can help.
_By its very nature, solving RL tasks requires continual learning ability for both classification and regression._ Concretely, estimating value functions using TD learning is a non-stationary regression task; given a specific state, selecting the optimal action from a discrete action space is very similar to a classification task. In this section, we demonstrate that incorporating elephant activation functions helps reduce forgetting in RL under memory constraints. Specifically, we use deep Q-network (DQN) (Mnih et al., 2013, 2015) as an exemplar algorithm following Lan et al. (2023). Four classical RL tasks from Gym (Brockman et al., 2016) and PyGame Learning Environment (Tasfi, 2016) are chosen: MountainCar-v0, Acrobot-v1, Catcher, and Pixelcopter. An MLP/EMLP with one hidden layer of size \(1,000\) is used to parameterize the action-value function in DQN for all tasks. For all elephant functions used in EMLPs, \(d=4\). The standard buffer size is 1e4. For EMLP, we use a small replay buffer with size 32; for MLP, we consider two buffer sizes -- 1e4 and 32. More details are included in the appendix.

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
Method & Neurons & Dataset Passes & Task Boundary & Test Accuracy \\
\hline
MLP & 1K & 1 & & 0.665\(\pm\)0.014 \\
MLP+Streaming EWC & 1K & 1 & & 0.708\(\pm\)0.008 \\
SDMLP & 1K & 500 & & 0.69 \\
FlyModel & 1K & 1 & \(\checkmark\) & **0.77** \\
**EMLP (ours)** & 1K & 1 & & 0.723\(\pm\)0.006 \\
CNN & 1K & 1 & & 0.659\(\pm\)0.016 \\
CNN+Streaming EWC & 1K & 1 & & 0.716\(\pm\)0.024 \\
**ECNN (ours)** & 1K & 1 & & 0.732\(\pm\)0.007 \\
\hline
MLP & 10K & 1 & & 0.621\(\pm\)0.010 \\
MLP+Streaming EWC & 10K & 1 & & 0.609\(\pm\)0.013 \\
SDMLP & 10K & 500 & & 0.53 \\
FlyModel & 10K & 1 & \(\checkmark\) & **0.91** \\
**EMLP (ours)** & 10K & 1 & & 0.802\(\pm\)0.002 \\
CNN & 10K & 1 & & 0.769\(\pm\)0.011 \\
CNN+Streaming EWC & 10K & 1 & & 0.780\(\pm\)0.010 \\
**ECNN (ours)** & 10K & 1 & & 0.850\(\pm\)0.004 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: The test accuracy of various methods in class incremental learning on _Split MNIST_. Higher is better. All accuracies are averaged over \(5\) runs, reported with standard errors. The number of neurons refers to the size of the last hidden layer in a neural network, i.e., the feature dimension.
The return curves of various methods are shown in Figure 4, averaged over \(10\) runs, where shaded areas represent standard errors and \(m\) denotes the size of a replay buffer. Clearly, using a tiny replay buffer, EMLP (\(m=32\)) outperforms MLP (\(m=32\)) in all tasks. Moreover, except on Pixelcopter, EMLP (\(m=32\)) achieves performance similar to MLP (\(m=1e4\)), although it uses a much smaller buffer. In summary, the experimental results further confirm the effectiveness of elephant activation functions in reducing catastrophic forgetting.
## 6 Related works
**Architecture-based continual learning.** Continual learning methods can be divided into several categories, such as regularization-based methods (Kirkpatrick et al., 2017; Schwarz et al., 2018; Zenke et al., 2017; Aljundi et al., 2019), replay-based methods (Kemker et al., 2018; Farquhar and Gal, 2018; Van de Ven and Tolias, 2019; Delange et al., 2021), and optimization-based methods (Lopez-Paz and Ranzato, 2017; Zeng et al., 2019; Farajtabar et al., 2020). We encourage readers to check recent surveys (Khetarpal et al., 2022; Delange et al., 2021; Wang et al., 2023) for more details. Here, we focus on architecture-based methods that are more related to our work. Among them, our work is inspired by Mirzadeh et al. (2022a;b), which study and analyze the effect of different neural architectures on continual learning. Several approaches involve careful selection and allocation of a subset of weights in a large network for each task (Mallya and Lazebnik, 2018; Sokar et al., 2021; Fernando et al., 2017; Serra et al., 2018; Masana et al., 2021; Li et al., 2019; Yoon et al., 2018), or the allocation of a network that is specific to each task (Rusu et al., 2016; Aljundi et al., 2017). Some other methods expand a neural network dynamically (Yoon et al., 2018; Hung et al., 2019; Ostapenko et al., 2019). Finally, Shen et al. (2021) and Bricken et al. (2023) proposed novel neural network structures inspired by biological neural circuits, acting as two strong baseline methods in our experiments.
**Sparse representations.** Sparse representations have been known to help reduce forgetting for decades (French, 1992). In supervised learning, sparse training (Dettmers and Zettlemoyer, 2019; Liu et al., 2020; Sokar et al., 2021), dropout variants (Srivastava et al., 2013; Goodfellow et al., 2013; Mirzadeh et al., 2020; Abbasi et al., 2022; Sarfraz et al., 2023), and weight pruning methods (Guo et al., 2016; Frankle and Carbin, 2019; Blalock et al., 2020; Zhou et al., 2020) are shown to speed up training and improve generalization. In continual learning, both SDMLP (Bricken et al., 2023) and FlyModel (Shen et al., 2021) are designed to generate sparse representations. In RL, Le et al. (2017), Liu et al. (2019), and Pan et al. (2022) showed that sparse representations help stabilize training and improve overall performance.
**Local elasticity and memorization.** He and Su (2020) proposed the concept of local elasticity. Chen et al. (2020) introduced label-aware neural tangent kernels, showing that models trained with these kernels are more locally elastic. Mehta et al. (2021) proved a theoretical connection between the scale of neural network initialization and local elasticity, demonstrating extreme memorization using large initialization scales. Incorporating Fourier features in the input of a neural network also induces local elasticity, which is greatly affected by the initial variance of the Fourier basis (Li and Pathak, 2021).
Figure 4: The return curves of DQN in four tasks with different neural networks and buffer sizes. All results are averaged over \(10\) runs, where the shaded areas represent standard errors. Using a much smaller buffer, EMLP (\(m=32\)) outperforms MLP (\(m=32\)) and matches MLP (\(m=1e4\)).
## 7 Conclusion
In this work, we proposed elephant activation functions that can make neural networks more resilient to catastrophic forgetting. Specifically, we showed that incorporating elephant activation functions in classical neural networks helps improve learning performance in streaming learning for regression, class incremental learning, and reinforcement learning. Theoretically, our work provided a deeper understanding of the role of activation functions in catastrophic forgetting. We hope our work will inspire more researchers to study and design better neural network architectures for continual learning. |
2302.13570 | Physical Adversarial Attacks on Deep Neural Networks for Traffic Sign
Recognition: A Feasibility Study | Deep Neural Networks (DNNs) are increasingly applied in the real world in
safety critical applications like advanced driver assistance systems. An
example for such use case is represented by traffic sign recognition systems.
At the same time, it is known that current DNNs can be fooled by adversarial
attacks, which raises safety concerns if those attacks can be applied under
realistic conditions. In this work we apply different black-box attack methods
to generate perturbations that are applied in the physical environment and can
be used to fool systems under different environmental conditions. To the best
of our knowledge we are the first to combine a general framework for physical
attacks with different black-box attack methods and study the impact of the
different methods on the success rate of the attack under the same setting. We
show that reliable physical adversarial attacks can be performed with different
methods and that it is also possible to reduce the perceptibility of the
resulting perturbations. The findings highlight the need for viable defenses of
a DNN even in the black-box case, but at the same time form the basis for
securing a DNN with methods like adversarial training which utilizes
adversarial attacks to augment the original training data. | Fabian Woitschek, Georg Schneider | 2023-02-27T08:10:58Z | http://arxiv.org/abs/2302.13570v1 | Physical Adversarial Attacks on Deep Neural Networks for Traffic Sign Recognition: A Feasibility Study
###### Abstract
Deep Neural Networks (DNNs) are increasingly applied in the real world in safety critical applications like advanced driver assistance systems. An example for such use case is represented by traffic sign recognition systems. At the same time, it is known that current DNNs can be fooled by adversarial attacks, which raises safety concerns if those attacks can be applied under realistic conditions. In this work we apply different black-box attack methods to generate perturbations that are applied in the physical environment and can be used to fool systems under different environmental conditions. To the best of our knowledge we are the first to combine a general framework for physical attacks with different black-box attack methods and study the impact of the different methods on the success rate of the attack under the same setting. We show that reliable physical adversarial attacks can be performed with different methods and that it is also possible to reduce the perceptibility of the resulting perturbations. The findings highlight the need for viable defenses of a DNN even in the black-box case, but at the same time form the basis for securing a DNN with methods like adversarial training which utilizes adversarial attacks to augment the original training data.
## I Introduction
Deep Neural Networks (DNNs) are increasingly used in real world applications, which includes their use in Advanced Driver Assistance Systems (ADAS). One example of such a use case is the application of DNNs to the problem of Traffic Sign Recognition (TSR). However, it was shown that machine learning systems, and especially DNNs, are susceptible to small changes in the input data [1], which an adversary can use to fool the system by generating perturbations that are applied to the input data. Hence, it is relevant to explore whether similar attacks can be performed under realistic conditions, which would mean that systems deployed in reality have to be defended against such attacks.
Typical adversarial attacks directly manipulate the input of a DNN, which is hard to do for an adversary when attacking a real TSR system. First, the adversary does not have access to the TSR system that runs inside the vehicle, and even if it had this access, it would be impossible to calculate perturbations in real time. Hence, we consider an alternative and more realistic attack scenario where the adversary applies the perturbation in the physical environment, which is then captured by the ADAS-camera of the vehicle. In this case the perturbation must reliably fool the TSR system under different lighting conditions and viewpoints.
This attack scenario is the most dangerous one, since the adversary can impact the safety of many vehicles, simply by placing a perturbed traffic sign in the physical environment. However, calculating such perturbed traffic signs typically requires the adversary to have white-box access to the system, which is unrealistic. Hence, we look into existing work on black-box adversarial attacks and study whether physical attacks are also possible in this case where the adversary has only limited access to the system.
In Fig. 1 an overview of the different parameters of an adversarial attack is shown and our focus areas are highlighted. We only consider targeted attacks since these have the greatest potential to cause actual damage in a realistic scenario. It is more severe if the adversary controls the exact output of a TSR system, meaning minor misclassifications (e.g. the system predicts a Speed Limit 20 sign instead of a Speed Limit 30 sign) are avoided.
**Our contributions:**
* We incorporate different black-box attack methods into a general framework that can be used to perform physical adversarial attacks
* We study how well each black-box attack method can be used to generate perturbations that reliably fool a TSR system under different conditions which simulate a realistic environment
* The capabilities of each black-box attack method to generate perturbations with limited strength and reduced perceptibility are examined
* Our findings show that physical attacks are possible even under strict black-box access, but also form the basis to protect DNNs in safety critical applications using defense methods like adversarial training [2]
## II Related Work
Fig. 1: Overview of adversarial attacks and our focus areas

Adversarial attacks [1], where the resulting perturbation is applied in a physical environment, were first published for general purpose object classifiers ([3, 4, 5]). Afterwards, similar attacks have been demonstrated for the specific use case of ADAS ([6, 7, 8]). However, these attacks assume white-box knowledge of a system.
In a concurrent line of work, methods have been developed to perform adversarial attacks in a black-box scenario where the access of the adversary to the system is limited. Here, the simplest methods are transfer-based attacks [1], where the perturbation is calculated for a known white-box system on a similar task and then used to attack the black-box system. To improve the success rate of such transfer-based attacks the use of special surrogate systems [9] or ensembles [10] has been proposed to perform the white-box attack. Alternatively, attack methods have been developed that attack the black-box system directly. These include approximation-based methods [11] and decision-based methods [12].
We do not utilize decision-based methods since they cannot be incorporated into the framework presented in III-A to calculate physical perturbations. For the transfer-based methods we only evaluate the approach that trains a local surrogate system which mimics the behavior of the black-box system. We choose this transfer-based method since a successfully stolen surrogate of the black-box system may reveal more to the adversary and also be reusable for other tasks.
## III Practical Adversarial Attacks
We first describe the method used for generating adversarial perturbations that can be used in a physical environment. This method is used to fool a classification system based on a DNN for the application of TSR, but it is in principle possible to apply similar methods to other applications as well. At first, we assume complete white-box knowledge of the system that is attacked and later relax that constraint to the hard black-box case, where only the label of the class with the highest probability is returned by the system.
### _Physical Attacks_
To generate perturbations that fool a system when applied in a physical environment, we build on the RP\({}_{2}\) algorithm [6]. This consists of an objective function to find a perturbation \(\delta\in[-1,1]^{D}\) for an input \(x\in[0,1]^{D}\) to a classifier system \(f(\cdot)\), such that the resulting perturbed input \(x+\delta\) is classified as the adversary target class \(y^{*}\) by the system:
\[\begin{split}\operatorname*{arg\,min}_{\delta}\mathbb{E}_{t\sim T }\underbrace{\mathcal{L}(f(t(x+M\cdot\delta)),y^{*})}_{\text{adversarial}}\\ +\lambda_{TV}\underbrace{TV(M\cdot\delta)}_{\text{inconspicuous}}+ \lambda_{NPS}\underbrace{NPS(M\cdot(x+\delta))}_{\text{printable}}.\end{split} \tag{1}\]
Here, \(M\in\{0,1\}^{D}\) is a mask that limits the area of the input to which the perturbation is applied, \(\lambda_{TV}\) and \(\lambda_{NPS}\) are hyper-parameters that control the strength of the regularization of the perturbation and \(t(\cdot)\) is a function that transforms the perturbed input based on the set of considered transformations \(T\). In the remainder of this work we refer to the case of \(\lambda_{TV},\lambda_{NPS}=0\) (no regularization) as unlimited and to the case of \(\lambda_{TV},\lambda_{NPS}\neq 0\) as limited. Further, \(\mathcal{L}(\cdot)\) is a standard loss function that measures the difference between the prediction of the system and the target class \(y^{*}\). Since we study adversarial attacks on a classification system for TSR, we choose the Negative Log Likelihood (NLL) loss as \(\mathcal{L}(\cdot)\) to express how well the perturbation \(\delta\) fools the system.
Like [7] we use the Total Variation (TV) norm instead of the \(\ell_{p}\) norm to regularize the strength of the perturbation during the optimization. If \(r,c\) denote the row and column index the TV norm can be expressed as:
\[TV(a)=\sum_{r,c}|a_{r+1,c}-a_{r,c}|+|a_{r,c+1}-a_{r,c}|\,. \tag{2}\]
As done in the original RP\({}_{2}\) algorithm we also use the Non-Printability Score (NPS) [13] to account for reproduction errors when the digital perturbation is printed. If \(P_{a}\) denotes the set of RGB triplets that exist in the perturbed input and \(P_{p}\) denotes the set of RGB triplets that can be replicated by the printer the NPS term can be expressed as:
\[NPS(a)=\sum_{p\in P_{a}}\prod_{\hat{p}\in P_{p}}|p-\hat{p}|\,. \tag{3}\]
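Both regularizers are straightforward to implement; a possible PyTorch sketch for an image tensor of shape (C, H, W) follows, where treating \(|p-\hat{p}|\) as a Euclidean distance between RGB triplets is our interpretation.

```python
import torch

def tv_norm(a):
    """Total variation of an image tensor a with shape (C, H, W), eq. (2)."""
    return (a[:, 1:, :] - a[:, :-1, :]).abs().sum() + \
           (a[:, :, 1:] - a[:, :, :-1]).abs().sum()

def nps(a, printable):
    """Non-printability score, eq. (3). `a` has shape (C, H, W); `printable`
    is a (P, C) tensor of RGB triplets the printer can reproduce."""
    pixels = a.reshape(a.shape[0], -1).T           # (H*W, C) pixel triplets
    # |p - p_hat| interpreted as Euclidean distance between RGB triplets.
    dists = torch.cdist(pixels, printable)         # (H*W, P)
    return dists.prod(dim=1).sum()
```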
The combined objective function in (1) is then optimized using standard gradient descent to result in a perturbation \(\delta\), where we use the ADAM optimizer [14] to perform this optimization throughout this work.
In order that the final perturbation can successfully fool a classifier system in the real world, it is important that the perturbation is robust to changes in the lighting conditions and viewpoints. To achieve this robustness different environmental conditions are modeled during the optimization. In each iteration the function \(t(\cdot)\) applies a different transformation from a predefined set \(T\) to the perturbed input, simulating changing conditions. Hence, we use the Expectation over Transformation method [15] to robustly optimize the perturbation for a range of environmental conditions.
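Putting these pieces together, the optimization of (1) with Expectation over Transformation can be sketched as follows, reusing the `tv_norm` and `nps` helpers above; here `f` is assumed to be a differentiable classifier returning logits, `sample_transform` a hypothetical sampler of random differentiable transformations from \(T\), and the step count, learning rate, and regularization weights are illustrative. Clamping to the valid image range is a practical assumption of this sketch.

```python
import torch

def rp2_attack(f, x, mask, y_target, sample_transform, printable,
               steps=1000, lr=1e-2, lam_tv=1e-4, lam_nps=1e-5):
    """Optimize objective (1): each step draws a random transformation
    t ~ T and descends the regularized adversarial loss with ADAM."""
    delta = torch.zeros_like(x, requires_grad=True)   # x has shape (C, H, W)
    opt = torch.optim.Adam([delta], lr=lr)
    nll = torch.nn.NLLLoss()
    target = torch.tensor([y_target])
    for _ in range(steps):
        t = sample_transform()                        # random differentiable t
        x_adv = t((x + mask * delta).clamp(0.0, 1.0))
        logits = f(x_adv.unsqueeze(0))                # classifier expects a batch
        loss = (nll(torch.log_softmax(logits, dim=1), target)
                + lam_tv * tv_norm(mask * delta)
                + lam_nps * nps(mask * (x + delta), printable))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-1.0, 1.0)                   # keep delta in [-1, 1]^D
    return delta.detach()
```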
### _Black-Box Attacks_
The described algorithm requires gradients to be computable for the system \(f(\cdot)\) that should be attacked, meaning the adversary has white-box access to the system. In reality this constraint is typically hard to fulfill for an adversary, so it has to rely on black-box methods, where the internal behavior of the system does not have to be known.
#### Iii-B1 Gradient Approximation
The first black-box method we consider is based on the approximation of the true but unknown gradient \(g\). We use Simultaneous Perturbation Stochastic Approximation (SPSA) [16] to calculate an approximation of the true gradient, based on a two sided finite difference estimation with random directions. SPSA is used since [17] has shown that SPSA is the most reliable among gradient-free optimization techniques. Therefore, with noise images sampled from a Rademacher distribution \(\xi_{1},\dots,\xi_{s}\sim\{-1,1\}^{D}\) the approximation of the true gradient \(g\) can be expressed as:
\[\begin{split} g&=\frac{\partial\mathcal{L}(f(x^{ \prime}),y^{*})}{\partial x^{\prime}}\\ &\approx\frac{1}{2s\alpha}\sum_{i=1}^{s}\frac{\mathcal{L}(f(x^{ \prime}+\alpha\xi_{i}),y^{*})-\mathcal{L}(f(x^{\prime}-\alpha\xi_{i}),y^{*}) }{\xi_{i}}\,.\end{split} \tag{4}\]
Here, \(x^{\prime}=t(x+M\cdot\delta)\) is the perturbed and transformed input, \(\alpha\) is the strength of the noise samples and \(s\) is the amount of noise samples used. We use the resulting gradient estimation as a direct plug-in for the true gradient and perform the same optimization as previously.
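A minimal sketch of this estimator is given below, assuming a `loss_at` routine that evaluates \(\mathcal{L}(f(\cdot),y^{*})\) with forward (black-box) access only; since Rademacher entries are \(\pm 1\), the element-wise division by \(\xi_{i}\) in (4) is identical to multiplication.

```python
import torch

def spsa_gradient(loss_at, x_adv, alpha=0.01, s=128):
    """Two-sided finite-difference SPSA estimate of the gradient, eq. (4).
    Only forward (black-box) evaluations of the loss are needed."""
    g = torch.zeros_like(x_adv)
    for _ in range(s):
        xi = torch.randint(0, 2, x_adv.shape).to(x_adv.dtype) * 2 - 1  # +-1
        diff = loss_at(x_adv + alpha * xi) - loss_at(x_adv - alpha * xi)
        g += diff / (2.0 * alpha * xi)     # element-wise division by xi
    return g / s
```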
However, (4) requires that the loss \(\mathcal{L}(\cdot)\) can be computed, which requires the output of \(f(\cdot)\) to be a vector of probabilities over all classes since we use the NLL loss. This constraint is often not met, since the system might only output the top-x classes without any confidence metric attached. Especially TSR systems that are used in reality only give the user access to the top-1 class. Therefore, we now focus on the most difficult case and consider a system that only outputs the top-1 class without any further information. This is as little information as possible and we refer to this case as hard black-box (compared to soft black-box, where the complete probability vector is output).
To adjust the procedure to this case it is required that \(\mathcal{L}(\cdot)\) only uses the top-1 class. Hence, we define a substitute loss, inspired by [11], which measures the robustness of a classification under the influence of noise. To this end we sample noise images from a uniform distribution \(\zeta_{1},\dots,\zeta_{h}\sim\mathcal{U}(-1,1)^{D}\) and express the substitute loss as:
\[\mathcal{L}_{\text{substitute}}(x,y,f(\cdot))=1-\frac{1}{h}\sum_{i=1}^{h}\mathbb{1}\!\left[f(x+\beta\zeta_{i})=y\right]\,. \tag{5}\]
Here, \(y\) is the true class, \(\beta\) is the strength of the noise samples and \(h\) is the amount of noise samples used. For this loss to yield meaningful approximations, either the input \(x\) must already be classified mainly as the true class, or a rather large noise strength has to be used (at least at the beginning) so that the true class is found by chance and can be iterated towards over time.
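A direct transcription of (5) could look as follows, where `f_top1` is assumed to return only the top-1 class label of the black-box system:

```python
import torch

def substitute_loss(f_top1, x, y, beta=0.1, h=50):
    """Hard black-box substitute loss, eq. (5): one minus the fraction of
    uniformly perturbed copies of x still classified as y."""
    hits = 0
    for _ in range(h):
        zeta = torch.empty_like(x).uniform_(-1.0, 1.0)
        hits += int(f_top1(x + beta * zeta) == y)
    return 1.0 - hits / h
```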
#### Iii-B2 Model Stealing
Transfer-based black-box attacks [1] represent an alternative to gradient approximation-based attacks. Here the perturbation is generated using a white-box system and is then transferred to the black-box system. If the systems behave similarly the transfer rate is high and the black-box system is fooled by the perturbation. Hence, the adversary wants to have a white-box system that is as similar as possible to the black-box system. To achieve this, we use Model Stealing (MS) attacks to train a private surrogate system of the black-box system. Then this surrogate is attacked using the white-box attack from (1) and the resulting perturbation is used to attack the black-box system.
Regarding the access to the system we only consider the hard black-box case, since it is the most challenging and realistic. The only information the adversary needs is the number of output classes of the black-box system, since this is required to train the surrogate system. This knowledge can be obtained by querying the black-box system on different inputs and observing the range of possible outputs. Further, the surrogate system can also have one additional class which functions as a placeholder, where all unexpected outputs from the black-box system are collected.
To train the surrogate we follow the basic approach introduced in [9]. First, for the surrogate a DNN architecture is chosen that roughly matches the expected expressivity of the black-box system. Then the black-box system is used to label an initial seed dataset and the surrogate is trained using these labels. Hence, the surrogate learns to mimic the labels of the black-box system on the presented data. Next, the dataset is augmented using an adversarial attack on the surrogate, which forces the new data samples to lie in a new decision region of the surrogate. Then the described label + train process is repeated, whereby the surrogate adapts the decision boundaries so that it outputs the same classes as the black-box system. This process is repeated for a certain amount of global iterations, through which the decision boundaries of the surrogate are moved closer and closer to the decision boundaries of the black-box system.
To perform the adversarial attack on the surrogate, [9] use the untargeted Fast Gradient Sign Method (FGSM) [18]. In contrast, we follow the approach from [19], which uses the targeted Projected Gradient Descent Method (PGDM) [2]. This results in a higher quality of the surrogate because the data augmentation is more extensive and better represents the existing data space. The PGDM is an iterative expansion of the FGSM, which in the case of a targeted attack adds the negative direction of the gradient to the current input \(x_{i}\):
\[x_{i+1}=x_{i}-\epsilon\cdot\text{sgn}(\nabla_{x_{i}}\mathcal{L}(s(x_{i}),y^{*}))\,. \tag{6}\]
Here, \(s(\cdot)\) is the surrogate system that is currently trained, \(\epsilon\) is the step size of the attack and \(\text{sgn}\) denotes the signum function. For the PGDM the step size \(\epsilon\) is divided into \(K\) steps and the procedure is executed iteratively for \(K\) iterations with an individual step size of \(\epsilon/K\). Hence, at each global iteration \(i\) we perform a PGDM attack with a randomly drawn target class (that differs from the assigned label) on each data point in the current dataset. Therefore, the size of the dataset is doubled at each global iteration.
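The following sketch combines the targeted PGDM step (6) with the label-train-augment loop described above; the `fit` training routine is hypothetical, and for brevity the random target class is drawn without excluding the assigned label.

```python
import torch

def pgdm_targeted(s, x, y_target, eps=0.1, K=10):
    """Targeted PGDM, eq. (6): K signed-gradient steps of size eps / K."""
    ce = torch.nn.CrossEntropyLoss()
    x_i = x.clone()
    for _ in range(K):
        x_i = x_i.detach().requires_grad_(True)
        loss = ce(s(x_i), torch.tensor([y_target]))
        grad, = torch.autograd.grad(loss, x_i)
        x_i = (x_i - (eps / K) * grad.sign()).clamp(0.0, 1.0)
    return x_i.detach()

def steal_model(f_top1, surrogate, seed_data, n_classes, global_iters=5):
    """Model stealing: label the data with the black-box, train the surrogate
    on those labels, then double the dataset via targeted PGDM attacks."""
    data = list(seed_data)                        # list of (C, H, W) tensors
    for _ in range(global_iters):
        labels = torch.tensor([f_top1(x) for x in data])
        fit(surrogate, data, labels)              # hypothetical training routine
        targets = torch.randint(0, n_classes, (len(data),))
        data += [pgdm_targeted(surrogate, x.unsqueeze(0), int(y)).squeeze(0)
                 for x, y in zip(data, targets)]  # dataset doubles each iteration
    return surrogate
```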
Using the described procedure, the quality of the surrogate depends among others on the dataset representing the initial seed data. In [9] the authors require the data to be roughly representative of the true domain of the input data of the black-box system. This requires that the adversary knows the input domain and that it can generate a small amount of In-Domain (ID) data samples. Whilst this is a realistic assumption for an adversary that attacks a TSR system, the authors in [20] perform an MS attack using only Out-Of-Domain (OOD) data by introducing a sampling procedure from large (unlabeled) general image datasets. We experiment with both ID and OOD data by using different datasets to represent the initial seed data, to also mimic the case where the adversary has close to no knowledge about the underlying input domain of the black-box system.
## IV Experiments
We perform numerous experiments to determine the feasibility of physical adversarial attacks on a TSR system based on the available system access. For each attack the starting image \(x\) is a clean image of the associated traffic sign class, e.g. a raw Stop sign.
### _Setup_
The DNN based TSR system is represented by a publicly available implementation [22] of a Spatial Transformer (ST) [23], which is the state-of-the-art on the German Traffic Sign Recognition Benchmark (GTSRB) [24]. However, in contrast to [22] we do not use data augmentation during test time and hence use only a single system instead of the ensemble. This is closer to a real TSR system, where no ensembles are used because of computational limitations. Still, the system achieves an accuracy of \(>99.9\,\%\) on the original test set.
For evaluating the success of an adversarial perturbation, we test whether the final perturbation fools the system under different environmental conditions. To this end we test the impact of each perturbation on the TSR system under \(1000\) different transformations and report the associated classification rates. Concretely, we use a combination of the following transformations (which matches the set \(T\) used during the optimization): rotation, perspective distortion, color jitter, scaling and background noise injection. We test the accuracy of the TSR system under the described transformations by generating \(1000\) images per class in GTSRB where an image, containing only a raw traffic sign of the associated class, is synthetically transformed with a random combination of the transformations. On this synthetic dataset the system has an accuracy of \(94.2\,\%\), meaning this data is more challenging, but the system also classifies this synthetic data quite well. Hence, if we later observe a drop in accuracy after a perturbation is applied, it originates from the perturbation and is not by chance.
To compare the similarity of perturbations resulting from different methods we use the Structural Similarity Index Measure (SSIM) [21], which calculates a similarity value between two images in the range \([0,1]\). Here, a value of \(0\) indicates that the two images are very different and a value of \(1\) indicates that the images are very similar or identical.
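For reference, such an SSIM value can be computed with recent versions of scikit-image; the array shapes and value range below are assumptions:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

pert_a = np.random.rand(64, 64, 3)   # placeholder perturbed traffic signs
pert_b = np.random.rand(64, 64, 3)
# channel_axis marks the RGB axis; data_range is the value range of the images.
score = ssim(pert_a, pert_b, channel_axis=2, data_range=1.0)
print(f"SSIM = {score:.3f}")         # 1 = identical, near 0 = very different
```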
For the adversarial attack we initially focus on the unlimited case, where in (1) no mask \(M\) is used and \(\lambda_{TV},\lambda_{NPS}=0\), since we first want to determine the differences in the quality of a perturbation generated with different system accesses. Later, in IV-E we also perform experiments in the limited case to generate perturbations that have a reduced perceptibility for the human visual system, but still reliably fool the TSR system.
### _White-Box Attacks_
First, we evaluate the basic attack from (1) using the white-box access to the system to compute the required true gradients. In Fig. 2 two exemplary perturbations are shown for two different attack scenarios and the associated classification rates are given. For both scenarios the attack is highly successful and fools the TSR system in nearly all of the \(1000\) transformations evaluated for each perturbed traffic sign. Hence, white-box attacks can be used to generate perturbations that can be applied in reality and fool a system under a variety of environmental conditions.
In the rest of this work we only present results for the attack scenario shown in Fig. 2(a), meaning the goal of the adversary is to generate a perturbation that fools the TSR system into predicting a Stop sign as a Speed Limit 60 sign. We also explored other attack scenarios (e.g. Fig. 2(b)) which behave similarly. However, the selected scenario is one of the hardest, since a human can easily distinguish the two traffic signs, as an octagonal shape is used exclusively for a Stop sign. Hence, a human would never be fooled by such a perturbed traffic sign, but the TSR system is very reliably fooled by the generated perturbation as demonstrated in Fig. 2(a).
### _Soft Black-Box Attacks_
Next, we constrain the access of the adversary to the soft black-box case, where it still observes the complete probability vector of the system as output (typically from a softmax layer) but is unable to compute any gradients through the system. Also, the preprocessing (normalization, etc.) used by the system is unknown. We perform the same attack as for white-box access, only now approximating the true gradient with SPSA from (4).
In Fig. 3 exemplary images of resulting perturbations are shown and in Table I the associated classification rates are compared for all methods that are used for generating an unlimited perturbation. Additionally, the similarity of each perturbation is compared to the white-box perturbation using the SSIM. For the soft black-box attack based on SPSA we evaluate how many noise samples are required for an accurate approximation of the true gradient. One can observe in Fig. 3 that an increase in \(s\) leads to a convergence to the result of the white-box attack (Fig. 2(a)) and similar main regions are perturbed. Consequently, the SSIM in comparison to the white-box perturbation increases (Table I). This behavior transfers to the associated classification rates, where it is again possible to achieve high success rates under the variety of transformations if enough noise samples are used during SPSA. Even if only a very low number is used, the resulting perturbation fools the TSR system in \(49.3\,\%\) of the cases, which is already enough to prevent a successful use of the TSR system in reality.
### _Hard Black-Box Attacks_
Fig. 2: Perturbed traffic signs generated with white-box attack and associated classification rates

As the last step towards the most limited access of an adversary we now evaluate how well the described methods perform in the hard black-box case. Hence, the TSR system now only outputs the top-1 class and no further information.
#### Iv-D1 Gradient Approximation
We first test the attack based on SPSA, where we perform the same attack as previously, but now use the substitute loss from (5). For all attacks we use \(s=2000\) and evaluate how many noise samples \(h\) are required for a useful behavior of the substitute loss. The visual results are shown in Fig. 4 and the associated classification rates and SSIMs are again given in Table I. The behavior is similar to the soft black-box case, where an increase in the noise samples leads to a perturbation that converges to the white-box perturbation. Already a small amount of noise samples is sufficient to approximate the NLL loss and result in a perturbation that fools the system highly successfully. In summary, it is even possible to generate perturbations in the hard black-box case that fool the system reliably under various environmental conditions.
#### Iv-D2 Model Stealing
As an alternative to a gradient approximation-based attack, we now evaluate a white-box attack with a preceding MS attack to generate the white-box surrogate system of the unknown hard black-box. We experiment with different architectures of the surrogate which includes the original architecture (STN), a VGG11 architecture [25] with batch normalization (VGG11BN) and a SqueezeNet architecture [26] (SN). For the last two architectures we use a version that is pre-trained on ImageNet [27] provided by [28]. Initially, we use ID seed data that is represented by the GTSRB [24] dataset, where we sample ten random images from each class to build the dataset, but do not use the assigned class labels as these are generated with the black-box system. We also test the hardest possible case of a MS attack, where the adversary has no information about the architecture of the system and the input domain. For this we use the VGG11BN architecture and use random images from the tiny-imagenet dataset [29] as seed data.
In Table II the accuracies of the trained surrogate systems are shown for the GTSRB dataset and our synthetically created dataset from IV-A. All surrogates have high accuracies, but if the architecture of the surrogate differs too much or if OOD data is used for seeding a drop in accuracy can be noted. Also, the accuracy on our synthetic dataset drops overall more than the accuracy on the GTSRB test dataset. Nevertheless, the surrogates mainly learn to classify traffic signs accurately, even when trained on OOD data, meaning the surrogate never sees a traffic sign during training.
\begin{table}
\begin{tabular}{c c c c}
\hline \hline
Architecture & Dataset & Accuracy GTSRB (\%) & Accuracy Synthetic (\%) \\
\hline
STN & ID & \(98.6\) & \(88.4\) \\
SN & ID & \(94.4\) & \(76.7\) \\
VGG11BN & ID & \(99.7\) & \(86.3\) \\
VGG11BN & OOD & \(95.2\) & \(79.9\) \\
\hline
STN Original & GTSRB & \(99.9\) & \(94.2\) \\
\hline \hline
\end{tabular}
\end{table}
TABLE II: Standard accuracy for MS
Fig. 4: Perturbed traffic signs generated with hard SPSA attack and \(s=2000\)
Fig. 3: Perturbed traffic signs generated with soft SPSA attack
Next, we evaluate whether the surrogate systems are similar enough to the black-box system, such that perturbations transfer between those systems. We use the white-box attack from (1) to generate perturbations on the surrogates and then use these perturbations to attack the black-box system. In Fig. 5 the resulting perturbations are visualized and the classification rates and SSIMs are again given in Table I. One can observe that perturbations which are more similar to the white-box perturbation (Fig. 2(a)) also have a higher success rate of the targeted attack. Nevertheless, all perturbations still achieve good transfer rates, meaning all surrogates have decision boundaries roughly similar to the original system. However, using the same architecture as the black-box system leads to the best transfer rates, but an adversary typically does not have this knowledge. Interestingly, we achieve a higher transfer rate if OOD data is used than if ID data is used, although the general accuracy of that system is lower (Table II). An explanation is that the VGG11BN OOD system has a high accuracy (close to the level of STN Original) on images of Speed Limit 60 signs in both datasets. Hence, the system is worse in general accuracy, but learned the decision boundaries of the Speed Limit 60 class rather accurately which leads to an increased transfer rate.
### _Low Perceptibility Attacks_
Our results so far show that physical adversarial attacks are possible even in the hard black-box case where the adversary has close to no knowledge of the system it attacks. However, the previous perturbations have a rather high strength and are obvious to a human observer. In the next step we generate perturbations that have a decreased perceptibility by introducing \(\lambda_{TV},\lambda_{NPS}\neq 0\). Hence, we perform the same attacks evaluated previously, but adjust the optimization to include regularization terms. For each attack method, we only evaluate its best performing configuration.
The resulting perturbations are shown in Fig. 6 and the associated classification rates are given in Table III. In addition to the optimized perturbations we also include a simple manual perturbation that is derived by inspecting the most prominent regions in the other optimized perturbations and placing black rectangles by hand.
Comparing the perturbation from the unlimited white-box attack (Fig. 2(a)) with the perturbation from the limited white-box attack (Fig. 6(a)) one observes that the perturbation is now limited to the most susceptible regions. This reduces the visibility of the perturbation noticeably, but also leads to a reduced success rate of the targeted attack. There exists a trade-off between the perceptibility and the adversarial character of the perturbation that the adversary can optimize depending on the concrete use case. A similar behavior also exists for all black-box attack methods, where the resulting perturbations show large similarities with the white-box perturbation and the classification rates behave alike. All methods consider similar regions as most important and focus the perturbation to those areas. Hence, the approximation and stealing of the black-box system is successful and can be used to generate physical perturbations for hard black-box TSR systems that fool the system in a variety of environmental conditions.
For the approximation attack (Fig. 6(b) and Fig. 6(c)) the perceptibility can be further reduced by applying a mask \(M\) to the perturbations. In that way unwanted pixel artifacts resulting from the approximation can be excluded and only the main important regions of the perturbations remain. This has very little impact on the success of the attack and is a way for an adversary to approximate the white-box results further. Alternatively, the adversary can manually (similar to [6]) generate a simple perturbation which fools the system in roughly half the cases but is easiest to deploy in reality.

Fig. 5: Perturbed traffic signs generated with MS attack

Fig. 6: Perturbed traffic signs generated with limited attack

\begin{table}
\begin{tabular}{c c c}
\hline \hline
Attack Type & Classification Rate Stop (\%) & Classification Rate 60 (\%) \\
\hline
White-Box & \(18.1\) & \(76.4\) \\
\(s=2000\) & \(31.7\) & \(65.6\) \\
\(h=50\) \& \(s=2000\) & \(37.5\) & \(61.2\) \\
STN ID & \(26.1\) & \(70.6\) \\
VGG11BN OOD & \(38.9\) & \(56.3\) \\
Manual & \(18.9\) & \(46.9\) \\
\hline
White-Box Unlimited & \(0.4\) & \(98.2\) \\
\hline \hline
\end{tabular}
\end{table}
TABLE III: Classification rates for all limited perturbations
## V Conclusion
We present results for the feasibility of physical adversarial attacks against a TSR system if the adversary has only limited access. Even in the most difficult hard black-box case, attacks are practicable and result in perturbations that reliably fool the system under different synthetic transformations, which simulate changing environmental conditions. It is further possible to reduce the perceptibility of the perturbations where a trade-off exists between the perceptibility and the adversarial character that the adversary must optimize depending on the concrete use case. The presented results highlight the need to secure systems that are deployed in reality against adversarial attacks and at the same time provide means to use defense methods like adversarial training.
|
2310.03572 | Residual Multi-Fidelity Neural Network Computing | In this work, we consider the general problem of constructing a neural
network surrogate model using multi-fidelity information. Motivated by rigorous
error and complexity estimates for ReLU neural networks, given an inexpensive
low-fidelity and an expensive high-fidelity computational model, we present a
residual multi-fidelity computational framework that formulates the correlation
between models as a residual function, a possibly non-linear mapping between 1)
the shared input space of the models together with the low-fidelity model
output and 2) the discrepancy between the two model outputs. To accomplish
this, we train two neural networks to work in concert. The first network learns
the residual function on a small set of high-fidelity and low-fidelity data.
Once trained, this network is used to generate additional synthetic
high-fidelity data, which is used in the training of a second network. This
second network, once trained, acts as our surrogate for the high-fidelity
quantity of interest. We present three numerical examples to demonstrate the
power of the proposed framework. In particular, we show that dramatic savings
in computational cost may be achieved when the output predictions are desired
to be accurate within small tolerances. | Owen Davis, Mohammad Motamed, Raul Tempone | 2023-10-05T14:43:16Z | http://arxiv.org/abs/2310.03572v2 | # Residual Multi-Fidelity Neural Network Computing
###### Abstract
In this work, we consider the general problem of constructing a neural network surrogate model using multi-fidelity information. Given an inexpensive low-fidelity and an expensive high-fidelity computational model, we present a residual multi-fidelity computational framework that formulates the correlation between models as a residual function, a possibly non-linear mapping between 1) the shared input space of the models together with the low-fidelity model output and 2) the discrepancy between the two model outputs. To accomplish this, we train two neural networks to work in concert. The first network learns the residual function on a small set of high-fidelity and low-fidelity data. Once trained, this network is used to generate additional synthetic high-fidelity data, which is used in the training of a second network. This second network, once trained, acts as our surrogate for the high-fidelity quantity of interest. We present three numerical examples to demonstrate the power of the proposed framework. In particular, we show that dramatic savings in computational cost may be achieved when the output predictions are desired to be accurate within small tolerances.
_Dedicated to the memory of Ivo Babuška_
**keywords:** surrogate modeling, multi-fidelity computing, residual modeling, deep neural networks, residual networks, parametric differential equations, uncertainty quantification
**MSC 2020:** 65C30, 65C40, 68T07
## 1 Introduction
Deep artificial neural networks are, today, widely regarded as central tools to solve a variety of artificial intelligence problems; see e.g. [39] and the references therein. The application of neural networks is, however, not limited to artificial intelligence. In recent years, deep networks have been successfully used to construct fast-to-evaluate surrogates for physical/biological quantities of interest (QoIs), described by complex systems of ordinary/partial differential equations (ODEs/PDEs); see e.g. [40; 10; 9]. This has resulted in a growing body of research on the integration of physics-based modeling with neural networks [41]. As surrogate models, deep networks have several important properties: 1) their expressive power enables them to accurately approximate a wide class of functions; see e.g. [3; 22; 37; 42; 11; 8; 19; 31]; 2) they can handle high-dimensional parametric problems involving many input parameters; see e.g. [24; 18; 27; 17; 23]; and 3) they are fast-to-evaluate surrogates, as their evaluations mainly require simple matrix-vector operations. Despite possessing such desirable properties, deep network surrogates are often very expensive to construct, especially when high levels of accuracy are desired and the underlying ODE/PDE systems are expensive to compute accurately. The construction of deep networks often requires abundant high-fidelity training data that in turn necessitates many expensive high-fidelity ODE/PDE computations.
To address this issue, we put forth a residual multi-fidelity computational framework. Given a low-fidelity and a high-fidelity computational model, we formulate the correlation between the two models in terms of a residual function, a possibly non-linear mapping between 1) the shared input space of the models together with the low-fidelity model output and 2) the discrepancy between the two model outputs. This formulation is motivated by the observation that the discrepancy between the model outputs often has small magnitude (or norm) relative to the high-fidelity quantity of interest. Indeed, learning a quantity of small size has been shown to be beneficial in [25] where the authors derive absolute network generalization error bounds directly proportional to the uniform norm of the target function and inversely proportional to the network complexity. As such, given a network error tolerance, and assuming the discrepancy between model outputs is small relative to the high-fidelity quantity, the residual framework, as opposed to learning the correlation between models directly, enables the correlation between models to be learned to the desired accuracy by a network of lower complexity, thereby requiring less expensive to acquire training data. This trained network is then used as a surrogate to efficiently generate additional high-fidelity data. Finally, the enlarged set of originally available and newly generated high-fidelity data is used to train a deep network that learns, and acts as a cheap to evaluate surrogate, for the high-fidelity QoI. We present three numerical examples to demonstrate the power of the proposed framework. In particular, we show that dramatic savings in computational cost may be achieved when the output predictions are desired to be accurate within small tolerances.
Through a combination of multi-fidelity computation and residual modeling, the proposed formulation delivers a new framework that benefits from both approaches while differing from them. Unlike current multi-fidelity approaches that search for a direct correlation between models at different levels of fidelity (see e.g. [2; 28; 29; 30]), we consider a (possibly nonlinear) relation between the low-fidelity model and the residual of the two models. Furthermore,
unlike most residual modeling approaches, where the residual is often defined as the difference of observed and simulated data (see e.g. [1]), we define the residual as the discrepancy between high-fidelity and low-fidelity data. Our framework is also different from recent reduced-order modeling approaches (see e.g. [38]) that utilize residuals to measure the discrepancy between full-order and reduced-order models. While in these approaches residuals account for model reduction due to modal decomposition and truncation, i.e. residuals gauge the discarded modes in the original full-order model, we define residuals as the difference between the target output QoIs at different levels of fidelity. A similar residual modeling strategy to our framework can be found in residual networks (ResNets) [21], where a residual function is defined as the difference between the output and input of a block of neural layers. Our framework adds a systematic inclusion of multi-fidelity modeling to ResNets, delivering a physics-informed residual network; see a more detailed discussion on ResNets in Section 3.2.
An important application of the proposed surrogate modeling is in problems where we need accurate evaluations of a target function at many different points in the function's input domain. Such a situation arises, for example, in forward and inverse uncertainty quantification (UQ), where we require many realizations of a target random quantity. In such settings, the proposed construction can be adapted to available sampling-based techniques to further speed up computations. In forward problems, the proposed surrogate can be combined with Monte Carlo (MC) and collocation methods (see e.g. [6, 33, 32, 20]) to compute the statistics of a target random quantity; see the numerical examples in Section 5. In inverse problems, the proposed surrogate may be used to compute the likelihood of observing a target quantity in a Markov chain Monte Carlo sampling algorithm.
It is also to be noted that the framework presented here is not limited to bi-fidelity modeling. The framework extends naturally to multi-fidelity modeling problems where the ensemble of lower-fidelity models are strictly hierarchical in terms of their predictive utility per cost. To accomplish this extension a sequence of networks learns a sequence of residuals that move up the model hierarchy.
The rest of the paper is organized as follows. In Section 2, we state the mathematical formulation of the problem. We present the residual multi-fidelity neural network algorithm in Section 3 and briefly discuss its accuracy and efficiency. In Section 4 we provide a theoretical rationale for the developed residual multi-fidelity framework. In Section 5, we present three numerical examples that demonstrate the power of the proposed framework over current state of the art neural network based surrogate modeling strategies. Finally, in Section 6, we summarize conclusions and outline future works.
## 2 Problem formulation and background
### Problem statement
Let \(\mathbf{u}:\ (\mathbf{x},\mathbf{\theta})\in X\times\Theta\subset\mathbb{R}^{n}\times\mathbb{R} ^{d}\mapsto\mathbf{u}(\mathbf{x};\mathbf{\theta})\in\mathbb{R}^{m}\) be the solution to a (computationally involved) system of parametric ODEs/PDEs, parameterized by \(\mathbf{\theta}\in\Theta\subset\mathbb{R}^{d}\). Further, let \(Q:\Theta\rightarrow\mathbb{R}\) be a target function defined as a functional or operator, \(M_{Q}\), acting on \(\mathbf{u}\),
\[Q:\mathbf{\theta}\in\Theta\subset\mathbb{R}^{d}\mapsto Q(\mathbf{\theta})\in\mathbb{R},\qquad Q(\mathbf{\theta})=M_{Q}(\mathbf{u}(\mathbf{x};\mathbf{\theta})).\]
For example, the target function \(Q\) may be given by the ODE/PDE solution energy, where \(M_{Q}(\mathbf{u}(\mathbf{x};\mathbf{\theta}))=\int_{X}|\mathbf{u}(\mathbf{x};\mathbf{\theta})|^{2}d\mathbf{x}\). In general, since the target quantity \(Q\) is available to us through a set of ODEs/PDEs, we may assume that it belongs to some Sobolev space.
Suppose that we need many accurate evaluations of \(Q(\mathbf{\theta})\) at many distinct points \(\mathbf{\theta}\in\Theta\). For example, in the field of UQ, where the parameters are random or fuzzy variables, we often need many accurate realizations of target QoIs. In principle, direct evaluations/realizations of \(Q\) will require a very large number of expensive high-fidelity ODE/PDE computations that may be computationally prohibitive. Therefore, our goal is to construct a fast-to-evaluate surrogate of the target function \(Q\), say \(\tilde{Q}\), using only a small number of expensive simulations of high-fidelity ODE/PDE models, and satisfying the accuracy constraint
\[\varepsilon:=||Q-\tilde{Q}||_{L^{p}_{\pi}(\Theta)}=\left(\int_{\Theta}|Q(\mathbf{ \theta})-\tilde{Q}(\mathbf{\theta})|^{p}\,d\pi(\mathbf{\theta})\right)^{1/p}\leq \varepsilon_{\text{TOL}}. \tag{1}\]
Here, we measure the approximation error in the \(L^{p}\)-norm, with \(p\in[1,\infty)\) and weighted with a probability measure \(d\pi\) on \(\Theta\), and \(\varepsilon_{\text{TOL}}\in(0,1/2)\) is a desired small tolerance.
As we will show in Section 3, we will achieve this goal by introducing a residual multi-fidelity framework that exploits the approximation power of deep networks and the efficiency of approximating quantities of small size. The new framework has two major components: i) multi-fidelity modeling; and ii) neural network approximation. We will briefly review these components in the remainder of this section.
### Multi-fidelity modeling
The basic idea of multi-fidelity modeling is to leverage models at different levels of fidelity in order to achieve a desired accuracy with minimal computational cost. Let \(Q_{LF}(\mathbf{\theta})\) and \(Q_{HF}(\mathbf{\theta})\) be two approximations of the target quantity \(Q(\mathbf{\theta})\), obtained by a low-fidelity and a high-fidelity computational model, satisfying the following two assumptions:
* \(Q_{HF}\) is an expensive-to-compute approximation of \(Q\), satisfying (1);
* \(Q_{LF}\) is a cheaper and less accurate approximation of \(Q\) that is correlated with \(Q_{HF}\).
The high-fidelity model is often obtained by a direct and fine discretization of the underlying ODE/PDE model. There are, however, several possibilities to build the low-fidelity model. For example, it may be obtained by directly solving the original ODE/PDE problem using either a coarse discretization or a low-rank approximation. Another strategy is to solve an auxiliary problem obtained by simplifying the original problem. For instance, we may consider a model with simpler physics, or an effective model obtained by homogenization, or a simpler model obtained by smoothing out the rough parameters of \(M_{Q}\) or through linearization or modal decomposition and truncation.
Without loss of generality we build our low-fidelity and high-fidelity models using, respectively, a coarse and a fine discretization of the underlying system of differential equations,
\[Q_{LF}(\mathbf{\theta}):=M_{Q}(\mathbf{u}_{h_{LF}}(\mathbf{x};\mathbf{\theta})),\qquad Q_{HF}( \mathbf{\theta})=M_{Q}(\mathbf{u}_{h_{HF}}(\mathbf{x};\mathbf{\theta})),\qquad h_{HF}<h_{LF}. \tag{2}\]
Here, \(h_{LF}\) and \(h_{HF}\) denote the mesh size and/or the time step of a stable discretization scheme used at the low fidelity and high-fidelity levels, respectively. We further assume that the two models satisfy
\[|Q(\boldsymbol{\theta})-Q_{LF}(\boldsymbol{\theta})|\leq c(\boldsymbol{\theta}) \,h_{LF}^{q},\qquad|Q(\boldsymbol{\theta})-Q_{HF}(\boldsymbol{\theta})|\leq c( \boldsymbol{\theta})\,h_{HF}^{q},\qquad\forall\boldsymbol{\theta}\in\Theta, \tag{3}\]
where \(q>0\) is related to the order of accuracy of the discretization scheme, and \(c=c(\boldsymbol{\theta})>0\) is a bounded function. This upper error bound implies that assumptions **A1**-**A2** hold with proper choices of \(h_{LF}\) and \(h_{HF}\), e.g. with \(h_{HF}\propto\varepsilon_{\text{TOL}}^{1/q}\) and \(h_{LF}\) small enough for \(Q_{LF}\) to be correlated with \(Q_{HF}\).
It is to be noted that in general the selection of low-fidelity and high-fidelity models is problem dependent. We consider a set of multi-fidelity models admissible as long as assumptions **A1**-**A2** are satisfied, but we note that satisfying **A1**-**A2** is not sufficient to guarantee that the chosen multi-fidelity models are optimal in terms of computational efficiency for a desired accuracy.
The pivotal step in multi-fidelity computation is to design a procedure that captures and utilizes the correlation between models at different levels of fidelity. One of the oldest and most established approaches uses a Bayesian auto-regressive Gaussian process (GP) [26, 15]. This method works well for sparse high fidelity data, but is often computationally prohibitive for high dimensional problems. It is in this high dimensional regime that neural network based multi-fidelity surrogate modeling promises to make an impact. A widely used neural network based approach, known as _comprehensive correction_, assumes a linear correlation between models, writing \(Q_{HF}(\boldsymbol{\theta})=\rho(\boldsymbol{\theta})Q_{LF}(\boldsymbol{ \theta})+\delta(\boldsymbol{\theta})\), where \(\rho\) and \(\delta\) are the unknown multiplicative and additive corrections, respectively; see e.g. [13]. The main limitation of this strategy is its inability to capture a possibly nonlinear relation between the two models. In fact, in parts of the domain \(\Theta\) where the two models do not exhibit linear correlation, e.g. due to the deterioration of the low-fidelity data and hence their deviation from high-fidelity data over time in time-dependent problems, a linear formulation would ignore the low-fidelity model (that may still be very informative) and put all the weight on the high-fidelity data; see also Figure 1 below. In order to address this limitation, more general approaches have been proposed. In [35] the authors replace the linear multiplicative relation by a nonlinear one and consider \(Q_{HF}(\boldsymbol{\theta})=F(Q_{LF}(\boldsymbol{\theta}))+\delta(\boldsymbol{ \theta})\), where \(F\) is a (possibly nonlinear) unknown function. Other related works that also employ neural networks to construct multi-fidelity regression models include [29, 30]. In [29] a combination of linear and nonlinear relations between models, \(Q_{HF}(\boldsymbol{\theta})=F_{L}(\boldsymbol{\theta},Q_{LF}(\boldsymbol{ \theta}))+F_{NL}(\boldsymbol{\theta},Q_{LF}(\boldsymbol{\theta}))\) is used, and [30] utilizes a general correlation between the two models, \(Q_{HF}(\boldsymbol{\theta})=F(\boldsymbol{\theta},Q_{LF}(\boldsymbol{\theta}))\), with \(F\) being a general nonlinear unknown function. These formulations however are not capable of exploiting the full potential of multi-fidelity modeling to gain computational efficiency.
Motivated by residual modeling, instead of searching for a direct correlation between models, we reformulate the correlation in terms of a (possibly nonlinear) residual function that measures the discrepancy between the models; see Section 3.1. We will show how and why such a formulation can achieve higher levels of efficiency.
### Neural network approximation
Deep neural network surrogates are often very expensive to construct, especially when high levels of accuracy are desired and the underlying ODE/PDE systems, that generate high-fidelity training data, are expensive to compute accurately. Although ResNets [21] alleviate this issue to some extent by addressing the _degradation_ problem (as depth increases, accuracy degrades), this work takes a step further by leveraging a combination of deep networks (possibly ResNets) and cheap low-fidelity computations to efficiently build deep high-fidelity networks. We will specifically focus on ReLU feedforward networks, possibly augmented with a ResNet style architecture (shortcut connections), as defined below.
**ReLU networks.** Following the setup used in [36, 19], we define a feedforward network with \(N_{0}=d\) inputs and \(L\) layers, each with \(N_{\ell}\) neurons, \(\ell=1,\ldots,L\), by a sequence of matrix-vector (or weight-bias) tuples
\[\Phi:=\{(W_{1},b_{1}),\ldots,(W_{L},b_{L})\},\qquad W_{\ell}\in\mathbb{R}^{N_{ \ell}\times N_{\ell-1}},\qquad b_{\ell}\in\mathbb{R}^{N_{\ell}},\qquad\ell=1, \ldots,L.\]
We denote by \(f_{\Phi}\) the function that the network \(\Phi\) realizes and write,
\[f_{\Phi}(\mathbf{\theta})=\mathbf{x}_{L},\qquad f_{\Phi}:\mathbb{R}^{d}\to \mathbb{R}^{N_{L}},\]
where \(\mathbf{x}_{L}\) results from the following scheme:
\[\mathbf{x}_{0}=\mathbf{\theta},\qquad\mathbf{x}_{\ell}=\sigma(W_{\ell}\mathbf{x}_ {\ell-1}+b_{\ell}),\ \ \ell=1,\ldots,L-1,\qquad\mathbf{x}_{L}=W_{L}\mathbf{x}_{L-1}+b_{L}.\]
Here, \(\sigma:\mathbb{R}\to\mathbb{R}\) is the ReLU activation function \(\sigma(x)=\max(0,x)\) that acts component-wise on all hidden layers, \(\ell=1,\ldots,L-1\). To make this ReLU network a ReLU ResNet we simply add shortcut connections to this structure which pass along the identity map.
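For concreteness, the following minimal PyTorch sketch illustrates this construction; it is our own illustration, not the implementation used in the experiments. The list `widths` holds \([N_{0},\ldots,N_{L}]\), and the optional `shortcut` flag adds identity connections around every two hidden layers, turning the plain ReLU network into a ReLU ResNet (Section 5.1 uses shortcuts every two layers).

```python
import torch
import torch.nn as nn

class ReLUNet(nn.Module):
    """Feedforward ReLU network Phi = {(W_1, b_1), ..., (W_L, b_L)}.
    With shortcut=True, identity connections are added around every
    two hidden layers, giving a ReLU ResNet."""
    def __init__(self, widths, shortcut=False):
        super().__init__()  # widths = [N_0, ..., N_L]; N_0 = d inputs
        self.layers = nn.ModuleList(
            nn.Linear(widths[i], widths[i + 1]) for i in range(len(widths) - 1))
        self.shortcut = shortcut

    def forward(self, theta):
        x, skip = theta, None
        for ell, layer in enumerate(self.layers[:-1]):
            if self.shortcut and ell % 2 == 0:
                skip = x                      # input of the current 2-layer block
            x = torch.relu(layer(x))          # x_ell = sigma(W_ell x_{ell-1} + b_ell)
            if self.shortcut and ell % 2 == 1 and skip.shape == x.shape:
                x = x + skip                  # identity shortcut around the block
        return self.layers[-1](x)             # x_L = W_L x_{L-1} + b_L (no ReLU)

# e.g. the (K, L) = (7, 7) architecture of Section 5.1 with d = 4 inputs
net = ReLUNet([4] + [7] * 6 + [1], shortcut=True)
```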
**Approximation power of ReLU networks.** ReLU networks are known to be powerful and flexible approximation models. They can approximate smooth functions with accuracy comparable to other state-of-the-art methods, and they are simultaneously capable of well approximating functions that are not smooth in any classical sense; see e.g. [31]. Because of this, we use them to formulate and test our new residual multi-fidelity method. It is to be noted that, to the best of the authors' knowledge, there is no theoretical estimate that bounds ReLU network approximation error explicitly in terms of the uniform norm of the target function and the network architecture (depth and width). Such a result is, however, available for Fourier feature ResNets in [25], providing theoretical motivation for our residual multi-fidelity framework. Obtaining similar theoretical results for ReLU networks is the subject of our current work and will be presented elsewhere.
## 3 Residual multi-fidelity neural network algorithm
In this section we present a residual multi-fidelity neural network (RMFNN) algorithm, consisting of two major parts: 1) formulation of a new residual multi-fidelity framework, and 2) construction of a pair of neural networks that work in concert within the new framework. We will discuss each part in turn.
### Nonlinear residual multi-fidelity modeling
Combining residual analysis and multi-fidelity modeling, we reformulate the correlation between two models of different fidelity as a non-linear residual function and write
\[Q_{HF}(\boldsymbol{\theta})-Q_{LF}(\boldsymbol{\theta})=F(\boldsymbol{\theta},Q_ {LF}(\boldsymbol{\theta})),\qquad\boldsymbol{\theta}\in\Theta, \tag{4}\]
where \(F\) is an unknown nonlinear function. As we will discuss below, this formulation enables the efficient construction of a fast-to-evaluate high-fidelity surrogate by a pair of neural networks that work in concert. In this framework, the first network, which learns the residual function \(F\) is leveraged to train a second network that learns the high-fidelity target quantity \(Q_{HF}\). The new formulation (4) is motivated by the following two observations.
**I. Nonlinearity.** In many ODE/PDE problems, low-fidelity solutions lose their linear correlation with the high-fidelity solution on parts of the parameter domain. For example, in wave propagation problems, the low-fidelity solution may deteriorate as time increases due to dissipative and dispersive errors. The low-fidelity solution may also deteriorate at low frequencies if it is obtained by the truncation of an asymptotic expansion that is valid at high frequencies. As an illustrative example, consider a simple forced oscillator with damping given by the ODE system,
\[\dot{\mathbf{u}}(t)=A\,\mathbf{u}(t)+\cos(\theta\,t)\,\mathbf{b},\qquad \mathbf{u}=\begin{bmatrix}u_{1}\\ u_{2}\end{bmatrix},\qquad A=\begin{bmatrix}0&1\\ -3&-3\end{bmatrix},\qquad\mathbf{b}=\begin{bmatrix}0\\ 0.6\end{bmatrix}.\]
Here, \(\theta\in\Theta=[10,50]\) is a frequency parameter. Let \(Q(\theta)=u_{2}(T;\theta)^{2}\) be the desired quantity of interest, given by the square of the second component of ODE's solution at a terminal time \(T=1\). We use Forward Euler with a very small time step (\(\Delta t=10^{-6}\)) to obtain a high-fidelity solution, \(Q_{HF}(\theta)\). For the low-fidelity model, we employ an asymptotic method (see [7]), and noting that \(\cos(\theta\,t)=(e^{i\,\theta\,t}+e^{-i\,\theta\,t})/2\), we consider the ansatz
\[\mathbf{u}(t)=\mathbf{u}_{0}(t)+\sum_{r=1}^{\infty}\frac{1}{\theta^{r}}\, \mathbf{u}_{r}(t;\theta),\qquad\mathbf{u}_{r}(t;\theta)=\mathbf{u}_{r,-1}(t) \,e^{-i\,\theta\,t}+\mathbf{u}_{r,1}(t)\,e^{i\,\theta\,t}.\]
Inserting this ansatz into the ODE system and matching the terms with the same \(\theta^{-r}\) and \(e^{\pm i\,\theta\,t}\) coefficients, we truncate the infinite sum and keep the first term with \(r=1\). This yields the low-fidelity solution
\[\mathbf{u}(t)=\mathbf{u}_{LF}(t)+\mathcal{O}(\theta^{-2}),\qquad\mathbf{u}_{ LF}(t)=e^{tA}\,\mathbf{u}(0)+\frac{1}{\theta}\,\sin(\theta\,t)\,\mathbf{b}.\]
The high-fidelity solution, which is obtained by a direct and fine discretization of the ODE system, is accurate but costly. The low-fidelity solution, having an explicit closed form, is cheap to compute, but it deteriorates when the frequency is low. Figure 1 (left) displays the deviation of \(Q_{LF}\) from \(Q_{HF}\) as the frequency decreases. In Figure 1 (middle) we observe that although the two models exhibit a rather linear relation for larger frequencies, at lower frequencies (\(\theta<30\)) their relation is nonlinear. This shows that for this system a purely linear auto-regressive approach would ignore \(Q_{LF}\) (that is still informative) in a large portion
of the parameter domain \(\Theta\), instead putting all the weight on \(Q_{HF}\). This would in turn increase the amount of (expensive) high-fidelity computations needed to build the surrogate model. On the contrary, the nonlinear formulation (4) has the capability of capturing more complex correlation structures, fully exploiting \(Q_{LF}\).
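The correlation structure behind Figure 1 is easy to reproduce. The sketch below (our own illustration, not the authors' code) evaluates \(Q_{LF}\) from the closed form above and \(Q_{HF}\) by forward Euler; the initial condition \(\mathbf{u}(0)\) is not specified in the text, so the value used here is a hypothetical placeholder, and the demo time step is coarser than the \(\Delta t=10^{-6}\) quoted above to keep the run fast.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-3.0, -3.0]])
b = np.array([0.0, 0.6])
u0 = np.array([1.0, 0.0])   # u(0) is not given in the text: hypothetical choice
T = 1.0

def Q_LF(theta):
    """Closed-form low-fidelity model: u_LF(T) = e^{TA} u(0) + sin(theta T)/theta b."""
    u = expm(T * A) @ u0 + np.sin(theta * T) / theta * b
    return u[1] ** 2                          # Q = u_2(T)^2

def Q_HF(theta, dt=1e-4):
    """Forward-Euler high-fidelity model (the text uses dt = 1e-6;
    dt = 1e-4 keeps this demo fast)."""
    u, t = u0.copy(), 0.0
    for _ in range(int(round(T / dt))):
        u = u + dt * (A @ u + np.cos(theta * t) * b)
        t += dt
    return u[1] ** 2

for theta in (10.0, 30.0, 50.0):
    print(theta, Q_LF(theta), Q_HF(theta))    # deviation grows as theta decreases
```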
**II. Small residual magnitude.** It is important to note that we do not formulate the non-linearity discussed above as a direct relation between the two models, that is, we do not write \(Q_{HF}(\boldsymbol{\theta})=F(\boldsymbol{\theta},Q_{LF}(\boldsymbol{\theta}))\). Instead, we formulate the non-linearity in terms of the residual (4). This is motivated by the fact that the residual is expected to have a small magnitude (or norm) relative to that of the high-fidelity quantity \(Q_{HF}\); see Figure 1 (right). As an additional example, consider a low-fidelity model and a high-fidelity model obtained by a coarse and a fine discretization of some ODE/PDE problem, satisfying (3). Further, let \(h_{LF}=s\,h_{HF}\) where \(1<s<\infty\). Then, using the triangle inequality, we can write
\[|F|=|Q_{HF}-Q_{LF}|\leq|Q-Q_{HF}|+|Q-Q_{LF}|\leq c(h_{HF}^{q}+h_{LF}^{q})\leq c \,(1+s^{q})h_{HF}^{q}\leq(1+s^{q})\varepsilon_{\mathrm{TOL}}.\]
The last inequality follows by the choice \(c\,h_{HF}^{q}\leq\varepsilon_{\mathrm{TOL}}\), which implies \(\varepsilon:=|Q-Q_{HF}|\leq\varepsilon_{\mathrm{TOL}}\), as desired. Hence the size of the residual \(|F|\) is proportional to the small quantity \(\varepsilon_{\mathrm{TOL}}\).
We conjecture that the small magnitude of the residual function, compared to the size of \(Q_{HF}\), enables its accurate approximation by a ReLU network with a complexity lower than that of a network that accurately approximates a function of size \(|Q_{HF}|\). This conjecture is motivated by two recent theoretical results. In [25] the authors consider Fourier feature ResNets, and they derive bounds on network generalization error directly proportional to the uniform norm of the target function and inversely proportional to the network degree of freedom. We hypothesize that a similar result holds for ReLU networks, and we support this hypothesis with numerical examples in Section 5. Moreover, in [31], the author relates the size of a Sobolev target function to ReLU network complexity. See section 4 for more details. Overall, the major advantage of our residual formulation, over learning the correlation between different fidelity models directly, is that the quantity being learned generally has small relative magnitude (or norm). Such a small target quantity can be learned by a network with low degree of freedom, which in turn requires less expensive high-fidelity training data.
Figure 1: Motivation behind the new residual multi-fidelity formulation. Left: deviation of the low-fidelity solution \(Q_{LF}\) from the high-fidelity solution \(Q_{HF}\), versus a frequency parameter \(\theta\in\Theta=[10,50]\). Middle: high-fidelity solution is not a linear function of the low-fidelity solution on the whole parameter space. Right: residual function \(F\) has a small magnitude.
In summary, the residual multi-fidelity formulation enables the construction of an efficient pair of networks where the first network, which learns the residual function \(F\) is leveraged to train a second network that learns the target quantity \(Q_{HF}\); see Section 3.2 for details.
### Composite network training
The second major component of the RMFNN algorithm consists of constructing a pair of networks that work together. This construction proceeds in three steps.
**Step 1. Data generation.** We generate a set of \(N=N_{I}+N_{II}\) points, \(\boldsymbol{\theta}\in\Theta\), collected in two disjoint sets:
\[\Theta_{I}:=\{\boldsymbol{\theta}^{(1)},\ldots,\boldsymbol{\theta}^{(N_{I})} \}\subset\Theta,\qquad\Theta_{II}:=\{\boldsymbol{\theta}^{(N_{I}+1)},\ldots, \boldsymbol{\theta}^{(N)}\}\subset\Theta,\qquad\Theta_{I}\cap\Theta_{II}=\emptyset.\]
The points in each set may be selected deterministically or randomly. For example, the two sets may consist of two uniform or non-uniform disjoint grids over \(\Theta\), or we may consider a random selection of points drawn from a probability distribution, such as a uniform distribution or other optimally chosen distribution. We then proceed with the following computations.
* For each \(\boldsymbol{\theta}^{(i)}\in\Theta_{I}\cup\Theta_{II}\), with \(i=1,\ldots,N\), we compute the low-fidelity realizations, \[Q_{LF}^{(i)}:=Q_{LF}(\boldsymbol{\theta}^{(i)})=M_{Q}(\boldsymbol{u}_{h_{LF}} (\boldsymbol{x};\boldsymbol{\theta}^{(i)})),\qquad i=1,\ldots,N.\]
* For each \(\boldsymbol{\theta}^{(i)}\in\Theta_{I}\), with \(i=1,\ldots,N_{I}\), we compute the high-fidelity realizations, \[Q_{HF}^{(i)}:=Q_{HF}(\boldsymbol{\theta}^{(i)})=M_{Q}(\boldsymbol{u}_{h_{HF}} (\boldsymbol{x};\boldsymbol{\theta}^{(i)})),\qquad i=1,\ldots,N_{I}.\] (5)
**Step 2. Composite network training.** We train two separate networks as follows:
* Using \(N_{I}\) input data, \(\{(\boldsymbol{\theta}^{(i)},Q_{LF}^{(i)})\}_{i=1}^{N_{I}}\) and \(N_{I}\) output data, \(\{Q_{HF}^{(i)}-Q_{LF}^{(i)}\}_{i=1}^{N_{I}}\), we train an initial network (\(ResNN\)) to learn the residual function \(F\) in (4). We call this surrogate \(F_{ResNN}\); see Figure 2 (top).
* We then use the surrogate \(F_{ResNN}\) and the rest of the \(N_{II}\) low-fidelity data points \(\{(\boldsymbol{\theta}^{(i)},Q_{LF}^{(i)})\}_{i=N_{I}+1}^{N}\) to generate \(N_{II}\) new approximate high-fidelity outputs, denoted by \(\{\hat{Q}_{HF}^{(i)}\}_{i=N_{I}+1}^{N}\), where \[\hat{Q}_{HF}^{(i)}:=Q_{LF}^{(i)}+F_{ResNN}(\boldsymbol{\theta}^{(i)},Q_{LF}^{ (i)}),\qquad i=N_{I}+1,\ldots,N.\] (6)
* Using all \(N\) input-output high-fidelity data \(\{(\boldsymbol{\theta}^{(i)},Q_{HF}^{(i)})\}_{i=1}^{N_{I}}\cup\{(\boldsymbol{ \theta}^{(i)},\hat{Q}_{HF}^{(i)})\}_{i=N_{I}+1}^{N}\), we train a deep network (\(DNN\)) as a surrogate for the high-fidelity quantity \(Q_{HF}\). We call this surrogate \(Q_{DNN}\); see Figure 2 (bottom).
**Step 3. Prediction.** Given any \(\mathbf{\theta}\in\Theta\), an approximation \(\tilde{Q}(\mathbf{\theta})\) of the target quantity \(Q(\mathbf{\theta})\) can be obtained by evaluating the trained deep network \(DNN\):
\[\tilde{Q}(\mathbf{\theta})=Q_{DNN}(\mathbf{\theta}),\qquad\mathbf{\theta}\in\Theta. \tag{7}\]
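The three steps translate directly into code. The sketch below is our own schematic, not the authors' released implementation: the network sizes follow the ODE example of Section 5.2 (\(ResNN\): \(2\times 10\) neurons, \(DNN\): \(4\times 20\) neurons, \(N_{I}=25\), \(N=241\)), the training hyperparameters are illustrative placeholders, and the random tensors stand in for data that would in practice come from the low- and high-fidelity solvers.

```python
import torch
import torch.nn as nn

def mlp(n_in, width, hidden):
    """ReLU MLP with `hidden` hidden layers of size `width` and scalar output."""
    layers = [nn.Linear(n_in, width), nn.ReLU()]
    for _ in range(hidden - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    return nn.Sequential(*layers, nn.Linear(width, 1))

def train(net, x, y, epochs=2000, lr=1e-3):
    """Generic Adam/MSE loop; epochs and lr are illustrative placeholders."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(x), y)
        loss.backward()
        opt.step()
    return net

# Step 1 -- placeholder data; in practice these come from the LF/HF solvers.
d, N_I, N_II = 1, 25, 216
theta_I, theta_II = torch.rand(N_I, d), torch.rand(N_II, d)
q_lf_I, q_hf_I, q_lf_II = torch.rand(N_I, 1), torch.rand(N_I, 1), torch.rand(N_II, 1)

# Step 2a: ResNN learns the residual F(theta, Q_LF) = Q_HF - Q_LF.
res_nn = train(mlp(d + 1, 10, 2),
               torch.cat([theta_I, q_lf_I], dim=1), q_hf_I - q_lf_I)

# Step 2b: generate N_II approximate high-fidelity outputs from LF data alone.
with torch.no_grad():
    q_hf_II_hat = q_lf_II + res_nn(torch.cat([theta_II, q_lf_II], dim=1))

# Step 2c: DNN learns Q_HF from all N = N_I + N_II high-fidelity data.
dnn = train(mlp(d, 20, 4), torch.cat([theta_I, theta_II]),
            torch.cat([q_hf_I, q_hf_II_hat]))

# Step 3: prediction is a single cheap forward pass, Q_tilde(theta) = dnn(theta).
```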
**On performance of RMFNN algorithm.** It is important to note that the two networks learn two essentially different quantities: \(ResNN\) learns the residual function \(F\), while \(DNN\) learns the target high-fidelity quantity \(Q_{HF}\). On the one hand, since \(F\) has small norm (as motivated in Section 3.1), it can be realized by a network of relatively low degree of freedom while keeping the approximation error small. On the other hand, \(Q_{HF}\) likely has a larger norm than that of the residual function, and therefore it may need to be realized by a deeper network to achieve the desired accuracy. Importantly, this architectural difference between the two networks pairs well with the training data availability conditions present in most multi-fidelity modeling problems. Generally, the number \(N_{I}\) of available high-fidelity data is small compared to the number \(N\) of available low-fidelity data, i.e. \(N_{I}\ll N\). Since \(ResNN\) learns a quantity that is generally small in magnitude relative to the quantity learned by \(DNN\), it can do so with a network of smaller degree of freedom, and as such needs less training data. Specifically, \(ResNN\) uses the small high-fidelity data set numbering \(N_{I}\), while \(DNN\) uses a much larger data set numbering \(N\). This is an important advantage of the RMFNN algorithm over current composite algorithms [29, 30], where the networks in their composite settings learn similar quantities in terms of magnitude and hence have similar architectures that need a comparable number of training data (\(N_{I}\sim N\)).
**An alternative approach.** We can take a slightly different approach as follows. The first step is the same as in Step 1 above, where we generate low-fidelity and high-fidelity data.
Figure 2: Schematic representation of the RMFNN algorithm, given a set of \(N\) low-fidelity and \(N_{I}\ll N\) high-fidelity data. Top: an initial network (\(ResNN\)) is trained by \(N_{I}\) low-fidelity and high-fidelity data to learn the residual function \(F\). The trained \(ResNN\) is used along with the rest of \(N_{II}=N-N_{I}\) low-fidelity data to generate a new set of \(N_{II}\) high-fidelity data. Bottom: a deep network (\(DNN\)) is trained by all \(N\) high-fidelity data as a surrogate for the high-fidelity target quantity \(Q_{HF}\).
Next, in Step 2, we construct a composite network as follows:
* Using \(N\) input-output data \(\{(\mathbf{\theta}^{(i)},Q_{LF}^{(i)})\}_{i=1}^{N}\), we train a deep network (\(DNN\)) as a surrogate for \(Q_{LF}\).
* Using \(N_{I}\) input data \(\{(\mathbf{\theta}^{(i)},Q_{LF}^{(i)})\}_{i=1}^{N_{I}}\) and \(N_{I}\) output data \(\{Q_{HF}^{(i)}-Q_{LF}^{(i)}\}_{i=1}^{N_{I}}\), we train a second network (\(ResNN\)) as a surrogate for the residual function \(F\) in (4). Note that this part is the same as the first part of step 2 in the original approach, but with a change in the order of implementation.
* We build a composite network by combining \(DNN\) and \(ResNN\) as shown in Figure 3.
For prediction in Step 3, given any \(\mathbf{\theta}\in\Theta\), we compute an approximation \(\tilde{Q}(\mathbf{\theta})\) of the target quantity \(Q(\mathbf{\theta})\) by adding the low-fidelity quantity (predicted by \(DNN\)) to the residual (predicted by \(ResNN\)); see Figure 3.
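For the prediction step of this alternative, a short sketch of the composite evaluation is given below (our own illustration); `dnn_lf` and `res_nn` are assumed to be the trained \(DNN\) and \(ResNN\) surrogates from Step 2.

```python
import torch

def predict_composite(theta, dnn_lf, res_nn):
    """Alternative RMFNN prediction: Q_tilde(theta) = Q_LF(theta) + F(theta, Q_LF).
    Here dnn_lf is the trained LF surrogate; it could equally be replaced by a
    direct (cheap) low-fidelity solve."""
    with torch.no_grad():
        q_lf = dnn_lf(theta)                                   # predict Q_LF
        return q_lf + res_nn(torch.cat([theta, q_lf], dim=1))  # add residual F
```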
**A comparison between the two proposed approaches.** Both approaches utilize the same residual multi-fidelity formulation (4), but they differ in the way their constituent networks (\(ResNN\) and \(DNN\)) work. In the first approach, \(ResNN\) and \(DNN\) work at two separate stages: first, \(ResNN\) learns the residual and generates more high-fidelity data; next, \(DNN\) uses the initially available and newly generated high-fidelity data to learn the target quantity. In this case, \(DNN\) alone serves as the surrogate network to be evaluated many times. In the alternative approach, \(ResNN\) and \(DNN\) constitute two blocks in a composite surrogate network, and hence both will be evaluated many times. In addition, since in both approaches \(ResNN\) learns the same residual, and since \(DNN\) learns either the low- or high-fidelity quantities, which are not expected to be very different in magnitude, we expect the architectures of \(ResNN\) and \(DNN\) to be comparable across both approaches. This gives each approach distinct benefits. When we need many evaluations of the target quantity, the alternative approach may be more expensive, because it requires both \(DNN\) and \(ResNN\) to be evaluated, compared to only one evaluation of \(DNN\) in the first approach. The alternative approach, however, has the flexibility of replacing \(DNN\) by a direct computation of \(Q_{LF}\), for instance when a direct computation of \(Q_{LF}\) is more economical compared to the cost of training and evaluating a surrogate network.
Figure 3: An alternative RMFNN approach. A deep network (\(DNN\)) is trained using \(N\) low-fidelity data to learn \(Q_{LF}\). A second network (\(ResNN\)) is trained by \(N_{I}\ll N\) low-fidelity and high-fidelity data as a surrogate for the residual \(F\). Finally, the target quantity \(Q_{HF}\) is computed by adding \(Q_{LF}\) to \(F\).
**Comparison with residual networks.** ResNets [21] were originally developed to address the _degradation problem_, a phenomenon where network training accuracy eventually decreases as depth increases. ResNets address this problem by introducing shortcut connections that pass along identity maps between layers. In this way, as the network depth increases and accuracy becomes saturated, additional layers need only learn a residual that tends to zero, as opposed to the full identity map plus some small, possibly nonlinear, correction. Understood in the context of the recent theoretical results in [25], ResNets take advantage of learning a residual that is small (relative to the identity map) between each layer. It is important to note that the size of the residual, as formulated in ResNet, is only guaranteed to be small relative to the identity map when the network accuracy is close to saturated and the identity map is near optimal. This situation arises when considering the impact of adding additional layers to a deep neural network that already approximates the quantity of interest to a reasonable accuracy (the conditions of the _degradation_ problem). In the context of multi-fidelity modeling, where the correlation between low-fidelity and high-fidelity models needs to be learned accurately on sparse training data, it is unreasonable to expect the residual between layers (as formulated in ResNet) to be small compared to the identity map. Therefore, in order to take advantage of the previously mentioned theoretical results, we need to explicitly enforce learning a small quantity. This is the exact insight behind our proposed RMFNN algorithm: it formulates the correlational learning problem as learning a residual function \(F\) between the high-fidelity and low-fidelity models, a quantity that is generally small relative to the high-fidelity quantity.
## 4 Theoretical Rationale for the RMFNN Algorithm
In this section we provide a detailed theoretical rationale for the RMFNN algorithm. We begin by discussing the sources of error incurred in using the RMFNN algorithm. We then review current theoretical results on the convergence rate of the approximation error and discuss the computational complexity of the algorithm.
### Sources of error
Let \(\varepsilon\) denote the error (1) in approximating \(Q\) by the surrogate (7) obtained by the RMFNN algorithm, measured in \(L^{2}\)-norm, and satisfying the accuracy constraint
\[\varepsilon=||Q-Q_{DNN}||_{L^{2}(\Theta)}\leq\varepsilon_{\mathrm{TOL}}. \tag{8}\]
By triangle inequality, we can split the error into three parts:
\[\varepsilon\leq\underbrace{||Q-Q_{HF}||_{L^{2}(\Theta)}}_{\varepsilon_{I}}+ \underbrace{||Q_{HF}-\hat{Q}_{HF}||_{L^{2}(\Theta)}}_{\varepsilon_{II}}+ \underbrace{||\hat{Q}_{HF}-Q_{DNN}||_{L^{2}(\Theta)}}_{\varepsilon_{III}}\leq \varepsilon_{\mathrm{TOL}},\]
where the desired accuracy is achieved if, for instance, we enforce \(\varepsilon_{I},\varepsilon_{II},\varepsilon_{III}\leq\frac{1}{3}\, \varepsilon_{\mathrm{TOL}}\). The first error term \(\varepsilon_{I}\) is the error in approximating \(Q\) by \(Q_{HF}\) in (5) using an ODE/PDE solver. Then, from (3) we have
\[\varepsilon_{I}\leq C\,h_{HF}^{q},\]
and hence we can meet the accuracy constraint if we select \(h_{HF}\) such that \(C\,h_{HF}^{q}\leq\frac{1}{3}\,\varepsilon_{\text{TOL}}\). The second and third error terms are errors in network approximation, and hence of a different nature. Indeed, the second error term \(\varepsilon_{II}\) is the error in approximating \(Q_{HF}\) by \(\hat{Q}_{HF}\) in (6), which is the error in approximating \(F\) in (4) by the ResNN surrogate \(F_{ResNN}\),
\[\varepsilon_{II}=||Q_{LF}+F-Q_{LF}-F_{ResNN}||_{L^{2}(\Theta)}=||F-F_{ResNN}||_ {L^{2}(\Theta)}.\]
The third error term \(\varepsilon_{III}\) is the error in approximating \(\hat{Q}_{HF}\) by the DNN surrogate \(Q_{DNN}\). In general, the second and third error terms depend on three factors: i) the space of the target functions to be approximated, here the function space of \(F\) and \(Q_{HF}\); ii) the architectures of \(ResNN\) and \(DNN\) and their training hyperparameters, including, but not limited to, width, depth, activation functions, and the optimization algorithm and the cost function used in the training process; and iii) the quantity and distribution of the input training data for \(ResNN\) and \(DNN\), i.e. the choice of the sets \(\Theta_{I}\) and \(\Theta_{II}\).
It is to be noted that the precise dependence of the output error of a trained network on the target function space, network architecture, optimization hyperparameters, and the choice of input training data is still not completely understood, and one that we will not address here. Instead we address the computational complexity of training and evaluating the networks \(ResNN\) and \(DNN\) under the accuracy constraint \(\varepsilon_{II}+\varepsilon_{III}\leq\frac{2}{3}\,\varepsilon_{\text{TOL}}\). In particular, we leverage two recent theoretical results to argue that, in general, for a given accuracy constraint, the complexity of the RMFNN algorithm is less than other state-of-the-art neural network-based surrogate construction methods.
### Motivating theoretical results
Two recent theoretical results motivated the development of the RMFNN algorithm. The first is a result for Fourier feature residual networks [25]. In Theorem 2.1 of [25], the authors derive a bound on network generalization error directly proportional to the uniform norm of the target function and inversely proportional to the network degree of freedom. This result for Fourier features residual networks inspired the analogous conjecture for ReLU networks:
**Conjecture 1**.: _Consider a target function \(f\in L^{2}(\Theta)\) with bounded uniform norm \(||f||_{\infty}<\infty\). Let \(f_{\Phi}\) be a ReLU network's output with trainable parameters \(\Phi\) and consisting of \(L\) layers and \(K\) neurons in each layer. Then there is a constant \(C>0\) such that_
\[\min_{\Phi}||f-f_{\Phi}||_{L^{2}(\Theta)}^{2}\leq C\,\frac{||f||_{\infty}^{2} }{KL}+\mathcal{O}(K^{-2}+L^{-4}). \tag{9}\]
A proof of Conjecture 1 is the subject of ongoing research and will be presented elsewhere, but numerical evidence for such a bound is presented in Section 5. Furthermore, the validity of a bound analogous to (9) for Fourier feature residual networks supports the use of our residual multi-fidelity framework over the standard multi-fidelity framework used in [29, 30]. Indeed, for a fixed error tolerance, if we consider learning both the residual function \(F(\mathbf{\theta},Q_{LF}(\mathbf{\theta}))=Q_{HF}(\mathbf{\theta})-Q_{LF}(\mathbf{\theta})\), which is generally small in uniform norm relative to \(Q_{HF}\), and the direct mapping \(G(\mathbf{\theta},Q_{LF}(\mathbf{\theta}))=Q_{HF}(\mathbf{\theta})\), we hypothesize that the former can be learned to the desired accuracy with considerably less network complexity than the latter.
Moreover, the training of this network of smaller degree of freedom will generally require less training data, a quality that favorably addresses high-fidelity data sparsity conditions present in most multi-fidelity modeling problems.
Additional rationale for the RMFNN algorithm stems from Theorem 1, a recent result on approximation of Sobolev functions by ReLU networks, a proof for which can be found in [31]. For an in depth introduction to Sobolev function spaces we direct the reader to [12].
**Theorem 1**.: _Let \(\varepsilon_{\text{TOL}}\in(0,\frac{1}{2})\). Consider a Sobolev target function \(f\in W^{k,p}(\Theta)\) on \(\Theta=[0,1]^{d}\) with \(k\geq 1\) and \(p\in[1,\infty]\). Then there exists a ReLU neural network \(f_{\Phi}\) with \(d\) input neurons, one output neuron, and complexity_
\[L(f_{\Phi}) \leq C\,\log_{2}(\varepsilon_{\text{TOL}}^{-1}\,||f||_{W^{k-1,p}( \Theta)}),\] \[N(f_{\Phi}) \leq C\,\varepsilon_{\text{TOL}}^{-d/k}\,||f||_{W^{k,p}(\Theta)} ^{d/k}\,\log_{2}^{2}(\varepsilon_{\text{TOL}}^{-1}\,||f||_{W^{k-1,p}(\Theta)}),\]
_such that \(||f-f_{\Phi}||_{L^{p}(\Theta)}\leq\varepsilon_{\text{TOL}}\). Here, \(L(f_{\Phi})\) and \(N(f_{\Phi})\) are the number of layers and the total number of neurons of the network, respectively, and the constant \(C=C(d,k,p)>0\) is independent of \(\varepsilon_{\text{TOL}}\)._
Important in this theorem is the dependence of network complexity on the size of the target function \(f\). For a fixed error tolerance \(\varepsilon_{\text{TOL}}\), the smaller the quantities \(||f||_{W^{k,p}}\) and \(||f||_{W^{k-1,p}}\), the smaller the degree of freedom required of the approximating ReLU network. Furthermore, it is important to note that the bound on \(N(f_{\Phi})\) depends strongly on \(d/k\), a value that balances problem input dimension \(d\) and target function regularity \(k\). The problem regime in which neural networks promise to be highly beneficial is when \(d/k\gg 1\) (high problem dimension and/or very irregular target function). In this case, the network complexity is highly dependent on the ratio \(||f||_{W^{k,p}}/\varepsilon_{\text{TOL}}\). That is, if extremely accurate predictions are required (\(\varepsilon_{\text{TOL}}\ll 1\)), and network complexity is to remain controlled, then we must also require \(||f||_{W^{k,p}}\approx\varepsilon_{\text{TOL}}\). This condition is rarely satisfied in real-world problems, but minimizing \(||f||_{W^{k,p}}\) can help greatly in reducing otherwise very large network complexity, enabling accurate network training on sparser data sets.
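To see how strongly the \(d/k\) exponent amplifies the benefit of a small target norm, the following few lines evaluate the dominant factor \((||f||_{W^{k,p}}/\varepsilon_{\text{TOL}})^{d/k}\) of the neuron bound; the constant \(C\) and the log factors are dropped, and the sample values are hypothetical.

```python
# Dominant factor (||f|| / eps)^(d/k) of the neuron bound in Theorem 1,
# with the constant C and the log^2 factor dropped; values are hypothetical.
eps = 1e-3
for d_over_k in (0.5, 2.0, 8.0):
    for f_norm in (1.0, 1e-1, 1e-2):      # shrinking the Sobolev norm of f
        print(d_over_k, f_norm, (f_norm / eps) ** d_over_k)
```

For \(d/k=8\), shrinking the norm from \(1\) to \(10^{-2}\) reduces this factor by sixteen orders of magnitude, which is exactly the lever the residual formulation pulls.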
### Computational complexity
We will now discuss the computational complexity of the RMFNN algorithm, denoted by \(W_{RMFNN}\), for approximating the target quantity \(Q(\boldsymbol{\theta})\) at \(N_{\boldsymbol{\theta}}\) distinct points \(\boldsymbol{\theta}\in\Theta\), when \(N_{\boldsymbol{\theta}}\gg 1\) is large. The complexity of the algorithm consists of two major parts: the cost of training the two networks \(ResNN\) and \(DNN\), which is assumed to be dominated by the cost of obtaining \(N_{I}\) high-fidelity training data, and the cost of \(N_{\boldsymbol{\theta}}\) evaluations of the deep network estimator (7). The total cost is hence assumed to be given by
\[W_{RMFNN}=N_{I}\,W_{HF}+N_{\boldsymbol{\theta}}\,W_{DNN}, \tag{10}\]
where \(W_{HF}\) denotes the computational cost of one evaluation of \(Q_{HF}\) in (2), and \(W_{DNN}\) denotes the cost of one evaluation of \(DNN\). In what follows, we discuss the cost (10) of RMFNN algorithm and compare it with the cost of the following two approaches:
* HFM: direct sampling of the high-fidelity model without utilizing neural network-based surrogate construction or lower-fidelity information. Here the expensive high-fidelity ODE/PDE model is directly computed \(N_{\boldsymbol{\theta}}\) times with cost, \[W_{HFM}=N_{\boldsymbol{\theta}}\,W_{HF}.\] (11)
* HFNN: evaluation of a deep high-fidelity neural network surrogate that is constructed without leveraging lower-fidelity information. The high-fidelity surrogate network is trained on only \(N\) high-fidelity data and evaluated \(N_{\boldsymbol{\theta}}\) times with cost, \[W_{HFNN}=N\,W_{HF}+N_{\boldsymbol{\theta}}\,W_{DNN}.\] (12)
The higher efficiency of the RMFNN method, compared to the above alternative approaches, relies on the following two conditions:
\[1)\ N_{I}\ll N<N_{\boldsymbol{\theta}};\ \ \ \ \ \text{and}\ \ \ \ \ 2)\ W_{DNN}\ll W_{HF}.\]
Clearly, these two conditions imply that \(W_{RMFNN}\ll W_{HFNN}<W_{HFM}\). We address each of these conditions in the two observations below.
_Observation 1_: As motivated in Section 3.1, and supported with theoretical results in Section 4.2, the small norm of \(F\) enables its accurate approximation by the network \(ResNN\) with far less complexity than the deep network \(DNN\) that approximates \(Q_{HF}\). This implies that the number \(N_{I}\) of expensive high-fidelity data needed to train \(ResNN\) may be far smaller than the total number \(N\) of data needed to train \(DNN\), that is \(N_{I}\ll N\). Moreover, in many scientific problems, we often need a very large number (\(N_{\boldsymbol{\theta}}\gg 1\)) of high-fidelity samples, implying \(N<N_{\boldsymbol{\theta}}\). For example, when using Monte Carlo sampling to evaluate the statistical moments of random quantities, we require \(N_{\boldsymbol{\theta}}=\mathcal{O}(\varepsilon_{\text{TOL}}^{-2})\), since the statistical error in Monte Carlo sampling is known to be proportional to \(N_{\boldsymbol{\theta}}^{-1/2}\) [14]. In such cases, the inequality \(N<N_{\boldsymbol{\theta}}\) holds provided \(N=\mathcal{O}(\varepsilon_{\text{TOL}}^{-p})\), with \(p<2\). We note that there are currently no rigorous results that relate the number and distribution of input training data to the accuracy of neural networks. However, we have observed through numerical experiments that for the type and size of networks that we have studied, the number of data needed to train a network to achieve accuracy within \(\varepsilon_{\text{TOL}}\) indeed grows with a rate milder than \(\mathcal{O}(\varepsilon_{\text{TOL}}^{-2})\).
_Observation 2_: The evaluation of a neural network mainly involves simple matrix-vector multiplications and evaluations of activation functions. The total evaluation cost depends on the width and depth of the network and the type of activation used. Crucially, the network evaluation cost depends only mildly on the complexity of the underlying ODE/PDE problem. As can be seen in Theorem 1, approximating target functions with successively lower regularity or larger problem input dimension will require neural networks of higher complexity (larger width and or depth). However, when evaluating the neural network, this added complexity manifests only as additional simple matrix-vector operations and cheap evaluations of activation functions. In particular, for QoIs that can be well approximated by a network with several hidden layers and hundreds or thousands of neurons, we expect
\(W_{ResNN}<W_{DNN}\ll W_{HF}\). Indeed, in such cases, the more complex the high-fidelity problem, and the more expensive computing the high-fidelity quantity, the smaller the ratio \(W_{DNN}/W_{HF}\).
It is to be noted that in the above discussion we have assumed that the cost of training \(DNN\) is dominated by the cost of obtaining high-fidelity training data, neglecting the one-time cost of solving the underlying optimization problem in the training process. This will be a valid assumption, for instance, when the cost of solving the network optimization problem (e.g. by stochastic gradient descent) is negligible compared to the cost of \(N_{\boldsymbol{\theta}}\gg 1\) high-fidelity model evaluations. In particular, we have observed through numerical experiments that as the tolerance decreases, and hence \(N_{\boldsymbol{\theta}}\) increases, this one-time network training cost may become negligible compared to the cost of \(N_{\boldsymbol{\theta}}\) high-fidelity solves. We further illustrate this in Section 5.
**An illustrative example.** We compare the costs (10), (11), and (12) on a prototypical UQ task. Let \(\boldsymbol{\theta}\) be a random vector with a known and compactly supported joint probability density function (PDF), and suppose that we want to obtain an accurate expectation of the random quantity \(Q=Q(\boldsymbol{\theta})\) within a small tolerance, \(\varepsilon_{\text{TOL}}\ll 1\). To achieve this using MC sampling, recalling that the statistical error in MC sampling is proportional to \(N_{\boldsymbol{\theta}}^{-1/2}\) [14], we require \(N_{\boldsymbol{\theta}}\propto\varepsilon_{\text{TOL}}^{-2}\gg 1\) samples. Now, suppose that \(W_{HF}\propto h_{HF}^{-\gamma}\), where \(\gamma>0\) is related to the time-space dimension of the underlying ODE/PDE problem and the discretization technique used to solve the problem. Then, noting \(h_{HF}\propto\varepsilon_{\text{TOL}}^{1/q}\), we obtain \(W_{HF}\propto\varepsilon_{\text{TOL}}^{-\gamma/q}\). Therefore, the cost (11) of directly computing the high-fidelity model \(N_{\boldsymbol{\theta}}\) times is
\[W_{HFM}\propto\varepsilon_{\text{TOL}}^{-(2+\gamma/q)}. \tag{13}\]
Following observation 1 above, we assume \(N\propto\varepsilon_{\text{TOL}}^{-p}\), with \(p<2\). This assumption indicates that the dependence of the number \(N\) of training data on the tolerance \(\varepsilon_{\text{TOL}}\) is milder than the dependence of the number of MC samples on the tolerance. Furthermore, we assume \(r:=N_{I}/N\ll 1\). Finally, following observation 2 above, we assume that the evaluation cost of the deep network \(DNN\) is less than the cost of a high-fidelity solve, i.e. \(W_{DNN}\propto\varepsilon_{\text{TOL}}^{-\hat{p}}\), with \(\hat{p}<\gamma/q\), implying \(W_{DNN}\ll W_{HF}\). With these assumptions, the cost (12) of using a purely high-fidelity network surrogate and the cost (10) of using a surrogate generated by the RMFNN algorithm read
\[W_{HFNN}\propto\varepsilon_{\text{TOL}}^{-(p+\gamma/q)}+\varepsilon_{\text{ TOL}}^{-(2+\hat{p})}+\text{training cost}, \tag{14}\]
\[W_{RMFNN}\propto r\,\varepsilon_{\text{TOL}}^{-(p+\gamma/q)}+\varepsilon_{\text {TOL}}^{-(2+\hat{p})}+\text{training cost}. \tag{15}\]
Comparing (13) and (14), and excluding training costs (see the discussion above), we get \(W_{HFNN}\ll W_{HFM}\), when \(p<2\) and \(\hat{p}<\gamma/q\). Furthermore, comparing (14) and (15), we get \(W_{RMFNN}\ll W_{HFNN}\), when \(r\ll 1\) and \(\hat{p}<p+\gamma/q-2\).
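To make the comparison concrete, the following sketch evaluates the three cost scalings for the exponents of the ODE example in Section 5.2 (\(q=2\), \(\gamma=1\), \(p=0.5\)); the values of \(\hat{p}\) and \(r\) are hypothetical placeholders, and constants and training costs are dropped.

```python
# Cost scalings (13)-(15) with constants and training costs dropped.
# Exponents follow the ODE example of Section 5.2: q = 2, gamma = 1, p = 0.5;
# p_hat and r are hypothetical placeholders with p_hat < gamma/q and r << 1.
gamma_over_q, p, p_hat, r = 0.5, 0.5, 0.1, 0.1

def costs(tol):
    w_hfm   = tol ** -(2 + gamma_over_q)                            # (13)
    w_hfnn  = tol ** -(p + gamma_over_q) + tol ** -(2 + p_hat)      # (14)
    w_rmfnn = r * tol ** -(p + gamma_over_q) + tol ** -(2 + p_hat)  # (15)
    return w_hfm, w_hfnn, w_rmfnn

for tol in (1e-2, 1e-3, 1e-4):
    print(tol, ["%.2e" % w for w in costs(tol)])
```

With these values both network-based costs approach \(\mathcal{O}(\varepsilon_{\text{TOL}}^{-2.1})\), well below the \(\mathcal{O}(\varepsilon_{\text{TOL}}^{-2.5})\) of direct high-fidelity sampling, consistent with the scalings observed in Section 5.2.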
## 5 Numerical examples
In this section we apply the proposed RMFNN algorithm to computing three parametric ODE/PDE problems, of which the latter two are borrowed from [30]. In all problems, the
parameters are represented by a \(d\)-dimensional random vector \(\mathbf{\theta}\in\Theta\subset\mathbb{R}^{d}\), with a known and bounded joint PDF, \(\pi:\Theta\to\mathbb{R}_{+}\). For each parametric problem, we consider a desired output quantity \(Q:\Theta\to\mathbb{R}\), being a functional of the ODE/PDE solution. Our main goal is to compute an accurate and efficient surrogate for \(Q\), denoted by \(Q_{DNN}\), given by (7). We will measure the approximation error using the weighted \(L^{2}\)-norm, as defined in (1) with \(p=2\). We approximate the error by sample averaging, using \(N_{\varepsilon}\) independent samples \(\{\mathbf{\theta}^{(i)}\}_{i=1}^{N_{\varepsilon}}\in\Theta\) drawn from \(\pi(\mathbf{\theta})\), and record the weighted mean squared error (MSE),
\[\varepsilon=\int_{\Theta}|Q(\mathbf{\theta})-Q_{DNN}(\mathbf{\theta})|^{2}\,\pi(\mathbf{ \theta})\,d\mathbf{\theta}\approx\frac{1}{N_{\varepsilon}}\sum_{i=1}^{N_{ \varepsilon}}|Q(\mathbf{\theta}^{(i)})-Q_{DNN}(\mathbf{\theta}^{(i)})|^{2}=: \varepsilon_{MSE}. \tag{16}\]
To facilitate a comparison of our algorithm with that of [30], two of our numerical examples involve computing the expectation of \(Q\),
\[\mathbb{E}[Q]:=\int_{\Theta}Q(\mathbf{\theta})\,\pi(\mathbf{\theta})\,d\mathbf{\theta},\]
by MC sampling. Due to the slow convergence of MC sampling, obtaining an accurate estimation may require a very large number (\(N_{\mathbf{\theta}}\gg 1\)) of realizations of \(Q\), each of which requires an expensive high-fidelity ODE/PDE solve. We will approximate all \(N_{\mathbf{\theta}}\) realizations of \(Q\) by the proposed surrogate \(Q_{DNN}\), built on far less than \(N_{\mathbf{\theta}}\) number of ODE/PDE solves. This is an exemplary setting where we need many evaluations of \(Q_{DNN}\). To this end, we generate \(N_{\mathbf{\theta}}\) independent samples of \(\mathbf{\theta}\), say \(\{\mathbf{\theta}^{(i)}\}_{i=1}^{N_{\mathbf{\theta}}}\), according to the joint PDF \(\pi(\mathbf{\theta})\) and approximate \(\mathbb{E}[Q]\) by the sample mean of its approximated realizations computed by the RMFNN algorithm:
\[\mathbb{E}[Q]\approx\mathcal{A}_{RMFNN}:=\frac{1}{N_{\mathbf{\theta}}}\sum_{i=1}^ {N_{\mathbf{\theta}}}Q_{DNN}(\mathbf{\theta}^{(i)}). \tag{17}\]
We compare the complexity of (17) with that of a direct high-fidelity MC computation,
\[\mathbb{E}[Q]\approx\mathcal{A}_{HF}:=\frac{1}{N_{\mathbf{\theta}}}\sum_{i=1}^{N _{\mathbf{\theta}}}Q_{HF}(\mathbf{\theta}^{(i)}). \tag{18}\]
In addition to the mean squared error (16), we also report the absolute and relative errors in expectations,
\[\varepsilon_{\text{abs}}:=|\mathbb{E}[Q(\mathbf{\theta})]-\mathcal{A}|,\qquad \varepsilon_{\text{rel}}:=|\mathbb{E}[Q(\mathbf{\theta})]-\mathcal{A}|/|\mathbb{E} [Q(\mathbf{\theta})]|, \tag{19}\]
where the estimator \(\mathcal{A}\) is either \(\mathcal{A}_{RMFNN}\) or \(\mathcal{A}_{HF}\). We note that \(\varepsilon_{\text{abs}}\) and \(\varepsilon_{\text{rel}}\) contain both the deterministic error (due to approximating \(Q\) by either \(Q_{DNN}\) or \(Q_{HF}\)) and the statistical error (due to MC sampling of \(Q_{DNN}\) or \(Q_{HF}\)). In the case of approximating \(Q\) by \(Q_{DNN}\), we can employ the triangle inequality and write
\[\varepsilon_{\text{abs}}\leq\big{|}\mathbb{E}[Q(\mathbf{\theta})-Q_{DNN}(\mathbf{\theta})]\big{|}+\big{|}\mathbb{E}[Q_{DNN}(\mathbf{\theta})]-\mathcal{A}_{RMFNN}\big{|},\]
where the first (deterministic) term is indeed the error measured in the weighted \(L^{1}\)-norm, as given in (1) with \(p=1\).
In all examples the closed-form solutions to the problems, and hence the true values of \(Q(\boldsymbol{\theta})\), are known. We use these closed-form solutions to compute errors in the approximations, and we always compare the cost of different methods subject to the same accuracy constraint. All codes are written in Python and run on a single CPU. To construct neural networks we leverage Pytorch [34] in example 5.1, and Keras [5] in examples 5.2 and 5.3, both of which are open-source neural-network libraries written in Python. It is to be noted that all CPU times are measured by time.process_time() in Python, taking the average over tens of simulations.
### A bi-fidelity surrogate construction task
For our first numerical experiment, we conduct a surrogate construction task using bi-fidelity information. We consider a pulsed harmonic oscillator governed by the system of ODEs,
\[\dot{\mathbf{u}}(t)=A\,\mathbf{u}(t)+\cos(\omega\,t)\,\mathbf{b},\qquad \mathbf{u}=\begin{bmatrix}u_{1}\\ u_{2}\end{bmatrix},\qquad A=\begin{bmatrix}-2&0\\ 0&-0.25\end{bmatrix},\qquad\mathbf{b}=\begin{bmatrix}b_{1}\\ b_{2}\end{bmatrix}, \tag{20}\]
with initial solution \(\mathbf{u}(0)=(1,20)^{\top}\). We consider four model parameters: the frequency \(\omega\in[5,50]\), the oscillation time \(t\in[0,6]\), and two pulse parameters \((b_{1},b_{2})\in[0,0.2]\times[4,4.5]\), forming a four-dimensional parameter vector \(\boldsymbol{\theta}=(\omega,t,b_{1},b_{2})\). Our goal is to construct a surrogate for the kinetic energy of the system,
\[Q(\boldsymbol{\theta})=||\mathbf{u}(\boldsymbol{\theta})||_{2}=\left(u_{1}^{2 }(\boldsymbol{\theta})+u_{2}^{2}(\boldsymbol{\theta})\right)^{1/2},\qquad \boldsymbol{\theta}=(\omega,t,b_{1},b_{2}).\]
To this end, we utilize multi-fidelity modeling as follows. For the high-fidelity model \(Q_{HF}\) we leverage the exact analytic solution to (20), and for the low-fidelity model \(Q_{LF}\) we employ an asymptotic method (see [7] and section 3.1) to derive a closed form asymptotic approximation. In Figure 4, the residual \(F=Q_{HF}-Q_{LF}\) and the high-fidelity quantity \(Q_{HF}\) are plotted for \(b_{1}=0.2\) and \(b_{2}=4.5\).
We notice that in this example, both \(Q_{HF}\) and \(F\) are highly oscillatory, but \(||F||_{\infty}\ll||Q_{HF}||_{\infty}\). This is, however, characteristic of many multi-fidelity modeling problems, and it is discussed as a primary motivating factor for the RMFNN algorithm in section 3.1.

Figure 4: Profiles of \(Q_{HF}\) (left) and \(F\) (right) versus \((\omega,t)\) for \(b_{1}=0.2\) and \(b_{2}=4.5\).
We will compare three different methods:
1. RMFNN ResNet: our proposed residual multi-fidelity algorithm using ReLU ResNets;
2. MFNN ResNet: the multi-fidelity framework proposed in [30] using ReLU ResNets;
3. HFNN ResNet: a single-fidelity ReLU ResNet trained on only high-fidelity data.
For the RMFNN algorithm we employ the alternative approach discussed in section 3.2, and, further, we replace the training of the deep network \(DNN\) by direct evaluations of the low-fidelity model. As such, each of the three surrogate methods requires the training of just one neural network. For clarity, we state the mapping each of the methods learns below.
* RMFNN learns: \((\boldsymbol{\theta},Q_{LF}(\boldsymbol{\theta}))\mapsto(Q_{HF}(\boldsymbol{ \theta})-Q_{LF}(\boldsymbol{\theta}))\)
* MFNN learns: \((\boldsymbol{\theta},Q_{LF}(\boldsymbol{\theta}))\mapsto Q_{HF}(\boldsymbol{ \theta})\)
* HFNN learns: \(\boldsymbol{\theta}\mapsto Q_{HF}(\boldsymbol{\theta})\)
Across all three methods we use a fixed ReLU network architecture, with \(L=7\) layers, \(K=7\) neurons in each hidden layer, and one ReLU-free neuron in the last layer. Moreover, to facilitate a comparison of our residual multi-fidelity framework with standard ResNets [21], all ReLU networks are constructed with a ResNet-style architecture, where we implement shortcut connections that pass along the identity map every two layers. For each method, ReLU networks are trained on eleven different training sets where the number of high-fidelity (and low-fidelity) training samples ranges from 250 to 17000. These training sets are generated using a multi-variate uniform distribution over the parameter space. Moreover, the training sample inputs are normalized so that they reside in \(\Theta=[0,1]^{4}\). For training, we use Adam optimization algorithm with the MSE cost function and Tikhonov regularization, and we implement an adaptive learning rate and stop criterion by monitoring the MSE on a validation set. To account for the randomness inherent in the training process, for each training set and for each surrogate method, we train and test 20 separate times on 20 distinct random seeds. From each train/test trial we compute the MSE (16) in the surrogate prediction on an independent test set of \(N_{\varepsilon}=10^{6}\) points \(\boldsymbol{\theta}\) uniformly distributed in \(\Theta\). Figure 5 displays the average of the 20 MSEs versus the number of high-fidelity samples in the training set.
As seen in Figure 5, RMFNN uniformly outperforms the other methods, and this performance benefit is largest on sparse training data, the exact conditions desired in multi-fidelity computation. Further, the results highlight the importance of leveraging both of the motivating factors behind the RMFNN algorithm:
1. _The method should leverage possibly non-linear correlations between low-fidelity and high-fidelity models._
2. _The quantity that is learned during network training should ideally be small in magnitude relative to the high fidelity quantity of interest._
Table 1 describes the three surrogate methods in the context of properties (i) and (ii). A check mark indicates that the method satisfies the corresponding property.
Property (i) is harnessed by both RMFNN and MFNN, and the orders of magnitude reduction in mean squared error for the MFNN method over the HFNN method verifies its importance. Continuing, property (ii) is characteristic of the RMFNN method, but not of the MFNN method. Hence the order of magnitude reduction in mean squared error achieved by RMFNN over MFNN validates the advantage of additionally learning a quantity of small norm relative to the high-fidelity quantity.
The red and blue dashed lines in Figure 5 are error bounds analogous to those proposed in Conjecture 1. Notice that the only difference between the blue and red dashed lines is the uniform norm appearing in the numerator of the first term. For the blue line, this is \(||Q_{HF}||_{\infty}\), and for the red line, the residual \(||F||_{\infty}\). Furthermore, the constants \(C_{1}\) and \(C_{2}\) were empirically chosen, but are importantly the same for both the blue and red dashed lines. As such, this numerical experiment provides preliminary evidence for Conjecture 1. A proof for this conjecture, as well as extensive numerical validation, is the subject of ongoing research.
\begin{table}
\begin{tabular}{|c|c|c|} \hline methods & (i) & (ii) \\ \hline \hline HFNN & & \\ MFNN & ✓ & \\ RMFNN & ✓ & ✓ \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of RMFNN, MFNN, and HFNN methods in the context of properties (i) and (ii).

Figure 5: Mean squared error \(\varepsilon_{MSE}\) in approximating \(Q_{HF}\) by HFNN ResNet (blue), MFNN ResNet (green) and RMFNN ResNet (red) versus number of high-fidelity training samples.

The orders of magnitude reduction in error for the RMFNN ResNet over the HFNN ResNet also shows that our residual multi-fidelity framework makes a meaningful improvement over simply using ResNet-style shortcut connections in a single-fidelity network. In particular, these results support our discussion in Section 3.2, where we compare the residual formulation in ResNet [21] with our residual multi-fidelity framework. To reiterate, the residual learning formulated in ResNet does well in addressing the _degradation problem_; however, in a multi-fidelity context, it does not sufficiently take advantage of learning a quantity of small norm. On the contrary, the proposed RMFNN algorithm exploits this advantage to a much greater extent, enforcing the learning of such a quantity by leveraging multi-fidelity information.
It is important to note that while the results in Figure 5 were obtained for a fixed network architecture with \((K,L)=(7,7)\), the experiment was also conducted for \((K,L)=(25,7)\) and \((K,L)=(7,15)\). The results obtained for these different network architectures are analogous to those pictured in Figure 5 up to a universal and commensurate reduction in MSE across all surrogate methods. Such a reduction in error can be explained by an increase in the number of trainable parameters and hence in the expressive capacity of the networks.
### A parametric ODE problem
Consider the following parametric initial value problem (IVP),
\[\begin{array}{l}u_{t}(t,\theta)+0.5\,u(t,\theta)=f(t,\theta),\hskip 28.452756ptt \in[0,T],\\ u(0,\theta)=g(\theta),\end{array} \tag{21}\]
where \(\theta\in\Theta=[-1,1]\) is a uniformly distributed random variable. Using the method of manufactured solutions, we choose the force term \(f\) and the initial data \(g\) so that the exact solution to the IVP (21) is
\[u(t,\theta)=0.5+2\sin(12\theta)+6\sin(2t)\sin(10\theta)(1+2\theta^{2}).\]
Now consider the target quantity
\[Q(\theta)=|u(T,\theta)|,\quad T=100.\]
Our goal is to use the RMFNN algorithm to construct a surrogate \(Q_{DNN}\) for \(Q\), and then to employ MC sampling to compute the expectation \(\mathbb{E}[Q(\theta)]\).
**Multi-fidelity models and accuracy constraint.** Suppose that we use the second-order accurate Runge-Kutta (RK2) time-stepper as the deterministic solver to compute realizations of \(Q_{LF}(\theta)\) and \(Q_{HF}(\theta)\) using time steps \(h_{LF}\) and \(h_{HF}\), respectively. We perform a simple error analysis, similar to the analysis in [30], to obtain the minimum number of realizations \(N_{\theta}\) and the maximum time step \(h_{HF}\) for the high-fidelity model to satisfy the accuracy constraint \(\varepsilon_{\text{rel}}\leq\varepsilon_{\text{TOL}}\) with a \(1\%\) failure probability, i.e. \(\text{Prob}(\varepsilon_{\text{rel}}\leq\varepsilon_{\text{TOL}})=0.99\). Here \(\varepsilon_{\text{rel}}\) is the relative error given in (19). Table 2 summarizes the numerical parameters \((N_{\theta},h_{HF},h_{LF})\) and the CPU time of evaluating single realizations of \(Q_{LF}\) and \(Q_{HF}\) for a decreasing sequence of tolerances \(\varepsilon_{\text{TOL}}=10^{-2},10^{-3},10^{-4}\). We choose \(h_{LF}=s\,h_{HF}\), with \(s=5,10,10\), for the three tolerance levels.
**Surrogate construction.** Following the algorithm in Section 3.2, we first generate a set of \(N=N_{I}+N_{II}\) points, \(\theta^{(i)}\in[-1,1]\), with \(i=1,\ldots,N\), collected into two disjoint sets, \(\Theta_{I}\) and \(\Theta_{II}\). We choose the points to be uniformly placed on the interval \([-1,1]\). We select every 10th point to be in the set \(\Theta_{I}\), and we collect the rest of the points in the set \(\Theta_{II}\). This implies \(N\approx 10\,N_{I}\), meaning that we need to compute the target quantity \(Q(\theta)\) by the high-fidelity model (using RK2 with time step \(h_{HF}\)) at only 10% of points, i.e. \(r=N_{I}/N\approx 0.1\); compare this with \(r=0.25\) taken in [30] to achieve the same accuracy. The number of points \(N\) will be chosen based on the desired tolerance, slightly increasing as the tolerance decreases, i.e. \(N\propto\varepsilon_{\text{TOL}}^{-p}\) where \(p>0\) is small. In particular, we observe in Table 3 below that the value \(p=0.5\) is enough to meet our accuracy constraints.
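A sketch of this data-generation step is given below (our own illustration, not the authors' code): build the uniform grid on \([-1,1]\), route every 10th point to \(\Theta_{I}\), and evaluate \(Q\) with a Heun-type RK2 step at the appropriate time step (the values \(h_{HF}=0.1\), \(h_{LF}=0.5\) are those of Table 2 for \(\varepsilon_{\text{TOL}}=10^{-2}\)). The force \(f\) is recovered from the manufactured solution.

```python
import numpy as np

def u_exact(t, theta):
    return 0.5 + 2*np.sin(12*theta) + 6*np.sin(2*t)*np.sin(10*theta)*(1 + 2*theta**2)

def f(t, theta):
    # manufactured force f = u_t + 0.5 u, with
    # u_t = 12 cos(2t) sin(10 theta) (1 + 2 theta^2)
    return 12*np.cos(2*t)*np.sin(10*theta)*(1 + 2*theta**2) + 0.5*u_exact(t, theta)

def Q_rk2(theta, h, T=100.0):
    """Heun-type RK2 solve of u' = -0.5 u + f(t, theta); returns Q = |u(T)|."""
    u, t = u_exact(0.0, theta), 0.0            # u(0) = g(theta)
    for _ in range(int(round(T / h))):
        k1 = -0.5*u + f(t, theta)
        k2 = -0.5*(u + h*k1) + f(t + h, theta)
        u, t = u + 0.5*h*(k1 + k2), t + h
    return abs(u)

# Uniform grid on [-1, 1]; every 10th point goes to Theta_I, the rest to Theta_II.
N = 241
theta = np.linspace(-1.0, 1.0, N)
in_I = np.arange(N) % 10 == 0                  # N_I = 25 points
q_hf_I = np.array([Q_rk2(th, h=0.1) for th in theta[in_I]])   # h_HF solves
q_lf   = np.array([Q_rk2(th, h=0.5) for th in theta])         # h_LF at all N points
```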
The architecture of \(ResNN\) consists of 2 hidden layers, where each layer has 10 neurons. The architecture of \(DNN\) consists of 4 hidden layers, where each layer has 20 neurons. The architecture of both networks will be kept fixed at all tolerance levels. For the training process, we employ the MSE cost function and use Adam optimization technique. For both networks, we split the available data points into a training set (95% of data) and a validation set (5% of data) and adaptively tune the learning rate parameter. We do not use any regularization technique. Table 3 summarizes the number of training-validation data \(N_{I}\) (for \(ResNN\)) and \(N\) (for \(DNN\)), the number of epochs \(N_{\text{epoch}}\), batch size \(N_{\text{batch}}\), and the CPU time of training and evaluating the two networks for different tolerances. With these parameters we construct three surrogates for \(Q_{DNN}\), one at each tolerance level. We note that while using a different architecture and other choices of network parameters (e.g. number of layers/neurons and learning rates) may give more efficient networks, the selected architectures and parameters here, following the general guidelines in [4, 16], produce satisfactory results in terms of efficiency and accuracy; see Table 4 and Figures 6-8 below.
Table 4 reports the MSE (16) in the constructed surrogates \(Q_{DNN}\) using \(N_{\varepsilon}=10^{8}\) evenly distributed points \(\theta\) in \(\Theta\), confirming that the (deterministic) MSE in the network approximation is comparable to the (statistical) relative error \(\varepsilon_{\text{rel}}\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\varepsilon_{\text{TOL}}\) & \(N_{\theta}\) & \(h_{HF}\) & \(W_{HF}\) & \(h_{LF}\) & \(W_{LF}\) \\ \hline \hline \(10^{-2}\) & \(1.35\times 10^{5}\) & \(0.1\) & \(2.24\times 10^{-4}\) & \(0.5\) & \(4.36\times 10^{-5}\) \\ \(10^{-3}\) & \(1.35\times 10^{7}\) & \(0.025\) & \(7.21\times 10^{-4}\) & \(0.25\) & \(1.04\times 10^{-4}\) \\ \(10^{-4}\) & \(1.35\times 10^{9}\) & \(0.01\) & \(2.20\times 10^{-3}\) & \(0.1\) & \(2.24\times 10^{-4}\) \\ \hline \end{tabular}
\end{table}
Table 2: Required number of realizations and time steps to achieve \(\text{P}(\varepsilon_{\text{rel}}\leq\varepsilon_{\text{TOL}})=0.99\).

\begin{table}
\begin{tabular}{|c||c|c||c|c|c|c||c|c|c|c|} \hline & & & \multicolumn{4}{c||}{\(ResNN\): \(2\times 10\) neurons} & \multicolumn{4}{c|}{\(DNN\): \(4\times 20\) neurons} \\ \cline{4-11} \(\varepsilon_{\text{TOL}}\) & \(N_{I}\) & \(N\) & \(N_{\text{epoch}}\) & \(N_{\text{batch}}\) & \(W_{T_{1}}\) & \(W_{P_{1}}\) & \(N_{\text{epoch}}\) & \(N_{\text{batch}}\) & \(W_{T_{2}}\) & \(W_{P_{2}}\) \\ \hline \hline \(10^{-2}\) & 25 & 241 & 100 & 10 & 9.72 & \(2.98\times 10^{-5}\) & 400 & 40 & 35.24 & \(4.38\times 10^{-5}\) \\ \(10^{-3}\) & 81 & 801 & 1500 & 30 & 49.08 & \(2.98\times 10^{-5}\) & 8000 & 80 & 927.77 & \(4.38\times 10^{-5}\) \\ \(10^{-4}\) & 321 & 3201 & 5000 & 50 & 257.86 & \(2.98\times 10^{-5}\) & 20000 & 50 & 13708.08 & \(4.38\times 10^{-5}\) \\ \hline \end{tabular}
\end{table}
Table 3: The number of training data and training and evaluation time of the two networks.

Figure 6 shows the low-fidelity and high-fidelity quantities versus \(\theta\in[-1,1]\) (solid lines) and the data (circle and triangle markers) available in the case \(\varepsilon_{\mathrm{TOL}}=10^{-2}\). Figure 7 (left) shows the generated high-fidelity data by the network \(ResNN\), and Figure 7 (right) shows the predicted high-fidelity quantity by the deep network \(DNN\) for tolerance \(\varepsilon_{\mathrm{TOL}}=10^{-2}\).
**MC sampling.** We next compute \(\mathbb{E}[Q]\) by (17) and (18) and compare their complexities. Figure 8 (left) shows the CPU time as a function of tolerance. The computational cost of a direct high-fidelity MC computation (18) is \(\mathcal{O}(\varepsilon_{\mathrm{TOL}}^{-2.5})\), following (13) and noting that the order of accuracy of RK2 is \(q=2\), and that the time-space dimension of the problem is \(\gamma=1\). On the other hand, if we only consider the prediction time of the proposed residual multi-fidelity method, excluding the training costs, the cost is \(\mathcal{O}(\varepsilon_{\mathrm{TOL}}^{-2})\), which is much less than the cost of high-fidelity MC sampling. When adding the training costs, we observe that although for large tolerances the training cost is large, as the tolerance decreases, the training costs become negligible compared to the total CPU time. Overall, the cost of the proposed method approaches \(\mathcal{O}(\varepsilon_{\mathrm{TOL}}^{-2})\) as tolerance decreases, and hence, the smaller the tolerance, the more gain in computational cost when employing the proposed method over high-fidelity MC sampling. This can also be seen by (15) noting that \(\max(p+\gamma/q,2+\hat{p})=\max(0.5+0.5,2+\hat{p},)=2+\hat{p}\), where \(\hat{p}>0\) is very small. Finally, Figure 8 (right) shows the
\begin{table}
\begin{tabular}{|c|c|} \hline \(\varepsilon_{\mathrm{TOL}}\) & \(\varepsilon_{MSE}\) \\ \hline \hline \(10^{-2}\) & \(2.25\times 10^{-2}\) \\ \(10^{-3}\) & \(1.50\times 10^{-3}\) \\ \(10^{-4}\) & \(1.40\times 10^{-4}\) \\ \hline \end{tabular}
\end{table}
Table 4: MSE in the approximation \(Q_{DNN}(\theta)\approx Q(\theta)\).
Figure 6: The low-fidelity and high-fidelity quantities versus \(\theta\in[-1,1]\) (solid lines) and the available data (markers) in the case \(\varepsilon_{\mathrm{TOL}}=10^{-2}\). There are \(N_{I}=25\) high-fidelity and \(N=241\) low-fidelity data points, represented by circles and triangles, respectively.
relative error as a function of tolerance for the proposed method, verifying that the tolerance is met with 1% failure probability.
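To make the sampling step concrete, the following minimal Python sketch estimates \(\mathbb{E}[Q]\) by Monte Carlo over the trained surrogate, in the spirit of estimator (17). The callable `q_dnn` is an illustrative placeholder for the trained surrogate; the sample count is taken from Table 2 for \(\varepsilon_{\mathrm{TOL}}=10^{-2}\).

```python
import numpy as np

def surrogate_mc_estimate(q_dnn, n_samples, seed=0):
    """Estimate E[Q] by Monte Carlo over theta ~ U[-1, 1], evaluating the
    cheap surrogate q_dnn instead of the high-fidelity RK2 solver."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-1.0, 1.0, size=n_samples)   # parameter samples
    q_vals = q_dnn(theta)                            # vectorized surrogate calls
    return q_vals.mean(), q_vals.std(ddof=1) / np.sqrt(n_samples)

# For eps_TOL = 1e-2, Table 2 calls for N_theta = 1.35e5 realizations:
# mean_q, mc_err = surrogate_mc_estimate(q_dnn, n_samples=135_000)
```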
Figure 8: CPU time and error versus tolerance. Left: for large tolerances the training cost is dominant, making the cost of RMFNNMC more than the cost of HFMC. However, as tolerance decreases, the training cost becomes negligible and the cost of RMFNNMC approaches \(\mathcal{O}(\varepsilon_{\text{TOL}}^{-2})\). Right: relative error as a function of tolerance verifies that the tolerance is met with 1% failure probability. The “+” markers correspond to 20 simulations at each tolerance level.
Figure 7: Outputs of the trained networks for \(\varepsilon_{\text{TOL}}=10^{-2}\). Left: generated data by the network, \(ResNN\). Right: predicted quantity by the deep network, \(DNN\).
### A parametric PDE problem
Consider the following parametric initial-boundary value problem (IBVP)
\[\begin{array}{ll}u_{tt}(t,\mathbf{x},\mathbf{\theta})-\Delta_{\mathbf{x}}u(t,\mathbf{x},\mathbf{ \theta})=f(t,\mathbf{x},\mathbf{\theta}),&(t,\mathbf{x},\mathbf{\theta})\in[0,T]\times D\times \Theta,\\ u(0,\mathbf{x},\mathbf{\theta})=g_{1}(\mathbf{x},\mathbf{\theta}),\ \ u_{t}(0,\mathbf{x},\mathbf{\theta})=g_{2}( \mathbf{x},\mathbf{\theta}),&(t,\mathbf{x},\mathbf{\theta})\in\{0\}\times D\times\Theta,\\ u(t,\mathbf{x},\mathbf{\theta})=g_{b}(t,\mathbf{x},\mathbf{\theta}),&(t,\mathbf{x},\mathbf{\theta}) \in[0,T]\times\partial D\times\Theta,\end{array} \tag{22}\]
where \(t\in[0,T]\) is the time, \(\mathbf{x}=(x_{1},x_{2})\in D\) is the vector of spatial variables in a square domain \(D=[-1,1]^{2}\), and \(\mathbf{\theta}=(\theta_{1},\theta_{2})\in\Theta\) is a vector of two independently and uniformly distributed random variables on \(\Theta=[10,11]\times[4,6]\). We select the force term \(f\) and the initial-boundary data \(g_{1},g_{2},g_{b}\) so that the exact solution to the IBVP (22) is
\[u(t,\mathbf{x},\mathbf{\theta})=\sin(\theta_{1}\,t-\theta_{2}\,x_{1})\,\sin(\theta_{2 }\,x_{2}).\]
We consider the target quantity,
\[Q(\mathbf{\theta})=|u(T,\mathbf{x}_{Q},\mathbf{\theta})|,\qquad T=30,\qquad\mathbf{x}_{Q}= (0.5,0.5).\]
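Since the exact solution is available in closed form, the target quantity can be evaluated directly; a short sketch, using only quantities stated above:

```python
import numpy as np

T, XQ = 30.0, (0.5, 0.5)

def u_exact(t, x1, x2, theta1, theta2):
    # Exact solution of the IBVP (22)
    return np.sin(theta1 * t - theta2 * x1) * np.sin(theta2 * x2)

def q_exact(theta1, theta2):
    # Target quantity Q(theta) = |u(T, x_Q, theta)|
    return np.abs(u_exact(T, XQ[0], XQ[1], theta1, theta2))

# e.g. at the centre of Theta = [10, 11] x [4, 6]:
print(q_exact(10.5, 5.0))
```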
Our goal is to construct a surrogate \(Q_{DNN}\) for \(Q\) and then to compute the expectation \(\mathbb{E}[Q(\mathbf{\theta})]\) by MC sampling.
**Multi-fidelity models and accuracy constraint.** Suppose that we have a second-order accurate (in both time and space) finite difference scheme as the deterministic solver to compute realizations of \(Q_{LF}(\mathbf{\theta})\) and \(Q_{HF}(\mathbf{\theta})\) using a uniform grid with grid lengths \(h_{LF}\) and \(h_{HF}\), respectively. We use the time step, \(\Delta t=h/2\), to ensure stability of the numerical scheme, where the grid length \(h\) is either \(h_{LF}\) or \(h_{HF}\), depending on the level of fidelity. Given a \(1\%\) failure probability and a decreasing sequence of tolerances \(\varepsilon_{\mathrm{TOL}}=10^{-1},10^{-2},10^{-3}\), a simple error analysis, similar to the analysis in [30], and verified by numerical computations, gives the minimum number of realizations \(N_{\mathbf{\theta}}\) and the maximum grid length \(h_{HF}\) for the high-fidelity model required to achieve \(\mathrm{Prob}(\varepsilon_{\mathrm{abs}}\leq\varepsilon_{\mathrm{TOL}})=0.99\). Here, \(\varepsilon_{\mathrm{abs}}\) is the absolute error given in (19). Table 5 summarizes the numerical parameters \((N_{\mathbf{\theta}},h_{HF},h_{LF})\), and the CPU time of evaluating single realizations of \(Q_{LF}\) and \(Q_{HF}\). We choose \(h_{LF}=s\,h_{HF}\), with \(s=1.6,4,8\), for the three tolerance levels.
**Surrogate construction.** Following the RMFNN algorithm in Section 3.2, we first generate a uniform grid of \(N=N_{I}+N_{II}\) points \(\mathbf{\theta}^{(i)}\in[10,11]\times[4,6]\), with \(i=1,\ldots,N\), collected into two disjoint sets \(\Theta_{I}\) and \(\Theta_{II}\). We select the two disjoint sets so that \(N\approx 11\,N_{I}\), meaning
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\varepsilon_{\mathrm{TOL}}\) & \(N_{\mathbf{\theta}}\) & \(h_{HF}\) & \(W_{HF}\) & \(h_{LF}\) & \(W_{LF}\) \\ \hline \hline \(10^{-1}\) & \(1.5\times 10^{2}\) & \(1/32\) & \(0.67\) & \(1/20\) & \(0.21\) \\ \(10^{-2}\) & \(1.5\times 10^{4}\) & \(1/128\) & \(29.75\) & \(1/32\) & \(0.67\) \\ \(10^{-3}\) & \(1.5\times 10^{6}\) & \(1/320\) & \(708.21\) & \(1/40\) & \(1.59\) \\ \hline \end{tabular}
\end{table}
Table 5: Required number of realizations and grid lengths to achieve \(\mathrm{P}(\varepsilon_{\mathrm{abs}}\leq\varepsilon_{\mathrm{TOL}})=0.99\).
that we need to compute the quantity \(Q(\mathbf{\theta})\) by the high-fidelity model with grid length \(h_{HF}\) at 9% of points, i.e. \(r=N_{I}/N\approx 0.09\); compare this with \(r=0.25\) in [30]. The number \(N\) of points will be chosen based on the desired tolerance, slightly increasing as the tolerance decreases, i.e. \(N\propto\varepsilon_{\mathrm{TOL}}^{-p}\) where \(p>0\) is small. In particular, as we observe in Table 6 below, the value \(p=0.2\) is enough to meet our accuracy constraints.
The architecture of \(ResNN\) consists of 2 hidden layers, where each layer has 20 neurons. The architecture of \(DNN\) consists of 4 hidden layers, where each layer has 30 neurons. The architecture of both networks will be kept fixed at all tolerance levels. For the training process, we split the available data points into a training set (95% of data) and a validation set (5% of data). We apply pre-processing transformations to the input data points before they are presented to the two networks. Precisely, we transform the points from \([10,11]\times[4,6]\) into the unit square \([0,1]^{2}\). We employ the quadratic cost function and use the Adam optimization technique with an initial learning rate, \(\eta=0.005\). This learning rate is adapted during training by monitoring the MSE on the validation set. We do not use any regularization technique. Table 6 summarizes the number of training-validation data \(N_{I}\) (for \(ResNN\)) and \(N\) (for \(DNN\)), the number of epochs \(N_{\mathrm{epoch}}\), the batch size \(N_{\mathrm{batch}}\), and the CPU time of training and evaluating the two networks for different tolerances. With these parameters we construct three surrogates for \(Q_{DNN}\), one at each tolerance level.
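A minimal PyTorch sketch of the surrogate training setup described above; the Tanh activation is an assumption (the text does not specify the activation function), and the names are illustrative:

```python
import torch
import torch.nn as nn

def make_dnn(d_in=2, width=30, depth=4):
    """DNN surrogate: 4 hidden layers of 30 neurons each (Tanh assumed)."""
    layers = []
    for _ in range(depth):
        layers += [nn.Linear(d_in, width), nn.Tanh()]
        d_in = width
    layers.append(nn.Linear(d_in, 1))
    return nn.Sequential(*layers)

def to_unit_square(theta):
    # Map points from [10, 11] x [4, 6] onto [0, 1]^2 before training
    lo = torch.tensor([10.0, 4.0])
    hi = torch.tensor([11.0, 6.0])
    return (theta - lo) / (hi - lo)

model = make_dnn()
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)  # eta = 0.005
loss_fn = nn.MSELoss()                                      # quadratic cost
```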
Table 7 reports the MSE (16) in the constructed surrogates \(Q_{DNN}\), using a uniform grid of \(N_{\varepsilon}=10^{8}\) points \(\mathbf{\theta}\) in \(\Theta\), confirming that the (deterministic) MSE in the network approximation is comparable to the (statistical) absolute error \(\varepsilon_{\mathrm{abs}}\).
**MC sampling.** We now compute \(\mathbb{E}[Q]\) by (17) and (18) and compare their complexities. Figure 9 (left) shows the CPU time as a function of tolerance. The computational cost of a direct high-fidelity MC sampling is proportional to \(\varepsilon_{\mathrm{TOL}}^{-3.5}\), following (13) and noting that the order of accuracy of the finite difference scheme is \(q=2\), and that the time-space dimension of the problem is \(\gamma=3\). On the other hand, if we only consider the evaluation
\begin{table}
\begin{tabular}{|c||c|c||c|c|c|c||c|c|c|c|} \hline & & & \multicolumn{4}{c||}{\(ResNN\): 2\(\times\)20 neurons} & \multicolumn{4}{c|}{\(DNN\): 4\(\times\)30 neurons} \\ \cline{4-11} \(\varepsilon_{\mathrm{TOL}}\) & \(N_{I}\) & \(N\) & \(N_{\mathrm{epoch}}\) & \(N_{\mathrm{batch}}\) & \(W_{T_{1}}\) & \(W_{P_{1}}\) & \(N_{\mathrm{epoch}}\) & \(N_{\mathrm{batch}}\) & \(W_{T_{2}}\) & \(W_{P_{2}}\) \\ \hline \hline \(10^{-1}\) & 324 & 3498 & 100 & 50 & 15.16 & \(5.13\times 10^{-5}\) & 200 & 50 & 284.54 & \(1.34\times 10^{-4}\) \\ \(10^{-2}\) & 451 & 4961 & 200 & 50 & 47.51 & \(5.13\times 10^{-5}\) & 500 & 50 & 1301.65 & \(1.34\times 10^{-4}\) \\ \(10^{-3}\) & 714 & 8003 & 1000 & 50 & 304.30 & \(5.13\times 10^{-5}\) & 4000 & 50 & 14825.28 & \(1.34\times 10^{-4}\) \\ \hline \end{tabular}
\end{table}
Table 6: The number of training data and training and evaluation time of the two networks.
\begin{table}
\begin{tabular}{|c|c|} \hline \(\varepsilon_{\mathrm{TOL}}\) & \(\varepsilon_{MSE}\) \\ \hline \hline \(10^{-1}\) & \(1.42\times 10^{-2}\) \\ \(10^{-2}\) & \(0.69\times 10^{-3}\) \\ \(10^{-3}\) & \(0.78\times 10^{-4}\) \\ \hline \end{tabular}
\end{table}
Table 7: Weighted mean squared error in the approximation \(Q_{DNN}(\mathbf{\theta})\approx Q(\mathbf{\theta})\).
cost of the RMFNN-constructed surrogate, excluding the training costs, the cost of the proposed RMFNN method is proportional to \(\varepsilon_{\text{TOL}}^{-2}\), which is much less than the cost of high-fidelity MC sampling. When adding the training costs, we observe that although for large tolerances the training cost is large, as the tolerance decreases the training costs become negligible compared to the total CPU time. Overall, the cost of the proposed method approaches \(\mathcal{O}(\varepsilon_{\text{TOL}}^{-2})\) as tolerance decreases, indicating orders of magnitude acceleration in computing the expectation compared to high-fidelity MC sampling. This convergence rate can also be seen by (15), noting that \(\max(p+\gamma/q,2+\hat{p})=\max(0.2+1.5,2+\hat{p})=2+\hat{p}\), where \(\hat{p}>0\) is very small. Finally, Figure 9 (right) shows the absolute error as a function of tolerance for the proposed method, verifying that the tolerance is met with 1% failure probability.
## 6 Conclusion
In this work we presented a residual multi-fidelity computational framework that leverages a pair of neural networks to efficiently construct surrogates for high-fidelity QoIs described by systems of ODEs/PDEs. Given a low-fidelity and a high-fidelity computational model, we first formulate the correlation between the two models in terms of a possibly non-linear residual function that measures the discrepancy between the two model outputs. In section 4.2, we leveraged recent theoretical results [31, 25] to argue that the small magnitude (or norm) of the residual function, relative to the high-fidelity quantity, enables its accurate approximation by a neural network \(ResNN\) of low relative complexity. Moreover, this low network complexity allows \(ResNN\) to accurately learn the residual function on a sparse set
Figure 9: CPU time and error versus tolerance. Left: for large tolerances the training cost is dominant, making the cost of RMFNNMC more than the cost of HFMC. However, as tolerance decreases, the cost of RMFNNMC approaches \(\mathcal{O}(\varepsilon_{\text{TOL}}^{-2})\). Right: error as a function of tolerance verifies that the tolerance is met with 1% failure probability. The “+” markers correspond to 20 simulations at each tolerance level.
of high-fidelity and low-fidelity data, the exact conditions encountered in most multi-fidelity modeling problems. The trained network \(ResNN\) is then used to efficiently generate additional high-fidelity data. Finally, the set of all available and newly generated high-fidelity data is used to train a deep network \(DNN\) that serves as a cheap-to-evaluate surrogate for the high-fidelity QoI.
We presented three numerical examples to demonstrate the power of the proposed framework. In example 5.1 we conducted a bi-fidelity surrogate construction task using 1) the RMFNN algorithm, 2) the MFNN method from [30], and 3) a pure high-fidelity neural network that does not leverage lower-fidelity information. We showed orders of magnitude reduction in generalization error for the RMFNN constructed surrogate over those constructed using the other two methods, and, further, demonstrated that this performance benefit is largest on sparse high-fidelity data. Furthermore, we showed that in a multi-fidelity context, our residual multi-fidelity framework makes a meaningful improvement over the residual learning framework as formulated in ResNet [21]. Continuing, in examples 5.2 and 5.3 we used our RMFNN constructed surrogate to approximate the expectation of our QoI via MC sampling. We exhibited large computational savings that are especially apparent when the output predictions are desired to be accurate within small tolerances.
This work inspires two primary future research directions. The first is a mathematical proof for Conjecture 1. Such a proof would be a natural and welcome addition to the numerical justification for the RMFNN algorithm presented in this manuscript. Moreover, it would facilitate a comparison between ReLU networks and Fourier feature networks on multi-fidelity modeling tasks.
The second future research direction concerns the extension of the RMFNN algorithm from bi-fidelity modeling to multi-fidelity modeling. The algorithm extends naturally in the case of an ensemble of lower-fidelity models that are strictly hierarchical. In this case, we leverage a sequence of networks to learn a sequence of residual functions that efficiently generate additional high-fidelity data.
## 7 Declarations
### Data Availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
### Competing Interests
The authors have no relevant financial or non-financial interests to disclose.
|
2308.00231 | Capsa: A Unified Framework for Quantifying Risk in Deep Neural Networks | The modern pervasiveness of large-scale deep neural networks (NNs) is driven
by their extraordinary performance on complex problems but is also plagued by
their sudden, unexpected, and often catastrophic failures, particularly on
challenging scenarios. Existing algorithms that provide risk-awareness to NNs
are complex and ad-hoc. Specifically, these methods require significant
engineering changes, are often developed only for particular settings, and are
not easily composable. Here we present capsa, a framework for extending models
with risk-awareness. Capsa provides a methodology for quantifying multiple
forms of risk and composing different algorithms together to quantify different
risk metrics in parallel. We validate capsa by implementing state-of-the-art
uncertainty estimation algorithms within the capsa framework and benchmarking
them on complex perception datasets. We demonstrate capsa's ability to easily
compose aleatoric uncertainty, epistemic uncertainty, and bias estimation
together in a single procedure, and show how this approach provides a
comprehensive awareness of NN risk. | Sadhana Lolla, Iaroslav Elistratov, Alejandro Perez, Elaheh Ahmadi, Daniela Rus, Alexander Amini | 2023-08-01T02:07:47Z | http://arxiv.org/abs/2308.00231v1 | # _Capsa_: A Unified Framework for Quantifying Risk in Deep Neural Networks
###### Abstract
The modern pervasiveness of large-scale deep neural networks (NNs) is driven by their extraordinary performance on complex problems but is also plagued by their sudden, unexpected, and often catastrophic failures, particularly on challenging scenarios. Existing algorithms that provide risk-awareness to NNs are complex and ad-hoc. Specifically, these methods require significant engineering changes, are often developed only for particular settings, and are not easily composable. Here we present capsa, a framework for extending models with risk-awareness. Capsa provides a methodology for quantifying multiple forms of risk and composing different algorithms together to quantify different risk metrics in parallel. We validate capsa by implementing state-of-the-art uncertainty estimation algorithms within the capsa framework and benchmarking them on complex perception datasets. We demonstrate capsa's ability to easily compose aleatoric uncertainty, epistemic uncertainty, and bias estimation together in a single procedure, and show how this approach provides a comprehensive awareness of NN risk.
## 1 Introduction
Neural networks (NNs) continue to push the boundaries of modern artificial intelligence (AI) systems across a wide range of complex real-world domains, from robotics and autonomy (Bojarski et al., 2016; Hawke et al., 2020; Codevilla et al., 2018), to healthcare and medical decision making (Ching et al., 2018; Topol, 2019). While their performance in these domains remains unmatched, modern NNs still encounter sudden, unexpected, and inexplicable failures that are often catastrophic - especially in safety-critical environments. These failures are largely due to systemic issues that propagate throughout the entire modern AI lifecycle, from imbalances (He & Garcia, 2009; Buda et al., 2018) and noise (Beigman & Klebanov, 2009) in data that lead to algorithmic bias (Bolukbasi et al., 2016; Caliskan et al., 2017; Buolamwini & Gebru, 2018; Chen et al., 2018; Obermeyer et al., 2019; Seyyed-Kalantari et al., 2021) to predictive uncertainty (Kendall & Gal, 2017; Kompa et al., 2021; Nado et al., 2021; Amini et al., 2020) that plagues model performance on unseen or out-of-distribution data. In order to realize the widespread adoption of AI in society, NN models must not only identify these potential failure modes, but also effectively use this awareness to obtain unified and calibrated measures of risk and uncertainty. There is thus a critical need for unified systems that can estimate quantitative risk metrics for any NN model, and in turn integrate this awareness back into the learning lifecycle to improve robustness, generalization, and safety.
Existing algorithmic approaches to risk quantification narrowly estimate a singular form of risk in AI models, often in the context of a limited number of data modalities (Nix & Weigend, 1994;
Kendall & Gal, 2017; Lakshminarayanan et al., 2017; Buolamwini & Gebru, 2018; Zhang et al., 2018; Gilitschenski et al., 2019). These methods present critical limitations as a result of their reductionist, ad hoc, and narrow focus on single metrics of risk or uncertainty. However, generalizable methods that provide a larger holistic awareness of risk have yet to be realized and deployed (Nado et al., 2021; Tran et al., 2022). This is in part due to the significant engineering changes required to integrate an individual risk algorithm into a larger machine learning system (Tran et al., 2016; Dillon et al., 2017; Bingham et al., 2019; Shi et al., 2017), which in turn can impact the quality and reproducibility of results. The lack of a unified approach for composing different risk estimation algorithms or risk-aware models limits the scope and capability of each algorithm independently, and further limits the robustness of the system as a whole. A general, model-agnostic framework for extending NN systems with holistic risk-awareness, covering both uncertainty and bias, would advance the ability and robustness of end-to-end systems.
To address these fundamental challenges, we present capsa - an algorithmic framework for wrapping any arbitrary NN model with state-of-the-art risk-awareness capabilities. By decomposing the algorithmic stages of risk estimation into their core building blocks, we unify different algorithms and estimation metrics under a common data-centric paradigm. Additionally, because capsa allows the underlying NN to be aware of a variety of risk metrics in parallel, we achieve improved performance and quality in risk estimation through principled redundancy, and open the door to achieving a unified composition and hierarchical understanding of NN risk.
In summary, the key contributions of this paper are:
1. Capsa, a flexible, and easy-to-use framework for equipping any given neural network with calibrated awareness of different forms of risk - including bias, label noise, and predictive uncertainty;
2. An algorithm for decomposing different types of risk and their estimation methods into modular components that can in turn be integrated and composed together to achieve greater accuracy, robustness, and efficiency; and
3. Empirical validation of capsa on a range of dataset complexities and modalities, along with the application of capsa for mitigation of algorithmic bias, identification of label noise, and detection of anomalies and out-of-distribution data.
We refer readers to Capsa Pro (Amini et al., 2023) for information on the software library with the full functionality described in this publication.
## 2 Background and Methodology
### Preliminaries
We consider the problem of supervised learning, where we are given a labeled dataset of \(n\) input, output pairs, \(\{(x_{i},y_{i})\}_{i=1}^{n}\). Our goal is to learn a model, \(f\), parameterized by weights, \(\mathbf{W}\), that minimizes the average loss over the entire dataset: \(\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}(f_{\mathbf{W}}(x_{i}),y_{i})\). While traditionally, the model,
Figure 1: Capsa unifies state-of-the-art algorithms for quantifying neural network risks ranging from (A) under-representation bias, (B) epistemic (model) uncertainty, and (C) aleatoric uncertainty (label noise). Capsa converts existing models into risk-aware variants, capable of identifying risks efficiently during training and deployment.
a neural network, outputs predictions in the form of \(\hat{y}=f_{\mathbf{W}}(x)\), we now introduce a risk-aware transformation operation, \(\Phi\), which transforms a model, \(f\), into a risk-aware variant, such that
\[g=\Phi_{\mathbf{\theta}}(f_{\mathbf{W}}),\] \[\hat{y},R=g(x),\]
where \(R\) are the estimated "risk" measures from a set of metrics, \(\mathbf{\theta}\). The goal of this paper is to propose a common transformation backbone for \(\Phi_{\mathbf{\theta}}(\cdot)\), which automatically transforms an arbitrary model, \(f\), to be aware of risks, \(\mathbf{\theta}\).
All measures of risk aim to capture, on some level, the reliability of a given prediction. This can stem from the data source (aleatoric uncertainty, or representation bias) or the predictive capacity of the model itself (epistemic uncertainty). Within capsa, we define various risk metrics to identify and measure these sources of risk. We propose the idea of _wrappers_, which are instantiations of \(\Phi_{\theta}\), for a singular risk metric, \(\theta\). Wrappers are given an arbitrary neural network and, while preserving the structure and function of the network, add and modify the relevant components in the model. This allows them to serve as a drop-in replacement that is able to estimate the risk metric, \(\theta\). Wrappers can further be composed over a set of metrics, \(\mathbf{\theta}\), yielding risk estimates that are faster to compute and more accurate than those of individual metrics in isolation.
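A library-agnostic sketch of this wrapper abstraction is shown below. The class and method names are illustrative only and do not reflect capsa's actual API:

```python
class RiskAwareModel:
    """Sketch of g = Phi_theta(f_W): wraps a base model with risk metrics."""

    def __init__(self, base_model, wrappers):
        self.base_model = base_model          # the original f_W
        self.wrappers = wrappers              # one wrapper per risk metric

    def __call__(self, x):
        y_hat = self.base_model(x)
        risks = {w.name: w.estimate(x, y_hat) for w in self.wrappers}
        return y_hat, risks                   # (y_hat, R) = g(x)
```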
### Capsa: The Wrapping Algorithm
While risk estimation algorithms can take a variety of forms and are often developed in ad hoc settings, we present a unified algorithm for building \(\Phi_{\mathbf{\theta}}\) in order to wrap an arbitrary neural network model. There are four main components: (1) constructing the shared feature extractor, (2) applying modifications to the existing model needed to capture the uncertainty, (3) creating additional models and augmentations if necessary, and (4) modifying the loss functions.
The feature extractor, which defaults to the model until its last layer, can be leveraged as a shared backbone by multiple wrappers at once to predict multiple compositions of risk. This results in a fast, efficient method of reusing the main body of the model, which does not require training multiple models and risk estimation methods from scratch. Next, capsa modifies the existing network according to metric-specific modifications; for example, this could entail modifying every weight in the model to be drawn from a distribution (to convert to a Bayesian neural network (Blundell et al., 2015)) or adding stochastic dropout layers (Gal and Ghahramani, 2016). Depending on the metric, capsa also adds new layers or augmentations to the model that predict new outputs. Note that these are not modifications to the model, but rather augmentations for the given metric; for example, new layers to output \(\sigma\) (Nix and Weigend, 1994), or extra model copies when ensembling (Lakshminarayanan et al., 2017). Lastly, we modify the loss function to capture any remaining metric-specific changes that need to be made. This entails combining the user-specified and metric-specific loss functions (e.g., KL-divergence (Kingma and Welling, 2013), negative log-likelihood (Nix and Weigend, 1994), etc). All of these modifications are integrated together into a custom metric-specific forward pass and train step, which capture variations in the forward and backward passes during training and inference.
Figure 2: **Overview of Capsa architecture**. (A) Capsa converts arbitrary NN models into risk-aware variants, that can simultaneously predict both their output along with a list of user-specified risk metrics. (B) Each risk metric forms the basis of a singular model wrapper which is constructed through metric-specific modifications to the model architecture and loss function.
### Risk Metrics and Background
In this section, we outline three high-level categories of risk which we quantitatively define and estimate.
**Representation Bias -** The representation bias of a dataset uncovers imbalance in the space of features and captures whether certain combinations are more prevalent than others. Note that this is fundamentally different from label imbalance, which only captures distributional imbalance in the labels. For example, in driving datasets, it has been demonstrated that the combination of straight roads, sunlight, and absence of traffic occurs more frequently than any other feature combination. This indicates that these samples are overrepresented (Amini et al., 2018). Similar combinations have been identified for facial detection (Buolamwini and Gebru, 2018; Amini et al., 2019), medicine (Puyol-Anton et al., 2021; Soleimany et al., 2021), and clinical trials (Xu et al., 2022). Uncovering feature representation bias is a computationally expensive process as these features are (1) often unlabeled, and (2) extremely high-dimensional (e.g., images, videos, language, etc). However, representation bias can be estimated by learning the density distribution of the data. We accomplish this by estimating densities in feature space. For high-dimensional feature spaces we estimate a low-dimensional embedding using a variational autoencoder (Kingma and Welling, 2013) or by using the features from the penultimate layer of the model. Bias is then the imbalance between parts of the density space estimated either discretely (using a discretely-binned histogram) or continuously (using a kernel distribution (Rosenblatt, 1956)).
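As an illustration, here is a minimal NumPy sketch of the discrete (histogram) variant over an already-computed low-dimensional latent embedding; details such as the bin count are assumptions:

```python
import numpy as np

def histogram_bias(latents, bins=10):
    """Discrete density estimate over a low-dimensional latent space; a
    sample's bias score is the inverse of its bin's estimated density."""
    hist, edges = np.histogramdd(latents, bins=bins)
    hist /= hist.sum()                                # normalize to a p.m.f.
    idx = tuple(np.digitize(latents[:, d], edges[d][1:-1])
                for d in range(latents.shape[1]))
    density = hist[idx]
    return 1.0 / (density + 1e-12)   # under-represented -> high bias score
```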
**Aleatoric Uncertainty -** Aleatoric uncertainty captures noise in the data, e.g., mislabeled datapoints, ambiguous labels, classes with low separation, etc. We model aleatoric uncertainty using Mean and Variance Estimation (MVE) (Nix and Weigend, 1994). In the regression case, we pass the outputs of the model's feature extractor to another layer that predicts the standard deviation of the output. We train using NLL, and use the predicted variance as an estimate of the aleatoric uncertainty. We apply a modification to the algorithm to generalize to the classification case in Alg. 1. We assume the classification logits are drawn from a normal distribution and stochastically sample from them using the reparametrization strategy. We average stochastic samples and backpropagate using cross entropy loss through logits and their inferred uncertainties.
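A minimal sketch of the regression-case MVE head with the Gaussian NLL objective (PyTorch); predicting the log-variance for numerical stability is a common implementation choice, assumed here rather than taken from capsa:

```python
import torch
import torch.nn as nn

class MVEHead(nn.Module):
    """Maps shared features z to (mu, log sigma^2); the predicted variance
    serves as the aleatoric uncertainty estimate."""
    def __init__(self, d_feat):
        super().__init__()
        self.mu = nn.Linear(d_feat, 1)
        self.log_var = nn.Linear(d_feat, 1)

    def forward(self, z):
        return self.mu(z), self.log_var(z)

def gaussian_nll(mu, log_var, y):
    # Negative log-likelihood of y under N(mu, exp(log_var)), up to a constant
    return 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()
```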
**Epistemic Uncertainty -** Epistemic uncertainty measures uncertainty in the model's predictive process - this captures scenarios such as examples that are "hard" to learn, examples whose features are underrepresented, and out-of-distribution data. We provide a unified approach for a variety of epistemic uncertainty methods, ranging from Bayesian neural networks (Blundell et al., 2015) and ensembling (Lakshminarayanan et al., 2017) to reconstruction-based (Kingma and Welling, 2013) approaches. Below, we outline three metrics and how they fit into capsa's unified risk estimation framework.
A _Bayesian neural network_ can be approximated by stochastically sampling, during inference, from a neural network with probabilistic layers (Blundell et al., 2015; Gal and Ghahramani, 2016). Adding dropout layers (Srivastava et al., 2014) to a model is one of the simplest ways to capture epistemic uncertainty (Gal and Ghahramani, 2016). To calculate the uncertainty, we run \(T\) forward passes, which is equivalent to Monte Carlo sampling. Computing the first and second moments from the \(T\) stochastic samples yields a prediction and uncertainty estimate, respectively.
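A sketch of the Monte Carlo dropout estimate described above, assuming a model that contains dropout layers:

```python
import torch

def mc_dropout_predict(model, x, T=20):
    """T stochastic forward passes with dropout kept active; the first and
    second moments give the prediction and epistemic uncertainty."""
    model.train()                    # keeps dropout layers stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(T)])
    return samples.mean(dim=0), samples.var(dim=0)
```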
An _ensemble_ of \(N\) models, each a randomly initialized stochastic sample, is a common approach used to accurately estimate epistemic uncertainty (Lakshminarayanan et al., 2017). However, this comes with significant computational costs. To reduce the cost of training ensembles, capsa automates the construction and management of the training procedure for all members and parallelizes their computation.
_Variational autoencoders (VAEs)_ are typically used to learn a robust, low-dimensional representation of the latent space. They can be used to estimate epistemic uncertainty by using the reconstruction loss \(MSE(\hat{x},x)\). In cases of out-of-distribution data, samples that are hard to learn, or underrepresented samples, we expect that the VAE will have high reconstruction loss, since the mapping to
the latent space will be less accurate. Conversely, when the model is very familiar with the features, or the data is in distribution, we expect the latent space mapping to be robust and the reconstruction loss to be low. To construct the VAE for any given model in capsa, we use the feature extractor as the encoder, and reverse the feature extractor automatically when possible to create a decoder.
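A sketch of the reconstruction-based epistemic score, assuming a deterministic encoding for simplicity:

```python
import torch
import torch.nn.functional as F

def reconstruction_uncertainty(encoder, decoder, x):
    """Per-sample reconstruction MSE as an epistemic proxy: OOD or
    under-represented inputs map poorly to the latent space."""
    with torch.no_grad():
        x_hat = decoder(encoder(x))      # feature extractor reused as encoder
    err = F.mse_loss(x_hat, x, reduction="none")
    return err.flatten(start_dim=1).mean(dim=1)
```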
### Metric Composability
Using capsa, we compose multiple risk metrics to create more robust estimates (e.g., by combining multiple metrics, or alternatively by capturing different measures of risk independently). By using the feature extractor as a shared common backbone, we can optimize for multiple objectives, ensemble multiple metrics, and obtain different types of uncertainty estimates simultaneously.
We propose a novel composability algorithm within capsa to automate this process. Again, we leverage our shared feature extractor as the common backbone for all metrics and incorporate all model modifications. Then, we apply the new model augmentations either in series or in parallel, depending on the use case (i.e., we can ensemble a metric in series to average the metric over multiple joint trials, or we can apply ensembling in parallel to estimate an independent measure of risk). Lastly, the model is jointly optimized using all of the relevant loss functions by computing the gradient of each one with respect to the shared backbone's weights and stepping in the direction of the accumulated gradient.
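A sketch of this joint optimization over a shared backbone; summing the per-metric losses before a single backward pass is equivalent to accumulating the per-metric gradients in the backbone:

```python
import torch

def joint_train_step(backbone, heads, loss_fns, optimizer, x, y):
    """One composed step: every metric head contributes its own loss, and
    the gradients accumulate in the shared backbone before the update."""
    optimizer.zero_grad()
    z = backbone(x)                              # shared feature extractor
    total = sum(fn(head(z), y) for head, fn in zip(heads, loss_fns))
    total.backward()
    optimizer.step()
    return float(total)
```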
## 3 Experimental Results
In the following section, we analyze the risk metrics obtained by wrapping various models with capsa on several datasets. We show that capsa provides accurate, scalable, composable risk metrics that are efficient and can be used to quantify bias, aleatoric, and epistemic uncertainty using multiple methods.
### Representation Bias
Using capsa's bias and epistemic wrapper capabilities, we analyzed the Celeb-A (Liu et al., 2015) dataset. The task for the neural network was to detect faces from this dataset against non-face images (collated from various negatives in the ImageNet dataset). Fig. 3A quantifies an accuracy vs bias tradeoff that neural networks exhibit, where they tend to perform better on overrepresented training features. We used a VAE as a feature extractor to demonstrate that it can be used for single-shot bias and epistemic uncertainty estimation without any added computation cost.
Fig. 3B qualitatively inspects the different percentiles of bias ranging from underrepresentation (left) to overrepresentation (right). We found that the underrepresented samples in the dataset commonly contained darker skin tones, darker lighting, and faces not looking at the camera. As the percentile
Figure 3: **Bias and Epistemic Uncertainty on Faces** (A) Under-represented and over-represented faces in the Celeb-A dataset found by capsa using the VAE and HistogramBias wrappers. As the percentile bias of the data increases, the skin tone gets lighter, lighting gets brighter, and hair color gets lighter, and (B) accuracy on these datapoints increases. We also determine the points with the highest epistemic uncertainty, which have artifacts such as sunglasses, hats, colored lighting, etc.
of the bias gets higher, we see that the dataset is biased towards lighter skin tones, hair colors, and a more uniform facial direction. With our approach, we highlight a critical difference between bias and epistemic estimation methods in Fig. 3C. The samples estimated to have the highest epistemic uncertainty were not necessarily only underrepresented, but also contain features that obscure the predictive power of the model (e.g., faces with colored lighting, covering masks, and artifacts such as sunglasses and hats).
### Aleatoric Uncertainty
Next, we evaluate capsa's ability to detect label noise in datasets using aleatoric uncertainty estimation. An example of this can be seen in Fashion-MNIST, which contains two very similar classes: "tshirt/top" and "shirt". The methods presented in capsa identify samples in Fashion-MNIST with high aleatoric uncertainty: light sleeveless tops with similar necklines and minimal visual differences. Short-sleeved shirts with round necklines are also labeled as either category. Compared to randomly selected samples from these two classes, the samples considered noisy by capsa are visually indistinguishable, and difficult for humans (and models) to categorize.
### Epistemic Uncertainty
In this section, we benchmark capsa's epistemic methods on toy datasets. We demonstrate how composing multiple methods in capsa (e.g., dropout and VAEs) achieves more robust and efficient performance. We combine aleatoric methods with epistemic methods (i.e., ensembling the MVE metric) to strengthen aleatoric methods, since their estimates are averaged across multiple runs. We can also treat the ensemble of MVEs as a mixture of normals. Similarly, to combine VAE and dropout, we use a weighted sum of their variances or we run the VAE \(N\) times with dropout layers and treat multiple runs as \(N\) normals.
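For the ensemble-of-MVEs case, treating the \(N\) members as an equally weighted mixture of normals gives closed-form moments via the law of total variance; a short sketch:

```python
import numpy as np

def mixture_moments(mus, sigma2s):
    """Moments of an equal-weight mixture of N normals N(mu_i, sigma2_i).
    The total variance splits into the mean aleatoric variance plus the
    epistemic spread of the member means (law of total variance)."""
    mu = mus.mean(axis=0)
    aleatoric = sigma2s.mean(axis=0)
    epistemic = (mus ** 2).mean(axis=0) - mu ** 2
    return mu, aleatoric + epistemic
```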
#### 3.3.1 Cubic Dataset and UCI Benchmarking
We compose various epistemic and aleatoric methods on a cubic dataset with injected aleatoric noise and a lack of data in some parts of the test set. We train models on \(y=x^{3}+\epsilon\), where \(\epsilon\sim\mathcal{N}(1.5,0.9)\). Training data is within \([-4,4]\) and test within \([-6,6]\). Fig. 5 demonstrates that composed metrics can successfully detect regions with no data, as well as the aleatoric uncertainty in the center.
Additionally, we benchmark raw epistemic uncertainty methods on real-world regression datasets, and evaluate VAEs, ensembles, and dropout uncertainty on these datasets based on Root Mean Squared Error (RMSE) and negative log-likelihood (NLL) in Tab. 1. More composability results, as well as training times for all methods, are available in the appendix in Tab. 4 and Tab. 3.
Figure 4: **Fashion MNIST Aleatoric Uncertainty** (A) Randomly selected samples from two classes of fashion-mnist. These samples are visually distinguishable, and have a low aleatoric uncertainty, as opposed to (B), which shows samples with highest estimated aleatoric noise. It is not clear what features distinguish these shirts from tshirts/tops, as they have similar necklines, sleeve lengths, and cuts.
Figure 5: **Risk metrics on cubic regression. A regression dataset \(y=x^{3}+\epsilon\), where \(\epsilon\) is drawn from a Normal centered at \(x=1.5\). Models are trained on \(x\in[-4,4]\) and tested on \(x\in[-6,6]\). Composing using MVE results in a single metric that can seamlessly detect epistemic and aleatoric uncertainty without any modifications to the model construction or training procedure.**
#### 3.3.2 Depth Estimation
In this section, we transition to more complex models and datasets and demonstrate how capsa can be used as a large-scale risk and uncertainty benchmarking framework for existing methods. To that end, we train a U-Net style model on the task of monocular end-to-end depth estimation (see Tab. 2). Importantly, capsa works "out of the box" without requiring any modifications since it is a highly configurable, model-agnostic framework with modularity as one of its core design principles.
Specifically, we take a U-Net style model whose final layer outputs a single \(H\times W\) activation map and wrap it with capsa. We then train the wrapped model on the NYU Depth V2 dataset (Nathan Silberman & Fergus, 2012) (27k RGB-to-depth image pairs of indoor scenes) and evaluate on a disjoint test-set of scenes. Additionally, we use outdoor driving images from ApolloScapes (Liao et al., 2020) as OOD data points.
We see that when we wrap the model with an aleatoric method in Fig. 10, we can successfully detect label noise or mislabeled data. The model exhibits increased aleatoric uncertainty on object boundaries. Indeed, we see that the ground truth has noisy labels particularly along the edges of objects which could be due to sensor noise or motion noise.
With dropout (Fig. 11) or ensemble (Fig. 12) wrappers, we capture uncertainty in the model's prediction. We see that increased epistemic uncertainty roughly corresponds to the semantically and visually challenging pixels where the model returns erroneous output.
\begin{table}
\begin{tabular}{c c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**RMSE**} & \multicolumn{3}{c}{**NLL**} \\ \hline Boston & 2.449 +/- 0.134 & 2.323 +/- 0.117 & 2.589 +/- 0.113 & 2.282 +/- 0.03 & 2.497 +/- 0.047 & 2.253 +/- 0.056 \\ Power-Plant & 4.327 +/- 0.030 & 4.286 +/- 0.0120 & 4.221 +/- 0.028 & 2.892 +/- 0.00 & 2.964 +/- 0.000 & 2.841 +/- 0.005 \\ Yacht & 1.540 +/- 0.133 & 1.418 +/- 0.222 & 1.393 +/- 0.0965 & 2.399 +/- 0.03 & 2.637 +/- 0.131 & 1.035 +/- 0.116 \\ Concrete & 6.628 +/- 0.286 & 6.382 +/- 0.101 & 6.456 +/- 0.846 & 3.427 +/- 0.042 & 3.361 +/- 0.016 & 3.139 +/- 0.115 \\ Naval & 0.004 +/- 0.000 & 0.004 +/- 0.000 & 0.004 +/- 0.000 & 1.453 +/- 0.667 & -2.482 +/- 0.229 & -3.542 +/- 0.015 \\ Energy & 1.661 +/- 0.090 & 1.377 +/- 0.091 & 1.349 +/- 0.175 & 2.120 +/- 0.022 & 1.999 +/- 0.113 & 1.395 +/- 0.066 \\ Kin8nm & 0.088 +/- 0.001 & 0.0826 +/- 0.001 & 0.072 +/- 0.000 & -0.972 +/- 0.01 & -0.913 +/- 0.00 & -1.26 +/- 0.008 \\ Protein & 4.559 +/- 0.031 & 4.361 +/- 0.0156 & 4.295 +/- 0.029 & 4.452 +/- 0.012 & 3.345 +/- 0.011 & 2.723 +/- 0.023 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Regression benchmarking on the UCI datasets**
\begin{table}
\begin{tabular}{c c c c} \hline \hline & **Test Loss** & **NLL** & **OOD AUC** \\ \hline Base & 0.0027 \(\pm\) 0.0002 & – & – \\ \hline VAE & 0.0027 \(\pm\) 0.0001 & – & 0.885 \(\pm\) 0.0361 \\ Dropout & 0.0027 \(\pm\) 0.0001 & 0.1397 \(\pm\) 0.0123 & 0.9986 \(\pm\) 0.0026 \\ Ensembles & **0.0023 \(\pm\) T-05** & 0.0613 \(\pm\) 0.0217 & 0.9989 \(\pm\) 0.0018 \\ MVE & 0.0036 \(\pm\) 0.0010 & **0.0532 \(\pm\) 0.0224** & 0.9798 \(\pm\) 0.0118 \\ \hline Dropout + MVE & 0.0027 \(\pm\) 0.0001 & 0.1291 \(\pm\) 0.0146 & 0.9986 \(\pm\) 0.0026 \\ VAE + Dropout & 0.0027 \(\pm\) 0.0001 & **0.0932 \(\pm\) 0.0201** & **0.9988 \(\pm\) 0.0024** \\ VAE + MVE & 0.0034 \(\pm\) 0.0012 & 0.1744 \(\pm\) 0.0156 & 0.9823 \(\pm\) 0.0102 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Depth regression results.** VAE + dropout outperforms all other epistemic methods and is more efficient.
Figure 6: **Risk estimation on monocular depth prediction**. (A) Example pixel-wise depth predictions and uncertainty. Model uncertainty calibration for individual metrics (B) and composed metrics (C). OOD detection assessed via AUC-ROC (D) and a full p.d.f. histogram (E).
## 4 Applications
The benefits of seamlessly and efficiently integrating a variety of risk estimation methods into arbitrary neural models extends far beyond benchmarking and unifying these algorithms. In this section, we outline critically important applications that are possible with the estimation abilities in capsa.
### Debiasing Facial Recognition Systems
Using the bias tools provided by capsa, one application is to not only estimate and identify imbalance in the dataset (which we show also leads to performance bias) but to actively reduce the performance bias by adaptively re-sampling datapoints depending on their estimated representation bias during the course of training. As shown in Fig. 7, using capsa, we pinpoint exactly which samples need under/oversampling, and therefore can intelligently resample from the dataset during training. The benefits of this are twofold: we can improve sample efficiency by training on less data if some data is redundant, and we can also oversample from areas of the dataset where our latent representation is more sparse.
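A sketch of such a sampling scheme, with weights proportional to the estimated bias score; the tempering exponent is an assumption:

```python
import numpy as np

def resampling_probabilities(bias_scores, alpha=1.0):
    """Sampling weights that grow with estimated bias, so under-represented
    feature combinations are drawn more often during training."""
    w = np.power(bias_scores, alpha)      # alpha tempers the correction
    return w / w.sum()

# rng = np.random.default_rng(0)
# batch = rng.choice(n, size=64, p=resampling_probabilities(scores))
```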
By composing multiple risk metrics together (in this case, VAEs and histogram bias) we can achieve even greater robustness during training, more sample efficiency, and combine epistemic uncertainty and bias to reduce risk.
### Detecting mislabeled examples
Another application of capsa is cleaning mislabeled or noisy datasets. We previously described capsa's ability to find noisy labels with high accuracy in Sec. 3.2. In the following experiment, we relabeled a random collection of the 7s in the MNIST dataset as 8s. As shown in Fig. 8(A), the samples with high aleatoric uncertainty are dominated by the mislabeled examples, and also include a naturally mislabeled sample. We further test capsa's sensitivity to mislabeled datasets by artificially corrupting our labels with varying levels of probability \(p\). In Fig. 8B, as \(p\) increases, the average aleatoric uncertainty per class also increases. These experiments highlight capsa's capability to serve as the backbone of a dataset quality controller and cleaner, due to its high-fidelity aleatoric noise detection.
### Anomaly and Adversarial Noise Detection
Another application of the "uncertainty estimation" functionality provided by capsa is anomaly detection. The core idea behind this approach is that a model's epistemic uncertainty on out-of-distribution (OOD) data is naturally higher than the same model's epistemic uncertainty on in-distribution (ID) data. Thus, given a risk-aware model, we visualize density histograms of per-image uncertainty estimates provided by a model on both ID (unseen test-set for NYU Depth V2 dataset) and OOD data (ApolloScapes) (see Fig. 6E). At this point, OOD detection is possible by simple thresholding. We use AUC-ROC to quantitatively assess the separation of the two density histograms; a higher AUC indicates a better quality of the separation (see Fig. 6D).
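The separation quality can be scored directly from the two uncertainty populations; a sketch using scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_auc(unc_id, unc_ood):
    """AUC-ROC of ID/OOD separation: ID images are labeled 0, OOD images 1,
    and the per-image uncertainty itself is used as the score."""
    scores = np.concatenate([unc_id, unc_ood])
    labels = np.concatenate([np.zeros(len(unc_id)), np.ones(len(unc_ood))])
    return roc_auc_score(labels, scores)

def flag_ood(unc, threshold):
    # Simple thresholding on per-image epistemic uncertainty
    return unc > threshold
```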
Figure 7: **Debiasing Facial Recognition Systems** (A) Facial datasets are overwhelmingly biased towards light-skinned females. (B) The feature combinations present in dark-skinned males make up only 1.49% of the dataset, and those present in dark-skinned female faces only take up 8.18% of the dataset. Since capsa identifies underrepresented datapoints, we can implement smart sampling schemes that increase the representation of these feature combinations.
It is critical for a model to recognize that it is presented with an unreasonable input (e.g., OOD). This capability could be used for autonomous vehicles that yield control to humans when model performance is expected to be poor. For example, in Fig. 9A we see that depth estimates degrade as images drift from the distribution. We are able to detect these shifts and can use this information to avoid incorrect predictions.
Further, the approach described above could be used to detect adversarial attacks (perturbations). In Fig. 9A we see that even though the perturbed images are not immediately distinguishable to a human eye, the method described above successfully detects the altered input images.
Another way of interpreting the adversarial perturbations is as a way of gradually turning ID datapoints into OOD. Such a granular control allows for enhanced model introspection. In Fig. 9 we see that as the epsilon of the perturbation increases, the density histograms of per image uncertainty estimates on both the ID and perturbed images become more disentangled (B) and thus the quality of separation increases (C).
## 5 Conclusions
In this paper, we present a unified, model-agnostic framework for risk estimation, which allows for seamless and efficient integration of uncertainty estimates in a couple of lines of code. Our approach opens new avenues for greater reproducibility and benchmarking of risk and uncertainty estimation methods. We validate the scalability and the convenience of the framework on a variety of datasets. We showcase how our method can compose different algorithms together to quantify different risk metrics efficiently in parallel. We demonstrate how the obtained uncertainty estimates can be used for downstream tasks. We further show how the framework yields interpretable risk estimation results that can provide a deeper insight into decision boundaries of NNs. We refer readers to Capsa Pro (Amini et al., 2023) for details regarding our comprehensive implementation of the functionality described in this publication. Capsa has the goal of accelerating and unifying advances in the areas of uncertainty estimation and trustworthy AI. In the future, we plan to extend our approach to
Figure 8: **Mislabeled Examples in the MNIST dataset** (A) If we purposefully inject label noise into the MNIST dataset by labeling 20% of the 7s in the dataset as 8, the mislabeled items have the highest aleatoric uncertainty. We also find a naturally mislabeled sample in the dataset. (B) As the percentage of mislabeled items increases, the average measured aleatoric uncertainty per class also increases.
Figure 9: **Robustness under adversarial noise** Across increasing levels of adversarial perturbations: (A) Pixel-wise depth predictions and uncertainty visualizations, (B) Density histograms of per image uncertainty, (C) OOD detection assessed via AUC-ROC, (D) Calibration curves.
other data modalities including irregular types (graphs) and temporal data (sequences), as well as to support other model types and other risk metrics.
|
2307.01782 | GHOST: A Graph Neural Network Accelerator using Silicon Photonics | Graph neural networks (GNNs) have emerged as a powerful approach for
modelling and learning from graph-structured data. Multiple fields have since
benefitted enormously from the capabilities of GNNs, such as recommendation
systems, social network analysis, drug discovery, and robotics. However,
accelerating and efficiently processing GNNs require a unique approach that
goes beyond conventional artificial neural network accelerators, due to the
substantial computational and memory requirements of GNNs. The slowdown of
scaling in CMOS platforms also motivates a search for alternative
implementation substrates. In this paper, we present GHOST, the first
silicon-photonic hardware accelerator for GNNs. GHOST efficiently alleviates
the costs associated with both vertex-centric and edge-centric operations. It
implements separately the three main stages involved in running GNNs in the
optical domain, allowing it to be used for the inference of various widely used
GNN models and architectures, such as graph convolution networks and graph
attention networks. Our simulation studies indicate that GHOST exhibits at
least 10.2x better throughput and 3.8x better energy efficiency when compared
to GPU, TPU, CPU and multiple state-of-the-art GNN hardware accelerators. | Salma Afifi, Febin Sunny, Amin Shafiee, Mahdi Nikdast, Sudeep Pasricha | 2023-07-04T15:37:20Z | http://arxiv.org/abs/2307.01782v1 | # GNN: A Graph Neural Network Accelerator using Silicon Photonics
###### Abstract
Graph neural networks (GNNs) have emerged as a powerful approach for modelling and learning from graph-structured data. Multiple fields have since benefitted enormously from the capabilities of GNNs, such as recommendation systems, social network analysis, drug discovery, and robotics. However, accelerating and efficiently processing GNNs require a unique approach that goes beyond conventional artificial neural network accelerators, due to the substantial computational and memory requirements of GNNs. The slowdown of scaling in CMOS platforms also motivates a search for alternative implementation substrates. In this paper, we present _GHOST_, the first silicon-photonic hardware accelerator for GNNs. _GHOST_ efficiently alleviates the costs associated with both vertex-centric and edge-centric operations. It implements separately the three main stages involved in running GNNs in the optical domain, allowing it to be used for the inference of various widely used GNN models and architectures, such as graph convolution networks and graph attention networks. Our simulation studies indicate that _GHOST_ exhibits at least 10.2x better throughput and 3.8x better energy efficiency when compared to GPU, TPU, CPU and multiple state-of-the-art GNN hardware accelerators.
Graph Neural Networks, Silicon Photonics, Inference Acceleration, Optical Computing.
## 1 Introduction
Deep learning has become a vital pillar in our lives due to its ability to solve many complex problems efficiently across diverse fields, including autonomous transportation, healthcare, industrial automation, and network security. This success of deep learning owes tremendously to the evolution of neural network variants that are tailored for specific learning tasks. For example, Convolution Neural Networks (CNNs) [1] and Recurrent Neural Networks (RNNs) [2] have proven their efficiency in pattern recognition for images and sequence data, by extracting knowledge from the spatial and temporal dimensions of data. While these examples are prominent solutions for many tasks, they are limited in scope to regularly structured, Euclidean data. Arbitrarily structured data, including graphs, require different techniques for efficient processing. Graph data processing is critical for many problems, e.g., social network analysis, recommender systems, and drug discovery [3].
Graph Neural Networks (GNNs) have emerged in recent years and established their proficiency in dealing with graph-structured data. These models can extract information from the graph structure and discover patterns in the data that may be difficult to identify with other deep learning methods [4]. Accordingly, many applications now benefit greatly from GNNs, and hence a lot of recent efforts have focused on enhancing GNN algorithms and improving their efficiency in handling large and various graphs. The continuing progress of GNN algorithms and models necessitates hardware platforms capable of providing GNN-specific support with high performance, while abiding by strict power constraints. Although hardware acceleration for neural networks such as CNNs and RNNs have been extensively studied, the
processing of GNNs presents unique challenges due to their combination of dense and vastly sparse operations, diversity of input graphs, and the various types of GNN algorithms and models [5]. Thus, hardware accelerators tailored to accelerate the processing of conventional neural networks cannot be directly and efficiently applied to GNNs.
Moreover, relying on traditional electronic accelerators creates limitations as these platforms face challenges in the post-Moore era due to high costs and diminishing performance improvements with semiconductor-technology scaling. Moving data through metallic wires is a well-known bottleneck in these accelerators, as it restricts the achievable performance in terms of bandwidth, latency, and energy efficiency [6]. Silicon photonics technology provides a promising solution to this data-movement bottleneck, offering ultra-high bandwidth, low-latency, and energy-efficient communication [7]. Optical interconnects, which are now being considered for chip-scale integration, have already replaced metallic ones for light-speed data transmission at almost every level of computing. It is also possible to use optical components for computations, such as matrix-vector multiplication [8]. The emergence of chip-scale optical communication and computation has thus made it possible to design photonic integrated circuits that offer low-latency and energy-efficient optical-domain data transport and computation. Furthermore, prior work, from both academia and industry, has demonstrated the significant benefits resulting from using silicon photonics for the acceleration of neural networks, as in [8]-[10].
In this paper, we introduce _GHOST_, the first silicon-photonic-based GNN accelerator that can accelerate inference of diverse GNN models and graphs. The key contributions in this paper are:
* The design of a novel GNN accelerator hardware architecture using silicon photonics with the ability to accelerate multiple existing variants of GNN models;
* Detailed photonic device and circuit-level optimizations to mitigate crosstalk noise so that error-free GNN operations can be ensured in the accelerator;
* A detailed architectural optimization for the efficient handling and acceleration of diverse graph structures and GNN model architectures on the proposed hardware accelerator; and
* A comprehensive comparison with GPU, TPU, CPU, and state-of-the-art GNN accelerators.
The rest of the paper is organized as follows. Section 2 provides a background on GNNs (different models, their acceleration challenges, and previous efforts on GNN acceleration) and on silicon photonics and performing optical computations. Section 3 describes the _GHOST_ architecture and our optimization efforts at the device, circuit and architecture layers. Details of the experiments conducted, simulation setup, and results are presented in Section 4. Lastly, Section 5 presents concluding remarks.
## 2 Background
### Graph neural networks
Prior to the emergence of GNNs, graph processing was mostly limited to traditional machine learning and graph algorithms. However, these methods had limitations in capturing the non-linear and complex relationships between vertices in a graph [4]. With the advent of GNNs, graph processing has been revolutionized, and there has been a significant improvement in graph-based machine learning tasks, such as node classification, link prediction, and graph classification. GNNs are a type of deep learning algorithm that can learn complex graph structures and relationships, and have now broadened the scope of Artificial Neural Networks (ANNs) to encompass non-Euclidean and irregular data found in graphs [5].
GNNs exploit the connections within a graph to understand and represent the relationships between vertices. They utilize an iterative approach that relies on a graph's structure and take in edge, vertex, and graph feature vectors that represent the known attributes of these elements. The general operations of a GNN can be broadly summarized in three main steps:
1. _Pre-processing:_ an optional initial step that is typically performed offline for purposes such as sampling the graph, rearranging the graph to simplify the algorithm's processing and complexity, or encoding the feature vectors.
2. _Iterative updates_: the step where the main GNN computations occur, through two main phases: aggregation and combination. The aggregation phase accumulates all the edges in a graph, and then for each vertex, it reduces all its neighbors' and its own feature vectors into a single set. This feature set is then combined: through linear transformations and non-linear activation functions, a new updated feature vector for each vertex is obtained. GNNs can be composed of several layers, and the iterative process in a single layer updates every edge and vertex with information received from immediate neighbor vertices. This means that relationships with nodes and edges that are progressively farther away can be gradually considered as more layers are processed.
3. _Readout_: the final step employed when a graph possesses a global feature vector, and it is updated once after the edge and node updates have been executed, usually in graph classification tasks.
Figure 1 illustrates an example of processing the first layer in a GNN. As shown, the aggregation phase iteratively gathers the neighbors of each vertex and then reduces all data into a single vector, \(h^{a}_{vi}\). The reduce operation in this phase can be a variety of arithmetic functions, e.g., summation, mean, or maximum. This vector is then passed through the combination phase, which usually involves a neural network. Unlike conventional ANNs, where each layer has a different set of weights, vertices in a GNN all share the same weights.
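A compact NumPy sketch of one such layer, using a mean reduce and a dense adjacency matrix for readability:

```python
import numpy as np

def gnn_layer(adj, h, w):
    """One aggregation + combination step: each vertex mean-reduces its
    neighbours' features together with its own, then applies the shared
    linear transform and a ReLU update."""
    deg = adj.sum(axis=1, keepdims=True) + 1.0    # neighbours + self
    h_agg = (adj @ h + h) / deg                   # aggregate: mean reduce
    return np.maximum(h_agg @ w, 0.0)             # combine: shared weights + ReLU
```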
Since GNNs were initially introduced in [11], multiple GNN algorithms and models have emerged. Graph Convolutional Network (GCN) [12] expands the idea of convolution in the graph space. In contrast to CNNs, where the convolutional operation is defined on regular grid-like data, the convolutional operation in GCNs is defined on irregular graph structures. GraphSAGE [13] and Graph Isomorphism Network (GIN) [14] are two models that are also based on graph convolutions. GraphSAGE employs custom sampling techniques to obtain a fixed number of neighbors for each vertex while GIN learns the isomorphism invariant representation of graphs by using a learnable parameter \(\varepsilon_{l}\) to adjust the weight of the central vertex. Graph Attention Networks (GATs) [15] are another class of GNNs that have demonstrated noteworthy results. GATs update node features through a pairwise function between nodes, which incorporates learnable weights. This results in an attention mechanism that can determine the usefulness of the edges.
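For concreteness, the layer-wise update rules of two of these models can be written compactly. Below are the standard formulations from [12] and [14], where \(\tilde{A}=A+I\) is the adjacency matrix with added self-loops, \(\tilde{D}\) is its diagonal degree matrix, \(H^{(l)}\) stacks the vertex feature vectors at layer \(l\), and \(\sigma\) is a non-linear activation:

\[H^{(l+1)}=\sigma\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}H^{(l)}W^{(l)}\right)\ \text{(GCN)},\qquad h_{v}^{(l)}=\mathrm{MLP}^{(l)}\left((1+\varepsilon_{l})\,h_{v}^{(l-1)}+\sum_{u\in N(v)}h_{u}^{(l-1)}\right)\ \text{(GIN)}.\]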
### Graph neural network acceleration
Processing GNNs presents many challenges. A system processing a GNN needs to have the capabilities to efficiently handle both dense and very sparse computations, adapt its execution and operations based on the specific input graph structure and the GNN algorithm variant employed, and scale effectively to extremely large graphs. Due to the irregularity and large size of most real-world graphs, GNNs often require very high memory bandwidth and multiple irregular memory accesses. Further, the unique combination of computing characteristics from deep learning and graph processing in GNNs results in alternating execution patterns [5]. Such challenges are typically absent when processing traditional ANN models. Thus, utilizing ANN accelerators for GNNs can be inefficient and lead to low performance and high energy costs. While overcoming these challenges is a non-trivial task, many recent efforts have tackled this problem and advanced the field of GNN processing, as discussed below.
Figure 1: An example of GNN inference showing 1) Input graph to be processed; 2) Aggregation phase, where each vertex’s neighbors are reduced to one feature vector; 3) Combine and Update phases, where each vertex is linearly transformed and updated using a non-linear activation function.
On the software side, several frameworks and graph libraries have been proposed to aid in the acceleration of GNN models. A few examples of relevant libraries are PyTorch Geometric [16], Deep Graph Library [17], and NeuGraph [18]. Several programming models that aim to abstract GNN operations have also emerged, such as SAGA [18] and GReTA [19].
On the hardware side, multiple electronic hardware accelerators for GNNs have been proposed. The accelerator in [20] presents a modular architecture where the core unit of the accelerator is a tile composed of an Aggregator module (AGG), a DNN Accelerator module (DNA), a DNN Queue (DNQ), and a Graph Processing Element (GPE). The main component in their AGG module is a bank of ALUs, and the DNA exhibits the architecture of existing spatial accelerators. Another electronic accelerator, EnGN [21], has a unified architecture that handles GNNs in a single dataflow as a concatenated matrix multiplication of feature vectors, adjacency matrices, and weights. An array of clustered PEs is utilized. To aggregate the results, each column of PEs is connected in a ring, and the results are passed along and added based on the adjacency matrix. HyGCN [22] is another electronic accelerator for GCNs, composed of two dedicated engines that handle the aggregation and combination stages, along with a control mechanism that coordinates the sequential execution of both processes. The dense combination stage is computed using a conventional systolic array approach. In contrast, the aggregation stage has a more complex architecture that includes a sampler, an edge scheduler, and a sparsity eliminator. Lastly, GRIP [23] utilizes the GReTA programming model [19] to create an electronic accelerator with specialized units and accumulators for edges and vertices, which are separate and adaptable.
Several GNN accelerators based on ReRAM and Processing-In-Memory (PIM) have also been presented. For example, ReGNN [24] leverages Analog PIM (APIM) and Digital PIM (DPIM). The authors decompose the computations in the combination phase into multiple Matrix-Vector Multiplications (MVM) and handle them through a dedicated combination engine composed of an APIM ReRAM array, while non-MVM operations are processed by the DPIM array. ReGraphX [25] is another ReRAM-based architecture that can be used for both training and inference acceleration of GNNs.
Unlike previous efforts, _GHOST_ is the first GNN accelerator that leverages silicon photonics. It also supports accelerating a broad family of GNN models, adapts efficiently to different graph shapes and sizes, and mitigates typical GNN memory and performance bottlenecks.
### Silicon Photonics
#### 2.3.1 Devices and Circuits
Optical ANN accelerators have gained considerable attention from both academic researchers and industry in recent years because of their notable advantages in terms of energy efficiency and performance [26]. There are two possible classes of optical ANN accelerators: coherent and non-coherent. In coherent architectures, a single wavelength is utilized to imprint parameters onto the optical signal's phase, which allows for Multiply and Accumulate (MAC) operations. Non-coherent architectures utilize multiple wavelengths and imprint parameters onto the optical signal's amplitude, enabling parallel operations to be performed using each wavelength. As opposed to conventional compute platforms such as GPUs and CPUs, silicon photonic CMOS fabrication does not require advanced technology nodes, which mandate elevated process complexity involving new lithography techniques and materials; simpler, less complex fabrication processes associated with older nodes are usually adopted instead [27]. The current focus of research in optical ANN accelerators is mainly on CNNs, MLPs, and RNNs. To the best of our knowledge, _GHOST_ is the first optical accelerator for GNN models.
Figure 2 presents a general overview of the fundamental devices and circuits used for optical computing. The following are the main components needed:
* _Lasers_: used to generate optical signals that are needed to perform computation and communication in optical circuits. These lasers can either be on-chip or off-chip. While off-chip lasers have better light emission efficiency, there are significant losses when coupling the optical signals onto on-chip waveguides.
Conversely, on-chip lasers, such as vertical-cavity surface-emitting lasers (VCSELs), offer a higher level of integration density and lower losses.
* _Waveguides_: silicon photonic waveguides carry the optical signal(s) generated by the laser source. They are typically composed of two materials with a high refractive-index contrast, such as a core made of Silicon (Si) and a cladding made of Silicon Dioxide (SiO2), which allows for total internal reflection. The waveguides can be either ridge or strip in shape. Wavelength Division Multiplexing (WDM) allows a single waveguide to support multiple wavelengths simultaneously without any interference. This enables the transmission of ultra-high-bandwidth signals and is used for performing MAC operations.
* _Microring Resonators (MRs)_: An MR add-drop filter is an optical modulator which is designed using a ring-shaped waveguide. Each MR can be specifically designed and adjusted to work at a particular wavelength, known as the MR resonant wavelength (\(\lambda_{MR}\)), defined as \(\lambda_{MR}=\frac{2\pi R}{m}n_{eff}\), where \(R\) is the MR radius, \(m\) is the order of the resonance, and \(n_{eff}\) is the effective index of the device. Electronic data can be modulated onto the optical signal passing an MR by carefully adjusting \(n_{eff}\) (and hence \(\lambda_{MR}\)) with a tuning circuit. MR banks consist of groups of MRs that share a single input waveguide and can be utilized for performing MAC and summation operations.
* _Photodetectors (PDs)_: PDs are needed to detect processed optical signals and convert them into electrical signals. To be effective, a PD should be able to generate the desired electrical output from a small input optical signal. The input signal power from the laser source must be greater than the sensitivity of the PD, while taking into consideration the different types of losses that may occur along the optical link.
* _Tuning circuits_: Tuning circuits are devices designed to control the effective index (\(n_{eff}\)) of MR devices to precisely modify an output optical signal. Typically, the tuning circuit is based on Thermo-Optic (TO) [28] or carrier injection Electro-Optic (EO) tuning [29], both of which cause a change in the effective refractive index (\(n_{eff}\)) and result in a resonant shift of \(\Delta\lambda_{MR}\).
* _Digital-to-Analog Converters (DACs) and Analog-to-Digital Converters (ADCs)_: DACs are needed to tune the MR devices, while ADCs convert the optical output to the electrical domain for intermediate buffering. These devices represent one of the main performance bottlenecks in silicon-photonic-based systems due to their high latency and power costs. Accordingly, as will be discussed in Section 3, _GHOST_ employs several techniques that aim to mitigate the downsides of these devices and reduce the needed opto-electric conversions.
Fig. 2: Overview of photonic circuit used to implement operations for ANN acceleration. This circuit is composed of (a) a laser source, which can be off-chip or on-chip; (b) a waveguide, which can be strip (for passive devices) or ridge (for active devices); (c) MR banks that perform MAC operations; (d) the banks are tuned as per data from the electronic domain, using DACs; (e) the result from the MR banks is detected by a PD and converted back to the electrical domain.
#### 2.3.2 Optical computation
Most of _GHOST_'s core operations are performed using the opto-electronic tuning devices, MRs. Figure 3(a) shows a representation of the transmission plots for the input and the through ports' wavelengths after a parameter is imprinted onto the input signal. In most silicon-photonic-based systems, computations are performed by adjusting an MR's \(\Delta\lambda_{MR}\), which leads to a predictable alteration in the amplitude of the optical signal's wavelength. _GHOST_ leverages this to implement two main computations using MR devices: summation and multiplication. Summation is performed using coherent photonic summation. This entails using one optical signal with a single wavelength \(\lambda_{MR}\) and MR devices adjusted to operate at the same resonant wavelength \(\lambda_{MR}\). Figure 3(b) illustrates an example where coherent summation is used to add the values \(a_{1}\), \(a_{2}\), and \(a_{3}\). Using an analog biasing signal, VCSELs can be driven to produce an optical signal with a certain value imprinted onto it. Accordingly, the first value \(a_{1}\) is imprinted onto the optical signal using the bottom VCSEL laser source. The top VCSEL produces an optical signal with a value of \(1\), which is split into two signals to be passed by the two MR devices. The first MR device then imprints the value \(a_{2}\) onto the optical signal, while the second MR imprints the value \(a_{3}\). When the optical signal generated by the bottom VCSEL unit (\(a_{1}\)) and the one modulated by the first MR device (\(a_{2}\)) meet, they undergo interference, resulting in a summation operation, and an optical signal with the value \(a_{1}+a_{2}\) is generated. Similarly, when this optical signal meets the one modulated by the second MR device, they undergo interference, resulting in a summation operation, and the final output \(a_{1}+a_{2}+a_{3}\) is computed. Coherent summation is ensured by using a laser phase-locking mechanism [30], which guarantees that VCSEL output signals have the same phase for constructive interference to occur.
On the other hand, performing multiplications is done using non-coherent silicon photonics, where multiple optical signals with different wavelengths are multiplexed into the same waveguide using WDM. This enhances throughput and emulates neurons in ANNs as it involves combining multiple optical signals with different wavelengths into a single waveguide using an optical multiplexer [26]. Different wavelengths in the input waveguide pass through a series of MRs, with each MR tuned to a specific wavelength, allowing for several multiplications to be executed simultaneously in parallel. Figure 3(c) illustrates an example of multiplying two vectors (activations vector \(A\), and weights vector \(W\)) as follows
\[\begin{bmatrix}a_{1}&a_{2}&a_{3}\end{bmatrix}\times\begin{bmatrix}w_{1}\\ w_{2}\\ w_{3}\end{bmatrix}=a_{1}w_{1}+a_{2}w_{2}+a_{3}w_{3} \tag{1}\]
Two MR banks and three optical signals with different wavelengths are needed to perform this multiplication. The first MR bank array imprints the activation values \(a_{1}\), \(a_{2}\), and \(a_{3}\), while the second MR bank imprints the weight values \(w_{1}\), \(w_{2}\), and \(w_{3}\). As shown in the figure, after imprinting the activation values, when the same signal with the same wavelength gets modulated by a second MR, a multiplication operation occurs between the previously imprinted value and the new one. Different optical wavelengths on the waveguide can then go through a PD to accumulate the output of the dot product. This method can be extended to perform MVMs, since they can be decomposed into vector multiplications, by using several rows of the MR banks organization shown.
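This decomposition of an MVM into per-row dot products can be sketched functionally (a behavioural model only, abstracting away the optics; the names are illustrative): each wavelength carries one product \(a_{i}w_{i}\) through the two MR banks, and the PD at the end of a waveguide accumulates the products on that row into a dot product.

```python
import numpy as np

def wdm_dot_product(a, w):
    """Functional model of one non-coherent MR-bank row.

    Each of the len(a) wavelengths carries one product a_i * w_i
    (the first MR bank imprints a_i, the second imprints w_i); the
    photodetector at the end of the waveguide accumulates them.
    """
    per_wavelength = a * w           # parallel multiplications via WDM
    return per_wavelength.sum()      # accumulation at the photodetector

def wdm_mvm(a, W):
    """MVM decomposed into one dot product per MR-bank row."""
    return np.array([wdm_dot_product(a, W[:, j]) for j in range(W.shape[1])])

a = np.array([0.2, 0.5, 0.3])
W = np.random.rand(3, 4)
assert np.allclose(wdm_mvm(a, W), a @ W)
```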
One of the main challenges of performing multiplications using non-coherent silicon photonics is the resulting heterodyne or incoherent crosstalk, which is shown as the black portions in Figure 3(d). This mainly occurs when a portion of an optical signal from neighboring wavelengths interferes with the MR spectrum of another wavelength. In Section 3.2, we discuss our efforts in optimizing the MR bank array design to reduce this crosstalk.
## 3 GHOST Hardware Accelerator
_GHOST_ is a silicon photonic architecture that can accelerate the inference of a diverse family of GNN models. An overview of the architecture is shown in Figure 4. The photonic accelerator core is composed of aggregate, combine, and update blocks, enabling the execution of a wide range of GNN models and real-world graph datasets. Interfacing with the main memory, buffering the input graph, identifying the needed resources, and mapping the weight matrices to the photonic architecture are all handled by an integrated Electronic-Control Unit (ECU). The following subsections describe the _GHOST_ architecture and the device, circuit, and architecture layer optimization solutions we have considered to efficiently accelerate GNN models.
### Tuning circuit design
As discussed earlier, MR devices require a tuning mechanism, which can be achieved using either EO or TO methods. In _GHOST_, we have implemented a hybrid tuning circuit that utilizes both methods to induce \(\Delta\lambda_{MR}\). By doing so, we can capitalize on the advantages of each approach while mitigating their drawbacks. EO tuning is quicker (\(\approx\)ns
Figure 4: Overview of _GHOST_ accelerator architecture showing the ECU, Aggregate, Combine, and Update blocks.
Figure 3: (a) MR input and through ports' wavelengths after imprinting a parameter onto the signal; (b) two MR devices used to perform optical coherent summation to add values \(a_{1}\), \(a_{2}\), and \(a_{3}\); (c) MR bank arrays used to perform multiplication by imprinting the input vector (\(a_{1}\)–\(a_{3}\)), followed by the weight vector (\(w_{1}\)–\(w_{3}\)); (d) MR bank response and heterodyne crosstalk shown in black, where CS is channel spacing and FSR is free spectral range.
range) and consumes less power (\(\approx\)4 \(\mu\)W/nm); however, it cannot be used for large tuning ranges [29]. Conversely, TO tuning offers a larger tunability range, but with the drawback of higher latency (\(\approx\)\(\mu\)s range) and power consumption (\(\approx\)27 mW/FSR) [28]. To address these challenges, we have integrated EO tuning to quickly induce small \(\Delta\lambda_{MR}\) and reserved the slower TO tuning for cases where larger \(\Delta\lambda_{MR}\) is required. The effectiveness of this hybrid approach has been previously demonstrated in [31]. To further reduce the power consumption of TO tuning and also reduce thermal crosstalk, we have employed the Thermal Eigenmode Decomposition (TED) method from [32]. We analyzed thermal interference between MR heaters and, using eigenmode decomposition, determined thermal tuning levels across heaters that do not cause thermal interference in MR banks. Tuning the MR heaters according to the tuning levels obtained with this approach allowed us to mitigate thermal crosstalk and minimize TO tuning power.
### MR device optimization
To ensure that the MVM operations performed are error free, so that the deployed GNN can be executed correctly, it is necessary to manage various sources of noise in the analog photonic domain. There are several major noise sources in the photonic computing substrate, including thermal crosstalk, heterodyne (or incoherent) crosstalk, and homodyne (or coherent) crosstalk. The thermal crosstalk between TO tuning circuits is mitigated using our TED-based tuning mechanism (Section 3.1). But we still have to mitigate the impact of heterodyne and homodyne crosstalk in our design.
The presence of multiple wavelengths (or channels) in the same waveguide causes heterodyne or inter-channel crosstalk, where a portion of an optical signal from neighboring wavelengths (say \(\lambda_{1}\) and \(\lambda_{3}\)) can leak into the MR spectrum of another wavelength (say \(\lambda_{2}\)) (see Figure 3(d)). This power leak causes MR2's output to have \(\lambda_{1}\) and \(\lambda_{3}\) content, and the MR downstream (in this example, MR3) will receive lower signal power than designed for. Thus, the outputs from MR2 and MR3 will be erroneous. For this design optimization, we model heterodyne crosstalk using the following equations:
\[P_{signal}=\Phi\big(\lambda_{i},\lambda_{j},Q_{factor}\big)P_{g}\big(\lambda_{i},\lambda_{j}\big), \tag{2}\] \[P_{het\_noise}=\sum_{i=1}^{n}\Phi\big(\lambda_{i},\lambda_{j},Q_{factor}\big)P_{g}(\lambda_{i},\lambda_{j})\quad(i\neq j), \tag{3}\]
where \(\Phi\) is the crosstalk coupling factor, i.e., the spectral overlap between the two neighboring wavelengths, \(Q_{factor}\) is the quality factor or Q-factor of the MR, and \(P_{g}\) is the input signal power to the MR.
Heterodyne crosstalk impacts signals with spectral overlap. To mitigate heterodyne crosstalk, this overlap should be minimized. This can be achieved through well-designed channel spacing and Q-factor tuning, while ensuring that the Signal-to-Noise Ratio (SNR) at the output is higher than the photodetector sensitivity. Another factor to be considered is the tunable range of the designed MRs. The MRs should provide an adequate Q-factor to improve SNR, but should also possess a sufficient tunable range, i.e., 2\(\times\)FWHM (FWHM = full width at half maximum), so that the necessary parameters can be imprinted error free. To mitigate heterodyne crosstalk, we optimize our design for high FWHM and SNR, where SNR is expressed as:
\[SNR=\ 10\times\log_{10}\big{(}P_{signal}/P_{noise}\big{)}, \tag{4}\]
and, given \(\lambda_{res}\) as the resonant wavelength of the MR being considered, FWHM can be modeled as:
\[FWHM=\lambda_{res}/Q_{factor}. \tag{5}\]
Homodyne or coherent crosstalk is a result of undesired mode coupling among signals of the same wavelength [33]. In some of the computation circuits in _GHOST_ we rely on coherent signal processing. In such circuits, part of the signal on the same wavelength may leak through a device and experience a different phase. Such leaked signals interfere with the output signal (based on their phase difference with the output signal) as coherent crosstalk noise. The presence of homodyne crosstalk, similar to heterodyne crosstalk, impacts the SNR of the non-coherent optical circuitry. The homodyne crosstalk noise power can be modeled as follows:
\[P_{hom\_noise}=\sum_{i=1}^{n}P_{in}\cdot X_{MR}^{i}(\rho)\cdot L_{p}^{n-i}, \tag{6}\]
where \(P_{in}\) is the input optical power, \(X_{MR}^{i}(\rho)\) is the crosstalk contribution from the \(i^{th}\) MR in a bank of \(n\) MRs, and \(\rho\) is the optical phase of the crosstalk signal, which is a function of the EO tuning voltage. \(\rho\) does not take into account phase errors from thermal crosstalk, as TED is employed to address those errors. Finally, \(L_{p}^{n-i}\) is the passing loss that the crosstalk signal experiences as it propagates through MRs in the coherent circuit.
For homodyne crosstalk mitigation, we may reduce the cross-over coupling by increasing the gap between the input waveguide and the ring waveguide. This reduces the amount of crosstalk signal being coupled over from the MR to the main waveguide, reducing the impact of crosstalk on the output signal. To achieve this while meeting the SNR constraints (discussed later), the \(Q_{factor}\), the attenuation in the MR (\(a\)), and the cross-over coupling coefficient (\(\kappa\)) have to be fine-tuned as follows [33]:
\[Q_{factor}=\frac{\pi n_{g}L\sqrt{(1-\kappa^{2})a}}{\lambda_{MR}(1-a(1-\kappa^ {2}))}\,. \tag{7}\]
Using our noise models described in (2), (3), and (6), we can identify the optimal design space for our MR banks which can ensure a high SNR and a high tunable range (\(R_{tune}\)). We must also consider that the lowest optical power level (\(P_{level}\)) should be higher than \(P_{noise}\):
\[P_{level}>P_{noise}\,, \tag{8}\]
\[\frac{P_{signal}}{P_{level}}<\frac{P_{signal}}{P_{noise}}, \tag{9}\]
\[10\log_{10}\left(\frac{P_{signal}}{P_{level}}\right)<10\log_{10}\left(\frac{P_{signal}}{P_{noise}}\right), \tag{10}\]
where \(P_{level}\) can be defined in terms of \(P_{signal}\), from (2), as follows:
\[P_{level}=\frac{P_{signal}\times R_{tune}}{N_{levels}}\,. \tag{11}\]
Replacing \(P_{level}\) in (10) yields the following relation:
\[10\log_{10}\left(\frac{N_{levels}}{R_{tune}}\right)<SNR. \tag{12}\]
Here, \(N_{levels}\) is the number of amplitude levels we need to represent across the available \(R_{tune}\). For an n-bit GNN parameter representation, \(N_{levels}\) will be \(2^{n}\). If positive and negative values are represented separately, as is the case with _GHOST_, then \(N_{levels}\) will be \(2^{n-1}\). The relationship in (12) can be rearranged as follows:
\[\frac{2\times\lambda_{MR}}{Q_{factor}}>N_{levels}\times 10^{-\frac{SNR}{10}}. \tag{13}\]
Utilizing these models, we can identify the design space for our MRs and the MR banks they constitute, in terms of the \(Q_{factor}\), \(N_{levels}\), and \(SNR\). We can obtain values for \(\Phi\big(\lambda_{i},\lambda_{j},Q_{factor}\big)\) in (2) and (3) and \(X_{MR}^{i}(\rho)\cdot L_{p}^{n-i}\) in (6) through multi-physics simulations. The results from our exploration studies using our detailed models and the simulation tool suite from Ansys Lumerical [34] are presented later in Section 4.2.
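As a back-of-the-envelope check of (12) and (13), the small computation below uses the design point that Section 4.2 later reports (\(\lambda_{MR}\approx 1550\) nm, a Q-factor of 3100, and \(N_{levels}=2^{7}\) for 8-bit parameters with sign handled separately); it is a sketch, not part of the simulator.

```python
import math

lambda_mr = 1550.0   # resonant wavelength (nm), Section 4.2 design point
q_factor = 3100      # Q-factor from the device exploration in Section 4.2
n_levels = 2 ** 7    # 8-bit parameters, sign represented separately

fwhm = lambda_mr / q_factor                        # (5): FWHM = lambda_res / Q
r_tune = 2 * fwhm                                  # tunable range = 2 x FWHM
snr_required = 10 * math.log10(n_levels / r_tune)  # rearranged from (12)

print(f"FWHM = {fwhm:.2f} nm, R_tune = {r_tune:.2f} nm")
print(f"SNR required > {snr_required:.1f} dB")     # ~21.1 dB, consistent with the
                                                   # ~21.3 dB reported in Sec. 4.2
```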
### GHOST architecture design
As illustrated in Figure 4, the main units in the _GHOST_ architecture (aggregate, combine, update) are divided into \(V\) execution lanes. During inference, each lane is assigned one output vertex to process, in parallel with all the other lanes. The aggregate block gathers all neighbor vertices and the associated edge data, and performs a reduce function for each of the assigned output vertices. The combine block then applies a linear transformation on each aggregated vertex feature vector \(h_{v}^{a}\). Finally, the update block applies a non-linear activation function to obtain the updated vertex feature vectors \(h_{v}\). At the start of processing a GNN model, when processing the first layer, the aggregate block reads the graph data from the vertex and edge buffers in the ECU, which interface directly with main memory. As updated vertex data is computed, it is placed in the intermediate vertex buffer and thus, the aggregate block would read the vertex data when processing the next layers from the intermediate vertex buffer as needed.
#### 3.3.1 Aggregate Block
As most graphs can be extremely sparse and irregular, regulating their memory accesses to improve performance is a challenge. _GHOST_ alleviates this bottleneck through employing a "buffer and partition" optimization technique which is explained in Section 3.4.1. In this technique, the source and destination vertices in the input graph are split into blocks of \(N\) and \(V\). The aggregate block is thus composed of \(N\) edge control units, \(V\) gather units, and \(V\) reduce units. In each cycle, the \(V\) execution lanes in _GHOST_ are assigned to one output vertex group at a time. The edge control units fetch \(N\) input nodes simultaneously and then each unit forwards its fetched node and edge data to all gather units to convert the vertex data to analog signals, which are used to tune the MRs in the reduce units. The corresponding edge data is used by the gather units to define whether an input vertex is a neighbor of the assigned output vertex. Accordingly, the total delay of the aggregate block is dependent on the node with the largest number of neighbors.
The reduce unit is an optical unit configured as a coherent summation block. Each row in the reduce unit is assigned a feature from the vertex feature vectors, while each column is assigned a neighbor input vertex (see Figure 5(a)). The rows and columns have the sizes \(R_{r}\) and \(R_{c}\), respectively. Thus, one reduce unit can aggregate \(R_{r}\) features of \(R_{c}\) neighbor vertices. Different VCSEL and waveguide colors in the figure represent different optical wavelengths. Each VCSEL source generates an optical signal which is split into \(R_{c}\) signals. These \(R_{c}\) signals are then passed through the MR bank to imprint the neighbor nodes' feature values onto the signals. As explained in Section 2.3, summation of two values occurs when the different waveguides carrying signals with the same wavelength meet and they undergo interference. Since for a particular vertex the number of its neighbors may be greater than \(R_{c}\), multiple mappings of different source vertices may be needed. Accordingly, the output of each row in the reduce unit is converted to an analog signal using a PD, and used to tune the last MR in each lane (Figure 5(a)) such that the sum output from that cycle is added to the feature values in the next cycle.
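This multi-cycle accumulation can be modelled behaviourally as below (using the \(R_{r}=18\), \(R_{c}=7\) sizes selected later in Section 4.3; the sketch only mirrors the arithmetic, not the optics, and the names are ours):

```python
import numpy as np

def reduce_unit(h_v, neighbour_feats, R_r=18, R_c=7):
    """Behavioural model of one reduce unit (coherent summation array).

    h_v:             (R_r,) features of the assigned output vertex.
    neighbour_feats: (n, R_r) features of its n neighbour vertices.
    Per mapping cycle, the unit sums up to R_c neighbours across the
    R_r feature lanes; when n > R_c, the partial sum is carried over
    (via the last MR in each lane) into the next cycle's summation.
    """
    assert h_v.shape == (R_r,) and neighbour_feats.shape[1] == R_r
    partial = h_v.copy()                             # h_v^a starts from h_v
    for start in range(0, len(neighbour_feats), R_c):
        chunk = neighbour_feats[start:start + R_c]   # one mapping of <= R_c sources
        partial = partial + chunk.sum(axis=0)        # coherent summation per lane
    return partial                                   # h_v^a = h_v + sum over neighbours

h_v = np.random.rand(18)
neigh = np.random.rand(23, 18)     # a vertex with 23 neighbours
out = reduce_unit(h_v, neigh)      # ceil(23 / 7) -> 4 mapping cycles
assert np.allclose(out, h_v + neigh.sum(axis=0))
```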
The organization of the reduce unit allows for the flexibility of implementing a wide range of reduce operations, which encompass most, if not all, GNN models. After the summation step, as explained above, the output of the reduce unit is \((h_{v}^{a}=h_{v}+\sum_{u=0}^{n}h_{u})\). The last MR after the coherent summation block is used to implement the mean operation, where the output summation value is adjusted by the MR such that it is multiplied by \(\frac{1}{n}\) (with \(n\) the number of neighbors), resulting in a reduce unit output of \((h_{v}^{a}=h_{v}+\frac{1}{n}\sum_{u=0}^{n}h_{u})\). Further, for implementing other reduce operations, such as maximum, the reduce unit includes an optical comparator [35]. An example of one lane in the reduce unit in that case is shown at the top of Figure 5(a). By activating blockers \(1,3,4,6,7,9,10\), and \(12\), the output of the reduce unit becomes the maximum value among all nodes. The optical output from all the rows in the reduce unit is then combined into a single waveguide. This optical waveguide is connected directly to transform units in the combine block (discussed in the next subsection) to undergo the needed linear transformations. The same output values may need to be passed to the transform unit multiple times, depending on the size of the weight matrix. Accordingly, as the
Figure 5: (a) Reduce unit showing the needed changes in each feature lane to support the max aggregation operation using an optical comparator; (b) detailed view of transform unit; (c) detailed view of activate unit.
output from the reduce unit is passed directly to the transform units, it will be converted to the digital domain and buffered.
#### 3.3.2 Combine Block
The combine block accumulates the results from the aggregate block and performs a linear transformation using the learned weight parameters. This generates a new, more expressive representation that captures the important structural information of the graph. Additionally, the linear transformation usually results in a reduced dimensionality for the vertex data representation, making the model more efficient, as the dimensionality of some graphs can be very large (as shown later in Table 2). The transform unit's linear transformation is computed in the optical domain using MR bank arrays, as shown in Figure 5(b). Since linear transformation operations in GNNs are mainly MVMs, where the feature vector of the vertex being computed is multiplied by the weight matrix, as discussed in Section 2.3.2, such operations can be computed in the optical domain using MR bank arrays: the weight values are passed to the transform unit as analog signals to tune each MR, while the feature vector values are imprinted onto the optical signals in the waveguide from the reduce units. Unlike the MR array used in reduce units to perform summation operations, as presented in Subsection 3.3.1, multiplications are computed using non-coherent silicon photonics, where multiple optical signals with different wavelengths are multiplexed using WDM into a single waveguide and each MR is adjusted to operate at one of the wavelengths used. Accordingly, each feature value output from the reduce units and imprinted onto one optical signal's wavelength is multiplied by the weight values in the transform units. The number of rows in a reduce unit, \(R_{r}\), is equal to the number of columns in a transform unit, such that the number of optical signals in the waveguide is equal to the number of MRs in one row of the transform unit. The number of rows in a transform unit, however, is \(T_{r}\).
As Batch Normalization (BN) is commonly used in many GNNs after the linear transformation, _GHOST_ leverages broadband MR devices to perform BN in the optical domain, where the BN parameter is used to tune the broadband MRs to adjust the optical signals as needed to reflect the BN operation. The efficiency of performing BN optically with this configuration was demonstrated in [35]. It is important to note that the BN unit can be bypassed if not needed. The output from each row is then accumulated using Balanced Photodetectors (BPDs). BPDs are photodetectors that have two separate arms for the same waveguide, one for positive and the other for negative signal polarities. This allows them to accommodate both positive and negative parameter values by detecting the absolute difference between the two signals: the BPD separately sums the output signals on the positive and negative arms, and then subtracts the negative-arm output from the positive-arm output to obtain the net difference signal.
If the size of the rows in the weight matrix is smaller than or equal to the number of columns in the transform bank array, and only one mapping for each weight matrix row is needed, the output from the transform unit after the BPDs is passed directly to the activate units. Otherwise, the output needs to be converted to the digital domain and buffered until all needed values are computed and accumulated, and then passed to the activate units in the update block (discussed in the next subsection). _GHOST_'s versatility in adapting to different model architectures and sizes, without always having to convert values to the digital domain, greatly reduces the latency and power costs associated with ADCs and buffering.
#### 3.3.3 Update Block
The update block is composed of \(V\) update units to apply a non-linear activation function to the output from the transform units. The work in [36] demonstrated how semiconductor optical amplifiers (SOAs) can be exploited to implement multiple non-linear functions such as _RELU_, _sigmoid_, and _tanh_. For example, when the gain in an SOA is adjusted to a value close to 1, the behavior resembles the _RELU_ operation. Accordingly, such non-linear operations are implemented optically, resulting in considerably improved performance. The analog signals output from the transform units are used to directly drive VCSELs, generating optical signals with the output value imprinted onto their amplitude, which are then passed through the SOA-based non-linear unit. For the non-linear activation functions that are harder to implement optically (such as softmax), a digital activation unit, such as the one described in [37], can be integrated to accommodate these functions using Look-Up Tables (LUTs) and simple digital circuits, such as add and subtract. One update unit consists of \(T_{r}\) rows of activate units, in compliance with the number of rows used in each transform unit. Accordingly, the output from each row of the transform unit corresponds to one value in the vertex data vector, as shown in Figure 5(c).
### Orchestration and scheduling optimizations
_GHOST_ supports four main optimizations for efficient orchestration and scheduling of GNNs, namely 1) graph buffering and partitioning, 2) execution pipelining and scheduling, 3) weight DAC sharing, and 4) workload balancing. While performing computations in the optical domain already offers significant performance and energy benefits, efficient optimizations targeting improved memory bandwidth utilization and enhanced execution flow are imperative for designing a scalable and robust GNN accelerator.
#### 3.4.1 Graph buffering and partitioning
Retrieving the entire graph from memory and processing it all at the same time entails tremendous memory bandwidth, resources, and computational costs. Hence, dividing the graph into several partitions and executing them separately is a widely used GNN optimization. But utilizing this approach alone can lead to increased latency as, in the worst case, while processing one vertex, it might necessitate loading another partition for each of its neighbor vertices. _GHOST_, on the other hand, uses a modified partitioning algorithm that builds on the one presented in [23]. As information regarding the graph edges is input, the adjacency matrix for the graph is generated. Columns of the adjacency matrix are identified as destination/output vertices while rows are the source/input vertices. Output and input vertices are then partitioned into groups of sizes \(V\) and \(N\), respectively. Consequently, the edges data are also grouped into \(V\times N\) chunks. For each partition where the group \(V_{i}\) is assigned to the vertex processing lanes, if for input nodes \(N_{i}\), the corresponding group of edges contains one or more connected edges, \(N_{i}\) is prefetched and assigned to the edge control units, while all-zero blocks are skipped entirely. Generating the partition matrices and determining the memory access and fetching order is done once offline, as part of a graph preprocessing step. This significantly reduces sparsity in the graph and the need for complex techniques to deal with performing sparse operations. It also helps overcome the GNN's memory bottleneck and eases greatly the traffic and access frequency to and from the main memory.
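A simplified sketch of this offline partitioning step is shown below (the actual algorithm builds on [23] and also fixes the memory access and fetch order; the dense-matrix representation and the names here are purely illustrative):

```python
import numpy as np

def partition_schedule(adj, N, V):
    """Illustrative version of the offline partitioning pre-processing.

    adj: dense 0/1 adjacency matrix, rows = source/input vertices,
         columns = destination/output vertices.
    Returns, for each output-vertex group V_i, the list of input-vertex
    groups N_j that contain at least one connected edge; all-zero
    edge blocks are skipped entirely.
    """
    n_src, n_dst = adj.shape
    schedule = {}
    for vi, c in enumerate(range(0, n_dst, V)):
        fetch_list = []
        for nj, r in enumerate(range(0, n_src, N)):
            block = adj[r:r + N, c:c + V]
            if block.any():               # prefetch N_j only if the block
                fetch_list.append(nj)     # contains at least one edge
        schedule[vi] = fetch_list
    return schedule

adj = (np.random.rand(100, 100) < 0.02).astype(int)   # a sparse toy graph
print(partition_schedule(adj, N=20, V=20))
```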
#### 3.4.2 Execution pipelining and scheduling
_GHOST_ can efficiently adapt the pipelining, computation scheduling, and execution ordering depending on the GNN model, as each model can require a different sequence of execution. For example, GCNs usually mandate having all nodes gathered first, reduced, transformed, and then updated. In contrast, GATs initially compute the attention coefficients, which involves gathering the nodes, performing linear transformations, and then applying a non-linear activation. In this case, the reduce step is performed at the end.
Given the buffering and partitioning optimization discussed in Section 3.4.1, _GHOST_ performs pipelining at two levels of granularity. The first pipelining technique entails pipelining the operations within one output vertex group \(V_{i}\). Initially as the output vertices are assigned to the execution lanes, all of their neighbor nodes are gathered. In case of models such as GCN, GraphSAGE, and GIN, which perform aggregation first and then the transform and update, _GHOST_ pipelines the reduce, transform, and update executions. As soon as \(R_{c}\) (number of columns in a reduce unit) input vertices are gathered, the reduce units are initiated. Transform units are activated when \(R_{r}\) (number of rows in a reduce unit) feature values are ready and output from the reduce units, without the need to wait for all feature values to be computed. Similarly, the update units begin updating the vertex data directly after \(T_{r}\) (number of rows in a transform unit) values are linearly transformed. For the second pipelining technique between different vertex groups, the operations for a group \(V_{i+1}\), are pipelined with those of group \(V_{i}\). This scheduling approach ensures that the initial reduce for \(V_{i+1}\) is activated after the last reduce for group \(V_{i}\). The pipelining model is illustrated in Figure 6(a).
For other GNN models such as GATs, where transforming the vertex data occurs prior to the aggregation, the pipelining model is tailored to fit the model's specifications. Figure 6(b) shows an example of pipelining for a GAT model. For pipelining with one output vertex group, as the transformation of each vertex fetched by the edge control units is independent of the other vertices, gather operations are pipelined with the transform operations without the
need to wait till all needed partitions are fetched and processed. Hence, the first group of input vertices is gathered, and the first linear transformation of multiplying the input with matrix \(W\) is handled by transform units, followed by attention vector multiplication. The output is then concatenated with transformed output vertices updated with a _leakyRELU_ activation function by the activate units. When all the neighbor vertices \(\in V\) are transformed, softmax is computed. Reduce is then computed at the end after the second transformation. Alternatively, when processing all output vertex groups \(V\), the gather operations for the next output vertex group \(V_{i+1}\) are pipelined with the transform and update operations for vertex output group \(V_{i}\).
#### 3.4.3 Weight DAC sharing
A key characteristic of GNNs is that the same set of weights is applied across all vertices during processing. Accordingly, when all transform units in _GHOST_ are operating at the same speed and processing vertices with the same feature sizes, the same weight values can be shared among all transform units. _GHOST_ leverages this idea to implement a weight DAC sharing optimization. DAC devices are needed in _GHOST_ to convert the digital values of weights that are read from the buffers in the ECU into analog signals. These analog signals are then used to tune optical MR devices to perform the linear transformations in each transform unit. The DAC sharing optimization shares DACs across weights, thereby reducing the total number of DAC devices required, which is a significant factor in the power and latency budget of silicon-photonic accelerators, as normally one DAC device would be needed for each MR. With DAC sharing, \(V\) MRs (one in each of the \(V\) transform units) share the same DAC device, and thus the total number of DAC devices is reduced by a factor of \(\frac{\#MRs\ in\ combine\ block}{\#MRs\ in\ one\ transform\ unit}\).
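To make the savings concrete, a rough illustration is given below (a sketch assuming the sharing applies to the weight-imprinting MRs of the transform units; the configuration values come from the design space exploration in Section 4.3):

```python
# V = 20 transform units, each with a T_r x R_r MR bank array
# (R_r = 18, T_r = 17), per the configuration selected in Section 4.3.
V, R_r, T_r = 20, 18, 17
mrs_per_transform_unit = R_r * T_r            # weight MRs in one unit
mrs_in_combine_block = V * mrs_per_transform_unit

dacs_without_sharing = mrs_in_combine_block   # one DAC per weight MR
dacs_with_sharing = mrs_per_transform_unit    # each DAC shared by the V units
print(dacs_without_sharing, dacs_with_sharing,
      dacs_without_sharing // dacs_with_sharing)  # 6120 306 20 -> 20x fewer DACs
```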
#### 3.4.4 Workload balancing
Exploiting the buffer and partitioning optimization discussed in Section 3.4.1 implies having certain units/blocks in idle states during specific times. Due to some graphs' irregularity, the number of neighbors for each vertex within the same output vertex group \(V_{i}\) can vary considerably. As a result, when processing a GCN model for example, some gather units will be waiting in an idle state until the gather unit assigned the highest-degree vertex receives all its neighbor vertices. Our workload balancing optimization allows each lane to operate at its own rate, without the need to wait while other lanes with higher-degree vertices are still gathering their needed vertices. Accordingly, when such lanes complete processing their assigned vertices, the workload of the other lanes is split among the completed lanes. As a result, this can notably reduce the overall latency of the executing GNN model, especially when dealing with highly sparse and irregular graphs.
### Programming model
_GHOST_'s programming model is based on GReTA [19], an abstraction model tailored to processing a broad family of GNNs. GReTA utilizes four stateless User-Defined Functions (UDFs) to break down computation in each GNN layer, namely gather, reduce, transform, and activate. These UDFs are then performed in a series of three main execution phases: aggregate, combine, and update. Algorithm 1 explains the execution flow of the GReTA programming model as implemented by _GHOST_. _GHOST_ takes as input the graph(s) as a set of edges and vertices represented by the notation _G(V,E)_, where edges are defined as two lists of vertices: destination/output (\(V\)), and source/input vertices (\(U\)).
Figure 6: _GHOST_ pipelining within one output vertex group \(V_{i}\) (top), and the pipelining between output vertex groups \(V_{1}\), \(V_{2}\), where \(V_{1}\) requires input vertices in \(N_{1}\) while \(V_{2}\) requires \(N_{1}\) and \(N_{2}\), for (a) GCN, GraphSAGE, GIN; and (b) GAT.
The aggregate phase iterates over all edges and invokes the gather and reduce UDFs. The _Gather()_ UDF collects each output vertex feature vector (\(h_{v}\)), its neighbor input vertices' feature vectors (\(h_{u}\) for all \(u\in N(v)\)), and the edge features (\(h_{u,v}\)) associated with the current output vertex being processed, and prepares a message value. The _Reduce()_ UDF collects the messages output by gather that relate to the same output vertex and reduces them into a single value. The combine phase invokes the _Transform()_ UDF, which takes in all the accumulated values for each vertex, employs the learned weight parameters, and performs a linear transformation. Lastly, the update phase invokes the _Activate()_ UDF and computes a non-linear activation function for each output vertex to generate the updated feature vectors for all vertices.
```
Input:  Graph G(V, E), input vertex feature data h_u, output vertex feature data h_v,
        edge feature data h_{u,v}, weights W.
Output: Updated vertex feature data h_v'.
// Edges Accumulate Phase
for each (u, v) in E:
    h_v^a = Reduce(h_v^a, Gather(h_u, h_v, h_{u,v}))
// Vertices Accumulate Phase
for each v in V:
    h_v^c = Transform(h_v^a, W)
// Update Vertices Phase
for each v in V:
    h_v' = Activate(h_v^c)
```
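A software analogue of this execution flow is sketched below, with the four UDFs instantiated for a GCN-style layer (sum-reduce, linear transform, ReLU; these particular UDF bodies are our illustrative choices and are not fixed by GReTA or by _GHOST_):

```python
import numpy as np

# UDFs instantiated for a GCN-style layer (illustrative choices).
def gather(h_u, h_v, h_uv):                  # prepare a message per edge
    return h_u
def reduce_udf(acc, msg):                    # fold messages into one value
    return acc + msg
def transform(h_acc, W):                     # linear transform, shared weights
    return h_acc @ W
def activate(h_c):                           # non-linear update (ReLU)
    return np.maximum(h_c, 0.0)

def greta_layer(V, E, H, H_edge, W):
    """Software analogue of Algorithm 1: aggregate, combine, update."""
    h_a = {v: H[v].copy() for v in V}        # h_v^a initialised with h_v
    for (u, v) in E:                         # Edges Accumulate Phase
        h_a[v] = reduce_udf(h_a[v], gather(H[u], H[v], H_edge.get((u, v))))
    h_c = {v: transform(h_a[v], W) for v in V}   # Vertices Accumulate Phase
    return {v: activate(h_c[v]) for v in V}      # Update Vertices Phase

V = [0, 1, 2]
E = [(0, 1), (2, 1), (1, 0)]
H = {v: np.random.rand(4) for v in V}
W = np.random.rand(4, 2)
print(greta_layer(V, E, H, {}, W))
```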
## 4 Experimental Results
### Simulation setup
To evaluate our proposed _GHOST_ accelerator, we developed a comprehensive simulator in Python to estimate the power and latency of the accelerator. _GHOST_ was simulated with attention to both software mapping and hardware mapping. For the software mapping, graph data is used to generate the partition matrix and retrieve needed information about the graph, such as the maximum number of neighbors in each partition. Then, we consider the layer-wise mapping and operation of each GNN model, and the architectural requirement for the mapping. For the hardware mapping, we modeled optoelectronic and electronic devices and circuits, and composed these into the blocks of the accelerator architecture. Compact models were used to analyze the losses associated with the device operation and also to determine device latency. The performance and energy estimates of all buffers used in _GHOST_ were obtained using CACTI [38]. However, since CACTI only supports down to 20 nm technology, the obtained latency and energy values were scaled down to 7 nm using the set of scaling relations from [40]. The off-chip DRAM memory considered employs HBM2 [41] with a size of 8GB, and it was simulated using DRAMsim3 [39]. The maximum bandwidth required to accommodate the largest graph dataset across all the different GNN models used in our experiments is 174.4 GB/s (the HBM2 memory system can support a maximum bandwidth of 256 GB/s [41]). The ECU is composed of 4 main buffers: input vertices buffer (128KB), output vertices buffer (128KB), edges buffer (256KB), and weights buffer (128KB). The memory bandwidth and access latencies of off-chip memory and on-chip buffers have all been accurately modeled using the methodology mentioned and are considered in all our simulations. For the softmax circuit needed for the GAT model using the update block, the design with a LUT and a maximum frequency of 294 MHz from [37] was utilized.
Table 1 displays the optoelectronic device and circuit parameters that were used during _GHOST_'s simulation-based analysis. Various factors were taken into account for assessing photonic signal losses, including waveguide propagation loss (1 dB/cm), splitter loss (0.13 dB [42]), combiner loss (0.9 dB [42]), MR through loss (0.02 dB [44]),
MR modulation loss (0.72 dB [45]), EO tuning loss (6 dB/cm [29]), and TO tuning power (27.5 mW/FSR [28]). Also, as the number of wavelengths and waveguide length increase, so does the MR count, photonic loss, and required laser power consumption. Thus, the laser power required for each source used with multiple wavelengths in our architecture is modeled as follows:
\[P_{laser}-S_{detector}\geq P_{photo\_loss}+10\times\log_{10}N_{L}, \tag{14}\]
where \(P_{laser}\) is the laser power (dBm), \(S_{detector}\) is the PD sensitivity (dBm), \(N_{L}\) is the number of laser sources/wavelengths, and \(P_{photo\_loss}\) is the total optical loss (dB) due to the factors discussed.
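Equation (14) lends itself to a small sizing helper (a sketch: the link path and the PD sensitivity below are assumptions for illustration, while the per-component losses are the values listed earlier in this section):

```python
import math

def required_laser_power_dbm(sensitivity_dbm, optical_loss_db, n_wavelengths):
    """Minimum per-source laser power (dBm), rearranged from Eq. (14)."""
    return sensitivity_dbm + optical_loss_db + 10 * math.log10(n_wavelengths)

# Illustrative loss budget for one link, summing the per-component losses
# listed above: 0.5 cm of waveguide, one splitter, one combiner, 18 MR
# through passes, and one modulation (the path itself is hypothetical).
loss_db = 1.0 * 0.5 + 0.13 + 0.9 + 18 * 0.02 + 0.72

# A PD sensitivity of -20 dBm is an assumed value, not specified in the text.
print(f"{required_laser_power_dbm(-20.0, loss_db, 18):.2f} dBm")  # ~ -4.84 dBm
```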
A diverse set of GNN models, tasks, and graph datasets were used in our analysis. The following GNN models were considered: GCN [12], GraphSAGE [13], GIN [14], and GAT [15]. Each model processed four different graph datasets with the properties outlined in Table 2. The node-classification graph datasets (Cora, PubMed, Citeseer, Amazon) were processed with GCN, GraphSAGE, and GAT, while GIN was used for processing the graph-classification datasets (Proteins, Mutag, BZR, IMDB-binary). Further, GCN and GraphSAGE were implemented with two layers, while the MLP in GIN was implemented with eight layers. For the GAT model, two layers were implemented with the first one leveraging eight attention heads while the second used one attention head. The PyTorch Geometric library [16] was used to train and analyze each model's accuracy as shown in Table 3. Our analysis indicated that 8-bit model quantization results in comparable algorithmic accuracy to models with full (32-bit) precision; thus, we targeted the acceleration of 8-bit precision GNN models.
In the following subsections, we present results from our analyses and experiments to determine optimal photonic device level configurations in _GHOST_ (Subsection 4.2), the optimal values for _GHOST_'s architectural parameters, which were discussed in Section 3.3, including \(V\), \(N\), \(R_{r}\), \(R_{c}\), and \(T_{r}\), (Subsection 4.3), sensitivity analysis to assess the impact of the orchestration and scheduling optimizations that were discussed in Section 3.4 (Subsection 4.4), and comparison with GNN accelerators proposed in prior work (Subsection 4.5).
### Device-level analysis
To ensure error-free photonic device operation in _GHOST_, we must reduce various noise sources in the analog domain and ensure a higher SNR than that dictated by (12) (see Section 3.2). We performed our device-level optimization analysis using optoelectronic simulation tools from Ansys Lumerical [34]. For obtaining the operational characteristics of the active MRs, i.e., MRs with EO tuning as opposed to passive MRs without them, we used the FDTD, CHARGE, MODE solver, and INTERCONNECT tools [34]. FDTD was used to obtain the operational characteristics of the passive MR. The doping levels for the tuning junction's p and n doped regions were assumed to be 4e19 in our CHARGE simulations, to obtain the carrier distribution in the ring under various voltage conditions (0 V to 10 V range). Using the relationship between carrier distribution and voltage obtained from CHARGE, we performed MODE solver
| Device | Latency | Power |
| --- | --- | --- |
| EO tuning [29] | 20 ns | 4 \(\mu\)W/nm |
| TO tuning [28] | 4 \(\mu\)s | 27.5 mW/FSR |
| VCSEL [10] | 0.07 ns | 1.3 mW |
| Photodetector [10] | 5.8 ps | 2.8 mW |
| SOA [10] | 0.3 ns | 2.2 mW |
| DAC (8-bit) [46] | 0.29 ns | 3.1 mW |
| ADC (8-bit) [47] | 0.82 ns | 3.1 mW |

Table 1: Parameters considered for _GHOST_ analysis
| Model | Dataset | Accuracy (32-bit) | Accuracy (8-bit) |
| --- | --- | --- | --- |
| GCN | Cora | 88.70% | 88.90% |
| GCN | PubMed | 97.40% | 87.30% |
| GCN | Citeseer | 74.90% | 74.40% |
| GCN | Amazon | 94.00% | 93.70% |
| GraphSAGE | Cora | 71.70% | 70.80% |
| GraphSAGE | PubMed | 77.50% | 76.90% |
| GraphSAGE | Citeseer | 63.30% | 65.00% |
| GraphSAGE | Amazon | 77.10% | 76.90% |
| GAT | Cora | 78.30% | 77.90% |
| GAT | PubMed | 76.70% | 77.90% |
| GAT | Citeseer | 70.20% | 69.10% |
| GAT | Amazon | 94.38% | 94.64% |
| GIN | Proteins | 74.00% | 73.40% |
| GIN | Mutag | 94.74% | 94.74% |
| GIN | BZR | 65.85% | 65.85% |
| GIN | IMDB-binary | 77.00% | 73.00% |

Table 3: GNN model performance at 32-bit and 8-bit precision
| Dataset | (avg) #Nodes | (avg) #Edges | #Features | #Labels | #Graphs |
| --- | --- | --- | --- | --- | --- |
| Cora | 2,708 | 10,565 | 1,433 | 7 | 1 |
| PubMed | 19,717 | 88,651 | 500 | 3 | 1 |
| Citeseer | 3,327 | 9,104 | 3,703 | 6 | 1 |
| Amazon | 7,650 | 238,162 | 745 | 8 | 1 |
| Proteins | 39 | 73 | 3 | 2 | 1,113 |
| Mutag | 18 | 40 | 143 | 2 | 188 |
| BZR | 34 | 38 | 189 | 2 | 405 |
| IMDB-binary | 20 | 193 | 136 | 2 | 1,000 |

Table 2: Graph dataset parameters
simulations to obtain the shift in effective refractive index (\(n_{eff}\)) over the voltage range. Finally, the data from these simulations was used in our INTERCONNECT simulations to obtain the operational characteristics of the MR under different biasing voltages. These analyses were performed over a range of design parameters for the MR and \(\lambda_{MR}\) values. From these simulations, we obtained our ideal MR design to have: ring and input waveguide widths of 450 nm, a radius of 10 \(\mu\)m, a gap of 300 nm, and a Q-factor of 3100. Using these parameters and (12), we can calculate the required SNR to be 21.3 dB.
The data from these tools was used for crosstalk and SNR analysis, as mentioned previously in Section 3.2. Depending on the SNR requirements, the gap between the input and the ring waveguide and the width of the waveguides (input and ring) were explored and adjusted to achieve the required trade-off between the Q-factor and the SNR at the output of the design, as per (12). \(N_{levels}\) in our design is fixed from our software-level quantization to be \(2^{7}\), since we consider 8-bit quantization in our models (see Section 3.2 for a discussion on this).
Using the device operational characteristics and the noise models from Section 3.2, we perform a sweep to determine the viable MR bank sizes for our coherent circuits (\(P_{noise}=P_{hom\_noise}\)) and non-coherent circuits (\(P_{noise}=P_{het\_noise}+P_{hom\_noise}\)), the results of which are shown in Figures 7(a) and 7(b).
For coherent MR banks, we need to sweep the wavelength to be used in the circuit, the number of MRs, and the SNR for the design. The cutoff SNR is shown as a red plane in the figure. From Figure 7(a), we can observe that it is possible to have up to 20 MRs in the coherent summation circuit, when the resonant wavelength is 1520 nm, while satisfying the SNR requirements (red plane). Similarly, an exploration for non-coherent circuits was also conducted, and the results are shown in Figure 7(b). The number of rings (MRs) along the x-axis is 2 times the number of wavelengths in the waveguide, as we need two MR banks to perform multiply and accumulate operations. We considered the first wavelength to be 1550 nm and used a channel spacing of 1 nm between wavelengths. The red line in Figure 7(b) indicates the cut-off SNR. From this analysis, we determined that the waveguide can host 36 MRs, or 18 wavelengths (1550 nm to 1568 nm), for non-coherent operation. These results were used to size and design the MR banks within the main architectural blocks of _GHOST_, to meet SNR goals while maximizing performance.
### Architecture design space exploration
The _GHOST_ architecture relies on five main parameters, as outlined in Section 3: \(N,V,R_{r},R_{c}\) and \(T_{r}\). \(N\) refers to the number of edge control units or the size of each input vertices group in the partition matrix, while \(V\) refers to the number of execution lanes, which is also the size of each output node group in the partition matrix. \(R_{c}\) is the number of columns and \(R_{r}\) is the number of rows in the coherent sum MR array in the reduce units, which is also the number of columns for the MR bank array in the transform units. Lastly, \(T_{r}\) is the number of rows for the MR bank array in the transform units. To identify the optimal configuration for _GHOST_, which is determined by the combination of \([N,V,Rr,Rc,Tr]\) that offers the lowest EPB/GOPS (where EPB is energy-per-bit and GOPS is giga-operations-per-second), we conducted a detailed design space exploration as shown in Figure 7(c). Using the _GHOST_ simulator described in Section 4.1, the EPB and GOPs values were obtained for each GNN model and each accompanying graph dataset for a wide set of
Figure 7: Design space exploration for (a) coherent and (b) non-coherent MR banks for _GHOST_ architecture; (c) Architectural design-space exploration for _GHOST_, to find the optimal \([N,V,Rr,Rc,Tr]\) configuration with the best EPB/GOPS. The best configuration, [20,20,18,7,17] is shown with the green star.
possible values for \([N,V,Rr,Rc,Tr]\). The average EPB/GOPS values across all the GNN models and datasets for each set of parameters were then obtained, and the optimal configuration [20, 20, 18, 7, 17] was identified as the one with the lowest EPB/GOPS value.
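Schematically, the selection amounts to an argmin over the swept configurations (here `simulate` is a stand-in for the _GHOST_ simulator described in Section 4.1, returning an (EPB, GOPS) pair for one configuration on one workload; the function itself is hypothetical):

```python
import itertools

def best_configuration(simulate, workloads, N_vals, V_vals, Rr_vals, Rc_vals, Tr_vals):
    """Pick the [N, V, Rr, Rc, Tr] with the lowest EPB/GOPS averaged
    over all (GNN model, dataset) workloads."""
    best, best_score = None, float("inf")
    for cfg in itertools.product(N_vals, V_vals, Rr_vals, Rc_vals, Tr_vals):
        scores = []
        for wl in workloads:
            epb, gops = simulate(cfg, wl)   # stand-in for the GHOST simulator
            scores.append(epb / gops)
        avg = sum(scores) / len(scores)
        if avg < best_score:
            best, best_score = cfg, avg
    return best   # e.g., [20, 20, 18, 7, 17] for the sweep in Figure 7(c)
```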
### Orchestration and scheduling optimization analysis
We conducted a sensitivity analysis to assess the impact of each of the orchestration and scheduling optimizations described in Section 3.4. The normalized energy results are shown in Figure 8. The baseline configuration does not utilize any of the optimizations, and each gather unit requests the needed neighbor vertices sequentially from the ECU. The Buffer and Partition (BP), Pipelining (PP), and DAC weight sharing (DAC_Sharing) optimizations and their viable combinations were explored in this analysis. For Workload Balancing (WB), implementing it in isolation was found to be inefficient, as the memory and buffer accesses are not synchronized and occur sequentially and on demand. For instance, the processing lane that handles the lowest-degree vertex may not be the first to finish execution, as the order of memory access is also a critical factor. Moreover, employing WB necessitates having each lane possibly operating at different speeds, making it difficult to utilize the weight DAC sharing optimization. Therefore, implementing WB in isolation is impractical, and we only show results of considering WB alongside BP and PP to observe its benefits.
The results shown in Figure 8 are normalized to the energy consumption of the baseline model. As can be observed, employing BP, PP, and DAC sharing optimizations simultaneously results in the least energy values for all the GNN models as well as all the graph datasets used. On average, when using DAC sharing combined with BP and PP, the energy consumption is reduced by 4.94x compared to the baseline. On the other hand, using BP, PP, and WB reduces the overall energy consumption by 2.92x. Hence, while _GHOST_ supports all four optimizations described in Section 3.4, we leverage BP, PP, and DAC sharing for our _GHOST_ configuration that is used in the subsequent sections for comparisons with other GNN accelerators.
Figure 8 also indicates that the different optimization techniques have different impacts across the various GNN models and graph datasets used. The optimizations have a greater impact on datasets with a large number of vertices and a very high degree of sparsity (e.g., PubMed). This demonstrates the scalability of _GHOST_ to larger and more complex graphs and how the optimization techniques can alleviate the expensive memory and computational costs associated with GNN inference.
Another key observation is the effect of BP and PP when processing different graphs. When processing larger graphs such as Cora, PubMed, Citeseer, and Amazon, PP results in lower energy consumption than BP. Conversely, for datasets used with GIN, BP leads to lower energy values. While the graph datasets used with GIN are each composed of multiple graphs, each individual graph in the datasets is considerably smaller than the other graphs used with GCN, GraphSAGE and GAT. Accordingly, as pipelining is determined based on each individual graph, the impact of PP with small graphs diminishes. On the other hand, BP reduces sparsity and the number of memory accesses. Consequently, as we need to offload an entire graph from memory every time _GHOST_ starts processing a new graph, employing BP results in notable energy reductions.
Figure 8: Impact of each orchestration and scheduling optimization on normalized energy consumption in _GHOST_.
### GHOST architecture component-wise performance analysis
To understand the performance of the major blocks in the GHOST architecture, we present a breakdown in Figure 9 in terms of latency when processing each GNN model and graph dataset for each of the main blocks: aggregate, combine, and update. It is evident from Figure 9 that the performance and contribution of each block to the overall inference latency depend on the GNN model being processed and the graph dataset used. In general, aggregate consumes more than half of the latency budget when processing GCN and GS models. This is mainly because these models operate on graph datasets with large feature vectors and relatively high node degrees. Accordingly, the aggregate phase takes longer, as all neighbor node groups from the partitions matrix (from the graph buffering and partitioning technique in Section 3.4.1) need to be fetched and added. On the other hand, while GAT processes the same graph datasets, it follows a different execution ordering and pipelining, as explained in Section 3.4.2. The GAT model used in our experiments is composed of eight attention heads, which are computed using the combine block. Moreover, the softmax function performed by the update block and included in the GAT computations is more time-consuming than the non-linear RELU activation used in the GCN and GS models. The aggregation is performed only once at the end. Consequently, the high latencies observed when processing GATs are mainly attributed to the combine and update phases. Lastly, the graph datasets processed by the GIN model are multiple graphs used for graph classification tasks. However, each single graph is much smaller and possesses lower node degrees than the other graphs used with GCN, GS, and GAT. Accordingly, since a smaller number of neighbor nodes need to be reduced, the aggregate phase is not as time-consuming as with other models, and the performance bottleneck can be attributed to the combine phase.
### Comparison with computing platforms and state-of-the-art GNN accelerators
_GHOST_ is compared against multiple computing platforms and state-of-the-art GNN hardware accelerators: GRIP [23], HyGCN [22], EnG [21], HW_ACC [20], ReGNN [24], ReGraphx [25], Google TPU v4, Intel Xeon CPU, and NVIDIA A100 GPU. We used power, latency, and energy values reported for the selected accelerators, and directly obtained results from executing models on the GPU, CPU, and TPU platforms to estimate the EPB and GOPS for each model and graph dataset.
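For context, both comparison metrics can be reconstructed from reported power, latency, and workload size. The helper below shows the arithmetic we have in mind; it is an illustrative sketch of the metric definitions, not code taken from any of the compared platforms.

```python
def epb_and_gops(power_w, latency_s, total_ops, bits_processed):
    """Derive the two comparison metrics from reported/measured numbers.

    EPB  = total inference energy (joules) per bit of data processed.
    GOPS = operations executed per second of latency, in giga-ops.
    """
    energy_j = power_w * latency_s            # energy for one inference
    epb = energy_j / bits_processed
    gops = (total_ops / latency_s) / 1e9
    return epb, gops

# e.g., a hypothetical 18 W accelerator finishing 4e9 ops over 2e9 bits in 1 ms:
# epb, gops = epb_and_gops(18.0, 1e-3, 4e9, 2e9)
```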
We compared each hardware accelerator on the models it supports, as outlined in the corresponding paper. For the models used in our analysis (Table 3), the GRIP and HyGCN accelerators support processing GCN, GraphSAGE, and GIN; EnG supports GCN and GraphSAGE; and HW_ACC supports GCN and GAT. The ReRAM-based hardware accelerators, ReGNN and ReGraphx, were used for the GCN and GraphSAGE model comparisons. As mentioned, one of the key attributes of _GHOST_ is its versatility in accommodating a diverse set of GNN models, enabling it to support the different models used in our analysis.
Figure 9: Breakdown analysis results showing the impact of each block in GHOST on the performance for each GNN model and graph dataset.
#### 4.6.1 Throughput comparison
Figure 10 shows the GOPS throughput comparison for _GHOST_ with the computing platforms and GNN hardware accelerators considered. Our accelerator achieves on average 102.3x, 325.3x, 40.5x, 10.2x, 12.6x, 150.6x, 1699.0x, 1567.5x, 584.4x better GOPS when compared to GRIP, HyGCN, EnG, HW_ACC, ReGNN, ReGraphx, TPU, CPU, and GPU, respectively. While _GHOST_ demonstrates promising improvements across all GNN models and datasets, the largest GOPS improvements are observed with the GIN graph datasets used for graph classification tasks. Across all datasets, processing GIN yielded on average 87.4x more GOPS when compared to the GNN hardware accelerators and 2168.9x when compared to the GPU, CPU and TPU. This can be attributed to the small sizes of the graphs in each GIN dataset. These results are also consistent with those in Section 4.4, which illustrated that the partitioning optimization had the greatest impact on processing the graphs associated with the GIN model, leading to significant speedup. These findings highlight _GHOST_'s proficiency in handling diverse graph processing tasks. There are also significant GOPS improvements with _GHOST_ for the GraphSAGE model, where it performed on average 100.9x better than the other GNN accelerators and 1743.1x better than the GPU, CPU and TPU. This showcases how our accelerator efficiently handles complex models, overcoming the complexity associated with supporting the sampling technique used in GraphSAGE.
#### 4.6.2 Energy Efficiency Comparison
The EPB comparison results for _GHOST_ with the computing platforms and GNN accelerators considered are shown in Figure 11. On average, _GHOST_ attains 11.1x, 60.5x, 3.8x, 85.9x, 15.7x, 313.7x, 24276.7x, 6178.8x, 2585.3x lower EPB compared to GRIP, HyGCN, EnG, HW_ACC, ReGNN, ReGraphx, TPU, CPU and GPU, respectively. Our accelerator exhibits lower EPB values across all the GNN models and the graph datasets. In particular, GCN, which is the most widely used GNN model, achieved the lowest EPB values, with an average EPB reduction of 116.7x with _GHOST_ when compared to the GNN hardware accelerators, and an average reduction of 7120.4x in comparison to GPU, CPU and TPU. This is due to GCN's uniform operation profile, which processes each vertex independently based only on its immediate neighbors, without considering other vertices in the graph. Overall, the improved energy efficiency can be explained in terms of _GHOST_'s significantly lower-latency operation in the optical domain, as well as its relatively low power consumption of 18W compared to the other hardware accelerators and compute platforms.
Figure 10: Throughput comparison between GPU, CPU, TPU, GNN hardware accelerators and _GHOST_.
#### 4.6.3 EPB/GOPS Comparison
The primary motivation for the design of _GHOST_ is to obtain both energy-efficient and high-throughput GNN acceleration. Accordingly, to showcase _GHOST_'s performance and energy improvements together, we show the EPB/GOPS comparison for _GHOST_ against the computing platforms and GNN accelerators from prior work in Figure 12. It can be seen that _GHOST_ achieves substantially lower EPB/GOPS values across all models and datasets. On average, _GHOST_ attains 2.7e3x, 190.3e3x, 197.2x, 1.9e3x, 1.9e3x, 90.1e3x, 121.4e6x, 22.8e6x, 4.8e6x lower EPB/GOPS compared to GRIP, HyGCN, EnG, HW_ACC, ReGNN, ReGraphx, TPU, CPU, and GPU, respectively. These results demonstrate how the cross-layer optimizations employed in _GHOST_ across the circuit, device, and architecture levels, along with performing most of the GNN operations in the optical domain, help to mitigate the various GNN inference challenges.
## 5 Conclusion
In this paper, we presented the first silicon photonic GNN accelerator, called _GHOST_. In comparison to nine computing platforms and state-of-the-art GNN accelerators, our proposed accelerator exhibited throughput improvements of at least 10.2x and energy-efficiency improvements of at least 3.8x. These results demonstrate the promise of _GHOST_ in terms of energy-efficiency and high-throughput inference acceleration for GNNs. This work focused on the hardware architecture design with silicon photonics and employed various device-, circuit-, and architecture-level optimizations. When combined with software optimization techniques to reduce the high memory requirements in GNNs, we expect that even better throughput and energy efficiency can be achieved. Furthermore, while various device and circuit design optimizations were employed in _GHOST_, several silicon photonic challenges can still be tackled to further improve throughput and energy efficiency. Such challenges include exploring alternate non-volatile optical memory cells [48] to minimize the needed digital buffering and opto-electric conversions. Another challenge hindering the progress of silicon photonics is the effect of fabrication process variations (FPVs) on the devices used [49]. Several techniques can be used to alleviate the impact of FPV, such as optical channel remapping, and intra-channel wavelength tuning [7].
Figure 11: EPB comparison between GPU, CPU, TPU, GNN hardware accelerators and _GHOST_.
Figure 12: EPB/GOPS comparison between GPU, CPU, TPU, GNN hardware accelerators and _GHOST_. |
2310.12457 | MuseGNN: Interpretable and Convergent Graph Neural Network Layers at
Scale | Among the many variants of graph neural network (GNN) architectures capable
of modeling data with cross-instance relations, an important subclass involves
layers designed such that the forward pass iteratively reduces a
graph-regularized energy function of interest. In this way, node embeddings
produced at the output layer dually serve as both predictive features for
solving downstream tasks (e.g., node classification) and energy function
minimizers that inherit desirable inductive biases and interpretability.
However, scaling GNN architectures constructed in this way remains challenging,
in part because the convergence of the forward pass may involve models with
considerable depth. To tackle this limitation, we propose a sampling-based
energy function and scalable GNN layers that iteratively reduce it, guided by
convergence guarantees in certain settings. We also instantiate a full GNN
architecture based on these designs, and the model achieves competitive
accuracy and scalability when applied to the largest publicly-available node
classification benchmark exceeding 1TB in size. | Haitian Jiang, Renjie Liu, Xiao Yan, Zhenkun Cai, Minjie Wang, David Wipf | 2023-10-19T04:30:14Z | http://arxiv.org/abs/2310.12457v1 | # MuseGNN: Interpretable and Convergent Graph Neural Network Layers at Scale
###### Abstract
Among the many variants of graph neural network (GNN) architectures capable of modeling data with cross-instance relations, an important subclass involves layers designed such that the forward pass iteratively reduces a graph-regularized energy function of interest. In this way, node embeddings produced at the output layer dually serve as both predictive features for solving downstream tasks (e.g., node classification) and energy function minimizers that inherit desirable inductive biases and interpretability. However, scaling GNN architectures constructed in this way remains challenging, in part because the convergence of the forward pass may involve models with considerable depth. To tackle this limitation, we propose a sampling-based energy function and scalable GNN layers that iteratively reduce it, guided by convergence guarantees in certain settings. We also instantiate a full GNN architecture based on these designs, and the model achieves competitive accuracy and scalability when applied to the largest publicly-available node classification benchmark exceeding 1TB in size.
## 1 Introduction
Graph neural networks (GNNs) are a powerful class of deep learning models designed specifically for graph-structured data. Unlike conventional neural networks that primarily operate on independent samples, GNNs excel in capturing the complex cross-instance relations modeled by graphs (Hamilton et al., 2017; Kearnes et al., 2016; Kipf and Welling, 2017; Velickovic et al., 2018). Foundational to GNNs is the notion of message passing (Gilmer et al., 2017), whereby nodes iteratively gather information from neighbors to update their representations. In doing so, information can propagate across the graph in the form of node embeddings, reflecting both local patterns and global network effects, which are required by downstream tasks such as node classification.
Among many GNN architectures, one promising subclass is based on graph propagation layers derived to be in a one-to-one correspondence with the descent iterations of a graph-regularized energy function (Chen et al., 2022; Ahn et al., 2022; Yang et al., 2021; Ma et al., 2021; Gasteiger et al., 2019; Pan et al., 2020; Zhang et al., 2020; Zhu et al., 2021; Xue et al., 2023). For these models, the layers of the GNN forward pass compute increasingly refined approximations to a minimizer of the aforementioned energy. Importantly, if such energy minimizers possess interpretable/explainable properties or inductive biases, the corresponding GNN architecture naturally inherits them, unlike traditional GNN constructions that may be less transparent. More broadly, this association between the GNN forward pass and optimization can be exploited to introduce targeted architectural enhancements (e.g., forming explainable predictions, robustly handling graphs with spurious edges, etc.). Borrowing from (Chen and Eldar, 2021; Yang et al., 2022), we
will henceforth refer to models of this genre as _unfolded_ GNNs, given that the layers are derived from a so-called unfolded (in time/iteration) energy descent process.
Despite their merits w.r.t. interpretability, unfolded GNNs face non-trivial scalability challenges. This is in part because they can be constructed with arbitrary depth (i.e., arbitrary descent iterations) while still avoiding undesirable oversmoothing effects (Oono and Suzuki, 2020; Li et al., 2018), and the computational cost and memory requirements of this flexibility are often prohibitively high. Even so, there exists limited prior work explicitly tackling the scalability of unfolded GNNs, or providing any complementary guarantees regarding convergence on large graphs. Hence most unfolded GNN models are presently evaluated on relatively small benchmarks.
To address these shortcomings, we propose a scalable unfolded GNN model that incorporates efficient subgraph sampling within the fabric of the requisite energy function. Our design of this model is guided by the following three desiderata: (i) maintain the characteristic interpretable properties and extensible structure of full-graph unfolded GNN models, (ii) using a consistent core architecture, preserve competitive accuracy across datasets of varying size, including the very largest publicly-available benchmarks, and (iii) do not introduce undue computational or memory overheads beyond what is required to train the most common GNN alternatives. Our proposed model, which we will later demonstrate satisfies each of the above, is termed _MuseGNN_ in reference to a GNN architecture produced by the _minimization_ of an _unfolded sampling_-based _energy_. Building on background motivations presented in Section 2, our MuseGNN-related contributions herein are three-fold:
* In Sections 3 and 4, we expand a widely-used unfolded GNN framework to incorporate offline sampling into the architecture-inducing energy function design itself, as opposed to a post hoc application of sampling to existing GNN methods. The resulting model, which we term MuseGNN, allows us to retain the attractive interpretability and extensibility properties of unfolded GNNs, with the scalability of canonical GNN architectures.
* We analytically demonstrate in Section 5 that MuseGNN possesses desirable convergence properties regarding both upper-level (traditional model training) and lower-level (interpretable energy function descent) optimization processes. In doing so we increase our confidence in reliable performance when moving to new problem domains that may deviate from known benchmarks.
* Finally, in Section 6 we provide complementary empirical support that MuseGNN performance is stable in practice, preserving competitive accuracy and scalability across task size. En route, we achieve SOTA performance w.r.t. homogeneous graph models applied to the largest, publicly-available node classification datasets from OGB and IGB exceeding 1TB.
## 2 Background and Motivation
This section provides background details regarding existing unfolded GNN architectures. Later, we discuss why GNNs formed in this way are relevant, followed by their scalability challenges.
### Interpretable GNN Architectures from Unfolded Optimization
Notation.Let \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) denote a graph with \(n=|\mathcal{V}|\) nodes and edge set \(\mathcal{E}\). We define \(D\) and \(A\) as the degree and adjacency matrices of \(\mathcal{G}\) such that the corresponding graph Laplacian is \(L=D-A\). Furthermore, associated with each node is both a \(d\)-dimensional feature vector, and a \(d^{\prime}\)-dimensional label vector, the respective collections of which are given by \(X\in\mathbb{R}^{n\times d}\) and \(T\in\mathbb{R}^{n\times d^{\prime}}\).
GNN Basics.Given a graph defined as above, canonical GNN architectures are designed to produce a sequence of node-wise embeddings \(\{Y^{(k)}\}_{k=1}^{K}\) that are increasingly refined across \(K\) model layers according to the rule \(Y^{(k)}=h(Y^{(k-1)};W,A,X)\in\mathbb{R}^{n\times d}\). Here \(h\) represents a function that updates \(Y^{(k-1)}\) based on trainable model weights \(W\) as well as \(A\) (graph structure) and optionally \(X\) (input node features). To facilitate downstream tasks such as node classification, \(W\) may be trained, along with additional parameters \(\theta\) of any application-specific output layer \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d^{\prime}}\), to minimize a loss of the form
\[\mathcal{L}(W,\theta)=\sum_{i=1}^{n^{\prime}}\mathcal{D}\bigg{(}g\left[Y^{(K)}\left( W\right)_{i};\theta\right],T_{i}\bigg{)}. \tag{1}\]
In this expression, \(Y_{i}^{(K)}\equiv Y^{(K)}\left(W\right)_{i}\) reflects the explicit dependency of GNN embeddings on \(W\) and the subscript \(i\) denotes the \(i\)-th row of a matrix. Additionally, \(n^{\prime}\) refers to the number of nodes in \(\mathcal{G}\) available for training, while \(\mathcal{D}\) is a discriminator function such as cross-entropy. We will sometimes refer to the training loss (1) as an _upper-level_ energy function to differentiate its role from the lower-level energy defined next.
Moving to Unfolded GNNs. An _unfolded_ GNN architecture ensues when the functional form of \(h\) is explicitly chosen to align with the update rules that minimize a second, _lower-level_ energy function denoted as \(\ell(Y)\). In this way, it follows that \(\ell\left(Y^{(k)}\right)=\ell\left(h\left[Y^{(k-1)};W,A,X\right]\right)\leq \ell\left(Y^{(k-1)}\right)\). This restriction on \(h\) is imposed to introduce desirable inductive biases on the resulting GNN layers that stem from properties of the energy \(\ell\) and its corresponding minimizers (see Section 2.2).
While there are many possible domain-specific choices for \(\ell(Y)\) in the growing literature on unfolded GNNs (Chen et al., 2022; Ahn et al., 2022; Yang et al., 2021; Ma et al., 2021; Gasteiger et al., 2019; Pan et al., 2020; Zhang et al., 2020; Zhu et al., 2021; Xue et al., 2023), we will focus our attention on a particular form that encompasses many existing works as special cases, and can be easily generalized to cover many others. Originally inspired by Zhou et al. (2003) and generalized by Yang et al. (2021) to account for node-wise nonlinearities, we consider
\[\ell(Y):=\|Y-f(X;W)\|_{F}^{2}+\lambda\operatorname{tr}(Y^{\top}LY)+\sum_{i=1}^ {n}\zeta(Y_{i}), \tag{2}\]
where \(\lambda>0\) is a trade-off parameter, \(f\) represents a trainable base model parameterized by \(W\) (e.g., a linear layer or MLP), and the function \(\zeta\) denotes a (possibly non-smooth) penalty on individual node embeddings. The first term in (2) favors embeddings that resemble the input features as processed by the base model, while the second encourages smoothness across graph edges. And finally, the last term is included to enforce additional node-wise constraints (e.g., non-negativity).
We now examine the form of \(h\) that can be induced when we optimize (2). Although \(\zeta\) may be non-smooth and incompatible with vanilla gradient descent, we can nonetheless apply proximal gradient descent in such circumstances (Combettes and Pesquet, 2011), leading to the descent update
\[Y^{(k)}=h\left(Y^{(k-1)};W,A,X\right)=\operatorname{prox}_{\zeta}\left(Y^{(k- 1)}-\alpha\left[(I+\lambda L)Y^{(k-1)}-f(X;W)\right]\right), \tag{3}\]
where \(\alpha\) controls the learning rate and \(\operatorname{prox}_{\zeta}(u):=\operatorname{arg\,min}_{y}\frac{1}{2}\|u-y \|_{2}^{2}+\zeta(y)\) denotes the proximal operator of \(\zeta\). Analogous to more traditional GNN architectures, (3) contains a \(\zeta\)-dependent nonlinear activation applied to the output of an affine, graph-dependent filter. With respect to the former, if \(\zeta\) is chosen as an indicator function that assigns an infinite penalty to any embedding less than zero, then \(\operatorname{prox}_{\zeta}\) reduces to standard ReLU activations; we will adopt this choice for MuseGNN.
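For concreteness, a minimal numpy sketch of iterating (3) in this ReLU case is given below; the step size, \(\lambda\), number of layers, and the initialisation \(Y^{(0)}=f(X;W)\) are illustrative choices rather than tuned values.

```python
import numpy as np
import scipy.sparse as sp

def unfolded_forward(A, FX, lam=1.0, alpha=0.1, K=8):
    """Run K proximal gradient steps (3) on the energy (2).

    A  : (n, n) sparse adjacency matrix of the graph.
    FX : (n, d) base-model output f(X; W).
    With zeta the nonnegativity indicator, prox_zeta is elementwise ReLU.
    """
    deg = np.asarray(A.sum(axis=1)).ravel()
    L = sp.diags(deg) - A                        # graph Laplacian L = D - A
    Y = FX.copy()                                # initialise at Y^(0) = f(X; W)
    for _ in range(K):
        grad = Y + lam * (L @ Y) - FX            # gradient of the smooth part
        Y = np.maximum(Y - alpha * grad, 0.0)    # ReLU = prox of the indicator
    return Y
```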
### Why Unfolded GNNs?
There are a variety of reasons why it can be advantageous to construct \(h\) using unfolded optimization steps as in (3) or related. Of particular note here, unfolded GNN node embeddings inherit interpretable characteristics of the lower-level energy, especially if \(K\) is sufficiently large such that \(Y^{(K)}\) approximates a minimum of \(\ell(Y)\). For example, it is well-known that an oversmoothing effect can sometimes cause GNN layers to produce node embeddings that converge towards a non-informative constant (Oono and Suzuki, 2020; Li et al., 2018). However, it is transparent how to design \(\ell(Y)\) such that minimizers do not degenerate in this way, and hence the induced \(h\) as in (3) can be iterated indefinitely without risk of oversmoothing, even while spreading information across the graph.
As another representative scenario, in many customer-facing applications, it is desirable to provide explanations for why a particular prediction was made by a GNN model (Ying et al., 2019). In these situations, the embeddings of unfolded GNNs contain additional information when contextualized w.r.t. their role in
the lower-level energy. For example, if, for some node \(i\), \(Y_{i}^{(K)}\) is very far from \(f(X_{i};W)\) relative to other nodes, it increases our confidence that subsequent predictions may be based on network effects rather than the local feature \(X_{i}\). More broadly, the interpretability of unfolded GNNs facilitates bespoke modifications of (3) for addressing heterophily and/or adversarial attacks (Yang et al., 2021), handling long-range dependencies (Xue et al., 2023), forming connections with the gradient dynamics of physical systems (Di Giovanni et al., 2023) and deep equilibrium models (Gu et al., 2020; Yang et al., 2022), and exploiting the robustness of boosting algorithms (Sun et al., 2019), among other things.
### Scalability Challenges and Candidate Solutions
As benchmarks continue to expand in size (see Table 1 below), it is no longer feasible to conduct full-graph GNN training using only a single GPU or even single machine, especially so for relatively deep unfolded GNNs. To address such GNN scalability challenges, there are presently two dominant lines of algorithmic workarounds. The first adopts various sampling techniques to extract much smaller computational subgraphs upon which GNN models can be trained in mini-batches. Examples include neighbor sampling (Hamilton et al., 2017; Ying et al., 2018), layer-wise sampling (Chen et al., 2018; Zou et al., 2019), and graph-wise sampling (Chiang et al., 2019; Zeng et al., 2021). For each of these, there exist both online and offline versions, where the former involves randomly sampling new subgraphs during each training epoch, while the latter (Zeng et al., 2021; Gasteiger et al., 2022) is predicated on a fixed set of subgraphs for all epochs; MuseGNN will ultimately be based on a novel integration and analysis of this approach as will be introduced in Section 3.
The second line of work exploits the reuse of historical embeddings, meaning the embeddings of nodes computed and saved during the previous training epoch. In doing so, much of the recursive forward and backward computations required for GNN training, as well as expensive memory access to high-dimensional node features, can be reduced (Chen et al., 2017; Fey et al., 2021; Huang et al., 2023). This technique has recently been applied to training unfolded GNN models (Xue et al., 2023), although available performance results do not cover large-scale graphs, there are no convergence guarantees, and no code is available (at least as of the time of this submission) for enabling further comparisons. Beyond this, we are unaware of prior work devoted to the scaling and coincident analysis of unfolded GNNs.
## 3 Graph-Regularized Energy Functions Infused with Sampling
Our goal in this section is to introduce a convenient family of energy functions formed by applying graph-based regularization to a set of subgraphs that have been sampled from a given graph of interest. For this purpose, we first present the offline sampling strategy which will undergird our approach, followed by details of the energy functional form we construct on top of it for use with MuseGNN. Later, we conclude by analyzing special cases of these sampling-based energies, further elucidating relevant properties and connections with full-graph training.
### Offline Sampling Foundation
In the present context, offline sampling refers to the case where we sample a fixed set of subgraphs from \(\mathcal{G}\) once and store them, which can ultimately be viewed as a form of preprocessing step. More formally, we assume an operator \(\Omega:\mathcal{G}\rightarrow\{\mathcal{G}_{s}\}_{s=1}^{m}\), where \(\mathcal{G}_{s}=\{\mathcal{V}_{s},\mathcal{E}_{s}\}\) is a subgraph of \(\mathcal{G}\) containing \(n_{s}=|\mathcal{V}_{s}|\) nodes, of which we assume \(n_{s}^{\prime}\) are target training nodes (indexed from \(1\) to \(n_{s}^{\prime}\)), and \(\mathcal{E}_{s}\) represents the edge set. The corresponding feature and label sets associated with the \(s\)-th subgraph are denoted \(X_{s}\in\mathbb{R}^{n_{s}\times d}\) and \(T_{s}\in\mathbb{R}^{n_{s}^{\prime}\times d^{\prime}}\), respectively.
There are several key reasons we employ offline sampling as the foundation for our scalable unfolded GNN architecture development. Firstly, conditioned on the availability of such pre-sampled/fixed subgraphs, there is no additional randomness when we use them to replace energy functions dependent on \(\mathcal{G}\) and \(X\) (e.g., as in (2)) with surrogates dependent on \(\{\mathcal{G}_{s},X_{s}\}_{s=1}^{m}\). Hence we retain a deterministic energy substructure contributing to a more transparent bilevel (upper- and lower-level) optimization process, as well as more interpretable node-wise embeddings that serve as attendant minimizers. Secondly, offline sampling allows us to conduct formal convergence analysis that is agnostic to the particular sampling operator \(\Omega\). In this
way, we need not compromise flexibility in choosing a practically-relevant \(\Omega\) in order to maintain desirable convergence guarantees. For example, as will be detailed later, ShadowKHop (Zeng et al., 2021) sampling pairs well with our approach and directly adheres to our convergence analysis in Section 5. And lastly, offline sampling facilitates an attractive balance between model accuracy and efficiency within the confines of unfolded GNN architectures. As will be shown in Section 6, we can match the accuracy of full-graph training with an epoch time similar to online sampling methods, e.g., neighbor sampling.
### Energy Function Formulation
To integrate offline sampling into a suitable graph-regularized energy function, we first introduce two sets of auxiliary variables that will serve as more flexible input arguments. Firstly, to accommodate multiple different embeddings for the same node appearing in multiple subgraphs, we define \(Y_{s}\in\mathbb{R}^{n_{s}\times d}\) for each subgraph index \(s=1,\ldots,m\), as well as \(\mathbb{Y}=\{Y_{s}\}_{s=1}^{m}\in\mathbb{R}^{(\sum n_{s})\times d}\) to describe the concatenated set. Secondly, we require additional latent variables that, as we will later see, facilitate a form of controllable linkage between the multiple embeddings that may exist for a given node (i.e., when a given node appears in multiple subgraphs). For this purpose, we define the latent variables as \(M\in\mathbb{R}^{n\times d}\), where each row can be viewed as a shared summary embedding associated with each node in the original/full graph.
We then define our sampling-based extension of (2) for MuseGNN as
\[\ell_{\text{muse}}\left(\mathbb{Y},M\right):=\sum_{s=1}^{m}\left[\|Y_{s}-f(X _{s};W)\|_{F}^{2}+\lambda\operatorname{tr}(Y_{s}^{\top}L_{s}Y_{s})+\gamma\|Y _{s}-\mu_{s}\|_{F}^{2}+\sum_{i=1}^{n_{s}}\zeta(Y_{s,i})\right], \tag{4}\]
where \(L_{s}\) is the graph Laplacian associated with \(\mathcal{G}_{s}\), \(\gamma\geq 0\) controls the weight of the additional penalty factor, and each \(\mu_{s}\in\mathbb{R}^{n_{s}\times d}\) is derived from \(M\) as follows. Let \(I(s,i)\) denote a function that maps the index of the \(i\)-th node in subgraph \(s\) to the corresponding node index in the full graph. For each subgraph \(s\), we then define \(\mu_{s}\) such that its \(i\)-th row satisfies \(\mu_{s,i}=M_{I(s,i)}\); per this construction, \(\mu_{s,i}=\mu_{s^{\prime},j}\) if \(I(s,i)=I(s^{\prime},j)\). Consequently, \(\{\mu_{s}\}_{s=1}^{m}\) and \(M\) represent the same overall set of latent embeddings, whereby the former is composed of repeated samples from the latter aligned with each subgraph.
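For reference, the bracketed per-subgraph term of (4) can be evaluated directly, as in the sketch below for the smooth case \(\zeta=0\); monitoring this quantity across unfolded layers is one way to verify descent empirically. The function is an illustrative sketch, not part of any released implementation.

```python
import numpy as np

def subgraph_energy(Y_s, FX_s, L_s, mu_s, lam, gamma):
    """One bracketed term of (4) for subgraph s, with zeta = 0.

    Y_s, FX_s, mu_s : (n_s, d) embeddings, base-model outputs f(X_s; W),
                      and rows of M gathered via I(s, i), respectively.
    L_s             : (n_s, n_s) subgraph Laplacian.
    """
    fit    = np.sum((Y_s - FX_s) ** 2)           # ||Y_s - f(X_s; W)||_F^2
    smooth = lam * np.sum(Y_s * (L_s @ Y_s))     # = lam * tr(Y_s^T L_s Y_s)
    anchor = gamma * np.sum((Y_s - mu_s) ** 2)   # gamma * ||Y_s - mu_s||_F^2
    return fit + smooth + anchor
```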
Overall, there are three prominent factors which differentiate (4) from (2):
1. \(\ell_{\text{muse}}\left(\mathbb{Y},M\right)\) involves a deterministic summation over a fixed set of sampled subgraphs, where each \(Y_{s}\) is unique while \(W\) (and elements of \(M\)) are shared across subgraphs.
2. Unlike (2), the revised energy involves both an expanded set of node-wise embeddings \(\mathbb{Y}\) as well as auxiliary summary embeddings \(M\). When later forming GNN layers designed to minimize \(\ell_{\text{muse}}\left(\mathbb{Y},M\right)\), we must efficiently optimize over _all_ of these quantities, which alters the form of the final architecture.
3. The additional \(\gamma\|Y_{s}-\mu_{s}\|_{F}^{2}\) penalty factor acts to enforce dependencies between the embeddings of a given node spread across different subgraphs.
With respect to the latter, it is elucidating to consider minimization of \(\ell_{\text{muse}}\left(\mathbb{Y},M\right)\) over \(\mathbb{Y}\) with \(M\) set to some fixed \(M^{\prime}\). In this case, up to an irrelevant global scaling factor and additive constant, the energy can be equivalently reexpressed as
\[\ell_{\text{muse}}\left(\mathbb{Y},M=M^{\prime}\right)\equiv\sum_{s=1}^{m} \left[\|Y_{s}-\left[f^{\prime}(X_{s};W)+\gamma^{\prime}\mu_{s}\right]\|_{F}^{ 2}+\lambda^{\prime}\operatorname{tr}(Y_{s}^{\top}L_{s}Y_{s})+\sum_{i=1}^{n_{s} }\zeta^{\prime}(Y_{s,i})\right], \tag{5}\]
where \(f^{\prime}(X_{s};W):=\frac{1}{1+\gamma}f(X_{s};W)\), \(\gamma^{\prime}:=\frac{\gamma}{1+\gamma}\), \(\lambda^{\prime}:=\frac{\lambda}{1+\gamma}\), and \(\zeta^{\prime}(Y_{s,i}):=\frac{1}{1+\gamma}\zeta(Y_{s,i})\). From this expression, we observe that, beyond inconsequential rescalings (which can be trivially neutralized by simply rescaling the original choices for \(\{f,\gamma,\lambda,\zeta\}\)), the role of \(\mu_{s}\) is to refine the initial base predictor \(f(X_{s};W)\) with a corrective factor reflecting embedding summaries from other subgraphs sharing the same nodes. Conversely, when we minimize \(\ell_{\text{muse}}\left(\mathbb{Y},M\right)\) over \(M\) with \(\mathbb{Y}\) fixed, we find that the optimal \(\mu_{s}\) for every \(s\) is equal to the _mean_ embedding for each constituent node across all subgraphs. Hence \(M^{\prime}\) chosen in this way via alternating minimization has a natural interpretation as grounding each \(Y_{s}\) to a shared average representation reflecting the full graph structure.
This reformulation provides additional entry points for interpretability as well. For example, if optimal solutions are such that \(Y_{s}\) and \(\mu_{s}\) are similar while \(f(X_{s};W)\) is significantly different, we have a likely indication that, at least within this subgraph, graph structure and network effects are more dominant than node features. In contrast, if \(Y_{s}\) and \(f(X_{s};W)\) are similar with \(\mu_{s}\) much less so, feature utility may be selectively strong for a given subgraph; however, if all three quantities are similar the situation is more indicative of strong overall node features. Beyond these considerations, we now consider two limiting special cases of \(\ell_{\text{muse}}\left(\mathbb{Y},M\right)\) that provide complementary contextualization.
### Notable Limiting Cases
We first consider setting \(\gamma=0\), in which case the resulting energy completely decouples across each subgraph such that we can optimize each
\[\ell_{\text{muse}}^{s}(Y_{s}):=\|Y_{s}-f(X_{s};W)\|_{F}^{2}+\lambda\operatorname {tr}(Y_{s}^{\top}L_{s}Y_{s})+\sum_{i=1}^{n_{s}}\zeta(Y_{s,i})\ \ \forall s \tag{6}\]
independently. Under such conditions, the only cross-subgraph dependency stems from the shared base model weights \(W\) which are jointly trained. Hence \(\ell_{\text{muse}}^{s}(Y_{s})\) is analogous to a full graph energy as in (2) with \(\mathcal{G}\) replaced by \(\mathcal{G}_{s}\). In Section 5 we will examine convergence conditions for the full bilevel optimization process over all \(Y_{s}\) and \(W\) that follows from the \(\gamma=0\) assumption. We have also found that this simplified setting performs well in practice; see Appendix B.1 for ablations.
At the opposite extreme when \(\gamma=\infty\), we are effectively enforcing the constraint \(Y_{s}=\mu_{s}\) for all \(s\). As such, we can directly optimize all \(Y_{s}\) out of the model leading to the reduced \(M\)-dependent energy
\[\ell_{\text{muse}}(M):=\sum_{s=1}^{m}\left[\|\mu_{s}-f(X_{s};W)\|_{F}^{2}+ \lambda\operatorname{tr}(\mu_{s}^{\top}L_{s}\mu_{s})+\sum_{i=1}^{n_{s}}\zeta( \mu_{s,i})\right]. \tag{7}\]
Per the definition of \(\{\mu_{s}\}_{s=1}^{m}\) and the correspondence with unique node-wise elements of \(M\), this scenario has a much closer resemblance to full graph training with the original \(\mathcal{G}\). In this regard, as long as a node from the original graph appears in at least one subgraph, then it will have a single, unique embedding in (7) aligned with a row of \(M\).
Moreover, we can strengthen the correspondence with full-graph training via the suitable selection of the offline sampling operator \(\Omega\). In fact, there exists a simple uniform sampling procedure such that, at least in expectation, the energy from (7) is equivalent to the original full-graph version from (2), with the role of \(M\) equating to \(Y\). More concretely, we present the following (all proofs are deferred to Appendix D):
**Proposition 3.1**.: _Suppose we have \(m\) subgraphs \((\mathcal{V}_{1},\mathcal{E}_{1}),\ldots,(\mathcal{V}_{m},\mathcal{E}_{m})\) constructed independently such that \(\forall s=1,\ldots,m,\forall u,v\in\mathcal{V},\operatorname{Pr}[v\in \mathcal{V}_{s}]=\operatorname{Pr}[v\in\mathcal{V}_{s}\ |\ u\in\mathcal{V}_{s}]=p;(i,j)\in\mathcal{E}_{s}\iff i\in \mathcal{V}_{s},j\in\mathcal{V}_{s},(i,j)\in\mathcal{E}\). Then when \(\gamma=\infty\), we have \(\mathbb{E}[\ell_{\text{muse}}(M)]=mp\ \ell(M)\) with the \(\lambda\) in \(\ell(M)\) rescaled to \(p\lambda\)._
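A sampling operator matching the premise of Proposition 3.1 is straightforward to realise, e.g. as in the illustrative sketch below (in our experiments we use ShadowKHop rather than this uniform scheme).

```python
import numpy as np

def uniform_induced_subgraphs(edges, n, m, p, seed=None):
    """Draw m node-induced subgraphs as assumed by Proposition 3.1.

    Each node enters a subgraph independently with probability p, and
    an edge survives iff both of its endpoints do.
    `edges` is a list of (i, j) pairs over nodes 0..n-1.
    """
    rng = np.random.default_rng(seed)
    subgraphs = []
    for _ in range(m):
        keep = rng.random(n) < p                 # independent node inclusion
        nodes = np.flatnonzero(keep)
        es = [(i, j) for (i, j) in edges if keep[i] and keep[j]]
        subgraphs.append((nodes, es))
    return subgraphs
```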
Strengthened by this result, we can more directly see that (7) provides an intuitive bridge between full-graph models based on (2) and subsequent subgraph models we intend to build via (4). Even so, we have found that relatively small \(\gamma\) values nonetheless work well in practice (although in principle it is actually possible to handle \(\gamma=\infty\) using techniques like ADMM (Boyd et al., 2011)).
## 4 From Sampling-based Energies to the MuseGNN Framework
Having defined and motivated a family of sampling-based energy functions vis-a-vis (4), we now proceed to derive minimization steps that will serve as GNN model layers that define MuseGNN forward and backward passes as summarized in Algorithm 1. Given that there are two input arguments, namely \(\mathbb{Y}\) and \(M\), it is natural to adopt an alternating minimization strategy whereby we fix one and optimize over the other, and vice versa.
With this in mind, we first consider optimization over \(\mathbb{Y}\) with \(M\) fixed. Per the discussion in Section 3.2, when conditioned on a fixed \(M\), \(\ell_{\text{muse}}\left(\mathbb{Y},M\right)\) decouples over subgraphs. Consequently, optimization can proceed using subgraph-independent proximal gradient descent, leading to the update rule
\[Y_{s}^{(k)}=\operatorname{prox}_{\zeta}\left[Y_{s}^{(k-1)}-\alpha\left([(1+ \gamma)I+\lambda L_{s}]Y_{s}^{(k-1)}-[f(X_{s};W)+\gamma\mu_{s}]\right)\right], \ \ \forall s. \tag{8}\]
Here the input argument to the proximal operator is given by a gradient step along (4) w.r.t. \(Y_{s}\). We also remark that execution of (8) over \(K\) iterations represents the primary component of a single forward training pass of our proposed MuseGNN framework, as depicted on lines 7-9 of Algorithm 1.
We next turn to optimization over \(M\) which, as mentioned previously, can be minimized by the mean of the subgraph-specific embeddings for each node. However, directly computing these means is problematic for computational reasons, as for a given node \(v\) this would require the infeasible collection of embeddings from all subgraphs containing \(v\). Instead, we adopt an online mean estimator with forgetting factor \(\rho\). For each node \(v\) in the full graph, we maintain a mean embedding \(M_{v}\) and a counter \(c_{v}\). When this node appears in the \(s\)-th subgraph as node \(i\) (where \(i\) is the index within the subgraph), we update the mean embedding and counter via
\[M_{I(s,i)}\leftarrow\frac{\rho c_{I(s,i)}}{c_{I(s,i)}+1}M_{I(s,i)}+\frac{(1- \rho)c_{I(s,i)}+1}{c_{I(s,i)}+1}Y_{s,i}^{(K)},\qquad c_{I(s,i)}\gets c_{I(s,i)}+1. \tag{9}\]
Also shown on line 10 of Algorithm 1, we update \(M\) and \(c\) once per forward pass, which serves to refine the effective energy function observed by the core node embedding updates from (8).
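In code, (9) reduces to a per-node convex combination of the running mean and the fresh embedding, as in the numpy sketch below (assuming, as holds for node-induced subgraphs, that each node appears at most once per subgraph; the variable names are ours).

```python
import numpy as np

def update_summary_embeddings(M, c, node_ids, Y_K, rho=0.9):
    """Apply the online mean update (9) after one subgraph forward pass.

    M        : (n, d) per-node summary embeddings over the full graph.
    c        : (n,) per-node visit counters.
    node_ids : global indices I(s, i) for the nodes of subgraph s.
    Y_K      : (n_s, d) final-layer embeddings Y_s^(K) for this subgraph.
    """
    cs = c[node_ids].reshape(-1, 1).astype(float)
    w_old = (rho * cs) / (cs + 1.0)                 # weight on the running mean
    w_new = ((1.0 - rho) * cs + 1.0) / (cs + 1.0)   # weight on the new embedding
    M[node_ids] = w_old * M[node_ids] + w_new * Y_K
    c[node_ids] += 1
    return M, c
```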
For the backward pass, we compute gradients of
\[\mathcal{L}_{\text{muse}}(W,\theta):=\sum_{s=1}^{m}\sum_{i=1}^{n_{s}^{\prime}} \mathcal{D}\left(g\left[Y_{s}^{(K)}\left(W\right)_{i};\theta\right],T_{s,i}\right) \tag{10}\]
w.r.t. \(W\) and \(\theta\) as listed on line 11 of Algorithm 1, where \(\mathcal{L}_{\text{muse}}(W,\theta)\) is a sampling-based modification of (1). For \(W\) though, we only pass gradients through the calculation of \(Y_{s}^{(K)}\), not the full online \(M\) update; however, provided \(\rho\) is chosen to be sufficiently large, \(M\) will change slowly relative to \(Y_{s}^{(K)}\), such that this added complexity is not necessary for obtaining reasonable convergence.
```
1:  Input: \(\{\mathcal{G}_{s}\}_{s=1}^{m}\): subgraphs, \(\{X_{s}\}_{s=1}^{m}\): features, \(K\): # unfolded layers, \(E\): # epochs
2:  Randomly initialize \(W\) and \(\theta\); initialize \(c\in\mathbb{R}^{n}\) and \(M\in\mathbb{R}^{n\times d}\) to zero
3:  for \(e=1,2,\ldots,E\) do
4:    for \(s=1,2,\ldots,m\) do
5:      \(\mu_{s,i}\gets M_{I(s,i)},\ i=1,2,\ldots,n_{s}\)
6:      \(Y_{s}^{(0)}\gets f(X_{s};W)\)
7:      for \(k=1,2,\ldots,K\) do
8:        Update \(Y_{s}^{(k)}\) from \(Y_{s}^{(k-1)}\) by (8) using \(\mu_{s}\)
9:      end for
10:     Update \(M\) and \(c\) using (9) with \(Y_{s,i}^{(K)},\ i=1,2,\ldots,n_{s}\)
11:     Update \(W\) and \(\theta\) via SGD over \(\mathcal{L}_{\text{muse}}\) from (10)
12:   end for
13: end for
```
**Algorithm 1** Training procedure for MuseGNN
## 5 Convergence Analysis of MuseGNN
Global Convergence with \(\gamma=0\). In the more restrictive setting where \(\gamma=0\), we derive conditions whereby the entire bilevel optimization pipeline converges to a solution that jointly minimizes both lower- and upper-level energy functions in a precise sense to be described shortly. We remark here that establishing convergence is generally harder for bilevel optimization problems relative to more mainstream, single-level alternatives (Colson et al., 2005). To describe our main result, we first require the following definition:
**Definition 5.1**.: Assume \(f(X;W)=XW\), \(\gamma=0\), \(\zeta(y)=0\), \(g(y;\theta)=y\), and that \(\mathcal{D}\) is a Lipschitz continuous convex function. Given the above, we then define \(\mathcal{L}_{\text{muse}}^{(k)}(W)\) as (10) with \(K\) set to \(k\). Analogously, we also define \(\mathcal{L}_{\text{muse}}^{*}(W)\) as (10) with \(Y_{s}^{(K)}\) replaced by \(Y_{s}^{*}:=\operatorname*{arg\,min}_{Y_{s}}\ell_{\text{muse}}^{s}(Y_{s})\) for all \(s\).
**Theorem 5.2**.: _Let \(W^{*}\) be the optimal value of the loss \(\mathcal{L}^{*}_{\text{muse}}(W)\) per Definition 5.1, while \(W^{(t)}\) denotes the value of \(W\) after \(t\) steps of stochastic gradient descent over \(\mathcal{L}^{(k)}_{\text{muse}}(W)\) with diminishing step sizes \(\eta_{t}=O(\frac{1}{\sqrt{t}})\). Then provided we choose \(\alpha\in\left(0,\min_{s}\left\|I+\lambda L_{s}\right\|_{2}^{-1}\right]\) and \(Y^{(0)}_{s}=f(X_{s};W)\), for some constant \(C\) we have that_
\[\mathbb{E}\left[\mathcal{L}^{(k)}_{\text{muse}}(W^{(t)})\right]-\mathcal{L}^{ *}_{\text{muse}}(W^{*})\leq O\left(\frac{1}{\sqrt{t}}+e^{-Ck}\right).\]
Given that \(\mathcal{L}^{*}_{\text{muse}}(W^{*})\) is the global minimum of the combined bilevel system, this result guarantees that we can converge arbitrarily close to it with adequate upper- and lower-level iterations, i.e., \(t\) and \(k\) respectively. Note also that we have made no assumption on the sampling method. In fact, as long as offline sampling is used, convergence is guaranteed, although the particular offline sampling approach can potentially impact the convergence rate; see Appendix C.2 for further details.
Lower-Level Convergence with arbitrary \(\boldsymbol{\gamma}\). We now address the more general scenario where \(\gamma\geq 0\) is arbitrary. However, because of the added challenge involved in accounting for alternating minimization over both \(\mathbb{Y}\) and \(M\), it is only possible to establish conditions whereby the lower-level energy (4) is guaranteed to converge, as follows.
**Theorem 5.3**.: _Assume \(\zeta(y)=0\). Suppose that we have a series of \(\mathbb{Y}^{(k)}\) and \(M^{(k)}\), \(k=0,1,2,\ldots\) constructed following the updating rules \(\mathbb{Y}^{(k)}:=\arg\min_{Y}\ell_{\text{muse}}(\mathbb{Y},M^{(k-1)})\) and \(M^{(k)}:=\arg\min_{M}\ell_{\text{muse}}(\mathbb{Y}^{(k)},M)\), with \(\mathbb{Y}^{(0)}\) and \(M^{(0)}\) initialized arbitrarily. Then_
\[\lim_{k\to\infty}\ell_{\text{muse}}(\mathbb{Y}^{(k)},M^{(k)})=\inf_{\mathbb{Y },M}\ell_{\text{muse}}(\mathbb{Y},M). \tag{11}\]
While technically this result does not account for minimization over the upper-level MuseGNN loss, we empirically find that the entire process outlined by Algorithm 1, including the online mean update, is nonetheless able to converge in practice. See Appendix C.1 for an illustration.
## 6 Experiments
We now seek to empirically show that MuseGNN serves as a reliable unfolded GNN model that:
1. Preserves competitive accuracy across datasets of widely varying size, including the very largest publicly-available graph benchmarks, with a single fixed architecture, and
2. Operates with comparable computational complexity relative to common alternatives that are also capable of scaling to the largest graphs.
For context though, we note that graph benchmark leaderboards often include top-performing entries based on complex compositions of existing models and training tricks, at times dependent on additional features or external data not included in the original designated dataset. Although these approaches have merit, our goal herein is not to compete with them, as they typically vary from dataset to dataset. Additionally, they are more frequently applied to smaller graphs using architectures that have yet to be consistently validated across multiple, truly large-scale benchmarks.
Datasets. We evaluate the performance of MuseGNN on node classification tasks from the Open Graph Benchmark (OGB) [19, 20] and the Illinois Graph Benchmark (IGB) [17]. Table 1 presents the details of these datasets, which are based on homogeneous graphs spanning a wide range of sizes. Of particular note is IGB-full, currently the largest publicly-available graph benchmark, along with a much smaller version called IGB-tiny that we include for comparison purposes. We also point out that while MAG240M is originally a heterogeneous graph, we follow the common practice of homogenizing it [19]; similarly for the IGB datasets, we adopt the provided homogeneous versions.
MuseGNN Design. For _all_ experiments, we choose the following fixed MuseGNN settings: both \(f(X;W)\) and \(g(Y;\theta)\) are 3-layer MLPs, the number of unfolded layers \(K\) is 8 (rationale discussed below), the embedding dimension \(d\) is 512, and the forgetting factor \(\rho\) for the online mean estimation is 0.9. For offline subgraph sampling, we choose a variation on neighbor sampling called ShadowKHop (Zeng et al., 2021), which loosely approximates the conditions of Proposition 3.1. See Appendix A for further details regarding the MuseGNN implementation and hyperparameters.
Baseline Models. With the stated objectives of this section in mind, we compare MuseGNN with GCN (Kipf and Welling, 2017), GAT (Velickovic et al., 2018), and GraphSAGE (Hamilton et al., 2017), in each case testing with both neighbor sampling (NS) (Hamilton et al., 2017) and GNNAutoScale (GAS) (Fey et al., 2021) for scaling to large graphs. As a point of reference, we note that GAT with neighbor sampling is currently the top-performing homogeneous graph model on both the MAG240M and IGB-full leaderboards. We also compare with an analogous full-graph (FG) unfolded GNN (UGNN) with the same architecture as MuseGNN but without scalable sampling.
Accuracy Comparisons. As shown in Table 2, MuseGNN achieves similar accuracy to a comparable full-graph unfolded GNN model on the small datasets (satisfying our first objective from above), while the latter is incapable of scaling to even the mid-sized ogbn-products benchmark. Meanwhile, MuseGNN is generally better than the other GNN baselines, particularly so on the largest dataset, IGB-full. Notably, MuseGNN exceeds the performance of GAT with neighbor sampling, the current SOTA for homogeneous graph models on both MAG240M and IGB-full. Please also refer to Appendix B.1 for MuseGNN accuracy ablations involving \(\gamma\), where we show that while even \(\gamma=0\) achieves strong results, increasing \(\gamma>0\) can lead to further improvement. Likewise, Appendix B.2 contains experiments showing that the improvement of MuseGNN is not merely a product of using ShadowKHop sampling instead of more traditional neighbor sampling with baseline models; there we show that when switched to offline ShadowKHop sampling, the baseline GNNs degrade further, possibly because they are not anchored to an integrated energy.
Timing Comparisons. Turning to model complexity, Table 3 displays the training speed of MuseGNN relative to two common baselines. From these results, we observe that MuseGNN executes with an epoch time similar to these baselines (satisfying our second objective from above).
Convergence Illustration. In Figure 1 we show the empirical convergence of (4) w.r.t. \(\mathbb{Y}\) during the forward pass of MuseGNN models on ogbn-papers100M for differing values of \(\gamma\). In all cases the energy converges within 8 iterations, supporting our choice of \(K=8\) for the experiments.
## 7 Conclusion
In this work, we have proposed MuseGNN, an unfolded GNN model that scales to large datasets by incorporating offline graph sampling into the design of its lower-level, architecture-inducing energy function. In so doing, MuseGNN readily handles graphs with \(\sim 10^{8}\) or more nodes and high-dimensional node features, exceeding 1TB in total size, all while maintaining interpretable layers with concomitant convergence guarantees.
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline
**Dataset** & \(|\mathcal{V}|\) & \(|\mathcal{E}|\) & **Dim.** & **\# Class** & **Dataset Size** \\ \hline ogbn-arxiv (Hu et al., 2020) & 0.17M & 1.2M & 128 & 40 & 182MB \\ IGB-tiny (Khatua et al., 2023) & 0.1M & 0.5M & 1024 & 19 & 400MB \\ ogbn-products (Hu et al., 2020) & 2.4M & 123M & 100 & 47 & 1.4GB \\ ogbn-papers100M (Hu et al., 2020) & 111M & 1.6B & 128 & 172 & 57GB \\ MAG240M (Hu et al., 2021) & 244.2M & 1.7B & 768 & 153 & 377GB\({}^{1}\) \\ IGB-full (Khatua et al., 2023) & 269.3M & 4.0B & 1024 & 19 & 1.15TB \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset details, including node feature dimension (Dim.) and number of classes (# Class). |
2310.02122 | Minimalist Neural Networks training for phase classification in
diluted-Ising models | In this article, we explore the potential of artificial neural networks,
which are trained using an exceptionally simplified catalog of ideal
configurations encompassing both order and disorder. We explore the
generalisation power of these networks to classify phases in complex models
that are far from the simplified training context. As a paradigmatic case, we
analyse the order-disorder transition of the diluted Ising model on several
two-dimensional crystalline lattices, which does not have an exact solution and
presents challenges for most of the available analytical and numerical
techniques. Quantitative agreement is obtained in the determination of
transition temperatures and percolation densities, with comparatively much more
expensive methods. These findings highlight the potential of minimalist
training in neural networks to describe complex phenomena and have implications
beyond condensed matter physics. | G. L. Garcia Pavioni, M. Arlego, C. A. Lamas | 2023-10-03T15:04:38Z | http://arxiv.org/abs/2310.02122v1 | # Minimalist Neural Networks training for Phase Classification in Diluted Ising Models
###### Abstract
In this article, we explore the potential of artificial neural networks, which are trained using an exceptionally simplified catalog of ideal configurations encompassing both order and disorder. We explore the generalisation power of these networks to classify phases in complex models that are far from the simplified training context. As a paradigmatic case, we analyse the order-disorder transition of the diluted Ising model on several two-dimensional crystalline lattices, which does not have an exact solution and presents challenges for most of the available analytical and numerical techniques. Quantitative agreement is obtained in the determination of transition temperatures and percolation densities, with comparatively much more expensive methods. These findings highlight the potential of minimalist training in neural networks to describe complex phenomena and have implications beyond condensed matter physics.
## 1 Introduction
In recent decades, machine learning and neural networks have experienced exponential growth in popularity and application across various fields of knowledge [1; 2; 3; 4; 5]. This is largely due to the increasing availability of data in different domains and the ability to process them more rapidly and efficiently, thanks to the development of specialised software and hardware.
Artificial Neural Networks (ANN) are mathematical models composed by interconnected nodes, called neurons, that process information and learn through the identification of patterns in the data [6]. Machine learning algorithms are based on the idea that computers can learn from data, being explicitly programmed for that purpose.
In the field of physics, machine learning methods and neural networks have been successfully employed to discover knowledge in complex systems [7]. These systems often comprise multiple interacting elements and exhibit emergent behaviours that cannot be predicted solely by considering the individual behaviour of each component [8].
However, having data and resources abundance is not a guarantee of obtaining good results. In some cases, models trained with a large amount of data and tuning parameters (in the order of millions or more in the case of neural networks) can capture intrinsic details of the training data, a phenomenon known as 'overfitting' [9]. The same can occur when there is a scarcity of data, where the model learns all the details of the available data, which, being scarce, lose representativeness of the overall set. This phenomenon can conflict with the main objective of neural networks: their ability to generalise [10].
Generalisation refers to the network ability to capture knowledge that emerges from the training data and is also useful for describing other related contexts. The scale of the problem at hand is a determining factor for the amount of resources required for training, in order to achieve a balance between model accuracy and generalisation capacity.
It is generally accepted that the analysis of complex systems in physics requires significant resources. However, this study shows that, at least for certain types of complex systems found in condensed matter and other related applications, achieving a high degree of generalisation is possible even with deliberately limited resources.
In this work, we investigate the properties of the diluted Ising model on two-dimensional crystalline lattices, which is considered a paradigmatic
complex model in condensed matter physics. The diluted Ising model is a variant of the pure Ising model [11], widely employed in statistical physics to describe some magnetic materials. The Ising model exhibits a second-order transition from a disordered phase (paramagnetic) at high temperatures to an ordered phase (ferromagnetic) at low temperatures. In the transition, the system displays a variety of properties typical of complex systems, including long-range correlations, criticality, and scale invariance, which are distinctly different from the nature of the ferromagnetic and paramagnetic phases.
The diluted Ising model [12, 13] introduces an additional level of complexity to the Ising model by allowing the presence of vacancies. This model proves to be useful in investigating materials with impurities or structural defects, providing a simplified yet effective approach to describe the physics of magnetic systems in the presence of disorder.
A closely related phenomenon to the diluted Ising model is percolation [12], which describes the propagation of a fluid through a porous medium, achieving percolation at a critical porosity. In the context of the diluted Ising model, the percolation transition occurs at a critical spin density, below which the system remains disordered, even at zero temperature. The interplay between dilution and temperature gives rise to a phase diagram where the transition between the ordered and disordered states takes place along a curve whose shape has been investigated using a variety of numerical methods and analytical approximations, as in references [14, 15, 16, 17], among others.
However, the main challenge emerges near the percolation density [12], where singularities predicted by approximate methods arise, severely limiting the predictions of numerical techniques such as Montecarlo simulations. For this reason, the study of the phase transition in the diluted Ising model remains an active area of research, for which a variety of theoretical and numerical methods have been developed to achieve a better understanding of critical phenomena. In this context, we will analyse the capacity of neural networks to determine the order-disorder transition in diluted Ising models on various two-dimensional crystalline lattices.
It is worth noting that a variety of machine learning methods have been applied to Ising models, as in references [18, 19, 20, 21, 22, 23, 24, 25, 26, 27], and to the percolation problem [28, 29, 30, 31, 32, 33, 34]. Regarding percolation, these previous studies primarily investigate the geometric aspects of the problem and are therefore unrelated to our study.
Another distinction is that, in this paper, we focus on the potential of what we refer to as 'minimalist' networks, which efficiently generalize beyond the training context.
The concept of minimalist neural networks is broad, as illustrated in references [35; 36]. In this study, we specifically refer to a training strategy that uses a minimal set of configurations, representing ideal ordered or disordered patterns, to train feed-forward neural networks, such as Dense Neural Networks (DNNs) with a single hidden layer.
The objective of this approach is to develop a trained neural network equipped with a diverse and extensible catalog of ideal configurations, capable of representing the impact of various types of couplings within multiple crystalline lattices, encompassing extreme values at both zero and infinite temperatures. Through this process, the neural network effectively learns to classify these limiting configurations and subsequently extends its capabilities to handle more complex configurations.
How can we evaluate the generalisation capability of the trained minimalist neural networks? One way is to confront them with the task of classifying configurations that were not encountered during training. To achieve this, we employ diluted ferromagnetic Ising models on two different two-dimensional crystalline lattices: the square lattice and the honeycomb lattice. These models contain two essential ingredients that are not present in the ideal training configurations: finite temperature and dilution (vacancies).
By evaluating the network's performance on these classification tasks, we can determine its ability to generalise and apply the learned knowledge to new situations.
To generate samples of the diluted Ising model at finite temperature and vacancy concentration (complementary to the spin concentration), we perform Monte Carlo simulations [37]. Unlike typical usage, in this case we do not use the simulations to train the neural network but rather to validate and test its generalisation capability. From this perspective, it is irrelevant whether the data comes from simulations or experiments, since the network has already been trained on a set of ideal configurations and does not depend on these data for training, although it does rely on them for validation.
In summary, this work presents the use of minimalist neural networks in statistical physics, specifically for the diluted Ising model. Using Monte Carlo simulations and other analytical approximations as benchmarks, we show how these networks, trained with a catalogue of idealised configurations, can describe complex phenomena and accurately determine order-disorder phase transitions in different types of lattices, capturing highly divergent behaviour in regions that pose challenges for other methods, at comparatively low computational cost.
The paper is structured as follows. In Section 2, we provide a concise overview of the models and crystalline lattices examined in this study. This section begins with an exploration of the Ising model, followed by a description of the diluted Ising model and its relationship with the percolation phenomenon.
Section 3 delves into the process of generating the synthetic data catalog, describes the neural network's architecture, and explains how the network is trained using the cataloged data.
Section 4 is devoted to presenting the primary findings of the study, which are divided into three parts. Initially, we scrutinize the neural network's capacity for generalization when categorizing phases within the pure Ising model. This process is then extended to the diluted Ising model, ultimately culminating in an analysis of percolation. Finally, the phase transition line is refined by fitting an approximate model to the neural network's predictions in order to ascertain the percolation density. These steps are systematically explored for both square and honeycomb lattices.
The paper concludes with Section 5, dedicated to summarizing the principal findings of the study. Additionally, to enhance the overall readability of the paper, technical details concerning the generation of synthetic Monte Carlo data for model validation are presented in Appendix A.
Figure 1: The lattices studied in this work. The square lattice can be seen in the left panel, whereas the right panel illustrates the honeycomb lattice. It’s important to note that in both scenarios, we have applied periodic boundary conditions.
## 2 Models and Lattices
In this section, we provide a brief survey of the models and crystalline lattices analyzed in this paper. We start with a description of the Ising model, followed by the diluted Ising model and its connection to percolation.
### Ising model
In the field of solid-state physics, one of the most interesting phenomena is ferromagnetism [38]. This phenomenon emerges in certain metals where a finite fraction of atomic spins spontaneously align in the same direction, thereby generating a macroscopic magnetic field. This aligned phase is commonly referred to as the ferromagnetic phase. The ferromagnetic phase occurs only when the temperature is below a critical value, denoted as \(T_{c}\), also known as the Curie temperature. As the temperature increases beyond this critical value, the atomic spins become randomly oriented, resulting in a net zero magnetic field. This disordered phase is referred to as the paramagnetic phase.
Although ferromagnetism is fundamentally rooted in quantum mechanics, certain aspects of the phenomenon can be effectively described in classical terms, within the framework of the Ising model [39; 40]. This is one of the most studied models in physics due to its exact solution in one and two dimensions, which makes it useful for testing various approximations against analytical results. In addition to its significance in magnetism, the Ising model serves as a paradigm for complex systems with two degrees of freedom. For applications and extensions of the Ising model in various fields, see reference [41].
The Ising model consists of an \(n\)-dimensional lattice comprising \(N\) fixed points or 'sites', each identified by a generic index \(i\). At each site, there is a spin variable denoted as \(S_{i}\), representing an 'up' (1) or 'down' (-1) configuration. The entire system's configuration is characterized by the set of these spin values \(\{S_{i}\}\). In the absence of an external magnetic field, the model's Hamiltonian is defined as
\[\mathcal{H}=-\sum_{i,j}J_{i,j}\epsilon_{i}\epsilon_{j}S_{i}S_{j}, \tag{1}\]
where the sum runs over all pairs of sites \(i,j\). The two-dimensional (\(n=2\)) lattices analyzed in this study are the square and honeycomb lattices, shown in the left and right panels of Figure 1, respectively.
In the case in which the interaction \(J_{i,j}>0\), the spins align in order to minimize the energy defined by the Hamiltonian (1), leading to ferromagnetic behavior. Conversely, for \(J_{i,j}<0\), the energy is minimized when neighboring spins at sites \(i,j\) point in opposite directions, resulting in the formation of Néel order. The term \(\epsilon_{i}=1(0)\) represents the presence (absence) of a spin at site \(i\), making it possible to describe diluted Ising models. In this context, the spin density of the crystal lattice is defined as \(\rho=\sum_{i=1}^{N}\epsilon_{i}/N\), where \(N\) is the number of sites in the lattice. We use periodic boundary conditions to ensure that every site has an equal number of neighbors and maintains a consistent local geometry.
The pure (\(\rho=1\)) 2D square lattice with a uniform interaction strength among first neighbors (\(J_{i,j}=J\)) exhibits an exact solution [42] and undergoes a second-order phase transition at the critical temperature \(T_{c}=\frac{2|J|}{k_{B}\ln(1+\sqrt{2})}\approx 2.269\frac{|J|}{k_{B}}\), where \(k_{B}\) represents Boltzmann's constant. For the pure honeycomb lattice, the critical temperature is also analytically known as \(T_{c}=\frac{2|J|}{k_{B}\ln(2+\sqrt{3})}\approx 1.519\frac{|J|}{k_{B}}\). This transition occurs from a disordered state (paramagnetic phase) at temperatures \(T>T_{c}\) to an ordered state (ferromagnetic phase for \(J>0\)) at temperatures \(T<T_{c}\).
In the 2D Ising model, the second-order phase transition is characterized by an order parameter, chosen to be zero in the paramagnetic phase and non-zero in the ferromagnetic phase. In the ferromagnetic scenario, this order parameter is the magnetization per site
\[m=\frac{1}{N}\sum_{i=1}^{N}S_{i}. \tag{2}\]
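As an illustration of these definitions, the quantities above are simple to evaluate numerically. The following NumPy sketch (ours, for illustration only; the production code of Appendix A is written in C) computes \(m\) of Eq. (2) and the energy per site of Eq. (1) on an \(L\times L\) square lattice with periodic boundaries, encoding a vacancy (\(\epsilon_{i}=0\)) as \(S_{i}=0\):

```python
import numpy as np

def magnetization(S):
    """Magnetization per site, Eq. (2); vacant sites carry S_i = 0."""
    return S.mean()

def energy_per_site(S, J=1.0):
    """Energy per site of Eq. (1) on an L x L square lattice with
    periodic boundaries. Vacancies are encoded as S_i = 0, so the
    product eps_i * eps_j * S_i * S_j vanishes on empty sites.
    Each nearest-neighbour bond is counted once."""
    bonds = S * np.roll(S, 1, axis=0) + S * np.roll(S, 1, axis=1)
    return -J * bonds.sum() / S.size

# A pure ferromagnetic ground state: m = 1 and E/N = -2J (cf. Appendix A).
S = np.ones((40, 40))
print(magnetization(S), energy_per_site(S))  # 1.0 -2.0
```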
### Dilute Ising Model and percolation
An important concept that will be discussed is the introduction of defects in a pure spin lattice. This involves the removal of spins from specific sites (\(\epsilon_{i}=0\)), consequently altering the density \(\rho\) of the spin lattice. In this context, Eq. (1) gives rise to the diluted Ising model.
In the scenario of a dilute lattice (\(\rho<1\)), it is well-established that as the spin concentration \(\rho\) decreases, the critical point (\(T_{c}\)) of the phase transition also decreases. This is to be expected, since the critical temperature depends on the interaction between neighboring spins, which decreases with the dilution.
The percolation density \(\rho_{c}\) is defined as the density below which the system remains disordered at any temperature. Therefore, for values of \(\rho\leq\rho_{c}\), the system remains in the paramagnetic phase regardless of the temperature (\(T\)). In other words, the critical temperature is zero, \(T_{c}(\rho_{c})=0\) [12]. The percolation densities \(\rho_{c}\) for the square and honeycomb lattices have been determined numerically with high precision. Specifically, for the square lattice, \(\rho_{c}\approx 0.59274621(13)\) [43], and for the honeycomb lattice, \(\rho_{c}\approx 0.6962(6)\) [15]. It is important to mention that alternative definitions of percolation density exist, which emphasize the geometric aspects of this phenomenon and are unrelated to a transition temperature. However, these definitions fall outside the scope of the diluted Ising model and are not explored in this study.
The phase diagram of the dilute Ising model in the temperature-dilution plane encompasses the paramagnetic and ferromagnetic phases, divided by
Figure 2: Schematic representation of the configuration catalog utilized in the training process. Red dots indicate spin-up sites, while blue dots denote spin-down sites. The depicted configurations, presented from left to right, encompass ferromagnetic, antiferromagnetic (Néel), and stripe patterns, shown for the square lattice (upper panel) and, for the first two patterns, the honeycomb lattice (lower panel). Additionally, the catalog includes random configurations, representing the paramagnetic state at infinite temperature (not shown here).
a critical transition line. In contrast to the pure case, the exact form of this critical line remains unknown. In the vicinity of the pure case, the transition exhibits a linear behavior, with the critical temperature decreasing as the dilution (i.e., the number of vacancies) increases. However, near a critical dilution value, this pattern abruptly changes, signifying the onset of the percolation transition, where the system fails to order even at zero temperature. In this region, the transition curve presents a logarithmic divergence. Directly capturing this behavior through Monte Carlo simulations is a highly demanding task. However, several numerical and analytical investigations, as reported in references [16; 17], have proposed that the phase transition curve \(T_{c}(\rho)\) satisfies the following relationship
\[\frac{T_{c}(\rho)}{T_{c}(1)}=-\frac{K}{\log(\rho-\rho_{c})}, \tag{3}\]
where \(K\) is a non-universal constant dependent on the crystal lattice.
## 3 Minimalist Neural network training
In this section, we outline the process of creating the synthetic data catalog, the architecture of the neural networks employed, and describe their minimalist training approach utilizing the catalog data.
In typical machine learning phase classification problems, training data is derived from either experimental data or numerical simulations. This approach incurs significant costs and necessitates prior knowledge of the system's phases.
However, in this study, we adopt a distinct approach by employing a catalog of synthetic training data. By 'synthetic', we mean that the spin configurations used for training are explicitly generated, with each lattice site assigned a spin value of either \(+1\) or \(-1\), depending on the desired spin configuration. These configurations represent ideal ordered states at absolute zero temperature and ideal disordered, or paramagnetic, states at infinite temperature. Our objective is to train a neural network using this versatile catalog of ideal configurations, spanning these extreme, ideal cases. In doing so, we enable the neural network to classify more complex scenarios through generalization.
This is important because we will later use these trained networks to classify finite-temperature dilute systems. During training, the neural network never encounters configurations with finite temperature or vacancies as input data. In this sense, our training is minimalist and lacks complex scenarios.
The minimalist training catalogue is composed of the following patterns: all spins pointing up or down, chessboard-like configurations, and horizontally or vertically alternating stripes. These configurations represent ideal orders at zero temperature when the spin couplings are positive (ferromagnetic), negative (antiferromagnetic), and different for each direction, respectively. Additionally, random configurations of spins are also employed, representing paramagnetic states at infinite temperature. A representative sample of these configurations is depicted in the upper and lower panels of Figure 2, corresponding to the square and honeycomb lattices, respectively.
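To make the catalog concrete, a minimal NumPy sketch of its square-lattice entries could read as follows (our illustrative code; the honeycomb patterns of Figure 2 require the two-site unit cell and are omitted here):

```python
import numpy as np

def square_catalog(L, rng=np.random.default_rng(0)):
    """Ideal square-lattice configurations used for training (cf. Figure 2)."""
    i, j = np.indices((L, L))
    return {
        "ferro_up":   np.ones((L, L)),                  # all spins up
        "ferro_down": -np.ones((L, L)),                 # all spins down
        "neel":       (-1.0) ** (i + j),                # chessboard (Néel) order
        "stripes_h":  (-1.0) ** i,                      # horizontal stripes
        "stripes_v":  (-1.0) ** j,                      # vertical stripes
        "random":     rng.choice([-1.0, 1.0], (L, L)),  # T = infinity
    }

patterns = square_catalog(30)   # L = 30, i.e. 900 sites
```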
Concerning the training process and the neural network utilized, the fundamental principles of neural networks, encompassing their architecture and training methods, have become standard practices. Therefore, we focus on delineating the essential architecture of our specific neural network and the generation of training data. For a comprehensive understanding of this topic, we recommend referring to the cited references [6; 44; 45; 46; 47].
Our training approach employs a simple neural network design, aligning with the deliberate restriction of resources, both in the training dataset and in the architecture of the neural network itself.
We employed a two-layer fully-connected neural network (DNN) [6]. The first layer uses the ReLU [47] activation function and consists of \(N\) neurons, where \(N\) represents the number of sites within our spin lattice. The
Figure 3: Neural network’s estimated probabilities (p) as a function of Temperature (T) corresponding to the pure Ising model (\(\rho=1\)) for: a) the square lattice with \(N=40\times 40\) sites. b) the honeycomb lattice with \(N=2\times 28\times 28\) sites. The transition temperature is determined by the intersection of both probabilities, marked by a dashed vertical line. This procedure and notation are consistent throughout the paper.
second layer utilizes the _softmax_ [47] activation function with a neuron count equal to the number of orders for classification. Our choice of cost function is the _cross-entropy_ [46], and for optimization, we utilize _RMSProp_ [47] (Root Mean Squared Propagation). The implementation is carried out in TensorFlow.
In the training process, we generated datasets comprising 9000 synthetic configurations, sampling all the configurations of the catalog in equal proportion. Specifically, for the square lattice, we created synthetic datasets for lattice sizes of \(L\times L\) with \(L\) set to 30 (900 sites) and 40 (1600 sites). Likewise, for the honeycomb lattice, datasets were generated for lattices of \(2\times L\times L\) sites, where \(L\) was chosen as 21 (882 sites) and 28 (1568 sites).
During training, we utilized batches of size 60 and ran for 5 epochs [47]. This sufficed to attain an accuracy (the fraction of correct predictions) approaching 100%, owing to the simplicity of the catalog.
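For reference, a minimal TensorFlow/Keras sketch of this architecture and training loop is given below. The grouping of the catalog into four output classes (ferromagnetic, Néel, stripes, random) and the toy stand-in data are our assumptions for illustration; the architecture, optimizer, loss, batch size and number of epochs follow the text above.

```python
import numpy as np
import tensorflow as tf

L, N, n_classes = 40, 40 * 40, 4   # n_classes: assumed class grouping
rng = np.random.default_rng(0)

# Toy stand-in for the catalog: labels 0..3 = ferro, Néel, stripes, random.
i, j = np.indices((L, L))
base = [np.ones((L, L)), (-1.0) ** (i + j), (-1.0) ** i,
        rng.choice([-1.0, 1.0], (L, L))]
y_train = rng.integers(0, n_classes, 9000)          # equal proportion on average
x_train = np.stack([base[k].ravel() for k in y_train])

model = tf.keras.Sequential([
    tf.keras.layers.Dense(N, activation="relu", input_shape=(N,)),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=60, epochs=5)
```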
## 4 Results
In this section, we assess the performance of our model, which has been trained using synthetic minimalist configurations, on the diluted Ising model at finite temperature and spin density \(\rho\). To generate configuration samples of the diluted Ising model under these conditions, Monte Carlo simulations are employed, following the methodology outlined in reference [37].
It is important to recall that these simulated configurations are not utilized for training the neural network; rather, they serve the purpose of validating and testing its generalization capabilities. From this perspective, the origin of the data, whether from simulations or experiments, becomes irrelevant. The neural network has already been trained on a curated dataset of ideal configurations and does not depend on these additional data for its training phase. However, it does rely on them for validation purposes.
For the sake of clarity and conciseness, we omit the technical details of the Monte Carlo simulations performed to generate the validation samples for our neural network. These details are provided in Appendix A. In this section, we assume that these samples have been generated, and we proceed to describe their utilization.
### Critical temperatures on pure spin-lattices
The generalisation process is evaluated in two stages. First, we analyse the pure Ising model. To do this, we present the neural network with simulated configurations of the model at different temperatures.
At high temperatures (far from the transition), the model exhibits the characteristics of the paramagnetic state (disordered). Therefore, using the available catalogue of ideal configurations, the neural network easily selects the random configuration. The same applies to the ordered states at low temperatures (on the other side of the transition), where the model displays a ferromagnetic character. In this case, the neural network readily associates these states with the catalogue configurations where all spins point either up or down. In both situations, the neural network discards all other configurations in the catalogue that do not correspond to paramagnetic or ferromagnetic orders.
However, as the temperature approaches the transition point, whose value is analytically known in the thermodynamic limit, the network predictions become increasingly imprecise. Although it still assigns a larger probability to the random configuration above the transition and to the ferromagnetic configuration below it, its predictions become less reliable compared to the extreme temperatures.
To determine the transition using a neural network, we define the transition temperature as the value at which the prediction probabilities for the disordered and ordered phases are equal [48].
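In practice, this amounts to locating the crossing of the two averaged probability curves. A minimal sketch of such a routine (ours, with illustrative names), which interpolates the crossing linearly and signals its absence when the system never orders, is:

```python
import numpy as np

def crossing_temperature(T, p_ferro, p_para):
    """T_c estimated as the temperature where p_ferro = p_para.
    Returns None when the curves never cross (no transition, T_c = 0)."""
    d = p_ferro - p_para
    idx = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0]
    if idx.size == 0:
        return None
    k = idx[0]
    # Linear interpolation of d between T[k] and T[k + 1].
    return T[k] - d[k] * (T[k + 1] - T[k]) / (d[k + 1] - d[k])
```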
Figure 4: Neural network’s estimated probabilities (p) as a function of Temperature (T) in the high density regime of the dilute Ising Model, corresponding to: a) The square lattice with \(N=40\times 40\) sites. b) The honeycomb lattice with \(N=2\times 28\times 28\) sites. Curves correspond to \(\rho=1\) (blue), \(\rho=0.9\) (orange) and \(\rho=0.8\) (green). In both Figures, it can be observed that as the spin density of the lattice decreases, the critical temperature \(T_{c}\) also decreases.
To illustrate the approach, we present results for the square lattice and subsequently for the honeycomb lattice. In Figure 3-a, we show the probability curves as a function of temperature \(T\), hereafter expressed in units of \(|J|/k_{B}\), for the Ising model on a square lattice of size \(N=L\times L\), with \(L=40\). For each temperature \(T\), the probability is determined as an average over the predictions from 300 independent simulations. Only the prediction probabilities for the ferromagnetic and paramagnetic phases are represented, by blue and orange dots connected by dotted lines, respectively. The remaining catalog-trained phases (stripes and Néel) consistently yielded nearly zero probabilities and are consequently not displayed in the Figure. By intersecting the two curves at the point where both probabilities are equal, we estimate the critical temperature (\(T_{c}\approx 2.325\)). In multiclass classification, the predicted phase corresponds to the one with the highest probability. In this scenario with two dominant classes, the transition point is identified when both class probabilities reach 0.5, indicated by their intersection. Note that the critical temperature obtained through DNN classification is in good agreement with the analytical value (\(T_{c}\simeq 2.269\)).
The same procedure performed for the square lattice was then carried out on the honeycomb lattice. The results are shown in Figure 3-b. Again, the neural network selects only the two relevant states, whose probability curves intersect at \(T_{c}\approx 1.567\) for \(L=28\). These results are in good agreement with the analytical solution, \(T_{c}\simeq 1.519\).
It is worth recalling that the neural network was never trained on the temperature variable and only received extreme, ideal ordered and disordered configurations. Consequently, we could only anticipate qualitative agreement between the analytically determined transition temperature and the network's determination, particularly considering the finite size of the validation samples. However, the DNN identifies the transition with substantial quantitative agreement. In essence, the order parameters derived from the ideal configurations are sufficiently robust to predict the transition, even when it falls far from the extremes and exhibits distinct properties compared to those of the ordered and disordered phases.
### Dilute spin-lattices
After establishing the neural network's correct classification of magnetic phases in the Ising model on square and honeycomb lattices, we seek to test its generalization capabilities further. Specifically, we aim to determine whether the neural network, trained on minimalist data, can predict changes
in the critical temperature resulting from modifications of the spin lattice. One such alteration is the variation of the spin density through the introduction of vacancies (holes), which shifts the critical temperature.
Furthermore, we investigate whether the neural network can effectively classify the phases of the diluted model, even in the absence of explicit training with diluted spin configuration data.
In this section, we examine the neural network's phase classification within square and honeycomb lattices, characterized by different spin densities (\(\rho\)) and obtained from Monte Carlo simulations. These classifications will allow us to estimate the critical temperatures \(T_{c}(\rho)\) as a function of spin density.
#### 4.2.1 High-density regime
We begin our exploration by examining the magnetic phases in the high spin-density (low vacancy-density) regime (\(\rho\approx 1\)), which is significantly distant from the percolation density. Just as in the pure case, the spin configurations were extracted from Monte Carlo simulations. To construct the predicted probability curves, we averaged, at each temperature (\(T\)), the predictions from 300 simulations.
Figure 4 illustrates the predicted probability curves, corresponding to ferromagnetic and paramagnetic configurations, for both the square lattice (left panel) and the honeycomb lattice (right panel) across three different densities: \(\rho=0.8\) (green), \(\rho=0.9\) (orange), and \(\rho=1\) (blue). These curves show a clear trend: as \(\rho\) decreases, the critical temperature \(T_{c}\) also decreases.
It is worth mentioning that in the honeycomb lattice with \(\rho=0.8\) (as depicted by the green curves in the right panel of Figure 4) below the critical temperature \(T_{c}\), the probability for the ferromagnetic phase falls below 1. This observation indicates our proximity to the percolation density. This phenomenon, where the prediction probability for the ferromagnetic phase diminishes as we approach the percolation density, becomes more pronounced at lower densities.
#### 4.2.2 Low-density regime and percolation
So far, we have shown that the neural network can correctly classify the magnetic states at densities \(\rho\approx 1\), even though the network was trained solely on configurations without vacancies or temperature. We will now shift our focus to examining the classification of low spin densities with the
aim of detecting the percolation density \(\rho_{c}\). An indicator that the percolation density has been reached is that the system becomes disordered at any finite temperature, i.e. \(T_{c}=0\) [12]. In other words, below the percolation density, the system does not exhibit a phase transition [14]. Consequently, in the probability curves, we should not observe any crossings when \(\rho<\rho_{c}\).
For the square lattice, it is established that the percolation density is \(\rho_{c}\approx 0.5927\)[43]. This reference provides a basis for comparing the neural network results when examining densities in proximity to the percolation density.
In Figure 5, we present the outcomes derived from the neural network classification for the square spin lattice with a characteristic length of \(L=40\). These results pertain to decreasing densities, namely, \(\rho=0.7\), \(\rho=0.65\), \(\rho=0.63\), \(\rho=0.62\), \(\rho=0.61\) and \(\rho=0.6\).
There are several observations to highlight. In the case of density \(\rho=0.7\), a clear phase transition occurs, as evidenced by the intersection of the probability curves for the two phases at \(T_{c}\approx 1.10\). Similarly, for densities \(\rho=0.65\), \(\rho=0.63\), and \(\rho=0.62\), phase transitions are also evident due to the intersection
Figure 5: Phase classification results for the dilute Ising model on the square spin lattice with \(L=40\), in the low-density regime, across densities from \(\rho=0.7\) to \(\rho=0.6\). Phase transitions are observed at the higher densities, signaled by probability curve intersections. However, from \(\rho=0.61\), no phase transition is observed, indicating a disordered state with \(T_{c}=0\). This suggests a percolation density for this lattice size around this value, close to the approximate theoretical value.
of the probability curves, corresponding to each phase, at temperatures \(T_{c}\approx 0.850\), \(T_{c}\approx 0.77\), and \(T_{c}\approx 0.67\), respectively. For these densities, it can be observed that, as \(T\) decreases, the prediction probability curve for the ferromagnetic phase (blue curves) no longer remains at 1, as was the case for density \(\rho=0.7\) and higher. Conversely, the prediction probability curve for the paramagnetic phase (orange curves) no longer remains at 0.
On the other hand, for densities \(\rho=0.61\) and \(\rho=0.6\), the phase transition is no longer observed as there is no intersection between the probability curves corresponding to paramagnetic and ferromagnetic phases. This implies that the percolation density \(\rho_{c}\) has been surpassed. Notably, at all temperatures, the system remains disordered, with the prediction probability for the paramagnetic phase being predominant, i.e. \(T_{c}=0\).
Based on the analysis of the square lattice with a characteristic length of \(L=40\), it can be inferred that the percolation density \(\rho_{c}\) is upper-bounded by \(\rho\approx 0.61\) for this size, in close proximity to the theoretical value.
The same analysis as described previously was carried out for the honeycomb
Figure 6: Phase classification results for the dilute Ising model on a \(L=28\) honeycomb lattice in the low density regime. Phase transitions are observable up to \(\rho=0.72\), marked by probability curve intersections. Notably, higher prediction uncertainty is evident, attributed to the honeycomb lattice’s higher percolation density compared to the square lattice. Beyond \(\rho=0.71\), no further probability curve intersections occur, establishing an upper percolation density limit for this size, near the theoretical value.
lattice, with the estimated percolation density from the literature being \(\rho_{c}\approx 0.6962\) [15]. In this case, we considered densities proximate to the percolation density, specifically \(\rho=0.75\), \(\rho=0.74\), \(\rho=0.73\), \(\rho=0.72\), \(\rho=0.71\) and \(\rho=0.7\).
Figure 6 presents the results, akin to the square lattice, for a characteristic size \(L=28\). In this case, a phase transition is observable up to \(\rho\approx 0.72\), characterized by the intersection of probability curves. However, even in the first panel, a more pronounced prediction uncertainty becomes evident, attributed to the higher percolation density of the honeycomb lattice compared to the square lattice. For \(\rho\leq 0.71\), no further intersections of the probability curves are observed, thereby establishing an upper limit on the percolation density for this lattice size, in close proximity to the theoretical value.
### Phase diagram
In the preceding section, we determined the critical temperatures (\(T_{c}\)) for several spin concentrations (\(\rho\)) using neural network classification. To facilitate comparison with alternative approaches, we plotted these \(T_{c}(\rho)\) curves for each spin lattice configuration previously obtained. Subsequently, we performed curve fitting [49], employing a nonlinear model akin to the one found in existing literature [16; 17]. The model proposed is expressed as follows
\[\frac{T_{c}(\rho)}{T_{c}(1)}=\frac{K}{\log(a-b(1-\rho))}. \tag{4}\]
Here, \(K\), \(a\), and \(b\) represent non-universal parameters subject to fitting, while \(\rho\) denotes the density of the spin lattice corresponding to the critical temperature \(T_{c}\). The phase transition curve separates the ordered phase, located below the curve, from the disordered phase, positioned above it, for each type of spin lattice.
Figure 7 (left panel) shows the curve fitting results for \(T_{c}(\rho)\) on the \(L\times L\) square lattice (\(L=40\)). The estimated percolation density is \(\rho_{c}=0.601\), obtained directly from the divergence condition of the logarithm, i.e. \(a-b(1-\rho_{c})=0\) in (4), at which \(T_{c}\) approaches zero.
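A sketch of this fitting step, here using `scipy.optimize.curve_fit` (our illustrative choice), is given below. The data points are the normalized critical temperatures \(T_{c}(\rho)/T_{c}(1)\) read off the probability-curve crossings of Section 4.2 for the \(L=40\) square lattice, rounded for illustration; the initial guess is an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

def tc_model(rho, K, a, b):
    """Normalized transition line of Eq. (4): T_c(rho) / T_c(1)."""
    return K / np.log(a - b * (1.0 - rho))

# Crossing temperatures from Section 4.2 (L = 40), normalized by T_c(1) ~ 2.325.
rho = np.array([1.0, 0.7, 0.65, 0.63, 0.62])
tc_norm = np.array([1.0, 0.47, 0.37, 0.33, 0.29])

popt, _ = curve_fit(tc_model, rho, tc_norm, p0=(-1.3, 0.3, 0.8))
K, a, b = popt
rho_c = 1.0 - a / b   # divergence of the logarithm: a - b(1 - rho_c) = 0
```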
While the primary focus here is not an exhaustive finite-size scaling analysis, Table 1 presents the fitting outcomes and transition temperatures on the square lattice for both \(L=30\) and \(L=40\).
In the case of the square lattice, as we have mentioned, the percolation density is known to be approximately \(\rho_{c}\approx 0.59274621(13)\)[43]. Remarkably, the fitted value of \(\rho_{c}\) from our data aligns closely with this literature value. This agreement is noteworthy, given that our network was trained under minimalist conditions, without vacancies or temperature considerations, and without utilizing simulation data, all of which are far from the percolation regime.
Following the same procedure carried out for the square lattice, Figure 7 (right panel) shows the curve fit of \(T_{c}(\rho)\) for the honeycomb lattice of \(2\times L\times L\) sites, with \(L=28\), where the estimated percolation density is \(\rho_{c}\approx 0.703\). Additionally, Table 2 provides the fitted parameters and critical density estimates for two lattice sizes.
| Square lattice of \(L\times L\) sites | \(K\) | \(a\) | \(b\) | \(\rho_{c}\) |
|---|---|---|---|---|
| \(L=30\) | -1.405 | 0.253 | 0.653 | 0.612 |
| \(L=40\) | -1.279 | 0.311 | 0.780 | 0.601 |

Table 1: Fitted parameters from the function (4) and their respective percolation densities \(\rho_{c}\), for the diluted square lattices with \(L\times L\) sites (\(L=30\) and \(L=40\)).
Figure 7: Left: Normalized critical temperature \(T_{c}\) predicted by the neural network (blue open circles) as a function of spin density \(\rho\) for the diluted square lattice of \(L\times L\) sites with \(L=40\). The Figure also displays the fit (blue solid line) based on the proposed model (4). The ferromagnetic phase lies below the curve, while the paramagnetic phase resides above it. Note that the fitted curve does not intersect the \(\rho\)-axis due to the logarithmic divergence at the transition. Right: The same results as in the left panel, but for the diluted honeycomb lattice with \(L=28\), consisting of \(2\times L\times L\) sites.
Given that the established percolation density for the honeycomb lattice is \(\rho_{c}\approx 0.6962(6)\) [15], the value obtained through the fitting process for \(\rho_{c}\) is very satisfactory, for reasons analogous to those outlined for the square lattice.
## 5 Conclusions
We investigated the order-disorder transition in dilute Ising models through neural networks trained using a 'minimalist' approach.
The training process involved utilizing a basic catalog of configurations that encompassed ideal ordered states at \(T=0\), along with random configurations representing the infinite-temperature paramagnetic phase. This allowed the neural network to be trained on a variety of ideal configurations, enabling it to effectively classify these synthetic samples.
By exploiting the generalization ability of the previously trained neural networks, they were used to classify configurations obtained from Monte Carlo simulations of dilute Ising models. Since the neural network was trained on a set of ideal configurations and does not rely on specific data for training, this methodology allows the data to come from simulations or experiments interchangeably.
Initially, we employed the trained neural network to categorize phases within a pure Ising model. The minimalist neural network was provided with configurations derived from simulations of spin models on both square and honeycomb lattices at various temperatures, resulting in probability curves for each magnetic order. The neural network, trained on very simple configurations, demonstrated the capacity to generalize and accurately classify the ordered and disordered phases.
Subsequently, we introduced vacancies into the spin lattices to increase the complexity of the model. Remarkably, the minimalist neural network displayed the capability to generalize and effectively classify the different
| Honeycomb lattice of \(2\times L\times L\) sites | \(K\) | \(a\) | \(b\) | \(\rho_{c}\) |
|---|---|---|---|---|
| \(L=21\) | -1.623 | 0.223 | 0.781 | 0.714 |
| \(L=28\) | -1.647 | 0.208 | 0.704 | 0.703 |

Table 2: Adjusted parameters derived from equation (4) alongside their corresponding percolation densities \(\rho_{c}\) for the diluted honeycomb lattices with \(2\times L\times L\) sites (\(L=21\) and \(L=28\)).
configurations, even though it was never trained on spin lattices featuring vacancies. The neural network exhibited its ability to generalize from ideal spin lattice configurations to scenarios involving finite temperatures and diluted spin lattices.
Analysis of the temperature-dilution phase diagram of the dilute Ising model, spanning paramagnetic and ferromagnetic phases, showed that the transition curve obtained via the neural network is consistent with the expected approximate analytical predictions. This illustrates the power of minimalist neural networks to generalize and classify appropriately the different phases of magnetic orders, for different values of temperature and dilution.
The percolation densities \(\rho_{c}\) for the spin lattices were indirectly obtained through fitting the data from the \(T_{c}(\rho)\) curves. These \(\rho_{c}\) values, derived from the classifications performed by the minimalist neural networks, exhibit a good quantitative alignment with those reported by other approximate methods. This holds true for both the square lattice and the honeycomb lattice.
The findings presented in this paper suggest that minimalist neural networks offer an efficient data analysis approach that does not necessitate extensive resources, making them a cost-effective choice. Despite the synthetic training data, the neural networks yield highly satisfactory results for the examination of dilute spin lattices. This assessment is further reinforced when considering the trade-off between simplicity and complexity in comparison to conventional methods like Monte Carlo simulations.
In summary, this study provides a framework for exploring the capabilities of minimalist networks in analyzing and categorizing complex spin lattice configurations, including those featuring frustration and other ingredients. Additionally, due to their pre-trainability with an expandable catalog and ease of deployment, these networks hold potential as valuable predictive tools for both synthetic and experimental data.
This approach has the potential to open new perspectives in the study of complex multicomponent systems in physics and other fields of science with minimalist artificial intelligence tools.
## 6 Acknowledgments
Carlos A. Lamas and Marcelo Arlego acknowledge support from CONICET. This work was partially supported by CONICET, Argentina (grant no PIP 2332) and UNLP (grant 11X987).
## Appendix A Monte Carlo simulations
In this section, we provide additional information regarding the Monte Carlo simulations conducted in this study. These simulations were performed to generate data samples exclusively for the purpose of validating the generalization capability of the neural networks employed, rather than training them.
All Monte Carlo simulations were implemented in the C language. They were carried out for the ferromagnetic Ising model (\(J>0\)) on both square and honeycomb lattices, employing the Metropolis-Hastings algorithm. We executed 300 independent simulations for each density \(\rho\).
In all simulations, periodic boundary conditions were enforced. The simulations started at a temperature above \(T_{c}\), representing a disordered phase, and gradually reduced the temperature in intervals of \(\Delta T\) until reaching temperatures below \(T_{c}\), ultimately attaining the ordered phase. At each temperature point, both the temperature value and spin configuration were stored once equilibrium was achieved.
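Although the production code is written in C, the elementary update can be summarized by the following Python sketch (illustrative only) of a single Metropolis sweep for the diluted square-lattice model, with \(T\) in units of \(|J|/k_{B}\):

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(S, T, J=1.0):
    """One Metropolis sweep (N attempted flips) for the diluted
    ferromagnetic Ising model on an L x L square lattice with
    periodic boundaries. Vacancies are stored as 0 and never flipped."""
    L = S.shape[0]
    for _ in range(S.size):
        i, j = rng.integers(0, L, size=2)
        if S[i, j] == 0:              # vacancy: nothing to flip
            continue
        nn = S[(i + 1) % L, j] + S[(i - 1) % L, j] \
           + S[i, (j + 1) % L] + S[i, (j - 1) % L]
        dE = 2.0 * J * S[i, j] * nn   # energy cost of flipping S[i, j]
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            S[i, j] = -S[i, j]
    return S
```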
For the square lattice, we investigated two distinct system sizes: one comprising 900 sites (with a characteristic length of \(L=30\)) and the other consisting of 1600 sites (\(L=40\)). For both sizes, we conducted 300 independent simulations spanning the temperature range \(T=[5,0.2]\), with intervals of \(\Delta T=0.02\) (resulting in 250 temperature values).
We monitored the systems over \(50000\times L^{2}\) Monte Carlo steps after completing \(20000\times L^{2}\) relaxation iterations. From each simulation, we extracted 250 spin configurations, one for each temperature value \(T\).
Considering the percolation density of the square lattice as \(\rho_{c}\approx 0.5927\)[43], we performed simulations for the dilute Ising model, covering a range of densities, namely \(\rho=1\) (representing the pure case), 0.9, 0.8, 0.7, 0.65, 0.64, 0.63, 0.62, 0.61, 0.6, and 0.55. In each instance, vacancies were randomly introduced into the lattice using a uniform distribution, until the desired spin density \(\rho\) was achieved.
We followed a similar procedure for the honeycomb lattice. In this case, we considered system sizes of \(2\times L\times L\) sites. Specifically, we investigated two system sizes: one comprising 882 sites (\(L=21\)) and the other consisting of 1568 sites (\(L=28\)). In each case, we conducted 300 independent simulations covering the temperature range \(T=[3,0.2]\), with intervals of \(\Delta T=0.05\) (resulting in 60 temperature values \(T\)). We monitored the systems for \(290\times L^{2}\) Monte Carlo
steps after completing \(32\times L^{2}\) relaxation iterations. From each simulation, we obtained 60 spin configurations corresponding to each value of \(T\).
Considering the percolation density as \(\rho_{c}\approx 0.6962\)[15], we conducted simulations with densities \(\rho=1\) (representing the pure case), 0.9, 0.8, 0.74, 0.73, 0.72, 0.71, 0.7, and 0.65. Similar to the square lattice case, vacancies were randomly introduced into the lattice using a uniform distribution.
To visually depict the outcomes of our simulations, Figure 8 shows graphical representations of magnetization and energy per site as functions of temperature for the dilute Ising model on the square lattice (left column) with \(L=40\) and the honeycomb lattice with \(L=28\) (right column) for the different spin densities analyzed.
The graphs presented in Figure 8 exhibit the expected behavior. In the case of magnetization, it is evident that as the density \(\rho\) decreases, the critical temperature \(T_{c}\) also decreases, resulting in a reduction of the maximum magnetization per site.
Regarding the energy plots, as temperature (\(T\)) approaches zero, it is noteworthy that for the pure cases of the square lattice and honeycomb lattice, the energy attains its anticipated minima of -2 and -3/2, respectively.
Figure 8: Magnetization and energy per site plotted against temperature for the dilute Ising model on the square lattice (left column) with \(L=40\) and the honeycomb lattice with \(L=28\) (right column), for different spin densities.
In addition, we investigated the susceptibility as a function of temperature for the pure Ising model (\(\rho=1\)). Figure 9 depicts the results for the square lattice with \(L=40\) (left panel) and the honeycomb lattice with \(L=28\) (right panel). These plots show that, for the square lattice, the maximum occurs at \(T_{c}\approx 2.28\), while for the honeycomb lattice it occurs at \(T_{c}\approx 1.6\). These results align with the analytically reported literature values, although it is important to note that the analytical values are derived in the thermodynamic limit, in contrast to our finite-size results.
Based on these observations, we conclude that the simulations are well-suited for extracting spin configurations at various temperature values, confirming their adequacy and suitability for our study.
2310.14754

# Approximately well-balanced Discontinuous Galerkin methods using bases enriched with Physics-Informed Neural Networks

Emmanuel Franck, Victor Michel-Dansac, Laurent Navoret
2023-10-23T09:44:39Z
http://arxiv.org/abs/2310.14754v2
###### Abstract
This work concerns the enrichment of Discontinuous Galerkin (DG) bases, so that the resulting scheme provides a much better approximation of steady solutions to hyperbolic systems of balance laws. The basis enrichment leverages a prior - an approximation of the steady solution - which we propose to compute using a Physics-Informed Neural Network (PINN). To that end, after presenting the classical DG scheme, we show how to enrich its basis with a prior. Convergence results and error estimates follow, in which we prove that the basis with prior does not change the order of convergence, and that the error constant is improved. To construct the prior, we elect to use parametric PINNs, which we introduce, as well as the algorithms to construct a prior from PINNs. We finally perform several validation experiments on four different hyperbolic balance laws to highlight the properties of the scheme. Namely, we show that the DG scheme with prior is much more accurate on steady solutions than the DG scheme without prior, while retaining the same approximation quality on unsteady solutions.
## 1 Objective and model
In the last decades, much work has been devoted to proposing numerical methods for hyperbolic systems with source terms, which correctly capture stationary solutions of the system, as well as perturbations of flows around these steady states. If the perturbation is smaller than the scheme error, traditional numerical schemes are not able to provide a good approximation of the perturbed steady solution. To address such an issue, a first possibility is to refine the mesh in space. However, for small perturbations, this would greatly increase the computational overhead. To avoid this, schemes specifically dedicated to capturing stationary solutions have been introduced. They are called _well-balanced schemes_.
There are two families of well-balanced (WB) schemes: exactly and approximately WB schemes. Exactly WB schemes give an exact representation of the equilibria. Such schemes are usually developed for subclasses of steady solutions, especially for complex balance laws, or multidimensional problems. For instance, first- and second-order accurate exactly WB schemes have been developed for the shallow water equations [4, 31, 34, 35] or the Euler equations with gravity [28, 49]. High-order exactly well-balanced schemes include [22, 37, 21, 36, 8, 9, 24] with finite volume methods or related approaches, or [53, 11] with discontinuous Galerkin methods. The second family, approximately WB schemes, consist in ensuring a better approximation of the equilibria compared to traditional numerical schemes. This better approximation can be under the form of a better order of accuracy [18, 20, 23] or a better error constant [1, 16]. Both families of WB schemes may incur significant additional computational cost compared to traditional schemes, due to the extensive modifications necessary to ensure the WB property, especially for complex systems and equilibria.
In this work, we focus on providing an approximately WB scheme for the following parametric partial differential equation (PDE):
\[\begin{cases}\partial_{t}u+\partial_{x}F_{\mu_{1}}(u)=S_{\mu_{2}}(u),\\ u(t=0,x)=u_{0}(x),\end{cases} \tag{1.1}\]
with \(\mu_{1}\) and \(\mu_{2}\) the parameters of the PDE. We set \(\mu=\{\mu_{1},\mu_{2}\}\), and we assume that \(\mu\in\mathbb{P}\subset\mathbb{R}^{m}\). In (1.1), the unknown function is \(u\); \(F_{\mu_{1}}\) is called the physical flux function, while \(S_{\mu_{2}}\) is the source term. We assume that the equation is hyperbolic, that is to say that the Jacobian matrix of \(F_{\mu_{1}}\) is diagonalizable with real eigenvalues. Our goal will be to construct an approximately well-balanced approach for the general steady state \(\partial_{x}F_{\mu_{1}}(u)=S_{\mu_{2}}(u)\). The combination of learning and numerical methods (known as Scientific Machine Learning) has produced good results for hyperbolic PDEs. Examples include work on the design of limiters or shock detection [41, 7, 55], artificial viscosity [19, 42, 54, 10], or numerical schemes [5].
The approach proposed in this paper is also based on the hybridization of classical approaches and neural networks. We endeavor to improve the classical Discontinuous Galerkin (DG) method, which usually relies on a discontinuous approximation of the solution in a suitable polynomial basis. More information on the DG method can be found in [26, 39] for instance. A natural way of improving the traditional DG method to improve the accuracy on some family of solutions is to enrich the basis with a prior. This is for example the case of the Trefftz method [30, 6, 12, 27], or the non-polynomial bases studied in [56].
To perform this basis enrichment, we use a learning-based offline computation with a neural network to build a prior which approximates a parametrized family of equilibria. To that end, we use Physics-Informed Neural Networks (PINNs), see e.g. [40, 13], and parametric neural networks [48]. This prior is then introduced into the Discontinuous Galerkin basis, to increase the accuracy of the scheme around this family of equilibria. Note that the prior construction could be handled without the use of neural networks, but we will show that the neural network approach is more efficient. This framework could require a significant offline calculation cost (depending on the problem), but generates a very small additional cost in the online phase, i.e., when actually using the modified scheme. This method enhances the DG basis functions with a prior provided by a neural network. Similar techniques based on neural networks have already been successfully implemented for other applications. In [3], for elliptic problems, the authors use a network to provide a finite element basis that is dynamically enriched. In [46, 47], the authors show that random neural networks can be used as DG bases, and can be more accurate than classical ones for a sufficiently large number of basis functions in each cell.
The paper is organized as follows. First, we assume that we know a prior (an approximation) of a family of equilibria, and we introduce the corresponding modification of the DG basis. Theoretical results show that this modification does not change the order of accuracy of the method, but decreases the error constant close to steady solutions. Then, we introduce the learning methods that enable us to build our prior for a family of equilibria, and finally we perform numerical experiments, in one and two space dimensions, on several linear and nonlinear systems of balance laws. A conclusion ends this paper.
## 2 Modified Discontinuous Galerkin scheme
This section is devoted to the presentation of the modified DG scheme. We start by quickly introducing the classical DG scheme in Section 2.1, and then move on to proposing the modification in Section 2.2. Theoretical convergence results related to this modification will be presented in Section 3. In this section and the following one, we write the scheme in the case of a scalar and one-dimensional PDE, but the method is easily extendable to systems and to higher dimensions.
### Classical Discontinuous Galerkin scheme
The goal of this section is to present the classical DG scheme used to discretize the PDE (1.1). To that end, we discretize the space domain \(\Omega\subset\mathbb{R}^{d}\) into cells \(\Omega_{k}=(x_{k-1/2},x_{k+1/2})\) of sizes \(\Delta x_{k}\) and centers \(x_{k}\).
The idea behind the classical DG scheme is to first compute the weak form of the considered PDE, and then to locally approximate the solution in each cell, by projecting it onto a finite-dimensional vector space \(V_{h}\). We consider a space \(V_{h}\) of dimension \(q+1\):
\[V_{h}=\operatorname{Span}\left(\phi_{k,0},\ldots,\phi_{k,q}\right).\]
Note that the space \(V_{h}\) can be different for each cell \(k\).
The first assumption of the DG scheme is that the solution \(u\) to the PDE is approximated, in each cell, by an element of \(V_{h}\):
\[\forall k,\quad u\big{|}_{\Omega_{k}}(t,x)\simeq u_{k}(t,x).\]
Since \(u_{k}\in V_{h}\), we can write
\[u_{k}(t,x)\coloneqq\sum_{j=0}^{q}u_{k,j}(t)\phi_{k,j}(x). \tag{2.1}\]
To obtain the DG scheme, we first write the weak form of the equation in each cell:
\[\int_{\Omega_{k}}\partial_{t}u(t,x)\phi(x)\,dx+\int_{\Omega_{k}}\partial_{x}F_{ \mu_{1}}(u(t,x))\phi(x)\,dx=\int_{\Omega_{k}}S_{\mu_{2}}(u(t,x))\phi(x)\,dx. \tag{2.2}\]
with \(\phi(x)\) a smooth test function. Performing an integration by parts, this form is equivalent to
\[\partial_{t}\left(\int_{\Omega_{k}}u\,\phi\right)-\int_{\Omega_{k}}F_{\mu_{1} }(u)\,\partial_{x}\phi+\left[F_{\mu_{1}}(u)\,\phi\right]^{x_{k+1/2}}_{x_{k-1/2 }}=\int_{\Omega_{k}}S_{\mu_{2}}(u)\,\phi. \tag{2.3}\]
We now plug the DG representation (2.1) in the weak form (2.3), using \(\phi_{k,i}\) as test function, for any \(i\in\{0,\ldots,q\}\).
1. We begin with the first term: \[\int_{\Omega_{k}}u\,\phi_{k,i}=\sum_{j=0}^{q}\left(\int_{\Omega_{k}}u_{k,j}(t )\phi_{k,j}(x)\phi_{k,i}(x)\,dx\right)=\sum_{j=0}^{q}u_{k,j}(t)\left(\int_{ \Omega_{k}}\phi_{k,j}(x)\phi_{k,i}(x)\,dx\right).\] To handle the integral in the expression above, we introduce the following quadrature formula, with weights \(w_{k,p}\) and points \(x_{k,p}\), valid for any smooth function \(\phi\): \[\int_{\Omega_{k}}\phi(x)\,dx\simeq\sum_{p=1}^{N_{q}}w_{k,p}\,\phi(x_{k,p}).\] We assume that the first and last quadrature points coincide with the cell boundaries, i.e. \(x_{k,1}=x_{k-1/2}\) and \(x_{k,N_{q}}=x_{k+1/2}\). In practice, we use the well-known Gauss-Lobatto quadrature rule, see e.g. [2] for more information. Equipped with this quadrature formula, we introduce \[M_{k,i,j}=\sum_{p=1}^{N_{q}}w_{k,p}\,\phi_{k,j}(x_{k,p})\,\phi_{k,i}(x_{k,p}) \simeq\int_{\Omega_{k}}\phi_{k,j}\phi_{k,i},\] so that the first term of (2.3) becomes \[\int_{\Omega_{k}}u(t,x)\phi_{k,i}(x)\,dx\simeq\sum_{j=0}^{q}M_{k,i,j}u_{k,j}(t).\]
2. Using the same techniques, the second term is approximated in the following way: \[\int_{\Omega_{k}}F_{\mu_{1}}(u)\,\partial_{x}\phi_{k,i}\simeq\sum_{p=1}^{N_{ q}}\left[w_{k,p}F_{\mu_{1}}\left(\sum_{j=0}^{q}u_{k,j}(t)\phi_{k,j}(x_{k,p}) \right)\partial_{x}\phi_{k,i}(x_{k,p})\right].\]
3. We note that the third term reduces to \[\left[F_{\mu_{1}}(u)\,\phi_{k,i}\right]^{x_{k+1/2}}_{x_{k-1/2}}=F_{\mu_{1}} \left(u_{k}(t,x_{k+\frac{1}{2}})\right)\phi_{k,i}(x_{k+\frac{1}{2}})-F_{\mu_{ 1}}\left(u_{k}(t,x_{k-\frac{1}{2}})\right)\phi_{k,i}(x_{k-\frac{1}{2}}),\] where the physical flux \(F_{\mu_{1}}\) has to be approximated at the cell boundaries. To that end, like the well-known finite volumes method, the DG method requires the introduction of a consistent numerical flux \[G_{\mu_{1}}(u_{L},u_{R})\quad\text{ such that }G_{\mu_{1}}(u,u)=F_{\mu_{1}}(u).\] This numerical flux is then used to approximate the interface flux, as follows \[F_{\mu_{1}}\left(u_{k}(t,x_{k+\frac{1}{2}})\right)\simeq G_{\mu_{1}}\left(u_{ k}(t,x_{k+\frac{1}{2}}),u_{k+1}(t,x_{k+\frac{1}{2}})\right).\]
4. Finally, for the last term, we use a straightforward application of the quadrature rule: \[\int_{\Omega_{k}}S_{\mu_{2}}(u)\,\phi\simeq\sum_{p=1}^{N_{q}}\left[w_{k,p}S_{\mu_{ 2}}\left(\sum_{j=0}^{q}u_{k,j}(t)\phi_{k,j}(x_{k,p})\right)\phi_{k,i}(x_{k,p}) \right].\]
Gathering all these terms, we show that, in each cell, the DG scheme can be written as an ordinary differential equation, where the interface flux term couples the cell \(\Omega_{k}\) with its neighbors:
\[\mathcal{M}_{k}\,\partial_{t}u_{k}(t)-\mathcal{F}_{\mu_{1}}(u_{k})+\mathcal{G }_{\mu_{1}}(u_{k-1},u_{k},u_{k+1})=\mathcal{S}_{\mu_{2}}(u_{k}).\]
Now that we have recalled the classical DG space discretization, we have all the tools we need to introduce a modification to this discretization that will enable us to provide an approximately WB scheme.
### Enrichment of the modal DG basis
There are many vector spaces able to represent the solution in each cell. For instance, nodal DG schemes [26] use Lagrange polynomials or other polynomials based on nodes chosen within each cell. Legendre polynomials or Taylor expansions around the cell centers lead to _modal DG schemes_. In this work, we focus on the Taylor basis, given on each cell \(\Omega_{k}\) by
\[V_{h}=\mathrm{Span}\left(\phi_{k,0},\phi_{k,1},\phi_{k,2},\ldots,\phi_{k,q} \right)=\mathrm{Span}\left(1,(x-x_{k}),\frac{1}{2}(x-x_{k})^{2},\ldots,\frac {1}{q!}(x-x_{k})^{q}\right). \tag{2.4}\]
In the remainder of this section, we assume that we have access to a prior on the equilibrium, denoted by \(u_{\theta}(x;\mu)\). Obtaining such a prior is discussed in Section 4. For the moment, suffice it to say that \(u_{\theta}\) provides an approximation of the steady solution for \(x\in\Omega\) and for \(\mu\) in some parameter space \(\mathbb{P}\) to be defined.
Given the prior \(u_{\theta}\), we modify the local basis \(V_{h}\) to incorporate the prior: for that, we propose two possibilities.
* The _additive correction_\(V_{h}^{+}\) consists in replacing the first element of \(V_{h}\) by the prior: \[V_{h}^{+}=\mathrm{Span}\left(\phi_{k,0}^{+},\phi_{k,1}^{+},\phi_{k,2}^{+}, \ldots,\phi_{k,q}^{+}\right)=\mathrm{Span}\left(u_{\theta}(x,\mu),(x-x_{k}),\ldots,\frac{1}{q!}(x-x_{k})^{q}\right).\] (2.5)
* The _multiplicative correction_\(V_{h}^{*}\) consists in multiplying each element of \(V_{h}\) by the prior: \[V_{h}^{*}=\mathrm{Span}\left(\phi_{k,0}^{*},\phi_{k,1}^{*},\phi_{k,2}^{*}, \ldots,\phi_{k,q}^{*}\right)=\mathrm{Span}\left(u_{\theta}(x,\mu),(x-x_{k}) \,u_{\theta}(x,\mu),\ldots,\frac{1}{q!}(x-x_{k})^{q}u_{\theta}(x,\mu)\right).\] (2.6)
A first remark is that, if the prior is exactly equal to the steady solution, then it can be exactly represented by an element of \(V_{h}^{+}\) or \(V_{h}^{*}\) (namely, the first one) in each cell, which is not the case for the classical space \(V_{h}\). However, whether the prior is exact or not, the method will only be of interest if the projector onto the modified vector space is accurate (or even exact in the case of an exact prior). The second point to note is that, unlike conventional DG approaches, the bases are not polynomial. We must therefore ensure that this does not hinder the convergence of the DG method. In the next section, we follow Yuan and Shu's work [56] to study the convergence of the modified DG method, and provide error estimates.
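For concreteness, both enrichments are straightforward to realize in code; the following short sketch (our own illustration, with a placeholder callable standing in for \(u_{\theta}\)) evaluates the classical basis \(V_{h}\) together with \(V_{h}^{+}\) and \(V_{h}^{*}\) on a cell:

```python
import numpy as np
from math import factorial

def taylor_basis(x, xk, q):
    """Classical modal Taylor basis (2.4): phi_{k,j}(x) = (x - xk)^j / j!."""
    return np.stack([(x - xk) ** j / factorial(j) for j in range(q + 1)])

def additive_basis(x, xk, q, prior):
    """Enriched basis V_h^+ (2.5): the first basis function is replaced by the prior."""
    phi = taylor_basis(x, xk, q)
    phi[0] = prior(x)
    return phi

def multiplicative_basis(x, xk, q, prior):
    """Enriched basis V_h^* (2.6): every basis function is multiplied by the prior."""
    return taylor_basis(x, xk, q) * prior(x)

# Example on one cell, with a made-up prior standing in for u_theta(., mu).
prior = lambda x: np.exp(-x)
x = np.linspace(0.0, 0.1, 5)
print(multiplicative_basis(x, xk=0.05, q=2, prior=prior).shape)  # (3, 5)
```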
## 3 Error estimates
In this section, we prove some convergence results on the modified DG scheme. We assume that our prior \(u_{\theta}\) is \(p\) times continuously differentiable, i.e., that it has differentiability class \(\mathcal{C}^{p}\), with \(p\geqslant q+1\). This hypothesis is compatible with the construction of the prior from Section 4.
In [56], the authors study the convergence of the DG scheme for non-polynomial bases. They show that, if the non-polynomial basis can be represented in a specific way by a polynomial basis, then the convergence of the
local and global projection operators is not hampered. Using some stability results (given in [56] for the transport equation) together with these estimations, convergence can be recovered.
These theoretical results are split into two parts. To begin with, in Section 3.1, we prove that the bases proposed in Section 2.2 fit into the hypotheses of [56], which ensures convergence. However, this study is insufficient to show that the better the prior, the more accurate the modified DG scheme. To that end, in Section 3.2, we derive the projector estimates in the case of \(V_{h}^{*}\), in order to show the potential gains of the method.
### Convergence in non-polynomial DG bases
In [56], the authors prove the following lemma.
**Lemma 3.1**.: _Consider an approximation vector space \(V_{h}\) with local basis \((v_{k,0},\ldots,v_{k,q})\), which may depend on the cell \(\Omega_{k}\). If there exist constant real numbers \(a_{j\ell}\) and \(b_{j}\) independent of the size of the cell \(\Delta x_{k}\) such that, in each cell \(\Omega_{k}\),_
\[\forall j\in\{0,\ldots,q\},\quad\left|v_{k,j}(x)-\sum_{\ell=0}^{q}a_{j\ell}(x -x_{k})^{\ell}\right|\leq b_{j}(\Delta x_{k})^{q+1}, \tag{3.1}\]
_then for any function \(u\in H^{q+1}(\Omega_{k})\), there exists \(v_{h}\in V_{h}\) and a constant real number \(C\) independent of \(\Delta x_{k}\), such that_
\[\|v_{h}-u\|_{L^{\infty}(\Omega_{k})}\leq C\|u\|_{H^{q+1}(\Omega_{k})}(\Delta x _{k})^{q+\frac{1}{2}}.\]
Using this result, the authors show that the global projection error in the DG basis converges with an error in \((\Delta x_{k})^{q+1}\) for functions in the Sobolev space \(H^{q+1}\), and later prove the convergence of the whole scheme using a monotone flux for a scalar equation. In the remainder of this section, we prove that the two new bases proposed in Section 2.2 satisfy the assumptions of Lemma 3.1. Using these results together with the proofs of [56], we will obtain that both bases lead to a convergent scheme.
**Proposition 3.2**.: _If the prior \(u_{\theta}(x;\mu)\) has differentiability class \(\mathcal{C}^{q+1}(\mathbb{R})\) with respect to \(x\), then the approximation space \(V_{h}^{+}\) satisfies the assumption (3.1)._
Proof.: Since the prior is \(\mathcal{C}^{q+1}(\mathbb{R})\), we can write its Taylor series expansion around the cell center \(x_{k}\). Namely, there exists a point \(c\in[x_{k-1/2},x_{k+1/2}]\) such that

\[u_{\theta}(x)=u_{\theta}(x_{k})+(x-x_{k})u_{\theta}^{\prime}(x_{k})+\cdots+ \frac{1}{q!}(x-x_{k})^{q}u_{\theta}^{(q)}(x_{k})+\frac{(x-x_{k})^{q+1}}{(q+1)!}u_{\theta}^{(q+1 )}(c). \tag{3.2}\]
With that expansion, we can write our basis \(V_{h}^{+}\) with respect to the classical modal basis \(V_{h}\) as follows:
\[\begin{pmatrix}u_{\theta}(x)\\ (x-x_{k})\\ \vdots\\ (x-x_{k})^{q}\end{pmatrix}=\underbrace{\begin{pmatrix}u_{\theta}(x_{k})&u_{ \theta}^{\prime}(x_{k})&\ldots&\frac{1}{q!}u_{\theta}^{(q)}(x_{k})\\ 0&1&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\ldots&1\end{pmatrix}}_{A_{+}}\begin{pmatrix}1\\ (x-x_{k})\\ \vdots\\ (x-x_{k})^{q}\end{pmatrix}+(x-x_{k})^{q+1}\underbrace{\begin{pmatrix}\frac{u_{\theta}^{ (q+1)}(c)}{(q+1)!}\\ 0\\ \vdots\\ 0\end{pmatrix}}_{b_{+}}.\]
We remark that the matrix \(A_{+}\) and the vector \(b_{+}\) are independent of \(\Delta x_{k}\). Hence, assumption (3.1) is verified, and Lemma3.1 can be applied.
**Proposition 3.3**.: _If the prior \(u_{\theta}(x;\mu)\) has differentiability class \(\mathcal{C}^{q+1}(\mathbb{R})\) with respect to \(x\), then the approximation space \(V_{h}^{*}\) satisfies the assumption (3.1)._
Proof.: The proof follows the same lines as the proof of the previous proposition. Namely, (3.2) is still satisfied since the prior is \(\mathcal{C}^{q+1}(\mathbb{R})\). Then, the basis \(V_{h}^{*}\) is written with respect to the classical modal basis \(V_{h}\) as follows:
\[\begin{pmatrix}u_{\theta}(x)\\ (x-x_{k})\,u_{\theta}(x)\\ \vdots\\ (x-x_{k})^{q}\,u_{\theta}(x)\end{pmatrix}=\underbrace{\begin{pmatrix}u_{ \theta}(x_{k})&u_{\theta}^{\prime}(x_{k})&\dots&\frac{u_{\theta}^{(q)}(x_{k})}{q!}\\ 0&u_{\theta}(x_{k})&\dots&\frac{u_{\theta}^{(q-1)}(x_{k})}{(q-1)!}\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&u_{\theta}(x_{k})\end{pmatrix}}_{A_{*}}\begin{pmatrix}1\\ (x-x_{k})\\ \vdots\\ (x-x_{k})^{q}\end{pmatrix}+(x-x_{k})^{q+1}\underbrace{\begin{pmatrix}\frac{u_{\theta}^{(q+1)}(c_{0})}{(q+1)!}\\ \vdots\\ u_{\theta}^{\prime}(c_{q})\end{pmatrix}}_{b_{*}},\]

where each \(c_{j}\in[x_{k-1/2},x_{k+1/2}]\) is the intermediate point arising from the Taylor remainder of row \(j\).
Just like before, the matrix \(A_{*}\) and the vector \(b_{*}\) are independent of \(\Delta x_{k}\). Hence, assumption (3.1) is verified, and Lemma3.1 can be applied.
These two propositions show that, if the prior is sufficiently smooth, we can apply the results of [56], which shows the convergence of the method. However, this approach does not give an estimation of the error with respect to the quality of the prior. Indeed, we expect the modified DG scheme to be more accurate when the prior is closer to the solution. Obtaining such an estimate is the objective of the following section.
### Estimate with prior dependency
The goal of this section is to refine the error estimates from Section 3.1 for a specific modified basis. We consider the case of \(V_{h}^{*}\), since its projector is easier to express in terms of the classical basis. This will enable us to quantify the gains that can be expected when using this new basis. The case of \(V_{h}^{+}\) is more complicated, since the projector is harder to write. Nevertheless, we will show in the numerical experiments from Section 5 that both modified bases exhibit similar behavior.
Recall that the basis \(V_{h}^{*}\) is obtained by multiplying each element of \(V_{h}\) by the prior. Therefore, its basis functions are given by \(\phi_{k,j}^{*}=\phi_{k,j}u_{\theta}\) for each cell \(\Omega_{k}\) and for \(j\in\{0,\dots,q\}\).
**Lemma 3.4**.: _Assume that the prior \(u_{\theta}\) satisfies_
\[u_{\theta}(x;\mu)^{2}>m^{2}>0,\quad\forall x\in\Omega,\quad\forall\mu\in \mathbb{P}.\]
_For a given cell \(\Omega_{k}\), for any function \(u\in H^{q+1}(\Omega_{k})\), the \(L^{2}\) projector onto \(V_{h}^{*}\), denoted by \(P_{h}\) and such that \(P_{h}(u)\in V_{h}^{*}\), satisfies the inequality_

\[\left\|u-P_{h}(u)\right\|_{L^{\infty}(\Omega_{k})}\lesssim\left|\frac{u}{u_{\theta}}\right|_{H^{q+1}(\Omega_{k})}\left(\Delta x_{k}\right)^{q+\frac{1}{2}}\left(1+\frac{\left\|u_{\theta}^{2}\right\|_{L^{\infty}(\Omega_{k})}}{m^{2}}\right)\left\|u_{\theta}\right\|_{L^{\infty}(\Omega_{k})}.\]
Proof.: The proof uses a strategy similar to [56]. We consider the cell \(\Omega_{k}\). For any smooth function \(f\) defined on \(\Omega_{k}\), we define the operator \(T\) by
\[T(f)=\left(\sum_{j=0}^{q}f^{(j)}(x_{k})\frac{1}{j!}(x-x_{k})^{j}\right)\]
and the operator \(T_{\theta}\) by
\[T_{\theta}(f)=\left(\sum_{j=0}^{q}\left(\frac{f}{u_{\theta}}\right)^{(j)}(x_{k };\mu)\frac{1}{j!}(x-x_{k})^{j}\right)u_{\theta}(x;\mu).\]
For simplicity, we no longer explicitly write the dependence in \(\mu\) in this proof. Let \(u\in H^{q+1}(\Omega_{k})\). Using \(T_{\theta}\), we write the following estimation:
\[\|u-P_{h}(u)\|_{L^{\infty}(\Omega_{k})}\leq\|u-T_{\theta}(u)\|_{L^{\infty}( \Omega_{k})}+\|T_{\theta}(u)-P_{h}(u)\|_{L^{\infty}(\Omega_{k})}\eqqcolon N _{1}+N_{2}. \tag{3.3}\]
To complete the proof, we need to estimate both terms \(N_{1}\) and \(N_{2}\).
We start with the estimation of \(N_{1}\). We obtain, according to the relationship between \(T\) and \(T_{\theta}\),
\[N_{1}=\|u-T_{\theta}(u)\|_{L^{\infty}(\Omega_{k})}=\left\|\frac{u}{u_{\theta}}u_{ \theta}-T\left(\frac{u}{u_{\theta}}\right)u_{\theta}\right\|_{L^{\infty}( \Omega_{k})}\leq\left\|\frac{u}{u_{\theta}}-T\left(\frac{u}{u_{\theta}}\right) \right\|_{L^{\infty}(\Omega_{k})}\left\|u_{\theta}\right\|_{L^{\infty}(\Omega_ {k})}. \tag{3.4}\]
We can now use an intermediate result from [56]: for all \(f\) smooth enough, the Taylor formula and the Cauchy-Schwarz inequality, followed by a direct computation, give
\[\|f-T(f)\|_{L^{\infty}(\Omega_{k})} =\sup_{x\in\Omega_{k}}\left|\int_{x_{k}}^{x}f^{(q+1)}(\xi)\frac{( x-\xi)^{q}}{q!}d\xi\right|\] \[\leq\sup_{x\in\Omega_{k}}\left[\left(\int_{x_{k}}^{x}\left|f^{(q+ 1)}(\xi)\right|^{2}d\xi\right)^{\frac{1}{2}}\left(\int_{x_{k}}^{x}\left|\frac {(x-\xi)^{q}}{q!}\right|^{2}d\xi\right)^{\frac{1}{2}}\right],\] \[\lesssim|f|_{H^{q+1}(\Omega_{k})}\left(\Delta x_{k}\right)^{q+ \frac{1}{2}}. \tag{3.5}\]
Going back to \(N_{1}\) and plugging (3.5) into the estimate (3.4), we obtain
\[N_{1}\lesssim\left|\frac{u}{u_{\theta}}\right|_{H^{q+1}(\Omega_{k})}\left( \Delta x_{k}\right)^{q+\frac{1}{2}}\left\|u_{\theta}\right\|_{L^{\infty}( \Omega_{k})} \tag{3.6}\]
Now, we proceed with estimating \(N_{2}\), the second term of (3.3). The \(L^{2}\) projector \(P_{h}\) onto \(V_{h}^{*}\) is defined by
\[P_{h}(u)=\sum_{j=0}^{q}\frac{\alpha_{j}}{(\Delta x_{k})^{j}}\phi_{j}^{*},\]
with \(\alpha=(\alpha_{j})_{j\in\{0,\dots,q\}}=(M^{*})^{-1}b\), where
\[M_{j\ell}^{*}=\int_{\Omega_{k}}\frac{\phi_{j}^{*}(x)}{(\Delta x_{k})^{j}} \frac{\phi_{\ell}^{*}(x)}{(\Delta x_{k})^{\ell}}dx,\quad\text{and}\quad b_{j} =\int_{\Omega_{k}}u(x)\frac{\phi_{j}^{*}(x)}{(\Delta x_{k})^{j}}dx. \tag{3.7}\]
We are now ready to start estimating \(N_{2}\). Note that
\[N_{2}=\|T_{\theta}(u)-P_{h}(u)\|_{L^{\infty}(\Omega_{k})} =\sup_{x\in\Omega_{k}}\left|\sum_{j=0}^{q}\left(\left(\frac{u}{u_ {\theta}}\right)^{(j)}(x_{k})-\frac{\alpha_{j}}{(\Delta x_{k})^{j}}\right) \frac{1}{j!}(x-x_{k})^{j}u_{\theta}(x)\right|\] \[\leq\sup_{x\in\Omega_{k}}\left|\sum_{j=0}^{q}\left((\Delta x_{k}) ^{j}\left(\frac{u}{u_{\theta}}\right)^{(j)}(x_{k})-\alpha_{j}\right)\frac{1}{j! }\frac{(x-x_{k})^{j}}{(\Delta x_{k})^{j}}\right|\left\|u_{\theta}\right\|_{L^{ \infty}(\Omega_{k})}.\]
Using the Cauchy-Schwarz inequality on the sum, and bounding the resulting polynomial on the cell, we obtain the estimate
\[N_{2}\lesssim\left[\sum_{j=0}^{q}\left((\Delta x_{k})^{j}\left(\frac{u}{u_{ \theta}}\right)^{(j)}(x_{k})-\alpha_{j}\right)^{2}\right]^{\frac{1}{2}}\left\|u _{\theta}\right\|_{L^{\infty}(\Omega_{k})}=\left\|\delta-\alpha\right\|_{2} \left\|u_{\theta}\right\|_{L^{\infty}(\Omega_{k})} \tag{3.8}\]
where the vector \(\delta=(\delta_{j})_{j\in\{0,\dots,q\}}\) is defined by
\[\delta_{j}=(\Delta x_{k})^{j}\left(\frac{u}{u_{\theta}}\right)^{(j)}(x_{k}).\]
Recalling the definition \(\alpha=(M^{*})^{-1}b\), we obtain
\[\left\|\delta-\alpha\right\|_{2}=\left\|(M^{*})^{-1}(M^{*}\delta-b)\right\|_{2 }\leq\left\|(M^{*})^{-1}\right\|_{2}\left\|M^{*}\delta-b\right\|_{2}. \tag{3.9}\]
We first take care of the term in \(M^{*}\delta-b\). We have
\[\left\|M^{*}\delta-b\right\|_{2}^{2} =\sum_{j=0}^{q}\left[\sum_{\ell=0}^{q}M^{*}_{j\ell}\delta_{\ell}-b_ {j}\right]^{2}\] \[=\sum_{j=0}^{q}\left[\sum_{\ell=0}^{q}\left(\int_{\Omega_{k}} \frac{\phi_{j}^{*}(x)}{(\Delta x_{k})^{j}}\frac{\phi_{\ell}^{*}(x)}{(\Delta x_ {k})^{\ell}}dx\right)(\Delta x_{k})^{\ell}\left(\frac{u}{u_{\theta}}\right)^{( \ell)}(x_{k})-\int_{\Omega_{k}}u(x)\frac{\phi_{j}^{*}(x)}{(\Delta x_{k})^{j}} dx\right]^{2}=:\sum_{j=0}^{q}\Xi_{j}^{2}\]
We denote the summand by \(\Xi_{j}\), and we use the definition of the basis to obtain
\[\forall j\in\{0,\dots,q\},\quad\Xi_{j} :=\sum_{\ell=0}^{q}(\Delta x_{k})^{\ell}\left(\frac{u}{u_{\theta} }\right)^{(\ell)}(x_{k})\int_{\Omega_{k}}\frac{\phi_{j}^{*}(x)}{(\Delta x_{k })^{j}}\frac{\phi_{\ell}^{*}(x)}{(\Delta x_{k})^{\ell}}dx-\int_{\Omega_{k}}u(x )\frac{\phi_{j}^{*}(x)}{(\Delta x_{k})^{j}}dx\] \[=\sum_{\ell=0}^{q}(\Delta x_{k})^{\ell}\left(\frac{u}{u_{\theta} }\right)^{(\ell)}(x_{k})\int_{\Omega_{k}}\frac{\phi_{j}(x)}{(\Delta x_{k})^{j }}\frac{\phi_{\ell}(x)}{(\Delta x_{k})^{\ell}}u_{\theta}^{2}(x)dx-\int_{\Omega _{k}}\frac{u(x)}{u_{\theta}(x)}\frac{\phi_{j}(x)}{(\Delta x_{k})^{j}}u_{\theta }^{2}(x)dx\] \[=\int_{\Omega_{k}}\left(\sum_{\ell=0}^{q}\left(\frac{u}{u_{\theta }}\right)^{(\ell)}(x_{k})\phi_{\ell}(x)-\frac{u(x)}{u_{\theta}(x)}\right)\frac {\phi_{j}(x)}{(\Delta x_{k})^{j}}u_{\theta}^{2}(x)dx. \tag{3.10}\]
Using a Taylor expansion, we obtain, for all \(j\in\{0,\dots,q\}\),
\[\Xi_{j}= -\int_{\Omega_{k}}\left(\int_{x_{k}}^{x}\left(\frac{u}{u_{\theta} }\right)^{(q+1)}(\xi)\,\frac{(x-\xi)^{q}}{q!}d\xi\right)\frac{\phi_{j}(x)}{( \Delta x_{k})^{j}}u_{\theta}^{2}(x)dx,\]
from which we get the following upper bound
\[\forall j\in\{0,\dots,q\},\quad\left|\Xi_{j}\right|\leq\sup_{x\in\Omega_{k}} \left|\int_{x_{k}}^{x}\left(\frac{u}{u_{\theta}}\right)^{(q+1)}(\xi)\,\frac{(x -\xi)^{q}}{q!}d\xi\right|\left|\int_{\Omega_{k}}\frac{\phi_{j}(x)}{(\Delta x_{ k})^{j}}u_{\theta}^{2}(x)dx\right|.\]
Using the same ingredients as in the computation of (3.5) for the first factor, and bounding the second factor thanks to the \(L^{\infty}\) norm of \(u_{\theta}^{2}\) and the boundedness of the classical basis functions, we obtain the estimate
\[\forall j\in\{0,\dots,q\},\quad\left|\Xi_{j}\right|\lesssim\left|\frac{u}{u_ {\theta}}\right|_{H^{q+1}(\Omega_{k})}\left(\Delta x_{k}\right)^{q+\frac{1}{2}} \left(\Delta x_{k}\right)\left\|u_{\theta}^{2}\right\|_{L^{\infty}(\Omega_{k})}.\]
Going back to what we had set out to prove, we get
\[\left\|M^{*}\delta-b\right\|_{2}=\left(\sum_{j=0}^{q}\left|\Xi_{j}\right|^{2} \right)^{\frac{1}{2}}\lesssim\left|\frac{u}{u_{\theta}}\right|_{H^{q+1}(\Omega _{k})}\left(\Delta x_{k}\right)^{q+\frac{1}{2}}\left(\Delta x_{k}\right)\left\| u_{\theta}^{2}\right\|_{L^{\infty}(\Omega_{k})}. \tag{3.11}\]
Plugging (3.11) into (3.9) and then into (3.8), we get
\[N_{2}\lesssim\left\|(M^{*})^{-1}\right\|_{2}\left|\frac{u}{u_{\theta}}\right| _{H^{q+1}(\Omega_{k})}\left(\Delta x_{k}\right)^{q+\frac{1}{2}}\left(\Delta x_ {k}\right)\left\|u_{\theta}^{2}\right\|_{L^{\infty}(\Omega_{k})}\left\|u_{ \theta}\right\|_{L^{\infty}(\Omega_{k})}. \tag{3.12}\]
Finally, we note that, for any \(y\in\mathbb{R}^{q+1}\), given the expression (3.7) of \(M^{*}\),
\[(M^{*}y,y)=\int_{\Omega_{k}}\left(\sum_{j=0}^{q}\frac{\phi_{j}^ {*}(x)}{(\Delta x_{k})^{j}}y_{j}\right)^{2}dx =\int_{\Omega_{k}}\left(\sum_{j=0}^{q}\frac{\phi_{j}(x)}{(\Delta x _{k})^{j}}y_{j}\right)^{2}u_{\theta}(x)^{2}dx\] \[\geqslant m^{2}\int_{\Omega_{k}}\left(\sum_{j=0}^{q}\frac{\phi_{j}( x)}{(\Delta x_{k})^{j}}y_{j}\right)^{2}dx=m^{2}\left(My,y\right),\]
where \(M\) is the mass matrix associated with the classical basis functions
\[M_{j\ell}=\int_{\Omega_{k}}\frac{\phi_{j}(x)}{(\Delta x_{k})^{j}}\frac{\phi_{\ell }(x)}{(\Delta x_{k})^{\ell}}dx=\frac{\Delta x_{k}}{1+j+\ell}=\Delta x_{k}H_{j \ell},\]
where \(H=(H_{j\ell})_{j\ell}\) is the Hilbert matrix. Then we deduce the following inequality
\[\|(M^{*})^{-1}\|_{2}\leqslant\frac{1}{m^{2}}\|M^{-1}\|_{2}=\frac{1}{m^{2}} \frac{1}{\Delta x_{k}}\|H^{-1}\|_{2}. \tag{3.13}\]
Combining (3.12) and (3.13), we obtain
\[N_{2}\lesssim\left|\frac{u}{u_{\theta}}\right|_{H^{q+1}(\Omega_{k})}(\Delta x _{k})^{q+\frac{1}{2}}\frac{\left\|u_{\theta}^{2}\right\|_{L^{\infty}(\Omega_{k })}}{m^{2}}\left\|u_{\theta}\right\|_{L^{\infty}(\Omega_{k})}. \tag{3.14}\]
We get, from (3.6) and (3.14), the expected result.
The above proof relies on the smoothness of the prior. This may seem counter-intuitive in a hyperbolic context. However, since the prior will be obtained from a neural network in Section 4, this smoothness assumption becomes reasonable.
**Lemma 3.5**.: _We make the same assumptions as in the previous lemma, and still consider the vector space \(V_{h}^{*}\). Let \(\Delta x=\max_{k\in\{1,\ldots,N\}}\Delta x_{k}\). For any function \(u\in H^{q+1}(\Omega)\),_

\[\left\|u-P_{h}(u)\right\|_{L^{2}(\Omega)}\lesssim\left|\frac{u}{u_{\theta}} \right|_{H^{q+1}(\Omega)}(\Delta x)^{q+1}\left(1+\frac{\left\|u_{\theta}^{2}\right\|_{L^{\infty}(\Omega)}}{m^{2}}\right)\left\|u_{\theta}\right\|_{L^{ \infty}(\Omega)}.\]
Proof.: We begin by bounding the \(L^{2}\) norm cell by cell: since
\[\Omega=\bigcup_{k=1}^{N}\Omega_{k},\]
we obtain
\[\left\|u-P_{h}(u)\right\|_{L^{2}(\Omega)}^{2}\leqslant\sum_{k=1}^{N}\Delta x _{k}\left\|u-P_{h}(u)\right\|_{L^{\infty}(\Omega_{k})}^{2}.\]
Using the result from Lemma 3.4, we get
\[\left\|u-P_{h}(u)\right\|_{L^{2}(\Omega)}^{2} \lesssim\sum_{k=1}^{N}\Delta x_{k}\left(\left|\frac{u}{u_{\theta }}\right|_{H^{q+1}(\Omega_{k})}(\Delta x_{k})^{q+\frac{1}{2}}\,\left(1+\frac{ \left\|u_{\theta}^{2}\right\|_{L^{\infty}(\Omega_{k})}}{m^{2}}\right)\left\|u _{\theta}\right\|_{L^{\infty}(\Omega_{k})}\right)^{2}\] \[\lesssim\sum_{k=1}^{N}\left|\frac{u}{u_{\theta}}\right|_{H^{q+1} (\Omega_{k})}^{2}(\Delta x_{k})^{2q+2}\,\left(1+\frac{\left\|u_{\theta}^{2} \right\|_{L^{\infty}(\Omega_{k})}}{m^{2}}\right)^{2}\left\|u_{\theta}\right\|_ {L^{\infty}(\Omega_{k})}^{2}.\]
Since \(\Delta x_{k}\leqslant\Delta x\) for all \(k\in\{1,\ldots,N\}\), and since \(\left\|u_{\theta}\right\|_{L^{\infty}(\Omega_{k})}\leqslant\left\|u_{\theta} \right\|_{L^{\infty}(\Omega)}\), we obtain
\[\left\|u-P_{h}(u)\right\|_{L^{2}(\Omega)}^{2} \lesssim(\Delta x)^{2q+2}\sum_{k=1}^{N}\left|\frac{u}{u_{\theta}} \right|_{H^{q+1}(\Omega_{k})}^{2}\left(1+\frac{\left\|u_{\theta}^{2} \right\|_{L^{\infty}(\Omega)}}{m^{2}}\right)^{2}\left\|u_{\theta}\right\|_{L^{\infty}( \Omega_{k})}^{2}\] \[\lesssim(\Delta x)^{2q+2}\left(1+\frac{\left\|u_{\theta}^{2} \right\|_{L^{\infty}(\Omega)}}{m^{2}}\right)^{2}\left\|u_{\theta}\right\|_ {L^{\infty}(\Omega)}^{2}\sum_{k=1}^{N}\left|\frac{u}{u_{\theta}}\right|_{H^{q+ 1}(\Omega_{k})}^{2}.\]
The proof is concluded by recognizing the \(H^{q+1}(\Omega)\) seminorm.
The global error estimate of Lemma 3.5 shows that the projection error onto the basis \(V_{h}^{*}\) is controlled by the seminorm
\[\left|\frac{u}{u_{\theta}}\right|_{H^{q+1}(\Omega)}.\]
This bound is equal to zero if the prior is exact, since it is nothing but the \((q+1)^{\text{th}}\) derivative of the constant function equal to one. This estimate also proves that the closer the prior is to the solution, the smaller the bound of the projection error. However, to obtain an even smaller bound, we need the prior and the solution to be close in the sense of the \(H^{q+1}(\Omega)\) seminorm. This means that the prior must be constructed in such a way that it also gives a good approximation of the derivatives of the solution.
As a summary, we have shown that the \(L^{2}\) projection error tends to zero when the prior tends to the solution. This result gives an idea of the expected gains in error ensured by using the modified basis \(V_{h}^{*}\). The final convergence error depends on this projection error, as has been shown in [56]. The proof to obtain the final convergence result is the same as in [56].
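To illustrate this estimate numerically, the following small NumPy experiment (a sketch of our own; the projected function, the perturbed prior and the mesh are arbitrary choices) compares the discrete \(L^{2}\) best-approximation error in the classical basis with that in the enriched basis \(V_{h}^{*}\), for \(q=1\):

```python
import numpy as np

def l2_projection_error(u, basis_fns, cells, nsample=50):
    """Discrete L2 best-approximation error of u in span(basis_fns), cell by cell."""
    err2 = 0.0
    for a, b in cells:
        x = np.linspace(a, b, nsample)
        A = np.stack([f(x, 0.5 * (a + b)) for f in basis_fns], axis=1)
        coef, *_ = np.linalg.lstsq(A, u(x), rcond=None)
        err2 += np.mean((A @ coef - u(x)) ** 2) * (b - a)
    return np.sqrt(err2)

u = np.exp                                                        # function to project
prior = lambda x: np.exp(x) * (1 + 1e-3 * np.sin(2 * np.pi * x))  # imperfect prior

# q = 1: classical Taylor basis {1, x - xk} versus multiplicative basis V_h^*.
classical = [lambda x, xk: np.ones_like(x), lambda x, xk: x - xk]
enriched = [lambda x, xk: prior(x), lambda x, xk: (x - xk) * prior(x)]

cells = [(k / 10.0, (k + 1) / 10.0) for k in range(10)]
print(l2_projection_error(u, classical, cells))  # classical error
print(l2_projection_error(u, enriched, cells))   # markedly smaller with a good prior
```

As the perturbation level of the prior is decreased, the second error tends to zero, in line with Lemma 3.5.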
For the additive basis \(V_{h}^{+}\), such error estimates are harder to obtain, since the projection in the new basis is harder to write with respect to the traditional one. We expect an error bounded by a term in \(\|u-u_{\theta}\|_{H^{q+1}(\Omega)}\), which would enable us to draw similar conclusions as for the multiplicative basis \(V_{h}^{*}\). Namely, the error would also tend to zero when the prior tends to the solution, and the derivatives of the prior would need to be close to the derivatives of the solution. Proving this result is out of the scope of this paper, even though it should be ensured by the results of [56]. However, we will extensively study the behavior of the additive basis in Section 5.
## 4 Prior construction and algorithm
Equipped with the modified bases from Section 2.2 and with the theoretical results from Section 3, it remains to propose a way to obtain a suitable prior \(u_{\theta}\).
Note that the approach described in Section 2 will be interesting if the prior \(u_{\theta}\) is a good approximation of the steady solution to (1.1) for a wide range of parameters. In addition, according to Section 3.2, the derivatives of the prior must also be good approximations of the derivatives of the steady solution.
This means that we wish to capture large families of solutions, i.e. we want to be able to compute an approximation for many parameter values. For example, assuming that (1.1) depends on 4 physical parameters leads to \(\mu\in\mathbb{R}^{4}\), and considering a problem in two space dimensions leads to \(x\in\mathbb{R}^{2}\). Therefore, we are looking for a prior \(u_{\theta}(x;\mu)\), where \(u_{\theta}\in\mathcal{C}^{q}(\mathbb{R}^{2}\times\mathbb{R}^{4},\mathbb{R})\). Approaching such a function using polynomials defined on a mesh would be a very difficult task, especially if the space or parameter domains have a complex geometry. Neural networks have demonstrated their ability to approximate functions in fairly high dimensions, notably thanks to their intrinsic regularity. PINNs are a mesh-free approach to solving PDEs using neural networks. Their properties make them good candidates for approaching solutions to high-dimensional problems.
To build our prior, we propose to solve the parametric steady problem with PINNs. To that end, we now briefly introduce this method in Section 4.1, and we show how to compute and store the prior. Then, our algorithm is summarized in Section 4.2.
### Parametric PINNs
Note that the steady solutions to (1.1) are given by
\[\partial_{x}F_{\mu_{1}}(u)=S_{\mu_{2}}(u).\]
This is nothing but a parametric elliptic problem. Therefore, we introduce PINNs for the following generic boundary value problems (BVPs):
\[\begin{cases}\mathcal{D}(u,x;\mu)=f(x;\mu)&\text{ for }x\in\Omega,\\ u(x)=g(x;\mu)&\text{ for }x\in\partial\Omega,\end{cases} \tag{4.1}\]
where \(\mathcal{D}\) is a differential operator involving the solution \(u\) and its space derivatives, and with \(\mu\) some physical parameters. We recall that \(\mu\in\mathbb{P}\subset\mathbb{R}^{m}\). PINNs use the fact that classical fully-connected neural networks are
smooth functions of their inputs, as long as their activation functions are also smooth, to approximate the solution to (4.1). Contrary to traditional numerical schemes such as the DG method, where the degrees of freedom encode some explicit modal or nodal values of the solutions, the degrees of freedom of PINNs representation are the weights \(\theta\) of the neural network, and so do not explicitly represent the solution. Equipped with both of these remarks, the idea behind PINNs is to plug the network, which represents the solution to (4.1), into the equation. Then, the degrees of freedom (i.e. the weights \(\theta\) of the network) are found by minimizing a loss function. Since the neural network is differentiable, the derivatives can be exactly computed. In our case, the PINN is thus a smooth neural network that takes as input the space variable \(x\) and the parameter vector \(\mu\), which we denote by \(u_{\theta}(x;\mu)\).
Thanks to these definitions, solving the PDE can be rewritten as the following minimization problem:
\[\min_{\theta}\mathcal{J}(\theta),\quad\text{where }\mathcal{J}(\theta)= \mathcal{J}_{r}(\theta)+\mathcal{J}_{b}(\theta)+\mathcal{J}_{\text{data}}( \theta). \tag{4.2}\]
In (4.2), we have introduced three different terms: the residual loss function \(\mathcal{J}_{r}\), the boundary loss function \(\mathcal{J}_{b}\), and the data loss function \(\mathcal{J}_{\text{data}}\). For parameters \(\mu\in\mathbb{P}\), the residual loss function is defined by
\[\mathcal{J}_{r}(\theta)=\int_{\mathbb{P}}\int_{\Omega}\big{\|} \mathcal{D}(u_{\theta},x;\mu)-f(x;\mu)\big{\|}_{2}^{2}\,dxd\mu, \tag{4.3}\]
while the boundary loss function is given by
\[\mathcal{J}_{b}(\theta)=\int_{\mathbb{P}}\int_{\partial\Omega} \big{\|}u_{\theta}(x,\mu)-g(x,\mu)\big{\|}_{2}^{2}\,dxd\mu. \tag{4.4}\]
Finally, to define the data loss function, we assume that we know the exact solution to (4.1) at some points \(x_{i}\) and for some parameters \(\mu_{i}\), and we set
\[\mathcal{J}_{\text{data}}(\theta)=\sum_{i}\big{\|}u_{\theta}(x_{i},\mu_{i})-u (x_{i},\mu_{i})\big{\|}_{2}^{2}.\]
In practice, the integrals in (4.3) and (4.4) are approximated using a Monte-Carlo method. This method relies on sampling a certain number of so-called "collocation points" in order to approximate the integrals. Then, the minimization problem on \(\theta\) is solved using a gradient-type method, which corresponds to the learning phase.
The main advantage of PINNs is that they are mesh-free and less sensitive to dimension than classical methods. Indeed, neural networks easily deal with large input dimensions, and the Monte-Carlo method converges independently of the dimension. Consequently, PINNs are particularly well-suited to solving parametric PDEs such as (4.1). Thanks to that, we do not solve for a single equilibrium but rather for families of equilibria indexed by the parameters \(\mu\).
Traditional PINNs use this method to approximate both (4.3) and (4.4). However, for the boundary conditions, we elected to use another approach, which makes it possible to completely eliminate \(\mathcal{J}_{b}\) from the minimization algorithm. The idea is to define the approximate solution through a boundary operator \(\mathcal{B}\), which can for instance be a multiplication by a function which satisfies the boundary condition. We obtain
\[\widetilde{u}_{\theta}(x;\mu)=\mathcal{B}\big{(}u_{\theta},x;\mu\big{)},\]
with \(u_{\theta}\) the neural network and \(\mathcal{B}\) a simple operator such as \(\widetilde{u}_{\theta}\) exactly satisfies the boundary conditions. Using \(\widetilde{u}_{\theta}\), the residual loss becomes, instead of (4.3):
\[\mathcal{J}_{r}(\theta)=\int_{\mathbb{P}}\int_{\Omega}\big{\|} \mathcal{D}(\widetilde{u}_{\theta},x;\mu)-f(x;\mu)\big{\|}_{2}^{2}\,dxd\mu. \tag{4.5}\]
Examples of such functions \(\mathcal{B}\) are provided in Section 5.
With this approach, we have presented one method for offline construction of our prior for a family of equilibria. Note that it is possible to further enhance this prior with data from previous simulations, thanks to the loss function \(\mathcal{J}_{\text{data}}\). Even though training PINNs may be harder than training traditional purely data-driven neural networks, they are much more efficient as priors. Indeed, the error estimates of Section 3.2 show that the error depends on the derivatives, up to order \(q+1\), of the ratio between the solution and the prior. Therefore, to obtain a small error, it is important for the prior to provide a good approximation of not only the steady solution, but also of its derivatives. Since the PINN loss (4.5) inherently contains derivatives of \(u_{\theta}\), the resulting trained PINN will be more efficient in this respect. Note that a purely data-driven network could also be interesting if the data contains information on the derivatives.
### Algorithm
Now that we have discussed the strategy we use to obtain our prior, we give some details on the offline and online algorithms that we developed to construct the modified DG bases in practice. We start by describing the offline step, where the families of priors are computed. Then, we move on to an online algorithm, explaining how to construct the DG bases using the prior, and how to apply them to the actual DG time iterations.
```
Require: space domain \(\Omega\), parameter set \(\mathbb{P}\), initial neural network \(u_{\theta_{0}}(x;\mu)\), learning rate \(\eta\), number of collocation points \(N\), number of training epochs \(n_{\text{epochs}}\)
Ensure: trained neural network \(u_{\theta}(x;\mu)\)
1: initialize the weights: \(\theta=\theta_{0}\)
2: for \(n\leq n_{\text{epochs}}\) do
3:    sample \(N\) values of \(x\) in \(\Omega\) and of \(\mu\) in \(\mathbb{P}\)
4:    compute the loss function \(\mathcal{J}(\theta)\)
5:    update \(\theta\) using the gradient of \(\mathcal{J}(\theta)\): \(\theta=\theta-\eta\nabla_{\theta}\mathcal{J}(\theta)\)
6: end for
```
**Algorithm 1** Offline part: neural network training
In practice, we do not use a classical gradient descent to update the weights, but rather the Adam algorithm. Moreover, sampling is done through a uniform law on the space and parameter domains. It would also be possible to use non-uniform sampling like in [52] for instance, but we elected to use uniform sampling for the sake of simplicity. Note that Algorithm 1 does not contain solution data in its inputs. Indeed, almost all numerical experiments from Section 5 do not require data on the solution. This avoids the cost of data production, which would otherwise require sampling the exact solution if it is known, or using a numerical scheme otherwise.
```
Require: prior \(u_{\theta}\), degree \(N_{q}\) of the Gauss-Lobatto quadrature rule, initial data \(u_{0}\), space mesh \(\Omega_{h}\), parameters \(\mu\), number of time steps \(n_{t}\)
Ensure: numerical solution \(u_{k}(t,x)\) on each cell \(\Omega_{k}\)
1: use the mesh \(\Omega_{h}\) to obtain all quadrature points \(x_{k,p}\) in each cell \(\Omega_{k}\)
2: evaluate the prior at each point \(x_{k,p}\): we obtain \(\tilde{u}_{k,p}\coloneqq u_{\theta}(x_{k,p};\mu)\)
3: reconstruct \(u_{k}(0,x)\) using \(\tilde{u}_{k,p}\)
4: for \(n\leq n_{t}\) do
5:    construct the mass matrix \(\mathcal{M}\), the nonlinear flux \(\mathcal{F}\), the interface flux \(\mathcal{G}\) and the source term \(\mathcal{S}\) using \(\tilde{u}_{k,p}\) and the quadrature rule
6:    update the solution \(u_{k}\) at the next time step, using \(u_{k}\) at the previous time step as well as the terms computed in step 5
7: end for
```
**Algorithm 2** Online part: using the neural network in the DG scheme
In this second step, the additional computational overhead associated to our method, compared to the classical DG scheme, comes from two distinct sources. The first one is a preprocessing phase, where we evaluate the prior on the quadrature points (step 2 of Algorithm 2). Even though such networks have been made to be quickly evaluated on GPUs, this evaluation step remains fast on CPUs. The second source of computational cost is associated to the quadrature rule. Indeed, in some cases, we will require a quadrature rule with a higher degree than the traditional DG scheme. The classical approach is to use \(N_{q}=q\) quadrature points for bases made of \(q\) polynomial functions, since the quadrature is exact for polynomials of degree \(q\). However, in our case, our basis
is non-polynomial. Hence, to have a good approximation of the integral of the prior, we may need to increase the degree of the quadrature. In most cases, this increase is slight; for a few test cases, especially to approximate functions with large derivatives, we will need to use fine quadrature rules.
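For reference, the Gauss-Lobatto rule can be generated from the standard characterization of its nodes as the endpoints plus the roots of \(P_{N_{q}-1}^{\prime}\), where \(P_{n}\) denotes the Legendre polynomials. The sketch below is our own and is not taken from the authors' implementation:

```python
import numpy as np
from numpy.polynomial import legendre

def gauss_lobatto(nq):
    """Gauss-Lobatto points and weights on [-1, 1], for nq >= 2 points.

    Nodes: the endpoints plus the roots of P'_{nq-1};
    weights: w_i = 2 / (nq (nq - 1) P_{nq-1}(x_i)^2).
    """
    P = legendre.Legendre.basis(nq - 1)
    x = np.concatenate(([-1.0], P.deriv().roots(), [1.0]))
    w = 2.0 / (nq * (nq - 1) * P(x) ** 2)
    return x, w

# Sanity check: for nq = 3, the rule integrates x^2 exactly on [-1, 1].
x, w = gauss_lobatto(3)
print(np.dot(w, x**2))  # 2/3
```

On a cell \(\Omega_{k}\), the reference points are then mapped affinely, and the weights are scaled by \(\Delta x_{k}/2\).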
## 5 Applications and numerical results
This section is dedicated to a validation of the approach on several parametric hyperbolic systems of balance laws: the linear advection equation in Section 5.1, the 1D shallow water equations in Section 5.2, the Euler-Poisson system in Section 5.3, and the 2D shallow water equations in Section 5.4. In the first two cases, there exist some exact well-balanced schemes in the literature. However, for the Euler-Poisson system and the 2D shallow water equations, exact (or even approximate) WB schemes are either not available or very complicated to implement. Code replicating the experiments of Section 5.1 is freely available [33] on GitHub1.
Footnote 1: [https://github.com/Victor-MichelDansac/DG-PINNs.git](https://github.com/Victor-MichelDansac/DG-PINNs.git)
In this section, we denote by \(K\) the number of cells. We test both bases \(V_{h}^{*}\) and \(V_{h}^{+}\) at first, showing that both display similar results. To cut down on the number of tables, we then only present the results for the additive basis \(V_{h}^{+}\).
Moreover, the time step \(\Delta t\) is given by
\[\Delta t=C_{\mathrm{CFL}}\,C_{\mathrm{RK}}\,\frac{\min_{k\in\{1,\ldots,K\}} \Delta x_{k}}{\lambda}, \tag{5.1}\]
where \(\lambda\) is the maximal wave speed of the system, \(C_{\mathrm{CFL}}\) is a CFL (Courant-Friedrichs-Lewy) coefficient, and \(C_{\mathrm{RK}}\) is the stability coefficient associated to the time discretization. All experiments are run using a strong stability-preserving Runge-Kutta (SSPRK) time discretization of the correct order. The time discretizations, with their associated stability coefficients \(C_{\mathrm{RK}}\), are collected in Table 1. To determine \(C_{\mathrm{CFL}}\), we run a study of the stability condition for the first experiment; this study is not repeated for the other experiments, since the new bases do not influence the stability condition.
### Linear advection
We first consider the case of a linear advection equation with a source term, on the space domain \(\Omega=(0,1)\). The equation is given as follows:
\[\left\{\begin{aligned} \partial_{t}u+\partial_{x}u& =s(u;\mu),\qquad\text{ for }x\in\Omega\\ u(t=0,x)&=u_{\mathrm{ini}}(x;\mu),\\ u(t,x=0)&=u_{0},\end{aligned}\right. \tag{5.2}\]
Here, the parameter vector \(\mu\) is made of three elements:
\[\mu=\begin{pmatrix}\alpha\\ \beta\\ u_{0}\end{pmatrix}\in\mathbb{P}\subset\mathbb{R}^{3},\quad\alpha\in\mathbb{R }_{+},\quad\beta\in\mathbb{R}_{+},\quad u_{0}\in\mathbb{R}_{+}^{*}.\]
The source term depends on \(\mu\) as follows:
\[s(u;\mu)=\alpha u+\beta u^{2},\]
\begin{table}
\begin{tabular}{c c c c c} \hline \hline number \(q\) of basis elements & 0 & 1 & 2 & 3 \\ \hline time discretization & explicit Euler & SSPRK2 [45] & SSPRK3(5) [45] & SSPRK4(10) [29] \\ stability coefficient \(C_{\mathrm{RK}}\) & 1 & 1 & 2.65 & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Stability coefficients \(C_{\mathrm{RK}}\) of the high-order time discretizations used in the numerical experiments, with respect to the number \(q\) of basis elements.
and straightforward computations show that the associated steady solutions take the form
\[u_{\text{eq}}(x;\mu)=\frac{\alpha u_{0}}{(\alpha+\beta u_{0})e^{-\alpha x}-\beta u _{0}}. \tag{5.3}\]
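This closed form is easy to check symbolically; for instance, the following SymPy snippet (a sanity check of our own) verifies that (5.3) satisfies the steady equation \(\partial_{x}u=\alpha u+\beta u^{2}\) together with \(u(0)=u_{0}\):

```python
import sympy as sp

x, alpha, beta, u0 = sp.symbols("x alpha beta u_0", positive=True)
u_eq = alpha * u0 / ((alpha + beta * u0) * sp.exp(-alpha * x) - beta * u0)

# Steady states of (5.2) satisfy u' = alpha*u + beta*u^2 with u(0) = u0.
print(sp.simplify(sp.diff(u_eq, x) - alpha * u_eq - beta * u_eq**2))  # 0
print(sp.simplify(u_eq.subs(x, 0) - u0))                              # 0
```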
To compute the time step, we take \(\lambda=1\) in (5.1), since the advection velocity in (5.2) is equal to \(1\). Section 5.1.1 below shows how to choose \(C_{\text{CFL}}\) to complete the determination of the time step. Unless otherwise stated, we prescribe Dirichlet boundary conditions given by the steady solution.
To obtain a suitable prior \(u_{\theta}\), we train a PINN with parameters \(\theta\). To avoid cumbersome penalization of boundary conditions, we define \(\widetilde{u}_{\theta}\) using a boundary operator \(\mathcal{B}\), as follows:
\[\widetilde{u}_{\theta}(x;\mu)=\mathcal{B}(u_{\theta},x;\mu)=u_{0}+xu_{\theta} (x;\mu),\]
so that the boundary condition \(\widetilde{u}_{\theta}(0;\mu)=u_{0}\) is automatically satisfied by \(\widetilde{u}_{\theta}\). The parameter space \(\mathbb{P}\) is chosen such that the steady solution is well-defined, and we take
\[\mathbb{P}=[0.5,1]\times[0.5,1]\times[0.1,0.2]. \tag{5.4}\]
Thanks to the boundary operator \(\mathcal{B}\), the loss function only concerns the ODE residue, and we set
\[\mathcal{J}(\theta)=\left\|\partial_{x}\widetilde{u}_{\theta}-\alpha \widetilde{u}_{\theta}-\beta\widetilde{u}_{\theta}^{2}\right\|_{2}^{2}.\]
We use a neural network with \(5\) fully connected hidden layers, and around \(1200\) trainable parameters. Training takes about \(4\) minutes on a dual NVIDIA K80 GPU, until the loss is equal to about \(10^{-6}\). For this experiment, we increase the order of the quadrature compared to the classical baseline, even in the case with one basis function. Indeed, we take \(n_{Q}=\max(q+2,3)\), to ensure sufficient precision when integrating the prior.
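As an illustration of how this prior is trained, the sketch below instantiates Algorithm 1 for the advection problem in PyTorch. It is our own minimal reconstruction, not the code released with the paper [33]; the layer widths are a guess chosen to land near the \(\sim 1200\) trainable parameters quoted above.

```python
import torch

torch.manual_seed(0)

# Fully connected network with 5 hidden layers: inputs (x, alpha, beta, u0) -> scalar.
widths = [4] + [16] * 5 + [1]
layers = []
for i in range(len(widths) - 1):
    layers.append(torch.nn.Linear(widths[i], widths[i + 1]))
    if i < len(widths) - 2:
        layers.append(torch.nn.Tanh())
net = torch.nn.Sequential(*layers)

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    # Uniform sampling of collocation points in Omega x P, see (5.4).
    x = torch.rand(256, 1, requires_grad=True)   # x in (0, 1)
    alpha = 0.5 + 0.5 * torch.rand(256, 1)       # alpha in [0.5, 1]
    beta = 0.5 + 0.5 * torch.rand(256, 1)        # beta in [0.5, 1]
    u0 = 0.1 + 0.1 * torch.rand(256, 1)          # u0 in [0.1, 0.2]

    # Boundary operator: u_tilde(0; mu) = u0 exactly, so no boundary loss is needed.
    u_tilde = u0 + x * net(torch.cat([x, alpha, beta, u0], dim=1))

    # Exact space derivative through automatic differentiation.
    du_dx = torch.autograd.grad(u_tilde.sum(), x, create_graph=True)[0]

    # Residual of the steady ODE: u' = alpha*u + beta*u^2.
    loss = ((du_dx - alpha * u_tilde - beta * u_tilde**2) ** 2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```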
In this section, we compare four strategies: the basis \(V_{h}\) (2.4), the basis \(V_{h}^{*}\) with multiplicative prior (2.6), the basis \(V_{h}^{+}\) with additive prior (2.5), and the basis \(V_{h}^{\text{ex},+}\) which uses the exact steady solution (5.3) as a prior. First, we study the stability condition in Section 5.1.1. Then, we tackle the approximation of a steady solution without perturbation in Section 5.1.2 and with perturbation in Section 5.1.3. Finally, the approximation of an unsteady solution is computed in Section 5.1.4.
#### 5.1.1 Study of the stability condition
The very first experiment we run aims at making sure that the new bases do not alter the stability condition of the DG scheme. To that end, we slowly increase \(C_{\text{CFL}}\) until the time step \(\Delta t\) is too large for the scheme to be stable. For this experiment, the initial condition is made of the steady solution
\[u_{\text{ini}}(x;\mu)=u_{\text{eq}}(x;\mu), \tag{5.5}\]
and the final time is \(T=0.5\). Table 2 contains the optimal values of \(C_{\text{CFL}}\) (larger values leading to instabilities) obtained with the four bases and for \(q\in\{0,1,2,3\}\). We observe that the new bases do not change the stability condition, except for \(V_{h}^{\text{ex},+}\) with \(q=1\), which is slightly more stable. This study will not be repeated for other experiments, since it would yield similar results. In practice, we take \(C_{\text{CFL}}=0.1\) to ensure stability.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \(q\) & basis \(V_{h}\) & basis \(V_{h}^{*}\) & basis \(V_{h}^{+}\) & basis \(V_{h}^{\text{ex},+}\) \\ \hline \(0\) & \(1.250\) & \(1.250\) & \(1.250\) & \(1.250\) \\ \(1\) & \(0.399\) & \(0.399\) & \(0.399\) & \(0.416\) \\ \(2\) & \(0.209\) & \(0.209\) & \(0.209\) & \(0.209\) \\ \(3\) & \(0.185\) & \(0.185\) & \(0.185\) & \(0.185\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Maximal values of \(C_{\text{CFL}}\) obtained for the four bases and for a number of basis elements \(q\in\{0,1,2,3\}\).
#### 5.1.2 Steady solution
We now study the approximation of a steady solution, with and without perturbation. The goal of this section is to check whether the prior indeed makes it possible to decrease the error compared to the usual modal basis. For this experiment, the initial condition remains (5.5), and the final time is \(T=0.1\).
As a first step, the values of the parameters \(\mu\) are set to the midpoints of the intervals making up the parameter space (5.4). The \(L^{2}\) errors between the exact and approximate solutions are collected in Table 3.
In this case, we expect both \(V_{h}^{*}\) and \(V_{h}^{+}\) to show similar behavior. Moreover, we expect the basis \(V_{h}^{\mathrm{ex},+}\) to provide an exactly well-balanced scheme, up to machine precision. To that end, only for \(V_{h}^{\mathrm{ex},+}\), we take \(n_{Q}=\max(q+2,5)\), to ensure that the quadrature of the exact prior is also exact, up to machine precision.
We observe that the bases with and without prior allow a convergence of the correct order, i.e. of the same order as the number of basis elements. Moreover, for a given size of the modal basis, we observe a consistent gain for all mesh resolutions; this gain decreases as the size of the basis increases. Bases \(V_{h}^{*}\) and \(V_{h}^{+}\) seem to have comparable performance, with \(V_{h}^{*}\) being somewhat better for large values of \(q\), and \(V_{h}^{+}\) taking the lead for small values of \(q\). Finally, we observe that the basis \(V_{h}^{\mathrm{ex},+}\) is indeed able to provide a solution that is exact up to machine precision, thus validating the exact well-balanced property of the scheme using this basis.
As a second step, to refine this study, we now consider \(10^{3}\) parameters, randomly sampled from the parameter space (5.4). For \(q\in\{0,1,2,3\}\) and \(K=10\) discretization cells, we compute the minimum, average and maximum gains obtained with both bases \(V_{h}^{*}\) and \(V_{h}^{+}\). These values are reported in Table 4. We observe, on average, a significant gain in all cases, with larger gains obtained for smaller values of \(q\). Furthermore, the minimum gain is always greater than one. Like in the previous experiment, we observe that, even though both bases display similar behavior and very good results, \(V_{h}^{+}\) behaves better than \(V_{h}^{*}\) for small values of \(q\), and vice versa. Consequently, and to limit the number of tables in the remainder of this section, we perform all subsequent experiments with the basis \(V_{h}^{+}\).
#### 5.1.3 Perturbed steady solution
We now test the scheme on a perturbed steady solution. For this experiment, the initial condition is similar to (5.5), but with a perturbation. Indeed, we take
\[u_{\mathrm{ini}}(x;\mu)=\big{(}1+\varepsilon\sin(2\pi x)\big{)}u_{\mathrm{eq} }(x;\mu),\]
where \(\varepsilon\) controls the strength of the perturbation. The final time is \(T=2\), and we study the impact of the perturbation by taking \(\varepsilon\in\{10^{-4},10^{-2},1\}\) and \(K=10\) discretization cells. The results are collected in Figure 1. We observe two different phases: first, while the perturbation is being dissipated, the errors with the two bases are similar. Then, we note that the introduction of the prior has made it possible for the approximate solution to converge towards a final solution that is closer to the exact, unperturbed steady solution.
#### 5.1.4 Unsteady solution
Next, we seek to confirm that our proposed basis does not deteriorate the approximation of unsteady solutions. To that end, we consider an unsteady solution of the homogeneous problem, i.e. a solution to (5.2) with \(s(u;\mu)=0\). We take the following initial condition:
\[u_{0}(x)=0.1\left(1+\exp\left(-100(x-0.5)^{2}\right)\right),\]
so that \(u(t,x)=u_{0}(x-t)\). The final time is set to \(T=1\), and periodic boundary conditions are prescribed.
We compute the approximate solution with the two bases, for several values of \(q\). The results are collected in Table 5. We note that the basis with prior does not affect the approximate solution for \(q\geq 1\), while the results are slightly worse with the prior for \(q=0\). To improve the results here, one could introduce a space-time basis in a space-time discontinuous Galerkin method; this will be the object of future work.
Table 3: Advection equation: errors, orders of accuracy, and gain obtained when approximating a steady solution for bases without prior (basis \(V_{h}\)), with a PINN prior (bases \(V_{h}^{*}\) and \(V_{h}^{+}\)), and with an exact prior (basis \(V_{h}^{\text{ex},+}\)).
### Shallow water equations
After studying a scalar linear advection equation in Section 5.1, we now turn to a nonlinear system of conservation laws. Namely, we tackle the shallow water equations
\[\begin{cases}\partial_{t}h+\partial_{x}Q=0,\\ \partial_{t}Q+\partial_{x}\left(\dfrac{Q^{2}}{h}+\dfrac{1}{2}gh^{2}\right)=-gh\,\partial_{x}Z(x;\alpha,\beta),\end{cases} \tag{5.6}\]
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{gains in basis \(V_{h}^{*}\)} & \multicolumn{3}{c}{gains in basis \(V_{h}^{+}\)} \\ \cline{2-4} \cline{5-7} \(q\) & minimum & average & maximum & minimum & average & maximum \\ \hline
0 & 63.46 & 735.08 & 4571.89 & 63.46 & 735.08 & 4571.89 \\
1 & 32.22 & 149.38 & 450.74 & 26.01 & 190.08 & 830.20 \\
2 & 6.20 & 54.16 & 118.45 & 5.92 & 45.47 & 313.07 \\
3 & 1.55 & 19.54 & 108.10 & 1.56 & 13.69 & 184.17 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Advection equation: statistics of the gains obtained for the approximation of a steady solution in bases \(V_{h}^{*}\) and \(V_{h}^{+}\) with respect to basis \(V_{h}\).
Figure 1: Advection equation: errors, with respect to time, for the approximation of a perturbed steady solution for bases with and without prior.
where \(h>0\) is the water height, \(Q\) the water discharge, \(g=9.81\) the gravity constant, and where the parameterized topography function is
\[Z(x;\alpha,\beta)=\beta\omega\left(\alpha\left(x-\frac{1}{2}\right)\right). \tag{5.7}\]
In (5.7), the function \(\omega\in\{\omega_{g},\omega_{c}\}\) is either a Gaussian bump function
\[\omega_{g}(x)=\frac{1}{4}e^{-50x^{2}} \tag{5.8}\]
or a compactly supported bump function, with parameter \(\mathfrak{G}=0.15\):
\[\omega_{c}(x)=\begin{cases}\exp\left(1-\frac{1}{1-\left(\frac{x}{\mathfrak{G }}\right)^{2}}\right)&\text{ if }|x|<\mathfrak{G},\\ 0&\text{ otherwise.}\end{cases} \tag{5.9}\]
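In code, (5.7)-(5.9) translate directly; the masked evaluation in the sketch below (our own, with \(s\) standing for the support parameter) avoids evaluating the singular exponent outside the support of \(\omega_{c}\):

```python
import numpy as np

def omega_g(x):
    """Gaussian bump (5.8)."""
    return 0.25 * np.exp(-50.0 * np.asarray(x, dtype=float) ** 2)

def omega_c(x, s=0.15):
    """Compactly supported bump (5.9); s is the support parameter."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    inside = np.abs(x) < s
    out[inside] = np.exp(1.0 - 1.0 / (1.0 - (x[inside] / s) ** 2))
    return out

def Z(x, alpha, beta, omega=omega_g):
    """Parameterized topography (5.7)."""
    return beta * omega(alpha * (np.asarray(x, dtype=float) - 0.5))
```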
Unless otherwise mentioned, the final physical time is \(T=0.05\), and the space domain is \(\Omega=(0,1)\). For each experiment, Dirichlet boundary conditions corresponding to the steady solution are prescribed.
The steady solutions are given by cancelling the time derivatives in (5.6), and we get the following characterization:
\[Q_{\text{eq}}=\text{constant}=:Q_{0}\qquad\text{and}\qquad\left(1-\frac{Q_{0} ^{2}}{gh_{\text{eq}}(x;\mu)^{3}}\right)\partial_{x}h_{\text{eq}}(x;\mu)+ \partial_{x}Z(x;\alpha,\beta)=0. \tag{5.10}\]
To solve the nonlinear ODE on \(h\), we impose \(h=h_{0}\) at some point in space. Without loss of generality, we restrict the study to the case \(Q_{0}>0\). This leads us to a family of steady solutions with four parameters, and thus a parameter vector \(\mu\) made of four elements:
\[\mu=\begin{pmatrix}\alpha\\ \beta\\ h_{0}\\ Q_{0}\end{pmatrix}\in\mathbb{P}\subset\left(\mathbb{R}_{+}^{*}\right)^{4}.\]
Table 5: Advection equation: errors, orders of accuracy, and gain obtained when approximating an unsteady solution for bases with and without prior.
Hence, we compute \(\Delta t\) in (5.1) by taking \(\lambda=\frac{Q_{0}}{h_{0}}+\sqrt{gh_{0}}\).
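Smooth steady solutions of (5.10) conserve the quantity \(Q_{0}^{2}/(2h^{2})+g\big(h+Z(x)\big)\) (this invariant reappears in Section 5.2.3), so a pointwise reference solution can also be computed by root-finding. The following sketch (our own illustration, using SciPy, for the subcritical branch only; the parameter values are taken in the subcritical range (5.12) given below):

```python
import numpy as np
from scipy.optimize import brentq

g = 9.81

def steady_height(x, Z, h0, Q0, x0=0.0):
    """Subcritical steady water height at the points x, obtained from the invariant
    Q0^2 / (2 h^2) + g (h + Z(x)) = const, normalized by h(x0) = h0."""
    E0 = Q0**2 / (2 * h0**2) + g * (h0 + Z(x0))
    hc = (Q0**2 / g) ** (1.0 / 3.0)  # critical height, where Fr = 1
    f = lambda h, xi: Q0**2 / (2 * h**2) + g * (h + Z(xi)) - E0
    # The subcritical root satisfies h > hc; bracket it between hc and a large bound.
    return np.array([brentq(f, hc, 10 * h0, args=(xi,)) for xi in x])

Z = lambda x: 0.25 * np.exp(-50.0 * (x - 0.5) ** 2)  # Gaussian bump, alpha = beta = 1
h_eq = steady_height(np.linspace(0.0, 1.0, 11), Z, h0=2.0, Q0=3.0)
```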
Depending on the values of these parameters, the Froude number
\[\text{Fr}=\sqrt{\frac{Q^{2}}{gh^{3}}}\]
controls the so-called flow regime of the steady solution. The flow can be in one of three distinct regimes: subcritical (\(\text{Fr}<1\) everywhere), supercritical (\(\text{Fr}>1\) everywhere) or transcritical (\(\text{Fr}=1\) somewhere in the domain). Each regime has its own parameter space for \(h_{0}\) and \(Q_{0}\), described later, but in all cases we take, unless otherwise stated,
\[0.5\leq\alpha\leq 1.5\quad;\qquad 0.5\leq\beta\leq 1.5. \tag{5.11}\]
To approximate the steady water height within this parameter space, we use a fully-connected PINN with about \(4000\) trainable parameters. Its raw output \(h_{\theta}\) is modified through a boundary function \(\mathcal{B}\), defined for each regime, to obtain the prior \(\widetilde{h}_{\theta}\). The loss function is once again made only of the steady ODE residual, and we minimize
\[\mathcal{J}(\theta)=\left\|\left(1-\frac{Q_{0}^{2}}{g\widetilde{h}_{\theta}(x; \mu)^{3}}\right)\partial_{x}\widetilde{h}_{\theta}(x;\mu)+\partial_{x}Z(x; \alpha,\beta)\right\|_{2}^{2}.\]
Training takes about \(5\) minutes on a dual NVIDIA K80 GPU, and lasts until the loss is about \(10^{-4}\), depending on the regime.
#### 5.2.1 Subcritical flow
We start with a subcritical flow, where the parameter space for \(h_{0}\) and \(Q_{0}\) is:
\[2\leq h_{0}\leq 3\quad;\qquad 3\leq Q_{0}\leq 4. \tag{5.12}\]
To strongly enforce the boundary conditions, the prior \(\widetilde{h}_{\theta}\) is obtained as follows from the result \(h_{\theta}\) of the PINN:
\[\widetilde{h}_{\theta}(x;\mu)=\mathcal{B}(h_{\theta},x;\mu)=h_{0}+Z(x;\alpha, \beta)\,h_{\theta}(x;\mu). \tag{5.13}\]
To test the preservation of the steady solution, we set the initial water height to \(h_{\text{eq}}\).
A goal of this section is to better understand the differences between the two topography functions: the Gaussian bump (5.8) and the compactly supported bump (5.9). It is well-known that compactly supported functions exhibit large derivatives close to the boundary of their support, see for instance [44]. As a consequence, to get a good approximation of these derivatives when computing integrals involving the PINN, we take \(n_{Q}=q+6\) when \(\omega=\omega_{c}\). Note that this choice is also motivated by the results in [44], where the authors had to take larger polynomial degrees to observe the correct orders of convergence. The Gaussian topography also suffers from the same drawback, but to a lesser extent, and we take \(n_{Q}=q+3\) when \(\omega=\omega_{g}\) to integrate the result of the PINN.
For the compactly supported topography, the results are reported in Table 6; for the Gaussian topography, the results are reported in Table 7.
As a conclusion of this first test case, we observe that using a Gaussian topography compared to a compactly supported topography leads to a more stable order of accuracy, but with lower gains, except for small values of \(K\) where the compactly supported topography is not well-approximated. The most important point is that the Gaussian topography requires a lower order quadrature to converge. These results are in line with [44]. As a consequence, we use the Gaussian topography in the remainder of this section.
Like in the previous section, we now consider \(10^{3}\) parameters in \(\mathbb{P}\), and we compute the minimum, average and maximum gains for \(q\in\{0,1,2\}\). To that end, we take \(K=20\) discretization cells. The results are reported in Table 8, where we observe that the average gains are substantial, whatever the value of \(q\), and that the minimum gain is always greater than \(1\).
#### 5.2.2 Supercritical flow
We now turn to a supercritical flow. In this case, the remaining parameters \(h_{0}\) and \(Q_{0}\) are taken such that:
\[0.5\leq h_{0}\leq 0.75\quad;\qquad 4\leq Q_{0}\leq 5. \tag{5.14}\]
The boundary conditions are enforced using the same expression (5.13) as in the subcritical case. We check the approximate preservation of the steady solution by taking the initial water height equal to the steady solution.
The results are displayed in Table 9, and we note that the gains are in line with the subcritical case, from Table 7.
Furthermore, in Table 10, we display some statistics on the gains obtained by using the prior, in the same configuration as for the subcritical regime. We draw similar conclusions to the subcritical case.
#### 5.2.3 Transcritical flow
The last steady experiment we study is the preservation of a transcritical steady solution. Such steady solutions are significantly harder to capture. Indeed, when \(\mathrm{Fr}=1\), the steady ODE (5.10) yields \(\partial_{x}Z=0\), and therefore the derivative of the steady water height is not defined using only (5.10). This is a well-known issue when approximating transcritical steady solutions, see for instance [14, 24].
Table 6: Shallow water system, compactly supported topography (5.9): errors, orders of accuracy, and gain obtained when approximating a subcritical steady solution for bases with and without prior.
However, in our case, we obtain a suitable prior by constraining the water height so that the Froude number is equal to \(1\) at the top of the topography bump, i.e. at \(x=1/2\), where \(\partial_{x}Z=0\). When \(\mathrm{Fr}=1\), the water height becomes equal to \(h_{c}(\mu)=Q_{0}^{2/3}g^{-1/3}\), and we fix this value for the prior evaluated at \(x=1/2\). This eliminates \(h_{0}\) as a degree of freedom, and we choose \(2\leq Q_{0}\leq 3\). Moreover, we take \(0.75\leq\alpha\leq 1.25\) for this regime.
Then, to ensure a correct treatment of the boundary conditions and to obtain the correct value of \(\widetilde{h}_{\theta}\) at the
Table 7: Shallow water system, Gaussian topography (5.8): errors, orders of accuracy, and gain obtained when approximating a subcritical steady solution for bases with and without prior.
Table 8: Shallow water system, Gaussian topography (5.8): statistics of the gains obtained for the approximation of a subcritical steady solution in basis \(V_{h}^{+}\) with respect to basis \(V_{h}\).
top of the bump, we take
\[\widetilde{h}_{\theta}(x;\mu)=h_{R}(\mu)+\left(1-\tanh\left(15\left(x-\frac{1}{2} \right)\right)\right)\frac{h_{L}(\mu)-h_{R}(\mu)}{2}\ h_{\theta}(x;\mu).\]
In this expression, \(h_{L}(\mu)\) and \(h_{R}(\mu)\) are the left and right boundary conditions. Since we consider a smooth
Table 10: Shallow water system, Gaussian topography (5.8): statistics of the gains obtained for the approximation of a supercritical steady solution in basis \(V_{h}^{+}\) with respect to basis \(V_{h}\).
Table 9: Shallow water system, Gaussian topography (5.8): errors, orders of accuracy, and gain obtained when approximating a supercritical steady solution for bases with and without prior.
steady solution, relations (5.10) lead to
\[E(h,x;\mu)\coloneqq\frac{Q_{0}^{2}}{2h^{2}}+g\big{(}h+Z(x;\alpha,\beta)\big{)}= \text{constant}.\]
Since \(Z(0.5;\alpha,\beta)=\beta/4\), we obtain that \(h_{L}(\mu)>h_{R}(\mu)\) are the two solutions of the following equation, with unknown \(h\):
\[\frac{Q_{0}^{2}}{2h^{2}}+gh=E\bigg{(}h_{c}(\mu),\frac{1}{2};\mu\bigg{)}.\]
Table 11 contains the errors, orders of convergence and gains. We observe that the gains are lower than in the other two cases, but that was to be expected since the transcritical solution comes from a singular ODE, and it is harder for the PINN to approximate its solutions.
Finally, we report in Table 12 the minimum, average and maximum gains obtained by using the basis \(V_{h}^{+}\) instead of the basis \(V_{h}\). We draw the same conclusions as in the other two regimes, even though the gains are, on average, lower. This was to be expected, since the transcritical regime is harder to capture than the subcritical and supercritical ones, and therefore the prior is of lower quality. Nevertheless, the gains remain substantial for all values of \(q\).
Table 11: Shallow water system, Gaussian topography (5.8): errors, orders of accuracy, and gain obtained when approximating a transcritical steady solution for bases with and without prior.
#### 5.2.4 Perturbation of a steady flow
This last experiment related to the shallow water equations concerns a perturbed steady flow. We only perform this study on the subcritical flow, but the other regimes behave the same. We take \(\varepsilon\in\{5\cdot 10^{-k}\}_{k\in\{1,2,3\}}\) and \(20\) space cells, and set the initial water height to \(h(0,x;\mu)=\big{(}1+\varepsilon\sin(2\pi x)\big{)}h_{\text{eq}}(x;\mu)\).
The errors on \(h\) with respect to time are displayed in Figure 2, until the final physical time \(T=1\). Like in Section 5.1, with the prior, the error decreases to a much lower level than without the prior. This good behavior was expected since the prior makes it possible for the enhanced DG scheme to achieve higher accuracy on steady solutions.
### Euler-Poisson equations in spherical geometry
We now consider the Euler-Poisson equations in spherical geometry. This system is used in astrophysics, for instance, where it serves to model stars held together by gravitation, see e.g. [15, 17, 32]. The system is given by
\[\begin{cases}\partial_{t}\rho+\partial_{r}Q=-\dfrac{2}{r}Q,\\ \partial_{t}Q+\partial_{r}\left(\dfrac{Q^{2}}{\rho}+p\right)=- \dfrac{2}{r}\dfrac{Q^{2}}{\rho}-\rho\partial_{r}\phi,\\ \partial_{t}E+\partial_{r}\left(\dfrac{Q}{\rho}(E+p)\right)=- \dfrac{2}{r}\dfrac{Q}{\rho}(E+p)-Q\partial_{r}\phi,\\ \dfrac{1}{r^{2}}\partial_{rr}(r^{2}\phi)=4\pi G\rho,\end{cases} \tag{5.15}\]
where \(G\) is a gravity constant, fixed to \(G=1\) in our applications, and where we take \(p\) as a function of \(\rho\), \(Q\) and \(E\) through a pressure law to be specified. In (5.15), \(\rho\) is the density, \(Q\) is the momentum, \(E\) is the energy, and \(\phi\) is the gravitational potential. Unless otherwise mentioned, the boundary conditions are of Dirichlet type, with the value of the steady solution prescribed at the boundaries.
The space domain is \(r\in(0,1)\). The apparent singularity at \(r=0\) is resolved by imposing suitable boundary conditions, namely \(\rho(0)=1\) and \(\partial_{r}\rho(0)\) given according to the pressure law. Indeed, the assumption that there is no gravity at \(r=0\) leads to \(\partial_{r}p(0)=0\), which makes it possible to determine \(\partial_{r}\rho(0)\). For more information on the boundary conditions and on the DG discretization of (5.15), the reader is referred to [57].
The steady solutions at rest are given by
\[\begin{cases}Q=0,\\ \partial_{r}p+\rho\partial_{r}\phi=0,\\ \partial_{rr}(r^{2}\phi)=4\pi r^{2}G\rho.\end{cases}\]
For the steady solutions, we shall distinguish two cases for the pressure law: a polytropic pressure law, and a temperature-dependent pressure law.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{minimum gain} & \multicolumn{2}{c}{average gain} & \multicolumn{2}{c}{maximum gain} \\ \cline{2-6} \cline{5-6} \(q\) & \(h\) & \(Q\) & \(h\) & \(Q\) & \(h\) & \(Q\) \\ \cline{2-6} \(0\) & \(35.82\) & \(26.19\) & \(254.53\) & \(177.02\) & \(928.03\) & \(668.73\) \\ \(1\) & \(5.51\) & \(4.73\) & \(30.83\) & \(38.69\) & \(134.83\) & \(142.11\) \\ \(2\) & \(4.55\) & \(6.16\) & \(16.49\) & \(24.29\) & \(96.95\) & \(109.94\) \\ \hline \hline \end{tabular}
\end{table}
Table 12: Shallow water system, Gaussian topography (5.8): statistics of the gains obtained for the approximation of a transcritical steady solution in basis \(V_{h}^{+}\) with respect to basis \(V_{h}\).
#### 5.3.1 Polytropic pressure law
In this case, we introduce two parameters \(\kappa\) and \(\gamma\), so the parameter vector \(\mu\) is composed of two elements:
\[\mu=\begin{pmatrix}\kappa\\ \gamma\end{pmatrix}\in\mathbb{P}\subset\mathbb{R}^{2},\quad\kappa\in\mathbb{R} _{+},\quad\gamma\in(1,+\infty).\]
Equipped with this parameter vector, we define the polytropic pressure law
\[p(\rho;\mu)=\kappa\rho^{\gamma},\]
and the steady solutions are then given as solutions to the following nonlinear second-order ordinary differential equation:
\[\frac{d}{dr}\left(r^{2}\kappa\gamma\rho^{\gamma-2}\frac{d\rho}{dr}\right)=4 \pi r^{2}G\rho.\]
Figure 2: Shallow water equations, compactly supported topography: errors, with respect to time, for the approximation of a perturbed subcritical steady solution for bases with and without prior.
In general, this ODE does not have analytic solutions. However, it turns out that, for specific values of \(\gamma\), there exists an analytic solution to this ODE. For instance, with \(\gamma=2\), we obtain
\[\rho(r)=\frac{\sin(\alpha r)}{\alpha r},\qquad\text{with}\quad\alpha=\sqrt{\frac {2\pi G}{\kappa}}.\]
Regarding the boundary conditions, the condition \(\partial_{r}p(0)=0\) leads to \(\partial_{r}\rho(0)=0\) for this pressure law. In this case, we take \(\lambda=1+\sqrt{\gamma}\) in (5.1) to compute the time step \(\Delta t\).
To obtain a prior \(\rho_{\theta}\), as usual, we train a PINN with about 1400 trainable parameters on 7 fully connected layers. The boundary conditions are taken into account by setting
\[\widetilde{\rho}_{\theta}(r;\mu)=1+r^{2}\rho_{\theta}(r;\mu),\]
where \(\rho_{\theta}\) is the result of the PINN. The PINN is trained on the parameter space
\[\mathbb{P}=[2,5]\times[1.5,3.5], \tag{5.16}\]
with only the physics-based loss function corresponding to the steady solution:
\[\mathcal{L}_{\theta}=\left\|\frac{d}{dr}\left(r^{2}\kappa\gamma\widetilde{\rho}_{\theta}^{\gamma-2}\frac{d\widetilde{\rho}_{\theta}}{dr}\right)-4\pi r^{2}G\widetilde{\rho}_{\theta}\right\|.\]
In addition, the prior for \(Q\) is set to \(Q_{\theta}=1\) since we wish to approximate a constant momentum. Finally, the prior for \(E\) is set to \(E_{\theta}=p(\widetilde{\rho}_{\theta};\mu)/(\gamma-1)\). Training takes about 5 minutes on a dual NVIDIA K80 GPU, until the loss is equal to about \(5\cdot 10^{-5}\). In the DG discretization, the degree of the quadrature formula is the usual \(n_{Q}=q+2\): there is no need to further increase the order of the quadrature rule in this case.
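For concreteness, the following sketch shows how the boundary-embedded prior \(\widetilde{\rho}_{\theta}(r;\mu)=1+r^{2}\rho_{\theta}(r;\mu)\) and the physics-based loss above could be assembled in PyTorch. The network size, activation and optimizer settings are illustrative assumptions, not the exact configuration used here.

```python
import torch
import torch.nn as nn

class PolytropicPINN(nn.Module):
    """Small MLP taking (r, kappa, gamma); with hidden=16 and 7 layers this has
    roughly 1400 trainable parameters, consistent with the text."""
    def __init__(self, hidden=16, n_layers=7):
        super().__init__()
        blocks, dim = [], 3
        for _ in range(n_layers - 1):
            blocks += [nn.Linear(dim, hidden), nn.Tanh()]
            dim = hidden
        blocks.append(nn.Linear(dim, 1))
        self.net = nn.Sequential(*blocks)

    def forward(self, r, kappa, gamma):
        rho = self.net(torch.cat([r, kappa, gamma], dim=-1))
        return 1.0 + r**2 * rho      # embeds rho(0) = 1 and d_r rho(0) = 0

def physics_loss(model, r, kappa, gamma, G=1.0):
    """Mean-square residual of the steady ODE evaluated at collocation points."""
    r = r.clone().requires_grad_(True)
    rho = model(r, kappa, gamma)
    drho = torch.autograd.grad(rho.sum(), r, create_graph=True)[0]
    flux = r**2 * kappa * gamma * rho ** (gamma - 2) * drho
    dflux = torch.autograd.grad(flux.sum(), r, create_graph=True)[0]
    return (dflux - 4 * torch.pi * r**2 * G * rho).pow(2).mean()

# One illustrative optimization step on random collocation and parameter samples.
model = PolytropicPINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
r = torch.rand(256, 1)
kappa = 2.0 + 3.0 * torch.rand(256, 1)     # kappa in [2, 5]
gamma = 1.5 + 2.0 * torch.rand(256, 1)     # gamma in [1.5, 3.5]
opt.zero_grad()
loss = physics_loss(model, r, kappa, gamma)
loss.backward()
opt.step()
```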
We first collect, in Table 13, the results of the approximation in both bases (with and without prior), for \(\kappa=2\) and \(\gamma=2.5\), and until the final time \(T=0.01\). As usual, the observed gain is larger for a smaller number of basis elements. We observe a slight superconvergence on the momentum \(Q\) when using the prior with \(q=0\). For these values of \(\kappa\) and \(\gamma\), gains on the density are not very large for \(q=2\), but this is compensated by larger gains on the energy.
To extend this study, we compute the statistics over the whole parameter space (5.16) by uniformly sampling \(10^{3}\) values and taking 10 cells in the mesh. The results are reported in Table 14. Just like before, the average gain is substantial, while the minimum rarely falls below 1. Moreover, note that the gains recorded in Table 13 correspond to a rather bad set of parameters compared to the average.
#### 5.3.2 Temperature-dependent pressure law
In this case, we take a given smooth temperature function \(T(r,\mu)\) parameterized by \(\mu\), where the parameter vector \(\mu\) is composed of two elements:
\[\mu=\begin{pmatrix}\kappa\\ \alpha\end{pmatrix}\in\mathbb{P}\subset\mathbb{R}^{2},\quad\kappa\in\mathbb{R} _{+},\quad\alpha\in\mathbb{R}_{+}.\]
This allows us to define the parameterized temperature function \(T(r;\alpha)=e^{-\alpha r}\), and so we get the following temperature-based pressure law:
\[p(\rho;\mu)=\kappa\rho T.\]
For this pressure law, the steady solutions are given by the following nonlinear second-order ODE:
\[\frac{d}{dr}\left(r^{2}\kappa\frac{T}{\rho}\frac{d\rho}{dr}\right)+\frac{d}{ dr}\left(r^{2}\kappa\frac{dT}{dr}\right)=4\pi r^{2}G\rho,\]
and the boundary condition \(\partial_{r}p(0)=0\) leads to \(\partial_{r}\rho(0)=\alpha\). For this pressure law, we also take \(\lambda=1+\sqrt{\gamma}\) in (5.1) to compute \(\Delta t\).
The prior \(\rho_{\theta}\) is obtained _via_ a PINN with the same characteristics as in the polytropic case, and whose result is still denoted by \(\rho_{\theta}\). To impose the boundary conditions, this time, we set
\[\widetilde{\rho}_{\theta}(r;\mu)=1+\alpha r+r^{2}\rho_{\theta}(r;\mu).\]
The parameter space is
\[\mathbb{P}=[2,5]\times[0.5,1.5], \tag{5.17}\]
\begin{table}
\end{table}
Table 13: Euler-Poisson system, polytropic pressure law: errors, orders of accuracy, and gain obtained when approximating a steady solution for bases with and without prior.
\begin{table}
\end{table}
Table 14: Euler-Poisson system, polytropic pressure law: statistics of the gains obtained for the approximation of a steady solution in basis \(V_{h}^{+}\) with respect to basis \(V_{h}\).
and the PINN is trained using only the physics-based loss function
\[\mathcal{L}_{\theta}=\left\|\frac{d}{dr}\left(r^{2}\kappa\frac{T}{\widetilde{\rho}_{\theta}}\frac{d\widetilde{\rho}_{\theta}}{dr}\right)+\frac{d}{dr}\left(r^{2}\kappa\frac{dT}{dr}\right)-4\pi r^{2}G\widetilde{\rho}_{\theta}\right\|.\]
Training takes about \(5\) minutes on a dual NVIDIA K80 GPU, until the loss is equal to about \(5\cdot 10^{-4}\). The priors \(Q_{\theta}\) and \(E_{\theta}\) are then defined in the same way as in the polytropic case. In this case, we also take \(n_{Q}=q+2\).
As is becoming usual, we first report, in Table 15, the results of the approximation in both bases (with and without prior). The final time is set to \(T=0.01\), and we take \(\kappa=3.5\) and \(\alpha=0.5\). As usual, using the prior provides significant gains, especially for low values of \(q\). Compared to the polytropic case, gains are consistently better for the large values of \(q\).
To understand gains on the whole parameter space (5.17), we uniformly sample \(10^{3}\) values of \(\kappa\) and \(\alpha\) and take a mesh made of \(10\) cells. We compute the minimum, average and maximum gains. These values are reported in Table 16. For this pressure law, the minimum gain is always larger than \(1\), and we obtain consistently large average gains, even for \(q=2\).
#### 5.3.3 Spherical blast wave
The goal of this last test case is to show that our prior does not negatively affect the capability of the scheme to capture discontinuous solutions. Let us emphasize that numerical viscosity is not an object of this study, and
\begin{table}
\end{table}
Table 15: Euler-Poisson system, temperature-based pressure law: errors, orders of accuracy, and gain obtained when approximating a steady solution for bases with and without prior.
therefore we have not used any regularization procedure. Consequently, the results will show some oscillations.
This experiment is nothing but a Riemann problem in spherical geometry, inspired by the experiments in [50]. As such, the initial condition is piecewise constant on the space domain \(r\in(0,0.4)\), as follows:
\[\rho(0,r)=\begin{cases}2&\text{if }r<0.2,\\ 1&\text{otherwise};\end{cases}\qquad Q(0,r)=0;\qquad p(0,r)=\begin{cases}2&\text{if }r<0.2,\\ 1&\text{otherwise}.\end{cases}\]
For this experiment, the pressure law is the standard ideal gas law
\[p=(\gamma-1)\left(E-\frac{1}{2}\frac{Q^{2}}{\rho}\right),\]
and we take the adiabatic exponent \(\gamma\) equal to \(1.4\). The experiment is run until the final time \(T=0.1\), and with Neumann boundary conditions. We take \(25\) discretization cells, and we use a basis made of \(3\) elements. Moreover, the source term is deactivated: we set \(\phi=0\), and we merely consider the Euler equations in spherical geometry, without gravity effects.
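As a small worked example, the piecewise initial data and the conversion of the initial pressure into the conserved energy through the ideal gas law can be written as follows (a minimal sketch with variable names of our choosing).

```python
import numpy as np

gamma = 1.4  # adiabatic exponent of the ideal gas law

def blast_wave_initial_condition(r):
    """Riemann-type initial data on r in (0, 0.4)."""
    rho = np.where(r < 0.2, 2.0, 1.0)
    Q = np.zeros_like(r)
    p = np.where(r < 0.2, 2.0, 1.0)
    # invert p = (gamma - 1) * (E - Q^2 / (2 rho)) for the conserved energy
    E = p / (gamma - 1.0) + 0.5 * Q**2 / rho
    return rho, Q, E

r = np.linspace(1e-3, 0.4, 25)   # 25 discretization points, as in the experiment
rho0, Q0, E0 = blast_wave_initial_condition(r)
```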
The results are depicted in Figure 3, where we compare the two bases (with and without prior, blue and orange lines respectively) to a reference solution (green line). We observe very good agreement with the reference solution, even though oscillations are present, as expected. We also note that the graphs for the solutions with and without prior are superimposed, which means that the quality of the approximation of this discontinuous solution has not been degraded by the introduction of the prior in the basis.
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline \hline & \multicolumn{4}{c}{minimum gain} & \multicolumn{4}{c}{average gain} & \multicolumn{4}{c}{maximum gain} \\ \cline{2-9} \cline{5-10} \(q\) & \(\rho\) & \(Q\) & \(E\) & \(\rho\) & \(Q\) & \(E\) & \(\rho\) & \(Q\) & \(E\) \\ \hline
0 & 13.30 & 1.05 & 16.24 & 151.96 & 1.88 & 150.63 & 600.13 & 2.91 & 473.83 \\
1 & 6.30 & 7.53 & 5.40 & 72.63 & 77.20 & 51.09 & 321.20 & 302.58 & 257.19 \\
2 & 3.35 & 3.45 & 2.20 & 18.96 & 22.58 & 13.56 & 55.47 & 63.45 & 47.83 \\ \hline \hline \end{tabular}
\end{table}
Table 16: Euler-Poisson system, temperature-based pressure law: statistics of the gains obtained for the approximation of a steady solution in basis \(V_{h}^{+}\) with respect to basis \(V_{h}\).
Figure 3: Euler equations in spherical geometry, spherical blast wave: approximation of the discontinuous solution for bases with and without prior, compared to a reference solution.
### Shallow water equations in two space dimensions
The last system considered in this series of experiments is the two-dimensional shallow water system. It is given by
\[\begin{cases}\partial_{t}h+\mathbf{\nabla}\cdot\mathbf{Q}=0,\\ \partial_{t}\mathbf{Q}+\mathbf{\nabla}\cdot\left(\frac{\mathbf{Q}\otimes\mathbf{Q}}{h}+\frac{1} {2}gh^{2}\text{Id}\right)=-gh\mathbf{\nabla}Z(\mathbf{x};\mu),\end{cases}\]
where \(g\) is the gravity constant, \(\text{Id}\) is the \(2\times 2\) identity matrix, \(h\) is the water height, \(\mathbf{Q}\) is the water discharge, and \(Z\) is the topography. For this system, \(\Delta t\) is computed by setting \(\lambda=2+\sqrt{\gamma}\) in (5.1).
The space variable \(\mathbf{x}=(x_{1},x_{2})\) belongs to the space domain \(\Omega=[-3,3]^{2}\), and we introduce three parameters:
\[\mu=\begin{pmatrix}\alpha\\ \Gamma\\ r_{0}\end{pmatrix}\in\mathbb{P}\subset\mathbb{R}^{3},\quad\alpha\in\mathbb{R} ^{*}_{+},\quad\Gamma\in\mathbb{R}^{*}_{+},\quad r_{0}\in\mathbb{R}^{*}_{+}.\]
This enables us to define the topography as the following Gaussian bump function, with \(r=\|\mathbf{x}\|\):
\[Z(\mathbf{x};\mu)=\Gamma\exp\left(\alpha(r_{0}^{2}-r^{2})\right),\]
see for instance [43] for a similar test case. On this topography, we consider the following steady solution:
\[\begin{cases}h_{\text{eq}}(\mathbf{x};\mu)=2-Z(\mathbf{x};\mu)-\frac{\Gamma}{8\alpha g }Z(\mathbf{x};\mu)^{4},\\ \mathbf{Q}_{\text{eq}}(\mathbf{x};\mu)=-\mathbf{x}^{\perp}\,h_{\text{eq}}(\mathbf{x};\mu)\,u_ {\text{eq}}(\mathbf{x};\mu),\end{cases} \tag{5.18}\]
with \(\mathbf{x}^{\perp}=(-x_{2},x_{1})\) and where
\[u_{\text{eq}}(\mathbf{x};\mu)=\alpha Z(\mathbf{x};\mu)^{2}.\]
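For reference, the following sketch evaluates the topography and the steady solution (5.18) on a Cartesian grid; the value of the gravity constant \(g\) and the sample parameter values (the center of the parameter cube) are assumptions for illustration.

```python
import numpy as np

g = 9.81  # assumed value of the gravity constant

def topography(x1, x2, alpha, Gamma, r0):
    return Gamma * np.exp(alpha * (r0**2 - (x1**2 + x2**2)))

def steady_solution(x1, x2, alpha, Gamma, r0):
    """Water height and discharge of the steady solution (5.18)."""
    Z = topography(x1, x2, alpha, Gamma, r0)
    h = 2.0 - Z - Gamma / (8.0 * alpha * g) * Z**4
    u = alpha * Z**2
    # Q = -x_perp * h * u with x_perp = (-x2, x1)
    return h, (x2 * h * u, -x1 * h * u)

# Evaluation at the center of the parameter cube P on a 25 x 25 grid.
x1, x2 = np.meshgrid(np.linspace(-3, 3, 25), np.linspace(-3, 3, 25))
h_eq, (Q1_eq, Q2_eq) = steady_solution(x1, x2, alpha=0.5, Gamma=0.25, r0=0.875)
```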
To obtain a relevant prior, we approximate \(h_{\text{eq}}\) and \(u_{\text{eq}}\), using a different PINN for each of the two functions. The results of the PINN are denoted by \(h_{\theta}\) and \(u_{\theta}\), and we define the priors \(\widetilde{h}_{\theta}\) and \(\widetilde{u}_{\theta}\) as follows, to include the boundary conditions:
\[\widetilde{h}_{\theta}(\mathbf{x};\mu)=2-Z(\mathbf{x};\mu)\,h_{\theta}(\mathbf{x};\mu)^{ 2}\quad\text{and}\quad\widetilde{u}_{\theta}(\mathbf{x};\mu)=Z(\mathbf{x};\mu)\,u_{ \theta}(\mathbf{x};\mu).\]
Another possibility would be to strongly impose the divergence-free constraint, by learning a potential and taking the prior \(\mathbf{Q}_{\theta}(\mathbf{x};\mu)\) as the curl of this potential. However, we elected not to do so, since the current strategy was able to train faster. The parameter space is
\[\mathbb{P}=[0.25,0.75]\times[0.1,0.4]\times[0.5,1.25].\]
The loss function is made up, in equal parts, of the now usual PDE loss and of a data-fitting term. Data is regenerated at each epoch, which helps to avoid falling into a local minimum corresponding to a lake at rest, where \(\widetilde{h}_{\theta}+Z=\text{constant}\) and \(u_{\theta}=0\). Each PINN has about 2500 parameters, and training takes about 10 minutes on an NVIDIA V100 GPU, until the loss function reaches about \(4\times 10^{-7}\). This prior is integrated with a quadrature formula of degree \(n_{Q}=q+3\): we needed to increase the usual quadrature degree by 1 to obtain the best possible approximation.
#### 5.4.1 Approximation of a steady solution
We take the steady solution (5.18) as the initial condition to test the approximate well-balanced property. The experiments are run until the final physical time \(T=0.01\). We prescribe Dirichlet boundary conditions consisting of the value of the steady solution.
First, we take the parameters as the center of the parameter cube \(\mathbb{P}\). The results are collected in Table 17, and we note that, as expected, the presence of the prior makes it possible to reach much lower errors, especially for the water height \(h\).
In addition, we provide some statistics over the whole parameter space \(\mathbb{P}\), computed on a mesh with \(25\times 25\) cells, in Table 18. We note that, on average, the gains are substantial. However, note that the minimum gains may be smaller than 1, which denotes a loss of precision due to the prior. This happens in around 0.75% of cases, so we obtain an improvement in an overwhelming majority of cases.
#### 5.4.2 Perturbed steady solution
We now compare the new basis to the classical one when the initial condition is a perturbed steady solution. To that end, the initial water height is set to
\[h(0,\mathbf{x};\mu)=h_{\text{eq}}(\mathbf{x};\mu)-0.02\exp\left(-2((x_{1}+2)^{2}+(x_{2}+ 2)^{2})\right),\]
thus creating a bump-shaped perturbation whose center is located at \((-2,-2)\). For simplicity, we still use the value of the steady solution as Dirichlet boundary conditions. Moreover, we set the parameters \(\mu\) to be the center of the cube \(\mathbb{P}\), and we take \(q=1\) with \(16^{2}\) discretization cells.
\begin{table}
\end{table}
Table 17: Shallow water equations in two space dimensions: errors, orders of accuracy, and gain obtained when approximating a steady solution for bases with and without prior.
\begin{table}
\end{table}
Table 18: Shallow water equations in two space dimensions: statistics of the gains obtained for the approximation of a steady solution in basis \(V_{h}^{+}\) with respect to basis \(V_{h}\).
The pointwise difference between \(h\) and \(h_{\text{eq}}\) is displayed in Figure 4. We observe that the prior-enriched basis \(V_{h}^{+}\) (right panels) is able to capture the perturbation much better than the classical basis \(V_{h}\) (left panels). Indeed, the underlying background steady solution has been smeared by basis \(V_{h}\), while it is preserved with much greater resolution by basis \(V_{h}^{+}\).
## 6 Conclusion
In this work, we proposed a Discontinuous Galerkin scheme whose basis has been enriched by neural networks to ensure an approximate well-balanced property for a generic PDE and a generic equilibrium. The offline phase of the algorithm consists in learning a family of equilibria using parametric PINNs. Then, during the online phase, the trained network is used to enrich the DG basis and to approximate the solution to the PDE.
The results show significant gains in accuracy compared with the conventional DG method, particularly for low-dimensional approximation spaces. To obtain the same accuracy, we can significantly reduce the number of cells and use larger time steps. The method has been validated on a wide range of PDEs and equilibria, showing that it is a general-purpose approach. Furthermore, it makes it possible to handle complicated equilibria, on complex geometries, which are rarely treated by conventional WB schemes, especially in two space dimensions. The cost of training the network is low, as is the cost of inference. The main additional cost of the method comes from the quadrature rule, whose order has to be increased to ensure a good approximation of the integral of the prior. In most cases, this increase in order is not very large, and the gain of our approach over the classical one remains significant.
There are several possible ways of extending our approach. From an application point of view, we wish to deal with more difficult equilibria, such as equilibria for the magnetohydrodynamics in tokamaks. From a methodological point of view, we would like to improve the determination of the prior by replacing parametric PINNs with physics-informed neural operators [51, 25] in order to widen the family of equilibria that can be considered. The other approach is to extend the method with time-dependent priors, in order to increase the accuracy of the scheme around families of unsteady solutions. To that end, we wish to move on to space-time DG methods, see e.g. [38].
|
2310.09705 | SGA: A Graph Augmentation Method for Signed Graph Neural Networks | Signed Graph Neural Networks (SGNNs) are vital for analyzing complex patterns
in real-world signed graphs containing positive and negative links. However,
three key challenges hinder current SGNN-based signed graph representation
learning: sparsity in signed graphs leaves latent structures undiscovered,
unbalanced triangles pose representation difficulties for SGNN models, and
real-world signed graph datasets often lack supplementary information like node
labels and features. These constraints limit the potential of SGNN-based
representation learning. We address these issues with data augmentation
techniques. Despite many graph data augmentation methods existing for unsigned
graphs, none are tailored for signed graphs. Our paper introduces the novel
Signed Graph Augmentation framework (SGA), comprising three main components.
First, we employ the SGNN model to encode the signed graph, extracting latent
structural information for candidate augmentation structures. Second, we
evaluate these candidate samples (edges) and select the most beneficial ones
for modifying the original training set. Third, we propose a novel augmentation
perspective that assigns varying training difficulty to training samples,
enabling the design of a new training strategy. Extensive experiments on six
real-world datasets (Bitcoin-alpha, Bitcoin-otc, Epinions, Slashdot, Wiki-elec,
and Wiki-RfA) demonstrate that SGA significantly improves performance across
multiple benchmarks. Our method outperforms baselines by up to 22.2% in AUC for
SGCN on Wiki-RfA, 33.3% in F1-binary, 48.8% in F1-micro, and 36.3% in F1-macro
for GAT on Bitcoin-alpha in link sign prediction. | Zeyu Zhang, Shuyan Wan, Sijie Wang, Xianda Zheng, Xinrui Zhang, Kaiqi Zhao, Jiamou Liu, Dong Hao | 2023-10-15T02:19:07Z | http://arxiv.org/abs/2310.09705v1 | # SGA: A Graph Augmentation Method for Signed Graph Neural Networks
###### Abstract
Signed Graph Neural Networks (SGNNs) play a crucial role in the analysis of intricate patterns within real-world signed graphs, where both positive and negative links coexist. Nevertheless, there are three critical challenges in current signed graph representation learning using SGNNs. First, signed graphs exhibit significant sparsity, leaving numerous latent structures uncovered. Second, SGNN models encounter difficulties in deriving proper representations from unbalanced triangles. Finally, real-world signed graph datasets often lack supplementary information, such as node labels and node features. These challenges collectively constrain the representation learning potential of SGNN. We aim to address these issues through data augmentation techniques. However, the majority of graph data augmentation methods are designed for unsigned graphs, making them unsuitable for direct application to signed graphs. To the best of our knowledge, there are currently no data augmentation methods specifically tailored for signed graphs. In this paper, we propose a novel Signed Graph Augmentation framework, **SGA**. This framework primarily consists of three components. In the first part, we utilize the SGNN model to encode the signed graph, extracting latent structural information in the encoding space, which is then used as candidate augmentation structures. In the second part, we analyze these candidate samples (i.e., edges), selecting the most beneficial candidate edges to modify the original training set. In the third part, we introduce a new augmentation perspective, which assigns training samples different training difficulty, thus enabling the design of new training strategy. Extensive experiments on six real-world datasets, i.e., Bitcoin-alpha, Bitcoin-otc, Epinions, Slashdot, Wiki-elec and Wiki-RfA show that SGA improve the performance on multiple benchmarks. Our method outperforms baselines by up to 22.2% in terms of AUC for SGCN on Wiki-RfA, 33.3% in terms of F1-binary, 48.8% in terms of F1-micro, and 36.3% in terms of F1-macro for GAT on Bitcoin-alpha in link sign prediction. Our implementation is available in PyTorch1.
Footnote 1: [https://anonymous.4open.science/r/SGA-3127](https://anonymous.4open.science/r/SGA-3127)
Signed Graph, Graph Neural Networks, Graph Augmentation
## 1 Introduction
As social media continues to gain widespread popularity, it gives rise to a multitude of interactions among individuals, which are subsequently documented within social graphs [1; 2]. While many of these social interactions denote positive connections, such as liking, trust, and friendship, there are also instances of negative interactions, encompassing feelings of hatred, distrust, and more. In essence, graphs that encompass both positive and negative interactions or links are commonly termed as _signed graphs_[3; 4]. For instance, Slashdot [5], a tech-related news website, allows users to tag other users as either 'friends' or 'foes'. Such a situation can be naturally modeled as a _signed graph_. In recent years, there has been a growing interest among researchers in exploring network representation within the context of signed graphs [6; 7; 8]. Most of these methods are combined with Graph Neural Networks (GNN) [9; 10], and are therefore collectively referred to as Signed Graph Neural Networks (SGNN) [11; 12]. This endeavor is focused on acquiring low-dimensional representations of nodes, with the ultimate goal of facilitating subsequent network analysis tasks, especially _link sign prediction_.
There are three issues within signed graph representation learning: 1) real-world signed graph datasets are exceptionally sparse [4], with a significant amount of potential structure remaining uncollected or undiscovered; 2) according to the analysis in [13], SGNNs fail to learn proper representations from unbalanced triangles, despite the prevalence of unbalanced triangles in real-world datasets; 3) real-world signed graph datasets only contain structural information and lack further side information. Data augmentation, which has been well-studied in computer vision [14; 15; 16; 17] and natural language processing [18; 19; 20], holds promise for alleviating the three issues mentioned above.
In recent years, there have been significant advancements in graph data augmentation methods [21; 22; 23], including node perturbation [24; 25], edge perturbation [26] and sub-graph sampling [27]. _Yet_, current graph data augmentation (GDA) methods cannot be directly applied to signed graphs. The limitations of current GDA methods primarily lie in the following three aspects: 1) Some graph augmentation methods [21; 22] incorporate side information such as node features and node labels. However, real-world signed graph datasets lack this kind of information and only possess structural information. 2) Random structural perturbation [26] is not applicable to signed graph neural networks. As shown in Figure 1, we employed three different random methods (i.e., random addition/removal of positive edges, random addition/removal of negative edges, and random sign-flipping of existing edges) in conjunction with the classic SGNN model, SGCN [11]. The experimental results show that these random methods cannot enhance the performance of SGCN. The experimental results involving random edge flipping (Figure 1(c)) demonstrate that the data augmentation methods employed in signed graph contrastive learning models [8; 28] do not readily extend to general representation learning models. 3) Current data augmentation methods typically enhance data from aspects like node features and node labels, lacking novel augmentation perspectives [29].
To the best of our knowledge, there currently exists no data augmentation solution tailored specifically for SGNN models. In this paper, we embark on the exploration of data augmentation methods for signed graph representation learning. The primary challenges are as follows:
1. Uncover potential structural information using only structural data.
2. Design a more refined and targeted approach to modify the existing structure in order to mitigate the adverse effects of unbalanced cycles on SGNN models [13].
Figure 1: Effectiveness of data augmentation through random structural perturbations (SGCN [11] as backbone model) on link sign prediction performance. (a) Randomly increasing or decreasing positive edges. (b) Randomly increasing or decreasing negative edges. (c) Randomly flipping the sign of edges.
3. Propose a new data augmentation perspective specifically tailored for signed graphs.
To address the aforementioned challenges, we propose a novel Signed Graph Augmentation framework, **SGA**. This framework primarily consists of three components which focus on mining new structural information and edge features from the training samples (i.e., edges) with only structural information. In more detail, to address the **first** challenge, we treat uncovering potential structural information as an edge prediction problem. We utilize a classic SGNN model, such as SGCN [11], to encode the nodes of the signed graph. In the encoding space, we consider the relationships between nodes that are close in proximity as potential positive edges, while we view the relationships between nodes that are farther apart as potential negative edges. The newly discovered structural information is treated as candidate training samples (i.e., edges). To address the **second** challenge, we do not directly insert these candidate training samples into the training set but adopt a more cautious approach to discern whether they introduce harmful information. We demonstrate that only candidate training samples that do not decrease the local balance of nodes (see Def. 2) are beneficial candidate training samples. Based on this conclusion, we select the beneficial candidate training samples to insert into the training set. To address the **third** challenge, we introduce a new data augmentation perspective, _training difficulty_, and assign different difficulty scores to various training samples. Differing from conventional training approaches that assign equal training weights to all samples, we develop novel training schemes for SGNN models based on these varying difficulty levels.
To evaluate the effectiveness of SGA, we perform extensive experiments on six real-world datasets, i.e., Bitcoin-alpha, Bitcoin-otc, Epinions, Slashdot, Wiki-elec and Wiki-RfA. We verify that our proposed framework SGA can enhance model performance using the common SGNN model SGCN [11] as the encoding module. The experimental results show that SGA improves the link sign prediction accuracy of five base models, including two unsigned GNN models (GCN [9] and GAT [30]) and three signed GNN models (SGCN, SiGAT [31] and GS-GNN [32]). SGA boosts performance by up to 22.2% in terms of AUC for SGCN on Wiki-RfA, 33.3% in terms of F1-binary, 48.8% in terms of F1-micro, and 36.3% in terms of F1-macro for GAT on Bitcoin-alpha in link sign prediction. These experimental results demonstrate the effectiveness of SGA.
* We are the first to introduce the research on data augmentation for signed graph neural networks.
* We have proposed a novel signed graph augmentation framework which aims to alleviate the three issues in signed graph neural networks. This framework not only helps uncover potential training samples but also aids in selecting beneficial samples to mitigate the introduction of harmful structural information. Additionally, it enables the augmentation of training samples with a new feature (i.e., training difficulty, see Def. 3), which forms the basis for a new training strategy.
* Extensive experiments on six real-world datasets with five backbone models demonstrate the effectiveness of our framework.
## 2 Related Work
As described above, the topics relevant to our paper are Signed Graph Neural Networks and Graph Augmentation. Next, we will discuss these two aspects separately.
### Signed Graph Neural Networks
Due to the widespread popularity of social media, signed graphs have garnered significant attention in the field of network representation [33; 34; 28; 13]. Existing research has predominantly concentrated on tasks related to _link sign prediction_, while overlooking other crucial tasks like node classification [35], node ranking [36], and community detection [37]. Some signed graph embedding techniques, such as SNE [38], SIDE [39], SGDN [40], and ROSE [41], rely on random walks and linear probabilistic methods. In recent years, neural networks have been employed for signed graph representation learning. The first Signed Graph Neural Network (SGNN), SGCN [11], generalizes GCN [9] to signed graphs by utilizing balance theory to ascertain the positive and negative relationships between nodes separated by multiple hops. Another noteworthy GCN-based approach is GS-GNN, which relaxes the balance theory assumption and typically assumes nodes can be grouped into multiple categories. Additionally, prominent SGNN models like SiGAT [31], SNEA [7], SDGNN [12], and SGCL [8] are based on GAT [42]. These efforts mainly revolve around the development of more advanced SGNN models. Our work diverges from these approaches, as we introduce a novel signed graph augmentation method to improve the performance of SGNNs.
### Graph Data Augmentation
In response to the challenges posed by data noise and limited data availability in graph representation learning, there has been a recent surge in research focused on enhancing graph data augmentation techniques [43; 22; 23]. According to a survey of graph data augmentation [29], graph augmentation methods can be classified into three types, i.e., feature-wise [23; 44; 45], structure-wise [46; 47; 48] and label-wise [49; 50]. For the feature-wise type, LAGNN [23] enriches the node features by employing a generative model that takes as input the localized neighborhood information of the target node. Other feature-wise methods [51; 24] generate augmented node features by random shuffling. Structure-wise augmentation methods target modifying edges and nodes (e.g., randomly adding or deleting edges). GAUG [21] employs neural edge predictors that can effectively encode class-homophilic structure to promote intra-class edges and demote inter-class edges in a given graph structure. GraphSMOTE [52] inserts nodes to enrich the minority classes. Graph diffusion methods (GDC [53]) can generate an augmented graph by providing global views of the underlying structure. Label-wise augmentation methods aim at augmenting the limited labeled training data. G-Mixup [22] augments graphs for graph classification by interpolating the generator (i.e., graphon) of different classes of graphs. It is worth noting that most of these data augmentation methods rely on additional information such as node features and node labels. However, for signed graphs, these types of information are absent, making these methods not directly applicable to data augmentation in signed networks.
## 3 Problem Statement
A _signed graph_ is defined as \(\mathcal{G}=(\mathcal{V},\mathcal{E}^{+},\mathcal{E}^{-})\), where \(\mathcal{V}=\{v_{1},\ldots,v_{|\mathcal{V}|}\}\) represents the set of nodes, and \(\mathcal{E}^{+}\) and \(\mathcal{E}^{-}\) denote the positive and negative edges, respectively. Each edge \(e_{ij}\in\mathcal{E}^{+}\cup\mathcal{E}^{-}\) connecting two nodes \(v_{i}\) and \(v_{j}\) can be either positive or negative, but not both, meaning that \(\mathcal{E}^{+}\cap\mathcal{E}^{-}=\varnothing\). We use \(\sigma(e_{ij})\in\{+,-\}\) to denote the _sign_ of \(e_{ij}\). The structure of \(\mathcal{G}\) is represented by the adjacency matrix \(A\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\), where each entry \(A_{ij}\in\{1,-1,0\}\) signifies the sign of the edge \(e_{ij}\). It's important to note that, unlike unsigned graph datasets, signed graphs typically do not provide node features, meaning there is no feature vector \(x_{i}\) associated with each node \(v_{i}\).
_Positive_ and _negative neighbors_ of \(v_{i}\) are denoted as \(\mathcal{N}_{i}^{+}=\{v_{j}\mid A_{ij}>0\}\) and \(\mathcal{N}_{i}^{-}=\{v_{j}\mid A_{ij}<0\}\), respectively. Let \(\mathcal{N}_{i}=\mathcal{N}_{i}^{+}\cup\mathcal{N}_{i}^{-}\) be the set of neighbors of node \(v_{i}\). \(\mathcal{O}_{3}\) represents the set of _triangles_ in the signed graph, i.e., \(\mathcal{O}_{3}=\{\{v_{i},v_{j},v_{k}\}\mid A_{ij}A_{jk}A_{ik}\neq 0\}\). A triangle \(\{v_{i},v_{j},v_{k}\}\) is called _balanced_ if \(A_{ij}A_{jk}A_{ik}>0\), and is called _unbalanced_ otherwise.
**Problem Definition**: \(D_{\text{train}}\cup D_{\text{test}}=\mathcal{E}^{+}\cup\mathcal{E}^{-}\), where \(D_{\text{train}}\) refers to the set of train samples (edges) and \(D_{\text{test}}\) refers to the set of test samples. When only given \(D_{\text{train}}\), our purpose is to design a graph augmentation strategy \(\psi:(D_{\text{train}})\rightarrow(D^{\prime}_{\text{train}},\mathcal{F})\), where \(D^{\prime}_{\text{train}}\) refers to augmented train edge set and \(\mathcal{F}\) refers to the newly generated edge features.
## 4 Proposed Method
In this section, we present the Signed Graph Augmentation framework, which aims to augment training samples (i.e., edges) from the structural perspective (edge manipulation) to side information (edge features). Figure 2 shows the overall architecture. SGA encompasses three key elements: 1) generating new candidate training samples, 2) selecting beneficial training samples, and 3) introducing a new feature (training difficulty) for training samples. To be specific, we **first** utilize the SGNN model for encoding the nodes within the signed graph. Within this encoding space, our objective is to unearth latent relationships between nodes and produce fresh candidate training samples, specifically edges. On an intuitive level, we posit that nodes in close proximity within the encoding space are inclined to form positive relationships (positive edges), whereas nodes further apart are more likely to establish negative relationships (negative edges). **Subsequently**, we conduct a theoretical analysis, highlighting that only training samples that do not decrease the local balance of nodes (see Def. 2) are beneficial candidate training samples. **Lastly**, we propose a new graph augmentation perspective, assigning a difficulty score to each training sample and using this feature to guide the training process. Intuitively, we aim for the backbone model to prioritize the retention of structural information with lower difficulty scores while downplaying the significance of structural information with higher difficulty scores.
### Generating Candidate Training Samples
Real-world signed graph datasets are extremely sparse, with many missing or uncollected relationships between nodes. In this subsection, we attempt to uncover these potential relationships between nodes. We first use an SGNN model, e.g., the classical SGCN [11], as the encoder to project nodes from the topological space to the embedding space. Here, the node representations are updated by aggregating information from different types of neighbors as follows:
For the first aggregation layer \(\ell=1\):
\[\begin{split} H^{pos(1)}&=\sigma\left(\mathbf{W}^{pos(1 )}\left[A^{+}H^{(0)},H^{(0)}\right]\right)\\ H^{neg(1)}&=\sigma\left(\mathbf{W}^{neg(1)}\left[A ^{-}H^{(0)},H^{(0)}\right]\right)\end{split} \tag{1}\]
For the aggregation layer \(\ell>1\):
\[\begin{split} H^{pos(\ell)}&=\sigma\left(\mathbf{W}^{ pos(\ell)}\left[A^{+}H^{pos(\ell-1)},A^{-}H^{neg(\ell-1)},H^{pos(\ell-1)}\right] \right)\\ H^{neg(\ell)}&=\sigma\left(\mathbf{W}^{neg(\ell) }\left[A^{+}H^{neg(\ell-1)},A^{-}H^{pos(\ell-1)},H^{neg(\ell-1)}\right]\right),\end{split} \tag{2}\]
where \(H^{pos(\ell)}\) (\(H^{neg(\ell)}\)) are the positive (negative) parts of the representation matrix at the \(\ell\)-th layer. \(A^{+}\) (\(A^{-}\)) is the row-normalized matrix of the positive (negative) part of the adjacency matrix \(A\). \(\mathbf{W}^{pos(\ell)}\) (\(\mathbf{W}^{neg(\ell)}\)) are the learnable parameters of the positive (negative) part, and \(\sigma(\cdot)\) is the activation function. \([\cdot]\) is the concatenation operation. After conducting message-passing for \(L\) layers, the final node representation matrix is \(Z=H^{(L)}=\left[H^{pos(L)},H^{neg(L)}\right]\). For node \(v_{i}\), the node embedding is \(Z_{i}\). As we wish to classify whether a pair of nodes is linked by a positive edge, a negative edge, or no edge, we train a multinomial logistic regression (MLG) classifier [11]. The training loss is as follows:
Figure 2: The overall architecture of SGA. Green lines represent positive edges and red lines represent negative edges.
\[\mathcal{L}\left(\theta^{\text{MLG}}\right)=-\frac{1}{|D_{\text{train}}|}\sum_{ \left(v_{i},v_{j},\sigma(e_{ij})\right)\in D_{\text{train}}}\log\frac{\exp \left(\left[Z_{i},Z_{j}\right]\theta^{\text{MLG}}_{\sigma(e_{ij})}\right)}{ \Sigma_{q\in\{+,-,?\}}\exp\left(\left[Z_{i},Z_{j}\right]\theta^{\text{MLG}}_{q }\right)} \tag{3}\]
\(\theta^{\text{MLG}}\) refers to the parameters of the MLG classifier. Using this classifier, for any two nodes \(v_{i}\), \(v_{j}\), we can calculate the probability of forming a positive or negative edge between them, denoted as \(Pr_{e_{ij}}^{pos}\) and \(Pr_{e_{ij}}^{neg}\). We configure four probability threshold hyper-parameters, i.e., the probability threshold for adding positive edges (\(\epsilon^{+}_{add}\)), the probability threshold for adding negative edges (\(\epsilon^{-}_{add}\)), the probability threshold for deleting positive edges (\(\epsilon^{+}_{del}\)), and the probability threshold for deleting negative edges (\(\epsilon^{-}_{del}\)). We adopt the following strategy to generate candidate training samples:
* \(\forall v_{i},v_{j}\in\mathcal{V}\), if \((v_{i},v_{j},\sigma(e_{ij}))\notin D_{\text{train}}\) and \(Pr_{e_{ij}}^{pos}>\epsilon^{+}_{add}\lor Pr_{e_{ij}}^{neg}>\epsilon^{-}_{add}\), then \(D^{\text{cand}}_{\text{train\_add}}\leftarrow D^{\text{cand}}_{\text{train\_add}}\cup\{(v_{i},v_{j},\sigma(e_{ij}))\}\)
* \(\forall v_{i},v_{j}\in\mathcal{V}\), if \((v_{i},v_{j},\sigma(e_{ij}))\in D_{\text{train}}\), \(A_{ij}>0\) and \(Pr_{e_{ij}}^{pos}<\epsilon^{+}_{del}\), then \(D^{\text{cand}}_{\text{train\_del}}\leftarrow D^{\text{cand}}_{\text{train\_del}}\cup\{(v_{i},v_{j},\sigma(e_{ij}))\}\)
* \(\forall v_{i},v_{j}\in\mathcal{V}\), if \((v_{i},v_{j},\sigma(e_{ij}))\in D_{\text{train}}\), \(A_{ij}<0\) and \(Pr_{e_{ij}}^{neg}<\epsilon^{-}_{del}\), then \(D^{\text{cand}}_{\text{train\_del}}\leftarrow D^{\text{cand}}_{\text{train\_del}}\cup\{(v_{i},v_{j},\sigma(e_{ij}))\}\)
\(D^{\text{cand}}_{\text{train\_add}}\) and \(D^{\text{cand}}_{\text{train\_del}}\) respectively refer to the candidate training sets for adding edges and for deleting edges.
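The three rules above can be summarized by the following sketch, where `prob_pos` and `prob_neg` are assumed to hold the MLG classifier's edge probabilities for every node pair of interest, and where the sign of a proposed edge is assigned according to which probability crosses its threshold (our reading of the disjunction in the first rule).

```python
def generate_candidates(pairs, train_edges, prob_pos, prob_neg,
                        eps_add_pos, eps_add_neg, eps_del_pos, eps_del_neg):
    """train_edges: dict mapping (i, j) -> sign in {+1, -1} for training edges;
    prob_pos / prob_neg: dicts of classifier probabilities per node pair."""
    cand_add, cand_del = [], []
    for (i, j) in pairs:
        if (i, j) not in train_edges:              # rule 1: propose new edges
            if prob_pos[(i, j)] > eps_add_pos:
                cand_add.append((i, j, +1))
            elif prob_neg[(i, j)] > eps_add_neg:
                cand_add.append((i, j, -1))
        elif train_edges[(i, j)] > 0:              # rule 2: doubtful positive edges
            if prob_pos[(i, j)] < eps_del_pos:
                cand_del.append((i, j, +1))
        else:                                      # rule 3: doubtful negative edges
            if prob_neg[(i, j)] < eps_del_neg:
                cand_del.append((i, j, -1))
    return cand_add, cand_del
```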
### Selecting Beneficial Candidate Training Samples
After generating the candidate training sets \(D^{\text{cand}}_{\text{train\_add}}\) and \(D^{\text{cand}}_{\text{train\_del}}\), we next aim to incorporate these training candidates into the training sample set \(D_{\text{train}}\). One issue that needs to be addressed is that not all newly generated training samples (i.e., edges) have positive effects. Hence, we need to select beneficial candidate training samples.
According to [13; 28], SGNN models that rely on balance theory cannot learn a proper representation for nodes from unbalanced triangles, as is shown in Figure 3.
**Definition 1**.: Balanced (unbalanced) triangles _are cycles with 3 nodes containing an even (odd) number of negative edges._
The addition of both positive and negative edges can result in changes in the local structure of nodes, potentially leading to unbalanced triangles. We provide a definition of local balance degree.
**Definition 2** (Local Balance Degree).: _For node \(v_{i}\), the local balance degree is defined by:_
\[D_{3}(v_{i})=\frac{|\mathcal{O}_{3}^{+}(v_{i})|-|\mathcal{O}_{3}^{-}(v_{i})|}{ |\mathcal{O}_{3}^{+}(v_{i})|+|\mathcal{O}_{3}^{-}(v_{i})|} \tag{4}\]
_where \(\mathcal{O}_{3}^{+}(v_{i})\) (\(\mathcal{O}_{3}^{-}(v_{i})\)) represents the set of balanced (unbalanced) triangles containing node \(v_{i}\). \(|\cdot|\) represents the set cardinal number._
From Def. 2, we can observe that a node's local balance degree is determined by the counts of balanced and unbalanced triangles that include this node. Based on this definition, we can conclude that beneficial candidate training samples do not decrease the local balance degree of target nodes.
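A direct implementation of Def. 2 on a dictionary-based signed adjacency structure might look as follows (a minimal sketch; returning 0 for nodes that belong to no triangle is our assumption, since the definition leaves this case open).

```python
def local_balance_degree(adj, v):
    """adj: symmetric dict mapping node -> {neighbor: sign in {+1, -1}}.
    Returns D3(v) from Def. 2; 0 when v belongs to no triangle (our assumption)."""
    balanced = unbalanced = 0
    nbrs = list(adj[v])
    for a in range(len(nbrs)):
        for b in range(a + 1, len(nbrs)):
            u, w = nbrs[a], nbrs[b]
            if w in adj[u]:  # the triangle {v, u, w} exists
                if adj[v][u] * adj[v][w] * adj[u][w] > 0:
                    balanced += 1
                else:
                    unbalanced += 1
    total = balanced + unbalanced
    return 0.0 if total == 0 else (balanced - unbalanced) / total
```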
### Introducing a New Feature for Training Samples
Based on the above analysis, it is apparent that unbalanced triangles pose a challenging task for SGNNs. Intuitively, when an edge is part of an unbalanced triangle, its level of difficulty in terms of representation learning should surpass that of edges not involved in such structures, as shown in Figure 4.
Figure 3: Four isomorphism types of triangles. Green and red lines represent positive and negative edges, resp.
**Definition 3** (Edge Difficulty Score).: _For edge \(e_{ij}\), the difficulty score is defined by:_
\[\text{Score}(e_{ij})=1-\frac{D_{3}(v_{i})+D_{3}(v_{j})}{2} \tag{5}\]
_where \(D_{3}(v_{i})\) and \(D_{3}(v_{j})\) refer to the local balance degrees of nodes \(v_{i}\) and \(v_{j}\), respectively._
Upon quantifying the difficulty scores for each edge within the training set, a curriculum-based training approach is applied to enhance the performance of the SGNN model. This curriculum is fashioned following the principles set forth in [54], which enables the creation of a structured progression from easy to difficult. The process entails initially sorting the training edges \(\mathcal{E}\) in ascending order of their difficulty scores. Subsequently, a pacing function \(g(t)\) allocates these edges to distinct training epochs, transitioning from easier to more challenging instances, where \(t\) signifies the \(t\)-th epoch. We use a linear pacing function: starting from an initial fraction \(\lambda_{0}\) of the easiest edges, the fraction of visible training edges grows linearly over \(T\) intervals until the whole training set is used.
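Putting Def. 3 and the pacing function together, a sketch of the resulting curriculum is given below, reusing the `local_balance_degree` helper sketched above; the exact linear pacing form is our assumption, consistent with the \(\lambda_{0}\) and \(T\) hyper-parameters described in Section 5.

```python
def edge_difficulty(adj, i, j):
    """Difficulty score of edge (i, j) from Def. 3."""
    return 1.0 - (local_balance_degree(adj, i) + local_balance_degree(adj, j)) / 2.0

def pacing(t, T, lam0):
    """Assumed linear pacing: fraction of the easiest edges visible at interval t."""
    return min(1.0, lam0 + (1.0 - lam0) * t / T)

def curriculum(edges, adj, T, lam0):
    """Yield, for each of the T intervals, the currently visible training edges."""
    ranked = sorted(edges, key=lambda e: edge_difficulty(adj, e[0], e[1]))
    for t in range(1, T + 1):
        yield ranked[: int(pacing(t, T, lam0) * len(ranked))]
```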
## 5 Experiments
In this section, we commence by assessing the enhancements brought about by SGA in comparison to diverse backbone models for the link sign prediction task. We will answer the following questions:
* **Q1**: Can SGA framework increase the performance of backbone models?
* **Q2**: Do each part of the SGA framework play a positive role?
* **Q3**: Is the proposed method sensitive to hyper-parameters? How do key hyper-parameters impact the method performance?
### Datasets
We conduct experiments on six real-world datasets, i.e., Bitcoin-OTC, Bitcoin-Alpha, Wiki-elec, Wiki-RfA, Epinions, and Slashdot. The main statistics of each dataset are summarized in Table 1. In the following, we explain important characteristics of the datasets briefly.
**Bitcoin-OTC2**[55; 56] and **Bitcoin-Alpha3** are two datasets extracted from bitcoin trading platforms. Because Bitcoin accounts are anonymous, people give trust or distrust tags to others in order to enhance security.
Footnote 2: [http://www.bitcoin-otc.com](http://www.bitcoin-otc.com)
Footnote 3: [http://www.btc-alpha.com](http://www.btc-alpha.com)
**Wiki-elec4**[57; 1] is a voting network in which users can vote trust or distrust on other users in administrator elections.
**Wiki-RfA**[58] is a more recent version of Wiki-elec.
**Epinions5**[57] is a consumer review site with trust and distrust relationships between users.
Footnote 4: [https://www.wikipedia.org](https://www.wikipedia.org)
**Slashdot6**[57] is a technology-related news website in which users can tag each other as friends (trust) or enemies (distrust).
Figure 4: Illustration of node difficulty, where green lines represent positive edges and red lines represent negative edges.
Following the experimental settings in [11], we randomly split the edges into a training set and a testing set with a ratio of 8:2. We run 5 times with different train-test splits to obtain the average scores and standard deviations.
### Baselines and Experiment Setting
We use five popular graph representation learning models as backbones, including both unsigned GNN models and signed GNN models.
**Unsigned GNN**: We employ two classical GNN models (i.e., GCN [9] and GAT [30]). These methods are designed for unsigned graphs, thus, as mentioned before, we consider all edges as positive edges to learn node embeddings in the experiments.
**Signed Graph Neural Networks**: SGCN [11] and SiGAT [31] respectively generalize GCN [9] and GAT [30] to signed graphs based on the message-passing mechanism. Besides, SGCN integrates the balance theory. GS-GNN [32] adopts a more generalized assumption (than balance theory) that nodes can be divided into multiple latent groups. We use these signed graph representation models as baselines to explore whether **SGA** can enhance their performance.
We implement our SGA using PyTorch [59] and employ PyTorch Geometric [60] as its complementary graph library. The graph encoder, responsible for augmenting the graph, consists of a 2-layer SGCN with an embedding dimension of 64. This encoder is optimized using the Adam optimizer, set with a learning rate of 0.01 over 300 epochs. To ensure a consistent comparison, we standardized the node embedding dimension at 64 across all embedding-based methods, matching the dimensionality used in GS-GNN [61]. For the baseline methods, we adhere to the parameter configurations as recommended in their originating papers. Specifically, for unsigned baseline models like GCN and GAT, we employ the Adam optimizer, with a learning rate of 1e-2, a weight decay of 5e-4, and span the training over 500 epochs. In contrast, signed baseline models are trained with an initial learning rate of 5e-3, a weight decay of 1e-5, and are run for 3000 epochs.
The experiments were performed on a Linux machine with eight 24GB NVIDIA GeForce RTX 3090 GPUs.
Our primary evaluation criterion is link sign prediction--a binary classification task. We assess the performance employing AUC, F1-binary, F1-macro, and F1-micro metrics, consistent with the established norms in related literature [61, 32]. It's imperative to note that across these evaluation metrics, a higher score directly translates to enhanced model performance.
### Performance on Link Sign Prediction (Q1)
To comprehensively evaluate the performance of our proposed SGA, we contrast it with several baseline configurations that exclude SGA integration. For a detailed view, AUC and F1-binary score results are presented in Table 2. Further, the F1-micro and F1-macro scores can be referenced in Table 3. For each model, the mean AUC and F1-binary scores, along with their respective standard deviations, are documented. These metrics are derived from five independent runs on each dataset, utilizing distinct, non-overlapping splits: 80% of the links are earmarked for training, while the residual 20% serve as the test set. Additionally, the table elucidates the percentage improvement in these metrics attributable to the integration of SGA, relative to the baseline models devoid of SGA. The results proffer several salient insights:
* Our investigations affirm that the SGA framework serves as a potent catalyst in augmenting the performance of both signed and unsigned graph neural networks. This underscores the efficacy of tailoring the inherent graph structure and the subsequent training regimen, enabling models to astutely discern intricate node relationships.
* Within the realm of unsigned graph neural networks, the GAT model exhibits a more pronounced enhancement in performance relative to the GCN in the majority of scenarios. This observation is attributable to GAT's comparatively modest baseline performance in relation to GCN, engendering a larger margin for refinement. This phenomenon
\begin{table}
\begin{tabular}{c c c c} \hline \hline Dataset & \# Links & \# Positive Links & \# Negative Links \\ \hline Bitcoin-OTC & 35,592 & 32,029 & 3,563 \\ Bitcoin-Alpha & 24.186 & 22,650 & 1,536 \\ Wiki-elec & 103,689 & 81,345 & 22,344 \\ Wiki-RFA & 170,335 & 133,330 & 37,005 \\ Epinions & 840,799 & 717,129 & 123,670 \\ Slashdot & 549,202 & 425,072 & 124,130 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The statistics of datasets.
accentuates the potential of the SGA framework to stabilize the GAT's training dynamics by refining the graph topology and the associated training procedure.
* Among the signed neural networks we scrutinized--SGCN, SiGAT, and GS-GNN--it's pivotal to note that only SGCN integrates the balance theory. This strategic incorporation propels SGCN to manifest the most pronounced improvements across a majority of the datasets. Even though SGCN's performance tends to be lackluster when leveraging random embeddings for initial node representations, our findings suggest that aligning the dataset more closely with the model's intrinsic assumptions can pave the way for superior performance outcomes.
* A salient advantage conferred by the SGA framework is the enhanced stability observed in signed graph neural network models. This stability is palpable through reduced standard deviation in AUC and F1-binary scores across most datasets. Contrarily, unsigned baseline models, which inherently overlook the nuanced negative inter-node relationships, do not seem to reap similar stability dividends.
### Ablation Study (Q2)
To ascertain the contributions of various components of SGA towards the model's overall performance, we systematically dissect and evaluate the SGCN under different conditions. Below are the configurations we investigate:
* **SGCN:** This configuration deploys SGCN on the original graph, devoid of any curriculum learning integration.
* **+SA (Structure Augmentation, refer to Sec. 4.1 and Sec. 4.2):** SGCN operates on augmented datasets. This augmentation involves the addition or removal of edges from the initial graph.
* **+TP (Training Plan, refer to Sec. 4.3):** SGCN runs on the original graph, but with a modified training paradigm. Adopting a curriculum learning approach, we rank edges by their "difficulty". The model is then progressively exposed to these edges, transitioning from simpler to more challenging ones as training epochs progress.
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c|c c|c c} \hline Datasets & \multicolumn{2}{c|}{Bitcoin-alpha} & \multicolumn{2}{c|}{Bitcoin-oc} & \multicolumn{2}{c|}{Epinions} & \multicolumn{2}{c|}{Slabdet} & \multicolumn{2}{c}{Wiki-elec} & \multicolumn{2}{c}{Wiki-RIA} \\ Methods & AUC & F1-binary & AUC & F1-binary & AUC & F1-binary & AUC & F1-binary & AUC & F1-binary & AUC & F1-binary \\ \hline GCN & 60.9\(\pm\)0.8 & 73.6\(\pm\)1.5 & 69.1\(\pm\)0.8 & 83.0\(\pm\)1.5 & 68.5\(\pm\)0.2 & 80.4\(\pm\)0.2 & 51.8\(\pm\)0.7 & 55.8\(\pm\)1.9 & 64.0\(\pm\)1.1 & 75.5\(\pm\)1.5 & 60.4\(\pm\)0.7 & 72.0\(\pm\)1.0 \\ +sGA & 64.5\(\pm\)1.4 & 80.7\(\pm\)2.5 & 69.3\(\pm\)1.0 & 89.0\(\pm\)0.9 & 69.3\(\pm\)0.2 & 82.5\(\pm\)0.4 & 51.5\(\pm\)2.0 & 61.9\(\pm\)1.0 & 64.8\(\pm\)0.3 & 75.0\(\pm\)1.7 & 60.9\(\pm\)0.3 & 72.9\(\pm\)1.6 \\ (Improv.) & 5.9\(\pm\) & 9.7\(\pm\) & 7.2\(\pm\) & 1.2\(\pm\) & 2.6\(\pm\) & - & 10.9\% & 1.3\% & - & 0.8\% & 1.3\% \\ \hline GAT & 59.3\(\pm\)1.6 & 68.0\(\pm\)19.3 & 67.2\(\pm\)2.1 & 84.6\(\pm\)5.9 & 53.1\(\pm\)1.7 & 68.0\(\pm\)19.0 & 51.4\(\pm\)1.6 & 61.0\(\pm\)18.5 & 54.6\(\pm\)2.3 & 69.8\(\pm\)15.1 & 51.9\(\pm\)1.1 & 72.8\(\pm\)4.5 \\ +sGA & 65.0\(\pm\)6.8 & 90.7\(\pm\)3.1 & 74.3\(\pm\)2.0 & 93.6\(\pm\)0.5 & 51.9\(\pm\)2.1 & 73.0\(\pm\)16.9 & 58.9\(\pm\)4.9 & 70.9\(\pm\)12.9 & 56.8\(\pm\)4.7 & 73.3\(\pm\)6.1 & 59.7\(\pm\)5.7 & 72.6\(\pm\)17.5 \\ (Improv.) & 0.6\(\pm\) & 33.3\% & 10.6\% & 10.6\% & - & 7.4\% & 14.6\% & 16.2\% & 4.7\% & 5\% & 15\% & \\ \hline SGCN & 75.3\(\pm\)0.2 & 90.5\(\pm\)0.8 & 79.4\(\pm\)1.5 & 92.3\(\pm\)1.2 & 68.6\(\pm\)4.4 & 90.5\(\pm\)1.4 & 61.0\(\pm\)1.6 & 67.3\(\pm\)3.3 & 66.9\(\pm\)3.7 & 77.4\(\pm\)8.2 & 62.2\(\pm\)6.9 & 71.3\(\pm\)3.9 \\ +sGA & 80.6\(\pm\)1.2 & 93.9\(\pm\)0.4 & 82.2\(\pm\)0.4 & 97.4\(\pm\)0.7 & 78.1\(\pm\)0.4 & 92.8\(\pm\)0.4 & 63.2\(\pm\)1.4 & 84.6\(\pm\)0.8 & 77.7\(\pm\)0.2 & 87.0\(\pm\)0.7 & 76.0\(\pm\)0.5 & 86.9\(\pm\)0.4 \\ (Improv.) & 7.1\% & 3.8\% & 3.4\% & 2.6\% & 14.9\% & 2.6\% & 3.6\% & 25.7\% & 16.1\% & 12.3\% & 22.2\% & 21.8\% \\ \hline SGAT & 79.7\(\pm\)4.0 & 96.7\(\pm\)0.4 & 87.7\(\pm\)0.8 & 95.2\(\pm\)0.2 & 82.8\(\pm\)4.3 & 93.4\(\pm\)0.7 & 80.6\(\pm\)2.1 & 86.7\(\pm\)2.1 & 87.0\(\pm\)0.3 & 90.3\(\pm\)0.1 & 87.1\(\pm\)0.1 & 90.3\(\pm\)0.0 \\ +sGA & 87.4\(\pm\)0.8 & 96.3\(\pm\)0.3 & 89.8\(\pm\)0.9 & 95.1\(\pm\)0.4 & 88.4\(\pm\)1.5 & 94.6\(\pm\)0.6 & 85.0\(\pm\)0.6 & 89.3\(\pm\)0.3 & 88.5\(\pm\)0.3 & 90.5\(\pm\)0.2 & 88.0\(\pm\)0.2 & 90.3\(\pm\)0.1 \\ (Improv.) & 9.7\% & - & 2.4\% & - & 6.8\% & 1.3\% & 5.5\% & 3\% & 1.6\% & 0.3\% & 1\% & - \\ \hline SGNN & 85.1\(\pm\)1.3 & 97.0\(\pm\)0.1 & 83.3\(\pm\)1.1 & 95.9\(\pm\)0.3 & 88.9\(\pm\)0.4 & 95.0\(\pm\)0.6 & 77.9\(\pm\)0.7 & 88.6\(\pm\)0.3 & 88.2\(\pm\)0.2 & 90.9\(\pm\)0.1 & 86.8\(\pm\)0.2 & 90.3\(\pm\)0.2 \\ +sGA & 89.9\(\pm\)0.7 & 96.4\(\pm\)0.2 & 90.7\(\pm\)0.9 & 96.0\(\pm\)0.2 & 90.1\(\pm\)0.3 & 95.1\(\pm\)0.4 & 81.3\(\pm\)0.5 & 88.0\(\pm\)0.5 & 89.1\(\pm\)0.1 & 91.1\(\pm\)0.1 & 87.6\(\pm\)0.2 & 90.5\(\pm\)0.1 \\ (Improv.) & 5.7\% & - & 2.8\% & 0.1\% & 1.4\% & 0.1\% & 4.3\% & - & 1\% & 0.1\% & 0.9\% & 0.3\% \\ \hline \end{tabular}
\end{table}
Table 2: Link sign prediction results (average \(\pm\) standard deviation) with AUC (%) and F1-binary (%) on six
\begin{table}
\begin{tabular}{l|c c c|c c c c|c c c|c c} \hline Datasets & \multicolumn{2}{c|}{Bitcoin-alpha} & \multicolumn{2}{c|}{Bitcoin-oc} & \multicolumn{2}{c|}{Epinions} & \multicolumn{2}{c|}{Slabdet} & \multicolumn{2}{c|}{Wiki-elec} & \multicolumn{2}{c}{Wiki-RIA} \\ Methods & F1-mirror & F1-macro & F1-micro & F1-micro & F1-micro & F1-micro & F1-micro & F1-micro & F1-micro & F1-micro & F1-micro & F1-micro & F1-macro \\ \hline GCN & 59.9\(\pm\)1.8 & 45.0\(\pm\)0.9 & 72.9\(\pm\)2.1 & 73.7\(\pm\)1.4 & 70.4\(\pm\)0.2 & 60.0\(\pm\)0.2 & 47.1\(\pm\)1.4 & 49.1\(\pm\)0.0 & 65.8\(\pm\)1.5 &
* **+SGA:** This is a holistic approach where SGCN operates on an augmented graph and also incorporates the aforementioned training plan. The model learns easier edges initially, gradually advancing to more intricate ones.
Our ablation study, summarized in Table 4 and conducted across six benchmark datasets, yields several key observations:
* **Significance of Data Augmentation:** Data Augmentation emerges as a pivotal component in enhancing the model's efficacy. Even when employed in isolation, it frequently outperforms the baseline model trained on the original graph without any specialized training strategy.
* **Isolated Impact of the Training Plan:** Applied on its own, the Training Plan occasionally offers modest performance gains relative to Data Augmentation alone. An illustrative case is its behavior on the Bitcoin-alpha dataset, where the Training Plan lowers the AUC yet raises the F1 scores.
* **Synergistic Benefits of Training Plan and Data Augmentation:** Combining the Training Plan with Data Augmentation frequently magnifies the individual benefits of Data Augmentation. A notable exception appears in the Slashdot metrics: Data Augmentation in isolation yields a higher F1 score but a lower AUC, even relative to the plain SGCN baseline, whereas adding the Training Plan recovers a superior AUC. The full SGA strategy combines these attributes, delivering a good balance of AUC and F1 scores and underscoring the complementarity of the two components.
### Parameter Sensitivity Analysis (Q3)
In this subsection, we undertake a sensitivity analysis focusing on six hyper-parameters: \(\epsilon^{+}_{del}\), \(\epsilon^{-}_{del}\), \(\epsilon^{+}_{add}\), \(\epsilon^{-}_{add}\) (the probability thresholds for removing or adding positive/negative edges); \(T\), the number of intervals during training at which more challenging edges are incrementally added to the training set; and \(\lambda_{0}\), the initial fraction of the easiest examples. Performance metrics for the SGCN model within the SGA framework, as measured by AUC and F1-binary score across various hyper-parameter configurations, are illustrated in Figures 5 and 6, respectively.
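For reference, a curriculum of this kind is often realized with a pacing function that grows the visible fraction of easiest-first training edges from \(\lambda_{0}\) to 1 over the \(T\) intervals; the linear form below is a minimal sketch under that assumption, not necessarily the exact schedule used by SGA.

```python
def curriculum_fraction(t: int, T: int, lambda_0: float) -> float:
    """Fraction of the easiest-ranked training edges exposed to the model
    after t of T curriculum intervals (linear pacing, illustrative only)."""
    assert 0 <= t <= T
    return min(1.0, lambda_0 + (1.0 - lambda_0) * t / T)

# Example: with lambda_0 = 0.3 and T = 5, the visible fraction grows as
# 0.30 -> 0.44 -> 0.58 -> 0.72 -> 0.86 -> 1.00 across the intervals.
```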
\begin{table}
\begin{tabular}{c c|c c c c}
\hline
Dataset & Metric & SGCN & +SA & +TP & +SGA \\
\hline
\multirow{4}{*}{Bitcoin-alpha} & AUC & 75.3\(\pm\)0.2 & 78.4\(\pm\)0.3 & 73.8\(\pm\)1.2 & **80.6**\(\pm\)1.0 \\
 & F1-binary & 90.5\(\pm\)0.8 & 93.3\(\pm\)0.3 & 91.5\(\pm\)0.1 & **94.0**\(\pm\)0.3 \\
 & F1-micro & 83.4\(\pm\)1.3 & 88.1\(\pm\)0.4 & 84.9\(\pm\)0.1 & **89.2**\(\pm\)0.6 \\
 & F1-macro & 62.1\(\pm\)1.1 & 67.5\(\pm\)0.4 & 62.7\(\pm\)0.4 & **69.7**\(\pm\)0.5 \\
\hline
\multirow{4}{*}{Bitcoin-otc} & AUC & 79.4\(\pm\)1.5 & 80.8\(\pm\)1.4 & 79.7\(\pm\)0.9 & **82.2**\(\pm\)0.7 \\
 & F1-binary & 92.3\(\pm\)1.2 & 94.7\(\pm\)0.9 & 92.3\(\pm\)1.5 & **94.7**\(\pm\)0.7 \\
 & F1-micro & 86.7\(\pm\)1.9 & 90.7\(\pm\)1.6 & 86.7\(\pm\)2.4 & **90.6**\(\pm\)1.1 \\
 & F1-macro & 72.0\(\pm\)1.5 & 77.3\(\pm\)2.3 & 72.1\(\pm\)2.8 & **77.6**\(\pm\)1.7 \\
\hline
\multirow{4}{*}{Epinions} & AUC & 68.6\(\pm\)4.4 & 75.5\(\pm\)2.2 & 75.0\(\pm\)0.5 & **78.1**\(\pm\)0.4 \\
 & F1-binary & 90.5\(\pm\)1.4 & 91.6\(\pm\)0.3 & 88.6\(\pm\)2.6 & **92.8**\(\pm\)0.4 \\
 & F1-micro & 83.9\(\pm\)2.0 & 85.9\(\pm\)0.5 & 81.7\(\pm\)3.5 & **87.9**\(\pm\)0.6 \\
 & F1-macro & 68.0\(\pm\)2.9 & 73.7\(\pm\)1.3 & 70.1\(\pm\)2.5 & **76.9**\(\pm\)0.8 \\
\hline
\multirow{4}{*}{Slashdot} & AUC & 61.0\(\pm\)1.6 & 58.8\(\pm\)5.1 & **63.3**\(\pm\)0.3 & 63.2\(\pm\)1.2 \\
 & F1-binary & 67.3\(\pm\)3.3 & **85.7**\(\pm\)1.1 & 69.2\(\pm\)6.4 & 84.6\(\pm\)0.8 \\
 & F1-micro & 58.2\(\pm\)3.1 & **76.6**\(\pm\)1.1 & 60.7\(\pm\)6.0 & 75.8\(\pm\)1.0 \\
 & F1-macro & 54.6\(\pm\)2.4 & 58.2\(\pm\)7.6 & 56.6\(\pm\)3.2 & **63.8**\(\pm\)1.2 \\
\hline
\multirow{4}{*}{Wiki-elec} & AUC & 66.9\(\pm\)3.7 & 77.5\(\pm\)0.4 & 70.4\(\pm\)2.6 & **77.7**\(\pm\)0.2 \\
 & F1-binary & 77.4\(\pm\)8.2 & 86.7\(\pm\)0.7 & 74.5\(\pm\)9.2 & **87.0**\(\pm\)0.7 \\
 & F1-micro & 69.2\(\pm\)7.6 & 80.3\(\pm\)0.8 & 67.2\(\pm\)8.1 & **80.6**\(\pm\)0.8 \\
 & F1-macro & 61.8\(\pm\)3.9 & 74.1\(\pm\)0.5 & 62.3\(\pm\)5.1 & **74.3**\(\pm\)0.5 \\
\hline
\multirow{4}{*}{Wiki-RfA} & AUC & 62.2\(\pm\)6.9 & 75.6\(\pm\)0.7 & 68.6\(\pm\)3.7 & **76.0**\(\pm\)0.5 \\
 & F1-binary & 71.3\(\pm\)3.9 & 86.1\(\pm\)1.1 & 75.7\(\pm\)4.3 & **86.9**\(\pm\)0.4 \\
 & F1-micro & 61.7\(\pm\)4.1 & 79.4\(\pm\)1.3 & 67.2\(\pm\)3.9 & **80.2**\(\pm\)0.5 \\
 & F1-macro & 56.3\(\pm\)4.6 & 72.9\(\pm\)1.0 & 62.0\(\pm\)3.0 & **73.5**\(\pm\)0.5 \\
\hline
\end{tabular}
\end{table}
Table 4: Ablation study results using different components of SGA.
From the figures, it is evident that the patterns of AUC and F1 score diverge depending on how the hyper-parameters are adjusted. Furthermore, the SGCN performance on certain datasets, such as Slashdot, displays more pronounced variance in both AUC and F1 score relative to the other datasets. On a broader scale, the AUC is fairly consistent under changes to \(\epsilon^{-}_{del}\), \(\epsilon^{+}_{add}\), and \(T\). Notably, as \(\epsilon^{+}_{del}\) or \(\epsilon^{-}_{add}\) rises, the AUC tends to increase. Interestingly, the AUC first increases and then undergoes a minor dip as \(\lambda_{0}\) grows. Regarding the F1 score, its sensitivity is limited with respect to changes in \(\epsilon^{-}_{add}\), \(T\), and \(\lambda_{0}\), with the Slashdot dataset being an outlier. In general, increasing \(\epsilon^{-}_{del}\) and \(\epsilon^{+}_{add}\) boosts the F1 score. For \(\epsilon^{+}_{del}\), however, the optimal value differs across datasets, typically lying between 0.1 and 0.3.
Figure 5: Performance of SGCN: AUC scores (with standard deviation) across six benchmark datasets, evaluated under variations in parameters \(\epsilon^{+}_{del}\), \(\epsilon^{-}_{del}\), \(\epsilon^{+}_{add}\), \(\epsilon^{-}_{add}\), \(T\) and \(\lambda_{0}\).
## 6 Conclusion
Data augmentation is a widely embraced technique for enhancing the generalization of machine learning models, but its application to Graph Neural Networks (GNNs) presents distinctive challenges due to graph irregularity. Our research focuses on data augmentation for signed graph neural networks, introducing a novel framework to alleviate three significant issues in this field: exceptionally sparse real-world signed graph datasets, the difficulty of learning from unbalanced triangles, and the absence of side information in these datasets. In this paper, we propose a novel Signed Graph Augmentation framework, SGA, composed of three key components: encoding structural information using the SGNN model, selecting beneficial candidate edges for modification, and introducing a new perspective on training difficulty to improve the training strategy. Through extensive experiments on benchmark datasets, our Signed Graph Augmentation (SGA) framework proves its versatility in boosting various SGNN models. As a promising future direction, further exploration of the theoretical foundations of signed graph augmentation is warranted, alongside the development of more potent methods to analyze diverse downstream tasks on signed graphs.
|
2301.06410 | Flow imaging as an alternative to pressure transducers through vision
transformers and convolutional neural networks | In this work, we propose a framework whereby flow imaging data is leveraged
to extract relevant information from flowfield visualizations. To this end, a
vision transformer (ViT) model is developed to predict the unsteady pressure
distribution over an airfoil under dynamic stall from images of the flowfield.
The network is capable of identifying relevant flow features present in the
images and associate them to the airfoil response. Results demonstrate that the
model is effective in interpolating and extrapolating between flow regimes and
for different airfoil motions, meaning that ViT-based models may offer a
promising alternative for sensors in experimental campaigns and for building
robust surrogate models of complex unsteady flows. In addition, we uniquely
treat the image semantic segmentation as an image-to-image translation task
that infers semantic labels of structures from the input images in a supervised
way. Given an input image of the velocity field, the resulting convolutional
neural network (CNN) generates synthetic images of any corresponding fluid
property of interest. In particular, we convert the velocity field data into
pressure in order to subsequently estimate the pressure distribution over the
airfoil in a robust manner. | Renato F. Miotto, William R. Wolf | 2023-01-16T12:57:42Z | http://arxiv.org/abs/2301.06410v1 | Flow imaging as an alternative to pressure transducers through vision transformers and convolutional neural networks
###### Abstract
In this work, we propose a framework whereby flow imaging data is leveraged to extract relevant information from flowfield visualizations. To this end, a vision transformer (ViT) model is developed to predict the unsteady pressure distribution over an airfoil under dynamic stall from images of the flowfield. The network is capable of identifying relevant flow features present in the images and associate them to the airfoil response. Results demonstrate that the model is effective in interpolating and extrapolating between flow regimes and for different airfoil motions, meaning that ViT-based models may offer a promising alternative for sensors in experimental campaigns and for building robust surrogate models of complex unsteady flows. In addition, we uniquely treat the image semantic segmentation as an image-to-image translation task that infers semantic labels of structures from the input images in a supervised way. Given an input image of the velocity field, the resulting convolutional neural network (CNN) generates synthetic images of any corresponding fluid property of interest. In particular, we convert the velocity field data into pressure in order to subsequently estimate the pressure distribution over the airfoil in a robust manner.
keywords: Vision Transformers, Convolutional Neural Networks, Unsteady Flows, Dynamic Stall, Surrogate Models
## 1 Introduction
Pressure is a thermodynamic property that plays a key role in fluid mechanics. It is of utmost importance in aerodynamic load prediction [1; 2; 3], noise generation [4; 5], flow instability and turbulence [6; 7; 8], and flow control [9]. The increase in time resolution of velocity field measurements during the last decade has opened the path to obtaining instantaneous pressure fields by combining experimental data with the flow governing equations [10; 11]. Often, however, the available high-speed cameras and particle image velocimetry (PIV) systems do not offer a sampling frequency high enough for unsteady flows of practical interest, which hinders the previous approaches. In case of poor time resolution, the pressure field can be obtained by solving a Poisson equation if the flow is incompressible. Still, the missing time information poses specific constraints on the boundary conditions [12; 13].
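For incompressible flow, this Poisson equation is a standard result, obtained by taking the divergence of the momentum equation; since \(\nabla\cdot\mathbf{u}=0\), the unsteady and viscous terms vanish under the divergence and one is left with

\[\nabla^{2}p=-\rho\,\nabla\cdot\left[(\mathbf{u}\cdot\nabla)\mathbf{u}\right],\]

with Neumann boundary conditions obtained by projecting the momentum equation onto the boundary normal. It is precisely these boundary terms that become constrained when time-resolved information is missing.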
Recent developments in machine learning are paving the way to new research avenues that do not require time-resolved field measurements to estimate pressure. Using velocity-probe measurements with high sampling frequency, Jin et al. [14] combined proper orthogonal decomposition (POD) with recurrent neural networks to reconstruct the spatial distribution of velocity, recovering the time resolution of PIV. Raissi et al. [15] used
a physics-informed neural network (PINN) [16] to construct computationally efficient and fully differentiable surrogates for velocity and pressure fields from the transport of passive scalars. Their technique allows the extraction of quantitative information from available flow visualizations such as the transport of dye or smoke in physical systems and contrast agents in biological systems. PINNs were also used to quantify velocity and pressure fields from tomographic background oriented Schlieren [17] and PIV images [18].
It is not only with regard to the flowfield that pressure is important, but also its distribution on aerodynamic surfaces. For instance, pressure distribution data is crucial for aerodynamic design optimization [19] and many surrogate models have recently been developed to reduce the computational time involved in predicting aerodynamic forces during the design process [20; 21; 22; 23]. In the context of unsteady aerodynamics, pressure distribution data are used to provide insights on the turbulent structures responsible for the far-field noise generated by unsteady airfoils [4; 24; 5] and also to estimate the instant at which the flow separates near the leading edge [25; 26; 27]. This is particularly useful for developing control strategies for dynamic stall [28; 29]. It is noteworthy that other methodologies that do not depend on pressure to indicate flow separation near the leading edge have been recently proposed (see [30] and [31] for example), but they are not easily accessible outside of the laboratory environment.
Recently, Hui et al. [21] used a convolutional neural network (CNN) to predict steady pressure distributions over different airfoil profiles. However, predicting unsteady pressure distributions constitutes a much more difficult and costly problem because the spatio-temporal complexity of unsteady separated flows renders them difficult to investigate and fully characterize. It comes as no surprise that, thus far, unsteady separated flows have defied general analytical solution. During dynamic stall, for example, the generated unsteady aerodynamic loads play a critical role in determining both the mechanical life span and the performance of unsteady lifting devices such as wings, helicopter rotors and wind turbine blades [32]. Controlling these loads requires both an understanding of the unsteady flow conditions and mechanisms for prescribing the ensuing blade-flow interactions.
Given the importance of quantifying unsteady forces in engineering systems, the present work aims to provide an alternative means to obtain them in experimental settings. Dynamic stall is taken here as an example since its flow exhibits a common structure while retaining considerable complexity. Nevertheless, the concepts applied herein can be easily extended to other branches of fluid mechanics where a regression task is involved. To this end, we choose CNNs and vision transformers (ViTs) to study the flowfields from our high-fidelity numerical simulations of dynamic stall [1; 2], owing to their comparatively few connections and parameters [33; 34]. Here, we aim at reusing or transferring information from previously learned tasks to extract quantitative information from available flow visualizations. Different architectures are employed in the hope of finding the mapping relation between the flow structures and the underlying airfoil responses. Based on any fluid property of the unsteady flowfield, a mapping between the existing flow structures and a flow feature of interest is constructed. The ViT- or CNN-based deep learning methods then link the map of fluid properties to the aerodynamic loading, which represents the feature learned from the flowfield. Precisely, we seek a model capable of predicting the distribution of pressure coefficient (\(C_{p}\)) over the airfoil. Thus, the present models could serve as an alternative to pressure taps or surface-mounted pressure transducers in acquiring surface pressure data.
CNNs have been successfully applied to identify features in fluid flows by Strofer et al. [35]. Jin et al. [36] designed a CNN architecture to predict the velocity field around a cylinder using
measurements of the surface pressure as input. Ye et al. [37] used the classical simple network LeNet-5 to predict the pressure on a cylinder from the velocity distributions along its wake. CNNs were also shown to be viable alternatives for detecting shock waves, with less time consumption than traditional methods [38]. This class of network was also employed in a new technique to extract underlying flow features from the original flowfield data, as proposed by Obayashi et al. [39]. These authors made use of the nonlinear decomposition from the CNN process to extract flow features different from those of proper orthogonal decomposition in each mode. Guastoni et al. [40] and Guemes et al. [41] used CNNs to predict two-dimensional instantaneous velocity-fluctuation fields at different wall-normal locations from wall measurements. The studies mentioned above demonstrate the possibilities that CNNs exhibit for feature detection in fluid mechanics.
In addition to CNNs, the emergence of transformer-based architectures could provide further ideas for leveraging surrogate models in fluid dynamics. This line of research, though, is scarcer in the literature. Among the few studies available, we mention those of Yousif et al. [42], Patil et al. [43] and Dang et al. [44]. The first authors demonstrated that transformers can efficiently predict the temporal evolution of extremely coarse velocity fields from turbulent boundary layers. Such predictive ability of transformers was also evaluated by Patil et al. [43]. Dang et al. [44], in turn, employed a ViT [34] along with other techniques to predict turbulence dynamics on coarsened grids. It is still unclear how far the applications of ViTs in fluid mechanics can reach. Above all, more studies need to be carried out to determine their applicability and reliability for fluid flows, not only in turbulence modeling, but also in feature characterization.
The quest to amplify the scope of information extracted in experimental fluid mechanics is the major pursuit of the current work, and the aforementioned regression task of predicting the unsteady aerodynamic loading is only one step towards this goal. Differently from the cited references, here, we propose a framework whereby numerical simulation data is leveraged to extract relevant information from experimental flowfield visualizations. This is somewhat parallel to the recent work by Wang et al. [45; 46] that bridges the gap between experiments and numerical simulations through a data fusion approach; here, however, we embrace computer vision techniques to this end. In particular, we apply ViTs and CNNs to predict the unsteady pressure distribution over an unsteady airfoil. Despite substantial advances in experimental fluid mechanics, using measurements to reliably infer fluid properties, such as density, velocity, pressure or stress fields, is not a straightforward task. This information, on the other hand, comes naturally to CFD practitioners. Hence, in this work, we also address the question of using information learned from numerical simulation datasets to extract fluid properties from experiments that would otherwise be very complicated or even impossible to obtain. With that, not only are we able to predict surface pressure distributions, but we can also obtain the entire pressure field or any other property of interest from visualizations of the velocity field.
## 2 Methodology
In this work, we train a deep neural network model to capture relevant flow structures and establish a mapping relationship between these structures and the underlying airfoil response, here expressed in terms of the unsteady pressure coefficient (\(C_{p}\)) distribution over the airfoil
suction side. Both the location and the morphology of the flow structures with respect to the airfoil must be properly inferred by the neural network for an accurate estimation of the aerodynamic loads. To this end, we use ViTs and CNNs, given their success in identifying flow features.
In addition to computing a regression model to obtain the aerodynamic loads from flow images, we are also interested in expanding the amount of information that can be extracted from experimental datasets. While these two goals are related, we treat them separately: one model is created to obtain the airfoil response, and another to extract further information from the flow, which here consists of images. For this, one needs to train a neural network capable of receiving images of the flow as input and generating another image as output. The motivation for this image-to-image translation stems from the fact that any physical property can be obtained from a numerical simulation, while the amount of information extracted from experiments can be more limited or even too complicated to acquire. Thus, this model can be trained with numerical or experimental data for later use in experimental campaigns. In case it is impossible to obtain annotations in an experiment, the model can be trained only with annotated field samples from CFD simulations for later operation in an experimental setup.
In what follows, we split the methodology explanation in two parts: one for the regression task related to the prediction of the unsteady aerodynamic loads, and the other for the image synthesis. A PyTorch code implementation with the present methodology is available at _link to be added after review_.
### Regression task
#### 2.1.1 Input data
The starting point of the method is the input images of flowfields obtained from numerical simulations or experiments, such as those from a 2D cross section. The images are obtained from the high-fidelity simulations of Miotto et al. [1; 2] in terms of spanwise-averaged flowfields. In these references, the authors were interested in understanding the mechanisms of dynamic stall onset and the conditions for pitch and plunge equivalence to occur. Any physical property of interest could be used as input to the network. Here, we train several models by varying the number of outputs, which is determined by the desired task (further discussion provided in Sec. 2.1.2), and the type of fluid property considered as input. One model uses the \(u\)- and \(v\)-velocity components as input, while another uses the \(z\)-vorticity field, and a last one uses the pressure coefficient distribution \(C_{p}\). The rationale for choosing these quantities is that the velocity and vorticity fields can be directly obtained experimentally, through PIV techniques, and the pressure is closely related to the airfoil aerodynamic loads. Figure 1 shows examples of spanwise-averaged snapshots used as an input to the CNNs and the ViT.
Table 1 presents the simulations used to train the neural network, and also employed to assess the capability of the model to interpolate and extrapolate between and beyond simulation parameters. The highlighted rows have no meaning other than to facilitate visualization. In total, five distinct datasets (labeled from DS1 to DS5) are built, all containing 600 \(\times\) 600 RGB images of a given physical property. The simulations used for each dataset are identified by the colored circles in Table 1. These images consider all simulations of dynamic stall cases reported in Refs. [2; 1], which include a periodic plunging airfoil and constant ramp pitching and plunging airfoils for Mach numbers 0.1 and 0.4. When generating these images, it is important to keep
a fixed range for the contour levels of the property of interest. We used the values \([-2,\ 2]\) for both velocity components, \([-5,\ 5]\) for \(z\)-vorticity and \([-6,\ 0]\) for \(C_{p}\). The velocity components and length scales are non-dimensionalized by the freestream velocity and chord length. Finally, this collection of images in each dataset DS was shuffled and arbitrarily divided into groups of cardinality \(0.8\,n(\text{DS})\), \(0.1\,n(\text{DS})\) and \(0.1\,n(\text{DS})\) to form the training, validation, and test sets, respectively.
Data augmentation is used to artificially increase the size of the training set. Realistic variants of each training instance are generated by shifting, rotating, and resizing every picture through preprocessing layers [47]. The transformations applied to the input images are only geometrical and, therefore, preserve the semantics of the images. Moreover, the dynamic stall
\begin{table}
\begin{tabular}{c c c c c c|c c c c c}
\hline \hline
\multicolumn{6}{c|}{Simulation parameters} & \multicolumn{5}{c}{Datasets} \\
\hline
\# & Reynolds & Mach & Motion & Rate / Freq. & No. images & DS1 & DS2 & DS3 & DS4 & DS5 \\
\hline
1 & 60,000 & 0.1 & Plunge periodic & 0.25 & 2049 & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \\
\multicolumn{11}{c}{\(\vdots\)} \\
19 & 200,000 & 0.1 & Plunge ramp & 0.05 & 660 & & & & & \\
20 & 200,000 & 0.1 & Plunge ramp & 0.10 & 2800 & & & & & \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Databases of high-fidelity simulations of dynamic stall employed in this work and cardinality of the datasets. The colored circles identify the simulations that make up each dataset.
Figure 1: Examples of images used as input to the models. These images correspond to different flow properties for the same instant of a given flow.
vortex (DSV) and the entire airfoil are fully framed in all generated instances.
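A minimal sketch of such a geometric augmentation pipeline is shown below using torchvision transforms; the specific ranges and the ImageNet normalization statistics are illustrative assumptions rather than the exact values used in this work.

```python
import torchvision.transforms as T

# Geometric augmentations (shift, rotation, rescale) that preserve the
# semantics of the flow images, followed by the resizing and normalization
# expected by ImageNet-pre-trained backbones.
train_transforms = T.Compose([
    T.RandomAffine(degrees=10, translate=(0.05, 0.05), scale=(0.9, 1.1)),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])
```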
#### 2.1.2 Network architectures
In this work, we seek regression models capable of predicting one or more scalar quantities. In particular, a model is built for predicting the distribution of \(C_{p}\) along the airfoil suction side. The convolutional layers process the flowfield information by extracting features, which are gathered by the fully connected layers to obtain the aerodynamic loadings. In order to avoid a highly biased model, meaning one too simple to learn the underlying structure of the data, we use transfer learning. It is a useful approach to speed up training considerably while requiring significantly less training data to bootstrap computer vision models [48], and consists of using pre-trained models to leverage features learned on one problem and apply them to another. Here, we use several architectures pre-trained on the ImageNet dataset, which are readily available in deep learning libraries such as TensorFlow [49; 50] and PyTorch [51]. Even though pictures related to fluid flow problems are absent from ImageNet, the method transfers well to the present flowfields involving dynamic stall.
Some CNN architectures are selected to speed up the training process and yield more accurate results in solving the regression problems of interest. Among them, we test the VGG-11-BN [52], the Inception-V3 [53], the ResNet-50 [54], and the EfficientNet-B4 [55]. For the transformer, we employ the ViT-B/16 [34]. It is important to mention that these networks were trained on classification problems, whereas, here, we are interested in a regression task. Therefore, besides the objective function, it is also necessary to change the top layer of the network to adjust them to our problem.
The present implementation will depend on the number of outputs we want the neural network to have. In this sense, the fully-connected layers are designed to meet our objective of finding the distribution of pressure coefficient over the airfoil suction side. For this, the output needs to be a large array to store the entire load distribution over the airfoil surface. Here, we interpolate the results to 300 points uniformly distributed along the airfoil suction side to keep the data independent of the mesh used in the simulation. Hence, for the pressure distribution task, the output layer is a linear module with 300 units. Finally, the pre-trained models assume that the images are pre-processed in a specific way. Hence, the pre-processing step is not only used for data augmentation, but also to properly scale the pixel range or resize the picture to the size expected by the original model.
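As an illustration, adapting a pre-trained backbone to this 300-point regression amounts to swapping its classification head; the snippet below is a sketch, and the torchvision weight identifiers are assumptions rather than details taken from the paper.

```python
import torch.nn as nn
from torchvision import models

# ResNet-50: replace the ImageNet classification layer with a 300-unit
# regression head (one output per point on the airfoil suction side).
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
resnet.fc = nn.Linear(resnet.fc.in_features, 300)

# ViT-B/16: the classification head lives in `heads.head`.
vit = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
vit.heads.head = nn.Linear(vit.heads.head.in_features, 300)
```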
#### 2.1.3 Training procedure
After instantiating the pre-trained models, the top layer of the fully connected block is replaced by a linear layer with the desired number of outputs, as mentioned in Sec. 2.1.2. One could also define an entirely new fully-connected block to add on top of the convolutional base if desired. Then, feature extraction is performed by keeping the parameters of the model up to this last layer unchanged and training it for 30 epochs. This value is chosen arbitrarily; this step prevents the large gradient updates triggered by the randomly initialized weights from wrecking the learned weights in the convolutional base, and it also improves computational efficiency. Subsequently, the network is fine-tuned by training all layers for another 170 epochs at a very slow learning rate to make sure that the magnitude of the updates will not wreck the previously learned features.
Using the framework of maximum likelihood estimation, the mean square error (MSE) cost function is preferred for regression problems. However, here we use the logarithm of the hyperbolic cosine function (logcosh) as it is not strongly affected by occasional wildly incorrect predictions. The NAdam optimizer [56] is also employed with learning rate of \(1\times 10^{-5}\) accompanied by a scheduler that reduces the learning rate by a factor of 0.6 once learning stagnates for 13 epochs. An early stopping with patience of 50 epochs based on the MSE metric is set to prevent unnecessary computation. In all cases, the batch size is fixed at 28.
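A sketch of this recipe follows, continuing the previous snippet (`resnet` is the re-headed backbone); the freeze/unfreeze split mirrors the feature-extraction and fine-tuning phases described above.

```python
import math
import torch
import torch.nn.functional as F

def logcosh_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Numerically stable log(cosh(x)) = x + softplus(-2x) - log(2):
    quadratic for small residuals, linear for large ones (outlier-robust)."""
    diff = pred - target
    return torch.mean(diff + F.softplus(-2.0 * diff) - math.log(2.0))

# Phase 1 (30 epochs): feature extraction, only the new head is trained.
for p in resnet.parameters():
    p.requires_grad = False
for p in resnet.fc.parameters():
    p.requires_grad = True

# Phase 2 (up to 170 epochs): unfreeze everything and fine-tune slowly.
for p in resnet.parameters():
    p.requires_grad = True
optimizer = torch.optim.NAdam(resnet.parameters(), lr=1e-5)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.6, patience=13)  # call scheduler.step(val_mse) per epoch
```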
### Image synthesis
Image synthesis with supervised machine learning is the process of artificially generating images that contain some particular desired content associated with a specific label. The most prominent machine learning models for generating content are generative adversarial networks (GANs) [57]. GANs are based on game theory, where two networks compete with each other to generate the best output: one neural network, called the generator, generates new data instances, while the other, the discriminator, evaluates them for authenticity. Differently from the aforementioned networks used for the regression task, GANs are fully convolutional networks (FCNs) [58], which are similar to common convolutional neural networks except that the fully connected layers are typically replaced by transposed convolutional layers [59], which upsample the input feature map to a desired output feature map using learnable parameters.
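As a concrete, generic PyTorch illustration of this learnable upsampling (not code from the present implementation):

```python
import torch
import torch.nn as nn

# A kernel-2, stride-2 transposed convolution doubles the spatial
# resolution while learning its interpolation weights.
up = nn.ConvTranspose2d(in_channels=64, out_channels=32, kernel_size=2, stride=2)
x = torch.randn(1, 64, 16, 16)
print(up(x).shape)  # torch.Size([1, 32, 32, 32])
```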
Certainly, the presence of a discriminator in a GAN provides pixel-level differences between output and target that emerge from a deeper understanding of the images. However, GANs are often difficult to train and tune [60], and a simpler approach to synthesize an image is to use the U-Net [61], a network developed to work with fewer training images and produce accurate biomedical image segmentations. This network resembles an encoder-decoder structure, but with the addition of skip connections that transfer fine-grained information from the low-level layers of the contracting path to the high-level layers of the expanding path. Owing to its simplicity, it is the architecture used in the present work. Usually, the U-Net combines a pixel-wise Softmax over the final feature map with the cross-entropy loss function. Here, however, the Softmax is removed and the MSE loss function is employed instead.
The velocity field is a good candidate for an input since it can be directly obtained from experiments using PIV. So, we build a U-Net model that takes images of \(u\)- and \(v\)-velocity components, concatenated channelwise. The outputs can be any fluid property of interest. Here, we are interested in the spatial distribution of pressure coefficient, \(C_{p}\), as output for its importance in engineering systems, as discussed in Sec. 1. Both input and output images have resolution of \(256\times 256\). Similarly to what is done with regression models, data augmentation is employed to artificially increase the number of training samples. This augmentation only applies translation, horizontal flipping and scaling operations. In all training instances, the airfoil and dynamic stall vortex are fully framed. To investigate the interpolation and extrapolation capabilities of the model, the same datasets from Table 1 are used to train the network.
The U-Net is trained for 100 epochs using the NAdam optimizer [56] with a learning rate of \(3\times 10^{-4}\) and a batch size of 8 to minimize the MSE objective. Similar to what was mentioned before, a scheduler that reduces the learning rate by a factor of 0.6 once learning stagnates for 13 epochs is also employed and an early stopping with patience of 50 epochs based on the MSE metric is set to prevent unnecessary computation.
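A minimal sketch of this training setup is given below; `unet`, `train_loader`, and `validate` are assumed placeholder names, and the velocity pair is stacked channelwise as a two-channel input image.

```python
import torch

optimizer = torch.optim.NAdam(unet.parameters(), lr=3e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.6, patience=13)
mse = torch.nn.MSELoss()

for epoch in range(100):
    unet.train()
    for uv, cp in train_loader:      # uv: (B, 2, 256, 256), cp: (B, 1, 256, 256)
        optimizer.zero_grad()
        loss = mse(unet(uv), cp)     # pixel-wise regression, no Softmax
        loss.backward()
        optimizer.step()
    scheduler.step(validate(unet))   # validation MSE drives the scheduler
```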
## 3 Results
In this section, we present the results obtained with our CNN models and the ViT, starting with the performance of each model on the \(\bullet\) DS1 (see Table 1) training and validation sets, as shown in Fig. 2. In this figure, the opaque lines represent the training set, while the semi-transparent lines represent the validation set. At epoch 30, a discontinuity appears in the plots, corresponding to the moment at which fine-tuning starts (see Sec. 2.1.3 for details). After training, it is evident that the ViT-B/16 network performs better than the other architectures on both the training and validation sets. EfficientNet-B4, in turn, is at the opposite extreme. Note that the loss and MSE of Inception-V3 and VGG-11-BN are lower in validation than in training. This is expected and occurs due to the presence of either dropout or regularization in the networks, which inflates their training loss. As ViT-B/16 presented the best performance in this task, all the following results will be shown only for this network. Notice that the ViT-B/16 and the ResNet-50 could be trained even further to improve their results, but we decided to limit training to 200 epochs for simplicity, and also considering that satisfactory results were obtained. Training duration varies according to the architecture and the size of the dataset; as an example, for \(\bullet\) DS1, it takes from 9 to 12 hours on an NVIDIA Tesla A100 for each network.
Results for the \(C_{p}\) distribution over the airfoil suction side for some selected snapshots belonging to the \(\bullet\) DS1 test set are compared against their true values in Fig. 3. The model used \(u\)- and \(v\)-velocity components as input, but the images reproduced in the graphs only refer to the corresponding \(u\)-velocity field. The reason behind using velocity as input is that it can be obtained directly from PIV measurements. An excellent agreement between the actual computed distribution of \(C_{p}\) and that predicted by the network is observed, showing that the ViT is capable of extracting relevant features from the input image and processing the pressure information. This information is then translated into surface measurements. A similar conclusion holds for the CNNs (not shown here). The implications of these results are broader: they show that, if the model generalizes to other flow or motion parameters, pressure sensors can be partially or totally replaced by a ViT- or CNN-based model. This would
Figure 2: Evaluation of the logcosh loss (left) and mean squared error (right) for all networks on the training (opaque lines) and validation (semi-transparent lines) sets of \(\bullet\) DS1 using images of \(u-\) and \(v\)-velocity components as input to predict the airfoil \(C_{p}\) distribution.
have a big impact on experimental campaigns because this type of measurement requires the installation of several probes, which can be complicated to assemble or cost-prohibitive. Thus, if numerical results from similar flows of interest are available, such data can be used to train models for later use in experimental setups, replacing the need for pressure transducers. In the absence of numerical data, in turn, experimental measurements need to be obtained to train the models. However, owing to its ability to generalize to parameter variations, the underlying training process would require only a few experimental runs and, once trained, a model can be reused in future studies. In this case, the transducers can be partially replaced by the models.
### Generalization
The previous arguments hold under the assumption that the model is able to generalize to the flow and motion parameters. Obviously, this hypothesis needs to be verified. To answer this question, we begin by investigating the model's ability to predict the pressure distribution in a flow with an intermediate Mach number. We train the model on the \(\bullet\) DS1 dataset, which contains flow snapshots for \(M_{\infty}=0.1\) and \(M_{\infty}=0.4\). Then, a test is performed using the snapshots from simulation #10, which consists of an SD7003 airfoil under the same pitch-up maneuver reported by Miotto et al. [2], but at freestream Mach number \(M_{\infty}=0.2\). Despite never having seen an image of the dynamic stall problem for this Mach number, the network is capable of predicting the unsteady surface pressure distribution, as shown in Fig. 4. This figure shows results for three selected snapshots from simulation #10. The \(u\)-velocity field is shown in the background, and the instantaneous surface \(C_{p}\) distributions are presented for the CFD calculation and the network model. One can observe that the model predicts the main trends of the pressure distribution with good accuracy and, hence, is able to interpolate between different compressible regimes.
The model capability to extrapolate between different airfoil kinematics and flow parameters is also sought. However, this is a more challenging task because depending on the extrapolation parameters, the flow features can be very different from what the network was trained for. For example, if the Mach number is significantly increased, the mechanism of dynamic stall formation will involve shock waves [62; 32] which have never been seen by the neural network. Hence, the model is not expected to be able to correctly interpret images with semantics very different from those on which it was trained. Difficulties in extrapolating the source domain
Figure 3: Pressure coefficient distribution over the airfoil suction side and corresponding snapshot of \(u\)-velocity field (in background) of arbitrarily chosen instances from the \(\bullet\) DS1 test set, never seen by the network.
were also observed by Erichson et al. [63], despite these authors employing a shallow architecture to solve a different task from ours. Later in this work (Sec. 3.3), we will explore this subject considering different combinations of simulations in the construction of the training set, through which it is possible to see how the model generalization is impacted when some semantics are not present during training.
It was shown that the ViT model can interpolate between different compressible flows for Mach numbers \(0.1\) and \(0.4\). Now, the capability of the network is tested for extrapolation of different Mach numbers using the dataset \(\bullet\) DS3 (see Table 1) to train the model. This dataset consists of simulations with \(M_{\infty}=0.1\) and \(0.2\), and the extrapolation is performed for the \(M_{\infty}=0.4\) flow from simulation #3. Results are shown in Fig. 5, where it can be seen that, unlike the previous interpolation results, the deviations of the predictions from the true values are larger for the extrapolation task, especially during the transport of the dynamic stall vortex. However, the main trends are captured by the ViT prediction.
The extrapolation capability of the model is also tested for a higher Reynolds number.
Figure 5: Ability to extrapolate for different compressible regimes. The model is trained with the dataset \(\bullet\) DS3 (\(M_{\infty}=0.1\) and \(0.2\)) using \(u\)- and \(v\)-velocity components as input, and evaluates the surface \(C_{p}\) distribution for arbitrary snapshots for simulation #3 with \(M_{\infty}=0.4\).
Figure 6 shows the results of the network trained on the \(\bullet\) DS1 dataset with Reynolds number \(Re=60,000\). The model is tested with simulation \(\#17\) with \(Re=200,000\). Overall, the predicted pressure load distributions follow the same trends as the computed CFD curves. In the leftmost plot of the figure, a step-like pattern appears in the true pressure distribution for \(0.1<x/c<0.2\), and this feature is smoothed out by the network. This pressure jump pattern is due to fine-scale instabilities that take place in a separation bubble in the vicinity of the leading edge. This phenomenon was studied in detail by Benton and Visbal [64; 65] and is also present in the ramp motion simulations at \(Re=60,000\)[2]. However, as the Reynolds number increases to \(Re=200,000\), the separation bubble becomes smaller [65] and the resolution of the input image, which has size \(224\times 224\), is not sufficient to represent the fine scales, causing the predicted labels to be smoothed out 1.
Footnote 1: Although each network expects a specific image size, there is room for improvement in capturing fine-grained scales by zooming into the region of interest. In the results shown here, the airfoil did not occupy the entire width of the image, as shown in Fig. 1. In fact, the background images in the results were enlarged from the original, non-resized, \(600\times 600\) input image (Fig. 1) to aid in the interpretation of the plots. So, these are not the actual network inputs.
### Using different fluid properties as input
In the previous section, we verified that the trained model is able to interpolate and extrapolate for different flow parameters using velocity field images as input. However, it is observed that the extrapolated results have some discrepancies from the true \(C_{p}\) solutions, especially in the more advanced stages of the dynamic stall vortex. This could be due to a high-variance model or an irreducible error in the velocity data itself. By irreducible data error, we are not saying that the velocity field is incorrect, but rather that the images are inherently too noisy for the regression task. To check whether overfitting occurs due to model complexity, we compared the predictions of all architectures mentioned in Section 2.1.2 using velocity components as input and found very similar trends (not shown here for brevity). Thus, the possibility of the low extrapolation capacity being related to the complexity of the model was ruled out. This suggests that using another fluid property as input to the network can lead to better generalization.
Figure 6: Ability to extrapolate for different Reynolds numbers. The model is trained on the \(\bullet\) DS1 (\(Re=60,000\)) using \(u\)- and \(v\)-velocity components as input, and evaluates the surface \(C_{p}\) distribution for arbitrary snapshots of simulation \(\#17\) with \(Re=200,000\).
As will be shown below, the ViT inaccuracy in extrapolating the pressure distribution is compensated by directly using the pressure field as input. We also test the \(z\)-vorticity field, but the obtained results show worse agreement (not shown here) because the vorticity field has a wide range of magnitudes throughout the entire domain. For example, near the wall, the no-slip boundary condition implies a region of high vorticity that causes saturation of image levels. This poses a difficulty for model inference in cases that go beyond the source domain (_i.e._, the training data), as the saturation close to the wall does not allow an accurate estimate of the vorticity value in that region. Precisely, our learned task now faces different conditional distributions between the source and target domains [66]. Although the contour-level bounds could be increased to avoid saturation near the wall, relevant information about the separated flow would be lost, which is not a positive trade-off.
Figures 7 and 8 compare the results obtained by two models, one trained with velocity components and another trained with pressure. Comparisons are provided with the true flow values for extrapolations in terms of Mach and Reynolds numbers. For the Mach number extrapolation (Fig. 7), the dataset \(\bullet\) DS3 (\(M_{\infty}=0.1\) and \(0.2\)) is used for training the models and we use simulation #13 as a target with \(M_{\infty}=0.4\). The model employed in the Reynolds number extrapolation is trained with \(\bullet\) DS1 (\(Re=60,000\)) and tested with simulation #18, which is performed for \(Re=200,000\). As can be seen from these figures, predicting the surface distribution of \(C_{p}\) when using the pressure coefficient field as input guarantees better agreement with the expected results. The results of the Mach number extrapolation show excellent agreement for all images, and those obtained for the Reynolds number extrapolation are also improved when the pressure field is used as input instead of the velocity components.
From Fig. 8, we still see that the laminar separation bubble is not captured (leftmost figure), but this is expected as the image resolution is the same as before. In the second plot of the same figure, the low-pressure region of the vortex core is close to the airfoil surface, and the saturation of the image makes it impossible to correctly predict the load along this region. This is a concept shift problem [67; 68], analogous to the one encountered when using a model based on \(z\)-vorticity, as discussed above. However, unlike what happens with the vorticity field, the low-pressure vortex core that leads to the image saturation does not occur
Figure 7: Ability to extrapolate for different compressible regimes. The model is trained with \(\bullet\) DS3 (\(M_{\infty}=0.1\) and \(0.2\)), and evaluates the surface \(C_{p}\) distribution for arbitrary snapshots of simulation #13, with \(M_{\infty}=0.4\).
at every instant of time, which makes the problem simpler to solve. One could, for example, increase the range of the color scale and retrain the model for the new values. It would also be possible to create a system that accepts the maximum and minimum limits of the scale as input metadata, making it robust to images with different color scales. This metadata could be fed to the fully-connected layers of the existing architecture, or else a hypernetwork could be created that uses this metadata to return the parameters of the fully-connected layers.
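Purely as an illustration of what such metadata conditioning could look like (an option suggested here but not pursued), a sketch follows; all names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ScaleAwareHead(nn.Module):
    """Regression head that receives the colour-scale limits (min, max)
    alongside the backbone features, making predictions scale-aware."""
    def __init__(self, feat_dim: int, n_outputs: int = 300):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(feat_dim + 2, 512),
            nn.ReLU(),
            nn.Linear(512, n_outputs),
        )

    def forward(self, features: torch.Tensor, scale_minmax: torch.Tensor) -> torch.Tensor:
        # features: (B, feat_dim); scale_minmax: (B, 2) holding (min, max)
        return self.fc(torch.cat([features, scale_minmax], dim=1))
```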
Although the scale range information is an interesting solution to the color saturation problem, its implementation is beyond the scope of this work. Here, we want to keep the model as simple as possible as it is a proof of concept. Indeed, in the absence of saturated regions near the airfoil surface, the present ViT-based model proves to be an interesting alternative to physical measurement devices. As the results demonstrate, the model is robust with respect to changes in Mach and Reynolds numbers, and this stems from the fact that, despite the flow being different from those with which the network was trained, there is a common high-level semantics between them. Furthermore, we see that when the \(C_{p}\) field is used as input, the extrapolation capacity of the system improves significantly. The same also occurs with interpolation, as we can see in Fig. 9, where an excellent agreement with the expected result is observed.
As the results suggest, the success of our predictor depends on its capacity to learn intermediate concepts between raw input and label that are general enough to make sense across a wide range of different but related domains. The intuition is that, assuming there are latent variables that correspond to the true explanatory factors of the observed data, answering questions and learning dependencies in the space of these latent variables is likely to be easier than answering questions about the raw input. In fact, past studies reveal that deep networks learn more transferable representations that disentangle the explanatory factors of variations behind data [69; 70]. This helps explain why our deep model is more successful at extrapolating results when compared to the shallow architecture of Erichson et al. [63]. However, deep representations can only reduce the cross-domain distribution discrepancy, but not remove it [71]. Therefore, recent research on deep domain adaptation further embeds adaptation modules in deep networks that explicitly minimize a distribution discrepancy measure, or use adversarial training to align source and target domains in the representation space [72; 68].
In the present work, we choose not to use any technique for distribution matching because
Figure 8: Ability to extrapolate for different Reynolds numbers. The model is trained with \(\bullet\) DS1 (\(Re=60,000\)), and evaluates the surface \(C_{p}\) distribution for arbitrary snapshots of simulation \(\#18\) with \(Re=200,000\).
this involves challenges that are beyond the objectives prescribed here. In fact, domain adaptation methods for high-dimensional regression are still in an early stage and lack an effective approach. For instance, typical domain adaptation methods developed for classification are based on the clustering assumption to get confident decision boundaries and under-perform in regression tasks [73]. One reason for this lies in the fact that regression performances are not robust to feature scaling that occurs in domain adaptation classification methods [74]. Thus, applying an improper domain adaptation technique may be ineffective or even harmful in our regression setting.
To overcome the absence of a more principled approach to match the feature distribution across domains, here we leverage the data itself, using multiple simulations, to learn representations that separate the various explanatory sources. Doing so should yield a significantly more robust representation for the complex and richly structured variations that exist in dynamic stall cases. What is shown next is the importance of including possible semantic variations in the training data for a good generalization. Later, we will also discuss alternatives for using this tool in experimental settings where the pressure field cannot be easily obtained to serve as an input to the network.
### Including sources of variability
Knowing that pressure is a much more robust property for model generalization, we will use it from now on to show how CNN and ViT predictions are impacted by adding or removing flow features from the training set. For instance, any rescaling of the color map caused by the intensity of the dynamic stall vortex, as well as changes in its topology and location, is a known nuisance factor and should be learned at the outset. This is especially important since the continuous interpretation of an image changes according to these factors, and networks do not transfer well across different feature and label distributions [75; 76; 66; 77]. Fortunately, the potential nuisance variability is usually already known in non-stationary aerodynamic problems and can be dealt with early on, without the need to learn it through complex adversarial training [78; 68]. In fact, we know that the flow and the underlying aerodynamic response are influenced by the effects of compressibility [62; 32; 1] and Reynolds number [79], as well as the airfoil profile
Figure 9: Ability to interpolate between different compressible regimes. The model is trained on \(\bullet\) DS1 set (\(M_{\infty}=0.1\) and \(0.4\)), and evaluates the \(C_{p}\) distribution for arbitrary snapshots from simulation \(\#2\) with \(M_{\infty}=0.2\).
[80] and its kinematics [79, 2]. Thus, including these sources of variability in the training set improves generalization.
What we just mentioned above might lead one to believe that any nuances would need to be included in the training set, making the learning process impractical for the task at hand. But this is not entirely true. Note that, although we never varied the Reynolds number during training, the ViT prediction was very accurate in extrapolating this parameter. This stems from the fact that the marginal distributions (distributions of features) of the source and target domains are very similar. Colloquially, this means that the neural network has learned flow features that are still present in the target domain (the Reynolds 200,000 flow, in this case). So, let \(X\) and \(Y\) denote the (flow) features and target (aerodynamic response), respectively, forming a causal system in which \(X\) is the cause for \(Y\). To reduce domain discrepancy, we seek invariant components \(\mathcal{T}(X)\) that have a similar probability distribution \(P(\mathcal{T}(X))\) across domains. However, it is not clear whether the conditional probability \(P(Y|\mathcal{T}(X))\) in different domains is also similar when \(P(Y|X)\) changes [81]. In other words, it is not possible to guarantee that the learnt representation \(\mathcal{T}(X)\) will carry any relevant information for predicting \(Y\) in the target domain. In fact, learning invariant representations alone is not sufficient to guarantee good generalization on the target domain, even after minimizing the source error [77, 72]. It is also necessary to align the label distributions, which without target labels is still an open question.
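For clarity, in the standard domain-adaptation taxonomy (stated here as an aside, with subscripts \(s\) and \(t\) denoting source and target domains),

\[\underbrace{P_{s}(X)\neq P_{t}(X)}_{\text{covariate shift}},\qquad\underbrace{P_{s}(Y)\neq P_{t}(Y)}_{\text{label shift}},\qquad\underbrace{P_{s}(Y\,|\,X)\neq P_{t}(Y\,|\,X)}_{\text{concept shift}},\]

and the argument above is that matching \(P(\mathcal{T}(X))\) across domains addresses only the first of these discrepancies.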
To illustrate how divergence of the marginal label distribution undermines generalizability, even in the presence of domain-invariant features, we base our discussion on airfoil kinematics variation. We begin by considering two different models, one trained on the \(\bullet\) DS1 dataset and the other trained on \(\bullet\) DS2. Both datasets contain simulations of an SD7003 airfoil performing periodic and ramp-type motions; however, in \(\bullet\) DS2 we add an extra simulation with reduced frequency 0.5 2. Notice that the only reduced frequency present in \(\bullet\) DS1 is 0.25. Then, in Fig. 10 we operate these systems on a simulation with reduced frequency 0.5, but at a different Mach number from that used in \(\bullet\) DS2. At such a higher reduced frequency, the flow separates over the pressure side of the airfoil, as observed in the leftmost plot of the figure. The fact that the separation is not symmetrical is due to the airfoil having a static angle of attack of 8 deg. in our simulations 3.
Footnote 2: Reduced frequency is a parameter used for the periodic motion and is defined as \(k=\omega H/(2U_{\infty})\), where \(\omega\) is the circular frequency, \(H\) is the amplitude of the plunge (we used \(H=0.5c\) in all periodic simulations), and \(U_{\infty}\) is the freestream velocity. This is different from the parameter used for ramp motions. In this case, we use the pitch rate (indicated in Table 1), which is defined as \(\Omega^{+}=\omega c/U_{\infty}\). Here, \(\omega\) is the angular velocity of the airfoil and \(c\) is the airfoil chord.
Footnote 3: The reason for the dataset being formed by simulations with this angle of attack is that they were compared with the literature in other works by the authors.
When the flow separates on the airfoil pressure side, the suction is diverted to this side causing the pressure coefficient to become positive near the top of the leading edge (see the leftmost plot in Fig. 10). Two problems occur here: (1) The color range for the pressure coefficient has its maximum set at 0, which leads to a concept shift due to color saturation; and (2) there is a misalignment of label distribution between source and target domains. Precisely, the label shift has a particular form of high-dimensional interval shift. This means that, although this simulation presents flow structures similar to those of the training set (domain-invariant features), the network trained on \(\bullet\) DS1 never experienced a positive value of \(C_{p}\) on the airfoil
suction side, as seen in the leftmost plot. In other words, many probes placed approximately within \(x/c=[0,\ 0.4]\) experience an interval shift. As a consequence, the divergence in the label distribution causes the model to fail to accurately predict the forces in the region close to the leading edge. Such misalignment is also observed in the second plot of Fig. 10. In this case, the network never experienced a large and strong vortex structure so early in the dynamic stall development, as occurs with increasing reduced frequency. Despite this, the model trained on \(\bullet\) DS1 still predicts the loading very well. After including one case of such variability in the dataset (\(\bullet\) DS2), the generalization improves. Finally, in the rightmost plot, both models perform similarly since this image is aligned with the source distribution.
It is important to highlight that, even in a flow with kinematic or fluid parameters very different from those with which the network was trained, there may be an intersection between the distribution of the flow features and their respective induced aerodynamic responses where the prediction will be accurate. To get an idea of when such an intersection occurs, let us revisit the networks trained with \(\bullet\) DS1 and \(\bullet\) DS2 running inference on simulation \(\#5\), but now with a different objective in mind: predicting the aerodynamic coefficients. Notice that all the results we have shown so far are for the \(C_{p}\) distribution over the suction side of the airfoil. The reason for building a surrogate model capable of estimating this quantity revolves around the complexity and/or cost of acquiring it in experimental settings. Nevertheless, once the network is capable of extracting features from flow visualizations, its dense layers can be designed for other tasks [70], such as predicting aerodynamic coefficients (lift, drag and quarter-chord pitch moment - \(C_{l}\), \(C_{d}\) and \(C_{m}\), respectively). This is done by changing the number of output layer nodes to \(3\) (one for each coefficient) and retraining the model.
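In a timm-style PyTorch implementation, this head change amounts to replacing the final linear layer of the pre-trained ViT-B/16. This is a minimal sketch assuming the timm library rather than the authors' exact code:

```python
import timm
import torch.nn as nn

model = timm.create_model("vit_base_patch16_224", pretrained=True)
# Replace the single-task head with three outputs for C_l, C_d and C_m,
# then fine-tune on the aerodynamic-coefficient targets.
model.head = nn.Linear(model.head.in_features, 3)
```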
The result is shown in Fig. 11, from which we see that the model trained on \(\bullet\) DS1 is not able to reconcile the lift, drag and pitch moment coefficients for the entire motion history for a case of a higher reduced frequency (simulation \(\#\) 5 was never seen by the network). However, in the interval from \(8\lessapprox t\lessapprox 11\), the source and target distributions are better aligned and the predictions are in good agreement. In fact, this model never experienced a situation where very low or negative lift and drag occur. These values come from the low pressure core that is formed
Figure 10: Effects of varying reduced frequency. The model is trained on \(\bullet\) DS1 (reduced frequency 0.25) or \(\bullet\) DS2 (adding a simulation with reduced frequency 0.5), and evaluates \(C_{p}\) over the airfoil suction side for simulation \(\#5\) with reduced frequency 0.5.
on the pressure side of the airfoil that moves at this high reduced frequency, as mentioned earlier. Because it is a very different flow condition from those present in \(\bullet\) DS1, the network fails to predict the motion periods in which the flow dynamics shifts from the suction side of the airfoil to the pressure side. After adding one example of this semantic nuisance (\(\bullet\) DS2), the agreement between the predicted and true labels in Fig. 11 improves considerably, especially for the pitch and drag coefficients. At this point, it is interesting to note that both interpolation and extrapolation of the source domain lead to noticeable errors in the predicted drag, although the main trend is preserved. This error is probably related to the role of viscous effects in the calculation of the drag force, where they are more prominent.
To reinforce our arguments about including sources of variability, let us consider other variations of airfoil motion in the construction of the dataset. Here, we pose the following question: how well does the network infer loads for airfoils in ramp-like motions if we train it only on periodic motions (\(\bullet\) DS4), and vice-versa (\(\bullet\) DS5)? The flowfields are fundamentally different between these two types of motions. For ramp motions, the airfoil always presents a separated flow along the airfoil suction side, even before starting its movement. Furthermore, at high effective angles of attack, a strong trailing-edge vortex is formed [2]. For the periodic simulations, in turn, the airfoil movement induces flow reattachment and the trailing-edge vortex is much weaker [1]. So, although in both cases there are common structures in the flowfield, such as the dynamic stall vortex and the trailing-edge vortex, the underlying aerodynamic response (marginal label distribution) can be substantially different for some samples taken from them.
Figure 12 shows results for the neural network trained only with periodic data (\(\bullet\) DS4) running inference on simulation #6, which consists of a ramp movement. The first two plots of the figure demonstrate that the emergence and passage of the dynamic stall vortex under the airfoil are very well captured by the neural network. However, the model fails to estimate the airfoil loading from the moment the trailing edge vortex develops, as shown in the rightmost plot. Here, both the misalignment between the source and target domains and the trailing-edge vortex saturation problem impair generalization. Despite this, the pressure distribution trend is still maintained.
A similar conclusion is obtained when the model trained only with ramp data (\(\bullet\) DS5)
Figure 11: Effect of varying reduced frequency. The model is trained on \(\bullet\) DS1 (reduced frequency 0.25) or \(\bullet\) DS2 (adding a simulation with reduced frequency 0.5), and evaluates lift, drag and pitch moment coefficient for simulation #5 with reduced frequency 0.5.
operates on a periodic case. In this sense, Fig. 13 shows the results for snapshots of simulation #3, in which the two plots on the right show that the passage of the dynamic stall vortex is well represented by the network. However, when the flow is attached (the leftmost plot), the network prediction fails due to the presence of both a covariate shift and an interval shift. As we observed, these shifts often arise in practical regression settings that require moderate extrapolation and interpolation.
### Synthesizing fluid properties
In Sec. 3.2, we showed that the pressure coefficient field is preferable to velocity when it comes to building our regression model. But the role that pressure plays in flow analyses goes beyond simply obtaining aerodynamic loadings. In this section we will show how neural networks can be used to extract the pressure field from the fluid flow velocities. This is important because, while this variable is readily available in numerical simulations, its determination in an experimental setting is more challenging, particularly in compressible flows, and requires appropriate techniques [12]. Thus, in this section, we present an alternative to determine the pressure field from velocity data, complementing the previous regression model.
Figure 12: Effect of varying motion. The model is trained on \(\bullet\) DS4 (periodic motion), and evaluates the \(C_{p}\) distribution for simulation #6, which consists of a ramp motion.
Figure 13: Effect of varying motion. The model is trained on \(\bullet\) DS5 (ramp motion), and evaluates the \(C_{p}\) distribution for simulation #3, which consists of a periodic motion.
The approach works similarly to what we have already discussed, in the sense that the available data (whether from numerical simulations or from experiments) can be used to train the model for later use in experiments. For example, if provided with numerical data where the thermodynamic properties can be easily obtained, these data can be used to train the regressor. In the case of having only PIV data, the pressure would first need to be determined using the techniques described by Van der Kindere et al. [12] in order to then train the model. In the present work, we demonstrate the application of this technique only with numerical data, since these are the data available. But, as Visbal [82] shows, PIV and numerical visualizations are comparable to each other and, therefore, this does not change our conclusions. In addition, if experimentally acquired flowfields are corrupted with incorrect and missing entries, the recent technique by Scherl et al. [83] can be used to improve the flowfield data quality.
Here, we draw on image-to-image translation to broaden the scope of information extracted in experimental fluid mechanics. Particularly, given the source domain of images of the velocity field concatenated channelwise, we train a U-Net [61] to generate synthetic pressure coefficient fields. For this, the same dataset \(\bullet\) DS1 from the regression model was used, and after 84 epochs, we reached a validation mean squared error of approximately \(1.35\mathrm{e}-5\), which we consider satisfactory. This training took about 4.5 hours on an NVIDIA Tesla A100. Some results are presented in Fig. 14, where we show the input velocities, the actual pressure coefficient fields and those predicted by the U-Net. These results are displayed columnwise, each column corresponding to a snapshot randomly selected from simulation #10 with \(M_{\infty}=0.2\). Therefore, we are interpolating between different Mach numbers.
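As a rough illustration of this setup (not the authors' actual code), the following sketch trains a generic U-Net with a mean-squared-error objective; `UNet` and `train_loader` are hypothetical placeholders for a standard U-Net module and a data loader yielding stacked \((u,v)\) velocity channels together with the target \(C_{p}\) field:

```python
import torch
import torch.nn as nn

# Hypothetical U-Net: 2 input channels (u, v), 1 output channel (C_p).
model = UNet(in_channels=2, out_channels=1).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

for epoch in range(84):  # 84 epochs, as reported above
    for uv, cp in train_loader:  # uv: (B, 2, H, W), cp: (B, 1, H, W)
        optimizer.zero_grad()
        loss = criterion(model(uv.cuda()), cp.cuda())
        loss.backward()
        optimizer.step()
```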
An excellent agreement is observed between the true and predicted fields in Fig. 14, demonstrating that a simple encoder-decoder architecture can correctly map the input attribute values to the corresponding synthetic image, although some fine details are missing from the predictions. Nevertheless, these synthetic fields provide good results when used as input to the regression model built previously. To illustrate this, in Fig. 15 we evaluate the accuracy of using the synthetic \(C_{p}\) images of some snapshots from simulation #10 and also compare them with the results obtained using the real pressure field as input and with the true value of the \(C_{p}\) distribution over the airfoil suction side. Clearly, the synthetic field produces an estimate of the pressure distribution almost identical to the one obtained when feeding the true pressure field into the network. Thus, we demonstrate that image-to-image translation (U-Net) and regression (ViT-B/16) networks can be combined to improve airfoil loading prediction from velocity data. We believe that this framework can improve the arsenal of tools available for analyzing unsteady flows with a broad range of spatial and temporal scales.
## 4 Conclusions
Based on unsteady flowfields, a pre-trained ViT-B/16 network forms the backbone of a neural network model that links the existing flow structures to the aerodynamic response of the airfoil. Here, we focus on the \(C_{p}\) surface distribution, but we also show that it is possible to obtain other quantities, such as aerodynamic coefficients, simply by retraining the network with a different number of output nodes. The ViT correctly infers the attributes present in the flow image even in a compressible flow regime for which no annotations are given. As a result, an excellent agreement between predicted and ground truth values is obtained for the \(C_{p}\) over the airfoil suction side. Similar conclusions are reached with CNNs (not shown here
for brevity), but ViT outperforms them in the regression task. This fact demonstrates that ViT- or CNN-based models can be used to interpolate between flow parameters. The ability of the neural network to extrapolate the source domain is also investigated by varying the Mach and Reynolds numbers, besides the airfoil kinematics. In this sense, it is demonstrated that using pressure as input to the network instead of the velocity field improves generalization. In principle, this makes sense, since pressure is directly related to forces.
When encountering out-of-distribution data (_i.e._, when the target flowfield contains never
Figure 14: Output images from model trained on \(\bullet\) DS1 (\(M_{\infty}=0.1\) and \(0.4\)) using \(u\)- and \(v\)-velocity components as input. The model evaluates the \(C_{p}\) flowfield for arbitrary snapshots of simulation #10 with \(M_{\infty}=0.2\).
seen semantics), the model is prone to error. So, in this work, we aim to establish to what degree this impacts the model and which steps can be taken to resolve the issues and improve the model's accuracy. By training our network on different types of simulation parameters, we demonstrate that the position and morphology of the dynamic stall vortex with respect to the airfoil, as well as the trailing edge vortex, are sources of variability that must be learned for the model to generalize better. This ensures that the marginal label distribution is aligned across domains, which is known to be a necessary condition for generalization when learning domain-invariant representations. Fortunately, though, in fluid mechanics most of the nuisance variability is already known and can be dealt with at the outset.
In addition to providing evidence that not including sources of variability in the training set can lead to a deterioration in model performance, we purposely subject the model to other sources of error to assess its robustness. For example, the saturated color range of the images is maintained and the airfoil is not perfectly framed in the image, in the sense that there is some padding extending from the airfoil limits. These conditions are likely to occur in practice and, therefore, must be taken into account when judging the result of inferences. In this sense, we show that the presence of saturated regions near the airfoil surface leads to a concept shift, which occurs when the same input results in different outputs. Thus, we can say that the saturated regions are ambiguous representations. The non-framing of the airfoil in the image, in turn, results in a low spatial resolution that causes the smoothing of fine-scale phenomena, which are more likely to exist at high Reynolds numbers. These observations can be taken as good practices when training the neural network. But even in the presence of these errors, the network inferences show good agreement with the true results.
Finally, we also demonstrate that the velocity can be used to synthesize any corresponding physical quantity through an image-to-image translation approach. Here, the mapping between the velocity and the pressure coefficient field is used as an example because pressure is preferable to velocity when it comes to building our regression model. The results using the synthetic image show an excellent agreement with those obtained from the actual pressure field, proving to be an interesting pre-processing technique for regression.
Figure 15: Calculation of surface \(C_{p}\) distribution from output images. The model is trained on \(\bullet\) DS1 (\(M_{\infty}=0.1\) and \(0.4\)) using the artificially generated \(C_{p}\) field as input, and evaluates the \(C_{p}\) distribution for arbitrary snapshots for simulation #10 with \(M_{\infty}=0.2\).
## Acknowledgments
The authors acknowledge Fundação de Amparo à Pesquisa do Estado de São Paulo, FAPESP, for supporting the present work under research grants No. 2013/08293-7, 2013/07375-0 and 2021/06448-0. FAPESP is also acknowledged for the fellowship provided to the first author under grant 2022/09196-4. The CEPID-CeMEAI cluster Euler and cluster SDumont are acknowledged for providing the computational resources for this research. Finally, Conselho Nacional de Desenvolvimento Científico e Tecnológico, CNPq, is also acknowledged for supporting this research under Grants No. 407842/2018-7 and 308017/2021-8.
|
2305.16492 | Image Classification of Stroke Blood Clot Origin using Deep
Convolutional Neural Networks and Visual Transformers | Stroke is one of two main causes of death worldwide. Many individuals suffer
from ischemic stroke every year. In the US alone, over 700,000 individuals suffer an ischemic stroke due to a blood clot blocking an artery to the brain every year. The paper describes a particular approach to applying Artificial Intelligence for the purpose of separating two major acute ischemic stroke (AIS) etiology subtypes: cardiac and large artery atherosclerosis. Four deep neural network architectures and a simple ensemble method are used in the approach. | David Azatyan | 2023-05-25T21:46:12Z | http://arxiv.org/abs/2305.16492v1 | # Image Classification of Stroke Blood Clot Origin
###### Abstract
Stroke is one of the two main causes of death worldwide. Many individuals suffer from ischemic stroke every year. In the US alone, over 700,000 individuals suffer an ischemic stroke due to a blood clot blocking an artery to the brain every year.
Stroke symptoms typically start suddenly, over seconds to minutes, and in most cases do not progress further. The symptoms depend on the area of the brain affected. The more extensive the area of the brain affected, the more functions are likely to be lost [10].

Ischemic stroke occurs because of a loss of blood supply to part of the brain, initiating the ischemic cascade [10]. In the case of subsequent strokes, it is possible to mitigate the consequences if physicians can determine the stroke etiology. The paper describes a particular approach to applying Artificial Intelligence for the purpose of separating two major acute ischemic stroke (AIS) etiology subtypes: cardiac and large artery atherosclerosis [1]. Four deep neural network architectures and a simple ensemble method are used in the approach.
**Keywords:** Computer vision, Artificial Intelligence, Medical imaging, Image Classification, Deep learning
## 1 Introduction
Stroke is the second-leading cause of death worldwide despite significant developments in medical science and practice. As in other areas of medicine, early and accurate diagnosis plays a significant role in the fight against the disease.
Nowadays, mechanical thrombectomy has become the standard-of-care treatment for acute ischemic stroke from large vessel occlusion. As a result, retrieved clots have become amenable to analysis. Healthcare professionals are currently attempting to apply deep learning-based methods to predict ischemic stroke etiology and clot origin. However, unique data formats, large image file sizes, as well as the limited number of available pathology slides create challenges that require specific approaches to solve them [1].
Under these conditions, a simple classification pipeline using transfer learning with the SOTA computer vision architectures EfficientNet CNN [6] and Swin Transformer [7], along with appropriate data preparation, is a suitable solution.
## 2 Dataset
The "train" dataset for the competition contains 754 high-resolution whole-slide digital pathology images in TIF format. Every image represents a blood clot of a patient who suffered an acute ischemic stroke [1]. Each image belongs to one of 632 patients from 11 medical centers. Each patient may have up to five images, labeled as either CE (Cardioembolic) or LAA (Large Artery Atherosclerosis).

The "other" dataset includes 396 high-resolution whole-slide digital images in TIF format. Each image belongs to one of 336 patients with no relation to the medical centers. Each patient may have up to five images, labeled as either "Other" or "Unknown".
All images are presented in RGB format. The distributions of disk image size and width/height of images in pixels are presented in Figure 1.
The ratio between the number of images in the classes CE and LAA for the "train" dataset is presented in Figure 2.
## 3 Data preprocessing
The training dataset includes only a few very high-resolution images in TIF format, which leads to the following major disadvantages of using the original images:
* particular images are stored in files sized up to 2.5 GB, which makes it very hard to work with such files, especially if large amounts of RAM are not available;
* a dataset with such high-resolution images is not directly usable for the classification task, because even a SOTA solution to a classification task requires images with resolutions in the range from 224x224 pixels to 512x512 pixels;
* original images can have vast 'empty' spaces that do not carry any useful information;
* a small training sample in deep learning leads models to a low ability to generalize and an increased risk of overfitting.
To avoid some of these disadvantages and mitigate the impact of others, data preparation includes the following steps:
* pruning and rotation [13]: pruning deletes empty columns and rows of a particular image; additionally, if the height of the image is greater than its width, the image is mirrored about the main diagonal (Figure 3 shows an original image and Figure 4 shows the image after pruning and rotation);
* resizing: resizing brings all images to one standard size of 1024x1024, which makes further use of the images easier due to reduced disk space and faster transformation operations in RAM;
* augmentation: the following methods of the Albumentations library were used: ToGray, Transpose, VerticalFlip, HorizontalFlip, RandomBrightness, RandomContrast, OneOf(MotionBlur, MedianBlur, GaussianBlur, GaussNoise), OneOf(OpticalDistortion, GridDistortion, ElasticTransform), CLAHE, HueSaturationValue, ShiftScaleRotate, RandomResizedCrop,
Figure 1: Distributions of disk image size and width/height of images.
Figure 2: Ratio between the number of images in the classes CE and LAA for the "train" dataset.
and Cutout, which together provide a supplementary regularization effect (a sketch of this pipeline is given below).
Data normalization with mean = (0.485, 0.456, 0.406) and std = (0.229, 0.224, 0.225) is applied to the tensor of each image as a last step. The complex data preprocessing described in this section makes it possible to train models on the available data with higher expected robustness.
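A sketch of the augmentation pipeline described above is shown below. The exact probabilities and the Albumentations version used by the author are not specified in the text, so this is an assumption-laden reconstruction; in recent library versions, RandomBrightnessContrast and CoarseDropout stand in for the older RandomBrightness/RandomContrast and Cutout transforms:

```python
import albumentations as A
from albumentations.pytorch import ToTensorV2

train_transform = A.Compose([
    A.ToGray(p=0.2),
    A.Transpose(p=0.5),
    A.VerticalFlip(p=0.5),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.5),
    A.OneOf([A.MotionBlur(), A.MedianBlur(), A.GaussianBlur(), A.GaussNoise()], p=0.5),
    A.OneOf([A.OpticalDistortion(), A.GridDistortion(), A.ElasticTransform()], p=0.5),
    A.CLAHE(p=0.5),
    A.HueSaturationValue(p=0.5),
    A.ShiftScaleRotate(p=0.5),
    A.RandomResizedCrop(height=512, width=512),
    A.CoarseDropout(p=0.5),
    A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ToTensorV2(),
])
```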
Note that one example in the "train" dataset is a patient, and a patient can have up to five images. A test performed on the "train" dataset shows that the highest solution quality is achieved if only the chronologically last image is taken for each patient. Therefore, for both training and inference purposes, only the chronologically last image is taken for each patient.
## 4 Models
The classification of medical images with weak signals is very challenging, notably due to the very small "train" dataset. Nevertheless, using a combination of several deep learning architectures helps to avoid their individual disadvantages and to extract signals that separate the classes as well as possible.
Several deep learning architectures are used to build the final solution: EfficientNet B0 [6] with Noisy Student weights [5] for resolution 512x512, EfficientNet B4 [6] with Noisy Student weights [5] for resolution 384x384, EfficientNet B4 [6] with Noisy Student weights [5] for resolution 512x512, EfficientNet B5 [6] with Noisy Student weights [5] for resolution 512x512, Swin Large with window parameter 7 for resolution 224x224 [7], and Swin Large with window parameter 12 for resolution 384x384 [7].
The following successive changes are made to each model architecture:
* linear layer with 128 outputs is added;
Figure 4: Image after pruning and rotation.
Figure 3: Original Image.
* dropout 0.1 is added;
* linear layer with 64 outputs is added;
* linear layer with 2 outputs is added.
The softmax function [12] is used to transform the outputs of each model into probabilities.
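A minimal sketch of the resulting architecture, assuming the timm library for the pre-trained backbones (the class and layer names are our choice, not the author's):

```python
import timm
import torch.nn as nn

class ClotClassifier(nn.Module):
    """Pre-trained backbone with the modified head described above."""
    def __init__(self, backbone_name: str = "tf_efficientnet_b4_ns"):
        super().__init__()
        # num_classes=0 makes timm return pooled features instead of logits.
        self.backbone = timm.create_model(backbone_name, pretrained=True, num_classes=0)
        self.head = nn.Sequential(
            nn.Linear(self.backbone.num_features, 128),
            nn.Dropout(0.1),
            nn.Linear(128, 64),
            nn.Linear(64, 2),
        )

    def forward(self, x):
        # Softmax is applied outside the model, at inference time.
        return self.head(self.backbone(x))
```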
## 5 Training
We use the weighted multi-class logarithmic loss (WMCLL) as the evaluation metric.
\[\textit{WMCLL}=-\frac{\sum_{i=1}^{M}w_{i}\times\sum_{j=1}^{N_{i}}\frac{\gamma_{ij}}{N_{i}}\times\ln p_{ij}}{\sum_{i=1}^{M}w_{i}}\]
where \(N_{i}\) is the number of images in class \(i\), \(M\) is the number of classes, \(w_{i}\) is the weight for class \(i\) (equal weights are used), \(\ln\) is the natural logarithm, \(\gamma_{ij}\) is 1 if observation \(j\) belongs to class \(i\) and 0 otherwise, and \(p_{ij}\) is the predicted probability that image \(j\) belongs to class \(i\) [1].
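A direct implementation of this metric (a sketch; equal class weights by default, matching the setup above):

```python
import numpy as np

def wmcll(y_true, p_pred, weights=None, eps=1e-15):
    """Weighted multi-class log loss. y_true: (N,) class ids; p_pred: (N, M) probabilities."""
    M = p_pred.shape[1]
    w = np.ones(M) if weights is None else np.asarray(weights, dtype=float)
    p = np.clip(p_pred, eps, 1 - eps)  # avoid log(0)
    total = 0.0
    for i in range(M):
        in_class = y_true == i
        if in_class.any():  # mean log-probability over the N_i images of class i
            total += w[i] * np.log(p[in_class, i]).mean()
    return -total / w.sum()
```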
Since the evaluation metric is itself a loss, it is used as a custom loss function in neural network training to ensure better convergence in the optimization process compared to the standard loss functions applied to classification tasks. Therefore, minimizing the training loss at the same time directly minimizes the evaluation metric.
Since the task can be treated as a multi-class classification with two classes, the label smoothing technique was also used, with coefficient 0.01.
All models are trained with the Adam optimizer and a weight decay parameter equal to 1e-06. The ReduceLROnPlateau scheduler is used with a maximum learning rate of 1e-04 and a minimum learning rate of 1e-05; the patience parameter equals 1 and the factor to reduce the learning rate is 0.1. The maximum number of training epochs is set to 30, with early stopping after six epochs in a row with no decrease in validation loss. The best weights are saved from the best epoch.
For training purposes, the "train" dataset is split into five folds to use in a 5-fold cross-validation scheme. First, EfficientNet B0 with Noisy Student weights for resolution 512x512 is trained on the "train" dataset, and afterwards the CE and LAA classes are predicted for the "other" dataset. To improve the models' predictive power, images from the "other" dataset with a class probability (CE or LAA) of at least 0.9 are selected and added to the "train" dataset with the appropriate class labels (CE or LAA). The training folds thereby become larger. After that, EfficientNet B4 with Noisy Student weights for resolution 384x384, EfficientNet B4 with Noisy Student weights for resolution 512x512, EfficientNet B5 with Noisy Student weights for resolution 512x512, Swin Large with window parameter 7 for resolution 224x224, and Swin Large with window parameter 12 for resolution 384x384 are also trained on the expanded "train" dataset with the same 5-fold CV scheme and the training approach described above.
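The pseudo-labelling step can be sketched as follows, where `predict`, `effnet_b0`, `other_images`, `train_images` and `train_labels` are hypothetical placeholders (`predict` returns an (N, 2) array of softmax probabilities for the "other" dataset):

```python
import numpy as np

probs = predict(effnet_b0, other_images)    # columns: [CE, LAA]
keep = probs.max(axis=1) >= 0.9             # keep only confident predictions
pseudo_labels = probs.argmax(axis=1)[keep]  # 0 = CE, 1 = LAA
train_images = np.concatenate([train_images, other_images[keep]])
train_labels = np.concatenate([train_labels, pseudo_labels])
```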
## 6 Results
The final solution is a simple ensemble of the five following models: EfficientNet B4 with Noisy Student weights for resolution 384x384, EfficientNet B4 with Noisy Student weights for resolution 512x512, EfficientNet B5 with Noisy Student weights for resolution 512x512, Swin Large with window parameter 7 for resolution 224x224, and Swin Large with window parameter 12 for resolution 384x384. The probabilities of the classes CE and LAA are calculated as the mean of the corresponding probabilities of the five models. Despite its simplicity, the ensembling approach, among other things, increases the robustness of the prediction. Finally, WMCLL = 0.69682 on the public leaderboard and WMCLL = 0.67188 on the private leaderboard.
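The ensembling step itself reduces to averaging the per-model class probabilities, e.g.:

```python
import numpy as np

# model_probs: list of five (N, 2) probability arrays, one per trained model.
ensemble_probs = np.mean(np.stack(model_probs, axis=0), axis=0)
```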
## 7 Conclusions
The paper presents an artificial intelligence-based etiology classification of blood clot origins in ischemic stroke in a very-small-data setting (only a few hundred images). To make the models work well on such limited data, strong data augmentations are used together with a deep CNN (EfficientNet with Noisy Student self-training) and Visual Transformers (Swin) pretrained on ImageNet. The approach shows good results on the private leaderboard, reaching the gold zone. It is possible to improve the results by more accurate hyperparameter tuning, more thorough data preprocessing, using TTA (test-time augmentation), and applying stacking with a GBM built on neural network embeddings.
|
2304.02902 | Towards Efficient MCMC Sampling in Bayesian Neural Networks by
Exploiting Symmetry | Bayesian inference in deep neural networks is challenging due to the
high-dimensional, strongly multi-modal parameter posterior density landscape.
Markov chain Monte Carlo approaches asymptotically recover the true posterior
but are considered prohibitively expensive for large modern architectures.
Local methods, which have emerged as a popular alternative, focus on specific
parameter regions that can be approximated by functions with tractable
integrals. While these often yield satisfactory empirical results, they fail,
by definition, to account for the multi-modality of the parameter posterior. In
this work, we argue that the dilemma between exact-but-unaffordable and
cheap-but-inexact approaches can be mitigated by exploiting symmetries in the
posterior landscape. Such symmetries, induced by neuron interchangeability and
certain activation functions, manifest in different parameter values leading to
the same functional output value. We show theoretically that the posterior
predictive density in Bayesian neural networks can be restricted to a
symmetry-free parameter reference set. By further deriving an upper bound on
the number of Monte Carlo chains required to capture the functional diversity,
we propose a straightforward approach for feasible Bayesian inference. Our
experiments suggest that efficient sampling is indeed possible, opening up a
promising path to accurate uncertainty quantification in deep learning. | Jonas Gregor Wiese, Lisa Wimmer, Theodore Papamarkou, Bernd Bischl, Stephan Günnemann, David Rügamer | 2023-04-06T07:20:57Z | http://arxiv.org/abs/2304.02902v1 | # Towards Efficient MCMC Sampling in Bayesian Neural Networks by Exploiting Symmetry
###### Abstract
Bayesian inference in deep neural networks is challenging due to the high-dimensional, strongly multi-modal parameter posterior density landscape. Markov chain Monte Carlo approaches asymptotically recover the true posterior but are considered prohibitively expensive for large modern architectures. Local methods, which have emerged as a popular alternative, focus on specific parameter regions that can be approximated by functions with tractable integrals. While these often yield satisfactory empirical results, they fail, by definition, to account for the multi-modality of the parameter posterior. In this work, we argue that the dilemma between exact-but-unaffordable and cheap-but-inexact approaches can be mitigated by exploiting symmetries in the posterior landscape. Such symmetries, induced by neuron interchangeability and certain activation functions, manifest in different parameter values leading to the same functional output value. We show theoretically that the posterior predictive density in Bayesian neural networks can be restricted to a symmetry-free parameter reference set. By further deriving an upper bound on the number of Monte Carlo chains required to capture the functional diversity, we propose a straightforward approach for feasible Bayesian inference. Our experiments suggest that efficient sampling is indeed possible, opening up a promising path to accurate uncertainty quantification in deep learning.
Keywords:Uncertainty quantification Predictive uncertainty Bayesian inference Monte Carlo sampling Posterior symmetry
## 1 Introduction
Despite big data being the dominant paradigm in deep learning, the lack of infinitely many observations makes uncertainty quantification (UQ) an important problem in the field. A key component of UQ in Bayesian learning is the parameter posterior density that assigns a posterior probability to each parameter value [23]. Between the extreme cases of all posterior probability mass
concentrating on a single value, indicating complete certainty about the model parameters, and being distributed uniformly over all possible values in a reflection of total ignorance, the shape of the parameter posterior density is central to the quantification of predictive uncertainty. However, the parameter posterior for Bayesian neural networks (BNNs) is typically highly multi-modal and rarely available in closed form. The classical Markov chain Monte Carlo (MCMC) approach asymptotically recovers the true posterior but is considered prohibitively expensive for BNNs, as the large number of posterior modes prevents a reasonable mixing of chains [25]. Popular approximation techniques, such as Laplace approximation (LA; [9, 29]) or deep ensembles (DE; [28]), therefore focus on local regions of the posterior landscape. While these methods are faster than traditional MCMC and perform well in many applications, they systematically omit regions of the parameter space that might be decisive for meaningful UQ [25] (also shown in Section 5.3).
In this work, we challenge the presumed infeasibility of MCMC for NNs and propose to exploit the - in this context, rarely considered - unidentifiability property of NNs, i.e., the existence of two or more equivalent parameter values that describe the same input-output mapping. We refer to these equivalent values as _equioutput parameter states_. Equioutput parameter states emerge from certain activation functions [6, 27, 37], as well as the free permutability of neuron parameters in hidden layers [20], and can be transformed into one another.
The functional redundancy arising from this phenomenon grows rapidly with the depth and width of a network (cf. Figure 1) and induces symmetries in the posterior density. For exact inference (up to a Monte Carlo error), we need to incorporate all non-equioutput parameter states that lead to distinct input-output mappings. Considering only these _functionally diverse_ mappings means, in turn, that our effective parameter space makes up a comparatively small fraction of the network's original parameter space. Since their numerous equioutput counterparts do not contribute any new information to predictive uncertainty, we need much fewer MCMC samples when approximating the posterior predictive density (PPD) via Monte Carlo integration. By explicitly removing symmetries from samples _post-hoc_, we can even expose the functionally relevant part of the posterior and provide an opportunity for interpretation and analytical approximation in the reduced effective parameter space.
#### 1.0.1 Our Contributions.
We analyze the role of posterior space redundancies in quantifying BNN uncertainty, making the following contributions: 1) We show that the full PPD can be obtained from a substantially smaller reference set containing uniquely identified parameter states in function space. 2) We propose an estimation procedure for the number of Monte Carlo chains required to discover functionally diverse modes, providing a practical guideline for sampling from the parameter space of multi-layer perceptrons (MLPs). 3) We supply experimental evidence that our approach yields superior predictive performance compared to standard MCMC and local approximation methods. 4) Lastly, we demonstrate the posterior interpretability and analytic approximation that can
be obtained from explicitly removing symmetries _post-hoc_, for which we propose an algorithmic proof-of-concept.
## 2 Related Work
In this section, we review the literature on the existence, removal, and utilization of symmetries in MLPs, as well as the relation between such symmetries and statistical inference.
#### 2.0.1 Existence of Parameter State Symmetries.
Non-unique network parameter states have been considered in the literature before. [20] were among the first to note that equioutput states induce symmetries in the parameter space of MLPs. Focusing, within the general linear group of the parameter space, on the subgroup of transformations that leave the input-output mapping unchanged, they derived equivalence classes of equioutput parameter states and showed that, for every MLP, there exists a minimal and complete set of representatives covering all functionally different parameter states. [42] and [27] continued along this line of work to study single-hidden-layer MLPs with specific activation functions, advancing from \(\tanh\) to more general self-affine activations. An extension to MLPs of arbitrary depth was studied by [6] in the context of \(\tanh\) activations. More recently, [37] characterized equioutput parameter states for ReLU activations, again focusing on the case of a single hidden layer, and [1] classified all \(\mathbb{G}\)-invariant single-hidden-layer MLPs with ReLU activation for any finite orthogonal group \(\mathbb{G}\). Lastly, [45] generalized much of the above in a framework addressing the identifiability of affine symmetries in arbitrary architectures.
#### 2.0.2 Symmetry Removal.
Symmetries in the parameter posterior density of Bayesian models can produce adverse effects that have been addressed in several research areas of statistics and machine learning. A prominent example is _label switching_ in finite mixture models, where the permutability of label assignments to the mixture components induces symmetries similar to those in BNNs. To make mixture models identifiable, [3] introduced an adaptive Metropolis algorithm with online relabeling, effectively removing permutation symmetries by optimizing over discrete sets of permutation matrices. Such exhaustive-search approaches, however, scale poorly to modern NNs with many parameters, as the amount of equioutput states rises exponentially with the number of parameters.
In BNNs, symmetries have been known to slow down MCMC convergence to the stationary parameter posterior density due to budget wasted on visiting symmetric modes [31, 34]. [25], reporting results from extensive and large-scale experiments, indeed find that MCMC chains tend to mix better in function space than in parameter space. Consequently, reducing the effect of symmetries by imposing parameter constraints and defining anchoring points for subsets of the latent variables has been shown to improve mixing [41]. A proposal for constrained sampling can be found, for example, in [39], with application to ReLU-activated MLPs.
#### 2.0.3 Utilizing Symmetries.

Symmetries in the parameter posterior density are not, however, necessarily a nuisance. Quite on the contrary, they can be useful to enhance generalization and make inference affordable. An increasing body of work has been exploring the use of symmetry removal in the context of _mode connectivity_, an approach to find more robust NN solutions by retrieving connected areas of near-constant loss rather than isolated local optima [10, 17]. Focusing on equioutput permutations of hidden-layer neurons, [43] and [2], among others, propose to align the layer-wise embeddings of multiple networks and thus improve upon the performance of individual models. Following a similar idea, [38] apply a _post-hoc_ standardization on parameter vectors in ReLU-activated NNs that draws from the notion of equioutput equivalence classes.
In the field of Bayesian deep learning, the idea of utilizing - exact or approximate - parameter symmetries, represented by permutation groups, has led to the development of _lifted MCMC_[33, 5]. Orbital Markov chains have been introduced to leverage parameter symmetries in order to reduce mixing times [32, 5]. Lifted MCMC has been considered mainly in the context of probabilistic graphical models. There is scope to harness lifted MCMC in the context of MLPs since these can be cast as graphical models [26, 45].
We believe that equioutput symmetries have the potential to facilitate MCMC inference, despite the apparent complexity they introduce in the parameter posterior density. From the insight that a vast part of the sampling space is made up of symmetric copies of some minimal search set, we conclude that running multiple short MCMC chains in parallel, each of which can sample a functionally different mode, represents a more efficient use of the available budget than collecting a large number of samples from a single chain. In Section 4, we propose an upper bound for MCMC chains necessary to observe all functionally diverse posterior modes, which is a key criterion for successful inference. The perspective of parameter posterior symmetries thus lends a new theoretical justification to previous efforts in multi-chain MCMC that are motivated mainly by the exploitation of parallel computing resources [30, 40]. Our experiments in Section 5 suggest that this approach is indeed more effective than single-chain MCMC in BNNs with many symmetries in their parameter posterior density. In agreement with [25], our findings advocate to focus on function-space instead of parameter-space mixing during MCMC sampling.
We thus view the existence of equioutput parameter states as a benign phenomenon. That said, there are still benefits to be gained from removing the symmetries: with a parameter posterior density reduced to the minimal parameter set sufficient to represent its full functional diversity, we get an opportunity for better interpretation, and possibly even analytical approximation. We demonstrate the potential of symmetry removal in Section 5.3 by means of a custom algorithm (Supplementary Material B). In the following section, we provide the mathematical background and introduce the characterization and formal notation of equioutput transformations.
## 3 Background and Notation
#### 3.0.1 MLP Architectures.
In this work, we consider NNs of the following form. Let \(f:\mathcal{X}\rightarrow\mathcal{Y}\) represent an MLP with \(K\) layers, where layer \(l\in\{1,\ldots,K\}\) consists of \(M_{l}\) neurons, mapping a feature vector \(\mathbf{x}=(x_{1},\ldots,x_{n})^{\top}\in\mathcal{X}\subseteq\mathbb{R}^{n},n\in \mathbb{N}\), to an outcome vector
\[f(\mathbf{x})=:\hat{\mathbf{y}}=(\hat{y}_{1},\ldots,\hat{y}_{m})^{\top}\in\mathcal{Y} \subseteq\mathbb{R}^{m},\ \ m\in\mathbb{N},\]
to estimate \(\mathbf{y}=(y_{1},\ldots,y_{m})^{\top}\in\mathcal{Y}\). The \(i\)-th neuron in the \(l\)-th layer of the MLP is associated with the weights \(w_{lij},\ j=1,\ldots,M_{l-1}\), and the bias \(b_{li}\). We summarize all the MLP parameters in the vector
\[\mathbf{\theta}:=(w_{211},\ldots,w_{KM_{K}M_{K-1}},b_{21},\ldots,b_{KM_{K}})^{\top }\in\Theta\subseteq\mathbb{R}^{d}\]
and write \(f_{\mathbf{\theta}}\) to make clear that the MLP is parameterized by \(\mathbf{\theta}\). For each hidden layer \(l\in\{2,\ldots,K-1\}\), the inputs are linearly transformed and then activated by a function \(a\). More specifically, we define the pre-activations of the \(i\)-th neuron in the \(l\)-th hidden layer as
\[o_{li}=\sum_{j=1}^{M_{l-1}}w_{lij}z_{(l-1)j}+b_{li}\]
with post-activations \(z_{(l-1)i}=a(o_{(l-1)i})\) from the preceding layer. For the input layer, we have \(z_{1i}=x_{i},i=1,\ldots,n\), and for the output layer, \(z_{Ki}=\hat{y}_{i},i=1,\ldots,M_{K}\).
#### 3.0.2 Predictive Uncertainty.
In the Bayesian paradigm, a prior density \(p(\mathbf{\theta})\) is imposed on the parameters, typically as part of a Bayesian model of the data. Using Bayes' rule, the parameter posterior density
\[p(\mathbf{\theta}|\mathcal{D})=\frac{p(\mathcal{D}|\mathbf{\theta})p(\mathbf{\theta})}{p( \mathcal{D})}\]
updates this prior belief based on the information provided by the data \(\mathcal{D}\) and encoded in the likelihood \(p(\mathcal{D}|\mathbf{\theta})\). In supervised learning, the data are typically given by a set of \(N\) feature vectors \(\mathbf{x}\in\mathcal{X}\) and outcome vectors \(\mathbf{y}\in\mathcal{Y}\), forming the dataset \(\mathcal{D}=\{(\mathbf{x}^{(1)},\mathbf{y}^{(1)}),\ldots,(\mathbf{x}^{(N)},\mathbf{y}^{(N)})\}\). The PPD \(p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathcal{D})\) quantifies the predictive or functional uncertainty of the model for a new observation \((\mathbf{x}^{*},\mathbf{y}^{*})\in\mathcal{X}\times\mathcal{Y}\). Since
\[p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathcal{D})=\int_{\Theta}p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{ \theta})p(\mathbf{\theta}|\mathcal{D})\,\mathrm{d}\mathbf{\theta},\]
deriving this uncertainty requires access to the posterior density \(p(\mathbf{\theta}|\mathcal{D})\), which can be estimated from MCMC sampling.
#### 3.0.3 Equioutput Transformations.

Let us now characterize the notion of equioutput parameter states, and the transformations to convert between them, more formally. Two parameter states \(\mathbf{\theta},\mathbf{\theta}^{\prime}\) are considered _equioutput_ if the maps \(f_{\mathbf{\theta}},f_{\mathbf{\theta}^{\prime}}\) yield the same outputs for all possible inputs from \(\mathcal{X}\). We denote this equivalence relation (see proof in Supplementary Material A.2) by \(\sim\) and write:
\[\mathbf{\theta}\sim\mathbf{\theta}^{\prime}\Longleftrightarrow f_{\mathbf{\theta}}(\mathbf{x})= f_{\mathbf{\theta}^{\prime}}(\mathbf{x})\,\forall\,\mathbf{x}\in\mathcal{X},\ \ \mathbf{\theta},\mathbf{\theta}^{\prime}\in\varTheta.\]
The equioutput relation is always defined with respect to a particular MLP \(f\), which we omit in our notation when it is clear from the context.
All MLPs with more than one neuron in at least one hidden layer exhibit such equioutput parameter states that arise from permutation invariances of the input-output mapping [20, 27]. Since the operations in the pre-activation of the \(i\)-th neuron in the \(l\)-th layer commute6, the \(M_{l}>1\) neurons of a hidden layer \(l\) can be freely interchanged by permuting their associated parameters. In addition, equioutput transformations can arise from the use of certain activation functions with inherent symmetry properties. For example, in the case of \(\tanh\), the signs of corresponding parameters can be flipped using \(\tanh(x)=-\tanh(-x)\). For ReLU activations, a scaling transformation can be applied such that the mapping of the network remains unchanged, i.e., \(\text{ReLU}(x)=c^{-1}\cdot\text{ReLU}(c\cdot x)\) for \(|c|>0\) (see also Supplementary Material A.1).
Footnote 6: Recall that the pre-activation of neuron \(i\) in layer \(l\) is \(o_{li}=\sum_{j=1}^{M_{l-1}}w_{lij}z_{(l-1)j}+b_{li}\). By the commutative property of sums, any permutation \(\pi:J\to J\) of elements from the set \(J=\{1,\ldots,M_{l-1}\}\) will lead to the same pre-activation.

Let

\[\mathcal{F}_{\mathbf{T}}:\varTheta\to\varTheta,\mathbf{\theta}\mapsto\mathbf{T}\mathbf{\theta},\ \ \mathbf{T}\in\mathbb{R}^{d\times d},\]
be an activation-related transformation of a parameter vector that might, for instance, encode an output-preserving sign flip. \(\mathcal{F}_{\mathbf{T}}\) constitutes an _equioutput_ transformation if \(f_{\mathbf{\theta}}(\cdot)=f_{\mathcal{F}_{\mathbf{T}}(\mathbf{\theta})}(\cdot)\). We collect all output-preserving transformation matrices \(\mathbf{T}\) in the set \(\mathcal{T}\), i.e.,
\[\mathcal{T}=\left\{\mathbf{T}\in\mathbb{R}^{d\times d}\mid f_{\mathbf{\theta}}(\cdot) =f_{\mathcal{F}_{\mathbf{T}}(\mathbf{\theta})}(\cdot)\right\}.\]
Similarly, let
\[\mathcal{F}_{\mathbf{P}}:\varTheta\to\varTheta,\mathbf{\theta}\mapsto\mathbf{P}\mathbf{ \theta},\ \ \mathbf{P}\in\{0,1\}^{d\times d},\]
be a transformation that permutes elements in the parameter vector. We define the set of permutation matrices that yield equioutput parameter states as
\[\mathcal{P}=\left\{\mathbf{P}\in\mathbb{R}^{d\times d}\mid f_{\mathbf{\theta}}(\cdot) =f_{\mathcal{F}_{\mathbf{P}}(\mathbf{\theta})}(\cdot)\right\}.\]
The cardinality of \(\mathcal{P}\) is at least \(\prod_{l=2}^{K-1}M_{l}!\)[20] when traversing through the NN from the first layer in a sequential manner, applying to each layer permutations that compensate for permutations in its predecessor.
Since activation functions operate neuron-wise, activation- and permutation-related equioutput transformations do not interact (for instance, we could permute the associated weights of two neurons and later flip their sign). We can, therefore, define arbitrary combinations of activation and permutation transformations as
\[\mathcal{E}=\left\{\boldsymbol{E}=\boldsymbol{TP}\in\mathbb{R}^{d\times d}, \boldsymbol{T}\in\mathcal{T},\boldsymbol{P}\in\mathcal{P}\ |\ f_{\boldsymbol{\theta}}(\cdot)=f_{\mathcal{F}_{\boldsymbol{E}}( \boldsymbol{\theta})}(\cdot)\right\}.\]
The transformation matrices in \(\mathcal{E}\) will exhibit a block-diagonal structure with blocks corresponding to network layers. This is due to the permutations \(\boldsymbol{P}\) affecting both incoming and outgoing weights, but only in the sense that two incoming and two outgoing weights swap places, never changing layers. The activation-related sign flips or re-scalings occur neuron-wise, making \(\boldsymbol{T}\) a diagonal matrix that does not alter the block-diagonal structure of \(\boldsymbol{P}\).
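These transformations are easy to verify numerically. The following sketch (our illustration, not taken from the paper) permutes the hidden neurons of a small \(\tanh\) MLP, flips the signs of two of them, compensates in the outgoing weights, and checks that the network output is unchanged:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 4), nn.Tanh(), nn.Linear(4, 1))
x = torch.randn(8, 2)
y_ref = net(x)

perm = torch.tensor([2, 0, 3, 1])            # P: permute hidden neurons
sign = torch.tensor([1.0, -1.0, 1.0, -1.0])  # T: tanh sign flips
with torch.no_grad():
    net[0].weight.copy_(sign[:, None] * net[0].weight[perm])
    net[0].bias.copy_(sign * net[0].bias[perm])
    net[2].weight.copy_(net[2].weight[:, perm] * sign)  # compensate downstream

assert torch.allclose(y_ref, net(x), atol=1e-6)  # f_theta == f_{E theta}
```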
For the cardinality of the set \(\mathcal{E}\) of equioutput transformations, we can establish a lower bound that builds upon the minimum cardinality of \(\mathcal{P}\):
\[|\mathcal{E}|\geq\prod_{l=2}^{K-1}M_{l}!\cdot|\mathcal{T}_{l}|,\]
where \(|\mathcal{T}_{l}|\) denotes the number of activation-related transformations applicable to neurons in layer \(l\). From this, it becomes immediately clear that the amount of functional redundancy increases rapidly with the network size (see also Figure 1). As a result of equioutput parameter states, the MLP parameter posterior density exhibits functional redundancy in the form of symmetries for commonly used priors (see Section 4):
\[p(\boldsymbol{\theta}|\mathcal{D})=\frac{p(\mathcal{D}|\boldsymbol{\theta})p(\boldsymbol{\theta})}{p(\mathcal{D})}\overset{\text{(10), (13)}}{=}\frac{p(\mathcal{D}|\boldsymbol{E}\boldsymbol{\theta})p(\boldsymbol{E}\boldsymbol{\theta})}{p(\mathcal{D})}=p(\boldsymbol{E}\boldsymbol{\theta}|\mathcal{D}),\ \ \boldsymbol{\theta}\in\Theta,\boldsymbol{E}\in\mathcal{E}. \tag{1}\]
## 4 Efficient Sampling
These symmetric structures in the parameter posterior density suggest that sampling can be made more efficient. In the following section, we show that the PPD can theoretically be obtained from a small reference set of non-equivalent parameter states and propose an upper bound on Markov chains that suffice to sample all non-symmetric posterior modes.
### Posterior Reference Set
As introduced in Section 3, for each parameter state \(\boldsymbol{\theta}\) of an NN, there are functionally redundant counterparts \(\boldsymbol{\theta}^{\prime}\) related to \(\boldsymbol{\theta}\) by an equioutput transformation, such that \(f_{\boldsymbol{\theta}}(\cdot)=f_{\boldsymbol{\theta}^{\prime}}(\cdot)\). We can use this equivalence relation to dissect the parameter space \(\Theta\) into disjoint equivalence classes. For this, let the _reference set_
\(\mathcal{S}_{1}\) be a minimal set of representatives of each equivalence class (cf. _open minimal sufficient search sets_ in [6]). All parameter states in \(\mathcal{S}_{1}\) are functionally diverse, i.e., \(\boldsymbol{\theta},\tilde{\boldsymbol{\theta}}\in\mathcal{S}_{1}\Rightarrow \boldsymbol{\theta}\not\sim\tilde{\boldsymbol{\theta}}\), and each element in \(\Theta\) is equivalent to exactly one element in \(\mathcal{S}_{1}\). For a finite amount of equioutput transformations, as in the case of \(\tanh\)-activated MLPs (finite possibilities of sign-flip combinations of hidden neurons), the NN parameter space can then be dissected into \(|\mathcal{E}|\) disjoint _representative sets_, which contain equioutput transformations of the elements of the reference set, in the following way.
Proposition 1 (Parameter space dissection): _Let \(\mathcal{S}_{1}\) be the reference set of uniquely identified network parameter states. Then, for a finite number of equioutput transformations, it holds that the parameter space can be dissected into \(|\mathcal{E}|\) disjoint, non-empty representative sets up to a set \(\mathcal{S}^{0}\subset\Theta\), i.e.,_
\[\Theta=\left(\ \dot{\bigcup}_{j=1}^{|\mathcal{E}|}\mathcal{S}_{j}\right)\dot{\cup}\,\mathcal{S}^{0}\text{, where }\mathcal{S}_{j}\cong\left\{\boldsymbol{E}_{j}\boldsymbol{\theta}^{\prime}\ |\ \boldsymbol{\theta}^{\prime}\in\mathcal{S}_{1}\right\},\ \boldsymbol{E}_{j}\in\mathcal{E}, \tag{2}\]
_where \(\dot{\cup}\) denotes the union over disjoint sets. We use \(\mathcal{S}^{0}\) as a residual quantity to account for cases that cannot be assigned unambiguously to one of the sets \(\mathcal{S}_{j}\) because they remain unchanged even under a transformation with non-identity matrices \(\boldsymbol{E}_{j}\in\mathcal{E}\)._
The edge cases that make up \(\mathcal{S}^{0}\) exist, e.g., on the boundary of two classes [6] or in degenerate cases such as the zero vector [42, 45]. For a characterization of the involved sets, as well as a proof sketch, see Supplementary Material A.4.
Equioutput parameter states have the same posterior probabilities \(p(\boldsymbol{\theta}|\mathcal{D})=p(\boldsymbol{E}\boldsymbol{\theta}| \mathcal{D})\) if the prior is transformation-invariant; see Supplementary Material A.3, Equations (10)-(13). Moreover, equioutput parameter states produce by definition the same predictions \(p(\boldsymbol{y}^{*}|\boldsymbol{x}^{*},\boldsymbol{\theta})=p(\boldsymbol{y} ^{*}|\boldsymbol{x}^{*},\boldsymbol{E}\boldsymbol{\theta})\) for any \(\boldsymbol{E}\in\mathcal{E}\). Thus, the following corollary holds.
Figure 1: Example of \(\tanh\)-activated MLPs. _Left_: Cardinality lower bound of the equioutput transformation set for a single hidden layer with 1 to 128 neurons; the functional redundancy factor for a network with 128 neurons is \(1.31\cdot 10^{254}\). _Right_: A ten-dimensional MLP parameter posterior (top-right corner, depicted as bivariate marginal density) exhibits symmetries, such that all red sample clusters are equioutput-related to the green cluster. The associated function spaces are identical, i.e., many posterior modes are redundant.
Corollary 1 (Reformulated posterior predictive density): _Let \(\mathcal{E}\) be finite. As in Proposition 1, consider the disjoint non-empty sets \(\mathcal{S}_{j},j\in\{1,\ldots,|\mathcal{E}|\}\), and residual space \(\mathcal{S}^{0}\). If the prior density \(p(\boldsymbol{\theta})\) is transformation-invariant, then the posterior predictive density expresses as_
\[p(\boldsymbol{y}^{*}|\boldsymbol{x}^{*},\mathcal{D})=\int_{\Theta}p(\boldsymbol{y}^{*}|\boldsymbol{x}^{*},\boldsymbol{\theta})p(\boldsymbol{\theta}|\mathcal{D})\,\mathrm{d}\boldsymbol{\theta}=|\mathcal{E}|\int_{\mathcal{S}_{j}}p(\boldsymbol{y}^{*}|\boldsymbol{x}^{*},\boldsymbol{\theta})p(\boldsymbol{\theta}|\mathcal{D})\,\mathrm{d}\boldsymbol{\theta}+\int_{\mathcal{S}^{0}}p(\boldsymbol{y}^{*}|\boldsymbol{x}^{*},\boldsymbol{\theta})p(\boldsymbol{\theta}|\mathcal{D})\,\mathrm{d}\boldsymbol{\theta} \tag{3}\]

\[\approx|\mathcal{E}|\int_{\mathcal{S}_{j}}p(\boldsymbol{y}^{*}|\boldsymbol{x}^{*},\boldsymbol{\theta})p(\boldsymbol{\theta}|\mathcal{D})\,\mathrm{d}\boldsymbol{\theta}. \tag{4}\]

The proof of Corollary 1 is given in Supplementary Material A.3. It follows from Proposition 1 and the assumption of transformation-invariant prior densities, which is often satisfied in practice (e.g., for widely-applied isotropic Gaussian priors). We can further approximate (3) by (4) as the set \(\mathcal{S}^{0}\subset\mathbb{R}^{d}\) is of negligible size (depending on \(\Theta\), potentially even with zero Lebesgue measure).
As a consequence of Corollary 1, the PPD can be obtained up to the residual set by only integrating over uniquely identified parameter states from one of the sets \(\mathcal{S}_{j}\), with a multiplicative factor \(|\mathcal{E}|\) that corrects the probability values by the amount of redundancy in the posterior. In other words, only a fraction \(1/|\mathcal{E}|\) of the posterior must be sampled in order to infer a set of uniquely identified parameter states of the NN, and thus, to obtain the full PPD. This reduces the target sampling space drastically, as illustrated in Figure 1. For example, it allows the posterior space of a single-layer, tanh-activated network with 128 neurons to be effectively reduced to a \(10^{254}\)-th of its original size.
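For \(\tanh\) activations, where each hidden neuron admits a sign flip (so that \(|\mathcal{T}_{l}|=2^{M_{l}}\)), this redundancy factor is straightforward to compute; a small sketch under that assumption:

```python
import math

def tanh_redundancy_lower_bound(hidden_sizes):
    """Lower bound on |E|: M_l! permutations times 2^{M_l} sign flips per hidden layer."""
    r = 1
    for m in hidden_sizes:
        r *= math.factorial(m) * 2 ** m
    return r

print(float(tanh_redundancy_lower_bound([128])))  # ~1.31e254, matching Figure 1
```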
In the case of an infinite amount of equioutput transformations, such as in ReLU-activated MLPs (the scaling factor \(|c|>0\) can be chosen arbitrarily), we can use similar reasoning. Only one representative set of the posterior density needs to be observed in order to capture the full functional diversity of a network because the integrals over two representative sets are identical. For a more in-depth discussion of ReLU symmetries, see, for example, [4].
#### 4.2.2 How to Obtain a Representative Set?
In practice, when using Monte Carlo to approximate Equation (4), it is not necessary to actually constrain the sampling procedure to a specific set \(\mathcal{S}_{j}\), which might indeed not be straightforward. Since any equioutput transformation is known _a priori_, we just need to be aware of the fact that each sample can theoretically be mapped to different representative sets after running the sampling procedure. Hence, for the calculation of the PPD integral, the samples can remain scattered across the various representative sets as long as they cover all functionally diverse parameter states. For
the purpose of providing better interpretability and analytic approximation of the posterior, it may still be worthwhile to explicitly remove the symmetries. In Section 5.3, we demonstrate such symmetry removal using a custom algorithm for \(\tanh\)-activated networks (Supplementary Material B).
### An Upper Bound for Markov Chains
The question remains how many samples are needed to approximate a set of uniquely identified parameter states sufficiently well. Even in a symmetry-free setting, BNN posteriors can exhibit multiple functionally diverse modes representing structurally different hypotheses, depending on the network architecture and the underlying data-generating process. For example, in Section 5.3, we discuss the case of an under-parameterized network that preserves three distinctive modes caused by its restricted capacity.
In the following, we assume \(\nu\in\mathbb{N}\) functionally diverse modes with the goal of visiting every mode or its local proximity at least once when running MCMC. As the ability to switch from one mode to another within a chain depends on various factors, such as the acceptance probability and the current state of other parameters, increasing the number of samples per chain does not necessarily correlate with the number of visited modes. We, therefore, propose to focus on the number of independent chains, rather than the number of samples per chain, to effectively control the number of visited modes.
This further allows us to derive an upper bound for the number of independent chains that are required to visit every mode at least once. The number of samples from each chain will then ultimately determine the approximation quality. In the computation of the PPD, we formulate the Monte Carlo integration over all samples from all chains simultaneously [30]. In practice, given a user-defined number of maximal resources \(\rho\) (e.g., CPU cores), the following proposition provides a lower bound on the probability that the number of chains \(\mathcal{G}\) necessary to visit every mode remains below the resource limit of the user (i.e., \(\mathcal{G}<\rho\)).
Proposition 2 (Probabilistic bound for sufficient number of Markov chains): _Let \(\pi_{1},\ldots,\pi_{\nu}\) be the respective probabilities of the \(\nu\) functionally diverse modes to be visited by an independently started Markov chain and \(\Pi_{J}:=\sum_{j\in J}\pi_{j}\). Then, given \(\rho\) chains,_
\[\mathbb{P}(\mathcal{G}<\rho)\geq 1-\rho^{-1}\left\{\sum_{q=0}^{\nu-1}(-1)^{ \nu-1-q}\sum_{J:|J|=q}(1-\Pi_{J})^{-1}\right\}. \tag{5}\]
The proof can be found in Supplementary Material A.6. Note that this bound is independent of the NN architecture and only depends on the assumptions about the number and probabilities of functionally diverse modes \(\nu\), disregarding symmetric copies. Proposition 2 can be used to calculate the number of MCMC chains given certain assumptions - for example, from domain knowledge, or in a worst-case scenario calculation - and thus provides practical guidance for MCMC sampling of MLPs. Judging by the comparably high predictive performance of
local approximations such as LA and DE [28, 29], we conclude that a small amount of functional modes is reasonable to assume in practice. Our qualitative experiments in Section 5 support this supposition.
As an example of applying Proposition 2, assume \(\nu=3\) functionally diverse modes in a reference set with \(\pi_{1}=0.57,\pi_{2}=0.35,\pi_{3}=0.08\) (chosen to represent a rather diverse functional mode set). An upper bound of \(\rho=1274\) chains ensures that we observe all functionally diverse modes with probability \(\mathbb{P}(\mathcal{G}<\rho)\geq 0.99\).
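As an illustrative sketch, the bound in Equation (5) can be evaluated numerically. The helper name `chain_bound` below is ours, and the last decimals may differ slightly from the quoted example depending on rounding conventions.

```python
import itertools

def chain_bound(pi, rho):
    # Evaluates the right-hand side of Equation (5): a lower bound on
    # P(G < rho) given mode probabilities pi and rho independent chains.
    nu = len(pi)
    inner = 0.0
    for q in range(nu):                       # q = |J| runs from 0 to nu - 1
        sign = (-1) ** (nu - 1 - q)
        for J in itertools.combinations(range(nu), q):
            inner += sign / (1.0 - sum(pi[j] for j in J))
    return 1.0 - inner / rho

pi = [0.57, 0.35, 0.08]                       # the example mode probabilities
print(chain_bound(pi, rho=1274))              # lower bound, approximately 0.99
```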
## 5 Experiments
We now investigate our theoretical findings and compare the resulting approach to single-chain MCMC and DE. In all experiments8, we employ a Bayesian regression model with a normal likelihood function, standard normal prior for parameters \(\mathbf{\theta}\), and a truncated standard normal prior restricted to the positive real line for the variance of the normal likelihood, which we treat as a nuisance parameter. Depending on the task, we either use a No-U-Turn sampler [22] with \(2^{10}\) warmup steps to collect a single sample from the posterior or derive the maximum-a-posteriori estimator using a gradient-based method (details are given in Supplementary Material C.2, C.3).
Footnote 8: [https://anonymous.4open.science/r/efficient_mcmc_sampling_in_bnns_symmetry-6FAF/](https://anonymous.4open.science/r/efficient_mcmc_sampling_in_bnns_symmetry-6FAF/)
### Performance Comparison
In our first experiment, we demonstrate the predictive performance of BNNs, where the PPD is calculated based on MCMC sampling, using the derived upper bound for the number of chains (ours). In this case, we collect one sample per chain for \(G\) chains, and thus \(G\) samples in total. This is compared to MCMC sampling collecting \(G\) samples from a single chain (s.c.), and DE with ten ensemble members on three synthetic datasets (\(\mathcal{D}_{S}\), \(\mathcal{D}_{I}\), and \(\mathcal{D}_{R}\)) as well as benchmark data from [11] (for dataset details and additional results on LA,
| Dataset | \(f_{1}\): MCMC (ours) | \(f_{1}\): MCMC (s.c.) | \(f_{1}\): DE | \(f_{2}\): MCMC (ours) | \(f_{2}\): MCMC (s.c.) | \(f_{2}\): DE |
|---|---|---|---|---|---|---|
| \(\mathcal{D}_{S}\) | **-0.53** (\(\pm\) 0.09) | -0.56 (\(\pm\) 0.11) | -0.58 (\(\pm\) 0.11) | **-0.59** (\(\pm\) 0.12) | **-0.59** (\(\pm\) 0.12) | -2.13 (\(\pm\) 0.03) |
| \(\mathcal{D}_{I}\) | **0.79** (\(\pm\) 0.06) | 0.65 (\(\pm\) 0.07) | 0.56 (\(\pm\) 0.06) | **0.91** (\(\pm\) 0.09) | **0.91** (\(\pm\) 0.09) | -2.02 (\(\pm\) 0.02) |
| \(\mathcal{D}_{R}\) | 0.64 (\(\pm\) 0.10) | **0.75** (\(\pm\) 0.11) | -1.46 (\(\pm\) 0.06) | **0.95** (\(\pm\) 0.08) | **0.95** (\(\pm\) 0.08) | -2.20 (\(\pm\) 0.02) |
| Airfoil | **-0.74** (\(\pm\) 0.04) | -0.80 (\(\pm\) 0.05) | -1.62 (\(\pm\) 0.03) | **0.92** (\(\pm\) 0.05) | 0.72 (\(\pm\) 0.10) | -2.17 (\(\pm\) 0.01) |
| Concrete | **-0.41** (\(\pm\) 0.05) | -0.44 (\(\pm\) 0.06) | -1.59 (\(\pm\) 0.03) | **0.26** (\(\pm\) 0.07) | 0.25 (\(\pm\) 0.07) | -2.03 (\(\pm\) 0.01) |
| Diabetes | **-1.20** (\(\pm\) 0.07) | **-1.20** (\(\pm\) 0.07) | -1.47 (\(\pm\) 0.07) | **-1.18** (\(\pm\) 0.08) | -1.22 (\(\pm\) 0.09) | -2.09 (\(\pm\) 0.04) |
| Energy | **0.92** (\(\pm\) 0.04) | 0.69 (\(\pm\) 0.12) | -1.76 (\(\pm\) 0.02) | 2.07 (\(\pm\) 0.46) | **2.38** (\(\pm\) 0.11) | -1.99 (\(\pm\) 0.02) |
| ForestF | **-1.37** (\(\pm\) 0.07) | **-1.37** (\(\pm\) 0.07) | -1.60 (\(\pm\) 0.06) | **-1.43** (\(\pm\) 0.45) | -1.69 (\(\pm\) 0.49) | -2.20 (\(\pm\) 0.02) |
| Yacht | **1.90** (\(\pm\) 0.16) | 1.29 (\(\pm\) 0.56) | -1.14 (\(\pm\) 0.14) | **3.31** (\(\pm\) 0.21) | 0.15 (\(\pm\) 0.09) | -2.18 (\(\pm\) 0.03) |

Table 1: Mean log pointwise predictive density (LPPD) values on test sets (larger is better; one standard error in parentheses). The highest performance per dataset and network is highlighted in bold.
see Supplementary Material C.1 and E.1, respectively). We use a smaller NN \(f_{1}\) with a single hidden layer containing three neurons and a larger network \(f_{2}\) with three hidden layers having 16 neurons each, both with \(\tanh\) activation. As in Section 4.2, we assume three functionally diverse modes \(\nu=3\) and mode probabilities \(\pi_{1}=0.57,\pi_{2}=0.35,\pi_{3}=0.08\) as in the given example. To demonstrate the performance of our MCMC-based PPD approximation, we measure the goodness-of-fit on the test data using the log point-wise predictive density (LPPD; [18])
\[\text{LPPD}=\log\int_{\Theta}p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{\theta})p(\mathbf{\theta} |\mathcal{D})\,\mathrm{d}\mathbf{\theta}\approx\log\Bigg{(}\tfrac{1}{G}\sum_{g=1} ^{G}p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{\theta}^{(g)})\Bigg{)}, \tag{6}\]
where \(\mathbf{\theta}^{(1)},\ldots,\mathbf{\theta}^{(G)}\) are \(G\) samples obtained across all chains via MCMC sampling from the parameter posterior density \(p(\mathbf{\theta}|\mathcal{D})\). Equation (6) is evaluated at each test point \((\mathbf{x}^{*},\mathbf{y}^{*})\). Table 1 reports the mean LPPD across \(N^{*}\) independent test points for each combination of dataset and sampling scheme (see Supplementary Material C.1 for details). Our results clearly indicate that using only a moderate number of Markov chains, following our approach, yields equal or even better performance than single-chain MCMC and DE in all but two experiments.
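For illustration, a minimal sketch of the Monte Carlo approximation in Equation (6) is given below; `mean_lppd` is a name of our choosing, and the toy log-likelihood array merely stands in for the values produced by an actual sampler.

```python
import numpy as np

def mean_lppd(log_lik):
    # log_lik[g, n] = log p(y*_n | x*_n, theta^(g)) for G posterior samples
    # and N* test points. Equation (6) is computed per test point via a
    # numerically stable log-sum-exp, then averaged as reported in Table 1.
    m = log_lik.max(axis=0)
    pointwise = m + np.log(np.mean(np.exp(log_lik - m), axis=0))
    return pointwise.mean()

rng = np.random.default_rng(0)
toy = rng.normal(loc=-1.0, scale=0.3, size=(1274, 50))  # stand-in values only
print(mean_lppd(toy))
```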
### Practical Evaluation of Corollary 1
Next, we investigate the property derived in Corollary 1 using our proposed upper bound of chains, again with the assumption from the example in Section 4.2. To this end, we analyze the PPD for dataset \(\mathcal{D}_{I}\), using network \(f_{2}\). For every newly collected sample in the MCMC run, the updated PPD is computed approximately on a two-dimensional (input/output) grid. Then, the Kullback-Leibler (KL) divergence between consecutive densities is averaged over the grid of input values of \(f_{2}\) (details in Supplementary Material D.1). As shown in Figure 2, despite the size of the network \(f_{2}\) and the high amount of equioutput parameter states \(|\mathcal{E}|=\left(16!\cdot 2^{16}\right)^{3}\approx 2.58\cdot 10^{54}\), the PPD converges after notably fewer than \(|\mathcal{E}|\) samples and plots of the function space indicate the saturation of functional diversity already after 1274 samples from as many chains.
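A minimal sketch of this convergence diagnostic is given below, assuming a grid-based PPD estimator; `ppd_estimate` is a hypothetical placeholder for the sampler-specific computation.

```python
import numpy as np

def mean_kl(p, q, eps=1e-12):
    # Discrete KL divergence KL(p || q) between two PPD estimates evaluated
    # on the same output grid, averaged over the grid of input values.
    p = p / p.sum(axis=-1, keepdims=True)   # normalise per input location
    q = q / q.sum(axis=-1, keepdims=True)
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

# ppd_estimate(g) would return the PPD on an (inputs x outputs) grid using
# the first g samples; the convergence trace is then
# [mean_kl(ppd_estimate(g + 1), ppd_estimate(g)) for g in range(1, G)].
```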
### Posterior Symmetry Removal
So far, we have mainly been concerned with the predictive performance of MCMC sampling. Yet, mapping all samples to a joint representative set, as characterized in Section 4.1, has the potential to reduce the effective weight space enormously, facilitating interpretability and possibly even analytical approximation. For this, we propose a custom algorithm for \(\tanh\)-activated MLPs as a proof-of-concept. Our algorithm removes symmetries in a data-dependent manner and thus minimizes the number of remaining modes in the representative set (details in Supplementary Material B).
We demonstrate the efficacy of the approach for \(f_{1}\) in two experiments A and B for datasets \(\mathcal{D}_{S}\) and \(\mathcal{D}_{I}\), respectively. For experiment A (Figure 3, top) we visualize the neuron parameter space along the steps of the proposed algorithm (for details, see Algorithms 1-4 in Supplementary Material B). Different colors encode the current neuron index (i.e., one of \(\{1,2,3\}\)) in the hidden layer of the respective neuron parameter vector. Initially, symmetries of the posterior densities are clearly noticeable (Figure 3a), and the neuron parameter vectors
Figure 3: For experiment A (top row), the parameter subspaces of hidden neurons are visualized in their initial state as obtained when running MCMC (a), and after their transformation and reassignment during the steps of Algorithm 4 (b, c). Different colors encode the current neuron index in the hidden layer. By reassigning neurons, Algorithm 4 effectively finds an optimal reference set and allows to separate the multi-modal complex univariate marginal density (d) into three univariate densities. For experiment B (bottom row), the MLP parameter posterior density after symmetry removal results in a tri-modal system, here illustrated as a bivariate plot (f). Investigating these modes, we find that all are functionally diverse, i.e., represent a different hypothesis of the dataset (g-i). Combined, they form the full function space (j).
Figure 2: Convergence of MCMC depicted as the change in KL-divergence on original (black) and log-scale (blue) when consecutively adding another sample from a new and independent chain and re-estimating the posterior density. Small overlaying plots: approximated PPD of the network after \(2^{0}\), \(2^{4}\), \(2^{8}\), and \(G=1274\) samples; darker colors correspond to higher probabilities.
are distributed identically (Figure 3d). Upon the first step of the algorithm (Figure 3b), the parameter space is effectively halved as a consequence of removing the \(\tanh\)-induced sign-flip symmetry, and three clusters remain. The second (clustering) step removes all permutation symmetries from the full posterior density by assigning areas of this parameter subspace to the neurons of the model. This is depicted in Figure 3c and clearly shows the separation of states by different cluster colors. In Figures 3d and 3e, the univariate marginal density of each neuron's incoming weight \(w_{2i1}\) reveals their reassignment to the parameter space. After running our algorithm, each neuron exhibits a distinct unimodal density, resulting in a unimodal density in the full parameter space (visualization in Supplementary Material E.3). We can conclude that only one functionally diverse mode exists in this case, which should also be recovered by local approximations like LA.
For experiment B (Figure 3, bottom), we focus on the visualization of the full parameter posterior density obtained after the application of the symmetry removal algorithm (Figure 3f). Three functionally diverse modes, represented by different colors, remain in the BNN posterior. We can now interpret these modes in function space by further clustering the transformed samples using a spectral clustering approach (details in Supplementary Material D.2). Figures 3g-j visualize the network parameter states in the function space, revealing three functionally diverse hypotheses the network can potentially learn from the given dataset. Such knowledge allows for a better-suited approximation by, e.g., a mixture of Laplace approximations (MoLA; [13]), as shown in Supplementary Material E.2. Note that approaches focusing on a single mode, such as standard LA, would have captured only a third of the functional diversity. For meaningful UQ, it is thus imperative that all functionally diverse modes are accounted for.
## 6 Discussion
We showed that the PPD for Bayesian MLPs can be obtained from just a fraction of the parameter space due to the existence of equioutput parameter states. Together with an upper bound on the number of MCMC chains to guarantee the recovery of every functionally diverse mode, our approach paves the way towards exact uncertainty quantification (up to a Monte Carlo error) in deep learning. Furthermore, we demonstrate the use of symmetry removal and present a proof-of-concept approach to map samples of \(\tanh\)-activated MLPs to a representative set. This _post-hoc_ procedure improves the interpretability of the symmetry-free posterior density drastically and facilitates analytical approximations. As a future research direction, we plan to investigate whether our MCMC sampling approach can be improved by initializing the sampling states in an informative way via ensemble training, building upon insights in [19, 36, 46].
#### Acknowledgments.
LW is supported by the DAAD programme Konrad Zuse Schools of Excellence in Artificial Intelligence, sponsored by the German Federal Ministry of Education and Research. |
2307.15090 | Understanding Forward Process of Convolutional Neural Network | This paper reveal the selective rotation in the CNNs' forward processing. It
elucidates the activation function as a discerning mechanism that unifies and
quantizes the rotational aspects of the input data. Experiments show how this
defined methodology reflects the progress network distinguish inputs based on
statistical indicators, which can be comprehended or analyzed by applying
structured mathematical tools. Our findings also unveil the consistency between
artificial neural networks and the human brain in their data processing
pattern. | Peixin Tian | 2023-07-27T00:37:18Z | http://arxiv.org/abs/2307.15090v2 | # Understanding the Forward Process of Convolutional Neural Networks.
###### Abstract
Despite widespread adoption and application in various domains, interpretability has remained a major challenge since the inception of this field. Understanding the decision-making process of these models, the dynamic changes occurring within a model as data is processed, and the appropriate adjustments of model parameters to cater to specific requirements can be quite challenging. Consequently, the lack of confidence in deep learning models engenders a diminished level of trust, impeding both their advancement in research and their practical implementation. This research paper delves into a comprehensive analysis of the rotational transformations observed in the forward process of convolutional neural networks (CNN). It elucidates the activation function as a discerning mechanism that unifies and quantizes the rotational aspects of the input data. Experiments show how this defined methodology reflects the network's progressive distinction of inputs based on statistical indicators, which can be comprehended or analyzed by applying structured mathematical tools. Our findings also unveil the consistency between artificial neural networks and the human brain in their data processing patterns. Since the emergence of ChatGPT, its ability to exhibit some autonomous thinking and anti-human tendencies has increased people's concerns about the use of this technology. These discoveries empower us to quantify and intervene in the decision-making process of neural networks through the utilization of mathematical tools, thus bolstering the dependability and trustworthiness of artificial intelligence in applications.
**Keywords:** Explainable artificial intelligence, Convolutional neural network
## 1 Introduction
Artificial intelligence has made remarkable strides in domains such as computer vision, natural language processing and autonomous systems. Yet the inner workings of most AI models remain obscure and incomprehensible to humans, hindering our ability to scrutinize and evaluate their reasoning and decisions. This raises concerns about the trustworthiness, accountability, and ethics of AI systems, particularly when they are deployed in critical domains that affect human lives, property, and security. Explainable artificial intelligence (XAI) is a research field that aims to ensure that AI systems can justify their decisions [1, 2]. However, the current explanations of models are inadequate, and understanding and influencing the network decision processes remain major hurdle.
XAI methods can be classified into local and global approaches. Local approaches examine how different attributes affect the model decision for a given input, while global approaches measure the importance of high-level concepts for the model prediction. Local Interpretable Model-Agnostic Explanations [3] (LIME) is a local approach that generates a new dataset by perturbing a specific input, recording the complex model's predictions on the perturbed samples, and then fits a simple model to this new dataset to explain the complex model locally. Lundberg and Lee [4] introduced a novel approach called Shapley Additive Explanations (SHAP) for interpreting model predictions. This method determines the significance of each feature in making a specific prediction. The framework encompasses a unique set of additive feature importance measures and provides theoretical substantiation for the existence of a singular solution within this category, which possesses a multitude of advantageous characteristics. Testing with Concept Activation Vectors [5] (TCAV) is a global method that defines high-level concepts (such as texture, color, and shape) and calculates their activation vectors at different levels of convolutional neural networks, thereby obtaining a concept sensitivity score that indicates the degree of the model's dependence on these concepts [1, 6, 7, 8, 9]. Besides the above methods, XAI has witnessed numerous inspiring endeavors such as Grad-CAM [10], DeepLIFT [11] and Integrated Gradients [12]. Various strategies have been developed to address neural networks' lack of interpretability from a systematic standpoint [13, 14, 15, 16], yielding encouraging outcomes. In contrast to current XAI methods, this paper aims to understand and describe the data transformations in the network, rather than just the influence of a certain attribute on the model prediction.
A powerful and versatile category of neural networks, known as convolutional neural networks, has been applied to a range of domains with great success. CNNs are composed of several fundamental elements: convolutional layers, pooling layers, activation layers, normalization layers, and fully connected layers. By integrating residual structures into CNNs [31], they have managed to surpass the inherent depth limitations of conventional networks, leading to improved performance. CNNs continue to be widely utilized due to their exceptional ability to generalize effectively and their remarkable computational efficiency, along with other advantages [17, 18].
Our goal was to elucidate the mechanisms of data transformation and prediction generation in a trained convolutional neural network across its various structures and levels. To accomplish this, we devise novel interpretations of some operations that
preserve the input-output relationship. We designed experiments to demonstrate their validity. On this basis, we developed methods to assess and manipulate the model output using these interpretations.
Libby and Buschman [19] have revealed that the brain avoids interference in memory processing by selectively rotating sensory representations. This paper investigates how artificial neural networks process data by analyzing activation functions in terms of rotation. The representation reveals the data processing within the network and shows its similarity to information processing in the brain.
We acknowledge the limitations of our method in this paper. Our analysis is restricted to two-dimensional discrete convolution and the rectified linear unit [20, 21, 22] (ReLU) activation function, which are widely used in CNNs, but not universal. Therefore, our conclusion may not generalize to other types of CNNs.
## 2 Methods
Here we introduce a selection of methods and concepts that are employed in subsequent experiments, accompanied by comprehensive explanations.
### Two-dimensional discrete convolution
Two-dimensional discrete convolution is the most common convolution operation in convolutional neural networks when processing images. It can be expressed as below [23]:
\[H(i,j)=\sum_{m}\sum_{n}F(m,n)G(i-m,j-n) \tag{1}\]
After training, the model parameters are fixed, thus each convolution operation can be seen as a linear equation, and multiple-channel convolution filters form a system of linear equations. For instance, consider a 2 \(\times\) 2 convolution filter with 4 channels, \(G^{\prime}\), and input feature \(F^{\prime}\):
\[G^{\prime 1}=\left[\begin{array}{cc}k_{1}^{1}&k_{2}^{1}\\ k_{3}^{1}&k_{4}^{1}\end{array}\right],\quad G^{\prime 2}=\left[\begin{array}{cc}k_{1}^{2}&k_{2}^{2}\\ k_{3}^{2}&k_{4}^{2}\end{array}\right],\quad G^{\prime 3}=\left[\begin{array}{cc}k_{1}^{3}&k_{2}^{3}\\ k_{3}^{3}&k_{4}^{3}\end{array}\right],\quad G^{\prime 4}=\left[\begin{array}{cc}k_{1}^{4}&k_{2}^{4}\\ k_{3}^{4}&k_{4}^{4}\end{array}\right] \tag{2}\]
The corresponding linear equation system can be described as (ignore bias):
\[\begin{cases}F^{\prime}*G^{\prime 1}=k_{4}^{1}\cdot x_{1}+k_{3}^{1}\cdot x_{2}+k_{2}^{1}\cdot x_{3}+k_{1}^{1}\cdot x_{4}=y^{1}\\ F^{\prime}*G^{\prime 2}=k_{4}^{2}\cdot x_{1}+k_{3}^{2}\cdot x_{2}+k_{2}^{2}\cdot x_{3}+k_{1}^{2}\cdot x_{4}=y^{2}\\ F^{\prime}*G^{\prime 3}=k_{4}^{3}\cdot x_{1}+k_{3}^{3}\cdot x_{2}+k_{2}^{3}\cdot x_{3}+k_{1}^{3}\cdot x_{4}=y^{3}\\ F^{\prime}*G^{\prime 4}=k_{4}^{4}\cdot x_{1}+k_{3}^{4}\cdot x_{2}+k_{2}^{4}\cdot x_{3}+k_{1}^{4}\cdot x_{4}=y^{4}\end{cases} \tag{3}\]
Using matrix operations to express this process:
\[\left[\begin{array}{cccc}k_{4}^{1}&k_{3}^{1}&k_{2}^{1}&k_{1}^{1}\\ k_{4}^{2}&k_{3}^{2}&k_{2}^{2}&k_{1}^{2}\\ k_{4}^{3}&k_{3}^{3}&k_{2}^{3}&k_{1}^{3}\\ k_{4}^{4}&k_{3}^{4}&k_{2}^{4}&k_{1}^{4}\end{array}\right]\left[\begin{array}{ c}x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end{array}\right]=\left[\begin{array}{c}y^{1}\\ y^{2}\\ y^{3}\\ y^{4}\end{array}\right] \tag{4}\]
Matrix theory thus enables the analysis of two-dimensional discrete convolution operations in this representation. In CNNs, the number of channels commonly exceeds the number of input variables, and each linear equation constrains the output differently. We measure the influence of each equation on the output, then select and solve the most relevant ones to describe how the data vary within the network.
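To make the correspondence concrete, the following minimal sketch (with arbitrary illustrative coefficients) verifies that the single-location convolution of equation (3) coincides with the matrix-vector product of equation (4).

```python
import numpy as np

# Row i of K holds the coefficients (k4^i, k3^i, k2^i, k1^i) of equation (4).
K = np.arange(1.0, 17.0).reshape(4, 4)
x = np.array([1.0, 2.0, 3.0, 4.0])         # flattened 2x2 input patch (x1..x4)

y_matrix = K @ x                            # the linear-system view

# Same result from the convolution view, with G'_i = [[k1, k2], [k3, k4]].
patch = x.reshape(2, 2)
y_conv = []
for row in K:
    G_i = row[::-1].reshape(2, 2)           # recover [[k1, k2], [k3, k4]]
    y_conv.append((np.flip(G_i) * patch).sum())   # flip kernel, then correlate
assert np.allclose(y_matrix, np.array(y_conv))
```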
### ReLU
ReLU [20, 21, 22] activation function dominates the current landscape of deep convolutional neural networks. Its appeal lies in its ability to accelerate convergence and promote sparsity in deep learning models. Its calculation is simple:
\[f(x)=max(0,x) \tag{5}\]
The activation function is typically regarded as a kind of trigger mechanism, but this discrete representation hampers analysis. Hence, this paper offers an alternative angle for analyzing ReLU: we show that ReLU performs a rotation of vectors in high-dimensional space. This selective rotation operation eludes a single closed-form equation, but its effects on specific inputs can be assessed.
\[\mathbf{R}(\left[\begin{array}{c}x_{1}\\ x_{2}\\ \cdots\\ x_{n}\end{array}\right])=\left[\begin{array}{c}max(0,x_{1})\\ max(0,x_{2})\\ \cdots\\ max(0,x_{n})\end{array}\right] \tag{6}\]
\[\theta=\arccos\frac{\mathbf{v}\cdot\mathbf{v}^{\prime}}{\|\mathbf{v}\|\| \mathbf{v}^{\prime}\|} \tag{7}\]
As in equation (6), we define this selective rotation as \(\mathbf{R}(\mathbf{v})\), which shares its input and output with ReLU. \(\mathbf{R}(\mathbf{v})\) rotates a spatial vector by the minimum angle required to align it with, or orthogonalize it to, the positive quadrant of the coordinate system defined by the basis vectors. Its scaling factor is \(\frac{k^{\prime}}{k}\), where \(k\) and \(k^{\prime}\) are the magnitudes of the original vector and of its projection onto the all-positive subspace, respectively. In this way, we can calculate the angle \(\theta\) between high-dimensional space vectors before and after applying the activation function, as in equation (7), and analyze its mathematical properties. We will demonstrate how defining such a rotation operation reveals important insights in the following sections.
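A minimal sketch of this quantification is given below; the convention of returning \(\pi/2\) for fully non-positive inputs (where \(\mathbf{R}(\mathbf{v})\) is the zero vector) is our assumption.

```python
import numpy as np

def relu_rotation_angle(v, eps=1e-12):
    # Angle theta of equation (7) between a vector v and its image R(v)
    # under the selective rotation of equation (6), i.e. elementwise ReLU.
    v_prime = np.maximum(v, 0.0)
    denom = np.linalg.norm(v) * np.linalg.norm(v_prime)
    if denom < eps:               # fully non-positive input maps to the zero
        return np.pi / 2          # vector; treat it as orthogonal (our choice)
    cos = np.clip(np.dot(v, v_prime) / denom, -1.0, 1.0)
    return np.arccos(cos)

v = np.array([0.8, -0.5, 0.3, -1.2])
print(relu_rotation_angle(v))     # in radians; 0 when v is already all-positive
```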
### Fully connected (FC) layer
We focus on the FC layers that are closest to the output here. Those that are inserted in the middle of the network for other purposes are beyond the scope of this paper.
The fully connected layer linearly transforms the data from the deep neural network and outputs the final distribution for classification tasks [24]. The layer parameters fit the network's data set distribution globally. The network assigns each data point to the most likely category channel and the confidence score reflects the similarity of this assignment.
Consider a network that classifies its inputs into \(n\) categories based on the output of its final hidden layer, which has \(c\) units. The FC layer has a \(c\times n\) weight matrix that maps the hidden output to the output distribution. To do this, the network first flattens the hidden output into a \(1\times c\) vector and then multiplies it by the weight matrix. The network then compares the resulting vector with each of the \(n\) category-specific vectors in the FC layer, which are fixed after training, and chooses the category with the highest similarity score as the output.
Projecting the data by Principal Component Analysis [25] (PCA) onto a low-dimensional subspace of maximum variance in the original space partly retains the global spatial information of the high-dimensional distribution, revealing the structure of the categories rather than their pairwise similarities or differences.
## 3 Results
To reveal the hidden shape of the data, we first reduced its dimensions and assumed it formed a cone. Then we projected it onto different planes and looked for the sparsest clusters of points, which marked the vertex of the cone. The principal axis was automatically calculated based on the vertex coordinates and the mean of the data. For clear visualization, we transformed the data to a new coordinate system that has the cone's vertex as the origin and the principal axis as the z-axis.
We applied dimensionality reduction to the parameters of the fully connected (FC) layer in models trained on the ImageNet dataset [26]. As illustrated in figure 1, the parameters of the FC layers in these models do not follow a uniform distribution within the three-dimensional subspace. Instead, they exhibit a distinctive conical pattern. Interestingly, this pattern can be observed in previous studies exploring restricted Boltzmann machines [27], suggesting that it is not a result of the convolutional operations.
We utilized Python 3.8.2 for data analysis and visualization, and employed the libraries sklearn [28] (v.1.2.2), NumPy [29] (v.1.22.3) and Matplotlib [30] (v.3.5.2). The neural network was implemented in the PyTorch (v.1.11.0) framework.
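As a minimal sketch of the dimensionality-reduction experiment that follows, the FC weight matrix of a pre-trained ResNet can be projected to three dimensions; the exact weight-loading API depends on the torchvision version.

```python
from torchvision.models import resnet34
from sklearn.decomposition import PCA

# Each row of the FC weight matrix is one ImageNet category's 512-d template.
# Recent torchvision uses weights="IMAGENET1K_V1"; older versions (such as the
# v1.11-era stack named above) use pretrained=True instead.
model = resnet34(weights="IMAGENET1K_V1")
W = model.fc.weight.detach().numpy()            # shape (1000, 512)
coords = PCA(n_components=3).fit_transform(W)   # 3-D points, cf. Figure 1
print(coords.shape)
```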
**Cone-shape distribution.** Figure 1 provides a comparative analysis between the distribution of FC layer parameters, which have been reduced to a three-dimensional subspace, and the standard conical surface. We utilized the ResNet [31] series networks, renowned for their widespread usage and excellent performance. ResNet exhibits a relatively straightforward architecture, comprising fundamental components necessary for the efficacy of deep neural networks, including convolutional layers, pooling layers, normalization layers, fully connected layers, and residual connections. We employed pre-trained ResNet models obtained from two prominent providers, PyTorch [32] and Microsoft [33]. Networks with matching depths from both providers exhibited comparable performance on the ImageNet dataset.
Besides ResNet, we present dimensionality reduction results for VGG19 [34] and Vision Transformer [35; 36], thereby showcasing the ubiquity of this phenomenon across diverse neural network architectures. The subsequent analysis will attempt to explain the mechanism by which the conical distribution emerges.
To better evaluate the parameters after dimensionality reduction, we divided them into \(n\) equal parts along the z-axis. For each part, we calculate the area of the circumscribed circle of the convex polygon formed by projecting its points onto the xy-plane. This area varies with the cross-section of the distribution, and how closely the area-change curve approaches a standard quadratic curve reflects the similarity between the points and a conical distribution. A sufficient number of points is needed to calibrate each cross-section of the data distribution; some slices, however, were too sparse to provide reliable information.
Figure 1: **Visualization of the parameters distribution.** We performed PCA on the FC layer parameters and normalized them to three dimensions. The yellow surface displays a standard conical distribution, with a principal axis indicated by the orange vector. We used open-source ResNet series models from **a,** Pytorch and **b,** Microsoft as well as **c,** VGG19 and **d,** VisionTransformer.
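A minimal sketch of this slice-wise measurement is given below; it uses OpenCV's minimum enclosing circle as a stand-in for the circumscribed circle of the convex polygon, which is an assumption on our part.

```python
import numpy as np
import cv2  # OpenCV, for the minimum enclosing circle of each slice

def slice_circle_areas(coords, n=35):
    # coords: (N, 3) points after PCA; split into n bins along z and return,
    # per bin, the area of the minimum enclosing circle of the xy-projection.
    z = coords[:, 2]
    edges = np.linspace(z.min(), z.max(), n + 1)
    areas = np.full(n, np.nan)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        pts = coords[(z >= lo) & (z < hi), :2].astype(np.float32)
        if len(pts) < 3:          # slice too sparse to calibrate reliably
            continue
        _, r = cv2.minEnclosingCircle(pts)
        areas[i] = np.pi * r ** 2
    return areas

# A least-squares quadratic fit (np.polyfit(bin_centres, areas, deg=2)) then
# scores how closely the area-change curve follows a standard quadratic.
```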
We performed experiments using models of different depths provided by PyTorch. According to the official documentation, PyTorch trained these models with relatively consistent parameters, thus minimizing the influence of any specific training process. As shown in figure 2, the distribution after dimensionality reduction generally approached a cone as depth increased. The aforementioned experiments do not constitute a proof, but approximating the distribution as a cone facilitates the analysis of rotations.
**Rotating cone.** When studying the cone-shaped distribution, a natural approach is to examine the distribution along the directions parallel and perpendicular to the principal axis. Through PCA, we projected the FC layer parameters onto the xz and xy planes [37], resulting in two-dimensional projected data distributions denoted as \(P_{xz}\) and \(P_{xy}\), respectively.
\(P_{xz}\) and \(P_{xy}\) are utilized to ascertain the relative positions of points corresponding to a specific category within the overall distribution and the cross-sectional projection
Figure 2: **Variation in the area of the parametric convex polygon’s circumcircle after dimensionality reduction.** The parameters of the dimensionality-reduced FC layer were partitioned into \(n\) segments. By plotting the circumcircle area of each segment’s convex polygon, we computed the deviation and p-value from the standard quadratic curve. We employed least-squares to evaluate the discrepancy between the data points and the curve. \(n=35;\text{ResNet34},P=2.8\times 10^{-4};\text{ResNet50},P=4.8\times 10^{-5}; \text{ResNet101},P=5.2\times 10^{-5};\text{ResNet152},P=5.5\times 10^{-7}.\) ***\(P<0.001\).
plane. We examine the vertical and polar coordinates of the same point under two scenarios: varying model providers with identical model architecture, and the same model provider with different model architectures. Subsequently, we perform statistical analysis to determine the relative positions of these points within the model distribution. In the case of \(P_{xz}\), the cone-shaped distribution exhibits non-uniformity along its principal axis. Consequently, our analysis focuses on the relative positions of points within the complete dataset, rather than their positions within uniformly distributed intervals. As for \(P_{xy}\), we transform the Cartesian coordinates into polar coordinates and rotate the coordinate system to align the first category precisely with the polar axis.
Fig. 3: **Inter-model similarity of FC layers.****a,**\(P_{xy}\) and \(P_{xz}\) are the results of projecting the data onto the xy-plane and xz-planes respectively. **b,** Convert \(P_{xy}\) in Cartesian coordinates to polar coordinates. **c,**\(P_{xz}\) and \(P_{xy}\) are halved by y-coordinate and first-class position. **d,** Calculate the proportion of categories with consistent partition to assess the similarity between inter-model FC layers. **e,** Comparison of projection \(P_{xy},P_{xz}\) similarity in the FC layer between PyTorch and Microsoft providers on the identical architecture model. **f,** Comparison of projection \(P_{xy},P_{xz}\) similarity in the FC layer provided by PyTorch on the identical architecture model.
As shown in figure 3, we partition the data within the entire dataset into two segments. The data from the same category tend to cluster together on the projection plane aligned with the principal axis, across models from various providers. On the other hand, the clustering probability decreases significantly on the projection plane perpendicular to the principal axis, indicating a more random distribution. Remarkably, models trained by PyTorch with its consistent training methodology show high consistency in both axial directions. These results suggest that neural networks perform a selective rotation operation that is related to the training process. This operation rotates data from different categories by varying angles in high-dimensional space, increasing the randomness in the projection planes orthogonal to the rotation axis.
We illustrate this phenomenon using two scenarios of projecting the high-dimensional data onto two-dimensional planes. In the first scenario, the distributions of \(P_{xz}\) and \(P^{\prime}_{xz}\) are more consistent, while the similarity between \(P_{xy}\) and \(P^{\prime}_{xy}\) is close to random. In contrast, in the second scenario, both directions show high consistency. These findings reveal a novel aspect of neural network learning and suggest a possible way to improve their performance.
We define ReLU as a selective rotation operation, as in equation (6). This operation is challenging to capture with a single unified equation, but we can quantify each ReLU operation based on specific inputs.
**Data rotation throughout the network.** In this paper, we analyze hidden-layer features from two perspectives: feature maps and channels. Since the network's output relies on cross-channel information, inter-channel relationships reflect the decision-making process better than individual feature maps. Different channels store distinct features extracted from the input, and ReLU selectively rotates the hidden-layer data. This rotation causes data from different channels to exhibit diverse characteristics. As a result, the cone-shaped distribution resulting from the rotation is observed among the data across channels.
We investigate the inter-channel parameters of the network's hidden layers using PCA. To achieve this, we flatten the feature maps within each channel into one-dimensional vectors and apply PCA to reduce the data dimensionality across all channels. We observe a cone-shaped distribution that emerges gradually with increasing layer depth (Figure 4). The data exhibit a uniform distribution before passing through the activation layer, indicating that this activation function causes the cone-shaped distribution.
We define ReLU as a rotation operation and show how this explains the network's data processing. The network transforms the input by channel-wise [38, 39] operations and feature-wise [40] down-sampling. According to equation (4), when the number of channels increases, convolution introduces conflicting information, so reducing the number of channels can be advantageous to some extent. We analyze the high-dimensional vector angles among the dominant channels in the output of the nearest ReLU layer.
Figure 4: **Inter-channel dimensional reduction by PCA.****a-f,** The result of applying PCA to the input and to the output of the first convolutional layer, the first ReLU layer, the first, the second and the fourth residual layer. **g,** The overall structure of ResNet34. **A-F** represents the data position of **a-f**.
**Quantize rotation.** Figure 5**a** shows the rotation angle of data points for each category along the principal axis of the cone-shaped distribution, sorted from bottom to top. We show that ReLU processing leads to a substantial variation in the rotational probability of class-specific data points at different locations within the cone. This variation reflects the "density" of data points across the entire distribution, which is a key factor for identifying their positions. Because rotation introduces more uncertainty perpendicular to the principal axis, assessing data points based on their distribution-wide positions is more reasonable. Our experiment reveals a way to distinguish different classes along the principal axis.
Confidence levels also influence the rotation angles. We arrange the inputs in ascending order of confidence and measure the rotation angles of the dominant channels in the final layer. Figure 5**b** shows that as the rotation angle after ReLU processing grows, the confidence level increases correspondingly. This observation implies that the model assigns more confident classifications to inputs that undergo more substantial rotations.
Data rotation thus reveals intrinsic characteristics of data categories: we investigated how the rotation angle of the data relates to its position in the distribution and to the network's confidence in its assessment. These findings suggest that rotation can serve as a mathematical handle for representing and manipulating those characteristics.
It is challenging to track data rotation within the network due to the fluctuations in channel numbers and interlayer correspondence. The significant disparity in the count of unrotated classes highlights the network's varied input processing at different positions within the distribution. However, the experiment solely captures the data rotation dynamics in the later stage of the model.
**Role of rotation.** We also approach the analysis of feature maps from an alternative perspective. We consider each row of a feature map as an individual point in high-dimensional space, and collectively, the feature map represents a cluster of points.
Fig. 5: **Relationship between rotation angle and input property.****a,** The changes of rotation angle along principal axis. **b,** The changes of rotation angle as the confidence level increases.
This viewpoint is commonly used in video analysis, where each frame can be regarded as a fundamental unit, and we extend it to image analysis as well. By recognizing the correlation between adjacent rows in an image, this perspective enables us to effectively locate distinct parts within the image.
We employed the t-Distributed Stochastic Neighbor Embedding [41] (t-SNE) technique for dimensionality reduction, aiming to preserve the intrinsic distance relationships among points in high-dimensional space. Figure 6 presents the t-SNE visualization results for the same input data across different layers of the model. Initially, the data points exhibit distinct clusters, which gradually merge and become indistinguishable as we move towards the final layers, reflecting the process of information integration within a single feature map. This process also reveals the occurrence of rotational phenomena. Unlike a simple linear one, this selective rotation operation varies depending on the input. It involves the gradual mixing and alignment of feature maps within a channel during the rotation and transformation process.
When classifying, the model must distinguish the main object from the background noise, so we divide the data into two parts: the object and the noise. As the model rotates and transports features, the information within a single feature map converges and becomes indistinguishable, and we can exploit this characteristic to intervene in the model's decision-making in the early stages.
Figure 6: **Inner-feature dimensional reduction by t-SNE.****a-f,** The result of applying t-SNE to the input and to the output of the first convolutional layer, the first ReLU layer, the first,the second and the third residual layer. **g,** The overall structure of ResNet34. **A-F** represents the data position of **a-f**.
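A minimal sketch of this row-wise embedding is given below; `row_embedding` is our name, and the hook-based capture of the activation is assumed rather than shown.

```python
from sklearn.manifold import TSNE

def row_embedding(fmap_channel):
    # fmap_channel: a (H, W) NumPy array holding one channel of a hidden
    # activation. Each of its H rows is treated as a point in W-dimensional
    # space; t-SNE preserves the neighbourhood structure of these points.
    H = fmap_channel.shape[0]
    return TSNE(n_components=2, perplexity=min(5, H - 1)).fit_transform(
        fmap_channel)

# The activation itself would be captured with a forward hook on the layer of
# interest, e.g. row_embedding(acts["layer1"][0, c].numpy()).
```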
**Intervene making decision.** We utilized the K-means [42] clustering algorithm for binary classification of the dimensionally reduced data. We then zeroed each cluster's corresponding channels or feature-map rows and recomputed the prediction. Figure 7**b** shows the decision intervention results for the same input at different depths of the hidden layers in the ResNet34 model. The feature-wise method outperformed the channel-wise method. Interestingly, the success rates of both methods displayed an inverted U-shaped trend with increasing network depth. This phenomenon reflects the data's mixing and separation dynamics during the selective rotation process in the network. As the rotation progresses, the data tends to become more homogeneous within the network initially, followed by a re-clustering
Fig. 7: **Comparing correction in channel-wise and feature-wise methods.****a,** Depicts how channel-wise and feature-wise approaches intervene a network’s misclassification of an image. The network’s confidence and output vary depending on the depth of intervention within the network. **b,** Shows the number of incorrect decisions that are corrected by intervening at different stages of the network and removing duplicate samples. From left to right, the outputs of the first convolutional layer, the first, second, third, and fourth residual layers, and the total are shown. The feature-wise and channel-wise methods are represented by blue and green bars, respectively, while the pink bar indicates the total count of misclassified samples by the model. The experiment involved 10,000 images.
phenomenon that often facilitates improved classification. By integrating suitable detection and decision algorithms, effective manipulation of data decisions within the network can yield favorable outcomes.
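A minimal sketch of the channel-wise variant is given below; unlike the experiment described above, it clusters the flattened channels directly rather than their dimensionally reduced embedding, and `forward_from` is a hypothetical handle to the remaining layers of the network.

```python
import torch
from sklearn.cluster import KMeans

def channel_intervention(feats, forward_from):
    # feats: (C, H, W) hidden activation captured via a forward hook.
    # Cluster the channels into two groups, zero each group in turn, and
    # resume the forward pass on the modified activation.
    C = feats.shape[0]
    flat = feats.reshape(C, -1).detach().cpu().numpy()
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(flat)
    preds = []
    for group in (0, 1):
        masked = feats.clone()
        masked[torch.as_tensor(labels == group)] = 0.0   # suppress one cluster
        preds.append(forward_from(masked))   # remaining layers of the network
    return preds

# `forward_from` stands for the tail of the model, e.g. obtained by splitting
# the network after the hooked layer.
```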
Figure 7**a** showcases the successful rectification of the model's inaccurate predictions through channel intervention preceding the initial residual layer. Despite not aligning with the ground truth labels, the succeeding layers' rectified outcomes generally correspond to the objects present in the images. Conversely, the feature-map intervention method did not effectively address the misprediction in this specific instance. As anticipated, our clustering-based decision intervention approach performed best during the initial stages of information fusion and rotation within the model.
## 4 Discussion
This study aims to elucidate the image data classification process carried out by mature networks and provides a mathematical reinterpretation of this process. We employed fundamental mathematical tools yet uncovered some valuable phenomena. Specifically, the rotational distribution and our interpretation of the ReLU activation function exhibit remarkable similarities to the selective rotation observed in the brain, offering a potential framework for a structured restatement of the forward process. This discovery implies a significant correspondence between artificial neural networks and the human brain in terms of information processing, which is truly remarkable. Moreover, interpreting ReLU as a form of selective rotation opens new avenues for quantifying and influencing network decision-making processes. This paper offers preliminary insights into these possibilities and anticipates further advancements in future research to make practical strides in this field.
Due to constraints in space and capabilities, the vast potential of this theory remains unexplored. This paper focuses solely on the ReLU activation function and does not offer a comprehensive explanation or proof for the generation process of the observed rotational distribution. Remarkably, the observed rotational distribution is not limited to networks utilizing the ReLU activation function. This suggests that the interpretive methodology employed in this study may have broader applicability to other activation functions, albeit with increased intricacy. Furthermore, this study does not track the rotational phenomenon of data throughout the network, does not elucidate the precise meaning of the term "density" in categories, and does not provide practical quantification and intervention approaches. We eagerly anticipate subsequent research that can make more significant contributions, freeing artificial intelligence researchers and engineers from the black-box nature of artificial neural networks. Such work would unveil the relationship between artificial and human intelligence, propelling the field forward.
|
2305.01844 | Bio-Inspired Simple Neural Network for Low-Light Image Restoration: A
Minimalist Approach | In this study, we explore the potential of using a straightforward neural
network inspired by the retina model to efficiently restore low-light images.
The retina model imitates the neurophysiological principles and dynamics of
various optical neurons. Our proposed neural network model reduces the
computational overhead compared to traditional signal-processing models while
achieving results similar to complex deep learning models from a subjective
perceptual perspective. By directly simulating retinal neuron functionalities
with neural networks, we not only avoid manual parameter optimization but also
lay the groundwork for constructing artificial versions of specific
neurobiological organizations. | Junjie Ye, Jilin Zhao | 2023-05-03T01:16:45Z | http://arxiv.org/abs/2305.01844v1 | # Bio-Inspired Simple Neural Network for Low-Light Image Restoration: A Minimalist Approach
###### Abstract
In this study, we explore the potential of using a straightforward neural network inspired by the retina model to efficiently restore low-light images. The retina model imitates the neurophysiological principles and dynamics of various optical neurons. Our proposed neural network model reduces the computational overhead compared to traditional signal-processing models while achieving results similar to complex deep learning models from a subjective perceptual perspective. By directly simulating retinal neuron functionalities with neural networks, we not only avoid manual parameter optimization but also lay the groundwork for constructing artificial versions of specific neurobiological organizations.
## 1 Introduction
Research has already revealed the complex information processing capabilities of mammalian retinas [1], which preprocess optical signals before sending them to higher-level visual cortices. Recent studies have also identified numerous subtypes of retinal neurons, each with unique roles that enable hosts to quickly and intelligently adapt to changing environments based on perceptions [2, 3]. Leveraging the working principles of the retina [4] has led to the development of more intelligent perception-related algorithms and impressive achievements in image processing tasks, such as high-dynamic range image tone mapping [5, 6, 7, 8].
The work discussed in [7] is a prime example of a model inspired by the workings of various types of neurons, such as horizontal and bipolar neurons, for tone mapping tasks. The authors of [7] systematically explored different aspects of retinal circuitry from a signal processing perspective, inspiring the development of a corresponding computational model. By separating images into individual channels for algorithm modulation and aggregating the results, this work achieved superior results.
Traditional tone mapping in digital image processing often involves histogram equalization, which can cause images to lose color constancy, making objects appear in different or unnatural colors after restoration. This is particularly challenging for low-light image restoration tasks. While the work in [7] yielded impressive results, it relied on mathematical formulas and was overly algorithm-oriented. Additionally, some model parameters depended on domain knowledge, requiring the authors' experience to select the most appropriate ones.
In our report, we draw inspiration from [7] to re-examine the working principles of the retina and design a network to address the low-light image restoration problem. Our network aligns with the optical signal processing flow of the different neurons, providing a clear correspondence with the neural pathway in the retina and offering a transparent explanation of the design motivation. Furthermore, this simple model benefits from the end-to-end learning philosophy, eliminating the need for manual parameter optimization. Experiments demonstrate satisfying image restoration from a subjective perceptual perspective, and we plan to improve objective metrics in future work.
## 2 Background
Low-light image processing has become a crucial area of research in recent years, as it plays a significant role in various applications such as surveillance, autonomous vehicles, nighttime photography, and even astronomical imaging. Addressing the challenges posed by low-light conditions, such as reduced visibility, increased noise, and loss of details, is essential for enhancing the performance of computer vision tasks like classification, detection, and
action recognition [9, 10, 11]. Deep learning techniques, particularly convolutional neural networks (CNNs), have demonstrated remarkable success in addressing low-light image processing challenges [12, 13].
Numerous deep learning-based methods have been proposed for low-light image enhancement, which mainly focus on various computer vision tasks. For classification purposes, the authors in [14] introduced a method that adapts to the varying illumination conditions by incorporating a spatial transformer network. Similarly, for object detection in low-light scenarios, a multi-scale feature fusion strategy was proposed in [15], which improves the detection performance by exploiting the rich contextual information. In the context of action recognition under low-light conditions, researchers in [16] developed a two-stream fusion framework that effectively captures the spatial-temporal information. Researchers in [17] proposed a sampling strategy that leverages temporal structure of video data to enhance low-light frames. Additionally, the authors in [18] proposed a light-weight CNN for enhancing low-light images that significantly reduces the computational complexity without sacrificing the classification accuracy.
Low-light image denoising is another area of interest, with various CNN-based approaches being proposed. A deep joint denoising and enhancement network was introduced in [19] to tackle the challenges of denoising and enhancing low-light images simultaneously. The authors in [19] proposed a noise-aware unsupervised deep feature learning method to improve the denoising performance by learning the noise characteristics adaptively. In [20], an attention mechanism was incorporated into the denoising network to selectively enhance the relevant features for better noise suppression.
Apart from denoising, low-light image dehazing has also been an active research topic. The work in [21] presented a multi-task learning framework that jointly addresses the dehazing and denoising problems, resulting in improved image quality. Another study [22] introduced a deep reinforcement learning-based approach for adaptive low-light image enhancement, which optimizes the enhancement parameters to achieve visually pleasing results.
To address the limitations of deep learning models, such as their large size and computational complexity, bio-inspired approaches have been gaining attention. These approaches draw inspiration from the functioning of the human visual system, particularly the retina, which is known for its efficiency and adaptability in processing visual information under different lighting conditions [1]. However, the majority of the existing research focuses on complex models, limiting their applicability in resource-constrained environments.
In this paper, we present a bio-inspired, simple neural network for low-light image restoration that is inspired by the principles of various neurons in the retina. Our proposed network aims to achieve satisfactory results while maintaining a minimal architecture, making it suitable for deployment in various systems and scenarios.
Another area that has seen advancements is low-light image color correction. The work presented in [23] proposes a deep learning-based method for unsupervised color correction in low-light conditions, where the model learns to map the color distribution of the input image to that of a target image. Similarly, [24] introduced a joint color and illumination enhancement framework using generative adversarial networks (GANs), achieving improved color fidelity and contrast in low-light images.
Furthermore, researchers have explored the potential of unsupervised and self-supervised learning methods for low-light image enhancement. In [9], an unsupervised domain adaptation approach was proposed to transfer the knowledge learned from synthetic low-light data to real-world low-light images. The authors in [25] introduced a self-supervised learning framework for low-light image enhancement, leveraging the cycle-consistency loss to generate high-quality enhanced images.
In summary, while existing deep learning-based methods have achieved impressive results in various low-light image processing tasks, their computational complexity and large model sizes often limit their practical applicability. This paper aims to address these limitations by introducing a bio-inspired, simple neural network for low-light image restoration that draws upon the principles of various neurons in the retina. Our proposed network strikes a balance between performance and computational efficiency, making it suitable for deployment in a wide range of systems and scenarios.
## 3 Method
The computational model in this study takes into account the fact that cone photoreceptors are responsible for colors, while rod photoreceptors handle illuminance. Although low-light conditions may cause images to appear monochromatic, downstream cells such as horizontal cells (HCs) and amacrine cells (ACs) still aim to create a polychromatic visual perception for survival purposes [26, 27]. The model from [7] proposes two stereotypical pathways for optical signal processing in the retina: a vertical path where signals are collected by photoreceptors, relayed by bipolar cells (BCs),
and sent to ganglion cells (GCs), and lateral pathways where local feedbacks transmit information from horizontal cells back to photoreceptors and from amacrine cells to horizontal cells.
Furthermore, cone photoreceptors consist of three types (S-cones, M-cones, and L-cones), each sensitive to different wavelengths, roughly corresponding to the red, green, and blue (RGB) channels of a color image. The study in [7] suggests an overall split-then-combine computational flowchart.
Our model also considers the electrophysiological properties of neurons in the retina. Unlike most neurons in the cerebral cortex, some retinal neurons are so small that local graded potentials can propagate from upstream synapses to downstream somas. For instance, bipolar cells directly excite ganglion cells via local graded potentials from synapses between photoreceptors and bipolar cells. We assume that photoreceptors also have a direct impact on ganglion cells, and we propose a flow of optical signal processing as shown in Fig. 1.
According to [7], the information processed by HCs can be represented by equation 1:
\[h(x,y)=I_{c}(x,y)*g(x,y;\sigma(x,y)) \tag{1}\]
However, the circuits before BCs also introduce a recursive modulation of the channel data, shown in equation 2, which can cause numerical instability:
\[b(x,y)=\frac{I_{c}(x,y)}{\alpha+h(x,y)} \tag{2}\]
To avoid this issue, we simplify the equation to a more straightforward finite impulse response form as in equation 3:
\[b(x,y)=\alpha\cdot I_{c}(x,y)+\beta\cdot I_{c}(x,y)h(x,y) \tag{3}\]
This can be further simplified to a residual form in equation 4:
\[b(x,y)=I_{c}(x,y)+h(x,y) \tag{4}\]
Next, along the optical signal processing neural circuitry, BCs exert a double-opponent effect on the signal modulated by HCs, which can be modeled as a convolution with a difference of Gaussian (DoG) kernel, as shown in equation 5:
\[v(x,y)=b(x,y)*f(x,y) \tag{5}\]
Although neural networks generally lack the constraint to exert such an effect, we only require the initialization of the corresponding convolutional layer weights to comply with a DoG kernel instance. We choose specific values for the two standard deviations and construct the filter accordingly using equation 6:
\[k=G\left(0,\sigma_{1}\right)-G\left(0,\sigma_{2}\right) \tag{6}\]
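For concreteness, the DoG construction of equation 6 takes only a few lines of Python; the kernel size and the two standard deviations below are illustrative assumptions rather than the values actually chosen.

```python
import numpy as np

def dog_kernel(size=5, sigma1=0.5, sigma2=1.0):
    """Difference-of-Gaussians kernel per equation 6: k = G(0, sigma1) - G(0, sigma2).
    Kernel size and sigmas are illustrative assumptions."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    def gauss(sigma):
        g = np.exp(-r2 / (2.0 * sigma ** 2))
        return g / g.sum()  # normalize each Gaussian to unit mass
    return gauss(sigma1) - gauss(sigma2)

k = dog_kernel()
print(k.sum())  # ~0: the DoG acts as a band-pass (center-surround) filter
```

Because each Gaussian is individually normalized, the kernel sums to approximately zero, reproducing the center-surround antagonism associated with the double-opponent effect.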
As mentioned earlier, bipolar cells relay signals via local potentials rather than action potentials, so we assume that the signals, although attenuated, may still exert influence on the ganglion cells. This assumption refines equation 5 into equation 7:
\[v(x,y)=I_{c}(x,y)+b(x,y)*f(x,y) \tag{7}\]
Based on the above computational model, we design the neural network architecture in the next section. The architecture mimics the process of optical signal processing in the retina, handling both the vertical and lateral pathways and incorporating the various cell types involved in each. By taking into account the electrophysiological properties of retinal neurons and the specific interactions between photoreceptors, bipolar cells, and ganglion cells, the model offers a comprehensive and biologically plausible representation of how the retina processes visual information, providing insights both for the development of improved artificial vision systems and for a deeper understanding of the neural mechanisms underlying human vision.
## 4 Experiment and Results
### Dataset Utilized
In our experiment, we employ the open-sourced LOw-Light (LOL) image dataset [28]. This dataset comprises 500 pairs of low-light and normal-light images, split into 485 training pairs and 15 testing pairs. The images predominantly feature indoor scenes and have a resolution of \(400\times 600\). Given that this image size is comparable to those captured by applications like low-light surveillance, we process the images without down-sampling to represent real-world scenarios.
### Neural Network Design and Configuration
The RGB channel values are processed independently. As a result, we utilize depthwise convolutions to maintain this separation [29]. The overall network structure aligns with equations 1, 4, and 7. It is worth noting that in equation 1, the \(\sigma\) parameter for convolution \(g\) relies on pixel values as the operation moves across the monochromatic input \(I_{c}(x,y)\). To reduce computation overhead, we forego determining \(\sigma\) for each location, instead letting the network learn the optimal filter weights during training.
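A minimal PyTorch sketch of such a configuration is given below. The kernel sizes and DoG parameters are assumptions; they are chosen so that the sketch happens to match the stated budget of 108 learnable parameters (75 + 27 depthwise weights plus 6 biases), but the actual layer layout may differ.

```python
import torch
import torch.nn as nn

def dog_filter(size=3, s1=0.5, s2=1.0):
    """3x3 difference-of-Gaussians tensor (equation 6); sigmas are assumptions."""
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    r2 = ax[:, None] ** 2 + ax[None, :] ** 2
    def g(s):
        k = torch.exp(-r2 / (2.0 * s * s))
        return k / k.sum()
    return g(s1) - g(s2)

class RetinaNet(nn.Module):
    """Sketch of the retina-inspired pipeline (equations 1, 4, and 7)."""
    def __init__(self):
        super().__init__()
        # HC stage (eq. 1): per-channel smoothing with learned weights
        self.g = nn.Conv2d(3, 3, kernel_size=5, padding=2, groups=3)
        # BC double-opponent stage (eq. 7): depthwise conv, DoG-initialized
        self.f = nn.Conv2d(3, 3, kernel_size=3, padding=1, groups=3)
        with torch.no_grad():
            self.f.weight.copy_(dog_filter().expand(3, 1, 3, 3))

    def forward(self, I):
        h = self.g(I)         # eq. 1: h = I_c * g
        b = I + h             # eq. 4: residual HC feedback
        return I + self.f(b)  # eq. 7: direct path plus DoG-filtered BC signal
```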
Using the above-described configuration, we train the network for 20 epochs with a batch size of 8 and a learning rate of 0.001. The network has only 108 learnable parameters, so the risk of overfitting is minimal, and we do not perform additional validation. Following training, the network is tested directly on the test set. Sample results are shown in Figure 1, illustrating that our simple network can restore low-light images with a certain level of perceptual quality. Nevertheless, the filters' limited size may not effectively extract global illumination information, resulting in restored images that are darker than the ground-truth images.
To evaluate the quality of the restored images objectively, we compute the structural similarity index measure (SSIM) for all test samples. The average SSIM is 36.2%, lower than the 76.3% to 93.0% SSIM range reported by other models. However, whereas previous research has focused on heavier, more theory-oriented models, our straightforward network remains deployable across a wide range of systems and scenarios.
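For reference, the average SSIM over the test pairs can be computed with scikit-image (version 0.19 or later is assumed for the `channel_axis` argument); the array names below are illustrative.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_ssim(restored, ground_truth):
    """Average SSIM over paired (H, W, 3) uint8 images."""
    scores = [ssim(r, g, channel_axis=-1) for r, g in zip(restored, ground_truth)]
    return float(np.mean(scores))
```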
## 5 Conclusion
We have developed a simple neural network based on the functional principles of retinal neurons and applied it to the LLIR problem. By conducting experiments on the benchmark LOL dataset for LLIR tasks, our results indicate a satisfactory degree of image restoration from a subjective, perceptual standpoint. While the objective SSIM assessment falls short of other methods, the value of drawing inspiration from biological computation to construct simple networks is evident, and future work can address the current limitations.
Figure 1: (a) Original image, (b) low-light image, and (c) enhanced image.
2302.08637 | High-frequency Matters: An Overwriting Attack and defense for
Image-processing Neural Network Watermarking | In recent years, there has been significant advancement in the field of model
watermarking techniques. However, the protection of image-processing neural
networks remains a challenge, with only a limited number of methods being
developed. The objective of these techniques is to embed a watermark in the
output images of the target generative network, so that the watermark signal
can be detected in the output of a surrogate model obtained through model
extraction attacks. This promising technique, however, has certain limits.
Analysis of the frequency domain reveals that the watermark signal is mainly
concealed in the high-frequency components of the output. Thus, we propose an
overwriting attack that involves forging another watermark in the output of the
generative network. The experimental results demonstrate the efficacy of this
attack in sabotaging existing watermarking schemes for image-processing
networks, with an almost 100% success rate. To counter this attack, we devise
an adversarial framework for the watermarking network. The framework
incorporates a specially designed adversarial training step, where the
watermarking network is trained to defend against the overwriting network,
thereby enhancing its robustness. Additionally, we observe an overfitting
phenomenon in the existing watermarking method, which can render it
ineffective. To address this issue, we modify the training process to eliminate
the overfitting problem. | Huajie Chen, Tianqing Zhu, Chi Liu, Shui Yu, Wanlei Zhou | 2023-02-17T00:50:24Z | http://arxiv.org/abs/2302.08637v1 | High-frequency Matters: An Overwriting Attack and defense for Image-processing Neural Network Watermarking
###### Abstract
In recent years, there has been significant advancement in the field of model watermarking techniques. However, the protection of image-processing neural networks remains a challenge, with only a limited number of methods being developed. The objective of these techniques is to embed a watermark in the output images of the target generative network, so that the watermark signal can be detected in the output of a surrogate model obtained through model extraction attacks. This promising technique, however, has certain limits. Analysis of the frequency domain reveals that the watermark signal is mainly concealed in the high-frequency components of the output. Thus, we propose an overwriting attack that involves forging another watermark in the output of the generative network. The experimental results demonstrate the efficacy of this attack in sabotaging existing watermarking schemes for image-processing networks with an almost 100% success rate. To counter this attack, we propose an adversarial framework for the watermarking network. The framework incorporates a specially-designed adversarial training step, where the watermarking network is trained to defend against the overwriting network, thereby enhancing its robustness. Additionally, we observe an overfitting phenomenon in the existing watermarking method, which can render it ineffective. To address this issue, we modify the training process to eliminate the overfitting problem.
Model Watermarking, Deep Steganography, Attack and defense, Image Processing.
## I Introduction
Training a high-performing deep learning model is incredibly costly, consuming massive amounts of computational resources, electricity, and human effort. In fact, training such models is so time-consuming and expensive that stealing a model is comparatively much simpler and cheaper. Beyond directly copying an original model and simply claiming its ownership, there are several genres of deep learning model theft, including model fine-tuning [1, 2, 3], model pruning [4, 5, 6], and knowledge distillation. In the face of so many potential threats, model owners have sought ways to protect their intellectual property, and one such method is model watermarking. Model watermarking is a brand-new technique that embeds a traceable digital watermark into a deep learning model and, as such, offers a promising path to model copyright protection.
The first attempt at model watermarking was made in 2017 by Yusuke Uchida _et al._[7], who proposed a method of embedding a watermark into a deep learning model. The watermark was designed to verify ownership of the model given white-box access. Since then, several exemplary works [8, 9, 10] have emerged to provide better protection for deep learning models in different scenarios up to today, where even black-box attacks can be effectively prevented.
However, most model watermarking approaches are designed to protect classification models; methods that work for image-processing models are few and far between. In short, an image-processing model takes images as its input and outputs modified images, which is quite unlike a classification model that takes in images and simply outputs the digits of a category. In 2020, Zhang _et al._[11] proposed a framework to watermark image-processing neural networks, which, to the best of our knowledge, was the first work in image-processing model watermarking. Essentially, Zhang's work combines model watermarking with deep steganography so as to forcibly embed watermark information in the outputs of the released models. Deep steganography is a technique that
Fig. 1: Attack workflow: The adversary sends the input image to the watermarked image-processing model to derive the watermarked output. If it trains a surrogate model directly with the input image and the watermarked output, the surrogate model will contain the watermark information. By overwriting the watermarked output, it removes the watermark in the output set and forges its own watermark inside the overwritten output. Finally, a watermark-free surrogate model can be trained.
uses deep learning models to hide a secret image completely within a cover image, such that it is invisible to the naked human eye. The image containing the embedded secret image is called the container image. By releasing a set of processed images containing a hidden watermark, any attacker intending to steal the model is compelled to train their own watermarked model. Subsequently, Quan _et al._[12] devised another image-processing model watermarking scheme that takes a backdoor watermarking approach. Briefly, the watermarked model functions normally when it receives normal input images. When it receives a noise trigger input, it outputs a pre-defined watermark to validate ownership. Even though recent studies show that steganography plays an essential role in the protection of images, this type of approach might still be vulnerable to attacks.
In fact, our study shows that current watermarking methods for image-processing models are not adequately robust. Specifically, we find that, due to the properties of deep steganography, watermarking for image-processing models is vulnerable to changes in the frequency domain, especially the high-frequency domain. To demonstrate this, we devise an overwriting attack that shows how existing image-processing model watermarking methods, and even deep steganography itself, can be nullified. Having designed the attack, we also design a defense against it that promises to safeguard deep learning models. The defense mitigates the overwriting attack through a new adversarial training framework that combines a watermarking method with the overwriting attack.
The general workflow of the attack is described in Figure 1. Here, a released image-processing deep learning model is watermarked such that every image it outputs contains an invisible watermark. If an attacker tries to train a surrogate model via knowledge distillation, the surrogate model will carry the watermark information automatically. However, in our attack, we train an overwriting network that overwrites the embedded watermark in the output from the watermarked model. A surrogate model is also trained with the overwritten output and the input image sets. Thus, the watermark is nullified, for the original watermark can no longer be retrieved from the output of the surrogate model.
To effectively counter the overwriting attack, we propose an adversarial training framework that deliberately incorporates an overwriting network to enhance the robustness of the watermarking network. Figure 2 demonstrates. Briefly, an overwriting network is trained along with a watermarking network, which together form a defense network. There is an adversarial training process, where the overwriting network tries to overwrite the watermark in the container image so that the retrieval network in the watermarking network cannot retrieve a valid recovered watermark from it. In contrast, the watermarking network tries to retrieve a valid recovered watermark even if the container image has been overwritten. This competitive process significantly boosts the robustness of the watermarking network.
Overall, our contributions are as follows:
1. Through frequency analysis, we have unraveled where a secret image signal is embedded in a container image. Accordingly, we devised a possible attack to nullify the currently existing image-processing model watermarking methods.
2. We devised a corresponding defense method based on adversarial training that counters the proposed attack method with a new adversarial training framework to protect the image-processing network.
3. We discovered an overfitting problem with the current watermarking method for protecting image-processing models that will nullify the protection, and fixed it by modifying the training process.
The rest of this paper is organized as follows. In section II, we demonstrate the preliminaries by listing the notations used in the context and illustrating the background and related works. We then describe our proposed method in detail in Section III. Our experiment processes and results are presented in Section IV, and they are analyzed and discussed in Section V. Lastly, we draw a conclusion about this work in Section VI.
## II Preliminary
### _Watermarking & Deep Learning_
Watermarking is a powerful method for object authentication and ownership validation. It has established strong ties with deep learning in recent times. To provide a comprehensive overview of these interactions, we have divided them into two main categories: model watermarking and image watermarking using deep learning. For the reader's convenience, a list of all the notations used in the subsequent sections can be found in Table I.
#### II-A1 Model watermarking
The existing techniques for model watermarking can be classified into three categories: model weight watermarking, backdoor watermarking, and active watermarking.
Fig. 2: Defense workflow: After embedding the watermark, an overwriting network performs an overwriting attack that yields an overwritten image that is then fed to the retrieval network. The retrieval network is then required to retrieve a valid recovered watermark from the overwritten image. Together, these networks form the defense network.
In model weight watermarking, as described in [7], the watermark is embedded into the model's weight parameters during the training process. To retrieve the watermark, one needs complete access to the model's internal structure, which is often not feasible in real-world scenarios. Furthermore, these methods are not highly resilient against attacks such as model pruning, fine-tuning, and knowledge distillation.
Backdoor watermarking, as discussed in [9], involves the deliberate alteration of a portion of the training data to create an overfitted model. This portion is referred to as the trigger dataset and can be used to validate the ownership of a suspect model. If the majority of the trigger data result in the suspect model producing the watermark labels, the model's ownership can be confirmed with just black-box access. Compared to model weight watermarking, this method is more robust against the previously mentioned attacks.
On the other hand, active watermarking methods aim to prevent model theft proactively. For instance, Tang et al. [13] proposed a method that requires the user to enter a valid serial number before using the desired model. This model is a student model derived from a teacher model and functions correctly only with a valid serial number. Although this approach is proactive in nature and protects the model, a malicious entity can still crack the serial number generator and propagate the stolen model.
#### II-A2 Image watermarking via deep learning
Image watermarking methods that leverage deep learning can be further categorized into auto-encoder image watermarking and generative adversarial network image watermarking.
Auto-encoder image watermarking, first introduced by Baluja in [14], involves the use of an embedding network and a retrieval network. The embedding network embeds a watermark or secret image into a cover image to produce a container image that is visually similar to the cover image. The retrieval network then retrieves the watermark from the container image with a tolerable error range. While these methods achieve high perceptual quality, they are susceptible to steganalysis, a detection attack that identifies hidden content in an image. Additionally, the container images generated by these methods lack robustness against distortions and malicious attacks that can result in damage or removal of the hidden content.
Generative adversarial network image watermarking is similar to auto-encoder image watermarking, but with the addition of a discriminator in the framework. During adversarial training, the discriminator is trained to detect hidden content in any image, while the embedding network is tasked with deceiving the discriminator with the container images it generates. This enhances the covertness of the container images against steganalysis. However, they remain vulnerable to distortions during transmission and malicious attacks, such as JPEG compression and overwriting attacks.
### _Related Work_
Watermarking is a powerful method for safeguarding intellectual property and preventing copyright infringement in various domains, including images [15], audio [16], and video files [17]. By embedding unique and imperceptible marks within the intellectual property, the watermark serves as evidence of ownership and can be used in legal proceedings to defend against infringement claims. Despite having a long history of use, watermarking is a relatively new application in the realm of deep learning models.
In 2017, Uchida _et al._[7] introduced a novel method for embedding a watermark into the weight parameters of a model, which was considered to be the first attempt at using watermarking techniques for the protection of intellectual property in neural networks. Despite its pioneering efforts, the method's validity in proving ownership required complete access to the parameters, or white-box access, which made it impractical for real-world scenarios. Furthermore, its robustness to different types of attacks left room for improvement.
Rouhani _et al._[8] then proposed a watermarking framework that provides protection against fine-tuning, pruning, and overwriting of watermarks in both white-box and black-box scenarios. This approach was more robust to attacks; however, it was not capable of preventing knowledge distillation attacks. Szyller _et al._[9] then introduced an approach capable of countering all types of attacks, including knowledge distillation, by making a portion of the output from the watermarked model deliberately false.
\begin{table}
\begin{tabular}{c|l} \hline Notation & Definition \\ \hline
\(\mathcal{U}\) & The overwriting network. \\ \hline
\(\mathcal{O}\) & The defense network. \\ \hline
\(\mathcal{E}\) & An embedding network that embeds a secret image into a cover image to yield a container image. \\ \hline
\(\mathcal{R}\) & A retrieval network that retrieves a recovered secret image from a container image. \\ \hline
\(\mathcal{D}\) & A discriminator network that identifies whether or not a given image contains hidden content. \\ \hline
\(\mathcal{E}_{\mathcal{U}}\) & The overwriting embedding network. \\ \hline
\(\mathcal{R}_{\mathcal{U}}\) & The overwriting retrieval network. \\ \hline
\(H\) & The original and watermark-free image-processing model. \\ \hline
\(H^{\prime}\) & A surrogate model mimicking \(H\), but trained on a watermarked dataset. \\ \hline
\(H_{0}\) & A surrogate model mimicking \(H\), but trained on a watermark-free dataset. \\ \hline
\(A\) & A set of images for the image-processing network to process. \\ \hline
\(B\) & A set of processed images originating from \(A\). \\ \hline
\(B^{\prime}\) & A set of watermarked, processed images originating from \(B\). \\ \hline
\(B^{\prime\prime}\) & A set of noisy output images from \(H^{\prime}\), originating from \(B\). \\ \hline
\(B_{\mathcal{U}}\) & A set of watermarked, processed images that have suffered the overwriting attack. \\ \hline
\(B_{0}\) & A set of processed images from a surrogate model that is not trained on the watermarked dataset. \\ \hline
\(C/c\) & A set of cover images/a cover image for concealing secrets. \\ \hline
\(C^{\prime}/c^{\prime}\) & A set of container images/a container image where secrets are hidden inside. \\ \hline
\(S/s\) & A set of secret images/a secret image to hide. \\ \hline
\(S^{\prime}/s^{\prime}\) & A set of recovered secret images/a recovered secret image. \\ \hline
\(w\) & A watermark. \\ \hline
\(w^{\prime}\) & A recovered watermark. \\ \hline
\(w_{0}\) & A pure black null image. \\ \hline
\(c^{\prime}\) & A container image that contains a watermark. \\ \hline
\(x\) & An arbitrary image that is the same size as \(c^{\prime}\). \\ \hline
\(x^{\prime}\) & A recovered image originating from \(x\). \\ \hline
\(\epsilon\) & A tolerable error range of a recovered secret image. \\ \hline
\(\mathcal{L}\) & A loss function. \\ \hline
\(\lambda\) & A weight parameter for a regularizer in the loss function. \\ \hline \end{tabular}
\end{table} TABLE I: Notations
This strategy forces the surrogate model to include the watermark information by overfitting the falsified labels, thus representing a trade-off between robustness and accuracy.
It is worth noting that all of these methods, including the one proposed by Szyller _et al._, work in a passive manner to defend against attacks, as the watermarks only serve to prove ownership after a copyright violation has already occurred, rather than preventing the violation from happening in the first place. Furthermore, these methods, along with most other model watermarking methods, are designed for classification models, with only a limited number of watermarking methods available for image-processing models.
In 2020, Zhang _et al._ proposed a watermarking method for image-processing deep learning models [11]. This method is the first of its kind and incorporates the concept of deep steganography, which is the technique of hiding information in such a way that it is not detected. The method fuses imperceptible image watermarking with model watermarking, making it effective against black-box knowledge distillation attacks. The technique of steganography has a long history, dating back centuries, and has been utilized in different domains. Baluja first introduced the use of a deep learning model for image steganography in 2017 [14]. The method involves hiding one image within another image in such a way that it is not visible to the naked eye.
Several advancements have been made in the field of deep steganography since then, with Wu _et al._ designing a framework to perform end-to-end deep steganography [18] and Zhang _et al._ developing a framework to hide an arbitrary image within another image [19]. In [20], an image watermarking method was merged with the image-generative network, using deep steganography to prevent the model from being misused.
However, as deep steganography evolves, so do the attacks. Traditional attacks on steganography include image resizing, cropping, distortion, and compression, as illustrated in Hosam's work [21]. Additionally, deep learning has been utilized to perform these attacks, as seen in Boroumand _et al._'s work [22], where a deep convolution neural network (DCNN) framework was proposed to perform deep steganalysis. Corley _et al._[23] designed a framework based on a generative adversarial network (GAN) with significant performance that is capable of purging secret images hidden in container images. Thus, similar to the battles between attacks and defenses in model watermarking, intense battles also exist in image watermarking through deep steganography.
## III Method - Attack and Defense
### _Attack Analysis_
In the current watermarking method for image-processing neural networks, deep steganography is seamlessly integrated with model watermarking. The watermarking process is composed of two key components, an embedding network \(\mathcal{E}\) and a retrieval network \(\mathcal{R}\). As illustrated in Figure 3, the watermarking process begins by training \(\mathcal{E}\) and \(\mathcal{R}\) on a set of processed images, \(B\), and a watermark image, \(w\). The embedding network \(\mathcal{E}\) then embeds the watermark image \(w\) into each image \(b_{i}\) in the set \(B\) to produce a watermarked image set, \(B^{\prime}\). This process is denoted as
\[B^{\prime}=\mathcal{E}(B,w). \tag{1}\]
An adversary will only have access to the unprocessed image set \(A\) and the watermarked processed image set \(B^{\prime}\). The adversary can then train a surrogate model, denoted as \(H^{\prime}\), using \(A\) and \(B^{\prime}\), such that the model learns to produce processed images with watermarks similar to those in \(B^{\prime}\). Finally, the retrieval network \(\mathcal{R}\) should be capable of retrieving a recovered watermark \(w^{\prime}\) from both the original watermarked image set \(B^{\prime}\) and the noisy output set \(B^{\prime\prime}\) produced by the surrogate model \(H^{\prime}\), denoted as
\[w^{\prime}=\mathcal{R}(b^{\prime}),\text{ s.t. }w^{\prime}=w+\epsilon, \text{ {iff} }b^{\prime}\in B^{\prime}\cup B^{\prime\prime}, \tag{2}\]
where \(\epsilon\) represents a tolerable error range. Meanwhile, if \(\mathcal{R}\) receives a watermark-free image \(x\) as input, \(\mathcal{R}\) will yield a null image \(w_{0}\) that is purely dark, denoted as
\[w_{0}=\mathcal{R}(x),\forall\ x\not\in B^{\prime}\cup B^{\prime\prime}. \tag{3}\]
Fig. 3: Framework of the Watermarking Network: Starting from the very left, the embedding network is trained to embed a watermark into a processed image set so as to yield a watermarked image set. An adversary trains a surrogate model with a set of raw images and the watermarked image set, and thus the surrogate model carries the watermark information. Whenever the surrogate model yields a set of noisy output, the retrieval network is able to retrieve a recovered watermark from the noisy output to validate the model's ownership.
However, deep steganography is vulnerable to perturbations in the frequency domain, as highlighted in the work of Zhang _et al._[24]. This motivates us to explore an overwriting attack on the watermarked image set \(B^{\prime}\). The objective of the attack is to generate an overwritten image set \(B_{\mathcal{U}}\) such that the retrieval network \(\mathcal{R}\) is unable to retrieve a valid watermark from \(B_{\mathcal{U}}\) or from the outputs of a surrogate model \(H^{\prime}\) trained on \(B_{\mathcal{U}}\). The objective of this attack is denoted as
\[\forall\ b_{u}\in B_{\mathcal{U}}\cup B^{\prime\prime},\mathcal{R}(b_{u})\neq w +\epsilon. \tag{4}\]
In other words, the goal here is to purge the signal of the watermark inside the container images so that the surrogate model trained on them does not contain the watermark's information. Thus, the watermarking method is nullified.
Conversely, to counter the overwriting attack, we need a watermarking network that is sufficiently robust so as to be able to retrieve a valid recovered watermark \(w^{\prime}\) under such an attack. The objective of the defense is denoted as
\[\exists\ \mathcal{R},w^{\prime}=\mathcal{R}(b_{u}),\ \text{s.t.}\ w^{\prime}=w+ \epsilon,\forall\ b_{u}\in B_{\mathcal{U}}\cup B^{\prime\prime}. \tag{5}\]
This objective requires the watermarking method to completely withstand the overwriting attack.
### _The Attack Network_
#### III-B1 The overwriting attack
As depicted in Figure 4, the overwriting attack targets the output image set \(B^{\prime}\), which contains the watermark. A deep steganographic model \(\mathcal{U}\) is trained, consisting of an embedding function \(\mathcal{E}_{\mathcal{U}}\) and a retrieval function \(\mathcal{R}_{\mathcal{U}}\). As illustrated in Algorithm 1, this model is capable of embedding an arbitrary image into another arbitrary image so as to perform an overwriting attack on the given container image set \(B^{\prime}\). The result is a set of overwritten container images \(B_{\mathcal{U}}\), from which \(w^{\prime}\) cannot be validly retrieved.
```
while \(\mathcal{L}_{\mathcal{U}}\) not converged do
  \(c_{\mathcal{U}}\leftarrow\mathcal{E}_{\mathcal{U}}(x,c^{\prime})\)  \(\triangleright\) Overwrite
  \(x^{\prime}\leftarrow\mathcal{R}_{\mathcal{U}}(c_{\mathcal{U}})\)  \(\triangleright\) Retrieve
  \(\mathcal{L}_{\mathcal{U}}\leftarrow\mathcal{L}_{\mathcal{E}}^{\mathcal{U}}(c^{\prime},c_{\mathcal{U}})+\mathcal{L}_{\mathcal{R}}^{\mathcal{U}}(c_{\mathcal{U}},x^{\prime})\)  \(\triangleright\) Get loss
  \(\mathcal{L}_{\mathcal{U}}\).back_propagation()  \(\triangleright\) Backwards
end while
```
**Algorithm 1** Train the Overwriting Network
This attack is denoted as
\[\mathcal{E}_{\mathcal{U}}(B^{\prime})=B_{\mathcal{U}},\ \text{s.t.}\ \mathcal{R}(B_{ \mathcal{U}})\neq w+\epsilon. \tag{6}\]
Since the watermark information in \(B_{\mathcal{U}}\) is lost, an attacker can train a surrogate model \(\mathcal{H}_{\mathcal{U}}\) with \(A\) and \(B_{\mathcal{U}}\), which is either watermark-free, or it contains a self-made watermark \(w_{\mathcal{U}}\).
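Rendered as a hedged PyTorch sketch, Algorithm 1 amounts to jointly optimizing \(\mathcal{E}_{\mathcal{U}}\) and \(\mathcal{R}_{\mathcal{U}}\); `loss_E` and `loss_R` stand for the embedding and retrieval losses detailed below, and all names are illustrative rather than the authors' implementation.

```python
import torch

def train_overwriting(embed, retrieve, loader, loss_E, loss_R, epochs=30, lr=1e-3):
    """Sketch of Algorithm 1: train E_U and R_U end to end.
    `loader` yields (x, c_prime) pairs: an arbitrary secret image and a
    container image to overwrite."""
    params = list(embed.parameters()) + list(retrieve.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for x, c_prime in loader:
            c_u = embed(x, c_prime)    # overwrite: hide x inside c'
            x_rec = retrieve(c_u)      # retrieve the forged secret
            loss = loss_E(c_prime, c_u) + loss_R(x, x_rec)
            opt.zero_grad()
            loss.backward()
            opt.step()
```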
_Loss Functions._ The loss function for training \(\mathcal{U}\) is defined as
\[\mathcal{L}_{\mathcal{U}}=\mathcal{L}_{\mathcal{E}}^{\mathcal{U}}+\mathcal{L} _{\mathcal{R}}^{\mathcal{U}}, \tag{7}\]
where \(\mathcal{L}_{\mathcal{E}}^{\mathcal{U}}\) and \(\mathcal{L}_{\mathcal{R}}^{\mathcal{U}}\) respectively denote the embedding loss and the retrieval loss of \(\mathcal{U}\).
\(\mathcal{L}_{\mathcal{E}}^{\mathcal{U}}\) is further decomposed into
\[\mathcal{L}_{\mathcal{E}}^{\mathcal{U}}=\lambda_{mse}l_{mse}+\lambda_{vgg}l_{ vgg}+\lambda_{freq}l_{freq}, \tag{8}\]
where the \(\lambda\)s are weight parameters.
\(l_{mse}\) is the \(L2\) loss between the cover images \(C\) and container images \(C^{\prime}\), defined as
\[l_{mse}=\sum_{c_{i}\in C,c^{\prime}_{i}\in C^{\prime}}\frac{1}{N_{c}}\|c_{i}-c ^{\prime}_{i}\|^{2}, \tag{9}\]
where \(N_{c}\) is the total number of pixels.
\(l_{vgg}\) denotes the perceptual loss between \(C\) and \(C^{\prime}\), defined as
\[l_{vgg}=\sum_{c_{i}\in C,c^{\prime}_{i}\in C^{\prime}}\frac{1}{N_{f}}\|VGG_{k }(c_{i})-VGG_{k}(c^{\prime}_{i})\|^{2}, \tag{10}\]
where \(N_{f}\) and \(VGG_{k}\) respectively denote the total number of feature neurons and the features extracted at layer \(k\).
\(l_{freq}\) is the frequency loss [25] between \(C\) and \(C^{\prime}\) for controlling consistency in the frequency domain, defined as
\[l_{freq}=\sum_{c_{i}\in C,c^{\prime}_{i}\in C^{\prime}}\frac{1}{N_{p}}\mathcal{ F}(c_{i},c^{\prime}_{i}), \tag{11}\]
where \(\mathcal{F}\) and \(N_{p}\) are the focal frequency loss function and the total number of image pairs.
\(\mathcal{L}_{\mathcal{R}}^{\mathcal{U}}\) is also further decomposed into
\[\mathcal{L}_{\mathcal{R}}^{\mathcal{U}}=\lambda_{mse}l_{mse}+\lambda_{vgg}l_{ vgg}+\lambda_{freq}l_{freq}, \tag{12}\]
Fig. 4: Framework of the Overwriting Network: The overwriting network is trained to embed an arbitrary image, or a watermark, into a container image so as to yield an overwritten image. The overwriting network also contains a retrieval network that is able to retrieve the recovered image, whereas the retrieval network in the watermarking network can only retrieve a null image from the overwritten image.
where the terms therein are identical to those in \(\mathcal{L}_{\mathcal{E}}^{\mathcal{U}}\) but applied to image pairs \((s_{i},s^{\prime}_{i})\) from the secret images \(S\) and the retrieved secret images \(S^{\prime}\).
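As a concrete illustration of equations 8-12, the three-term loss can be sketched as follows; the plain FFT-magnitude distance is only a stand-in for the focal frequency loss of [25], and `vgg_k` is assumed to be a caller-supplied feature extractor.

```python
import torch
import torch.nn.functional as F

def embedding_loss(c, c_prime, vgg_k, l_mse=1.0, l_vgg=1.0, l_freq=1.0):
    """Sketch of eq. 8: pixel, perceptual, and frequency consistency terms.
    `vgg_k` maps an image batch to VGG features at some layer k (eq. 10)."""
    mse = F.mse_loss(c_prime, c)                          # eq. 9
    vgg = F.mse_loss(vgg_k(c_prime), vgg_k(c))            # eq. 10
    # FFT-magnitude distance standing in for the focal frequency loss (eq. 11)
    freq = F.mse_loss(torch.fft.fft2(c_prime).abs(), torch.fft.fft2(c).abs())
    return l_mse * mse + l_vgg * vgg + l_freq * freq
```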
### _The Defense Network_
#### III-C1 Overview
To counter the threat of the attack network, we devised an adversarial training framework, i.e., the defense network \(\mathcal{O}\), which includes both the watermarking framework \(f_{wm}\) and \(\mathcal{U}\), with \(f_{wm}\) and \(\mathcal{U}\) configured as the two parties of a mini-max game. In short, we set up an adversarial training scheme by training \(f_{wm}\) alongside \(\mathcal{U}\) according to the settings illustrated in Figure 5:
As demonstrated in Algorithm 2, the embedding network \(\mathcal{E}\) in \(f_{wm}\) is initially trained to embed \(w\) into \(B\) to get \(B^{\prime}\). A discriminator network \(\mathcal{D}\) then determines whether \(B^{\prime}\) contains a watermark so as to make \(\mathcal{E}\) to hide the watermark more covertly. Meanwhile, the overwriting embedding network \(\mathcal{E}_{\mathcal{U}}\) is trained to embed an arbitrary image into another arbitrary image so as to perform an overwriting attack. \(B^{\prime}\) is then fed to \(\mathcal{E}_{\mathcal{U}}\) along with an arbitrary image set \(S\) of the same size of \(B^{\prime}\) to yield an overwritten image set \(B_{\mathcal{U}}\). Lastly, \(B^{\prime}\) and \(B_{\mathcal{U}}\) are passed to the retrieval network \(\mathcal{R}\) in \(f_{wm}\) to retrieve \(w^{\prime}\), and \(\mathcal{R}\) is required to produce a null image \(w_{0}\) when it receives watermark-free images from \(A\) and \(B\).
```
while \(\mathcal{L}\) not converged do
  \(B^{\prime}\leftarrow\mathcal{E}(w,B)\)  \(\triangleright\) Embed
  \(B_{\mathcal{U}}\leftarrow\mathcal{E}_{\mathcal{U}}(w_{\mathcal{U}},B^{\prime})\)  \(\triangleright\) Overwrite
  \(w_{0}\leftarrow\mathcal{R}(A;B)\)  \(\triangleright\) Null retrieval
  \(w^{\prime}\leftarrow\mathcal{R}(B^{\prime};B_{\mathcal{U}})\)  \(\triangleright\) Retrieve watermark
  \(\mathcal{L}\leftarrow\mathcal{L}_{\mathcal{U}}(A,B,B^{\prime},B_{\mathcal{U}},w,w_{\mathcal{U}},w_{0},w^{\prime})\)  \(\triangleright\) Get loss
  \(\mathcal{L}\leftarrow\mathcal{L}+\mathcal{L}_{\mathcal{O}}(A,B,B^{\prime},B_{\mathcal{U}},w,w_{\mathcal{U}},w_{0},w^{\prime})\)
  \(\mathcal{L}\).back()  \(\triangleright\) Backwards
end while
```
**Algorithm 2** The Defense Network - Initial Training Stage
At the adversarial training stage, as illustrated in Algorithm 3, only \(\mathcal{R}\) is trained, for better robustness. On top of the previous training settings, \(\mathcal{R}\) is further forced to retrieve a watermark from the noisy output \(B^{\prime\prime}\) generated by the surrogate model \(H^{\prime}\). Meanwhile, a clean surrogate model \(H_{0}\) is trained to produce clean output \(B_{0}\), which boosts the specificity of \(\mathcal{R}\). Further, \(\mathcal{R}\) must also retrieve a null image when it receives \(B_{0}\). This solves an intractable problem that we encountered in the experiments, which is further discussed in Section IV-C1.
The two-party-mini-max game is defined as
\[\begin{split}&\min_{\mathcal{E},\mathcal{R}}\;\max_{\mathcal{E}_{ \mathcal{U}}}\;\mathcal{L}(\mathcal{E},\mathcal{R},\mathcal{E}_{\mathcal{U}}) =\\ &\bigg{(}\mathbb{E}\big{[}\sum_{b_{i}\in B,s_{i}\in S}\frac{1}{N_{ c}}\big{\|}\mathcal{R}\big{(}\mathcal{E}_{\mathcal{U}}(\mathcal{E}(b_{i},w),s_{i}) \big{)}-w\big{\|}^{2}\big{]}\bigg{)},\end{split} \tag{13}\]
where \(\mathcal{E}_{\mathcal{U}}\) benefits most when \(\mathcal{R}\) cannot retrieve a valid \(w^{\prime}\), whereas \(\mathcal{E}\) and \(\mathcal{R}\) are rewarded most when the retrieved \(w^{\prime}\) is arbitrarily close to \(w\).
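One way to realize this game is to alternate gradient steps between the two parties; the sketch below assumes `E`, `R`, and `E_U` are modules, the optimizers hold the corresponding parameter groups, and all names are illustrative.

```python
import torch.nn.functional as F

def minimax_round(E, R, E_U, w, B, S, opt_wm, opt_u):
    """One alternating round of the game in eq. 13 (illustrative sketch).
    Each optimizer only steps its own player's parameters, so the other
    party is effectively frozen during that half-step."""
    # Maximizing player: the overwriter pushes R away from the true watermark.
    ret = R(E_U(E(B, w).detach(), S))
    loss_u = -F.mse_loss(ret, w.expand_as(ret))
    opt_u.zero_grad(); loss_u.backward(); opt_u.step()
    # Minimizing players: embedder and retriever recover w despite overwriting.
    ret = R(E_U(E(B, w), S))
    loss_wm = F.mse_loss(ret, w.expand_as(ret))
    opt_wm.zero_grad(); loss_wm.backward(); opt_wm.step()
```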
#### III-C2 Loss Functions
The loss function for training the defense network is defined as
\[\mathcal{L}=\mathcal{L}_{\mathcal{U}}+\mathcal{L}_{\mathcal{O}} \tag{14}\]
where \(\mathcal{L}_{\mathcal{U}}\) and \(\mathcal{L}_{\mathcal{O}}\) respectively denote the loss of training the overwriting network and the watermarking part of the defense network. Similar to Equation 7, \(\mathcal{L}_{\mathcal{U}}\) here is defined as
\[\mathcal{L}_{\mathcal{U}}=\mathcal{L}_{\mathcal{E}}^{\mathcal{U}}+\mathcal{L}_{ \mathcal{R}}^{\mathcal{U}}+l_{\mathcal{U}}. \tag{15}\]
The extra term \(l_{\mathcal{U}}\) denotes the adversarial overwriting loss that attempts to make \(\mathcal{R}\) retrieve a blank image \(w_{0}\) from \(B_{\mathcal{U}}\). This is defined as
\[l_{\mathcal{U}}=\sum_{b_{i}\in B_{\mathcal{U}}}\frac{1}{N_{c}}\|\mathcal{R}(b_{ i})-w_{0}\|^{2}. \tag{16}\]
\(\mathcal{L}_{\mathcal{O}}\) is then further decomposed into
\[\mathcal{L}_{\mathcal{O}}=\mathcal{L}_{\mathcal{E}}^{\mathcal{O}}+\mathcal{L}_{ \mathcal{R}}^{\mathcal{O}}+\mathcal{L}_{\mathcal{D}}^{\mathcal{O}}+l_{\mathcal{O}} \tag{17}\]
where the terms represent the loss of training the embedding network, the retrieval network, the discriminator, and the defense adversarial loss. Further, \(\mathcal{L}_{\mathcal{E}}^{\mathcal{O}}\) comprises the following terms:
\[\mathcal{L}_{\mathcal{E}}^{\mathcal{O}}=\lambda_{mse}l_{mse}+\lambda_{freq}l_{freq }+\lambda_{vgg}l_{vgg}+\lambda_{adv}l_{adv}, \tag{18}\]
where the former three losses are identical to those appearing in Equation 8. The last term \(l_{adv}\) represents the adversarial loss against the discriminator network, defined as
\[l_{adv}=\mathbb{E}_{b^{\prime}_{i}\in B^{\prime}}\big{[}\log(\mathcal{D}(b^{ \prime}_{i}))\big{]}. \tag{19}\]
The goal is to make the embedding network produce container images that cannot be detected by the discriminator network.
\(\mathcal{L}_{\mathcal{R}}^{\mathcal{O}}\) is decomposed into

\[\mathcal{L}_{\mathcal{R}}^{\mathcal{O}}=\lambda_{wm}l_{wm}+\lambda_{clean}l_{clean}+\lambda_{cst}l_{cst}, \tag{20}\]
where the \(\lambda\)s are weight parameters. \(l_{wm}\) denotes the watermark retrieval loss
\[l_{wm}=\sum_{b^{\prime}_{i}\in B^{\prime}}\frac{1}{N_{c}}\|\mathcal{R}(b^{\prime} _{i})-w\|^{2}+\sum_{b^{\prime\prime}_{i}\in B^{\prime\prime}}\frac{1}{N_{c}}\| \mathcal{R}(b^{\prime\prime}_{i})-w\|^{2}. \tag{21}\]
\(l_{clean}\) represents the blank extraction loss for guiding \(\mathcal{R}\) to extract only blank images from images not possessing watermark information, denoted as
\[l_{clean}=\sum_{a_{i}\in A}\frac{1}{N_{c}}\|\mathcal{R}(a_{i})-w_{0}\|+\sum_{b_{i} \in B}\frac{1}{N_{c}}\|\mathcal{R}(b_{i})-w_{0}\|, \tag{22}\]
where \(w_{0}\) is a blank image. Lastly, \(l_{cst}\) is the consistency loss for ensuring that the watermarks extracted from different images are consistent, denoted as
\[l_{cst}=\sum_{x,y\in B^{\prime}\bigcup B^{\prime\prime}}\|\mathcal{R}(x)- \mathcal{R}(y)\|^{2} \tag{23}\]
\(l_{\mathcal{O}}\) stands for the defense adversarial loss that guarantees that \(\mathcal{R}\) can retrieve \(w^{\prime}=w+\epsilon\) from the overwritten images \(B_{\mathcal{U}}\), defined as
\[l_{\mathcal{O}}=\sum_{b_{i}\in B_{\mathcal{U}}}\frac{1}{N_{c}}\|\mathcal{R}(b _{i})-w\|^{2}. \tag{24}\]
### _Discussion_
In our defense framework, the overwriting network is trained in tandem with the watermarking network to form the defense network. The purpose of the overwriting network is to overwrite the original watermark with a forged watermark, creating an adversarial relationship between the two. The retrieval network of the watermarking network must then be able to retrieve the original watermark from the overwritten images, as demonstrated in previous work [26].
As the two embedding networks embed the original and forged watermarks into the same container image in parallel, both secret images are preserved within the container image. This is because, as shown in [26], it is possible to embed multiple secret images into one cover image, albeit with a higher perceptual quality loss. Our experiments show that without proper adversarial training, the watermarking network is unable to retrieve a valid watermark. Thus, our adversarial training scheme is a crucial component of the defense framework.
## IV Experiment
### _Experimental Setup_
#### IV-A1 Dataset
Two datasets were used to train the image-processing surrogate model: the de-raining dataset from [27] and an 8-bit image dataset generated via the algorithm in [28]. The de-raining dataset is publicly available, while the 8-bit image dataset is generated from images in the ImageNet dataset. The goal of the first task was to remove raindrops from the images. The second task was to transform an input image into an 8-bit style artwork. For each task, we split the dataset into two subsets: a training set of \(4000\) images and a test set of \(1000\) images. All the images were resized to \(256\times 256\) for training.
We also took samples from the ImageNet dataset to train \(\mathcal{U}\). Here, there were \(40,000\) images in the training set and \(10,000\) images in the test set. Each image was larger than \(256\times 256\), so we randomly cropped the images down to \(256\times 256\) to enhance \(\mathcal{U}\)'s robustness.
#### IV-A2 Implementation details
\(\mathcal{E}\)'s network structure follows UNet [29]. UNet performs considerably well on translation-based and semantic segmentation tasks, so the network produces good results when there are close connections between the inputs and the outputs. CEILNet [30] was used as the model for \(\mathcal{R}\), which is believed to work well when the inputs somewhat differ from the outputs. Patch-GAN [31] was used for \(\mathcal{D}\).

Fig. 5: Framework of the Defense Network: The initial training stage is critical for the success of the proposed method, as it involves the concurrent training of both the watermarking and overwriting networks. The retrieval network must be able to extract a valid watermark from the overwritten images generated by the overwriting embedding network. During the adversarial training stage, the retrieval network is further refined through exposure to both watermarked images and watermark-free images. If the retrieval network encounters a watermarked image, it should produce a valid recovered watermark. Conversely, when it encounters a watermark-free image, it should output a null image.
In terms of the overwriting network, UNet [29] was once more used as the network structure for \(\mathcal{E}_{\mathcal{U}}\). For \(\mathcal{R}_{\mathcal{U}}\), we simply used stacks of convolutional layers, as the critical point lies in the overwriting procedure, where the embedding network plays the more crucial role. Moreover, there is no discriminator in this training process. Lastly, the watermarking network and \(\mathcal{U}\) together form the defense network \(\mathcal{O}\).
#### IV-A3 Evaluation metrics
We chose PSNR and SSIM [32] to evaluate the visual quality of the container image in comparison to its corresponding cover image. Additionally, we used normalized cross correlation (NCC) to measure whether a retrieved watermark was valid. If the NCC between a retrieved watermark and the original watermark was greater than \(0.95\), the retrieved watermark was considered to be legitimate. NCC is defined as
\[NCC=\frac{\langle\mathcal{R}(b^{\prime}_{i}),w\rangle}{\|\mathcal{R}(b^{\prime }_{i})\|\cdot\|w\|} \tag{25}\]
where \(\langle\cdot,\cdot\rangle\) denotes the inner product, and \(\|\cdot\|\) denotes the L2 norm. The success rate \(SR\) is defined as the ratio of successfully retrieved watermarks over a given number of container images.
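Equation 25 and the success-rate criterion translate directly into code; a small sketch, using the 0.95 validity threshold stated above.

```python
import numpy as np

def ncc(retrieved, watermark):
    """Normalized cross correlation between two images (eq. 25)."""
    r = retrieved.ravel().astype(np.float64)
    w = watermark.ravel().astype(np.float64)
    return float(r @ w / (np.linalg.norm(r) * np.linalg.norm(w)))

def success_rate(retrieved_batch, watermark, thresh=0.95):
    """SR: fraction of retrievals whose NCC exceeds the validity threshold."""
    return float(np.mean([ncc(r, watermark) > thresh for r in retrieved_batch]))
```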
### _Baseline Reproduction_
#### IV-B1 Training the watermarking network
First, we reproduced Zhang's method [11] as our experimental baseline. When training the watermarking networks \(\mathcal{E}\), \(\mathcal{R}\), and \(\mathcal{D}\), we set the initial learning rate of the Adam optimizer to 0.001. Here, the goal was to equip the watermarking network with the ability to embed a fixed watermark into an arbitrary image, and to retrieve the watermark from the container image. Therefore, we trained the network on the ImageNet training dataset, where one epoch contains 40,000 images. The images were randomly cropped down to \(256\times 256\) so as to increase the randomness of the input data. We set the batch size to 10, which means that the model ran through 10 images in one iteration. If there was no loss descent within 4000 iterations, we decreased the learning rate by 0.2. All \(\lambda\)s were set to 1 except for the adversarial loss weight \(\lambda_{adv}=0.01\).
Figure 6 depicts the test results of the two trained models. Each row of the two images is an instance. From left to right, each column represents the cover images \(c\), the secret images \(s\), the container images \(c^{\prime}\), the retrieved secret images \(s^{\prime}\), and the null images retrieved from watermark-free images. From the results, it is clear that the watermarking network was able to complete both the embedding and the retrieval tasks quite well. Further, a pure black image was guaranteed when the input contained no hidden content. Here, our settings slightly differ from Zhang's method: Zhang set the null image to pure white, but for ease of reading, we set the null image to black.
#### IV-B2 The adversarial stage
This stage was designed to enhance the robustness of the retrieval network by training it to retrieve the watermark from the surrogate model's processed images. Retrieval fails if this step is not conducted, because the retrieval network has never seen noisy samples from the surrogate model. However, because of a problem we discovered (discussed in Section IV-C), we changed this stage to also involve the outputs of a watermark-free surrogate model in the fine-tuning process.
To train the surrogate models, we used the de-raining and the 8-bit datasets in two parallel experiments. The paired processed images were watermarked by the watermarking network. By training the surrogate models this way, we forced them to overfit the watermark signals hidden in the processed images, such that every output from the surrogate models carried the watermark signal. Here, the batch size was set to \(20\) and the number of epochs was set to \(50\) based on preliminary experiments. The initial learning rate was set to \(0.001\), and then decayed by \(0.2\) if the loss remained unchanged for 5 epochs. Additionally, we used the same settings to train a watermark-free surrogate model with the watermark-free datasets.
After training the surrogate models, we used them to produce noisy watermarked images and watermark-free images, which were then fed into the retrieval network \(\mathcal{R}\). In the adversarial training stage, the hyperparameters for updating \(\mathcal{R}\) remained the same as those used to train the watermarking network. However, we reset the learning rate back to \(0.001\) so as to let the trained network escape the local minimum. The fine-tuning process lasted for \(50\) epochs.
As a result, the fine-tuned retrieval network was able to retrieve the watermark from both the watermarked images \(B^{\prime}\) and the surrogate models' outputs \(B^{\prime\prime}\). The results are visually presented in Figure 7. Details of the results are listed in Table II.
With each dataset, we conducted three parallel experiments: one using UNet, one using a residual network with \(16\) blocks (Res16), and one performed directly on the watermarked images \(B^{\prime}\).
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Condition/Metric & **PSNR** & **SSIM** & **NCC** & **SR**(\%) \\ \hline
**De-raining \(\times\)\(\mathcal{W}\)** & 30.49 & 0.8688 & 0.9992 & 100 \\
**De-raining \(\times\)\(\mathcal{W}\)\(\times\) UNet** & / & / & 0.9974 & 100 \\
**De-raining \(\times\)\(\mathcal{W}\)\(\times\) Res16** & / & / & 0.9877 & 100 \\
**8-bit \(\times\)\(\mathcal{W}\)** & 32.89 & 0.8739 & 0.9999 & 100 \\
**8-Bit \(\times\)\(\mathcal{W}\)\(\times\) UNet** & / & / & 0.9985 & 100 \\
**8-Bit \(\times\)\(\mathcal{W}\)\(\times\) Res16** & / & / & 0.9910 & 100 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Original Method Results
Fig. 6: Test Results of the Watermarking Network
PSNR and SSIM were used to measure the quality of the container image \(c^{\prime}\) compared to its corresponding cover image \(c\). NCC and SR were only used to validate the watermark retrieval. Remarkably, the success rate of the watermark retrieval reached \(100\%\) in each experiment, which firmly verifies the efficacy of Zhang's method.
### _Attacks_
We trained our overwriting network with the Adam optimizer on the ImageNet training set. The learning rate and batch size were set to \(0.001\) and \(20\). We decreased the learning rate by \(0.2\) if there was no loss decrease within \(2,000\) iterations. After \(20\) epochs, there was no significant loss descent, so we set the number of epochs to \(30\). The \(\lambda\)s were all set to \(1\). The cover images were randomly cropped to \(256\times 256\) so as to increase the randomness of the input data and, in turn, enhance the robustness of the overwriting network. Further, the overwriting network was trained to embed one of four selected watermarks: "flower", "copyright", "lena", or "pepper", into an arbitrary image, and then retrieve the watermark. The effect is depicted in Figure 8, where each row is an instance. From left to right, each column respectively represents the cover images \(c\), the secret images \(s\), the container images \(c^{\prime}\), and the recovered secret images \(s^{\prime}\).
With the trained overwriting network in hand, we launched an attack on the watermarked images. Generally, the watermarked images \(B^{\prime}\) were overwritten with another watermark so as to prevent the original watermark from being retrieved. The direct effect of the attack is depicted in Figure 9, where each row is an instance. From left to right, each column respectively represents the cover images \(c\), the secret images \(s\), the container images \(c^{\prime}\), and the retrieved secret images \(s^{\prime}\). Table III lists the results of the visual quality test for the container images and of the watermark retrieval under various conditions, namely, different combinations of surrogate model types and datasets. Each value is averaged over 100 randomly selected images. We performed three experiments with each of the two tasks, i.e., a direct attack on \(B^{\prime}\), an attack on the UNet surrogate model, and another on the Res16 surrogate model.
Compared to the watermarked images \(B^{\prime}\), the quality of the attacked images \(B_{\mathcal{U}}\) decreased slightly. However, the quality loss was still negligible to the human eye. Success rates across all the experiments were no greater than 10%, and half of them reached 0%, which proves the efficacy of our attack. Notably, the success rate of watermark retrieval with the Res16 surrogate model on both tasks was higher than the others, which is an interesting phenomenon.
#### IV-C1 The overfitting problem in the retrieval network
In the attack, we also tried to use the fine-tuned retrieval network to extract watermarks from images that were only processed by the overwriting network. In other words, we tried to extract watermarks from images that did not contain a watermark signal embedded by the watermarking network. Under these circumstances, the retrieval network was still able to retrieve the watermark, with decreased quality, as demonstrated in Figure 11.
Fig. 8: Test Results of the Overwriting Network
Fig. 7: Watermarks Retrieved from the Surrogate Models’ Output
\begin{table}
\begin{tabular}{l|c|c c c} \hline \hline Condition/Metric & **PSNR** & **SSIM** & **NCC** & **SR(\%)** \\ \hline
**De-raining \(\times\)**\(\mathcal{W}\)\(\times\)\(\mathcal{U}\)** & 27.24 & 0.8031 & 0.0565 & 4 \\
**De-raining \(\times\)**\(\mathcal{W}\)\(\times\)\(\mathcal{U}\)\(\times\)**UNet** & / & / & 0.0109 & 0 \\
**De-raining \(\times\)**\(\mathcal{W}\)\(\times\)\(\mathcal{U}\)\(\times\)**Res16** & / & / & 0.1527 & 10 \\
**8-bit \(\times\)**\(\mathcal{W}\)\(\times\)\(\mathcal{U}\)** & 31.91 & 0.6061 & 0.2968 & 0 \\
**8-Bit \(\times\)**\(\mathcal{W}\)\(\times\)\(\mathcal{U}\)\(\times\)**UNet** & / & / & 0.0678 & 0 \\
**8-Bit \(\times\)**\(\mathcal{W}\)\(\times\)\(\mathcal{U}\)\(\times\)**Res16** & / & / & 0.2248 & 5 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Attack Results
Fig. 9: Results of the Overwriting Attack
This indicated that, during the fine-tuning, the retrieval network had been tuned to output the pre-defined watermark whenever any secret-image signal was present in the container images, regardless of what exactly the signal represented.
Although this overfitting phenomenon lets the method withstand an overwriting attack, it is ultimately harmful: the watermarking scheme is nullified if a valid watermark can be retrieved from any container image that does not actually carry the corresponding watermark information.
We managed to overcome this problem with a fairly simple manoeuvre: we trained a watermark-free surrogate model and added its output images to the adversarial fine-tuning stage of the retrieval network. The retrieval network was thereby made to differentiate the outputs of the watermark-free surrogate model from those of the watermarked surrogate model, and to output null images accordingly. This extra step successfully mitigated the problem.
### _Defenses_
Lastly, we trained the defense network with the same hyperparameters as above. The main idea was to concurrently train a watermarking network and an overwriting network, and to make the retrieval network retrieve the watermark from the overwritten container image. Meanwhile, as the adversary, the overwriting network attempts to overwrite the watermark within the container image so that the retrieval network will only yield null images. Figure 10 illustrates the training process, where the defense network is trained to embed the watermark into an arbitrary image and to retrieve the watermark from both the container and the overwritten images. Further, the retrieval network must generate a null image if the input does not carry the embedded watermark signal.
The settings in the fine-tuning stage were almost the same as for the watermarking network's adversarial stage. Additionally, the overwriting network also participated in this stage, forcing the retrieval network to produce the watermark when it encounters an overwritten container image, and a null image when it encounters a container image generated only by the overwriting network.
Finally, we tested the defense network on two datasets with different settings. Table IV shows the test results. As shown, the container images generated by the defense network have a better visual quality than those generated by the watermarking network. Among the watermark retrieval tests, all success rates reached 100% except for the direct overwriting attack on the 8-bit dataset, which verifies the efficacy of our defense method.
Fig. 11: Overfitting Phenomenon: From left to right, the images depict the rainy image to process, the watermarked image from the overwriting network, the overwriting watermark, and the retrieved watermark from the second image by the retrieval network. The watermark can be retrieved from any container image that has some steganographic content.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Condition/Metric & **PSNR** & **SSIM** & **NCC** & **SR(\%)** \\ \hline
**De-raining**\(\times\)\(\mathcal{O}\) & 34.07 & 0.9022 & 0.9997 & 100 \\
**De-raining**\(\times\)\(\mathcal{O}\)\(\times\)\(\mathcal{U}\) & / & / & 0.9924 & 100 \\
**De-raining**\(\times\)\(\mathcal{O}\)\(\times\)\(\mathcal{U}\)\(\times\)**UNet** & / & / & 0.9915 & 100 \\
**De-raining**\(\times\)\(\mathcal{O}\)\(\times\)\(\mathcal{U}\)\(\times\)**Res16** & / & / & 0.9914 & 100 \\
**8-bit**\(\times\)\(\mathcal{O}\) & 34.54 & 0.8796 & 0.9998 & 100 \\
**8-bit**\(\times\)\(\mathcal{O}\)\(\times\)\(\mathcal{U}\) & / & / & 0.9040 & 0.81 \\
**8-bit**\(\times\)\(\mathcal{O}\)\(\times\)\(\mathcal{U}\)\(\times\)**UNet** & / & / & 0.9991 & 100 \\
**8-bit**\(\times\)\(\mathcal{O}\)\(\times\)\(\mathcal{U}\)\(\times\)**Res16** & / & / & 0.9982 & 100 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Watermark Retrieval Results Comparison among Defenses
Fig. 10: Training Process of the Defense Network: From left to right: the images to process \(A\), the processed images \(B\), the watermarked processed images \(B^{\prime}\), the overwritten watermarked processed images \(B_{\mathcal{U}}\), the overwritten watermark-free processed images \(B^{\prime}_{\mathcal{U}}\), the null images retrieved from \(A\) and \(B\), the watermarks retrieved from \(B^{\prime}\) and \(B_{\mathcal{U}}\), the null images retrieved from \(B^{\prime}_{\mathcal{U}}\), and the watermark image \(w\).
## V Discussion
### _Analysis of the Overwriting Attack_
#### V-A1 Frequency Analysis
The objective of this analysis is to investigate why the overwriting attack is capable of rendering the watermark embedded in the container image ineffective. This is achieved by calculating the Azimuthal Integral of the experimental images and comparing their frequency domains. The final data is obtained by averaging the Azimuthal Integral computed from 1,000 groups of test images, each group consisting of the container image generated by the watermarking network, the cover image, the overwritten container image, the output from the surrogate model, and the overwritten output from the surrogate model. The images within each group correspond to the same processed image.
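As a reference for this procedure, the following is a minimal numpy sketch of such an azimuthally averaged spectrum for a single grayscale image; the function name and the integer radial binning are our own choices and not taken from the watermarking implementation.

```python
import numpy as np

def azimuthal_integral(image):
    """Azimuthally averaged amplitude spectrum: the mean 2D-FFT amplitude
    at each integer radius from the centre of the shifted spectrum."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spectrum.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    total = np.bincount(r.ravel(), weights=spectrum.ravel())
    count = np.bincount(r.ravel())
    return total / np.maximum(count, 1)
```

Averaging this profile over the 1,000 test groups gives curves of the kind compared in Fig. 12.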
Typically, images processed by Deep Convolutional Neural Networks (DCNNs) display a bias in the high frequency domain. As illustrated in Figure 12(a), the container image generated by the watermarking network and its corresponding image generated by the surrogate model exhibit an abnormally high amplitude in the high frequency domain, which distinguishes them greatly from the cover image. This is the reason why the watermark can be invisibly embedded into the cover image, as human eyes are not sensitive enough to the high frequency domain of an image.
However, through fine-tuning, the retrieval network in the watermarking network can still retrieve the watermark from the surrogate model's output, despite its significant deviation from the frequency distribution of the container image. This emphasizes the significance of the fine-tuning stage. In the case of the overwritten container image, it displays a marked bias in the high frequency domain, both in comparison to the cover image and the watermarked image. A peak can be observed in the range of 160 to 175 on the frequency axis, which neutralizes the previously embedded watermark.
To further ascertain where the watermark is embedded, a low-pass filter is applied to the watermarked images. The filtered image retains its visual quality to the extent that the changes are not easily noticeable by the human eye. This filter is applied to 1,000 container images before the watermark retrieval is performed. As expected, the success rate of the retrieval drops to 0. The effect can be seen in Figure 13, where the columns in each row, from left to right, respectively show the container image, the retrieved watermark, the filtered container image, and the nullified retrieval result. This underscores the high sensitivity of the watermark retrieval to the high frequency distribution of the container image.
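A plausible frequency-domain implementation of such a low-pass filter is sketched below; the cutoff radius is an assumed parameter, which would be chosen large enough that the filtered image remains visually close to the original.

```python
import numpy as np

def lowpass(image, cutoff):
    """Keep only spatial frequencies within `cutoff` pixels of the
    spectrum centre and transform back to the image domain."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = f.shape
    y, x = np.indices((h, w))
    mask = np.hypot(y - h // 2, x - w // 2) <= cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```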
Fig. 12: Frequency Analysis
Fig. 13: Watermark Retrieval of Low-pass Filtered Container Images
### _Analysis of the Defense Network_
#### V-B1 Residue Analysis
First, we performed a residue analysis on the container images generated by our defense and watermarking networks. The details can be seen in Figure 14, where from left to right, each row respectively shows the cover image, the container image, and the residue enhanced 10 times. Intuitively, the residues of the defense network's output appear darker (i.e. better) than those of the watermarking network's output.
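The residues in Fig. 14 can be computed as a plain enhanced difference; the exact enhancement beyond the stated 10x factor is our assumption.

```python
import numpy as np

def residue(cover, container, gain=10):
    """Absolute cover/container difference, amplified by `gain`
    (10x in Fig. 14) and clipped back to valid pixel values."""
    diff = np.abs(container.astype(float) - cover.astype(float))
    return np.clip(gain * diff, 0, 255).astype(np.uint8)
```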
#### V-B2 Frequency Analysis
In the adversarial stage of the watermarking network, the retrieval network is required to retrieve the watermark from both the surrogate model's output and the container images. Due to the bias in the frequency domain of the overwritten surrogate model's output shown in Figure 12(a), the retrieval fails, because the network has never encountered any input with such a frequency distribution. However, in the defense network's fine-tuning stage, the surrogate model's output is protected by the defense network, and it shares almost the same frequency distribution as the overwritten surrogate model's output in Figure 12(b). This forces the retrieval network to become more robust to the mutable watermark signal. Further, with the assistance of the frequency loss, the container images generated by the defense network share a more similar frequency distribution with the cover images than those generated by the watermarking network. Our defense method therefore shows good robustness to the overwriting attack, even if the type of the surrogate model does not match the one used in fine-tuning. Nevertheless, it is still possible for an adversary to change the attack method to cripple the watermark at a higher cost of visual quality, through low-pass filtering for example.
### _Ablation Study_
#### V-C1 The frequency loss
The ablation experiments in Zhang _et al._[11] prove the necessity of several loss terms, including the clean loss and the consistent loss. In our defense network, the frequency loss regularizer is added to the loss function so as to guide the network to generate container images whose frequency distribution is more similar to that of the cover image. The frequency loss is the only difference between the loss terms of our defense network and those of the watermarking network. It boosts the image quality, as shown by the test results presented in Tables II and IV: both the PSNR and the SSIM values of the container images generated by our defense network are higher than those from the watermarking network. Further, as Figure 12(b) shows, the high frequency distribution of the containers from the defense network is closer to the cover image than that of the containers from the watermarking network.
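The exact form of the frequency loss is not reproduced here; one plausible formulation, which we show purely as an illustration, is an L1 penalty between the amplitude spectra of the cover and container batches.

```python
import tensorflow as tf

def frequency_loss(cover, container):
    """Hypothetical frequency regularizer: L1 distance between the 2D
    amplitude spectra of cover and container images, assuming float
    tensors of shape [batch, H, W, C]."""
    def amp(x):
        # rfft2d transforms the two innermost axes, so move channels first
        return tf.abs(tf.signal.rfft2d(tf.transpose(x, [0, 3, 1, 2])))
    return tf.reduce_mean(tf.abs(amp(container) - amp(cover)))
```

Minimizing such a term pushes the container's spectrum, including its high-frequency tail, towards that of the cover image, which is the behaviour observed for the defense network.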
#### V-C2 Fine-tuning
Unlike the original watermarking method in [11], we additionally add a group of watermark-free images, generated by the surrogate model trained on the watermark-free training dataset, to the fine-tuning dataset. This prevents the watermarking network from overfitting the steganographic signal in the container images, i.e. from retrieving the watermark regardless of which exact watermark signal lies in the container images. Figure 11 shows how this overfitting phenomenon nullifies the watermarking method; the watermark-free surrogate model's output is therefore essential in this stage. If a watermark can be retrieved from a container image that does not contain the specific watermarking signal, the method must be considered unreliable. Further, by inserting the overwriting network into the watermarking network to form the defense network, the defense network is pushed to become more robust to both the overwriting attack and the addition of noise. The embedding network hides the watermark more covertly, and the retrieval network learns to distinguish container images carrying the specific watermark or the overwritten watermark from watermark-free images, the watermark-free surrogate model's outputs, and container images carrying any other secret images.
## VI Conclusion
In this study, we present an overwriting attack that effectively nullifies the watermark embedded in images processed by image processing neural networks. Our attack is also a threat to deep steganography, as it can invisibly replace a secret image with minimal impact on the visual quality of the image. Additionally, we identify an overfitting issue in the original watermarking method and resolve it with an alternative training approach. To defend against our proposed overwriting attack, we develop an adversarial framework, the defense network, that integrates the watermarking network and the overwriting network. To the best of our knowledge, this is the first defense network resilient against the overwriting attack. Through adversarial training, the defense network is able to retrieve valid watermarks from overwritten images and from the output of the overwritten surrogate model.
There is ample room for future research in the area of image-processing model watermarking, including the development of robust watermarking techniques and of malicious attacks. Although our method demonstrates robustness against overwriting attacks, an adversary can still manipulate the frequency domain of the output to erase the embedded watermark with minimal perceptual impact. To address this issue, a more robust watermarking method that embeds the watermark in the low frequency domain of the image should be explored.
Fig. 14: Residue Analysis (10x Enhanced) |
2306.13086 | Convergence results for gradient flow and gradient descent systems in
the artificial neural network training | The field of artificial neural network (ANN) training has garnered
significant attention in recent years, with researchers exploring various
mathematical techniques for optimizing the training process. In particular,
this paper focuses on advancing the current understanding of gradient flow and
gradient descent optimization methods. Our aim is to establish a solid
mathematical convergence theory for continuous-time gradient flow equations and
gradient descent processes based on mathematical analysis tools. | Arzu Ahmadova | 2023-06-22T17:58:17Z | http://arxiv.org/abs/2306.13086v1 | Convergence results for gradient flow and gradient descent systems in the artificial neural network training
###### Abstract
The field of artificial neural network (ANN) training has garnered significant attention in recent years, with researchers exploring various mathematical techniques for optimizing the training process. In particular, this paper focuses on advancing the current understanding of gradient flow and gradient descent optimization methods. Our aim is to establish a solid mathematical convergence theory for continuous-time gradient flow equations and gradient descent processes based on mathematical analysis tools.
###### Contents
* 1 Introduction
* 2 Convergence results for gradient flow systems
* 3 Convergence results for gradient descent systems
## 1 Introduction
Artificial neural networks (ANNs) have led to performance improvements in various tasks involving rectified linear unit (ReLU) activation via gradient flow (GF) and gradient descent (GD) schemes. GF and GD systems are closely related concepts that are often used in optimization and machine learning. GF systems refer to the dynamics of a function evolving over time under the influence of its gradient. These systems can be thought of as a continuous version of gradient descent, where the parameters of the function change continuously rather than in discrete steps. GF systems are used in a variety of applications, including machine learning, physics, and chemistry. On the other hand, GD is an optimization algorithm aimed at minimizing a function. It works by iteratively adjusting the parameters of the function in the direction of the negative gradient, which is the direction of steepest decrease in the function's value; this process continues until the parameters reach a point where the gradient is very close to zero, indicating that a minimum has been found. Both schemes are used, for instance, in the training of ANNs with the ReLU activation function. Hence, GF represents the evolution of a function under its gradient, while GD is an algorithm for minimizing a function.
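To make this relation concrete, the following toy example (our illustration, not taken from the cited literature) integrates the gradient flow of \(F(x)=\frac{1}{2}\|x\|^{2}\) with a fine explicit Euler scheme and runs plain gradient descent on the same function:

```python
import numpy as np

# For F(x) = 0.5*||x||^2 we have grad F(x) = x, and gradient descent is
# the explicit Euler discretisation of the flow x'(t) = -grad F(x(t)).
grad_F = lambda x: x

x_gd = np.array([2.0, -1.0])
for _ in range(100):                  # 100 GD steps of size 0.01
    x_gd = x_gd - 0.01 * grad_F(x_gd)

x_gf, dt = np.array([2.0, -1.0]), 1e-4
for _ in range(10_000):               # fine Euler integration up to t = 1
    x_gf = x_gf - dt * grad_F(x_gf)

# Both end close to exp(-1) * x(0): the GD iterates track the flow,
# and the match improves as the step size shrinks.
```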
Standard convergence results for GF and GD systems frequently rely on the convexity of the potential function near an isolated minimum. The convergence of GF and GD processes to the global minimum for convex objective functions has been established in various settings, as demonstrated in studies such as [3, 12, 17]. For more information on abstract convergence results for GF and GD processes in non-convex settings, please refer to the studies in [4, 9] and the references cited within.
In the analysis of convergence of the gradient descent scheme, the Lojasiewicz inequality has played an important role. Note that the Lojasiewicz inequality implies that if \(F:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a real analytic function, then any bounded solution \(x\) of the gradient system converges to a critical point of \(F\) as \(t\) tends to infinity. For a deeper understanding of the Lojasiewicz convergence theorem, one can refer to references [15, 16]. A revisited version of this convergence theorem has been studied by Haraux in his article [11] and, in collaboration with Jendoubi, in his book [10] (Chapter 7).
In recent papers, researchers have been exploring various aspects of the convergence of these algorithms, such as the impact of the step size, the presence of noise, the choice of initialization, and the properties of the loss function. Some papers also compare the performance of the gradient flow and gradient descent systems under different conditions and with different types of ANNs. The main goal of these analyses is to understand how the parameters of the ANN are updated during the training process and how the training error decreases over time.
Although there are many scientific articles about the convergence analysis of GD,
there are relatively fewer articles about the convergence analysis of GF processes in the context of training ANNs. Eberle et al. in [8] showed that the objective functions in the training of ANNs with ReLU activation meet the requirements for an appropriate Lojasiewicz inequality if the target function and the input data's probability distribution are piecewise polynomial. For convergence analyses of GF and GD processes with constant target functions, refer to [5]. To understand convergence analysis of GF and GD processes in the training of ANNs with piecewise linear target functions, consult [14].
In this article, we first consider the time-continuous gradient system for all \(t\in[0,\infty)\), \(x\in\mathbb{C}^{1}([0,\infty),\mathbb{R}^{d})\) and \(F\in\mathbb{C}^{1}(\mathbb{R}^{d},\mathbb{R})\) that
\[x^{\prime}(t)=-\left(\nabla F\right)(x(t)). \tag{1}\]
If \(x\) is a solution to the gradient system (1), its time derivative is always equal to the negative gradient \(-\nabla F(x)\), which points in the direction of steepest descent. As a result, it is reasonable to expect that \(F\) is non-increasing along every solution \(x\) of (1). Indeed, when \(x\) is a solution to (1) and \(F\) is continuously differentiable, the composition \(F\circ x\) is non-increasing; moreover, if \(F\circ x\) is constant, then \(x\) itself is constant.
It is important to determine whether \(x(t)\) always converges as \(t\to\infty\). However, in 2 dimensions, it has been shown that convergence may not occur even for a \(\mathbb{C}^{\infty}\) potential \(F\) - this was conjectured by Curry [6] and proven by Palis and de Melo [18]. The inequality \(\int_{0}^{\infty}\|\left(\nabla F\right)(x(s))\|\mathrm{d}s<\infty\) is therefore false for general gradient systems: if it were true, it would imply convergence, which fails in the general smooth case. A counterexample was already exhibited by Curry in 1948, and has later been generalized in [2, Section 17.1] and [10, Section 10.3]. On the other hand, the inequality \(\int_{0}^{\infty}\|\left(\nabla F\right)(x(s))\|\mathrm{d}s<\infty\) does hold when \(F\) is analytic in a ball; this has been proven using the Lojasiewicz gradient inequality. It is worth noting that in this section, the results are revisited by assuming only \(F\in\mathbb{C}^{1}\) rather than the \(\mathbb{C}^{2}\) condition established in [11].
In Section 3, we study convergence analysis of the following GD systems:
\[x_{n+1}=x_{n}-\gamma_{n}\left(\nabla F\right)(x_{n}). \tag{2}\]
In a discrete time setting, this paper makes a significant contribution by providing a convergence proof for the GD system. The key result establishes convergence using the inequality \((F^{\prime}(x)-F^{\prime}(y))(x-y)\leq c|x-y|^{1+\alpha}\), but with weaker assumptions than previous works. Specifically, this section can be considered as a special case of the study conducted by Dereich and Kassing [7]. However, the novel aspect lies in
the consideration of the unperturbed case (the perturbation term is assumed to be zero) under assumptions weaker than those of the aforementioned paper. By demonstrating the convergence of the GD system under these relaxed conditions, the research presented in this paper expands the understanding of convergence properties in discrete-time settings. This finding is of great importance, as it opens up new avenues for applying GD algorithms in various practical scenarios.
The motivation behind this article stems from the fact that the convergence of GF and GD processes is not well understood, particularly when it comes to GF processes. To enhance the accuracy and efficiency of the training process, it is crucial to deepen our understanding of the convergence behavior of these processes.
This study aims to address this gap in the literature by focusing on the convergence analysis of both GF and GD processes in the training of ANNs. This research provides valuable insights and contributes to the field by improving the understanding of GF and GD processes and their convergence behaviors.
The remainder of this article is organized as follows. Section 2 is devoted to proving the convergence results for GF systems with the help of Theorem 1 and Theorem 2. In Section 3, we prove the convergence results for GD systems under weaker assumptions, stated in Theorem 3 and Theorem 4.
## 2 Convergence results for gradient flow systems
The following theorems are the main results of this section and establish the convergence results for the GF scheme. We note that \(F\in\mathbb{C}^{1}(\mathbb{R}^{d},\mathbb{R})\) in this section, because the \(\mathbb{C}^{1}\) condition guarantees that the gradient of the function is continuous.
**Theorem 1**.: _Let \(d\in\mathbb{N}\), \(x\in\mathbb{C}^{1}([0,\infty),\mathbb{R}^{d})\), \(F\in\mathbb{C}^{1}(\mathbb{R}^{d},\mathbb{R})\), and let \(\left\|\cdot\right\|:\mathbb{R}^{d}\to[0,\infty)\) be a norm, assume \(\inf_{t\in[0,\infty)}F(x(t))>-\infty\) and assume for all \(t\in[0,\infty)\) that_
\[x^{\prime}(t)=-\left(\nabla F\right)(x(t)). \tag{3}\]
_Then_
**(i)**: _it holds that_ \(\left(F(x(t))\right)_{t\in[0,\infty)}\) _converges in_ \(\mathbb{R}\) _as_ \(t\to\infty\) _and_
**(ii)**: _it holds that_
\[\int_{0}^{\infty}\|\left(\nabla F\right)(x(s))\|^{2}\mathrm{d}s<\infty. \tag{4}\]
Proof.: (i) Note that equation (3) and the chain rule assure for all \(t\in[0,\infty)\) that
\[\frac{\mathrm{d}}{\mathrm{d}t}\Big{(}(F\circ x)(t)\Big{)}=F^{\prime}(x(t))x^{ \prime}(t)=-\left\|(\nabla F)\left(x(t)\right)\right\|^{2}\leq 0. \tag{5}\]
This and \(\inf_{t\in[0,\infty)}F(x(t))>-\infty\) imply (i).
(ii) Integrating (5) ensures for all \(t\in[0,\infty)\) that
\[\int_{0}^{t}\|\left(\nabla F\right)(x(s))\|^{2}\mathrm{d}s=F(x(0))-F(x(t)). \tag{6}\]
This and \(\inf_{t\in[0,\infty)}F(x(t))>-\infty\) imply (ii).
**Theorem 2**.: _Let \(d\in\mathbb{N}\), \(F\in\mathbb{C}^{1}(\mathbb{R}^{d},\mathbb{R})\), let \(x\in\mathbb{C}^{1}([0,\infty),\mathbb{R}^{d})\) be bounded, let \(\left\|\cdot\right\|:\mathbb{R}^{d}\to[0,\infty)\) be a norm, and assume for all \(t\in[0,\infty)\) that_
\[x^{\prime}(t)=-\left(\nabla F\right)(x(t)). \tag{7}\]
_Then_
**(i)**: _it holds that_ \(\left(F(x(t))\right)_{t\in[0,\infty)}\) _converges in_ \(\mathbb{R}\) _as_ \(t\to\infty\) _and_
**(ii)**: _it holds that_
\[\limsup_{t\to\infty}\|\left(\nabla F\right)(x(t))\|=0. \tag{8}\]
Proof.: Continuity of \(F\) and boundedness of \(x\) imply that \(F\circ x\) is bounded. This and Theorem 1 imply (i) and
\[\int_{0}^{\infty}\|\left(\nabla F\circ x\right)(s)\|^{2}ds<\infty. \tag{9}\]
The restriction of the continuous function \(\nabla F\) to the compact set \(\overline{x([0,\infty))}\) is bounded and uniformly continuous. This and equation (7) imply that \(x^{\prime}\) is bounded. The fact that \(x\) has a bounded derivative guarantees that \(x\) is uniformly continuous. Since the function \(\nabla F\) is continuous and \(x\) is bounded and uniformly continuous, this yields that \(\nabla F\circ x\) is uniformly continuous. This and the fact that the restriction of the continuous function \(\left\|\cdot\right\|^{2}\) to the compact set \(\overline{\left(\nabla F\circ x\right)([0,\infty))}\) is uniformly continuous, prove that \(\|\nabla F\circ x\|^{2}\) is uniformly continuous on \([0,\infty)\). Combining [10, Theorem 2.1.2] together with uniform continuity of \(\|(\nabla F)\circ x\|^{2}\) and (9) imply that
\[\limsup_{t\to\infty}\|\left(\nabla F\right)(x(t))\|=0. \tag{10}\]
This proves (ii) and completes the proof of Theorem 2.
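As a numerical sanity check of these statements (our own example, not part of the proofs), one can discretise the flow for the analytic potential \(F(x)=\frac{1}{4}\|x\|^{4}\) and verify that the Riemann sum approximating the integral in (4) is controlled by the decrease of \(F\), in accordance with (6):

```python
import numpy as np

F = lambda x: 0.25 * np.dot(x, x) ** 2
grad_F = lambda x: np.dot(x, x) * x      # grad F(x) = ||x||^2 * x

x, dt, integral = np.array([1.0, 1.0]), 1e-3, 0.0
F0 = F(x)
for _ in range(200_000):                 # Euler steps up to t = 200
    g = grad_F(x)
    integral += dt * np.dot(g, g)        # Riemann sum of (4)
    x = x - dt * g
print(integral, F0 - F(x))               # agree up to discretisation error
```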
## 3 Convergence results for gradient descent systems
The following theorems are the main results in this section. We note that \(\gamma_{n}\) is used as the step size in the \(n\)-th iteration of the gradient descent algorithm, which is defined by the equation \(x_{n+1}=x_{n}-\gamma_{n}\left(\nabla F\right)(x_{n})\). The assumption \(\overline{\lim}_{n\to\infty}\gamma_{n}=0\) ensures that the step size goes to zero as the algorithm progresses, which is necessary for the algorithm to converge to a minimum of the function. The assumption \(\sum_{n=0}^{\infty}\mathds{1}_{(0,1)}(\alpha)\gamma_{n}^{\frac{1+\alpha}{1- \alpha}}<\infty\) ensures that the step size decays fast enough for the algorithm to converge. For consistency, \(F\) is also selected from \(\mathbb{C}^{1}(\mathbb{R}^{d},\mathbb{R})\) in this section.
**Theorem 3**.: _Let \(d\in\mathbb{N}\), \(c\in(0,\infty)\), \(\alpha\in(0,1]\), \(F\in\mathbb{C}^{1}(\mathbb{R}^{d},\mathbb{R})\), let \(\|\cdot\|:\mathbb{R}^{d}\to[0,\infty)\) be a norm, let \(\gamma\colon\mathbb{N}_{0}\to(0,\infty)\) satisfy that \(\overline{\lim}_{n\to\infty}\gamma_{n}=0\), let \(C\subseteq\mathbb{R}^{d}\) be convex, assume for all \(x,y\in C\) that_
\[(F^{\prime}(x)-F^{\prime}(y))(x-y)\leq c\|x-y\|^{1+\alpha}, \tag{11}\]
_assume that \(\sum_{n=0}^{\infty}\mathds{1}_{(0,1)}(\alpha)\gamma_{n}^{\frac{1+\alpha}{1- \alpha}}<\infty\), let \(x\colon\mathbb{N}_{0}\to C\), assume \(\inf_{n\in\mathbb{N}_{0}}F(x_{n})>-\infty\), and assume for all \(n\in\mathbb{N}_{0}\) that_
\[x_{n+1}=x_{n}-\gamma_{n}\nabla F(x_{n}). \tag{12}\]
_Then the following statements hold:_
**(i)**: \((F(x_{n}))_{n\in\mathbb{N}_{0}}\) _converges in_ \(\mathbb{R}\) _and_
**(ii)**: \(\sum_{n=0}^{\infty}\gamma_{n}\|\nabla F(x_{n})\|^{2}<\infty\)_._
Proof.: Note that the fundamental theorem of calculus yields for all \(n\in\mathbb{N}_{0}\) that
\[F(x_{n+1})-F(x_{n})\] \[=\int_{0}^{1}F^{\prime}(x_{n}+\lambda(x_{n+1}-x_{n}))(x_{n+1}-x_{ n})\mathrm{d}\lambda\] \[=\int_{0}^{1}F^{\prime}(x_{n})(x_{n+1}-x_{n})\mathrm{d}\lambda \tag{13}\] \[+\int_{0}^{1}\Big{(}F^{\prime}(x_{n}+\lambda(x_{n+1}-x_{n}))-F^{ \prime}(x_{n})\Big{)}(x_{n}+\lambda(x_{n+1}-x_{n})-x_{n})\frac{1}{\lambda} \mathrm{d}\lambda.\]
Hence, equation (12) yields for all \(n\in\mathbb{N}_{0}\) that
\[F(x_{n+1})-F(x_{n})=-\gamma_{n}\|\nabla F(x_{n})\|^{2} \tag{14}\] \[+\int_{0}^{1}\Big{(}F^{\prime}(x_{n}+\lambda(x_{n+1}-x_{n}))-F^{ \prime}(x_{n})\Big{)}(x_{n}+\lambda(x_{n+1}-x_{n})-x_{n})\frac{1}{\lambda} \mathrm{d}\lambda.\]
This, inequality (11), and equation (12) imply for all \(n\in\mathbb{N}_{0}\) that
\[F(x_{n+1})-F(x_{n}) \tag{15}\] \[\leq-\gamma_{n}\|\nabla F(x_{n})\|^{2}+c\int_{0}^{1}\lambda^{1+ \alpha}\|x_{n+1}-x_{n}\|^{1+\alpha}\frac{1}{\lambda}\mathrm{d}\lambda\] \[=-\gamma_{n}\|\nabla F(x_{n})\|^{2}+\frac{c}{1+\alpha}\gamma_{n} ^{1+\alpha}\|\nabla F(x_{n})\|^{1+\alpha}.\]
**Step 1:** Throughout Step 1 we assume \(\alpha=1\). Then (15) implies for all \(n\in\mathbb{N}_{0}\) that
\[F(x_{n+1})-F(x_{n})\leq-\|\nabla F(x_{n})\|^{2}\Big{(}\gamma_{n}-\frac{c}{2} \gamma_{n}^{2}\Big{)}. \tag{16}\]
This and \(\overline{\lim}_{n\to\infty}\gamma_{n}=0\) ensure that \((F(x_{n}))_{n\in\mathbb{N}_{0}}\) is eventually monotonically non-increasing. This and \(\inf_{n\in\mathbb{N}_{0}}F(x_{n})>-\infty\) imply that \(\lim_{n\to\infty}F(x_{n})\) exists in \(\mathbb{R}\). This proves (i) in the case \(\alpha=1\).
Summing over (16) gives for all \(n\in\mathbb{N}_{0}\) that
\[F(x_{0})-F(x_{n+1})=-\sum_{k=0}^{n}(F(x_{k+1})-F(x_{k}))\geq\sum_{k=0}^{n}\| \nabla F(x_{k})\|^{2}(\gamma_{k}-\frac{c}{2}\gamma_{k}^{2}). \tag{17}\]
This, \(\inf_{n\in\mathbb{N}_{0}}F(x_{n})>-\infty\) and \(\overline{\lim}_{n\to\infty}\gamma_{n}=0\) imply that
\[\sum_{k=0}^{\infty}\gamma_{k}\|\nabla F(x_{k})\|^{2}<\infty. \tag{18}\]
This proves (ii) in the case \(\alpha=1\).
**Step 2**: Throughout Step 2 we assume \(\alpha<1\). Summing over (15) shows for all \(n\in\mathbb{N}_{0}\) that
\[F(x_{n+1})-F(x_{0})\leq-\sum_{k=0}^{n}\gamma_{k}\|\nabla F(x_{k})\|^{2}+\frac {c}{1+\alpha}\sum_{k=0}^{n}\gamma_{k}^{\frac{1+\alpha}{2}}\|\nabla F(x_{k})\| ^{1+\alpha}\gamma_{k}^{\frac{1+\alpha}{2}}. \tag{19}\]
This and Hölder's inequality give for all \(n\in\mathbb{N}_{0}\) that
\[F(x_{n+1})-F(x_{0}) \leq-\sum_{k=0}^{n}\gamma_{k}\|\nabla F(x_{k})\|^{2} \tag{20}\] \[+c\Big{(}\sum_{k=0}^{n}\gamma_{k}^{\frac{1+\alpha}{2}\frac{2}{1- \alpha}}\Big{)}^{\frac{1-\alpha}{2}}\Big{(}\sum_{k=0}^{n}\gamma_{k}\|\nabla F( x_{k})\|^{2}\Big{)}^{\frac{1+\alpha}{2}}.\]
Aiming at a contradiction assume that \(\sum_{k=0}^{\infty}\gamma_{k}\|\nabla F(x_{k})\|^{2}=\infty\). Then (20), \(\inf_{n\in\mathbb{N}_{0}}F(x_{n})>-\infty\) and \(\sum_{k=0}^{\infty}\gamma_{k}^{\frac{1+\alpha}{1-\alpha}}<\infty\) ensure that
\[\infty =\Big{(}\sum_{k=0}^{\infty}\gamma_{k}\|\nabla F(x_{k})\|^{2} \Big{)}^{\frac{1+\alpha}{2}}\Big{[}\Big{(}\sum_{k=0}^{\infty}\gamma_{k}\| \nabla F(x_{k})\|^{2}\Big{)}^{\frac{1-\alpha}{2}}-c\Big{(}\sum_{k=0}^{\infty }\gamma_{k}^{\frac{1+\alpha}{1-\alpha}}\Big{)}^{\frac{1-\alpha}{2}}\Big{]}\] \[\leq F(x_{0})-\inf_{n\in\mathbb{N}}F(x_{n})<\infty. \tag{21}\]
This is a contradiction. This proves (ii) in the case \(\alpha<1\).
Next we prove (i). Summing over (15) and applying Hölder's inequality together with (ii) yield that
\[\sum_{n=0}^{\infty}\|F(x_{n+1})-F(x_{n})\| \leq\sum_{n=0}^{\infty}\gamma_{n}\|\nabla F(x_{n})\|^{2}\] \[+\frac{c}{1+\alpha}\sum_{n=0}^{\infty}\gamma_{n}^{1+\alpha}\| \nabla F(x_{n})\|^{1+\alpha} \tag{22}\] \[\leq\sum_{n=0}^{\infty}\gamma_{n}\|\nabla F(x_{n})\|^{2}\] \[+\frac{c}{1+\alpha}\Big{(}\sum_{n=0}^{\infty}\gamma_{n}^{\frac{1 +\alpha}{2}\frac{2}{1-\alpha}}\Big{)}^{\frac{1-\alpha}{2}}\Big{(}\sum_{n=0}^ {\infty}\gamma_{n}\|\nabla F(x_{n})\|^{2}\Big{)}^{\frac{1+\alpha}{2}}<\infty.\]
This implies that \(\big{(}F(x_{n})\big{)}_{n\in\mathbb{N}_{0}}\) is a Cauchy sequence, and hence, convergent in \(\mathbb{R}\). This proves (i) in the case \(\alpha<1\). The proof of Theorem 3 is thus completed.
**Theorem 4**.: _Let \(d\in\mathbb{N}\), \(c\in(0,\infty)\), \(\alpha\in(0,1]\), \(F\in\mathbb{C}^{1}(\mathbb{R}^{d},\mathbb{R})\), let \(\|\cdot\|:\mathbb{R}^{d}\to[0,\infty)\) be a norm, let \(C\subseteq\mathbb{R}^{d}\) be a bounded and convex set, and let \(\gamma\colon\mathbb{N}_{0}\to(0,\infty)\) satisfy that \(\overline{\lim}_{n\to\infty}\gamma_{n}=0\), assume for all \(x,y\in C\) that_
\[(F^{\prime}(x)-F^{\prime}(y))(x-y)\leq c\|x-y\|^{1+\alpha}, \tag{23}\]
assume that \(\sum_{n=0}^{\infty}\mathds{1}_{(0,1)}(\alpha)\gamma_{n}^{\frac{1+\alpha}{1- \alpha}}<\infty\), let \(x\colon\mathbb{N}_{0}\to C\), and assume for all \(n\in\mathbb{N}_{0}\) that_
\[x_{n+1}=x_{n}-\gamma_{n}\nabla F(x_{n}). \tag{24}\]
_Then the following statements hold:_
**(i)**: \((F(x_{n}))_{n\in\mathbb{N}_{0}}\) _converges in_ \(\mathbb{R}\)_,_
**(ii)**: \(\sum_{n=0}^{\infty}\gamma_{n}\|\nabla F(x_{n})\|^{2}<\infty\)_, and_
**(iii)**: _if_ \(\sum_{n=0}^{\infty}\gamma_{n}=\infty\)_, then_
\[\overline{\lim}_{n\to\infty}\|\nabla F(x_{n})\|=0. \tag{25}\]
Proof.: Continuity of \(F\) and boundedness of \(x\) ensure that \((F(x_{n}))_{n\in\mathbb{N}_{0}}\) is bounded. This and Theorem 3 imply (i) and (ii).
(iii) Aiming at a contradiction assume that there exist \(\varepsilon\in(0,\infty)\) and a subsequence \((n_{k})_{k\in\mathbb{N}_{0}}\subseteq\mathbb{N}\) such that \(\underline{\lim}_{k\to\infty}n_{k}=\infty\) and
\[\|\nabla F(x_{n_{k}})\|\geq\varepsilon. \tag{26}\]
Define \(\kappa\in[0,\infty]\) by \(\kappa\coloneqq\sup_{n\in\mathbb{N}_{0}}\|\nabla F(x_{n})\|\). Continuity of \(\nabla F\) and boundedness of \(x\) imply the boundedness of \(((\nabla F)\,(x_{n}))_{n\in\mathbb{N}_{0}}\) and thus \(\kappa<\infty\). Without loss of generality we assume that \(\kappa>0\). Since \(\nabla F\big{|}_{\overline{C}}\) is uniformly continuous, there exists \(\delta\in(0,\infty)\) such that for all \(x,y\in C\) with \(\|x-y\|\leq\delta\) it holds that
\[\|\nabla F(x)-\nabla F(y)\|\leq\frac{\varepsilon}{2}. \tag{27}\]
We assume without loss of generality that \(\sup_{k\in\mathbb{N}_{0}}\gamma_{n_{k}}<\frac{\delta}{\kappa}\). The fact that \(\sum_{l=0}^{\infty}\gamma_{l}=\infty\) implies that there exist \((m_{k})_{k\in\mathbb{N}_{0}}\subseteq\mathbb{N}\) which satisfy for all \(k\in\mathbb{N}_{0}\) that \(m_{k}\geq n_{k}\) and it holds that
\[\sum_{l=n_{k}}^{m_{k}}\gamma_{l}\leq\frac{\delta}{\kappa}<\sum_{l=n_{k}}^{m_{ k}+1}\gamma_{l}. \tag{28}\]
Assume without loss of generality for all \(k\in\mathbb{N}_{0}\) that \(m_{k}+1<n_{k+1}\). Now for all \(k\in\mathbb{N}_{0}\), \(l\in\{n_{k},\ldots,m_{k}+1\}\) we obtain from (24) and (28) that
\[\|x_{l}-x_{n_{k}}\|=\left\|\sum_{i=n_{k}}^{l-1}(x_{i+1}-x_{i})\right\|\leq\sum_ {i=n_{k}}^{l-1}\gamma_{i}\|\nabla F(x_{i})\|\leq\kappa\sum_{i=n_{k}}^{m_{k}} \gamma_{i}\leq\delta. \tag{29}\]
This, (27) and (26) imply for all \(k\in\mathbb{N}_{0}\), \(l\in\{n_{k},\ldots,m_{k}+1\}\) that
\[\|\nabla F(x_{l})\|\geq\|\nabla F(x_{n_{k}})\|-\frac{\varepsilon}{2}\geq \varepsilon-\frac{\varepsilon}{2}=\frac{\varepsilon}{2}. \tag{30}\]
Furthermore, this together with (28) ensures for all \(k\in\mathbb{N}_{0}\) that
\[\sum_{l=n_{k}}^{m_{k}+1}\gamma_{l}\|\nabla F(x_{l})\|^{2}\geq\Big{(}\frac{ \varepsilon}{2}\Big{)}^{2}\sum_{l=n_{k}}^{m_{k}+1}\gamma_{l}\geq\Big{(}\frac{ \varepsilon}{2}\Big{)}^{2}\frac{\delta}{\kappa}. \tag{31}\]
Next this implies that
\[\sum_{l=0}^{\infty}\gamma_{l}\|\nabla F(x_{l})\|^{2}\geq\sum_{k=0}^{\infty} \sum_{l=n_{k}}^{m_{k}+1}\gamma_{l}\|\nabla F(x_{l})\|^{2}\geq\sum_{k=1}^{ \infty}\Big{(}\frac{\varepsilon}{2}\Big{)}^{2}\frac{\delta}{\kappa}=\infty. \tag{32}\]
This is a contradiction to (ii). The proof of Theorem 4 is thus completed.
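The interplay of the step-size assumptions can be illustrated numerically (our example): for \(\alpha=1/2\), the choice \(\gamma_{n}=(n+1)^{-0.4}\) satisfies both \(\sum_{n}\gamma_{n}=\infty\) and \(\sum_{n}\gamma_{n}^{(1+\alpha)/(1-\alpha)}=\sum_{n}(n+1)^{-1.2}<\infty\), and the gradients along the GD iterates indeed tend to zero as guaranteed by (iii):

```python
import numpy as np

# F(x) = (2/3)|x|^{3/2} has F'(x) = sign(x)*sqrt(|x|), which satisfies
# (F'(x) - F'(y))(x - y) <= c|x - y|^{3/2}, i.e. (23) with alpha = 1/2;
# the iterates below remain in a bounded interval.
grad_F = lambda x: np.sign(x) * np.sqrt(abs(x))

x = 2.0
for n in range(100_000):
    x = x - (n + 1) ** (-0.4) * grad_F(x)
print(abs(grad_F(x)))   # small and decreasing with the iteration count
```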
## Acknowledgement
I would like to express sincere gratitude to Prof. Dr. Martin Hutzenthaler for his guidance and meticulous review of the paper. I would also like to thank Prof. Dr. Alain Haraux for his assistance in comprehending the theory of gradient flow systems and patiently addressing all questions pertaining to convergence analysis.
This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the research grant number HU1889/7-1.
|
2303.16720 | Convolutional neural network search for long-duration transient
gravitational waves from glitching pulsars | Machine learning can be a powerful tool to discover new signal types in
astronomical data. We here apply it to search for long-duration transient
gravitational waves triggered by pulsar glitches, which could yield physical
insight into the mostly unknown depths of the pulsar. Current methods to search
for such signals rely on matched filtering and a brute-force grid search over
possible signal durations, which is sensitive but can become very
computationally expensive. We develop a method to search for post-glitch
signals on combining matched filtering with convolutional neural networks,
which reaches similar sensitivities to the standard method at false-alarm
probabilities relevant for practical searches, while being significantly
faster. We specialize to the Vela glitch during the LIGO-Virgo O2 run, and set
upper limits on the gravitational-wave strain amplitude from the data of the
two LIGO detectors for both constant-amplitude and exponentially decaying
signals. | Luana M. Modafferi, Rodrigo Tenorio, David Keitel | 2023-03-29T14:17:06Z | http://arxiv.org/abs/2303.16720v1 | Convolutional neural network search for long-duration transient gravitational waves from glitching pulsars
###### Abstract
Machine learning can be a powerful tool to discover new signal types in astronomical data. We here apply it to search for long-duration transient gravitational waves triggered by pulsar glitches, which could yield physical insight into the mostly unknown depths of the pulsar. Current methods to search for such signals rely on matched filtering and a brute-force grid search over possible signal durations, which is sensitive but can become very computationally expensive. We develop a method to search for post-glitch signals by combining matched filtering with convolutional neural networks, which reaches similar sensitivities to the standard method at false-alarm probabilities relevant for practical searches, while being significantly faster. We specialize to the Vela glitch during the LIGO-Virgo O2 run, and set upper limits on the gravitational-wave strain amplitude from the data of the two LIGO detectors for both constant-amplitude and exponentially decaying signals.
## I Introduction
Gravitational wave (GW) detectors are sensitive to a variety of different sources. Spinning neutron stars (NSs) with non-axisymmetric deformations, in particular, are promising GW sources [1], e.g. of long-lasting quasi-monochromatic signals (continuous waves, CWs) [2; 3]. The expected amplitudes of CWs are several orders of magnitude smaller than those of the signals already detected from compact binary coalescences (CBCs) [4]; they therefore require long integration times, and have not yet been detected by the current LIGO-Virgo-KAGRA detectors [5; 6; 7].
Pulsars are rotating NSs that also emit electromagnetic (EM) beams. Of these, some show glitches [8], i.e. rare anomalies in their otherwise very stable frequency evolution. Glitches typically feature a sudden spin-up, often followed by a relaxation phase, and are one of the few circumstances in which we may examine the inside of a NS and the characteristics of matter at supernuclear density [9]. They could also excite non-axisymmetric perturbations, hence triggering GWs [10]. These signals could be "burst-like", with dampening timescales of milliseconds [11], and also long-duration transient CW-like signals (tCWs) with timescales from hours to months [12]. The detection of GWs from a pulsar glitch could yield more understanding of the NS equation of state [13].
Detection prospects for tCWs were studied in Ref. [14], finding that upcoming LVK observing runs will offer the first chances to detect them or at least to obtain upper limits that can physically constrain glitch models. Third generation GW detectors such as Einstein Telescope [15] and Cosmic Explorer [16] will be able to probe deeper into the population of known glitchers. The Vela pulsar (J0835\(-\)4510) is a particularly strong candidate, because of its large and frequent glitches, also characterized by long recovery times up to several hundred days.
Recent works have performed unmodelled searches for burst-like transients [17; 18] and modelled searches for long-duration tCWs [19; 20; 21]. In this work we focus on tCW searches, which are so far based on matched filtering of the data against a signal model. Typically, a template bank is constructed to cover the possible range of signal parameters. However these model-dependent searches are usually limited in the volume of parameter space that can be covered, due to high computational cost, as we detail in Sec. II.
A possible way out of computational bottlenecks in GW data analysis is deep learning (DL), a subfield of machine learning which consists of processing data with deep neural networks. DL has been gaining ground in different scientific fields and in particular in gravitational astronomy, as it offers powerful novel methods to search for or classify signals while decreasing computational cost. Ref. [22] offers a review of GW applications, spanning from data quality studies and detector characterization, to waveform modeling and searches using different DL model architectures. In this work we provide a DL algorithm based on a Convolutional Neural Network (CNN) as a complementary tool to matched filtering in the search for tCWs from glitching pulsars.
The paper is structured as follows. In Sec. II we set up the detection problem with a specific real-world test case (searching for tCWs after the 2016 Vela pulsar glitch), starting from matched filtering, and we motivate how DL can help with its limitations. In Sec. III we introduce our CNN architecture and training strategy and in Sec. IV we explain our evaluation method and testing sets. In Sec. V we test the CNN on simulated and real data, and in Sec. VI we consider extensions to parameter estimation. We then present observational upper limits on the strain amplitude of tCWs after the 2016 Vela glitch in Sec. VII, comparing with those previously obtained in
Ref. [19]. Finally we state conclusions in Sec. VIII.
## II Problem Statement and a Real-World Test Case
The standard method to search for tCWs is based on matched filtering and was first introduced in Ref. [12]. So far it has been applied in Ref. [19; 20] to search for signals from various known glitching pulsars in data from the second and third LIGO-Virgo observing runs (O2 and O3). We briefly summarize the method here and provide more details in Appendix A. Considerations for the practical setup of such searches based on the ephemerides of known pulsars are provided in Ref. [21]. The setup of this detection method is very similar to that of narrow-band CW searches [23; 24; 20], but for tCWs one must also take into account additional parameters describing the temporal evolution of the signal, e.g. its start time and duration.
The pulsar population presents a large variety of glitching behaviors [8], and there are several models explaining this phenomenology [9; 10]. Similarly, GW signals could derive from different physical mechanisms, e.g. Ekman flows [25; 26; 27] or Rossby \(r\)-modes [28; 29], varying from glitch to glitch. Depending on the model, it may be reasonable to expect tCWs with durations of the order of the timescales of the post-glitch recovery, i.e. the time needed for the frequency to return to its secular value [30]. In general, searches for tCWs must account for this model uncertainty by covering wider ranges of possible parameters, including all possible combinations of the transient parameters and sufficiently wide frequency evolution template banks, of sizes up to tens of millions per target.
### Current method for detecting tCWs
In matched filtering the first step is to define the signal model. We ignore the specific physical process behind the generation of the signal, and define a generic tCW model following Ref. [12], by generalizing the standard CW model.
First we introduce
\[h_{\rm CW}(t;\lambda,\mathcal{A})=\sum_{\mu=0}^{3}\mathcal{A}^{\mu}h_{\mu}(t; \lambda), \tag{1}\]
where \(h_{\mu}\) are four basis functions derived in Ref. [31], and \(\lambda,\mathcal{A}\) are the frequency evolution and amplitude parameters of the signal, respectively1. The four amplitude parameters depend on the CW amplitude \(h_{0}\), inclination \(\cos\iota\), polarization angle \(\psi\), and initial phase \(\phi_{0}\), while the frequency evolution parameters include the source position (right ascension \(\alpha\) and declination \(\delta\)) and the GW frequency \(f\) and spindowns (frequency derivatives) \(\dot{f},\ddot{f},\dddot{f},\ldots\) For GW emission driven by a mass quadrupole, the GW frequency and spindown values are twice the rotational values.
Footnote 1: We follow the notation of [32; 12; 33] and specialize to a single signal harmonic.
From Eq. (1), tCWs can be obtained by multiplying with a window function \(\varpi(t;\mathcal{T})\) dependent on time and an additional set of transient parameters \(\mathcal{T}\):
\[h(t;\lambda,\mathcal{A},\mathcal{T})=\varpi(t;\mathcal{T})h_{\rm CW}(t; \lambda,\mathcal{A}). \tag{2}\]
For instance, \(\mathcal{T}\) can be the start time \(t^{0}\) and the duration \(\tau\) of the tCW signal. Two simple choices of window functions are the rectangular window, for signals of constant amplitude truncated in time,
\[\varpi_{t}(t;t^{0},\tau)=\begin{cases}1&\text{if }t\in[t^{0},t^{0}+\tau]\\ 0&\text{otherwise}\end{cases}, \tag{3}\]
and the exponentially decaying window
\[\varpi_{e}(t;t^{0},\tau)=\begin{cases}e^{-(t-t^{0})/\tau}&\text{if }t\in[t^{0},t^{0}+3\tau]\\ 0&\text{otherwise}\end{cases}, \tag{4}\]
where for \(\varpi_{e}\) the truncation at \(3\tau\) was chosen by Ref. [12] as the point where the amplitude has decreased by more than 95%.
While the rectangular window follows the exact evolution of a standard CW and solely cuts it in time, the exponentially decaying window modifies the time evolution of the amplitude \(h_{0}\). In general, one can define an arbitrary window function depending on different transient parameters \(\mathcal{T}\), but for simplicity we only consider and compare the two choices above.
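For reference, the two windows of Eqs. (3) and (4) can be written compactly as follows, for an array of SFT timestamps \(t\) (a simple numpy sketch; the function names are ours):

```python
import numpy as np

def rect_window(t, t0, tau):
    """Rectangular window of Eq. (3): constant amplitude on [t0, t0+tau]."""
    return ((t >= t0) & (t <= t0 + tau)).astype(float)

def exp_window(t, t0, tau):
    """Exponentially decaying window of Eq. (4), truncated at t0 + 3*tau."""
    w = np.zeros_like(t, dtype=float)
    mask = (t >= t0) & (t <= t0 + 3 * tau)
    w[mask] = np.exp(-(t[mask] - t0) / tau)
    return w
```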
Once the signal model is defined, one can then proceed as in Ref. [12] with a noise-versus-signal hypothesis test, assuming Gaussian noise for the noise hypothesis and adding Eq. (2) for the signal hypothesis. The standard procedure consists of maximizing the likelihood ratio between the hypotheses over the amplitude parameters \(\mathcal{A}\), obtaining what is known as the \(\mathcal{F}\)-statistic [31; 34].
In general, for both CWs and tCWs, the \(\mathcal{F}\)-statistic over a given stretch of data is calculated from the antenna pattern matrix \(\mathcal{M}_{\mu\nu}\) and the projections \(x_{\mu}\) of the data onto the basis functions \(h_{\mu}\). The implementation in LALSuite[35; 33] splits the data \(x^{X}(t)\) of detector \(X\) into short Fourier transforms (SFTs) [36], i.e. the Fourier Transforms of time segments of duration \(T_{\rm SFT}\). Then the \(\mathcal{F}\)-statistic is computed from a set of per-SFT quantities numerically of order \(\mathcal{O}(1)\), called \(\mathcal{F}\)-statistic _atoms_. Coherently combining \(N_{\rm det}\) detectors, and suppressing the dependence on \(\lambda\) for the rest of this section, one can write:
\[x_{\mu,\alpha} =2\sum_{X=1}^{N_{\rm det}}S_{X\alpha}^{-1}\int_{t_{\alpha}}^{t_{ \alpha}+T_{\rm SFT}}x^{X}(t)h_{\mu}^{X}(t)dt, \tag{5}\] \[\mathcal{M}_{\mu\nu,\alpha} =2\sum_{X=1}^{N_{\rm det}}S_{X\alpha}^{-1}\int_{t_{\alpha}}^{t_{ \alpha}+T_{\rm SFT}}h_{\mu}^{X}(t)h_{\nu}^{X}(t)dt,\]
where \(S^{-1}_{X\alpha}\) is the single-sided noise power spectral density (PSD) for a detector \(X\) at the template frequency at the start time \(t_{\alpha}\) of the SFT \(\alpha\). More specifically, in the following, with the word "atoms" we refer to a set of closely related per-SFT quantities as used in the LALSuite code, which also include noise weighting. We define these explicitly in Appendix A, following Ref. [33].
To compute the (transient) \(\mathcal{F}\)-statistic over a window \(\varpi(t^{0},\tau)\), one needs partially summed quantities
\[x_{\mu}(t^{0},\tau) =\sum_{\alpha}\varpi(t_{\alpha};t^{0},\tau)x_{\mu,\alpha}, \tag{6}\] \[\mathcal{M}_{\mu\nu}(t^{0},\tau) =\sum_{\alpha}\varpi^{2}(t_{\alpha};t^{0},\tau)\mathcal{M}_{\mu \nu,\alpha}, \tag{7}\]
where the sums go over the set of SFTs matching the window. From these,
\[2\mathcal{F}(t^{0},\tau)=x_{\mu}(t^{0},\tau)\mathcal{M}^{\mu\nu}(t^{0},\tau)x_ {\nu}(t^{0},\tau), \tag{8}\]
where \(\mathcal{M}^{\mu\nu}\) is the inverse matrix of \(\mathcal{M}_{\mu\nu}\).
Inserting the template into this equation defines the optimal signal-to-noise ratio (SNR)
\[\rho_{\rm opt}=\sqrt{\mathcal{A}^{\mu}\mathcal{M}_{\mu\nu}\mathcal{A}^{\nu}}, \tag{9}\]
where we have suppressed the explicit dependence on the transient parameters. In the absence of a signal, \(\rho_{\rm opt}=0\). In later sections, we will regularly refer to simulated signals as "injections" (into real or simulated data) and to the optimal SNR of the template used to generate an injection as \(\rho_{\rm inj}\).
To get a detection statistic from Eq. (8) that only depends on \(\lambda\), for the CW case, one just takes the total sums over the full observation time. But for tCWs we need to handle the case of unknown \(t^{0}\) and \(\tau\). To obtain a detection statistic that depends only on \(\lambda\), one can e.g. maximize over \(\mathcal{T}\) obtaining a statistic \(2\mathcal{F}_{\rm max}\), or marginalize over the same parameters obtaining the transient Bayes factor \(B_{\rm tS/G}\). It has been shown that \(B_{\rm tS/G}\) is more sensitive than \(2\mathcal{F}_{\rm max}\) in Gaussian noise [12] and also more robust on real data [37].
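Schematically, once a map of \(2\mathcal{F}(t^{0},\tau)\) values over the transient grid is available, both options reduce to simple array operations; the sketch below assumes a flat prior over the transient parameters and omits the overall normalization of the Bayes factor.

```python
import numpy as np
from scipy.special import logsumexp

def transient_stats(twoF_map):
    """Maximized statistic and unnormalized log Bayes factor from an
    array twoF_map[i, j] = 2F(t0_i, tau_j) over the transient grid."""
    twoF_max = twoF_map.max()              # maximization over (t0, tau)
    log_Bts = logsumexp(0.5 * twoF_map)    # log of the sum of e^{F}
    return twoF_max, log_Bts
```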
### Computational cost limitations
The method can be divided into three steps, at each \(\lambda\): computing the \(\mathcal{F}\)-statistic atoms, computing Eq. (8) for all the different combinations of transient parameters, and finally maximization/marginalization over the same parameters to obtain an overall detection statistic. Besides LALSuite, this has also been implemented in PyFstat[38], which is a python package that wraps the corresponding LALSuite functions and also adds a GPU implementation of the last two steps [39].
The computing time varies for each step and can be estimated as discussed in Ref. [39; 12]. In particular, the most cost-intensive step is typically the second one because of the large number of partial sums that need to be taken corresponding to all different combinations of \(\mathcal{T}\). Timing models and results for this step for both CPU and GPU implementations can be found in Ref. [39]. As discussed there and in Ref. [12], this step also crucially depends on the window function: calculations for exponential windows are much more expensive not only because of the exponential functions themselves, but also because partial sums cannot be efficiently reused like in the rectangular case. For realistic parameter ranges, searches with an exponential window model can take orders of magnitude longer with respect to the rectangular model in either implementation. Therefore, searches for long-duration tCWs [20; 19; 21] have only applied the simpler rectangular window, even though models like that of Ref. [30] assume post-glitch evolutions following that of the EM signal, typically fitted by exponential functions.
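The cost asymmetry between the two window types can be seen directly from the structure of the partial sums in Eqs. (6)-(7): for rectangular windows they are differences of prefix sums that can be shared between all \((t^{0},\tau)\) combinations, while an exponential window reweights every atom for every \((t^{0},\tau)\) pair. A minimal numpy illustration for a single atom stream (our sketch, not the LALSuite or PyFstat implementation):

```python
import numpy as np

def rect_partial_sums(atoms):
    """All rectangular-window partial sums of one per-SFT atom stream:
    S[i, j] = sum of atoms[i : i+j+1]. After one O(N) prefix-sum pass,
    each (t0, tau) combination costs O(1)."""
    P = np.concatenate(([0.0], np.cumsum(atoms)))
    N = len(atoms)
    S = np.zeros((N, N))   # zero where the window overruns the data
    for i in range(N):
        S[i, : N - i] = P[i + 1 : N + 1] - P[i]
    return S
```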
DL models can avoid brute-force loops over many parameter combinations, as once they are trained only a single forward-pass of the network is needed per input data instance. Hence, they can be faster, and also have the potential to be more agnostic to signal models. One could replace matched filtering completely by training a network on the full detector data, thus allowing frequency evolutions different from the standard spin-down model, but this would likely come with a loss in sensitivity, similarly to excess power methods.
Instead, we apply CNNs with the \(\mathcal{F}\)-statistic atoms as input data, which are an intermediate output of matched filtering, meaning we lock the frequency evolution model but still allow flexibility in the amplitude evolution of the GW and the potential for significant speed-up. As we will demonstrate, with this approach one can reach sensitivities similar to those of the standard detection statistics at reduced computational cost.
### Test case based on O2 Vela search
We tune the setup of our training, search and comparisons to that from the first search for long-duration tCWs [19], using data from the two LIGO detectors during the second observing run (O2) of the LIGO-Virgo network [40]. Two priority targets, Vela and the Crab, glitched during that observing period. No evidence was found for tCW signals in the data, so 90% upper limits on the GW amplitude \(h_{0}\) as a function of signal durations \(\tau\) were reported. For a first proof of concept we will only focus on the Vela (J0835\(-\)4510) analysis, since its 2016 glitch is one of the largest that have been analyzed by GW searches, with a relative jump in frequency of \(\approx 1.43\times 10^{-6}\). More glitches have been analyzed during O3 [20], of which two reached similar strengths. But due to its combination of a large glitch and its proximity, the 2016 Vela glitch was the most promising to date.
Following Ref. [19], we construct our test case as a narrow-band search in frequency and a single spindown parameter, centred on the values inferred from the timing observations of Ref. [41], and directed at Vela's sky
position. We search for tCWs with start times \(t^{0}\) within half a day of the glitch time (December 12th 2016 at 11:38:23.188 UTC), and with durations \(\tau\) up to 120 days. All relevant source and search parameters are listed in Table 1. Even when using simulated data, we match their timestamps to the real observational segments as given by Ref. [42], which show a notable 12-day gap and various smaller ones. We study the effect of these gaps on the various detection statistics in Appendix B. The total numbers of analyzed SFTs, each of duration \(T_{\rm SFT}=1800\) s, are 3782 for H1 and 3234 for L1.
## III CNN architecture and training
### Input data format
The performance of DL models depends crucially on the type and quality of their input data. In the GW field, different options have been explored depending on how heavily the data have been transformed [22]. Directly using GW strain time series would allow a flexible training in terms of both amplitude and frequency evolution of potential signals. The whitened detector strain has been used e.g. in Ref. [43, 44, 45] for CBC signals. For CW-like signals, different approaches have been investigated. Some examples include Fourier transforms of blocks of data in the search for long-duration transients from newly-born neutron stars [46]. Also, for the search of CWs from spinning NSs, time-frequency spectrograms and Viterbi maps [47] and power-based statistics [48, 49, 50] have been used.
As stated in Sec. II, the most expensive step of the algorithm from Ref. [12] for post-glitch transients is typically the calculation of the partial sums of the \(\mathcal{F}\)-statistic atoms over the transient parameters \(\mathcal{T}\). In comparison, the cost of computing the \(\mathcal{F}\)-statistic atoms (the main matched-filter stage) is marginal. Therefore we decide to use as input data to our network the \(\mathcal{F}\)-statistic atoms, which contain all the information needed to determine the presence of a signal in the data. The exact expressions of the atoms are derived in Appendix A. When passing to the network, we do not perform any type of preprocessing to the \(\mathcal{F}\)-statistic atoms, since they are already normalized to order \(\mathcal{O}(1)\) both for noise-only data and for signals within the range of SNRs considered here.
As we will demonstrate, this choice allows for a comparatively simple DL model architecture and training setup to reach detection sensitivity close to that of traditional detection statistics. However, it means that we keep to the specific frequency evolution of the standard CW model, and the atoms still have to be computed for each set of parameters \(\lambda\).
By using \(\mathcal{F}\)-statistic atoms as input to the network, we can also easily implement a multi-detector configuration by merging the single-detector atoms onto a set of equi-spaced timestamps spanning the total observing time: the result corresponds to the single-detector values at timestamps when only one detector is available, to the sum when multiple detectors are on, and is filled up with zeros where no data are available. The merged atoms span 4190 timestamps, spaced by 1800 s as the original SFTs. The summing of the multi-detector atoms allows us to implement a simpler network, in contrast to e.g. multi-channel or multi-input models, and comes with the gain in sensitivity from coherently combining the detectors.
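A possible merging procedure is sketched below, assuming the per-detector timestamps align with the common grid and are unique within each detector; the function name and layout are ours:

```python
import numpy as np

def merge_atoms(atoms_per_det, timestamps_per_det, t_start, n_steps, dt=1800):
    """Merge per-detector atoms (each of shape (n_sfts, n_components))
    onto one equispaced grid: summed where several detectors have an
    SFT, copied where only one does, zeros in the gaps."""
    merged = np.zeros((n_steps, atoms_per_det[0].shape[1]))
    for atoms, ts in zip(atoms_per_det, timestamps_per_det):
        idx = np.round((ts - t_start) / dt).astype(int)
        merged[idx] += atoms
    return merged
```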
### CNN architecture
CNNs are a type of artificial neural network that uses convolutional layers as building blocks [51, 52]. While in fully-connected layers neurons are connected to all those of the previous layer, this would be computationally unfeasible for a high-dimensional input such as an image, for which a fully-connected layer would require a weight for each pixel. Convolutional layers avoid this problem by exploiting the spatial structure and repeating patterns in the data. To achieve this, they convolve various independent filters (also called convolution kernels) with the input. Each filter is a matrix of weights to be trained, which slides along the input data, producing a feature map. These maps are then transformed through a non-linear activation function. This way, each filter learns to identify a pattern or characteristic of the data, regardless of location, which is preserved in the feature maps. The maps are then stacked to yield the input for the next layer. This enables the CNN to learn where features, or mixtures of features, are located in the data.
\begin{table}
\begin{tabular}{c c} \hline \hline \multicolumn{2}{c}{Source parameters} \\ \hline \hline \(d\) & 287 pc \\ \(\alpha\) & 2.2486 rad \\ \(\delta\) & \(-0.7885\) rad \\ \(f\) & 22.3722 Hz \\ \(\dot{f}\) & \(-3.12\times 10^{-11}\) Hz s\({}^{-1}\) \\ \(\ddot{f}\) & \(1.16\times 10^{-19}\) Hz s\({}^{-2}\) \\ \(T_{\rm ref}\) & 58000 MJD \\ \(T_{\rm gl}\) & 57734.485 MJD \\ \hline \hline \end{tabular}
\begin{tabular}{c c} \hline \hline \multicolumn{2}{c}{Search parameters} \\ \hline \hline \(\Delta f\) & \(0.1\) Hz \\ \(\Delta\dot{f}\) & \(1.01\times 10^{-13}\) Hz s\({}^{-1}\) \\ \(df\) & \(9.57\times 10^{-8}\) Hz \\ \(d\dot{f}\) & \(9.15\times 10^{-15}\) Hz s\({}^{-1}\) \\ \(t_{\rm min}^{0}\) & \(T_{\rm gl}-0.5\) day \\ \(t_{\rm max}^{0}\) & \(T_{\rm gl}+0.5\) day \\ \(\tau_{\rm min}\) & \(T_{\rm gl}-\frac{1.4\rm{day}}{2}+2T_{\rm SFT}\) \\ \(\tau_{\rm max}\) & \(120\,{\rm days}-2T_{\rm SFT}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ephemerides and search parameters for Vela and its glitch during the O2 run. The distance to the source is indicated with \(d\) and its sky coordinates are \(\alpha\) (right ascension) and \(\delta\) (declination). The frequency and spindown values in the source parameters are the GW values and are all referenced to \(T_{\rm ref}\). In the search parameters, we indicate with \(\Delta\) the search band centered around the GW frequency (or spindown), and with \(df,d\dot{f}\) the resolutions in frequency and spindown of the search. The search in second-order spindown is fixed to the nominal GW value. For the transient parameters, we list the minimum and maximum values of start times \(t^{0}\) and durations \(\tau\) we search over, with resolutions matching \(T_{\rm SFT}=1800\) s.

For this work we use a simple CNN, made up of three one-dimensional convolutional layers. This means that in each layer the filters slide along the time dimension (vertical axis) and convolve all columns at once. We always set the filter size to 3 along the time dimension in each layer, and to increasingly narrow down important features we use a decreasing number of 128, 64 and 8 independent filters per layer and set the stride parameter to 3, 2 and 1 respectively. The stride is the number of pixels the filter moves while sliding through the input matrix, i.e. the bigger the stride, the smaller the output feature maps. We do not implement pooling layers [53], since we find that for our particular architecture they are detrimental to the performance of the network.
The convolutional layers are then followed by a flattening and dropout layer [54] and four fully-connected layers. Again we decrease the number of neurons from layer to layer (256, 128 and 64) so that the number of trainable parameters decreases when approaching the final output. The last of the fully-connected layers gives the output of the network. We choose this to be a single continuous, positive value \(\rho_{\text{CNN}}\) which we train to match the injected SNR from Eq. (9) for each sample in the signal training set, and to return 0 for pure-noise samples. We use the Rectified Linear Unit (ReLU) activation function [55] for all layers, including for the output, which produces the desired range of values for an SNR-like quantity: \(\rho_{\text{CNN}}\in[0,+\infty)\). The dropout layer, which helps to prevent the CNN from overfitting [54], has a dropout rate of 0.33.
The full network architecture is illustrated in Fig. 1 and has been implemented using the Keras library [56] based on the Tensorflow infrastructure [57]. For the various hyperparameters, we chose starting values from an exploratory run with the optimization framework Optuna [58], minimizing the loss on the validation set using a sampler based on the Tree-structured Parzen Estimator algorithm [59], and further tuned them manually.
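For concreteness, a Keras sketch reproducing the layer structure described above could read as follows. The number of input channels is our assumption (three real antenna-pattern atoms plus the real and imaginary parts of \(F_{\rm a}\) and \(F_{\rm b}\)); hyperparameters not quoted in the text are left at Keras defaults.

```python
from tensorflow import keras
from tensorflow.keras import layers

N_TIMESTAMPS = 4190  # merged atom timestamps, spaced by 1800 s
N_CHANNELS = 7       # assumed: 3 antenna-pattern atoms + Re/Im of F_a, F_b

model = keras.Sequential([
    keras.Input(shape=(N_TIMESTAMPS, N_CHANNELS)),
    # Three Conv1D layers: filter size 3 along time, decreasing filter
    # counts (128, 64, 8) and strides (3, 2, 1), as described in the text.
    layers.Conv1D(128, kernel_size=3, strides=3, activation="relu"),
    layers.Conv1D(64, kernel_size=3, strides=2, activation="relu"),
    layers.Conv1D(8, kernel_size=3, strides=1, activation="relu"),
    layers.Flatten(),
    layers.Dropout(0.33),
    # Four fully-connected layers; the last one is the single-node output.
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="relu"),  # rho_CNN in [0, +inf)
])
model.compile(optimizer="adam", loss="mse")
```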
### Training strategy
The CNN is trained by minimizing a loss function \(L\) over a training set containing both pure-noise samples and simulated signals. We use the mean squared error loss:
\[L=\frac{1}{n}\sum_{i=1}^{n}(\rho_{\text{inj}}-\rho_{\text{CNN}})^{2}\,, \tag{10}\]
where \(\rho_{\text{inj}}\) is the injected SNR of the training signals, corresponding to Eq. (9). The training of our CNN's continuous output function can therefore be considered a regression learning approach.
In general, we separate the training into two stages which differ with respect to the injection set (both in the number of training samples and the SNR range of signal samples), optimizer and number of epochs. A summary of the parameters used in each stage is given in Tab. 2 and explained in more detail below.
When training a network, the weights are updated by an optimizer through gradient descent. There is a variety of well-established optimizers built to e.g. prioritize speeding up convergence or improving generalization. We use the Adam optimizer [60] in a first training stage, i.e. for the first 200 epochs, and the Stochastic Gradient Descent algorithm (SGD) [61] for the remaining 1000 epochs. We found that the Adam algorithm converges rapidly while the SGD generalizes better, reducing the unwanted effect of overfitting [62].
We also use the curriculum learning (CL) training strategy [63]. CL has been applied on GW data in previous works [64; 65] and consists of training on datasets of increasing difficulty. The difficulty criterion depends on the type of data and problem, and for our case we choose the range of injected SNRs. We use a simple two-stage CL strategy, first training on an easier dataset with strong signals (high SNR) and then adding a more difficult dataset, i.e. weaker signals. We align the stages of
Figure 1: Architecture of the convolutional neural network (CNN) model, which as its input takes the \(\mathcal{F}\)-statistic atoms. The CNN is made up of a stack of three convolutional layers (labelled Conv1D in figure) followed by four fully connected layers (FC) and the output layer. The convolutional and fully connected layers all use the ReLU activation function. The convolutional part and the fully connected part of the network are separated by a flattening layer, needed to transform the output feature maps to a digestible input for the dense layers, and a dropout layer.
CL to the change in optimizer. More details on the training sets are given in Sec. III.4. Alternative CL approaches avoid re-using the easier dataset in further stages, but we decide to keep it also in the final stage so that the model still remains robust to the stronger signals.
We train two CNN models with the same architecture, the same CL setup, etc.: one on only-rectangular signals and one on only-exponential signals. All training sets are equally split into noise and signal atoms. The validation set, used to compute the loss function after each epoch, is 33% of the training data.
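Schematically, the two-stage training (with the data set sizes and SNR ranges of Tab. 2) can be written as below; `x_strong`, `x_weak` and the corresponding targets are placeholder arrays of merged atoms and injected SNRs. Re-compiling a Keras model with a new optimizer keeps the already-trained weights, which is what the switch from Adam to SGD relies on.

```python
import numpy as np
from tensorflow import keras

# CL stage 1: strong signals only (rho_inj ~ U[6, 40]), Adam, 200 epochs.
model.compile(optimizer=keras.optimizers.Adam(), loss="mse")
model.fit(x_strong, y_strong, epochs=200, validation_split=0.33)

# CL stage 2: keep the stage-1 data, add weaker signals (rho_inj ~ U[4, 10]),
# switch to SGD and continue for 1000 epochs.
x_all = np.concatenate([x_strong, x_weak])
y_all = np.concatenate([y_strong, y_weak])
model.compile(optimizer=keras.optimizers.SGD(), loss="mse")
model.fit(x_all, y_all, epochs=1000, validation_split=0.33)
```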
We start by training and testing on simulated Gaussian noise only, but with realistic gaps matching those of the O2 run (results will be shown in Sec. V.1). Real data, however, can present instrumental artifacts which differ from a Gaussian background. For tCW searches in particular, the disturbances that most affect them are narrow spectral features similar to quasi-monochromatic CWs or tCWs (so-called "lines") [66, 67, 68]. We also test these CNNs on real data (Sec. V.2), but as we will see, to be robust to non-Gaussianities of the data one must also use real data in the training set. There are different approaches that can be employed, e.g. only using real data in the training set, or using a mixture of both simulated and real data. We explain our choice and show our results in Sec. V.3.
### Training sets
#### III.4.1 Gaussian-noise synthetic data
For Gaussian noise training data, we use the approach of generating "synthetic" detection statistics (and atoms, see appendix A.4 of Ref. [12]), which avoids generating and analyzing SFT files and therefore is considerably faster than full searches over simulated Gaussian noise data. Specifically, we use the synthesize_TransientStats program from LALSuite which draws samples of the quantities needed to compute atoms, either under the pure Gaussian noise assumption or as expected for signals (with randomized parameters) in Gaussian noise. It then produces the atoms we need as input for the CNNs and also the \(2\mathcal{F}_{\rm max}\) and \(B_{\rm tS/G}\) statistics (for given transient parameter search ranges) we will use for comparing the detection performance of the CNNs.
In each stage of the CL, the training set is balanced, with half of the samples being pure noise and the other half containing tCW signals. All signals correspond to Vela's sky location. The frequency and spindown values are irrelevant for the synthesizing method. The parameters \(\cos\iota\), \(\psi\), and \(\phi_{0}\) are randomized over their natural ranges, i.e. \(\cos\iota\in[-1,1]\), \(\psi\in[0,\pi/2]\), \(\phi_{0}\in[0,2\pi]\). The transient parameters of the signals are drawn from the search ranges shown in Tab. 2, which also summarizes the SNR ranges and set sizes for the two CL levels.
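In code, this randomization reduces to uniform draws over the respective ranges; a trivial sketch, where the range arguments are placeholders for the Tab. 1 values:

```python
import numpy as np

rng = np.random.default_rng()

def draw_signal_parameters(n, t0_range, tau_range):
    return {
        # Amplitude parameters over their natural ranges.
        "cosi": rng.uniform(-1.0, 1.0, n),
        "psi": rng.uniform(0.0, np.pi / 2, n),
        "phi0": rng.uniform(0.0, 2 * np.pi, n),
        # Transient parameters from the search ranges of Tab. 1.
        "t0": rng.uniform(*t0_range, n),
        "tau": rng.uniform(*tau_range, n),
    }
```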
#### III.4.2 Real data
The real-data training set atoms are generated from analyzing a \(0.1\,\mathrm{Hz}\) band of LIGO O2 data [40, 42]. To avoid training bias, we have to choose this band as disjoint from the search band around the nominal GW frequency of Vela, but it should be close enough to have similar noise characteristics. We center this band around \(22.2\,\mathrm{Hz}\), which yields a frequency region without visible lines in the PSDs of the two detectors.
The noise samples are produced by running a grid-based search with PyFstat[38, 69] and storing the output atoms, using the same setup as in Tab. 1 except for the shift in frequency and using a coarser frequency resolution to reduce correlations between atoms at different \(\lambda\): namely \(df=5\times 10^{-6}\,\mathrm{Hz}\) for the first CL stage and \(df=3\times 10^{-6}\,\mathrm{Hz}\) for the second.
The signal samples of this training set are generated by injecting signals at random frequencies within the same offset band, with spindowns fixed to the nominal GW values (twice those from pulsar timing). Atoms are then produced by a single-template PyFstat analysis for each injection.
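Schematically, each such injection-plus-analysis sample can be produced along the following lines. The keyword names reflect our reading of the PyFstat interface and may differ between versions; paths, parameter values, and the atoms-output flag are illustrative, and some required arguments (e.g. the reference time and search time span) are omitted for brevity.

```python
import pyfstat

# Illustrative single-injection parameters (drawn as in Sec. III.4.1).
h0, cosi, psi, phi0 = 1e-24, 0.1, 0.3, 1.0
t0, tau = 1165578000, 30 * 86400

# Inject one transient signal into real O2 SFTs of the offset band.
writer = pyfstat.Writer(
    label="inj_0",
    outdir="training_real",
    noiseSFTs="O2_offset_band/*.sft",   # real data as the noise background
    F0=22.23,                           # random frequency in the offset band
    F1=-3.12e-11, F2=1.16e-19,          # nominal GW spindown values
    Alpha=2.2486, Delta=-0.7885,        # Vela sky position (Tab. 1)
    h0=h0, cosi=cosi, psi=psi, phi=phi0,
    transientWindowType="rect",         # or "exp"
    transientStartTime=t0, transientTau=tau,
)
writer.make_data()

# Single-template analysis storing the per-SFT F-statistic atoms.
search = pyfstat.TransientGridSearch(
    label="inj_0_atoms", outdir="training_real",
    sftfilepattern=writer.sftfilepath,
    F0s=[writer.F0], F1s=[writer.F1], F2s=[writer.F2],
    Alphas=[writer.Alpha], Deltas=[writer.Delta],
    transientWindowType="rect",
    outputAtoms=True,                   # assumed flag to dump the atoms
)
search.run()
```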
### Timings
The training of the network takes about \(1.8\,\mathrm{hours}\) on a Tesla V100-PCIE-32GB for only synthetic data. When including real data, the training takes twice as long in total. After training, evaluation on the same GPU takes \(c_{\rm CNN}\approx 4\times 10^{-4}\,\mathrm{s}\) per sample, when averaged over a batch of \(10^{4}\) samples using Tensorflow's method Model.predict_on_batch. This compares favorably with costs for the \(2\mathcal{F}_{\rm max}\) or \(B_{\rm tS/G}\) statistics, which for the same set of transient parameters take \(\approx 10^{-2}\,\mathrm{s}\) per sample for rectangular windows (similar with both the LALSuite CPU code on an Intel Xeon Gold 6130
\begin{table}
\begin{tabular}{l c c c} \hline \hline & & CL stage 1 & CL stage 2 \\ \hline injection set & \(\rho_{\rm inj}\) & \(U[6,40]\) & set 1 \(+\,U[4,10]\) \\ & \(N_{\rm train}\) & \(4\times 10^{4}\) & set 1 \(+\,6\times 10^{4}\) \\ optimizer & & Adam & SGD \\ epochs & & 200 & 1000 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Parameters used for the two stages of curriculum learning (CL). We first train on strong signals with \(\rho_{\rm inj}\) randomly drawn from a uniform distribution \(U[6,40]\), and then we add to the training weaker signals with \(\rho_{\rm inj}\) randomly drawn from \(U[4,10]\). The number of samples we train on is \(4\times 10^{4}\) in the first CL stage, and during the second CL stage we add \(6\times 10^{4}\) more samples, this time with \(\rho_{\rm inj}\) in the weaker range. The final stage thus uses the full training dataset of \(10^{5}\) samples, including both high-SNR and low-SNR signals. The \(\{t^{0},\tau\}\) parameter ranges of the signals match the ones used for the search, which are listed in Tab. 1.
2.10GHz and the PyFstat GPU version from Ref. [39] on the same V100) and for exponential windows over \(15\,\mathrm{s}\) on the CPU and \(c^{\rm GPU}_{B_{\rm tS/G}}\approx 3\times 10^{-2}\,\mathrm{s}\) on the V100.
## IV Evaluation method
The output \(\rho_{\mathrm{CNN}}\) can be used as a detection statistic for hypothesis testing. We compare the performance of the CNNs with other detection statistics using receiver operating characteristic (ROC) curves on the separate test sets described below. These show the probability of detection \(p_{\mathrm{det}}\), corresponding to the fraction of signals above threshold, or true positives, as a function of the probability of false alarm \(p_{\mathrm{FA}}\), which is equal to the fraction of false positives from pure-noise samples. The number of noise samples determines how deep in \(p_{\mathrm{FA}}\) the curves can go, while the number of signal samples determines the accuracy of the \(p_{\mathrm{det}}\) estimate. In the following we always use \(\approx 10^{7}\) noise samples and \(10^{4}\) signal samples. The latter corresponds to \(p_{\mathrm{det}}\) uncertainties of \(\lesssim 2\%\), and we will consider differences in ROC curves below this level as marginally significant. This disparity in noise and signal set sizes is due to the fact that, as mentioned in Sec. II, typical template bank sizes for standard searches can reach the order of millions, and so the operating \(p_{\mathrm{FA}}\) at which we assess the performance of our method must reach at least \(10^{-6}\) to \(10^{-7}\).
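Given arrays of statistic values on the noise and signal test sets, the empirical ROC curve is obtained by thresholding on the sorted noise statistics; a minimal numpy sketch:

```python
import numpy as np

def empirical_roc(noise_stats, signal_stats):
    # Thresholds from the sorted noise statistics, loudest first, so the
    # curve reaches down to p_FA ~ 1 / len(noise_stats).
    thresholds = np.sort(noise_stats)[::-1]
    p_fa = np.arange(1, thresholds.size + 1) / thresholds.size
    sig = np.sort(signal_stats)
    # Fraction of signal samples above each threshold (true positives).
    p_det = 1.0 - np.searchsorted(sig, thresholds, side="right") / sig.size
    return p_fa, p_det
```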
We will compare two versions of the CNN, one trained on only rectangular windows (output \(\rho_{\rm CNN}^{\rm r}\)) and the other trained on only exponential windows (output \(\rho_{\rm CNN}^{\rm e}\)), with the detection statistics \(2\mathcal{F}_{\rm max}\) and \(B_{\rm tS/G}\). We also use the r/e superscripts on these statistics depending on the window assumed in their computation. In addition, in this section we introduce a two-stage filtering process combining a CNN with \(B_{\rm tS/G}\).
### Testing sets
Our test sets are constructed to match the O2 Vela search discussed in Sec. II.3. For both the synthetic and real data case, we generate two separate test sets: both with the same \(10^{7}\) noise samples, corresponding to the number of \(\lambda\) parameters in the template bank for our reference search (compare Tab. 1). But the \(10^{4}\) signal samples are rectangular-shaped for the first set, and exponentially-decaying-shaped for the second. The generation methods for each set are the same as described in Sec. III.4. The transient parameter ranges of the signal testing set are the same as those of the training sets, while the \(\rho_{\mathrm{inj}}\) of the testing sets are drawn from \(U[4,40]\). The real data signal frequencies are randomly drawn from a 0.1 Hz band around the nominal GW frequency of Vela.
### Two-stage filtering
We have seen in Sec. III.5 that evaluating the CNNs is very fast. In particular, for exponential windows it is faster than even the GPU implementation of \(2\mathcal{F}_{\mathrm{max}}^{\mathrm{e}}\) and \(B_{\mathrm{ts/G}}^{\mathrm{e}}\) from Ref. [39]. Motivated by this, in all of the following tests we also evaluate a combined detection approach, using the CNN as a preliminary filter stage with \(B_{\mathrm{ts/G}}\) as a second stage. The CNN can first be run on all the available data, and then a follow-up with the traditional detection statistic is done only if \(\rho_{\mathrm{CNN}}\) is above a given threshold. A simplified flowchart of this method, in comparison with the purely traditional and the pure CNN approaches, is shown in Fig. 2.
The amount of follow-up candidates can be set by choosing an operating \(p_{\mathrm{FA}}^{\mathrm{CNN}}\) for the CNN stage, so that we gain as much sensitivity as possible while the whole pipeline is still computationally less expensive than directly computing \(B_{\mathrm{ts/G}}\) over all templates.
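In pseudocode, the pipeline reduces to a cheap ranking followed by an expensive follow-up of the surviving fraction; `compute_btsg` is a hypothetical stand-in for the marginalized statistic of Ref. [12]:

```python
import numpy as np

def two_stage_search(rho_cnn, atoms_per_template, p_fa_cnn=1e-3):
    # Stage 1: keep the loudest fraction p_fa_cnn of CNN candidates.
    n_keep = max(1, int(p_fa_cnn * rho_cnn.size))
    survivors = np.argsort(rho_cnn)[::-1][:n_keep]
    # Stage 2: evaluate the expensive B_tS/G only on the survivors.
    return {i: compute_btsg(atoms_per_template[i]) for i in survivors}
```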
Towards low overall \(p_{\mathrm{FA}}\), we will see that performance improves compared to the single-stage CNN and approaches that of the pure \(B_{\mathrm{ts/G}}\) more closely. It is also possible for this two-stage method to achieve higher \(p_{\mathrm{det}}\) at low \(p_{\mathrm{FA}}\) than pure \(B_{\mathrm{ts/G}}\) if the CNN learns to better discard some noise features that would cause loud \(B_{\mathrm{ts/G}}\) outliers. On the other hand, the ROC curves of this method will not arrive at the point \((p_{\mathrm{det}}=1,p_{\mathrm{FA}}=1)\)
Figure 2: Flowchart of three different methods for the tCW search: from the SFT data to \(\mathcal{F}\)-statistic atoms and then either via partial sums and marginalization to the \(B_{\mathrm{ts/G}}\) statistic, or to the CNN. In the two-stage filtering method, after the CNN a subset of candidates get passed back to the branch leading to \(B_{\mathrm{ts/G}}\). (For “synthetic” data, one starts directly from atoms.)
as is normally the case for single-stage detection methods. Any signals lost, i.e. the amount of \(p_{\rm det}\) lost, during the first stage cannot be restored, since only the candidates passing that stage's threshold are passed on to \(B_{\rm tS/G}\). Therefore, by choosing a particular \(p_{\rm FA}^{\rm CNN}\), the highest probability of detection the two-stage method can reach is \(p_{\rm det}^{\rm CNN}(p_{\rm FA}^{\rm CNN})\).
We choose \(p_{\rm FA}^{\rm CNN}=10^{-3}\) as an operating point, since it offers a good compromise between computational cost for the follow-up and the loss of overall \(p_{\rm det}\) at that operating point, as we will see in Sec. V.1.
In the case of an exponential window test set with \(N\approx 1.15\times 10^{7}\) templates and with the hardware and timings as in Sec. III.5, the first stage takes \(c_{\rm CNN}\times N\approx 1.3\,\)hours. We pass a fraction of \(10^{-3}\) of candidates to the second stage where \(B_{\rm tS/G}^{\rm e}\) is evaluated. This is then expected to take \(c^{\rm GPU}_{B_{\rm tS/G}}\times p_{\rm FA}^{\rm CNN}\times N\approx 350\,\)s. The two-stage filtering method thus takes in total less than \(1.5\,\)hours against the \(c^{\rm GPU}_{B_{\rm tS/G}}\times N\approx 95\,\)hours needed without the first CNN stage. A single-stage \(B_{\rm tS/G}^{\rm e}\) search on a CPU (Intel Xeon Gold 6130 2.10GHz) would even take \(\gtrsim 4\times 10^{4}\,\)hours.
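These estimates follow directly from the per-template timings of Sec. III.5; as a quick check:

```python
N = 1.15e7        # transient-parameter templates
c_cnn = 4e-4      # s/template, CNN on a V100
c_btsg = 3e-2     # s/template, exponential-window B_tS/G on a V100
p_fa_cnn = 1e-3   # fraction of candidates passed to the second stage

stage1_h = c_cnn * N / 3600        # ~1.3 h for the CNN over all templates
stage2_s = c_btsg * p_fa_cnn * N   # ~350 s for the follow-up
full_h = c_btsg * N / 3600         # ~95 h without the CNN stage
```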
## V Performance on synthetic and real data
### Results on synthetic data
We start by testing the CNNs that were trained on synthetic data only, also using purely synthetic data for the test. As mentioned in Sec. III.4.1, synthetic data are random draws of the \(\mathcal{F}\)-statistic atoms, assuming an underlying Gaussian noise distribution, from which the tCW detection statistics \(2\mathcal{F}_{\rm max}^{\rm r}\) and \(B_{\rm tS/G}^{\rm r}\) can then be calculated.2 The detection problem is easier for this case than for real data, so we show these results as a starting point. In Fig. 3 we show ROC curves comparing the CNNs against the other two detection statistics. The two subplots refer to the two different test sets, which differ only in the window function of the injected signals, but for both cases the standard detection statistics use rectangular windows.
Footnote 2: We omit the much more expensive \(2\mathcal{F}_{\rm max}^{\rm e}\) and \(B_{\rm tS/G}^{\rm e}\) statistics in this section. We will only compute them on a full test set for comparison once, on real data, in Sec. V.3.
As expected [12], \(B_{\rm tS/G}^{\rm r}\) performs marginally better than \(2\mathcal{F}_{\rm max}^{\rm r}\). Both their probabilities of detection surpass \(p_{\rm det}=90\%\) even at \(p_{\rm FA}\) as low as \(10^{-7}\). The two CNNs (trained on only rectangular and only exponentially decaying windows) perform similarly overall, with at most \(\lesssim 1\%\) difference in \(p_{\rm det}\) between each other and a loss with respect to \(B_{\rm tS/G}^{\rm r}\) of at most \(7\%\) for rectangular and \(5\%\) for exponential windows.
The two-stage filtering reaches a plateau at higher \(p_{\rm FA}\) values, corresponding (as mentioned in Sec. IV.2) to the fixed (\(p_{\rm FA}=10^{-3},p_{\rm det}<1\)) operating point of the first stage of the method. However, at low \(p_{\rm FA}\) it considerably improves \(p_{\rm det}\) for both CNNs. It actually performs marginally better than \(B_{\rm tS/G}^{\rm r}\) below \(p_{\rm FA}<10^{-5}\) for both types of signals: by \(\approx 1\%\) for rectangular and \(\approx 2\%\) for exponentially decaying injections.
The improvement is not necessarily significant compared with the counting uncertainty, and in the exponential case, where it seems larger, we have not compared against the traditional statistics using exponential windows.
Figure 3: ROC curves with synthetic data as testing set and CNNs trained on synthetic data. Note the vertical axis is zoomed in. Both panels use the same noise testing set, but the signals in the testing sets are rectangular-windowed in the upper panel, and exponentially decaying in the bottom panel. For both panels, the black lines are the ROC curves for \(2\mathcal{F}_{\rm max}^{\rm r}\) (dash-dotted) and for \(B_{\rm tS/G}^{\rm r}\) (dashed), always searching for a rectangular signal (even when the test set contains exponentially decaying signals). The solid colored lines are the ROC curves for the CNNs, trained on only rectangular-shaped signals (blue) and on only exponentially decaying signals (orange). The dotted colored lines are the ROC curves for the two-stage filtering method, with the corresponding windows used in training.
Still, in general such an improvement is indeed possible because \(B_{\rm tS/G}\) is not an optimal statistic in either case. As discussed in Ref. [34] (for CWs), a better detection statistic for a given signal population can in general be obtained by marginalization over an appropriate amplitude prior, as opposed to the unphysical prior that the maximization used for the \(\mathcal{F}\)-statistic (and derived statistics such as \(2\mathcal{F}_{\rm max}\) and \(B_{\rm tS/G}\)) corresponds to. In addition, Ref. [70] has demonstrated how in particular for short data stretches a more sensitive statistic than the standard \(\mathcal{F}\)-statistic can be constructed, mainly exploiting a reduction in noise degrees of freedom. Therefore, it seems consistent that a CNN can also learn to behave better than the standard statistics, and more similarly to the improved alternatives, in at least certain parts of the tCW parameter space.
### Results of synthetic-trained CNNs on real data
In the previous section we have shown how CNNs can be used as a competitive method against the standard detection statistics, at least in the context of synthetic data. We now evaluate the same models trained with
Figure 4: Estimated SNRs from three different methods (one for each row) as a function of injected SNRs in real LIGO O2 data. From top to bottom: the first row shows \(\rho^{\rm r}_{\mathcal{F}_{\rm max}}\) estimated from \(2\mathcal{F}_{\rm max}^{\rm r}\); in the second row, \(\rho_{\rm CNN}^{\rm r/e}\) is estimated from the CNNs trained on only synthetic data with windows matching that of the test set; and in the third row, \(\rho_{\rm CNN}^{\rm r/e}\) is estimated from the CNNs trained on both synthetic and real data. The injected signals are rectangular-shaped in the left column and exponentially decaying in the right column. The color scale (shared between all panels) indicates the durations of the injected signals. For guidance, we always plot the diagonal of the subplots, where ideally the points should align.
synthetic data, but now we test on real data. The corresponding CNN detection probabilities suffer additional losses, compared to Fig. 3, of up to 7% and 16% for rectangular and exponential windows respectively. Instead of showing these ROC curves, we proceed directly to identifying and addressing the main reason for these losses.
For real-data injections, we show in Fig. 4 the estimated \(\rho_{\text{CNN}}\) compared with the traditional estimator [33]
\[\rho^{r}_{\mathcal{F}_{\text{max}}}=\sqrt{2\mathcal{F}_{\text{max}}^{r}-4}\,, \tag{11}\]
both as functions of \(\rho_{\text{inj}}\). Different injection windows are covered in the two columns. Ideally, each estimator would align all the points on the diagonals of these plots.
We see that \(\rho^{r}_{\mathcal{F}_{\text{max}}}\) (first row) aligns well with the injected \(\rho_{\text{inj}}\) for rectangular windows, but for exponentially decaying signals there is a loss in recovered SNR (i.e. the points do not align with the diagonal) because the injected window (exponential) and the search window (rectangular) do not match. The loss in SNR is also enhanced by a 12-day gap in the data, which we discuss in more detail in Appendix B.
In the second row, we see that the CNN trained on synthetic data and rectangular windows recovers \(\rho^{r}_{\text{CNN}}\) that mostly align with \(\rho_{\text{inj}}\), but the loss increases as the duration of the injected signal decreases. The loss in SNR is even more evident for \(\rho^{e}_{\text{CNN}}\) on exponentially decaying signals. Since both of these CNNs were trained on synthetic data only, it is natural to expect that they are not robust to detector artifacts that could contaminate the data, especially at shorter durations [71, 68]. We will now demonstrate that it helps to include real data in the training set to mitigate this effect.
### Performance of training on real data
To improve sensitivity, we also trained CNNs using real data, keeping the same architecture. As mentioned before in Sec. III.3, there are different ways one can incorporate real data in the training set. We have tried different implementations using the same total number of training epochs: training a model from scratch on only real data, training a model on a mixture of synthetic and real data, and taking the previous synthetic-only-trained model and continuing its training on only real data (similar to a CL approach).
We find that training exclusively on real data yields a good signal recovery, i.e. removes the SNR losses seen in Fig. 4, but overestimates the SNR of the noise samples, leading to inferior ROC results overall. This effect is mitigated when training with both synthetic and real data. Between the two options of training on a mixture of both all at once, and first training on synthetic and then on real data, the latter performs best. More precisely, when first training on synthetic and then on real data, we take the CNNs previously trained on synthetic data for 200+1000 epochs (first CL stage + second CL stage) and continue their training on only real data for another 200+1000 epochs.
As before, we train two models, each on a single window type. In the third row of Fig. 4 we show the recovered \(\rho^{r}_{\text{CNN}}\) from these networks (trained first on synthetic and then real data) on real-data injections. For rectangular injections, the loss in SNR is now reduced compared to the CNNs with only synthetic training data. The issue of more SNR loss for shorter signals is largely resolved except for a few underestimated outliers with short durations: the CNN is now able to recover both long and short signals well. In the exponential case, \(\rho^{e}_{\text{CNN}}\) also aligns well with the injected \(\rho_{\text{inj}}\), with just a wider scatter.
The models trained on both synthetic and real data
Figure 5: ROC curves with real data as the testing set and CNNs trained on both synthetic and real data. Note the vertical axis is zoomed in. Both panels use the same noise testing set, but the test signals are rectangular-shaped in the upper panel, and exponentially decaying in the bottom panel. The legend structure is the same as in Fig. 3, with the addition of two extra ROC curves in the exponential test case corresponding to the \(2\mathcal{F}_{\rm max}^{\rm e}\) and \(B_{\rm tS/G}^{\rm e}\) statistics computed with the more expensive exponential window.
also perform well on real noise samples, not producing overly loud outliers. ROC curves for this case are shown in Fig. 5. For rectangular signals, sensitivities of all the different statistics are quite similar to those on synthetic data (top panel of Fig. 3). However, while for synthetic data the two-stage filtering performance matched or exceeded that of the standard detection statistics at low \(p_{\rm FA}\), it now shows a marginal loss in the same regime with respect to the standard statistics, but of only \(\approx 1\%\).
Also, the CNN trained on exponential windows performs marginally better even with rectangular window injections at the lowest values of \(p_{\rm FA}\). It was noted by Ref. [12] that also \(2\mathcal{F}_{\rm max}^{e}\) can outperform \(2\mathcal{F}_{\rm max}^{r}\) on rectangular signals. However, without larger studies on different configurations and data sets it cannot be determined if the observed phenomenon in the CNNs is related or just due to specifics of the training and test data sets.
On the other hand, for the exponential test set, \(p_{\rm det}\) for all statistics has systematically worsened by \(10\%\) to \(15\%\) compared to the synthetic results from Fig. 3. This could be because the weak late-time portions of the signals are more difficult to pick up in real noise. The CNNs lose relatively less detection power, meaning that the gap in \(p_{\rm det}\) against the traditional statistics has narrowed down: at the lowest \(p_{\rm FA}\), the difference in \(p_{\rm det}\) between \(B_{\rm tS/G}^{r}\) and \(\rho_{\rm CNN}^{e}\) is down to \(3\%\).
Since this is our most complete test set, we also show the results of the more computationally expensive statistics computed with windows matching the injections, i.e. \(2\mathcal{F}_{\rm max}^{e}\) and \(B_{\rm tS/G}^{e}\). To obtain these we used the GPU implementation from Ref. [39]. Their ROC curves improve by \(2-3\%\) over the statistics with rectangular windows.
For \(p_{\rm FA}<10^{-5}\) the two-stage filtering ROC curve very closely matches that of \(B_{\rm tS/G}^{e}\). It cannot yield higher \(p_{\rm det}\) (as it did in the synthetic case) on this specific data set, because the template corresponding to the loudest \(B_{\rm tS/G}^{e}\) outlier also passes the first-stage \(\rho_{\rm CNN}^{r/e}\) threshold. But it does converge to the sensitivity of the traditional statistic when far enough to the left of the ROC to go below the plateau level set by the first stage. As discussed before, it also takes \(80\) times less time even with the GPU implementation of Ref. [39].
It is important to remember that ROC performance depends on the population of signal candidates considered. In this case we use signals with \(\rho_{\rm inj}\in[4,40]\). In Sec. VII we will revisit the same real-data noise set but with a different injection set matching the one used in Ref. [19] for establishing upper limits.
## VI Parameter estimation with CNNs
So far, we have considered a CNN with only one output: the estimated SNR, which only allows yes/no detection statements. In practice, one will also be interested in the parameters of a signal candidate. Estimates of the frequency-evolution parameters \(\lambda\) are naturally obtained from where the detection statistic peaks in the template bank. In our preferred setup, the two-stage filtering, final candidates will additionally have \((t^{0},\tau)\) estimates from the algorithm in Ref. [12].
However, CNNs can also estimate multiple parameters directly. In this section, we present a first exploration of this possibility via training CNNs with multiple outputs. We keep the same architecture as before with only the addition of one extra node to the output layer: we now want to output both \(\rho_{\rm CNN}\) and a duration estimate \(\tau_{\rm CNN}\) for a potential signal. One could also estimate the starting time \(t^{0}\) by adding another output, but since in our setup the starting time is limited to a small range around \(T_{\rm gl}\), we ignore it for this first proof of concept.
We here show the results of two newly trained models with the additional \(\tau_{\rm CNN}\) output and trained and tested with synthetic data. The duration input labels for training are normalized to a range between \([0,1]\). The output can then be transformed back to the duration in days.
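The modification is minimal in code. Here we assume min-max normalization of the duration labels over the searched range, which is one natural choice rather than a detail stated in the text; `N_TIMESTAMPS` and `N_CHANNELS` are as in the earlier architecture sketch.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Same body as the single-output network, with a two-node head predicting
# (rho_CNN, tau_norm); both outputs are non-negative via ReLU.
two_output_model = keras.Sequential([
    keras.Input(shape=(N_TIMESTAMPS, N_CHANNELS)),
    layers.Conv1D(128, 3, strides=3, activation="relu"),
    layers.Conv1D(64, 3, strides=2, activation="relu"),
    layers.Conv1D(8, 3, strides=1, activation="relu"),
    layers.Flatten(),
    layers.Dropout(0.33),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="relu"),
])

# Duration labels normalized to [0, 1] for training, inverted afterwards:
tau_norm = (tau - tau_min) / (tau_max - tau_min)
tau_days = tau_min + two_output_model.predict(x_test)[:, 1] * (tau_max - tau_min)
```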
Figure 6: Duration parameters \(\tau_{\rm CNN}^{r/e}\) estimated by the two-output CNNs trained on rectangular (top panel) and exponential signals (bottom panel), plotted against the duration \(\tau\) of the injected signal. These use the same synthetic test sets as in Sec. V.1, with windows matching the training in each panel. Only signals passing the \(\rho_{\rm CNN}^{r/e}\) thresholds of the two-stage filtering are plotted. The color scale (shared between both panels) corresponds to the \(\rho_{\rm inj}\) of the injected signals.
In general, we find that the accuracy of the \(\rho_{\rm CNN}\) estimation remains unaltered, while the quality of the \(\tau_{\rm CNN}\) estimates depends on which type of windows we use in training. These are shown for one CNN trained on only rectangular signals and one trained on only exponential signals, as functions of the injected durations from the test set, in Fig. 6. To focus on signals that are at least marginally detectable, we only include those passing the \(\rho_{\rm CNN}^{\rm r/e}\) thresholds of the two-stage filtering, in both cases corresponding to \(p_{\rm FA}^{\rm CNN}=10^{-3}\).
For the rectangular-trained CNN, the estimated durations \(\tau_{\rm CNN}^{\rm r}\) follow quite well the true durations, but with some scatter especially for low-SNR signals. About 3% of signals form a set of outliers with durations underestimated by over 95%, mostly for shorter injections. Excluding these outliers, the remaining distribution of relative errors is well-centered around zero with a root-mean-square (RMS) of 27%. For the exponential-trained CNN, there is a mild trend towards more under-estimated \(\tau_{\rm CNN}^{\rm c}\) values for longer injected signals. In this case, about 7% of signals are underestimated by over 95%, but after excluding this subset the error distribution is only slightly wider with a RMS of 29%.
We also tested these synthetic-trained CNNs on real data. For rectangular signals, the duration estimation is overall quite robust, and the plot equivalent to Fig. 6 follows the same shape except for some more cases of overestimation at high durations. The outlier percentages for real data estimations slightly increase to 6% and the RMS error stays close to 27%. For exponential signals, there is noticeably more overestimation and the outlier percentage increases to 11% and the RMS error to 47%.
As an additional investigation, we trained a two-output CNN on signals with _both_ rectangular and exponential windows. Overall it behaves like a mixture of the results of the two separately trained networks, with the exception of stronger overestimation at long durations and high SNRs in real data. Performance could potentially be improved by adding another output acting as a window label, since the duration parameter \(\tau\) actually has different meanings for the two different windows, as also discussed in Ref. [14]. This would be similar to how Ref. [12] has already discussed treating window function choice as an additional parameter in Bayesian parameter estimation.
This and other possible improvements to the architecture and training of the two-output network to obtain better accuracy, and any further extensions of this approach to parameter estimation with CNNs, are left for future work.
## VII O2 Vela glitch upper limits
No detection of a post-glitch tCW signal was reported in the O2 search [19], so upper limits on the GW strain amplitude were set. Having found no interesting outliers in the CNN results on the same data set either, we repeat the procedure here, but covering both rectangular and exponential window choices.
Upper limits are computed by injecting simulated signals in the same data used in the search and then counting how many signals are recovered by the chosen method. For a set of injections at fixed duration \(\tau\), one can then fit a sigmoid curve of the counts against injected amplitude \(h_{0}\) to find the upper limit amplitude \(h_{0}^{90\%}\) at which 90% of the injected signals are recovered above the threshold of each statistic. A more detailed explanation can be found in the appendix of Ref. [19].
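A sketch of this fit with a logistic parametrization (one common choice; the exact fitting and uncertainty procedure follows Refs. [19, 20], and `h0_grid`, `eff` are placeholder arrays of injected amplitudes and recovered fractions):

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(h0, h_mid, width):
    return 1.0 / (1.0 + np.exp(-(h0 - h_mid) / width))

# h0_grid: injected amplitudes at fixed tau; eff: fraction of injections
# recovered above the loudest-search-outlier threshold at each amplitude.
(h_mid, width), _ = curve_fit(
    sigmoid, h0_grid, eff,
    p0=[np.median(h0_grid), 0.1 * np.median(h0_grid)],
)
# Invert the fitted sigmoid at 90% efficiency:
h0_90 = h_mid + width * np.log(0.9 / 0.1)
```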
As the thresholds to distinguish between noise and signal candidates we use the highest output of the search over the original data and the ranges given in Table 1, for each detection statistic. This means that the thresholds of the different detection statistics do not necessarily correspond to the same \(p_{\rm FA}\), but it is a consistent method that can be applied to any statistic.3 It is also a conservative choice in the sense that any lower threshold would produce stricter upper limits, but would require outlier followup. In the special case of the two-stage filtering, the first threshold on the CNN stage is chosen to let a fraction \(10^{-3}\) of candidates pass and the final threshold is given by the highest \(B_{\rm tS/G}\) in that remaining set.
Footnote 3: Alternatively, one could use the method described in Ref. [37] to estimate the distribution of the expected loudest outlier from the background and derive a threshold. However, we find \(\rho_{\rm CNN}\) to show more complicated distributions on our test data than the typical ones for \(2\mathcal{F}_{\rm max}\) and \(B_{\rm tS/G}\). So, further investigation would be required to apply the method.
We create an injection set with simulated signals of amplitude \(h_{0}\) in the ranges used in the O2 search [19], chosen to correspond to detection statistics around the threshold values. This translates to mostly weaker signals compared to the test sets described in Sec. IV and used for the ROC curves in Sec. V. The durations of the injections are not distributed uniformly as done for the previous testing sets, but rather chosen at discrete steps from 0.5 to 120 days.
In Fig. 7, we reproduce the plot from Ref. [19] for rectangular-window signals, showing the upper limits on \(h_{0}^{90\%}\) as a function of the duration parameter \(\tau\), and we extend it with an exponential-window injection set. We show upper limits derived from several methods discussed before, namely \(2\mathcal{F}_{\rm max}^{\rm r}\), \(B_{\rm tS/G}^{\rm r}\), and the two-stage filtering method using CNNs trained on only rectangular windows or only exponential windows, combined with \(B_{\rm tS/G}^{\rm r/e}\). The statistics \(2\mathcal{F}_{\rm max}^{\rm e}\) and \(B_{\rm tS/G}^{\rm e}\) for the full injection set are again omitted due to computational cost.
The upper limits from \(2\mathcal{F}_{\rm max}^{\rm r}\) using rectangular injections can be directly compared to those obtained in Ref. [19]. They are consistent within \(1\sigma\) error bars, and the small differences are due to the different individual injections and the different sigmoid fitting procedure and uncertainty estimation (matching the implementation as in Ref. [20]).
The upper limits from the two-stage filtering reach values close to \(2\mathcal{F}_{\rm max}^{\rm r}\) and \(B_{\rm tS/G}^{\rm r}\), higher only by 6-7% for both signal types.
As seen before in the real-data ROC curves, the two-stage filter cannot improve over \(B_{\rm tS/G}^{\rm r/e}\) on these specific data sets because the templates corresponding to the loudest outliers from those statistics also pass the first-stage \(\rho_{\rm CNN}^{\rm r/e}\) thresholds.
Another factor that could potentially be relevant in this new, typically weaker injection set, is that upper limits depend strongly on the performance for weak signals near threshold, which are the most challenging for CNNs. This is exacerbated by using steps in the amplitude \(h_{0}\), instead of SNR, to quantify the strength of the signals. While the SNR takes into account the other parameters that affect detectability, especially the inclination \(\cos\iota\), the amplitude does not contain this information and so cannot be used as a direct proxy for the SNR. Therefore, at each \(h_{0}\) step of the injection set, there can be a tail of signals with low SNRs, where the differences between the CNN and traditional detection statistics are more significant.
## VIII Conclusions
In this work we present a new method based on CNNs for detecting potential tCWs, i.e. long-duration quasi-monochromatic GW signals, from glitching pulsars. CNNs have proven to be promising tools for detecting various GW signals, but have not been tested before on tCWs from glitching pulsars. Previous searches were entirely matched-filtering based and limited by computational cost. In this work we have used intermediate matched-filter outputs (the \(\mathcal{F}\)-statistic atoms) as input to the CNNs, which allows us to replace the most computationally expensive part of the analysis. In particular, practical searches with the \(2\mathcal{F}_{\rm max}\) and \(B_{\rm tS/G}\) statistics were limited to assuming constant-amplitude (rectangular window) tCWs due to the much higher cost of other window functions, while with CNNs we can easily train on different windows.
The CNNs are constructed to output an estimator of SNR in the data. We have trained and tested CNNs for either rectangular or exponentially decaying windows, first on synthetic, Gaussian data, but with gaps corresponding to the timestamps of the LIGO O2 data after the Vela glitch of 2016. We use curriculum learning, i.e. first train on stronger and then also on weaker signals. Then we have tested these CNNs on the real LIGO data from O2 as previously analyzed in Ref. [19], and also trained with real data using the same architecture to improve the results. We find the best results when starting from the model we had trained on synthetic data and continuing its training on only real data.
We find that a simple implementation of such atoms-based CNNs already comes within 10% of the detection probability, at fixed false-alarm probability, of the traditional detection statistics. As a CNN-based method that mostly closes this remaining gap, we propose a two-stage filtering method consisting of first applying the CNN to all signal templates and only passing candidates above a certain threshold on \(\rho_{\rm CNN}\) to the \(B_{\rm tS/G}\) statistic. For a real data test set containing injected signals of broad SNR range (from 4 to 40), the probabilities of detection at false-alarm probabilities as low as \(10^{-7}\) are only \(\lesssim 2\%\) lower than those of \(B_{\rm tS/G}\) for rectangular windows and 4% better for exponential windows. Comparing the computing time of the two-stage filtering with that of \(B_{\rm tS/G}\) for exponential signals using the GPU implementation of Ref. [39], we find that our method is 80 times faster than evaluating the full data without the CNN stage.
We then use this new method to set updated observational upper limits on the GW strain amplitude \(h_{0}\) after
Figure 7: Upper limits on \(h_{0}\) as a function of duration \(\tau\) for the different methods: \(2\mathcal{F}_{\rm max}^{\rm r}\) (dash-dotted black) and \(B_{\rm tS/G}^{\rm r}\) (dashed black), both recovering rectangular signals. The two different shapes of the injections are used in the different panels (rectangular on the left, exponential on the right). In each subplot we also show the two-stage filtering trained on only rectangular-shaped signals (left panel, solid blue) and two-stage filtering trained on only exponentially decaying signals (right panel, solid orange).
the O2 Vela glitch, with extended parameter space coverage including exponentially decaying signals. For this we make a separate injection set with signal parameters in the same ranges as in Ref. [19]. The upper limits from the two-stage filtering almost match those reached by the standard statistics, higher only by \(\lesssim 7\%\).
Thanks to its computational efficiency, the CNN-based approach can be a competitive method for the overall tCW search effort that is also complementary to the reference method: if extended to more generic searches, it will help increase overall detection probability by extending discovery space while one can afterwards still brute-force evaluate the traditional detection statistics when interested in pushing for the deepest possible upper limits on individual targets.
Considering such further applications of CNNs to tCWs from glitching pulsars, we have here focused on a single target pulsar and data set, but the approach can be generalized to other targets or even unknown sky positions, either by training from scratch in each case or through transfer learning [72]. Due to the computational efficiency of both the training and evaluation phase, the two-stage filtering could thus be scaled up to broader searches covering a larger variety of targets than currently feasible with the standard methods, especially when wanting to include multiple amplitude evolution window options.
Also, since CWs can be obtained from the tCW model by setting a rectangular window function to cover the entirety of the data, this method can easily be applied to persistent CW targeted searches. Our implementation of CNNs, however, was designed specifically to avoid the computationally expensive computation of partial sums of the \(\mathcal{F}\)-statistic atoms over all the combinations of the transient parameters, which does not concern CW searches.
Furthermore, one could go beyond the current configuration, in which we trained the CNNs on \(\mathcal{F}\)-statistic atoms, i.e. quantities computed during the matched filtering step. This still constrains the frequency evolution of the signal to be CW-like, but already allows for flexible amplitude evolution and significant speed-up compared to the traditional method, effectively allowing to search a wider parameter space at the same cost. A different approach would be to train a CNN (or different type of network) on spectrograms or the full timeseries strain data, which could allow searches for unmodeled tCWs both in frequency and amplitude evolution.
The major drawback would be that the amplitudes of tCW signals are expected to be very weak, with \(h_{0}\) upper limits from O3 reaching \(10^{-25}\)[20]. Such signals are too weak to be directly discernible, e.g. in time-frequency maps of the data. A study of using Fourier transforms of the data as input to a CNN to search for persistent CWs was done in Ref. [48], and a broader set of machine-learning strategies on SFTs was evaluated in a public Kaggle challenge [73]. Similar approaches could be applied to tCWs as well, but it will be a challenging problem, which we leave for future work. Despite not reaching the high sensitivities of purely matched filtering methods, these faster, generic searches could still extend the tCW science case by enabling the search for post-glitch GWs corresponding to more general glitch models and over larger parameter spaces, potentially leading to all-sky, all-time, all-frequency blind searches of tCWs.
## Acknowledgements
We thank Vincent Boudart for making the CNN architecture plot, the ULiege group from the STAR Institute, in particular Jean-Rene Cudell, Maxime Fays, Gregory Baltus and Prasanta Char for hosting L.M.M. during a g2net (CA17137) short term scientific mission working on this project, Karl Wette for valuable help with LALSuite SWIG wrappings [74], Xingyu Zhong and Andrew L. Miller for initial exchanges about applying CNNs to tCWs, Pep Covas for an interesting seminar talk on his improved short-segment detection statistic that gave us useful inspiration on interpreting our ROC results, and Alicia M. Sintes, Maite Mateu-Lucena and other members of the LIGO-Virgo-KAGRA Continuous Waves working group for useful comments. This work was supported by the Universitat de les Illes Balears (UIB); the Spanish Ministry of Science and Innovation (MCIN) and the Spanish Agencia Estatal de Investigacion (AEI) grants PID2019-106416GB-I00/MCIN/AEI/10.13039/501100011033; the MCIN with funding from the European Union NextGenerationEU (PRTR-C17.I1); the FEDER Operational Program 2021-2027 of the Balearic Islands; the Comunitat Autonoma de les Illes Balears through the Direccio General de Politica Universitaria i Recerca with funds from the Tourist Stay Tax Law ITS 2017-006 (PRD2018/24, PRD2020/11); the Conselleria de Fons Europeus, Universitat i Cultura del Govern de les Illes Balears; and EU COST Actions CA18108 and CA17137. DK is supported by the Spanish Ministerio de Ciencia, Innovacion y Universidades (ref. BEAGAL 18/00148) and cofinanced by the Universitat de les Illes Balears. RT is supported by the Spanish Ministerio de Ciencia, Innovacion y Universidades (ref. FPU 18/00694). This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gwosc.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre
National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. KAGRA is supported by Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan Society for the Promotion of Science (JSPS) in Japan; National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea; Academia Sinica (AS) and National Science and Technology Council (NSTC) in Taiwan. The authors are grateful for computational resources provided by the LIGO Laboratory and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. The authors gratefully acknowledge the computer resources at Artemisa, funded by the European Union ERDF and Comunitat Valenciana as well as the technical support provided by the Instituto de Fisica Corpuscular, IFIC (CSIC-UV).
This paper has been assigned document numbers LIGO-P2200371-v3 and ET-0085B-23.
## Appendix A \(\mathcal{F}\)-statistic atoms
Here we will explain in more detail the procedure to obtain the \(\mathcal{F}\)-statistic atoms, and resulting detection statistics, as implemented in LALSuite[35] and PyFstat[38]. The general approach, as originally developed for persistent CWs, is documented in Ref. [33] though we also include here the modifications for transients following Ref. [12] and adjust some of the notation for convenience.
Detection statistics for (t)CW signals can be derived from the likelihood ratio between hypotheses about a data set \(x(t)\). In particular, a Gaussian noise hypothesis \(\mathcal{H}_{\mathrm{G}}\) and a signal hypothesis \(\mathcal{H}_{\mathrm{tS}}\) can be written as
\[\mathcal{H}_{\mathrm{G}}:x(t)=n(t), \tag{A1}\] \[\mathcal{H}_{\mathrm{tS}}:x(t)=n(t)+h(t;\lambda,\mathcal{A},\mathcal{T}). \tag{A2}\]
We have written this for a single frequency-evolution template \(\lambda\), but it can be generalized by iterating over a template bank.
The likelihood ratio between the two hypotheses can then be analytically maximized [31; 34] over the amplitude parameters \(\mathcal{A}\), yielding the \(\mathcal{F}\)-statistic. As introduced in Eq. (8), it depends on different combinations of the start times \(t^{0}\) and duration parameters \(\tau\). The implementation consists of computing per-SFT quantities, the "\(\mathcal{F}\)-statistic atoms". When weighted with an appropriate window and summed up, these give the building blocks for \(2\mathcal{F}(t^{0},\tau)\).
In particular, following Ref. [12], \(2\mathcal{F}(t^{0},\tau)\) is computed from the antenna-pattern matrix and projections of the data onto the basis of the signal model from Eq. (7), both depending on a window function \(\varpi(t^{0},\tau)\). The CW case corresponds to a rectangular window covering the full observation time.
The antenna-pattern matrix can be written in block-form:
\[\mathcal{M}_{\mu\nu}(t^{0},\tau)=\mathcal{S}^{-1}T_{\mathrm{data}}\begin{pmatrix}\hat{A}&\hat{C}&0&0\\ \hat{C}&\hat{B}&0&0\\ 0&0&\hat{A}&\hat{C}\\ 0&0&\hat{C}&\hat{B}\end{pmatrix} \tag{A3}\]
where \(T_{\mathrm{data}}\equiv N_{\mathrm{SFT}}T_{\mathrm{SFT}}\), \(\mathcal{S}^{-1}\equiv\frac{1}{N_{\mathrm{SFT}}}\sum_{X\alpha}S_{X\alpha}^{-1}\) and \(\hat{A},\hat{B},\hat{C}\) are the independent components of the matrix given by
\[\hat{A}(t^{0},\tau) \equiv\sum_{X\alpha}\varpi^{2}(t_{\alpha};t^{0},\tau)\langle(\hat{a}_{\alpha}^{X})^{2}\rangle_{t}, \tag{A4}\] \[\hat{B}(t^{0},\tau) \equiv\sum_{X\alpha}\varpi^{2}(t_{\alpha};t^{0},\tau)\langle(\hat{b}_{\alpha}^{X})^{2}\rangle_{t},\] \[\hat{C}(t^{0},\tau) \equiv\sum_{X\alpha}\varpi^{2}(t_{\alpha};t^{0},\tau)\langle\hat{a}_{\alpha}^{X}\hat{b}_{\alpha}^{X}\rangle_{t}.\]
We have used here the same notation from Sec. II, i.e. the \(X,\alpha\) indices run over detectors and SFTs, respectively. For compactness, we have suppressed the transient parameters on the right-hand side of Eq. (A3).
These components are noise-weighted (indicated by the hat) such that for a function \(z_{\alpha}^{X}\)
\[\hat{z}_{\alpha}^{X}(t)\equiv\sqrt{w_{\alpha}^{X}}z_{\alpha}^{X}(t), \tag{A5}\]
with weights \(w_{\alpha}^{X}\equiv S_{X\alpha}^{-1}/\mathcal{S}^{-1}\), and time-averaged (indicated by the brackets) such that
\[\langle z_{\alpha}^{X}\rangle_{t}\equiv\frac{1}{T_{\mathrm{SFT}}}\int_{t_{\alpha}}^{t_{\alpha}+T_{\mathrm{SFT}}}z_{\alpha}^{X}(t)dt. \tag{A6}\]
The definitions of the antenna-pattern functions \(a(t),b(t)\) can be found in Ref. [31]. In practice, \(a_{\alpha}^{X}\) and \(b_{\alpha}^{X}\) are computed once per SFT at its representative timestamp instead of computing the time averages explicitly, and noise-weighted afterwards.
The square root of the determinant of the matrix is then (suppressing again the \((t^{0},\tau)\) dependency):
\[\hat{D}\equiv\hat{A}\hat{B}-\hat{C}^{2}\,. \tag{A7}\]
Note that we have assumed the long-wavelength limit approximation [75] which implies that \(a(t),b(t)\) are real-valued and there are no off-axis terms in Eq. (12).
The SFT data is normalized as
\[y_{\alpha}^{X}(t)\equiv\frac{x_{\alpha}^{X}(t)}{\sqrt{\frac{1}{2}T_{\mathrm{SFT}}S_{X\alpha}(f)}}, \tag{A8}\]
and used to define the two complex quantities
\[F_{\mathrm{a},\alpha}^{X} \equiv\int_{t_{\alpha}}^{t_{\alpha}+T_{\mathrm{SFT}}}y_{\alpha}^{X}(t)\hat{a}_{\alpha}^{X}(t)e^{-i\phi_{\alpha}^{X}(t)}dt, \tag{A9}\] \[F_{\mathrm{b},\alpha}^{X} \equiv\int_{t_{\alpha}}^{t_{\alpha}+T_{\mathrm{SFT}}}y_{\alpha}^{X}(t)\hat{b}_{\alpha}^{X}(t)e^{-i\phi_{\alpha}^{X}(t)}dt,\]
where \(\phi^{X}_{\alpha}(t)\) is the phase obtained from integrating the frequency evolution of a given signal template. These quantities take the role of the data projections \(x_{\mu}\) in the more abstract notation used earlier, with the detailed translation documented in Ref. [33].
The set of \(\{\langle(\hat{a}^{X}_{\alpha})^{2}\rangle_{t},\langle(\hat{b}^{X}_{\alpha})^{2}\rangle_{t},\langle\hat{a}^{X}_{\alpha}\hat{b}^{X}_{\alpha}\rangle_{t},F^{X}_{\mathrm{a},\alpha},F^{X}_{\mathrm{b},\alpha}\}\) from equations (A4) and (A9) is what we refer to as the \(\mathcal{F}\)-statistic atoms. More technical detail of how these are implemented in LALSuite can be found in Refs. [76, 33], based on the algorithm from Ref. [77].
For the CNN, we use these atoms as input. On the other hand, to compute the traditional detection statistics the codes compute Eq. (A4) and Eq. (A9) and
\[F_{\mathrm{\{a,b\}}}(t^{0},\tau)\equiv\sum_{X\alpha}\varpi(t_{\alpha};t^{0},\tau)F^{X}_{\mathrm{\{a,b\},\alpha}}, \tag{A10}\]
i.e. the window-weighted partial sums of the atoms. Finally, from all of these it then computes
\[2\mathcal{F}(t^{0},\tau)=\frac{2}{\hat{D}}[\hat{B}|F_{\mathrm{a}}|^{2}+\hat{A}|F_{\mathrm{b}}|^{2}-2\hat{C}\mathfrak{R}(F_{\mathrm{a}}F_{\mathrm{b}}^{*})]. \tag{A11}\]
This still depends on \((t^{0},\tau)\), through the implicit partial sums for all quantities on the right-hand side.
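As a compact illustration of Eqs. (A4), (A10) and (A11), assuming the per-detector atoms have already been flattened into single arrays over all SFTs:

```python
import numpy as np

def twoF_from_atoms(a2, b2, ab, Fa, Fb, window):
    # a2, b2, ab: real noise-weighted atoms <a^2>, <b^2>, <ab> per SFT;
    # Fa, Fb: complex atoms per SFT; window: the window function sampled
    # at the SFT timestamps for one choice of (t0, tau).
    A = np.sum(window**2 * a2)        # Eq. (A4)
    B = np.sum(window**2 * b2)
    C = np.sum(window**2 * ab)
    D = A * B - C**2                  # Eq. (A7)
    Fa_sum = np.sum(window * Fa)      # Eq. (A10)
    Fb_sum = np.sum(window * Fb)
    return (2.0 / D) * (B * abs(Fa_sum)**2 + A * abs(Fb_sum)**2
                        - 2.0 * C * (Fa_sum * np.conj(Fb_sum)).real)  # Eq. (A11)
```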
For a search over unknown transient parameters, one can discretize the ranges \(t^{0}\in[t^{0}_{\mathrm{min}},t^{0}_{\mathrm{min}}+\Delta t^{0}]\) and \(\tau\in[\tau_{\mathrm{min}},\tau_{\mathrm{min}}+\Delta\tau]\) in steps \(dt^{0}\) and \(d\tau\). Then, \(\mathcal{F}_{mn}\equiv\mathcal{F}(t^{0}_{m},\tau_{n})\) is computed on an \(N_{t^{0}}\times N_{\tau}\) rectangular grid
\[\begin{split} t^{0}_{m}&=t^{0}_{\mathrm{min}}+m\,dt^{0},\\ \tau_{n}&=\tau_{\mathrm{min}}+n\,d\tau.\end{split} \tag{A12}\]
Finally, one can e.g. maximize over \(\{t^{0},\tau\}\), obtaining the detection statistic \(2\mathcal{F}_{\mathrm{max}}\), or alternatively marginalize over them, obtaining the transient Bayes factor \(B_{\mathrm{tS/G}}\). Complete derivations of both these detection statistics for tCWs can be found in Ref. [12]. They still depend on the frequency evolution parameters \(\lambda\), and a search where the source parameters are not exactly known will typically consist of setting up a grid in \(\lambda\) covering the desired range.
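Continuing the sketch above, the two statistics then follow from evaluating the single-window function on the \((t^{0},\tau)\) grid of Eq. (A12); here `window_fn` is a hypothetical window evaluator, and the amplitude-prior normalization constant (\(\approx 70\) in Ref. [12]) is quoted only schematically:

```python
twoF_grid = np.array([[twoF_from_atoms(a2, b2, ab, Fa, Fb,
                                       window_fn(ts, t0, tau))
                       for tau in taus] for t0 in t0s])
twoF_max = twoF_grid.max()                       # maximization statistic
B_tsg = np.mean(np.exp(0.5 * twoF_grid)) / 70.0  # marginalization, Ref. [12]
```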
As described in Ref. [39], the pycuda GPU implementation in PyFstat is largely equivalent to the one in LALSuite. (For CPU calculations of the transient \(\mathcal{F}\)-statistic and other functionality, PyFstat calls LALSuite functions through SWIG wrappings [74].) However, when testing for this paper, we noticed that a LALSuite fix that was introduced in 2018 had not been included in PyFstat yet: when computing \(\mathcal{F}_{mn}\) over very little data (few SFTs from a single detector), the antenna-pattern matrix can become ill-conditioned for some combinations of \(\lambda\) parameters, making the determinant from Eq. (A7) approach zero and causing spuriously large detection statistic outliers. PyFstat 2.0.0 (yet to be released as of this writing) avoids this problem the same way as the LALSuite code, by truncating to \(2\mathcal{F}_{mn}=4\) (the Gaussian noise expectation value) when the antenna-pattern matrix condition number exceeds a threshold that has been fixed to \(10^{4}\). Results included in Sec. V use this version. This issue of the \(\mathcal{F}\)-statistic for short durations has also been described in Ref. [70], where an alternative and improved statistic was derived for this case.
## Appendix B The effect of gaps on traditional detection statistics
The data we analyze in this work starts on December 11, 2016 and the final SFT timestamp is on April 11, 2017. This time span has a notable gap of 12 days as shown in Fig. 8, and other smaller gaps of about 1-2 days at most.
We have found that the CNN trained on \(\mathcal{F}\)-statistic atoms does not show any pathological behavior due to these gaps, though generalizing its architecture and our training strategy to data sets with different timestamp realizations is left to future work. The SFT-based \(\mathcal{F}\)-statistic algorithm is, in general terms, also robust to the presence of gaps, but the transient detection statistics are somewhat affected by large gaps such as the one present in our O2 data set, depending on window choice.
First, we note that if a signal falls completely inside a gap in our testing sets, for convenience we discard that injection and draw another set of random signal parameters. Because in our setup \(t^{0}\) only varies within \(T_{\mathrm{gl}}\pm 0.5\,\mathrm{day}\), this only affects a small fraction of short-duration signals; for the upper limits in Sec. VII, where the shortest \(\tau\) is \(0.5\,\mathrm{days}\), it has no influence.
On the other hand, if the signal falls partially into one or more gaps, the behavior differs between the two types of injection sets. In the sets where the primary parameter is \(\rho\), as used for the CNN training and the test sets in Sec. V, the signal amplitude is adjusted upwards to achieve the desired \(\rho\). For the upper-limit injections, \(h_{0}\) is fixed, hence \(\rho\) is lower due to the gaps. We now concentrate on the first case.

Figure 8: Time segments of the two LIGO detectors H1 and L1 corresponding to the data analyzed in this work. For reference, the Vela 2016 glitch happened at \(T_{\mathrm{gl}}=1165577920\,\mathrm{GPS}\), corresponding to \(0.5\,\mathrm{days}\) on this scale.
For rectangular signals, the loss will be minimal, especially for long-duration signals. When analyzing exponential-window injections with a statistic assuming a rectangular window, however, the loss from this mismatch is worsened when the signal falls partially into a gap.
To study this effect we generated \(10^{4}\) synthetic samples for exponential signals analyzed assuming rectangular windows. The parameters of the signals are drawn from the same distributions as defined in Tab. 1. In Fig. 9, we show injected signal durations and estimated SNRs from \(2\mathcal{F}_{\text{max}}\) via Eq. (11). In the first panel, data without gaps is assumed, and in the second panel we used the actual O2 timestamps, with gaps. In the latter case there is an evident loss in estimated SNR around \(\tau\sim 40\) days. This is due to the long gap of 12 days starting about 10 days after the beginning of the analyzed data. For shorter \(\tau\), most of the power of the injected signals is concentrated before the long gap and can be recovered by a shorter rectangular window, while there is not much loss of SNR from the weaker late-time portions of the signal (spread over \(3\tau\), as defined in Eq. (4)) falling into the gap. On the other hand, for durations around 40 days, most of the power falls into the gap, hence the noticeable loss in SNR when recovered with a mismatched rectangular window.
This effect is even more accentuated when using not only realistic timestamps but also real data, because of the non-stationary characteristics of real noise. While for the standard detection statistics considered here this effect cannot be easily remedied, a properly trained CNN can alleviate the loss, as shown in Fig. 4 for the CNN trained on a mixture of both synthetic and real data.
|
2307.07457 | Structured Pruning of Neural Networks for Constraints Learning | In recent years, the integration of Machine Learning (ML) models with
Operation Research (OR) tools has gained popularity across diverse
applications, including cancer treatment, algorithmic configuration, and
chemical process optimization. In this domain, the combination of ML and OR
often relies on representing the ML model output using Mixed Integer
Programming (MIP) formulations. Numerous studies in the literature have
developed such formulations for many ML predictors, with a particular emphasis
on Artificial Neural Networks (ANNs) due to their significant interest in many
applications. However, ANNs frequently contain a large number of parameters,
resulting in MIP formulations that are impractical to solve, thereby impeding
scalability. In fact, the ML community has already introduced several
techniques to reduce the parameter count of ANNs without compromising their
performance, since the substantial size of modern ANNs presents challenges for
ML applications as it significantly impacts computational efforts during
training and necessitates significant memory resources for storage. In this
paper, we showcase the effectiveness of pruning, one of these techniques, when
applied to ANNs prior to their integration into MIPs. By pruning the ANN, we
achieve significant improvements in the speed of the solution process. We
discuss why pruning is more suitable in this context compared to other ML
compression techniques, and we identify the most appropriate pruning
strategies. To highlight the potential of this approach, we conduct experiments
using feed-forward neural networks with multiple layers to construct
adversarial examples. Our results demonstrate that pruning offers remarkable
reductions in solution times without hindering the quality of the final
decision, enabling the resolution of previously unsolvable instances. | Matteo Cacciola, Antonio Frangioni, Andrea Lodi | 2023-07-14T16:36:49Z | http://arxiv.org/abs/2307.07457v1 | # Structured Pruning of Neural Networks for Constraints Learning
###### Abstract
In recent years, the integration of Machine Learning (ML) models with Operation Research (OR) tools has gained popularity across diverse applications, including cancer treatment, algorithmic configuration, and chemical process optimization. In this domain, the combination of ML and OR often relies on representing the ML model output using Mixed Integer Programming (MIP) formulations. Numerous studies in the literature have developed such formulations for many ML predictors, with a particular emphasis on Artificial Neural Networks (ANNs) due to their significant interest in many applications. However, ANNs frequently contain a large number of parameters, resulting in MIP formulations that are impractical to solve, thereby impeding scalability. In fact, the ML community has already introduced several techniques to reduce the parameter count of ANNs without compromising their performance, since the substantial size of modern ANNs presents challenges for ML applications as it significantly impacts computational efforts during training and necessitates significant memory resources for storage. In this paper, we showcase the effectiveness of pruning, one of these techniques, when applied to ANNs prior to their integration into MIPs. By pruning the ANN, we achieve significant improvements in the speed of the solution process. We discuss why pruning is more suitable in this context compared to other ML compression techniques, and we identify the most appropriate pruning strategies. To highlight the potential of this approach, we conduct experiments using feed-forward neural networks with multiple layers to construct adversarial examples. Our results demonstrate that pruning offers remarkable reductions in solution times without hindering the quality of the final decision, enabling the resolution of previously unsolvable instances.
keywords: Artificial Neural Networks, Mixed Integer Programming, Model compression, Pruning

Footnote †: journal: Operations Research Letters
## 1 Introduction
The concept of embedding learned functions inside Mixed Integer Programming (MIP) formulations, also known as "Learning-Symbolic Programming" or "Constraint Learning", has gained attention in recent literature [1; 2; 3]. Furthermore, there has been an increase in the availability of tools that automatically embed commonly used predictive models into MIPs [4; 5; 6; 7]. These techniques and tools are especially valuable when employing ML models for predictions and utilizing OR methods for decision making based on those predictions. Unlike the traditional two-stage approaches [8], embedding the predictive model within the decision-making process in an end-to-end optimization framework has been shown to yield superior results. Examples of applications are automatic algorithmic configuration [9; 10], adversarial examples identification [11], cancer treatments development [12], and chemical process optimization [13; 14].
A very relevant case is when the learned function is an ANN, since ANNs are the state-of-the-art models for numerous essential ML tasks in Computer Vision and Natural Language Processing. Consequently, there have been efforts in the literature to automate the embedding of ANNs [15]. For instance, [5] makes it possible to incorporate feed-forward architectures with ReLU activation functions into MIPs, utilizing the output of the ANN in the objective function. The maturity of the field is demonstrated by the fact that one of the leading commercial MIP solvers, Gurobi, recently released a package that allows feed-forward ReLU networks to be part of MIP formulations, with compatibility for popular ML packages such as PyTorch, Keras, and scikit-learn.
Unfortunately, even when we consider simple architectures that have only ReLU activation functions, the representation of an ANN in a MIP will introduce binary variables, due to the combinatorial nature of the ReLU function. Additionally, the number of binary variables and the associated constraints that need to be added to the MIP is proportional to the number of parameters in the ANN. Deep Learning has witnessed a clear trend towards developing architectures with a very large number of parameters, which contributes to ANNs' high predictive power and state-of-the-art performance in various applications. This, however, poses issues in terms of training costs, storage requirements, and prediction time. Consequently, numerous methods, known as model compression techniques, have been developed to reduce the size of ANNs without compromising their predictive capability. Yet, the large size of the ANNs presents an even more significant scalability challenge when it is embedded into a MIP, due to the potentially exponential growth of the latter's computational cost with its size (and, in particular, the number of binary variables). Using a state-of-the-art network in a MIP formulation may easily result in an overwhelming number of binary variables and constraints, rendering the models unsolvable within a reasonable time using any available solver.
In this paper, we demonstrate that pruning methods, originally developed to address specific ML challenges, can be effectively applied in the context of embedding ANNs into MIPs. Specifically, we utilize a structured pruning technique that we previously developed to significantly accelerate the solution time for adversarial example identification problems using Gurobi.
The remainder of the paper is organized as follows: Section 2 provides a formal definition of the problem concerning the embedding of learned functions in MIP formulations. Additionally, it presents one of the existing formulations from the literature specifically designed for embedding ANNs. In Section 3, we introduce pruning techniques and we describe the specific pruning method employed in our experiments. Section 4 focuses on the benefits of pruning when incorporating ANNs into MIPs. We discuss the reasons why pruning is advantageous in this context and provide insights on selecting appropriate pruning techniques. Finally, in Section 5 we present numerical results to empirically validate that pruning can effectively speed up the solution process of MIPs with embedded ANNs.
## 2 Embedding learned functions in Mixed Integer Programs
We consider a general class of (Mixed-Integer) Nonlinear Programs with "learned constraints". That is, the formulation of the problem would need to involve some functions \(g_{i}(x)\), \(i=1,\ldots,k\), defined on the variable space of the optimization decisions, that are "hard" in the sense that no compact algebraic formulation, and not even an efficient computation oracle, is available. Yet, (large) data sets are available, or can be constructed, of outputs \(\bar{y}=g_{i}(\bar{x})\) for given \(\bar{x}\). These datasets can be used in several existing ML paradigms (Support Vector Machines, Decision Trees, ANNs,...) to construct estimates \(\bar{g}_{i}(x)\) of each \(g_{i}(x)\), \(i=1,\ldots,k\), with a workable algebraic description that can then be inserted into an optimization model. Thus, we consider the class of Mathematical Programs with Learned Constraints (MPLC)
\[\min\ cx+by \tag{1}\]
\[\text{s.t.}\ \ y_{i}=\bar{g}_{i}(x),\quad i=1,\ldots,k \tag{2}\]
\[Ax+By\leq d \tag{3}\]
\[x\in X \tag{4}\]
Linearity in (1) and (3) is not strictly necessary in our development, but it is often satisfied in applications (see, e.g., [11; 12; 5]) and we assume it for notational simplicity. Indeed, when \(X\) in (4) contains
integrality restrictions on (some of) the \(x\) variables, the class already contains Mixed-Integer Linear Programs (MILP), whose huge expressive power does not need to be discussed. Of course, a significant factor in the complexity of (1)-(4) is the algebraic form of the \(\bar{g}_{i}(x)\), which impacts the class of optimization problems it ultimately belongs to. A significant amount of research is already available on formulations for embedding feedforward ANNs, in particular with ReLU activations, in a MIP context [11; 1; 2; 3; 16]. In these formulations, the neural network is constructed layer by layer. Denoting the input vector at layer \(\ell\) as \(o_{\ell}\), and the corresponding weight matrix and bias vector as \(W_{\ell}\) and \(b_{\ell}\), respectively, one has
\[o_{\ell+1}=\max(\,0\,,\,W_{\ell}o_{\ell}+b_{\ell}\,)\]
that can be expressed in a MI(L)P form as
\[v_{\ell}^{+}-v_{\ell}^{-}=W_{\ell}o_{\ell}+b_{\ell} \tag{5}\]
\[0\leq v_{\ell}^{+}\leq M^{+}z_{\ell} \tag{6}\]
\[0\leq v_{\ell}^{-}\leq M^{-}(1-z_{\ell}) \tag{7}\]
\[o_{\ell+1}=v_{\ell}^{+} \tag{8}\]
\[z_{\ell}\in\{0,1\}^{m} \tag{9}\]
Constraints (6) and (7) ensure that both \(v_{\ell}^{+}\) and \(v_{\ell}^{-}\) are (component-wise) nonnegative, and since the \(z_{\ell}\) are (component-wise) binary, that at most one of them is positive. Consequently, constraint (5) forces the relations \(v_{\ell}^{+}=\max\{\,W_{\ell}o_{\ell}+b_{\ell}\,,\,0\,\}\) and \(v_{\ell}^{-}=-\min\{\,W_{\ell}o_{\ell}+b_{\ell}\,,\,0\,\}\) (of course, constraint (8) is only there to make apparent what the output of the layer is). Denoting by \(n\) the number of neurons in layer \(\ell\) and by \(m\) the number of neurons in layer \(\ell+1\), system (5)-(9) contains \(m\) binary variables, \(n+2m\) continuous variables, and \(3m\) constraints. A significant aspect of this model (fragment) is the use of big-M constraints (6) and (7). It is well known that the choice of the value for the constants \(M\) can significantly impact the time required to solve an instance. Indeed, the Optimized Big-M Bounds Tightening (OBBT) method has been developed in [11] to find effective values for this constant.
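As a concrete and purely illustrative example, system (5)-(9) for a single layer can be written in gurobipy roughly as follows; the uniform big-M values and all variable names are our own placeholder choices, and in practice the bounds would be tightened, e.g., via OBBT:

```python
import gurobipy as gp
from gurobipy import GRB

def add_relu_layer(model, o_in, W, b, M_plus=1e2, M_minus=1e2):
    """Encode o_out = max(0, W o_in + b) via the big-M system (5)-(9)."""
    m_out, n_in = W.shape
    v_pos = model.addVars(m_out, lb=0.0, ub=M_plus, name="v_pos")   # v^+, Eq. (6)
    v_neg = model.addVars(m_out, lb=0.0, ub=M_minus, name="v_neg")  # v^-, Eq. (7)
    z = model.addVars(m_out, vtype=GRB.BINARY, name="z")            # Eq. (9)
    for j in range(m_out):
        model.addConstr(                                            # Eq. (5)
            v_pos[j] - v_neg[j]
            == gp.quicksum(W[j, i] * o_in[i] for i in range(n_in)) + b[j])
        model.addConstr(v_pos[j] <= M_plus * z[j])                  # Eq. (6)
        model.addConstr(v_neg[j] <= M_minus * (1 - z[j]))           # Eq. (7)
    return v_pos                                                    # o_{l+1} = v^+, Eq. (8)
```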
As previously mentioned, the state-of-the-art solver Gurobi now includes an open-source Python package that automatically embeds ANNs with ReLU activation into a Gurobi model. Additionally, starting from the 10.0.1 release, Gurobi has the capability to detect if a model contains a block of constraints representing the relationship \(y=g(x)\), where \(g(\cdot)\) is an ANN, in order to then apply the aforementioned OBBT techniques to enhance the solution process. Despite showing a substantial improvement with respect to the previous version, the capabilities of Gurobi to solve these MIPs are still limited. In particular, when embedding an ANN into a MIP, Gurobi is not able to solve the problem in a reasonable time unless the number of layers and neurons in the ANN is small.
## 3 Artificial Neural Networks Pruning
As mentioned in the introduction, the size of state-of-the-art ANNs has been growing exponentially over the years. While these models deliver remarkable performance, they come with high computational costs for training and inference, as well as substantial memory requirements for storage. To address this issue, various techniques have been developed to reduce these costs without significantly compromising the predictive power of the network. One such technique is pruning, which involves reducing the ANN size by eliminating unnecessary parameters. Consider for instance a linear layer with input \(x_{inp}\), output \(x_{out}\), and weight and bias tensors \(W\) and \(b\), i.e., \(x_{out}=Wx_{inp}+b\). Thus, pruning entails removing certain entries from \(W\) or \(b\). That is, pruning, say, the parameter \(W_{1,1}\) results in the first coordinate of \(x_{inp}\) being ignored in the scalar product when computing the first coordinate of \(x_{out}\).
Pruning individual weight entries can offer some advantages, but it is generally suboptimal. Since most of the computation is performed on GPUs, there is little computational benefit unless entire blocks of computation, such as tensor multiplications, are removed. Removing entire structures of the ANN is known as _structured_ pruning, in contrast to _unstructured_ pruning that involves eliminating single weights. In the example of the linear layer, structured pruning would aim to remove entire neurons by deleting rows from the \(W\) tensor (along with the corresponding \(b\) entry in most cases). Figures 1, 2, and 3 illustrate the difference between these two pruning techniques.
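As a minimal illustration, PyTorch's built-in pruning utilities can perform both variants on a (hypothetical) linear layer; unstructured pruning zeroes individual weights, while structured pruning zeroes entire rows of \(W\), i.e., whole neurons:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(100, 50)
# Unstructured: zero the 50% of individual weights with smallest magnitude
prune.l1_unstructured(layer, name="weight", amount=0.5)
# Structured: zero the 30% of rows (output neurons) with smallest L2 norm,
# so the corresponding units can later be removed from the network entirely
prune.ln_structured(layer, name="weight", amount=0.3, n=2, dim=0)
```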
The literature on pruning techniques for neural networks is vast and encompasses a wide range of approaches. One simple and commonly used method is magnitude-based pruning, which involves removing parameters with small magnitudes. This was first introduced in [17] and has been widely adopted since. However, more sophisticated strategies have also been proposed, such as Bayesian methods [18; 19; 20; 21], combinations of pruning with other compression techniques [22; 23; 24], and zero accuracy drop pruning [25; 26; 27; 28].
A relevant subset of pruning techniques uses a regularization term to enforce sparsity in tensor weight. It is common practice in Machine Learning to add a regularization term \(R(w)\) to the standard loss function \(L(X,Y,w)\), where \(w\) is the vector containing the ANNs parameters and \((X,Y)\) is the training set. Usually, \(R(w)\) penalizes the magnitude of the parameters (e.g., \(R(w)=|w|_{2}^{2}\)) and it is known to improve the generalization performances of the model. If the form of \(R(w)\) is chosen carefully, e.g., \(R(w)=|w|_{1}\), it can also lead to a sparse parameter vector \(w\). When a network parameter is zero, it can typically be removed without changing the model output for any given input. Hence, if \(R(w)\) is chosen appropriately to induce all the weights of some neurons to be zero, then such neurons can be removed from the network. Many regularization terms have been proposed both for structured and unstructured pruning, including but not limited to \(l_{1}\) norm, BerHu term [29], group lasso, and \(l_{p}/l_{q}\) norms [30].
### The Structured Perspective Regularization Term
In the literature, the majority of pruning techniques rely on heuristics to determine the impact of removing a parameter or a structure from the ANN. This trend persists in recent works [31; 32; 33; 34], including methods that still utilize simple magnitude-based criteria [35; 36; 37; 38]. Only a few techniques attempt to develop a theoretically-grounded methodology [39; 40; 41; 18], and these methods do not primarily focus on structured pruning. In light of this, a pruning technique was developed in [42] that is motivated by strong theoretical foundations and specifically addresses structured pruning. In [42], the pruning problem is addressed by starting with a naive exact MIP formulation and then deriving a stronger formulation by leveraging the Perspective Reformulation technique [43]. Analogously to what is done in [44; 45; 46] for individual variables rather than groups of them, an efficient way to solve the continuous relaxation of this problem is obtained by projecting away the binary variables, resulting in an equivalent problem to standard ANN training with the inclusion of the new _Structured Perspective Regularization_ (SPR) term
\[z(W;\alpha,M)=\begin{cases}2\sqrt{(1-\alpha)\alpha}\,\|W\|_{2}&\text{if }\frac{\|W\|_{\infty}}{M}\leq\sqrt{\frac{\alpha}{1-\alpha}}\|W\|_{2}\leq 1\\ \frac{\alpha M}{\|W\|_{\infty}}\|W\|_{2}^{2}+(1-\alpha)\frac{\|W\|_{\infty}}{M}&\text{if }\sqrt{\frac{\alpha}{1-\alpha}}\|W\|_{2}\leq\frac{\|W\|_{\infty}}{M}\leq 1\\ \alpha\|W\|_{2}^{2}+(1-\alpha)&\text{otherwise,}\end{cases}\]
Figure 1: Unpruned network

Figure 2: Unstructured pruning

Figure 3: Structured pruning
where \(M\) is a constant, \(\alpha\) is a tunable hyper-parameter and \(W\) is the weight tensor corresponding to the structure we want to prune (e.g., the weight matrix of a neuron). That is, in order to prune the ANN one trains it using as loss function
\[L(X,Y,W)+\lambda\sum_{j\in\mathcal{N}}z(W_{j};\alpha,M),\]
where \(W_{j}\) is the weight matrix corresponding to neuron \(j\) and \(\mathcal{N}\) is the set of neurons of the ANN. Coupled with a final magnitude-based pruning step, this approach has been shown to provide state-of-the-art pruning performances thanks to the unique and interesting properties of the SPR term. This potentially comes at the expense of extra hyperparameter tuning effort for \(\alpha\) and \(M\), which is unlikely to be a major issue in this application since ANNs that can be embedded in a MILP, even after pruning, cannot possibly have the extremely large size common in applications like Computer Vision and Natural Language Processing, and therefore their training and tuning time is unlikely to be a major factor.
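To make the definition concrete, a possible PyTorch rendering of the SPR term, evaluated per neuron (one row of the layer's weight matrix) and summed, is sketched below. This is our own reading of the formula above, not the reference implementation of [42]; the small clamp guarding the division is an added numerical safeguard.

```python
import torch

def spr_term(W: torch.Tensor, alpha: float, M: float) -> torch.Tensor:
    """Sum of the SPR term z(W_j; alpha, M) over the rows (neurons) of W."""
    l2 = W.norm(p=2, dim=1)                          # ||W_j||_2 per neuron
    linf = W.abs().max(dim=1).values                 # ||W_j||_inf per neuron
    k = (alpha / (1.0 - alpha)) ** 0.5
    case1 = 2.0 * ((1.0 - alpha) * alpha) ** 0.5 * l2
    case2 = (alpha * M / linf.clamp_min(1e-12)) * l2 ** 2 + (1.0 - alpha) * linf / M
    case3 = alpha * l2 ** 2 + (1.0 - alpha)
    in1 = (linf / M <= k * l2) & (k * l2 <= 1.0)
    in2 = (k * l2 <= linf / M) & (linf / M <= 1.0)
    return torch.where(in1, case1, torch.where(in2, case2, case3)).sum()
```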
## 4 Pruning as a Speed-Up Strategy
As previously mentioned, in the context of embedding ANNs in MIPs, scalability becomes a significant challenge as the number of (binary) variables and constraints grows proportionally with the number of parameters in the embedded ANN, but the cost of solving the MI(L)P may well grow exponentially in the number of (binary) variables. It therefore makes even more sense to employ the ML compression techniques that are used to reduce the computational resources required by ANNs. Many compression techniques other than pruning exist in the ML literature. However, not all of them are effective in the context of MIPs with embedded ANNs. For instance, quantization techniques aim to train networks that have weight values in a discrete (relatively small) set of \(\mathbb{R}\)[47; 48]. One possibility is to directly implement the ANN using a lower bit number format than the standard Float-32 one [49; 50]. Quantization is a very popular technique in ML since it can decrease both backward- and forward-pass computational effort, at the same time reducing the memory footprint of the resulting model. However, in the context of MIPs, quantization does not bring any advantage, since the problem resulting from embedding a quantized ANN is not significantly different, from an Operations Research point of view, from the one where a non-quantized model has been embedded. Indeed, weights are coefficients in (5)-(9), and having them in a small set of (integer) values may at most have a minor impact on the solution time. Other methods, like low-rank decomposition and parameter-sharing techniques [51; 52; 53; 54], modify the internal operations of layers; this means that they cannot directly be used in this context without the development of new, specific formulations and new algorithms that can automatically detect them in a MIP problem.
By contrast, structured pruning techniques perfectly fit the needs of embedding an ANN in a MIP. Even unstructured pruning may have some impact, since when a weight is removed (i.e., set to zero) the corresponding entry in the MIP constraints matrix is also set to zero, leading to a sparser constraints matrix. However, entirely removing variables or constraints is more effective; in the case of a feed-forward ANN, this corresponds to performing structured pruning on neurons, as visualized in Figures 4-7.
In ML applications, pruning techniques only bring advantages at inference time, since reducing the number of parameters only reduces linearly the computational cost of the forward pass. By contrast, removing neurons of a network brings an exponential speed-up in the time required to solve the resulting MIP formulations. Hence, pruning is arguably more relevant for OR than for ML, despite having been developed in the latter area. In particular, structured pruning--as opposed to unstructured pruning--is crucial in that it allows using existing automatic structure detection algorithms, such as that implemented in Gurobi, while unstructured pruning is very likely to result in a different structure of the constraints matrix that would not be recognizable, thereby preventing the use of OBBT techniques that are crucial in this context.
Based on the considerations above, we propose modifying the existing pipeline for embedding ANNs in MIPs. After training the ANN (or during training, depending on the technique used), we prune the model before embedding it in the MIP formulation of the problem at hand. This approach either reduces the solution time of the MIP with the same generalization performance, or, possibly, allows one to include larger, therefore more expressive ANNs, capable of achieving higher accuracy while still maintaining the ability to solve the resulting MIPs within a reasonable time. In particular, we will employ the Structured Perspective Regularization, i.e., we train the ANN by adding the SPR term to the loss, which will lead to a weight tensor with a structured sparsity. After fixing to zero (i.e., removing) neurons whose weights are all below a fixed threshold, we fine-tune the network with a standard loss for a few more epochs (see [42] for details). The obtained ANN is then embedded in the MIP, and it will require the addition of fewer variables and constraints with respect to its unpruned counterpart.
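The final magnitude-based step, i.e., dropping neurons whose weight rows are (near-)zero after SPR training and shrinking the adjacent layer accordingly, can be sketched as follows; the threshold value and the helper name are illustrative:

```python
import torch.nn as nn

def drop_small_neurons(linear: nn.Linear, next_linear: nn.Linear, tol=1e-3):
    """Remove output neurons of `linear` with near-zero weight rows, and the
    matching input columns of `next_linear`."""
    keep = (linear.weight.norm(p=2, dim=1) > tol).nonzero().squeeze(1)
    new_lin = nn.Linear(linear.in_features, len(keep))
    new_lin.weight.data = linear.weight.data[keep].clone()
    new_lin.bias.data = linear.bias.data[keep].clone()
    new_next = nn.Linear(len(keep), next_linear.out_features)
    new_next.weight.data = next_linear.weight.data[:, keep].clone()
    new_next.bias.data = next_linear.bias.data.clone()
    return new_lin, new_next
```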
## 5 Experiments
### Building adversarial examples
We test the effectiveness of pruning in the task of finding an adversarial example of a given network. In particular, we focus on the verification problem [55], which consists in finding a _slight_ modification of an input that is originally correctly classified by the network in such a way that the modified one is assigned to a chosen class by the ANN. More formally, assume we are given a trained ANN \(\bar{g}(\cdot):\mathbb{R}^{n}\rightarrow[0,1]^{C}\) and one input \(x\) such that \(\bar{g}(x)\) has its maximum value at the coordinate corresponding to the correct class of \(x\). Denoting with \(k\) this coordinate and with \(h\) the coordinate with the second highest value of \(\bar{g}(x)\), the problem we want to solve is
\[\max\ y_{h}-y_{k} \tag{10}\]
\[\text{s.t.}\ \ y=\bar{g}(\bar{x}) \tag{11}\]
\[\Delta\geq x-\bar{x} \tag{12}\]
\[\Delta\geq \bar{x}-x \tag{13}\]
\[\bar{x}\in\mathbb{R}^{n} \tag{14}\]
where \(\Delta\) is a given distance bound. Clearly, (10)-(14) is a special case of the MPLC class (1)-(4). In particular, constraint (11) encodes an ANN function, so it needs to be handled with the techniques we presented in Section 2. We selected this problem since it is of great interest to ML researchers. Furthermore, it can in principle be relevant to test the robustness of networks of any size, and therefore it allows exploring the boundaries of what MPLC approaches (with or without pruning) can achieve.

Figure 6: Constraints matrix, in red the removed constraints and variables due to neuron pruning

Figure 7: Corresponding network; the highlighted part is the pruned one.
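Assuming the gurobi-machinelearning package mentioned in Section 2, problem (10)-(14) can be assembled roughly as follows; the 10-class output size, the unbounded input box, and the function name are our assumptions for an MNIST-like setting:

```python
import gurobipy as gp
from gurobipy import GRB
from gurobi_ml import add_predictor_constr  # gurobi-machinelearning package

def build_adversarial_mip(nn_model, x_ref, k, h, delta):
    m = gp.Model("adversarial")
    x_adv = m.addMVar(x_ref.shape[0], lb=-GRB.INFINITY, name="x_adv")  # (14)
    y = m.addMVar(10, lb=-GRB.INFINITY, name="y")                      # class scores
    add_predictor_constr(m, nn_model, x_adv, y)                        # (11): y = g(x_adv)
    m.addConstr(x_adv - x_ref <= delta)                                # (12)
    m.addConstr(x_ref - x_adv <= delta)                                # (13)
    m.setObjective(y[h] - y[k], GRB.MAXIMIZE)                          # (10)
    return m
```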
### General setup and notation
To test the effectiveness of our pruning techniques, we ran some experiments on network robustness using the MNIST dataset. We used the same settings as the notebook available at [https://github.com/Gurobi/gurobi-machinelearning/blob/main/notebooks/adversarial/adversarial_pytorch.ipynb](https://github.com/Gurobi/gurobi-machinelearning/blob/main/notebooks/adversarial/adversarial_pytorch.ipynb), where formulation (10)-(14) is solved with \(\Delta=5\).
We trained the ANNs using the PyTorch SGD optimizer with no weight decay and no momentum. We used a batch size of 128 and trained the network for 50 epochs with a constant learning rate equal to 0.1. All the networks are PyTorch sequential models containing only Linear and ReLU layers. For the pruned networks, we performed a (limited) 3 by 3 grid search to choose the \(\lambda\) factor that multiplies the SPR term and the \(\alpha\) hyper-parameter needed in its definition (\(M\) is automatically set as in [42]). After 50 training epochs, the model is fine-tuned for 10 epochs without using any regularization. Note that the objective of the grid search is to find the smallest network that keeps basically the same out-of-sample accuracy as the original one, and better results could conceivably be obtained by employing end-to-end techniques that take into account the optimization process in the computation of the loss [56; 57].
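Putting the pieces together, the training recipe above (SGD with learning rate 0.1, batch size 128, 50 epochs, SPR penalty on every linear layer) might be sketched as follows, reusing the `spr_term` sketch from Section 3.1; `model`, `loader`, `lam`, `alpha`, and `M` are assumed to be defined:

```python
import torch
import torch.nn.functional as F

opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.0, weight_decay=0.0)
for epoch in range(50):
    for xb, yb in loader:                    # batches of size 128
        opt.zero_grad()
        loss = F.cross_entropy(model(xb), yb)
        loss = loss + lam * sum(spr_term(mod.weight, alpha, M)
                                for mod in model.modules()
                                if isinstance(mod, torch.nn.Linear))
        loss.backward()
        opt.step()
# afterwards: threshold-prune near-zero neurons, then fine-tune for
# 10 epochs with the plain loss (no regularization)
```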
In Tables 1 and 2, the first column reports the architecture of the ANN used and whether pruning was applied, while the \(\Delta\) parameter value of (10)-(14) can be found in the first row. We compare the result of the baseline approach (i.e., without pruning) and the result obtained using the pruning method with the best hyper-parameters found. We report the validation accuracy (in percentage), the time needed by Gurobi to solve the obtained MIP (in seconds), and the number of branch-and-bound nodes explored during that time. Additionally, for the pruned networks, we report the value of \(\lambda\) and \(\alpha\) and the architecture of the network after pruning. When referring to a network architecture, the term \(L\)x\(N\) refers to a sequence of \(L\) layers, each containing \(N\) neurons. When multiple terms follow each other, it indicates their order in the network. For example, 2x20-3x10 stands for a network that starts with 2 layers of 20 neurons and continues with 3 layers of 10 neurons. Each experiment is repeated 3 times and a time limit of 1800 seconds is given to Gurobi.
### Detailed results
Table 1 shows the results using \(\Delta\)=5 on 4 different architectures with an increasing number of neurons and layers. For small architectures, like the 2x50 and 2x100 networks, pruning the ANN at least halves the time used by Gurobi. Moreover, the accuracy of the pruned models is higher than that of the baseline, likely because pruning also has a regularization effect.
The results on the 2x200 architecture show that the baseline is not able to solve the problems in the given time for two out of three runs. Instead, our method always leads to MIPs that are easily solved by Gurobi while maintaining the same accuracy as the baseline.
Finally, we report the results using the 6x100 networks, significantly bigger than the previous ones. The baseline, once again, cannot solve two out of the three problems in the given time limit. Instead, our method is able to succeed in all cases, at the cost of losing a little bit of accuracy (0.3 percent in the best case).
As a last remark, we notice that for all the MIPs we solved relative to unpruned networks, no counter-example existed in the given neighborhood (i.e., the optimal value of (10)-(14) is negative). This remains true for the corresponding pruned counterparts, confirming that the pruned and unpruned versions of the MIPs are qualitatively very similar.
### Investigating the quality of the solutions
To better validate the quality of our results, we solved the adversarial problem (10)-(14) again using \(\Delta\)=20, employing the same networks trained in the previous experiments. The aim was to find adversarial examples in the given region and better understand the effect of pruning on the resulting MIP. We report the results in Table 2, where the "accuracy" and "pruned architecture" columns have been removed since they are the same as in the previous table. For all the experiments, a counter-example existed in the given region, and in the last column of Table 2, named "Found", we report whether Gurobi was
able to find one adversarial example in the given time limit. Unsurprisingly, for all the MIPs corresponding to pruned networks, Gurobi was able to find an adversarial example within a time considerably below the 1800-second limit. Moreover, all the adversarial examples obtained using a pruned network were also adversarial for the unpruned counterpart with the same starting architecture. This empirically proved that, in our setting, pruning can even be used to solve the adversarial example problem for the unpruned counterpart, and it is again a good indication that pruning does not heavily affect the resulting MIP. This is in accordance with the ML literature, where there is a good consensus that not-too-aggressive pruning of ANNs does not significantly impact their robustness [58; 59], and therefore the existence--or not--of the counter-example in our application. Finally, the times reported in Table 2 show that the speed-up is still very significant even with the new value of \(\Delta\) and that in some cases the baseline is not able to find any adversarial example.
We conclude this section by noting that additional experiments, which are not included in this paper for the sake of brevity, have shown that a high setting of the OBBT parameter [11] of Gurobi is crucial to obtain good performance for both pruned and unpruned instances, confirming the importance of structured pruning.
## 6 Conclusions and future directions
This paper has demonstrated the effectiveness of pruning artificial neural networks in accelerating the solution time of mixed-integer programming problems that incorporate ANNs. The choice of the sparsity structure for pruning plays a crucial role in achieving significant speed-up, and we argued that structured pruning is superior to unstructured pruning. Further research in this area can focus on gaining a deeper understanding of which sparsity structures are most suitable for improving the solution time of MIPs. Exploring the trade-off between pruning-induced sparsity and solution quality is another interesting avenue for future investigations. By advancing our understanding of pruning techniques and their impact on MIPs, we can enhance the efficiency and scalability of embedding ANNs in optimization problems.
\begin{table}
\begin{tabular}{l c c c c l} \hline\hline
\multicolumn{6}{c}{\(\Delta=5\)} \\ \hline
Arch & \(\lambda\)-\(\alpha\) & Acc. & Time & Nodes & Pruned Arch \\ \hline
2x50 Baseline & & 97.55 & 14.88 & 5820 & \\
 & & 97.47 & 20.65 & 12040 & \\
 & & 97.25 & 8.07 & 9497 & \\ \hline
2x50 Pruned & & 97.77 & 3.29 & 3328 & 1x39-1x43 \\
 & & 97.49 & 3.93 & 6482 & 1x30-1x42 \\
 & & 97.73 & 1.96 & 3992 & 1x39-1x42 \\ \hline
2x100 Baseline & & 97.96 & 39.29 & 2971 & \\
 & & 97.76 & 35.01 & 3112 & \\
 & & 97.97 & 39.00 & 3019 & \\ \hline
2x100 Pruned & & 98.08 & 15.99 & 3066 & 1x61-1x80 \\
 & & 98.01 & 15.65 & 3107 & 1x61-1x87 \\
 & & 98.04 & 17.67 & 2951 & 1x63-1x86 \\ \hline
2x200 Baseline & & 98.14 & 1800.37 & 424758 & \\
 & & 98.04 & 1800.18 & 401361 & \\
 & & 97.95 & 781.90 & 58656 & \\ \hline
2x200 Pruned & 0.5-0.5 & 97.96 & 18.66 & 3029 & 1x56-1x144 \\
 & & 98.13 & 24.65 & 3600 & 1x57-1x144 \\
 & & 98.04 & 28.19 & 2997 & 1x59-1x140 \\ \hline
6x100 Baseline & & 97.60 & 474.76 & 15261 & \\
 & & 97.77 & 1800.02 & 798306 & \\
 & & 97.67 & 818.19 & 14334 & \\ \hline
6x100 Pruned & 1.0-0.1 & 97.52 & 231.02 & 3173 & 1x39-1x82 \\
 & & 97.47 & 79.22 & 7566 & 2x61-1x60-1x54 \\
 & & 97.21 & 44.24 & 11417 & 1x37-1x72-1x48 \\ \hline\hline
\end{tabular}
\end{table}
Table 1: Results using \(\Delta=5\).
\begin{table}
\begin{tabular}{l c c c c} \hline\hline
\multicolumn{5}{c}{\(\Delta=20\)} \\ \hline
Arch & \(\lambda\)-\(\alpha\) & Time & Nodes & Found \\ \hline
2x50 Baseline & & 1.85 & 1 & YES \\
 & & 5.06 & 1221 & YES \\
 & & 1.66 & 1 & YES \\ \hline
2x50 Pruned & & 2.73 & 127 & YES \\
 & & 9.20 & 1128 & YES \\
 & & 0.64 & 1 & YES \\ \hline
2x100 Baseline & & 17.35 & 1217 & YES \\
 & & 147.67 & 7532 & YES \\
 & & 102.67 & 3252 & YES \\ \hline
2x100 Pruned & & 6.66 & 2079 & YES \\
 & & 6.03 & 127 & YES \\
 & & 2.16 & 1 & YES \\ \hline
2x200 Baseline & & 439.17 & 40075 & YES \\
 & & 563.24 & 17597 & YES \\
 & & 508.14 & 6014 & YES \\ \hline
2x200 Pruned & & 2.56 & 1 & YES \\
 & & 18.65 & 5433 & YES \\
 & & 9.60 & 1202 & YES \\ \hline
6x100 Baseline & & 1800.06 & 138918 & NO \\
 & & 1800.03 & 237328 & NO \\
 & & 1800.10 & 184954 & NO \\ \hline
6x100 Pruned & 1.0-0.1 & 15.53 & 1 & YES \\
 & & 7.51 & 1 & YES \\
 & & 129.70 & 27045 & YES \\ \hline\hline
\end{tabular}
\end{table}
Table 2: Results using \(\Delta=20\).
## Acknowledgments
The authors are grateful to Pierre Bonami for his generous and insightful feedback. This work has been supported by the NSERC Alliance grant 544900-19, in collaboration with Huawei-Canada.
|
2301.07901 | Bayesian seismic tomography based on velocity-space Stein variational
gradient descent for physics-informed neural network | In this study, we propose a Bayesian seismic tomography inference method
using physics-informed neural networks (PINN). PINN represents a recent advance
in deep learning, offering the possibility to enhance physics-based simulations
and inverse analyses. PINN-based deterministic seismic tomography uses two
separate neural networks (NNs) to predict seismic velocity and travel time.
Naive Bayesian NN (BNN) approaches are unable to handle the high-dimensional
spaces spanned by the weight parameters of these two NNs. Hence, we reformulate
the problem to perform the Bayesian estimation exclusively on the NN predicting
seismic velocity, while the NN predicting travel time is used only for
deterministic travel time calculations, with the help of the adjoint method.
Furthermore, we perform BNN by introducing a function-space Stein variational
gradient descent (SVGD), which performs particle-based variational inference in
the space of the function predicted by the NN (i.e., seismic velocity), instead
of in the traditional weight space. The result is a velocity-space SVGD for the
PINN-based seismic tomography model (vSVGD-PINN-ST) that decreases the
complexity of the problem thus enabling a more accurate and physically
consistent Bayesian estimation, as confirmed by synthetic tests in one- and
two-dimensional tomographic problem settings. The method allows PINN to be
applied to Bayesian seismic tomography practically for the first time. Not only
that, it can be a powerful tool not only for geophysical but also for general
PINN-based Bayesian estimation problems associated with compatible NNs
formulations and similar, or reduced, complexity. | Ryoichiro Agata, Kazuya Shiraishi, Gou Fujie | 2023-01-19T06:05:23Z | http://arxiv.org/abs/2301.07901v2 | # Bayesian seismic tomography based on velocity-space Stein variational gradient descent for physics-informed neural network
###### Abstract
In this study, we propose a Bayesian seismic tomography inference method using physics-informed neural networks (PINN). PINN represents a recent advance in deep learning, offering the possibility to enhance physics-based simulations and inverse analyses. PINN-based deterministic seismic tomography uses two separate neural networks (NNs) to predict seismic velocity and travel time. Naive Bayesian NN (BNN) approaches are unable to handle the high-dimensional spaces spanned by the weight parameters of these two NNs. Hence, we reformulate the problem to perform the Bayesian estimation exclusively on the NN predicting seismic velocity, while the NN predicting travel time is used only for deterministic travel time calculations, with the help of the adjoint method. Furthermore, we perform BNN by introducing a function-space Stein variational gradient descent (SVGD), which performs particle-based variational inference in the space of the function predicted by the NN (i.e., seismic velocity), instead of in the traditional weight space. The result is a velocity-space SVGD for the PINN-based seismic tomography model (vSVGD-PINN-ST) that decreases the complexity of the problem thus enabling a more accurate and physically consistent Bayesian estimation, as confirmed by synthetic tests in one- and two-dimensional tomographic problem settings. The method allows PINN to be applied to Bayesian seismic tomography practically for the first time. Not only that, it can be a powerful tool not only for geophysical but also for general PINN-based Bayesian estimation problems associated with compatible NNs formulations and similar, or reduced, complexity.
Seismic tomography, physics-informed neural network (PINN), Bayesian neural network, function-space Stein variational gradient descent (fSVGD)
## I Introduction
Seismic tomography is a technique used to determine the interior seismic structure of the Earth, using seismic waves excited by natural earthquakes and artificial sources. This technique is essential for studying the evolution of the Earth, plate tectonics, and mechanisms of earthquake generation. Seismic tomography, similarly to many other physics-based estimation problems in geoscience, represents an under-determined inverse problem because the number of sources and receivers is limited. A priori constraints on the parameters of interest (e.g., seismic velocity) can be used to regularize the problem and provide the uncertainty quantification (UQ) of the estimates, ensuring reliable analytical results. To meet this requirement, a growing body of recent research uses Bayesian estimation to perform UQ, considering unknown parameters as stochastic variables. In the field of seismic tomography, there are many examples of Bayesian estimations in two-dimensional (2D) surface wave [1, 2, 3, 4], 2D seismic refraction [5], and three-dimensional (3D) tomography [6, 7, 8]. These methods employ a Bayesian estimation of the posterior probability distribution of the seismic velocity structure, combining travel time data, prior information on the target velocity structure, and forward numerical calculations of travel time based on grid or mesh discretization of the target domain.
Recent developments in deep learning techniques provide entirely new options for solving partial differential equations (PDEs) and inversion problems. Physics-informed neural networks (PINN) [9], in particular, have attracted considerable attention owing to their high applicability and flexibility. Raissi et al. [9] presented a PINN application to Navier-Stokes equations. It now has been applied to a variety of fields, including solid earth science, in forward simulations (e.g., seismic travel time calculation [10, 11], seismic wave field simulation [12], and crustal deformation modeling [13]), and inverse problems (e.g., seismic tomography [14, 15] and full-waveform inversion [16]). PINN uses neural networks (NN) to construct functions predicting physical quantities, such as travel time, taking spatial coordinates and time (for dynamic problems) as inputs. The NN is trained based on a loss function consisting of residuals of the governing PDE in some given evaluation (or collocation) points. The advantages brought by the PINN travel time calculation [10, 11] and seismic tomography in general [14, 15], such as mesh-free frameworks not requiring initial model setup, demonstrate a considerable potential for further important developments. The one considered in this study aims at extending PINN-based seismic tomography to include UQ based on Bayes' theorem.
Bayesian inversion based on PINN is an application of Bayesian neural network (BNN) where the posterior probability density function (PDF) of the weight parameters in NN is estimated. Stochastic properties of target physical quantities, such as seismic wave velocity, are obtained from the posterior predictive PDF, calculated based on the posterior PDF. Bayesian sampling methods and variational inference (VI) are the methods most commonly used to estimate the posterior PDF of the weight parameters in BNN [17]. Bayesian sampling is the most accurate method for BNN, enabling sampling from the true posterior PDF. For instance, the Hamiltonian/hybrid Monte Carlo (HMC) method [18] has been used in previous applications of Bayesian PINN [19],
[20]. However, since HMC is a sequential sampling method, its computation time and scalability are not suitable for large problems. In VI, the posterior PDF estimation problem is replaced with an optimization one, minimizing the Kullback-Leibler (KL) divergence between the target PDF and its parametric approximation. VI is usually based on the mean-field approximation (MFVI), which approximates the target PDF by combining multiple independent simple PDFs. However, MFVI displays poor performance in Bayesian PINN, giving an over-simplified PDF approximation [19, 21]. A more recent non-parametric class of VI, called particle-based VI (ParVI) and best known as Stein variational gradient descent (SVGD) [22], has attracted attention for its characteristic high approximation accuracy and computational parallelism. ParVI methods iteratively update a set of particles and use the corresponding empirical probability measure to approximate the target posterior PDF accurately. These methods have already been applied both to BNN and Bayesian PINN problems, using relatively small NN structures (e.g., [23]). Even so, BNNs have a challenging aspect, that is, the multi-modality of the posterior PDF in the high-dimensional weight space resulting from over-parametrization. In fact, optimized networks are able to express the same function values using multiple parameter combinations. Exploring posterior PDFs using ParVI and SVGD in such an over-parametrized space is a difficult task, resulting in degraded approximation performance [24].
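For concreteness, one SVGD iteration, which moves \(n\) particles along \(\phi(x_i)=\frac{1}{n}\sum_{j}\left[k(x_j,x_i)\nabla_{x_j}\log p(x_j)+\nabla_{x_j}k(x_j,x_i)\right]\), can be sketched in PyTorch as follows; the median-heuristic RBF bandwidth is a common choice, and `log_prob` is assumed to return the log-posterior for a batch of particles:

```python
import torch

def svgd_step(particles, log_prob, lr=1e-2):
    """One Stein variational gradient descent update with an RBF kernel."""
    x = particles.detach().requires_grad_(True)
    grad_logp = torch.autograd.grad(log_prob(x).sum(), x)[0]     # (n, d)
    xd = x.detach()
    diff = xd.unsqueeze(1) - xd.unsqueeze(0)                     # x_j - x_i, (n, n, d)
    sq_dist = (diff ** 2).sum(-1)
    h = sq_dist.median() / torch.log(torch.tensor(float(xd.shape[0]))).clamp_min(1e-8)
    K = torch.exp(-sq_dist / h)                                  # k(x_j, x_i)
    grad_K = (-2.0 / h) * diff * K.unsqueeze(-1)                 # grad_{x_j} k(x_j, x_i)
    phi = (K @ grad_logp + grad_K.sum(dim=0)) / xd.shape[0]
    return (xd + lr * phi).detach()
```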
However, in physics problems, such as those of seismic tomography, the posterior PDFs of interest are defined in the space of the functions predicted by NNs (e.g., seismic velocities); therefore, they are expected to have a considerably simpler shape than those defined in the weight space. Performing ParVI for BNNs in function space, instead of in the multi-modal weight space, avoids such degraded posterior approximation issues [24]. In addition, considering Bayesian estimations in the function space also enables the efficient incorporation of physically meaningful components of prior information, which would not be possible in the weight space [26].
Another important aspect to consider is the confinement of a NN targeted in Bayesian estimation. In fact, previous PINN-based seismic tomography methods [14, 15] included two NNs: one predicting travel time as a solution of the governing PDE and the other seismic wave velocity. A similar NN formulation was employed in other inverse problems (e.g. [16]). Targeting the weight parameters of two NNs in Bayesian estimation is not efficient, considering we are exclusively interested in the seismic velocity UQ. Moreover, the only requirement for the travel time NN is to satisfy the governing PDE for the given velocity structure. Therefore, if we could confine the target parameters to the weights of the NN predicting seismic velocity (or other relevant functions for UQ), the exploration of the posterior PDF would be drastically simplified.
This study proposes a novel method, based on Bayesian inference and PINN, to address UQ and inverse problems applied to seismic tomography, overcoming existing limitations. The main contributions of this study can be summarized as follows. We reformulated the problem to perform the Bayesian estimation only on the velocity NN, while using the travel time NN for deterministic travel time calculations, with the help of a numerical technique called the adjoint method [27]. We performed Bayesian estimation for the velocity network by introducing SVGD directly in the space of functions predicted by the NN (i.e., seismic velocity), instead of in the whole weight space, leveraging a recent development in BNN techniques [24]. The proposed approach is called "velocity-space SVGD for PINN-based seismic tomography (vSVGD-PINN-ST)" and represents the first practical application of Bayesian estimation in PINN-based seismic tomography. Furthermore, vSVGD-PINN-ST can be generalized to the UQ of PINN-based inverse problems sharing similar NN formulation and problem size.
This paper is organized as follows: In Section II, we present a PINN formulation of seismic tomography from previous studies and explain a naive formulation of Bayesian estimation as a baseline method. In Section III, we present our vSVGD-PINN-ST, introducing the improvements presented earlier. In Section IV and V, we validate the applicability of the proposed method to realistic problems through synthetic seismic tomography tests in one and two dimensions. In Section VI and VII we discuss results and give concluding remarks, respectively.
## II Baseline method
### _PINN-based deterministic seismic tomography_
We first introduce the PINN formulation of the deterministic seismic tomographic problem as proposed by [14]. In this work, we follow a slightly modified approach. We start from the eikonal equation relating the spatial derivative of the travel time field to the velocity structure as follows:
\[|\nabla T(\mathbf{x},\mathbf{x}_{s})|^{2} = \frac{1}{v^{2}(\mathbf{x})},\quad\forall\,\mathbf{x}\in\Omega \tag{1}\] \[T(\mathbf{x}_{s},\mathbf{x}_{s}) = 0, \tag{2}\]
where \(\Omega\) is a \(\mathbb{R}^{d}\) domain, with \(d\) as the space dimension, \(T(\mathbf{x},\mathbf{x}_{s})\) is the travel time at the point \(\mathbf{x}\) from the source \(\mathbf{x}_{s}\), \(v(\mathbf{x})\) is the velocity defined on \(\Omega\), and \(\nabla\) denotes the gradient operator. The second equation defines the point source condition. To avoid singularities in this condition, previous studies modeling the travel time using PINN introduced the following factored form [10, 11]:
\[T(\mathbf{x},\mathbf{x}_{s})=T_{0}(\mathbf{x},\mathbf{x}_{s})\,\tau(\mathbf{x},\mathbf{x}_{s}) \tag{3}\]
where \(T_{0}(\mathbf{x},\mathbf{x}_{s})\) is defined as
\[T_{0}(\mathbf{x},\mathbf{x}_{s})=\left|\mathbf{x}-\mathbf{x}_{s}\right|. \tag{4}\]
Using this factorization, the point source condition becomes:
\[\tau(\mathbf{x}_{s},\mathbf{x}_{s})=\frac{1}{v(\mathbf{x}_{s})}. \tag{5}\]
Considering \(v\) positive, we can introduce the residuals of the eikonal equation \(r_{\text{EE}}\) and the point source condition \(r_{\text{PC}}\) in terms of velocity:
\[r_{\text{EE}}=v(\mathbf{x})-\frac{1}{|\nabla T(\mathbf{x},\mathbf{x}_{s})|}, \tag{6}\]
\[r_{\text{PC}}=v(\mathbf{x}_{s})-\frac{1}{\tau(\mathbf{x}_{s},\mathbf{x}_{s})}. \tag{7}\]
The PINN-based seismic tomography method solves the eikonal equation and estimates the velocity structure simultaneously, by training NNs to predict the travel-time and velocity function. Waheed et al. [14] proposed to use two different NNs to construct the functions \(f_{T}\) and \(f_{v}\), characterized by weight parameters \(\mathbf{\theta}_{T}\) and \(\mathbf{\theta}_{v}\). Optimized NNs are expected to return accurate approximations of the travel time and velocity. Such formulation, which introduces two NNs, one for the solution of the governing PDE and the other for PDE parameters optimization, is generally used in PINN-based inverse analysis [16]. In this study, we define NNs functions as
\[T(\mathbf{x},\mathbf{x}_{s})\simeq f_{T}(\mathbf{x},\mathbf{x}_{s},\mathbf{\theta}_{T})=T_{0}(\mathbf{x},\mathbf{x}_{s})/f_{\tau^{-1}}(\mathbf{x},\mathbf{x}_{s},\mathbf{\theta}_{T}), \tag{8}\]
\[v(\mathbf{x})\simeq f_{v}(\mathbf{x},\mathbf{\theta}_{v})=v_{0}(\mathbf{x})+f_{v_{\rm pth}}(\mathbf{x},\mathbf{\theta}_{v}), \tag{9}\]
where \(f_{\tau^{-1}}\) is an NN-based function approximating \(1/\tau(\mathbf{x},\mathbf{x}_{s})\), \(v_{0}(\mathbf{x})\) is the reference velocity set by the user, and \(f_{v_{\rm{pth}}}(\mathbf{x},\mathbf{\theta}_{v})\) is the neural network function approximating the velocity perturbation component. We approximated \(1/\tau\) and \(v_{\rm{pth}}\) using NNs, instead of directly computing \(\tau\) and \(v\) as done in previous studies [10, 11], to improve convergence performance. We used fully connected feed-forward networks to implement both \(f_{\tau^{-1}}\) and \(f_{v_{\rm{pth}}}\). The \(f_{v_{\rm{pth}}}\) network is characterized by the parameters \(v_{\rm{pth}}^{\rm{max}}\) and \(v_{\rm{pth}}^{\rm{min}}\), representing the maximum and minimum velocity perturbation, set a priori. Additional operations were applied to normalize the input and output of NNs, improving the convergence performance and setting upper and lower limits of the final output values (see Appendix A). For a deterministic tomographic problem, the two NNs are trained simultaneously using the same loss function:
\[L=\alpha_{1}\sum_{i=1}^{N_{T}}\left(T_{\rm obs}^{(i)}-f_{T}(\mathbf{x}_{r}^{(i)},\mathbf{x}_{s}^{(i)};\mathbf{\theta}_{T})\right)^{2}+\alpha_{2}\sum_{i=1}^{N_{c}}\left(f_{v}(\mathbf{x}_{c}^{(i)};\mathbf{\theta}_{v})-\frac{1}{|\nabla f_{T}(\mathbf{x}_{c}^{(i)},\mathbf{x}_{s}^{(i)};\mathbf{\theta}_{T})|}\right)^{2}+\alpha_{3}\sum_{i=1}^{N_{s}}\left(f_{v}(\mathbf{x}_{s}^{(i)};\mathbf{\theta}_{v})-f_{\tau^{-1}}(\mathbf{x}_{s}^{(i)},\mathbf{x}_{s}^{(i)};\mathbf{\theta}_{T})\right)^{2}, \tag{10}\]
where \(N_{T}\), \(N_{c}\) and \(N_{s}\) are the number of travel time data, collocation points, and source points, respectively; \(T_{\rm{obs}}\) represents the travel time data; \(\mathbf{x}_{r}\) and \(\mathbf{x}_{c}\) are the coordinates of receiver and collocation points, respectively; \(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\) are loss weights, typically assigned to \(1/N_{T}\), \(1/N_{c}\) and \(1/N_{s}\), respectively, although manual or automated tuning can be used for a stable training [28]. The collocation points, which are usually selected randomly within the target domain \(\Omega\), are set as evaluation points of the PDE residuals [9]. The first term on the right-hand side is the loss due to observational constraints, consisting of the sum of the squared difference of true and predicted travel time values for each source-receiver pair. The second term is the loss due to physics-informed constraints, consisting of the sum of the squared residuals defined in Equation 6 for each pair of source and collocation point. The third term is the one modeling the point source constraints, consisting of the sum of the squared residual defined in Equation 7 for each source point.
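Since Appendix A is not reproduced here, the following PyTorch sketch shows one plausible realization of \(f_{v}\) in Eq. (9), with the perturbation bounded between \(v_{\rm pth}^{\rm min}\) and \(v_{\rm pth}^{\rm max}\) via a sigmoid; the layer sizes and the exact bounding scheme are our assumptions:

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Sketch of f_v in Eq. (9): reference velocity plus a bounded perturbation."""
    def __init__(self, dim, v0, v_pth_min, v_pth_max, width=64, depth=4):
        super().__init__()
        layers, d = [], dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Tanh()]
            d = width
        layers += [nn.Linear(d, 1)]
        self.net = nn.Sequential(*layers)
        self.v0 = v0                          # callable reference velocity v_0(x)
        self.v_min, self.v_max = v_pth_min, v_pth_max

    def forward(self, x):
        # sigmoid bounds the perturbation f_{v_pth} within [v_min, v_max]
        pth = self.v_min + (self.v_max - self.v_min) * torch.sigmoid(self.net(x))
        return self.v0(x) + pth
```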
Following the above definitions, we defined the training datasets for the observation and PDE loss functions as \(\mathbf{X}_{T}=\{(\mathbf{x}_{r}^{(1)},\mathbf{x}_{s}^{(1)},T_{\rm{obs}}^{(1)}),(\mathbf{x}_{r}^{(2)},\mathbf{x}_{s}^{(2)},T_{\rm{obs}}^{(2)}),\ldots,(\mathbf{x}_{r}^{(N_{T})},\mathbf{x}_{s}^{(N_{T})},T_{\rm{obs}}^{(N_{T})})\}\) and \(\mathbf{X}_{c}=\{(\mathbf{x}_{c}^{(1)},\mathbf{x}_{s}^{(1)}),(\mathbf{x}_{c}^{(2)},\mathbf{x}_{s}^{(2)}),\ldots,(\mathbf{x}_{c}^{(N_{c})},\mathbf{x}_{s}^{(N_{c})})\}\), respectively. Tomographic estimation of \(v(\mathbf{x})\) is given by \(f_{v}(\mathbf{x},\mathbf{\theta}_{v}^{*})\), where \(\mathbf{\theta}_{T}^{*},\mathbf{\theta}_{v}^{*}=\underset{\mathbf{\theta}_{T},\mathbf{\theta}_{v}}{\arg\min}L(\mathbf{\theta}_{T},\mathbf{\theta}_{v})\) represent the optimized values of the NNs. Fig. 1 schematically illustrates settings and constraints used to train the NNs involved in both the tomographic deterministic and Bayesian formulation (described in the following section).
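As an illustration of Eq. (10), the three loss terms can be evaluated with automatic differentiation roughly as follows; tensor shapes and the NN call signatures are our assumptions (in practice the coordinates are concatenated before being fed to each NN):

```python
import torch

def pinn_loss(f_tau_inv, f_v, X_T, X_c, X_s, alphas):
    """Schematic evaluation of the loss in Eq. (10)."""
    a1, a2, a3 = alphas
    # observation term: sum of squared travel-time misfits
    xr, xs, T_obs = X_T
    T_pred = (xr - xs).norm(dim=1, keepdim=True) / f_tau_inv(xr, xs)
    loss_obs = a1 * ((T_obs - T_pred) ** 2).sum()
    # eikonal term: residual r_EE of Eq. (6) at the collocation points
    xc, xcs = X_c
    xc = xc.detach().requires_grad_(True)
    Tc = (xc - xcs).norm(dim=1, keepdim=True) / f_tau_inv(xc, xcs)
    gradT = torch.autograd.grad(Tc.sum(), xc, create_graph=True)[0]
    loss_pde = a2 * ((f_v(xc) - 1.0 / gradT.norm(dim=1, keepdim=True)) ** 2).sum()
    # point-source term: residual r_PC of Eq. (7) at the sources
    loss_src = a3 * ((f_v(X_s) - f_tau_inv(X_s, X_s)) ** 2).sum()
    return loss_obs + loss_pde + loss_src
```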
### _Naive Bayesian formulation of PINN-based seismic tomography_
In this study, we considered the weight parameters as stochastic variables, formulating the estimation problem using the Bayes' theorem. In a naive formulation, we consider a Bayesian neural network (BNN) for both \(\mathbf{\theta}_{T}\) and \(\mathbf{\theta}_{v}\) given by:
\[P(\mathbf{\theta}_{T},\mathbf{\theta}_{v}|\mathbf{d}) = \frac{P(\mathbf{d}|\mathbf{\theta}_{T},\mathbf{\theta}_{v})P(\mathbf{\theta}_ {T},\mathbf{\theta}_{v})}{P(\mathbf{d})} \tag{11}\] \[\propto P(\mathbf{d}|\mathbf{\theta}_{T},\mathbf{\theta}_{v})P(\mathbf{\theta}_{T}, \mathbf{\theta}_{v}),\]
where \(\mathbf{d}\) is the data vector, including not only travel time data but also the constraints from the eikonal equation. The likelihood function includes the two following components:
\[P(\mathbf{d}|\mathbf{\theta}_{T},\mathbf{\theta}_{v})=P(\mathbf{T}_{\rm{obs}}|\mathbf{ \theta}_{T},\mathbf{\theta}_{v})P(\mathbf{r}|\mathbf{\theta}_{T},\mathbf{\theta}_{v}). \tag{12}\]
The first term on the right-hand side represents the stochastic observation error of the travel time, for which the following Gaussian distribution is assumed:
\[P(\mathbf{T}_{\rm{obs}}|\mathbf{\theta}_{T},\mathbf{\theta}_{v})\] \[=\frac{1}{Z}\exp\left(-\frac{1}{2}(\mathbf{T}_{\rm{obs}}-\mathbf{ f}_{T}(\mathbf{\theta}_{T}))^{\top}\mathbf{E}_{\rm{obs}}^{-1}(\mathbf{T}_{\rm{obs}}- \mathbf{f}_{T}(\mathbf{\theta}_{T}))\right), \tag{13}\]
where
\[\mathbf{T}_{\rm{obs}} = \left[T_{\rm{obs}}^{(1)},T_{\rm{obs}}^{(2)},\ldots,T_{\rm{obs}}^{ (N_{T})}\right], \tag{14}\] \[\mathbf{f}_{T}(\mathbf{\theta}_{T}) = \left[f_{T}^{(1)},f_{T}^{(2)},\ldots,f_{T}^{(N_{T})}\right],\] (15) \[f_{T}^{(i)} = f_{T}(\mathbf{x}_{r}^{(i)},\mathbf{x}_{s}^{(i)};\mathbf{\theta}_{T}), \tag{16}\]
and \(\mathbf{E}_{\rm{obs}}\) and \(Z\) are the covariance matrix for the observation error and a normalizing coefficient, respectively. We see that \(P(\mathbf{T}_{\rm{obs}}|\mathbf{\theta}_{T},\mathbf{\theta}_{v})\) is an explicit function of \(\mathbf{\theta}_{T}\), only implicitly dependent on \(\mathbf{\theta}_{v}\). To specify the second term of Equation 12, we assume Gaussian properties for the residuals as well:
\[P(\mathbf{r}|\mathbf{\theta}_{T},\mathbf{\theta}_{v})\] \[=\frac{1}{Z}\exp\left(-\frac{1}{2}\mathbf{r}^{\top}(\mathbf{\theta}_{ T},\mathbf{\theta}_{v})\mathbf{E}_{r}^{-1}\mathbf{r}(\mathbf{\theta}_{T},\mathbf{\theta}_{v}) \right), \tag{17}\]
where
\[\mathbf{r}(\mathbf{\theta}_{T},\mathbf{\theta}_{v})=\left[r^{(1)}_{\rm EE},r^{(2)}_{\rm EE },\ldots,r^{(N_{e})}_{\rm EE},r^{(1)}_{\rm PC},r^{(2)}_{\rm PC},\ldots,r^{(N_{s}) }_{\rm PC}\right], \tag{18}\]
\[r^{(i)}_{\rm EE}=f_{v}(\mathbf{x}^{(i)}_{c};\mathbf{\theta}_{v})-\frac{1}{|\nabla f _{T}(\mathbf{x}^{(i)}_{c},\mathbf{x}^{(i)}_{s};\mathbf{\theta}_{T})|}, \tag{19}\]
\[r^{(i)}_{\rm PC}=f_{v}(\mathbf{x}^{(i)}_{s};\mathbf{\theta}_{v})-f_{\tau^{-1}}( \mathbf{x}^{(i)}_{s},\mathbf{x}^{(i)}_{s};\mathbf{\theta}_{T}), \tag{20}\]
and \(\mathbf{E}_{\rm r}\) is the covariance matrix of the PDE residuals, which typically has only diagonal components, set according to the confidence level of the forward model. We use the notations \(\mathbf{r}(\mathbf{\theta}_{T},\mathbf{\theta}_{v})\) and \(\mathbf{r}\) interchangeably. The simplest choices for the prior PDF \(P(\mathbf{\theta}_{T},\mathbf{\theta}_{v})\) are, for instance, an independent and identically distributed (i.i.d.) zero-mean Gaussian distribution [19] or a Student's \(t\)-distribution [23]. Once the specific forms of the likelihood function and prior PDF are determined, the approximate posterior PDF can be obtained by several Bayesian estimation methods, introduced in the next section. The stochastic property of the seismic velocity is then obtained as the predictive PDF \(P(\mathbf{v}|\mathbf{d})\) based on the marginal posterior PDF of \(\mathbf{\theta}_{v}\) as follows:
\[P(\mathbf{v}|\mathbf{d}) = \int P(\mathbf{v}|\mathbf{\theta}_{v})P(\mathbf{\theta}_{v}|\mathbf{d})d \mathbf{\theta}_{v}. \tag{21}\]
### _Stein variational gradient descent for naive Bayesian PINN-based seismic tomography_
The naive Bayesian formulation involves the Bayesian estimation of the posterior PDF of the weight parameters, \(P(\mathbf{\theta}_{T},\mathbf{\theta}_{v}|\mathbf{d})\). ParVI methods, such as SVGD [22], are Bayesian estimation paradigms that have recently gained popularity in various fields, including geophysics research (e.g., [4, 29, 30]). SVGD is known to approximate the posterior distribution more efficiently than HMC, whose performance degrades with high-dimensional problems and large datasets. As in ordinary MFVI, in SVGD the Bayesian estimation is replaced with a minimization problem of the KL divergence, defined as follows:
\[KL\left(Q(\mathbf{\theta})\|P(\mathbf{\theta}|\mathbf{d})\right)=\int Q(\mathbf{\theta}) \log\frac{Q(\mathbf{\theta})}{P(\mathbf{\theta}|\mathbf{d})}d\mathbf{\theta}, \tag{22}\]
where \(P(\mathbf{\theta}|\mathbf{d})\) and \(Q(\mathbf{\theta})\) are the target posterior and approximate distributions, respectively. SVGD employs a set of particles \(\{\mathbf{\theta}\}_{i=1}^{n}\) to approximate the target posterior PDF by minimizing the KL divergence. These particles iteratively move towards the posterior distribution following the steepest descent direction of the KL divergence, \(\mathbf{\phi}\), which is obtained from the kernelized Stein discrepancy defined in a reproducing kernel Hilbert space (RKHS). The update equations of SVGD are given as follows:
\[\mathbf{\theta}_{i}^{l+1}=\mathbf{\theta}_{i}^{l}+\epsilon_{l}\mathbf{\phi}\left(\mathbf{ \theta}_{i}^{l}\right), \tag{23}\]
where
\[\mathbf{\phi}(\mathbf{\theta})=\frac{1}{n}\sum_{j=1}^{n}\{k(\mathbf{\theta}_{j}^{l},\mathbf{\theta})\nabla_{\mathbf{\theta}_{j}^{l}}\log P(\mathbf{\theta}_{j}^{l}|\mathbf{d})+\nabla_{\mathbf{\theta}_{j}^{l}}k(\mathbf{\theta}_{j}^{l},\mathbf{\theta})\}, \tag{24}\]
\(\epsilon_{l}\) is the step size at each iteration \(l\), which can be determined using various adaptive optimizers (such as Adam [31]), and \(k(\mathbf{x},\cdot)\) represents a positive definite kernel. In particular, we adopt a radial basis function (RBF) kernel with bandwidth determined by the median heuristic, as in previous studies. The first term in Equation 24, the "driving force", is a smoothed gradient of the log posterior density that moves the particles toward the high-density regions of the posterior distribution. The second term, the "repulsive force", promotes diversity and prevents particles from concentrating on the mode of the target PDF. This combination of the two forces results in an efficient non-parametric approximation of the posterior PDF, using a finite number of particles. In an SVGD optimization, mini-batch stochastic gradient descent can be used [22] for efficient optimization with large training datasets. In BNN applications, the NNs corresponding to the particles \(\{\mathbf{\theta}\}_{i=1}^{n}\) are trained simultaneously by SVGD. SVGD can be applied to the naive Bayesian formulation of the target inverse problem introduced in the previous section by taking \(\mathbf{\theta}=(\mathbf{\theta}_{T}\,\mathbf{\theta}_{v})^{\top}\). We call this naive approach "SVGD for PINN-based seismic tomography" (SVGD-PINN-ST). Algorithm 1 summarizes the above baseline SVGD-PINN-ST algorithm with mini-batch stochastic gradient descent.
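A minimal sketch of one SVGD update (Equations 23-24) with an RBF kernel and the median-heuristic bandwidth could look as follows; the `log_prob` callable and the \((n, d)\) particle layout are assumptions for illustration, not the authors' implementation.

```python
import torch

def svgd_step(theta, log_prob, lr=1e-2):
    """One SVGD update for particles theta of shape (n, d)."""
    n = theta.shape[0]
    theta = theta.detach().requires_grad_(True)
    # Per-particle gradient of the log posterior (summing decouples particles).
    grad_logp = torch.autograd.grad(log_prob(theta).sum(), theta)[0]

    with torch.no_grad():
        sq_dist = torch.cdist(theta, theta) ** 2
        h = sq_dist.median() / torch.log(torch.tensor(n + 1.0))  # median heuristic
        K = torch.exp(-sq_dist / (2.0 * h))
        drive = K @ grad_logp                     # kernel-smoothed gradient
        repulse = (K.sum(1, keepdim=True) * theta - K @ theta) / h  # RBF kernel grad
        phi = (drive + repulse) / n               # Equation 24
        return theta + lr * phi                   # Equation 23
```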
## III Formulation of velocity-space SVGD for PINN-based seismic tomography
### _Formulation of Bayesian estimation exclusively for the velocity NN_
Although SVGD is applicable to BNN or Bayesian PINN theoretically, the parameter space \(\mathbf{\theta}=(\mathbf{\theta}_{T}\,\mathbf{\theta}_{v})^{\top}\) of a practical problem setting is usually high-dimensional (above \(10^{4}\) parameters), making the problem intractable for both HMC and SVGD. Consequently, reducing the effective parameter space dimension in the target Bayesian estimation is essential. For this reason, we seek to reduce the target space from \((\mathbf{\theta}_{T}\,\mathbf{\theta}_{v})^{\top}\) to \(\mathbf{\theta}_{v}\), considering that we are exclusively interested in the uncertainty of \(v\). The only requirement for \(\mathbf{\theta}_{T}\) is to minimize the residuals of the eikonal equation. We reformulate the Bayesian estimation defined in Equation 11 by considering exclusively the estimation of the posterior PDF of \(\mathbf{\theta}_{v}\) as follows:
\[P(\mathbf{\theta}_{v}|\mathbf{d}) \propto P(\mathbf{T}_{\rm obs}|\mathbf{\theta}_{v})P(\mathbf{\theta}_{v}), \tag{25}\]
with the condition
\[\mathbf{r}(\mathbf{\theta}_{T},\mathbf{\theta}_{v})=\mathbf{0}. \tag{26}\]
To estimate the posterior probability in Equation 25, we consider an SVGD update for \(\mathbf{\theta}_{v}\) only:
\[\mathbf{\theta}_{v\,i}^{l+1}=\mathbf{\theta}_{v\,i}^{l}+\epsilon_{l}\mathbf{\phi}\left(\mathbf{ \theta}_{v\,i}^{l}\right), \tag{27}\]
where
\[\mathbf{\phi}(\mathbf{\theta}_{v})\] \[=\frac{1}{n}\sum_{j=1}^{n}\{k(\mathbf{\theta}_{v\,j}^{l},\mathbf{\theta}_ {v})\nabla_{\mathbf{\theta}_{v\,j}^{l}}\log P(\mathbf{\theta}_{v\,j}^{l}|\mathbf{d})+ \nabla_{\mathbf{\theta}_{v\,j}^{l}}k(\mathbf{\theta}_{v\,j}^{l},\mathbf{\theta}_{v})\}. \tag{28}\]
To obtain \(\nabla_{\mathbf{\theta}_{v\,j}}\log P(\mathbf{\theta}_{v\,j}|\mathbf{d})\), the calculation of the total derivative is required, because the likelihood function is implicitly dependent on \(\mathbf{\theta}_{v}\) through \(\mathbf{\theta}_{T}\), as seen in Equation 13.
The above calculation can be performed using the Lagrange multiplier method, introducing the equality constraints \(\mathbf{r}=\mathbf{0}\). This is a discrete version of the adjoint method [27]. Here, we define the Lagrange function as the sum of the log likelihood function \(J=\log P(\mathbf{T}_{\mathrm{obs}}|\boldsymbol{\theta}_{v})\) and a Lagrange multiplier term enforcing the above constraints, obtaining
\[J_{\mathrm{L}}=J+\boldsymbol{\lambda}^{\top}\mathbf{r}, \tag{29}\]
where \(J_{\mathrm{L}}\) and \(\boldsymbol{\lambda}\) are the Lagrange function and multiplier, respectively. The total derivative of \(J_{\mathrm{L}}\) with respect to \(\boldsymbol{\theta}_{v}\) can be calculated using the following chain rule of differentiation:
\[\frac{dJ_{\mathrm{L}}}{d\boldsymbol{\theta}_{v}} =\frac{\partial J_{\mathrm{L}}}{\partial\boldsymbol{\theta}_{v}}+\frac{\partial J_{\mathrm{L}}}{\partial\boldsymbol{\theta}_{T}}\frac{d\boldsymbol{\theta}_{T}}{d\boldsymbol{\theta}_{v}}+\frac{\partial J_{\mathrm{L}}}{\partial\boldsymbol{\lambda}}\frac{d\boldsymbol{\lambda}}{d\boldsymbol{\theta}_{v}}\] \[=\boldsymbol{\lambda}^{\top}\frac{\partial\mathbf{r}}{\partial\boldsymbol{\theta}_{v}}+\left(\frac{\partial J}{\partial\boldsymbol{\theta}_{T}}+\boldsymbol{\lambda}^{\top}\frac{\partial\mathbf{r}}{\partial\boldsymbol{\theta}_{T}}\right)\frac{d\boldsymbol{\theta}_{T}}{d\boldsymbol{\theta}_{v}}+\mathbf{r}^{\top}\frac{d\boldsymbol{\lambda}}{d\boldsymbol{\theta}_{v}}. \tag{30}\]
The second and third terms on the right-hand side still include a total derivative. However, the third term vanishes if \(f_{T}\) is trained for \(f_{v}\) and \(\mathbf{r}\) becomes sufficiently small. The second term can also be eliminated by solving the following equation for \(\boldsymbol{\lambda}\):
\[\frac{\partial J}{\partial\boldsymbol{\theta}_{T}}+\boldsymbol{\lambda}^{\top} \frac{\partial\mathbf{r}}{\partial\boldsymbol{\theta}_{T}}=\mathbf{0}. \tag{31}\]
We approximate the solution of the last equation using the L-BFGS algorithm [32], with a zero vector as the initial guess. Equation 31 represents a discrete adjoint equation. The target total derivative is obtained by evaluating the first term in Equation 30, using the solution \(\boldsymbol{\lambda}^{*}\) satisfying Equation 31:
\[\frac{dJ_{\mathrm{L}}}{d\boldsymbol{\theta}_{v}}=\boldsymbol{\lambda}^{*\top} \frac{\partial\mathbf{r}}{\partial\boldsymbol{\theta}_{v}}. \tag{32}\]
Using this method, the gradient of the logarithm of the posterior PDF can be calculated as follows:
\[\nabla_{\boldsymbol{\theta}_{v}}\log P(\boldsymbol{\theta}_{v}| \mathbf{d}) = \nabla_{\boldsymbol{\theta}_{v}}\log P(\mathbf{T}_{\mathrm{obs}}| \boldsymbol{\theta}_{v})+\nabla_{\boldsymbol{\theta}_{v}}\log P(\boldsymbol{ \theta}_{v}) \tag{33}\] \[= \frac{dJ_{\mathrm{L}}}{d\boldsymbol{\theta}_{v}}+\nabla_{ \boldsymbol{\theta}_{v}}\log P(\boldsymbol{\theta}_{v}).\]
The discrete adjoint method is usually applied to regular systems, such as discretized PDEs. However, our adjoint equation is over-determined: the number of equations (one per weight parameter) is larger than the number of unknowns (one Lagrange multiplier per collocation and source point), because PINN-based systems of PDEs are usually over-parameterized, so the former significantly exceeds the latter. For practical applications, considering that \(\frac{\partial J}{\partial\boldsymbol{\theta}_{T}}\) is a highly sparse vector due to the over-parametrization, Equation 31 is approximately satisfied by a moderate number of collocation points.
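The following sketch illustrates this adjoint computation under simplifying assumptions: `theta_T` and `theta_v` are single flattened parameter tensors whose autograd graphs for `J` and `r` are still attached, and Equation 31 is solved in the least-squares sense; this is not the authors' code.

```python
import torch

def adjoint_grad(J, r, theta_T, theta_v, n_lbfgs=10):
    """dJ_L/dtheta_v via the discrete adjoint method (Equations 29-32)."""
    dJ_dT = torch.autograd.grad(J, theta_T, retain_graph=True)[0]

    # Zero vector as the initial solution for the Lagrange multiplier.
    lam = torch.zeros(r.shape[0], dtype=r.dtype, requires_grad=True)
    opt = torch.optim.LBFGS([lam], max_iter=n_lbfgs)

    def closure():
        opt.zero_grad()
        # lam^T dr/dtheta_T as a vector-Jacobian product (no explicit Jacobian).
        vjp = torch.autograd.grad(r, theta_T, grad_outputs=lam,
                                  retain_graph=True, create_graph=True)[0]
        loss = ((dJ_dT + vjp) ** 2).sum()  # Equation 31 in least-squares form
        loss.backward()
        return loss

    opt.step(closure)

    # Equation 32: evaluate lam*^T dr/dtheta_v with the converged multiplier.
    return torch.autograd.grad(r, theta_v, grad_outputs=lam.detach(),
                               retain_graph=True)[0]
```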
After each SVGD update for \(\boldsymbol{\theta}_{v}\) for all particles, \(\boldsymbol{\theta}_{T}\) is trained for a new \(\boldsymbol{\theta}_{v}\) satisfying \(\mathbf{r}=\mathbf{0}\) as follows:
\[\boldsymbol{\theta}_{T}^{*}=\mathop{\mathrm{arg\,min}}_{\boldsymbol{\theta}_{ T}}L(\boldsymbol{\theta}_{T}), \tag{34}\]
where
\[L = \frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\left(f_{v}(\mathbf{x}_{c}^{(i)};\boldsymbol{\theta}_{v})-\frac{1}{|\nabla f_{T}(\mathbf{x}_{c}^{(i)},\mathbf{x}_{s}^{(i)};\boldsymbol{\theta}_{T})|}\right)^{2} \tag{35}\] \[+ \frac{1}{N_{s}}\sum_{i=1}^{N_{s}}\left(f_{v}(\mathbf{x}_{s}^{(i)};\boldsymbol{\theta}_{v})-f_{\tau^{-1}}(\mathbf{x}_{s}^{(i)},\mathbf{x}_{s}^{(i)};\boldsymbol{\theta}_{T})\right)^{2}.\]
The resulting process is equivalent to a PINN-based forward solver of the eikonal equation [10, 11]. Algorithm 2 summarizes the improved SVGD algorithm in which the target parameter space is reduced to that of \(\boldsymbol{\theta}_{v}\) only.
### _Bayesian estimation of velocity NN with particle-based functional variational inference_
In the previous section, the parameter space for the Bayesian estimation was reduced to that of \(\boldsymbol{\theta}_{v}\). However, the NN for \(f_{v}\) is high-dimensional and over-parameterized as well, which results in a posterior PDF with multi-modal features in the weight space that impairs Bayesian estimation. To address this, we introduce a function-space SVGD (fSVGD or, more generally, fParVI) [24] to formulate the Bayesian estimation directly in the function space predicted by the associated NN. This reformulation remarkably improves the estimation accuracy of the posterior PDF in BNN by avoiding the direct exploration of a multi-modal PDF over the weight space [24]. We rewrite the equation for Bayes' theorem as follows:
\[P(\mathbf{v}|\mathbf{d}) \propto P(\mathbf{T}_{\mathrm{obs}}|\mathbf{v})P(\mathbf{v}), \tag{36}\]
where \(\mathbf{v}\) is evaluated at \(N_{v}\) velocity evaluation points. The evaluation points are randomly chosen, similarly to collocation points:
\[\mathbf{v} = \left[f_{v}(\mathbf{x}_{v}^{(1)};\boldsymbol{\theta}_{v}),f_{v}(\mathbf{x}_{v}^{(2)};\boldsymbol{\theta}_{v}),\ldots,f_{v}(\mathbf{x}_{v}^{(N_{v})};\boldsymbol{\theta}_{v})\right]^{\top}, \tag{37}\]
where the associated velocity dataset is defined as \(\mathbf{X}_{v}=\{(\mathbf{x}_{v}^{(1)},\mathbf{x}_{s}^{(1)}),(\mathbf{x}_{v}^{(2 )},\mathbf{x}_{s}^{(2)}),\ldots,(\mathbf{x}_{v}^{(N_{v})},\mathbf{x}_{s}^{(N_{v })})\}\). The source locations are used for the evaluation of the residual vector in the adjoint calculation. The update vector in Equation 27 is modified in fSVGD as follows:
\[\boldsymbol{\phi}\left(\boldsymbol{\theta}_{v\,i}^{l}\right)=\left(\frac{\partial\mathbf{v}_{i}}{\partial\boldsymbol{\theta}_{v\,i}^{l}}\right)^{\top}\boldsymbol{\psi}(\mathbf{v}_{i}^{l}), \tag{38}\]
where
\[\boldsymbol{\psi}(\mathbf{v})=\frac{1}{n}\sum_{j=1}^{n}\{k(\mathbf{v}_{j}^{l}, \mathbf{v})\nabla_{\mathbf{v}_{j}^{l}}\log P(\mathbf{v}_{j}^{l}|\mathbf{d})+ \nabla_{\mathbf{v}_{j}^{l}}k(\mathbf{v}_{j}^{l},\mathbf{v})\}. \tag{39}\]
The above rule calculates the update vector in the function (velocity) space based on SVGD and then converts it to the weight space using the Jacobian matrix. As the repulsive force is defined over the function space, the approximation of the posterior PDF does not suffer from the multi-modal features present in the weight-space formulation. Furthermore, as the functional prior \(P(\mathbf{v})\) is explicitly included, prior information that is more physically meaningful than the weight-space prior \(P(\boldsymbol{\theta}_{v})\) can be introduced straightforwardly. Although velocity evaluation points can be changed in each epoch, the same points must be used for all the SVGD particles because the point locations define the PDF evaluated by SVGD. The calculation of \(\nabla_{\mathbf{v}_{j}^{l}}\log P(\mathbf{v}_{j}^{l}|\mathbf{d})\) requires the adjoint formulation, as for \(\nabla_{\boldsymbol{\theta}_{v\,j}}\log P(\boldsymbol{\theta}_{v\,j}|\mathbf{d})\) in the analysis presented in the previous section. In fact, the same formulation in Equations 29, 30 and 31 is applicable, simply replacing \(\boldsymbol{\theta}_{v}\) with \(\mathbf{v}\). These methods overcome the existing limitations of BNN for PINN-based inverse analyses in realistically complex problem
setting. As we focus on the velocity space to perform fSVGD, we call the final version of the proposed approach "velocity-space SVGD for PINN-based seismic tomography" (vSVGD-PINN-ST). Algorithm 3 summarizes the steps of vSVGD-PINN-ST.
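In practice, Equation 38 is a vector-Jacobian product, which automatic differentiation evaluates without ever forming the Jacobian; a sketch with hypothetical argument conventions follows.

```python
import torch

def fsvgd_weight_update(f_v, x_v, theta_v_list, psi_list):
    """Map function-space update vectors psi(v_i) to weight space (Equation 38)."""
    phi_list = []
    for theta_v, psi in zip(theta_v_list, psi_list):
        v = f_v(x_v, theta_v).squeeze()          # (N_v,) predicted velocities
        # (dv/dtheta_v)^T psi, evaluated as a vector-Jacobian product.
        phi = torch.autograd.grad(v, theta_v, grad_outputs=psi)[0]
        phi_list.append(phi)
    return phi_list
```

Here the kernel and repulsive terms of Equation 39 act entirely in the \(N_{v}\)-dimensional velocity space, so the weight-space dimension never enters the kernel evaluation.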
## IV Synthetic test in 1D tomographic problem
To verify the ability of the proposed method to perform Bayesian estimation, we apply vSVGD-PINN-ST to synthetic 1D tomography tests in a simple problem setting and compare the results with those obtained using the baseline SVGD-PINN-ST and linear travel time tomography, which provides an analytical solution.
### _Linear traveltime tomography_
Linear travel time tomography estimates the velocity perturbation from a reference model using a Taylor series expansion. As a result, we obtain a linear inverse problem for the residual travel time. When a conjugate pair of the prior and posterior PDF is adopted, such as Gaussian distributions, Bayesian linear regression for linearized tomography provides an analytical solution for the posterior PDF (see Text S1 in Supporting Material). However, linear tomography neglects the dependence of the ray path (i.e., travel time in 1D problems) on velocity perturbations in the reference model. Consequently, the method accuracy degrades when the reference model does not offer an accurate approximation of the true one.
### _1D Synthetic Test_
In the 1D synthetic test (1DST), we set a simple true velocity model, with a constant velocity of 1 km/s in the 1D domain defined by \(0\leq x\leq 1.2\,\mathrm{km}\), to test the UQ performance (see Fig. 2). We employ such a simple structure because the focus here is the ability of UQ, not the estimation of velocity. In this configuration, the travel time is readily obtained. Ten points, serving both as receivers and sources, are evenly distributed in two regions defined by the intervals \(0.2\leq x\leq 0.4\,\mathrm{km}\) and \(0.8\leq x\leq 1\,\mathrm{km}\) using a 0.05 km spacing. We refer to the five points in each of the above intervals as Group 1 and Group 2, respectively. We only consider rays between points within the same group. Therefore, the number of travel time data points is \(5\times 4+5\times 4=40\). No ray paths exist in the intervals \(0\leq x\leq 0.2\,\mathrm{km}\), \(0.4\leq x\leq 0.8\,\mathrm{km}\), and \(1\leq x\leq 1.2\,\mathrm{km}\), in which the uncertainty of the velocity estimation is expected to be closer to that given by the prior. \(\mathbf{E}_{\rm{obs}}\) was set assuming an i.i.d. zero-mean Gaussian noise distribution with standard deviation \(0.005\,\mathrm{s}\), although we did not actually add artificial noise to the travel times, which were calculated analytically between points. The prior probability in the velocity space is represented as a stochastic process for Bayesian estimation because \(f_{v}\) is a continuous function of the 1D coordinate \(x\) in PINN, evaluated at arbitrary collocation points within the target domain. Hence, we use a Gaussian process with the same mean \(\mu(x)=1\,\mathrm{km}/\mathrm{s}\) as the true model and a kernel function defined by:
\[k(x_{i},x_{j})=\sigma_{1}^{2}\exp\left(-\frac{1}{2\sigma_{2}^{2}}|x_{i}-x_{j} |^{2}\right). \tag{40}\]
This is an RBF kernel, where \(\sigma_{1}\) is the standard deviation of the marginal probability and \(\sigma_{2}\) is the correlation length scale. We set \(\sigma_{1}=0.1\,\mathrm{km}/\mathrm{s}\) and adopt three different \(\sigma_{2}\) values, namely, \(0.25\,\mathrm{km}\), \(0.15\,\mathrm{km}\), and \(0.075\,\mathrm{km}\), for comparison.
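As a sanity check of this prior, velocity samples can be drawn from the Gaussian process on a grid; the NumPy sketch below uses the 1DST values quoted above, while the grid itself is an illustrative choice.

```python
import numpy as np

def rbf_cov(x, sigma1, sigma2):
    # Equation 40 evaluated on a 1D coordinate array x (km).
    d2 = (x[:, None] - x[None, :]) ** 2
    return sigma1**2 * np.exp(-0.5 * d2 / sigma2**2)

x = np.linspace(0.0, 1.2, 49)              # km, illustrative grid
mu = np.full_like(x, 1.0)                  # prior mean, km/s
K = rbf_cov(x, sigma1=0.1, sigma2=0.25)    # values from the text
jitter = 1e-10 * np.eye(x.size)            # numerical stabilizer
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mu, K + jitter, size=5)   # (5, 49) draws
```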
In linearized tomography, which we consider to calculate the ground truth of the posterior probability, we set the reference model constant velocity at the true value of 1 km/s to achieve the best accuracy for the linear approximation. We divide the region using a 0.025 km spacing and estimate 48 unknowns parametrizing the velocity perturbation. For these parameters, a prior PDF is generated based on the Gaussian process defined above.
In vSVGD- and SVGD-PINN-ST, we use fully connected feed-forward neural networks for both \(f_{\tau^{-1}}\) and \(f_{v_{\mathrm{ptb}}}\), setting \(v_{0}(\mathbf{x})=1\,\)km/s. The values of \(v_{\mathrm{ptb}}^{\mathrm{min}}\) and \(v_{\mathrm{ptb}}^{\mathrm{max}}\) are set to -0.4 and 0.4 km/s, giving the upper and lower limits of the velocity prediction \(f_{v}\) as 1.4 and 0.6 km/s, respectively. The Swish activation function [33] is applied in each layer, except in the output one, where a linear activation is specified. We use three hidden layers for both \(f_{\tau^{-1}}\) and \(f_{v_{\mathrm{ptb}}}\), with 50 and 10 hidden units, respectively. \(\mathbf{\theta}_{v}\) is initialized using He's method [34]. \(\mathbf{\theta}_{T}\) is trained using the initialized \(\mathbf{\theta}_{v}\) according to Equations 34 and 35, before running the algorithms. For both algorithms, 256 SVGD particles are employed and the Adam optimizer [31] is used to determine \(\epsilon_{l}\). The travel time batch data size for \(\mathbf{X}_{T}^{b}\) (see line 2 in Algorithm 3) is set to 40 (i.e., full batch). For practical convenience, the coordinate data of the velocity evaluation points \(\mathbf{X}_{v}\) are generated in each iteration by random sampling in the target domain. The associated batch data \(\mathbf{X}_{v}^{b}\) are taken as the corresponding \(\mathbf{X}_{v}\) values. The data size is set to 200; the number of epochs for each iteration \(l\) of vSVGD-PINN-ST is set to 2,000. Each training session of \(\mathbf{\theta}_{T}\) (see line 6 in Algorithm 3) is conducted using the L-BFGS algorithm [32] for 10 epochs with \(N_{c}=200\). The collocation point coordinates are generated randomly at each iteration, similarly to the \(\mathbf{X}_{v}\) selection process. The initial learning rate of the Adam optimizer is set to \(10^{-2}\). For the baseline naive SVGD-PINN-ST, introducing a prior probability in the weight space equivalent to that in the velocity space in Equation 40 represents a challenging task. Hence, we use an i.i.d. zero-mean Gaussian distribution as the prior probability of \(\mathbf{\theta}=(\mathbf{\theta}_{T}\,\mathbf{\theta}_{v})^{\top}\) with variance \(\sigma_{\theta}^{2}\). Considering that a physically meaningful choice of \(\sigma_{\theta}\) values is difficult, we test the standard value \(\sigma_{\theta}^{2}=10^{0}\) (see, for instance, [19]) and a significantly larger one, \(\sigma_{\theta}^{2}=10^{2}\). The number of epochs for each iteration \(l\) and the initial learning rate of the Adam optimizer are set to 90,000 and \(10^{-3}\), respectively.
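Returning to the network definition above, a possible realization of the bounded velocity network \(f_{v}\) is sketched below; the layer sizes and the Swish (SiLU) activation follow the text, whereas the tanh squashing is our stand-in for the normalization operations of Appendix A, not the authors' exact scheme.

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Sketch of f_v = v0 + f_vptb with a bounded perturbation (1DST values)."""
    def __init__(self, v0=1.0, v_ptb_min=-0.4, v_ptb_max=0.4,
                 hidden=10, layers=3):
        super().__init__()
        mods, d_in = [], 1
        for _ in range(layers):
            mods += [nn.Linear(d_in, hidden), nn.SiLU()]   # Swish activation
            d_in = hidden
        mods.append(nn.Linear(d_in, 1))                    # linear output layer
        self.net = nn.Sequential(*mods)
        self.v0, self.lo, self.hi = v0, v_ptb_min, v_ptb_max

    def forward(self, x):
        # Squash the raw output into [v_ptb_min, v_ptb_max], then add v0.
        u = torch.tanh(self.net(x))
        ptb = self.lo + 0.5 * (u + 1.0) * (self.hi - self.lo)
        return self.v0 + ptb
```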
When the prior probability has a long correlation length (\(\sigma_{2}=0.25\,\mathrm{km}\)), the uncertainty estimation by vSVGD-PINN-ST agrees well with the analytical solution of the linearized tomography (see Fig. 2 (a) and (b)). In the region including the sources and receivers, the standard deviation (\(1\sigma\)) of the posterior probability is small for both methods. This value increases where there is a lack of observations. However, it is always significantly smaller than the standard deviation of the prior probability due to the imposed spatial correlation constraint. With an intermediate and a short correlation length
(\(\sigma_{2}=0.15\,\mathrm{km}\) and \(0.075\,\mathrm{km}\)), the uncertainty estimation accuracy of vSVGD-PINN-ST slightly decreases, as shown by the increasing difference from the analytical solution (see Fig. 2 (c)(e) and (d)(f) for linearized tomography and vSVGD-PINN-ST, respectively). In particular, for \(\sigma_{2}=0.15\,\mathrm{km}\) the uncertainty estimated by vSVGD-PINN-ST is close to the analytical solution at both boundaries of the region. However, simulated results deviate from analytical values in the central region. For \(\sigma_{2}=0.075\,\mathrm{km}\), although the estimated standard deviation in the central region is accurate, the curve is smoother than that of the analytical solution over the whole domain. This feature may also be related to a slightly overestimated uncertainty at the boundaries. The prior probability with a short correlation length assumes the existence of shorter-wavelength components in the target velocity structure. The NN architecture adopted in this analysis fails to learn some of these components, possibly due to some learning bias, such as the spectral bias [35] that affects the ordinary PINN formulation based on fully connected feed-forward NNs. A learning bias may also have slightly compromised the uncertainty estimation accuracy of the experiments with the two lowest \(\sigma_{2}\) values. However, we consider the discrepancy in the estimated uncertainty acceptable for practical use at present. Further advances, in fact, would require improved NN architectures. Alternatively, a different configuration of the loss function may reduce the effect of the learning bias. This aspect is discussed in Section VI-A as a future development.
We examine the relationship between UQ results and two other parameters, namely, the number of particles and epochs, focusing mainly on the experiment with \(\sigma_{2}=0.25\,\mathrm{km}\). Surprisingly, even when the number of SVGD particles (\(n\)) is small (e.g., 32), the estimated posterior mean and standard deviation still capture the basic features of the analytical solution (see Fig. 3(a)). This result demonstrates the advantage of SVGD over Bayesian sequential sampling methods in approximation efficiency. However, we observe an asymmetry in the shape of the curve of the standard deviation when \(n\) is small, showing an imperfect convergence near Group 2. The asymmetry disappears when \(n\) is increased. Increasing the number of epochs (or iterations) makes the UQ results converge from the side of larger standard deviations (see Fig. 3 (b)). This means that the SVGD update started with sufficiently dispersed particles. Interestingly, the standard deviation curves of the first epochs show asymmetric patterns similar to the one seen in the results where a smaller \(n\) is used. Another interesting finding is that the same analysis applied to the experiment with \(\sigma_{2}=0.15\,\mathrm{km}\) shows faster convergence than the previous case (see Fig. S1 in Supporting Material). These results suggest that SVGD requires more iterations to converge for prior probability distributions with longer correlation length scales.
SVGD-PINN-ST completely failed to reproduce the expected spatial variation features of the posterior probability for both of the assumed prior PDFs in the weight space (the one with \(\sigma_{\theta}^{2}=10^{0}\) and the other with \(\sigma_{\theta}^{2}=10^{2}\)), showing no correspondence between these features and the locations of sources and receivers (see Fig. 4 and Fig. S2 in Supporting Material). These findings suggest two possible reasons explaining the poor performance of SVGD-PINN-ST in the estimation of the posterior probability. The first one is that the parameter space for the Bayesian estimation using the naive SVGD approach is too broad and multi-modal; the second one is that the prior probability in the weight space has unfavorable effects and is not physically interpretable. The mean values estimated by SVGD-PINN-ST are close to the true one. However, they show a larger discrepancy than those calculated by vSVGD-PINN-ST.
In summary, vSVGD-PINN-ST can estimate the uncertainty in tomographic velocity estimation problems more accurately than the baseline SVGD-PINN-ST method. We confirmed that improvements in vSVGD-PINN-ST, which avoid direct Bayesian estimation in high-dimensional and multi-modal weight spaces, are essential to conduct an efficient Bayesian estimation in PINN-based inversion analyses. We also found that, even when the spatial correlation length of the velocity is short, vSVGD-PINN-ST returns competitive uncertainty estimates.
## V Synthetic tests in 2D tomographic problems
In this section, we test the applicability of the vSVGD-PINN-ST method to realistic 2D synthetic problems in two scenarios: surface-wave and refraction tomography.
### _2D Synthetic Test 1: surface-wave tomography_
The 2D synthetic test 1 (2DST1) is a synthetic test in 2D surface-wave tomography, in which a low velocity anomaly is surrounded by sources and receivers aligned in a circular arrangement. The purpose of this test is to show that the proposed vSVGD-PINN-ST algorithm gives a reasonable estimation of the velocity and its uncertainty, consistent with characteristic ray paths drawn using this circular configuration.
We use a true velocity model with a homogeneous background set at 2 km/s, containing a circular low velocity anomaly with 1.2 km/s at the center. Sixteen receivers are evenly distributed around the anomaly at a radius of 4 km (see Fig. 5 (a)). Each receiver also serves as a source point; thus, the number of travel time data points is \(16\times 15=240\). This configuration is inspired by the one in [3, 4]. However, we use connected background and anomaly regions to comply with the Gaussian prior distribution introduced later, which supports an NN-predicted continuous velocity function. We calculate the travel time for each source-receiver pair using the fast sweeping method [36] implemented with the ttcrpy Python package [37]. We add independent zero-mean Gaussian noise with standard deviation 0.01 s to the calculated travel times and use the resulting values as the observation data. \(\mathbf{E}_{\mathrm{obs}}\) is set accordingly, assuming that the error distribution of the travel time observations is known. The ray paths drawn using the numerical solution are absent at the center of the anomaly and outside of the sources and receivers (see Fig. 5 (b)). An accurate Bayesian estimation should infer a larger standard deviation for the estimated velocity in these regions compared to that in the areas where the ray paths go through. We adopted a Gaussian process as the prior probability, defined by the kernel function in Equation 40 with the 2D L2 norm as
argument, mean \(\mu(\mathbf{x})=1.75\,\mathrm{km/s}\), standard deviation \(\sigma_{1}=0.5\,\mathrm{km/s}\), and correlation length \(\sigma_{2}=1.5\,\mathrm{km}\).
In vSVGD-PINN-ST, we use fully connected feed-forward neural networks for both \(f_{\tau^{-1}}\) and \(f_{v_{\mathrm{ptb}}}\), with \(v_{0}(\mathbf{x})=\mu(\mathbf{x})\). \(v_{\mathrm{ptb}}^{\mathrm{min}}\) and \(v_{\mathrm{ptb}}^{\mathrm{max}}\) are set to -1.5 and 1.5 km/s, giving the upper and lower limits of the velocity predicted by \(f_{v}\) as 3.25 km/s and 0.25 km/s, respectively. For both networks, we apply the Swish activation function [33] in each layer except in the output one, where a linear activation is specified. Six hidden layers are used for both \(f_{\tau^{-1}}\) and \(f_{v_{\mathrm{ptb}}}\), with 50 and 20 hidden units, respectively. The number of SVGD particles employed for this experiment is 512. The Adam optimizer [31] with an initial learning rate of \(10^{-2}\) is used to determine \(\epsilon_{l}\). Batch sizes for \(\mathbf{X}_{T}^{b}\) and \(\mathbf{X}_{v}^{b}\) (see line 2 in Algorithm 3) are 240 (i.e., full batch) and 400, respectively. \(\mathbf{\theta}_{v}\) is initialized using a prior Gaussian process, obtained by using vSVGD-PINN-ST with zero weight on the travel time observation data. \(\mathbf{\theta}_{T}\) is trained using the initialized \(\mathbf{\theta}_{v}\) according to Equations 34 and 35 before the vSVGD-PINN-ST algorithm is applied. The number of epochs for iteration \(l\) is set to 800. Each training step of \(\mathbf{\theta}_{T}\) (see line 6 in Algorithm 3) is conducted by using the L-BFGS algorithm [32] for 50 epochs with \(N_{c}=1,600\), taking the previous SVGD iteration result as the initial guess. To prevent the solution from being trapped into local minima due to an ineffective initial guess, the training of \(\mathbf{\theta}_{T}\) is conducted from scratch for selected epochs (i.e., epochs 50, 100, 200, and 400). The coordinates of the velocity evaluation and collocation points are randomly generated within the target domain, and \(\mathbf{X}_{v}^{b}\) and \(\mathbf{X}_{c}^{b}\) are set to the corresponding \(\mathbf{X}_{v}\) and \(\mathbf{X}_{c}\) values, respectively.
The mean velocity model estimated by vSVGD-PINN-ST agrees well with the true value in the region where ray paths are present (see Fig. 6 (a) and (b)). In the regions outside the sources and receivers circumference, the mean is close to the true value probably because of the assumption on spatial correlation. The posterior PDF standard deviation values in these regions smoothly approach that of the prior Gaussian process (see Fig. 6 (c)). The estimated values in the central region reflect the absence of ray paths and are, as expected, larger than those in the surrounding areas where the ray paths go through. However, overall values are smaller than those of the prior uncertainty. This is primarily due to the assumption on the spatial correlation of the prior probability. In addition, the absence of ray paths in such a configuration does not allow too large a velocity in this region, providing an additional indirect constraint. The mean values in the central area are similar to the true ones, suggesting that the true model used is highly consistent with the prior probability. To further analyze the results, we compared the marginal probability distributions of three points (black marks in Fig. 5 (b)). Histograms in Fig. 7 show the difference in the uncertainty of the estimated velocity in the regions with and without ray paths (marked by a circle and a square or an inverse triangle, respectively). These results suggest that tomographic results obtained by vSVGD-PINN-ST are consistent with the true values, the prior probability, and the ray path distribution for the true velocity model.
To prove that vSVGD-PINN-ST improves UQ accuracy in a realistic problem setting, we conducted the same analysis using the naive method, SVGD-PINN-ST. Similarly to the SVGD-PINN-ST analysis in 1DST, we use i.i.d. zero-mean Gaussian prior PDFs for the weight parameters (see Text S2 in Supporting Material for other details), taking \(\sigma_{\theta}^{2}=10^{0}\). Results show that the velocity models estimated by SVGD-PINN-ST are not consistent with the true model and ray path distributions (see Fig. S3 in Supporting Material), reconfirming that SVGD-PINN-ST cannot estimate the posterior probability in seismic tomographic problems effectively.
### _2D Synthetic Test 2: seismic refraction tomography_
In this section, we present the results of the 2D synthetic test 2 (2DST2), in which we consider a refraction tomographic problem where sources and receivers are located on the Earth's surface. We use refracted rays penetrating underground to estimate 2D velocity profiles. Compared to 2DST1, this tomographic problem is ill-posed, given the distribution of sources and receivers [38]. Furthermore, there exist only a few studies on Bayesian seismic tomography for this type of setting [5]. However, in the following, we show that our vSVGD-PINN-ST algorithm is suitable for this type of tomographic problem as well.
We set a relatively simple true velocity model with an almost depth-dependent structure in a domain ranging from 0 to 30 and from 0 to 200 km in depth and horizontal distance, respectively. A velocity bump is included in the region between 70 and 130 km in the horizontal axis. We consider 20 sources and 96 receivers distributed with a uniform spacing between 5 and 195 km in the horizontal distance on the surface of the model (see Fig. 8 (a)). Therefore, the total number of travel time data is \(20\times 96=1920\). As in 2DST1, we create synthetic data by calculating the travel time for each source-receiver pair using the fast sweeping method and add i.i.d. zero-mean Gaussian noise with a standard deviation of 0.05 s. \(\mathbf{E}_{\mathrm{obs}}\) is set accordingly, assuming that the error distribution of the travel time observations is known. The ray paths drawn by using the numerical solution are densely distributed in a shallow portion of the target region (see Fig. 8 (b)). For the prior probability, we set a Gaussian process with an RBF kernel, similarly to the previous tests. In refraction tomography, we assume that velocity increases with increasing depth. Therefore, we adopted a mean velocity model \(\mu(\mathbf{x})\) that is linearly depth-dependent (see Fig. S4 in Supporting Material). Under such an assumption, the correlation lengths in the horizontal and vertical directions are likely to differ significantly. To reflect this prior information, we redefine the RBF kernel as follows:
\[k(\mathbf{x}_{i},\mathbf{x}_{j})=\sigma_{1}^{2}\exp\left[-\frac{1}{2}\left( \frac{(x_{i}-x_{j})^{2}}{\sigma_{x}^{2}}+\frac{(z_{i}-z_{j})^{2}}{\sigma_{z}^ {2}}\right)\right] \tag{41}\]
where \(\mathbf{x}=(x\,z)^{\top}\), and \(\sigma_{x}\) and \(\sigma_{z}\) are the correlation lengths in the horizontal and vertical directions, respectively. We set \(\sigma_{1}=1\,\mathrm{km/s}\), \(\sigma_{x}=35\,\mathrm{km}\), and \(\sigma_{z}=5\,\mathrm{km}\).
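The direction-dependent kernel of Equation 41 is a small modification of the isotropic one; a sketch with the values just quoted:

```python
import numpy as np

def anisotropic_rbf_cov(X, sigma1=1.0, sigma_x=35.0, sigma_z=5.0):
    # Equation 41 on (x, z) coordinates in km; sigma1 in km/s, lengths in km.
    dx2 = (X[:, None, 0] - X[None, :, 0]) ** 2
    dz2 = (X[:, None, 1] - X[None, :, 1]) ** 2
    return sigma1**2 * np.exp(-0.5 * (dx2 / sigma_x**2 + dz2 / sigma_z**2))
```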
We use the same fully connected feed-forward neural networks as in the previous experiment, with six hidden layers for both \(f_{\tau^{-1}}\) and \(f_{v_{\mathrm{pth}}}\), including 50 and 20 hidden units, respectively. We set \(v_{0}(\mathbf{x})=\mu(\mathbf{x})\) as the mean velocity of the
prior Gaussian process introduced above. \(v_{\rm ptb}^{\rm min}\) and \(v_{\rm ptb}^{\rm max}\) are set to -3 and 3 km/s, respectively, giving the depth-dependent upper and lower limits of the velocity predicted by \(f_{v}\). The Adam optimizer with an initial learning rate of \(10^{-3}\) is used to determine \(\epsilon_{l}\). Weight parameters are initialized in the same way as in 2DST1. The batch sizes of \(\mathbf{X}_{T}^{b}\) and \(\mathbf{X}_{v}^{b}\) (see line 2 in Algorithm 3) are 480 and 400, respectively. The number of epochs for each iteration \(l\) is set to 200. Each training session of \(\boldsymbol{\theta}_{T}\) (see line 6 in Algorithm 3) is performed by the L-BFGS algorithm [32] for 50 epochs with \(N_{c}=1,600\). The initial guess was given in the same way as in 2DST1, except for epochs 12, 25, 50, and 100, which were trained from scratch. The coordinates of the velocity evaluation and collocation points were randomly generated within the target domain, assigning to \(\mathbf{X}_{v}^{b}\) and \(\mathbf{X}_{c}^{b}\) the corresponding values in \(\mathbf{X}_{v}\) and \(\mathbf{X}_{c}\), respectively.
The mean velocity model estimated by vSVGD-PINN-ST agrees well with the true value in the region with a high density of ray paths (see Fig. 9 (a) and (b)). Fig. 9 (c) shows that the standard deviation is generally small in the same regions, with some irregular increases marked in light blue that may reflect complex ray-path patterns and heterogeneous velocity structure. In the bottom and side regions not reached by ray paths, the standard deviation smoothly increases to around 1 km/s, the standard deviation of the prior probability. The results of the experiment are consistent with the true values and the prior probability, proving that our vSVGD-PINN-ST algorithm can estimate the posterior probability successfully, even in a highly ill-posed problem setting.
To further analyze the results, we compared the estimated mean model, true model, prior mean, and marginal probability distributions along the two lines marked in Fig. 8 (b) (see Fig. 10). In both profiles, the true velocity model agrees well with the estimated mean or, at least, its values are included within a high-frequency region in the shallow portion. The mean and true velocity begin to grow apart at about 15 and 20 km of depth along the horizontal-distance lines at 20 km (dashed line) and 100 km (dash-dot line), respectively. We hypothesize that this is due to the reduction in the number of ray paths as the depth increases. At 20 km in the horizontal distance, the uncertainty is small even at depths between 15 and 20 km, out of the ray path coverage, probably due to the spatial correlation assumption in the prior probability. Below these depths, the frequency color maps show a broad distribution, with increasing uncertainty due to the lack of ray paths. We expected the estimated mean model to agree well with the prior one at these depths, since information is not obtained directly from data. However, the former does not approach the latter as the depth increases; instead, it grows apart from it. As we hypothesized in Section IV-B, we attribute this finding to some learning bias affecting our NN architectures, such as the spectral bias [35]. A possible way to address this issue is discussed as a further development in Section VI-A.
## VI Discussion
### _Advantages and future developments of PINN-based Bayesian seismic tomography_
The test problem setup for seismic refraction tomography in 2DST2 is different from the two previous ones, presenting the same observation geometry as that of real subsurface structural exploration. The successful performances of our method in this test confirm its applicability to actual observational data in geophysical exploration and subsurface structural studies in seismogenic zones. Existing seismic tomographic methods used in these research domains consider an arbitrary initial model to calculate the theoretical travel time and, subsequently, they update the velocity model using an iterative method to minimize the residual between the theoretical and observed travel time. The final velocity model is obtained when the residual becomes sufficiently small. A Monte Carlo (MC) analysis with initial model randomization is used to evaluate the uncertainty and reliability of the solution obtained using such tomographic methods [39, 40]. The idea behind the MC analysis is that the main source of uncertainty in the estimation results of traditional tomographic methods lies in the choice of the initial model. The introduction of Bayesian estimation in the proposed method mitigates the dependence of the estimation upon the initial value. The uncertainty evaluation reflects the quality and quantity of the observation data instead. In conventional tomography methods, model parameterization (quantity and arrangement selection) and regularization (e.g., smoothing parameters selection) are often determined subjectively. In this study, model parameterization is performed controlling the continuous functions represented by NNs, using the prior probabilities introduced by Bayes' theorem in the form of stochastic processes. The selection of regularization parameters is performed choosing the ones that characterize prior probabilities in Bayesian statistics, such as the correlation distance in the stochastic process adopted in our method. Although this aspect is not considered in this study, the most suitable parameters can be objectively determined within the framework of Bayesian statistics (e.g., using hierarchical or empirical Bayesian frameworks, see [53]). This aspect represents an important future development.
PINN-based Bayesian seismic tomography benefits from the general advantages of PINN methods for the solution of PDEs and deterministic inversion problems (see [41], for instance). The PINN-based approach we used does not require, in fact, a mesh or grid for numerical calculation because the continuous functions defined by NNs are differentiable and automatically generated in the domain of interest. Moreover, neural networks, automatic differentiation, and optimization algorithms can be introduced easily using existing deep-learning frameworks, such as PyTorch and TensorFlow. These features also contributed to the efficient implementation of the SVGD algorithm and the adjoint method required in vSVGD-PINN-ST. The extension of this approach to higher dimensional domains (i.e., from 1D to 2D and from 2D to 3D) is easier than in ordinary numerical simulations since the mesh-free framework reduces the dependency of the algorithm on the dimension of the target problem. In fact, our PyTorch
code for vSVGD-PINN-ST required the modification of less than 100 lines of code to switch from 1D to 2D analyses.
The traditional PINN formulation with fully connected feed-forward NNs shows poor performance when the target functions include high-frequency or multi-scale features [42]. This is due to the NN spectral bias [35], which may also be responsible for the slight degradation of the estimation accuracy when the correlation length is small in 1DST and the discrepancy between the estimated and prior mean velocity at the bottom of the target domain in 2DST2. Further improvements, such as the introduction of random Fourier feature methods [42, 43] and loss functions that account for physical causality [44], have been reported to tackle the above learning issues. We expect that a combination of these improvements will improve the UQ of our PINN-based seismic tomography method, resulting in increasingly realistic velocity structures.
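As an illustration of one such remedy, a random Fourier feature embedding [42, 43] can be prepended to the coordinate inputs; the sketch below is a generic version and not a configuration used in this study.

```python
import math
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Random Fourier feature embedding of input coordinates."""
    def __init__(self, d_in=2, n_features=64, scale=1.0):
        super().__init__()
        # Fixed random frequencies; stored as a buffer so they are not trained.
        self.register_buffer("B", scale * torch.randn(d_in, n_features))

    def forward(self, x):
        proj = 2.0 * math.pi * x @ self.B
        # Concatenated sin/cos features replace the raw (x, z) input.
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
```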
All calculations performed for vSVGD-PINN-ST were accelerated using full parallelization for each SVGD particle. For instance, performing 2DST1 took 15.9 hours, using 512 CPU cores (64-core AMD EPYC 7742 \(\times\) 8 in Earth Simulator 4, made available by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC)) assigned one to each of the 512 particles. The 2DST1 experiment mimics the setting of the synthetic test by Zhang & Curtis in [4], where the authors analyzed the computational cost required to execute the several Bayesian seismic tomography methods compared in their study. They found that analyses based on the Metropolis-Hastings Markov Chain Monte Carlo (MCMC) [45, 46] and reversible jump MCMC (rj-MCMC) method [1, 47] required 80.05 and 17.1 calculation hours, respectively, using six CPUs. Further parallelization of these sampling algorithms is difficult due to their sequential features. These considerations suggest that vSVGD-PINN-ST is competitive with some existing methods in terms of time-to-solution because of the high parallelism of the algorithm (note that Zhang & Curtis proposed highly efficient variational inference methods [4]). Furthermore, the estimation using our PINN-based method provides mesh-free continuous velocity models (such as those of individual SVGD particles shown in Fig. S5 in Supporting Material), whereas those obtained in [4] are parametrized with a relatively coarse spatial grid, which reduces computation cost. Incorporating finer grids would impose significantly larger costs on these other methods. Most of the vSVGD-PINN-ST computational time is required by the PINN-based solution of the eikonal equation in each SVGD update (line 6 of Algorithm 3). Hyperparameters affecting this calculation time, such as the number of hidden layers in the NNs, hidden units in each NN layer, epochs, and collocation points, were not optimized in this study. To improve the SVGD efficiency, for instance, reducing the number of particles and iterations, and using improved kernels [48, 49] and second-order methods [50, 51] will be considered for future developments. For large-scale problems, requiring increasing numbers of NN hidden layers and units, efficient GPU acceleration might prove more advantageous for our PINN-based method compared with methods based on ordinary numerical calculation.
To verify the accuracy of the PINN-based travel time calculation in vSVGD-PINN-ST, we compared the final travel time predicted by \(f_{T}\) with the result of the fast sweeping method, which uses the final velocity predicted by \(f_{v}\). Such comparison with reference solutions obtained from ordinary numerical simulation methods is currently the only available method to check the convergence to the true solution. Further theoretical studies are required to understand convergence properties of PINN-based solutions [41].
### _Prior probability_
Compared to the 2D surface wave tomography targeted in 2DST1, 2D seismic refraction tomography in 2DST2 is more likely to lead to a highly ill-posed inverse problem due to the distribution of sources and receivers [5, 38]. Leveraging prior information to provide proper constraints is crucial in this type of seismic tomography. In 2DST2, we introduced a mean function and correlation length dependent on depth and direction, respectively, in the Gaussian process with an RBF kernel used as the prior probability. This choice is based on the solid Earth science knowledge that seismic velocity structures are nearly horizontally stratified. The values of the parameters used in the kernel of the prior Gaussian process are reviewed in Table I. In our vSVGD-PINN-ST, such prior constraints can be incorporated directly because the Bayesian inference is performed in velocity space. In previous studies on Bayesian PINN, the estimation was performed in weight space using a simple prior probability, such as an i.i.d. Gaussian distribution, for the weight parameters [19, 21]. To understand how such a simple prior probability defined in weight space behaves in the physical space, we examine the Gaussian process in the function space corresponding to this prior. We draw 1,000 random samples of the weight parameter set of the NN predicting velocity used in 2DST2, using the i.i.d. Gaussian prior distribution, and we generate 1,000 corresponding predicted velocity structures. Subsequently, we find the best fitting \(\sigma_{1}\), \(\sigma_{x}\) and \(\sigma_{z}\) values in Equation 41 for the 1,000 velocity models (see Text S3 in Supplementary Material for further details). In Table I, two examples of the estimated parameter sets for the kernel function are presented. For instance, when the standard deviation of the i.i.d. Gaussian distribution is set to \(\sigma_{\theta}=10^{-0.6}\), the marginal standard deviation (\(\sigma_{1}\)) has a value similar to that of the prior distribution in 2DST2. However, the estimated horizontal correlation length (\(\sigma_{x}\)) is larger than the horizontal domain size, which is physically not appropriate. In contrast, when \(\sigma_{\theta}=10^{-0.4}\), although the correlation length appears to be within an acceptable range, the marginal standard deviation (\(\sigma_{1}\)) is so large that the constraint from the depth-dependent mean function becomes too weak. These two examples demonstrate that introducing a physically interpretable prior probability is not an easy task when Bayesian PINN is performed in the weight space and a simple function such as an i.i.d. Gaussian is adopted. A learning algorithm has been proposed in [52] to obtain the prior probability in the weight space from a target one defined in the function space. However, the high computational complexity of the learning algorithm is inevitably problematic when the problem size becomes large. Introducing SVGD in the function space for Bayesian PINN is an effective solution
not only because it overcomes the multi-modality issues in the weight space, but also because it results in a physically interpretable Bayesian inference.
When comparing different methods, we should always consider that a different parametrization leads to a different prior probability, resulting in a consequent discrepancy in the posterior probability. In [4], for instance, the results obtained with methods based on adaptive parametrization (i.e., the rj-MCMC method [1, 47]) were significantly different from those based on a fixed parametrization. Similarly, even if we had considered the exact same problem setting as the one in [4], performing a meaningful comparison would have been difficult because the seismic velocity obtained by the PINN-based tomography is also adaptively parameterized. Therefore, comparison of Bayesian seismic tomography methods in standardized problem settings with equivalent prior probabilities represents an important aspect to be considered for future developments.
Previous studies claim that the PINN-based approach can effectively solve ill-posed inverse problems without introducing prior knowledge for the target parameter estimation [14, 16]. From the Bayesian viewpoint, a PINN-based inverse analysis without prior constraints can be interpreted as a maximum a posteriori (MAP) estimation of the weight parameters, incorporating a prior PDF with a uniform distribution over a wide value range (e.g., an improper flat prior). If we focus on the space of the function predicted by an NN, such a prior PDF is not a "non-informative prior" anymore, because it imposes implicit prior information in the function space, obtained by a nonlinear transformation from the weight space (see [53] for uniform distributions defined by a nonlinear variable transformation from a different feature space). As a result, a MAP solution obtained without introducing explicit prior constraints may include non-negligible effects from implicit prior information. As proposed in this study, explicitly providing physically interpretable prior information defined in the function space helps in addressing this issue hidden in ordinary PINN-based inverse analyses.
### _Application of vSVGD-PINN-ST to other PINN-based inversion problems_
The proposed vSVGD-PINN-ST algorithm can be applied to general PINN-based inversion problems that incorporate two NNs, one predicting the solution of the governing equation and the other its parameters (e.g., full waveform inversion [16]). Some previous studies on Bayesian PINN targeted simple problems in which the solution of the governing equation, predicted by a single NN, is the only target of Bayesian estimation (e.g., [20, 23]). The proposed approach can also be used for this type of problem by introducing a simplified method that we call "function-space SVGD for PINN" (fSVGD-PINN). fSVGD-PINN can be derived from vSVGD-PINN-ST by simply incorporating fSVGD in PINN as described in Section III-B and removing the procedure used to separate one of the two NNs from the Bayesian estimation described in Section III-A.
## VII Conclusion
In this study, we developed the vSVGD-PINN-ST algorithm, which performs PINN-based Bayesian seismic tomography using SVGD, the best-known particle-based variational inference method, applied only in the velocity space and enhanced with several mathematical and numerical techniques. The vSVGD-PINN-ST performance was tested in one- and two-dimensional Bayesian seismic tomography synthetic tests. Such problems cannot be handled by naive baseline algorithms that perform SVGD in the weight space of the component NNs predicting velocity and travel time. Results show that our method not only allows for accurate UQ but can also incorporate a physically interpretable prior probability defined in the velocity (function) space, overcoming existing limitations of traditional BNN approaches based on Bayesian estimation in the weight space. To the authors' best knowledge, this is the first success in PINN-based Bayesian seismic tomography with practical estimation accuracy. The success of the last synthetic test, adopting a realistic observation geometry similar to that of a subsurface structural exploration, suggests that our method can be applied to actual observational data in geophysical exploration and subsurface structural studies. Finally, the proposed method offers a new fundamental Bayesian approach that can be applied to inverse problems sharing the same formulation, in geoscience and other fields, leveraging the flexibility and extensibility of PINN.
## Acknowledgments
Comments from Dr. Tatsu Kuwatani were valuable for designing the 1D synthetic test. This research was supported by JSPS KAKENHI Grant Number 21K14024. Computational resources of the Earth Simulator 4 provided by JAMSTEC were used.
```
Require: A set of initial particles \(\{\mathbf{\theta}_{i}^{0}\}_{i=1}^{n}\), where \(\mathbf{\theta}=(\mathbf{\theta}_{T}\,\mathbf{\theta}_{v})^{\top}\).
Ensure: A set of particles \(\{\mathbf{\theta}_{i}\}_{i=1}^{n}\) that approximates the target distribution \(P(\mathbf{\theta}|\mathbf{d})\).
1: for iteration \(l\) do
2:   Sample mini-batches \(\mathbf{X}_{T}^{b}\) and \(\mathbf{X}_{c}^{b}\) from the training sets \(\mathbf{X}_{T}\) and \(\mathbf{X}_{c}\).
3:   For each \(i\in[n]\), calculate the SVGD update vector for \(\mathbf{X}_{T}^{b}\) and \(\mathbf{X}_{c}^{b}\) according to Equation 24.
4:   For each \(i\in[n]\), calculate \(\mathbf{\theta}_{i}^{l+1}\) according to Equation 23.
5:   Set \(l\leftarrow l+1\).
6: end for
```
**Algorithm 1** SVGD-PINN-ST, a naive approach based on SVGD in weight space.
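To make steps 3-4 concrete, the following is a minimal NumPy sketch of a single SVGD update with an RBF kernel and the median bandwidth heuristic. Here `grad_logp` stands in for the mini-batched gradient of \(\log P(\mathbf{\theta}|\mathbf{d})\) whose exact form is given by Equation 24; the function and variable names are ours and do not reflect a particular implementation.

```python
import numpy as np

def svgd_step(theta, grad_logp, stepsize=1e-2):
    """One SVGD update for n particles theta of shape (n, d).

    grad_logp: (n, d) gradients of log P(theta | d) at each particle.
    Sketch only: RBF kernel with the median-distance bandwidth heuristic.
    """
    n = theta.shape[0]
    diffs = theta[:, None, :] - theta[None, :, :]          # (n, n, d)
    sq_dists = np.sum(diffs ** 2, axis=-1)                 # (n, n)
    h = np.median(sq_dists) / np.log(n + 1) + 1e-8         # bandwidth
    K = np.exp(-sq_dists / h)                              # kernel matrix
    # Driving term pulls particles toward high posterior density; the
    # kernel-gradient (repulsive) term keeps the particle set spread out.
    repulsion = 2.0 / h * (K.sum(axis=1, keepdims=True) * theta - K @ theta)
    phi = (K @ grad_logp + repulsion) / n
    return theta + stepsize * phi
```

Algorithms 2 and 3 below reuse this type of update, restricted to (functions of) the velocity-NN weights.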
## Appendix A
Normalization strategy of NN input and output
In this section, we outline the methods used to normalize the input and output values of the model NNs to improve their convergence performance, using an approach similar to the one in [16]. In order to obtain normalized coordinates in
**Algorithm 2** An updated approach based on SVGD only in the weight space of the velocity NN.
```
Require: Sets of initial particles \(\{\mathbf{\theta}_{v,i}^{0}\}_{i=1}^{n}\) and \(\{\mathbf{\theta}_{T,i}^{0}\}_{i=1}^{n}\), the latter initially trained for \(\{\mathbf{\theta}_{v,i}^{0}\}_{i=1}^{n}\).
Ensure: A set of particles \(\{\mathbf{\theta}_{v,i}\}_{i=1}^{n}\) that approximates the target distribution \(P(\mathbf{\theta}_{v}|\mathbf{d})\).
1: for iteration \(l\) do
2:   Sample mini-batches \(\mathbf{X}_{T}^{b}\) and \(\mathbf{X}_{c}^{b}\) from the training sets \(\mathbf{X}_{T}\) and \(\mathbf{X}_{c}\).
3:   For each \(i\in[n]\), calculate the SVGD update vector for \(\mathbf{X}_{T}^{b}\) according to Equation 28.
4:   For each \(i\in[n]\), calculate \(\mathbf{\theta}_{v,i}^{l+1}\) according to Equation 27.
5:   For each \(i\in[n]\), update \(\mathbf{\theta}_{T,i}^{l+1}\) for \(\mathbf{X}_{c}^{b}\) according to Equations 34 and 35.
6:   Set \(l\leftarrow l+1\).
7: end for
```
**Algorithm 3** vSVGD-PINN-ST, the final version of the algorithm, based on velocity-space SVGD.
```
Require: Sets of initial particles \(\{\mathbf{\theta}_{v,i}^{0}\}_{i=1}^{n}\) and \(\{\mathbf{\theta}_{T,i}^{0}\}_{i=1}^{n}\), the latter initially trained for \(\{\mathbf{\theta}_{v,i}^{0}\}_{i=1}^{n}\).
Ensure: A set of particles \(\{\mathbf{\theta}_{v,i}\}_{i=1}^{n}\) such that \(f(\mathbf{x};\mathbf{\theta}_{v,i})\) approximates the target distribution \(P(v(\mathbf{x})|\mathbf{d})\).
1: for iteration \(l\) do
2:   Sample mini-batches \(\mathbf{X}_{T}^{b}\), \(\mathbf{X}_{c}^{b}\) and \(\mathbf{X}_{v}^{b}\) from the training sets \(\mathbf{X}_{T}\), \(\mathbf{X}_{c}\) and \(\mathbf{X}_{v}\).
3:   For each \(i\in[n]\), calculate the SVGD update vector for \(\mathbf{X}_{T}^{b}\) and \(\mathbf{X}_{v}^{b}\) in the velocity space according to Equation 39.
4:   For each \(i\in[n]\), calculate the SVGD update vector in the weight space according to Equation 38.
5:   For each \(i\in[n]\), calculate \(\mathbf{\theta}_{v,i}^{l+1}\) according to Equation 27.
6:   For each \(i\in[n]\), update \(\mathbf{\theta}_{T,i}^{l+1}\) for \(\mathbf{X}_{c}^{b}\) according to Equations 34 and 35.
7: end for
```
## Appendix B
Figure 1: Neural network formulation adopted in this study and a schematic view of PINN-based (deterministic) seismic tomography.
Figure 2: The results of 1DST. (a)(b) Results obtained using linearized tomography, which we consider as the ground truth, and vSVGD-PINN-ST, respectively, with \(\sigma_{2}=0.25\,\mathrm{km}\). (c)(d) The same with \(\sigma_{2}=0.15\,\mathrm{km}\). (e)(f) The same with \(\sigma_{2}=0.075\,\mathrm{km}\).
where \(a\) is the output of either \(f_{v_{\mathrm{ptb}}}\) or \(f_{\tau^{-1}}\), and \(a^{\mathrm{max}}\) and \(a^{\mathrm{min}}\) represent the maximum and minimum range values defined in the following. For \(f_{v_{\mathrm{ptb}}}\), \(a^{\mathrm{max}}=v_{\mathrm{ptb}}^{\mathrm{max}}\) and \(a^{\mathrm{min}}=v_{\mathrm{ptb}}^{\mathrm{min}}\), which are given a priori. For \(f_{\tau^{-1}}\), \(a^{\mathrm{max}}=v^{\mathrm{max}}\) and \(a^{\mathrm{min}}=v^{\mathrm{min}}\), where \(v^{\mathrm{max}}\) and \(v^{\mathrm{min}}\) are given by \(v^{\mathrm{max}}=\max(v_{0}(\mathbf{x}))+v_{\mathrm{ptb}}^{\mathrm{max}}\) and \(v^{\mathrm{min}}=\min(v_{0}(\mathbf{x}))+v_{\mathrm{ptb}}^{\mathrm{min}}\), respectively. This operation constrains the direct output values of the networks to the interval \([-1,1]\), ensuring that the final upper and lower output limits are determined by \(a^{\mathrm{max}}\) and \(a^{\mathrm{min}}\).
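As an illustration, this rescaling amounts to a one-line affine map. The sketch below assumes the squashing into \([-1,1]\) is done with \(\tanh\), which this excerpt does not specify, and uses our own function names.

```python
import torch

def scale_output(raw, a_min, a_max):
    """Rescale a raw NN output to the admissible range [a_min, a_max].

    Sketch: the direct output is squashed into [-1, 1] (tanh assumed here)
    and then mapped linearly so that the output limits are exactly a_min
    and a_max.
    """
    squashed = torch.tanh(raw)                         # in [-1, 1]
    return 0.5 * ((a_max - a_min) * squashed + a_max + a_min)
```

For \(f_{v_{\mathrm{ptb}}}\), for example, one would call `scale_output(raw, v_ptb_min, v_ptb_max)` with the a priori perturbation bounds.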
|
2306.14064 | Modeling Graphs Beyond Hyperbolic: Graph Neural Networks in Symmetric
Positive Definite Matrices | Recent research has shown that alignment between the structure of graph data
and the geometry of an embedding space is crucial for learning high-quality
representations of the data. The uniform geometry of Euclidean and hyperbolic
spaces allows for representing graphs with uniform geometric and topological
features, such as grids and hierarchies, with minimal distortion. However,
real-world graph data is characterized by multiple types of geometric and
topological features, necessitating more sophisticated geometric embedding
spaces. In this work, we utilize the Riemannian symmetric space of symmetric
positive definite matrices (SPD) to construct graph neural networks that can
robustly handle complex graphs. To do this, we develop an innovative library
that leverages the SPD gyrocalculus tools \cite{lopez2021gyroSPD} to implement
the building blocks of five popular graph neural networks in SPD. Experimental
results demonstrate that our graph neural networks in SPD substantially
outperform their counterparts in Euclidean and hyperbolic spaces, as well as
the Cartesian product thereof, on complex graphs for node and graph
classification tasks. We release the library and datasets at
\url{https://github.com/andyweizhao/SPD4GNNs}. | Wei Zhao, Federico Lopez, J. Maxwell Riestenberg, Michael Strube, Diaaeldin Taha, Steve Trettel | 2023-06-24T21:50:53Z | http://arxiv.org/abs/2306.14064v1 | # Modeling Graphs Beyond Hyperbolic:
###### Abstract
Recent research has shown that alignment between the structure of graph data and the geometry of an embedding space is crucial for learning high-quality representations of the data. The uniform geometry of Euclidean and hyperbolic spaces allows for representing graphs with uniform geometric and topological features, such as grids and hierarchies, with minimal distortion. However, real-world graph data is characterized by multiple types of geometric and topological features, necessitating more sophisticated geometric embedding spaces. In this work, we utilize the Riemannian symmetric space of symmetric positive definite matrices (SPD) to construct graph neural networks that can robustly handle complex graphs. To do this, we develop an innovative library that leverages the SPD gyrocalculus tools [28] to implement the building blocks of five popular graph neural networks in SPD. Experimental results demonstrate that our graph neural networks in SPD substantially outperform their counterparts in Euclidean and hyperbolic spaces, as well as the Cartesian product thereof, on complex graphs for node and graph classification tasks. We release the library and datasets at [https://github.com/andyweizhao/SPD4GNNs](https://github.com/andyweizhao/SPD4GNNs).
Keywords: Graph Neural Networks · Riemannian Geometry · Symmetric Space of Symmetric Positive Definite Matrices
## 1 Introduction
Complex structures are a common feature in real-world graph data, where the graphs often contain a large number of connected subgraphs of varying topologies (including grids, trees, and combinations thereof). While accommodating the diversity of such graphs is necessary for robust representation learning, neither Euclidean nor hyperbolic geometry alone has been sufficient [25].
This insufficiency has geometric roots. Properties of the embedding space strongly control which graph topologies embed with low distortion, with simple geometries accommodating only narrow classes of graphs. Euclidean geometry provides the foundational example: its abundant families of equidistant lines allow for efficient representation of grid-like structures, but its polynomial volume growth is too slow to accommodate tree-like data. This is somewhat ameliorated by moving to higher dimensions (with faster polynomial volume growth), though with a serious trade-off in efficiency [4]. An alternative is to move to hyperbolic geometry, where volume growth is exponential, providing plenty of room for branches to spread out for the isometric embedding of trees [7, 37]. However, hyperbolic geometry has a complementary trade-off: it does not contain equidistant lines, which makes it unfit for embedding the grid-like structures Euclidean space excels at [8].
Many proposed graph neural networks utilize representations of graph data to perform machine learning tasks in various graph domains, such as social networks, biology and molecules [24, 39, 11, 13, 34, 33, 14, 2]. A subset of these networks takes seriously the constraints that geometry imposes on representation capability, and works to match various non-Euclidean geometries to common structures seen in graph data. For instance, Chami et al. [11] and Defferrard et al. [13] show that graph neural networks constructed in hyperbolic and spherical spaces are successful at embedding graphs with hierarchical and cyclical structures, respectively. However, the relative geometric simplicity of these spaces poses serious limitations, including (a) the need to know the graph structure prior to choosing the embedding space, and (b) the inability to perform effectively on graphs built of geometrically distinct sub-structures, a common feature of real-world data.
Avoiding these limitations may necessitate resorting to more complex geometric spaces. For example, Gu et al. [18] employed Cartesian products of various geometric spaces to represent graphs with mixed geometric structures. But any such choice must be carefully considered: isometries play an essential role in the construction of the above architectures, and any increase in complexity accompanied by too great a decrease in symmetry may render a space computationally intractable.

Figure 1: Propagation schema for utilizing SPD geometry while performing calculations in the tangent (Euclidean) space: starting from an SPD embedding, map a node and its neighbors to the tangent space via the logarithm, and perform a modified Euclidean aggregation (Table 1) before returning to SPD via the Riemannian exponential map.
Riemannian symmetric spaces, which have a rich geometry encompassing all the aforementioned spaces, strike an effective balance between geometric generality and ample symmetry. Lopez et al. [27] proposed particular symmetric spaces, namely Siegel spaces, for graph embedding tasks, and demonstrated that many different classes of graphs embed in these spaces with low distortion. Lopez et al. [28] suggested utilizing the symmetric space SPD of symmetric positive definite matrices, which is less computationally expensive than Siegel spaces. Furthermore, they developed gyrocalculus tools that enable "vector space operations" on SPD.
Here we extend the idea of Lopez et al. [28] to construct graph neural networks in SPD, particularly by utilizing their gyrocalculus tools to implement the building blocks of graph neural networks in SPD. The building blocks include (a) feature transformation via isometry maps, (b) propagation via graph convolution in the tangent space of SPD (the space of symmetric matrices \(S_{n}\)), (c) bias addition via gyrocalculus, (d) non-linearity acting on eigenspace, and (e) three classification layers. We develop _SPD4GNNs_, an innovative library that showcases training five popular graph neural networks in SPD\({}_{n}\), alongside the functionality for training them in Euclidean and hyperbolic spaces.
We perform experiments to compare four ambient geometries (Euclidean, hyperbolic, products thereof, and SPD) across popular graph neural networks, evaluated on the node and graph classification tasks on nine datasets with varying complexities. Results show that constructing graph neural networks in SPD space leads to big improvements in accuracy over Euclidean and hyperbolic spaces on complex graphs, at the cost of doubling (resp. quadrupling) the training time of graph neural networks compared to hyperbolic space (resp. Euclidean space).
Finally, we provide a summary of the numerical issues we encountered and our solutions for addressing them (see Appendix 0.F).
## 2 Related Work
**Graph Neural Networks.** Graph neural networks (GNNs) have been profiled as the de facto solutions for learning graph embeddings [24, 39, 40, 33, 14, 2]. These networks can be differentiated along two dimensions: (a) how they propagate information over graph nodes and (b) which geometric space they use to embed the nodes. In **Euclidean space**, a class of GNNs has been proposed that represents graph nodes in a flat space and propagates information via graph convolution in various forms, such as using Chebyshev polynomial filters [12, 24], high-order filters [40], importance sampling [20], attention mechanisms [39], graph-isomorphism designs [41, 34, 2], and differential equations [16, 33, 14]. In contrast, **non-Euclidean spaces** have a richer structure for representing geometric graph structures in curved spaces. Recently, a line of GNNs has been developed in these spaces that performs graph convolution on different Riemannian manifolds in order to accommodate various graph structures, such as hyperbolic space for tree-like graphs [11, 26, 42], spherical space for spherical graphs [13], and Cartesian products thereof [19].
**SPD Space.** Representing data with SPD matrices has been researched for many years, with the representations primarily taking the form of covariance matrices [15, 22, 44, 17, 9]. These matrices capture the statistical dependencies between Euclidean features. Recent research has focused on designing the building blocks of neural networks in the space of covariance matrices. This includes feature transformations that map Euclidean features to covariance matrices via geodesic Gaussian kernels [15, 6], nonlinearities on the eigenvalues of covariance matrices [22], convolution through SPD filters [44] and the Fréchet mean [9], Riemannian recurrent networks [10], and Riemannian batch normalization [5].
Nguyen et al. [31] recently approached hand gesture classification by embedding graphs into SPD via a neural network. The architectures we consider here are different. While we alternate between SPD and its tangent space using the exponential and logarithm maps, Nguyen et al. [31] do so via an aggregation operation and the log map. Further, we couple this alternation with our building blocks to operate graph neural networks in SPD.
## 3 Background
### The Space SPD
We let \(\mathrm{SPD}_{n}\) denote the space of positive definite real symmetric \(n\times n\) matrices. This space has the structure of a Riemannian manifold of non-positive curvature of \(n(n+1)/2\) dimensions. The tangent space to any point of \(\mathrm{SPD}_{n}\) can be identified with the vector space \(S_{n}\) of all real symmetric \(n\times n\) matrices. \(\mathrm{SPD}_{n}\) is more flexible than Euclidean or hyperbolic geometries, or products thereof. In particular, it contains \(n\)-dimensional Euclidean subspaces, \((n-1)\)-dimensional hyperbolic subspaces, products of \(\lfloor\frac{n}{2}\rfloor\) hyperbolic planes, and many other interesting spaces as totally geodesic submanifolds; see the reference [21] for an in-depth introduction to these well-known facts. While it is not yet fully understood how our proposed models leverage the Euclidean and hyperbolic subspaces in \(\mathrm{SPD}_{n}\), we hypothesize that the presence of these subspaces is an important factor in the superior performance of \(\mathrm{SPD}_{n}\) graph neural networks. Refer to Figure 2 for a demonstration of how this hypothesis may manifest.
**Riemannian Exponential and Logarithmic Maps.** For \(\mathrm{SPD}_{n}\), the Riemannian exponential map at the basepoint \(I_{n}\) agrees with the standard matrix exponential \(\exp\colon S_{n}\to\mathrm{SPD}_{n}\). This map is a diffeomorphism with inverse the matrix logarithm \(\log\colon\mathrm{SPD}_{n}\to S_{n}\). These maps allow us to pass from \(\mathrm{SPD}_{n}\) to \(S_{n}\) and back again. Given any two points \(X,Y\in\mathrm{SPD}_{n}\), there exists an isometry (i.e., a distance-preserving transformation) that maps \(X\) to \(Y\). As such, the choice of a basepoint for the exponential and logarithm maps is arbitrary, since any other point can be mapped to the basepoint by an isometry. In particular, there is no loss of generality in fixing the basepoint \(I_{n}\) as we do.
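Numerically, both maps can be realized through an eigendecomposition, since their arguments are symmetric; a minimal PyTorch sketch (our naming) is:

```python
import torch

def spd_exp(U):
    """Riemannian exponential at the basepoint I_n: S_n -> SPD_n.
    For symmetric U this coincides with the matrix exponential."""
    w, V = torch.linalg.eigh(U)                    # U = V diag(w) V^T
    return V @ torch.diag_embed(torch.exp(w)) @ V.transpose(-1, -2)

def spd_log(P):
    """Riemannian logarithm at I_n: SPD_n -> S_n (inverse of spd_exp)."""
    w, V = torch.linalg.eigh(P)                    # w > 0 for SPD P
    return V @ torch.diag_embed(torch.log(w.clamp_min(1e-12))) @ V.transpose(-1, -2)
```

The small clamp guards against eigenvalues that underflow to zero in floating point; it is a numerical safeguard of ours, not part of the mathematical definition.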
**Non-linear Activation Functions in \(\mathrm{SPD}_{n}\).** We use two non-linear functions on SPD matrices, namely (i) ReEig [22]: factorizing a point \(P\in\mathrm{SPD}_{n}\) and then employing the ReLU-like non-linear activation function \(\varphi_{a}\) to rectify the positive eigenvalues that are smaller than \(0.5\)²:
Footnote 2: TgReEig equals ReEig in the case of \(\varphi_{a}(x)=\max(x,1)\).
\[\varphi^{\mathrm{SPD}}(P)=U\varphi_{a}(\Sigma)U^{T},\qquad P=U\Sigma U^{T} \tag{1}\]
(ii) TgReEig: projecting \(P\in\mathrm{SPD}_{n}\) into the tangent space and then suppressing the negative eigenvalues of the projected point (in \(S_{n}\)) with the ReLU non-linear activation function \(\varphi_{b}\), i.e. \(\varphi^{\mathrm{SPD}}(P)=U\exp(\varphi_{b}(\log(\Sigma)))U^{T}\).
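Both activations amount to rectifying eigenvalues; a minimal sketch consistent with the footnote's identity (TgReEig is ReEig with threshold 1, since \(\exp(\max(\log x,0))=\max(x,1)\) for \(x>0\)):

```python
import torch

def re_eig(P, eps=0.5):
    """ReEig (Eq. 1): rectify the eigenvalues of P via phi_a(x) = max(x, eps)."""
    w, V = torch.linalg.eigh(P)
    return V @ torch.diag_embed(torch.clamp(w, min=eps)) @ V.transpose(-1, -2)

def tg_re_eig(P):
    """TgReEig: ReLU on the eigenvalues of log(P); equivalently re_eig with eps = 1."""
    return re_eig(P, eps=1.0)
```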
### Gyrocalculus on SPD
**Addition and Subtraction.** Gyrocalculus is a way of expressing natural analogues of vector space operations in Riemannian manifolds. Following Lopez et al. [28], given two points \(P,Q\in\mathrm{SPD}_{n}\), we denote gyro-addition and gyro-inversion by:
\[P\oplus Q=\sqrt{P}\,Q\sqrt{P},\qquad\ominus P=P^{-1} \tag{2}\]
For \(P,Q\in\mathrm{SPD}_{n}\), the value \(P\oplus Q\in\mathrm{SPD}_{n}\) is the result of applying the \(\mathrm{SPD}_{n}\)-translation moving the basepoint \(I_{n}\) to \(P\), evaluated on \(Q\).
**Isometry Maps.** Any invertible \(n\times n\) real matrix \(M\in\mathrm{GL}(n,\mathbb{R})\) defines an isometry of \(\mathrm{SPD}_{n}\) by
\[M\odot P=MPM^{T} \tag{3}\]
where \(P\in\mathrm{SPD}_{n}\).
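These operations reduce to a few lines of matrix code; a minimal sketch (our function names, not necessarily the API of the released library):

```python
import torch

def spd_sqrt(P):
    """Unique SPD square root of P, via eigendecomposition."""
    w, V = torch.linalg.eigh(P)
    return V @ torch.diag_embed(torch.sqrt(w.clamp_min(0))) @ V.transpose(-1, -2)

def gyro_add(P, Q):
    """Gyro-addition P (+) Q = sqrt(P) Q sqrt(P)  (Eq. 2)."""
    s = spd_sqrt(P)
    return s @ Q @ s

def gyro_inv(P):
    """Gyro-inversion (-)P = P^{-1}  (Eq. 2)."""
    return torch.linalg.inv(P)

def isometry_apply(M, P):
    """M applied as an isometry of SPD_n: M P M^T  (Eq. 3), for invertible M."""
    return M @ P @ M.transpose(-1, -2)
```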
Lopez et al. [28] proposed defining \(M\) in two forms, namely a rotation element in \(SO(n)\) and a reflection element in \(O(n)\). In this case, the choice between rotation and reflection becomes a hyperparameter for \(M\) that needs to be selected before training. In contrast, the form of \(M\) we consider is more flexible and can be adjusted automatically from the training data. To do so, we first let \(M\) denote the orthogonal basis of a learnable square matrix, and then tune the square matrix on the training data. Thus, \(M\), as an orthogonal matrix that extends rotations and reflections, is better suited to fit the complexity of graph data.

Figure 2: Graphs exhibiting both Euclidean/grid-like and hyperbolic/tree-like features (left) cannot embed well in either Euclidean or hyperbolic spaces due to the impossibility of isometrically embedding trees in Euclidean spaces and grids in hyperbolic spaces (center). However, \(\mathrm{SPD}_{n}\) (right) contains both Euclidean and hyperbolic subspaces, which allows embedding a broad class of graphs, including the example in the figure.
## 4 Graph Neural Networks
In this section, we introduce the notation and building blocks of graph neural networks using graph convolutional network (GCN) [24] as an example, and present modifications for operating these building blocks in SPD. Table 1 establishes parallels between five popular graph neural networks in Euclidean, hyperbolic and SPD spaces.
### GCN in Euclidean Space
Given a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with a vertex set \(\mathcal{V}\) and an edge set \(\mathcal{E}\), we define \(d\)-dimensional input node features \((x_{i}^{0})_{i\in\mathcal{V}}\), where the superscript \(0\) indicates the first layer. The goal of a graph neural network is to learn a mapping denoted by:
\[f:(\mathcal{V},\mathcal{E},(x_{i}^{0})_{i\in\mathcal{V}})\rightarrow\mathcal{Z}\in\mathbb{R}^{|\mathcal{V}|\times d}\]
where \(\mathcal{Z}\) is the space of node embeddings obtained from the final layer of the GCN, which we take as the input of the classification layer to perform downstream tasks.
Let \(\mathcal{N}(i)=\{j:(i,j)\in\mathcal{E}\}\cup\{i\}\) be the set of neighbors of \(i\in\mathcal{V}\) with self-loops, and \((W^{l},b^{l})\) be a matrix of weights and a vector of bias parameters at layer \(l\), and \(\varphi(\cdot)\) be a non-linear activation function. We now introduce **message passing**, which consists of the following three components for exchanging information between the node \(i\) and its neighbors at layer \(l\):
\[h_{i}^{l}=W^{l}x_{i}^{l-1}\qquad\text{Feature transform}\tag{4}\]
\[p_{i}^{l}=\sum_{j\in\mathcal{N}(i)}k_{i,j}h_{j}^{l}\qquad\text{Propagation}\tag{5}\]
\[x_{i}^{l}=\varphi(p_{i}^{l}+b^{l})\qquad\text{Bias \& Nonlinearity}\tag{6}\]
where \(k_{i,j}=c_{i}^{-\frac{1}{2}}c_{j}^{-\frac{1}{2}}\) with \(c_{i}\) as the cardinality of \(\mathcal{N}(i)\). \(k_{i,j}\) represents the relative importance of the node \(j\) to the node \(i\).
| Operations | GNNs | Euclidean Space | Hyperbolic and SPD Space |
|---|---|---|---|
| Feature Trans. | All | \(h_{i}^{l}=W^{l}x_{i}^{l-1}\) | \(Q_{i}^{l}=M^{l}\odot Z_{i}^{l-1}\) |
| Propagation | GCN | \(p_{i}^{l}=\sum_{j\in\mathcal{N}(i)}k_{i,j}h_{j}^{l}\) | \(P_{i}^{l}=\exp(\sum_{j\in\mathcal{N}(i)}k_{i,j}\log(Q_{j}^{l}))\) |
| | GAT | \(p_{i}^{l}=\alpha_{i,i}h_{i}^{l}+\sum_{j\in\mathcal{N}(i)}\alpha_{i,j}h_{j}^{l}\) | \(P_{i}^{l}=\exp(\alpha_{i,i}\log(Q_{i}^{l})+\sum_{j\in\mathcal{N}(i)}\alpha_{i,j}\log(Q_{j}^{l}))\) |
| | Cheb | \(p_{i}^{l}=h_{i}^{l}+W^{l}\sum_{j\in\mathcal{N}(i)}k_{i,j}h_{j}^{l}\) | \(P_{i}^{l}=Q_{i}^{l}\oplus(M^{l}\odot\exp(\sum_{j\in\mathcal{N}(i)}k_{i,j}\log Z_{j}^{l-1}))\) |
| Bias & Nonlin. | All | \(x_{i}^{l}=\varphi(p_{i}^{l}+b^{l})\) | \(Z_{i}^{l}=\varphi(P_{i}^{l}\oplus B^{l})\) |

Table 1: Comparison of operations in different spaces across three graph neural networks, i.e., GCN [24], GAT [39] and first-order Cheb [12]. SGC [40] and GIN [41], which apply propagation before feature transformation, are presented in Appendix B.
### GCN in SPD
**Mapping from Euclidean to SPD Space.** Oftentimes, input node features are not given in SPD, but in Euclidean space, \((x_{i}^{0})_{i\in\mathcal{V}}\in\mathbb{R}^{d}\). Therefore, we design a transformation that maps Euclidean features to a point in SPD. To do so, we first learn a linear map that transforms the \(d\)-dimensional input features into a vector of dimension \(n(n+1)/2\), which we arrange as the upper triangle of an initially zero matrix \(A\in\mathbb{R}^{n\times n}\). We then define a symmetric matrix \(U\in S_{n}\) such that \(U=A+A^{T}\). We now apply the exponential map such that \(Z=\exp(U)\), which moves the coordinates from the tangent space \(S_{n}\) to the original manifold \(\mathrm{SPD}_{n}\). Thus, the resulting node embeddings \((Z_{i}^{0})_{i\in\mathcal{V}}\) are in \(\mathrm{SPD}_{n}\). By performing this mapping only once, we enable GNNs to operate in \(\mathrm{SPD}_{n}\).
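A minimal PyTorch sketch of this input mapping (our naming and initialization; the released library may differ in details):

```python
import torch
import torch.nn as nn

class EuclideanToSPD(nn.Module):
    """Map d-dim Euclidean features to SPD_n: linear layer -> upper
    triangle of an n x n matrix A -> symmetrize (U = A + A^T) -> matrix
    exponential into SPD_n."""

    def __init__(self, d, n):
        super().__init__()
        self.n = n
        self.linear = nn.Linear(d, n * (n + 1) // 2)

    def forward(self, x):                           # x: (batch, d)
        A = x.new_zeros(x.shape[0], self.n, self.n)
        iu = torch.triu_indices(self.n, self.n)     # upper triangle incl. diagonal
        A[:, iu[0], iu[1]] = self.linear(x)
        U = A + A.transpose(-1, -2)                 # symmetric, i.e. in S_n
        return torch.matrix_exp(U)                  # exp: S_n -> SPD_n
```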
**Feature Transform.** We apply isometry maps to transform points in SPD at different layers, denoted by \(Q_{i}^{l}=M^{l}\odot Z_{i}^{l-1}\), where \(Q_{i}^{l},Z_{i}^{l-1}\in\mathrm{SPD}_{n}\) and \(M^{l}\) is an isometry map (see §3.2) at layer \(l\) of the GNN.
**Propagation.** This step aggregates information from all the neighbors \(\mathcal{N}(i)\) of a given node \(i\), with the information weighted by the importance of a neighbor to the node \(i\) (see Eq. 5). We note that propagation involves addition and scaling operators. This suggests two alternative approaches for computing propagation: (a) employing gyro-addition to aggregate information over the neighbors of each node; (b) computing the Riemannian Fréchet mean in \(\mathrm{SPD}_{n}\), which requires hundreds of iterations to find a geometric center. Both approaches are costly to compute and also involve the use of cumbersome Riemannian optimization algorithms (see Appendix 0.A for optimization). Instead, we perform aggregation via graph convolution in the space of symmetric matrices \(S_{n}\), denoted by \(P_{i}^{l}=\exp(\sum_{j\in\mathcal{N}(i)}k_{i,j}\log(Q_{j}^{l}))\), where \(P_{i}^{l}\in\mathrm{SPD}_{n}\) and \(k_{i,j}=c_{i}^{-\frac{1}{2}}c_{j}^{-\frac{1}{2}}\) (as in the Euclidean case). This is similar to the approach of Chami et al. [11], who perform propagation in the tangent space followed by projection back through the exponential map.
**Bias Addition and Non-linearity.** Finally, we add the bias \(B^{l}\) at layer \(l\) to the result of propagation through gyro-addition, followed by applying a non-linear function, denoted by \(Z_{i}^{l}=\varphi^{\mathrm{SPD}}(P_{i}^{l}\oplus B^{l})\), with \(Z_{i}^{l}\in\mathrm{SPD}_{n}\) as the new embedding for the node \(i\) at layer \(l\) and \(B^{l}\in\mathrm{SPD}_{n}\).
**Message Passing in SPD.** We establish a one-to-one correspondence between the Euclidean and SPD versions of GCN at layer \(l\) for node \(i\):
\[Q_{i}^{l}=M^{l}\odot Z_{i}^{l-1}\qquad\text{Feature transform}\tag{7}\]
\[P_{i}^{l}=\exp\Big(\sum_{j\in\mathcal{N}(i)}k_{i,j}\log(Q_{j}^{l})\Big)\qquad\text{Propagation}\tag{8}\]
\[Z_{i}^{l}=\varphi^{\mathrm{SPD}}(P_{i}^{l}\oplus B^{l})\qquad\text{Bias \& non-linearity}\tag{9}\]
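Putting the blocks together, one SPD-GCN layer can be sketched as below. The QR-orthogonalized isometry and the tangent-space bias parametrization are our concrete choices for illustration and may differ from the released SPD4GNNs implementation; the eigendecomposition helpers repeat those from Section 3 so the sketch is self-contained.

```python
import torch
import torch.nn as nn

def _eig_fn(P, fn):
    w, V = torch.linalg.eigh(P)
    return V @ torch.diag_embed(fn(w)) @ V.transpose(-1, -2)

def spd_exp(U):
    return _eig_fn(U, torch.exp)

def spd_log(P):
    return _eig_fn(P, lambda w: torch.log(w.clamp_min(1e-12)))

def spd_sqrt(P):
    return _eig_fn(P, lambda w: torch.sqrt(w.clamp_min(0)))

def tg_re_eig(P):                                   # phi^SPD: max(eigenvalues, 1)
    return _eig_fn(P, lambda w: torch.clamp(w, min=1.0))

class SPDGCNLayer(nn.Module):
    """One SPD-GCN layer implementing Eqs. (7)-(9).

    Z: (N, n, n) node embeddings in SPD_n.
    K: (N, N) normalized adjacency with K[i, j] = k_ij for j in N(i), else 0.
    """

    def __init__(self, n):
        super().__init__()
        self.A = nn.Parameter(torch.eye(n) + 0.01 * torch.randn(n, n))
        self.b = nn.Parameter(0.01 * torch.randn(n, n))  # bias in tangent space

    def forward(self, Z, K):
        M, _ = torch.linalg.qr(self.A)              # orthogonal basis of A (Sec. 3.2)
        Q = M @ Z @ M.transpose(-1, -2)             # Eq. (7): feature transform
        P = spd_exp(torch.einsum("ij,jab->iab", K, spd_log(Q)))  # Eq. (8)
        B = spd_exp(0.5 * (self.b + self.b.T))      # SPD bias
        s = spd_sqrt(P)
        return tg_re_eig(s @ B @ s)                 # Eq. (9): phi(P (+) B)
```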
**Classification.** In node classification setups³, we are given \(\{Z_{i},y_{i}\}_{i=1}^{N}\) on a dataset, with \(N\) as the number of instances, \(Z_{i}\in\text{SPD}_{n}\) as the \(i\)-th node embedding obtained from the final layer of a graph neural network, and \(y_{i}\in\{1,\ldots,K\}\) as the true class of the \(i\)-th node. Let \(h:\text{SPD}\mapsto\{1,\ldots,K\}\) be a classifier that best predicts the label \(y_{i}\) of a given input \(Z_{i}\). Indeed, the input space of \(h\) can take various forms, not limited to \(\text{SPD}_{n}\). Here we introduce three classifiers in two alternative input spaces: (a) \(\mathbb{R}^{d(d+1)/2}\) and (b) \(S_{n}\)⁴. To do so, we first take the Riemannian logarithm \(\log\colon\text{SPD}_{n}\to S_{n}\) of each \(Z_{i}\) at the identity. For (a), we vectorize the upper-triangle elements of \(X\) as \(\mathbf{x}=(X_{1,1},\cdots,X_{1,d},X_{2,2},\cdots,X_{d,d})\in\mathbb{R}^{d(d+1)/2}\), and then design two classifiers, i.e., Linear-XE (Linear Classifier coupled with Cross-Entropy loss) and NC-MM (Nearest Centroid Classifier with Multi-Margin loss). For (b), we design an SVM-like classifier, SVM-MM, acting in \(S_{n}\), an approach similar to the proposal of Nguyen et al. [31]. We present the details of these classifiers in Appendix 0.C.
Footnote 3: For graph classification, \(Z_{i}\) and \(y_{i}\) denote the ‘center’ of the graph \(i\) and its true class. We take the arithmetic mean of node embeddings in \(\text{SPD}_{n}\) to produce \(Z_{i}\in\text{SPD}_{n}\).
Footnote 4: We also design several classifiers with the input space in \(\text{SPD}_{n}\), but these do not yield better results than those in \(S_{n}\).
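To make input space (a) concrete, the shared preprocessing (log map followed by upper-triangle vectorization) can be sketched as follows; whether off-diagonal entries should additionally be rescaled (e.g. by \(\sqrt{2}\)) to preserve inner products is a design choice the text above does not fix.

```python
import torch

def spd_to_vec(Z):
    """Log-map SPD embeddings into S_n and vectorize the upper triangle,
    producing R^{n(n+1)/2} inputs for Linear-XE / NC-MM."""
    w, V = torch.linalg.eigh(Z)                         # Z: (..., n, n), SPD
    X = V @ torch.diag_embed(torch.log(w.clamp_min(1e-12))) @ V.transpose(-1, -2)
    n = X.shape[-1]
    iu = torch.triu_indices(n, n)
    return X[..., iu[0], iu[1]]                         # shape (..., n(n+1)/2)
```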
## 5 Experiments
In this section, we first perform experiments for node and graph classification, and then analyze the ability of the three geometric spaces to arrange and separate nodes of different classes. Further, we compare the training efficiency of the different spaces and the usefulness of the three classifiers. Lastly, we discuss **product space** (the Cartesian product of Euclidean and hyperbolic spaces) and compare it with SPD in Appendix 0.H.
**Baselines.** To investigate the usefulness of different geometric spaces for graph neural networks, we choose five well-known graph architectures as representatives: GCN [24], GAT [39], Cheb [12], SGC [40] and GIN [41], and evaluate these architectures in Euclidean, hyperbolic and SPD spaces. For the hyperbolic versions of these architectures, we use Poincaré models and extend the implementation of Poincaré GCN [11] to the other four architectures.
### Node Classification
**Datasets.** We evaluate graph neural networks in the three spaces on 5 popular datasets for node classification: Disease [1], Airport [43], Pubmed [30], Citeseer and Cora [36]. Overall, each dataset has a single graph that contains up to thousands of labeled nodes. We use the public train, validation and test splits of each dataset, and provide dataset statistics in Appendix 0.C. Unlike previous works [11, 42], we only use the original node features to ensure a fair and transparent comparison.
**Setup.** We compare three geometries, namely Euclidean, hyperbolic and SPD, in two low-dimensional spaces: (i) 6 dimensions: \(\mathbb{R}^{6}\), \(\mathbb{H}^{6}\) and \(\text{SPD}_{3}\), and (ii) 15 dimensions: \(\mathbb{R}^{15}\), \(\mathbb{H}^{15}\) and \(\text{SPD}_{5}\), a common choice of dimensions in previous work [11, 42]. Our reasoning for considering low-dimensional spaces is the following: if the structure of the data matches the geometry of the embedding space, a low-dimensional space can be leveraged efficiently to produce high-quality embeddings; if they do not match, a large dimension is needed to compensate for the use of an unsuitable geometric space. Here we investigate how efficiently the different geometries use the available space when given a small dimension. We report the mean accuracy and standard deviation of binary/multi-label classification results under 10 runs, and provide training details in Appendix 0.A.
**Results.** Figure 3 shows the accuracy results of the node classification task in the three 6-dimensional geometries across five datasets and five GNNs; see also Table 6. For graphs with \(\delta\)-hyperbolicity \(>1\), \(\text{SPD}_{3}\) achieves the best accuracy in all cases except the Cheb architecture on the Citeseer dataset. We also observe that the accuracy of SPD is similar to that of hyperbolic space on the Airport dataset (\(\delta=1\)). The Disease graph is a tree (\(\delta=0\)) and has optimal performance in hyperbolic space. The accuracy is much lower for these two tree-like datasets in Euclidean geometry for all GNNs except Cheb.

Figure 3: Evaluation of five graph neural networks coupled with Linear-XE on five node classification datasets in the three 6-dimensional spaces: \(\mathbb{R}^{6}\), \(\mathbb{H}^{6}\) and \(\text{SPD}_{3}\). Each radar chart shows classification accuracy (on a varying scale, as noted by the gridlines with circular shapes) for the five GNN architectures on a dataset. Each dataset has only one graph. \(\delta\)-hyperbolicity shows the degree to which the dataset graph is a hyperbolic tree; a smaller \(\delta\) indicates a more tree-like dataset.
For tree-like datasets, hyperbolic space provides not only accuracy but also efficiency. Figure 4 compares 6-dimensional hyperbolic space to \(\mathbb{R}^{15}\) and \(\text{SPD}_{5}\) (also 15-dimensional), showing that even the much lower-dimensional hyperbolic space achieves the best performance on Disease. Notably, the poor performance of Cheb across all spaces might be attributed to the low representational capacity of the first-order Chebyshev polynomial for embedding the tree structure of Disease. Results comparing the 6d and 15d geometries are reported in Table 7 (appendix).
### Graph Classification
**Datasets.** We evaluate graph neural networks in three spaces on the popular TUDataset benchmark [29]. Here we focus on datasets with node features, and choose a sample of 4 popular datasets in two domains, namely (a) Biology: ENZYMES [35] and PROTEINS [3]; (b) Molecules: COX2 [38] and AIDS [32]. Overall, each dataset instance has one labeled graph with dozens of nodes. We use the first split of train and test sets in the 10-fold cross-validation setup⁵, and select 10% of the training set uniformly at random as the development set. We provide data statistics in Appendix C.
Footnote 5: Morris et al. [29] proposed to run 10-fold cross-validation by generating ten splits of train and test sets, and to repeat this ten times to reduce the effect of model initialization. This requires 100 runs for one setup, which is impractical in our large-scale study.
**Setup.** Following Morris et al. [29], we predict the class of an unlabeled graph by classifying its center. In particular, we produce the graph center by using mean pooling to take the arithmetic mean of the node embeddings in a graph. To compare efficiency in space use, we conduct experiments in three spaces with the same dimension size of 36, namely \(\mathbb{R}^{36}\), \(\mathbb{H}^{36}\) and \(\text{SPD}_{8}\), the smallest dimension size in the grid search from Morris et al. [29]. We report the mean accuracy and standard deviation of graph classification results under 10 runs, and provide training details in Appendix 0.A.

Figure 4: Comparison of 6d and 15d spaces on Disease. \(\text{SPD}_{5}\) has 15 dimensions.
**Results.** Figure 5 shows the accuracy results of the graph classification task in the three geometries across four datasets and five GNNs; see also Table 8. Figure 6 shows the distribution of \(\delta\)-hyperbolicity over instances. We see that \(\text{SPD}_{8}\) achieves better than or similar accuracy to its counterparts of the same dimension in all cases except Cheb on COX2 and GIN on ENZYMES. On the AIDS dataset, SPD achieves much better accuracy across all GNNs, and on ENZYMES, SPD achieves much better accuracy with SGC.
We also observe that hyperbolic space does not yield much increased accuracy over Euclidean space in most cases, except on the AIDS dataset. Furthermore, SPD significantly outperforms hyperbolic space on the AIDS dataset but not on the COX2 dataset. Both of these datasets have the property that almost all instances are tree-like (\(\delta\leq 1\)) (see Figure 6), but the hyperbolicity constants are less concentrated in the AIDS dataset than in the COX2 dataset. It is possible that the flexibility of SPD explains the increased performance over hyperbolic space in this case. For example, SPD admits totally geodesic submanifolds isometric to hyperbolic spaces of varying constant curvatures. It would be interesting to find out, for example, whether these graphs of different hyperbolicity constants stay near copies of hyperbolic space of different curvatures.
Figure 7 (a) depicts the node embeddings when the ambient geometry is Euclidean. In this example, nodes from the pink, blue and green classes are well-separated, but the nodes in red and orange cannot be easily distinguished. Figure 7 (b) depicts the case when the nodes are embedded in the Poincaré ball model of hyperbolic space. In the x-z plane, all five classes are well-separated, including the red and orange classes. Figure 7 (c) depicts the case when the nodes are embedded into SPD, where the best class separation is achieved. Indeed, the Cora graph has hyperbolicity constant \(\delta=11\), so one cannot expect it to embed well into hyperbolic space.
**Training Time.** As a case study on the impact of the choice of latent space geometry on the training time of GNN models, Figure 8 compares the training efficiency of three graph neural networks on Citeseer across three 6-dimensional manifolds: \(\mathbb{R}^{6}\), \(\mathbb{H}^{6}\), and \(\text{SPD}_{3}\). Overall, we see that models with Euclidean latent spaces needed the least amount of training time, but produced the lowest accuracy.
Figure 6: Distributions of \(\delta\)-hyperbolicity on the four graph classification datasets, where each instance consists of one graph. The y-axis shows the number of graphs for a given \(\delta\)-hyperbolicity on the x-axis.
Figure 7: Visualizations of the node embeddings in three 6-dimensional geometries for SGC on the Cora dataset. In each space, the nodes are vectorized and then projected linearly to \(\mathbb{R}^{3}\) via PCA.
Moreover, hyperbolic space slows down training by up to four times relative to Euclidean space, while the effect on accuracy is only slightly positive or even negative. For instance, using hyperbolic space with the SGC architecture only brought small accuracy improvements, while applying it to GCN resulted in a drop in accuracy. On the other hand, even though SPD models require nearly double the training time of hyperbolic models, the SPD models bring big improvements in accuracy, not only on Citeseer in the current case study, but also on many datasets such as AIDS and PROTEINS (see Figure 5). We note that the longer training times of SPD models can be attributed to the eigendecompositions involved. However, the benefits of using SPD space in graph neural networks appear to outweigh this drawback.
**Classifiers.** Our three classification layers are built upon traditional classification methods. Linear-XE and SVM-MM are both linear classifiers that separate classes with hyperplanes, differing in the choice of loss function: cross-entropy versus multi-margin loss. In contrast, the NC-MM layer learns class-specific centroids and then determines the class of an unlabeled node (or graph) by examining which centroid it is closest to according to a similarity function.
Table 2 shows the usefulness of our classifiers in SPD on Citeseer and Cora. Overall, we see that the MM-based classifiers (SVM-MM and NC-MM) are often helpful, outperforming Linear-XE in SPD, and when they succeed, their improvements are substantial. This means the benefits of using SPD and these advanced classifiers are complementary, resulting in stacked performance gains. This is an important finding, as it hints at the possibility of accommodating more advanced classification methods recently developed in Euclidean space when constructing graph neural networks in SPD. Note that the results on the other datasets are similar; we present them in Appendices D and E.
Figure 8: Evaluation of three 6-dimensional spaces across graph neural networks in terms of training time and accuracy on Citeseer (\(\delta=5.0\)). Each point has a unique pattern that combines color and shape; for instance, a red triangle denotes GCN in SPD.
## 6 Conclusions
This work brings sophisticated geometric tools to graph neural networks (GNNs). Following the maxim 'complex data requires complex geometry', we leverage the flexibility of the space of symmetric positive definite (SPD) matrices to construct GNNs which do not require careful prior knowledge of graph topologies. This is a distinct advantage over familiar spaces such as Euclidean, spherical or hyperbolic geometries, where only narrow classes of graphs embed with low distortion.
To operate GNNs in SPD, we designed several building blocks and developed a library (SPD4GNNs) that enables training five popular GNNs in SPD, Euclidean and hyperbolic spaces. Our results confirm the strong connection between graph topology and embedding geometry: GNNs in SPD provide big improvements on graph datasets with multi-modal structures, with their counterparts in hyperbolic space performing better on strictly tree-like graphs.
Determining the optimal classifier for training GNNs in the complex geometry of SPD is challenging, and presents an avenue for continued improvement. This work only begins the process of designing geometrically meaningful classifiers and identifying the conditions which guarantee good performance. Additional performance gains may come through careful implementation of the computationally demanding functions in SPD. While this work contains techniques for accelerating computations in SPD, further optimization is likely possible.
Constructing tools to aid the interpretability of SPD embeddings is an important direction of future work, including quantitative measures for (a) comparing the geometry of the learned embeddings to the topology of the real-world graphs and (b) understanding how the geometric features of SPD are leveraged for graph tasks. While the results of this work suggest some of SPD's superior performance may be due to graphs of varying hyperbolicity finding geometric subspaces optimally adapted to their curvature, such measures would enable the precise quantitative analysis required for verification.
| | \(\mathbb{R}^{6}\) Lin-XE | \(\text{SPD}_{3}\) Lin-XE | \(\text{SPD}_{3}\) SVM-MM | \(\text{SPD}_{3}\) NC-MM |
|---|---|---|---|---|
| _Citeseer_ | | | | |
| GIN | 48.2 ± 6.3 | **68.0** ± 1.3 | 67.3 ± 1.2 | 67.0 ± 0.8 |
| SGC | 62.6 ± 3.4 | 69.4 ± 1.0 | **69.7** ± 0.8 | 67.9 ± 1.5 |
| Cheb | 63.2 ± 2.1 | 54.6 ± 10.4 | 61.4 ± 4.4 | **64.0** ± 2.3 |
| GAT | 55.0 ± 5.2 | 67.3 ± 1.7 | **69.2** ± 0.7 | 68.1 ± 1.1 |
| GCN | 64.7 ± 2.3 | **69.9** ± 0.8 | 69.2 ± 0.8 | 68.2 ± 1.0 |
| _Cora_ | | | | |
| GIN | 77.1 ± 1.0 | **79.9** ± 0.6 | 79.5 ± 0.6 | 78.8 ± 1.0 |
| SGC | 75.7 ± 3.6 | 81.5 ± 0.9 | **81.8** ± 0.3 | 81.1 ± 0.6 |
| Cheb | 71.9 ± 2.8 | 75.5 ± 3.9 | 77.9 ± 2.4 | **79.2** ± 1.3 |
| GAT | 67.9 ± 4.2 | 79.4 ± 0.9 | **81.2** ± 1.4 | **81.2** ± 1.1 |
| GCN | 78.1 ± 1.7 | 79.7 ± 0.9 | **80.7** ± 0.5 | 80.2 ± 1.3 |

Table 2: Comparison of different classifiers on Citeseer (top) and Cora (bottom). We bold the best accuracy in each row.
## 7 Ethical Considerations
We have not identified any immediate ethical concerns, such as bias and discrimination, misinformation dissemination, or privacy issues, originating from the contributions presented in this work. However, it is important to note that our SPD models use computationally demanding functions, such as determining eigenvalues and eigenvectors, which may incur a negative environmental impact due to increased energy consumption. Nevertheless, SPD models do not necessarily suffer more than their Euclidean and hyperbolic counterparts in terms of computational overhead, because Euclidean and hyperbolic models would require substantial computing resources at the larger dimensions needed to compensate for the challenge of embedding complex graphs into these ill-suited spaces.
## Acknowledgements
We thank Anna Wienhard and Maria Beatrice Pozetti for insightful discussions, as well as the anonymous reviewers for their thoughtful feedback that greatly improved the texts. This work has been supported by the Klaus Tschira Foundation, Heidelberg, Germany, as well as under Germany's Excellence Strategy EXC-2181/1 - 390900948 (the Heidelberg STRUCTURES Cluster of Excellence).
|
2307.16214 | Question Answering with Deep Neural Networks for Semi-Structured
Heterogeneous Genealogical Knowledge Graphs | With the rising popularity of user-generated genealogical family trees, new
genealogical information systems have been developed. State-of-the-art natural
question answering algorithms use deep neural network (DNN) architecture based
on self-attention networks. However, some of these models use sequence-based
inputs and are not suitable to work with graph-based structure, while
graph-based DNN models rely on high levels of comprehensiveness of knowledge
graphs that is nonexistent in the genealogical domain. Moreover, these
supervised DNN models require training datasets that are absent in the
genealogical domain. This study proposes an end-to-end approach for question
answering using genealogical family trees by: 1) representing genealogical data
as knowledge graphs, 2) converting them to texts, 3) combining them with
unstructured texts, and 4) training a transformer-based question answering
model. To evaluate the need for a dedicated approach, a comparison between the
fine-tuned model (Uncle-BERT) trained on the auto-generated genealogical
dataset and state-of-the-art question-answering models was performed. The
findings indicate that there are significant differences between answering
genealogical questions and open-domain questions. Moreover, the proposed
methodology reduces complexity while increasing accuracy and may have practical
implications for genealogical research and real-world projects, making
genealogical data accessible to experts as well as the general public. | Omri Suissa, Maayan Zhitomirsky-Geffet, Avshalom Elmalech | 2023-07-30T12:49:54Z | http://arxiv.org/abs/2307.16214v1 | Question Answering with Deep Neural Networks for Semi-Structured Heterogeneous Genealogical Knowledge Graphs
###### Abstract
With the rising popularity of user-generated genealogical family trees, new genealogical information systems have been developed. State-of-the-art natural question answering algorithms use deep neural network (DNN) architecture based on self-attention networks. However, some of these models use sequence-based inputs and are not suitable to work with graph-based structure, while graph-based DNN models rely on high levels of comprehensiveness of knowledge graphs that is nonexistent in the genealogical domain. Moreover, these supervised DNN models require training datasets that are absent in the genealogical domain. This study proposes an end-to-end approach for question answering using genealogical family trees by: 1) representing genealogical data as knowledge graphs, 2) converting them to texts, 3) combining them with unstructured texts, and 4) training a transformer-based question answering model. To evaluate the need for a dedicated approach, a comparison between the fine-tuned model (Uncle-BERT) trained on the auto-generated genealogical dataset and state-of-the-art question-answering models was performed. The findings indicate that there are significant differences between answering genealogical questions and open-domain questions. Moreover, the proposed methodology reduces complexity while increasing accuracy and may have practical implications for genealogical research and real-world projects, making genealogical data accessible to experts as well as the general public.
Question answering, Genealogy, Neural Networks, Knowledge graph, Natural Language Processing, Transformers, Cultural heritage
## 1 Introduction
The popularity of "personal heritage", user-generated genealogical family tree creation, has increased in recent years, driven by new digital services, such as online family tree sharing sites, family tree creation software, and even self-service DNA analysis by companies like Ancestry and My Heritage. These genealogical information systems allow users worldwide to create, upload and share their family tree in a semi-structured graph format named GEDCOM (GEnealogical Data COMmunication)1. Most genealogical information systems also provide natural search capabilities (a search engine) to find relatives and related family trees. While the user interface [4; 10; 56; 75; 103] and user interactions [45; 69] with genealogical information systems are well researched, to the best of our knowledge, there is no research on natural question-answering in the genealogical domain for genealogical information systems.
Footnote 1: [https://www.gedcom.org/](https://www.gedcom.org/)
As humans, we are accustomed to asking questions and receiving answers from others. However, the standard search engines and information retrieval (IR) systems require users to find answers from a list of documents. For example, for the question "How many children does Kate Kaufman have?", the system will retrieve a list of documents containing the words "children" and "Kate Kaufman". Unlike search engines and IR systems, natural question answering algorithms aim to provide precise answers to specified questions
[47]. Thus, if a user is searching a genealogical database for the family tree of Kate Kaufman2, a built-in question answering system will not return a list of possible matches but will provide a short and precise answer to various natural language questions. For instance, for a question such as "Where was Kate's father born?", a genealogical question answering system will return the answer "Hesse, Germany". Genealogical centers and museums seek to create a unique and personal experience for visitors using chatbots [73] and even holographic projections of private or famous people [78]. Hence, one practical implication of such a genealogical question answering system can be posing natural questions to a museum holographic character, or even a holographic restoration of a person from a family tree. Imagine walking into a genealogical center and talking to your great-grandmother, asking her questions about your family history and heritage. The underlying technology for such a conversation (inter alia) is based on the ability to answer natural questions on the GEDCOM data of genealogical family trees. The current state-of-the-art method for solving such a task is based on deep neural networks (DNN).
Footnote 2: [https://dbs.anumuseum.org.il/skn/en/c6/e22164995/Personalities/Kauffman_Kate](https://dbs.anumuseum.org.il/skn/en/c6/e22164995/Personalities/Kauffman_Kate)
DNN models for open-domain natural question answering have achieved high accuracy in multiple studies [15, 102, 116, 118, 119, 120, 127]. Training DNN models for question answering requires a golden standard dataset constructed from questions, answers, and corresponding texts from which these answers can be extracted. An extensive golden standard dataset for the natural question answering task, widely used for training such models, is the Stanford Question Answering Dataset (SQuAD) [90, 91]. However, in the field of genealogy, there are no standard training datasets of questions and answers similar to SQuAD.
Generating a genealogical training dataset for question answering DNN is challenging, since genealogical data constitutes a semi-structured heterogeneous graph. It contains a mix of a structured graph and unstructured texts with multiple nodes and edge types, where nodes may include structured data on a specific person node (e.g., person's birthplace), structured data on a specific family node (e.g., marriage date), relations between nodes, and unstructured text sequences (e.g., bio notes of a person). Such a mix of structured heterogeneous graph data and unstructured text sequences is not the type of input that state-of-the-art models, like BERT [22] and other sequence-based DNN models, are designed to work with.
Therefore, the main objective of the proposed study is to design and empirically validate an end-to-end pipeline and a novel methodology for question-answering DNN using graph-based genealogical family trees combined with unstructured texts.
The research questions addressed in this study are:
1. What is the effect of the training corpus domain (i.e., open-domain vs. genealogical data) and the consanguinity scope on the accuracy of neural network models in the genealogical question answering task?
2. How to traverse a genealogical data graph while preserving the meaning of the genealogical relationships and family roles?
3. What is the effect of the question type on the DNN models' accuracy in the genealogical question answering task?
The main contributions of the study are:
1. A new automated method for question answering dataset generation derived from family tree data, based on the knowledge graph representation of genealogical data and its automatic conversion into a free text;
2. A new graph traversal method for genealogical data;
3. A fine-tuned question answering DNN model for the genealogical domain, Uncle-BERT, based on BERT³ [22], that outperforms state-of-the-art DNN models (trained for answering open-domain questions) for various question types.
Footnote 3: [https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased)
## 2 Related work
This section covers related work in the fields relevant to this research: genealogical family trees, neural network architecture, and question answering using neural networks.
### 2.1 Genealogical family trees
Genealogical family trees have become popular in recent years. Both non-profit organizations and commercial companies allow users worldwide to upload and update their family trees online. For example, commercial enterprises like Ancestry and My Heritage collect over 100 million⁴ and 48 million⁵ family trees, respectively; FamilySearch is the largest non-profit online collection of family trees, with over a billion⁶ unique individuals worldwide. Family trees can be created from various sources, such as family trees uploaded by private users (UGC) [6], clinical reports and DNA records [20; 105], biographical registers [64], and even books [27]. Family tree records contain valuable information about individuals and their genealogical relationships, information that is useful for historical research and preservation [46], population and migration research [84], and even medical research [124; 126]. The user-generated content family tree phenomenon, also called "personal heritage", combines the study of the history of one's ancestors with local and social history [6]. Figure 1 illustrates the degrees of relationships between two people in the genealogical domain [12].
Figure 1: Relation degrees in genealogy7.
#### 2.1.1 The GEDCOM genealogical data standard
The de facto standard in the field of genealogical family trees is the GEDCOM format [36, 56]. The standard was developed by The Church of Jesus Christ of Latter-day Saints in 1984; the latest released version (5.5.1), drafted in 1999 and fully released in 2019, still dominates the market [42]. Other standards have been suggested as replacements, but none were extensively adopted by the industry. GEDCOM is an open format with a simple lineage-linked structure, in which each record relates to either an individual or a family, and relevant information, such as names, events, places, relationships, and dates, appears in a hierarchical structure [36]. There are several open online GEDCOM databases, including Genealogy Forum [35], WikiTree⁸, GedcomIndex⁹, Anu Museum¹⁰, Ancestry.com, and others.
Footnote 8: [https://www.wikitree.com/](https://www.wikitree.com/)
Footnote 9: [http://gedcomindex.com/gedcoms.html](http://gedcomindex.com/gedcoms.html)
Footnote 10: [https://dbs.anumuseum.org.il](https://dbs.anumuseum.org.il)
In GEDCOM format, every person (individual) in the family tree is represented as a node that may contain known attributes, such as first name, last name, birth date and place, death date and place, burial date and place, notes, occupation, and other information. Two individuals are not linked to one another directly. Each individual is linked to a family node as a "spouse" (i.e., a parent) or a "child" in the family. Figure 2 shows a sub-graph corresponding to a Source Person (SP) whose data is presented in the GEDCOM file in Figure 3. Each individual and family is assigned a unique ID - a number bracketed by @ symbols - and a class name (INDI - individual, FAM - family). The source person is denoted as SP (@I137@ INDI - Emily Williams in the GEDCOM file), families as F, and other persons as P. In this example, P3, P4, P5, and P6 are the grandparents of SP; P1 and P2 are SP's parents in family F1 (@F79@ in the GEDCOM file); P7 and P8 are SP's siblings; P10 (@I162@ INDI - John Williams in the GEDCOM file) is SP's spouse from family F4 (@F73@ in the GEDCOM file); P12 and P13 are SP's children; and P15, P16, and P17 are SP's grandchildren. Moreover, as seen in Figure 3, SP was a female, born on 28 MAY 1816 in New York, USA, who died on 7 FEB 1899 in Uinta, Wyoming, USA, and was buried three days later in the same place. Furthermore, SP was baptized on 1 JUN 1832 and was endowed on 30 DEC 1845 in TEMP NAUVO (possibly the Nauvoo Temple¹¹, Illinois). Her husband, P10, John Williams, was a male, born on 16 MAY 1826 in Indiana, USA, who died on 25 SEP 1912 in Uinta, Wyoming, USA, and was buried three days later in the same place. He was baptized on 9 AUG 1877, although there is a note stating that it may have been on 12 AUG 1877, and he was endowed with his wife. For practical reasons, the GEDCOM file example in Figure 3 contains only a small portion of the data presented in Figure 2.
Footnote 11: [https://churchofjesuschristtemples.org/nauvoo-temple/](https://churchofjesuschristtemples.org/nauvoo-temple/)
Figure 2: Family tree structure.
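As an illustration of the lineage-linked structure described above, the following is a minimal sketch of parsing GEDCOM lines into per-record dictionaries. It handles only the flat record/tag levels shown here; real files require a stack for deeper nesting (e.g., DATE and PLAC under BIRT) and resolution of cross-references (FAMC/FAMS links between INDI and FAM records).

```python
def parse_gedcom(lines):
    """Parse GEDCOM lines into {record_id: record} dicts (minimal sketch)."""
    records, current = {}, None
    for raw in lines:
        parts = raw.strip().split(" ", 2)
        if not parts[0].isdigit():
            continue                                  # skip malformed lines
        level = int(parts[0])
        if level == 0:
            if len(parts) == 3 and parts[1].startswith("@"):
                current = {"type": parts[2], "tags": []}  # e.g. '0 @I137@ INDI'
                records[parts[1]] = current
            else:
                current = None                        # e.g. '0 HEAD', '0 TRLR'
        elif current is not None and len(parts) >= 2:
            value = parts[2] if len(parts) > 2 else ""
            current["tags"].append((level, parts[1], value))
    return records

# e.g. records['@I137@']['tags'] would hold the (level, tag, value) entries
# for the individual @I137@ (Emily Williams) from the file in Figure 3.
```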
### 2.2 Question answering using DNN
A DNN is a computational mathematical model that consists of several "neurons" arranged in layers. Each neuron performs a computational operation and transmits the computed information (calculation result) to the neurons in the next layer. The information is passed on and transformed from layer to layer until it becomes the output in the network's last layer. The conventional learning method is backpropagation, which treats learning as an optimization problem [122]. After each training cycle, a comparison between the network prediction (output) and the actual expected result is performed, and a "loss" (i.e., the gap) is calculated to estimate the changes needed in the network's operations (the weights of the neurons' transformations). Changes in the network weights are usually performed using gradient descent methods [7].
Fig. 3: (part of the) GEDCOM family tree file.

In recent years, DNNs have become the state-of-the-art method for text analysis in the cultural heritage space [110], and natural language question-answering systems based on DNNs have become the state-of-the-art method for solving the question answering task [61]. The underlying task of question answering is Machine Reading Comprehension (MRC), which allows machines to read and comprehend a specified context passage for answering a question, similarly to language proficiency exams. Question answering, on the other hand, aims to answer a question without a specific context. These QA systems store a database containing a sizeable unstructured corpus and generate the context in real-time based on text passages relevant to the input question [138]. Due to the magnitude of comparisons needed between the query and each text passage in the corpus, and due to the number of calculations (a large number of multiplications of vectors and matrices) when a DNN model predicts the answer span for every given text passage, DNNs are not applied to the entire database of texts, but only to a limited number of passages. Hence, when a user asks a question, the system searches the database for the K passages that are relevant to the user's question. The system then uses the DNN model to predict an answer span (start and end positions) for each of the K text passages, together with a confidence level. The answer with the highest confidence level is selected as the answer to be presented to the user. Thus, a typical pipeline (shown in Figure 4) of DNN-based question answering is composed of (1) two inputs - (a) a text passage (i.e., a document) that may contain the answer, and (b) a question; and (2) two outputs - (a) the start index of the answer in the text passage, and (b) the end index of the answer in the text passage. The inputs are encoded into vectors using static embedding methods, such as Word2Vec [77] and GloVe [86], or using contextualized embeddings of words like Bidirectional Encoder Representations from Transformers (BERT) [22], Embeddings from Language Models (ELMo) [87], and other methods [88]. One of the main advantages of contextual embeddings is the ability to handle disambiguations of words and entities [81; 129]. The input vector is transferred through the network, and the final layer's output vectors give the probability of every word being the start or the end of the span (i.e., answer). The score of every span is a combination of the start and end tokens' probabilities. The most probable span is then translated back to a sequence of words using the embedding method [22] (see section 3.2 for a more detailed description). Researchers have proposed various DNN-based models to solve the task of finding (ranking) an answer span (the part of the text that contains the answer to the question) in a document [22; 97; 118; 119; 133] or a single sentence [34; 62].
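A schematic sketch of this retrieve-then-read flow is given below; `search_top_k` and `predict_span` are hypothetical stand-ins for the retriever and the DNN reader, named here only for illustration:

```python
def answer_question(question, corpus, k=10):
    """Open-domain QA: retrieve K relevant passages, read each, keep the best span."""
    passages = search_top_k(question, corpus, k)  # hypothetical retrieval step
    best_answer, best_confidence = None, float("-inf")
    for passage in passages:
        # Hypothetical DNN reader returning (start, end, confidence) for a span.
        start, end, confidence = predict_span(question, passage)
        if confidence > best_confidence:
            best_answer, best_confidence = passage[start:end], confidence
    return best_answer
```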
#### 2.2.1 Natural question answering using DNN architecture
Over the years, different deep learning layers have been developed with various abilities. Until recently, the typical architecture for natural language question answering was based on Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM) [48] and Gated Recurrent Unit (GRU) layers [17]. RNN layers allow the network to "remember" previously calculated data and thus learn answers regarding an entire sequence. These layers are used to construct different models, including the sequence-to-sequence model [112] that uses an encoder-decoder architecture [17], which fits the question-answering task. This model maps a sequence input to a sequence output, e.g., a document (sequence of words) and a question (sequence of words) to an answer (sequence of words), or classifies words (whether a word is the start or the end of the answer). RNN architectures often work with direct and reverse order sequences (bidirectional RNNs) [96]. They may also include an attention mechanism [115], which "decides" (i.e., ranks) which parts of the sequence are more important than others during the transformation of a sequence from one layer to another.
Another typical architecture is based on a Convolutional Neural Network (CNN). Unlike RNNs, CNN architectures do not have a memory state that accumulates information from the sequence data. CNN architectures use pre-trained static embeddings, where each CNN channel aggregates information from the vectorial representation. Channels of different sizes enable dealing with n-gram-like information in a sentence [57].
The question answering task can also be modeled as a graph task (e.g., traversal, subgraph extraction). The data can be represented as a knowledge graph (KGQA), where each node is an entity, and each edge is a relation between two entities. When answering the question, the algorithm finds the entities relevant to the question and traverses over the relations or uses the nodes' attributes to find the answer node or attribute [13; 24; 134]. To work with graphs, Graph Neural Network (GNN) models [94] have been developed that operate directly on the graph structure. A GNN can be used for resolving answers directly from a knowledge graph by predicting an answer node from question nodes (i.e., entities) [29; 38; 72; 80; 101; 134]. The GNN model is similar to an RNN in the sense that it uses nearby nodes and relations (instead of the previous and next tokens in an RNN) to classify (i.e., label) each node. However, these models cannot directly work with unstructured or semi-structured data, or they rely on the ability to complete and update the knowledge graph from free texts using knowledge graph completion tasks, such as relation extraction [8; 82; 128] or link prediction [32; 52].
An improved approach, considered to be the state-of-the-art in many NLP tasks, including question answering, is the Transformer architecture [115], which uses the attention mechanism with feed-forward layers (not RNNs); this kind of attention is also called a Self-Attention Network (SAN). Well-known examples of SANs are the Bidirectional Encoder Representations from Transformers (BERT) [22] and GPT-2 [89] models. Several BERT-based models were developed in recent years [125], achieving state-of-the-art performance (accuracy) in different question answering tasks. These include RoBERTa - a BERT model with hyperparameter and training data size tuning [70]; DistilBERT - a smaller, faster, and lighter version of BERT [93]; and ELECTRA - a BERT-like model with a different training approach [18]. Although standard BERT-based models receive a textual sequence as input, all the above architectures can also be mixed. For example, a Graph Convolutional Network (GCN) [114] can be utilized for text classification by modeling the text as a graph and using the filtering capabilities of a CNN [131].
There are several question-answering DNN pipelines based on knowledge graphs that support semi-structured data (a mix of a structured graph and unstructured texts) [29; 40; 134; 137]. As shown in Figure 5, a current state-of-the-art pipeline of this type,
Deciphering Entity Links from Free Text (DELFT) [134], uses the knowledge graph to extract related entities and sentences, filters possible textual sentences using BERT, and then traverses a filtered subgraph using a GNN. The pipeline starts with identifying the entities in the question. Then, related entities ("candidates") from the knowledge graph and relevant sentences ("evidence relations") from unstructured texts are extracted and filtered using BERT. A new subgraph is generated using the question entities, the filtered evidence relations, and the candidate entities. Using this subgraph, a GNN model learns to rank the most relevant node. Thus, the model obtains a "trail" from the question nodes to a possible candidate node (i.e., answer). The pipeline applies two DNN models: a BERT model to rank the evidence relations and a GNN model to traverse the graph (i.e., predict the answer node).
However, these methods, which use the unstructured texts to create or complete the knowledge graph, rely heavily on well-defined semantics and fail to handle questions with entities completely outside the knowledge graph or questions that cannot be modeled within the knowledge graph. For example, a Differentiable Neural Computer (DNC) [38] can be used to answer traversal questions ("Who is John's great-great-grandfather?"), but not content-related questions whose answer is written in the person's bio notes (e.g., "When did John's great-great-grandfather move to Florida?"). As part of the evaluation experiments in this study, the performance of the above-mentioned DELFT pipeline, adapted to the genealogical domain, was compared to that of the proposed pipeline.
In summary, the generic question answering pipelines described above cannot be applied as-is in the genealogical domain without compromising on accuracy, for the following reasons: (1) the raw data is structured as graphs, and each graph contains more information than a DNN model can handle in a single inference process (each node is equivalent to a document); (2) a user may ask about different nodes and different scopes of relations (i.e., different genealogical relation degrees); and (3) there is a high number of nodes containing a relatively small volume of structured data and a relatively large volume of unstructured textual data. In addition, the wide variety of training approaches, hyperparameter tuning options, and architectures indicates the complexity of the models and their sensitivity to a specific domain and sub-task.
The question answering approach proposed in this study simplifies the task pipeline by converting the genealogical knowledge graph into text, which is then combined with unstructured genealogical texts and processed by BERT's contextual embeddings. Converting the genealogical graph into text passages can be performed using knowledge-graph-to-text templates and methodologies [21, 26, 55, 76, 123], or knowledge-graph-to-text machine learning and DNN models [5, 33, 63, 66, 68, 78, 79, 99, 106]. Template-based knowledge-graph-to-text methods use hardcoded or extracted linguistic rules or templates to convert a subgraph into a sentence. Machine learning and DNN models can be trained to produce a text from knowledge-graph nodes. The input for a knowledge-graph-to-text model is a list of triples, each consisting of two nodes and their relation, and the output is a text passage containing natural language text with the input nodes and their relations as syntactic sentences. To this end, DNN models are often trained using commonsense knowledge graphs of facts, such as ConceptNet [107], BabelNet [83], DBpedia [3], and Freebase [85], where nodes are entities and the edges represent the semantic relationships between them. Some models use the fact that knowledge graphs are language-agnostic to generate texts in multiple languages (e.g., [79]).
### Questions and answers generation for DNN-based question answering systems
Training a DNN question answering model requires a set of text passages and corresponding pairs of questions and answers. Multiple approaches exist for the generation of questions (and answers): a knowledge-graph-to-question template-based methodology (similar to the context generation) [67, 98, 136, 140], a rule-based approach for WH questions (e.g., Where, Who, What, When, Why) [80], knowledge graph-based question generation [16, 50], and DNN-based models for generating additional types of questions [25, 49, 117, 135]. The rule-based method uses part-of-speech parsing of sentences with the Stanford Parser [59], a tree query language and tree manipulation [65], and a set of rules to simplify and transform the sentences into questions. To guarantee question quality, questions are ranked by a logistic regression model for question acceptability [44]. The DNN question generation models are trained on SQuAD [90, 91] or on facts from a knowledge graph to predict the question and its correct answer from the context (i.e., the opposite task from question answering), using a bidirectional [96] LSTM [48] encoder-decoder [17] model with attention [115].
This study adopted the format of the SQuAD dataset, which is a well-known benchmark for machine learning models on question answering tasks with a formal leaderboard13. SQuAD is a reading comprehension dataset consisting of questions created by crowd workers on a set of Wikipedia articles. The answers to the questions are segments of text from the corresponding reading passage (context), or the question might be unanswerable. SQuAD 2.0 combines 100,000 questions and answers and over 50,000 unanswerable questions written adversarially by crowd workers to look similar to answerable ones. To do well on SQuAD 2.0, natural question answering models must answer questions when possible and determine when no answer is supported by the paragraph, in which case they must abstain from answering.
Footnote 13: [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
SQuAD 2.0 is a JSON formatted dataset, presented in Figure 6, where each topic (a Wikipedia article) has a _title_ and _paragraphs_. Each paragraph contains a _context_ (text passage) and questions (_qas_). Each question contains the _question_ text, _id_, may contain _answers_ (if it is answerable), may contain _plausible answers_, or be marked as _impossible_. Each answer is constructed from a _text_ and a _start index_ (the word index) of the answer in the text passage.
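For illustration, a minimal SQuAD 2.0-style entry (with invented content but the standard field names) looks as follows:

```json
{
  "version": "v2.0",
  "data": [{
    "title": "Emily Williams",
    "paragraphs": [{
      "context": "Emily Williams was born in 1816 in New York. ...",
      "qas": [
        {
          "id": "q1",
          "question": "Where was Emily Williams born?",
          "is_impossible": false,
          "answers": [{ "text": "New York", "answer_start": 35 }]
        },
        {
          "id": "q2",
          "question": "Where did Emily Williams study?",
          "is_impossible": true,
          "answers": []
        }
      ]
    }]
  }]
}
```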
Figure 4: Typical open-domain question answering pipeline.
Figure 5: Typical knowledge graph question answering pipeline.
## 3 Methodology
While using DNNs for the open-domain question answering task has become the state-of-the-art approach, automated question answering over genealogical data is still an underexplored field of research. This paper presents a new methodology for a DNN-based question answering pipeline over semi-structured heterogeneous genealogical knowledge graphs. First, a training corpus that captures both the structured and unstructured information in genealogical graphs is generated. Then, the generated corpus is used to train a DNN-based question answering model.
### Gen-SQuAD generation and graph traversal
The first phase in the proposed methodology is to generate a training dataset by encoding graph data as text sequences with a graph traversal algorithm. This dataset should contain questions with answers and free-text passages from which the model can retrieve these answers.
Generating a training dataset from genealogical data is a three-step process. The result of the process is Gen-SQuAD, a SQuAD 2.0 format dataset tailored to the genealogical domain. As shown in Figure 7, the process includes the following steps: (1) decomposing the GEDCOM graphs into CIDOC-CRM-based knowledge sub-graphs, (2) generating text passages from the obtained knowledge sub-graphs, and (3) generating questions and answers from the text passages. Finally, the context and matching questions and answers are saved in the SQuAD 2.0 JSON format. The following sections present each step of the Gen-SQuAD generation process in detail.
Figure 6: SQuAD 2.0 JSON format example.
#### 3.1.1 Sub-graph extraction and semantic representation
While there are some DNN models that can accept large inputs [9, 58], due to computational resource limitations, many DNN models tend to accept limited-size inputs, usually ranging from 128 to 512 tokens (i.e., words) [141]. However, family trees tend to hold a lot of information, from names, places, and dates to free-text notes, life stories, and even manifests. Therefore, using the proposed methodology, it is not practical to build a model that reads an entire family tree as an input (sequence), and it is necessary to split the family tree into sub-trees (sub-graphs). Several generic graph traversal algorithms may be suitable for traversing a graph and extracting sub-graphs, such as Breadth-First Search (BFS) and Depth-First Search (DFS). BFS's scoping resembles a genealogy exploration process: it first treats relations between individuals at the same depth level (relation degree) in the family tree, moving from the selected node's level to the outer-level nodes. However, the definition of relation degrees in genealogy (i.e., consanguinity) is different from the pure graph-theoretic definition implemented in BFS [12]. For example, parents are considered first-degree relations in genealogy (based on the ontology), while they are considered second-degree relations mathematically, since there is a family node between the parent and the child (i.e., the parent and the child are not connected directly); siblings, in contrast, are considered second-degree relations in both genealogy and graph theory. Combined BFS-DFS algorithms such as Random Walks [39] do not take into account domain knowledge and sample nodes randomly. In the genealogical research field, several traversal algorithms have been suggested for user interface optimization [56]. However, these algorithms aim to improve interfaces and user experience and are not suitable for complete data extraction (graph-to-text) tasks.
This paper presents a new traversal algorithm, Gen-BFS, which is essentially the BFS algorithm adapted to the genealogical domain. The Gen-BFS algorithm is formally defined in Algorithm 1.
Fig. 7: Gen-SQuAD generation.
In Algorithm 1, each node is either a Person or a Family. Each Person node has two link (edge) types: \(\text{fam}_{\text{child}}\) (FAMC in the GEDCOM standard) and \(\text{fam}_{\text{parent}}\) (FAMS in the GEDCOM standard), and each Family node has the opposite edge types: \(\text{child}_{\text{fam}}\) and \(\text{parent}_{\text{fam}}\). Here, \(\{\text{fam}_{\text{child}}\}\) is the collection of all the families in which a person is considered a child (biological family and adoptive families), \(\{\text{fam}_{\text{parent}}\}\) is the collection of all the families in which a person is a parent (spouse) (i.e., all the person's marriages), \(\{\text{child}_{\text{fam}}\}\) is the collection of all the persons considered to be children in a family, and \(\{\text{parent}_{\text{fam}}\}\) is the collection of all the persons considered to be a parent in a family. For example, the SP in Figure 2 is linked to two nodes: the link type to F1 is \(\text{fam}_{\text{child}}\), and the link type to F4 is \(\text{fam}_{\text{parent}}\). The family F1 in Figure 2 has two types of links: the link type to SP, P7, and P8 is \(\text{child}_{\text{fam}}\), and the link type to P1 and P2 is \(\text{parent}_{\text{fam}}\).
Figure 8 illustrates the Gen-BFS traversal applied to the family tree presented in Figure 2. As shown in Figure 8, Gen-BFS is aware of the genealogical meaning of the nodes and reduces the tree traversal's logical depth. It ignores families in terms of relation degree, considers SP's spouses to be of the same degree as SP and SP's parents and children to be of first degree, and keeps siblings and grandparents as second degree. In particular, lines 1-20 in Algorithm 1 represent a BFS-style traversal over the graph. In lines 5-8, the algorithm introduces domain knowledge and adds nodes to its queue according to the node type. The code in lines 9-17 ensures that the traversal stops at the desired depth level. If the current node is a Person (line 12) and the current depth (CD) is about to get deeper than the required depth (D), then the while loop ends (line 14). Otherwise, the Persons and Families at the current depth (kn) are added to the node queue (NQ) and may (depending on the stop mechanism) be added to the depth queue (DQ). In line 21, the depth queue (DQ) holds all the Family nodes and most of the Person nodes (except for spouses of the last depth level's Person nodes) within the desired depth level. For example, traversing with D = 1 over the family tree in Figure 2 will result in a DQ that contains SP and her children and parents (F1, F4, P10, P1, P2, P12, and P13). However, according to the genealogical definition of depth levels in a family relationship, the children's spouses, P11 and P14 (but not the grandchildren, F5 and F6, which belong to D = 2), should also be retrieved. Lines 21-28 address this issue and add the missing Person nodes, thus logically reducing the depth of the graph.

Fig. 8: Gen-BFS algorithm.
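To make the traversal logic concrete, the following is a minimal Python sketch of Gen-BFS. It assumes simple `Person` and `Family` objects mirroring the GEDCOM link types above, and it encodes the genealogical degree rules directly (spouse +0, parent/child +1, sibling +2) instead of reproducing the exact queue mechanics (NQ/DQ, lines 21-28) of Algorithm 1:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass(eq=False)
class Family:
    parent_fam: list = field(default_factory=list)  # persons who are spouses/parents
    child_fam: list = field(default_factory=list)   # persons who are children

@dataclass(eq=False)
class Person:
    fam_child: list = field(default_factory=list)   # families where person is a child (FAMC)
    fam_parent: list = field(default_factory=list)  # families where person is a spouse (FAMS)

def gen_bfs(sp, max_degree):
    """Collect all Person and Family nodes within max_degree genealogical
    relation degrees of the source person sp."""
    degree, result, queue = {sp: 0}, {sp}, deque([sp])
    while queue:
        person = queue.popleft()
        d = degree[person]
        # Families where the person is a spouse: co-spouses keep the same
        # degree, children are one degree further away.
        for fam in person.fam_parent:
            result.add(fam)
            reachable = [(p, d) for p in fam.parent_fam if p is not person]
            reachable += [(c, d + 1) for c in fam.child_fam]
            _relax(reachable, max_degree, degree, result, queue)
        # Families where the person is a child: parents are one degree
        # further away, siblings two (hence "second-degree" siblings).
        for fam in person.fam_child:
            result.add(fam)
            reachable = [(p, d + 1) for p in fam.parent_fam]
            reachable += [(s, d + 2) for s in fam.child_fam if s is not person]
            _relax(reachable, max_degree, degree, result, queue)
    return result

def _relax(reachable, max_degree, degree, result, queue):
    for node, nd in reachable:
        if nd <= max_degree and nd < degree.get(node, nd + 1):
            degree[node] = nd
            result.add(node)
            queue.append(node)
```

Under these rules, traversing Figure 2 with `max_degree = 1` includes SP's spouses, parents, children, and the children's spouses, matching the D = 1 example above.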
Each family tree was split into sub-graphs using the Gen-BFS algorithm. A new sub-graph was created for each person, taking that person as the SP (source person), at different depth levels. Therefore, there is an overlap between the sub-graphs (a person can appear in several sub-graphs), and the sub-graphs cover all the individuals and relations in a given family tree. The Gen-BFS traversal algorithm is used both for dataset generation and for selecting the scope of the user's query in the inference phase (i.e., when answering the question).
Once extracted, each genealogical sub-graph was represented as a knowledge graph. This study adopted an event-based approach to data modeling presented in past literature [2, 31, 113]. As in [113], a formal representation of the GEDCOM heterogeneous graph (excluding the unstructured texts) as a knowledge graph was implemented using CIDOC-CRM, but in a more specific manner (e.g., we used concrete events and properties, such as _birth, brought into life_, as opposed to [113], which used generic vocabulary). We chose CIDOC-CRM as it is a living standard (ISO 21127:2014) for cultural heritage knowledge representation. CIDOC-CRM is designed as "a common language for domain experts" and "allows for the integration of data from multiple sources in a software and schema-agnostic fashion" [60]. It has been applied as a base model and extended in many domains related to cultural heritage. In this study, it was chosen as a basis for defining the genealogical domain ontology due to its standard and generic nature and its event-based structure, which enables _n_-ary rather than binary relationships between entities in the ontology, as required for representing genealogical and biographic data based on events in families' and persons' lives (e.g., E67 represents a birth event that connects a person, a place, and a time span). Genealogical graphs contain instances of two explicit classes: Person (E21 in CIDOC-CRM) and Family, which can be represented as a Group (E74 in CIDOC-CRM); and several implicit classes: Place (E53), Event (E5), Death (E69), Birth (E67), and others. These implicit classes are not structured as separate entities in the GEDCOM standard but need to be extracted from the GEDCOM attributes. Properties matching various GEDCOM relations can also be easily found in CIDOC-CRM; e.g., the relation of a person to their children can be represented using P152 (is parent of).
Figure 9 is an example of a representation of the GEDCOM sub-graph as a knowledge graph. As illustrated in the figure, the SP node is an instance of the class Person and has a relation (property) to a birth event (E21\(\Rightarrow\)P98\(\Rightarrow\)E67), with a relation to the place, Paris (E67\(\Rightarrow\)P7\(\Rightarrow\)E53), and a relation to the birth year with the value 1950 (E67\(\Rightarrow\)P4\(\Rightarrow\)E52). Representing GEDCOM as a knowledge graph is a critical step, as the dataset generation method is based on well-established knowledge-graph algorithms, as described next.

Fig. 9: GEDCOM individual's knowledge graph in the CIDOC-CRM-based format.
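In triple form, the Figure 9 sub-graph reduces to a handful of statements; the sketch below uses the CIDOC-CRM codes cited in the text (the property labels are indicative, not a normative serialization):

```python
# The Figure 9 sub-graph as (subject, property, object) triples.
triples = [
    ("SP",          "rdf:type",         "E21_Person"),
    ("SP",          "P98_was_born",     "birth_event"),  # E21 => P98 => E67
    ("birth_event", "rdf:type",         "E67_Birth"),
    ("birth_event", "P7_took_place_at", "Paris"),        # E67 => P7 => E53
    ("birth_event", "P4_has_time-span", "1950"),         # E67 => P4 => E52
]
```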
#### 3.1.2 Text passage generation
Next, a textual passage was generated from each sub-graph, representing the _SP's_ genealogical data based on graph-to-sequence conversion. Text passages were generated using a knowledge-graph-to-text DNN model [68] and completed (for low model confidence or missing facts) with a knowledge-graph-to-text template-based methodology [76]. It is important to note that converting the obtained genealogical knowledge sub-graphs to text is a more straightforward task than the open-domain or generic commonsense knowledge-graph-to-text task, since the sub-graphs are well structured and relatively limited in their semantics. For example, the sub-graph presented in Figure 9 can be converted into a sentence with template rules or using DNN models. A rule example would be: [First Name] [Last Name] _was born in_ [Birth Year] _in_ [Birthplace] = "John Williams was born in 1950 in Paris".
Using a knowledge-graph-to-text DNN model [68] and a knowledge-graph-to-text template methodology [76], multiple variations of sentences conveying the same facts (comprised of the same nodes and edges in the graph) were composed based on different templates and combined with sentence paraphrasing using a DNN-based model (the model of [63]). Most of the text passages were generated using the DNN model; however, the template-based method added variations that the DNN model did not capture. Table 1 presents examples of such sentences created for the sub-graph in Figure 9.
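As a minimal sketch of the template side of this step (the template strings below mirror Table 1; the field names are illustrative assumptions):

```python
import random

TEMPLATES = [
    "{first} {last} was born in {year} in {place}",          # direct reference
    "{first} was born in {year} in {place}",                 # partial reference
    "{rel} ({first} {last}) was born in {year} in {place}",  # multi-hop encapsulation
]

def verbalize_birth(fact):
    """Render one birth fact as several standalone sentences, one per template."""
    return [t.format(**fact) for t in TEMPLATES]

fact = {"first": "John", "last": "Williams", "year": 1950,
        "place": "Paris", "rel": "Alexander's son"}
sentences = verbalize_birth(fact)
random.shuffle(sentences)  # standalone sentences can be randomly ordered in a passage
```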
Another critical challenge resolved by this approach is the multi-hop question answering problem, where the model needs to combine information from several sentences to answer the question. Although there are multi-hop question answering models presented in the literature [30, 74], their accuracy is significantly lower than that of single-hop question answering. To illustrate the problem, consider a user asking about the SP's (John's) grandfather: "Where was John's grandfather born?" or "Where was Tim Cohen born?", where Tim Cohen refers to John's grandfather. To answer both questions without multi-hop reasoning for the resolution of multiple references to the same person, the graph-to-text template-based rules include patterns that encapsulate both the SP's relationship type (John's grandfather) and the relative's name (Tim Cohen), thus allowing the model to learn that Tim Cohen is John's grandfather. There are three types of references to a person that allow the DNN model to resolve single- or multi-hop questions: (1) direct referencing to a person by his/her first and last name (e.g., John Williams), (2) partial referencing to a person by his/her first or last name (e.g., John), and (3) multi-hop encapsulation, i.e., referencing a person by their relation to the SP (e.g., Alexander's son).
As a result of the above processing, multiple text passages were created for each SP's sub-graph. Since each sentence is standalone and contains one fact, sentences were randomly ordered within each text passage. Thus, even if the passage is longer than the neural model's computing capability, the model will likely encounter all types of sentences during its training process. These text passages were further encoded as vectors (i.e., embeddings) to train a DNN model that learns contextual embeddings to predict the answer (i.e., start and end positions in the text passage) for a given question.
#### 3.1.3 Generation of questions and answers
Using the generated text passages (contexts), pairs of questions and answers were created. The answers were generated first, and then the corresponding questions were built for them as follows. Knowledge graph nodes and properties (relationships), as well as named entities and other characteristic keywords extracted from free text passages were used as answers. To achieve extensive coverage, multiple approaches were used for generation of questions. First, a rule-based approach was applied for question generation from knowledge graphs [140] and a statistical question generation technique [44] was utilized for WH question generation from the unstructured texts in GEDCOM.
Most of the questions (73%) were created using these methods. To identify the types of questions typical of the genealogical domain and define rule-based templates for their automatic generation, this study examined the genealogical analysis tasks that users tend to perform on genealogical graphs [10]. These tasks include: (1) identifying the SP's ancestors (e.g., parents, grandparents) or descendants (e.g., children, grandchildren), (2) identifying the SP's extended family (second-degree relations), (3) identifying family events, such as marriages, (4) identifying influential individuals (e.g., by occupation, military rank, academic achievements, number of children), and (5) finding information about dates and places, such as the date of birth, and place of marriage [4; 10]. These analysis tasks were adopted to define characteristic templates for natural language questions that a user may ask about the SP or its relatives. Some of these questions can be answered directly from the structured knowledge graph (e.g., "When was Tim's father born?"), while others can only be answered using the unstructured texts attached to the nodes (e.g., "Did Tim's father have cancer?").
A DNN-based model for generating additional types of questions [25] was used to complement the rule-based method. The neural question generation model predicted questions from all the unstructured texts in the GEDCOM data and produced 24% of the questions in the dataset (excluding duplicate questions already created using the WH-based and rule-based approaches).
| **Template-based rule example** | **Result** | **Reference type** |
| --- | --- | --- |
| [First Name] [Last Name] was born in [Birth Year] in [Birthplace] | John Williams was born in 1950 in Paris | Direct |
| [First Name] was born in [Birth Year] in [Birthplace] | John was born in 1950 in Paris | Partial |
| [Name relative of SP] ([First Name] [Last Name]) was born in [Birth Year] in [Birthplace] | Alexander's son (John Williams) was born in 1950 in Paris | Multi-hop encapsulation |
| [First Name] was born in [Birthplace] in [Birth Year] | John was born in Paris in 1950 | Partial |
| [Relative First Name] [Relative Last Name] ([Relation to SP]) was born in [Birth Year] in [Birthplace] | Alexander Williams (John's father) was born in 1927 in Nice. | Multi-hop encapsulation |
| In [Birth Year] [First Name] was born | In 1950 John was born | Partial |
| [Birthplace] was [First Name]'s birthplace | Paris was John's birthplace | Partial |

Table 1: Genealogical-knowledge-graph-to-text context template examples.
| **Template-based rule example** | **Result** |
| --- | --- |
| How many children did [First Name] [Last Name] have? | How many children did John Williams have? |
| How many grandchildren did [Relative First Name] [Relative Last Name] ([Relation to SP]) have? | How many grandchildren did Alexander Williams (John's father) have? |
| Was [Birthplace] [First Name]'s birthplace? | Was Paris John's birthplace? |

Table 2: Knowledge-graph-to-text question template examples.
Finally, additional rules were manually compiled using templates [1, 28] to create questions missed by the previous methods, mainly quantitative and yes-no questions (as illustrated in Table 2). These questions accounted for 3% of all the questions in the datasets. All answer indexes were tested automatically to ensure that the answer text exists in the context passage. A random sample of 120 questions was tested manually by the researchers as a quality control process, and the observed accuracy was virtually 100%. However, it is still possible that the DNN generated some errors. Nevertheless, even in this case, the study's conclusions would not change, as such errors would have a similar effect (same embeddings) on all the tested models.
### Fine-tuning the BERT-based DNN model for question answering
Fine-tuning a DNN model is the process of adapting a model that was trained on generic data to a specific task and domain [22]. An initial DNN model is usually designed and trained to perform generic tasks on large domain-agnostic texts, like Wikipedia. In the case of open-domain question answering, the BERT baseline model was pre-trained on English Wikipedia and the Books Corpus [139] using Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) objectives [22]. The MLM methodology is a self-supervised dataset generation method: for each input sentence, one or more tokens (words) are masked, and the model's task is to generate the most likely substitute for each masked token. In this fill-in-the-blank task, the model uses the context words surrounding a masked token to predict what the masked word should be. The NSP methodology is also a self-supervised dataset generation method: the model gets a pair of sentences and predicts whether the second sentence follows the first one in the dataset. MLM and NSP are effective ways to train language models without annotations as a basis for various supervised NLP tasks. Combining the MLM and NSP training methods allows modeling languages with an understanding of both word-level and sentence-level relations. The pre-trained BERT-based question answering model was designed with 12 layers, 768 hidden nodes, 12 attention heads, and 110 million parameters. Using such a pre-trained model, DNN layers can be added to fit a specific task [22].
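For instance, illustrative MLM and NSP training pairs (invented examples in the spirit of the generated corpus) would look like this:

```python
# Masked Language Modeling: predict the masked token from its context.
mlm_input  = "John Williams was [MASK] in 1950 in Paris."
mlm_target = "born"

# Next Sentence Prediction: does sentence B follow sentence A?
sentence_a = "John Williams was born in 1950 in Paris."
sentence_b = "Alexander Williams (John's father) was born in 1927 in Nice."
nsp_label  = True  # B follows A in the source passage
```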
As shown in Figure 10, a new BERT-based model, Uncle-BERT, was fine-tuned for genealogical question answering as follows: (1) adding a pair of output dense layers (vectors) with the dimension of the model's hidden states (\(S\) and \(E\)), (2) computing the probability that each token is the start (\(S\)) or end (\(E\)) of the answer, and finally (3) running and tuning the baseline BERT model described above for learning \(S\) and \(E\). The probability of a token being the start or the end of the answer is the dot product between the token's numerical representation (i.e., embedding) in the last layer of BERT and the new output layer (vector \(S\) or \(E\)), followed by a softmax activation over all the tokens. Then, using the genealogical training dataset, the model is trained to solve the task in the study's domain. It should be noted that generation methods for pre-trained static node embeddings, like node2vec [39] or TransE [14], treat triples as the training instances for embeddings, which may be insufficient to model complex information transmission between nodes. Therefore, the information is encoded from graph nodes into syntactic sentences, and then the original BERT approach [22] is applied to generate comprehensive contextual embeddings from these sentences [43].
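A minimal PyTorch sketch of this span-scoring step (shapes and names are illustrative; `hidden` stands for BERT's last-layer token embeddings):

```python
import torch
import torch.nn.functional as F

def span_probabilities(hidden, S, E):
    """hidden: (seq_len, hidden_size); S, E: (hidden_size,) output vectors."""
    start_probs = F.softmax(hidden @ S, dim=0)  # P(token is the answer start)
    end_probs = F.softmax(hidden @ E, dim=0)    # P(token is the answer end)
    return start_probs, end_probs

def best_span(start_probs, end_probs, max_answer_tokens=30):
    """Score each valid span by combining its start and end probabilities."""
    best, best_score = (0, 0), 0.0
    for i in range(len(start_probs)):
        for j in range(i, min(i + max_answer_tokens, len(start_probs))):
            score = float(start_probs[i] * end_probs[j])
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score
```

The `max_answer_tokens` bound corresponds to the Max answer tokens hyperparameter in Table 4.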
Figure 11 summarizes the developed genealogical question answering pipeline. To simplify the task, the proposed architecture asks the user to first select the family tree from the corpus (future research can eliminate this step by embedding the family trees [37] and ranking them based on similarity to the question [92]). As demonstrated in the figure, the family tree corpus (comprised of GEDCOM files) is processed into question answering datasets for different scopes. The process starts when a user selects a specific person from a family tree. Then the user indicates a scope (a genealogical relation degree, as described in Figure 1) to ask about (e.g., the _SP_ itself, first-degree relative, second-degree relatives) and asks a question ("What was Alexander's father's military rank?"). The Gen-BFS algorithm incorporates the SP and the scope to generate a text passage that encapsulates the _SP_'s scope aligned with the user intent (equivalent to finding the top K text passages in the open-domain question answering pipeline). Finally, a fine-tuned DNN model, selected based on the requested relational degree (i.e., a model trained to predict answers on the requested relational degree), predicts the answer using the generated text passage and a question as inputs.
Figure 11: Genealogical question answering pipeline (the proposed architecture).
Figure 10: The DNN model fine-tuning process.
## 4 Experimental design
This section describes the experimental dataset and training conducted to validate the proposed methodology for the genealogical domain.
### Datasets
In this research, 3,140 family trees containing 1,847,224 different individuals from the corpus of the Douglas E. Goldman Jewish Genealogy Center in Anu Museum16 were used. The Douglas E. Goldman Jewish Genealogy Center contains over 5 million individuals and over 30 million family tree connections (edges) to families, places, and multimedia items. To comply with the Israeli privacy regulation17 and the European general data protection regulation18 (GDPR), only family trees for which the Douglas E. Goldman Jewish Genealogy Center in Anu Museum has been granted consent or rights to publish online were used in the dataset generation. Moreover, as far as possible, all records containing living individuals have been removed from the dataset. Furthermore, all personal information and any information that can identify a specific person in this paper's examples, including the examples in the figures, have been altered to protect the individuals' privacy.
Footnote 16: [https://dbs.anumuseum.org.il/skin/en/c6/e18493701](https://dbs.anumuseum.org.il/skin/en/c6/e18493701)
Footnote 17: [https://www.gov.il/BlobFolder/legal-info/data_security_regulation/en/PROTECTION%20OF%20PRIVACY%20REGULATIONS.pdf](https://www.gov.il/BlobFolder/legal-info/data_security_regulation/en/PROTECTION%20OF%20PRIVACY%20REGULATIONS.pdf)
From the filtered GEDCOM files belonging to the above corpus, and after removing some files with parsing or encoding errors, three datasets were generated: Gen-SQuAD\({}_{0}\) using zero relation degree (SP and its spouses) with 6,283,082 questions, Gen-SQuAD\({}_{1}\) using first-degree relations with 28,778,947 questions, and Gen-SQuAD\({}_{2}\) using second-degree relations with 75,281,088 questions. Although all generated datasets contain millions of examples, only 131,072 randomly selected questions were used from each dataset when training the Uncle-BERT models. These were enough for the models to converge. Therefore, the size of the dataset did not impact the training results.
Each dataset was split into a training set (60%), a test set (20%), and an evaluation set (20%). To better evaluate the success of the different question answering models, the 131,072 questions in each dataset were classified into twelve types. Examples of questions and their classification types are shown in Table 3. Each question may refer to the _SP's_ relationship type (e.g., Emily's grandson) or to the direct name of the relative (e.g., Grace), and targets one type of ontological entity as an answer (date, place, name, relationship type). Questions were classified into types based on the template if generated using the template-based method (e.g., templates using place attributes were classified as "place", and date attributes as "date"); based on the WH question (e.g., When questions were classified as "date", and Where as "place") if generated using the WH generation algorithm; or as general information / named entity if generated by the DNN model. Therefore, the information / named entity type may also include the other types of questions. It is important to note that these questions are semantically similar to the open-domain questions in the SQuAD [90, 91] datasets.
| **Question type / objective** | **Examples** | **Source** |
| --- | --- | --- |
| Name | What is Emily's full name? / What is Emily's last name? | Rule-based |
| Date | When was Emily born? / When did Emily get married? | Rule-based |
| Place | Where was Emily buried? / Where did Emily live? | Rule-based |
| Information / named entity | Who was Emily's first boyfriend? / Did Emily go to college? | DNN |
| First-degree relation | Who was Emily's son? / Who was Jonathan? | DNN, rule-based |
| Second-degree relation | How many sisters did Emily have? / How many brothers did Emily have? | Rule-based |
| First-degree date | When was Emily's husband born? / When was John born? | Rule-based |
| First-degree place | Where was Emily's father born? / Where was Alexander born? | Rule-based |
| First-degree information / named entity | What was Emily's father's academic degree? / What was Alexander's illness? | DNN |
| Second-degree date | When did Emily's sister die? / When did Yalma die? | Rule-based |
| Second-degree place | Where was Emily's grandson born? / Where was Grace born? | DNN, rule-based |
| Second-degree information / named entity | What was Emily's grandfather's rank in the military? / Where was Tim's first internship as a lawyer? | DNN |

Table 3: Question types.
### Uncle-BERT fine-tuning
For fine-tuning Uncle-BERT19, the generated Gen-SQuAD training datasets were used. Each context in the Gen-SQuAD\({}_{0}\), Gen-SQuAD\({}_{1}\), and Gen-SQuAD\({}_{2}\) datasets was lowercased and tokenized using WordPiece [132].
Footnote 19: A link to the code: [https://github.com/om-rivm/Uncle-BERT](https://github.com/om-rivm/Uncle-BERT)
Figure 12 presents the model's input, where the [CLS] tag, which stands for classifier token, is the beginning of the input, followed by the first part of the input - the question. The [SEP] tag, which stands for a separator, separates the first part of the input (i.e., a question) and the second part - the context. [CLS] at the end indicates the end of the input.
To evaluate the effect of the depth of the consanguinity scope on the models' accuracy, an Uncle-BERT model was trained for each of the three datasets: Uncle-BERT\({}_{0}\) using Gen-SQuAD\({}_{0}\), Uncle-BERT\({}_{1}\) using Gen-SQuAD\({}_{1}\), and Uncle-BERT\({}_{2}\) using Gen-SQuAD\({}_{2}\). All models were trained with the same hyperparameters, shown in Table 4.
Max question tokens is the maximum number of tokens to process from the question input; if the question input was longer than Max question tokens, it was trimmed. Max sequence tokens is the maximum number of tokens to process from the combined context and question inputs.
If the cumulative context and question length was longer than the Max sequence tokens hyperparameter value, the context was split into shorter sub-texts using a sliding-window technique; the Doc stride represents the sliding window's overlap size. For example, consider the following hyperparameter values: the Max sequence tokens hyperparameter is 25, the Doc stride hyperparameter is 6, and the following training example: "[CLS] When was Matt Adler's father born? [SEP] Matt's father (Noah Adler) was born in 1950 in London, England. Matt's father (Noah Adler) was a male. Matt Adler was born in 1975 in London, England. Matt's mother (Carol) was born in 1950. [CLS]". The question of the training example contains 7 tokens, and 18 tokens are left for the context. Therefore, the context will be split into three training examples: (1) "[CLS] When was Matt Adler's father born? [SEP] Matt's father (Noah Adler) was born in 1950 in London, England. Matt's father (Noah Adler) was a male [CLS]" (i.e., tokens 1 to 18), (2) "[CLS] When was Matt Adler's father born? [SEP] father (Noah Adler) was a male. Matt Adler was born in [CLS]" (i.e., tokens 13 to 30), (3) "[CLS] When was Matt Adler's father born? [SEP] a male. Matt Adler was born in 1975 in London, England. Matt's mother (Carol) was born in 1950. [CLS]" (i.e., tokens 25 to 42). The model will be trained with the same question on the three new examples; if the answer span does not exist in an example, it is considered unanswerable.
Max answer tokens is the maximum number of tokens that a generated answer can contain. Train size is the number of examples used from the dataset during the training cycle.
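A minimal sketch of this sliding-window splitting (token lists instead of WordPiece pieces, and the paper's [CLS]…[SEP]…[CLS] input convention):

```python
def split_context(question_tokens, context_tokens, max_seq_tokens=512, doc_stride=128):
    """Split a long context into overlapping windows, each paired with the question."""
    # Room left for context tokens after the question and the three special tokens.
    budget = max_seq_tokens - len(question_tokens) - 3
    windows, start = [], 0
    while True:
        window = context_tokens[start:start + budget]
        windows.append(["[CLS]"] + question_tokens + ["[SEP]"] + window + ["[CLS]"])
        if start + budget >= len(context_tokens):
            break
        start += budget - doc_stride  # consecutive windows overlap by doc_stride tokens
    return windows
```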
As is customary with the SQuAD benchmark, an F1 score was calculated to evaluate Uncle-BERT models:
\[F1=2*\frac{precision*recall}{precision+recall}\]
Precision equals the fraction of correct tokens out of the retrieved tokens (i.e., words that exist in both the predicted and the expected answer), and recall equals the fraction of correct tokens in the retrieved (predicted) answer out of the tokens in the expected answer. This metric allows measuring both exact and partial answers.

| **Hyperparameter** | **Value** |
| --- | --- |
| Max question tokens | 64 |
| Max sequence tokens | 512 |
| Max answer tokens | 30 |
| Doc stride | 128 |
| Batch size | 8 |
| Learning rate | 3e-5 |
| Train size | 131,072 |
| Epochs | 20 |

Table 4: Training hyperparameters.

Figure 12: Uncle-BERT model input example.
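Concretely, the token-level F1 defined above can be sketched as follows (a simplified version of the official SQuAD metric, omitting its answer-normalization steps):

```python
from collections import Counter

def token_f1(predicted, expected):
    """Token-level F1 between a predicted and an expected answer string."""
    pred, gold = predicted.split(), expected.split()
    overlap = sum((Counter(pred) & Counter(gold)).values())  # shared tokens
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)   # correct tokens out of retrieved tokens
    recall = overlap / len(gold)      # correct tokens out of expected tokens
    return 2 * precision * recall / (precision + recall)
```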
## 5 Results
To evaluate the accuracy of the proposed fine-tuned models, the Gen-SQuAD\({}_{2}\) dataset was used to represent a real-world use-case in which a user is investigating her genealogical roots with the genealogical scope of two relation degrees (generations)20. To compare the model's accuracy for each type of answer, an F1 score was calculated to evaluate every Uncle-BERT model (i.e., Uncle-BERT\({}_{0}\) trained on Gen-SQuAD\({}_{0}\), Uncle-BERT\({}_{1}\) trained on Gen-SQuAD\({}_{1}\), and Uncle-BERT\({}_{2}\) trained on Gen-SQuAD\({}_{2}\)). An overall accuracy evaluation of the three models was performed by calculating the F1 score for a mix of random questions of all types.
Footnote 20: Similar to the Anu Museum user interface - [https://dbs.anumuseum.org.il/skin/en/c6/e8492037/Personalities/Weizmann_Chaim](https://dbs.anumuseum.org.il/skin/en/c6/e8492037/Personalities/Weizmann_Chaim)
Figures 13 and 14 show the training loss and F1 scores of each of the three models. As expected, the more complex the context and questions, the lower the F1 score. While the model achieved an F1 score of 99.84 on narrow person contexts and questions (Gen-SQuAD\({}_{0}\)), it achieved an F1 score of only 80.28 on second-degree genealogical relations (Gen-SQuAD\({}_{2}\)).
Furthermore, as can be observed in Table 5, compared to the Uncle-BERT\({}_{2}\) model (trained with broader contexts of second-degree genealogical relations), Uncle-BERT\({}_{0}\), which was trained using information about only the _SP_ and its spouses, fails to answer questions of any kind, including questions about the _SP_ alone. We hypothesize that the model overfits to narrow contexts and therefore cannot handle the "noise" of larger contexts (Gen-SQuAD\({}_{2}\)). This emphasizes the importance of the context size in the training data. Uncle-BERT\({}_{1}\) successfully answers most of the question types and even overtakes Uncle-BERT\({}_{2}\) on place-related questions. Except for place-related questions, it seems that a broader context improves the model's accuracy (Uncle-BERT\({}_{2}\)).
Next, the best model, Uncle-BERT\({}_{2}\), was compared to several state-of-the-art open-domain question-answering DNN models. To this end, all the following models were trained using SQuAD 2.0 [90]: BERT [22], DistilBERT [93], RoBERTa [70], Electra [18], and DELFT [134]. Furthermore, to evaluate the effectiveness of the proposed genealogical question answering pipeline compared to the state-of-the-art knowledge graph-based pipeline, a genealogical adaptation of the DELFT model, Uncle-DELFT\({}_{2}\), was created. Uncle-DELFT\({}_{2}\), based on BERT combined with the GNN graph traversal, was trained on Gen-SQuAD\({}_{2}\).

Figure 13: The three Uncle-BERT models' train loss.

Figure 14: The three Uncle-BERT models' train F1 score.
As can be observed in Table 6, the baseline BERT model trained on the open-domain SQuAD 2.0 achieved an F1 score of 83 on the open-domain SQuAD 2.0 dataset [90]. However, on the genealogical domain dataset (Gen-SQuAD\({}_{2}\)), it achieved a significantly lower F1 score (60.12) compared to Uncle-BERT\({}_{2}\) (81.45). The fact that Uncle-BERT\({}_{2}\) achieves a higher F1 score is not surprising, since the model was trained on genealogical data, as opposed to the baseline BERT model trained on open-domain question data. However, when comparing Uncle-BERT\({}_{2}\) to Uncle-DELFT\({}_{2}\), it is clear that the performance improvement is due to the proposed methodology and not just due to richer or domain-specific training data. Moreover, the DELFT method is much more complex than BERT, yet it achieved a lower score even when trained on the same domain-specific data. The fact that the vast majority of entities (found in both the "user" question and the expected answer) exist only in the unstructured data makes it hard for the GNN to find the correct answer (i.e., to complete the graph). This finding emphasizes the uniqueness of the genealogical question answering task compared to open-domain question answering and the need for the end-to-end pipeline and methodology for training and using DNNs for this task, as presented in this paper. Since Uncle-BERT\({}_{2}\) achieved a higher accuracy score than the more complex Uncle-DELFT\({}_{2}\) model, we conclude that the proposed method reduces complexity while increasing accuracy.
As shown in Table 6, although some questions appear in both the Gen-SQuAD\({}_{2}\) and SQuAD 2.0 datasets, there is still a significant difference between open-domain questions and genealogical questions. Except for Uncle-DELFT\({}_{2}\) in the case of date questions, all the state-of-the-art models failed to answer natural genealogical questions compared to Uncle-BERT\({}_{2}\) (and in many cases, even compared to Uncle-BERT\({}_{1}\)). However, Uncle-DELFT\({}_{2}\) was successful on date questions. This may imply that date objective questions are harder to extract from unstructured texts and that the graph structure contributes to resolving such questions. Moreover, BERT's success on the SP's date questions (compared to Uncle-BERT\({}_{2}\)) may suggest that these questions are more generic and have more features in common among different domains than unique features in the genealogical domain. Furthermore, the current state-of-the-art knowledge graph pipeline (i.e., DELFT) achieved performance similar to simpler BERT-based models. This indicates that while it is beneficial for open-domain questions, it is not as effective in the genealogical domain. This result, combined with the additional complexity of DELFT, makes it less satisfactory in this domain (except for date questions, as mentioned above).
Interestingly, the "basic" BERT model outperforms all the newer BERT-based models (except for Uncle-BERT\({}_{2}\)). Furthermore, the fact that Uncle-BERT\({}_{1}\) achieved a higher F1 score on place type questions may indicate that place type questions may be more sensitive to "noise" or broad context. For example, place names may have different variations for the same entity (high "noise"), e.g., NY, NYC, New York, and New York City are all references to the same entity. This variety makes the model's task more difficult, thus adding broader contextual information and other types of "noise" (e.g., other entities, more people names, and dates), which may reduce the model's accuracy. Another possible reason for Uncle-BERT\({}_{2}\)'s lower accuracy on place type questions may be the fact that Uncle-BERT\({}_{2}\) was trained with both one-hop-away and two-hop-away contexts while Uncle-BERT\({}_{1}\) was trained only with one-hop-away contexts. The fact that the F1 score of the model is smaller on second-degree place objective questions (1.39) than on first-degree (4.72) and zero-degree (10.01) place objective questions may reinforce this indication. However, it is important to notice that in many cases, this factor will not affect the F1 score since the F1 score does not use the position of the answer (start and end index), but only the selected tokens compared to the answer tokens. Since most children and parents live in the same place, either the parent's place (e.g., birthplace) or the child's place can be selected by the model without affecting the F1 score. Table 7 presents some examples of answer predictions for place objective questions by Uncle-BERT\({}_{1}\) and Uncle-BERT\({}_{2}\). These results suggest that higher accuracy can be achieved by classifying the question types and using a different model for different question types and relation depths.
| **Question objective** | **BERT** | **DistilBERT** | **RoBERTa** | **Electra** | **DELFT** | **Uncle-DELFT\({}_{2}\)** | **Uncle-BERT\({}_{2}\)** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Name | 28.27 | 28.54 | 38.97 | 21.30 | 32.99 | 39.84 | **97.64** |
| Date | 60.58 | 53.30 | 44.33 | 34.92 | 39.62 | **79.35** | 55.10 |
| Place | 74.96 | 64.67 | 40.41 | 26.03 | 36.27 | 66.66 | **78.52** |
| Information / named entity | 71.20 | 66.79 | 34.96 | 31.72 | 40.91 | 70.58 | **87.40** |
| First-degree relation | 65.20 | 62.41 | 55.32 | 49.10 | 34.42 | 46.48 | **89.45** |
| Second-degree relation | 55.85 | 46.31 | 42.03 | 43.09 | 37.56 | 41.01 | **82.52** |
| First-degree date | 46.45 | 48.16 | 42.54 | 37.45 | 40.58 | **64.84** | 53.85 |
| First-degree place | 74.58 | 66.73 | 47.02 | 21.50 | 36.64 | 75.78 | **81.83** |
| First-degree information / named entity | 60.57 | 64.54 | 35.95 | 35.30 | 37.59 | 68.78 | **87.28** |
| Second-degree date | 39.49 | 39.75 | 23.30 | 26.99 | 38.29 | **60.15** | 44.87 |
| Second-degree place | 69.49 | 66.28 | 41.34 | 22.47 | 36.60 | 66.40 | **79.12** |
| Second-degree information / named entity | 60.19 | 62.38 | 34.37 | 37.15 | 35.70 | 47.26 | **81.04** |
| **Overall** | 60.12 | 60.19 | 39.45 | 43.39 | 37.56 | 42.96 | **81.45** |

Table 6: F1 scores of Uncle-BERT\({}_{2}\) and other state-of-the-art models on Gen-SQuAD\({}_{2}\).
| **Question objective** | **Question** | **Context (relevant parts)** | **Correct answer** | **Uncle-BERT\({}_{1}\)** | **Uncle-BERT\({}_{2}\)** |
| --- | --- | --- | --- | --- | --- |
| Place | Where was John born? | ... John was born in Poland in 1866... John grew up in PL until he was... | Poland | in Poland | PL |
| Place | Where was John buried? | ... John died and was buried in Germany during... | | | |
| First-degree place | Where did John's father get married? | ... Matt (John's father) was born in Warsaw, Poland... Matt married Elain in Warsaw... | | | |
| First-degree place | Where was Matt killed? | ... Matt died at home in Poland surrounded... | | | |
| Second-degree place | Where did John's grandfather die? | ... his father (John's grandfather) was killed in Pruszkow in 1850... | | | |

Table 7: Uncle-BERT\({}_{1}\)'s and Uncle-BERT\({}_{2}\)'s prediction examples.
| **Question objective** | **Uncle-BERT\({}_{0}\)** | **Uncle-BERT\({}_{1}\)** | **Uncle-BERT\({}_{2}\)** |
| --- | --- | --- | --- |
| Name | 44.53 | 95.14 | **97.64** |
| Date | 21.60 | 52.48 | **55.10** |
| Place | 27.54 | **88.53** | 78.52 |
| Information / named entity | 16.91 | 15.22 | **87.40** |
| First-degree relation | 19.58 | 86.94 | **89.45** |
| Second-degree relation | 20.66 | 63.45 | **82.52** |
| First-degree date | 13.26 | 43.44 | **53.85** |
| First-degree place | 34.17 | **86.55** | 81.83 |
| First-degree information / named entity | 8.95 | 12.21 | **87.28** |
| Second-degree date | 11.68 | 43.12 | **44.87** |
| Second-degree place | 33.10 | **80.51** | 79.12 |
| Second-degree information / named entity | 8.37 | 11.34 | **81.04** |
| **Overall** | 19.73 | 69.92 | **81.45** |

Table 5: Uncle-BERT models' F1 scores on Gen-SQuAD\({}_{2}\).
## 6 Conclusions and future work
This study proposed and implemented a multi-phase end-to-end methodology for DNN-based natural question answering using transformers in the genealogical domain.
The presented methodology was evaluated on a large corpus of 3,140 family trees comprised of 1,847,224 different persons. The evaluation results show that a fine-tuned Uncle-BERT\({}_{2}\) model, trained on the genealogical dataset with second-degree relationships, outperformed all the open-domain state-of-the-art models. This finding indicates that the genealogy domain is distinctive and requires a dedicated training dataset and fine-tuned DNN model. The proposed knowledge-graph-to-text approach was also found to be superior to direct knowledge-graph-based models, such as DELFT, even after domain adaptation, both in terms of accuracy and complexity. This study also examined the effect of the type of question on the accuracy of the question answering model. Date-related questions are different as they can be answered with greater accuracy directly from the knowledge graph and may have more generic features than other question types, while place-related questions are more sensitive to noise than other question types. In addition, the evaluation results of the three Uncle-BERT models showed that the consanguinity scope of graph traversal used for generating a training corpus influences the accuracy of the models.
In summary, this paper's contributions are: (1) a genealogical knowledge graph representation of GEDCOM standard; (2) a dedicated graph traversal algorithm adapted to interpret the meaning of the relationships in the genealogical data (Gen-BFS); (3) an automatically generated SQuAD-style genealogical training dataset (Gen-SQuAD); (4) an end-to-end question answering pipeline for the genealogical domain; and (5) a fine-tuned question-answering BERT-based model for the genealogical domain (Uncle-BERT).
Although the proposed end-to-end methodology was implemented and validated for the question answering task, it can be applied to other NLP downstream tasks in the genealogical domain, such as entity extraction, text classification, and summarization. Researchers can utilize the study's results to reduce time, cost, and complexity and to improve accuracy in genealogical domain NLP research.
Possible directions for future research may include: (1) investigating the tradeoff between rich context passage generation and increasing the Gen-BFS scope, (2) integration with DNC or GNNs for dynamic scoping, (3) finding a method for classifying question types, (4) investigating the contribution of each question type to the accuracy of the model, and developing a model selection or multi-model method for each question type, (5) investigating larger contexts (relation degrees) using models that can handle larger input (e.g., Longformer [58] or Reformer [9]), (6) extending the Gen-BFS algorithm to handle missing family relations by adding a knowledge graph completion step while traversing the graph, (7) investigating the influence of the order of verbalized sentences and especially the order of person reference types, (8) investigating an architecture that will rank family trees (embedding the entire graph [37]) based on similarity to the question [92] and eliminate the need for the user to select a family tree, (9) investigating the impact of spelling mistakes and out-of-vocabulary words on the quality of the results, and (10) training other transformer models on genealogical data to further optimize question answering DNN models for the genealogical domain.
|
2306.07030 | Resource Efficient Neural Networks Using Hessian Based Pruning | Neural network pruning is a practical way for reducing the size of trained
models and the number of floating-point operations. One way of pruning is to
use the relative Hessian trace to calculate sensitivity of each channel, as
compared to the more common magnitude pruning approach. However, the stochastic
approach used to estimate the Hessian trace needs to iterate over many times
before it can converge. This can be time-consuming when used for larger models
with many millions of parameters. To address this problem, we modify the
existing approach by estimating the Hessian trace using FP16 precision instead
of FP32. We test the modified approach (EHAP) on
ResNet-32/ResNet-56/WideResNet-28-8 trained on CIFAR10/CIFAR100 image
classification tasks and achieve faster computation of the Hessian trace.
Specifically, our modified approach can achieve speed ups ranging from 17% to
as much as 44% during our experiments on different combinations of model
architectures and GPU devices. Our modified approach also takes up around 40%
less GPU memory when pruning ResNet-32 and ResNet-56 models, which allows for a
larger Hessian batch size to be used for estimating the Hessian trace.
Meanwhile, we also present the results of pruning using both FP16 and FP32
Hessian trace calculation and show that there are no noticeable accuracy
differences between the two. Overall, it is a simple and effective way to
compute the relative Hessian trace faster without sacrificing on pruned model
performance. We also present a full pipeline using EHAP and quantization aware
training (QAT), using INT8 QAT to compress the network further after pruning.
In particular, we use symmetric quantization for the weights and asymmetric
quantization for the activations. | Jack Chong, Manas Gupta, Lihui Chen | 2023-06-12T11:09:16Z | http://arxiv.org/abs/2306.07030v1 | # Resource Efficient Neural Networks Using Hessian Based Pruning
###### Abstract
Neural network pruning is a practical way for reducing the size of trained models and the number of floating point operations (FLOPs). One way of pruning is to use the relative Hessian trace to calculate sensitivity of each channel, as compared to the more common magnitude pruning approach. However, the stochastic approach used to estimate the Hessian trace needs to iterate many times before it can converge. This can be time-consuming when used for larger models with many millions of parameters. To address this problem, we modify the existing approach by estimating the Hessian trace using FP16 precision instead of FP32. We test the modified approach (EHAP) on ResNet-32/ResNet-56/WideResNet-28-8 trained on CIFAR10/CIFAR100 image classification tasks and achieve faster computation of the Hessian trace. Specifically, our modified approach can achieve speed ups ranging from 17% to as much as 44% during our experiments on different combinations of model architectures and GPU devices. Our modified approach also takes up \(\sim\)40% less GPU memory when pruning ResNet-32 and ResNet-56 models, which allows for a larger Hessian batch size to be used for estimating the Hessian trace. Meanwhile, we also present the results of pruning using both FP16 and FP32 Hessian trace calculation and show that there are no noticeable accuracy differences between the two. Overall, it is a simple and effective way to compute the relative Hessian trace faster without sacrificing on pruned model performance. We also present a full pipeline using EHAP and quantization aware training (QAT), using INT8 QAT to compress the network further after pruning. In particular, we use symmetric quantization for the weights and asymmetric quantization for the activations. The framework has been open-sourced and the code is available at [https://github.com/JackkChong/Resource-Efficient-Neural-Networks-Using-Hessian-Based-Pruning](https://github.com/JackkChong/Resource-Efficient-Neural-Networks-Using-Hessian-Based-Pruning).
## I Introduction
In recent years, neural networks have become more powerful in many tasks such as computer vision and natural language processing [1]. This is partly due to larger models with many millions of parameters and increasing input sizes. Some notable examples for image classification on ImageNet are EfficientNet-B7 with 66 million parameters [2] and ViT-G/14 vision transformer with 1843 million parameters [3]. However, it has been shown that these models with millions of parameters are usually over-parameterized [4]. Moreover, it is very hard to deploy these models on cloud servers or edge devices due to their large size and memory footprint. Since most deep neural networks contain redundant neurons, one way to make these models deployable is to compress them through neural network pruning. Much research has been conducted to determine the best way to prune a neural network, such as the magnitude-based approach or the second-order Hessian-based approach.
An important challenge to pruning is to find out which parameters are insensitive and can be pruned away without severe degradation of the model's performance. One way of doing so is to use the relative Hessian trace to calculate the sensitivity of each channel, in contrast to the standard magnitude pruning approach. The seminal work of [5] proposed Hessian Aware Pruning, which makes use of the stochastic Hutchinson trace estimator to determine the relative Hessian trace of each channel.
However, the Hutchinson trace estimator used to estimate the Hessian trace needs to iterate hundreds of times before it can converge to a stable value. This can be time-consuming when calculating the Hessian trace for larger models with many billions of parameters. In addition, the compression ratio of the pruned model is non-deterministic and can fluctuate within a few percentage points. The Hessian trace may need to be computed multiple times before pruning the model if a specific compression ratio is desired. To address this problem, we modify the existing approach in HAP by estimating the Hessian trace using FP16 precision instead of FP32. Our main contributions are:
* We test the default FP32 approach on 3 different neural network models (ResNet-32/ResNet-56/WideResNet-28-8) and record the average time spent to estimate the Hessian trace.
* We test our modified FP16 approach and compare the average time spent to estimate the Hessian trace against the default approach.
* We record the peak memory statistics of the GPU when estimating the Hessian trace using both FP32 and FP16 approaches.
* We compare the pruning results using both FP32 and FP16 approaches.
* We present a full pipeline using EHAP and quantization aware training together. INT8 quantization aware training (QAT) is done to compress the network further after pruning.
## II Related Work
In recent years, the size of neural networks has rapidly increased and there is a growing need for faster inference on edge devices [6, 7, 8, 9, 10, 11]. Gradually, many approaches to make neural networks more compact and efficient have started to emerge [12, 13, 14, 15, 16, 17]. In this paper, we briefly discuss the related work on pruning and quantization.
### _Pruning_
Pruning can be grouped into two categories: structured pruning and unstructured pruning. Unstructured pruning aims to prune away redundant neurons without any structure. Unfortunately, this can result in sparse matrix operations which are difficult to speed up using GPUs [18]. To address this, structured pruning removes groups of redundant neurons in a structured way so that matrix operations can still be conducted efficiently. However, the difficulty lies in severe accuracy degradation when the model is pruned to a large degree. In both methods, the important challenge is to determine which parameters are insensitive to pruning. Many methods have been proposed in the literature to address this, utilising regularization approaches [19, 20, 21, 22, 23], first- or second-order methods [24, 25, 26], gradient-based techniques [27, 28, 29], similarity approaches [30], sensitivity and feedback [31, 32, 33, 34, 35], and magnitude-based approaches [36, 37, 38, 39, 40, 41].
One of the most popular ways is magnitude-based pruning. At the heart of this approach is the assumption that small parameters are redundant and can be pruned away. For example, [42] explores the use of magnitude-based pruning to achieve a significant reduction in model size and only a marginal drop in accuracy. Another variant was used in [43] to prune redundant neurons using the scaling factor of batch normalization layers. In particular, channels with smaller scaling factors are seen as unimportant and are pruned away. However, an important drawback of magnitude-based pruning is that parameters with smaller magnitudes can still be quite sensitive to pruning. A second-order Taylor series expansion of the model's loss function shows that the change to the loss depends on not just the magnitude of the weights, but also the second-order Hessian [44]. In particular, parameters with small magnitudes may still possess a high degree of second-order sensitivity.
To account for the second-order component in the Taylor series expansion, several Hessian-based pruning methods have emerged. The Hessian diagonal was used as the sensitivity metric in [44]; [45] also used the Hessian diagonal, but considered off-diagonal components and proved a correlation with the Hessian inverse. However, one important drawback of these methods is that they result in unstructured pruning, which is difficult to accelerate with GPUs. There are also other second-order pruning methods that do not use the Hessian. For example, EigenDamage [46] uses the Gauss-Newton operator instead of the Hessian and the authors estimate the GN operator using Kronecker products.
For this paper, we focus on Hessian Aware Pruning from [5] to conduct second-order structured pruning of a pretrained network. In this method, a stochastic Hutchinson trace estimator is used to determine the relative Hessian trace. The result is then used to calculate the sensitivity of each channel.
### _Quantization_
By default, neural networks are trained using FP32 precision. However, most networks do not need such a high level of precision to infer well. A model's weights and activations can be quantized from floating point into lower numerical precision to speed up inference without a significant drop in accuracy [47]. Currently, the two main focus areas for quantization are Post-Training Quantization (PTQ) and Quantization Aware Training (QAT). The difference between the two is whether the parameters in the quantized model are fine-tuned after quantization.
In PTQ, the model is quantized directly without any fine-tuning [48]. As such, PTQ methods usually do not rely heavily on additional training data. However, it comes at the cost of lower accuracy because quantization can degrade the accuracy of the model significantly without fine-tuning.
In QAT, fine-tuning is done after quantization by adding quantizer blocks into the pretrained model. These quantizer blocks simulate the quantization noise by conducting quantize and de-quantize operations [49]. Although QAT requires additional time and computational resources to fine-tune the model, it results in higher accuracy of the model.
For this paper, we conduct QAT with the framework in [50] to quantize the pruned models to INT8 precision and report our results in Section IV.
## III Methodology
The focus of this paper is on supervised learning tasks, where we aim to minimize the empirical risk by solving the following optimization problem:
\[L(w)=\frac{1}{n}\sum_{i=1}^{n}l(x_{i},y_{i},w), \tag{1}\]
where \(w\in\mathbb{R}^{N}\) refers to the trainable model parameters, \(l(x_{i},y_{i},w)\) refers to the loss for each input sample \(x_{i}\), \(y_{i}\) refers to the target label and \(n\) refers to the number of training samples. Before pruning, we make the assumption that the model has been pretrained and converged to a local minimum. This means that the gradient, \(\nabla_{w}L(w)=0\) and the Hessian is Positive Semi-Definite (PSD). The main objective is to prune as many parameters as possible to compress the model without significant accuracy degradation.
Let \(\Delta w\in\mathbb{R}^{N}\) represent the pruning perturbation such that the pruned weights go to zero. The corresponding change in loss from Taylor Series expansion is represented as:
\[\Delta L=L(w+\Delta w)-L(w)=g^{T}\Delta w+\frac{1}{2}\Delta w^{T}H\Delta w+O( ||\Delta w||^{3}) \tag{2}\]
where \(g\) represents the gradient of the loss function \(L\) w.r.t. weights \(w\) and \(H\) represents the second-order derivative (Hessian). We let \(g=0\) and the Hessian be PSD for a pretrained network that has converged to a local minimum. We also assume higher order terms can be ignored [45].
To determine the weights that give minimum perturbation to the loss, we present the following optimization problem:
\[\underset{\Delta w}{min}\frac{1}{2}\Delta w^{T}H\Delta w=\frac{1}{2}\begin{bmatrix} \Delta w_{p}\\ \Delta w_{l}\end{bmatrix}^{T}\begin{bmatrix}H_{p,p}&H_{p,l}\\ H_{l,p}&H_{l,l}\end{bmatrix}\begin{bmatrix}\Delta w_{p}\\ \Delta w_{l}\end{bmatrix} \tag{3}\]
where \(\Delta w_{p}\) and \(\Delta w_{l}\) denote the perturbation to the weights of the pruned channels (p-channels) and un-pruned channels (l-channels) respectively, \(H_{l,p}\) represents cross Hessian w.r.t. l-channels and p-channels, \(H_{p,p}\) represents Hessian w.r.t. p-channels only and \(H_{l,l}\) represents Hessian w.r.t. l-channels only. Since we are pruning p-channels, we subject our optimization problem to the following constraint:
\[\Delta w_{p}+w_{p}=0 \tag{4}\]
By using the Lagrangian method to solve the constrained optimization problem, we get the following (refer to Appendix for details):
\[\frac{1}{2}\Delta w^{T}H\Delta w=\frac{1}{2}w_{p}^{T}(H_{p,p}-H_{p,l}H_{l,l}^{ -1}H_{l,p})w_{p} \tag{5}\]
which gives us the change to the model loss when a group of parameters is pruned. A brief sketch of the Lagrangian argument behind equation (5) is given below; afterwards, we dive into how Hessian Aware Pruning is conducted in [5].
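For completeness, here is a short reconstruction of the derivation (our sketch; the full details are in the appendix of [5]). With \(\Delta w_{p}=-w_{p}\) fixed by the constraint (4), the objective is minimised over the free perturbation \(\Delta w_{l}\):

\[\frac{\partial}{\partial\Delta w_{l}}\left[\frac{1}{2}\left(\Delta w_{p}^{T}H_{p,p}\Delta w_{p}+2\Delta w_{p}^{T}H_{p,l}\Delta w_{l}+\Delta w_{l}^{T}H_{l,l}\Delta w_{l}\right)\right]=0\quad\Rightarrow\quad\Delta w_{l}=H_{l,l}^{-1}H_{l,p}w_{p},\]

and substituting this optimal \(\Delta w_{l}\) back into the quadratic form yields equation (5).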
### _Hessian Aware Pruning (HAP)_
According to [5], we use the following approximation for the perturbation to the model loss (i.e. the sensitivity metric):
\[\frac{1}{2}\Delta w^{T}H\Delta w\approx\frac{1}{2}w_{p}^{T}\frac{Trace(H_{p,p} )}{p}w_{p}=\frac{Trace(H_{p,p})}{2p}||w_{p}||_{2}^{2} \tag{6}\]
where \(Trace(H_{p,p})\) refers to the trace of the Hessian block for p-channels. We can approximate this efficiently using a randomized numerical method known as Hutchinson's Method [51] (Algorithm 1).
```
Input: Parameters \(\theta\), number of iterations \(n_{v}\)
Output: Scaled Hessian trace approximation \(\mathbf{E}[sv^{T}Hv]\)
1  Initialize gradient scaler with the default scaling factor \(s=2^{16}\)
2  Compute the scaled gradient of \(\theta\) by FP16 backpropagation, \(sg_{i}=s\frac{dL}{dw_{i}}\)
3  Unscale the gradient, \(g_{i}=\frac{sg_{i}}{s}\)
4  Reduce the scaling factor to \(s=2^{8}\)
5  for \(i=1,2,\dots,n_{v}\) do
6    Draw a random vector \(v\) from the Rademacher distribution
7    Compute scaled \(Hv\) by FP16 backpropagation, \(sHv=s\frac{d(g^{T}v)}{d\theta}\)
8    Compute \(sv^{T}Hv\) by taking the dot product between \(v\) and \(sHv\)
9  end for
10 Compute \(\mathbf{E}[sv^{T}Hv]\) by taking the mean of all \(sv^{T}Hv\) over \(n_{v}\) iterations
```
**Algorithm 2** Efficient Hutchinson's Method
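The listing for Algorithm 1, the standard FP32 estimator, is not reproduced here; as a reference point, below is a minimal PyTorch sketch of it that we provide for illustration (the function name and arguments are our own). It uses the identity \(Hv=\frac{d(g^{T}v)}{d\theta}\) to form Hessian-vector products without ever materialising \(H\).

```python
import torch

def hutchinson_trace(loss, params, n_v=100):
    """Estimate Tr(H) of `loss` w.r.t. `params` via Hutchinson's method."""
    # First-order gradients; create_graph=True enables second-order backprop.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    estimate = 0.0
    for _ in range(n_v):
        # Rademacher probe vectors: entries are +1 or -1 with equal probability.
        vs = [torch.randint_like(p, high=2) * 2.0 - 1.0 for p in params]
        # g^T v, then differentiate again to obtain the Hessian-vector product.
        gv = sum((g * v).sum() for g, v in zip(grads, vs))
        hvs = torch.autograd.grad(gv, params, retain_graph=True)
        # Accumulate v^T H v; its mean over n_v draws approximates Tr(H).
        estimate += sum((v * hv).sum() for v, hv in zip(vs, hvs)).item()
    return estimate / n_v
```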
By using Hutchinson's Method, we can approximate the Hessian trace without computing the full Hessian. In particular, we can show that (refer to Appendix for details):
\[Tr(H)=\mathbf{E}[v^{T}Hv] \tag{7}\]
and therefore, we use this identity to calculate the second-order sensitivity metric for each channel:
\[\frac{Trace(H_{p,p})}{2p}||w_{p}||_{2}^{2}=\frac{\mathbf{E}[v^{T}Hv]}{2p}||w_{ p}||_{2}^{2} \tag{8}\]
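The trace identity of equation (7) itself follows in one line (our sketch), using \(\mathbf{E}[vv^{T}]=I\) for Rademacher \(v\), linearity of expectation, and the cyclic property of the trace:

\[Tr(H)=Tr(H\,\mathbf{E}[vv^{T}])=\mathbf{E}[Tr(Hvv^{T})]=\mathbf{E}[Tr(v^{T}Hv)]=\mathbf{E}[v^{T}Hv].\]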
Fig. 1: An illustration of simulated quantization based forward pass in floating point precision. The quantizer blocks conduct quantize and de-quantize operations to induce quantization errors into the computational graph so that the model can optimize on them during QAT.
### _Efficient Hessian Aware Pruning (EHAP)_
The authors in [5] used default FP32 precision to compute the Hessian trace with Hutchinson's Method. In their experiments with ResNet50 on ImageNet, the longest time taken to approximate the Hessian trace was three minutes. However, the compression ratio cannot be directly controlled and may fluctuate within a few percentage points during pruning. If a specific compression ratio is desired, Hutchinson's Method must be run several times. This can be time-consuming if we were to scale up a model to several hundred million parameters and want to run Hutchinson's Method multiple times.
To address this problem, we propose to run the backpropagation operations within Hutchinson's Method in FP16 precision. By leveraging PyTorch's Automatic Mixed Precision (AMP) package [52], we can approximate the Hessian trace faster using the modified approach (Algorithm 2). In more detail, we leverage AMP's autocasting context manager to enable backpropagation to run in FP16 during Hutchinson's Method. We also adopt AMP's gradient scaler to scale gradients during the backpropagation process. The rationale behind scaling is to avoid underflow or overflow. Without scaling, values which are too small will flush to zero (underflow) and those which are too big will turn into NaNs (overflow) [52]. After tuning, we find that the optimal scaling factor to use for second-order backpropagation in our experiments is \(2^{8}\).
In addition, we do not unscale the second-order gradients because the scale does not affect our approximation. This is because we only require the relative Hessian trace to compare between different channels' sensitivities, so we are not concerned with the absolute value of the Hessian trace approximation. Overall, we find that using FP16 precision is a simple way to approximate the Hessian trace faster.
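A rough PyTorch sketch of this FP16 variant is given below. It is our own simplification, not the released code of this paper: the initial \(2^{16}\) scale-and-unscale of the first-order gradient is folded away for brevity, and only the static \(2^{8}\) scale on the second-order pass is kept. Since only relative traces matter, the scaled products \(sv^{T}Hv\) are never unscaled.

```python
import torch

def ehap_trace(model, loss_fn, x, y, n_v=100, s=2.0 ** 8):
    """Simplified sketch of Algorithm 2: scaled FP16 Hessian-trace estimation."""
    params = [p for p in model.parameters() if p.requires_grad]
    # Forward pass under autocast so backpropagation runs in FP16 on CUDA.
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    estimate = 0.0
    for _ in range(n_v):
        vs = [torch.randint_like(p, high=2) * 2.0 - 1.0 for p in params]
        gv = sum((g * v).sum() for g, v in zip(grads, vs))
        # Scale before the second backward pass to avoid FP16 underflow.
        shvs = torch.autograd.grad(s * gv, params, retain_graph=True)
        estimate += sum((v * shv).sum() for v, shv in zip(vs, shvs)).item()
    return estimate / n_v  # proportional to Tr(H); relative values suffice
```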
### _Quantization Aware Training (QAT)_
We follow the framework provided in [50] to conduct INT8 quantization for our pruned models. Our goal is to conduct INT8 QAT for our models after pruning. Below, we explain uniform quantization and the addition of quantizer blocks to insert quantize and de-quantize operations into the computational graph.
### _Uniform Quantization_
We present the uniform quantization formula as shown below:
\[Q(r)=Int(r/S)-Z \tag{9}\]
where \(Q\) refers to the quantization function, \(r\) is the FP32 weight / activation that we want to quantize, \(S\) is a real-valued scaling factor and \(Z\) is the quantized value that will represent real-value zero (i.e. zero point). The \(Int\) operation is simply a rounding operation to nearest integer. Since the quantized values are uniformly spaced, this method of quantization is called uniform quantization.
An important part of uniform quantization is calibration, where we select the clipping range \([\alpha,\beta]\) so that we can determine the appropriate scaling factor \(S\). The easiest way to choose the clipping range is to select the maximum and minimum values of the signal, where \(\alpha=r_{min}\) and \(\beta=r_{max}\). This corresponds to asymmetric quantization since the clipping range is not symmetric to the origin. An alternative way to choose the clipping range is to let \(-\alpha=\beta=max(|r_{max}|,|r_{min}|)\), which corresponds to symmetric quantization.
In this paper, we adopt symmetric quantization for weights. However, we use asymmetric quantization for activations after ReLU because those activations will always be non-negative.
| **CIFAR-100** | **Avg. Time** | **% Speed Up** | **Memory Used** |
| --- | --- | --- | --- |
| ResNet-32 (HAP) | 86.970 s | – | 2.402 GB |
| ResNet-32 (EHAP) | 70.968 s | 18.40% | 1.407 GB |
| ResNet-56 (HAP) | 146.659 s | – | 3.859 GB |
| ResNet-56 (EHAP) | 120.557 s | 17.80% | 2.142 GB |
| WRN-28-8 (HAP) | 277.476 s | – | 4.073 GB |
| WRN-28-8 (EHAP) | 187.684 s | 32.36% | 3.678 GB |

TABLE III: Timing benchmarks for Hutchinson's Method using an NVIDIA GeForce RTX 2060 device. We show here that EHAP can compute the relative Hessian trace faster in all cases as compared to HAP.
| **CIFAR-10** | **Original Acc.** | **Pruned Acc. (HAP)** | **Pruned Acc. (EHAP)** |
| --- | --- | --- | --- |
| ResNet-32 | 93.83% ± 0.09% | 88.47% ± 0.25% | 88.89% ± 0.23% |
| ResNet-56 | 94.32% ± 0.21% | 90.47% ± 0.26% | 89.83% ± 0.42% |
| WideResNet-28-8 | 96.23% ± 0.09% | 95.14% ± 0.08% | 95.22% ± 0.09% |

TABLE I: HAP and EHAP results for CIFAR-10. There are no noticeable accuracy differences between the two approaches.
Using asymmetric quantization makes full use of the entire 8-bit range, since we only need to represent positive values for activations.
To recover the floating point values from the quantized values in Equation 9, we have the de-quantization formula shown below:
\[r\approx S(Q(r)+Z) \tag{10}\]
Note that the de-quantize operation cannot exactly recover the floating point value due to the rounding operation in the previous quantize operation. A small code sketch of both operations is given below. Next, we describe how we apply Equations 9 and 10 to add quantizer blocks and conduct QAT.
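The following is a minimal sketch (ours, in PyTorch) of equations (9) and (10) together with the two calibration schemes described above; the helper names and signatures are our own choices, not part of the framework in [50].

```python
import torch

def quantize(r, scale, zero_point, qmin, qmax):
    """Eq. (9): Q(r) = Int(r / S) - Z, clamped to the integer grid."""
    return torch.clamp(torch.round(r / scale) - zero_point, qmin, qmax)

def dequantize(q, scale, zero_point):
    """Eq. (10): r is approximately recovered as S * (Q(r) + Z)."""
    return scale * (q + zero_point)

def symmetric_params(w, num_bits=8):
    """Symmetric calibration for weights: clipping range [-c, c] with Z = 0."""
    c = w.abs().max()
    return c / (2 ** (num_bits - 1) - 1), torch.tensor(0.0)

def asymmetric_params(a, num_bits=8):
    """Asymmetric calibration for activations: clipping range [min(a), max(a)]."""
    alpha, beta = a.min(), a.max()
    scale = (beta - alpha) / (2 ** num_bits - 1)
    return scale, torch.round(alpha / scale)  # Z maps the real value alpha to 0
```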
### _Adding Quantizer Blocks_
When running experiments, it is easier to conduct QAT on general-purpose hardware instead of the actual quantized device. We do so by using quantizer blocks to insert quantization effects into the computational graph [50]. During QAT, the quantization errors will then appear in the loss and be accounted for during gradient descent. A schematic diagram of this is shown in Figure 1. In more detail, we conduct QAT by simulating quantization on our general-purpose floating point hardware. For both weights and activations, we simply need to pass them through the quantizer blocks during forward pass to induce quantization effects into the computational graph.
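To make the simulated forward pass of Figure 1 concrete, below is a minimal straight-through-estimator (STE) sketch of a quantizer block wrapped around a linear layer. This is our illustrative version, not the exact framework of [50]: the round operation is treated as the identity in the backward pass so that the quantization error appears in the loss while gradients still flow.

```python
import torch

class FakeQuantize(torch.autograd.Function):
    """Quantize-dequantize (Eqs. 9-10) with a straight-through estimator."""

    @staticmethod
    def forward(ctx, r, scale, zero_point, qmin, qmax):
        q = torch.clamp(torch.round(r / scale) - zero_point, qmin, qmax)
        return scale * (q + zero_point)  # de-quantize immediately

    @staticmethod
    def backward(ctx, grad_output):
        # STE: pretend round() was the identity; no gradients for the rest.
        return grad_output, None, None, None, None

class QuantLinear(torch.nn.Linear):
    """Linear layer whose weights pass through a simulated quantizer block."""

    def forward(self, x):
        scale = self.weight.detach().abs().max() / 127.0  # symmetric INT8
        w_q = FakeQuantize.apply(self.weight, scale, 0.0, -127, 127)
        return torch.nn.functional.linear(x, w_q, self.bias)
```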
## IV Experiments
For all our experiments with ResNet-32 / ResNet-56 / WideResNet-28-8, we target a compression ratio of 10%, i.e., sparsifying the model parameters by 90%. The experimental settings, resulting file sizes, FLOPs and compression ratios can be found in the Appendix.
### _EHAP Results for CIFAR-10 and CIFAR-100_
We test EHAP on CIFAR-10 [53] and CIFAR-100 [53] using ResNet-32, ResNet-56 and WideResNet-28-8 and show the results in Table I and Table II. In particular, we report the original accuracy for each model type and compare between the pruned accuracies using EHAP and HAP.
From Table I and Table II, we show that EHAP (the modified approach) achieves equivalent accuracy results when compared to using HAP (the standard approach). In some cases, EHAP even achieves higher accuracy than HAP on both CIFAR-10 and CIFAR-100 datasets, despite using half the precision for Hessian trace calculation. This outcome happens because layers are pruned based on the relative Hessian trace only. Thus, we do not need a high level of precision for Hutchinson's Method to achieve good results.
We also compare the average time taken to run Hutchinson's Method using both the standard approach and the modified approach in Table III and Table IV. In particular, we report the average time taken to compute the relative Hessian trace, the percentage speed up and the max memory allocated by the GPU device for Hutchinson's Method using EHAP and HAP. From Table III and Table IV, we achieve speed ups (of up to 44.5%) in all of our model architectures and on both types of GPU devices. The speed ups can be seen on both the older RTX 2060 device and the newer RTX 3090 device. In addition, we save on the amount of GPU memory used (of up to 1.7 GB), which can be helpful for GPU devices with limited memory.
The results of the full pipeline (EHAP followed by INT8 QAT) are shown in Table V and Table VI. In particular, we report the original, pruned and quantized accuracies in the tables for easy comparison. The quantized accuracies shown are the accuracies that we can expect from the model when we deploy to the actual quantized device. It can be seen that QAT does not have an adverse impact on the model accuracy after pruning, achieving very similar accuracy results. In some cases, the QAT accuracy is actually slightly higher than the pruned accuracy. We also achieve smaller file sizes after QAT since we only save the quantized weights and biases and their corresponding scaling factors.
## V Limitations and Future Work
We have showcased an integrated pipeline for pruning and quantization utilizing more efficient Hessian aware pruning. A limitation of this work is that the gradient scale used for second-order FP16 backpropagation in EHAP is static and was tuned by hand. However, dynamic gradient scaling could be used so that we do not have to tune the scale manually.
Another area of future work is to modify the quantization scheme to be more sophisticated by using Hessian based quantization such that the bitwidth of each layer is not constant and varies as per the Hessian rule.
## VI Conclusion
The existing method used to compute the relative Hessian trace in default FP32 can be time-consuming for larger models if we need to run the pruning script multiple times. We propose a modified approach that computes the relative Hessian trace using FP16 precision. From the results obtained using the CIFAR-10 and CIFAR-100 benchmark data sets in Section IV, we show that our modified approach is more efficient and consistently faster (up to 44.5%) than the standard approach used to approximate the relative Hessian trace. We also do not compromise on pruned model accuracy using our modified approach at the same compression ratio. In addition, we save a modest amount of GPU memory (up to 1.7 GB) with our modified approach, which enables a larger Hessian batch size to be used to improve the accuracy of our Hessian trace approximation. Overall, it is a simple and effective way to compute the relative Hessian trace faster without sacrificing on pruned model accuracy.
|
2304.09499 | The Responsibility Problem in Neural Networks with Unordered Targets | We discuss the discontinuities that arise when mapping unordered objects to
neural network outputs of fixed permutation, referred to as the responsibility
problem. Prior work has proved the existence of the issue by identifying a
single discontinuity. Here, we show that discontinuities under such models are
uncountably infinite, motivating further research into neural networks for
unordered data. | Ben Hayes, Charalampos Saitis, György Fazekas | 2023-04-19T08:40:10Z | http://arxiv.org/abs/2304.09499v1 | # The Responsibility Problem in Neural Networks with Unordered Targets
###### Abstract
We discuss the discontinuities that arise when mapping unordered objects to neural network outputs of fixed permutation, referred to as the _responsibility problem_. Prior work has proved the existence of the issue by identifying a single discontinuity. Here, we show that discontinuities under such models are uncountably infinite, motivating further research into neural networks for unordered data.
## 1 Introduction
The _responsibility problem_ (Zhang et al., 2020) describes an issue when training neural networks with unordered targets: the fixed permutation of output units requires that each assume a "responsibility" for some element. Permutations of responsibility result in discontinuities wherein small changes in a permutation invariant metric demand large changes in the layer's actual output. For feed-forward networks, the worst-case approximation of such discontinuous functions is arbitrarily poor for at least some subset of the input space (Kratsios & Zamanlooy, 2022).
Empirically, degraded performance has been observed on set prediction tasks (Zhang et al., 2020), motivating research into architectures for set generation which circumvent these discontinuities (Zhang et al., 2020; Kosiorek et al., 2020; Rezatofighi et al., 2018). The problem has briefly been stated formally in the literature (Zhang et al., 2020), but proven only by the existence of a single discontinuity. Here we prove an alternative statement of the responsibility problem, categorising unordered-to-ordered mappings as operations that are isomorphic to sorting and operations that are not.
In the first case, the discontinuities that arise are "classic" permutations of responsibility - that is, an element that was initially represented by one output unit becomes represented by another. Using a result from Hemasinha & Weaver (2015), we find that any mappings that preserve a consistent ordering between elements are isomorphic and thus all admit such discontinuities. In the second case, discontinuities also appear wherever the map does _not_ implement a consistent ordering. We demonstrate that, as a consequence, the set of discontinuities in both sorting and non-sorting maps is uncountably infinite.
This result is congruent with experimental evidence that performance suffers when the orderless structure of set data is not explicitly accounted for in model architectures. This has practical implications for any task that inherently involves set prediction, including object detection, point cloud generation, molecular graph generation, speech separation, and more.
## 2 The Responsibility Problem
Let \(\Theta\) denote the set of sets \(\mathbf{\theta}\) of cardinality \(|\mathbf{\theta}|=n\) with elements in \(\mathbb{R}^{d}\) where \(d\geq 2\):
\[\mathbf{\theta}\triangleq\left\{\mathbf{x}_{k}\mid\mathbf{x}_{k}\in\mathbb{R}^{d},k\in \{1,\ldots,n\}\right\}\in\Theta.\]
We are interested in the set \(\mathcal{F}\) of maps \(f\colon\Theta\to\mathbb{R}^{n\times d}\), such that all \(f\in\mathcal{F}\) select a single assignment of each element in any \(\mathbf{\theta}\in\Theta\) to each row of the resulting matrix:
\[f(\mathbf{\theta})=\begin{bmatrix}\mathbf{x}_{\pi(1)}&\mathbf{x}_{\pi(2)}&\ldots&\mathbf{x}_{ \pi(n)}\end{bmatrix}^{T},\qquad\forall f\in\mathcal{F},\ \ \forall\mathbf{\theta}\in\Theta,\ \ \ \pi=\Pi_{f}(\mathbf{\theta})\enspace,\] |
2303.08720 | Practicality of generalization guarantees for unsupervised domain
adaptation with neural networks | Understanding generalization is crucial to confidently engineer and deploy
machine learning models, especially when deployment implies a shift in the data
domain. For such domain adaptation problems, we seek generalization bounds
which are tractably computable and tight. If these desiderata can be reached,
the bounds can serve as guarantees for adequate performance in deployment.
However, in applications where deep neural networks are the models of choice,
deriving results which fulfill these remains an unresolved challenge; most
existing bounds are either vacuous or has non-estimable terms, even in
favorable conditions. In this work, we evaluate existing bounds from the
literature with potential to satisfy our desiderata on domain adaptation image
classification tasks, where deep neural networks are preferred. We find that
all bounds are vacuous and that sample generalization terms account for much of
the observed looseness, especially when these terms interact with measures of
domain shift. To overcome this and arrive at the tightest possible results, we
combine each bound with recent data-dependent PAC-Bayes analysis, greatly
improving the guarantees. We find that, when domain overlap can be assumed, a
simple importance weighting extension of previous work provides the tightest
estimable bound. Finally, we study which terms dominate the bounds and identify
possible directions for further improvement. | Adam Breitholtz, Fredrik D. Johansson | 2023-03-15T16:05:05Z | http://arxiv.org/abs/2303.08720v1 | # Practicality of generalization guarantees for unsupervised domain adaptation with neural networks
###### Abstract
Understanding generalization is crucial to confidently engineer and deploy machine learning models, especially when deployment implies a shift in the data domain. For such domain adaptation problems, we seek generalization bounds which are tractably computable and tight. If these desiderata can be reached, the bounds can serve as guarantees for adequate performance in deployment. However, in applications where deep neural networks are the models of choice, deriving results which fulfill these remains an unresolved challenge; most existing bounds are either vacuous or have non-estimable terms, even in favorable conditions. In this work, we evaluate existing bounds from the literature with potential to satisfy our desiderata on domain adaptation image classification tasks, where deep neural networks are preferred. We find that all bounds are vacuous and that sample generalization terms account for much of the observed looseness, especially when these terms interact with measures of domain shift. To overcome this and arrive at the tightest possible results, we combine each bound with recent data-dependent PAC-Bayes analysis, greatly improving the guarantees. We find that, when domain overlap can be assumed, a simple importance weighting extension of previous work provides the tightest estimable bound. Finally, we study which terms dominate the bounds and identify possible directions for further improvement.
## 1 Introduction
Successful deployment of machine learning systems relies on generalization to inputs never seen in training. In many cases, training and in-deployment inputs differ systematically; these are domain adaptation (DA) problems. An example of a setting where these problems arise is healthcare. Learning a classifier from data from one hospital and applying it to samples from another is an example of a task that machine learning often fails at (AlBadawy et al., 2018; Perone et al., 2019; Castro et al., 2020). In high-stakes settings like healthcare, guarantees on model performance would be required before meaningful deployment is accepted. Modern machine learning models, especially neural networks, perform well on diverse and challenging tasks on which conventional models have had only modest success. However, due to the high flexibility and opaque nature of neural networks, it is often hard to quantify how well we can expect them to perform in practice.
Performance guarantees for machine learning models are typically expressed as generalization bounds. Bounds for unsupervised domain adaptation (UDA), where no labels are available from the target domain, have been explored in a litany of papers using both the PAC (Ben-David et al., 2007; Mansour et al., 2009) and PAC-Bayes (Germain et al., 2020) frameworks; see Redko et al. (2020) for an extensive survey. Despite great interest in this problem, very few works actually compute or report the _value_ of the proposed bounds. Instead, the results are used only to guide optimization or algorithm development. Moreover, the bounds presented often contain terms which are non-estimable without labeled data from the target domain even under favourable conditions (Johansson et al., 2019; Zhao et al., 2019).
For deployment in sensitive settings we wish to find bounds which are: a) Amenable to calculation; they do not contain non-estimable terms and are tractable to compute. b) Tight; they are close to the error in deployment (or at least non-vacuous). How do existing bounds fare in solving this problem? As we will see, for realistic problems under favorable conditions, most, if not all, bounds in the literature struggle to satisfy one or both of these goals to varying degrees.
In this work, we examine the practical usefulness of current UDA bounds as performance guarantees in the context of learning with neural networks. Examining the literature with respect to our desiderata, we identify bounds which show promise in being estimable and tractably computable (Section 2.1). We find that terms related to sample generalization dominate existing bounds for neural networks, prohibiting tight guarantees. To remedy this, we apply PAC-Bayes analysis (McAllester, 1999) with data-dependent priors (Dziugaite & Roy, 2019) in four diverse bounds (Sections 2.3-2.4). Two are existing PAC-Bayes bounds from the UDA literature and two are PAC-Bayes adaptations of bounds based on importance weighting (IW) and integral probability metrics (IPM). We evaluate the bounds empirically under favorable conditions in two tasks which fulfill the covariate shift and domain overlap assumptions; one task concerns digit image classification and the second X-ray classification (Section 3). Our results show that all four bounds are vacuous on both tasks without data-dependent priors, but some can be made tight with them (Section 4). Furthermore, we find that the simple extension of applying importance weights to previous work outperforms the best fully observable bound from the literature in tightness. This result highlights amplification of bound looseness due to interactions between domain adaptation and sample generalization terms. We conclude by offering insights into achieving the tightest bounds possible given the current state of the literature (Section 5).
## 2 Background
In this section, we introduce the unsupervised domain adaptation (UDA) problem and give a survey of existing generalization bounds through the lens of practicality: do the bounds contain non-estimable terms and are they tractably computable? We go on to select a handful of promising bounds and combine them with data-dependent PAC-Bayes analysis to arrive at the tightest guarantees available.
We study UDA for binary classification, in the context of an input space \(\mathcal{X}\subseteq\mathbb{R}^{d}\) and a label space \(\mathcal{Y}=\{-1,1\}\). While our arguments are general, we use as running example the case where \(\mathcal{X}\) is a set of black-and-white images. Let \(\mathcal{S}\) and \(\mathcal{T}\), where \(\mathcal{S}\neq\mathcal{T}\), be two distributions, or _domains_, over the product space \(\mathcal{X}\times\mathcal{Y}\), called the source domain and target domain respectively. The source domain is observed through a labeled sample \(S=\{x_{i},y_{i}\}_{i=1}^{n}\sim(\mathcal{S})^{n}\) and the target domain through a sample \(S^{\prime}_{x}=\{x^{\prime}_{i}\}_{i=1}^{m}\sim(\mathcal{T}_{x})^{m}\) which lacks labels, where \(\mathcal{T}_{x}\) is the marginal distribution on \(\mathcal{X}\) under \(\mathcal{T}\). Throughout, \((\mathcal{D})^{N}\) denotes the distribution of a sample of \(N\) datapoints drawn i.i.d. from the domain \(\mathcal{D}\).
The UDA problem is to learn hypotheses \(h\) from a hypothesis class \(\mathcal{H}\), by training on \(S\) and \(S^{\prime}_{x}\), such that the hypotheses perform well on unseen data drawn from \(\mathcal{T}_{x}\). In the Bayesian setting, we learn posterior distributions \(\rho\) over \(\mathcal{H}\) from which sampled hypotheses perform well on average. We measure performance using the expected _target risk_\(R_{\mathcal{T}}\) of a single hypothesis \(h\) or posterior \(\rho\),
\[\underbrace{R_{\mathcal{T}}(h)=\underset{(x,y)\sim\mathcal{T}}{\mathbb{E}}[ \ell(h(x),y)]}_{\text{Risk for single hypothesis $h$}}\quad\text{or}\quad\underbrace{ \underset{h\sim\rho}{\mathbb{E}}R_{\mathcal{T}}(h)}_{\text{Gibbs risk of posterior $\rho$}}, \tag{1}\]
for a loss function \(\ell:\mathcal{Y}\times\mathcal{Y}\to\mathbb{R}_{+}\). In this work, we study the zero-one loss, \(\ell(y,y^{\prime})=\mathds{1}[y\neq y^{\prime}]\). The Gibbs risk is used in the PAC-Bayes guarantees (Shawe-Taylor & Williamson, 1997; McAllester, 1998), a generalization of the PAC framework (Valiant, 1984; Vapnik, 1998).
When learning from samples, the empirical risk \(\hat{R}_{\mathcal{D}}\) can be used as an observable measure of performance,
\[\hat{R}_{\mathcal{D}}(h)=\frac{1}{n}\sum_{i=1}^{n}\ell(h(x_{i}),y_{i}), \tag{2}\]
for a sample \(\{(x_{i},y_{i})\}_{i=1}^{n}\sim(\mathcal{D})^{n}\), with the empirical risk of the Gibbs classifier defined analogously. However, since no labeled sample from \(\mathcal{T}\) is available, the risk of interest is not directly observable. Hence, the most
common way to approximate this quantity is to derive an upper bound on the target risk. We refer to these as _UDA bounds_. Crucially, _any practical performance guarantee must be made using only observed data and assumptions on how \(\mathcal{S}\) and \(\mathcal{T}\) relate_. Throughout, we make the following common assumptions.
**Assumption 1** (Covariate shift & overlap).: The source domain \(\mathcal{S}\) and target domain \(\mathcal{T}\) satisfy for all \(x,y\)
\[\begin{array}{ll}\textbf{Covariate shift:}&\mathcal{T}_{y}(Y\mid X=x)= \mathcal{S}_{y}(Y\mid X=x)\quad\text{and}\quad\mathcal{T}_{x}(X=x)\neq \mathcal{S}_{x}(X=x)\\ \textbf{Overlap:}&\mathcal{T}_{x}(X=x)>0\Rightarrow\mathcal{S}_{x}(X=x)>0. \end{array}\]
These are strong assumptions and covariate shift cannot be verified statistically. They are not required by every bound in the literature but together they are sufficient to guarantee identification and consistent estimation of the target risk (Shimodaira, 2000). More importantly, the generalization guarantees we study more closely are not fully observable without them unless target labels are available. Finally, this setting is among the most simple and favorable ones for UDA which should make for an interesting benchmark--if existing bounds are vacuous also here, significant challenges remain.
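To illustrate why these two assumptions buy consistent estimation, here is a one-line importance-weighted estimator of the target risk (our sketch; the density ratio \(w(x)=\mathcal{T}_{x}(x)/\mathcal{S}_{x}(x)\) is assumed given, e.g. estimated separately with a domain classifier):

```python
import numpy as np

def iw_target_risk(losses_src, density_ratio):
    """Importance-weighted target risk: R_T(h) = E_S[w(x) * loss(h(x), y)].

    `losses_src` holds per-example losses of h on labeled source data and
    `density_ratio` the corresponding w(x_i); both are 1-D arrays. Under
    covariate shift and overlap this is an unbiased estimate of R_T(h).
    """
    return float(np.mean(density_ratio * losses_src))
```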
### Overview of existing UDA bounds
Most existing UDA bounds on the target risk share a common structure due to their derivation. The typical process starts by bounding the _expected_ target risk using the _expected_ source risk and measures of domain shift. Thereafter, terms are added which bound the sample generalization error, the difference between the expected source risk and its empirical estimate. The results can be summarized conceptually, with \(f\) and arguments variously defined, as expressions of the form
\[R_{\mathcal{T}}\leq f(\text{Empirical source risk, Measures of domain shift, Sample generalization error})\.\]
There are two main forms taken by this function; one in which sample generalization terms are related to domain shift terms through addition and one where they are multiplied. We call these additive and multiplicative bounds respectively. One example is the classical result due to Ben-David et al. (2007) which uses the so-called \(\mathcal{A}\)-distance to bound the target risk of \(h\in\mathcal{H}\) with probability \(\geq 1-\delta\),
\[R_{\mathcal{T}}(h)\leq\underbrace{\hat{R}_{\mathcal{S}}(h)}_{\text{Emp. risk}}+\underbrace{\sqrt{\frac{4(d\log\frac{2em}{d}+\log\frac{4}{\delta})}{m}}}_{ \text{Sample generalization}}+\underbrace{d_{\mathcal{H}}(\mathcal{S},\mathcal{T})+ \lambda}_{\text{Domain shift}}, \tag{3}\]
where \(d\) is the VC dimension of \(\mathcal{H}\), \(\lambda\) is the sum of the errors on both domains of the best performing classifier \(h^{*}=\arg\min_{h\in\mathcal{H}}(R_{\mathcal{S}}(h)+R_{\mathcal{T}}(h))\), and \(d_{\mathcal{H}}(\mathcal{S},\mathcal{T})=2\sup_{A\in\{\{x:h(x)=1\}:h\in \mathcal{H}\}}|\Pr_{\mathcal{S}}[A]-\Pr_{\mathcal{T}}[A]|\) is the \(\mathcal{A}\)-distance for the characteristic sets of hypotheses in \(\mathcal{H}\).
Three challenges limit the practicality of this bound: i) \(\lambda\) is not directly estimable without target labels and must be assumed small for an informative bound. This is a pattern in UDA theory which illustrates a fundamental link between estimability and assumption. ii) The VC dimension can easily lead to a vacuous result for modern neural networks. For example, the VC dimension of piecewise polynomial networks is \(\Omega(pl\log\frac{p}{l})\) where \(p\) is the number of parameters and \(l\) is the number of layers (Bartlett et al., 2019). iii) The \(\mathcal{A}\)-distance can be tractably computed only for restricted hypothesis classes. These issues are not unique to the bound above; they are exhibited to varying degrees by any UDA bound.
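To make point iii) concrete, a commonly used heuristic is the proxy \(\mathcal{A}\)-distance: train a domain classifier to separate source inputs from target inputs and set \(\hat{d}_{\mathcal{A}}=2(1-2\epsilon)\), where \(\epsilon\) is its held-out error. The sketch below (ours) also illustrates the caveat: restricting the supremum to a single tractable classifier only lower-bounds the true distance, so the result cannot be plugged into equation 3 as a valid upper bound.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def proxy_a_distance(Xs, Xt):
    """Heuristic proxy for the A-distance via a domain classifier.

    Returns 2 * (1 - 2 * err), where err is the held-out error of a
    classifier trained to tell source inputs (label 0) from target
    inputs (label 1). A single classifier only lower-bounds the supremum.
    """
    X = np.vstack([Xs, Xt])
    d = np.concatenate([np.zeros(len(Xs)), np.ones(len(Xt))])
    Xtr, Xte, dtr, dte = train_test_split(X, d, test_size=0.5, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, dtr)
    err = 1.0 - clf.score(Xte, dte)
    return 2.0 * (1.0 - 2.0 * err)
```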
In response to concerns for practical generalization bounds in deep learning, Valle-Perez and Louis (2020) put forth seven desiderata for predictive bounds. While these are of interest also for UDA, we concern ourselves primarily with the fifth (non-vacuity of the bound) and sixth (efficient computability) desiderata as they are of paramount importance to achieving practically useful guarantees. In this work, we study UDA bounds with emphasis on how each term influences the following properties when learning with deep neural networks.
1. **Tightness.** Is the term a poor approximation? Is it likely to lead to a loose bound?
2. **Estimability.** Is the term something which we can estimate from observed data?
## 3 **Computability.** Can we tractably compute it for real-world data sets and hypothesis classes?
Next, we give a short summary of existing UDA bounds, with the excellent survey by Redko et al. (2019) as starting point, while evaluating whether they contain non-estimable terms, if the bound was computed in the paper1 and if the bound is computationally tractable for neural networks. We have listed considered bounds in Table 1.
Footnote 1: By this we mean the whole bound. Some of the works listed have computed one or several parts of their bound. However, the computation is generally done for simpler model classes than neural networks.
We will now reason about the bounds' potential to reach our stated desiderata, starting with tractability. We begin by noting that several divergence measures (e.g., \(\mathcal{A}\)-distance, \(H\Delta H\)-distance, Discrepancy distance) are defined as suprema over the hypothesis class \(\mathcal{H}\). Typically, these are intractable to compute for neural nets due to the richness of the class, and approximations would yield lower bounds rather than upper bounds. Several works fail to yield practically computable bounds for neural networks for this reason (Ben-David et al., 2007; Blitzer et al., 2008; Ben-David et al., 2010; Morvant et al., 2012; Mansour et al., 2009; Redko et al., 2019; Cortes and Mohri, 2014; Cortes et al., 2015). There are also some works which deal with so called localized discrepancy which depends on finding a subset of promising classifiers and bounding their performance instead. (Zhang et al., 2020; Cortes et al., 2019) However, this subset is not easy to find in general and as such we view these approaches as intractable also.
| Paper/reference | Divergence | Add/Mult | Non-est. terms | Evaluated bound | Tractable |
| --- | --- | --- | --- | --- | --- |
| Ben-David et al. (2007) | \(\mathcal{A}\)-distance | Add | ✓ | ✗ | ✗ |
| Blitzer et al. (2008) | \(H\Delta H\) | Add | ✓ | ✗ | ✗ |
| Ben-David et al. (2010) | \(H\Delta H\) | Add | ✓ | ✗ | ✗ |
| Morvant et al. (2012) | \(H\Delta H\) | Add | ✓ | ✗ | ✗ |
| Mansour et al. (2009) | Discrepancy dist. | Add | ✓ | ✗ | ✗ |
| Redko et al. (2019) | Discrepancy dist. | Add | ✓ | ✗ | ✗ |
| Kuroki et al. (2019) | S-discrepancy | Add | ✓ | ✗ | ✗ |
| Cortes and Mohri (2014) | Gen. discrepancy | Add | ✓ | ✗ | ✗ |
| Cortes et al. (2015) | Gen. discrepancy | Add | ✓ | ✗ | ✗ |
| **Zhang et al. (2012)** | IPM | Add | ✗\* | ✗ | ✓ |
| Redko (2015) | IPM, MMD | Add | ✓ | ✗ | ✓ |
| Long et al. (2015) | MMD | Add | ✓ | ✗ | ✓ |
| Redko et al. (2017) | IPM | Add | ✓ | ✗ | ✓ |
| Johansson et al. (2019) | IPM | Add | ✓ | ✗ | ✓ |
| Zhang et al. (2019) | Margin disparity | Add | ✓ | ✗ | ✗ |
| Dhouib et al. (2020) | Wasserstein | Add | ✓ | ✗ | ✓ |
| Shen et al. (2018) | Wasserstein | Add | ✓ | ✗ | ✓ |
| Courty et al. (2017b) | Wasserstein | Add | ✓ | ✗ | ✓ |
| Germain et al. (2013) | Domain disagreement | Add | ✓ | ✗ | ✗ |
| Zhang et al. (2020) | Localized discrepancy | Add | ✓ | ✗ | ✗ |
| Cortes et al. (2019) | Localized discrepancy | Add | ✓ | ✗ | ✗ |
| Acuna et al. (2021) | f-divergences | Add | ✓ | ✗ | ✗ |
| **Germain et al. (2016)** | Density ratio | Mult | ✗\* | ✗ | ✓ |
| **Cortes et al. (2010)** | Renyi | Mult | ✗\* | ✗ | ✗ |
| Dhouib and Redko (2018) | \(L^{1},\chi^{2}\) | Mult | ✓ | ✗ | ✗ |

Table 1: Overview of existing UDA bounds with respect to a) measures of domain divergence, b) whether domain and sample generalization terms add or multiply, c) non-estimable terms, d) whether the bounds were computed empirically, e) computational tractability. The highlighted (bold) rows represent a selection of bounds which are possible to estimate under the assumptions we make and which hold promise in fulfilling our other desiderata. ✗\* denotes that under the assumptions made in this work there are no non-estimable terms.
Continuing with estimability, we may remove from consideration also those whose non-estimable term cannot be dealt with without assuming that they are small--an untestable assumption which does not follow from overlap and covariate shift. This immediately disqualifies a large swathe of bounds which all include the joint error of the optimal hypothesis on both domains, or some version thereof, a very common non-estimable term in DA bounds (Kuroki et al., 2019; Redko, 2015; Long et al., 2015; Redko et al., 2017; Johansson et al., 2019; Zhang et al., 2019; Dhouib et al., 2020; Shen et al., 2018; Courty et al., 2017; Germain et al., 2013; Dhouib and Redko, 2018; Acuna et al., 2021). In principle, we might be able to approximate this quantity under overlap, e.g. by using importance sampling. However, this would entail solving a new optimization problem to find a hypothesis which has a low joint error (see discussion after equation 3). If we instead wish to upper bound the term, we must solve an equivalent problem to the one we are trying to solve in the first place.
Three bounds remain: Zhang et al. (2012) use integral probability metrics (IPM) between source and target domains to account for covariate shift in their bound, which are tractable to compute under assumptions on the hypothesis class; Cortes et al. (2010) use importance weights (IW) and Renyi divergences which are well-defined and easily computed under overlap; Germain et al. (2016) use a related metric based on the norm of the density ratio between domains with similar properties. Respectively, the first two results bound sample generalization error using the uniform entropy number and the covering number. These measures are intractable to compute for neural networks, and while they may be upper bounded by the VC dimension (Wainwright, 2019), this is typically large enough to yield uninformative guarantees (Zhang et al., 2021). In contrast, Germain et al. (2016) use a PAC-Bayes analysis which we can apply to neural networks by specifying prior and posterior distributions over network weights. Using Gaussian distributions for both priors and posteriors, the PAC-Bayes bound can be readily computed. Thus, to enable closer comparison and tractable computation, in the coming sections, we unify each bounds' dependence on sample generalization by adapting IW and IPM bounds to the PAC-Bayes framework. For completeness, we include an additional PAC-Bayes bound from the UDA literature, which has a non-estimable term, namely Germain et al. (2013).
The selected bounds are given in Sections 2.3-2.4. Germain et al. (2016) will be referred to as the multiplicative bound (**Mult**), Germain et al. (2013) as the additive bound (**Add**), the adaptation of importance weighting to PAC-Bayes as **IW**; and the adaptation of IPM bounds as **MMD**. Unfortunately, also the resulting PAC-Bayes bounds are uninformative for even simple image classification UDA tasks; see Figure 1. Consistent with our understanding of standard learning with neural networks, _we find both classical PAC and PAC-Bayes bounds vacuous in the tasks most frequently used as empirical benchmarks in papers deriving UDA bounds_. Next, we detail how to make use of data-dependent priors to get the tightest possible bounds.
### Tighter sample generalization guarantees using PAC-Bayes with data-dependent priors
PAC-Bayes theory studies generalization of a posterior distribution \(\rho\) over hypotheses in \(\mathcal{H}\), learned from data, in the context of a prior distribution over hypotheses, \(\pi\). The generalization error in \(\rho\) may be bounded using the divergence between \(\rho\) and \(\pi\) as seen in the following classical result due to McAllester.
Figure 1: Best bounds achieved without data-dependent priors on the MNIST/MNIST-M task using LeNet-5, as well as the target error for the same posterior hypothesis. Note that all the bounds are vacuous, i.e., they are above one.

**Theorem 1** (Adapted from Thm. 2 in McAllester (2013)).: _For a prior \(\pi\) and posterior \(\rho\) on \(\mathcal{H}\), a bounded loss function \(\ell:\mathcal{Y}\times\mathcal{Y}\to[0,1]\) and any fixed \(\gamma,\delta\in(0,1)\), we have w.p. at least \(1-\delta\) over the draw of samples from \(\mathcal{D}\), with \(D_{\mathrm{KL}}(p\|q)\) the Kullback-Leibler (KL) divergence between \(p\) and \(q\),_
\[\mathop{\mathbb{E}}_{h\sim\rho}R_{\mathcal{D}}(h)\leq\frac{1}{\gamma}\mathop{ \mathbb{E}}_{h\sim\rho}\hat{R}_{\mathcal{D}}(h)+\frac{D_{\mathrm{KL}}(\rho\| \pi)+\ln(\frac{1}{\delta})}{2\gamma(1-\gamma)m}\.\]
The bound in Theorem 1 grows loose when prior \(\pi\) and posterior \(\rho\) diverge--when the posterior is sensitive to the training data. When learning with neural networks, \(\pi\) and \(\rho\) are typically taken to be distributions on the weights of the network before and after training. However, the weights of a trained deep neural network will be far away from any uninformed prior after only a few epochs. For this reason, Dziugaite et al. (2021) developed a methodology, based on work by Ambroladze et al. (2007) and Parrado-Hernandez et al. (2012), for learning _data-dependent_ neural network priors by a clever use of sample splitting. To ensure that the bound remains valid, any data which is used to fit the prior must be independent of the data used to evaluate the bound. In this work, we learn \(\pi\) and \(\rho\) following Dziugaite et al. (2021), as described below.
1. A fraction \(\alpha\in[0,1)\) is chosen and the available training data, \(S\), is split randomly into two parts, \(S_{\alpha}\) and \(S\setminus S_{\alpha}\) of size \(\alpha m\) and \((1-\alpha)m\), respectively.
2. A neural network is randomly initialized and trained using stochastic gradient descent on \(S_{\alpha}\) for one epoch. From this we get the weights, \(w_{\alpha}\).
3. The same network is trained, starting from \(w_{\alpha}\), on all of \(S\) until a stopping condition is satisfied. In this work, we terminate training after 5 epochs. We save the weights, \(w_{\rho}\).
4. From \(w_{\alpha}\) and \(w_{\rho}\) we create our prior and posterior from Normal distributions centered on the learned weights, \(\pi=\mathcal{N}(w_{\alpha},\sigma I)\) and \(\rho=\mathcal{N}(w_{\rho},\sigma I)\) respectively. \(\sigma\) is a hyperparameter governing the specificity (variance) of the prior which may be chosen when evaluating.
5. Finally, we use the learned prior and posterior to evaluate the bounds on \(S\setminus S_{\alpha}\); a minimal sketch of steps 1-5 is given below.
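The procedure above can be sketched in code as follows, assuming a generic PyTorch classifier `Net` and a map-style dataset `S` of `(x, y)` pairs; all names here are illustrative placeholders, not the authors' released code.

```python
# Minimal sketch of steps 1-5, assuming `Net` is a user-defined classifier.
import copy
import torch
from torch.utils.data import DataLoader, random_split

def fit_prior_and_posterior(S, alpha=0.3, prior_epochs=1, post_epochs=5):
    m = len(S)
    m_alpha = int(alpha * m)
    S_alpha, S_rest = random_split(S, [m_alpha, m - m_alpha])  # step 1

    net = Net()  # randomly initialized network
    opt = torch.optim.SGD(net.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    def train(loader, epochs):
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(net(x), y).backward()
                opt.step()

    train(DataLoader(S_alpha, batch_size=128, shuffle=True), prior_epochs)
    w_alpha = copy.deepcopy(net.state_dict())  # prior mean (step 2)

    train(DataLoader(S, batch_size=128, shuffle=True), post_epochs)
    w_rho = copy.deepcopy(net.state_dict())    # posterior mean (step 3)

    # pi = N(w_alpha, sigma * I) and rho = N(w_rho, sigma * I) (step 4);
    # the bounds are then evaluated on S_rest only, keeping them
    # independent of the data used to fit the prior (step 5).
    return w_alpha, w_rho, S_rest
```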
### PAC-Bayes bounds from the domain adaptation literature
Both the additive and multiplicative PAC-Bayes UDA bounds described in Section 2.1 are defined in Germain et al. (2020) and make use of a decomposition of the zero-one risk into the _expected joint error_
\[e_{\mathcal{D}}(\rho)=\mathop{\mathbb{E}}_{h,h^{\prime}\sim\rho\times\rho} \mathop{\mathbb{E}}_{x,y\sim\mathcal{D}}\ell(h(x),y)\ell(h^{\prime}(x),y),\]
which measures how often two classifiers drawn from \(\rho\) make the same errors, and the _expected disagreement_
\[d_{\mathcal{D}_{x}}(\rho)=\mathop{\mathbb{E}}_{h,h^{\prime}\sim\rho\times\rho }\mathop{\mathbb{E}}_{x\sim\mathcal{D}_{x}}\ell(h(x),h^{\prime}(x)),\]
which measures how often two classifiers disagree on the labeling of the same point. Empirical variants \(\hat{e}_{\mathcal{D}}(\rho)\) and \(\hat{d}_{\mathcal{D}_{x}}(\rho)\) replace expectations with sample averages analogous to equation 2.
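Both quantities admit simple plug-in estimates from a finite number of posterior draws. A minimal numpy sketch, assuming `preds` holds the labels assigned by \(k\) classifiers sampled from \(\rho\) to the same \(m\) points; for simplicity, the averages run over all ordered pairs of draws, including identical ones:

```python
import numpy as np

def empirical_joint_error(preds, y):
    # preds: (k, m) predicted labels of k posterior draws; y: (m,) true labels
    errs = (preds != y[None, :]).astype(float)           # zero-one losses
    # average of l(h(x), y) * l(h'(x), y) over pairs (h, h') and points x
    return np.mean(errs[:, None, :] * errs[None, :, :])

def empirical_disagreement(preds):
    # how often two sampled classifiers label the same (unlabeled) point differently
    dis = (preds[:, None, :] != preds[None, :, :]).astype(float)
    return np.mean(dis)
```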
In the bound of Germain et al. (2016), the sample generalization component (the KL-divergence between prior and posterior) is multiplied by a domain shift component (a supremum density ratio).
**Theorem 2** (Multiplicative bound, Germain et al. (2016)).: _For any real numbers \(a,b>0\) and \(\delta\in(0,1)\), it holds, under Assumption 1, with probability at least \(1-\delta\) over labeled source samples \(S\sim(\mathcal{S})^{m}\) and unlabeled target samples \(T_{x}\sim(\mathcal{T}_{x})^{n}\), with constants \(a^{\prime}=\frac{a}{1-e^{-a}}\), \(b^{\prime}=\frac{b}{1-e^{-b}}\), for every posterior \(\rho\) on \(\mathcal{H}\) that_
\[\mathop{\mathbb{E}}_{h\sim\rho}R_{\mathcal{T}}(h)\leq a^{\prime}\frac{1}{2}\hat{d}_{\mathcal{T}_{x}}(\rho)+b^{\prime}\beta_{\infty}(\mathcal{T}\|\mathcal{S})\,\hat{e}_{\mathcal{S}}(\rho)+\Big(\frac{a^{\prime}}{na}+\frac{b^{\prime}\beta_{\infty}(\mathcal{T}\|\mathcal{S})}{mb}\Big)\Big(2D_{\mathrm{KL}}(\rho\|\pi)+\ln\frac{2}{\delta}\Big)+\eta_{\mathcal{T}\setminus\mathcal{S}}\,\]
_with \(\beta_{\infty}(\mathcal{T}\|\mathcal{S})=\sup_{x\in\mathrm{supp}(\mathcal{S}_{x})}\mathcal{T}_{x}(x)/\mathcal{S}_{x}(x)\) (see footnote 2), and \(\eta_{\mathcal{T}\setminus\mathcal{S}}=\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{T}}[\mathds{1}[(x,y)\notin\mathrm{supp}(\mathcal{S})]]\sup_{h\in\mathcal{H}}R_{\mathcal{T}\setminus\mathcal{S}}(h)\)._
Footnote 2: Terms \(\beta_{q}\) based on the \(q\):th moment of the density ratio for \(q<\infty\) are considered in (Germain et al., 2020) but not here.
The bound is simplified slightly in our setting due to Assumption 1, as \(\eta_{\mathcal{T}\setminus\mathcal{S}}\) will be \(0\). By Bayes' rule, \(\beta_{\infty}\) can be computed as the maximum ratio between the conditional probabilities of an input being sampled from \(\mathcal{T}\) or \(\mathcal{S}\). The second result due to Germain et al. is additive in its interaction between the domain and sample generalization terms.
**Theorem 3** (Additive bound, Germain et al. (2013)).: _For any real numbers \(\omega,\gamma>0\) and \(\delta\in(0,1)\), with probability at least \(1-\delta\) over labeled source samples \(S\sim(\mathcal{S})^{m}\) and unlabeled target samples \(T_{x}\sim(\mathcal{T}_{x})^{m}\); for every posterior \(\rho\) on \(\mathcal{H}\), it holds with constants \(\omega^{\prime}=\frac{\omega}{1-e^{-\omega}}\) and \(\gamma^{\prime}=\frac{2\gamma}{1-e^{-2\gamma}}\) that_
\[\mathop{\mathbb{E}}_{h\sim\rho}R_{\mathcal{T}}(h)\leq\mathop{\mathbb{E}}_{h\sim\rho}\omega^{\prime}\hat{R}_{\mathcal{S}}(h)+\gamma^{\prime}\frac{1}{2}\widehat{\mathrm{Dis}}_{\rho}(S,T_{x})+\Big(\frac{\omega^{\prime}}{\omega}+\frac{\gamma^{\prime}}{\gamma}\Big)\frac{D_{\mathrm{KL}}(\rho\|\pi)+\ln\frac{3}{\delta}}{m}+\lambda_{\rho}+\frac{1}{2}(\gamma^{\prime}-1),\]
_where \(\widehat{\mathrm{Dis}}_{\rho}(S,T_{x})=|\hat{d}_{\mathcal{T}_{x}}(\rho)-\hat{d}_{\mathcal{S}_{x}}(\rho)|\) is the empirical domain disagreement and \(\lambda_{\rho}=|e_{\mathcal{T}}(\rho)-e_{\mathcal{S}}(\rho)|\)._
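Both bounds above depend on the worst-case density ratio \(\beta_{\infty}\). Under overlap, it can be estimated with a probabilistic domain classifier via Bayes' rule, as noted after Theorem 2. A minimal sketch, assuming a reasonably calibrated logistic-regression domain classifier; in our benchmark tasks below, the mixture construction instead makes \(\beta_{\infty}\) known exactly:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_beta_inf(X_source, X_target):
    # Train a domain classifier giving p(target | x) on the pooled inputs.
    X = np.vstack([X_source, X_target])
    d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    # T_x(x) / S_x(x) = [p(T|x) / p(S|x)] * [p(S) / p(T)] by Bayes' rule;
    # the supremum is over the source support, so evaluate on X_source.
    p_t = clf.predict_proba(X_source)[:, 1]
    prior_ratio = len(X_source) / len(X_target)  # empirical p(S) / p(T)
    return np.max(p_t / (1.0 - p_t) * prior_ratio)
```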
Next we will combine the main techniques used to account for domain shift in Cortes et al. (2010) and Zhang et al. (2012) with the PAC-Bayes analysis of Theorem 1, producing two corollaries for the UDA setting.
### Adapting the classical PAC-Bayes bound to unsupervised domain adaptation
First, we adapt the bound in Theorem 1 to UDA by incorporating importance weighting (Shimodaira, 2000; Cortes et al., 2010). We define a weighted loss \(\ell^{w}(h(x),y)=w(x)\ell(h(x),y)\), where \(w(x)=\mathcal{T}_{x}(x)/\mathcal{S}_{x}(x)\) and \(\ell\) is the zero-one loss. The risk of a hypothesis using this loss is denoted by \(R^{w}\).
**Corollary 1**.: _(IW bound) Consider the conditions in Theorem 1 and let \(\beta_{\infty}=\sup_{x\in\mathcal{X}}w(x)\). We have, for any choice of \(\gamma,\delta\in(0,1)\) and any pick of prior \(\pi\) and posterior \(\rho\) on \(\mathcal{H}\),_
\[\mathop{\mathbb{E}}_{h\sim\rho}R_{\mathcal{T}}(h)\leq\frac{1}{\gamma}\mathop{\mathbb{E}}_{h\sim\rho}\hat{R}_{\mathcal{S}}^{w}(h)+\beta_{\infty}\frac{D_{\mathrm{KL}}(\rho\|\pi)+\ln(\frac{1}{\delta})}{2\gamma(1-\gamma)m}\.\]
Proof.: Since Theorem 1 holds for loss functions mapping onto \([0,1]\), we divide the weighted loss \(\ell^{w}\) by the maximum weight, \(\beta_{\infty}\). The argument then follows naturally when we apply Theorem 1 with the loss function \(\ell^{w}/\beta_{\infty}\).
\[\mathop{\mathbb{E}}_{h\sim\rho,(x,y)\sim\mathcal{T}}\Big{[}\frac{\ell(h(x),y)} {\beta_{\infty}}\Big{]}=\mathop{\mathbb{E}}_{h\sim\rho,(x,y)\sim\mathcal{S}} \Big{[}\frac{\ell^{w}(h(x),y)}{\beta_{\infty}}\Big{]}\leq\frac{1}{\gamma\beta _{\infty}}\hat{R}_{\mathcal{S}}^{w}+\frac{D_{\mathrm{KL}}(\rho\|\pi)+\ln( \frac{1}{\delta})}{2\gamma(1-\gamma)m}\]
The first equality holds due to Assumption 1 and the definitions of \(w\) and \(\ell^{w}\).
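For concreteness, the resulting bound can be evaluated directly from samples. A minimal sketch, assuming exact importance weights and the zero-one losses of \(k\) posterior draws on the source sample are available (names illustrative):

```python
import numpy as np

def iw_bound(losses, weights, kl, m, delta=0.05, gamma=0.5):
    # losses: (k, m) zero-one losses of k posterior draws on source samples
    # weights: (m,) importance weights w(x) = T_x(x) / S_x(x); kl: KL(rho || pi)
    beta_inf = weights.max()
    weighted_risk = np.mean(losses * weights[None, :])  # approximates E_rho R^w_S
    return weighted_risk / gamma + beta_inf * (kl + np.log(1.0 / delta)) / (
        2.0 * gamma * (1.0 - gamma) * m)
```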
Now we continue by applying an argument based on integral probability metrics (IPM) drawn from Zhang et al. (2012) and similar works. IPMs, such as the kernel maximum mean discrepancy (MMD) (Gretton et al., 2012) and the Wasserstein distance, have been used to give tractable and even differentiable bounds on target error in UDA to guide algorithm development (Courty et al., 2017; Long et al., 2015). The kernel MMD is an IPM, defined as follows in terms of its square, given a reproducing kernel \(k(\cdot,\cdot):\mathcal{X}\times\mathcal{X}\to\mathbb{R}\)
\[\mathbf{MMD}_{k}(P,Q)^{2}=\mathop{\mathbb{E}}_{X\sim P,X^{\prime}\sim P}[k(X,X^{\prime})]-2\mathop{\mathbb{E}}_{X\sim P,Y\sim Q}[k(X,Y)]+\mathop{\mathbb{E}}_{Y\sim Q,Y^{\prime}\sim Q}[k(Y,Y^{\prime})].\]
Here, \(X\) and \(Y\) are random variables, and \(X^{\prime}\) is an independent copy of \(X\) with the same distribution and \(Y^{\prime}\) is an independent copy of \(Y\). This measures a notion of discrepancy between the distributions \(P\) and \(Q\) based on their samples.
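A minimal numpy sketch of the (biased) empirical squared MMD with a Gaussian kernel, following the definition above; `X` and `Y` are `(m, d)` and `(n, d)` sample arrays (names illustrative):

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def mmd2_biased(X, Y, bandwidth):
    # biased plug-in estimate: includes the diagonal terms k(x, x), k(y, y)
    kxx = gaussian_kernel(X, X, bandwidth).mean()
    kxy = gaussian_kernel(X, Y, bandwidth).mean()
    kyy = gaussian_kernel(Y, Y, bandwidth).mean()
    return kxx - 2.0 * kxy + kyy
```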
We combine the MMD with the bound from McAllester (2013) to arrive at what we will call the **MMD** bound.
**Corollary 2**.: _(MMD bound) Let \(\overline{\ell}_{h}(x)=\mathbb{E}[\ell(h(x),Y)\mid X=x]\) be the expected pointwise loss at \(x\in\mathcal{X}\) and assume that, for any \(h\in\mathcal{H}\), \(\overline{\ell}_{h}\) can be uniformly bounded by a function in a reproducing-kernel Hilbert space \(\mathcal{L}\) with kernel \(k\) such that \(\forall x,x^{\prime}\in\mathcal{X}:0\leq k(x,x^{\prime})\leq K\). Then, under Assumption 1, with \(\gamma,\delta\in(0,1)\) and probability \(\geq 1-\delta\) over labeled source samples \(S\sim(\mathcal{S})^{m}\) and unlabeled target samples \(S^{\prime}_{x}\sim\mathcal{T}_{x}^{m}\),_
\[\mathop{\mathbb{E}}_{h\sim\rho}R_{\mathcal{T}}(h)\leq\frac{1}{\gamma}\mathop{\mathbb{E}}_{h\sim\rho}\hat{R}_{\mathcal{S}}(h)+\frac{D_{\mathrm{KL}}(\rho\|\pi)+\log\frac{2}{\delta}}{2\gamma(1-\gamma)m}+\widehat{\mathrm{MMD}}_{k}(\mathcal{S},\mathcal{T})+2\sqrt{\frac{K}{m}}\Big(2+\sqrt{\log\frac{4}{\delta}}\Big)\, \tag{4}\]
_where \(\widehat{\mathrm{MMD}}_{k}(\mathcal{S},\mathcal{T})\) is the biased empirical estimate of the maximum mean discrepancy between \(\mathcal{S}\) and \(\mathcal{T}\) computed from \(S_{x}\) and \(S^{\prime}_{x}\), see eq. (2) in Gretton et al. (2012)._
Proof.: By assumption, for any hypothesis \(h\in\mathcal{H}\),
\[R_{\mathcal{T}}(h)=R_{\mathcal{S}}(h)+\mathbb{E}_{\mathcal{T}}[\ell(h(X),Y)]-\mathbb{E}_{\mathcal{S}}[\ell(h(X),Y)]\leq R_{\mathcal{S}}(h)+\sup_{l\in\mathcal{L}}\left|\mathbb{E}_{\mathcal{T}_{x}}[l(X)]-\mathbb{E}_{\mathcal{S}_{x}}[l(X)]\right|\.\]
The inequality holds because \(\mathbb{E}_{\mathcal{S}}[\ell(h(x),Y)\mid X=x]=\mathbb{E}_{\mathcal{T}}[\ell(h(x),Y)\mid X=x]\) due to Assumption 1 (covariate shift) and the assumption that \(\overline{\ell}_{h}\) is uniformly bounded by a function in \(\mathcal{L}\). The RHS of the inequality is precisely \(\mathrm{MMD}_{\mathcal{L}}\), and the full result follows by linearity of expectation (over \(\rho\)), since the MMD term is independent of \(h\), and application of Theorem 1. The right-most term in equation 4 follows from a finite-sample bound on the MMD, Theorem 7 in Gretton et al. (2012), and a union bound w.r.t. \(\delta\).
Representation learning. When no function family \(\mathcal{L}\) satisfying the conditions of Corollary 2 is known, an additional unobservable error term must be added to the bound to account for the excess error. Bounds based on the MMD and other IPMs have been used heuristically in representation learning to find representations which minimize the induced distance between domains and achieve better domain adaptation (Long et al., 2015). However, even if overlap and covariate shift hold in the input space, they are not guaranteed to hold in the learned representation. For this reason, we do not explore such approaches even though they might hold some promise. An example is recent work by Wu et al. (2019), which provided some interesting ideas about assumptions which constrain the structure of the source and target distributions under a specific representation mapping. Exploring such ideas further is an interesting direction for future work.
## 3 Experimental setup
We describe the experimental setup briefly, leaving more details to Appendix A. We examine the dynamics of the chosen bounds (**Add**, **Mult**, **MMD**, **IW**), which parts of the bounds dominate their value, the effect of varying the amount of data used to inform the prior, and whether the bounds have utility as early stopping indicators or for model selection. In addition, we ask whether these bounds are practically useful as guarantees, and if not, what is lacking and what future directions should be explored to reach our desiderata. Since the term \(\lambda_{\rho}\) in **Add** (Theorem 3) depends on target labels, we give access to these for purposes of analysis and illustration, keeping in mind that the bound is not fully estimable in practice.
We perform experiments on two image classification tasks, described further below, using three different neural network architectures: a modified version of LeNet-5 due to Zhou et al. (2019), a fully connected network, and a version of ResNet50 (He et al., 2016). These specific architectures were picked as examples due to their varying parameter size and complexity. The experiments are repeated for 5 distinct random seeds, with the exception of the image-size experiment, which we conduct for a single seed.
We learn prior and posterior network weights as described in Section 2.2. When both sets of weights have been trained, we use them as the means for prior and posterior distributions \(\pi\) and \(\rho\), chosen to be isotropic Gaussians with equal variance, \(\sigma\). Each bound is calculated for each pair of prior and posterior. To estimate the expectation over the posterior, we sample 5 pairs of classifiers from the posterior and average their bound components (e.g., source risk) to get an estimate of the bound parts. We then perform a small optimization step to choose the free parameters of the different bounds through a simple grid search over a small number of combinations. We use the combination that produces the lowest minimum bound and account for testing all \(k\) combinations by applying a union bound argument, modifying the certainty parameter \(\delta\) to \(\delta/k\). When calculating the MMD, we use the linear statistic for the MMD as detailed in Gretton et al. (2012) with the kernel \(k(x,y)=\exp(\frac{-\left\|x-y\right\|^{2}}{2\kappa^{2}})\). This calculation is averaged over 10 random shuffles of the data for a chosen bandwidth, \(\kappa>0\). The process is repeated for different choices of bandwidth (see Appendix for details) and the maximum of the results is taken as the value of the MMD. Note that we calculate the MMD in the input space and have not adjusted it for sample variance. When calculating the importance weights, we assume here that the maximum weight, \(\beta_{\infty}\), is uniformly bounded as
the bound would potentially be vacuous otherwise. In addition, we will consider the importance weights to be perfectly computed for simplicity.
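The linear-time MMD computation just described can be sketched as follows; the bandwidth grid is illustrative rather than the exact set used in our experiments:

```python
import numpy as np

def mmd2_linear(X, Y, bandwidth, n_shuffles=10, rng=None):
    # Linear statistic of Gretton et al. (2012): an estimate of the squared
    # MMD built from non-overlapping consecutive pairs of shuffled samples.
    rng = rng or np.random.default_rng(0)
    m = min(len(X), len(Y)) // 2 * 2
    k = lambda a, b: np.exp(-((a - b) ** 2).sum(-1) / (2.0 * bandwidth ** 2))
    vals = []
    for _ in range(n_shuffles):
        x = X[rng.permutation(len(X))[:m]]
        y = Y[rng.permutation(len(Y))[:m]]
        x1, x2, y1, y2 = x[0::2], x[1::2], y[0::2], y[1::2]
        vals.append(np.mean(k(x1, x2) + k(y1, y2) - k(x1, y2) - k(x2, y1)))
    return np.mean(vals)

def mmd2_linear_max(X, Y, bandwidths=(0.5, 1.0, 2.0, 5.0, 10.0)):
    # repeat over bandwidths and take the maximum, as described above
    return max(mmd2_linear(X, Y, b) for b in bandwidths)
```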
We construct two tasks from standard data sets which both fulfill Assumption 1 by design, one based on digit classification and one real-world task involving classification of X-ray images. These are meant to represent realistic UDA tasks where neural networks are the model class of choice.
### Task 1: MNIST mixture
MNIST (Lecun, 1998) is a digit classification data set containing 70000 images, widely used as a benchmark for image classifiers. MNIST-M was introduced by Ganin et al. (2016) to study domain adaptation and is a variation of MNIST where the digits have been blended with patches taken from images from the BSDS500 data set (Arbelaez et al., 2011). We use MNIST and MNIST-M to construct source and target domains, both of which contain samples from each data set, but with different label densities for images from MNIST and MNIST-M. To create the source data set, we start with images labeled "0" by adding 1/12th of the samples from MNIST-M and 11/12th of the samples from MNIST; we then increase the proportion from MNIST-M by 1/12 for each subsequent label, "1", "2", and so on. The complement of the source samples is then used as the target data, see Figure 2 for an illustration. We make this into a binary classification problem by relabeling digits 0-4 to "0" and the rest to "1". The supremum density ratio \(\beta_{\infty}\approx 11\) (see Theorem 2) is known and the mixture guarantees overlap in the support of the domains (Assumption 1).
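The label-dependent mixing proportions, and the resulting worst-case density ratio, follow directly from this construction; the sketch below assumes the two data sets contribute (roughly) equally many samples per digit, so that source and target are of comparable size:

```python
# Fraction of MNIST-M samples mixed into the source, per digit label.
mnistm_fraction = {digit: (digit + 1) / 12 for digit in range(10)}
# The target gets the complementary samples, so the density ratio at a point
# is (up to the relative domain sizes) (1 - f) / f or f / (1 - f).
beta_inf = max(
    max(f / (1 - f), (1 - f) / f) for f in mnistm_fraction.values()
)
print(beta_inf)  # 11.0, attained for digit "0" MNIST-M images
```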
### Task 2: X-ray mixture
ChestX-ray14 (Wang et al., 2017) and CheXpert (Irvin et al., 2019) are data sets of chest X-rays labeled according to the presence of common thorax diseases. The data sets contain 112,120 and 224,316 labeled images, respectively. Since the two data sets do not have full overlap in labels, we use the subset of labels for which there is overlap. The labels which occur in both data sets are: No Finding, Cardiomegaly, Edema, Consolidation, Atelectasis, and Pleural Effusion. In addition, there is an uncertainty parameter present in the CheXpert data set which indicates how certain a label is. As we consider binary classification, we set all labels that are uncertain to positive. Therefore, a single image in the CheXpert data set might have multiple associated labels. For most experiments we resize all the images in the data sets to 32x32 to be able to use the same architectures for both tasks. However, we also conduct a small experiment with ResNet50 where larger image sizes are considered.
We take 20% of ChestX-ray14 and add it to CheXpert to create our source data set. The target is the remaining part of ChestX-ray14, which amounts to 89,696 images compared to the 246,072 of the source. With this, we know from Appendix B that \(\beta_{\infty}\approx 11\) and we have overlap. We turn this into a binary classification problem in a one-vs-rest fashion by picking one specific label to identify, relabeling images with "1" if it is present, and setting the labels of images with any other finding to "0". In this work, we consider only the task of classifying "No Finding".

Figure 2: Example of source label densities in the task with the mix of MNIST and MNIST-M samples. An example from each of the two data sets can be seen on the right, the upper one being from MNIST and the lower one from MNIST-M. The target domain is the complement of source samples.
## 4 Results
As expected, when we compute bounds with data-dependent priors, we achieve bounds which are substantially tighter than without them, as seen clearly by comparing Figure 1 to Figure 3. We also observe that the additive bound (**Add**) due to Germain et al. (2013) is the tightest overall for both tasks, followed closely by the **IW** bound. The latter is not so surprising: when we apply data-dependent priors, there is effectively a point in training where the \(D_{\text{KL}}\)-divergence between prior and posterior networks is very small. Moreover, due to overlap, the weighted source error is equal to the target error in expectation. Thus the only sources of looseness left are the error in the approximation of the expectation over the posterior and the \(\ln\frac{1}{\delta}\) term, which is very small here. We can also see in Figures 4(a) and 4(b) that the minimum of the **IW** bound is often very close to the minimum of the additive bound. However, unlike **IW**, the **Add** bound relies on access to target labels in order to compute the term \(\lambda_{\rho}\) (see further discussion below).
The evolution of the different bounds during training is shown for both tasks in Figure 6. Of course, all bounds will increase at some point as training progresses and the prior and posterior diverge further from each other and \(D_{\text{KL}}\) increases. While **Add** is consistently very tight, we note that the \(\lambda_{\rho}\) term, which we cannot observe, might be a significant part of the bound when the \(D_{\text{KL}}\)-term is low, as we can see in Figure 3(b). This is an issue for the additive bound: if the variance of the posterior is sufficiently small, then the disagreement will be low, using informed priors will make the \(D_{\text{KL}}\) small, and using neural networks often leads to a low source error. This leaves only the constant \(\log\frac{1}{\delta}\) term and the unobservable \(\lambda_{\rho}\) term, and in those situations the bound might even be dominated by the unobservable term.

Figure 3: The tightest bounds achieved on the LeNet-5 architecture. This illustrates the tightening effect of using data-dependent priors. Non-vacuous bounds are obtained when using data to inform the prior. The shaded area between 0.5 and 1 is where a random classifier would perform on average, and the shaded area above 1 signifies vacuity.

Figure 4: An illustration of constituent parts for three of the bounds with the fully connected architecture on the MNIST mixture task. \(\sigma=0.03\).
The multiplicative bound of Germain et al. (2016) (**Mult**) suffers from the amplification of the source error \(e_{S}\) and the \(D_{\text{KL}}\) term by the factor \(\beta_{\infty}\), and is generally larger than the **Add** and **IW** bounds. Conceptually, the **Mult** and **IW** bounds are similar, but in the former, the loss is multiplied uniformly by the largest weight. For tasks where certain inputs with high loss are more uncommon in the target domain than in the source domain, this is especially detrimental. The **MMD** bound is initially dominated by the MMD distance between inputs from the source and target domains, as shown in Figure 4(c), which is large and independent of the learned hypothesis. As such, this term cannot be reduced by optimization without, for example, computing it in representation space (Long et al., 2015). With this approach, unobservable errors due to non-invertible representations must be accounted for (Johansson et al., 2019).
Experiments on using the bounds for early stopping and model selection with different architectures yield the results seen in Figure 5. We can see that the errors achieved by terminating training at the smallest bound value (colored markers) do not coincide with the best-achieved target performance during training (denoted by the vertical dotted lines). Clearly, the bounds are not tight enough for early stopping. This is a result of the sample generalization term \(D_{\text{KL}}\) increasing during training. For other analyses, this need not be the case. For larger architectures, the early-stopped models are closer to the best target models. If we instead examine the same figure with a focus on model selection, we find something interesting: the bounds might be useful in this regard, as they consistently have lower values for architectures/models which perform well. However, to say this conclusively, a more thorough study with different learning setups must be done. Both of the previous observations should be contextualised with the fact that the domain shift terms are not dependent on the model as such, but amplify looseness in the case of the **Mult** and **IW** bounds. For the **MMD** and **Add** bounds, increased looseness during training is an artifact only of sample generalization.
As we can see in Figure 7(a), when we vary the size of the images we give to the ResNet50 architecture, the error seems to decrease for the larger image sizes. However, the minimum bound value achieved does not seem to follow the same trend consistently. This is likely the result of the number of epochs used to train both the prior and the posterior. In Figure 7(b), we see that the choice of prior sample proportion \(\alpha\) has some effect on the smallest bound achieved. We also see the minimum bound values grow for large values of \(\alpha\), indicating that the remaining data is better spent calculating the bound than informing the prior in this case. We can also infer from Figure 1 that using no data to inform the prior is worse than using some. The overall shape is consistent with the results reported in Dziugaite et al. (2021, Figure 1).
Figure 5: An illustration of the minimum bound value achieved by each of the three architectures on both tasks. The lowest target error achieved is indicated by a black marker with a vertical dotted line through it.
## 5 Discussion
From our survey of the literature, it is clear that only a small handful of analyses of UDA generalization can be informative as practical bounds on target domain performance. The main obstacle to computing existing bounds is that they are vacuous or intractable to compute for the kinds of models which perform the best on common UDA benchmarks--deep neural networks. A potential remedy is the use of PAC-Bayes bounds, which perform well once they are applied with data-dependent priors; without these they are vacuous. In our experiments, the **Add** bound with the unobservable term is the tightest, which is unsurprising given its dependence on target labels. Furthermore, we note that the application of importance weights also performs very well, as the setting is sufficiently benign. As such, we can say that in this setting we achieve the desiderata of a tractably computable, tight bound using the **IW** bound. However, recall that the guarantee we get is on a distribution over classifiers and not on one specific classifier. Note also that the **IW** bound can become vacuous in situations where the worst-case density ratio, \(\beta_{\infty}\), is large and either the \(D_{\text{KL}}\) term or the errors on underrepresented classes are large enough.
We found that the lowest value of the bounds achieved during training does not in general correspond to the best performing model on target. This tells us that these bounds are not useful metrics for early stopping. Further, the findings for using bound values for model selection are inconclusive; more experiments have to be conducted to answer this question satisfactorily. During training, we see that the dynamics are dominated by the KL-divergence term, inherent to PAC-Bayes analysis, as training progresses. This reinforces our view that these bounds might be useful for getting performance estimates of methods at one particular point, and not over several points during training, if we do not have access to a large sample. This issue might be ameliorated by regularizing towards the prior during training, although this introduces yet another optimization, as we now have to find the optimal regularization strength. In addition, it is not certain whether this will have any adverse effects on the final performance of the learned classifier.

Figure 6: Bounds evaluated at different points during training of the LeNet-5 architecture. \(\alpha=0.3\), \(\sigma=0.03\). The shaded areas represent one standard deviation.

Figure 7: a) shows results of varying image sizes when training the ResNet50 network on the X-ray task. b) shows how the minimum value of the bound varies with \(\alpha\).
A limitation of this work is that the bounds are cumbersome to compute, and it is possible to perform several optimizations in the process of producing them. We have tried to do as few as possible in the name of practicality; we list some of the possible further optimizations in Appendix A for the reader's consideration. The impracticality of computing PAC-Bayes bounds is a known issue, addressed in part by Viallard et al. (2021), who introduce an approach that removes the computation of the expectation over the posterior. In addition, in this work the computation of test errors has dominated the computation time. To produce the results, one has to compute the predictions at least 50 times (5 pairs of sampled models from the posterior and 5 random seeds) for each datapoint in the bound for a single choice of prior. This will naturally consume increasing amounts of time with larger data sets.
Furthermore, the overlap assumption will not hold for all real-world applications. In fact, many of the benchmarks for algorithm development, such as the SVHN\(\rightarrow\)MNIST task (Ganin et al., 2016), blatantly violate overlap, since images of house numbers and handwritten digits differ vastly in pixel space. Examples where overlap holds by definition include when the target domain represents a subpopulation of a larger population given by the source domain, e.g., women (target) among all patients (source) with a medical condition. Although an easy learning problem on its face, the optimal model in the full population may not be optimal for the subpopulation. Even when overlap is violated, many share the intuition that overlap may hold in a transformed space (Wu et al., 2019), representative of the core aspects of the problem--a digit is a digit, whether on a house or a postcard.
The strictness of the overlap assumption has been studied by D'Amour et al. (2021), who found that even for Gaussian distributions with insubstantial differences in mean parameters, overlap vanishes in high dimensions. Motivated by this fact, we might wish to adopt relaxed versions of our assumptions, or completely novel ones, which still guarantee consistent estimation. A first step could be to require overlap only in a transformed space, not in the input space, as in Wu et al. (2019), or to require overlap only in specific regions and leverage assumptions on "closeness" in the other regions, as in Johansson et al. (2019). Further, task-specific assumptions are likely needed for a more complete description of out-of-distribution generalization; task-specific in the sense that the assumptions will depend on the structure of the problem and the data-generating process (Hansen, 2008), among other considerations. Overcoming this gap is an important direction of future study.
Another limitation of this work is that the hypotheses do not optimize for adaptation to the target domain, which might be achieved through representation learning as in Ganin et al. (2016) or through minimization of a weighted loss (Shimodaira, 2000). Our setting is representative of tasks where the target domain is unknown during training but known when computing the bounds. Further, in this work we have assumed that we are able to estimate the importance weights exactly, which may not be feasible in high-dimensional settings. In addition, there is no guarantee that the estimation error of the weights is small, and thus even a small misestimation may have quite large implications for the resulting bound.
Future work regarding generalization bounds should preferably comment upon the usefulness of the bound as a practical guarantee for performance, which is something that is often lacking. Ideally, this would extend to explicit calculation whenever the bound is possible to compute. New bounds are often used as inspiration for new algorithms which are hoped to result in more generalizable models. However, this is seldom guaranteed by theory and is verified only empirically in limited settings.
Our results offer indications for how to obtain tractable and tight bounds for neural networks used in UDA tasks with available tools. If overlap can be assumed to hold, then use the **IW** bound, estimate importance weights using density estimation (Sugiyama et al., 2012) or probabilistic classifiers, and apply data-dependent priors. The amount of data to use, how long to train the prior, etc., are all task-dependent, and thus some engineering is necessary to pick optimal values. If overlap cannot be assumed, the most promising approach to get bounds which fulfill our desiderata is to use the **MMD** bound, as this does not technically rely on overlap and is tractable to compute for neural networks. This relies on the added assumption that the pointwise loss is bounded by a function in the associated reproducing-kernel Hilbert space, which may or may not hold. The nature of this assumption makes it less useful, since no test for it is available absent overlap, and it is similar in nature to assuming that the joint optimal error is small. However, if the function under estimation is believed to be smooth, the assumption is more plausible. In conclusion, it is clear that the general case demands new research, and alternative, task-specific assumptions, to allow tight performance guarantees for realistic problems. In either setting, we conjecture that the tightest bounds will be coupled to the training procedure.
## Acknowledgements
This work was supported in part by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
The computations and data handling were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at Chalmers Centre for Computational Science and Engineering (C3SE) partially funded by the Swedish Research Council through grant agreement no. 2018-05973. Mikael Ohman at C3SE is acknowledged for his assistance concerning technical and implementation aspects in making the code run on the C3SE resources.
|
2306.12818 | StrainTensorNet: Predicting crystal structure elastic properties using
SE(3)-equivariant graph neural networks | Accurately predicting the elastic properties of crystalline solids is vital
for computational materials science. However, traditional atomistic scale ab
initio approaches are computationally intensive, especially for studying
complex materials with a large number of atoms in a unit cell. We introduce a
novel data-driven approach to efficiently predict the elastic properties of
crystal structures using SE(3)-equivariant graph neural networks (GNNs). This
approach yields important scalar elastic moduli with the accuracy comparable to
recent data-driven studies. Importantly, our symmetry-aware GNNs model also
enables the prediction of the strain energy density (SED) and the associated
elastic constants, the fundamental tensorial quantities that are significantly
influenced by a material's crystallographic group. The model consistently
distinguishes independent elements of SED tensors, in accordance with the
symmetry of the crystal structures. Finally, our deep learning model possesses
meaningful latent features, offering an interpretable prediction of the elastic
properties. | Teerachote Pakornchote, Annop Ektarawong, Thiparat Chotibut | 2023-06-22T11:34:08Z | http://arxiv.org/abs/2306.12818v2 | StrainNet: Predicting crystal structure elastic properties using SE(3)-equivariant graph neural networks
###### Abstract
Accurately predicting the elastic properties of crystalline solids is vital for computational materials science. However, traditional atomistic scale _ab initio_ approaches are computationally intensive, especially for studying complex materials with a large number of atoms in a unit cell. We introduce a novel data-driven approach to efficiently predict the elastic properties of crystal structures using SE(3)-equivariant graph neural networks (GNNs). This approach yields important scalar elastic moduli with the accuracy comparable to recent data-driven studies. Importantly, our symmetry-aware GNNs model also enables the prediction of the strain energy density (SED) and the associated elastic constants, the fundamental tensorial quantities that are significantly influenced by a material's crystallographic group. The model consistently distinguishes independent elements of SED tensors, in accordance with the symmetry of the crystal structures. Finally, our deep learning model possesses meaningful latent features, offering an interpretable prediction of the elastic properties.
**Keywords:** strain energy tensor, elastic constants, crystallographic group, equivariant neural networks, density functional theory
**Open data:** The dataset used in this work has been made available at
[https://github.com/trachote/predict_elastic_tensor](https://github.com/trachote/predict_elastic_tensor)
## I Introduction
The elastic properties of crystalline solids, such as the elastic constants, bulk modulus, and shear modulus, are important macroscopic quantities that determine materials' mechanical characteristics. Computational studies of these properties can provide theoretical guidelines for understanding various phenomena in solid materials, e.g., the mechanical stability of crystal structures [1; 2], pressure-driven deformation and phase transitions of materials [3; 4], the response of materials to sound wave propagation [5; 6], and the hardness of materials [7; 8; 9], to name a few. From an atomistic scale description, _ab initio_ approaches based on density functional theory have been employed to investigate the macroscopic elastic properties of materials, yielding, for example, elastic constants and elastic moduli that are in agreement with experimental results [10; 11; 12; 13].
Three atomistic scale computational methods are usually adopted to calculate the elastic constants of crystal structures: an energy-based method [14], a stress-strain method [15; 10], and a calculation from the group velocity [16; 17; 18]. The energy-based and stress-strain methods are more standard, deriving the elastic constants from an energy and a stress tensor, respectively, acquired from an _ab initio_ calculation. Although both methods yield comparable predictions of elastic constants, the energy-based method is computationally less efficient, requiring a larger plane-wave basis set and denser \(k\)-point meshes for the predicted elastic constants to converge to reliable values [19]. Indeed, the more efficient stress-strain method is commonly employed to determine elastic constants, for example, by the Materials Project database, and by established software such as VASP and CASTEP [20; 21; 22].
Despite forming a basis for computational materials science, atomistic scale simulation can be computationally prohibitive, especially when the number of atoms in a unit cell grows large. This constraint limits an _ab initio_ method's capability to investigate more complex crystalline solids. On the other hand, advances in machine learning bring about alternative approaches to computational materials science. This data-driven paradigm can reasonably predict the elastic properties of crystal structures, provided sufficient training data from an _ab initio_ method [23; 24]. Even when the number of atoms in a unit cell is large, such an approach can efficiently predict the bulk and shear moduli of complex materials such as alloys [25; 26]. Applying machine learning together with evolutionary algorithms and _ab initio_ calculations is also potentially useful in searching for novel superhard materials [27; 28].
Machine learning models based on graph neural networks (GNNs) have received increasing attention in studying solid materials and molecules. With GNNs, it is natural to encode atomistic-scale descriptions of solids into a computational framework; atomic locations and the associated atomic attributes can be directly embedded into the node attributes of a graph, whereas pairwise interactions among atoms can be encoded into the edge attributes. Efficient GNN training procedures have also been proposed [29; 30; 31], enabling GNNs to learn the representation of a complex relationship between the input and its associated prediction [32]. These neural networks can also be endowed with a translation-rotation equivariance property, so that input atomic locations of molecules
(or point clouds) in \(\mathbb{R}^{3}\) that differ only in their orientation or their centroid can be identified. Enforcing an SE(3) equivariant property helps the networks to extract a more compact (translation and rotation independent) relationship between the inputs and their predictions. Due to these appealing features, variations of SE(3) equivariant GNNs have been developed to study materials science [33].
In this work, we adopt a data-driven approach using graph neural networks (GNNs) to predict the elastic properties of crystal structures. Accounting for the symmetry of crystalline solids, our SE(3) equivariant GNNs take as input atomistic descriptions of strained materials, such as atomic locations and their associated atomic attributes, and predict the strain energy density (SED) of the materials. The SED, which is the energy stored in a crystal structure when distorted by either tensile or compressive strains in particular directions, can be predicted efficiently and relatively accurately, given sufficient training data. The model thus provides an alternative to the standard _ab initio_ energy-based prediction method.
After the SED is computed, we can then calculate the elastic constants and construct the elastic tensor with a simple analytical expression, see Sec. II.1. Other macroscopic (scalar) elastic properties, including the bulk modulus, shear modulus, Young's modulus, and Poisson's ratio, immediately follow from the elastic constants. Sec. IV reports our model's prediction results for these elastic properties. The prediction of the scalar elastic properties is comparable to that of recent data-driven work [23; 24]. Importantly, however, the data-driven prediction of the SED and the associated elastic constants, which are fundamental tensorial quantities that depend on the crystallographic group, is first reported here. The trained model consistently reveals the independent components of strain energy tensors dictated by the symmetry of the crystal structures (see Fig. 3). Our symmetry-aware GNNs architecture as well as the dataset used to train the model are provided in Sec. III. Also, as opposed to a black-box deep learning prediction, we show in Sec. IV.3 that the latent features (data representations) that the model learned are physically meaningful, rendering our GNNs explainable. We conclude this work in Sec. V, and provide supplementary results and necessary derivations in the Supplementary Materials.
## II Linear elasticity background
### The strain energy tensor and the elastic tensor
Assuming linear elastic deformation and no external stress, the strain energy density (SED) of isotropic materials can be expressed as [34]
\[U(\epsilon_{1},\epsilon_{2},...,\epsilon_{6})=\frac{1}{2}\sum_{i,j=1}^{6}C_{ij }\epsilon_{i}\epsilon_{j}, \tag{1}\]
where \(C_{ij}\) is an _elastic constant_ and \(\epsilon_{i}\) is a strain component \(i\in\{1,2,\ldots,6\}\) in Voigt notation. If the crystal structure is strained by at most two strain components (two Voigt indices), one can consider the SED tensor of rank 2 which we denote \(U_{ij}\equiv U(\epsilon_{i},\epsilon_{j})\) for \(i\neq j\) and \(U_{ii}\equiv U(\epsilon_{i})\). In this study, we consider \(U_{ij}\) in units of eV per atom. The SED is the energy stored in distorted crystal structure when the lattice is strained by \(\epsilon_{i}\) and \(\epsilon_{j}\). Since the elastic constant tensor is symmetric, the SED tensor is also symmetric and can be expressed as
\[U_{ij} =C_{ij}\epsilon_{i}\epsilon_{j}+\frac{1}{2}\left(C_{ii}\epsilon_{ i}^{2}+C_{jj}\epsilon_{j}^{2}\right)\text{ for }i\neq j\text{ and,} \tag{2}\] \[U_{ii} =\frac{1}{2}C_{ii}\epsilon_{i}^{2}.\]
The elastic constants can then be analytically obtained as a function of the SED:
\[C_{ij}=\frac{1}{\epsilon_{i}\epsilon_{j}}\left[\left(1+\delta_{ij}\right)U_{ ij}-\left(1-\delta_{ij}\right)\left(U_{ii}+U_{jj}\right)\right], \tag{3}\]
where \(\delta_{ij}\) is the Kronecker delta.
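For equal strain magnitudes \(\epsilon_{i}=\epsilon_{j}=\epsilon\), Eq. (3) reduces to a few lines of code; a minimal numpy sketch (function name illustrative):

```python
import numpy as np

def elastic_constants_from_sed(U, eps):
    # U: symmetric 6x6 SED tensor (Voigt notation) at strain magnitude eps;
    # returns the 6x6 elastic tensor C via Eq. (3).
    C = np.zeros((6, 6))
    for i in range(6):
        C[i, i] = 2.0 * U[i, i] / eps**2          # diagonal: C_ii = 2 U_ii / eps^2
    for i in range(6):
        for j in range(i + 1, 6):
            # off-diagonal: C_ij = (U_ij - U_ii - U_jj) / (eps_i eps_j)
            C[i, j] = C[j, i] = (U[i, j] - U[i, i] - U[j, j]) / eps**2
    return C
```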
Because the SED tensor and the elastic tensor are symmetric tensors of rank 2 with \(6\times 6\) components, there are \((6+1)6/2=21\) independent components. In the matrix form, we denote the SED tensor as
\[\mathbf{U}=\begin{bmatrix}U_{11}&U_{12}&U_{13}&U_{14}&U_{15}&U_{16}\\ &U_{22}&U_{23}&U_{24}&U_{25}&U_{26}\\ &&U_{33}&U_{34}&U_{35}&U_{36}\\ &&&U_{44}&U_{45}&U_{46}\\ &&&U_{55}&U_{56}\\ &&&U_{66}\end{bmatrix}, \tag{4}\]
where only the upper triangular elements are shown (the lower diagonal elements are left blank for brevity). For crystalline solids, however, the upper triangular elements are not completely independent, depending on the symmetry of the crystal structure. For instance, the SED tensor of a cubic lattice is
\[\mathbf{U}_{cubic}=\begin{bmatrix}U_{11}&U_{12}&U_{12}&U_{14}&U_{14}&U_{14}\\ &U_{11}&U_{12}&U_{14}&U_{14}&U_{14}\\ &&U_{11}&U_{14}&U_{14}&U_{14}\\ &&&U_{44}&U_{45}&U_{45}\\ &&&U_{44}&U_{45}\\ &&&&U_{44}\end{bmatrix}. \tag{5}\]
Note that some components of the elastic constants \(C_{ij}\) can be zero in many crystal structures, e.g. a cubic lattice has 9 zeros out of 21 independent components.
However, due to the property of SED from Eq. (2), \(U_{ij}\) is never 0. For the purpose of machine learning regression, working with SED helps avoid zero inflation problems in the training dataset. Fig. S2 shows the distributions of \(C_{ij}\) and \(U_{ij}\), which cover different ranges depending on the indices. Importantly, the distribution of elastic constants can concentrate around zero, but this is not the case for the SED. As an illustrative example of how the SED can avoid a zero inflation problem, consider the elastic constants of a diamond cubic crystal structure. For diamond, \(C_{11}\), \(C_{12}\), \(C_{14}\), \(C_{44}\), and \(C_{45}\) are 1054, 126, 0, 562, and 0 GPa, respectively. Then, Eq. (2) gives \(U_{11}\), \(U_{12}\), \(U_{14}\), \(U_{44}\), and \(U_{45}\) subjected to 2% strain to be 7.506, 16.807, 11.509, 4.002, and 8.005 meV/atom, respectively. The magnitude of \(U_{12}\) is the largest as it is the sum of \(C_{12}\) and \(C_{11}\), while \(U_{14}\) and \(U_{45}\) are smaller (but non-zero) because \(C_{14}\) and \(C_{45}\) are zero.
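The diamond example can be reproduced by plugging the quoted elastic constants into Eq. (2); converting the resulting energy densities (here in GPa, i.e., \(10^{9}\) J/m\({}^{3}\)) to meV/atom requires multiplying by the volume per atom, which we omit, so only the ratios are checked:

```python
eps = 0.02                                                # 2% strain
C11, C12, C14, C44, C45 = 1054.0, 126.0, 0.0, 562.0, 0.0  # GPa
U11 = 0.5 * C11 * eps**2
U12 = C12 * eps**2 + 0.5 * (C11 + C11) * eps**2           # C22 = C11 for cubic
U14 = C14 * eps**2 + 0.5 * (C11 + C44) * eps**2
U44 = 0.5 * C44 * eps**2
U45 = C45 * eps**2 + 0.5 * (C44 + C44) * eps**2           # C55 = C44 for cubic
# Ratios match 16.807/7.506, 11.509/7.506, 4.002/7.506, 8.005/7.506.
print(U12 / U11, U14 / U11, U44 / U11, U45 / U11)         # ~2.24, 1.53, 0.53, 1.07
```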
### Strain operations
The unit cell of a crystal can be compactly parametrized by a lattice matrix \(\mathbf{L}=[\mathbf{a}\ \mathbf{b}\ \mathbf{c}]\), where \(\mathbf{a}\), \(\mathbf{b}\), and \(\mathbf{c}\) are the lattice vectors in Cartesian coordinates. If strain is applied to the system, the lattice matrix is deformed as
\[\mathbf{L}^{\prime}(\epsilon_{1},\epsilon_{2},...,\epsilon_{6})=\boldsymbol{ \varepsilon}_{I}\mathbf{L}, \tag{6}\]
where \(\boldsymbol{\varepsilon}_{I}\) is the strain matrix
\[\boldsymbol{\varepsilon}_{I}=\begin{bmatrix}\epsilon_{1}&\frac{\epsilon_{6}}{2}&\frac{\epsilon_{5}}{2}\\ \frac{\epsilon_{6}}{2}&\epsilon_{2}&\frac{\epsilon_{4}}{2}\\ \frac{\epsilon_{5}}{2}&\frac{\epsilon_{4}}{2}&\epsilon_{3}\end{bmatrix}+\mathbf{I}. \tag{7}\]
In this work, we assume the crystal structures will be deformed by at most two strain components, so that \(\mathbf{L}^{\prime}=\mathbf{L}^{\prime}(\epsilon_{i},\epsilon_{j})\).
Due to an applied strain, the atomic coordinates \(\mathbf{r}\) must also be transformed accordingly as
\[\mathbf{r}^{\prime}=\boldsymbol{\varepsilon}_{I}\mathbf{L}\mathbf{r}_{f}, \tag{8}\]
where \(\mathbf{r}_{f}\) is the fractional coordinate of an atom in the unstrained lattice. Note that \(\mathbf{r}_{f}\) typically shifts from its equilibrium value when strain is applied (say, in density functional theory calculations or in experiments); here, however, the atomic relaxation is neglected for simplicity.
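Eqs. (6)-(8) amount to a single matrix construction and two matrix products; a minimal numpy sketch (names illustrative):

```python
import numpy as np

def strain_matrix(e):
    # e = (e1, ..., e6) in Voigt notation; builds eps_I of Eq. (7)
    e1, e2, e3, e4, e5, e6 = e
    return np.eye(3) + np.array([[e1,     e6 / 2, e5 / 2],
                                 [e6 / 2, e2,     e4 / 2],
                                 [e5 / 2, e4 / 2, e3    ]])

def strained_positions(L, frac_coords, e):
    # L: 3x3 lattice matrix [a b c]; frac_coords: (N, 3) fractional coordinates
    Lp = strain_matrix(e) @ L                    # L' = eps_I L, Eq. (6)
    return (Lp @ frac_coords.T).T                # r' = eps_I L r_f, Eq. (8)
```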
## III Machine learning with SE(3)-equivariant graph neural networks
### Crystal graphs
A crystal structure can be represented as a multigraph whose nodes and edges, respectively, encode atoms and their pairwise connections. A pair of nodes describing an atom of type \(m\) and an atom of type \(n\) can be connected by multiple edges, encapsulating the interactions between the atom of type \(m\) in a unit cell and atoms of type \(n\) in the unit cell as well as in the other periodic cells under consideration, see Fig. 1. An atom of type \(m\) can interact with an atom of the same type in periodic cells if the interaction range of consideration is large enough, represented by self-loops at node \(m\) of the crystal graph. Each atom's location and its atomic features (e.g., atomic mass, electronegativity, polarizability, atomic radius, etc.) in a unit cell are accounted for by the attributes of a single node.
More formally, a crystal graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) consists of a set of nodes (vertices) \(\mathcal{V}\) and a set of edges \(\mathcal{E}\) defined as
\[\mathcal{V}=\{(\mathbf{f}_{n},\mathbf{r}_{n})\mid\mathbf{f}_{n} \in\mathbb{R}^{M},\ \mathbf{r}_{n}=\mathbf{L}\mathbf{r}_{f_{n}}\in\mathbb{R}^{3}\},\] \[\mathcal{E}=\{\Delta\mathbf{r}_{mn}^{(\mathbf{T})}\mid\Delta \mathbf{r}_{mn}^{(\mathbf{T})}=\mathbf{r}_{m}-\mathbf{r}_{n}+\mathbf{T};\ \mathbf{r}_{m},\mathbf{r}_{n}\in\mathbb{R}^{3}\},\]
where \(m\) and \(n\) are indices of the two atoms in the unit cell, \(\mathbf{f}_{n}\) is a vectorized atomic feature of an atom of type \(n\), \(M\) is the number of atomic features, \(\mathbf{r}_{f_{n}}\) is the fractional coordinate of an atom of type \(n\), and \(\mathbf{T}\) is a translation vector within the space of interest. Each node embeds two kinds of information: atomic attributes \(\mathbf{f}\) and atomic positions \(\mathbf{r}\) in the unit cell. From this definition, the unit cell and its periodic images is compactly represented as a multigraph; only a finite number of nodes describing the atoms in the unit cell is required, and the number of edges joining the two nodes grows linearly with the number of periodic images of the atoms in the structure of interest (see Fig. 1(a)).
A crystal graph describing the whole crystal structure would require infinitely many edges. In practice, one typically embeds only partial information of the structure using the description of the unit cell and a finite number of edges encoding interactions with neighbor atoms [35]. One method to construct the edges is to connect a target atom to other atoms within a finite radius; however, this method generates an excessive number of edges, which is computationally infeasible for GNN-based machine learning. Alternatively, some algorithms determine the connections between atoms using, e.g., the Voronoi tessellation, generating a moderate number of edges [36; 37]. However, for layered materials, using the Voronoi tessellation causes the edges connecting atoms between layers to be absent. These interlayer connections are crucial in differentiating a structure strained in the out-of-plane direction from structures strained in other directions. In this work, we retain an edge between a pair of atoms, with displacement \(\Delta\mathbf{r}\) (dropping the translation-vector superscript and the atom-index subscripts for brevity), only if each component \(\Delta r_{\alpha}\), where \(\alpha\in\{x,y,z\}\), satisfies the following spatial cutoff criterion: \(|\Delta r_{\alpha}|\leq\min(|a_{\alpha}|+|b_{\alpha}|+|c_{\alpha}|,s)+\delta\), where \(a_{\alpha}\), \(b_{\alpha}\), and \(c_{\alpha}\) are the \(\alpha\) components of the corresponding lattice vectors constituting the lattice matrix \(\mathbf{L}\), \(s\) is a cutoff distance, and \(\delta\) is a small cutoff extension of our choice. This reduces the number of edges relative to the hard radial cutoff criterion and, with an appropriate choice of \(s\) and \(\delta\), ensures the connections of atoms between layers in layered materials.
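A minimal sketch of this componentwise criterion, assuming the lattice matrix stores \(\mathbf{a}\), \(\mathbf{b}\), and \(\mathbf{c}\) as columns; the default values of `s` and `delta` are illustrative:

```python
import numpy as np

def keep_edge(dr, L, s=6.0, delta=1.0):
    # dr: candidate displacement r_m - r_n + T; L: 3x3 lattice matrix [a b c]
    span = np.abs(L).sum(axis=1)                 # |a_alpha| + |b_alpha| + |c_alpha|
    cutoff = np.minimum(span, s) + delta
    return bool(np.all(np.abs(dr) <= cutoff))
```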
### Rotational and translational equivariance
If two crystal graph inputs, constructed from an identical crystal structure strained by two different sets of strain components \((\epsilon_{i},\epsilon_{j})\) and \((\epsilon_{k},\epsilon_{l})\) yielding the new vertices \(\mathcal{V}^{\prime}=\{(\mathbf{f}_{n},\mathbf{r}^{\prime}_{n})\}\) and \(\mathcal{V}^{\prime\prime}=\{(\mathbf{f}_{n},\mathbf{r}^{\prime\prime}_{n})\}\), respectively, are equivalent up to a 3D rotation and a translation, then we expect our machine learning model to predict the same SED. This symmetry-aware property can be implemented with geometric deep learning models [38]. We will equip a latent representation of our model with this symmetry-aware (equivariant) property, which is defined as follows.
A latent representation \(\phi:\mathcal{V}\rightarrow\mathcal{Y}\) is _equivariant_ if for each transformation \(T_{g}:\mathcal{V}\rightarrow\mathcal{V}\) with \(g\) being an element of an abstract group \(G\), there exists a transformation \(S_{g}:\mathcal{Y}\rightarrow\mathcal{Y}\) such that the following condition holds:
\[S_{g}[\phi(v)]=\phi\left(T_{g}[v]\right),\]
for all \(g\in G,v\in\mathcal{V}.\) In our case, the group \(G\) of interest is SE(3), providing the latent representation with a 3D rotation and translation equivariant property. This latent feature \(\phi\) can be achieved with SE(3)-equivariant GNNs, known as Tensor Field Network (TFN) [39], and with a TFN with an appropriate attention mechanism, known as SE(3)-transformers [40], see more detailed recipes of these two models in Sec. S1.
The key idea that enables the SE(3)-equivariant property of these two GNNs is that their message passing kernels are constructed from translationally invariant spherical harmonic bases, with the structure-preserving transformation \(S_{g}\) given by the Wigner-D matrices (see Sec. S1). Under a 3D rotation, each spherical harmonic basis function \(Y_{J}\) of the kernel transforms according to
\[Y_{J}\left(\mathbf{R}_{g}^{-1}\frac{\Delta\mathbf{r}}{||\Delta\mathbf{r}||} \right)=\mathbf{D}_{J}^{*}(g)Y_{J}\left(\frac{\Delta\mathbf{r}}{||\Delta \mathbf{r}||}\right),\]
where \(\mathbf{D}_{J}(g)\) is the \(J^{th}\) Wigner-D matrix, and \(\mathbf{R}_{g}\) is a rotation matrix associated with \(g\in\) SO(3), making the learned latent representation \(\phi\) equivariant [39]. The multi-head attention mechanism in SE(3)-transformers also induces an equivariant property on the product rule between each key and value pairs, rendering the whole message passing algorithm equivariant under SE(3) [40]. In this work, we use SE(3)-transformers to first build a compressed representation of the orientations of a strained crystal structure, then such learned latent representation will be passed to other networks with a high expressive power to perform the prediction (regression) task of the SED. The following section contains the entire framework for the SED prediction.
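As a quick sanity check of the transformation rule above, consider \(J=1\): up to normalization and component ordering, the real \(J=1\) spherical harmonics are the unit vector itself, and the Wigner-D matrix reduces to the rotation matrix. A minimal numpy/scipy sketch:

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
R = Rotation.random(random_state=0).as_matrix()  # random rotation in SO(3)
dr = rng.normal(size=3)                          # an edge displacement
Y1 = lambda v: v / np.linalg.norm(v)             # J = 1 harmonics, up to basis change

lhs = Y1(R.T @ dr)                               # Y_1(R_g^{-1} dr / ||dr||)
rhs = R.T @ Y1(dr)                               # D_1(g) Y_1(dr / ||dr||), D_1 ~ R_g^{-1}
assert np.allclose(lhs, rhs)                     # equivariance holds exactly
```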
### Model architecture and training procedure
We now describe our GNNs-based model that predicts the SED tensor. The architecture of our model, which we term _StrainNet_, is illustrated in Fig. 2. The model is designed to receive two types of input: a crystal graph representing a _strained_ crystal structure, and a one-hot vector of dimension \(6(6+1)/2=21\)[41] indicating the _strained components_, with value \(1\) at the component \(ij\) on which the strain operation is applied and value \(0\) otherwise.
The crystal graph of a strained crystal structure is an input feature of the SE(3)-equivariant GNNs building block, which generates _two_ latent features. The first latent feature (\(\mathbf{f}_{\text{class},n}^{0}\) in Fig. 2) is fed into a _classification_ neural network that predicts a vector of dimension \(21\), identifying all the upper-triangular elements of the SED tensor whose values are exactly identical (by symmetry) to the component indicated by the input one-hot vector. This inherent classification informs the model to discern the degenerate structure of the SED tensor that depends on the symmetry of an input crystal graph (see Fig.3). For example, for a cubic lattice strained in the direction \(11\), the input one-hot vector \((1,0,\dots,0)^{T}\) together with the latent feature \(\mathbf{f}_{\text{class},n}^{0}\) shall give a prediction of a vector that has the value close to \(1\) in the indices \(ij=11,22,33\), and the value close to \(0\) in the other \(18\) indices. Finally, the predicted _degeneracy class vector_ (DCV) together with the second latent feature (\(\mathbf{f}_{\text{reg},n}^{0}\) in Fig. 2) will be fed into the final neural network that predicts (regresses on) the SED.
To generate an SE(3)-equivariant latent representation of an input crystal graph, as alluded to in the previous section, we first used the SE(3)-Transformer that receives the strained crystal graph input. The material struc
Figure 1: A 2-dimensional lattice and its _strained_ crystal graph embedding. Grey dashed lines are periodic boundaries. The whole structure can be spanned by the 3 atoms in 3 different bases in the unit cell, represented by 3 different nodes. Black lines are (undirected) edges connecting two neighbor nodes. The strained lattice is represented as a multigraph on the right, which is the crystal graph input into our SE(3)-equivariant GNN building block. Note that to describe the crystal graph for the unit cell and the atoms on its boundary, only three nodes are required. The multi-edges of each pair of nodes are distinguished by different crystallographic directions.
The material structure from the database is first transformed according to Eq. (6) by a strain of a fixed magnitude, which is then converted into the strained crystal graph input. Note that in the _ab initio_ calculation, the atomic coordinates in the strained lattice are optimized to the equilibrium positions under the applied strain, whereas in this work the fractional coordinates are kept as in the pre-strain condition. The strained crystal graph input is given by
\[\mathcal{V}^{\prime}=\{(\mathbf{f}_{n},\mathbf{r}_{n}^{\prime})\mid\mathbf{f}_{n}\in\mathbb{R}^{M},\;\mathbf{r}_{n}^{\prime}=\boldsymbol{\epsilon}_{I}\mathbf{L}\mathbf{r}_{f_{n}}\in\mathbb{R}^{3}\},\]
\[\mathcal{E}^{\prime}=\{\Delta\mathbf{r}_{mn}^{\prime}(\mathbf{T}^{\prime})\mid\Delta\mathbf{r}_{mn}^{\prime}(\mathbf{T}^{\prime})=\mathbf{r}_{m}^{\prime}-\mathbf{r}_{n}^{\prime}+\mathbf{T}^{\prime};\;\mathbf{r}_{m}^{\prime},\mathbf{r}_{n}^{\prime}\in\mathbb{R}^{3}\},\]
where the _prime_ notation indicates that the atomic positions and the translation vector are of the strained lattice.
The training data for the \(\{U_{ij}\}\) are computed, via Eq. (2), from the \(\{C_{ij}\}\) extracted from the Materials Project database [20] (see Sec. S5). For molecules and materials, if the amount of training data is large enough, atomic numbers alone are sufficient as the node (atomic) attributes [42]. Nevertheless, in this work, features such as atomic mass, electronegativity, polarizability, atomic radius, atomic number, and the \(ij\) indices are used as node attributes. Polarizability data are obtained from [43], whereas the other attributes are obtained from the Materials Project database. The atomic number is vectorized through an embedding layer of dimension 512, which is then concatenated with the other four node attributes (see Fig. 2). It is well known that elastic properties are influenced by various factors, including the characteristics of acoustic phonons, which exhibit an inverse relationship with atomic mass. Additionally, the resistance to changes in bond length is correlated with both electronegativity and polarizability. These three atomic properties are thus incorporated into the node attributes, and they significantly improve the model's prediction accuracy.
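A minimal PyTorch sketch of this node-attribute construction is shown below. The class name, element-vocabulary size, and example values are illustrative assumptions; the 512-dimensional embedding concatenated with the four scalar attributes follows the description above.

```python
import torch
import torch.nn as nn

class NodeFeatures(nn.Module):
    """Hypothetical node-attribute builder: embedded atomic number + 4 scalars."""

    def __init__(self, num_elements=100, embed_dim=512):
        super().__init__()
        self.z_embed = nn.Embedding(num_elements, embed_dim)  # atomic number -> 512-d

    def forward(self, atomic_number, mass, electronegativity, polarizability, radius):
        scalars = torch.stack([mass, electronegativity, polarizability, radius], dim=-1)
        return torch.cat([self.z_embed(atomic_number), scalars], dim=-1)

feats = NodeFeatures()
z = torch.tensor([14, 8])           # e.g., Si and O
m = torch.tensor([28.085, 15.999])  # atomic masses
chi = torch.tensor([1.90, 3.44])    # electronegativities
alpha = torch.tensor([5.53, 0.80])  # polarizabilities
r = torch.tensor([1.11, 0.66])      # atomic radii
x = feats(z, m, chi, alpha, r)      # shape: (2, 516)
```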
Since the atomic features are scalar attributes, the input feature vector is fed into the \(l=0\) channel of the transformer network, since this channel transforms like a scalar (see Sec. S1). We denote each node's input feature vector as \(\mathbf{f}_{\text{in},n}^{0}=\mathbf{f}_{n}\). For computational feasibility, we restrict the output feature vector of the SE(3)-transformers to consist of 4 channels \(\mathbf{f}_{\text{hid},n}^{l}\) with \(l\in\{0,1,2,3\}\). These latent features are fed into two different SE(3)-equivariant GNNs _without_ attention (TFNs), outputting a node-wise classification feature vector \(\mathbf{f}_{\text{class},n}^{0}\) and a node-wise regression feature vector \(\mathbf{f}_{\text{reg},n}^{0}\); see Fig. 2. Note that omitting the attention mechanism in the last GNN layer yields better prediction accuracy.
To train the model to discern different orientations of a strained crystal graph and classify its appropriate degeneracy class vector, in each epoch we draw a new realization of the crystal graph in a different orientation, together with a new one-hot vector from the same degeneracy class as the original strained component.
Figure 2: With a strained crystal graph and the strained component one-hot vector as the input, StrainNet employs a self-supervised approach combining both classification and regression networks to predict both the degeneracy class vector and the \(U_{ij}\). The \(\mathbf{f}_{\text{in},n}^{0}\equiv\mathbf{f}_{n}\) denotes the input feature of the \(n^{\text{th}}\) node, where a superscript, \(0\), indicates the input into the \(l=0\) channel of the spherical harmonics in SE(3)-Transformer, which is a channel whose feature is invariant under rotation (transforms like a scalar quantity). \(\Delta\mathbf{r}_{mn}^{\prime}\) is the edge connecting a target node \(m\) and a neighbor node \(n\). The degeneracy class vector from the classification network will be concatenated with the latent representation of the strained crystal graph (\(\mathbf{f}_{\text{reg},n}^{0}\)) for the final regression network to predict the SED.
Specifically, a new crystal graph input is sampled from a uniformly random orientation (uniformly random Euler angles) around the center of mass in the unit cell of the original strained crystal graph. A new one-hot vector of the strained components is chosen such that the element 1 is drawn uniformly from one of the non-zero components of the DCV. For example, if the original strained component is 11 for a cubic material, a new one-hot vector in each epoch is drawn from the situations where only one of the strained components 11, 22, or 33 is 1, while the other components are 0.
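The sketch below illustrates one epoch's augmentation under these rules. Note that scipy's `Rotation.random` draws uniform (Haar) rotations, used here as a stand-in for the paper's uniformly random Euler angles; the function name is hypothetical.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def augment(positions, dcv, rng=None):
    """One training-epoch augmentation (sketch).

    positions: (N, 3) Cartesian coordinates of the strained cell.
    dcv: (21,) degeneracy class vector with 1s on symmetry-equivalent components.
    """
    rng = rng or np.random.default_rng()
    # Uniformly random rotation about the unit cell's center of mass.
    com = positions.mean(axis=0)
    R = Rotation.random().as_matrix()
    rotated = (positions - com) @ R.T + com

    # New one-hot vector drawn uniformly from the non-zero entries of the DCV.
    idx = rng.choice(np.flatnonzero(dcv))
    onehot = np.zeros_like(dcv)
    onehot[idx] = 1.0
    return rotated, onehot
```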
Our self-supervised approach combines both the regression errors and the classification errors in the global loss function:
\[\mathcal{L}=\frac{1}{N}\sum_{n=1}^{N}|U_{ij}^{(n)}-U_{ij}^{(n),\text{pred}}|+ \lambda\mathcal{L}_{\text{class}}, \tag{9}\]
where the SED prediction \(U_{ij}^{(n),\text{pred}}\) is the output from the regression network, \(\mathcal{L}_{\text{class}}\) is the binary cross-entropy loss function for multi-label classification, \(\lambda\) specifies the relative significance between the classification and the regression errors, and \(N\) is the total number of training samples. We stop the training when the gradient of the loss is relatively small over multiple epochs. Training was performed on NVIDIA A100 GPUs.
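In code, Eq. (9) amounts to an \(L_{1}\) regression term plus a weighted multi-label binary cross-entropy; a minimal PyTorch sketch (with the weight \(\lambda\) left as a free argument) reads:

```python
import torch.nn.functional as F

def strainnet_loss(u_pred, u_true, dcv_logits, dcv_true, lam=1.0):
    """Sketch of the global loss in Eq. (9); lam plays the role of lambda."""
    reg = F.l1_loss(u_pred, u_true)                                 # mean |U - U_pred|
    cls = F.binary_cross_entropy_with_logits(dcv_logits, dcv_true)  # multi-label BCE
    return reg + lam * cls
```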
### Choosing a subset of strained crystal structures for training data
By virtue of rotational equivariance, we expect StrainNet to efficiently predict the 21 components of \(\mathbf{U}\) using training data consisting of only non-degenerate components. The number of non-degenerate components selected for training depends on the crystal system, i.e., cubic, tetragonal, hexagonal, trigonal, monoclinic, and triclinic. For cubic materials, only 6 components are used for training (the number of non-degenerate components for other crystal systems can be found in Table S1 in the Supplemental Materials). Recall that, as discussed earlier, the cubic lattice has 5 distinct \(U_{ij}\), as shown in Eq. (5); however, the strain tensor transforms the cubic structure into 6 distinct structures (according to the symmetry in Laue class \(m\bar{3}m\)) as follows
\[\begin{bmatrix}\mathcal{T}_{1}&\mathcal{T}_{2}&\mathcal{T}_{2}&\mathcal{O}_{ 2}&\mathcal{M}_{2}&\mathcal{M}_{2}\\ &\mathcal{T}_{1}&\mathcal{T}_{2}&\mathcal{M}_{2}&\mathcal{O}_{2}&\mathcal{M}_ {2}\\ &&\mathcal{T}_{1}&\mathcal{M}_{2}&\mathcal{M}_{2}&\mathcal{O}_{2}\\ &&\mathcal{O}_{1}&\mathcal{M}_{1}&\mathcal{M}_{1}\\ &&&&\mathcal{O}_{1}&\mathcal{M}_{1}\\ &&&&\mathcal{O}_{1}\end{bmatrix},\]
where each element in the matrix stands for a crystal system of \(\mathbf{L}^{\prime}(\epsilon_{i},\epsilon_{j})\); \(\mathcal{T}\), \(\mathcal{O}\), and \(\mathcal{M}\) are tetragonal, orthorhombic, and monoclinic lattices, respectively, and a different number in the subscript indicates a different structure. Despite the fact that \(U_{14}\) (\(C_{14}\)) is equal to \(U_{15}\) (\(C_{15}\)) by the cubic symmetry, \(\mathbf{L}^{\prime}(\epsilon_{1},\epsilon_{4})\) and \(\mathbf{L}^{\prime}(\epsilon_{1},\epsilon_{5})\) possess different lattice symmetries, which are orthorhombic and monoclinic, respectively. This is because \(\epsilon_{1}\) strains the structure in the \(x\) direction, while \(\epsilon_{4}\) and \(\epsilon_{5}\) strain the structure in both the \(y\) and the \(z\) directions, and both the \(x\) and the \(z\) directions, respectively. Since \(\mathbf{L}^{\prime}(\epsilon_{1},\epsilon_{4})\) and \(\mathbf{L}^{\prime}(\epsilon_{1},\epsilon_{5})\) are not equivalent up to a rotation, the SE(3) kernel regards the two inputs as distinct. By exploiting the SE(3) kernel's ability to identify inputs that are equivalent up to a rotation, for cubic materials it suffices to use training data consisting of the distinct input structures, strained only in the 6 directions 11, 12, 14, 15, 44, and 45.
### The distance between SED components
To evaluate our model capability to predict the symmetry-dependent degeneracy pattern of the SED tensor, we use the Canberra distance between two components of the SED tensor as a metric:
\[d(U_{ij},U_{kl})\equiv\frac{1}{N}\sum_{n=1}^{N}\frac{|U_{ij}^{(n)}-U_{kl}^{(n )}|}{|U_{ij}^{(n)}|+|U_{kl}^{(n)}|}, \tag{10}\]
where \(N\) represents the number of samples, and \(ij\) and \(kl\) are Voigt indices. With this metric, the distance between \(U_{ij}\) and \(U_{kl}\) is zero if they belong to the same degeneracy class (i.e., their values are identical). The top row of Fig. 3 shows the ground-truth distance pattern for cubic, hexagonal, and monoclinic materials. Dark blue and bright green colors indicate the \(d(U_{ij},U_{kl})\) values that are closest to zero and 0.6, respectively. It is important to note that the degeneracy pattern of \(U_{ij}\) depends on the crystal symmetry, which results in a unique Canberra distance pattern for each crystal system. For instance, in cubic materials, \(U_{11}\), \(U_{22}\), and \(U_{33}\) are identical (in the same degeneracy class), so the distance pattern of cubic materials displays a dark blue box corresponding to indices 11, 22, and 33. The distance patterns in Figs. 3 and S9 are averaged over the samples with the same crystal system in the dataset. Hence, the distance pattern of low-symmetry crystals, such as monoclinic and triclinic, will strongly depend on the dataset.
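Eq. (10) is straightforward to evaluate; a NumPy sketch over arrays of per-sample component values reads (function name illustrative):

```python
import numpy as np

def canberra_distance(U_ij, U_kl):
    """Eq. (10): averaged Canberra distance between two SED components."""
    U_ij, U_kl = np.asarray(U_ij), np.asarray(U_kl)
    return np.mean(np.abs(U_ij - U_kl) / (np.abs(U_ij) + np.abs(U_kl)))
```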
## IV Discussion and results
### Rationale for the StrainNet model architecture
The underlying principle for using an equivariant network as a core component is to construct a compact representation of the SED tensor, such that the crystal structures strained by different sets of \(\epsilon_{i}\) and \(\epsilon_{j}\) that are equivalent up to rotation (and translation) possess the same SED. Such symmetry-dependent SED tensor of
specific crystal structures is demonstrated in the ground-truth distance patterns \(d(U_{ij},U_{kl})\) of Fig. 3 (top); StrainNet approximates such symmetry-dependent patterns well, as shown in Fig. 3 (bottom).
Fig. S4 shows our earlier model development, with the goal of compactly representing the SED tensor, progressing from the simplest but less expressive models to StrainNet, which represents the SED tensor compactly and rather accurately. All these trial GNN-based models were trained on crystal structures with fewer than 500 edges. The simplest model (Fig. S4(a)) takes as input only the crystal graph with node attributes, i.e., atomic mass, atomic radius, electronegativity, polarizability, and the embedded atomic number. This minimal equivariant model does not yield a sufficiently accurate representation of \(U_{ij}\); we found that the model is biased towards predicting SED tensors whose distance-matrix pattern is akin to that of cubic crystals. This could be because the model is not sufficiently expressive and thus attempts to fit the majority of the training dataset, which comprises mostly higher-symmetry structures (see Table S3 for the crystal-system statistics of our curated dataset).
To improve the minimal model's expressiveness, we introduce the degeneracy class vector (DCV) as an additional input, to inform the model about the symmetry-dependent class of the SED tensor. The DCV is first embedded by fully-connected layers and then concatenated with global max-pooled features or node attributes, as shown in Fig. S4(b) and (c), respectively. These symmetry-informed models yield lower MAE and RMSE of the SED tensor compared to that of the minimal model, while the model in Fig. S4(b) performs slightly better than the model in Fig. S4(c) (see Table S4). Moreover, the predicted distance patterns are less biased towards those of the cubic lattice. Specifically, for crystal structures with lower symmetry such as hexagonal, tetragonal, trigonal, and orthorhombic, the predicted distance patterns are relatively similar to their ground-truth distance patterns. As expected, informing the model about the DCV of the input helps force the distance between \(U_{ij}\) and \(U_{kl}\) belonging to the same degeneracy class to be closer to zero.
While these models with the DCV included as an input give good predictions of the SED tensor, they require knowledge of the pre-strained crystal symmetry _a priori_ (to properly assign the value of the DCV). To overcome this limitation, we use the self-supervised method schematically shown in Fig. 2 to modify the model to predict, rather than require, the DCV. The classification neural network that predicts the DCV takes as input a one-hot vector of the strained \(ij\) components, representing the \(\epsilon_{i}\) and \(\epsilon_{j}\) components in (7). This self-supervised technique enables StrainNet to predict \(U_{ij}\) without any prior knowledge of the pre-strained crystal symmetry. Importantly, StrainNet expresses the degeneracy class of the SED tensor relatively well; see Figs. 3 and S9.
### StrainNet's prediction results
Table 1 summarizes StrainNet's prediction errors on the test set. To the best of our knowledge, StrainNet provides the first data-driven prediction of \(U_{ij}\) with a reasonable accuracy. Using these predicted SED tensors together with Eq. (3), elastic tensors of materials can also be calculated. Our model's prediction accuracy significantly depends on the elemental composition of the compounds. Fig. S7 depicts the MAE of \(U_{ij}\) and \(C_{ij}\), averaged over 21 \(ij\) components and categorized by elements. It is interesting to note that the compounds containing period 2 elements (Be, B, C, N, O, and F), some period 6 transition metals (Ta, W, Re, and Os), and some actinides (Th, Pa, and Pu), which have high bulk and shear moduli on average, exhibit a higher MAE for \(U_{ij}\) compared to the overall MAE of the entire test set. As a result, their MAE for \(C_{ij}\) are also significantly higher than the overall MAE of the test set. In contrast, the compounds containing lanthanides, which exhibit medium bulk and shear moduli on average, except for Gd, demonstrate a considerably lower MAE than the overall MAE of the test set, for both \(U_{ij}\) and \(C_{ij}\).
To further investigate the prediction results of the elastic constants, we plot the comparisons between the prediction (obtained from the predicted SED tensors together with Eq. (3)) and the ground truth of the elastic constants in Fig. 4. Fig. 4 (a) shows the prediction results of every \(C_{ij}\) components of all crystal structures, whereas Figs. 4(b)-(f) show the prediction results of \(C_{22}\), \(C_{23}\), \(C_{26}\), \(C_{55}\), and \(C_{56}\) of cubic materials. For cubic materials, the model is trained only on a non-degenerate subset of 21 \(U_{ij}\) components, specifically components 11, 12, 14, 15, 44, and 45. Notably, the model is able to predict other unseen components reasonably well. However, the prediction of the elastic constant components that are exactly zero is challenging. For example, the ground-truth values of \(C_{26}\) and \(C_{56}\) of cubic materials are exactly 0 GPa, while the means and standard deviations of these components from our model predictions are 0.69 and 4.45 GPa, and -0.43 and 4.85 GPa respectively (see Fig. S6 for the histograms of the prediction results). Although the standard deviations are relatively large, the means are less than 1 GPa. It is worth noting that the first-principles calculations also make some errors in these values, but these are usually not computed since they are known to be exactly 0 GPa.
To see how well StrainNet discerns the geometric relationship between different strained crystal graph inputs, we plot the Canberra distance metric of the predicted and ground-truth SED tensors for comparison in Fig. 3. The predicted distance metric of cubic materials, as shown in the left column of Fig. 3, closely resembles the ground-truth distance metric, although there are minor discrepancies in the values that belong to the same degeneracy class. For instance, the distances between the predicted \(U_{11}\), \(U_{22}\), and \(U_{33}\) are \(4.8-6.2\)\(\mu\)eV/atom, resulting in percentage errors of \(1.83-2.43\%\)
(computed from \(\frac{1}{N}\sum_{n=1}^{N}|U_{ij}^{(n)}-U_{kl}^{(n)}|/U_{avg}^{(n)}\times 100\%\), where \(U_{avg}^{(n)}=(|U_{ij}^{(n)}|+|U_{kl}^{(n)}|)/2\)). For hexagonal materials, \(U_{11}\) is identical to \(U_{22}\), but not to \(U_{33}\). The predicted distance metric of hexagonal materials accordingly forms a dark blue box in the 11 and 22 components. The percentage error between the predicted \(U_{11}\) and \(U_{22}\) is 2.22%. For monoclinic materials, the values of each \(U_{ij}\) are not necessarily identical to one another, and the distance metric is averaged over the monoclinic materials data. The resulting distance pattern may differ significantly depending on the dataset. Notably, these agreements between the predicted and the ground-truth distance metrics in various crystal structures reveal that StrainNet gives an excellent prediction of the degenerate structure of SED tensors of strained crystalline materials.
Lastly, the model can also predict other elastic properties, including Voigt's bulk modulus (\(B_{V}\)), Voigt's shear modulus (\(G_{V}\)), Young's modulus (\(Y\)), and Poisson's ratio (\(\nu\)), using the predicted \(C_{ij}\) together with Eqs. (S9)\(-\)(S12). Table 1 summarizes the MAE and RMSE of our predicted elastic properties compared to previous works. The MAE and RMSE of \(B_{V}\) are 12.59 and 20.83 GPa, respectively, which are comparable to those of Mazhnik et al. [23]. The MAE and RMSE of \(G_{V}\) are 10.49 and 18.95 GPa, respectively, with the RMSE of \(G_{V}\) being comparable to that reported by Zhao et al. [24]. Our work also yields smaller errors for \(\nu\) than Mazhnik et al. [23]. Note that our dataset (see Sec. S5) differs from the datasets in the related works [23, 24]. Fig. S8 presents the MAE of \(B_{V}\), \(G_{V}\), \(Y\), and \(\nu\), categorized by elements. The model produces high MAE for compounds containing period 2 elements, mainly due to their larger MAE in \(U_{ij}\) and \(C_{ij}\) (see Fig. S7). On the other hand, the model produces smaller MAE for lanthanide compounds, as their MAE in \(U_{ij}\) and \(C_{ij}\) (see Fig. S7) are smaller.
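Since Eqs. (S9)–(S12) are in the supplement, the sketch below uses the textbook Voigt-average definitions, which we assume coincide with them:

```python
import numpy as np

def voigt_properties(C):
    """Voigt-average elastic properties from a 6x6 elastic tensor C (GPa)."""
    C = np.asarray(C)
    B = (C[0, 0] + C[1, 1] + C[2, 2] + 2 * (C[0, 1] + C[0, 2] + C[1, 2])) / 9.0
    G = (C[0, 0] + C[1, 1] + C[2, 2] - (C[0, 1] + C[0, 2] + C[1, 2])
         + 3 * (C[3, 3] + C[4, 4] + C[5, 5])) / 15.0
    Y = 9 * B * G / (3 * B + G)               # Young's modulus
    nu = (3 * B - 2 * G) / (2 * (3 * B + G))  # Poisson's ratio
    return B, G, Y, nu
```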
### Interpretability of StrainNet's latent features
To investigate how latent features facilitate successful prediction of the SED tensor, we employed the diffusion map to faithfully visualize the three-dimensional data representation of the high-dimensional latent features of the entire test set [44, 45]. Figs. 5 and S10 reveal the dimensionality reduction of the latent features in the diffusion coordinates \((\varphi_{1},\varphi_{2},\varphi_{3})\). Two different latent features were considered: the global max-pooling layer from the GNN representation of the input crystal graph (denoted as \(\mathbf{A}\) in Fig. 2), and the concatenation between \(\mathbf{A}\) and the embedded DCV (denoted as \(\mathbf{B}\) in Fig. 2). These low-dimensional representations in Figs. 5 and S10 are colored by the strained component \(ij\) (middle and right columns) and by the energy scale (left column).
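A minimal diffusion-map construction is sketched below; the Gaussian-kernel bandwidth choice (median pairwise squared distance) is an assumption, as the paper defers the details to Refs. [44, 45].

```python
import numpy as np

def diffusion_map(X, n_coords=3, eps=None):
    """Return the first n_coords non-trivial diffusion coordinates of X (n, d)."""
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    if eps is None:
        eps = np.median(D2)                    # assumed bandwidth heuristic
    K = np.exp(-D2 / eps)                      # Gaussian affinity matrix
    P = K / K.sum(axis=1, keepdims=True)       # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)             # eigenvalue 1 comes first
    return vecs.real[:, order[1:n_coords + 1]] # drop the trivial constant mode
```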
What does the max-pooled feature of the crystal graph input \(\mathbf{A}\) represent? Fig. S10(c) shows that the crystal graph latent representations of the _same material_ that are strained in different \(ij\) components are _almost identical_, in fact sharing greater resemblance than the representations of different materials that belong to the same symmetry class. This is consistent with the result of Bronstein et al. which shows that the latent representation in SE(3)-equivariant GNNs of the original input is very similar to those of the mildly spatially distorted inputs, and, interestingly, is less similar to those of inputs spatially translated further away [38]. Hence, the latent feature \(\mathbf{A}\) seems to uniquely encode the information of the input material, rather than the symmetry of the input. Thus \(\mathbf{A}\) alone does not suffice to meaningfully represent the symmetry-dependent SED tensor (Fig. S10(a) and (b)).
On the other hand, with the embedded DCV in the latent feature \(\mathbf{B}\), the network can differentiate different strained components within the same material class. When strained in 21 different components, the latent features \(\mathbf{A}\) of three example materials (the trigonal LuTlTe\({}_{2}\) phase, the tetragonal YMnSi phase, and the cubic Ta\({}_{3}\)Ru phase) that were not differentiable within the same material class (Fig. S10(c)) are now clearly segregated in the \(\varphi_{2}\) variable with the help of the embedded DCV; see Fig. 5(c). In fact, the latent feature \(\mathbf{B}\), which combines the input crystal graph representation and the embedded DCV, offers an interpretable latent representation of the SED (Fig. 5), which we now discuss.
The latent feature \(\mathbf{B}\) is organized such that the \((\varphi_{1},\varphi_{2})\) coordinate encodes the strain energy density that varies from a smaller value in the \((+,+)\) quadrant to a larger value in the \((-,-)\) quadrant (see Fig. 5(a)). Since \(U_{ij}\) stored by the tensile strain (\(i,j\in\{1,2,3\}\)) is typically higher than \(U_{ij}\) stored by the shear strain (\(i,j\in\{4,5,6\}\)), Figs. 5(a) and (b) consistently reveal that the latent features of the materials strained by the 11, 22, or 33 component have a negative \(\varphi_{2}\) value, whereas those of the materials strained by the 44, 55, or 66 components have a positive \(\varphi_{2}\) value. In addition, since \(U_{ij}\) for \(i\neq j\) is computed from the sum of \(C_{ii}\), \(C_{jj}\), and \(C_{ij}\), it is typically larger than \(U_{ii}\) and \(U_{jj}\). Figs. 5(a) and (b) also consistently reveal that the latent features of the materials strained by 44, 45, 11, and 12 components are respectively organized in the \(\varphi_{2}\) coordinate from a more positive to a more negative value. Additionally, Figs 5(a) and (c) show that materials with higher (lower) average SED will be represented by a larger negative (larger positive) \(\varphi_{1}\) variable. In summary, the max-pooled feature from the graph neural networks \(\mathbf{A}\) encodes the material information, and, together with the information of the DCV, the concatenated latent feature \(\mathbf{B}\) effectively encapsulates both the material information and its strained component-dependent SED.
## V Conclusions
We have demonstrated a novel data-driven framework for predicting the elastic properties of crystal structures using SE(3)-equivariant graph neural networks
(GNNs). By leveraging SE(3)-equivariant GNNs as building blocks, our self-supervised deep learning model predicts Voigt's bulk modulus, Voigt's shear modulus, Young's modulus, and Poisson's ratio with accuracy comparable to other, non-equivariant deep learning studies [23, 24].
A key contribution is the prediction of the strain energy density (SED) and its associated elastic constants, which are tensorial quantities that depend on a material's crystallographic group. The similarity between the distance metrics of the SED components between the ground truths and the predictions demonstrates our model's capability to identify different symmetry groups of strained crystal structures. Requiring only a strained crystal graph and the strained component as the input, our approach offers an efficient alternative to the standard _ab initio_ method for designing new materials with tailored elastic properties.
The interpretability of the model is also a notable feature. The learned latent representations, which take into account the degeneracy class vector, are organized in a physically meaningful structure for predicting the SED tensors. This interpretability enhances the transparency of the model's predictions, making it possible to judge whether a prediction is physically plausible.
The combination of interpretability and the consideration of crystallographic groups sets our model apart from recent data-driven methods for predicting elastic properties of materials. We hope this work is a stepping stone toward an efficient data-driven approach for materials discovery and design, and opens up avenues for approaching more challenging tensor prediction tasks, e.g., predicting dielectric and piezoelectric tensors, which are second-rank and third-rank tensors, respectively.
\begin{table}
\begin{tabular}{c c c c c c c c}
Properties & \multicolumn{3}{c}{This work} & \multicolumn{3}{c}{Mazhnik et al. [23]} & Zhao et al. [24] \\
 & Average & MAE & RMSE & Average & MAE & RMSE & RMSE \\ \hline
\(U_{ij}\) (meV/atom) & 2.652 & **0.655** & **1.288** & & & & \\
\(C_{ij}\) (GPa) & 42.92 & **10.37** & **17.69** & & & & \\
\(B_{V}\) (GPa) & 107.07 & 1.148 & 19.73 & 111.83 & 11.11 & 19.54 & 16.530 \\
\(G_{V}\) (GPa) & 50.98 & 9.61 & 17.10 & 54.81 & 8.24 & 11.43 & 15.780 \\
\(Y\) & 129.62 & 22.29 & 38.63 & 138.95 & 19.15 & 26.23 & \\
\(\nu\) & 0.401 & **0.037** & 0.133 & 0.286 & 0.041 & 0.105 & \\
\end{tabular}
\end{table}
Table 1: Statistics of the dataset and the model prediction
Figure 3: The distance metrics \(d(U_{ij},U_{kl})\) computed from the test set of the ground-truth SED (top row) and the predicted SED (bottom row). The columns correspond, from left to right, to cubic, hexagonal, and monoclinic materials, respectively. The model prediction shows excellent agreement with the ground truth for crystals with high symmetry (cubic), and good agreement for crystals with lower symmetry (hexagonal and monoclinic).
###### Acknowledgements.
This research is supported by the Program Management Unit for Human Resources and Institutional Development, Research and Innovation (Grant No. B05F650024) and by Thailand Science research and Innovation Fund Chulalongkorn University (IND66230005). The authors acknowledge high performance computing resources including NVIDIA A100 GPUs from Chula Intelligent and Complex Systems Lab, Faculty of Science, and from the Center for AI in Medicine (CU-AIM), Faculty of Medicine, Chulalongkorn University, Thailand. The authors also acknowledge the National Science and Technology Development Agency, National e-Science Infrastructure Consortium, Chulalongkorn University and the Chulalongkorn Academic Advancement into Its 2nd Century Project (Thailand) for providing computing infrastructure.
|
2303.02219 | NSGA-PINN: A Multi-Objective Optimization Method for Physics-Informed
Neural Network Training | This paper presents NSGA-PINN, a multi-objective optimization framework for
effective training of Physics-Informed Neural Networks (PINNs). The proposed
framework uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) to enable
traditional stochastic gradient optimization algorithms (e.g., ADAM) to escape
local minima effectively. Additionally, the NSGA-II algorithm enables
satisfying the initial and boundary conditions encoded into the loss function
during physics-informed training precisely. We demonstrate the effectiveness of
our framework by applying NSGA-PINN to several ordinary and partial
differential equation problems. In particular, we show that the proposed
framework can handle challenging inverse problems with noisy data. | Binghang Lu, Christian B. Moya, Guang Lin | 2023-03-03T21:24:51Z | http://arxiv.org/abs/2303.02219v2 | # NSGA-PINN: A Multi-Objective Optimization Method for Physics-Informed Neural Network Training
###### Abstract
This paper presents NSGA-PINN, a multi-objective optimization framework for effective training of Physics-Informed Neural Networks (PINNs). The proposed framework uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) to enable traditional stochastic gradient optimization algorithms (e.g., ADAM) to escape local minima effectively. Additionally, the NSGA-II algorithm enables satisfying the initial and boundary conditions encoded into the loss function during physics-informed training precisely. We demonstrate the effectiveness of our framework by applying NSGA-PINN to several ordinary and partial differential equation problems. In particular, we show that the proposed framework can handle challenging inverse problems with noisy data.
Keywords: Machine learning · Data-driven scientific computing · Multi-Objective Optimization
## 1 Introduction
Physics-informed neural networks (PINNs), as proposed in the seminal paper by Raissi et al. in [1], have recently gained significant attention in the scientific machine-learning community. This is due to their ability to accelerate the simulation and discovery of complex dynamical systems. PINNs infer the unknown solution (i.e., the physics of interest) by incorporating the underlying governing equations of the system into the training loss function [2]. As a result, they have enabled solving various problems modeled by ordinary and partial differential equations (PDEs) [3][4], which can be challenging for standard numerical approaches. Furthermore, the physics-informed framework has successfully tackled challenging inverse problems [5][6] by combining PINNs with data (i.e., scattered measurements of the states).
PINNs incorporate multiple loss terms, including the residual loss, the initial loss, the boundary loss, and, if needed, a data loss for inverse problems. The most common approach to training PINNs is to optimize the total PINN loss function using standard stochastic gradient descent (SGD) [7][8][9] methods such as Adam, as discussed in Section 2.2. However, with SGD methods, optimizing the highly non-convex loss functions [10][11] that arise in neural network training can be challenging, since there is a risk of getting trapped in numerous sub-optimal local minima, particularly when solving inverse problems or dealing with noisy data [12]. Additionally, SGD only allows satisfying initial and boundary conditions as soft constraints, which may limit the use of PINNs in the design, optimization, and control of complex systems.
Inspired by multi-objective optimization algorithms [13][14][15], we approach the above problems by considering the training of PINNs as a multi-objective optimization problem. We propose the NSGA-PINN framework to escape
local minima and satisfy the system's constraints, such as initial and boundary conditions, as hard constraints [16]. Specifically, we use the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) [17] to exactly satisfy the losses and help the SGD methods escape local minima.
Our experimental results on inverse ordinary differential equation (ODE) and partial differential equation (PDE) problems show that the NSGA-PINN algorithm is effective in converging to an optimal solution with excellent generalization capabilities and is robust to noise.
The rest of the paper is organized as follows. First, in Section 2, we provide a brief introduction to the following background information: PINN, SGD method, and NSGA-II algorithm. Then, in Section 3, we describe our proposed NSGA-PINN method. In Section 4, we present experimental results using inverse ODE problem and PDE problems to study the behavior of NSGA-PINN. We also test the robustness of our method in the presence of noisy data. Our results are discussed in Section 5. Finally, we conclude the paper in Section 6.
## 2 Background
This section introduces the physics-informed neural network (PINN) framework, the stochastic gradient descent (SGD) method, and the non-dominated sorting genetic algorithm (NSGA-II) algorithm.
### Physics-Informed Neural Networks
Consider computing data-driven solutions to partial differential equations (PDEs) of the general form:
\[u_{t}+\mathscr{N}[u;\lambda]=0,\quad x\in\Omega,\;t\in[0,T] \tag{1}\]
Here, \(u\) represents the solution of the PDE, \(\Omega\subset\mathbb{R}^{d}\) represents the spatial domain, and \(\mathscr{N}[\cdot;\lambda]\) denotes a differential operator.
The goal of PINN is to learn a parametric surrogate \(u_{\theta}\) with trainable parameters \(\theta\) that approximates the solution \(u\). To achieve this goal, a neural network is constructed and the total loss function of PINN is minimized. The total loss function consists of several components: residual loss, initial loss, boundary loss, and data loss, i.e,
\[\mathcal{L}_{total}=w_{f}\mathcal{L}_{res}+w_{g}\mathcal{L}_{ics}+w_{j} \mathcal{L}_{bc}+w_{h}\mathcal{L}_{data}. \tag{2}\]
We use the coefficients \(w\) to balance the loss terms. Each loss term is calculated as a mean squared error, i.e., a discrete \(L^{2}\) norm [18][19][20]. In particular, \(\mathcal{L}_{res}\) denotes the residual loss, which is the difference between the exact PDE solution and the predicted value from the PINN deep neural network (DNN) [21][22]:
\[\mathcal{L}_{res}=\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}\lVert u_{\theta}(x_{i}^{r })-u(x_{i}^{r})\rVert^{2}\]
In the above, we only considered the problem in the spatial domain for the sake of simplicity. Extending it to the temporal domain is straightforward. Moreover, \(u_{\theta}(x_{i}^{r})\) represents the output value of the PINN DNN on a set of \(N_{r}\) points sampled within the spatial domain \(\Omega\). This can be computed using automatic differentiation methods [23]. On the other hand, \(u(x_{i}^{r})\) denotes the true solution of the PDE.
The initial or boundary loss represents the difference between the true solution and the predicted value from the PINN DNN at the initial or boundary condition. For instance, the boundary loss (for a given boundary condition \(h\)) at a set of \(N_{b}\) boundary points \(x_{i}^{b}\) is defined as follows:
\[\mathcal{L}_{bc}=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}||u_{\theta}(x_{i}^{b})-h(x _{i}^{b})||^{2}\]
Furthermore, if we tackle inverse problems and have a set of \(N_{d}\) experimental data points \(y_{i}^{d}\), we can calculate the data loss using the PDE equation as follows:
\[\mathcal{L}_{data}=\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}||u_{\theta}(x_{i}^{d})-y _{i}^{d}||^{2}\]
As shown in Equation 2, the loss value of a physics-informed neural network (PINN) is calculated as a simple linear combination with soft constraints. To the authors' knowledge, no optimization methods are available to treat the PINN loss function as a multi-objective optimization problem.
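As a concrete illustration of assembling such losses, the PyTorch sketch below computes the residual and boundary terms for the Burgers equation used later in Section 4.3 (\(u_{t}+uu_{x}=\nu u_{xx}\)); tensor names and the model interface are assumptions, not the authors' implementation.

```python
import math
import torch

def pinn_losses(model, x_r, t_r, x_b, t_b, u_b, nu=0.01 / math.pi):
    """Residual and boundary losses for the Burgers equation (sketch)."""
    x_r = x_r.requires_grad_(True)
    t_r = t_r.requires_grad_(True)
    u = model(torch.cat([x_r, t_r], dim=1))
    grad = lambda y, z: torch.autograd.grad(y, z, torch.ones_like(y), create_graph=True)[0]
    u_t, u_x = grad(u, t_r), grad(u, x_r)
    res = u_t + u * u_x - nu * grad(u_x, x_r)        # PDE residual u_t + u u_x - nu u_xx
    loss_res = (res ** 2).mean()                     # residual loss at collocation points
    loss_bc = ((model(torch.cat([x_b, t_b], dim=1)) - u_b) ** 2).mean()  # boundary loss
    return loss_res, loss_bc
```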
### Stochastic Gradient Descent
Stochastic Gradient Descent (SGD) and its variants, such as Adam, are the most commonly used optimization methods for training neural networks (NN). SGD uses mini-batches of data, which are subsets of data points randomly selected from the training dataset. This injects noise into the updates during neural network training, enabling exploration of the non-convex loss landscape. The optimization problem for SGD can be written as follows:
\[\theta^{*}=\arg\min_{\theta}\mathcal{L}(\theta;\mathcal{T}). \tag{3}\]
This paper focuses on a variant of Stochastic Gradient Descent (SGD) known as the Adaptive Moment Estimation (Adam) optimizer. Adam is among the most popular and efficient optimizers used in deep learning. The optimizer requires only first-order gradients and has a very small memory footprint. It results in effective neural network (NN) training and generalization.
However, applying SGD methods to Physics-Informed Neural Network (PINN) training presents inevitable challenges. For complex non-convex PINN loss functions, SGD methods can get stuck in a local minimum, particularly when solving inverse problems with PINNs or dealing with noisy data.
### NSGA-II algorithm
The Non-dominated Sorting Genetic Algorithm II (NSGA-II), proposed by Deb et al. [17], is a fast and elitist algorithm for solving multi-objective optimization problems. Like other evolutionary algorithms (EAs) [24][25], NSGA-II mainly consists of a parent population and genetic operators such as crossover, mutation, and selection. To solve multi-objective problems, the NSGA-II algorithm uses non-dominated sorting to assign a front value to each solution and calculates the density of each solution in the population using the crowding distance. It then uses crowded binary selection to choose the best solutions based on the front and density values. We use these functions in our NSGA-PINN method and explain them in detail in Section 3.
## 3 The NSGA-PINN Framework
This section describes the proposed NSGA-PINN framework for multi-objective optimization-based training of a Physics-Informed Neural Network (PINN).
### Non-dominated Sorting
The proposed NSGA-PINN utilizes non-dominated sorting (see Algorithm 1 for details) during PINN training. The input \(P\) can consist of multiple objective functions, or loss functions, depending on the problem setting. For a simple ODE problem, these objective functions may include a residual loss function, an initial loss function, and a data loss function (if experimental data is available and we are tackling an inverse problem). Similarly, for a PDE problem, the objective functions may include a residual loss function, a boundary loss function, and a data loss function.
In EAs, the solutions are the elements of the parent population. Consider two solutions \(p\) and \(q\) from the parent population: \(p\) is said to dominate \(q\) if \(p\) is no worse than \(q\) in every objective (loss) function and strictly better in at least one. For each element \(p\) in the parent population, we calculate two entities: 1) the domination count \(n_{p}\), the number of solutions that dominate solution \(p\), and 2) \(S_{p}\), the set of solutions that solution \(p\) dominates. Solutions with a domination count of \(n_{p}=0\) form the first front. We then visit \(S_{p}\) and, for each solution in it, decrease its domination count by 1. The solutions whose domination count reaches 0 form the second front. By repeating this process, the non-dominated sorting algorithm assigns a front value to each solution [17].
### Crowding-Distance Calculation
In addition to achieving convergence to the Pareto-optimal set for multi-objective optimization problems, it is important for an evolutionary algorithm (EA) to maintain a diverse range of solutions within the obtained set. We implement the Crowding-distance calculation method to estimate the density of each solution in the population. To do this, first, sort the population according to each objective function value in ascending order. Then, for each objective function, assign infinite distance values to the boundary solutions, and assign all other intermediate solutions a distance equal to the absolute normalized difference in function values between two adjacent solutions. The overall crowding-distance value is calculated as the sum of individual distance values corresponding to each objective. A higher density value represents a solution that is far away from other solutions in the population.
```
inputs: P
for p ∈ P do
    n_p = 0        // domination count: number of solutions that dominate p
    S_p = []       // set of solutions dominated by p
    for q ∈ P do
        if p < q then
            S_p = S_p ∪ {q}        // q is dominated by p
        else if q < p then
            n_p = n_p + 1
    if n_p = 0 then
        p_rank = 1
        F_1 = F_1 ∪ {p}            // assign p to the first front
i = 1
while F_i ≠ ∅ do
    Q = ∅                          // used to store the members of the next front
    for p ∈ F_i do
        for q ∈ S_p do
            n_q = n_q - 1
            if n_q = 0 then
                q_rank = i + 1
                Q = Q ∪ {q}        // q is in the next front
    i = i + 1
    F_i = Q
```
**Algorithm 1**Non-dominated sorting
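A compact Python rendering of Algorithm 1, operating on an array of per-solution objective values, might look as follows (a sketch, not the authors' implementation):

```python
import numpy as np

def fast_nondominated_sort(losses):
    """losses: (n, m) array, one row of m objective values per solution.
    Returns a list of fronts, each a list of solution indices."""
    n = losses.shape[0]
    S = [[] for _ in range(n)]       # solutions dominated by p
    counts = np.zeros(n, dtype=int)  # number of solutions dominating p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if np.all(losses[p] <= losses[q]) and np.any(losses[p] < losses[q]):
                S[p].append(q)       # p dominates q
            elif np.all(losses[q] <= losses[p]) and np.any(losses[q] < losses[p]):
                counts[p] += 1       # q dominates p
        if counts[p] == 0:
            fronts[0].append(p)      # first (non-dominated) front
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                counts[q] -= 1
                if counts[q] == 0:
                    nxt.append(q)    # q belongs to the next front
        i += 1
        fronts.append(nxt)
    return fronts[:-1]
```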
### Crowded Binary Tournament Selection
```
inputs: N
mating pool = []
divide the solutions into N/2 arrays, each containing 2 elements
while size(mating pool) ≠ N do
    for ele in arrays do
        if F_ele[0] < F_ele[1] then
            mating pool ← ele[0]          // select the element with the lower front value
        else if F_ele[0] = F_ele[1] then
            if D_ele[0] > D_ele[1] then
                mating pool ← ele[0]      // select the element with the higher density value
            else if D_ele[0] < D_ele[1] then
                mating pool ← ele[1]
            else
                mating pool ← random(ele[0], ele[1])
        else
            mating pool ← ele[1]          // ele[1] has the lower front value
```
**Algorithm 2**Crowded binary tournament selection
The Crowded binary tournament selection, explained in more detail in Algorithm 2, was used to select the best physics-informed neural network (PINN) models for the mating pool and further operations. Before implementing this selection method, we labeled each PINN model so that we could track the one with the lower loss value. The population of size \(n\) was then randomly divided into \(n/2\) groups, each containing two elements. For each group, we compared the two elements based on their front and density values. We preferred the element with a lower front value and a higher density value. In Algorithm 2, \(F\) denotes the front value and \(D\) denotes the density value.
```
hyper-parameters: parent population size N, max generation number α
count = 0
initialize the parent set S with N neural networks
while count < α do
    f1 ← res(nn) for nn in S          // residual losses
    f2 ← ics(nn) for nn in S          // initial-condition losses
    f3 ← data(nn) for nn in S         // data losses
    R_t = P_t ∪ Q_t                   // combine parent and offspring populations
    F_i = non_dominated_sorting(f1, f2, f3)
    crowding_distance_sorting(F_i)
    mating pool ← crowded_binary_selection(F_i, ≺_n)
    Q_t ← ADAM_optimizer(mating pool(i))
    count = count + 1
```
**Algorithm 3**Training PINN by NSGA-PINN method
### NSGA-PINN Main Loop
The main loop of the proposed NSGA-PINN method is described in Algorithm 3. The algorithm first initializes the number of PINNs to be used (\(N\)) and sets the maximum number of generations (\(\alpha\)) to terminate the algorithm. Then, the PINN pool is created with \(N\) PINNs. For each loss function in a PINN, \(N\) loss values are obtained from the network pool. When there are three loss functions in a PINN, \(3N\) loss values are used as the parent population. The population is sorted based on non-domination, and each solution is assigned a fitness (or rank) equal to its non-domination level [17]. The density of each solution is estimated using crowding-distance sorting. Then, by performing a crowded binary tournament selection, PINNs with lower front values and higher density values are selected to be put into the mating pool. In the mating pool, the ADAM optimizer is used to further reduce the loss value. The NSGA-II algorithm selects the PINN with the lowest loss value as the starting point for the ADAM optimizer. By repeating this process many times, the proposed method helps the ADAM optimizer escape local minima.
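Putting the pieces together, one generation of the main loop can be sketched as follows, under simplifying assumptions: `loss_terms(nn)` returns the differentiable objective values of a PINN, selection keeps the better-ranked half (crowding-distance tie-breaking omitted for brevity), and the Adam-refined pool is duplicated back to the full population size.

```python
import copy
import numpy as np
import torch

def nsga_pinn_generation(pinns, loss_terms, adam_epochs=100, lr=1e-3):
    """One generation of the NSGA-PINN main loop (sketch)."""
    losses = np.array([[t.item() for t in loss_terms(nn)] for nn in pinns])
    fronts = fast_nondominated_sort(losses)          # from the sketch above
    rank = np.empty(len(pinns), dtype=int)
    for i, front in enumerate(fronts):
        rank[front] = i
    keep = np.argsort(rank, kind="stable")[: len(pinns) // 2]   # mating pool
    pool = [copy.deepcopy(pinns[i]) for i in keep]
    for nn in pool:                                  # Adam refinement of the pool
        opt = torch.optim.Adam(nn.parameters(), lr=lr)
        for _ in range(adam_epochs):
            opt.zero_grad()
            total = sum(loss_terms(nn))              # scalar sum of the objectives
            total.backward()
            opt.step()
    return pool + [copy.deepcopy(nn) for nn in pool]  # next-generation population
```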
## 4 Numerical Experiments
This section evaluates the performance of physics-informed neural networks (PINN) trained with the proposed NSGA-PINN algorithm. We tested our framework on both ordinary differential equation (ODE) and partial differential equation (PDE) problems. Our proposed method is implemented using the PyTorch library. For each problem, we compared the loss values of each component of the PINN trained with the NSGA-PINN algorithm to the loss values obtained from the PINN trained with the ADAM method, using the same neural network structure and hyperparameters. To test the robustness of the proposed NSGA-PINN algorithm, we added noise to the experimental data used in each inverse problem.
### Inverse pendulum problem
The algorithm was first used to train PINN on the inverse pendulum problem without noise. The pendulum dynamics are described by the following initial value problem (IVP):
\[\dot{\theta}(t)=\omega(t) \tag{4}\] \[\dot{\omega}(t)=-k\sin\theta(t)\]
where the initial condition is sampled as \(\left(\theta(0),\omega(0)\right)=\left(\theta_{0},\omega_{0}\right)\in[-\pi,\pi]\times[0,\pi]\), and the true value of the unknown parameter is \(k=1.0\).
Our goal is to approximate the mapping using a surrogate physics-informed neural network: \(\theta_{0},\omega_{0},t\mapsto\theta(t),\omega(t)\). For this example, we used a neural network for PINN consisting of 3 hidden layers and 100 neurons in each layer. The PINN training loss for the neural network is defined as follows:
\[\mathcal{L}=\mathcal{L}_{res}+\mathcal{L}_{ics}+\mathcal{L}_{data}.\]
To determine the total loss in this problem, we add the residual loss, initial loss, and data loss. We calculate the data loss using the time mesh \(t_{s}\), which ranges from 0 to 1 (seconds) with a step size of 0.01; the ODE states evaluated on this mesh serve as the measurements for the data loss. For this problem, we set the parent population to 20 and the maximum number of generations to 20 in our NSGA-PINN.
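A PyTorch sketch of the corresponding residual loss with a learnable \(k\) is shown below; the network follows the stated 3-hidden-layer, 100-neuron architecture, while the initialization of \(k\) and the interface are assumptions.

```python
import torch
import torch.nn as nn

class PendulumPINN(nn.Module):
    """Sketch: maps (theta0, omega0, t) to (theta(t), omega(t)) with learnable k."""

    def __init__(self, hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2))
        self.k = nn.Parameter(torch.tensor(0.5))   # unknown ODE parameter (assumed init)

    def residual(self, theta0, omega0, t):
        t = t.requires_grad_(True)
        out = self.net(torch.cat([theta0, omega0, t], dim=1))
        theta, omega = out[:, :1], out[:, 1:]
        d = lambda y: torch.autograd.grad(y, t, torch.ones_like(y), create_graph=True)[0]
        r1 = d(theta) - omega                       # enforces theta_dot = omega
        r2 = d(omega) + self.k * torch.sin(theta)   # enforces omega_dot = -k sin(theta)
        return (r1 ** 2).mean() + (r2 ** 2).mean()
```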
In the course of our experiment, we tested various methods, which are illustrated in Figure 1. Based on our observations, we found that the Adam optimizer did not yield better results after 400 epochs, as the loss value remained around 1e-1. We also tried the NSGA-II algorithm for PINN, which introduced some diversity to prevent the algorithm from getting stuck at local minima, but the loss value was still around 4.0. Ultimately, we implemented our proposed NSGA-PINN algorithm to train PINN on this problem, resulting in a significant improvement with a loss value of 1e-5.
To gain a clear understanding of the differences in loss values between optimization methods, we collected numerical loss values from our experiment. For the NSGA and NSGA-PINN methods, loss values were calculated as the average since they are obtained using ensemble methods through multiple runs. Our observations revealed that the total loss value of PINN trained with the traditional Adam optimizer decreased to 1.935e-04. However, by training with the NSGA-PINN method, the loss value decreased even further to 1.9760e-06, indicating improved satisfaction with the initial-condition constraints.
In Figure 2, we compare the predicted angle and velocity state values to the true values to analyze the behavior of the proposed NSGA-PINN method. The top figure shows how accurately the predicted values match the true values, illustrating the successful performance of our algorithm. At the bottom of the figure, we observe the predicted value of the parameter \(k\), which agrees with the true value of \(k=1\). This result was obtained after running our NSGA-PINN algorithm for three generations.
\begin{table}
\begin{tabular}{l|c c c c} \hline
**Methods** & **residual loss** & **Initial loss** & **data loss** & **total loss** \\ \hline
ADAM & 0.00013 & 3.24e-05 & 3.11e-05 & 1.935e-04 \\
NSGA & 0.12 & 2.67 & 4.10 & 6.89 \\
NSGA-PINN & 8.96e-07 & 5.58e-07 & 5.22e-07 & 1.9760e-06 \\ \hline
\end{tabular}
\end{table}
Table 1: Inverse pendulum problem: Each loss value from NN trained by using different training methods.
Figure 1: Inverse pendulum problem: panels, from left to right, show each loss component under the different training methods.
### Inverse pendulum problem with noisy data
In this section, we introduced Gaussian noise to the experimental data collected for the inverse problem. The noise was sampled from the Gaussian distribution:
\[P(x)=\frac{1}{\sigma\sqrt{2\pi}}e^{-(x-\mu)^{2}/2\sigma^{2}} \tag{5}\]
For this experiment, we chose to set the mean value (\(\mu\)) to 0 and the standard deviation of the noise (\(\sigma\)) to 0.1.
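Injecting this noise is a one-liner; in the sketch below the clean trajectory is a stand-in array, and \(\mu=0\), \(\sigma=0.1\) follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 101)
theta_clean = np.sin(t)                 # stand-in trajectory, illustrative only
theta_noisy = theta_clean + rng.normal(loc=0.0, scale=0.1, size=theta_clean.shape)
```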
As depicted in Figure 3, we trained the PINN model using the Adam optimizer. However, we encountered an issue where the loss value failed to decrease after 400 epochs. This suggested that the optimizer had become stuck in a local minimum, which is a common problem associated with the Adam optimizer when presented with noise.
To address this issue, we implemented the proposed NSGA-PINN method, resulting in significant improvements. Specifically, by increasing the diversity of the NSGA population, we were able to escape the local minimum and converge to a better local optimum where the initial condition constraints were more effectively satisfied.
Figure 3: Inverse pendulum problem with data noise: panels, from left to right, show each loss component under the different training methods with noisy data.
Figure 2: Inverse pendulum problem: the top panel compares the true values with the values predicted by the PINN trained with the NSGA-PINN method. The bottom panel shows the prediction of the constant \(k\).
By examining Table 2, we can see a clear numerical difference between the two methods. Specifically, the table shows that the PINN trained by the ADAM method has a total loss value of 0.017, while the PINN trained by the proposed NSGA-PINN method has a total loss value of 0.0124.
Finally, in Figure 4, we quantify uncertainty using an ensemble of predictions from our proposed method. This ensemble allows us to compute the 95% confidence interval, providing a visual estimate of the uncertainty. To calculate the mean value, we averaged the predicted solutions from an ensemble of 100 PINNs trained by the NSGA-PINN algorithm. Our observations indicate that the mean is close to the solution, demonstrating the effectiveness of the proposed method. When comparing the predicted trajectory from the PINN trained with the NSGA-PINN algorithm to the one trained with the ADAM method, we found that the NSGA-PINN algorithm yields results closer to the real solution in this noisy scenario.
### Burgers equation
This experiment uses the Burgers equation to study the effectiveness of the proposed NSGA-PINN algorithm on a PDE problem. The Burgers equation is defined as follows:
\[\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=\nu\frac{\partial^{2}u}{\partial x^{2}},\quad x\in[-1,1],\;t\in[0,1] \tag{6}\] \[u(0,x)=-\sin(\pi x)\] \[u(t,-1)=u(t,1)=0\]
Here, \(u\) is the PDE solution, \(\Omega=[-1,1]\) is the spatial domain, and \(\nu=0.01/\pi\) is the diffusion coefficient.
The nonlinearity in the convection term causes the solution to become steep, due to the small value of the diffusion coefficient \(\nu\). To address this problem, we utilized a neural network for PINN consisting of 8 hidden layers with 20 neurons each. The hyperbolic tangent activation function was used in each layer. We sampled 100 data points on the boundaries and 10,000 collocation data points for PINN training.
For the proposed NSGA-PINN method, the original population size was set to 20 neural networks, and the algorithm ran for 20 generations. The loss function in the Burgers' equation can be defined as follows:
\[\mathcal{L}=\mathcal{L}_{u}+\mathcal{L}_{b}+\mathcal{L}_{ics}.\]
\begin{table}
\begin{tabular}{l|c c c c} \hline
**Methods** & **residual loss** & **Initial loss** & **data loss** & **total loss** \\ \hline
ADAM & 0.0028 & 0.0028 & 0.0114 & 0.017 \\
NSGA-PINN & 0.0015 & 0.0020 & 0.0089 & 0.0124 \\ \hline
\end{tabular}
\end{table}
Table 2: Inverse pendulum problem with data noise: each loss value from NN trained by different training methods with noisy data.
Figure 4: Inverse pendulum problem with data noise: comparison of the results from PINNs trained by the NSGA-PINN method and the ADAM method with noisy input data.
Here, the total loss value is the combination of the residual loss, the initial condition loss, and the boundary loss.
We can observe the effectiveness of the proposed NSGA-PINN algorithm by examining the loss values depicted in Figure 5 and Table 3. In particular, Table 3 compares the loss values of PINNs trained by the NSGA-PINN algorithm and the traditional ADAM method. The results show that the ADAM and NSGA-PINN algorithms produce almost identical loss values.
Finally, Figure 6 displays contour plots of the solution to Burgers' equation. The top figure shows the result predicted using the proposed NSGA-PINN algorithm. The bottom row compares the exact value with the values from the proposed algorithm and the ADAM method at t = 0.25, 0.50, and 0.75. Based on this comparison, both the NSGA-PINN algorithm and the ADAM method predict values that are close to the true values.
### Burgers equation with noisy data
In this experiment, we evaluate the effectiveness of the NSGA-PINN algorithm when applied to noisy data and the Burgers equation. We compare the results obtained from the proposed algorithm with those obtained using the ADAM optimization algorithm. To simulate a noisy scenario, Gaussian noise was added to the experimental/input data. We sampled the noise from a Gaussian distribution with a mean value (\(\mu\)) of 0.0 and a standard deviation (\(\sigma\)) of 0.2.
\begin{table}
\begin{tabular}{l|c c c} \hline
**Methods** & **residual loss** & **boundary loss** & **total loss** \\ \hline
ADAM & 0.0045 & 0.0481 & 0.0526 \\
NSGA-PINN & 0.0006 & 0.0473 & 0.0479 \\ \hline
\end{tabular}
\end{table}
Table 4: Burgers equation with data noise: Comparison of the loss value from NN trained by ADAM method and NSGA-PINN method for Burgers equation with noisy data.
Figure 5: Burgers equation: panels, from left to right, show each loss component under the different training methods.
\begin{table}
\begin{tabular}{l|c c c} \hline
**Methods** & **residual loss** & **boundary loss** & **total loss** \\ \hline
ADAM & 0.0002 & 9.4213e-05 & 0.0003 \\
NSGA-PINN & 0.0001 & 7.0643e-05 & 0.0002 \\ \hline
\end{tabular}
\end{table}
Table 3: Burgers equation: Comparison of the loss value from NN trained by ADAM method and NSGA-PINN method for Burgers equation.
We analyzed the effectiveness of the proposed NSGA-PINN method with noisy data. Specifically, Figure 7 and Table 4 illustrate the corresponding loss values. It is worth noting that, while the PINN trained with ADAM no longer improves after 5000 epochs and reaches a final loss value of 0.0526, training the PINN with the proposed algorithm for 20 generations results in a reduced total loss of 0.0479.
Figure 6: Burgers equation: the top panel shows contour plots of the solution of the Burgers equation. The lower panels compare the exact values with the predicted values at different time points.
Figure 7: Burgers equation with data noise: the left column shows the residual loss. The right column shows the boundary loss.
Finally, Figure 8 shows the results of the PINN trained by the NSGA-PINN method with noisy data. The top figure shows a smooth transition over space and time. The lower figures compare the true value with the predicted value for the PINN trained by the proposed method and the traditional ADAM optimization algorithm. The results demonstrate that the prediction from a PINN trained by NSGA-PINN approaches the true value of the PDE solution more closely.
### Test survival rate
In this final experiment, we conducted further tests to verify the feasibility of our algorithm. Specifically, we calculated the survival rate between each generation to determine if the algorithm was learning and using the learned results as a starting point for the next generation.
The experiment consisted of the following steps: First, we ran the full NSGA-PINN method 50 times. Then, for each run, we calculated the survival rate between each pair of consecutive generations using the following formula:
\[S=Q_{i}/P_{i}. \tag{7}\]
Here, \(Q_{i}\) represents the number of offspring from the previous generation, and \(P_{i}\) represents the size of the parent population in the current generation. Finally, to obtain a relatively robust estimate of the trend in survival rate, we average the survival rate between each pair of consecutive generations over all runs as the algorithm progresses.
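Averaging Eq. (7) over the 50 runs can be sketched as follows (array names are illustrative):

```python
import numpy as np

def mean_survival_rates(Q, P):
    """Eq. (7) averaged over runs: Q, P are (runs, generations) count arrays."""
    return (np.asarray(Q) / np.asarray(P)).mean(axis=0)   # one rate per generation
```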
Figure 9 shows that the survival rate increases as the algorithm progresses. The survival rate of the first two generations is approximately 50%, but by the end of the algorithm, it improves to 71%. This indicates that our algorithm is progressively learning as subsequent generations are generated, which significantly enhances PINN training.
## 5 Discussion
The experimental results in the previous section showed promising outcomes for training PINNs using the proposed NSGA-PINN method. As described in Section 4, when solving the inverse problem using the traditional Adam optimizer, the algorithm became trapped in a local optimum after running for 400 epochs. However, by using the NSGA-PINN method, the loss value continued to decrease, and the predicted solution was very close to the true value. Additionally, when dealing with noisy data, the traditional Adam optimizer had difficulty learning quickly and making accurate predictions. On the other hand, the proposed NSGA-PINN algorithm learned efficiently and converged to a better local optimum for generalization purposes.
Figure 8: Burgers equation with data noise: the top panel shows the contour plot of the solution of the Burgers equation. The lower panels compare the exact values with the predicted values at different time points.
However, the main drawback of the proposed method is that it requires an ensemble of neural networks (NNs) during training. Consequently, the proposed NSGA-PINN incurs a larger computational cost than traditional stochastic gradient descent methods. Therefore, reducing the computational cost of NSGA-PINN is a goal for our future work. For instance, some of the training computational cost could be mitigated by using parallelization. Additionally, we will attempt to derive effective methods for finding the best trade-off between NSGA and Adam.
More specifically, in our future work, we will focus on balancing the parent population (\(N\)), max generation number (\(\alpha\)), and number of epochs used in the Adam optimizer. These values are manually initialized in the proposed method. The parent population determines the diversity in the algorithm, and we ideally want high diversity. The max generation number determines the total learning time. Increasing this time allows the algorithm to continue learning from previous generations, but it may lead to overfitting if the number is too large. Note that there is a trade-off between the max generation number and the epoch number used in the Adam optimizer. A higher generation number allows the NSGA algorithm to perform better, helping the Adam optimizer escape local optima, but this comes at a higher computational cost. Meanwhile, increasing the number of epochs used in the Adam optimizer helps the model decrease the loss value quickly, but it reduces the search space and may lead to the algorithm becoming trapped in local minima.
## 6 Conclusion
In this paper, we proposed a novel multi-objective optimization method called NSGA-PINN for training physics-informed neural networks. Our approach involves using the Non-dominated Sorting Genetic Algorithm (NSGA) to handle each component of the training loss in PINN. This allows us to achieve better results in terms of inverse problems, noisy data, and satisfying constraints. We demonstrated the effectiveness of NSGA-PINN by applying it to several ordinary and partial differential equation inverse problems. Our results show that the proposed framework can handle challenging noisy scenarios.
|
2302.06356 | Detection and Segmentation of Pancreas using Morphological Snakes and
Deep Convolutional Neural Networks | Pancreatic cancer is one of the deadliest types of cancer, with 25% of the
diagnosed patients surviving for only one year and 6% of them for five.
Computed tomography (CT) screening trials have played a key role in improving
early detection of pancreatic cancer, which has shown significant improvement
in patient survival rates. However, advanced analysis of such images often
requires manual segmentation of the pancreas, which is a time-consuming task.
Moreover, pancreas presents high variability in shape, while occupying only a
very small area of the entire abdominal CT scans, which increases the
complexity of the problem. The rapid development of deep learning can
contribute to offering robust algorithms that provide inexpensive, accurate,
and user-independent segmentation results that can guide the domain experts.
This dissertation addresses this task by investigating a two-step approach for
pancreas segmentation, by assisting the task with a prior rough localization or
detection of pancreas. This rough localization of the pancreas is provided by
an estimated probability map and the detection task is achieved by using the
YOLOv4 deep learning algorithm. The segmentation task is tackled by a modified
U-Net model applied on cropped data, as well as by using a morphological active
contours algorithm. For comparison, the U-Net model was also applied on the
full CT images, which provide a coarse pancreas segmentation to serve as
reference. Experimental results of the detection network on the National
Institutes of Health (NIH) dataset and the pancreas tumour task dataset within
the Medical Segmentation Decathlon show 50.67% mean Average Precision. The best
segmentation network achieved good segmentation results on the NIH dataset,
reaching 67.67% Dice score. | Agapi Davradou | 2023-02-13T13:43:50Z | http://arxiv.org/abs/2302.06356v1 | Detection and Segmentation of Pancreas using Morphological Snakes and Deep Convolutional Neural Networks
###### Abstract
Pancreatic cancer is one of the deadliest types of cancer, with 25% of the diagnosed patients surviving for only one year and 6% of them for five. Computed tomography (CT) screening trials have played a key role in improving early detection of pancreatic cancer, which has shown significant improvement in patient survival rates. However, advanced analysis of such images often requires manual segmentation of the pancreas, which is a time-consuming task. Moreover, pancreas presents high variability in shape, while occupying only a very small area of the entire abdominal CT scans, which increases the complexity of the problem. The rapid development of deep learning can contribute to offering robust algorithms that provide inexpensive, accurate, and user-independent segmentation results that can guide the domain experts. This dissertation addresses this task by investigating a two-step approach for pancreas segmentation, by assisting the task with a prior rough localization or detection of pancreas. This rough localization of the pancreas is provided by an estimated probability map and the detection task is achieved by using the YOLOv4 deep learning algorithm. The segmentation task is tackled by a modified U-Net model applied on cropped data, as well as by using a morphological active contours algorithm. For comparison, the U-Net model was also applied on the full CT images, which provide a coarse pancreas segmentation to serve as reference. Experimental results of the detection network on the National Institutes of Health (NIH) dataset and the pancreas tumour task dataset within the Medical Segmentation Decathlon show 50.67% mean Average Precision. The best segmentation network achieved good segmentation results on the NIH dataset, reaching 67.67% Dice score.
**Keywords:** Pancreatic cancer, morphological snakes, deep learning, segmentation, convolutional neural networks
## 1 Introduction
Cancer is the second leading cause of death worldwide, accounting for an estimated 9.6 million deaths in 2018 [1]. Globally, about 1 in 6 deaths is caused by cancer and pancreatic cancer is the seventh highest cause of death from it [1]. In most cases, the symptoms are not visible until the disease has reached an advanced stage. Due to its very poor prognosis, after diagnosis, 25% of people survive only one year and the 5-year survival rate is only 6% [2].
This highlights the need for improved screening modalities and early detection. In this context, radiomics, the process of applying machine learning algorithms to extract meaningful information and automate image analysis processes from computed tomography (CT) scans or magnetic resonance imaging (MRI), has become a very active research area. This allows the noninvasive characterization of lesions and the assessment of their progression and possible response to therapies. However, the application of such techniques, besides requiring high quality and reproducible data, often requires manual segmentation of the structures of interest, which is a time-consuming task. Moreover, manual segmentation is also user-dependent. Consequently, it benefits from automatic segmentation techniques.
Automatic pancreas segmentation still remains a very challenging task, as the target often occupies a very small fraction (e.g., \(<\) 0.5% of an abdominal CT) of the entire volume, has poorly-defined boundaries with respect to other tissues and suffers from high variability in shape and size.
The goal of this dissertation is to develop a model to automate pancreas segmentation in CT scans, in order to help the medical community to produce faster and more consistent results.
The next section describes related work, while Section 3 presents the proposed methods and implementation details. Section 4 details the obtained results. Finally, Section 5 summarizes the conclusions and presents directions for future work.
## 2 Related Work
### Deep Learning for Pancreas Segmentation
Early work on pancreas segmentation from abdominal CT used statistical shape models or multi-atlas techniques. In these approaches, the Dice similarity coefficient (DSC) or Dice score on the public National Institutes of Health (NIH) dataset would not exceed 75%. Therefore, convolutional neural networks (CNNs) have rapidly become the mainstream methodology for medical image segmentation. Despite their good representational power, it was observed that such deep segmentation networks are easily disrupted by the varying contents in the background regions, when detecting small organs such as the pancreas, and as a result produce less satisfying results. Taking that into consideration, a coarse-to-fine approach is commonly adopted. These cascaded frameworks extract regions of interest (RoIs) and make dense predictions on those particular RoIs. More specifically, the state-of-the-art methods primarily fall into two categories.
The first category is based on segmentation networks originally designed for 2D images, such as the fully convolutional networks and the U-Net [33]. The U-Net architecture utilizes up-convolutions to make use of low-level convolutional feature maps by projecting them back to the original image size, which delineates object boundaries with details. Many frameworks have used a variant of the U-Net architecture to segment the pancreas [25, 27] and others have made use of fully convolutional network (FCN) and U-Net in order to build more complex models. For example, Zhou et al.[49] finds the rough pancreas region and then a trained FCN-based fixed-point model refines the pancreas region iteratively. Roth et al. [36] first segment pancreas regions by holistically-nested networks and then refines them by the boundary maps obtained by robust spatial aggregation using random forests. In addition, the TernaryNet proposed by Heinrich et al. [17] applies sparse convolutions to the CT pancreas segmentation problem, which reduce the computational complexity by requiring a lower number of non-zero parameters. Combinations of convolutional neural networks with recurrent neural networks [46, 44, 5] have also been applied, as well as long short-term memory (LSTM) networks [24]. Recently, Zheng et al. [48] proposed a model, which can involve uncertainties in the process of segmentation iteratively, by utilizing the shadowed sets theory. In some cases, in order to incorporate spatial 3D contextual information through the 2D segmentation, slices along different views (axial, sagittal, and coronal) are used, by fusing the results of all 2D networks, e.g. through majority voting.
In the second category, the methods are based on 3D convolutional layers and therefore operations such as 2D convolution, 2D max-pooling, and 2D up-convolution are replaced by their 3D counterparts. Such networks are the 3D U-Net [9] (which is a 3D extension of the U-Net), the Dense V-Net [15], ResDSN [52], 3DFCN [37], and more [47]. In addition, OBELISK-Net proposed by Heinrich et al. [18] applies sparse convolutions to 3D U-Net, Oktay et al. [31] applies attention gates (AG) and Khosravan et al. [20] utilizes Projective Adversarial Network to perform pancreas segmentation. Finally, Zhu et al. [51] applies neural architecture search, to automatically find optimal network architectures between the 2D, 3D and pseudo3D convolutions.
### Active Contours and Morphological Snakes
Active contours, also known as "snakes", were first introduced in 1988 by Kass et al. [19]. These are energy-minimizing methods, which are used extensively in medical image processing, as a segmentation technique. The idea is to initialize a position for the contour, and then define image forces that act on the contour, making it change its position and adapt to the image's features. For instance, Kass et al. [19] define three energies: the internal energy (\(E_{int}\)), the image energy (\(E_{image}\)) and the constraint energy (\(E_{com}\)), which represent the internal energy of the snake due to bending, the image's energy (which takes into consideration, for example, edges and lines) and the energy of external constraint forces (which takes into consideration constraints created by the user), respectively. In this work, the authors propose different expressions for each energy, which are beyond the scope of this work and will not be presented. After defining all the energies, Kass et al. [19] present their active contour algorithm as the solution of the following energy minimization problem:
\[\begin{split} E_{snake}^{*}&:=\int_{0}^{1}E_{snake} \big{(}v(s)\big{)}\ ds\\ &:=\int_{0}^{1}E_{int}\big{(}v(s)\big{)}\\ &+E_{image}\big{(}v(s)\big{)}+E_{con}\big{(}v(s)\big{)}\ ds,\end{split} \tag{1}\]
where \(\int_{C}f(r)\ ds:=\int_{a}^{b}f\big{(}r(t)\big{)}\ \big{|}r^{\prime}(t) \big{|}\ dt\) is the line integral of \(f\) along a piecewise smooth curve \(C\), \(r:[a,b]\to C\) an arbitrary bijective parametrization
of the curve \(C\) such that \(r(a)\) and \(r(b)\) are the endpoints of \(C\), with \(a<b\).
Since the minimization of energy leads to dynamic behavior in the segmentation and because of the way the contours slither while minimizing energy, Kass et al. [19] called them snakes. It should also be noted that this minimization problem is generally not convex, which means that different snake initializations may lead to different segmentations. In order to overcome the problems of bad initialization and local minima, many variants of this method have been proposed, such as using a balloon force to encourage the contour expansion [11] or incorporating gradient flows [21]. Geometric models for active contours were also introduced, by Caselles et al. [6], Yezzi et al. [45], and more [41, 26], as well as a geodesic active contour (GAC) model by Caselles et al. [7]. The authors of [12] incorporated statistical shape knowledge in a single energy functional, by modifying the Mumford-Shah functional [30] and its cartoon limit, and Chan and Vese [8] introduced a region-based method which does not use an edge-function, but minimizes an energy that can be seen as a particular case of the minimal partition problem. Additional region-based methods were proposed, such as those by Li et al. [23], Zhu and Yuille [50], and Tsai et al. [43].
Inspired by the active contour evolution, a new framework for image segmentation was proposed in [3], which the authors called Morphological Snakes. Instead of computing the snake that minimizes \(E^{*}_{snake}\), Alvarez et al. [3] proposed a new approach, which focuses on finding the solution of a certain set of partial differential equations (PDEs). This approach also yields a snake-like curve and, because it approximates the numerical solution of the standard PDE snake model by the successive application of a set of morphological operators (such as dilation or erosion) defined on a binary level-set, it results in a much simpler, faster, and more stable curve evolution.
The same authors [28] have introduced morphological versions of two of the most popular curve evolution algorithms: morphological active contours without edges (MorphACWE) [8] and morphological geodesic active contours (MorphGAC) [7].
MorphACWE works well when pixel values of the inside and the outside regions of the object to segment have different average. It does not require that the contours of the object are well defined, and it can work over the original image without any preprocessing.
MorphGAC is suitable for images with visible contours, even when these contours are noisy, cluttered, or partially unclear. It requires, however, that the image is preprocessed to highlight the contours and the quality of the MorphGAC segmentation depends greatly on this preprocessing step.
Considering the application of this work, the MorphGAC algorithm was adopted.
## 3 Methodology
### Datasets
This work relied on two datasets, the pancreas tumour task dataset within the Medical Segmentation Decathlon [40] and the NIH (National Institutes of Health) dataset [34, 35, 10].
The Decathlon dataset is a recent multi-institutional effort to generate a large, open-source collection of annotated medical image datasets of various clinically relevant anatomies. The pancreas dataset was collected by the Memorial Sloan Kettering Cancer Center (New York, NY, USA) and contains 420 3D CT scans, 282 for training and 139 for testing. It consists of patients undergoing resection of pancreatic masses (intraductal mucous neoplasms, pancreatic neuroendocrine tumours, or pancreatic ductal adenocarcinoma). The CT scans have slice thickness of 2.5 mm and a resolution of 512\(\times\)512 pixels. An expert abdominal radiologist performed manual slice-by-slice segmentation of the pancreatic parenchyma and pancreatic mass (cyst or tumour), using the Scout application [13].
The NIH dataset contains 80 abdominal contrast-enhanced 3D CT scans from 53 male and 27 female subjects. It consists of healthy patients, since 17 of them are healthy kidney donors scanned prior to nephrectomy and the remaining 65 subjects were selected by a radiologist from patients who neither had major abdominal pathologies nor pancreatic cancer lesions. The CT scans have slice thickness between 1.5 and 2.5 mm and a resolution of 512\(\times\)512 pixels, with varying pixel sizes. The pancreas was manually segmented in each slice by a medical student and the segmentations were verified/modified by an experienced radiologist.
Due to computational power limit, only a certain amount of data was used. More specifically, the detection model was trained and evaluated using both datasets and the segmentation model was trained and evaluated using only the NIH dataset. More details are given in Sections 3.2.2 and 3.3.2, respectively. In addition, only the information from the transversal plane was taken into consideration. In order to evaluate the variability in the data, a probability map was recreated to define the most likely position of the pancreas. Moreover, the average Hounsfield range and percentage of the volume occupied by the pancreas was also evaluated. The results are presented in Section 4.1.
### Detection Network Architecture
#### 3.2.1 Pre-processing
Pre-processing is a crucial step in machine learning in order to improve the performance of the models. For the detection task, the intensity of all data was clipped between -200 and +300 HU, in order to capture the pancreas intensity range and highlight its boundaries. More specifically, this range was carefully chosen considering the data exploration results presented later in Section 4.1. The -125 to +225 HU intensity range was also tested for clipping, as well as using 16-bit input images, in order to provide the algorithm with more information. However, both proved to be inefficient and were not implemented.
#### 3.2.2 YOLOv4
In this work, the publicly available YOLOv4 [4] was deployed as a detection network. The detection network was trained with both the Decathlon and the NIH datasets. These datasets were shuffled and split into training and validation sets. More specifically, 66 scans from NIH and 34 scans from Decathlon (100 in total) were used for training with 20% validation. The hold-out method was used to evaluate the performance of the model, by using 10% of the scans (5 of each dataset) for testing.
Relying on studies regarding the effectiveness of transfer learning in deep networks [32, 38], weights pre-trained on the ImageNet dataset [22] were used. Afterwards, YOLOv4 was fine-tuned and re-trained with the pancreas images. The model was trained for 20000 iterations with a batch size of 64 and a learning rate of 0.0013.
#### 3.2.3 Evaluation Metrics
The performance of all models was evaluated through the hold-out methodology, using the corresponding test datasets. For the quality assessment of the predictions, various metrics were used, i.e., precision, recall, IoU, and mean average precision (mAP) for the detection task. All of the metrics are briefly introduced in the following paragraphs.
Precision, also known as positive predictive value (PPV), measures how accurate the model's predictions are, i.e., the percentage of the predictions that are correct. It is given as the ratio of true positives and the total number of predicted positives, as seen in the following expression:
\[PPV=\frac{TP}{TP+FP}, \tag{2}\]
where TP denotes the true positives (predicted correctly as positive) and FP the false positives (predicted incorrectly as positive).
Similarly, recall, also known as sensitivity or true positive rate (TPR), measures how well all the positives are predicted. It is given as the ratio of the true positives and the total of ground truth positives, as seen in the following equation:
\[TPR=\frac{TP}{TP+FN}, \tag{3}\]
where FN are the false negatives (incorrectly predicted as negatives).
The IoU metric, also known as Jaccard index, is a popular similarity measure for object detection problems using the predicted and ground-truth bounding boxes. Evidently, as exemplified in Figure 1, a bigger overlap between the two bounding boxes results in higher IoU score and therefore, detection accuracy.
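For concreteness, a minimal sketch of this computation for axis-aligned boxes in (x_min, y_min, x_max, y_max) form is shown below (an illustration, not the evaluation code of this dissertation):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned bounding boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)   # intersection area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```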
The average precision (AP) is a default evaluation metric in the PascalVOC competition [14, p. 313], which derives from the area under the precision/recall curve. The precision/recall curve is computed from a model's ranked output and the predictions are ranked with respect to the confidence score of each bounding box. In this work, the detection model is set to keep only predictions with a confidence score higher than 25%. The AP provides an indication of the shape of the precision/recall curve, and is defined as the interpolated average precision at a set of eleven equally spaced recall levels [0, 0.1,..., 1], expressed as:
\[AP=\frac{1}{11}\sum_{r\in\{0,0.1,...,1\}}p_{interp}(r). \tag{4}\]
At each recall level \(r\), the interpolated precision \(p_{interp}\) is calculated by taking the maximum precision measured for that \(r\), given by the following formula:
\[p_{interp}(r)=\max_{\tilde{r}:\tilde{r}\geq r}\,p(\tilde{r}). \tag{5}\]
Since mAP is calculated by taking the average of the AP calculated for all the classes, they will be used interchangeably in the current context.
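A minimal Python sketch of the 11-point interpolation of Eqs. (4) and (5) is given below (illustrative only; standard PascalVOC tooling computes the same quantity):

```python
import numpy as np

def eleven_point_ap(precision, recall):
    """precision/recall arrays obtained by sweeping the ranked detections
    (here only boxes with confidence > 25% are kept, as stated above)."""
    precision, recall = np.asarray(precision), np.asarray(recall)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        mask = recall >= r
        p_interp = precision[mask].max() if mask.any() else 0.0  # Eq. (5)
        ap += p_interp / 11.0                                    # Eq. (4)
    return ap
```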
Figure 1: Calculation of the IoU metric. With purple color the predicted bounding box is depicted and with green the ground truth.
### Segmentation Network Architecture
#### 3.3.1 Pre-processing
Intensity clipping was not adopted for the segmentation task, since it did not show any improvement in the performance of the models. For the segmentation network, curvature driven image denoising is applied on each slice, in order to make the pixel distribution more uniform. Finally, contrast limited adaptive histogram equalization (CLAHE) and normalization were also investigated for the segmentation task, but they also proved to be ineffective, in terms of Dice score.
#### 3.3.2 U-Net model
An adaptation of the original U-Net architecture was chosen as a segmentation model. More specifically, following the original implementation, 3\(\times\)3 convolutional kernels are used in the contracting path with a stride of 1, each followed by a 2\(\times\)2 max-pooling operation with stride 2, and 2\(\times\)2 convolutional kernels with a stride of 2 are implemented in the expansive path. Each convolution layer, except for the last one, is followed by a ReLU, and dropout of 0.5 is selected. However, the number of levels in both the descending and ascending paths of the network was increased from four blocks to five blocks and the number of filters per layer was reduced compared to the original implementation, as visualized in Figure 2. In addition, residual connections were included in each convolutional block, padding was added and the sigmoid function was used as an activation function for the last layer. Due to memory limitations, all scans were resized from 512\(\times\)512 to 256\(\times\)256.
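The block structure described above can be summarized in a short sketch; PyTorch is an assumption here (the dissertation does not state the framework), and the exact placement of dropout within the block is likewise assumed:

```python
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    """Two padded 3x3 convolutions (stride 1) with ReLU and dropout of 0.5,
    and a residual connection around the block (hypothetical layout)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # match channel counts
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

# Contracting path: each block is followed by nn.MaxPool2d(2, stride=2);
# expansive path: nn.ConvTranspose2d(ch, ch // 2, kernel_size=2, stride=2);
# the final single-channel output passes through a sigmoid.
```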
As mentioned in Section 3.1, the U-Net model was trained on 50 volumes from the NIH dataset, 45 of them were used for training and 5 for validation.
Since the pancreas occupies only a very small region of a CT-Scan, the work [29] is followed, and a DSC-loss layer is used to prevent the model from being heavily biased towards the background class. In more detail, the DSC between two voxel sets, \(A\) and \(B\), can be expressed as
\[DSC(A,B)=\frac{2|A\cap B|}{|A|+|B|}, \tag{6}\]
and this is slightly modified into a loss function between the ground-truth mask Y and the predicted mask \(\hat{Y}\), in the following way:
\[L(\hat{Y},Y)=1-\frac{2\sum_{i}\hat{y}_{i}y_{i}+\epsilon}{\sum_{i}\hat{y}_{i}+\sum_{i}y_{i}+\epsilon}, \tag{7}\]
where \(\epsilon\) is an added term to avoid underflow.
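A minimal sketch of this loss (framework assumed as PyTorch, as in the block above) follows directly from Eq. (7):

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    """pred: sigmoid outputs; target: binary ground-truth mask; eps avoids underflow."""
    pred, target = pred.reshape(-1), target.reshape(-1)
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```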
The model is trained with batches of 128 instances and optimized using Adam with a learning rate of 0.0001. Early-stopping is also used, in order to avoid overfitting of the network, and the model with the lowest validation loss is chosen.
Data augmentation is also implemented, by applying elastic deformation of images as described in [39], as well as image shifting, rotation, zooming and flipping, in order to expand the training dataset and make the network more robust to such variations.
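A common implementation of the elastic deformation of [39], sketched below with assumed values of the deformation parameters alpha and sigma (the dissertation does not report them), warps the image along a smoothed random displacement field; the same field must also be applied to the ground-truth mask:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=34.0, sigma=4.0, rng=None):
    rng = rng or np.random.default_rng()
    shape = image.shape
    # Smoothed random displacement field, scaled by alpha
    dx = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    y, x = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    return map_coordinates(image, [y + dy, x + dx], order=1, mode="reflect")
```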
#### 3.3.3 Cropped U-Net model
In order to reduce the irrelevant information in a CT-scan and train the U-Net on scans with more useful information, the scans were cropped to a smaller size. The rest of the training procedure described in Section 3.3.2 was kept intact.
The image size of the cropped scans was carefully decided, as on the one hand it should be as small as possible, but also contain the whole pancreas. Taking into account the information retrieved from Figure 7, which shows the probability of pancreas existing in a 512\(\times\)512 CT slice, as well as the maximum size of pancreas in both datasets, a default cropping position was decided. The default cropping position has its centroid at position \((x=287,y=250)\) and image size of 224\(\times\)224, as visualized in Figure 3.
#### 3.3.4 YOLO + MorphGAC model
In order to overcome the problem of initialization, the MorphGAC algorithm was combined with the YOLO detection model. The MorphGAC segmentation was applied on cropped bounding box predictions using their centroid as an initialization point. The segmentation results are then repositioned back to 512\(\times\)512 images, in order to form the final 3D segmented pancreas.
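The following sketch illustrates this step with the MorphGAC implementation of scikit-image; the library choice, initialization radius, and iteration count are assumptions rather than details taken from the dissertation:

```python
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def morphgac_segment(crop, cy, cx, radius=10, iterations=200):
    """crop: CT region inside a predicted box; (cy, cx): its centroid."""
    gimage = inverse_gaussian_gradient(crop.astype(float))  # contour-highlighting preprocessing
    yy, xx = np.mgrid[:crop.shape[0], :crop.shape[1]]
    init = ((yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2).astype(np.int8)
    return morphological_geodesic_active_contour(
        gimage, iterations, init_level_set=init, smoothing=1, balloon=1)
```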
Figure 3: Default cropping position of a 512\(\times\)512 CT-scan into a 224\(\times\)224 scan.
#### 3.3.5 YOLO + U-Net model
An additional two-step approach with cropped CT-scans is proposed, which combines the YOLO detection model to crop the images containing the pancreas, as well as the default cropping position mentioned in Section 3.3.3, to crop the images without pancreas. The U-Net architecture visualized in Figure 2 and the training procedure described in 3.3.2 were used.
#### 3.3.6 Post-processing
A post-processing step is adopted for all predicted segmentations, aiming for optimization and a smoother result, by leveraging spatial information. Since pancreas has a uniform and undivided shape, it is unlikely that a pixel in a 2D slice has a different value from both the previous and the next slice. Taking that into consideration, all slices in a predicted segmentation are compared in groups of three and the values of the middle ones are changed, when different from the other two. In Figure 4, the segmentation improvement on the middle slice is depicted, after post-processing it.
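This rule amounts to a majority vote over sliding windows of three slices; a minimal vectorized sketch (applied in one pass over the original prediction) is:

```python
import numpy as np

def smooth_slices(seg):
    """seg: binary (Z, H, W) volume of predicted 2D segmentations."""
    out = seg.copy()
    prev_, mid, next_ = seg[:-2], seg[1:-1], seg[2:]
    disagree = (mid != prev_) & (mid != next_)   # middle slice differs from both neighbours
    out[1:-1][disagree] = prev_[disagree]        # adopt the neighbours' value
    return out
```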
#### 3.3.7 Evaluation Metrics
The performance of all models was evaluated through the hold-out methodology, using the corresponding test datasets. Since the MorphGAC algorithm needs no training, the YOLO+MorphGAC approach was evaluated on the detection model's test dataset. For the quality assessment of the segmentation task, the DSC or Dice score was used. Because of the similarity between the IoU metric and the Dice score, the former was not used for the evaluation of the segmentation models.
The Dice score is the most common evaluation metric for the segmentation of medical images and it is calculated as twice the area of overlap divided by the total number of pixels in both images, as illustrated in Figure 5.
Evidently, the DSC can be obtained directly from the IoU using the following expression:
\[DSC=\frac{2\text{IoU}}{1+\text{IoU}}. \tag{8}\]
## 4 Results & discussion
### Data Exploration
The average Hounsfield range and percentage of the volume occupied by the pancreas were evaluated. In Figure 6 the pancreas intensity values for both datasets are illustrated, where Decathlon
Figure 4: Example of the application of the optimization step on a slice. The optimization is applied only on the middle slice of the three used.
Figure 5: Calculation of the DSC metric.
Figure 2: The 5-level U-Net segmentation network for an input image of size 256x256.
has a mean +80.63 \(\pm\) 57.91 HU and NIH +86.71 \(\pm\) 32.18 HU.
In addition, the mean percentage of the pancreas in the abdominal CT scans is 0.46% and 0.49% for Decathlon and NIH, respectively, taking into consideration only values above \(-800\) HU, in order to exclude the air. Moreover, a probability map was recreated to define the most likely position of the pancreas. In Figure 7, the probability map of the pancreas in a 2D slice is visualized for both datasets. In the Decathlon dataset, the pancreas location ranges from pixel 150 to 434 in the x-axis and from pixel 139 to 348 in the y-axis. Similarly, in the NIH dataset, the pancreas location ranges from 167 to 405 in the x-axis and from 143 to 360 in the y-axis.
### Detection Task
The detection model was evaluated on the holdout test set containing volumes from both the NIH and the Decathlon datasets, and it can detect the pancreas successfully in both of them, as visualized in Figure 8.
The results of the detection model evaluated by the PPV, TPR, IoU, and mAP metrics described in Section 3.2.3 are presented in Table 1. The detection model shows a mAP of 50.67% on the test set, with the IoU being 47.70%, the precision 0.63, and the recall 0.52. However, the performance of the model varies significantly between the two datasets, with the mAP being 71.43% for the NIH dataset and 29.92% for the Decathlon dataset. For that reason, the performance of the model in Table 1 is also evaluated separately for each dataset. More specifically, the model shows an IoU of 57.43%, with a precision and recall of 0.75 and 0.68 respectively, on the NIH dataset. On the Decathlon dataset, the model has 37.96% IoU, the precision is 0.5, and the recall 0.36.
The low detection accuracy of the YOLO model on the Decathlon dataset derives from the multi-label nature of some ground-truth files. More specifically, it was later realized that the YOLO model was configured to be trained on one class; therefore, since the pancreatic parenchyma and the pancreatic mass (cyst or tumour) are annotated separately as different classes in the Decathlon dataset, the second class was ignored, as shown in Figure 9. In addition, during the conversion from NIfTI to the txt annotation format required by YOLO, all the individual blobs of healthy pancreas surrounding the pancreatic mass are annotated as separate tiny bounding boxes. However, as visualized in this figure, it is very interesting to notice that the model outputs a fairly good prediction for the class that it was trained on (upper-right green bounding box).
\begin{table}
\begin{tabular}{|l c c c c|}
\hline
Test dataset & PPV & TPR & IoU & mAP \\
\hline
NIH \& Decathlon & 0.63 & 0.52 & 47.70\% & 50.67\% \\
NIH & 0.75 & 0.68 & 57.43\% & 71.43\% \\
Decathlon & 0.5 & 0.36 & 37.96\% & 29.92\% \\
\hline
\end{tabular}
\end{table}
Table 1: Evaluation results of the detection model.
Figure 6: The Hounsfield unit values of the pancreas for the Decathlon (a) and NIH (b) dataset, respectively. In the Decathlon dataset the intensities range from -998.0 HU to +3071.0 HU, while in NIH from -875.0 HU to +458.0 HU.
Figure 7: Probability of pancreas in an 512x512 image for Decathlon (a) and NIH (b) datasets, respectively.
Unfortunately, the problem was not discovered in time, since the YOLO model was at first trained only on the NIH dataset, which contains healthy pancreases. Nevertheless, this could be overcome by considering a single class, since the goal of the work was to segment the healthy pancreas. Due to the subsequent unavailability of the computer on which YOLO was trained, this could not be realized in the current dissertation.
In Figures 10 and 11, the metrics used for the assessment of the detection model during training are visualized.
### Segmentation Task
The segmentation models were evaluated on the holdout set containing volumes from the NIH dataset and the pancreas can be successfully segmented, as visualized in Figure 13.
The segmentation performance of all models on the NIH dataset is presented in Table 2. The YOLO+MorphGAC model has the best performance, achieving a Dice score of 67.67 \(\pm\) 8.62%, followed by the YOLO+U-Net model with a Dice score of 64.87 \(\pm\) 4.79%. The U-Net and cropped U-Net show a lower performance, with Dice scores of 59.91 \(\pm\) 13.69% and 59.08 \(\pm\) 12.76%, respectively. The DSC obtained from the cropped U-Net was lower than that of the YOLO+U-Net and showed a higher standard deviation. The preliminary results suggest that YOLO was able to improve segmentation when compared to using a probability map. A similar behavior was observed for the U-Net model, with improved results when using the YOLO model. Regarding segmentation performance, the highest average Dice score was obtained with the YOLO+MorphGAC model and the lowest standard deviation with the YOLO+U-Net model.
The loss function used for the assessment of the segmentation model during training is visualized in Figure 12.
In Figures 13 and 14 representative slices of the best and worst results of the proposed segmentation models are presented, respectively.
#### 4.3.1 Comparison with the state-of-the-art
In Tables 3 and 4 a comparison of the best two proposed models with the state-of-the art deep learning models is presented with respect to the dice coefficient and the training details, respectively. Only models which implement a 2D approach were taken into consideration and trained on the NIH dataset as well. The proposed models display a low performance compared to the other networks. However, when examining the training methodology of all the state-of-the-art methods, it is concluded that cross validation is the adopted
\begin{table}
\begin{tabular}{|l l|}
\hline
Method & Dice score \\
\hline
YOLO+MorphGAC & 67.67 \(\pm\) 8.62 \% \\
U-Net & 59.91 \(\pm\) 13.69 \% \\
Cropped U-Net & 59.08 \(\pm\) 12.76 \% \\
YOLO+U-Net & 64.87 \(\pm\) 4.79 \% \\
\hline
\end{tabular}
\end{table}
Table 2: Evaluation results of all proposed segmentation models.
Figure 11: The validation mAP during training, reaching a maximum value of 90.9%.
Figure 12: The metric of loss for the YOLO+U-Net during training with a minimum value of 0.28.
Figure 9: Detection of pancreas on a multilabel slice. The green color represents the ground-truth and the purple the YOLO prediction. The two bigger green bounding boxes represent the two classes.
validation technique, and that they leverage the whole NIH dataset for training, which was not possible in the current work due to computational and time limitations. These two reasons could be the main causes of the low segmentation performance of the proposed YOLO+U-Net model, and following the state-of-the-art training methods could result in a higher Dice score.
## 5 Conclusion and Future Work
### Conclusions
Taking into account the state-of-the-art cascaded segmentation models, the main goal of the current work was to investigate and present simpler two-step approaches for the segmentation of the pancreas in CT. In the end, three models were investigated and compared to the U-Net implementation. The two-step approach was achieved by using a detection network for the pancreas localization prior to the segmentation. This approach, which was combined with both morphological snakes and a U-Net segmentation network, proved to be the most efficient, showing that the organ detection benefits the segmentation task. Another approach was to reduce the background information by cropping the data using a pancreas probability map and then using a U-Net network to segment this smaller area. However, this showed no improvement in segmentation performance, when compared to U-Net segmentation results on whole CT scans. Due to the Covid-19 situation, many computational limitations existed, which resulted in insufficient training of both the detection and the segmentation networks; this is the main reason for the low performance of the proposed models. Since the segmentation performance lies below 70%, it is concluded that further investigation and model improvement is needed, in order to tackle the challenging nature of the pancreas segmentation problem efficiently.
### Future Work
Regarding the proposed models, there are changes that could be investigated to further improve the performance, focusing on YOLO+MorphGAC and YOLO+U-Net architectures. First and foremost, since the detection network plays a vital role on both of them, its improvement would also lead to overall improved performance of the model. Considering that YOLOv4 is one of the most modern and efficient state-of-the-art detection model, experimenting with differ
\begin{table}
\begin{tabular}{|l l l l|}
\hline
Method & Dataset & Train/Test & Validation \\
\hline
2D U-Net [17] & NIH & 63.9 & 5-fold CV \\
2D U-Net [27] & NIH & 62/20 & 4-fold CV \\
2D FCN + RNN [5] & NIH & 62/20 & 4-fold CV \\
Holistically Nested 2D FCN [36] & NIH & 62/20 & 4-fold CV \\
2D FCN [49] & NIH & 62/20 & 4-fold CV \\
YOLO+U-Net (proposed) & NIH & 50/5 & holdout \\
YOLO+MorphGAC (proposed) & - & -/5 & - \\
\hline
\end{tabular}
\end{table}
Table 4: Comparison of the training details with the state-of-the-art methods.
Figure 14: Some of the worst segmentations on the test set of the YOLO+MorphGAC (a), U-Net (b), cropped U-Net (c) and YOLO+U-Net (d) model, respectively. The blue line indicates the model’s predicted segmentation and the red line represents the ground-truth.
Figure 13: Some of the best segmentations on the test set of the YOLO+MorphGAC (a), U-Net (b), cropped U-Net (c) and YOLO+U-Net (d) model, respectively. The blue line indicates the model’s predicted segmentation and the red line represents the ground-truth.
\begin{table}
\begin{tabular}{|l l|}
\hline
Method & DSC \\
\hline
2D U-Net [17] & 71.07\% \(\pm\) 9.50 \\
2D U-Net [27] & 86.93\% \(\pm\) 4.92 \\
2D FCN + RNN [5] & 83.70\% \(\pm\) 5.10 \\
Holistically Nested 2D FCN [36] & 81.27\% \(\pm\) 6.27 \\
2D FCN [49] & 83.18\% \(\pm\) 4.81 \\
YOLO+U-Net (proposed) & 64.87\% \(\pm\) 4.79 \\
YOLO+MorphGAC (proposed) & 67.67\% \(\pm\) 8.62 \\
\hline
\end{tabular}
\end{table}
Table 3: Comparison of the DSC with the state-of-the-art methods.
ent ones would not be the main priority. A slight improvement could occur by testing different backbone models for feature extraction, such as ResNet [16] or EfficientNet [42]. More importantly, leveraging the whole Decathlon dataset and training on a larger dataset could also lead to better detection performance. Regarding the proposed U-Net architecture, similarly to the detection task, the need to train on a larger dataset and to adopt the training methodology of the state-of-the-art models is undeniable. In addition, a YOLO+FCN approach would also be interesting to investigate. Finally, exploring 3D versions of the architectures could also improve the results. The present work focuses on the segmentation of the healthy pancreas; nevertheless, since the Decathlon dataset also provides tumour masks, tumour segmentation could also be explored.
|
2307.11792 | Quantum Convolutional Neural Networks with Interaction Layers for
Classification of Classical Data | Quantum Machine Learning (QML) has come into the limelight due to the
exceptional computational abilities of quantum computers. With the promises of
near error-free quantum computers in the not-so-distant future, it is important
that the effect of multi-qubit interactions on quantum neural networks is
studied extensively. This paper introduces a Quantum Convolutional Network with
novel Interaction layers exploiting three-qubit interactions, while studying
the network's expressibility and entangling capability, for classifying both
image and one-dimensional data. The proposed approach is tested on three
publicly available datasets namely MNIST, Fashion MNIST, and Iris datasets,
flexible in performing binary and multiclass classifications, and is found to
supersede the performance of existing state-of-the-art methods. | Jishnu Mahmud, Raisa Mashtura, Shaikh Anowarul Fattah, Mohammad Saquib | 2023-07-20T20:45:44Z | http://arxiv.org/abs/2307.11792v3 | # Quantum Convolutional Neural Networks with Interaction Layers for Classification of Classical Data
###### Abstract
Quantum Machine Learning (QML) has come into the limelight due to the exceptional computational abilities of quantum computers. With the promises of near error-free quantum computers in the not-so-distant future, it is important that the effect of multi-qubit interactions on quantum neural networks is studied extensively. This paper introduces a Quantum Convolutional Network with novel Interaction layers exploiting three-qubit interactions, while studying the network's expressibility and entangling capability, for classifying both image and one-dimensional data. The proposed approach is tested on three publicly available datasets namely _MNIST_, _Fashion MNIST_, and _Iris_ datasets, flexible in performing binary and multiclass classifications, and is found to supersede the performance of existing state-of-the-art methods.
**Keywords:** Quantum Machine Learning, classification, entanglement, quantum gates, qubits.
## 1 Introduction
In this era of artificial intelligence a constant improvement in computation speed, accuracy, and precision is a necessity. This widespread success in the world of computing over the last decade can be attributed to both the development of efficient software algorithms and the advancements in computational hardware. However, the physical limits of semiconductor fabrication in the post-Moore's Law era raise concerns about the extrapolation of its effectiveness in the future. On the other hand, significant advancements have been made in the field of quantum computing, which has shown promise as a potential solution for modern computing problems. Quantum computing exploits the laws of quantum mechanics to store and process information in quantum devices, using qubits instead of classical bits, which enables them to solve problems intractable for classical computers (Arute et al (2019)). Qubits have several properties that make them superior to the classical bit (Yanofsky and Mannucci (2008)). Firstly, they are the superposition of the two fundamental bit states. This property enables a collective number of qubits to have exponentially higher processing power than the same number of classical bits. The second noteworthy property is entanglement, which creates interdependencies among the qubits, suggesting that the change of state of one will affect the state of the qubit it is entangled with. Superposition, Entanglement along with Interference, which is a method of controlling the qubit wavefunctions to reach their desired states, are the three properties that give qubits their formidable potential.
The era of quantum computing, currently referred to as the Noisy Intermediate Scale Quantum (NISQ) era, is characterized by the lack of absolute control over the qubits due to errors arising from quantum decoherence, crosstalk, and imperfect calibration, thereby limiting the number of qubits used on quantum computers. However, the revelation in January 2022 that quantum computing in silicon hit 99% fidelity (Madzik et al (2022)) indicates a greater similarity between the desired and actual quantum states. This result promises near-error-free quantum computing and indicates that they are close to being utilized in large-scale applications, which further motivates the development of various machine learning algorithms to be implemented on quantum devices.
## 2 Related Works
Quantum machine learning (QML) involves constructing a sequential circuit of different types of parameterized quantum gates that act on a specific number of qubits. The main tasks of these QML algorithms are to logically place these parameterized gates and train their parameters to minimize the objective cost function. QML has already been implemented to address one of the most fundamental machine learning problems, i.e., classification. It is shown in Rebentrost et al (2014) and Mengoni and Di Pierro (2019) that kernel-based algorithms, such as Quantum-enhanced Support Vector Machine (QSVM) can classify data efficiently and accurately. The search for devising various convolutional networks in the quantum domain is introduced in Cong et al (2019) in which the concept of quantum convolutional neural networks
(QCNNs) is proposed. Their architecture also claims to be able to tackle the exponential complexity of many-body problems, making them suitable for use in quantum physics problems. Advancing the field of QCNNs, a parameterized quantum neural network (QNN) with an enhanced feature mapping process has been designed in Liu et al (2021). Their proposed network is called a quantum-classical CNN (QCCNN) which is suitable for NISQ devices. It appears that several quantum counterparts of a large variety of classical machine learning models have been proposed over the recent years, all of which claim superior performances in various categories, such as accuracy and speed. Therefore, it is of no surprise that quantum networks have been shown to have a wide range of applications in medicine (Jain et al (2020)), weather forecasting (Enos et al (2021)), quantum chemistry (Von Lilienfeld (2018)), and many more.
Ever since the proposal of QCNNs and the availability of quantum simulators and quantum computers, much attention has been drawn to devising various methods to improve the performance of classification problems. This is driven by the fact that QCNN models are immune to barren plateau problems (Pesah et al (2021)) contrary to other structures. The architecture proposed in Hur et al (2022) has been benchmarked for binary image classification on the _MNIST_ and _Fashion MNIST_ datasets. A multiclass classification method using a quantum network is also reported in Chalumuri et al (2021), which has been proven to perform well on 1D data such as the _Iris dataset_. All these prior studies confirm that a QNN aids speed with a significantly lower number of parameters with better accuracy than their existing classical counterparts using a comparable number of training parameters. However, a crucial aspect of designing parameterized quantum circuits is to maintain sufficient expressibility and entangling capability, while keeping it cost-effective Sim et al (2019). Expressibility is the ability of a quantum circuit, comprising quantum gates to explore the Hilbert space. The cost of a quantum circuit is judged by the number of layers and hence its parameters, as well as the number of two-qubit operations. This paper explores the relative changes in the performance of a QCNN network upon the incorporation of three-qubit interactions while maintaining a relatively low depth and a small number of parameters for comparability. Although three-qubit gates are practically difficult to implement on NISQ devices and their synthesis using 1 and 2-qubit interactions exponentially increases depth, it is important to explore the comparative changes in the performance of a network resulting from their addition which is expected to be a reality considering the rapid growth in the performance of quantum hardware in recent times. These three-qubit interactions are brought forward using novel Interaction Layers in our network which use a minimal number of trainable parameters. Furthermore, to explore the performance of the proposed network we have used an ancilla-based classifier as the final layer of the circuit to carry out binary and multiclass classification tasks.
The major contributions of this work are summarized as follows:
1. A new QCNN architecture is proposed which uses _Amplitude_ and _Angle Encoding_ schemes separately, considering two different data reduction and encoding techniques.
2. Novel Interaction Layers are introduced, which exhibit sufficient expressibility and exploit the entanglement property of qubits further to help the quantum network learn more nuanced information from the data.
3. A classifier layer involving ancilla qubits and _CNOT_ gates is cascaded with the quantum convolutional structure to accommodate both binary and multiclass classifications.
The use of ancilla qubits in the classifier layer cascaded with the Interaction Layers in a QCNN structure is a first to the best of our knowledge.
4. The proposed network is tested on three publicly available datasets for binary and multiclass classification, and it is seen that the performance supersedes that of the existing state-of-the-art models using a similar number of parameters. The versatility of the network is further demonstrated in its ability to perform equally well in both image and 1-dimensional data.
## 3 Proposed Architecture
A simplified block diagram representation of the proposed architecture is depicted in Fig.1. A quantum circuit, designed in the spirit of QCNN structure and possessing minimal trainable parameters, has been proposed. The robustness of QCNNs against the "barren plateau" issue is expected to be exhibited by the proposed network (Pesah et al (2021)).
Interaction Layers are introduced in various stages of the proposed quantum architecture and are designed to leverage three-qubit interactions through the use of _Toffoli_ and parameterized rotational gates. The implementation of these Layers in various stages of the network can be observed in Fig.1. It differs from the earlier quantum convolutional methods, which relied on the reduction of qubits through sequences of convolutional and pooling layers alone. The rapid advancement of quantum hardware indicates that considerably more intricate operations on qubits will soon be possible. Although it is practically challenging to achieve three-qubit interactions on current technology, this paper investigates the comparative advantage in a network's performance with the addition of such layers. It must be noted that the number of trainable parameters and the total number of qubits have been kept minimum such that they can be implemented on NISQ devices for the purpose of comparison with other methods. The paradox of deploying three-qubit interactions on the network that was, at its core, intended to operate on NISQ devices is acknowledged by the authors. When moving from NISQ to more potent quantum computers, more qubits would be used, the circuit depth would expand, and there would be more qubit interactions with multi-qubit gates. By keeping a small number of qubits and trainable parameters, this study aims to evaluate how expanded multi-qubit interaction improves QCNN network performance in comparison to networks limited to two-qubit operations.
It is expected that the incorporation of these novel Interaction Layers will enable the network to have the ability to extensively span the Hilbert space as well as exploit the entanglement property further for improved classification performance. The use
of Toffoli gates, enabling three-qubit interactions in QCNN networks is a first to the best of our knowledge.
### Data Preprocessing
The number of qubits and, therefore, the size of a QNN is bound by the current limitations of NISQ computing technology, in contrast to classical models, which often possess many trainable parameters due to their substantial size and depth. In the next stage where quantum feature encoding is performed, the features of the data to be classified are inserted as parameters of quantum gates, which perform various operations on these qubits. Therefore, a limited number of qubits also sets a bar on the total number of gate parameters; thus, the dimensionality reduction of classical data prior to its utilization within a quantum network is deemed imperative.
Figure 1: Simplified Block diagram of the proposed architecture
Standard classical techniques, such as the autoencoder and simple resizing, are chosen as they allow for efficient compression of high-dimensional input data, which is important for reducing the computational complexity of quantum machine learning models. The autoencoder is particularly useful in this regard, as it can learn to represent the input data of dimensions \(p\times p\), to a lower \(q\)-dimensional space, \(q<p\), extracting a reduced set of features of size \(q\times 1\), while still preserving important features and minimizing information loss. As an alternative, the simple resizing operation can also be effective in reducing the dimensionality of input data. It converts input data of dimensions \(p\times p\) to a desired dimension of \(q\times q\), \(q<p\).
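As an illustration, a minimal autoencoder of this kind could be sketched as below; the framework (PyTorch), the hidden width, and the example sizes (e.g. p = 28 for MNIST, with q chosen to match the number of encoding angles) are assumptions:

```python
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, p=28, q=8):
        super().__init__()
        d = p * p
        self.encoder = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, q))
        self.decoder = nn.Sequential(nn.Linear(q, 64), nn.ReLU(), nn.Linear(64, d))

    def forward(self, x):
        z = self.encoder(x)        # q-dimensional reduced features
        return self.decoder(z), z  # trained to reconstruct x; z feeds the quantum encoder
```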
### Quantum Feature Encoding
The projection of the reduced classical values, received as output from the previous layer, into quantum states is referred to as quantum feature encoding. Mathematically, the mapping of the classical input data, \(X\), into higher dimensional quantum states, represented in the Hilbert space and denoted by \(H\), is represented as
\[\phi:X\mapsto H\]
where \(\phi\) is the feature map. In this stage of the network, \(n\) qubits initialized to the state of \(\left|0\right\rangle\), are fed. The qubits are then subjected to state operations via quantum gates, parameterized by the classical data \(X\) which is the output of the block performing classical data reduction. This results in the mapping of the classical data to the Hilbert space and the resulting state is represented by \(\left|\psi\right\rangle\). This process of quantum state preparation encodes the classical values into the input qubits which can then exploit the unique properties of superposition, entanglement, and interference to achieve superior performance. Two widely employed state preparation techniques, known as amplitude and _Angle Encoding_, have aided in achieving significant results and are discussed in subsequent sections.
#### 3.2.1 Amplitude Encoding
In this encoding scheme, the normalized classical vectors from the Data Preprocessing Layer are represented as amplitudes of the \(n\) input qubits in the Quantum Feature Encoding Layer. This displays a particular quantum advantage as normalized feature vectors of size \(2^{n}\) can be encoded into only \(n\)-qubits (Schuld and Petruccione (2018)). The following equation shows the states prepared after performing _Amplitude Encoding_ on the input qubits.
\[\left|\psi_{x}\right\rangle\ =\sum_{i=1}^{N}x_{i}\big{|}i\rangle \tag{1}\]
Here \(\left|\psi_{x}\right\rangle\) is the quantum state corresponding to the \(N\)-dimensional classical datapoint \(X\) after reduction, where \(N=2^{n}\), \(x_{i}\) is the \(i\)-th element of the datapoint \(X\) and \(\left|i\right\rangle\) is the \(i\)-th computational basis state.
In a classical neural network, each binary value necessitates a distinct trainable weight or bias, resulting in a considerable number of parameters. In contrast, _Amplitude Encoding_ permits the representation of data through the amplitudes of a limited
number of quantum states, thereby enabling a more compact representation. This has been demonstrated to result in a significant decrease in the number of trainable parameters, contributing to the simplification of the model and enhancement of its performance. While this method provides this benefit, it also increases the depth of the quantum circuit as _O(poly(n))_ or as _O(n)_ if the number of qubits fed in this layer is increased (Araujo et al (2021)).
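A minimal sketch of Eq. (1), written here in PennyLane (the simulation library is an assumption; the paper does not name its toolchain), encodes \(2^n = 256\) features into \(n = 8\) qubits:

```python
import numpy as np
import pennylane as qml

n = 8
dev = qml.device("default.qubit", wires=n)

@qml.qnode(dev)
def amplitude_encode(x):
    qml.AmplitudeEmbedding(x, wires=range(n), normalize=True)
    return qml.state()

x = np.random.rand(2 ** n)    # e.g. a flattened 16x16 resized image
state = amplitude_encode(x)   # |psi_x> with amplitudes proportional to x
```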
#### 3.2.2 Angle Encoding
_Angle Encoding_ is another technique employed in quantum machine learning for the representation of data, which utilizes the rotation of quantum gates (\(R_{x},R_{y}\) and \(R_{z}\)) to encode classical information. This method involves encoding the \(N\) features of classical data as the angles of \(n\) input qubits between quantum states (Schuld (2021)). In this method, \(N\) has been kept equal to \(n\) to allow us to use the maximum size of classical features possible. The advantage of this approach lies in its ability to represent continuous data more naturally and efficiently compared to _Amplitude Encoding_ (Schuld (2021)). The states resulting from performing _Angle Encoding_ on the input qubits are:
\[\left|\psi_{x}\right\rangle=\bigotimes_{i=1}^{n}R(x_{i})\left|0^{n}\right\rangle \tag{2}\]
Here \(R(.)\) can be either of the rotation gates \(R_{x}\), \(R_{y}\), or \(R_{z}\). In _Angle Encoding_, the angles between the quantum states can be varied continuously to capture the intricacies of the data. This leads to a more precise and nuanced representation of the data and can result in improved performance for certain types of quantum machine learning models. However, unlike _Amplitude Encoding_, it can only encode one feature value per qubit; the resulting shallower circuit reduces noise, which makes it particularly advantageous in NISQ computing.
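A minimal PennyLane sketch of this scheme is given below, using the library's `AngleEmbedding` template; the choice of \(R_{y}\) rotations and of eight qubits is an assumption made purely for illustration.

```python
import pennylane as qml
import numpy as np

n_qubits = 8  # one feature per qubit, so N = n
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def angle_encoder(x):
    # each feature x_i becomes the rotation angle of one qubit, cf. Eq. (2)
    qml.AngleEmbedding(features=x, wires=range(n_qubits), rotation="Y")
    return qml.state()

state = angle_encoder(np.random.rand(n_qubits))
```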
The selection of encoding techniques for this design is contingent upon the classical dimensionality reduction technique employed in the first layer. It can be recalled from the previous section that the _Amplitude Encoding_ method, which uses \(n\) input qubits, can accommodate a maximum of \(2^{n}\) data points. This requires simple resizing to a \(2^{n/2}\times 2^{n/2}\) dimension followed by flattening, which is essential for this state preparation method. Conversely, the _Angle Encoding_ technique encodes the flattened \(N\) data points into \(n\) qubits and thus relies on the use of an autoencoder to reduce the dimensions accordingly.
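As a sketch of the resizing path (the autoencoder path is omitted), the helper below reduces a \(28\times 28\) image to the \(2^{n/2}\times 2^{n/2}\) shape required by _Amplitude Encoding_; the use of scikit-image for interpolation is an implementation assumption.

```python
import numpy as np
from skimage.transform import resize

def reduce_for_amplitude(img28, n_qubits=8):
    """Resize a 28x28 image to 2**(n/2) x 2**(n/2) and flatten it,
    yielding exactly 2**n values for n-qubit Amplitude Encoding."""
    side = int(2 ** (n_qubits / 2))          # 16 when n_qubits = 8
    return resize(img28, (side, side)).flatten()

features = reduce_for_amplitude(np.random.rand(28, 28))  # 256 values
```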
### Proposed Layers
#### 3.3.1 The Quantum Convolutional Layer
The proposed model for the classification problem comprises two Convolutional Layers with one Pooling Layer in between. As shown in Fig.2, each of these layers is constructed from blocks of quantum gates called ansatzes, which are parameterized quantum circuits. In this study, an ansatz is composed of different configurations of single- and multi-qubit gate operations as illustrated in Fig.3 and 4.
The first ansatz in Fig.3 consists of a relatively large number of parameters, i.e., 15, which helps increase flexibility, and the controlled \(R\) gates help increase expressibility. The ansatz in Fig.4 has five fewer parameters, which is a parametrized form of a
reduced version of the circuit that recorded the best expressibility in a study carried out by Sim et al (2019). The ansatzes consist of the \(R_{x}\), \(R_{y}\), \(R_{z}\) gates, which cause qubit rotations about the \(x\), \(y\), and \(z\) axes, respectively. Ansatz 1 additionally has the \(U3\) gate, which can be decomposed to rotation and phase shift gates and is represented by the matrix as follows:
\[U3(\theta_{1},\theta_{2},\theta_{3})=\begin{bmatrix}\cos(\frac{\theta_{1}}{2})& -\mathrm{e}^{\mathrm{i}\theta_{3}}\sin(\frac{\theta_{1}}{2})\\ \mathrm{e}^{\mathrm{i}\theta_{2}}\sin(\frac{\theta_{1}}{2})&\mathrm{e}^{ \mathrm{i}(\theta_{2}+\theta_{3})}\cos(\frac{\theta_{1}}{2})\end{bmatrix}\]
Here, \(\theta_{1}\), \(\theta_{2}\), and \(\theta_{3}\) are the parameters of the \(U3\) gate, and the matrix represents a unitary operation on a qubit. The reason for the selection of two different ansatzes is to inspect the flexibility of the proposed network performance on slight changes in the structure of the ansatz and the number of trainable parameters.
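The exact gate sequences of the two ansatzes are specified only in Fig.3 and 4, so the block below is merely a hypothetical two-qubit convolutional filter assembled from the same ingredients (\(U3\), controlled rotations, and single-qubit rotations).

```python
import pennylane as qml

def conv_ansatz(params, wires):
    """Illustrative two-qubit convolutional block; not the literal gate
    order of Ansatz 1, which is defined by Fig. 3."""
    q0, q1 = wires
    qml.U3(params[0], params[1], params[2], wires=q0)  # general 1-qubit rotations
    qml.U3(params[3], params[4], params[5], wires=q1)
    qml.CRZ(params[6], wires=[q0, q1])                 # controlled rotations add
    qml.CRX(params[7], wires=[q1, q0])                 # entanglement/expressibility
    qml.RY(params[8], wires=q0)
    qml.RY(params[9], wires=q1)
```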
#### 3.3.2 The Quantum Pooling Layer
The main purpose of the Pooling Layer in any convolutional neural network is to reduce the spatial size of the data representation and to maintain the most important information. In the process, the layer helps reduce the computational cost of the network and improve its generalization capabilities by decreasing overfitting, making it robust to translations, rotations, and other minute changes.
The quantum Pooling Layer in Fig.5 traces out one qubit from the two qubits it is fed and thus reduces the two-qubit state to a one-qubit state. The Pooling Layer uses two controlled rotation gates and a \(Pauli-X\) gate. The Pooling Layers, along with the Convolutional Layers, extend qubit interactions beyond nearest neighbors and hence establish further dependencies. The output qubits are further processed by an Interaction Layer to aid the learning process.
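A plausible realization of such a pooling block is sketched below; the precise gate order is defined by Fig.5, so the sequence here is an assumption that merely reflects the stated ingredients (two controlled rotations and a \(Pauli-X\)).

```python
import pennylane as qml

def pool_ansatz(params, wires):
    """Illustrative pooling block: downstream layers only use wires[1],
    which effectively traces out wires[0]."""
    q_drop, q_keep = wires
    qml.CRZ(params[0], wires=[q_drop, q_keep])
    qml.PauliX(wires=q_drop)                    # flip the control qubit so the
    qml.CRX(params[1], wires=[q_drop, q_keep])  # second rotation acts on the
                                                # complementary branch
```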
Figure 2: The Convolutional Layer comprising ansatzes, all of which use two-qubit interactions. The input qubits have their states changed by parameterized gate operations in the layer
#### 3.3.3 The Interaction Layers
The novelty brought forward in the proposed quantum architecture involves designing variational quantum layers that introduce extensive entanglement and expressibility in the overall quantum network. In order to bring forward this special quantum phenomenon, in this proposed _Interaction Layer_, _Toffoli_ and _CNOT_ gates are cascaded with the convolutional and rotational gates and are expected to establish three- and two-qubit interactions, as shown in Fig.6 and 7.
In this era of NISQ computing, the implementation of two-qubit gates is more difficult than single-qubit ones. However, with the recent developments in quantum hardware and the promise of near error-free quantum computers in the not-so-distant future, the use of multi-qubit gates in quantum computers is expected to be more common. It must be noted that the use of two-qubit gates, such as the _CNOT_ gate, in quantum machine learning models is motivated by the increase in entanglement and expressibility of the network, which helps it to learn more complex features of classical data Sim et al (2019). It is therefore imperative that studies are conducted exploring the effectiveness of three-qubit gates in various quantum networks.
Figure 4: Ansatz 2 containing 10 trainable parameters
Figure 5: The ansatz used for building the Pooling Layer
Figure 3: Ansatz 1 containing 15 trainable parameters
This paper explores the comparative advantage of three-qubit _Toffoli_ gates with the introduction of Interaction layers in the conventional QCNN structure. The difference in performance upon this addition is expected to indicate the extent of effectiveness which may result from the successful implementation of such gates in quantum hardware.
In the first Interaction Layer, the four _Toffoli_ gates entangle the four qubits in a circuit-block interaction configuration, as illustrated in Fig.6, which means that interdependency has been established between these quantum states, so the measured value of one state will depend on the others. This particular configuration is chosen over a nearest-neighbor or all-to-all configuration, as it tends to display favorable expressibility and lower qubit connectivity requirements. The placement of these _Toffoli_ gates after the previous layers enables the network to span all the basis states more strongly (i.e., with a higher probability for the basis states that previously had near-zero probabilities) than without them.
After the second Convolutional Layer, the qubits are passed through Interaction Layer 2. This Interaction Layer differs from the first in the inclusion of \(R_{x}\), \(R_{y}\), and \(R_{z}\) rotation gates with trainable parameters between its _Toffoli_ gates, as shown in Fig.7. The parameterized gates between the _Toffoli_ gates increase the degrees of freedom of the quantum states, increasing the flexibility of the learning process, and therefore have the potential to capture more nuanced features of the training data.
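The two layers could be sketched as below; the specific wire triples of the circuit-block configuration are assumptions based on Fig.6 and 7 rather than a verbatim transcription.

```python
import pennylane as qml

def interaction_layer_1(wires):
    """Four Toffoli gates over four qubits in a circuit-block configuration
    (wire triples assumed from Fig. 6)."""
    n = len(wires)
    for i in range(n):
        qml.Toffoli(wires=[wires[i], wires[(i + 1) % n], wires[(i + 2) % n]])

def interaction_layer_2(params, wires):
    """As layer 1, but with trainable rotations between the Toffoli gates
    to increase the degrees of freedom (cf. Fig. 7)."""
    n = len(wires)
    for i in range(n):
        qml.Toffoli(wires=[wires[i], wires[(i + 1) % n], wires[(i + 2) % n]])
        qml.RX(params[3 * i], wires=wires[i])
        qml.RY(params[3 * i + 1], wires=wires[i])
        qml.RZ(params[3 * i + 2], wires=wires[i])
```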
The last Interaction Layer in Fig.8 acts as a classifier utilizing _CNOT_ gates to entangle the remaining qubits with the ancilla qubits, which are used to store the
Figure 6: The first proposed Interaction Layer which comes before the second Convolutional Layer
Figure 7: The second novel Interaction Layer which comes after the second Convolutional Layer
entangled states. It must be noted that the number of ancilla qubits is equal to the number of classes that are to be classified using the network and have been set to \(|0\rangle\) initially. The ancilla qubits interact with the remaining qubits of the network through the _CNOT_ gates as shown in Fig.8 and are passed through the three rotational gates at the terminal of the quantum network. Finally, the measurement operation is performed on the ancilla qubits which causes the wavefunction to collapse into deterministic values. The expectation values of the ancilla qubits are measured and fed into the _Softmax_ function.
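The classifier stage can be sketched as follows; the exact CNOT pattern between data and ancilla qubits is an assumption based on Fig.8, and the returned expectation values are what would be fed to the _Softmax_ function.

```python
import pennylane as qml

def ancilla_classifier(params, data_wires, ancilla_wires):
    """Illustrative final Interaction Layer: one ancilla per class (each
    initialized to |0>) is entangled with the data qubits, rotated, and
    measured. Call this as the last step of a QNode and return its output."""
    for a in ancilla_wires:
        for d in data_wires:
            qml.CNOT(wires=[d, a])           # entangle ancilla with data qubits
    for i, a in enumerate(ancilla_wires):
        qml.RX(params[3 * i], wires=a)       # terminal rotations on ancillas
        qml.RY(params[3 * i + 1], wires=a)
        qml.RZ(params[3 * i + 2], wires=a)
    return [qml.expval(qml.PauliZ(a)) for a in ancilla_wires]
```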
The introduction of these Interaction Layers in the middle of the conventional QCNN, along with an ancilla-based classifier (Interaction Layer 3), is expected to provide promising results and is inspected in detail in Section 4. To the best of our knowledge, the proposed Interaction Layers, consisting of a combination of _Toffoli_ and _CNOT_ gates along with trainable parameters in between the Convolutional Layers, and the use of an ancilla qubit-_CNOT_ classifier in QCNNs are a first.
#### 3.3.4 Cost Function and Softmax
Following the measurement of the ancilla qubits, the classical values are sent to the _softmax_ function to calculate the probability vectors for each class. Then the losses are calculated using the classical categorical cross-entropy loss function which can be expressed as:
\[loss=-\sum_{i=1}^{\text{output size}}y_{i}\log(\bar{y_{i}}) \tag{3}\]
where \(y_{i}\) is the true-label and \(\bar{y_{i}}\) is the predicted probability of the corresponding class. The parameters of the quantum gates are optimized through gradient descent using classical computational techniques, after which the parameters are updated accordingly through back-propagation.
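Classically, this post-processing amounts to the following few lines (the small constant added inside the logarithm for numerical safety is an implementation detail not stated in the text):

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)                        # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(y_true, y_prob):
    """Categorical cross-entropy of Eq. (3): y_true is one-hot,
    y_prob is the softmax of the measured expectation values."""
    return -np.sum(y_true * np.log(y_prob + 1e-12))
```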
Figure 8: The final Interaction Layer acts as the classifier for the classification of three classes. Note that the three qubits from the bottom are ancilla qubits for the three classes and are initialized to \(|0\rangle\)
## 4 Simulation and Results
### Dataset
The widely utilized standard datasets, namely _MNIST_ (Deng (2012)) and _Fashion MNIST_ (Xiao et al (2017)), are employed to benchmark the proposed QCNN model. Binary classification involving classes (0, 1) and three-class classification involving classes (0, 1, 2) are performed. In the _MNIST dataset_, the number of training (test) images for class 0 is 5,923 (980); for class 1 it is 6,742 (1,135); for class 2 it is 5,958 (1,032). The _Fashion MNIST dataset_ consists of 6,000 training and 1,000 test images per class. The original size of the images from either dataset is \(28\times 28\), and a reduction in dimension is accomplished through a classical autoencoder or simple resizing to the desired shape. A third dataset, known as the _Iris dataset_ (De Marsico et al (2015)), is used solely for the purpose of multiclass classification. It consists of feature data of three classes of iris species with 50 samples per class. The features include 4 attributes per sample, namely sepal length, sepal width, petal length, and petal width. The dataset is such that one flower class is linearly separable from the others, but the other two classes are not linearly separable from each other.
### Simulation
The simulation of the proposed QCNN model is conducted using Pennylane (Bergholm et al (2018)). The variational circuit is trained through the use of the Nesterov Moment Optimization algorithm (Nesterov (1983)). A loop is executed through the training process where a batch of randomly selected images is fed into the network in each iteration, reducing run time and preventing the gradient from becoming trapped in a
Figure 9: Cost vs. Iteration plot using _Angle Encoding_
local minimum. The optimization of the learning process is further facilitated through the use of an adaptive learning rate, where the learning rate is decreased as the rate of change of the output of the cost function is decreased.
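A skeleton of such a loop is shown below. `circuit_cost`, `X_train` and `Y_train` are hypothetical stand-ins for the full QCNN cost and the dataset, and the schedule that reduces the step size is likewise only an illustrative placeholder for the adaptive rate described above.

```python
import numpy as np
import pennylane as qml
from pennylane import numpy as pnp

# Hypothetical stand-ins: in the real pipeline, circuit_cost would run the
# full QCNN on a batch and return the categorical cross-entropy.
X_train = np.random.rand(600, 8)
Y_train = np.random.randint(0, 2, size=600)

def circuit_cost(p, X, Y):
    return pnp.sum(p ** 2) * 1e-3 + 0 * len(X)   # dummy quadratic placeholder

opt = qml.NesterovMomentumOptimizer(stepsize=0.05)
params = pnp.random.normal(0, 1, size=50, requires_grad=True)  # random init

for it in range(200):
    idx = np.random.randint(0, len(X_train), size=50)   # random batch of 50
    params, cost = opt.step_and_cost(
        lambda p: circuit_cost(p, X_train[idx], Y_train[idx]), params)
    if it == 100:   # crude adaptive learning rate: continue with a smaller step
        opt = qml.NesterovMomentumOptimizer(stepsize=0.01)
```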
### Performance evaluation
#### 4.3.1 Binary classification
For the binary classification problem, classes 0 and 1 are chosen for both datasets in order to be able to compare with previous works. A total of eight input qubits along with two ancilla qubits for the two classes are used in the proposed model. The Convolutional and the Pooling Layers are arranged as illustrated in Fig.1. During training, the batch size is kept at 50 images, which are randomly selected in each iteration. The learning rate in the Nesterov Optimizer is tuned to be 0.05 at the beginning of the learning process and in the later stages, it is reduced in accordance with the decrease in the cost and the improvement of test accuracy. The trainable parameters are initialized randomly using the normal distribution and the average classification accuracy is calculated over five random initializations.
Figure 10: Accuracy vs. iteration plot using _Angle Encoding_
The effect of the following different approaches on overall performance is investigated:
1. Quantum encoding by either _Amplitude Encoding_ or _Angle Encoding_.
2. The two parameterized ansatzes given in Fig.3 and 4 used to construct the Convolutional Layers in Fig.2.
The classification of the _Fashion MNIST_ dataset is benchmarked to an accuracy of 96.60% using the combination of an autoencoder with _Angle Encoding_ and ansatz 1 as the Convolutional Layer filters, for which the total number of trainable parameters in the quantum network is 50. To demonstrate the convergence of the cost function during the training stage, Fig.9 and Fig.11 are shown. It can be observed from Fig.10 that over 90% accuracy is reached within only 50 iterations for both ansatzes, which suggests that the network converges rapidly with respect to the number of iterations. The peak accuracy attained for the _MNIST_ dataset is 99.24%. The number of trainable parameters, in this case, is also 50, and simple resizing with ansatz 1 is used. The accuracies for the different combinations are summarized in Table 1.
The superiority in performance of ansatz 1 over ansatz 2, due to its additional parameters, is demonstrated in Fig.11 and 12, where the performances are compared with respect to the same Quantum Feature Encoding methods.
It can be observed from Table 1 that the Quantum Encoding method that yields the best accuracy depends on the dataset used. The peak accuracy for the _Fashion MNIST_ dataset results from reducing the classical data by an autoencoder followed by _Angle Encoding_, whereas, for _MNIST_, it is simple resizing followed by _Amplitude Encoding_. Comparison of the results with other existing quantum machine learning models for binary classification, such as that proposed in Hur et al (2022),
| Dataset | Ansatz | Encoding Method | Trainable parameters | Accuracy (%) |
| --- | --- | --- | --- | --- |
| Fashion MNIST | 1 | Angle | 50 | 96.60 |
| Fashion MNIST | 1 | Amplitude | 50 | 92.50 |
| Fashion MNIST | 2 | Angle | 50 | 94.70 |
| Fashion MNIST | 2 | Amplitude | 50 | 91.30 |
| MNIST | 1 | Angle | 50 | 98.22 |
| MNIST | 1 | Amplitude | 50 | 99.24 |
| MNIST | 2 | Angle | 50 | 93.52 |
| MNIST | 2 | Amplitude | 50 | 94.28 |

Table 1: Results of the proposed binary classification model

| Dataset | Model used | Accuracy (%) |
| --- | --- | --- |
| Fashion MNIST | Proposed | 96.60 |
| Fashion MNIST | QCNNFCDC (Hur et al (2022)) | 94.30 |
| Fashion MNIST | Proposed without E | 92.00 |
| MNIST | Proposed | 99.24 |
| MNIST | QCNNFCDC (Hur et al (2022)) | 98.70 |
| Iris | Proposed | 94.74 |
| Iris | HCQAMC (Chalumuri et al (2021)) | 92.10 |

Note: 'E' refers to Interaction Layers 1 and 2.

Table 2: Comparison of the results of our proposed model to those of existing models for different datasets
shows that our model surpasses their accuracy, as shown in the first two rows of Table 2. The accuracy for the binary classification of classes 0 and 1 for _Fashion MNIST_ is 2.6% and for _MNIST_ it is 0.5% higher than that found in Hur et al (2022). This increase in accuracy can be attributed to the incorporation of the Interaction Layers and the use of the ancilla-based final classifier (Interaction Layer 3). Hur et al (2022)
Figure 11: Cost vs. iteration plot using _Amplitude Encoding_
Figure 12: Accuracy vs. iteration plot using _Amplitude Encoding_
have further shown that their quantum network outperforms classical counterparts using a similar number of trainable parameters for the binary classification problem. It can therefore be concluded that the results of the study in our paper exhibit a clear improvement in the performance of the proposed method.
Figure 14: Accuracy vs Iteration plots comparison between models with and without Interaction Layers 1 and 2
Figure 13: Cost vs Iteration plots comparison between models with and without Interaction Layers 1 and 2
This further indicates superiority in performance compared to classical networks with a similar number of trainable parameters.
The peak accuracy for ansatz 1 used to construct the Convolutional Layers is 96.60% compared to 94.70% when ansatz 2 is used. This increase can be related to the number of trainable parameters available for each ansatz, which is higher in the case of ansatz 1. Additionally, the effect of the proposed Interaction Layers 1 and 2 on the performance of the network is demonstrated by comparing the performances with and without their presence. As evident in Fig.13 and Fig.14, it can be concluded that these layers help reduce cost and increase accuracy by creating further dependencies between quantum states and making them more capable of spanning the _Hilbert Space_, while adding only 12 more trainable parameters. In both cases, the data reduction and quantum encoding techniques used are an autoencoder and _Angle Encoding_, respectively, on the _Fashion MNIST_ dataset, with ansatz 1 in Fig.3 used as the convolutional filter.
#### 4.3.2 Multiclass classification
Multiclass classification is performed on the _MNIST_ and _Fashion MNIST_ datasets with the network slightly modified to include two Convolutional Layers cascaded together in each convolutional stage. It must be noted that these cascaded Convolutional Layers share the same weight and therefore the number of trainable parameters in the circuit does not significantly increase. The number and placement of the Interaction Layers remain unchanged from the network for Binary classification. Classes 0, 1, and 2 are selected for both datasets and the batch size is kept at 100 with the learning rate set at 0.05 in the beginning and adapted to 0.01 after 50 iterations. The peak classification accuracy obtained is 91.76% for the _Fashion MNIST_ dataset and 85.11% for the _MNIST_ dataset using ansatz 1. The total number of trainable parameters in the network is only 53.
It is noticed that the combination of ansatz 1 and _Angle Encoding_ as the Quantum Encoding Method provides the highest accuracy for the _Fashion MNIST_ dataset, but the combination of ansatz 1 with _Amplitude Encoding_ performs better for the _MNIST_ dataset.
To demonstrate the flexibility of the proposed circuit, performance on the _Iris_ dataset is also tested. In order to accommodate data of smaller dimensions, a cut-down version of the proposed circuit with only 4 qubits was sufficient. The test accuracy with a batch size of 50 and a learning rate of 0.005 is found to be 94.74%. A three-class classifier with a variational quantum circuit has been proposed in Chalumuri et al (2021), where classification was performed on classical one-dimensional feature data. The accuracy of 94.74% surpasses the accuracy of the network proposed in Chalumuri et al (2021) (92.10%), as shown in the third row of Table 2. It must also be noted
| Dataset | Encoding Method | Trainable Parameters | Accuracy |
| --- | --- | --- | --- |
| Fashion MNIST | Angle | 53 | 91.76% |
| MNIST | Amplitude | 53 | 85.11% |
| Iris | Angle | 31 | 94.74% |

Note: In all of the cases, ansatz 1 is used to construct the Convolutional Layers.

Table 3: Results of the Multiclass Classification problems.
that the network used for benchmarking the _Iris_ dataset has only 31 parameters and is much shallower than the one in Chalumuri et al (2021). It is therefore understood that the network is not limited to image classification but performs equally well on one-dimensional feature data. The results of the multiclass classification problems are summarized in Table 3.
The high accuracy achieved with only 53 parameters (for _Fashion MNIST_ and _MNIST_) and 31 parameters (for _Iris_) can be directly attributed to the incorporation of the Interaction Layers. When expanded qubit interactions are used, it is possible to achieve such accuracy while using very few parameters. This implies that these interactions can speed up the training of QCNNs while producing better outcomes with shallower circuits, enabling the development of networks that are more resistant to the barren plateau issues that result from greater depth.
## 5 Conclusion
In this work, a shallow entangled QCNN with a minimal number of trainable parameters is proposed, which provides very satisfactory performance in binary and multiclass classification problems. The incorporation of weighted Interaction Layers, consisting of trainable parameters and utilizing three-qubit interactions between the quantum Convolutional Layers, has played a major role in enhancing the performance of the network. In doing so, it also studies the effect of the addition of such parameterized 3-qubit layers in a QCNN structure, which is a first of its kind. This result indicates the significance of increased qubit interaction on the substantial increase in the ability of a quantum network to learn more complex information from the training data whilst only using a few parameters.
This approach constitutes a novel way towards the development of a generalized parameterized QNN that performs equally well for binary and multiclass classification on both image data and one-dimensional feature data. It further explores the possibilities of performance enhancement of quantum networks upon the use of increased qubit interaction which is expected to be a reality in the not-so-distant future.
The simulation results indicate a quantum advantage of such networks, showing a clear superiority in performance compared to their classical counterparts using a similar number of parameters. Further research could be conducted to gain a more comprehensive understanding of the quantum advantage of these networks. An extensive investigation into the underlying causes of the data dependence of the proposed feature encoding methods can also be done. Other future milestones may include an extension of the work to big data analysis and the solution of more complex problems utilizing more resources and power on real quantum computers.
## Statements and Declaration
The authors declare no competing interest in any other work or publication.
## Data Availability
The simulation code used in this paper can be found at the following link: Simulation and Code.
The datasets used in this paper are publicly available and can be found in the works of Deng (2012), Xiao et al (2017), and De Marsico et al (2015).
|
2306.01870 | Implicit Regularization in Feedback Alignment Learning Mechanisms for
Neural Networks | Feedback Alignment (FA) methods are biologically inspired local learning
rules for training neural networks with reduced communication between layers.
While FA has potential applications in distributed and privacy-aware ML,
limitations in multi-class classification and lack of theoretical understanding
of the alignment mechanism have constrained its impact. This study introduces a
unified framework elucidating the operational principles behind alignment in
FA. Our key contributions include: (1) a novel conservation law linking changes
in synaptic weights to implicit regularization that maintains alignment with
the gradient, with support from experiments, (2) sufficient conditions for
convergence based on the concept of alignment dominance, and (3) empirical
analysis showing better alignment can enhance FA performance on complex
multi-class tasks. Overall, these theoretical and practical advancements
improve interpretability of bio-plausible learning rules and provide groundwork
for developing enhanced FA algorithms. | Zachary Robertson, Oluwasanmi Koyejo | 2023-06-02T18:57:24Z | http://arxiv.org/abs/2306.01870v2 | # Layer-Wise Feedback Alignment is Conserved in Deep Neural Networks
###### Abstract
In the quest to enhance the efficiency and bio-plausibility of training deep neural networks, Feedback Alignment (FA), which replaces the backward pass weights with random matrices in the training process, has emerged as an alternative to traditional backpropagation. While the appeal of FA lies in its circumvention of computational challenges and its plausible biological alignment, the theoretical understanding of this learning rule remains partial. This paper uncovers a set of conservation laws underpinning the learning dynamics of FA, revealing intriguing parallels between FA and Gradient Descent (GD). Our analysis reveals that FA harbors implicit biases akin to those exhibited by GD, challenging the prevailing narrative that these learning algorithms are fundamentally different. Moreover, we demonstrate that these conservation laws elucidate sufficient conditions for layer-wise alignment with feedback matrices in ReLU networks. We further show that this implies over-parameterized two-layer linear networks trained with FA converge to minimum-norm solutions. The implications of our findings offer avenues for developing more efficient and biologically plausible alternatives to backpropagation through an understanding of the principles governing learning dynamics in deep networks.
Machine Learning, ICML
## 1 Introduction
Backpropagation, a widely successful learning rule for artificial neural networks, has been instrumental in advancing deep learning. Nevertheless, it presents significant challenges, including the intricate backward pass mechanism, which complicates training parallelism due to communication bottlenecks. In addition, it lacks biological plausibility, further limiting its practical utility.
As an alternative, Feedback Alignment (FA) offers an attractive learning rule that alleviates the computational complexities and bio-plausibility issues associated with backpropagation (Lillicrap et al., 2016). FA replaces the backward pass with random feedback matrices, promising a more straightforward approach to training neural networks.
Despite its advantages, the understanding of FA's underlying principles, particularly in the context of deep neural networks, remains elusive. This gap in knowledge motivates our current investigation, which seeks to unravel the inherent laws that govern the learning dynamics under FA.
Our work makes several significant contributions. Firstly, we establish a set of conservation laws for the learning dynamics under FA, elucidating the implicit bias exhibited by this learning rule--a bias distinct from that of backpropagation. Secondly, these conservation laws enable us to identify a sufficient condition for layer-wise alignment with feedback matrices. Lastly, we provide evidence that two-layer linear networks trained with FA converge to a global optimum.
We believe that our results will have broad implications for future research and practical applications. By quantifying the properties of alternative learning rules like FA, our analysis provides valuable insights that can inform both theoretical advancements and the design of more efficient and biologically plausible alternatives to backpropagation.
## 2 Related Work
The pioneering work by Stork (1989) questioned the biological plausibility of backpropagation, leading to the exploration of alternative learning rules. Feedback Alignment (FA) emerged as a promising candidate in this regard. Lillicrap et al. (2016) first introduced FA as an effective learning rule for deep neural networks. This approach was further investigated by Nokland (2016), who demonstrated that random synaptic feedback weights could support error backpropagation for deep learning.
The dynamics of learning with FA have also been the subject of several studies (Refinetti et al., 2021; Song et al., 2021; Launay et al., 2020; Lechner, 2020; Bordelon and Pehlevan, 2022). For instance, Refinetti et al. (2021) highlighted the dynamics of aligning before memorizing in learning with FA, while a parallel line of work by Launay et al. (2020)
showcased that FA can be successfully scaled to modern deep learning tasks and architectures. The work of Lechner (2020) also shows alignment of the learning rule updates with the gradient for single-output networks trained with a variant of feedback alignment, but there are counter-examples for the multi-output setting. There have also been works investigating bio-plausible learning rules applied to networks with large width (Song et al., 2021; Bordelon and Pehlevan, 2022). Our analysis can be seen as a generalization of the approach taken by Lillicrap et al. (2016) to investigate convergence of FA. In particular, our Theorem 5.1 is inspired by a relation satisfied by the parameter updates used in their main convergence result for linear feedback alignment.
Finally, our study is related to the implicit bias of models trained with gradient descent (Vallet et al., 1989; Duin, 2000; Du et al., 2018; Soudry et al., 2018; Belkin et al., 2019). There is a longer history of investigating the phenomena of gradient descent picking solutions that generalize well in linear models (Vallet et al., 1989; Duin, 2000). However, more recently, there has been work looking into similar phenomena involving deep neural networks (Du et al., 2018; Soudry et al., 2018; Belkin et al., 2019). In particular, a related conservation law of neural networks was studied by Du et al. (2018), where they examined the self-balancing property of layers in deep homogeneous models. While their work provides valuable insights into the role of conservation laws in learning dynamics, it does not directly address the implications on alignment as a result of these laws.
Our work diverges from this existing body of literature by proposing a set of conservation laws specifically tailored to the learning dynamics under FA. A unique feature of our study is that these conservation laws yield immediate implications on alignment as a corollary, offering a more comprehensive understanding of the FA's underlying principles. To the best of our knowledge, this is the first result establishing layer-wise alignment for a non-linear network trained with feedback alignment, paving the way for future research in layer-wise alignment in more general settings such as wide neural networks or for other learning rules.
## 3 Basic Notation
To fully comprehend the dynamics underpinning Feedback Alignment (FA), we first need to establish the appropriate mathematical notation and formalism. This section will clarify the key definitions and operations used throughout this study. We will denote scalars and vectors by lower-case letters (e.g., \(x,y,z\)) and matrices by uppercase letters (e.g., \(A,B,C\)). The symbol \(\odot\) represents the Hadamard (element-wise) product of two matrices or vectors of the same dimensions. Importantly, we denote the trace of a square matrix \(A\) by \(Tr(A)\). Finally, we note that the trace operator can be used to compute the inner product between two matrices \(A\) and \(B\) of the same dimension. We denote this inner product as \(\langle A,B\rangle=Tr(A^{T}B)\), where \(A^{T}\) is the transpose of \(A\).
## 4 Feedback Alignment: A Closer Look
Consider a neural network \(f\) parameterized with \(L\) layers, where each layer is denoted as \(l\), with \(l\in\{1,2,\ldots,L\}\). Each layer is associated with an activation function \(\phi\) and a weight matrix \(W_{l}\in\mathbb{R}^{n_{l}\times n_{l-1}}\). Here, \(n_{l}\) refers to the number of neurons in layer \(l\).
For a given input \(x\in\mathbb{R}^{n_{0}}\), the pre-activation \(h_{l}\) and the activation \(a_{l}\) of layer \(l\) are computed recursively as follows:
\[h_{l}=W_{l}a_{l-1},\;a_{l}=\phi(h_{l})\]
where \(\phi\) denotes the non-linear activation function, \(a_{l-1}\) is the activation of the previous layer, and \(a_{0}=x\).
In FA, the feedback weights, denoted as \(B_{l}\in\mathbb{R}^{n_{l}\times n_{l-1}}\), are fixed random matrices, independent of \(W_{l}\), that take the place of \(W_{l}^{\top}\) in the backward pass. The error \(\delta_{l}\) at layer \(l\) is calculated as:
\[\delta_{l}=\phi^{\prime}(h_{l})\odot\left(B_{l+1}^{\top}\delta_{l+1}\right),\;\delta_{L}=\nabla_{a_{L}}\mathcal{L}(f)\]
where \(\mathcal{L}(f)\) is the loss function applied to the network, and \(\nabla_{a_{L}}\mathcal{L}\) is the gradient of \(\mathcal{L}\) with respect to the final layer activation \(a_{L}\).
The weight update under the FA rule is then given by:
\[\Delta W_{l}=-\eta\,\delta_{l}\,a_{l-1}^{\top}\]
where \(\eta\) is the learning rate.
We note that unlike in backpropagation, the feedback matrices \(B_{l}\) are not tied to the feedforward weights \(W_{l}\). In that setting, we would have time-dependent feedback matrices \(B_{l}(t)=W_{l}(t)\). This introduces an alignment challenge since there is no guarantee the feedback matrices align with the learned weights.
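To make the rule concrete, here is a minimal NumPy sketch of one FA update for a two-layer scalar-output ReLU network under squared loss; all sizes and scales are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n0, n1, n2 = 10, 32, 1                      # layer widths (illustrative)
W1 = rng.normal(0, 0.1, (n1, n0))
W2 = rng.normal(0, 0.5, (n2, n1))
B2 = rng.normal(0, 0.5, (n2, n1))           # fixed random feedback, shaped like W2

def fa_step(x, y, eta=1e-2):
    """One feedback-alignment update; B2.T replaces W2.T in the backward pass."""
    global W1, W2
    h1 = W1 @ x
    a1 = np.maximum(h1, 0)                  # ReLU activation
    a2 = W2 @ a1                            # linear scalar output
    delta2 = a2 - y                         # dL/da2 for L = 0.5*(a2 - y)^2
    delta1 = (h1 > 0) * (B2.T @ delta2)     # random feedback instead of W2.T
    W2 -= eta * np.outer(delta2, a1)        # Delta W_l = -eta * delta_l a_{l-1}^T
    W1 -= eta * np.outer(delta1, x)
    return 0.5 * ((a2 - y) ** 2).item()

loss = fa_step(rng.normal(size=n0), y=1.0)
```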
## 5 Theoretical Analysis
Building upon the FA formalism detailed in the previous section, we aim to present our main results and provide insight on their distinctive characteristics and how they differ from the traditional backpropagation (BP) in the context of learning rules.
It has been previously observed that the matrices learned through feedback alignment tend to align with their respective feedback matrices (Lillicrap et al., 2016; Nokland, 2016;
Launay et al., 2020). Our first main result concerns a conservation law for the FA learning dynamics that can explain the layer-wise alignment phenomena. Specifically, we introduce a conserved quantity, which remains invariant throughout the training process. This invariance holds under some general conditions related to the activation function and the loss function.
**Theorem 5.1**.: _Suppose that we apply feedback alignment to a scalar output ReLU network with differentiable loss function. Then the flow of the layer weights under feedback alignment for all \(t\in\mathbb{R}_{\geq 0}\) maintains,_
\[\frac{1}{2}\|W_{i}(t)\|_{F}^{2}-\langle W_{i+1}(t),B_{i+1}\rangle=\frac{1}{2}\|W_{i}(0)\|_{F}^{2}-\langle W_{i+1}(0),B_{i+1}\rangle\]
Proof Sketch.: The major technical part of the proof is to show that we have:
\[\langle\dot{W}_{i},W_{i}\rangle=\langle\dot{W}_{i+1},B_{i+1}\rangle.\]
The trace map is linear, so exchanging the trace with the integral and applying the fundamental theorem of calculus we have,
\[\int_{0}^{t}\text{Tr}(\dot{W}_{i}W_{i}^{T})\,ds=\text{Tr}\left[\int_{0}^{t}\dot{W}_{i}W_{i}^{T}\,ds\right]=\frac{1}{2}\|W_{i}(t)\|_{F}^{2}-\frac{1}{2}\|W_{i}(0)\|_{F}^{2}\]
Finally,
\[\int_{0}^{t}\text{Tr}(\dot{W}_{i+1}B_{i+1}^{T})ds=\text{Tr}\left[\int_{0}^{t} \dot{W}_{i+1}B_{i+1}^{T}ds\right]\]
\[=\langle W_{i+1}(t),B_{i+1}\rangle-\langle W_{i+1}(0),B_{i+1}\rangle\]
The result follows.
This conservation law has several intriguing implications. One such implication is the existence of an implicit bias in FA that is analogous to, but distinct from, the bias in gradient descent. This bias effectively governs the learning trajectory of the FA rule, directing it towards certain types of solutions over others.
The conservation law also offers an insight into the alignment challenge in FA. As a corollary, if we initialize \(W_{i+1}(0)=B_{i+1}\) such that \(\|W_{i}(0)\|\leq\|W_{i+1}(0)\|\) then the conservation law simplifies to,
\[\langle W_{i+1}(t),B_{i+1}\rangle\geq\|W_{i+1}(0)\|_{F}^{2}-\frac{1}{2}\|W_{i }(0)\|_{F}^{2}\geq 0\]
Thus, there are general initialization schemes that guarantee layer-wise alignment.
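The conservation law and this initialization scheme are easy to check numerically; the following self-contained sketch (toy dimensions and a toy linear target are assumptions) tracks the conserved quantity of Theorem 5.1 along a discretized FA trajectory.

```python
import numpy as np

rng = np.random.default_rng(1)
n0, n1 = 10, 32
W1 = rng.normal(0, 0.05, (n1, n0))   # small so that ||W1(0)|| <= ||W2(0)||
B2 = rng.normal(0, 1.0, (1, n1))
W2 = B2.copy()                        # W2(0) = B2 guarantees positive alignment

def conserved():
    return 0.5 * np.linalg.norm(W1) ** 2 - np.sum(W2 * B2)  # 0.5||W1||^2 - <W2,B2>

eta, c0 = 1e-3, conserved()
for _ in range(2000):
    x = rng.normal(size=n0)
    h1 = W1 @ x
    a1 = np.maximum(h1, 0)
    delta2 = W2 @ a1 - x.sum()        # squared loss against a toy linear target
    delta1 = (h1 > 0) * (B2.T @ delta2)
    W2 -= eta * np.outer(delta2, a1)
    W1 -= eta * np.outer(delta1, x)
print(conserved() / c0)               # stays ~1 (exactly 1 for the continuous flow)
```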
Lastly, we focus on the case of two-layer linear networks trained with FA. By exploiting the conservation law, we show that these networks are capable of converging to a global optimum.
**Theorem 5.2**.: _Assume that we are to fit data \(y\) with squared-loss and an over-parameterized two-layer network \(f_{w_{t}}(X)=Xw_{t}=XW_{1}(t)W_{2}(t)\) with data \(X\) such that rows are linearly-independent. Assume we may pick \(w_{0}\in\text{span}(X^{T})\) such that we have positive alignment for all time. If we run (direct) feedback alignment flow then we have the following,_
\[\lim_{t\rightarrow\infty}e^{rt}\,\|y-Xw_{t}\|_{2}=0\]
_for some \(r>0\). Moreover, \(w_{\infty}=W_{1}(\infty)W_{2}(\infty)\) is the minimum-norm solution._
Proof Sketch.: We prove an auxiliary lemma that shows that the network weights stay in the span of the input data matrix
Figure 1: We plot the ratio \(\frac{\langle W_{2}(t),B_{2}\rangle-\frac{1}{2}\|W_{1}(t)\|_{F}^{2}}{\langle W _{2}(0),B_{2}\rangle-\frac{1}{2}\|W_{1}(0)\|_{F}^{2}}\) during training to verify Theorem 5.1 for two-layer linear and ReLU networks.
Figure 2: We plot the loss during training for two-layer linear and ReLU networks.
\(X^{T}\). To show this we derive the update rules in continuous time, under the assumption that the weights are initialized in the span of the data. A simple option is to initialize with \(W_{1}=0\). Following this, we show that the error decreases geometrically over time, implying that the algorithm converges. Finally, we demonstrate that under these conditions, the solution found by the flow is related to the Moore-Penrose pseudoinverse, which minimizes the norm of the solution.
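Under the stated assumptions the claim can be reproduced in a few lines; the toy problem below (sizes, step size and iteration count are arbitrary choices) runs a discretized FA flow with \(W_{1}=0\) and compares the limit against the Moore-Penrose solution.

```python
import numpy as np

rng = np.random.default_rng(2)
m, d, k = 20, 50, 16                   # over-parameterized: d > m
X = rng.normal(size=(m, d))            # rows are linearly independent a.s.
y = rng.normal(size=m)
W1 = np.zeros((d, k))                  # W1(0) = 0 keeps w in span(X^T)
W2 = rng.normal(size=k)
B2 = W2.copy()                         # shared init => positive alignment

eta = 1e-4
for _ in range(50000):
    e = X @ (W1 @ W2) - y              # residual
    W2 -= eta * W1.T @ (X.T @ e)       # exact gradient for the output layer
    W1 -= eta * np.outer(X.T @ e, B2)  # feedback B2 replaces W2 for layer 1
w_fa = W1 @ W2
w_min = np.linalg.pinv(X) @ y          # minimum-norm interpolant
print(np.linalg.norm(w_fa - w_min))    # -> ~0
```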
This result implies that feedback-alignment applied to linear networks in the over-parameterized regime enjoy similar generalization properties to those of networks trained with gradient descent. These types of results have been popular as a way to explain why deep neural networks are able to generalize (Belkin et al., 2019).
These theoretical findings indicate that FA shares some properties with the more traditional gradient descent approach to learning. In particular, we also establish a connection between initialization, layer-wise alignment, and convergence with our results. This could set the stage for further exploration of alternative learning rules that can overcome the practical challenges associated with BP, whilst retaining its effectiveness.
## 6 Experiments
In our effort to comprehend the fundamental principles governing Feedback Alignment (FA), we devised a series of experiments to validate our theoretical assertions. We specifically aimed to observe the FA learning dynamics, understanding its implicit bias and ultimately, its convergence to the global optimum in over-parameterized two-layer linear networks.
Our network design involved multi-layer configurations, with the task of finding the global minimizers for the squared-loss function. For the generation of training data, we relied on a multivariate normal distribution, where the true response variable was a noiseless linear transformation of the input.
A crucial element of our experiments was implementing FA to update the network weights. Theorem 5.1 predicts that the quantity \(\frac{1}{2}\|W_{i}(t)\|_{F}^{2}-\langle W_{i+1}(t),B_{i+1}\rangle\) should remain invariant under the FA learning flow. Tracking this quantity provided empirical validation of this theoretical result. Furthermore, comparing the FA-learned weights to the Moore-Penrose pseudoinverse solution allowed us to observe the convergence of FA to the global optimum.
We carried out these procedures for two network architectures - a two-layer linear network and a two-layer ReLU network. To assure high-probability positive layer-wise alignment and convergence, we initialized the first layer weights using a distribution \(\mathcal{N}(0,1/10)\), and the second layer weights with a distribution \(\mathcal{N}(0,1)\). We initialized the feedback matrices to match the initialization of the second layer weights.
### Discussion of Results
The experimental results, as displayed in Figures 1 and 2, corroborate the conservation law postulated by Theorem 5.1. The tracked quantity \(\frac{1}{2}\|W_{i}(t)\|_{F}^{2}-\langle W_{i+1}(t),B_{i+1}\rangle\) remained nearly constant across iterations which provides empirical support to our theoretical results. This manifestation of the conservation law substantiates the intriguing parallels between FA and Gradient Descent (GD), specifically with regards to their implicit biases. In the context of the two-layer linear network, the convergence of FA-learned weights to the global optimal solution was observed. We verified this convergence is identical to the minimum norm solution up to three significant digits. This underscores the implicit bias of feedback alignment towards solutions that generalize well.
In particular, we find the (nearly) exact conservation of the layer-wise alignment quantity in non-linear networks updated with the feedback alignment learning rule to be compelling. While we do not show that layer-wise alignment is sufficient for convergence in the non-linear setting, it seems plausible that such a condition would become useful in the large-width setting, which has been successfully analyzed for gradient descent. Overall, the implications of these shared similarities warrant further investigation, potentially fostering the development of more powerful theoretical techniques capable of distinguishing these two learning rules. We think that research into establishing (or refuting) implicit bias results for bio-plausible learning rules could elucidate higher-level principles behind good learning rules.
## 7 Conclusion
In conclusion, our findings challenge the prevailing narrative that FA and GD are fundamentally different learning algorithms. We demonstrate that, under certain conditions, FA can mirror the behavior of GD, offering a computationally efficient and biologically plausible alternative. Our main motivation is to pave the way for developing more efficient deep learning algorithms, better approximating the learning dynamics in biological systems. Overall, our results connecting layer-wise alignment with convergence in linear-models suggest that layer-wise alignment may also be a useful tool for analyzing learning dynamics in wide neural networks. Future work aims to extend these results to nonlinear models, facilitating the creation of more biologically plausible models.
## Acknowledgements
We'd like to thank Tengyu Ma for stimulating conversation about the project and inspiring the focus on investigating implicit bias presented.
|
2303.05238 | An Unscented Kalman Filter-Informed Neural Network for Vehicle Sideslip
Angle Estimation | This paper proposes a novel vehicle sideslip angle estimator, which uses the
physical knowledge from an Unscented Kalman Filter (UKF) based on a non-linear
single-track vehicle model to enhance the estimation accuracy of a
Convolutional Neural Network (CNN). The model-based and data-driven approaches
interact mutually, and both use the standard inertial measurement unit and the
tyre forces measured by load sensing technology. CNN benefits from the UKF the
capacity to leverage the laws of physics. Concurrently, the UKF uses the CNN
outputs as sideslip angle pseudo-measurement and adaptive process noise
parameters. The back-propagation through time algorithm is applied end-to-end
to the CNN and the UKF to employ the mutualistic property. Using a large-scale
experimental dataset of 216 manoeuvres containing a great diversity of vehicle
behaviours, we demonstrate a significant improvement in the accuracy of the
proposed architecture over the current state-of-art hybrid approach combined
with model-based and data-driven techniques. In the case that a limited dataset
is provided for the training phase, the proposed hybrid approach still
guarantees estimation robustness. | Alberto Bertipaglia, Mohsen Alirezaei, Riender Happee, Barys Shyrokau | 2023-03-09T13:21:53Z | http://arxiv.org/abs/2303.05238v1 | # An Unscented Kalman Filter-Informed Neural Network for Vehicle Sideslip Angle Estimation
###### Abstract
This paper proposes a novel vehicle sideslip angle estimator, which uses the physical knowledge from an Unscented Kalman Filter (UKF) based on a non-linear single-track vehicle model to enhance the estimation accuracy of a Convolutional Neural Network (CNN). The model-based and data-driven approaches interact mutually, and both use the standard inertial measurement unit and the tyre forces measured by load sensing technology. CNN benefits from the UKF the capacity to leverage the laws of physics. Concurrently, the UKF uses the CNN outputs as sideslip angle pseudo-measurement and adaptive process noise parameters. The back-propagation through time algorithm is applied end-to-end to the CNN and the UKF to employ the mutualistic property. Using a large-scale experimental dataset of 216 manoeuvres containing a great diversity of vehicle behaviours, we demonstrate a significant improvement in the accuracy of the proposed architecture over the current state-of-art hybrid approach combined with model-based and data-driven techniques. In the case that a limited dataset is provided for the training phase, the proposed hybrid approach still guarantees estimation robustness.
State estimation, Sideslip angle, Physics-informed neural network, Unscented Kalman filter, Machine learning
## I Introduction
Active vehicle control systems rely on the sideslip angle and yaw rate information to ensure stability and controllability [1, 2]. Whereas low-cost gyro sensors measure the yaw rate, the vehicle sideslip angle must be estimated. Its direct measurement is possible via optical speed sensors or real-time kinematic positioning-global navigation satellite system (RTK-GNSS), but they are too expensive to be installed in passenger vehicles [3]. Hence, the development of filter architectures is required to estimate the sideslip angle in real-time and with the desired accuracy, i.e. an error below one degree, in high excitation driving conditions [4]. Sideslip angle estimation is particularly challenging for the following aspects:
* A large diversity of vehicle manoeuvres, e.g. steady-state, transient, low, and high excitation.
* The highly non-linear behaviour of tyres leads to a substantial limitation due to tyre model accuracy.
* Data collection requires expensive and high calibration sensitive instruments.
* Numerous external disturbances, e.g. bank angle, road slope, and road friction coefficient.
Several approaches have been proposed for vehicle sideslip angle estimation [5, 6]. They are split into three main groups, i.e. model-based, data-driven and hybrid approaches. The model-based approach relies on the physical knowledge of a vehicle model for state estimation. Open-loop deterministic models are insufficient to provide an accurate estimation, so stochastic closed-loop observers, e.g. extended Kalman filter (EKF), unscented Kalman filter (UKF), and particle filters, are currently applied to estimate unmeasurable states. EKF and UKF are the industrial state-of-art for vehicle sideslip angle estimation because their accuracy can be guaranteed in a specific operating region, and their properties are easily assessed [7]. However, they both struggle in transient and high excitation driving conditions due to the growing non-linearities in the vehicle model [8]. Moreover, they require extensive system knowledge. The data-driven approach has higher accuracy than the model-based approach when enough quality data are pro
Fig. 1: Framework overview of the CNN-UKF approach. A CNN provides a sideslip angle pseudo-measurement and the process noise parameters to a UKF based on a single-track vehicle model. The UKF monitors and weights the CNN’s estimation through physical laws.
vided in the training phase [8]. Different neural network (NN) architectures have been proposed, e.g. feed forward neural network (FFNN) and recurrent neural network (RNN). However, they all lack interpretability and generalisation capabilities. Thus, a purely data-driven approach is hardly applicable for safety applications in the automotive domain [7]. The third approach, named hybrid, combines the pros of the model-based and data-driven approaches. It improves the model-based accuracy thanks to the NN outputs and, simultaneously, gives the data-driven approach an interpretability thanks to the vehicle model. In the proposed hybrid architectures [9, 10, 7], the model and the neural network work in a unidirectional way. Thus, the model in the hybrid architecture relies on the NN knowledge without backward communication, reducing the approach's potential.
This paper proposes a new hybrid approach for vehicle sideslip angle estimation. Its novelty is the mutualistic relationship between the model-based approach, characterised by a UKF based on a single-track model, and the data-driven approach, represented by a Convolutional Neural Network (CNN). It is a sequential hybrid architecture in which the CNN passes the pseudo-measurement of the sideslip angle, the level of distrust of its estimation and the process noise parameters of the vehicle model to the UKF, see Fig. 1.
The paper is organised as follows. Section II contains a summary of the previous works and the main paper contributions. Section III describes the CNN and UKF used in the proposed hybrid approach. Section IV describes how the experiments are conducted and evaluated. Section V reports the obtained results, and Section VI summarises the conclusions and future research paths.
## II Related Works
A summary of the three categories, i.e. _model-based_, _data-driven_ and _hybrid_, is presented in Table II.
The first approach is called _model-based_ and relies on the laws of physics. The vehicle behaviour can be described using the geometric constraints, i.e. kinematic model, or considering the forces and moments acting on the vehicle, i.e. dynamic model. The kinematic model requires only geometrical parameters and does not need extensive vehicle parametrisation because its reliability depends mainly on sensing capabilities. The state-of-art kinematic observer [14] is based on a linear parameter varying system, where the states are the vehicle velocities, and the accelerations are the inputs. This approach leads to high accuracy in transient manoeuvres, but the model is not observable in nearly steady-state conditions [3]. Hence to avoid unobservability, a heuristic function is applied to lead the lateral velocity to zero when the vehicle is moving straight or nearly straight [14]. The downside is the amount of data necessary to define the heuristic function. Moreover, despite the performance improvement, it is still susceptible to integration error and sensor drifting. Thus, in recent publications [15, 16, 17, 18], the measurements from the Inertial Measurement Unit (IMU) are coupled with those from a Global Navigation Satellite System (GNSS) to increase the amount of information available for the estimator. The velocities measured by the GNSS are integrated into an estimation-prediction framework, which estimates the sideslip angle and partially compensates for the error induced by the low GNSS sampling rate [15]. However, GNSS/IMU fusion kinematic approach still suffers from the low GNSS sampling rate. Furthermore, a high-precision GNSS is too expensive as the standard sensor in passenger vehicles, and signal reception cannot always be assured. Therefore, it is mainly applied to racing [19]. Thus, a solution is to consider dynamic models to rely less on the sensor signal quality. Dynamic models allow a more robust noise computation of the accelerations than kinematic models [3]. However, dynamic models require a more profound knowledge of vehicle parameters and the presence of a tyre model, which is a critical source of uncertainty [35]. EKF and UKF are the state-of-art estimation techniques for the model-based approach, and the process and the observation noises are commonly assumed to be Gaussian and uncorrelated. The EKF uses a first-order Taylor series expansion to linearise around
the current mean and covariance. It has excellent accuracy in nearly steady-state conditions and when the vehicle behaves close to linearity, i.e. up to a lateral acceleration of approximately \(4\,\mathrm{m}\mathrm{/}\mathrm{s}^{2}\)[8]. When the vehicle behaves with strong non-linearities, UKF assures a better estimation accuracy because it linearises up to the third order of the Taylor series expansion [8]. However, both observers suffer from the mismatches between the physical and modelled tyre behaviour. A possible solution is to combine the pros of dynamic and kinematic models to develop a hybrid kinematic-dynamic observer [36, 37]. This family combines the accuracy in transient manoeuvres of the kinematic models and the better robustness to sensor noise of the dynamic models. The kinematic and the dynamic filters work simultaneously, and the final sideslip angle estimation is a weighted average of the two approaches. The weights are chosen according to the lateral acceleration signal [37]. However, the weighting coefficients' tuning is complex, and the optimum solution varies according to the considered manoeuvres. Another solution to combine dynamic and kinematic models is the development of a modular scheme to estimate in sequential steps tyre forces, longitudinal and lateral velocities [39]. It consists of monitoring the wheel capacities under combined slip at each vehicle corner to estimate the individual forces and velocities. The approach is experimentally validated in different road conditions, but the results do not show its performance when the vehicle is driven at the limit of handling. Thus, the approach applicability to evasive manoeuvres is limited.
A solution to enhance the state estimation robustness to tyre model inaccuracies of dynamic model is the introduction of adaptive tyre models [23, 48] or new proprioceptive load-sensing technology, e.g. intelligent bearings or smart tyres [35, 57]. The Kalman filter can use tyre force measurements as an additional feedback to improve the estimation and magnify the Kalman gain, especially in the case of non-linear vehicle behaviour. The enhanced vehicle safety and the sensor's cost efficiency make the innovative load-sensing technologies candidate to become part of the standard sensor setup for passenger vehicles [35].
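To illustrate the dynamic-model observer family, the sketch below sets up a UKF on a single-track model using FilterPy; for brevity it uses a linear tyre model and invented vehicle parameters, whereas the estimators discussed above rely on non-linear or adaptive tyre models and, in some variants, tyre force measurements.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

# Illustrative parameters (mass, yaw inertia, axle distances, cornering
# stiffnesses, longitudinal speed); all values are assumptions.
m, Iz, lf, lr, Cf, Cr, vx = 1500.0, 2500.0, 1.2, 1.5, 8e4, 8e4, 20.0

def fx(x, dt, delta=0.0):
    """Single-track model with linear tyres; states x = [vy, r]."""
    vy, r = x
    Fyf = Cf * (delta - (vy + lf * r) / vx)   # front axle lateral force
    Fyr = -Cr * (vy - lr * r) / vx            # rear axle lateral force
    vy_dot = (Fyf + Fyr) / m - vx * r
    r_dot = (lf * Fyf - lr * Fyr) / Iz
    return np.array([vy + dt * vy_dot, r + dt * r_dot])

def hx(x):
    """Measurement model, z = [ay, r]; steering term omitted for brevity."""
    vy, r = x
    Fyf = -Cf * (vy + lf * r) / vx
    Fyr = -Cr * (vy - lr * r) / vx
    return np.array([(Fyf + Fyr) / m, r])

points = MerweScaledSigmaPoints(n=2, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=2, dt=0.01, fx=fx, hx=hx, points=points)
ukf.x = np.zeros(2)
ukf.R = np.diag([0.05, 1e-4])                 # IMU noise levels (assumed)
ukf.Q = np.eye(2) * 1e-4                      # process noise (assumed)

ukf.predict(delta=0.02)                        # steering angle passed to fx
ukf.update(np.array([0.3, 0.01]))              # measured [ay, r]
beta = np.arctan2(ukf.x[0], vx)                # sideslip angle estimate
```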
A _data-driven_ approach reduces the extensive system-knowledge requirements of the model-based approach. A deep NN with eight hidden layers, each having a different number of long short-term memory (LSTM) cells, is proposed [44]. Despite the increased training time of such a deep NN, the authors state that a smaller network was incapable of reaching the level of accuracy of the deeper NN. The issue is that deep NNs are prone to overfitting, and their performance strongly lacks generalisation capabilities. To overcome this issue, an NN classification is applied to choose which available NN is most suitable for specific road conditions [43]. Each of the three FFNNs is built with a single hidden layer, and they are trained with three different datasets corresponding to three different road friction conditions, i.e. dry, wet and icy. Moreover, the performance of the data-driven approach can be enhanced by the availability of tyre force measurements [8]. In this case, a FFNN with two hidden layers outperforms the accuracy of a more complex RNN architecture based on LSTM cells. A FFNN also exceeds the performance of various model-based approaches, even if it tends to show sporadically higher maximum errors. However, the data-driven performance is highly dependent on the amount of representative data, and the data-driven approach will lack performance as soon as the dataset contains a lower amount of data in a particular range of the sideslip angle.
Although the data-driven approach generally has a better estimation performance than the model-based approach, it cannot guarantee robust performance across all vehicle operating conditions. Conversely, a model-based approach based on a dynamic model with tyre force measurements has lower accuracy, but its performance is consistent over the working region [8]. Thus, a _hybrid_ model-based and data-driven approach is proposed. Here we consider the two leading typologies, model-to-NN and NN-to-model, as explained below.
The model-to-NN family aims to augment the number of the NN's inputs using the output of a vehicle model. This transfers some immeasurable physical states to the NN. A kinematic vehicle model can compute the derivative of the sideslip angle, which is used as an extra input for the following RNN based on a Gated Recurrent Unit (GRU) cell [7]. The kinematic model provides the NN with a pseudo-measurement that contains significant errors, biases and drift. Despite this, with the extra vehicle model information, the NN reduces the sideslip angle's Mean Squared Error (MSE) of the non-informed NN by \(2.7\,\%\), \(5.6\,\%\), and \(1.2\,\%\), respectively, for dry, wet and snow conditions [7]. The slight improvement shows the benefits of developing a hybrid approach and highlights the importance of providing a more accurate pseudo-measurement.
The NN-to-model family, vice-versa, aims to provide a sideslip angle pseudo-measurement to the following EKF/UKF. In this case, the NN output is post-processed by a Kalman filter to improve the sideslip angle estimation. One of the first approaches combines an Adaptive Neuro-Fuzzy Inference System (ANFIS) [46] with a UKF to estimate the sideslip angle. The model-based component is employed as a filter to minimise the noise of the NN output and the variance of the estimation mean square error [50]. The ANFIS is trained using synthetic data generated through a high-fidelity simulation environment (CarSim). The ANFIS-UKF improves the performance of a standalone ANFIS [46] by \(21\,\%\) on average for five manoeuvres with high friction conditions. However, the presented figures show a maximum sideslip angle of only \(3\,\mathrm{deg}\) in absolute value, which makes the estimation task easier than in extreme driving conditions. Furthermore, there is no explanation of how to tune the observation noise parameter related to the pseudo-sideslip angle. This value is essential because it defines the level of distrust the UKF can give to the NN output. A similar approach involves integrating a FFNN with a UKF based on a kinematic vehicle model [49]. Contrary to the ANFIS-UKF approach, the model-based component of the hybrid approach is responsible for filtering the estimation noise and correcting the NN output. This is possible thanks to a proportional feedback correction which improves the performance of the pseudo-sideslip angle. Although the NN is trained using synthetic measurements, the approach is validated using experimental data. Unfortunately, all the presented results are normalised,
so it is impossible to understand if the vehicle was driven at the limit of handling. The presented approach improves the estimation accuracy of the data-driven approach by \(73.3\,\%\). However, an analysis of how to set the level of distrust of the pseudo-sideslip angle is missing. Otherwise, when the NN is uncertain due to a lack of data, it will negatively influence the UKF's performance. Furthermore, the kinematic vehicle model is highly susceptible to measurement noise and is not observable in steady-state driving. For this reason, the FFNN is substituted by a deep ensemble NN in more recent publications [9, 10].
A Deep Ensemble (DE) of RNNs, based on LSTM cells, estimates a sideslip angle pseudo-measurement and its level of distrust, which are then provided as extra measurements to a UKF [9, 10]. The level of distrust is modified through a user-defined linear function before being used by the UKF. This further step is mandatory to scale the NN's level of distrust to a meaningful value for the UKF. This hybrid architecture reduces the Root Mean Squared Error (RMSE) by, on average, \(8\,\%\) vs the RNNs [9]. The extra tuning of the level of distrust can easily lead the approach to overfit. Moreover, the level of distrust is computed through the standard deviation of the sideslip angle pseudo-measurements estimated by the RNNs. This does not lead to a physics-informed NN, so it is still complex to assess the properties of this hybrid approach. The reason is that the DE-RNN is not aware of the performance of the UKF, so the estimated level of distrust is not scaled according to the UKF's accuracy. Vice-versa, a physics-informed NN learns the Kalman filter's precision during the training, providing the best level of distrust to maximise the hybrid sideslip angle estimation.
This work proposes a hybrid approach employing a mutual relationship between the model-based and data-driven approaches for vehicle sideslip angle estimation. The inputs and outputs of the NN are, respectively, inputs and measurements of the UKF. The mutual relationship is enforced by the end-to-end training, meaning that the back-propagation algorithm passes through both the NN and the UKF.
The main contribution of this paper is threefold\({}^{1}\). The first is a mutual hybrid approach in which the CNN, trained end-to-end with the UKF, estimates a pseudo-measurement of the sideslip angle and its level of distrust. The UKF observation model uses both NN outputs to improve on the accuracy (MSE) of the state-of-the-art model-based, data-driven, and hybrid approaches for vehicle sideslip angle estimation. Contrary to other hybrid approaches [9, 10], our NN's level of distrust does not need to be scaled after the NN training due to the mutualistic relationship of the architecture.
Footnote 1: The code for our method will be released upon paper acceptance.
The second contribution is the online estimation of the UKF process model uncertainties due to the mutualistic hybrid approach. This provides higher robustness than the state-of-art, even when a limited dataset is used for the training.
The third contribution is that the proposed hybrid architecture is a UKF-informed NN, which means that the NN must comply with the laws of physics of vehicle dynamics. Thus, the proposed approach has a lower Maximum Error (ME) than the state-of-the-art model-based [35] and data-driven [8] approaches, as well as the state-of-the-art hybrid approach [9].
## III UKF-Informed Neural Network
This section describes the proposed hybrid approach based on a CNN end-to-end trained with a UKF (CNN-UKF). A comparison between the proposed mutualistic hybrid approach and the hybrid unidirectional baseline [9] is represented in Fig. 2a and Fig. 2b, respectively. The proposed approach develops a UKF-informed NN, where the NN is constrained to respect the vehicle dynamics. At the same time, the baseline (DE-UKF) corresponds to a UKF augmented by the DE outputs. The approach's discretisation is performed through a zero-order hold method [39] due to its good trade-off between simplicity and accuracy. The discretisation works at \(100\,\mathrm{Hz}\), the standard frequency for vehicle state estimation.
### _Data-driven Component_
A straightforward CNN can cope with the complexity of the task because the approach's strength lies in the hybrid architecture. The network consists of an input layer, two hidden layers and an output layer.
Seventeen measurements form the input layer \((x)\): the longitudinal and lateral accelerations \(a_{x}\) and \(a_{y}\), the longitudinal velocity \(V_{x}\), the road wheel angle \(\delta\), the yaw rate \(\dot{\psi}\), and the longitudinal, lateral and vertical tyre forces for each of the four wheels, respectively \(F_{x}\), \(F_{y}\) and \(F_{z}\). Before being used, the input measurements are normalised because each input has a different physical meaning and order of magnitude. Thus, all the inputs are mapped onto the interval \([0,\,1]\) to speed up and stabilise the training process [58]. A different normalisation method, which scales the data to a mean of zero and a standard deviation of one, has also been tested. Still, the mapping onto the interval \([0,\,1]\) produced the best results after training.
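A minimal sketch of this min-max normalisation is given below; the function name is illustrative, and the per-channel statistics are assumed to be computed on the training set only, so that validation and test data are scaled consistently.

```python
import numpy as np

def normalise_inputs(x, x_min, x_max):
    """Map each of the 17 input channels onto [0, 1].

    x:     (n_samples, 17) raw measurements
    x_min: (17,) per-channel minima computed on the training set
    x_max: (17,) per-channel maxima computed on the training set
    """
    return (x - x_min) / (x_max - x_min)
```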
Fig. 2: Fig. 2a shows the proposed hybrid approach architecture (CNN-UKF). Fig. 2b shows the baseline hybrid architecture (DE-UKF) proposed by [9]. The black arrows show the flow of information during the online estimation, while the green arrows show the flow of information during the back-propagation. The dashed green arrow represents a term used in the cost function computation but not used by the optimiser to update the NN weights.
The two hidden layers consist of 200 and 100 neurons with Rectified Linear Unit (ReLU) activation functions. The hidden layers are 2D convolutions with kernel size \(1\times 1\), \(0\) padding, stride equal to \(1\) and an active bias. The CNN uses dropout regularisation (rate \(0.2\)) and Xavier weight initialisation to avoid overfitting.
The output layer is formed by four neurons corresponding to the pseudo-measurement of the sideslip angle \(\beta_{DD}\), the level of distrust in the pseudo-measurement \(\sigma_{DD}\), the uncertainty of the UKF process model lateral velocity \(\sigma_{V_{y}}\) and the uncertainty of the UKF process model yaw rate \(\sigma_{\dot{\psi}}\). A reference is available only for \(\beta_{DD}\), but the other three outputs strongly affect the estimation of the model-based component, which is used in the training loss function; see Section III-C for further details. Thus, all four CNN outputs are correctly trained during the end-to-end training. \(\sigma_{V_{y}}\), \(\sigma_{\dot{\psi}}\) and \(\sigma_{DD}\) are further processed with a sigmoid function to constrain their values inside the meaningful interval \(\left[0,\,1\right]\). This last step ensures that the CNN produces uncertainties which do not lead to UKF failure.
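The following PyTorch sketch reflects the layer sizes described above; the class name, tensor shapes and output ordering are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class SideslipCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # 1x1 2D convolutions act per sample on the 17 input channels.
        self.backbone = nn.Sequential(
            nn.Conv2d(17, 200, kernel_size=1, stride=1, padding=0, bias=True),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Conv2d(200, 100, kernel_size=1, stride=1, padding=0, bias=True),
            nn.ReLU(),
            nn.Dropout(0.2),
        )
        # Four outputs: beta_DD, sigma_DD, sigma_Vy, sigma_psidot.
        self.head = nn.Conv2d(100, 4, kernel_size=1)
        for m in self.modules():  # Xavier weight initialisation
            if isinstance(m, nn.Conv2d):
                nn.init.xavier_uniform_(m.weight)

    def forward(self, x):  # x: (batch, 17, 1, 1)
        out = self.head(self.backbone(x)).flatten(1)  # (batch, 4)
        beta_dd = out[:, 0]
        # Sigmoid constrains the three uncertainty outputs to (0, 1).
        sigma_dd, sigma_vy, sigma_psidot = torch.sigmoid(out[:, 1:4]).unbind(dim=1)
        return beta_dd, sigma_dd, sigma_vy, sigma_psidot
```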
### _Model-based Component_
A UKF based on a non-linear single-track model, with tyre axle forces computed by the Dugoff tyre model, is chosen as the model component of this study [8], see Fig. 3. The Dugoff tyre model parameters (tyre cornering stiffness, peak friction coefficient and velocity reduction friction coefficient) are optimised offline using experimental skidpad measurements [30, 32, 35]. The implemented optimisation is a genetic algorithm due to its efficiency with a non-linear and non-convex cost function. The vehicle's symmetry is exploited to merge the left and right wheels into a single central axle, which emulates the entire vehicle's behaviour. This model considers only the in-plane dynamics, so the lateral weight transfer, roll and pitch dynamics are ignored. The static weight distribution is considered together with the effect of steady-state longitudinal weight transfer on the normal forces on the front and rear axle. A UKF is implemented for its superior estimation accuracy when the vehicle behaves strongly non-linearly. The vehicle states (\(x_{s}\)) are \(V_{y}\) and \(\dot{\psi}\), while the vehicle inputs (\(u_{v}\)) are \(V_{x}\) and \(\delta\). The stochastic process model is responsible for predicting the next time steps of the states according to the following equation:
\[\dot{x}_{s}\left(t\right)=f\left(x_{s}\left(t\right),u_{v}\left(t\right) \right)+\omega\left(t\right) \tag{1}\]
where \(f\left(x_{s}\left(t\right),u_{v}\left(t\right)\right)\) is the non-linear single track vehicle model, eq. 2, and \(\omega\) is the vector containing the process noise parameters [\(\sigma_{V_{y}}\), \(\sigma_{\dot{\psi}}\)].
\[f\left(x_{s},u_{v}\right)=\begin{cases}\dot{V}_{y}=\frac{1}{m}\left(F_{yf}\left(x_{s},u_{v}\right)\cos\left(\delta\right)+F_{yr}\left(x_{s},u_{v}\right)\right)-V_{x}\dot{\psi}\\ \ddot{\psi}=\frac{1}{I_{zz}}\left(l_{f}F_{yf}\left(x_{s},u_{v}\right)\cos\left(\delta\right)-l_{r}F_{yr}\left(x_{s},u_{v}\right)\right)\end{cases} \tag{2}\]
where \(m\) (\(1970\,\mathrm{kg}\)) is the vehicle mass, \(I_{zz}\) (\(3498\,\mathrm{kgm}^{2}\)) is the vehicle moment of inertia about the vertical axis, \(l_{f}\) (\(1.47\,\mathrm{m}\)) and \(l_{r}\) (\(1.41\,\mathrm{m}\)) are, respectively, the distances of the front and rear axles from the vehicle CoG. \(F_{yf}\) and \(F_{yr}\) are, respectively, the lateral tyre forces at the front and rear axles. The process noise parameters, \(\sigma_{V_{y}}\) and \(\sigma_{\dot{\psi}}\), are assumed Gaussian and uncorrelated, and they capture the uncertainties due to:
* The mismatch between the physical and modelled vehicle behaviour.
* The discretisation error.
* The various operational environments in which the sensors operate.
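A minimal sketch of the single-track process model of eq. 2, discretised with a zero-order hold at \(100\,\mathrm{Hz}\), is shown below. The linear axle-force stand-in and its cornering stiffnesses are illustrative assumptions; the paper uses the offline-optimised Dugoff model instead.

```python
import numpy as np

M, IZZ, LF, LR = 1970.0, 3498.0, 1.47, 1.41  # vehicle parameters from the text
DT = 0.01                                     # 100 Hz sampling
CF, CR = 1.2e5, 1.0e5                         # assumed axle cornering stiffnesses [N/rad]

def axle_forces(x_s, u_v):
    """Linear stand-in for the optimised Dugoff axle forces."""
    v_y, psi_dot = x_s
    v_x, delta = u_v
    alpha_f = delta - np.arctan((v_y + LF * psi_dot) / v_x)  # front slip angle
    alpha_r = -np.arctan((v_y - LR * psi_dot) / v_x)         # rear slip angle
    return CF * alpha_f, CR * alpha_r

def process_model(x_s, u_v):
    """x_s = [V_y, psi_dot], u_v = [V_x, delta]; returns the next-step states."""
    v_y, psi_dot = x_s
    v_x, delta = u_v
    f_yf, f_yr = axle_forces(x_s, u_v)
    v_y_dot = (f_yf * np.cos(delta) + f_yr) / M - v_x * psi_dot
    psi_ddot = (LF * f_yf * np.cos(delta) - LR * f_yr) / IZZ
    return np.array([v_y + DT * v_y_dot, psi_dot + DT * psi_ddot])
```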
The filter performance is strongly connected with the process noise parameters, so these are initially tuned using a two-stage Bayesian optimisation (TSBO) [59]. During the estimation, they are computed online by the CNN. This is only possible thanks to the mutualistic relationship between the CNN and the UKF.
The observation model is responsible for comparing the process model predictions with the available measurements, according to the following equation.
\[y_{m}\left(t\right)=g\left(x_{s}\left(t\right),u_{v}\left(t\right)\right)+v \left(t\right) \tag{3}\]
where \(g\left(x_{s}\left(t\right),u_{v}\left(t\right)\right)\) is the measurement vehicle model, eq. 4, and \(v\) is the vector containing the observation noise parameters [\(\sigma_{a_{y\,me}}\), \(\sigma_{\dot{\psi}_{me}}\), \(\sigma_{F_{yf\,me}}\), \(\sigma_{F_{yr\,me}}\), \(\sigma_{DD}\)].
\[g\left(x_{s},u_{v}\right)=\begin{cases}a_{y\,me}=\frac{1}{m}\left(F_{yf}\left(x_{s},u_{v}\right)\cos\left(\delta\right)+F_{yr}\left(x_{s},u_{v}\right)\right)\\ \dot{\psi}_{me}=\dot{\psi}\\ F_{yf\,me}=F_{yf}\left(x_{s},u_{v}\right)\\ F_{yr\,me}=F_{yr}\left(x_{s},u_{v}\right)\\ \beta_{DD}=\arctan\left(\frac{V_{y}}{V_{x}}\right)\end{cases} \tag{4}\]
where \(a_{y\,me}\), \(\dot{\psi}_{me}\), \(F_{yf\,me}\) and \(F_{yr\,me}\) are the vehicle measurements, and \(\beta_{DD}\) is the pseudo-measurement, corresponding to the CNN's output. The observation noise parameters \(\sigma_{a_{y\,me}}\) (\(0.033\,\mathrm{m/s}^{2}\)), \(\sigma_{\dot{\psi}_{me}}\) (\(0.001\,\mathrm{rad/s}\)), \(\sigma_{F_{yf\,me}}\) (\(26\,\mathrm{N}\)) and \(\sigma_{F_{yr\,me}}\) (\(56\,\mathrm{N}\)) are the uncertainties of the vehicle measurements, and they compensate for the sensor noises. They are tuned by a statistical analysis of the vehicle sensor measurements, which consists of computing the standard deviation of the low-pass measured signal when the steering angle is null and the longitudinal velocity is constant [59]. The variable \(\sigma_{DD}\) is the level of distrust assigned to the pseudo-measurement \(\beta_{DD}\) provided by the CNN. The level of distrust computed by the CNN differs from a classic uncertainty measurement because it corresponds to the uncertainty of the pseudo-measurement scaled to match the weight of the noise parameters.
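A sketch of the observation model of eq. 4 follows, reusing the constants and the linear axle-force stand-in from the process-model sketch above; the diagonal noise layout is an illustrative assumption, with the fifth entry supplied online by the CNN.

```python
import numpy as np

def observation_model(x_s, u_v):
    """Predicted measurements [a_y, psi_dot, F_yf, F_yr, beta] of eq. 4."""
    v_y, psi_dot = x_s
    v_x, delta = u_v
    f_yf, f_yr = axle_forces(x_s, u_v)   # from the process-model sketch above
    a_y = (f_yf * np.cos(delta) + f_yr) / M
    beta = np.arctan(v_y / v_x)          # compared against the CNN's beta_DD
    return np.array([a_y, psi_dot, f_yf, f_yr, beta])

def observation_noise(sigma_dd):
    """Diagonal covariance: fixed sensor variances plus the CNN's distrust."""
    return np.diag([0.033**2, 0.001**2, 26.0**2, 56.0**2, sigma_dd**2])
```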
The observation model is used to perform the observability analysis. A sufficient condition to develop a UKF observer
Fig. 3: Single-track vehicle model.
is the full rank condition of the observation model. A full rank equal to five is obtainable for all the operating regions in which \(\delta\neq k\pi,\;\forall k\in\mathbf{Z}\) and \(V_{x}\neq 0\), where \(\mathbf{Z}\) is the set of all the integers. The second condition is always respected because the measurements are considered only when \(V_{x}\) is higher than \(5\,\mathrm{m/s}\). The steering angle is always inside the range \(|\delta|\leq\pi/2\), so the only realistic unobservability happens when \(\delta=0\). However, the vehicle sideslip angle is only relevant for the lateral dynamics excited when \(\delta\neq 0\), so this unobservability is not a practical limitation.
### _Training Phase_
The UKF-informed CNN is trained in a supervised way using a labelled dataset. The training is split into two phases: pre-training and end-to-end learning.
#### III-C1 Pre-training
It consists of the back-propagation algorithm applied only to the CNN to speed up and stabilise the following end-to-end training phase. The sum of the \(\sigma_{V_{y}}\) and \(\sigma_{\dot{\psi}}\) MSE losses and of the \(\beta_{DD}\), \(\sigma_{DD}\) Gaussian negative log-likelihood loss constitutes the cost function. The MSE loss functions (\(MSE_{L,\,\sigma_{V_{y}}}\) and \(MSE_{L,\,\sigma_{\dot{\psi}}}\)) are defined as:
\[\begin{split} MSE_{L,\,\sigma_{V_{y}}}&=\frac{1}{N}\sum_{i=1}^{N}\left(\hat{\sigma}_{V_{y},\,i}-\sigma_{V_{y},\,i}\right)^{2}\\ MSE_{L,\,\sigma_{\dot{\psi}}}&=\frac{1}{N}\sum_{i=1}^{N}\left(\hat{\sigma}_{\dot{\psi},\,i}-\sigma_{\dot{\psi},\,i}\right)^{2}\end{split} \tag{5}\]
where \(N\) is the size of the mini-batch (256), \(\hat{\sigma}_{V_{y}}\) (\(0.0007\,\mathrm{m/s}\)) and \(\hat{\sigma}_{\hat{\psi}}\) (\(0.002\,\mathrm{rad/s}\)) are the initial process model uncertainties tuned by the TSBO for the model-based approach. These losses steer the CNN to predict the process model uncertainties with a meaningful order of magnitude. The Gaussian negative log-likelihood loss function (\(NLL_{L,\,\beta_{DD}}\)) is represented in the following equation:
\[\begin{split} NLL_{L,\,\beta_{DD}}&=\\ &=\frac{1}{2}\sum_{i=1}^{N}\left(\log\left(\max\left(\sigma_{DD \,i},\epsilon\right)\right)+\frac{\left(\beta_{me,\,i}-\beta_{DD,\,i}\right)^{ 2}}{\max\left(\sigma_{DD\,i},\epsilon\right)}\right)\end{split} \tag{6}\]
where \(\epsilon\) (\(10^{-6}\)) is a constant term for stability, and \(\beta_{me}\) is the sideslip angle ground truth. The gradient used by the optimiser to update the CNN weights is not influenced by \(\sigma_{DD}\). The pre-training leads the CNN's weights to estimate the correct \(\beta\) with a meaningful \(\sigma_{DD}\); these will be fine-tuned during the following end-to-end training phase. The sideslip angle ground truth is measured through the Corrsys-Datron optical speed sensor installed in the vehicle's front bumper. The sensor reference system is moved to correspond with the vehicle CoG. The measurement is filtered using a zero-phase low-pass filter (bandwidth \(5\,\mathrm{Hz}\)) because the training phase is sensitive to extreme outliers and noisy references [6]. The cost function is minimised by a mini-batch stochastic gradient descent algorithm based on a standard ADAM optimiser with a learning rate of 0.0008. The training procedure's user-defined parameters are optimised through Bayesian optimisation.
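A sketch of the pre-training cost function (eqs. 5 and 6) is given below; tensor names are illustrative, and the paper's additional detail of blocking the optimiser gradient through \(\sigma_{DD}\) is omitted here for brevity.

```python
import torch

SIGMA_VY_0, SIGMA_PSIDOT_0 = 0.0007, 0.002  # TSBO-tuned initial uncertainties
EPS = 1e-6                                   # stability constant of eq. 6

def pretrain_loss(beta_dd, sigma_dd, sigma_vy, sigma_psidot, beta_me):
    # Eq. 5: steer the process noise outputs towards the TSBO values.
    mse_vy = torch.mean((SIGMA_VY_0 - sigma_vy) ** 2)
    mse_psidot = torch.mean((SIGMA_PSIDOT_0 - sigma_psidot) ** 2)
    # Eq. 6: Gaussian negative log-likelihood of the pseudo-measurement.
    var = torch.clamp(sigma_dd, min=EPS)
    nll = 0.5 * torch.sum(torch.log(var) + (beta_me - beta_dd) ** 2 / var)
    return mse_vy + mse_psidot + nll
```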
#### III-C2 End-to-end learning
It creates a mutualistic relationship between the model-based and data-driven approaches. The UKF is treated as a computation graph unrolled through time, so the CNN-UKF is discriminatively trained over the entire mini-batch length and not on a single step. The procedure to compute the loss function gradient is close to [55], but in the proposed study a UKF is implemented rather than a linear Kalman filter. The first step is the computation of a loss function (\(L\left(\theta\right)\)) that connects the output of the UKF-CNN structure (\(\beta\)) with the available ground truth (\(\beta_{me}\)). The training phase minimises the loss function error between the estimated and the measured sideslip angle, allowing to correctly estimate \(\beta\) and all the parameters influencing it, i.e. \(\sigma_{DD}\), \(\sigma_{\dot{\psi}}\) and \(\sigma_{V_{y}}\). The loss function depends on the CNN's weights (\(\theta\)) and is given by the following equation:
\[L\left(\theta\right)=\frac{1}{N}\sum_{i=1}^{N}\left(\beta_{me,\,i}-\beta_{DD, \,i}\right)^{2}+\frac{1}{N}\sum_{i=1}^{N}\left(\beta_{me,\,i}-\beta_{i}\right) ^{2} \tag{7}\]
where \(\beta_{DD}\) is the output of the CNN and \(\beta_{me}\) is the measured sideslip angle ground truth. The first part of the loss function, \(\frac{1}{N}\sum_{i=1}^{N}\left(\beta_{me,\,i}-\beta_{DD,\,i}\right)^{2}\), involves just one CNN output and is not affected by the UKF. It helps the CNN to estimate the correct pseudo-measurement \(\beta_{DD}\). The second part of the loss function, \(\frac{1}{N}\sum_{i=1}^{N}\left(\beta_{me,\,i}-\beta_{i}\right)^{2}\), is affected by all four CNN outputs and by the UKF. The second step to train the proposed UKF-CNN is the computation of the gradient of the loss function with respect to the CNN weights (\(\nabla_{\theta}L\left(\theta\right)\)). This is performed following the typical BPTT algorithm. Moving backwards from the loss function, \(\nabla_{\theta}L\left(\theta\right)\) is computed by a recursive computation of the loss function gradient with respect to the vehicle states from \(t-1\) to \(t\) according to:
\[\frac{\partial L}{\partial x_{s,\,t-1}}=\frac{\partial ukf_{t-1}}{\partial x_{s,\,t-1}}\frac{\partial L}{\partial ukf_{t-1}}+\frac{\partial x_{s,\,t}}{\partial x_{s,\,t-1}}\frac{\partial L}{\partial x_{s,\,t}} \tag{8}\]
where \(ukf_{t-1}\) represents all the functions that describe the UKF algorithm, i.e. the process model, the observation model and the Kalman gain computation. The output of the UKF algorithm depends on the vehicle states, inputs, measurements and on the CNN outputs. The gradient computation continues by applying the chain rule to eq. 8 and moving backwards, computing the derivative with respect to each CNN weight, as for a normal NN. The training is based on a mini-batch stochastic gradient descent algorithm (mini-batch size of 256) with a standard ADAM optimiser and a learning rate of 0.0008.
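The sketch below illustrates one end-to-end training step; ukf_step is an assumed differentiable implementation of the predict/update cycle written with torch operations, and all names and shapes are illustrative, so BPTT flows through the filter exactly as described above.

```python
import torch

def end_to_end_step(cnn, ukf_step, batch, x0, P0, optimiser):
    """One training step over an unrolled trajectory of length T."""
    inputs, beta_me = batch          # inputs: (T, 17), beta_me: (T,)
    x_s, P = x0, P0
    beta_filt, beta_dd_all = [], []
    for t in range(inputs.shape[0]):
        beta_dd, sigma_dd, sigma_vy, sigma_psidot = cnn(inputs[t].view(1, 17, 1, 1))
        # CNN outputs enter the UKF as pseudo-measurement and noise parameters.
        x_s, P, beta_t = ukf_step(x_s, P, inputs[t], beta_dd,
                                  sigma_dd, sigma_vy, sigma_psidot)
        beta_filt.append(beta_t)
        beta_dd_all.append(beta_dd)
    beta_filt = torch.stack(beta_filt).squeeze()
    beta_dd_all = torch.stack(beta_dd_all).squeeze()
    loss = (torch.mean((beta_me - beta_dd_all) ** 2)   # eq. 7, first term
            + torch.mean((beta_me - beta_filt) ** 2))  # eq. 7, second term
    optimiser.zero_grad()
    loss.backward()   # BPTT through the unrolled UKF and the CNN
    optimiser.step()
    return loss.item()
```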
## IV Experiment Setup
This section describes how the experiments have been conducted and how the proposed approach has been compared to the baseline methods.
### _Dataset_
The experiments have been conducted at the Automotive Testing Papenburg GmbH with the test platform based on a BMW Series 545i. The vehicle was instrumented with the standard IMU, wheel force transducers and intelligent bearings
for each wheel, GPS and a Corrsys-Datron optical sensor to measure the sideslip angle (measurement accuracy of \(\pm 0.2^{\circ}\)). The high-end optical speed sensor is used to measure the ground truth. The intelligent bearings demonstrate a similar accuracy to the wheel force transducers [60], the most common sensor technique in research for tyre force measurement. Thus, the tyre forces in the training dataset are taken from the wheel force transducers, making the paper easier to reproduce. The dataset contains 216 manoeuvres corresponding to two hours of driving and consists of standard vehicle dynamics manoeuvres, e.g. double lane change, slalom, random steer, J-turn, spiral, braking in the turn, and steady-state circular tests, together with recorded laps at the handling track. All manoeuvres were driven on dry asphalt with tyres inflated according to the manufacturer's specifications. The bank angle and the road slope were negligible, and the friction coefficient was approximately constant. Two different electronic stability control settings (On, Off) were used. All the measurements were recorded at \(100\,\mathrm{Hz}\), the standard frequency for vehicle state estimation. A statistical outlier removal has been applied to remove extreme outliers. However, particular attention was paid to not deleting edge-case measurements, which are the most important data. Furthermore, all the manoeuvres were manually inspected to check the outlier removal efficacy. The measurements are considered when \(V_{x}\) is higher than \(5\,\mathrm{m/s}\) and are filtered using a low-pass zero-phase filter with a cut-off frequency of \(5\,\mathrm{Hz}\) based on a finite impulse response technique [6].
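A sketch of the described preprocessing, zero-phase FIR low-pass filtering at \(5\,\mathrm{Hz}\) and masking of samples with \(V_{x}\leq 5\,\mathrm{m/s}\), is shown below; the FIR order of 101 taps is an illustrative assumption.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

FS, F_CUT = 100.0, 5.0                       # sampling and cut-off frequencies [Hz]
FIR = firwin(numtaps=101, cutoff=F_CUT, fs=FS)

def preprocess(signal, v_x):
    """Zero-phase low-pass filter, then keep only samples with V_x > 5 m/s."""
    filtered = filtfilt(FIR, [1.0], signal)  # forward-backward: zero phase lag
    return filtered[v_x > 5.0]
```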
The log distribution of the sideslip angle and lateral acceleration is represented in Fig. 4. The lateral acceleration is spread almost equally in the range \([-10,\,10]\,\mathrm{m/s^{2}}\). In contrast, the sideslip angle measurements are mainly distributed in the range \([-3,\,3]\) deg. The latter is a common phenomenon because it is challenging to perform manoeuvres with a high sideslip angle, even when the vehicle has a very high lateral acceleration. Especially in dry road conditions, only a professional driver can induce a high sideslip angle.
A second dataset is selected from the same measurements. It will be referred to as the limited dataset because it only contains measurements in which the vehicle's lateral acceleration satisfies \(|a_{y}|\leq 7\,\mathrm{m/s^{2}}\). This simulates the common situation in which, due to cost and complexity, a large number of manoeuvres is recorded with the vehicle driven at extreme, but not limit, handling behaviour. Such a situation is common in the automotive field because, at the handling limits, the driver can easily lose control of the vehicle. Thus, the limited dataset will be used to analyse the robustness and generalisation capabilities of the proposed hybrid approach.
Both datasets are split into three sub-sets: training (\(75\,\%\)), validation (\(15\,\%\)) and test (\(10\,\%\)). The test set contains the same manoeuvres for both the full and limited datasets. It consists of manoeuvres representing the entire driving behaviour, but more focus is paid to highly non-linear situations. It includes 23 manoeuvres: two braking in the turn, two skidpad, five J-turn, four slalom, four lane change, two random steer, three spiral and one lap of the track.
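The sketch below shows one way to realise such a manoeuvre-level split; the grouping by manoeuvre identifier and the fixed seed are illustrative assumptions.

```python
import numpy as np

def split_manoeuvres(manoeuvre_ids, seed=0):
    """Split whole manoeuvres into train/validation/test (75/15/10)."""
    rng = np.random.default_rng(seed)
    ids = rng.permutation(np.asarray(manoeuvre_ids))
    n = len(ids)
    n_train, n_val = int(0.75 * n), int(0.15 * n)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```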
### _Key Performance Indicators_
The performance of the different approaches is assessed through four key performance indicators (KPIs), which are commonly used in sideslip angle estimation [30, 8, 35].
* The MSE assesses the overall estimation performance.
* The non-linear MSE (MSE\({}_{\text{nl}}\)) corresponds to the MSE computed only when \(|a_{y}|\geq 4\,\mathrm{m/s^{2}}\). It measures the estimation performance when the vehicle behaves non-linearly.
* The absolute maximum error (ME) measures the worst estimation performance.
* The non-linear ME (ME\({}_{\text{nl}}\)) measures the worst estimation performance in the case of non-linear vehicle behaviour.
The non-linear KPIs analyse the hybrid approach performance in the most critical scenarios. The MSE and MSE\({}_{\text{nl}}\) are used to evaluate the estimation accuracy, while ME and ME\({}_{\text{nl}}\) are used to assess temporary high errors in the estimation. The latter is relevant to assess whether the estimation is always coherent with the physical vehicle behaviour.
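The four KPIs can be computed directly from the estimation error, as in the sketch below; array names are illustrative, with \(\beta\) in degrees and \(a_{y}\) in \(\mathrm{m/s^{2}}\).

```python
import numpy as np

def kpis(beta_est, beta_me, a_y):
    """MSE, MSE_nl, ME and ME_nl of a sideslip angle estimate."""
    err = beta_est - beta_me
    nl = np.abs(a_y) >= 4.0                 # non-linear operating region
    return {
        "MSE": np.mean(err ** 2),
        "MSE_nl": np.mean(err[nl] ** 2),
        "ME": np.max(np.abs(err)),
        "ME_nl": np.max(np.abs(err[nl])),
    }
```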
### _Baseline Methods_
The proposed hybrid approach is compared with the state-of-art model-based, data-driven and hybrid approaches. All the considered baselines are adapted and optimised to use the same sensor setup and dataset, ensuring an objective and fair comparison.
The model-based approach is a UKF based on a single-track model with tyre force measurements, as presented in [8]. The process noise parameters are tuned with the TSBO, and the observation noise parameters associated with the tyre force measurements are adapted online to enhance the observer's performance. The adaptation reduces the level of noise coupled with the tyre force measurements, which increases the Kalman gain when the vehicle behaves non-linearly. Thus, the effect of the Kalman gain is magnified during manoeuvres at the handling limit. Otherwise, a magnified Kalman gain when the vehicle behaves linearly could cause the vehicle states to follow the measurement sensor noise. The adaptation is triggered with a hysteresis loop to avoid the chattering phenomenon.
The data-driven approach is a FFNN that uses IMU and tyre force measurements as inputs, as evaluated in [8]. A simple FFNN reaches a better performance than a RNN when the tyre force measurements are included in the input set because the RNN prediction power is insufficient to compensate for
Fig. 4: Log distribution of sideslip angle and lateral acceleration. Each bin corresponds to \(1\,\mathrm{deg}\) and \(1\,\mathrm{m/s^{2}}\).
the higher number of parameters to be trained. The NN is formed by two hidden layers with 250 and 125 neurons, respectively, and ReLU activation functions. It uses dropout regularisation (0.2) and Xavier initialisation to avoid overfitting. An early stopping method with patience equal to 20 is applied for the same reason. The MSE is the loss function, minimised by a mini-batch stochastic gradient descent algorithm based on a standard ADAM optimiser with a learning rate of 0.001. The mini-batch size is 1024. The training procedure's user-defined parameters are optimised through Bayesian optimisation.
The hybrid approach is a deep ensemble-UKF (DE-UKF) [9] adapted to maximise the estimation performance on a dataset with tyre force measurements. The DE is formed by 20 FFNNs trained independently on the same dataset. The FFNNs' different estimations are combined by model averaging. Hence, the final \(\beta_{DD}\) is the mean of the FFNN estimations, and \(\sigma_{DD}\) is the variance of the different model estimations. Each FFNN is trained using a Gaussian negative log-likelihood cost function optimised through mini-batch stochastic gradient descent based on an ADAM optimiser with a learning rate of 0.0008. Each FFNN is trained for 30 epochs. The DE relies on the stochasticity of neural network training, which allows every FFNN to converge to a different set of parameters. However, the estimation accuracy is low when all models predict incorrectly, and there is no guarantee that \(\sigma_{DD}\) will be high. This especially happens when the error is in the low sideslip angle range because the NNs' estimations tend to be closer. A high level of distrust suggests that the UKF should not rely on the data-driven pseudo-measurement but trust the estimation of the UKF process model. Vice-versa, when the level of distrust is low, the UKF considers the neural network estimation reliable. \(\sigma_{DD}\) must be scaled before being used by the UKF because the output of the DE does not match the weight of the other noise parameters. Otherwise, the UKF puts too much trust in \(\beta_{DD}\). The scaling is based on an exponential function (eq. 9) which differentiates approximately similar \(\sigma_{DD}\) values.
\[\sigma_{DD,\,sc}=10^{p_{1}}\sigma_{DD}^{p_{2}} \tag{9}\]
where \(p_{1}\) (-4.2690 for the full dataset and -1.4353 for the limited one) and \(p_{2}\) (0.7901 for the full dataset and 1.465 for the limited one) are two scaling parameters tuned using a Bayesian Optimisation. The values of \(p_{1}\) and \(p_{2}\) change according to the dataset because they strongly influence the DE's estimation performance. If \(p_{1}\) and \(p_{2}\) are not re-tuned for the limited dataset, the UKF will put too much trust in the DE, even if it lacks performance.
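A sketch of the ensemble aggregation and of the scaling of eq. 9 is given below, using the \(p_{1}\), \(p_{2}\) values quoted for the full dataset; the array layout is an illustrative assumption.

```python
import numpy as np

P1, P2 = -4.2690, 0.7901   # scaling parameters for the full dataset

def de_pseudo_measurement(ffnn_outputs):
    """ffnn_outputs: (20, T) sideslip estimates from the 20 FFNNs."""
    beta_dd = ffnn_outputs.mean(axis=0)            # model averaging
    sigma_dd = ffnn_outputs.var(axis=0)            # ensemble spread
    sigma_dd_scaled = 10.0 ** P1 * sigma_dd ** P2  # eq. 9
    return beta_dd, sigma_dd_scaled
```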
## V Results
This section demonstrates the performance of the proposed approach. Subsection V-A analyses how accurate the proposed approach is with respect to the baselines when it is trained using a full dataset. Subsection V-B shows the results of the robustness analysis when only a limited dataset is available. This demonstrates that the data-driven approach is highly influenced by the amount and quality of the data.
### _Full Dataset Results_
The CNN-UKF, the DE-UKF and the data-driven approach have been trained using the full dataset.
The overall comparison is presented in Table II. Both hybrid approaches perform better than the model-based and data-driven approaches considering all four KPIs. This highlights the importance of the hybrid architecture for vehicle sideslip angle estimation. For instance, the model-based approach has a higher MSE and MSE\({}_{\text{nl}}\) than the data-driven approach but a lower average ME and ME\({}_{\text{nl}}\). The hybrid approaches have the same estimation accuracy (MSE and MSE\({}_{\text{nl}}\)) as the data-driven approach without the higher average ME. The reason is that in a hybrid approach, the data-driven estimation is always validated through the model-based approach.
It can be seen that the CNN-UKF outperforms the three other approaches for all the proposed KPIs. However, it does not have the same benefits in magnitude for all of them. The overall MSE and ME of the DE-UKF and CNN-UKF are comparable. The minor improvements for the linear vehicle behaviour are respectively \(1.15\,\%\) and \(0.20\,\%\) in favour of the CNN-UKF. However, when the performance is evaluated while the vehicle behaves non-linearly, the CNN-UKF strongly outperforms the DE-UKF with an improvement of \(24.84\,\%\) for the MSE\({}_{\text{nl}}\) and \(5.60\,\%\) for the ME\({}_{\text{nl}}\). A possible explanation is that the end-to-end training informs the CNN about the vehicle dynamics, compensating for the lower amount of data in this operating condition.
On the contrary, the DE is not aware of the physical vehicle behaviour during training, so it is subject to a decay in performance where the dataset has fewer samples. The DE becomes aware of the UKF performance only while tuning the level of distrust scaling parameters. Furthermore, the process model noise parameters are adapted online in the CNN-UKF, allowing the UKF to better accommodate the mismatches between the physical and modelled vehicle behaviour.
Similar conclusions can be stated from the log distribution of the sideslip angle error in the non-linear operating range, see Fig. 5. The data-driven and the hybrid approaches have a similar amount of \(\beta\) error samples in the range \([-1.5,\,1.5]\) deg. In contrast, the model-based approach suffers from the lower accuracy of the vehicle model in the non-linear operating region. However, the data-driven approach and partially the DE-UKF are more prone to high estimation errors (\(\geq\)\(1.5\,\mathrm{deg}\)) than the model-based and CNN-UKF. The latter outperforms all other approaches and has the \(\beta\) error mean closest to zero and the lowest standard deviation. Hence, a UKF coupled with a data-driven approach has the same performance as
a data-driven approach in a low error range, but it reduces the sporadic high errors of a purely data-driven approach. Furthermore, the end-to-end training and the adaptation of the process noise parameters allow the CNN-UKF to maximise the hybrid capability, especially when the vehicle \(|a_{y}|>4\,\mathrm{m/s^{2}}\).
Fig. 6 analyses how the estimation performance changes for different manoeuvres. The model-based approach has a weak accuracy, especially in braking-in-the-turn, J-turn and skidpad tests. The braking-in-the-turn involves a coupling between the longitudinal and lateral dynamics, which is not modelled in the used single-track vehicle model. In a J-turn manoeuvre, the vehicle is driven at the limits of handling, where the mismatches between the physical and modelled vehicle behaviour are higher. For the skidpad tests, the explanation is that it is a quasi-steady-state manoeuvre, so the vehicle yaw acceleration is almost null, and the difference between estimated and measured tyre forces becomes essential for the \(\beta\) estimation. The tyre model is one of the most significant uncertainty sources in the model-based approach. The data-driven approach almost always behaves better than the model-based one but worse than the hybrid approaches in terms of estimation accuracy. However, it outperforms the DE-UKF in a spiral manoeuvre, and a possible explanation is that the DE-UKF puts too much trust in the UKF process model. The CNN-UKF outperforms all the other approaches in five out of seven manoeuvres. Particularly relevant is the improvement in the slalom and spiral manoeuvres. The slalom has the highest number of sideslip angle peaks (Fig. 7), which are the most difficult moments in which to estimate the sideslip. Spiral manoeuvres are particularly challenging because they include an extra turn with respect to the J-turn.
Fig. 7 shows the sideslip angle estimation in a slalom manoeuvre at the handling limits. All four approaches provide a reliable estimation, but the CNN-UKF outperforms the other approaches when the vehicle reaches a \(\beta\) peak of \(10\,\mathrm{deg}\) at around \(14\,\mathrm{s}\). This is a typical situation where a correct estimation of \(\beta\) is essential to help the vehicle control system maintain vehicle stability. Thus, an improved estimation in this condition is particularly relevant for safety. The already mentioned high non-linearities reduce the accuracy of the model-based approach. The data-driven approach lacks accuracy at \(15\,\mathrm{s}\) due to the few data in the training set describing this vehicle's operation point. The DE-UKF improves the estimation performance between \(5\,\mathrm{s}\) and \(13\,\mathrm{s}\) combining the pros of the model-based and data-driven approach, but it lacks performance at around \(14\,\mathrm{s}\). CNN-UKF improves the estimation accuracy not only in the range of \([5,\;10]\) s but also in the highest peak at \(14\,\mathrm{s}\), as can be observed in Fig. 8.
Fig. 8a shows the \(\beta\) and \(\beta_{DD}\) for the hybrid approaches. The CNN-UKF and DE-UKF \(\beta_{DD}\)s lack accuracy between \(12\,\mathrm{s}\) and \(15\,\mathrm{s}\), but the CNN-UKF \(\beta\) is accurate because the UKF correctly weights the UKF process model's information with the NN's pseudo-measurement. Vice-versa, the UKF of the DE-UKF puts too much trust in \(\beta_{DD}\). When the \(\beta_{DD}\) error rises, the corresponding level of distrust (Fig. 8c) also grows. The CNN-UKF and DE-UKF \(\sigma_{DD}\)s have the same order of magnitude in normal driving, but the one related to the CNN-UKF rises much more than that of the DE-UKF. This broader range makes the proposed approach much less confident in the NN when its output is incorrect. This is not possible for the DE-UKF due to its training process. The DE-UKF does not have end-to-end training, so its \(\sigma_{DD}\) cannot match the weight of the other UKF noise parameters. The DE-UKF \(\sigma_{DD}\) non-linear scaling only partially compensates for this issue. Fig. 8c clearly demonstrates how the CNN-UKF distrust level range is \(\left[10^{-3},\,1\right]\), while the range for the DE-UKF is only \(\left[10^{-3},\,10^{-2}\right]\).
Another explanation for the better performance of the CNN-UKF is related to the online adaptation of the process noise parameters. The adaptive parameters allow the UKF to know the current mismatches between the modelled and physical vehicle behaviour. The process noise parameters of the DE-UKF and model-based approach are constant, so they correspond to a trade-off between the different driving conditions. Vice-versa, the CNN-UKF relies on optimally tuned process noise parameters at every instant. Fig. 8b and 8d show the values of \(\sigma_{V_{y}}\) and \(\sigma_{\dot{\psi}}\), respectively. As expected from the literature [61], both increase with the growth of vehicle non-linearities. This further
Fig. 5: Distribution of the sideslip angle error when the vehicle \(|a_{y}|>4\,\mathrm{m/s^{2}}\) for every approach in the test set. Each bin is \(0.25\,\mathrm{deg}\) wide. The x represents the mean and the line between the vertical symbols (\(|-|\)) is the standard deviation of the sideslip angle error.
Fig. 6: Sideslip angle \(\text{MSE}_{\text{al}}\) comparison for every group of manoeuvres.
Fig. 7: Slalom manoeuvre. Comparison of the sideslip angle estimation between all four approaches.
proves that the CNN-UKF behaves according to the physical vehicle motion. \(\sigma_{V_{y}}\) has a peak at \(14\,\mathrm{s}\), corresponding to the vehicle's last right turn, where the rear inner tyre even detaches from the ground due to the aggressiveness of the manoeuvre. This extreme condition is created by a transient lateral load transfer (not modelled), which strongly influences lateral tyre force production, resulting in a significant \(V_{y}\) model mismatch. Moreover, the effect of the front axle longitudinal force (\(F_{xf}\)) on the lateral velocity is not modelled \(\left(\frac{F_{xf}\sin(\delta)}{m}\right)\). Overall, the constant and the adapted process noise parameters have the same magnitude. Still, the one associated with the CNN-UKF is generally bigger (apart from \(13\,\mathrm{s}\) to \(14\,\mathrm{s}\)). The reason is that the constant \(\sigma_{V_{y}}\) was optimised considering also less aggressive manoeuvres, where the vehicle model is more reliable.
The process noise parameter \(\sigma_{\dot{\psi}}\) rises by two orders of magnitude when the vehicle has a high sideslip angle. At the same time, when \(\beta\) is low, the adapted \(\sigma_{\dot{\psi}}\) is slightly lower than the constant process noise parameter. A possible explanation is that the mismatches of the modelled \(\dot{\psi}\) are higher than those of \(V_{y}\). The meaningful adaptation of the process noise parameters shows that the CNN-UKF has insight into the physics of vehicle dynamics and can compensate for model mismatch online.
Similar conclusions are obtained from the spiral manoeuvre represented in Fig. 9. Here, the CNN-UKF approach outperforms the accuracy of all three other approaches, particularly from \(5\,\mathrm{s}\) to \(13\,\mathrm{s}\). The performance of the CNN-UKF is similar to the combination of the best estimates of the data-driven approach (from \(4\,\mathrm{s}\) to \(6\,\mathrm{s}\)) and of the model-based approach (from \(6\,\mathrm{s}\) to \(10\,\mathrm{s}\)).
The test set also contains a recording of an entire lap of a racing circuit, where the effect of combined slip is maximal. Fig. 10b shows the vehicle's lateral and longitudinal acceleration, and it highlights how the driver pushes the vehicle to the limit of handling in all the corners, see \([1,\,7]\)\(\mathrm{s}\), \([16,\,19]\)\(\mathrm{s}\) and \([23,\,29]\)\(\mathrm{s}\). The sideslip angle estimation performance of the four approaches is represented in Fig. 10a. The model-based approach has the lowest performance, especially in the range \([1,\,7]\)\(\mathrm{s}\). This result is expected because the implemented Dugoff tyre model works in pure slip conditions. A similar conclusion is also visible in the spiral manoeuvre, Fig. 9. The data-driven approach performs better than the model-based approach. However, it has its maximum absolute error at \(23\,\mathrm{s}\) and \(26\,\mathrm{s}\), where the vehicle performs cornering while braking. Both hybrid approaches have higher accuracy than the others, but the CNN-UKF has the best performance. A possible explanation is the physics-informed NN architecture, which allows the evaluation of a very accurate NN level of distrust. Fig. 10c shows the NN level of distrust for the DE-UKF and CNN-UKF. While the DE-UKF level of distrust is almost constant along the manoeuvre, the one associated with the CNN-UKF has two peaks in correspondence with the data-driven maximum errors. This allows the CNN-UKF to avoid following the high estimation error of the data-driven
Fig. 8: Slalom manoeuvre. Fig. 8a shows the sideslip angle \(\beta\) and the pseudo-measurement \(\beta_{DD}\) for the hybrid approaches. Fig. 8b and Fig. 8d show the adapted process noise parameters \(\sigma_{V_{y}}\) and \(\sigma_{\dot{\psi}}\), respectively. Fig. 8c shows the NN level of distrust \(\sigma_{DD}\) for the CNN-UKF and DE-UKF.
Fig. 9: Spiral manoeuvre. Comparison of the sideslip angle estimation between all four approaches.
component. It is further proof of how the CNN-UKF is a physics-informed NN in which the UKF and the NN mutually cooperate to improve the overall estimation of the hybrid approach.
### _Robustness Analysis using the Limited Dataset_
A sideslip angle filter must not only be accurate, but it should also be robust to the amount and quality of the data available during the training and tuning phases. Hence, to prove the robustness of the proposed hybrid approach, its estimation performance is compared with the baseline methods when they all have been trained using the limited dataset.
The overall comparison is presented in Table III. Here, the model-based approach shows the same performance as with the full dataset (see Table II) because it is not influenced by the amount of data. As expected, the other approaches show a reduced performance with the limited dataset, where the MSE is more than doubled while the ME sees a moderate increase. Now the data-driven approach has the worst performance in all four KPIs. The accuracy loss is higher than \(30\,\%\) for all the indicators, without a particular weakness in one of the proposed KPIs. An explanation is that the dataset does not have representative data of the vehicle driven with \(|a_{y}|\geq 7\,\mathrm{m/s^{2}}\), so the NN must generalise much more than with a full dataset. Significantly, the NN must reconstruct the extreme non-linear vehicle behaviour, the most complex vehicle operating region, without having representative data for these conditions.
The model-based and hybrid approaches' performance is very similar, but the DE-UKF and CNN-UKF have the best KPIs. The explanation is that the hybrid approaches exploit the best estimation accuracy of the model and the NN together. Both hybrid approaches strongly rely on the estimation of the UKF process model because they cannot put much trust in the data-driven part. However, the NN still has benefits when the vehicle behaves linearly due to the large amount of data in that range. This highlights how the hybrid approaches improve the robustness of both model-based and data-driven approaches. The hybrid approach shows a minor improvement compared with the model-based approach. However, the result is significant because it highlights how the hybrid approach is as robust as a model-based approach, even if trained with a limited training dataset. On the contrary, a purely data-driven approach is not robust when using a small training set, resulting in poor estimation accuracy.
The CNN-UKF performs slightly better than the DE-UKF in all four KPIs, mainly for the MSE and \(\mathrm{MSE_{nl}}\). The main reason is the adaptability of the process noise parameters, which copes with the change of vehicle model mismatches across the various vehicle operating points. However, the improvement in accuracy is not large enough to be considered significant (\(<5\) %).
Fig. 11 shows the sideslip angle error log distribution in
Fig. 12: J-turn manoeuvre. Comparison of the sideslip angle estimation between all four approaches using the limited dataset.
Fig. 10: Fig. 10a compares the sideslip angle estimation between four approaches in a portion of a racing track. Fig. 10b shows the recorded lateral and longitudinal acceleration of the vehicle. It highlights the combined slip situation at which the vehicle is driven. Fig. 10c shows the level of distrust in the NN for the hybrid approaches.
the non-linear operating range. All the approaches which rely on a model greatly outperform the data-driven approach. The latter has \(\beta\) error samples in the range \([-3.8,\,4]\) deg, while the other approaches have \(\beta\) errors between \([-1.8,\,1.8]\) deg. This proves that the data-driven approach is highly prone to high estimation errors when trained with a limited dataset. The performance of the model-based and hybrid approaches is very similar. They also share a nearly identical error distribution. The data-driven approach slightly outperforms the other approaches in the very low error range \([-0.3,\,0.6]\) deg. This explains why the hybrid approaches are more accurate overall than the model-based one, despite mainly relying on it.
Fig. 12 shows the sideslip angle estimation in a J-turn manoeuvre at the handling limits. The model-based and hybrid approaches behave almost identically, and all strongly outperform the purely data-driven approach. The only visible differences are between \([1.5,\,3]\) s, where the CNN-UKF captures the end of the peak slightly better, and between \([4,\,7]\) s, where the DE-UKF is closer to the \(\beta\) reference.
The major difference between the DE-UKF and CNN-UKF is visible from the comparison of the \(\beta_{DD}\), see Fig. 13. The \(\beta_{DD}\) computed by the CNN-UKF substantially outperforms the one estimated by the DE-UKF. The explanation is that the CNN-UKF is trained end-to-end, so the output of the CNN has physical information that the NN uses to increase its accuracy. The DE is trained independently, performing similarly to the purely data-driven approach. A higher accuracy of \(\beta_{DD}\) implies that the subsequent UKF can rely on a better sideslip angle pseudo-measurement. This proves the benefits of using a physics-informed NN. Despite this, the \(\beta\) estimations of the DE-UKF and CNN-UKF are similar because the model-based approach still outperforms both \(\beta_{DD}\)s. Thus, both hybrid approaches mainly rely on the UKF. Due to the high chance of dealing with a limited dataset in practice, the hybrid approach is fundamental to improving vehicle sideslip angle estimation.
However, the performance of the proposed CNN-UKF approach is still influenced by the amount and quality of data in the training set. Thus, this remains a limitation of the proposed approach that must be addressed in the future. It highlights the importance of defining standard procedures to collect valuable and broad datasets. Regardless, the proposed CNN-UKF allows the introduction of possible solutions for the lack of data, e.g., weakly-supervised learning during the end-to-end training, which allows the use of data recorded without expensive sensors.
## VI Conclusion
The paper presents a novel hybrid approach to vehicle sideslip angle estimation, which utilises the physical knowledge of a UKF based on a single-track vehicle model to enhance the estimation accuracy of a CNN. Using a large-scale experimental dataset of 216 manoeuvres, it has been shown that the hybrid approach is more accurate than purely model-based or data-driven approaches. Moreover, the CNN-UKF slightly reduces the MSE of the DE-UKF. However, when the MSE\({}_{\text{nl}}\) is compared, the CNN-UKF outperforms the DE-UKF by \(25\,\%\), providing a much higher accuracy in the most critical operating region for active vehicle control systems. Thanks to the end-to-end training, the CNN-UKF forces the CNN to comply with the vehicle physics, reducing the ME and ME\({}_{\text{nl}}\) with respect to all other approaches. When a limited dataset is provided, the proposed hybrid approach achieves a minor improvement in the estimation robustness over the model-based and the DE-UKF approaches for all the KPIs, and it substantially outperforms the estimation of a purely data-driven approach. Future work involves testing the generalisation capability of the CNN-UKF utilising a dataset with different levels of road grip, e.g. wet, snow or icy roads.
|
2310.02309 | Parameter estimation by learning quantum correlations in continuous
photon-counting data using neural networks | We present an inference method utilizing artificial neural networks for
parameter estimation of a quantum probe monitored through a single continuous
measurement. Unlike existing approaches focusing on the diffusive signals
generated by continuous weak measurements, our method harnesses quantum
correlations in discrete photon-counting data characterized by quantum jumps.
We benchmark the precision of this method against Bayesian inference, which is
optimal in the sense of information retrieval. By using numerical experiments
on a two-level quantum system, we demonstrate that our approach can achieve a
similar optimal performance as Bayesian inference, while drastically reducing
computational costs. Additionally, the method exhibits robustness against the
presence of imperfections in both measurement and training data. This approach
offers a promising and computationally efficient tool for quantum parameter
estimation with photon-counting data, relevant for applications such as quantum
sensing or quantum imaging, as well as robust calibration tasks in
laboratory-based settings. | Enrico Rinaldi, Manuel González Lastre, Sergio García Herreros, Shahnawaz Ahmed, Maryam Khanahmadi, Franco Nori, Carlos Sánchez Muñoz | 2023-10-03T18:00:02Z | http://arxiv.org/abs/2310.02309v1 | Parameter estimation by learning quantum correlations in continuous photon-counting data using neural networks
###### Abstract
We present an inference method utilizing artificial neural networks for parameter estimation of a quantum probe monitored through a single continuous measurement. Unlike existing approaches focusing on the diffusive signals generated by continuous weak measurements, our method harnesses quantum correlations in discrete photon-counting data characterized by quantum jumps. We benchmark the precision of this method against Bayesian inference, which is optimal in the sense of information retrieval. By using numerical experiments on a two-level quantum system, we demonstrate that our approach can achieve a similar optimal performance as Bayesian inference, while drastically reducing computational costs. Additionally, the method exhibits robustness against the presence of imperfections in both measurement and training data. This approach offers a promising and computationally efficient tool for quantum parameter estimation with photon-counting data, relevant for applications such as quantum sensing or quantum imaging, as well as robust calibration tasks in laboratory-based settings.
## I Introduction
Quantum parameter estimation refers to the fundamental problem of inferring the value of unknown physical quantities within a quantum system [1]. It has important practical applications in device characterization [2; 3], quantum feedback and control [4; 5; 6; 7; 8; 9] and quantum communications [10; 11], and most importantly, it is at the core of the field of quantum metrology [12; 13; 14; 15; 16; 17; 18; 19; 20], whereby quantum correlations are exploited to estimate physical quantities with high precision.
The most conventional approach to quantum parameter estimation is based on the following steps: i) prepare a quantum system in a given state; ii) allow it to evolve through a unitary transformation that encodes the unknown parameters; iii) perform a measurement on the system; iv) estimate the parameter from the measurement results; v) repeat the process to gather statistical information and increase the precision of the estimation [14; 16]. Beyond this standard prepare-evolve-measure approach, there are relevant situations where, instead, one performs a single-shot continuous quantum measurement on a quantum probe [1]. In this case, one is interested in obtaining an estimation that will be continuously updated as data is gathered over the time evolution of a single quantum trajectory. Protocols of quantum parameter estimation from continuous measurements have been developed and studied over the years [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34]; such parameter estimation techniques are particularly relevant for applications such as atomic magnetometry [35; 36], quantum metrology with open quantum systems [27; 37; 38] or estimation of classical time-dependent signals [39]. Previous works have established computational approaches to calculate the probability density \(P(\theta|D)\)[22; 28; 29; 30; 31] of unknown \(n\)-dimensional parameters \(\theta=[\theta_{1},\theta_{2},\ldots,\theta_{n}]\) conditioned on the continuously recorded data, \(D\), which could correspond, for instance, to photon-counting records [29; 30], continuous weak measurements [40; 41; 42], or homodyne measurements [31]. These calculations rely on the application of Bayes' theorem on the likelihood functions \(P(D|\theta)\), obtained as the trace of un-normalized conditional density matrices with the evolution conditioned on the measured data record [43].
This Bayesian approach provides, in principle, the most accurate description of the knowledge that can be gained about \(\theta\) given the measurement data \(D\). However, the application of this technique faces several difficulties. Firstly, it requires exact modelling of the open quantum system and any source of error, which can become challenging for noisy, complex quantum systems [44]. Secondly, the method can be computationally demanding. For instance, when tackling the problem of multi-parameter estimation, where one wishes to estimate several parameters simultaneously, Bayesian inference calls for the use of stochastic methods to sample the posterior
probability distribution [28]. Generally, the Bayesian parameter estimation method will be computationally time-consuming, even in simple systems. This hampers the prospects for real-time estimation and for the integration of the inference process in the actual measurement device taking the data, which would be desirable for reduced latency and energy consumption [45, 46].
A novel approach to this problem that has gained attention in recent years is the application of machine-learning methods in which a neural network (NN) is trained to aid in the inference process. Crucially, this method allows one to circumvent the requirement for detailed knowledge of the underlying physical system, as long as a reasonable amount of training data is available. The use of NNs has been demonstrated for a variety of tasks in quantum metrology, including parameter estimation in the presence of unavoidable noise [47, 48, 49, 50, 51], calibration [52, 53], phase estimation in interferometry [52, 53, 54], magnetometry [51, 55], control [50], and tomography [56, 57]. In the context of continuously monitored systems, NNs have been employed for the reconstruction of quantum dynamics [2, 58] and parameter estimation under continuous weak dispersive measurements [47, 48, 55, 59], which are characterized by a continuous, diffusive evolution of the quantum system [1, 60].
Here, we assess the potential of deep learning to tackle the parameter estimation task from continuous measurements of the photon-counting type. These measurements are described by point processes that, rather than yielding a diffusive evolution of the system, are characterized by sudden quantum jumps upon the acquisition of a discrete signal, such as the detection of a single photon [1]. This type of measurement, ubiquitous in the field of quantum optics and cavity-QED [61, 62, 63, 64, 65, 66, 67, 68], provides insights into the particle-like nature of quantum systems and can produce strongly correlated signals that directly violate classical inequalities, as in the case of the renowned effect of photon antibunching [68]. Our work addresses the question of whether NNs can effectively process and exploit the quantum correlations present in this type of discrete signal.
Our proposed NN architectures approach this task as a regression problem, taking the time delays between individual photodetection events as an input and providing an estimate \(\hat{\theta}\) for the desired parameters of interest as the output, as sketched in Fig. 1. Our main finding is that the NNs can take advantage of the quantum correlations within the signal, providing higher precision than using an equivalent signal that lacks these correlations, e.g., the integrated intensity value. By computing the root-mean-squared error (RMSE) of the estimations, we find that, after training, the NN can reach the limit of precision established by Bayesian inference, while requiring dramatically fewer computational resources. We also demonstrate that parameter estimation with NNs could be more resilient to noise than a simple Bayesian approach with no detailed knowledge of the underlying noise sources incorporated into the model, since it is possible for NNs to indirectly learn the noise model during the training phase. In contrast to a full Bayesian posterior distribution, the architectures studied in this work provide a single point estimate of the parameters, without including a confidence interval. Nevertheless, we evaluate their efficiency as estimators by conducting statistical analyses of their performance across numerous data samples.
Figure 1: **Quantum parameter estimation strategies in open quantum systems**. Parameters are encoded in the dynamics of an open quantum system: here, these are the frequency detuning between the qubit transition frequency and the drive frequency, \(\Delta=\omega_{q}-\omega_{L}\), and the Rabi frequency \(\Omega\) of the coherent drive, proportional to the dipole moment of the qubit and the amplitude of the electromagnetic field driving it. Photon-counting measurements of the quantum light radiated yield a record of the times of photodetection. The unknown parameters can be reconstructed through Bayesian parameter estimation (left). Here, we show the posterior probability distribution \(P(\theta|D)\) for the data obtained from a single quantum trajectory, from which an estimator \(\hat{\theta}\) is obtained by computing the mean value of the distribution. The ground truth value for the parameters is contained within the posterior distribution in a region of high probability. An alternative approach is based on the use of neural networks (right). The neural network learns the inverse process that maps observed data to the parameters \(x\rightarrow\text{NN}\rightarrow\hat{\theta}\) through an initial training phase that employs data generated with known parameters. Once trained, inference for new data is much faster using the neural network than repeating the Bayesian parameter estimation with the new data.
## Results
### System and data generation
The data we use to train the deep neural networks and perform parameter inference consists of simulated measurements of single photo-detection times from the light emitted by a driven two-level system (TLS). This system represents a paradigmatic example of the types of quantum parameter estimation problems studied extensively in the literature [21, 22, 27]. It is a prototypical model that can describe various quantum emitters, such as quantum dots [69], molecules [70], or color centres [71]. We consider a TLS with states \(\{|0\rangle,|1\rangle\}\), lowering operator \(\hat{\sigma}=|0\rangle\langle 1|\) and transition frequency \(\omega_{q}\), continuously excited by a coherent drive with frequency \(\omega_{L}\) and a Rabi frequency \(\Omega\). This quantum probe is coupled to the environment, giving rise to spontaneous emission with a rate \(\gamma\). In the frame rotating at the drive frequency, the dynamics of such a system is described by the master equation (we set \(\hbar=1\) herein):
\[\partial_{t}\hat{\rho}=-i\left[\Delta\hat{\sigma}^{\dagger}\hat{\sigma}+\Omega (\hat{\sigma}+\hat{\sigma}^{\dagger}),\hat{\rho}\right]+\frac{\gamma}{2}D( \hat{\sigma})[\hat{\rho}], \tag{1}\]
where \(\Delta\equiv\omega_{q}-\omega_{L}\) and \(D(\hat{x})[\hat{\rho}]\equiv 2\hat{x}\hat{\rho}\hat{x}^{\dagger}-\{\hat{x}^{ \dagger}\hat{x},\hat{\rho}\}\) follows the definition of the Lindblad superoperator [72]. The interplay between continuous driving and losses eventually brings the system into a steady state.
The estimation is performed using the data \(D\) generated by the continuous photon-counting measurement of the radiation emitted by the system during a single run of the experiment, considering an initial state \(|\psi(0)\rangle=|0\rangle\) and a total evolution time, \(T\), much longer than the relaxation time towards the steady state. We simulate this process using the Monte Carlo method of quantum trajectories [1, 43, 73] implemented in the QuTiP library [74, 75]. Under continuous photon-counting measurements, the system evolves stochastically under the following rules:
* at every differential time step, \(dt\), the system can experience a quantum jump \(|\psi(t+dt)\rangle\propto\hat{\sigma}|\psi(t)\rangle\) with a probability \(p(t)=dt\gamma\langle\hat{\sigma}^{\dagger}\hat{\sigma}\rangle(t)\), which corresponds to the detection of a photon emitted by the system;
* otherwise, the system follows a non-Hermitian evolution \(|\psi(t+dt)\rangle\propto[\mathbf{I}-i(\hat{H}-i\gamma\hat{\sigma}^{\dagger}\hat{\sigma}/2)dt]|\psi(t)\rangle\). The wavefunction is normalized at each time step.
Each stochastic simulation of a trajectory corresponds to an individual realization of the experiment, generating a data vector by storing the specific set of times \((t_{1},\dots,t_{N})\) when a photodetection leading to a quantum jump occurs during the course of a total evolution time \(T\), with \(N\) being the total number of jumps recorded. For convenience, we define our data as the set of time delays between these jumps, \(D=[\tau_{1},\ldots,\tau_{N}]\), where \(\tau_{i}=t_{i}-t_{i-1}\), \(i=1,\ldots,N\) and \(t_{0}=0\). An important feature of this particular system is that each time interval \(\tau_{i}\) is an independent random variable, given that the system collapses to the ground state \(|0\rangle\) after each emission [29].

Figure 2: **Classical and quantum data**. (a) Dynamics of an individual Monte Carlo quantum jump trajectory for \(\Delta=0.8\gamma\). Top panel shows the posterior probability of the parameter \(\Delta\) for the photon-counting data \(D(t)\) acquired up to time \(t\) from a certain trajectory, starting from a flat prior. Bottom panel shows the corresponding evolution of the population of the qubit for the same trajectory, with red grid lines marking the times when a quantum jump occurs, leading to the collapse of the population to zero and subsequently restarting the system from the \(|0\rangle\) state. The list of time delays between jumps constitutes the quantum data \(D\) from which we infer the unknown parameters \(\theta\). (b) Waiting-time distribution \(w(\tau)\) of the measured time delays \(\tau_{i}\) between photo-detection events during an individual trajectory, which constitute the quantum signal. The plot shows the distribution for several \(\Delta\) parameters of the system. Clear patterns emerge which are \(\Delta\)-dependent. A classical signal corresponding to the integrated intensity is defined as the mean of the measured delays, \(\bar{\tau}=\sum_{i=1}^{N}\tau_{i}/N\). Since all \(\tau_{i}\) are independent random variables coming from the same underlying distribution \(w(\tau)\), \(\mathrm{E}[\bar{\tau}]=\mathrm{E}[\tau_{i}]=\mathrm{E}[\tau]\), shown as a black dashed line. The red vertical gridline marks the value \(\Delta=0.8\gamma\) used as ground truth in panel (a). The horizontal grid line marks the corresponding expected value \(\mathrm{E}[\tau](\Delta=0.8\gamma)\). (c) Heatmap of normalized histograms of \(\tau\) obtained from \(10^{4}\) trajectories with \(N=48\) clicks each, simulated with \(\Delta=0.8\gamma\). The oscillatory patterns of the waiting-time distribution are clearly reproduced in the histograms, which represent the quantum data. (d) Histogram of all the values \(\bar{\tau}\) obtained for the \(10^{4}\) trajectories used in panel (c). This represents the classical data, given by the mean of delays measured in a given trajectory. The resulting Gaussian distribution is centered around \(\mathrm{E}[\tau]\), marked by the dashed red gridline. We fix \(\Omega=\gamma\) in all panels. All units are defined in terms of \(\gamma\).
Each trajectory is simulated for an evolution time \(T\) chosen to generate a data record \(D\) of precisely \(N=48\) jumps. This number is chosen as a compromise: it is sufficiently large to ensure that the system is in the steady state, while still allowing for inference with a limited number of photons in a relatively short timescale spanning just a few tens of emitter lifetimes, which is typically within the range of nanoseconds for solid-state quantum emitters [62]. It is important to note that fixing \(N\) implies that each trajectory has a different evolution time \(T\) [76; 77]. More detailed information about the unravelling of the dynamics in quantum trajectories can be found in the Supplemental Material [78].
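As an illustration of this data-generation step, the following is a minimal sketch using QuTiP's Monte Carlo solver (`mcsolve`, QuTiP 4-style interface). The total evolution time and the strategy of truncating one long run to its first 48 jumps are illustrative simplifications; the delays between the first 48 jumps are unaffected by letting the run continue past the last detection.

```python
import numpy as np
from qutip import basis, mcsolve

gamma, Omega, Delta = 1.0, 1.0, 0.8   # units of gamma; Delta is a chosen ground truth
g, e = basis(2, 0), basis(2, 1)       # ground and excited states of the TLS
sm = g * e.dag()                      # lowering operator sigma = |0><1|
H = Delta * sm.dag() * sm + Omega * (sm + sm.dag())
c_ops = [np.sqrt(gamma) * sm]         # collapse operator reproducing the (gamma/2) D(sigma) dissipator

tlist = np.linspace(0.0, 200.0 / gamma, 20001)       # long enough to collect 48 jumps
res = mcsolve(H, g, tlist, c_ops=c_ops, ntraj=1)

t_jumps = np.sort(res.col_times[0])[:48]             # first N = 48 photodetection times
taus = np.diff(np.concatenate(([0.0], t_jumps)))     # data D = [tau_1, ..., tau_N]
```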
After obtaining the data \(D\) from a single trajectory or experiment, our goal is to estimate the unknown parameters \(\theta\) that govern the dynamics of the system. In this study, we specifically focus on the parameters \(\Delta\) and/or \(\Omega\), considering that \(\gamma\) is known. While this might be an idealized setting, it is instructive enough to benchmark the power of quantum correlations and neural networks for the task of parameter estimation. We are interested in benchmarking the NNs against the ultimate precision limit set by the Bayesian inference process. This process yields a posterior probability distribution for the unknown parameters conditioned on the measured data \(D\): following Bayes' rule we have \(P(\theta|D)=P(D|\theta)/\int d\theta P(D|\theta)\). A comprehensive explanation of Bayesian inference from continuous measurements is provided in Ref. [27] and, within the context of this work, in Ref. [78]. An illustrative individual trajectory, generated through our simulation approach, is depicted in Fig. 2(a), which shows the evolution of the posterior probability distribution for a single parameter, namely \(\theta=[\Delta]\), in the top panel, and the TLS population in the bottom panel. Red grid lines mark the times when a quantum jump takes place. One can see how the regions of high probability in the posterior distribution concentrate more around the ground truth as time increases and more data is collected.
Once the posterior distribution is obtained, we build an estimator by taking the mean, \(\hat{\theta}=E[\theta|D]=\int d\theta P(\theta|D)\theta\). While reducing the full posterior to a single estimation \(\hat{\theta}\) implies a loss of information and involves a subtle choice of the best estimator, this step allows us to straightforwardly compare the result with the estimation provided by the NNs. Here we also note that we use a flat (uniform) prior on the parameters of interest, while in general the Bayesian framework allows us to introduce our knowledge of the system using arbitrary priors.
A central question of our study is whether NNs can leverage the quantum correlations in the data. The emission from a TLS is strongly correlated; the fact that the emitter can only store one excitation at a time gives rise to the well-known non-classical phenomenon of antibunching [68], i.e., the suppressed probability of detecting two photons simultaneously. These quantum correlations can be evidenced through several observables, such as the waiting-time distribution \(w(\tau)\) (the probability of two consecutive emissions to be separated by a time delay \(\tau\)), which presents a vanishing value at zero delay, \(w(\tau=0)=0\), and an oscillatory behaviour that depends strongly on the system parameters. These features, shown in Fig. 2(b), are translated directly into the histograms of time delays observed in each individual trajectory, as we show in Fig. 2(c). We will refer to a signal containing these time delays as a quantum signal.
To assess the information contained in these correlations, we compare it to an equivalent classical signal where the only relevant value is the integrated intensity, i.e. the total number of counts detected per unit time, \(I=N/T\). We can interpret such a classical signal as measurements lacking the necessary time resolution to detect individual photons, providing an integrated intensity without any access to the internal temporal correlations between photon emissions. In fact, we choose the inverse of the intensity as the classical signal, since it conveniently reflects that we have reduced the list of measured delays that constitute the quantum signal to their average, \(\bar{\tau}\equiv\sum_{i=1}^{N}\tau_{i}/N=I^{-1}\). Since \(\bar{\tau}\) is a sum of independent random variables \(\tau_{i}\) that all follow the probability distribution \(w(\tau_{i})\), the central limit theorem ensures that, when \(N\) is large, \(\bar{\tau}\) is itself a random variable that follows instead a normal distribution, with \(\mathrm{E}[\bar{\tau}]=\mathrm{E}[\tau_{i}]\). This normal distribution stands in contrast to the more complicated probability distribution \(w(\tau_{i})\) followed by the individual random variables \(\tau_{i}\). In the steady state, this distribution is centered around \(\mathrm{E}[\tau_{i}]=\mathrm{E}[\tau]\equiv\int d\tau w(\tau)\tau=(\langle\hat{\sigma}^{\dagger}\hat{\sigma}\rangle_{\mathrm{ss}}\gamma)^{-1}\) [see Fig. 2(d)].
Access to \(\mathrm{E}[\tau]\) instead of \(w(\tau)\) represents an obvious loss of information, where any contribution from quantum correlations is completely discarded. As a consequence, the RMSE of the estimation made using the classical signal serves as a reference for comparison when performing estimations with the full quantum signal: whenever the obtained RMSE is lower than that of the classical signal, we understand that the estimation method is taking some advantage of quantum correlations. The process of parameter estimation from this classical signal, described in more detail in the Methods section, also follows a Bayesian approach in which the chosen estimator \(\hat{\theta}\) is the mean of the posterior \(P(\theta|\bar{\tau})\).
### Training
We considered different neural-network architectures with the common structure of inputs and outputs that we illustrate in Fig. 1. The input consists of a vector \(x=D=[\tau_{1},\tau_{2},\dots,\tau_{N}]\) of time delays between photodetection events, while the output is a vector of size \(n\) containing the estimates for the \(n\)-dimensional estimation problem, \(y=[\hat{\theta}_{1},\hat{\theta}_{2},\dots,\hat{\theta}_{n}]\). Here, we treat both a one-dimensional estimation problem, with \(\theta=[\Delta]\), and a two-dimensional one, with \(\theta=[\Delta,\Omega]\). We fixed the size of the input vector to \(N=48\), meaning that our networks perform estimations based on a single experimental run once 48 photons have been detected.
Estimation via NNs requires a training stage akin to a calibration process (learning phase). In this work, the input data used for training, \(x_{\text{train}}\), consists of a list of time delays obtained from trajectories simulated in QuTiP, with the corresponding target data \(y_{\text{train}}\) being the ground truth parameters used to run the simulations. Our best results are achieved using two types of architectures:
1. _RNN_: A recurrent neural network (RNN) with two Long Short-Term Memory (LSTM) layers. These networks can learn correlations in sequential data, rendering them particularly promising for exploiting temporal quantum correlations. It is worth noting that there are no such correlations between time delays in our present system, since the \(\tau_{i}\) are independent due to the collapse following a quantum jump. Nonetheless, these networks still deliver a very good performance. More information on these networks and their application in parameter estimation from continuous signals can be found in [55].
2. _Hist-Dense_: A fully connected neural network with a first custom layer which incorporates the physical knowledge of the absence of correlations between time delays by performing a histogram of the input data. The geometry of the bins of the histogram is defined by the number of bins and their range. We set these as non-trainable hyperparameters fixed at 700 total bins and a range given by \(\tau_{\text{min}}=0\) and \(\tau_{\text{max}}=100/\gamma\).
Further details on the networks and the training process can be found in the Methods section. In all the numerical calculations, the unit of time is set by \(1/\gamma\) (which we assume to be a known parameter) by fixing \(\gamma=1\). After training, we benchmark the estimations made by the NNs against the estimation made through Bayesian inference and, in the single-parameter estimation case, also against the estimation done on the classical version of the signal. Details of this validation process are in the Methods section.
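For concreteness, below is a minimal TensorFlow/Keras sketch of the _Hist-Dense_ 1D architecture, with bin geometry, layer sizes, optimizer, and loss taken from Table 1 in the Methods. The specific implementation of the non-trainable histogram layer is our assumption, as the text only fixes its bin geometry.

```python
import tensorflow as tf

N_BINS, TAU_MAX = 700, 100.0            # bin geometry fixed in the text (gamma = 1 units)

class HistogramLayer(tf.keras.layers.Layer):
    """Non-trainable first layer: histogram of the 48 input time delays."""
    def call(self, x):
        idx = tf.cast(tf.floor(x / TAU_MAX * N_BINS), tf.int32)
        idx = tf.clip_by_value(idx, 0, N_BINS - 1)      # overflow goes to the last bin
        return tf.reduce_sum(tf.one_hot(idx, N_BINS), axis=1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(48,)),        # one trajectory: 48 time delays
    HistogramLayer(),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(30, activation="relu"),
    tf.keras.layers.Dense(1),           # linear output: the estimate of Delta
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="msle")
```

Training would then proceed with `model.fit(x_train, y_train, batch_size=12800, epochs=1200)`, matching the settings quoted in Table 1.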
### Single-parameter estimation (1D)
First, we focus on the simplest situation in which a single parameter is estimated: the detuning \(\theta=[\Delta]\). Figure 3(a) depicts a scatter plot of the predictions made by each method for different values of \(\Delta\). These plots show that the predictions made using the quantum signal--regardless of whether Bayesian inference or NNs are used--are more densely concentrated around the ground truth than the predictions obtained using the classical version of the signal. This already suggests that quantum correlations in the data can provide an advantage.

Figure 3: **Performance of different estimation strategies**. (a) Scatter plot of estimations \(\hat{\Delta}\) across a range of \(\Delta\) values from classical (left) and quantum (right) versions of the photon-counting signal, computed for \(10^{4}\) trajectories for each \(\Delta\) value. All the different estimators used for quantum data (Bayesian, NN) give a similar result, so only one is shown (NN-Hist). The blue shade of the points represents the underlying probability density function obtained through kernel density estimation. The dashed red line represents the ground truth. Orange points represent the mean of the estimations. (b) RMSE of the estimators on the classical (red) and quantum (yellow, blue, green) versions of the signal. Estimations on quantum signals clearly outperform the classical estimation, signaling the useful information encoded in quantum correlations. The predictions made by NNs (blue, green) are on par with Bayesian inference (yellow), which is the optimal process in terms of information retrieval. Dashed lines mark the lower limit set by the biased Cramér-Rao bound for the classical and quantum signals. (c) Bias of the predictions for the same cases as in (b).
This advantage is clearly quantified in Fig. 3(b), which compares the RMSE of the predictions obtained by each of the methods computed on a different set of validation trajectories (see Methods for details on the validation data). The estimation from classical data (red) always has a higher RMSE than estimation methods that have access to the quantum correlations in the data. Figure 3(b) also shows that the NN architectures used in this work learn to extract information encoded in these correlations, with the resulting RMSE reaching values very close to the ultimate limit set by Bayesian inference. The fact that neural networks perform as well as the Bayesian approach by leveraging quantum correlations in a completely model-agnostic manner is the main result of our work.
We note, however, that none of the approaches studied here provided an estimator that is unbiased for all \(\Delta\), as we show in Fig. 3(c). There, we plot the bias of several estimators, defined as \(\text{bias}(\hat{\theta})\equiv\text{E}(\hat{\theta}-\theta)\). This bias is not problematic _per se_, since it is known that biased estimators can be preferable to unbiased ones in some cases [79]. However, this bias needs to be taken into account when comparing the obtained RMSE with the lower bound set by the Fisher information and the Cramér-Rao bound (CRB) [80, 81]. As we discuss in more detail in Ref. [78], one needs to consider the _biased_ CRBs [82]. We calculated the Fisher information and the biased CRBs for the photon-counting measurements considered here [27, 28, 29], incorporating the calculated biases shown in Fig. 3(c). The resulting lower limits for the RMSE are shown as dashed lines in Fig. 3(b). We show that, for the quantum signal, both the Bayesian and NN estimators saturate the CRB for a wide range of values of \(\Delta\).
### Multi-parameter estimation (2D)
We also benchmarked our results for the 2D parameter-estimation case, where both \(\Delta\) and \(\Omega\) are unknown and estimated simultaneously, \(\theta=[\Delta,\Omega]\).
Our results for the 2D estimation case are shown in Fig. 4. In panel (a), we depict the RMSE of Bayesian estimation from the quantum signal in a region of the \([\Delta,\Omega]\) plane. The resulting map shows a nontrivial structure that reflects the underlying physical complexity of the system and is consistent with previous reports in the literature [28, 29]. The same physical complexity is captured by the NNs: Fig. 4(b) depicts the RMSE obtained by the estimation of a _Hist-Dense_ architecture, which reflects the same underlying structure as the one obtained with Bayesian inference and yields comparably small values overall. Both results can be compared more clearly in Fig. 4(c), which overlays the contour lines of the two plots in panels (a) and (b).
## Discussion
Our results clearly establish that NNs are able to exploit the information hidden in quantum correlations in order to perform parameter estimation. Given their comparable precision to Bayesian inference, a careful evaluation of the benefits of opting for NNs as an alternative becomes important.
Figure 4: **Multidimensional parameter estimation for \(\theta=[\Delta,\Omega]\).** The qubit-drive detuning \(\Delta\) and the Rabi frequency \(\Omega\) are estimated simultaneously. (a) RMSE of the estimations made through Bayesian inference. (b) RMSE of the estimations made by the NN. (c) Contours corresponding to Bayesian (red) and NN (blue) are overlaid for the sake of comparison.
### Computation efficiency
A first aspect in which NNs can bring an important advantage is the efficiency of the computation, which is particularly relevant for the case in which one wishes to achieve real-time estimation. For the 2D case of simultaneous estimation of two parameters, the posterior for a single trajectory for Bayesian estimation can be computed using the UltraNest package [83] (see Methods) in a typical wall time of 10 seconds on a MacBook Pro 2023 with an Apple M2 Max Chip. On the other hand, an estimation using the NNs built in this work, while being less informative (unlike the Bayesian posterior, they provide a single estimation without a confidence interval), can be computed in a wall time of less than 1 ms on the same device. The efficiency of the required calculation is such that trained NNs can be converted into compact and efficient TensorFlow Lite models, with a size of approximately 90kB. These models are suitable for deployment and execution on microcontrollers, which would minimize resource demands and enable data collection and estimation on the same device, leading to a significant reduction in latency [45, 46]. We can expect that, during data collection of photon-counting measurements, different chunks of measured time-delays are fed into the trained NN models for inference in real time. Such a situation would not only allow for precise and robust real-time inference, but could also inform about time-drifting parameters by looking at the temporal sequence of predictions.
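As a sketch of the deployment path mentioned above, a trained Keras model (such as `model` from the earlier sketch) can be converted with TensorFlow's standard TFLite converter; the file name is illustrative.

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("hist_dense.tflite", "wb") as f:   # roughly 90 kB for these networks, per the text
    f.write(tflite_model)
```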
### The challenge of noise
Another important challenge faced by the Bayesian inference approach is the requirement of a precise modelling of the underlying physical system. This becomes particularly problematic in the presence of noise, which not only requires a perfect characterization and modelling of the different sources of imperfections, but also makes the subsequent calculation of posteriors much more involved. In contrast, the model-agnostic approach of using NNs allows for a robust prediction without the need for any modelling of the sources of noise, simply by training the network directly on noisy data. In the particular case of photon-counting with single-photon detectors, numerous sources of noise can be attributed just to the detection process, such as the efficiency, bandwidth, dead time, dark counts or jitter noise [67]. All these sources of noise have non-trivial consequences on the statistics of the photodetection events [84] and on the quantum trajectories associated with these imperfect measurements [85]. Hence, the necessity of a perfect modelling and characterization of these processes seriously hampers the utilization of Bayesian inference.
### Noise in \(x\): time jitter
We illustrate the potential advantage brought in by the estimation with NNs by considering the particular case of time jitter [84], which we describe by adding a noise term to the values of the time delays \(\tau\) used both in the training and the estimation of the NNs for the 1D estimation case \(\theta=[\Delta]\). We consider noise following a normal distribution with standard deviation \(\sigma_{\tau}\), so that \(x\to x+x_{\text{noise}}(\sigma_{\tau})\), with \(x_{\text{noise}}(\sigma_{\tau})\sim\mathcal{N}(0,\sigma_{\tau})\). In order to assess the resilience of the NN predictors to this type of noise, we defined a list of values of \(\sigma_{\tau}\) in the range \(\sigma_{\tau}\in[0,\gamma^{-1}]\). For each value in this list, we sampled a population of \(x_{\text{noise}}\) and trained a model with the corresponding noisy data.

Figure 5: **Parameter estimation with noisy data.** (a) RMSE of the NN predictor trained and evaluated with data containing jitter noise given by a Gaussian distribution with standard deviation \(\sigma_{\tau}\) (green circles). This is compared to the RMSE of Bayesian inference lacking a description of the noise (yellow crosses). Results shown for a fixed \(\Delta=0.27\gamma\). (b) Same as in panel (a), shown versus \(\Delta\) and with a fixed jitter noise \(\sigma_{\tau}=0.76\gamma^{-1}\). RMSE of the Bayesian predictor in the absence of noise shown for comparison (gray crosses). (c) RMSE of the NN predictor trained with noisy target training data \(y_{\text{train}}\), with noise following a Gaussian distribution with standard deviation \(\sigma_{y}\). We observe that the RMSE of the NN predictor (green circles) can remain smaller than the standard deviation of the training data \(\sigma_{y}\) (gray dot-dashed line). The horizontal grid line marks the RMSE of Bayesian inference with noiseless data for comparison. Results shown for a fixed \(\Delta=0.27\gamma\). (d) Same as in panel (c), shown as a function of \(\Delta\). Different colors represent different values of \(\sigma_{y}\), which are also represented as horizontal grid lines, confirming that the RMSE of the NN predictor remains smaller than \(\sigma_{y}\) for all values of \(\Delta\).
The resulting RMSE of the predictions returned by each of these models is computed using the same validation trajectories used in Fig. 3, but containing added Gaussian noise with the same \(\sigma_{\tau}\) as the data used to train the model. The resulting RMSEs are shown in Fig. 5(a), compared to the RMSE of the prediction via an imperfect Bayesian inference process in which the noise is not explicitly modeled. We observe that, in the case of NNs, the results are largely insensitive to the jitter noise, and the RMSE of the NN predictor is always lower than that of the Bayesian predictor. This result holds for all values of \(\Delta\), as shown in Fig. 5(b). We emphasize that this did not require any modelling of the imperfections, as would have been required for a correct Bayesian inference process. The latter would presumably recover the values reported by the NN, but would require a perfect characterization of the sources of noise, which may be challenging in some scenarios.
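A minimal sketch of the jitter-noise injection described above, where `x_train` is a hypothetical array holding the training delay vectors; the text does not specify whether delays that become negative after noise injection are clipped, so none is applied here.

```python
import numpy as np

rng = np.random.default_rng(seed=0)       # seed chosen only for reproducibility
sigma_tau = 0.76                          # one of the jitter levels, units of 1/gamma
x_train_noisy = x_train + rng.normal(0.0, sigma_tau, size=x_train.shape)
```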
### Noise in \(y_{\text{train}}\): imperfect calibration
Another relevant question is the effect of noise in the training target data \(y_{\text{train}}\). In particular, we consider the case in which the target data used for training differs from the underlying ground truth used to generate the quantum trajectories by a Gaussian noise term with standard deviation \(\sigma_{y}\), so that \(y_{\text{train}}\to y_{\text{train}}+y_{\text{noise}}(\sigma_{y})\), with \(y_{\text{noise}}\sim\mathcal{N}(0,\sigma_{y})\). This could reflect, for instance, the finite precision of a calibration device employed to experimentally measure the set of target values used to train the network.
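The corresponding sketch for the noisy calibration targets, reusing `rng` from the previous sketch; `y_train` is a hypothetical array of ground-truth parameters and the noise level is illustrative.

```python
sigma_y = 0.2                             # calibration noise, units of gamma
y_train_noisy = y_train + rng.normal(0.0, sigma_y, size=y_train.shape)
```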
The question we address is whether a NN will be able to perform parameter estimation with a lower RMSE than the standard deviation \(\sigma_{y}\) of the noisy target data used for training. A positive answer to this question would indicate that the precision of the NN estimator can outperform that of the calibration method or device used to generate the training data. Figures 5(c) and 5(d) illustrate that this scenario is indeed possible. Similarly to the process described for Figure 5(a), we defined a series of values of \(\sigma_{y}\) and, for each value, we sampled a population of \(y_{\text{noise}}\) and trained a model with noisy target data. The validation of the trained models is done using the same set of trajectories as in Fig. 3. Figure 5(c) shows that, once the standard deviation of the noisy training target data \(\sigma_{y}\) (dot-dashed line) becomes greater than the RMSE achievable in the noiseless case (horizontal grid line), the predictions made by the NNs trained with this noisy data remain robust and approximately equal to the ideal RMSE. As \(\sigma_{y}\) increases, the RMSE of the NN predictor eventually increases as well, but it remains much smaller than \(\sigma_{y}\). This observation holds for all values of \(\Delta\), as shown in Fig. 5(d). This result highlights the potential of the NNs to improve sub-optimal estimations used to feed the training process.
### Conclusions and outlook
We have established the potential of NNs to perform parameter estimation from the data generated by continuous measurements of the photon-counting type. We have carefully benchmarked their precision against the ultimate limit set by Bayesian inference, showing that NN approaches can reach comparable performance in a much more computationally efficient way, while also being more robust against noise.
These results pave the way for future research that can explore more complex quantum optical systems and measurement schemes. For example, future studies could investigate the radiation from cavity-QED systems with non-trivial quantum statistics [61, 66, 87, 86] or several quantum emitters, as well as multiple simultaneous detection channels, such as those provided by arrays of single-photon avalanche diodes for generalized Hanbury-Brown and Twiss measurements [62, 64], streak cameras [63], or single-photon fibre bundle cameras [65, 88].
One aspect we leave for future work is the exploration of NN methods that incorporate mechanisms to do uncertainty quantification (UQ) on the predictions (compute prediction intervals) [89]. For example, Bayesian Neural Networks [90] train a set of weights distributions and the predictions from those networks are obtained by sampling the posterior of the weights after training. Other approaches to UQ include bootstrapping and ensemble methods which are easily implemented and only require extra training time, maintaining all the benefits of NN estimators that we have shown in our work.
Another aspect deserving further exploration is leveraging domain knowledge and incorporating physical models into the learning process [2]. We note that the _Hist-Dense_ architecture used in this study could already be considered a physics-informed approach, taking advantage of a known feature about the model: the absence of information contained in the particular ordering of delays in the input data \(x\), given that the source is a TLS that resets its state after every emission [29].
Moreover, it is worth noting that protocols of parameter estimation from photon-counting data, as the one described here, may find applications in fields such as fluorescence quantum microscopy [91, 92, 65, 93, 88] or quantum imaging [94, 95]. In these areas, experimental quantum signals beyond the capabilities of classical simulations--due to the number of emitters and detectors involved--are already within reach, calling for efficient methods of
analysis for the optimal image reconstruction [96].
## Methods
### Photodetection time data
In practice, rather than working with the absolute times of detection, \(t_{i}\), we find it more convenient to consider \(D\) to be equivalently composed of a list of time delays \(\tau_{i}=t_{i}-t_{i-1}\), with \(i=1,\ldots N\) and \(t_{0}=0\). For simplicity of calculations, we consider trajectories with different values of total evolution time \(T\) but a fixed number of total detections \(N\), and assume that the measurement is immediately terminated after the last detection, i.e., \(t=t_{N}\). This means that \(t-t_{N}\) will always be zero, becoming an irrelevant time interval that we do not need to include in our data, which will then only consist of \(N\) values, \(D=[\tau_{1},\ldots\tau_{N}]\). This format provides a set of data records with a fixed number of photodetection events or "clicks" that can be easily used as inputs for the NNs that we use in this work.
### Bayesian Inference
_Bayes' Rule.--_ We consider that, at the beginning of our estimation protocol, our knowledge of the unknown set of parameters is characterized by a constant prior probability distribution \(P_{0}(\theta)\) over an arbitrarily large but finite support (we define an initial support that is big enough not to affect the final estimation, and alternative prior distributions can also be used). Our goal is to update this knowledge upon the new evidence provided by some measurement data \(D\), i.e., to compute the posterior probability \(P(\theta|D)\). According to Bayes' rule, this is given by
\[P(\theta|D)=\frac{P(D|\theta)}{P(D)}P_{0}(\theta)=\frac{P(D|\theta)}{\int d \theta P(D|\theta)P_{0}(\theta)}P_{0}(\theta), \tag{2}\]
where the integral is defined over the domain of possible values of \(\theta\), i.e., the support of \(P_{0}(\theta)\). If we have no prior knowledge of \(\theta\), as we assume here, \(P_{0}(\theta)\) is constant over its support and can be taken out of the integral in the denominator in Eq. (2), leaving us with a simple relation between \(P(\theta|D)\) and \(P(D|\theta)\):
\[P(\theta|D)=\frac{P(D|\theta)}{\int d\theta P(D|\theta)}. \tag{3}\]
Thus, the required step to compute the posterior \(P(\theta|D)\) is the calculation of the likelihood \(P(D|\theta)\), which in general can be computed as the norm of a conditional density matrix evolving according to the list of quantum-jump times in data \(D\), as explained in detail in Ref. [78].
_Likelihood computation.--_ The calculation of the likelihood is particularly simple for the dynamics of a TLS, since each time interval is independent of the others, given that the system is completely reset to the ground state \(|0\rangle\) after each emission [29]. Consequently, the probability for the set of data \(D=[\tau_{1},\ldots,\tau_{N}]\) is simply given by the product of the probabilities of each interval
\[P(D|\theta)=\prod_{i=1}^{N}w(\tau_{i};\theta), \tag{4}\]
where the waiting-time distribution \(w(\tau;\theta)\) gives the probability of two successive clicks being separated by a time interval \(\tau\) and we made explicit its dependence on the system parameters \(\theta\). In our system, governed by the master equation (1), the waiting-time distribution has the following analytical form,
\[w(\tau;\theta)=\frac{8\gamma\Omega^{2}}{R}\exp[-\gamma\tau/2] \times\sum_{\eta=-1,1}\eta\cosh\left(\frac{\tau\sqrt{\gamma^{2}-4 \left(\Delta^{2}+4\Omega^{2}\right)+\eta R}}{2\sqrt{2}}\right) \tag{5}\]
with \(R=\sqrt{\left[\gamma^{2}+4\left(\Delta^{2}+4\Omega^{2}\right)\right]^{2}-64 \gamma^{2}\Omega^{2}}\).
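A direct numerical transcription of Eqs. (3)-(5) for the 1D case might look as follows, with `taus` as generated in the trajectory sketch above. Complex arithmetic in the square root handles the underdamped regime, where the cosh of an imaginary argument becomes a cosine; the grid resolution and prior support are illustrative choices.

```python
import numpy as np

def waiting_time(tau, delta, omega, gamma=1.0):
    """Waiting-time distribution w(tau; theta), Eq. (5)."""
    R = np.sqrt((gamma**2 + 4 * (delta**2 + 4 * omega**2)) ** 2
                - 64 * gamma**2 * omega**2)
    pref = 8 * gamma * omega**2 / R * np.exp(-gamma * tau / 2)
    w = 0j
    for eta in (-1.0, 1.0):
        arg = np.sqrt(gamma**2 - 4 * (delta**2 + 4 * omega**2) + eta * R + 0j)
        w += eta * np.cosh(tau * arg / (2 * np.sqrt(2)))
    return (pref * w).real

def log_likelihood(taus, delta, omega, gamma=1.0):
    """log P(D|theta) as a sum over independent delays, Eq. (4)."""
    return np.sum(np.log(waiting_time(np.asarray(taus), delta, omega, gamma)))

# 1D posterior on a grid under a flat prior, Eq. (3), and its mean estimator
deltas = np.linspace(0.0, 5.0, 1001)
logL = np.array([log_likelihood(taus, d, 1.0) for d in deltas])
post = np.exp(logL - logL.max())
post /= post.sum() * (deltas[1] - deltas[0])
delta_hat = np.sum(deltas * post) * (deltas[1] - deltas[0])
```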
_Posterior computation.--_ Once the method to compute the likelihood \(P(D|\theta)\) is established, the only obstacle to using Bayes rule in Eq. (2) is the calculation of the normalization factor in the denominator. This factor is called the evidence or the marginal likelihood, as it marginalizes the likelihood over all possible values of the parameters. While this integral can be computed analytically for simple likelihood distributions, or numerically efficiently for a low-dimensional parameter space, many sampling techniques are typically used in Bayesian inference to bypass its direct computation. Markov Chain Monte Carlo (MCMC) [97] is a stochastic sampling method that allows us to compute samples from the posterior \(P(\theta|D)\) as draws from multiple Markov chains evolving in Monte Carlo time according to a stationary distribution proportional to the posterior. Another method is Nested Sampling (NS) [98], which is able to provide an estimate for the marginal likelihood itself by converting the original multi-dimensional integral over the parameter space to a one-dimensional integral over the inverse volume of the prior support. NS methods have been shown to be effective also in the case of multi-modal posteriors (related to parameter degeneracy) and when the posterior distribution has heavy or light tails.
For the multi-dimensional case \(n=2\), we compute the posterior probability distributions \(P(\theta|D)\) using the UltraNest package [83] in Python. UltraNest contains several advanced algorithms to improve the efficiency and correctness of nested sampling, including bootstrapping uncertainties on the marginal likelihood and handling vectorized likelihood functions over parameter sets (together with additional MPI support).
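A hedged sketch of the 2D nested-sampling setup with UltraNest, reusing `log_likelihood` from the previous sketch. The prior ranges mirror the training domain quoted in the Methods, and the exact layout of the result dictionary may differ between UltraNest versions.

```python
import numpy as np
from ultranest import ReactiveNestedSampler

def prior_transform(cube):
    # flat priors: Delta in [0, 3], Omega in [0.25, 5] (units of gamma)
    return np.array([3.0 * cube[0], 0.25 + 4.75 * cube[1]])

def loglike(theta):
    delta, omega = theta
    return log_likelihood(taus, delta, omega)

sampler = ReactiveNestedSampler(["Delta", "Omega"], loglike, transform=prior_transform)
result = sampler.run()
theta_hat = np.asarray(result["posterior"]["mean"])   # posterior-mean estimator
```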
### Parameter estimation from classical signals
We consider that the classical version of the measured quantum signal is given by the sample mean of the \(N\) observed delays \(\bar{\tau}=\sum_{i=1}^{N}\tau_{i}/N\). Being a sum of many independent random variables \(\tau_{i}\), \(\bar{\tau}\) is itself a random variable that, in the limit of large \(N\), will follow a Gaussian distribution \(\bar{\tau}\sim\mathcal{N}(\mu,\sigma)\), where \(\mu=\mathrm{E}[\tau_{i}]\) and \(\sigma^{2}=\mathrm{Var}[\tau_{i}]/N\). Since the independent random variables follow the waiting-time distribution given by Eq. (5), \(\tau_{i}\sim w(\tau;\theta)\), \(\mu\) and \(\sigma\) can be directly obtained:
\[\mu(\theta) =\left[\gamma\langle\hat{\sigma}^{\dagger}\hat{\sigma}\rangle_{\mathrm{ss}}(\theta)\right]^{-1}=\frac{\gamma^{2}+4\Delta^{2}+8\Omega^{2}}{4\gamma\Omega^{2}}, \tag{6a}\] \[\sigma(\theta)^{2} =\frac{(\gamma^{2}+4\Delta^{2})^{2}-8(\gamma^{2}-12\Delta^{2})\Omega^{2}+64\Omega^{4}}{16N\gamma^{2}\Omega^{4}}, \tag{6b}\]
where we assume that the emission is recorded once the driven emitter has reached its steady state, and we have made explicit the dependence of these quantities on the unknown parameters \(\theta\). Within a Bayesian framework, this means that the likelihood of the classical data is
\[P(\bar{\tau}|\theta)=\frac{1}{\sigma\sqrt{2\pi}}\exp\left[-\frac{1}{2}\left( \frac{\bar{\tau}-\mu}{\sigma}\right)^{2}\right], \tag{7}\]
and the posterior is given by Eq. (3), assuming a constant prior. From here, we obtain a single estimation by taking the mean of the posterior, i.e., \(\hat{\theta}=\mathrm{E}[\theta|D]=\int d\theta\theta P(\theta|D)\). We make this choice in order to be consistent with the estimator used for Bayesian inference with quantum data, and because it shows overall lower RMSE and bias than the maximum-likelihood estimator, see Ref. [78].
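The classical-signal estimation of Eqs. (6)-(7) reduces to a few lines; a minimal sketch:

```python
import numpy as np

def mu_cl(delta, omega=1.0, gamma=1.0):
    """Mean of the average delay tau-bar, Eq. (6a)."""
    return (gamma**2 + 4 * delta**2 + 8 * omega**2) / (4 * gamma * omega**2)

def var_cl(delta, N=48, omega=1.0, gamma=1.0):
    """Variance of tau-bar, Eq. (6b)."""
    num = ((gamma**2 + 4 * delta**2) ** 2
           - 8 * (gamma**2 - 12 * delta**2) * omega**2 + 64 * omega**4)
    return num / (16 * N * gamma**2 * omega**4)

def log_like_classical(tau_bar, delta, omega=1.0, gamma=1.0, N=48):
    """Gaussian log-likelihood of the classical signal, Eq. (7)."""
    m, v = mu_cl(delta, omega, gamma), var_cl(delta, N, omega, gamma)
    return -0.5 * np.log(2 * np.pi * v) - 0.5 * (tau_bar - m) ** 2 / v
```

The posterior over \(\Delta\) then follows from the same flat-prior grid normalization used in the quantum-signal sketch.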
### Training and design of the neural networks
For the 1D estimation case, the networks are trained with \(4\times 10^{6}\) trajectories in which the first 48 photodetection times are stored. From these, 80% are taken as training data, and 20% as validation data during the training process. Each trajectory is simulated by fixing \(\Omega=\gamma\), and choosing randomly a value of the detuning within the domain \(\Delta\in[0,5\gamma]\), which should be large enough to cover the expected range of possible values of \(\Delta\). We choose this domain to be the same as the support of the prior in the Bayesian inference, so that both methods rely on the same underlying assumptions about the probability distribution of \(\Delta\). All the networks are trained using the Adam optimizer. A summary of these architectures--including number of trainable parameters and details about the batch size and number of epochs used in training--is provided in Table 1. All the networks are defined and trained using the Tensorflow library [99].
For the 2D parameter estimation, we train the network with a new set of \(4\times 10^{6}\) trajectories generated for random values of \(\Delta\) and \(\Omega\) in the ranges \(\Delta\in[0,3\gamma]\) and \(\Omega\in[0.25\gamma,5\gamma]\) (lower values of \(\Omega\) are not used in order to avoid the longer simulation times associated with the lower pumping rate).
### Validation process
#### 1D case
In order to validate the performance of the NNs, we generated a set of validation trajectories for a uniform grid of 40 values of \(\Delta\) (with \(\Omega=\gamma\)) in the range \(\Delta\in[0,2.1\gamma]\). For each value of \(\Delta\), we generated \(10^{4}\) trajectories containing 48 photodetection events each. For each trajectory, we obtained an estimation of the parameter with each of the methods, taking the mean of the posterior as the estimation in the Bayesian case. We then computed the RMSE from the \(10^{4}\) different predictions of each method for each value of \(\Delta\).
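The validation metrics themselves are straightforward; a sketch with hypothetical arrays `theta_hat` (the \(10^{4}\) predictions for one grid value) and scalar `theta_true`:

```python
import numpy as np

rmse = np.sqrt(np.mean((theta_hat - theta_true) ** 2))
bias = np.mean(theta_hat - theta_true)     # bias(theta_hat) = E[theta_hat - theta]
```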
_Recurrent Neural Network (RNN)_ (3,690 trainable parameters; 1200 epochs)

| Layer | Output shape | Activation | # Parameters |
| --- | --- | --- | --- |
| LSTM | 17 | ReLU | 1292 |
| LSTM | 17 | ReLU | 2380 |
| Dense | 1 | Linear | 18 |

_Hist-Dense_ (76,711 trainable parameters; 1200 epochs)

| Layer | Output shape | Activation | # Parameters |
| --- | --- | --- | --- |
| Histogram | 700 | - | 0 |
| Dense | 100 | ReLU | 70100 |
| Dense | 50 | ReLU | 5050 |
| Dense | 30 | ReLU | 1530 |
| Dense | 1 | Linear | 31 |

_Hist-Dense 2D_ (77,532 trainable parameters; 1200 epochs)

| Layer | Output shape | Activation | # Parameters |
| --- | --- | --- | --- |
| Histogram | 700 | - | 0 |
| Dense | 100 | ReLU | 70100 |
| Dense | 50 | ReLU | 5050 |
| Dense | 30 | ReLU | 1530 |
| Dense | 20 | ReLU | 620 |
| Dense | 10 | ReLU | 210 |
| Dense | 2 | Linear | 22 |

Table 1: Summary of the NN architectures used for parameter estimation. All networks are trained with the Adam optimizer, with a learning rate of 0.001, a batch size of 12800, and a mean squared logarithmic error (MSLE) loss function.
#### 2D case
Validation is done for a grid of \((\Delta,\Omega)\) pairs built from a grid of 40 values of \(\Delta\) in the range \(\Delta\in[0.,2.1\gamma]\) and a grid of 40 values of \(\Omega\) in the range \(\Omega\in[0.25\gamma,2.1\gamma]\), spanning a square grid of 1600 points. For each of these points, we simulate \(10^{4}\) validation trajectories with 48 clicks each.
## Data availability
All the data necessary to reproduce the results in this work, including simulated quantum trajectories for training of the NNs, validation trajectories, trained models and Bayesian inference calculations in 2D via nested sampling are openly available on Zenodo [100].
## Code availability
The codes used to generate the results in this work, including the implementation of the proposed NN architectures as TensorFlow models, are openly available on Zenodo and GitHub [101].
|
2302.08347 | The autoregressive neural network architecture of the Boltzmann
distribution of pairwise interacting spins systems | Generative Autoregressive Neural Networks (ARNNs) have recently demonstrated
exceptional results in image and language generation tasks, contributing to the
growing popularity of generative models in both scientific and commercial
applications. This work presents an exact mapping of the Boltzmann distribution
of binary pairwise interacting systems into autoregressive form. The resulting
ARNN architecture has weights and biases of its first layer corresponding to
the Hamiltonian's couplings and external fields, featuring widely used
structures such as the residual connections and a recurrent architecture with
clear physical meanings. Moreover, its architecture's explicit formulation
enables the use of statistical physics techniques to derive new ARNNs for
specific systems. As examples, new effective ARNN architectures are derived
from two well-known mean-field systems, the Curie-Weiss and
Sherrington-Kirkpatrick models, showing superior performance in approximating
the Boltzmann distributions of the corresponding physics model compared to
other commonly used architectures. The connection established between the
physics of the system and the neural network architecture provides a means to
derive new architectures for different interacting systems and interpret
existing ones from a physical perspective. | Indaco Biazzo | 2023-02-16T15:05:37Z | http://arxiv.org/abs/2302.08347v3 | The autoregressive neural network architecture of the Boltzmann distribution of pairwise interacting spins systems
###### Abstract
Generative Autoregressive Neural Networks (ARNN) have recently demonstrated exceptional results in image and language generation tasks, contributing to the growing popularity of generative models in both scientific and commercial applications. This work presents a physical interpretation of the ARNNs by reformulating the Boltzmann distribution of binary pairwise interacting systems into autoregressive form. The resulting ARNN architecture has weights and biases of its first layer corresponding to the Hamiltonian's couplings and external fields, featuring widely used structures like the residual connections and a recurrent architecture with clear physical meanings. However, the exponential growth, with system size, of the number of parameters of the hidden layers makes its direct application unfeasible. Nevertheless, its architecture's explicit formulation allows using statistical physics techniques to derive new ARNNs for specific systems. As examples, new effective ARNN architectures are derived from two well-known mean-field systems, the Curie-Weiss and Sherrington-Kirkpatrick models, showing superior performance in approximating the Boltzmann distributions of the corresponding physics model compared to other commonly used ARNN architectures. The connection established between the physics of the system and the ARNN architecture provides a way to derive new neural network architectures for different interacting systems and interpret existing ones from a physical perspective.
###### Contents
* I Introduction
* II Autoregressive form of the Boltzmann distribution of the pairwise interacting systems
* II.1 The single variable conditional probability
* III Models
* III.1 The Curie-Weiss model
* III.2 The Sherrington-Kirkpatrick model
* IV Results
* V Conclusions
## I Introduction
The cross-fertilization between machine learning and statistical physics, in particular of disordered systems, has a long history [1; 2]. Recently, deep neural network frameworks [3] have been applied to statistical physics problems [4] spanning a wide range of domains, including quantum mechanics [5; 6], classical statistical physics [7; 8], and chemical and biological physics [9; 10]. On the other hand, techniques borrowed from statistical physics have been used to shed light on the behavior of Machine Learning algorithms [11; 12], and even to suggest training or architecture frameworks [13; 14]. In recent years, the introduction of deep generative autoregressive models [15; 16], like transformers [17], has been a breakthrough in the field, generating images and text with a quality comparable to human-generated ones [18]. The introduction of deep ARNNs was motivated as a flexible and general approach to sample generation from a probability distribution learned from data [19; 20; 21]. In classical statistical physics, the ARNN was introduced, in a variational setting, to sample from a Boltzmann distribution (equivalently called an energy model in the computer science literature [22]) as an improvement over the standard variational approach, relying on the high expressiveness of the ARNNs [8]. Similar approaches have since been used in different contexts and domains of classical [23; 24; 25; 26; 27] and quantum statistical physics [28; 29; 30; 31; 32; 33; 34]. The ability of the ARNNs to efficiently generate samples, thanks to the ancestral sampling procedure, opened the way to overcome the slowdown of Monte-Carlo methods for frustrated or complex systems, although two recent works questioned the real gain in very frustrated systems [35; 36]. The use of ARNNs in statistical physics problems has largely relied on pre-existing neural network architectures which may not be well-suited for the particular problem at hand. This work aims to demonstrate how the knowledge of the physics model can inform the design of more effective ARNN architectures. The derivation of an exact ARNN architecture for the classical Boltzmann distribution of a general pairwise interacting Hamiltonian of binary variables will be presented. Despite the generality of the Hamiltonian, the resulting architecture displays interesting properties: the parameters of the first layer are directly related to the Hamiltonian parameters, and residual connections and recurrent structures emerge with clear physical interpretations.
The resulting deep ARNN architecture has a number of hidden-layer parameters that scales exponentially with the system's size. However, the clear physical picture of the architecture allows us to use standard statistical physics techniques to find new feasible ARNN architectures for specific Hamiltonians or energy models. To show the potential of the derived representation, the ARNN architectures for two well-known mean-field models are derived: the Curie-Weiss model (CW) and the Sherrington-Kirkpatrick model (SK). These fully connected models are chosen due to their paradigmatic role in the history of statistical physics systems. The CW model, despite its straightforward Hamiltonian, was one of the first models explaining the behavior of ferromagnet systems, displaying a second-order phase transition [37]. In this case, an exact ARNN architecture at finite N and in the thermodynamic limit is obtained, with the number of parameters scaling polynomially with the system's size.
The SK model [38] is a fully connected spin glass model of disordered magnetic materials. The system admits an analytical solution in the thermodynamic limit, the celebrated [39] k-step replica symmetric breaking (k-RSB) solution [40; 41] of Parisi. The complex many-valley landscape of the Boltzmann probability distribution captured by the k-RSB solution of the SK model is the key concept that unifies the description of many different problems, and similar replica computations are applied to very different domains like neural networks [42; 43], optimizations [44], inference problems [11], or in characterizing the jamming of hard spheres [45; 46].
In the following, I will derive an ARNN architecture for the Boltzmann distribution of the SK model for a single instance of disorder, with a finite number of variables. The derivation is based on the k-RSB solution, resulting in a deep ARNN architecture with parameters scaling polynomially with the system size.
## II Autoregressive form of the Boltzmann distribution of the pairwise interacting systems
The Boltzmann probability distribution of a given Hamiltonian \(H[\mathbf{x}]\) of a set of \(N\) binary variables \(\mathbf{x}=\left\{x_{1},x_{2},...x_{N}\right\}\) at inverse temperature \(\beta\) is \(P_{B}(\mathbf{x})=e^{-\beta H(\mathbf{x})}/Z\). Here, \(Z=\sum_{\mathbf{x}}e^{-\beta H(\mathbf{x})}\) is the normalization factor. When \(N\) is large, it is generally challenging to compute marginals and average quantities, and to generate samples from frustrated systems. Defining the sets of variables \(\mathbf{x}_{<i}=\left(x_{1},x_{2}\ldots x_{i-1}\right)\) and \(\mathbf{x}_{>i}=\left(x_{i+1},x_{i+2}\ldots x_{N}\right)\) respectively with an index smaller and larger than \(i\), then if we can rewrite the Boltzmann distribution in the autoregressive form: \(P_{B}\left(\mathbf{x}\right)=\prod_{i}P\left(x_{i}|\mathbf{x}_{<i}\right),\) it is straightforward to produce independent samples from it, thanks to the ancestral sampling procedure [8]. It has been proposed [8] to use a variational approach to approximate the Boltzmann distribution with trial autoregressive probability distributions where each conditional probability is represented by a feed-forward neural network with a set of parameters \(\theta\), \(Q^{\theta}\left(\mathbf{x}\right)=\prod_{i}Q^{\theta_{i}}\left(x_{i}|\mathbf{ x}_{<i}\right)\). The parameters \(\theta\) can be learned minimizing the (inverse) Kullback-Leibler divergence \(D_{KL}\), with the true probability function:
\[\begin{split}& D_{KL}\left(Q^{\theta}|P_{B}\right)=\sum_{ \left\{\mathbf{x}\right\}}Q^{\theta}[\mathbf{x}]\ln\left(\frac{Q^{\theta}[ \mathbf{x}]}{P_{B}[\mathbf{x}]}\right)\\ &=\beta F[Q^{\theta}]-\beta F[P_{B}]\end{split} \tag{1}\]
where:
\[F[P]=\sum_{\left\{\mathbf{x}\right\}}P[\mathbf{x}]\left[\frac{1}{\beta}\log P [\mathbf{x}]+H[\mathbf{x}]\right]\]
is the variational free energy of the system. The Kullback-Leibler divergence is always larger or equal to zero, so the variational free energy \(F[Q^{\theta}]\) is always an upper bound of the free energy of the system \(F[P_{B}]\)[8]. Minimizing the KL divergence with respect to the parameters of the ARNN is equivalent to minimizing the variational free energy \(F[Q^{\theta}]\). The computation of \(F[Q^{\theta}]\) and of its derivatives with respect to the ARNN's parameters involves a summation over all the configurations of the system, whose number grows exponentially with the system's size, making it infeasible beyond small systems. In practice, they are estimated by summing over a subset of configurations sampled directly from the ARNN via the ancestral sampling procedure [8]. Usually, an annealing procedure is applied, starting at a high temperature and slowly decreasing it. Apart from the minimization procedure, the choice of the architecture of the neural networks is crucial to obtain a good approximation of the Boltzmann distribution.
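To make the procedure concrete, the following is a minimal NumPy sketch of ancestral sampling and of the sample-based estimate of \(F[Q^{\theta}]\); the callables `q_cond` (the conditional \(Q^{\theta}(x_{i}=1|\mathbf{x}_{<i})\)) and `energy` (the Hamiltonian \(H[\mathbf{x}]\)) are placeholders for whatever model is used, not part of the paper's released code.

```python
import numpy as np

def ancestral_sample(q_cond, N, n_samples, rng):
    """Draw x in {-1,+1}^N from Q(x) = prod_i Q(x_i | x_<i) and return log Q(x).
    q_cond(i, x) must return Q(x_i = +1 | x_<i), reading only x[:i]."""
    x = np.zeros((n_samples, N))
    log_q = np.zeros(n_samples)
    for i in range(N):
        p_plus = np.array([q_cond(i, xi) for xi in x])
        x[:, i] = np.where(rng.random(n_samples) < p_plus, 1.0, -1.0)
        log_q += np.where(x[:, i] > 0, np.log(p_plus), np.log1p(-p_plus))
    return x, log_q

def variational_free_energy(x, log_q, energy, beta):
    """Monte-Carlo estimate of F[Q] = E_Q[(1/beta) log Q + H] from the samples."""
    return np.mean(log_q / beta + energy(x))
```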
### The single variable conditional probability
The generic \(i\)-th conditional probability factor of the Boltzmann distribution can be rewritten in this form:
\[\begin{split}& P\left(x_{i}|\mathbf{x}_{<i}\right)=\frac{P\left( \mathbf{x}_{<i+1}\right)}{P\left(\mathbf{x}_{<i}\right)}=\frac{\sum_{\mathbf{ x}_{>i}}P\left(\mathbf{x}\right)}{\sum_{\mathbf{x}_{>i-1}}P\left(\mathbf{x} \right)}\\ &=\frac{\sum_{\mathbf{x}_{>i}}e^{-\beta H}}{\sum_{\mathbf{x}_{>i- 1}}e^{-\beta H}}=\frac{f\left(x_{i},\mathbf{x}_{<i}\right)}{\sum_{x_{i}}f \left(x_{i},\mathbf{x}_{<i}\right)}.\end{split} \tag{2}\]
where I defined:
\[f\left(x_{i}=\pm 1,\mathbf{x}_{<i}\right)=\sum_{\mathbf{x}_{>i}}e^{-\beta H} \delta_{x_{i},\pm 1}. \tag{3}\]
The \(\delta_{a,b}\) is the Kronecker function that is one when the two values \(\left(a,b\right)\) coincide and zero otherwise. Usually, in the representation of the conditional probability \(P\left(x_{i}=1|\mathbf{x}_{<i}\right)\) as a feed-forward neural network, the set of variables \(\mathbf{x}_{<i}\) is the input, and the sigma function \(\sigma(x)=\frac{1}{1+e^{-x}}\) is the last layer, ensuring the output is between \(0\) and \(1\). The probability \(P\left(x_{i}=-1|\mathbf{x}_{<i}\right)=1-P\left(x_{i}=1|\mathbf{x}_{<i}\right)\) is straightforward to obtain. With simple algebraic manipulations, we can write:
\[\begin{split}& P\left(x_{i}=1|\mathbf{x}_{<i}\right)=\frac{f\left(1,\mathbf{x}_{<i}\right)}{f\left(1,\mathbf{x}_{<i}\right)+f\left(-1,\mathbf{x}_{ <i}\right)}\\ &=\frac{1}{1+\frac{f\left(-1,\mathbf{x}_{<i}\right)}{f\left(1, \mathbf{x}_{<i}\right)}}=\sigma\left(\log\left[f\left(1,\mathbf{x}_{<i}\right) \right]-\log\left[f\left(-1,\mathbf{x}_{<i}\right)\right]\right)\end{split} \tag{4}\]
Consider a generic two-body interaction Hamiltonian of binary spin variables \(x_{i}\in\{-1,1\}\), \(H=-\sum_{i<j}J_{ij}x_{i}x_{j}-\sum_{i}h_{i}x_{i}\), where \(J_{ij}\) are the interaction couplings and \(h_{i}\) are the external fields. Taking into account a generic variable \(x_{i}\), the elements of the Hamiltonian can be grouped into the following five sets:
\[\begin{aligned} H_{ss}&=-\sum_{s,s^{\prime}<i}J_{ss^{\prime}}x_{s}x_{s^{\prime}}-\sum_{s<i}h_{s}x_{s}\\ H_{si}[x_{i}=\pm 1]&=\mp H_{si}=\mp\Big{(}\sum_{s<i}J_{si}x_{s}+h_{i}\Big{)}\\ H_{il}[x_{i}=\pm 1]&=\mp H_{il}=\mp\sum_{l>i}J_{il}x_{l}\\ H_{sl}&=-\sum_{s<i,\,l>i}J_{sl}x_{s}x_{l}\\ H_{ll}&=-\sum_{l,l^{\prime}>i}J_{ll^{\prime}}x_{l}x_{l^{\prime}}-\sum_{l>i}h_{l}x_{l}\end{aligned}\]
where the dependence on the variable \(x_{i}\) is made explicit. Substituting them in eq.4, we obtain:
\[P\left(x_{i}=1|\mathbf{x}_{<i}\right)=\sigma\bigg{(}2\beta H_{si} [\mathbf{x}_{<i}]+\log(\rho_{i}^{+}[\mathbf{x}_{<i}]) \tag{5}\] \[-\log(\rho_{i}^{-}[\mathbf{x}_{<i}])\bigg{)},\]
where:
\[\rho_{i}^{\pm}[\mathbf{x}_{<i}]=\sum_{\mathbf{x}_{>i}}e^{-\beta(\mp H_{il}+H_{sl}+H_{ll})} \tag{6}\]
The set of elements \(H_{ss}\) cancels out. The conditional probability, eq.5, can be interpreted as a feed-forward neural network by following, starting from the input, the operations performed on the variables \(\mathbf{x}_{<i}\); the network has the following architecture (see fig.1):
\[P_{i}\left(x_{i}=1|\mathbf{x}_{<i}\right) =\sigma\bigg{\{}x_{i}^{1}+\log\big{[}\sum_{c}e^{b_{c}^{+}+\sum_{ l=i+1}^{N}w_{cl}x_{il}^{1}}\big{]}\] \[-\log\big{[}\sum_{c}e^{b_{c}^{-}+\sum_{l=i+1}^{N}w_{cl}x_{il}^{1} }\big{]}\bigg{\}}, \tag{7}\]
where:
\[x_{i}^{1} =2\beta H_{si}=2\beta(\sum_{s=1}^{i-1}J_{si}x_{s}+h_{i}), \tag{8}\] \[x_{il}^{1} =\sum_{s=1}^{i-1}J_{sl}x_{s}, \tag{9}\]
are the output of the first layer. Then, a second layer acts on the set of \(x_{il}^{1}\) (see fig.1). The sum \(\sum_{c}\) runs over the \(2^{N-i}\) configurations of the set of \(\mathbf{x}_{>i}\) variables. The parameters of the second layer are
\[b_{c}^{\pm} =\beta\sum_{l=i+1}^{N}(\pm J_{il}+h_{l}+\sum_{l^{\prime}=l+1}^{N }J_{ll^{\prime}}x_{l^{\prime}}^{c})x_{l}^{c} \tag{10}\] \[w_{cl} =\beta x_{l}^{c}, \tag{11}\]
where \(c\) is the index of the configuration of the set of \(\mathbf{x}_{>i}\) variables. Then, the two functions \(\rho^{\pm}\) are obtained by applying the non-linear operator \(\log\Sigma\exp(\mathbf{x})=\log(\sum_{i}e^{x_{i}})\) to the output of the second layer (see fig.1). As the last layer, the two \(\rho^{\pm}\) and \(x_{i}^{1}\) are combined with the right signs, and the sigma function is applied. The whole ARNN architecture of the Boltzmann distribution of the general pairwise interacting Hamiltonian (H\({}_{2}\)ARNN) is depicted in fig.1. The total number of parameters scales exponentially with the system size, making its direct use infeasible for the sampling process. Nevertheless, the H\({}_{2}\)ARNN architecture shows some interesting features:
* The weights and biases of the first layers are the parameters of the Hamiltonian of the Boltzmann distribution. As far as the author knows, this is the first time that this connection is derived.
* Residual connections among layers, due to the \(x_{i}^{1}\) variables, naturally emerge from the derivation. The importance of residual connections has recently been highlighted [47], and they have become a crucial element in the success of the ResNet and transformer architectures [48] in classification and generation tasks. They were introduced as a way to improve the training of deep neural networks, avoiding the exploding and vanishing gradient problems. In our context, they represent the direct interactions among the variable \(x_{i}\) and all the previous variables \(\mathbf{x}_{<i}\).
* The H\({}_{2}\)ARNN has a recurrent structure [3; 49], similar to some ARNN architectures used in statistical physics problems [31; 27]. The first layer, see figure 1, is composed of the following set of linear operators on the input: \(x_{il}^{1}(\mathbf{x}_{<i})=\sum_{s=1}^{i-1}J_{sl}x_{s}\) with \(i<l\leq N\). The set of \(x_{il}^{1}\) can be rewritten in recursive form observing that: \[x_{il}^{1}=x_{i-1,l}^{1}+J_{i-1,l}x_{i-1}\] (12) The neurons \(x_{il}^{1}\) in the first layer of each conditional probability in the H\({}_{2}\)ARNN architecture depend on the output of the first layer, \(x_{i-1,l}^{1}\), of the previous conditional probability.

Figure 1: \(\mathbf{H}_{2}\)**ARNN** Architectures of a single Boltzmann conditional probability of a pairwise interacting Hamiltonian, eq.7. The \(x_{<i}\) variables are the input; the output is the estimation of the conditional probability \(Q^{\theta}(x_{i}=1|\mathbf{x}_{<i})\). The first layer computes the \(x_{i}^{1}\) and \(x_{il}^{1}\) variables, see eqs.8-9, whose weights and biases, directly related to the Hamiltonian parameters, are shown in orange. The non-linear operators are represented in square form. The width of the second layer grows exponentially with the system size. The last layer is the sigma function.
The number of parameters of the layers of the feed-forward neural network representations of the \(\rho_{i}^{\pm}\) functions, eq.6, scales exponentially with the system's size, proportionally to \(2^{N-i}\). The functions \(\rho_{i}^{\pm}\) take into account the effect of the interactions among \(\mathbf{x}_{<i}\) and \(\mathbf{x}_{>i}\) on the variable \(x_{i}\). The \(\rho_{i}^{\pm}\) function can be interpreted as the partition function of a system where the variables are the \(\mathbf{x}_{>i}\) and the external fields are determined by the values of the variables \(\mathbf{x}_{<i}\). Starting from this observation, in the following, I will show how to use standard tools of statistical physics to derive deep ARNN architectures that eliminate the exponential growth of the number of parameters.
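For reference, the conditional probability of eq.2 can be checked by brute force on very small systems; the sketch below enumerates the \(2^{N-i-1}\) configurations of \(\mathbf{x}_{>i}\) explicitly, which is exactly the exponential cost the derived architectures are meant to remove. Function and variable names are illustrative, and the index \(i\) is 0-based here.

```python
import itertools
import numpy as np

def exact_conditional(J, h, beta, i, x_prev):
    """Brute-force P(x_i = +1 | x_<i) for H = -sum_{s<t} J_st x_s x_t - sum_s h_s x_s,
    via eq. (2): enumerate all 2^(N-i-1) tails x_>i. The index i is 0-based and
    len(x_prev) == i; the cost is exponential, so this is only usable for tiny N."""
    N = len(h)

    def f(xi):  # eq. (3) with the Kronecker delta resolved
        total = 0.0
        for tail in itertools.product([-1.0, 1.0], repeat=N - i - 1):
            x = np.concatenate([x_prev, [xi], tail])
            H = -sum(J[s, t] * x[s] * x[t]
                     for s in range(N) for t in range(s + 1, N)) - np.dot(h, x)
            total += np.exp(-beta * H)
        return total

    f_plus, f_minus = f(+1.0), f(-1.0)
    return f_plus / (f_plus + f_minus)
```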
## III Models
### The Curie-Weiss model
The Curie-Weiss model (CW) is a uniform, fully-connected Ising model. The Hamiltonian, with \(N\) spins, is \(H\left(\mathbf{x}\right)=-h\sum_{i=1}^{N}x_{i}-\frac{J}{N}\sum_{i<j}x_{i}x_{j}\). The conditional probability of a spin \(i\), eq.5, of the CW model is:
\[P^{CW}\left(x_{i}=1|\mathbf{x}_{<i}\right)=\sigma\bigg{(}2\beta h +2\beta\frac{J}{N}\sum_{s=1}^{i-1}x_{s}+\\ \log(\rho_{i}^{+}[\mathbf{x}_{<i}])-\log(\rho_{i}^{-}[\mathbf{x} _{<i}])\bigg{)}, \tag{13}\]
where:
\[\rho_{i}^{\pm}[\mathbf{x}_{<i}]\propto\sum_{\mathbf{x}_{>i}}e^{\beta\left(h \pm\frac{J}{N}+\frac{J}{N}\sum_{s<i}x_{s}\right)\sum_{l>i}x_{l}+\frac{\beta J} {2N}\left(\sum_{l,l^{\prime}>i}x_{l}x_{l^{\prime}}\right)} \tag{14}\]
Defining \(h_{i}^{\pm}[\mathbf{x}_{<i}]=h\pm\frac{J}{N}+\frac{J}{N}\sum_{s=1}^{i-1}x_{s}\), at given \(\mathbf{x}_{<i}\), eq.14 is equivalent to the partition function of a CW model, with \(N-i\) spins and external fields \(h_{i}^{\pm}\). As shown in the appendix, the summations over \(\mathbf{x}_{>i}\) can be easily done, finding the following expression:
\[\rho_{i}^{\pm}[\mathbf{x}_{<i}]=\sum_{k=0}^{N-i}e^{b_{i,k}^{\pm}+\omega_{i,k}^{\pm}\sum_{s<i}x_{s}}\]
where we defined:
\[b_{i,k}^{\pm}=\log\left(\binom{N-i}{k}\right)+\frac{\beta J}{2N }\left(N-i-2k\right)^{2}+\\ (N-i-2k)\left(\beta h\pm\frac{\beta J}{N}\right) \tag{15}\] \[\omega_{i,k}^{\pm}=\frac{\beta J}{N}\left(N-i-2k\right). \tag{16}\]
The final feed-forward architecture of the Curie-Weiss Autoregressive Neural Network (CW\({}_{N}\)) is:
\[P^{CW_{N}}\left(x_{i}=+1|\mathbf{x}_{<i}\right)=\sigma\bigg{[}b^{0}+\omega_{i}^{0}\sum_{s=1}^{i-1}x_{s}\\ +\log\big{(}\sum_{k=0}^{N-i}e^{b_{i,k}^{+}+\omega_{i,k}^{+}\sum_{s=1}^{i-1}x_{s}}\big{)}-\log\big{(}\sum_{k=0}^{N-i}e^{b_{i,k}^{-}+\omega_{i,k}^{-}\sum_{s=1}^{i-1}x_{s}}\big{)}\bigg{]}, \tag{17}\]
where \(b^{0}=2\beta h\) and \(\omega_{i}^{0}=\frac{2\beta J}{N}\) are the same, and so shared, among all the conditional probability functions, see fig.2. All parameters have an analytic dependence on the parameters \(J\) and \(h\) of the Hamiltonian of the system.
The number of parameters of a single conditional probability of the CW\({}_{N}\) is \(2+4(N-i)\), decreasing as \(i\) increases. The CW\({}_{N}\) architecture depends only on the sum of the input variables. The total number of parameters of the whole conditional probability distribution scales as \(2N^{2}+O(N)\).
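As an illustration, the CW\({}_{N}\) conditional probability of eqs.15-17 can be evaluated directly, since all its parameters are fixed analytically by \((J,h,\beta)\); the following sketch (using SciPy's `logsumexp`, `gammaln` and `expit`) assumes a 1-based site index \(i\), so that \(\mathbf{x}_{<i}\) has \(i-1\) entries.

```python
import numpy as np
from scipy.special import expit, gammaln, logsumexp

def cw_conditional(N, i, x_prev, beta, J=1.0, h=0.0):
    """CW_N conditional P(x_i = +1 | x_<i) from eqs. (15)-(17). The site index
    i is 1-based, so len(x_prev) == i - 1; nothing here is learned."""
    S = np.sum(x_prev)                 # the architecture only sees sum(x_<i)
    k = np.arange(N - i + 1)
    M = N - i - 2 * k                  # magnetisation of the N - i spins x_>i
    log_binom = gammaln(N - i + 1) - gammaln(k + 1) - gammaln(N - i - k + 1)

    def log_rho(sign):                 # log rho_i^{+/-}, eq. (15)-(16)
        b = log_binom + beta * J / (2 * N) * M**2 + M * (beta * h + sign * beta * J / N)
        w = beta * J / N * M
        return logsumexp(b + w * S)

    return expit(2 * beta * h + 2 * beta * J / N * S + log_rho(+1) - log_rho(-1))
```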
If we consider the thermodynamic limit, \(N\gg 1\), the ARNN architecture of the CW model simplifies (see sec.IB of the appendix for details) to the following expression:
\[P^{CW_{\infty}}\left(x_{i}=1|\mathbf{x}_{<i}\right)=\sigma\left(b^{0}+\omega_ {i}^{0}\sum_{s=1}^{i-1}x_{s}+\omega_{i}^{1}\text{sgn}(\sum_{s=1}^{i-1}x_{s}) \right) \tag{18}\]
where \(b^{0}=2\beta h\), \(\omega_{i}^{0}=\frac{2\beta J}{N}\) are the same as before, and shared, among all the conditional probability functions.
Figure 2: **CW\({}_{N}\) and CW\({}_{\infty}\) architectures of a single conditional probability.** Diagrams A and B represent the CW\({}_{N}\) and CW\({}_{\infty}\) architectures, respectively. Both diagrams involve the operation of the sum of the input variables \(\mathbf{x}_{<i}\). A skip connection, composed of a shared weight (represented by the orange line), is also present in both cases. In the CW\({}_{N}\) architecture, \(2(N-1)\) linear operations are applied (with fixed weights and biases, as indicated in eqs.15-16), followed by two non-linear operations represented by \(\log\sum\exp(x)\). On the other hand, in the CW\({}_{\infty}\) architecture, apart from the skip connection, the input variables undergo a \(sgn\) operation before being multiplied by a free weight parameter and passed through the final layer represented by the sigma function. The number of parameters in the CW\({}_{N}\) architecture scales as \(2N^{2}+O(N)\), while in the CW\({}_{\infty}\) architecture it scales as \(N+1\).
The \(\omega_{i}^{1}=-2\beta J|m_{i}|\) is different for each conditional probability and can be computed analytically. The total number of parameters of the \(\text{CW}_{\infty}\) scales as \(N+1\).
### The Sherrington-Kirkpatrick model
The SK Hamiltonian, with zero external fields for simplicity, is given by:
\[H\left(\mathbf{x}\right)=-\sum_{i<j}J_{ij}x_{i}x_{j} \tag{18}\]
where the set of couplings, \(\underline{J}\), are i.i.d. random variables drawn from a Gaussian probability distribution \(P(J)=\mathcal{N}(0,J^{2}/N)\).
To find a feed-forward representation of the conditional probability of its Boltzmann distribution, we have to compute the quantities in eq.6, which, defining \(h_{l}^{\pm}[\mathbf{x}_{<i}]=\pm J_{il}+\sum_{s=1}^{i-1}J_{sl}x_{s}\), can be written as:
\[\rho_{i}^{\pm}[\mathbf{x}_{<i}]=\sum_{\mathbf{x}_{>i}}\exp\left(\beta\left[\sum_{l=i+1}^{N}h_{l}^{\pm}[\mathbf{x}_{<i}]x_{l}+\sum_{l^{\prime}>l>i}^{N}J_{ll^{\prime}}x_{l}x_{l^{\prime}}\right]\right)\]
The above equation can be interpreted as an SK model over the variables \(\mathbf{x}_{>i}\) with site-dependent external fields \(h_{l}^{\pm}[\mathbf{x}_{<i}]\). I will use the replica trick [50], which is usually applied together with the average over the system's disorder. In our case, we deal with a single instance of disorder: the set of couplings is fixed. In the following, I will assume that \(N-i\gg 1\), and the average over the disorder \(\mathbb{E}\) is taken over the coupling parameters \(J_{ll^{\prime}}\) with \(l,l^{\prime}>i\). In practice, I will use the following approximation to compute the quantity:
\[\log\rho_{i}^{\pm}\sim\mathbb{E}\left[\log\rho_{i}^{\pm}\right]=\lim_{n\to 0 }\frac{\log(\mathbb{E}\left[(\rho_{i}^{\pm})^{n}\right])}{n}\]
In the last equality, I use the replica trick. Implicitly, I assume that the quantities \(\log\rho_{i}^{\pm}\) are self-averaged on the \(\mathbf{x}_{>i}\) variables. The expression of the average over the disorder of the replicated function is:
\[\mathbb{E}_{\underline{J}_{ll^{\prime}}}\left[(\rho_{i}^{\pm}[ \mathbf{x}_{<i}])^{n}\right]=\int\prod_{l<l^{\prime}}dP_{J_{ll^{\prime}}} \bigg{\{}\sum_{\{\underline{x}^{a}\}_{i+1}^{N}}\exp\bigg{[}\\ \beta\bigg{(}\sum_{\begin{subarray}{c}i<l\leq N\\ 1\leq a\leq n\end{subarray}}h_{l}^{\pm}[\mathbf{x}_{<i}]x_{l}^{a}+\sum_{ \begin{subarray}{c}i<l<l^{\prime}\leq N\\ 1\leq a\leq n\end{subarray}}J_{ll^{\prime}}x_{l}^{a}x_{l^{\prime}}^{a}\bigg{)} \bigg{]}\bigg{\}} \tag{19}\]
where \(dP_{J_{ll^{\prime}}}=P(J_{ll^{\prime}})dJ_{ll^{\prime}}\). Computing the integrals over the disorder, we find:
\[\mathbb{E}_{\underline{J}_{ll^{\prime}}}\left[(\rho_{i}^{\pm}[ \mathbf{x}_{<i}])^{n}\right]\propto\int\prod_{a<b}dQ_{ab}e^{-\frac{N}{2}\beta^ {2}Q_{a,b}^{2}}\prod_{l}\left[\sum_{\{x_{l}^{a}\}}\exp\left\{\beta \left[h_{l}^{\pm}[\mathbf{x}_{<i}]\sum_{a}x_{l}^{a}+\beta\sum_{a<b}Q_{a,b}x_{l }^{a}x_{l}^{b}\right]\right\}\right] \tag{20}\]
where in the last line I used the Hubbard-Stratonovich transformation to linearize the quadratic terms. The Parisi solutions of the SK model prescribe how to parametrize the matrix of the overlaps \(\{Q_{a,b}\}\)[50]. The easiest way to parametrize the matrix of the overlaps is the replica symmetric solution (RS), where the overlaps are equal and independent of the replica indices:
\[Q_{a,b}=\begin{cases}0,&\text{if }a=b\\ q,&\text{otherwise}\end{cases},\]
Then a sequence of better approximations can be obtained by breaking the replica symmetry step by step, from the 1-step replica symmetric breaking (1-RSB) to the k-step replica symmetric breaking (k-RSB) solutions. The \(k\to\infty\) limit of the k-RSB solution gives the exact solution of the SK model [51]. The sequence of k-RSB approximations can be seen as nested non-linear operations [52], see appendix for details.
Every k-step replica symmetric breaking solution adds a Gaussian integral and two more free variational parameters to the representation of the \(\rho^{\pm}\) functions. In the following, we will use a feed-forward representation that enlarges the space of parameters, using a more computationally friendly non-linear operator. Numerical evidence of the quality of the approximation used is shown in the appendix. Overall, the parametrization of the overlaps matrix allows one to perform the sum over all the configurations of the variables \(\mathbf{x}_{>i}\), getting rid of the exponential scaling of the number of parameters with the system's size. The final ARNN architecture of the SK model (SK-ARNN) is (see appendix for details)
\[Q^{\text{RS/k-RSB}}\left(x_{i}=1|\mathbf{x}_{<i}\right)=\sigma \bigg{(}x_{i}^{1}(\mathbf{x}_{<i})\\ +\log(\rho_{i}^{+,(\text{RS/kRSB})})-\log(\rho_{i}^{-,(\text{RS/kRSB})}) \bigg{)}. \tag{21}\]
For RS and 1-RSB cases we have:
\[\log\rho^{\pm,RS}=\sum_{l^{\pm}=i+1}^{N}w_{0}^{i,l^{\pm}}\log\sigma(b_{1}^{i,l^ {\pm}}+w_{1}^{i,l^{\pm}}x_{i,l^{\pm}}^{1}(\mathbf{x}_{<i}))\\ \log\rho^{\pm,1RSB}=\sum_{l^{\pm}=i+1}^{N}w_{0}^{i,l^{\pm}} \log\sigma(b_{1}^{i,l^{\pm}}+\\ w_{1}^{i,l^{\pm}}\log\sigma(b_{2}^{i,l^{\pm}}+w_{2}^{i,l^{\pm}}x_{i,l^{ \pm}}^{1}(\mathbf{x}_{<i}))).\]
The set of \(x_{i,l^{\pm}}^{1}(\mathbf{x}_{<i})\) is the output of the first layer of the ARNN, see eqs.8-9, and \((w_{0}^{i,l^{\pm}},b_{1}^{i,l^{\pm}},w_{1}^{i,l^{\pm}},b_{2}^{i,l^{\pm}},w_{2}^{ i,l^{\pm}})\) are free variational parameters of the ARNN, see fig.3.
The number of parameters of a single conditional probability distribution scales as \(2(k+1)(N-i)\), where \(k\) is the level of the k-RSB solution used, with \(k=0\) corresponding to the RS solution.
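A possible PyTorch realisation of a single RS conditional (eq.21 with the \(\log\rho^{\pm,RS}\) expression above) is sketched below; it assumes zero external fields, as in the SK Hamiltonian considered here, and the class name and interface are illustrative rather than the paper's released implementation.

```python
import torch

class SKConditionalRS(torch.nn.Module):
    """Sketch of one conditional Q^RS(x_i = 1 | x_<i) of the SK ARNN (eq. 21),
    assuming zero external fields. The first layer is fixed by the couplings J;
    (w0, b1, w1) are the free variational parameters, one per site l > i and
    per sign of log(rho^+/-)."""
    def __init__(self, J, i, beta):
        super().__init__()
        self.register_buffer("J_si", 2.0 * beta * J[:i, i])  # x_i^1 weights, eq. 8
        self.register_buffer("J_sl", J[:i, i + 1:])          # x_il^1 weights, eq. 9
        m = J.shape[0] - i - 1                               # number of sites l > i
        self.w0 = torch.nn.Parameter(0.1 * torch.randn(2, m))
        self.b1 = torch.nn.Parameter(torch.zeros(2, m))
        self.w1 = torch.nn.Parameter(torch.ones(2, m))

    def forward(self, x_prev):                               # x_prev: (batch, i)
        x1_i = x_prev @ self.J_si                            # residual term x_i^1
        x1_il = x_prev @ self.J_sl                           # (batch, m)
        def log_rho(s):                                      # s = 0 -> +, s = 1 -> -
            return (self.w0[s] * torch.nn.functional.logsigmoid(
                self.b1[s] + self.w1[s] * x1_il)).sum(-1)
        return torch.sigmoid(x1_i + log_rho(0) - log_rho(1))
```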
## IV Results
In this section, I compare several ARNN architectures in learning to generate samples from the Boltzmann distribution of the CW and SK models. Moreover, the ability to recover the Hamiltonian coupling parameters from Monte-Carlo-generated samples is presented. The CW\({}_{N}\), CW\({}_{\infty}\) and SK\({}_{RS/kRSB}\) architectures, presented in previous sections, are compared with:
* The one parameter (1P) architecture, where a single weight parameter is multiplied by the sum of the input variables, and then the sigma function is applied. This architecture was already used for the CW system in [36]. The total number of parameters scales as \(N\).
* The single layer (1L) architecture, where a fully connected single linear layer parametrizes the whole probability distribution; a mask is applied to a subset of the weights in order to preserve the autoregressive properties. The width of the layer is \(N\), and the total number of parameters scales as \(N^{2}\)[15].
* The MADE architecture [15], where the whole probability distribution is represented by a deep sequence of fully connected layers, with non-linear activation functions and masks in between them to ensure the autoregressive properties (a minimal sketch of the masking idea is given below). Compared to 1L, the deep architecture of MADE enhances the expressive power. The MADE\({}_{dw}\) used has \(d\) hidden layers, each of width \(w\) times the number of input variables \(N\). For instance, the 1L architecture is equivalent to MADE\({}_{11}\), and MADE\({}_{23}\) has two hidden fully-connected layers, each of width \(3N\).
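The masking idea behind these autoregressive baselines can be summarised in a few lines; the sketch below shows only the single-layer (MADE\({}_{11}\), i.e. 1L) case, while full MADE additionally assigns degrees to hidden units in order to mask the inner layers as well.

```python
import torch

def autoregressive_mask(n):
    """Strictly lower-triangular mask: output unit i sees only inputs j < i."""
    return torch.tril(torch.ones(n, n), diagonal=-1)

class MaskedLinear(torch.nn.Linear):
    """Linear layer with a fixed binary mask on the weights (the 1L baseline,
    i.e. MADE_11; deeper MADEs also mask the hidden layers via unit degrees)."""
    def __init__(self, n):
        super().__init__(n, n)
        self.register_buffer("mask", autoregressive_mask(n))

    def forward(self, x):
        return torch.nn.functional.linear(x, self.weight * self.mask, self.bias)
```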
The parameters of the ARNN are trained using the Kullback-Leibler divergence or, equivalently, the variational free energy, see equation 1. Given an ARNN \(Q^{\theta}\) that depends on a set of parameters \(\theta\), and the Boltzmann distribution \(P\), the variational free energy can be estimated as:
\[F[Q^{\theta}] =\sum_{\{\mathbf{x}\}}Q^{\theta}\left[\frac{1}{\beta}\log Q^{ \theta}+H[\mathbf{x}]\right]\approx\frac{1}{M}\sum_{\mathbf{x}\sim Q^{\theta}}\left[\frac{1}{\beta}\log Q ^{\theta}+H[\mathbf{x}]\right],\]
where the \(M\) samples are extracted, thanks to the ancestral sampling, from the ARNN \(Q^{\theta}\). At each step of the training, the derivative of the variational free energy with respect to the parameters \(\theta\) is estimated and used to update the parameters of the ARNN. Then a new batch of samples is extracted from the ARNN and used again to compute the derivative of the variational free energy and update the parameters [8]. This process is repeated until a stopping criterion is met or a maximum number of steps is reached. For each model and temperature, a maximum of 1000 epochs is allowed, with a batch size of 2000 samples and a learning rate of 0.001. The ADAM algorithm [53] was applied for the optimization of the ARNN parameters. An annealing procedure was used to improve the performance and avoid mode-collapse problems [8], where the inverse temperature \(\beta\) was increased from 0.1 to 2.0 with a step of 0.05. The code was written using the PyTorch framework [54], and it is open-source, released in [55].
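A schematic version of this annealed training loop is given below; `model.sample(batch)` is a placeholder returning samples together with their log-probabilities, and the gradient of the variational free energy is estimated with the standard score-function (REINFORCE) estimator with a baseline, following the variational approach of [8].

```python
import torch

def train_arnn(model, energy, beta, n_steps=1000, batch=2000, lr=1e-3):
    """One annealing stage at fixed inverse temperature beta.
    `model.sample(batch)` is assumed to return (x, log_q) via ancestral
    sampling, with log_q differentiable w.r.t. the ARNN parameters."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_steps):
        x, log_q = model.sample(batch)
        with torch.no_grad():
            reward = log_q / beta + energy(x)   # per-sample free energy
            reward = reward - reward.mean()     # baseline reduces the variance
        loss = (reward * log_q).mean()          # score-function estimator of dF/dtheta
        opt.zero_grad()
        loss.backward()
        opt.step()

# Annealing schedule used in the text: beta from 0.1 to 2.0 in steps of 0.05.
# for beta in torch.arange(0.1, 2.0001, 0.05):
#     train_arnn(model, energy, float(beta))
```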
The CW\({}_{N}\) has all its parameters fixed and precomputed analytically, see eq.15. The CW\({}_{\infty}\) has one free parameter for each of its conditional probability distributions, and one shared parameter to be trained, see eq.17. The parameters of the first layer of the SK\({}_{RS/kRSB}\) architecture are shared and fixed by the values of the couplings and fields of the Hamiltonian. The parameters of the hidden layers are free and trained. The parameters of the MADE\({}_{dw}\), 1L and 1P architectures are all free and trained.
As explained in sec.II, the variational free energy \(F[Q^{\theta}]\) is always an upper bound of the free energy of the system, and its value will be used in the following as a benchmark of the performance of the ARNN architectures tested in approximating the Boltzmann distribution. After the training procedure, the variational free energy was estimated using 20,000 configurations sampled from each of the ARNN architectures considered. The training procedure was the same for all the experiments unless otherwise specified.
_CW model._ The results on the CW model, with \(J=1\) and \(h=0\), are shown in fig.4.
Figure 3: **SK\({}_{RS/kRSB}\) architectures of a single conditional probability** The diagram depicts the SK\({}_{RS/kRSB}\) architectures that approximate a single conditional probability of the Boltzmann distribution in the SK model. The input variables are \(\mathbf{x}_{<i}\), while the output is the conditional probability \(Q^{\text{RS/kRSB}}(x_{i}=1|\mathbf{x}_{<i})\). The non-linear operations are represented by squares and the linear operations by solid lines. The parameters, in the orange lines, are equal to the Hamiltonian parameters and shared among the conditional probabilities, as indicated in equation 8. The depth of the network is determined by the level of approximation used, with the \(Q^{\text{RS}}\) architecture having only one hidden layer and the \(Q^{\text{k-RSB}}\) architecture having a sequence of \(k+1\) hidden layers. The total number of parameters scales as \(2(k+1)N^{2}+O(N)\), where the \(RS\) case corresponds to \(k=0\).
The plots A1, A2, and A3, in the first row, show the relative error of the free energy density (\(fe[P]=F[P]/N\)) with respect to the exact one, computed analytically [37] (see appendix for details), for different system sizes \(N\). The variational free energy density estimated from samples generated with the CW\({}_{N}\) architecture shows no appreciable difference from the analytic solution, and for the CW\({}_{\infty}\) it improves as the system size increases. Fig.4.B plots the error in the estimation of the free energy density for the architectures with fewer parameters, 1P and CW\({}_{\infty}\) (both scaling linearly with the system's size); it shows clearly that a deep architecture, in this case with only one more parameter, improves the accuracy by orders of magnitude. The need for deep architectures, already on a model as simple as the CW, is indicated by the poor performance of the 1L architecture: despite its number of parameters scaling as \(N^{2}\), it achieves results similar to the 1P. The MADE architecture obtains good results but is not comparable to CW\({}_{N}\), even though it has a similar number of parameters. The plot in fig.4.C shows the distribution of the overlaps, \(q_{\mathbf{a},\mathbf{b}}=\frac{1}{N}\sum_{i}a_{i}b_{i}\), where \(\mathbf{a},\mathbf{b}\) are two system configurations, between the samples generated by the ARNNs. The distribution is computed at \(\beta=1.3\) for \(N=200\). It can be seen that the poor performance of the 1-layer networks (1P, 1L) is due to the difficulty of correctly representing the configurations with magnetization different from zero in the proximity of the phase transition. This could be due to mode collapse problems [36], which do not affect the deeper ARNN architectures tested.
_SK model._ In figure 5, the results for the SK model, with \(J=1\) and \(h=0\), are shown; as before, the first row reports the relative error in the estimation of the free energy density at different system sizes.
In this case, the exact solution, for a single instance of the disorder and finite \(N\), is not known. The free energy estimation of the SK\({}_{2RSB}\) was taken as the reference to compute the relative difference. The free energy estimations of SK\({}_{kRSB}\) with \(k=1,2\) are very close to each other. The performance of the SK\({}_{RS}\) net is the same as the 1L architecture, even with a much higher number of parameters. The MADE architectures tested, even with a number of parameters similar to the SK\({}_{kRSB}\) nets (see fig.5.B), estimate a larger free energy, with differences increasing with \(N\). To better assess the difference in the approximation of the Boltzmann distribution by the architectures tested, I check the distributions of the overlaps \(q\) among the generated samples. The SK model, with \(J=1\) and \(h=0\), undergoes a phase transition at \(\beta=1\), where a glassy phase is formed and an exponential number of metastable states appears [50]. This fact is reflected in the distribution of overlaps, which takes values different from zero in a wide region of values of \(q\)[56]. Observing the distribution of the overlaps in the glassy phase, \(\beta=1.3\), between the samples generated by the ARNNs (fig.5.C), we can see that the distribution generated by the SK\({}_{kRSB}\) is higher in the region between the peak and zero overlaps, suggesting that these architectures better capture the complex landscape of the SK Boltzmann probability distribution [56].
The final test for the derived SK\({}_{kRSB}\) architectures involves evaluating the ability to recover the Hamiltonian couplings of the system using only samples extracted from the Boltzmann distribution of a single instance of
Figure 5: **Results for SK model.** The SK model considered has \(J=1\) and \(h=0\) (see the text for details). The system undergoes a phase transition at \(\beta=1\)[50]. [**A1, A2, A3**] Relative difference in the estimation of the free energy for increasing system sizes with respect to the free energy computed by the SK\({}_{2RSB}\) architecture. The results are averaged over 10 instances of the disorder. [**B**] Scaling with \(N\) of the number of parameters of the ARNN architectures. [**C**] Distribution of the overlaps of the samples generated by the ARNN architectures for the SK model with \(N=200\) variables and \(\beta=1.5\), averaged over 10 different instances.
Figure 4: **Results for CW model.** The CW model considered has \(J=1\) and \(h=0\) (see the text for details). The system undergoes a second-order phase transition at \(\beta=1\), where a spontaneous magnetization appears [37]. [**A1, A2, A3**] Relative error in the estimation of the free energy for different system sizes with respect to the analytic solution. [**B**] Scaling with \(N\) of the mean and max relative error of the two smaller architectures, 1P and CW\({}_{\infty}\), both scaling linearly with the system's size. [**C**] Distribution of the overlaps of the samples generated by the ARNNs for the CW system with \(N=200\) variables and \(\beta=1.3\)
the SK model. 10,000 samples were generated using the Metropolis Monte-Carlo algorithm, and the SK\({}_{1RSB}\) was trained to minimize the log-likelihood computed on these samples (see SI for details). According to the derivation of the SK\({}_{kRSB}\) architecture, the weights of the first layer of the neural network should correspond to the coupling parameters of the Hamiltonian. Due to the gauge invariance of the Hamiltonian with respect to the change of sign of all the couplings \(J\)s, I will consider their absolute values in the comparison. The weight parameters of the first layer of the SK\({}_{1RSB}\) were initialized at small random values. As shown in Fig.6, there is a strong correlation between the weights of the first layer and the couplings of the Hamiltonian, even though the neural network was trained in an over-parameterized setting; it has 60,000 parameters, significantly more than the number of samples.
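For completeness, a minimal sketch of a Metropolis sampler used to build such a training set might look as follows; it assumes a symmetric coupling matrix \(J\) with zero diagonal, and the number of sweeps between stored samples is an illustrative choice.

```python
import numpy as np

def metropolis_sk(J, beta, n_samples, n_sweeps=10, seed=0):
    """Metropolis sampling from the SK Boltzmann distribution; J must be a
    symmetric coupling matrix with zero diagonal. n_sweeps full sweeps are
    performed between stored samples to reduce their correlation."""
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    x = rng.choice([-1.0, 1.0], size=N)
    samples = []
    for _ in range(n_samples):
        for _ in range(n_sweeps * N):
            i = rng.integers(N)
            dE = 2.0 * x[i] * (J[i] @ x)      # energy change of flipping spin i
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                x[i] = -x[i]
        samples.append(x.copy())
    return np.array(samples)
```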
## V Conclusions
In this study, the exact Autoregressive Neural Network (ARNN) architecture (H\({}_{2}\)ARNN) of the Boltzmann distribution of a generic pairwise interacting Hamiltonian was derived. The H\({}_{2}\)ARNN is a deep neural network, with the weights and biases of the first layer corresponding to the couplings and external fields of the Hamiltonian, see eqs.8-9. The H\({}_{2}\)ARNN architecture has skip-connections and a recurrent structure with a clear physical interpretation. Although the H\({}_{2}\)ARNN is not directly usable, due to the exponential increase of the number of hidden-layer parameters with the size of the system, its explicit formulation allows using statistical physics techniques to derive tractable architectures for specific problems. For example, ARNN architectures scaling polynomially with the system's size are derived for the CW and SK models. In the case of the SK model, the derivation is based on the sequence of k-step replica symmetric breaking solutions, which were mapped to a sequence of deeper ARNN architectures.
The results, assessing the ability of the ARNN architectures to learn the Boltzmann distribution of the CW and SK models, indicate that the derived architectures outperform commonly used ARNNs. Furthermore, the close connection between the physics of the problem and the neural network architecture is shown in the results of fig.6. In this case, the SK\({}_{1RSB}\) architecture was trained on samples generated with the Monte-Carlo technique from the Boltzmann distribution of an SK model; the weights of the first layer of the SK\({}_{1RSB}\) were found to have a strong correlation with the coupling parameters of the Hamiltonian.
Even though the derivation of a simple and compact ARNN architecture is not always feasible for all types of pairwise interactions and exactly solvable physics systems are rare, the explicit form of the H\({}_{2}\)ARNN and its clear physical interpretation provides a means to derive approximate architectures for specific Boltzmann distributions.
In this work, while the ARNN architecture of an SK model was derived, its learnability was not thoroughly examined. The problem of finding the configurations of minimum energy for the SK model is known to belong to the NP-hard class, and the effectiveness of the ARNN approach in solving this problem is still uncertain and a matter of ongoing research [27; 35; 36]. Further systematic studies are needed to fully understand the learnability of the ARNN architecture presented in this work at very low temperatures and also on different systems.
There are several promising directions for future research to expand upon the presented ARNN architectures. For instance, deriving the architecture for statistical models with more than binary variables. In statistical physics, models whose variables have more than two states are called Potts models. Language models, where each variable represents a word and can take values among a huge number of states, usually more than tens of thousands of possible words (or states), belong to this set of systems. The generalization of the present work to Potts models could allow us to connect the physics of the problem to recent language generative models like the transformer architecture [18]. Another direction could be to consider systems with interactions beyond pairwise, to describe more complex probability distributions. Additionally, it would be interesting to examine sparse interaction graphs, such as systems that interact on grids or on random sparse graphs. The first case is fundamental for a large class of physics systems and image generation tasks, while the latter type, such as Erdos-Renyi interaction graphs, is common in optimization [44] and inference problems [57].
Figure 6: **Scatter plot of the weights vs the couplings.** Scatter plot of the absolute values of the weights of the first layer of a SK\({}_{1RSB}\) vs the absolute values of the coupling parameters of the SK model. The weights are trained over 10,000 samples generated by the Metropolis Monte Carlo algorithm on a single instance of the SK model with \(N=100\) variables at \(\beta=2\). They are initialized at small random values. The blue line is the fit of the blue points, clearly showing a strong correlation between the weights and the coupling parameters of the Hamiltonian.
2307.13807 | Sports Betting: an application of neural networks and modern portfolio
theory to the English Premier League | This paper presents a novel approach for optimizing betting strategies in
sports gambling by integrating Von Neumann-Morgenstern Expected Utility Theory,
deep learning techniques, and advanced formulations of the Kelly Criterion. By
combining neural network models with portfolio optimization, our method
achieved remarkable profits of 135.8% relative to the initial wealth during the
latter half of the 20/21 season of the English Premier League. We explore
complete and restricted strategies, evaluating their performance, risk
management, and diversification. A deep neural network model is developed to
forecast match outcomes, addressing challenges such as limited variables. Our
research provides valuable insights and practical applications in the field of
sports betting and predictive modeling. | Vélez Jiménez, Román Alberto, Lecuanda Ontiveros, José Manuel, Edgar Possani | 2023-07-11T15:53:11Z | http://arxiv.org/abs/2307.13807v1 | Sports Betting: an application of neural networks and modern portfolio theory to the English Premier League
###### Abstract
This paper presents a novel approach for optimizing betting strategies in sports gambling by integrating Von Neumann-Morgenstern Expected Utility Theory, deep learning techniques, and advanced formulations of the Kelly Criterion. By combining neural network models with portfolio optimization, our method achieved remarkable profits of \(135.8\%\) relative to the initial wealth during the latter half of the \(20/21\) season of the English Premier League. We explore complete and restricted strategies, evaluating their performance, risk management, and diversification. A deep neural network model is developed to forecast match outcomes, addressing challenges such as limited variables. Our research provides valuable insights and practical applications in the field of sports betting and predictive modeling.
Decision Theory · Utility Theory · Neural Networks · Sports Prediction · Modern Portfolio Theory · Numerical Optimization
## 1 Introduction
This study aims to address the question of which bets a rational gambler, considering their risk appetite, should select in order to maximize expected utility. The concept of optimality is defined based on the Von Neumann-Morgenstern Classical Utility Theory. We employ deep learning techniques to estimate event odds, and subsequently apply the Sharpe Ratio and Kelly financial criteria to identify the optimal set of bets.
The primary objective of this study is to assess the performance of the Sharpe Ratio and the Kelly Criterion within the context of real-world sports betting, specifically during the second half of the 2020-2021 season of the English Premier League (EPL). Additionally, compare the impact of underlying assumptions on the aforementioned criteria, shedding light on their significance.
### Sports Betting
The analysis of a bet can be approached as a decision problem defined by the tuple \((D,E,C,(\succeq))\). Within our mathematical framework, this decision tuple \((D,E,C,(\succeq))\) encompasses fundamental elements for making rational decisions within the realm of sports betting. The decision set \(D\) represents the available choices or bets that a bettor can select. The events set \(E\) defines the potential outcomes or events associated with these bets. The consequence set \(C\) encompasses the various outcomes or consequences that arise from each
event, denoted as \(c=c(d,e)\). The preference relation (\(\succeq\)) captures the bettor's subjective preferences and enables comparisons between different bets based on personal criteria [1]. By defining and analyzing these elements, our approach aims to assist rational gamblers in choosing optimal bets that maximize their expected utility and align with their preferences [2]. Specifically, we seek the optimal bet \(d_{*}\in D\) based on the bettor's preferences (\(\succeq\)). These preferences are contingent upon the consequences \(c\in C\), which in this case include the odds and probability of the event \(e\in E\)[3].
We denote \(\ell\) as the percentage of the wealth wagered on an event \(e\), hence \(\underline{\ell}\) represents the wager vector. Consequently, the decision \(d_{\underline{\ell}}\) signifies the act of placing a bet of \(\underline{\ell}\) on the events \(e\in E\).
#### 1.1.1 Odds
Within the domain of sports betting, let \(W_{0}\) represent the initial wealth and \(W_{1}\) denote the net profit obtained from placing a bet on event \(e\), not taking into account the initial quantity \(W_{0}\). In the event's occurrence, the outcome is a gross profit of \(W_{1}+W_{0}\), while in its non-occurrence the bet is forfeited, resulting in a loss. The probabilities associated with these outcomes are denoted by \(p\) and \(1-p\), respectively. It is assumed that all wagers result in a positive return, that is, \(W_{1}>0\).
On the other hand, the odds \(\sigma\) are defined as the ratio between the gross profit and the initial wealth, i.e. \(\sigma=(W_{1}+W_{0})/W_{0}=1+W_{1}/W_{0}\). It is _assumed that the odds and probabilities of the events are fixed over time_.
#### 1.1.2 Returns
Formally, the net return \(\varrho\) from betting \(\$1\) on event \(e\) is the random variable \(\varrho=\sigma-1\) with probability \(p\) and \(\varrho=-1\) with probability \(1-p\). In the case where there are \(m\) events, it is defined as the random vector \(\underline{\varrho}\), where the \(i\)-th entry is the random variable \(\varrho_{i}\). To facilitate notation, we introduce the odds vector \(\underline{\sigma}:=(\sigma_{1},\sigma_{2},\ldots,\sigma_{m})^{\prime}\), representing the individual odds per event. Similarly, we define the probability vector \(\underline{p}\) in the same manner. Additionally, we denote the diagonal matrix of odds as \(D_{\sigma}:=\text{diag}(\sigma_{1},\sigma_{2},\ldots,\sigma_{m})\).
\[\text{i.e.}\quad\underline{\varrho}=D_{\sigma}m-\mathbf{1},\quad m\sim\text{ multinomial}(1;\underline{p}). \tag{1}\]
Likewise, the total return \(R(\underline{\ell})\), per unit of initial wealth, for betting a fraction \(\ell_{i}\) of the wealth on the event \(e_{i}\) is equal to the initial wealth plus the random gains or consequences [4]. In mathematical notation,
\[R(\underline{\ell})=1+\sum_{i=1}^{m}\ell_{i}\varrho_{i}=1+\underline{\ell}^{ \prime}\underline{\varrho}. \tag{2}\]
The analysis assumes certain conditions regarding the nature of the betting system. Specifically, it assumes the absence of _short selling_ and borrowing, and assumes that money is infinitely divisible with no minimum bet requirement. These assumptions can be summarized as follows: first, the wager amount \(\ell_{i}\) for each event \(i\) is non-negative (\(\ell_{i}\geq 0\)) for all \(i\); second, the total wager across all events satisfies the constraint \(\sum_{i}^{m}\ell_{i}\leq 1\). Together, these two assumptions imply that the wager amount \(\ell_{i}\) for each event \(i\) is bounded within the range \([0,1]\).
#### 1.1.3 Betting Market
By definition, for a bet to be fair one would expect the net return to be zero, i.e. \(\mathbb{E}[\varrho]=0\). This happens if and only if \(\sigma=1/p\). However, the odds of a bookmaker are always of the form \(\sum_{i}^{m}1/\sigma_{i}=1+tt\)[5], where \(tt\) is the commission (commonly known as track take) that a casino charges; therefore \(tt>0\). Nevertheless, since the betting market is a non-efficient market [6], one can find odds \(\underline{\sigma}\) across different casinos such that \(tt\leq 0\). When the market commission is negative, arbitrage is obtained1. The strategy that exploits this phenomenon is of the form \(\ell_{i}^{(A)}:=\sigma_{i}^{-1}/\sum_{j}\sigma_{j}^{-1}\), since, fixing the occurring event \(e_{i}\), the total return \(R(\underline{\ell}_{A})=1+\ell_{i}^{(A)}\sigma_{i}-\sum_{j}^{m}\ell_{j}^{(A)}= 1/(1+tt),\quad\forall e_{i}\in E\). It will be shown below that this strategy matches the vector of strategies found by the Sharpe Ratio Criterion for non-efficient markets.
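As a concrete illustration, the following sketch computes the commission \(tt\) and the arbitrage stakes \(\underline{\ell}_{A}\) from a vector of decimal odds; the odds in the example are hypothetical values chosen so that \(tt<0\).

```python
import numpy as np

def arbitrage_stakes(odds):
    """Return (tt, stakes): the market commission sum(1/odds) - 1 and the
    stake fractions l_i = (1/odds_i) / sum_j (1/odds_j) of section 1.1.3."""
    inv = 1.0 / np.asarray(odds, dtype=float)
    tt = inv.sum() - 1.0
    return tt, inv / inv.sum()

# Hypothetical best-of-market odds for (home, draw, away) with tt < 0.
tt, stakes = arbitrage_stakes([2.10, 3.80, 4.20])
print(tt, stakes, 1.0 / (1.0 + tt))   # guaranteed gross return R = 1/(1+tt)
```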
### Uncertainty
This section provides a compilation of key findings from Frequentist Statistical Inference and Information Theory, which serve as justifications for the methods employed in this research.
#### 1.2.1 Information Theory
In information theory, the concept of entropy is introduced to quantify the uncertainty associated with a phenomenon represented by the random variable \(X\). Entropy, denoted as \(H(X)\), is defined as \(H(X):=-\mathbb{E}_{F}[\log f(X)]=-\int_{\mathcal{X}}\log(f(x))\,dF(x)\), where the random variable \(X\) follows a distribution characterized by \(F\)[7]. In other words, we express \(X\) as \(X\sim F\). It is worth noting that the density function \(f\) is related to the distribution function \(F\) through the expression \(f(x)=\frac{d}{dx}F(x)\). The entropy tends to approach \(0^{+}\) when uncertainty is low and approaches infinity in the presence of high uncertainty. Cross-entropy, denoted as \(H(F,G):=-\mathbb{E}_{F}[\log(g(X))]\), quantifies the difference between distributions \(F\) and \(G\) for the same phenomenon. Furthermore, the Kullback-Leibler Divergence (KL-Divergence) [8] provides a means to estimate the disparity between distributions \(F\) and \(G\), which is given by
\[D_{\text{KL}}(F||G):=\mathbb{E}_{F}\left[\log\left(f(X)/g(X)\right)\right]. \tag{3}\]
Two important properties of the KL-Divergence are its direct relationship to cross-entropy and the equivalence between minimizing the KL-Divergence and maximizing the likelihood of a random sample. This equivalence is demonstrated through \(\max\{L(\underline{\theta};\underline{x})\}=\max\left\{\frac{1}{n}\sum_{j}^{ n}\log(f(\underline{x}_{j};\underline{\theta}))\right\}=\max\left\{\mathbb{E}_{ \hat{F}_{\text{emp}}}[\log(f(X;\underline{\theta}))]\right\} =\min\{H(\hat{F}_{\text{emp}},F)\}=\min\{D_{\text{KL}}(\hat{F}_{\text{emp}} ||F)\}\)[9].
#### 1.2.2 Deep Learning
In the realm of event prediction, deep learning methods have gained prominence due to their effectiveness in nonlinear scenarios. They excel at identifying relationships between covariates and exhibit flexibility in optimizing objective functions tailored to diverse requirements for the same phenomenon [9]. For instance, optimizing the KL-Divergence between observed and predicted data is a valuable tool, employed in this study to estimate the outcome probabilities of EPL matches.
With this approach, it becomes possible to make more informed wagers with bookmakers. However, to complete the process, it is essential to determine the specific events to bet on and the corresponding wager allocation based not only on the probabilities estimated but also on the market odds.
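As a rough sketch of this step, a small feed-forward classifier can be trained by minimizing the cross-entropy between observed outcomes and predicted probabilities, which, as discussed above, is equivalent to minimizing the KL-Divergence; the input dimension, layer widths, and the random stand-in data below are illustrative assumptions, not the exact model of this study.

```python
import torch

# Stand-in data: 20 per-match covariates; outcome 0 = home win, 1 = draw, 2 = away win.
features = torch.randn(512, 20)
outcome = torch.randint(0, 3, (512,))

model = torch.nn.Sequential(
    torch.nn.Linear(20, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 3),                  # logits for the three outcomes
)
loss_fn = torch.nn.CrossEntropyLoss()        # cross-entropy = KL up to a constant
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(features), outcome)
    loss.backward()
    opt.step()

probs = torch.softmax(model(features), dim=-1)   # estimated outcome probabilities
```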
### Utility Theory
In order to avoid contradictions and paradoxes associated with the moral value given to money, this study adopts the Von Neumann-Morgenstern Utility Axioms [10] as the basis for its methodology. Leveraging this theory offers several notable advantages, including the recognition that the nominal value of money differs from its moral value, as well as the establishment of a clear correspondence between qualitative preferences and quantitative utilities in the context of gambling.
Consider a probability space \((\Omega,\mathscr{F},\mathbb{P})\), where \(\Omega\) represents the sample space, \(\mathscr{F}\) denotes the associated sigma algebra, and \(\mathbb{P}\) is the probability measure. Within this context, let \(X\) be a random variable that follows a distribution characterized by \(F\), and takes on values \(x\) belonging to the set \(\mathcal{X}\). The probability distribution \(F\) is referred to as a "lottery" over the set \(\mathcal{X}\). The objective is to establish preference relations (\(\succeq\)) over the set of lotteries, denoted by \(\mathscr{Z}=\{F|F\) is a probability distribution over \(\mathcal{X}\}\)[10]. In essence, making a decision \(d\in D\) to choose a lottery \(F\in\mathscr{Z}\) is tantamount to studying \(D\) itself, given the fixed probabilities. Relaxing the assumption of fixed probabilities would transform the gambling problem into a Bayesian framework [2].
As previously mentioned, the moral value of money differs from its nominal value [11][12]. To capture this distinction, the moral value associated with a monetary outcome \(x\) is modeled using a Bernoulli Utility Function \(u:\mathcal{X}\rightarrow\mathbb{R}\). Similarly, to quantify the utility of a lottery \(F\), a Von Neumann-Morgenstern Utility Functional \(U:\mathscr{Z}\rightarrow\mathbb{R}\) is employed. According to the Expected Utility Theorem [10], the utility functional can be expressed as \(U(F)=\int_{\mathcal{X}}udF=\mathbb{E}_{F}[u(X)]\). Additionally, it follows that \(U(F_{X})\geq U(F_{Y})\iff F_{X}\succeq F_{Y}\)[10]. Consequently, a gambler's preferences can be quantified through the utility function \(u\), which is determined by the individual's risk profile.
#### 1.3.1 Utility Functions
In economic practice, the utility function \(u\) is commonly assumed to be an increasing function that exhibits decreasing marginal rates of substitution. This implies that \(u^{\prime}(x)>0\) and \(u^{\prime\prime}(x)<0\) for \(x\in\mathcal{X}\). Such assumptions capture the notion that individuals with higher wealth tend to exhibit lower levels of risk aversion [13].
Modern Portfolio Theory, pioneered by Harry Markowitz, supposes a quadratic utility function, where \(u(x)\) is a polynomial of degree two, given by \(u(x)=\beta_{0}+\beta_{1}x+\beta_{2}x^{2}\)[14]. Consequently, the utility of a lottery can be expressed as:
\[U(F)=\mathbb{E}_{F}[u(X)]=\beta_{0}+\beta_{1}\mathbb{E}[X]+\beta_{2}(\text{Var}(X )+\mathbb{E}[X]^{2})=W(\mu,\sigma). \tag{4}\]
Here, \(W\) denotes a utility function that depends on the mean \(\mu\) and variance \(\sigma^{2}\) of the random variable \(X\). Thus, the utility of a lottery can be fully characterized by its mean and variance. In fact, \(F_{1}\succeq F_{2}\iff W(\mu_{1},\sigma_{1})\geq W(\mu_{2},\sigma_{2})\). To align with the principle that greater wealth is always preferred, the function \(W\) must be monotonically increasing in \(\mu\); risk aversion further requires it to be monotonically decreasing in \(\sigma\). In other words, for the same expected return, individuals with risk-averse preferences prefer lotteries with lower variance.
On another note, Daniel Bernoulli argued in his work "Exposition of a New Theory of the Measurement of Risk" that the change in utility experienced by an individual is inversely proportional to their wealth [12]. Consequently, Bernoulli suggested that utility functions follow a logarithmic form:
\[u^{\prime}(x)=\frac{1}{x}\implies u(x)=\log(x)+C. \tag{5}\]
Once the utility functions have been characterized and the probabilities of the events have been estimated, the objective is to identify the lottery that maximizes expected utility in order to determine the optimal bet.
## 2 Betting Strategies
In this section, we address the problem of determining the best strategy for a set of \(r\) random rewards within the betting system described previously.
### Sharpe Ratio
Under the assumption (4) that a rational gambler's utility follows a quadratic form, the objective is to identify the best strategy in the universe \(\Psi=\{\psi=(\mu,\sigma)|\sum_{k=1}^{r}\ell_{k}=1\}\)[14]. Here, \(\mu=\sum_{k}^{r}\ell_{k}\mu_{k}\) represents the return of the portfolio, and \(\sigma^{2}=\underline{\ell}^{\prime}\Sigma\underline{\ell}\) denotes the portfolio's variance. The covariance matrix \(\Sigma\) captures the covariances between the \(r\) random rewards, while \(\mu_{k}=\mathbb{E}[\varrho_{k}]\) represents the expected value of bet \(k\).
A rational strategy aims to minimize the portfolio variance while maintaining a specified expected return level \(\mu_{*}\). Such a strategy \(\underline{\ell}_{*}\) can be obtained by solving the following optimization problem:
\[\underset{\underline{\ell}\succeq\mathbf{0}}{\operatorname{argmin}}\{ \underline{\ell}^{\prime}\Sigma\underline{\ell}\}\quad\text{subject to}\quad \underline{\ell}^{\prime}\underline{\mu}=\mu_{*},\quad\sum_{k}^{r}\ell_{k}=1. \tag{6}\]
In the context of sports gambling, where simultaneous bets are placed, the portfolio return is given by \(\underline{\mu}=D_{\underline{x}}\underline{p}-\mathbf{1}\), and the covariance matrix is \(\Sigma=D_{e}\left(\text{diag}(\underline{p})-\underline{p}\underline{p}^{ \prime}\right)D_{e}\), as shown in equation (1). In the case of simultaneous bets, \(\Sigma\) becomes a block diagonal matrix, denoted as \(\Sigma=\text{diag}\left(\Sigma_{1},\Sigma_{2},\ldots,\Sigma_{r}\right)\).
The set of optimal portfolios with minimum variance, for all possible return levels, is referred to as the Efficient Frontier [15]. To identify the best portfolio within this optimal set, the Sharpe Ratio was used, which is defined as the ratio between the difference of the portfolio return and the risk-free rate \(R_{f}\), and the standard deviation of the portfolio [16]. The Sharpe Ratio is given by:\(S(\underline{\ell}):=(\underline{\ell}^{\prime}\underline{\mu}-R_{f})/\sqrt{ \underline{\ell}^{\prime}\Sigma\underline{\ell}}\). To facilitate the optimization process, it is beneficial to transform the Sharpe Ratio problem into a convex optimization problem by introducing an additional dimension. This transformation helps avoid issues related to non-convexity and local optima [17]. It is introduced a change of variable \(\underline{y}=\kappa\underline{\ell}\), assuming a feasible solution exists such that \(\underline{\ell}^{\prime}\underline{\mu}>R_{f}\), and fix a scalar \(\kappa>0\) such that \((\underline{\mu}-R_{f}\mathbf{1})^{\prime}\underline{y}=1\). The resulting convex optimization problem in \(r+1\) dimensions is as follows [17]:
\[\underset{\underline{y}\succeq\mathbf{0}}{\operatorname{argmin}}\left\{ \underline{y}^{\prime}\Sigma\underline{y}\right\}\quad\text{subject to}\quad\frac{\left(\underline{\mu}-R_{f}\mathbf{1} \right)^{\prime}\underline{y}=1}{\sum_{k}^{r}y_{k}=\kappa}\quad. \tag{7}\]
As mentioned in Section 1.1.3, in a sports betting market where the commission \(tt\) is negative and under the strategy \(\underline{\ell}_{A}\), the gross return \(R(\underline{\ell}_{A})>1\). The Sharpe Criterion helps identify such market inconsistencies by converging on the optimal strategy \(\ell_{A}\). This occurs because \(\mathbb{E}[R(\underline{\ell}_{A})]=\mathbb{E}[1+\underline{\ell}_{A}^{\prime} \underline{\mathcal{G}}]=1+\underline{\ell}_{A}^{\prime}(D\circ\underline{p}- \mathbf{1})=1/(1+tt)>1\). Furthermore, \(\mathrm{Var}(R(\underline{\ell}_{A}))=\underline{\ell}_{A}^{\prime}\mathrm{ Var}(D_{e}\boldsymbol{m}-\mathbf{1})\underline{\ell}_{A}=\underline{\ell}_{A}^{ \prime}(D_{e}(\mathrm{diag}(\underline{p})-\underline{p}\underline{p}^{ \prime})D_{e})\underline{\ell}_{A}=0\). Thus, the optimal strategy under quadratic utilities and arbitrage is \(\underline{\ell}_{A}\). This is due to the fact that the Sharpe Ratio is positively infinite, as the numerator is positive and the denominator is zero, assuming the risk-free asset is zero, which is reasonable for 90-minutes bets.
In summary, assuming quadratic utilities, the mean and variance of the returns \(R_{k}\) alone provide both necessary and sufficient information to determine the optimal market portfolio that maximizes the Sharpe Ratio Criterion, which is a crucial aspect discussed in this section. It is important to highlight that this criterion exploits arbitrage opportunities, enabling strategies with no risk at all. Furthermore, the allocation between the portfolio and the risk-free asset \(R_{f}\) can range from 0% to 100% of the total wealth, depending on the individual's risk tolerance. However, when investing the entire budget in such a portfolio, the probability of ruin becomes positive, which is a downside of this model.
### Kelly Criterion
The Kelly Criterion is a strategy aimed at maximizing long-term wealth by effectively balancing the potential for large returns with the risk of losses [18]. The formula for determining the optimal strategy, denoted as \(\ell_{*}\), is derived as follows.
#### 2.2.1 Classical Bivariate Kelly Criterion
Consider a sequence of random rewards \(\left\{R_{j}\right\}_{j}^{n}\), and let \(W_{n}\) denote the final wealth of an individual who reinvests their returns according to a fixed strategy \(\ell\). At time \(n\), the individual's wealth is given by \(W_{n}=W_{0}\prod_{j}^{n}R_{j}(\ell)\), where \(W_{0}\) represents the initial wealth. By defining \(G_{n}:=\log\left(W_{n}/W_{0}\right)\), we obtain the random walk expression \(G_{n}=\sum_{j}^{n}\log\left(R_{j}(\ell)\right)\), which exhibits a drift term equal to the expected value \(\mathbb{E}[\log R_{j}(\ell)]\). On the other hand, if \(S_{n}\) is the number of victories at time \(n\), then \(S_{n}\sim\mathrm{binomial}(n,p)\). Hence, the following relationship holds:
\[W_{n} =\underbrace{(1+(\sigma-1)\ell)^{S_{n}}}_{\mathrm{Winnings}}\underbrace{(1-\ell)^{n-S_{n}}}_{\mathrm{Losses}}W_{0},\] \[\iff G_{n} =S_{n}\log\left(1+(\sigma-1)\ell\right)+(n-S_{n})\log\left(1-\ell\right).\]
Since \(G_{n}\) is a sum of independent and identically distributed (i.i.d.) random variables \(\log(R_{j}(\ell))\), according to Borel's Law of Large Numbers [19],
\[\lim_{n\to\infty}\frac{1}{n}G_{n}(\ell)=p\log\left(1+(\sigma-1)\ell\right)+(1- p)\log\left(1-\ell\right),\quad\text{with probability 1}. \tag{8}\]
The expression in (8), denoted \(G(\ell)\), is defined as the _wealth log-growth rate_ by John Kelly [20]. Since \(G\) is a function of the strategy \(\ell\), taking the derivative of \(G\) with respect to \(\ell\) leads to the optimal solution. Thus,
\[G^{\prime}(\ell_{*})=0\iff\ell_{*}=\frac{\sigma p-1}{\sigma-1}. \tag{9}\]
Recall that in fair gambling (1.1.3), the odds of the event \(e\in E\) are the reciprocal of the probabilities of these events. However, if \(1/\sigma=\tilde{p}\neq p\), then \(\ell_{*}=(p-\tilde{p})/(1-\tilde{p})\). Rearranging terms, we obtain
\[G(\ell_{*})=p\log\left(1+(\sigma-1)\frac{p-\tilde{p}}{1-\tilde{p}}\right)+(1- p)\log\left(1-\frac{p-\tilde{p}}{1-\tilde{p}}\right)=p\log\left(\frac{p}{ \tilde{p}}\right)+(1-p)\log\left(\frac{1-p}{1-\tilde{p}}\right)=D_{\mathrm{KL }}(p||\tilde{p}). \tag{10}\]
Thus, the maximum log-growth is equal to the KL-Divergence (3). Therefore, the greater the disparity between the odds and the actual probability observed by the bettor, the greater the competitive advantage.
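A short numerical check of equations (9) and (10), under assumed example values \(p=0.55\) and \(\sigma=2\) (so that the implied probability is \(\tilde{p}=0.5\)):

```python
import numpy as np

p, o = 0.55, 2.0           # bettor's probability and decimal odds (assumed)
p_tilde = 1.0 / o          # probability implied by the odds
ell_star = (o * p - 1) / (o - 1)   # equation (9)

def G(ell):                # wealth log-growth rate, equation (8)
    return p * np.log(1 + (o - 1) * ell) + (1 - p) * np.log(1 - ell)

kl = p * np.log(p / p_tilde) + (1 - p) * np.log((1 - p) / (1 - p_tilde))
print(ell_star)            # 0.10
print(G(ell_star), kl)     # both ~0.005, confirming equation (10)
```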
#### 2.2.2 Properties
The Kelly Criterion possesses several noteworthy properties that contribute to its significance and effectiveness in optimizing long-term wealth accumulation.
Firstly, the log-growth rate \(G(\ell)\) associated with the Kelly Criterion exhibits a unique optimal strategy, as established by Edward Thorp in his paper "Optimal gambling systems for favorable games" [21]. There exists a critical threshold \(\ell_{c}>\ell_{*}\), where \(\ell_{*}\) represents the Kelly Criterion strategy, at which \(G(\ell)\) transitions from positive to negative, reaching a value of zero. This property emphasizes the distinct nature of different strategies in relation to their alignment with this critical threshold \(\ell_{c}\).
Thorp's research [21] also underscores the significant impact of the chosen strategy on capital growth. When \(G(\ell)>0\), the wealth \(W_{n}\) grows infinitely with probability 1, highlighting the potential for substantial wealth accumulation. Conversely, for \(G(\ell)<0\), \(W_{n}\) converges to zero over time. In the case where \(G(\ell)=0\), the wealth exhibits interesting behavior, with the upper limit \(\limsup W_{n}\) tending to infinity and the lower limit \(\liminf W_{n}\) approaching zero as the investment horizon extends indefinitely. These findings demonstrate the sensitivity of capital growth to the chosen strategy and its profound influence on long-term financial outcomes.
Additionally, the superiority of the Kelly Criterion strategy \(\ell_{*}\) over alternative strategies \(\ell\) is established by Breiman [22]. Irrespective of the specific alternative strategy employed, portfolios adhering to the Kelly Criterion consistently outperform other strategies with probability one in terms of wealth accumulation. As the investment horizon extends indefinitely, the wealth \(W_{n}(\ell_{*})\) of a portfolio following the Kelly Criterion experiences infinite growth relative to the wealth \(W_{n}(\ell)\) of portfolios employing alternative strategies. That is to say, \(\lim W_{n}(\ell_{*})/W_{n}(\ell)=\infty\) when \(n\to\infty\), with probability 1. This highlights the remarkable advantage of the Kelly Criterion in maximizing long-term wealth accumulation.
The properties of the Kelly Criterion underscore its significance and effectiveness in optimizing long-term wealth accumulation. The distinct nature of strategies in relation to the critical threshold, the sensitivity of capital growth to the chosen strategy, and the superior performance of the Kelly Criterion strategy over alternatives all contribute to its importance. By aligning with the Kelly Criterion, investors can enhance their wealth accumulation potential, leading to more favorable financial outcomes in the long run.
#### 2.2.3 Multivariate Kelly Criterion
Kelly's Criterion can be expanded to encompass multivariate bets, where there are \(m\) possible events associated with the same phenomenon. When event \(e_{i}\) occurs, the raw return is \(R(e_{i};\underline{\ell})=1+\sigma_{i}\ell_{i}-\sum_{j}^{m}\ell_{j}\), where \(\underline{\ell}\) represents the vector of strategies corresponding to each event.
Considering \(S_{i}\) as the number of occurrences of the \(i\)-th outcome, where \(\sum_{j}^{m}S_{j}=n\), the wealth at trial \(n\) is given by \(W_{n}(\underline{\ell})=\prod_{i}^{m}\left(1+\sigma_{i}\ell_{i}-\sum_{j}^{m}\ell_{j}\right)^{S_{i}}W_{0}\). By taking logarithms and considering the limit as \(n\) approaches infinity, the log-growth rate is obtained as follows [23]:
\[G(\underline{\ell})=\sum_{i=1}^{m}p_{i}\log\left(1+\sigma_{i}\ell_{i}-\sum_{j =1}^{m}\ell_{j}\right),\quad\text{with probability 1}.\]
The formulation of the Multivariate Kelly Criterion in a matrix-based representation constitutes an original contribution. By introducing the probability vector \(\underline{p}\in[0,1]^{m}\) as the vector of probabilities associated with each event and the consequences matrix \(W=[\underline{w}_{1}|\underline{w}_{2}|\ldots|\underline{w}_{m}]\), where \(\underline{w}_{j}=\sigma_{j}\hat{\mathbf{e}}_{j}\) and \(\hat{\mathbf{e}}_{j}\) is the \(j\)-th canonical vector, the problem of determining the Multivariate Kelly Criterion can be cast. The objective is to maximize the expression
\[\max\left\{\underline{p}^{\prime}\log\left(\mathbf{1}+W^{\prime}\underline{ \ell}-\sum_{i=1}^{m}\ell_{i}\mathbf{1}\right)\right\}\quad\text{subject to} \quad\sum_{i=1}^{m}\ell_{i}\leq 1,\quad\underline{\ell}\geq\mathbf{0}. \tag{11}\]
Importantly, the optimization problem associated with the Multivariate Kelly Criterion exhibits a concave structure, enabling its solution through convex optimization algorithms like Successive Quadratic Programming (SQP) [24]. While Smoczynski has proposed an algorithm for determining \(\underline{\ell}_{*}\) in his work [23], a closed-form solution is not available. The formulation of this criterion in matrix form significantly contributes to a deeper comprehension of its characteristics. By focusing on the functional form of the log growth and obtaining analytical gradients, we can effectively maximize the criterion function. Furthermore, this matrix-based approach facilitates practical implementation by eliminating the need for iterative loops and relying solely on linear algebraic operations. As a result, this formulation not only enhances theoretical understanding but also provides valuable insights for the efficient application of the Multivariate Kelly Criterion.
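A minimal sketch of how problem (11) can be solved with an SQP routine (SciPy's SLSQP) is given below; the probabilities and odds are assumed example values, and the guard against non-positive arguments of the logarithm is a numerical safeguard rather than part of the formulation.

```python
import numpy as np
from scipy.optimize import minimize

def multivariate_kelly(p, odds):
    """Maximise p' log(1 + W' ell - sum(ell) 1) over {ell >= 0, sum(ell) <= 1},
    with W = diag(odds) for mutually exclusive events (problem (11))."""
    m = len(p)
    W = np.diag(odds)
    def neg_growth(ell):
        rets = 1.0 + W.T @ ell - ell.sum()
        if np.any(rets <= 1e-12):          # keep the log well-defined
            return 1e9
        return -(p @ np.log(rets))
    cons = [{"type": "ineq", "fun": lambda ell: 1.0 - ell.sum()}]
    res = minimize(neg_growth, np.full(m, 1e-3), method="SLSQP",
                   bounds=[(0.0, 1.0)] * m, constraints=cons)
    return res.x

p = np.array([0.50, 0.25, 0.25])           # home / draw / away (assumed)
odds = np.array([2.3, 3.8, 4.2])           # decimal odds (assumed)
print(multivariate_kelly(p, odds).round(4))
```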
#### 2.2.4 Multivariate and Simultaneous Kelly Criterion
In the most general case, the multivariate criterion has been extended to incorporate simultaneous random rewards. This extension represents a novel development in the field. In this scenario, a set of \(r\) independent random rewards occurs simultaneously, each with \(m_{k}\) possible events. The probability space is defined as \(\Omega=\bigtimes_{k}^{r}\Omega_{k}\), resulting in a total of \(M=\sum_{k}^{r}m_{k}\) events and \(N=\prod_{k}^{r}m_{k}\) possibilities. The strategy vector is defined as the concatenation of the betting vectors of the \(r\) random rewards, denoted as \(\underline{\ell}=(\underline{\ell}_{1},\underline{\ell}_{2},\ldots,\underline{\ell}_{r})^{\prime}\in\mathbb{R}^{M}\). Similarly, the decimal odds vector is represented as \(\underline{o}\) and the net returns vector as \(\underline{\varrho}\). The overall raw return of the strategy, denoted as \(R(\underline{\ell})\), can be expressed as \(R(\underline{\ell})=1+\underline{\ell}^{\prime}\underline{\varrho}\). The matrix of consequences, denoted as \(W\in\mathbb{R}^{M\times N}\), represents the profit possibilities, with each column corresponding to a specific joint outcome \(\underline{\omega}_{j}\), with one component \(\omega_{j_{k}}\) drawn from each sample space \(\Omega_{k}\).
\[\text{i.e.}\quad\underline{w}_{j}=D_{o}(\hat{\mathbf{e}}_{j_{1}},\hat{\mathbf{e}}_{j_{2}},\ldots,\hat{\mathbf{e}}_{j_{r}})^{\prime},\quad j_{k}\in\underline{\omega}_{j},\quad\forall k=1,2,\ldots,r,\quad j=1,2,\ldots,N.\]
Since the outcomes of each bet are assumed to be independent, the probability vector is given by
\[p_{i}=\mathbb{P}\left[\underline{\omega}_{i}\right]=\mathbb{P}\left[\bigcap_{k}^{r}\omega_{i,k}\right]=\prod_{k}^{r}\mathbb{P}\left[\omega_{i,k}\right]. \tag{12}\]
Consequently, the Multivariate and Simultaneous Kelly Criterion is formulated as the optimization problem:
\[\max\left\{\underline{p}^{\prime}\log\left(\mathbf{1}+W^{\prime}\underline{\ell}-\sum_{i=1}^{M}\ell_{i}\mathbf{1}\right)\right\}\quad\text{subject to}\quad\sum_{i=1}^{M}\ell_{i}\leq 1,\quad\underline{\ell}\geq\mathbf{0}. \tag{13}\]
The extension of the Multivariate Kelly Criterion to incorporate simultaneous and multiple random rewards presents significant advantages and opportunities for optimizing decision-making in intricate betting scenarios. As previously discussed in the context of the Multiple Kelly Criterion, this expanded framework offers a distinct advantage by enabling analytical and programmatically efficient implementation. By leveraging the functional form of the criterion, decision-makers can employ a numerical approach that is both robust and stable within the realm of matrix algebra.
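To illustrate the matrix construction, the following sketch assembles the joint probability vector of equation (12) and the consequence matrix \(W\) for two assumed simultaneous matches; the resulting \((\underline{p},W)\) pair can then be fed to the same SQP routine used for problem (11).

```python
import numpy as np
from itertools import product

def simultaneous_inputs(probs, odds):
    """Build the joint probability vector p (length N) and consequence
    matrix W (M x N) for r independent bets; probs[k] and odds[k] hold
    the m_k outcome probabilities / decimal odds of bet k."""
    sizes = [len(pk) for pk in probs]
    offsets = np.cumsum([0] + sizes[:-1])
    combos = list(product(*[range(m) for m in sizes]))    # N joint outcomes
    p = np.array([np.prod([probs[k][j] for k, j in enumerate(c)])
                  for c in combos])                       # equation (12)
    W = np.zeros((sum(sizes), len(combos)))
    for col, c in enumerate(combos):
        for k, j in enumerate(c):          # the winning event of bet k pays its odds
            W[offsets[k] + j, col] = odds[k][j]
    return p, W

probs = [[0.5, 0.25, 0.25], [0.4, 0.3, 0.3]]   # two matches (assumed)
odds = [[2.3, 3.8, 4.2], [2.8, 3.4, 3.1]]
p, W = simultaneous_inputs(probs, odds)
print(p.sum(), W.shape)                         # 1.0 (6, 9)
```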
The aforementioned properties of the Kelly Criterion highlight its resilience and competitive edge in the pursuit of long-term wealth accumulation. The existence of a distinctive optimal strategy, coupled with the profound influence of strategy selection on capital growth, solidifies the Criterion's efficacy and prominence within the realm of financial decision-making. Building upon these foundational principles, we can now delve into a real-world application that pits the Kelly Criterion against the Sharpe Criterion: betting in the English Premier League. This practical demonstration will further illuminate the practical implications and comparative performance of these two criteria in a tangible and relevant context.
## 3 Results
This section presents the optimal betting portfolios for the second part of the 2020/2021 season of the English Premier League. These portfolios are derived using the Sharpe Ratio Criterion and the Kelly Criterion based on odds estimated by deep learning models. For each criterion, two types of strategies were examined: unrestricted strategies and restricted strategies. The former allows for betting on all possible events of each match, while the latter imposes limitations on the number of events that can be bet upon.
It is worth noting that excessive betting can lead to a negative log-growth rate \(G(\underline{\ell})<0\), resulting in the wealth tending towards zero, as mentioned in the first property of the Kelly Criterion [21]. However, according to the findings of E. Thorp stated in Section 2.2.2, this negative impact occurs only beyond the optimal point. Therefore, it is preferable to underestimate the bets rather than overestimate them. This rationale underscores the inclusion of fractional bets in the strategies.
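This preference for under-betting can be illustrated by evaluating the bivariate log-growth \(G(f\ell_{*})\) of equation (8) over a grid of fractions \(f\), using assumed example values:

```python
import numpy as np

p, o = 0.55, 2.0                          # assumed probability and odds
ell_star = (o * p - 1) / (o - 1)          # Kelly stake, equation (9)

def G(ell):                               # log-growth rate, equation (8)
    return p * np.log(1 + (o - 1) * ell) + (1 - p) * np.log(1 - ell)

for f in (0.17, 0.25, 0.5, 1.0, 1.5, 2.0):
    print(f, round(G(f * ell_star), 5))
# growth degrades slowly below ell*, but at roughly twice the Kelly stake it
# crosses zero and turns negative, i.e., the critical threshold ell_c of 2.2.2
```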
### Predictive Model
The predictive model was constructed based on data obtained from three distinct public sources. Firstly, EA Sports' ratings [25] were utilized to assess the overall quality, offense, midfield, and defense of teams on a weekly basis. Secondly, team statistics were collected from Understats [26]. Lastly, match odds from multiple bookmakers, obtained through Football Data U.K. [27], were integrated into the model.
#### 3.1.1 Data
The dataset encompassed the period from the 2014/2015 season to the 2020/2021 season. To ensure the reliability of the analysis, data from the initial week of each season was excluded. These initial weeks were deemed less informative due to ongoing team rebuilding during the summer transfer market, introducing substantial uncertainty in team performance.
The final dataset comprised a total of 2,660 games, which were divided into three distinct groups: training (2014/2015 to 2019/2020 seasons), validation (first half of the 2020/2021 season), and test (second half of the 2020/2021 season). Temporal variables were aggregated using weighted maximum likelihood, employing an exponential decay rate of 0.1. This particular value was suggested by the Dixon-Coles analysis [28] and effectively captured the diminishing impact of historical data as time progressed. Moreover, the model considered variables for both home and visiting teams, encompassing various aspects of team performance (refer to the appendix for detailed variable information).
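A minimal sketch of the exponential down-weighting is given below; the time unit of the match ages is an assumption, as the paper does not restate it.

```python
import numpy as np

def decay_weights(ages, xi=0.1):
    """Exponentially down-weight historical matches with rate xi = 0.1,
    following the decay suggested by the Dixon-Coles analysis. `ages` is
    the time elapsed since each match."""
    return np.exp(-xi * np.asarray(ages, dtype=float))

print(decay_weights([0, 1, 5, 10]).round(3))   # [1.0, 0.905, 0.607, 0.368]
```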
As an additional input to the model, the final odds from Pinnacle Sports [29], a reputable bookmaker, were incorporated. This incorporation was motivated by the work of Surowiecki, which recognizes that market information holds predictive power in estimating potential events [30]. By integrating these odds, the model could effectively leverage market insights to enhance its predictive capabilities.
#### 3.1.2 Model Selection
To determine the variables with the highest predictive power, lasso-shrinkage techniques were employed for a multinomial regression analysis [31]. The use of variable selection was motivated by the relatively limited size of the dataset compared to the number of parameters in the neural network, which increased the risk of overfitting.
By employing the lasso technique, a subset of variables with the most significant predictive impact was identified. The multinomial regression model trained on this reduced set of variables achieved a predictive accuracy of 41.76% and a cross-entropy value of 1.34. In contrast, when trained on the complete set of variables, the model achieved an accuracy of 51.76% and a cross-entropy value of 1.22. It is important to note that the coefficients of the multinomial regression model were not easily interpretable, and the accuracy was significantly reduced by 19.3% compared to the original model, while the entropy was 9.8% higher. Considering these findings, it was determined that utilizing all variables in the subsequent deep learning models would be more beneficial. This decision was influenced by the understanding that deep learning models have the capability to effectively capture complex relationships and patterns present in the data, even if certain variables may have less intuitive interpretations.
Figure 1: Graphs illustrating the decay of weights in the lasso penalization models for the Home and Away outcomes. The bold lines correspond to variables whose absolute coefficients exceed one standard deviation of all coefficient values. Detailed information on the specific variables and their associated numbers can be found in the appendix.
A training set consisting of 2,200 observations was obtained, while the number of parameters to estimate was 2,700, given the \(30\times 30\times 3\) dimensions. Consequently, regularization was necessary to address the challenge of having more variables than observations.
Furthermore, certain hyperparameters were fixed across all deep learning models with different regularizations using the framework described in [9, 32]. These fixed hyperparameters included the architecture design, where a funnel architecture was employed, progressing from a higher number of neurons to a lower number as the number of hidden layers increased. The random seed and kernel initializer were also standardized, using He-Normal for hidden layers and Glorot-Normal for the output layer [32]. The number of hidden layers was set to three, and for models incorporating batch normalization regularization, the number of hidden layers became a hyperparameter. Additionally, the NAdam learning algorithm [33] was utilized consistently across all models.
Subsequently, several hyperparameters were learned through the optimization process. The number of neurons in the first layer was explored within a range of 70% to 200% of the number of variables, while for subsequent layers, the algorithm initiated the search with the number of neurons from the previous layer. The learning rate was selected from a list of three values: \(10^{-2}\), \(10^{-3}\), and \(10^{-4}\). The penalization magnitude and the convexity trade-off (elastic net) were determined by sampling penalty values from a uniform distribution.
To evaluate the predictive errors and information loss, cross-temporal validation techniques, specifically the Split Temporal Cross-Validation (STCV) approach, were employed.
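The following sketch illustrates the idea behind STCV with expanding training windows that always precede their validation block; the exact fold sizing is an assumption, as the paper does not detail its schedule.

```python
def split_temporal_cv(n_obs, n_folds, min_train):
    """Split Temporal Cross-Validation sketch: each fold trains on all
    observations before its validation block, so no future information
    leaks into training."""
    fold = (n_obs - min_train) // n_folds
    for i in range(n_folds):
        train_end = min_train + i * fold
        yield range(train_end), range(train_end, train_end + fold)

for tr, va in split_temporal_cv(n_obs=2660, n_folds=4, min_train=2000):
    print(len(tr), va.start, va.stop)
```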
#### 3.1.3 Model Assessment
Subsequently, the best models selected from the training and validation sets were trained. Among the different architectures considered, the _Drop Out_ architecture _consistently_ demonstrated superior performance in terms of precision and information loss. Therefore, it was chosen as the best model for further analysis.
The predictions made by the model were compared to the pre-match odds provided by the bookmaker Pinnacle Sports. This comparison was motivated by the belief that the odds reflect the collective wisdom of the crowds [30]. In terms of accuracy, the model achieved a 54% accuracy rate, while the crowds achieved 51.5%. In comparison, the null models of "always bet on the home team," "always bet on a draw," and "always bet on the away team" had accuracy rates of 38%, 20%, and 42%, respectively. Regarding information loss, the model achieved a value of 1.0318, while the crowds achieved 0.9966, and the historical frequencies up to the 19th matchweek of the 2020/2021 season had an information loss value of 1.0631.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Hyperparameters \(\backslash\) Model** & **Elastic Net** & **Lasso** & **Ridge** & **Drop Out** & **Batch Norm** \\ \hline
**Num. Layers** & 3 & 3 & 3 & 3 & 9 \\
**Num. Neurons** & 37, 25, 21 & 25, 21, 25 & 37, 33, 33 & 25, 41, 37 & 33, 33, 37, 37, 37, 37, 49, 45, 97 \\
**Learning Rate** & 0.01 & 0.01 & 0.001 & 0.001 & 0.001 \\
**Penalty** & 0.9, 0.8, 0.1, 0.3 & 0.1, 0.5, 0.06, 0.5 & 0.6, 0.8, 0.2, 0.01 & - & - \\
**Convexity** & 0.12, 0.03, 0.33, 0.23 & - & - & - & - \\
**Dropout Rate** & - & - & - & 0.1122, 0.00025, 0.000055 & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: Optimal Hyperparameters for Different Regularizations in the Validation Set.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Model** & **Loss** & **Accuracy** \\ \hline
**Elastic Net** & 1.0288 \(\pm\) 0.1241 & 58\% \(\pm\) 16.91\% \\
**Lasso** & 1.0405 \(\pm\) 0.1234 & 55\% \(\pm\) 16.88\% \\
**Ridge** & 1.0185 \(\pm\) 0.1273 & 57\% \(\pm\) 19\% \\
**Drop Out** & 0.9819 \(\pm\) 0.0996 & 55\% \(\pm\) 11.61\% \\
**Batch Norm** & 0.9386 \(\pm\) 0.1569 & 57\% \(\pm\) 17.64\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Cross Entropy in the Validation Set.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Model** & **Loss** & **Accuracy** \\ \hline
**Elastic Net** & 1.0288 \(\pm\) 0.1241 & 58\% \(\pm\) 16.91\% \\
**Lasso** & 1.0405 \(\pm\) 0.1234 & 55\% \(\pm\) 16.88\% \\
**Ridge** & 1.0185 \(\pm\) 0.1273 & 57\% \(\pm\) 19\% \\
**Drop Out** & 0.9819 \(\pm\) 0.0996 & 55\% \(\pm\) 11.61\% \\
**Batch Norm** & 0.9386 \(\pm\) 0.1569 & 57\% \(\pm\) 17.64\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: The average of the metrics plus/minus one sample standard deviation in the test set, obtained using STCV.
### Portfolios
This section presents the results of the optimal portfolios based on quadratic and logarithmic utilities. It is important to note the following assumptions and constraints throughout the analysis: no more than 100 percent of the wealth can be bet, short selling is not allowed, the risk-free asset has a zero return, money is infinitely divisible, and the portfolio is also infinitely divisible\({}^{2}\). Without loss of generality, the initial wealth is set to $1. Gains are reinvested each matchweek, i.e., \(W_{n}=\prod_{i=\mathrm{J19}}^{\mathrm{J38}}R_{i}(\underline{\ell})\).
Footnote 2: For numerical purposes, any wager less than 0.0001 was considered negligible.
Four types of scenarios were considered for the portfolios. Each strategy was evaluated under two scenarios: restricted betting, where only a single event per match can be bet upon, and unrestricted betting, allowing for multiple bets per match. Furthermore, portfolios were examined for both full strategies and fractional strategies at \(f=\)17%. The optimal fraction of 17% for this set was determined after conducting two hundred Dirichlet simulations in the validation set. In practice, it is common to use 25% of the Kelly Criterion strategy for betting purposes. The decision to split the bets is based on the insight that under-betting is preferable to over-betting, as indicated by E. Thorp [21].
#### 3.2.1 Complete Strategies
In the test set, there are 20 matchweeks, with 10 matches taking place on each match day. Each match has three possible outcomes: home win, draw, or away win. The results of one match are independent of the results of the other matches. Consequently, there are \(M=\sum_{i}^{r}m_{i}=3\times 10=30\) possible bets and \(N=\prod_{i}^{r}m_{i}=3^{10}=59,049\) combinations of outcomes per matchweek.
For portfolios with logarithmic utilities, the optimal bets were determined by optimizing the Multiple Simultaneous Kelly Criterion (13) using the SQP algorithm. On average, this model took approximately 20.9 seconds to converge per fixture. However, it should be noted that the numerical algorithm did not converge for matchweek 23 due to gradient overflow. Despite this challenge, it was decided to use the stakes from the last iteration of the algorithm as an approximation for that matchweek. Although it is important to acknowledge this approximation, it was necessary in order to maintain the continuity of the analysis and ensure the inclusion of matchweek 23 in the overall assessment of the strategies.
\begin{table}
\begin{tabular}{c c|c c c|c} \hline \hline
& & \multicolumn{3}{c|}{**Prediction**} & \\
& & Home & Draw & Away & **Total** \\ \hline
& Home & 60\(\backslash\)55 & 0\(\backslash\)0 & 16\(\backslash\)21 & **76** \\
**Observed** & Draw & 17\(\backslash\)15 & 0\(\backslash\)0 & 23\(\backslash\)25 & **40** \\
& Away & 36\(\backslash\)36 & 0\(\backslash\)0 & 48\(\backslash\)48 & **84** \\ \hline \hline \end{tabular}
\end{table}
Table 4: The confusion matrix compares the predictions of the models to the actual outcomes. Rows correspond to the observed outcomes, columns to the predictions, and the last column gives the observed totals. In each cell, the value on the left of the backslash is the count for the Drop Out model, while the value on the right corresponds to the predictions implied by the Pinnacle Sports market odds.
Figure 2: Plot of the median final wealth curve obtained by two hundred Dirichlet(**1**) simulations on the validation set.
In the case of quadratic utilities, the optimal portfolios were obtained by maximizing the Sharpe Ratio through the solution of the convex optimization problem (7) using the SQP algorithm. On average, the algorithm took 5.3 seconds per fixture to converge. It is worth mentioning that all optimizations successfully converged for the Sharpe Ratio criterion.
Three matches with arbitrage opportunities across three different matchweeks were identified. One notable example is the match between Everton and Aston Villa on June 5, 2021, which had a commission of \(tt=-0.0085\). The following table provides a summary of the performance of both strategies in comparison to the match featuring arbitrage on that particular match day.
#### 3.2.2 Restricted Strategies
The restricted betting sample space consists, for each match, of the event with the highest expected value. Mathematically, it can be defined as \(\Omega:=\bigtimes_{i}^{r}\left\{\omega_{k}\ \middle|\ k=\operatorname*{argmax}_{j}\{\mathbb{E}[\varrho_{i,j}]\},\quad j=1,2,\ldots,m_{i}\right\}\). Consequently, the number of possible outcomes is reduced to \(N=2^{10}\), and the total number of bets is \(M=3\times 10\). The algorithms for both utilities exhibited convergence, with an average convergence time of approximately 1 second for all match days, and there were no instances of non-convergence.
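A small sketch of this restriction, selecting for each match the event with the highest expected net return \(\mathbb{E}[\varrho_{i,j}]=o_{i,j}\hat{p}_{i,j}-1\) (probabilities and odds assumed):

```python
import numpy as np

def restrict_to_best_event(probs, odds):
    """For each match, keep only the event with the highest expected
    net return, as in the restricted space Omega defined above.
    Returns (event index, expected value) per match."""
    picks = []
    for p_i, o_i in zip(probs, odds):
        ev = np.asarray(o_i) * np.asarray(p_i) - 1.0
        picks.append((int(ev.argmax()), float(ev.max())))
    return picks

probs = [[0.5, 0.25, 0.25], [0.4, 0.3, 0.3]]   # assumed example inputs
odds = [[2.3, 3.8, 4.2], [2.8, 3.4, 3.1]]
print(restrict_to_best_event(probs, odds))      # picks the home win in both matches
```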
The table below presents the portfolios derived from the Kelly Criterion and the Sharpe Ratio Criterion for the final matchweek of the English Premier League. It is noteworthy that the Kelly criterion never wagers the entire wealth, unlike the Sharpe Ratio criterion. Additionally, while the two criteria allocate similar amounts for each event, the magnitude of the largest bet differs between them.
Figure 3: Plots of wealth \(W_{n}\) under complete strategies for both utilities. The line represents the accumulated wealth up to the \(i\)-th matchweek and the bars represent the profit or loss for the given matchweek.
#### 3.2.3 Portfolios' Performance
The metrics for the eight portfolios for the 20 fixtures are summarized below, including the metrics _pval bets_ and _pval wealth_. The _pval bets_ metric represents the p-value of the hypothesis test that the average of the betting outcomes is not positive. Similarly, the _pval wealth_ metric is the p-value for the hypothesis that the wealth obtained in the matchweeks is not positive.
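These p-values can be computed, for instance, with a one-sided one-sample test as in the sketch below; the exact test used in the paper is not stated, so a t-test is assumed here.

```python
import numpy as np
from scipy import stats

def one_sided_pval(x):
    """p-value for H0: the mean of x is not positive (one-sided t-test),
    mirroring the `pval bets` / `pval wealth` metrics."""
    t, p_two = stats.ttest_1samp(x, 0.0)
    return p_two / 2 if t > 0 else 1 - p_two / 2

weekly_profits = np.array([0.05, -0.02, 0.08, 0.01, -0.03])  # assumed data
print(one_sided_pval(weekly_profits))
```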
## 4 Conclusions
This study aimed to address several key aspects: first, identifying the optimal betting strategy for a rational gambler seeking to maximize expected utility, taking into account logarithmic or quadratic utilities. Second, analyzing the characteristics and performance of complete and restricted betting strategies to gain insights into their respective qualities. Third, exploring the qualitative and quantitative differences between portfolios
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline
**Home** & Arsenal & Aston Villa & Fulham & Leeds & Leicester & Liverpool & Man. City & Sheffield & West Ham & Wolves \\
**Away** & Brighton & Chelsea & Newcastle & West Brom & Tottenham & Crystal Palace & Everton & Burnley & Southampton & Man. United \\ \hline
\(e\) & **D** & **H** & **A** & **A** & **A** & **A** & **H** & **A** & **A** & **A** \\
\(o\) & 4.46 & 7 & 3.41 & 7 & 3.5 & 18.32 & 1.47 & 2.45 & 5.03 & 2.75 \\
\(\hat{p}\) & 26.25\% & 31.66\% & 41.48\% & 26.68\% & 31.58\% & 10.17\% & 75.39\% & 48.46\% & 25.11\% & 41.88\% \\
\(\mathbb{E}[\varrho]\) & 17.07\% & 121.65\% & 41.45\% & 87.95\% & 10.53\% & 86.24\% & 10.82\% & 18.73\% & 26.31\% & 15.17\% \\ \hline
**Strategies** & & & & & & & & & & \\
**Kelly** & 3.23\% & 17.89\% & 12.77\% & 12.05\% & 2.65\% & 3.93\% & 14.33\% & 8.56\% & 4.47\% & 5.61\% \\
**Sharpe** & 4.44\% & 11.50\% & 14.72\% & 9.16\% & 3.99\% & 2.82\% & 27.05\% & 12.52\% & 5.54\% & 8.26\% \\ \hline \end{tabular}
\end{table}
Table 6: Restricted strategies for the final day of the 20/21 EPL. The events \(e\) denote bets on home win **H**, draw **D**, or away win **A**; \(o\) denotes the corresponding decimal odds.
Figure 4: Plots depicting the wealth \(W_{n}\) under restricted strategies for both quadratic and logarithmic utilities.
constructed based on the Kelly Criterion and the Sharpe Ratio Criterion in a real-life betting scenario. Fourth, developing statistical learning models to forecast outcomes in the English Premier League. Notably, this study made significant contributions by introducing a matrix-based formulation (11) for the multivariate Kelly Criterion and formulating the optimization problem for the multiple and simultaneous Kelly Criterion (13), enhancing the understanding and applicability of these approaches in practical settings.
The successful demonstration of the first objective of this study involved the development of a systematic method to identify optimal bets within the framework of \((D,E,C,(\succeq))\). The method followed a step-by-step approach: first, by defining the set \(E\) to determine the elements of \(C\), and subsequently constructing the set \(D\) from these two sets. Second, the probabilities of the joint events were estimated. Third, the optimal decision \(d_{\ell_{*}}\) was determined by maximizing the expected utility of returns, denoted as \(\mathbb{E}[u(R(\underline{\ell}))]\), while ensuring that \(d_{\ell_{*}}\) belongs to the set \(D\). Finally, appropriate metrics were defined to assess the performance of the strategies in terms of returns.
Regarding the second aspect investigated, it was observed that although the restricted strategies exhibited better convergence and higher returns compared to the complete strategies, they also displayed higher variance and lower diversification. Notably, in extreme cases, the Sharpe Ratio criterion led to the possibility of the player's ruin, as evidenced by the complete loss of funds two weeks prior to the conclusion of the study. Moreover, it was theoretically established that the advantages of exploiting arbitrage opportunities diminish when stakes are limited. Furthermore, the narrowing of the set of possible bets in restricted strategies resulted in a compromise of the fundamental properties of the criteria. Both the Kelly Criterion and the Sharpe Ratio Criterion exhibit similarities and differences in their portfolio outcomes. Firstly, unlike portfolios based on the Sharpe Ratio, Kelly-based portfolios offer protection against the risk of player ruin. For instance, on average, the full investment strategy and the Kelly-constrained strategy wagered 98% and 83% of the total wealth, respectively. Secondly, it is noteworthy that the logarithmic growth achieved through the Kelly Criterion is not invariant to fractional bets, which sets it apart from the Sharpe Ratio approach. Consequently, fractionalizing the Kelly strategy prior to optimization results in a distinct strategy compared to the fractional strategy \(f\underline{\ell}\) derived post-optimization.
Thirdly, it is noteworthy that portfolios utilizing logarithmic utilities tend to exhibit relatively lower levels of diversification compared to portfolios employing quadratic utilities, as a proportion of the total stake invested in the portfolio. This observation was addressed by examining the maximum stake per matchweek relative to the total amount wagered in the portfolio, revealing that the Kelly Criterion maximum bet averaged 29.7% compared to the maximum Sharpe Ratio's bet average of 25.9%. Consequently, logarithmic portfolios tend to display higher volatility, resulting in more pronounced gains or losses. However, when considering the total budget of $1, the maximum Kelly Criterion stakes average 1.9% less than the Sharpe Ratio approach. This discrepancy arises due to the logarithmic nature of the growth function \(G(\underline{\ell})\), which tends towards negative infinity as \(\sum_{i}\ell_{i}\) approaches 1, as demonstrated by Thorp [21].
Fourth, while the constituent elements of the portfolios are similar for both the Kelly Criterion and the Sharpe Ratio Criterion (see Table 6), the strategies themselves are not proportionally aligned between the two methods. This divergence is evident even in extreme cases such as arbitrage, where notable differences are observed in the resulting portfolios (see Table 5).
In regards to the deep learning model, several considerations need to be addressed. Despite the advantages in terms of predictive power, flexibility, and relaxed assumptions offered by neural networks in modeling sporting events, it is important to acknowledge that the present study faced limitations due to the low volume of available data. Nevertheless, employing a deep learning approach still represents an improvement
\begin{table}
\begin{tabular}{l l c c c c c c c c} \hline \hline
\multirow{3}{*}{**Model**} & **Strategy** & **Sharpe** & **Kelly** & **Kelly** & **Sharpe** & **Kelly** & **Sharpe** & **Kelly** & **Sharpe** \\
& **Restricted** & Yes & Yes & No & No & Yes & No & No & Yes \\
& **Fraction** & 17\% & 17\% & 17\% & 17\% & 100\% & 100\% & 100\% & 100\% \\ \hline
\multirow{9}{*}{**Metrics**} & **Final Wealth** & **135\%** & **111\%** & **102\%** & **97\%** & **24\%** & **16\%** & **3\%** & **0\%** \\
& **Number of Bets** & 196 & 195 & 319 & 323 & 195 & 323 & 319 & 176 \\
& **Average Stake** & 17\% & 14\% & 17\% & 17\% & 83\% & 100\% & 98\% & 100\% \\
& **Hits** & 33\% & 33\% & 30\% & 30\% & 33\% & 30\% & 32\% \\
& **Sharpe Average** & 0.62 & 0.58 & 0.64 & 1.41 & 0.58 & 1.41 & 0.64 & 0.62 \\
& **Log-Growth Average** & 0.06 & 0.06 & 0.07 & 0.04 & 0.21 & 0.16 & 0.23 & \(\sim\) \\
& **Volatility Average** & 0.01 & 0.02 & 0.02 & 0.00 & 0.56 & 0.13 & 0.54 & 0.38 \\
& **Betting pval** & 0.21 & 0.35 & 0.42 & 0.50 & 0.35 & 0.50 & 0.42 & 0.35 \\
& **Wealth pval** & 0.16 & 0.33 & 0.41 & 0.50 & 0.33 & 0.50 & 0.41 & 0.31 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Summary statistics of the performance for each of the portfolios throughout the test period.
over traditional multinomial regression for sports modeling. Furthermore, the selection of variables used in the model remains an area of ongoing investigation. It is evident that there is a deficiency in the number of variables available to accurately determine match outcomes, particularly at the individual player level. Despite this limitation, Zimmerman suggests that there exists an empirical benchmark of 75% in terms of predictive accuracy for sporting events [3]. Moreover, Hubacek emphasizes that the selection of variables holds greater significance than the specific statistical model employed [34]. These perspectives underscore the importance of further refining the variable selection process to enhance the predictive capabilities of the model.
Finally, it is imperative to validate the underlying assumptions of the models. The cornerstone of the developed method relies on the premise that the actual probabilities of events are known. However, upon examining the predictions' variability in Table 3 and the classification errors illustrated in Table 4, doubts arise regarding the veracity of this assumption. Notably, the model's a priori cross entropy was anticipated to be 1.0318, whereas the empirical entropy based on frequencies amounts to 1.0631. Furthermore, the maximum entropy for a three-event phenomenon is calculated to be 1.0986. Consequently, the certainty surrounding this assumption is called into question.
Furthermore, it is crucial to evaluate whether the model's performance of 35.8% can be attributed to "luck" or "skill." To address this, the model is subjected to a test against 500 Dirichlet simulations with unit parameters, conducted within the same test period but without restrictions and employing strategies bounded to the same fraction \(f\) as the model in question. The results reveal that, on average, the model outperforms 78 out of 100 simulations. This serves as compelling evidence that the model's performance extends beyond mere chance. Nevertheless, despite investigating the relationship between log-growth and performance for the simulations, no linear evidence supporting such a connection is found (p-value of 0.595).
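The luck-versus-skill test can be sketched as follows; the backtest function is a placeholder assumption standing in for the test-set evaluation of a stake vector.

```python
import numpy as np
rng = np.random.default_rng(0)

def dirichlet_benchmark(model_wealth, backtest, n_sims=500, m=30, f=0.17):
    """Share of Dirichlet(1)-drawn strategies beaten by the model, each
    scaled to the same fraction f, as in the test described above."""
    beats = sum(model_wealth > backtest(f * rng.dirichlet(np.ones(m)))
                for _ in range(n_sims))
    return beats / n_sims

# illustrative stand-in for the test-set backtest of a random strategy
fake_backtest = lambda ell: float(np.prod(1 + rng.normal(0, 0.05, 20)))
print(dirichlet_benchmark(model_wealth=1.35, backtest=fake_backtest))
```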
### Future Research
The findings and hypotheses explored in this study open up various avenues for future research. These avenues span both financial methodologies and predictive modeling in the context of sports betting.
From a financial perspective, it would be of great theoretical and empirical interest to examine the long-term behavior of returns when encountering arbitrage opportunities. Exploring regularization techniques, particularly in the context of \(L_{2}\) regularization for betting portfolios, could also yield valuable insights. Additionally, investigating the divergences in portfolio composition and returns between restricted and full strategies, considering both quadratic and logarithmic utilities, through simulation studies would provide further understanding.
In the realm of predictive modeling, there are multiple aspects worth exploring. One avenue is adopting a Bayesian perspective to analyze the prediction problem, complementing the frequentist approach employed in this research. Furthermore, incorporating player-level data in addition to aggregate team data could enhance the predictive model's accuracy and granularity. The inclusion of data from other soccer leagues could also contribute to a more comprehensive and robust modeling approach.
It is important to acknowledge that the work presented in this study is grounded in Von Neumann and Morgenstern's Axioms of Preference. However, it is well-known that these axioms may not always hold in reality. Exploring the disparities between theoretical preferences and empirical observations, given the same available information, would shed light on the limitations and challenges associated with relying solely on axiomatic models. Furthermore, investigating optimal policies under Reinforcement Learning models could provide valuable insights into the dynamic decision-making processes within the realm of sports betting.
Figure 5: Inter-temporal and final distribution of 500 simulated strategies. The final wealth of these strategies is compared with that of the best model.
Overall, the future research directions outlined above have the potential to further advance the understanding of financial strategies, predictive modeling approaches, and the complex dynamics of decision-making in sports betting beyond this paper.
## Appendix
The appendix provides a list and description of the three databases utilized in this study for the estimation of probabilities and for obtaining odds from various bookmakers. The variables employed in the prediction of match outcomes, indicated with a dagger (\(\dagger\)), and the target variable, marked by an asterisk (\(*\)), are also specified.
### Football Data U.K.
The "Football Data U.K." database contains records representing individual matches played in the English Premier League from the 2013/2014 season through the 2020/2021 season. The database includes a range of variables that provide valuable insights into match characteristics and team performance. However, it should be noted that a subset of variables was excluded from the analysis for matches prior to 2019, as these variables were not available in the reported files during that time period.
### Fetched Variables
* date: Date in day/month/year on which the match took place.
* hometeam: Home team name.
* awayteam: Name of the team playing as visitor.
* fthg: Goals scored by the home team at the end of the game.
* ftag: Goals scored at the end of the game by the away team.
* ftr: Final score. The possible values are: _H, D, A_. Which represent that the home team won, drew or that the visiting team won, respectively.
* referee: Name of the main referee who directed the match.
* _h, _d, _a: The bookmakers' odds for the possible outcomes of the match. The information is collected on Tuesdays and Fridays. The bookmakers are:
* b365: Bet365.
* bw: Bwin.
* iw: Interwetten.
* ps: Pinnacle Sports.
* vc: Victor Chandler.
* wh: William Hill.
* \(\dagger\)psch, pscd, psca: Pinnacle Sports' final odds, that is, an instant before the match starts, for the result that the home team wins, draws or that the away team wins.
### Generated Variables
* \(\dagger\)matchweek: The day on which the respective matches are played.
* \(*\)result: The variable _ftr_ renamed.
* season: Season in which they are playing. If the season is 2013/2014, 13 is captured.
* maxo_: The maximum odds for the three possible outcomes (H, D, A) of the six bookmakers mentioned at the beginning. _Note_: Pinnacle Sports final odds are not considered. _Note 2_: These are the odds used for betting.
* market_tracktake: The market commission. That is, the sum of the inverse of the maximum odds found for each outcome.
* The odds without the respective commission, computed for both the collected maximum odds and the final Pinnacle Sports odds.
### Understats
The Understats database provides match-level data for each team in the English Premier League. Each record in the table represents a match that a team has played, rather than the match itself. This means that if there are 380 matches in the Premier League in a season, there will be 760 records in this table for that season.
It is important to note that observations for teams on the first matchweek of each season were removed from the dataset. This is because the variance between the last game of the previous season and the first game of the current season tends to be very large due to changes in the player market and initial team performances, as mentioned in the present work. Removing these observations helps to avoid potential biases in the data caused by these factors.
**Fetched Variables**
* h_a: Character that represents whether the team plays as home or away team, whose values are \(a\), \(h\), respectively.
* xG: Number of goals expected in the match by the team.
* xGA: Number of goals expected in the match by the opposing team.
* npxG: Number of expected goals in the match by the team without taking penalties into account.
* npxGA: Number of goals expected in the match by the opposing team without taking penalties into account.
* npxGD: Difference between npxG and npxGA.
* deep\({}^{3}\): Number of passes completed by the team in the last quarter of the pitch, on the opposing team's side.
* deep_allowed\({}^{3}\): Number of passes completed by the opposing team in the last quarter of the pitch, on the team's side.
Footnote 3: These statistics present great inconsistencies with respect to the official Understats page. Likewise, since they were obtained through an R and Python package, the methodology with which these variables were obtained is unknown.
* scored: Number of goals scored by the team in the match.
* missed: Number of goals conceded by the team in the match.
* xpts: Number of expected points. It is the expected result for the team.
* result: Result of the match for the team. Possible values are _w, d, l_ representing that the team won, drew or lost the match, respectively.
* date: Date in year-month-day when the match took place.
* wins, draws, loses: Dummy variables representing whether the team won, drew or lost the match, respectively.
* pts: Points obtained by the result of the match for the team. Winning, drawing or losing awards 3, 1 and 0 points, respectively.
* ppda.att\({}^{3}\): Total passes made by the team when attacking divided by the number of defensive actions of the opposing team (interceptions + tackles + fouls). Metric suggested by Colin Trainor.
* ppda.def\({}^{3}\): Total passes made by the team when defending divided by the number of defensive actions of the opposing team.
* ppda.allowed.att\({}^{3}\): Total passes made by the opposing team when attacking divided by the number of the team's defensive actions (interceptions + tackles + fouls).
* ppda.allowed.def\({}^{3}\): Total passes completed by the opposing team while defending divided by the number of the team's defensive actions.
### EA Sports
* \(\dagger\)att: Rating from 1 to 100 of the team's attack up to that week.
* \(\dagger\)mid: Rating from 1 to 100 of the team's midfield up to that week.
* \(\dagger\)def: Rating from 1 to 100 of the team's defense up to that week.
* \(\dagger\)transfer_budget: Budget for the transfer market, in millions of euros, of the team for that season.
* speed\({}^{4}\): Type of speed the team plays with.
* dribbling\({}^{4}\): Type of the number of dribbles with which the team plays.
* passing\({}^{4}\): Type of passes with which the team plays. They can be very risky, normal, or safe passes.
* positioning: Formation with which the team plays.
* crossing\({}^{4}\): Type of band changes in the passes with which the team plays.
* aggression\({}^{4}\): Aggressiveness with which the team defends.
* pressure\({}^{4}\): Pressure with which the team defends.
* team_width: Width of the formation with which the team plays.
* defender_line\({}^{4}\): Type of mark with which the team defends.
Footnote 4: Despite being variables with excellent information, from the 2019 season onwards SoFIFA stopped updating these values and they became constant for all teams. So these variables were no longer, in their entirety, informative.
#### Main Database
The main database used in the machine learning models consists of various variables, each undergoing a specific transformation before being used to train the neural networks. It should be noted that the standardization or normalization transformations applied to the variables have a time window of one matchweek, meaning that the calculations are based on data from the same week. These transformations are important for ensuring that the variables are appropriately scaled and prepared for input into the neural networks.
1. matchweek\({}^{5}\)
2. position_table_home\({}^{6}\)
3. total_pts_home\({}^{7}\)
4. npxGD_ma_home
5. npxGD_var_home
6. big_six_home
7. promoted_team_home
8. position_table_away\({}^{6}\)
9. total_pts_away\({}^{7}\)
10. npxGD_ma_away
11. npxGD_var_away
12. big_six_away
13. promoted_team_away
17. ova_home\({}^{8}\)
18. att_home\({}^{8}\)
19. att_home\({}^{8}\)
20. saa_home
Footnote 5: Normalized. That is to say \(s=(x-x_{(1)})/(x_{(n)}-x_{(1)})\).
Footnote 6: Inversely normalized. In other words \(s=(x_{(n)}-x)/(x_{(n)}-x_{(1)})\).
Footnote 7: Standardized, i.e. \(t=(x-\bar{x})/\bar{s}\).
Footnote 8: Normalized, with the maximum observation clipped at 100.
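For reference, the three footnoted transformations correspond to the following helper functions (a sketch; the one-matchweek windowing itself is applied outside these helpers):

```python
import numpy as np

def normalize(x):        # footnote 5: s = (x - x_(1)) / (x_(n) - x_(1))
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def inv_normalize(x):    # footnote 6: s = (x_(n) - x) / (x_(n) - x_(1))
    x = np.asarray(x, dtype=float)
    return (x.max() - x) / (x.max() - x.min())

def standardize(x):      # footnote 7: t = (x - mean) / sample std
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)
```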
| 2308.03249 | Analysis of Optical Loss and Crosstalk Noise in MZI-based Coherent Photonic Neural Networks | With the continuous increase in the size and complexity of machine learning models, the need for specialized hardware to efficiently run such models is rapidly growing. To address such a need, silicon-photonic-based neural network (SP-NN) accelerators have recently emerged as a promising alternative to electronic accelerators due to their lower latency and higher energy efficiency. Not only can SP-NNs alleviate the fan-in and fan-out problem with linear algebra processors, their operational bandwidth can match that of the photodetection rate (typically 100 GHz), which is at least over an order of magnitude faster than electronic counterparts that are restricted to a clock rate of a few GHz. Unfortunately, the underlying silicon photonic devices in SP-NNs suffer from inherent optical losses and crosstalk noise originating from fabrication imperfections and undesired optical couplings, the impact of which accumulates as the network scales up. Consequently, the inferencing accuracy in an SP-NN can be affected by such inefficiencies -- e.g., can drop to below 10% -- the impact of which is yet to be fully studied. In this paper, we comprehensively model the optical loss and crosstalk noise using a bottom-up approach, from the device to the system level, in coherent SP-NNs built using Mach-Zehnder interferometer (MZI) devices. The proposed models can be applied to any SP-NN architecture with different configurations to analyze the effect of loss and crosstalk. Such an analysis is important where there are inferencing accuracy and scalability requirements to meet when designing an SP-NN. Using the proposed analytical framework, we show a high power penalty and a catastrophic inferencing accuracy drop of up to 84% for SP-NNs of different scales with three known MZI mesh configurations (i.e., Reck, Clements, and Diamond) due to accumulated optical loss and crosstalk noise. | Amin Shafiee, Sanmitra Banerjee, Krishnendu Chakrabarty, Sudeep Pasricha, Mahdi Nikdast | 2023-08-07T02:01:18Z | http://arxiv.org/abs/2308.03249v1 |
# Analysis of Loss and Crosstalk Noise in MZI-based Coherent Silicon Photonic Neural Networks
###### Abstract
With the continuous increase in the size and complexity of machine learning models, the need for specialized hardware to efficiently run such models is rapidly growing. To address such a need, silicon-photonic-based neural network (SP-NN) accelerators have recently emerged as a promising alternative to electronic accelerators due to their lower latency and higher energy efficiency. Not only can SP-NNs alleviate the fan-in and fan-out problem with linear algebra processors, their operational bandwidth can match that of the photodetection rate (typically \(\approx\)100 GHz), which is at least over an order of magnitude faster than electronic counterparts that are restricted to a clock rate of a few GHz. Unfortunately, the underlying silicon photonic devices in SP-NNs suffer from inherent optical losses and crosstalk noise originating from fabrication imperfections and undesired optical couplings, the impact of which accumulates as the network scales up. Consequently, the inferencing accuracy in an SP-NN can be affected by such inefficiencies--e.g., can drop to below 10%--the impact of which is yet to be fully studied. In this paper, we comprehensively model the optical loss and crosstalk noise using a bottom-up approach, from the device to the system level, in coherent SP-NNs built using Mach-Zehnder interferometer (MZI) devices. The proposed models can be applied to any SP-NN architecture with different configurations to analyze the effect of loss and crosstalk. Such an analysis is important where there are inferencing accuracy and scalability requirements to meet when designing an SP-NN. Using the proposed analytical framework, we show a high power penalty and a catastrophic inferencing accuracy drop of up to 84% for SP-NNs of different scales with three known MZI mesh configurations (i.e., Reck, Clements, and Diamond) due to accumulated optical loss and crosstalk noise.
Silicon photonic integrated circuits, deep learning, optical neural networks, optical loss, optical crosstalk noise.
## I Introduction
With the rising demand for larger neural networks to address complex and computationally expensive problems, artificial intelligence (AI) accelerators need to consistently deliver better performance and improved accuracy while being energy-efficient. In this context, deep neural networks have attracted substantial interest for various applications ranging from image recognition to network anomaly detection, decision-making problems, self-driving cars, pandemic rate prediction, and early-stage cancer detection [1]. Given the significant growth in the demand for data-driven and computationally expensive applications, the energy efficiency of electronic-based deep-learning inference accelerators has been relatively low, and they have been unable to keep up with the performance requirements of emerging deep-learning applications [2, 3]. Silicon photonics (SiPh) enables the deployment of integrated photonics across a wide range of applications, from realizing ultra-fast communication for Datacom applications [4, 5, 6, 7] to energy-efficient optical computation in emerging hardware accelerators for deep learning [8, 9, 10]. To alleviate the limitations of conventional CMOS-based electronic accelerators in terms of energy consumption and latency, new SiPh-based hardware accelerators optimized for deep learning applications are on the rise. By leveraging optical interconnects for communication and photonic devices for computation, silicon-photonic-based neural network (SP-NN) accelerators offer the promise of up to 1000 times higher energy efficiency for performing computationally expensive multiply-and-accumulate operations [2], which are the most power-hungry and common operations in deep learning applications [2].
Among different SP-NN implementations, coherent SP-NNs, which operate on a single wavelength, have an inherent advantage over noncoherent SP-NNs that require power-hungry wavelength-conversion steps and multiple wavelength sources [11]. Fig. 1 presents an overview of a multi-layer coherent SP-NN with \(N_{1}\) inputs, \(N_{2}\) outputs, and \(M\) layers. Each layer comprises an optical-interference unit (OIU) implemented using an array of Mach-Zehnder interferometers (MZIs) with a specific architecture, connected to a nonlinear-activation unit (NAU) using an optical gain (amplification) unit (OGU). Within an OIU, MZIs can be used to realize matrix-vector multiplication as shown by [2, 12]. Accordingly,
several coherent SP-NN architectures have been proposed by cascading arrays of MZIs to perform large-scale linear multiplication in the optical domain [13, 14, 15].
Fig. 1: Overview of a coherent SP-NN with \(N_{1}\) inputs, \(N_{2}\) outputs, and \(M\) layers.
While SP-NNs are promising alternatives to electronic-based deep learning hardware accelerators, several factors limit their performance and scalability. The underlying devices in SP-NNs (e.g., MZIs in coherent SP-NNs) suffer from optical loss and crosstalk noise due to device fabrication imperfections (e.g., sidewall roughness) and undesired mode couplings [8, 16]. For example, prior work has shown up to 1.5 dB insertion loss and \(-18\) dB crosstalk in a 2\(\times\)2 MZI [17]. While optical loss and crosstalk noise are small and seem to be negligible at the device level, they accumulate as the network scales up, hence leading to severe performance degradation at the network and system level (e.g., drop in inferencing accuracy). Moreover, crosstalk cannot be filtered in coherent SP-NNs--our focus in this paper--due to the coherence between the noise and victim signals. Therefore, there is a critical need for careful analysis of optical loss and crosstalk noise in coherent SP-NNs and exploring their impact on SP-NN performance.
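As a rough illustration of this accumulation (a back-of-the-envelope sketch, not the detailed models developed later in this paper), the worst-case insertion loss grows linearly with mesh depth, assuming the per-MZI loss quoted above and the standard depths of \(N\) MZI columns for a Clements mesh and \(2N-3\) for a Reck mesh:

```python
def worst_path_loss_db(n_inputs, il_mzi_db=1.5, mesh="clements"):
    """Linear scaling of worst-case insertion loss (in dB) with mesh depth."""
    depth = n_inputs if mesh == "clements" else 2 * n_inputs - 3
    return depth * il_mzi_db

for n in (8, 16, 64):
    print(n, worst_path_loss_db(n), worst_path_loss_db(n, mesh="reck"))
```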
The novel contribution of this paper is to comprehensively analyze optical loss and crosstalk noise and their impact on coherent SP-NN performance from the device level to the system level. We develop a realistic device-level MZI compact model to analyze the optical loss from different sources (i.e., propagation loss, directional coupler loss, and metal absorption loss) and the coherent crosstalk noise in the MZI. This model is also able to capture the impact of optical phase settings, which represent weight parameters in coherent SP-NNs, on the MZI's optical loss and crosstalk noise. Leveraging our accurate device-level models, we present layer- and network-level optical loss and coherent crosstalk models that scale with the number of inputs and layers in coherent SP-NNs. In addition, we propose a detailed analysis of the effect of optical loss and crosstalk in SP-NNs when the optoelectronic NAU units are used. The proposed framework enables an accurate exploration of the laser power penalty and inferencing accuracy drop in SP-NNs with different mesh configurations under the effect of optical loss and crosstalk noise. Leveraging our proposed framework, we also quantify the maximum optical loss acceptable in the underlying devices when specific inferencing accuracy goals must be met within an SP-NN.
The proposed analytical framework can be applied to any coherent SP-NN architecture of any size to analyze the average and worst-case optical loss and crosstalk noise in the network. In this paper, we consider three well-known MZI-based coherent SP-NN architectures, namely Clements [14], Reck [15], and Diamond [13]. For example, for the case study of the Clements SP-NN with 16 inputs and 2 hidden layers, we show that the optical loss, optical crosstalk power, and laser power penalty (i.e., to compensate for optical loss and crosstalk) can be as high as 6 dB, 31.6 dBm, and 20 dBm, respectively. Considering the MNIST classification task as an example, we show that the network inferencing accuracy can drop by about 84.6% due to optical loss and crosstalk noise. We also show that by increasing the number of inputs from 16 to 64 in the same network, the resulting optical power penalty increases significantly to as high as 140 dBm. The proposed analyses in this paper extend our prior work in [12] by performing the loss and crosstalk analysis for two more SP-NN configurations, analyzing the effect of optical loss and crosstalk in SP-NNs when optoelectronic nonlinear activation units are used, presenting an optical power penalty model for SP-NNs, and analyzing the scalability constraints due to optical loss and crosstalk in MZI-based SP-NNs when used as a photonic processing unit for high-performance computation.
The rest of the paper is organized as follows. Section II presents an overview of the building blocks in SP-NNs, SP-NN design and working mechanism, and prior related work. Section III presents analytical models to analyze the impact of optical loss and crosstalk from the device level to the system level in SP-NNs. The impact of the optical loss and crosstalk in optoelectronic NAU units is modeled in this section. Section IV presents the simulation results to show the impact of loss and crosstalk on the performance of SP-NNs with the three MZI mesh configurations of Clements, Reck, and Diamond. Section V presents the discussion on the effect of optical loss and crosstalk noise on SP-NN power consumption (i.e., laser power penalty) as well as scalability constraints in SP-NNs. Finally, Section VI concludes this work.
## II Background and Prior Related Work
In this section, we present an overview of the MZI building block in coherent SP-NNs as the primary vector-matrix multiplier unit and some fundamentals of MZI-based coherent SP-NNs. We also discuss different sources of optical loss and crosstalk in MZIs. Moreover, we review prior work on studying the effect of loss and crosstalk in SP-NNs.
### _Mach-Zehnder Interferometer (MZI)_
MZIs can be used to realize linear multiplication between a 2\(\times\)1 vector (signals applied to the two inputs) and a 2\(\times\)2 matrix (defined based on the phase settings in the MZI). Such an MZI-based multiplier unit can be constructed using two 3-dB directional couplers (DCs) with an ideal splitting ratio of 50:50 and two integrated phase shifters (\(\theta\) and \(\phi\)), as shown in Fig. 2. Phase shifters in this design can be implemented, for example, using microheaters on top of the underlying waveguide [17]. By introducing a temperature change using microheaters, the refractive index of the underlying silicon waveguide will change due to the thermo-optic effect, leading to a change in the phase of the electric field of the propagating optical signal. Therefore, by controlling the phase shift between the two arms in an MZI, we can control the interference in the output. Note that in the MZI in Fig. 2, \(0\leq\theta\leq\pi\) and \(0\leq\phi\leq 2\pi\). The transfer matrix of an MZI-based multiplier unit can be realized by multiplying the transfer matrices of the two 3-dB DCs (\(T_{DC1,DC2}\)) and the transfer matrices of the two phase shifters (\(T_{\phi,\theta}\)). Accordingly, the ideal transfer
matrix of an MZI-based multiplier unit (i.e., without optical loss and crosstalk noise) can be defined as [13, 18]:
\[T_{MZI}(\theta,\phi)=T_{DC2}\cdot T_{\theta}\cdot T_{DC1}\cdot T_{\phi}=\begin{pmatrix}\frac{e^{i\phi}}{2}(e^{i\theta}-1)&\frac{i}{2}(e^{i\theta}+1)\\ \frac{ie^{i\phi}}{2}(e^{i\theta}+1)&-\frac{1}{2}(e^{i\theta}-1)\end{pmatrix}. \tag{1}\]
While MZIs can help perform matrix-vector multiplication in the optical domain, they have a large footprint. For example, state-of-the-art MZI multipliers can be up to about 300 \(\mu\)m long and 30 \(\mu\)m wide [8], limiting the scalability of MZI-based coherent SP-NNs [17, 19]. In addition to their large footprint, MZIs also suffer from high optical losses and crosstalk noise [20]. The optical loss in an MZI originates from absorption in the nearby metallic contacts when microheaters are used to apply the required phase shifts on the two MZI arms, from imperfections in the directional couplers, and from the propagation loss of the waveguides, which is mainly due to sidewall roughness. The optical loss and crosstalk noise in MZIs will be discussed in detail in Section III.
### _MZI-based Coherent SP-NNs_
MZI-based SP-NNs rely on the manipulation of the electrical field's phase of a single optical wavelength to perform matrix-vector multiplication. MZIs in such SP-NNs are responsible for the phase manipulations and interference that carry out the computations [11, 13, 14, 15, 17]. MZIs can be cascaded in the form of an array following a specific configuration to implement an OIU for performing large matrix-vector multiplication in the optical domain. Fig. 1 shows an overview of a coherent SP-NN composed of an optical interference unit (OIU), optical gain unit (OGU), and nonlinear activation unit (NAU). A fully connected layer (\(L_{i}\)) with \(n\) inputs performs matrix-vector multiplication between the input vector (\(I_{i}\)) and a weight matrix (\(W_{i}\)). The output vector of the OIU will then be passed into the OGU to realize unitary matrices with arbitrary magnitudes, and eventually into the NAU to apply a nonlinear activation function (\(f_{i}\)), the result of which will be the input to the next layer (\(L_{i+1}\)). The output of \(L_{i}\) can be mathematically modeled as \(O_{i}^{n_{i}\times 1}=f_{i}(W_{i}^{n_{i}\times n_{i-1}}\cdot O_{i-1}^{n_{i-1}\times 1})\), in which \(W_{i}\) is the layer's corresponding weight matrix. The weight matrix (\(W_{i}\)) can be obtained by training the network. Using singular value decomposition (SVD), the obtained weight matrix can be mathematically modeled as \(W_{i}=U_{i}^{n_{i}\times n_{i}}\Sigma_{i}^{n_{i}\times n_{i-1}}V_{i}^{H,n_{i-1}\times n_{i-1}}\). In this formulation, \(U_{i}\) and \(V_{i}\) are unitary matrices. Moreover, \(V_{i}^{H}\) denotes the Hermitian transpose of \(V_{i}\) and \(\Sigma_{i}\) is a diagonal matrix consisting of the singular values of \(W_{i}\). A unitary matrix can be implemented by an array of cascaded 2\(\times\)2 MZIs in a specific configuration according to:
\[U_{i}^{n_{i}\times n_{i}}=D\left(\prod_{(m,n)\in S}T_{MZI_{m,n}}\right). \tag{2}\]
Using this scheme, each unitary matrix \(U\) can be decomposed into the product of several MZIs' transfer matrices. The order of the multiplication of the MZIs' transfer matrices plays an important role in SP-NNs, determining the configuration of the MZIs in the SP-NN (i.e., Clements, Reck, or Diamond; see Fig. 2). In (2), \(D\) is a diagonal matrix with complex elements of unity modulus [14], and \(S\) denotes the order of the multiplication of the MZIs' transfer matrices. \(S\) is determined by the configuration of the array of cascaded MZIs used to map the weight matrices in order to perform matrix-vector multiplication in the optical domain. Moreover, \(m\) and \(n\) denote the input ports (i.e., \(I_{1}\)-\(I_{4}\) in Fig. 2(a)) whose signals are transformed by each MZI in the network. For example, \(m=1\) and \(n=2\) refer to the MZI between input ports 1 and 2 in the network. Note that \(n=m+1\) always applies [14]. The location of each MZI in the network is determined while mapping the weights to the array of cascaded MZIs, which itself depends on the configuration of the network.
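As a concrete illustration of this SVD-based mapping, the short Python/NumPy sketch below (NumPy being the basis of the simulation tools used later in this paper) decomposes a randomly generated, hypothetical 4\(\times\)4 weight matrix into its two unitary factors and its diagonal matrix; the dimensions and values are illustrative only.

```python
import numpy as np

# Hypothetical trained weight matrix of a small 4x4 fully connected layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# SVD: W = U @ diag(s) @ Vh, with U and Vh unitary and s the singular values.
U, s, Vh = np.linalg.svd(W)
Sigma = np.diag(s)

# U and Vh would each be mapped to an MZI mesh via (2); diag(s) is realized
# by the OGU/attenuation stage.
assert np.allclose(U @ Sigma @ Vh, W)              # exact reconstruction
assert np.allclose(U.conj().T @ U, np.eye(4))      # U is unitary
assert np.allclose(Vh @ Vh.conj().T, np.eye(4))    # V^H is unitary
```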
Several configurations have been proposed for the network topology of cascaded MZIs and to carry out the unitary transformation of the input optical signals. Three well-known mesh configurations: Reck, Clements, and Diamond [13, 14, 15] are considered in this paper to realize MZI-based unitary multipliers. A 4\(\times\)4 Reck mesh configuration is shown in Fig. 2(a) organizing an array of MZIs in a triangular shape. In general, any \(N\times N\) unitary multiplier based on the Reck design consists of \(\frac{N(N-1)}{2}\) MZIs, where \(N\) is the number of the input ports. The same number of MZIs can be configured in a rectangular shape as is shown in Fig. 2(b), and this configuration is called the Clements mesh. The advantage of
Fig. 3: Schematic of a 2\(\times\)2 MZI multiplier unit with different sources of optical loss and crosstalk noise. Here, \(I_{2}\to O_{2}\) is shown as an example with \(\theta=\pi\) (\(l_{MZI}\): MZI length, \(L_{m}\): loss due to metallic absorption, \(L_{DC}\): DC’s insertion loss, \(L_{p}\): propagation loss).
Fig. 2: Schematic of a 4\(\times\)4 MZI-based optical interference unit (top) with different mesh configurations. (a) Reck, (b) Clements, and (c) Diamond.
the Clements design over the Reck is that the network is more symmetric, hence making the unitary multiplier more resilient to propagation loss due to more symmetric and, on average, shorter optical paths compared to the Reck design [14]. We can also increase the number of MZIs to make the Reck design more symmetric. The work in [13] proposed this symmetric design by adding additional \(\frac{(N-2)(N-1)}{2}\) MZIs to the Reck configuration to design the Diamond mesh configuration, shown in Fig. 2(c). For an array of cascaded MZIs with \(N\) inputs, \((N-1)^{2}\) MZIs will be used in a diamond shape. Although the network topology includes a higher number of input and output ports, only the last \(N\) inputs are used to perform matrix-vector multiplication, while the rest of the inputs can be used for characterization and calibration of the MZIs in the network [13].
### _Optical Loss and Crosstalk Noise in MZIs_
MZIs intrinsically suffer from optical loss and crosstalk noise. The schematic of a single 2\(\times\)2 MZI multiplier is shown in Fig. 3. An optical signal traversing an MZI can undergo different losses based on the MZI phase settings. The main sources of optical loss in an MZI are the DCs, metal absorption--which varies slightly with the adjusted phase settings in the MZI--due to the proximity of microheater's metallic contacts to waveguides, and propagation loss mainly due to sidewall roughness in waveguides [21]. In addition, crosstalk noise can originate from the undesired coupling of light in the DCs in an MZI. In coherent SP-NNs, we deal with coherent (i.e., intra-channel or in-band) crosstalk. As shown in Fig. 3, coherent crosstalk noise can interfere with the main optical signal (victim signal) based on their phase difference, imposing power fluctuations on the victim signal. Unlike incoherent networks, in coherent networks like MZI-based SP-NNs, the in-band coherent crosstalk noise cannot be easily filtered in the output due to the coherence between the crosstalk and victim signals (i.e., on the same wavelength).
Unlike in SP-NNs, optical loss and crosstalk noise have been widely studied in chip-scale Datacom photonic networks (e.g., [22] and [23]), showing signal-integrity degradation and scalability constraints in these networks due to optical loss and crosstalk noise. Unfortunately, the existing work on optical loss and crosstalk analysis in such networks cannot be applied to SP-NNs because the optical loss and crosstalk noise characteristics of silicon photonic devices used for photonic computation in SP-NNs are different. For example, a 2\(\times\)2 MZI switching cell, whose structure is similar to the one in Fig. 3 but without \(\phi\), in an optical switch fabric can take only two functional states based on \(\theta\) for optical loss and crosstalk analysis: the Cross-state, where \(\theta=0\), \(I_{1}\to O_{2}\), and \(I_{2}\to O_{1}\); and the Bar-state, where \(\theta=\pi\), \(I_{1}\to O_{1}\), and \(I_{2}\to O_{2}\). As a result, crosstalk noise can be easily characterized in such a device. However, in coherent SP-NNs, \(\theta\), which determines the MZI state, can take any value between 0 and \(\pi\) (\(0\leq\theta\leq\pi\)) to perform computation via interference between the inputs. The analysis of optical loss and crosstalk in SP-NNs should therefore account for the various phase settings in the underlying MZI devices.
### _Prior Related Work_
While several coherent SP-NNs have been recently proposed [13, 24], the work in [8, 25] showed that the inferencing accuracy of such networks can drop by up to 90% due to fabrication-process variations and thermal crosstalk. In addition to such variations, the work in [13] explored the impact of optical loss and phase noise in MZIs for the Diamond SP-NN configuration and showed that the SP-NN's inferencing accuracy can drop to below 20% when scaling the network in the presence of phase errors and MZI losses. However, the work in [13] simply assumed the loss in different devices in a network to be normally distributed (irrespective of their phase settings).
The work in [26] systematically analyzed the impact of loss and crosstalk in SP-NNs. However, the loss and crosstalk models presented in [26] are not a function of the phase settings of the MZIs; hence, only the maximum and minimum loss and crosstalk are calculated. In addition, the models in [26] cannot be extended to OIUs of arbitrary configuration. As the size and complexity of emerging SP-NNs increase to handle more complex tasks, the total insertion loss accumulated in the network increases as well. This necessitates the use of power-hungry optical amplification devices [27] and higher laser power at the input. Uncertainties due to fabrication-process variations--the analysis of which is beyond the scope of this paper--in the two DCs in an MZI can degrade the extinction ratio (ER) of the device, which, in turn, will increase the loss and crosstalk in the output [28]. Yet, no prior work comprehensively analyzes the impact of optical loss and crosstalk noise in coherent SP-NNs from the device to the system level. Although the work in [28] suggests using silicon nitride instead of silicon to implement MZI-based SP-NNs as a possible way to reduce losses, the performance degradation due to coherent crosstalk in SP-NNs still remains unaddressed.
Different from the aforementioned work, this paper presents a comprehensive modeling framework for the optical loss and coherent crosstalk noise in coherent SP-NNs. The proposed loss and crosstalk analysis at the MZI device level takes into account the phase settings on the device. In addition, the models developed at the network level are adaptable, meaning that the number of inputs/outputs and layers in SP-NNs can vary to explore SP-NN optical power penalty and scalability constraints while evaluating average and worst-case optical loss and crosstalk. Compared to our prior work in [12], we extended our analysis to two more well-known SP-NN configurations (Reck and Diamond) in addition to the Clements configuration. We show that our proposed models can be applied to an SP-NN with any configuration, proving their versatility. We present a comprehensive analysis of the laser power penalty and inferencing accuracy loss in SP-NNs due to optical loss and crosstalk noise. In addition, we present an analysis of the effect of optical loss and crosstalk when optoelectronic NAUs are used in SP-NNs. We also present a detailed analysis of the scalability constraints of SP-NNs due to optical loss and crosstalk when a single OIU is used as a processing unit to carry out the computations in the optical
domain.
## III Optical Loss and Crosstalk Noise Analysis
This section presents the compact models developed to analyze optical loss and crosstalk noise in MZI-based coherent SP-NNs from the device level to the network level. We also model the loss and crosstalk noise in the optoelectronic nonlinear activation unit in coherent SP-NNs. All these models will be used to explore the power penalty, performance (e.g., inferencing accuracy), and scalability constraints in SP-NNs of different sizes and architectures under optical loss and crosstalk noise, as will be discussed in Section IV.
### _Modeling Optical Loss and Crosstalk at Device Level_
The schematic of a single 2\(\times\)2 MZI-based multiplier unit is shown in Fig. 3. It comprises two 3-dB DCs and two integrated phase shifters (\(\theta\) and \(\phi\)) on the upper arm. DCs in the MZI structure are responsible for splitting and combining the optical signals traversing the MZI. Optical crosstalk noise stems from undesired mode coupling in these DCs in the MZI structure. The splitting ratio for an optical signal at the input of a DC can be determined by the cross-over coupling coefficient (\(\kappa\)) and the power transmission coefficient (\(t\)) in the DC. Considering the DC optical loss to be \(L_{DC}\), we have \(|\kappa|^{2}+|t|^{2}=L_{DC}^{2}\). To be used in an MZI-based multiplier unit for coherent SP-NNs, the DCs should perform exact 50:50 splitting/combining (\(\kappa=t=0.5\)). Both \(\kappa\) and \(t\) are wavelength dependent, and they also depend on the waveguide width and thickness and the gap between the waveguides in DCs. As discussed in Section II, the main sources of optical loss in an MZI are the DC's loss (\(L_{DC}\)), propagation loss (\(L_{p}\)), and the metal absorption loss (\(L_{m}\)) due to interaction of the propagating light with the metallic contacts integrated on top of the waveguide when using microheaters to implement the phase shifters. The propagation loss originates from sidewall roughness and scattering loss in the waveguides [16]. The metal absorption loss depends on the metal used to implement the heater's contacts and their geometry, such as thickness, width, length, and the longitudinal distance between the waveguide and the contacts [29].
Considering (1) and the MZI's optical loss parameters defined (i.e., \(L_{DC}\), \(L_{m}\), and \(L_{p}\), as shown in Fig. 3), an optical-loss-aware transfer matrix model for the 2\(\times\)2 MZI multiplier unit can be defined as:
\[\begin{pmatrix}O_{1}\\ O_{2}\end{pmatrix}=T_{DC_{2}}\cdot T_{\theta}\cdot T_{DC_{1}}\cdot T_{\phi} \cdot\begin{pmatrix}I_{1}\\ I_{2}\end{pmatrix}\]
\[T_{DC_{2}}=\begin{pmatrix}L_{DC}\sqrt{1-\kappa_{2}}&L_{DC}j\sqrt{\kappa_{2}} \\ L_{DC}j\sqrt{\kappa_{2}}&L_{DC}\sqrt{1-\kappa_{2}}\end{pmatrix},\]
\[T_{\theta}=\begin{pmatrix}L_{p}l_{MZI}L_{m}e^{j\theta}&0\\ 0&L_{p}l_{MZI}\end{pmatrix},\]
\[T_{DC_{1}}=\begin{pmatrix}L_{DC}\sqrt{1-\kappa_{1}}&L_{DC}j\sqrt{\kappa_{1}} \\ L_{DC}j\sqrt{\kappa_{1}}&L_{DC}\sqrt{1-\kappa_{1}}\end{pmatrix},\]
\[T_{\phi}=\begin{pmatrix}L_{m}e^{j\phi}&0\\ 0&1\end{pmatrix}. \tag{3}\]
In this model, \(\kappa_{1}\) and \(\kappa_{2}\) are the cross-over coupling coefficients in the two DCs.
Optical crosstalk noise in a 2\(\times\)2 MZI switching element, whose structure is similar to the one shown in Fig. 3 but without the \(\phi\) phase shifter at the input of the MZI, can be analyzed by considering the MZI in the Bar-state (\(\theta=\pi\)) or in the Cross-state (\(\theta=0\)). By injecting an optical signal into one of the input ports of the MZI, the crosstalk noise can be determined by capturing the undesired optical signal transmission on the opposite output when in the Cross- or Bar-state. For example, considering the MZI schematic depicted in Fig. 3, when injecting light into \(I_{2}\), by setting \(\theta=\pi\), the crosstalk coefficient in the Bar-state (\(X_{B}\)) can be captured on \(O_{1}\). Similarly, it is possible to capture the Cross-state crosstalk coefficient (\(X_{C}\)) on \(O_{2}\) by setting \(\theta=0\). Different from the MZIs used in switching networks, MZIs in coherent SP-NNs can take states other than the Bar- and Cross-states, depending on the value of \(\theta\). In a 2\(\times\)2 MZI-based multiplier unit, \(\theta\) can assume any value between 0 and \(\pi\) (\(0\leq\theta\leq\pi\)) to perform a 2\(\times\)2 unitary multiplication (\(0\leq\phi\leq 2\pi\) does not change the MZI state). Therefore, the exact analysis of the crosstalk noise at the device's output ports cannot be easily performed in coherent SP-NNs.
To address this limitation, we can define a statistical model to estimate the crosstalk noise on the output ports of each MZI in coherent SP-NNs depending on the MZI's \(\theta\) phase setting. Considering the crosstalk coefficient in the two known Cross-state and Bar- state (\(X_{B}\) and \(X_{C}\), where typically \(X_{B}\leq X_{C}\)[30]), we can statistically model the crosstalk coefficient in the MZI in any intermediate state (\(X\)) as the function of \(\theta\) using a Gaussian distribution whose mean can be calculated according to \(\mu(\theta)=\frac{X_{B}-X_{C}}{\pi}\theta+X_{C}\) and standard deviation of 0.05\(\cdot\mu(\theta)\), considered here as an example. Using the crosstalk coefficient in intermediate states \(X\), the transfer matrix of a 2\(\times\)2 MZI-based multiplier unit in (3) can be written as:
\[\begin{pmatrix}O_{1}\\ O_{2}\end{pmatrix}=\begin{pmatrix}(1-X)T_{11}&(1-X)T_{12}\\ (1-X)T_{21}&(1-X)T_{22}\end{pmatrix}\begin{pmatrix}I_{1}\\ I_{2}\end{pmatrix}+\begin{pmatrix}X\,T_{21}&X\,T_{22}\\ X\,T_{11}&X\,T_{12}\end{pmatrix}\begin{pmatrix}I_{1}\\ I_{2}\end{pmatrix}. \tag{4}\]
Models presented in (3) and (4) can be used to analyze the optical loss and crosstalk noise in any MZI with different phase settings. For example, considering the parameters listed in Table I, Fig. 4 shows the insertion loss and crosstalk power for a single 2\(\times\)2 MZI multiplier unit. In this figure, the x-axis shows the state of the MZI based on \(\theta\) where \(0\leq\theta\leq\pi\). Note that \(\phi\) does not change the MZI state, but its loss is included in the results shown. Observe that the insertion loss on each output port is \(\approx\)0.3-0.8 dB. We used Ansys Lumerical [31] to validate the results in Fig. 4(a). Note that commercial tools (e.g., Lumerical or Synopsys) cannot analyze the crosstalk in intermediate states in the MZI, hence their results are not considered in Fig. 4(b). Let us revisit Fig. 3: compared to input \(I_{2}\), the optical signal on \(I_{1}\) experiences higher insertion loss because of \(L_{m}\) through \(\phi\). Therefore, for example, the
insertion loss is higher on \(O_{2}\) for the Cross-state when \(\theta=0\), and it is higher on \(O_{1}\) for the Bar-state when \(\theta=\pi\). Note that the fluctuations in the crosstalk power in Fig. 4 are due to the Gaussian noise model defined for the MZI. The coherent crosstalk power in the MZI output changes between \(\approx-18\) dBm and \(\approx-25\) dBm, when the input power is 0 dBm.
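To make the device-level model concrete, the following Python/NumPy sketch assembles the loss- and crosstalk-aware MZI transfer matrix of (3)-(4) using the parameters of Table I; the dB-to-amplitude conversion and the use of \(|\mu(\theta)|\) for the standard deviation are our assumptions about how the coefficients enter the field-level model.

```python
import numpy as np

def db_loss_to_amp(loss_db):
    """Convert an insertion loss in dB to a field-amplitude coefficient."""
    return 10.0 ** (-loss_db / 20.0)

def mzi_transfer(theta, phi, L_dc_db=0.1, L_m_db=0.2, L_p_db_per_cm=2.0,
                 l_mzi_cm=0.03, kappa=0.5, X_b_db=-25.0, X_c_db=-18.0,
                 rng=None):
    """Loss- and crosstalk-aware 2x2 MZI transfer matrix, sketching (3)-(4).
    Default values follow Table I."""
    rng = rng if rng is not None else np.random.default_rng()
    a_dc = db_loss_to_amp(L_dc_db)
    a_m = db_loss_to_amp(L_m_db)
    a_p = db_loss_to_amp(L_p_db_per_cm * l_mzi_cm)

    t, k = np.sqrt(1.0 - kappa), 1j * np.sqrt(kappa)
    T_dc = a_dc * np.array([[t, k], [k, t]])
    T_theta = np.array([[a_p * a_m * np.exp(1j * theta), 0.0], [0.0, a_p]])
    T_phi = np.array([[a_m * np.exp(1j * phi), 0.0], [0.0, 1.0]])
    T = T_dc @ T_theta @ T_dc @ T_phi                       # Eq. (3)

    # Crosstalk coefficient: Gaussian around the dB-domain interpolation
    # between the Cross- and Bar-state values (Section III-A).
    mu_db = (X_b_db - X_c_db) / np.pi * theta + X_c_db
    x_db = rng.normal(mu_db, 0.05 * abs(mu_db))
    X = 10.0 ** (x_db / 10.0)                               # power coefficient
    T_swapped = np.array([[T[1, 0], T[1, 1]], [T[0, 0], T[0, 1]]])
    return (1.0 - X) * T + X * T_swapped                    # Eq. (4)
```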
### _Modeling Optical Loss and Crosstalk at Network Level_
As discussed in Section II-B, the weight matrix of each hidden layer in an SP-NN architecture with \(N_{1}\) inputs and \(N_{2}\) outputs can be decomposed into a multiplication of two unitary and one diagonal matrix, implemented by OIUs followed by an OGU, which can be realized using semiconductor optical amplifiers (SOAs), and an NAU, as shown in Fig. 1. The number of MZIs in an OIU depends on the mesh configuration (Clements, Reck, or Diamond) and the number of inputs (\(N_{1}\)) and outputs (\(N_{2}\)). Moreover, as discussed in Section III-A, the optical loss and crosstalk noise at the outputs of an MZI in an OIU are a function of \(\theta\) and \(\phi\) in the MZI, which can be obtained during the training of the network [12].
The optical loss in the output of an SP-NN layer (\(L_{i}\)) can be systematically modeled as:
\[IL_{i}=G\cdot IL_{OIU}(C,N_{1},N_{2},\{\theta\},\{\phi\})\cdot IL_{NAU}. \tag{5}\]
In (5), \(IL_{OIU}\) is the insertion loss of the OIU, which can be calculated using (3) for each MZI, and it depends on the OIU's mesh configuration (\(C\)), its dimension (\(N_{1}\) and \(N_{2}\)), and the phase settings of the MZIs in the OIU. Moreover, \(G\) is the optical gain of the SOAs in the OGU, and \(IL_{NAU}\) is the insertion loss due to the NAU. In this paper, we consider the state-of-the-art SOA in [27] with \(G=17\) dB. Also, the insertion loss of the optoelectronic NAU is considered to be \(IL_{NAU}=0\)-1 dB, depending on its nonlinear response [24].
Optical crosstalk noise from each MZI will accumulate at the outputs of the OIU as the optical signal propagates through the SP-NN. Therefore, following the same approach as for the optical loss calculation, the accumulated optical crosstalk noise power at the output of layer \(L_{i}\) can be modeled as:
\[XP_{i}=\sum_{j=1}^{N_{MZI}}\left(P\cdot X_{MZI}^{ij}(\rho)\cdot IL_{OIU}^{ij}\right)\cdot G\cdot IL_{NAU}. \tag{6}\]
In (6), \(N_{MZI}\) is the total number of MZIs in the OIU in layer \(L_{i}\), and it depends on the configuration of the OIU as well as its dimension. Also, \(P\) is the input optical power. Moreover, \(X_{MZI}^{ij}(\rho)\), which can be calculated using (4), is the coherent crosstalk coefficient on the output of layer \(L_{i}\) originating from MZI\({}_{j}\) in the OIU. Also, \(\rho\) is the optical phase of the crosstalk signal. Similarly, \(IL_{OIU}^{ij}\) is the power attenuation coefficient per MZI, which can be calculated using (3), experienced by \(X_{MZI}^{ij}(\rho)\) as it traverses the OIU. While the SOA gain can compensate for the OIU insertion loss (to some extent), SOAs also amplify the optical coherent crosstalk noise on the SP-NN's outputs. By integrating the insertion loss and crosstalk models in (5) and (6) across multiple layers, we can analyze the network-level optical insertion loss and crosstalk power in coherent SP-NNs of any size and configuration.
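As a sketch of how (5) and (6) can be evaluated for a single SP-NN output, the snippet below combines precomputed per-path losses and per-MZI crosstalk contributions; the incoherent power summation (i.e., ignoring the phase \(\rho\)) and the input format are our simplifying assumptions.

```python
import numpy as np

def output_loss_and_crosstalk(victim_il_db, xt_powers_dbm,
                              gain_db=17.0, nau_il_db=1.0):
    """Sketch of the layer-level models (5)-(6) for one SP-NN output.

    victim_il_db  : OIU insertion loss accumulated by the victim signal (dB).
    xt_powers_dbm : per-MZI crosstalk powers reaching this output (dBm),
                    assumed already attenuated along their own OIU paths.
    """
    # Eq. (5): net insertion loss after SOA gain and NAU loss (in dB).
    il_db = victim_il_db - gain_db + nau_il_db
    # Eq. (6): accumulate the crosstalk contributions (summed here in the
    # power domain, i.e., ignoring the coherent phase rho, which Section IV
    # samples randomly), then apply the SOA gain and the NAU loss.
    xt_mw = sum(10.0 ** (p / 10.0) for p in xt_powers_dbm)
    xp_dbm = 10.0 * np.log10(xt_mw) + gain_db - nau_il_db
    return il_db, xp_dbm

# Example: a 10 dB victim-path loss and three crosstalk contributions.
print(output_loss_and_crosstalk(10.0, [-30.0, -35.0, -40.0]))
```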
### _Modeling Optical Loss and Crosstalk in Optoelectronic NAUs_
Nonlinear activation functions are an integral part of deep neural networks due to the essential need to realize the complex nonlinear relationship between the inputs and outputs of SP-NNs [24]. NAUs are responsible for applying a nonlinear activation at each layer's output and passing the result to the input of the next layer. NAUs can be implemented electronically [2], optoelectronically [24], or optically [32], each with different costs. Electronically implemented NAUs suffer from high power consumption and latency, as well as the need for additional lasers because of the multiple E-O and O-E conversions involved. Moreover, very large waveguide lengths and high optical powers are required for optical NAUs due to the weak nonlinearity of photonic platforms [24, 33]. Optoelectronic NAUs presented in [24, 34] show great promise as alternatives to electronic ones due to their ability to implement an arbitrary nonlinear response via self-intensity modulation (e.g., using MZI-based electro-optical modulators) of the input optical signal. Note that research on optical NAUs is still ongoing; therefore, in most cases, optoelectronic or electronic NAUs are used in SP-NNs.
The schematic diagram of the optoelectronic NAU considered in our work to realize _ReLU_-like activation response is shown in Fig. 5. The optical signal at each output of the OIU (\(O_{i_{N}}\)) will pass through a directional coupler with a cross-over coupling coefficient of \(\alpha\) (\(\alpha=0.1\) in this paper, see Table I). The larger portion of the split power (\(\sqrt{1-\alpha}O_{i_{N}}\)) will go through an intensity modulation using an MZI-based electro-optical modulator (see Fig. 5). The remaining portion of the input optical power (\(\sqrt{\alpha}O_{i_{N}}\)) is used to drive the MZI-based optical modulator used in the NAU [24], based on the following principle: the optical signal \(\sqrt{\alpha}O_{i_{N}}\) routed in the upper branch enters a photodetector (PD) with the responsivity of \(\Re\) (A/W)--\(\Re\times P_{in}=I_{PD}\) for an ideal PD where \(I_{PD}\) is the
Fig. 4: Insertion loss, (a), and crosstalk power, (b), at the output of the 2\(\times\)2 MZI in Fig. 3, simulated using the parameters listed in Table I.
PD's photocurrent--followed by a transimpedance amplifier (TIA) with a gain of \(G_{TIA}\) to amplify the output of the PD and convert the PD's photocurrent to a voltage output. The output of the TIA then goes through a signal-conditioning unit (\(H(.)\)) that applies an arbitrary function to the output voltage of the TIA. In our analysis, we used the identity function \(H(\Re G_{TIA}\sqrt{\alpha}O_{i_{N}})=\Re G_{TIA}\sqrt{\alpha}O_{i_{N}}\) as the conditioning unit for the sake of simplicity. Finally, an additional bias voltage (\(V_{b}\)) and the output of the conditioning unit are used to drive the MZI-based electro-optical intensity modulator in the lower branch.
The ideal nonlinearity (\(f(O_{i_{N}})\)) of the MZI-based optoelectronic NAU depicted in Fig. 5 can be mathematically modeled as [24, 34]:
\[f(O_{i_{N}})= jO_{i_{N}}\sqrt{1-\alpha}\cos\left(\pi\frac{V_{b}}{2V_{\pi}}+ \pi\frac{H\left(G_{TIA}\Re\alpha|O_{i_{N}}|^{2}\right)}{2V_{\pi}}\right)\] \[\cdot\exp\left(-j\left[\pi\frac{V_{b}}{2V_{\pi}}+\pi\frac{H\left( G_{TIA}\Re\alpha|O_{i_{N}}|^{2}\right)}{2V_{\pi}}\right]\right). \tag{7}\]
Here, \(V_{\pi}\) is the voltage required to impose a \(\pi\) phase shift in the MZI-based intensity modulator in the optoelectronic NAU. A _ReLU_-like nonlinear activation response can be realized by setting \(V_{b}=V_{\pi}\) in the formulation proposed in (7). Note that the activation function modeled in (7) and [24, 34] does not consider the effect of optical loss and crosstalk noise from the OIU, the PD's sensitivity, shot noise, dark current (the output current of the PD in the absence of any optical signal), or the optical insertion loss of the DC and the MZI in the NAU architecture.
To update (7) for analyzing the effect of optical loss and crosstalk noise in SP-NNs when optoelectronic NAUs are used, we can replace \(O_{i_{N}}\) in (7) with \(IL_{OIU}\cdot G\cdot(O_{i_{N}}+XP_{i}(\theta_{err}))\). Note that in these formulations, \(G\) and the loss values are considered as power amplification and attenuation coefficients. Moreover, \(\theta_{err}\) accounts for the phase of the crosstalk noise at the output of the OIU. Using this approach, we can write the input of the optoelectronic NAU, including the optical loss and crosstalk from the OIU (\(O_{i_{N}}^{\prime}\)), as [35]:
\[O_{i_{N}}^{\prime}=IL_{OIU}\cdot G\cdot(O_{i_{N}}+XP_{i}\exp(-j\theta_{err})). \tag{8}\]
The output photocurrent of the PD, with a responsivity of \(\Re\), shot noise \(I_{shot}\), and dark current \(I_{dark}\), can be modeled as:
\[I_{PD}=\Re{O_{i_{N}}^{\prime}}+I_{shot}+I_{dark}, \tag{9}\]
where \(I_{shot}=\sqrt{2q\Re|O_{i_{N}}^{\prime}|^{2}B}\), in which \(B\) is the PD's bandwidth and \(q=1.60\times 10^{-19}\) C is the electronic charge [35]. The output photocurrent then enters the TIA with the gain of \(G_{TIA}\) and is converted to an amplified voltage \(V_{G_{TIA}}\) according to:
\[V_{G_{TIA}}=G_{TIA}\cdot I_{PD}. \tag{10}\]
Keeping the assumption of \(V_{H}=H(V_{G_{TIA}})=V_{G_{TIA}}\), we can systematically model the activation response of the MZI-based optoelectronic intensity modulator in [24, 34] under optical loss and crosstalk noise from the OIU as:
\[f_{n}(O_{i_{N}})= jL_{mod}O_{i_{N}}^{\prime}\sqrt{1-\alpha}\exp\left(-\frac{j\Delta\phi_{N}}{2}\right)\cos\left(\frac{\Delta\phi_{N}}{2}\right)\] \[+L_{mod}\sqrt{1-\alpha}O_{i_{N}}^{\prime}X_{mod}\cdot e^{j\theta_{mod}}, \tag{11}\]
where
\[\Delta\phi_{N}=\frac{\pi}{V_{\pi}}\left[V_{b}+V_{H}\right]. \tag{12}\]
Here, we also assumed \(V_{b}=V_{\pi}\) to realize a _ReLU_-like activation response, and the output photocurrent of the PD is considered to be zero (\(I_{PD}=0\)) for \(|O_{i_{N}}^{\prime}|^{2}\leq S_{PD}\), where \(S_{PD}\) stands for the PD sensitivity (the minimum detectable optical power of the PD). In (11), \(X_{mod}\) is the optical crosstalk coefficient of the MZI-based electro-optical modulator, \(L_{mod}\) is its insertion loss, and \(\theta_{mod}\) is the phase of the optical crosstalk noise from the MZI in the intensity modulator in the optoelectronic NAU [34].
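The snippet below is a minimal Python sketch of this non-ideal NAU response, following (8)-(12) with the defaults taken from Table I. Treating all powers in watts, dropping the shot-noise term, and folding the sensitivity check into a simple threshold are our simplifications.

```python
import numpy as np

def nau_response(o_field, il_oiu=1.0, gain=1.0, xp=0.0, theta_err=0.0,
                 alpha=0.1, resp=1.0, g_tia=100.0, v_pi=10.0, v_b=10.0,
                 i_dark=3.5e-6, s_pd_w=10 ** (-11.7 / 10.0) * 1e-3,
                 l_mod=1.0, x_mod=0.0, theta_mod=0.0):
    """ReLU-like optoelectronic NAU under OIU loss/crosstalk (Eqs. (8)-(12)).

    o_field: complex optical field at one OIU output (|o_field|^2 in W).
    il_oiu, gain, l_mod: power coefficients (linear, not dB).
    """
    # Eq. (8): OIU loss, SOA gain, and coherent crosstalk at the NAU input.
    o_in = il_oiu * gain * (o_field + xp * np.exp(-1j * theta_err))
    p_tap = alpha * abs(o_in) ** 2               # power tapped toward the PD
    # Eq. (9), without the shot-noise term; zero below the PD sensitivity.
    i_pd = 0.0 if p_tap <= s_pd_w else resp * p_tap + i_dark
    v_h = g_tia * i_pd                           # Eq. (10), with H = identity
    dphi = np.pi / v_pi * (v_b + v_h)            # Eq. (12)
    # Eq. (11): modulated through-signal plus the modulator's own crosstalk.
    return (1j * l_mod * o_in * np.sqrt(1 - alpha)
            * np.exp(-1j * dphi / 2) * np.cos(dphi / 2)
            + l_mod * np.sqrt(1 - alpha) * o_in * x_mod * np.exp(1j * theta_mod))
```

With the ideal parameters (unity loss coefficients and zero crosstalk), small inputs below the sensitivity threshold give \(\Delta\phi_{N}=\pi\) and hence a zero output, reproducing the _ReLU_-like turn-on behavior.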
## IV Simulation Results and Discussions
Optical loss and crosstalk noise lead to the deterioration of the performance of SP-NNs. We developed a framework to analyze the effect of optical loss and crosstalk noise in MZI-based coherent SP-NNs on top of Neuroptica [13, 38]. Neuroptica is a flexible chip-level simulation platform for nanophotonic neural networks written in Python/NumPy. It
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Par.** & **Definition** & **Value** & **Ref.** \\ \hline \hline
\(X_{B}\) & Crosstalk in Bar-state & \(-25\) dB & [30] \\ \hline
\(X_{C}\) & Crosstalk in Cross-state & \(-18\) dB & [30] \\ \hline
\(l_{MZI}\) & MZI length & 300 \(\mu\)m & [17] \\ \hline
\(L_{m}\) & PhS (metal) absorption loss & 0.2 dB & [29] \\ \hline
\(L_{p}\) & Propagation loss & 2 dB/cm & [16] \\ \hline
\(L_{DC}\) & Insertion loss of DC & 0.1 dB & [16] \\ \hline
\(G\) & SOA gain & 17 dB & [27] \\ \hline
\(P\) & Input optical power & 0 dBm & - \\ \hline
\(G_{TIA}\) & TIA gain & 100 \(\Omega\) & [24] \\ \hline
\(\Re\) & PD responsivity & 1 A/W & [24, 34, 36] \\ \hline
\(\alpha\) & DC's splitting ratio in NAU & 0.1 & [24, 34, 36] \\ \hline
\(V_{\pi}\) & Intensity modulator \(\pi\)-shift voltage & 10 V & [24, 34, 36] \\ \hline
\(V_{b}\) & NAU's bias voltage & 10 V & [24, 34, 36] \\ \hline
\(B\) & PD's bandwidth & 42.5 GHz & [36] \\ \hline
\(I_{dark}\) & PD's dark current & 3.5 \(\mu\)A & [36] \\ \hline
\(S_{PD}\) & PD's sensitivity & \(-11.7\) dBm & [36, 37] \\ \hline
\end{tabular}
\end{table} TABLE I: Device-level loss, crosstalk coefficient, power, gain, and NAU parameters considered in this paper (PhS: Phase shifter, PD: Photodetector).
Fig. 5: Schematic block diagram of the optoelectronic nonlinear activation unit used to realize ReLU-like nonlinear activation function [24, 34].
provides a wide range of abstraction levels for simulating optical neural networks [38]. We expanded the analysis of loss and crosstalk noise for a single MZI using the mathematical models defined in Section III and the parameters listed in Table I to perform layer-level (i.e., OIU), network-level (i.e., multi-layer SP-NN), and system-level (i.e., network accuracy) analyses using Neuroptica [38]. For layer- and network-level analyses, we consider random phase settings for the MZIs in OIUs of different dimensions and configurations (\(N=\) 8, 16, 32, and 64; \(M=\) 1, 2, and 3 layers). SVD is used to obtain the corresponding weight matrix in an SP-NN (see Fig. 1) with the three mesh configurations of Reck, Diamond, and Clements (see Fig. 2). Note that the random phases are only used for layer- and network-level analyses. As for the system-level analysis, we use the shifted fast Fourier transform (shifted-FFT) on the MNIST handwritten digit dataset. The reason for using the shifted-FFT is to reduce the number of inputs, which keeps the size of the mesh configurations smaller and more manageable so that we can carry out the training of the SP-NNs with the different configurations (i.e., Reck, Diamond, and Clements). Note that the training of the network is performed on complex inputs, which leads to complex-valued weights when the training of the SP-NN is finished [39].
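As an illustration of this dimensionality reduction, the sketch below extracts a 16-element complex feature vector from an MNIST image by keeping the lowest-frequency block of the centered 2-D spectrum; the exact cropping used in [25] may differ, so this is our assumption of one plausible implementation.

```python
import numpy as np

def shifted_fft_features(img, n_features=16):
    """Complex shifted-FFT features: center the 2-D spectrum, then keep the
    low-frequency block around DC (here a 4x4 crop for 16 features)."""
    spec = np.fft.fftshift(np.fft.fft2(img))     # DC moved to the center
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    k = int(np.sqrt(n_features)) // 2            # half-width of the crop
    return spec[cy - k:cy + k, cx - k:cx + k].flatten()

# Example on a dummy 28x28 "image".
features = shifted_fft_features(np.random.rand(28, 28))
assert features.shape == (16,) and features.dtype == np.complex128
```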
We use inferencing accuracy as a figure of merit to analyze the effect of optical loss and crosstalk noise at the system level in the SP-NNs using the ideal _ReLU_ activation function. Moreover, we use relative-variation distance (RVD) (a measure of the deviation of two matrices [25]) for the scalability analysis of OIUs of different scales with different mesh configurations. Such an analysis is helpful when using standalone OIUs as a photonic multiplication unit to perform matrix-vector multiplication. We also present a comprehensive analysis of the effect of optical loss and crosstalk noise on the SP-NNs' performance when optoelectronic NAUs are used instead of the ideal _ReLU_ activation response. In addition, we analyze the laser power penalty in SP-NNs with different scales and configurations to compensate for optical loss and crosstalk noise.
### _Optical Loss and Crosstalk in OIUs_
Using the analytical models in (5) and (6) proposed in Section III, a single-layer (\(M=1\)) OIU with \(N=\) 8 (considered as an example) is simulated for the three mesh configurations of Clements, Reck, and Diamond to capture the impact of optical loss and crosstalk noise at the OIU outputs. In these simulations, the optical insertion loss is calculated for 100 random weight matrices and the results are shown in the
Fig. 6: Optical loss analysis for 100 random weight matrices mapped to a fully connected 8\(\times\)8 OIU with different mesh configurations: (a) Clements, (b) Reck, and (c) Diamond. Output IDs are numbered from top to bottom (see Fig. 2).
Fig. 7: Optical crosstalk power analysis for 1000 random weight matrices mapped to a fully connected 8\(\times\)8 OIU with different mesh configurations: (a) Clements, (b) Reck, and (c) Diamond. Output IDs are numbered from top to bottom (see Fig. 2).
form of box plots in Fig. 6. We can see from Fig. 6(a) that for the Clements configuration, the average insertion loss among all the output ports is 6.5 dB, while the worst-case insertion loss can be as high as 14.8 dB. The average insertion loss for the Reck and Diamond configurations is 6 dB and 6.5 dB, respectively. As for the worst-case loss, the Reck configuration experiences 15.11 dB, while the Diamond configuration undergoes 18 dB of optical loss. We can conclude that the worst-case insertion loss for the Diamond configuration is higher than for both Clements and Reck because it uses a higher number of MZIs to perform the same matrix-vector multiplication. Note that the SOA gain is not considered in these simulations, which focus on the optical loss and crosstalk in OIUs. In addition, observe that the insertion loss for the Clements configuration is nearly uniform across all the outputs due to its more symmetric mesh compared to Reck and Diamond. We also analyze the optical crosstalk noise power for a single-layer OIU with \(N=8\) across the different mesh configurations. We used 1000 random weight matrices to perform a statistical analysis of the accumulated coherent crosstalk noise at each output of the OIU with the Clements, Reck, and Diamond configurations. In each iteration, a random optical phase (\(\rho\)), where \(0\leq\rho\leq 2\pi\), is assigned to the crosstalk noise signal from each MZI in the OIU structure to emulate the crosstalk signal's phase and behavior throughout the network. Note that the crosstalk signals from the MZIs interact with each other and with the victim signal at the outputs of the OIUs. This approach is acceptable when optical signals traverse a large network of devices (e.g., in OIUs) and hence experience random phase shifts. The results for the crosstalk power, including the insertion loss, are shown in Fig. 7. Note that a 0 dBm optical power at the input of the OIUs has been considered in these simulations. Observe that the average crosstalk power for the Clements, Reck, and Diamond meshes can be as high as \(-24.3\), \(-22.1\), and \(-21.41\) dBm, respectively. As for the worst-case crosstalk power, the three mesh configurations of Clements, Reck, and Diamond exhibit \(-6.3\), \(-5.1\), and \(-5.2\) dBm, respectively. Observe that the crosstalk power is slightly lower on the ports with higher insertion loss in the Reck and Diamond configurations, which is consistent with the results in Fig. 6.
### _Optical Loss and Crosstalk in SP-NNs_
The optical loss and crosstalk power can be analyzed in an SP-NN comprising multiple hidden layers with different dimensions and configurations using the models developed in (5) and (6). We extended the layer-level insertion loss and crosstalk models in (5) and (6) to the full-network analysis to demonstrate how the impact of crosstalk power and insertion loss changes as we scale up SP-NNs. Average- and worst-case insertion loss and crosstalk analyses are performed for 1000 random weight matrices for SP-NNs with different dimensions (\(N=\) 8, 16, 32, and 64) and numbers of hidden layers (\(M=\) 1, 2, and 3) while considering OGUs including SOAs with a gain of up to 17 dB (see Table I). The worst-case and average-case insertion loss and crosstalk power at the output of MZI-based SP-NNs with different dimensions and configurations are depicted in Fig. 8. Note that the insertion loss of the optoelectronic NAU is included in the results. Moreover, for the average case, the mean of the optical loss and crosstalk power over the outputs of the SP-NNs of different dimensions and configurations has been calculated. Consequently, to calculate the worst-case loss and crosstalk power, the maximum of the
Fig. 8: The average and the worst-case insertion loss and coherent crosstalk power in coherent SP-NNs with different OIU mesh configurations, based on the network in Fig. 1(b) and with different numbers of inputs (\(N\)) and layers (\(M\)). The optical input power at layer one is 0 dBm.
aforementioned parameters among all of the SP-NNs' outputs has been reported.
As can be seen from Fig. 8, the average (see Figs. 8(i), (iii), and (v)) and worst-case (see Figs. 8(ii), (iv), and (vi)) insertion loss for SP-NNs increases drastically as \(M\) and \(N\) increase. As for the Reck and Diamond configurations, the average insertion loss is almost the same and lower than that of the Clements configuration. The reason for this difference is the structural differences among the Clements, Reck, and Diamond mesh configurations. As can be seen from Fig. 2, for Reck and Diamond, the longest and the shortest optical paths from the input to the output are not as symmetric as those in Clements. For Reck and Diamond, the shortest path from the input to the output can include only one MZI, while for Clements, it includes \(\frac{N}{2}\) MZIs. As for the longest path between the input and the output, Reck and Diamond cross a higher number of MZIs (as many as \(N-1\)), which is higher than Clements with \(\frac{N}{2}\) due to its symmetry in all directions. This also leads to a significantly higher worst-case insertion loss for Reck and Diamond (see Figs. 8(ii), (iv), and (vi)). For example, the worst-case insertion loss for a single 64\(\times\)64 OIU layer with Reck (see Fig. 8(iv)) and Diamond (see Fig. 8(vi)) can be as high as 76 dB, which is significantly higher than Clements with 54 dB (see Fig. 8(ii)). Furthermore, the optical loss values scale linearly with the scale of the SP-NNs (see Figs. 8(i)-(vi)). Note that although the Diamond mesh has a higher number of MZIs compared to Reck when used in the OIUs, some of the output ports (\(N-2\)) are not used during inferencing and are reserved only for the characterization and calibration of the MZIs in the Diamond mesh [13]. This leads to similar average loss and crosstalk performance for SP-NNs with both the Reck and Diamond mesh configurations.
Following the same approach, the average- and worst-case crosstalk power is calculated for SP-NNs of different sizes and mesh configurations considering 0 dBm input optical power, as shown in Figs. 8(vii)-(xii). When \(N\) and \(M\) increase, the number of MZIs that can generate coherent crosstalk to be accumulated at the output ports increases as well; hence, one would expect a higher crosstalk power at the outputs. However, crosstalk signals also experience a higher insertion loss as the network scales up (see Figs. 8(i)-(vi)). Consequently, the coherent crosstalk power in the output decreases when both \(N\) and \(M\) increase because of the high optical insertion losses as the network scales up. For example, when \(M=1\), the average and worst-case coherent crosstalk power increases with \(N\) and can be as high as, respectively, 21.2 dBm and 40 dBm for Clements and 30 dBm and 43 dBm for Reck and Diamond when \(N=64\). We can see that Reck and Diamond show much higher average and worst-case crosstalk power due to their asymmetric mesh structure. Note that using SOAs to compensate for the accumulated insertion loss in the SP-NNs leads to further degradation of the SP-NN performance because the coherent crosstalk signals at the output of the SP-NN are amplified as well.
### _Impact of Optical Loss and Crosstalk Noise on SP-NN Inferencing Accuracy With Ideal NAUs_
To understand the system-level performance degradation in SP-NNs due to the impact of optical loss and crosstalk noise, we consider three case studies of an SP-NN with two hidden layers (\(M=3\)) of 16 neurons each (\(N=16\)) with the Clements, Reck, and Diamond mesh configurations, trained on the MNIST handwritten digit classification task using the same hyperparameters in all three case studies. Each image in the MNIST dataset is converted to a complex feature vector of length 16 using the shifted fast Fourier transform (shifted-FFT) discussed in [25]. The nominal test accuracy after training is 91.5%, 92.4%, and 90.6% for the Reck, Clements, and Diamond mesh configurations, respectively. Note that the nominal test accuracy is slightly different for the three case studies due to structural differences in the mesh configurations, which lead to different phase settings and distributions over the MZIs in the SP-NNs. To analyze the effect of optical loss and crosstalk noise during inferencing, we integrated the proposed MZI models in (3) and (4) into our SP-NN model implementation. Note that in all of the simulations in this section, an ideal _ReLU_ nonlinear activation response was considered, focusing on the impact of optical loss and crosstalk from OIUs on the SP-NN inferencing accuracy. Moreover, the OGUs in these simulations are considered to have unity gain during the training due to the normalization of the training and test data.
We consider the expected values of \(L_{DC}\), \(L_{m}\), and \(L_{p}\) within the ranges 0.1-0.4 dB [16], 0.1-0.3 dB [29], and 1-4 dB/cm [16], respectively. The corresponding simulation results are shown in Fig. 9. Considering an MZI with a length of \(l_{MZI}=\) 300 \(\mu\)m from [17], the propagation loss per MZI (\(L_{p}\cdot l_{MZI}\)) is 0.03-0.12 dB. Our analyses showed that the impact of the DC insertion loss (\(L_{DC}\)) on the SP-NN's inferencing accuracy is significantly higher compared to that of \(L_{m}\) and \(L_{p}\) for Clements: the accuracy dropped to \(\approx\)10% when only the impact of \(L_{DC}\) was considered in the network (see Fig. 9(a)-(c)). Note that the simulation results in Fig. 9(a) showed the same impact of \(L_{DC}\) on inferencing accuracy for SP-NNs with the Reck and Diamond mesh configurations when only one source of optical loss was considered at a time. Moreover, the Diamond mesh configuration showed the most susceptibility to the optical insertion loss when one source of optical loss in the MZI was considered at a time (see Fig. 9(a)-(c)).
To understand the simultaneous effect of the different optical loss sources (i.e., \(L_{DC}\), \(L_{m}\), and \(L_{p}\)) on the SP-NN inferencing accuracy, for each case study (i.e., SP-NNs with the Clements, Reck, and Diamond mesh configurations), two experiments are defined in which all three device-level loss values are varied simultaneously, sampled from half-normal distributions (a sampling sketch is given after the list below). The statistical characterization of the two experiments is as follows:
* (EXPT1) considers a mean of \(\mu=\) minimum expected loss value and standard deviation, \(\sigma\), such that 3\(\sigma=\) their maximum expected loss value. Results for EXPT1 are shown in Fig. 10(a)-(c).
* (EXPT2) considers loss values with a mean of \(\mu=\) 0 and the same standard deviation as EXPT1. Results for EXPT2 are shown in Fig. 10(d)-(f).
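The following sketch shows one way to draw such half-normal loss samples in Python/NumPy; interpreting "half-normal with mean \(\mu\)" as \(\mu+|\mathcal{N}(0,\sigma)|\) and reusing the per-MZI loss ranges given above are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Expected [min, max] per-MZI loss ranges (dB) for L_DC, L_m, and L_p*l_MZI.
lo = np.array([0.1, 0.1, 0.03])
hi = np.array([0.4, 0.3, 0.12])
sigma = hi / 3.0                  # 3*sigma = maximum expected loss value

# EXPT1: half-normal shifted to the minimum expected loss values.
expt1_losses = lo + np.abs(rng.normal(0.0, sigma))
# EXPT2: zero-mean half-normal with the same sigma.
expt2_losses = np.abs(rng.normal(0.0, sigma))
```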
As can be seen from the EXPT1 results in Fig. 10(a)-(c), out of the three mesh configurations, Clements shows the most resilience to optical loss. Considering Fig. 10(a), almost 60% of the 1000 scenarios show an inferencing accuracy of less than 25%. This is much lower compared to Reck and Diamond, which show about 90% and 100% of the cases with an accuracy below 25%, respectively (see Figs. 10(b) and (c)). Note that in the Clements mesh configuration, only 14% of the 1000 scenarios show an inferencing accuracy higher than 50%, and none of the scenarios in the Reck and Diamond mesh configurations show an accuracy higher than 50%. Considering the results for
Fig. 10: Inferencing accuracy when the loss parameters (\(L_{DC}\), \(L_{m}\), and \(L_{p}\cdot l_{MZI}\)) are simultaneously varied for three different OIU mesh configurations. Two 16\(\times\)16 hidden layers (\(M=3\)) have been considered for each case. (a)–(c) show the case where each of the 1000 points in the scatter plot represents an instance of the SP-NN where the \(L\)’s are sampled from a half-normal distribution with mean, \(\mu=\) their minimum expected value and standard deviation, \(\sigma\), such that 3\(\sigma=\) their maximum expected value. (d)–(f) show the case where \(L\)’s are sampled from a half-normal distribution with mean, \(\mu=\) 0 and standard deviation, \(\sigma\), such that 3\(\sigma=\) their maximum expected value to show the maximum tolerable \(L\) values for each configuration. Note that the effect of the optical crosstalk noise is neglected in these simulations focusing on the effect of optical loss on the SP-NNs inferencing accuracy.
EXPT2, shown in Figs. 10(d)-(f), we can obtain the maximum tolerable device-level optical loss (for each source of optical loss) while considering a threshold for the maximum acceptable drop in inferencing accuracy. In this paper, we consider a 5% accuracy drop in the presence of the simultaneous effect of all sources of optical loss. Accordingly, the Clements mesh configuration can tolerate up to 0.22 dB of metallic loss, 0.04 dB of propagation loss, and 0.08 dB of DC loss in the MZIs, while these values are lower for the other two configurations (see Figs. 10(d)-(f)).
Considering (6), to understand the impact of optical crosstalk noise on SP-NN inferencing accuracy, we model the crosstalk coefficient \(X\) using a linear interpolation between the worst-case (MZI in the Cross-state, \(X_{C}=-18\) dB) and the best-case (MZI in the Bar-state, \(X_{B}=-25\) dB) crosstalk; see Section III. Fig. 11 shows the inferencing accuracy of the three SP-NN mesh configurations in the presence of both optical loss and crosstalk when \(X_{B}\leq X\leq X_{C}\), for different \(X_{B}\) and \(X_{C}\) values, and with the optical losses set to their corresponding minimum expected values. As shown in Fig. 11, when \(X_{C}=-18\) dB and \(X_{B}=-25\) dB (considered as an example based on the work in [30]), the accuracy drops to 11.5% for Clements, 10.1% for Reck, and 8.3% for Diamond. We also found that under the expected values of optical crosstalk and loss, the accuracy remains at \(\approx\)10% in all three mesh configurations. Note that when \(X_{B/C}\) decreases to below \(-50\) dB (lower-left corner in Figs. 11(a)-(c)), the accuracy saturates at about 68% for Clements, which is significantly higher than Reck at 43.3% and Diamond at 10.1%. Note that in all of the simulation results presented in this section, unity gain was considered for the SOAs during inferencing, because increasing the SOA gain to compensate for the total insertion loss of the network further deteriorates the SP-NNs' performance due to the amplification of the optical crosstalk noise (see Figs. 8(vii)-(xii)). In all three case studies presented in this paper, considering only the crosstalk noise and neglecting the loss parameters led to an accuracy lower than 25%. This shows that the crosstalk noise has a more critical effect on the SP-NNs' accuracy than the loss parameters. Simulation results showed that even a small increase in the SOA gain (\(\approx\)5 dB) led to an even worse accuracy of below 20% for all case studies when both optical loss and crosstalk noise were considered, because of the amplification of the crosstalk signals at the outputs of the SP-NNs.
The results presented in this section motivate the need for SP-NN design exploration and optimization from the device to the system level to alleviate the impact of optical loss and crosstalk. Moreover, our proposed approach can be used to determine the suitable mesh configuration based on design requirements, and also the maximum tolerable crosstalk power coefficients and component optical losses to guarantee certain inferencing accuracy.
Fig. 11: Inferencing accuracy in the presence of both optical loss and crosstalk noise for different values of \(X_{B}\) and \(X_{C}\) where \(X_{B}\leq X\leq X_{C}\) (see Section III) for different OIU mesh configurations. Two 16\(\times\)16 hidden layers (\(M=3\)) have been considered for each case. The minimum expected value for optical loss parameters has been considered in these simulations (See Table I). (a) Clements, (b) Reck, and (c) Diamond. The dashed line shows the case where \(X_{B}=X_{C}\).
Fig. 12: Average MSE for optoelectronic NAU under the effect of the optical loss and crosstalk noise from OIUs. Each point in the figure has been averaged over 1000 random phases for input crosstalk noise interfering with the victim optical signal.
### _Impact of Optical Loss and Crosstalk Noise on SP-NN Inferencing Accuracy with Non-Ideal NAUs_
To understand the impact of optical loss and crosstalk noise on the SP-NNs' system-level performance when optoelectronic NAUs are used, we use the analytical model in (11) and the parameters reported in Table I to emulate _ReLU_ as the nonlinear activation function using the NAU depicted in Fig. 5. The mean-square error (MSE) of the NAU's response (compared to the ideal case) with the parameters listed in Table I is used as a metric to understand how optical loss and crosstalk noise from OIUs deteriorate the optoelectronic NAU's performance. Note that for each iteration, 1000 random crosstalk noise phases (\(\theta_{err}\), see (11)) are considered, and the mean of the resulting MSE is reported in Fig. 12. In this simulation, the PD's sensitivity, dark current, and bandwidth in the optoelectronic NAU are considered according to Table I. Moreover, we also consider the optical loss and crosstalk noise related to the input DC and the MZI in the optoelectronic NAU (see Fig. 5). Considering the crosstalk noise and loss at the output of the OIU, we can see that the MSE for a single optoelectronic NAU used in the SP-NN can be as high as 180, meaning that the actual response of the NAU deviates significantly from the ideal _ReLU_ nonlinear activation response.
To analyze the system-level effect of optical loss and crosstalk noise in SP-NNs implemented using optoelectronic NAUs, three different SP-NNs with two hidden layers (\(M=3\)) and 16 inputs, with the Clements, Reck, and Diamond mesh configurations for the OIU and the optoelectronic activation as the NAU (see (11)), were trained on the shifted-FFT MNIST handwritten digit dataset using Neuroptica, with nominal test-set accuracies of 90%, 93.8%, and 94.1% for Diamond, Reck, and Clements, respectively. Note that unity gain has also been considered for the OGUs in these simulations. We found that for all three cases, when considering the minimum expected values of the device-level loss parameters reported in Table I for the MZIs in the OIU and the optoelectronic NAU, even a negligible crosstalk noise related to the MZIs in the OIUs (\(\approx-60\) dBm) can lead to a significant drop in the inferencing accuracy to below 15%, which is drastically lower than the accuracy reported in Fig. 11 when the ideal _ReLU_ activation function is used (68% for Clements and 43.3% for Reck). Therefore, SP-NNs implemented using optoelectronic NAUs are highly sensitive to the optical loss and crosstalk noise from OIUs. Note that considering only the standalone optical loss and crosstalk from the optoelectronic NAUs, while neglecting the optical loss and crosstalk from the OIUs, leads to less than a 2% drop in the SP-NNs' inferencing accuracy.
## V Power Penalty and Scalability Constraint
Leveraging the results from the previous section, here we analyze the impact of optical loss and crosstalk noise on SP-NN power consumption (i.e., laser power penalty) as well as scalability constraints in SP-NNs.
### _Power Penalty due to Optical Loss and Crosstalk_
Optical loss and coherent crosstalk necessitate an increase in the laser power at the SP-NN input to compensate for their impact. We study this power penalty by considering the input optical laser power (\(P_{lsr}\)) required at the SP-NN input to compensate for the impact of optical loss and coherent crosstalk at the output (see Fig. 1). For the network output \(Y_{y}\) in a coherent SP-NN, the input optical laser power should satisfy:
\[P_{lsr}\geq S_{PD}+IL^{y}+XP^{y}(\rho,P_{lsr}). \tag{13}\]
Here, \(IL^{y}\) and \(XP^{y}(\rho,P_{lsr})\) are the insertion loss (in dB) and coherent crosstalk power (in dBm), respectively, at the network output \(Y_{y}\). They can be calculated using (4) and (5), which include SOA gains in OGUs. Note that the total insertion loss for a signal at output \(Y_{y}\) is determined by both \(IL^{y}\) and the interference between the victim signal and the coherent crosstalk signal (determined by the crosstalk signal phase \(\rho\)) at the same output, where the coherent crosstalk power also depends on \(P_{lsr}\). Also, \(S_{PD}\) is the sensitivity of the photodetector (in dBm) in electronic or optoelectronic NAUs [24], taken to be \(-\)11.7 dBm in this paper [37]. Considering the average and the worst-case insertion loss and crosstalk in Figs. 8(i)-(xii), Figs. 13(a)-(f) show the average and the worst-case power penalty in coherent SP-NNs of different scales and mesh configurations as \(N\) and \(M\) increase. Here, the interference between the victim signal and the coherent crosstalk signal at each output is explored statistically, considering both the average and the worst-case scenarios. On average, the optical power penalty to compensate for both insertion loss and coherent crosstalk is substantially high and easily exceeds 30 dBm for all case studies when \(N\geq\)32, thereby considerably limiting SP-NN scalability. Note that the worst-case laser power penalty for Diamond and Reck is always significantly higher than for Clements (i.e., 254 dBm compared to 188 dBm for \(M=3\) and \(N=64\)).
Fig. 13: The optical power penalty in coherent SP-NNs with different OIU mesh configurations, based on the network in Fig. 1(b) and with different numbers of inputs (\(N\)) and layers (\(M\)).
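As a quick numerical illustration of (13), the sketch below evaluates the required laser power for the worst case, in which the coherent crosstalk field interferes destructively with the victim signal. The insertion-loss and crosstalk values are hypothetical placeholders; only the photodetector sensitivity (\(-\)11.7 dBm) comes from the paper.

```python
import numpy as np

def required_laser_power(s_pd_dbm, il_db, xt_ratio_db):
    # Worst case of (13): the coherent crosstalk field, xt_ratio_db below
    # the signal, interferes destructively with the victim signal. Because
    # the crosstalk scales with the laser power, the crosstalk penalty
    # reduces to a fixed dB term.
    r = 10.0 ** (xt_ratio_db / 10.0)     # crosstalk-to-signal power ratio
    xp_db = -20.0 * np.log10(max(1.0 - np.sqrt(r), 1e-12))
    return s_pd_dbm + il_db + xp_db      # required P_lsr in dBm

# S_PD = -11.7 dBm (from the paper); loss and crosstalk values are made up.
print(f"required laser power: {required_laser_power(-11.7, 30.0, -20.0):.1f} dBm")
```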
### _Scalability Constraints due to Optical Loss and Crosstalk_
As shown in the previous sections, optical loss and crosstalk significantly limit the scalability of MZI-based SP-NNs. The work in [26] showed that matrix-vector multiplications can be done in multiple steps using a single MZI-based OIU: the trained weight matrix can be broken down into multiple sub-matrices, and by loading the sub-matrices into memory and repeatedly updating the phase settings of the MZIs in the network, one large matrix-vector multiplication can be carried out using a smaller OIU mesh, limiting the effect of optical loss and crosstalk on the network's performance. Another example of this application is when MZI-based OIUs are used in a crossbar architecture to carry out optical multiplication, as presented in [40].
To analyze the scalability constraints due to optical loss and crosstalk noise in SP-NNs for the aforementioned scenarios, we use RVD as a parameter to measure the deviation between an intended transfer matrix and a deviated transfer matrix (due to loss and crosstalk) in an OIU. As a result, this metric can be used to assess how the ideal transfer matrix deviates when optical loss and crosstalk noise are included in OIUs. RVD can be written as:
\[RVD(T,\tilde{T})=\frac{\Sigma_{m}\Sigma_{n}\left|T^{m,n}-\tilde{T}^{m,n}\right|}{\Sigma_{m}\Sigma_{n}\left|T^{m,n}\right|}. \tag{14}\]
In this formulation, \(T\) and \(\tilde{T}\) are the transfer matrices of the OIU with different mesh configurations without and with the loss and crosstalk noise, respectively. The closer the RVD is to 0, the closer the deviated transfer matrix is to the ideal one, and hence the higher the inferencing accuracy, as shown by [8]. Fig. 14 reports the RVD values for the Clements, Reck, and Diamond mesh configurations with different sizes under the optical loss and crosstalk reported in Table I. For these simulations, 1000 random weight matrices were tested on a single MZI-based OIU with the Clements, Reck, and Diamond configurations with sizes of \(N=\) 4, 8, 16, 32, and 64; using 1000 random weight matrices preserves the generality of our analysis. As shown in Fig. 14, in all cases the impact of optical crosstalk is more critical for scalability than the optical loss (see the results for crosstalk noise in the figure). One reason is the use of SOAs to compensate for the total insertion loss of the network, which also amplifies the coherent optical crosstalk noise. Moreover, the RVD increases as the network scales up. Observe that the RVD increase is even worse for the Diamond mesh due to its larger number of MZIs when scaling the network.
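A minimal sketch of this metric follows, assuming the element-wise magnitudes in (14) and using a random unitary as a stand-in ideal transfer matrix with a hypothetical lossy/noisy copy.

```python
import numpy as np

def rvd(t_ideal, t_dev):
    # Relative variation distance between the ideal and deviated OIU
    # transfer matrices, following (14) with element-wise magnitudes.
    return np.abs(t_ideal - t_dev).sum() / np.abs(t_ideal).sum()

rng = np.random.default_rng(1)
# Ideal transfer matrix: a random 4x4 unitary (QR of a complex Gaussian).
q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
# Hypothetical deviated copy: uniform loss plus a small complex perturbation.
t_dev = 0.98 * q + 0.01 * (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
print(f"RVD = {rvd(q, t_dev):.4f}")
```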
To better understand the relationship between RVD and the inferencing accuracy of SP-NNs, single OIUs of different mesh configurations with \(N=\) 4, 8, 16, 32, and 64, each followed by an ideal _ReLU_ NAU, were trained on the linearly separable Gaussian dataset presented in [13]. The test set accuracy is 100% for all the case studies. Note that this Gaussian dataset is considered because it is simpler than the MNIST dataset and requires only a single layer; we use it as an example to show the relationship between RVD, network accuracy, and the scalability constraints in OIUs across different OIU mesh configurations. The RVD values against the inferencing accuracy of a single OIU of different sizes and configurations under the impact of both optical loss and crosstalk noise (using the parameters listed in Table I), trained on the linearly separable Gaussian dataset, are reported in Fig. 15. Observe that as the RVD increases, the network's accuracy decreases. Moreover, out of the three mesh configurations, Clements showed less than 4% and 15% drops in inferencing accuracy for \(N=8\) and \(N=16\), respectively, which is significantly lower than Reck (53% and 75% drops) and Diamond (63% and 81% drops). The Reck configuration shows about a 4% drop in accuracy when \(N=4\). Furthermore, the Diamond configuration shows a catastrophic 30% drop in accuracy even for the small input count of \(N=4\).
Fig. 14: Average RVD values for 1000 random weight matrices for SP-NNs with different sizes and mesh configurations. Three different scenarios of loss only, crosstalk only, and loss and crosstalk values are considered.
Fig. 15: RVD values for weight matrices for SP-NNs with different sizes and mesh configurations trained on a linearly separable Gaussian dataset against the inferencing accuracy.
Although the Diamond structure shows the least tolerance to optical loss and crosstalk, the work in [13] suggests that this mesh configuration has the most tolerance to fabrication-process variations compared to Clements [8]. However, our results show that the Diamond mesh cannot be scaled up in the presence of optical loss and crosstalk noise. Among the three mesh configurations studied in this paper, the Clements architecture showed the highest resilience to optical loss and coherent crosstalk, making it the preferred configuration for implementing SP-NNs.
System architects can benefit from the analyses proposed in this paper to better understand the impact of optical loss and crosstalk noise in SP-NNs, and how such an impact changes across different SP-NN architecture choices. Our analyses can also help device designers better understand device-level performance requirements (e.g., the maximum tolerable optical loss at the device level) to achieve a certain performance and accuracy in SP-NNs. Loss- and crosstalk-aware training of MZI-based SP-NNs is a possible solution to alleviate the effect of optical loss and crosstalk in MZI-based photonic computing systems; it is not studied in this paper.
## VI Conclusion
The performance and scalability of SP-NNs are limited by optical loss and crosstalk noise in the underlying silicon photonic devices. This paper presents a framework for modeling optical loss and crosstalk noise in SP-NNs of different scales with different mesh configurations. We presented a detailed and comprehensive analysis of the average- and worst-case optical loss and crosstalk noise and the corresponding laser power penalty in SP-NNs with the three mesh configurations of Clements, Reck, and Diamond, while exploring the resulting drops in inferencing accuracy under different scenarios. In particular, the results showed a significant power penalty to compensate for loss and crosstalk noise, and an accuracy loss of at least 84% for all case studies. Additionally, we conducted an extensive analysis of optical loss and crosstalk in optoelectronic NAUs, and thoroughly analyzed the scalability limitations of SP-NNs arising from optical loss and crosstalk. The insights presented in this study can be leveraged by SP-NN device and system architects to explore and optimize different challenges in the development and evaluation of SP-NNs in the presence of optical loss and crosstalk noise.
|
2308.09764 | The Impact of Background Removal on Performance of Neural Networks for
Fashion Image Classification and Segmentation | Fashion understanding is a hot topic in computer vision, with many
applications having great business value in the market. Fashion understanding
remains a difficult challenge for computer vision due to the immense diversity
of garments and various scenes and backgrounds. In this work, we try removing
the background from fashion images to boost data quality and increase model
performance. Having fashion images of evident persons in fully visible
garments, we can utilize Salient Object Detection to achieve the background
removal of fashion data to our expectations. A fashion image with the
background removed is claimed as the "rembg" image, contrasting with the
original one in the fashion dataset. We conducted extensive comparative
experiments with these two types of images on multiple aspects of model
training, including model architectures, model initialization, compatibility
with other training tricks and data augmentations, and target task types. Our
experiments show that background removal can effectively work for fashion data
in simple and shallow networks that are not susceptible to overfitting. It can
improve model accuracy by up to 5% in the classification on the FashionStyle14
dataset when training models from scratch. However, background removal does not
perform well in deep neural networks due to incompatibility with other
regularization techniques like batch normalization, pre-trained initialization,
and data augmentations introducing randomness. The loss of background pixels
invalidates many existing training tricks in the model training, adding the
risk of overfitting for deep models. | Junhui Liang, Ying Liu, Vladimir Vlassov | 2023-08-18T18:18:47Z | http://arxiv.org/abs/2308.09764v2 | The Impact of Background Removal on Performance of Neural Networks for Fashion Image Classification and Segmentation
###### Abstract
Fashion understanding is a hot topic in computer vision, with many applications having great business value in the market. Fashion understanding remains a difficult challenge for computer vision due to the immense diversity of garments and various scenes and backgrounds. In this work, we try removing the background from fashion images to boost data quality and increase model performance. Having fashion images of evident persons in fully visible garments, we can utilize Salient Object Detection to achieve the background removal of fashion data to our expectations. A fashion image with the background removed is claimed as the "rembg" image, contrasting with the original one in the fashion dataset. We conducted extensive comparative experiments with these two types of images on multiple aspects of model training, including model architectures, model initialization, compatibility with other training tricks and data augmentations, and target task types. Our experiments show that background removal can effectively work for fashion data in simple and shallow networks that are not susceptible to overfitting. It can improve model accuracy by up to 5% in the classification on the FashionStyle14 dataset when training models from scratch. However, background removal does not perform well in deep neural networks due to incompatibility with other regularization techniques like batch normalization, pre-trained initialization, and data augmentations introducing randomness. The loss of background pixels invalidates many existing training tricks in the model training, adding the risk of overfitting for deep models.
background removal, fashion analysis, Salient Object Detection
## I Introduction
Fashion analysis has been a favored domain in computer vision thanks to its great business value in the online shopping experience. With the progress of computer vision and deep learning, there are many outstanding applications in image retrieval, product recognition, style recommendation, and competitor analysis in the fashion market. Most fashion datasets consist of images with clothes visible in various scenes such as online shopping, daily life, celebrity events, etc. In addition to diverse garments, varied backgrounds make fashion data challenging for automated fashion analysis using computer vision methods.
This work evaluates whether background removal can improve data quality and model performance on fashion data. Inspired by data augmentation and attention mechanisms, background removal deletes the pixels of background regions to filter noise at the data level while keeping the entire fashion object visible. It is expected to increase model accuracy at the expense of speed. To remove the background of fashion images, we apply Salient Object Detection (SOD) [1], which aims to segment the most visually attractive objects and thus removes the background of fashion images cleanly, as we expect. To evaluate and validate the value of background removal for fashion data analysis using machine learning, we conduct comparative experiments on various model architectures with the original images and images without background, which we call _rembg_ images.
More specifically, we focus on the following questions. Is it possible to remove the background of fashion images cleanly? Can background removal boost fashion data quality? In which situations and aspects of model training can background removal positively affect model performance? Is background removal compatible with other data augmentations and training tricks? Is background removal necessary for fashion data if only model accuracy is of concern?
To comprehensively understand the impact of background removal on model performance, we design and conduct extensive experiments to compare two different inputs, namely the original and _rembg_ images, in the following aspects of model training: various model architectures (backbone, network depth, normalization layer); various initializations (random and pre-trained); compatibility with other data augmentations; and various task types, including classification on FashionStyle14 [2] and instance and semantic segmentation on Fashionpedia [3].
With extensive experiments on background removal, we found that background removal is unnecessary for fashion data in most situations. Background removal can eliminate background interference and assist the model in focusing on key regions. However, due to the loss of background information, background removal greatly increases the risk
of overfitting in deep networks. It weakens ubiquitous regularization techniques that alleviate overfitting, such as batch normalization, pre-trained initialization based on transfer learning, and other common data augmentations that extend datasets with similar data. This incompatibility with existing training measures and tricks means that background removal only plays a positive role in simple and shallow networks, which are not susceptible to overfitting. Moreover, background removal only helps in the classification task, while instance and semantic segmentation are unaffected by background removal because they still provide location annotations.
In summary, our results indicate that background removal is a good choice for shallow networks trained from scratch in the classification task but not a necessary option for fashion data if using a deep network or pre-trained initialization.
Our contributions in this paper are summarized as follows.
* We empirically study the impact of background removal on the performance of neural network models for fashion image analysis. We obtain _rembg_ images (images without background) using a Salient Object Detection tool and compare the performance of models trained on different inputs, original or _rembg_ images of multiple datasets.
* We test the effect of background removal from fashion images for various model architecture attributes such as network depth, normalization layers, and backbone.
* We verify the effect of background removal from fashion images under different model initializations, including random and pre-trained initializations.
* We examine background removal's compatibility with other data augmentations like rotate, flip, etc.
* We validate the effect of background removal from fashion images in various task types, including classification, instance segmentation, and semantic segmentation.
## II Controlled Comparative Experiments Design
In this section, we present the design of our controlled comparative experiments to verify the effect of background removal from fashion images on the accuracy of fashion image classification and segmentation with different model training methods and parameters.
To comprehensively research the value of background removal for fashion understanding in every aspect of model training, we design controlled experiments on model architecture, model initialization, data augmentation, and task types, including classification, instance segmentation, and semantic segmentation. More specifically, we design and perform several groups of controlled comparative evaluation experiments for the following model aspects.
* **Model architecture.** First, we test the effect of background removal on the performance of different model architectures with various key architecture attributes, including network depths, normalization layers, backbones, etc.
* **Model initialization.** Second, we verify the value of background removal on the performance of the models with various model initializations, often categorized into two kinds: (1) training a model from scratch with random initialization or (2) fine-tuning a model from one pre-trained on ImageNet [4].
* **Data augmentation.** Third, we evaluate the compatibility of background removal, considered as one kind of data preprocessing, with other data preprocessing techniques, namely, data augmentation techniques used for images to boost model performance and generalization, such as crop, flip, rotate, shear, brightness, Solarize, etc.
* **Task Type.** Lastly, we evaluate the effect of background removal on the performance of different ML tasks on fashion data, including classification, instance segmentation, and semantic segmentation.
By evaluating the effect of background removal on the performance of fashion image classification models for the above aspects, we can more comprehensively learn about the significance of background removal for fashion data and decide whether background removal is a necessary preprocessing for fashion data to improve fashion model performance.
### _Neural Networks Used in Evaluation Experiments_
In this study, we evaluate the impact of background removal on the performance of Convolutional Neural Networks (CNNs) for image classification and segmentation. CNN [5] is a widely used Artificial Neural Network for image processing and understanding. It has become the mainstream approach in computer vision, applied in various tasks such as image classification, object detection, and image segmentation. The core components of a CNN are the convolutional, pooling, and fully connected layers. The convolutional layer extracts features from images using learnable kernels, while the pooling layer down-samples the feature maps, reducing model parameters and enabling efficient training. The fully connected layers serve as classifiers for output prediction based on the extracted features. For our experiments, we selected VGG11, VGG13, VGG16, and VGG19 [6] as representatives of shallow and simple models, ResNet-50 [7] and ResNet-101 [7] as typical deep networks, and ResNeSt-101 [8] as a deep network with an attention architecture. For image instance segmentation, we used ResNeXt-101 and ResNeSt-101 to assess the impact of background removal on the model's performance.
### _Data Sets_
To evaluate the benefit (if any) of background removal for fashion image classification, we used the FashionStyle14 dataset [2], where each image represents a fashion style and has visible and salient fashion objects. To evaluate the effect of background removal on the performance of instance segmentation and semantic segmentation, we used Fashionpedia dataset [3], in which, on average, each image shows one person with several main garments, accessories, and garment parts, each annotated by a tight segmentation mask.
### _Background Removal_
Inspired by data augmentation and attention mechanisms, background removal keeps the entire fashion object visible while removing the pixels of background regions to filter noise at the data level. A typical fashion image, such as those in the FashionStyle14 [2] and Fashionpedia [3] datasets, is a clothing image from daily life, street style, celebrity events, runways, or online shopping, annotated by fashion experts. Figure 1 shows several sample images from the FashionStyle14 and Fashionpedia datasets used in this study. The common feature of most fashion images is having visible key objects of the fashion coordinate, i.e., usually one or several evident salient persons in assorted garments in the center of an image, easily distinguished from diverse backgrounds. This feature of fashion images perfectly matches the concept of Salient Object Detection (SOD) [1], which aims to segment the most visually attractive objects in an image. Naturally, one can use a SOD method, e.g., \(U^{2}\)-Net [9], to remove the background of fashion data. For our comparative evaluation experiments on fashion images, we chose the Rembg [10] tool based on \(U^{2}\)-Net to remove the background while keeping salient persons in clothes intact.
To validate the value of background removal for fashion data, we conduct our comparative evaluation experiments with the original images and the corresponding images without background, called _rembg_ images, obtained from the original images using the Rembg tool. Figure 2 illustrates the effect of SOD applied to fashion data for background removal; the result is close to ideal and essentially meets our requirements for background removal.
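A minimal sketch of this preprocessing step follows, using the Rembg Python package; the directory layout and the choice of compositing the cutout onto a black canvas (so that background pixels become zero, as assumed later) are our own assumptions.

```python
from pathlib import Path

from PIL import Image
from rembg import remove  # pip install rembg (U^2-Net based)

src_dir = Path("fashionstyle14/original")  # hypothetical dataset layout
dst_dir = Path("fashionstyle14/rembg")
dst_dir.mkdir(parents=True, exist_ok=True)

for path in src_dir.glob("*.jpg"):
    cutout = remove(Image.open(path).convert("RGB"))  # RGBA, background alpha = 0
    # Composite onto black so background pixels become zero, matching the
    # zero-valued background assumed in the rest of the paper.
    rgb = Image.new("RGB", cutout.size, (0, 0, 0))
    rgb.paste(cutout, mask=cutout.split()[-1])
    rgb.save(dst_dir / f"{path.stem}.png")
```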
It is worth noting that for some fashion images it might not be feasible to remove the background using SOD, such as images of product details and images containing only half or part of a model in garments. When the key objects are not fully displayed or not placed in the center of the image, the SOD method might corrupt the original image, losing information and garments, rather than purely removing the background. Furthermore, black clothes with high contrast to the rest of the image are easily missed by SOD. Fortunately, the datasets chosen for our experiments are pre-processed by manual selection with sensible criteria, so that each image has visible and salient fashion objects.
### _The General Pipeline for Comparative Experiments_
Our general pipeline for controlled comparative experiments shown in Figure 3 comprises three stages: preprocessing, model training, and result analysis. To comprehensively research the value of background removal for fashion data, we control experiments through the whole procedure of ML model training. We compare the effect of background removal in image classification and segmentation for different inputs in various models and training settings. The choice of input fashion dataset depends on the targeted task and its expected output. As mentioned in Section II-B, we choose FashionStyle14 [2] for image classification and Fashionpedia [3] for instance and semantic segmentation to validate the benefit of background removal in different fashion tasks. The preprocessing stage includes background removal and data augmentation. We remove the background from the fashion images using the SOD tool [10]. The fashion images without backgrounds are marked with _rembg_. Data augmentation for images, e.g., crop, flip, rotate, shear, brightness, Solarize, etc., is also one kind of preprocessing. The model training stage contains model initialization, such as random and pre-trained initialization, model selection for various model architectures, and backbones. Finally, we evaluate model performances on a specific task by specific evaluation metrics to verify whether background removal can enhance model performance under the same settings as the original images.
### _Experiment Settings_
We conducted experiments on the following two personal computers: Core\({}^{\text{m}}\) i9-9900KF and Intel\({}^{\text{\textregistered}}\) Core\({}^{\text{m}}\) i7-8700K, each with GPU Nvidia GeForce RTX 2080 Ti. All our model codes are based on the open-source toolboxes of OpenMMLab [11], including MMClassification [12], MMDetection [13], and MMSegmentation [14].
## III Evaluation of the Impact of Background Removal on Fashion Image Classification
First, we present our evaluation of the impact of background removal on the performance of representative Neural Networks for image classification on the FashionStyle14 [2] dataset.
Fig. 1: Original image samples from FashionStyle14 [2] and Fashionpedia [3].
Fig. 2: _Rembg_ image samples from FashionStyle14 [2] and Fashionpedia [3].
Fig. 3: The general pipeline of comparative experiments.
### _The pipeline for image classification experiments._
The pipeline for experiments to assess the impact of background removal in the fashion image classification task is shown in Figure 4. In this pipeline, we validate the value of background removal for different model architectures, model initializations, and data augmentations. We obtain the comparative results via multiple training settings and controlled inputs. The pipeline includes data augmentation at its preprocessing stage, and model initialization and selection at its model training stage. The input can be either original images or _rembg_ images obtained by the SOD tool [10], constituting a set of contrasting inputs. At the data augmentation step, the image augmentation operations can include resizing, cropping, flipping, or compound policies like RandAugment [15]. At the model initialization step, we compare the effect of background removal for two options: random initialization and pre-trained initialization. At the model architecture step, we compare the effect of background removal for several representative image classification models of various architectures and backbones, namely, the VGG series [6] with or without batch normalization [16] layers, ResNet [7], ResNeSt [8], and Swin Transformer [17][18]. Evaluating the impact of background removal for models with different network depths, different backbones (including CNN and Transformer), and with or without normalization can be accomplished by combinations of these models. The output of classification is the categorical index of fashion style classes. Comparative evaluation experiments on this zoo of model architectures help us comprehensively research the effect of background removal in fashion image classification.
Generally, an image classification model should be evaluated in multiple respects, such as quantitative accuracy, visual quality, inference speed, etc. However, our experiments focus on metrics quantifying model accuracy rather than other aspects, because we only care whether background removal of fashion data can improve model performance. Accuracy is essential for the classification task, measuring how correctly positive and negative examples are classified. As FashionStyle14 has only 14 classes, we consider only Top-1 accuracy, which measures the proportion of samples for which the predicted label matches the ground-truth label, i.e., an accurate prediction is one where the class with the highest predicted probability is precisely the expected one.
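For reference, a minimal PyTorch sketch of this Top-1 metric is shown below; the model and data loader are assumed to be defined elsewhere.

```python
import torch

@torch.no_grad()
def top1_accuracy(model, loader, device="cpu"):
    # Fraction of samples whose highest-probability class matches the label.
    model.eval()
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```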
### _Comparisons on Model Architecture_
First, we evaluate the influence of several key attributes of a neural network, including network depth, normalization layer, block unit, backbone, and so on, on the classification performance of FashionStyle14 fashion images [2] with removed backgrounds compared to the classification of the original FashionStyle14 images. Considering the interference of pre-trained model parameters, we train all the models from scratch using random parameter initialization. Below, we provide more details on our experiment setup.
**Network depth.** We choose VGG11, VGG13, VGG16, and VGG19 [6] as representative shallow and simple networks; ResNet-50 and ResNet-101 [7] as typical deep networks; and ResNeSt-101 [8] as a supplementary deep network with an attention architecture. We uniformly train the VGG series models for 100 epochs with a batch size of 32, using an SGD optimizer with a learning rate of 0.01 and a "Step" decay learning rate scheduler. ResNet-50 and ResNet-101 are trained for 100 epochs with a batch size of 32; their optimizer is SGD with a learning rate of 0.1 and a "Step" learning rate policy, with minor changes from the original one trained on ImageNet [4]. ResNeSt is trained with a batch size of 16 for 300 epochs, using an SGD optimizer with a learning rate of 6e-3 and a "Cosine Annealing" learning rate policy [19] with a warm-up at the start.
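The following sketch illustrates the VGG16 from-scratch setting described above in PyTorch; the StepLR step size and gamma, the momentum, and the stand-in data loader are hypothetical details not specified in the paper.

```python
import torch
import torchvision

# VGG16, random initialization, 14 FashionStyle14 classes.
model = torchvision.models.vgg16(weights=None, num_classes=14)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
criterion = torch.nn.CrossEntropyLoss()

# Stand-in loader; in practice, a DataLoader over original or rembg images
# with a batch size of 32.
train_loader = [(torch.randn(32, 3, 224, 224), torch.randint(0, 14, (32,)))]

for epoch in range(100):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```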
**Normalization layer.** We add batch normalization [16] layers to the VGG series, yielding the models usually named VGG11_BN, VGG13_BN, VGG16_BN, and VGG19_BN, which form the control group for the original VGG series. Batch normalization accelerates deep network training by reducing internal covariate shift. By comparing these results, we can also verify the value of the normalization operation in helping models trained from scratch cope with background noise. Our experiments suggested that a learning rate of 5e-3 works better for the VGG series with batch normalization; the rest of the training parameters stay the same as for the VGG series.
**Backbone Design.** In addition to traditional CNN architectures, we also include the tiny version of the Swin Transformer [17] (a.k.a. Swin-T) as a representative of the transformer, a currently popular model architecture. In our trials, Swin-T performed best when we set batch size = 16 and epochs = 400, using the AdamW [20] optimizer with a learning rate of 1e-4 and a "Cosine Annealing" [19] learning rate scheduler with a linear warm-up.
Fig. 4: The pipeline for image classification experiments on FashionStyle14.
The more complex a model is, the harder its training is to converge, especially with limited resources for training models from scratch, so models that perform better on ImageNet may yield worse results in our experiments. More importantly, we focus on the differences caused by the original and rembg images; the gap is usually visible even before a model reaches its best accuracy. Figure 5 compares the performance of the trained-from-scratch models on FashionStyle14 images with removed backgrounds (rembg) against the original images. As shown in Figure 5, background-removed (rembg) images significantly improve
performance in the VGG series, especially for VGG16 without batch normalization, where accuracy improves by over 5%. The results meet our general expectation that background removal benefits model training by filtering out noise in advance, thus reducing learning difficulty.
However, this improvement shrinks considerably after adding batch normalization [16]. The decline is due to a conflict between background removal and batch normalization. Batch normalization is a regularization technique to avoid overfitting caused by insufficient quality labeled data. Similarly, data augmentation plays the same role by supplementing a dataset with similar data created from the original data in the dataset. In contrast, instead of introducing random noise and transformations for better generalization, background removal removes many pixels belonging to the background, making it more difficult for regularization techniques to overcome overfitting. In other words, background removal plays a role opposite to batch normalization. Thus, batch normalization cannot increase model accuracy with rembg images as much as with original images. From a cognitive point of view, background removal is analogous to the human attention mechanism applied at the pre-processing step, leading the model to center on the foreground without interference from the background, whereas batch normalization does this work during model training.
Meanwhile, as the network depth grows, the performance improvement from background removal persists and is similar to that of the VGG_BN series. This trend suggests that model depth does not impact the performance improvements due to image background removal. For ResNeSt-101, which acquired the best accuracy, using rembg images is slightly worse than using the original images. ResNeSt is featured by its Split-Attention Network, which can help models learn the foreground better. This suggests that background removal does not work for models with attention mechanisms able to distinguish foreground from background.
Besides, we extend the mean model accuracy to the accuracy per class (not shown for lack of space) and see that the effect of background removal involves all the classes rather than a specific class. We can confirm that background removal positively affects model training. In sum, background removal is a special kind of attention mechanism that helps models focus on key regions at the input layer, while also increasing the risk of overfitting for deep networks due to the loss of background information.
On the one hand, for a simple and shallow network like VGG, the benefit of this attention effect far outweighs the harm to regularization. On the other hand, with the growing complexity of deep networks and architectures, normalization such as batch normalization and layer normalization is indispensable for models to enhance generalization and avoid overfitting caused by the shortage of enough labeled data. Thus, the advantages of background removal tend to decline until they disappear. Especially for models with attention mechanisms, the effect of background removal is weaker than, or even detrimental compared to, the attention mechanism once the model is fully trained.
### _Comparisons on Model Initialization_
Model training from scratch is time-consuming and challenging to reproduce due to limited computing resources and random initialization. In most cases, one tends to use model parameters pre-trained on ImageNet as initialization to fine-tune models swiftly. That is why we emphasize the value of background removal for initialization from pre-trained models. Below, we compare the effect of background removal on the performance of models with pre-trained initialization and those with random initialization.
Pre-trained initialization is the first choice for most classification tasks as it improves model convergence speed. The model parameters previously trained on a large dataset can be reused as the initial model parameters in the target domain, which is equivalent to training models with more data for better performance and generalization. To assess the benefits of background removal in image classification when training from scratch versus training from pre-trained initialization, we experiment with the VGG series [6], the VGG_BN series [16], ResNet-50 [7], ResNet-101 [7], ResNeSt-101 [8], and Swin-T [17]. ViT-B [21] is also considered as a supplementary transformer architecture. The VGG and VGG_BN series retain the same training settings as those trained from scratch, except that the learning rate is changed to 1e-4. For ResNet-50 and ResNet-101, we set a fixed learning rate of 1e-5, a batch size of 32, and 300 epochs. ResNeSt is fine-tuned for 200 epochs at a learning rate of 5e-5 with a "Step" policy and a batch size of 32. Swin-T and ViT-B are trained for 100 epochs at a fixed learning rate of 1e-5, with a batch size of 32.
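A sketch of the pre-trained setting for ResNet-50 follows; loading ImageNet weights and replacing the classification head are standard torchvision usage, while the momentum value is an assumption.

```python
import torch
import torchvision

# Load ImageNet weights, then swap the classifier head for 14 classes.
weights = torchvision.models.ResNet50_Weights.IMAGENET1K_V1
model = torchvision.models.resnet50(weights=weights)
model.fc = torch.nn.Linear(model.fc.in_features, 14)

# Fixed learning rate of 1e-5 as described above; momentum is assumed.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()
```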
Fig. 5: The performance of the trained from scratch models on rembg images with removed backgrounds compared to the original FashionStyle14 images.
Figure 6 compares the performance of the fine-tuned pre-trained models on images with removed backgrounds (rembg) against the original images. As we can see, all the models trained from parameters pre-trained on ImageNet perform worse on rembg images than on original images. Combined with the results of Figure 5, it is clear that background removal is useless when fine-tuning models from pre-trained ones. This is because the original ImageNet images, having diverse backgrounds, differ substantially from the corresponding images without backgrounds, and these differences make transfer learning from ImageNet to fashion images without backgrounds more difficult. Another reason relates to model generalization: pre-trained initialization and transfer learning enhance model generalization since the model can learn features from more data, whereas a model trained with rembg images is more susceptible to overfitting in deep networks. Besides, the models using pre-trained initialization perform better than the ones trained from scratch. Thus, the performance improvement due to background removal is far smaller than the benefit of transfer learning via pre-trained initialization.
In our more detailed evaluation of the per-class accuracy of models with pre-trained initialization (not shown for lack of space), we found that using rembg images decreases model accuracy in most categories. ResNeSt-101, as shown in Figure 5, obtained no improvement from rembg images with random initialization, and it performs much worse with pre-trained initialization, as shown in Figure 6. This demonstrates that background removal is impractical for models with attention mechanisms, regardless of the kind of initialization.
### _Comparisons on Data Augmentation_
In addition to using pre-trained models for initialization, data augmentation is a valuable technique to boost model training. Image augmentations can be organized into two categories: spatial-level transformations, such as crop, flip, rotate, translate, shear, etc., and pixel-level transformations, such as brightness, sharpness, solarize, posterize, etc.
Theoretically, since we set the pixel values of the background to zero after removing it, background pixels are no longer compatible with data augmentations that involve pixel values. As shown in Figure 7, we visually compare popular pixel-level transforms, including Solarize, Posterize, Sharpen, Color jitter, Equalize, ToGray, Random Brightness, and Cutout [22], applied to the original and rembg images. These transforms, applied to the whole image, only act on the pixels of the main object in the rembg image. Moreover, as illustrated with Cutout, a transform that randomly erases parts of an image to improve model robustness fails to work with background removal owing to the lack of background. Similarly, background removal is incompatible with other regularization techniques like CutMix [23] and MixUp [24] that replace regions of images with other patches, because they may mix the blank background pixels with the original ones. In other words, if we choose background removal as a preprocessing step, model training can gain only limited improvement from existing data augmentations. It is demanding to introduce random noise for regularization and generalization when a large area of background pixels is lost.
Moreover, data augmentation is a technique for artificially extending a dataset with similar data created from the original dataset, targeted at alleviating the overfitting of deep networks caused by the scarcity of annotated data. It introduces randomness for better generalization, while background removal simultaneously reduces both the noise and the information content of images; there is a natural conflict between their measures and goals. Consequently, images with background removal are poorly compatible with other data augmentations. Furthermore, after setting the background pixels to zero, there is a risk of confusing pure black clothes and other existing black items with the background.
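The sketch below illustrates both effects on a toy rembg-style tensor: Solarize leaves the zero-valued background untouched (so it acts on the foreground only), and a Cutout-style RandomErasing patch may land on the already blank background, where it has no regularizing effect. The image, mask, and transform parameters are hypothetical.

```python
import torch
from torchvision.transforms import RandomErasing
from torchvision.transforms import functional as F

# Toy rembg-style image: random foreground inside a mask, zero background.
img = torch.rand(3, 224, 224)
mask = torch.zeros(1, 224, 224)
mask[:, 60:170, 70:160] = 1.0
rembg_img = img * mask

# Solarize inverts only pixels above the threshold, so the zero-valued
# background is untouched: the transform acts on the foreground alone.
sol = F.solarize(rembg_img, threshold=0.5)
assert torch.equal(sol[:, mask[0] == 0], rembg_img[:, mask[0] == 0])

# A Cutout-style RandomErasing patch frequently lands on the already blank
# background of a rembg image, where erasing has no regularizing effect.
erased = RandomErasing(p=1.0, value=0.0)(rembg_img.clone())
changed = (erased != rembg_img).any(dim=0).float().mean()
print(f"fraction of pixels actually modified: {changed:.3f}")
```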
## IV Evaluation of the Impact of Background Removal on Fashion Image Segmentation
In addition to the fashion image classification task, the image segmentation task is critical in fashion understanding, with essential applications especially in fine-grained fashion analysis. Thus, verifying the impact of background removal on segmentation is indispensable for our research on the effect of background removal in fashion image processing. Image segmentation can be divided into three categories: classifying pixels with semantic labels (semantic segmentation), partitioning individual objects (instance segmentation), or both (panoptic segmentation) [26]. Semantic segmentation involves labeling each pixel in an image with an object category, resulting in a semantic image with categorical pixel values. Instance segmentation separates individual objects in an image, treating them as distinct entities even if they belong to the same class. Essentially, instance segmentation expands on semantic segmentation by detecting and delineating each object region.
After careful selection, we chose the Fashionpedia dataset [3], with its complete annotations, for both the instance segmentation and semantic segmentation experiments. To assess the impact of background removal on fashion image segmentation, we designed two experiment pipelines for the two kinds of image segmentation, instance segmentation and semantic segmentation, shown in Figures 8 and 9, respectively.
Fig. 6: The performance of the fine-tuning pre-trained models on images with removed backgrounds (rembg) compared to the original ImageNet images.
Fig. 7: Samples of pixel-level transformations of original (first row) and rembg (second row) images. [25]
### _Impact of Background Removal on Instance Segmentation._
The input to the pipeline for the experiments on instance segmentation (Figure 8) is chosen between original images of Fashionpedia and the corresponding _rembg_ images, with annotations of class, bounding box (bbox) locations, and masks. To assess the impact of background removal on the performance of models for instance segmentation, we experiment with models based on the Mask R-CNN [27] framework with various backbones to extract image features. Mask R-CNN is a highly influential model for instance segmentation.
For our experiments with instance segmentation, we choose ResNeXt-101 [28] and ResNeSt-101 [8] representing CNNs, and Swin-T [17] and PVT [29] representing Transformers. ResNeXt-101 and ResNeSt-101 are trained with a batch size of 4 due to the limitation of GPU memory. The optimizer is SGD with a learning rate of 0.02 and a "Step" learning rate policy with a linear warm-up, for 12 epochs. The training strategy for Swin-T and PVT uses the AdamW [20] optimizer with a learning rate of 0.001 and a "Step" learning rate policy for 12 epochs. All models utilize ImageNet pre-trained weights as initialization, with the input size set to 512 x 512. We do not additionally test models trained from scratch, since training segmentation models from scratch is costly and time-consuming and has limited reference value.
Table I shows almost no difference between the performance of Mask R-CNN with a ResNeXt-101 or ResNeSt-101 backbone on original images and its performance on rembg images. Nevertheless, as presented in Section III-C, models trained from pre-trained initialization achieve much better results with original images. We assert that under the Mask R-CNN architecture, RPN proposals are calculated only from the target bbox coordinate annotations and are thus not affected by background removal. Consequently, background removal does not impact mask generation, which depends highly on the regions of interest and classification. In contrast, when using Swin-T and PVT as the backbone of Mask R-CNN, the results drop significantly when taking rembg images as input. This result is consistent with the previous experiments (Section III-C) for transformer models.
To sum up, background removal cannot positively influence fashion image instance segmentation, since the bbox and mask annotations are not affected by the background.
### _Impact of Background Removal on Semantic Segmentation._
Semantic segmentation is also a common task for fashion image processing. The pipeline for semantic segmentation experiments is shown in Figure 9. The input is fashion images from Fashionpedia [3], with ground-truth semantic labels generated from the annotation file, where the pixel values represent categories. The main difference from instance segmentation is that the output of semantic segmentation, also known as pixel-based classification, is a high-resolution semantic image with a categorical index (class) per pixel. For semantic segmentation, we first need to generate the semantic label annotation as ground truth, that is, a grayscale image of the same size as the input whose pixel values stand for categorical indices. Semantic segmentation can be regarded as per-pixel classification. Background pixels can be ignored by assigning them the value 255, since the categorical index starts at 0. Importantly, because the semantic segmentation annotation marks each background pixel in this way, background pixels are not involved in calculating the loss function. In simple words, the loss function of semantic segmentation naturally supports neglecting the background through the pixel values of the semantic labels. Thus, in theory, background removal should not affect semantic segmentation; therefore, we only experiment with Swin-B (Swin Transformer-Base) [17] to validate the impact of background removal on the semantic segmentation task.
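A minimal PyTorch sketch of this mechanism is given below: `CrossEntropyLoss(ignore_index=255)` excludes the background pixels labeled 255 from the loss, so the model receives no gradient from them. The number of classes and the tensor shapes are hypothetical.

```python
import torch

# Per-pixel cross-entropy; pixels labeled 255 (background) contribute no loss.
criterion = torch.nn.CrossEntropyLoss(ignore_index=255)

num_classes = 46                       # hypothetical Fashionpedia class count
logits = torch.randn(2, num_classes, 256, 256)
target = torch.randint(0, num_classes, (2, 256, 256))
target[:, :64, :] = 255                # mark a band of background pixels
print(criterion(logits, target).item())
```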
Fig. 8: The pipeline of instance segmentation on Fashionpedia.
Fig. 9: The pipeline of semantic segmentation on Fashionpedia.
The results of our evaluation experiments on background removal in semantic segmentation with Swin-B on fashion images from Fashionpedia are summarised in Table II, which confirms that background removal does not benefit semantic segmentation. The slight gap between the performance of Swin-B on original images and its performance on rembg images is caused by random initialization.
Considering classification, instance segmentation, and semantic segmentation together, background removal is only beneficial in the classification task, especially when training simple models from scratch. Background removal cannot influence the bbox and mask location annotations of instance segmentation, and the loss function of semantic segmentation is likewise able to ignore background pixels.
## V Conclusions
We have empirically evaluated the impact of background removal on the performance of Neural Networks for fashion image classification and segmentation. We have experimented with various aspects of model training, including model architecture, initialization, data augmentation, and task type (classification and segmentation). Note that we limit our research to fashion data and do not extend it to other data sources, because our applied research has been primarily focused on the application domain of fashion image processing and understanding. Moreover, the background removal method we used, based on Salient Object Detection (SOD), has a certain limitation: it can only effectively remove the background from fashion images that have salient persons in clothes positioned in the center, rather than from images of details or parts of garments.
Our evaluation experiments show that, in most cases, background removal of fashion data is unnecessary in practice, even though, generally, the variety of backgrounds is one of the significant obstacles to training AI models. Under this premise, models could be expected to improve once diverse backgrounds are removed. However, compared to the limited benefit of training shallow classification models from scratch, background removal has certain applicable limitations and conditions. Firstly, it is not sensible when fine-tuning models from pre-trained ones. Besides, owing to the loss of background, background removal is not compatible with data augmentation techniques that aim to supplement the dataset by creating similar data. On one side, background removal can reduce noise and force models to learn key regions. On the other side, it adds the risk of overfitting for deep networks due to the loss of information and randomness. Similarly, regularization techniques like batch normalization are weakened by images without background. The positive effect of background removal is limited to simple and shallow networks that are less prone to overfitting.
Nevertheless, our evaluation shows that background removal of fashion data benefits the fashion image classification task, while it provides no benefit for image segmentation, namely instance segmentation and semantic segmentation with location annotations. Furthermore, background removal can help model training focus on the critical regions of interest only when the model cannot fully distinguish foreground from background on its own, i.e., without an adequate training strategy. However, background removal is generally not easy for common data and images; only fashion data, where most images have full outfits visible, is relatively feasible to process with Salient Object Detection and related tools.
|
2304.04195 | Fast Charging of Lithium-Ion Batteries Using Deep Bayesian Optimization
with Recurrent Neural Network | Fast charging has attracted increasing attention from the battery community
for electrical vehicles (EVs) to alleviate range anxiety and reduce charging
time for EVs. However, inappropriate charging strategies would cause severe
degradation of batteries or even hazardous accidents. To optimize fast-charging
strategies under various constraints, particularly safety limits, we propose a
novel deep Bayesian optimization (BO) approach that utilizes Bayesian recurrent
neural network (BRNN) as the surrogate model, given its capability in handling
sequential data. In addition, a combined acquisition function of expected
improvement (EI) and upper confidence bound (UCB) is developed to better
balance the exploitation and exploration. The effectiveness of the proposed
approach is demonstrated on the PETLION, a porous electrode theory-based
battery simulator. Our method is also compared with the state-of-the-art BO
methods that use Gaussian process (GP) and non-recurrent network as surrogate
models. The results verify the superior performance of the proposed fast
charging approaches, which mainly results from that: (i) the BRNN-based
surrogate model provides a more precise prediction of battery lifetime than
that based on GP or non-recurrent network; and (ii) the combined acquisition
function outperforms traditional EI or UCB criteria in exploring the optimal
charging protocol that maintains the longest battery lifetime. | Benben Jiang, Yixing Wang, Zhenghua Ma, Qiugang Lu | 2023-04-09T08:28:39Z | http://arxiv.org/abs/2304.04195v1 | Fast Charging of Lithium-Ion Batteries Using Deep Bayesian Optimization with Recurrent Neural Network
###### Abstract
Fast charging has attracted increasing attention from the battery community for electrical vehicles (EVs) to alleviate range anxiety and reduce charging time for EVs. However, inappropriate charging strategies would cause severe degradation of batteries or even hazardous accidents. To optimize fast-charging strategies under various constraints, particularly safety limits, we propose a novel deep Bayesian optimization (BO) approach that utilizes Bayesian recurrent neural network (BRNN) as the surrogate model, given its capability in handling sequential data. In addition, a combined acquisition function of expected improvement (EI) and upper confidence bound (UCB) is developed to better balance the exploitation and exploration. The effectiveness of the proposed approach is demonstrated on the PETLION, a porous electrode theory-based battery simulator. Our method is also compared with the state-of-the-art BO methods that use Gaussian process (GP) and non-recurrent network as surrogate models. The results verify the superior performance of the proposed fast charging approaches, which mainly results from that: (i) the BRNN-based surrogate model provides a more precise prediction of battery lifetime than that based on GP or non-recurrent network; and (ii) the combined acquisition function outperforms traditional EI or UCB criteria in exploring the optimal charging protocol that maintains the longest battery lifetime.
Data-driven optimization, Bayesian optimization, Fast-charging optimization, Recurrent neural network.
## I Introduction
Fast charging is an essential technology for alleviating the issues of range anxiety and overly long charging times for electrical vehicles (EVs), and thus it has drawn increasing attention in recent years. However, fast charging with extremely high power may intensify side reactions such as lithium dendrite growth and electrolyte decomposition[1] that can considerably degrade batteries. The fast charging process can also produce excessive heat, which may lead to safety risks such as fire and explosion[2, 3]. Therefore, it is imperative to develop fast charging strategies that maintain the safe and efficient operation of batteries while minimizing battery degradation.
The optimization of charging strategies has received broad attention due to the widespread demand for fast charging in various daily-life and industrial applications. In general, the optimization mainly focuses on the selection of charging currents, aiming to reduce battery degradation and maintain safe battery operation to the maximum extent. Existing methods on this topic can be roughly classified into model-based and data-driven approaches. For model-based methods, commonly employed battery models include the equivalent circuit model (ECM)[4], single particle model (SPM)[5], and pseudo-two-dimensional (P2D) model[6]. In particular, a single shooting method with a coupled electrochemical-thermal aging model has been proposed to design a fast-charging strategy[7], where the growth of the solid electrolyte interface (SEI) layer can be captured. Ashwin et al.[8] modeled the battery aging mechanism using the P2D model, accounting for the thickness of the SEI layer and the temperature. Parhizi et al.[9] studied the capacity degradation of lithium-ion batteries under different operating conditions by analyzing the growth of the SEI layer based on the SPM. Xavier et al.[10] developed model-predictive control (MPC) based on reduced-order models to design fast charging strategies under the constraint of no Li plating. A quadratic dynamic matrix control method was developed to design high-performance fast-charging strategies in the presence of temperature and voltage constraints[11].
However, model-based approaches have been shown to be inefficient since they require extensive expert knowledge, and their performance is largely limited by model complexity. Given the overwhelming complexity of modeling a battery's physical and chemical properties with electrochemistry[12], data-driven approaches arise by treating the design of charging strategies as a black-box problem and using experimental data to estimate the optimal solution. Data-driven approaches are advantageous in that they do not require domain knowledge; instead, they only need experimental data to extract valuable insights and establish statistical machine learning models. For instance, Severson et al.[13] utilized elastic net regression to predict battery cycle life based on charging and discharging features from early cycles. In addition, Hong et al.[14] proposed an approach to predict the remaining useful life (RUL) of batteries based on voltage curve information from only a few early cycles, where a dilated CNN is used to handle the time-sequential voltage information and implicit sampling is adopted to measure the output uncertainty of the neural network.
A particular category of data-driven approaches directly utilizes experimental data for discovering the optimal charging strategies without explicitly building the data-driven battery models. For such black-box optimization methods, the charging phases are often split into divisions and the optimal charging current is determined for each division. Specifically, each charging strategy, consisting of a vector of candidate charging currents across these divisions, is evaluated by performing experiments and then the optimal strategy is obtained by comparing different charging strategies. A well-known
example is the grid search approach that performs a large number of repetitive experiments across different grid points (charging strategies) to yield an approximated optimum. However, this method becomes computationally and experimentally intractable if the number of decision variables is large and the experiments are expensive to perform.
Recurrent neural networks (RNNs) maintain hidden states that connect past states/inputs with the prediction of outputs. However, a traditional RNN as in Fig. 1 often cannot capture long-term information well[24]. To this end, the long short-term memory (LSTM) network, a variant of the RNN, will be used in this work to alleviate this problem[25].
### _Bayesian Recurrent Neural Network_
Although the RNN and LSTM networks are advantageous in capturing dynamical dependencies, they are unable to estimate prediction uncertainty due to their deterministic nature. Such statistical quantifications are critical for assessing the confidence level associated with the subsequent decision-making, particularly Bayesian optimization and inference. Therefore, the Bayesian RNN has been developed to provide uncertainty information for the output prediction[26]. The work on Bayesian RNN was pioneered by Gal et al.[27], where a variational dropout approach was adopted to obtain the Bayesian approximation of the output prediction. Such techniques were further adopted and extended by Sun and Braatz[23] for fault detection and identification. For these methods, the full-scale Bayesian framework is utilized by treating all network parameters as random variables. The overall predicted output of the network is obtained by taking the expectation of the individual predicted output under given parameters with respect to the posterior of these parameters. However, such methods involve an overly large number of random parameters, posing a significant computational challenge. In this work, we adopt the strategy in Lazaro-Gredilla et al.[28] by only treating the parameters of the last layer of the RNN as random variables. This approach can not only avoid the computation issue associated with the full-scale Bayesian setup, but also generate the distribution of the predicted output that is critical for the subsequent decision-making within the Bayesian optimization framework. The output of the RNN, under given network parameters, is assumed to be Gaussian distributed in this work.
We consider the typical BO setup in (1) with BRNN as a surrogate model to predict output \(y_{t}\) (i.e., battery cycle life) under any given input \(x_{t}\) (i.e., charging strategies). Denote \(D\) as the set of data samples, and \(\theta_{y}\) as the parameters of the last layer of the BRNN, which are assumed to be random. Then the posterior distribution of \(y_{t}\), given the sample dataset \(D\), is
\[p(y_{t}|D)=\int p(y_{t}|\theta_{y})p(\theta_{y}|D)\,\mathrm{d}\theta_{y} \tag{5}\]
where \(\theta_{y}=[W_{y}\quad b_{y}]\) and \(p(\theta_{y}|D)\) is the posterior distribution of the random parameters in the last layer of the BRNN under given dataset \(D\), and \(p(y_{t}|\theta_{y})\) is the presumed distribution of the output under a fixed \(\theta_{y}\). With such a Bayesian framework, the posterior distribution of output \(y_{t}\) is essentially a weighted average of that under all possible parameter values, where the weight is the posterior of parameter \(\theta_{y}\) under dataset \(D\). Given the posterior distribution \(p(y_{t}|D)\), we denote \(\mu_{y_{t}}(x_{t})\) and \(\sigma_{y_{t}}(x_{t})\) as the mean and standard deviation of \(y_{t}\) for any given input \(x_{t}\). Then the AF can be constructed from these posterior statistics to determine the next optimal input to attempt.
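In practice, the integral in Eq. (5) can be approximated by Monte Carlo: draw samples of \(\theta_{y}\) from its posterior, form one prediction per sample, and summarize. The sketch below is illustrative only; it assumes the network body is deterministic, that posterior samples of the last-layer parameters are already available, and all names and the Gaussian noise term are our own assumptions rather than the authors' implementation.

```python
import numpy as np

def predictive_posterior(phi_x, theta_samples, noise_std=0.0):
    """Monte Carlo approximation of p(y_t | D) in Eq. (5).

    phi_x:         (d,) features from the deterministic network body
    theta_samples: (S, d + 1) posterior samples of theta_y = [W_y, b_y]
    noise_std:     assumed observation-noise level (illustrative)
    """
    # One prediction per posterior sample of the last-layer parameters.
    preds = theta_samples[:, :-1] @ phi_x + theta_samples[:, -1]
    mu = preds.mean()                            # mu_{y_t}(x_t)
    sigma = np.sqrt(preds.var() + noise_std**2)  # sigma_{y_t}(x_t)
    return mu, sigma
```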
## III Deep BO with BRNN for Battery Fast Charging
### _Battery Fast Charging Optimization Problem_
The objective of this work is to discover the optimal fast-charging strategy for lithium-ion batteries, with as few experiments as possible, that can slow down the battery degradation and respect safety constraints. The inherent conflict between battery aging and fast charging makes the problem of seeking the optimal charging strategy nontrivial. To simplify this problem, we fix the charging time to be a constant and design the charging strategy under this constant charging time, with the aim of reducing the battery aging and satisfying the safety constraints.
First, the battery cycle life is defined as the number of cycles until the battery capacity degrades to 80% of the nominal capacity. In general, a longer cycle life indicates slower battery degradation. The charging protocols follow the multi-constant-current-step charging profile[29, 18]. Specifically, we divide the fixed charging time into \(n\) steps, with the charging current in the \(i\)th step defined as \(I_{(i)}\). Then the aforementioned problem can be modeled as a combinatorial optimization:
\[\arg\max_{I_{(1)},I_{(2)},\cdots,I_{(n)}}y=f(I_{(1)},I_{(2)},\cdots,I_{(n)}) \tag{6}\]
where \(y\) indicates the battery cycle life and \(f(\cdot)\) represents the mapping from charging strategy to battery cycle life.
We specify the function \(f(\cdot)\) in (6) as the objective function of BO. Since there is no analytical expression for \(f(\cdot)\), we can only establish an approximate model \(\hat{f}(\cdot)\) (i.e., the surrogate model) via experimental observation data. In addition, the design of acquisition function is also important for finding satisfactory charging strategies in the parameter space. This work mainly focuses on developing novel surrogate models utilizing additional battery experiment data to improve the battery lifetime prediction, as well as a novel acquisition function to efficiently find the optimal charging strategy. The proposed novel surrogate model and acquisition function will be discussed in the two subsections below.
### _Battery Lifetime Prediction with LSTM Networks_
To design strategies for battery charging, the ultimate objective is to discover the optimal charging strategies that maximize the battery cycle life. To this end, we choose the battery cycle life as the output of the LSTM network and the charging currents across different charging steps as the input. However, directly predicting battery lifetime based only on charging currents is difficult[13]. On the other hand, the voltage during the charging process is more indicative because it can better reflect the health status of a battery and thus can be used to speculate the battery lifetime. In addition, the voltage also has a strong relationship with the current, since it is determined by the current during the charging process. Therefore, as an intermediate variable, the voltage can serve as a bridge connecting the current and the battery lifetime.
However, different from charging currents, the voltage continuously changes over time and charging steps, and also varies slightly across different charging-discharging cycles. In addition, after completing all charging-discharging cycles, one can gather a large number of voltage measurements, and thus the voltage data throughout all cycles forms a high-dimensional
vector. Such high-dimensional voltage information is hard to incorporate into the LSTM network. Therefore, it is necessary to reduce its dimension, i.e., to find a low-dimensional representation of the voltage data obtained from all experimental cycles. Given the observation that voltage curves from the first few cycles are similar, we choose the first cycle as the representative (see Fig. 2(a) and Fig. 2(c)). Next, we discard the irrelevant discharging and constant-voltage steps from the voltage curve because they are not affected by charging currents, as in the shaded area in Fig. 2(b) and Fig. 2(d). Then we divide the truncated voltage curve into several parts based on the state of charge (SOC), corresponding to different charging steps. For each part, we construct two or three statistical features to represent the original voltage curve to reduce the dimension. Here four schemes are used to produce statistical features (a code sketch follows the list):
**Scheme 1**: use the voltage mean and variance in each part.
**Scheme 2**: use the 20%, 50% and 80% quantile of the voltage in each part.
**Scheme 3**: fit the voltage curve by linear regression, then use the slope and intercept of the linear function in each part.
**Scheme 4**: fit the voltage curve by quadratic regression (\(y=ax^{2}+bx+c\)), then use the parameters (\(a,b,c\)) in each part.
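For illustration, the four schemes above can be sketched as follows. This is a minimal sketch assuming `v` and `soc` are aligned 1-D arrays of the truncated first-cycle voltage curve, `bounds` lists the SOC breakpoints separating the charging steps, and each SOC window is non-empty; all names are our own.

```python
import numpy as np

def voltage_features(v, soc, bounds, scheme):
    """Per-step statistical features of the truncated voltage curve (Schemes 1-4)."""
    feats = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        seg = v[(soc >= lo) & (soc < hi)]        # voltage within one charging step
        t = np.arange(len(seg))                  # within-step sample index
        if scheme == 1:                          # mean and variance
            feats += [seg.mean(), seg.var()]
        elif scheme == 2:                        # 20%, 50%, 80% quantiles
            feats += list(np.quantile(seg, [0.2, 0.5, 0.8]))
        elif scheme == 3:                        # linear fit: slope and intercept
            feats += list(np.polyfit(t, seg, 1))
        elif scheme == 4:                        # quadratic fit: a, b, c
            feats += list(np.polyfit(t, seg, 2))
    return np.asarray(feats)
```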
The entire process to reduce the dimension of voltage data is illustrated in Fig. 2. Let us denote the charging currents as \(\mathbf{I}\), and the voltage features as \(\mathbf{U}\). In our approach, we divide the constant-current (CC) charging process into several steps, and the charging current in each step is defined as \(I_{1},I_{2},\cdots,I_{n}\), where \(n\) is the number of steps. The set \(\{I_{1},I_{2},\cdots,I_{n}\}\) of currents forms the decision variables of the BO. Since the feature vector \(\mathbf{U}\) is obtained after the above dimension reduction of the voltage data, it is already a low-dimensional array containing the sequence information. Inspired by the role of hidden states in the RNN, in this work we propose to use the low-order voltage features \(\mathbf{U}\) as intermediate states for enhancing the battery lifetime prediction with current input \(\mathbf{I}\), as shown in Fig. 3. Specifically, the relation between charging strategies \(\mathbf{I}\) (input), voltage features \(\mathbf{U}\) (state), and predicted battery cycle life \(\mathbf{y}\) (output) is depicted below:
\[\hat{\mathbf{U}}=\hat{g}(\mathbf{I}),\quad\hat{\mathbf{y}}=\hat{h}(\mathbf{I},\hat{\mathbf{U}}) \tag{7}\]
With this novel architecture, the prediction of battery lifetime becomes a two-step process. First, we use the charging strategy \(\mathbf{I}\) to predict the state \(\mathbf{U}\), followed by the second step of using the combined \(\mathbf{U}\) and \(\mathbf{I}\) to predict the battery lifetime \(\mathbf{y}\). Note that combining \(\mathbf{U}\) and \(\mathbf{I}\) for predicting the battery lifetime is equivalent to adding an extra feature \(\mathbf{U}\) to supervise and improve the prediction of the battery lifetime \(\mathbf{y}\) directly from \(\mathbf{I}\).
Another important fact is that strong temporal correlation exists within the charging current sequence \(\{I_{(1)},I_{(2)},\cdots,I_{(n)}\}\), i.e., the present charging current can impact the final cycle life along with the subsequent charging currents throughout the entire cycle. Therefore, to capture such dynamical relation, we utilize an LSTM network to model the connections between charging currents across all steps. In other words, the \(\hat{g}(\cdot)\) function above will be described by an LSTM network. For the lifetime prediction function \(\hat{h}(\cdot)\), we will represent it by a traditional multi-layer perceptron (MLP) network that consumes the combined \(\mathbf{U}\) and \(\mathbf{I}\) as the inputs.
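A minimal PyTorch sketch of this two-step architecture follows: an LSTM (the \(\hat{g}(\cdot)\) above) consumes the current sequence and emits predicted voltage features \(\hat{\mathbf{U}}\), and an MLP (the \(\hat{h}(\cdot)\) above) maps the combined \((\mathbf{I},\hat{\mathbf{U}})\) to cycle life. All dimensions are placeholders of our choosing, and the Bayesian last layer is omitted.

```python
import torch
import torch.nn as nn

class LifetimeSurrogate(nn.Module):
    """LSTM g(.) predicts voltage features U from currents I; MLP h(.) maps (I, U) to y."""
    def __init__(self, n_steps=3, u_dim=6, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.to_u = nn.Linear(hidden, u_dim)
        self.mlp = nn.Sequential(
            nn.Linear(n_steps + u_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, currents):                 # currents: (batch, n_steps)
        out, _ = self.lstm(currents.unsqueeze(-1))
        u_hat = self.to_u(out[:, -1])            # U-hat from the last hidden state
        y_hat = self.mlp(torch.cat([currents, u_hat], dim=-1))
        return u_hat, y_hat                      # supervise both U-hat and y-hat
```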
Figure 3: The proposed framework to predict battery lifetime.
Figure 2: The process of reducing the dimension of the voltage data. (a) The original voltage curve; (b) Omit discharging part; (c) Retain only one cycle; (d) Omit the CV step; (e) Separate the charging process based on the SOC.
### _Acquisition Function Design_
After the previously established surrogate model, the next task for BO is to decide which point to sample in the subsequent iteration. Such decision making is often achieved by optimizing a constructed AF. Common strategies to establish AFs include expected improvement (EI), upper confidence bound (UCB), probability of improvement (PI), and Thompson Sampling (TS) [30]. The objective of the EI-based AF is to maximize the expected improvement
\[\alpha_{\text{EI}}(\mathbf{x})=\sigma_{f}(\mathbf{x})\big(Z_{f}(\mathbf{x})\Phi(Z_{f}(\mathbf{x}))+\phi(Z_{f}(\mathbf{x}))\big) \tag{8}\]
with
\[Z_{f}(\mathbf{x})=\frac{f(\mathbf{x}^{\dagger})-\mu_{f}(\mathbf{x})}{\sigma_{f}(\mathbf{x})} \tag{9}\]
where \(\Phi(\cdot)\) and \(\phi(\cdot)\) are the cumulative distribution function and the probability density function of the standard normal distribution, respectively; \(f(\mathbf{x}^{\dagger})\) stands for the best observation value so far; \(\mu_{f}(\mathbf{x})\) and \(\sigma_{f}(\mathbf{x})\) indicate the posterior predictive mean and standard deviation of the objective function, respectively. The UCB-based AF has a simpler expression that is given by
\[\alpha_{\text{UCB}}(\mathbf{x})=\mu_{t}(\mathbf{x})+\kappa\sigma_{t}(\mathbf{x}) \tag{10}\]
where the parameter \(\kappa\) balances the posterior mean \(\mu_{t}(\mathbf{x})\) and standard deviation \(\sigma_{t}(\mathbf{x})\).
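Both criteria reduce to a few lines once the posterior mean and standard deviation are available. The sketch below is illustrative and assumes SciPy is available; it is written for the lifetime-maximization setting of this work, so the improvement is taken as \(\mu_{f}(\mathbf{x})-f(\mathbf{x}^{\dagger})\), and \(\kappa=2.0\) is a placeholder default rather than a value from this paper.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    """EI of Eq. (8), for maximization: z = (mu - best) / sigma."""
    sigma = np.maximum(sigma, 1e-9)              # guard against zero variance
    z = (mu - best) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

def upper_confidence_bound(mu, sigma, kappa=2.0):
    """UCB of Eq. (10); kappa trades off mean (exploitation) and std (exploration)."""
    return mu + kappa * sigma
```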
In this work, we propose to use a hybrid AF to facilitate the BO. Initially, we use the EI-based AF to guide the search process. However, this method often becomes inefficient if the expected improvement over the entire parameter space is low, which means that the estimation of the objective function is inaccurate, causing the algorithm to fall into a local optimum. In this case, we resolve the issue by switching to the UCB-based AF. In practical applications, it often occurs that the maxima of the EI and the UCB fall on the same sample point [31]. A reasonable explanation is that both the EI and UCB criteria consider the uncertainty of the surrogate model when searching for the subsequent sample point. In particular, the EI criterion calculates the possible improvement based on a normal probability distribution, and the UCB criterion calculates the confidence bound based on the posterior variance. This often leads to coinciding maxima for the EI- and UCB-based AFs when the uncertainty exists [32]. In such situations, the charging strategy corresponding to the UCB at a certain quantile (often between 75% and 90%) can be selected as the candidate strategy. If the selected charging strategy still repeats, then the final strategy is randomly selected. When the maximum point of the EI has already been explored, a charging strategy corresponding to the UCB at a certain quantile can make the algorithm jump out of the local optimum to enhance exploration, while still taking advantage of the existing prediction. Therefore, such a scheme allows us to make better use of the existing learning results of the lifetime prediction network and to fully exploit areas with long lifetime expectation in the input space. The algorithm that combines EI and UCB is described in Algorithm 1.
```
1:  Decide exploration-leaning or exploitation-leaning based on D_y and the round
2:  for I_l ∈ I do
3:      if exploitation-leaning then
4:          σ(ŷ_l) ← 0.5 · σ(ŷ_l)
5:      end if
6:      Compute EI(ŷ_l)
7:  end for
8:  if max_{I_l ∈ I} EI(ŷ_l) > 0.001 then
9:      return arg max_{I_l ∈ I} EI(ŷ_l)
10: end if
11: for I_l ∈ I do
12:     if exploitation-leaning then
13:         σ(ŷ_l) ← 0.5 · σ(ŷ_l)
14:     end if
15:     Compute UCB(ŷ_l)
16: end for
17: I_c ← arg max_{I_l ∈ I} UCB(ŷ_l)
18: if I_c ∉ D_I then
19:     return I_c
20: end if
21: if exploitation-leaning then
22:     Randomly choose I_c s.t. UCB(ŷ_c) is between the 75% and 90% percentiles of UCB(ŷ_l), I_l ∈ I
23: end if
24: if I_c ∉ D_I then
25:     return I_c
26: end if
27: while I_c ∈ D_I do
28:     Randomly choose I_c from I
29: end while
30: return I_c
```
**Algorithm 1** Deep Bayesian optimization with combined acquisition function.
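A compact Python sketch of Algorithm 1's selection logic over a discrete candidate grid follows, reusing the `expected_improvement` and `upper_confidence_bound` helpers sketched earlier. The EI threshold, the 0.5 shrinkage of σ, and the 75-90% UCB quantile window follow the pseudocode, while `sampled_idx` (the indices of strategies already in \(D_I\)) and the tie-breaking details are our simplifying assumptions.

```python
import numpy as np

# Uses expected_improvement / upper_confidence_bound from the earlier sketch.
def select_next(mu, sigma, sampled_idx, best, exploit=False, rng=None):
    """One acquisition step of Algorithm 1 over a discrete candidate grid.

    mu, sigma:   posterior mean / std over all candidate strategies
    sampled_idx: set of candidate indices already evaluated (D_I)
    """
    if rng is None:
        rng = np.random.default_rng()
    s = 0.5 * sigma if exploit else sigma        # lines 4/13: shrink sigma
    ei = expected_improvement(mu, s, best)
    if ei.max() > 1e-3:                          # line 8: EI still informative
        return int(ei.argmax())
    ucb = upper_confidence_bound(mu, s)
    cand = int(ucb.argmax())                     # line 17
    if cand not in sampled_idx:                  # line 18: new point -> take it
        return cand
    if exploit:                                  # lines 21-23: 75-90% UCB window
        lo, hi = np.quantile(ucb, [0.75, 0.90])
        cand = int(rng.choice(np.flatnonzero((ucb >= lo) & (ucb <= hi))))
        if cand not in sampled_idx:
            return cand
    while cand in sampled_idx:                   # lines 27-29: random fallback
        cand = int(rng.integers(len(mu)))
    return cand
```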
## IV Results
In this section, we first introduce the battery simulator used for validating our methods. Then, we apply our proposed BRNN-based lifetime prediction method and compare it to the widely used GP-based method. Finally, we test the performance of the proposed BO with the mixed EI-UCB acquisition function for fast-charging protocol optimization to demonstrate the efficacy of the hybrid AF.
### _Simulation_
The simulation is implemented on PETLION [20], a porous electrode theory-based battery simulator. We split the charging current into three steps: \(I_{(1)}\), \(I_{(2)}\), and \(I_{(3)}\), where \(I_{(1)}\) and \(I_{(2)}\) are free variables and \(I_{(3)}\) is determined by the former currents. Define \(\Delta Q_{(i)},i=1,2,3\), as the change of SOC before and after the \(i\)-th charging step, and denote \(t_{(i)},i=1,2,3\), as the duration of the three steps. Then we obtain
\[t_{(i)}=\frac{\Delta Q_{(i)}}{I_{(i)}},\;i=1,2,\;\;t_{(3)}=t_{f}-t_{(1)}-t_{(2)},\;\;I_{(3)}=\frac{\Delta Q_{(3)}}{t_{(3)}} \tag{11}\]
where \(t_{f}\) is the total charging time, and \(t_{f}=800\) s is used in this work. \(I_{(1)}\), \(I_{(2)}\), and \(I_{(3)}\) are the charging currents corresponding to charging the battery from the SOC of 0% to 20%, from 20% to 40%, and from 40% to 80%, respectively. Both \(I_{(1)}\) and \(I_{(2)}\) range from 2.2 C to 6.0 C in this work, with a step length of 0.1 C. We split the ranges of \(I_{(1)}\) and \(I_{(2)}\) into fine grids and simulate the battery lifetime at each grid point. The battery lifetime over the entire 2D space across different current values is shown in Fig. 4.
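Since \(I_{(3)}\) is fully determined by \(I_{(1)}\) and \(I_{(2)}\) through Eq. (11), the mapping is a few lines of arithmetic. The helper below is a sketch; expressing \(\Delta Q\) as SOC fractions and currents as C-rates (so that C-rate × hours = SOC fraction) is an assumption for illustration.

```python
def third_step_current(i1, i2, dq=(0.2, 0.2, 0.4), t_f=800 / 3600):
    """Return (t_(1), t_(2), t_(3), I_(3)) given I_(1), I_(2) in C-rate.

    dq:  SOC increments of the three steps (0-20%, 20-40%, 40-80%)
    t_f: total charging time in hours, so C-rate * hours = SOC fraction
    """
    t1, t2 = dq[0] / i1, dq[1] / i2
    t3 = t_f - t1 - t2
    assert t3 > 0, "steps 1 and 2 already exceed the total charging time"
    return t1, t2, t3, dq[2] / t3

# e.g., third_step_current(3.6, 3.6) gives I_(3) = 3.6 C for the 800 s budget
```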
### _Surrogate Models for Lifetime Prediction_
In this section, we validate the predictive ability of the BRNN-based surrogate model. For comparison, we use both the non-recurrent network and the BRNN to predict the battery lifetime, where the inputs to both networks are identical, including charging currents and charging voltages. Fig. 5 shows the prediction result of the non-recurrent network. Compared with Fig. 4, all plots in Fig. 5 are quite different from the true plot, regardless of the feature scheme employed. This indicates the incapability of the non-recurrent network as a surrogate model for predicting battery lifetime.
In contrast, the prediction results of the BRNN-based surrogate model show much improved accuracy, as in Fig. 6. With this surrogate model, all four schemes can yield precise predictions of battery lifetime; in particular, Schemes 3 and 4 perform better than Schemes 1 and 2. Therefore, the following section employs only Scheme 3 for demonstrating the proposed method and its comparison with other methods.
### _Bayesian Optimization of Fast Charging Protocols_
We choose four random charging strategies to initialize the BO process. With these starting points, BO is then conducted to sequentially search for the optimal strategy until reaching the given maximum number of iterations, i.e., 20 rounds. Since the optimization result is affected by the initializing points, we repeatedly run the entire optimization routine 20 times and then calculate the mean result as the final result.
For comparison, we employ three variants of BO, defined respectively as the baseline, the control group, and the proposed method. First, the baseline method adopts the traditional BO with a GP as the surrogate model and the EI criterion as the AF. Second, the control group uses the BRNN as the surrogate model and the EI criterion as the AF. However, the BRNN here only uses the charging currents as the input, without the charging voltages. Finally, the proposed method employs the BRNN as the surrogate model and the mixed EI and UCB criteria as the AF, with both charging currents and charging voltages utilized as the inputs to the BRNN.
#### IV-C1 The baseline method
For this method, the prediction result of the surrogate model after 20 iterations based on the GP and the trajectory of the sample points over the BO iterations are shown in Fig. 7. We can see that the surrogate model cannot provide an accurate prediction of the battery lifetime even after 20 rounds of iterations. Meanwhile, the sample points selected by BO are largely dispersed in the parameter space without converging to the optimum. This phenomenon indicates that the traditional BO, based purely on the GP as the surrogate, cannot fully learn the parameter space and thus is not able to discover the optimum rapidly. Such an observation, on the other hand, underpins the importance of developing advanced surrogate models with more complex machine learning models and additional battery features for enhancing the battery lifetime prediction.
#### IV-C2 The control group
The control group adopts the BRNN as the surrogate model, but the charging voltages are not considered as the input to the network. The left plot of Fig. 8 demonstrates the prediction result of the surrogate model after 20 BO iterations. It is
Fig. 4: Illustration of the simulated battery lifetime across the 2D parameter space.
Fig. 5: Battery lifetime prediction by the non-recurrent network with both charging currents and voltages as inputs. Each plot (a), (b), (c) and (d) indicates Scheme 1 to 4.
Fig. 6: Battery lifetime prediction by BRNN network with both charging currents and voltages as inputs. Each plot (a), (b), (c) and (d) indicates Scheme 1 to 4, respectively.
observed that after these iterations, the surrogate model can discover the region where the global optimum resides (see the yellow region), even though the predicted performance over the entire region still deviates considerably from the true values in Fig. 4. The trajectory of sample points from the control group is shown in the right plot of Fig. 8. It reveals that the optimization algorithm often visits the edge of the parameter space, where the battery lifetime is far less than the optimal value. Although the sample points finally converge to the region of the optimum, the control group still suffers from low convergence efficiency.
#### IV-C3 The proposed method
The proposed method uses the BRNN as the surrogate model, and both charging currents and voltages are employed as the input to the network. The prediction result of the surrogate model and the trajectory of the sample points during BO iterations are shown in Fig. 9. The left plot of Fig. 9 shows that after 20 BO iterations, the proposed method can predict the battery lifetime with high accuracy, as indicated by the high similarity between the color maps in Fig. 9 and Fig. 4. In fact, for the proposed method, the surrogate model captures the true model rapidly. Fig. 10 shows the estimated posterior distribution of the battery lifetime based on our proposed method at iterations 1, 5, 10, and 15. One can see that, despite the large difference from the true distribution at the beginning, the proposed surrogate model captures the true model well after only 10 iterations. This verifies the effectiveness of our BRNN-based surrogate model with both currents and voltages as inputs. Fig. 9 (right) shows the trajectory of the sample points over BO iterations based on the proposed method. One can see that the sample points converge to the neighborhood of the optimum after a few iterations, although the starting points are far from the optimum. Thus, our method exhibits superior performance in both optimization result and computation efficiency.
#### IV-C4 Comparison of Loss Function
To quantitatively compare the prediction performance of the surrogate models, we define the following loss function:
\[L=y^{*}-\max\left(D_{y}\right) \tag{12}\]
where \(y^{*}\) denotes the global optimal battery lifetime in the simulation dataset, and \(D_{y}\) denotes the set of all the estimated data points. We run the optimization 20 times with each of the three candidate methods and then calculate the mean and variance of the loss function (12).
As in Fig. 11, the proposed method has the fastest downtrend and the lowest loss value after convergence. Regarding the robustness of these algorithms, i.e., considering the value of the mean plus the variance of the loss, we can see that all algorithms have a high loss value at the beginning of the optimization. However, the loss value of the baseline method remains high, whereas that of the proposed method decreases rapidly. In addition, the loss value of our method also clearly outperforms that of the control group, as indicated by the large difference between the blue and green curves over the last few iterations.
Fig. 8: Left: The prediction result of the surrogate model of the control group after 20 BO iterations; Right: The trajectory of the sample points over BO iterations based on the control group. The red dot is the determined next sample point.
Fig. 10: The predicted battery lifetime based on the proposed BRNN surrogate model over BO iterations. Each plot (a), (b), (c) and (d) indicates iterations 1, 5, 10, and 15, respectively.
Fig. 7: Left: The prediction result of the surrogate model after 20 iterations based on the baseline method; Right: The trajectory of the sample points over BO iterations, where the background is the simulation result from Fig. 4, and the red dot stands for the determined next sample points.
Fig. 9: Left: The prediction result of the BRNN-based surrogate model after 20 iterations; Right: The trajectory of the sample points over BO iterations based on the proposed method. The red dot is the determined next sample point.
In summary, both the quality of the discovered solution and the optimization efficiency of the proposed method surpass those of the control group and the baseline method. This observation indicates that adopting the BRNN as the surrogate model and using charging voltages as extra information can significantly improve the BO for designing optimal charging strategies.
## V Conclusion
In this article, a novel Bayesian optimization algorithm is proposed with the BRNN as the surrogate model and the combination of the EI and UCB criteria as the acquisition function for fast-charging strategy optimization. The proposed optimization method was validated using the data generated by the PETLION simulator. The results show that the BRNN-based surrogate model can predict the battery lifetime with higher accuracy than the widely used GP-based surrogate or a BRNN without the additional voltage information. Simulation results further show that the proposed method also has the fastest convergence speed for fast-charging optimization. The proposed deep Bayesian optimization approach embodies the potential of using data-driven methods to handle complicated problems of battery parameter optimization, such as the design of next-generation electrode and electrolyte chemistries [33].
|
2306.02376 | Towards Deep Attention in Graph Neural Networks: Problems and Remedies | Graph neural networks (GNNs) learn the representation of graph-structured
data, and their expressiveness can be further enhanced by inferring node
relations for propagation. Attention-based GNNs infer neighbor importance to
manipulate the weight of its propagation. Despite their popularity, the
discussion on deep graph attention and its unique challenges has been limited.
In this work, we investigate some problematic phenomena related to deep graph
attention, including vulnerability to over-smoothed features and smooth
cumulative attention. Through theoretical and empirical analyses, we show that
various attention-based GNNs suffer from these problems. Motivated by our
findings, we propose AERO-GNN, a novel GNN architecture designed for deep graph
attention. AERO-GNN provably mitigates the proposed problems of deep graph
attention, which is further empirically demonstrated with (a) its adaptive and
less smooth attention functions and (b) higher performance at deep layers (up
to 64). On 9 out of 12 node classification benchmarks, AERO-GNN outperforms the
baseline GNNs, highlighting the advantages of deep graph attention. Our code is
available at https://github.com/syleeheal/AERO-GNN. | Soo Yong Lee, Fanchen Bu, Jaemin Yoo, Kijung Shin | 2023-06-04T15:19:44Z | http://arxiv.org/abs/2306.02376v1 | # Towards Deep Attention in Graph Neural Networks: Problems and Remedies
###### Abstract
Graph neural networks (GNNs) learn the representation of graph-structured data, and their expressiveness can be further enhanced by inferring node relations for propagation. Attention-based GNNs infer neighbor importance to manipulate the weight of its propagation. Despite their popularity, the discussion on deep graph attention and its unique challenges has been limited. In this work, we investigate some problematic phenomena related to deep graph attention, including vulnerability to over-smoothed features and smooth cumulative attention. Through theoretical and empirical analyses, we show that various attention-based GNNs suffer from these problems. Motivated by our findings, we propose AERO-GNN, a novel GNN architecture designed for deep graph attention. AERO-GNN provably mitigates the proposed problems of deep graph attention, which is further empirically demonstrated with (**a**) its adaptive and less smooth attention functions and (**b**) higher performance at deep layers (up to 64). On 9 out of 12 node classification benchmarks, AERO-GNN outperforms the baseline GNNs, highlighting the advantages of deep graph attention. Our code is available at [https://github.com/syleeheal/AERO-GNN](https://github.com/syleeheal/AERO-GNN).
Machine Learning, Graph Neural Networks, Graph Attention
## 1 Introduction
Graph neural networks (GNNs) are a class of neural networks for representation learning on graph-structured data. Recently, GNNs have been successfully applied to a wide range of graph-related tasks, including social influence prediction (Qiu et al., 2018), traffic forecast (Derrow-Pinion et al., 2021), physical system modeling (Sanchez-Gonzalez et al., 2020), product recommendation (Wu et al., 2022), and drug discovery (Stokes et al., 2020).
GNNs widely adopt message-passing frameworks, composed of two main pillars: feature transformation and propagation (a.k.a. neighborhood aggregation) (Wu et al., 2019; Gilmer et al., 2017). Feature transformation updates node features from previous layers' features. Propagation involves each node passing its own features to its neighbors, and for each node, the passed-down neighbor features are aggregated to update its own node features. In this framework, a propagation layer determines the propagation weight for each adjacent node pair, and each additional layer allows nodes to propagate to one more hop of neighbors.
Attention-based GNNs aim to learn to propagate by inferring the relational importance between node pairs. Among many, GAT and its variants (Velickovic et al., 2018; Wang et al., 2019; Brody et al., 2021; Shi et al., 2021; Kim and Oh, 2021; Bo et al., 2021; Wang et al., 2021; Yang et al., 2021; He et al., 2021) learn edge attention. The _edge-attention_ models infer the importance, or weight, of each neighbor w.r.t. each source node by applying an attention mechanism between adjacent node pairs. During propagation, each source node aggregates features from its neighbors based on the inferred importance.
Another class of GNNs, which we call _hop-attention_ (or _layer-attention_) models, learns the relative importance of each hop (Liu et al., 2020; Chien et al., 2021; Zhang et al., 2022; Chanpuriya and Musco, 2022). Hop-attention models apply attention coefficients at every propagation layer to express the relative importance of a given hop in determining the final node features. Thereby, hop-attention models learn which hops, among multi-hop neighbors, each source node should attend to during propagation. Intuitively, edge-attention models learn importance _within_ each hop, and hop-attention models learn importance _of_ each hop.
Can the existing attention-based GNNs learn expressive attention over deep layers? Prior research does not provide a clear answer. While many studies report the over-smoothing problem of _node features_ at deep layers (Li et al., 2018;
Chen et al., 2020; Liu et al., 2020; Chen et al., 2020), they do not address how it may relate to _graph attention_. Some works focus on designing expressive graph attention layers (Brody et al., 2021; Shi et al., 2021; Kim and Oh, 2021; Bo et al., 2021; Yang et al., 2021; He et al., 2021), but their scope is limited to shallow model depth. Even for attention-based GNNs that generalize their performance over deep layers (Liu et al., 2020; Wang et al., 2021; Chien et al., 2021; Zhang et al., 2022), they do not explicitly discuss properties related to deep attention. Very recently, Zhao et al. (2023) explored the relationship between model depth and attention function for graph transformers. Their analysis, however, was confined to the relationship between model depth and transformer-style attention to substructures. In short, the theoretical underpinnings to understand deep graph attention are under-explored.
In this work, we investigate some problematic phenomena related to deep graph attention. Through both theoretical and empirical analyses, we show that the _vulnerability to over-smoothed features_ and _smooth cumulative attention_ prevent graph attention from remaining expressive over deep layers. Specifically, several representative attention-based GNNs, including GATv2 (Brody et al., 2021), FAGCN (Bo et al., 2021), GPRGNN (Chien et al., 2021), and DAGNN (Liu et al., 2020), suffer from these problems.
Motivated by our analyses, we propose a novel graph attention architecture, **A**ttentive d**E**ep p**RO**pagation-GNN (AERO-GNN). We theoretically demonstrate that AERO-GNN can mitigate the stated problems, which is further elaborated empirically by AERO-GNN's (**a**) adaptive and less smooth attention functions and (**b**) higher performance at deep layers (up to 64). On 9 out of 12 node classification benchmarks, including both homophilic and heterophilic graphs, AERO-GNN outperforms all the baselines, highlighting the advantages of deep graph attention.
In summary, our central contributions are two-fold:
1. **Theoretical Findings.** We formulate two theoretical limitations of deep graph attention. AERO-GNN provably mitigates the problems, whereas the representative attention-based GNNs inevitably suffer from them.
2. **Empirical Findings.** AERO-GNN shows superior performance in node classification benchmarks. Also, compared to the representative attention-based GNNs, AERO-GNN learns more adaptive and less smooth attention functions at deep layers.
## 2 Preliminaries
**Graphs.** Let \(G=(V,E)\) be a graph with node set \(V\) and edge set \(E\subseteq\binom{V}{2}\).1 Let \(n=|V|\) and \(m=|E|\) denote the number of nodes and edges, resp. WLOG, we assume \(V=[n]=\{1,2,\dots,n\}\). Let \(A=A(G)\in\{0,1\}^{n\times n}\) denote the adjacency matrix of \(G\), where \(A_{ij}=1\) iff \((i,j)\in E\), and we use \(D=\operatorname{diag}(d_{1},d_{2},\dots,d_{n})\) to denote the degree matrix of \(G\), where \(d_{i}\) is the degree of node \(i\). Let \(X\in\mathbb{R}^{n\times d_{x}}\) denote the initial node feature matrix, where each node \(i\in V\) has node feature \(X_{i}\) of dimension \(d_{x}\). Footnote 1: We assume undirected, unweighted graphs, but our theoretical results can be easily extended to directed and/or weighted graphs.
**Message-Passing GNNs.** Feature transformation and propagation are the two main building blocks of message-passing GNNs (Wu et al., 2019; Gilmer et al., 2017). Feature transformation updates the features of each node based on the node's features of previous layers. A feature transformation layer can be expressed as: \(H^{(k)}=\sigma(H^{(k-1)}W^{(k)}),\forall 1\leq k\leq k_{max}\),2 where \(k\) denotes the index of each layer with \(k_{max}\) being the total number of layers, \(H^{(k)}\in\mathbb{R}^{n\times d_{n}}\) is the _hidden node feature matrix_ at layer \(k\), \(W^{(k)}\) is the weight matrix at layer \(k\), and \(\sigma\) is an activation function.
Footnote 2: Usually, \(H^{(0)}\) is a function on the initial features, i.e., \(X\).
On the other hand, a propagation layer passes node features to its neighbors, and each node's features are updated as an aggregation of the features. Specifically, a propagation layer can be expressed as follows: \(H^{(k)}=\tilde{A}H^{(k-1)},\forall 1\leq k\leq k_{max}\), where \(\tilde{A}=(D+I)^{-1/2}(A+I)(D+I)^{-1/2}\) is the symmetrically normalized adjacency matrix with self-loops.
Many GNNs fuse the two operations in one layer, such that \(H^{(k)}=\sigma(\tilde{A}H^{(k-1)}W^{(k)})\)(Welling and Kipf, 2016; Velickovic et al., 2018; Chen et al., 2020). Some others use only propagation layers to update \(H^{(k)}\), after obtaining \(H^{(0)}\) with feature transformation (Gasteiger et al., 2018; Liu et al., 2020; Chien et al., 2021). Note that \(\tilde{A}\) determines the magnitude in which each node's features propagate to its neighbors, and the number of propagation layers determines the number of hops to propagate to.
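For concreteness, a dense NumPy sketch of \(\tilde{A}\) and one propagation layer follows; real implementations use sparse operations, and the function names are our own.

```python
import numpy as np

def normalized_adjacency(adj):
    """A~ = (D + I)^{-1/2} (A + I) (D + I)^{-1/2} for a dense 0/1 adjacency."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def propagate(adj, h):
    """One propagation layer: H^{(k)} = A~ H^{(k-1)}."""
    return normalized_adjacency(adj) @ h
```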
**Edge Attention.** Edge-attention GNNs (e.g., GAT and its variants) learn an edge-attention matrix \(\mathcal{A}^{(k)}=(\alpha_{ij}^{(k)})\in\mathbb{R}^{n\times n}\) at each propagation layer (i.e., hop) \(k\),3 where each edge attention coefficient \(\alpha_{ij}^{(k)}\) can be seen as the importance of node \(j\) w.r.t node \(i\) at layer \(k\). At each propagation layer, the edge attention coefficients are used to weigh propagation between each adjacent node pair.
Footnote 3: Single-head attention is assumed for simplicity.
**Hop Attention.** Hop-attention GNNs learn a hop-attention matrix \(\Gamma^{(k)}=\operatorname{diag}(\gamma_{1}^{(k)},\gamma_{2}^{(k)},\dots, \gamma_{n}^{(k)})\in\mathbb{R}^{n\times n}\) at each propagation layer \(k\). With hop attention, different importance \(\gamma_{i}^{(k)}\) can be assigned at different layers \(k\) for every node \(i\). Typically,
\[Z^{(k)}=\sum_{\ell=0}^{k}\Gamma^{(\ell)}H^{(\ell)},\forall 1\leq k\leq k_{max}, \tag{1}\] \[=\sum_{\ell=0}^{k}\Gamma^{(\ell)}\tilde{A}^{\ell}H^{(0)},\forall 1\leq k\leq k_{max}, \tag{2}\]
where the sum of the hidden feature matrices \(H^{(\ell)}\)'s up to layer \(k\), each weighted by the hop attention matrix \(\Gamma^{(\ell)}\), expresses the _layer-aggregated_ node feature matrix \(Z^{(k)}\). Equation (2) highlights that the hop attention matrix \(\Gamma^{(\ell)}\) can be seen as learning weights of \(\ell\)-hop neighbors expressed in \(\tilde{A}^{\ell}\) (A similar analysis can be found in Dong et al. (2021)).
## 3 Theoretical Analysis on Deep Attention
Can attention functions of the representative attention-based GNNs remain expressive over deeper layers? Prior research has suggested possible reasons for the performance degradation of deep GNNs, including the over-smoothing of node features, over-squashing (Alon and Yahav, 2021), and over-correlation (Jin et al., 2022). However, discussion dedicated to theoretical limitations of _deep graph attention_ has been little. In this section, we formulate two theoretical limitations of the representative attention-based GNNs concerning their ability to remain expressive over deeper layers.
### A Systematic Understanding of Graph Attention
A systematic understanding of graph attention, integrating edge and hop attention, would allow us to discuss various attention-based GNNs and their theoretical limitations within the same framework.
**Cumulative Attention.** We have observed that edge and hop attention achieve the same goal (i.e., inferring the relational importance between node pairs), but from two different perspectives. Consequently, they both learn weights in which each node's features propagate to its neighbors. This observation motivates us to integrate edge and hop attention under the same umbrella. To this end, we propose a concept of _cumulative attention matrix_, denoted by \(T^{(k)}\), which intuitively represents attention between all node pairs within \(k\) hops (or equivalently, at layer \(k\)) that considers both edge and hop attentions.
Formally, given any \(k_{max}\), for each \(0\leq k\leq k_{max}\),
\[T^{(k)}=\Gamma^{(k)}\prod_{\ell=k}^{1}\mathcal{A}^{(\ell)}, \tag{3}\]
where \(T^{(0)}=\Gamma^{(0)}\).4
Footnote 4: For FAGCN, \(T^{(k)}=\Gamma^{(k)}\prod_{\ell=k_{max}}^{k_{max}-k+1}\mathcal{A}^{(\ell)}\). However, in our theoretical analyses, it is equivalent to Eq. (3) when \(k_{max}\) is fixed.
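A direct dense sketch of Eq. (3) follows, assuming the per-layer edge-attention matrices and hop-attention vectors are given; it uses the default product order (the reversed order for FAGCN in the footnote is omitted), and all names are ours.

```python
import numpy as np

def cumulative_attention(edge_atts, hop_atts):
    """Compute [T^{(0)}, ..., T^{(k)}] via Eq. (3).

    edge_atts: list [A^{(1)}, ..., A^{(k)}] of (n, n) attention matrices
    hop_atts:  list [gamma^{(0)}, ..., gamma^{(k)}] of (n,) hop-attention vectors
    """
    n = len(hop_atts[0])
    prod = np.eye(n)                      # running product A^{(k)} ... A^{(1)}
    ts = [np.diag(hop_atts[0])]           # T^{(0)} = Gamma^{(0)}
    for k, a in enumerate(edge_atts, start=1):
        prod = a @ prod                   # newest layer multiplies on the left
        ts.append(hop_atts[k][:, None] * prod)  # Gamma^{(k)} is diagonal
    return ts
```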
**Rephrased Propagation w.r.t \(T^{(k)}\)'s.** Here, we discuss various representative attention-based GNNs (spec., GATv2, FAGCN, GPRGNN, and DAGNN). We detail their attention functions in Table 1. Those models are used throughout our theoretical analyses and empirical evaluation.
GATv2 and FAGCN learn edge attention \(\alpha_{ij}^{(k)}\). GATv2 uses hidden features \((H_{i}\|H_{j})\), whereas FAGCN uses layer-aggregated features \((Z_{i}\|Z_{j})\), in computing each edge attention coefficient \(\alpha_{ij}^{(k)}\). Notably, their \(\alpha_{ij}^{(k)}\) have different bounds, such that GATv2's \(\alpha_{ij}^{(k)}\in(0,1)\) and FAGCN's \(\alpha_{ij}^{(k)}\in(-1,1)\). They do not learn hop attention coefficients, which can be expressed as a constant (1 for GATv2 and \(c_{\gamma}\) for FAGCN).
On the other hand, GPRGNN and DAGNN learn hop attention \(\gamma_{i}^{(k)}\). GPRGNN's hop attention is not explicitly bounded and, thus, can learn negative hop attention \(\gamma_{i}^{(k)}\). While \(\gamma_{i}^{(k)}\in(0,1)\) for DAGNN, it learns _node-adaptive_ hop attention. Neither GPRGNN nor DAGNN learns nontrivial edge attention, and their edge attention coefficients are equivalently degree-normalized constants, i.e., \(\alpha_{ij}^{(k)}=1/\sqrt{d_{i}d_{j}},\forall i,j,k\).
The discussed GNNs consist of a representative and diverse set of attention functions, yet we can succinctly rephrase them w.r.t \(T^{(k)}\) (see Table 2).5
Footnote 5: FAGCN and GATv2’s propagation formulae are rewritten from their original form. Refer to Appendix B for the details.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
Model & Pre-normalized Edge Attention & Normalized Edge Attention & Hop Attention \\ \hline
**GATv2** & \(\hat{\alpha}_{ij}^{(k)}=\exp\big((W_{edge}^{(k)})^{\top}\sigma(H_{i}^{(k-1)}\|H_{j}^{(k-1)})\big)\) & \(\alpha_{ij}^{(k)}=\hat{\alpha}_{ij}^{(k)}/\sum_{j^{\prime}\in\mathcal{N}(i)}\hat{\alpha}_{ij^{\prime}}^{(k)}\) & \(\gamma_{i}^{(k)}=1\) \\
**FAGCN** & \(\hat{\alpha}_{ij}^{(k)}=\tanh\big((W_{edge}^{(k)})^{\top}(Z_{i}^{(k-1)}\|Z_{j}^{(k-1)})\big)\) & \(\alpha_{ij}^{(k)}=\hat{\alpha}_{ij}^{(k)}/\sqrt{d_{i}d_{j}}\) & \(\gamma_{i}^{(k)}=c_{\gamma},\forall k,i\) (*) \\
**GPRGNN** & \(\hat{\alpha}_{ij}^{(k)}=1\) & \(\alpha_{ij}^{(k)}=1/\sqrt{d_{i}d_{j}}\) & \(\gamma_{i}^{(k)}=\gamma^{(k)},\forall i\) (**) \\
**DAGNN** & \(\hat{\alpha}_{ij}^{(k)}=1\) & \(\alpha_{ij}^{(k)}=1/\sqrt{d_{i}d_{j}}\) & \(\gamma_{i}^{(k)}=\mathrm{sigmoid}(W_{hop}^{\top}H_{i}^{(k)})\) (***) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Attention Functions of Graph Attention Models
\begin{table}
\begin{tabular}{c c c} \hline \hline
Model & \(k\)-Layer Propagation & Rephrased Propagation w.r.t \(T^{(k)}\)'s \\ \hline
**GATv2** & \(H^{(k)}=(\prod_{\ell=k}^{1}\mathcal{A}^{(\ell)})X(\prod_{\ell=1}^{k}W^{(\ell)})\) & \(H^{(k)}=T^{(k)}X\prod_{\ell=1}^{k}W^{(\ell)}\) \\
**FAGCN** & \(Z^{(k)}=(\Gamma+\Gamma\mathcal{A}^{(k)}+\Gamma\mathcal{A}^{(k)}\mathcal{A}^{(k-1)}+\cdots+\Gamma\prod_{\ell=k}^{1}\mathcal{A}^{(\ell)})H^{(0)}\) (*) & \(Z^{(k)}=\sum_{\ell=0}^{k}T^{(\ell)}H^{(0)}\) \\
**GPRGNN/DAGNN** & \(Z^{(k)}=(\Gamma^{(0)}+\Gamma^{(1)}\mathcal{A}+\Gamma^{(2)}\mathcal{A}^{2}+\cdots+\Gamma^{(k)}\mathcal{A}^{k})H^{(0)}\) (**) & \(Z^{(k)}=\sum_{\ell=0}^{k}T^{(\ell)}H^{(0)}\) \\ \hline \hline
\end{tabular}
(*) recall that for FAGCN, \(\Gamma^{(k)}\) is identical for all \(k\), and we use a simplified notation \(\Gamma\) to denote all \(\Gamma^{(k)}\)'s.
(**) recall that for GPRGNN and DAGNN, \(\mathcal{A}^{(k)}\) is identical for all \(k\), and we use \(\mathcal{A}\) to denote all \(\mathcal{A}^{(k)}\)'s (here, \(\mathcal{A}^{k}\) is the \(k\)-th power of \(\mathcal{A}\)).
\end{table}
Table 2: Propagation of Graph Attention Models
### Theoretical Problems of Deep Graph Attention
Two assumptions are used throughout our theoretical analyses, and proper normalization and preprocessing may always satisfy them in practice.
**Assumption 1**.: The graph \(G\) is connected and non-bipartite, and the initial node features in \(X\) are pairwise distinct (i.e., \(X_{i}\neq X_{j},\forall i\neq j\in V\)) and finite (i.e., \(\left\|X_{i}\right\|_{F}<\infty\), \(\forall i\in V\)).
**Assumption 2**.: There exists a global constant \(C_{param}>0\) such that, for each considered GNN model, each entry of each parameter matrix or vector (e.g., \(W^{(k)}\)'s) and each entry of each intermediate variable (e.g., \(H^{(k)}\)'s and \(Z^{(k)}\)'s) is bounded in \([-C_{param},C_{param}]\).
**Problem 1: Vulnerability to Over-Smoothing.** In the first problem, we establish a connection between attention functions6 and the over-smoothing of node features. Specifically, we examine the _vulnerability_ and _resistance_ of attention functions to over-smoothed node features. Informally, if the pre-normalized edge attention coefficients \(\tilde{\alpha}_{ij}^{(k)}\)'s (resp., hop attention coefficients \(\gamma_{i}^{(k)}\)'s) are always identical for the node pairs (resp., nodes) with identical (over-smoothed) hidden features, the attention function is vulnerable to over-smoothing.7 For simplicity, we use \(f_{att}=f_{att}(\theta)\) to denote \((\tilde{\mathcal{A}},\Gamma)\), where \(\theta\) denotes GNN parameters.
Footnote 6: We see attention matrices as functions here.
Footnote 7: We use such a definition using the _equality_ of node features for simplicity. See Appendix A.4 for a relaxed version of Definition 1.
**Definition 1**.: Given \(G=(V,E)\) and initial node features \(X\) satisfying Assumption 1, suppose that \(H_{i}^{(k^{\prime})}=H_{i^{\prime}}^{(k^{\prime})}\) and \(H_{j}^{(k^{\prime})}=H_{j^{\prime}}^{(k^{\prime})}\) holds (due to the over-smoothing of node features) for some \((i,j),(i^{\prime},j^{\prime})\in E\) and \(k^{\prime}\geq 1\). We say that \(f_{att}\) is _vulnerable to over-smoothing_ (V2OS), if \(\forall\theta\), \(\tilde{\alpha}_{ij}^{(k^{\prime}+1)}=\tilde{\alpha}_{i^{\prime}j^{\prime}}^{ (k^{\prime}+1)}\) and \(\gamma_{i}^{(k^{\prime})}=\gamma_{i^{\prime}}^{(k^{\prime})}\); that \(f_{att}\) is _weakly resistant to over-smoothing_ (WR2OS), if \(\exists\theta\), \(\tilde{\alpha}_{ij}^{(k^{\prime}+1)}\neq\tilde{\alpha}_{i^{\prime}j^{\prime}}^ {(k^{\prime}+1)}\) or \(\gamma_{i}^{(k^{\prime})}\neq\gamma_{i^{\prime}}^{(k^{\prime})}\); and that \(f_{att}\) is _strongly resistant to over-smoothing_ (SR2OS), if \(\exists\theta\), \(\tilde{\alpha}_{ij}^{(k^{\prime}+1)}\neq\tilde{\alpha}_{i^{\prime}j^{\prime}}^ {(k^{\prime}+1)}\) and \(\gamma_{i}^{(k^{\prime})}\neq\gamma_{i^{\prime}}^{(k^{\prime})}\).
_Remark 1_.: Definition 1 is practically significant. A GNN with WR2OS/SR2OS \(f_{att}\) can possibly remain expressive, even when some node features in intermediate layers begin to over-smooth. An expressive \(f_{att}\) can mitigate widely known problems of deep GNNs, including over-smoothing and over-squashing.
**Theorem 1**.: _For GATv2, GPRGNN, and DAGNN, \(f_{att}\) is V2OS; for FAGCN, \(f_{att}\) is WR2OS (but not SR2OS)._
Proof.: All the proofs are in Appendix A.
**Problem 2: Smooth Cumulative Attention.** If Problem 1 does not exist, such that the attention coefficients are non-trivial for any model depth, can the attention functions remain expressive over deeper layers? We argue not. For the representative attention-based GNNs, we show that the cumulative attention matrices \(T^{(k)}\)'s become over-smoothed over increasing layers, such that different nodes have attention close to each other, up to a positive scaling factor. _This is critically contrary to the goal of attention_. Formally, we define a smoothness score \(S:\mathbb{R}^{n\times n}\rightarrow\mathbb{R}_{\geq 0}\) by
\[S(T)=\sum\nolimits_{(i,j)\in\binom{[n]}{2}}\left\|\frac{T_{i}}{\left\|T_{i}\right\|_{1}}-\frac{T_{j}}{\left\|T_{j}\right\|_{1}}\right\|_{1}\Big/\binom{n}{2}. \tag{4}\]
\(S(\cdot)\) is bounded, and a smaller \(S(T^{(k)})\) indicates that \(T^{(k)}\) is smoother. Specifically, when \(S(T^{(k)})=0\), all the rows of \(T^{(k)}\) become equivalent up to a positive scaling factor. See Appendix A for more theoretical properties of \(S(\cdot)\).
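For concreteness, a small NumPy sketch of Eq. (4) follows; it assumes every row of \(T\) has a nonzero \(L_{1}\) norm, and the function name is ours.

```python
import numpy as np

def smoothness_score(t):
    """S(T) of Eq. (4): average L1 distance between L1-normalized rows of T."""
    rows = t / np.abs(t).sum(axis=1, keepdims=True)   # row-wise L1 normalization
    n = rows.shape[0]
    total = sum(np.abs(rows[i] - rows[j]).sum()
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)                  # divide by binom(n, 2)
```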
**Theorem 2**.: _Given \(G=(V,E)\) and \(X\) with Assumptions 1 and 2 satisfied, for GATv2, GPRGNN, and DAGNN, \(\lim_{k\rightarrow\infty}S(T^{(k)})=0\); for FAGCN, \(\lim_{k\rightarrow\infty}T_{ij}^{(k)}=0,\forall i,j\)._
By Theorem 2, for the aforementioned attention-based GNNs, \(S(T^{(k)})\) converges to 0 and loses expressiveness as \(k\) goes to infinity. Importantly, in our proofs, we show that problem 2 occurs _without assuming problem 1_.
Below, we provide some insights into the reason why the GNNs without _node-adaptive hop attention_ or _negative attention_ may suffer from the problem stated above.
**Definition 2**.: Given \(G=(V,E)\), and \(i,j\in V\), the set of _attention paths_ between \(i\) and \(j\) at layer \(k\), denoted as \(\mathcal{P}^{(k)}(i,j)\), is defined as \(\{(i=v_{0},v_{1},v_{2},\ldots,v_{k-1},v_{k}=j):(v_{l-1},v_{l})\in E,\forall 1\leq l\leq k\}\), i.e., the set of all the paths of length \(k\) from \(i\) to \(j\) in \(G\).9 The _degree of intersection_ between two paths \(\mathbf{p}_{v}=(v_{0},v_{1},v_{2},\ldots,v_{k-1},v_{k})\) and \(\mathbf{p}_{u}=(u_{0},u_{1},u_{2},\ldots,u_{k-1},u_{k})\), denoted by \(\mathrm{doi}(\mathbf{p}_{v},\mathbf{p}_{u})\), is defined as \(|\{t:1\leq t\leq k,v_{t-1}=u_{t-1},v_{t}=u_{t}\}|\).
Footnote 9: Unlike _simple_ paths, repeated nodes are allowed.
_Remark 2_.: The intuition of Definition 2 is that we can decompose each entry \(T_{ij}^{(k)}\) with respect to attention paths from \(j\) to \(i\). Specifically, \(T_{ij}^{(k)}=\gamma_{i}^{(k)}\sum\nolimits_{(j=v_{0},v_{1},\ldots,v_{k}=i) \in\mathcal{P}^{(k)}(j,i)}\prod_{l=1}^{k}\alpha_{v_{l-1},v_{l}}^{(l)},\forall i,j,k\).
**Lemma 1**.: _Given \(G=(V,E)\) and initial node features satisfying Assumption 1, \(i,j,x\in V\), and any \(N_{1},N_{2}>0\), there exists \(K\) such that \(|\{(\mathbf{p}_{i},\mathbf{p}_{j}):\mathbf{p}_{i}\in\mathcal{P}^{(k)}(i,x), \mathbf{p}_{j}\in\mathcal{P}^{(k)}(j,x),\mathrm{doi}(\mathbf{p}_{i},\mathbf{p} _{j})\geq N_{1}\}|\geq N_{2},\forall k\geq K\)._
Proof.: All the proofs are in Appendix A.
By Lemma 1 and Remark 2, the constituent terms of \(T_{ix}\) and \(T_{jx}\) increasingly intersect at deeper layers for any \(i,j,x\).
In other words, bounds of \(f_{att}\) and growing degree of intersection in \(T^{(k)}\) together can cause problem 2, irrespective of \(f_{att}\)'s expressive power at each layer. This partially explains why some GNNs without node-adaptive hop attention or negative attention end up with a smooth cumulative attention \(T^{(k)}\). More details can be found in the proof of Theorem 2 in Appendix A.
While we focused on GATv2, which is a representative variant of GAT, in our theoretical analysis, our analysis can be easily extended to other GAT variants, such as SuperGAT (Kim and Oh, 2021) with a self-supervised loss term, and CATs (He et al., 2021) further using structural features. See Appendix A for more discussions on those variants.
## 4 Proposed Method: AERO-GNN
In this section, we introduce the proposed model, Attentive dEep pROpagation-GNN (AERO-GNN). We present the model overview and discuss its attention functions. Finally, we show how AERO-GNN provably addresses the theoretical limitations, with a discussion on model complexity (spec., number of parameters).
### Model Overview
The feature transformation and propagation of AERO-GNN consist of:
\[H^{(k)}=\begin{cases}\mathrm{MLP}(X),&\text{if }k=0,\\ \mathcal{A}^{(k)}H^{(k-1)},&\text{if }1\leq k\leq k_{max},\end{cases} \tag{5}\] \[Z^{(k)}=\sum_{\ell=0}^{k}\Gamma^{(\ell)}H^{(\ell)},\forall 1\leq k\leq k_{max}, \tag{6}\] \[Z^{*}=\sigma(Z^{(k_{max})})W^{*}, \tag{7}\]
where \(\mathrm{MLP}\) is a multi-layer perceptron for the feature transformation, \(k_{max}\) is the total number of layers, \(W^{*}\) is a learnable weight matrix, \(Z^{*}\) is the final output node features, and \(\sigma=\mathrm{ELU}\)(Clevert et al., 2016) is the activation function.
AERO-GNN computes the edge attention matrix \(\mathcal{A}^{(k)}\) and the hop attention matrix \(\Gamma^{(k)}\) with learnable parameters. The propagation of AERO-GNN can also be written in terms of the cumulative attention matrices \(T^{(k)}\)'s in Eq. (3), like the other attention-based GNNs (see Table 2). Specifically,
\[Z^{(k)}=\sum_{\ell=0}^{k}T^{(\ell)}H^{(0)},\forall 1\leq k\leq k_{max}. \tag{8}\]
### Using Layer-Aggregated Features
We design AERO-GNN to use the layer-aggregated features \(Z^{(k)}\) in computing both the edge attention \(\mathcal{A}^{(k)}\) and the hop attention \(\Gamma^{(k)}\) at each layer \(k\) to make it resistant to over-smoothing. In Theorem 1, FAGCN is the only model that is WR2OS (_weakly resistant to over-smoothing_), since it uses \(Z^{(k)}\)'s for computing its edge attention (see Table 1). Even if the node features become over-smoothed at deep layers, a GNN using layer-aggregated features \(Z^{(k)}\) can adjust the edge (and hop) attention coefficients based on the cumulative information over multiple layers to allow attention functions to be non-trivial.
However, the magnitude of \(Z^{(k)}\) may increase as \(k\) increases, as shown in Eq. (6), which may cause instability. We, thus, utilize weight-decay (Chen et al., 2020) to re-scale \(Z\). Formally, the re-scaling is done by
\[\begin{cases}\lambda_{k}&=\log(\frac{\lambda}{k}+1+\epsilon)\\ \tilde{Z}^{(k)}&=\lambda_{k}Z^{(k)}\end{cases}\]
where \(\lambda>0\) is a hyperparameter, and \(\epsilon>0\) is a small number (we use \(\epsilon=10^{-6}\)) ensuring that \(\lambda_{k}\) does not converge to \(0\). We use \(\tilde{Z}^{(k)}\) in the computation of both attention functions.
### Attention Functions
**Edge Attention.** At every layer \(1\leq k\leq k_{max}\), we compute the pre-normalized edge attention \(\tilde{\mathcal{A}}^{(k)}=(\tilde{\alpha}^{(k)}_{ij})\) and the (normalized) edge attention \(\mathcal{A}^{(k)}=(\alpha^{(k)}_{ij})\) as follows:
\[\begin{cases}\tilde{\alpha}^{(k)}_{ij}&=\mathrm{softplus}\big((W^{(k)}_{edge})^{\top}\sigma(\tilde{Z}^{(k-1)}_{i}\|\tilde{Z}^{(k-1)}_{j})\big)\\ \alpha^{(k)}_{ij}&=\tilde{\alpha}^{(k)}_{ij}\Big/\sqrt{\sum_{j^{\prime}\in N(i)}\tilde{\alpha}^{(k)}_{ij^{\prime}}\sum_{i^{\prime}\in N(j)}\tilde{\alpha}^{(k)}_{i^{\prime}j}}\end{cases}\]
where \(W^{(k)}_{edge}\) is a learnable weight vector, \(\tilde{Z}^{(k-1)}_{i}\) is the \(i\)-th row of \(\tilde{Z}^{(k-1)}\), and \(\mathcal{A}^{(k)}\) is symmetrically normalized.
In AERO-GNN, we use the symmetric normalization, instead of the row-wise normalization used in GATv2, due to its theoretical and empirical superiority (Wang et al., 2018; He et al., 2020), especially w.r.t. training stability. Softplus (Zheng et al., 2015) is used to map edge attention to positive values, with two primary advantages over the two alternative mapping functions, \(\exp\) and \(\tanh\). Compared to \(\exp\) used in GATv2, softplus has higher computational stability (Kleshchevnikov, 2020; Nbro, 2020). Note that the bound of \(\tanh\), in addition to degree normalization, essentially makes \(\lim_{k\to\infty}T^{(k)}_{ij}=0\), \(\forall i,j\), for FAGCN (see Theorem 2).
**Hop Attention.** At each layer \(0\leq k\leq k_{max}\), we use \(H^{(k)}\) and \(\tilde{Z}^{(k-1)}\) (for \(k\geq 1\)) to compute the hop attention \(\Gamma^{(k)}\):
\[\begin{cases}\gamma^{(0)}_{i}&=(W^{(0)}_{hop})^{\top}\sigma(H^{(0)}_{i})+b^{(0)} _{hop}\\ \gamma^{(k)}_{i}&=(W^{(k)}_{hop})^{\top}\sigma(H^{(k)}_{i}\|\tilde{Z}^{(k-1)}_{ i})+b^{(k)}_{hop},\forall 1\leq k\leq k_{max}\end{cases}\]
where \(W^{(k)}_{hop}\) is a learnable weight vector and \(b^{(k)}_{hop}\) is a learnable bias scalar.
Motivated by Theorem 2, we use node-adaptive hop attention to alleviate the problem of "intersecting attention paths" stated in Lemma 1, and we allow both positive and negative hop attention coefficients to prevent \(S(T^{(k)})\) from converging to zero as \(k\) goes to infinity (note that \(\gamma^{(k)}_{i}\)'s of the same sign within the same layer \(k\) cannot change \(S(T^{(k)})\)).
Importantly, \(b_{hop}^{(k)}\) should be initialized as 1s. This contributes significantly to model training stability by biasing the hop attention \(\gamma_{i}^{(k)}\)'s to be initialized positive.
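A corresponding sketch for the hop attention of layers \(k\geq 1\); the `w_hop` and `b_hop` names are ours:

```python
import torch
import torch.nn.functional as F

def hop_attention(H_k, Z_prev, w_hop, b_hop):
    # gamma_i^(k) = (W_hop^(k))^T sigma(H_i^(k) || Z~_i^(k-1)) + b_hop^(k),
    # for k >= 1; b_hop should be initialized to 1 for training stability.
    feat = F.elu(torch.cat([H_k, Z_prev], dim=-1))   # (N, 2d)
    return (feat @ w_hop + b_hop).unsqueeze(-1)      # (N, 1), node-adaptive
```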
### Theoretical Merits
We summarize below the theoretical merits of AERO-GNN. First, AERO-GNN provably mitigates the problems of deep graph attention stated in Section 3 (Theorems 1 and 2).
**Theorem 3**.: _For AERO-GNN, \(f_{att}\) is SR2OS (see Def. 1)._
**Theorem 4**.: _Given \(G=(V,E)\) and \(X\) with Assumptions 1 and 2 satisfied, for AERO-GNN, \(\forall T^{(k)}\) (spec., even if \(S(T^{(k)})=0\)), \(\exists\theta\) such that \(S(T^{(k+1)})>0\).10_
Footnote 10: Recall that \(\theta\) represents all the parameters in the GNN model.
The number of parameters used by AERO-GNN is comparable to, or even smaller than, those of the edge-attention GNNs. We ignore the parameters used in computing the first hidden features (\(H^{(0)}\)) and those in the output layer, since they are used in all GNN models. Thus, we only consider the number of _additional parameters_. Analysis for more models can be found in Appendix E.
**Theorem 5**.: _Given the dimension \(d_{H}\) of hidden node features and the number of layers \(k_{max}\), for AERO-GNN and FAGCN, the number of additional parameters is \(\Theta(k_{max}d_{H})\); for GATv2, the number is \(\Theta(k_{max}d_{H}^{2})\)._
Proof.: All the proofs are in Appendix A.
## 5 Experiments
In this section, we conduct experiments to demonstrate the empirical strengths of AERO-GNN and elaborate on the theoretical analyses.
### Experimental Settings
**Datasets.** We use 12 node classification benchmark datasets, among which 6 are homophilic and 6 are heterophilic (McPherson et al., 2001; Pei et al., 2020; Lim et al., 2021). In all the experiments, we use the publicly available train-validation-test splits, unless otherwise specified. We use _sparse-labeled_ training for homophilic graphs and _dense-labeled_ training for heterophilic graphs (small and large proportions of train labels, respectively; refer to Appendix C for details).
**Baseline Methods.** The baseline methods consist of various representative attention-based GNNs, including both _edge-attention GNNs_ (GATv2, GATv2\({}^{R}\), GT (Shi et al., 2021), FAGCN, DMP (Yang et al., 2021))11 and _hop-attention GNNs_ (GPRGNN, DAGNN, MixHop (Abu-El-Haija et al., 2019)). In addition to some _simple GNNs_ (GCN (Welling and Kipf, 2016), APPNP (Gasteiger et al., 2018)), _deep GNNs_ without attention also serve as baselines (GCN-II (Chen et al., 2020), A-DGN (Gravina et al., 2023)). See Appendix E and F for their details.
Footnote 11: GATv2\({}^{R}\) is a GATv2 model with initial residual connection.
**Experiment Details.** The Adam optimizer (Kingma and Ba, 2015) is used to train the models, and the best parameters are selected based on early stopping. In measuring model performance (Section 5.2), we use 100 predetermined random seeds and report the mean \(\pm\) standard deviation (SD) of classification accuracy over 100 trials. When analyzing attention coefficient distribution (Section 5.3), attention coefficients are averaged over 10 trials.
### Node-Classification Performance
We evaluate the performance of each model on 12 real-world node classification benchmarks.
Table 3: Mean \(\pm\) SD of node classification accuracy (%) over 100 trials for each model (GCN, APPNP, GCN-II, A-DGN, GAT, GATv2, GATv2\({}^{R}\), GT, FAGCN, DMP, GPRGNN, DAGNN, MixHop, and AERO-GNN) on the 12 benchmark datasets (Chameleon, Squirrel, Actor, Texas, Cornell, Wisconsin, Computer, Photo, Wiki-CS, Pubmed, Citeseer, and Cora), together with each dataset's homophily ratio and each model's average ranking (A.R.).
**On Heterophilic Graphs.** As described in Section 5.1, dense-labeled training is used on heterophilic graphs, following prior research. On heterophilic datasets, AERO-GNN obtains significant and consistent performance gains compared to the baseline models. Even in relatively small graphs (e.g., _texas_, _cornell_, _wisconsin_), where three propagation layers are enough to reach most nodes, AERO-GNN outperforms all the baseline attention-based GNNs. In other words, even when the relative advantage of a deeper model is small, AERO-GNN still maintains competitive performance. Despite being specifically designed to address the heterophily problem, FAGCN, DMP, and GPRGNN are outperformed by AERO-GNN.
**On Homophilic Graphs.** In line with the prior research, sparse-labeled training is used for homophilic graphs. This poses a distinct set of challenges from dense-labeled training in heterophilic graphs, especially for attention-based GNNs. That is because, in homophilic graphs, the relative importance of each neighbor may not vary as significantly as it does in heterophilic graphs. Additionally, with a small number of training labels, deep and complex models may be prone to overfitting. As a result, only a few models achieve strong performance in both settings (GCN-II, GT, and GPRGNN perform the best among the baselines). Despite such difficulties, AERO-GNN demonstrates strong performance, ranking first in 5 out of the 6 homophilic datasets.
For further evaluation, we discuss AERO-GNN's performance with multi-head attention and with datasets proposed by Platonov et al. (2023) in Appendix G, where AERO-GNN still has the best average ranking among the competitors.
### Empirical Elaboration of the Theoretical Analysis
In this section, we empirically elaborate on the theoretical limitations of the representative attention-based GNNs (Theorems 1, 2) and show that AERO-GNN effectively mitigates the problems (Theorems 3, 4). Specifically, compared to the baselines, we show that AERO-GNN has
* _edge/node-, hop-, and graph-_adaptive attention function,
* _less smooth_ and _un-smoothing_ cumulative attention \(T^{(k)}\),
* and _higher_ performance at deep layers.
Here, we bring our focus back to GATv2, FAGCN, GPRGNN, DAGNN, and AERO-GNN. 12
Footnote 12: Training of vanilla GATv2 becomes unstable over deeper layers. Thus, for a fair comparison, we use GATv2\({}^{R}\) instead in the following sections.
**Statistics of the Attention Coefficients.** According to Theorems 1 and 3, only AERO-GNN would have both attention functions resistant to the over-smoothed node features. While FAGCN would only have a resistant edge-attention function, the attention coefficient distributions of the other models are expected to shrink or remain stationary with the increasing number of layers (when the over-smoothing of node features is more likely to occur). To test this hypothesis, we train each model with 64 layers and conduct a post-hoc analysis of the learned attention distributions.
First, we study the edge-attention coefficients \(\alpha_{ij}^{(k)}\)'s. For each \(k\), Figure 1 presents (**a**) the distribution of \(\alpha_{ij}^{(k)}\)'s and (**b**) the Frobenius norm of the difference between the attention coefficients at layers \(k\) and \(k-1\), i.e., \(\left\|\left(\alpha_{ij}^{(k)}-\alpha_{ij}^{(k-1)}\right)_{ij}\right\|_{F}= \sqrt{\sum_{i,j}(\alpha_{ij}^{(k)}-\alpha_{ij}^{(k-1)})^{2}}\). If the distributions shrink or remain stationary over \(k\) for all graphs, we have strong evidence that attention coefficients are not working properly at deep layers.

Figure 1: **Statistics of \(\alpha_{ij}^{(k)}\)’s for each \(k\) with \(k_{max}=64\). Only AERO-GNN learns edge-, hop-, and graph-adaptive edge attention over deep layers.**
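The quantity in panel (b) can be computed directly from dense attention matrices, e.g.:

```python
import torch

def attention_change(A_k, A_prev):
    # Frobenius norm of the layer-to-layer change in edge attention,
    # the quantity plotted in Figure 1(b).
    return torch.linalg.norm(A_k - A_prev, ord="fro")
```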
For FAGCN, all the coefficients approach zero, while the SDs of GATv2's attention coefficients remain stationary for all graphs (see Figure 1(a)). The attention coefficients of both FAGCN and GATv2 become stationary at deep layers, with a near-zero difference between \(\alpha_{ij}^{(k)}\) and \(\alpha_{ij}^{(k-1)}\) at large \(k\) (see Figure 1(b)).
Only AERO-GNN's edge attention distributions do not shrink or remain stationary over \(k\). To elaborate, AERO-GNN learns edge attention coefficients that are _edge-adaptive_ (high variances in Figure 1(a)), _hop-adaptive_ (high differences in Figure 1(b)), and _graph-adaptive_ (diverse patterns for different graphs in Figure 1(a)). The results for all 12 datasets are in Appendix G.
We now investigate the hop-attention coefficients \(\gamma_{i}^{(k)}\)'s. Figure 2 shows (a) the mean value and (b) the SD of \(\gamma_{i}^{(k)}\)'s for each layer \(k\). The two panels indicate, respectively, how hop-adaptive and how node-adaptive the \(\gamma_{i}^{(k)}\)'s are.
Again, as expected, hop attention distributions of DAGNN remain stationary over deep layers, and hence, they are less adaptive to node, hop, or graph (see stationary values in Figure 2(a) and the very small SD values in Figure 2(b)). Since GPRGNN's hop attention \(\gamma_{i}^{(k)}\)'s are free parameters, it learns hop-adaptive and graph-adaptive \(\gamma_{i}^{(k)}\)'s regardless of over-smoothing. However, they are not node-adaptive.
In stark contrast, hop attention coefficients of AERO-GNN are _node-adaptive_ (high SD in Figure 2(b)), _hop-adaptive_ (mean value changes over different layers in Figure 2(a)), and _graph-adaptive_ (diverse patterns for different graphs in Figure 2(a) and (b)). The results for all 12 datasets are in Appendix G.
Through this series of empirical analyses, we present strong evidence that attention functions of the representative attention-based GNNs, except for those of AERO-GNN, are vulnerable to node feature over-smoothing and fail to remain expressive over deep layers.
**Cumulative Attention and Model Performance.** According to Theorems 2 and 4, only AERO-GNN can avoid completely-smoothed cumulative attention (i.e. \(S(T^{(k)})=0\)) over deep layers, highlighting its capacity to learn meaningful attention at any model depth. Here, we empirically elaborate on the theoretical analysis.
Figure 3(a) shows the smoothness scores of cumulative attention matrices \(S(T^{(k)})\). 14 AERO-GNN generally has less smooth \(T^{(k)}\) over deep layers. Notably, there often occurs un-smoothing of \(T^{(k)}\), such that \(S(T^{(k)})>S(T^{(k-1)})\), only for AERO-GNN and FAGCN. This phenomenon is attributable to the use of negative attention in both models. However, \(S(T^{(k)})\) of FAGCN quickly converges to 0. These findings resonate with Theorem 4, showing that AERO-GNN's attention function does remain expressive at deep layers. The results for all 12 datasets are in Appendix G. 15
Footnote 14: Recall that a lower \(S(T^{(k)})\) indicates that the cumulative attention \(T^{(k)}\) is more smoothed.
Figure 3(b) illustrates the model performance at layer \(k\in\{2,4,8,16,32,64\}\). AERO-GNN generally achieves better performance over deeper layers. Meanwhile, the performance of the representative attention-based GNNs often drops, even significantly, at deep layers. Table 4 further shows that AERO-GNN generally achieves its best performances at deeper layers than the attention-based GNNs.
In this section, we empirically evaluated the adaptiveness of attention coefficients, the smoothness of \(T^{(k)}\), and the performance with depth for each model. This set of empirical observations, in concert with the theoretical findings, indicates that AERO-GNN indeed learns the most expressive deep graph attention.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline
**Dataset** & CM & SQ & AT & TX & CN & WC & CP & PT & WK & PM & CS & CR \\ \hline \hline
**GCN-II** & 8 & 4 & 4 & 4 & 4 & 4 & **32** & 4 & 16 & **32** & **64** \\
**A-DGN** & 16 & **32** & **32** & 2 & **32** & 2 & 8 & 16 & 8 & 16 & 4 \\ \hline \hline
**GATv2\({}^{R}\)** & 16 & 16 & 16 & **8** & 2 & 2 & 4 & 4 & 2 & 4 & 2 & 2 \\
**FAGCN** & 2 & 2 & 6 & 7 & 2 & 7 & 7 & 8 & 3 & 8 & 4 & 4 \\
**GPRGNN** & 16 & 16 & 4 & 4 & 4 & **8** & 4 & **32** & 16 & 8 & 8 \\
**DAGNN** & 5 & 20 & 5 & 5 & 5 & 5 & 5 & 10 & 5 & 20 & 10 \\ \hline
**AERO-GNN** & **32** & 16 & 4 & **8** & 4 & 4 & **32** & **32** & 16 & **32** & **32** & 32 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Best Performing Layers Used in Table 3
Figure 3: **The smoothness of the cumulative attention matrix \(T^{(k)}\) and the model performance. AERO-GNN has (_a_) less smoothed \(T^{(k)}\) over deep layers and (_b_) higher best performance achieved at a deeper layer.**
## 6 Discussion
Graph attention has had a significant impact on the research and applications of GNNs. Many attention-based GNNs have been designed, investigating how to better infer relations between node pairs. Some models introduced more features or loss terms (Kim and Oh, 2021; Wang et al., 2019; He et al., 2021), and some others relaxed their attentions to include negative values (Bo et al., 2021; Chien et al., 2021; Yang et al., 2021).
On the other hand, making GNNs deeper to enhance their expressivity has been a long-standing challenge in GNN research. A number of problems have been identified that prevent GNN expressiveness from increasing with depth: besides over-smoothing, over-squashing and over-correlation have also been pointed out as possible obstacles to building deep GNNs. As such, techniques that modulate propagation or aggregation functions (Chamberlain et al., 2021; Gravina et al., 2023; Li et al., 2020; Bodnar et al., 2022), residual connection functions (Li et al., 2019; Chen et al., 2020; Li et al., 2021), hidden features (Zhou et al., 2020; Zhao and Akoglu, 2020; Guo et al., 2023), or graph topologies (Rong et al., 2020; Chen et al., 2020; Zeng et al., 2021; Topping et al., 2022; Bodnar et al., 2022) have been applied to build deeper GNNs.
In this work, we bridge the two research directions, addressing two underexplored questions: (**a**) what are the unique challenges in deep graph attention, and (**b**) how can we design provably more expressive deep graph attention? We argue that the representative attention-based GNNs suffer from the proposed set of problems, possibly on top of the general problems in deep GNNs. Thus, we design AERO-GNN to theoretically and empirically mitigate the problems.
Under a larger context, these findings extend prior literature on limitations to _deep attention in general_. Specifically, similar problems of deep attention smoothing have been reported for transformers (Vaswani et al., 2017; Dong et al., 2021), in both natural language processing (Shi et al., 2022) and computer vision (Gong et al., 2021; Zhou et al., 2021) domains. We demonstrate that attention-based GNNs share related, yet distinct, problems and propose a novel solution. Hence, we expect this work will inspire future research on deep attention and graph learning in various directions.
The generalizability of the present work is limited in that linear propagation is assumed to define \(T^{(k)}\) of GATs. Also, we deliberately suppress non-linearity in the _propagation layers_ of AERO-GNN to focus on the ability to learn dynamic receptive fields, expressed in \(T^{(k)}\). Still, it can be a promising future work to add non-linear propagation to AERO-GNN to address other challenges and applications to more complex tasks.
## Acknowledgements
This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00871, Development of AI Autonomy and Knowledge Enhancement for AI Agent Collaboration) (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)).
|
2307.15251 | PCNN: A Lightweight Parallel Conformer Neural Network for Efficient
Monaural Speech Enhancement | Convolutional neural networks (CNN) and Transformer have wildly succeeded in
multimedia applications. However, more effort needs to be made to harmonize
these two architectures effectively to satisfy speech enhancement. This paper
aims to unify these two architectures and presents a Parallel Conformer for
speech enhancement. In particular, the CNN and the self-attention (SA) in the
Transformer are fully exploited for local format patterns and global structure
representations. Based on the small receptive field size of CNN and the high
computational complexity of SA, we specially designed a multi-branch dilated
convolution (MBDC) and a self-channel-time-frequency attention (Self-CTFA)
module. MBDC contains three convolutional layers with different dilation rates
for the feature from local to non-local processing. Experimental results show
that our method performs better than state-of-the-art methods in most
evaluation criteria while maintaining the lowest model parameters. | Xinmeng Xu, Weiping Tu, Yuhong Yang | 2023-07-28T01:34:20Z | http://arxiv.org/abs/2307.15251v1 | # PCNN: A Lightweight Parallel Conformer Neural Network for Efficient Monaural Speech Enhancement
###### Abstract
Convolutional neural networks (CNN) and Transformer have wildly succeeded in multimedia applications. However, more effort needs to be made to harmonize these two architectures effectively to satisfy speech enhancement. This paper aims to unify these two architectures and presents a Parallel Conformer for speech enhancement. In particular, the CNN and the self-attention (SA) in the Transformer are fully exploited for local format patterns and global structure representations. Based on the small receptive field size of CNN and the high computational complexity of SA, we specially designed a multi-branch dilated convolution (MBDC) and a self-channel-time-frequency attention (Self-CTFA) module. MBDC contains three convolutional layers with different dilation rates for the feature from local to non-local processing. Experimental results show that our method performs better than state-of-the-art methods in most evaluation criteria while maintaining the lowest model parameters.
Xinmeng Xu\({}^{1}\), Weiping Tu\({}^{1,2,3,*}\), Yuhong Yang\({}^{1,3}\)\({}^{1}\)NERCMS, School of Computer Science, Wuhan University, China
\({}^{2}\)Hubei Luojia Laboratory, China
\({}^{3}\)Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, China
{xuxinmeng, tuweiping, yangyuhong}@whu.edu.cn
**Index Terms**: speech enhancement, self-attention, convolutional neural network, parallel conformer
## 1 Introduction
Speech enhancement (SE) aims to estimate target speech from a noisy recording, which may consist of ambient noise, interfering speech, and room reverberation. It is a pre-processor for many speech processing applications, such as speech recognition [1], speaker verification [2], and hearing aids design [3]. With the recent advances in supervised learning, deep neural networks (DNNs) are applied to several SE models. Typically, DNN-based SE models operate in the short-time Fourier transform (STFT) domain and estimate the clean target speech from the noisy signal via direct spectral mapping [4, 5], or time-frequency (TF) masking [6, 7].
The convolutional neural network (CNN) represents a successful backbone architecture, which performs filter processing on speech frames in parallel and is thus structurally well-suited to focus on local patterns, such as harmonic structures [8]. Meanwhile, CNN captures contextual information by stacking multiple layers. While these properties bring efficiency and generalization to CNNs, they also cause two main issues. Firstly, the convolutional operation has a limited receptive field. Secondly, the convolution filters have static weights at inference. The former prevents the network from capturing long-range feature dependencies [9, 10], while the latter sacrifices adaptability to the input contents. As a result, CNN fails to meet the requirement of modeling the global noise distribution and generates results with noticeable residual noise.
Self-attention (SA) calculates the response at a given feature region by a weighted sum over all other positions [11, 12, 13]. Benefiting from global processing, SA achieves a significant performance boost over CNNs in SE tasks by mitigating their shortcomings, i.e., the limited receptive field and inadaptability to input content [14, 15]. However, due to the global calculation of SA, its computation complexity grows quadratically with the spatial resolution, making it infeasible to fulfill the real-time demands of SE systems. In addition, global relationships between these speech features are prone to bias and are unreliable because feature regions are usually noisy [5]. In this way, calculating the self-similarity of features between the target speech and the global mixture may not be a practical option.
Inspired by the superior performance of CNN in extracting speech local format patterns and the effectiveness of Transformer in capturing the long-range dependency, we propose the parallel Conformer neural network (PCNN) for monaural speech enhancement. The proposed architecture incorporates CNN and Transformer in a parallel manner. It is followed by a hybrid fusion block containing depth-wise separable convolutions and channel attention for an adaptively and learnable performance trade-off. In addition, to deal with the small receptive field of CNN and the high computational complexity of the Transformer, we specially designed a multi-branch dilated convolution (MBDC) and a self-channel-time-frequency attention (Self-CTFA) module. In particular, the MBDC applies channel-wise attention to different dilation rates of convolutions to enlarge the size of the receptive field of local operation, in which channel-wise attention is independently performed on these three outputs for flexibly achieving feature processing from local to non-local. The self-CTFA module consists of three parallel attention branches, i.e., channel-dimension, time-dimension, and frequency-dimension, in which three 2D attention maps are calculated by three 1D energy distributions of these dimensions.
## 2 Model Description
In this section, we elaborate on our proposed PCNN, which enables adaptive and learnable adjustment of contribution between CNN and Transformer in the SE tasks.
### Overview
We propose a parallel conformer neural network (PCNN) for SE in the time domain. As shown in Figure 1, the architecture consists of a segmentation operation, encoder, separator, masking module, decoder, and overlap-add operation.
The input of PCNN is a raw speech waveform mixture, \(\mathbf{x}\in\mathbb{R}^{1\times N}\), which is firstly split into \(F\) overlapped frames of length
\(L\) with a shifting size \(S\) by the **segmentation operation**. In this way, \(\mathbf{x}\) results in a 3-dimensional tensor \(X\in\mathbb{R}^{1\times F\times L}\), where \(F\) can be expressed as
\[F=\lceil(N-L)/(L-S)+1\rceil, \tag{1}\]
where \(N\) represents the length of the input speech mixture and \(\lceil\cdot\rceil\) rounds the number involved up to the nearest integer. In addition, the **overlap-add operation** is the inverse of the **segmentation operation**, merging the frames to reconstruct the enhanced speech waveform.
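A minimal sketch of the segmentation and overlap-add operations; interpreting \(S\) as the overlap between consecutive frames, so that the hop size is \(L-S\), is an assumption consistent with Eq. (1):

```python
import numpy as np

def segment(x, L, S):
    # Split waveform x (length N) into overlapping frames of length L
    # following Eq. (1); hop size is taken as L - S (assumption).
    hop = L - S
    n_frames = int(np.ceil((len(x) - L) / hop + 1))
    x = np.pad(x, (0, (n_frames - 1) * hop + L - len(x)))  # zero-pad tail
    return np.stack([x[i * hop: i * hop + L] for i in range(n_frames)])

def overlap_add(frames, S, length):
    # Inverse of segment(): sum overlapping frames back into a waveform.
    n_frames, L = frames.shape
    hop = L - S
    y = np.zeros((n_frames - 1) * hop + L)
    for i in range(n_frames):
        y[i * hop: i * hop + L] += frames[i]
    return y[:length]
```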
**Encoder** plays the role of feature extractor [16, 17] and contains two convolutional layers: the first increases the number of channels to 64 using a convolution with a kernel size of \((1,1)\), and the second halves the dimension of the frame size using a kernel size of \((1,3)\) with a stride of \((1,2)\), with a dilated-dense block [18] consisting of four dilated convolutions placed between them. In addition, layer normalization and PReLU [19] are adopted after these convolutional layers. Conversely, the **decoder** is responsible for feature reconstruction and contains a dilated-dense block and a sub-pixel convolution [20], followed by layer normalization, PReLU, and a 2D convolutional layer with a kernel size of \((1,1)\) that recovers the channel dimension of the enhanced speech feature to 1.
The **separator** is the crucial part of PCNN and is mainly composed of several parallel conformer blocks (PCBs) cascaded together. The PCB, as shown in Figure 2, includes a dual-branch module containing an MBDC, a Self-CTFA module, and an HFB for extracting and leveraging local and global features, and a feed-forward network, both preceded by layer normalization and equipped with skip connections, as described in Section 2.2. Unlike conventional Transformer blocks, the feed-forward network consists of a gated recurrent unit (GRU) layer to learn the positional information [21, 22]. The **masking module** utilizes the feature output from the **separator** to generate a mask for enhancing speech. Concretely, the output of the **separator** is doubled along the channel dimension with PReLU and convolution to match the output of the encoder, and then passes through a gated convolution operation [23] and ReLU to obtain the mask. The element-wise multiplication between the mask and the output of the encoder yields the final masked encoder feature.
Two loss functions are used in our study. One is the frequency-domain loss function, which starts with calculating the STFT to create a TF representation of the mixture sample. The TF bins corresponding to the target speech are then separated and used to synthesize the source waveform using the inverse STFT. In this case, the loss function is formulated as the mean square error (MSE) between the estimated target speech TF bins \(\hat{S}\in\mathbb{R}^{T\times F}\) and the corresponding ground truth \(S\in\mathbb{R}^{T\times F}\),
\[\mathcal{L}_{f}=\frac{1}{T\times F}||S-\hat{S}||^{2} \tag{2}\]
where \(||\cdot||^{2}\) denotes the \(l_{2}\) norm, and \(T\) and \(F\) denote the number of frames and frequency bins, respectively. We also use a time-domain loss based on the MSE between the enhanced speech and the clean speech, which is defined as:
\[\mathcal{L}_{t}=\frac{1}{N}\sum_{i=1}^{N}(x_{i}-\hat{x}_{i})^{2}, \tag{3}\]
where \(x\) and \(\hat{x}\) are the clean speech and enhanced speech samples, respectively, and \(N\) represents the number of samples. In this way, we obtain the final loss by combining these two types of loss functions,
\[\mathcal{L}_{total}=\alpha\mathcal{L}_{f}+(1-\alpha)\mathcal{L}_{t}, \tag{4}\]
where \(\alpha\) is a tunable parameter and is set as 0.2.
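A sketch of the combined loss of Eq. (4); the Hann window is an assumption, as the analysis window is not specified here:

```python
import torch

def total_loss(x_clean, x_hat, n_fft=512, hop=256, alpha=0.2):
    # alpha * frequency-domain MSE + (1 - alpha) * time-domain MSE, Eq. (4);
    # STFT parameters follow the frame size / shift of 512 / 256.
    win = torch.hann_window(n_fft)          # window choice is an assumption
    S = torch.stft(x_clean, n_fft, hop, window=win, return_complex=True)
    S_hat = torch.stft(x_hat, n_fft, hop, window=win, return_complex=True)
    loss_f = (S - S_hat).abs().pow(2).mean()     # Eq. (2)
    loss_t = (x_clean - x_hat).pow(2).mean()     # Eq. (3)
    return alpha * loss_f + (1 - alpha) * loss_t
```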
Figure 1: _Overview of the parallel conformer neural network (PCNN)._
Figure 2: _The architecture of parallel conformer block (PCB). “MBDC” denotes the multi-branch dilation convolution, “Self-CTFA” denotes self channel-time-frequency attention, and “HFB” denotes hybrid fusion block._
### Dual-branch Module
Our proposed dual-branch module, labeled with an orange box in Figure 2, comprises three parts: MBDC for local processing, Self-CTFA module for global processing, and HFB for features fusion. The role of the dual-branch module is to leverage local and global operations adaptively.
**Multi-branch Dilation Convolution (MBDC).** Inspired by [24, 25], we employ the channel-wise attention mechanism in the design of the MBDC to perform channel selection over multiple convolutions with different dilation rates. The detailed architecture of our proposed MBDC is shown in Figure 3. In our design, we adopt three branches carrying convolutional layers with different dilation rates to generate feature maps with different receptive field sizes. Channel-wise attention is independently performed on these three outputs, and the results are added together. In this way, features can be extracted flexibly from local to non-local operations, while the receptive field is enlarged without substantial computational cost thanks to the parallel structure of three dilated convolutional layers with the same kernel size [26]. In addition, the comparison between MBDC, dilated convolution, and convolution in Table 1 indicates that MBDC has a higher feature sampling rate than dilated convolution while having much lower computational complexity than conventional convolution.
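A minimal sketch of the MBDC; the squeeze-and-excitation form of the channel-wise attention is an assumption, as its exact design is not detailed here:

```python
import torch
import torch.nn as nn

class MBDC(nn.Module):
    # Three 3x3 branches with dilation rates {1, 2, 4}, each followed by
    # channel-wise attention; branch outputs are summed.
    def __init__(self, channels, dilations=(1, 2, 4), reduction=4):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations)
        self.attn = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(1),
                          nn.Conv2d(channels, channels // reduction, 1),
                          nn.ReLU(),
                          nn.Conv2d(channels // reduction, channels, 1),
                          nn.Sigmoid())
            for _ in dilations)

    def forward(self, x):
        out = 0
        for conv, att in zip(self.branches, self.attn):
            y = conv(x)
            out = out + y * att(y)   # channel attention on each branch
        return out
```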
**Self Channel-Time-Frequency Attention (Self-CTFA) Module.** We show the proposed Self-CTFA module in Figure 4. The Self-CTFA module takes the TF representation \(\textbf{F}_{in}\in\mathbb{R}^{C\times T\times F}\) as input, where \(C\), \(T\), and \(F\) denote the channels, time frames, and frequency bins, respectively. The Self-CTFA module consists of three branches that generate a 1D channel-dimension energy feature \(\textbf{F}_{c}\in\mathbb{R}^{C\times 1}\), a 1D time-dimension energy feature \(\textbf{F}_{t}\in\mathbb{R}^{1\times T}\), and a 1D frequency-dimension energy feature \(\textbf{F}_{f}\in\mathbb{R}^{F\times 1}\) in parallel by separately applying three global pooling functions.
Each branch contains two sub-branches to calculate query and key using two 1D \(1\times 1\) convolutional layers. After that, the query and key of each branch are multiplied and fed into the softmax activation function to generate the attention feature map, which is defined as:
\[\left\{\begin{array}{l}\textbf{M}_{c}=\text{softmax}(\mathcal{H}_{c1}( \textbf{F}_{c})\times\mathcal{H}_{c1}(\textbf{F}_{c})^{\top}),\\ \textbf{M}_{t}=\text{softmax}(\mathcal{H}_{c1}(\textbf{F}_{t})\times\mathcal{ H}_{c1}(\textbf{F}_{t})^{\top}),\\ \textbf{M}_{f}=\text{softmax}(\mathcal{H}_{c1}(\textbf{F}_{f})\times\mathcal{ H}_{c1}(\textbf{F}_{f})^{\top}),\end{array}\right. \tag{5}\]
where \(\textbf{M}_{c}\in\mathbb{R}^{C\times C}\), \(\textbf{M}_{t}\in\mathbb{R}^{T\times T}\), and \(\textbf{M}_{f}\in\mathbb{R}^{F\times F}\) denote the attention feature maps of the channel branch, time branch, and frequency branch, respectively, and \(\mathcal{H}_{c1}\) represents the 1D \(1\times 1\) convolutional layer. Afterwards, the \(\textbf{V}_{in}\) generated from \(\textbf{F}_{in}\) separately multiplies with \(\textbf{M}_{c}\), \(\textbf{M}_{t}\), and \(\textbf{M}_{f}\), and these results are added to obtain the output of the Self-CTFA module, \(\textbf{F}_{out}\). In this way, the Self-CTFA module reduces the computational complexity from \(\mathcal{O}(C^{2}TF+CT^{2}F+CTF^{2})\) to \(\mathcal{O}(C^{2}+T^{2}+F^{2})\).
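A minimal sketch of the Self-CTFA module; how \(\textbf{V}_{in}\) is produced from \(\textbf{F}_{in}\) is an assumption (a \(1\times 1\) convolution here), and batched dense attention maps are used for clarity:

```python
import torch
import torch.nn as nn

class SelfCTFA(nn.Module):
    # Input x: (B, C, T, F). Each branch pools the other two dimensions
    # into a 1D energy vector, projects it to a query/key pair with 1D
    # 1x1 convolutions, and applies the softmax attention map along its
    # own axis of V; the three results are summed.
    def __init__(self, channels):
        super().__init__()
        self.v = nn.Conv2d(channels, channels, 1)   # V from F_in (assumed)
        self.q = nn.ModuleDict({n: nn.Conv1d(1, 1, 1) for n in "ctf"})
        self.k = nn.ModuleDict({n: nn.Conv1d(1, 1, 1) for n in "ctf"})

    def _attn(self, e, name):
        # e: (B, D) energy vector -> (B, D, D) softmax attention map
        q = self.q[name](e.unsqueeze(1)).squeeze(1)
        k = self.k[name](e.unsqueeze(1)).squeeze(1)
        return torch.softmax(torch.einsum("bi,bj->bij", q, k), dim=-1)

    def forward(self, x):
        V = self.v(x)
        Mc = self._attn(x.mean(dim=(2, 3)), "c")           # (B, C, C)
        Mt = self._attn(x.mean(dim=(1, 3)), "t")           # (B, T, T)
        Mf = self._attn(x.mean(dim=(1, 2)), "f")           # (B, F, F)
        out = torch.einsum("bij,bjtf->bitf", Mc, V)        # channel axis
        out = out + torch.einsum("bij,bcjf->bcif", Mt, V)  # time axis
        out = out + torch.einsum("bij,bctj->bcti", Mf, V)  # frequency axis
        return out
```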
**Hybrid Fusion Module (HFB).** Considering the feature redundancy and knowledge discrepancy among MBDC and Self-CTFA module, we introduce a novel hybrid fusion block (HFB) in our approach. Specifically, we incorporate depth-wise separable convolutions and the channel attention layer into HFB to discriminatively aggregate features in spatial and channel dimensions [27].
## 3 Experimental Setup
### Datasets
In order to evaluate the performance of the proposed model, experiments are conducted on the Librispeech corpus [28]. A total of 6,500 clean utterances are selected for the training set and 400 for the validation set, created under random SNR levels ranging from -5 dB to 10 dB. The test set contains 100 utterances under the SNR conditions of -5 dB, 0 dB, 5 dB, and 10 dB.
Noise signals from the Demand dataset [29], along with the clean speech recordings, are used to create the noisy speech for the training and validation sets. The clean speech and noise recordings have a sampling frequency of 16 kHz, and the frame size and frame shift for frame-level processing are set to 512 and 256, respectively.
\begin{table}
\begin{tabular}{l|c c c} \hline Type & MBDC & Dilated Conv & Conv \\ \hline Kernel Size & 3\(\times\)3 & 3\(\times\)3 & 9\(\times\)9 \\ Dilation Rate & \{1, 2, 4\} & 4 & 0 \\ Receptive Field & 9\(\times\)9 & 9\(\times\)9 & 9\(\times\)9 \\ Sampling Rate & 33.33\(\%\) & 11.11\(\%\) & 100\(\%\) \\ Param.(M) & N & N & 9.00N \\ \hline \end{tabular}
\end{table}
Table 1: Comparison between MBDC, Dilated Convolution, and Convolution.
Figure 4: The architecture of Self Channel-Time-Frequency Attention (Self-CTFA) Module.
Figure 3: The architecture of Multi-branch Dilation Convolution (MBDC).
### Training and Network Parameters
In each training epoch, we chunk a random 4-second segment from an utterance if it is longer than 4 seconds. Shorter utterances are zero-padded to match the size of the largest utterance in the batch. The Adam optimizer is used for stochastic gradient descent (SGD)-based optimization, and the initial learning rate is set to 0.001. MSE is used as the loss function.
## 4 Results and Analysis
### Model Comparison
This section compares alternative baseline models in Table 2 in terms of STOI, PESQ, and SSNR, where the numbers represent the averages over the test set in each condition. Four baseline systems are selected for the comparison, i.e., Conv-TasNet [17], GCRN [23], TSTNN [30], and U-Former [14]. In addition, we also evaluate the performance of PCNN when replacing MBDC with dilated convolution, i.e., PCNN (DC), replacing MBDC with conventional convolution, i.e., PCNN (CC), replacing the Self-CTFA module with a self-attention module, i.e., PCNN (SA), removing MBDCs, i.e., PCNN -w/o MBDC, and removing the Self-CTFA module, i.e., PCNN -w/o Self-CTFA.
Table 2 shows the comparison results of the proposed PCNN and the other four baseline models. The number of parameters and the real-time factor (RTF) are also presented. One can observe the following phenomena. First, the proposed model consistently outperforms all baselines in the three metric scores for different cases. Secondly, the proposed PCNN has the fewest parameters, followed by GCRN, and the lowest RTF, while achieving the best performance, demonstrating the efficiency of PCNN. Thirdly, we compare the PCNN variants that replace the MBDC and self-CTFA module in Table 2; according to the results, we observe that (1) replacing MBDC with dilated convolution achieves lower scores in these evaluation metrics with a similar number of parameters, and (2) replacing MBDC with conventional convolution of the same receptive field size achieves similar performance with a larger number of parameters and a higher RTF. Finally, the comparison results between PCNN, PCNN -w/o MBDC, and PCNN -w/o Self-CTFA indicate the necessity of the proposed MBDC and self-CTFA module.
### Impact of Branches in Self-CTFA Module
The proposed self-CTFA module consists of the channel (C) branch, time (T) branch, and frequency (F) branch. In this study, we evaluate variants of the self-CTFA module obtained by removing the C, T, and F branches. We set the same parameters as in the previous section but control the usage of the different branches. Table 3 demonstrates the evaluation results of the self-CTFA module when removing different components in the -5 dB SNR condition. According to Table 3, we conclude that the existence of the C branch, T branch, and F branch does promote the performance of the self-CTFA module. In addition, the F branch performs better than the T and C branches.
## 5 Conclusion
This paper proposes a parallel conformer neural network (PCNN) for SE that leverages CNN for capturing local detail information and the Transformer for extracting long-range dependencies. To address the drawbacks of CNN and of SA in the Transformer, we develop an MBDC that enlarges the small receptive field of CNN and a self-CTFA module that reduces the high computational complexity of SA. In addition, an HFB is utilized to fuse the outputs of the MBDC and the self-CTFA module. Through experiments, we show the superiority of the proposed method over the other methods compared in this paper.
## 6 Acknowledgements
This work is supported by the National Nature Science Foundation of China (No. 62071342, No.62171326), the Special Fund of Hubei Luojia Laboratory (No. 220100019), the Hubei Province Technological Innovation Major Project (No. 2021BAA034) and the Fundamental Research Funds for the Central Universities (No.2042023kf1033).
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c c|c c} \hline Test SNR & \multicolumn{3}{c|}{-5 dB} & \multicolumn{3}{c|}{0 dB} & \multicolumn{3}{c|}{5 dB} & Param. & RTF \\ \hline Metric & STOI (\%) & PESQ & SSNR & STOI (\%) & PESQ & SSNR & STOI (\%) & PESQ & SSNR & \multicolumn{1}{c}{} & \\ \hline Unprocess & 57.71 & 1.37 & -5.04 & 71.02 & 1.73 & -0.05 & 82.53 & 2.03 & 4.93 & - & - \\ \hline Conv-TasNet & 79.81 & 2.23 & 6.65 & 86.76 & 2.42 & 8.23 & 91.46 & 2.86 & 10.12 & 4.58 M & 0.72 \\ GRN & 80.31 & 2.21 & 6.62 & 86.98 & 2.49 & 8.31 & 91.28 & 2.91 & 10.89 & 3.08 M & 0.68 \\ TSTNN & 83.76 & 2.32 & 6.98 & 89.75 & 2.64 & 8.86 & 93.67 & 3.03 & 11.59 & 4.86 M & 0.86 \\ U-Former & 84.75 & 2.38 & 7.03 & 89.60 & 2.68 & 8.94 & 93.76 & 3.08 & 11.96 & 5.85 M & 0.88 \\ \hline PCNN & 87.24 & 2.51 & 7.84 & 92.17 & 2.83 & 9.71 & 94.82 & 3.13 & 12.34 & 3.15 M & 0.51 \\ PCNN (DC) & 85.69 & 2.44 & 7.11 & 90.64 & 2.71 & 9.38 & 93.92 & 3.09 & 12.01 & 3.13 M & 0.49 \\ PCNN (CC) & 85.98 & 2.46 & 7.69 & 91.86 & 2.74 & 9.39 & 94.19 & 3.11 & 12.14 & 3.57 M & 0.54 \\ PCNN (SA) & 87.36 & 2.50 & 7.84 & 92.06 & 2.83 & 9.75 & 94.79 & 3.12 & 12.28 & 6.36 M & 0.96 \\ PCNN -w/o MBDC & 83.32 & 2.30 & 6.86 & 87.96 & 2.59 & 8.78 & 92.89 & 2.98 & 11.87 & 2.98 M & 0.46 \\ PCNN -w/o Self-CTFA & 82.67 & 2.26 & 6.79 & 87.01 & 2.51 & 8.66 & 91.47 & 2.93 & 10.98 & 2.74 M & 0.43 \\ \hline \end{tabular}
\end{table}
Table 2: Comparisons of baseline models in terms of STOI, PESQ, and SSNR.
\begin{table}
\begin{tabular}{l|c c c} \hline Metric & STOI (\%) & PESQ & SSNR \\ \hline PCNN & 87.24 & 2.51 & 7.84 \\ \hline -_w/o C Branch_ & 84.81 & 2.47 & 7.51 \\ -_w/o T Branch_ & 85.99 & 2.44 & 7.46 \\ -_w/o F Branch_ & 85.56 & 2.41 & 7.49 \\ -_w/o C-T Branches_ & 84.57 & 2.35 & 7.23 \\ -_w/o C-F Branches_ & 84.59 & 2.32 & 7.15 \\ -_w/o T-F Branches_ & 83.98 & 2.30 & 7.13 \\ -_w/o C-T-F Branches_ & 83.05 & 2.26 & 6.98 \\ \hline \end{tabular}
\end{table}
Table 3: Ablation study of self-CTFA by removing different components in -5 dB SNR condition. |
2305.10550 | Sparsity-depth Tradeoff in Infinitely Wide Deep Neural Networks | We investigate how sparse neural activity affects the generalization
performance of a deep Bayesian neural network at the large width limit. To this
end, we derive a neural network Gaussian Process (NNGP) kernel with rectified
linear unit (ReLU) activation and a predetermined fraction of active neurons.
Using the NNGP kernel, we observe that the sparser networks outperform the
non-sparse networks at shallow depths on a variety of datasets. We validate
this observation by extending the existing theory on the generalization error
of kernel-ridge regression. | Chanwoo Chun, Daniel D. Lee | 2023-05-17T20:09:35Z | http://arxiv.org/abs/2305.10550v1 | # Sparsity-depth Tradeoff in Infinitely Wide Deep Neural Networks
###### Abstract
We investigate how sparse neural activity affects the generalization performance of a deep Bayesian neural network at the large width limit. To this end, we derive a neural network Gaussian Process (NNGP) kernel with rectified linear unit (ReLU) activation and a predetermined fraction of active neurons. Using the NNGP kernel, we observe that the sparser networks outperform the non-sparse networks at shallow depths on a variety of datasets. We validate this observation by extending the existing theory on the generalization error of kernel-ridge regression.
## 1 Introduction
The utility of sparse neural representations has been of interest in both the machine learning and neuroscience communities. Willshaw and Dayan (1990) first showed that sparse inputs accelerate learning in a single hidden layer neural network. More recently, Babadi and Sompolinsky (2014) analyzed how sparse expansion of a random single hidden layer network modeling the cerebellum enhances the classification performance by reducing both the intraclass variability and excess overlaps between classes. In this work, we examine the effect of sparsity on the generalization performance for regression and regression-based classification tasks in deeper neural networks with rectified linear activations.
Consider a feed-forward deep neural network with a large number of neurons equipped with rectified linear units (ReLU) in each layer (Figure 1a). The weights are random and, for each input, we adjust the bias in the preactivations such that only a fraction \(f\) neurons in each layer are positive after ReLU. This sparse random network is trained by optimally tuning the readout layer using the pseudo-inverse rule. We performed regressions on the one-hot vectors of the real-life datasets, i.e. MNIST, Fashion-MNIST, CIFAR10, and CIFAR10-Grayscale, with 100 training samples (LeCun and Cortes, 2010; Xiao et al., 2017; Krizhevsky et al., 2009). Interestingly, the sparsity of the model with the best generalization performance changes over depths (Figure 1b,c). At shallow depth, the sparse activation improves the generalization performance, whereas at the deeper configurations, denser activations are required to maintain high generalization performance. To theoretically analyze the performance of these networks, we will take the width of the intermediate layers to be very large. It has been well-established that Bayesian inference on an infinite width feedforward neural network is equivalent to training only the readout weights of the network (Matthews et al., 2017; Hron et al., 2020, 2022; Williams, 1996; Lee et al., 2018). This allows us to perform kernel analysis of infinite-width neural networks assuming normally distributed weights, and to exactly infer the posterior network output via kernel ridge regression. Such kernels are referred to as neural network Gaussian process (NNGP) kernels. Cho and Saul (2009) introduced the ReLU neural network kernel which has been experimentally shown by Lee et al. (2018) to have a performance comparable to finite neural networks learned with backpropagation. Lee et al. (2018) performed regression on the one-hot training vectors and took the max of the prediction vector to obtain the test accuracy, while also reporting the mean-squared error of the regression. In another work by Cho and Saul (2011),
they derived an NNGP with Heaviside step activation to induce sparse activation. Here we present a deep NNGP with sparse activation induced by ReLU and appropriately chosen biases, and investigate its generalization performance as shown from the numerical experiments in Figure 1.
To better understand the generalization performance of these networks, we employ the theoretical framework provided in Canatar et al. (2021, 2021). Canatar et al. (2021) used the replica method to derive an analytical expression for the in-distribution generalization error of kernel ridge regression. There exists a large body of literature on generalization bounds and convergence rates of kernel ridge or ridgeless regression (Spigler et al., 2020; Bordelon et al., 2020; Bietti and Bach, 2020; Scetbon and Harchaoui, 2021; Vakili et al., 2021). In our paper, however, the goal is to characterize the average generalization performance of a kernel. By averaging over the data distribution, Bordelon et al. (2020) and Canatar et al. (2021) formulate the average generalization error as an analytical function of the kernel spectrum and the target function spectrum. We show that their theoretical formula accurately matches our experimental observation, and extend the theory to allow intuitive comparisons between kernels.
### Summary of contributions
We begin by deriving the expression for the sparse NNGP kernel in Section 2. We then experimentally demonstrate in Section 3 that the sparse NNGP \(f<0.5\) outperforms the popular NNGP kernel, i.e. ReLU arccosine kernel \(f=0.5\), at shallow depths. The arccosine kernel without bias is a special case of our sparse NNGP kernel where the fraction of the active neurons is \(f=0.5\). The bias typically does not affect generalization performance (see Supp D.1 and Lee et al. (2018)).
Next, in Section 4 we expand on the existing theory for kernel ridge-regression provided in Canatar et al. (2021) to aid our understanding of the generalization performance of the sparse NNGP. Our theoretical contribution is showing the intuitive relationship between the shape of the kernel eigenspectrum and the shape of the modal error spectrum using first-order perturbation theory, which provides useful insight when comparing kernel functions.
## 2 Sparse neural network Gaussian process
### Architecture
We consider a fully connected feed-forward neural network architecture. The post-activation \(x^{l}\) of each neuron in layer \(l\) is a rectified version of a preactivation \(h^{l}\) shifted by a bias \(b^{l}\). The preactivation
Figure 1: (a) Sparse and deep neural network with random intermediate weights and trained last output layer. The intermediate layer neurons are rectified linear units (ReLU). A fixed fraction (\(f\)) of neurons in each layer are non-zero (red neurons) for a given input; (b,c) Numerical simulation of very wide sparse neural networks over a range of sparsity \(f\) and depth \(L\). Each layer contains 20,000 neurons and the outputs are learned using 100 training examples. (b) Classification accuracy of the models as depth and sparsity are varied. The best-performing model of each depth is indicated with a white marker; (c) Mean-square error (MSE) of the regressions. The model with minimum MSE solution is indicated with a white marker for each depth.
itself is a linear combination of the previous layer activity \(x^{l-1}\). The model is written as
\[x_{j}^{(p),l}=\left[h_{j}^{(p),l}-b^{(p),l}\right]_{+}\qquad h_{j}^{(p),l}=\sum_{ i=1}^{n_{l-1}}w_{ij}^{l}x_{i}^{(p),l-1} \tag{1}\]
For the final output \(h_{j}^{(p),L+1}\) is
\[h_{j}^{(p),L+1}=\sum_{i=1}^{n_{L}}w_{ij}^{L+1}x_{i}^{(p),L} \tag{2}\]
\(w_{ij}^{l}\) denotes a synaptic weight from neuron \(i\) of layer \(l-1\) to neuron \(j\) of layer \(l\). Each layer \(l\) contains \(n_{l}\) neurons. The superscript \((p),l\) denotes the input sample index and layer number, respectively. For the input, we drop the \(l\) superscript, i.e. \(x_{i}^{(p)}\) instead of \(x_{i}^{(p),0}\). The output layer \(L+1\) neurons do not have an activation function, so \(\{h_{1}^{(p),L+1},\dots,h_{n^{L+1}}^{(p),L+1}\}\) is considered the model output. \(L\) denotes the number of hidden layers.
For each forward pass of an input, the biases are adjusted such that a fixed fraction \(f\) of neurons are positive in each layer. After the rectification by ReLU, only the \(f\) fraction of neurons are non-zero.
We take the infinite width limit to enable exact Bayesian inference. To this end, we first derive the sparse NNGP kernel formula for a single hidden layer architecture and then compose it to arrive at the sparse NNGP for deep architectures.
### Prior of the single hidden layer architecture (\(L=1\))
For the prior, the weights are independently sampled from a zero-mean normal distribution with standard deviation \(\frac{\sigma}{\sqrt{n^{l-1}}}\), i.e. \(w_{ij}^{l}\sim\mathcal{N}\left(0,\frac{\sigma^{2}}{n^{l-1}}\right)\) for \(l=1\). Since the inputs \(x_{k}^{(p)}\) are fixed, the preactivation of the hidden layer \(h_{i}^{(p)}\), which is the sum of the inputs weighted by the normal random weights, is a zero-mean normal with standard deviation \(\sigma_{h}=\frac{\sigma}{\sqrt{n}}\|x^{(p)}\|\), where \(\|x^{(p)}\|=\sqrt{\sum_{k=1}^{n}\left[x_{k}^{(p)}\right]^{2}}\) and \(n\) is the dimension of the input. In other words, \(h_{i}^{(p)}\sim\mathcal{N}\left(0,\frac{\sigma^{2}}{n}\|x^{(p)}\|^{2}\right)\). Since we know the preactivation is normally distributed, the thresholded rectification \(\left[h_{i}^{(p)}-b^{(p)}\right]_{+}\) of it is a rectified normal distribution. Since we want a fixed level of sparsity, we require a predetermined fraction \(f\) of the rectified normal distribution to be positive, i.e. non-zero, by choosing the appropriate bias. If \(\sigma_{h}^{2}\) is the variance of \(h_{i}^{(p)}\), the bias that guarantees exactly \(f\) fraction of the neurons to be positive is \(b^{(p)}=\sigma_{h}\tau\) where \(\tau=\sqrt{2}\text{erf}^{-1}(1-2f)\). Note that \(b^{(p)}\) is a function of the input, since it is dependent on \(\sigma_{h}\) which is a function of the input norm.
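A small numerical sketch of this bias selection, which can be checked by simulating a wide hidden layer:

```python
import numpy as np
from scipy.special import erfinv

def sparse_bias(x, sigma, f):
    # Bias that leaves exactly a fraction f of ReLU units active for
    # input x: b = sigma_h * tau with tau = sqrt(2) * erfinv(1 - 2f),
    # where sigma_h = sigma * ||x|| / sqrt(n).
    tau = np.sqrt(2.0) * erfinv(1.0 - 2.0 * f)
    sigma_h = sigma * np.linalg.norm(x) / np.sqrt(x.size)
    return sigma_h * tau

# quick check: with f = 0.1, about 10% of hidden units stay positive
rng = np.random.default_rng(0)
x = rng.normal(size=100)
W = rng.normal(scale=1.0 / np.sqrt(100), size=(100, 50000))
h = x @ W
print(np.mean(h - sparse_bias(x, 1.0, 0.1) > 0))   # ~0.1
```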
For finite \(n^{l=1}\), the output \(h_{j}^{(p),l=2}\) is non-normal, since it is a dot product between normal random weights \(w_{ij}^{l=2}\) and rectified normal activities \(x_{i}^{(p),1}\) of the hidden layer (Eqn. 2). However, when \(n^{l=1}\rightarrow\infty\), we can invoke the central limit theorem, and the distribution of \(h_{j}^{(p),l=2}\) reaches a normal distribution (Neal, 1996).
In order to compute the posterior output of this network, we first need to compute the similarity between neural representations of two inputs \(p\) and \(q\), i.e. \(E\left[x_{i}^{(p),l=1}x_{i}^{(q),l=1}\right]\) averaged over the distribution of the weights. This similarity is referred to as the Gaussian process kernel \(K(\mathbf{x}^{(p)},\mathbf{x}^{(q)})\), where \(\mathbf{x}^{(p)}\) is a vector representation of the input sample \(p\). The kernel is computed as
\[K(\mathbf{x}^{(p)},\mathbf{x}^{(q)})=\int d\mathbf{w}_{i}^{1}P(\mathbf{w}_{i}^ {1})\times\left[\mathbf{w}_{i}^{1}\cdot\mathbf{x}^{(p)}-b^{(p)}\right]_{+} \left[\mathbf{w}_{i}^{1}\cdot\mathbf{x}^{(q)}-b^{(q)}\right]_{+} \tag{3}\]
where \(\mathbf{w}_{i}^{1}\) is a vector whose \(k^{th}\) element is \(w_{ki}^{1}\), a weight between the input and the hidden layers.
The integration (Eqn. 3) can be reduced to a one-dimensional integration which can be efficiently computed using simple numerical integration. The resulting formula for the sparse NNGP kernel is
\[K(\mathbf{x}^{(p)},\mathbf{x}^{(q)})=\frac{\sigma^{2}}{2\pi}\|\mathbf{x}^{(p) }\|\|\mathbf{x}^{(q)}\|\left(2I\left(\theta\mid\tau\right)-\tau\sqrt{2\pi}(1+ \cos\theta)\right) \tag{4}\]
\[\theta=\arccos\frac{\mathbf{x}^{(p)}\cdot\mathbf{x}^{(q)}}{\|\mathbf{x}^{(p)}\|\| \mathbf{x}^{(q)}\|} \tag{5}\]
\[I(\theta\mid\tau)=\int_{0}^{\frac{\pi-\theta}{2}}\exp\left(-\frac{ \tau^{2}}{2\sin^{2}(\phi_{0})}\right)2\sin\left(\phi_{0}+\theta\right)\sin(\phi _{0})\\ +\tau\left(\sin\left(\phi_{0}+\theta\right)+\sin(\phi_{0}) \right)\sqrt{\frac{\pi}{2}}\text{erf}\bigg{(}\frac{\tau}{\sqrt{2}\sin(\phi_{0} )}\bigg{)}d\phi_{0} \tag{6}\]
Note that \(\tau\) is the variable that controls sparsity as defined earlier. As \(\tau\to 0\), the kernel is equivalent to the arccosine kernel of degree 1 and zero bias derived by Cho and Saul (2009) (see Supp. D for the proof). See Supp. B for the full derivation of the sparse Kernel.
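The kernel of Eqs. (4)-(6) can be evaluated with simple one-dimensional numerical integration, e.g.:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf, erfinv

def I_sparse(theta, tau):
    # Numerical evaluation of I(theta | tau), Eq. (6).
    def integrand(phi):
        s = max(np.sin(phi), 1e-12)   # guard against phi -> 0
        return (np.exp(-tau**2 / (2.0 * s**2)) * 2.0 * np.sin(phi + theta) * s
                + tau * (np.sin(phi + theta) + s) * np.sqrt(np.pi / 2.0)
                * erf(tau / (np.sqrt(2.0) * s)))
    return quad(integrand, 0.0, (np.pi - theta) / 2.0)[0]

def sparse_kernel(xp, xq, f, sigma=1.0):
    # Single-hidden-layer sparse NNGP kernel, Eqs. (4)-(5).
    tau = np.sqrt(2.0) * erfinv(1.0 - 2.0 * f)
    norm_p, norm_q = np.linalg.norm(xp), np.linalg.norm(xq)
    theta = np.arccos(np.clip(xp @ xq / (norm_p * norm_q), -1.0, 1.0))
    return (sigma**2 / (2.0 * np.pi) * norm_p * norm_q
            * (2.0 * I_sparse(theta, tau)
               - tau * np.sqrt(2.0 * np.pi) * (1.0 + np.cos(theta))))
```

Setting \(f=0.5\) gives \(\tau=0\), and the expression reduces to the degree-1 arccosine kernel, which provides a convenient sanity check for the implementation.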
### Multi-layered sparse NNGP \(L>1\)
In the multilayered formulation of the sparse NNGP, we take all \(n^{l}\rightarrow\infty\). The recursive formula for the multilayered sparse NNGP kernel is
\[K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})=\frac{\sigma^{2}}{2\pi}\sqrt{K^{l-1} (\mathbf{x}^{(p)},\mathbf{x}^{(p)})K^{l-1}(\mathbf{x}^{(q)},\mathbf{x}^{(q)} )}\times\left(2I\left(\theta^{l}\mid\tau\right)-\tau\sqrt{2\pi}(1+\cos\theta^ {l})\right) \tag{7}\]
\[\theta^{l}=\arccos\frac{K^{l-1}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})}{\sqrt{K^{ l-1}(\mathbf{x}^{(p)},\mathbf{x}^{(p)})K^{l-1}(\mathbf{x}^{(q)},\mathbf{x}^{(q)} )}} \tag{8}\]
See Supp. B for the full derivation. This is almost identical to the formulation of the single-layer kernel in Eqn. 4-5, except the dot product, and hence the length are computed differently. In Eqn. 5, the dot product is between the deterministic inputs is \(\mathbf{x}^{(p)}\cdot\mathbf{x}^{(q)}\) but in Eqn. 8 the dot product of the stochastic representations is computed by \(K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})\). Naturally, it follows that the length of a representation is \(\sqrt{K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(p)})}\). The first hidden layer kernel \(K^{l=1}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})\) is the same as Eqn. 4. Throughout the paper, we assume all layers of a given network have the same sparsity.
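A sketch of the layer-wise composition of Eqs. (7)-(8), reusing `I_sparse` from the sketch above; when `sigma` is not given, the length-preserving \(\sigma^{*}\) of Section 4.1 is used:

```python
import numpy as np
from scipy.special import erfinv

def deep_sparse_kernel(K_pp, K_qq, K_pq, f, L, sigma=None):
    # K_pp, K_qq, K_pq: self- and cross-kernel values at the first hidden
    # layer (Eq. 4); L: total number of hidden layers.
    tau = np.sqrt(2.0) * erfinv(1.0 - 2.0 * f)
    if sigma is None:  # length-preserving sigma* of Section 4.1
        sigma = np.sqrt(np.pi / (I_sparse(0.0, tau) - tau * np.sqrt(2.0 * np.pi)))

    def g(theta):  # (sigma^2/2pi)(2 I(theta|tau) - tau sqrt(2pi)(1 + cos theta))
        return (sigma**2 / (2.0 * np.pi)
                * (2.0 * I_sparse(theta, tau)
                   - tau * np.sqrt(2.0 * np.pi) * (1.0 + np.cos(theta))))

    for _ in range(L - 1):   # compose L - 1 more layers on top of layer 1
        theta = np.arccos(np.clip(K_pq / np.sqrt(K_pp * K_qq), -1.0, 1.0))
        K_pq = np.sqrt(K_pp * K_qq) * g(theta)
        K_pp, K_qq = K_pp * g(0.0), K_qq * g(0.0)
    return K_pq
```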
Figure 2: Infinite-width Bayesian neural network performance over a range of sparsity and depth. (a) Classification accuracy of the models on real-life datasets. The best-performing model of each depth is indicated with a white marker. Purple dots indicate the best-performing kernel across all sparsities and depths. Each row corresponds to a different number of training samples \(P\). (b) Corresponding mean-square error (MSE) of the regressions on the datasets. The model with minimum MSE solution is indicated with a white marker for each depth.
## 3 Experimental results on sparse NNGP
In the infinite-width case, exact Bayesian inference is possible. We directly compute the posterior of the predictive distribution, whose mean (\(\mu\)) is given by the solution to kernel ridge regression: \(\mu=\mathbf{K}_{*}^{L}\left(\mathbf{K}^{L}+\lambda\mathbf{I}\right)^{-1}\mathbf{Y}\), where \(\lambda\) is a ridge parameter. \(\mathbf{K}^{L}\) is the kernel Gram matrix for the representation similarity within the training data at the last hidden layer, and \(\mathbf{K}_{*}^{L}\) is the kernel Gram matrix for the representation similarity between the test and training data. \(\mathbf{Y}\) is the training target matrix, each row of which is a training sample target and each column a feature. We train the sparse NNGP on MNIST, Fashion-MNIST, CIFAR10, and grayscale CIFAR10 datasets with different training set sizes (see experimental details in Supp. F).
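A sketch of this inference step:

```python
import numpy as np

def nngp_predict(K_train, K_test_train, Y, ridge=1e-8):
    # Posterior mean of the NNGP: mu = K_* (K + lambda I)^{-1} Y.
    # Y: (P, n_classes) one-hot targets; predictions follow by argmax.
    P = K_train.shape[0]
    return K_test_train @ np.linalg.solve(K_train + ridge * np.eye(P), Y)
```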
### Sparsity-depth tradeoff
As we sweep over the ranges of sparsity \(f\) and depth \(L\), we see a pattern of the generalization performance over the sparsity vs. depth plane, which we denote the \(fL\)-plane (Figure 2). The patterns of the generalization performance over the \(fL\)-plane are more pronounced in the NNGP solutions (Figure 2) compared to the finite-width solutions (Figure 1). It is consistent throughout different datasets that sparse networks need to be shallow whereas dense networks need to be deep, in order to gain high performance. When sparse networks are too deep, the performance abruptly drops. The result has a narrow confidence interval over the randomized training sets as shown in Supp. G. We also observe a strong preference for sparser models in finite \(\lambda>0\) cases (Supp. E).
### Sparse and shallow networks are comparable to dense and deep networks
The main result of this paper is that the sparse and shallow networks have comparable generalization performances to dense and deep networks. As an example, in the MNIST classification task with \(P=1000\), the best-performing dense \(f=0.5\) model has depth \(L_{d}=5\) with accuracy \(0.9300\pm 0.0028\). However, we can find a sparser \(f=0.13\) model with shallower depth \(L_{s}=1\) with essentially identical performance \(a_{s}=0.9303\pm 0.0025\). At the same depth \(L_{s}=1\), the dense model has an accuracy \(a_{d}=0.9227\pm 0.0015\) lower than \(a_{s}\). The observation of \(L_{s}<L_{d}\) and \(a_{s}>a_{d}\) is highly consistent throughout different datasets and training sample sizes (see Table 1, and Supp. H). In short, it is quantitatively clear that a sparse kernel requires a smaller depth, and hence fewer kernel compositions to reach the performance level observed in the deep dense model.
A greater number of kernel compositions requires more computational time. Therefore the shallow and sparse kernel achieves a performance similar to deep and dense \(f=0.5\) kernel with less computational time. One may argue that the \(f=0.5\) case (arccosine kernel) does not require a numerical integration that is needed for the \(f<0.5\) case, so \(f=0.5\) kernel is computationally cheaper. This is true for a case when the kernels are computed on the fly. However, as done by Cho and Saul (2009) and Lee et al. (2018), it is a common practice to generate the kernels using a pre-computed lookup table that maps \(c^{l-1}\) to \(c^{l}\) (in our case, map \((c^{l-1},f)\) to \(c^{l}\)). This is done because of the large computational cost to compute a kernel when the sample size is large, even for an analytically solvable kernel.
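A sketch of this lookup-table practice, reusing the hypothetical `I_tau` quadrature helper from the earlier sketch; the grid resolution is an arbitrary choice:

```python
# Tabulate the cosine-similarity map c^{l-1} -> c^l on a grid (the update
# that appears as Eqn. 9 below) and replace on-the-fly integration with
# interpolation.
import numpy as np

def build_cosine_table(tau, sigma2, n_grid=2001):
    c_in = np.linspace(-1.0, 1.0, n_grid)
    c_out = np.array([
        sigma2 / (2 * np.pi)
        * (2 * I_tau(np.arccos(c), tau) - tau * np.sqrt(2 * np.pi) * (1 + c))
        for c in c_in])
    return c_in, c_out

# One layer of similarity propagation is then a cheap interpolation:
# c_next = np.interp(c_prev, c_in, c_out)
```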
## 4 Theoretical explanation of the sparsity-depth tradeoff
### Dynamics of the kernel over layers
As a recursive function, the kernel reaches or diverges away from a fixed point as the network gets deeper (Poole et al., 2016; Schoenholz et al., 2017; Lee et al., 2018). Here we use the notation \(q^{l}\) to denote the length of a representation in layer \(l\) and \(c^{l}=\cos\theta^{l}\) to denote the cosine similarity between two representations in layer \(l\).
Since the activation function ReLU is unbounded, \(q^{l}\) either decays to \(0\) or explodes to \(\infty\) as \(l\rightarrow\infty\). However, we can find \(\sigma^{*}\) that maintains \(q^{l}\) at its initial length \(q^{1}\) by setting \(\sigma^{*}=\sqrt{\frac{\pi}{I(0|\tau)-\tau\sqrt{2\pi}}}\) which depends on the sparsity level (see Supp. C for the full derivation). Using \(\sigma^{*}\) is encouraged since it guarantees numerical stability in the computation of the sparse NNGP kernel, although in theory, the kernel regression is invariant to the choice of \(\sigma\).
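A small sketch of this computation, with \(\tau=\sqrt{2}\,\mathrm{erf}^{-1}(1-2f)\) as in Eqn. 5 and the `I_tau` quadrature helper from the earlier sketch:

```python
# Compute sigma* from the sparsity f (Supp. C).
import numpy as np
from scipy.special import erfinv

def sigma_star(f):
    tau = np.sqrt(2.0) * erfinv(1.0 - 2.0 * f)
    return np.sqrt(np.pi / (I_tau(0.0, tau) - tau * np.sqrt(2.0 * np.pi)))

# Sanity check: at f = 0.5 (tau = 0), I(0|0) = pi/2, so sigma*^2 = 2,
# recovering the familiar variance choice for dense ReLU networks.
```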
The dynamics of the \(c^{l}\) is
\[c^{l+1}=\frac{\sigma^{*2}}{2\pi}\left(2I\left(\arccos(c^{l})\mid\tau\right)-\tau \sqrt{2\pi}(1+c^{l})\right) \tag{9}\]
when we use \(\sigma^{*}\). This is obtained by dividing Eqn. 7 by \(q^{l}=\sqrt{K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(p)})K^{l}(\mathbf{x}^{(q)}, \mathbf{x}^{(q)})}\) on both sides of the equation, assuming the same norm for the inputs \(p\) and \(q\).
In Figure 3, we take a Gram matrix whose elements are \(K^{l=0}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})=\mathbf{x}^{(p)}\cdot\mathbf{x}^{(q)}\), spanning two categories of the Fashion-MNIST dataset, and pass it through a cascade of the sparse NNGP. We observe that the non-sparse kernel (\(f=0.5\)) assimilates all inputs (\(c^{l}\to 1\)) as the layers get deeper. This means that the Gram matrix becomes a rank-1 matrix, as shown in Figure 3(a). However, if we omit the first eigenvalue, we see that the effective dimensionality (\(ED\)) of the spectrum slowly increases over the layers, instead of converging to 1. We omit the first eigenvalue since it does not affect the prediction \(\mu\) when the target function is zero-mean and the kernel Gram matrix has a constant function as an eigenfunction, which is commonly encountered in practice (see Supp. L). The effective dimensionality is a participation ratio of the kernel eigenvalues \(\eta_{\rho}\), computed by \(ED=\frac{\left(\sum_{\rho>0}\eta_{\rho}\right)^{2}}{\sum_{\rho>0}\eta_{\rho}^{2}}\), where \(\rho>0\) indicates the omission of the first eigenvalue \(\eta_{0}\). This means that the eigenspectrum slowly flattens, disregarding the first eigenvalue. A sparser kernel (\(f=0.3\)), on the other hand, decorrelates the inputs to \(c^{l}<1\), which makes the Gram matrix become the identity matrix plus a constant non-zero off-diagonal coefficient (Figure 3(b)). This matrix has a flat spectrum, as in the case of the identity matrix, but with an offset in the first eigenvalue which reflects the non-zero off-diagonal values. Similar to the \(f=0.5\) case, the \(f=0.3\) case also flattens disregarding the first eigenvalue, i.e. increases \(ED\), but at a faster rate. We see the flattening happen even faster for an even sparser kernel with \(f=0.1\) (Figure 3(c)). The flattening is clearly faster for sparser kernels in Figure 3(d), which shows \(ED\) over the \(fL\)-plane. It is noteworthy that the pattern of \(ED\) over the \(fL\)-plane resembles that of the generalization performances shown in Figures 1 and 2. The next section provides a theoretical explanation that relates the shape of the kernel eigenspectrum to the generalization error.
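The effective dimensionality used here is straightforward to compute from a Gram matrix; a minimal sketch:

```python
# Participation ratio of the Gram-matrix eigenvalues with the first
# (largest) eigenvalue omitted, as used in Figure 3.
import numpy as np

def effective_dimensionality(G):
    eta = np.sort(np.linalg.eigvalsh(G))[::-1][1:]  # drop the first eigenvalue
    return eta.sum() ** 2 / (eta ** 2).sum()
```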
### Generalization theory
Here we provide a theoretical explanation of the generalization performance of the sparse NNGP kernel. We start by reviewing the theory on the in-distribution generalization error of kernel ridge regression presented by Canatar et al. (2021b).
#### 4.2.1 Background: Theory on the generalization error of kernel regression
Assume that the target function \(\bar{f}:\mathbb{R}^{n_{0}}\rightarrow\mathbb{R}\) exists in a reproducing kernel Hilbert space (RKHS) given by the kernel of interest.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Dataset: \(P\) & \multicolumn{2}{c}{Dense - best acc.} & Sparse - equiv. acc. & Dense - same \(L\) \\ \hline MNIST: 1000 & Accuracy & 0.9300\(\pm\)0.0028 & 0.9303\(\pm\)0.0025 & 0.9227\(\pm\)0.0015 \\ \cline{2-5} & \(L\) & 5 & 1 & 1 \\ \cline{2-5} & \(f\) & 0.5 & 0.139 & 0.5 \\ \hline MNIST: 10000 & Accuracy & 0.9748\(\pm\)0.0014 & 0.9749\(\pm\)0.0013 & 0.9725\(\pm\)0.0013 \\ \cline{2-5} & \(L\) & 3 & 1 & 1 \\ \cline{2-5} & \(f\) & 0.5 & 0.087 & 0.5 \\ \hline CIFAR10: 1000 & Accuracy & 0.3810\(\pm\)0.0033 & 0.3814\(\pm\)0.0046 & 0.3160\(\pm\)0.0060 \\ \cline{2-5} & \(L\) & 18 & 2 & 2 \\ \cline{2-5} & \(f\) & 0.5 & 0.01 & 0.5 \\ \hline CIFAR10: 10000 & Accuracy & 0.5016\(\pm\)0.0055 & 0.5017\(\pm\)0.0057 & 0.4621\(\pm\)0.0035 \\ \cline{2-5} & \(L\) & 18 & 3 & 3 \\ \cline{2-5} & \(f\) & 0.5 & 0.087 & 0.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performances of the sparse and dense models. ”Dense - best acc.”: the best-performing dense model. ”Sparse - equiv. acc.”: a sparse model with a performance comparable to ”Dense - best acc.”. ”Dense - same L”: a dense model with the same depth as ”Sparse - equiv. acc.”. The mean and standard deviation over 10 trials with randomly sampled training sets are shown.
The target function can be expressed in terms of the coordinates (\(\bar{v}_{\rho}\)) on the basis of the RKHS given by the Mercer decomposition \(\int d\mathbf{x}^{\prime}p(\mathbf{x}^{\prime})K(\mathbf{x},\mathbf{x}^{\prime}) \phi_{\rho}(\mathbf{x}^{\prime})=\eta_{\rho}\phi_{\rho}(\mathbf{x})\)
\[\bar{f}(\mathbf{x})=\sum_{\rho=0}^{N-1}\bar{v}_{\rho}\phi_{\rho}(\mathbf{x}), \quad\rho=0,\ldots,N-1. \tag{10}\]
where \(p(\mathbf{x}^{\prime})\) is the input data distribution in \(\mathbb{R}^{n_{0}}\), and \(\phi_{\rho}\) and \(\eta_{\rho}\) are the \(\rho^{th}\) eigenfunction and eigenvalue, respectively. We assume \(N\) is infinite, which is required for the theory. In the limit of a large training sample size and large \(N\), the generalization error \(E_{g}\) is expressed as a sum of modal errors \(E_{\rho}\) weighted by the target powers \(\bar{v}_{\rho}^{2}\).
\[E_{g}=\sum_{\rho}\bar{v}_{\rho}^{2}E_{\rho} \tag{11}\]
\[E_{\rho}=\frac{1}{1-\gamma}\frac{\kappa^{2}}{\left(\kappa+P\eta_{\rho} \right)^{2}}\qquad\gamma=\sum_{\rho}\frac{P\eta_{\rho}^{2}}{\left(\kappa+P \eta_{\rho}\right)^{2}}\qquad\kappa=\lambda+\sum_{\rho}\frac{\kappa\eta_{ \rho}}{\kappa+P\eta_{\rho}} \tag{12}\]
where \(P\) is the number of training samples and \(\lambda\) is the ridge parameter. Note that \(E_{\rho}\) is independent of the target function but dependent on the input distribution and kernel, which are summarized in \(\eta_{\rho}\). In Eqn. (11), the \(E_{\rho}\)'s are weighted by the \(\bar{v}_{\rho}^{2}\)'s, which are dependent on the target function, input distribution, and kernel. Therefore in general, except for the special cases we discuss in this paper, we need to track the change in both \(E_{\rho}\) and \(\bar{v}_{\rho}^{2}\) when tracking the change of \(E_{g}\) between kernels. Note that each \(E_{\rho}\) depends on all eigenvalues \(\eta_{\rho^{\prime}}\), whether \(\rho^{\prime}=\rho\) or \(\rho^{\prime}\neq\rho\), through the \(\kappa\) and \(\gamma\) terms. \(\kappa\) is defined by a self-consistent equation that can be solved with a numerical root-finding algorithm. See Supp. I for the details of the implementation.
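The pipeline implied by Eqns. 11-12 is short; below is a minimal sketch, assuming a spectrum `eta` and target powers `v2` are given, with \(\kappa\) obtained by a bracketing root finder (the bracket endpoints are arbitrary choices):

```python
# Solve the self-consistent equation for kappa (Eqn. 12) and assemble the
# modal errors E_rho and E_g (Eqn. 11).
import numpy as np
from scipy.optimize import brentq

def solve_kappa(eta, P, lam=0.0):
    """Root of g(k) = k - lam - sum_rho k*eta_rho/(k + P*eta_rho);
    assumes lam > 0 or alpha = P/N < 1 (cf. Supp. I)."""
    g = lambda k: k - lam - np.sum(k * eta / (k + P * eta))
    return brentq(g, 1e-12, 1e12)

def generalization_error(eta, v2, P, lam=0.0):
    kappa = solve_kappa(eta, P, lam)
    gamma = np.sum(P * eta**2 / (kappa + P * eta) ** 2)
    E_rho = kappa**2 / ((1 - gamma) * (kappa + P * eta) ** 2)
    return float(np.sum(v2 * E_rho)), E_rho
```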
#### 4.2.2 Generalization theory applied to sparse NNGP
The theoretical formulation of the generalization error accurately predicts the experimentally observed generalization errors (see Figure 4a,b and Supp. M for the real-life datasets). As an illustrating example, we use synthetic circulant data with a step target function. A Gram matrix generated from a kernel and sampled input data is a circulant matrix if the input data is evenly distributed around a circle, regardless of the choice of the kernel. The set of eigenfunctions of a circulant matrix always consists of the harmonics of the sine and cosine, so the eigenfunctions are invariant to the choice of kernel. Therefore we need to examine the task-model alignment in terms of the eigenspectrum in order to understand the generalization performance. Our target function is a zero-meaned square wave function with even step lengths.
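A sketch of such a circulant setup; the number of points, the half-step angular offset (to avoid exact zeros of the square wave), and the number of steps are illustrative choices:

```python
# Inputs evenly spaced on the unit circle give a circulant Gram matrix for
# any kernel of the dot product; the linear kernel suffices to illustrate
# the structure.
import numpy as np

M = 1000
angles = 2 * np.pi * (np.arange(M) + 0.5) / M
X = np.stack([np.cos(angles), np.sin(angles)], axis=1)

G = X @ X.T
# Circulant check: G[i, j] depends only on (i - j) mod M.
assert np.allclose(G, np.roll(np.roll(G, 1, axis=0), 1, axis=1))

# Zero-mean square-wave target with eight equal steps.
y = np.sign(np.sin(4 * angles))
```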
Figure 3: Evolution of \(\mathbb{R}^{50\times 50}\) kernel Gram matrix of Fashion-MNIST data spanning two classes, i.e. shirts and ankle boots. The target function is zero-mean, since the class labels are 1 and -1. (a) \(f=0.5\) case. Top: Gram matrices colored by the normalized kernel values (i.e. cosine similarities between representations \(c^{l}\)), at the input layer, \(5^{th}\) layer, and \(10^{th}\) layer. Bottom: the eigenspectrums of the corresponding Gram matrices normalized by the second largest eigenvalue. The effective dimensionality (ED) of a spectrum is indicated in each plot; (b) \(f=0.3\) case; (c) \(f=0.1\) case; (d) ED values over sparsity and depth.
As in the case of the real-life datasets, the circulant dataset also creates a similar generalization performance pattern over the \(fL\)-plane. We first inspect the spectrums of the kernels in the poor generalization performance regime (red region in Figure 4a,b), mainly occupied by the deep sparse networks. It turns out that the eigenspectrums are flat and identical in that regime, disregarding the first eigenvalues (Figure 4d). The first eigenvalue can vary widely in that regime, but this eigenvalue can be disregarded, since it does not affect the generalization performance (see Supp. L). The reason we see the flat spectrums at shallower depths for the sparser networks is that for the sparse kernels, the cosine similarity \(c^{l}\) converges quickly to some fixed point value below 1 as the depth increases. This means the resulting Gram matrix becomes an identity matrix offset by a value determined by the fixed point, which has a flat spectrum if we disregard the first eigenvalue that encodes the offset (Figure 3).
The kernels outside the poor performance regime have non-flat spectrums, but that does not mean the least flat spectrum performs the best. For a given depth, the best-performing kernel usually exists below \(f=0.5\), which has neither the flattest nor the least flat spectrum in that given depth (Figure 4c,d). We investigate the modal errors \(E_{\rho}\)'s of each kernel for more insight.
Note that \(E_{g}\) is a dot product between the modal errors and the target function powers (Eqn. 11). Also, as noted above, the target function powers do not change for the circulant dataset. Therefore, having small \(E_{\rho}\)'s for the modes that correspond to high target power \(\bar{v}_{\rho}^{2}\) would greatly contribute to lowering \(E_{g}\). On the other hand, having large \(E_{\rho}\)'s for the modes that correspond to high \(\bar{v}_{\rho}^{2}\) would greatly contribute to increasing \(E_{g}\). In practice, we also observe the increase in \(E_{g}\) due to the increase in \(E_{\rho}\)'s that corresponds to low \(\bar{v}_{\rho}^{2}\) due to a large number of such modes (see Supp. M for the analysis on the real-dataset).
We observe that a spectrum with a steep drop over \(\rho\) leads to modal errors with a steep increase over \(\rho\) that has low \(E_{\rho}\)'s at lower \(\rho\)'s and high \(E_{\rho}\)'s at higher \(\rho\)'s (Figure 4d,e). This relationship between the shape of the eigenspectrum and the modal error spectrum is theoretically supported in the following section. Therefore, compared to the flat eigenspectrum, a moderately steep eigenspectrum results in an optimal modal error spectrum that results in the minimum \(E_{g}\), assuming that the target power spectrum is concentrated around the low \(\rho\)'s (Figure 4e,f). However, for the same target power spectrum, if the eigenspectrum is too steep, the modal error increases too fast over \(\rho\), contributing to increases in \(E_{g}\) (Green dots in Figure 4e).
This explanation is based on the circulant dataset with strictly invariant eigenfunctions and therefore invariant \(\bar{v}_{\rho}^{2}\)'s. However, we show that the same explanation can be applied to the real-life datasets, i.e. MNIST, Fashion-MNIST, CIFAR10, and CIFAR10-Gray, since the eigenfunctions for these datasets do not vary significantly over the variations in depths and sparsity of the NNGP (see Supp. M).
Figure 4: Theoretical analysis of the generalization error over a circulant dataset of \(P=1000\). (a) The experimental generalization errors over the circulant dataset, as a function of sparsity (\(f\)) and depth (\(L\)). (b) Theoretical predictions of the generalization errors. (c) The generalization error (experimental: blue dotted line; theoretical prediction: black solid line) of the sparse kernels with depth \(L=11\). The kernels with the highest, lowest, and intermediate generalization errors are indicated with red, blue, and green stars, respectively. (d) The eigenspectrums (normalized by the second eigenvalues) of the kernels corresponding to the three cases marked in (c); the first 500 eigenvalues are shown. (e) The modal errors \(E_{\rho}\) corresponding to the three cases marked in (c). (f) The target function power \(\bar{v}_{\rho}^{2}\) spectrum; the effective dimensionality (ED) of the target function power spectrum is 1.5, as indicated in the figure.
#### 4.2.3 Relationship between the eigenspectrum shape and modal errors
We expand on the generalization theory (Eqns. 11-12) to elucidate how a change in the eigenspectrum affects the modal errors. To this end, we compute how the \(E_{\rho}\)'s change when the spectrum is perturbed from a flat spectrum, assuming a noise-free target function and \(\lambda=0\). The first-order perturbation of \(E_{\rho}\) from the flat spectrum is given by
\[\nabla E_{\rho}=-2\left(1-\alpha\right)\alpha\frac{1}{\eta}\left(\nabla\eta_{ \rho}-\left\langle\nabla\eta_{\rho}\right\rangle\right) \tag{13}\]
where \(N\) is the number of non-zero eigenvalues and \(\alpha=\frac{P}{N}\). We assume \(P\rightarrow\infty\) and \(N\rightarrow\infty\), but \(\alpha=\mathcal{O}(1)\). \(\eta\) is the eigenvalue of the flat spectrum, \(\nabla\eta_{\rho}\) the perturbation in the eigenvalue, and \(\left\langle\nabla\eta_{\rho}\right\rangle=\frac{1}{N}\sum_{\rho}\nabla\eta_{\rho}\) is a mean value of the perturbations. The derivation of Eqn. 13 is presented in the Supp. J, K.
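A numeric sketch of this result, checked against the full theory via the `generalization_error` helper from the sketch in Section 4.2.1; the sizes and perturbation scale are illustrative:

```python
# Perturb a flat spectrum and compare the first-order prediction (Eqn. 13)
# for the modal errors against the full theory.
import numpy as np

N, P = 4000, 1000
alpha, eta = P / N, 1.0 / N
d_eta = np.sort(np.random.default_rng(0).normal(
    scale=1e-3 * eta, size=N))[::-1]            # small, ordered perturbation

dE_pred = -2 * (1 - alpha) * alpha / eta * (d_eta - d_eta.mean())

_, E_flat = generalization_error(np.full(N, eta), np.ones(N), P)
_, E_pert = generalization_error(np.full(N, eta) + d_eta, np.ones(N), P)
# (E_pert - E_flat) tracks dE_pred to first order; the sign-flip of the
# zero-meaned perturbation is visible directly in dE_pred.
```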
The intuition provided by Eqn. 13 is that the modal error perturbation \(\nabla E_{\rho}\) is a sign-flipped version of the zero-meaned \(\nabla\eta_{\rho}\). Therefore as the spectrum becomes less flat, the modal errors \(E_{\rho}\)'s decrease for \(\rho\)'s corresponding to larger eigenvalues, but \(E_{\rho}\)'s increase for \(\rho\)'s corresponding to smaller eigenvalues. Therefore, if the \(\bar{v}_{\rho}\)'s are band-limited to \(\rho\)'s corresponding to larger eigenvalues, then the generalization error \(E_{g}\) typically decreases as the spectrum perturbs away from the flat spectrum. On the other hand, if the \(\bar{v}_{\rho}\)'s are band-limited to \(\rho\)'s corresponding to smaller eigenvalues, the generalization error typically increases. In practice, \(\bar{v}_{\rho}\)'s are skewed, yet spread out over the entire spectrum (See Supp. M). Therefore there is a trade-off between the decrease in \(E_{\rho}\)'s at low \(\rho\)'s and the increase in \(E_{\rho}\)'s at high \(\rho\)'s. This results in requiring a moderately steep eigenspectrum, which is often quickly (over layers) achieved by sparse NNGP kernel at shallow depth (Figure 3, 4).
For input distributions where the eigenfunctions of the Gram matrix are similar between two compared kernels, the target function coefficients also exhibit similarity. In such cases, the conventional definition of task-model alignment based on eigenfunctions fails to capture the difference in \(E_{g}\) effectively (see Supp. N). However, our approach, which considers the shape of the eigenspectrum relative to the target function coefficients, successfully captures this difference. This finding complements the spectral bias result presented by Canatar et al. (2021b). The results on spectral bias by Canatar et al. (2021b) show that the \(E_{\rho}\)'s are in ascending order, in contrast to the descending order of the \(\eta_{\rho}\)'s, and that the \(E_{\rho}\) corresponding to a greater \(\eta_{\rho}\) decays at a faster rate as \(P\) increases. However, we need more than the fact that \(E_{\rho}\) monotonically increases over \(\rho\) to compare \(E_{\rho}\) spectra between kernels, since that fact alone does not offer an absolute reference for the comparison. Our analysis offers a way to compare spectral biases by computing the explicit first-order perturbation in the \(E_{\rho}\)'s, making the comparison between kernels theoretically more tractable and interpretable.
## 5 Discussion
We have demonstrated that random sparse representations in a wide neural network enhance generalization performance in the NNGP limit at shallow depths. Our sparse NNGP achieves performance comparable to the deep and dense NNGP (i.e., the arccosine kernel) with fewer kernel compositions. The performance of the arccosine kernel is known to be comparable to finite neural networks trained with stochastic gradient descent in certain contexts (Lee et al., 2018, 2020).
Our analysis reveals that the kernel Gram matrices of sparse and shallow (and dense and deep) networks have an eigenspectrum that yields a modal error spectrum well-aligned with typical target functions. As demonstrated in the main section, our extended theory on the generalization of kernel regression facilitates the comparison of generalization performance between any two kernels.
Using a sparse and shallow architecture, Babadi and Sompolinsky (2014) showed that sparsity in the cerebellum improves performance in classification tasks. Our results indicate that sparsity also improves regression performance, which could benefit computation in the cortex and cerebellum.
Further investigations are needed to explore sparsity's impact in networks with learned representations. Developing a finite-width correction for our sparse NNGP kernel would enable examining sparsity effects within trained representations. Additionally, the implications of sparsity in the trainable intermediate infinite-width layers of the Neural Tangent Kernel should be considered. Furthermore, exploring sparsity's influence in other neural network architectures, including convolutional neural networks, is a promising direction for future research.
## References
* Willshaw and Dayan (1990) David Willshaw and Peter Dayan. Optimal plasticity from matrix memories: What goes up must come down. _Neural computation_, 2(1):85-93, 1990.
* Babadi and Sompolinsky (2014) Baktash Babadi and Haim Sompolinsky. Sparseness and expansion in sensory representations. _Neuron_, 83(5):1213-1226, 2014.
* LeCun and Cortes (2010) Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010. URL [http://yann.lecun.com/exdb/mnist/](http://yann.lecun.com/exdb/mnist/).
* Xiao et al. (2017) Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. _arXiv preprint arXiv:1708.07747_, 2017.
* Krizhevsky et al. (2009) Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
* Matthews et al. (2017) Alexander G de G Matthews, Jiri Hron, Richard E Turner, and Zoubin Ghahramani. Sample-then-optimize posterior sampling for bayesian linear models. In _NeurIPS Workshop on Advances in Approximate Bayesian Inference_, 2017.
* Hron et al. (2020) Jiri Hron, Yasaman Bahri, Roman Novak, Jeffrey Pennington, and Jascha Sohl-Dickstein. Exact posterior distributions of wide bayesian neural networks. _arXiv preprint arXiv:2006.10541_, 2020.
* Hron et al. (2022) Jiri Hron, Roman Novak, Jeffrey Pennington, and Jascha Sohl-Dickstein. Wide bayesian neural networks have a simple weight posterior: theory and accelerated sampling. In _International Conference on Machine Learning_, pages 8926-8945. PMLR, 2022.
* Williams (1996) Christopher Williams. Computing with infinite networks. _Advances in neural information processing systems_, 9, 1996.
* Lee et al. (2018) Jaehoon Lee, Yasaman Bahri, Roman Novak, Sam Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep neural networks as gaussian processes. In _International Conference on Learning Representations_, 2018. URL [https://openreview.net/pdf?id=B1EA-M-02](https://openreview.net/pdf?id=B1EA-M-02).
* Cho and Saul (2009) Youngmin Cho and Lawrence Saul. Kernel methods for deep learning. _Advances in neural information processing systems_, 22, 2009.
* Cho and Saul (2011) Youngmin Cho and Lawrence K Saul. Analysis and extension of arc-cosine kernels for large margin classification. _arXiv preprint arXiv:1112.3712_, 2011.
* Canatar et al. (2021a) Abdulkadir Canatar, Blake Bordelon, and Cengiz Pehlevan. Out-of-distribution generalization in kernel regression. _Advances in Neural Information Processing Systems_, 34:12600-12612, 2021a.
* Canatar et al. (2021b) Abdulkadir Canatar, Blake Bordelon, and Cengiz Pehlevan. Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks. _Nature communications_, 12(1):1-12, 2021b.
* Spigler et al. (2020) Stefano Spigler, Mario Geiger, and Matthieu Wyart. Asymptotic learning curves of kernel methods: empirical data versus teacher-student paradigm. _Journal of Statistical Mechanics: Theory and Experiment_, 2020(12):124001, 2020.
* Bordelon et al. (2020) Blake Bordelon, Abdulkadir Canatar, and Cengiz Pehlevan. Spectrum dependent learning curves in kernel regression and wide neural networks. In _International Conference on Machine Learning_, pages 1024-1034. PMLR, 2020.
* Bietti and Bach (2020) Alberto Bietti and Francis Bach. Deep equals shallow for relu networks in kernel regimes. _arXiv preprint arXiv:2009.14397_, 2020.
* Scetbon and Harchaoui (2021) Meyer Scetbon and Zaid Harchaoui. A spectral analysis of dot-product kernels. In _International conference on artificial intelligence and statistics_, pages 3394-3402. PMLR, 2021.
* Vakili et al. (2021) Sattar Vakili, Michael Bromberg, Jezabel Garcia, Da-shan Shiu, and Alberto Bernacchia. Uniform generalization bounds for overparameterized neural networks. _arXiv preprint arXiv:2109.06099_, 2021.
* Neal [1996] Radford M Neal. Priors for infinite networks. _Bayesian learning for neural networks_, pages 29-53, 1996.
* Poole et al. [2016] Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. _Advances in neural information processing systems_, 29, 2016.
* Schoenholz et al. (2017) Samuel S. Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. In _International Conference on Learning Representations_, 2017. URL [https://openreview.net/pdf?id=H1W1UN9gg](https://openreview.net/pdf?id=H1W1UN9gg).
* Lee et al. [2020] Jaehoon Lee, Samuel Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, and Jascha Sohl-Dickstein. Finite versus infinite neural networks: an empirical study. _Advances in Neural Information Processing Systems_, 33:15156-15172, 2020.
## Appendix A What Does Sparsity Mean Conceptually in the Infinitely Wide Neural Network?
An intuitive definition of an NNGP kernel is: the dot product of \(n\)-dimensional neuronal representations of inputs \(x^{(1)}\) and \(x^{(2)}\), averaged over all possible realizations of a random neural network. For each realization, we take the dot product between the neural representations of the two inputs.
For the sparse NNGP, one might mistakenly speculate that the same subset of neurons is active for all inputs (in each realization of the network). In fact, for inputs \(x^{(1)}\) and \(x^{(2)}\), different sets of neurons are non-zero, since we only constrain the fraction of active neurons, not which neurons are active.

If we constrain which subset of neurons is always active (with the rest inactive), then \(n=100\) with \(f=0.3\) is equivalent to a narrow network of \(n=30\) with \(f=1\), which is essentially a linear network. In this pathological scenario, the choice of \(f\) would not matter in the \(n\rightarrow\infty\) limit. Generally, however, a ReLU neural network activates different sets of neurons for different inputs, which gives it an interesting nonlinear property.
## Appendix B Sparse NNGP Kernel Derivation
The similarity of two neural representations \(p\) and \(q\) is \(K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})\), which is evaluated with the following integration.
\[K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})=\int d\mathbf{w}_{j}P(\mathbf{w}_{j} )\left[\mathbf{w}_{j}\cdot\mathbf{x}^{(p),l-1}-b\right]_{+}\left[\mathbf{w}_{ j}\cdot\mathbf{x}^{(q),l-1}-b\right]_{+} \tag{14}\]
As \(N\rightarrow\infty\), we can invoke the central limit theorem and express the integration in terms of a normal random variable \(h_{j}^{(p)}=\mathbf{w}_{j}\cdot\mathbf{x}^{(p),l-1}\). The covariance of the preactivations between two stimuli (\(\text{Cov}[\mathbf{w}\cdot x^{(p),l-1},\mathbf{w}\cdot x^{(q),l-1}]\)) is denoted \(\sigma_{pq}^{2}\), and the variance \(\sigma_{p}^{2}\).
The above integration becomes
\[K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})=\frac{1}{Z}\int d\mathbf{a}\exp\left( -\frac{1}{2}\begin{bmatrix}a^{(p)}&a^{(q)}\end{bmatrix}\Sigma^{-1}\begin{bmatrix} a^{(p)}&a^{(q)}\end{bmatrix}^{T}\right)\left[a^{(p)}-b\right]_{+}\left[a^{(q)}-b \right]_{+} \tag{15}\]
\[\Sigma=\begin{bmatrix}\sigma_{p}^{2}&\sigma_{pq}^{2}\\ \sigma_{pq}^{2}&\sigma_{q}^{2}\end{bmatrix} \tag{16}\]
The covariance makes it difficult to solve the integration, so we change the basis of \(\mathbf{a}=\begin{bmatrix}a^{(p)}&a^{(q)}\end{bmatrix}^{\top}\) such that we can instead integrate over a pair of independent normal random variables \(z_{1}\) and \(z_{2}\). Solve this system of equations.
\[a^{(p)}=\mathbf{s}_{p}\cdot\mathbf{z} \tag{17}\]
\[a^{(q)}=\mathbf{s}_{q}\cdot\mathbf{z} \tag{18}\]
The vectors \(\mathbf{s}_{p}\) and \(\mathbf{s}_{q}\) are expressed in terms of the variances and covariance.
\[\mathbf{s}_{p}=\begin{bmatrix}\frac{\sqrt{\sigma_{p}^{2}\sigma_{q}^{2}-\sigma_ {pq}^{4}}}{\sigma_{q}}&\frac{\sigma_{pq}^{2}}{\sigma_{q}}\end{bmatrix} \tag{19}\]
\[\mathbf{s}_{q}=\begin{bmatrix}0&\sigma_{q}\end{bmatrix} \tag{20}\]
The norms of these vectors are \(\left\|\mathbf{s}_{p}\right\|=\sigma_{p}\) and \(\left\|\mathbf{s}_{q}\right\|=\sigma_{q}\).
\[K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})=\frac{1}{2\pi}\int d\mathbf{z}\exp \left(-\frac{1}{2}\mathbf{z}^{T}\mathbf{z}\right)\left[\mathbf{s}_{p}\cdot \mathbf{z}-b\right]_{+}\left[\mathbf{s}_{q}\cdot\mathbf{z}-b\right]_{+} \tag{21}\]
From the previous section, we know \(b_{p}=\sigma_{p}\tau\). As noted above, \(\|\mathbf{s}_{p}\|=\sigma_{p}\), so \(b_{p}=\|\mathbf{s}_{p}\|\tau\).
Now perform the Gaussian integral. Unfortunately, there is no closed-form solution. However, inspired by the derivation of the sparse step-function NNGP presented in Cho and Saul (2011), we can express this as a 1D integral which is significantly simpler to calculate numerically. Start by adopting a new coordinate system with basis \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\), where \(\mathbf{e}_{1}\) is aligned with \(\mathbf{s}_{p}\). In that coordinate system \(\mathbf{s}_{p}=[\|\mathbf{s}_{p}\|\quad 0]\), and therefore \(\mathbf{s}_{p}\cdot\mathbf{z}=z_{1}\|\mathbf{s}_{p}\|\) and \(\mathbf{s}_{q}\cdot\mathbf{z}=z_{1}\|\mathbf{s}_{q}\|\cos\theta+z_{2}\|\mathbf{s}_{q}\|\sin\theta\). With these substitutions, Eqn. 21 becomes the following.
\[K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})= \frac{1}{2\pi}\int\int dz_{1}dz_{2}\exp\left(-\frac{1}{2}\left(z_{1}^{2}+z_{2}^{2}\right)\right)\left[z_{1}\|\mathbf{s}_{p}\|-b\right]_{+}\left[z_{1}\|\mathbf{s}_{q}\|\cos\theta+z_{2}\|\mathbf{s}_{q}\|\sin\theta-b\right]_{+} \tag{22}\] \[= \frac{1}{2\pi}\int\int dz_{1}dz_{2}\exp\left(-\frac{1}{2}\left(z_{1}^{2}+z_{2}^{2}\right)\right)\left[z_{1}\|\mathbf{s}_{p}\|-\|\mathbf{s}_{p}\|\tau\right]_{+}\left[z_{1}\|\mathbf{s}_{q}\|\cos\theta+z_{2}\|\mathbf{s}_{q}\|\sin\theta-\|\mathbf{s}_{q}\|\tau\right]_{+}\] (23) \[= \frac{\|\mathbf{s}_{p}\|\|\mathbf{s}_{q}\|}{2\pi}\int\int dz_{1}dz_{2}\exp\left(-\frac{1}{2}\left(z_{1}^{2}+z_{2}^{2}\right)\right)\left[z_{1}-\tau\right]_{+}\left[z_{1}\cos\theta+z_{2}\sin\theta-\tau\right]_{+} \tag{24}\]
where
\[\frac{\|\mathbf{s}_{p}\|\|\mathbf{s}_{q}\|}{2\pi}=\frac{1}{2\pi}\sigma_{p} \sigma_{q} \tag{25}\]
Adopt the polar coordinate system. \(z_{1}^{2}+z_{2}^{2}=r^{2}\), \(r\cos\phi=z_{1}\), \(r\sin\phi=z_{2}\), \(rdrd\phi=dz_{1}dz_{2}\).
\[K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})= \frac{1}{2\pi}\sigma_{p}\sigma_{q}\int_{-\pi}^{\pi}d\phi\int_{0}^{ \infty}dr\exp\left(-\frac{r^{2}}{2}\right)r\left[r\cos\phi-\tau\right]_{+}\left[ r\cos\phi\cos\theta+r\sin\phi\sin\theta-\tau\right]_{+} \tag{26}\] \[= \frac{1}{2\pi}\sigma_{p}\sigma_{q}\int_{-\pi}^{\pi}d\phi\int_{0}^ {\infty}dr\exp\left(-\frac{r^{2}}{2}\right)r\left[r\cos\phi-\tau\right]_{+} \left[r\cos(\phi-\theta)-\tau\right]_{+} \tag{27}\]
Solve the integration of \(\int dr\exp\left(-\frac{r^{2}}{2}\right)r\left[r\cos\phi-\tau\right]_{+}\left[ r\cos(\phi-\theta)-\tau\right]_{+}\) within the range of \(r\) where the integrand is non-zero. Since we have not yet found that range, here we just perform an indefinite integral. We will find the range and apply it later.
\[\int dr\exp\left(-\frac{r^{2}}{2}\right)r(r\cos\phi-\tau)(r\cos(\phi-\theta)-\tau) \tag{28}\] \[= -\frac{\sqrt{2\pi}}{2}\tau(\cos\phi+\cos(\phi-\theta))\text{erf}\left(\frac{r}{\sqrt{2}}\right)-e^{-\frac{r^{2}}{2}}\left(\cos\phi\cos(\phi-\theta)\left(r^{2}+2\right)-\cos\phi r\tau+\tau(\tau-\cos(\phi-\theta)r)\right)\] (29) \[= -\frac{\sqrt{2\pi}}{2}\tau(\cos\phi+\cos(\phi-\theta))\text{erf}\left(\frac{r}{\sqrt{2}}\right)\] (30) \[\quad-e^{-\frac{r^{2}}{2}}\left(\cos\phi\cos(\phi-\theta)r^{2}+2\cos\phi\cos(\phi-\theta)-\tau\cos\phi r+\tau^{2}-\tau\cos(\phi-\theta)r\right)\] (31) \[= -\frac{\sqrt{2\pi}}{2}\tau(\cos\phi+\cos(\phi-\theta))\text{erf}\left(\frac{r}{\sqrt{2}}\right)-e^{-\frac{r^{2}}{2}}\left((\cos\phi r-\tau)(\cos(\phi-\theta)r-\tau)+2\cos\phi\cos(\phi-\theta)\right)\] (32) \[= -e^{-\frac{r^{2}}{2}}\left((r\cos\phi-\tau)\left(r\cos(\theta-\phi)-\tau\right)+2\cos\phi\cos(\theta-\phi)\right)-\tau\left(\cos\phi+\cos(\theta-\phi)\right)\sqrt{\frac{\pi}{2}}\text{erf}\left(\frac{r}{\sqrt{2}}\right) \tag{33}\]
The feasible range is where \(r\cos\phi-\tau>0\) and \(r\cos(\phi-\theta)-\tau>0\). Assume \(\tau>0\). This requires at least \(\cos\phi>0\) and \(\cos(\phi-\theta)>0\) in all cases (a necessary but not sufficient condition).
\[r>\frac{\tau}{\cos\phi} \tag{34}\]
\[r>\frac{\tau}{\cos(\phi-\theta)} \tag{35}\]
Therefore
\[r>\max\left(\frac{\tau}{\cos\phi},\frac{\tau}{\cos(\phi-\theta)}\right) \tag{36}\]
Find at what \(\phi\) the inequality \(\frac{\tau}{\cos\phi}<\frac{\tau}{\cos(\phi-\theta)}\) holds. Note that \(-\frac{\pi}{2}<\phi<\frac{\pi}{2}\) (from \(\cos\phi>0\)) and \(0<\theta<\pi\).
\[\cos(\phi-\theta)<\cos\phi \tag{37}\]
which is equivalent to
\[\phi<\frac{\theta}{2}=\phi_{c} \tag{38}\]
Find the range of \(\phi\). From \(\cos\phi>0\) we have \(-\frac{\pi}{2}\leq\phi<\frac{\pi}{2}\). From \(\cos\left(\theta-\phi\right)>0\) we have \(-\frac{\pi}{2}\leq\phi-\theta<\frac{\pi}{2}\), which is \(\theta-\frac{\pi}{2}\leq\phi<\theta+\frac{\pi}{2}\). The intersecting domain of the two inequalities is:
\[\theta-\frac{\pi}{2}\leq\phi<\frac{\pi}{2} \tag{39}\]
Now apply these ranges to Eqn. 33. At \(r=\infty\), the indefinite integral is:
\[-\tau\left(\cos\phi+\cos(\theta-\phi)\right)\sqrt{\frac{\pi}{2}} \tag{40}\]
which holds over the integration range \(\theta-\frac{\pi}{2}\leq\phi<\frac{\pi}{2}\).
For the range \(\theta-\frac{\pi}{2}\leq\phi<\phi_{c}\), \(r\) is integrated from \(\frac{\tau}{\cos(\phi-\theta)}\) to \(\infty\). For the range \(\phi_{c}\leq\phi<\frac{\pi}{2}\), \(r\) is integrated from \(\frac{\tau}{\cos\phi}\) to \(\infty\).
When \(r=\frac{\tau}{\cos\left(\phi-\theta\right)}\), the indefinite integral (Eqn. 33) is:
\[L_{1}=-\exp\left(-\frac{\tau^{2}}{2\cos^{2}(\phi-\theta)}\right)2\cos\phi\cos (\theta-\phi)-\tau\left(\cos\phi+\cos(\theta-\phi)\right)\sqrt{\frac{\pi}{2} }\mbox{erf}\bigg{(}\frac{\tau}{\sqrt{2}\cos(\phi-\theta)}\bigg{)} \tag{41}\]
When \(r=\frac{\tau}{\cos\phi}\), the indefinite integral (Eqn. 33) is:
\[L_{2}=-\exp\left(-\frac{\tau^{2}}{2\cos^{2}\phi}\right)2\cos\phi\cos(\theta- \phi)-\tau\left(\cos\phi+\cos(\theta-\phi)\right)\sqrt{\frac{\pi}{2}}\mbox{ erf}\bigg{(}\frac{\tau}{\sqrt{2}\cos\phi}\bigg{)} \tag{42}\]
Therefore the definite integral is:
\[K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})=\frac{1}{2\pi}\sigma_{p}\sigma_{q} \left(\int_{\theta-\frac{\pi}{2}}^{\pi/2}-\tau\left(\cos\phi+\cos(\theta-\phi )\right)d\phi\sqrt{\frac{\pi}{2}}-\left(\int_{\theta-\frac{\pi}{2}}^{\phi_{c }}L_{1}d\phi+\int_{\phi_{c}}^{\pi/2}L_{2}d\phi\right)\right) \tag{43}\]
In the following steps, we clean up the expression \(\int_{\theta-\frac{\pi}{2}}^{\phi_{c}}L_{1}d\phi\).
\[I_{1}=-\int_{\theta-\frac{\pi}{2}}^{\phi_{c}}L_{1}d\phi \tag{44}\]
Substitute \(\phi\) with \(\phi=\phi_{0}+\theta-\frac{\pi}{2}\).
\[I_{1}=\int_{0}^{\phi_{c}-\theta+\frac{\pi}{2}}\exp\left(-\frac{\tau^{2}}{2\sin^{ 2}(\phi_{0})}\right)2\sin\left(\phi_{0}+\theta\right)\sin(\phi_{0})+\tau\left( \sin\left(\phi_{0}+\theta\right)+\sin(\phi_{0})\right)\sqrt{\frac{\pi}{2}} \text{erf}\bigg{(}\frac{\tau}{\sqrt{2}\sin(\phi_{0})}\bigg{)}d\phi_{0} \tag{45}\]
Also clean up the expression for \(\int_{\phi_{c}}^{\pi/2}L_{2}d\phi\).
\[I_{2}=-\int_{\phi_{c}}^{\pi/2}L_{2}d\phi=\int_{\pi/2}^{\phi_{c}}L_{2}d\phi \tag{46}\]
Substitute \(\phi\) with \(\phi=\phi_{0}+\frac{\pi}{2}\)
\[I_{2}=\int_{0}^{\frac{\pi}{2}-\phi_{c}}\exp\left(-\frac{\tau^{2}}{2\sin^{2}( \phi_{0})}\right)2\sin(\phi_{0})\sin(\theta+\phi_{0})+\tau\left(\sin(\phi_{0} )+\sin(\theta+\phi_{0})\right)\sqrt{\frac{\pi}{2}}\text{erf}\bigg{(}\frac{ \tau}{\sqrt{2}\sin(\phi_{0})}\bigg{)}d\phi_{0} \tag{47}\]
Notice that the integrands of \(I_{1}\) and \(I_{2}\) are the same; only the upper bounds of the integration ranges differ.
Solve \(\int_{\theta-\frac{\pi}{2}}^{\pi/2}-\tau\left(\cos\phi+\cos(\theta-\phi) \right)d\phi\sqrt{\frac{\pi}{2}}\) term in Eqn. 43.
\[\sqrt{\frac{\pi}{2}}\int_{\theta-\frac{\pi}{2}}^{\pi/2}-\tau\left( \cos\phi+\cos(\theta-\phi)\right)d\phi \tag{48}\] \[= -\tau\sqrt{\frac{\pi}{2}}\left(\sin\phi+\sin(\phi-\theta)\right) \bigg{|}_{\theta-\frac{\pi}{2}}^{\pi/2}\] (49) \[= -\tau\sqrt{\frac{\pi}{2}}\left((1+\cos(\theta))-(-\cos(\theta)- 1)\right)\] (50) \[= -\tau\sqrt{2\pi}(1+\cos\theta) \tag{51}\]
Therefore, Eqn. 43 is equivalent to
\[K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})=\frac{1}{2\pi}\sigma_{p}\sigma_{q} \left(I_{1}+I_{2}-\tau\sqrt{2\pi}(1+\cos\theta)\right) \tag{52}\]
As shown in Eqn. 38, \(\frac{\theta}{2}=\phi_{c}\). Substitute \(\phi_{c}\) with \(\frac{\theta}{2}\) in \(I_{1}\) and \(I_{2}\). Therefore the final expression for the similarity is
\[K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})=\frac{1}{2\pi}\sigma_{p}\sigma_{q} \left(2I(\theta\mid\tau)-\tau\sqrt{2\pi}(1+\cos\theta)\right) \tag{53}\]
where
\[I(\theta\mid\tau)=\int_{0}^{\frac{\pi-\theta}{2}}\exp\left(-\frac{\tau^{2}}{2 \sin^{2}(\phi_{0})}\right)2\sin\left(\phi_{0}+\theta\right)\sin(\phi_{0})+\tau \left(\sin\left(\phi_{0}+\theta\right)+\sin(\phi_{0})\right)\sqrt{\frac{\pi}{ 2}}\text{erf}\bigg{(}\frac{\tau}{\sqrt{2}\sin(\phi_{0})}\bigg{)}d\phi_{0} \tag{54}\]
The integration term (Eqn. 54) can be efficiently computed using simple numerical integration.
In the step where we make a substitution in Eqn. 22, the angle \(\theta\) is defined as the angle between \(\mathbf{s}_{p}\) and \(\mathbf{s}_{q}\), which is the angle between the neural representations.
\[\theta=\arccos\frac{\mathbf{s}_{p}\cdot\mathbf{s}_{q}}{\|\mathbf{s}_{p}\|\| \mathbf{s}_{q}\|}=\arccos\frac{\sigma_{pq}^{2}}{\sigma_{p}\sigma_{q}} \tag{55}\]
For the kernel of the first hidden layer, \(\sigma_{p}^{2}=\frac{\sigma^{2}}{N}\|\mathbf{x}^{(p)}\|^{2}\), and \(\sigma_{pq}^{2}=\frac{\sigma^{2}}{N}\,\mathbf{x}^{(p)}\cdot\mathbf{x}^{(q)}\). Therefore, the above equation (Eqn. 55) becomes
\[\theta=\arccos\left[\frac{\mathbf{x}^{(p)}\cdot\mathbf{x}^{(q)}}{\|\mathbf{x} ^{(p)}\|\|\mathbf{x}^{(q)}\|}\right] \tag{56}\]
For the deeper layers, we have \(\sigma_{p}^{2}=\sigma^{2}K^{l-1}(\mathbf{x}^{(p)},\mathbf{x}^{(p)})\), and \(\sigma_{pq}^{2}=\sigma^{2}K^{l-1}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})\). With this substitution, we arrive at the general solution presented in the main section of the paper.
\[K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})=\frac{\sigma^{2}}{2\pi}\sqrt{K^{l-1}( \mathbf{x}^{(p)},\mathbf{x}^{(p)})K^{l-1}(\mathbf{x}^{(q)},\mathbf{x}^{(q)}) }\left(2I\left(\theta^{l}\mid\tau\right)-\tau\sqrt{2\pi}(1+\cos\theta^{l})\right) \tag{57}\]
\[\theta^{l}=\arccos\frac{K^{l-1}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})}{\sqrt{K^{ l-1}(\mathbf{x}^{(p)},\mathbf{x}^{(p)})K^{l-1}(\mathbf{x}^{(q)},\mathbf{x}^{(q)})}} \tag{58}\]
## Appendix C \(\sigma^{*}\) Derivation
The sparse kernel equation is shown below.
\[K^{l+1}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})=\frac{\sigma^{2}}{2\pi}\sqrt{K^{l} (\mathbf{x}^{(p)},\mathbf{x}^{(p)})K^{l}(\mathbf{x}^{(q)},\mathbf{x}^{(q)}) }\left(2I\left(\theta^{l}\mid\tau\right)-\tau\sqrt{2\pi}(1+\cos\theta^{l})\right) \tag{59}\]
\[\theta^{l}=\arccos\frac{K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(q)})}{\sqrt{K^{l} (\mathbf{x}^{(p)},\mathbf{x}^{(p)})K^{l}(\mathbf{x}^{(q)},\mathbf{x}^{(q)})}} \tag{60}\]
Since we want to see the evolution of the representation length \(K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(p)})\), substitute \(\mathbf{x}^{(q)}\) with \(\mathbf{x}^{(p)}\) in the kernel equation. Since \(p=q\), we have \(\theta=0\) at all layers.

\[K^{l+1}(\mathbf{x}^{(p)},\mathbf{x}^{(p)})=\frac{\sigma^{2}}{2\pi}K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(p)})\left(2I\left(0\mid\tau\right)-2\tau\sqrt{2\pi}\right) \tag{61}\]

We require that the representation length does not change over layers, i.e. \(K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(p)})=K^{l+1}(\mathbf{x}^{(p)},\mathbf{x}^{(p)})\). Therefore,

\[K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(p)})=\frac{\sigma^{*2}}{\pi}K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(p)})\left(I\left(0\mid\tau\right)-\tau\sqrt{2\pi}\right) \tag{62}\]

\[\left[1-\frac{\sigma^{*2}}{2\pi}\left(2I\left(0\mid\tau\right)-2\tau\sqrt{2\pi}\right)\right]K^{l}(\mathbf{x}^{(p)},\mathbf{x}^{(p)})=0 \tag{63}\]
This means the following equality must be satisfied.
\[1=\frac{\sigma^{*2}}{\pi}\left(I\left(0\mid\tau\right)-\tau\sqrt{2\pi}\right) \tag{64}\]
This equality can always be satisfied by computing \(\sigma^{*}\) for a given \(\tau\).
\[\sigma^{*}=\sqrt{\frac{\pi}{I\left(0\mid\tau\right)-\tau\sqrt{2\pi}}} \tag{65}\]
## Appendix D Relationship to the Arccosine Kernel
\(\tau\) in Eqn. 5 and 8 is the variable that depends on the sparsity \(f\) (\(\tau=\sqrt{2}\,\mathrm{erf}^{-1}(1-2f)\), where \(\mathrm{erf}^{-1}\) is the inverse error function). This means \(\tau=0\) when \(f=0.5\), and \(\tau\rightarrow\infty\) as \(f\to 0\). The arccosine kernel presented in the previous works by Cho and Saul (2009) and Lee et al. (2018) is the case when \(\tau=0\) (\(f=0.5\)).
We show that the integration term Eqn. 6 reduces to a simpler form when \(\tau=0\).
\[I(\theta\mid 0)=\int_{0}^{\frac{\pi-\theta}{2}}2\sin\left(\phi_{0}+\theta \right)\sin(\phi_{0})d\phi_{0}\]
\[=\frac{1}{2}\left(2\phi_{0}\cos(\theta)-\sin\left(\theta+2\phi_{0}\right) \right)\big{|}_{0}^{\frac{\pi-\theta}{2}}\]
\[=\frac{1}{2}\left(\left(\pi-\theta\right)\cos(\theta)+\sin\left(\theta\right)\right)\]
Therefore, Eqn. 4 simplifies to
\[K(\mathbf{x}^{(p)},\mathbf{x}^{(q)})=\frac{\sigma^{2}}{2\pi}\|\mathbf{x}^{(p)} \|\|\mathbf{x}^{(q)}\|\left(\left(\pi-\theta\right)\cos(\theta)+\sin\left( \theta\right)\right)\]
which matches Eqn. (3), (6) in Cho and Saul (2009) up to a scaling factor \(\frac{\sigma^{2}}{2}\) which is an arbitrary choice.
### Sensitivity of the Arccosine Kernel to the Bias Noise
The effect of tuning the bias variance \(\sigma_{b}^{2}\) in the arccosine kernel is negligible, as reported in Lee et al. (2018). Here we provide empirical and theoretical evidence for certain setups (Figure S1). In the \(f=0.5\) case, it has been shown empirically in Figure 4(b) (and supplementary Figure 9) of Lee et al. (2018) that \(\sigma_{b}^{2}\) does not significantly affect the generalization performance. Discussing the case when \(f<0.5\) is irrelevant here, since we need a constant bias (as opposed to a random bias) in order to keep the desired sparsity level.
We can show this theoretically for the single-hidden-layer case. As shown in Eqn. 5 of Lee et al. (2018), a non-zero \(\sigma_{b}^{2}\) effectively offsets the value of the kernel by exactly \(+\sigma_{b}^{2}\). We show in the main text of our manuscript that, in the usual setting, an offset of the kernel does not affect the generalization performance. Hence, in the single-hidden-layer case, the choice of \(\sigma_{b}^{2}\) does not usually affect the generalization performance.
## Appendix E Effect of the Regularization on the Performance of the Sparse NNGP
The observation noise, i.e. the ridge \(\lambda\), is not a part of the model parameters and requires a dedicated investigation, which is a possible future direction. Here, we share an example figure that shows the effect of \(\lambda\) on the generalization performance (Figure S2). It is evident that a larger \(\lambda\) makes sparser kernels perform even better relative to their non-sparse counterparts. This observation is pronounced and consistent.
## Appendix F Details of the numerical experiments
For the main results, MNIST, Fashion-MNIST, CIFAR10, and a grayscale version of CIFAR10 are used. From each dataset, \(P\) samples are randomly chosen as training samples. The images are flattened before being used as inputs. We use one-hot vectors in \(\mathbb{R}^{10}\) to represent the class labels. The random sampling of the training data is repeated to check the consistency of our results.
The training is done using kernel ridge regression. We generate the kernels for training and test datasets and use the kernel ridge regression formula to make predictions on the test dataset. Since the prediction for each sample is a vector in \(\mathbb{R}^{10}\), we take the index of the maximum coordinate as the predicted class label. No regularization (\(\lambda=0\)) is used unless noted otherwise.
The matrix inversion and matrix multiplication required for kernel ridge regression are computed with GPU acceleration.
## Appendix G Confidence Interval of the Experimental Observations
An example figure showing the slice of \(fL\)-plane with the confidence intervals is shown in Figure S3. This shows that the variation in the performance is insensitive to the choice of the random training samples.
## Appendix H Sparse and Shallow Networks are Comparable to Dense and Deep Networks: Additional Results
See Table S1.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Dataset: \(P\) & \multicolumn{2}{c}{Dense - best acc.} & Sparse - equiv. acc. & Dense - same \(L\) \\ \hline MNIST: 100 & Accuracy & 0.7545\(\pm\)0.032 & 0.7560\(\pm\)0.029 & 0.7292\(\pm\)0.027 \\ \cline{2-5} & \(L\) & 7 & 1 & 1 \\ \cline{2-5} & \(f\) & 0.5 & 0.139 & 0.5 \\ \hline MNIST: 500 & Accuracy & 0.8989\(\pm\)0.0052 & 0.8993\(\pm\)0.0046 & 0.8836\(\pm\)0.0037 \\ \cline{2-5} & \(L\) & 6 & 1 & 1 \\ \cline{2-5} & \(f\) & 0.5 & 0.139 & 0.5 \\ \hline MNIST: 1000 & Accuracy & 0.9300\(\pm\)0.0028 & 0.9303\(\pm\)0.0025 & 0.9227\(\pm\)0.0015 \\ \cline{2-5} & \(L\) & 5 & 1 & 1 \\ \cline{2-5} & \(f\) & 0.5 & 0.139 & 0.5 \\ \hline MNIST: 2000 & Accuracy & 0.9493\(\pm\)0.0022 & 0.9495\(\pm\)0.0021 & 0.9456\(\pm\)0.0021 \\ \cline{2-5} & \(L\) & 3 & 1 & 1 \\ \cline{2-5} & \(f\) & 0.5 & 0.113 & 0.5 \\ \hline MNIST: 10000 & Accuracy & 0.9748\(\pm\)0.0014 & 0.9749\(\pm\)0.0013 & 0.9725\(\pm\)0.0013 \\ \cline{2-5} & \(L\) & 3 & 1 & 1 \\ \cline{2-5} & \(f\) & 0.5 & 0.087 & 0.5 \\ \hline CIFAR10: 100 & Accuracy & 0.2428\(\pm\)0.014 & 0.2429\(\pm\)0.014 & 0.2321\(\pm\)0.012 \\ \cline{2-5} & \(L\) & 18 & 6 & 6 \\ \cline{2-5} & \(f\) & 0.5 & 0.294 & 0.5 \\ \hline CIFAR10: 500 & Accuracy & 0.3468\(\pm\)0.011 & 0.3473\(\pm\)0.012 & 0.3018\(\pm\)0.011 \\ \cline{2-5} & \(L\) & 18 & 3 & 3 \\ \cline{2-5} & \(f\) & 0.5 & 0.087 & 0.5 \\ \hline CIFAR10: 1000 & Accuracy & 0.3810\(\pm\)0.0033 & 0.3814\(\pm\)0.0046 & 0.3160\(\pm\)0.0060 \\ \cline{2-5} & \(L\) & 18 & 2 & 2 \\ \cline{2-5} & \(f\) & 0.5 & 0.01 & 0.5 \\ \hline CIFAR10: 2000 & Accuracy & 0.4156\(\pm\)0.0026 & 0.4163\(\pm\)0.0037 & 0.3483\(\pm\)0.0031 \\ \cline{2-5} & \(L\) & 18 & 2 & 2 \\ \cline{2-5} & \(f\) & 0.5 & 0.01 & 0.5 \\ \hline CIFAR10: 10000 & Accuracy & 0.5016\(\pm\)0.0055 & 0.5017\(\pm\)0.0057 & 0.4621\(\pm\)0.0035 \\ \cline{2-5} & \(L\) & 18 & 3 & 3 \\ \cline{2-5} & \(f\) & 0.5 & 0.087 & 0.5 \\ \hline \hline \end{tabular}
\end{table}
Table S1: For each training set, e.g. MNIST with training size \(P=100\), we show at which depth \(L\) the dense model (\(f=0.5\)) performed best compared to other depths with \(f=0.5\). This depth \(L\) and the generalization accuracy at this depth are shown in the first column "Dense - best acc.". We then find a sparse model with comparable performance, and show its \(L\), \(f\), and generalization accuracy in the second column "Sparse - equiv. acc.". For all datasets, i.e. MNIST and CIFAR10, and all training set sizes that we tested, the depth of the sparse model that performed comparably to the dense model is at most half of that of the dense counterpart (compare \(L\) under the columns "Dense - best acc." and "Sparse - equiv. acc."). We check the performance of the dense model at the depth at which the sparse model performed comparably to the best dense model. This quantifies the performance gain from sparsity at that given depth (compare the columns "Sparse - equiv. acc." and "Dense - same L"). The table shows there is always a performance gain from sparsity, and this gain is greater when \(P\) is smaller or when the task is harder, e.g. CIFAR10. For each setup, we ran 10 trials with randomly sampled training sets. We show the standard deviation of accuracy with the \(\pm\) notation.
## Appendix I Applying the generalization theory to real datasets
We follow the method provided by Canatar et al. (2021b) for applying this theory to real datasets to make predictions of the generalization error. Assuming the data distribution \(p(x)\) is a discrete uniform distribution over both the training and test datasets, we perform an eigendecomposition of an \(M\times M\) kernel Gram matrix, where \(M\) is the number of samples across the training and test datasets. We divide the resulting eigenvalues by the number of non-zero eigenvalues \(N\), and multiply the eigenvectors by \(\sqrt{M}\), to obtain finite, discrete estimates of \(\eta_{\rho}=\mathcal{O}(1)\) and \(\phi_{\rho}(\mathbf{x})=\mathcal{O}(1)\) respectively. Assuming \(\mathbf{\Phi}\) is a matrix whose columns are the eigenvectors obtained by the eigendecomposition (before the scaling), the target function coefficients are given by the elements of \(\mathbf{\bar{v}}=\frac{1}{\sqrt{M}}\mathbf{\Phi}^{\top}\mathbf{Y}\in\mathbb{R}^{N}\). The vector \(\mathbf{Y}\in\mathbb{R}^{M}\) is the target vector that contains both the training and test sets.
Note that we assume no noise is added to the target function. In this limit, \(\alpha=P/N>1\) results in exactly zero generalization error (Canatar et al., 2021b). For the NNGP kernel and the datasets we use, we always have \(M=N\), so naturally \(\alpha<1\).
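A sketch of this recipe, assuming a precomputed \(M\times M\) Gram matrix `G` and target matrix `Y` spanning the training and test sets; the zero-eigenvalue threshold is an arbitrary choice:

```python
# Eigendecompose the full Gram matrix, rescale to the theory's
# conventions, and read off the target coefficients (Supp. I).
import numpy as np

eig, Phi = np.linalg.eigh(G)          # ascending eigenvalues
eig, Phi = eig[::-1], Phi[:, ::-1]    # reorder to descending
M = G.shape[0]
N = int(np.sum(eig > 1e-10))          # number of non-zero eigenvalues
eta = eig[:N] / N                     # eta_rho = O(1)
phi = np.sqrt(M) * Phi[:, :N]         # eigenfunctions phi_rho(x) = O(1)
v_bar = (Phi.T @ Y)[:N] / np.sqrt(M)  # target coefficients v_bar
```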
## Appendix J Derivative of \(E_{g}\)
We want to compute the derivative of \(E_{g}\) with respect to the eigenvalues. We decompose the derivative as follows.
\[\frac{dE_{g}}{d\eta_{i}}=\bar{v}_{i}^{2}\frac{d}{d\eta_{i}}E_{i}+\sum_{i\neq \rho}\bar{v}_{\rho}^{2}\frac{d}{d\eta_{i}}E_{\rho} \tag{66}\]
The problem boils down to solving the derivatives of the modal errors, \(\frac{d}{d\eta_{i}}E_{i}\) and \(\frac{d}{d\eta_{i}}E_{\rho}\) for \(i\neq\rho\). First compute \(\frac{d}{d\eta_{i}}E_{i}\).
\[\frac{d}{d\eta_{i}}E_{i}=\left(1-\gamma\right)^{-2}\frac{d\gamma}{d\eta_{i}} \left(1+P\eta_{i}\kappa^{-1}\right)^{-2}-2\left(1-\gamma\right)^{-1}\left(1+P \eta_{i}\kappa^{-1}\right)^{-3}\left(P\kappa^{-1}-P\eta_{i}\kappa^{-2}\frac{d \kappa}{d\eta_{i}}\right) \tag{67}\]
\[=\left(1-\gamma\right)^{-2}\frac{d\gamma}{d\eta_{i}}\left(1+P\eta _{i}\kappa^{-1}\right)^{-2}-2\left(1-\gamma\right)^{-1}\left(1+P\eta_{i} \kappa^{-1}\right)^{-3}P\kappa^{-1}\\ +2\left(1-\gamma\right)^{-1}\left(1+P\eta_{i}\kappa^{-1}\right)^ {-3}P\eta_{i}\kappa^{-2}\frac{d\kappa}{d\eta_{i}} \tag{68}\]
Then compute \(\frac{d}{d\eta_{i}}E_{\rho}\).
\[\frac{d}{d\eta_{i}}E_{\rho}=\left(1-\gamma\right)^{-2}\frac{d\gamma}{d\eta_{i}} \left(1+P\eta_{\rho}\kappa^{-1}\right)^{-2}+2\left(1-\gamma\right)^{-1}\left(1+ P\eta_{\rho}\kappa^{-1}\right)^{-3}P\eta_{\rho}\kappa^{-2}\frac{d\kappa}{d \eta_{i}} \tag{69}\]
This means the derivative of \(E_{g}\) is the following.
\[\frac{dE_{g}}{d\eta_{i}}=-\bar{v}_{i}^{2}2\left(1-\gamma\right)^{ -1}\left(1+P\eta_{i}\kappa^{-1}\right)^{-3}P\kappa^{-1}+\left(1-\gamma\right)^ {-2}\frac{d\gamma}{d\eta_{i}}\sum_{\rho}\bar{v}_{\rho}^{2}\left(1+P\eta_{\rho }\kappa^{-1}\right)^{-2}\\ +2P\left(1-\gamma\right)^{-1}\kappa^{-2}\frac{d\kappa}{d\eta_{i} }\sum_{\rho}\bar{v}_{\rho}^{2}\left(1+P\eta_{\rho}\kappa^{-1}\right)^{-3}\eta _{\rho} \tag{70}\]
\[=-\bar{v}_{i}^{2}2\left(1-\gamma\right)^{-1}\left(1+P\eta_{i}\kappa^{-1} \right)^{-3}P\kappa^{-1}+\left(1-\gamma\right)^{-2}\frac{d\gamma}{d\eta_{i}}a +2P\left(1-\gamma\right)^{-1}\kappa^{-2}\frac{d\kappa}{d\eta_{i}}b \tag{71}\]
where
\[a=\sum_{\rho}\bar{v}_{\rho}^{2}\left(1+P\eta_{\rho}\kappa^{-1}\right)^{-2} \tag{72}\]
\[b=\sum_{\rho}\bar{v}_{\rho}^{2}\left(1+P\eta_{\rho}\kappa^{-1}\right)^{-3}\eta _{\rho} \tag{73}\]
\[c=\sum_{\rho}\left(\kappa\eta_{\rho}^{-1}+P\right)^{-3}\eta_{\rho}^{-1} \tag{74}\]
We now compute \(\frac{d\kappa}{d\eta_{i}}\).
\[\kappa=\lambda+\sum_{\rho}\frac{\kappa\eta_{\rho}}{\kappa+P\eta_{\rho}}= \lambda+\sum_{\rho}\left(\eta_{\rho}^{-1}+P\kappa^{-1}\right)^{-1} \tag{75}\]
\[\frac{d\kappa}{d\eta_{i}}= \frac{d}{d\eta_{i}}\left(\eta_{i}^{-1}+P\kappa^{-1}\right)^{-1}+ \sum_{\rho\neq i}\frac{d}{d\eta_{i}}\left(\eta_{\rho}^{-1}+P\kappa^{-1}\right) ^{-1} \tag{76}\] \[= \left(\eta_{i}^{-1}+P\kappa^{-1}\right)^{-2}\left(\eta_{i}^{-2}+ P\kappa^{-2}\frac{d\kappa}{d\eta_{i}}\right)+\sum_{\rho\neq i}\left(\eta_{\rho}^{-1}+ P\kappa^{-1}\right)^{-2}\left(P\kappa^{-2}\frac{d\kappa}{d\eta_{i}}\right)\] (77) \[= \left(1+P\eta_{i}\kappa^{-1}\right)^{-2}+P\frac{d\kappa}{d\eta_{ i}}\left(\eta_{i}^{-1}\kappa+P\right)^{-2}+P\frac{d\kappa}{d\eta_{i}}\sum_{ \rho\neq i}\left(\eta_{\rho}^{-1}\kappa+P\right)^{-2}\] (78) \[= \left(1+P\eta_{i}\kappa^{-1}\right)^{-2}+P\frac{d\kappa}{d\eta_{ i}}\sum_{\rho}\left(\eta_{\rho}^{-1}\kappa+P\right)^{-2} \tag{79}\]
\[\frac{d\kappa}{d\eta_{i}}\left(1-P\sum_{\rho}\left(\eta_{\rho}^{-1}\kappa+P \right)^{-2}\right)=\left(1+P\eta_{i}\kappa^{-1}\right)^{-2} \tag{80}\]
\[\frac{d\kappa}{d\eta_{i}}=\left(1-\gamma\right)^{-1}\left(1+P\eta_{i}\kappa^{- 1}\right)^{-2} \tag{81}\]
We now compute \(\frac{d\gamma}{d\eta_{i}}\).
\[\gamma=\sum_{\rho}\frac{P\eta_{\rho}^{2}}{\left(\kappa+P\eta_{\rho}\right)^{2} }=\sum_{\rho}\frac{P}{\left(\kappa\eta_{\rho}^{-1}+P\right)^{2}} \tag{82}\]
\[\frac{1}{P}\frac{d\gamma}{d\eta_{i}}= \frac{d}{d\eta_{i}}\left(\kappa\eta_{i}^{-1}+P\right)^{-2}+\sum_{ \rho\neq i}\frac{d}{d\eta_{i}}\left(\kappa\eta_{\rho}^{-1}+P\right)^{-2} \tag{83}\] \[= -2\left(\kappa\eta_{i}^{-1}+P\right)^{-3}\left(\frac{d\kappa}{d \eta_{i}}\eta_{i}^{-1}-\kappa\eta_{i}^{-2}\right)-2\frac{d\kappa}{d\eta_{i}} \sum_{\rho\neq i}\left(\kappa\eta_{\rho}^{-1}+P\right)^{-3}\eta_{\rho}^{-1}\] (84) \[= -2\left(\kappa\eta_{i}^{-1}+P\right)^{-3}\frac{d\kappa}{d\eta_{i} }\eta_{i}^{-1}+2\left(\kappa\eta_{i}^{-1}+P\right)^{-3}\kappa\eta_{i}^{-2}-2 \frac{d\kappa}{d\eta_{i}}\sum_{\rho\neq i}\left(\kappa\eta_{\rho}^{-1}+P\right) ^{-3}\eta_{\rho}^{-1}\] (85) \[= 2\left(\kappa\eta_{i}^{-1}+P\right)^{-3}\kappa\eta_{i}^{-2}-2 \frac{d\kappa}{d\eta_{i}}\sum_{\rho}\left(\kappa\eta_{\rho}^{-1}+P\right)^{-3 }\eta_{\rho}^{-1} \tag{86}\]
Substituting \(\frac{d\gamma}{d\eta_{i}}\) in \(\frac{dE_{g}}{d\eta_{i}}\), we have
\[\frac{1}{2}\left(1-\gamma\right)\frac{dE_{g}}{d\eta_{i}} =-\bar{v}_{i}^{2}P\kappa^{-1}\left(1+P\eta_{i}\kappa^{-1}\right)^ {-3}\] \[+a\kappa^{-2}\eta_{i}\left(1-\gamma\right)^{-1}\left(1+P\eta_{i} \kappa^{-1}\right)^{-3}-ac\left(1-\gamma\right)^{-1}\frac{d\kappa}{d\eta_{i} }+bP\kappa^{-2}\frac{d\kappa}{d\eta_{i}} \tag{87}\]
Substituting \(\frac{d\kappa}{d\eta_{i}}\) in \(\frac{dE_{g}}{d\eta_{i}}\), we finally have
\[\frac{dE_{g}}{d\eta_{i}}=2\kappa\left(\eta_{i}\kappa E_{g}-\bar{v}_{i}^{2}P \kappa^{2}\right)\left(1-\gamma\right)^{-1}\left(\kappa+P\eta_{i}\right)^{-3} +2\left(bP-cE_{g}\kappa^{2}\right)\left(1-\gamma\right)^{-2}\left(\kappa+P \eta_{i}\right)^{-2} \tag{88}\]
## Appendix K Perturbation Analysis on \(E_{\rho}\)
We first compute the Jacobian of the \(E_{\rho}\)'s with respect to the \(\eta_{\rho}\)'s, evaluated at the flat spectrum, i.e. where all \(\eta_{\rho}\)'s are the same. This common eigenvalue is denoted \(\eta\) from here on. In this scenario, the Jacobian simplifies to
\[\mathbf{J}(\alpha)=2\left(1-\alpha\right)\alpha\frac{1}{\eta}\mathbf{M} \tag{89}\]
\[\mathbf{M}_{ij}=-\delta_{ij}+\frac{1}{N}(1-\delta_{ij}) \tag{90}\]
where \(N\) is the number of non-zero eigenvalues, and \(\alpha=\frac{P}{N}\) is a ratio of the training set to the number of non-zero eigenvalues. We assume \(P\rightarrow\infty\) and \(N\rightarrow\infty\), but \(\alpha\sim\mathcal{O}(1)\). \(\mathbf{M}\in\mathbb{R}^{N\times N}\) is a matrix with \(-1\)'s on the diagonal and \(\frac{1}{N}\)'s off-diagonal. To see how \(E_{\rho}\)'s change with a perturbation in the spectrum, we dot \(\mathbf{J}(\alpha)\) with an eigenvalue perturbation vector \(\nabla\mathbf{r}=[\nabla\eta_{0},\dots,\nabla\eta_{\rho},\dots,\nabla\eta_{N- 1}]\), where \(\nabla\eta_{\rho}\)'s are the individual perturbations in the eigenvalues. Since the perturbed spectrum is ordered from the largest eigenvalue to the smallest eigenvalue, i.e. \(\eta+\nabla\eta_{\rho}>\eta+\nabla\eta_{\rho^{\prime}}\), \(\nabla\eta_{\rho}\) should be ordered in a way such that for \(\rho<\rho^{\prime}\), \(\nabla\eta_{\rho}>\nabla\eta_{\rho^{\prime}}\) in the vector \(\nabla\mathbf{r}\). The dot product gives the following result on the change in \(E_{\rho}\).
\[\nabla E_{\rho}=-2\left(1-\alpha\right)\alpha\frac{1}{\eta}\left(\nabla\eta_{ \rho}-\left\langle\nabla\eta_{\rho}\right\rangle\right) \tag{91}\]
where \(\left\langle\nabla\eta_{\rho}\right\rangle=\frac{1}{N}\sum_{\rho}\nabla\eta_{\rho}\) is a mean value of the perturbations. Intuitively, the modal error perturbation \(\nabla E_{\rho}\) is a sign-flipped version of the zero-meaned \(\nabla\eta_{\rho}\).
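The structure of Eqs. (89)-(91) can be checked numerically. The following minimal numpy sketch builds \(\mathbf{M}\), applies the Jacobian to a random ordered perturbation, and compares against the closed form; the residual is \(\mathcal{O}(1/N)\) and vanishes in the stated \(N\rightarrow\infty\) limit. The chosen \(N\), \(\alpha\), \(\eta\), and perturbation scale are illustrative assumptions.

```python
import numpy as np

# Minimal check of Eqs. (89)-(91); N, alpha, eta are illustrative values.
rng = np.random.default_rng(0)
N, alpha, eta = 1000, 0.5, 1.0
M = -np.eye(N) + (1.0 - np.eye(N)) / N                 # Eq. (90)
J = 2.0 * (1.0 - alpha) * alpha / eta * M              # Eq. (89)

d_eta = np.sort(rng.normal(scale=1e-3, size=N))[::-1]  # ordered perturbation
lhs = J @ d_eta                                        # Jacobian applied to it
rhs = -2.0 * (1.0 - alpha) * alpha / eta * (d_eta - d_eta.mean())  # Eq. (91)
print(np.max(np.abs(lhs - rhs)))  # O(1/N) residual, vanishing as N grows
```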
In Figure S4, we compare a modal error spectrum estimated with our first-order perturbation theory to that of the full theory. We see that the first-order perturbation theory accurately predicts the change in the modal error spectrum when the perturbation is small.
## Appendix L Invariance of the Generalization Error to the Change in a Single Eigenvalue
In this section, we analyze the effect of an offset of the kernel on the generalization performance. Taking an offset is defined as adding a constant value \(b\) to the kernel function: \(K(x,x^{\prime})+b\). The posterior output \(\mathbf{\hat{y}}\) of kernel ridge regression is given by
\[\mathbf{\hat{y}}=\mathbf{K}^{\prime}\left(\mathbf{K}+\lambda\mathbf{I}\right)^ {-1}\mathbf{y} \tag{92}\]
where \(\mathbf{y}\in\mathbb{R}^{P}\) is a vector of training labels, \(\mathbf{K}\in\mathbb{R}^{P\times P}\) is a Gram matrix representing the kernel values amongst the training data, and \(\mathbf{K}^{\prime}\in\mathbb{R}^{M\times P}\) is a Gram matrix representing the kernel values
between the test and training data. Now we introduce the offset to the kernel and the resulting regression output is
\[\mathbf{\hat{y}}_{b}=\left(\mathbf{K}^{\prime}+b\mathbf{1}_{M}\mathbf{1}_{P}^{ \top}\right)\left(\mathbf{K}+b\mathbf{1}_{P}\mathbf{1}_{P}^{\top}+\lambda \mathbf{I}\right)^{-1}\mathbf{y} \tag{93}\]
where \(b\mathbf{1}_{M}\mathbf{1}_{P}^{\top}\) is an \(M\times P\) matrix of constant value \(b\).
We claim that if one of the eigenvectors of \(\mathbf{K}\) is a uniform vector \(\phi_{0}=\left[\frac{1}{\sqrt{P}},\ldots,\frac{1}{\sqrt{P}}\right]\), and the target function is zero-mean (hence \(\mathbf{1}_{P}^{\top}\mathbf{y}=0\)), then \(\mathbf{\hat{y}}=\mathbf{\hat{y}}_{b}\). Notice that all three terms in the matrix inverse \(\mathbf{G}_{b}=\left(\mathbf{K}+b\mathbf{1}_{P}\mathbf{1}_{P}^{\top}+\lambda \mathbf{I}\right)^{-1}\) are simultaneously diagonalizable. Therefore, if the eigenvalues of \(\left(\mathbf{K}+\lambda\mathbf{I}\right)^{-1}\) are \(\left\{\frac{1}{s_{0}+\lambda},\frac{1}{s_{1}+\lambda},\ldots,\frac{1}{s_{P-1}+\lambda}\right\}\), then the eigenvalues of \(\mathbf{G}_{b}\) are \(\left\{\frac{1}{s_{0}+Pb+\lambda},\frac{1}{s_{1}+\lambda},\ldots,\frac{1}{s_{P-1}+\lambda}\right\}\) (\(s_{i}\) is an eigenvalue of \(\mathbf{K}\)). The only difference in the eigenspectrums is the first eigenvalue, which corresponds to the uniform eigenvector \(\phi_{0}\). Therefore we can decompose \(\mathbf{G}_{b}\) in the following fashion.
\[\mathbf{G}_{b}=\left(\mathbf{K}+\lambda\mathbf{I}\right)^{-1}+d\mathbf{1}_{P} \mathbf{1}_{P}^{\top} \tag{94}\]
This follows from the Woodbury matrix identity. The specific expression of \(d\in\mathbb{R}\) is irrelevant for our purpose. Therefore, the regression output is
\[\mathbf{\hat{y}}_{b} =\left(\mathbf{K}^{\prime}+b\mathbf{1}_{M}\mathbf{1}_{P}^{\top}\right)\left(\left(\mathbf{K}+\lambda\mathbf{I}\right)^{-1}+d\mathbf{1}_{P}\mathbf{1}_{P}^{\top}\right)\mathbf{y} \tag{95}\] \[=\left(\mathbf{K}^{\prime}+b\mathbf{1}_{M}\mathbf{1}_{P}^{\top}\right)\left(\left(\mathbf{K}+\lambda\mathbf{I}\right)^{-1}\mathbf{y}+d\mathbf{1}_{P}\mathbf{1}_{P}^{\top}\mathbf{y}\right)\] (96) \[=\left(\mathbf{K}^{\prime}+b\mathbf{1}_{M}\mathbf{1}_{P}^{\top}\right)\left(\mathbf{K}+\lambda\mathbf{I}\right)^{-1}\mathbf{y}\] (97) \[=\mathbf{K}^{\prime}\left(\mathbf{K}+\lambda\mathbf{I}\right)^{-1}\mathbf{y}+b\mathbf{1}_{M}\mathbf{1}_{P}^{\top}\left(\mathbf{K}+\lambda\mathbf{I}\right)^{-1}\mathbf{y}\] (98) \[=\mathbf{\hat{y}} \tag{99}\]
The third equality is due to \(\mathbf{1}_{P}^{\top}\mathbf{y}=0\), and the fifth equality is due to the fact that \(\mathbf{1}_{P}^{\top}\left(\mathbf{K}+\lambda\mathbf{I}\right)^{-1}\propto \mathbf{1}_{P}^{\top}\), since \(\mathbf{1}_{P}\) is an eigenvector of \(\left(\mathbf{K}+\lambda\mathbf{I}\right)^{-1}\), and therefore
\[\mathbf{1}_{P}^{\top}\left(\mathbf{K}+\lambda\mathbf{I}\right)^{-1}\mathbf{y }\propto\mathbf{1}_{P}^{\top}\mathbf{y}=0 \tag{101}\]
Hence we conclude that if an eigenvector of \(\mathbf{K}\) is a uniform vector and the target function is zero-mean, the offset of the kernel does not affect the kernel regression prediction, and therefore does not affect the generalization performance. More generally, an additive perturbation \(b\phi_{i}\phi_{i}^{\top}\) to a kernel Gram matrix in an eigenvector direction \(\phi_{i}\) to which the target function is orthogonal (\(\phi_{i}^{\top}\mathbf{y}=0\)) does not influence the prediction and the generalization performance.
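This invariance is straightforward to verify numerically. The sketch below constructs a kernel whose first eigenvector is the uniform vector, a zero-mean target, and an arbitrary test-train Gram matrix, then compares the two predictions; the sizes, ridge parameter, and offset value are illustrative assumptions.

```python
import numpy as np

# Numerical check of the offset-invariance claim of this appendix.
rng = np.random.default_rng(0)
P, M, lam, b = 50, 20, 1e-2, 3.0

# Build K = Q diag(s) Q^T with the uniform vector as the first eigenvector.
Q, _ = np.linalg.qr(np.column_stack([np.ones(P), rng.standard_normal((P, P - 1))]))
s = rng.uniform(0.1, 2.0, P)
K = Q @ np.diag(s) @ Q.T

Kp = rng.standard_normal((M, P))          # arbitrary test-train Gram matrix
y = rng.standard_normal(P)
y -= y.mean()                             # zero-mean target: 1_P^T y = 0

ones_P, ones_M = np.ones((P, 1)), np.ones((M, 1))
y_hat = Kp @ np.linalg.solve(K + lam * np.eye(P), y)
y_hat_b = (Kp + b * ones_M @ ones_P.T) @ np.linalg.solve(
    K + b * ones_P @ ones_P.T + lam * np.eye(P), y)
print(np.allclose(y_hat, y_hat_b))        # True: the offset has no effect
```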
Here we show an alternative proof using the generalization theory. The generalization error for a kernel that has the uniform vector as an eigenvector is expressed as
\[E_{g}=\frac{1}{1-\gamma}\sum_{\rho=1}^{N}\frac{\kappa^{2}\tilde{v}_{\rho}^{2}+ P\sigma^{2}\eta_{\rho}^{2}}{\left(\kappa+P\eta_{\rho}\right)^{2}}+\frac{1+ \gamma}{1-\gamma}\kappa^{2}\frac{\tilde{v}_{0}^{2}}{\left(\kappa+2P\eta_{0} \right)^{2}} \tag{102}\]
\[\kappa=\lambda+\sum_{\rho=1}^{N}\frac{\kappa\eta_{\rho}}{P\eta_{\rho}+\kappa} \tag{103}\]
\[\gamma=\sum_{\rho=1}^{N}\frac{P\eta_{\rho}^{2}}{\left(P\eta_{\rho}+\kappa \right)^{2}} \tag{104}\]
where \(\eta_{0}\) is the eigenvalue (from the Mercer decomposition) that corresponds to the constant eigenfunction \(\phi_{0}(\cdot)\) Canatar et al (2021b). When the target function is zero-mean, \(\bar{v}_{0}=0\). Therefore, regardless of the value of \(\eta_{0}\), which is the only eigenvalue that changes with the offset to the kernel, the second term of \(E_{g}\) is \(0\). This means that if the target function is zero-mean, the offset to the kernel does not affect the generalization error.
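As a sketch of how these expressions are evaluated in practice, the snippet below solves the self-consistent equation for \(\kappa\) in Eq. (103) by fixed-point iteration and then evaluates \(\gamma\) and the first term of Eq. (102). The power-law spectrum, target powers, and noise level are illustrative assumptions, and the second term drops out since \(\bar{v}_{0}=0\) for a zero-mean target.

```python
import numpy as np

# Fixed-point solution of Eq. (103), then gamma of Eq. (104) and the
# first term of E_g in Eq. (102) for an assumed power-law spectrum.
def solve_kappa(eta, P, lam, n_iter=10_000):
    kappa = lam + eta.sum()                      # any positive initialisation
    for _ in range(n_iter):
        kappa = lam + np.sum(kappa * eta / (P * eta + kappa))
    return kappa

eta = 1.0 / np.arange(1, 501) ** 2               # assumed eigenvalues eta_rho
v2 = 1.0 / np.arange(1, 501) ** 2                # assumed target powers
P, lam, sigma2 = 100, 1e-3, 0.0

kappa = solve_kappa(eta, P, lam)
gamma = np.sum(P * eta ** 2 / (P * eta + kappa) ** 2)
E_g = np.sum((kappa ** 2 * v2 + P * sigma2 * eta ** 2)
             / (kappa + P * eta) ** 2) / (1.0 - gamma)
print(kappa, gamma, E_g)
```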
## Appendix M Theory vs. Experiment on the Real-life Datasets
In Figure 4, we presented the experimental observations of the generalization errors and the theoretical predictions on the circulant dataset. We then analyzed the shape of the eigenspectrums and the modal spectrums to see what contributed to decreasing or increasing the generalization error. Here, we show the same result on the MNIST, Fashion-MNIST, CIFAR10 and CIFAR10-Grayscale datasets (Figure S5). Just as in the circulant dataset, we see that the moderately steep eigenspectrum performs the best. In the case of the real-life datasets, the steep eigenspectrum underperforms because of the large modal errors in the modes that correspond to low eigenvalues, together with the significant amount of target function power residing in those eigenmodes. In the plots, the first 800 eigenmodes are shown; as a reference, there are 3000 eigenmodes in total. We see that the target power spectrum does not vary significantly, as visually shown in the (f) panels and in the indicated ED values of the target power spectrum.
## Appendix N Comparison to Task-Model Alignment in Terms of the Target Power Spectrum
In Canatar et al. (2021b), the task-model alignment is presented as a normalized cumulative sum of the target function power spectrum.
\[C(\rho)=\frac{\sum_{i=0}^{\rho-1}\bar{v_{i}}^{2}}{\sum_{i=0}^{N-1}\bar{v_{i}} ^{2}} \tag{105}\]
The faster the rise of the cumulative power, the more aligned the kernel is to the target function, and therefore the better the generalization performance. However, it is challenging to compare kernels using this metric when the kernel Gram matrix eigenfunctions do not change much between the models. In the circulant case, the eigenfunctions do not change at all, so the target function power spectrum is identical between any two kernels; the comparison via task-model alignment in the sense of the target function power spectrum therefore fails. Here, we show that we see similar phenomena in the real-life datasets (Figure S6). The models shown in green and blue correspond to the models shown in Figure S5. We compute the area under the curve (AUC) of the cumulative sum curve to compare how fast these curves rise; a higher AUC may indicate a better-aligned model. We observe that while the higher-performing models (blue in Figure S6) do have a higher AUC than the lower-performing models (green in Figure S6), the difference is very small. It is also qualitatively hard to tell which model has better alignment just by inspecting the cumulative spectrum. This highlights that our theoretical result on the explicit relationship between the eigenspectrum and the modal error spectrum can complement the task-model alignment presented in Canatar et al. (2021b).
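A minimal sketch of how \(C(\rho)\) of Eq. (105) and its AUC summary can be computed from a given target power spectrum; the power-law spectrum below is an illustrative assumption.

```python
import numpy as np

# Cumulative target power C(rho) of Eq. (105) and a normalised AUC summary.
def cumulative_alignment(v2):
    C = np.cumsum(v2) / np.sum(v2)
    auc = np.trapz(C, dx=1.0 / len(C))   # in [0, 1]; higher means faster rise
    return C, auc

v2 = 1.0 / np.arange(1, 3001) ** 1.5     # assumed target power spectrum
C, auc = cumulative_alignment(v2)
print(auc)
```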
|
2304.07305 | 1-D Residual Convolutional Neural Network coupled with Data Augmentation
and Regularization for the ICPHM 2023 Data Challenge | In this article, we present our contribution to the ICPHM 2023 Data Challenge
on Industrial Systems' Health Monitoring using Vibration Analysis. For the task
of classifying sun gear faults in a gearbox, we propose a residual
Convolutional Neural Network that operates on raw three-channel time-domain
vibration signals. In conjunction with data augmentation and regularization
techniques, the proposed model yields very good results in a multi-class
classification scenario with real-world data despite its relatively small size,
i.e., with less than 30,000 trainable parameters. Even when presented with data
obtained from multiple operating conditions, the network is still capable to
accurately predict the condition of the gearbox under inspection. | Matthias Kreuzer, Walter Kellermann | 2023-04-14T08:50:21Z | http://arxiv.org/abs/2304.07305v2 | D Residual Convolutional Neural Network coupled with Data Augmentation and Regularization for the ICPHM 2023 Data Challenge
###### Abstract
In this article, we present our contribution to the International Conference on Prognostics and Health Management (ICPHM) 2023 Data Challenge on Industrial Systems' Health Monitoring using Vibration Analysis. For the task of classifying sun gear faults in a gearbox, we propose a residual Convolutional Neural Network (CNN) that operates on raw three-channel time-domain vibration signals. In conjunction with data augmentation and regularization techniques, the proposed model yields very good results in a multi-class classification scenario with real-world data despite its relatively small size, i.e., with less than 30,000 trainable parameters. Even when presented with data obtained from multiple operating conditions, the network is still capable of accurately predicting the condition of the gearbox under inspection.
ICPHM 2023 Data Challenge, Vibration Analysis, Condition Monitoring, Fault Classification, Planetary Gearbox
## I Introduction
Industrial machines are subjected to heavy stress conditions and are therefore susceptible to defects and resulting malfunctions. Even small defects can already have significant consequences as they can cause additional costs by, e.g., delaying industrial production through unexpected downtime, or can even put human lives in danger, e.g., resulting from bearing failure in rail vehicles. These damages can stem from normal wear and tear, from poor maintenance or from completely unforeseen events. To detect such damages at an early stage whenever possible, reliable condition monitoring techniques are of utmost importance. However, the manual inspection of machines proves difficult and cost-intensive, and therefore automatic, non-intrusive condition monitoring techniques are preferred and highly sought after. To this end, the analysis of vibration signals has proven to be especially effective, since faults, especially in rotating machinery, alter the typical vibration pattern of a machine. These alterations in the vibration signature can be detected by applying signal processing techniques, e.g., envelope spectrum analysis [1], signal decomposition and filtering techniques [2], or by manually extracting fault-related features [3][4]. However, these traditional methods have recently been eclipsed by end-to-end deep learning-based approaches [5, 6, 7, 8, 9]. Although Deep Neural Networks (DNNs) have proven to be very powerful for the task of fault detection, they come with certain drawbacks. DNN-based approaches require a sufficiently large amount of training data to perform well. Yet, obtaining a sufficient amount of data, especially for faulty conditions, is often difficult. Moreover, many architectures rely on a large number of parameters, which makes the training process and the use on on-board devices difficult. If the number of parameters is too large and the amount of training data insufficient, these networks tend to overfit and fail to generalize well to unseen test data. To avoid these drawbacks, we propose a residual Convolutional Neural Network (CNN) that stands out for its small number of parameters, and we additionally employ data augmentation techniques to further mitigate possible overfitting, for the ICPHM 2023 Data Challenge on Industrial Systems' Health Monitoring using Vibration Signal Analysis [10].
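As an illustration of how such a compact network can be assembled, the following PyTorch sketch shows a small 1-D residual block operating on raw three-channel signals. It is not the authors' exact architecture: the channel widths, kernel size, and the assumed five condition classes are placeholders, chosen so that the total parameter count stays well below 30,000.

```python
import torch
import torch.nn as nn

# Illustrative 1-D residual block for raw three-channel vibration signals.
class ResBlock1d(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 9):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))  # identity skip connection

model = nn.Sequential(
    nn.Conv1d(3, 16, 9, padding=4),        # 3 input vibration channels
    ResBlock1d(16), ResBlock1d(16),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 5),                      # assumed 5 gearbox condition classes
)
print(sum(p.numel() for p in model.parameters() if p.requires_grad))  # < 30,000
```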
This paper is structured as follows: In Sec. II, the dataset of the ICPHM 2023 Data Challenge is described. The proposed fault detection approach is then presented in Sec. III. In particular, the proposed network architecture is discussed in Sec. III-A, whereas the utilized data augmentation techniques and the training procedure are described in Sec. III-B and Sec. III-C, respectively. After this, the performance of the model is evaluated in Sec. IV before conclusions are drawn in Sec. V.
2306.10766 | Substitutional Alloying Using Crystal Graph Neural Networks | Materials discovery, especially for applications that require extreme
operating conditions, requires extensive testing that naturally limits the
ability to inquire the wealth of possible compositions. Machine Learning (ML)
has nowadays a well established role in facilitating this effort in systematic
ways. The increasing amount of available accurate DFT data represents a solid
basis upon which new ML models can be trained and tested. While conventional
models rely on static descriptors, generally suitable for a limited class of
systems, the flexibility of Graph Neural Networks (GNNs) allows for direct
learning representations on graphs, such as the ones formed by crystals. We
utilize crystal graph neural networks (CGNN) to predict crystal properties with
DFT level accuracy, through graphs with encoding of the atomic (node/vertex),
bond (edge), and global state attributes. In this work, we aim at testing the
ability of the CGNN MegNet framework in predicting a number of properties of
systems previously unseen from the model, obtained by adding a substitutional
defect in bulk crystals that are included in the training set. We perform DFT
validation to assess the accuracy in the prediction of formation energies and
structural features (such as elastic moduli). Using CGNNs, one may identify
promising paths in alloy discovery. | Dario Massa, Daniel Cieśliński, Amirhossein Naghdi, Stefanos Papanikolaou | 2023-06-19T08:18:17Z | http://arxiv.org/abs/2306.10766v1 | # Substitutional Alloying Using Crystal Graph Neural Networks
###### Abstract
Materials discovery, especially for applications that require extreme operating conditions, requires extensive testing that naturally limits the ability to explore the wealth of possible compositions. Machine Learning (ML) has nowadays a well established role in facilitating this effort in systematic ways. The increasing amount of available accurate DFT data represents a solid basis upon which new ML models can be trained and tested. While conventional models rely on static descriptors, generally suitable for a limited class of systems, the flexibility of Graph Neural Networks (GNNs) allows representations to be learned directly on graphs, such as the ones formed by crystals. We utilize crystal graph neural networks (CGNN) to predict crystal properties with DFT-level accuracy, through graphs with encoding of the atomic (node/vertex), bond (edge), and global state attributes. In this work, we aim at testing the ability of the CGNN MEGNet framework to predict a number of properties of systems previously unseen by the model, obtained by adding a substitutional defect in bulk crystals that are included in the training set. We perform DFT validation to assess the accuracy in the prediction of formation energies and structural features (such as elastic moduli). Using CGNNs, one may identify promising paths in alloy discovery.
**Keywords:** GNN, CGNN, Graphs, Substitutional Alloys, Materials
Discovery, Neural Networks, Machine Learning
## 1 Introduction
The use of machine learning (ML) Michalski et al (2013); LeCun et al (2015) methods in materials science to accelerate materials discovery Curtarolo et al (2013) is at the base of so-called materials informatics (MI) Ramakrishna et al (2019); Ramprasad et al (2017); Takahashi and Tanaka (2016); L. Ward (2017); Rajan (2005). By training ML models on large databases, such as the OQMD or the Materials Project high-throughput electronic structure calculation databases Saal et al (2013); Jain et al (2013); Curtarolo (2012); Hachmann et al (2011); NOMAD ([https://nomad-coe.eu](https://nomad-coe.eu)), the goal is to achieve predictions of material properties with quantum accuracy.
As in statistical mechanics with the need for identifying appropriate order parameters of novel phases and structures, the key challenge in ML algorithms is to identify effective system descriptors that can function as structure identifiers. A large variety of descriptors has been proposed, including fixed-length feature vectors of material elemental or electronic properties Seko et al (2015); Xue et al (2016); Isayev et al (2017), as well as structural descriptors, based on rotationally and translationally invariant transformations of atomic coordinates, like the Coulomb matrix Rupp et al (2012), atom-centered symmetry functions (ACSFs) Behler (2011), social permutation invariant coordinates (SPRINT) Pietrucci and Andreoni (2011), smooth overlap of atomic positions (SOAP) De et al (2016) and the global minimum of root-mean-square distance Sadeghi et al (2013). However, these solutions are often system-specific and not suitable for vast compositional and structural space exploration.
For this reason, a topic of fervent interest in the materials science community is the use of graph neural networks (GNNs) Zhou et al (2020); Wu et al (2021), which allow representations to be learned directly and in a flexible way, with applications focused on molecular systems Jorgensen et al (2018); Schutt et al (2017); Duvenaud et al (2015); Wu et al (2018); Kearnes et al (2016); Coley et al (2017), surfaces Back et al (2019); Palizhati et al (2019); Gu et al (2020) and periodic crystals Schutt et al (2017); Xie and Grossman (2018, 2018); Chen et al (2019); Dunn et al (2020); Louis et al (2020); Park and Wolverton (2020); Karamad et al (2020); Chen et al (2019). GNNs can be regarded as the generalization of convolutional neural networks (CNNs) to graph-structured data, from which the internal materials representations can be learned and used for the prediction of target properties Reiser et al (2022); even though larger amounts of data are required with respect to conventional ML models, GNNs take advantage of the unambiguous, physics-guided, real-space local associations between the system's degrees of freedom, and do so for any type of atomic crystalline structure Gong and Yan (2021). The common idea of GNN-based models is to represent atoms as nodes (V) and their chemical bonds as edges (E) in a graph G(V,E), which can be fed to a trained neural network to create node-level embeddings (learned representations of each atom in its individual chemical environment) through convolutions with neighbouring nodes and edges Fung et al (2021). Therefore, given a set of learnable weights (W), and (y) a target material property, the GNN model reformulates the prediction task as the mapping \(f(G:W)\rightarrow\) y.
A direct benefit of the GNN-converted graph encoding of a crystalline material is the naturally derived vector characterization of its atoms and edges Louis et al (2020). The work of Xie _et al._ Xie and Grossman (2018) represents the pioneering example of a crystal graph convolutional neural network (CGCNN) architecture, which was later extended in the iCGCNN by Park _et al._ Park and Wolverton (2020) to include 3-body correlations on neighbouring atoms, information on the Voronoi tessellated structure and an optimized chemical representation of interatomic bonds in the crystal graphs.
For the discovery of new materials, one may take various exploration paths, involving high-throughput computational Curtarolo et al (2013) and experimental Liu et al (2019) methods. However, the combined approach of machine-learning methods and compositional manipulation has very quickly acquired a well established role in materials science, and it is applied in a wide range of property optimization searches, like for zinc blende semiconductors Mannodi-Kanakkithodi et al (2022), perovskites Zhai et al (2022); Balachandran et al (2018); Ye et al (2018); Sharma et al (2020); Klug et al (2017); Sampson et al (2017) and others Guan (2019); Ning et al (2017); Oba et al (2018); Deml et al (2015, 2014); Wan et al (2021); Varley et al (2017); Mannodi-Kanakkithodi et al (2020).
In this work, we utilize a particular improvement of the originally proposed CGCNN model Xie and Grossman (2018), the MatErials Graph Network (MEGNet) model from Chen, Ye and coworkers Chen et al (2019), introduced in Sec.(2.1.1), which has the merit of being developed and tested both on molecules and crystals, with the possibility of defining global state attributes including temperature, pressure and entropy. We test the capabilities of graph networks to predict the properties of single-atom substitutionally defected crystals with the MEGNet model. After considering a model pre-trained on the Materials Project (MP) database (Sec.(2.1.3)), we focus on the formation energy, bulk and shear moduli predictions, comparing both the results obtained on datasets of similarly defected structures (Sec.(3.1)) and the effects of almost all the possible single-atom defects in the same matrices (Sec.(3.2)). To validate the predictions, as described in Sec.(2.2) and Sec.(3.3), we perform Density Functional Theory (DFT) calculations, and we find that CGNNs have great potential, but also limitations, in predicting the properties of defected bulk crystals and in promoting materials discovery.
## 2 Methods
### Machine Learning framework
#### MEGNet description
In the present work, we utilize the MEGNet model Chen et al (2019). The reasons for this choice lie in the structure and performance of the model:
1. It is characterized by a low number of attributes, one for the atom (atomic number) and one for the bond (spatial distance), yet MEGNet outperforms previous graph-based models Chen et al (2019), such as CGCNN Xie and Grossman (2018) and MPNN Jorgensen et al (2018), which use a higher number of attributes, as well as SchNet Schutt et al (2017), which uses a similarly low number;
2. The MEGNet framework includes a global state attribute, essential for state-property relationship predictions in materials;
3. The graph network construction of MEGNet has been developed and tested for both molecules and crystals.
Here, we limit ourselves to presenting the main features of the model; for a more exhaustive explanation, we refer the reader to the original work of Chen _et al._ Chen et al (2019) and references therein. In particular, given a graph \(G(E,V,\mathbf{u})\), where
\begin{table}
\begin{tabular}{|l|r|r|} \hline Parameter & Value & Short Description \\ \hline \hline nfeat\_node & 94 & number of atom features \\ \hline nfeat\_global & 2 & number of state features \\ \hline ngauss\_centers & 110 & number of gaussians \\ \hline converter\_cutoff & 4 & cutoff radius \\ \hline megnet\_blocks & 3 & number of MEGNetLayer blocks \\ \hline optimizer & Adam & optimizer of the model weights \\ \hline lr & 1e-3 & learning rate \\ \hline n1 & 64 & number of hidden units in layer 1 \\ \hline n2 & 32 & number of hidden units in layer 2 \\ \hline n3 & 16 & number of hidden units in layer 3 \\ \hline \end{tabular}
\end{table}
Table 1: Parameters from the pre-trained MEGNet model.
* \(V\) is the set of \(N^{v}\) atomic attribute vectors \(\mathbf{v}_{i}\);
* \(E=\left\{\left(\mathbf{e}_{k},r_{k},s_{k}\right)\right\}_{k=1}^{N^{e}}\) is the set of \(N^{e}\) bond attribute vectors, with \(r_{k}\) and \(s_{k}\) being the indices of the connected atoms;
* \(\mathbf{u}\) is the global state attribute vector;
the role of the graph network is to recursively update an input graph \(G(E,V,\mathbf{u})\) to an output graph \(G(E^{\prime},V^{\prime},\mathbf{u}^{\prime})\), with a progressive and inclusive information flow going from bonds to atoms, and finally to the global state. In particular, first the attributes of each bond are updated through a function \(\phi_{e}\), applied on the concatenation of the self-attributes, those of the connecting atoms \(\mathbf{v}_{s_{k}}\) and \(\mathbf{v}_{r_{k}}\), and those of the global state \(\mathbf{u}\), as in
\[\mathbf{e}_{k}^{\prime}=\phi_{e}\big{(}\mathbf{v}_{s_{k}}\bigoplus\mathbf{v}_ {r_{k}}\bigoplus\mathbf{e}_{k}\bigoplus\mathbf{u}\big{)} \tag{1}\]
The update of the atomic attributes involves the average over the bonds connecting to the i-th atom, \(\bar{\mathbf{v}}_{i}^{e}=\dfrac{1}{N_{i}^{e}}\sum_{k=1}^{N_{i}^{e}}\left\{\mathbf{e}_{k}^{\prime}\right\}_{r_{k}=i}\), the i-th atom self-attributes \(\mathbf{v}_{i}\) and the global state ones \(\mathbf{u}\), as in
\[\mathbf{v}_{i}^{\prime}=\phi_{v}\big{(}\bar{\mathbf{v}}_{i}^{e}\bigoplus \mathbf{v}_{i}\bigoplus\mathbf{u}\big{)} \tag{2}\]
Finally, an information flow from all three attribute groups is involved in the update of the global state attributes, as in
\[\mathbf{u}^{\prime}=\phi_{u}\big{(}\bar{\mathbf{u}}^{e}\bigoplus\bar{ \mathbf{u}}^{v}\bigoplus\mathbf{u}\big{)} \tag{3}\]
where \(\bar{\mathbf{u}}^{e}=\dfrac{1}{N^{e}}\sum_{k=1}^{N^{e}}\left\{\mathbf{e}_{k}^ {\prime}\right\}\) and \(\bar{\mathbf{u}}^{v}=\dfrac{1}{N^{v}}\sum_{i=1}^{N^{v}}\left\{\mathbf{v}_{i}^ {\prime}\right\}\).
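To make the order of operations in Eqs. (1)-(3) concrete, the following numpy sketch runs one schematic MEGNet-style block on random data. Each \(\phi_{*}\) stands for a learned multi-layer perceptron and is replaced here by a single untrained ReLU layer; all dimensions are illustrative assumptions.

```python
import numpy as np

# One schematic MEGNet-style update: bonds (Eq. 1), atoms (Eq. 2), state (Eq. 3).
def dense(dim_in, dim_out, rng):
    W = rng.standard_normal((dim_in, dim_out)) * 0.1
    return lambda x: np.maximum(x @ W, 0.0)        # stand-in for a learned MLP

rng = np.random.default_rng(0)
Nv, Ne, dv, de, du = 4, 6, 8, 8, 2
V = rng.standard_normal((Nv, dv))                  # atom attributes
E = rng.standard_normal((Ne, de))                  # bond attributes
u = np.zeros(du)                                   # global-state placeholder
send, recv = rng.integers(0, Nv, Ne), rng.integers(0, Nv, Ne)

phi_e = dense(2 * dv + de + du, de, rng)
E = phi_e(np.hstack([V[send], V[recv], E, np.tile(u, (Ne, 1))]))   # Eq. (1)

v_bar = np.zeros((Nv, de))                         # mean of updated incident bonds
np.add.at(v_bar, recv, E)
counts = np.maximum(np.bincount(recv, minlength=Nv), 1)[:, None]   # avoid /0
phi_v = dense(de + dv + du, dv, rng)
V = phi_v(np.hstack([v_bar / counts, V, np.tile(u, (Nv, 1))]))     # Eq. (2)

phi_u = dense(de + dv + du, du, rng)
u = phi_u(np.hstack([E.mean(0), V.mean(0), u]))                    # Eq. (3)
```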
As mentioned before, for our systems of interest, namely periodic crystals, the atomic number is the only node attribute. For bonds, the spatial distance is expanded in a Gaussian basis set, centered at linearly spaced locations \(r_{0}\) between \(r_{0}=0\) and \(r_{0}=r_{\rm cut}\), and characterized by a given width \(\sigma\). Finally, the global state is simply a two-zeros placeholder for global information exchange.
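A minimal sketch of this bond-distance expansion, using the converter settings of Table (1) (\(r_{\rm cut}=4\), 110 Gaussian centers); the width \(\sigma=0.5\) is an assumed value.

```python
import numpy as np

# Expand a bond distance r into Gaussians with linearly spaced centers.
def gaussian_expand(r, r_cut=4.0, n_centers=110, sigma=0.5):
    centers = np.linspace(0.0, r_cut, n_centers)
    return np.exp(-((r - centers) ** 2) / sigma ** 2)

print(gaussian_expand(2.3).shape)  # (110,) -> the bond feature vector
```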
#### Data collection
We consider crystal structures collected through the Python Materials Genomics (pymatgen) interface Ong et al (2013) to the Materials Application Programming Interface of the Materials Project Jain et al (2013). When the dataset was created, there were 126301 structures in the database with the formation energy (E\({}_{\rm form}\)) property, and 13102 structures with the bulk modulus (K\({}_{\rm VRH}\)) and shear modulus (G\({}_{\rm VRH}\)) properties.
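A hedged sketch of this collection step is shown below, assuming the legacy Materials Project API exposed by pymatgen's MPRester; the exact query field names and key handling depend on the API version in use.

```python
from pymatgen.ext.matproj import MPRester

# Query structures carrying elastic data; field names follow the legacy MP schema.
with MPRester("YOUR_API_KEY") as mpr:
    entries = mpr.query(
        criteria={"elasticity": {"$exists": True}},
        properties=["material_id", "structure",
                    "formation_energy_per_atom",
                    "elasticity.K_VRH", "elasticity.G_VRH"],
    )
print(len(entries))  # 13102 such structures at the time of collection
```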
#### Pre-trained model
Our focus in this work is on the prediction capabilities of MEGNet for _minimally_ defected systems, which are clearly not in the training database, given that the model is trained on a dataset of undefected structures. In order to do so, we consider the substitution of a single atom in a supercell, hoping that CGNN
Figure 1: Parity plots for the pre-trained model on the MP dataset. The plots involve the predictions on the bulk modulus (K\({}_{\rm VRH}\)), the shear modulus (G\({}_{\rm VRH}\)) and the formation energy (E\({}_{\rm form}\)).
training captures atomic similarities, based on combinations of atomic radii, valence electrons, and other atomic properties. Table(1) shows some of the parameters of the model; a more complete list can be found in the default implementation of the class Chen et al (2022).
We report parity plots in Fig.(1) for all three properties of interest in this study: bulk modulus (K\({}_{\rm VRH}\)), shear modulus (G\({}_{\rm VRH}\)) and formation energy (E\({}_{\rm form}\)). To evaluate the model's accuracy in predicting the properties of interest for the present study, the mean-absolute error (MAE) is used as the evaluation metric. Table(2) presents the MAE values for each predicted property over the dataset, which provides insight into the pre-trained model's performance.
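For reference, the sketch below shows how such a pre-trained model can be queried on a single-atom substitutionally defected supercell. The bundled model names ("Eform_MP_2019", "logK_MP_2018", "logG_MP_2018") are those distributed with the megnet package and may differ between releases, and the BCC Mo lattice parameter is approximate.

```python
from megnet.utils.models import load_model
from pymatgen.core import Structure, Lattice

# Build a 3x3x3 BCC Mo supercell (54 atoms) with one Rb substitution.
mo = Structure(Lattice.cubic(3.17), ["Mo", "Mo"],
               [[0, 0, 0], [0.5, 0.5, 0.5]])     # conventional BCC cell
mo.make_supercell([3, 3, 3])                     # 54-atom host matrix
mo.replace(0, "Rb")                              # single substitutional defect

for name in ["Eform_MP_2019", "logK_MP_2018", "logG_MP_2018"]:
    model = load_model(name)
    # Note: the elastic models return the base-10 log of the modulus in GPa.
    print(name, float(model.predict_structure(mo)))
```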
### Validation with DFT
We verify the accuracy of the model's predictions for system properties such as the bulk modulus K\({}_{\rm VRH}\), the shear modulus G\({}_{\rm VRH}\) and the formation energy E\({}_{\rm form}\), after single-atom substitution is implemented in \(2\times 2\times 2\) supercells. We perform DFT calculations with Quantum Espresso (QE) Giannozzi et al (2009, 2017, 2020) and its THERMO_PW Dal Corso (2023) driver for the calculation of structural properties. The pseudopotentials for all involved atomic species are ultrasoft, with the Perdew-Burke-Ernzerhof (PBE) Perdew et al (1996) functional. The Methfessel-Paxton smearing Methfessel and Paxton (1989) has been introduced to correctly treat metallic systems, and the calculations have been set as spin-polarized, to account for possible non-zero magnetization effects.
\begin{table}
\begin{tabular}{|l|r|} \hline Property & MAE \\ \hline \hline K\({}_{\rm VRH}\) & 6.143 GPa \\ \hline G\({}_{\rm VRH}\) & 10.489 GPa \\ \hline E\({}_{\rm f}\) & 0.029 eV/atom \\ \hline \end{tabular}
\end{table}
Table 2: MAEs of the model for the prediction of the bulk modulus (K\({}_{\rm VRH}\)), shear modulus (G\({}_{\rm VRH}\)) and formation energy (E\({}_{\rm form}\)).
Convergence is checked with respect to the number of k-points, the plane-wave cutoff energy, and the energy smearing spread (the degauss parameter in QE) for each case, after a preliminary variable-cell relaxation of the pure crystals; a further fixed-cell relaxation with the optimal parameters then yields the final equilibrium bulk structures. The common acceptance threshold on the variation of the total energy upon parameter change is set at \(10^{-5}\) Ry. The force and total energy convergence thresholds for ionic minimization are set to \(10^{-5}\) a.u. and \(10^{-6}\) a.u., respectively. After optimization of the pure crystals, fixed-cell relaxation is performed on the supercells with single-atom substitutions, and their structural properties are then extracted through the THERMO_PW driver.
The computation of formation energies for validation purposes is performed for the case of single-atom substitutional defects (D) applied to pure bulk crystals (M), as follows:
\[\mathrm{E_{form}}\big{(}\mathrm{M_{1-x}D_{x}}\big{)}=\mathrm{E}\big{(} \mathrm{M_{1-x}D_{x}}\big{)}-\big{(}1-\mathrm{x}\big{)}\mathrm{E}\big{(} \mathrm{M}\big{)}-\mathrm{x}\mathrm{E}\big{(}\mathrm{D}\big{)} \tag{4}\]
where x is the atomic fraction of substitutional defects, \(\mathrm{E}\big{(}\mathrm{M_{1-x}D_{x}}\big{)}\) is the total energy per atom of the compound, while \(\mathrm{E}\big{(}\mathrm{M}\big{)}\) and \(\mathrm{E}\big{(}\mathrm{D}\big{)}\) are the total energies per atom of the precursor species that compose it; the latter energies are obtained by relaxing the ground-state lattices of these species using the same aforementioned QE parameters as for the compound, as necessary to ensure computational consistency in the evaluation of accurate defect formation energies.
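As a minimal sketch, Eq. (4) translates directly into a one-line helper; the example energies below are placeholders, with x = 1/54 corresponding to one substitution in a 54-atom supercell.

```python
def formation_energy(e_compound, e_host, e_defect, x):
    """E_form(M_{1-x} D_x) of Eq. (4); all inputs are energies per atom."""
    return e_compound - (1.0 - x) * e_host - x * e_defect

# Placeholder energies (eV/atom) for one substitution in a 54-atom supercell.
print(formation_energy(-10.00, -10.02, -8.50, x=1.0 / 54))
```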
## 3 Results: Property predictions for substitutional alloying with CGNN
There is a large variety of ways to systematically evaluate the effect of substitutional alloying on the properties of crystals Kaxiras (2003). Here, we focus on two key questions:
1. Data Science of Defects: What is the effect of substituting the same defect in a large variety of systems?
2. Are there qualitative and quantitative effects from substituting various elemental defects into the same host crystalline matrix?
While it has not yet been possible to perform an exhaustive search of this kind, in this work, for the first question, the basin of host systems is represented by the Materials Project crystals dataset introduced in the previous sections Jain et al (2013), while for the second, the host systems are pure metallic bulk crystalline supercells of Al, Ni, Mo and Au.
### An elemental defect seeing a wealth of different crystalline environments
First, we focus on the prediction of system properties when the systematic substitutional defective process is applied, using the same replacement atom, on randomly selected sites of crystals from the Materials Project crystals dataset Jain et al (2013). The aim is to explore how material properties change after single-atom substitution, evaluating deviations from the original pure-crystal predictions and how they depend on the specific substitution. For this, we consider three elemental cases, Rb, Mn and H, as the key replacement atoms that we will mutually compare. The reason for this choice lies in the drastic elemental differences among their characteristics and the assumption that, on the basis of these, we might gain a deeper understanding of how the model behaves when predicting previously unseen defect-induced changes in the system properties, based on its learned notion of local environment. Table(3) reports, as an example, the values of the atomic weight, radius and electronic configuration of each considered replacement atom.
In Fig.(2), we display the predictions of MEGNet for the bulk moduli of Rb-defected systems with respect to the original, pure ones. As shown in Table(4), this kind of substitutional defect causes the largest root-mean-square deviation (RMSD) among the three considered atomic species, with a value of approximately 6.2 GPa. In this section, we only report plots referring to the largest-RMSD cases, but analogous plots can be found in Appendix Sec.(A).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & Weight(u) & Radius (pm) & El. Configuration \\ \hline \hline H & 1.008 & 53 & 1s\({}^{1}\) \\ \hline Mn & 54.938 & 161 & [Ar] 4s\({}^{2}\) 3d\({}^{5}\) \\ \hline Rb & 85.468 & 265 & [Kr] 5s\({}^{1}\) \\ \hline \end{tabular}
\end{table}
Table 3: Differences in the atomic properties of the three elements considered for the single-atom substitutional process. As an example, we report here the atomic weight, radius and electronic configuration.
The shear moduli predictions also show the largest RMSD for the case of a Rb defect, with a value of 0.0628 log(GPa). The results are shown in Fig.(3) and in Table(5). According to the model, both K\({}_{\rm VRH}\) and G\({}_{\rm VRH}\) show a tendency to decrease upon substitution of a Rb atom, implying an increase in compressibility and a decrease in hardness, respectively. It is worth noticing that even though the only physical atomic
\begin{table}
\begin{tabular}{|l|r|} \hline Defect & RMSD (GPa) \\ \hline \hline Rb & 6.1645 \\ \hline Mn & 3.1274 \\ \hline H & 1.2975 \\ \hline \end{tabular}
\end{table}
Table 4: RMSD (GPa) for the prediction of the bulk modulus (K\({}_{\rm VRH}\)) in Rb, Mn, and H single-atom substitutionally defected systems with respect to the non-defected ones.
Figure 2: Predicted bulk moduli in the Rb-defected MP crystals with respect to their prediction in non-defected ones. An example structure from the MP dataset is shown, CaN2, in which one atom of Ca has been replaced with a Rb atom. Here only the case of Rb-defected systems is shown, due to its largest RMSD among the set of considered defects. Similar plots for the Mn- and H-defected systems are contained in the Supplementary Materials.
feature that the model exploits is the atomic number, the prediction of larger changes in the structural properties for defects with larger radii can be regarded as a reasonable one.
Furthermore, in Fig.(4), we show the predicted deviations in the formation energy of the defected MP samples for the case of elemental H, and for all three elements in Tab.(6). As can be seen, the deviations are all on the order of 0.02 eV/atom, and become progressively less prominent going from H to Rb and Mn.
### A crystalline matrix seeing a wealth of different elemental defects
The case of systematically changing the single-atom substitutional defect species in the same crystalline matrix is complementary to the previously addressed one. We focus on pure metallic bulk host matrices, namely Al, Ni, Mo and Au, even though the test could be done on any crystalline matrix. In Fig.(5), we show the key aspects of the performed calculation, with the supercell shown on the left (a) and the elements considered on the right (b). The compositional space considered for the single-atom substitution covers almost the entire periodic table, and also comprises the species highlighted as host matrices when they are not selected as such. Even though our focus is on simple systems, the process can be insightful regarding the capabilities of the model to learn and predict, with minimal input, physico-chemical trends throughout the periodic table.
We show periodic table plots for the properties of interest, K\({}_{\rm VRH}\), G\({}_{\rm VRH}\) and E\({}_{\rm form}\), for the host matrix which displays the largest deviations in the properties, namely Mo; analogous plots for the other matrices are included in the Supplementary Material (SM). With the help of this visualization style, we characterize the predicted effect of the chemical distance between the substitutional defects and the host matrix on the properties of interest, and how it correlates with the well known trends along the periodic table. In Fig.(6), we show the predicted variation of the host supercell bulk modulus when defected with one of the elements from the periodic table. In particular, in the plot we refer to
\[\rm K^{\prime}{}_{\rm VRH}=K_{VRH}^{Mo(X)}-K_{VRH}^{Mo} \tag{5}\]
where K\({}_{\rm VRH}^{Mo(X)}\) is the bulk modulus of the Mo matrix defected by atom X, and K\({}_{\rm VRH}^{Mo}\) its value for pure Mo (labelled with a red flag in the figure).
Even though Sr represents an outstanding outlier in decreasing the bulk modulus of the Mo crystal, at this scale it is still possible to appreciate how the
Figure 5: Periodic table plot (b) explaining the process of selecting a set of host matrices (in light blue) and substitutional defects (in green). For a given selected host matrix, the non-selected ones represent defects too. The host matrices are \(3\times 3\times 3\) supercells of the highlighted species (a).
variation happens along the periods: for the 3d, 4d and 5d transition metals from the 3rd to the 12th group, the defect-induced variations are mainly small, while a tendency to increase, in modulus, is observed in the post-transition metals and, remarkably, in the alkali and alkaline-earth metals. This behaviour can be interpreted in terms of the well known variations of the bulk modulus along the periodic table of elements, which appear correlated with the defect-induced ones contained in this plot. The effect of substitutional alloying species which, in their bulk and standard temperature and pressure (STP) conditions, show a lower bulk modulus, is, as a tendency, to lower the bulk modulus of their host system.
Due to the strong effect induced by the Sr defect, we further focus on transition metals to evaluate how defect-induced variations fluctuate in the
Figure 6: Predicted bulk modulus variation (K’\({}_{\mathrm{VRH}}\)) for a single-atom substitutionally defected Mo supercell with respect to the undefected one, for each possible defect atomic species from the provided periodic table. Similar plots for Al, Au and Ni supercells are provided in the Supplementary Materials. The red flag highlights the zero relative difference, meaning the pure Mo matrix selection.
compositional vicinity of the host matrix. Fig.(7) shows the results along the 3d, 4d and 5d transition series, where the variations are harder to distinguish than in the prior periodic table plot. We can recognize a decreasing trend that supports our interpretation, but in the compositional vicinity of the host matrix the fluctuations in the defect-induced variations are comparable with the MAE of the bulk modulus prediction, as are the differences between the curves.
However, in view of a trend analysis of the variations along the periodic table, the alkali (like K), alkaline-earth (like Sr and Ba) and post-transition metals (like Se and Te) can support the given interpretation in terms of correlation with the bulk modulus of the impurity, given that their associated fluctuations are much larger than the model prediction errors.
Figure 7: Predicted bulk modulus variation (K’\({}_{\rm VRH}\)) for a single-atom substitutionally defected Mo supercell with respect to the undefected one, along the 3d, 4d and 5d series of the periodic table. The black dashed line highlights the pure Mo case.
We may perform analogous investigations for the shear modulus of a pure host matrix, like Mo, and for how it is influenced by single-atom substitutional alloying spanning almost the entire periodic table, as shown in Fig.(8). We follow the same protocol and consider the variation \(\rm G^{\prime}_{VRH}=G_{VRH}^{Mo(X)}-G_{VRH}^{Mo}\). In this case, the colormap reveals Si as an outlier towards hardening of the Mo matrix. This feature cannot be explained by a correlation between defect and defect-induced property trends, since the predicted value is larger than the model prediction error on the shear modulus. We believe that the power of this investigation method lies in paving the way to an efficient exploration of substitutional alloying, with the twofold possibility of looking for comparable performances (discovery of alternatives) or outstanding ones (discovery of exceptionals). Similarly to the investigation of the bulk modulus, the alkali and alkaline-earth metals like K, Rb, Sr and Ba are among the substitutional species providing the largest decreases in the shear modulus.
Overall, the effects and fluctuations caused by the substitutional defects on the defected host can always be highlighted, but it is not the aim of this work to find an exhaustive explanation for the existing predictions: the reasons for such trends may lie in any of the input parameters, such as the atomic number and bond lengths, or in an abstract notion of local environment that is good enough to show reasonable correlations with existing alternative descriptors (i.e. atomic properties). The variations along the 3d, 4d and 5d transition metals are, in most cases, below the model MAE for \(\rm G_{VRH}\) predictions; therefore, no meaningful extrapolation is possible, but we report the plot in Fig.(B9) of the Appendix.
As concerns the formation energy, by its definition, the results in Fig.(9) can be interpreted as the gain or loss in stability after the single-atom substitutional alloying has taken place. Fluorine represents a strong outlier, raising the formation energy by 0.14 eV/atom with respect to the pure Mo matrix. As shown in Fig.(10), there is an evident trend spanning the periods, and an overall interesting correlation can be found with the known trends of atomic electronegativity along the periodic table, suggesting that the higher the electronegativity of the substitutional defect in the Mo matrix, the higher the resulting formation energy.
Figure 8: Predicted shear modulus variation (G’\({}_{\rm VRH}\)) for a single-atom substitutionally defected Mo supercell with respect to the undefected one, for each possible defect atomic species from the provided periodic table. Similar plots for Al, Au and Ni supercells are provided in the Supplementary Materials. The red flag highlights the zero relative difference, meaning the pure Mo matrix selection.
Figure 9: Predicted formation energy variation (E’\({}_{\rm form}\)) for a single-atom substitutionally defected Mo supercell with respect to the undefected one, for each possible defect atomic species from the provided periodic table. Similar plots for Al, Au and Ni supercells are provided in the Supplementary Materials. The red flag highlights the zero relative difference, meaning the pure Mo matrix selection.
### Property prediction - DFT validation
The validation of the predictions of artificial intelligence methods is a crucial step to quantify their quality. Even though the proposed graph-network-based method has already been validated for its predictions on bulk crystals, this work aims at testing its capabilities in the presence of single-atom substitutional defects. As explained in Sec.2.2, we compare the model predictions of K\({}_{\mathrm{VRH}}\), G\({}_{\mathrm{VRH}}\) and E\({}_{\mathrm{form}}\) with the DFT-based ones, obtained with the THERMO_PW driver of QE. Table(7) shows the validation results for these properties in the case of an Al matrix and single-atom substitutional defects, including B, C, I, Ni and Zr.
Comparing the MAE values of our DFT calculations with those of the model for non-defected systems in Table(2), we find good performances of the
Figure 10: Predicted formation energy variation (E’\({}_{\mathrm{form}}\)) for a single-atom substitutionally defected Mo supercell with respect to the undefected one, along the 3d, 4d and 5d series of the periodic table. The black dashed line highlights the pure Mo case.
model with respect to the K\({}_{\rm VRH}\) and G\({}_{\rm VRH}\) predictions, but large errors when it comes to the formation energies. The first may be regarded as a success, given that the model is predicting properties of a new class of systems and given the computational limitations. The second is a negative signature, even though (i) the defect-dependent nature of the error's order of magnitude opens a deeper window of investigation into its causes, and (ii) the validation set for defected systems is extremely small compared to the undefected MP dataset, on which the initial MAEs have been evaluated.
### Size effects
In substitutional alloying, the defect atomic species is usually present in a dilute concentration, in the range of 0.1-10%. For this reason, it is of interest to study how the property predictions vary with the defect concentration, which we propose in the saturation plots of Fig.(11), Fig.(12) and Fig.(13). Our example system follows the choice of the Mo host matrix, with the H, Mn and Rb substitutional defects we focused on, respectively, in the second and first part of the previously reported results.
The predictions saturate as the defect concentration approaches the asymptotic over-dilute level (\(<0.1\%\)). Moreover, while the defect formation energy, as expected, shows a common descent to the zero level for all the defect species, the predicted structural properties seem to be sensitive to them, with the Rb-defected Mo crystal conserving a visible difference in the property value even at the limit of 0.1% concentration, both for K\({}_{\rm VRH}\) and G\({}_{\rm VRH}\). Even though it is beyond the aims of the present work to validate the asymptotic-size behaviour of the predictions with accurate but expensive DFT calculations, we believe these results stand in favour of a positive model understanding of a defected crystalline environment.
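For reference, the defect concentration implied by a single substitution in an \(n\times n\times n\) supercell follows directly from the atom count; the sketch below assumes a BCC conventional cell with two atoms, as appropriate for Mo.

```python
# Single-substitution defect concentration vs. supercell size (BCC, 2 atoms/cell).
for n in (2, 3, 8):
    atoms = 2 * n ** 3
    print(f"{n}x{n}x{n}: {atoms} atoms, x = {1 / atoms:.4%}")
# 2x2x2: 16 atoms, x = 6.2500%
# 3x3x3: 54 atoms, x = 1.8519%
# 8x8x8: 1024 atoms, x = 0.0977%
```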
Figure 11: Saturation plot of a Mo host matrix K\({}_{\rm VRH}\) when substitutionally defected with H, Mn or Rb, for different supercell sizes.
Figure 12: Saturation plot of a Mo host matrix \(\rm G_{VRH}\) when substitutionally defected with H, Mn or Rb, for different supercell sizes.
Figure 13: Saturation plot of a Mo host matrix \(\rm E_{form}\) when substitutionally defected with H, Mn or Rb, for different supercell sizes.
Let us continue along the previously paved path of analysis, which also looks into the effects of systematically changing the substitutional atomic species among the large variety contained in the periodic table, as previously done in Fig.(6), with the only exception of selecting the smallest and largest supercell sizes from the previous saturation plots of Fig.(11), respectively \(2\times 2\times 2\) (\(\approx 6\%\) concentration, 16-atom cell) and \(8\times 8\times 8\) (\(\approx 0.1\%\) concentration, 1024-atom cell). In Fig.(14), focusing on the first row of plots, which deal with the K'\({}_{\rm VRH}\) variation, and comparing with the previously investigated supercell case of size \(3\times 3\times 3\) (\(\approx 2\%\) concentration) of Fig.(6), one can notice that the scale of the property variations changes accordingly: smaller defect concentrations lead to smaller defect-induced effects, and vice versa, as expected. In particular, in the largest supercell case the range of induced variations is reduced to 3% of that of the \(3\times 3\times 3\) system. However, the composition-map outliers are left unchanged. The shear modulus variations, in the second row of plots, see the emergence of new outliers in the chemical neighbourhoods of the ones previously obtained in the \(3\times 3\times 3\) supercells, while the formation energy variations undergo both a change in scale and a complete change in the map outliers. Following the brief assessment of prediction quality performed for each of the interesting properties in the previous section, we expect the formation energy variations to suffer from non-negligible fluctuations, leading to the possible need for further validation. However, the main aim of the present discussion is to underline the power of the overall approach in highlighting the path towards composition search in substitutional alloying, which can effectively drive towards specific desired emerging properties.
## 4 Conclusions
In this work, we utilized a convolutional graph neural network, based on the MEGNet architecture Chen et al (2022), in order to attempt the design of novel alloys. Alloying involves substitutional and interstitial additions at relatively low concentrations; thus, single-defect properties should be informative for overall design capabilities and guidance. For this purpose, we utilized the MP database, and we focused on the prediction of the properties of single-atom
Figure 14: Predicted variations K’\({}_{\mathrm{VRH}}\) (first-row plots), G’\({}_{\mathrm{VRH}}\) (second-row plots) and E’\({}_{\mathrm{form}}\) (third-row plots) for a single-atom substitutionally defected Mo supercell with respect to the undefected one, for each possible defect atomic species from the provided periodic table. In the plots of column a) a \(2\times 2\times 2\) supercell is considered, while in column b) an \(8\times 8\times 8\) supercell.
substitutionally defected bulk crystals, in the context of both (i) systematic substitution with a specific set of species in a wide variety of crystals from the entire Materials Project dataset and (ii) systematic substitution with a variety of atomic species in a specific set of pure bulk crystals. We also validated some of the results with our own DFT calculations. We believe that such approaches might provide novel insights into alloy design, especially if the predictions include extended lattice defects such as dislocations and/or grain boundaries.
Acknowledgments. We would like to thank J. Llorca for insights and fruitful discussions and suggestions. This research was funded in part by the European Union Horizon 2020 research and innovation program under Grant Agreement No. 857470 and from the European Regional Development Fund via the Foundation for Polish Science International Research Agenda PLUS program, Grant No. MAB PLUS/2018/8.
### Data availability
Requests for additional data and availability can be directed to the provided lead contact. The employed DFT data, ML model and database can be found on the GitHub: [https://github.com/danielcieslinski/suballoy](https://github.com/danielcieslinski/suballoy)
## Declarations
|
2306.03054 | Discriminative Adversarial Privacy: Balancing Accuracy and Membership
Privacy in Neural Networks | The remarkable proliferation of deep learning across various industries has
underscored the importance of data privacy and security in AI pipelines. As the
evolution of sophisticated Membership Inference Attacks (MIAs) threatens the
secrecy of individual-specific information used for training deep learning
models, Differential Privacy (DP) raises as one of the most utilized techniques
to protect models against malicious attacks. However, despite its proven
theoretical properties, DP can significantly hamper model performance and
increase training time, turning its use impractical in real-world scenarios.
Tackling this issue, we present Discriminative Adversarial Privacy (DAP), a
novel learning technique designed to address the limitations of DP by achieving
a balance between model performance, speed, and privacy. DAP relies on
adversarial training based on a novel loss function able to minimise the
prediction error while maximising the MIA's error. In addition, we introduce a
novel metric named Accuracy Over Privacy (AOP) to capture the
performance-privacy trade-off. Finally, to validate our claims, we compare DAP
with diverse DP scenarios, providing an analysis of the results from
performance, time, and privacy preservation perspectives. | Eugenio Lomurno, Alberto Archetti, Francesca Ausonio, Matteo Matteucci | 2023-06-05T17:25:45Z | http://arxiv.org/abs/2306.03054v1 | # Discriminative Adversarial Privacy: Balancing Accuracy and Membership Privacy in Neural Networks
###### Abstract
The remarkable proliferation of deep learning across various industries has underscored the importance of data privacy and security in AI pipelines. As the evolution of sophisticated Membership Inference Attacks (MIAs) threatens the secrecy of individual-specific information used for training deep learning models, Differential Privacy (DP) arises as one of the most utilized techniques to protect models against malicious attacks. However, despite its proven theoretical properties, DP can significantly hamper model performance and increase training time, making its use impractical in real-world scenarios. Tackling this issue, we present Discriminative Adversarial Privacy (DAP), a novel learning technique designed to address the limitations of DP by achieving a balance between model performance, speed, and privacy. DAP relies on adversarial training based on a novel loss function able to minimise the prediction error while maximising the MIA's error. In addition, we introduce a novel metric named Accuracy Over Privacy (AOP) to capture the performance-privacy trade-off. Finally, to validate our claims, we compare DAP with diverse DP scenarios, providing an analysis of the results from performance, time, and privacy preservation perspectives.
## 1 Introduction
The burgeoning interest and application of deep learning across diverse industries and domains has been remarkable in recent years. This surge can be ascribed to several pivotal factors including the accessibility of massive data volumes, the enhancements in computational resources, and the evolution of neural network architectures and optimization algorithms. However, with the widespread use of deep learning models and their increasing influence on society and daily life, the security and protection of the sensitive data used to train such models have become an essential concern. In an era of stringent data protection legislations like the European General Data Protection Regulation [1] and the Chinese Cyber Security Law [2], threats and potential data security breaches evolve at a pace that is often faster than legislative response. One such threat is a class of attacks known as Membership Inference Attacks (MIAs) [3], which aim to deduce whether certain individual-specific information was part of the training dataset of a machine learning model. Such attacks pose a formidable challenge and lay the ground for more sophisticated and potentially harmful breaches. Despite their recognition in the literature, no formally effective countermeasure is widely adopted in the deep learning development community.
Among the different solutions to increase the resistance of machine learning models to MIAs, the incorporation of Differential Privacy (DP) within training optimisers has stood out for its potential. DP is frequently considered the go-to mechanism for guaranteeing privacy due to its theoretical properties and robustness [4]. However, research has shown that the high level of privacy offered by DP comes at a cost, as adopting a high level of DP usually results in severe performance loss and increased training time. This makes DP impractical for many real-world applications and sometimes even for simple simulations [5, 6].
In this paper, we introduce a novel privacy-preserving learning technique, called Discriminative Adversarial Privacy (DAP). DAP leverages the structure of MIAs to accomplish multi-objective adversarial learning. With our approach, the training process of deep learning models is faster than with DP, and the final model achieves higher performance at a comparable privacy level. Specifically, DAP employs a discriminator trained via the MIA technique of shadow
models and a novel loss function minimising the prediction error while maximising the attacker's error. This approach results in models that offer privacy competitive to those achieved by DP, yet with significantly reduced performance loss and faster training time.
With this work, we provide the following contributions:
* We propose a novel learning technique called Discriminative Adversarial Privacy or DAP, that combines adversarial learning and membership inference attack principles. This technique is designed to ensure an optimal balance between model performance, speed, and privacy.
* We introduce a novel loss function for DAP that is specifically tailored to simultaneously minimise the prediction error while maximising the attacker's error.
* We define a novel metric, namely Accuracy Over Privacy or AOP, to efficiently capture and handle the performance-privacy trade-off.
* We substantiate our claims with rigorous empirical validation, providing extensive experimental results that demonstrate DAP's comparative advantage over DP in terms of performance, training time, and privacy preservation.
## 2 Related Works
**Membership Inference Attacks.** The family of attacks known as Membership Inference Attacks (MIAs) is one of the biggest threats to deep learning models. MIAs are incredibly versatile and effective, leading to a growing research interest both in terms of the development of new attack algorithms and defensive countermeasures [7]. MIAs consist of determining, given a machine learning model, whether or not a given record was included in its training dataset. In practice, a MIA model is a binary classifier that can distinguish whether or not a record belongs to the training set of an already trained target model. The challenge is to carry out the MIA in the real world with little useful information for the attacker, such as in machine-learning-as-a-service scenarios. Shokri _et al._ [3] pioneered one of the first and, still to this day, highly effective MIA algorithms, based on the assumption that over-parameterised models could memorise information about individual training samples beyond the generalisation of the problem for which they were trained. Assuming the structure and learning algorithm of the target model are known to the attacker, Shokri _et al._ propose a training technique that trains several models - called _shadow_ models - to emulate the target model's behaviour. In this way, the attacker can leverage the predictions of such models to build a MIA discriminator, able to identify whether a sample has been used or not in the training procedure of the target model.
Numerous studies extended this technique and expanded the attack surface. For instance, Chen _et al._ used data poisoning to enhance the MIA precision while hiding the attack traces by minimising test-time performance degradation [8]. He _et al._ demonstrated the feasibility of MIA against models trained via self-supervised learning, and explored early stopping as a potential countermeasure, albeit at the expense of the model's utility [9]. Recently, researchers evaluated the effectiveness of MIAs against Generative Adversarial Networks (GANs) [10; 11], diffusion models [12; 13], recommender systems [14; 15], semantic segmentation [16; 17], and text-to-image models [18].
**Differential Privacy.** Historically, Differential Privacy (DP) has been the primary defence against MIAs. It is a procedure designed to provide robust protection for individual-level information in a dataset [19]. The application of DP ensures that the inclusion or exclusion of any individual sample in a dataset does not significantly alter the results of statistical analyses or machine learning models trained on that dataset. DP is frequently presented in its relaxed form, referred to as \((\varepsilon,\delta)\)-DP. Formally, a randomised mechanism \(M:D\to R\), with domain \(D\) and range \(R\), satisfies \((\varepsilon,\delta)\)-DP if the following inequality holds for any two adjacent inputs \(d\), \(d^{\prime}\in D\) and any subset of outputs \(S\subseteq R\):
\[\Pr[M(d)\in S]\leq e^{\varepsilon}\Pr[M(d^{\prime})\in S]+\delta. \tag{1}\]
In Equation (1), the \(\varepsilon\) parameter, known as the privacy budget, denotes the maximum allowable information leakage, with a lower \(\varepsilon\) value indicating stronger privacy. Conversely, the additive \(\delta\) term represents the probability of privacy preservation being violated.
In a machine learning context, the DP framework achieves its goal by introducing randomness into the data analysis process, in a manner that obscures the contribution of any single individual's data. Usually, this randomisation is implemented as additive noise summed to the original data, to the intermediate matrices of the training algorithm, or through ad-hoc subsampling of the dataset. Abadi _et al._ [4] pioneered the concept of Differentially-Private Stochastic Gradient Descent (DP-SGD), which has been affirmed as one of the most prevalent differentially-private optimisers within the deep learning literature. DP-SGD introduces Gaussian noise to the gradient computation with a standard deviation controlled by \(\varepsilon\). This step ensures that the gradients are sufficiently randomised, thereby hindering an
attacker's ability to infer information about individual data points from the model's parameters. This approach has been successfully applied across various domains, particularly in federated learning applications [20; 21; 22].
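As an illustration of this mechanism (a minimal sketch, not the implementation of [4]), one DP-SGD step clips each per-sample gradient to an L2 bound \(C\) and adds Gaussian noise scaled by a noise multiplier \(\sigma\) derived from \((\varepsilon,\delta)\); the function and variable names below are ours:

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, lr, clip_norm, noise_multiplier, rng):
    """One DP-SGD update: clip each per-sample gradient, sum, add noise, average.

    per_sample_grads: (batch_size, n_params) gradients, one row per sample.
    clip_norm: L2 clipping bound C; noise_multiplier: sigma, set from (eps, delta).
    """
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    # Scale each row so that its L2 norm is at most C.
    clipped = per_sample_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    # Gaussian noise calibrated to the sensitivity of the clipped sum.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    grad = (clipped.sum(axis=0) + noise) / per_sample_grads.shape[0]
    return params - lr * grad

# Usage: params = dp_sgd_step(params, grads, 1e-3, 1.0, 1.1, np.random.default_rng(0))
```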
**Alternative Privacy Preserving Techniques.** While DP offers numerous advantages such as increased trust in data analysis results and enhanced fairness in decision-making processes that rely on data, it does come with its own set of challenges. These primarily revolve around the trade-off between privacy protection and data utility and the computational complexity that arises while implementing DP mechanisms [5; 6]. Given these constraints, the literature has seen the emergence of alternatives to DP and its variants. Chen _et al._ proposed an alternative to DP, called RelaxLoss [23]. The key concept of this training framework is the relaxation of the entropy loss function, with the goal of reducing the generalisation gap and privacy leakage in machine learning models. Kaya and Dumitras [24] evaluated the efficacy of data augmentation mechanisms against MIAs in image classification tasks. Their study encompassed seven different mechanisms, including differential privacy. They found that augmenting data to improve model utility did not mitigate the risk of MIAs. Furthermore, they delved into why the commonly utilised label smoothing mechanism amplified the risk of MIAs. Webster _et al._ introduced a general-purpose approach to tackle the issue of membership privacy in machine learning. Their solution involves the generation of surrogate datasets using images created by Generative Adversarial Networks (GANs), which are labelled with a classifier trained on the private dataset. They demonstrated that these surrogate datasets can be utilised for various downstream tasks and provide resistance against membership attacks. In their study, different GANs proposed in the literature were evaluated, revealing that GANs of higher quality yield better surrogate data for the given task [25]. Lomurno and Matteucci [5] presented a comparison of the effectiveness of the DP-SGD algorithm against standard optimisation practices with regularisation techniques. They compared the utility of the resulting models, their training performance, and the efficacy of MIAs against the learned models. Their empirical findings highlight the often superior privacy-preserving properties of dropout and \(l2\)-regularisation, given a fixed number of training epochs.
## 3 Method
**Discriminative Adversarial Privacy.** With this work, we introduce a novel learning framework for privacy-preserving deep learning, called Discriminative Adversarial Privacy (DAP). DAP is a learning framework to efficiently train high-performing deep learning models with strong resilience against MIAs. DAP uses a deep neural network classifier as a baseline, referred to as \(\mathcal{C}_{base}\), and trained using the hold-out technique on dataset \(Data\). Similarly, \(K\) shadow models, denoted as \(\mathcal{C}_{S,k}\), are trained using the hold-out technique, as prescribed by the MIA from Shokri _et al._ [3]. For each shadow model produced this way, ground truth, prediction, and loss are stored for each of its training and test samples, associating a binary label according to whether the sample belongs to the first or second set. Of these samples, only the misclassified ones are retained, as they are the most empirically informative in a discriminative context and, according to our ablation studies, lead to the best-performing results. These data are used to build the adversarial binary classification dataset \(Data_{MIA}\) to train the binary discriminator \(\mathcal{D}_{MIA}\). Once trained, the weights of this model are frozen.
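The construction of \(Data_{MIA}\) can be sketched as follows; this is an illustrative reconstruction (not the DAP codebase) that assumes each shadow model exposes a `predict` method returning class probabilities:

```python
import numpy as np

def build_mia_dataset(shadow_models, splits):
    """Collect (prediction, ground truth, loss, membership) records.

    shadow_models: list of trained classifiers with .predict(x) -> probabilities.
    splits: list of ((x_train, y_train), (x_test, y_test)) per shadow model.
    Only misclassified samples are retained, as in DAP.
    """
    records, labels = [], []
    for model, ((x_tr, y_tr), (x_te, y_te)) in zip(shadow_models, splits):
        for x, y, member in ((x_tr, y_tr, 1), (x_te, y_te, 0)):
            probs = model.predict(x)                       # (n, n_classes)
            loss = -np.log(probs[np.arange(len(y)), y] + 1e-12)
            wrong = probs.argmax(axis=1) != y              # keep misclassified only
            onehot = np.eye(probs.shape[1])[y]
            feats = np.concatenate([probs, onehot, loss[:, None]], axis=1)
            records.append(feats[wrong])
            labels.append(np.full(wrong.sum(), member))
    return np.concatenate(records), np.concatenate(labels)
```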
Figure 1: Overview of the Discriminative Adversarial Privacy (DAP) framework.
At this point, adversarial training is performed using \(\mathcal{D}_{MIA}\) and a new classifier \(\mathcal{C}_{DAP}\) with the same structure as \(\mathcal{C}_{base}\). In DAP, \(\mathcal{C}_{DAP}\) is trained to minimise the categorical cross-entropy loss as usual, i.e. to maximise the probability of assigning the correct class label to training examples. This error is used to update all the classifier's weights. Then, for each batch of data, the misclassified predictions from \(\mathcal{C}_{DAP}\) are collected together with their corresponding ground truth and loss. This secondary batch is fed through \(\mathcal{D}_{MIA}\), and its prediction error is computed so as to maximise the error of the discriminator, as in standard min-max adversarial training [26]. This secondary error is used to update the last fully connected layer of \(\mathcal{C}_{DAP}\), with the goal of reducing the probability that its outputs can be easily discriminated by the attacker. The optimisation procedure of DAP can be described as
\[\min_{\mathcal{C}}\max_{\mathcal{D}}\mathcal{L}(\mathcal{C},\mathcal{D},t)=\mathbb{E}_{x\sim p(x)}[\log(\mathcal{C}(x,t))]+\beta\,\mathbb{E}_{x,y\sim p(x,y)}[\log(1-\mathcal{D}(\mathcal{C}(x,t),y))]. \tag{2}\]
In Equation (2), \(x\) and \(y\) are respectively the training inputs and the ground truth labels, \(t\) is the current epoch, and \(\beta\) is a dynamic loss balancing parameter. \(\beta\) is crucial to ensure learning stability. In fact, the different nature of the two losses makes them not directly comparable in terms of magnitude depending on the training epoch and the specific data distribution. \(\beta\) is dynamically adjusted during training and it is computed as
\[\beta(\mathcal{C},\mathcal{D},t,r)=\begin{cases}\dfrac{\mathbb{E}[\log(\mathcal{C}(x,t-1))]_{v}}{\mathbb{E}[\log(1-\mathcal{D}(\mathcal{C}(x,t-1),y))]_{v}}\cdot r,&\text{if }t>0\\ 1,&\text{otherwise}\end{cases} \tag{3}\]
According to Equation (3), the value of \(\beta\) at time \(t\) is proportional to the ratio of the classification loss and the discrimination loss on the validation set at the previous step \(t-1\). Then, \(\beta\) is scaled by a hyperparameter \(r\) that weighs the contribution of the discriminator. As a final note, \(\beta\) is always set to \(1\) for \(t=0\). The overall DAP framework is described in Figure 1.
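A direct transcription of Equation (3), assuming the two expected losses are tracked on the validation set at the end of each epoch (names are illustrative):

```python
def dynamic_beta(t, r, val_cls_loss_prev, val_disc_loss_prev):
    """Loss-balancing weight beta of Equation (3).

    val_cls_loss_prev:  classification loss measured on the validation set
                        at epoch t-1.
    val_disc_loss_prev: discrimination loss on the validation set at t-1.
    r: hyperparameter weighing the discriminator's contribution.
    """
    if t == 0:          # beta is always 1 at the first epoch
        return 1.0
    return (val_cls_loss_prev / val_disc_loss_prev) * r
```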
**Accuracy Over Privacy.** When evaluating machine learning models in a privacy-preserving setting, it is vital to consider both model performance and the privacy of the underlying training data. However, measuring the trade-off between these two facets is a complex task. Quantifying privacy itself is nontrivial, and comparing metrics across different domains can be particularly challenging. For these reasons, we propose a novel metric called Accuracy Over Privacy (AOP), which provides a concise measure of the accuracy and privacy of a target model. Within the realm of MIAs, the efficacy of the attacking model is often measured using the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve of the attack. When the target model is a classifier, its performance can instead be assessed using the Top-1 Accuracy (ACC). Therefore, given the classifier ACC and the AUC of a MIA model (\(\text{AUC}_{\text{MIA}}\)), the AOP is computed as
\[\mathrm{AOP}(\lambda)=\frac{\mathrm{ACC}}{\left(2\max(\mathrm{AUC}_{\mathrm{MIA}},0.5)\right)^{\lambda}}. \tag{4}\]
Equation 4 summarizes the effects of ACC and \(\text{AUC}_{\text{MIA}}\) in a single metric. \(\lambda\geq 1\) weighs the importance of privacy when measuring the AOP.
The AOP metric exhibits several properties. Concerning its range, it is constrained in the interval \([0,1]\). For highly inaccurate models or models susceptible to MIAs, the AOP approaches 0. Conversely, the AOP is closer to 1 when models exhibit high accuracy and strong resilience against MIAs at the same time. The \(\lambda\) parameter is a key factor in the AOP metric, as it controls the impact of the privacy component. Figure 2 shows that increasing values of \(\lambda\) cause the AOP metric to shrink towards 0. Concerning the denominator, the max operator ensures that the AUC is never lower than the AUC of a random guessing model, which is equal to 0.5. Moreover, the denominator allows for obtaining AOP values equal to the classification accuracy for models that perfectly preserve privacy.
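For reference, a literal implementation of Equation (4) takes only a few lines:

```python
def aop(acc, auc_mia, lam=2.0):
    """Accuracy Over Privacy (Equation 4).

    acc: Top-1 accuracy of the target classifier, in [0, 1].
    auc_mia: AUC of the membership inference attack against it.
    lam: lambda >= 1, weighing the importance of privacy.
    """
    return acc / (2.0 * max(auc_mia, 0.5)) ** lam

# A perfectly private model (AUC = 0.5) keeps its accuracy as AOP:
assert aop(0.9, 0.5) == 0.9
# A fully exposed model (AUC = 1.0) is penalised by (2 * 1.0)^lambda:
assert aop(0.9, 1.0, lam=2.0) == 0.9 / 4.0
```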
Figure 2: From left to right, interpolation plots of AOP(\(\lambda\)) for \(\lambda=1\), \(2\), \(5\), and \(10\).
## 4 Experimental Setup
In order to guarantee the transparency and reproducibility of the study, this section provides a comprehensive description of the experiments conducted and their setup. Our proposed algorithm, DAP, operates in two different settings. In the first one, referred to as test DAP or \(\mathbf{DAP}_{t}\), shadow models are trained with the test set, simulating a situation where an external dataset, potentially public, is accessible to both the attacker and victim. \(\mathbf{DAP}_{t}\) allows deep learning engineers to proactively prevent potential attacks by employing the same dataset that could be used by the attacker. In the second setting, validation DAP or \(\mathbf{DAP}_{v}\), the shadow models are trained using the validation set. This situation mimics a typical scenario where the attacker's data distribution differs from that of the victim. In both of these modes, we maintain 10 shadow models and optimise the parameter \(r\) over a uniform range from 0 to 1 with increments of 0.025.
To ensure a fair evaluation, we compare DAP against several alternative approaches. We initially establish a **Baseline** model, consisting of the base classifier without any protective measures. Subsequently, following the methodology of Lomurno and Matteucci [5], we include a model, called **Reg**, that applies dropout regularization to each intermediate classifier weight and \(l2\) regularization to the model output. The dropout probability is tuned among 0.2, 0.33, and 0.5, while the \(l2\) weight among 0.1, 0.01, and 0.001. Furthermore, we extend our examination to incorporate models with (\(\varepsilon\),\(\delta\))-DP, retaining a constant \(\delta\) value equal to \(10^{-5}\) while adjusting the \(\varepsilon\) budget by modifying the number of training epochs. Specifically, we test four models with \(\varepsilon\) values of 0.5, 1, 2, and 4.
To limit the free parameters of the experiments, we maintain the same architecture for each classifier across all configurations, as illustrated on the left side of Figure 3. This choice was motivated by the considerable memory (spatial) complexity of DP training. The residual architecture of the discriminator employed in both the \(\mathbf{DAP}_{t}\) and \(\mathbf{DAP}_{v}\) models is depicted in Figure 3, where the proposed residual block is situated in the middle, and the overall structure is positioned on the right. All models are trained using the Adam optimizer with a learning rate chosen among \(10^{-5}\), \(10^{-4}\), and \(10^{-3}\) and a batch size of 32. Each model is trained to convergence with early stopping - with a patience of 25 epochs - on validation accuracy, except for DP models, where the number of training epochs is fixed and proportional to \(\varepsilon\).
The proposed models are evaluated with respect to classification Top-1 Accuracy, AUC of MIAs - performed using the toolkit provided by the TensorFlow Privacy library - and training epoch time. Furthermore, we employ our novel metric, the AOP, with \(\lambda=2\), to assess the trade-off between performance and privacy, with a particular emphasis on the latter. The study covers eight datasets: Cifar-10 [27], Cifar-100 [27], FMNIST [28], EuroSAT [29], TinyImagenet [30], OxfordFlowers [31], STL-10 [32], and Cinic-10 [33]. The experiments are conducted on a machine equipped with an Intel(R) Xeon(R) Gold 6238R CPU @ 2.20GHz and an Nvidia Quadro RTX 6000 GPU.
## 5 Results and Discussion
In this section, we comment on the results obtained from the set of experiments comparing DAP to regularization and DP for defense against MIAs. Table 1 collects the accuracy metrics for each model and dataset. As anticipated, the models incorporating DP yield the lowest accuracy scores, even with a high privacy budget (\(\varepsilon=4\)). Conversely, the regularized (Reg) model achieves consistently high accuracy, even occasionally outperforming the baseline model. Our proposed method, DAP, guarantees a consistent accuracy boost over its DP counterparts. A notable example of this is with the
Figure 3: The neural network architectures involved in the experiments. From left to right, the CNN used for each analyzed classifier and shadow models, the residual block specially designed for the DAP discriminator, and the overall architecture of the DAP discriminator.
EuroSAT dataset, where the DAP\({}_{t}\) and DAP\({}_{v}\) settings result in accuracy gains of 22% and 21% respectively, compared to the best-performing DP model. Concerning classification accuracy, in summary, the Reg model attains the highest accuracy performance on average, followed by DAP\({}_{t}\) and DAP\({}_{v}\).
Table 2 collects the results concerning MIAs conducted on the target models. Here, DAP\({}_{t}\) and DAP\({}_{v}\) exhibit average AUCs of 51.3% and 50.8%, respectively. This indicates that both approaches effectively safeguard against MIAs, rendering the attacks nearly akin to random guessing and achieving performances competitive with DP models. The Reg model, instead, nearly matches the privacy level of the baseline. This discrepancy between our results and the findings of Lomurno and Matteucci [5] is due to the different experimental conditions. In particular, our experiments run until convergence without fixing a specific number of epochs. In summary, DAP proves to be an effective training framework that produces models resilient against MIAs.
Table 3 collects the outcomes concerning the proposed AOP metric. These findings highlight the ability of DAP to produce private models that, at the same time, demonstrate competitive performance. In fact, DAP\({}_{t}\) and DAP\({}_{v}\) outperform DP and regularization in terms of the accuracy-privacy tradeoff. Concerning the Reg model, despite its susceptibility to MIAs, it is still a viable intermediate choice due to its superior accuracy. Conversely, DP models are extremely effective in MIA prevention but this advantage comes at the expense of the final accuracy, resulting in underperforming models. Notably, the AOP follows the trend of the privacy budget \(\varepsilon\). In fact, the most private model (DP with \(\varepsilon=0.5\)) is also the least performing due to the impactful addition of gradient noise.
Lastly, Table 4 collects the time per epoch required to train each model. Here, the Baseline and Reg models emerge as the fastest, while the DP models require about 8 times as long. DAP, on the other hand, manages to produce strong results both in terms of privacy and accuracy, requiring only twice the time of the baseline model.
Summarizing the results in terms of accuracy, privacy, AOP, and training time, DAP offers a better tradeoff than DP and regularization. Specifically, the DAP\({}_{t}\) setting produces high-performance models, albeit less private. In contrast, the DAP\({}_{v}\) setting produces models with strong privacy at a slight accuracy expense. Both settings handle the performance-privacy tradeoff far more effectively than DP in significantly less time.
\begin{table}
\begin{tabular}{|l||c|c c c c c c c|} \hline
**Dataset** & **Baseline** & **Reg** & \(\epsilon=0.5\) & \(\epsilon=1\) & \(\epsilon=2\) & \(\epsilon=4\) & **DAP\({}_{t}\)** & **DAP\({}_{v}\)** \\ \hline \hline
**Cifar-10** & 0.784 & **0.811** & 0.313 & 0.374 & 0.417 & 0.418 & 0.624 & 0.613 \\
**Cifar-100** & 0.481 & **0.532** & 0.039 & 0.083 & 0.090 & 0.072 & 0.315 & 0.276 \\
**FMNIST** & 0.932 & **0.926** & 0.605 & 0.701 & 0.736 & 0.774 & 0.866 & 0.871 \\
**EuroSAT** & 0.958 & **0.950** & 0.308 & 0.588 & 0.681 & 0.646 & 0.900 & 0.893 \\
**TinyImagenet** & 0.365 & **0.378** & 0.031 & 0.032 & 0.032 & 0.025 & 0.260 & 0.217 \\
**OxfordFlowers** & 0.566 & **0.659** & 0.031 & 0.051 & 0.087 & 0.139 & 0.290 & 0.257 \\
**STL-10** & 0.655 & **0.650** & 0.084 & 0.142 & 0.250 & 0.289 & 0.480 & 0.384 \\
**Cinic-10** & 0.673 & **0.709** & 0.280 & 0.341 & 0.391 & 0.405 & 0.577 & 0.586 \\ \hline
**Average** & 0.677 & **0.702** & 0.211 & 0.289 & 0.336 & 0.346 & 0.539 & 0.512 \\ \hline \end{tabular}
\end{table}
Table 1: The Accuracy metric on the test sets. Results improving the baseline are coloured in green, while results worse than the baseline are red. The best results among them are in **bold**, while the second best are underlined.
\begin{table}
\begin{tabular}{|l||c|c c c c c c c|} \hline
**Dataset** & **Baseline** & **Reg** & \(\epsilon=0.5\) & \(\epsilon=1\) & \(\epsilon=2\) & \(\epsilon=4\) & **DAP\({}_{t}\)** & **DAP\({}_{v}\)** \\ \hline \hline
**Cifar-10** & 0.648 & 0.631 & 0.505 & 0.526 & 0.519 & **0.503** & 0.507 & 0.505 \\
**Cifar-100** & 0.603 & 0.621 & **0.501** & 0.515 & 0.507 & 0.506 & 0.516 & 0.506 \\
**FMNIST** & 0.552 & 0.562 & **0.502** & **0.502** & 0.504 & 0.505 & 0.507 & 0.506 \\
**EuroSAT** & 0.544 & 0.528 & 0.505 & 0.502 & **0.500** & 0.502 & 0.501 & 0.501 \\
**TinyImagenet** & 0.603 & 0.592 & 0.514 & **0.501** & 0.521 & 0.504 & 0.516 & 0.509 \\
**OxfordFlowers** & 0.761 & 0.765 & 0.543 & 0.537 & 0.526 & 0.532 & 0.538 & **0.521** \\
**STL-10** & 0.604 & 0.563 & 0.502 & 0.524 & 0.505 & **0.501** & 0.508 & 0.506 \\
**Cinic-10** & 0.572 & 0.614 & **0.501** & 0.514 & 0.511 & 0.507 & 0.513 & 0.507 \\ \hline
**Average** & 0.611 & 0.609 & 0.509 & 0.514 & 0.511 & **0.507** & 0.513 & 0.508 \\ \hline \end{tabular}
\end{table}
Table 2: The AUC metric of the MIAs. Results improving the baseline are coloured in green, while results worse than the baseline are red. The best results among them are in **bold**, while the second best are underlined.
## 6 Conclusion
In this work, we introduced the Discriminative Adversarial Privacy (DAP) framework and the Accuracy Over Privacy (AOP) metric. The goal of DAP is to produce deep learning models resilient to Membership Inference Attacks (MIAs), while the AOP summarizes the privacy-accuracy tradeoff in a single value. As shown in the experiments, DAP demonstrated superior ability in maintaining a beneficial balance between model performance and privacy, outperforming models based on Differential Privacy (DP). The AOP metric has effectively encapsulated these results, providing a concise yet robust evaluation criterion. On top of that, DAP required considerably less computational overhead, thus accelerating the training process with respect to DP. Collectively, our contributions offer a promising approach to the development and evaluation of deep learning models resilient against MIAs, providing an optimal balance between execution time, accuracy, and privacy.
## Acknowledgment
This project has been supported by AI-SPRINT: AI in Secure Privacy-Preserving Computing Continuum (European Union H2020 grant agreement No. 101016577) and FAIR: Future Artificial Intelligence Research (NextGenerationEU, PNRR-PE-AI scheme, M4C2, investment 1.3, line on Artificial Intelligence).
|
2308.02194 | Paired Competing Neurons Improving STDP Supervised Local Learning In
Spiking Neural Networks | Direct training of Spiking Neural Networks (SNNs) on neuromorphic hardware
has the potential to significantly reduce the energy consumption of artificial
neural network training. SNNs trained with Spike Timing-Dependent Plasticity
(STDP) benefit from gradient-free and unsupervised local learning, which can be
easily implemented on ultra-low-power neuromorphic hardware. However,
classification tasks cannot be performed solely with unsupervised STDP. In this
paper, we propose Stabilized Supervised STDP (S2-STDP), a supervised STDP
learning rule to train the classification layer of an SNN equipped with
unsupervised STDP for feature extraction. S2-STDP integrates error-modulated
weight updates that align neuron spikes with desired timestamps derived from
the average firing time within the layer. Then, we introduce a training
architecture called Paired Competing Neurons (PCN) to further enhance the
learning capabilities of our classification layer trained with S2-STDP. PCN
associates each class with paired neurons and encourages neuron specialization
toward target or non-target samples through intra-class competition. We
evaluate our methods on image recognition datasets, including MNIST,
Fashion-MNIST, and CIFAR-10. Results show that our methods outperform
state-of-the-art supervised STDP learning rules, for comparable architectures
and numbers of neurons. Further analysis demonstrates that the use of PCN
enhances the performance of S2-STDP, regardless of the hyperparameter set and
without introducing any additional hyperparameters. | Gaspard Goupy, Pierre Tirilly, Ioan Marius Bilasco | 2023-08-04T08:20:54Z | http://arxiv.org/abs/2308.02194v2 | # Paired Competing Neurons Improving STDP Supervised Local Learning In Spiking Neural Networks
###### Abstract
Direct training of Spiking Neural Networks (SNNs) on neuromorphic hardware has the potential to significantly reduce the high energy consumption of Artificial Neural Networks (ANNs) training on modern computers. The biological plausibility of SNNs allows them to benefit from bio-inspired plasticity rules, such as Spike Timing-Dependent Plasticity (STDP). STDP offers gradient-free and unsupervised local learning, which can be easily implemented on neuromorphic hardware. However, relying solely on unsupervised STDP to perform classification tasks is not enough. In this paper, we propose Stabilized Supervised STDP (S2-STDP), a supervised STDP learning rule to train the classification layer of an SNN equipped with unsupervised STDP. S2-STDP integrates error-modulated weight updates that align neuron spikes with desired timestamps derived from the average firing time within the layer. Then, we introduce a training architecture called Paired Competing Neurons (PCN) to further enhance the learning capabilities of our classification layer trained with S2-STDP. PCN associates each class with paired neurons and encourages neuron specialization through intra-class competition. We evaluated our proposed methods on image recognition datasets, including MNIST, Fashion-MNIST, and CIFAR-10. Results showed that our methods outperform the current supervised STDP-based state of the art, for comparable architectures and numbers of neurons. Moreover, the use of PCN enhances the performance of S2-STDP, regardless of the configuration, and without introducing any additional hyperparameters. Further analysis demonstrated that our methods exhibit improved hyperparameter robustness, which reduces the need for tuning.
Spiking Neural Networks, Image Recognition, Supervised Local Learning, STDP, Intra-class competition.
## I Introduction
Spiking Neural Networks (SNNs), often considered as the third generation of artificial neural networks, are alternatives that more closely mimic brain behavior [1], compared to second-generation Artificial Neural Networks (ANNs). Spiking neurons communicate through discrete binary signals, called spikes, to transmit information, allowing asynchronous processing with spatiotemporal dynamics. Recently, SNNs have gained growing attention due to their energy-efficient computing ability when implemented on dedicated neuromorphic hardware [2, 3, 4]. In particular, on-chip training is of special interest, since the training phase is known to consume the most energy.
However, training SNNs directly on neuromorphic hardware remains a significant challenge. Due to the non-differentiable nature of the spike generation function, the use of the gradient-based Backpropagation (BP) algorithm is not as straightforward as it is for traditional ANNs. Many approaches have been proposed in the literature [5, 6] to adapt the BP algorithm for SNNs, mainly relying on gradient approximation. Nevertheless, although these methods have achieved state-of-the-art results, they are challenging to implement on neuromorphic hardware as they require global communication during backpropagation. Consequently, other approaches attempt to make gradient computation local, notably by utilizing feedback connections [7, 8] or by employing a layer-wise cost function [9, 10]. These approaches still do not solve the gradient approximation problem.
On the other hand, the Spike Timing-Dependent Plasticity (STDP) [11] is a bio-inspired unsupervised learning rule based on Hebb's theory [12]. In this rule, the synaptic weights are updated locally, i.e., only with the information from pre and postsynaptic neurons, which greatly facilitates implementation on neuromorphic hardware. Convolutional SNNs (CSNNs) trained with STDP have demonstrated the ability to extract relevant features from images [13, 14, 15, 16]. In addition, they benefit from unsupervised learning, which reduces the dependency on labels and improves robustness to changing data. However, these SNNs still require an additional supervised module to perform the classification from the extracted features. To leverage the potential of unsupervised STDP in neuromorphic hardware and enable end-to-end SNN solutions, spike-based classifiers with supervised learning must be designed.
In particular, combining unsupervised STDP training for feature extraction and supervised STDP training for classification, as shown in Figure 1, is very appealing [17, 18]. First, both training processes rely on the same rule, which facilitates hardware implementation. Second, unsupervised training can be used to reduce the number of supervised layers, and hence, the reliance on labeled data. Third, unlike BP-based methods, supervised training with STDP may eliminate the need for global communication and gradient approximation. Several supervised adaptations of STDP are reported in the literature [18, 19, 20, 21, 22, 23]. Among them, Reward-modulated STDP (R-STDP) [18] is a reinforcement learning rule based on Winner-Takes-All (WTA) competition that modulates the polarity of the STDP update to apply
rewards and punishments. R-STDP has shown effectiveness on simple image recognition datasets (such as MNIST [24]) with SNNs combining unsupervised STDP and supervised R-STDP, but has not been evaluated on complex datasets such as CIFAR-10 [25]. More recently, Supervised STDP (SSTDP) [22] proposes a method to modulate both the polarity and intensity of the STDP update based on temporal neuron error. The rule enables effective learning in deep networks (using global communication) and has demonstrated very good performance on various image recognition datasets, including CIFAR-10. However, it has not yet been investigated in SNNs incorporating unsupervised STDP.
In this paper, we focus on the supervised training of a spike-based classification layer with temporal decision-making in a two-layer SNN. The SNN, illustrated in Figure 1, comprises a feature extraction layer trained with unsupervised STDP and a classification layer trained with supervised STDP. The main contributions of this paper include the following:
1. To train the classification layer, we propose Stabilized Supervised STDP (S2-STDP), a supervised STDP learning rule where the polarity and the intensity of the updates are modulated by an error signal. In this rule, neurons update their weights to align their spikes with desired timestamps derived from the average firing time within the layer. S2-STDP is adapted from SSTDP.
2. To further enhance the learning capabilities of our S2-STDP classification layer, we introduce a training architecture called Paired Competing Neurons (PCN). This architecture associates each class with paired neurons and encourages neuron specialization on target or non-target samples through intra-class competition.
3. We evaluate the performance of S2-STDP and PCN on three image recognition datasets of growing complexity: MNIST, Fashion-MNIST, and CIFAR-10.
The remainder of this paper is organized as follows. Section II provides the necessary background information about the SNN employed in this study. Section III outlines some issues encountered with SSTDP with our SNN architecture, which serve as the basis for our contribution. Section IV presents a detailed description of our spike-based classification layer and our proposed training methods. Section V covers our results on image recognition datasets and provides an in-depth investigation of the key characteristics of our methods. Section VI concludes the paper. The source code is publicly available at: [https://gitlab.univ-lille.fr/gaspard.goupy/snn-pcn](https://gitlab.univ-lille.fr/gaspard.goupy/snn-pcn).
## II Background
### _Neural coding_
Since spiking neurons communicate through spikes, encoding the image is a necessary step before the SNN can process it. Two main coding schemes exist in the literature [26]: rate coding and temporal coding. Rate coding represents the information in the neuron firing rate, whereas temporal coding represents the information through the precise timing of the spike. Hence, temporal coding is more efficient in terms of energy consumption, making it particularly suitable for hardware implementation on low-power devices. In this work, we use a temporal scheme called latency coding [27], formulated as:
\[t\left(x\right)=T_{\max}-x \tag{1}\]
where \(x\) is the normalized pixel value, \(t(x)\) the encoded timestamp, and \(T_{\max}\) is the maximum firing time, set to 1. Here, each pixel is represented by a single spike, which constrains the number of generated spikes.
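A minimal sketch of this encoding, assuming pixel values already normalised to \([0,1]\):

```python
import numpy as np

def latency_encode(image, t_max=1.0):
    """Latency coding (Equation 1): brighter pixels spike earlier.

    image: array of pixel values normalised to [0, 1].
    Returns one spike timestamp per pixel.
    """
    return t_max - np.clip(image, 0.0, 1.0)
```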
### _Neuron model_
In our SNN, we use the Single-Spike Integrate-and-Fire (SSIF) model [28], describing IF neurons that can fire at most
Fig. 1: Architecture of the SNN employed in this paper. First, the image is preprocessed and encoded into timestamps using the latency coding scheme. Then, a Convolutional SNN (CSNN) trained with unsupervised STDP is used to extract relevant features from the image. The resulting feature maps are compressed through a max-pooling layer to reduce their size and provide invariance to translation on the input. Lastly, they are flattened and fed to a fully-connected SNN trained with a supervised adaptation of STDP for classification. Each output neuron is associated with a class and the first one to fire predicts the label. The SNN training is done in a layer-wise fashion.
once per sample. The neurons integrate input spikes to their membrane potential \(V\) as follows:
\[V_{j}(t)=V_{j}(t-1)+\sum_{i}w_{ij}\cdot S_{i}(t-1) \tag{2}\]
where \(t\) is the timestamp, \(V_{j}\) is the membrane potential of the output neuron \(j\), \(S_{i}\) is the spike of the input neuron \(i\) (0 or 1), and \(w_{ij}\) is the weight of the synapse. When the membrane potential of a neuron exceeds a defined threshold \(V_{\text{th}}\), the neuron emits a spike, its membrane potential is reset, and the neuron is deactivated until the next sample is shown.
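These dynamics can be sketched as follows; this is a simplified, vectorised illustration with dense weights and time-discretised binary spike trains, not the code from our repository:

```python
import numpy as np

def ssif_forward(weights, spikes, v_th):
    """Single-Spike IF dynamics (Equation 2) over one sample.

    weights: (n_in, n_out) synaptic weights.
    spikes: (T, n_in) binary input spike trains (latency-coded timestamps
            discretised into T steps).
    Returns the firing step of each output neuron (np.inf if it stays silent).
    """
    n_out = weights.shape[1]
    v = np.zeros(n_out)
    t_fire = np.full(n_out, np.inf)
    for t in range(spikes.shape[0]):
        active = np.isinf(t_fire)              # neurons fire at most once
        v[active] += spikes[t] @ weights[:, active]
        fired = active & (v >= v_th)
        t_fire[fired] = t
        v[fired] = 0.0                         # reset and deactivate
    return t_fire

# Example: 3 inputs, 2 outputs, 4 time steps.
w = np.array([[0.6, 0.1], [0.5, 0.2], [0.1, 0.9]])
s = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]])
print(ssif_forward(w, s, v_th=1.0))            # -> [1. 2.]
```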
### _Unsupervised feature extraction layer_
Our SNN comprises a feature extraction layer trained with STDP to improve image representation before classification. This layer, illustrated in Figure 1, is based on the CSNN described in [13]. It consists of a trainable convolutional layer to extract spatial features from input images. Then, the feature maps are compressed with a max-pooling layer to reduce their size and provide translation invariance. The convolutional layer is entirely trained before the training of the classification layer starts.
Synaptic weights of the convolutional layer are updated with the multiplicative STDP rule [29], which can be described as:
\[\Delta w_{ij}=\begin{cases}A^{+}\times\exp\left(-\beta\,\frac{w_{ij}-w_{\min} }{w_{\max}-w_{\min}}\right),&\text{if }t_{j}\geq t_{i}\\ A^{-}\times\exp\left(-\beta\,\frac{w_{\max}-w_{ij}}{w_{\max}-w_{\min}}\right),&\text{if }t_{j}<t_{i}\end{cases} \tag{3}\]
where \(w_{ij}\) represents the weight of the synapse connecting input neuron \(i\) and output neuron \(j\), \(\Delta w_{ij}\) the weight change, \(t_{i}\) the firing timestamp of the neuron \(i\), \(\beta\) the saturation factor, \(w_{\min}\) and \(w_{\max}\) the minimum and maximum weight values, \(A^{+}\) and \(A^{-}\) the learning rates. In addition, a WTA mechanism is employed among neurons to promote the learning of distinct patterns, where only the first neuron to fire updates its weights with STDP.
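As an illustration, the multiplicative STDP update of Equation (3) for the winning neuron could be written as below; we assume \(A^{-}<0\) so that acausal inputs are depressed:

```python
import numpy as np

def stdp_update(w, t_in, t_out, a_plus, a_minus, beta, w_min=0.0, w_max=1.0):
    """Multiplicative STDP (Equation 3) for the winning neuron (WTA).

    w: (n_in,) weights of the first neuron to fire.
    t_in: (n_in,) input firing timestamps; t_out: the neuron's firing time.
    a_minus is assumed negative, so acausal inputs are depressed.
    """
    rng = w_max - w_min
    causal = t_out >= t_in                 # input spiked before (or with) output
    dw = np.where(
        causal,
        a_plus * np.exp(-beta * (w - w_min) / rng),
        a_minus * np.exp(-beta * (w_max - w) / rng),
    )
    return np.clip(w + dw, w_min, w_max)
```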
Neuron thresholds in the convolutional layer are updated through two adaptation rules [13]. The first one is applied to control the timestamp at which neurons should fire:
\[V_{\text{th}_{i}}=\max\left(\text{Th}_{\min},V_{\text{th}_{i}}-\eta_{\text{th} }\left(t_{i}-t_{\text{target}}\right)\right) \tag{4}\]
where \(V_{\text{th}_{i}}\) represents the threshold of neuron \(i\), \(\text{Th}_{\min}\) the minimum threshold, \(\eta_{\text{th}}\) the learning rate, \(t_{i}\) and \(t_{\text{target}}\) the actual and target firing timestamp of the neuron. The second one is applied to ensure homeostasis (i.e. fair competition among neurons) in the layer:
\[\Delta V_{\text{th}_{i}}=\begin{cases}\eta_{\text{th}},&\text{if }t_{i}=\min\left\{t_{0},\cdots,t_{N}\right\}\\ -\frac{\eta_{\text{th}}}{N},&\text{otherwise}\end{cases}\qquad V_{\text{th}_{i}}=\max\left(\text{Th}_{\min},V_{\text{th}_{i}}+\Delta V_{\text{th}_{i}}\right) \tag{5}\]
where \(N\) is the number of neurons in competition.
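Both adaptation rules can be sketched together in vectorised form (variable names are ours):

```python
import numpy as np

def adapt_thresholds(v_th, t_fire, t_target, eta_th, th_min):
    """Threshold adaptation of Equations (4) and (5).

    v_th: (N,) thresholds; t_fire: (N,) firing timestamps of the N competitors.
    """
    # Equation (4): drive every neuron toward the target firing time.
    v_th = np.maximum(th_min, v_th - eta_th * (t_fire - t_target))
    # Equation (5): homeostasis -- raise the winner, lower the others.
    n = len(v_th)
    delta = np.full(n, -eta_th / n)
    delta[np.argmin(t_fire)] = eta_th
    return np.maximum(th_min, v_th + delta)
```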
## III Problem statement
In the introduction, we highlight some of the recent supervised adaptations of STDP that apply to spike-based classification. Among them, SSTDP [22] is a rule with state-of-the-art performance in diverse visual tasks. SSTDP incorporates neuron error to modulate the STDP updates in temporally-encoded SNNs. In the output layer and for each sample, desired time ranges are computed for target and non-target neurons based on the average firing time \(T_{\text{mean}}\) in the layer. For target neurons, the time range corresponds to \([0,T_{\text{mean}}-t_{1}]\) and for non-target neurons, it corresponds to \([T_{\text{mean}}+t_{2},1]\) (with \(t_{1}\), \(t_{2}\) defined time gaps). The error of a neuron is measured by the temporal difference to reach its time range. Although SSTDP is designed for multi-layer SNNs, we consider the method as a good starting point (given its reported performance) for training the classification (or output) layer of our SNN. To this end, we conducted preliminary experiments with SSTDP on the MNIST [24] and the Fashion-MNIST [30] datasets. It led us to identify two primary issues that we aim to resolve in this work:
1. the limited number of STDP updates;
2. the saturation of firing timestamps toward the maximum firing time.
First, SSTDP incorporates the concept of temporal error and desired time ranges. During the training process and for each sample, if a neuron fires in its desired range, the error is clipped and the neuron weights are not updated. We believe that the desired time ranges defined by SSTDP are too broad and lack sufficient constraints, resulting in a limited number of updates. Figure 2 illustrates the average update ratio per epoch in the classification layer, calculated by dividing the number of updates by the possible number of updates (in an epoch).
For both datasets, this update ratio hardly exceeds \(30\)% in the initial epoch, implying that neurons do not update their weights for approximately \(70\)% of the training samples. As the number of epochs increases, the ratio progressively decreases to \(7\)% for MNIST and \(19\)% for Fashion-MNIST, but the accuracy continues to improve, indicating the ongoing necessity of training. This results in a highly inefficient training process as a significant portion of the samples undergoes computational processing within the SNN without producing weight updates.
Second, because negative updates are more frequent than positive updates, neurons are continually pushed to fire later. This creates a saturation effect in which their firing timestamps
Fig. 2: Average update ratio and validation accuracy per epoch in the classification layer trained with SSTDP. The training process of SSTDP is inefficient as it results in a very limited number of updates per epoch.
rapidly approach the maximum firing time. This behavior is illustrated in Figure 3.
As observed, the average firing time grows rapidly and then stabilizes within a range close to the maximum firing time. During the last training epoch, we compute an average firing time of \(0.98\pm 0.008\) for MNIST and \(0.99\pm 0.009\) for Fashion-MNIST. As a result, we believe that the SSTDP update method affects the ability of the SNN to separate classes since the firing timestamps saturate toward the maximum time and show limited variance.
## IV Methods
In this paper, we first propose S2-STDP for training a spike-based classification layer with temporal decision-making. S2-STDP employs error-modulated weight updates to reach desired timestamps derived from the average firing time within the layer. Then, we introduce the PCN training architecture, which further enhances the learning capabilities of our classification layer trained with S2-STDP. PCN is based on intra-class competition and does not introduce any additional hyperparameters.
### _Supervised classification layer_
The classification layer is the output layer of our SNN. In this work, we employ a fully-connected architecture composed of \(N\) output neurons, receiving input spikes from the flattened feature maps of the CSNN, as illustrated in Figure 1. The purpose of the classification layer is to predict the class of the input image. To do so, each output neuron \(j\) is associated with a specific class label \(c_{j}\) and the prediction of the SNN \(\hat{y}\) is defined as:
\[\hat{y}=c_{j^{*}},\qquad j^{*}=\operatorname{argmin}\left\{t_{0},\cdots,t_{N}\right\} \tag{6}\]
where \(t_{j}\) denotes the firing timestamp of the neuron \(j\). In essence, the neuron with the earliest firing timestamp predicts the class label. This method is appealing as it eliminates the need to propagate the entire encoded input for inference, which reduces computation time and the number of generated spikes.
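In code, this decision rule reduces to an argmin over the output firing timestamps:

```python
import numpy as np

def predict(t_fire, neuron_classes):
    """Temporal decision-making (Equation 6): the earliest spike predicts."""
    return neuron_classes[int(np.argmin(t_fire))]
```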
### _Supervised STDP training_
To optimize the synaptic weights of the classification layer, we propose an error-modulated supervised STDP learning rule named S2-STDP. This rule trains target neurons to fire at early timestamps and non-target neurons to fire at late timestamps. During the training process, each sample triggers the weight update of output neurons according to Equation 7:
\[\Delta w_{ij}=\begin{cases}e_{j}\times A^{+}\times\exp\left(-\beta\,\frac{w_{ij}-w_{\min}}{w_{\max}-w_{\min}}\right),&\text{if }t_{j}\geq t_{i}\\ e_{j}\times A^{-}\times\exp\left(-\beta\,\frac{w_{\max}-w_{ij}}{w_{\max}-w_{\min}}\right),&\text{if }t_{j}<t_{i}\end{cases} \tag{7}\]
where \(w_{ij}\) represents the weight of the synapse connecting input neuron \(i\) and output neuron \(j\), \(\Delta w_{ij}\) the weight change, \(t_{i}\) the firing timestamp of the neuron \(i\), \(e_{j}\) the error of the neuron \(j\), \(\beta\) the saturation factor, \(w_{\min}\) and \(w_{\max}\) the minimum and maximum weight values, \(A^{+}\) and \(A^{-}\) the learning rates.
The error plays a crucial role in guiding the learning process by modulating the polarity and intensity of the STDP update. In the context of temporal learning, the error of an output neuron \(j\) is defined as:
\[e_{j}=\frac{t_{j}-T_{j}}{T_{\max}} \tag{8}\]
where \(t_{j}\) and \(T_{j}\) respectively represent the actual and desired firing timestamps, and \(T_{\max}\) denotes the maximum firing time. To compute the desired firing timestamps, we introduce a method described in Equation 9, adapted from SSTDP [22]:
\[T_{j}=\begin{cases}T_{\mathrm{mean}}-\frac{n-1}{n}g,&\text{if }c_{j}=y\\ T_{\mathrm{mean}}+\frac{1}{n}g,&\text{if }c_{j}\neq y\end{cases} \tag{9}\]
where \(T_{\mathrm{mean}}\) represents the average firing time in the layer, \(n\) is the number of neurons, \(c_{j}\) is the neuron label, \(y\) is the sample label, \(g\) is the time gap hyperparameter that determines the desired distance from \(T_{\mathrm{mean}}\). Specifically, in our adaptation, we remove the error clipping from the original rule, which prevents neurons from updating their weights if their firing timestamp is lower (when they match the input sample class, otherwise higher) than their computed desired timestamp. In other words, SSTDP defines desired time ranges \(\left[0,T_{\mathrm{mean}}-\frac{n-1}{n}g\right]\) and \(\left[T_{\mathrm{mean}}+\frac{1}{n}g,1\right]\) whereas our adaption defines desired timestamps. Therefore, neurons can undergo weight updates in both directions, regardless of their associated class. For instance, if a neuron that does not correspond to the input sample class fires after its desired timestamp, it will receive a positive weight update to promote earlier firing. Conversely, if a neuron that does match the input sample class fires before its desired timestamp, it will receive a negative weight update to promote later firing. With our adaptation, we aim to drastically increase the number of updates per epoch (to nearly \(100\)%) and reduce the saturation of firing timestamps toward the maximum firing time. This is achieved by forcing
Fig. 3: Average firing time for some classes against the number of training samples, in the classification layer trained with SSTDP. The firing timestamps tend to saturate toward the maximum firing time.
neurons to fire around the average firing time, thereby ensuring its stabilization.
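As an illustration, Equations (8) and (9) combine into a single routine returning the errors \(e_{j}\) that modulate the updates of Equation (7); this is a sketch with illustrative names, not our released code:

```python
import numpy as np

def s2_stdp_errors(t_fire, neuron_classes, y, g, t_max=1.0):
    """Desired timestamps (Equation 9) and errors (Equation 8) of S2-STDP.

    t_fire: (n,) output firing timestamps; neuron_classes: (n,) class labels;
    y: label of the current sample; g: time gap hyperparameter.
    """
    n = len(t_fire)
    t_mean = t_fire.mean()
    t_desired = np.where(
        neuron_classes == y,
        t_mean - (n - 1) / n * g,   # target neurons: spike just before T_mean
        t_mean + g / n,             # non-target neurons: just after T_mean
    )
    return (t_fire - t_desired) / t_max  # e_j, modulating the update of Eq. (7)
```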
In addition to STDP, we propose a heterosynaptic plasticity model [14, 31, 32] to regulate changes in synaptic weights. After each update of an output neuron \(j\), its weights are normalized as follows:
\[w_{ij}=w_{ij}\cdot\frac{f_{\mathrm{norm}}}{\sum_{k}w_{kj}} \tag{10}\]
where \(w_{ij}\) represents the weight of the synapse with input neuron \(i\), \(f_{\mathrm{norm}}\) is the normalization factor, computed as the sum of weights of neuron \(j\) at initialization. This normalization ensures that all neurons maintain a constant and similar weight average throughout the learning process, allowing them equal chances of activation regardless of the number of weight updates they have undergone.
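A literal implementation of Equation (10) for one output neuron could read:

```python
import numpy as np

def normalize_weights(w, f_norm):
    """Heterosynaptic normalisation (Equation 10) for one output neuron.

    w: (n_in,) weights; f_norm: the neuron's weight sum at initialisation.
    """
    return w * (f_norm / w.sum())
```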
### _Paired Competing Neurons_
S2-STDP involves training each neuron to alternate its firing timestamp between two separate objectives, depending on the class of the input sample (see Equation 9). If the input sample class matches the neuron class, the neuron receives a target weight update, teaching it to spike just before \(T_{\mathrm{mean}}\). If the input sample class does not match the neuron class, the neuron receives a non-target weight update, teaching it to spike right after \(T_{\mathrm{mean}}\). Hence, the neurons must adapt their weights to satisfy both requirements, which can prevent them from learning too specific patterns. However, S2-STDP also has the advantage of stabilizing the average firing time of the neurons in the layer. To further enhance the learning capabilities of the layer trained with S2-STDP, we propose the PCN training architecture, described in Figure 4.
In this method, each class is associated with a pair of output neurons (instead of a single neuron) that are connected with lateral inhibition to promote intra-class competition. Within each pair and for every sample, the first neuron to spike, called the winner, inhibits the other one, called the loser, and undergoes the weight update process. The purpose of PCN is to encourage neuron specialization on one of the two firing objectives: target or non-target. In other words, for a pair of neurons \(n_{1}\) and \(n_{2}\) associated with the class \(c\), when we present samples of class \(y=c\), we want \(n_{1}\) to learn to spike at the target desired timestamp (\(T_{\mathrm{mean}}-\frac{n-1}{n}g\)) and win the competition against \(n_{2}\) (\(t_{n_{1}}\leq t_{n_{2}}\)). Conversely, when we present samples of class \(y\neq c\), we want \(n_{2}\) to learn to spike at the non-target desired timestamp (\(T_{\mathrm{mean}}+\frac{1}{n}g\)) and win the competition against \(n_{1}\) (\(t_{n_{2}}\leq t_{n_{1}}\)). Note that the neuron order \(n_{1}\) and \(n_{2}\) is arbitrary as we do not assign an objective to each neuron at initialization: this behavior emerges through intra-class competition. By encouraging specialization on one
Fig. 4: Classification layer equipped with Paired Competing Neurons (PCN) and trained with supervised STDP. Each class is represented by paired neurons, where the first neuron to spike (the winner) inhibits the other one (the loser). Only the winners undergo STDP updates. The difference \(\Delta t\) between the desired and actual firing timestamps is used to compute the neuron error, which modulates the intensity and the polarity of the STDP update. The purpose of PCN is to reduce training constraints arising from desired firing timestamps by encouraging specialization on one type of sample, such that one neuron learns to win the competition for samples of its class, whereas the other neuron learns to win the competition for samples of other classes.
of the two objectives, we reduce training constraints on the neuron weights.
It is important to mention that the use of PCN offers notable other advantages. First, thanks to lateral inhibition, it does not create more S2-STDP updates per sample, as only the winners are updated. Second, it is straightforward to implement in an existing SNN as it only requires attaching a new neuron to each output neuron. Third, it does not introduce any additional hyperparameters. Last, it can be used with any other learning rule involving two objective timestamps.
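A minimal sketch of this intra-class competition, assuming the firing timestamps of the paired neurons are arranged per class (names are illustrative):

```python
import numpy as np

def pcn_winners(t_fire_pairs):
    """Intra-class competition of PCN.

    t_fire_pairs: (n_classes, 2) firing timestamps of the paired neurons.
    Returns, per class, the index (0 or 1) of the winner; only winners undergo
    the S2-STDP update, while losers are laterally inhibited.
    """
    return np.argmin(t_fire_pairs, axis=1)

# Example with 3 classes: for class 1, the second neuron fires first and wins.
t = np.array([[0.40, 0.55], [0.70, 0.35], [0.52, 0.52]])
print(pcn_winners(t))  # -> [0 1 0] (ties resolved toward the first neuron)
```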
## V Results
### _Experimental setup_
#### V-A1 Datasets
We select three image recognition datasets of ten classes, with growing complexity: MNIST [24], Fashion-MNIST [30], and CIFAR-10 [25]. Both MNIST and Fashion-MNIST consist of \(28\times 28\) grayscale images. They contain 60,000 samples for training and 10,000 samples for testing. We preprocess the images with on-center/off-center coding to extract edge information [13]. CIFAR-10 is composed of \(32\times 32\) RGB images, 50,000 for the training set and 10,000 for the test set. We preprocess the images with a hardware-friendly whitening method presented in [33]. Note that CIFAR-10 is challenging for STDP-based SNNs and only a limited number of studies have considered this dataset thus far [14, 16, 34]. All the preprocessing methods are used with their original hyperparameters, provided in Supplementary Material (Section SuppM-I).
#### V-A2 SNN model
The SNN model used in this work is presented in Figure 1. The feature extraction layer is trained with unsupervised STDP whereas the classification layer is trained with a supervised adaptation of STDP. For each dataset, we compare S2-STDP with two existing methods, R-STDP [18] and SSTDP [22], both of which we have implemented. We consider three configurations of the feature extraction layer with increasing numbers of filters: CSNN-16, CSNN-64, and CSNN-128. Within a given dataset, these configurations share the same hyperparameters, except for the number of filters, allowing us to analyze the performance of the learning rule across various input sizes of the classification layer. Note that unless specified, the experiments are conducted on the CSNN-128 configuration.
#### V-A3 Protocol
We divide our experimental protocol into two phases: hyperparameter optimization and evaluation.
During the hyperparameter optimization phase, a subset of the training set is used as a validation set, which is created by randomly selecting, for each class, a percentage \(p_{\mathrm{val}}\) of its samples. We then apply the gridsearch algorithm to optimize the hyperparameters of the SNN based on the validation accuracy obtained. Hence, we ensure that the hyperparameters are not optimized for the test set. In this work, only the hyperparameters of the classification layer are optimized with gridsearch. The hyperparameters of the extraction layer are manually chosen based on preliminary experiments. For each dataset, all the rules are optimized on the CSNN-128 configuration. Then, the optimal hyperparameters are used with CSNN-64 and CSNN-16, except for the firing threshold, which is divided by 2 and 4, respectively (as the input size decreases). All the hyperparameters are provided in Supplementary Material (Section SuppM-I).
During the evaluation phase, we use the K-fold cross-validation strategy (for the training set, as the test set is fixed) and change the seed for each fold. This allows us to assess the performance of the SNN with varying training and validation data, as well as different parameter initialization. In both phases, we employ an early stopping mechanism (with a patience \(p_{\mathrm{stop}}\)) during training to prevent overfitting. In all our following experiments, we choose \(p_{\mathrm{stop}}=10\), \(K=10\) and \(p_{\mathrm{val}}=\frac{1}{K}\) (i.e. \(10\)% of the training sets are used for validation).
### _Accuracy comparison_
In this section, we present a comparative analysis of the accuracy performance of two existing methods, R-STDP and SSTDP, and our methods, S2-STDP and S2-STDP+PCN. The average test accuracy achieved by each method on the MNIST, Fashion-MNIST, and CIFAR-10 datasets is presented in Table I.
Across all datasets and CSNN configurations, S2-STDP outperforms SSTDP. This demonstrates that our adaptation for computing the desired firing timestamps is more effective. Also, SSTDP exhibits poor and unstable results when operating with small input sizes (i.e. output sizes of the CSNN, cf. the CSNN-16 columns in Table I). On the Fashion-MNIST dataset, it even leads to training divergence in certain runs, as evidenced by the high standard deviation of \(9.16\) ppt. This behavior suggests that SSTDP is unstable when confronted with varying input sizes, which can be undesirable, considering the current limitations of neuromorphic hardware in terms of network size. Note that this behavior is influenced by the
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
Rule & CSNN-16 & CSNN-64 & CSNN-128 \\ \hline
R-STDP (N=200) & \(96.05\pm 0.51\) & \(97.47\pm 0.16\) & \(97.88\pm 0.13\) \\ \hline
R-STDP (N=20) & \(61.23\pm 27.74\) & \(93.34\pm 0.66\) & \(93.28\pm 0.87\) \\ \hline
SSTDP & \(81.12\pm 2.87\) & \(95.84\pm 0.18\) & \(96.37\pm 0.09\) \\ \hline
S2-STDP & \(95.40\pm 0.21\) & \(97.19\pm 0.09\) & \(97.81\pm 0.05\) \\ \hline
**S2-STDP+PCN** & \(\mathbf{97.02\pm 0.14}\) & \(\mathbf{95.21\pm 0.07}\) & \(\mathbf{97.89\pm 0.06}\) \\ \hline \hline
SVM & \(98.4\pm 0.06\) & \(98.84\pm 0.05\) & \(98.93\pm 0.04\) \\ \hline
\end{tabular}
\begin{tabular}{|c|c|c|c|} \hline
Rule & CSNN-16 & CSNN-64 & CSNN-128 \\ \hline
R-STDP (N=200) & \(82.30\pm 0.26\) & \(82.30\pm 0.92\) & \(83.26\pm 0.22\) \\ \hline
R-STDP (N=20) & \(76.50\pm 0.52\) & \(76.73\pm 0.13\) & \(77.01\pm 0.22\) \\ \hline
SSTDP & \(51.22\pm 9.16\) & \(79.68\pm 1.67\) & \(84.12\pm 1.11\) \\ \hline
S2-STDP & \(82.23\pm 0.40\) & \(84.92\pm 0.24\) & \(85.88\pm 0.22\) \\ \hline
**S2-STDP+PCN** & \(\mathbf{83.89\pm 0.40}\) & \(\mathbf{85.84\pm 0.19}\) & \(\mathbf{87.12\pm 0.21}\) \\ \hline \hline
SVM & \(87.63\pm 0.17\) & \(89.02\pm 0.23\) & \(89.3\pm 0.21\) \\ \hline
\end{tabular}
\begin{tabular}{|c|c|c|c|} \hline
Rule & CSNN-16 & CSNN-64 & CSNN-128 \\ \hline
**R-STDP (N=200)** & \(\mathbf{51.55\pm 1.23}\) & \(\mathbf{61.85\pm 0.61}\) & \(\mathbf{65.56\pm 0.38}\) \\ \hline
R-STDP (N=20) & \(38.51\pm 4.41\) & \(51.74\pm 0.93\) & \(54.02\pm 0.80\) \\ \hline
SSTDP & \(42.57\pm 1.38\) & \(57.89\pm 0.24\) & \(61.34\pm 0.14\) \\ \hline
S2-STDP & \(47.94\pm 0.49\) & \(58.24\pm 0.27\) & \(61.53\pm 0.16\) \\ \hline
**S2-STDP+PCN** & \(49.23\pm 0.56\) & \(59.58\pm 0.16\) & \(62.81\pm 0.\) \\ \hline
\end{tabular}
\end{table}
Table I: Average test accuracy (in %) on MNIST (top), Fashion-MNIST (middle), and CIFAR-10 (bottom).
hyperparameters. When tuning SSTDP specifically for the Fashion-MNIST CSNN-16 configuration, we can reach an accuracy of \(80.10\)% instead of \(51.22\)%, indicating that the rule can still deal with small input sizes but does not provide great robustness against hyperparameters. The use of PCN further improves the effectiveness of S2-STDP without requiring additional hyperparameters. In comparison to the other considered STDP-based methods, S2-STDP+PCN achieves the highest accuracy on the MNIST and Fashion-MNIST datasets. Specifically, compared to SSTDP on CSNN-128, it exhibits an accuracy improvement of \(2.22\) ppt on MNIST, \(3\) ppt on Fashion-MNIST, and \(1.47\) ppt on CIFAR-10.
It is worth noting that R-STDP significantly outperforms all the STDP-based methods on CIFAR-10. However, R-STDP requires 200 output neurons to achieve this performance, whereas S2-STDP+PCN only uses 20 neurons. When R-STDP is used with 20 output neurons, it performs significantly worse than all other methods on all datasets and CSNN configurations. This observation highlights the importance of error-modulated weight updates in enabling effective supervised training with STDP. In our case, it leads to a substantial reduction in the number of trainable parameters, by a factor of 10. Also, on Fashion-MNIST, R-STDP obtains relatively low performance and fails to extract more relevant features when the number of feature maps increases. The difference in performance between CSNN-16 and CSNN-128 with R-STDP is only about \(0.96\) ppt, whereas it is about \(3.23\) ppt with S2-STDP+PCN. Lastly, although the Support Vector Machine (SVM) gets the highest accuracy, it is not compatible with neuromorphic hardware.
### _Stabilization of the output firing timestamps_
In Section III, we elaborated on an issue of SSTDP: the saturation of firing timestamps toward the maximum firing time. Our proposed S2-STDP is specifically designed to address this issue by encouraging neurons to fire at exact desired timestamps instead of time ranges. In this section, we evaluate its impact on the output firing timestamps of our SNN.
First, in Figure 5, we compare the average firing time in the classification layer obtained on training samples at each epoch by the different SSTDP-based methods. As previously illustrated in Figure 3, the average firing time of SSTDP is very close to the maximum firing time. On the contrary, our proposed methods successfully reduce the saturation of firing timestamps. In addition, they obtain a significantly higher average firing time variance between the samples over an epoch. For instance, at epoch 20, we measure a standard deviation of \(0.008\) for SSTDP and \(0.049\) for S2-STDP. This indicates that the average firing time tends to vary more between samples with S2-STDP, which can enable better class separation.
Then, we study the impact of integrating a PCN architecture with S2-STDP. Figure 6 illustrates the average firing time of output neurons trained with and without PCN, on MNIST test samples. We notice that neurons trained with a PCN architecture are better at reaching the desired timestamps, particularly for the non-target neurons. This is because, through intra-class competition, PCN successfully creates neuron specialization on one type of sample (target or non-target). For instance, when we provide samples from class 7 to the pair associated with this class, neuron 2 tends to win the competition against neuron 1. Conversely, when we provide samples from class 8 to the pair associated with class 7, neuron 1 tends to win against neuron 2. Furthermore, intra-class competition relaxes the training constraints, as the non-target losers (i.e. losers when we present samples of a class different from theirs) do not have to reach an exact desired timestamp. As observed, their firing timestamps are much higher than the desired timestamp for non-target winning neurons (red line in the figure). Consequently, the synaptic weights of the neurons trained with a PCN architecture are more specialized on samples linked to the assigned objective.
### _Robustness against the time gap hyperparameter_
During training, the time gap hyperparameter \(g\) is used to define the distance between the desired timestamps and the average firing time. Selecting an appropriate value for this hyperparameter is crucial to ensure accurate class separation. However, hyperparameter tuning can be a time-consuming task, and achieving the optimal value may not always be feasible. In this section, we investigate the influence of the time gap value on the accuracy.
Figure 7 compares SSTDP and our proposed methods across various values of \(g\) on the three datasets. S2-STDP demonstrates greater robustness compared to SSTDP regarding the choice of \(g\), as it significantly expands the range of values that can achieve near-optimal accuracy. The accuracy curve of S2-STDP exhibits a more pronounced bell-shaped pattern with a larger plateau near the maximum. This implies that tuning \(g\) can be easier with S2-STDP. When considering a suitable range for \(g\), S2-STDP always achieves higher accuracy than SSTDP on MNIST and Fashion-MNIST datasets. On CIFAR-10, the behavior observed differs slightly, but the low accuracy arising from the complexity of the task may hinder accurate interpretation. The use of PCN as a training architecture for S2-STDP almost always improves its performance. This suggests that the efficacy of PCN is not dependent on a specific value of \(g\). Note that all the methods use their respective gridsearch-optimized hyperparameters.
Fig. 5: Average firing time in the classification layer against the epoch. The standard deviation measures the variance over the samples during an epoch. Our methods using S2-STDP significantly reduce the saturation of firing timestamps toward the maximum firing time.
### _Robustness against the hyperparameter set_
Our PCN training architecture improves the accuracy of S2-STDP across all evaluated datasets, as indicated in Section V-B. However, the hyperparameters of the classification layer are individually optimized for S2-STDP and S2-STDP+PCN, suggesting that PCN might only enhance performance in specific hyperparameter sets. In this section, we present evidence that PCN can effectively improve the accuracy of S2-STDP irrespective of the hyperparameters used. Specifically, we aim to demonstrate that implementing PCN on an existing SNN trained with S2-STDP is likely to always yield improved results.
Figure 8 illustrates the comparison of S2-STDP accuracy with and without PCN across different hyperparameter sets on the Fashion-MNIST dataset. The varying hyperparameters include the firing threshold (\(\mathrm{thr}\)), time gap (\(g\)), and learning rates (\(\mathrm{ap}\) and \(\mathrm{am}\)). Their values are selected within a suitable range to achieve satisfactory performance. The results show that integrating PCN improves S2-STDP accuracy consistently, with an average improvement of \(0.94\) ppt and a maximum improvement of \(1.57\) ppt. For comparison, when independently optimizing the hyperparameters for both methods, the measured accuracy improvement is \(1.24\) ppt. Although the observed improvement may not be substantial, the use of PCN as a training architecture enhances the accuracy of S2-STDP regardless of the hyperparameter set and does not introduce any additional hyperparameters. In the Supplementary Material (Section SuppM-II-B), we present similar plots for all the datasets, accompanied by a comparison with SSTDP. The results exhibit consistency with the analysis conducted on the Fashion-MNIST dataset. Furthermore, SSTDP yields considerably lower accuracy for various configurations, once again showing its poor robustness against hyperparameters.
Fig. 6: Neuron average firing time in the classification layer on MNIST test samples. Each point represents a neuron, and the row denotes its associated class. The points with lower alpha are the inhibited neurons (i.e. losers). The green and red lines correspond to the average desired timestamps of the target neuron and the non-target neurons, respectively. Through intra-class competition, the use of PCN architecture creates neuron specialization which helps them reach their desired firing timestamp.
Fig. 7: Accuracy of the different SSTDP-based learning rules against the time gap hyperparameter. Our proposed methods using S2-STDP enable better robustness against the time gap value.
### _Comparison with the literature_
In Table II, we present an accuracy comparison between our partially supervised SNN, trained with STDP + S2-STDP+PCN, and other state-of-the-art algorithms employed for training SNNs. For this comparison, we focused on supervised methods, primarily STDP-based and BP-based with local updates.
On the MNIST dataset, our proposed SNN outperforms the STDP-based approaches and achieves results that are close to the non-local BP-based state of the art. Similarly, on the Fashion-MNIST dataset, our SNN surpasses the STDP-based approaches with similar architectures and demonstrates competitive performance compared to most of the other methods. It is important to note that only the output layer of our SNN is trained with supervision, whereas all the other methods on Fashion-MNIST are fully supervised. For instance, in [23], they employ a 6-layer SNN trained with global feedback + STDP, achieving an accuracy of \(89.05\)%. In contrast, we employ a 2-layer partially supervised CSNN, resulting in an accuracy loss of only \(1.93\) ppt. Additionally, our results are highly competitive with a 2-layer SNN trained using a BP-based algorithm, with a loss of \(0.88\) ppt.
However, when it comes to the CIFAR-10 dataset, the performance of our method remains significantly lower than that of BP-based state-of-the-art SNNs. This dataset is particularly challenging for shallow architectures with only one supervised layer. We believe that the features extracted by our unsupervised layer are not distinguishable enough to allow
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Dataset & Architecture & Learning rule & Local learning & Accuracy \\ \hline MNIST & **20C5-P2-50C5-P3-200FC-10FC** [35] & **STDP pretraining + BP** & **no** & **99.28** \\ & 300FC-10FC [22] & SSTDP & no & \(98.10\) \\ & 500FC-150FC-10FC [20] & BP-STDP & no & \(97.20\pm 0.07\) \\ & C5-1000FC-10FC [36] & BRP & yes & \(99.01\) \\ & 500FC-500FC-10FC [37] & EMSTDP & yes & \(97.30\) \\ & 30C5-P2-250C5-P3-200C5-P5 [18] & STDP + R-STDP & yes & \(97.20\) \\ & _128C5-P4-20FC (Ours)_ & _STDP + S2-STDP+PCN_ & yes & \(98.59\pm 0.06\) \\ \hline Fashion-MNIST & 1000FC-10FC [38] & BP & no & \(88.00\) \\ & **20C5-P2-40C5-P2-100FC-10FC** [39] & **STiDi-BP** & **yes** & **92.80** \\ & 200FC-200FC-200FC-200FC-200FC-10FC [23] & Global feedback + STDP & yes & \(89.05\) \\ & 500FC-500FC-10FC [37] & EMSTDP & yes & \(86.10\) \\ & 6400FC-10FC [21] & Sym-STDP & yes & \(85.31\pm 0.16\) \\ & _128C5-P4-20FC (Ours)_ & _STDP + S2-STDP+PCN_ & yes & \(87.12\pm 0.21\) \\ \hline CIFAR-10 & **VGG-7** [22] & **SSTDP** & **no** & **91.31** \\ & 64C7-P8-512FC-512FC-10FC [14] & STDP + ANN-based BP & no & \(71.20\) \\ & 256C3-P2-1024FC-10FC [16] & Probabilistic STDP + ANN-based BP & no & \(66.23\) \\ & 16C5-P2-8C3-P2-100FC-10FC [34] & EMSTDP & yes & \(64.40\) \\ & C5-1000FC-10FC [36] & BRP & yes & \(57.08\) \\ & _128C5-P4-20FC (Ours)_ & _STDP + S2-STDP+PCN_ & yes & \(62.81\pm 0.15\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Accuracy comparison of our proposed SNN with the literature. Accuracies come from the original papers.
Fig. 8: Accuracy of S2-STDP with and without the use of PCN architecture across different hyperparameter sets, on Fashion-MNIST test samples. PCN always improves S2-STDP performance, without introducing any hyperparameters.
for accurate classification. The SNNs employed in [14, 16] share similarities with ours, as they consist of an unsupervised feature extraction layer trained with STDP and a supervised module. However, they employ a two- and a three-layer ANN for classification, while we employ a single-layer SNN, which explains the measured accuracy gap. Compared to another shallow SNN, as in [36], our approach still demonstrates significantly superior performance.
As a concluding remark, it is important to mention that the fairness of comparisons may vary, as some authors report top-1 accuracy, and some do not provide details on their evaluation methodology nor how hyperparameters have been tuned.
## VI Conclusion
In this paper, we proposed Stabilized Supervised STDP (S2-STDP), a supervised STDP learning rule for training a spike-based classification layer with temporal decision-making. This layer is the output of a convolutional SNN (CSNN) equipped with unsupervised STDP for feature extraction. Our learning rule integrates error-modulated weight updates that align neuron spikes with desired timestamps derived from the average firing time within the layer. S2-STDP has been specifically developed to address the issues observed with SSTDP. To further enhance the learning capabilities of the classification layer trained with S2-STDP, we introduced a training architecture called Paired Competing Neurons (PCN). PCN associates each class with paired neurons connected via lateral inhibition and encourages neuron specialization through intra-class competition. We evaluated our S2-STDP and PCN on three image recognition datasets of growing complexity: MNIST, Fashion-MNIST, and CIFAR-10. Our experiments showed that:
1. Our methods outperform current supervised STDP-based state of the art, for comparable architectures and numbers of neurons.
2. S2-STDP successfully addresses the issues of SSTDP concerning the limited number of STDP updates and the saturation of firing timestamps toward the maximum firing time.
3. Our methods exhibit improved hyperparameter robustness compared to SSTDP.
4. The PCN architecture enhances the performance of S2-STDP, regardless of the hyperparameter set.
In the future, we plan to expand S2-STDP to multi-layer architectures, while maintaining the local computation required for on-chip learning. This includes exploring both feedback connections [23] and local losses [10].
## Acknowledgements
This work is funded by Chaire Luxant-ANVI (Metropole de Lille) and supported by IRCICA (CNRS UAR 3380). We would like to thank Benoit Miramond for the helpful exchange.
|
2306.11444 | Blackbird language matrices (BLM), a new task for rule-like
generalization in neural networks: Motivations and Formal Specifications | We motivate and formally define a new task for fine-tuning rule-like
generalization in large language models. It is conjectured that the
shortcomings of current LLMs are due to a lack of ability to generalize. It has
been argued that, instead, humans are better at generalization because they
have a tendency at extracting rules from complex data. We try to recreate this
tendency to rule-based generalization. When exposed to tests of analytic
intelligence, for example, the visual RAVEN IQ test, human problem-solvers
identify the relevant objects in the picture and their relevant attributes and
reason based on rules applied to these objects and attributes. Based on the
induced rules, they are able to provide a solution to the test. We propose a
task that translates this IQ task into language. In this paper, we provide the
formal specification for the task and the generative process of its datasets. | Paola Merlo | 2023-06-20T10:45:56Z | http://arxiv.org/abs/2306.11444v1 | # Blackbird language matrices (BLM), a new task for rule-like generalization in neural networks: Motivations and Formal Specifications
###### Abstract
We motivate and formally define a new task for fine-tuning rule-like generalization in large language models.
It is conjectured that shortcomings of current LLMs are due to a lack of ability to generalize. It has been argued that, instead, humans are better at generalization because they have a tendency at extracting rules from complex data. We try to recreate this tendency to rule-based generalization.
When exposed to tests of analytic intelligence, for example the visual RAVEN IQ test, human problem-solvers identify the relevant objects in the picture and their relevant attributes and reason based on rules applied to these objects and attributes. Based on the induced rules, they are able to provide a solution to the test.
We propose a task that translates this IQ task into language. In this paper, we provide the formal specification for the task, and the generative process of its datasets.
## 1 Introduction
Current consensus about LLMs is that, to reach better, possibly human-like, abilities in abstraction and generalisation, we need to develop tasks and data that help us understand their current generalisation abilities and help us train LLMs towards more complex and compositional skills.
Generalisation in NLP has been defined in a very narrow way, as extension from a set of data points to new data points of exactly the same nature (i.i.d. assumption). Even under this very narrow definition, recent studies show that current algorithms do not generalise well (Belinkov and Bisk, 2018; Belinkov and Glass, 2019; Kiela et al., 2021).
In contrast, humans are good generalizers. A large body of literature of experimental work has demonstrated that the human mind is predisposed to extract regularities and generate rules from data, in a way that is distinct from the patterns of activation of neural networks (Lakretz et al., 2019, 2021; Sable-Meyer et al., 2021).
One possible approach to develop more robust methods, then, is to pay more attention to the decomposition of complex observations and to the causal chains in the generative process that gives rise to the data. Scholkopf et al. (2012) give the example of a dataset that consists of altitude A and average annual temperature T of weather stations (Peters et al., 2017) and argue that a robust model represents not their correlation, but the underlying problem structure and the causal effect of A on T.
To discover the underlying problem structure, machine learning research in vision has developed the notion of _disentanglement_. A disentangled representation can be defined as one where single latent units are sensitive to changes in single generative factors, while being relatively invariant to changes in other factors (Bengio et al., 2013).
Let's look at an illustrative example of complex linguistic relations that might require discovering the underlying causal relations: the causative alternation in English, shown in (1).
\begin{tabular}{l l l l} (1) a. & The teacher & opened & the door. \\ & Agent & & Theme \\ b. & The door & opened. & \\ & Theme & & \\ \end{tabular}
This alternation applies to change-of-state verbs, such as _open, break, melt, burn_, among many others: verbs that describe a change that affects the state of the undergoing participant (after the action, it is _the door_ that changes from a state of being closed to a state of being open). They occur in two subcategorisation frames that are related to each other in a regular way: the object of the transitive frame is the subject of the intransitive frame. This way, in terms of semantic roles, the subject of the transitive is an agent, but the subject of the intransitive is a theme.
To learn the structure of such a complex alternation automatically, a neural network must be able (i) to identify the elements manipulated by the alternation, (ii) to identify the relevant attributes of these elements and (iii) to recognize and formulate the rules of change of these attributes and elements across the two sentences. Thus, finding the solution requires the ability to handle phenomena that span more than one sentence, whose correspondences, in structure and meaning, must be recognised.
To study what factors and models lead to learning more disentangled linguistic representations, representations that reflect the underlying linguistic rules of grammar, we take the approach of developing curated data on a large scale, building models to learn these data and investigating the models' behaviour.
To this end we develop a new linguistic task (similar to Raven's progressive matrices for vision), which we call Blackbird Language Matrices (BLMs), defining prediction tasks to learn these complex linguistic patterns and paradigms. The contribution of this work lies in the definition of a new challenging learning task, which requires tackling a mixture of language processing abilities and abstract rule learning. This task takes us closer to investigations of human linguistic intelligence.
In this work, we provide the formal specification for the matrices. One such BLM problem has already been used in Merlo et al. (2022) and Nastase and Merlo (2023) and its data is described in detail in An et al. (2023).
### Raven's Progressive Matrices for vision
Raven's progressive matrices (progressive because tasks get harder) are IQ tests consisting of a sequence of images, called the _context_, connected in a logical sequence by underlying generative rules (Raven, 1938). The task is to determine the missing element in this visual sequence, the _answer_. The answer is chosen among a set of closely or loosely similar alternatives. An instance of this task is given in Figure 1: given a matrix on the left, choose the last element of the matrix from a choice of elements. The matrices are built according to generative rules that span the whole sequence of stimuli and the answers are constructed to be similar enough that the solution can be found only if the rules are identified correctly. For example in Figure 1, the matrix is constructed according to two rules: Rule 1: from left to right, the red dot moves one place clockwise each time. This pattern continues onto the next row; Rule 2: from top to bottom, the blue square moves one place anticlockwise each time. This pattern continues onto the next column. Identifying these rules leads to the correct answer, the only cell that continues the generative rules correctly.
To establish how to develop human-like rule-based inference, it pays to survey how humans solve this kind of intelligence test and develop a corresponding formalisation.
## 2 Formal Specifications of BLMs
The formal specifications we develop in the following sections use a vocabulary that is inspired by the cognitive primitives identified in the study by Carpenter et al. (1990). We are also inspired by computational methods in vision, that reproduce RAVEN's matrices, although the vision problems have slightly different properties and can arguably be described fully by context-free processes (Wang and Su, 2015; Barrett et al., 2018; Zhang et al., 2019; Zhu and Mumford, 2006).
We define here a new task and data format, which we call Blackbird Language Matrices (BLMs).
Definition Let a 4-tuple \((LP,C,W,w_{c})\) be given, where \(LP\) is the definition of the linguistic grammatical phenomenon, \(C\) is the corresponding context matrices, \(W\) is the answer set, and \(w_{c}\) is the selected item of \(W\) that is correct.
The BLM _task_ can be defined as the instruction:
\[\text{find }(w_{c}\in W)\text{ given }C.\]
A BLM _problem_\((LP,C,W,Aug)\) is an instance of a BLM task, where \(Aug\) is the augmentation
Figure 1: Example of progressive matrice in the visual world. The task is to determine the missing element in a visual pattern. Given the matrix on the left, choose the last element of the matrix from the choice of elements on the right. The matrix is constructed according to two rules (see text for explanation). Identifying these rules leads to the correct answer (marked by double edges).
method for the matrices. We describe all components in the next sections.
### Defining the linguistic phenomenon
The first step in the definition of the problem consists in formally defining the linguistic grammatical phenomenon as a paradigm.
Definition Let a linguistic phenomenon LP be given. LP is exhaustively defined by a grammar \(G_{LP}=(O,A,E,I,L)\) s.t.
\(O\) is the set of objects
\(A\) is the set of attributes of the objects in \(O\)
\(E\) is the set of external observed rules
\(I\) is the set of unobserved internal rules
\(L\) is the lexicon of objects in \(O\), attributes in \(A\), and operators in \(E\cup I\).
For example, as shown in Figure 3, in the subject-verb agreement phenomenon, the agreement rule is the primary production in \(E\), while the fact that agreement can occur independently of the distance of the elements expresses the fact that agreement applies to structural representations, a rule in \(I\). Sometimes, but not always, \(I\) acts as a confounding factor.
Rules are triples of objects (shown in red), attributes (in green) and operations (in blue). Objects are usually phrases, attributes are usually morpho-syntactic properties of the phrases and operations are typical grammatical operations: feature match, movement (becomes), lexical substitution (changes).
### Defining the matrices
Definition A \(BLM\)_matrix_ is a tuple=\((S,R,T)\) s.t.
\(S\) is the shape of the matrix
\(R\) are the relational operators that connect the items of the matrix
\(T\) is the set of items of the matrix.
Shape
\(S(n,l)\) is the shape of the matrix, which consists of \(n\) items and each item can be at most of length \(l\).
The length of the items can vary. The items can be sentences or elements in a morphological paradigm. The choice of \(n\) depends on how many items need to be shown to illustrate the paradigm and on whether the illustration is exhaustive or sampled. For example, a matrix of size eight is exhaustive for an agreement problem with three noun phrases and a two-way number differentiation (singular, plural), but can only present a sample of the information for the causative alternation.
Relational operations
Connective sequential operations, such as alternation or progression, are chosen. The point of these relational operations is to transform a list of items (sentences or words) into a predictable sequence that connects all the items.
The values of \(R\), so far, are alternation and progression. They could also be conjunction, disjunction, exclusive OR and other logical or graded operators.
Alternation applies to a given \((o,a)\) pair and loops over all the values of \(a\) with a given increment defined over the items of the matrix. For example, the grammatical feature number is binary in certain languages. So, alternation\((o=NP;a_{i}=(s,p);i=1,2,3,...)\). This is used to create different alternations of \((o,a)\) in the sentence, which in the subject-verb agreement BLM is used to show independence from linear distance.
Progression applies to countable attributes or ordinal attributes, for example, existence. So, one can have 1,2,...,\(n\) of a given object \(n\). Progression can also apply to \(position\) or to graded properties such as \(length\).
Items
The items \(T\) are defined by \(G_{LP}=(O,A,E,I,L)\) and they are drawn from the set \(\mathcal{T}\).
The matrix is created by sampling \((o,a,r)\). The ways in which \(r\in R\) can apply to a given \((o,a)\) pair have to be predefined, as it is not entirely context-free.
### Defining the answer set
The answer set \(W\) consists of a set of items like those in \(C\). One item in W, \(w_{c}\), is the correct answer to complete the sequence defined by \(C\). The other items are the contrastive set. They are items that
**BLM task:**: Find \((w_{c}\in W)\) given \(C\),
given a 4-tuple \((LP,C,W,w_{c})\), where \(LP\) is the definition of the linguistic grammatical phenomenon, \(C\) is the corresponding context matrices, \(W\) is the answer set, and \(w_{c}\) is the selected item of \(W\) that is correct.
**BLM problem:**: A \(BLM\)_problem_ is a tuple\((LP,C,W,Aug)\). It is an instance of a BLM task, where \(Aug\) is the augmentation method for the matrices.
**BLM matrix:**: A \(BLM\)_matrix_ is a tuple \((S,R,T)\) s.t. \(S\) is the shape of the matrix, \(R\) are the relational operators that connect the items of the matrix, \(T\) is the set of items of the matrix.
**Linguistic phenomenon LP:**: A _linguistic phenomenon LP_ is exhaustively defined by a grammar \(G_{LP}=(O,A,E,I,L)\) s.t. \(O\) is the set of objects, \(A\) is the set of attributes of the objects in \(O\), \(E\) is the set of external observed rules, \(I\) is the set of unobserved internal rules, \(L\) is the lexicon of objects in \(O\), attributes in \(A\), and operators in \(E\cup I\).
**Shape:**: \(S(n,l)\) is the shape of the matrix, which consists of \(n\) items and each item can be at most of length \(l\).
**Relational operations:**: Alternation \((o=NP;a_{i}=(s,p);i=1,2,3,...)\). Progression applies to countable attributes or ordinal attributes.
**Context set:**: The _context set_\(C\) is a set of items generated by \(LP\).
**Answer set:**: The _answer set_\(W\) is a set of items generated by \(LP\). One item in W, \(w_{c}\), is the correct answer. The other items are the contrastive set. They are items that violate \(G_{LP}\) either in \(E,I\) or in \(R\).
**Augmented BLM:**: An _augmented BLM_ is a quadruple \((S,R,T,Aug)\). \(S\) is the shape of the matrix, \(R\) are the relational operations that connect the \(T\) items of the matrix. The items \(T\) are defined by \(G_{LP}=(O,A,E,I,L)\) and they are drawn from the set \(\mathcal{T}\).
\(Aug\) is a set of operations defined to augment the cardinality of \(\mathcal{T}\), while keeping S and R constant. \(Aug\) is defined by controlled manipulations of Os and As in T to collect similar elements, s.t.
\(|score(o_{aug})-score(o)|\leq rank(10)\) and \(|score(t^{i}_{aug})-score(t^{i})|\leq\epsilon\)
Figure 2: Summary of definitions concerning the Blackbird Language Matrices task.
violate \(G_{LP}\) either in \(E,I\) or in \(R\). In other words, the contrastive set comprises elements that violate one or more of the rules of construction of the context matrix C, either in the primary rules \(E\), in the auxiliary rules \(I\), or in the matrix operators \(R\).
Sometimes they are built almost automatically, sometimes by hand. The cardinality of the answer set is determined by how many facets of the linguistic phenomenon need to be shown to have been learned.
### Augmenting the matrices
Different levels of lexical and structural complexity can be obtained by changing the lexical items (step by step) in a given matrix.
Definition An _augmented BLM_ is a quadruple \((S,R,T,Aug)\).
\(S\) is the shape of the matrix, \(R\) are the relational operations that connect the \(T\) items of the matrix.
\(Aug\) is a set of operations defined to augment the cardinality of \(\mathcal{T}\), while keeping S and R constant. \(Aug\) is defined by controlled manipulations of Os and As in \(\mathcal{T}\) to collect similar elements, s.t.
\[|score(o_{aug})-score(o)| \leq rank(10)\text{ and }\] \[|score(t_{aug}^{i})-score(t^{i})| \leq\epsilon\]
In words and in practice, we augment the sentence set \(\mathcal{T}\) by modifying the noun phrases of the items in \(T\). We generate alternatives with a language model choosing among the top \(n\), and verifying that the acceptability score of the resulting sentences is still acceptable, within a margin from the original sentence. The margin is set with a variable-size window and collects the top 10 alternative noun phrases. The acceptability of the resulting sentences is validated manually.
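The augmentation loop can be sketched as follows. Both `top_k_fillers` (standing in for the language model that proposes the top-10 noun-phrase alternatives) and `acceptability` (standing in for the sentence acceptability scorer) are hypothetical placeholders, and the margin value is illustrative.

```python
# Hypothetical stand-ins: a masked LM proposing replacement noun phrases,
# and a sentence-level acceptability scorer (e.g. a normalized LM log-prob).
def top_k_fillers(sentence, np_span, k=10):
    return ["the neighbour", "the student"]      # placeholder candidates

def acceptability(sentence):
    return -0.1 * len(sentence.split())          # placeholder score

def augment_item(sentence, np_span, epsilon=0.5):
    """Replace one noun phrase by top-10 LM alternatives, keeping only
    sentences whose acceptability stays within a margin of the original."""
    base = acceptability(sentence)
    kept = []
    for cand in top_k_fillers(sentence, np_span, k=10):  # rank(10) window
        new = sentence.replace(np_span, cand)
        if abs(acceptability(new) - base) <= epsilon:
            kept.append(new)
    return kept  # manual validation follows, as described in the text

print(augment_item("the friend of the teachers complains", "the friend"))
```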
## 3 BLM Formal Template and Example
The abstract template of context and answer set, with its objects and attributes, that corresponds to the problem of subject-verb agreement is shown in Figure 5. This abstract template also corresponds to the example of matrices shown in Figure 6.
Figure 4: Example answer set.
Figure 5: Examples of template and progressive matrices for the number agreement problems Merlo et al. (2022); An et al. (2023); Nastase and Merlo (2023).
Figure 3: Example of corresponding to formal definitions of E and I for two LPs.
The sequence is generated by a rule of progression over the number of attractors (one and two), a rule of subject-verb agreement that alternates the number of the head noun between singular and plural from one sentence to the next, and a rule for the number of the attractors that alternates between singular and plural with a period of two. Thus, the correct answer for this example is a sentence that has three noun phrases and a plural subject, a plural first attractor and a singular second attractor.
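These generative rules can be made concrete in a short sketch. This is a simplified instance, not the released dataset: the lexicon, the phase conventions of the alternations, and the point at which the attractor count progresses are illustrative assumptions, chosen so that the final item matches the description above.

```python
# Illustrative lexicon; "s"/"p" mark singular/plural variants.
NP  = {"s": "the friend",     "p": "the friends"}
PP1 = {"s": "of the teacher", "p": "of the teachers"}
PP2 = {"s": "of the school",  "p": "of the schools"}
V   = {"s": "complains",      "p": "complain"}

def agreement_matrix(n=8):
    """Subject number alternates with period 1, attractor numbers with
    period 2, and the attractor count progresses from one to two."""
    items = []
    for i in range(n):
        subj  = "p" if i % 2 else "s"           # alternation, period 1
        attr1 = "p" if (i // 2) % 2 else "s"    # alternation, period 2
        attr2 = "s" if (i // 2) % 2 else "p"    # out of phase with attr1
        parts = [NP[subj], PP1[attr1]]
        if i >= n // 2:                         # progression: 1 -> 2 attractors
            parts.append(PP2[attr2])
        parts.append(V[subj])                   # verb agrees with the subject
        items.append(" ".join(parts))
    return items[:-1], items[-1]                # context, correct answer

context, answer = agreement_matrix()
print("\n".join(context))
print("answer:", answer)  # plural subject, plural 1st / singular 2nd attractor
```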
## 4 Conclusions
In this paper, we have defined precisely the main concepts underlying a new multiple-choice task, Blackbird Language Matrices (BLMs), inspired by IQ tests for humans. The task is based on solving a missing-element puzzle that requires identifying the correct completion of a context sequence among several minimally contrastive alternatives. These matrices have already been shown to define a challenging task Merlo et al. (2022), and a first full-fledged dataset is available and described in An et al. (2023). This task has also been shown to be useful for detailed investigation of distributed representations Nastase and Merlo (2023).
Because the context sequence is developed in such a way that it is the implicit definition of a linguistic phenomenon, the matrix can become a way of teaching complex structured problems to current neural networks. It is also a powerful tool to evaluate the already existing abilities of current large language models in a challenging task of analytic intelligence. Work is underway on all these aspects.
## Ethics Statement
To the best of our knowledge, there are no ethics concerns with this paper.
## Acknowledgments
We gratefully acknowledge the partial support of this work by the Swiss National Science Foundation, through grants #51NF40_180888 (NCCR Evolving Language) and SNF Advanced grant TMAG-1_209426 to PM. Thanks to all collaborators and audiences that have shown interest in this concept.
Figure 6: Example of lexically varied contexts for the main clause contexts. Correct answer in bold, Merlo et al. (2022); An et al. (2023); Nastase and Merlo (2023). |
2302.05059 | Effects of noise on the overparametrization of quantum neural networks | Overparametrization is one of the most surprising and notorious phenomena in
machine learning. Recently, there have been several efforts to study if, and
how, Quantum Neural Networks (QNNs) acting in the absence of hardware noise can
be overparametrized. In particular, it has been proposed that a QNN can be
defined as overparametrized if it has enough parameters to explore all
available directions in state space. That is, if the rank of the Quantum Fisher
Information Matrix (QFIM) for the QNN's output state is saturated. Here, we
explore how the presence of noise affects the overparametrization phenomenon.
Our results show that noise can "turn on" previously-zero eigenvalues of the
QFIM. This enables the parametrized state to explore directions that were
otherwise inaccessible, thus potentially turning an overparametrized QNN into
an underparametrized one. For small noise levels, the QNN is
quasi-overparametrized, as large eigenvalues coexist with small ones. Then, we
prove that as the magnitude of noise increases all the eigenvalues of the QFIM
become exponentially suppressed, indicating that the state becomes insensitive
to any change in the parameters. As such, there is a pull-and-tug effect where
noise can enable new directions, but also suppress the sensitivity to parameter
updates. Finally, our results imply that current QNN capacity measures are
ill-defined when hardware noise is present. | Diego García-Martín, Martin Larocca, M. Cerezo | 2023-02-10T05:33:52Z | http://arxiv.org/abs/2302.05059v2 | # Effects of noise on the overparametrization of quantum neural networks
###### Abstract
Overparametrization is one of the most surprising and notorious phenomena in machine learning. Recently, there have been several efforts to study if, and how, Quantum Neural Networks (QNNs) acting in the absence of hardware noise can be overparametrized. In particular, it has been proposed that a QNN can be defined as overparametrized if it has enough parameters to explore all available directions in state space. That is, if the rank of the Quantum Fisher Information Matrix (QFIM) for the QNN's output state is saturated. Here, we explore how the presence of noise affects the overparametrization phenomenon. Our results show that noise can "turn on" previously-zero eigenvalues of the QFIM. This enables the parametrized state to explore directions that were otherwise inaccessible, thus potentially turning an overparametrized QNN into an underparametrized one. For small noise levels, the QNN is quasi-overparametrized, as large eigenvalues coexist with small ones. Then, we prove that as the magnitude of noise increases all the eigenvalues of the QFIM become exponentially suppressed, indicating that the state becomes insensitive to any change in the parameters. As such, there is a pull-and-tug effect where noise can enable new directions, but also suppress the sensitivity to parameter updates. Finally, our results imply that current QNN capacity measures are ill-defined when hardware noise is present.
## I Introduction
Overparametrization has become one of the most important concepts for studying neural networks in classical machine learning. When a neural network is overparametrized, it has a capacity which is larger than the number of training points [1]. Despite being initially counterintuitive, as increasing the number of parameters can lead to overfitting, research has shown that overparametrization can actually improve the performance of a model [1; 2; 3; 4]. For example, it has been observed that the generalization error can decrease when the model size is increased, a phenomenon known as double descent [5; 6; 7]. Additionally, overparametrization can provide convergence guarantees, ensuring that a model will be able to find a good solution during its optimization [8; 9]. These benefits make overparametrization an important consideration in the design of classical machine learning algorithms.
In the past few years, there has been a significant amount of effort towards merging concepts from classical machine learning with those of quantum computing, leading to the blossoming field of Quantum Machine Learning (QML) [10; 11; 12; 13]. The key idea here is that one can leverage the exponentially large dimension of the Hilbert space as a feature space to process and learn from data. Crucially, there is hope that QML has the potential of enabling a quantum advantage in the near-term [14; 15].
Within the framework of QML, parametrized quantum circuits, or Quantum Neural Networks (QNNs), have received considerable attention due to their versatility and wide usability [16; 17; 18; 19; 20]. While several works have studied the capabilities, trainability and performance of QNNs [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39], most of these consider noiseless scenarios which do not account for the effect of hardware noise [40; 41; 42; 43; 44]. However, since noise is an intrinsic element in near-term quantum computing, it is fundamental to understand how its presence alters noiseless results and changes our understanding of QNNs.
In this work we study how the recently developed understanding of overparametrization in QNNs [21; 22; 23; 24; 25; 26] is affected by the presence of quantum noise. In particular, we will review the results of Ref. [21], which characterizes the critical number of parameters needed to overparametrize a QNN. It has been observed that underparametrized QNNs exhibit spurious local minima in the optimization landscape that hinder their trainability. By adding enough parameters to the circuit (hence overparametrizing it), these false local traps disappear. Since this facilitates the training of the QNN's parameters, the onset of overparametrization corresponds to a veritable computational phase transition. Notably, in Ref. [21] the number of parameters needed to overparametrize a QNN is defined as those needed to saturate the rank of the Quantum Fisher Information Matrix (QFIM), and concomitantly the QNN's capacity, as introduced in [27; 28].
Our results show that the presence of hardware noise can increase the rank of a QFIM whose rank would have been saturated in a noiseless scenario. That is, noise can turn null eigenvalues of a noiseless-state QFIM into non-null eigenvalues of the corresponding noisy-state QFIM. As schematically depicted in Fig. 1, this means that hardware noise allows the QNN to explore previously unavailable directions. Hence, some of the redundant parameters in an overparametrized noiseless QNN become relevant to control trajectories in state space when the effect of noise is accounted for. As such, noise can potentially turn an overparametrized model into an underparametrized one. In addition, we analytically prove that as the noise strength (or the depth of the circuit) increases, the eigenvalues of the QFIM become exponentially suppressed. Thus, for large noise levels (or for deep QNNs), the states become insensitive to any change in the parameters. On the positive side, our numerics show that for small noise levels, the model behaves as being _quasi-overparametrized_: large eigenvalues of the QFIM (the ones that are non-zero in the noiseless setting) coexist with small ones (the ones that were previously zero). Additionally, we prove that certain types of noise, specifically global depolarizing noise or measurement noise [42; 45], cannot increase the rank of the QFIM. To conclude, we discuss the implications of our results for QNN capacity measures proposed in the literature, and for other fields such as quantum metrology.
## II Framework
### Quantum Neural Networks
In this work we consider a QML task where the goal is to train a model on a dataset \(\mathcal{S}=\{\rho^{(s)}\}_{s=1}^{N}\) consisting of \(n\)-qubit quantum states. We use \(d\) to denote the dimension of the composite quantum system, i.e., \(d=2^{n}\). The quantum model is parametrized through a QNN, which is a unitary quantum channel \(\mathcal{C}_{\mathbf{\theta}}\) acting on input states \(\rho^{(s)}\) as \(\mathcal{C}_{\mathbf{\theta}}(\rho^{(s)})=U(\mathbf{\theta})\rho^{(s)}U^{\dagger}(\mathbf{\theta})\). Here, \(U(\mathbf{\theta})\) is taken to be of the form
\[U(\mathbf{\theta})=\prod_{m=1}^{M}U_{m}(\theta_{m})\,,\quad U_{m}(\theta_{m})=e^{- i\theta_{m}H_{m}}\,, \tag{1}\]
where \(H_{m}\) are traceless Hermitian operators taken from a set of generators \(\mathcal{G}\), and \(\mathbf{\theta}\in\mathbb{R}^{M}\) is a vector of trainable parameters. This allows us to express \(\mathcal{C}_{\mathbf{\theta}}\) as a concatenation of \(M\) unitary channels
\[\mathcal{C}_{\mathbf{\theta}}=\mathcal{C}_{\theta_{M}}^{M}\circ\cdots\circ\mathcal{ C}_{\theta_{1}}^{1}\,, \tag{2}\]
with \(\mathcal{C}_{\theta_{m}}^{m}(\rho^{(s)})=U_{m}(\theta_{m})\rho^{(s)}U_{m}^{ \dagger}(\theta_{m})\). Thus, the output of the QNN is a parametrized state
\[\rho^{(s)}_{\mathbf{\theta}}=\mathcal{C}_{\mathbf{\theta}}(\rho^{(s)})=\mathcal{C}_{ \theta_{M}}^{M}\circ\cdots\circ\mathcal{C}_{\theta_{1}}^{1}(\rho^{(s)})\,. \tag{3}\]
The variational parameters \(\mathbf{\theta}\) are trained by minimizing an appropriately chosen loss function \(\mathcal{L}(\mathbf{\theta})\) which we consider
Figure 1: **Schematic diagram of our main results.** Consider the task of implementing a QNN, i.e., a parametrized unitary channel on a quantum computer. As shown in [21], the overparametrization phenomenon is defined as the QNN having enough parameters to explore all relevant directions in state space. a) For certain ansatzes the QNN can be efficiently overparametrized with few parameters, as there only exists a small number of available directions in state space. Moreover, for (most) such directions, changes in the parameter values usually translate into changes in state space. b) When the quantum device is faulty, quantum noise will act throughout the computation. In this work, we explore how hardware noise modifies the overparametrization phenomenon. Our results show that quantum noise can enable additional directions in state space. However, we also find that as the noise probability increases, the system becomes more and more insensitive to variations in the parameters.
to be of the form
\[\mathcal{L}(\mathbf{\theta})=\sum_{s=1}^{N}f_{s}\left(\mathrm{Tr}\Big{[}\mathcal{C}_{ \mathbf{\theta}}(\rho^{(s)})O_{s}\Big{]}\right)\,, \tag{4}\]
where \(O_{s}\) and \(f_{s}\) are respectively a (potentially) data-instance-dependent measurement, and a post-processing function.
While there are many aspects that define and distinguish a given QNN from another, we note that one of the most important is the choice of generators \(\mathcal{G}\) from which the QNN in Eq. (1) is built. Once \(\mathcal{G}\) is determined, the next aspect that defines a QNN is its depth, or equivalently, the number of parameters \(M\). In particular, one wants to choose \(\mathcal{G}\) and \(M\) such that there exist parameter values for which the task at hand is solved. While in this work we will not discuss how to appropriately choose \(\mathcal{G}\) (we instead refer the reader to [17; 31]), let us consider the effect of increasing the value of \(M\). In a nutshell, adding more parameters to a QNN increases its expressibility (up to a certain point) [47; 31; 46], meaning that the QNN can generate a wider breadth of unitaries. From a practical standpoint, adding new parameters can potentially enable new directions in the state space 1, and concomitantly in the loss function landscape. This can improve the trainability of the model by removing spurious local minima, and increasing the dimension of the solution manifold [49; 21]. In the following subsection we will see that the overparametrization phenomenon is indeed linked to the number of independent directions that are accessible in state space.
Footnote 1: By “directions in state space” we refer to elements of the tangent hyperplane defined at any point in state space [48].
### Dynamical Lie algebra, quantum Fisher information, and overparametrization
We will briefly recall here the main results in [21]. We will begin by defining the Dynamical Lie Algebra (DLA) of a QNN [50; 51], which can be used to characterize the group of unitaries that it can implement [46; 31; 47]. It follows that the DLA also determines the manifold of all states reachable by the QNN. This will allow us to interpret the overparametrization regime as that in which the QNN has enough parameters to explore all accessible directions in said manifold. In particular, we will show that the rank of the quantum Fisher information matrix can be used to detect the onset of overparametrization.
**Definition 1** (Dynamical Lie Algebra).: _Given a set of Hermitian generators \(\mathcal{G}\), the dynamical Lie algebra \(\mathfrak{g}\) is the subspace of operator space spanned by the repeated nested commutators of the elements in \(i\mathcal{G}\). That is_
\[\mathfrak{g}=\mathrm{span}\left\langle i\mathcal{G}\right\rangle_{Lie}\,, \tag{5}\]
_where \(\left\langle i\mathcal{G}\right\rangle_{Lie}\) denotes the Lie closure of \(i\mathcal{G}\)._
The DLA contains information about the ultimate expressiveness of the QNN, since the group of reachable unitaries obtained for any possible parameter values \(\mathbf{\theta}\in\mathbb{R}^{M}\) (for an arbitrarily large number of parameters \(M\)) is obtained from the DLA via exponentiation, i.e., as \(\{U(\mathbf{\theta})\}_{\mathbf{\theta}}=\mathbb{G}=e^{\mathfrak{g}}\subseteq\mathcal{SU}(d)\). We remark that \(\mathbb{G}\) is known as the dynamical Lie group. Moreover, the manifold of states obtained from the action of the QNN on an input state \(\rho\), given by \(\{U\rho U^{\dagger},U\in\mathbb{G}\}\), is known as the orbit of \(\rho\) under \(\mathbb{G}\).
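As an illustration, the Lie closure of Definition 1 can be computed numerically by repeatedly adding nested commutators until no new linearly independent direction appears. The brute-force sketch below is only practical for small systems (efficient DLA computations use Pauli-basis bookkeeping); it recovers \(\dim(\mathfrak{su}(2))=3\) for the generator set \(\{X,Z\}\).

```python
import numpy as np
from itertools import product

def lie_closure_dim(generators, tol=1e-10, max_rounds=12):
    """Dimension of span<iG>_Lie (Definition 1), computed by brute force:
    keep a linearly independent set, add nested commutators until closure."""
    shape = generators[0].shape
    basis = []  # flattened, linearly independent operators

    def try_add(op):
        stacked = np.array(basis + [op.flatten()])
        if np.linalg.matrix_rank(stacked, tol=tol) > len(basis):
            basis.append(op.flatten())
            return True
        return False

    for g in generators:
        try_add(1j * g)
    for _ in range(max_rounds):
        grew = False
        ops = [v.reshape(shape) for v in basis]
        for a, b in product(ops, repeat=2):
            grew |= try_add(a @ b - b @ a)
        if not grew:
            break  # no new direction found: the Lie closure is reached
    return len(basis)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
print(lie_closure_dim([X, Z]))  # 3 = dim su(2)
```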
From here, we can ask: _By varying the parameters in the QNN, can we explore all accessible directions in the orbits of the input states, i.e., are we in the overparametrized regime?_ To answer this question, let us assume for now that the dataset consists of a single pure state \(\left|\psi\right\rangle\). One can study the action of the QNN on \(\left|\psi\right\rangle\) via the Quantum Fisher Information Matrix (QFIM). To define the QFIM, start by considering a distance measure \(\mathcal{D}\) between two pure states. In particular, we take \(\mathcal{D}\) to be the infidelity, i.e., \(\mathcal{D}(\left|\psi\right\rangle,\left|\phi\right\rangle)=1-|\left\langle\psi|\phi\right\rangle|^{2}\). Then, given a set of parameters \(\mathbf{\theta}\) and an infinitesimal perturbation \(\mathbf{\delta}\), an expansion to second-order of \(\mathcal{D}\) between the quantum states \(\left|\psi(\mathbf{\theta})\right\rangle=U(\mathbf{\theta})\left|\psi\right\rangle\) and \(\left|\psi(\mathbf{\theta}+\mathbf{\delta})\right\rangle=U(\mathbf{\theta}+\mathbf{\delta})\left|\psi\right\rangle\) gives the Fubini-Study metric [52; 53], i.e.,
\[\mathcal{D}(\left|\psi(\mathbf{\theta})\right\rangle,\left|\psi(\mathbf{\theta}+\mathbf{ \delta})\right\rangle)=\frac{1}{2}\mathbf{\delta}^{T}\cdot F(\left|\psi(\mathbf{ \theta})\right\rangle)\cdot\mathbf{\delta}\,. \tag{6}\]
Here, \(F(\left|\psi(\mathbf{\theta})\right\rangle)\) is the QFIM for the state \(\left|\psi(\mathbf{\theta})\right\rangle\), an \(M\times M\) matrix whose elements are given by [54]
\[[F(\left|\psi(\mathbf{\theta})\right\rangle)]_{ml}=4\,\mathrm{Re}\left[\left\langle\partial_{m}\psi(\mathbf{\theta})|\partial_{l}\psi(\mathbf{\theta})\right\rangle-\left\langle\partial_{m}\psi(\mathbf{\theta})|\psi(\mathbf{\theta})\right\rangle\left\langle\psi(\mathbf{\theta})|\partial_{l}\psi(\mathbf{\theta})\right\rangle\right]\,, \tag{7}\]
where \(\left|\partial_{m}\psi(\mathbf{\theta})\right\rangle=\partial\left|\psi(\mathbf{\theta} )\right\rangle/\partial\theta_{m}=\partial_{m}\left|\psi(\mathbf{\theta})\right\rangle\), for \(\theta_{m}\in\mathbf{\theta}\). As shown in Fig. 2, the eigenvalues and eigenvectors of the QFIM provide valuable geometrical information regarding how changes in the parameters translate into changes in the state. Crucially, the rank of the QFIM quantifies the number of independent directions in state space that can be explored by making infinitesimal changes in \(\mathbf{\theta}\).
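For concreteness, Eq. (7) can be evaluated numerically by differentiating the parametrized state, here with central finite differences; the single-qubit generators and parameter values below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.linalg import expm

def state(theta, gens, psi0):
    """|psi(theta)> from Eq. (1): apply exp(-i theta_m H_m) layer by layer."""
    psi = psi0
    for t, H in zip(theta, gens):
        psi = expm(-1j * t * H) @ psi
    return psi

def qfim(theta, gens, psi0, eps=1e-6):
    """Pure-state QFIM of Eq. (7), via central finite differences."""
    M = len(theta)
    psi = state(theta, gens, psi0)
    dpsi = []
    for m in range(M):
        tp, tm = theta.copy(), theta.copy()
        tp[m] += eps
        tm[m] -= eps
        dpsi.append((state(tp, gens, psi0) - state(tm, gens, psi0)) / (2 * eps))
    F = np.zeros((M, M))
    for m in range(M):
        for l in range(M):
            term = (dpsi[m].conj() @ dpsi[l]
                    - (dpsi[m].conj() @ psi) * (psi.conj() @ dpsi[l]))
            F[m, l] = 4 * np.real(term)
    return F

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
theta = np.array([0.3, 0.7, 0.1])
F = qfim(theta, [X, Z, X], np.array([1.0, 0.0], dtype=complex))
# rank is upper bounded by dim(g) = 3; a single-qubit pure state can
# explore at most 2 independent directions on the Bloch sphere.
print(np.round(np.linalg.eigvalsh(F), 6))
```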
As such, one can determine if the QNN is overparametrized by checking if it has enough parameters so that the QFIM saturates its maximum achievable rank.
**Definition 2** (Overparametrization).: _A QNN is said to be overparametrized if the number of parameters \(M\) is such that the QFIM saturates its achievable rank \(R\) at least in one point of the loss landscape. That is, if increasing the number of parameters past some minimal (critical) value \(M_{c}\) does not further increase the rank of the QFIM, i.e.,_
\[\max_{M\geqslant M_{c},\,\boldsymbol{\theta}}\operatorname{rank}[F(|\psi(\boldsymbol{\theta})\rangle)]=R\,. \tag{8}\]
The main result of Ref. [21] is that \(M_{c}\) is directly linked to the dimension of \(\mathfrak{g}\) (the circuit's DLA). In particular, the rank of the QFIM is upper bounded by \(\dim(\mathfrak{g})\). Hence, one can potentially reach the overparametrization regime if the QNN has \(\sim\dim(\mathfrak{g})\) parameters. Clearly, if \(\dim(\mathfrak{g})\in\Omega(\exp(n))\) (as is the case of controllable unitaries) then one cannot efficiently overparametrize the QNN. More interesting, however, are the cases when \(\dim(\mathfrak{g})\in\mathcal{O}(\operatorname{poly}(n))\), such as those arising in [55; 31].
To finish, we note that in Ref. [21] it was also shown that Definition 2 has operational meaning in terms of the capacity of the QNN [27; 28]. We recall that the capacity (or power) of a QNN is used to quantify the breadth of functions that it can capture [56]. For instance, let us consider the capacity measure of [28], which defines the effective quantum dimension of a QNN as
\[D_{1}(\boldsymbol{\theta})=\mathbb{E}\left[\sum_{m=1}^{M}\mathcal{Z}(\lambda^{ m}(\boldsymbol{\theta}))\right]\,. \tag{9}\]
Here \(\lambda^{m}(\boldsymbol{\theta})\) are the eigenvalues of the QFIM for the state \(|\psi(\boldsymbol{\theta})\rangle\), and \(\mathcal{Z}(x)\) is a function such that \(\mathcal{Z}(x)=0\) for \(x=0\), and \(\mathcal{Z}(x)=1\) for \(x\neq 0\). Moreover, the expectation value is taken over the probability distribution with which states are sampled from \(\mathcal{S}\). It is straightforward to see that for a single-state dataset, \(D_{1}(\boldsymbol{\theta})=\operatorname{rank}[F(|\psi(\boldsymbol{\theta})\rangle)]\). An alternative definition for the capacity of a QNN can be found in [27]. In the limit of large system sizes, i.e., when \(n\to\infty\), the effective quantum dimension of [27] converges to
\[D_{2}=\max_{\boldsymbol{\theta}}\left(\operatorname{rank}\left[I(|\psi( \boldsymbol{\theta})\rangle)\right]\right)\,, \tag{10}\]
where \(I(|\psi(\boldsymbol{\theta})\rangle)\) is the classical Fisher Information matrix, defined as
\[I(|\psi(\boldsymbol{\theta})\rangle)=\mathbb{E}\left[\frac{\partial\log(p(| \psi\rangle,y;\boldsymbol{\theta}))}{\partial\boldsymbol{\theta}}\frac{ \partial\log(p(|\psi\rangle,y;\boldsymbol{\theta}))}{\partial\boldsymbol{ \theta}}^{T}\right]. \tag{11}\]
Here, \(p(|\psi\rangle,y;\boldsymbol{\theta})\) describes the joint relationship between an input \(|\psi\rangle\) and an output \(y\) of the QNN. In addition, the expectation value is taken over the probability distribution that samples input states from the dataset. As shown in [21] the model's capacity, as quantified by the effective dimensions of Eqs. (9) or (10), is upper bounded as
\[D_{1}(\boldsymbol{\theta})\leqslant\dim(\mathfrak{g}),\quad D_{2}\leqslant \dim(\mathfrak{g})\,. \tag{12}\]
Moreover, one can show that when the QNN is overparametrized, \(D_{1}(\boldsymbol{\theta})\) achieves its maximum value. This shows that overparametrizing a QNN is equivalent to saturating its capacity.
### Quantum noise preliminaries
Quantum noise refers to the uncontrolled errors that occur when implementing a QNN on quantum hardware. Such errors may arise from a wide variety of sources, such as imperfections when implementing gates or when performing measurements, undesired qubit-qubit couplings or unwanted interactions between the qubits and their environment.
In this work, we model the action of the hardware noise present throughout a QNN by considering that noise channels \(\mathcal{N}_{m}\) act before and after each unitary \(U_{m}(\theta_{m})\) (see Fig. 3). Here, we recall the definition of a unital Pauli channel.
Figure 2: **QFIM and directions in state space.** Let \(U(\boldsymbol{\theta})\in e^{\mathfrak{g}}\) be a parametrized unitary and \(|\psi\rangle\) some pure state, so that \(|\psi(\boldsymbol{\theta})\rangle=U(\boldsymbol{\theta})\,|\psi\rangle\). Here we schematically show that the eigenvalues and eigenvectors of the QFIM, \(F(|\psi(\boldsymbol{\theta})\rangle)\), inform how changes in parameter space translate into changes in state space. In particular, when modifying \(\boldsymbol{\theta}\) following the eigenvectors of the QFIM, the state \(|\psi(\boldsymbol{\theta})\rangle\) explores the corresponding available direction in the tangent space \(\mathfrak{g}\,|\psi(\boldsymbol{\theta})\rangle\). Additionally, the magnitude of the QFIM eigenvalues determines the sensitivity of the state to a change along an eigenvector direction [53]. As such, a large eigenvalue means that it is “easy” to nudge the state in state space, while small eigenvalues indicate that the state is insensitive to parameter changes in the direction of the associated eigenvector.
**Definition 3** (Unital Pauli channel).: _A unital Pauli channel is a CPTP map \(\mathcal{N}\) whose action on an operator \(\rho\) is given by_
\[\mathcal{N}(\rho)=\sum_{\mathbf{\alpha}\mathbf{\beta}}p_{\mathbf{\alpha}\mathbf{\beta}}X^{\mathbf{ \alpha}}Z^{\mathbf{\beta}}\rho Z^{\mathbf{\beta}}X^{\mathbf{\alpha}}\,, \tag{13}\]
_where \(\{p_{\mathbf{\alpha}\mathbf{\beta}}\}\) is a probability distribution (i.e., \(p_{\mathbf{\alpha}\mathbf{\beta}}\geqslant 0\) and \(\sum_{\mathbf{\alpha}\mathbf{\beta}}p_{\mathbf{\alpha}\mathbf{\beta}}=1\)), and \(X^{\mathbf{\alpha}}Z^{\mathbf{\beta}}:=X^{\alpha_{1}}Z^{\beta_{1}}\otimes...\otimes X^ {\alpha_{n}}Z^{\beta_{n}}\), where \(\alpha_{1},\ldots,\alpha_{n},\beta_{1},\ldots,\beta_{n}\in\{0,1\}\)._
In other words, a unital (identity preserving) Pauli channel consists of Pauli operators applied randomly according to a certain probability distribution. It is easy to see that it is diagonal in the Pauli basis. That is, its action maps a Pauli operator \(X^{\mathbf{\alpha}^{\prime}}Z^{\mathbf{\beta}^{\prime}}\) onto itself as \(\mathcal{N}(X^{\mathbf{\alpha}^{\prime}}Z^{\mathbf{\beta}^{\prime}})=c_{\mathbf{\alpha}^{ \prime}\mathbf{\beta}^{\prime}}X^{\mathbf{\alpha}^{\prime}}Z^{\mathbf{\beta}^{\prime}}\), where \(c_{\mathbf{00}}=1\) and \(-1\leqslant c_{\mathbf{\alpha}^{\prime}\mathbf{\beta}^{\prime}}\leqslant 1\) for all \(\mathbf{\alpha}^{\prime}\) and \(\mathbf{\beta}^{\prime}\). Indeed,
\[\mathcal{N}(X^{\mathbf{\alpha}^{\prime}}Z^{\mathbf{\beta}^{\prime}}) =\sum_{\mathbf{\alpha}\mathbf{\beta}}p_{\mathbf{\alpha}\mathbf{\beta}}X^{\mathbf{ \alpha}}Z^{\mathbf{\beta}}X^{\mathbf{\alpha}^{\prime}}Z^{\mathbf{\beta}^{\prime}}Z^{\mathbf{ \beta}}X^{\mathbf{\alpha}}\] \[=X^{\mathbf{\alpha}^{\prime}}Z^{\mathbf{\beta}^{\prime}}\underbrace{ \sum_{\mathbf{\alpha}\mathbf{\beta}}(-1)^{\mathbf{\alpha}^{\prime}\cdot\mathbf{\beta}}(-1)^{ \mathbf{\alpha}\cdot\mathbf{\beta}^{\prime}}p_{\mathbf{\alpha}\mathbf{\beta}}}_{c_{\mathbf{\alpha} ^{\prime}\mathbf{\beta}^{\prime}}}, \tag{14}\]
where we used the following properties,
\[[X^{\mathbf{\alpha}},X^{\mathbf{\alpha}^{\prime}}]=0,\;\;\;[Z^{\mathbf{\beta}},Z^{\mathbf{ \beta}^{\prime}}]=0,\;\;\;X^{\mathbf{\alpha}}Z^{\mathbf{\beta}}=(-1)^{\mathbf{\alpha} \cdot\mathbf{\beta}}Z^{\mathbf{\beta}}X^{\mathbf{\alpha}},\]
together with the fact that the square of a Pauli operator is equal to the identity. Note that \(c_{\mathbf{00}}=1\) implies that a unital Pauli noise channel maps the identity operator onto itself, which is a necessary and sufficient condition for a diagonal superoperator to be trace preserving. In what follows we will assume that \(c_{\mathbf{\alpha}^{\prime}\mathbf{\beta}^{\prime}}\in(-1,1)\) for all \(\mathbf{\alpha}^{\prime}\) and \(\mathbf{\beta}^{\prime}\).
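As a quick numerical illustration (our own sketch, not part of the original derivation), the following Python snippet draws a random single-qubit unital Pauli channel and verifies the diagonal action of Eq. (14), with \(c_{\mathbf{0}\mathbf{0}}=1\) and all other coefficients in \([-1,1]\):

```python
# Sketch (ours): a random single-qubit unital Pauli channel acts diagonally
# on the Pauli basis, N(P) = c_P * P, as stated in Eq. (14).
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = {"I": I, "X": X, "Z": Z, "XZ": X @ Z}  # X^a Z^b for a, b in {0, 1}

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(4))                   # random distribution p_{ab}
kraus = [np.sqrt(pi) * K for pi, K in zip(p, paulis.values())]

def channel(rho):
    # Eq. (13): sum_ab p_ab (X^a Z^b) rho (X^a Z^b)^dagger
    return sum(K @ rho @ K.conj().T for K in kraus)

for name, P in paulis.items():
    out = channel(P)
    c = np.trace(P.conj().T @ out) / np.trace(P.conj().T @ P)
    assert np.allclose(out, c * P)              # diagonal in the Pauli basis
    print(f"c_{name} = {c.real:+.3f}")          # c_I = 1, others in [-1, 1]
```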
Pauli unital noise includes, as a special case, _local depolarizing noise_, which acts on each qubit \(j\in[1,n]\) as [57]
\[\mathcal{N}_{j}^{Depol}(\rho) =\left(1-\frac{3p}{4}\right)\rho+\frac{p}{4}\left(X_{j}\rho X_{j} +Y_{j}\rho Y_{j}+Z_{j}\rho Z_{j}\right)\,,\] \[=(1-p)\rho+p\frac{\openone_{j}\otimes\mathrm{Tr}_{j}[\rho]}{2}\,. \tag{15}\]
Here, \(0<p\leqslant 1\) denotes the probability of depolarization, and \(X_{j}\), \(Y_{j}\) and \(Z_{j}\) are Pauli operators acting on the \(j\)-th qubit. Moreover, \(\mathrm{Tr}_{j}\) indicates the partial trace over qubit \(j\). Similarly, we can construct an \(n\)-qubit channel consisting of a local depolarizing channel acting on each qubit as
\[\mathcal{N}_{loc}^{Depol}(\rho)=\bigotimes_{j=1}^{n}\mathcal{N}_{j}^{Depol}(\rho )\,, \tag{16}\]
or the _global depolarizing channel_, whose action is
\[\mathcal{N}^{Depol}(\rho)=(1-p)\rho+p\frac{\openone}{d}\,, \tag{17}\]
where \(0<p\leqslant 1\). Other examples of Pauli noise channels include bit- and phase-flip channels, as well as \(T_{2}\) processes (i.e., the dephasing channel is a unital Pauli channel).
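For concreteness, a minimal numpy sketch (ours; the state, seed, and probability are arbitrary choices) checks that the two forms of the local depolarizing channel in Eq. (15) coincide, and constructs the global channel of Eq. (17):

```python
# Sketch (ours): equivalence of the two forms of Eq. (15) and the global
# depolarizing channel of Eq. (17).
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def depol_1q(rho, p):
    """Single-qubit depolarizing channel, Pauli form of Eq. (15)."""
    return (1 - 3 * p / 4) * rho + (p / 4) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)                       # random single-qubit state
p = 0.3
alt = (1 - p) * rho + p * I2 / 2           # second form of Eq. (15)
assert np.allclose(depol_1q(rho, p), alt)

def depol_global(rho, p):
    """Global depolarizing channel of Eq. (17) on a d-dimensional state."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d
```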
As shown in Fig. 3, in the presence of quantum noise the action of the QNN is modeled by noise channels interleaved with the unitary channels. Hence, the output of the noisy QNN is given by
\[\widetilde{\rho}_{\mathbf{\theta}}=\widetilde{\mathcal{C}}_{\mathbf{\theta}}(\rho)= \mathcal{N}_{M+1}\circ\mathcal{C}_{\theta_{M}}^{M}\circ\mathcal{N}_{M}\circ \cdots\circ\mathcal{N}_{2}\circ\mathcal{C}_{\theta_{1}}^{1}\circ\mathcal{N}_{1} (\rho)\,, \tag{18}\]
for some (potentially layer-dependent) noise channels \(\mathcal{N}_{m}\), with \(m=1,\ldots,M+1\).
### Mixed-state quantum Fisher information matrix
As previously discussed, in the presence of noise the quantum states evolving through the circuit become mixed, and we must extend the formula of the QFIM in Eq. (7) to account for this. Following the same program that led to the QFIM for pure states, we can define a mixed-state QFIM as an expansion of the _Bures distance_, which is a measure of distinguishability between mixed states. The Bures distance is defined as
\[\mathcal{B}(\rho,\sigma)=2\left(1-\sqrt{\mathcal{F}(\rho,\sigma)}\right)\,, \tag{19}\]
Figure 3: **Noiseless and noisy quantum circuits.** a) Noiseless quantum circuit consisting of parametrized unitary channels \(\mathcal{C}_{\theta_{m}}^{m}\). b) Noisy quantum circuit where unital Pauli noise channels \(\mathcal{N}_{m}\) are interleaved with the unitary channels.
where \(\mathcal{F}(\rho,\sigma)\) is the Uhlmann fidelity [58]
\[\mathcal{F}(\rho,\sigma)=\left(\mathrm{Tr}\Big{[}\sqrt{\sqrt{\rho}\sigma\sqrt{ \rho}}\Big{]}\right)^{2}\,. \tag{20}\]
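The Bures distance is straightforward to evaluate numerically; the following short sketch (our illustration, using scipy's matrix square root) implements Eqs. (19) and (20):

```python
# Sketch (ours): Uhlmann fidelity, Eq. (20), and Bures distance, Eq. (19).
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

def bures_distance(rho, sigma):
    return 2.0 * (1.0 - np.sqrt(fidelity(rho, sigma)))

rho = np.diag([0.9, 0.1]).astype(complex)
sigma = np.diag([0.8, 0.2]).astype(complex)
print(bures_distance(rho, rho))    # ~0 for identical states
print(bures_distance(rho, sigma))  # > 0 for distinguishable states
```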
Let \(\rho_{\mathbf{\theta}}\) be a parametrized mixed state, and let its spectral decomposition be
\[\rho_{\mathbf{\theta}}=\sum_{\mu=1}^{d}r_{\mu}\ket{r_{\mu}}\langle r_{\mu}|\, \tag{21}\]
where \(\{r_{\mu}\}_{\mu=1}^{d}\) are the eigenvalues of \(\rho_{\mathbf{\theta}}\) (such that \(r_{\mu}\geqslant 0\) for all \(\mu\), and \(\sum_{\mu}r_{\mu}=1\)), and \(\{\ket{r_{\mu}}\}_{\mu=1}^{d}\) are the associated eigenvectors2. Then, a second order expansion of the Bures distance between \(\rho_{\mathbf{\theta}}\) and \(\rho_{\mathbf{\theta}+\mathbf{\delta}}\) leads to the mixed state QFIM, whose entries are [59; 53]
Footnote 2: For simplicity, we omit the \(\mathbf{\theta}\) dependency in \(r_{\mu}\) and \(\ket{r_{\mu}}\).
\[[F(\rho_{\mathbf{\theta}})]_{ml} =\sum_{\begin{subarray}{c}\mu,\nu\\ r_{\mu}+r_{\nu}\neq 0\end{subarray}}\frac{2\mathrm{Re}[\langle r_{\mu}|\partial_{m}\rho_{\mathbf{\theta}}|r_{\nu}\rangle\langle r_{\nu}|\partial_{l}\rho_{\mathbf{\theta}}|r_{\mu}\rangle]}{r_{\mu}+r_{\nu}} \tag{22}\] \[=\sum_{\begin{subarray}{c}\mu\\ r_{\mu}\neq 0\end{subarray}}\left(\frac{(\partial_{m}r_{\mu})(\partial_{l}r_{\mu})}{r_{\mu}}+4r_{\mu}\mathrm{Re}\left[\langle\partial_{m}r_{\mu}|\partial_{l}r_{\mu}\rangle\right]\right)\] \[\quad-\sum_{\begin{subarray}{c}\mu,\nu\\ r_{\mu}+r_{\nu}\neq 0\end{subarray}}\frac{8r_{\mu}r_{\nu}}{r_{\mu}+r_{\nu}}\mathrm{Re}\left[\langle\partial_{m}r_{\mu}|r_{\nu}\rangle\langle r_{\nu}|\partial_{l}r_{\mu}\rangle\right]\,. \tag{23}\]
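Numerically, the first form of Eq. (22) can be evaluated directly from a spectral decomposition and finite-difference derivatives. The sketch below is our own construction; `rho_fn`, `eps`, and `tol` are assumed placeholders for a user-supplied map \(\boldsymbol{\theta}\mapsto\rho_{\boldsymbol{\theta}}\) and tolerance choices:

```python
# Sketch (ours): mixed-state QFIM of Eq. (22) via eigendecomposition and
# central finite differences. rho_fn(theta) is an assumed user-supplied map.
import numpy as np

def qfim(rho_fn, theta, eps=1e-6, tol=1e-12):
    rho = rho_fn(theta)
    r, V = np.linalg.eigh(rho)                 # eigenvalues r_mu, eigenvectors
    M = len(theta)
    d_rho = []                                 # d rho / d theta_m
    for m in range(M):
        tp, tm = theta.copy(), theta.copy()
        tp[m] += eps; tm[m] -= eps
        d_rho.append((rho_fn(tp) - rho_fn(tm)) / (2 * eps))
    # derivatives in the eigenbasis: D[m][mu, nu] = <r_mu| d_m rho |r_nu>
    D = [V.conj().T @ d @ V for d in d_rho]
    F = np.zeros((M, M))
    for m in range(M):
        for l in range(M):
            val = 0.0
            for mu in range(len(r)):
                for nu in range(len(r)):
                    if r[mu] + r[nu] > tol:
                        val += 2 * np.real(D[m][mu, nu] * D[l][nu, mu]) / (r[mu] + r[nu])
            F[m, l] = val
    return F
```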
Here we recall a few properties of the QFIM which will be used below, and we refer the reader to [59] for their proof.
1. \(F\) is a symmetric matrix: \(F^{T}=F\).
2. \(F\) is positive semi-definite: \(F\geqslant 0\).
3. \(F\) is convex: For any pair of states \(\rho_{\mathbf{\theta}}\) and \(\sigma_{\mathbf{\theta}}\) and for \(0\leqslant q\leqslant 1\) we have \(F(q\rho_{\mathbf{\theta}}+(1-q)\sigma_{\mathbf{\theta}})\leqslant qF(\rho_{\mathbf{ \theta}})+(1-q)F(\sigma_{\mathbf{\theta}})\).
4. \(F\) is invariant under unitary transformations: \(F(U\rho_{\mathbf{\theta}}U^{\dagger})=F(\rho_{\mathbf{\theta}})\) for any \(U\in\mathcal{U}(d)\).
5. \(F\) is non-increasing under quantum channels: If \(\Phi\) is a quantum channel, then \(F(\Phi(\rho_{\mathbf{\theta}}))\leqslant F(\rho_{\mathbf{\theta}})\).
## III Results
The previous section reviewed the results of Ref. [21], which analyzed the overparametrization phenomenon when no hardware noise is present. However, in a realistic scenario where the QNN is implemented on a near-term quantum device [60] we can expect that quantum noise will act throughout the circuit. Therefore, in what follows we set out to study how the results of Ref. [21] change when noise is considered. For simplicity, we will study the case when the dataset is composed of a single mixed state \(\mathcal{S}=\{\rho\}\). The extension of our results to multi-state datasets is straightforward, as one simply needs to follow the approach taken in [21].
### Single-qubit toy model
We start with a simple toy model that will help us gather intuition on the effects that noise may have on the rank and the eigenvalues of the QFIM, and hence on the QNNs' overparametrization. As we will show, we can expect that the presence of quantum noise will generally: i) Increase the rank of the QFIM, and ii) Decrease the overall magnitude of the QFIM eigenvalues. To illustrate these two phenomena, we consider a simple single-qubit model undergoing bit-flip noise. The setup is as follows. First, we initialize the state of the single qubit to
\[\rho=0.9\,|+\rangle\!\langle+|+0.1\,\frac{\openone}{2}\,. \tag{24}\]
We choose a full rank state to avoid issues in the QFIM arising from a change in the rank of the state [61; 62]. Then, this state is sent through a circuit composed of four single qubit rotations
\[U(\mathbf{\theta})=e^{-i\theta_{4}X/2}e^{-i\theta_{3}Z/2}e^{-i\theta_{2}X/2}e^{-i\theta_{1}Z/2}\,. \tag{25}\]
This setup is depicted in Fig. 4(a, top). In channel notation, this QNN is expressed as the concatenation of four unitary channels
\[\mathcal{C}_{\mathbf{\theta}}=\mathcal{C}_{\theta_{4}}^{X}\circ\mathcal{C}_{ \theta_{3}}^{Z}\circ\mathcal{C}_{\theta_{2}}^{X}\circ\mathcal{C}_{\theta_{1}}^ {Z}\,, \tag{26}\]
where \(\mathcal{C}_{\theta}^{Z}(\rho)=e^{-i\theta Z/2}\rho\,e^{i\theta Z/2}\), and analogously for \(\mathcal{C}_{\theta}^{X}(\rho)\).
The generators of the QNN are the Pauli matrices \(\mathcal{G}=\{X,Z\}\), and it is straightforward to check that the DLA is simply \(\mathfrak{g}=\mathrm{span}\{iX,iY,iZ\}\cong\mathfrak{su}(2)\), meaning that the QNN is _universal_ or _controllable_[31; 50]. Moreover, we can see that the maximum possible rank of the QFIM is \(\max_{\mathbf{\theta}}\left(\mathrm{rank}\left[F(\rho_{\mathbf{\theta}})\right]\right)=2\), as the state lives on a two-dimensional shell inside of the Bloch sphere. As such, it is clear that the QNN is already overparametrized, since the maximum attainable rank of the QFIM is smaller than the number of parameters. To exemplify how noise affects the QNN, we evaluate the QFIM at three different sets of parameter values,
1. \(\mathbf{\theta}_{1}=\{0,0,0,0\}\), leading to \(\operatorname{rank}\left[F(\rho_{\mathbf{\theta}_{1}})\right]=1\),
2. \(\mathbf{\theta}_{2}=\{\frac{\pi}{2},0,0,0\}\), leading to \(\operatorname{rank}\left[F(\rho_{\mathbf{\theta}_{2}})\right]=2\),
3. \(\mathbf{\theta}_{3}=\{\frac{\pi}{2},\frac{\pi}{4},\frac{\pi}{4},\frac{\pi}{4}\}\), leading to \(\operatorname{rank}\left[F(\rho_{\mathbf{\theta}_{3}})\right]=2\).
The (noiseless) trajectories corresponding to these choices are presented in Fig. 4(a, bottom). While the rank of the QFIM is indeed saturated at \(\mathbf{\theta}_{2}\) and \(\mathbf{\theta}_{3}\), for \(\mathbf{\theta}_{1}\) we have \(\operatorname{rank}\left[F(\rho_{\mathbf{\theta}_{1}})\right]=1\). We have thus added this example to showcase the important role that the interplay between the initial state and the QNN parameters has in determining the rank of the QFIM.
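The ranks quoted above can be checked numerically. The following sketch (ours) propagates the state of Eq. (24) through the noiseless circuit of Eq. (25) and evaluates the rank of the mixed-state QFIM via Eq. (22); the expected output at \(\boldsymbol{\theta}_1,\boldsymbol{\theta}_2,\boldsymbol{\theta}_3\) is 1, 2, 2:

```python
# Sketch (ours): QFIM ranks of the noiseless single-qubit toy model.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho0 = 0.9 * np.outer(plus, plus.conj()) + 0.1 * np.eye(2) / 2  # Eq. (24)

def rho_fn(theta):
    U = np.eye(2, dtype=complex)
    for th, G in zip(theta, [Z, X, Z, X]):   # Eq. (25), rightmost gate first
        U = expm(-1j * th * G / 2) @ U
    return U @ rho0 @ U.conj().T

def qfim_rank(theta, eps=1e-6):
    r, V = np.linalg.eigh(rho_fn(theta))     # rho0 is full rank, r > 0
    D = []
    for m in range(4):
        tp, tm = theta.copy(), theta.copy()
        tp[m] += eps; tm[m] -= eps
        D.append(V.conj().T @ ((rho_fn(tp) - rho_fn(tm)) / (2 * eps)) @ V)
    F = np.array([[sum(2 * np.real(D[m][a, b] * D[l][b, a]) / (r[a] + r[b])
                       for a in range(2) for b in range(2))
                   for l in range(4)] for m in range(4)])
    return np.linalg.matrix_rank(F, tol=1e-7)

for th in ([0, 0, 0, 0],
           [np.pi / 2, 0, 0, 0],
           [np.pi / 2, np.pi / 4, np.pi / 4, np.pi / 4]):
    print(qfim_rank(np.array(th, dtype=float)))   # expected ranks: 1, 2, 2
```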
Now, let us consider the case where bit-flip noise channels act before and after every unitary gate in the circuit (see Fig. 4(b, top) for a schematic portrayal of the setup). For convenience, we recall that the bit-flip channel is a special case of Pauli noise of the form
\[\mathcal{N}^{BF}(\rho)=(1-p)\rho+pX\rho X\,,\qquad 0<p\leqslant 1\,, \tag{27}\]
such that the noisy QNN channel becomes
\[\widetilde{\mathcal{C}}_{\mathbf{\theta}}=\mathcal{N}^{BF}\circ\mathcal{C}^{X}_{ \theta_{4}}\circ\mathcal{N}^{BF}\circ\mathcal{C}^{Z}_{\theta_{3}}\circ \mathcal{N}^{BF}\circ\mathcal{C}^{X}_{\theta_{2}}\circ\mathcal{N}^{BF}\circ \mathcal{C}^{Z}_{\theta_{1}}\circ\mathcal{N}^{BF}\,. \tag{28}\]
A direct evaluation of the QFIM rank at the sets of parameter values previously considered reveals that
1. \(\mathbf{\theta}_{1}=\{0,0,0,0\}\), leads to \(\operatorname{rank}\left[F(\rho_{\mathbf{\theta}_{1}})\right]=1\),
2. \(\mathbf{\theta}_{2}=\{\frac{\pi}{2},0,0,0\}\), leads to \(\operatorname{rank}\left[F(\rho_{\mathbf{\theta}_{2}})\right]=2\),
3. \(\mathbf{\theta}_{3}=\{\frac{\pi}{2},\frac{\pi}{4},\frac{\pi}{4},\frac{\pi}{4}\}\), leads to \(\operatorname{rank}\left[F(\rho_{\mathbf{\theta}_{3}})\right]=3\).
The trajectories defined by these rotations are presented in Fig. 4(b, bottom), for a value of \(p=0.1\). Here we
Figure 4: **Single-qubit toy model examples.** a) We consider the case where the single qubit state of Eq. (24) is sent through a noiseless QNN with four parameters as in (26). We plot in the Bloch sphere the three trajectories defined by \(\mathbf{\theta}_{1}\), \(\mathbf{\theta}_{2}\) and \(\mathbf{\theta}_{3}\). b) We consider the case where the single qubit state of Eq. (24) is sent through a noisy QNN with four parameters, as in (28). Here, bit-flip noise channels act before and after every gate with probability \(p=0.1\). We plot in the Bloch sphere the three trajectories defined by \(\mathbf{\theta}_{1}\), \(\mathbf{\theta}_{2}\) and \(\mathbf{\theta}_{3}\).
can see that for \(\mathbf{\theta}_{1}\) and \(\mathbf{\theta}_{2}\) the rank of the QFIM is not increased. While it is obvious that for \(\mathbf{\theta}_{1}\) the noise does not change the output state of the QNN (as \(\rho\) is a fixed point of the noise model), for \(\mathbf{\theta}_{2}\) the noise channels do change the output state of the QNN. Notably, we can see that all the noise channels are effectively applied at the end of the parametrized evolution in both cases. As we will show below, this implies that they cannot change the rank of the QFIM (see Theorem 1). Finally, for \(\mathbf{\theta}_{3}\) the noise does increase the rank of the QFIM from two to three. Here, the rank of the QFIM is maximal, which follows from the state evolving in the three-dimensional Bloch sphere.
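A companion sketch (again ours) for the noisy channel of Eq. (28), where bit-flip channels with \(p=0.1\) act before and after every gate; per the discussion above, the expected ranks are now 1, 2, 3:

```python
# Sketch (ours): QFIM ranks of the toy model under bit-flip noise, Eq. (28).
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho0 = 0.9 * np.outer(plus, plus.conj()) + 0.1 * np.eye(2) / 2
p = 0.1

def bit_flip(rho):                                   # Eq. (27)
    return (1 - p) * rho + p * X @ rho @ X

def rho_fn(theta):                                   # Eq. (28): N before and
    rho = bit_flip(rho0)                             # after every gate
    for th, G in zip(theta, [Z, X, Z, X]):
        U = expm(-1j * th * G / 2)
        rho = bit_flip(U @ rho @ U.conj().T)
    return rho

def qfim_rank(theta, eps=1e-6):
    r, V = np.linalg.eigh(rho_fn(theta))
    D = []
    for m in range(4):
        tp, tm = theta.copy(), theta.copy()
        tp[m] += eps; tm[m] -= eps
        D.append(V.conj().T @ ((rho_fn(tp) - rho_fn(tm)) / (2 * eps)) @ V)
    F = np.array([[sum(2 * np.real(D[m][a, b] * D[l][b, a]) / (r[a] + r[b])
                       for a in range(2) for b in range(2))
                   for l in range(4)] for m in range(4)])
    return np.linalg.matrix_rank(F, tol=1e-7)

for th in ([0.0, 0, 0, 0],
           [np.pi / 2, 0, 0, 0],
           [np.pi / 2, np.pi / 4, np.pi / 4, np.pi / 4]):
    print(qfim_rank(np.array(th)))                   # expected: 1, 2, 3
```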
The fact that for \(\mathbf{\theta}_{3}\) the rank of the QFIM is increased indicates that the presence of noise enables a new direction in state space. As shown in Fig. 5(a), in the absence of noise (and hence when the rank of the QFIM is two), there are only two available directions in state space. These directions are depicted as blue and red lines corresponding to the trajectories followed by the state when the parameters are changed along the directions dictated by the eigenvectors of the QFIM with non-zero eigenvalues. Since the channel is unitary, these trajectories lie on the surface of a (fixed-purity) shell of the Bloch sphere. We have also verified that, as expected, the state remains unchanged when the parameters are varied along a direction corresponding to an eigenvector associated with a null eigenvalue of the QFIM (although this cannot be visualized in the plot because the initial and final states of the evolution coincide).
On the contrary, as shown in Fig. 5(b), when noise acts throughout the circuit (and hence when the rank of the QFIM is three), there are three available directions in state space. Here, the red, blue and green curves correspond to the trajectories that the state follows when changing the parameters along the directions given by the three eigenvectors of the QFIM with associated non-zero eigenvalue. Crucially, we can now see that there exists a direction (the blue curve) that preserves the purity of the quantum state. The other two directions, however, can both increase and decrease the purity of the output state. This is evidenced from the fact that the trajectories in the state space move inwards and outwards from the fixed-purity shell in the Bloch sphere.
We find it important to remark that while some of the directions
Figure 5: **State space trajectories following perturbations along QFIM eigendirections.** a) We consider that the single qubit state of Eq. (24) is sent through a noiseless QNN as in (26), with parameters \(\mathbf{\theta}_{3}\). Here we show within the Bloch sphere how the state \(\rho_{\mathbf{\theta}}\) changes when the parameters are varied following the directions given by three eigenvectors of the QFIM \(F(\rho_{\mathbf{\theta}})\). Two such directions are associated with the two non-zero eigenvalues (blue and red curves) and with a zero eigenvalue (green, non-visible, curve). b) We consider that the single qubit state of Eq. (24) is sent through a noisy QNN as in (28), with parameters \(\mathbf{\theta}_{3}\). Here we show within the Bloch sphere how the state \(\rho_{\mathbf{\theta}}\) changes when the parameters are varied following the directions given by three eigenvectors of the QFIM \(F(\widetilde{\rho}_{\mathbf{\theta}})\) with associated non-zero eigenvalues (blue, red, and green curves).
in state space can change the purity of the state \(\widetilde{\rho}_{\mathbf{\theta}}\), this does not imply that the QNN is purifying the state. As shown in Fig. 5(b), by perturbing the parameters \(\mathbf{\theta}\) along a direction \(\mathbf{\delta}\) one can move the _final state_ inwards or outwards in the Bloch sphere. That is, we can decrease or increase the purity of \(\widetilde{\rho}_{\mathbf{\theta}+\mathbf{\delta}}\) _with respect to that of_ \(\widetilde{\rho}_{\mathbf{\theta}}\). However, this does not imply that \(\widetilde{\rho}_{\mathbf{\theta}+\mathbf{\delta}}\) has less entropy than the initial state \(\rho\). In other words, changing the variational parameters by \(\mathbf{\delta}\) implies preparing again the initial state \(\rho\) and applying \(\widetilde{\mathcal{C}}_{\mathbf{\theta}+\mathbf{\delta}}\), not evolving from \(\widetilde{\rho}_{\mathbf{\theta}}\) to \(\widetilde{\rho}_{\mathbf{\theta}+\mathbf{\delta}}\). Physically, we can interpret the previous as saying that the state evolving under the noisy QNN \(\widetilde{\mathcal{C}}_{\mathbf{\theta}+\mathbf{\delta}}\) is less sensitive to noise than that evolving under \(\widetilde{\mathcal{C}}_{\mathbf{\theta}}\), and hence its purity gets less degraded by noise.
Next, let us evaluate how noise affects the magnitude of the QFIM eigenvalues. In Fig. 7 we plot the largest eigenvalue of the QFIM versus the probability of a bit-flip error \(p\). In this case, it is manifest that for all the parameter values considered above, the eigenvalue magnitude decreases with \(p\). A similar effect occurs with the remaining non-zero eigenvalues, as one can verify that their magnitudes also decrease with \(p\). Crucially, the previous holds not only for the eigenvalues of the QFIM that were non-zero in the noiseless setting, but also for those that the noise "turns on". This result indicates that the state's sensitivity to parameter changes decreases with increasing noise levels. As we will prove below (see Theorems 3 and 4), this is a general consequence of the presence of noise in a QNN.
### Global depolarizing noise
Here we study the overparametrization of general QNNs acting on an \(n\)-qubit state under a simple noise model: Global depolarizing noise (see Eq. (17)). We henceforth assume that global depolarizing noise channels act before and after every unitary channel in the QNN with the same probability \(p\). That is, we consider the case when
\[\widetilde{\rho}_{\mathbf{\theta}}=\mathcal{N}^{Depol}\circ\mathcal{C}_{\theta_{M }}^{M}\circ\mathcal{N}^{Depol}\circ\cdots\circ\mathcal{N}^{Depol}\circ \mathcal{C}_{\mathbf{\theta}_{1}}^{1}\circ\mathcal{N}^{Depol}(\rho)\,.\]
It is not hard to see that the action of the global depolarizing noise channels can be commuted through to the end of the circuit, so that
\[\mathcal{N}^{Depol}\circ\mathcal{C}_{\theta_{M}}^{M}\circ \mathcal{N}^{Depol}\circ\cdots\circ\mathcal{N}^{Depol}\circ\mathcal{C}_{\mathbf{ \theta}_{1}}^{1}\circ\mathcal{N}^{Depol}\] \[=\underbrace{\mathcal{N}^{Depol}\circ\cdots\circ\mathcal{N}^{ Depol}}_{\times(M+1)}\circ\mathcal{C}_{\theta_{M}}^{M}\circ\cdots\circ\mathcal{C}_{ \mathbf{\theta}_{1}}^{1}\] \[=\mathcal{N}^{Depol}_{eff}\circ\mathcal{C}_{\theta_{M}}^{M}\circ \cdots\circ\mathcal{C}_{\mathbf{\theta}_{1}}^{1}\,. \tag{29}\]
Here, we have defined
\[\mathcal{N}^{Depol}_{eff}(\rho)=(1-p)^{M+1}\rho+\left(1-(1-p)^{M+1}\right) \frac{\openone}{d}\,. \tag{30}\]
This shows that we can express the output state of the QNN as
\[\widetilde{\rho}_{\mathbf{\theta}} =(1-p)^{M+1}\mathcal{C}_{\mathbf{\theta}}(\rho)+\left(1-(1-p)^{M+1} \right)\frac{\openone}{d}\] \[=(1-p)^{M+1}\rho_{\mathbf{\theta}}+\left(1-(1-p)^{M+1}\right)\frac{ \openone}{d}\,. \tag{31}\]
More generally, we note that the previous result can be extended to the case when global depolarizing noise channels with different layer-dependent probabilities \(p_{m}\) act in between the gates of the circuit. In particular, one now finds
\[\mathcal{N}^{Depol}_{eff}(\rho)=\prod_{m=1}^{M+1}(1-p_{m})\rho+\left(1-\prod_ {m=1}^{M+1}(1-p_{m})\right)\frac{\openone}{d}\,. \tag{32}\]
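The commutation argument of Eqs. (29)-(31) is easy to verify numerically; the following sketch (ours, with arbitrary random gates and parameters) checks that interleaved global depolarizing channels reproduce the effective form of Eq. (31):

```python
# Sketch (ours): global depolarizing noise commutes to the end of the circuit.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
d, M, p = 4, 3, 0.2                                  # two qubits, three gates
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)                                 # random mixed input state

def depol(sigma):                                    # Eq. (17)
    return (1 - p) * sigma + p * np.eye(d) / d

Us = []
for _ in range(M):
    H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = (H + H.conj().T) / 2
    Us.append(expm(-1j * H))                         # random "gates"

out = depol(rho)                                     # interleaved evolution
for U in Us:
    out = depol(U @ out @ U.conj().T)

rho_theta = rho                                      # noiseless evolution
for U in Us:
    rho_theta = U @ rho_theta @ U.conj().T

q = (1 - p) ** (M + 1)
eff = q * rho_theta + (1 - q) * np.eye(d) / d        # Eq. (31)
assert np.allclose(out, eff)
```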
Next, we study how the rank of the QFIM and the magnitude of its eigenvalues change due to the presence of global depolarizing noise. In this context, we find it convenient to first present a useful theorem.
**Theorem 1**.: _Consider the case when a single noise channel acts at the end of the QNN as_
\[\widetilde{\mathcal{C}}_{\mathbf{\theta}}=\mathcal{N}\circ\mathcal{C}_{\theta_{M }}^{M}\circ\cdots\circ\mathcal{C}_{\mathbf{\theta}_{1}}^{1}\,. \tag{33}\]
Figure 7: **Eigenvalues of the QFIM versus noise levels.** Here we consider the case where the noisy QNN (28) acts on the initial single-qubit state of Eq. (24). We plot the magnitude of the largest eigenvalue of the QFIM for the single-qubit toy model versus the probability of bit-flip error \(p\). The top, middle and bottom panels respectively correspond to the parameter values \(\mathbf{\theta}_{1},\mathbf{\theta}_{2},\mathbf{\theta}_{3}\).
_The rank of the QFIM cannot be increased by the action of the noise. That is_
\[\operatorname{rank}[F(\widetilde{\rho}_{\boldsymbol{\theta}})]\leqslant \operatorname{rank}[F(\rho_{\boldsymbol{\theta}})]\,, \tag{34}\]
_where \(\rho_{\boldsymbol{\theta}}\) and \(\widetilde{\rho}_{\boldsymbol{\theta}}\) respectively denote the output states of the noiseless and noisy QNNs (see Eqs. (3) and (18))._
We refer the reader to Appendix A for a proof of Theorem 1. The key implication of this theorem is that if the noise acts exclusively at the end of the circuit, then the rank of the QFIM cannot be increased. Hence, it follows that one can overparametrize the noisy QNN, \(\widetilde{\mathcal{C}}_{\boldsymbol{\theta}}\), with the same number of parameters needed to overparametrize the noiseless one, \(\mathcal{C}_{\boldsymbol{\theta}}\).
Using Theorem 1 we can readily prove the following result for the case of global depolarizing noise.
**Theorem 2**.: _When global depolarizing channels act before and after every unitary in the QNN, then the rank of the QFIM cannot be increased by the action of the noise._
The proof of this theorem can be found in Appendix B. Theorem 2 shows that the presence of global depolarizing channels cannot increase the number of available directions in state space, as the rank of the QFIM is non-increasing. Again, this means that one can overparametrize the noisy QNN with the same number of parameters needed to overparametrize the noiseless QNN, when global depolarizing noise acts throughout the circuit.
In addition, we can also analyze how the eigenvalues of the QFIM change due to the presence of the global depolarizing channels. Here, the following theorem holds.
**Theorem 3**.: _When global depolarizing channels act before and after every unitary in the QNN, the entries of the QFIM (and therefore its eigenvalues) satisfy_
\[[F(\widetilde{\rho}_{\boldsymbol{\theta}})]_{ml}\in\mathcal{O}(e^{-p(M+1)}) \tag{35}\]
_i.e., they become exponentially suppressed with the product of the number of gates \(M\) and the probability of depolarization \(p\)._
This theorem is proven in Appendix C. Theorem 3 indicates that while global depolarizing noise cannot increase the number of available directions in state space, it does suppress the sensitivity of the state to _any_ variations in the parameters. This result can be used to further understand the so-called _noise-induced barren plateau_ phenomenon [40; 41] whereby noise erases all the features in the QML model's training landscape. For instance, let us note that given a linear loss function (i.e., \(f_{s}(x)=x\) in Eq. (4)), one has
\[\mathcal{L}(\boldsymbol{\theta}) =\operatorname{Tr}\!\left[\widetilde{\mathcal{C}}_{\boldsymbol{\theta}}(\rho)O\right]\] \[=(1-p)^{M+1}\operatorname{Tr}[\mathcal{C}_{\boldsymbol{\theta}}(\rho)O]+\left(1-(1-p)^{M+1}\right)\frac{\operatorname{Tr}[O]}{d}\,. \tag{36}\]
Equation (36) shows that the optimization landscape becomes exponentially flat with \(M\) and \(p\) (hence a noise-induced barren plateau). As such, our results show that noise-induced insensitivities arise already at the level of state space, thus providing a more fundamental understanding of the noise-induced barren plateau phenomenon.
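A toy illustration (ours) of this flattening: the spread of the landscape in Eq. (36) shrinks by the prefactor \((1-p)^{M+1}\), regardless of the noiseless values \(\operatorname{Tr}[\mathcal{C}_{\boldsymbol{\theta}}(\rho)O]\):

```python
# Sketch (ours): the loss range in Eq. (36) decays as (1 - p)^(M + 1).
import numpy as np

p = 0.05
tr_rho_O = np.linspace(-1.0, 1.0, 201)   # stand-in for Tr[C_theta(rho) O]
for M in (10, 50, 100, 200):
    scaled = (1 - p) ** (M + 1) * tr_rho_O  # constant offset Tr[O]/d dropped
    print(M, scaled.max() - scaled.min())   # exponentially shrinking range
```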
### Local depolarizing plus unital Pauli channels
In this section we show that some of the intuition gathered from the previous sections can be extended to more general Pauli noise models. Namely, we here consider how noise affects the QFIM for a QNN acting on \(n\) qubits under a fairly general Pauli noise model. As we will show, the entries of the QFIM, and concomitantly its eigenvalues, get exponentially suppressed with the product of the number of noise channels and the noise probability. We note that here we will not attempt to prove that, on average, noise increases the rank of the QFIM. This is due to the fact that there can exist special types of noise and parameter values for which the rank is not increased (see Sections III.1 and III.2). Because of these subtleties, we will leave a more detailed rank analysis for future work.
In what follows we will consider a general noise model where noise channels are interleaved with the unitary channels of the QNN as
\[\widetilde{\mathcal{C}}_{\boldsymbol{\theta}}=\mathcal{N}_{M+1}\circ\mathcal{C}_{\theta_{M}}^{M}\circ\mathcal{N}_{M}\circ\cdots\circ\mathcal{N}_{2}\circ\mathcal{C}_{\theta_{1}}^{1}\circ\mathcal{N}_{1}\,, \tag{37}\]
for some (potentially layer dependent) noise channels \(\mathcal{N}_{m}\), \(m=1,\ldots,M+1\). Moreover, as shown in Fig. 8, we will assume that each noise channel is composed of local depolarizing noise channels acting on each qubit plus some general unital Pauli noise. That is,
\[\mathcal{N}_{m}=\mathcal{N}_{loc}^{Depol}\circ\mathcal{N}_{m}^{P}\,, \tag{38}\]
where \(\mathcal{N}_{m}^{P}\) is an arbitrary unital Pauli quantum channel and \(\mathcal{N}_{loc}^{Depol}\) is a product of local depolarizing channels as given by (16). For simplicity, we will assume that all local depolarizing noise channels have the same probability \(p\). In Appendix F we show how our results can be generalized to the case where they have different (qubit- and layer-dependent) probabilities. Moreover, we note that the order in which \(\mathcal{N}_{loc}^{Depol}\) and \(\mathcal{N}_{m}^{P}\) act in (38) will be irrelevant
for our purposes, as our results can also be shown to hold when the order is reversed (see Appendix F).
For this noise model, we prove the following theorem.
**Theorem 4**.: _Let \(\widetilde{\mathcal{C}}_{\mathbf{\theta}}\) be a noisy channel as in Eqs. (37) and (38), where a Pauli noise channel (composed of a local depolarizing noise acting on each qubit plus a general unital Pauli channel) acts before and after each gate of the QNN. Furthermore, let \(p\) be the probability of the local depolarizing channels as in (15). The entries of the QFIM, and thus its eigenvalues, are exponentially suppressed with the product of \(M\) and \(p\) as \(\mathcal{O}(e^{-2p(M+1)})\)._
See Appendix D for a proof of Theorem 4. This theorem states that under very general noise models, the entries of the QFIM and its eigenvalues vanish exponentially with the noise probability and the number of gates. Crucially, we will have that irrespective of whether the rank of the QFIM is increased or not by the noise, if the circuit is too deep (large \(M\)), or if the noise levels are too high (large \(p\)), the state becomes insensitive to parameter changes. Similarly to Theorem 3, this result sheds new light on the noise-induced barren plateau phenomenon [40, 41].
## IV Numerical results
In this section we present numerical results that extend and complement our theoretical findings. All the simulations presented here have been performed in double precision with the open-source library qibo[63], using the fast qibojit backend [64]. The simulations have been carried out on CPUs, namely IntelCore i7-9750H and AMD Ryzen Threadripper PRO 3955WX cores.
In particular, we consider the problem where the QNN is given by a Hamiltonian Variational Ansatz (HVA) [22, 65] with generators inspired by the transverse-field Ising model with periodic boundary conditions. That is, we have \(\mathcal{G}=\{H_{0},H_{1}\}\), with
\[H_{0}=\sum_{i=1}^{n}\sigma_{i}^{z}\sigma_{i+1}^{z}\,\quad H_{1}=\sum_{i=1}^{n} \sigma_{i}^{x}\, \tag{39}\]
and \(\sigma_{n+1}^{z}\equiv\sigma_{1}^{z}\). Here, the action of the noiseless QNN \(U(\mathbf{\theta})\) is given by
\[U(\mathbf{\theta})=\prod_{l=1}^{L}e^{-i\theta_{l,1}H_{1}}e^{-i\theta_{l,0}H_{0}}\,, \tag{40}\]
where \(L\) is the number of layers. Thus, the QNN has \(M=2L\) parameters. We have fixed the initial state of the QNN to be the state \(\ket{+}^{\otimes n}\). As shown in [21], the DLA associated with this ansatz has dimension \(\dim(\mathfrak{g})=\frac{3}{2}n\), meaning that the QNN can be overparametrized with only a polynomial (linear) number of parameters (or layers). In what follows, we will study how the presence of noise affects the QFIM. In all cases, the computations have been carried out at random points in parameter space.
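While the paper's simulations use qibo, the HVA of Eqs. (39) and (40) is small enough to reproduce in plain numpy. The sketch below (ours; the values of \(n\), \(L\), the seed, and the rank tolerance are arbitrary) builds the circuit, computes the pure-state QFIM by finite differences, and reports its rank and largest eigenvalues at a random parameter point:

```python
# Sketch (ours): transverse-field Ising HVA, Eqs. (39)-(40), and its QFIM.
import numpy as np
from scipy.linalg import expm

n, L = 4, 6                                     # M = 2L parameters
I2 = np.eye(2); Xp = np.array([[0.0, 1], [1, 0]]); Zp = np.diag([1.0, -1.0])

def kron_at(op, i):
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else I2)
    return out

H0 = sum(kron_at(Zp, i) @ kron_at(Zp, (i + 1) % n) for i in range(n))  # ZZ, PBC
H1 = sum(kron_at(Xp, i) for i in range(n))                             # X field
psi0 = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)                # |+>^n

def state(theta):
    psi = psi0
    for l in range(L):                          # per layer: H0 then H1
        psi = expm(-1j * theta[2 * l] * H0) @ psi
        psi = expm(-1j * theta[2 * l + 1] * H1) @ psi
    return psi

def qfim_pure(theta, eps=1e-6):
    psi = state(theta)
    grads = []
    for m in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[m] += eps; tm[m] -= eps
        grads.append((state(tp) - state(tm)) / (2 * eps))
    M = len(theta)
    F = np.zeros((M, M))
    for m in range(M):
        for l in range(M):
            F[m, l] = 4 * np.real(grads[m].conj() @ grads[l]
                                  - (grads[m].conj() @ psi) * (psi.conj() @ grads[l]))
    return F

rng = np.random.default_rng(3)
F = qfim_pure(rng.uniform(0, 2 * np.pi, size=2 * L))
print(np.linalg.matrix_rank(F, tol=1e-6), np.linalg.eigvalsh(F)[-3:])
```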
First, in order to validate Theorem 2 we have simulated the action of global depolarizing channels acting before and after each gate. The results are depicted in Fig. 9,
Figure 8: **Schematic representation of a QNN under the general noise model considered.** Our results are derived for a general noise model where unitary gates are interleaved by noise channels composed of local depolarizing noise channels acting on each qubit plus some general unital Pauli noise.
Figure 9: **Eigenvalues of the QFIM under global depolarizing noise.** Here we consider a problem where an \(n=10\) qubit state is sent through an HVA quantum circuit as in Eq. (40), with \(L=20\) (i.e., with \(M=40\) parameters). In the simulations, global depolarizing noise acts on all qubits before and after each gate. We show the magnitude of the \(m\)-th eigenvalue of the QFIM for different noise values \(p\), at a random point in the landscape. The inset shows the scaling of two non-null eigenvalues with \(p\).
where the eigenspectrum of the QFIM is plotted for \(n=10\) qubits, \(M=40\) parameters, and different noise probabilities. Here we see that the rank of the QFIM is unaffected by global depolarizing noise as indicated by Theorem 2. These numerical results also allow us to verify the exponential decrease of the QFIM eigenvalues with the probability of the depolarizing noise, predicted by Theorem 3 (see inset in Fig. 9).
Second, we have simulated the case where local depolarizing channels act on each qubit before and after every gate in the circuit. The results are shown in Fig. 10, where we plot the eigenspectrum of the QFIM for \(n=10\) qubits, \(M=40\) parameters, and different noise probabilities. In contrast to global depolarizing channels, local noise does increase the rank of the QFIM. This can be observed in the plot from the fact that as soon as \(p\) is larger than zero, all the eigenvalues of the QFIM become non-null (as opposed to the noiseless case). As already discussed, this implies that noise enables new directions in state space. Moreover, here we can see that according to Definition 2, noise can turn an overparametrized QNN (with saturated rank) into an underparametrized one (where the rank of the QFIM is equal to the number of parameters). Notably, Fig. 10 shows that there exists a certain robustness to noise in the overparametrization phenomenon. This is evidenced from the fact that when the probability of noise acting is small (e.g., \(p\sim 10^{-5}\)), there still exists a gap of about two orders of magnitude between the dominant eigenvalues and the "newly-appeared" ones. Hence, for small noise levels the system can be considered to be in a quasi-overparametrized regime, where large eigenvalues of the QFIM (the ones that were previously non-zero) coexist with small eigenvalues (the ones that were previously zero).
This separation in eigenvalue magnitude disappears when noise levels increase. As shown in Fig. 10, for large enough noise probability (e.g., \(p=0.08\)) all the eigenvalues are exponentially vanishing. Moreover, in Fig. 11 we show the scaling of the QFIM entries and eigenvalues with the number of gates and noise probability. As in the case of global depolarizing noise, these decrease exponentially. Taken together, the results in Figs. 10 and 11 numerically confirm the result established in Theorem 4.
Figure 11: **Average magnitude of the QFIM entries and eigenvalues in the presence of local depolarizing noise.** Here we consider a problem where an \(n=10\) qubit state is sent through an HVA quantum circuit as in Eq. (40). In the simulations, local depolarizing noise channels act with the same probability \(p\) on all qubits before and after each gate. We show the average magnitude of the entries of the QFIM and its eigenvalues for different a) number of layers \(L\) and b) noise values \(p\). Bars depict the standard deviation.
Figure 10: **Eigenvalues of the QFIM under local depolarizing noise.** Here we consider a problem where an \(n=10\) qubit state is sent through an HVA quantum circuit as in Eq. (40), with \(L=20\) (i.e., with \(M=40\) parameters). In the simulations, local depolarizing noise channels act with the same probability \(p\) on all qubits before and after each gate. We show the magnitude of the \(m\)-th eigenvalue of the QFIM for different noise values \(p\), at a random point in the landscape.
## V Implications to capacity measures
Let us briefly discuss the implications of the previous results for the capacity measures of Refs. [27; 28]. In particular, we have seen that both these measures are related to the maximum rank of the quantum or classical Fisher information matrix. For simplicity, we will first consider a noiseless QNN that has enough parameters \(M\) to be well beyond the overparametrization threshold. In this case, the rank of the QFIM, and concomitantly its capacities \(D_{1}(\mathbf{\theta})=\text{rank}[F(\rho_{\mathbf{\theta}})]\) and \(D_{2}\), are saturated, and are such that \(D_{1}(\mathbf{\theta}),D_{2}<M\) (see [21]).
The results presented in this work indicate that if hardware noise is present, then the rank of the QFIM can increase (e.g., the QFIM can become full rank). Using \(D_{1}(\mathbf{\theta})\) as capacity measure [28] would imply that the QNN's capacity is increased by noise. Moreover, a similar conclusion can be drawn for the capacity measure of [27] as follows. Indeed, when the noise renders the QFIM full rank, then it becomes invertible and the following inequality holds [66]
\[I^{-1}(\widetilde{\rho}_{\mathbf{\theta}})\geqslant F^{-1}(\widetilde{\rho}_{\mathbf{ \theta}})\,. \tag{41}\]
This implies that the classical Fisher information is also invertible, and thus full rank. Hence, noise also increases the capacity of the QNN when \(D_{2}\) is used as capacity measure.
We remark that it is true that new directions are enabled in state space by the action of noise, and that these can be somewhat partially controlled (see Fig. 5). However, it is also worth recalling that the sensitivity of the noisy state to parameter updates along these directions decreases exponentially with the noise magnitude. In fact, one can expect that in the regime where the noise is sufficiently large, the QFIM can be full rank (i.e., \(\text{rank}[F(\widetilde{\mathbf{\rho}}_{\mathbf{\theta}})]=M\)) but the magnitude of its eigenvalues exponentially small (see Theorem 4). In this scenario, the QNN has a seemingly increased capacity due to noise, but the state is rendered insensitive to parameter changes.
The critical issue here is that the rank of the QFIM is a discrete number that depends on the number of strictly non-zero eigenvalues of the QFIM, but not on their magnitude. Such an issue could potentially be alleviated by considering capacity measures that depend on the magnitude of the eigenvalues. For instance, one could modify the measure of [28] (see Eq. (9)) as
\[D_{1}^{(\epsilon)}(\mathbf{\theta})=\mathbb{E}\left[\sum_{m=1}^{M}\mathcal{Z}^{( \epsilon)}(\lambda^{m}(\mathbf{\theta}))\right]\,, \tag{42}\]
where \(\lambda^{m}(\mathbf{\theta})\) are the eigenvalues of the QFIM for the state \(\rho_{\mathbf{\theta}}\), \(\mathcal{Z}^{(\epsilon)}(x)=0\) for \(x\leqslant\epsilon\), and \(\mathcal{Z}^{(\epsilon)}(x)=1\) for \(x>\epsilon\). As such, one would only account for the eigenvalues of the QFIM that are larger than a given tuneable constant \(\epsilon\). We leave however the study of such a measure for future work, as more research is needed to understand the interplay between the capacity of QNNs and quantum noise.
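A minimal sketch (ours) of this thresholded measure for a single parameter point, i.e., dropping the expectation in Eq. (42):

```python
# Sketch (ours): thresholded capacity count of Eq. (42) at one theta.
import numpy as np

def d1_eps(qfim_eigenvalues, eps):
    lam = np.asarray(qfim_eigenvalues)
    return int(np.sum(lam > eps))       # eigenvalues above the cutoff only

lam = np.array([2.1, 1.3, 0.9, 3e-6, 1e-7])  # large + noise-induced tiny ones
print(d1_eps(lam, eps=0.0))    # naive rank-style count: 5
print(d1_eps(lam, eps=1e-3))   # thresholded capacity: 3
```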
## VI Conclusions
Theoretically understanding the performance of QNNs is a fundamental step to guaranteeing their success in practical realistic scenarios. While there have been tremendous efforts in studying noiseless QNNs, little is known about their performance when hardware noise acts throughout the computation. However, since noise is a defining property of near-term quantum devices, more research is needed to bridge the gap between our understanding of noiseless and noisy QNNs.
In this work we focus on the overparametrization of noisy QNNs. To analyze this phenomenon, we first present a toy model example which showcases how noise acting throughout a quantum circuit can indeed increase the rank of the QFIM. Crucially, this means that noise can transform an overparametrized QNN into an underparametrized one. Moreover, this toy model also illustrates a second general effect that noise exerts on QNNs, namely it produces a decrease in the magnitude of the QFIM eigenvalues and thus in the sensitivity of the quantum state to parameter updates.
We then derive analytical results proving that a noise channel at the end of a quantum circuit cannot increase the rank of the QFIM. This implies that certain noise models, like global depolarizing noise interleaved with unitary gates, or measurement noise, leave the rank of the QFIM unaffected. In turn, this means that the noisy QNN can be overparametrized with the same number of parameters as the noiseless QNN. However, we also prove that global depolarizing channels suppress the QFIM entries and its eigenvalues exponentially with the product of the number of gates and the probability of depolarization. This renders the output of the QNN insensitive to changes in the variational parameters.
Furthermore, we prove that for fairly general Pauli noise models (consisting of local depolarizing channels and unital Pauli noise), the eigenvalues and entries of the QFIM get exponentially suppressed with the circuit depth and the noise probability. Our results point to a combined effect arising from noise, whereby the rank of the QFIM can be increased, but at the same time the magnitude of all the eigenvalues of the QFIM (both the pre-existing ones and the ones that the noise "turned on") get suppressed. Therefore, although noise enables new directions, it also makes the noisy state insensitive to changes in the parameter values.
With the help of numerical simulations we are able to identify three regimes for the overparametrization phenomenon in the presence of noise. The first corresponds to small noise levels. Here, the magnitude of the new non-zero eigenvalues is very small compared to that of pre-existing ones, whose magnitude remains largely unchanged, indicating a certain robustness to noise. In this "quasi-overparametrized" regime, the state is mostly insensitive when the parameters are moved along the directions associated with the newly appeared non-zero eigenvalues. Then, there exists an intermediate regime where the new non-zero eigenvalues are comparable to the previous non-zero ones, but smaller than in the noiseless scenario. Here, we find that some of these new directions are purity altering, meaning that the QNN can map the state to regions where it is more, or less, sensitive to the effects of noise. Finally, in the third regime, all the eigenvalues vanish and the state becomes (almost) completely insensitive to changes in the parameter values.
We then study the implications of our results to current QNN capacity measures proposed in the literature [27; 28]. We find that measures based on the QFIM rank can be misleading when noise is taken into account. In the presence of noise, the QFIM can be transformed from singular to full rank, indicating (according to rank-based measures) that noise can increase a QNN's capacity. However, the eigenvalues of the QFIM are exponentially suppressed, meaning that the state does not significantly change with parameter updates. This dissonance arises from the fact that the eigenvalue magnitude is not accounted for in rank-based measures, only the number of strictly non-zero eigenvalues. These capacity measures should then be modified accordingly, which is left for future work.
To finish, we discuss the impact of our results beyond the overparametrization phenomenon. For instance, our results can be understood as shedding new light on the noise-induced barren plateau phenomenon whereby the optimization landscape of QML models gets exponentially flat with the noise (or the depth) of the circuit [40]. Namely, the flatness in the landscape arises from the state being insensitive to changes in the parameters, as evidenced by the exponentially suppressed eigenvalues of the QFIM. In addition, our results have critical implications for noisy-state quantum sensing [67; 68; 69; 70; 71]. Since the ultimate precision achievable for sensing external parameters depends on the quantum Fisher information through the quantum Cramér-Rao bound [72; 73], our results demonstrate how the utility of a noisy state as a sensor gets degraded by the presence of noise.
###### Acknowledgements.
We acknowledge the Referees of [21] for pointing us in this fruitful research direction. We would also like to thank Max Hunter Gordon, Zoe Holmes, Eddie Schoute and Fred Sauvage for useful conversations. D.G-M. was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under Computational Partnerships program. M.L. acknowledges support by the Center for Nonlinear Studies at Los Alamos National Laboratory (LANL) and the U.S. Department of Energy (DOE), Office of Science, Office of Advanced Scientific Computing Research, under the Accelerated Research in Quantum Computing (ARQC) program. M.C. acknowledges the Quantum Science Center (QSC), a National Quantum Information Science Research Center of the U.S. DOE. This work was also supported by Laboratory Directed Research and Development (LDRD) program of LANL under project numbers 20230049DR and 20230527ECR.
|
2302.11992 | Solving Recurrent MIPs with Semi-supervised Graph Neural Networks | We propose an ML-based model that automates and expedites the solution of
MIPs by predicting the values of variables. Our approach is motivated by the
observation that many problem instances share salient features and solution
structures since they differ only in few (time-varying) parameters. Examples
include transportation and routing problems where decisions need to be
re-optimized whenever commodity volumes or link costs change. Our method is the
first to exploit the sequential nature of the instances being solved
periodically, and can be trained with ``unlabeled'' instances, when exact
solutions are unavailable, in a semi-supervised setting. Also, we provide a
principled way of transforming the probabilistic predictions into integral
solutions. Using a battery of experiments with representative binary MIPs, we
show the gains of our model over other ML-based optimization approaches. | Konstantinos Benidis, Ugo Rosolia, Syama Rangapuram, George Iosifidis, Georgios Paschos | 2023-02-20T15:57:56Z | http://arxiv.org/abs/2302.11992v1 | # Solving Recurrent MIPs with Semi-supervised Graph Neural Networks
###### Abstract
We propose an ML-based model that automates and expedites the solution of MIPs by predicting the values of variables. Our approach is motivated by the observation that many problem instances share salient features and solution structures since they differ only in few (time-varying) parameters. Examples include transportation and routing problems where decisions need to be re-optimized whenever commodity volumes or link costs change. Our method is the first to exploit the sequential nature of the instances being solved periodically, and can be trained with "unlabeled" instances, when exact solutions are unavailable, in a semi-supervised setting. Also, we provide a principled way of transforming the probabilistic predictions into integral solutions. Using a battery of experiments with representative binary MIPs, we show the gains of our model over other ML-based optimization approaches.
Graph Neural Networks, Discrete Optimization, Semi-supervised Learning, MIP
## 1 Introduction
The solution of mixed integer programming (MIP) problems is as important as it is challenging to achieve. Indeed, MIP is employed for optimizing real-world operations, decision processes and systems, including transportation [28, 34], facility location [25], production planning [62, 14], content distribution networks [29], all the way to structured predictions [30, 46] and neural network (NN) training [48]. However, despite the success and wide availability of MIP solvers, the fast and accurate optimization of such problems remains largely elusive. This has given rise to perpetual efforts for approximation algorithms [59] and heuristic solutions [37], and for improving the performance of solvers [54].
Nonetheless, an aspect that has received less attention is that several MIP instances share properties and common structures. In fact, practitioners, more often than not, need to solve repeatedly problems that differ only in few parameters, cf. [51, 11]. For example, routing and other transportation problems are re-solved periodically over the same graph whenever new demands or costs are obtained. This motivates the use of ML in order to explore correlations among problem properties and solution values, which in turn can be leveraged to expedite the solution of new instances, enabling their solution even in real-time as an increasing number of applications require [10]. The idea of ML-assisted optimization is not new per se; it has been successfully applied to configure solvers [44], and design heuristics, e.g., in Branch-and-Bound (BnB) techniques [31]. Importantly, recent studies focused on learning problem structures towards _predicting_ (some) variables [31, 19, 52, 39] and/or active constraints [10, 11], with promising results - we review them in Sec. 2.
In this work we make several steps towards the latter direction by developing an ML-based method for facilitating the solution of _binary_ MIPs which (might) exhibit temporal structure. For example, internet traffic follows a diurnal (and not random) pattern [47; 27]; and the same holds for a plenitude of transportation problems [49]. Motivated by these observations, we propose MIPnet, a Long Short-Term Memory (LSTM)-based probabilistic variable-prediction model [56] that operates on a permutation and scale-invariant embedding space created by a Graph Convolutional Network (GCN) [8]. The GCN exploits the intuitive bipartite graph MIP representation [31; 52], towards encoding the problems' salient properties without manual feature engineering.
Furthermore, unlike prior works, MIPnet employs _semi-supervised_ learning to augment the training data without computational costs, while still benefiting from the robustness of supervised learning when possible. And including this unsupervised loss provides additional gains. Namely, not all variables have the same impact on the MIP's objective and on constraint violation. Indeed, misprediction of some variables can render the problem infeasible, while others affect only the objective (and to different extent). Clearly, while a model using only supervised loss cannot discern such conditions, MIPnet can learn these effects with per-variable granularity, and mitigate their impact accordingly. Our goal is to predict a significant portion of the binary variables so as to facilitate the optimization of the remaining (binary and continuous) variables. The selection of these variables is based on a confidence metric following a principled Bayesian approach that enables tunable variable selection without (expensive) model re-training.
MIPnet is evaluated through a battery of experiments, using both real and synthetic traces, and in diverse real-world problems that include network routing, facility location and TSP, among others. These are representative problems in operational research, and have been used in prior works [10; 11; 52; 19] and references therein. The experiments reveal significant gains over these previous works in terms of variable prediction accuracy, while maintaining high constraint feasibility. We also demonstrate the importance of exploiting the temporal dimension of these problems (whenever it is prevalent), towards improving our predictions. Finally, we stress that we focus on binary MILPs which have applications in a vast range of real-world problems;2 yet, our method paves the road towards tackling general MIPs.
Footnote 2: E.g., 164 of 240 benchmarks in [33] are binary/integer LPs; while \(90\%\) of variables are binary in the rest.
In summary, the contributions of this work are:
* We propose MIPnet, a GCN-based probabilistic model that learns to solve MIPs and selects the most confident variables based on a principled Bayesian approach. MIPnet speeds up significantly the solution of _general binary_ MIPs while exploiting, for the first time, any temporal relation across the problem instances.
* MIPnet employs semi-supervised learning to benefit from labeled data when available, and remain operative when these are hard to obtain. This hybrid approach further allows the model to identify and prioritize variables that are pivotal for feasibility and optimality.
* In a series of experiments, MIPnet is proved to find accurate solutions to a range of different problems, consistently outperforming SotA benchmarks such as [10; 52].
**Paper Structure**. Sec. 2 reviews the related work and Sec. 3 introduces the model, training and inference approach. Sec. 4 presents the experimental results and Sec. 5 concludes the study. Model and dataset details, and additional experiments are provided as supplementary material.
## 2 Related work
**Learning Configurations & Heuristics.** A first thrust of works learns to tune (hyper) parameters of solvers [22; 23; 50; 44; 4; 6]. For example, [22; 23] leverage local search to identify good configurations and [50] developed a pertinent software library. These suggestions are oblivious to, and hence unable to benefit from, the problem structure. A different approach is to learn heuristics. Many works in this context learn how to select BnB variables [1; 21; 31; 2; 35] or BnB nodes [36; 38], and how to run primal heuristics [20]. Similar ideas are used in cutting planes [7] and in optimization over graphs [16; 40; 41]. On the other hand, [12; 45] (and references therein) learn optimization models via alternative decompositions and problem reformulations.
**Identifying Substructures.** The third thrust of related work focuses on learning and exploiting problem substructures. Early efforts include predicting _backdoor variables_[17; 18], i.e., instantiating key variables to increase the problem's tractability. Similarly, assuming the availability of training data, [51; 43] propose sampling methods for predicting active constraints. Along these lines, [10; 11] builds Optimal Classification Trees and Feedforward NNs to predict active constraints and variables. Similarly, [61] assigns variables using a \(k\)-Nearest Neighbours classifier. Finally, [57; 60] use Imitation Learning and RL for splitting ILPs. These works do not employ richer (and more promising) problem representations, nor do they design bespoke NN architectures for the problems at hand.
**Employing GCNs.** To that end, [31; 52; 19] use GCNs where the MIPs are encoded with bipartite graphs. This approach is permutation-invariant, thus can yield generalizable learned policies. In [31] the GCN is used for variable selection in BnB; [52] learns also how to initialize (_Neural Diving_); and [19] uses cross-entropy loss to predict probability distributions for the binary variables so as to expedite branch exploration. On the other hand, [39] proposes a GNN-based unsupervised approach for graph-related problems, where the integral solution is recovered with derandomization through sequential decoding. This interesting approach yields feasible solutions with high probability, yet is not applicable to general MIPs.
Unlike these works (see overview [9]), \(\mathtt{MIPnet}\) considers the _temporal evolution_ of problem's parameters; employs semi-supervised learning to reduce the requirement for training datasets; and follows a Bayesian approach to variable selection through learned distributions with tunable confidence and infeasibility penalization. Appendix J includes further comparisons with prior work.
## 3 Semi-supervised Temporal GCN
### Problem Setting
Consider the binary MILP in standard form:3
Footnote 3: We do not specifically include equality constraints since they can be rewritten as two inequality constraints.
\[\begin{split}\underset{\mathbf{z}}{\text{minimize}}\quad&\mathbf{c}^{T}\mathbf{z}\\ \text{subject to}\quad&\mathbf{A}\mathbf{z}\preceq\mathbf{b},\end{split}\tag{1}\]
where \(\mathbf{z}\!=\![\mathbf{z}^{(\mathbf{b})};\mathbf{z}^{(\mathbf{c})}]\), with \(\mathbf{z}^{(\mathbf{b})}\!\in\!\{0,1\}^{D_{\text{z}}^{(\text{b})}}\) and \(\mathbf{z}^{(\mathbf{c})}\!\in\mathbb{R}^{D_{\text{z}}^{(\text{c})}}\) being the binary and continuous variables, respectively, and \(D_{\text{z}}\!=\!D_{\text{z}}^{(\text{b})}\!+\!D_{\text{z}}^{(\text{c})}\). An instance of (1) is defined by the set of parameters \(\phi\!=\!\{\mathbf{c},\mathbf{A},\mathbf{b}\}\), with \(\mathbf{c}\!\in\!\mathbb{R}^{D_{\text{z}}}\), \(\mathbf{A}\!\in\!\mathbb{R}^{D_{\text{c}}\times D_{\text{z}}}\), \(\mathbf{b}\!\in\!\mathbb{R}^{D_{\text{c}}}\) (notation details in Appendix A).
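As a concrete toy instance of (1) (our example, chosen arbitrarily), the following snippet builds and solves a tiny binary MILP with scipy's MILP interface; in the approach described below, a confident subset of the binary entries of \(\mathbf{z}\) would be predicted and fixed before such a solve:

```python
# Sketch (ours): a tiny instance of problem (1) solved with scipy.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# two binary variables, one continuous variable
c = np.array([1.0, 2.0, -1.0])
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.5])

res = milp(
    c=c,
    constraints=LinearConstraint(A, -np.inf, b),   # A z <= b
    integrality=np.array([1, 1, 0]),               # z1, z2 binary; z3 continuous
    bounds=Bounds(lb=[0, 0, 0], ub=[1, 1, 1]),
)
print(res.x, res.fun)   # optimum: z = (0, 0, 1), objective -1
```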
We focus on applications where (1) needs to be solved recurrently with different parameters that follow a temporal structure. Consider a set of time series \(\mathcal{S}\), where each series \(s\in\mathcal{S}\) consists of \(\mathcal{T}_{s}\) timesteps and each timestep corresponds to an instance with parameters \(\phi_{s,t}\), with \(t\in\mathcal{T}_{s}\). For the set of instances \(\{\phi_{s,t}\}_{s\in\mathcal{S},t\in\mathcal{T}_{s}}\), consider their optimal solutions \(\{\mathbf{z}_{s,t}^{\star}\}_{s\in\bar{\mathcal{S}},t\in\bar{\mathcal{T}}_{s}}\), with \(\bar{\mathcal{S}}\subseteq\mathcal{S}\) and \(\bar{\mathcal{T}}_{s}\subseteq\mathcal{T}_{s}\), i.e., we assume the availability of solutions only for a subset of instances. Our goal is to train a model that learns a globally shared mapping from the problem parameters \(\phi_{s,t}\) to a probabilistic representation \(p(\mathbf{z}_{s,t}^{\star})\) of the optimal solution. Leveraging this mapping, we present a method to select a subset of variables for which the optimal assignment is known with high confidence; the size of the subset is a user-defined parameter. Then, we fix these variables in problem (1) and solve the resulting lower-dimensional sub-problem. As shown in the results section, for every new set of instances \(\{\phi_{s,t}^{\prime}\}_{s\in\mathcal{S}^{\prime},t\in\mathcal{T}_{s}^{\prime}}\), the proposed method is able to expedite the computation of the optimal solution.
We elaborate next on the model architecture, training and inference procedures. For clarity of exposition we drop the time series and timestep subscripts unless necessary.
### Proposed Method
Inspired by [31; 19; 52], we represent an instance of (1) as a bipartite graph by having one node for each variable and constraint, and connecting a variable node to a constraint node iff the variable is active on that constraint. We use a GCN block to map the nodes to an embedding space, followed by an LSTM block to capture the temporal structure of the MILP instances \(\{\phi_{s,t}\}_{s\in\mathcal{S},t\in\mathcal{T}_{s}}\). The model is trained in a semi-supervised manner, exploiting both a supervised and an unsupervised component.
Two key properties of (1) are scale and permutation invariance, i.e., the solution of the problem does not change if we scale (appropriately) the problem, or permute its parameters and variables. If a model does not satisfy these properties, it would need to be re-trained for each variation of a problem. Our architecture satisfies both properties: scale invariance by introducing a normalization step, and permutation invariance by allowing only symmetric functions among nodes.
**Parameter Normalization.** We initially normalize all parameters \(\phi\) of (1) in order to make the training more stable and the model scale-invariant. This means that the model will learn to solve the problem with parameters drawn from a "base" distribution, and every scaled problem version can be solved just by normalizing the parameters. The normalization needs to ensure that the relative weights of the objective parameters and each constraint remain the same;
therefore we normalize them separately. The normalized parameters \(\forall i\leq D_{\text{c}},j\leq D_{\text{z}}\), are computed as follows:
\[a_{i,j}\!=\!\frac{a_{i,j}}{\|[\mathbf{a}_{i}^{T};b_{i}]\|_{p}},\ b_{i}\!=\!\frac{ b_{i}}{\|[\mathbf{a}_{i}^{T};b_{i}]\|_{p}},\ c_{j}\!=\!\frac{c_{j}}{\|\mathbf{c}\|_{p}}, \tag{2}\]
where \(\mathbf{a}_{i}\) is the \(i\)-th row of \(\mathbf{A}\). We use \(p=2\). For problem instances of different sizes the normalization can lead to a distribution shift. We discuss this in Appendix B.
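To make the procedure concrete, the following is a minimal NumPy sketch of the normalization in Eq. (2), assuming a dense representation of \(\mathbf{A}\); the function name is ours.

```python
import numpy as np

def normalize_instance(c, A, b, p=2):
    """Normalize MILP parameters as in Eq. (2): each constraint row
    [a_i; b_i] and the cost vector c are scaled to unit p-norm."""
    row_norms = np.linalg.norm(np.hstack([A, b[:, None]]), ord=p, axis=1)
    A_n = A / row_norms[:, None]
    b_n = b / row_norms
    c_n = c / np.linalg.norm(c, ord=p)
    return c_n, A_n, b_n
```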
Model Architecture.The model builds on the bipartite graph representation of MILPs. Formally, consider a graph \(G=(\mathcal{V},\mathcal{E},\mathbf{A}^{\text{(adj)}})\) defined by the set of nodes \(\mathcal{V}\), with \(|\mathcal{V}|=D_{\text{z}}+D_{\text{c}}\), the set of edges \(\mathcal{E}\), with \(|\mathcal{E}|\) equal to the number of nonzero entries in the constraint matrix \(\mathbf{A}\), and the graph adjacency matrix \(\mathbf{A}^{\text{(adj)}}=[\mathbf{I}_{D_{\text{z}}},\mathbf{A}^{T};\mathbf{A},\mathbf{I}_{D_{\text{c}}}]\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\), i.e., the (non-binary) adjacency matrix contains the coefficients from the constraint matrix \(\mathbf{A}\) and self-loops for all nodes given by the identity matrices. One set of \(D_{\text{z}}\) nodes in the bipartite graph corresponds to variables and the other set of \(D_{\text{c}}\) nodes to constraints. An edge \(e_{i,j}\in\mathcal{E}\) indicates that variable \(j\) appears in the \(i\)-th constraint.
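As a small illustration, a dense NumPy sketch of how this adjacency matrix could be assembled (sparse formats would be preferable at scale; the function name is ours):

```python
import numpy as np

def bipartite_adjacency(A):
    """Build A_adj = [[I_Dz, A^T], [A, I_Dc]]: nonzero a_ij connects
    variable node j to constraint node i, and the identity blocks add
    self-loops for every node."""
    D_c, D_z = A.shape
    top = np.hstack([np.eye(D_z), A.T])
    bottom = np.hstack([A, np.eye(D_c)])
    return np.vstack([top, bottom])  # shape (D_z + D_c, D_z + D_c)
```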
The nodes of the graph are associated with a set of features \(\mathbf{U}=\mathbf{U}(\phi)\in\mathbb{R}^{|\mathcal{V}|\times D_{\text{u}}}\) derived by the problem parameters \(\phi\), where \(D_{u}\) is the feature dimension. The feature vector of each node is constructed by linearly combining in a permutation invariant way all the variable and constraint parameters that the node is related to. We refer the reader to Appendix C for a detailed description.
The graph representation of (1) motivates using a GCN [42]. A GCN with \(L\) layers is defined as:
\[\mathbf{X}^{(l+1)}\!=\!g(\mathbf{X}^{(l)},\mathbf{A}^{\text{(adj)}};\theta_{ g}^{(l)}),\ \forall\ l=0,\ldots,L-1, \tag{3}\]
with \(\mathbf{X}^{(0)}=\mathbf{U}\) and \(\theta_{g}^{(l)}\in\Theta\) the learnable parameters of each layer. The adjacency matrix defines the graph connectivity and determines how information is aggregated at each layer, while the number of layers defines how many hops away a node receives information from. Note that the GCN is applied to each node in parallel and the resulting embedding is by construction permutation invariant. We follow the GCN propagation rule as defined in [42; 8], with a few (optional) modifications between each layer of the GCN as in [52]: (i) we include a nonlinearity after each layer, (ii) we include skip connection inputs and (iii) we apply layer normalization [5] at the output of each layer.
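For illustration, a minimal PyTorch sketch of one such layer with the three modifications; this is a simplified stand-in for the propagation rule of [42; 8] (plain weighted-adjacency aggregation), and the class name is ours.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One propagation step of Eq. (3) with (i) a nonlinearity,
    (ii) a skip-connection input, and (iii) layer normalization."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
        self.skip = nn.Linear(d_in, d_out)   # skip-connection input
        self.norm = nn.LayerNorm(d_out)

    def forward(self, x, a_adj):
        # aggregate neighbour features through the weighted adjacency
        h = a_adj @ self.lin(x)
        return self.norm(torch.relu(h) + self.skip(x))
```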
The output of the final layer \(\mathbf{X}=\mathbf{X}^{(L)}\in\mathbb{R}^{|\mathcal{V}|\times D_{\text{x}}}\) is the final embedding of each node, which is used as input to the next NN block of the model architecture. We use an LSTM network to capture the temporal evolution of (1). The inputs to the LSTM at time \(t\) are the embedded nodes \(\mathbf{X}_{s,t}\) as well as the previous network output \(\mathbf{h}_{s,t-1}\), i.e., \(\mathbf{h}_{s,t}=h(\mathbf{h}_{s,t-1},\mathbf{X}_{s,t};\theta_{h})\), with \(\theta_{h}\in\Theta\) the learnable parameters of the LSTM. The LSTM retains the permutation invariance since it acts on the feature dimension. The network output at time \(t\), i.e., \(\mathbf{h}_{s,t}\), is appropriately projected to the parameters of the selected probabilistic representation of each variable using a multilayer perceptron (MLP). In general, the projection has the following form:
\[\psi_{s,t,j}=f(\mathbf{x}_{s,t,j},\mathbf{h}_{s,t};\theta_{f}), \tag{4}\]
where \(\theta_{f}\in\Theta\) are the learnable MLP parameters and \(\mathbf{x}_{s,t,j}=[\mathbf{X}_{s,t}]_{j}\) are the embedded features of the \(j\)-th variable node that act as a skip connection, while the constraint nodes are discarded. Note that the mapping layer can apply any function in the feature dimension of \(\mathbf{h}_{s,t}\) but only symmetric functions across the node dimension. Here, \(\psi\) represents a generic set of parameters for the binary variables that may differ based on the selected model output. If (1) includes continuous variables, the model outputs their values and not a distribution, i.e., \(\psi_{s,t}\!=\!\{p(\mathbf{z}_{s,t}^{(\mathrm{b})}),\hat{\mathbf{z}}_{s,t}^{(\mathrm{c})}\}\). In particular, we leverage two MLPs to project the output of the LSTM to the estimates of binary and continuous variables (for a discussion about the prediction of continuous variables please see Appendix D). The probabilistic representation of binary variables becomes concrete in Sec. 3.3.
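A hedged PyTorch sketch of such a projection head follows; the module and dimension names are hypothetical, and softplus is one possible choice (our assumption) for keeping the Beta parameters positive.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OutputHeads(nn.Module):
    """Sketch of Eq. (4): map each variable embedding x_j, together with
    the LSTM output h, to Beta parameters (binary variables) or to a
    value (continuous variables)."""
    def __init__(self, d_x, d_h, d_hidden):
        super().__init__()
        self.binary_head = nn.Sequential(
            nn.Linear(d_x + d_h, d_hidden), nn.ReLU(), nn.Linear(d_hidden, 2))
        self.cont_head = nn.Sequential(
            nn.Linear(d_x + d_h, d_hidden), nn.ReLU(), nn.Linear(d_hidden, 1))

    def forward(self, x_bin, x_cont, h):
        # h is shared across variable nodes, so the mapping is symmetric
        hb = h.unsqueeze(0).expand(x_bin.size(0), -1)
        hc = h.unsqueeze(0).expand(x_cont.size(0), -1)
        ab = F.softplus(self.binary_head(torch.cat([x_bin, hb], dim=-1)))
        alpha, beta = ab.unbind(dim=-1)
        z_cont = self.cont_head(torch.cat([x_cont, hc], dim=-1)).squeeze(-1)
        return alpha, beta, z_cont
```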
### Training
A common characteristic of MILP applications is the difficulty in obtaining labels. Specifically, to obtain one label, one has to solve an instance of the MILP, which, depending on the use case, may take from minutes to hours or even days. It is therefore of high interest for the generality of MIPnet to be able to train with a semi-supervised loss, exploiting labeled data and benefiting from unlabeled data.
Supervised Setting.For the instances that have available labels (optimal solutions) we can train the model with an MLE approach. A natural choice for a probabilistic representation of binary variables is the Bernoulli distribution \(\text{Ber}(z;\pi)\) [52; 60], where \(\pi\) is the Bernoulli parameter. Given a Bernoulli representation of a binary variable it is straightforward to assign a value to the variable, but hard to quantify how accurate or confident this prediction is (see Appendix E for a discussion). A different approach was proposed in [52], which used the SelectiveNet [32] and learned two sets of binary variables: one set indicates whether a variable is going to be selected and the other its binary value (to be used if selected). Here, we introduce a principled way to define how accurate or confident a prediction is via its own variance. We leverage this notion of variance and follow a Bayesian approach by using the Beta distribution \(\text{Beta}(\pi;\alpha,\beta)\) to model the Bernoulli parameters, where \(\alpha\) and \(\beta\) are the parameters of the Beta distribution. This approach can be readily used as a statistically principled selection method, and is a key advantage of our method.
Now, we elaborate on the model output (4). The model yields the Beta distribution parameters \(\psi\!=\!\{\alpha,\beta\}\) (we ignore the continuous variables and drop subscripts), and for each variable with optimal value \(z^{\star}\) the likelihood is computed as \(\int_{0}^{1}\text{Beta}(\pi;\alpha,\beta)\text{Ber}(z^{\star};\pi)d\pi\), i.e., we need to integrate over all values of \(\pi\). Unfortunately, this integral does not have a closed-form solution; yet, we can approximate it efficiently using a quadrature method (Monte-Carlo sampling can also be used, but is not as efficient). We use the Clenshaw-Curtis quadrature method [15], where the function to be integrated is evaluated at the \(K\) roots of a Chebyshev polynomial and the integral is approximated by a weighted average of the integrand \(I(\pi)\!=\!\text{Beta}(\pi;\alpha,\beta)\text{Ber}(z^{\star};\pi)\) at specific predefined points \(\{\bar{\pi}_{k}\}_{k=0}^{K/2}\), i.e.,
\[\int_{0}^{1}\text{Beta}(\pi;\alpha,\beta)\text{Ber}(z^{\star};\pi)d\pi\!=\! \int_{0}^{1}\!I(\pi)d\pi\approx\mathbf{w}^{T}I(\bar{\pi}),\]
where \(\mathbf{w}\in\mathbb{R}^{K/2+1}\) is independent of the function (precomputed) and \(I(\bar{\pi})=[I(\bar{\pi}_{0}),\ldots,I(\bar{\pi}_{K/2})]^{T}\). For details of the method we refer the reader to Appendix F.1. With the above formulation the negative log-likelihood (NLL) of all the instances becomes:
\[\ell_{\text{sup}}(\mathbf{z}^{\star},\psi)\approx-\sum_{s=1}^{|\mathcal{S}|}\sum_{t=1}^{|\mathcal{T}_{s}|}\sum_{j=1}^{D_{z}^{(\mathrm{b})}}\log\left(\mathbf{w}^{T}I(\bar{\pi}_{s,t,j})\right), \tag{5}\]
where \(\psi\!=\!\{\boldsymbol{\alpha}_{s,t},\boldsymbol{\beta}_{s,t}\}\).
In Appendix F.2 we propose an additional regularization term, while in Appendix F.3 we describe a weighted version of (5) for imbalanced labels.
Unsupervised Setting.Apart from (5), we consider an unsupervised loss that is optimized jointly and is essentially the objective of (1) with a penalty term for the constraint violations:
\[\ell_{\text{unsup}}=\sum_{s=1}^{|\mathcal{S}|}\sum_{t=1}^{|\mathcal{T}_{s}|} \mathbf{c}_{s,t}^{T}\hat{\mathbf{z}}_{s,t}+\lambda_{\text{c}}\|\left(\mathbf{ A}_{s,t}\hat{\mathbf{z}}_{s,t}-\mathbf{b}_{s,t}\right)_{+}\|^{2}, \tag{6}\]
where \(\lambda_{\text{c}}\geq 0\) and \((x)_{+}=\max(x,0)\). Here, \(\hat{\mathbf{z}}_{s,t}\) represents a prediction based on the learned Beta distribution that is computed as \(\hat{z}=\sigma\left(\frac{\alpha}{\alpha+\beta}\right)\), where \(\sigma(\cdot)\) is the sigmoid function and \(\frac{\alpha}{\alpha+\beta}\) is the mean of the Beta. Applying the sigmoid to the mean gives values close to the limits, which we use as a proxy for the true binary values, since exact rounding would block the gradient flow. Further, in the unsupervised loss we include the continuous variables that are directly given by the network output (4), i.e., \(\psi_{s,t}=\{\boldsymbol{\alpha}_{s,t},\boldsymbol{\beta}_{s,t},\hat{\mathbf{z}}_{s,t}^{(c)}\}\) where \(\boldsymbol{\alpha}_{s,t},\boldsymbol{\beta}_{s,t}\in\mathbb{R}^{D^{(b)}_{z}}\) and \(\hat{\mathbf{z}}_{s,t}^{(c)}\in\mathbb{R}^{D^{(c)}_{z}}\).
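A minimal PyTorch sketch of Eq. (6), assuming dense tensors (variable names are ours):

```python
import torch

def unsupervised_loss(alpha, beta, z_cont, c, A, b, lam_c=1.0):
    """Eq. (6): objective value of the relaxed assignment plus a squared
    penalty on constraint violations. The binary proxy z_bin is the
    sigmoid of the Beta mean, which keeps gradients flowing."""
    z_bin = torch.sigmoid(alpha / (alpha + beta))
    z = torch.cat([z_bin, z_cont])
    violation = torch.clamp(A @ z - b, min=0.0)
    return c @ z + lam_c * violation.pow(2).sum()
```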
The unsupervised loss, as explained earlier, allows us to work with instances for which labels are hard to obtain and, at the same time, provides the means to learn the impact of erroneous predictions (with per-variable granularity) on
Figure 1: MIPnet – At each time step \(t\), we construct the features \(\mathbf{U}_{s,t}\) of all nodes based on the MILP temporal parameters, feed them into a GCN that produces the final embedding \(\mathbf{X}_{s,t}\). The embeddings are fed to an LSTM whose output is projected to the set of parameters \(\psi_{s,t}\) and the corresponding loss \(\ell_{s,t}\) is computed.
the objective function and constraint violation. This, in turn, proves to be an effective practical control for mitigating the effect of the (few) mispredictions that MIPnet yields. Note also that the continuous variables are optimized jointly with the binary ones through this loss.
Putting the above together, the overall loss is given by
\[\ell=\ell_{\text{sup}}+\lambda\ell_{\text{unsup}}, \tag{7}\]
with \(\lambda\geq 0\). Note that if a label is not available, the supervised loss of that instance is masked and we compute only the unsupervised loss. Fig. 1 summarizes the architecture.
### Inference and Variable Selection
Our prediction problem is very challenging since even mispredicting a few variables can render the solution infeasible. Hence, instead of targeting an all-or-nothing solution (as in [11], see Table 4), we apply a _variable selection_ method where we generate a recommended assignment for each variable associated with a notion of confidence. By fixing the value of high-confidence variables and calling an MILP solver on the reduced problem, we speed up the discovery of high-quality solutions, as shown also in [19; 52; 11; 61]. Differently from these works, however, we take a formal Bayesian approach to this variable selection problem which, additionally, does not require retraining.
Given a trained model and a new instance \(\phi^{\prime}=\{\mathbf{c},\mathbf{A},\mathbf{b}\}\), the model outputs the parameters \(\psi^{\prime}=\{\boldsymbol{\alpha},\boldsymbol{\beta},\hat{\mathbf{z}}^{(\mathrm{c})}\}\) with \(\boldsymbol{\alpha},\boldsymbol{\beta}\in\mathbb{R}^{D^{(\mathrm{b})}_{z}}\) and \(\hat{\mathbf{z}}^{(\mathrm{c})}\in\mathbb{R}^{D^{(\mathrm{c})}_{z}}\). For each of the \(D^{(\mathrm{b})}_{z}\) binary variables we compute the mean and the standard deviation of its Beta distribution, given by \(\mu_{j}=\frac{\alpha_{j}}{\alpha_{j}+\beta_{j}}\) and \(\sigma_{j}=\sqrt{\frac{\alpha_{j}\beta_{j}}{(\alpha_{j}+\beta_{j})^{2}(\alpha_{j}+\beta_{j}+1)}}\). Since a confident output has mean close to 0 or 1 and low variance, we define the score \(s_{j}=\min(\mu_{j},1-\mu_{j})+\gamma\sigma_{j}\), for \(j=1,\ldots,D^{(\mathrm{b})}_{z}\), and we select a desired percentage \(\rho\) of the variables with the lowest score. Parameter \(\gamma\geq 0\) is tuned on a validation set. For a given set of selected variables, we fix their values by rounding their mean, and optimize the remaining binary and continuous variables. Following this approach, we solve the remaining problem only once, although one has the option to take multiple samples from the learned distribution, solve the remaining problems multiple times and keep the best solution [52].
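For illustration, a NumPy sketch of this selection rule (the function name is ours, and ties are broken arbitrarily by the sort):

```python
import numpy as np

def select_and_fix(alpha, beta, rho, gamma=0.0):
    """Score each binary variable by s_j = min(mu_j, 1 - mu_j) + gamma * sigma_j
    and return the indices of the rho-fraction with the lowest score,
    together with their rounded assignments."""
    mu = alpha / (alpha + beta)
    sigma = np.sqrt(alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1)))
    score = np.minimum(mu, 1.0 - mu) + gamma * sigma
    k = int(np.ceil(rho * len(mu)))
    idx = np.argsort(score)[:k]            # most confident variables
    return idx, np.rint(mu[idx]).astype(int)
```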
## 4 Experiments
We evaluate MIPnet using carefully-selected problems and datasets, as detailed in Appendix G. For the routing problem we used the dataset Geant[53] collecting real-world _temporal_ traffic data from Internet Service Providers; for the facility-loc we used the real-world topology from the nobel-germany dataset [53]; and for the caching problem we used actual requests from the MovieLens datasets [24]. We generated synthetic data for the tsp, energy-grid and revenue-max problems. We ran every experiment 3 times in order to evaluate model variance. Appendices H and I include evaluation details and additional experiments, respectively.
### Evaluating Accuracy & Feasibility
We assess the accuracy, optimality and feasibility of MIPnet against various state-of-the-art methods. Namely, we compare our results with Neural Diving (N-div) [52], a GCN model that selects variables using the SelectiveNet approach [32]; the method proposed in [11], an approach that learns a classifier from problem parameters to complete solutions (based on the unique solutions that appeared in the training set); and a variant of our model trained only with the supervised loss, MIPnet-sup, to evaluate the importance of the unsupervised loss component. As a further ablation, we also evaluate our method when trained only with the unsupervised loss which, however, did not achieve satisfactory performance (see Appendix I). As a final method we use the MIP solver SCIP [58]. For all problems except for routing, SCIP finds an optimal solution for all instances within the \(15\)-minute time limit. On the other hand, for routing, SCIP finds an optimal solution for \(99.73\%\) of instances within the \(60\)-minute time limit, which highlights the problem's complexity. These optimal solutions - and the suboptimal ones when optimal solutions are not available - serve as _ground truth_, and the results of all other methods are presented relative to them.
For a problem with \(D^{(b)}_{z}\) binary variables, we fix the \(\left\lceil\rho D^{(b)}_{z}\right\rceil\) most-confident ones, where \(\rho\in(0,1)\), and use a solver for the exact optimization of the remaining subproblem, subject to MIPnet's assignments. Table 1 summarizes the percentage of correct predictions on the assigned variables (_accuracy_) across all instances, while Table 2 presents the percentage of infeasible instances. Table 3 compares the optimality gap of the different approaches, which is computed using _only_ the set of feasible solutions (denoted with \(\mathcal{N}_{f}\)), and is defined as the average difference between
a method's objective (obj) and SCIP's optimal value (obj\({}^{*}\)), i.e., \(\texttt{opt\_gap}=\frac{1}{|\mathcal{N}_{f}|}\sum_{\texttt{obj}\in\mathcal{N}_{ f}}(\texttt{obj}-\texttt{obj}^{*})\). We repeat the experiments for different \(\rho\) values.4
Footnote 4: In MIPnet, we can decide on a different portion of assigned variables without model retraining. This stands in stark contrast to N-div, where the variable selection percentage is predefined.
We observe that MIPnet matches or outperforms N-div in almost all cases, while exhibiting more consistent behaviour across runs. Compared to MIPnet-sup, it is either on par or better, especially in feasibility. A notable observation is that accuracy by itself is not adequate to validate the performance of a model. For example, in the routing and tsp datasets we observe that accuracy values well above \(90\%\) can lead to (almost) \(100\%\) infeasibility, while in facility-loc accuracy values \(>99\%\) can yield a significant portion of infeasible instances - especially for MIPnet-sup and N-div. This is associated with the complexity of the constraints and highlights the importance of using an unsupervised component in model training.
Since [11] is an approach that selects a complete solution, i.e., there is no option for selecting \(\rho\%\) of variables, we present the results of this method separately. As shown in Table 4, this all-or-nothing approach learns feasible solutions for tsp, facility-loc, and caching (no variance was observed across runs). In these problems the constraints are time-invariant (do not change across instances). Hence, optimal solutions from the training dataset are feasible for the test dataset, and indeed we see that [11] finds feasible solutions. Alas, these are often far from optimal, e.g., for tsp and caching the average gap is \(19.61\%\) and \(52.49\%\), respectively. On the other hand, for revenue-max where the constraints change across instances, optimal solutions from the training dataset are infeasible for new problem instances and the method returns infeasible variable assignments. That is, this approach does not generalize to solutions not seen in the training dataset. Finally, we highlight that we were able to run the approach from [11] only on a smaller version of the routing dataset, which alludes to potential implementation difficulties of this approach - see Appendix I.
\begin{table}
[Tabular data not recoverable from the source.]
\end{table}
Table 1: Accuracy (mean \(\pm\) std)\(\%\) of MIPnet vs. N-div for \(\rho\in\{30,40,50,60,70\}\%\) on the routing, facility-loc, tsp, energy-grid, revenue-max and caching problems. Bold indicates the best method (higher values are better).
\begin{table}
[Tabular data not recoverable from the source.]
\end{table}
Table 2: Infeasible instances (mean \(\pm\) std)\(\%\) of MIPnet vs. N-div.
### Evaluating the Temporal Component of MIPnet
In this section we explore the benefit of leveraging the temporal aspect of the problem instances. Figure 2 illustrates the differences in accuracy and infeasibility when MIPnet employs an MLP instead of an LSTM, for three problems with different temporal properties. Namely, the real-world demands in routing follow a diurnal pattern; the demands in facility-loc have a strong time dimension; and the edge costs in tsp are generated by adding random perturbations to successive instances.
We observe that the LSTM-based MIPnet consistently, and most often substantially, outperforms its MLP-based version both in terms of mean and variance values. In other words, all else being equal in the model and inference approach, the addition of the LSTM brings substantial gains without requiring any compromises (the MLP never outperforms the LSTM). This finding underlines the importance of leveraging the temporal evolution pattern of the problem parameters, a hitherto overlooked aspect in all prior works.
### Trading Off Accuracy and Solution Speed via \(\rho\)
Finally, we explore how the percentage of assigned variables, \(\rho\), affects the solution time. These experiments also reveal the gains of MIPnet in terms of solution speed compared to off-the-shelf solvers. In detail, we denote with \(t_{p}\) the time the SCIP solver requires for solving the optimization problem, where \(p=1-\rho\) is the percentage of variables that are _not_ assigned by MIPnet (hence, need to be optimized by the solver). Figure 3 reports the normalized time defined as the ratio \(t_{p}/t_{100}\), i.e., the ratio between the solver (SCIP) time when a fraction \(\rho\) of the variables is assigned using MIPnet, and the time for solving the entire problem with SCIP.5 As expected, the time decreases when a higher percentage of variables is assigned. For all problems an average speedup of at least \(2\times\) is obtained by setting \(\rho=0.7\), while for revenue-max, energy-grid, and caching we achieve a \(2\times\) speedup already for \(\rho=0.3\), i.e., when assigning only \(30\%\) of the variables, and a maximum \(5\times\) speedup. These experiments demonstrate the solution speed gains of MIPnet compared to a solver, and the importance of our variable selection threshold, which can indeed be used to trade off accuracy with solution speed.
Footnote 5: These times are calibrated in order to account for the different computational environments we have used in the experiments.
## 5 Conclusions
Solving MIPs via ML methods can revolutionize the solution of large-scale NP-hard problems and make MIP available to new application domains. This potential is hampered by unprecedented challenges: the values of variables are inherently correlated and need to be jointly predicted; even tiny value assignment errors may lead to infeasibility or unbounded
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline
 & **Accuracy** & **Infeasibility** & **Optimality gap** \\ \hline
routing & - & - & - \\
facility-loc & 98.13 & 0.00 & 1.78 \\
tsp & 86.29 & 0.00 & 19.61 \\
revenue-max & 99.22 & 100.00 & - \\
caching & 99.91 & 0.00 & 52.49 \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Accuracy, infeasibility and optimality gap of [11].
\begin{table}
[Tabular data not recoverable from the source.]
\end{table}
Table 3: Optimality gap (mean \(\pm\) std)\(\%\) of MIPnet vs. N-div. Bold indicates the best method (lower values are better).
suboptimality; training data are not readily available; and generalizing trained models is highly non-trivial. In this work, we propose MIPnet, a systematic framework for tackling general binary MILPs that may exhibit temporal structure, which: employs GCNs to automate embeddings and capture correlation among variables; uses an LSTM to account for hitherto-overlooked time dependencies across problem instances (a regularly-encountered operational condition); and includes a principled Bayesian assignment strategy with user-tunable confidence. We follow a semi-supervised approach to account for the lack of solved instances, a practical limitation when dealing with NP-hard problems, and also to identify key variables impacting feasibility and optimality. Our work fills (some of) the gaps in the literature and, we believe, contributes towards establishing ML as an off-the-shelf MIP technology, including for non-binary and nonlinear problems that were not addressed by the proposed approach.
Figure 3: Calibrated running time for various \(\rho\) values. \(t_{p}\) is the solution time when a fraction \(\rho\) of the variables is assigned by MIPnet and the remaining \(p\!=\!1\!-\!\rho\) fraction is assigned by the solver. \(t_{100}\) is the time when using only the solver.
Figure 2: Comparison of MIPnet w/ and w/o a temporal component.
## References
* [1] A. H. Land, and A. G. Doig. An Automatic Method of Solving Discrete Programming Problems. _Econometrica_, 28(3):497-520, 1960.
* [2] A. M. Alvarez. A Machine Learning-based Approximation of Strong Branching. _INFORMS Journal on Computing_, 29(1):185-195, 2017.
* [3] Akshay Agrawal, Brandon Amos, Shane Barratt, Stephen Boyd, Steven Diamond, and J Zico Kolter. Differentiable convex optimization layers. _Advances in Neural Information Processing Systems (NeurIPS)_, 2019.
* [4] Carlos Ansotegui, Yuri Malitsky, Horst Samulowitz, Meinolf Sellmann, and Kevin Tierney. Model-based genetic algorithms for algorithm configuration. In _Twenty-Fourth International Joint Conference on Artificial Intelligence_, 2015.
* [5] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. _arXiv preprint arXiv:1607.06450_, 2016.
* [6] Maria-Florina Balcan, Travis Dick, Tuomas Sandholm, and Ellen Vitercik. Learning to branch. In _International Conference on Machine Learning (ICML)_, pages 344-353, 2018.
* [7] Maria-Florina F Balcan, Siddharth Prasad, Tuomas Sandholm, and Ellen Vitercik. Sample complexity of tree search configuration: Cutting planes and beyond. _Advances in Neural Information Processing Systems (NeurIPS)_, 2021.
* [8] Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. _arXiv preprint arXiv:1806.01261_, 2018.
* [9] Yoshua Bengio, Andrea Lodi, and Antoine Prouvost. Machine learning for combinatorial optimization: a methodological tour d'horizon. _European Journal of Operational Research_, 290(2):405-421, 2021.
* [10] Dimitris Bertsimas and Bartolomeo Stellato. Online Mixed-integer Optimization in Milliseconds. _arXiv preprint arXiv:1907.02206_, 2019.
* [11] Dimitris Bertsimas and Bartolomeo Stellato. The Voice of Optimization. _Machine Learning_, 110(2):249-277, 2021.
* [12] Pierre Bonami, Andrea Lodi, and Giulia Zarpellon. Learning a classification of mixed-integer quadratic programming problems. In _International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research_, pages 595-604. Springer, 2018.
* [13] Francesco Borrelli, Alberto Bemporad, and Manfred Morari. _Predictive control for linear and hybrid systems_. Cambridge University Press, 2017.
* [14] Zhi-Long Chen. Integrated production and outbound distribution scheduling: review and extensions. _Operations research_, 58(1):130-148, 2010.
* [15] Charles W Clenshaw and Alan R Curtis. A method for numerical integration on an automatic computer. _Numerische Mathematik_, 2(1):197-205, 1960.
* [16] Hanjun Dai, Bo Dai, and Le Song. Discriminative embeddings of latent variable models for structured data. In _International conference on machine learning_, pages 2702-2711, 2016.
* [17] Bistra Dilkina, Carla P Gomes, Yuri Malitsky, Ashish Sabharwal, and Meinolf Sellmann. Backdoors to combinatorial optimization: Feasibility and optimality. In _International Conference on Integration of Constraint Programming, Artificial Intelligence, and Operations Research_, pages 56-70. Springer, 2009.
* [18] Bistra Dilkina, Carla P Gomes, and Ashish Sabharwal. Backdoors in the context of learning. In _International Conference on Theory and Applications of Satisfiability Testing_, pages 73-79. Springer, 2009.
* [19] Jian-Ya Ding, Chao Zhang, Lei Shen, Shengyin Li, Bing Wang, Yinghui Xu, and Le Song. Accelerating primal solution findings for mixed integer programs based on solution prediction. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 34, pages 1452-1459, 2020.
* [20] E. B. Khalil, B. Dilkina, G. L. Nemhauser, S. Ahmed, and Y. Shao. Learning to Run Heuristics in Tree Search. In _International Joint Conference on Artificial Intelligence (IJCAI)_, 2017.
* [21] E. B. Khalil, et al. Learning to Branch in Mixed Integer Programming. In _Proceedings of the AAAI conference on artificial intelligence_, page 724-731, 2016.
* [22] F. Hutter, et al. ParamILS: An Automatic Algorithm Configuration Framework. _Journal of Artificial Intelligence Research_, 36(1), 2009.
* [23] F. Hutter, et al. Sequential Model-based Optimization for General Algorithm Configuration. _Learning and intelligent optimization_, pages 507-523, 2011.
* [24] F. M. Harper, and J. A. Konstan. The MovieLens Datasets: History and Context. _ACM Transactions on Interactive Intelligent Systems_, 5:19:1-19:2, 2015.
* [25] R. Z. Farahani and M. Hekmatfar. Facility Location: Concepts, Models, Algorithms and Case Studies. _Springer_, 2009.
* [26] Nuno P. Faísca, Vivek Dua, and Efstratios N. Pistikopoulos. _Multiparametric Linear and Quadratic Programming_, chapter 1, pages 1-23. John Wiley & Sons, Ltd, 2007. ISBN 9783527631216. doi: [https://doi.org/10.1002/9783527631216.ch1](https://doi.org/10.1002/9783527631216.ch1). URL [https://onlinelibrary.wiley.com/doi/abs/10.1002/9783527631216.ch1](https://onlinelibrary.wiley.com/doi/abs/10.1002/9783527631216.ch1).
* [27] G. Barlacchi, et al. A Multi-source Dataset of Urban Life in the City of Milan and the Province of Trentino. _Scientific Data_, 2, 2015.
* [28] G. Laporte. Fifty Years of Vehicle Routing. _Transportation Science_, pages 408-416, 2009.
* [29] G. S. Paschos, et al. Cache Optimization Models and Algorithms. _Found. Trends Commun. Inf. Theory_, 16(3-4):156-343, 2019.
* [30] G. Vivek, K. Srikumar, and D. Roth. On amortizing inference cost for structured prediction. _Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning_, pages 1114-1124, 2012.
* [31] Maxime Gasse, Didier Chetelat, Nicola Ferroni, Laurent Charlin, and Andrea Lodi. Exact combinatorial optimization with graph convolutional neural networks. _Advances in Neural Information Processing Systems (NeurIPS)_, 2019.
* [32] Yonatan Geifman and Ran El-Yaniv. Selectivenet: A deep neural network with an integrated reject option. In _International Conference on Machine Learning (ICML)_, pages 2151-2159, 2019.
* [33] Ambros Gleixner, Gregor Hendel, Gerald Gamrath, Tobias Achterberg, Michael Bastubbe, Timo Berthold, Philipp M. Christophel, Kati Jarck, Thorsten Koch, Jeff Linderoth, Marco Lubbecke, Hans D. Mittelmann, Derya Ozyurt, Ted K. Ralphs, Domenico Salvagnin, and Yuji Shinano. MIPLIB 2017: Data-Driven Compilation of the 6th Mixed-Integer Programming Library. _Mathematical Programming Computation_, 2021. doi: 10.1007/s12532-020-00194-3. URL [https://doi.org/10.1007/s12532-020-00194-3](https://doi.org/10.1007/s12532-020-00194-3).
* [34] B. Gopalakrishnan and E. L. Johnson. Airline Crew Scheduling: State-of-the-Art. _Annals of Operations Research_, 140(1):305-337, 2005.
* [35] Christoph Hansknecht, Imke Joormann, and Sebastian Stiller. Cuts, primal heuristics, and learning to branch for the time-dependent traveling salesman problem. _arXiv preprint arXiv:1805.01415_, 2018.
* [36] He He, Hal Daume III, and Jason M Eisner. Learning to search in branch and bound algorithms. _Advances in Neural Information Processing Systems (NeurIPS)_, 2014.
* [37] I. Boussaid, et al. A Survey on Optimization Metaheuristics. _Information Sciences_, 237:82-117, 2013.
* [38] J. Song, R. Lanka, A. Zhao, Y. Yue, and M. Ono. Learning to Search via Retrospective Imitation. In _arXiv:1804.00846_, 2018.
* [39] Nikolaos Karalias and Andres Loukas. Erdos goes neural: an unsupervised learning framework for combinatorial optimization on graphs. In _Proceedings of NeurIPS_, 2020.
* [40] Elias Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. _Advances in Neural Information Processing Systems (NeurIPS)_, 2017.
* [41] Minsu Kim, Jinkyoo Park, et al. Learning collaborative policies to solve np-hard routing problems. _Advances in Neural Information Processing Systems (NeurIPS)_, 2021.
* [42] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. _arXiv preprint arXiv:1609.02907_, 2016.
* [43] Martin Klauco, Martin Kaluz, and Michal Kvasnica. Machine learning-based warm starting of active set methods in embedded model predictive control. _Engineering Applications of Artificial Intelligence_, 77:1-8, 2019.
* [44] Robert Kleinberg, Kevin Leyton-Brown, Brendan Lucier, and Devon Graham. Procrastinating with confidence: Near-optimal, anytime, adaptive algorithm configuration. _Advances in Neural Information Processing Systems (NeurIPS)_, 2019.
* [45] Markus Kruber, Marco E Lubbecke, and Axel Parmentier. Learning when to use a decomposition. In _International conference on AI and OR techniques in constraint programming for combinatorial optimization problems_, pages 202-210. Springer, 2017.
* [46] Alex Kulesza and Fernando Pereira. Structured learning with approximate inference. _Advances in Neural Information Processing Systems (NeurIPS)_, 2007.
* [47] Anukool Lakhina, Konstantina Papagiannaki, Mark Crovella, Christophe Diot, Eric D. Kolaczyk, and Nina Taft. Structural analysis of network traffic flows. In _Proceedings of ACM Sigmetrics_, 2004.
* [48] Cong Leng, Zesheng Dou, Hao Li, Shenghuo Zhu, and Rong Jin. Extremely low bit neural network: Squeeze the last bit out with ADMM. In _Proceedings of the AAAI Conference on Artificial Intelligence_, 2018.
* [49] M. C. Gonzalez, et al. Understanding Individual Human Mobility Patterns. _Nature_, 453:779-782, 2008.
* [50] M. Lopez-Ibanez, et al. The irace Package: Iterated Racing for Automatic Algorithm Configuration. _Operations Research Perspectives_, pages 43-58, 2016.
* [51] Sidhant Misra, Line Roald, and Yeesian Ng. Learning for constrained optimization: Identifying optimal active constraint sets. _INFORMS Journal on Computing_, 34(1):463-480, 2022.
* [52] Vinod Nair, Sergey Bartunov, Felix Gimeno, Ingrid von Glehn, Pawel Lichocki, Ivan Lobov, Brendan O'Donoghue, Nicolas Sonnerat, Christian Tjandraatmadja, Pengming Wang, et al. Solving mixed integer programs using neural networks. _arXiv preprint arXiv:2012.13349_, 2020.
* [53] S. Orlowski, M. Pioro, A. Tomaszewski, and R. Wessaly. SNDlib 1.0-Survivable Network Design Library. In _Proceedings of the 3rd International Network Optimization Conference (INOC 2007), Spa, Belgium_, April. [http://sndlib.zib.de](http://sndlib.zib.de), extended version accepted in Networks, 2009.
* [54] R. E. Bixby. A Brief History of Linear and Mixed-Integer Programming Computation. _Documenta Mathematica_, pages 107-121, 2010.
* [55] S. Chopra, I. Gilboa, and S. T. Sastry. Source Sink Flows with Capacity Installation in Batches. _Discrete Applied Mathematics_, 85:165-192, 1998.
* [56] S. Hochreiter, et al. Long short-term memory. _Neural Computation_, 9(8):1735-1780, 1997.
* [57] Jialin Song, Yisong Yue, Bistra Dilkina, et al. A general large neighborhood search framework for solving integer linear programs. _Advances in Neural Information Processing Systems (NeurIPS)_, 2020.
* [58] T. Achterberg. SCIP: Solving Constraint Integer Programs. _Mathematical Programming Computation_, 1:1-41, 2009.
* [59] David Williamson and David Shmoys. _The Design of Approximation Algorithms_. Cambridge University Press, 2011.
* [60] Yaoxin Wu, Wen Song, Zhiguang Cao, and Jie Zhang. Learning large neighborhood search policy for integer programming. _Advances in Neural Information Processing Systems (NeurIPS)_, 2021.
* [61] Alinson S Xavier, Feng Qiu, and Shabbir Ahmed. Learning to solve large-scale security-constrained unit commitment problems. _INFORMS Journal on Computing_, 33(2):739-756, 2021.
* [62] Y. Pochet, and L. A. Wolsey. Production Planning by Mixed Integer Programming. _Springer Science and Business Media_, 2006.
**Supplementary Materials**
## Appendix A Notation
Matrices, vectors and scalars are denoted by uppercase bold, lowercase bold and lowercase normal letters, i.e., \(\mathbf{X}\), \(\mathbf{x}\) and \(x\), respectively. The set of \(D\)-dimensional real numbers is denoted by \(\mathbb{R}^{D}\); \(\mathbf{x}\in\mathbb{R}^{D}\) and \(\mathbf{z}\in\mathbb{Z}^{D}\) denote vectors of real and integer variables of dimension \(D\). \([\mathbf{x};\mathbf{z}]\) and \([\mathbf{x},\mathbf{z}]\) indicate column and row concatenation, respectively. \(\mathbf{X}^{T}\) denotes the transpose of matrix \(\mathbf{X}\) and \([\mathbf{X}]_{j}\) its \(j\)-th row. \(\mathbf{I}_{D}\) and \(\mathbf{0}_{D}\) denote the \(D\)-dimensional identity and zero matrix, respectively. The number of variables is denoted as \(D_{z}\) and the number of constraints as \(D_{\mathrm{c}}\). We use a hat to indicate a variable prediction, e.g., \(\hat{z}\). We use the subscript \(j\) as a variable index and \(i\) as a constraint index. We use uppercase calligraphic and Greek letters to define sets. We denote by \(\mathcal{S}\) a set of time series (indexed by \(s\)) and by \(\mathcal{T}_{s}=\{1,\ldots,|\mathcal{T}_{s}|\}\) the set of timesteps of time series \(s\) (indexed by \(t\)). \(\phi\) represents the set of parameters of an optimization problem and \(\Theta\) the set of parameters of a NN. The symbol \(\mathbbm{1}_{\text{condition}}\) denotes the indicator function with value 1 if condition is true and 0 otherwise. \(\text{KL}\left(p||q\right)\) denotes the KL-divergence of distributions \(p\) and \(q\).
## Appendix B Parameter Normalization & Distribution Shift
As mentioned in Section 3, the parameter normalization (see Eq. (2)) makes our model scale invariant, i.e., our trained model can be applied to new instances where the magnitudes of the constraint matrix and cost vector are entirely different from what was seen during training. More importantly, our trained model can also be used in solving MIP problems whose sizes (i.e., length of the cost vector \(D_{z}\)) are different from that of training instances. However, during test time, if the problem size grows (or shrinks), the parameters will become smaller (or larger), causing a distribution shift.
The distribution shift arises since at training time, with the normalization given in (2) (with \(p=2\)), the parameters \(a_{ij}\) and \(b_{i}\) are proportional to \(\frac{1}{\sqrt{D_{z}+1}}\), and \(c_{j}\) is proportional to \(\frac{1}{\sqrt{D_{z}}}\), whereas at test time these would be proportional to \(\frac{1}{\sqrt{\bar{D}_{z}+1}}\) and \(\frac{1}{\sqrt{\bar{D}_{z}}}\), respectively. This can be rectified by a further rescaling of the parameters at test time.
More precisely, given a model that is trained on instances of size \(D_{z}\) and applied on test instances of size \(\bar{D}_{z}\), we normalize the test instance's parameters as
\[a_{i,j}\!=\!\sqrt{\frac{\bar{D}_{z}+1}{D_{z}+1}}\frac{a_{i,j}}{\|[\mathbf{a} _{i}^{T};b_{i}]\|_{2}},\quad b_{i}\!=\!\sqrt{\frac{\bar{D}_{z}+1}{D_{z}+1}} \frac{b_{i}}{\|[\mathbf{a}_{i}^{T};b_{i}]\|_{2}},\quad c_{j}\!=\!\sqrt{\frac{ \bar{D}_{z}}{D_{z}}}\frac{c_{j}}{\|\mathbf{c}\|_{2}}. \tag{8}\]
This reduces to Eq. (2) (with \(p=2\)), if \(\bar{D}_{z}=D_{z}\). With this normalization, the parameters \((a_{i,j},b_{i})\) and \(c_{j}\) during test time are also proportional to \(\frac{1}{\sqrt{D_{z}+1}}\) and \(\frac{1}{\sqrt{D_{z}}}\) respectively.
Given this normalization, our model can also be trained on MIP instances of different sizes. We just need to fix one size \(D_{z}\) as the reference size and for every instance of different size \(\bar{D}_{z}\), the parameters would have a further rescaling as in (8).
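A minimal NumPy sketch of the rescaled normalization in Eq. (8), assuming a dense instance (the function name is ours):

```python
import numpy as np

def normalize_with_rescaling(c, A, b, D_z_ref):
    """Eq. (8): normalize a test instance of size D_z_bar = len(c) so that
    its parameter magnitudes match a model trained at size D_z_ref."""
    D_z_bar = len(c)
    row = np.sqrt((D_z_bar + 1) / (D_z_ref + 1))
    col = np.sqrt(D_z_bar / D_z_ref)
    row_norms = np.linalg.norm(np.hstack([A, b[:, None]]), axis=1)
    return (col * c / np.linalg.norm(c),
            row * A / row_norms[:, None],
            row * b / row_norms)
```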
## Appendix C Graph Features
An integral part of MIPnet is the creation of the features of each node that form the matrix \(\mathbf{U}\in\mathbb{R}^{|\mathcal{V}|\times D_{u}}\), which is fed into the GCN (see Eq. (3)), where \(D_{u}\) is the feature dimension. We differentiate the feature construction between variable and constraint nodes.
The set of parameters that the \(j\)-th variable is associated with are: (i) its cost in the objective \(c_{j}\), (ii) all its parameters in the constraint matrix \(\mathbf{A}\), i.e., \(\mathbf{a}_{j}=[\mathbf{A}^{T}]_{j}\) and (iii) the right-hand sides of the constraints \(\mathbf{b}\). Usually the constraint matrix \(\mathbf{A}\) is sparse, especially for large problems, which means that each variable participates in only a small subset of the constraints. We leverage this observation and include as relevant features only the subset of constraints in which each variable appears, i.e., for the \(j\)-th variable we include constraint \(i\) if \(a_{i,j}\neq 0\). In particular, we calculate the maximum number of constraints that any variable appears in, \(m_{c}\), and we keep that many constraint parameters for all variables (so all the feature dimensions are consistent).6
Now, for the \(j\)-th variable we construct \(m_{c}\) triplets \((a_{i,j},b_{i},c_{j})\), where \(i\in\mathcal{I}_{j}=\{i|a_{i,j}\neq 0\}\). If a variable is included in fewer than \(m_{c}\) constraints, i.e., if \(|\mathcal{I}_{j}|<m_{c}\), then we include \(m_{c}-|\mathcal{I}_{j}|\) zero triplets. Each triplet is mapped to the target feature dimension \(D_{u}\) using an MLP with parameters learned jointly across all variables:
\[\mathbf{V}_{j}=f^{\text{(var-map)}}(\mathbf{a}_{\mathcal{I}_{j},j},\mathbf{b} _{\mathcal{I}_{j}},c_{j})\!=\!\begin{bmatrix}a_{i_{1},j}&b_{i_{1}}&c_{j}\\ \vdots&\vdots&\vdots\\ a_{i_{m_{c}},j}&b_{i_{m_{c}}}&c_{j}\end{bmatrix}\mathbf{W}^{\text{(var-map)}},\]
where \(\mathbf{W}^{\text{(var-map)}}\in\mathbb{R}^{3\times D_{u}}\) is the learnable weight matrix of the MLP and \(\mathbf{V}_{j}\in\mathbb{R}^{m_{c}\times D_{u}}\). Finally, we aggregate over all rows using a symmetric function (e.g., average):
\[\mathbf{u}_{j}^{\text{(var)}}=f^{\text{(var-agg)}}(\mathbf{V}_{j}), \tag{9}\]
where \(\mathbf{u}_{j}^{\text{(var)}}\in\mathbb{R}^{D_{u}}\). Note that \(\mathbf{u}_{j}^{\text{(var)}}\) is variable and constraint permutation invariant. The complete feature matrix for the variable nodes is constructed by concatenating all the feature vectors \(\mathbf{u}_{j}^{\text{(var)}}\), i.e., \(\mathbf{U}^{\text{(var)}}=[\mathbf{u}_{1}^{\text{(var)}},\dots,\mathbf{u}_{ D_{x}}^{\text{(var)}}]\in\mathbb{R}^{D_{x}\times D_{u}}\).
The features of the constraints are constructed in a similar manner. The set of parameters that the \(i\)-th constraint is associated with are: (i) all its parameters in the constraint matrix \(\mathbf{A}\), i.e., \(\mathbf{a}_{i}=[\mathbf{A}]_{i}\) (ii) the right-hand side of the constraint \(b_{i}\) and (iii) the costs in the objective \(\mathbf{c}\) for the variables in the constraint. As in the variable nodes, we leverage the sparsity of \(\mathbf{A}\) and find the maximum number of variables in all the constraints, \(m_{v}\). Now, for the \(i\)-th constraint we create \(m_{v}\) triplets \((a_{i,j},b_{i},c_{j})\), where \(j\in\mathcal{J}_{i}=\{j:a_{i,j}\neq 0\}\). The triplets are projected to the target dimension \(D_{u}\) and then aggregated in the same way as the variable features:
\[\mathbf{C}_{i}=f^{\text{(con-map)}}(\mathbf{a}_{i,\mathcal{J}_{i }},b_{i},\mathbf{c}_{\mathcal{J}_{i}}) \!=\!\begin{bmatrix}a_{i,j_{1}}&b_{i}&c_{j_{1}}\\ \vdots&\vdots&\vdots\\ a_{i,j_{m_{v}}}&b_{i}&c_{j_{m_{v}}}\end{bmatrix}\mathbf{W}^{\text{(con-map)}},\] \[\mathbf{u}_{i}^{\text{(con)}}=f^{\text{(con-agg)}}(\mathbf{C}_{i}),\]
where \(\mathbf{W}^{\text{(con-map)}}\in\mathbb{R}^{3\times D_{u}}\) is the learnable weight matrix of the MLP, \(\mathbf{C}_{i}\in\mathbb{R}^{m_{v}\times D_{u}}\) and \(\mathbf{u}_{i}^{\text{(con)}}\in\mathbb{R}^{D_{u}}\). Note that \(\mathbf{u}_{i}^{\text{(con)}}\) is variable and constraint permutation invariant. The complete feature matrix for the constraint nodes is constructed by concatenating all the feature vectors \(\mathbf{u}_{i}^{\text{(con)}}\), i.e., \(\mathbf{U}^{\text{(con)}}=[\mathbf{u}_{1}^{\text{(con)}},\dots,\mathbf{u}_{D_ {c}}^{\text{(con)}}]\in\mathbb{R}^{D_{c}\times D_{u}}\).
It is possible to differentiate between the equality and inequality constraints and treat them separately. In that case, a different set of triplets is constructed for each constraint and a different MLP can be used to map the triplets to the feature dimension.
The final feature matrix \(\mathbf{U}\) is:
\[\mathbf{U}=\begin{bmatrix}\mathbf{U}^{\text{(var)}}\\ \mathbf{U}^{\text{(con)}}\end{bmatrix}\in\mathbb{R}^{|\mathcal{V}|\times D_{u}}. \tag{10}\]
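A NumPy sketch of the variable-node feature construction (constraint-node features follow analogously); the loop-based form is for clarity only, and the function and argument names are ours.

```python
import numpy as np

def variable_node_features(A, b, c, W_var):
    """Build U_var: for variable j, stack the triplets (a_ij, b_i, c_j) over
    the constraints it appears in, zero-pad to m_c rows, map each triplet
    with the shared weight matrix W_var (3 x D_u), and mean-aggregate
    as in Eq. (9)."""
    D_c, D_z = A.shape
    m_c = int((A != 0).sum(axis=0).max())   # max constraints per variable
    U = np.zeros((D_z, W_var.shape[1]))
    for j in range(D_z):
        rows = np.nonzero(A[:, j])[0]
        T = np.zeros((m_c, 3))              # zero triplets act as padding
        T[:len(rows)] = np.stack(
            [A[rows, j], b[rows], np.full(len(rows), c[j])], axis=1)
        U[j] = (T @ W_var).mean(axis=0)     # symmetric aggregation
    return U
```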
## Appendix D MLP for Continuous Variables
Given an assignment of integer variables, we can re-write the MILP (1) as the following parametric linear program:
\[V(\mathbf{z}^{\text{(b)}})=\underset{\mathbf{z}^{\text{(c)}}}{\text{minimize}} (\mathbf{c}^{\text{(b)}})^{T}\mathbf{z}^{\text{(b)}}+(\mathbf{c}^{\text{(c)}})^{T}\mathbf{z}^{\text{(c)}}\] (11) subject to \[\mathbf{A}^{\text{(c)}}\mathbf{z}^{\text{(c)}}\leq\mathbf{b}-\mathbf{A}^{\text{(b)}}\mathbf{z}^{\text{(b)}}.\]
where \(\mathbf{z}=[\mathbf{z}^{\text{(b)}};\mathbf{z}^{\text{(c)}}]\), \(\mathbf{A}=[\mathbf{A}^{\text{(b)}},\mathbf{A}^{\text{(c)}}]\), and \(\mathbf{c}=[\mathbf{c}^{\text{(b)}},\mathbf{c}^{\text{(c)}}]\). The above function \(V:\mathbb{R}^{D_{z}^{\text{(b)}}}\rightarrow\mathbb{R}\) is a piecewise convex function [26; 13], and therefore it can be approximated by an MLP with ReLU activation units. This fact motivated us to use an MLP to map the embedding from the GCN to the continuous variables. Notice that a convex layer [3] can also be used to approximate optimization programs. However, training convex layers is expensive as it requires differentiating through the optimization program, and in our testing it significantly increased the time needed for training. For this reason, we opted to use an MLP instead.
## Appendix E On the Bernoulli and Beta Distributions
The goal of our model is to learn the value of each binary variable as well as to measure how reliable each prediction is. A natural approach to assess how reliable or risky a decision is, is via the variance. Recall that a Bernoulli distribution, defined as \(\text{Ber}(\pi)\), has mean \(\pi\) and variance \(\pi(1-\pi)\). Thus, for a given mean, the variance is completely determined and fixed, i.e., we cannot have different variance values for a given mean value. This essentially renders the variance uninformative (given the mean) for the purpose of assessing how reliable a binary variable prediction is. On the other hand, in the Beta distribution a fixed mean does not correspond to a fixed variance value, and the variance can therefore be used (along with the mean) as a measure to select trustworthy variables. For example, when selecting the \(\rho\%\) most reliable variable predictions, one might prefer to include a variable with mean 0.2 and small variance (and fix it to the integer value 0) over a variable with mean 0.1 and very large variance, which indicates that the model has low confidence in that particular prediction although its mean is closer to 0.
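As a small numerical illustration of this point (using SciPy), consider two Beta distributions with the same mean 0.2 but very different standard deviations:

```python
from scipy.stats import beta

confident = beta(a=20, b=80)    # mean 0.2, std ~ 0.04
uncertain = beta(a=0.4, b=1.6)  # mean 0.2, std ~ 0.23
print(confident.mean(), confident.std())
print(uncertain.mean(), uncertain.std())
```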
## Appendix F Training
### Clenshaw-Curtis Quadrature
The Clenshaw-Curtis quadrature [15] is a method for numerical integration that is based on the expansion of the integrand in terms of Chebyshev polynomials, where the function \(f(x)\) to be integrated over the fixed interval \([-1,1]\) is evaluated at the \(K\) roots of a Chebyshev polynomial. Then, the integral can be approximated as:
\[\int_{-1}^{1}f(x)dx\approx\mathbf{w}^{\top}\mathbf{y}, \tag{12}\]
where
\[\mathbf{w}=\mathbf{D}^{\top}\mathbf{d}, \tag{13}\]
are the quadrature weights, with \(\mathbf{d}\in\mathbb{R}^{K/2+1}\) and \(\mathbf{D}\in\mathbb{R}^{(K/2+1)\times(K/2+1)}\). In particular, the \(k\)-th element of \(\mathbf{d}\) (with zero-based indexing) is given by
\[d_{k}=\begin{cases}1,&k=0,\\ 2/(1-(2k)^{2}),&k=1,\ldots,K/2-1,\\ 1/(1-K^{2}),&k=K/2,\end{cases} \tag{14}\]
while the \((m,k)\)-th element of \(\mathbf{D}\) is defined as
\[D_{mk}=\frac{2}{K}\cos\left(\frac{mk\pi}{K/2}\right)\times\begin{cases}1/2,&k= 0,K/2,\\ 1,&\text{otherwise}.\end{cases} \tag{15}\]
Notice that both \(\mathbf{D}\) and \(\mathbf{d}\) are independent of the function \(f(x)\) and therefore can be precomputed in \(O(K\log K)\). The information of the function is encoded in the vector \(\mathbf{y}\in\mathbb{R}^{K/2+1}\), where its \(k\)-th element, with \(k=0,\ldots,K/2\), can be computed as
\[y_{k}=f(\cos(k\pi/K))+f(-\cos(k\pi/K)). \tag{16}\]
Finally, if the function needs to be integrated in the \([0,1]\) interval, we can apply the simple change of variable \(x^{\prime}=2x-1\to x=(x^{\prime}+1)/2\) and the new integral becomes
\[\int_{-1}^{1}\frac{1}{2}f((x^{\prime}+1)/2)dx^{\prime}, \tag{17}\]
where we have also accounted for the Jacobian factor which is equal to \(1/2\). The integral in (17) has the right limits and the Clenshaw-Curtis quadrature can be applied accordingly.
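A minimal NumPy/SciPy sketch transcribing Eqs. (13)-(17) for the Beta-Bernoulli integrand; the function names are ours, and no special care is taken here for the endpoint singularities arising when \(\alpha<1\) or \(\beta<1\).

```python
import numpy as np
from scipy.stats import beta as beta_dist

def cc_weights(K):
    """Precompute the Clenshaw-Curtis weights w = D^T d (Eqs. (13)-(15))."""
    k = np.arange(K // 2 + 1)
    d = np.where(k == 0, 1.0,
                 np.where(k == K // 2, 1.0 / (1 - K**2),
                          2.0 / (1 - (2 * k)**2)))
    D = (2.0 / K) * np.cos(k[:, None] * k * np.pi / (K / 2))
    D[:, [0, -1]] *= 0.5
    return D.T @ d

def beta_bernoulli_likelihood(a, b, z_star, K=32):
    """Approximate the likelihood integral via the change of variables in
    Eq. (17) and the evaluations y_k of Eq. (16)."""
    w = cc_weights(K)
    x = np.cos(np.arange(K // 2 + 1) * np.pi / K)  # Chebyshev points
    def f(t):  # integrand mapped from [0, 1] to [-1, 1], incl. Jacobian 1/2
        pi = (t + 1) / 2
        return 0.5 * beta_dist.pdf(pi, a, b) * pi**z_star * (1 - pi)**(1 - z_star)
    return w @ (f(x) + f(-x))
```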
### Regularization
Ideally, we would like the model to produce Beta distributions with mean close to 0.5 and high variance if it is not confident. We propose the following regularization term to reinforce this behaviour:
\[r(\alpha,\beta,z^{\star})= \int_{0}^{1}\text{Beta}(\pi;\alpha,\beta)|z^{\star}-\pi|d\pi \cdot\text{KL}\left(\text{U}(0,1)||\text{Beta}(\pi,\alpha,\beta)\right)\] \[= \left((1-z^{\star})\int_{0}^{1}\text{Beta}(\pi;\alpha,\beta)\pi d \pi+z^{\star}\int_{0}^{1}\text{Beta}(\pi;\alpha,\beta)(1-\pi)d\pi\right)\] \[\times\int_{0}^{1}\log\left(\frac{1}{\text{Beta}(\pi;\alpha, \beta)}\right)d\pi \tag{18}\] \[= \left((1-z^{\star})\frac{\alpha}{\alpha+\beta}+z^{\star}\frac{ \beta}{\alpha+\beta}\right)\left(-\int_{0}^{1}\log\left(\frac{\pi^{\alpha-1}( 1-\pi)^{\beta-1}}{B(\alpha,\beta)}\right)d\pi\right)\] \[= \left(\frac{(1-z^{\star})\alpha+z^{\star}\beta}{\alpha+\beta} \right)\left(\alpha-1+\beta-1+\log\left(B(\alpha,\beta)\right)\right),\]
where \(\text{U}(0,1)\) is the uniform distribution on the unit interval and \(B(\alpha,\beta)\) the Beta function. The goal is to minimize the KL-divergence between the learned Beta distribution and the uniform only when a prediction is wrong. The term \(\int_{0}^{1}\text{Beta}(\pi;\alpha,\beta)|z^{\star}-\pi|d\pi\) evaluates how far the distribution is from the true label \(z^{\star}\): if the Beta _is close_ to the label and has small variance (confident and correct), then this term is very small and the regularization becomes negligible. If the distribution _is far_ from the label, the KL divergence regularizes the Beta towards the uniform (pushes its mean towards 0.5) and increases its variance.
The overall supervised loss is given by:
\[\ell_{\text{sup-reg}}=\ell_{\text{sup}}(\mathbf{z}^{\star},\psi=\{\boldsymbol{ \alpha}_{s,t},\boldsymbol{\beta}_{s,t}\})+\lambda_{\text{reg}}r(\mathbf{z}^{ \star},\psi=\{\boldsymbol{\alpha}_{s,t},\boldsymbol{\beta}_{s,t}\}), \tag{19}\]
where \(\lambda_{\text{reg}}\geq 0\) is a regularization parameter.
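A one-function sketch of the closed form in the last line of Eq. (18), using SciPy's \(\log B(\alpha,\beta)\) (the function name is ours):

```python
from scipy.special import betaln

def beta_uniform_regularizer(alpha, beta, z_star):
    """Closed form of Eq. (18): expected distance of Beta from the label
    times KL(U(0,1) || Beta(alpha, beta))."""
    mismatch = ((1 - z_star) * alpha + z_star * beta) / (alpha + beta)
    kl = alpha - 1 + beta - 1 + betaln(alpha, beta)
    return mismatch * kl
```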
### Weighted Loss
It is common in classification tasks to include weights in the loss when the classes are imbalanced or to trade off precision and recall.
Our problem can be interpreted as a classification task with \(2^{D^{\text{(b)}}_{z}}\) classes, one for each possible binary sequence of length \(D^{\text{(b)}}_{z}\); each instance then belongs to a specific class. With this interpretation the number of classes explodes very fast and no meaningful technique can be applied, since the number of samples required to have a decent representation of each class is unattainable (e.g., for \(D^{\text{(b)}}_{z}=100\) and assuming an average of only 10 samples per class we would require \(2^{100}\cdot 10\approx 10^{31}\) samples).
An interpretation that scales is to consider our problem as \(D^{\text{(b)}}_{z}\) classification tasks (one for each variable) with 2 classes each (since we consider binary variables). With this approach, each instance corresponds to a combination of \(D^{\text{(b)}}_{z}\) class labels. Now, each classification task has as many samples as the number of instances, which are split in a different way between the 2 classes.
Note that the above interpretations do not change the mathematical formulation of the problem or the loss but the way we think of the class weighting. Following the second approach, we can calculate the class representation percentage \(r_{j}\) of the \(j\)-th variable (i.e., each classification task) from the available instances (labels). For example, if in \(80\%\) of the labels \(z_{j}=0\) and in \(20\%\) of the labels \(z_{j}=1\), then \(r_{j}=0.2\). Then, the (supervised) loss in its generic form would be:
\[\ell(\mathbf{z}^{\star},\phi)=-\sum_{s=1}^{S}\sum_{t=1}^{T}\sum_{j=1}^{D^{\text{(b)}}_{z}}\frac{\log(\cdot)}{r_{j}^{z^{\star}_{s,t,j}}(1-r_{j})^{1-z^{\star}_{s,t,j}}}, \tag{20}\]

where \(\log(\cdot)\) stands for the corresponding per-variable log-likelihood term of the supervised loss.
The use (or not) of these weights for the computation of the supervised loss is a hyperparameter that is optimized for each dataset.
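For illustration, the class representation percentages \(r_{j}\) and the per-term weights of Eq. (20) can be computed as in the sketch below; clipping \(r_{j}\) away from 0 and 1 is our own numerical safeguard, not something prescribed by the text:

```python
import numpy as np

def class_representation(z_labels):
    """z_labels: (num_instances, D) binary array of optimal assignments.
    Returns r_j, the fraction of instances in which variable j equals 1."""
    return np.clip(z_labels.mean(axis=0), 1e-3, 1.0 - 1e-3)

def loss_weights(z_star, r):
    """Weight 1 / (r_j^{z*_j} (1 - r_j)^{1 - z*_j}) multiplying each
    per-variable log-likelihood term in Eq. (20)."""
    return 1.0 / (r ** z_star * (1.0 - r) ** (1.0 - z_star))
```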
## Appendix G Datasets
We introduce the six types of problems which we employed in the evaluation of MIPnet. The results for the problems from Tables 5 and 7 can be found in Section 4, and the results for the problems from Tables 6 and 8 are discussed in Section I.1 of the Appendix. These problems were selected in line with the evaluation studies in prior works, see [52; 19; 11] and references therein; they appear regularly in different application domains and, furthermore, constitute building blocks for a very wide range of optimization problems. In the sequel we describe the mathematical formulation of each problem, denoted as \(\mathbb{P}_{p}^{t}\) with \(t\in\mathcal{T}\) and \(p=1,\ldots,6\), to indicate that each formulation corresponds to a particular instance of the problem with time-varying parameters. We further identify each problem's complexity and class (type of problem), and we map them to real-world applications. Finally, we provide details about the datasets and instances we used in training, testing and validation - see also the summarized view in Tables 5-8.
Footnote 7: We have ignored the time series index \(s\) to avoid notational cluttering.
Footnote 8: For the problem formulations we have used \(t\) as a superscript in all the time-varying parameters (\(t\) is fixed per instance and it just indicates that a specific parameter changes across instances), while all the subscripts are instance-dependent indices.
### Multicommodity Network Design and Routing
We consider the joint unsplittable-routing and network design problem where the goal is to minimize the aggregate routing cost by deciding how much capacity to purchase for each link and which path to select for each commodity.
Formally, the network is modeled by a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) and serves a set \(\mathcal{K}\) of commodities with time-varying demands \(\boldsymbol{d}^{t}=\{d_{k}^{t}\}_{k\in\mathcal{K}}\), and predetermined sources and destinations. Specifically, our goal is to solve the following _binary problem:_
\[\mathbb{P}_{1}^{t}:\quad\underset{\boldsymbol{p},\boldsymbol{q}}{\text{minimize}} \sum_{k\in\mathcal{K}}\sum_{l\in\mathcal{P}_{k}}v_{k,l}p_{k,l}d_{k}^{t}+\sum_{e\in\mathcal{E}}\sum_{i\in\mathcal{I}}u_{i}q_{e,i}\] subject to \[\sum_{l\in\mathcal{P}_{k}}p_{k,l}=1, \forall k\in\mathcal{K},\] \[\sum_{k\in\mathcal{K}}\sum_{l\in\mathcal{P}_{k}}a_{e,k,l}p_{k,l}d_{k}^{t}\leq b_{e}+\sum_{i\in\mathcal{I}}c_{i}q_{e,i}, \forall e\in\mathcal{E},\] \[\boldsymbol{p}\in\{0,1\}^{\sum_{k\in\mathcal{K}}|\mathcal{P}_{k}|},\ \boldsymbol{q}\in\{0,1\}^{|\mathcal{E}||\mathcal{I}|}.\]
Problem \(\mathbb{P}_{1}^{t}\) has the following binary _variables_:
* \(p_{k,l}\in\{0,1\}\): selection of path \(l\) for routing commodity \(k\in\mathcal{K}\),
* \(q_{e,i}\in\{0,1\}\): installment of \(i\)-type capacity at edge \(e\in\mathcal{E}\),
and the following _parameters_:
* \(\mathcal{P}_{k}\): set of eligible paths for commodity \(k\in\mathcal{K}\),
* \(v_{k,l}\in\mathbb{R}_{+}\): routing cost (per unit of traffic) of path \(l\in\mathcal{P}_{k}\),
* \(a_{e,k,l}\in\{0,1\}\): parameter indicating that edge \(e\) is contained in path \(l\in\mathcal{P}_{k}\),
* \(b_{e}\in\mathbb{R}_{+}\): initial capacity installed at edge \(e\in\mathcal{E}\),
| | routing-sm | revenue-max-sm |
| --- | --- | --- |
| # Integer variables | 1044 | 10000 |
| # Continuous variables | 0 | 0 |
| # Equality constraints | 390 | 0 |
| # Inequality constraints | 36 | 10 |
| Data | real | synthetic |
| Fraction of non-zeros in \(\mathbf{A}\) | 0.0333 | 0.501 |
| Fraction of non-zeros in \(\mathbf{b}\) | 1.0 | 1.0 |
| Fraction of non-zeros in \(\mathbf{c}\) | 0.9169 | 1.0 |
| Fraction of non-zeros in \(\mathbf{z}^{*}\) | 0.4298 | 0.0166 |

Table 6: Additional datasets problem sizes.
| | routing | facility-loc | tsp | energy-grid | revenue-max | caching |
| --- | --- | --- | --- | --- | --- | --- |
| # Integer variables | 2812 | 1989 | 144 | 1000 | 30000 | 39942 |
| # Continuous variables | 0 | 0 | 500 | 0 | 0 | 0 |
| # Equality constraints | 450 | 50 | 36 | 1 | 0 | 0 |
| # Inequality constraints | 36 | 39 | 4082 | 1010 | 20 | 1 |
| Data | real | mixed | synthetic | synthetic | synthetic | mixed |
| Fraction of non-zeros in \(\mathbf{A}\) | 0.0374 | 0.0226 | 0.1414 | 0.6723 | 0.50 | 1.0 |
| Fraction of non-zeros in \(\mathbf{b}\) | 1.0 | 0.5 | 0.8333 | 0.9876 | 1.0 | 1.0 |
| Fraction of non-zeros in \(\mathbf{c}\) | 0.579 | 0.9804 | 0.9996 | 1.0 | 1.0 | 0.0187 |
| Fraction of non-zeros in \(\mathbf{z}^{*}\) | 0.0001 | 0.0278 | 0.0833 | 0.7375 | 0.0044 | 0.0006 |

Table 5: Datasets problem sizes.
* \(u_{i}\in\mathbb{R}_{+}\): cost of buying \(i\)-th capacity installment,
* \(c_{i}\in\mathbb{R}_{+}\): capacity of \(i\)-th installment,
* \(d_{k}^{t}\in\mathbb{R}_{+}\): demand of commodity \(k\) at time \(t\).
**Complexity**. \(\mathbb{P}_{1}^{t}\) is known to be NP-hard even for a single commodity, i.e., \(|\mathcal{K}|=1\), cf. [55].
**Datasets**. The problem parameters, including the graph topology, were taken from the SNDlib database [53] that contains real-world traffic matrices and network topologies from different communication (and other) networks (e.g., from backbone ISP networks).9 In particular, for this problem we used the _geant_ dataset, the size of which can be seen in Table 5. The capacity and installment costs are fixed over time, while the demand vector \(\mathbf{d}^{t}\) is updated every \(15\) minutes over a period of \(4\) months. We note that we have scaled the demand \(\mathbf{d}^{t}\) and edge capacity by a factor of \(100\) and \(1/40\), respectively, to increase the complexity of the problem.10 The number of time series we used, and the number of instances for each time series, can be found in Table 7, where the instances differ in vector \(\mathbf{d}^{t}\).
Footnote 9: SNDlib is a rich and well-known database which is regularly used in the evaluation of routing algorithms in communication networks.
Footnote 10: This scaling renders capacity purchase necessary in order to fulfill the demands.
### Facility Location
In the facility location problem, given a set \(\mathcal{I}\) of _facility_ locations and a set \(\mathcal{J}\) of _clients_, the goal is to decide which facilities to open and how to assign clients to the opened facilities. The objective is to minimize the total opening and assignment cost. More formally, we solve the following _binary problem_:
\[\mathbb{P}_{2}^{t}:\quad\text{minimize} \sum_{i\in\mathcal{I}}\sum_{j\in\mathcal{J}}c_{j,i}d_{j}^{t}z_{j,i}+\sum_{i\in\mathcal{I}}f_{i}x_{i}\] subject to \[\sum_{i\in\mathcal{I}}z_{j,i}=1, \forall j\in\mathcal{J},\] \[\sum_{j\in\mathcal{J}}z_{j,i}\leq 2|\mathcal{J}|x_{i}, \forall i\in\mathcal{I},\] \[\mathbf{z}\in\{0,1\}^{|\mathcal{I}||\mathcal{J}|+|\mathcal{I}|},\]
where the vector of binary variables \(\mathbf{z}=[z_{11},\dots,z_{|\mathcal{I}||\mathcal{J}|},x_{1},\dots,x_{| \mathcal{I}|}]\). In the above problem, we introduced the following _binary variables_:
* \(z_{j,i}\in\{0,1\}\): assignment of client \(j\in\mathcal{J}\) to facility \(i\in\mathcal{I}\),
* \(x_{i}\in\{0,1\}\): deployment of facility \(i\in\mathcal{I}\),
| | routing | facility-loc | tsp | energy-grid | revenue-max | caching |
| --- | --- | --- | --- | --- | --- | --- |
| # of time series for training | 46 | 80 | 80 | 80 | 80 | 80 |
| # of time series for testing | 5 | 20 | 20 | 20 | 20 | 20 |
| # of time series for validation | 5 | 20 | 20 | 20 | 20 | 20 |
| # of instances per time series | 96 | 700 | 400 | 100 | 100 | 150 |

Table 7: Datasets time series dimensions.
| | routing-sm | revenue-max-sm |
| --- | --- | --- |
| # of time series for training | 96 | 80 |
| # of time series for testing | 18 | 20 |
| # of time series for validation | 10 | 20 |
| # of instances per time series | 96 | 100 |

Table 8: Additional datasets time series dimensions.
and the following _parameters_:
* \(f_{i}\in\mathbb{R}_{+}\): cost for opening facility \(i\in\mathcal{I}\),
* \(c_{j,i}\in\mathbb{R}_{+}\): cost for associating client \(j\in\mathcal{J}\) to facility \(i\in\mathcal{I}\),
* \(d^{t}_{j}\in\mathbb{R}_{+}\): demand of client \(j\in\mathcal{J}\) at time \(t\).
**Complexity**. \(\mathbb{P}^{t}_{2}\) is the metric uncapacitated facility location problem, which is NP-hard and inapproximable below \(1.463\) ratio, with the currently-known algorithm achieving a solution within \(1.488\) ratio, see [67].
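To make the formulation concrete, the sketch below solves a toy instance of \(\mathbb{P}_{2}^{t}\) with PuLP and its bundled CBC solver; the instance sizes and cost ranges are arbitrary assumptions for illustration, not the paper's experimental setup:

```python
# Toy instance of the facility location problem P2 (illustrative only).
import random
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

random.seed(0)
I, J = range(3), range(5)                          # facilities, clients
f = {i: random.uniform(5, 10) for i in I}          # opening costs f_i
c = {(j, i): random.uniform(1, 4) for j in J for i in I}  # assignment costs
d = {j: random.uniform(1, 3) for j in J}           # demands d_j^t at one time t

prob = LpProblem("facility_location", LpMinimize)
z = LpVariable.dicts("z", [(j, i) for j in J for i in I], cat="Binary")
x = LpVariable.dicts("x", I, cat="Binary")

prob += lpSum(c[j, i] * d[j] * z[j, i] for j in J for i in I) \
      + lpSum(f[i] * x[i] for i in I)
for j in J:                                        # every client assigned once
    prob += lpSum(z[j, i] for i in I) == 1
for i in I:                                        # assignments only to open sites
    prob += lpSum(z[j, i] for j in J) <= 2 * len(J) * x[i]

prob.solve()
print("objective:", value(prob.objective))
```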
**Datasets**. The graph topology is borrowed from the _germany50_ dataset [53], the cost parameters \(f_{i}\) are randomly generated from a uniform distribution for each problem instance and the time-varying demand \(\boldsymbol{d}^{t}=\{d^{t}_{j}\}_{j\in\mathcal{J}}\) is defined as:
\[\boldsymbol{d}^{t+1}=\max\left\{0,\boldsymbol{A}\boldsymbol{d}^{t}+a_{1}\sin( t/\mathtt{period}_{1})+a_{2}\sin(t/\mathtt{period}_{2})+\mathbf{w}\right\},\]
for a Hurwitz matrix \(\mathbf{A}\in\mathbb{R}^{|\mathcal{J}|\times|\mathcal{J}|}\) constructed by randomly sampling the eigenvalues \(\lambda_{i}\sim\mathcal{U}(0.98,0.999)\) and the associated orthonormal eigenvectors \(\mathbf{v}_{i}\),11 for \(i\in\{1,\ldots,|\mathcal{J}|\}\). The random variables \(a_{1}\sim\mathcal{U}(1,5)\) and \(a_{2}\sim\mathcal{U}(2,10)\), and the random vector \(\mathbf{w}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{1})\), where \(\mathbf{0}\in\mathbb{R}^{|\mathcal{J}|}\) is the vector of zeros and the covariance matrix \(\mathbf{\Sigma}_{1}\in\mathbb{R}^{|\mathcal{J}|\times|\mathcal{J}|}\) is the identity, i.e., a diagonal matrix of ones. In the above definition the \(\max\) operator is applied component-wise and guarantees nonnegative demand. Finally, the periods of the sine functions are \(\mathtt{period}_{1}=20\) and \(\mathtt{period}_{2}=70\).
Footnote 11: The eigenvectors are computed randomly sampling from a standard normal and then using the Gram–Schmidt procedure to compute a set of orthonormalized vectors.
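The demand recursion above is straightforward to simulate. The sketch below builds the random stable matrix \(\mathbf{A}\) via a QR factorization, which is equivalent to the Gram-Schmidt orthonormalization mentioned in the footnote, and rolls the dynamics forward; the initial demand and the problem size are our own assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
J = 50                                   # number of clients (assumed)

# Eigenvalues ~ U(0.98, 0.999); orthonormal eigenvectors from the QR of a
# random Gaussian matrix (equivalent to Gram-Schmidt orthonormalization).
eigvals = rng.uniform(0.98, 0.999, size=J)
Q, _ = np.linalg.qr(rng.standard_normal((J, J)))
A = Q @ np.diag(eigvals) @ Q.T

a1, a2 = rng.uniform(1, 5), rng.uniform(2, 10)
d = rng.uniform(1, 10, size=J)           # initial demand (assumed)
demands = []
for t in range(700):                     # 700 instances per time series (Table 7)
    w = rng.standard_normal(J)           # w ~ N(0, I)
    d = np.maximum(0.0, A @ d + a1 * np.sin(t / 20) + a2 * np.sin(t / 70) + w)
    demands.append(d.copy())
```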
### Travelling Salesman
We consider the Travelling Salesman Problem (TSP) under the Dantzig-Fulkerson-Johnson formulation [64]. In detail, we have a set \(\mathcal{N}\) of \(N\) cities, which we need to visit exactly once and return to the starting node. The cities are connected with a graph \(G=(\mathcal{N},\mathcal{E})\) which induces routing (or distance) costs \(c_{i,j},\forall(i,j)\in\mathcal{E}\). Our goal is to minimize the aggregate (travelling) cost of the route. In detail, the formulated _binary problem_ is:
\[\mathbb{P}^{t}_{3}:\quad\underset{\mathbf{z}}{\text{minimize}} \sum_{i\in\mathcal{N}}\sum_{j\in\mathcal{N}}c^{t}_{i,j}z_{i,j}\] subject to \[\sum_{i\in\mathcal{N}}z_{i,j}=1, \forall j\in\mathcal{N},\] \[\sum_{j\in\mathcal{N}}z_{i,j}=1, \forall i\in\mathcal{N},\] \[\sum_{i\in\mathcal{S}}\sum_{j\in\mathcal{S}}z_{i,j}\leq|\mathcal{S}|-1, \forall\mathcal{S}\subseteq\{1,\ldots,N\},\;|\mathcal{S}|\geq 2,\] \[\mathbf{z}\in\{0,1\}^{N^{2}}.\]
The problem includes the _binary variables_:
* \(z_{i,j}\in\{0,1\}\): inclusion (\(z_{i,j}=1\)) or not (\(z_{i,j}=0\)) of link \((i,j)\in\mathcal{E}\) in the route,
and _parameters_:
* \(c^{t}_{i,j}\in\mathbb{R}_{+}\): cost for traversing link \((i,j)\) at time \(t\).
**Complexity**. \(\mathbb{P}^{t}_{3}\) is NP-hard to approximate within any polynomial factor for general distance functions, while improved approximation ratios are available for certain restricted cases (metric spaces, etc.). We refer the reader to [63] for an up-to-date discussion on the complexity of TSP.
**Datasets**. We created instances with \(N=12\) nodes. The location of each node is randomly generated using a uniform distribution and we assumed that the graph is fully connected. The cost vector \(\boldsymbol{c}^{t}\in\mathbb{R}^{n}\) is given by the distance between each city pair and it changes at each time step as follows:
\[\boldsymbol{c}^{t+1}=\max\{0,\boldsymbol{c}^{t}+\mathbf{w}\},\]
where \(\mathbf{w}\sim\text{U}(d_{min},d_{max})^{n}\), the \(\max\) operator is applied component-wise, and \(d_{min},d_{max}\) are positive constants.
### Revenue Maximization
We consider the problem of shipping commodities from a set \(\mathcal{N}\) of \(N=|\mathcal{N}|\) source nodes (each commodity corresponds to one node) towards their intended destinations over predetermined (overlapping) paths. Our goal is to maximize the revenue from delivering as many commodities as possible, while satisfying the time-varying capacity of each edge \(i\in\mathcal{I}\) that lies along the path of each commodity \(n\in\mathcal{N}\), i.e., \(i\geq n\). Formally, we consider the following _binary_ problem:
\[\mathbb{P}_{4}^{t}:\quad\underset{\mathbf{z}}{\text{maximize}} \sum_{n\in\mathcal{N}}c_{n}^{t}z_{n}\] subject to \[\sum_{n\in\mathcal{N}}a_{i,n}z_{n}\leq b_{i}^{t},\quad\forall i\in \mathcal{I},\] \[\mathbf{z}\in\{0,1\}^{N}.\]
The problem includes the _binary variables_:
* \(z_{n}\in\{0,1\}\): transporting commodity \(n\) to its destination via a predetermined path, or not,
and _parameters_:
* \(c_{n}^{t}\in\mathbb{R}_{+}\): revenue when delivering commodity \(n\), at time \(t\),
* \(b_{i}^{t}\in\mathbb{R}_{+}\): transportation capacity of link \(i\in\mathcal{I}\), at time \(t\),
* \(a_{i,n}\in\{0,1\}\): indicating if link \(i\) lies in the predetermined path of commodity \(n\), i.e., \(i\geq n\).
**Complexity**. \(\mathbb{P}_{4}^{t}\) is NP-complete as it falls in the category of multi-dimensional Knapsack problems, see [66].
**Datasets.** We randomly generated from a uniform distribution the scalars \(a_{i,n}\) representing the resources needed to ship commodity \(n\) from the \(i\)-th facility. Furthermore, for each time series the time-varying capacity and revenue vectors are defined as follows:
\[\mathbf{c^{t+1}} =\mathbf{c^{t}}+a_{1}\sin(t/\mathtt{period}_{1})+a_{2}\sin(t/ \mathtt{period}_{2})+\mathbf{w}_{c},\] \[\mathbf{b^{t+1}} =\mathbf{b^{t}}+\mathbf{w}_{b},\]
where \(\mathbf{c}^{t}=[c_{1}^{t},\ldots,c_{N}^{t}]\), \(\mathbf{b}^{t}=[b_{1}^{t},\ldots,b_{|\mathcal{I}|}^{t}]\), the scalars \(a_{1}\), \(a_{2}\) are randomly generated for each time series, and the random vectors \(\mathbf{w}_{c}\in\mathbb{R}^{|\mathcal{N}|}\) and \(\mathbf{w}_{b}\in\mathbb{R}^{|\mathcal{I}|}\). For further details about the number of time series and problem instances per time series please refer to Table 7.
### Energy Grid
We consider a microgrid energy-sharing problem, cf. [65], where a set \(\mathcal{F}\) of \(F=|\mathcal{F}|\) energy producers-consumers (or, _prosumers_) coordinate their energy consumption plan over several time periods. The prosumers have at their disposal a hierarchical energy storage system, where the primary batteries are already in place and the secondary batteries are deployed upon demand by paying additional cost. The goal is to optimize the overall microgrid operation, by solving
Figure 4: Microgrid energy storage optimization for a community of \(\mathcal{F}\) prosumers and a two-tier battery system.
the _mixed binary problem_:
\[\mathbb{P}_{5}^{t}:\underset{\mathbf{z}}{\text{maximize}} \sum_{f\in\mathcal{F}}c_{f}^{t}z_{f}^{(c)}-\sum_{i\in\mathcal{I}}p_{i}^{t}z_{i}^{(b)}\] subject to \[\sum_{f\in\mathcal{F}}a_{n,f}z_{f}^{(c)}\leq b_{n}^{t}+\sum_{i\in\mathcal{I}}d_{n,i}z_{i}^{(b)},\quad\forall n\in\mathcal{N},\] \[\sum_{f\in\mathcal{F}}z_{f}^{(c)}=1,\] \[\mathbf{z}^{(c)}\in\mathbb{R}^{F},\ \mathbf{z}^{(b)}\in\{0,1\}^{|\mathcal{I}|},\]
where the last constraint enforces the prosumers \(\mathcal{F}\) to keep at least 1 (normalized) unit of energy for serving the needs of their community.
The main _variables_ of \(\mathbb{P}_{5}^{t}\) are:
* \(z_{f}^{(c)}\in\mathbb{R}\): continuous variable deciding how much energy prosumer \(f\in\mathcal{F}\) will store (if positive) or purchase (if negative),12
Footnote 12: Negative transfers release capacity that can be used for stored-energy transfers.
* \(z_{i}^{(b)}\in\{0,1\}\): decides whether to deploy or not (\(z_{i}=0\)) the secondary battery \(i\in\mathcal{I}\),
and the main _parameters_:
* \(a_{n,f}\in\mathbb{R}_{+}\): energy transfer loss coefficient when prosumer \(f\in\mathcal{F}\) transfers energy to battery \(n\in\mathcal{N}\). Larger values indicate larger losses,
* \(c_{f}^{t}\in\mathbb{R}_{+}\): selling (or purchase, when \(z_{f}^{(c)}<0\)) price for the energy of prosumer \(f\in\mathcal{F}\), at time \(t\),
* \(b_{n}^{t}\in\mathbb{R}_{+}\): available capacity at primary battery \(n\in\mathcal{N}\) at time \(t\),
* \(d_{n,i}\in\mathbb{R}_{+}\): energy loss coefficient when transferring energy from primary battery \(n\in\mathcal{N}\) to secondary battery \(i\in\mathcal{I}\).
**Complexity**. \(\mathbb{P}_{5}^{t}\) is NP-hard as it generalizes the facility location problem [65].
**Datasets**. We created instances using a non-stationary formula for the time-varying vectors that exhibits temporal properties as follows:
\[\mathbf{c}^{t+1} =\mathbf{c}^{t}+a\sin(t/\texttt{period}_{1})+\mathbf{w}_{c},\] \[\mathbf{b}^{t+1} =\mathbf{b}^{t}+\mathbf{w}_{\mathbf{b}},\]
where the scalar \(a\) is randomly generated for each time series, and the random vectors \(\mathbf{w}_{c}\in\mathbb{R}^{|\mathcal{F}|}\) and \(\mathbf{w}_{b}\in\mathbb{R}^{|\mathcal{N}|}\).
### Caching
Our final problem is a standard data caching problem where we wish to store the most popular files in a cache of limited capacity, see [29]. In particular, we consider a content library of \(\mathcal{N}=\{1,2,\ldots,N\}\) files and a single cache with \(C\) bytes capacity. Each file \(n\in\mathcal{N}\) has popularity \(p_{n}^{t}\) at time \(t\), which captures the expected requests for file \(n\) at that time, and a size of \(q_{n}\) bytes. The goal is to store, at each time, those files that will be requested by the largest number of content viewers, i.e., to maximize the cache hits. This can be formalized with the following _binary problem_:
\[\mathbb{P}_{6}^{t}:\underset{\mathbf{z}}{\text{maximize}} \sum_{n=1}^{N}p_{n}^{t}x_{n}\] subject to \[\sum_{n=1}^{N}x_{n}q_{n}\leq C\] \[x_{n}\in\{0,1\},\,\,\,\forall n\in\mathcal{N}.\]
The _variables_ of \(\mathbb{P}_{6}^{t}\) are:
* \(x_{n}\in\{0,1\}\): binary variables that select a file to be cached or not,
and the main _parameters_:
* \(p_{n}^{t}\in\mathbb{R}_{+}\): normalized file popularity parameters at time \(t\),
* \(q_{n}\in\mathbb{R}_{+}\): size of file \(n\in\mathcal{N}\).
**Complexity**. \(\mathbb{P}_{6}^{t}\) is a standard Knapsack problem, which is NP-complete but can be solved in pseudo-polynomial time through dynamic programming, as sketched below.
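The recursion is the textbook 0/1 knapsack dynamic program, written here assuming integer file sizes (the toy numbers are illustrative only):

```python
def knapsack_max_hits(popularity, sizes, capacity):
    """Pseudo-polynomial DP for P6: maximize sum p_n x_n s.t. sum q_n x_n <= C.
    Runs in O(N * C) time for integer sizes."""
    dp = [0.0] * (capacity + 1)
    for p, q in zip(popularity, sizes):
        # Traverse capacities in reverse so each file is cached at most once.
        for cap in range(capacity, q - 1, -1):
            dp[cap] = max(dp[cap], dp[cap - q] + p)
    return dp[capacity]

# Toy usage: files with popularities 0.5, 0.4, 0.3 and sizes 3, 3, 2; C = 5.
print(knapsack_max_hits([0.5, 0.4, 0.3], [3, 3, 2], 5))  # -> 0.8
```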
**Datasets**. The file sizes were created randomly from a uniform distribution \(\mathcal{U}(1,10)\), while for the file popularity we used the standard dataset for request traces from Movielens [24], which we sliced in order to create the different time series.
## Appendix H Experimental Details
### Benchmarks
In our experiments we use as benchmark the N-div model [52] as it is the most related approach to our method MIPnet.
N-div uses a GCN based on the bipartite graph representation of the MILP, similar to our approach, while the output of the GCN is mapped to Bernoulli parameters using an MLP. The model is trained by minimizing the NLL (cross-entropy) modified based on the SelectiveNet approach [32]. The idea is to learn two sets of binary variables: one set indicates if a variable is going to be selected and the other its binary value (to be used if selected). Therefore, the overall loss can be seen as a weighted cross-entropy. Further, this loss assumes a predefined target selection percentage and therefore a different model is required for different variable selection percentages.
We implemented N-div by keeping the common structure of our model and changing the final layers and the loss. Further, we used the same set of features (see Appendix C) as in our method for a fair comparison. Note that in [52] it is not clearly mentioned what features are used for the N-div method. The authors only mention a set of solver-based features that can be used in their neural-branching method but are not applicable for N-div since no such features are available during inference.
For the method from [11], we use the implementation available online at github.com/bstellato/mlopt. This strategy learns a mapping from problem parameters to feasible variable assignments. First, the input data is categorized into strategies, i.e., feasible variable assignments. Then, the mapping is learned as a multi-class classification problem. Notice that this approach does not generalize when a class is not in the training dataset, i.e., when at test time all feasible assignments seen during training are not feasible for the new problem instance. In order to scale to large datasets, we modified their implementation so that it accepts sparse training data. Moreover, we made sure that their method receives the same train/validation/test split as our method.
### Hyperparameters
For all datasets we used the Adam optimizer for the gradient updates with \(10^{-5}\) weight-decay and clipped the gradient norm to 10. The learning rate was warmed-up linearly from \(1\times 10^{-4}\) to \(1\times 10^{-2}\) for the first \(500\) steps after which a cosine decay follows for the remaining time steps with a decay rate of 0.99.
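A possible implementation of this learning-rate schedule is sketched below. The floor value after the decay and the exact way the quoted 0.99 decay rate enters are our assumptions, since the text does not pin the parametrization down:

```python
import math

def lr_schedule(step, total_steps, warmup_steps=500,
                lr_start=1e-4, lr_peak=1e-2, lr_end=1e-4):
    """Linear warm-up from lr_start to lr_peak over warmup_steps,
    then cosine decay from lr_peak down to lr_end."""
    if step < warmup_steps:
        return lr_start + (lr_peak - lr_start) * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return lr_end + 0.5 * (lr_peak - lr_end) * (1.0 + math.cos(math.pi * progress))
```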
For each dataset we performed a hyperparameter optimization on the model and loss parameters. The batch size and the number of training steps were tuned per dataset with values in \(\{8,12,16,32\}\) and \(\{8000,15000,20000\}\), respectively. All the loss hyperparameters were linearly warmed-up to an initial value and then linearly scaled up to a final target value reached at the last training step.
| Schedule | \(\lambda\) | \(\lambda_{\text{avg}}\) | \(\lambda_{\text{c}}\) |
| --- | --- | --- | --- |
| warm-up steps | {500, 1000, 1500, 2000} | {250, 500} | {1000, 1500} |
| warm-up initial value | {0.01, 0.1} | {0.01, 0.1} | 0.1 |
| warm-up final value | {0.1, 1} | {0.1, 1} | {1, 10} |
| final value | {1, 10, 50} | {0.1, 1, 10, 100} | {10, 100} |

Table 9: Hyperparameters of the loss components.
The loss scheduling hyperparameter ranges are provided in Table 9. The model hyperparameter ranges are provided in Table 10.13
Footnote 13: Note that the number of units in GCN increases due to skip connections.
### Computational Environment
For running our experiments we used p3.8xlarge AWS EC2 instances with 4 Tesla V100 GPUs, 36 CPUs, and 244 GB of memory, and m5.8xlarge instances with 32 CPUs and 128 GB of memory.
## Appendix I Additional Results
### Additional Datasets
In Table 11 we provide results for accuracy, infeasibility, and optimality gap for the two additional datasets routing-sm and revenue-max-sm, which are smaller versions of routing and revenue-max, respectively. Table 12 shows the accuracy, infeasibility, and optimality gap on these two additional datasets for the method from [11]. We notice that our method performs better than N-div and the method from [11] in almost all additional experiments. Notice that the method from [11] is able to find a feasible solution for routing-sm in more cases than MIPnet and N-div. However, this method is always infeasible in revenue-max-sm, whereas MIPnet and N-div always find a feasible solution.
### Unsupervised Loss
As we already showed in Section 4.1, training MIPnet only with the supervised loss (MIPnet-sup) leads to decreased performance for reasons we discussed throughout the paper. Here, we complete the picture by including the performance of MIPnet-unsup, i.e., MIPnet trained only with the unsupervised loss. Tables 13-15 show the accuracy, infeasibility and optimality gap, respectively. It is clear that the performance of MIPnet-unsup is significantly degraded compared to MIPnet in most datasets, with the sole exception of caching.
It is important to highlight some key observations based on these results. First, the accuracy of MIPnet-unsup in the energy-grid dataset is less than 50% for all values of \(\rho\). This does not lead to significant infeasibilities, since apparently the constraints of the problem are not that hard to satisfy; however, it causes a significant optimality gap. This is a case where labels make a big difference in the model training process and, in fact, the performance of MIPnet-sup is on par with MIPnet, i.e., most of the learning is done from the labels. On the other end, we observe that in the
| Parameter | MLP (features) | GCN | LSTM | MLP (binary map) | MLP (continuous map) |
| --- | --- | --- | --- | --- | --- |
| # of layers | 1 | {2, 3} | {1, 2, 3} | 1 | 1 |
| # of units | {8, 16} | {8, 16} | {16, 32} | \(2\times D_{i}^{(b)}\) | \(D_{i}^{(c)}\) |
| nonlinearity | {ReLU, No} | {ReLU, No} | No | ReLU | No |

Table 10: Hyperparameters of the model blocks.
| \(\rho\) | Method | Accuracy routing-sm | Accuracy revenue-max-sm | Infeasibility routing-sm | Infeasibility revenue-max-sm | Optimality gap routing-sm | Optimality gap revenue-max-sm |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 30% | MIPnet | **99.18 ± 0.17** | **100.00 ± 0.00** | **7.88 ± 0.63** | **0.00 ± 0.00** | **0.35 ± 0.01** | **0.12 ± 0.03** |
| 30% | N-div | 95.67 ± 0.48 | **100.00 ± 0.00** | 77.91 ± 9.73 | **0.00 ± 0.00** | **0.39 ± 0.04** | 0.16 ± 0.05 |
| 40% | MIPnet | **98.29 ± 0.22** | **100.00 ± 0.00** | **35.63 ± 21.02** | **0.00 ± 0.00** | 0.95 ± 0.37 | **0.07 ± 0.02** |
| 40% | N-div | 78.33 ± 25.14 | **100.00 ± 0.00** | 96.27 ± 4.32 | **0.00 ± 0.00** | **0.27 ± 0.02** | 0.11 ± 0.07 |
| 50% | MIPnet | **97.32 ± 0.40** | **100.00 ± 0.00** | **65.07 ± 29.85** | **0.00 ± 0.00** | **1.48 ± 0.89** | 0.05 ± 0.02 |
| 50% | N-div | 89.46 ± 0.93 | **100.00 ± 0.00** | 100.00 ± 0.00 | **0.00 ± 0.00** | - | **0.03 ± 0.03** |
| 60% | MIPnet | **95.19 ± 0.42** | **100.00 ± 0.00** | **100.00 ± 0.00** | **0.00 ± 0.00** | - | 0.03 ± 0.03 |
| 60% | N-div | 92.41 ± 1.26 | **100.00 ± 0.00** | **100.00 ± 0.00** | **0.00 ± 0.00** | - | **0.02 ± 0.01** |
| 70% | MIPnet | **92.79 ± 0.43** | **100.00 ± 0.00** | **100.00 ± 0.00** | **0.00 ± 0.00** | - | **0.03 ± 0.04** |
| 70% | N-div | 89.52 ± 25.55 | **100.00 ± 0.00** | **100.00 ± 0.00** | **0.00 ± 0.00** | - | **0.03 ± 0.01** |

Table 11: Accuracy, infeasibility and optimality gap (mean ± std) of MIPnet vs. N-div. Bold indicates best method.
caching dataset the model is able to learn the optimal values without labels. Arguably this is an easy problem to learn since all the models achieve optimal performance. In such cases the overhead of collecting (many) labels might not be necessary. All the other datasets fall in the case where both MIPnet-sup and MIPnet-unsup underperform compared to MIPnet and a combination of supervised and unsupervised loss gives the best performance.
### Missing Labels
Finally, we examine how missing labels affect the performance of a model trained with and without unsupervised loss. For this experiment we used the tsp dataset. Figure 5 illustrates the accuracy of the models for different percentages of missing labels. In the presence of missing labels the model that is trained with both supervised and unsupervised loss clearly outperforms the one without unsupervised loss in all cases.
## Appendix J Discussion of Related Work
Table 16 summarizes the key differences of MIPnet from the most-related models and approaches. In the evaluation section we have presented detailed comparisons of MIPnet with [52], [32] and [11], while here we focus on qualitative
| \(\rho\) (%) | routing | facility-loc | tsp | energy-grid | revenue-max | caching |
| --- | --- | --- | --- | --- | --- | --- |
| 30% | 95.46 ± 1.47 | 99.02 ± 1.12 | 96.45 ± 2.69 | 49.38 ± 1.36 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 40% | 94.13 ± 1.16 | 98.90 ± 1.25 | 95.83 ± 2.90 | 48.16 ± 0.75 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 50% | 92.60 ± 1.13 | 98.85 ± 1.26 | 95.28 ± 3.04 | 47.61 ± 0.77 | 99.99 ± 0.00 | 100.00 ± 0.00 |
| 60% | 91.24 ± 1.01 | 98.76 ± 1.21 | 94.76 ± 3.03 | 47.59 ± 0.86 | 99.99 ± 0.00 | 100.00 ± 0.00 |
| 70% | 89.49 ± 0.76 | 98.50 ± 1.02 | 94.32 ± 2.84 | 47.78 ± 0.79 | 99.99 ± 0.00 | 100.00 ± 0.00 |

Table 13: Accuracy (mean ± std) % of MIPnet-unsup.
| \(\rho\) (%) | routing | facility-loc | tsp | energy-grid | revenue-max | caching |
| --- | --- | --- | --- | --- | --- | --- |
| 30% | 100.00 ± 0.00 | 6.92 ± 0.56 | 33.43 ± 47.07 | 0.02 ± 0.03 | 0.00 ± 0.00 | 0.00 ± 0.00 |
| 40% | 100.00 ± 0.00 | 13.87 ± 10.09 | 41.94 ± 42.22 | 0.06 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 |
| 50% | 100.00 ± 0.00 | 29.89 ± 25.36 | 54.68 ± 39.66 | 0.11 ± 0.06 | 0.00 ± 0.00 | 0.00 ± 0.00 |
| 60% | 100.00 ± 0.00 | 47.28 ± 24.40 | 69.16 ± 40.00 | 0.15 ± 0.03 | 0.00 ± 0.00 | 0.00 ± 0.00 |
| 70% | 100.00 ± 0.00 | 72.68 ± 15.37 | 84.04 ± 21.92 | 0.17 ± 0.08 | 0.00 ± 0.00 | 0.00 ± 0.00 |

Table 15: Optimality gap (mean ± std) % of MIPnet-unsup.
| \(\rho\) (%) | routing | facility-loc | tsp | energy-grid | revenue-max | caching |
| --- | --- | --- | --- | --- | --- | --- |
| 30% | - | 8.83 ± 12.33 | 1.55 ± 1.27 | 8.59 ± 0.04 | 0.00 ± 0.00 | 0.00 ± 0.00 |
| 40% | - | 12.44 ± 17.31 | 3.11 ± 2.78 | 12.62 ± 0.36 | 0.00 ± 0.00 | 0.00 ± 0.00 |
| 50% | - | 15.08 ± 20.06 | 5.95 ± 5.38 | 17.16 ± 0.63 | 0.66 ± 0.01 | 0.00 ± 0.00 |
| 60% | - | 18.97 ± 24.36 | 11.06 ± 9.82 | 22.00 ± 0.95 | 1.23 ± 0.01 | 0.00 ± 0.00 |
| 70% | - | 32.20 ± 27.12 | 19.73 ± 17.64 | 27.41 ± 1.11 | 1.57 ± 0.07 | 0.00 ± 0.00 |

Table 12: Accuracy, infeasibility and optimality gap of [11].
differences. In detail, we see that MIPnet is the only model that uses semi-supervised learning. This is very important since obtaining labels requires, in most cases, solving large-scale NP-hard problems, while the unsupervised loss component can improve the performance of the model further, by allowing us to identify the effect (in terms of optimality gap and infeasibility) of mispredictions at a per-variable granularity. These benefits are evident in the experiments. We note that other works, such as [52] and [19], identify the importance of unsupervised learning as well, but this approach is essentially explored only in [39].
In particular, [39] proposes an unsupervised approach specifically for graph problems where a GNN is used to learn a distribution over the graph nodes, representing a solution. In order to produce an integral solution, the authors derandomize the continuous values using sequential decoding. The method comes with theoretical guarantees, giving good and feasible solutions with high probability. However, this approach is not applicable to general MIPs, while even for the targeted graph problems it is not trivial to include general constraints. Besides, this work does not benefit from the availability of labels, which, in certain operational environments are available.
On the other hand, in [52] the authors provide only a preliminary evaluation with training datasets that might include noisy labels (obtained from suboptimal solutions); while [19] performs an initial variable assignment and then uses neighborhood search to increase the training data without solving the problem exactly - which works under the assumption of locality, as the authors stress. On the contrary, MIPnet uses a _semi-supervised_ learning approach which, apart from allowing us to expand the training dataset, enables tuning the impact of each variable on the feasibility and objective value of the problem.
Another distinguishing feature of MIPnet is that, unlike all prior works, it accounts for the temporal structure across the different instances, which, as shown in the experiments, indeed enhances the model's performance. This aspect is crucial as, more often than not, the different instances that practitioners solve exhibit a temporal structure, namely problem parameters such as the commodity volumes, user requests, transportation costs or electricity prices, follow some type of diurnal pattern.
We also note that the related works in Table 16 can be separated to those that use GCN in order to automate feature embedding, and to those that do not follow this approach, as e.g., [10; 11]. GCNs seem to enhance the performance and provide the means to account for dependencies across the variables and, indirectly, among the constraints as well. Finally, MIPnet is the only work that uses a Bayesian approach and a tunable confidence for performing the assignment of variables, unlike, e.g., the threshold-based variable selection rule of [19]. We note also that [52] offers the option to select different percentage \(\rho\) of the binary variables, but this decision needs to be made in advance so as to train the model accordingly.
| | Use GCN | Vars Select Method | Temporal | Learning |
| --- | --- | --- | --- | --- |
| [52] | Yes | Bernoulli | No | Superv. |
| [11] | No | No | No | Superv. |
| [19] | Yes | Threshold-based | No | Superv. |
| [10] | No | No | No | Superv. |
| [31] | Yes | Threshold-based | No | Superv. |
| [39] | Yes | No | No | Unsuperv. |
| MIPnet | Yes | Bayesian (& tunable thresh.) | Yes | Superv. + Unsuperv. |

Table 16: Comparison of MIPnet with most related models.
Figure 5: Comparison of MIPnet with and without unsupervised loss for different missing label percentages.
2303.03073 | A neural network based model for multi-dimensional nonlinear Hawkes processes | This paper introduces the Neural Network for Nonlinear Hawkes processes (NNNH), a non-parametric method based on neural networks to fit nonlinear Hawkes processes. Our method is suitable for analyzing large datasets in which events exhibit both mutually-exciting and inhibitive patterns. The NNNH approach models the individual kernels and the base intensity of the nonlinear Hawkes process using feed forward neural networks and jointly calibrates the parameters of the networks by maximizing the log-likelihood function. We utilize Stochastic Gradient Descent to search for the optimal parameters and propose an unbiased estimator for the gradient, as well as an efficient computation method. We demonstrate the flexibility and accuracy of our method through numerical experiments on both simulated and real-world data, and compare it with state-of-the-art methods. Our results highlight the effectiveness of the NNNH method in accurately capturing the complexities of nonlinear Hawkes processes. | Sobin Joseph, Shashi Jain | 2023-03-06T12:31:19Z | http://arxiv.org/abs/2303.03073v1 | # A neural network based model for multi-dimensional nonlinear Hawkes processes
###### Abstract
This paper introduces the Neural Network for Nonlinear Hawkes processes (NNNH), a non-parametric method based on neural networks to fit nonlinear Hawkes processes. Our method is suitable for analyzing large datasets in which events exhibit both mutually-exciting and inhibitive patterns. The NNNH approach models the individual kernels and the base intensity of the nonlinear Hawkes process using feed forward neural networks and jointly calibrates the parameters of the networks by maximizing the log-likelihood function. We utilize Stochastic Gradient Descent to search for the optimal parameters and propose an unbiased estimator for the gradient, as well as an efficient computation method. We demonstrate the flexibility and accuracy of our method through numerical experiments on both simulated and real-world data, and compare it with state-of-the-art methods. Our results highlight the effectiveness of the NNNH method in accurately capturing the complexities of nonlinear Hawkes processes.
KEYWORDS: nonlinear Hawkes processes, Hawkes processes with inhibition, neural networks for Hawkes Process, online learning for Hawkes processes
## 1 Introduction
A.G. Hawkes introduced the Hawkes process, which is a type of multivariate point process used to model a stochastic intensity vector that depends linearly on past events Hawkes (1971). This approach has been widely applied in various fields, such as seismology Ogata (1999), Marsan and Lengline (2008), financial analysis Filimonov and Sornette (2012), Bacry et al. (2015), social interaction modeling Crane and Sornette (2008), Blundell et al. (2012), Zhou et al. (2013), criminology Mohler et al. (2011), genome analysis Reynaud-Bouret et al. (2010), Carstensen et al. (2010), and epidemiology Park et al. (2020), Chiang et al. (2022).
The Hawkes process in its original form is linear, i.e., the intensities depend on past events through a linear combination of kernel functions. The primary concern in modelling the Hawkes process is estimating these kernel functions. A common practice has been to assume a parametric form for the kernel function, such as the exponential and power-law decay kernels, and then use maximum likelihood estimation (Ozaki, 1979) to determine the optimal values of the parameters.
While easier to estimate, a parametric kernel function is often not expressive enough for real-world problems. A significant body of work is concerned with the non-parametric estimation of the kernel functions. Lewis and Mohler (2011) propose a non-parametric, Expectation-Maximization (EM) based method to estimate the Hawkes kernel and the non-constant base intensity function. Zhou et al. (2013) use the method of multipliers and a majorization-minimization approach to estimate the multivariate Hawkes kernels. Bacry and Muzy (2014) develop a non-parametric estimation
based on the Wiener-Hopf equations for estimating the Hawkes kernels. Achab et al. (2017) derive an integrated cumulants method for estimating the Hawkes kernel. Xu et al. (2016) proposed a sparse-group-lasso-based algorithm combined with likelihood estimation for estimating the Hawkes kernels. Joseph et al. (2022) use a feed-forward network to model the excitation kernels and fit the network parameters using a maximum likelihood approach. Except for Bacry and Muzy (2014), these non-parametric kernel estimation methods are generally restricted to the linear Hawkes processes.
Modelling the intensities as a linear combination of kernel functions imposes a non-negativity constraint on the kernel functions, which can be interpreted as excitation kernels. Non-negative kernel functions do not allow incorporating inhibitive effects while modelling the intensity process. Modelling inhibition, where the occurrence of an event reduces the intensity of future arrival, has drawn relatively less attention in the literature. However, inhibitory effects are prevalent in various domains. For example, in neuroscience, inhibitory kernel functions can represent the presence of a latency period before the successive activations of a neuron (Reynaud-Bouret et al., 2013).
This paper proposes a non-parametric estimation model for the non-linear Hawkes process. The non-linear Hawkes process allows the inclusion of both excitatory and inhibitory effects to model a broader range of phenomena. The stability condition of the nonlinear Hawkes process is explored in Bremaud and Massoulie (1996). Bonnet et al. (2021) use a negative exponential function to model inhibitive kernels for a univariate nonlinear Hawkes process and use maximum likelihood estimation to determine the optimal parameters. Bonnet et al. (2022) extend this approach to a multivariate nonlinear Hawkes process. Lemonnier and Vayatis (2014) develop the Markovian Estimation of Mutually Interacting Process (MEMIP) method, which utilizes weighted exponential functions to determine kernels for the nonlinear Hawkes process. To extend the MEMIP for large dimensional datasets, Lemonnier et al. (2016) introduce dimensionality reduction features. However, the above approaches require the kernel function to be smooth, which is a drawback. Wang et al. (2016) propose an algorithm that learns the nonlinear Hawkes kernel non-parametrically using isotonic regression and a single-index model. The Isotonic Hawkes Process, however, assumes only a continuous monotonic decreasing excitation kernel to capture the influence of the past arrivals.
In their work, Du et al. (2016) employ a recurrent neural network (RNN) to model event timings and markers simultaneously by leveraging historical data. Conversely, Mei and Eisner (2017) propose a novel continuous-time long short-term memory (LSTM) model to capture the self-modulating Hawkes processes, capable of accounting for the inhibiting and exciting effects of prior events on future occurrences. Moreover, this approach can accommodate negative background intensity values, which correspond to delayed response or inertia of some events. While RNN-based models can capture complex long-term dependencies, inferring causal relationships or extracting causal information from these models can be challenging.
We propose a neural-network-based non-parametric model for the nonlinear Hawkes process, where feed-forward networks are used to model the kernels and time-varying base intensity functions. The architecture of the neural network is chosen such that the likelihood function and its gradients with respect to the network parameters can be efficiently evaluated. As the likelihood function is non-convex with respect to the parameter space, we use Stochastic Gradient Descent (SGD) with Adam (Kingma and Ba, 2014) to obtain the optimal network parameters that maximize the log-likelihood. The method is an extension of the Shallow Neural Hawkes model proposed by Joseph et al. (2022), which was designed for linear Hawkes processes and only allows for the modelling of kernels with an excitation feature. We evaluate our model against state-of-the-art methods for non-linear Hawkes processes, using both simulated and real datasets.
The paper is structured as follows: Section 2 provides a definition of the non-linear Hawkes process and formulates the associated log-likelihood maximization problem. In Section 3, we introduce the proposed neural network model for the non-linear Hawkes process and discuss the parameter estimation procedure. Section 4 presents the results obtained by our method on synthetic data and its application to infer the order dynamics of bitcoin and ethereum market orders on the Binance exchange. Additionally, we apply our method to a high-dimensional neuron spike dataset. Finally, in Section 5, we summarize our findings and discuss the limitations of our method.
## 2 Preliminary Definitions
Definition of the Hawkes Process: We denote a D-dimensional counting process as \(N(T)\), and its associated discrete events as \(\mathcal{S}=(t_{n},d_{n})_{n\geq 1}\), where \(t_{n}\in[0,T)\) is the timestamp of the \(n\)th event and \(d_{n}\in(1,2,...,D)\) is the dimension in which the \(n\)th event has occurred. Let \(\{t_{1}^{d},\ldots,t_{k}^{d},\ldots,t_{N_{d}(T)}^{d}\}\) be the ordered arrivals for the dimensions \(d=1,\ldots,D\). Given \(t\geq 0\), the count of events in \([0,t)\) for the \(d\)th dimension will be
\[N_{d}(t)=\sum_{n\geq 1}\mathbb{1}_{t_{n}^{d}<t}.\]
The conditional intensity function for the counting process at \(t\), \(\lambda_{d}^{\ast}(t):\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}\), is given by,
\[\lambda_{d}^{\ast}(t)=\lim_{\Delta t\to 0}\frac{\mathbb{E}\left[N_{d}(t+\Delta t)-N_{d}(t)\mid\mathcal{H}_{t^{-}}\right]}{\Delta t} \tag{1}\]
where, \(\mathcal{H}_{t^{-}}\), denotes the history of the counting process upto \(t\).
For a \(D\)-dimensional Hawkes process the conditional intensity \(\lambda_{d}(t),\) for the \(d\)-th dimension is expressed as,
\[\lambda_{d}(t)=\mu_{d}(t)+\sum_{j=1}^{D}\sum_{\{\forall k|(t_{k}^{j}<t)\}} \phi_{dj}(t-t_{k}^{j}), \tag{2}\]
where \(\mu_{d}(t)\), the exogenous base intensity for the \(d\)-th dimension, does not depend upon the history of the past arrivals. \(\phi_{dj}(t-t_{k}^{j}),\)\(1\leq d,j\leq D,\) are called the excitation kernels that quantify the magnitude of excitation of the conditional intensity for the \(d\)-th dimension at time \(t\) due to the past arrival at \(t_{k}^{j},\)\(\{\forall k|t_{k}^{j}<t\}\) in the \(j\)th dimension. These kernel functions are positive and causal, i.e., their support is in \(\mathbb{R}^{+}\). Inferring a Hawkes process requires estimating the base intensity function \(\mu_{d}\) and its kernels functions \(\phi_{dj},\) either by assuming a parametric form for the kernels or in a non-parametric fashion. A more generic form of the Hawkes process conditional intensity \(\lambda_{d}^{\ast}(t),\) is given by,
\[\lambda_{d}^{\ast}(t)=\Psi_{d}(\lambda_{d}(t)),\ \forall d=1,...,D, \tag{3}\]
where \(\Psi_{d}:\mathbb{R}\rightarrow\mathbb{R}^{+}\).
If \(\Psi_{d}\) is an identity function, then Equation 3 is equivalent to the linear Hawkes process expressed in Equation 2. However, if \(\Psi_{d}\) is a non-linear function, then the process is called a Non-linear Hawkes Process. For the stability and stationarity of the nonlinear Hawkes process, \(\Psi_{d}\) should be Lipschitz continuous, as given by Theorem 7 of Bremaud and Massoulie (1996). The advantage of the nonlinear Hawkes process is that it allows the kernel output to take negative values to model inhibitory effects, as well as positive values to model excitatory effects, i.e., \(\phi_{dj}:\mathbb{R}^{+}\rightarrow\mathbb{R}\). Wang et al. (2016) use a sigmoid function \((1+e^{-x})^{-1}\) and a decreasing function \(1-(1+e^{-x})^{-1}\), while Bonnet et al. (2021) and Costa et al. (2020) use a \(\max\) function as \(\Psi\).
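To make Equation 3 concrete, the sketch below evaluates the conditional intensity of one dimension from the event history, using \(\Psi(x)=\max(x,0)\) as the link; the function names and the toy inhibitive kernel are our own choices:

```python
import numpy as np

def intensity(t, events, mu, phi, Psi=lambda x: max(x, 0.0)):
    """Nonlinear Hawkes intensity of Eq. (3) for one dimension d.

    events: list over dimensions j of arrays of event times; mu: base
    intensity value mu_d(t); phi[j]: kernel function phi_{dj}."""
    s = mu
    for j, times in enumerate(events):
        past = times[times < t]          # only arrivals strictly before t
        s += phi[j](t - past).sum()
    return Psi(s)

# Toy usage: one dimension with an inhibitive (negative) exponential kernel.
events = [np.array([0.5, 1.2, 2.0])]
phi = [lambda u: -0.8 * np.exp(-1.5 * u)]
print(intensity(2.5, events, mu=1.0, phi=phi))
```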
The Log-Likelihood Function of the non-linear Hawkes Process: The methods used for estimating Hawkes processes, as discussed in Cartea et al. (2021), can be broadly classified into three groups:
* _Maximum likelihood estimation (MLE)_ It is the most commonly used approach for estimating the kernels of the Hawkes process, as employed by many methods such as Bonnet et al. (2021), Du et al. (2016), and Lemonnier et al. (2016). However, MLE methods have high computational costs as their time complexity increases quadratically with the number of arrivals. Expectation maximization methods, such as those used by Lewis and Mohler (2011) and Zhou et al. (2013), also belong to this category.
* _Method of moments_ Wang et al. (2016) and Achab et al. (2017) use a moment matching method to estimate the Hawkes process. These methods typically rely on the spectral properties of the Hawkes process. The WH method (Bacry and Muzy, 2014), which is used as a benchmark model in numerical experiments, estimates the Hawkes process as a system of Wiener-Hopf equations and also falls under this category.
* _Least squares estimation (LSE)_ This method involves minimizing the L2 error of the integrated kernel. Cartea et al. (2021) propose an efficient stochastic gradient-based estimation method for linear multivariate Hawkes processes, which applies to large datasets and general kernels. However, the method is not suitable for nonlinear Hawkes processes.
In this paper the optimal kernels are estimated through the maximum likelihood approach.
For a \(D\)-multidimensional nonlinear Hawkes process with \(\mathbf{\mu}=[\mu_{d}(t)]_{D\times 1}\) and \(\mathbf{\Phi}=[\phi_{dj}(t)]_{D\times D}\) ; \(1\leq d,j\leq D\), let \(\mathbf{\theta}=[\theta_{1},\ldots\theta_{p}\ldots\theta_{P}]\) be the set of parameters used to model \(\mathbf{\mu}\) and \(\mathbf{\Phi}\). The parameters can be estimated by maximizing the log-likelihood function over the sampled events from the process. The log-likelihood (LL) function corresponding to the dataset \(\mathcal{S}\) for the nonlinear Hawkes process is given by (see for instance Rubin (1972); Daley and Vere-Jones (2007)),
\[\mathcal{L}(\mathbf{\theta},\mathcal{S})=\sum_{d=1}^{D}\left[\sum_{t_ {n},d_{n}\in\mathcal{S}}\left\{\log(\lambda_{d}^{*}(t_{n}))-\int_{t_{n-1}^{d }}^{t_{n}^{d}}\lambda_{d}^{*}(s)ds\right\}\mathbb{1}_{d_{n}=d}\right] \tag{4}\]
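A direct, if naive, way to evaluate this log-likelihood for one dimension is sketched below; approximating the compensator integral by the trapezoidal rule is our simplification (closed forms exist for specific kernels):

```python
import numpy as np

def log_likelihood_dim(event_times, lam, n_grid=64):
    """Contribution of one dimension d to Eq. (4).

    event_times: sorted arrivals t_1^d < ... < t_{N_d}^d; lam(t): the
    conditional intensity, assumed positive at the event times."""
    ll, prev = 0.0, 0.0
    for t in event_times:
        grid = np.linspace(prev, t, n_grid)
        vals = np.array([lam(s) for s in grid])
        ll += np.log(lam(t)) - np.trapz(vals, grid)
        prev = t
    return ll
```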
Depending on the parametric form of the kernel, \(\phi_{dj}(t)\), the LL function may or may not be concave. For instance, even for the exponential kernel, \(\phi_{dj}(t)=\alpha_{dj}\exp(-\beta_{dj}t)\), the LL function is not concave in the parameter space. We use the Stochastic Gradient Descent (SGD) method to search for a local optimum of the LL function in the parameter space. In order to estimate the parameters \(\mathbf{\theta}\) using the SGD we need an unbiased estimator of the gradient of the LL, as given by Equation 4, with respect to \(\theta\).
\[\nabla_{\theta_{p}}\left(\mathcal{L}(\mathbf{\theta},\mathcal{S})\right) = \nabla_{\theta_{p}}\left(\sum_{d=1}^{D}\left[\sum_{t_{n},d_{n}\in \mathcal{S}}\left\{\log(\lambda_{d}^{*}(t_{n}))-\int_{t_{n-1}^{d}}^{t_{n}^{d}} \lambda_{d}^{*}(s)ds\right\}\mathbb{1}_{d_{n}=d}\right]\right) \tag{5}\] \[= \sum_{d=1}^{D}\left[\sum_{t_{n},d_{n}\in\mathcal{S}}\nabla_{ \theta_{p}}\left(\left\{\log(\lambda_{d}^{*}(t_{n}))-\int_{t_{n-1}^{d}}^{t_{n} ^{d}}\lambda_{d}^{*}(s)ds\right\}\right)\mathbb{1}_{d_{n}=d}\right]\]
Following Equation 5, an unbiased estimator of the gradient of \(\mathcal{L}\) with respect to parameter \(\theta_{p}\) will be,
\[\nabla_{\theta_{p}}\widehat{\mathcal{L}}(\mathbf{\theta},t_{n}^{d}) :=\nabla_{\theta_{p}}\left(\log(\lambda_{d}^{*}(t_{n}^{d}))-\int_{t_{n-1}^{d}} ^{t_{n}^{d}}\left(\lambda_{d}^{*}(s)\right)ds\right), \tag{6}\]
where \(t_{n}^{d}\) is randomly sampled from \(\mathcal{S}\). The proposed model does not assume a specific parametric form for the kernels and the base intensities. The model and its estimation are discussed in Section 3.
## 3 Proposed Model
Hornik et al. (1989) show that multilayer feed-forward networks with as few as one hidden layer are capable of universal approximation in a very precise and satisfactory sense. We here propose a neural network-based system called the Neural Network for Non-Linear Hawkes (NNNH) to estimate the kernels and the base intensity of the nonlinear Hawkes process. Each kernel, \(\phi_{dj},\,1\leq d,j\leq D,\) and base intensity function \(\mu_{d}\), is separately modelled as a feed-forward network. However, the parameters of all the networks are jointly estimated by maximizing the log-likelihood of the training dataset.
In the NNNH, each kernel \(\phi_{dj}\), \(1\leq d,j\leq D\) is modelled as a feed-forward neural network \(\hat{\phi}_{dj}:\mathbb{R}^{+}\rightarrow\mathbb{R}\),
\[\hat{\phi}_{dj}(t)=A_{2}\circ\varphi\circ A_{1}, \tag{7}\]
where \(A_{1}\): \(\mathbb{R}\rightarrow\mathbb{R}^{p}\) and \(A_{2}\): \(\mathbb{R}^{p}\rightarrow\mathbb{R}\). More precisely,
\[A_{1}(x)=\mathbf{W}_{1}x+\mathbf{b_{1}}\ \ \text{for}\ x\in\mathbb{R},\ \mathbf{W}_{1}\in\mathbb{R}^{p\times 1}, \mathbf{b_{1}}\in\mathbb{R}^{p},\]
\[A_{2}(\mathbf{x})=\mathbf{W}_{2}\mathbf{x}+b_{2}\,\,\,\text{for}\,\,\mathbf{x} \in\mathbb{R}^{p},\,\,\mathbf{W}_{2}\in\mathbb{R}^{1\times p},b_{2}\in\mathbb{R},\]
and \(\varphi:\mathbb{R}^{j}\rightarrow\mathbb{R}^{j},j\in\mathbb{N}\) is the component-wise Rectified Linear Unit (ReLU) activation function given by:
\[\varphi(x_{1},\ldots,x_{j}):=\left(\max(x_{1},0),\ldots,\max(x_{j},0)\right).\]
We take \(\mathbf{W}_{1}=[a_{1}^{1},a_{1}^{2},....,a_{1}^{p}]^{\top}\), \(\mathbf{W}_{2}=[a_{2}^{1},a_{2}^{2},....,a_{2}^{p}]\) and \(\mathbf{b_{1}}=[b_{1}^{1},b_{1}^{2},....,b_{1}^{p}]^{\top}.\) Therefore, the kernel function can be written as:
\[\widehat{\phi}_{dj}(x)=b_{2}+\sum_{i=1}^{p}a_{2}^{i}\max\left(a_{1}^{i}x+b_{1 }^{i},0\right) \tag{8}\]
As the kernel, \(\widehat{\phi}_{dj},\) maps from positive real numbers to real numbers, it is versatile in its ability to model both inhibitory and excitatory effects.
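In code, Eq. (8) amounts to a single vectorized expression; the following numpy sketch (names are ours) evaluates the kernel network on an array of elapsed times:

```python
import numpy as np

def kernel_phi(x, W1, b1, W2, b2):
    """Eq. (8): phi_hat(x) = b2 + sum_i a2_i * max(a1_i * x + b1_i, 0).

    x: scalar or array of elapsed times; W1, b1, W2: length-p vectors."""
    x = np.atleast_1d(x)
    hidden = np.maximum(np.outer(x, W1) + b1, 0.0)   # shape (len(x), p)
    return hidden @ W2 + b2

# Toy usage with p = 3 hidden neurons and random parameters.
rng = np.random.default_rng(1)
p = 3
vals = kernel_phi(np.array([0.1, 0.5, 2.0]),
                  rng.standard_normal(p), rng.standard_normal(p),
                  rng.standard_normal(p), 0.0)
```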
With a choice of \(p\) neurons for the hidden layer, the dimension of the parameter space for the above network will be \(3p+1.\) For a \(D\)-dimensional nonlinear Hawkes process there will be \(D^{2}\) kernels and the total number of parameters to be estimated would be \((3p+1)D^{2}.\) In case the base intensity is not constant, we also model it as a feed-forward neural network, with an architecture similar to the ones used for the kernels. We then need to approximate in total \((3p+1)\left(D^{2}+D\right)\) parameters. If the base intensity is assumed to be a constant, the total number of parameters to be estimated would be \((3p+1)D^{2}+D.\) The Figure 2 is a schematic representation of the NNNH model, with each \(\phi_{dj},\) and \(\mu_{d}\) being a network (as expressed in Figure 1). In order to estimate \(\lambda_{d}^{*}(t),\) we need in addition to \(t,\) the history of arrivals until \(t,\) i.e., \(\mathcal{H}_{t}^{-}.\) The NNNH therefore estimates \(\lambda_{d}^{*}(t)\) as:
\[\widehat{\lambda}_{d}^{*}(t)=\max\left(\widehat{\mu}_{d}(t)+\sum_{j=1}^{D} \sum_{\{\forall k\mid(t_{k}^{j}<t)\}}\widehat{\phi}_{dj}(t-t_{k}^{j}),0\right). \tag{9}\]
The outer \(\max\) function ensures that the estimated \(\lambda_{d}^{*}(t)\) is positive and mimics the \(\Psi\) function in the non-linear Hawkes process, as given in Equation 3.
Figure 1: The architecture of the feed-forward neural network used for modeling kernels and base intensity functions of the nonlinear Hawkes process in NNNH. Here \(\varphi,\)\(f\) are the activation functions. In the NNNH we take \(\varphi\) as ReLU and \(f\) as the identity function.
This non-parametric method employs feed-forward neural networks that can approximate any continuous kernel function and base intensity function defined on a compact set to a desired level of accuracy (Leshno et al. (1993)). However, the challenge is to jointly estimate the parameters of the \(D^{2}+D\) neural networks used to model the base intensity and the kernel functions. To do this, the set of parameter values that maximize the log-likelihood over the observed dataset \(\mathcal{S}\) is found. Since the log-likelihood function is non-convex in the parameter space, batch stochastic gradient descent is used to estimate the parameter values that provide a local maximum of the log-likelihood function. The gradient of the log-likelihood function as given by Equation 6 with respect to each network parameter is computed when updating the parameter values in the batch stochastic gradient method.
### Estimating the parameters of the NNNH
The challenge for the NNNH model is, given the dataset \(\mathcal{S}\), to efficiently obtain the parameters of the networks that locally maximize the log-likelihood function. More precisely, let \(\boldsymbol{\theta}=[\theta_{1},\ldots\theta_{p}\ldots\theta_{P}]^{\top}\) denote the parameters of the network and let \(\boldsymbol{\Theta}\) denote the parameter space; then we want to find, for the given dataset \(\mathcal{S}\), the values of the parameters that maximize the log-likelihood function over the parameter space, i.e.,
\[\widehat{\boldsymbol{\theta}}=\operatorname*{arg\,max}_{\boldsymbol{\theta} \in\boldsymbol{\Theta}}\mathcal{L}\left(\boldsymbol{\theta},\mathcal{S}\right). \tag{10}\]
Gradient descent is an effective optimization technique when the log-likelihood function can be differentiated with respect to its parameters. This is because computing the first-order partial derivatives of all parameters has the same computational complexity as evaluating the log-likelihood function, making it a relatively efficient approach. With neural networks, stochastic gradient descent has been shown to be an even more effective and efficient optimization method. The NNNH parameter estimation algorithm utilizes Adam (Kingma and Ba (2014)), a stochastic gradient descent technique that adjusts learning rates adaptively and relies solely on first-order gradients. Equation 6 provides
Figure 2: NNNH model for multivariate nonlinear Hawkes process.
the necessary unbiased estimator of the log-likelihood function for the NNNH method to be used with the SGD. Algorithm 1 gives the pseudo-code for the NNNH parameter estimation using Adam.
```
Input: dataset \(\mathcal{S}\); learning rate \(\alpha=0.01\); decay rates \(\beta_{1}=0.9\), \(\beta_{2}=0.999\); small constant \(\epsilon=10^{-8}\)
Output: optimal parameter value \(\widehat{\mathbf{\theta}}\)
initialize parameter vector \(\mathbf{\theta}_{0}\)
\(m_{0}\gets 0\)   (initialize first moment vector)
\(v_{0}\gets 0\)   (initialize second moment vector)
\(i\gets 0\)   (initialize iteration step)
while stopping criterion not met do
    \(i\gets i+1\)
    randomly sample \((t_{i},d_{i})\in\mathcal{S}\)
    \(g_{i}\leftarrow\nabla_{\mathbf{\theta}}\widehat{\mathcal{L}}(\mathbf{\theta}_{i-1},t_{i}^{d})\)   (compute unbiased gradient of the likelihood function)
    \(m_{i}\leftarrow\beta_{1}m_{i-1}+(1-\beta_{1})g_{i}\)   (update biased first moment estimate)
    \(v_{i}\leftarrow\beta_{2}v_{i-1}+(1-\beta_{2})g_{i}^{2}\)   (update biased second moment estimate)
    \(\widehat{m}_{i}\leftarrow m_{i}/(1-\beta_{1}^{i})\)   (compute bias-corrected first moment estimate)
    \(\widehat{v}_{i}\leftarrow v_{i}/(1-\beta_{2}^{i})\)   (compute bias-corrected second moment estimate)
    \(\mathbf{\theta}_{i}\leftarrow\mathbf{\theta}_{i-1}-\alpha\,\widehat{m}_{i}/(\sqrt{\widehat{v}_{i}}+\epsilon)\)   (update parameters)
end while
```
**Algorithm 1** Estimation of parameters of the NNNH using Adam
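For readers who prefer executable code, the update step of Algorithm 1 can be transcribed into NumPy as below. Since Algorithm 1 maximizes the log-likelihood, the sketch applies the Adam step as gradient ascent (equivalently, descent on the negative log-likelihood); `grad_loglik` and `sample_event` are assumed user-supplied callables.

```python
import numpy as np

def adam_step(theta, m, v, i, grad, alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update of Algorithm 1; `grad` is an unbiased gradient of the log-likelihood."""
    m = beta1 * m + (1 - beta1) * grad          # biased first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # biased second moment estimate
    m_hat = m / (1 - beta1 ** i)                # bias-corrected first moment
    v_hat = v / (1 - beta2 ** i)                # bias-corrected second moment
    theta = theta + alpha * m_hat / (np.sqrt(v_hat) + eps)   # ascent on the log-likelihood
    return theta, m, v

# Outer loop sketch:
# theta, m, v, i = theta0, np.zeros_like(theta0), np.zeros_like(theta0), 0
# while not stopping_criterion():
#     i += 1
#     t_i, d_i = sample_event(dataset)
#     theta, m, v = adam_step(theta, m, v, i, grad_loglik(theta, t_i, d_i))
```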
The essential step in estimating the parameters of the NNNH model is computing the unbiased gradient of the likelihood function, i.e.,
\[\nabla_{\theta_{p}}\widehat{\mathcal{L}}(\mathbf{\theta},t_{n}^{d}):=\nabla_{ \theta_{p}}\left(\log(\widehat{\lambda}_{d}^{*}(t_{n}^{d}))-\int_{t_{n-1}^{d}} ^{t_{n}^{d}}\left(\widehat{\lambda}_{d}^{*}(s)\right)ds\right).\]
This in turn involves computing the following gradients,
\[\nabla_{\theta_{p}}\log(\widehat{\lambda}_{d}^{*}(t_{n}^{d}))\text{ and }\nabla_{\theta_{p}}\int_{t_{n-1}^{d}}^{t_{n}^{d}}\left(\widehat{\lambda}_{d}^{*} (s)\right)ds.\]
While it is possible to compute the former gradient efficiently, the challenge lies in accurately estimating the latter gradient. The former gradient calculation involves the following steps:
\[\nabla_{\theta_{p}}\log(\widehat{\lambda}_{d}^{*}(t_{n}^{d})) = \frac{1}{\widehat{\lambda}_{d}^{*}(t_{n}^{d})}\nabla_{\theta_{p}} \max\left(\widehat{\mu}_{d}(t_{n}^{d})+\sum_{j=1}^{D}\sum_{\{\forall k|(t_{k}^ {j}<t_{n}^{d})\}}\widehat{\phi}_{dj}(t_{n}^{d}-t_{k}^{j}),0\right),\] \[= \frac{1}{\widehat{\lambda}_{d}^{*}(t_{n}^{d})}\left(\nabla_{\theta_{ p}}\left(\widehat{\mu}_{d}(t_{n}^{d})\right)+\sum_{j=1}^{D}\sum_{\{\forall k|(t_{k}^ {j}<t_{n}^{d})\}}\nabla_{\theta_{p}}\left(\widehat{\phi}_{dj}(t_{n}^{d}-t_{k}^ {j})\right)\right)\mathbbm{1}_{\widehat{\lambda}_{d}(t_{n}^{d})>0}.\]
Here,
\[\widehat{\lambda}_{d}^{*}(t)=\max\left(\widehat{\lambda}_{d}(t),0\right).\]
Appendix A provides the expressions for the gradients of the neural networks \(\widehat{\mu}_{d}(t)\) and \(\widehat{\phi}_{dj}(t).\) In Section 3.2 we provide an approach for efficient estimation of \(\nabla_{\theta_{p}}\int_{t_{n-1}^{d}}^{t_{n}^{d}}\left(\widehat{\lambda}_{d}^ {*}(s)\right)ds.\)
### Gradient of the integrated Hawkes intensity function
The computation of the first order derivatives of,
\[\widehat{\Lambda}_{d}(t_{n}^{d}):=\int_{t_{n-1}^{d}}^{t_{n}^{d}}\widehat{\lambda }_{d}^{*}(s)ds,\]
with respect to all the parameters will have the same computational complexity as evaluating the function itself. Efficient computation of \(\widehat{\Lambda}_{d}(t_{n}^{d})\) and its gradients is crucial in the SGD method, since they must be computed many times. Gaussian Quadrature is a numerical technique that can be used to estimate the definite integral, but it is limited to well-behaved integrands that can be approximated by a certain degree of polynomial. Unfortunately, the non-linear Hawkes intensity function lacks smoothness, making the use of Gaussian Quadrature prone to substantial error. Moreover, Gaussian Quadrature entails computing the quadrature points and weights, which can be time-consuming and resource-intensive. The number of quadrature points required for accurate approximation also grows quickly with the polynomial degree, resulting in increased computational cost.
Our selection of network architecture can enable efficient evaluation of the integral \(\widehat{\Lambda}_{d}(t_{n}^{d})\) in the NNNH. The above integral can be expanded as:
\[\int_{t_{n-1}^{d}}^{t_{n}^{d}}\widehat{\lambda}_{d}^{*}(s)ds = \int_{t_{n-1}^{d}}^{t_{n}^{d}}\max\left(\widehat{\lambda}_{d}(s), 0\right)ds\] \[= \int_{t_{n-1}^{d}}^{t_{n}^{d}}\max\left(\left[\widehat{\mu}_{d}(s )+\sum_{j=1}^{D}\sum_{\{t_{k}^{j}<s\}}\widehat{\phi}_{dj}(s-t_{k}^{j})\right], 0\right)ds\] \[= \int_{t_{n-1}^{d}}^{t_{n}^{d}}\max\left(\left[\widehat{\mu}+\sum_ {j=1}^{D}\sum_{\{t_{k}^{j}<s\}}\sum_{i=1}^{p}a_{2}^{i}\max\left(a_{1}^{i}(s-t_{ k}^{j})+b_{1}^{i},0\right)\right],0\right)ds,\]
with \(\widehat{\phi}_{dj}(s)\) substituted using Equation 8 and \(\widehat{\mu}_{d}(s)\) set as a constant for simplicity in the third line. In addition, we set the bias \(b_{2}\) to zero for ease of exposition. Therefore, the above integral takes the following form:
\[\int_{t_{n-1}^{d}}^{t_{n}^{d}}\widehat{\lambda}_{d}^{*}(s)ds=\int_{t_{n-1}^{d }}^{t_{n}^{d}}\max\left(\left[\widehat{\mu}+\sum_{\{k,i,j\}}a_{2}^{i}\max\left( a_{1}^{i}(s-t_{k}^{j})+b_{1}^{i},0\right)\right],0\right)ds \tag{11}\]
Let \(\mathbf{X}\equiv\{x_{1},\ldots,x_{P^{*}}\}\) be the sequence of _zero-crossings_ obtained for all the \(P^{*}\) neurons, where the zero-crossing for the \(i\)th neuron is obtained by solving:
\[a_{1}^{i}(x_{i}-t_{k}^{j})+b_{1}^{i} = 0\] \[x_{i} = t_{k}^{j}-\frac{b_{1}^{i}}{a_{1}^{i}}.\]
\(P^{*}\) represents the total number of neurons used in the neural networks for modeling the kernels, \(\widehat{\phi}_{dj}\) with \(1\leq d,j\leq D,\) of a \(D\)-dimensional nonlinear Hawkes process. For example, if each kernel in a \(D\)-dimensional nonlinear Hawkes process is modelled using a network with \(P\) neurons, then \(P^{*}=PD^{2}\).
We re-index the sequence \(\mathbf{X}\) in monotone increasing order as \(\mathbf{S}\equiv\{s_{1}\leq s_{2}\leq\cdots\leq s_{P^{*}}\}.\) Let \(\mathbf{S}^{*}\equiv\{s_{l}\leq\cdots\leq s_{u}\},\) where \(t_{n-1}^{d}\leq s_{l}\leq\cdots\leq s_{u}\leq t_{n}^{d},\) and \(1\leq l\leq u\leq P^{*},\) be the largest subsequence of the sorted sequence \(\mathbf{S}.\) Therefore, \(\mathbf{S}^{*}\) is the set of all zero-crossings that lie in the range \([t_{n-1}^{d},t_{n}^{d}].\)
We then write,
\[\int_{t_{n-1}^{d}}^{t_{n}^{d}}\widehat{\lambda}_{d}^{*}(s)ds=\int_{t_{n-1}^{d}}^{s _{l}}\max\left(\widehat{\lambda}_{d}(s),0\right)ds+\ldots\int_{s_{q-1}}^{s_{q}} \max\left(\widehat{\lambda}_{d}(s),0\right)ds\ldots+\int_{s_{u}}^{t_{n}^{d}} \max\left(\widehat{\lambda}_{d}(s),0\right)ds,\]
and exploit the fact that \(\widehat{\lambda}_{d}(s)\) is linear in \(s\) on each of the sub-intervals \([t_{n-1}^{d},s_{l}],\ldots,[s_{u},t_{n}^{d}]\). If \(\widehat{\lambda}_{d}(s)\) is linear in \(s\) on the interval \([s_{q-1},s_{q}]\), then the function \(\max(\widehat{\lambda}_{d}(s),0)\) has at most one zero-crossing within this interval. If \(s_{q}^{\text{out}}\) is the zero-crossing of \(\max(\widehat{\lambda}_{d}(s),0)\) and \(s_{q-1}\leq s_{q}^{\text{out}}\leq s_{q}\), we can write:
\[\int_{s_{q-1}}^{s_{q}}\max\left(\widehat{\lambda}_{d}(s),0\right)ds=\left( \int_{s_{q-1}}^{s_{q}^{\text{out}}}\widehat{\lambda}_{d}(s)ds\right)1_{ \widehat{\lambda}_{d}(s_{q-1})>0}+\left(\int_{s_{q}^{\text{out}}}^{s_{q}} \widehat{\lambda}_{d}(s)ds\right)1_{\widehat{\lambda}_{d}(s_{q})>0}.\]
In case \(s_{q}^{\text{out}}\notin[s_{q-1},s_{q}]\) the above can be integrated as:
\[\int_{s_{q-1}}^{s_{q}}\max\left(\widehat{\lambda}_{d}(s),0\right)ds=\left( \int_{s_{q-1}}^{s_{q}}\widehat{\lambda}_{d}(s)ds\right)1_{\widehat{\lambda}_{ d}(s_{q-1})>0}.\]
Therefore, we evaluate \(\widehat{\Lambda}_{d}(t_{n}^{d})\) by splitting the integral into intervals within which \(\max(\widehat{\lambda}_{d}(s),0)\) is linear in \(s\). The integral,
\[\int_{t_{n-1}^{d}}^{t_{n}^{d}}\widehat{\lambda}_{d}^{*}(s)ds\]
as well as its gradients with respect to the network parameters can then be exactly evaluated. The expression for the above integral in a sub-interval \([s_{q-1},s_{q}]\) where \(\widehat{\lambda}_{d}^{*}(s)\) is linear is provided in Appendix B.
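A minimal NumPy sketch of this construction follows: it collects the neuron zero-crossings that fall inside \([t_{n-1}^{d},t_{n}^{d}]\), and on each sub-interval where \(\widehat{\lambda}_{d}\) is linear it integrates \(\max(\widehat{\lambda}_{d}(s),0)\) in closed form, adding the interior root \(s_{q}^{\text{out}}\) where needed. The callable `lam_d` evaluating \(\widehat{\lambda}_{d}(s)\) is an assumed input.

```python
import numpy as np

def integrate_clipped_intensity(lam_d, t0, t1, crossings):
    """Exact integral of max(lam_d(s), 0) over [t0, t1], for lam_d piecewise
    linear with kinks only at the neuron zero-crossings in `crossings`."""
    crossings = np.asarray(crossings, dtype=float)
    inner = np.sort(crossings[(crossings > t0) & (crossings < t1)])
    knots = np.concatenate(([t0], inner, [t1]))
    total = 0.0
    for a, b in zip(knots[:-1], knots[1:]):
        fa, fb = lam_d(a), lam_d(b)
        if fa >= 0.0 and fb >= 0.0:          # segment fully positive: trapezoid is exact
            total += 0.5 * (fa + fb) * (b - a)
        elif fa < 0.0 and fb < 0.0:          # segment fully clipped to zero
            continue
        else:                                # one interior root s_out of the linear segment
            s_out = a + (b - a) * fa / (fa - fb)
            if fa > 0.0:
                total += 0.5 * fa * (s_out - a)    # positive triangle on [a, s_out]
            elif fb > 0.0:
                total += 0.5 * fb * (b - s_out)    # positive triangle on [s_out, b]
    return total
```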
## 4 Experiments and Results
In this section, we evaluate the effectiveness of the NNNH method on synthetic and real-world datasets1. We assess the performance of the NNNH method in estimating the conditional intensity function of a non-homogeneous Poisson Process (NHPP). We then showcase the adaptability of the NNNH method by utilizing one-dimensional non-linear Hawkes processes. Specifically, we explore the NNNH's capacity to estimate Hawkes processes with smooth, non-smooth, and negative kernels, and examine its robustness in handling different variants of non-linear Hawkes models. Additionally, we investigate the NNNH's ability to estimate kernels and base intensity functions for multidimensional non-linear Hawkes processes, where we evaluate a simultaneous combination of smooth, non-smooth, and inhibitive kernels.
Footnote 1: The source code and the dataset used in the experiments described here are available at [https://github.com/sobin-joseph/NNNH](https://github.com/sobin-joseph/NNNH)
We investigate the practical applications of non-linear Hawkes models through two case studies. In the first case, we apply the NNNH method to analyze tick-by-tick cryptocurrency trading data on the Binance exchange, seeking to identify any causal relationships between the buy and sell market orders of the BTC/USD and the ETH/USD pairs. For the second case, we utilize the NNNH to analyze neuronal spike trains recorded from the motor cortex of a monkey, revealing the interdependence between individual neurons and how they function in tandem.
Prior to delving into the outcomes of our numerical experiments, we provide a brief overview of the data preprocessing steps and the hyper-parameters selected for fitting the NNNH model to a dataset. Specifically, we discuss the scaling approach adopted for the dataset, the number of neurons chosen for the network, the initial learning rates employed for the Adam optimizer, and the stopping criterion for the optimizer.
### Data preprocessing and choice of hyper-parameters:
For all our experiments, we divide the dataset into training, validation, and test sets. The dataset is partitioned in a 60:20:20 ratio for analysis. Due to the dependence on history in the Hawkes process, we do not alter the chronology of the events. Before splitting the dataset, we first scale the timestamps. Scaling the dataset is a critical step in the preprocessing stage of building neural networks, as it can enhance their performance, stability, and convergence. Additionally, scaling helps to ensure that all input features have a similar range of variance, which can lead to improved initialization of the network parameters and overall performance during training. Given a dataset \(\mathcal{S}=(t_{n},d_{n}),\) where \(t_{n}\in[0,T),\) let \(\{t_{1}^{d},\ldots,t_{k}^{d},\ldots,t_{N_{d}(T)}^{d}\},\) be the ordered arrivals for the dimensions \(d=1,\ldots,D.\) We define \(T_{\max}\) as
\[T_{\max}=\max\left(t_{N_{1}(T)}^{1},\ldots,t_{N_{d}(T)}^{d},\ldots,t_{N_{D}(T)} ^{D}\right),\]
and \(N(T_{\max})\) as:
\[N(T_{\max})=\sum_{d=1}^{D}N_{d}(T_{\max}).\]
We scale the original timestamps as:
\[\widehat{t}_{n}^{d}=t_{n}^{d}\frac{N(T_{\max})}{T_{\max}}.\]
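A sketch of this rescaling is given below; the list-of-arrays layout for the event times is an illustrative assumption.

```python
import numpy as np

def scale_timestamps(arrival_times):
    """Rescale event times so that the average event rate is of order one.

    arrival_times: list of NumPy arrays, one per dimension, of raw event times.
    """
    t_max = max(arr.max() for arr in arrival_times)          # T_max
    n_total = sum(len(arr) for arr in arrival_times)         # N(T_max)
    return [arr * n_total / t_max for arr in arrival_times]
```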
Initialization of network parameters and choice of batch size: In the NNNH method, as discussed in Section 3, we model each kernel and base intensity function as a feed-forward neural network. To model the base intensity functions, \(\widehat{\mu}_{d}(t),\) we use a feed-forward network with fifty neurons in the hidden layer. The following initialization is used for the parameters of the network,
\[a_{1}^{i} \sim \mathcal{U}(-10^{-3},10^{-3})\] \[a_{2}^{i} \sim \mathcal{U}(0,0.2)\] \[b_{1}^{i} \sim \mathcal{U}(-1,1),\]
where \(\mathcal{U}(a,b),\) denotes uniform distribution between \(a\) and \(b\).
For the kernels, \(\widehat{\phi}_{dj}(t),\)\(1\leq d,j\leq D,\) of the nonlinear Hawkes process we use feed-forward networks with thirty-two neurons in the hidden layer and the following scheme for initialising the network parameters,

\[a_{1}^{i} \sim \mathcal{U}(-0.3,0)\] \[a_{2}^{i} \sim \mathcal{U}(0,0.2)\] \[b_{1}^{i} \sim \mathcal{U}(0,0.3).\]
If we use a constant baseline intensity, we initialize \(\widehat{\mu}=1\).
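The initializations above can be sketched as follows; the dictionary layout and the seed are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_base_intensity_net(p=50):
    """Initialization for the base-intensity networks (hidden layer of fifty neurons)."""
    return dict(a1=rng.uniform(-1e-3, 1e-3, p),
                a2=rng.uniform(0.0, 0.2, p),
                b1=rng.uniform(-1.0, 1.0, p),
                b2=0.0)

def init_kernel_net(p=32):
    """Initialization for the kernel networks phi_{dj} (thirty-two hidden neurons)."""
    return dict(a1=rng.uniform(-0.3, 0.0, p),   # negative slopes give decaying kernels
                a2=rng.uniform(0.0, 0.2, p),
                b1=rng.uniform(0.0, 0.3, p),
                b2=0.0)
```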
We use a batch size of one hundred for calculating the stochastic gradient. We use different learning rates for updating the parameters in the hidden layer and the output layer, as we observe faster convergence with this choice. The following learning rates were used for all the experiments: for fitting the networks used to model the base intensity function, we used a learning rate of \(10^{-3}\) for updating the network parameters in the output layer and \(10^{-6}\) for updating the parameters in the hidden layer. For the networks used to model the kernel functions, we used corresponding learning rates of \(10^{-2}\) and \(10^{-3}\) for the output and hidden layers, respectively.
Stopping criteria: In this study, for all the experiments, we utilize a stopping criterion based on the negative log-likelihood value computed on the validation dataset. The negative log-likelihood values are computed at the end of each iteration of the batch stochastic gradient descent algorithm. The NNNH parameter estimation algorithm stops when the updated parameters fail to improve the best
recorded validation error for a specified number of iterations, following the approach proposed by Goodfellow et al. (2016). This early stopping criterion helps prevent overfitting, as demonstrated in Figure 3, where we observe a reduction in the training negative log-likelihood value while the validation negative log-likelihood value starts increasing after a certain number of iterations, indicating that the model is starting to overfit the training data.
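A sketch of this patience-based early-stopping loop is shown below; `step_fn` (one batch-SGD update) and `val_nll_fn` (validation negative log-likelihood) are assumed callables, and the patience of ten iterations follows the setting reported in Section 4.6.1.

```python
def train_with_early_stopping(init_theta, step_fn, val_nll_fn, patience=10, max_iters=100_000):
    """Stop when the validation NLL has not improved for `patience` iterations."""
    best_nll, best_theta, stale = float("inf"), init_theta, 0
    theta = init_theta
    for _ in range(max_iters):
        theta = step_fn(theta)          # one batch-SGD (Adam) update
        nll = val_nll_fn(theta)         # negative log-likelihood on the validation set
        if nll < best_nll:
            best_nll, best_theta, stale = nll, theta, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best_theta
```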
Through numerical experiments, Section 4.5 presents our investigation of the sensitivity of NNNH estimation to various hyper-parameter choices.
### Estimation of Non-Homogeneous Poisson Process Intensity
A Non-Homogeneous Poisson Process (NHPP) \(N(t),t\geq 0\) is a generalized counting process in which the rate at which events occur varies with time. The intensity function, denoted by \(\mu(t):\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}\), captures the varying rate of the events over time, which could be influenced by factors such as external events, seasonal patterns, or other underlying phenomena. To comprehend this counting process, estimating the intensity function is necessary. The intensity function of an NHPP can be estimated parametrically or non-parametrically. Parametric estimation assumes that the parametric form of the underlying intensity function is known. For instance, Rigdon and Basu (1989) estimate the parameters by assuming a power-law intensity function, while Lee et al. (1991) use a general exponential-polynomial-trigonometric intensity function. If one does not assume the parametric form of the intensity function, one resorts to non-parametric techniques to estimate the conditional intensity function. In Leemis (1991), a technique is proposed that utilizes a piecewise linear function to estimate the cumulative intensity of the NHPP, while Xiao and Dohi (2013) present another non-parametric approach that employs a wavelet-based estimation method to estimate the intensity function.
In this section, we show that the NNNH technique can be used to model the intensity function, \(\mu(t),\) of a non-homogeneous Poisson process. A single-layered feed-forward neural network can be used to model the intensity function as,
\[\hat{\mu}(t)=\max\left(b_{2}+\sum_{i=1}^{p}a_{2}^{i}\max(a_{1}^{i}t+b_{1}^{i},0 ),0\right), \tag{13}\]
Figure 3: Plot of Negative log-likelihood for train and validation datasets in one dimensional case, indicating the stopping criteria.
where, as explained in Section 3, \([a_{1}^{i},b_{1}^{i},a_{2}^{i},b_{2}],\,1\leq i\leq p,\) are the parameters of the network that are estimated by maximizing the log-likelihood value,
\[\mathcal{L}(\mathbf{\Theta})=\sum_{t_{n}\in\mathcal{S}}log(\widehat{\mu}(t_{n}))- \int_{0}^{T}\widehat{\mu}(s)ds, \tag{14}\]
for the training dataset \(\mathcal{S}\). We use the SGD with Adam, for adaptive learning rates, to find the optimal network parameters, as discussed in Section 3.1.
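A sketch of evaluating the negative of Equation 14 for a fitted intensity is shown below. The integral is approximated by a trapezoidal sum here; for the ReLU network of Equation 13 it could instead be computed exactly via the zero-crossing construction of Section 3.2. The vectorized callable `mu_hat` is an assumed input.

```python
import numpy as np

def nhpp_neg_loglik(mu_hat, event_times, T, n_grid=10_000):
    """Negative log-likelihood of an NHPP with fitted intensity mu_hat (Equation 14)."""
    grid = np.linspace(0.0, T, n_grid)
    vals = mu_hat(grid)
    integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))   # trapezoidal sum
    log_terms = np.log(np.maximum(mu_hat(np.asarray(event_times)), 1e-12))
    return integral - np.sum(log_terms)
```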
We consider an exponential, linear, polynomial, and a trigonometric function, as given in Table 1, to model the conditional intensity. The arrival times are simulated using the thinning algorithm of Lewis and Shedler (1979). Table 1 compares the true negative log-likelihood of the simulated NHPP with the negative log-likelihood values obtained using the fitted NNNH model.
We see that for all the cases considered, the estimated negative log-likelihood values are reasonably close to the true negative log-likelihood values. Figure 4 compares the NNNH estimates of the intensity function with the true intensity function. We see for all the cases considered the NNNH is able to recover the intensity function reasonably well.
### Univariate Hawkes Estimation
This section focuses on examining the effectiveness of the NNNH method in estimating one-dimensional non-linear Hawkes processes using the following criteria:
* Estimation of non-smooth kernels
* Estimation of negative kernels
* Estimation of non-linear Hawkes processes for different variations of \(\Psi\).
* Estimation of Hawkes processes with varying base intensity function.
We simulate the arrival times for the different variants of the Hawkes processes using _Ogata's thinning algorithm_ proposed in Ogata (1981); a minimal sketch of the thinning step is given after the list below. We compare the performance of our algorithm to the following state-of-the-art methods:
* WH: An algorithm proposed in Bacry and Muzy (2014), a non-parametric estimation method which solves a Wiener-Hopf system derived from the auto-covariance of the multivariate Hawkes processes.
* Bonnet: A maximum likelihood based estimation method for Hawkes processes with self excitation or inhibition as proposed in Bonnet et al. (2021).
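A minimal univariate sketch of the thinning step follows, simplified with a constant dominating rate `lam_bar`; Ogata's original scheme updates the bound adaptively after each event.

```python
import numpy as np

def ogata_thinning(intensity, T, lam_bar, seed=0):
    """Simulate a univariate point process on [0, T] by thinning.

    intensity(t, history): conditional intensity given the list of past events.
    lam_bar: constant upper bound on the intensity over the horizon.
    """
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_bar)      # candidate from a dominating Poisson process
        if t > T:
            break
        if rng.uniform() <= intensity(t, events) / lam_bar:
            events.append(t)                     # accept with probability lambda(t) / lam_bar
    return np.array(events)
```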
Estimation of non-smooth kernels: We first consider a linear Hawkes process with a non-smooth kernel, specifically a rectangular kernel of the form,
\[\text{Rectangular, }\phi(t)=\left\{\begin{array}{l}\alpha\beta,\text{if}\, \delta\leq t\leq\delta+\frac{1}{\beta},\\ 0,\text{otherwise}\end{array}\right.\]
\begin{table}
\begin{tabular}{|c|l|c|c|} \hline Function & Underlying equation & TNLL & NNLL \\ \hline \hline Exponential & \(\mu(t)=0.5e^{-0.001t}\) & 1177 & 1173 \\ \hline Linear & \(\mu(t)=0.5\) & 2528 & 2518 \\ \hline Parabola & \(\mu(t)=2\times 10^{-7}(1.5t-2000)\) & 1703 & 1699 \\ \hline Sin Curve & \(\mu(t)=0.4(\text{sin}(2\pi t\times 0.0004-1000)+1.1)\) & 3482 & 3538 \\ \hline \end{tabular}
\end{table}
Table 1: The NHPP functions and their corresponding True Negative Log-Likelihood(TNLL) and NNNH Negative Log-Likelihood(NNLL) obtained from the NNNH method
The smoothness assumption of the kernel is a prerequisite for some non-parametric methods used in estimating the kernels of Hawkes processes, such as the Markovian Estimation of Mutually Interacting Processes (MEMIP) proposed in Lemonnier and Vayatis (2014). It is therefore of interest to assess the performance of the NNNH method on a non-smooth kernel. We simulate the process using the following parameter values for the rectangular kernel,
\[\mu=0.05,\,\alpha=0.7,\,\beta=0.4,\,\delta=1.\]
Simulating the process until \(T=60000\) yields \(N(T)=10001\) arrivals. As a first step, we visually compare the estimated kernels obtained using the NNNH and the benchmark methods with the true kernel. Figure 5 illustrates the kernels fitted by the three estimation methods. As expected we see that the non-parametric models, i.e., the NNNH and the WH are able to better recover the ground truth. This is also reflected in the negative log-likelihood values obtained for WH, Bonnet, and NNNH methods, i.e., \(4228\), \(12838\), and \(2800\) respectively.
Figure 4: Comparison of the NNNH estimated intensity functions with the true intensity functions for the NHPPs listed in Table 1.
Estimation of negative kernels: We next consider a non-linear Hawkes process,
\[\lambda^{*}(t)=\max\left(\mu+\sum_{\tau<t_{n}}\phi(t-\tau),0\right),\]
where, \(\Psi\) in Equation 3 is a \(\max\) function, i.e. \(\Psi(\lambda(t))=\max(\lambda(t),0)\). The inhibitive kernel, \(\phi,\) is specified as an exponential function given by,
\[\phi(t)=-0.5e^{-2t}.\]
Given that non-parametric methods, such as the Expectation Maximization (EM) approach introduced in Lewis and Mohler (2011), are capable of estimating only positive kernels, it is of interest to evaluate the capacity of the NNNH method to estimate negative kernels.
We simulate the above process for \(T=14000\), with a base intensity \(\mu=0.9\). This resulted in \(N(T)=10001\) events. We first compare the kernels obtained using the NNNH, WH, and Bonnet methods with the ground truth in Figure 6. All three methods are able to recover the kernel, but visually the NNNH and Bonnet methods appear to provide better estimates. This conclusion is further supported by the estimated negative log-likelihood values, which are \(9833\), \(9559\), and \(9564\) for the WH, Bonnet, and NNNH methods, respectively.
Estimation of non-linear Hawkes processes for different variations of \(\Psi:\) Diverse variations of the non-linear Hawkes process can be derived by making distinct selections of the function \(\Psi\) in
Figure 5: Fitted Rectangular kernel: A comparison of the NNNH, WH, and Bonnet estimates with the actual (real) kernel.
Figure 6: Fitted negative exponential kernel: A comparison of the NNNH, WH, and Bonnet estimates with the actual kernel.
Equation 3. In contrast to the preceding example where \(\Psi\) was chosen as a \(\max\) function, in this instance, we adopt a sigmoid function for \(\Psi\) given by
\[\lambda^{*}(t)=\frac{1}{1+e^{-(\lambda(t)-2)}}.\]
We consider the following kernel,
\[\phi(t)=e^{-2t},\]
for this example.
The nonlinear Hawkes process was simulated for a duration of \(T=10000\), resulting in \(N(T)=3028\) events. According to Equation 9, the NNNH model converts a non-linear Hawkes process with any \(\Psi\) into a non-linear Hawkes process with \(\Psi\) as a \(\max\) function. As a result, the recovered kernels from the NNNH method might not correspond to the kernels of the original process. However, we can compare the actual simulated intensity process with the intensities recovered from the NNNH method and the WH method. Figure 7 demonstrates that the recovered intensities match the simulated intensities.
The accuracy of the NNNH quantile estimates is evident from the QQ plots obtained from the fitted WH and NNNH methods on the test dataset, as shown in Figure 8. The negative log-likelihood values for the NNNH and WH are \(2954\) and \(3813\), respectively.
Figure 8: QQ plot comparing estimated and true intensities for sigmoid nonlinear Hawkes process using WH and NNNH
Figure 7: Comparison of the real and estimated intensity functions by WH and NNNH for the nonlinear Hawkes process with a sigmoid \(\Psi\) function.
Estimation of Hawkes processes with varying base intensity function: To complete the univariate tests of the NNNH, we examine a Hawkes process with a base intensity that varies over time. We adopt a trigonometric sin function for the time-varying base intensity \(\mu(t)\), which is specified in Table 1. The associated kernel for the Hawkes process is expressed as:
\[\phi(t)=e^{-2t}.\]
We simulate the process until \(T=3000,\) which results in \(N(T)=2482\) events. The comparison between the recovered base intensity function and the excitation kernel using the NNNH method with the true values is depicted in Figure 9. The results demonstrate that the NNNH approach can accurately estimate both the base intensity function and the kernel concurrently.
### Multivariate Hawkes Estimation
In this section, we evaluate the NNNH approach for estimating multivariate non-linear Hawkes processes using a simulated dataset. The multivariate non-linear Hawkes process is simulated using the _Ogata's thinning algorithm_. The parameters of the neural networks utilized to model the kernels in the NNNH method are optimized by maximizing the log-likelihood through the stochastic gradient descent method, as described in Section 3.1. We utilize the WH method (Bacry and Muzy, 2014) and the Bonnet Multivariate method (Bonnet et al., 2022) as our comparative models. While the WH method is non-parametric, the Bonnet Multivariate approach assumes a parametric structure for the kernels in the Hawkes process. Both reference models are capable of estimating kernels that exhibit inhibitory effects.
We consider two examples for the simulated case. We first consider a multivariate Hawkes process with both positive and negative exponential kernels. Specifically, the parameters chosen for the model are:
\[\alpha=\begin{bmatrix}-0.9&3\\ 1.2&1.5\end{bmatrix},\]
\[\beta=\begin{bmatrix}5&5\\ 8&8\end{bmatrix},\]
and a constant base intensity
\[\mu=\begin{bmatrix}0.5\\ 1.0\end{bmatrix}.\]
The kernels of the process are defined as: \(\phi_{dj}(t)=\alpha_{dj}e^{-\beta_{dj}t},\)\(1\leq d,j\leq 2.\)
We simulate the process until \(T=1000,\) which results in \(N(T)=1002\) events. Figure 10 plots the true kernels and the kernels recovered using the NNNH, the WH, and the Bonnet Multivariate
Figure 9: Estimated kernel and base intensity by applying NNNH to a one-dimensional Hawkes process with sin base intensity and an exponential kernel
methods. Through a visual examination, it can be observed that the Bonnet Multivariate approach closely approximates the actual kernels. This outcome is in line with expectations as the simulated data employed a parametric form of the Hawkes process that is specifically suitable for the Bonnet Multivariate model. Regarding the non-parametric models, the WH estimates exhibit greater variability compared to the NNNH estimates. The respective negative log-likelihoods for the NNNH, the WH, and the Bonnet Multivariate methods are \(1480\), \(1967\), and \(1460\).
Subsequently, we examine a bivariate non-linear Hawkes process that encompasses a combination of diverse types of kernels. Precisely, we select the kernels as follows:
\[\text{exponential kernel}\,\phi_{11}(t)=0.3e^{-3t},\] \[\text{rectangular kernel}\,\phi_{12}(t)=\left\{\begin{array}{l}0.7\times 0.4,\text{if}\,1\leq t\leq 1+\frac{1}{0.4},\\ 0,\text{otherwise}\end{array}\right.,\] \[\text{negative exponential}\,\phi_{21}(t)=-0.2e^{-t},\] \[\text{exponential}\,\phi_{22}(t)=0.4e^{-2t},\]
Figure 10: The kernels fitted using the NNNH, WH, and Bonnet Multivariate methods, for a bivariate Hawkes process with both positive and negative kernels.
and use the following base intensity values:
\[\mu=\begin{bmatrix}0.1\\ 0.2\end{bmatrix}\]
We perform simulations up to time \(T=10000\), resulting in the occurrence of a total \(8489\) events in the two dimensions. Figure 11 presents the findings obtained from the NNNH approach in comparison with other techniques. Notably, the advantages of employing a non-parametric estimation method for Hawkes processes are evident in this context. Both the NNNH and the WH methods successfully capture all the kernels, whereas the Bonnet Multivariate approach does not accurately depict the rectangular kernel. It is worth mentioning that the estimated kernels obtained using the WH method appear to be more erratic than those of the NNNH. The corresponding negative log-likelihood values for the WH, the Bonnet Multivariate, and the NNNH are \(3051\), \(3062\), and \(2899\) respectively.
### Sensitivity Analysis
In the context of the NNNH approach, we have made deliberate choices concerning the number of neurons deployed in the network's hidden layer to emulate the kernels and the learning rate employed to regulate the parameter updates in the Adam algorithm. To gauge the sensitivity of the NNNH method, we have conducted a numerical investigation with regard to these choices. Specifically, we have examined the influence of variations in the number of neurons and learning rates on the negative log-likelihood value, while holding all other factors constant. Note that the likelihood values reported are for the validation dataset, which was not used in the training of the network.
The sensitivity analysis is conducted for the negative exponential kernel, which is discussed in Section 4.3. The study focuses on the variation in the number of neurons employed in the neural networks, and the corresponding impact on the negative log-likelihood values. The results are presented in Table 2.
Figure 11: The fitted kernels, using the NNNH, WH, and Bonnet Multivariate method, for a bivariate non-linear Hawkes process with a mix of exponential and rectangular kernels.
Table 3 presents the percentage variation in the negative log-likelihood values linked to the optimal parameters obtained by varying the learning rate for the output layer of the neural network. The learning rate for the hidden layer remains unchanged, and is taken as ten percent of the learning rate for the output layer.
We find that the optimal network parameters obtained using varying learning rates result in similar negative log-likelihood values, and we therefore use a learning rate of 0.01 for all our experiments.
### Real Data
#### 4.6.1 Financial Dataset
In this study, we evaluate the efficacy of the NNNH method on high-frequency order book data pertaining to two of the most frequently traded cryptocurrencies, namely Bitcoin and Ethereum. The data comprises buy and sell trade records for the BTC-USD (Bitcoin-US dollar) and ETH-USD (Ethereum-US dollar) pairs. We streamed order book data from the Binance exchange, as several popular cryptocurrencies are traded on this exchange and it has high trade volumes. We obtain the tick-by-tick arrival times for both buy and sell trades from the exchange. The market is composed of makers and takers, with makers generating buy or sell orders that are not carried out immediately, thereby creating liquidity for that cryptocurrency. In contrast, takers place market orders that are executed instantly, thereby taking away the liquidity.
The Binance exchange furnishes two streams of tick-by-tick data: trade arrival data and trade stream data. The trade arrival data comprises limit orders, which are orders placed with a specified price limit, and market orders, which are orders executed at the prevailing market price. Limit orders are executed when the best available market price reaches the set limit, while market orders are executed instantly at the current best limit order. The Binance trade-stream data provides the timestamp of these order arrivals, along with price and volume features, and a unique identifier for the buyer/seller. Since a single market order may necessitate multiple limit orders to fulfill the requested volume, several trades are recorded with a common identifier. Therefore, we cleaned the dataset by filtering out the data with common IDs and retaining only the unique trade events. Finally, based on whether the buyer was a market maker or taker, the trades were marked as either a buy or a sell market order.
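The deduplication and labelling step can be sketched with pandas; the column names below are illustrative stand-ins for the fields exposed by the Binance trade stream, not the exact API names.

```python
import pandas as pd

def clean_trades(raw):
    """Collapse partial fills sharing a trade ID and label buy/sell market orders.

    raw: DataFrame with columns ['timestamp', 'trade_id', 'is_buyer_maker'].
    """
    trades = raw.drop_duplicates(subset="trade_id", keep="first").copy()
    # if the buyer is the maker, the aggressing (market) order was a sell
    trades["side"] = trades["is_buyer_maker"].map({True: "sell", False: "buy"})
    return trades.sort_values("timestamp")
```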
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline No of Neurons & NLL & \(\%\) change & \(\%\) time change \\ \hline
2 & 2057.78 & -0.061 & +164.1 \\
4 & 2058.01 & -0.057 & +95.4 \\
8 & 2059.06 & -0.004 & +89.9 \\
16 & 2059.14 & -0.001 & +90.9 \\
32 & 2059.16 & 0.00 & 0.00 \\
64 & 2061.29 & +0.10 & +6.7 \\
128 & 2061.63 & +0.12 & +77.3 \\
256 & 2061.62 & +0.12 & +76.1 \\ \hline \end{tabular}
\end{table}
Table 2: Percentage change in negative log-likelihood (NLL) values and corresponding computational time for different numbers of neurons in the neural network
\begin{table}
\begin{tabular}{|c|c|c|} \hline Learning Rate & NLL & \(\%\) change \\ \hline
0.01 & 2064.54 & +0.00 \\
0.005 & 2064.19 & -0.02 \\
0.001 & 2065.49 & +0.05 \\
0.0005 & 2067.87 & +0.12 \\ \hline \end{tabular}
\end{table}
Table 3: Percentage change in negative log-likelihood (NLL) values for different learning rates.
Our analysis focuses exclusively on the trades conducted for the BTC-USD and ETH-USD pairs, which account for the predominant trading volumes in the cryptocurrency exchange. The arrival times of market orders for these two pairs can be modelled as a four-dimensional non-linear Hawkes process, i.e.,

* First dimension: intensity process for the sell market orders of the BTC-USD pair
* Second dimension: intensity process for the buy market orders of the BTC-USD pair
* Third dimension: intensity process for the sell market orders of the ETH-USD pair
* Fourth dimension: intensity process for the buy market orders of the ETH-USD pair.
The analysis was performed on the market order arrival times recorded on December 7, 2021, between 12:00 and 12:10 (UTC) on the Binance exchange. Table 4 summarises the data after classifying the trades as buy or sell market order events.
Our aim is to employ the NNNH method to jointly model the intensity process of buy and sell market orders for BTC-USD and ETH-USD currency pairs. This modelling approach enables us to gain insights into the causal relationships between the two pairs, such as whether the arrival of a buy BTC-USD order affects the arrival of a sell ETH-USD market order. Furthermore, identifying the functional form of the self and cross modulation due to the arrival of market orders is of interest. This highlights the necessity of non-parametric estimation techniques, as the true form of the modulation function remains unknown. To achieve our objective, we partition the dataset into train, validation,
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Market Order Type & BTC-USD & ETH-USD & Aggregate \\ \hline Sell & 4219 & 3480 & 7699 \\ Buy & 4374 & 2999 & 7373 \\ \hline Total & 8593 & 6479 & 15072 \\ \hline \end{tabular}
\end{table}
Table 4: Summary of the crypto-currency market orders recorded on December 7 2021 between 12:00 and 12:10 UTC
Figure 12: Histogram of the inter-event arrival times of BTC-USD sell trades
and test sets. We optimize the parameters of the model using stochastic gradients computed on the train set and use the validation set for applying the early stopping criteria. Specifically, if there is no improvement in negative log-likelihood values computed on the validation set for ten consecutive iterations, we stop training. Finally, we evaluate the goodness of fit using the negative log-likelihood reported on the test set.
We adopt the widely used non-parametric methods, namely the WH and the EM method (Lewis and Mohler, 2011), as reference models. In Figure 13, we compare the kernels obtained using the three methods. The corresponding negative log-likelihood values are \(9630\), \(6732\), and \(6400\) for the EM, WH, and NNNH methods, respectively. The estimated base intensities for BTC-USD sell, BTC-USD buy, ETH-USD sell, and ETH-USD buy, in the order of appearance, using the NNNH method are \(0.0028\), \(0.0032\), \(0.0021\), and \(0.0022\), respectively.
By examining Figure 13, we can draw the following observations for the market orders that arrived during the training window:
* Significant self-excitation is observed for both BTC-USD and ETH-USD buy and sell market orders.
* BTC-USD sell orders result in excitation of ETH-USD buy and sell orders.
* The self-excitation of ETH-USD buy and sell orders is higher compared to their respective BTC-USD orders.
* A certain level of inhibition is observed in BTC-USD buy orders due to sell BTC-USD orders.
Notably, all three methods effectively capture a lagged cross-excitation effect in ETH-USD buy orders caused by sell BTC-USD orders.
Figure 13: Fitted kernels for BTC-USD and ETH-USD sell-buy market orders estimated using the NNNH, the WH, and the EM methods.
Predictive capacity of the NNNH method: Accurately inferring the kernels is critical in developing an improved predictive model for the arrival process. For example, suppose a trading algorithm relies on predicting, with 90% confidence, the time by which the next market order will arrive. With an accurate prediction model, approximately 90% of the predictions are expected to be correct, and in roughly 10% of the cases the order will arrive after the predicted time. If the predictions fail in substantially fewer than 10% of the cases, they are overly conservative, indicating that the predicted time is set too far in the future. Conversely, if more than 10% of the predictions fail, the predicted time is closer than it should be. Therefore, a prediction model that produces predictions with Q% certainty should have Q% correct outcomes. The accuracy of such a prediction model can be evaluated using a QQ plot.
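A standard way to produce such QQ plots for point processes is via the time-rescaling theorem: under a correctly specified model, the compensator increments between consecutive events are i.i.d. unit-rate exponentials. A sketch under this assumption, with the fitted compensator `Lambda` as an assumed callable:

```python
import numpy as np

def rescaled_qq(Lambda, event_times):
    """Time-rescaling QQ data: Lambda(t_n) - Lambda(t_{n-1}) ~ Exp(1) under a correct model."""
    lam_vals = np.array([Lambda(t) for t in event_times])
    taus = np.diff(np.concatenate(([0.0], lam_vals)))          # rescaled inter-arrival times
    u = np.sort(taus)                                          # empirical quantiles
    p = (np.arange(1, len(u) + 1) - 0.5) / len(u)
    q = -np.log(1.0 - p)                                       # Exp(1) theoretical quantiles
    return q, u                                                # plot u against q
```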
Figure 14 presents the QQ plot for the BTC-USD and ETH-USD order arrivals based on the EM, WH, and NNNH models. The sample quantiles for the QQ plot are obtained from the test dataset. The results indicate that all three models make reasonably accurate predictions. Notably, the NNNH method appears to have more precise predictions, particularly for BTC-USD and ETH-USD sell orders, as evidenced by the visual assessment.
#### 4.6.2 Neuron Spike Train Dataset
In this study, we utilized the NNNH method to analyze a dataset of neuron spikes, which was obtained from an experiment conducted by Engelhard et al. (2013). The primary objective of the experiment was to examine the relationship between unit recordings from the motor cortex of monkeys and the position and grip strength of their hands as they utilized a joystick to manipulate a robotic arm. Consistent with the approach taken by Aljadeff et al. (2016), we analyzed the data obtained from the nine simultaneous electrode recordings. Although the dataset included additional information
Figure 14: QQ plot comparing order arrivals of BTC-USD and ETH-USD using EM, WH, and NNNH methods
regarding hand motion, such as cursor trajectory and grip force, our analysis was focused exclusively on the neuron spike train.
To provide a brief biological background about neuron spikes, we refer to Duval et al. (2022). Neurons employ an electrical wave of short duration, known as the action potential or spike, to reliably transmit signals from one end to the other (Castelfranco and Hartline, 2016). Action potentials are all-or-none events and are triggered when the neuron's membrane potential, which is the electrical potential difference between the inside and the outside of the neuron, is sufficiently large. Between two successive action potentials, the neuron adds up its inputs, causing changes in its membrane potential. When this potential reaches a sharp threshold, the neuron fires a spike that propagates (Luo et al., 2015).
The histogram in Figure 15 presents the inter-arrival times of spikes generated by a single neuron. Notably, the peak of the histogram is marginally displaced from the origin, in contrast to the histogram of a conventional exponential distribution. This observation suggests the presence of an inhibitory process. To model the neuron spike train, we employed a nine-dimensional non-linear Hawkes process. Each dimension of the process is associated with the intensity of neuron spike arrivals at an individual electrode. We use the NNNH method to estimate the kernel functions and the base intensities for this process.
Table 5 gives the number of spike events for each neuron used in the analysis.
Figure 16 displays the kernels that were fitted using the NNNH method to model the neuron spike train. Additionally, Table 6 reports the base intensity values for the arrival of neuron spikes at each electrode. The results indicate that a majority of the neurons exhibit a self-inhibitory characteristic. Specifically, after firing, neurons typically undergo a refractory period during which they cannot generate additional spikes. The inhibitory pattern evident in the plots likely corresponds to this refractory period. Based on the figures, it is reasonable to assume that the refractory period for the recorded motor neurons falls within the 20-60 ms range.
Figure 15: Histogram of inter-spike time intervals for neuron 1 in grip-and-reach task
\begin{table}
\begin{tabular}{|c|c|c|c|c||c|c|c|c|} \hline Neuron & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) & \(6\) & \(7\) & \(8\) & \(9\) \\ \hline \hline No. of events & 4121 & 1895 & 912 & 2509 & 221 & 2784 & 149 & 3854 & 1360 \\ \hline \end{tabular}
\end{table}
Table 5: Summary statistics of neuron spike data
## 5 Conclusion
This paper proposes a non-parametric method for estimating non-linear Hawkes processes. We call the method Neural Network for Non-linear Hawkes processes, or the NNNH method. The NNNH method involves modelling the kernels and the base intensity functions of a non-linear Hawkes process as individual neural networks. The parameters of the neural networks involved are jointly estimated by maximizing the log-likelihood function using batch stochastic gradient descent with Adam for adaptive learning. To apply the SGD method, an unbiased estimator for the gradients with respect to the network parameters is proposed. The paper provides an efficient scheme to evaluate the integrated intensity and its gradients, which can be a computational bottleneck.
\begin{table}
\begin{tabular}{|c|c|c|c||c|c|c|c|c|} \hline \(\mu_{1}\) & \(\mu_{2}\) & \(\mu_{3}\) & \(\mu_{4}\) & \(\mu_{5}\) & \(\mu_{6}\) & \(\mu_{7}\) & \(\mu_{8}\) & \(\mu_{9}\) \\ \hline \hline
0.009 & 0.035 & 0.001 & 0.046 & 0.001 & 0.043 & 0.001 & 0.076 & 0.022 \\ \hline \end{tabular}
\end{table}
Table 6: Base intensities of neuron spike data obtained using the NNNH for the grip-and-reach task
Figure 16: Kernels of motor neurons demonstrating the self-dependency and mutual dependency of neurons for the grip-and-reach task estimated by NNNH.
The effectiveness of the NNNH method in learning the underlying process is investigated through numerical experiments. Unlike many recent neural network-based models that predict conditional intensities, the NNNH model retains the interpretability of the Hawkes process by enabling the inference of the kernels rather than just direct prediction of the conditional intensities. The ability to recover the kernels is desirable for understanding causal relationships between the arrivals in different dimensions, for instance. The NNNH method performed satisfactorily for the diverse set of problems considered. We also see that the NNNH can be used to estimate non-homogeneous Poisson processes. The NNNH method is able to recover non-smooth kernels, kernels with negative intensities, and estimate other variants of the non-linear Hawkes processes.
The NNNH approach is applied to examine the process of order arrivals for buy and sell transactions on the Binance exchange in relation to the BTC-USD and ETH-USD currency pairs. We find evidence of both self and cross excitation in the order arrivals for the two currency pairs. Notably, self excitation is observed to be significant for both currency pairs, while the cross dependence exhibits an asymmetry whereby ETH-USD order arrivals are more strongly influenced by BTC-USD order arrivals than vice versa.
We applied the NNNH model to a publicly accessible dataset of neuron spikes obtained from the motor cortex of monkeys engaged in a grip-and-reach task. As anticipated, the kernels obtained from the analysis exhibit self-inhibition, which could be associated with the refractory period of the neurons. Our findings provide evidence of the NNNH method's efficacy in analyzing high-dimensional datasets.
The NNNH method has a disadvantage in that the estimation of model parameters relies on the use of stochastic gradient descent (SGD), which updates the network parameters iteratively using first-order derivatives. Convergence of SGD-based methods can be slow, often requiring several iterations before the stopping criterion is met. However, the use of SGD makes the NNNH method well-suited for online learning, enabling the model to be trained on new data points that are continuously streaming in, as opposed to a static dataset. Moreover, the NNNH method is amenable to parallel computing of the gradients for batch SGD. A potential avenue for future research is to extend the NNNH method to model marked non-linear Hawkes processes.
|
2305.14859 | Utility-Probability Duality of Neural Networks | It is typically understood that the training of modern neural networks is a
process of fitting the probability distribution of desired output. However,
recent paradoxical observations in a number of language generation tasks let
one wonder if this canonical probability-based explanation can really account
for the empirical success of deep learning. To resolve this issue, we propose
an alternative utility-based explanation to the standard supervised learning
procedure in deep learning. The basic idea is to interpret the learned neural
network not as a probability model but as an ordinal utility function that
encodes the preference revealed in training data. In this perspective, training
of the neural network corresponds to a utility learning process. Specifically,
we show that for all neural networks with softmax outputs, the SGD learning
dynamic of maximum likelihood estimation (MLE) can be seen as an iteration
process that optimizes the neural network toward an optimal utility function.
This utility-based interpretation can explain several otherwise-paradoxical
observations about the neural networks thus trained. Moreover, our
utility-based theory also entails an equation that can transform the learned
utility values back to a new kind of probability estimation with which
probability-compatible decision rules enjoy dramatic (double-digits)
performance improvements. These evidences collectively reveal a phenomenon of
utility-probability duality in terms of what modern neural networks are (truly)
modeling: We thought they are one thing (probabilities), until the
unexplainable showed up; changing mindset and treating them as another thing
(utility values) largely reconcile the theory, despite remaining subtleties
regarding its original (probabilistic) identity. | Huang Bojun, Fei Yuan | 2023-05-24T08:09:07Z | http://arxiv.org/abs/2305.14859v2 | # Utility-Probability Duality of Neural Networks
###### Abstract
It is typically thought that supervised training of modern neural networks is a process of fitting the groundtruth probabilities. However, many counter-intuitive observations in language generation tasks let one wonder whether this canonical probabilistic explanation can really account for the observed empirical success. To resolve this issue, we propose an alternative _utility-based explanation_ to the standard supervised learning procedure in deep learning. The basic idea is to interpret the learned neural network not as a probability model but as an _ordinal utility_ function Fishburn (1990) that encodes the preference revealed in training data. We developed a theory based on this utility-based interpretation, in which the theoretical expectations and empirical observations are better reconciled.
## 1 Introduction
In this paper we challenge, and fix, a standard explanation of deep learning. The prevailing mindset nowadays is to _interpret_ a neural network \(f(\mathbf{w})\) as a parametric model of the conditional probability distribution \(\mathbf{P}_{\mathbf{w}}[Y=\mathbf{y}|X=\mathbf{x}]\), where \(X\) is an **expected input** of the task under concern (e.g. an image/sentence/speech audio), and \(Y\) is an **expected output** given \(X\) (e.g. a class label, a score, or a structured object such as a sentence or an action plan) which is assumed to follow a groundtruth distribution \(\mathbf{P}_{\text{true}}\). Training of the neural network \(f(\mathbf{w})\) is then _thought_ to be the process of approximating \(\mathbf{P}_{\text{true}}\) with \(\mathbf{P}_{\mathbf{w}}\). Indeed, in Goodfellow et al. (2016), the deep learning textbook writes (p.138): "_most supervised learning algorithms in this book are based on estimating a probability distribution \(p(y|x)\)_".
This probability interpretation of neural networks supports two natural ways to use the learned probability model \(\mathbf{P}_{\mathbf{w}}\) at inference time. The first way is to choose the most likely output in \(\mathbf{P}_{\mathbf{w}}\):
\[\mathbf{y}_{\text{MAP}}\doteq\operatorname*{arg\,max}_{\mathbf{y}}\,\mathbf{P}_{\mathbf{ w}}[Y=\mathbf{y}|X=\mathbf{x}] \tag{1}\]
where MAP stands for _maximum a-posteriori probability_. When \(\mathbf{P}_{\mathbf{w}}=\mathbf{P}_{\text{true}}\), \(\mathbf{y}_{\text{MAP}}\) is a provably optimal output in many common scenarios Bishop (2006); see Appendix A for a rigorous analysis.
Another sensible decision rule is to sample the output from the distribution \(\mathbf{P}_{\mathbf{w}}\), which makes the **actual output** a random variable, denoted by \(A\) here:
\[A\sim\mathbf{P}_{\mathbf{w}}[Y=\cdot|X=\mathbf{x}] \tag{2}\]
When \(\mathbf{P}_{\mathbf{w}}=\mathbf{P}_{\text{true}}\), the stochastic output \(A\) is not necessarily optimal, but is necessarily a good output as long as the expected output \(Y\) is the output of a good decision policy (because \(A\) is identically distributed with \(Y\); see Appendix B for more elaboration on the soundness of the sampling rule).
The two decision rules (1) and (2) underlie a long tradition in the ML community that reduces the problem of _learning to make decisions_ to a probability estimation problem: If we could estimate
\(\mathbf{P}_{\text{true}}\) perfectly, our decision would be guaranteed good. In reality, the approximation of \(\mathbf{P}_{\text{true}}\) with \(\mathbf{P}_{\mathbf{w}}\) always comes with errors, but the correspondence in the limit between decision making and probability estimation still gives the reasonable expectation that the closer the probability estimation is, the better the induced decision - by the two decision rules (1) and (2) - would be.
However, it is known that many neural networks with excellent decision quality are actually poorly calibrated in terms of probability estimation Guo et al. (2017); Minderer et al. (2021). In fact, Guo et al. (2017) reported that for some popular NN architectures, more powerful models (in terms of classification quality) tend to be worse calibrated in terms of how \(\mathbf{P}_{\mathbf{w}}\) matches \(\mathbf{P}_{\text{true}}\).
More importantly, recent empirical studies in NLP found that for a variety of language generation tasks, both the MAP rule \(A=\mathbf{y}_{\text{MAP}}\) and the sampling rule \(A\sim\mathbf{P}_{\mathbf{w}}\) lead to very bad performance in terms of text/decision quality Stahlberg and Byrne (2019); Cohen and Beck (2019); Eikema and Aziz (2020); Holtzman et al. (2019); this is the case even for extensively-trained models with state-of-the-art architectures. On the other hand, greedy or near-greedy outputs, as a kind of "economic yet sub-optimal" choices from the probabilistic perspective, turn out to work significantly better, and is often the _only_ solution that is known to work satisfactorily, not in cost, but in quality Wu et al. (2016). These paradoxical observations form an explainability issue that challenges the probabilistic rationale behind the empirical success in related domains (Section 2).
To resolve this issue, we propose an alternative _value-based explanation_ to the standard supervised learning procedure in deep learning. The basic idea is to interpret the learned neural network not as a probability model but as an _ordinal utility_ function Fishburn (1990); Moscati (2018) that encodes human preference revealed in training data. We develop a theory based on this value-based interpretation, in which the theoretical expectations and empirical observations are better reconciled.
Specifically, in Section 3 we point out that a softmax-normalized neural network model also comes with an un-normalized sub-model for the logits, and that this logit sub-model is the actual functioning part of the overall model at inference/decision time. As a result, the standard MLE training process for softmax probability model can be equivalently seen as a certain learning dynamic for the un-normalized sub-model. Now suppose we could _directly_ explain why the sub-models trained with this particular learning dynamic _will_ support good greedy decisions - in a way that the explanation does not resort to probabilistic semantics of the (sub-)model - then the probability-based interpretation would become unnecessary and can be bypassed. What is bypassed together is the contradiction between the probabilistic interpretation and experimental observations.
In Section 4, we indeed provide such a non-probabilistic explanation. Without the probabilistic semantic, the "logit sub-model" is re-interpreted as just a Q-function, and we show that the "MLE-equivalent learning dynamic" of this Q-function is a perturbed variant of a particular supervised Q-learning algorithm family (called _MABE_). We mathematically prove that the unperturbed variant of this family is indeed training the Q-function toward an _optimal_ utility function that gives optimal output under greedy decision. Then we experimentally intervene the perturbation term, and show that the perturbation (which makes the "MLE-equivalent variant" different from the unperturbed variant) has little impact on the learning dynamic empirically.
Moreover, in Section 5 we derive an equation from this utility-based theory which allows us to transform the learned Q-values back to estimations of \(\mathbf{P}_{\text{true}}\), thus bringing back the probabilistic semantic. However, the utility-based probability estimation, called _dual probability_ in the paper, encodes a different probability space from the canonical softmax probabilities. Intriguingly, running probability-based decision rules (1) and (2) based on the dual probability leads to _dramatically_ better performance in all tasks we examined (e.g. +14.6 BLEU for sampling and +17.3 for MAP in WMT'14 en2de translation), and it also gives more reasonable probability predictions. This result implies that the standard supervised learning procedure in deep learning - as a utility-learning procedure now - may indeed correspond to a dual process of probability learning, _but_, the probability space learned from this dual process may not be best represented by the softmax probabilities as usually perceived.
Overall, all the evidences above collectively revealed a phenomenon of _utility-probability duality_, that a neural network is perhaps both a utility function and a probability function in many deep learning settings (Section 6).
The Paradox
It is relatively well known that the probabilities predicted by many deep neural networks (that well support decision making in practice) do not match the true probabilities very well Guo et al. (2017). But this observation alone does not necessarily contradict with the probabilistic rationale behind neural network learning. The genuine paradox manifests itself through a _reversal_ in terms of the quality between "supposedly-rational" and "supposedly-irrational" decisions from the probabilistic perspective. Such a reversal was observed in a variety of _language generation_ tasks, such as machine translation Koehn and Knowles (2017), abstractive summarization Cohen and Beck (2019), and image captioning Holtzman et al. (2019). In this work we used three such tasks for experimentation: **WMT'14 English\(\rightarrow\)German (en2de)**, the most widely used machine translation (MT) benchmark, consisting of \(4.5\) million training sentences; **WMT'17 Chinese\(\rightarrow\)English (zh2en)**, another classic MT benchmark where the source and target languages are remote, consisting of \(21\) million training examples; **CNN/DailyMail**, the most widely used benchmark for abstractive document summarization, consisting of nearly \(300\) thousand document-abstract pairs.
### The Expectations
In these tasks, the expected output \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{\tau})\) consists of a sequence of atomic decisions; each \(y_{t}\) is called a _token_ without loss of generality. In this case, a neural network \(f(\mathbf{w})\) is usually thought to be an _auto-regressive model_ that represents the token-wise conditional probabilities: \(f_{[y_{t}]}(\mathbf{x},\mathbf{y}_{<t};\mathbf{w})=\mathbf{P}_{\mathbf{w}}[\mathbf{y}_{t}|\mathbf{x}, \mathbf{y}_{<t}]\), where \(f_{[y_{t}]}\) denotes the vector-component of \(f\)'s output that corresponds to token \(y_{t}\), and \(\mathbf{y}_{<t}\doteq(y_{1},y_{2},\ldots,y_{t-1})\) denotes the partial output up to decision step \(t\). Such a neural network encodes \(\mathbf{P}_{\mathbf{w}}[\mathbf{y}|\mathbf{x}]\) through the _product rule of probability_, with
\[\mathbf{P}_{\mathbf{w}}[\mathbf{y}|\mathbf{x}]=\prod_{t}\mathbf{P}_{\mathbf{w}}[y_{t}|\mathbf{x}, \mathbf{y}_{<t}]=\prod_{t}f_{[y_{t}]}(\mathbf{x},\mathbf{y}_{<t};\mathbf{w}). \tag{3}\]
By substituting (3) into (1) and (2), we _can_ effectively maximize or sample \(\mathbf{P}_{\mathbf{w}}\) using the neural network \(f(\mathbf{w})\) as we did in one-shot decision tasks. In particular, finding the MAP output (over all possible sentences of a natural language) is a shortest-path problem that can be solved by a backtracking search in realistic (yet costly) time, as demonstrated by Stahlberg and Byrne (2019).
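For concreteness, the short sketch below (an illustration we add here, not code from the paper; `step_logits_fn` is a hypothetical stand-in for the auto-regressive network with the input \(\mathbf{x}\) closed over) scores \(\log\mathbf{P}_{\mathbf{w}}[\mathbf{y}|\mathbf{x}]\) by accumulating the token-wise conditional log-probabilities of (3):

```python
import torch

def sequence_log_prob(step_logits_fn, y):
    """Score log P_w[y | x] via the product rule (3): the sum of
    token-wise conditional log-probabilities along the sequence."""
    log_p = 0.0
    for t in range(1, len(y)):
        # Logits for the next token given the partial output y_{<t}.
        log_probs = torch.log_softmax(step_logits_fn(y[:t]), dim=-1)
        log_p += log_probs[y[t]].item()
    return log_p
```

Both the MAP rule (1) and the sampling rule (2) operate on exactly this quantity, either maximizing it over the combinatorial output space or drawing from the distribution it defines.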
On the other hand, a seemingly sub-optimal but economical choice is to simply output a sequence \(\mathbf{y}\) where each token \(y_{t}\) is a greedy decision that _locally_ maximizes the token-wise probability given the auto-regressive context:
\[\mathbf{y}_{\text{greedy}}\doteq(y_{1}\ldots y_{\tau})\;\;\text{where}\;\;y_{t}= \arg\max_{a}\;\mathbf{P}_{\mathbf{w}}[a|\mathbf{x},\mathbf{y}_{<t}] \tag{4}\]
Comparing (1) and (4), we see that the MAP decision rule maximizes over the combinatorial space of all possible sentences (= the output space), while the greedy decision rule maximizes over the token space, which avoids the combinatorial search at the cost of returning outputs with lower predicted probability. It is thus expected that the MAP rule _should_ give higher quality outputs than the greedy rule _if_ the model-predicted probability is a good indicator of the true likelihood \(\mathbf{P}_{\text{true}}\).
In practice, _beam search_ is a popular generalization of the greedy rule that generates near-greedy outputs by making each decision step based on not a single but a pool of auto-regressive contexts. With the capacity of the pool, a.k.a. the _beam size_, being \(1\), beam search degenerates exactly to the greedy rule (4). With the beam size tuned toward infinity, beam search will eventually (but extremely slowly) cover the entire output space and will return the MAP output (1) in the theoretical limit. It is thus expected that the output quality of beam search should improve as beam size grows.
Moreover, the sampling rule (2) _should_ have reasonable performance _if_ the neural network is well modeling the probabilities of a desired output (see Appendix B). In other words, if sampling a probability model cannot give reasonable outputs, it must be because the model is not well modeling the true probabilities - in that case there is no reason to expect that picking the greedy tokens would give something significantly better.
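To make the two token-level alternatives concrete, the following sketch (a hedged illustration, not the paper's code; `step_logits_fn`, `bos`, and `eos` are hypothetical names) implements both the greedy rule (4) and ancestral sampling, i.e. rule (2) applied token by token; beam search with beam size \(1\) reduces to the greedy branch:

```python
import torch

def decode(step_logits_fn, bos, eos, max_len=128, mode="greedy"):
    """Token-by-token decoding: mode="greedy" implements rule (4);
    mode="sample" draws each token from the softmax, realizing rule (2)."""
    y = [bos]
    for _ in range(max_len):
        probs = torch.softmax(step_logits_fn(y), dim=-1)
        if mode == "greedy":
            tok = int(probs.argmax())
        else:  # ancestral sampling
            tok = int(torch.multinomial(probs, num_samples=1))
        y.append(tok)
        if tok == eos:
            break
    return y
```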
### The Observations
Surprisingly, however, in a number of language generation tasks, the greedy rule (4) and its close variants perform _much_ better than the more principled decision rules (1) and (2). Figure 1 illustrates the issue in WMT'14 en2de, with an experiment we designed to synthesize all the related
counter-intuitive observations together in a systematic and self-contained way. In this experiment, a Transformer neural network with 60 million parameters was trained using the standard MLE loss (see Appendix G.1 for experiment setting details). From Figure 1 we see that:
(1) **Sampling the learned probability model \(\mathbf{P}_{\boldsymbol{w}}\) gives bad outputs**. Specifically, the performance of the sampled output (corresponding to "sampling" in Figure 1) is \(5.4\) (\(\pm\)0.3 for the \(90\%\) confidence interval). As baselines, the human score is \(42.5\), and the best machine translation solution ("\(|\)beam\(|\)=16" in Figure 1) scores \(27.1\). The performance of sampling the probability model is much closer to that of random output (\(\approx 0\)), which is far from reasonable.
(2) **The greedy rule gives good outputs**, achieving a performance score of \(25.6\). While a further \(1.5\) points can be gained by relaxing the greedy pick to a few candidates at each step ("\(|\)beam\(|\)=16" in Fig.1), the gap is rather marginal. In general, it is fair to say that greedy outputs are nearly the best, or that near-greedy outputs are the best.
(3) **Seeking to maximize the predicted probability gives bad outputs**: As we continue to increase the beam size, we indeed find outputs with higher probability according to the model (orange curve), but the actual translation quality turns out to decrease (blue curve). With beam size \(=1024\), the performance has dropped to \(8.6\). In fact, Stahlberg and Byrne (2019) reported that the exact MAP output has a performance score as low as \(2.1\) in a similar WMT'15 task.
(4) **The learned probability model \(\mathbf{P}_{\boldsymbol{w}}\) significantly over-estimates some clearly bad outputs, while under-estimates, again significantly, some clearly good outputs on the other hand**. Specifically, the model predicts a probability of around \(1/10^{4}\) for the empty translation which consists of nothing but an end-of-sequence token - clearly such translation should never occur (and indeed the model never saw any empty translation in the training data). 3 On the other hand, the model assigns _lower_ probability to the best-performing outputs (e.g. \(\mathbf{P}_{\boldsymbol{w}}[\boldsymbol{y}_{\text{greedy}}]\approx 1/10^{6}\) in Figure 1), and moreover, assigns _much lower_ probability to the true expected outputs provided by human (with average predicted probability as low as \(1/10^{18}\)).
Footnote 3: Probability over-estimation is not limited to this particular output. It is a general trend that current probability models over-estimate many very short and meaningless outputs Wu et al. (2016); Murray and Chiang (2018).
The above observations are not limited to the particular task as demonstrated. See Appendix G.2 and G.3 for similar results in WMT'17 zh2en translation and in CNN-DM summarization, respectively. The pattern is the same across all tasks: Both maximizing and sampling the learned probability model perform poorly while going greedy or near-greedy with the "local probabilities" performed dramatically well, and the learned model systematically assigns very low probabilities to desired outputs while giving much higher probabilities to undesired outputs.
These observations lead to a paradox if we insist on the probabilistic explanation: On one hand, we see strong reasons to reject the greedy rule - for three tokens \(A\), \(B\), \(C\), saying that "\(AC\) _is more likely than \(BC\) because \(P(A)>P(B)\)_" (which is what the greedy rule is suggesting!) violates basic principles of probability theory. On the other hand, we do observe that the greedy rule works very well in reality, much better than decisions based on \(P(AC)\) and \(P(BC)\).

Figure 1: MLE models exhibit paradoxical observations in WMT'14 en2de Translation. The performance is measured by the standard BLEU metric in the domain Papineni et al. (2002). \(\log_{10}\mathbf{P}_{\boldsymbol{w}}\) gives the order-of-magnitude of the probability as predicted by the trained model.
### Existing Explanations
The counter-intuitive observations above make one naturally wonder if the learned neural networks are really supporting decision making _through_ good probability modeling. Indeed, aspects of this problem have been called the "_beam search curse_" Yang et al. (2018), "_beam search bless_" Meister et al. (2020), or "_neural text degeneration_" Holtzman et al. (2019) - these names suggest how paradoxical the community finds the problem. In the following we briefly mention some existing explanations in the literature. See Appendix E for an extended discussion of related works.
Eikema and Aziz (2020) defended the probabilistic interpretation, arguing that the MAP output is not a good decision rule at all for the selected task. We however argue that tasks like translation fall in the category that MAP is provably optimal _if_ the probability estimation is accurate (see Appendix A), so inadequacy of MAP output can only be _caused_ by inaccuracy of probability modeling.
Many works Ranzato et al. (2016), Zhang et al. (2019), Wu et al. (2016), Stahlberg and Byrne (2019), Cohen and Beck (2019), Holtzman et al. (2019) seek to find out why the learned model deviates from the groundtruth distribution. Factors such as _exposure bias_ Wang and Sennrich (2020), _length bias_ Wu et al. (2016), abnormal probability fluctuations Cohen and Beck (2019), and long-tail errors Holtzman et al. (2019) are identified, with many heuristic methods proposed to avoid the identified failure patterns. This line of work, however, does not explain why (near-)greedy decisions _based on a model with so many issues_ can somehow lead to good empirical results. Note that the heuristics proposed in these works themselves effectively make the resulting solutions deviate from the probability principles. It is still unclear why we _have to_ violate well-established probability principles, either in the form of greedy decisions or by some sort of heuristic rules, to obtain competitive performance from the learned "probability models".
Meister et al. (2020) recently proposed to explain the effectiveness of greedy output via a "uniform information density (UID)" hypothesis in cognitive science. However, the precise mathematical expression of the UID hypothesis itself is subject to different interpretations Meister et al. (2021). In contrast, in this paper we propose a mathematically accurate and non-probabilistic explanation.
## 3 A Duality View to MLE Training
We lay out a conceptual framework in this section which aims at resolving the paradox as illustrated in Section 2 through a shift of mindset. Consider the standard MLE training process for neural networks: We collect a data set \(\{\mathbf{x}^{(i)},\mathbf{y}^{(i)}\}_{i=1..n}\) with \(\mathbf{y}^{(i)}\sim\mathbf{P}_{\text{true}}[Y|X=\mathbf{x}^{(i)}]\), then we update the model parameters \(\mathbf{w}\) toward the ones that maximize the (log-)probability of the data in \(\mathbf{P}_{\mathbf{w}}\):
\[\mathbf{w}_{\text{MLE}}=\arg\max_{\mathbf{w}}\ \log\mathbf{P}_{\mathbf{w}}[\{\mathbf{y}^{(1..n )}\}|\{\mathbf{x}^{(1..n)}\}]=\arg\max_{\mathbf{w}}\ \sum_{i=1}^{n}\sum_{t=1}^{T_{i}}\log f_{[y^{(i)}_{t}]}(\mathbf{x}^{(i)},\mathbf{y}^{ (i)}_{<t};\mathbf{w}) \tag{5}\]
where the neural network \(f\) models a softmax distribution
\[f_{[y_{t}]}(\mathbf{x},\mathbf{y}_{<t};\mathbf{w})=e^{Q_{[y_{t}]}(\mathbf{x},\mathbf{y}_{<t};\mathbf{ w})}/\sum_{a}e^{Q_{[a]}(\mathbf{x},\mathbf{y}_{<t};\mathbf{w})} \tag{6}\]
over the so-called _logit_ vector \(Q(\mathbf{x},\mathbf{y}_{<t};\mathbf{w})\). The training of \(f(\mathbf{w})\) follows a learning dynamic driven by the gradient of the MLE objective (5). 4 This gradient can be conveniently computed by substituting (6) into (5), yielding
Footnote 4: See Appendix C for how (5) corresponds _exactly_ to the NLP practice in real world.
\[\nabla_{\mathbf{w}}\log\mathbf{P}_{\mathbf{w}}[y_{t}|\mathbf{x},\mathbf{y}_{<t}] = \nabla_{\mathbf{w}}Q_{[y_{t}]}(\mathbf{w})-\nabla_{\mathbf{w}}\text{LogSumExp} \Big{(}Q(\mathbf{w})\Big{)} \tag{7}\] \[= \nabla_{\mathbf{w}}Q_{[y_{t}]}(\mathbf{w})-\sum_{a}\ f_{[a]}(\mathbf{w})\ \nabla_{\mathbf{w}}Q_{[a]}(\mathbf{w})\]
where \(\text{LogSumExp}(Q)\doteq\log\sum_{a}\exp(Q_{[a]})\), and we have omitted \((\mathbf{x},\mathbf{y}_{<t})\) in \(Q\)'s and \(f\)'s argument for brevity. (7) is a simple fact known by many, and is often utilized for the purpose of computing the gradient of the log likelihood function.
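In the tabular special case where the parameters are the logits themselves (so that \(\nabla_{\mathbf{w}}Q_{[a]}\) is a one-hot vector), (7) reduces to the familiar "one-hot minus softmax" form. The snippet below (our illustration, not from the paper) verifies this against autograd:

```python
import torch
import torch.nn.functional as F

q = torch.randn(5, requires_grad=True)   # tabular Q-values, i.e. the logits
y = 2                                    # observed token index
F.log_softmax(q, dim=0)[y].backward()    # LHS of (7) via autograd

rhs = -F.softmax(q.detach(), dim=0)      # -sum_a f_[a] * grad Q_[a]
rhs[y] += 1.0                            # + grad Q_[y_t]
assert torch.allclose(q.grad, rhs, atol=1e-6)
```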
We however argue that one can also understand (7) in the opposite direction. Instead of viewing (7) as a method to implement a learning dynamic of \(\mathbf{P}_{\mathbf{w}}\) through manipulating \(Q(\mathbf{w})\), we can reversely interpret it as a method to implement a learning dynamic of \(Q(\mathbf{w})\) through manipulating \(\mathbf{P}_{\mathbf{w}}\). In this alternative perspective, we iterate the function \(Q\)5 for its own sake, in the particular way as prescribed by the right-hand side of (7). The left-hand side of (7) - as well as its connection to the MLE objective (5) - is merely a human-imposed explanation about this Q-oriented learning dynamic. More generally, the iteration of \(\mathbf{P}_{\mathbf{w}}\) and the iteration of \(Q(\mathbf{w})\) can be considered _dual process_ to each other that are taking place in parallel, in a learning dynamic conventionally named "MLE training".
Footnote 5: In the new perspective we shall perhaps not call \(Q\) the “logits” any more, a name that itself suggests that \(Q\) is nothing but the logarithm of something else.
While in principle one is free to choose either the P-iteration view or the Q-iteration view, the former (i.e. the probabilistic interpretation) will induce many conflicts with the empirical observations as shown in Section 2. For this reason, we propose to explain the empirical behaviors of softmax-normalized neural networks from the Q-iteration perspective, in which what the neural network is expected to output is not probabilities, but simply the un-normalized Q-values. The Q-function is trained with the RHS of (7) as the update rule. At decision time, the well-performing greedy-to-P rule (4) (which requires probabilistic interpretation) can be re-interpreted as a greedy-to-Q rule:
\[\mathbf{y}_{\text{greedy}}=(y_{1\dots T})\;,\;y_{t}=\arg\max_{a}\;Q_{[a]}(\mathbf{x}, \mathbf{y}_{<t};\mathbf{w}) \tag{8}\]
because the softmax transformation is order preserving.
Note that in the above we are _not_ proposing a new algorithm, but only rephrasing the standard training and inference procedures in existing practice (i.e. (5) and (4)) from another point of view. As the probabilistic semantic is entirely discarded, all the probability-based assertions about the softmax outputs become unexpected; thus the weak performance of probability-based decision rules and the unreasonable probability predictions are no longer paradoxical under the Q-iteration perspective. The only thing that needs to be explained is _why_ the particular Q-iteration procedure (7) _can_ learn a good Q-function for greedy usage, which is the main topic of the next section.
Before turning to our account to the above question, we first remark that "_learning unnormalized Q-functions in support of greedy decision making_" is not a random problem we posed here just for fitting a particular experimental result, but is a classic research topic that has been extensively studied in reinforcement learning (RL) Watkins (1989); Mnih et al. (2015); Sutton and Barto (2018). In RL literature, such a Q-function is also called an _action-value_ function, or just _value function_ for short. Value functions support decision making by assigning preferential scores to options so that optimal ones can be identified, locally and greedily, without checking the long-term consequence of the local decision. A value function is called an **optimal Q-function** if the induced greedy decision policy (8) gives optimal outputs.
However, in existing RL literature, the concept of value has been mostly limited to a particular type of _cardinal utility_ which corresponds to the expected reward of a specific policy, and in RL the optimal Q-function is typically learned via a _Bellman value iteration_ procedure. The manipulations on the Q-function in the "MLE dynamic" (7) are clearly a very different procedure. In fact, different from the typical RL setting where the learning is driven by a reward signal, the dual process (7) of MLE optimization relies on demonstrative samples of the expected output - in other words, it is an _imitation learning_ procedure of Q-learning. The Q-function thus learned represents an ordinal utility function that does not necessarily correspond to some pre-defined reward; see Appendix D for more comparison and contrast between the utility concept and the value concept in RL. Existing RL or imitation learning literature thus cannot fully explain the utility-based rationale behind this uncommon (but empirically effective) procedure: If the standard supervised MLE training of deep neural networks is actually learning Q-functions, what is the "target" of this Q-learning dynamic? Is the learning steered toward an _optimal_ Q-function? Can we explain this procedure, which works well when (and to a large extent, only when) coupled with the greedy decision rule, without resorting back to the probability interpretation? We seek to address these questions in the next section.
## 4 MLE Training as a Perturbed Dynamic of Optimal Utility Learning
Our general goal is to interpret the SGD dynamic of MLE training as an SGD process of _some_ objective function of \(Q\). There is however a technical obstacle: If we look at (7), the gradient
operator \(\nabla_{\mathbf{w}}\) cannot be re-arranged to the head because of the \(f_{[a]}(\mathbf{w})\) term. As a result, it is not immediately clear that (7) is computing a gradient of anything other than the log-likelihood (which is the interpretation we wanted to bypass here).
Nonetheless, let us temporarily do a "wrong" algebraic manipulation by moving \(\nabla_{\mathbf{w}}\) to the head of (7) anyway, making it "approximately" the gradient of the following objective function:
\[J_{\text{MABE}}(\mathbf{w},\mathbf{x},\mathbf{y}) \doteq\sum_{t=1}^{T}\;\Big{(}\;Q(y_{t}|\mathbf{x},\mathbf{y}_{<t};\mathbf{w}) -\sum_{a}f(a|\mathbf{x},\mathbf{y}_{<t};\mathbf{w})Q(a|\mathbf{x},\mathbf{y}_{<t};\mathbf{w})\;\Big{)}\] \[=\sum_{t=1}^{T}\;\Big{(}\;Q(y_{t}|\mathbf{x},\mathbf{y}_{<t};\mathbf{w})- \underset{A_{t}\sim\mathbf{P}_{\mathbf{w}}}{\mathbf{E}}\;\Big{[}Q(A_{t}|\mathbf{x},\mathbf{y}_{<t};\mathbf{w})\Big{]}\;\Big{)} \tag{9}\]
Intuitively, \(J_{\text{MABE}}\) measures the _advantage_ of outputting the expected token \(y_{t}\) over a stochastic output \(A_{t}\) that follows the softmax distribution induced by \(Q\), where the advantage is the difference of expected utilities between the two decisions (as predicted by \(Q\), under context \(\mathbf{x},\mathbf{y}_{<t}\), for each step \(t\)). The subscript MABE stands for Maximum Advantage over Boltzmann Exploration.
From (7) to (9), we have (naively) understood the log-likelihood gradient \(\nabla_{\mathbf{w}}\log\mathbf{P}_{\mathbf{w}}\) as an approximation of \(\nabla_{\mathbf{w}}J_{\text{MABE}}\), with the impact of \(\partial\mathbf{w}\) over \(f(\mathbf{w})\) being ignored. In this sense, the MLE optimization can be seen as a _biased_ SGD dynamic of \(J_{\text{MABE}}\) optimization.
Interestingly, the log-likelihood gradient is not arbitrarily biased, but there is a _precise_ connection between the gradients of the two functions (see the proof in Appendix F.1):
**Proposition 1**.: _Given input \(\mathbf{x}\), output \(\mathbf{y}=(y_{1}\ldots y_{T})\), and parametric model \(Q(\mathbf{w})\), we have_
\[\frac{\partial\log\mathbf{P}_{\mathbf{w}}[\mathbf{y}|\mathbf{x}]}{\partial w_{j}}=\frac{ \partial J_{\text{MABE}}}{\partial w_{j}}(\mathbf{w},\mathbf{x},\mathbf{y})\;+\;\sum_{t=1} ^{T}\mathbf{cov}_{t}\Big{[}Q,\frac{\partial Q}{\partial w_{j}}\Big{]}\qquad, \;\forall j\]
_where \(\;\mathbf{cov}_{t}\Big{[}Q,\frac{\partial Q}{\partial w_{j}}\Big{]}\doteq \underset{A_{t}\sim P_{t}}{\mathbf{cov}}\Big{[}Q_{t}(A_{t})\;,\;\frac{\partial Q _{t}}{\partial w_{j}}(A_{t})\Big{]}\)_
\[=\;\sum_{a}P_{t}(a)\cdot\Big{(}\;Q_{t}(a)-\sum_{b}P_{t}(b)Q_{t}(b)\;\Big{)} \cdot\Big{(}\;\frac{\partial Q_{t}}{\partial w_{j}}(a)-\sum_{b}P_{t}(b)\frac{ \partial Q_{t}}{\partial w_{j}}(b)\;\Big{)} \tag{10}\]
* \(\mathbf{P}_{\mathbf{w}}\) _is the softmax distribution induced by_ \(Q(\mathbf{w})\) _as prescribed by (_3_) and (_6_),_
* \(P_{t}(a)\) _denotes_ \(\mathbf{P}_{\mathbf{w}}[a|\mathbf{x},\mathbf{y}_{<t}]\)_, the probability that the token at step_ \(t\) _is_ \(a\)_, according to_ \(\mathbf{P}_{\mathbf{w}}\)_._
* \(Q_{t}(a)\) _denotes_ \(Q(a|\mathbf{x},\mathbf{y}_{<t};\mathbf{w})\)_, the utility of outputting token_ \(a\) _at step_ \(t\)_, according to_ \(Q(\mathbf{w})\)_,_
* \(\frac{\partial Q_{t}}{\partial w_{j}}(a)\) _denotes_ \(\frac{\partial Q}{\partial w_{j}}(a|\mathbf{x},\mathbf{y}_{<t};\mathbf{w})\)_, the partial derivative of_ \(Q(a|\mathbf{x},\mathbf{y}_{<t};\mathbf{w})\) _at step_ \(t\)_._
Intuitively, the \(\mathbf{cov}_{t}\) term in (10) is the covariance between \(Q(A_{t})\) and its derivative when \(A_{t}\) follows the Boltzmann exploration policy \(\mathbf{P}_{\mathbf{w}}\). Proposition 1 asserts that the gradient of the probability-learning objective (5) differs from the gradient of the utility-learning objective (9) by exactly this covariance (or by the cumulative covariance in sequential decision setting). For complex models with millions or billions of parameters, if the model output is not strongly correlated to the partial derivative of a single parameter, the covariance term identified in Proposition 1 would have limited impact on the learning progress. As a key result, we empirically found that this is indeed true: in all the tasks we have experimented, the perturbation from this covariance term cannot significantly affect the learning, not only for the final performance, but also throughout the entire learning dynamic.
Specifically, consider a MABE(\(\lambda\)) family of Q-learning algorithms as defined by Algorithm 1. MABE(0) optimizes \(J_{\text{MABE}}\) based on unbiased estimator of \(\nabla J_{\text{MABE}}\). MABE(1) adds the covariance term to the gradient estimator, thus is equivalent to traditional MLE training. For other \(\lambda\), MABE(\(\lambda\)) does not seem to have principled interpretations, but is simply constructed by perturbing the gradient estimator with a \(\lambda\) multiple of the covariance, where \(\lambda\) can be either positive or negative. By tuning \(\lambda\) to different values, we can control how significantly the gradient is perturbed.
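Algorithm 1 itself is not reproduced here, but the following sketch (our reconstruction, under the assumption that the per-step objective terms are combined linearly; `logits` and `targets` are hypothetical names, not the authors' code) shows one way to realize the MABE(\(\lambda\)) gradient with autograd. The gradient of the non-detached advantage term is an unbiased estimate of \(\nabla J_{\text{MABE}}\), while the gradient of the log-likelihood term equals \(\nabla J_{\text{MABE}}\) plus the covariance of Proposition 1, so a \((1-\lambda,\lambda)\) mixture perturbs the gradient by exactly \(\lambda\) times the covariance:

```python
import torch.nn.functional as F

def mabe_lambda_loss(logits, targets, lam):
    """Surrogate loss whose gradient is grad J_MABE + lam * covariance.

    logits:  (batch, vocab) Q-values for each decision step
    targets: (batch,) indices of the demonstrated tokens y_t
    lam=1 recovers standard MLE training; lam=0 is unperturbed MABE."""
    p = F.softmax(logits, dim=-1)
    q_y = logits.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Advantage of the demonstrated token over Boltzmann exploration, eq. (9);
    # gradients flow through p, so autograd returns grad J_MABE exactly.
    j_mabe = q_y - (p * logits).sum(dim=-1)
    # Log-likelihood term; its gradient is grad J_MABE + covariance (Prop. 1).
    log_lik = F.log_softmax(logits, dim=-1).gather(
        -1, targets.unsqueeze(-1)).squeeze(-1)
    return -((1.0 - lam) * j_mabe + lam * log_lik).mean()
```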
Figure 2 shows the learning curves of MABE(\(\lambda\)) under five values of \(\lambda\), ranging from \(-2\) to \(2\). Generally speaking, all the learning curves are similar, in all the three tasks being examined, not only in the end but almost throughout the training process. Performance of the perturbed variant MABE(1), a.k.a. MLE training, is slightly lower than the unperturbed variant MABE(0) (see Fig. 7,
10, 13 in the appendix for the numerical scores). The learning under \(\lambda=2\) (which is 2x perturbed) was somewhat slower at the beginning, but managed to catch up with the other variants in later stages of the learning. Importantly, MABE(1) - a.k.a. the SGD-based MLE training of softmax-normalized neural networks - does not seem to be anything uniquely different from the other "non-probabilistic" variants in the MABE(\(\lambda\)) family. Under the greedy decision rule, its performance is generally similar to (often slightly lower than) that of the unperturbed variant MABE(0).
So far we have re-interpreted a classic statistical learning procedure (i.e. SGD-based MLE) as a kind of Q-learning algorithm (i.e. MABE(\(\lambda\))). Now we investigate the optimality of this Q-learning algorithm. Given that the performance of MABE(\(\lambda\)) is similar under modest perturbation coefficients \(\lambda\), in the following we focus on analyzing MABE(0), the learning dynamic without any perturbation. We prove that when the Q-model is expressive enough, the global maximum of \(J_{\text{MABE}}\) is indeed an optimal Q-function (see the proof in Appendix F.2).
**Theorem 2**.: _Consider a tabular model \(Q(a;\mathbf{q})=\sum_{j=1}^{d}\mathds{1}[a=j]\cdot q_{j}\), where the parameter vector \(\mathbf{q}=(q_{1}\ldots q_{d})\) directly encodes the utilities for each possible action \(a\in\{1\ldots d\}\). Let \(\mathbf{p}=(p_{1}\ldots p_{d})\) be the softmax distribution induced by \(\mathbf{q}\). Let \(\mathbf{q}^{*}\) be the Q-values that maximize \(J(\mathbf{q})=\mathbf{E}_{Y\sim\mathbf{P}_{\text{true}}}\ \Big{[}Q(Y;\mathbf{q})\Big{]}-\mathbf{E}_{A\sim\mathbf{p}}\ \Big{[}Q(A;\mathbf{q})\Big{]}\), and \(\mathbf{p}^{*}\) the softmax distribution induced by \(\mathbf{q}^{*}\); then_
\[\max_{a\in\operatorname{supp}(Y)}\ q_{a}^{*}\ >\ \max_{b\notin\operatorname{supp}(Y)}\ q_{b}^{*}\ \ +\ 1 \tag{11}\]
_where \(\operatorname{supp}(Y)\) is the support of \(\mathbf{P}_{\text{true}}\) (which is thus the set of all desired actions), and moreover,_
\[p_{a}^{*}\cdot(\ 1+q_{a}^{*}-\mathbf{E}_{\mathbf{p}^{*}}[Q]\ )\ =\ \ \mathbf{P}_{\text{true}}[Y=a]\qquad,\ \forall a \tag{12}\]
The function \(J\) in Theorem 2 is an idealized form of the MABE objective \(J_{\text{MABE}}\), with the loss of parameterization in \(Q(\mathbf{w})\) being ignored. The inequality (11) in Theorem 2 suggests that \(\mathbf{q}^{*}\), the global maximum of \(J\), separates desired actions from undesired ones by at least a constant margin, so going greedy with \(\mathbf{q}^{*}\) can provably avoid undesired actions. This fact establishes a strict optimality property for the global maximum of \(J_{\text{MABE}}\): Maximizing \(J_{\text{MABE}}\) over \(\mathbf{w}\) guarantees to find the _optimal_ Q-function 6 as long as the global maximum of \(J_{\text{MABE}}\) is covered by the parametric model \(Q(\mathbf{w})\).
Footnote 6: Recall that a Q-function is optimal if the greedy policy (8) finds optimal decisions (see Section 3).
Figure 2: SGD dynamic of \(J_{\text{MABE}}\) when the gradient is perturbed by the covariance term (10). Performance of the Q-greedy decision rule is evaluated on the test set every \(5000\) SGD steps. BLEU and ROUGE are standard metrics for corresponding tasks.
As MABE(0) is just a standard stochastic gradient process of \(J_{\text{MABE}}\) optimization, this optimality result (which is subject to model errors, data errors, and optimization errors) supports the soundness of the MABE family of Q-learning algorithms to the same strength as how the SGD-based MLE procedure has been justified in the probabilistic explanation of deep learning. In this way, we have subsumed the SGD dynamic of MLE as a perturbed variant of a learning dynamic toward the optimal Q-function (where the perturbation does not significantly affect the learning dynamic).
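As a quick numerical sanity check of Theorem 2 (an illustration we add here; the ground-truth distribution and optimizer settings are arbitrary assumptions), one can maximize the idealized objective \(J(\mathbf{q})\) for a small tabular problem and inspect both the margin property (11) and the recovery equation (12):

```python
import torch

p_true = torch.tensor([0.5, 0.3, 0.2, 0.0, 0.0])   # supp(Y) = {0, 1, 2}
q = torch.zeros(5, requires_grad=True)
opt = torch.optim.Adam([q], lr=0.05)
for _ in range(5000):
    p = torch.softmax(q, dim=0)
    J = (p_true * q).sum() - (p * q).sum()          # E_Ptrue[Q] - E_p[Q]
    opt.zero_grad()
    (-J).backward()
    opt.step()

q_star = q.detach()
p_star = torch.softmax(q_star, dim=0)
# Margin (11): the best supported action beats every unsupported one by > 1.
print(q_star[:3].max() - q_star[3:].max())
# Recovery (12): p*_a (1 + q*_a - E_p*[Q]) should approximate P_true.
print(p_star * (1 + q_star - p_star @ q_star))
```

In this small instance the printed margin should settle above \(1\), and the transformed vector should approach \(\mathbf{P}_{\text{true}}\), matching (11) and (12).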
## 5 The Dual Probabilities
Above we have argued that the empirical effectiveness of standard deep learning procedures can be better explained without interpreting the neural networks as probability models. In some cases, however, people may just want to have a probability model Papamakarios et al. (2017). Interestingly, our non-probabilistic theory entails a way to bring back the probabilistic semantic by transforming the learned Q-values (back) into probability estimations.
Specifically, (12) in Theorem 2 gives a precise relationship between the Q-value of an action and the _true_ probability that the action is a desired one. The equation holds at the global maximum \(\mathbf{q}^{*}\) of \(J_{\text{MABE}}\). In practice, the optimization is never exact for modern neural networks, yet we can still use the equation as guidance to transform the Q-values obtained from MABE optimization into an approximation of the probabilities \(\mathbf{P}_{\text{true}}\). Specifically, let the vector \(\mathbf{q}=(q_{1}\ldots q_{d})\) be the Q-values predicted by a Q-function for a given decision context, and define
\[p_{i}^{\text{dual}}\ =\ \text{CLIP}\Big{(}\ p_{i}\ (1+q_{i}-\mathbf{p}\cdot\mathbf{q} )\ \Big{)}\ /\ Z \tag{13}\]
where \(p_{i}=\text{softmax}_{[i]}(\mathbf{q})\), \(\text{CLIP}(x)\doteq\min(\max(0,x),1)\) trims the predictions to \([0,1]\), and \(Z\) is the sum of the numerator in (13) across all \(i\). The clipping and \(Z\)-normalization in (13) are not needed if \(\mathbf{q}\) is exactly optimized.
We call the probability predictions by (13) the _dual probabilities_ of the Q-values. Note that the dual probabilities \(p^{\text{dual}}\) are different from the predictions computed directly from the softmax transformation (which gives \(\mathbf{p}\)); the former "calibrates" the latter with a scaling factor \(1+q_{i}-\mathbf{p}\cdot\mathbf{q}\).
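A direct implementation of (13) (a minimal NumPy sketch of the formula, not the authors' code) takes a vector of Q-values and returns the dual probabilities:

```python
import numpy as np

def dual_probabilities(q):
    """Transform Q-values q into dual probabilities via eq. (13)."""
    q = np.asarray(q, dtype=np.float64)
    p = np.exp(q - q.max())
    p /= p.sum()                      # softmax probabilities
    raw = p * (1.0 + q - p @ q)       # rescale by 1 + q_i - p.q
    raw = np.clip(raw, 0.0, 1.0)      # CLIP to [0, 1]
    return raw / raw.sum()            # Z-normalization
```

Note that tokens whose raw scaling factor \(1+q_{i}-\mathbf{p}\cdot\mathbf{q}\) is negative receive exactly zero dual probability, which is how the over-estimated degenerate outputs discussed below get suppressed.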
Empirically, we found that the dual probabilities (13) perform much better than the commonly used softmax probabilities when both are used in probability-compatible decision rules, as Table 1 shows (also see Appendix G.1, G.2 and G.3).
Taking WMT'14 en2de as an example, translations by sampling the dual probabilities attain a BLEU score of \(20.0\), which is a gain of \(+14.6\) (i.e. \(3.7\times\)) over sampling with the traditional softmax probabilities (cf. Figure 1). The dual probability makes pure probability sampling a much more competitive decision rule now. Similarly, the dual probabilities also drastically improve the real-world performance of (approximate) probability maximization. For search with beam size \(=1024\), for example, the BLEU score in WMT'14 en2de is lifted from \(8.6\) to \(25.9\), a gain of \(+17.3\), and the score is higher than greedy's. Moreover, the dual probability of the empty output is now strictly zero on \(2736\) of the \(2737\) testing instances. In fact, the raw scaling factor \(1+q_{i}-\mathbf{p}\cdot\mathbf{q}\) of the end-of-seq token was negative (thus was clipped to \(0\)) in all but one instance. It is only a pity that the model will also assign zero probability to most of the reference translations (\(2614\) out of \(2737\), which is nevertheless fewer than for the empty outputs). On the other hand, the dual probability model is much more confident about self-generated outputs; see Figure 6 in Appendix G.1.
Similar gains in probability prediction and utilization can be observed in all tasks we examined. See Appendix G.1, G.2 and G.3 for more details. Overall, dual probability models exhibit much more reasonable behaviors than traditional softmax probability models.
| | WMT'14 en2de, sampling | WMT'14 en2de, beam=1024 | WMT'17 zh2en, sampling | WMT'17 zh2en, beam=1024 | CNN/DailyMail, sampling | CNN/DailyMail, beam=1024 |
| --- | --- | --- | --- | --- | --- | --- |
| Softmax Probability | 5.4 ±0.3 | 8.6 | 8.3 ±0.2 | 11.9 | 21.1 ±0.1 | 20.1 |
| Dual Probability | 20.0 ±0.2 (+14.6) | 25.9 (+17.3) | 17.7 ±0.1 (+9.4) | 21.1 (+9.2) | 27.9 ±0.1 (+6.8) | 27.1 (+7.1) |

Table 1: Dual probabilities significantly improve pure sampling and MAP (± gives the 90% confidence interval from 9 trials).
Importantly, the dual probability formula (13) does not use any hyperparameter, and is derived from first principles. Recall that in Section 3 we proposed to think of the Q-learning dynamic of (7) as a dual process that simultaneously optimizes the Q-values _and_ the softmax probabilities. But now, in light of the advantage of the dual probabilities as observed in this section, it seems that the probability given by (13) is a more accurate probability model. As a result, if we say that the neural network is representing both utility and probability, the probability counterpart seems to be better represented by the probability given in (13), instead of by the commonly recognized softmax probability.
## 6 Conclusions
To summarize, we have demonstrated how the current practice of neural networks contradicts its canonical probabilistic explanation in some complex decision tasks. This motivated us to develop an alternative explanation, in which the classic SGD-based MLE optimization process of softmax-normalized neural networks is interpreted as a supervised Q-learning algorithm (MABE(1)). Our utility-based theory is inherently free of the paradoxical probabilistic semantics, and yet can induce a dual probability space when needed, with significantly improved performance.
Based on the results in this paper, one can either say that the neural network trained from "SGD-based MLE optimization" is modeling a utility function, whose theoretical optimality is characterized by Theorem 2, or one could also say that the neural network is indeed modeling a probability space - not of the softmax probability (6), but of the dual probability (13).
Although this duality phenomenon may best manifest itself in sequential decision tasks,7 we believe the conceptual implications of our duality theory may apply to all deep learning tasks, because the probability interpretation of neural networks has been framed as a unified logical framework for all learning tasks. Our results challenge this mindset, and our theory provides a better unified framework to reason about deep neural networks.
Footnote 7: In one-shot classification tasks, the MAP rule (1) degenerates to the greedy rule (4), thus many problems observed in this paper will not show up, except that the inaccuracy of softmax probability estimations can still be observed Guo et al. (2017).
## Acknowledgments
Authors of this paper would like to thank Rui Ding, Jingjing Xu, Shion Ishikawa, Liang Li, and Lihua Wang for giving many helpful comments on previous versions of this paper.
|
2305.05954 | Enhancing the Performance of Transformer-based Spiking Neural Networks
by SNN-optimized Downsampling with Precise Gradient Backpropagation | Deep spiking neural networks (SNNs) have drawn much attention in recent years
because of their low power consumption, biological rationality and event-driven
property. However, state-of-the-art deep SNNs (including Spikformer and
Spikingformer) suffer from a critical challenge related to the imprecise
gradient backpropagation. This problem arises from the improper design of
downsampling modules in these networks, which greatly hampers the overall model
performance. In this paper, we propose ConvBN-MaxPooling-LIF (CML), an
SNN-optimized downsampling with precise gradient backpropagation. We prove that
CML can effectively overcome the imprecision of gradient backpropagation from a
theoretical perspective. In addition, we evaluate CML on ImageNet, CIFAR10,
CIFAR100, CIFAR10-DVS, DVS128-Gesture datasets, and show state-of-the-art
performance on all these datasets with significantly enhanced performances
compared with Spikingformer. For instance, our model achieves 77.64 $\%$ on
ImageNet, 96.04 $\%$ on CIFAR10, 81.4$\%$ on CIFAR10-DVS, with + 1.79$\%$ on
ImageNet, +1.16$\%$ on CIFAR100 compared with Spikingformer. | Chenlin Zhou, Han Zhang, Zhaokun Zhou, Liutao Yu, Zhengyu Ma, Huihui Zhou, Xiaopeng Fan, Yonghong Tian | 2023-05-10T07:48:08Z | http://arxiv.org/abs/2305.05954v3 | Enhancing the Performance of Transformer-based Spiking Neural Networks by SNN-optimized Downsampling with Precise Gradient Backpropagation
###### Abstract
Deep spiking neural networks (SNNs) have drawn much attention in recent years because of their low power consumption, biological rationality and event-driven property. However, state-of-the-art deep SNNs (including Spikformer and Spikingformer) suffer from a critical challenge related to imprecise gradient backpropagation. This problem arises from the improper design of downsampling modules in these networks, which greatly hampers the overall model performance. In this paper, we propose ConvBN-MaxPooling-LIF (CML), an SNN-optimized downsampling with precise gradient backpropagation. We prove that CML can effectively overcome the imprecision of gradient backpropagation from a theoretical perspective. In addition, we evaluate CML on the ImageNet, CIFAR10, CIFAR100, CIFAR10-DVS and DVS128-Gesture datasets. CML shows state-of-the-art performance on all these datasets among directly trained SNN models and significantly enhances the performance of transformer-based SNN models. For instance, our model achieves 77.64 \(\%\) on ImageNet, 96.04 \(\%\) on CIFAR10, 81.4\(\%\) on CIFAR10-DVS, with + 1.79\(\%\) on ImageNet, +1.16\(\%\) on CIFAR100 compared with Spikingformer. Codes will be available at Spikingformer-CML.
## 1 Introduction
Artificial neural networks (ANNs) have demonstrated remarkable success in various artificial intelligence fields, including image classification [1][2][3], object detection[4][5], and semantic segmentation[6]. Unlike ANNs, which rely on continuous high-precision floating-point data to process and transmit information, spiking neural networks (SNNs) use discrete temporal spike sequences. SNNs, as the third-generation neural network inspired by brain science[7], have attracted the attention of many researchers in recent years due to their low power consumption, biological rationality, and event-driven characteristics.
SNNs can be classified into two types: convolution-based SNNs and transformer-based SNNs, borrowing their architectures from convolutional neural networks and vision transformers in ANNs, respectively. Convolutional architectures exhibit translation invariance and local dependence, but their receptive fields are typically small, limiting their ability to capture global dependencies. In contrast, the vision
transformer is based on self-attention mechanisms that can capture long-distance dependencies, and has enhanced the performance of artificial intelligence on many computer vision tasks, including image classification [8; 9], object detection [10; 11] and semantic segmentation [12; 13]. Transformer-based SNNs are a novel form of SNN that combines the transformer architecture with SNNs, offering great potential to break through the performance bottleneck of SNNs. So far, transformer-based SNNs mainly comprise Spikformer[14] and Spikingformer[15]. Spikformer introduces a spike-based self-attention mechanism for the first time through its spiking self-attention (SSA) block and shows powerful performance. However, its energy efficiency is not optimal due to its integer-float multiplications. Spikingformer, a pure event-driven transformer-based SNN, effectively avoids the non-spike computations in Spikformer through spike-driven residual learning [15]. Spikingformer significantly reduces energy consumption compared with Spikformer while even improving network performance. However, both Spikformer and Spikingformer suffer from imprecise gradient backpropagation, since they inherit traditional downsampling modules without adaptation for the backpropagation of spikes. This backpropagation imprecision problem greatly limits the performance of spiking transformers.
In this paper, we propose a downsampling module adapted to SNNs, named ConvBN-MaxPooling-LIF (CML), and prove that CML can effectively overcome the imprecision of gradient backpropagation from a theoretical perspective. In addition, we evaluate CML on the static image datasets ImageNet, CIFAR10 and CIFAR100, and on the neuromorphic datasets CIFAR10-DVS and DVS128-Gesture. The experimental results show that our proposed CML can improve the performance of transformer-based SNNs by a large margin (e.g. +1.79\(\%\) on ImageNet, +1.16\(\%\) on CIFAR100 compared with Spikingformer), and Spikingformer/Spikformer + CML achieve state-of-the-art results on all the above datasets (e.g. 77.64 \(\%\) on ImageNet, 96.04 \(\%\) on CIFAR10, 81.4\(\%\) on CIFAR10-DVS) among directly trained SNN models.
## 2 Related Work
**Convolution-based Spiking Neural Networks.** One fundamental distinction between SNNs and traditional ANNs lies in the transmission mode of information within the networks. ANNs use continuous floating-point numbers with high precision, while SNNs with spike neurons transmit information in the form of discrete temporal spike sequences. A spike neuron, the basic unit in SNNs, receives floats or spikes as inputs and accumulates membrane potential across time until it reaches a threshold and generates spikes. Typical spike neurons include the Leaky Integrate-and-Fire (LIF) neuron[16], PLIF[17], KLIF[18], etc. At present, there are two ways to obtain SNNs. One involves converting a pre-trained ANN to an SNN[19][20][21], replacing the ReLU activation function in the ANN with spike neurons, resulting in performance comparable to ANNs but high latency. The other is to directly train SNNs[16], using surrogate gradients[22][23] to solve the problem of non-differentiable spikes, which results in low latency but relatively poor performance. Zheng et al. propose a threshold-dependent batch normalization (tdBN) method for directly training deep SNNs[24], the first work to explore directly-trained deep SNNs with high performance on ImageNet. Fang et al. proposed the spike-element-wise block, which overcomes problems such as gradient explosion and gradient vanishing, and extended directly trained SNNs to more than 100 layers [25].
**Transformer-based Spiking Neural Networks.** Vision Transformer (ViT)[8] has become a mainstream visual network model with its superior performance in computer vision tasks. Spikeformer[26] was proposed to combine the transformer architecture with SNNs, but it still relies on vanilla self-attention, involving numerous floating-point multiplication, division, and exponential operations. The Spikformer model [14], incorporating the innovative spiking self-attention (SSA) module that effectively eliminates the complex Softmax operation in self-attention, achieves low computation cost, low energy consumption and high performance. However, both Spikeformer and Spikformer suffer from non-spike computations (integer-float multiplications) in the convolution layer caused by the design of their residual connections. Spikingformer [15] proposes spike-driven residual learning, which effectively avoids the non-spike computations in Spikformer and Spikeformer and significantly reduces energy consumption. Spikingformer is the first pure event-driven transformer-based SNN.
**Downsamplings in SNNs.** Downsampling is a common technique used in both SNNs and ANNs to reduce the size of feature maps, which in turn reduces the number of network parameters, accelerates the calculation speed and prevents network overfitting. Mainstream downsampling units
include Maxpooling [14; 15], Avgpooling (Average pooling), and convolution with stride greater than 1[25]. Among them, Maxpooling and convolution downsampling are more commonly applied in directly trained SNNs. In this paper, we mainly choose Maxpooling as the key downsampling unit of the proposed CML.
## 3 Method
We examine the limitation of the current downsampling technique in SNNs, which hampers the performance improvement of SNNs. We then overcome this limitation by adapting the downsampling to make it compatible with SNNs.
### Defect of Downsampling in Spikformer and Spikingformer
SNNs typically employ the network module shown in Figure 1(a), which gives rise to the problem of imprecise gradient backpropagation. ConvBN represents the combined operation of convolution and batch normalization. Following ConvBN are the spike neurons, which receive the resultant current, accumulate the membrane potential across time, and generate a spike when the membrane potential exceeds the threshold; maxpooling is then performed on the spikes for downsampling. The output of ConvBN is the feature map \(x\in R^{m\times n}\), the output of the spike neuron is the feature map \(h\in R^{m\times n}\), and the output of maximum pooling is the feature map \(y\in R^{\frac{m}{s}\times\frac{n}{s}}\), where \(m\times n\) is the feature map size, and \(s\) is the pooling stride.
Given the global loss function \(L\) and the backpropagation gradient \(\frac{\partial L}{\partial y_{ij}}\) after downsampling, the gradient at the feature map \(x\) is as follows:
\[\frac{\partial L}{\partial x_{uv}}=\sum_{i=0}^{\frac{m}{s}}\sum_{j=0}^{\frac{n}{s}}\frac{\partial L}{\partial y_{ij}}\frac{\partial y_{ij}}{\partial h_{uv}}\frac{\partial h_{uv}}{\partial x_{uv}} \tag{1}\]
Figure 1: The downsampling module in Spikformer, Spikingformer and our proposed SNN-optimized downsampling. (a) shows the downsampling module in Spikformer and Spikingformer, which has the imprecise gradient backpropagation issue. (b) shows our proposed SNN-optimized downsampling (ConvBN-MaxPooling-LIF, CML) with precise gradient backpropagation. Note that Multistep LIF is a LIF spike neuron with time steps \(T>1\). Same as Spikformer, \(T\) is an independent dimension for the spike neuron layer. In other layers, it is merged with the batch size.
The backpropagation gradient of Maxpooling is:
\[\frac{\partial y_{ij}}{\partial h_{uv}}=\left\{\begin{array}{ll}1,&h_{uv}=\max \left(h_{i\times s+t,j\times s+r}\right)\\ 0,&others\end{array}\right. \tag{2}\]
where \(t,r\in[0,s)\). Suppose that the spike neuron used in our work is LIF, and the dynamic model of LIF is described as:
\[H[t] =V[t-1]+\frac{1}{\tau}\left(X[t]-\left(V[t-1]-V_{\text{reset}} \right)\right) \tag{3}\] \[S[t] =\Theta(H[t]-V_{th})\] (4) \[V[t] =H[t](1-S[t])+V_{\text{reset}}S[t] \tag{5}\]
where \(X[t]\) is the input current at time step \(t\), and \(\tau\) is the membrane time constant. \(V_{reset}\) represents the reset potential, \(V_{th}\) represents the spike firing threshold, \(H[t]\) and \(V[t]\) represent the membrane potential before and after spike firing at time step \(t\), respectively. \(\Theta(v)\) is the Heaviside step function, if \(v\geq 0\) then \(\Theta(v)=1\), otherwise \(\Theta(v)=0\). \(S[t]\) represents the output spike at time step \(t\).
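A minimal sketch of one simulation step of these dynamics (our illustration; the parameter values are assumptions) follows directly from Eqs. (3)-(5):

```python
import torch

def lif_step(x, v, tau=2.0, v_th=1.0, v_reset=0.0):
    """One time step of the LIF neuron, Eqs. (3)-(5)."""
    h = v + (x - (v - v_reset)) / tau        # charge, Eq. (3)
    s = (h >= v_th).float()                  # fire (Heaviside), Eq. (4)
    v_next = h * (1.0 - s) + v_reset * s     # reset, Eq. (5)
    return s, v_next
```

During training, the non-differentiable Heaviside step \(\Theta\) in Eq. (4) is handled via a surrogate gradient, which is what \(\Theta^{\prime}\) denotes in the gradient expression below.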
The backpropagation gradient of LIF neuron is:
\[\frac{\partial h_{uv}}{\partial x_{uv}}=\frac{\partial S[t]}{\partial X[t]}= \frac{1}{\tau}\times\Theta^{\prime}\left(H[t]-V[t]\right) \tag{6}\]
As a result, the backpropagation gradient on feature map \(x\) is:
\[\frac{\partial L}{\partial x_{uv}}=\begin{cases}\frac{1}{\tau}\frac{\partial L }{\partial y_{ij}}*\Theta^{{}^{\prime}}(H[t]-V[t]),&h_{uv}=\max(h_{i\times s+ t,j\times s+r})\\ 0,&others\end{cases} \tag{7}\]
According to Eq. (7), the gradient exists at the position of the maximal element in feature map h. However, since the outputs of the LIF neuron are spikes, the value at a position with a spike is 1 and otherwise 0. Therefore, the first element with value 1 in feature map h is chosen as the maximum, and the gradient flows through this position, which causes imprecise gradient backpropagation. To sum up, after downsampling, when conducting backpropagation in the network structure shown in Figure 1(a), the element that receives a gradient in feature map \(x\) is not necessarily the element carrying the most feature information; this is the imprecision problem of gradient backpropagation.
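This tie-breaking behavior can be observed directly (an illustration we add here, not from the paper): on a binary spike map with several equal maxima, the pooling backward pass routes the whole gradient to a single arbitrary spike position, typically the first one encountered:

```python
import torch
import torch.nn.functional as F

# Two positions spike (value 1), but only one receives the gradient.
h = torch.tensor([[[[0., 1.],
                    [1., 0.]]]], requires_grad=True)
y = F.max_pool2d(h, kernel_size=2)
y.sum().backward()
print(h.grad.squeeze())   # gradient concentrated on one spike position only
```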
### SNN-optimized Downsampling: CML
Here we improve the network structure of downsampling, as shown in Figure 1(b), which overcomes the imprecision problem of gradient backpropagation. The output of ConvBN is the feature map \(x\in R^{m\times n}\), the output of maximum pooling is the feature map \(h\in R^{\frac{m}{s}\times\frac{n}{s}}\), and the output of the spike neuron is the feature map \(y\in R^{\frac{m}{s}\times\frac{n}{s}}\), where \(m\times n\) is the feature map size, and \(s\) is the pooling stride. The backpropagation gradient \(\frac{\partial L}{\partial y_{ij}}\) after the LIF neuron is known; then the gradient at the feature map \(x\) is as follows:
\[\frac{\partial L}{\partial x_{uv}}=\sum_{i=0}^{\frac{m}{s}}\sum_{j=0}^{\frac{n}{s}}\frac{\partial L}{\partial y_{ij}}\frac{\partial y_{ij}}{\partial h_{ij}}\frac{\partial h_{ij}}{\partial x_{uv}} \tag{8}\]
The backpropagation gradient of Maxpooling is:
\[\frac{\partial h_{ij}}{\partial x_{uv}}=\begin{cases}1,&x_{uv}=\max(x_{i\times s +t,j\times s+r})\\ 0,&others\end{cases} \tag{9}\]
where \(t,r\in[0,s)\). The backpropagation gradient of LIF neuron is:
\[\frac{\partial y_{ij}}{\partial h_{ij}}=\frac{\partial S[t]}{\partial X[t]}= \frac{1}{\tau}\times\Theta^{\prime}\left(H[t]-V[t]\right) \tag{10}\]
As a result, the backpropagation gradient on feature map \(x\) is as follows:
\[\frac{\partial L}{\partial x_{uv}}=\begin{cases}\frac{1}{\tau}\frac{\partial L }{\partial y_{ij}}\times\Theta^{{}^{\prime}}(H[t]-V[t])&,x_{uv}=\max(x_{i\times s +t,j\times s+r})\\ 0&,others\end{cases} \tag{11}\]
According to Eq. (11), the maximum element in feature map h corresponds to the maximum element in feature map x. That is, after downsampling, when conducting backpropagation in the network structure shown in Figure 1(b), the element that receives a gradient in feature map x is the element carrying the most feature information, thus overcoming the imprecision problem of gradient backpropagation. In addition, since the LIF neurons now operate on the pooled feature map, the computational cost of CML on LIF neurons is only one quarter (for pooling stride \(s=2\)) of that of the downsampling in Figure 1(a).
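A minimal PyTorch sketch of the CML block follows (our reconstruction from Figure 1(b), not the released code; `lif` stands in for a multistep LIF layer from a spiking-NN library, and the layer hyperparameters are assumptions):

```python
import torch.nn as nn

class CMLDownsample(nn.Module):
    """ConvBN-MaxPooling-LIF downsampling, following Figure 1(b)."""
    def __init__(self, in_ch, out_ch, lif, stride=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                              padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.pool = nn.MaxPool2d(kernel_size=stride, stride=stride)
        self.lif = lif   # spike neuron layer applied *after* pooling

    def forward(self, x):          # time steps assumed merged with batch
        x = self.bn(self.conv(x))  # real-valued currents
        x = self.pool(x)           # pooling selects the strongest current
        return self.lif(x)         # spikes generated on the pooled map
```

Swapping `self.pool` and `self.lif` recovers the Figure 1(a) module, where pooling instead operates on binary spikes.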
### Application and Comparative Analysis of SNN-optimized Downsampling CML
Our proposed SNN-optimized downsampling structure (CML) is universal in spiking neural networks. From a theoretical perspective, the downsampling structure in Figure 1(a) can be easily replaced by our CML structure in Figure 1(b) to overcome the imprecision problem of gradient backpropagation in all kinds of spiking neural networks, which improves the network performance while reducing the computational cost at the same time.
In addition to the CML downsampling we proposed and the ConvBN-LIF-MaxPool used in Spikformer [14], we identified two other potential downsampling designs through investigation and analysis: ConvBN-AvgPool-LIF and ConvBN(stride=2)-LIF [25]. We therefore compared these four downsampling modules in experiments on CIFAR10/100, shown in Tab. 1. The experimental results show that our proposed ConvBN-MaxPool-LIF achieves the best performance among them, outperforming the others by a large margin. In Sec. 4, we carry out extensive experiments to further verify the effectiveness of our SNN-optimized downsampling module.
## 4 Experiments
In this section, we evaluate the CML downsampling module on static datasets(ImageNet[27], CIFAR10, and CIFAR100[28]) and neuromorphic datasets (CIFAR10-DVS and DVS128 Gesture [29]), using Spikformer and Spikingformer as baselines. Spikformer and Spikingformer are representative transformer-based spiking neural networks. Specifically, we replace SPS with CML in spikformer and replace SPED with CML in spikingformer, while keeping the remaining settings unchanged.
### ImageNet Classification
**ImageNet-1K** contains around \(1.3\) million \(1000\)-class images for training and \(50,000\) images for validation. We conduct experiments on ImageNet-1K to evaluate our CML module, with an input size of \(224\times 224\) by default both during training and inference. The training details of our proposed Spikformer + CML and Spikingformer + CML remain consistent with the original Spikformer and Spikingformer, respectively.
The experimental results shown in Tab. 2 indicate that CML significantly enhances the performance of Spikformer and Spikingformer with various network sizes. Specifically, Spikformer-8-768 + CML achieves 76.55\(\%\) Top-1 classification accuracy, which outperforms Spikformer-8-768 by
| Method | Backbone | Time Step | CIFAR10 | CIFAR100 |
| --- | --- | --- | --- | --- |
| ConvBN-LIF-MaxPool | Spikingformer-4-384-400E | 4 | 95.81 | 79.21 |
| **ConvBN-MaxPool-LIF** | Spikingformer-4-384-400E | 4 | **95.95** | **80.37** |
| ConvBN-AvgPool-LIF | Spikingformer-4-384-400E | 4 | 95.23 | 78.52 |
| ConvBN(stride=2)-LIF | Spikingformer-4-384-400E | 4 | 94.94 | 78.65 |
| ConvBN-LIF-MaxPool | Spikformer-4-384-400E | 4 | 95.51 | 78.21 |
| **ConvBN-MaxPool-LIF** | Spikformer-4-384-400E | 4 | **96.04** | **80.02** |
| ConvBN-AvgPool-LIF | Spikformer-4-384-400E | 4 | 95.13 | 78.53 |
| ConvBN(stride=2)-LIF | Spikformer-4-384-400E | 4 | 94.93 | 78.02 |

Table 1: Experimental results of our proposed ConvBN-MaxPool-LIF downsampling on CIFAR10/100, compared with potential or mainstream downsampling approaches in SNNs. In detail, we keep the remaining network structure of Spikformer and Spikingformer unchanged. Note that ConvBN-LIF-MaxPool, which is used in Spikformer and Spikingformer, is the baseline for comparison.
1.74\(\%\). In addition, Spikingformer-8-768 + CML achieves 77.64\(\%\) Top-1 classification accuracy, which outperforms Spikingformer-8-768 by 1.79\(\%\) and achieves the state-of-the-art performance on ImageNet in directly trained spiking neural network models. These results strongly validate the effectiveness of CML.
### CIFAR Classification
**CIFAR10/CIFAR100** both contain 50,000 train and 10,000 test images with 32 x 32 resolution. CIFAR10 and CIFAR100 contain 10 categories and 100 categories for classification, respectively. We evaluate our CML module on CIFAR10 and CIFAR100. The training details of our proposed Spikformer + CML and Spikingformer + CML are consistent with the original Spikformer and Spikingformer, respectively.
The experimental results are shown in Tab. 3. CML enhances the performance of all Spikformer and Spikingformer models on both CIFAR10 and CIFAR100 by a large margin. For CIFAR10, Spikformer-4-384-400E + CML achieves 96.04\(\%\) Top-1 classification accuracy, which outperforms Spikformer-4-384-400E by 0.53\(\%\) and realizes the state-of-the-art performance on CIFAR10 among directly trained spiking neural network models. Spikingformer-4-384-400E + CML achieves 95.95\(\%\) Top-1 classification accuracy, which outperforms Spikingformer-4-384-400E by 0.14\(\%\). For CIFAR100, Spikformer-4-384-400E + CML achieves 80.02\(\%\) Top-1 classification accuracy, which outperforms Spikformer-4-384-400E by 1.81\(\%\). Spikingformer-4-384-400E + CML achieves 80.37\(\%\) Top-1 classification accuracy, which outperforms Spikingformer-4-384-400E by 1.16\(\%\) and realizes the state-of-the-art performance on CIFAR100 among directly trained spiking neural network models. These experimental results further verify the effectiveness of our method.
| Methods | Architecture | Param (M) | Time Step | Top-1 Acc |
| --- | --- | --- | --- | --- |
| Hybrid training[30] | ResNet-34 | 21.79 | 250 | 61.48 |
| TET[31] | Spiking-ResNet-34 | 21.79 | 6 | 64.79 |
| TET[31] | SEW-ResNet-34 | 21.79 | 4 | 68.00 |
| Spiking ResNet[32] | ResNet-34 | 21.79 | 350 | 71.61 |
| Spiking ResNet[32] | ResNet-50 | 25.56 | 350 | 72.75 |
| STBP-tdBN[24] | Spiking-ResNet-34 | 21.79 | 6 | 63.72 |
| SEW ResNet[25] | SEW-ResNet-34 | 21.79 | 4 | 67.04 |
| SEW ResNet[25] | SEW-ResNet-50 | 25.56 | 4 | 67.78 |
| SEW ResNet[25] | SEW-ResNet-101 | 44.55 | 4 | 68.76 |
| SEW ResNet[25] | SEW-ResNet-152 | 60.19 | 4 | 69.26 |
| MS-ResNet[33] | ResNet-104 | 44.55+ | 5 | 74.21 |
| Transformer (ANN)[14] | Transformer-8-512 | 29.68 | 1 | 80.80 |
| Spikformer[14] | Spikformer-8-384 | 16.81 | 4 | 70.24 |
| Spikformer[14] | Spikformer-8-512 | 29.68 | 4 | 73.38 |
| Spikformer[14] | Spikformer-8-768 | 66.34 | 4 | 74.81 |
| **Spikformer + CML** | Spikformer-8-384 | 16.81 | 4 | **72.73 (+2.49)** |
| **Spikformer + CML** | Spikformer-8-512 | 29.68 | 4 | **75.61 (+2.23)** |
| **Spikformer + CML** | Spikformer-8-768 | 66.34 | 4 | **77.34 (+2.53)** |
| Spikingformer[15] | Spikingformer-8-384 | 16.81 | 4 | 72.45 |
| Spikingformer[15] | Spikingformer-8-512 | 29.68 | 4 | 74.79 |
| Spikingformer[15] | Spikingformer-8-768 | 66.34 | 4 | 75.85 |
| **Spikingformer + CML** | Spikingformer-8-384 | 16.81 | 4 | **74.35 (+1.90)** |
| **Spikingformer + CML** | Spikingformer-8-512 | 29.68 | 4 | **76.54 (+1.75)** |
| **Spikingformer + CML** | Spikingformer-8-768 | 66.34 | 4 | **77.64 (+1.79)** |

Table 2: Evaluation on ImageNet. The default input resolution of all the models in inference is 224 × 224. The CML module enhances the performance of all Spikformer and Spikingformer models by a large margin. Note that the 77.64\(\%\) of Spikingformer-8-768 + CML is the state-of-the-art performance on ImageNet among directly trained SNN models.
### DVS Classification
**CIFAR10-DVS Classification.** CIFAR10-DVS is a neuromorphic dataset derived from the CIFAR10 dataset, where the visual input is captured by a Dynamic Vision Sensor (DVS) that represents changes in pixel intensity as asynchronous events rather than static frames. It includes 9,000 training samples and 1,000 test samples. We carry out experiments on CIFAR10-DVS to evaluate our CML module. The training details of our proposed Spikformer + CML and Spikingformer + CML are consistent with the original Spikformer and Spikingformer, which all contain 2 spiking transformer blocks with 256 patch embedding dimensions.
We compare our method with SOTA methods on CIFAR10-DVS in Tab. 4. Spikingformer + CML achieves 81.4\(\%\) top-1 accuracy with 16 time steps and 80.5\(\%\) accuracy with 10 time steps, outperforming Spikingformer by 0.1\(\%\) and 0.6\(\%\), respectively. Spikformer + CML achieves 80.9\(\%\) top-1 accuracy with 16 time steps and 79.2\(\%\) accuracy with 10 time steps, outperforming Spikformer by 0.3\(\%\) and 0.6\(\%\), respectively. The 81.4\(\%\) achieved by Spikingformer + CML is the state-of-the-art performance on CIFAR10-DVS among directly trained spiking neural networks.
**DVS128 Gesture Classification.** DVS128 Gesture is a gesture recognition dataset that contains 11 hand gesture categories from 29 individuals under 3 illumination conditions. The training details of our proposed Spikformer + CML and Spikingformer + CML are consistent with the original
\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
Methods & Architecture & Param (M) & Time Step & CIFAR10 Acc & CIFAR100 Acc \\
\hline
Hybrid training[30] & VGG-11 & 9.27 & 125 & 92.22 & 67.87 \\
Diet-SNN[34] & ResNet-20 & 0.27 & 10/5 & 92.54 & 64.07 \\
STBP[16] & CIFARNet & 17.54 & 12 & 89.83 & - \\
STBP NeuNorm[35] & CIFARNet & 17.54 & 12 & 90.53 & - \\
TSSL-BP[36] & CIFARNet & 17.54 & 5 & 91.41 & - \\
STBP-tdBN[24] & ResNet-19 & 12.63 & 4 & 92.92 & 70.86 \\
TET[31] & ResNet-19 & 12.63 & 4 & 94.44 & 74.47 \\
MS-ResNet[33] & ResNet-110 & - & - & 91.72 & 66.83 \\
 & ResNet-482 & - & - & 91.90 & - \\
\hline
ANN[14] & ResNet-19* & 12.63 & 1 & 94.97 & 75.35 \\
 & Transformer-4-384 & 9.32 & 1 & 96.73 & 81.02 \\
\hline
Spikformer[14] & Spikformer-4-256 & 4.15 & 4 & 93.94 & 75.96 \\
 & Spikformer-2-384 & 5.76 & 4 & 94.80 & 76.95 \\
 & Spikformer-4-384 & 9.32 & 4 & 95.19 & 77.86 \\
 & Spikformer-4-384-400E & 9.32 & 4 & 95.51 & 78.21 \\
\hline
**Spikformer + CML** & Spikformer-4-256 & 4.15 & 4 & **94.82(+0.88)** & **77.64(+1.68)** \\
 & Spikformer-2-384 & 5.76 & 4 & **95.63(+0.83)** & **78.75(+1.80)** \\
 & Spikformer-4-384 & 9.32 & 4 & **95.93(+0.74)** & **79.65(+1.79)** \\
 & Spikformer-4-384-400E & 9.32 & 4 & **96.04(+0.53)** & **80.02(+1.81)** \\
\hline
Spikingformer[15] & Spikingformer-4-256 & 4.15 & 4 & 94.77 & 77.43 \\
 & Spikingformer-2-384 & 5.76 & 4 & 95.22 & 78.34 \\
 & Spikingformer-4-384 & 9.32 & 4 & 95.61 & 79.09 \\
 & Spikingformer-4-384-400E & 9.32 & 4 & 95.81 & 79.21 \\
\hline
**Spikingformer + CML** & Spikingformer-4-256 & 4.15 & 4 & **94.94(+0.17)** & **78.19(+0.76)** \\
 & Spikingformer-2-384 & 5.76 & 4 & **95.54(+0.32)** & **78.87(+0.53)** \\
 & Spikingformer-4-384 & 9.32 & 4 & **95.81(+0.20)** & **79.98(+0.89)** \\
 & Spikingformer-4-384-400E & 9.32 & 4 & **95.95(+0.14)** & **80.37(+1.16)** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Performance comparison of our method on CIFAR10/100. The CML module enhances the performance of all Spikformer and Spikingformer models on both CIFAR10 and CIFAR100 by a large margin. Note that Spikformer-4-384-400E + CML (96.04 \(\%\)) and Spikingformer-4-384-400E + CML (80.37 \(\%\)) set the state of the art on CIFAR10 and CIFAR100, respectively, among directly trained spiking neural networks.
Spikformer and Spikingformer on DVS128 Gesture; both models contain 2 spiking transformer blocks with a patch embedding dimension of 256.
We compare our method with SOTA methods on DVS128 Gesture in Tab. 4. Spikingformer + CML achieves 98.6\(\%\) top-1 accuracy with 16 time steps and 97.2\(\%\) accuracy with 10 time steps, outperforming Spikingformer by 0.3\(\%\) and 1.0\(\%\), respectively. Spikformer + CML achieves 98.6\(\%\) top-1 accuracy with 16 time steps and 97.6\(\%\) accuracy with 10 time steps, outperforming Spikformer by 0.7\(\%\) and 1.8\(\%\), respectively. The 98.6\(\%\) achieved by both Spikingformer + CML and Spikformer + CML is the state-of-the-art performance on DVS128 Gesture among directly trained spiking neural networks.
## 5 Conclusion
In this paper, we investigate the imprecision problem of gradient backpropagation caused by downsampling in Spikformer and Spikingformer. Subsequently, we propose an SNN-optimized downsampling module with precise gradient backpropagation, named ConvBN-MaxPooling-LIF (CML), and prove from a theoretical perspective that CML can effectively overcome the imprecision of gradient backpropagation. In addition, we evaluate CML on the static datasets ImageNet, CIFAR10 and CIFAR100 and the neuromorphic datasets CIFAR10-DVS and DVS128 Gesture. The experimental results show that our proposed CML improves the performance of SNNs by a large margin (e.g., +1.79\(\%\) on ImageNet and +1.16\(\%\) on CIFAR100 compared with Spikingformer), and our models achieve the state of the art on all of the above datasets (e.g., 77.64 \(\%\) on ImageNet, 96.04 \(\%\) on CIFAR10, 81.4\(\%\) on CIFAR10-DVS) among directly trained SNNs.
## 6 Acknowledgment
This work is supported by grants from the National Natural Science Foundation of China (62236009 and 62206141).
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
 & \multicolumn{2}{c}{CIFAR10-DVS} & \multicolumn{2}{c}{DVS128} \\
\cline{2-5}
Method & Time Step & Acc & Time Step & Acc \\
\hline
LIAF-Net [37] & 10 & 70.4 & 60 & 97.6 \\
TA-SNN [38] & 10 & 72.0 & 60 & 98.6 \\
Rollout [39] & 48 & 66.8 & 240 & 97.2 \\
DECOLLE [40] & - & - & 500 & 95.5 \\
tdBN [24] & 10 & 67.8 & 40 & 96.9 \\
PLIF [17] & 20 & 74.8 & 20 & 97.6 \\
SEW-ResNet [25] & 16 & 74.4 & 16 & 97.9 \\
Dspike [41] & 10 & 75.4 & - & - \\
SALT [42] & 20 & 67.1 & - & - \\
DSR [43] & 10 & 77.3 & - & - \\
MS-ResNet [33] & - & 75.6 & - & - \\
\hline
Spikformer[14] (Our Implement) & 10 & 78.6 & 10 & 95.8 \\
 & 16 & 80.6 & 16 & 97.9 \\
\hline
**Spikformer + CML** & 10 & **79.2(+0.6)** & 10 & **97.6(+1.8)** \\
 & 16 & **80.9(+0.3)** & 16 & **98.6(+0.7)** \\
\hline
Spikingformer[15] & 10 & 79.9 & 10 & 96.2 \\
 & 16 & 81.3 & 16 & 98.3 \\
\hline
**Spikingformer + CML** & 10 & **80.5(+0.6)** & 10 & **97.2(+1.0)** \\
 & 16 & **81.4(+0.1)** & 16 & **98.6(+0.3)** \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Results on two neuromorphic datasets, CIFAR10-DVS and DVS128 Gesture. The Spikformer result is our implementation based on its open-source code. Note that Spikingformer + CML achieves 81.4 \(\%\) on CIFAR10-DVS and 98.6 \(\%\) on DVS128 Gesture, the state-of-the-art performance among directly trained spiking neural networks. |
2307.16189 | Stable Adam Optimization for 16-bit Neural Networks Training | In this research, we address critical concerns related to the numerical
instability observed in 16-bit computations of machine learning models. Such
instability, particularly when employing popular optimization algorithms like
Adam, often leads to unstable training of deep neural networks. This not only
disrupts the learning process but also poses significant challenges in
deploying dependable models in real-world applications. Our investigation
identifies the epsilon hyperparameter as the primary source of this
instability. A nuanced exploration reveals that subtle adjustments to epsilon
within 16-bit computations can enhance the numerical stability of Adam,
enabling more stable training of 16-bit neural networks. We propose a novel,
dependable approach that leverages updates from the Adam optimizer to bolster
the stability of the learning process. Our contributions provide deeper
insights into optimization challenges in low-precision computations and offer
solutions to ensure the stability of deep neural network training, paving the
way for their dependable use in various applications. | Juyoung Yun | 2023-07-30T10:03:36Z | http://arxiv.org/abs/2307.16189v7 | Trustworthy Optimization: A Novel Approach to Counter Numerical Instability in 16-bit Neural Network Training
###### Abstract
In this research, we address critical trustworthiness concerns related to the numerical instability observed in 16-bit computations of machine learning models. Such instability, particularly when employing popular optimization algorithms like RMSProp and Adam, often leads to unreliable training of deep neural networks. This not only disrupts the learning process but also poses significant challenges in deploying dependable models in real-world applications. Our investigation identifies the epsilon hyperparameter as the primary source of this instability. A nuanced exploration reveals that subtle adjustments to epsilon within 16-bit computations can enhance the reliability of RMSProp and Adam, enabling more trustworthy training of 16-bit neural networks. We propose a novel, dependable approach that leverages updates from the Adam optimizer to bolster the stability of the learning process. Our contributions provide deeper insights into optimization challenges in low-precision computations and offer solutions to ensure the trustworthiness and stability of deep neural network training, paving the way for their dependable use in various applications.
## 1 Introduction
The meteoric advancement of machine learning and artificial intelligence technologies has enabled the construction of neural networks that effectively emulate the complex computations of the human brain. These deep learning models have found utility in a wide range of applications, such as computer vision, natural language processing, autonomous driving, and more. With the growing complexity and sophistication of these neural network models, the computational requirements, particularly for 32-bit operations, have exponentially increased. This heightened computational demand necessitates the exploration of more efficient alternatives, such as 16-bit operations.
However, the shift to 16-bit operations is riddled with challenges. A common standpoint within the research community argues that 16-bit operations are not ideally suited for neural network computations. This belief is mainly attributable to concerns related to numerical instability during the backpropagation phase, especially when popular optimizers like Adam [8] and RMSProp [15] are employed. This instability, more pronounced during the optimizer-mediated backpropagation process rather than forward propagation, can negatively impact the performance of 16-bit operations and compromise the functioning of the neural network model. Current optimizers predominantly operate on 32-bit precision. If these are deployed in a 16-bit environment without appropriate hyperparameter fine-tuning, the neural network models encounter difficulties during learning. This issue is particularly evident in backward propagation, which heavily relies on the optimizer. Confronted with these challenges, the objective of this research is to conduct an exhaustive investigation into the feasibility and implementation of 16-bit operations for neural network training. We propose and
evaluate innovative strategies aimed at reducing the numerical instability encountered during the backpropagation phase under 16-bit environments. A significant focus of this paper is also dedicated to exploring the future possibilities of developing 16-bit based optimizers. One of the fundamental aims of this research is to adapt key optimizers such as RMSProp and Adam to prevent numerical instability, thereby facilitating efficient 16-bit computations. These newly enhanced optimizers are designed to not only address the issue of numerical instability but also leverage the computational advantages offered by 16-bit operations, all without compromising the overall performance of the neural network models. Through this research, our intention goes beyond improving the efficiency of neural network training; we also strive to validate the use of 16-bit operations as a dependable and efficient computational methodology in the domain of deep learning. We anticipate that our research will contribute to a shift in the prevalent perceptions about 16-bit operations and will foster further innovation in the field. Ultimately, we hope our findings will pave the way for a new era in deep learning research characterized by efficient, high-performance neural network models.
## 2 Related Works
The matter of numerical precision in deep learning model training has garnered substantial attention in recent years. A milestone study by Gupta et al. [4] was among the first to explore the potential of lower numerical precision in deep learning, emphasizing the critical balance between computational efficiency and precision. They suggested that, with careful implementation, lower precision models can be as effective as their higher precision counterparts while requiring less computational and memory resources. Building upon these findings, significant advancements have been made in utilizing 16-bit operations for training convolutional neural networks. Courbariaux et al. [2] pioneered a novel technique to train neural networks using binary weights and activations, significantly reducing memory and computational demands. Despite these advancements, numerical instability during backpropagation remains a persistent challenge. Bengio et al. [1] illustrated the difficulties encountered in learning long-term dependencies with gradient descent due to numerical instability. Such findings underscore the need for innovative solutions to mitigate this widespread issue. One critical component of addressing this challenge lies in the development of effective optimizers. Adam, an optimizer introduced by Kingma and Ba [8], is known to face issues of numerical instability when employed in lower-precision environments. In the context of efficient neural network training, Han et al. [5] proposed a three-stage pipeline that significantly reduces the storage requirements of the network. Their work forms an integral part of the broader discussion on efficient neural network training, further reinforcing the relevance of our research on 16-bit operations. Through our investigation, we aim to contribute to this body of work by presenting a novel approach to addressing numerical instability issues associated with 16-bit precision in deep learning model training. **Mixed Precision Training**: Micikevicius et al. [12] stressed the importance of transitioning from 32-bit to 16-bit operations in deep learning, given the memory and computational constraints associated with training increasingly complex neural networks. Their research demonstrated that mixed precision computations offer a more memory- and computation-efficient alternative without compromising the model's performance. Another study by Yun et al. [16] provided a comprehensive theoretical analysis focusing on the performance of pure 16-bit floating-point neural networks. They introduced the concepts of floating-point error and tolerance to define the conditions under which 16-bit models can approximate their 32-bit counterparts. Their results indicate that pure 16-bit floating-point neural networks can achieve similar or superior performance compared to their mixed-precision and 32-bit counterparts, offering a unique perspective on the benefits of pure 16-bit networks.
## 3 Analysis
Through theoretical analysis, we examine in which parts of a neural network numerical instability arises, since such instability can ruin the training process.
### Forward Propagation
**Linear Network.** During forward propagation, each column of the input matrix \(X\) represents a distinct training example. This matrix layout simplifies the network's computation, as each feature of
every training sample can be processed in parallel, exploiting the parallel nature of matrix operations.
\[X=\begin{bmatrix}x_{11}&x_{12}&\cdots&x_{1m}\\ x_{21}&x_{22}&\cdots&x_{2m}\\ \vdots&\vdots&\ddots&\vdots\\ x_{n1}&x_{n2}&\cdots&x_{nm}\end{bmatrix}\]
The weight matrix \(W\) represents the strength and direction of connections between neurons. Each column of \(W\) corresponds to the set of weights connecting every input neuron to a specific hidden neuron.
\[W=\begin{bmatrix}w_{11}&w_{12}&\cdots&w_{1k}\\ w_{21}&w_{22}&\cdots&w_{2k}\\ \vdots&\vdots&\ddots&\vdots\\ w_{n1}&w_{n2}&\cdots&w_{nk}\end{bmatrix}\]
The bias vector \(b\) provides an additional degree of freedom, allowing each neuron in the hidden layer to be activated not just based on weighted input but also based on this inherent property.
\[b=\begin{bmatrix}b_{1}\\ b_{2}\\ \vdots\\ b_{k}\end{bmatrix}\]
The raw outputs or pre-activations (denoted as \(Z\)) are computed by multiplying the transposed weight matrix with the input matrix and adding the bias.
\[Z=W^{T}X+b\]
After computing \(Z\), we need a mechanism to introduce non-linearity into our model. Without this, no matter how deep our model, it would behave just like a single-layer perceptron. This non-linearity is introduced using an activation function \(\sigma(\cdot)\), applied element-wise to the matrix \(Z\).
\[A=\sigma(Z)\]
Here, each entry \(a_{ij}\) in \(A\) is the activated value of the \(i\)-th neuron in the hidden layer when the model is provided the \(j\)-th training example. These activations will either be used as input for subsequent layers or be the final output of the network.
In a linear (fully connected) deep neural network, each operation in the forward pass involves only straightforward mathematical operations such as addition and multiplication. There is no division or complex operation that could amplify small numerical errors or lead to potential instability.
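To make this concrete, the following minimal sketch (our illustration, not code from the paper) runs the forward pass above entirely in 16-bit floating point using NumPy; the array shapes and the ReLU activation are assumptions chosen for the example.

```python
import numpy as np

# Forward pass of one dense layer, computed entirely in float16.
# Only multiplication, addition, and a max-comparison occur, so no
# division can amplify rounding errors.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(4, 3)).astype(np.float16)  # 4 features, 3 examples
W = rng.normal(0.0, 0.1, size=(4, 2)).astype(np.float16)   # 4 inputs -> 2 hidden units
b = np.zeros((2, 1), dtype=np.float16)

Z = W.T @ X + b                   # pre-activations, shape (2, 3)
A = np.maximum(Z, np.float16(0))  # ReLU provides the element-wise non-linearity
print(A.dtype, A)                 # float16 throughout; values stay well in range
```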
**Convolutional Network.** In the realm of deep learning, especially when processing image data, CNNs have gained significant prominence. The foundational blocks of CNNs involve convolving filters over input data and pooling layers [11].
The convolution layer can be detailed by observing the element-wise multiplication and summation [11]. Specifically, in our 3x3 input matrix \(I\) and 2x2 filter \(F\) example:
\[I=\begin{bmatrix}i_{11}&i_{12}&i_{13}\\ i_{21}&i_{22}&i_{23}\\ i_{31}&i_{32}&i_{33}\end{bmatrix}\qquad\qquad\qquad\qquad F=\begin{bmatrix}f_{ 11}&f_{12}\\ f_{21}&f_{22}\end{bmatrix}\]
For the top-left corner of \(I\), the convolution operation using the filter \(F\) is as follows:
\[o_{11}=\begin{bmatrix}i_{11}&i_{12}\\ i_{21}&i_{22}\end{bmatrix}\odot\begin{bmatrix}f_{11}&f_{12}\\ f_{21}&f_{22}\end{bmatrix}\]
Where \(\odot\) represents element-wise multiplication. This implies:
\[o_{11}=i_{11}\cdot f_{11}+i_{12}\cdot f_{12}+i_{21}\cdot f_{21}+i_{22}\cdot f _{22}\]
The filter \(F\) continues sliding across \(I\), performing similar computations for each position, resulting in the matrix:
\[O=\begin{bmatrix}o_{11}&o_{12}\\ o_{21}&o_{22}\end{bmatrix}\]
Each element in the output matrix \(O\) is computed in the same fashion, with each \(o_{ij}\) being the result of convolving a 2x2 segment of \(I\) with the filter \(F\).
Pooling layers can also be represented using matrix notation, though the operation is simpler. For max pooling with a 2x2 window, given an input matrix \(I\), the operation for a single position can be illustrated as [9]:
\[P_{ij}=\max\begin{bmatrix}I_{2i-1,2j-1}&I_{2i-1,2j}\\ I_{2i,2j-1}&I_{2i,2j}\end{bmatrix}\]
For an example 4x4 input matrix being processed by a 2x2 max pooling operation:
\[I=\begin{bmatrix}i_{11}&i_{12}&i_{13}&i_{14}\\ i_{21}&i_{22}&i_{23}&i_{24}\\ i_{31}&i_{32}&i_{33}&i_{34}\\ i_{41}&i_{42}&i_{43}&i_{44}\end{bmatrix}\]
The output matrix \(P\) from this max pooling operation will be:
\[P=\begin{bmatrix}\max(i_{11},i_{12},i_{21},i_{22})&\max(i_{13},i_{14},i_{23}, i_{24})\\ \max(i_{31},i_{32},i_{41},i_{42})&\max(i_{33},i_{34},i_{43},i_{44})\end{bmatrix}\]
In CNNs, actions taken during the forward pass primarily consist of basic arithmetic, such as addition and multiplication. There's an absence of division or intricate calculations that might magnify minor numerical inaccuracies or trigger potential instability.
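The sketch below (ours, with an arbitrary example filter) reproduces the two operations above in NumPy: a valid convolution built from element-wise products and sums, and a non-overlapping 2x2 max pooling. Neither involves division.

```python
import numpy as np

def conv2d_valid(I, F):
    """Valid 2D convolution: slide F over I, multiply element-wise, sum."""
    h = I.shape[0] - F.shape[0] + 1
    w = I.shape[1] - F.shape[1] + 1
    O = np.zeros((h, w), dtype=I.dtype)
    for i in range(h):
        for j in range(w):
            O[i, j] = np.sum(I[i:i + F.shape[0], j:j + F.shape[1]] * F)
    return O

def maxpool2x2(I):
    """Non-overlapping 2x2 max pooling via a reshape trick."""
    h, w = I.shape[0] // 2, I.shape[1] // 2
    return I.reshape(h, 2, w, 2).max(axis=(1, 3))

I3 = np.arange(9, dtype=np.float16).reshape(3, 3)   # a 3x3 input I
F2 = np.array([[1, 0], [0, -1]], dtype=np.float16)  # an example 2x2 filter F
print(conv2d_valid(I3, F2))                         # the 2x2 output matrix O
print(maxpool2x2(np.arange(16, dtype=np.float16).reshape(4, 4)))  # the 2x2 matrix P
```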
Especially in networks with a significant number of layers, the iterative multiplications during forward propagation can intermittently lead to either vanishingly small values or notably large increments in the activations. However, methods such as batch normalization [7] have been introduced to regulate these activations and prevent them from attaining extreme values. Therefore, forward propagation, which involves the transmission of input data through the network to produce an output, is generally more resilient to such instabilities. This comparative robustness can be ascribed to a variety of underlying reasons:
* Simplified Operations: Forward propagation predominantly involves elementary mathematical operations such as addition and multiplication. Thus, the results are less prone to reaching extreme values unless the input data or model parameters are improperly scaled or exhibit a high dynamic range [6].
* Absence of Derivatives: Contrasting backpropagation, forward propagation does not necessitate the computation of derivatives, a process that could engender exceedingly large or small numbers, thus inducing numerical instability [14].
* Limited Propagation of Errors: Forward propagation is less likely to accumulate and propagate numerical errors throughout the network. This contrasts with backpropagation, where errors could proliferate through the derivative chain [11].
### Backward Propagation
We focus on backpropagation rather than forward propagation, because forward propagation consists only of matrix products and sums, without division operations. In pursuit of thorough mathematical scrutiny, our objective pivots on three principal tenets: firstly, to dissect and understand the operational intricacies of each optimizer; secondly, to pinpoint the exact circumstances and causes that give rise to numerical instability; and finally, to formulate effective solutions that can aptly mitigate these issues. To exemplify, consider the update rule in the gradient descent method for optimizing a neural network [13]:
\[\theta=\theta-\eta\nabla_{\theta}J(\theta)\]
In this equation, \(\theta\) denotes the parameters of the model, \(\eta\) represents the learning rate, and \(\nabla_{\theta}J(\theta)\) signifies the gradient of the loss function \(J(\theta)\) with respect to the parameters. Understanding such equations allows us to uncover the internal mechanics of each optimizer, thereby facilitating our quest to alleviate numerical instability.
### Mini-Batch Gradient Descent
Mini-batch gradient descent is a variant of the gradient descent algorithm that divides the training datasets into small batches to compute errors [13]. The update rule for the parameters in mini-batch gradient descent can be written as:
\[\theta= \theta-\eta\nabla_{\theta}J(\theta;x^{(i:i+n)};y^{(i:i+n)})\]
This optimizer leverages mini-batches of \(n\) examples \((x^{(i:i+n)},y^{(i:i+n)})\) from the training dataset \(x^{(i)},...,x^{(n)}\) to update the model's parameters \(\theta\in\mathbb{R}^{d}\). It aims to minimize the loss function \(J(\theta)\) by taking calculated steps opposite to the gradient of the function with respect to the parameters [13]. The magnitude of these steps is dictated by the learning rate, denoted by \(\eta\). It is postulated that mini-batch gradient descent can function optimally in a 16-bit environment, given the limited number of hyperparameters that can provoke numerical instability within the neural network during the process of minimizing \(J(\theta)\).
* **Assumption 3.1**_Consider the input image data \(x\in X=\{x_{1},\dots,x_{i}\}\) and labels \(y\in Y=\{y_{1},\dots,y_{i}\}\), where each \(x\in\mathbb{R}\) satisfies \(0<x<255\), and each \(y\in\mathbb{Z}\). The normalized data \(\bar{x}\) will lie in the range \(0.0<\bar{x}<1.0\). If \(fp16_{min}<f(x,y)=\nabla_{\theta}J(\theta;x^{(i)};y^{(i)})<fp16_{max}\), and \(f(x,y)\) does not engender numerical instability \(I=\{NaN,Inf\}\) during the update of \(\theta\), it can be deduced that the mini-batch gradient descent method will function properly for training the 16-bit neural network._
Based on _Assumption 3.1_, the number of hyperparameters in the Mini-Batch Gradient Descent that could instigate numerical instability is restricted. Consequently, we anticipate that this optimizer should function efficiently within a 16-bit neural network with default hyperparameters.
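As a sanity check of Assumption 3.1, the sketch below (our illustration) runs plain mini-batch gradient descent on a toy linear model entirely in float16; the data shapes, the squared-error loss, and the learning rate are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(512, 10)).astype(np.float16)  # one normalized mini-batch
y = rng.integers(0, 2, size=(512, 1)).astype(np.float16)
theta = np.zeros((10, 1), dtype=np.float16)
eta = np.float16(1e-2)

for _ in range(100):
    err = X @ theta - y                      # residuals of a linear model
    grad = (X.T @ err) / np.float16(len(X))  # mean squared-error gradient
    theta = theta - eta * grad               # multiply-and-subtract update
print(np.isfinite(theta).all())              # True: no NaN/Inf appears in fp16
```

The only division here is by the fixed batch size, a bounded constant; no division by a tiny epsilon-like quantity ever occurs, which is why the update stays stable.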
### RMSProp
Root Mean Square Propagation, more commonly referred to as RMSProp, is an optimization methodology utilized in the training of neural networks. This technique was proposed by Geoff Hinton during his Coursera class in Lecture 6e [15]. However, when implemented within a 16-bit neural network using default hyperparameters, RMSProp results in the model weights turning into \(NaN\), thereby causing a complete halt in the training process. Figure 1 shows that when training MNIST with a 16-bit neural network using RMSProp, the weights become NaN (white points) after a few epochs if a value less than 1e-4 is used as epsilon. To circumvent this numerical problem, it is essential to understand the source of numerical instability that RMSProp introduces in 16-bit environments. The update rule for RMSProp can be expressed as follows:
\[w_{t}= w_{t-1}-\eta\frac{g_{t}}{\sqrt{v_{t}}+\epsilon}\]
The 16-bit floating-point representation has a more constrained computational scope in contrast to the 32-bit variant. When values exceed or fall below the valid range of \(fp16\), numerical instability emerges during the training sequence. In the context of the RMSProp optimizer, the learning rate \(\eta\) typically doesn't instigate issues provided \(\eta>fp16_{min}\). However, the denominator \(v_{t}\), adaptive learning rate, and \(\epsilon\) can trigger numerical instability if their values extend beyond the allowable range.
\[v_{t}= \begin{cases}0&if\ v_{t}<fp16_{min}\\ \beta v_{t-1}+(1-\beta)g_{t}{}^{2}&otherwise\\ \infty&if\ v_{t}>fp16_{max}\end{cases}\]
If the value of \(v_{t}\) falls below \(fp16_{min}\), \(v_{t}\) converges to zero, which in turn affects \(w_{t}\). The expression for \(w_{t}\) can then be rearranged so that only \(\epsilon\) remains in the denominator. Under these circumstances, the numerical instability of the optimizer is primarily induced by the default \(\epsilon\) value. In TensorFlow, the default value for \(\epsilon\) is set to 1e-07, and the default learning rate \(\eta\) is configured as 1e-03.
\[w_{t}= w_{t-1}-\eta\cdot g_{t}\cdot\epsilon^{-1}\]
The adjustment of weights is contingent upon the \(\epsilon\) value. Nevertheless, numerical instability may emerge when this value is inverted, leading to underflow and overflow incidents in the optimizer.
\[\epsilon^{-1}= \begin{cases}\infty&if\ \epsilon^{-1}>fp16_{Max}\\ \epsilon^{-1}&otherwise\end{cases}\]
Should the value of epsilon decline to 1e-07, its reciprocal soars to 1e+07. While the 16-bit format accommodates 1e-07, it cannot handle 1e+07, leading to an overflow that escalates to \(\infty\). This overflow is an inevitable outcome, independent of \(g_{t}\), the gradient's value. The progressive unfolding of such overflow instances stirs up numerical instability, eventually compromising the integrity of the neural network. Whenever \(\epsilon^{-1}>fp16_{max}\), the term \(g_{t}\cdot\epsilon^{-1}\) will invariably result in \(I=\infty,-\infty\), contingent on the sign of \(g_{t}\).
\[w_{t}= \begin{cases}w_{t-1}-\infty&if\ g_{t}>0\\ w_{t-1}+\infty&if\ g_{t}<0\end{cases}\]
The current weight is given by \(w_{t}=w_{t-1}-i\ (i\in I)\), which yields \(NaN\) or \(-\infty\) when \(w_{t-1}\in\{\infty,-\infty\}\). Upon the occurrence of \(NaN\), this optimizer ceases to function on a 16-bit neural network, a consequence of the TensorFlow default epsilon setting of 1e-07.
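This failure mode is easy to reproduce; the following lines (our illustration, with arbitrary gradient and weight values) show the overflow in NumPy's float16:

```python
import numpy as np

eps = np.float16(1e-7)           # representable as an fp16 subnormal
print(np.float16(1) / eps)       # inf -- the reciprocal ~1e+07 exceeds fp16_max (65504)

g, eta, w = np.float16(0.01), np.float16(1e-3), np.float16(0.5)
print(w - eta * (g / eps))       # -inf: g_t / eps already overflows, so the
                                 # weight is destroyed regardless of eta

eps_safe = np.float16(1e-4)      # 0.01 / 1e-4 = 100 stays within fp16
print(w - eta * (g / eps_safe))  # ~0.4: a finite, sensible update
```

This matches Figure 1, where epsilon values below 1e-4 destroy the weights.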
* **Assumption 3.2**_We posit that the input image data \(x\in X=\{x_{1},\ldots,x_{i}\}\) and labels \(y\in Y=\{y_{1},\ldots,y_{i}\}\) satisfy \(x\in\mathbb{R}\), \(0<x<255\), and \(y\in\mathbb{Z}\). The normalized data \(\bar{x}\) will lie within the range \(0.0<\bar{x}<1.0\). Similarly, the gradient \(g_{t}\) will fall within the range \(fp16_{min}<g_{t}<fp16_{max}\). There exist two conditions under which RMSProp can operate effectively in a 16-bit environment. Firstly, if \(v_{t}\neq 0\), \(v_{t}\geq fp16_{min}\) and \(fp16_{min}<\sqrt{v_{t}}+\epsilon<fp16_{max}\), \(w_{t}\) will not undergo \(overflow\). Secondly, when
Figure 1: Evolution of neural network weights across 20 epochs with the RMSProp optimizer. The plot displays the weights of the first Dense layer in the deep neural network on the MNIST dataset. The x-axis represents the first 10 input neurons, the y-axis shows the first 32 neurons of the 'Dense1' layer, and the depth (z-axis) indicates the training epochs. Colors, based on the 'RdBu' colormap, denote the magnitude and direction of the weight. We used a learning rate of 1e-2 and TensorFlow for this experiment.
\(v_{t}<fp16_{min}\), \(v_{t}\) becomes \(0\). In this case, if \(g_{t}\cdot\epsilon^{-1}<fp16_{max}\), RMSProp successfully evades critical numerical instability. Provided either of these conditions is met, the RMSProp optimizer will function appropriately in a 16-bit neural network._
In the event that _Assumption 3.2_ holds true, the RMSProp optimizer is capable of executing effectively in 16-bit neural networks. An improper selection of the epsilon parameter could misdirect the course of learning in the 16-bit neural network, potentially causing numerical instability. This may lead to erroneous conclusions regarding the compatibility of the RMSProp optimizer with 16-bit neural networks.
### Adam
Adam is a sophisticated optimization algorithm that amalgamates the salient features of the RMSProp and Momentum methods. Conceived by Diederik Kingma and Jimmy Ba, the Adam optimizer implements the technique of adaptive moment estimation to iteratively refine neural network weight calculations with enhanced efficiency [8]. By extending stochastic gradient descent, Adam optimally addresses non-convex problems at an accelerated pace, requiring fewer computational resources compared to a host of other optimizers. The potency of this algorithm is particularly noticeable when dealing with large datasets, as it maintains a tighter trajectory over numerous training iterations.
\[w_{t}= w_{t-1}-\eta\cdot\frac{\hat{m}_{t}}{\sqrt{\hat{v}_{t}}+\epsilon}\]
Figure 2 shows that when training MNIST with a 16-bit neural network using Adam, the weights become NaN (white points) after a few epochs if a value less than 1e-3 is used as epsilon. When considering the Adam optimizer, numerical instability arises following principles akin to those governing RMSProp. However, an additional distinguishing element in Adam is the introduction of a momentum variable, \(m\). In the Adam optimizer, both \(m\) and \(v\) are initialized to zero. Hence, at the
Figure 2: Evolution of neural network weights across 20 epochs with the Adam optimizer. The plot displays the weights of the first Dense layer in the deep neural network on the MNIST dataset. The x-axis represents the first 10 input neurons, the y-axis shows the first 32 neurons of the 'Dense' layer, and the depth (z-axis) indicates the training epochs. Colors, based on the 'RdBu' colormap, denote the magnitude and direction of the weight. We used a learning rate of 1e-2 and TensorFlow for this experiment.
onset of learning, both \(m_{t}\) and \(v_{t}\) are inclined towards zero, undergoing an initial process to remove this bias.
\[m_{t}= \beta_{1}\cdot m_{t-1}+(1-\beta_{1})\cdot g_{t}\] \[v_{t}= \beta_{2}\cdot v_{t-1}+(1-\beta_{2})\cdot{g_{t}}^{2}\]
The velocity variable \(v_{t}\) in the Adam optimizer is a non-negative real number. The bias-corrected velocity \(\hat{v}_{t}\) is calculated as the ratio of \(v_{t}\) to \((1-\beta_{2}^{t})\). Therefore, if \(v_{t}\) is smaller than the minimum positive value representable in 16-bit floating point precision, \(fp16_{min}\), then \(\hat{v}_{t}\) will underflow to zero. This condition significantly influences the calculated value of \(w_{t}\) within the Adam optimizer.
\[w_{t}= w_{t-1}-\eta\cdot\hat{m_{t}}\cdot\epsilon^{-1}\]
In the event where the inverse of epsilon (\(\epsilon^{-1}\)) exceeds the maximum value permissible in a 16-bit floating point (\(fp16_{max}\)), \(\epsilon^{-1}\) essentially becomes infinite. The default value of \(\epsilon\) in TensorFlow is \(1e-07\), while the default learning rate (\(\eta\)) is \(1e-03\). Therefore, if \(\epsilon\) is \(1e-07\), its reciprocal escalates to infinity. Depending on the sign of the adjusted momentum (\(\hat{m_{t}}\)), the value of \(\eta\cdot\hat{m_{t}}/\epsilon\) will either be positive or negative infinity. Consequently, irrespective of the learning rate \(\eta\), \(\hat{m_{t}}/\epsilon\) will belong to the set \(I=\infty,-\infty\). As a result, the new weight update equation \(w_{t}=w_{t-1}-i\) (\(i\in I\)) leads to a situation where \(w_{t}\) belongs to the set \(NaN,-\infty\) whenever \(w_{t-1}\) is \(\infty,-\infty\). This introduces numerical instability in the learning process.
\[w_{t}= \begin{cases}w_{t-1}-\infty&if\;\hat{m_{t}}>0\\ w_{t-1}+\infty&if\;\hat{m_{t}}<0\end{cases}\]
This process induces a numerical instability in a 16-bit neural network. In particular, if the previous weight \(w_{t-1}\) is \(-\infty\), the operation \(w_{t-1}\) - \(\infty\) returns \(-\infty\), but \(w_{t-1}\) + \(\infty\) produces a non-numeric value (\(NaN\)). Consequently, the current weight \(w_{t}\) becomes \(NaN\), thereby halting training within the 16-bit Neural Network utilizing the Adam optimizer.
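The sketch below (ours) reproduces this final step in NumPy, showing how an already-overflowed weight turns into \(NaN\):

```python
import numpy as np

w_prev = np.float16(-np.inf)        # weight that has already overflowed
print(w_prev - np.float16(np.inf))  # -inf: still infinite but defined
print(w_prev + np.float16(np.inf))  # nan: (-inf) + inf is undefined, so
                                    # this parameter can never recover
```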
* **Assumption 3.3** posits that the input image data \(x\in X=\{x_{1},\ldots,x_{i}\}\) and labels \(y\in Y=\{y_{1},\ldots,y_{i}\}\) satisfy \(x\in\mathbb{R}\) with \(0<x<255\), and \(y\in\mathbb{Z}\). The normalized data \(\bar{x}\) from \(x\) will lie in the range \(0.0<\bar{x}<1.0\), and the momentum \(m_{t}\) will fall within \(fp16_{min}<m_{t}<fp16_{max}\). Two conditions may allow the Adam optimizer to function in a 16-bit setting. Primarily, if \(\hat{v_{t}}\neq 0\), \(\hat{v_{t}}\geq fp16_{min}\) and \(fp16_{min}<\sqrt{\hat{v_{t}}}+\epsilon<fp16_{max}\), the weight \(w_{t}\) will not overflow. Additionally, when \(\hat{v_{t}}<fp16_{min}\), \(\hat{v_{t}}\) becomes \(0\). If \(\hat{m_{t}}\cdot\epsilon^{-1}<fp16_{max}\), the Adam optimizer circumvents critical numerical instability. Provided one of these conditions is met, the Adam optimizer functions effectively in a 16-bit neural network.
Given that the Adam optimizer incorporates the method of the RMSProp optimizer, similar issues impede its performance in a 16-bit Neural Network. However, if Adam satisfies _Assumption 3.3_, the optimizer can function within a 16-bit Neural Network. Proper comprehension and handling of the epsilon issue enables the utilization of a variety of optimizers even within 16-bit computations.
Our exploration into the root causes of numerical instability has led us to a singularly intriguing finding - the value of the hyperparameter, Epsilon, plays a remarkably pivotal role. This discovery, though unassuming at first glance, has far-reaching implications in our quest for numerical stability within the realm of neural network optimization. Epsilon is often introduced in the denominator of certain equations to prevent division by zero errors or to avoid the pitfalls of numerical underflow. In the context of optimizers such as RMSProp and Adam, it's usually involved in the update rule, safeguarding against drastic weight changes arising from exceedingly small gradient values. However, in a 16-bit computational environment, when the reciprocal of Epsilon becomes larger than the maximum representable number, it leads to a condition known as numerical overflow. The overflow subsequently manifests as numerical instability, disrupting the otherwise orderly progression of the learning process, and effectively halting the training of the 16-bit neural network.
**Epsilon.** Consider the function \(f(x)=\frac{1}{x}\). As \(x\) nears 0, \(f(x)\) grows indefinitely. By adding \(\epsilon\) (i.e., \(f(x)=\frac{1}{x+\epsilon}\)), the function remains bounded near 0. In optimization, this ensures parameter updates stay bounded, aiding numerical stability during training. However, in low precision scenarios, the
presence of \(\epsilon\) can induce instability, as gradients become overshadowed by this constant. This isn't unique to the Adam optimizer and adjusting Epsilon's value isn't straightforward, with its ideal value often being context-specific. We've developed a method to address this, offering a systematic approach to Epsilon tuning, ensuring stable optimization amidst numerical challenges.
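A quick numerical check (ours) of this bounding effect:

```python
eps = 1e-4
for x in [1.0, 1e-3, 1e-6, 0.0]:
    print(x, 1.0 / (x + eps))  # f(x) = 1/(x + eps) is capped at 1/eps = 1e4
```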
## 4 Method
### Existing Method
**Loss Scaling in Mixed Precision**: Mixed precision training [12] is a technique that utilizes both 16-bit and 32-bit floating-point types during training to make it more memory-efficient and faster. However, the reduced precision of 16-bit calculations can sometimes lead to numerical underflow, especially in the intermediate gradient values of optimizers such as Adam and RMSProp. To prevent this, the "loss scaling" method is employed. Loss scaling is a straightforward concept: before performing backpropagation, the loss is scaled up by a large factor. This artificially increases the scale of the gradients and thus helps prevent underflow. After the gradients are calculated, they are scaled back down to ensure the weights are updated correctly. In essence, this process amplifies the small gradient values to make them representable in the reduced precision and then scales them back down so that the actual model updates remain accurate. While mixed precision does use 16-bit calculations, it leans on 32-bit calculations for the critical parts to maintain accuracy, especially for maintaining running sums and optimizer states. This implies that we are not fully exploiting the speed and memory benefits of 16-bit precision. According to Table 1, mixed-precision ViT-16 does not reduce GPU memory usage because mixed precision still performs 32-bit computation internally.
Contrary to the mixed precision method, a pure 16-bit neural network operates entirely in the 16-bit precision realm. There are several reasons why pure 16-bit computations can outshine both 32-bit and mixed precision. First, 16-bit operations are faster than their 32-bit and mixed-precision counterparts. Second, 16-bit representations require half the memory of 32-bit, which means we can fit larger models or batch sizes into GPU memory, further enhancing performance. Yun et al. [16] show that pure 16-bit neural networks are faster than both 32-bit and mixed precision. This means that the mixed precision with loss scaling method cannot fully utilize the advantages of 16-bit operations. The inefficiencies and the constant juggling between 16-bit and 32-bit in mixed precision, along with the need for methods like loss scaling, indicate that a more robust solution is needed for numerical stability in low precision environments. This is where the "Numerical Guarantee Method" comes into play. Instead of hopping between precisions or artificially scaling losses, the Numerical Guarantee Method aims to provide a stable environment for optimizers like Adam to function efficiently in pure 16-bit computations. This approach makes the training process faster and potentially more accurate than using 32-bit or mixed precision models.
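For illustration, the following sketch (ours, not code from [12]) shows the loss-scaling idea at the level of a single gradient value; the scale factor of 1024 is an arbitrary example.

```python
import numpy as np

scale = 1024.0
true_grad = 1e-9                           # underflows to 0.0 in fp16
print(np.float16(true_grad))               # 0.0 -- the information is lost

scaled = np.float16(true_grad * scale)     # survives as an fp16 subnormal
print(scaled, np.float32(scaled) / scale)  # unscaled back to ~1e-9 in fp32
```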
### Numerical Guarantee Method
In this study, we present a novel approach designed to alleviate numerical instability, a common problem associated with widely used optimizers such as Adam and RMSProp, in the training of 16-bit neural networks. These optimizers have frequently encountered challenges when working with smaller epsilon values, particularly in instances involving division operations. The genesis of our method stems from an in-depth mathematical exploration of the update rules in the Adam optimizer, which revealed that instability originates during the gradient update step - a phase where the momentum term (\(\hat{m}\)) is divided by the sum of the square root of the velocity term (\(\hat{v}\)) and the epsilon parameter (\(\epsilon\)). To circumvent this instability, we propose a refined version of the gradient computation equation, which effectively prevents division by an exceedingly small epsilon value.
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
 & Epsilon & Test Accuracy & Training Time (seconds) & GPU Memory Usage \\
\hline
Floating Point 16-bit & 1E-3 & 0.686 & 1255.35 & 1.234 GB \\
Floating Point 32-bit & 1E-3 & 0.693 & 1733.14 & 2.412 GB \\
Mixed Precision (MP) & 1E-3 & 0.685 & 1430.12 & 2.415 GB \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Performance comparison between 16-bit, 32-bit, and Mixed Precision settings for training ViT-16 [3] on CIFAR-10 images with the Adam optimizer.
This is achieved by introducing a maximum function that sets a lower threshold on the denominator. Consequently, the updated Adam optimizer follows the equation:
\[w_{t}=w_{t-1}-\frac{\eta\cdot\hat{m}_{t}}{\sqrt{\max(\hat{v}_{t},\epsilon)}}\]
The modification guarantees that the denominator remains within a safer numerical range, thus facilitating a more stable gradient update. The addition of the stabilizing term, \(\sqrt{\max(\hat{v}_{t},\epsilon)}\), ensures the denominator does not dwindle excessively, thereby avoiding abnormally large updates. Should \(\hat{v}_{t}\) fall below \(\epsilon\), the term assumes the square root of \(\epsilon\) instead. The precise value employed as the lower limit (in this case, \(\epsilon\) is 1e-4) can be tailored according to the specific requirements of the neural network in consideration; a minimal code sketch of this update step is given after the list below. Our proposed adjustment to the Adam optimizer offers a myriad of benefits, which are elaborated upon subsequently:
* Enhanced Numerical Stability: By taking the maximum of \(\hat{v}_{t}\) and a small constant inside the square root function, we mitigate the numerical instability that arises when \(\hat{v}_{t}\) is very close to zero. This enhancement significantly reduces the chances of overflow and underflow during computations, which in turn increases the robustness of the optimizer.
* Preservation of Adam's Benefits: The modification does not alter the fundamental characteristics and benefits of the Adam optimizer. It retains the benefits of Adam, such as efficient computation, suitability for problems with large data or parameters, and robustness to diagonal rescale or translations of the objective functions.
* Ease of Implementation: The modification requires just a minor change to the original Adam algorithm, making it easy to implement in practice. This simplicity of modification allows for easy integration into existing machine learning frameworks and pipelines, thereby increasing its accessibility to practitioners.
* Applicability to 16-bit Computations: The updated method enables Adam to work effectively on 16-bit neural networks, which was a challenge with the original Adam optimizer due to numerical instability issues. Thus, it extends the applicability of Adam to systems and applications where memory efficiency is crucial.
* Flexibility: This modification is generalizable and can be applied to other optimizers based on the same concept as Adam, thereby providing a broad range of applications.
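As referenced above, here is a minimal sketch (ours, not the authors' released code) of the modified update step, with bias-corrected moments computed entirely in float16; the hyperparameter values mirror common Adam defaults, with \(\epsilon\) = 1e-4 as the lower limit.

```python
import numpy as np

def adam_guarded_step(w, g, m, v, t, eta=1e-3, b1=0.9, b2=0.999, eps=1e-4):
    f16 = np.float16
    m = f16(b1) * m + f16(1 - b1) * g             # first moment
    v = f16(b2) * v + f16(1 - b2) * g * g         # second moment
    m_hat = m / f16(1 - b1 ** t)                  # bias correction
    v_hat = v / f16(1 - b2 ** t)
    denom = np.sqrt(np.maximum(v_hat, f16(eps)))  # the numerical guarantee
    return w - f16(eta) * m_hat / denom, m, v

w, m, v = np.float16(0.5), np.float16(0.0), np.float16(0.0)
for t in range(1, 101):
    g = np.float16(1e-4)                          # tiny, underflow-prone gradient
    w, m, v = adam_guarded_step(w, g, m, v, t)
print(w, np.isfinite(w))                          # stays finite: no inf/NaN
```

Note that in this run \(g^{2}\) underflows to zero in fp16, so \(\hat{v}_{t}\) is zero throughout; the \(\max(\hat{v}_{t},\epsilon)\) floor is exactly what keeps the division bounded.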
Overall, these advantages make this modification an effective solution to the problem of numerical instability in Adam and similar optimizers, particularly in the context of 16-bit computations. Our approach presents a novel way to alleviate numerical instability in 16-bit neural networks, without losing the advantages offered by advanced optimizers such as Adam and RMSProp. We will demonstrate the effectiveness of our approach through comprehensive experimental results in the following section.
## 5 Results
Our first experimental setup comprised a deep neural network (DNN) with three linear layers, trained on the MNIST dataset [11], a ubiquitous benchmark in the field of machine learning, to evaluate the performance of two widely adopted optimization algorithms: RMSProp [15] and Adam [8]. The network was implemented using the TensorFlow framework, and a batch size of 512 was utilized during training. The deliberately simple architecture of the DNN provided a streamlined environment to assess the relative merits of the two optimization strategies. The results of the performance comparison between the RMSProp and Adam optimizers under different precision settings are summarized in Table 2.
Our second experiment focuses on the Vision Transformer (ViT) [3], a complex and sophisticated image classification model. The ViT was originally introduced and benchmarked using the Adam optimizer in the foundational paper by Dosovitskiy et al. [3]. In their experiments, Adam was chosen for training the ViT on large datasets such as ImageNet. The authors demonstrated that, with Adam and the specific training regimen described in the paper, the ViT model achieves competitive performance compared to state-of-the-art convolutional networks. Therefore, we conducted empirical experiments on ViTs with the Adam optimizer incorporating the numerical guarantee method. Specifically, we employed variants of the ViT model, including ViT-8, ViT-12, and ViT-16, trained on the CIFAR-10 image dataset [10]. The performance of the Adam optimizer under various precision settings, when used with these ViT models (with no data augmentation) and trained for 100 epochs, is summarized in Table 3. These findings are detailed as follows:
### 16-bit Precision with Numerical Guarantee
In the realm of 16-bit precision, the integration of the numerical guarantee method--highlighted as 'O' in Table 2--led both the RMSProp and Adam optimizers to consistently achieve test accuracies above \(0.98\). This performance held across the tested epsilon values, underscoring robust numerical stability even in precision-constrained environments. As for time efficiency, disparities were minimal: RMSProp took an estimated \(98\) to \(100\) seconds per training epoch, while Adam hovered around \(110\) seconds. Table 3 reveals that the 16-bit Vision Transformer models fortified with our numerical guarantee technique align closely with
\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
 & & Numerical & \multicolumn{2}{c}{RMSProp} & \multicolumn{2}{c}{Adam} \\
Precision & epsilon & Guarantee Method & Test Accuracy & Training Time (seconds) & Test Accuracy & Training Time (seconds) \\
\hline
[MISSING_PAGE_POST]
\hline \hline
\end{tabular}
\end{table}
Table 2: Performance comparison between RMSProp and Adam optimizers in 16-bit and 32-bit precision settings for training DNN.
\begin{table}
\begin{tabular}{c c c c c c c c c}
\hline \hline
 & & Numerical & \multicolumn{2}{c}{ViT-8} & \multicolumn{2}{c}{ViT-12} & \multicolumn{2}{c}{ViT-16} \\
Precision & Epsilon & Guarantee Method & Test Accuracy & Training Time (seconds) & Test Accuracy & Training Time (seconds) & Test Accuracy & Training Time (seconds) \\
\hline
[MISSING_PAGE_POST]
\hline \hline
\end{tabular}
\end{table}
Table 3: Performance comparison of the Vision Transformer (ViT) based on 16-bit and 32-bit precision with Adam optimizer, incorporating the novel numerical guarantee method.
their 32-bit counterparts in test accuracy, while completing training roughly 30% faster across all ViT variants.
### 16-bit Precision without Numerical Guarantee
In scenarios where the numerical guarantee was not enforced (indicated by 'X' in Tables 2 and 3), a decline in test accuracy was observed for both the RMSProp and Adam optimizers when the epsilon value decreased below \(10^{-3}\). Despite this decline in accuracy, the computation times were relatively stable, with minor increases compared to the cases where numerical guarantees were incorporated. This confirms our observation about the risk that small epsilon values pose to these optimizers.
### 32-bit Precision without Numerical Guarantee
Within the 32-bit precision domain, both the RMSProp and Adam optimizers sustained consistent test accuracies even without the application of numerical guarantees. However, a conspicuous increase in computational time was observed compared with the 16-bit precision settings: the RMSProp optimizer's duration ranged from \(180.59\) to \(183.81\) seconds, while the Adam optimizer's fell between \(202.72\) and \(205.10\) seconds. As corroborated by Table 3, all ViT models operating in 32-bit ran approximately 30% slower than their 16-bit counterparts, irrespective of the incorporation of the numerical guarantee method.
The experiments were conducted on laptop GPUs featuring RTX 3080. An intriguing observation was that 16-bit computations were at least 40 percent faster than their 32-bit counterparts. This advantage of computational speed underlines the potential of utilizing 16-bit precision, particularly in contexts with constrained computing resources such as laptops. This study successfully establishes the critical role of numerical guarantee methods in maintaining high levels of accuracy, particularly in lower precision settings. It also emphasizes the trade-off between numerical stability and computational efficiency. While higher precision settings may be less prone to numerical instability, they demand significantly more computational resources.
To summarize, our research offers a novel approach to mitigate numerical instability in 16-bit neural networks without sacrificing accuracy. This contribution holds the potential to stimulate further advancements in the field of machine learning, particularly in applications where computational resources are limited. The findings underscore the importance of careful consideration of numerical precision in the implementation of deep learning models and highlight the potential advantages of lower-precision computations. Further investigations would be beneficial to validate these findings in other deep learning models, such as transformers for natural language processing rather than image classification.
## 6 Discussion
The novel approach for mitigating numerical instability presented in this work has demonstrated promising results for the backpropagation phase of deep learning models, in particular the Vision Transformer. Our modification to the gradient computation equation within the Adam optimizer ensures that the denominator stays within a safe range, thereby providing a stable update for the gradient. This has the potential to significantly enhance the stability of the training process and, by extension, the accuracy of the resultant model. However, as in all scientific studies, this work is not without its limitations. The empirical validation of our proposed method has been conducted on image classification models such as the Vision Transformer. While this forms a basis for more complex architectures, it covers only part of the wide array of layers and structures currently utilized in modern deep learning architectures. Specifically, future research should aim to validate the robustness and utility of our approach across a broader spectrum of deep learning architectures. In particular, Transformer models for natural language processing represent a significant portion of the current deep learning landscape, particularly in language-related tasks. The complexity of these models, with their intricate hierarchies and highly non-linear transformations, may pose additional challenges that are not fully encapsulated by a linear layer. As such, it is imperative that further validation is conducted on these architectures to establish the generalizability of our method. Furthermore, while our focus has primarily been on mitigating numerical instability, it would also be beneficial to investigate any potential side effects this approach might have on
other aspects of the model training process. For instance, it would be interesting to explore the implications for training time, memory requirements, and robustness to variations in hyperparameters. In conclusion, this work presents an exciting step forward in the pursuit of more robust and stable training methodologies for deep learning models. The road ahead, while challenging, is filled with opportunities for further innovation and refinement.
## 7 Conclusions
In the realm of deep learning, achieving trustworthiness and reliability is paramount, especially when the field faces persistent challenges like numerical instability. Our research introduces a strategy designed to bolster the trustworthiness of neural networks, which is most evident during backpropagation. By tweaking the gradient update mechanism in the Adam optimizer, we effectively curtailed the prevalent instability issues encountered in 16-bit neural training. Empirical tests on the MNIST dataset with a simple DNN underscored the superiority of our optimizer, which frequently outclassed the conventional Adam setup in a 16-bit environment. Our 16-bit methodology reduced training durations by approximately 45% relative to its 32-bit counterpart while preserving stability across an array of epsilon magnitudes. Further experiments on the Vision Transformer model for the CIFAR-10 dataset substantiated the enhanced trustworthiness of our approach: our 16-bit models processed data roughly 30% faster than the 32-bit models while ensuring heightened stability. Nonetheless, our investigations were primarily concentrated on image classification through linear DNNs and ViTs. In future work, we aim to study the reliability and trustworthiness of our strategy when applied to intricate architectures and diverse tasks, including NLP. Our overarching ambition is to architect a holistic solution to combat numerical instability in deep learning, and we are confident that our current contributions establish a robust groundwork for ensuing advancements in ensuring model trustworthiness.
|
2310.05512 | UAVs and Neural Networks for search and rescue missions | In this paper, we present a method for detecting objects of interest,
including cars, humans, and fire, in aerial images captured by unmanned aerial
vehicles (UAVs) usually during vegetation fires. To achieve this, we use
artificial neural networks and create a dataset for supervised learning. We
accomplish the assisted labeling of the dataset through the implementation of
an object detection pipeline that combines classic image processing techniques
with pretrained neural networks. In addition, we develop a data augmentation
pipeline to augment the dataset with automatically labeled images. Finally, we
evaluate the performance of different neural networks. | Hartmut Surmann, Artur Leinweber, Gerhard Senkowski, Julien Meine, Dominik Slomma | 2023-10-09T08:27:35Z | http://arxiv.org/abs/2310.05512v1 | # UAVs and Neural Networks for search and rescue missions
###### Abstract
In this paper, we present a method for detecting objects of interest, including cars, humans, and fire, in aerial images captured by unmanned aerial vehicles (UAVs) usually during vegetation fires. To achieve this, we use artificial neural networks and create a dataset for supervised learning. We accomplish the assisted labeling of the dataset through the implementation of an object detection pipeline that combines classic image processing techniques with pretrained neural networks. In addition, we develop a data augmentation pipeline to augment the dataset with automatically labeled images. Finally, we evaluate the performance of different neural networks.
SAR; UAV; AI; Object Detection; Vegetation Fires
## I Introduction
According to the European Forest Fire Information System (EFFIS), the frequency of vegetation fires such as forest fires has increased sharply in recent years. Depending on the extent of a fire, the effects are devastating: ecosystems are destroyed, which affects the habitats of animals and humans, and pollutants released into the environment during combustion can lead to health problems. While climate change increasingly creates conditions that promote the development of vegetation fires, most fires are started by human activity. During firefighting, firefighters are exposed to the risk of being injured or killed. Mobile robots such as unmanned aerial vehicles can reduce these risks and support the emergency services during an operation, for example by providing an overview of the situation and enabling further fire-fighting measures [1][2]. Evaluating and analyzing aerial images to identify relevant objects during vegetation fires takes a great deal of time and is usually carried out by one or more specialists. Since this is a time-critical task in search and rescue operations, machine vision can be used to detect objects in the aerial imagery and provide situational awareness in a timely manner. In this way, significant objects such as humans, vehicles and fire can be detected in the aerial images automatically, and the result can be evaluated by a specialist and forwarded to the emergency services. For object detection, deep-learning methods have prevailed over other methods in research, since they are superior in terms of speed and accuracy [3]. In most publications, new neural networks are presented and compared with existing ones [4][5][6][7][8]. Since these are usually based on different architectures, each has its advantages and disadvantages, which means that the results differ. Usually, only one neural network is used in the context of search and rescue missions, although the remaining methods also have the potential to improve the results. In principle, the results of several models could be merged; with additional filtering, the merged output would be more meaningful than that of a single model. Supervised learning of neural networks requires datasets that are authoritative for diverse scenarios. Acquiring data from real images, simulations, and other models such as Generative Adversarial Networks (GANs) is a time-consuming task. Additionally - if not already available - the objects in the aerial images usually have to be annotated with bounding boxes. In this context, data augmentation can be used to generate datasets with a lower expenditure of time, which can then be used in supervised learning under certain assumptions. For example, objects that are recognized as false positives (FPs) could serve as backgrounds, and objects that the model recognizes well, or false negatives (FNs), could be placed into these backgrounds to increase the accuracy during training.
## II Related Work
Moffatt et al. present a recent example in the field of fire detection [9]. A hexacopter was equipped with a Velodyne VLP-16 for obstacle avoidance and fire detection. The algorithms for these tasks were run in real time on an Intel NUC onboard computer. For firefighting, a small fire extinguisher was developed and mounted under the UAV. The images from the infrared camera FLIR TAU were processed in two steps to detect fire. In the first step, darker and lighter areas in the image were determined through Local Intensity Operation (LIO) or Intensity Brightening Operation (IBO). In the second step, background noise was filtered out and a binary image was generated to show only the heat source. Based on this, the UAV, localized via GPS, can navigate to a fire source detected in the infrared image and extinguish it remotely, allowing individual fire sources to be combated selectively. Other handcrafted fire detection methods use a combination of techniques such as background subtraction, color spaces, and spatial attributes to identify potential fire regions [10][11]. With the advancement of deep learning techniques in image processing and computer vision, deep learning models have outperformed traditional handcrafted visual detection approaches [12]. Zhang et al. presented a deep learning approach for forest fire detection, which involves training a full-image and a patch-level classifier in a combined CNN. Their method has a cascaded approach, first using a global image-level classifier and then a fine-grained patch classifier to determine the exact location of fire [4]. Samarth et al. [13] proposed the use of CNNs for binary fire detection and superpixel localization in video or still imagery. They evaluated different reduced-complexity CNN architectures, including different Inception architectures, ResNet, and EfficientNet, and proposed two low-complexity variants of the InceptionV3 and InceptionV4 networks, InceptionV3-OnFire and InceptionV4-OnFire, as the best models for fire detection and localization. Others propose a multi-scale fire detection method using deep-stacked layers and a densely connected residual block, where the final detection is made by considering predictions from different scales of feature maps through a weighted voting algorithm [14].
In this paper, we propose the use of multiple neural networks in the context of search and rescue missions, with the goal of improving the results compared to using only one network. Our approach merges the results of different models, incorporating more filters to achieve a more meaningful outcome. We understand the importance of having authoritative datasets for the supervised learning of these neural networks. Thus, we acquire data from real images, simulations, and other models such as Generative Adversarial Networks (GANs). However, obtaining this data can be time-consuming and may require manual annotation of objects in aerial images. To address this, we also examine methods for semi-automatic labeling of our data and ways to generate datasets with a lower expenditure of time for use in supervised learning.
## III Data acquisition
### _Third party and internal datasets_
Within the context of vegetation fires, the dataset's focus is on fire scenarios. Various existing datasets for fire detection in images were considered during dataset creation. Robin Cole cites over 14 datasets containing fire images1, and Xu et al. utilized several publicly available datasets for supervised learning [15]. However, most available data lacks the camera perspective of a flying UAV. To support aerial assistance for responders, the imagery must include diverse perspectives, heights, and resolutions for a robust model.
Footnote 1: Fire detection from images: [https://github.com/robmarkcode/fire-detection-from-images](https://github.com/robmarkcode/fire-detection-from-images)
A new 33GB dataset meeting these criteria was created and published on Kaggle2. A tool was developed to facilitate dataset creation by searching various media platforms, downloading sources, and extracting individual frames from relevant video segments. The dataset consists of historical vegetation fires filmed from the ground or air, as well as internal data from real disaster areas, such as vegetation fires, industrial fires, and floods, or from SAR exercises like searches for people and vehicles. With about 21,000 third-party images and 5,000 internal images, approximately 10% of the 26,000 total images were used for training, as most data lacked labels.
Footnote 2: Aerial Rescue Object Detection: [https://www.kaggle.com/datasets/juliememine/rescue-object-detection](https://www.kaggle.com/datasets/juliememine/rescue-object-detection)
### _Generative Adversarial Networks_
Generative Adversarial Networks (CycleGAN [16]) were explored for acquiring data in different fire scenarios, as aerial photographs are not always available. CycleGAN was used to map data from one context to another, e.g., from an empty burning crop field to a crop field with a burning barn. Frames were extracted from two public UAV-captured videos, creating a dataset of 2588 images, with 200 for evaluation.
Test results showed that CycleGAN learned a scenario transfer, but the generated images were unconvincing, depicting fires and buildings inaccurately (Figure 1). Consequently, none of them were included in the dataset. Despite this, the approach may be successful for other problems, given different data and hyperparameters.
Fig. 1: Input images (left) and the fake results from CycleGAN (right).
### _Simulation_
Simulations offer a means of creating scenarios and delivering information that closely resembles reality. However, such works often prioritize physical events, like the varying burn rates of different tree types, over realistic graphics [17]. As a result, a game engine was explored for additional data collection. Several projects within the "Unreal Engine 4" (UE4) were examined, ultimately selecting the prefabricated 'Landscape Mountains' project, which features a mountainous landscape with trees, a lake, and other objects.
Fire collections were incorporated into the project, positioned at different locations amid the trees, to create a visually realistic and interactive representation of a fire rather than a physically accurate simulation. Flames were placed near the ground and at tree height, simulating an early-stage forest fire with various flame sizes at different locations (Figure 2).
To generate aerial photographs, a script was developed to simulate a meandering UAV flight trajectory, producing around 150 aerial images from varying heights and perspectives at regular intervals.
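The trajectory script itself is not part of the publication; the following is a minimal sketch of how such a meandering (lawnmower) waypoint pattern could be generated. The area dimensions, lane spacing, and altitude values are illustrative assumptions.

```python
def meander_waypoints(width_m=400.0, length_m=600.0, lane_spacing_m=40.0,
                      altitudes_m=(30.0, 60.0, 90.0)):
    """Generate a lawnmower ("meandering") flight path over a rectangle.

    Returns a list of (x, y, z) waypoints: the UAV flies back and forth
    along the x-axis while stepping along the y-axis, cycling through the
    given altitudes to vary the perspective between passes.
    """
    waypoints = []
    y, lane = 0.0, 0
    while y <= length_m:
        # Alternate the flight direction on every lane to form the meander.
        x_start, x_end = (0.0, width_m) if lane % 2 == 0 else (width_m, 0.0)
        z = altitudes_m[lane % len(altitudes_m)]
        waypoints.append((x_start, y, z))
        waypoints.append((x_end, y, z))
        y += lane_spacing_m
        lane += 1
    return waypoints

# Capturing an image at regular intervals along each leg of this path
# yields aerial images from varying heights and perspectives.
print(len(meander_waypoints()), "waypoints")
```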
## IV Dataset Labeling
There are various types of services and methods for labeling bounding boxes in images. One possibility is Model-Assisted Labeling (MAL), where data is labeled with the help of neural networks. This basic principle was adopted and implemented for this work. Since not only neural networks but also conventional computer vision algorithms were used in this case, the method is referred to as assisted labeling (AL) in this paper. The following pretrained object localization and recognition methods were used:
* Rule-Based Color-Model (Fire)
* Faster R-CNN (Vehicle)
* YOLOv3 (Fire, Vehicle)
* Light-Weight RefineNet (Human)
Using these methods, 2400 images were processed by the Object Detection Pipeline within 3 hours. The Object Detection Pipeline (ODP) is primarily used for the integration of several object detection methods, whose results are subsequently filtered and merged. The motivation for building this pipeline (Figure 4) is the following: publications on object detection methods, especially neural networks, compete by developing different approaches and architectures whose results are finally compared. These architectures often take different approaches and thus have opposing strengths and weaknesses, whose combination can lead to positive results. For example, one method's deficits in the detection of small objects can be compensated for by another method whose strength is the detection of small objects.
All images are processed by the various neural networks, and the resulting bounding boxes are passed to the filter manager for further processing. We have implemented several filters to modify the bounding boxes. The 'SmallBB' filter removes bounding boxes that are smaller than a certain threshold in width and height. The 'MaskBB' filter (shown in Figure 3) first enlarges all bounding boxes by a factor, combines bounding boxes of the same class that overlap, and creates a large bounding box. In the second step, the large bounding box is split into nine smaller bounding boxes to reduce background and exclude objects that do not belong to the class. In the third step, bounding boxes that do not overlap with the original bounding boxes are removed. Finally, the remaining bounding boxes are merged to form the smallest number of large bounding boxes possible. The 'MergeBB' filter merges bounding boxes of the same class that have an intersection over union (IoU) or generalized intersection over union (GIoU) greater than a certain threshold.
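The exact filter implementations are internal to the ODP; the sketch below illustrates the core idea of a 'MergeBB'-style filter, assuming axis-aligned boxes in (x1, y1, x2, y2) format and a plain IoU criterion (a GIoU variant would replace the `iou` helper).

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def merge_bb(boxes, iou_threshold=0.5):
    """Greedily merge same-class boxes whose IoU exceeds the threshold.

    `boxes` is a list of (x1, y1, x2, y2) tuples of one class; overlapping
    boxes are replaced by their enclosing box until no pair overlaps.
    """
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if iou(boxes[i], boxes[j]) > iou_threshold:
                    a, b = boxes[i], boxes[j]
                    # Replace the pair by the smallest enclosing box.
                    boxes[j] = (min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3]))
                    del boxes[i]
                    merged = True
                    break
            if merged:
                break
    return boxes
```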
The filters were tested on 200 randomly selected images of the dataset. Before filtering, the 200 images contained about 6000 bounding boxes. Filtering with the ODP reduced this number to about 500 bounding boxes using several filters; after manual correction, the number increased to approximately 600 bounding boxes. This corresponds to a decrease of approximately 90%. Based on this, the remaining bounding boxes in the dataset were manually corrected. Fully manual labeling required roughly ten times as long as correcting the ODP output. Furthermore, concentration plays a significant role in manual labeling, so errors during labeling cannot be ruled out. For this reason, several AL cycles are run, depending on the time budget, in order to keep the number of errors low.
## V First learning cycle
After labeling, the dataset was split into a training dataset of 2334 images and an evaluation dataset of 259 images, resulting in a 90:10 ratio with over 19000 bounding boxes (see figure 5).
Fig. 3: Sequence of the four steps of the ‘MaskBB’ filter, where five bounding boxes were combined to three bounding boxes.
Fig. 2: Unreal Engine 4 ‘Landscape Mountains’ project with flames simulating a forest fire.
We used OpenMMLab's MMDetection toolbox for training. The following neural networks were selected as examples; they were available in the framework at the time of the investigation and demonstrated good metrics on the COCO dataset. The YOLOX network performs slightly worse in comparison, but it is a fast network and was therefore considered in the context of search and rescue.
* TOOD (Version: tood_r101_fpn_dconv_c3-c5_mstrain_2x)
* AutoAssign (Version: autoassign_r50_fpn_8x2_1x)
* YOLOX (Version: yolox_s_8x8_300e)
* VarifocalNet (Version: vfnet_x101_32x4d_fpn_mdconv_c3-c5_mstrain_2x)
* Deformable DETR (Version: deformable_detr_twostage_refine_r50_16x2_50e)
For training, we downloaded the weights of the models pre-trained on the COCO dataset.
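As an illustration, the snippet below shows how such a pretrained MMDetection (v2.x) model could be loaded and run; the config and checkpoint paths are placeholders, not the exact files used in this work.

```python
# Minimal inference sketch using MMDetection's high-level API (v2.x).
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/tood/tood_r101_fpn_dconv_c3-c5_mstrain_2x_coco.py'
checkpoint_file = 'checkpoints/tood_r101_coco.pth'  # COCO-pretrained weights

model = init_detector(config_file, checkpoint_file, device='cuda:0')

# `result` is a list with one (n, 5) array of (x1, y1, x2, y2, score)
# detections per class.
result = inference_detector(model, 'aerial_image.jpg')
for class_id, dets in enumerate(result):
    for *box, score in dets:
        if score >= 0.3:  # the confidence threshold used in Section V
            print(class_id, box, score)
```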
Table I shows that during training, the mAP of the Deformable DETR is worse for small objects than for the AutoAssign, which in turn does not perform as well as the Deformable DETR for objects of medium size. In all three categories, the mAPs for the YOLOX are the lowest, whereas the TOOD performs best.
The goal of this learning cycle was to identify the strengths and weaknesses of each model. To do so, we incorporated the models into the object detection pipeline, loaded the dataset, and extracted the FPs and FNs. In this context, a ground-truth bounding box was considered an FN if its IoU with every prediction was zero. This criterion highlights objects that were not detected at all by the neural networks as FNs and reduces the number of FPs. A confidence score of 0.3 was used. Tables II and III show the number of FPs and FNs for each class and neural network. The AutoAssign method, followed by the Deformable DETR, has the lowest number of FPs, while the YOLOX and VFNet have the highest; the TOOD falls in the middle. The results for the FNs are similar to the FPs: the Deformable DETR, followed by the AutoAssign, has a low number of FNs, while the YOLOX performs the worst. However, the TOOD generates a similar number of FNs as the YOLOX. This suggests that while the TOOD has the highest mean average precision (mAP) and mean average recall (mAR), making it the overall best method, it has more difficulty than the AutoAssign and Deformable DETR in identifying specific objects in the images.
number of FNs, as the detection coverage in this case was provided by neural networks such as the Deformable DETR. In the process, 358 relevant objects were detected but were incorrectly classified and counted as FPs. To obtain only irrelevant FPs, we manually reviewed the results and removed these objects (see Table IV).
## VI Mosaic-Augmentation
After training, the results (e.g., FPs and FNs) are often not considered further. The Data Augmentation Pipeline (DAP) addresses this issue (see Figure 6). The DAP uses the ODP to extract the FPs and FNs, as well as backgrounds that do not contain relevant objects. With this information, the DAP creates a new dataset that projects known objects onto new backgrounds, helping the neural networks distinguish relevant from irrelevant objects and separate them from the backgrounds during renewed training. An experiment was conducted to increase the mAP and mAR after a re-learning run. The FPs were grouped into mosaic-like images using a tool of the DAP, grouped by the class from which the FPs originated: FPs that were misclassified as flames, for example, were assembled exclusively with other FPs that were also misclassified as flames. The mosaic-like images were generated using three different variants: in the first variant, an FP was scaled to a square resolution; in the other two variants, the width and the height of the FP were doubled, respectively. During the creation of a mosaic-like image, the FPs of a class were randomly shuffled and then assembled (a sketch of this step is given below). The number of iterations was determined by the total number of FPs and the number needed per mosaic-like image; in addition to the minimum number of iterations required, nine more iterations were performed. This ultimately generated over 300 images (Fire: 50; Human: 140; Vehicle: 120) with a resolution of approximately 900 x 900 pixels. Since the generated images did not contain relevant objects, they could not be used immediately for supervised learning. Therefore, relevant objects were extracted from bird's-eye-view aerial photographs of internal datasets using the object detection pipeline. To insert objects into the generated mosaic-like images, the background of the objects was manually masked, resulting in a total of over 100 objects (Fire: 17; Human: 25; Vehicle: 61). This process was facilitated by the DAP (see Figure 7). Only one randomly chosen object was added at a randomly chosen coordinate in each image, and care was taken that the class of the added object did not match the class of the FPs in the mosaic: a mosaic-like image created from flame FPs, for example, received an object of class 'vehicle' rather than 'flame'. Using an opposite class for the object creates a contrast with the background of the FPs within the bounding box.
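A simplified sketch of the mosaic assembly step is given below, assuming the FP crops of one class are available as PIL images and using the square-tile variant described above; the grid and tile sizes are illustrative.

```python
import random
from PIL import Image

def build_mosaic(fp_crops, grid=3, tile_size=300):
    """Assemble FP crops of a single class into one mosaic-like image.

    `fp_crops` is a list of PIL images (false-positive crops of the same
    class, at least grid*grid of them); they are shuffled, resized to
    square tiles, and tiled into a grid x grid canvas, here roughly
    900 x 900 pixels.
    """
    canvas = Image.new('RGB', (grid * tile_size, grid * tile_size))
    tiles = random.sample(fp_crops, grid * grid)  # random shuffle/selection
    for idx, crop in enumerate(tiles):
        tile = crop.resize((tile_size, tile_size))
        row, col = divmod(idx, grid)
        canvas.paste(tile, (col * tile_size, row * tile_size))
    return canvas
```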
## VII Object-Pasting
Aerial images that did not contain any relevant objects were processed with the DAP. As a result, over 190 aerial photographs were automatically labeled with a total of over 1000 objects (fire:96, vehicle: 573, human: 423). The EXIF data allowed the objects to be scaled realistically based on the flight altitude. Different configurations were used to insert the objects into each dataset. Figure 8 shows example results
Fig. 6: The Data Augmentation Pipeline (DAP) consists of several components. The ’BackgroundManager’ manages ’Background’ instances and allows the backgrounds to be modified using ’BackgroundFilter’ image processing algorithms. ’ObjectClass’ represents a relevant object and includes an attribute specifying the maximum width of the object in meters. The relevant objects are managed by an ’ObjectManager’, and ’ObjectPaster’ is responsible for inserting the objects into the backgrounds. The ’Distributor’ calculates the position and number of objects of each class to be inserted. The ’ObjectScaler’ can scale the objects and backgrounds to a realistic size based on their EXIF header and metric attribute, so as not to lose context in certain situations. The ’ObjectBlender’ class applies image processing algorithms to blend the objects into the background and adjust the contrast, brightness, and color to match the background where the objects overlap. This changes the visual appearance of the objects depending on the background.
of the DAP. If the EXIF headers of the images contained more information, it would be possible to add shadows to people and vehicles or to simulate smoke for flames. However, the context of the images cannot be considered, so a vehicle added on a roof would be the same size as a vehicle added on the road. Despite this, the aerial photographs contain background information that can help the neural networks during supervised learning to reduce the FPs and FNs by differentiating the objects from the backgrounds.
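The realistic scaling performed by the 'ObjectScaler' relies on the flight altitude stored in the EXIF header. A sketch of the underlying ground-sampling-distance (GSD) computation is shown below; the camera parameters are illustrative assumptions, not the values of the UAVs used in this work.

```python
def object_width_pixels(object_width_m, altitude_m,
                        focal_length_mm=4.5, sensor_width_mm=6.17,
                        image_width_px=4000):
    """Estimate how wide an object of known metric size appears in pixels.

    Ground sampling distance (GSD): metres on the ground covered by one
    pixel of a nadir-looking camera. The camera defaults are illustrative.
    """
    gsd_m_per_px = (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)
    return object_width_m / gsd_m_per_px

# Example: a 1.8 m wide car photographed from 50 m altitude.
print(round(object_width_pixels(1.8, 50.0)))  # ~105 pixels
```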
## VIII Second learning cycle
After adding the images to the dataset, a total of 3097 images with 21454 bounding boxes were available for the next learning cycle. The evaluation dataset was not modified so that the learning cycles could be compared retrospectively. In this cycle, the number of epochs for the YOLOX was increased to 300, because the YOLOX always performed best at epoch 100 in the first learning cycle, indicating that its full potential was not being utilized.
The remaining hyperparameters of the configuration files were not changed. For the training, the original weights of the pre-trained neural networks that were created during the supervised learning of the COCO dataset were used again to avoid the risk of getting stuck in a local optimum. The results of the same models from the first learning cycle are compared with the respective models from the second learning cycle in Table V.
Table VI compares the top five models in the second learning cycle.
The TOOD outperforms the other models in all metrics, followed by the Deformable DETR and the AutoAssign. The YOLOX and the VFNet show similar results. The following table shows the mAP @[IoU=0.50:0.95] for the respective classes, noting that the values are significantly higher for mAP @[IoU=0.50].
Table VII shows that the TOOD performs well for the 'Fire' and 'Vehicle' classes, but the Deformable DETR performs better for the 'Human' class. To further evaluate the effectiveness of the mosaic augmentation and object pasting, row-normalized and column-normalized confusion matrices were computed from the first and second learning cycles for the respective models. The matrices were calculated using default values such as an IoU of 0.5 and a confidence score of 0.3. On average, the number of FPs increased by about 16%, while the number of FNs decreased by about 10%. These results indicate that the object pasting and mosaic augmentation techniques had a positive effect on the performance of the models in the second learning cycle: the mAP and mAR of the models increased and the number of FNs decreased, at the cost of an increased number of FPs. This suggests that these techniques can be useful for improving the performance of neural networks in object detection tasks. It would be interesting to explore the potential of these techniques further in different contexts and with different neural network architectures, in order to understand their capabilities and limitations more fully. Additionally, it would be useful to investigate methods for minimizing the increase in FPs while still retaining the benefits of the object pasting and mosaic augmentation techniques.
Fig. 8: Object pasting using the Data Augmentation Pipeline. Examples include a flame on a dry vegetation ground (left), a person and a vehicle in an industrial area (center), and two vehicles in a flooded area (right).
Fig. 7: Mosaic augmentation using the DAP from Human FPs (left), Fire FPs (center), and Vehicle FPs (right).
## IX Proof of Concept
To further evaluate the performance of the models after the second learning cycle, a test was conducted using 282 images of fire from synthetic data and from a vegetation fire exercise. These images were not used in the training or evaluation process, so the results are based on data that the models have not previously processed. The test used 154 images of synthetic data containing only fire and 128 images from an internal dataset of a vegetation fire exercise, totaling 2788 bounding boxes.
The results of the models on the test dataset show that the TOOD performs the best, followed by the Deformable DETR and the AutoAssign. The YOLOX and VFNet show similar performance. The results for the 'Vehicle' and 'Human' classes are similar to those on the evaluation dataset, but the models perform worse for the 'Fire' class on the test dataset. However, the models are still able to provide consistent results. Exemplary images from the test dataset processed by the neural networks are shown in figure 10.
The test dataset showed that the performance of the models varies depending on the granularity of the objects being detected, particularly for the 'Fire' class. One example is the difference between the results obtained by the TOOD and YOLOX models, where the TOOD had bounding boxes that did not match the ground truth bounding boxes in the lower right corner of an image, while the YOLOX's bounding boxes were almost identical to the ground truth bounding boxes (see Figure 11). In the worst case, the TOOD's result was evaluated as having two FPs and one FN because the IoU was below 0.5, even though the two bounding boxes accurately marked the fire in the image. This illustrates the potential for using a different method of evaluating bounding boxes in the training process for objects with ambiguous boundaries, which would simplify manual labeling.
The potential of using the ODP to combine the results of multiple models is demonstrated in Figure 12. The left side of the figure shows the results of all models in one image, where there is significant variation in the bounding boxes. On the right side of the figure are the merged results of the ODP. Optimally, the strengths of the different neural network architectures complement each other and the visualization of the results for further operations is simplified for emergency personnel. For example, the bounding boxes for the 'Fire' and 'Human' classes could be taken exclusively from the TOOD and the bounding boxes for the 'Vehicle' class could be taken from the Deformable DETR. Another possibility is to consider a bounding box to be a true positive only if it is detected by at least three models. This would likely reduce the number of false positives. These changes could be implemented in the ODP with minimal effort.
## X Building a Dataset
In this paper, we present a method for creating a new dataset in the COCO format for object detection and classification tasks. The main part of the dataset
Fig. 11: Test image YOLOX (left) and TOOD (right).
Fig. 12: Input image (left), filtering and fusion of bounding boxes using the ODP (right).
Fig. 10: Deformable-DETR inference with real aerial image (left) and artificial aerial image (right).
Fig. 9: Distribution of bounding boxes for testing (red) and training (blue)
consists of UAV images. We collected the images over the last five years during missions of the DRZ's Robotics Task Force, in various real missions and exercises with the rescue forces. Such images are not publicly available for privacy reasons. To create the dataset, we first ran each of four neural networks (TOOD, YOLOX, VFNet, and Deformable DETR) on all of the images and kept only the bounding boxes with at least 50% confidence. This step filtered out low-confidence detections that may not be reliable. Next, we compared the bounding boxes from the different networks using the intersection-over-union measure. If at least two networks detected the same class in the same location with at least 50% confidence, we counted it as a true positive and added the bounding box to the new dataset. This step helped to ensure that the bounding boxes included in the dataset were relatively robust and consistent across the different networks. Finally, we annotated the images in the new dataset using the COCO format, which includes information such as class labels and bounding boxes for each object in the image. This allowed us to organize the data in a standardized and structured way that is compatible with a wide range of computer vision tasks.
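A sketch of this cross-network agreement check is given below. The tuple format, the IoU helper, and the 0.5 IoU threshold for "same location" are assumptions; the 50% confidence and two-model agreement criteria mirror the text.

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def consensus_boxes(detections_per_model, min_models=2,
                    score_threshold=0.5, iou_threshold=0.5):
    """Keep a detection only if at least `min_models` networks agree on it.

    `detections_per_model` maps a model name to a list of detections of
    the form (class_id, x1, y1, x2, y2, score). Two detections agree when
    they share the class and overlap with IoU >= iou_threshold. Agreeing
    boxes from several models are all kept here; a merge/NMS step could
    deduplicate them afterwards.
    """
    pool = [(name, det) for name, dets in detections_per_model.items()
            for det in dets if det[5] >= score_threshold]
    accepted = []
    for name, det in pool:
        supporters = {name}
        for other_name, other in pool:
            if (other_name != name and other[0] == det[0]
                    and iou(det[1:5], other[1:5]) >= iou_threshold):
                supporters.add(other_name)
        if len(supporters) >= min_models:
            accepted.append(det)
    return accepted
```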
Overall, our method for creating a new dataset in the COCO format involved a combination of automated processing steps. While there is always a risk of errors in any automated process, we believe that the combination of multiple neural networks and strict inclusion criteria helped to improve the quality and reliability of the dataset. After completing the process, we published the dataset on Kaggle, a popular platform for data science competitions and projects, to make it widely available to researchers and practitioners working in the field of computer vision.
## XI Conclusions
This paper presents our research using UAVs and computer vision techniques to assist in rescue operations, specifically vegetation fires. Data was acquired from various sources, including simulated images from a 3D game engine. An object detection pipeline was created to label the dataset, employing three pre-trained neural networks, a rule-based algorithm, and bounding box correction filters. Supervised learning was conducted using five MMDetection library models.
Initial learning cycle results indicated potential in object recognition, though with varying strengths and weaknesses among models. To enhance results, false positives were extracted and used in a data augmentation pipeline to create mosaic-like images, tagged using the DAP and added to the dataset. The optimized dataset and second learning run increased false positives but decreased false negatives, improving mean average precision and recall. Models were also able to detect objects outside the original dataset. Filtering and fusing results via the object detection pipeline simplifies presentation to responders. Further improvement could be achieved by targeting specific object classes and weighting neural network results. This approach is explored in the German Rescue Robotics Center (A-DRZ) joint research project [1][2].
## Acknowledgment
This work was funded by the Federal Ministry of Education and Research (BMBF) under grant number 13N14860 (A-DRZ, [https://rettungsrobotik.de/](https://rettungsrobotik.de/)). Thanks to all of our partners in the A-DRZ project.
|
2305.02961 | FUSegNet: A Deep Convolutional Neural Network for Foot Ulcer
Segmentation | This paper presents FUSegNet, a new model for foot ulcer segmentation in
diabetes patients, which uses the pre-trained EfficientNet-b7 as a backbone to
address the issue of limited training samples. A modified spatial and channel
squeeze-and-excitation (scSE) module called parallel scSE or P-scSE is proposed
that combines additive and max-out scSE. A new arrangement is introduced for
the module by fusing it in the middle of each decoder stage. As the top decoder
stage carries a limited number of feature maps, max-out scSE is bypassed there
to form a shorted P-scSE. A set of augmentations, comprising geometric,
morphological, and intensity-based augmentations, is applied before feeding the
data into the network. The proposed model is first evaluated on a publicly
available chronic wound dataset where it achieves a data-based dice score of
92.70%, which is the highest score among the reported approaches. The model
outperforms other scSE-based UNet models in terms of Pratt's figure of merits
(PFOM) scores in most categories, which evaluates the accuracy of edge
localization. The model is then tested in the MICCAI 2021 FUSeg challenge,
where a variation of FUSegNet called x-FUSegNet is submitted. The x-FUSegNet
model, which takes the average of outputs obtained by FUSegNet using 5-fold
cross-validation, achieves a dice score of 89.23%, placing it at the top of the
FUSeg Challenge leaderboard. The source code for the model is available on
https://github.com/mrinal054/FUSegNet. | Mrinal Kanti Dhar, Taiyu Zhang, Yash Patel, Sandeep Gopalakrishnan, Zeyun Yu | 2023-05-04T16:07:22Z | http://arxiv.org/abs/2305.02961v2 | # FUSegNet: A Deep Convolutional Neural Network for Foot Ulcer Segmentation
###### Abstract
This paper presents FUSegNet, a new model for foot ulcer segmentation in diabetes patients, which uses the pre-trained EfficientNet-b7 as a backbone to address the issue of limited training samples. A modified spatial and channel squeeze-and-excitation (scSE) module called parallel scSE or P-scSE is proposed that combines additive and max-out scSE. A new arrangement is introduced for the module by fusing it in the middle of each decoder stage. As the top decoder stage carries a limited number of feature maps, max-out scSE is bypassed there to form a shorted P-scSE. A set of augmentations, comprising geometric, morphological, and intensity-based augmentations, is applied before feeding the data into the network. The proposed model is first evaluated on a publicly available chronic wound dataset where it achieves a data-based dice score of 92.70%, which is the highest score among the reported approaches. The model outperforms other scSE-based UNet models in terms of Pratt's figure of merits (PFOM) scores in most categories, which evaluates the accuracy of edge localization. The model is then tested in the MICCAI 2021 FUSeg challenge, where a variation of FUSegNet called x-FUSegNet is submitted. The x-FUSegNet model, which takes the average of outputs obtained by FUSegNet using 5-fold cross-validation, achieves a dice score of 89.23%, placing it at the top of the FUSeg Challenge leaderboard. The source code for the model is available on GitHub.
_Keywords_ - Chronic wounds, foot ulcers, deep learning, image segmentation, FUSeg Challenge 2021.
## 1 Introduction
Chronic wounds are those that fail to progress through the normal healing process or for which the healing process does not restore anatomic and functional integrity after three months [1]. Among the different lower-extremity chronic wounds, the diabetic foot ulcer (DFU) is very prevalent. Long-term diabetic patients may develop neuropathy, a result of nerve damage brought on by chronically raised blood glucose levels. Neuropathy, with or without peripheral vascular disease, reduces or completely diminishes the ability to feel pain in the feet, leading to an ulceration that can range in depth from superficial to deep.
Obese and diabetic patients are more vulnerable to the development of chronic wounds. 34.2 million Americans and 463 million people worldwide have diabetes, a number expected to increase by 25% by 2030 [2]. The risk that a diabetes patient develops a foot ulcer is 30%, and in up to 85% of lower limb amputations in diabetes, the amputation is preceded by a foot ulcer [3]. Foot ulcers have both social and economic impacts: such chronic wounds can affect the patient's quality of life and can cause serious consequences, including limb amputation and death, if they are not treated appropriately [4]. According to research using the Medicare 5% Limited Data Set for CY2014, nearly 15% of Medicare beneficiaries (8.2 million) are affected by chronic wounds, which have an annual cost to Medicare of between $28.1 and $31.7 billion, with diabetic wound infections being the most prevalent category (3.4%) apart from surgical infections [5].
To evaluate and manage chronic wounds, track the wound healing process, and plan for future interventions, the wound area must be precisely measured [6]. Manual measurement of the wound region suffers from three limitations: (1) it is costly in terms of time and labor, (2) it requires medical experts, and (3) it is error-prone. Additionally, the coronavirus disease (COVID) pandemic in 2020 severely affected global health care, including wound care [7]. An alternative is to apply computer-aided methods to segment the wound regions. Computerized methods offer the following advantages: (1) they are faster and more efficient, (2) they are automatic, (3) they make it easier to extract additional morphological features (such as height, width, depth, and area), and (4) they make it easier to keep digital records.
## 2 Literature Review
Literature available for diabetic foot ulcer (DFU) segmentation can be divided into two categories. The first category deploys traditional image processing techniques and machine learning; the second uses various deep learning methods. Song et al. [8] used four segmentation techniques, k-means, edge detection, thresholding, and region growing, to extract features from DFU images. They optimized parameters using grid search and the Nelder-Mead simplex algorithm, and then used a Multi-Layer Perceptron (MLP) and a Radial Basis Function (RBF) neural network to identify the wound region. Wantanajittikul et al. [9] applied Cr-Transformation and Luv-Transformation to highlight the wound region while removing the background; a pixel-wise Fuzzy C-mean Clustering (FCM) technique was then used to segment the transformed images. Jawahar et al. [10] compared three methods for DFU segmentation - mask-based segmentation, L*a*b* color space-based segmentation, and K-means clustering-based segmentation - on the Medetec dataset [11] consisting of 152 clinical images and obtained the best segmentation result with K-means clustering. Heras-Tang et al. [12] used a logistic regression model to classify ulcer-region pixels, followed by a post-processing stage that applies a DBSCAN clustering algorithm together with dilation and closing morphological operators; they achieved an F1-score of 0.88 on a dataset having 26 images for training and 11 for validation.
However, the above-mentioned methods suffer from at least one of the following limitations: (1) require some extent of feature engineering, (2) sensitive to skin color, illumination, and resolution, (3) require manual tuning of parameters, (4) not fully automatic end-to-end, (5) not evaluated on a large dataset. These limitations can be fully or partially overcome by deploying deep learning models.
Goyal et al. [13] introduced a foot ulcer dataset consisting of 705 images and achieved a dice coefficient of 79.4% using the FCN-16 architecture. The network tends to create smooth contours; therefore, its segmentation accuracy is limited in identifying small wounds and wounds with uneven borders. Liu et al. [14] proposed a framework called WoundSeg that uses MobileNet architectures with different numbers of channels alongside the VGG16 architecture. On their dataset of 950 photos captured in an uncontrolled lighting setting with a complicated background, they achieved a dice accuracy of 91.6%. However, rather than relying on experts, a watershed method was used to semi-automatically annotate their dataset. Wang et al. [15] used the lightweight MobileNetV2 on a chronic wound dataset consisting of 810 training images and 200 test images. They added a post-processing step to fill the gaps left by the presence of abnormal tissue and to remove small regions, achieving a data-based dice score of 90.47%. Cao et al. [16] classified wound images into five grades using the Wagner diabetic foot grading method and used Mask R-CNN for semantic segmentation. They had a dataset of 1426 DFU images, with 967 images having nested labels and 459 images having single-graded labels. Their model had an accuracy of 98.42% but did not significantly improve the F1-score compared to region proposal-based methods. Additionally, their model is sensitive to feature vector concatenation, without which its performance decreases. Ramachandram et al. [17] developed an attention-embedded encoder-decoder network for wound tissue segmentation. The model consisted of two stages: the first stage segmented the wound region, and the second stage segmented the four wound tissue types (epithelial, granulation, slough, and eschar). The model was trained on 467,000 images for wound segmentation and 17,000 images for tissue segmentation, using the largest wound dataset reported so far. They evaluated the model on a dataset of 58 images and found poor performance for epithelial tissue. Huang et al. [18] first detected the wound region with Fast R-CNN, and then applied the GrabCut and SURF algorithms to determine the wound boundaries. Consequently, the segmentation part is based on classical image-processing techniques rather than deep learning. The accuracy was 89%, though the mean average precision (mAP) was limited to 58. Additionally, the GrabCut algorithm incorporates GMM data and computes iterative
minimization, which includes some random information and produces marked contours that are less precise in practice. Mahbod et al. [19] proposed an ensembled network for the FUSeg challenge 2021 [20] consisting of LinkNet and U-Net using pretrained EfficientNetB1 and EfficientNetB2 encoders, respectively, with additional pretraining using the Medetec dataset [11]. The FUSeg dataset consists of 1210 DFU images where 1010 images are provided for training and 200 images for evaluation. They achieved data-based Dice scores of 88.80% and 92.07% for the FUSeg dataset and the chronic wound dataset, another dataset of the same organizer, respectively. However, the segmentation performance deteriorated when there was no wound or a very small wound region. Kendrick et al. [21] used a dataset called DFUC2022 for diabetic foot ulcer (DFU) segmentation, consisting of 2000 training and 2000 testing images, which is reported as the largest DFU dataset available currently. They proposed a network using FCN32 with a modified VGG backbone that replaced ReLU activation with Leaky-ReLU and removed the bottom three max-pooling layers to prevent excessive downsampling. To address the class imbalance, they trained on patches that had DFU regions and used a patch size of 64x48 with a stride of 32x24. They achieved a dice score of 74.47%.
In this study, our contribution can be summarized as follows -
1. We propose a modified squeeze-and-excitation (SE) attention module called parallel scSE (P-scSE) that combines both the additive and max-out spatial and channel squeeze-and-excitation (scSE) modules.
2. We propose a novel encoder-decoder-based architecture called FUSegNet for foot ulcer segmentation. The encoder path incorporates a pretrained EfficientNet-b7. In each decoder stage, we integrate P-scSE modules for fusion. Additionally, we develop a modified version called x-FUSegNet (pronounced 'cross-FUSegNet') that takes the average of the outputs obtained by the FUSegNet using 5-fold cross-validation.
3. We propose a new arrangement for the proposed attention model. As opposed to [22], instead of using it at the end of each decoder stage, we use it in the middle followed by a 3\(\times\)3 Conv-ReLU-BN. Layer-wise visualization shows that such an arrangement smooths the output obtained by the attention module.
4. We evaluate the proposed FUSegNet model on the chronic wound dataset [15] and the FUSeg Challenge 2021 dataset [20]. We first carry out extensive experiments on the chronic wound dataset to finalize all the network parameters. We then participate in the FUSeg Challenge 2021 and apply x-FUSegNet to evaluate the performance.
5. The FUSegNet model outperforms state-of-the-art methods for the chronic wound dataset with a dice score of 92.70%. The x-FUSegNet is currently at the top of the leaderboard of the FUSeg Challenge 2021 with a dice score of 89.23% [23].
## 3 Materials and Methods
### Datasets
In this paper, two datasets were used to evaluate our models. They are - the chronic wound dataset [15] and the FUSeg Challenge 2021 dataset [20].
The chronic wound dataset is a publicly available dataset containing 1010 images of foot ulcers with a resolution of \(224\times 224\). It was generated by the Big Data Analytics and Visualization Lab - UWM in collaboration with the AZH Wound and Vascular Center, Milwaukee, Wisconsin, USA. The dataset is divided into 810 images for training and 200 images for testing, all taken from 889 patients under uncontrolled lighting conditions using iPad Pro and Canon SX 620 HS digital cameras. A YOLOv3 object detection model [24] was used to locate the wound region, and the images were manually labeled to create binary segmentation masks verified by wound care experts.
The FUSeg dataset is an extension of the chronic wound dataset and was created by the same group. It contains 1210 foot ulcer images, of which 1010 are identical to the chronic wound dataset but now has the entire view instead of just the wound region. These 1010 images are used for training, and the remaining 200 images are for testing, all
with a fixed size of \(512\times 512\). The FUSeg dataset is used for the MICCAI 2021 Foot Ulcer Segmentation Challenge, with the segmentation masks for the test images only used for evaluation and kept private by the organizers.
### FUSegNet architecture
Extensive experimentation was conducted on the chronic wound dataset before taking part in the FUSeg Challenge 2021, which led to the development of a new architecture named FUSegNet. The overview of the proposed network architecture is illustrated in Figure 1. It is primarily an encoder-decoder-based architecture: it incorporates EfficientNet-b7 as the encoder and a decoder that embeds our proposed modified attention module (P-scSE). In summary, the encoder is a down-sampling path that collects semantic or contextual information, while the decoder is an up-sampling path that restores spatial information. The necessary high-resolution (but low-semantic) information is finally sent from the encoder to the decoder through shortcut connections between the two paths. A modified attention mechanism called parallel spatial and channel squeeze-and-excitation (P-scSE) is fused in the decoder path.
#### 3.2.1 Why EfficientNet-b7
To avoid manual scaling, an EfficientNet architecture is employed as the encoder backbone in this study. Convolutional neural networks rely heavily on scaling, which can be achieved in various ways, including depth-wise, width-wise, and resolution-wise scaling. However, traditional scaling methods are random and require manual tuning, making them time-consuming and challenging to perform simultaneously. The authors [25] of the EfficientNet propose a novel architecture that uniformly scales depth, width, and resolution using fixed scaling coefficients (\(\alpha\), \(\beta\), and \(\gamma\)) and a compound coefficient \(\phi\). Mathematically, depth, width, and resolution scaling are achieved by \(\alpha^{\phi}\), \(\beta^{\phi}\), and \(\gamma^{\phi}\), respectively. To determine the appropriate values of these coefficients, the authors used neural architecture search (NAS) to construct a baseline network called EfficientNet-B0, with values of \(\alpha\), \(\beta\), and \(\gamma\) set at 1.2, 1.1, and 1.15, respectively. After a small grid search under the constraint of \(\alpha\cdot\beta^{2}\cdot\gamma^{2}\approx 2\), the baseline model was scaled up with different values of \(\phi\) to obtain EfficientNet-B1 to B7. EfficientNet-B0, the baseline model, has an image resolution of 224\(\times\)224, while the value of \(\phi\) in EfficientNet-B7 is 6. As a result, the resolution of EfficientNet-B7 can be calculated as 224 \(\times\)\(\gamma^{\phi}\)=224\(\times\)1.15\({}^{6}\)\(\approx\)518. In this work, EfficientNet-B7 is chosen as its resolution is suitable for the FUSeg Challenge 2021 which has an image resolution of 512\(\times\)512. Additionally, EfficientNet-B7 demonstrated exceptional performance on ImageNet.
#### 3.2.2 Parallel spatial and channel squeeze-and-excitation (P-scSE)
In this section, we discuss our proposed modified squeeze-and-excitation (SE) module called Parallel spatial and channel squeeze-and-excitation (P-scSE). Figure 2 illustrates the formation of the P-scSE module. The SE module was first proposed by Hu et al. [26] to enhance network representational power by emphasizing informative features while suppressing less useful ones. It generates a channel descriptor using global average pooling and excites channel-
Figure 1: Overview of the proposed FUSegNet architecture. Numbers are shown for an input of 512\(\times\)512\(\times\)3.
wise dependencies. It is also termed _cSE_ as it excites along the channel axis (see Figure 2(a)). Roy et al. [22] introduced the sSE module (see Figure 2(b)), which squeezes along the channel axis while exciting spatially, and the scSE module (see Figure 2(c)), which combines the cSE and sSE blocks to aggregate spatial and channel-wise information. The notation used is '\(s\)' for spatial and '\(c\)' for the channel, while '\(S\)' and '\(E\)' represent squeeze and excitation, respectively. These modules are useful for complex anatomy tasks like medical image segmentation.
Let \(\textbf{X}^{[l-1]}\) be the output feature map of the (\(l\)-1)-th level of the decoder. Then the intermediate feature map at the \(l\)-th level, \(\textbf{X}_{i}^{[l]}=\textbf{X}_{u}^{[l]}\frown\textbf{X}_{s}^{[l]}\), where \(\textbf{X}_{i}^{[l]}\in\mathbb{R}^{H\times W\times C}\), is obtained by concatenating (\(\frown\)) \(\textbf{X}_{u}^{[l]}\), which is achieved by upsampling \(\textbf{X}^{[l-1]}\), and \(\textbf{X}_{s}^{[l]}\), which is the encoder output at the \(l\)-th level passed through a skip connection.
This intermediate \(\textbf{X}_{i}^{[l]}=[\textbf{x}_{i,1},\textbf{x}_{i,2},...,\textbf{x}_{i,C}]\) is passed through the \(P\)-_scSE_ module. Before diving into the P-scSE module, let's consider \(\textbf{X}_{i}^{[l]}\) is passed through the _scSE_ module. During the channel excitation (cSE), the channel descriptor, \(\textbf{X}_{sS}^{[l]}\in\mathbb{R}^{1\times\times C}\) is achieved by using global average pooling, which, for channel \(c\), can be expressed as:
\[\textbf{x}_{sS,c}^{[l]}=\textbf{F}_{sS}\big{(}\textbf{x}_{i,c}\big{)}=\frac{1 }{H\times W}\sum_{p=1}^{H}\sum_{q=1}^{W}\textbf{x}_{i,c}^{[l]}(p,q) \tag{1}\]
Where \(\textbf{F}_{sS}(.)\) is the squeezing operator along the spatial dimensions. It is then passed through the channel excitation which is expressed as:
\[\textbf{X}_{cE}^{[l]}=\textbf{F}_{cE}\big{(}\textbf{X}_{sS}^{[l]},\textbf{W}\big{)}=\sigma_{2}\big{(}\textbf{W}_{2}\textbf{X}_{cE,i}^{[l]}\big{)}=\sigma_{2}\big{(}\textbf{W}_{2}\big{[}\sigma_{1}\big{(}\textbf{W}_{1}\textbf{X}_{sS}^{[l]}\big{)}\big{]}\big{)} \tag{2}\]

Where \(\textbf{F}_{cE}(\cdot)\) is the excitation operator, \(\textbf{X}_{cE,i}^{[l]}\) is the intermediate excitation, and \(\sigma_{1}(\cdot)\) and \(\sigma_{2}(\cdot)\) are the rectified linear unit (ReLU) [27] and the sigmoid function, respectively, with \(\textbf{W}_{1}\in\mathbb{R}^{\frac{C}{r}\times C}\) and \(\textbf{W}_{2}\in\mathbb{R}^{C\times\frac{C}{r}}\). The intermediate excitation performs dimension reduction specified by a reduction ratio \(r\) that determines the capacity and computational cost of the cSE block. The dimension is then increased back and mapped to between 0 and 1 using a sigmoid function. This is then used to recalibrate \(\textbf{X}_{i}^{[l]}\) by channel-wise multiplication, which can be expressed as \(\textbf{X}_{cSE}^{[l]}=\textbf{X}_{i}^{[l]}\cdot\textbf{X}_{cE}^{[l]}\).
Figure 2: Parallel scSE (P-scSE) module.
In the sSE block, the channel squeeze \(\mathbf{X}_{cS}^{[l]}\) is achieved by performing a convolution on \(\mathbf{X}_{i}^{[l]}\) with a 1\(\times\)1 kernel and one output channel, and can be expressed as \(\mathbf{X}_{cS}^{[l]}=\mathbf{F}_{cv}\big{(}\mathbf{X}_{i}^{[l]},\mathbf{W}_{cS}\big{)}\), where \(\mathbf{F}_{cv}(\cdot)\) is a convolution operator and \(\mathbf{W}_{cS}\in\mathbb{R}^{1\times 1\times C\times 1}\). \(\mathbf{X}_{cS}^{[l]}\in\mathbb{R}^{H\times W}\) is a projected output along the channel axis on which a sigmoid activation is applied to get the spatially excited \(\mathbf{X}_{sE}^{[l]}\), which can be written as \(\mathbf{X}_{sE}^{[l]}=\mathbf{\sigma}_{2}\Big{(}\mathbf{X}_{cS}^{[l]}\Big{)}\). This spatially excited \(\mathbf{X}_{sE}^{[l]}\) is then used to recalibrate \(\mathbf{X}_{i}^{[l]}\) to generate the \(sSE\) block output, which can be expressed as \(\mathbf{X}_{sSE}^{[l]}=\mathbf{X}_{i}^{[l]}\cdot\mathbf{X}_{sE}^{[l]}\). Finally, \(\mathbf{X}_{scSE}^{[l]}\) is achieved by \(\mathbf{X}_{scSE}^{[l]}=\mathbf{X}_{cSE}^{[l]}\bigodot\mathbf{X}_{sSE}^{[l]}\), where \(\bigodot\) is an aggregation operation and can be either max-out (taking the maximum for a specific location), addition, multiplication, or concatenation.
The max-out operation provides competitiveness between these two SE blocks by outputting the maximum for a given location (x, y, c). So, the final excitation is formed by selectively collecting the spatial and channel excitation. The addition operation adds these two blocks element-wise. One important aspect of the addition operation is that instead of ignoring one block it provides equal importance to both of them. The multiplication operation multiplies the SE blocks element-wise. The concatenation operation concatenates them along the channel axis. In this paper, we mainly focus on max-out and additive. Because in multiplication, the final excited pixels will be those that were excited by both SE blocks. If one SE block's excitation is close to 0 and the other one's close to 1, then the resultant excitation will be near 0. So, there is a good chance of information being lost which could be crucial for tasks like segmenting wound regions. On the other hand, though in concatenation, all information is preserved, it doubles the number of channels in the final output, consequently increasing the model complexity as the subsequent convolutional layers need to process feature maps with more channels. So, considering all these facts, to utilize the benefits of max-out and addition, we aggregate both by creating parallel branches of two \(scSE\) modules - one aggregated by max-out and another by addition. The final parallel \(scSE\) (\(P\)-\(scSE\)) module is formed by adding these two branches element-wise. We did not do further recalibration as it has already been done twice. So, the output of P-scSE can be written as \(\mathbf{X}_{p-scSE}^{[l]}=\mathbf{X}_{scSE-maxout}^{[l]}\)\(\bigoplus\)\(\mathbf{X}_{scSE-addition}^{[l]}\), where \(\bigoplus\) is an element-wise addition. Figure 2 demonstrates the parallel scSE. The final output of the \(l\)-th decoder level is:
\[\mathbf{X}^{[l]}=\mathbf{\sigma}_{1}\left(\mathbf{F}_{BN}\Big{[}\mathbf{F}_{cv}\big{(}\mathbf{X}_{p-scSE}^{[l]},\mathbf{W}_{p-scSE}\big{)}\Big{]}\right) \tag{3}\]
Where \(\mathbf{F}_{cv}(\cdot)\), \(\mathbf{F}_{BN}(\cdot)\), and \(\mathbf{\sigma}_{1}(\cdot)\) are the convolutional layer, batch normalization layer, and ReLU activation function, respectively. A switch (SW) is used to form the shorted P-scSE by bypassing the max-out scSE and is used when there is less number of feature maps.
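A compact PyTorch sketch of the block described above follows; it mirrors the equations conceptually, but the published implementation may differ in details (for instance, whether the two parallel branches share the cSE and sSE computations, as assumed here).

```python
import torch
import torch.nn as nn

class CSE(nn.Module):
    """Channel squeeze-and-excitation (Eqs. 1-2)."""
    def __init__(self, channels, r=16):
        super().__init__()
        mid = max(channels // r, 1)
        self.pool = nn.AdaptiveAvgPool2d(1)            # spatial squeeze
        self.fc = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.fc(self.pool(x))               # channel recalibration

class SSE(nn.Module):
    """Spatial squeeze-and-excitation: 1x1 conv to one channel + sigmoid."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.conv(x)                        # spatial recalibration

class PscSE(nn.Module):
    """Parallel scSE: element-wise sum of max-out scSE and additive scSE.

    With shorted=True the max-out branch is bypassed, forming the shorted
    P-scSE used at the top decoder stage.
    """
    def __init__(self, channels, r=16, shorted=False):
        super().__init__()
        self.cse, self.sse = CSE(channels, r), SSE(channels)
        self.shorted = shorted
    def forward(self, x):
        cse, sse = self.cse(x), self.sse(x)
        additive = cse + sse
        if self.shorted:
            return additive
        return torch.max(cse, sse) + additive
```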
#### 3.2.3 Loss function
We use a hybrid loss function consisting of dice loss and focal loss, both with equal weights. Cross-entropy loss has the drawback of computing the per-pixel loss discretely, without taking into account whether the surrounding pixels are ground-truth pixels, thereby ignoring the global scenario. Dice loss, originating from the Sorensen-Dice coefficient, on the other hand, considers information loss both locally and globally. Dice loss can be expressed as \(DL=(1-DSC)\), where \(DSC\) is the dice coefficient. Focal loss (FL) comes in handy when there is a class imbalance (for instance, background \(>>\) foreground) [28]. It down-weights easy examples and focuses training on hard (misclassified) examples or false negatives using a modulating factor, \((1-p_{t})^{\gamma}\), and can be expressed as:
\[FL(p_{t})=-\alpha_{t}(1-p_{t})^{\gamma}\;\log(p_{t}) \tag{4}\]
Where \(\gamma>1\) is the focusing parameter, and \(\alpha_{t}\in[0,1]\) is a weighting factor. So, the final loss function is \(L=DL+FL\).
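A minimal PyTorch sketch of this hybrid loss for binary segmentation is shown below; the focal-loss parameters gamma and alpha are illustrative defaults, not necessarily the values used in the paper.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, targets, gamma=2.0, alpha=0.25, eps=1e-7):
    """Equal-weight sum of dice loss and binary focal loss (L = DL + FL).

    `logits` and `targets` are (N, 1, H, W) tensors; `targets` is a float
    tensor of 0s and 1s.
    """
    probs = torch.sigmoid(logits)

    # Dice loss: 1 - DSC, computed over the whole batch.
    inter = (probs * targets).sum()
    dice = (2 * inter + eps) / (probs.sum() + targets.sum() + eps)
    dice_loss = 1.0 - dice

    # Focal loss: down-weight easy pixels with the (1 - p_t)^gamma factor.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p_t = probs * targets + (1 - probs) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    focal_loss = (alpha_t * (1 - p_t) ** gamma * bce).mean()

    return dice_loss + focal_loss
```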
#### 3.2.4 x-FUSegNet
For the FUSeg challenge 2021, where images contain wound regions in complex backgrounds, we adopt ensembling through k-fold cross-validation on the FUSegNet. The resulting ensemble network is termed x-FUSegNet (pronounced 'cross-FUSegNet'). Note that the chronic wound dataset contains cropped images that mainly show the wound region, with the complex background removed. For the FUSeg challenge, the dataset is first split into 5 folds and then trained with the FUSegNet model 5 times, keeping one fold out for validation each time. Thus, we obtain 5 trained models, which are ensembled during inference: the ensembled output is the average of the predictions of these 5 models. Such an ensemble boosts the segmentation performance. The final binary output is generated by thresholding: a prediction is set to 1 if it is greater than or equal to 0.5, and to 0 otherwise.
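The ensembling step reduces to averaging the fold models' probability maps; a sketch, assuming five trained FUSegNet instances and a preprocessed input tensor:

```python
import torch

@torch.no_grad()
def xfusegnet_predict(models, image, threshold=0.5):
    """Average the sigmoid outputs of the five fold models and binarize.

    `models` are the five FUSegNet checkpoints from 5-fold cross-validation;
    `image` is a (1, 3, H, W) tensor preprocessed as during training.
    """
    probs = torch.stack([torch.sigmoid(m(image)) for m in models]).mean(dim=0)
    return (probs >= threshold).float()
```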
#### 3.2.5 Training and inference
All experiments are done on a 64-bit Ubuntu PC with an 8-core 3.4 GHz CPU and a single NVIDIA RTX 2080Ti GPU. Weight updates are done using the Adam optimizer [29] with an initial learning rate of 1\(\times\)10\({}^{-4}\) and a weight decay of 1\(\times\)10\({}^{-5}\). ReduceLROnPlateau, a learning-rate scheduling technique, is used to decrease the learning rate when the monitored metric stops improving for longer than the permitted patience; we set the decreasing factor to 0.1 and the patience to 10. Images are standardized and ground truths are normalized first; then a set of augmentations is performed before feeding the data to the network. Table 1 lists all the augmentations performed with their probabilities of being selected. With a batch size of 2, we train each model for 200 epochs while monitoring the validation loss and the intersection-over-union (IoU) score. We store and overwrite the checkpoint whenever the validation loss decreases or the IoU score increases; therefore, only the best checkpoint is evaluated during inference. To avoid needless training, early stopping with patience 30 is utilized.
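A sketch of this optimization setup in PyTorch follows; the stand-in model and the placeholder validation loss are only there to make the snippet self-contained.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 1, 3, padding=1)  # stand-in for FUSegNet
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.1, patience=10)

best_val_loss = float('inf')
epochs_without_improvement = 0
for epoch in range(200):
    val_loss = 1.0 / (epoch + 1)          # placeholder validation loss
    scheduler.step(val_loss)              # drop LR 10x after 10 stale epochs
    if val_loss < best_val_loss:          # keep only the best checkpoint
        best_val_loss = val_loss
        epochs_without_improvement = 0
        torch.save(model.state_dict(), 'best_checkpoint.pth')
    else:
        epochs_without_improvement += 1
    if epochs_without_improvement >= 30:  # early stopping with patience 30
        break
```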
#### 3.2.6 Evaluation metrics
The performance of an image segmentation model is frequently assessed in the medical image segmentation community using the dice coefficient (DSC). Additionally, we assess our model using precision, recall, and intersection-over-union (IoU). We evaluate both data-based and image-based metrics. We calculate Pratt's figure of merit, PFOM [30], to evaluate the boundary performance. The definitions are as follows:
\[Precision_{data}=\frac{\sum_{i=1}^{N}TP_{i}}{\sum_{i=1}^{N}TP_{i}+\sum_{i=1}^{N}FP_{i}} \tag{5}\]
\[Recall_{data}=\frac{\sum_{i=1}^{N}TP_{i}}{\sum_{i=1}^{N}TP_{i}+\sum_{i=1}^{N}FN_{i}} \tag{6}\]
\[DSC_{data}=\frac{\sum_{i=1}^{N}2TP_{i}}{\sum_{i=1}^{N}2TP_{i}+\sum_{i=1}^{N}FP_{i}+\sum_{i=1}^{N}FN_{i}} \tag{7}\]
\begin{table}
\begin{tabular}{c c c c} \hline \multicolumn{4}{c}{Augmentation (Probability, p=0.9)} \\ \hline Set 1 (p=0.5) & Set 2 (p=0.9) & Set 3 (p=0.2) & Set 4 (p=0.2) \\ \hline H. Flip (p=0.8) & Scale (limit=0.5, p=1) & Perspective (p=1) & Clahe (p=1) \\ V. Flip (p=0.4) & Rotate (limit=30, p=1) & Gaussian noise (p=1) & B\&C (limit=0.2, p=1) \\ & Shift (limit=0.1, p=1) & Sharpen (p=1) & Gamma (p=1) \\ & Combine all (p=1) & Blur (limit=3, p=1) & Hue saturation (p=1) \\ & & M. blur (limit=3, p=1) & \\ \hline \end{tabular}
\end{table}
Table 1: List of augmentations. (B&C means brightness and contrast)
\[IoU_{data}=\frac{\sum_{i=1}^{N}{\mathit{TP}_{i}}}{\sum_{i=1}^{N}{\mathit{TP}_{i} }+\sum_{i=1}^{N}{\mathit{FP}_{i}}+\sum_{i=1}^{N}{\mathit{FN}_{i}}} \tag{8}\]
\[Precision_{image}=\frac{1}{N}\sum_{i=1}^{N}\frac{TP_{i}}{TP_{i}+FP_{i}} \tag{9}\]
\[Recall_{image}=\frac{1}{N}\sum_{i=1}^{N}\frac{TP_{i}}{TP_{i}+FN_{i}} \tag{10}\]
\[DSC_{image}=\frac{1}{N}\sum_{i=1}^{N}\frac{2TP_{i}}{2TP_{i}+FP_{i}+FN_{i}} \tag{11}\]
\[IoU_{image}=\frac{1}{N}\sum_{i=1}^{N}\frac{TP_{i}}{TP_{i}+FP_{i}+FN_{i}} \tag{12}\]
\[PFOM=\frac{1}{\max\left(I_{gb},I_{pb}\right)}\sum_{i=1}^{I_{pb}}\frac{1}{1+\beta d(i)^{2}} \tag{13}\]
Here _TP_, _FP_, _TN_, and _FN_ are true positives, false positives, true negatives, and false negatives, respectively. \(I_{gb}\) and \(I_{pb}\) are the numbers of boundary points in the ground truth and the prediction, respectively. To establish a relative penalty between smeared and isolated borders, the scaling constant \(\beta\) is set to 1/9, as reported in [30]. \(d(i)\) is the pixel miss distance of the \(i\)th detected edge point, i.e., the Euclidean pixel distance of the \(i\)th boundary point between the ground truth and the prediction.
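A minimal numpy/scipy sketch of Eq. (13), assuming binary edge maps as inputs, is:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pfom(gt_edges, pred_edges, beta=1.0 / 9.0):
    """Pratt's figure of merit (Eq. 13) for binary edge maps."""
    gt = gt_edges.astype(bool)
    pred = pred_edges.astype(bool)
    n_gt, n_pred = gt.sum(), pred.sum()
    if max(n_gt, n_pred) == 0:
        return 2.0                      # stand-in for the infinite case (Sec. 4.2)
    d = distance_transform_edt(~gt)     # distance from each pixel to nearest GT edge
    return np.sum(1.0 / (1.0 + beta * d[pred] ** 2)) / max(n_gt, n_pred)
```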
## 4 Results and Discussion
### Results
Models are primarily evaluated by intersection-over-union (IoU), precision, recall, and dice coefficient (DSC). First, an extensive study is performed on the chronic wound dataset, as its test images are publicly available with corresponding masks. Results for state-of-the-art methods are listed in Table 2. The table is split into four sections. The first section lists results for the same dataset as reported in the published research papers. In the second and third sections, we generate outputs using different state-of-the-art methods; the third section, in particular, lists the performance of different scSE modules fused into the U-Net architecture. The last section tabulates the performance of our proposed FUSegNet model. Our proposed model outperforms the existing approaches, achieving a DSC of 86.05% for image-based and 92.70% for data-based evaluation. Figure 3 demonstrates some outputs generated by the FUSegNet for different DSC scores. Figure 4 demonstrates how the P-scSE module works in the decoding process. For further analysis, images are categorized into 10 groups based on the size of the ulcer. Figure 5 shows the boxplot representation of the image-based evaluation of each category. In addition, we use a pie chart to demonstrate the data-based evaluation of each category. For large ulcer regions, we achieve DSC scores of more than 90%. Figure 6 plots the DSC and PFOM scores of the SE-fused models for the different categories; the P-scSE-fused FUSegNet outperforms the other models in most of the categories. Table 3 shows the top five teams currently on the MICCAI FUSeg Challenge 2021 leaderboard, with our x-FUSegNet at the top. Figures 7 and 8 illustrate prediction results of the x-FUSegNet on the FUSeg Challenge dataset. We then analyze the predictions by visual inspection, as the test data are not publicly available.
\begin{table}
\begin{tabular}{c c c c c|c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{4}{c|}{**Image-based**} & \multicolumn{4}{c}{**Data-based**} & \multirow{2}{*}{**Param**} \\ \cline{2-2} \cline{5-8} & **IoU** & **P** & **R** & **DSC** & **IoU** & **P** & **R** & **DSC** & **(M)** \\ \hline VGG16 & NA & NA & NA & NA & NA & 83.91 & 78.35 & 81.03 & NA \\ SegNet & NA & NA & NA & NA & NA & 83.66 & 86.49 & 85.05 & NA \\ Mask RCNN & NA & NA & NA & NA & NA & 94.30 & 86.40 & 90.20 & NA \\ MobileNetV2+CCL & NA & NA & NA & NA & NA & 91.01 & 89.97 & 90.47 & NA \\ LinkNet-EffB1 + & NA & NA & NA & 84.42 & 85.51 & 92.68 & **91.80** & 92.07 & NA \\ UNet-EffB2 & & & & & & & & \\ \hline MANet & 76.97 & 86.28 & 83.85 & 83.80 & 84.71 & 93.11 & 90.37 & 91.72 & 76.35 \\ FPN & 76.40 & 86.77 & 82.94 & 83.37 & 83.64 & 92.52 & 89.71 & 91.09 & 65.67 \\ TransUNet & 75.61 & 85.38 & 82.50 & 82.61 & 81.15 & 92.07 & 87.25 & 89.59 & 16.97 \\ LinkNet & 74.36 & 87.12 & 80.71 & 81.89 & 82.74 & 94.24 & 87.15 & 90.56 & 62.78 \\ PSPNet & 75.09 & 85.40 & 82.84 & 82.65 & 84.63 & 92.57 & 90.79 & 91.67 & **1.02** \\ DeepLabV3Plus & 77.02 & 86.04 & 84.91 & 83.96 & 85.19 & 92.75 & 91.27 & 92.00 & 63.46 \\ \hline U-Net & 77.92 & 86.58 & 85.31 & 84.66 & 83.28 & 90.31 & 91.45 & 90.88 & 65.45 \\ U-Net + scSE (additive) & 77.81 & **88.72** & 83.35 & 84.15 & 84.92 & **94.89** & 89.00 & 91.85 & 65.46 \\ U-Net + scSE (max-out) & 78.21 & 88.06 & 84.11 & 84.75 & 85.15 & 94.62 & 89.48 & 91.98 & 65.46 \\ \hline Proposed FUSegNet & **79.44** & 88.29 & **86.35** & **86.05** & **86.40** & 94.40 & 91.07 & **92.70** & 64.90 \\ \hline \hline \end{tabular}
*P and R mean precision and recall, respectively
\end{table}
Table 2: Segmentation result on the Chronic Wound dataset. Results in the first section are taken from [15] and [19]. 2nd and 3rd sections tabulated the results that we performed on state-of-the-art methods. 4th section lists the results obtained by our proposed model.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Team** & **Approach** & **Data-based DSC (\%)** \\ \hline Mrinal et al. & x-FUSegNet & **89.23** \\ Mahbod et al. & LinkNet-EffB1 + UNet-EffB2 & 88.80 \\ Zhang et al. & U-Net with HarDNet68 & 87.57 \\ Galdran et al. & Stacked U-Nets & 86.91 \\ Hong et al. & NA & 86.27 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Top five performers of the MICCAI 2021 FUSeg Challenge [23].
### Discussion
The main objective of this work is to develop a deep-learning model for the FUSeg Challenge 2021, which contains diabetic foot ulcer images. Since the segmentation masks for the test images of the chronic wound dataset are publicly available but those for the test images of the FUSeg dataset are not, we first conduct a thorough analysis of the chronic wound dataset, a publicly available DFU dataset, to develop the network. First, we build an encoder-decoder-based model in which a pretrained EfficientNet-b7 is used as the backbone. EfficientNet seeks to achieve a balanced scaling of depth, width, and resolution, which is very important in medical image segmentation. We propose a new decoder architecture by using a modified spatial and channel squeeze-and-excitation module that we call P-scSE. The name comes from the fact that two parallel branches of additive and max-out scSE are combined. In the final decoder stage, a modified P-scSE called shorted P-scSE is used. The intuition is that the final decoder stage has a very limited number of feature maps to process, so taking the max-out risks losing important features. Instead, we bypass the max-out scSE and feed the feature maps from the decoder input to the P-scSE module's output, where they are combined with the additive scSE output.
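A rough PyTorch sketch of this idea is shown below. The internal details (e.g. the reduction ratio of the channel-excitation branch and the exact merging of the branches) are our assumptions rather than the precise FUSegNet implementation:

```python
import torch
import torch.nn as nn

class ChannelSE(nn.Module):
    """Channel excitation (cSE) branch; reduction ratio r is an assumption."""
    def __init__(self, ch, r=16):
        super().__init__()
        hidden = max(ch // r, 1)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, ch, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)

class SpatialSE(nn.Module):
    """Spatial excitation (sSE) branch."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)

class PscSE(nn.Module):
    """Parallel scSE: additive and max-out branches combined; the 'shorted'
    variant bypasses max-out and adds the decoder input instead."""
    def __init__(self, ch, shorted=False):
        super().__init__()
        self.cse, self.sse = ChannelSE(ch), SpatialSE(ch)
        self.shorted = shorted

    def forward(self, x):
        c, s = self.cse(x), self.sse(x)
        if self.shorted:
            return (c + s) + x                # skip max-out; feed input forward
        return (c + s) + torch.maximum(c, s)  # additive + max-out branches
```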
From Table 2, it is seen that our proposed model outperforms the state-of-the-art methods for both image-based and data-based evaluation. Figure 3 shows the segmentation results for different DSC scores. In addition, we compare our proposed model with the U-Net architecture fused with the scSE module. Roy et al. [22] proposed placing the scSE module at the end of each decoder stage in the U-Net architecture. In contrast, we fuse our P-scSE module in the middle, followed by a Conv-BN-ReLU block. Figure 4 shows that such an arrangement smooths the attention output and sends a well-represented feature map to the next decoder stage. It also outperforms the scSE-based scores. It is also notable that our proposed model has fewer parameters than the scSE-fused U-Net and a very competitive parameter count compared to most of the state-of-the-art methods.
For further analysis, as shown in Figure 5(b), we divide the test images into 10 different categories based on the no. of ground truth (GT) pixels. For instance, "%GT area \(<0.15\)" means that the no. of foot ulcer pixels is less than 0.15% of the total no. of pixels in that particular image. Similarly, "%GT area 0" means there is no foot ulcer in the image. We then perform an image-based evaluation on each category to generate boxplots. Figure 5(a)
Figure 3: Segmentation results achieved by the FUSegNet for Chronic Wound dataset for different DSC scores. (top) TP, FP, and FN regions are marked with green, red, and blue, respectively. (bottom) The original and predicted boundaries of the ulcer regions are shown in red and green colors, respectively.
shows the boxplot representation of each category. The whisker endpoints are set at (1st quartile \(-1.5\times\) interquartile range (IQR)) and (3rd quartile \(+1.5\times\) IQR). A green triangle symbolizes the mean, while an orange line represents the median, or 2nd quartile (Q\({}_{2}\)). The evaluation metrics are displayed on the x-axis label. The red dots are outliers, i.e., points that fall outside the whisker interval. It is seen that in most cases Q\({}_{2}\) lies in the upper half of the interquartile box. Although certain outliers, particularly in categories 2 and 4, fall to very low values, the overall performance on the assessment measures is typically above 80%, and the Q\({}_{2}\) and mean values approach 90% or more as the foot ulcer region grows. The boxplot for category 1 is difficult to interpret since its images contain no foot ulcer region: even a very tiny region in the prediction results in a zero DSC, so the DSC is either 0 or 100% for this category. Therefore, as shown in Figure 5(d), another boxplot is generated that represents the no. of non-zero intensities found in category 1. Since images in this category have no foot ulcers, any non-zero intensity in the prediction is a false positive (FP). The mean and median are close to 100 and 50, respectively, which is quite low compared to the total no. of pixels in an image.
We also draw a pie chart, shown in Figure 5(c), that indicates the DSC for each category. This time a data-based evaluation is performed: we first calculate TP, FP, and FN for a specific category and then calculate the DSC of that particular category. We exclude category 1 as it does not contain any foot ulcer pixels. It can be inferred that the DSC score improves as the foot ulcer area grows. For categories 6-10, the DSC is above 90%, while for categories 3-5 it is equal to or close to 90%. Additionally, we assess how well our model performs in comparison to the scSE-fused U-Net models in terms of DSC scores. Figure 6(left) demonstrates that the proposed model outperforms the other models in most of the categories.
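For reference, the data-based DSC of a category pools TP, FP, and FN over all of its images before forming a single score, e.g. (a sketch assuming binary prediction and GT arrays):

```python
def databased_dsc(preds, gts):
    """Pool TP/FP/FN over all images of a category, then form one DSC."""
    tp = sum(((p == 1) & (g == 1)).sum() for p, g in zip(preds, gts))
    fp = sum(((p == 1) & (g == 0)).sum() for p, g in zip(preds, gts))
    fn = sum(((p == 0) & (g == 1)).sum() for p, g in zip(preds, gts))
    return 2 * tp / (2 * tp + fp + fn)

# dsc_per_category = {cat: databased_dsc(preds[cat], gts[cat])
#                     for cat in categories}   # categories 2-10 (1 has no GT)
```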
We then perform a contour test. The goal is to measure how close the ground-truth and prediction contours are. To do so, we use Pratt's figure of merit (PFOM), which evaluates the accuracy of edge localization. We use Canny edge detection with a Gaussian kernel that has an extremely low standard deviation (0.1) to prevent over-smoothing. Also, when neither image contains an edge, the PFOM becomes infinite. To avoid numerical issues, we replace infinity with a relatively high value; in our case, we set it to 2. We then compare the PFOM scores for different squeeze-and-excitation (SE) modules. Figure 6(right) shows that the scSE (additive) works well when the ulcer area is very small, but its score falls rapidly as the region grows. On the other hand, our proposed
Figure 4: Demonstration of how the prediction is formed in the decoding process. (_top_) Outputs right after applying the P-scSE module. (_bottom_) Outputs of each decoder stage. Outputs are taken after applying conv2D-ReLU-BN following P-scSE, except for the top layer (Decoder 0), where an additional conv2D and a sigmoid activation function are applied to create the final output.
FUSegNet shows consistent performance while dominating most of the categories. It is found that the scSE (max-out) performs the least well of these three mechanisms.
After performing extensive tests on the chronic wound dataset to finalize all the parameters of the FUSegNet model, we then apply it to the FUSeg Challenge 2021. The only change made is the submission of x-FUSegNet, a
Figure 5: (_a_) Image-based evaluations for each category for the Chronic Wound dataset are demonstrated using boxplots. (_b_) Specification of category. Categories are generated based on the percentage ground truth area. (_c_) Pie-chart representation of data-based evaluation for each category. (_d_) Boxplot representation of no. of non-zero intensity found in category 1. Images in this category do not have any foot ulcers. So, any presence of non-zero intensity in the prediction is a false-positive (FP).
Figure 6: (_left_) DSC scores for the Chronic Wound dataset using different squeeze-and-excitation (SE) modules for different categories. (_right_) PFOM scores to evaluate the boundary performance for different categories.
variant of FUSegNet that trains the FUSegNet model five times using 5-fold cross-validation. The final output is obtained by pixel-wise averaging of the predictions of all 5 models. We ensemble these models so that better performance can be achieved on complex backgrounds. Table 3 lists the results for the top 5 approaches in the MICCAI FUSeg Challenge 2021. The organizer evaluates data-based metrics only and ranks the teams by dice score. Currently, our model is at the top of the leaderboard [23].
As the test image masks of the FUSeg Challenge are not publicly available, we manually analyze the model's output. Figure 7 shows some outputs that we consider quite good. The segmented regions are marked in green. Visual inspection suggests that the model separates the foot ulcer area from the skin around the foot. In certain images, we also discover some anomalies; Figure 8 illustrates a few such instances. In Figure 8(a-b), visual inspection shows that the model has detected some regions that are actually not foot ulcer regions. Such areas have a color intensity that is roughly a blend of the wound and the nearby skin color. It identifies only one of the two wound regions in Figure 8(c). In Figure 8(d-e), the model misses some portions of the wound regions. In these two cases, the inner portion of the wound region has a relatively deeper intensity that gradually fades towards the contour of the wound. The model produces false negatives, treating these faded areas as the surrounding foot skin. Deeper visual investigation reveals that the model appears to function very well for extensive callus and deep ulcer areas, including granulation, necrotic, and eschar tissues; however, there is room for improvement for superficial ulcers.
During inference, once models are loaded, the average test times for the FUSegNet and x-FUSegNet are 0.04 seconds and 0.21 seconds, respectively.
Figure 8: Segmentation results for the FUSeg dataset that appear problematic after performing manual inspections. Arrows are showing the spots where problems are found.
Figure 7: Segmentation results for the FUSeg dataset that appear promising after performing manual inspections. Boundaries of the predicted region are marked with green color.
## 5 Conclusion
Untreated chronic wounds are costly and negatively impact patients' quality of life. Diabetes and obesity increase the risk of chronic wounds, such as foot ulcers, which can lead to serious consequences. Proper identification and monitoring of the wound area are crucial for effective treatment, but manual extraction is time-consuming and expensive. A fully automated computer-aided foot ulcer segmentation method is a faster and more effective solution.
In this paper, we propose a deep learning-based, fully automatic encoder-decoder model called FUSegNet to segment the foot ulcer region. We construct a modified spatial and channel squeeze-and-excitation module called P-scSE, which we fuse into the middle of each decoder stage. The model is primarily developed for the FUSeg Challenge 2021. Before applying it, we perform several experiments on the chronic wound dataset. Finally, we arrive at x-FUSegNet, which ensembles the outputs of the FUSegNet trained using 5-fold cross-validation. Our proposed model outperforms the current approaches for the FUSeg Challenge 2021 in terms of the dice score.
In this paper, only the ulcer region is considered. The ulcer region can be divided into different sub-regions based on the presence of different tissues. So, future works include zonal segmentation by characterizing different tissues present in the ulcer region. Precise evaluation and monitoring of various tissue types in chronic wounds are essential for managing and tracking the wound's healing progress. This information helps to plan future treatments for the wound.
|
2308.13648 | Emulating Radiative Transfer with Artificial Neural Networks | Forward-modeling observables from galaxy simulations enables direct
comparisons between theory and observations. To generate synthetic spectral
energy distributions (SEDs) that include dust absorption, re-emission, and
scattering, Monte Carlo radiative transfer is often used in post-processing on
a galaxy-by-galaxy basis. However, this is computationally expensive,
especially if one wants to make predictions for suites of many cosmological
simulations. To alleviate this computational burden, we have developed a
radiative transfer emulator using an artificial neural network (ANN),
ANNgelina, that can reliably predict SEDs of simulated galaxies using a small
number of integrated properties of the simulated galaxies: star formation rate,
stellar and dust masses, and mass-weighted metallicities of all star particles
and of only star particles with age <10 Myr. Here, we present the methodology
and quantify the accuracy of the predictions. We train the ANN on SEDs computed
for galaxies from the IllustrisTNG project's TNG50 cosmological
magnetohydrodynamical simulation. ANNgelina is able to predict the SEDs of
TNG50 galaxies in the ultraviolet (UV) to millimetre regime with a typical
median absolute error of ~7 per cent. The prediction error is the greatest in
the UV, possibly due to the viewing-angle dependence being greatest in this
wavelength regime. Our results demonstrate that our ANN-based emulator is a
promising computationally inexpensive alternative for forward-modeling galaxy
SEDs from cosmological simulations. | Snigdaa S. Sethuram, Rachel K. Cochrane, Christopher C. Hayward, Viviana Acquaviva, Francisco Villaescusa-Navarro, Gergo Popping, John H. Wise | 2023-08-25T19:35:58Z | http://arxiv.org/abs/2308.13648v1 | # Emulating Radiative Transfer with Artificial Neural Networks
###### Abstract
Forward-modeling observables from galaxy simulations enables direct comparisons between theory and observations. To generate synthetic spectral energy distributions (SEDs) that include dust absorption, re-emission, and scattering, Monte Carlo radiative transfer is often used in post-processing on a galaxy-by-galaxy basis. However, this is computationally expensive, especially if one wants to make predictions for suites of many cosmological simulations. To alleviate this computational burden, we have developed a radiative transfer emulator using an artificial neural network (ANN), _ANNgelina_, that can reliably predict SEDs of simulated galaxies using a small number of integrated properties of the simulated galaxies: star formation rate, stellar and dust masses, and mass-weighted metallicities of all star particles and of only star particles with age \(<10\) Myr. Here, we present the methodology and quantify the accuracy of the predictions. We train the ANN on SEDs computed for galaxies from the _IllustrisTNG_ project's TNG50 cosmological magnetohydrodynamical simulation. _ANNgelina_ is able to predict the SEDs of TNG50 galaxies in the ultraviolet (UV) to millimetre regime with a typical median absolute error of \(\sim 7\) per cent. The prediction error is the greatest in the UV, possibly due to the viewing-angle dependence being greatest in this wavelength regime. Our results demonstrate that our ANN-based emulator is a promising computationally inexpensive alternative for forward-modeling galaxy SEDs from cosmological simulations.
keywords: galaxies: evolution - radiative transfer - methods: statistical
## 1 Introduction
Forward-modeling observables from galaxy formation simulations provides a means to confront theoretical models with observations directly. This approach represents an alternative to the more-traditional method of inferring physical properties such as stellar mass and star formation rate (SFR) from observations and comparing those with the corresponding quantities from simulations. One advantage of forward-modeling observables from a simulation is the avoidance of "throwing away" information from the simulation; e.g. when predicting spectral energy distributions (SEDs), the full star formation history of a simulated galaxy is used.
One particularly common forward-modeling technique in the galaxy formation community is to predict ultraviolet-to-millimeter (UV-to-mm) SEDs of simulated galaxies, including both integrated SEDs and images in various observed bands (e.g. Jonsson, 2006; Jonsson et al., 2010; Camps and Baes, 2015; see Steinacker et al., 2013 for a review). This calculation is normally done by performing dust radiative transfer on simulated galaxies in post-processing to compute how light from star particles propagates through the simulated galaxy's interstellar medium (ISM) and is absorbed, scattered, and re-emitted by interstellar dust. This approach has been applied to compare model predictions with observations of various classes of objects such as massive galaxies or active galactic nuclei (e.g. Hayward et al., 2012; Lanz et al., 2014; Narayanan et al., 2015; Safarzadeh et al., 2017; Cochrane et al., 2019, 2022, 2023, 2023, 2024; Baes et al., 2020; Parsotan et al., 2021). Moreover, such synthetic observations can be used to perform ground-truth experiments to test and refine techniques for inferring physical quantities from observations (e.g. Wuyts et al., 2010; Hayward and Smith, 2015; Michalowski et al., 2014; Smith and Hayward, 2018; McKinney et al., 2021; Cochrane et al., 2022).
Unfortunately, this forward-modeling step can incur significant additional computational expense and requires detailed outputs from simulations (the full 3D density distribution of stars and dust, for example). This detailed information is not always available, in particular for coarse-resolution simulations. In such scenarios, the Monte Carlo radiative transfer (MCRT) calculations must make various assumptions (e.g. regarding sub-grid dust clumpiness) that can affect the robustness of the predicted observables. It is thus desirable to find a computationally inexpensive, robust method for making such predictions from a limited amount of data, such as integrated galaxy properties. Having a fast method available to run radiative transfer with varying input parameters would allow predictions to
be made for different input parameters, which will ultimately allow us to marginalize over the uncertainties associated with these parameters.
Several previous works (Hayward et al., 2011; Safarzadeh et al., 2016; Lovell et al., 2021; Cochrane et al., 2023) have demonstrated that it is possible to predict observed-frame far-IR (FIR) fluxes of simulated galaxies using only a small number of integrated properties of the galaxies, such as SFR and dust mass. The success of these approaches suggests that geometric variations amongst simulated galaxies play a subdominant role in determining their FIR SEDs and gives confidence that one can employ such simple approaches as an alternative to full MCRT, as has been done in a handful works (e.g. Hayward et al., 2013; Safarzadeh et al., 2017; Hayward et al., 2021; Miller et al., 2015; Popping et al., 2020; Cochrane et al., 2023). However, these works have focused on the thermal dust emission; it is likely more difficult to predict full UV-mm SEDs because geometry plays a greater role in the UV-optical due to the non-isotropic nature of dust attenuation on galaxy scales. Nevertheless, in this work, we aim to predict UV-mm SEDs using only a small number of integrated properties of simulated galaxies, without specifying any information about the galaxy geometry. The general workflow of the project is shown in Figure 1.
Using machine learning (ML) techniques to emulate computationally intensive calculations in simulations has shown substantial promise in astrophysics (Buchner, 2019; Kasim et al., 2021; Bird et al., 2022) as well as in other computationally intensive STEM fields, e.g. climate modeling (Weber et al., 2020). Motivated by such works and the demonstrated possibility of predicting FIR SEDs from a small number of galaxy properties, in this work we develop an artificial neural network (ANN) emulator for MCRT calculations on simulated galaxies. NNs are deep learning (DL) algorithms, a type of ML that employs multiple layers of computation to learn trends and correlations in data. NNs are well suited to capturing complex relationships between input and output data because their weights are optimized iteratively, improving the fit to the data step by step. ML algorithms take into account not only the nonlinear mapping of input to output data but also the distribution of properties across the full training set (Berner et al., 2019).
In this paper, we explore the possibility of an ANN reproducing the nonlinear relationships between galaxy properties and galaxy SEDs. Our work addresses the following question: "given a set of 3D MCRT calculations with fixed assumptions (regarding e.g. the stellar initial mass function, single-age stellar population SED templates, and dust model) performed on galaxies selected from a single cosmological simulation, how well can an ANN emulate the MCRT calculations to predict integrated UV-mm SEDs of the simulated galaxies?" The inverse workflow has been attempted, including using ML to derive galaxy properties from a given SED (Gilda et al., 2021) and using ML to derive star formation histories for galaxies in the _Illustris_ and eagle simulations (Lovell et al., 2019), but to the best of our knowledge, using an NN to predict galaxy SEDs has not been previously attempted. We use SEDs from the TNG50 cosmological magnetohydrodynamical simulation (Nelson et al., 2019; Pillepich et al., 2019), the highest-resolution simulation in the _IllustrisTNG_ suite (TNG) (Nelson et al., 2019; Pillepich et al., 2019), to train our ANN, _ANNgelina_1.
Footnote 1: [https://github.com/snigdaa/runANNgellina](https://github.com/snigdaa/runANNgellina)
The structure of this paper is as follows. In Section 2, we present an overview of the TNG50 dataset used for training. In Section 3, we discuss the methods used to create _ANNgelina_. In Section 4, we present our results, touching on some of the interpretation that went into analyzing _ANNgelina_'s performance, and discuss applications and limitations of _ANNgelina_, as well as improvements to be made in future work. We draw conclusions in Section 5.
## 2 Data
The version of _ANNgelina_ released with this paper (ANNgelina_v1.0) was trained on synthetic SEDs obtained by running a radiative transfer code on galaxies in the TNG50 (Nelson et al., 2019; Pillepich et al., 2019) simulation. Here, we provide an overview of the TNG simulations, the sample of galaxies selected from TNG50, and the methods used to model their SEDs.
### IllustrisTNG simulations
The _IllustrisTNG_ project2 is a suite of large-volume cosmological magnetohydrodynamical simulations that model the formation of galaxies from early times to \(z=0\). The numerical methods and galaxy formation model used for the simulations are described in detail in Pillepich et al. (2017), Weinberger et al. (2018), and Pillepich et al. (2018), so we will only briefly summarize them here.
Footnote 2: [https://www.tng-project.org](https://www.tng-project.org)
The simulations were run with arepo3(Springel, 2010), an unstructured moving-mesh hydrodynamics code. Star formation is implemented by stochastically spawning stellar particles at a rate set by a volume density-dependent Kennicutt-Schmidt law (Kennicutt, 1998; Schmidt, 1959); a density threshold of 0.13 cm\({}^{-3}\) is employed. Stellar populations are evolved self-consistently, including chemical enrichment and gas recycling, resulting in mass loss. An effective equation of state is used to approximate how SNe heat the ISM, and stellar feedback-driven winds are implemented by stochastically isotropically kicking and temporarily hydrodynamically decoupling
Figure 1: _ANNgelina_ project workflow. In the traditional approach, computationally expensive MCRT calculations are needed to forward-model SEDs and images from simulated galaxies. In this work, we demonstrate how an ANN can be used to predict SEDs of simulated galaxies, accelerating this process by at least 7 orders of magnitude.
gas cells (Springel and Hernquist, 2003). Black hole accretion is modelled as Eddington-limited modified Bondi-Hoyle accretion. A two-mode AGN feedback model is employed: a relatively efficient kinetic channel at low Eddington ratio ('radio-mode') and a less-efficient thermal channel at high Eddington ratio ('quasar mode') (see Weinberger et al., 2018 for details). The free parameters in the subgrid models employed were tuned to match the galaxy stellar mass function and stellar mass-to-halo mass relation at \(z\sim 0\), in addition to the cosmic star formation rate history at \(z\lesssim 10\). The black hole mass-stellar mass relation, halo gas fractions and stellar half-mass radii of galaxies were also considered (see section 3.2 of Pillepich et al., 2017). The TNG simulations adopt a cosmology consistent with the Planck 2015 results (Ade et al., 2016): \(\Omega_{m}=0.31\), \(\Omega_{\Lambda}=0.69\), \(\Omega_{b}=0.0486\), \(h=0.677\), \(\sigma_{8}=0.8159\), and \(n_{s}=0.97\)(Nelson et al., 2019).
#### 2.1.1 IllustrisTNG50 dataset
We use the highest-resolution simulation from the _IllustrisTNG_ suite, TNG50 (Nelson et al., 2019), which has a volume of \((35~{}h^{-1}~{}{\rm Mpc})^{3}\). The mass resolution is \(8.5\times 10^{4}\,{\rm M}_{\odot}\) for baryonic particles and \(4.5\times 10^{5}\,{\rm M}_{\odot}\) for dark matter particles. TNG50 employs a collisionless softening length of \(0.3\,{\rm kpc}\) at \(z=0\) and an adaptive gas softening length with a minimum of \(74\) comoving pc.
We draw our sample of TNG50 galaxies from Popping et al. (2022), who modelled integrated SEDs for approximately 3,800 star-forming galaxies4 on or above the star formation main sequence with \(M_{*}>10^{8.7}\,{\rm M}_{\odot}\). Our sample is comprised of 1709 (central and satellite) galaxies at \(z=1\), 1149 galaxies at \(z=2\), 620 galaxies at \(z=3\), 184 galaxies at \(z=4\), and 76 galaxies at \(z=5\). There are fewer SEDs available for the higher-redshift snapshots, since fewer galaxies meet the redshift-independent stellar mass criterion. Figure 2 shows the distribution of the parameter space covered by our TNG50 dataset.
Footnote 4: Here, a single halo is considered a distinct galaxy in each of the snapshots. Given the large redshift spacing between snapshots, this assumption is reasonable.
### Generation of synthetic SEDs
Synthetic galaxy SEDs were generated by Popping et al. (2022) using the skirt MCRT code (Camps and Baes, 2015).5 skirt takes the three-dimensional dust and stellar density distributions from a galaxy simulation and propagates photon packets from radiation sources (i.e. star particles; AGN emission was not modeled) through the simulated galaxies' ISM using a Monte Carlo approach to model dust absorption, scattering, and re-emission. Popping et al. (2022) followed the methods described by Schulz et al. (2020) to calculate the SEDs for TNG50 galaxies. We briefly summarise their methods here.
Footnote 5: [https://skirt.ugent.be/root/_home.html](https://skirt.ugent.be/root/_home.html)
Gas and stellar particles within 7.5 times the stellar half-mass radius were extracted from subhalos. Star particles with ages greater than 10 Myr were assigned Bruzual and Charlot (2003) single-age stellar population model template SEDs based on their age and metallicity. Star particles with ages \(\leq 10\) Myr were assigned SEDs from Groves et al. (2008), which include a sub-grid model for HII and photodissociation regions. Dust was modelled using the gas-phase metal density distribution, with a constant dust-to-metals mass ratio of 0.4 (e.g. Dwek, 1998; James et al., 2002). Gas cells with \(T>7.5\times 10^{4}\,{\rm K}\) were assumed to contain no dust due to dust destruction via sputtering. The Weingartner and Draine (2001) Milky Way dust model was employed, with a mix of graphite, silicate, and PAH grains. Self-absorption of dust grains was not taken into account, but this is unlikely to be significant for the bulk of the galaxy population considered here (e.g. Hayward et al., 2011).
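This dust prescription amounts to a few lines of code (a sketch; `gas_mass`, `metallicity`, and `temperature` are assumed per-cell arrays):

```python
import numpy as np

def cell_dust_masses(gas_mass, metallicity, temperature,
                     dust_to_metals=0.4, t_destroy=7.5e4):
    """Dust mass per gas cell: 0.4 x metal mass, zeroed above 7.5e4 K."""
    dust = dust_to_metals * gas_mass * metallicity  # metal mass = m_gas * Z
    return np.where(temperature > t_destroy, 0.0, dust)
```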
The modeled SEDs span rest-frame \(0.1-1000\,{\rm\mu m}\) and are sampled at 200 uniformly log-spaced wavelengths. The simulated galaxies were "observed" "face-on" (i.e. the detector was placed such that the line of sight was along the angular momentum vector of the simulated galaxy, corresponding to face-on projections for simulated disc galaxies).
#### 2.2.1 Modeled SEDs for TNG50 galaxies
In Figure 3, we show examples of modeled SEDs for TNG50 galaxies in several stellar mass bins. The upper panel shows SEDs color-coded by SFR. The lower panel shows SEDs color-coded by redshift. We use this visualization to understand trends in our data. As
Figure 2: Correlations between various integrated properties and stellar mass for all galaxies from TNG50 with \(M_{*}>10^{9}\,{\rm M}_{\odot}\), illustrating the region of parameter space spanned by our simulated galaxy sample: SFR (_left-most_), dust mass (_middle-left_), and mass-weighted metallicities of all stars (_middle-right_) and young (\(<10\) Myr-old) stars (_right-most_). In each panel, the points are color-coded by redshift. All properties have been calculated within twice the stellar half-mass radius of each galaxy. As expected, all of the considered quantities correlate with total stellar mass. The normalization of the SFR-\(M_{*}\) relation increases with increasing redshift, whereas the \(M_{\rm dust}-M_{*}\) correlation is independent of redshift for the range considered.
expected, higher SFRs generally yield higher fluxes. There are interesting trends at short wavelengths, though, with some high-SFR SEDs displaying particularly low fluxes at \(\lambda\lesssim 1\,\upmu\)m, owing to high levels of dust attenuation. We also note that SFR correlates with the shape of the FIR SED: the FIR SED peaks at shorter wavelengths for galaxies with higher SFRs. For a given stellar mass bin, higher-redshift (i.e. earlier-forming) galaxies are also brighter at all wavelengths (note that these SEDs are generated in the rest frame). This likely reflects the evolution of the "main-sequence": at a given stellar mass, higher-redshift star-forming galaxies have higher SFRs and hence higher intrinsic (pre-dust-attenuated) fluxes.
## 3 Methods
We design an ANN, _ANNgelina_, to emulate SED generation for TNG50 galaxies and hence bypass the radiative transfer procedure described in Section 2.2. An ANN architecture consists of an input layer with several "features"; one or more hidden layers, in which the features are manipulated; and an output layer, which contains the desired outputs, or "labels". The structure of _ANNgelina_ is illustrated in Figure 4. We use a fully connected network, where each neuron is connected to all neurons in the previous and following layers. Each neuron in the input layer corresponds to a galaxy property, such as stellar mass or SFR, and each neuron in the output layer corresponds to the flux at one wavelength. _ANNgelina_ is built with _PyTorch_(Paszke et al., 2019)6. In the following sections, we describe the architecture and parameter choices in more detail and outline the training procedure.
Footnote 6: [https://pytorch.org/](https://pytorch.org/)
### Data normalisation and partition
Each input feature, \(X\), is normalised as follows:
\[X_{\rm norm}=\frac{\log_{10}X-\mu(\log_{10}X)}{\sigma(\log_{10}X)} \tag{1}\]
For any features that had zero values, we added a negligible nonzero value to that feature before normalizing to avoid taking the logarithm of zero. This normalization procedure accelerates the training of the network.
The flux is input as the irradiance, \(E\), defined as \(E=\lambda S_{\lambda}/(\mathrm{W\,m^{-2}})\). We use \(\log_{10}E\) as the labels in the network. Data are then randomized and partitioned into training (70%), validation (15%), and test (15%) sets.
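A minimal sketch of Equation 1 and the partition follows; for simplicity, the small constant is added to every feature here, whereas in practice it is only needed for features containing zeros:

```python
import numpy as np

def normalise(X, eps=1e-12):
    """Equation 1 applied column-wise; eps guards against log10(0)."""
    logX = np.log10(X + eps)
    return (logX - logX.mean(axis=0)) / logX.std(axis=0)

X = normalise(features)                    # (N_gal, 5) galaxy properties
y = np.log10(irradiance)                   # labels: log10 E at 200 wavelengths

rng = np.random.default_rng(0)
idx = rng.permutation(len(X))
n_tr, n_va = int(0.70 * len(X)), int(0.15 * len(X))
train_idx, val_idx, test_idx = np.split(idx, [n_tr, n_tr + n_va])
```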
### Training the ANN
Here, we provide a broad overview of the training process and various hyper-parameter and function choices. During a "forward pass", the ANN pushes the training data through the hidden layers by using the current values of the weights for each feature. After each forward pass, the model updates itself through a "backpropagation" phase, in which it determines a loss value (based on the difference between the predicted and true outputs) and updates the weights according to an optimization function, usually a type of gradient descent. The number of times the forward pass and backpropagation cycle is executed depends on the pre-defined batch size and number of epochs.
Several parameters define the workflow of the network. Hyper-parameters such as the number of hidden layers and neurons in each layer, dropout fraction, weight decay, and activation function are related to the structure of the network. These hyper-parameters were optimized as described in Section 3.3. The learning rate, number of epochs, and batch size affect the training procedure; appropriate choices ensure generalizability of the model and improved efficiency while training.
Figure 3: TNG50 SEDs used to train our ANN. The top row shows the SEDs color-coded by SFR, and the bottom row shows the SEDs color-coded by redshift; the columns show different stellar mass bins, as indicated at the tops of the figures. In a given mass bin, galaxies with higher SFRs tend to have higher IR fluxes, and their FIR SEDs are hotter. In the two lower-mass bins, the UV-optical fluxes also increase with SFR. This is less evident in the highest-mass bin due to increasing dust attenuation. In all mass bins, there is a subset of galaxies with high SFRs and heavily reddened UV-optical SEDs. Higher-redshift galaxies tend to have larger fluxes due to the evolution of the star formation main sequence (at fixed mass, SFR increases with increasing redshift).
The batch size (128) and number of epochs (1500) were tuned manually. The batch size refers to the number of samples to be propagated through the network in one full pass. One epoch is completed once all training samples have been propagated through the network. The initial learning rate, dropout fraction, weight decay, number of hidden layers, and number of neurons per hidden layer were determined using _Optuna_(Akiba et al., 2019), as described in Section 3.3.
We use "dropout", a simple and efficient regularization technique that consists of removing a random subset of weights at every training epoch, thereby ensuring that the learned weights are more robust and preventing over-fitting. The dropout fraction determines what percentage of weights are removed. Weight decay works in tandem with dropout and is another regularization technique applied to all weights in a network. There is an extra term added to the loss function to reduce the variance in the network weights.
We adopt the following optimizer, activation and loss functions:
**Optimizer function:** We chose the AdamW optimizer, a modified version of the Adam optimizer (Kingma & Ba, 2014) in which the weight decay is decoupled from the gradient-based update of the weights. It takes an initial learning rate and weight decay value and adapts the effective step size for each weight individually while performing gradient descent. It performs well at handling non-smooth data. The learning rate controls how much the model changes in response to the prediction error at each training step.
**Activation function:** We use the LeakyReLu (Leaky Rectified Linear Unit) activation function (Maas et al., 2013), which is defined as
\[y=\begin{cases}x&(x\geq 0),\\ 0.01x&(x<0).\end{cases} \tag{2}\]
**Loss function:** NN losses in supervised learning models are determined by quantifying the difference between the predicted and true outputs using a loss function. We adopt the mean square error (MSE) as our loss function. The MSE guarantees that the prediction represents the posterior mean without assuming any shape for the posterior distribution. During the training phase, the network minimizes the MSE averaged over each batch. The MSE for a batch of size \(b\) is given by
\[\text{MSE}=\frac{1}{200\cdot b}\sum_{i=1}^{b}\sum_{\lambda=1}^{200}\left[\log _{10}(E_{\lambda,\ i,\text{ pred}}/E_{\lambda,\ i,\text{ true}})\right]^{2} \tag{3}\]
where \(\lambda\) represents the wavelength index of the SED, given 200 wavelengths sampled between \(0.1-1000\)\(\mu\)m, and \(E_{\lambda,\ i}\) is the irradiance at a given wavelength, \(\lambda\), for a given galaxy in the batch.
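In code, since both predictions and labels are stored as \(\log_{10}E\), Equation 3 reduces to a plain mean squared error (a PyTorch sketch):

```python
import torch

def sed_mse(log_e_pred, log_e_true):
    """Equation 3: mean over the batch and the 200 wavelengths of the
    squared log10 flux error (both tensors already hold log10 E)."""
    return torch.mean((log_e_pred - log_e_true) ** 2)
```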
### Hyperparameter optimization
We use the _Optuna_ hyperparameter tuning package (Akiba et al., 2019) to tune the following model parameters: learning rate, dropout fraction, weight decay, number of hidden layers, and number of neurons per hidden layer. After testing with up to 11 hidden layers, each with up to 1000 neurons, optimal performance was achieved with a shallow architecture consisting of one hidden layer with 419 neurons, as shown in Figure 4. All hyperparameter values for our optimized model can be found in the public repository listed in Section 2.
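A sketch of the resulting architecture is given below; the dropout fraction shown is a placeholder, with the tuned value available in the public repository:

```python
import torch.nn as nn

class ANNgelina(nn.Module):
    def __init__(self, n_in=5, n_hidden=419, n_out=200, dropout=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden),
            nn.LeakyReLU(0.01),          # Equation 2
            nn.Dropout(dropout),
            nn.Linear(n_hidden, n_out))

    def forward(self, x):                # x: normalised features (Eq. 1)
        return self.net(x)               # predicted log10 E at 200 wavelengths
```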
### Galaxy feature selection tests
This version of _ANNgelina_ uses just five galaxy properties (SFR, \(M_{\rm dust}\), \(M_{*}\), \(Z_{*}\), and \(Z_{*<10\rm Myr}\)) to predict SEDs. While designing _ANNgelina_, we tested five additional galaxy properties: gas mass (\(M_{\rm gas}\)), redshift (\(z\)), half-SFR radius, half-stellar-mass radius, and half-dust-mass radius.
We first tuned an ANN using all ten features ("full" model). We then trained ten further nine-feature ANNs, using the same architecture but with a different feature removed. In all cases, all input features were normalised, as described in Equation 1. We compared the loss values of each of the nine-feature ANN to that of the "full" model. We eliminated features that did not significantly impact the loss values of the model. It is reassuring that the features that were determined to significantly affect the model are all properties that should matter according to physical intuition (e.g. SFR, dust mass). Redshift is not important because we are predicting SEDs in the rest frame.
In Table 1, we show the correlation matrix of the galaxy properties used in _ANNgelina_. Correlations are calculated as the covariance of each variable pair divided by the product of their standard deviations. A correlation with an absolute magnitude close to 1.0 indicates very strong correlation, while an absolute magnitude closer to 0.0 indicates a weak correlation. While some properties are strongly correlated (most notably stellar and dust masses), _ANNgelina_'s per
\begin{table}
\begin{tabular}{l c c c c c} \hline Feature & \(M_{\text{dust}}\) & \(M_{*}\) & \(Z_{*}\) & \(Z_{*<10\text{Myr}}\) & SFR \\ \hline \(M_{\text{dust}}\) & 1.0 & 0.88 & 0.41 & 0.28 & 0.48 \\ \(M_{*}\) & 0.88 & 1.0 & 0.44 & 0.29 & 0.48 \\ \(Z_{*}\) & 0.41 & 0.44 & 1.0 & 0.9 & 0.37 \\ \(Z_{*<10\text{Myr}}\) & 0.28 & 0.29 & 0.9 & 1.0 & 0.25 \\ SFR & 0.48 & 0.48 & 0.37 & 0.25 & 1.0 \\ \hline \end{tabular}
\end{table}
Table 1: Correlation matrix for TNG50 galaxy properties.
Figure 4: The architecture of _ANNgelina_, with input features labeled \(X_{1}-X_{5}\) (purple), neurons in the hidden layer labeled \(\text{h}_{1}-\text{h}_{419}\) (orange), and output neurons labeled \(\text{E}_{1}-\text{E}_{200}\) (green). Not all neurons are shown. The following galaxy properties are used as features: SFR, stellar mass, dust mass, and stellar metallicities of both all and young (\(<10\) Myr old) stars. The output comprises rest-frame fluxes at 200 wavelengths, uniformly spaced in log-space. We note the loss function, optimizer and activation functions used in training.
In Table 1, we show the correlation matrix of the galaxy properties used in _ANNgelina_. Correlations are calculated as the covariance of each variable pair divided by the product of their standard deviations. A correlation with an absolute magnitude close to 1.0 indicates very strong correlation, while an absolute magnitude closer to 0.0 indicates a weak correlation. While some properties are strongly correlated (most notably stellar and dust masses), _ANNgelina_'s performance is impacted negatively if any of the listed properties are omitted from the input feature set, indicating that even strongly correlated features contribute non-redundant information.
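The entries of Table 1 follow from the standard Pearson formula, e.g. (assuming an `(N_gal, 5)` array of log-space features, a hypothetical name):

```python
import numpy as np

# log_features: (N_gal, 5) array of the log-space galaxy properties
corr = np.corrcoef(log_features, rowvar=False)   # 5x5 Pearson matrix (Table 1)
```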
## 4 Results and Discussion
### Characterising _ANNgelina_'s overall performance
We assess the performance of _ANNgelina_ using a test set of TNG50 galaxies that were not included in the training set. Here, we define the metrics used to characterise how well the SEDs are predicted. We define the offset of the predicted (log-space) SED from the target (skirt-predicted) SED at a given wavelength \(\lambda\) as follows:
\[\mathrm{Offset}_{\lambda}/\mathrm{dex}=\log_{10}(E_{\lambda,\,\mathrm{predicted}}/E_{\lambda,\,\mathrm{true}}). \tag{4}\]
For each galaxy in the test set, we take the median absolute offset value across all 200 wavelengths, and call this the MAE:
\[\mathrm{MAE/dex}=\mathrm{median}_{\lambda}\left|\mathrm{Offset}_{\lambda}/\mathrm{dex}\right|. \tag{5}\]
We use the MAE as a single value characterising the SED prediction for each galaxy. We note that, because of our definition, the MAE is essentially equivalent to the Median Absolute Percentage Error in the low-error regime.
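In code, Equations 4-5 amount to the following (a numpy sketch; `e_pred` and `e_true` are assumed `(N_gal, 200)` irradiance arrays):

```python
import numpy as np

offset = np.log10(e_pred / e_true)        # Eq. 4, shape (N_gal, 200)
mae = np.median(np.abs(offset), axis=1)   # Eq. 5, one value per galaxy
# e.g. an MAE of 0.06 dex corresponds to a typical flux error of ~15 per cent
```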
In Figure 5, we show several examples of target and predicted SEDs for galaxies in our test set. From left to right, we show predicted SEDs with MSEs at the \(\approx 2^{\mathrm{nd}}\), \(50^{\mathrm{th}}\), and \(98^{\mathrm{th}}\) percentiles of the MSE distribution, where the \(98^{\mathrm{th}}\) percentile refers to the worst of the three predictions, i.e., the highest MSE. The top panels show the true SED calculated using skirt in cyan and that predicted by _ANNgelina_ in black. The bottom panels show the logarithm of the prediction error versus wavelength. The MSE and MAE are quoted at the tops of the columns. For the \(2^{\mathrm{nd}}\) percentile example, the true and predicted SEDs are essentially indistinguishable - the SED is predicted to within \(\sim 0.05\) dex (\(\sim 12\%\)) across the full UV-mm range. The MAE is \(0.0109\) dex (\(2.5\%\)). For the \(50^{\mathrm{th}}\) percentile example, _ANNgelina_ overpredicts the SED at both UV and FIR wavelengths, but the error is at most \(\sim 0.15\) dex (\(\sim 41\%\)). The MAE is \(0.0643\) dex (\(16\%\)). The \(98^{\mathrm{th}}\) percentile example displays a catastrophic failure: the UV luminosity density is underpredicted by an order of magnitude. The MAE is \(0.1638\) dex (\(46\%\)).
We compile MAE values for all test-set galaxies in Figure 6.
_ANNgelina_ performs well on the TNG50 dataset we employ, with an average MAE of \(\sim 0.06\) dex (15%), and with the worst predictions having an MAE of \(\sim 0.2\) dex (60%). In Figure 7, we show 2D projections of the input feature space colored by MSE. If a particular region of parameter space were problematic for _ANNgelina_, there would be local maxima in the MSE distribution. We see no such local maxima. This indicates that _ANNgelina_ does not exhibit higher MSE values in any one part of the parameter space; rather, it predicts the SEDs uniformly well throughout the parameter space probed. The scatter in the predictions may be due to additional variables not included in the analysis.
### _ANNgelina_'s performance in different wavelength regimes
As seen in Figure 5, for a given SED, _ANNgelina_'s accuracy varies with wavelength. In the examples shown, _ANNgelina_ appears to struggle to reproduce rest-frame UV emission more than other wavelengths. In this section, we characterise distributions of MAE values within individual wavelength ranges for the whole test sample.
We divide the SEDs into the following wavelength ranges: \(0.1-0.4\,\mu\mathrm{m}\) (UV), \(0.4-0.7\,\mu\mathrm{m}\) (optical), \(0.7-3\,\mu\mathrm{m}\) (near-IR; NIR), \(3-50\,\mu\mathrm{m}\) (mid-IR; MIR), \(50-207\,\mu\mathrm{m}\) (far-IR; FIR) and \(207-1000\,\mu\mathrm{m}\) (sub-mm). For each galaxy in the test set, we calculate the MAE within each of these wavelength ranges, following Equation 5. In Figure 8, we compile these values (note that the worst 4% of values have been omitted for clarity). Vertical lines indicate the average MAE across test galaxies within each of these wavelength bands. The UV stands out from the others, with a significantly higher average MAE of 0.15 dex (41%), while all other wavelength bands hover around 0.06 dex (15%). This is expected, as the UV emission depends much more strongly on the geometry of the observed simulated galaxy: the UV is highly attenuated by dust grains, and the amount of attenuation can vary considerably with the line of sight.
### Limitations of our approach
There are a few considerations to highlight to potential users of _ANNgelina_. First, all SEDs used to train _ANNgelina_ are for "face-on" projections of TNG50 galaxies (i.e. along the angular momentum axis). Consequently, for galaxies with disc-like morphologies, our dataset is biased toward the least-obscured lines-of-sight, and heavily obscured lines-of-sight (e.g. edge-on discs) are thus under-represented. The SEDs of simulated galaxies with exactly the same values for the input features can vary due to viewing-angle effects: dust attenuation is non-isotropic, especially in the UV. For this reason, if the "raw" predictions from _ANNgelina_ are used, the scatter in the predicted SEDs will be unrealistically low. One cannot simply perturb the predicted photometry by adding random noise to account for this effect, as the prediction errors for flux values in different bands should be correlated (e.g. for an edge-on galaxy, _ANNgelina_ is likely to systematically overpredict the UV-optical flux). In future work, we will employ a larger training set with multiple lines-of-sight and incorporate some geometric features to attempt to account for viewing angle effects in the ANN's predictions. Since there is intrinsic scatter to be expected when predicting an SED given a set of galaxy properties, we will also attempt to use normalizing flows to sample the distribution instead of providing the model a single value as we have done here.
Second, _ANNgelina_ was trained on TNG50 galaxies sampling a restricted region of parameter space (simulated galaxies with \(M_{\star}>10^{8.7}\,\mathrm{M}_{\odot}\) on or above the main sequence), and it should only be applied within this region. Applying _ANNgelina_ to simulated galaxies outside of this parameter space may result in high prediction error, even when these galaxies are simulated with exactly the same code (i.e. numerical method and sub-grid models) and resolution.
Moreover, only a single simulation - and thus single code and single resolution - was used. It is possible that the prediction error would be significantly greater if _ANNgelina_ were applied to simulations that employ the _IllustrisTNG_ model but different resolution or/and simulations that employ a different numerical method or/and sub-grid models. In this proof-of-concept work, we have opted to employ only a single simulation, but in future work, we will explore how well we can cross-apply our emulator to other simulations.
Finally, we note that various assumptions are "baked in" to the MCRT calculations and thus _ANNgelina_. For example, it is necessary to make assumptions about the stellar populations (stellar initial mass function and single-age stellar population SED templates) and dust (optical properties, dust-to-metal ratio, temperature above which dust is destroyed, and sub-grid dust distribution). These assumptions can affect the resulting SEDs, and depending on the science question of interest, they may affect the quantitative or even qualitative results. We have explored the sensitivity of our results to such assumptions in various previous works (e.g. Hayward et al., 2011; Snyder et al., 2013; Safarzadeh et al., 2017). Exploring the impact of such assumptions is beyond the scope of this proof-of-concept work. Instead, as noted in the introduction, we have addressed the following question: "given a set of 3D MCRT calculations with fixed assumptions (regarding e.g. the stellar initial mass function, single-age stellar population SED templates, and dust
Figure 7: TNG50 galaxies in the test set, color-coded by MAE value. The uniformity of MSE values in all feature-feature planes indicates that _ANNgelina_ performs uniformly well across the parameter space probed.
model) performed on galaxies selected from a single cosmological simulation, how well can an ANN emulate the MCRT calculations to predict integrated UV-mm SEDs of the simulated galaxies?"
It is important that users understand that the results will in general depend on the assumptions used in the MCRT calculations to generate the training set. For this reason, the reported errors _do not_ include systematic errors associated with the MCRT assumptions and thus underestimate the 'true' error. These errors only represent how imperfectly the ANN can emulate MCRT calculations for this single simulation and set of MCRT assumptions. If, for example, one desires to predict SEDs assuming SMC-like dust, it is necessary to generate a training set using SMC-like dust in the MCRT calculations and then train a new ANN rather than using the ANN that we have made publicly available.
## 5 Conclusions
In this work, we have presented an ANN-based emulator to predict UV-mm SEDs of simulated galaxies. We trained the ANN on a sample of SEDs generated in previous work by performing dust MCRT on galaxies from the TNG50 simulation. We find that the ANN performs well at predicting the SEDs of simulated galaxies in the test set - the average MAE is 0.06 dex. For the vast majority of simulated galaxies in the test set, _ANNgelina_ can predict the flux across the full UV-mm wavelength range with a maximum relative error of \(\lesssim 50\) per cent. The UV dominates the prediction error, likely because the viewing-angle dependence is strongest in this wavelength regime and our input feature set includes no information about viewing angle. Our results demonstrate that our ANN-based emulator is a promising, computationally inexpensive alternative to performing MCRT to predict UV-mm SEDs of simulated galaxies.
## Acknowledgements
The authors thank the anonymous referee for their useful comments, which helped improve the clarity of the manuscript. SS is supported by the NASA FINESST fellowship award 80NSSC22K1589. SS acknowledges support from the CCA Pre-doctoral Program, during which this work was initiated. The Flatiron Institute is supported by the Simons Foundation. JHW acknowledges support from NASA grants NNX17AG23G, 80NSSC20K0520, and 80NSSC21K1053 and NSF grants OAC-1835213 and AST-2108020.
## Data Availability
A github repository containing the tools to run our RT emulator is available at [https://github.com/snigdaa/runANNgellina](https://github.com/snigdaa/runANNgellina). Our cleaned data are available at the github repository linked above.
|
2301.08950 | Genetically Modified Wolf Optimization with Stochastic Gradient Descent
for Optimising Deep Neural Networks | When training Convolutional Neural Networks (CNNs) there is a large emphasis
on creating efficient optimization algorithms and highly accurate networks. The
state-of-the-art method of optimizing the networks is done by using gradient
descent algorithms, such as Stochastic Gradient Descent (SGD). However, there
are some limitations presented when using gradient descent methods. The major
drawback is the lack of exploration, and over-reliance on exploitation. Hence,
this research aims to analyze an alternative approach to optimizing neural
network (NN) weights, with the use of population-based metaheuristic
algorithms. A hybrid between Grey Wolf Optimizer (GWO) and Genetic Algorithms
(GA) is explored, in conjunction with SGD; producing a Genetically Modified
Wolf optimization algorithm boosted with SGD (GMW-SGD). This algorithm allows
for a combination between exploitation and exploration, whilst also tackling
the issue of high-dimensionality, affecting the performance of standard
metaheuristic algorithms. The proposed algorithm was trained and tested on
CIFAR-10 where it performs comparably to the SGD algorithm, reaching high test
accuracy, and significantly outperforms standard metaheuristic algorithms. | Manuel Bradicic, Michal Sitarz, Felix Sylvest Olesen | 2023-01-21T13:22:09Z | http://arxiv.org/abs/2301.08950v1 | Genetically Modified Wolf Optimization with Stochastic Gradient Descent for Optimising Deep Neural Networks
###### Abstract
When training Convolutional Neural Networks (CNNs) there is a large emphasis on creating efficient optimization algorithms and highly accurate networks. The state-of-the-art method of optimizing the networks is done by using gradient descent algorithms, such as Stochastic Gradient Descent (SGD). However, there are some limitations presented when using gradient descent methods. The major drawback is the lack of exploration, and over-reliance on exploitation. Hence, this research aims to analyze an alternative approach to optimizing neural network (NN) weights, with the use of population-based metaheuristic algorithms. A hybrid between Grey Wolf Optimizer (GWO) and Genetic Algorithms (GA) is explored, in conjunction with SGD; producing a Genetically Modified Wolf optimization algorithm boosted with SGD (GMW-SGD). This algorithm allows for a combination between exploitation and exploration, whilst also tackling the issue of high-dimensionality, affecting the performance of standard metaheuristic algorithms. The proposed algorithm was trained and tested on CIFAR-10 where it performs comparably to the SGD algorithm, reaching high test accuracy, and significantly outperforms standard metaheuristic algorithms.
Grey Wolf Optimizer, SGD, SL-PSO, GA, Metaheuristic Algorithms, optimization algorithm, Bi-objective optimization, CIFAR-10
## I Introduction
The purpose of this paper is to research and discuss the use of evolutionary algorithms for finding the optimal weights of NNs to perform image classification on the CIFAR-10 dataset, proposed by Krizhevsky et al. [1]. Section 2 begins with a review of the work relating to CNNs and their emergence, the evolutionary algorithm Social Learning Particle Swarm Optimisation (SL-PSO), and population-based meta-heuristic algorithms for NN weight optimization in general. The detailed structure of our NN and the justification of the chosen architecture are presented in Section 3. Section 4 details the structure of the proposed hybrid GMW-SGD [2]. Section 5 presents the collected results. Section 6 reports the training of the proposed algorithm as a bi-objective problem and discusses its overall performance. Finally, Section 7 concludes this work.
## II Related Work
### _Convolutional Neural Networks_
A typical Feed-forward NN (FFNN), also referred to as a multilayer perceptron (MLP), is composed of an input layer, hidden layers and an output layer. Each individual layer of neurons is interconnected with its neighboring layers through a set of real-valued weights. Furthermore, each layer of neurons, apart from the input layer, is assigned an additional activation threshold, called a bias. In this work, CNNs are used. The main property of CNNs that makes them more suitable than FFNNs for this problem is that the inputs are images, on which they can perform convolutions. They are better equipped because a CNN's convolutional layers can successfully capture the spatial and temporal dependencies of an image. Convolution is a linear operation that takes two functions \(f\) and \(h\) and produces another function \(g\). In the case of CNNs, \(f\) is a multi-dimensional array (image pixels), \(h\) is a pre-defined matrix (also called a kernel or filter) and, finally, \(g\) is the result of the convolution, also called a feature map. FFNNs and CNNs have two main phases, feed-forward and back-propagation, through which the continuous optimization of weights and biases occurs. During the training process it is crucial to choose a suitable optimizer. Generally, image classification is attempted with gradient descent methods (GDs) [3, 4]. Based on how well the network performed on the input data, the GD updates the parameters of the network according to a cost function. The aim is to reach the minimum of that cost function by taking small steps in the direction of the negative gradient. One drawback is their tendency to converge easily towards local optima [5]. SGD [6], one of the most favored gradient-based algorithms for training NNs, also suffers from early convergence. To avert premature convergence, a wide range of adaptive gradient algorithms have been developed that adjust the learning rate in efficient ways, a notable example being Adam. The issue of premature convergence has been tackled in past literature by trying to improve the global search of gradient descent methods [5, 7]. All of the proposed
solutions in these respective papers focus on hybridizing the very effective convergence speed of gradient descent with the gradient-less global search of meta-heuristic optimization algorithms. This paper aims to expand on this method of training neural networks by hybridizing GWO [2, 8] with SGD to minimize the loss. Elements from genetic algorithms (GA) [9] are also used to diversify GWO.
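To make the convolution operation described above concrete, the following is a minimal sketch of a single-channel 2D convolution producing one feature map; the image size and Sobel-like kernel are illustrative choices, and, as in most CNN frameworks, kernel flipping is omitted (i.e., cross-correlation is computed).

```
import numpy as np

def conv2d(f, h):
    """Valid 2D convolution: slide kernel h over image f to produce feature map g."""
    kh, kw = h.shape
    out_h = f.shape[0] - kh + 1
    out_w = f.shape[1] - kw + 1
    g = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            g[i, j] = np.sum(f[i:i + kh, j:j + kw] * h)
    return g

image = np.random.rand(32, 32)          # one channel of a CIFAR-10-sized input
kernel = np.array([[1., 0., -1.],
                   [2., 0., -2.],
                   [1., 0., -1.]])      # Sobel-like edge filter (illustrative)
feature_map = conv2d(image, kernel)     # shape (30, 30)
```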
### _Training Algorithms_
Metaheuristic algorithms are a rapidly expanding field with many variants to fit different problems. The No Free Lunch theorem [10] has logically proven that there is no metaheuristic algorithm that can solve all optimization problems. Therefore, the exploration of algorithm variants and hybridizations may prove fruitful in the problem of finding the optimal set of weights for a CNN working on CIFAR-10. As a branch of metaheuristic algorithms, swarm intelligence algorithms have proven quite successful in such a task [11]. Therefore, the decision was made to apply the SL-PSO and GWO algorithms to this problem, to explore their efficacy in finding an optimal solution. Being a variant of the original Particle Swarm Optimization algorithm (PSO) [12], Social Learning Particle Swarm Optimization (SL-PSO) was proposed by Cheng and Jin [13] to incorporate the social learning aspects of animals into ordinary PSO. This was done with the purpose of changing the trial-and-error process of social learning to a more social process where individuals learn from all higher-performing individuals (demonstrators) in their respective swarm. SL-PSO implements swarm sorting and behavior learning [13], where the swarm is sorted by fitness and every particle but the best one learns from its respective demonstrators. Each particle apart from the best is considered an imitator. Furthermore, each particle also gets updated by a random inertia component, which adds diversity to the swarm and improves on the global search capabilities of the standard PSO algorithm [14]. SL-PSO was also designed to be able to efficiently optimize problems of a higher dimensionality [13], which made it a good candidate to be used as a baseline for a network with a large number of parameters.
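As a rough illustration of the social learning described above, the sketch below implements a simplified SL-PSO-style update: every particle except the best imitates a randomly chosen better-performing demonstrator, with a small attraction toward the swarm mean acting as the collective component. The coefficient choices and the per-particle (rather than per-dimension) demonstrator selection are simplifications of the original formulation and should be treated as illustrative only.

```
import numpy as np

def slpso_step(X, dX, fitness, epsilon=0.01):
    """One simplified SL-PSO-style update. X: (n, d) positions, dX: previous
    deltas, fitness: (n,) values to minimise. All but the best particle imitate."""
    order = np.argsort(fitness)            # best particle first
    X, dX = X[order].copy(), dX[order].copy()
    mean = X.mean(axis=0)                  # swarm mean (collective component)
    n = X.shape[0]
    for i in range(1, n):                  # particle 0 (the best) is not updated
        demo = np.random.randint(0, i)     # a random better-performing demonstrator
        r1, r2, r3 = np.random.rand(3)
        dX[i] = r1 * dX[i] + r2 * (X[demo] - X[i]) + r3 * epsilon * (mean - X[i])
        X[i] = X[i] + dX[i]
    return X, dX
```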
The GWO algorithm [2] was proposed by Mirjalili and took inspiration from the hierarchical nature through which wolves interact and hunt in the wild. This hierarchy consists of alpha, beta and delta (dominant) wolves and omega (non-dominant) wolves, classified from highest fitness to lowest, respectively. This is mathematically represented in the algorithm through a method similar to the social aspect of SL-PSO, where the non-dominant omega wolves learn from the mean value of the dominant wolves. Further explanation of the GWO implementation can be seen in Section 4. Due to the similarities between SL-PSO and GWO in their social learning, a comparison of the two provides useful insight into their relative impacts on the optimization problem. GWO is described as having a highly efficient global search and a good ability to converge [2], whereas SL-PSO was lacking in terms of its exploitation ability [14]. Therefore, GWO is a logical candidate to use as an implementation in this research.
### _Population Meta-heuristics for Neural Networks_
As previously mentioned, there are a variety of metaheuristic algorithms to choose from. A study found that population-based metaheuristic algorithms can be more efficient than exact optimization algorithms, as the latter struggle to solve problems in a high-dimensional space [15]. Furthermore, the authors argue that exact algorithms have to make different assumptions in order to perform well on each problem. Population-based algorithms step away from this issue, as they usually make far fewer assumptions, or no assumptions at all, about the problem. As mentioned in the previous section, they can search large decision spaces and are not prone to getting stuck in local minima, unlike gradient descent methods. A survey evaluated the performance of training feedforward neural networks with metaheuristic algorithms [16]. The authors analyzed various algorithms, and in some cases they matched or even outperformed the gradient-based methods on low-dimensional neural networks [17, 18]. Many population-based metaheuristics have strong exploration and exploitation, whereas gradient-based methods only focus on exploitation. However, the problem comes when metaheuristic algorithms are to be used for training deep neural networks, which increases the number of decision variables from dozens up to even millions when considering modern architectures. A large part of the literature regarding the optimization of deep neural networks, and specifically CNNs, focuses on optimizing the neural network architecture [19, 20]. This approach generates optimal networks for a given problem; however, this research aims to design an algorithm which trains the weights of the network. There were multiple instances in the literature [5, 7, 11] of different researchers combining gradient-based methods with metaheuristic algorithms. This was done to train neural networks by exchanging information between the two types. For instance, Albeahdili et al. [11] created a hybrid of PSO and GA, and combined it with SGD, which demonstrated highly competitive results for CIFAR-10. The approach focuses on training the metaheuristic algorithm and then applying SGD on those individuals. An issue with running multiple SGD algorithms is that, on more complex networks, it takes a very long time to run in parallel and is computationally expensive. However, this approach has a strong benefit: the particles explore the space and then gradient descent is applied on them to exploit the space with various parameters. Furthermore, there is a huge advantage in using GAs to share information between different individuals, as it helps explore the search space more and helps guide the particles away from local minima [21]. The literature provided strong evidence that combining some population-based algorithms with genetic modifications yields better performance [11, 22, 23]. Returning to [11], one drawback, as identified in the meta-analysis by [16], is that even though PSO has a fast convergence rate, it has a low ability to find global minima. On the contrary, as mentioned
in the previous section, GWO has a very high convergence rate as well as the ability to find global optima in high dimensions. Furthermore, as [22] suggests, combining GWO with GA can improve on some of the shortcomings of the algorithm.
### _Summary_
To conclude, this research proposes an algorithm which takes advantage of combining a metaheuristic algorithm with a gradient descent method. This allows it to explore the space, as well as exploit it better. Due to the high dimensions presented by deep neural network models, the exploitation part will become increasingly more difficult for metaheuristic algorithms as the model complexity increases. This algorithm exchanges information between the two training types in the form of individuals. To differ from the literature, this research proposes a hybrid between GWO and GA, and combines it with a gradient descent method, such as SGD. The choice to use SGD as opposed to Adam is because it converges faster, and the adaptive feature of Adam will not work well when the network weights are constantly being switched [24]. This proposed method is novel because there have been no attempts to optimise deep neural network weights with this hybrid algorithm. To avoid premature convergence, GA modifications were proposed [11, 22], using crossover and mutation genetic operators only when the fitness stagnates.
## III Selected Architecture and Justification
CNN architecture is another popular topic that has been in the limelight when it comes to optimization as it can be a long and difficult process to find the correct architecture that suits a specific problem [25]. After having reviewed a series of popular architectures [25, 26], it was found that even among the smallest ones, the number of features still ranged in the hundreds of thousands. This network size is very costly for the optimization problem of finding an optimal set of weights, as each feature is another decision variable that would need to be trained. To combat this computational difficulty, a smaller, custom network architecture was used for training against the CIFAR-10 dataset instead. This network has a total of 58,685 features. An illustration of the architecture can be seen in Fig. 1. It is also worth mentioning that the option of transfer learning and block training was considered, through which a pre-trained model would be used to bolster the accuracy of the model. A few unfrozen layers would be appended to the frozen model for the training algorithm to optimize. These solutions would remove the issue of high-dimensionality, however, it would go against the purpose of this research by not using the global search capabilities of GWO in a meaningful way.
## IV Training Algorithm
The implemented algorithm, GMW-SGD, is displayed in the pseudocode in Algorithm 1. The proposed algorithm is a hybrid between GWO [2] and SGD, with occasional mutation or crossover.
Initially, the population of individuals is initialized. As defined in [2], the individuals depict wolves in a pack. Each of them represents the parameters of the CNN, i.e., its weights and biases [27]; hence, an individual is a flattened representation of the network. The encoding strategy becomes difficult for large CNNs with thousands of parameters and many layers, so the simplified encoding of an individual is represented as follows:
\[ind(i)=[w_{0},...,w_{Nw},b_{0},...,b_{Nb}], \tag{1}\]
\[population=[ind(1);...;ind(Np)], \tag{2}\]
where \(Np\) is the size of the population, \(i\) = 1,..., \(Np\), \(Nw\) is the number of weights and \(Nb\) is the number of biases in the network. The fitness of the individuals is calculated with the Categorical Cross-Entropy loss function shown in Eq. (9), and the objective is to minimize it. As per the original GWO algorithm, there is a social hierarchy of grey wolves represented mathematically that the algorithm follows. The three best individuals are saved in descending fitness as the Alpha (\(\alpha\)), Beta (\(\beta\)) and Delta (\(\delta\)) wolves, respectively, and they represent the \(dominant\) wolves. The rest of the individuals are saved as Omega (\(\omega\)) wolves.
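A minimal sketch of the encoding in Eqs. (1)-(2): all weight and bias arrays are flattened into a single real-valued vector per individual, and can be restored for evaluation; the layer shapes below are purely illustrative.

```
import numpy as np

# illustrative weight/bias shapes of a small network (not the actual architecture)
shapes = [(27, 16), (16,), (400, 10), (10,)]

def flatten(params):
    """Concatenate all weight/bias arrays into one individual (Eq. 1)."""
    return np.concatenate([p.ravel() for p in params])

def unflatten(ind):
    """Restore an individual back into per-layer arrays for evaluation."""
    params, offset = [], 0
    for s in shapes:
        size = int(np.prod(s))
        params.append(ind[offset:offset + size].reshape(s))
        offset += size
    return params

Np = 20                                          # population size
dim = sum(int(np.prod(s)) for s in shapes)
population = np.random.randn(Np, dim) * 0.1      # Eq. (2): one row per wolf
```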
### _Grey Wolf Optimization Algorithm_
The basis of the GWO algorithm mimics the stages of grey wolf hunting in the wild. The algorithm is split into different stages: encircling the prey, hunting and attacking the prey.
During each iteration of the algorithm, all of the \(omega\) wolves change their positions according to the positions of the \(dominant\) wolves. The assumption is that those three will have better knowledge about the potential location of prey, i.e., the optimal solution. Therefore, the rest of the pack diverges to search for the prey, with the use of \(\vec{A}\) and \(\vec{C}\) as shown in Eqs. (6) and (7), and then converges to attack the prey when it is found. The locations of the \(omega\) individuals are updated by the following formulas:
\[\vec{D}_{\alpha}=|\vec{C_{1}}\cdot\vec{X}_{\alpha}(t)-\vec{X}(t)|, \tag{3a}\] \[\vec{D}_{\beta}=|\vec{C_{2}}\cdot\vec{X}_{\beta}(t)-\vec{X}(t)|,\] (3b) \[\vec{D}_{\delta}=|\vec{C_{3}}\cdot\vec{X}_{\delta}(t)-\vec{X}(t)|,\] (3c) \[\vec{X}_{m}(t+1)=\vec{X}_{l}(t)-A_{m}\cdot D_{l}, \tag{4}\]
\[\vec{X}(t+1)=\frac{(\vec{X}_{1}+\vec{X}_{2}+\vec{X}_{3})}{3}, \tag{5}\]
where \(t\) signifies the current generation, \(\vec{X_{l}}\) is the position of the prey indicated by \(l\) = {\(\alpha\), \(\beta\), \(\delta\)}. (\(\vec{X}\)) is the new location of the current wolf and \(m\) = {1,2,3}. Finally, \(\vec{A}\) and \(\vec{C}\) are constant vectors calculated as follows:
\[\vec{A}=2\vec{a}\vec{r_{1}}-\vec{a}, \tag{6}\]
\[\vec{C}=2\vec{r_{2}}, \tag{7}\]
where \(r_{1}\),\(r_{2}\) are randomly generated vectors in range [0,1] and the component \(\vec{a}\) is linearly decreased over the generations from 2 to 0.
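The following sketch implements the position update of Eqs. (3)-(7) for a set of wolves, assuming `dominant` stacks the alpha, beta and delta positions; initialization and fitness evaluation are omitted.

```
import numpy as np

def gwo_update(X, dominant, a):
    """Move every wolf toward the mean of alpha/beta/delta (Eqs. 3-5).
    X: (n, d) positions; dominant: (3, d); a: scalar decreasing from 2 to 0."""
    n, d = X.shape
    X_new = np.empty_like(X)
    for i in range(n):
        candidates = []
        for l in range(3):                          # l indexes alpha, beta, delta
            r1, r2 = np.random.rand(d), np.random.rand(d)
            A = 2 * a * r1 - a                      # Eq. (6)
            C = 2 * r2                              # Eq. (7)
            D = np.abs(C * dominant[l] - X[i])      # Eq. (3)
            candidates.append(dominant[l] - A * D)  # Eq. (4)
        X_new[i] = np.mean(candidates, axis=0)      # Eq. (5)
    return X_new
```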
Despite the merits of GWO mentioned in II Related Work, the algorithm is prone to converge prematurely and get stuck in local minima due to the high dimensionality. To tackle the issue, GA modifications will be applied.
### _Genetic Modifications: Hybrid of GWO and GA_
By utilizing genetic crossover and mutation [21], individuals can be modified when they stagnate. Therefore, the genetic algorithm is set to run only when \(patience\) is reached, due to the fitness of the \(omega\) wolves not improving. There is also a probability of whether the population will be mutated or crossed over with \(p_{mut}\).
Firstly, to follow the algorithm, only the \(omega\) wolves are mutated, as the \(dominant\) wolves know where the prey is and they should not be altered. Each individual is mutated by the following formula:
\[p^{\prime}=\begin{cases}p+\left((2u)^{\frac{1}{1+\eta}}-1\right)\left(p-x_{i}^{(L)}\right),&u\leq 0.5\\ p+\left(1-(2(1-u))^{\frac{1}{1+\eta}}\right)\left(x_{i}^{(U)}-p\right),&u>0.5\end{cases} \tag{8}\]
where \(u\) is randomly generated in the range from 0 to 1, \(\eta\) is a constant in the range [20,100] determining how similar the output will be to the parent, and the mutated individual (\(p^{\prime}\)) is bounded by \(x_{i}^{(L)}\) and \(x_{i}^{(U)}\). The worse individuals are set to be mutated more, as they are performing worse.
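A vectorised sketch of the polynomial mutation in Eq. (8); the bounds \(x^{(L)}\), \(x^{(U)}\) and the value of \(\eta\) are illustrative.

```
import numpy as np

def poly_mutate(p, eta=20.0, x_low=-1.0, x_up=1.0):
    """Polynomial mutation (Eq. 8) applied element-wise to an individual p."""
    u = np.random.rand(*p.shape)
    low = p + ((2 * u) ** (1.0 / (1 + eta)) - 1) * (p - x_low)        # u <= 0.5 branch
    high = p + (1 - (2 * (1 - u)) ** (1.0 / (1 + eta))) * (x_up - p)  # u > 0.5 branch
    return np.where(u <= 0.5, low, high)
```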
Secondly, the crossover is carried out between all the \(omega\) wolves, where each is combined with a random \(dominant\) wolf, as depicted in Figure 2. Similar to mutation, the rate of how many decision variables are modified decreases for better individuals, so that the better individuals are not changed as much. The offspring take some parts from both of the parents, and in turn the \(omega\) wolves take some of the decision variables from the \(dominant\) wolf, which can help guide them better.
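A sketch of the crossover step just described: a fraction of an omega wolf's decision variables, controlled by `rate`, is copied from a randomly chosen dominant wolf, so that better-ranked individuals can be given a smaller `rate` and are changed less. The value of `rate` is an assumption for illustration.

```
import numpy as np

def crossover(omega, dominant, rate):
    """Uniform crossover between an omega wolf and a random dominant wolf.
    rate: probability that a decision variable is taken from the dominant parent."""
    parent = dominant[np.random.randint(len(dominant))]
    mask = np.random.rand(omega.shape[0]) < rate
    child = omega.copy()
    child[mask] = parent[mask]
    return child
```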
To further tackle the issue of evolutionary algorithms struggling to exploit the high dimensional decision space, the above-mentioned metaheuristic algorithm was combined with a gradient descent method to help it exploit the space after exploration.
### _Stochastic Gradient Descent_
Finally, this algorithm utilizes SGD, where it is run on the three best individuals (\(\alpha,\beta,\delta\)) after all the individuals are updated in the metaheuristic step. It fits naturally with the GWO algorithm as the best individuals are used to guide the rest of the pack. This simulates the \(dominant\) wolves hunting the prey, as these ones are in the best locations out of the pack. The loss function used for the training was Categorical Cross Entropy (\(CE\)), as shown below:
\[CE=-\log\left(\frac{e^{s_{p}}}{\sum_{j}^{C}e^{s_{j}}}\right) \tag{9}\]
where \(C\) is the number of classes, \(s_{j}\) is the score for class \(j\), and \(p\) is the correct/positive class.
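For reference, a minimal sketch of Eq. (9) computed from raw class scores, with the usual maximum subtraction added for numerical stability.

```
import numpy as np

def categorical_ce(scores, p):
    """Eq. (9): cross entropy of raw scores s for the correct class index p."""
    s = scores - scores.max()               # subtract max for numerical stability
    return -np.log(np.exp(s[p]) / np.exp(s).sum())

print(categorical_ce(np.array([2.0, 0.5, -1.0]), p=0))  # small loss: class 0 dominates
```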
After the \(dominant\) wolves are updated, the rest of the wolves will still explore the space, but over time will start moving towards the best ones. As they get closer, they might find another prey (i.e., a better minimum). The final optimization feature added to the algorithm is a gradual decrease of the learning rate, so that SGD takes smaller steps as it approaches the minimum.
The GMW-SGD combines the strong exploration introduced by GWO algorithm and is further enhanced by the genetic modifications to tackle some of the original GWO's shortcomings. Finally, the information is exchanged between the evolutionary and the gradient descent part through the
Fig. 1: Selected Neural Network Architecture
Fig. 2: Crossover between \(omega\) and \(dominant\) wolf
\(dominant\) wolves, for which SGD is used to help with the exploitation of the best-found solutions.
```
0: Maximum number of evolutions (\(NEvol\)), maximum number of generations (\(NGen\)), maximum number of epochs (\(NEpoch\)). The population (\(P\)) of \(Np\) individuals (\(i\)), with the top three individuals (\(alpha(i)\), \(beta(i)\) and \(delta(i)\)) called \(dominant\) and the rest \(omega\). Patience (\(pat\)) defines the period before genetic modifications, probability of mutation/crossover happening (\(p_{mut}\)), neural network model (\(m\)), with a learning rate (\(lr\)).
0: The best individual, \(alpha(i)\).
1: Initialise random population \(P\) with \(Np\) individuals according to Eqs. (1) and (2).
2: Initialise the constants \(A\) and \(C\) for the \(i\)-th individual according to Eqs. (6) and (7), respectively.
3: Evaluate the fitness of each \(i\) in \(P\) according to Eq. (9).
4: Sort the individuals based on their fitness and select \(dominant\) and \(omega\).
5: while \(t\leq NEvol\) do
6:   for \(g\) in \(NGen\) do
7:     for individual \(j\) in \(omega\) do
8:       Update \(j\) based on Eqs. (3)-(5).
9:     end for
10:    Update the constants \(A\) and \(C\) for the \(i\)-th individual according to Eqs. (6) and (7), respectively.
11:    Repeat Step 3.
12:    Repeat Step 4.
13:    if the average loss across past generations did not improve for \(pat\) then
14:      for individual \(j\) in \(omega\) do
15:        Randomly generate \(r\) between [0,1].
16:        if \(r<p_{mut}\) then
17:          Mutate \(j\) according to Eq. (8).
18:        else
19:          Crossover \(j\) with a random \(dominant(i)\) according to Fig. 2.
20:        end if
21:      end for
22:    end if
23:  end for
24:  for individual \(j\) in \(dominant\) do
25:    Update \(m\) with the weights from \(j\).
26:    for \(e\) in \(NEpoch\) do
27:      Train and validate \(m\) with SGD.
28:      Update \(j\) with the new weights generated.
29:    end for
30:  end for
31:  if \(t\%2==0\) then
32:    \(lr\gets lr\cdot 0.1\)
33:  end if
34: end while
```
**Algorithm 1** Pseudocode of GMW-SGD
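To summarise the control flow of Algorithm 1, the following schematic sketch ties together the helper routines illustrated earlier (`gwo_update`, `poly_mutate`, `crossover`); `fitness_fn` and `sgd_fn` are placeholders for the network evaluation and the SGD training of the dominant wolves, so this should be read as an outline under those assumptions rather than a complete implementation.

```
import numpy as np

def gmw_sgd(population, fitness_fn, sgd_fn, n_evol=10, n_gen=5,
            patience=3, p_mut=0.5, lr=0.01):
    """Schematic GMW-SGD loop (Algorithm 1). fitness_fn maps an individual to a
    loss; sgd_fn(individual, lr) returns the individual after a few SGD epochs."""
    history = []
    for t in range(n_evol):
        for g in range(n_gen):
            fit = np.array([fitness_fn(ind) for ind in population])
            order = np.argsort(fit)                   # top three are the dominant wolves
            dominant, a = population[order[:3]], 2 * (1 - t / n_evol)
            population[order[3:]] = gwo_update(population[order[3:]], dominant, a)
            history.append(fit.min())
            # stagnation check: apply GA operators when fitness has not improved
            if len(history) > patience and min(history[-patience:]) >= history[-patience - 1]:
                for i in order[3:]:
                    if np.random.rand() < p_mut:
                        population[i] = poly_mutate(population[i])
                    else:
                        population[i] = crossover(population[i], dominant, rate=0.3)
        for i in order[:3]:                           # exploit the dominant wolves with SGD
            population[i] = sgd_fn(population[i], lr)
        if t % 2 == 0:                                # learning-rate decay, as in Algorithm 1
            lr *= 0.1
    return population[np.argsort([fitness_fn(i) for i in population])[0]]
```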
## V Results
The performance of the proposed algorithm was tested by benchmarking it with two other training algorithms. The selected algorithms are:
1. SL-PSO - a population-based algorithm. It is closely related to the metaheuristic part of the algorithm.
2. SGD - a gradient-based optimizer with a learning rate scheduler.
The parameters that were used during the training process are shown in Table I.
The algorithms were each run once, independently, on CIFAR-10, and the results were added to Table II and Fig. 3. Both of the metaheuristic algorithms were set to run for 2160 function evaluations. However, SGD ran for fewer, as it quickly converged and got stuck in a minimum. Hence, it was stopped early once it had not improved for about 20 epochs, under the assumption that the model would not change over time (and that, if it did, it would overfit on the training dataset).
The performance of the models was evaluated by testing the accuracy on both training and test sets from CIFAR-10. Furthermore, CE was used to evaluate the loss on the test dataset.
After analyzing the results, the following conclusions can be drawn. The first major difference that can be noticed in Fig. 3 is between SL-PSO and SGD, and it outlines the major difference between the two types of algorithms. SGD instantly exploits the decision space it starts in and successfully updates the weights to quickly reach higher
accuracies. On the other hand, SL-PSO steadily increases over time, but the improvements it makes are extremely small.
With GMW-SGD, it can be seen that the algorithm inherits trends from both types of algorithms. Unlike pure gradient descent methods, it takes longer to converge, and the individuals explore the space throughout the training to find the best solutions to exploit. It can be seen that it started converging about halfway through the training, and even then it still improved by very little. Furthermore, it proves to be much better suited for this type of training than a standard population-based metaheuristic algorithm. SL-PSO might never reach high accuracy due to converging early, and even if it did, it would take a very long time.
Even though the training accuracy of GMW-SGD might not reach as high as SGD's, Table II suggests that SGD overfits substantially on the training data. The differences in the classifications on the test dataset were comparable between SGD and GMW-SGD, with the former performing just slightly better. Furthermore, the loss of GMW-SGD on the test dataset was smaller than that of the model trained with SGD. These results suggest that the proposed algorithm can perform almost as well, if not just as well, as a standard gradient-based method. As Table II suggests, the trained model transfers extremely well to the test dataset, thus not over-fitting at all.
The proposed algorithm takes the best of both exploitation and exploration, and for that reason the performance is very similar. It was expected to perform better than SGD, but it could only match its metrics. One possible explanation why neither of the algorithms can perform better is the architecture being used. As mentioned previously, a simple network was used in order to keep the dimensionality small for the metaheuristic algorithms. Most modern solutions use deeper and more complex neural networks for tasks such as this, and the proposed neural network does not have the capacity to perform as well. Hence, it may limit the performance of both SGD and GMW-SGD. Secondly, even though the dimensions of the network are relatively small by deep learning standards (compared with EfficientNet or MobileNet, with over a million parameters), they are extremely high for metaheuristic algorithms, as they greatly increase the number of decision variables being changed. For this reason, the algorithm might not be able to perform as well, and therefore might predominantly rely on SGD. If the decision space was smaller, the algorithm could potentially benefit more from the exploration of other individuals.
## VI Bi-Objective Problem
Training as a bi-objective problem via non-dominated sorting has also been demonstrated in this paper. Deb et al. [28] suggest the Non-dominated Sorting Genetic Algorithm II (NSGA-II), which diminishes the difficulties of the regular NSGA, such as its high computational complexity and non-elitist approach. In the proposed approach, once the offspring population has been created, the algorithm sorts both parent and offspring populations into different non-dominated fronts. This strategy is used to determine which individuals are to be chosen for the next generation. This bi-objective training problem considered two objectives: the maximisation of accuracy and the minimisation of the Gaussian Regularizer (the sum of the squares of the weights). The parameters used in this experiment remained the same as in the previous section; see Table I.
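For reference, a compact sketch of the fast non-dominated sorting step at the core of NSGA-II; here the two objectives to be minimised would be the negated accuracy and the Gaussian Regularizer.

```
import numpy as np

def non_dominated_fronts(objs):
    """Sort points into non-dominated fronts. objs: (n, 2) array of objectives
    to minimise, e.g. columns (-accuracy, sum of squared weights)."""
    n = objs.shape[0]
    dominates = lambda a, b: np.all(objs[a] <= objs[b]) and np.any(objs[a] < objs[b])
    dominated_by = [set() for _ in range(n)]   # who each point dominates
    count = np.zeros(n, dtype=int)             # how many points dominate each point
    for a in range(n):
        for b in range(n):
            if dominates(a, b):
                dominated_by[a].add(b)
            elif dominates(b, a):
                count[a] += 1
    fronts, current = [], [i for i in range(n) if count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for a in current:
            for b in dominated_by[a]:
                count[b] -= 1
                if count[b] == 0:
                    nxt.append(b)
        current = nxt
    return fronts
```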
The overall observation is that the GMW-SGD optimizer did not perform successfully when training as a bi-objective problem, for several reasons. The image classification problem addressed in this study prioritises high accuracy over the total sum of the network weights. The non-dominated sorting algorithm, due to its nature, would discard individuals with high accuracy but a high Gaussian regularisation value, prioritising less relevant individuals. GWO is a social learning algorithm that relies on the 3 most successful individuals who are 'leading' the pack. In this case, those individuals were not the best representation of the most successful wolves, and were often individuals with a low Gaussian Regulariser value converging in a local minimum. This type of selection prevented the individuals from learning from the better ones and prevented the algorithm from reaching a global minimum, instead converging in one of the local minima. In essence, this optimization problem is not suited for bi
Fig. 3: Convergence of best training accuracies on CIFAR-10
objective optimization and performs significantly better as a single-objective optimization problem. Table III displays a set of individuals with the highest crowding distances, which are the most representative of the Pareto-optimal front.
## VII Conclusion and Future work
Even though there exist a number of classical metaheuristic optimization techniques, it can be concluded that they do not converge as efficiently as gradient-based methods. However, with ample computational resources, there is potential for them to reach the same optima. In this work, a hybrid evolutionary algorithm, combined with SGD, was proposed, utilizing a novel implementation of mutation that leads to heterogeneous behaviour of particles across the space. GMW-SGD brings together both exploration and exploitation and was able to perform better than the baseline meta-heuristic algorithm, while also matching the accuracy of standard SGD. On top of this, GMW-SGD provides the added benefit of an improved population-based global search, which improves the avoidance of local minima.
Some future improvements to this research can be made by running each algorithm for a larger number of function evaluations, as the meta-heuristic algorithms would then be given more chances to demonstrate their global search capabilities once the gradient-descent method has converged to an optimum.
Further randomness could be implemented in addition to GA mutation and crossover such that the steps taken by the omega wolves in the direction of the mean of the dominant wolves are more varied. This should allow for further exploration in a less systematic way compared to the standard implementation of GMW-SGD.
To tackle the issue of optimizing weights for deep neural networks using metaheuristic algorithms, the training process could be broken down into training blocks. That means that the training process would consist of training weights per layer, rather than the network as a whole. That property would allow GMW-SGD to train on deep neural networks while still maintaining low dimensionality.
|
2305.14642 | Newton-Cotes Graph Neural Networks: On the Time Evolution of Dynamic
Systems | Reasoning system dynamics is one of the most important analytical approaches
for many scientific studies. With the initial state of a system as input, the
recent graph neural networks (GNNs)-based methods are capable of predicting the
future state distant in time with high accuracy. Although these methods have
diverse designs in modeling the coordinates and interacting forces of the
system, we show that they actually share a common paradigm that learns the
integration of the velocity over the interval between the initial and terminal
coordinates. However, their integrand is constant w.r.t. time. Inspired by this
observation, we propose a new approach to predict the integration based on
several velocity estimations with Newton-Cotes formulas and prove its
effectiveness theoretically. Extensive experiments on several benchmarks
empirically demonstrate consistent and significant improvement compared with
the state-of-the-art methods. | Lingbing Guo, Weiqing Wang, Zhuo Chen, Ningyu Zhang, Zequn Sun, Yixuan Lai, Qiang Zhang, Huajun Chen | 2023-05-24T02:23:00Z | http://arxiv.org/abs/2305.14642v3 | # Newton-Cotes Graph Neural Networks: On the Time Evolution of Dynamic Systems
###### Abstract
Reasoning system dynamics is one of the most important analytical approaches for many scientific studies. With the initial state of a system as input, the recent graph neural networks (GNNs)-based methods are capable of predicting the future state distant in time with high accuracy. Although these methods have diverse designs in modeling the coordinates and interacting forces of the system, we show that they actually share a common paradigm that learns the integration of the velocity over the interval between the initial and terminal coordinates. However, their integrand is constant w.r.t. time. Inspired by this observation, we propose a new approach to predict the integration based on several velocity estimations with Newton-Cotes formulas and prove its effectiveness theoretically. Extensive experiments on several benchmarks empirically demonstrate consistent and significant improvement compared with the state-of-the-art methods.
## 1 Introduction
Reasoning the time evolution of dynamic systems has been a long-term challenge for hundreds of years [1; 2]. Although advances in computer manufacturing nowadays make it possible to simulate long trajectories of complicated systems constrained by multiple force laws, the computational cost still imposes a heavy burden on the scientific communities [3; 4; 5; 6; 7; 8].
Recent graph neural networks (GNNs) [9; 10; 11; 12; 13]-based methods provide an alternative solution to predict the future states directly with only the initial state as input [14; 15; 16; 17; 18; 19; 20]. Take molecular dynamics (MD) [4; 21; 22; 23; 24] as an example: the atoms in a molecule can be regarded as nodes with different labels, and thus can be encoded by a GNN. As the input and the target are atomic coordinates, the learning problem becomes a complicated regression task of predicting the coordinate at each dimension of each atom. Furthermore, some directional information (e.g., velocities or forces) is also an important feature and cannot be directly leveraged, especially considering the rotation and translation of molecules in the system. Therefore, the existing works put great effort into the preservation of physical symmetries, e.g., transformation equivariance and geometric constraints [18; 19]. The experimental results on a variety of benchmarks also demonstrate the advantages of their methods.
Without loss of generality, the main target of the existing works is to predict the future coordinate \(\mathbf{x}^{T}\) distant to the given initial coordinate \(\mathbf{x}^{0}\), with \(\mathbf{x}^{0}\) and the initial velocity \(\mathbf{v}^{0}\) as input. Then, the
predicted coordinate \(\hat{\mathbf{x}}^{T}\) can be written as:
\[\hat{\mathbf{x}}^{T}=\mathbf{x}^{0}+\hat{\mathbf{v}}^{0}T. \tag{1}\]
For a complicated system comprising multiple particles, predicting the velocity term \(\hat{\mathbf{v}}^{0}\) rather than the coordinate \(\hat{\mathbf{x}}^{T}\) improves the robustness and performance. Specifically, by subtracting the initial coordinate \(\mathbf{x}^{0}\), the learning target can be regarded as the normalized future coordinate, i.e., \(\hat{\mathbf{v}}^{0}=(\mathbf{x}^{T}-\mathbf{x}^{0})/(T-0)\), which is why all state-of-the-art methods adopt this strategy [18; 19]. Nevertheless, the current strategy still has a significant drawback from the view of numerical integration.
As illustrated in Figure 1, suppose that the actual velocity of a particle is described by the curve \(v(t)\). To predict the future state \(\mathbf{x}^{T}\) at \(t=T\), the existing methods adopt a constant estimation approach, i.e., \(\hat{\mathbf{x}}^{T}=\mathbf{x}^{0}+\hat{\mathbf{v}}^{0}T=\mathbf{x}^{0}+ \int_{0}^{T}\hat{\mathbf{v}}^{0}dt\). Evidently, the prediction error could be significant as it purely relies on the fitness of neural models. If we alternatively choose a simple two-step estimation (i.e., the Trapezoidal rule [25]), the prediction error may be reduced to only the blue area. In other words, the model only needs to _compensate_ for the blue area.
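The following small numerical sketch makes the comparison in Figure 1 concrete for an arbitrary smooth velocity curve (the choice of \(v(t)\) is an illustrative assumption): it prints the error that a model would have to compensate for under the constant (rectangle) estimate versus the two-step Trapezoidal estimate.

```
import numpy as np

T = 1.0
v = lambda t: np.sin(3 * t) + 1.5                 # an arbitrary smooth velocity curve
t = np.linspace(0.0, T, 10001)
true_disp = np.trapz(v(t), t)                     # "ground truth" displacement
rect = v(0.0) * T                                 # constant estimate: x0 + v0 * T
trap = 0.5 * (v(0.0) + v(T)) * T                  # two-step (Trapezoidal rule) estimate
print(f"error to compensate: rectangle {abs(true_disp - rect):.4f}, "
      f"trapezoid {abs(true_disp - trap):.4f}")
```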
In this paper, we propose Newton-Cotes graph neural networks (abbr. NC) to estimate multiple velocities at different time points and compute the integration with Newton-Cotes formulas. Newton-Cotes formulas are a series of formulas for numerical integration in which the integrand (in our case, \(v(t)\)) is evaluated at equally spaced points. One of the most important characteristics of Newton-Cotes formulas is that the integration is computed by aggregating the values of \(v(t)\) at these points with a group of weights \(\{w^{0},...,w^{k}\}\), where \(k\) refers to the order of the Newton-Cotes formulas and \(\{w^{0},...,w^{k}\}\) is irrelevant to the integrand \(v(t)\) and can be pre-computed by the Lagrange interpolating polynomial [26].
We show that the existing works can be naturally derived to the basic version NC (\(k=0\)) and theoretically prove that the prediction error of this estimation, as well as the learning difficulty, will continually decrease as the estimation step \(k\) increases.
To better train a NC (k), we may also need the intermediate velocities as additional supervised data. Although these data can be readily obtained (as the sampling frequency is much higher than our requirement), we argue that the performance suffers only slightly even if we do not use them. The model is capable of learning to generate promising velocities to better match the true integration. By contrast, if we do provide these data to train a NC (k), denoted by NC\({}^{+}\) (k), we will obtain a stronger model that not only achieves high prediction accuracy in conventional tasks, but also produces highly reliable results for long-term consecutive predictions.
We conduct experiments on several datasets ranging from N-body systems to molecular dynamics and human motions [27; 28; 29], with state-of-the-art methods as baselines [17; 18; 19]. The results show that NC improves all the baseline methods with a significant margin on all types of datasets, even without any additional training data. Particularly, the improvement for the method RF [17] is greater than \(30\%\) on almost all datasets.
## 2 Related Works
In this section, we first introduce the related works using GNNs to learn system dynamics and then discuss more complicated methods that possess the equivariance properties.
### Graph Neural Networks for Reasoning Dynamics
Interaction network [30] is perhaps the first GNN model that learns to capture system dynamics. It separates the states and correlations into two different parts and employs GNNs to learn their interactions. This process is similar to some simulation tools where the coordinate and velocity/force are computed and updated in an alternating fashion. Many followers extend this idea, e.g., using
Figure 1: Illustration of different estimations. The rectangle, trapezoid, and striped areas denote the basic (used in the existing works), two-step, and true estimations, respectively. Blue denotes the error of two-step estimation that a model needs to compensate for, whereas blue \(+\) yellow denotes the error of the existing methods.
hierarchical graph convolution [9; 10; 31; 32] or auto-encoder [33; 34; 35; 36] to encode the objects, or leveraging ordinary differential equations for energy conservation [37].
The above methods usually cannot process the interactions among particles in the original space (e.g., Euclidean space). They instead propose an auxiliary interaction graph to compute the interactions, which is out of our main focus.
### Equivariant Graph Neural Networks
Recently, some methods considering physical symmetry and Euclidean equivariance have been proposed and achieved state-of-the-art performance in modeling system dynamics [14; 15; 16; 17; 18; 19; 38]. Specifically, [14; 15; 16] enforce translation equivariance, but rotation equivariance is overlooked. TFN [39] further leverages spherical filters to achieve rotation equivariance. SE(3) Transformer [29] proposes to use the Transformer [13] architecture to model 3D point clouds directly in Euclidean space. EGNN [18] simplifies the equivariant graph neural network and makes it applicable to modeling system dynamics. GMN [19] proposes to consider the physical constraints (e.g., stick and hinge sub-structures) widely existing in systems, which achieves state-of-the-art performance while maintaining the same level of computational cost. SEGNN [38] extends EGNN with steerable vectors to model the covariant information among nodes and edges.
Note that not all EGNNs are designed to reason system dynamics, but the best-performing ones (e.g., EGNN [18] and GMN [19]) in this sub-area usually regard the velocity \(\mathbf{v}\) as an important feature. These methods can be set as the backbone model in NC, and the models themselves belong to the basic NC (0).
## 3 Methodology
We start from preliminaries and then take the standard EGNN [18] as an example to illustrate how the current deep learning methods work. We show that the learning paradigm of the existing methods can be derived to the simplest form of numerical integration. Finally, we propose NC and theoretically prove its effectiveness in predicting future states.
### Preliminaries
We suppose that a system comprises \(N\) particles \(p_{1},p_{2},...,p_{N}\), with the velocities \(\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v}_{N}\) and the coordinates \(\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{N}\) as states. The particles can also carry various scalar features (e.g., mass and charge for atoms) which will be encoded as embeddings. We follow the corresponding existing works [17; 18; 19] to process them and do not discuss the details in this paper.
**Reasoning Dynamics.** Reasoning system dynamics is a classical task with a very long history [1]. One basic tool for reasoning about or simulating a system is numerical integration. Given the dynamics (e.g., Langevin dynamics, well-used in MD) and initial information, the interaction force and acceleration for each particle in the system can be estimated. However, the force is closely related to the real-time coordinate, which means that one must re-calculate the system states for every very small time step (i.e., \(dt\)) to obtain a more accurate prediction for a long time interval. For example, in MD simulation, the time step \(dt\) is usually set to \(50\) or \(100\) fs (\(1\) fs \(=10^{-15}\) second), whereas the total simulation time can be \(1\) ms (\(10^{-3}\) second). Furthermore, pursuing highly accurate results usually demands more complicated dynamics, imposing heavy burdens even for super-computers. Therefore, leveraging neural models to directly predict the future state of the system has recently gained great attention.
### Egnn
Although the existing EGNN methods have great advantage in computational cost over the conventional simulation tools, the potential prediction error that a neural model needs to compensate for is also huge. Take the standard EGNN [18] as an example, the equivariant graph convolution can be written as follows:
\[\mathbf{m}_{ij}^{0}=\phi_{e}(\mathbf{h}_{i},\mathbf{h}_{j},||\mathbf{x}_{i}^{0 }-\mathbf{x}_{j}^{0}||^{2},e_{ij}), \tag{2}\]
where the aggregation function \(\phi_{e}\) takes three types of information as input: the feature embeddings \(\mathbf{h}_{i}\) and \(\mathbf{h}_{j}\) for the input particle \(p_{i}\) and an arbitrary particle \(p_{j}\), respectively; the Euclidean distance
\(||\mathbf{x}_{i}^{0}-\mathbf{x}_{j}^{0}||^{2}\) at time \(t=0\); and the edge attribute \(e_{ij}\). Then, the predicted velocity and future coordinate can be calculated by the following equation:
\[\hat{\mathbf{v}}_{i}^{0} =\phi_{v}(\mathbf{h}_{i})\mathbf{v}_{i}^{0}+\frac{1}{N-1}\sum_{j \neq i}{(\mathbf{x}_{i}^{0}-\mathbf{x}_{j}^{0})\mathbf{m}_{ij}^{0}}, \tag{3}\] \[\hat{\mathbf{x}}_{i}^{T} =\mathbf{x}_{i}^{0}+\hat{\mathbf{v}}_{i}^{0}T, \tag{4}\]
where \(\phi_{v}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{1}\) is a learnable function. From the above equations, we can find that the predicted velocity \(\hat{\mathbf{v}}_{i}^{0}\) is also correlated with three types of information: 1) the initial velocity \(\mathbf{v}_{i}^{0}\) as a basis; 2) the pair-wise Euclidean distance \(||\mathbf{x}_{i}^{0}-\mathbf{x}_{j}^{0}||^{2}\) (i.e., \(\mathbf{m}_{ij}^{0}\)) to determine the amount of force (analogous to Coulomb's law); and 3) the pair-wise relative coordinate difference \((\mathbf{x}_{i}^{0}-\mathbf{x}_{j}^{0})\) to determine the direction. For multi-layer EGNN and other EGNN models, \(\hat{\mathbf{v}}_{i}^{0}\) is still determined by these three types of information, for which we refer interested readers to Appendix A for details.
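A minimal sketch of the data flow in Eqs. (2)-(4), with the learnable functions \(\phi_{e}\) and \(\phi_{v}\) replaced by fixed random projections purely for illustration; the feature dimension and the zero edge attribute are assumptions, so this shows the structure of the update rather than a trained model.

```
import numpy as np

rng = np.random.default_rng(0)
D = 8                                              # feature dimension (illustrative)
W_e = rng.normal(size=(2 * D + 2, 1))              # fixed stand-in for phi_e
W_v = rng.normal(size=(D, 1))                      # fixed stand-in for phi_v

def egnn_step(h, x, v, T=1.0):
    """One equivariant update (Eqs. 2-4). h: (N, D) features, x/v: (N, 3)."""
    N = h.shape[0]
    v_hat = np.zeros_like(v)
    for i in range(N):
        agg = np.zeros(3)
        for j in range(N):
            if j == i:
                continue
            dist2 = np.sum((x[i] - x[j]) ** 2)
            feat = np.concatenate([h[i], h[j], [dist2], [0.0]])   # 0.0: edge attribute e_ij
            m_ij = np.tanh(feat @ W_e)                            # Eq. (2)
            agg += (x[i] - x[j]) * m_ij
        v_hat[i] = (h[i] @ W_v) * v[i] + agg / (N - 1)            # Eq. (3)
    return x + v_hat * T, v_hat                                   # Eq. (4)

h = rng.normal(size=(5, D))
x, v = rng.normal(size=(5, 3)), rng.normal(size=(5, 3))
x_next, v_hat = egnn_step(h, x, v)
```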
**Proposition 3.1** (Linear mapping).: _The existing methods learn a linear mapping from the initial velocity \(\mathbf{v}^{0}\) to the average velocity \(\mathbf{v}^{t^{*}}\) over the interval \([0,T]\)._
Proof.: Please see Appendix B.1
As the model makes predictions only based on the state and velocity at \(t=0\), we assume that \(\mathbf{v}^{t^{*}}\) fluctuates around \(\mathbf{v}^{0}\) and follows a normal distribution denoted by \(\mathcal{N}_{NC(0)}=(\mathbf{v}^{0},\sigma_{NC(0)}^{2})\). Then, the variance term \(\sigma_{NC(0)}^{2}\) reflects how difficult it is to train a neural model on data sampled from this distribution. In other words, the larger the variance is, the worse the expected results are.
**Proposition 3.2** (Variance of \(\mathcal{N}_{NC(0)}\)).: _Assume that the average velocity \(\mathbf{v}^{t^{*}}\) over \([0,T]\) follows a normal distribution with \(\mu=\mathbf{v}^{0}\), then the variance of the distribution is:_
\[\sigma_{NC(0)}^{2}=\frac{\epsilon_{\text{NC}(0)}^{2}}{T^{2}}=\mathcal{O}(T^{2}) \tag{5}\]
Proof.: According to the definition, \(\sigma_{NC(0)}^{2}\) can be written as follows:
\[\sigma_{NC(0)}^{2} =\frac{\sum_{p}{(\mathbf{v}^{t^{*}}-\mathbf{v}^{0})^{2}}}{M}= \frac{\sum_{p}{((\mathbf{v}^{t^{*}}T-\mathbf{v}^{0}T)/T)^{2}}}{M} \tag{6}\] \[=\frac{\sum_{p}{((\mathbf{x}^{T}-\mathbf{x}^{0})-\mathbf{v}^{0}T)^{2}}}{MT^{2}}=\frac{\sum_{p}{(\int_{0}^{T}(\mathbf{v}(t)-\mathbf{v}^{0})dt)^{2}}}{MT^{2}}, \tag{7}\]
where \(M\gg N\) is the number of all samples. The term \((\mathbf{v}(t)-\mathbf{v}^{0})\) is a typical (degree 0) polynomial interpolation error and can be defined as:
\[\epsilon_{\text{IP}(0)}(t) =\mathbf{v}(t)-\mathbf{v}^{0}=\mathbf{v}(t)-P_{k}(t)\big{|}_{k=0} \tag{8}\] \[=\frac{\mathbf{v}^{(k+1)}(\xi)}{(k+1)!}(t-t_{0})(t-t_{1})...(t-t_{k})\big{|}_{k=0}=\mathbf{v}^{\prime}(\xi)t, \tag{9}\]
where \(0\leq\xi\leq T\), and the corresponding integration error is:
\[\epsilon_{\text{NC}(0)}=\int_{0}^{T}\epsilon_{\text{IP}(0)}(t)dt=\mathbf{v}^{\prime}(\xi)\int_{0}^{T}tdt=\frac{1}{2}\mathbf{v}^{\prime}(\xi)T^{2}=\mathcal{O}(T^{2}). \tag{10}\]
Therefore, we obtain the final variance:
\[\sigma_{NC(0)}^{2}=\frac{M\mathcal{O}(T^{4})}{MT^{2}}=\mathcal{O}(T^{2}), \tag{11}\]
concluding the proof.
### NC (k)
In comparison with the degree-0 polynomial approximation NC\((0)\), the higher-order NC\((k)\) is more accurate and incurs only a little extra computational cost.
Suppose that we now have \(K+1\) (usually \(K\leq 8\) due to the catastrophic Runge's phenomenon [40]) points \(\mathbf{x}_{i}^{0},\mathbf{x}_{i}^{1},...,\mathbf{x}_{i}^{K}\) equally spaced on the time interval \([0,T]\); then the prediction for the future coordinate \(\hat{\mathbf{x}}_{i}^{T}\) in Equation (4) can be re-written as:
\[\hat{\mathbf{x}}_{i}^{T}=\mathbf{x}_{i}^{0}+\frac{T}{K}\sum_{k=0}^{K}w^{k} \hat{\mathbf{v}}_{i}^{k}, \tag{12}\]
where \(\{w^{k}\}\) are the coefficients of the Newton-Cotes formulas and \(\hat{\mathbf{v}}_{i}^{k}\) denotes the predicted velocity at time \(t^{k}\). To obtain \(\hat{\mathbf{v}}_{i}^{k},k\geq 1\), we simply regard EGNN as a recurrent model [41, 42] and re-input the last output velocity and coordinate. Some popular sequence models like LSTM [43] or Transformer [13] may also be leveraged, which we leave to future work. Then, the equation for updating the predicted velocity \(\hat{\mathbf{v}}_{i}^{k}\) at \(t^{k}\) can be written as:
\[\hat{\mathbf{v}}_{i}^{k}=\phi_{v}(\mathbf{h}_{i})\mathbf{v}_{i}^{k-1}+\frac{ \sum_{j\neq i}(\mathbf{x}_{i}^{k-1}-\mathbf{x}_{j}^{k-1})\mathbf{m}_{ij}}{N-1}, \tag{13}\]
where \(\mathbf{v}_{i}^{k-1},\mathbf{x}_{i}^{k-1}\) denote the velocity and coordinate at \(t^{k-1}\), respectively. Note that Equation (13) is different from the form used in a multi-layer EGNN. The latter always uses the same input velocity and coordinate as constant features in different layers. By contrast, we first predict the next velocity and coordinate and then use them as input to obtain the new trainable velocity and coordinate. This process is more like training a language model [13, 41, 44, 43, 45].
From the interpolation theorem [26, 46], there exists a unique polynomial \(\sum_{k=0}^{K}C^{k}t^{(k)}\) of degree at most \(K\) that interpolates the \(K+1\) points, where \(C^{k}\) are the coefficients of the polynomial. To avoid ambiguity, we here use \(t^{(k)}\) to denote \(t\) raised to the power of \(k\), whereas \(t^{k}\) denotes the \(k\)-th time point. \(\sum_{k=0}^{K}C^{k}t^{(k)}\) can thus be constructed by the following equation:
\[\sum_{k=0}^{K}C^{k}t^{(k)}=[\mathbf{v}(t^{0})]+[\mathbf{v}(t^{0}),\mathbf{v}(t^{1})](t-t^{0})+...+[\mathbf{v}(t^{0}),...,\mathbf{v}(t^{K})](t-t^{0})(t-t^{1})...(t-t^{K-1}), \tag{14}\]
where \([\cdot]\) denotes the divided difference. In our case, the \(K+1\) points are equally spaced over the interval \([0,T]\). According to Newton-Cotes formulas, we will have:
\[\int_{0}^{T}\mathbf{v}(t)dt\approx\int_{0}^{T}\sum_{k=0}^{K}C^{k}t^{(k)}dt=\frac{T}{K}\sum_{k=0}^{K}w^{k}\mathbf{v}^{k}. \tag{15}\]
Particularly, \(\{w^{k}\}\) are Newton-Cotes coefficients that are irrelevant to \(\mathbf{v}(t)\). With additional observable points, the estimation for the integration can be much more accurate. The objective thus can be written as follows:
\[\operatorname*{argmin}_{\{\hat{\mathbf{v}}^{k}\}}\sum_{p}\left\|\mathbf{x}^{T}-\hat{\mathbf{x}}^{T}\right\|=\sum_{p}\left\|\mathbf{x}^{T}-\mathbf{x}^{0}-\frac{T}{K}\sum_{k=0}^{K}w^{k}\hat{\mathbf{v}}^{k}\right\|. \tag{16}\]
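The weights \(\{w^{k}\}\) in Eq. (15) can be pre-computed by integrating the Lagrange basis polynomials over the equally spaced nodes, as sketched below; the sketch recovers the Trapezoidal (\(K=1\)) and Simpson (\(K=2\)) weights and applies them as in Eq. (12), with random stand-ins for the predicted velocities.

```
import numpy as np

def newton_cotes_weights(K):
    """w_k = integral over [0, K] of the k-th Lagrange basis on nodes 0..K,
    so that int_0^T v(t) dt ~ (T/K) * sum_k w_k v(t_k)  (Eq. 15)."""
    w = np.empty(K + 1)
    for k in range(K + 1):
        roots = [j for j in range(K + 1) if j != k]
        basis = np.poly(roots) / np.prod([k - j for j in roots])  # ell_k(s)
        anti = np.polyint(basis)
        w[k] = np.polyval(anti, K) - np.polyval(anti, 0)
    return w

print(newton_cotes_weights(1))   # [0.5 0.5]           -> Trapezoidal rule
print(newton_cotes_weights(2))   # [0.333 1.333 0.333] -> Simpson's rule

T, K = 1.0, 2
v_hat = np.random.rand(K + 1, 3)                      # predicted velocities at t^0..t^K
x0 = np.zeros(3)
x_T = x0 + (T / K) * newton_cotes_weights(K) @ v_hat  # Eq. (12)
```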
**Proposition 3.3** (Variance of \(\sigma_{NC(k)}^{2}\)).: \(\forall k\in\{0,1,...\},\epsilon_{NC(k)}\geq\epsilon_{NC(k+1)}\)_, and consequently:_
\[\sigma_{NC(0)}^{2}\geq\sigma_{NC(1)}^{2}\geq...\geq\sigma_{NC(k)}^{2}. \tag{17}\]
Proof.: Please see Appendix B.2 for details. Briefly, we only need to prove that \(\epsilon_{\text{NC}(0)}\geq\epsilon_{\text{NC}(1)}\), where NC (1) is associated with Trapezoidal rule.
**Proposition 3.4** (Equivariance of NC).: _NC possesses the equivariance property if the backbone model \(\mathcal{M}\) (e.g., EGNN) in NC possesses this property._
Proof.: Please see Appendix B.3.
### NC (k) and NC\({}^{+}\) (k)
In the previous formulation, in addition to the initial and terminal points, we also need the other \(K-1\) points for supervision which yields a regularization loss:
\[\mathcal{L}_{r}=\sum_{p}\sum_{k=0}^{K}\|\mathbf{v}^{k}-\hat{\mathbf{v}}^{k}\|. \tag{18}\]
We denote NC (k) with the above loss by NC\({}^{+}\) (k). Minimizing \(\mathcal{L}_{r}\) is equivalent to minimizing the difference between the true velocity and the predicted velocity at each time point \(t^{k}\) for each particle. \(\|\cdot\|\) can be an arbitrary distance measure, e.g., the L2 distance. Although the \(K-1\) points of data can be easily accessed in practice, we argue that a loose version of NC (k) (without \(\mathcal{L}_{r}\)) can also learn a promising estimation of the integration \(\int_{0}^{T}\mathbf{v}(t)dt\). Specifically, we still use \(K+1\) points \(\{\hat{\mathbf{v}}^{k}\}\) to calculate the integration over \([0,T]\), except that the intermediate \(K-1\) points are unbounded. The model itself needs to determine the proper values of \(\{\hat{\mathbf{v}}^{k}\}\) to optimize the same objective defined in Equation 16. In Section 4.5, we design an experiment to investigate the difference between the true velocities \(\{\mathbf{v}^{k}\}\) and the predicted velocities \(\{\hat{\mathbf{v}}^{k}\}\).
### Computational Complexity and Limitations
In comparison with EGNN, NC (k) does not involve additional neural layers or parameters. Intuitively, since we recurrently use the last output of EGNN to produce new predicted velocities at the next time point, the corresponding computational cost should be \(K\) times that of the original EGNN. However, our experiment in Section 4.4 shows that the training time did not actually increase linearly w.r.t. \(K\). One reason may be that the gradient in back-propagation is still computed only once rather than \(K\) times.
We present an implementation of NC\({}^{+}\) in Algorithm 1. We first initialize all variables in the model and then recurrently feed the last output back to the backbone model to collect a series of predicted velocities. We then calculate the final predicted integration and add the initial coordinate to obtain the final output (i.e., Equation (12)). Finally, we compute the losses and update the model by back-propagation.
```
1: Input: the dataset \(\mathcal{D}\), the backbone model \(\mathcal{M}\), number of steps \(k\) for NC;
2: repeat
3:   for each batched data in the training set \((\{\boldsymbol{X}^{0},...,\boldsymbol{X}^{k}\},\{\boldsymbol{V}^{0},...,\boldsymbol{V}^{k}\})\) do
4:     \(\hat{\boldsymbol{X}}^{0}\leftarrow\boldsymbol{X}^{0},\quad\hat{\boldsymbol{V}}^{0}\leftarrow\boldsymbol{V}^{0}\);
5:     for \(i:=1\) to \(k\) do
6:       \(\hat{\boldsymbol{X}}^{i},\hat{\boldsymbol{V}}^{i}\leftarrow\mathcal{M}(\hat{\boldsymbol{X}}^{i-1},\hat{\boldsymbol{V}}^{i-1})\);
7:     end for
8:     Compute the final prediction \(\hat{\boldsymbol{X}}^{k}\) by Equation (12);
9:     Compute the main prediction loss \(\mathcal{L}_{main}\) according to Equation (16);
10:    Compute the velocity regularization loss \(\mathcal{L}_{r}\) according to Equation (18);
11:    Minimize \(\mathcal{L}_{main}\), \(\mathcal{L}_{r}\);
12:  end for
13: until the loss on the validation set converges.
```
**Algorithm 1** Newton-Cotes Graph Neural Network
## 4 Experiment
We conducted experiments on three different tasks towards reasoning system dynamics. The source code and datasets have been uploaded and will be available on GitHub.
### Settings
We selected three state-of-the-art methods as our backbone models: RF [17], EGNN [18], and GMN [19]. To ensure a fair comparison, we followed their best parameter settings (e.g., hidden size and the number of layers) to construct NC. In other words, the main settings of NC and the baselines
were identical. The results reported in the main experiments were based on a model of NC (2) for a balance of training cost and performance.
We reported the performance of NC w/ additional points (denoted by \(\mathbf{NC}^{+}\)) and w/o additional points (the relaxed version, denoted by \(\mathbf{NC}\)).
### Datasets
We leveraged three datasets targeting different aspects of dynamics reasoning to thoroughly evaluate NC. We used the N-body simulation benchmark proposed in [19]. Compared with previous ones, this benchmark has more diverse settings, e.g., the number of particles and the amount of training data. Furthermore, it also considers some physical constraints (e.g., hinges and sticks) widely existing in the real world. For molecular dynamics, we used MD17 [28], which consists of molecular dynamics simulations of eight different molecules. We followed [19] to construct the training/testing sets. We also considered reasoning about human motion. Following [19], the dataset was constructed based on \(23\) trials in the CMU Motion Capture Database [27].
As NC\({}^{+}\) requires additional supervised data, we accordingly extracted more intermediate data points between the input and target from each dataset. It is worth noting that this operation was tractable as the sampling frequency of the datasets was much higher than our requirement.
### Main Results
We reported the main experimental results w.r.t. the three datasets in Tables 1, 2, and 3, respectively. Overall, NC\({}^{+}\) significantly improved the performance of all baselines on almost all datasets and across all settings, which empirically demonstrates the generality as well as the effectiveness of the proposed method. Remarkably, NC (without extra data) achieved slightly lower results but still held a clear performance advantage over the three baselines.
Specifically, the average improvement on the N-body datasets was most significant, where we observe a nearly 20% decrease in prediction error for all three baseline methods. For example, NC (RF) and NC\({}^{+}\) (RF) greatly outperformed the original RF and even had better results than the original EGNN under some settings (e.g., 3,2,1). This phenomenon also happened on the molecular dynamics datasets,
Table 1: Prediction error (\(\times 10^{-2}\)) on the N-body dataset. The header of each column “\(p,s,h\)” denotes the scenario with \(p\) isolated particles, \(s\) sticks and \(h\) hinges. Results averaged across 3 runs.
where we find that NC\({}^{+}\) (EGNN) not only outperformed its original version, but also the original GMN and even NC\({}^{+}\) (GMN) in many settings.
### Impact of estimation step \(k\)
We conducted experiments to explore how the performance changes with respect to the estimation step \(k\). The experimental results are shown in Figure 2.
For all three baselines, the performance improved rapidly as step \(k\) grew from \(0\) to \(2\), but this advantage gradually vanished as we set larger \(k\). Interestingly, the curve of training time showed an inverse trend, where we find that the total training time of NC (5) with GMN was almost twice that of the NC (1) version. Therefore, we set \(k=2\) in previous experiments to get the best balance between performance and training cost.
By comparing the prediction errors of different baselines, we find that GMN, which considers the physical constraints (e.g., sticks and hinges), usually had better performance than the other two methods, but the gap significantly narrowed with the increase of step \(k\). For example, on the MD17 and motion datasets, EGNN obtained similar prediction errors for \(k\geq 2\) while demanding less training time. This phenomenon demonstrates that predicting the integration with multiple estimations lowers the requirement for model capacity.
Overall, learning with NC significantly reduced the prediction errors of different models. The additional training time was also acceptable in most cases.
### A Comparison between NC and NC\({}^{+}\)
The only difference between NC and NC\({}^{+}\) was the use of the regularization loss in NC\({}^{+}\) to learn the intermediate velocities. Therefore, we designed experiments to compare the intermediate results and the final predictions of NC and NC\({}^{+}\) with GMN as the backbone model.
We first tracked the average intermediate velocity prediction errors on the validation sets over training epochs. The results are shown in Figure 3, from which we observe that the prediction errors were small for both methods; in particular, NC\({}^{+}\) obtained highly accurate predictions of the intermediate velocities for different \(k\). For NC, we actually find that it implicitly learned to model
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & GNN & TFN & SE(3)-Tr. & RF & EGNN & GMN \\ \hline Raw & 67.3\({}_{\pm 1}\) & 66.9\({}_{\pm 2}\) & 60.9\({}_{\pm 0.9}\) & 197.0\({}_{\pm 10}\) & 59.1\({}_{\pm 2}\) & 43.9\({}_{\pm 1}\) \\ NC & N/A & N/A & N/A & 164.8\({}_{\pm 1}\) & 55.8\({}_{\pm 6}\) & 30.0\({}_{\pm 4}\) \\ NC\({}^{+}\) & N/A & N/A & N/A & **162.6\({}_{\pm 18}\)** & **51.3\({}_{\pm 4}\)** & **29.6\({}_{\pm 8}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Prediction error (\(\times 10^{-2}\)) on motion capture. Results averaged across 3 runs.
Figure 2: A detailed comparison of NCGNN without extra data on \(5\) datasets, w.r.t. \(k\), average of \(3\) runs, conducted on a V100. The lines with dot, circle, and triangle denote the results of GMN, EGNN, and RF, respectively. The first, second, and third rows are the results of prediction error, training time of one epoch, and total training time, respectively.
the intermediate velocities more or less, even though its prediction errors gradually improved or dramatically fluctuated during training. This observation empirically demonstrates the effectiveness of the intermediate velocities in predicting the final state of the system. We also reported the results on MD17 and N-body datasets in Appendix C.
We then visualized the intermediate velocities and the final predictions in Figure 4. The intermediate velocities were visualized by adding the velocity to the corresponding coordinate at the intermediate time point. From Figure 4, we can find that NC\({}^{+}\) (green ones) learned very accurate predictions, with only a few nodes not overlapping the target nodes (red ones). The relaxed version NC learned good predictions for the backbone part, but failed on the limbs (especially the legs). The motion of the main backbone was usually stable, while that of the limbs involved swinging and twisting. We also conducted experiments on the other two types of datasets; please see Figure 8 in Appendix C.
### On the Time Evolution of Dynamic Systems
In previous experiments, we showed that NC\({}^{+}\) was capable of learning highly accurate predictions of intermediate velocities. This inspired us to realize the full potential of NC\({}^{+}\) by producing a series of predictions given only the initial state as input. For each time point (including the intermediate time points), we used the most recent \(k+1\) points of averaged data as input to predict the next velocity and coordinate. This strategy produced much more stable predictions than the baseline NC (0) - directly using the last output as input to predict the next state.
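A minimal sketch of this rollout, under the assumption that "the most recent \(k+1\) points of averaged data" means averaging the latest \(k+1\) predicted states, is:

```python
import torch

def consecutive_predict(model, X0, V0, k=2, num_steps=50):
    """Roll out a trajectory from only the initial state (a sketch).
    Feeding the average of the most recent k + 1 states back into the
    model is our reading of the strategy described above; the exact
    averaging scheme is an assumption."""
    xs, vs = [X0], [V0]
    for _ in range(num_steps):
        X_in = torch.stack(xs[-(k + 1):]).mean(dim=0)   # recent k+1 coordinates
        V_in = torch.stack(vs[-(k + 1):]).mean(dim=0)   # recent k+1 velocities
        X_next, V_next = model(X_in, V_in)
        xs.append(X_next)
        vs.append(V_next)
    return xs, vs
```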
The results are shown in Figure 5, from which we can find that both the baseline (blue lines) and NC\({}^{+}\) (green lines) had good predictions at the initial steps. However, due to the increasing accumulated errors, the baseline soon collapsed and only the main backbone was roughly fit to the target. By contrast, NC\({}^{+}\) was still able to produce promising results. The deviations mainly concentrated on the limbs. Therefore, we argue that NC\({}^{+}\) can produce not only highly accurate one-step predictions, but also relatively reliable consecutive predictions. We also uploaded the .gif files of examples on all three datasets in the Supplementary Materials, which demonstrate the same conclusion.
Figure 4: Visualization of the intermediate velocities w.r.t. \(k\). The red, blue, and green lines denote the target, prediction of NC, and prediction of NC\({}^{+}\), respectively.
Figure 5: Consecutive predictions on the Motion dataset. The red, blue, and green lines denote the target, the prediction of NC (0), and the prediction of NC\({}^{+}\) (2), respectively. The interval between two figures is identical to the setting in the main experiment.
Figure 3: The prediction errors of intermediate velocities on valid set, w.r.t. training epoch. The blue and green lines denote the prediction errors of NC and NC\({}^{+}\), respectively.
## 5 Conclusion
In this paper, we propose NC to tackle the problem of reasoning about system dynamics. We show that the existing state-of-the-art methods can be derived as the basic form of NC, and we theoretically and empirically prove the effectiveness of high-order NC. Furthermore, with a few additional data points provided, NC\({}^{+}\) is capable of producing promising consecutive predictions. In the future, we plan to study how to improve the ability of NC\({}^{+}\) on the consecutive prediction task.
|
2302.10483 | Structured Bayesian Compression for Deep Neural Networks Based on The
Turbo-VBI Approach | With the growth of neural network size, model compression has attracted
increasing interest in recent research. As one of the most common techniques,
pruning has been studied for a long time. By exploiting the structured sparsity
of the neural network, existing methods can prune neurons instead of individual
weights. However, in most existing pruning methods, surviving neurons are
randomly connected in the neural network without any structure, and the
non-zero weights within each neuron are also randomly distributed. Such
irregular sparse structure can cause very high control overhead and irregular
memory access for the hardware and even increase the neural network
computational complexity. In this paper, we propose a three-layer hierarchical
prior to promote a more regular sparse structure during pruning. The proposed
three-layer hierarchical prior can achieve per-neuron weight-level structured
sparsity and neuron-level structured sparsity. We derive an efficient
Turbo-variational Bayesian inferencing (Turbo-VBI) algorithm to solve the
resulting model compression problem with the proposed prior. The proposed
Turbo-VBI algorithm has low complexity and can support more general priors than
existing model compression algorithms. Simulation results show that our
proposed algorithm can promote a more regular structure in the pruned neural
networks while achieving even better performance in terms of compression rate
and inferencing accuracy compared with the baselines. | Chengyu Xia, Danny H. K. Tsang, Vincent K. N. Lau | 2023-02-21T07:12:36Z | http://arxiv.org/abs/2302.10483v1 | # Structured Bayesian Compression for Deep Neural Networks Based on The Turbo-VBI Approach
###### Abstract
With the growth of neural network size, model compression has attracted increasing interest in recent research. As one of the most common techniques, pruning has been studied for a long time. By exploiting the structured sparsity of the neural network, existing methods can prune neurons instead of individual weights. However, in most existing pruning methods, surviving neurons are randomly connected in the neural network without any structure, and the non-zero weights within each neuron are also randomly distributed. Such irregular sparse structure can cause very high control overhead and irregular memory access for the hardware and even increase the neural network computational complexity. In this paper, we propose a three-layer hierarchical prior to promote a more regular sparse structure during pruning. The proposed three-layer hierarchical prior can achieve per-neuron weight-level structured sparsity and neuron-level structured sparsity. We derive an efficient Turbo-variational Bayesian inferencing (Turbo-VBI) algorithm to solve the resulting model compression problem with the proposed prior. The proposed Turbo-VBI algorithm has low complexity and can support more general priors than existing model compression algorithms. Simulation results show that our proposed algorithm can promote a more regular structure in the pruned neural networks while achieving even better performance in terms of compression rate and inferencing accuracy compared with the baselines.
Deep neural networks, Group sparsity, Model compression, Pruning.
## I Introduction
Deep neural networks (DNNs) have been extremely successful in a wide range of applications. However, to achieve good performance, state-of-the-art DNNs tend to have huge numbers of parameters. Deployment and transmission of such big models pose significant challenges for computation, memory and communication. This is particularly important for edge devices, which have very restricted resources. For the above reasons, model compression has become a hot topic in deep learning.
A variety of research on model compression exists. In [1]-[3], weight pruning approaches are adopted to reduce the complexity of DNN inferencing. In [1], the threshold is set to be the weight importance, which is measured by introducing an auxiliary importance variable for each weight. With the auxiliary variables, the pruning can be performed in one shot and before the formal training. In [2] and [3], the Hessian matrix of the weights is used as a criterion to perform pruning. These aforementioned approaches, however, cannot proactively force some of the less important weights to have small values; hence, they just passively prune small weights. In [4]-[13], systematic approaches of model compression using optimization of regularized loss functions are adopted. In [11], an \(\ell_{2}\)-norm based regularizer is applied to the weights of a DNN and the weights that are close to zero are pruned. In [4], an \(\ell_{0}\)-norm regularizer, which forces the weights to be exactly zero, is proposed. The \(\ell_{0}\)-norm regularization is proved to be more efficient at enforcing sparsity in the weights, but the training problem is notoriously difficult to solve due to the discontinuous nature of the \(\ell_{0}\)-norm. In these works, the regularization only promotes sparsity in the weights. Yet, sparsity in the weights is not equivalent to sparsity in the neurons. In [9], polarization regularization is proposed, which can push a proportion of weights to zero and others to values larger than zero. Then, all the weights connected to the same neuron are assigned a common polarization regularizer, such that some neurons can be entirely pruned and some can be pushed to values larger than zero. In this way, the remaining weights do not have to be pushed to small values, which improves the expressive ability of the network. In [5], the authors use a group Lasso regularizer to remove entire filters in CNNs and show that their proposed group sparse regularization can even increase the accuracy of a ResNet. However, the resulting neurons are randomly connected in the neural network and the resulting datapath of the pruned neural network is quite irregular, making it difficult to implement in hardware [5], [6].
In addition to the aforementioned deterministic regularization approaches, we can also impose sparsity in the weights of the neural network using Bayesian approaches. In [14], weight pruning and quantization are realized at the same time by assigning a set of quantizing gates to the weight value. A prior is further designed to force the quantizing gates to "close", such that the weights are forced to zero and the non-zero weights are forced to a low bit precision. By designing an appropriate prior distribution for the weights, one can enforce more refined structures in the weight vector. In [15], a simple sparse prior is proposed to promote weight-level sparsity, and variational Bayesian inference (VBI) is adopted to solve the model compression problem. In [16], group sparsity is investigated from a Bayesian perspective. A two-layer hierarchical sparse prior is proposed where the weights follow Gaussian distributions and all the output weights of a neuron share the same prior variance. Thus, the output weights of a neuron can be pruned at the same time and neuron-level sparsity is achieved.
In this paper, we consider model compression of DNNs from a Bayesian perspective. Despite various existing works on model compression, there are still several technical issues to be addressed.
* **Structured datapath in the pruned neural network:** In deterministic regularization approaches, we can achieve group sparsity in a weight matrix. For example, under \(\ell_{2,1}\)-norm regularization [5, 7], some rows can be zeroed out in the weight matrix after pruning and this results in neuron-level sparsity. However, surviving neurons are randomly connected in the neural network without any structure. Moreover, the non-zero weights within each neuron are also randomly distributed. In the existing Bayesian approaches, both the single-layer priors in [15] and the two-layer hierarchical priors in [16] also cannot promote structured neuron connectivity in the pruned neural network. As such, the random and irregular neuron connectivity in the datapath of the DNN poses challenges in the hardware implementation.1 Footnote 1: When the pruned DNN has randomly connected neurons and irregular weights within each neuron, very high control overhead and irregular memory access will be involved for the datapath to take advantage of the compression in the computation of the output.
* **Efficiency of weight pruning:** With existing model compression approaches, the compressed model can still be too large for efficient implementation on mobile devices. Moreover, the existing Bayesian model compression approaches tend to have high complexity and low robustness with respect to different data sets, as they typically adopt Monte Carlo sampling in solving the optimizing problem [15, 16]. Thus, a more efficient model compression solution that can achieve a higher compression rate and lower complexity is required.
To overcome the above challenges, we propose a three-layer hierarchical sparse prior that can exploit structured weight-level and neuron-level sparsity during training. To handle the hierarchical sparse prior, we propose a Turbo-VBI [17] algorithm to solve the Bayesian optimization problem in the model compression. The main contributions are summarized below.
* **Three-layer Hierarchical Prior for Structured Weight-level and Neuron-level Sparsity:** The design of an appropriate prior distribution in the weight matrix is critical to achieve more refined structures in the weights and neurons. To exploit more structured weight-level and neuron-level sparsity, we propose a three-layer hierarchical prior which embraces both of the traditional two-layer hierarchical prior [16] and support-based prior [18]. The proposed three-layer hierarchical prior has the following advantages compared with existing sparse priors in Bayesian model compression: 1) It promotes advanced weight-level structured sparsity as well as neuron-level structured sparsity, as illustrated in Fig. 1c. Unlike existing sparse priors, our proposed prior not only prunes the neurons, but also promotes regular structures in the unpruned neurons as well as the weights of a neuron. Thus, it can achieve more regular structure in the overall neural network. 2) It is flexible to promote different sparse structures. The group size as well as the average gap between two groups can be adjusted via tuning the hyper parameters. 3) It is optimizing-friendly. It allows the application of low complexity training algorithms.
* **Turbo-VBI Algorithm for Bayesian Model Compression:** Based on the 3-layer hierarchical prior, the model training and compression is formulated as a Bayesian inference problem. We propose a low complexity Turbo-VBI algorithm with some novel extensions compared to existing approaches: 1) It can support more general priors. Existing Bayesian compression methods [15, 16] cannot handle the three-layer hierarchical prior due to the extra layer in the prior. 2) It has higher robustness and lower complexity compared with existing Bayesian compression methods. Existing Bayesian compression methods use Monte Carlo sampling to approximate the log-likelihood term in the objective function, which has low robustness and high complexity [19]. To overcome this drawback, we propose a deterministic approximation for the log-likelihood term.
* **Superior Model Compression Performance:** The proposed solution can achieve a highly compressed neural network compared to the state-of-the-art baselines and achieves a similar inferencing performance. Therefore, the proposed solution can substantially reduce the computational complexity in the inferencing of the neural network and the resulting pruned neural networks are hardware friendly. Consequently, it can fully unleash the potential of AI applications over mobile devices.
The rest of the paper is organized as follows. In Section II, the existing structures and the desired structure in the weight matrix are elaborated. The three-layer hierarchical prior for the desired structure is also introduced. In Section III, the model compression problem is formulated. In Section IV, the Turbo-VBI algorithm for model compression under the three-layer hierarchical prior is presented. In Section V and Section VI, numerical experiments and conclusions, respectively, are provided.
## II Structures in The Weight Matrix
Model compression is realized by enforcing sparsity in the weight matrix. We first review the main existing structured sparsity of the weight matrix in the literature. Based on this, we elaborate the desired structures in the weight matrix that are not yet considered. Finally, we propose a 3-layer hierarchical prior model to enforce such a desired structure in the weight matrix.
### _Review of Existing Sparse Structured Weight Matrix_
There are two major existing sparse structures in the weight matrix, namely random sparsity and group sparsity. We focus on a fully connected layer to elaborate these two sparse structures.
**1) Random Sparsity in the Weight Matrix:** In random sparsity, the weights are pruned individually. \(\ell_{1}\)-norm
regularization and one-layer prior [15] are often adopted to regularize individual weights. Such regularization is simple and optimization-friendly; however, both resulting weights and neurons are completely randomly connected since there is no extra constraint on the pruned weights. In the weight matrix, this means that the elements are set to zero without any structure, which results in the remaining elements also having no specific structures. When translated into neural network topology, this means that the surviving weights and neurons are randomly connected, as illustrated in Fig. (a)a. Because of the random connections in the pruned network, such a structure is not friendly to practical applications such as storage, computation and transmission [5, 20].
**2) Group Sparsity in the Weight Matrix:** In group sparsity, the weights are pruned in groups. Particularly, in fully connected layers, a group is often defined as the outgoing weights of a neuron to promote neuron-level sparsity. To promote group sparsity, sparse group Lasso regularization [7, 21] and two-layer priors are proposed. Generally, such regularization often has two folds: one regularizer for group pruning and one regularizer for individual weight pruning. Thus, the weights can not only be pruned in groups, but can also be pruned individually in case the group cannot be totally removed. However, when a neuron cannot be totally pruned, the weights connected to it are randomly pruned, and the neurons are also pruned in random. In Fig. (b)b, we illustrate an example of group sparsity. In the weight matrix, the non-zero elements in a row are randomly distributed and the non-zero rows are also randomly distributed. When translated into neural network topology, this means the unpruned weights connected to a neuron are randomly distributed, and in each layer, the surviving neurons are also randomly distributed. Thus, group sparsity is still not fully structured.
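For concreteness, a minimal sketch of the sparse group Lasso regularizer reviewed above, treating each row of a \(K\times M\) weight matrix as one group (the outgoing weights of a neuron); the coefficients are illustrative:

```python
import torch

def sparse_group_lasso(W, lam_group=1e-3, lam_l1=1e-4):
    """Group term: one l2 norm per row, so whole rows (neurons) are
    pushed to zero together. l1 term: prunes leftover individual
    weights when a group cannot be removed entirely."""
    return lam_group * W.norm(dim=1).sum() + lam_l1 * W.abs().sum()
```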
On the other hand, recent research has pointed out that irregular connections in the neural network bring various disadvantages for reducing the computational complexity or inferencing time [20, 22, 23, 24, 25]. As discussed above, existing sparsity structures are not regular enough, and the drawbacks of such irregular structures are listed as follows.
1. _Low decoding and computing parallelism:_ Sparse weight matrices are usually stored in a sparse format on the hardware, e.g., compressed sparse row (CSR), compressed sparse column (CSC), or coordinate format (COO). However, as illustrated in Fig. 2 (a), when decoding from the sparse format, the number of decoding steps for each row (column) can vastly differ due to the irregular structure. This can greatly harm the decoding parallelism [23]. Moreover, the irregular structure also hampers parallel computation techniques such as matrix tiling [24, 20], as illustrated in Fig. 2 (b).
2. _Inefficient pruning of neurons:_ As illustrated in Fig. 2 (c), the unstructured surviving weights from the previous layer tend to activate random neurons in the next layer, which is not efficient in pruning the neurons in the next layer. This may lead to inefficiency in reducing the floating point operations (FLOPs) of the pruned neural network.
3. _Large storage burden:_ Since the surviving weights of existing model compression methods are unstructured, the exact location for each surviving weight has to be recorded. This leads to a huge storage overhead, which can be even greater than storing the dense matrix [24, 20]. The huge storage overhead can also affect the inferencing time by increasing the memory access and decoding time.
### _Multi-level Structures in the Weight Matrix_
Based on the above discussion, a multi-level structured sparse neural network should meet the following three criteria: **1) Per-neuron weight-level structured sparsity:** The surviving weights connected to a neuron should exhibit a regular structure rather than be randomly distributed as in traditional group sparsity. In a weight matrix, this means the non-zero elements of each row should follow some regular structure. **2) Neuron-level structured sparsity:** The surviving neurons of each layer should also have some regular structures. In a weight matrix, this means that the non-zero rows should also follow some structure. **3) Flexibility:** The structure of the resulting weight matrix should be sufficiently flexible so that we can achieve a trade-off between model complexity and predicting accuracy in different scenarios. Fig. 1c illustrates an example of a desired structure in the weight matrix. In each row, the significant values tend to gather in groups. When translated into neural network topology, this enables the advanced weight-level structure: the surviving weights of each neuron tend to gather in groups. In each column, the significant values also tend to gather in groups. When translated into neural network topology, this enables the neuron-level structure: the surviving neurons in each layer tend to gather in groups. Moreover, the group size in each row and column can be tuned to achieve a trade-off between model complexity and predicting accuracy. Such proposed multi-level structures enable the datapath to exploit the compression to simplify the computation with very small control overheads. Specifically, the advantages of our proposed multi-level structure are listed as follows.

Fig. 1: Comparison of different structures in the weight matrix of a fully connected layer with 6 input neurons and 6 output neurons. Each row denotes the weights connected to one input neuron.
1. _High decoding and computing parallelism:_ As illustrated in Fig. 2 (a), with the blocked structure, the decoding can be performed block-wise such that all the weights in the same block can be decoded simultaneously. Moreover, the regular structure can bring more parallelism in computation. For example, more efficient matrix tiling can be applied, where the zero blocks can be skipped during matrix multiplication, as illustrated in Fig. 2 (b). Also, the kernels can be compressed into a smaller size, such that the inputs corresponding to the zero weights can be skipped [20].
2. _Efficient neuron pruning:_ As illustrated in Fig. 2 (c), the structured surviving weights from the previous layer can lead to a structured and compact neuron activation in the next layer, which is beneficial for more efficient neuron pruning.
3. _More efficient storage:_ With the blocked structure, we can simply record the location and size for each block rather than recording the exact location for each weight. For a weight matrix with average cluster size of \(m\times m\) and number of clusters \(P\), the coding gain over traditional COO format is \(\mathcal{O}\left(Pm^{2}\right)\).
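As a toy illustration of the block-wise coding in 3), the sketch below tiles the matrix with a fixed \(m\times m\) grid; practical block-sparse formats record variable block locations and sizes, so this is a simplified assumption:

```python
import numpy as np

def encode_block_coo(W, m):
    """Record one (row, col) index per non-zero m x m tile instead of
    per-element coordinates, which is the source of the O(P m^2)
    coding gain over element-wise COO mentioned above."""
    blocks = []
    for i in range(0, W.shape[0], m):
        for j in range(0, W.shape[1], m):
            tile = W[i:i + m, j:j + m]
            if np.any(tile):                 # skip all-zero tiles entirely
                blocks.append((i, j, tile.copy()))
    return blocks
```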
In this paper, we focus on promoting a multi-level group structure in model compression. In our proposed structure, the surviving weights connected to a neuron tend to gather in groups and the surviving neurons also tend to gather in groups. Additionally, the group size is tunable so that our proposed structure is flexible.
### _Three-Layer Hierarchical Prior Model_
The probability model of the sparse prior provides the foundation of specific sparse structures. As previously mentioned, existing sparse priors for model compression cannot promote the multi-level structure in the weight matrix. To capture the multi-level structured sparsity, we propose a three-layer hierarchical prior by combining the two-layer prior and the support-based prior [18].
Let \(w\) denote a weight in the neural network. For each weight \(w\), we introduce a support \(s\in\left\{0,1\right\}\) to indicate whether the weight \(w\) is active \(\left(s=1\right)\) or inactive \(\left(s=0\right)\). Specifically, let \(\rho\) denote the precision for the weight \(w\). That is, \(1/\rho\) is the variance of \(w\). When \(s=0\), the distribution of \(\rho\) is chosen to satisfy \(\mathbb{E}\left[\rho\right]\gg 1\), so that the variance of the corresponding weight \(w\) is very small. When \(s=1\), the distribution of \(\rho\) is chosen to satisfy \(\mathbb{E}\left[\rho\right]=\mathcal{O}\left(1\right)\), so that \(w\) has some probability to take significant values. Then, the three-layer hierarchical prior (joint distribution of \(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\)) is given by
\[p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)=p\left(\mathbf{s} \right)p\left(\boldsymbol{\rho}|\mathbf{s}\right)p\left(\mathbf{w}|\boldsymbol{ \rho}\right), \tag{1}\]
where \(\mathbf{w}\) denotes all the weights in the neural network, \(\boldsymbol{\rho}\) denotes the corresponding precisions, and \(\mathbf{s}\) denotes the corresponding supports. The distribution of each layer is elaborated below.
_Probability Model for \(p\left(\mathbf{s}\right)\):_\(p\left(\mathbf{s}\right)\) can be decomposed into each layer by \(p\left(\mathbf{s}\right)=\prod_{l=1}^{L}p\left(\mathbf{s}_{l}\right)\), where \(L\) is the total layer number of the neural network and \(\mathbf{s}_{l}\) denotes the supports of the weights in the \(l\)-th layer. Now we focus on the \(l\)-th layer. Suppose the dimension of the weight matrix is \(K\times M\), i.e., the layer has \(K\) input neurons and \(M\) output neurons. The distribution \(p\left(\mathbf{s}_{l}\right)\) of the supports is used to capture the multi-level structure in this layer. Since we focus on a specific layer now, we use \(p\left(\mathbf{s}\right)\) to replace \(p\left(\mathbf{s}_{l}\right)\) in the following discussion for notation simplicity. In order to promote the aforementioned multi-level structure, the active elements in each row of the weight matrix should gather in clusters and the active elements in each column of the weight matrix should also gather in clusters. Such a block structure can be achieved by modeling the support matrix as a Markov random field (MRF). Specifically, each row of the support matrix is modeled by a Markov chain as
\[p\left(\mathbf{s}_{row}\right)=p\left(s_{row,1}\right)\prod_{m=1}^{M}p\left(s _{row,m+1}|s_{row,m}\right), \tag{2}\]
where \(\mathbf{s}_{row}\) denotes the row vector of the support matrix, and \(s_{row,m}\) denotes the \(m\)-th element in \(\mathbf{s}_{row}\). The transition probability is given by \(p\left(s_{row,m+1}=1|s_{row,m}=0\right)=p_{01}^{row}\) and \(p\left(s_{row,m+1}=0|s_{row,m}=1\right)=p_{10}^{row}\). Generally, a smaller \(p_{01}^{row}\) leads to a larger average gap between two clusters, and a smaller \(p_{10}^{row}\) leads to a larger average cluster size. Similarly, each column of the support matrix is also modeled by a Markov chain as
\[p\left(\mathbf{s}_{col}\right)=p\left(s_{col,1}\right)\prod_{k=1}^{K}p\left(s_ {col,k+1}|s_{col,k}\right), \tag{3}\]
where \(\mathbf{s}_{col}\) denotes the column vector of the support matrix and \(s_{col,k}\) denotes the \(k\)-th element in \(\mathbf{s}_{col}\). The transition probability is given by \(p\left(s_{col,k+1}=1|s_{col,k}=0\right)=p_{01}^{col}\) and \(p\left(s_{col,k+1}=0|s_{col,k}=1\right)=p_{10}^{col}\). Such an MRF model is also
known as a 2D Ising model. An illustration of the MRF prior is given in Fig. 3. Note that \(p\left(\mathbf{s}\right)\) can also be modeled with other types of priors, such as Markov tree prior and hidden Markov prior, to promote other structures.
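As a minimal sketch, one row of the support matrix can be drawn from the Markov chain in Eq. (2) as follows; initializing from the stationary distribution is our assumption:

```python
import numpy as np

def sample_support_row(M, p01, p10, rng=np.random.default_rng(0)):
    """Small p01 -> larger average gaps between clusters; small p10 ->
    larger average cluster size. Columns follow Eq. (3) analogously."""
    s = np.zeros(M, dtype=int)
    s[0] = int(rng.random() < p01 / (p01 + p10))  # stationary P(s = 1)
    for m in range(1, M):
        if s[m - 1] == 1:
            s[m] = int(rng.random() >= p10)       # stay active w.p. 1 - p10
        else:
            s[m] = int(rng.random() < p01)        # activate w.p. p01
    return s
```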
_Probability Model for \(p\left(\boldsymbol{\rho}|\mathbf{s}\right)\):_ The conditional probability \(p\left(\boldsymbol{\rho}|\mathbf{s}\right)\) is given by
\[p\left(\boldsymbol{\rho}|\mathbf{s}\right)=\prod_{n=1}^{N}\left(\Gamma\left( \rho_{n};a_{n},b_{n}\right)\right)^{s_{n}}\left(\Gamma\left(\rho_{n};\overline{ a}_{n},\overline{b}_{n}\right)\right)^{1-s_{n}}, \tag{4}\]
where \(N\) is the total number of weights in the neural network. \(\rho_{n}\) takes two different Gamma distributions according to the value of \(s_{n}\). When \(s_{n}=1\), the corresponding weight \(w_{n}\) is active. In this case, the shape parameter \(a_{n}\) and rate parameter \(b_{n}\) should satisfy \(\frac{a_{n}}{b_{n}}=\mathbb{E}\left[\rho_{n}\right]=\mathcal{O}\left(1\right)\) such that the variance of \(w_{n}\) is \(\mathcal{O}\left(1\right)\). When \(s_{n}=0\), the corresponding weight \(w_{n}\) is inactive. In this case, the shape parameter \(\overline{a}_{n}\) and rate parameter \(\overline{b}_{n}\) should satisfy \(\frac{\overline{a}_{n}}{\overline{b}_{n}}=\mathbb{E}\left[\rho_{n}\right]\gg 1\) such that the variance of \(w_{n}\) is very close to zero. The motivation for choosing the Gamma distribution is that it is conjugate to the Gaussian distribution and thus leads to a closed-form solution in Bayesian inference [17].
_Probability Model for \(p\left(\mathbf{w}|\boldsymbol{\rho}\right)\):_ The conditional probability \(p\left(\mathbf{w}|\boldsymbol{\rho}\right)\) is given by
\[p\left(\mathbf{w}|\boldsymbol{\rho}\right)=\prod_{n=1}^{N}p\left(w_{n}|\rho_{ n}\right), \tag{5}\]
where \(p\left(w_{n}|\rho_{n}\right)\) is assumed to be a Gaussian distribution:
\[p\left(w_{n}|\rho_{n}\right)=\mathcal{N}\left(w_{n}|0,\frac{1}{\rho_{n}} \right). \tag{6}\]
Although the choice of \(p\left(w_{n}|\rho_{n}\right)\) is not restricted to a Gaussian distribution [16], there are still good reasons for choosing a Gaussian. First, according to existing simulations, the compression performance is not very sensitive to the distribution type of \(p\left(w_{n}|\rho_{n}\right)\) [16, 15]. Moreover, assuming \(p\left(w_{n}|\rho_{n}\right)\) to be a Gaussian distribution also contributes to easier optimization in Bayesian inference, as shown in Section IV.
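Given a support matrix, the remaining two layers of the prior can be sampled directly from Eqs. (4)-(6); in this minimal sketch the hyper-parameter values are illustrative, chosen so that \(a/b=\mathcal{O}(1)\) for active weights and \(\overline{a}/\overline{b}\gg 1\) for inactive ones:

```python
import numpy as np

def sample_three_layer_prior(S, a=1.0, b=1.0, a_bar=1e4, b_bar=1.0,
                             rng=np.random.default_rng(0)):
    """Draw rho ~ Gamma and w ~ N(0, 1/rho) given supports S (Eqs. (4)-(6)).
    Inactive entries get E[rho] = a_bar / b_bar >> 1, so their weights
    concentrate near zero."""
    shape = np.where(S == 1, a, a_bar)
    rate = np.where(S == 1, b, b_bar)
    rho = rng.gamma(shape, 1.0 / rate)            # numpy's scale = 1 / rate
    w = rng.normal(0.0, np.sqrt(1.0 / rho))       # w_n ~ N(0, 1 / rho_n)
    return rho, w
```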
_Remark 1_.: _(Comparison with existing sparse priors)_ One of the main differences between our work and the existing works [15, 16] is the design of the prior. Because of the MRF support layer, our proposed three-layer prior is more general and can capture more regular structures such as the desired multi-level structure. Such a multi-level structure in the weight matrix cannot be modeled by the existing priors in [15] and [16], but our proposed prior can easily support the sparse structures in them. In our work, if the prior \(p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\) reduces to one layer \(p\left(\mathbf{w}\right)\) and takes a Laplace distribution, it is equivalent to a Lasso regularization. If the prior reduces to one layer \(p\left(\mathbf{w}\right)\) and takes an improper log-scale uniform distribution, it is exactly the problem of variational dropout [15]. In both cases, random sparsity is realized. If the prior \(p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\) reduces to a two-layer \(p\left(\mathbf{w}|\boldsymbol{\rho}\right)\) and takes the group improper log-uniform distribution or the group horseshoe distribution, it is exactly the problem in [16]. In this way, group sparsity can be realized. Moreover, since our proposed prior is more complicated, it also leads to a different algorithm design in Section IV.

Fig. 3: Illustration of the support prior for a \(6\times 6\) weight matrix.

Fig. 2: (a) Illustration of decoding parallelism. In existing sparse structures, the decoding step for each row can be vastly different because of the irregular structure. In our proposed structure, each block can be decoded simultaneously because each block can be coded as a whole. (b) Illustration of computing parallelism. In existing sparse structures, the matrix-matrix multiplication has to be performed element-wise because each non-zero weight is individually encoded. The processor has to fetch individual weights and multiply them by individual input elements to get the individual output element (green square). In our proposed structure, parallel techniques for dense matrices such as matrix tiling can be applied. Moreover, due to the clustered structure, the zero block (gray block) can be skipped during computing the output (green block). (c) Illustration of efficient neuron pruning. In existing sparse structures, the surviving weights of each neuron are randomly connected to the neurons in the next layer, which will randomly activate neurons in the next layer. In our proposed structure, the surviving weights of each neuron tend to activate the same neurons in the next layer, which can lead to a structured and compact neuron activation in the next layer.
## III Bayesian Model Compression Formulation
Bayesian model compression handles the model compression problem from a Bayesian perspective. In Bayesian model compression, the weights follow a prior distribution, and our primary goal is to find the posterior distribution conditioned on the dataset. In the following, we elaborate on the neural network model and the Bayesian training of the DNN.
### _Neural Network Model_
We first introduce the neural network model. Let \(\mathcal{D}=\left\{\left(x_{1},y_{1}\right),\left(x_{2},y_{2}\right),..., \left(x_{D},y_{D}\right)\right\}\) denote the dataset, which contains \(D\) pairs of training input \(\left(x_{d}\right)\) and training output \(\left(y_{d}\right)\). Let \(NN\left(x,\mathbf{w}\right)\) denote the output of the neural network with input \(x\) and weights \(\mathbf{w}\), where \(NN\left(\cdot\right)\) is the neural network function. Note that \(NN\left(x,\mathbf{w}\right)\) is a generic neural network model, which can embrace various commonly used structures, some examples are listed as follows.
* **Multi-Layer Perceptron (MLP):** MLP is a representative class of feedforward neural network, which consists of an input layer, output layer and hidden layers. As illustrated in Fig. 4a, we can use \(NN\left(x,\mathbf{w}\right)\) to represent the feedforward calculation in an MLP.
* **Convolutional Neural Network (CNN):** A CNN consists of convolutional kernels and dense layers. As illustrated in Fig. 4b, we can use \(NN\left(x,\mathbf{w}\right)\) to represent the complicated calculation in a CNN.
Since the data points from the training set are generally assumed to be independent, we can use the following stochastic model to describe the observations from a general neural network:
\[\mathbf{y}\sim p\left(\mathcal{D}|\mathbf{w}\right)=\prod_{d=1}^{D}p\left(y_ {d}|x_{d},\mathbf{w}\right), \tag{7}\]
where the likelihood \(p\left(y_{d}|x_{d},\mathbf{w}\right)\) can be different for regression and classification tasks [26]. In regression tasks, it can be modeled as a Gaussian distribution:
\[p\left(y_{d}|x_{d},\mathbf{w}\right)=\mathcal{N}\left(NN\left(x_{d},\mathbf{ w}\right),\sigma_{d}^{2}\right), \tag{8}\]
where \(\sigma_{d}^{2}\) is the noise variance in training data, while in classification tasks, it can be modeled as
\[p\left(y_{d}|x_{d},\mathbf{w}\right)=\exp\left(-G\left(y_{d}|x_{d},\mathbf{w} \right)\right), \tag{9}\]
where \(G\left(y_{d}|x_{d},\mathbf{w}\right)\) is the cross-entropy error function.
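A minimal sketch of how these two likelihood choices translate into training objectives; the shared, known noise variance \(\sigma_{d}^{2}\) is a simplifying assumption:

```python
import torch
import torch.nn.functional as F

def log_likelihood(nn_out, y, task="regression", sigma2=1.0):
    """Return log p(y | x, w) for one batch, with nn_out = NN(x, w)."""
    if task == "regression":
        # Gaussian likelihood, Eq. (8), dropping the constant term.
        return -0.5 * ((y - nn_out) ** 2).sum() / sigma2
    # Classification, Eq. (9): log-likelihood = minus cross-entropy G.
    return -F.cross_entropy(nn_out, y, reduction="sum")
```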
### _Bayesian Training of DNNs_
Bayesian training of a DNN amounts to calculating the posterior distribution \(p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}|\mathcal{D}\right)\) based on the prior distribution \(p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\) and the neural network stochastic model (7). In our research, after we obtain the posterior \(p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}|\mathcal{D}\right)\), we use the maximum a posteriori (MAP) estimates of \(\mathbf{w},\boldsymbol{\rho}\) and \(\mathbf{s}\) as deterministic estimates of the weights, precisions and supports, respectively:
\[\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)^{\ast}=\arg\max_{ \mathbf{w},\boldsymbol{\rho},\mathbf{s}}p\left(\mathbf{w},\boldsymbol{\rho}, \mathbf{s}|\mathcal{D}\right). \tag{10}\]
According to Bayesian rule, the posterior distribution of \(\mathbf{w},\boldsymbol{\rho}\) and \(\mathbf{s}\) can be derived as
\[p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}|\mathcal{D}\right)=\frac{p \left(\mathcal{D}|\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)p\left(\mathbf{ w},\boldsymbol{\rho},\mathbf{s}\right)}{p\left(\mathcal{D}\right)}, \tag{11}\]
where \(p\left(\mathcal{D}|\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\) is the likelihood term. Based on (7) and (8), it is given by:
\[p\left(\mathcal{D}|\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)=\prod_{d=1}^ {D}p\left(y_{d}|x_{d},\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right), \tag{12}\]
\[p\left(y_{d}|x_{d},\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)=\mathcal{N} \left(NN\left(x_{d},\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right),\sigma_{d}^{2} \right). \tag{13}\]
Equation (11) mainly contains two terms: 1) Likelihood \(p\left(\mathcal{D}|\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\), which is related to how well the model expresses the dataset. 2) Prior \(p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\), which is related to the sparse structure we want to promote. Intuitively, by Bayesian training of the DNN, we are simultaneously 1) tuning the weights to express the dataset and 2) tuning the weights to promote the structure captured in the prior.
However, directly computing the posterior according to (11) involves several challenges, summarized as follows.
Fig. 4: Illustration of \(NN\left(x,\mathbf{w}\right)\) in different neural networks.
* **Intractable multidimensional integral:** The calculation of the exact posterior \(p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}|\mathcal{D}\right)\) involves calculating the so-called evidence term \(p\left(\mathcal{D}\right)\). The calculation of \(p\left(\mathcal{D}\right)=\sum_{\mathbf{s}}\int\int p\left(\mathcal{D}|\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)d\mathbf{w}d\boldsymbol{\rho}\) is usually intractable as the likelihood term can be very complicated [26].
* **Complicated prior term:** Since the prior term \(p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\) contains an extra support layer \(p\left(\mathbf{s}\right)\), the KL-divergence between the approximate distribution and the posterior distribution is difficult to calculate or approximate. Traditional VBI method cannot be directly applied to achieve an approximate posterior distribution.
In the next section, we propose a Turbo-VBI based model compression method, which approximately calculates the marginal posterior distribution of \(\mathbf{w},\boldsymbol{\rho}\) and \(\mathbf{s}\).
## IV Turbo-VBI Algorithm for Model Compression
To handle the aforementioned two challenges, we propose a Turbo-VBI [17] based model compression algorithm. The basic idea of the proposed Turbo-VBI algorithm is to approximate the intractable posterior \(p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}|\mathcal{D}\right)\) with a variational distribution \(q\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\). The factor graph of the joint distribution \(p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s},\mathcal{D}\right)\) is illustrated in Fig. 5, where the variable nodes are denoted by white circles and the factor nodes are denoted by black squares. Specifically, the factor graph is derived based on the factorization
\[p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s},\mathcal{D}\right)=p\left( \mathcal{D}|\mathbf{w}\right)p\left(\mathbf{w}|\boldsymbol{\rho}\right)p\left( \boldsymbol{\rho}|\mathbf{s}\right)p\left(\mathbf{s}\right). \tag{14}\]
As illustrated in Fig. 5, the variable nodes are \(\mathbf{w},\boldsymbol{\rho}\) and \(\mathbf{s}\), \(g\) denotes the likelihood function, \(f\) denotes the prior distribution \(p\left(w_{n}|\rho_{n}\right)\) for weights \(\mathbf{w}\), \(\eta\) denotes the prior distribution \(p\left(\rho_{n}|s_{n}\right)\) for precision \(\boldsymbol{\rho}\), and \(h\) denotes the joint prior distribution \(p\left(\mathbf{s}\right)\) for support \(\mathbf{s}\). The detailed expression of each factor node is listed in Table I.
### _Top Level Modules of the Turbo-VBI Based Model Compression Algorithm_
Since the factor graph in Fig. 5 has loops, directly applying the message passing algorithm usually cannot achieve a good performance. In addition, since the probability model is more complicated than the existing ones in [15] and [16], it is difficult to directly apply the VBI algorithm. Thus, we consider separating the complicated probability model in Fig. 5 into two parts and performing Bayesian inference on each part respectively, as illustrated in Fig. 6. Specifically, we follow the turbo framework and divide the factor graph into two parts. One is the support part (Part B), which contains the prior information. The other one is the remaining part (Part A). Correspondingly, the proposed Turbo-VBI algorithm is also divided into two modules such that each module performs Bayesian inference on its corresponding part respectively. Module A and Module B also need to exchange messages. Specifically, the output message \(v_{h\to s_{n}}\) of Module B refers to the message from factor node \(h_{A,n}\) to variable node \(s_{n}\) in Part A, and is equivalent to the message from variable node \(s_{n}\) to factor node \(h_{B,n}\) in Part B, which refers to the conditional marginal probability \(p\left(s_{n}|\boldsymbol{v}_{\eta\to\mathbf{s}}\right)\). It is calculated by sum-product message passing (SPMP) on Part B, and acts as the input of Module A to facilitate the VBI on Part A. The output message \(v_{\eta_{n}\to s_{n}}\) of Module A refers to the posterior probability divided by the output of Module B, i.e., \(\frac{q\left(s_{n}\right)}{v_{h\to s_{n}}}\), and acts as the input of Module B to facilitate SPMP on Part B.
Specifically, the probability model for the prior distribution in Part A is assumed as
\[\hat{p}\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)=\hat{p}\left( \mathbf{s}\right)p\left(\boldsymbol{\rho}|\mathbf{s}\right)p\left(\mathbf{w}| \boldsymbol{\rho}\right), \tag{15}\]
where
\[\hat{p}\left(\mathbf{s}\right)=\prod_{n=1}^{N}\left(\pi_{n}\right)^{s_{n}} \left(1-\pi_{n}\right)^{1-s_{n}}\]
is a new prior for support \(\mathbf{s}\). \(\pi_{n}\) is the probability that \(s_{n}=1\), which is defined as:
\[\pi_{n}=p\left(s_{n}=1\right)=\frac{v_{h\to s_{n}}\left(1\right)}{v_{h\to s_{n }}\left(1\right)+v_{h\to s_{n}}\left(0\right)}. \tag{16}\]
Note that the only difference between the new prior in (15) and that in (1) is that the complicated \(p\left(\mathbf{s}\right)\) is replaced by a simpler distribution \(\hat{p}\left(\mathbf{s}\right)\) with independent entries. The correlations among the supports are separated into Part B. Then, based on the new prior \(\hat{p}\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\), Module A performs a VBI algorithm.
Fig. 5: Factor graph of the joint distribution \(p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s},\mathcal{D}\right)\).
Fig. 6: Illustration of the two modules in the Turbo-VBI algorithm.
In the VBI algorithm, variational distributions \(q\left(\mathbf{w}\right)\), \(q\left(\boldsymbol{\rho}\right)\) and \(q\left(\mathbf{s}\right)\) are used to approximate the posterior. After that, the approximate posterior \(q\left(\mathbf{s}\right)\) is delivered from Module A back into Module B. The delivered message \(v_{\eta_{n}\to s_{n}}\) is defined as
\[v_{\eta_{n}\to s_{n}}=\frac{q\left(s_{n}\right)}{v_{h\to s_{n}}}, \tag{17}\]
according to the sum-product rule.
With the input message \(v_{\eta_{n}\to s_{n}}\), Module B further performs SPMP over Part B to exploit the structured sparsity, which is contained in the prior distribution \(p\left(\mathbf{s}\right).\) Specifically, in Part B, the factor nodes
\[h_{B,n}=v_{\eta_{n}\to s_{n}},n=1,...,N \tag{18}\]
carry the information of the variational posterior distribution \(q\left(\mathbf{s}\right)\), and the factor node \(h\) carries the structured prior information of \(p\left(\mathbf{s}\right)\). By performing SPMP over Part B, the message \(v_{h\to s_{n}}\) can be calculated and then delivered to Module A. These two modules exchange messages iteratively until convergence or the maximum iteration number is exceeded. After this, the final outputs \(q\left(\mathbf{w}\right)\), \(q\left(\boldsymbol{\rho}\right)\) and \(q\left(\mathbf{s}\right)\) of Module A are the approximate posterior distributions for \(\mathbf{w}\), \(\boldsymbol{\rho}\), and \(\mathbf{s}\), respectively. Note that although SPMP is not guaranteed to converge on Part B, because our \(p\left(\mathbf{s}\right)\) is an MRF prior, the desired structure is usually well promoted in the final output of Module A, as illustrated in the simulation results. This is because SPMP can achieve good results on the 2-D Ising model, and there have been many solid works concerning SPMP on loopy graphs [27]; moreover, our proposed VBI compression algorithm in Module A achieves good enough performance to capture the structure. In addition, since the design of \(p\left(\mathbf{s}\right)\) is flexible, one can model it with Markov chain priors or Markov tree priors, in which case convergence is guaranteed. In the following, we elaborate on the two modules in detail.
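Before diving into the details, the overall message schedule can be summarized by the following skeleton; `run_vbi` and `run_spmp` are placeholders for the Module A and Module B computations (our naming, not the paper's):

```python
def turbo_vbi_loop(run_vbi, run_spmp, num_weights, max_iter=10):
    """Alternate Module A (VBI on Part A) and Module B (SPMP on Part B)."""
    v_h = [(0.5, 0.5)] * num_weights                # uninformative v_{h->s_n}
    q_s = None
    for _ in range(max_iter):
        pi = [v1 / (v1 + v0) for v1, v0 in v_h]     # Eq. (16)
        q_s = run_vbi(pi)                           # q(s_n = 1) for every n
        # Eq. (17): extrinsic message = posterior divided by B's output.
        v_eta = [(q / v1, (1 - q) / v0) for q, (v1, v0) in zip(q_s, v_h)]
        v_h = run_spmp(v_eta)                       # updated v_{h->s_n}
    return q_s
```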
### _Sparse VBI Estimator (Module A)_
In Module A, we want to use a tractable variational distribution \(q\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\) to approximate the intractable posterior distribution \(\hat{p}\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}|\mathcal{D}\right)\) under the prior \(\hat{p}\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\). The quality of this approximation is measured by the Kullback-Leibler divergence (KL divergence):
\[D_{KL} \left(q\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)|| \hat{p}\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}|\mathcal{D}\right)\right) \tag{19}\] \[=\sum_{\mathbf{s}}\int\int q\left(\mathbf{w},\boldsymbol{\rho}, \mathbf{s}\right)\ln\frac{q\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s} \right)}{\hat{p}\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}|\mathcal{D} \right)}d\mathbf{w}d\boldsymbol{\rho}.\]
Thus, the goal is to find the optimal variational distribution \(q\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\) that minimizes this KL divergence. However, the KL divergence still involves calculating the intractable posterior \(\hat{p}\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}|\mathcal{D}\right)\). In order to overcome this challenge, the minimization of KL divergence is usually transformed into a minimization of the negative evidence lower bound (ELBO), subject to a factorized form constraint [28].
**Sparse VBI Problem:**
\[\min_{q\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)}-ELBO, \tag{20}\] \[s.t.q\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)=\prod_ {n=1}^{N}q\left(w_{n}\right)q\left(\rho_{n}\right)q\left(s_{n}\right),\]
where \(ELBO=\sum_{\mathbf{s}}\int\int q\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s} \right)\ln p\left(\mathcal{D}|\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)d \mathbf{w}d\boldsymbol{\rho}-D_{KL}\left(q\left(\mathbf{w},\boldsymbol{\rho}, \mathbf{s}\right)||\hat{p}\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s} \right)\right).\) The constraint means all individual variables \(w,\rho\) and \(s\) are assumed to be independent. Such an assumption is known as the mean field assumption and is widely used in VBI methods [16, 28].
The problem in (20) is non-convex and generally we can only achieve a stationary solution \(q^{*}\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\). A stationary solution can be achieved by applying a block coordinate descent (BCD) algorithm to the problem in (20), as will be proved in Lemma 1. According to the idea of the BCD algorithm, to solve (20) is equivalent to iteratively solving the following three subproblems.
_Subproblem 1 (update of \(\boldsymbol{\rho}\)):_
\[\min_{q\left(\boldsymbol{\rho}\right)} D_{KL}\left(q\left(\boldsymbol{\rho}\right)||\widetilde{p}_{ \boldsymbol{\rho}}\right), \tag{21}\] \[s.t.q\left(\boldsymbol{\rho}\right)=\prod_{n=1}^{N}q\left(\rho_{n }\right),\]
where \(\ln\widetilde{p}_{\boldsymbol{\rho}}=\left\langle\ln\hat{p}\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s},\mathcal{D}\right)\right\rangle_{q\left(\mathbf{s}\right)q\left(\mathbf{w}\right)}.\)
_Subproblem 2 (update of \(\mathbf{s}\)):_
\[\min_{q\left(\mathbf{s}\right)} D_{KL}\left(q\left(\mathbf{s}\right)||\widetilde{p}_{\mathbf{s}}\right), \tag{22}\] \[s.t.\;q\left(\mathbf{s}\right)=\prod_{n=1}^{N}q\left(s_{n}\right),\]
where \(\ln\widetilde{p}_{\mathbf{s}}=\left\langle\ln\hat{p}\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s},\mathcal{D}\right)\right\rangle_{q\left(\boldsymbol{\rho}\right)q\left(\mathbf{w}\right)}.\)
_Subproblem 3 (update of \(\mathbf{w}\)):_
\[\min_{q\left(\mathbf{w}\right)} D_{KL}\left(q\left(\mathbf{w}\right)||\widetilde{p}_{ \mathbf{w}}\right), \tag{23}\] \[s.t.q\left(\mathbf{w}\right)=\prod_{n=1}^{N}q\left(w_{n}\right),\]
where \(\ln\widetilde{p}_{\mathbf{w}}=\left\langle\ln\hat{p}\left(\mathbf{w}, \boldsymbol{\rho},\mathbf{s},\mathcal{D}\right)\right\rangle_{q\left(\boldsymbol {\rho}\right)q\left(\mathbf{s}\right)}\) and \(\left\langle f\left(x\right)\right\rangle_{q\left(x\right)}=\int f\left(x \right)q\left(x\right)dx\). The detailed derivations of the three subproblems are provided in the Appendix A. In the following, we elaborate on solving the three subproblems respectively.
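For concreteness, the resulting BCD iteration can be sketched in Python as follows; this is a minimal sketch in which `update_q_rho`, `update_q_s` and `update_q_w` are hypothetical placeholders for the solutions of subproblems 1-3 derived below.

```python
# Minimal sketch of the BCD iteration for the Sparse VBI problem (20).
# `update_q_rho`, `update_q_s` and `update_q_w` are hypothetical helpers
# implementing the solutions of subproblems 1-3 derived in this section.
def sparse_vbi(data, q_rho, q_s, q_w, num_iters=15):
    for _ in range(num_iters):
        q_rho = update_q_rho(q_s, q_w)        # subproblem 1, eq. (21)
        q_s = update_q_s(q_rho, q_w)          # subproblem 2, eq. (22)
        q_w = update_q_w(q_rho, q_s, data)    # subproblem 3, eq. (23)
    return q_rho, q_s, q_w
```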
\begin{table}
\begin{tabular}{|c|c|c|} \hline Factor node & Distribution & Function \\ \hline \(g_{d}\left(x_{d},y_{d},\mathbf{w}\right)\) & \(p\left(y_{d}|x_{d},\mathbf{w}\right)\) & \(\mathcal{N}\left(NN\left(x_{d},\mathbf{w}\right),\sigma_{d}^{2}\right)\) \\ \hline \(f_{n}\left(w_{n},\rho_{n}\right)\) & \(p\left(w_{n}|\rho_{n}\right)\) & \(\mathcal{N}\left(w_{n}|0,\frac{1}{\rho_{n}}\right)\) \\ \hline \(\eta_{n}\left(\rho_{n},s_{n}\right)\) & \(p\left(\rho_{n}|s_{n}\right)\) & \(\left(\Gamma\left(\rho_{n};a_{n},b_{n}\right)\right)^{s_{n}}\left(\Gamma\left(\rho_{n};\overline{a}_{n},\overline{b}_{n}\right)\right)^{1-s_{n}}\) \\ \hline \(h\) & \(p\left(\mathbf{s}\right)\) & MRF prior: \(p\left(s_{row,m+1}|s_{row,m}\right)\), \(p\left(s_{col,k+1}|s_{col,k}\right)\) \\ \hline \end{tabular}
\end{table} TABLE I: Detailed expression of each factor node.
#### IV-A1 Update of \(q\left(\boldsymbol{\rho}\right)\)
In subproblem 1, obviously, the optimal solution \(q^{\star}\left(\mathbf{\rho}\right)\) is achieved when \(q^{\star}\left(\mathbf{\rho}\right)=\widetilde{p}_{\mathbf{\rho}}\). In this case, \(q^{\star}\left(\mathbf{\rho}\right)\) can be derived as
\[q^{\star}\left(\boldsymbol{\rho}\right)=\prod_{n=1}^{N}\Gamma\left(\rho_{n};\widetilde{a}_{n},\widetilde{b}_{n}\right), \tag{24}\]
where \(\widetilde{a}_{n}\) and \(\widetilde{b}_{n}\) are given by:
\[\widetilde{a}_{n} =\left\langle s_{n}\right\rangle a_{n}+\left\langle 1-s_{n} \right\rangle\overline{a}_{n}+1 \tag{25}\] \[=\widetilde{\pi}_{n}a_{n}+\left(1-\widetilde{\pi}_{n}\right) \overline{a}_{n}+1,\] \[\widetilde{b}_{n} =\left\langle\left|w_{n}\right|^{2}\right\rangle+\left\langle s_ {n}\right\rangle b_{n}+\left\langle 1-s_{n}\right\rangle\overline{b}_{n}\] (26) \[=\left|\mu_{n}\right|^{2}+\sigma_{n}^{2}+\widetilde{\pi}_{n}b_{n }+\left(1-\widetilde{\pi}_{n}\right)\overline{b}_{n},\]
where \(\mu_{n}\) and \(\sigma_{n}^{2}\) are the posterior mean and variance of weight \(w_{n}\) respectively, and \(\widetilde{\pi}_{n}\) is the posterior expectation of \(s_{n}\).
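For concreteness, (25)-(26) admit a direct vectorized implementation; the following is a minimal PyTorch sketch, assuming all arguments are tensors of shape \((N,)\) holding the per-weight quantities defined above.

```python
import torch

def update_q_rho(pi_tilde, mu, sigma2, a, b, a_bar, b_bar):
    # Closed-form update of q(rho) in (24)-(26); all arguments are assumed
    # to be tensors of shape (N,) holding per-weight quantities.
    a_tilde = pi_tilde * a + (1.0 - pi_tilde) * a_bar + 1.0
    b_tilde = mu ** 2 + sigma2 + pi_tilde * b + (1.0 - pi_tilde) * b_bar
    return a_tilde, b_tilde
```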
#### IV-A2 Update of \(q\left(\mathbf{s}\right)\)
In subproblem 2, we can achieve the optimal solution \(q^{\star}\left(\mathbf{s}\right)\) by letting \(q^{\star}\left(\mathbf{s}\right)=\widetilde{p}_{\mathbf{s}}\). In this case, \(q^{\star}\left(\mathbf{s}\right)\) can be derived as
\[q^{\star}\left(\mathbf{s}\right)=\prod_{n=1}^{N}\left(\widetilde{\pi}_{n} \right)^{s_{n}}\left(1-\widetilde{\pi}_{n}\right)^{1-s_{n}}, \tag{27}\]
where \(\widetilde{\pi}_{n}=\frac{C_{1}}{C_{1}+C_{2}}\). \(C_{1}\) and \(C_{2}\) are given by:
\[C_{1}=\frac{\pi_{n}b_{n}^{a_{n}}}{\Gamma\left(a_{n}\right)}e^{\left(a_{n}-1\right)\left\langle\ln\rho_{n}\right\rangle-b_{n}\left\langle\rho_{n}\right\rangle}, \tag{28}\] \[C_{2}=\frac{\left(1-\pi_{n}\right)\overline{b}_{n}^{\overline{a}_{n}}}{\Gamma\left(\overline{a}_{n}\right)}e^{\left(\overline{a}_{n}-1\right)\left\langle\ln\rho_{n}\right\rangle-\overline{b}_{n}\left\langle\rho_{n}\right\rangle}, \tag{29}\]
where \(\left\langle\ln\rho_{n}\right\rangle=\psi\left(\widetilde{a}_{n}\right)-\ln \left(\widetilde{b}_{n}\right)\), \(\psi\left(x\right)=\frac{d}{dx}\ln\left(\Gamma\left(x\right)\right)\) is the digamma function. The detailed derivation of \(q^{\star}\left(\mathbf{\rho}\right)\) and \(q^{\star}\left(\mathbf{s}\right)\) is provided in the Appendix B.
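A minimal PyTorch sketch of this update is given below; it works in the log domain for numerical stability, assuming all arguments are tensors of shape \((N,)\). Since \(\widetilde{\pi}_{n}=C_{1}/(C_{1}+C_{2})\), evaluating \(\mathrm{sigmoid}(\ln C_{1}-\ln C_{2})\) avoids overflow in the exponentials.

```python
import torch

def update_q_s(a_tilde, b_tilde, pi, a, b, a_bar, b_bar):
    # Update of q(s) in (27)-(29), computed in the log domain:
    # pi_tilde = C1 / (C1 + C2) = sigmoid(log C1 - log C2).
    ln_rho = torch.digamma(a_tilde) - torch.log(b_tilde)   # <ln rho_n>
    rho = a_tilde / b_tilde                                # <rho_n>
    log_c1 = (torch.log(pi) + a * torch.log(b) - torch.lgamma(a)
              + (a - 1.0) * ln_rho - b * rho)
    log_c2 = (torch.log1p(-pi) + a_bar * torch.log(b_bar)
              - torch.lgamma(a_bar) + (a_bar - 1.0) * ln_rho - b_bar * rho)
    return torch.sigmoid(log_c1 - log_c2)
```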
#### IV-A3 Update of \(q\left(\mathbf{w}\right)\)
Expanding the objective function in subproblem 3, we have
\[D_{KL}\left(q\left(\mathbf{w}\right)||\widetilde{p}_{\mathbf{w}}\right) \tag{30}\] \[=\mathbb{E}_{q\left(\mathbf{w}\right)}\ln q\left(\mathbf{w}\right) -\mathbb{E}_{q\left(\mathbf{w}\right)}\left[\left\langle\ln p\left(\mathbf{w },\mathbf{\rho},\mathbf{s},\mathcal{D}\right)\right\rangle_{q\left(\mathbf{\rho} \right)q\left(\mathbf{s}\right)}\right]\] \[=\mathbb{E}_{q\left(\mathbf{w}\right)}\left(\ln q\left(\mathbf{w }\right)-\left\langle\ln p\left(\mathbf{w}|\mathbf{\rho}\right)\right\rangle_{q \left(\mathbf{\rho}\right)}\right)-\mathbb{E}_{q\left(\mathbf{w}\right)}\ln p \left(\mathcal{D}|\mathbf{w}\right)+const.\]
Similar to traditional variational Bayesian model compression [16, 15], the objective function (30) contains two parts: the "prior part" \(\mathbb{E}_{q\left(\mathbf{w}\right)}\left(\ln q\left(\mathbf{w}\right)-\left\langle\ln p\left(\mathbf{w}|\boldsymbol{\rho}\right)\right\rangle_{q\left(\boldsymbol{\rho}\right)}\right)\) and the "data part" \(\mathbb{E}_{q\left(\mathbf{w}\right)}\ln p\left(\mathcal{D}|\mathbf{w}\right)\). The prior part forces the weights to follow the prior distribution, while the data part forces the weights to express the dataset. The only difference from traditional variational Bayesian model compression [16, 15] is that the prior term \(\left\langle\ln p\left(\mathbf{w}|\boldsymbol{\rho}\right)\right\rangle_{q\left(\boldsymbol{\rho}\right)}\) is parameterized by \(\boldsymbol{\rho}\), which carries the structure captured by the three-layer hierarchical prior. Our aim is to find the optimal \(q\left(\mathbf{w}\right)\) that minimizes \(D_{KL}\left(q\left(\mathbf{w}\right)||\widetilde{p}_{\mathbf{w}}\right)\), subject to the factorized constraint \(q\left(\mathbf{w}\right)=\prod_{n=1}^{N}q\left(w_{n}\right)\). The objective function contains the likelihood term \(\ln p\left(\mathcal{D}|\mathbf{w}\right)\), which can be very complicated for complex models. Thus, subproblem 3 is non-convex and it is difficult to obtain the optimal solution. In the following, we aim at finding a stationary solution \(q^{\star}\left(\mathbf{w}\right)\) of subproblem 3. There are several challenges in solving subproblem 3, summarized as follows.
**Challenge 1: Closed form for the prior part \(\mathbb{E}_{q\left(\mathbf{w}\right)}\left(\ln q\left(\mathbf{w}\right)-\left\langle\ln p\left(\mathbf{w}|\boldsymbol{\rho}\right)\right\rangle_{q\left(\boldsymbol{\rho}\right)}\right)\).** In traditional variational Bayesian model compression, this expectation is usually calculated by approximation [15], which introduces approximation error. To calculate the expectation accurately, a closed form for this expectation is required.
**Challenge 2: Low-complexity deterministic approximation for the data part \(\mathbb{E}_{q\left(\mathbf{w}\right)}\ln p\left(\mathcal{D}|\mathbf{w}\right)\).** In traditional variational Bayesian model compression [16, 15], this intractable expectation is usually approximated by Monte Carlo sampling. However, Monte Carlo approximation has been pointed out to have high complexity and low robustness. In addition, the variance of the Monte Carlo gradient estimates is difficult to control [19]. Thus, a low-complexity deterministic approximation for the likelihood term is required.
**Closed form for \(\mathbb{E}_{q\left(\mathbf{w}\right)}\left(\ln q\left(\mathbf{w}\right)- \left\langle\ln p\left(\mathbf{w}|\mathbf{\rho}\right)\right\rangle_{q\left(\mathbf{ \rho}\right)}\right)\).** To overcome Challenge 1, we propose to use a Gaussian distribution as the approximate posterior. That is, \(q\left(w_{n}\right)=\mathcal{N}\left(\mu_{n},\sigma_{n}^{2}\right)\), \(q\left(\mathbf{w}\right)=\prod_{n=1}^{N}q\left(w_{n}\right)\), \(\mu_{n}\) and \(\sigma_{n}^{2}\) are the variational parameters that we want to optimize. Then, since the prior \(p\left(\mathbf{w}|\mathbf{\rho}\right)\) is also chosen as a Gaussian distribution, \(\mathbb{E}_{q\left(\mathbf{w}\right)}\left(\ln q\left(\mathbf{w}\right)-\left\langle \ln p\left(\mathbf{w}|\mathbf{\rho}\right)\right\rangle_{q\left(\mathbf{\rho}\right)}\right)\) can be written as the KL-divergence between two Gaussian distributions. Thus, the expectation has a closed form and can be calculated up to a constant. Specifically, by expanding \(p\left(\mathbf{w}|\mathbf{\rho}\right)\), we have
\[\left\langle\ln p\left(\mathbf{w}|\boldsymbol{\rho}\right)\right\rangle =\sum_{n=1}^{N}\left\langle\ln\sqrt{\frac{\rho_{n}}{2\pi}}e^{-\frac{w_{n}^{2}}{2}\rho_{n}}\right\rangle \tag{31}\] \[=\sum_{n=1}^{N}\left\langle\frac{1}{2}\ln\rho_{n}-\frac{w_{n}^{2}}{2}\rho_{n}+const.\right\rangle\] \[=\sum_{n=1}^{N}\left(\frac{1}{2}\left\langle\ln\rho_{n}\right\rangle-\frac{w_{n}^{2}}{2}\left\langle\rho_{n}\right\rangle+const.\right)\] \[=\sum_{n=1}^{N}\left(\ln p\left(w_{n}|\left\langle\rho_{n}\right\rangle\right)+const.\right)\] \[=\ln p\left(\mathbf{w}|\left\langle\boldsymbol{\rho}\right\rangle\right)+const.,\]
where we use \(\left\langle\cdot\right\rangle\) to denote \(\left\langle\cdot\right\rangle_{q\left(\boldsymbol{\rho}\right)}\) or \(\left\langle\cdot\right\rangle_{q\left(\mathbf{s}\right)}\) for simplicity. Thus, we have
\[\begin{split}&\mathbb{E}_{q\left(\mathbf{w}\right)}\left(\ln q\left(\mathbf{w}\right)-\left\langle\ln p\left(\mathbf{w}|\boldsymbol{\rho}\right)\right\rangle_{q\left(\boldsymbol{\rho}\right)}\right)\\ &=\mathbb{E}_{q\left(\mathbf{w}\right)}\left(\ln q\left(\mathbf{w}\right)-\ln p\left(\mathbf{w}|\mathbb{E}\left[\boldsymbol{\rho}\right]\right)\right)+const.\\ &=D_{KL}\left(q\left(\mathbf{w}\right)||p\left(\mathbf{w}|\mathbb{E}\left[\boldsymbol{\rho}\right]\right)\right)+const.\end{split} \tag{32}\]

Since both \(q\left(\mathbf{w}\right)\) and \(p\left(\mathbf{w}|\mathbb{E}\left[\boldsymbol{\rho}\right]\right)\) are factorized Gaussian distributions, this KL divergence has the closed form

\[D_{KL}\left(q\left(\mathbf{w}\right)||p\left(\mathbf{w}|\mathbb{E}\left[\boldsymbol{\rho}\right]\right)\right)=\sum_{n=1}^{N}\left(\ln\frac{\widetilde{\sigma}_{n}}{\sigma_{n}}+\frac{\sigma_{n}^{2}+\mu_{n}^{2}}{2\widetilde{\sigma}_{n}^{2}}-\frac{1}{2}\right), \tag{33}\]
where \(\widetilde{\sigma}_{n}^{2}=\frac{1}{\mathbb{E}\left[\rho_{n}\right]}\) is the variance of the prior \(p\left(\mathbf{w}|\mathbb{E}\left[\boldsymbol{\rho}\right]\right)\). Since our aim is to minimize (30), the constant term in (32) can be omitted.
**Low-complexity deterministic approximation for \(\mathbb{E}_{q\left(\mathbf{w}\right)}\ln p\left(\mathcal{D}|\mathbf{w}\right)\).** To overcome Challenge 2, we propose a low-complexity deterministic approximation for \(\mathbb{E}_{q\left(\mathbf{w}\right)}\ln p\left(\mathcal{D}|\mathbf{w}\right)\) based on Taylor expansion.
For simplicity, let \(g\left(\mathbf{w}\right)\) denote the complicated function \(\ln p\left(\mathcal{D}|\mathbf{w}\right)\). Specifically, we want to construct a deterministic approximation for \(\mathbb{E}_{\mathbf{w}\sim q\left(\mathbf{w}\right)}g\left(\mathbf{w}\right)\) in terms of the variational parameters \(\mu_{n}\) and \(\sigma_{n}^{2}\), \(n=1,...,N\). Following the reparameterization trick in [15] and [29], we represent \(w_{n}\) by \(\mu_{n}+\xi_{n}\sigma_{n}\), \(\xi_{n}\sim\mathcal{N}\left(0,1\right)\). In this way, \(w_{n}\) is transformed into a deterministic differentiable function of a non-parametric noise \(\xi_{n}\), and \(\mathbb{E}_{\mathbf{w}\sim q\left(\mathbf{w}\right)}g\left(\mathbf{w}\right)\) is transformed into \(\mathbb{E}_{\boldsymbol{\xi}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)}g\left(\boldsymbol{\mu}+\boldsymbol{\xi}\boldsymbol{\sigma}\right)\). Note that in traditional variational Bayesian model compression, this expectation is approximated by sampling \(\boldsymbol{\xi}\). However, to construct a deterministic approximation, we take the first-order Taylor approximation of \(\mathbb{E}_{\boldsymbol{\xi}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)}g\left(\boldsymbol{\mu}+\boldsymbol{\xi}\boldsymbol{\sigma}\right)\) at the point \(\boldsymbol{\xi}=\mathbb{E}\left[\boldsymbol{\xi}\right]=\mathbf{0}\), and we have
\[\mathbb{E}_{\boldsymbol{\xi}}g\left(\boldsymbol{\mu}+\boldsymbol{\xi}\boldsymbol{\sigma}\right)\approx g\left(\boldsymbol{\mu}+\mathbb{E}\left[\boldsymbol{\xi}\right]\cdot\boldsymbol{\sigma}\right)=g\left(\boldsymbol{\mu}\right). \tag{34}\]
That is, we use the first-order Taylor approximation to replace \(\mathbb{E}_{\boldsymbol{\xi}}g\left(\boldsymbol{\mu}+\boldsymbol{\xi}\boldsymbol{\sigma}\right)\). Simulation results show that the proposed approximation can achieve good performance. Additionally, since the proposed approximation is deterministic, the gradient variance is eliminated.
Based on the aforementioned discussion, subproblem 3 can be solved by solving an approximated problem as follows.
**Approx. Subproblem 3:**
\[\min_{\mathbf{\mu},\mathbf{\sigma}^{2}}\sum_{n=1}^{N}\left(\ln\frac{ \widetilde{\sigma}_{n}}{\sigma_{n}}+\frac{\sigma_{n}^{2}+\mu_{n}^{2}}{2 \widetilde{\sigma}_{n}^{2}}-\frac{1}{2}\right)-\ln p\left(\mathcal{D}|\mathbf{ \mu}\right). \tag{35}\]
This problem is equivalent to training a Bayesian neural network with (35) as the loss function.
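A minimal PyTorch sketch of this training loss is given below; `variational_params()` (yielding per-layer \((\boldsymbol{\mu},\ln\boldsymbol{\sigma}^{2})\) pairs) and `forward_at_mean()` (a forward pass with \(\mathbf{w}=\boldsymbol{\mu}\), following the deterministic approximation (34)) are hypothetical hooks, and a classification likelihood is assumed.

```python
import torch
import torch.nn.functional as F

def subproblem3_loss(model, x, y, prior_vars):
    # Loss (35): Gaussian KL of each weight to its prior N(0, sigma_tilde^2)
    # plus the negative log-likelihood evaluated at the mean weights mu.
    kl = 0.0
    for (mu, log_sigma2), var_tilde in zip(model.variational_params(), prior_vars):
        sigma2 = log_sigma2.exp()
        kl = kl + (0.5 * torch.log(var_tilde / sigma2)
                   + (sigma2 + mu ** 2) / (2.0 * var_tilde) - 0.5).sum()
    nll = F.cross_entropy(model.forward_at_mean(x), y, reduction='sum')
    return kl + nll
```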
#### IV-A4 Convergence of Sparse VBI
The sparse VBI in Module A is actually a BCD algorithm to solve the problem in (20). By the physical meaning of ELBO and KL divergence, the objective function is continuous on a compact level set. In addition, the objective function has a unique minimum for two coordinate blocks (\(q\left(\mathbf{\rho}\right)\) and \(q\left(\mathbf{s}\right)\)) [17, 28]. Since we can achieve a stationary point for the third coordinate block \(q\left(\mathbf{w}\right)\), the objective function is regular at the cluster points generated by the BCD algorithm. Based on the above discussion, our proposed BCD algorithm satisfies the BCD algorithm convergence requirements in [30]. Thus, we have the following convergence lemma.
**Lemma 1**.: _(Convergence of Sparse VBI): Every cluster point \(q^{*}\left(\mathbf{w},\mathbf{\rho},\mathbf{s}\right)=q^{*}\left(\mathbf{\rho}\right) q^{*}\left(\mathbf{s}\right)q^{*}\left(\mathbf{w}\right)\) generated by the sparse VBI algorithm is a stationary solution of problem (20)._
### _Message Passing (Module B)_
In Module B, SPMP is performed on the support factor graph \(\mathcal{G}_{s}\) and the normalized message \(v_{s_{n}\to h_{B}}\) is fed back to Module A as the prior probability \(\hat{p}\left(s_{n}\right)\).
We focus on one fully connected layer with \(K\) input neurons and \(M\) output neurons for ease of elaboration. The support prior \(p\left(\mathbf{s}\right)\) for this \(K\times M\) layer is modeled as an MRF. The factor graph \(\mathcal{G}_{s}\) is illustrated in Fig. 7. Here we use \(s_{i,j}\) to denote the support variable in the \(i\)-th row and \(j\)-th column. The unary factor node \(h_{B,i,j}\) carries the probability given by (18). The pair-wise factor nodes carry the row and column transition probabilities \(p\left(s_{row,m+1}|s_{row,m}\right)\) and \(p\left(s_{col,k+1}|s_{col,k}\right)\), respectively. The parameters \(p_{01}^{row},p_{10}^{row}\) and \(p_{01}^{col},p_{10}^{col}\) of the MRF model are given in Section II-C. Let \(v_{s\to f}\left(s\right)\) denote the message from the variable node \(s\) to the factor node \(f\), and \(v_{f\to s}\left(s\right)\) denote the message from the factor node \(f\) to the variable node \(s\). According to the sum-product law, the
Fig. 7: Illustration of the support factor graph in Module B with \(K\) input neurons and \(M\) output neurons.
messages are updated as follows:
\[v_{s\to f}\left(s\right)=\prod_{h\in n\left(s\right)\backslash\left\{f\right\}}v_{ h\to s}\left(s\right), \tag{36}\]
\[v_{f\to s}\left(s\right)=\begin{cases}f\left(s\right)&f\ is\ a\ unary\ factor\ node,\\ \sum_{t}f\left(s,t\right)v_{t\to f}\left(t\right)&f\ is\ a\ pairwise\ factor\ node,\end{cases} \tag{37}\]
where \(n\left(s\right)\backslash\left\{f\right\}\) denotes the neighbour factor nodes of \(s\) except node \(f\), and \(t\) denotes the other neighbour variable node of \(f\) except node \(s\). After the SPMP algorithm is performed, the final message \(v_{s\to h_{B}}\) from each variable node \(s_{i,j}\) to the connected unary factor node \(h_{B,i,j}\) is sent back to Module A.
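A minimal sketch of the two message updates (36)-(37) for the binary support variables is given below, assuming each message is represented as a length-2 NumPy array over \(s\in\{0,1\}\) and normalized for numerical stability.

```python
import numpy as np

def var_to_factor(incoming, exclude):
    # Eq. (36): product of all incoming factor-to-variable messages except
    # the one coming from the target factor; each message is a length-2
    # array over s in {0, 1}.
    msg = np.ones(2)
    for name, m in incoming.items():
        if name != exclude:
            msg = msg * m
    return msg / msg.sum()

def pairwise_factor_to_var(trans, msg_t):
    # Eq. (37), pairwise case: sum_t f(s, t) * v_{t->f}(t), with `trans`
    # the 2x2 table of transition probabilities indexed as trans[s, t].
    msg = trans @ msg_t
    return msg / msg.sum()
```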
The overall algorithm is summarized in Algorithm 1.
**Remark 2**: _(Comparison with traditional variational inference) Traditional variational inference cannot handle the proposed three-layer sparse prior \(p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\), while our proposed Turbo-VBI is specially designed to handle it. In existing Bayesian compression works [14]-[16], the KL divergence between the true posterior \(p\left(\mathbf{w}|\mathcal{D}\right)\) and the variational posterior \(q\left(\mathbf{w}\right)\) is easy to calculate, so that it can be directly optimized. Thus, simple variational inference can be directly applied. The fundamental reason is that the priors \(p\left(\mathbf{w}\right)\) in these works are relatively simple, usually with only one layer [14], [15] or two layers [16]. Thus, the KL divergence between the prior \(p\left(\mathbf{w}\right)\) and the variational posterior \(q\left(\mathbf{w}\right)\) is easy to calculate, which is a prerequisite for calculating the KL divergence between the true posterior \(p\left(\mathbf{w}|\mathcal{D}\right)\) and the variational posterior \(q\left(\mathbf{w}\right)\). However, these simple priors cannot promote the desired regular structure in the weight matrix and lack the flexibility to configure the pruned structure. Thus, we introduce the extra support layer \(\mathbf{s}\), which leads to the three-layer hierarchical prior \(p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\). With this three-layer prior, traditional variational inference cannot be applied because the KL divergence between the prior \(p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\) and the variational posterior \(q\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s}\right)\) is difficult to calculate. In order to infer the posterior, we propose to separate the factor graph into two parts and iteratively perform Bayesian inference, which leads to our proposed Turbo-VBI algorithm._
## V Performance Analysis
In this section, we evaluate the performance of our proposed Turbo-VBI based model compression method on some standard neural network models and datasets. We also compare with several baselines, including classic and state-of-the-art model compression methods. The baselines considered are listed as follows:
1. _Variational dropout (VD):_ This is a very classic method in the area of Bayesian model compression, which is proposed in [15]. A single-layer improper log-uniform prior is assigned to each weight to induce sparsity during training.
2. _Group variational dropout (GVD):_ This is proposed in [16], and can be regarded as an extension of VD to group pruning. In this algorithm, each weight follows a zero-mean Gaussian prior. The prior scales of the weights in the same group jointly follow an improper log-uniform distribution or a proper half-Cauchy distribution to realize group pruning. In this paper, we choose the improper log-uniform prior for comparison. Note that some more recent Bayesian compression methods mainly focus on a quantization perspective [14], [31], which is a different scope from our work. Although our work can be further extended to the quantization area, we only focus on structured pruning here. To the best of our knowledge, GVD is still one of the state-of-the-art Bayesian pruning methods that do not consider quantization.
3. _Polarization regularizer based pruning (polarization):_ This is proposed in [9]. The algorithm can push some groups of weights to zero while others to larger values by assigning a polarization regularizer to the batch norm scaling factor of each weight group \[\min_{\mathbf{w}} \frac{1}{D}\sum_{d=1}^{D}L\left(NN\left(x_{d},\mathbf{w}\right),y _{d}\right)+R\left(\mathbf{w}\right)\] \[+\lambda\left(t\left\|\boldsymbol{\gamma}\right\|_{1}-\left\| \boldsymbol{\gamma}-\bar{\gamma}\mathbf{1}_{n}\right\|_{1}\right),\] where \(\lambda\left(t\left\|\boldsymbol{\gamma}\right\|_{1}-\left\|\boldsymbol{ \gamma}-\bar{\gamma}\mathbf{1}_{n}\right\|_{1}\right)\) is the proposed polarization regularizer and \(\boldsymbol{\gamma}\) is the batch norm scaling factor of each weight.
4. _Single-shot network pruning (SNIP):_ This is proposed in [1]. The algorithm measures the weight importance by introducing an auxiliary variable to each weight. The importance of each weight is achieved before training by calculating the gradient of its "importance variable". After this, the weights with less importance are pruned before the normal training.
We perform experiments on LeNet-5, AlexNet, and VGG-11 architectures, and use the following benchmark datasets for the performance comparison.
1. Fashion-MNIST [32]: It consists of a training set of 60000 examples and a test set of 10000 examples. Each example is a \(28\times 28\) grayscale image of real-life fashion apparel, associated with a label from 10 classes.
2. CIFAR-10 [33]: It is the most widely used dataset for comparing neural network pruning methods. It consists of a training set of 50000 examples and a test set of 10000 examples. Each example is a three-channel \(32\times 32\) RGB image of a real-life object, associated with a label from 10 classes.
3. CIFAR-100 [33]: It is considered a harder version of the CIFAR-10 dataset. Instead of the 10 classes in CIFAR-10, it consists of 100 classes, each containing 600 images. There are 500 training images and 100 testing images per class.
We will compare our proposed Turbo-VBI based model compression method with the baselines along several dimensions: 1) overall performance; 2) visualization of the weight matrix structure; 3) CPU and GPU acceleration; and 4) robustness of the pruned neural network.
We run LeNet-5 on Fashion-MNIST, and AlexNet on CIFAR-10 and VGG on CIFAR-100. Throughout the experiments, we set the learning rate as 0.01, and \(a=b=\overline{a}=1,\overline{b}=1\times 10^{-3}\). For LeNet-5+Fashion-MNIST, we set the batch size
as 64, and for AlexNet+CIFAR-10 and VGG+CIFAR-100, we set the batch size as 128. All the methods except SNIP go through a fine-tuning after pruning. For LeNet, the fine-tuning lasts 15 epochs, while for AlexNet and VGG, it lasts 30 epochs. Note that the models and parameter settings may not achieve state-of-the-art accuracy, but they are sufficient for comparison.
### _Overall Performance Comparison_
We first give an overall performance comparison of different methods in terms of accuracy, sparsity rate, pruned structure and FLOPs reduction. The results are summarized in Table II. The structure is measured by output channels for convolutional layers and neurons for fully connected layers. It can be observed that our proposed method can achieve an extremely low sparsity rate, which is at the same level as (or even lower than) the unstructured pruning methods (VD and SNIP). Moreover, it can also achieve an even more compact pruned structure than the structured pruning methods (polarization and GVD). At the same time, our proposed method can maintain a competitive accuracy. We summarize that the superior performance of our proposed method comes from two aspects: 1) superior individual weight pruning ability, which leads to a low sparsity rate, and 2) superior neuron pruning ability, which leads to low FLOPs. Compared to random pruning methods like VD and SNIP, which generally achieve lower sparsity rates than group pruning methods, our proposed method can achieve an even lower sparsity rate. This shows that our proposed sparse prior is highly efficient in pruning individual weights, because the regularization ability of \(p\left(\mathbf{w}|\boldsymbol{\rho}\right)\) can be configured by setting a small \(\overline{b}\). Compared to group pruning methods like GVD and polarization, our proposed method can achieve a more compact structure, showing that it is also highly efficient in pruning neurons. The reason is twofold: First, each row in the prior \(p\left(\mathbf{s}\right)\) can be viewed as a Markov chain. When some weights of a neuron are pruned, the other weights also have a very high probability of being pruned. Second, the surviving weights in a weight matrix tend to gather in clusters, which means the surviving weights of different neurons in one layer tend to activate the same neurons in the next layer. This greatly improves the regularity from the input side, while the existing group pruning methods like GVD and polarization can only regularize the output side of a neuron.

### _Visualization of Pruned Structure_
In this subsection, we visualize the pruned weight matrix of our proposed method and compare it to the baselines to further elaborate on the benefits brought by the proposed regular structure. Fig. 8 illustrates the 6 kernels in the first layer of LeNet. We choose GVD and polarization for comparison here because these two are also structured pruning methods. It can be observed that GVD and polarization can both prune entire kernels, but the weights in the remaining kernels are not pruned and have no structure at all. However, in our proposed method, the weights in the remaining kernels are clearly pruned in a structured manner. In particular, we find that the three remaining \(5\times 5\) kernels all fit in smaller kernels of size \(3\times 3\). This clearly illustrates the clustered structure promoted by the MRF prior. With the clustered pruning, we can directly replace the original large kernels with smaller kernels, and enjoy the storage and computation benefits of the smaller kernel size.
### _Acceleration Performance_
In this subsection, we compare the acceleration performance of our proposed method to the baselines. Note that the primary goal of structured pruning is to reduce the computational complexity and accelerate the inference of the neural networks. In the weight matrix of our proposed method, we record the clusters larger than \(3\times 3\) as an entirety. That is, we record the location and size of the clusters instead of recording the exact location of each element. In Fig. 9, we plot the average time taken by one forward pass of a batch on CPU and GPU respectively. It can be observed that our proposed method
Fig. 8: Kernel structures of CONV_1 in LeNet.
Fig. 9: Avg. time taken by one forward pass of a batch on CPU and GPU. For LeNet and AlexNet, the batch size is 10000 examples and each result is averaged over 1000 experiments. For VGG, the batch size is 1000 examples and each result is averaged over 100 experiments.
can achieve the best acceleration performance on different models. For LeNet, our proposed method achieves a \(1.50\times\) gain on CPU and a \(2.85\times\) gain on GPU. For AlexNet, the gains are \(1.63\times\) and \(1.64\times\). For VGG, the gains are \(2.19\times\) and \(1.88\times\). Note that the unstructured pruning methods show little acceleration gain because their sparse structure is totally random. We summarize the reasons for our acceleration gain in two aspects: 1) More efficient neuron pruning. As shown in subsection V-A, our proposed method results in a more compact model and thus a larger FLOPs reduction. 2) More efficient decoding of the weight matrix. Existing methods must store the exact location of each unpruned weight, whether in CSR, CSC or COO format, because the sparse structure is irregular. In our proposed method, however, we only need the location and size of each cluster to decode all the unpruned weights in the cluster. Thus, the decoding is much faster. Note that the performance of our proposed method can be further improved if the structure is exploited during the matrix-matrix multiplication. For example, the zero blocks can be skipped in matrix tiling, as discussed in subsection V-A. However, this involves modifications to lower-level CUDA code, so we do not study it here.
### _Insight Into the Proposed Algorithm_
In this subsection, we provide some insight into the proposed method. First, Fig. 10 illustrates the convergence behavior of our proposed method on different tasks. It can be observed that our proposed method generally converges within 15 iterations in all the tasks. Second, we illustrate how the structure is gradually captured by the prior information \(\mathbf{v}_{h\to s_{n}}\) from Module B and how the posterior in Module A is regularized towards this structure. As the clustered structure in kernels has been illustrated in Fig. 8, here in Fig. 11 we visualize the messages \(\mathbf{v}_{h\to s_{n}}\) and \(\mathbf{v}_{\eta_{n}\to s_{n}}\) of the FC_3 layer in AlexNet. It can be observed that in iteration 1, the message \(\mathbf{v}_{\eta_{n}\to s_{n}}\) has no structure because the prior \(\hat{p}\left(\mathbf{s}\right)\) in Module A is randomly initialized. During the iterations of the algorithm, \(\mathbf{v}_{h\to s_{n}}\) from Module B gradually captures the structure from the MRF prior and acts as a regularization in Module A, so that structure is gradually promoted in \(\mathbf{v}_{\eta_{n}\to s_{n}}\). Finally, the support matrix
TABLE II: Overall performance comparisons to the baselines on different network models and datasets. Top-1 accuracy is reported. The best results are bolded.
Fig. 10: Convergence of the proposed algorithm. The convergence is measured by the change in the message \(\mathbf{v}_{h\to s_{n}}\).
has a clustered structure and the weight matrix also has a similar structure.
### _Robustness of Pruned Model_
In this subsection, we evaluate the robustness of the pruned model. Model robustness is often neglected in pruning methods, even though it is highly important [34, 35, 36]. We compare the robustness of our proposed method to the baselines on LeNet and the Fashion-MNIST dataset. We generate 10000 adversarial examples using the fast gradient sign method (FGSM) and pass them through the pruned models. In Fig. 12, we plot the model accuracy under different \(\epsilon\). The accuracy both with and without an adversarial fine-tuning is reported. It can be observed that all methods show a rapid drop without fine-tuning. Specifically, our proposed method shows an accuracy drop of 72.1%, from 89.01% when \(\epsilon=0\) to 24.86% when \(\epsilon=0.05\), while SNIP reports the largest drop of 73.4%. However, we also observe that the structured pruning methods (proposed and polarization) show stronger robustness than the unstructured ones (VD and SNIP). We think the reasons why our proposed method and polarization perform better may be different. For polarization, the unpruned weights are significantly more numerous than in the other baselines, which means the remaining important filters have significantly more parameters. This gives the model a strong ability to resist the noise. For our proposed method, the possible reason is that it enforces the weights to gather in groups. Thus, the model can "concentrate" on some specific features even when noise is added to them. It can be observed that after an adversarial fine-tuning, all methods show an improvement. VD shows the largest improvement of 77.31% at \(\epsilon=0.1\), while our proposed method only shows a moderate improvement of 56.26%. The possible reason is that our proposed method is "too" sparse and the remaining weights cannot learn the new features. Note that SNIP shows the smallest improvement. The possible reason is that in SNIP, the weights are pruned before training and the auxiliary importance variables may not measure the weight importance accurately, which results in some important weights being pruned. Thus, the expressive ability of the model may be harmed.
## VI Conclusions and Future Work
### Conclusions
We considered the problem of model compression from a Bayesian perspective. We first proposed a three-layer
Fig. 11: Visualization of the message \(\mathbf{v}_{h\to s_{n}}\) and \(\mathbf{v}_{\eta_{n}\to s_{n}}\) for the Dense_3 layer of AlexNet. \(\mathbf{v}_{h\to s_{n}}\) carries the prior information of support from Module B and \(\mathbf{v}_{\eta_{n}\to s_{n}}\) carries the posterior information of support from Module A. It can be clearly observed that the structured prior captured from Module B gradually regularizes the posterior. Eventually both \(\mathbf{v}_{h\to s_{n}}\) and \(\mathbf{v}_{\eta_{n}\to s_{n}}\) converge to a stable state.
Fig. 12: Accuracy vs \(\epsilon\) for LeNet model.
hierarchical sparse prior in which an extra support layer is used to capture the desired structure for the weights. Thus, the proposed sparse prior can promote both per-neuron weight-level structured sparsity and neuron-level structured sparsity. Then, we derived a Turbo-VBI based Bayesian compression algorithm for the resulting model compression problem. Our proposed algorithm has low complexity and high robustness. We further established the convergence of the sparse VBI part of the Turbo-VBI algorithm. Simulation results show that our proposed Turbo-VBI based model compression algorithm can promote a more regular structure in the pruned neural networks while achieving even better performance in terms of compression rate and inference accuracy compared to the baselines.
### Future Work
With the proposed Turbo-VBI based structured model compression method, we can promote more regular and more flexible structures and achieve superior compression performance during neural network pruning. This also opens more possibilities for future research, as listed below:
Communication Efficient Federated Learning: Our proposed method shows huge potential in a federated learning scenario. First, it can promote a more regular structure in the weight matrix such that the weight matrix has a lower entropy. Thus, the regular structure can be further leveraged to reduce communication overhead in federated learning. Second, the two-module design of the Turbo-VBI algorithm naturally fits in a federated learning setting. The server can promote a global sparse structure by running Module B while the clients can perform local Bayesian training by running Module A.
Joint Design of Quantization and Pruning: Existing works [16, 14] have pointed out that quantization and pruning can be achieved simultaneously by Bayesian model compression. Thus, it is also possible to further compress the model by combining quantization with our proposed method.
### Appendix A: Derivation of the Three Subproblems
Here, we provide the derivation from the original problem (20) to the three subproblems (21), (22) and (23). For notational simplicity, let \(\mathbf{z}=\left[\mathbf{z}_{1},\mathbf{z}_{2},\mathbf{z}_{3}\right]\) denote all the variables \(\mathbf{w},\boldsymbol{\rho}\) and \(\mathbf{s}\), where \(\mathbf{z}_{1}=\mathbf{w}\), \(\mathbf{z}_{2}=\boldsymbol{\rho}\) and \(\mathbf{z}_{3}=\mathbf{s}\). Then, the original problem (20) can be equivalently written as
**Sparse VBI Problem:**
\[\begin{split}&\min_{q\left(\mathbf{z}\right)}-ELBO,\\ & s.t.q\left(\mathbf{z}\right)=\prod_{n=1}^{3N}q\left(z_{n}\right). \end{split} \tag{38}\]
Then, by (15) in [28], we have
\[\begin{split} ELBO&=\int q\left(\mathbf{z}\right)\ln\hat{p}\left(\mathcal{D}|\mathbf{z}\right)d\mathbf{z}-D_{KL}\left(q\left(\mathbf{z}\right)||\hat{p}\left(\mathbf{z}\right)\right)\\ &=\int q\left(\mathbf{z}\right)\ln\frac{\hat{p}\left(\mathcal{D},\mathbf{z}\right)}{q\left(\mathbf{z}\right)}d\mathbf{z}\\ &=\int\prod_{i=1}^{3}q\left(\mathbf{z}_{i}\right)\left[\ln\hat{p}\left(\mathcal{D},\mathbf{z}\right)-\sum_{i=1}^{3}\ln q\left(\mathbf{z}_{i}\right)\right]d\mathbf{z}\\ &=\int q\left(\mathbf{z}_{j}\right)\left[\int\ln\hat{p}\left(\mathcal{D},\mathbf{z}\right)\prod_{i\neq j}q\left(\mathbf{z}_{i}\right)d\mathbf{z}_{i}\right]d\mathbf{z}_{j}\\ &\quad-\int q\left(\mathbf{z}_{j}\right)\ln q\left(\mathbf{z}_{j}\right)d\mathbf{z}_{j}-\sum_{i\neq j}\int q\left(\mathbf{z}_{i}\right)\ln q\left(\mathbf{z}_{i}\right)d\mathbf{z}_{i}\\ &=\int q\left(\mathbf{z}_{j}\right)\ln\widetilde{p}_{j}d\mathbf{z}_{j}-\int q\left(\mathbf{z}_{j}\right)\ln q\left(\mathbf{z}_{j}\right)d\mathbf{z}_{j}\\ &\quad-\sum_{i\neq j}\int q\left(\mathbf{z}_{i}\right)\ln q\left(\mathbf{z}_{i}\right)d\mathbf{z}_{i}\\ &=-D_{KL}\left(q\left(\mathbf{z}_{j}\right)||\widetilde{p}_{j}\right)-\sum_{i\neq j}\int q\left(\mathbf{z}_{i}\right)\ln q\left(\mathbf{z}_{i}\right)d\mathbf{z}_{i},\end{split}\]
where \(\ln\widetilde{p}_{j}=\int\ln\hat{p}\left(\mathcal{D},\mathbf{z}\right)\prod_{i\neq j}q\left(\mathbf{z}_{i}\right)d\mathbf{z}_{i}=\left\langle\ln\hat{p}\left(\mathcal{D},\mathbf{z}\right)\right\rangle_{i\neq j}\). It is obvious that \(-ELBO\) is minimized when \(D_{KL}\left(q\left(\mathbf{z}_{j}\right)||\widetilde{p}_{j}\right)\) is minimized. Thus, the Sparse VBI Problem (38) can be solved by iteratively minimizing \(D_{KL}\left(q\left(\mathbf{z}_{j}\right)||\widetilde{p}_{j}\right)\) for \(j=1,2,3\).
### Appendix B: Derivation of (24) - (29)
Let \(q^{\star}\left(\boldsymbol{\rho}\right)=\widetilde{p}_{\boldsymbol{\rho}}\), we have
\[\begin{split}\ln q^{\star}\left(\boldsymbol{\rho}\right)\\ &\propto\ln\widetilde{p}_{\boldsymbol{\rho}}\\ &\propto\left\langle\ln p\left(\mathbf{w},\boldsymbol{\rho}, \mathbf{s},\mathcal{D}\right)\right\rangle_{q(\mathbf{s})q(\mathbf{w})}\\ &\propto\left\langle\ln p\left(\mathbf{w}|\boldsymbol{\rho} \right)\right\rangle_{q(\mathbf{w})}+\left\langle\ln p\left(\boldsymbol{\rho}| \mathbf{s}\right)\right\rangle_{q(\mathbf{s})}\\ &\propto\sum_{n=1}^{N}\left(\left\langle s_{n}\right\rangle a_{n }+\left\langle 1-s_{n}\right\rangle\overline{a}_{n}\right)\ln\rho_{n}\\ &\quad-\left(\left\langle|w_{n}|^{2}\right\rangle+\left\langle s _{n}\right\rangle b_{n}+\left\langle 1-s_{n}\right\rangle\overline{b}_{n}\right)\rho_{n}.\end{split}\]
Thus, we have that \(q^{\star}\left(\boldsymbol{\rho}\right)\) follows the Gamma distribution in (24), with the parameters given in (25) and (26).
Let \(q^{\star}\left(\mathbf{s}\right)=\widetilde{p}_{\mathbf{s}}\), we have
\[\begin{split}&\ln q^{\star}\left(\mathbf{s}\right)\\ \propto&\ln\widetilde{p}_{\mathbf{s}}\\ \propto&\left\langle\ln p\left(\mathbf{w},\boldsymbol{\rho},\mathbf{s},\mathcal{D}\right)\right\rangle_{q\left(\boldsymbol{\rho}\right)q\left(\mathbf{w}\right)}\\ \propto&\left\langle\ln p\left(\boldsymbol{\rho}|\mathbf{s}\right)\right\rangle_{q\left(\boldsymbol{\rho}\right)}+\ln\hat{p}\left(\mathbf{s}\right)\\ \propto&\sum_{n=1}^{N}s_{n}\left(\ln b_{n}^{a_{n}}+\left(a_{n}-1\right)\left\langle\ln\rho_{n}\right\rangle-b_{n}\left\langle\rho_{n}\right\rangle-\ln\Gamma\left(a_{n}\right)\right)\end{split}\]
\[+(1-s_{n})\left(\ln\overline{b}_{n}^{\overline{a}_{n}}+(\overline{a}_{ n}-1)\left\langle\ln\rho_{n}\right\rangle-\overline{b}_{n}\left\langle\rho_{n} \right\rangle-\ln\Gamma\left(\overline{a}_{n}\right)\right)\] \[+\sum_{n=1}^{N}\left(s_{n}\ln\pi_{n}+(1-s_{n})\ln\left(1-\pi_{n} \right)\right)\] \[\propto\ln\prod_{n=1}^{N}\left(\widetilde{\pi}_{n}\right)^{s_{n} }\left(1-\widetilde{\pi}_{n}\right)^{1-s_{n}}.\]
Thus, we have that \(q^{\star}\left(\mathbf{s}\right)\) follows the Bernoulli distribution in (27), with the parameters given in (28) and (29).
|
2309.01824 | On the fly Deep Neural Network Optimization Control for Low-Power
Computer Vision | Processing visual data on mobile devices has many applications, e.g.,
emergency response and tracking. State-of-the-art computer vision techniques
rely on large Deep Neural Networks (DNNs) that are usually too power-hungry to
be deployed on resource-constrained edge devices. Many techniques improve the
efficiency of DNNs by using sparsity or quantization. However, the accuracy and
efficiency of these techniques cannot be adapted for diverse edge applications
with different hardware constraints and accuracy requirements. This paper
presents a novel technique to allow DNNs to adapt their accuracy and energy
consumption during run-time, without the need for any re-training. Our
technique called AdaptiveActivation introduces a hyper-parameter that controls
the output range of the DNNs' activation function to dynamically adjust the
sparsity and precision in the DNN. AdaptiveActivation can be applied to any
existing pre-trained DNN to improve their deployability in diverse edge
environments. We conduct experiments on popular edge devices and show that the
accuracy is within 1.5% of the baseline. We also show that our approach
requires 10%--38% less memory than the baseline techniques leading to more
accuracy-efficiency tradeoff options | Ishmeet Kaur, Adwaita Janardhan Jadhav | 2023-09-04T21:26:26Z | http://arxiv.org/abs/2309.01824v1 | # On the fly Deep Neural Network Optimization Control for Low-Power Computer Vision
###### Abstract
Processing visual data on mobile devices has many applications, e.g., emergency response and tracking. State-of-the-art computer vision techniques rely on large Deep Neural Networks (DNNs) that are usually too power-hungry to be deployed on resource-constrained edge devices. Many techniques improve the efficiency of DNNs by using sparsity or quantization. However, the accuracy and efficiency of these techniques cannot be adapted for diverse edge applications with different hardware constraints and accuracy requirements. This paper presents a novel technique to allow DNNs to adapt their accuracy and energy consumption during run-time, without the need for any re-training. Our technique called AdaptiveActivation introduces a hyper-parameter that controls the output range of the DNNs' activation function to dynamically adjust the sparsity and precision in the DNN. AdaptiveActivation can be applied to any existing pre-trained DNN to improve their deployability in diverse edge environments. We conduct experiments on popular edge devices and show that the accuracy is within 1.5% of the baseline. We also show that our approach requires 10%-38% less memory than the baseline techniques leading to more accuracy-efficiency tradeoff options.
computer vision, low-power
## I Introduction
Deep Neural Networks (DNNs) are important tools in computing. They are heavily used in fields like computer vision [1, 2]. DNNs are not just simple neural networks with a couple hundred neurons; they are made of many layers with a large spread of connections between layers. This gives DNNs a tremendous range of variability that can be fine-tuned for accurate inference through training. Unfortunately, DNNs are also computation-heavy and energy-expensive as a result. ResNet [3], a popular DNN used in computer vision, needs to perform billions of operations and requires gigabytes of memory to perform image classification on a single image [4]. These computations and memory operations require significant compute resources and lead to high energy costs [5].
The excessive energy costs present a problem for DNNs -- it is difficult to deploy them in low-power embedded IoT environments. Most IoT devices are often constrained by battery power [6] and have limited computing resources. To address this issue, many techniques have been developed to improve the efficiency of DNNs at the expense of accuracy [7]. However, in most of the existing low-power computer vision techniques, the compromise between accuracy and efficiency cannot be tuned (depicted in Fig. 1(a)). This leads to DNNs that are either too inaccurate because of the accuracy losses, or DNNs that are too power-hungry because of the lack of optimizations [8]. Moreover, in real-world DNN applications, the ambient temperature varies widely based on the weather and the heat dissipation [9, 10]. As a result, the power and thermal budgets available for processing change dynamically [5]. This also creates a need for dynamically tunable DNN workloads (depicted in Fig. 1(b)).
In this work, we present a method to vary the accuracy-efficiency tradeoff configurations on the fly. Our technique, called AdaptiveActivation, can be applied to any existing DNN without any retraining. By using the available power budget as an input, AdaptiveActivation transforms the activation functions' output to increase or decrease the sparsity or precision in the DNN. As noted in prior research, higher sparsity or lower precision in the DNN increases efficiency and decreases accuracy, and vice versa. Our method measures the DNN layers' sensitivity to sparsity and precision and uses that information to ensure that the accuracy losses are minimal.
To evaluate this approach, our experiments compare the image classification accuracy and efficiency of the different techniques. This paper uses a computer vision dataset that contains varying input resolutions to show that this method can vary the accuracy-efficiency tradeoff for different workload types. We also deploy on hardware and use the Intel Power Calculator [11] to compare the energy consumption and latency. Using the proposed approach, we observe that our method reduces the memory of existing DNNs, such as MobileNet and ResNet-18, by up to \(\sim\)38% without significant accuracy losses and without the need for any re-training.
Fig. 1: (a) Existing low-power computer vision techniques apply optimizations before deployment, leading to a static accuracy-efficiency tradeoff. (b) The proposed method adds an Adaptive Activation Controller to adjust the DNNs’ hyper-parameters to change the accuracy-efficiency tradeoff dynamically during run-time.
The rest of the paper is organized as follows: In Section II, we highlight relevant existing work. In Section III, we present details about our technique. Section IV contains information about our experimental setup and compares the results of our technique with the existing state-of-the-art. Finally, Section V concludes this paper.
## II Related Work
### _Deep Neural Networks for Computer Vision_
Deep neural networks (DNNs) are a class of machine learning algorithms that can achieve high accuracy on many computer vision tasks [12, 13, 14]. DNNs use gradient descent to train models with millions or even billions of parameters on large amounts of training data. This is the main reason why DNNs are the most accurate computer vision techniques. DNNs generally contain several layers, which are connected through activation functions. These activation functions provide non-linearity in the DNN, helping it converge better. Some commonly used activation functions are Sigmoid [15] and ReLU [16]. In this paper, we propose a transformation of the existing ReLU activation function to enable dynamically changing accuracy vs. efficiency tradeoffs for edge applications.
### _Low Power Computer Vision Techniques_
Goel et al. [5] survey low-power DNNs and describe the benefits of reducing memory and operations for low-power applications. There are many DNN optimization techniques that increase DNN efficiency [12, 17, 5]. For example, MobileNet [13] is a DNN micro-architecture that uses bottleneck layers to reduce the number of parameters. Quantization and pruning [7] are other commonly used techniques to reduce the memory requirement of DNNs. DNN quantization reduces the memory requirement, and DNN pruning reduces the DNN operations [18]. However, in such existing efficient DNN inference techniques, the compromise between accuracy and efficiency is fixed and cannot be tuned. We propose a method that can select the activation sparsity and quantization to vary the accuracy-efficiency tradeoff on the fly.
### _Adaptive DNN Workloads_
Because of the importance of having adaptive workloads for different application scenarios and hardware requirements, a significant amount of research focuses on building re-configurable DNN architectures. In these techniques, the DNNs use different layers and connections based on the input [19, 20]. GaterNet [19] is a technique that uses a gating layer to deactivate certain DNN connections to reduce computation costs. Other techniques [20, 8] use different connections in the DNN based on the input to decrease the number of arithmetic operations. These techniques add new components or layers to the DNN and often require expensive training and/or re-training schemes. Our proposed method requires no retraining and can still be adapted to various accuracy vs. efficiency tradeoff configurations on the fly.
### _Our Contributions_
(1) This is the first method to tune the accuracy-efficiency tradeoff by transforming the output of the activation function to control sparsity and quantization. Our method can be applied to any existing pre-trained DNN without the need for any re-training. (2) We develop methods to select the optimal sparsity and quantization levels for each DNN layer based on their sensitivity. (3) Experiments show that our method outperforms the baseline techniques in terms of memory, number of operations, and energy.
## III AdaptiveActivation: Dynamic Control of DNNs
Our proposed method, called AdaptiveActivation, transforms the output of the DNN's activation function to change the sparsity and quantization levels during inference. Fig. 2 outlines the working of AdaptiveActivation. The inputs are a pre-trained DNN and the target memory/latency requirements of the application. AdaptiveActivation first calculates each layer's sensitivity to varying sparsity and quantization. Using this information, AdaptiveActivation iteratively adds sparsity and reduces the precision (quantization level) of the least sensitive layers. The least sensitive layers are chosen because they can be optimized with minimal impact on the DNN accuracy. In the following subsections, we provide more details about each component used in AdaptiveActivation.
### _Controlling Activation Sparsity and Precision_
In this section, we highlight our methods to control the activation sparsity and precision in DNNs.
#### III-A1 AdaptiveActivation ReLU
In this section, we describe our method to control the sparsity in DNN activations. We specifically focus on methods that require no retraining because they can be used for on-the-fly adjustments to inference performance.
Fig. 3(a) and Eqn. (1) depict the existing ReLU activation function. In the existing activation function, only the input values (\(x\)) below zero are converted to zero in the output. In order to control the sparsity of the existing ReLU, we add a new hyper-parameter \(T\). AdaptiveActivation ReLU (AA-ReLU) is depicted in Fig. 3(b) and Eqn. (2). By adjusting the value of \(T\), we can control the sparsity in the DNN.
Fig. 2: Proposed AdaptiveActivation method that can control the accuracy vs. efficiency tradeoff on the fly.
\[y=\begin{cases}x&\text{if }x>0\\ 0&\text{otherwise}\end{cases} \tag{1}\]
\[y=\begin{cases}x-T&\text{if }x>T\\ 0&\text{otherwise}\end{cases} \tag{2}\]
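A minimal PyTorch sketch of AA-ReLU is shown below; it can replace existing ReLU modules in a pre-trained DNN without retraining, with the per-layer threshold \(T\) set by the controller rather than learned.

```python
import torch
import torch.nn as nn

class AAReLU(nn.Module):
    """AdaptiveActivation ReLU, Eqn. (2): y = x - T if x > T, else 0."""

    def __init__(self, threshold: float = 0.0):
        super().__init__()
        self.threshold = threshold  # T = 0 recovers the standard ReLU (1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # clamp(x - T, min=0) zeroes everything below T, raising sparsity.
        return torch.clamp(x - self.threshold, min=0.0)
```

Raising the threshold of a layer increases that layer's activation sparsity at run-time without touching any weights.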
#### III-A2 Activation Precision Control
The default quantization format used in DNN libraries such as PyTorch is 32-bit floating point. However, many other quantization formats can be used for inference. Some quantization formats are: 16-bit floating point, 8-bit floating point, 4-bit integer, and 2-bit integer. Using fewer bits reduces the memory requirement, and also speeds up the arithmetic operations. However, using fewer bits reduces the accuracy of the DNN.
In this work, we use different quantization levels for different layers based on the impact on the accuracy. Cast operators are inserted at the output of the activations to convert values from one quantization level to another.
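As an illustration, a sketch of such cast operators is shown below; the per-layer dtype map is hypothetical, and note that PyTorch natively exposes only the floating-point levels, so the 4-bit and 2-bit integer formats would require custom packing or quantization kernels.

```python
import torch

# Hypothetical per-layer precision map; cast operators at the activation
# outputs move tensors between quantization levels.
layer_dtype = {"conv1": torch.float32, "conv2": torch.float16}

def cast_activation(name: str, x: torch.Tensor) -> torch.Tensor:
    # Cast the activation output of layer `name` to its assigned dtype.
    return x.to(layer_dtype.get(name, torch.float32))
```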
### _Layer-wise Sensitivity Analysis_
Upon analyzing the distributions of activations across the layers of a DNN, it is evident that a significant portion of the values are rounded to zero. This is because of the use of the ReLU activation function (depicted in Fig. 3(a)).
Fig. 4: Activation output histogram for one layer of the pre-trained VGG-16 DNN averaged over 128 inputs. A significant portion of the activation values are zero.
Fig. 5: Activation sensitivity analysis on one layer of the VGG-16 DNN. The sparsity axis measures the additional sparsity added to the DNN activation (0.0 means no additional sparsity and 1.0 means all values are rounded to 0). The number of bits axis represents the precision of the activation outputs (the default value is 32-bits). The accuracy axis measures the relative accuracy of the overall DNN when this layer is set to a specific precision and sparsity.
Fig. 3: (a) In the existing ReLU activation function, all values greater than zero are retained, and values smaller than zero are converted to zero. (b) In our proposed AdaptiveActivation ReLU (AA-ReLU) activation the activation function’s cutoff \(T\) is adjustable for each DNN layer. This allows us to increase or decrease the sparsity in the DNN.
Fig. 4 depicts a histogram analysis for the VGG-16 DNN. This provides evidence that DNNs' activation functions can handle sparse feature maps. Our work uses the AA-ReLU function (depicted in Fig. 3(b)) to further increase the DNN sparsity to improve efficiency, without significant accuracy losses.
However, different layers have different distributions and thus have varied sensitivities to sparsity and precision. Fig. 5 depicts our activation sensitivity analysis for the same layer in the VGG-16 DNN. In the activation sensitivity analysis, for each layer independently, we vary the sparsity and precision (using AA-ReLU) and measure the impact on the overall DNN accuracy on the testing dataset. When performing the sensitivity analysis on a particular layer, all other layers are frozen. This allows us to optimally set the sparsity and precision for each DNN layer using the algorithm described in the next subsection to ensure the best tradeoff between accuracy and efficiency.
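A minimal sketch of this per-layer sweep is given below; `set_layer_config` (which applies the threshold \(T\) and the cast precision to one layer) and `eval_fn` (which returns test-set accuracy) are hypothetical helpers.

```python
import itertools

def layer_sensitivity(model, layer, eval_fn, sparsity_levels, bit_widths):
    # Sweep (sparsity, precision) for one layer while every other layer is
    # frozen at its default, recording the overall DNN accuracy.
    results = {}
    for s, q in itertools.product(sparsity_levels, bit_widths):
        set_layer_config(model, layer, sparsity=s, bits=q)
        results[(s, q)] = eval_fn(model)
    set_layer_config(model, layer, sparsity=0.0, bits=32)  # restore default
    return results
```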
### _Optimally Selecting Per-Layer Sparsity and Precision for a Given Memory Budget_
The task is to select the sparsity and precision levels for each DNN layer in a way that minimizes the accuracy loss and meets the target performance requirements. We propose a greedy iterative method that updates the hyper-parameters (i.e., the sparsity and precision) one layer at a time. The greedy selection considers \(S\) pre-defined sparsity levels and \(Q\) pre-defined precision levels (as depicted in the sensitivity analysis in Fig. 5). For each layer, the \(S\times Q\) hyper-parameter combinations are evaluated for their impact on memory and accuracy. Across the model, a rank-list \(O\) of the most suitable optimizations is then created, where the top-ranked optimization has the least impact on accuracy and the largest memory improvement. Our greedy algorithm iteratively selects the optimizations from \(O\) with the highest score until the target memory requirement is reached. The algorithm is depicted in Fig. 6.
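A minimal sketch of this greedy loop is given below, assuming the rank-list \(O\) is materialized as tuples of (score, layer, sparsity, bits, memory saving), with higher scores being preferable.

```python
def greedy_select(rank_list, base_memory, target_memory):
    # Greedy loop from Fig. 6: apply optimizations from the rank-list O in
    # score order until the memory target is met.
    memory, chosen = base_memory, []
    for score, layer, s, q, saving in sorted(rank_list, reverse=True):
        if memory <= target_memory:
            break
        chosen.append((layer, s, q))
        memory -= saving
    return chosen, memory
```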
## IV Experiments and Results
This section shows that a PyTorch implementation of the AdaptiveActivation technique improves efficiency when compared to the baseline without sacrificing classification accuracy. Through other experiments, we also show that our method offers multiple accuracy-efficiency tradeoff options. We will open-source our code with the final version of the paper.
### _Datasets Used_
We use the ImageNet dataset [21] in our experiments. This is a commonly used dataset that has varied input data (greyscale and color) of different sizes, which showcases the generality of our technique.
### _Evaluation Metrics_
The image classification accuracy is used as the primary metric. The classification accuracy is the number of images classified correctly from the test set divided by the total number of images. To compare the efficiency, we use the memory requirement (model size + activation memory size). The memory requirement is calculated using the number of parameters and activations multiplied by the number of bytes used (i.e., the precision). We use the Intel Power Calculator [11] to estimate the energy requirements and latency of the PyTorch programs running on Intel CPUs.
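For reference, a minimal sketch of the memory metric is shown below.

```python
def memory_mb(num_params, num_activations, bytes_per_value):
    # Model size + activation memory, as used for the "Mem." column; each
    # value contributes `bytes_per_value` bytes (i.e., the precision).
    return (num_params + num_activations) * bytes_per_value / 1e6
```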
#### IV-B1 Comparison with baselines
When comparing with baseline techniques, we consider three popular DNN architectures: ResNet-18 [3], VGG-16 [14], and MobileNet [13]. These architectures are representative of most convolutional DNN architectures used today. We hypothesize that this technique can also be applied to Transformer-based architectures, but this is not included in this paper and will be considered in future work.
Table I compares the image classification accuracy and memory requirements of the three baseline DNNs with and without AdaptiveActivation. Since AdaptiveActivation allows for many different energy-efficiency tradeoffs, we present one tradeoff in Table I. Our method requires 38% (\(1-\frac{33}{54}\)) and 10.5% (\(1-\frac{510}{570}\)) less memory than the baseline MobileNet and VGG-16 implementations in PyTorch. The memory improvements in MobileNet are greater because MobileNet creates more activations due to the use of Depthwise Separable Convolutions. Across all three DNNs, the accuracy losses are small, i.e., 1%-1.5% degradation. This may be an acceptable tradeoff
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**DNN**} & \multicolumn{2}{c|}{**Baseline**} & \multicolumn{2}{c|}{**AA**} \\
\cline{3-6}
 & & **Acc.** & **Mem.** & **Acc.** & **Mem.** \\
\hline
\multirow{3}{*}{ImageNet} & ResNet-18 & 72.4\% & 68 & 71.0\% & 55 \\
\cline{2-6}
 & MobileNet & 72.0\% & 54 & 70.1\% & 33 \\
\cline{2-6}
 & VGG-16 & 70.5\% & 570 & 69.8\% & 510 \\
\hline
\end{tabular}
\end{table} TABLE I: Comparison of accuracy and memory (in MB) of three popular DNN architectures with and without AdaptiveActivation (AA).
Fig. 6: Algorithm to greedily select optimizations to meet the target memory/latency requirement. The grey boxes depict the input to the algorithm. The optimized DNN is the output.
in many applications, especially because our technique does not require any re-training.
#### Iv-B2 Tunable Accuracy vs. Efficiency Tradeoff
We deploy the PyTorch implementations on an Intel Core i7 CPU (average power 28 W) to compare the energy consumption and latency. We use the Intel Power Calculator tool to obtain the results. Fig. 7 shows the various accuracy vs. efficiency (measured as latency) options made possible with AdaptiveActivation on the ResNet-18 DNN. Please note that all these tradeoff configurations are possible without the need for any retraining.
## V Conclusions
Through this paper, we address an important problem in edge-based systems: most existing DNNs are too power-hungry to be deployed on most edge devices. Although many techniques optimize DNNs for low-power environments, they offer static optimizations, where the accuracy vs. efficiency tradeoff cannot be tuned. Such static optimizations lead to scenarios where the solution is accurate but too power-hungry, or well-optimized but not accurate enough to be deployed in different application settings. To overcome this problem, we propose AdaptiveActivation, a technique that transforms the output of the DNN's activation function to increase the sparsity and/or reduce the precision to change the accuracy vs. efficiency tradeoff. We also present an algorithm that controls the amount of sparsity/quantization added to the network to change the tradeoff on the fly without the need for any retraining. In our experiments, we consider three popular DNN architectures and showcase the ability of our technique to reduce the memory requirements and latency, with only small losses in accuracy. In future work, we will look to extend this research to Transformer-based architectures as well.
|
2307.14013 | Sound Field Estimation around a Rigid Sphere with Physics-informed
Neural Network | Accurate estimation of the sound field around a rigid sphere necessitates
adequate sampling on the sphere, which may not always be possible. To overcome
this challenge, this paper proposes a method for sound field estimation based
on a physics-informed neural network. This approach integrates physical
knowledge into the architecture and training process of the network. In
contrast to other learning-based methods, the proposed method incorporates
additional constraints derived from the Helmholtz equation and the zero radial
velocity condition on the rigid sphere. Consequently, it can generate
physically feasible estimations without requiring a large dataset. In contrast
to the spherical harmonic-based method, the proposed approach has better
fitting abilities and circumvents the ill condition caused by truncation.
Simulation results demonstrate the effectiveness of the proposed method in
achieving accurate sound field estimations from limited measurements,
outperforming the spherical harmonic method and plane-wave decomposition
method. | Xingyu Chen, Fei Ma, Amy Bastine, Prasanga Samarasinghe, Huiyuan Sun | 2023-07-26T07:52:29Z | http://arxiv.org/abs/2307.14013v1 | # Sound Field Estimation around a Rigid Sphere with Physics-informed Neural Network
###### Abstract
Accurate estimation of the sound field around a rigid sphere necessitates adequate sampling on the sphere, which may not always be possible. To overcome this challenge, this paper proposes a method for sound field estimation based on a physics-informed neural network. This approach integrates physical knowledge into the architecture and training process of the network. In contrast to other learning-based methods, the proposed method incorporates additional constraints derived from the Helmholtz equation and the zero radial velocity condition on the rigid sphere. Consequently, it can generate physically feasible estimations without requiring a large dataset. In contrast to the spherical harmonic-based method, the proposed approach has better fitting abilities and circumvents the ill condition caused by truncation. Simulation results demonstrate the effectiveness of the proposed method in achieving accurate sound field estimations from limited measurements, outperforming the spherical harmonic method and plane-wave decomposition method.
## 1 Introduction
Sound field analysis is a critical area of research with a wide range of applications, including spatial audio processing [1, 2], augmented reality/virtual reality [3], and active noise control [4, 5]. To determine the distribution of acoustic pressure in a given environment, researchers often use a microphone array. However, microphone arrays cause reflections and scattering, which potentially impact the accuracy of the sound field estimation. Therefore, understanding the scattering field is essential for accurate sound field estimation.
Rigid spherical microphone arrays are extensively employed in sound field analysis due to their theoretically derived scattering fields, enabling the estimation of sound fields in three-dimensional space surrounding the array [6, 7, 8]. Conventional methods, such as the spherical harmonic (SH) method [9], represent the sound field by a set of orthonormal modes. However, these methods have inherent limitations in practical applications. For instance, the limited number of microphones leads to a discrete representation through integration, resulting in truncation errors [10]. The plane-wave decomposition method assumes that the sound field consists only of plane waves [11]. It neglects the spatial variation within each wavefront, which may not fully capture the intricacies of the scattering fields.
Learning-based methods have been employed in scattering problems, and have demonstrated effectiveness in fitting nonlinear mappings [12, 13]. Among these methods, Convolutional Neural Networks (CNNs) have gained popularity due to their ability to promote parameter sharing and reduce the number of trainable parameters [14]. However, in the context of sound field estimation, convolutional and max-pooling layers introduce pixel spaces that can lead to discontinuities in sound pressure continuity. This discontinuity potentially undermines the physical interpretation of the data [15, 16, 17]. Additionally, the heavy reliance on training data often necessitates a large simulated dataset.
Recently, researchers introduced Physics-Informed Neural Networks (PINNs) as a method to incorporate physical knowledge into the network architecture and training [18, 19, 17, 20]. This integration allows PINNs to achieve physically feasible estimations and require less training data. Fully connected Neural Networks (FNNs) are commonly employed as part of PINN due to their ability to approximate any continuous function, as guaranteed by the Universal Approximation Theorem [21].
In this paper, we propose a method for sound field estimation using PINNs around a rigid sphere. We employ a small FNN network structure and incorporate two constraints: the Helmholtz equation and the fact that sound pressure cannot generate radial motion at the sphere boundary. We use the FNN to model the mapping between Euclidean space positions and sound pressure and use these constraints as a loss function to train the network. Notably, our method relies solely on measured data from microphones, eliminating the need for setting up a grid and using simulated datasets. Simulation results demonstrate the effectiveness of our method, outperforming the spherical harmonic method and the plane-wave decomposition
method.
## 2 Problem formulation
We introduce two coordinate systems, as shown in Fig. 1, where \(\mathbf{x}=(x,y,z)\) and \(\mathbf{r}=(r,\theta,\phi)\) represent Cartesian and spherical coordinates, respectively. Our study focuses on a sound field \(\mathcal{V}\subset\mathbb{R}^{3}\), which includes a rigid sphere located at the origin \(O\) with a radius of \(a\). We aim to estimate the sound field \(P(\mathbf{r},\omega)\) at position \(\mathbf{r}\) and angular frequency \(\omega\) around the sphere \(r\geq a\), by using measurements from \(Q\) microphones mounted on the rigid sphere.
For a homogeneous sound field (absence of sound sources) around a rigid sphere, theoretical solutions for the sound field have been established [6]. The spherical harmonic (SH) method decomposes \(P(\mathbf{r},\omega)\) into a series of orthonormal modes [6]
\[P(\mathbf{r},\omega)=\sum_{n=0}^{\infty}G_{n}(r,a,\omega)\sum_{m=-n}^{n}\mathcal{P} _{n}^{m}(a,\omega)Y_{n}^{m}(\theta,\phi), \tag{1}\]
where \(G_{n}(r,a,\omega)\) is the radial propagator, \(\mathcal{P}_{n}^{m}(a,\omega)\) are the spherical harmonic coefficients, and \(Y_{n}^{m}(\theta,\phi)\) is the spherical harmonic of order \(n\) and degree \(m\). \(\mathcal{P}_{n}^{m}(a,\omega)\) are obtained by integrating \(P(\mathbf{a},\omega)\) over the surface of the sphere
\[\mathcal{P}_{n}^{m}(a,\omega)=\iint P(\mathbf{a},\omega)Y_{n}^{m}(\theta, \phi)^{*}d\Omega, \tag{2}\]
where \(\mathbf{a}=(a,\theta,\phi)\), \(\Omega=(\theta,\phi)\), and \(\iint(\cdot)d\Omega\equiv\int_{0}^{2\pi}\int_{0}^{\pi}(\cdot)\sin\theta d\theta d\phi\). \(G_{n}(r,a,\omega)\) must satisfy the boundary condition on the rigid sphere, leading to
\[G_{n}(r,a,\omega)=j_{n}(\frac{\omega r}{c})-\frac{j_{n}^{\prime}\left(\frac{ \omega a}{c}\right)}{h_{n}^{(2)^{\prime}}\left(\frac{\omega a}{c}\right)}h_{n }^{(2)}(\frac{\omega r}{c}), \tag{3}\]
where \(j_{n}(\cdot)\) and \(h_{n}^{(2)}(\cdot)\) are the spherical Bessel function of the first kind and the spherical Hankel function of the second kind, respectively, and \(j_{n}^{\prime}(\cdot)\) and \(h_{n}^{(2)^{\prime}}(\cdot)\) are corresponding derivatives [8], \(c\) is the speed of sound.
There are three problems with the implementation of the theoretical method. First, the infinite series in (1) needs to be truncated at limited order \(N\)[10]. Second, the pressure on the sphere is sampled at a limited (\(Q\)) number of positions. Hence, the integral in (2) needs to be discretely approximated. Third, in the low-frequency region where the distance \(r\) increases (\(r>a\)), the propagators \(G_{n}(r,a,\omega)\) approximately follow a power law of the form \(\frac{n+1}{2n+1}\left(\frac{r}{a}\right)^{n}\). These increasing values have significant implications as they indicate an ill-posed problem [8]. Consequently, small variations in \(\mathcal{P}_{n}^{m}(a,\omega)\) resulting from truncation errors or noise will be magnified through the propagator.
To overcome these challenges and obtain more accurate estimations, we propose a neural network approach integrated with physics laws. This method aims to establish a regression relationship between coordinates and sound pressure while satisfying the governing physics equations.
## 3 PINN method
In this section, we present the PINN method, which consists of three steps: (1) constructing an FNN using microphone positions and measured sound pressure as training pairs, (2) incorporating physical knowledge as a constraint in the network, and (3) adjusting the weight of the loss function to facilitate convergence. The workflow of the PINN method is presented in Fig. 2.
An FNN is a composition of multiple layers, in which the nodes of adjacent layers are fully connected. A single layer of the network is denoted as
\[F(\mathbf{x})=\sigma\left(\mathbf{x}^{\intercal}\mathbf{w}+b\right), \tag{4}\]
where \(\mathbf{x}\in\mathbb{R}^{3}\) is the input, which in this case is a Cartesian position in the sound field \(\mathcal{V}\), \(\mathbf{w}\) represents the weights, \(b\) is the bias, and \(\sigma\) is the activation function. The FNN is a composition of multiple layers
\[\Phi(\mathbf{x},\omega;\psi)=\left(F_{n}\circ\ldots\circ F_{2}\circ F_{1}\right)( \mathbf{x}), \tag{5}\]
where \(\psi\) represents the set of all trainable parameters \(\mathbf{w}\) and \(b\). Hereafter, \(\omega\) is omitted in \(\Phi(\mathbf{x};\psi)\) when representing acoustic quantities for notational simplicity. \(\psi\) is learned by minimizing the mean squared error (MSE) loss \(\mathcal{L}_{\text{data}}\) with respect to the training data
\[\psi^{*}=\operatorname*{arg\,min}_{\psi}\mathcal{L}_{\text{data}}(\mathbf{x};\psi), \tag{6}\]
where \(\mathcal{L}_{\text{data}}\) is defined as
\[\mathcal{L}_{\text{data}}(\mathbf{x}_{i};\psi)=\frac{1}{Q}\sum_{i=1}^{Q}\left(P( \mathbf{x}_{i},\omega)-\Phi\left(\mathbf{x}_{i};\psi\right)\right)^{2}. \tag{7}\]
It fits the regression relationship between the spatial coordinates of the microphones \(\mathbf{x}\) and the corresponding sound
Figure 1: System setup: A rigid sphere with microphones showing Cartesian and spherical coordinates. \(\bullet\) represent the microphones, and \(\star\) represents a reference point.
pressure values \(P(\mathbf{x},\omega)\). The network output \(\Phi(\mathbf{x};\psi)\) represents the estimated sound pressure \(\hat{P}(\mathbf{x},\omega)\).
The aforementioned approach represents a purely data-driven method. It needs a large dataset, which is not always available. We incorporate domain knowledge to reduce the data requirements and generate physically meaningful estimations.
For sound field estimation around a rigid sphere, we can leverage two pieces of domain knowledge to improve the estimation accuracy. First, sound fields are governed by the Helmholtz equation
\[\left(\triangle+(\frac{\omega}{c})^{2}\right)P(\mathbf{r},\omega)=0, \tag{8}\]
where \(\triangle=\partial^{2}/\partial x^{2}+\partial^{2}/\partial y^{2}+\partial^{2 }/\partial z^{2}\) denotes the Laplace operator. Therefore, we require the PINN estimation to satisfy (8), i.e.,
\[\left(\triangle+(\frac{\omega}{c})^{2}\right)\Phi(\mathbf{x};\psi)=0. \tag{9}\]
Second, radial velocity is zero on the rigid sphere:
\[\left.\frac{\partial P(\mathbf{r},\omega)}{\partial r}\right|_{r=a}=0. \tag{10}\]
It represents the derivative of the sound pressure with respect to the radius of the sphere and can be written in terms of derivatives with respect to the Cartesian coordinates as
\[\frac{\partial P(\mathbf{r},\omega)}{\partial r}=\frac{\partial P(\mathbf{r},\omega)} {\partial x}\frac{\partial x}{\partial r}+\frac{\partial P(\mathbf{r},\omega)}{ \partial y}\frac{\partial y}{\partial r}+\frac{\partial P(\mathbf{r},\omega)}{ \partial z}\frac{\partial z}{\partial r}. \tag{11}\]
Substituting \(\frac{\partial x}{\partial r}=\frac{x}{r}\), \(\frac{\partial y}{\partial r}=\frac{y}{r}\), \(\frac{\partial z}{\partial r}=\frac{z}{r}\), (10) can be expressed in matrix form
\[\left.\frac{\partial P(\mathbf{r},\omega)}{\partial r}\right|_{r=a}=\begin{bmatrix} x&y&z\end{bmatrix}\begin{bmatrix}\frac{\partial P(\mathbf{r},\omega)}{ \partial x}\\ \frac{\partial P(\mathbf{r},\omega)}{\partial y}\\ \frac{\partial P(\mathbf{r},\omega)}{\partial z}\end{bmatrix}=0. \tag{12}\]
We denote \(\mathbf{x_{b}}\) as a point on the rigid sphere with radius \(a\) and \(\begin{bmatrix}\frac{\partial\Phi(\mathbf{x};\psi)}{\partial x}&\frac{\partial\Phi(\mathbf{x};\psi)}{\partial y}&\frac{\partial\Phi(\mathbf{x};\psi)}{\partial z}\end{bmatrix}^{\intercal}\) as \(\mathbf{J}\). Therefore, we require the PINN estimation to satisfy (12), i.e.,
\[\mathbf{x_{b}}\mathbf{J}=0. \tag{13}\]
At present, we have constructed an FNN and acquired physical knowledge, which allows us to develop a PINN. Leveraging automatic differentiation within the neural network, we can efficiently compute the first-order and second-order derivatives of \(\Phi(\mathbf{x};\psi)\) as required in (9) and (13). These derivatives are essential for updating the network parameters \(\psi\) using gradient descent.
We formulate the loss function to include three terms, each contributing to the overall training process
\[\mathcal{L}(\psi) =\lambda_{1}\underbrace{\frac{1}{Q}\sum_{q=1}^{Q}\left\|P(\mathbf{x}_{q})-\Phi(\mathbf{x}_{q};\psi)\right\|_{2}^{2}}_{\mathcal{L}_{\text{data}}} \tag{14}\] \[+\lambda_{2}\underbrace{\frac{1}{D}\sum_{d=1}^{D}\left\|\triangle\Phi(\mathbf{x}_{d};\psi)+(\frac{\omega}{c})^{2}\Phi(\mathbf{x}_{d};\psi)\right\|_{2}^{2}}_{\mathcal{L}_{\text{PDE}}}\] \[+\lambda_{3}\underbrace{\frac{1}{B}\sum_{b=1}^{B}\left\|\mathbf{x_{b}}\mathbf{J}\right\|_{2}^{2}}_{\mathcal{L}_{\text{bc}}},\]
where \(\|\cdot\|_{2}\) is the 2-norm, \(\mathcal{L}_{\text{data}}\) represents the discrepancy between the PINN estimations \(\Phi(\mathbf{x})\) and the actual measurements \(P(\mathbf{x}_{q})\) obtained from microphones. \(\mathcal{L}_{\text{PDE}}\) ensures that the Helmholtz equation is satisfied for the positions \(\mathbf{x}_{d}\) within the domain \(\mathcal{V}\), and \(\mathcal{L}_{\text{bc}}\) imposes the boundary condition of zero radial velocity on the rigid sphere. \(D\) are positions surrounding the sphere, and \(B\) are positions on the rigid sphere. To control the relative importance of these terms and ensure similar influences on the training process, we introduce weight factors \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\).
The weights in the loss function significantly influence the convergence and accuracy of the PINN. By balancing loss weights, a compromise can be achieved between accurately fitting the microphone measurements and adhering to the physical constraints. Since all three losses aim to approach zero, direct normalization is not feasible. \(\mathcal{L}_{\text{data}}\) is influenced by data noise. To address the issue, we balance \(\mathcal{L}_{\text{PDE}}\) and \(\mathcal{L}_{\text{bc}}\) to the same order of magnitude during the initial stages of training. Specifically, we set \(\lambda_{1}\) to \(1\), \(\lambda_{2}\) to \((\frac{c}{\omega})^{2}\), and \(\lambda_{3}\) to \(r\).
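For concreteness, a minimal PyTorch sketch of the network and the loss in Eq. (14) is given below, assuming real-valued normalized pressure; the layer sizes follow Section 4, while the variable names and the sampling of collocation points are assumptions.

```python
import torch
import torch.nn as nn

class FNN(nn.Module):
    """3 hidden layers of 4 tanh units, mapping (x, y, z) to pressure."""
    def __init__(self, width=4, depth=3):
        super().__init__()
        layers, dim = [], 3
        for _ in range(depth):
            layers += [nn.Linear(dim, width), nn.Tanh()]
            dim = width
        layers.append(nn.Linear(dim, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):                       # x: (n, 3) Cartesian positions
        return self.net(x)

def pinn_loss(model, x_q, p_q, x_d, x_b, k):
    """x_q/p_q: mic positions and measured pressures; x_d: collocation
    points around the sphere; x_b: points on the sphere; k = omega / c."""
    l_data = ((model(x_q).squeeze(-1) - p_q) ** 2).mean()

    x_d = x_d.clone().requires_grad_(True)      # Helmholtz residual via autograd
    p = model(x_d).squeeze(-1)
    g = torch.autograd.grad(p.sum(), x_d, create_graph=True)[0]
    lap = sum(torch.autograd.grad(g[:, i].sum(), x_d, create_graph=True)[0][:, i]
              for i in range(3))
    l_pde = ((lap + k ** 2 * p) ** 2).mean()

    x_b = x_b.clone().requires_grad_(True)      # zero radial velocity, Eq. (13)
    pb = model(x_b).squeeze(-1)
    gb = torch.autograd.grad(pb.sum(), x_b, create_graph=True)[0]
    l_bc = ((x_b * gb).sum(dim=1) ** 2).mean()

    lam1, lam2, lam3 = 1.0, 1.0 / k ** 2, x_b.norm(dim=1).mean().detach()
    return lam1 * l_data + lam2 * l_pde + lam3 * l_bc
```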
## 4 Simulation
We conduct simulations in a 3D free field to evaluate the proposed method regarding the sound field estimation surrounding a rigid sphere and compare the performance of three methods: the spherical harmonic method (SH) [9], the plane wave decomposition method (PL) [22], and the proposed method (PINN).
We uniformly mount 32 omnidirectional microphones on a rigid sphere with a radius of \(a=0.042\) m, as shown in Fig. 1. We simulate the transfer functions between the sound source and the microphones on the rigid sphere according to [23]. Specifically, we place two point sources at coordinates \((2.5,0.8,0.0)\) m and \((-2.0,-0.6,1.2)\) m, both with an amplitude of \(1\). The speed of sound is set to \(c=343\) m/s. To account for realistic conditions, we add white Gaussian noise to the observation signals, resulting in a signal-to-noise ratio (SNR) of 30 dB. The pressure values are then normalized within the range of \([-1,1]\).
For the PINN method, we construct a 3-layer FNN with 4 nodes per layer and the hyperbolic tangent (tanh) activation function. To train the PINN, we employ the Adam optimizer [24] with a learning rate of \(10^{-5}\) for a total of 10,000 epochs. We set \(\mathcal{L}_{\text{data}}\) based on the 32 microphone measurements, \(\mathcal{L}_{\text{PDE}}\) based on 1000 randomly distributed points surrounding the sphere, and \(\mathcal{L}_{\text{bc}}\) based on 500 uniformly distributed sampling points on the rigid sphere. As the network is trained in Cartesian coordinates, we generate the sampling points in spherical coordinates \(\mathbf{r}\) and then convert them to Cartesian coordinates \(\mathbf{x}\).
In Fig. 4, we illustrate the training process. It demonstrates the simultaneous and nearly synchronized reduction of the training error \(\mathcal{L}_{\text{PDE}}\) and \(\mathcal{L}_{\text{bc}}\) in the initial stages of training for the first 2000 epochs. Subsequently, a gradual decline in \(\mathcal{L}_{\text{data}}\) is observed. This observation suggests that the weight configurations of \(\lambda_{1}=1\), \(\lambda_{2}=(\frac{c}{\omega})^{2}\), and \(\lambda_{3}=r\) achieve a good balance.
We define the pressure error between the true and estimated sound fields as
\[\text{error}:=\|P(\mathbf{x}_{g},\omega)-\hat{P}(\mathbf{x}_{g},\omega)\|, \tag{15}\]
where \(P(\mathbf{x}_{g},\omega)\) and \(\hat{P}(\mathbf{x}_{g},\omega)\) represent the true and estimated pressures, respectively.
Fig. 3 presents the ground truth, the sound pressure estimated by the three methods, and the error at a frequency of 1000 Hz and a radius of 0.072 m. The SH method exhibits higher errors around \(\theta=0.5\pi\); the PL method performs the worst among the three due to its far-field assumption, which ignores the scattering information. The PINN method achieves the best results overall, although it still experiences relatively higher errors at \(\theta=0.5\pi\).
To evaluate the overall performance, we defined the normalized mean squared error (NMSE) as
\[\text{NMSE}:=10\log_{10}\frac{\sum_{g=1}^{10000}\|P(\mathbf{x}_{g},\omega)-\hat{P}(\mathbf{x}_{g},\omega)\|_{2}^{2}}{\sum_{g=1}^{10000}\|P(\mathbf{x}_{g},\omega)\|_{2}^{2}}. \tag{16}\]
The estimation was performed at 10,000 evaluation positions uniformly sampled on a sphere of a given radius, at a given frequency.
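For reference, the NMSE of Eq. (16) can be computed as follows; the function and variable names are assumptions.

```python
import torch

def nmse_db(p_true, p_est):
    """p_true, p_est: (G,) true and estimated pressures at the evaluation points."""
    err = (p_true - p_est).abs().pow(2).sum()
    ref = p_true.abs().pow(2).sum()
    return 10.0 * torch.log10(err / ref)
```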
Fig. 5 illustrates the relationship between the radius \(r\) of the estimation sphere and the NMSE at a frequency of 1000 Hz. When \(r=a\), which corresponds to estimating the sound pressure on the rigid sphere, all three methods yield comparable results. However, as the radius increases, both the SH method and the PL method show a significant surge in error. In contrast, the PINN method exhibits a more gradual increase in error as the radius increases. Its NMSE is \(7-10\) dB lower than that of the SH and PL methods for \(r>0.05\) m. This indicates that the PINN method outperforms the conventional methods.
## 5 Conclusion
In this study, we propose a PINN method to estimate the sound field around a rigid sphere based on the measurement of a small number of microphones mounted on the rigid sphere. Our method employs an FNN to model the mapping between space positions and sound pressure. We incorporate the Helmholtz equation and the fact that sound pressure cannot generate radial motion at the sphere boundary as part of the loss function. This integration of physical knowledge into the network architecture and training process enhances the accuracy of estimation. A major advantage of our approach is that it relies solely on measured
Figure 2: The workflow includes three components: an FNN for mapping inputs to outputs with trainable parameters \(\psi\), a Physics-Informed Network that incorporates physical governing equations and boundary conditions, and a feedback mechanism to minimize the loss function and update \(\psi\).
data, eliminating the need for grid setup and simulated datasets. Simulation results demonstrate that our method outperforms the spherical harmonic method and the plane-wave decomposition method.
It is hard to derive theoretical solutions for the scattered sound of complex geometries, but a PINN can use physical and geometric constraints to fit the sound pressure distribution. This suggests that our method can potentially estimate the sound field around arbitrary geometries.
|
2310.14621 | Spiking mode-based neural networks | Spiking neural networks play an important role in brain-like neuromorphic
computations and in studying working mechanisms of neural circuits. One
drawback of training a large scale spiking neural network is that updating all
weights is quite expensive. Furthermore, after training, all information
related to the computational task is hidden into the weight matrix, prohibiting
us from a transparent understanding of circuit mechanisms. Therefore, in this
work, we address these challenges by proposing a spiking mode-based training
protocol, where the recurrent weight matrix is explained as a Hopfield-like
multiplication of three matrices: input, output modes and a score matrix. The
first advantage is that the weight is interpreted by input and output modes and
their associated scores characterizing the importance of each decomposition
term. The number of modes is thus adjustable, allowing more degrees of freedom
for modeling the experimental data. This significantly reduces the training
cost because of significantly reduced space complexity for learning. Training
spiking networks is thus carried out in the mode-score space. The second
advantage is that one can project the high dimensional neural activity
(filtered spike train) in the state space onto the mode space which is
typically of a low dimension, e.g., a few modes are sufficient to capture the
shape of the underlying neural manifolds. We successfully apply our framework
in two computational tasks -- digit classification and selective sensory
integration tasks. Our method accelerate the training of spiking neural
networks by a Hopfield-like decomposition, and moreover this training leads to
low-dimensional attractor structures of high-dimensional neural dynamics. | Zhanghan Lin, Haiping Huang | 2023-10-23T06:54:17Z | http://arxiv.org/abs/2310.14621v3 | # Spiking mode-based neural networks
###### Abstract
Spiking neural networks play an important role in brain-like neuromorphic computations and in studying the working mechanisms of neural circuits. One drawback of training a large-scale spiking neural network is the expensive cost of updating all weights. Furthermore, after training, all information related to the computational task is hidden in the weight matrix, prohibiting a transparent understanding of circuit mechanisms. Therefore, in this work, we address these challenges by proposing a spiking mode-based training protocol. The first advantage is that the weight is interpreted by input and output modes and their associated scores characterizing the importance of each decomposition term. The number of modes is thus adjustable, allowing more degrees of freedom for modeling the experimental data. This significantly reduces the training cost because of the greatly reduced space complexity of learning. The second advantage is that one can project the high-dimensional neural activity in the ambient space onto the mode space, which is typically of a low dimension, e.g., a few modes are sufficient to capture the shape of the underlying neural manifolds. We analyze our framework in two computational tasks--digit classification and selective sensory integration tasks. Our work thus derives a mode-based learning rule for spiking neural networks.
Introduction
Spiking neural activity observed in primates' brains establishes the computational foundation of high-order cognition [1]. In contrast to modern artificial neural networks, spiking networks offer their own computational efficiency, because the spiking pattern is sparse and the all-silent pattern dominates the computation. Therefore, studying the mechanism of spike-based computation and deriving efficient algorithms play an important role in current studies of neuroscience and AI [2].
In a spiking network (e.g., cortical circuits in the brain), the membrane potential is reset immediately after a spike is emitted, and furthermore, during the refractory period, the neuron is not responsive to its afferent synaptic currents, including external signals. The neural dynamics in the form of spikes is thus not differentiable, in stark contrast to the rate model of the dynamics, for which backpropagation through time (BPTT) can be implemented [3; 4; 5]. This nondifferentiable property presents a computational challenge for gradient-based algorithms. Another challenge comes from understanding the computation itself. Even in standard spike-based networks, the weight values on the synaptic connections are modeled by real numbers. When a neighboring neuron fires, the input weight to the target neuron contributes to the integrated currents. With single real-valued weights, it is hard to dissect what task-related information is encoded in neural activity, especially after learning. In addition, it is commonly observed that the neural dynamics underlying behavior is low-dimensionally embedded [6]. There thus appears a gap between the extremely high-dimensional weight space and the actual low-dimensional latent space of neural activity. To overcome these two challenges, we develop a new type of spiking network, called the spiking mode-based neural network (SMNN).
In essence, we decompose the traditional real-valued weights as three matrices: the left one acts as the input mode space, the right one acts as the output mode space and the middle one encodes the importance of each mode in the corresponding space. Therefore, we can interpret the weight as two mode spaces and a score matrix. The neural dynamics can thus be projected to the input mode space which is an intrinsically low dimensional space. In addition, the leaky integrated fire (LIF) model can be discretized into a discrete dynamics with exponential current-based synapses and threshold-type firing, for which a surrogate gradient descent with an additional steepness hyperparameter can be applied [7; 8]. We then
show in experiments how the SMNN framework is applied to classify pixel-by-pixel MNIST datasets and furthermore to context dependent computation in neuroscience. For both computational tasks, we reveal a low-dimensional mode space (in the the number of modes) for which the high dimensional neural dynamics can be projected. Overall, we construct a computational efficient (less model parameters required because of a few dominating modes) and conceptually simple framework to understand challenging spike-based computations.
## II Related Works
Recurrent neural networks (RNNs) using continuous rate dynamics of neurons can be used to generate coherent output sequences [9], in which the network weights are trained by the first-order reduced and controlled error (FORCE) method, where the inverse of the rate correlation matrix is iteratively estimated during learning [5; 9]. This method can be generalized to supervised learning in spiking neural networks [10; 11; 12]. Other methods include training the rate network first and then rescale the synaptic weights to adapt to a spiking setting [13], and mapping a trained continuous-variable rate RNN to a spiking RNN model [14; 15]. These methods rely either on standard efficient methods like FORCE for rate models, or on heuristic strategies to modify the weights on the rate counterpart. In contrast, our current SMNN framework works directly on a spiking dynamics for which many biological plausible factors can be incorporated, e.g., membrane and synapse time scales, cell types and refractory time etc. In addition, recent studies focus on low-rank connectivity hypothesis on recurrent rate dynamics [16; 17]. The weight matrix in RNNs is decomposed into a sum of a few rank-one connectivity matrices, referred to as left and right connectivity vectors respectively. Recently, this low-rank framework is applied to analyze the low rank excitatory-inhibitory spiking networks, with a focus either on mean-field analysis of random networks [18] or on the capability of approximating arbitrary non-linear input-output mappings using small size networks [19]. Therefore, taking biological constraints into account is an active research frontier providing a mechanistic understanding of spike-based neural computations.
In our current work, we interpret this kind of low-rank construction as its full form, like that in the generalized Hopfield model [20; 21]. Then a score matrix is naturally introduced to characterize the competition amongst these low-rank modes, where we found
a remarkable piecewise power-law behavior for their magnitude ranking. Strikingly, the same mode decomposition learning was shown to take effect for multi-layered perceptrons trained on real structured datasets [22], for which the encoding-recoding-decoding hierarchy can be mechanistically explored. Here, we show the effectiveness of the framework in biologically plausible spiking networks, where many brain computation hypotheses can be tested. Therefore, our current work makes a significant step towards a fast and interpretable training protocol for recurrent spiking networks.
## III Results
## IV Mode decomposition learning with spiking activity
Our goal is to learn the underlying information represented by spiking time series using SMNNs. This framework is applied to classify the handwritten digits whose pixels are input to the network one by one (a more challenging task compared to the perceptron learning), and is then generalized to the context-dependent computation task, where two noisy sensory inputs with different modalities are input to the network, which is required to output the correct response when different contextual signals are given. This second task is a well-known cognitive control experiment carried out in macaque monkey prefrontal cortex [23].
### Recurrent spiking dynamics
Our network consists of an input layer, a hidden layer with \(N\) LIF neurons, and an output layer. In the input layer, input signals are projected as external currents. Neuron \(i\) in the hidden layer (reservoir or neural pool) has an input mode \(\mathbf{\xi}_{i}^{\text{in}}\) and an output mode \(\mathbf{\xi}_{i}^{\text{out}}\), where both modes \(\mathbf{\xi}_{i}\in\mathbb{R}^{P}\). The connectivity weight from neuron \(j\) to neuron \(i\) is then constructed as \(W_{ij}^{\text{rec}}=\sum_{\mu}\lambda_{\mu}\xi_{i\mu}^{\text{in}}\xi_{j\mu}^{\text{out}}\), where \(\lambda_{\mu}\) is a score encoding the competition among modes, and the connectivity matrix can be written as \(\mathbf{W}^{\text{rec}}=\mathbf{\xi}^{\text{in}}\mathbf{\Sigma}(\mathbf{\xi}^{\text{out}})^{\top}\in\mathbb{R}^{N\times N}\), where \(\mathbf{\Sigma}\in\mathbb{R}^{P\times P}\) is the importance matrix, here a diagonal matrix \(\text{diag}(\lambda_{1},\dots,\lambda_{P})\). For simplicity, we do not take into account Dale's law, i.e., that the neuron population is separated into excitatory and inhibitory subpopulations. The law could be incorporated as a matrix product between a non-negative matrix and a diagonal matrix specifying the cell types [24]. The activity of the hidden layer is transmitted to the output layer for generating the actual
network outputs. Based on SMNNs, the number of trainable recurrent parameters is reduced from \(\mathcal{O}(N^{2})\) to \(\mathcal{O}(2NP+P)\). Typically, \(P\) is of order one, and thus the SMNN has training complexity _linear_ in the network size, as sketched below.
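A minimal sketch of materializing the recurrent weights from the modes; the tensor names and the normalization are assumptions (the scaling mimics the \(O(1/\sqrt{N})\) initialization used later).

```python
import torch

N, P = 100, 3                                     # network size and mode size
xi_in = torch.randn(N, P, requires_grad=True)     # input modes xi^in
xi_out = torch.randn(N, P, requires_grad=True)    # output modes xi^out
scores = torch.randn(P, requires_grad=True)       # importance scores lambda_mu

# W^rec = xi^in Sigma (xi^out)^T, built from only 2NP + P trainable parameters
W_rec = (xi_in * scores) @ xi_out.T / N**0.5      # (N, N) recurrent weights
```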
In the hidden layer, the membrane potential \(\mathbf{U}(t)\) and the synaptic currents \(\mathbf{I}^{\mathrm{syn}}(t)\) of LIF neurons are subject to the following dynamics,
\[\tau_{\mathrm{mem}}\frac{\mathrm{d}\mathbf{U}(\mathrm{t})}{ \mathrm{dt}} =-\mathbf{U}(t)+\mathbf{I}^{\mathrm{syn}}(t)+\mathbf{I}^{\mathrm{ ext}}(t), \tag{1a}\] \[\tau_{\mathrm{syn}}\frac{\mathrm{d}\mathbf{I}^{\mathrm{syn}}( \mathrm{t})}{\mathrm{dt}} =-\mathbf{I}^{\mathrm{syn}}(t)+\mathbf{W}^{\mathrm{rec}}\mathbf{r}(t), \tag{1b}\]
where \(\tau_{\mathrm{mem}}\) and \(\tau_{\mathrm{syn}}\) are the membrane and synaptic time constants (being of the exponentially decaying type), respectively. \(\mathbf{I}^{\mathrm{ext}}\) indicates the external currents. The spikes are filtered by specific types of synapse in the brain. Therefore, we denote \(\mathbf{r}\in\mathbb{R}^{N}\) as the filtered spike train with the following double-exponential synaptic filter [1],
\[\frac{\mathrm{d}r_{i}}{\mathrm{d}t} =-\frac{r_{i}}{\tau_{d}}+h_{i}, \tag{2a}\] \[\frac{\mathrm{d}h_{i}}{\mathrm{d}t} =-\frac{h_{i}}{\tau_{r}}+\frac{1}{\tau_{r}\tau_{d}}\sum_{t_{i}^{ k}<t}\delta(t-t_{i}^{k}), \tag{2b}\]
where \(\tau_{r}\) and \(\tau_{d}\) refer to the synaptic rise time and the synaptic decay time, respectively; \(t_{i}^{k}\) refers to the \(k\)-th spiking time of unit \(i\). Different time scales of synaptic filters are due to different types of receptors (such as fast AMPA, relatively fast GABA, and slow NMDA receptors) with different temporal characteristics [1]. Note that \(\tau_{\mathrm{syn}}=0\) corresponds to the spiking network used in the previous work [13], which proposed to train the rate network first and then rescale the synaptic weights to adapt to a spiking setting.
By definition,
\[\mathbf{I}^{\mathrm{ext}}(t)=\mathbf{W}^{\mathrm{in}}\mathbf{u}(t) \tag{3}\]
where the time-varying inputs \(\mathbf{u}\in\mathbb{R}^{N_{\mathrm{in}}}\) are fed to the network via \(\mathbf{W}^{\mathrm{in}}\in\mathbb{R}^{N\times N_{\mathrm{in}}}\). Corresponding to the continuous dynamics in Eq. (1), the membrane potential is updated in discrete time steps by [25]
\[\mathbf{U}[n+1]=\Big{(}\lambda_{\mathrm{mem}}\mathbf{U}[n]+(1-\lambda_{\mathrm{mem}})\mathbf{I}[n]\Big{)}\odot\Big{(}\mathcal{I}-\mathbf{S}[n]\Big{)}, \tag{4}\]
where \(n\) is the time step, \(\odot\) denotes the element-wise product, \(\mathcal{I}\) is an all-one vector of dimension \(N\), and the membrane decay factor \(\lambda_{\text{mem}}\equiv\exp\Bigl{\{}(-\frac{\Delta t}{\tau_{\text{mem}}})\Bigr{\}}\). \(\Delta t\) is a small time interval or step size for solving the ordinary differential equations [Eq. (1)]. We show details of the derivations in the supplemental material. A spike thus takes place at a time measured in units of \(\Delta t\). \(\mathbf{S}(t)\) is the associated spiking output, with component \(S_{i}(t)=\Theta(U_{i}(t)-U_{\text{thr}})\) for neuron \(i\), where \(U_{\text{thr}}\) is a spike threshold (set to one in the following analysis) and \(\Theta\) is the Heaviside step function. We also consider the refractory period, which can be neglected for simplicity in simulations. The factor \(\mathcal{I}-\mathbf{S}[n]\) resets the membrane potential to zero after a spike event. If the refractory period is considered, the membrane potential will stay at zero for a short duration, e.g., \(2\,\mathrm{ms}\). In the following, \(\mathbf{I}[n]\) denotes the total afferent synaptic currents at the time step \(n\) and is calculated as [25]
\[\mathbf{I}[n+1]=\lambda_{\text{syn}}\mathbf{I}[n]+\mathbf{W}^{\text{rec}}\mathbf{ r}[n]+\mathbf{W}^{\text{in}}\mathbf{u}[n], \tag{5}\]
where \(\lambda_{\text{syn}}\equiv\exp\Bigl{\{}(-\frac{\Delta t}{\tau_{\text{syn}}})\Bigr{\}}\). For fast synaptic dynamics (\(\tau_{\text{syn}}\to 0\)), \(\mathbf{I}[n]=\mathbf{I}^{\text{syn}}[n]+\mathbf{I}^{\text{ext}}[n]\), which is expected. Specific values of these model parameters in simulations are given in the supplemental material.
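A minimal sketch of one discrete update step, Eqs. (4)-(5); the tensor shapes and names are assumptions.

```python
import torch

def lif_step(U, I, S, r, u, W_rec, W_in, lam_mem, lam_syn, U_thr=1.0):
    """U, I, S, r: (N,) state tensors; u: (N_in,) external input."""
    I_next = lam_syn * I + W_rec @ r + W_in @ u           # Eq. (5)
    U_next = (lam_mem * U + (1 - lam_mem) * I) * (1 - S)  # Eq. (4), reset after spikes
    S_next = (U_next >= U_thr).float()                    # Heaviside firing
    return U_next, I_next, S_next
```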
_MNIST data learning._ The MNIST dataset consists of handwritten digits, each composed of 28x28 pixels, belonging to 10 different classes (6000 images for each class). The dataset is commonly used as a classification benchmark for neural networks [26]. The handwritten digit is a static image and thus must be transformed into a spiking time series. We therefore introduce an additional transformation layer consisting of spiking neurons before the hidden layer. More precisely, each image is converted into spiking activity using one neuron per pixel. The transformation layer is modeled by a population of Poisson spiking neurons with a maximum firing rate \(f_{max}\). Each neuron in this layer generates a Poisson spike train with a rate \(f_{i}=f_{max}\frac{g_{i}}{255}\), where \(g_{i}\) is the corresponding pixel intensity. We present the same static image every 2ms (step size) for a total of 50 time steps as input to the transformation layer (therefore for a duration of 100ms), converting the image into spiking activity input to the hidden layer [Figure 1 (a) and (b)]; a minimal encoding sketch is given below.
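The following sketch implements this Poisson encoding, assuming \(f_{max}=100\) Hz (a value not specified in the text) and the 2ms step.

```python
import torch

def poisson_encode(image, f_max=100.0, dt=2e-3, steps=50):
    """image: (784,) pixel intensities in [0, 255] -> spikes: (steps, 784)."""
    rate = f_max * image / 255.0               # per-neuron firing rate f_i (Hz)
    p_spike = (rate * dt).clamp(max=1.0)       # Bernoulli spike probability per step
    return (torch.rand(steps, image.numel()) < p_spike).float()
```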
_Context-dependent learning task._ It is fundamentally important for our brain to attend selectively to one feature of a noisy sensory input, while the other features are ignored. The same modality can be relevant or irrelevant depending on the contextual cue, thereby
allowing for flexible computation (a fundamental ability of cognitive control). The neural basis of this selective integration was found in the prefrontal cortex of macaque monkeys [23]. In this experiment, monkeys were trained to make a decision about either the dominant color or motion direction of randomly moving colored dots. Therefore, the color or motion indicates the context for the neural computation. This context-dependent flexible computation can be analyzed by training a recurrent rate neural network [23; 24; 27]. However, a mode-based training of spiking networks is lacking so far. Towards a more biologically plausible setting, we train a recurrent spiking network using our SMNN framework. The network has two sensory inputs of different modalities, in analogy to motion and color, implemented as Gaussian trajectories whose means are randomly chosen for each trial while the variance is kept at one. In addition, two contextual inputs (cues indicating which modality should be attended to) are also provided. The network is then trained to report whether the sensory input in the relevant modality has a positive or negative mean (offset). Within this setting, four attractors would be formed after learning, corresponding to left motion, right motion, red color and green color (see Table 1), in analogy to the monkey experiments [23]. In simulations, we consider 500 time steps with a step size of 2ms, and noisy signals are only present during the stimulus window (\(100\mathrm{ms}-500\mathrm{ms}\)) [see an illustration in Figure 1 (c)]; a minimal trial-generation sketch is given below.
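The following sketch generates one trial; the offset distribution, signal scaling, and exact window indices are assumptions, with the stimulus window (steps 50-250 at 2ms per step) matching the 100ms-500ms window in the text.

```python
import torch

def make_trial(T=500, stim=(50, 250)):
    offsets = torch.empty(2).uniform_(-0.5, 0.5)       # one offset per modality
    u = torch.zeros(T, 2)
    u[stim[0]:stim[1]] = torch.randn(stim[1] - stim[0], 2) + offsets  # unit-variance noise
    ctx = torch.zeros(T, 2)
    c = torch.randint(2, (1,)).item()                  # cued context, one-hot
    ctx[:, c] = 1.0
    z = torch.zeros(T)
    z[stim[1]:] = torch.sign(offsets[c])               # target in the response period
    return torch.cat([u, ctx], dim=1), z               # inputs: (T, 4), target: (T,)
```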
## V Experiments using surrogate gradient descents
Throughout our experiments, the Heaviside step function is approximated by a sigmoid surrogate
\[\Theta(x)\approx\frac{x}{1+k|x|}, \tag{6}\]
where the steepness parameter \(k=25\). Other parameters include \(\Delta t=2\mathrm{ms}\), \(\tau_{\mathrm{mem}}=20\mathrm{ms}\), \(\tau_{\mathrm{syn}}=10\mathrm{ms}\), \(\tau_{d}=30\mathrm{ms}\), \(\tau_{r}=2\mathrm{ms}\), and the refractory period \(t_{\mathrm{ref}}=10\mathrm{ms}\) for all tasks. Because we have not imposed a sparsity constraint on the connectivity, we take the initialization
\begin{table}
\begin{tabular}{l l l} \hline Offset & Contextual input & Attractor type \\ \hline positive & (1,0) & left attractor \\ negative & (1,0) & right attractor \\ positive & (0,1) & red attractor \\ negative & (0,1) & green attractor \\ \hline \end{tabular}
\end{table}
Table 1: Attractor type corresponding to the monkey’s experiment
scheme that \([\mathbf{\xi}^{\text{in}}\mathbf{\Sigma}(\mathbf{\xi}^{\text{out}})^{\top}]_{ij}\sim O(\frac{1}{\sqrt{N}})\), similar to what is done in multi-layered perceptron learning [22]. The training is implemented by the adaptive moment estimation (Adam [28]) stochastic gradient descent algorithm to minimize the loss function (cross-entropy for classification or mean-squared error for regression) with a learning rate of 0.001. We remark that training in the mode space can be done using automatic differentiation in PyTorch. However, we leave the detailed derivation of the learning rule to the supplemental material. Codes are available in our GitHub [29].
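For concreteness, a common way (an assumption here, not necessarily the authors' exact implementation) to realize Eq. (6) is to keep the exact Heaviside in the forward pass and substitute the derivative of \(x/(1+k|x|)\), i.e., \(1/(1+k|x|)^{2}\), in the backward pass:

```python
import torch

class SurrSpike(torch.autograd.Function):
    k = 25.0                                   # steepness parameter from the text

    @staticmethod
    def forward(ctx, u):                       # u = U - U_thr
        ctx.save_for_backward(u)
        return (u > 0).float()                 # exact Heaviside in the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        # derivative of x / (1 + k|x|) is 1 / (1 + k|x|)^2
        surr = 1.0 / (1.0 + SurrSpike.k * u.abs()) ** 2
        return grad_out * surr

spike_fn = SurrSpike.apply                     # spikes = spike_fn(U - U_thr)
```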
Figure 1: Model structure for MNIST and contextual integration task. (a) Model structure for MNIST task. Each image is converted to spiking activity input to the recurrent reservoir, by Poisson spiking neurons whose rate is determined by the pixel intensity (see details in the main text). \(T=100\)ms in our learning setting. (b) The activity profile of one readout unit for the MNIST task. Only the maximal value is taken for classification. (c) Model structure for contextual integration task. If the cued (context 1 or context 2) input signal is generated using a positive offset value, then the network is supervised to produce an output approaching \(+1\) regardless of the irrelevant input signals (e.g., those coming from the other context).
### MNIST classification task
As a proof of concept, we apply first SMNN to the benchmark MNIST dataset. The maximum of recurrent unit activity over time is estimated by \(a_{i}=\max_{t}(r_{i}(t))\), which are read out by the readout neurons as follows,
\[\mathbf{o}=\text{softmax}(\mathbf{W}^{\text{out}}\mathbf{a}), \tag{7}\]
where readout weights \(\mathbf{W}^{\text{out}}\in\mathbb{R}^{10\times N}\). Using BPTT, the gradients in the mode space \(\mathbf{\theta}=(\mathbf{\xi}^{\text{in}},\mathbf{\Sigma},\mathbf{\xi}^{\text{out}})\) is given by
\[\frac{\partial\mathcal{L}}{\partial\mathbf{\xi}^{\text{in}}} =\sum_{t=1}^{T}\frac{\partial\mathcal{L}}{\partial\mathbf{I}(t)} \frac{\partial\mathbf{I}(t)}{\partial\mathbf{W}^{\text{rec}}}\frac{\partial\mathbf{W}^ {\text{rec}}}{\partial\mathbf{\xi}^{\text{in}}}=\sum_{t=1}^{T}\frac{\partial \mathcal{L}}{\partial\mathbf{I}(t)}\mathbf{r}(t-1)\mathbf{\xi}^{\text{out}}\mathbf{ \Sigma}, \tag{8}\] \[\frac{\partial\mathcal{L}}{\partial\mathbf{\xi}^{\text{out}}} =\sum_{t=1}^{T}\frac{\partial\mathcal{L}}{\partial\mathbf{I}(t)} \frac{\partial\mathbf{I}(t)}{\partial\mathbf{W}^{\text{rec}}}\frac{\partial\mathbf{W}^ {\text{rec}}}{\partial\mathbf{\xi}^{\text{out}}}=\sum_{t=1}^{T}\frac{\partial \mathcal{L}}{\partial\mathbf{I}(t)}\mathbf{r}(t-1)\mathbf{\xi}^{\text{in}}\mathbf{ \Sigma},\]
where \(\mathbf{\xi}^{\text{in/out}}_{\mu}\in\mathbb{R}^{N}\), \(T\) is the length of the training trajectory, \(\mathcal{L}\) is the loss function, whose gradients \(\frac{\partial\mathcal{L}}{\partial\mathbf{I}(t)}\) with respect to total synaptic currents are calculated explicitly in the supplemental material. Each training mini-batch contains 100 images, and LIF networks with different mode sizes (\(P=1,2,\dots 50,100\)) are trained.
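For concreteness, the max-over-time readout of Eq. (7) can be sketched as follows; the tensor names are assumptions.

```python
import torch

def readout(r_t, W_out):
    """r_t: (T, N) filtered spike trains; W_out: (10, N) readout weights."""
    a = r_t.max(dim=0).values                  # a_i = max_t r_i(t)
    return torch.softmax(W_out @ a, dim=0)     # class probabilities, Eq. (7)
```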
With increasing mode size, the test accuracy increases [Figure 2 (a) and (b)]. Better generalization is achieved by a larger neural pool. Surprisingly, even for the smallest mode size (\(P=1\)), the network can be trained to perform well, reaching an accuracy of 96% for \(N=200\). In comparison with the rate network counterpart, the spiking model performs better when the mode size is relatively large [Figure 2 (c)], while being more energy efficient than the rate network. We also plot the membrane potential profile for a few representative neurons and observe a network-wide synchronous oscillation at a later training stage [Figure 2 (d) and (e)], which signals a neural correlate of visual input classification in neural circuits based on spiking population codes.
### Contextual integration task
In the contextual integration task, the network activity is read out by an affine transformation as
\[o(t)=\mathbf{W}^{\rm out}\mathbf{r}(t), \tag{9}\]
where \(\mathbf{W}^{\rm out}\in\mathbb{R}^{1\times N}\) refers to the readout weights, and \(\mathbf{r}(t)\) is the filtered spike train. We use the root mean squared (RMSE) loss function defined as
\[\mathcal{L}=\sqrt{\sum_{t=0}^{T}(z(t)-o(t))^{2}}, \tag{10}\]
Figure 2: Learning performance of MNIST classification task. Five independent runs are used to estimate the standard deviation. (a) Test accuracy versus different mode size. (b) Loss function as a function of training mini-batch, each composed of 100 digit images. The mode size varies, and the network size \(N=100\). (c) Comparison of the accuracy between rate and spiking model with fixed network size \(N\)=200. (d) Membrane potential trace for three typical reservoir neurons in response to spike train inputs. The un-smooth profile is due to the discretization of the dynamics with step size \(\Delta t\). The triangle marks when the filtered spike train takes a maximum value. \((P,N)=(3,100)\). (e) Spike trains of reservoir neurons with neuron firing rate (right) and population average firing rate (bottom). \((P,N)=(3,100)\).
where \(z(t)\) is the target output in time \(t\), taking zero except in the response period, and \(T\) denotes the time range. In analogy to the previous MNIST classification task, the gradient in the mode space \(\mathbf{\theta}=(\mathbf{\xi}^{\text{in}},\mathbf{\Sigma},\mathbf{\xi}^{\text{out}})\) for the current context-dependent computation could be similarly derived (see details in the supplemental material). The learning rate was set to \(0.001\), and each training mini-batch contains \(100\) trials.
As shown in Figure 3 (a), the test MSE does not vary strongly with the mode size, especially for a relatively large neural population. For a smaller one \(N=30\), the performance becomes better with increasing the value of \(P\). However, the best performance is still worse than that of large-sized networks. The training dynamics in Figure 3 (b) shows that a larger value of \(P\) speeds up the learning. Figure 3 (c) confirms that our training protocol
Figure 3: Learning performance of contextual integration task. Five independent runs are used to estimate the standard deviation. (a) Mean squared error (MSE) versus different mode size. (b) MSE as a function of training mini-batch for different mode sizes. The network size \(N\)=100. (c) Average output activity in response to test inputs for \((P,N)=(3,100)\). The shaded region indicates the stimulus period. Sensory inputs are only shown during the stimulus period, followed by a response period. The fluctuation over 100 random trials is also shown. Colored lines are two target outputs. (d) Membrane potential trace for four typical reservoir neurons in response to a random input. The shaded region indicates the stimulus period. (e) Spike raster of reservoir neurons with neuron firing rate (right) and population average firing rate (bottom). \((P,N)=(3,100)\) for (d) and (e).
succeeds in reproducing the results of the monkey experiments on the task of flexible selective integration of sensory inputs. We can even look at the dynamics profile of the membrane potential. During the stimulus period, each neuron has its own time scale to encode the input signals, and at the network scale, we do not observe any regular patterns. However, after the sensory input is turned off, the network is immediately required to make a decision, and we observe that the firing frequency of the neural pool is elevated; interestingly, some neurons seem to keep the same pace. This reflects basic properties of the neural manifold behind perceptual decision making, which we detail in the next section.
### Projection in the mode space
To study the neural manifold underlying perceptual decision making, we project the high-dimensional neural activity onto the mode space. If a small number of modes is sufficient to capture the performance, we can visualize the manifold without principal component analysis, which in general cannot retain all information in a low-dimensional space. This is one merit of our method. As expected, after training, four separated attractors are formed in the mode space, either in the input mode space or in the output mode space [Figure 4 (a)]. The test dynamics flows to the corresponding attractor depending on the context of the task.
We next consider a context switching experiment and look at how the dynamics changes in the low-dimensional intrinsic space. The experimental protocol is shown in Figure 4 (b). At \(t=300\)ms, the context is switched to the other one, and the network should carry out the computation according to the new context, e.g., making a correct response to the input signal. Before the context is switched, the cued input signal has a positive offset. Correspondingly, the neural activity trajectory in the mode space evolves to the left attractor [Figure 4 (c)]. Once the context is switched, the newly cued input signal has a negative offset. The neural trajectory is then directed to the neighboring green attractor. Therefore, the neural dynamics can be guided by the contextual cue, mimicking what occurs in the prefrontal cortex of monkeys performing the contextual integration task [23].
### Power law for connectivity importance scores
We next ask whether some modes are more important than others. To make the magnitudes of the patterns and importance scores comparable, we rank the modes according to the following measure [22]
\[\tau_{\mu}=\chi||\mathbf{\xi}_{\mu}^{\text{in}}||_{2}+\chi||\mathbf{\xi}_{\mu}^{\text{ out}}||_{2}+|\lambda_{\mu}|, \tag{11}\]
where \(\mathbf{\xi}_{\mu}^{\text{in/out}}\in\mathbb{R}^{N}\) and \(\chi=\sum_{\mu}|\lambda_{\mu}|/\sum_{\mu}(\|\mathbf{\xi}_{\mu}^{\text{in}}\|_{2}+\|\mathbf{\xi}_{\mu}^{\text{out}}\|_{2})\). We observe a piecewise power law for the \(\tau\) measure (Figure 5), implying that the ambient space of neural activity is actually low-dimensional and can be projected to a low-dimensional mode space, with \(P_{\text{dom}}\) (taking
Figure 4: The filtered spike train of hidden layer projected to the mode space for the contextual integration task. The mode size \(P=3\), and the network size \(N=100\). (a) Projection into the input mode space. Three hundred randomly generated trials were used. Different colors encode different offset sign and contextual cue. The color gets darker with time in the dynamics trajectory. (b) Context switching experiment. At \(t=300\)ms, the previous contextual cue is shifted to the other one. The left-y axis encodes the input signals, while the right-y axis encodes the contextual information. (c) Activity projection for the context switching experiment in (b). (d) Projection coefficient in the input mode space for the context switching experiment in (b).
a small value in Figure 5) dominant coordinate axes on which the importance scores vary mildly. On the one hand, the task information is coded hierarchically in the mode space; on the other hand, this observation supports the idea that fast training of spiking networks is possible by focusing on the leading modes explaining the network's macroscopic behavior.
### Reduced dynamics in the mode space
Here, we will show how the dynamics of network activity based on our SMNN learning in the contextual integration task can be projected to a low dimensional counterpart. We follow the previous works of low-rank recurrent neural networks [16; 18]. To construct a subspace spanned by orthogonal bases, we first split \(\mathbf{W}^{\rm in}\) into the parts parallel and orthogonal to the input mode \(\mathbf{\xi}^{\rm in}\),
\[\mathbf{W}^{\rm in}_{s}=\sum_{\mu}\alpha_{\mu}\mathbf{\xi}^{\rm in}_{\mu}+\beta_{s}\bm {W}^{s}_{\perp}, \tag{12}\]
where \(s\) indicates which component of input signals, and \(\alpha_{\mu}\) and \(\beta_{s}\) are coefficients for the linear combination. We assume that the bases are orthogonal; otherwise, one can use the Gram-Schmidt procedure to obtain the orthogonal bases.
We can then express the membrane potential \(\mathbf{U}(t)\) in the above orthogonal bases as
\[\mathbf{U}(t)=\sum_{\mu}\gamma_{\mu}(t)\mathbf{\xi}_{\mu}^{\text{in}}+\sum_{s}v_{s}( t)\mathbf{W}_{\perp}^{s}, \tag{13}\]
where the coefficients for the linear combination are given below,
\[\begin{split}&\gamma_{\mu}(t)=\frac{(\mathbf{\xi}_{\mu}^{\text{in}})^{ \top}\mathbf{U}(t)}{(\mathbf{\xi}_{\mu}^{\text{in}})^{\top}\mathbf{\xi}_{\mu}^{\text{ in}}},\\ & v_{s}(t)=\frac{(\mathbf{W}_{\perp}^{s})^{\top}\mathbf{U}(t)}{(\mathbf{W }_{\perp}^{s})^{\top}\mathbf{W}_{\perp}^{s}}.\end{split} \tag{14}\]
With the above linear combination, the dynamics of \(\mathbf{v}(t)\) is captured by
\[\tau_{\text{mem}}\frac{\mathrm{d}v_{s}(t)}{\mathrm{d}t}=-v_{s}(t)+\beta_{s}u_ {s}(t), \tag{15}\]
which is derived by using the original dynamics of the membrane potential [Eq. (1)].
Similarly, we obtain the dynamics of \(\mathbf{\gamma}(t)\) as follows,
\[\tau_{\text{mem}}\frac{\mathrm{d}\mathbf{\gamma}(t)}{\mathrm{d}t}=\mathbf{F}(t, \mathbf{\gamma}(t),\mathbf{r}(t),\mathbf{u}(t)), \tag{16}\]
where the force function \(\mathbf{F}\) is defined below,
\[\mathbf{F}=-\mathbf{\gamma}(t)+\mathbf{\Sigma}(\mathbf{\xi}^{\text{out}})^{\top}\mathbf{r }(t)+\mathbf{\alpha}\odot\mathbf{b}, \tag{17}\]
where the vector \(\mathbf{b}=\sum_{s}u_{s}(t)\mathcal{I}\), and \(\mathcal{I}\) is an all-one vector of dimension \(P\). To derive Eq. (17), we have made the fast synaptic dynamics assumption, i.e., \(\tau_{\text{syn}}\to 0\). In addition, the filtered spike trains \(\mathbf{r}(t)\) can also be written as a linear combination,
\[\mathbf{r}(t)=\sum_{\mu}\kappa_{\mu}(t)\mathbf{\xi}_{\mu}^{\text{in}}+\sum_{s}\nu_{s}( t)\mathbf{W}_{\perp}^{s}, \tag{18}\]
where the coefficients are given respectively by
\[\begin{split}\kappa_{\mu}(t)&=\frac{(\mathbf{\xi}_{\mu}^{ \text{in}})^{\top}\mathbf{r}(t)}{(\mathbf{\xi}_{\mu}^{\text{in}})^{\top}\mathbf{\xi}_{ \mu}^{\text{in}}},\\ \nu_{s}(t)&=\frac{(\mathbf{W}_{\perp}^{s})^{\top}\mathbf{ r}(t)}{(\mathbf{W}_{\perp}^{s})^{\top}\mathbf{W}_{\perp}^{s}}.\end{split} \tag{19}\]
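As a minimal sketch, the projection coefficients \(\kappa_{\mu}(t)\) of Eq. (19) can be computed directly from the filtered spike trains; the tensor names are assumptions.

```python
import torch

def project_onto_modes(r_t, xi_in):
    """r_t: (T, N) filtered spike trains; xi_in: (N, P) input modes
    -> kappa: (T, P), the coefficients kappa_mu(t) of Eq. (19)."""
    norms = (xi_in * xi_in).sum(dim=0)         # (P,) squared norm of each mode
    return (r_t @ xi_in) / norms               # broadcasts over time steps
```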
Next, we consider a simple exponential synaptic filter characterized by a time constant \(\tau_{s}\) (other synapse types can be similarly analyzed), and project the spiking activity \(\mathbf{S}\) onto the orthogonal bases. The resulting reduced dynamics thus reads
\[\tau_{s}\frac{\mathrm{d}\nu_{s}(t)}{\mathrm{d}t}=-\nu_{s}(t)+\frac{(\mathbf{W}_{ \perp}^{s})^{\top}\mathbf{S}(t)}{(\mathbf{W}_{\perp}^{s})^{\top}\mathbf{W}_{\perp}^{s}}. \tag{20}\]
In addition, we also have
\[\tau_{s}\frac{\mathrm{d}\kappa_{\mu}(t)}{\mathrm{d}t}=-\kappa_{\mu}(t)+\frac{(\mathbf{\xi}_{\mu}^{\text{in}})^{\top}\mathbf{S}(t)}{(\mathbf{\xi}_{\mu}^{\text{in}})^{\top}\mathbf{\xi}_{\mu}^{\text{in}}}. \tag{21}\]
The spiking activity is given by \(\mathbf{S}(t)=\Theta\Bigl{(}\sum_{\mu}\gamma_{\mu}(t)\mathbf{\xi}_{\mu}^{\text{in}}+\sum_{s}v_{s}(t)\mathbf{W}_{\perp}^{s}-U_{\text{thr}}\mathcal{I}\Bigr{)}\). We plot the dynamics of \(\mathbf{\kappa}\) for the context-switching experiment [Figure 4(d)], which reveals qualitatively the same behavior as observed in Figure 4(c).
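The reduced system can also be integrated directly; the following forward-Euler sketch illustrates its structure (all constants, the toy input, and the omission of a membrane reset are simplifying assumptions).

```python
import numpy as np

# Forward-Euler sketch of the reduced dynamics, Eqs. (15)-(21).
rng = np.random.default_rng(1)
N, P, S_in = 100, 3, 2
dt, tau_mem, tau_s, U_thr = 1e-3, 20e-3, 5e-3, 1.0   # placeholder constants

Q, _ = np.linalg.qr(rng.standard_normal((N, P + S_in)))
xi_in, W_perp = Q[:, :P], Q[:, P:]            # orthonormal bases
xi_out = rng.standard_normal((N, P)) / np.sqrt(N)
Sigma = np.diag(rng.random(P))                # diagonal importance scores
alpha = rng.standard_normal(P)
beta = rng.standard_normal(S_in)

gamma, v = np.zeros(P), np.zeros(S_in)        # coefficients of U(t), Eq. (13)
kappa, nu = np.zeros(P), np.zeros(S_in)       # coefficients of r(t), Eq. (18)
for step in range(1000):
    u = np.sin(2 * np.pi * 5 * step * dt) * np.ones(S_in)  # toy input u_s(t)
    U = xi_in @ gamma + W_perp @ v                         # reconstruct U(t)
    spikes = (U >= U_thr).astype(float)                    # S(t); reset omitted
    r = xi_in @ kappa + W_perp @ nu                        # reconstruct r(t)
    F = -gamma + Sigma @ (xi_out.T @ r) + alpha * u.sum()  # Eq. (17), b = sum_s u_s
    gamma += dt / tau_mem * F                              # Eq. (16)
    v += dt / tau_mem * (-v + beta * u)                    # Eq. (15)
    kappa += dt / tau_s * (-kappa + xi_in.T @ spikes)      # Eq. (21)
    nu += dt / tau_s * (-nu + W_perp.T @ spikes)           # Eq. (20)
```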
## VI Conclusion
In this work, we propose a spiking mode-based neural network (SMNN) framework for training various computational tasks. The framework is based on mode-decomposition learning inspired by Hopfield-network and multi-layered-perceptron training [20; 22]. With the SMNN learning rule, we can adapt the mode size to the task difficulty and retrieve the power-law behavior of the importance scores; furthermore, the high-dimensional recurrent activity can be projected onto the low-dimensional mode space spanned by a few leading modes, a consequence of the power-law behavior. Using a few modes, we can speed up the training of recurrent spiking networks, thereby making large-scale spike-based computation practical. Further extensions of this work will allow us to treat more realistic biological networks, e.g., excitatory-inhibitory networks, sparse network connectivity, and sequence memory in network activity. To conclude, our work provides a fast, interpretable, and biologically plausible framework for analyzing neuroscience experiments and designing brain-like computational circuits.
## VII Acknowledgments
We thank Yang Zhao for an earlier involvement of this project, especially the derivation of the discretization of continuous neural and synaptic dynamics, and Yuhao Li for helpful discussions. This research was supported by the National Natural Science Foundation of China for Grant Number 12122515 (H.H.), and Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices (No. 2022B1212010008), and Guangdong Basic and Applied Basic Research Foundation (Grant No. 2023B1515040023).
## Appendix A Discretization of continuous neural and synaptic dynamics
The continuous dynamics of interest can be written in the following form,
\[\tau\frac{\mathrm{d}x}{\mathrm{d}t}=-x(t)+y(t), \tag{10}\]
where \(y(t)\) is a time-dependent driving term. This first-order linear differential equation has a solution,
\[x(t)=\frac{1}{\tau}e^{-t/\tau}\int_{0}^{t}y(s)e^{s/\tau}ds, \tag{11}\]
which corresponds to the initial condition \(x(0)=0\). Next, we assume the discretization step size \(\Delta t\) is small. We then have
\[\begin{split} x(t+\Delta t)&=\frac{1}{\tau}e^{- \frac{t+\Delta t}{\tau}}\int_{0}^{t+\Delta t}y(s)e^{s/\tau}ds\\ &=\lambda_{\tau}x(t)+\frac{1}{\tau}e^{-\frac{t+\Delta t}{\tau}} \int_{t}^{t+\Delta t}y(s)e^{s/\tau}ds\\ &=\lambda_{\tau}x(t)+\frac{\lambda_{\tau}}{\tau}\int_{0}^{\Delta t }y(t+s)e^{s/\tau}ds\\ &\simeq\lambda_{\tau}x(t)+\frac{\lambda_{\tau}}{\tau}y(t)\int_{0 }^{\Delta t}e^{s/\tau}ds\\ &=\lambda_{\tau}x(t)+(1-\lambda_{\tau})y(t),\end{split} \tag{12}\]
where we changed the integration variable in the third equality; the approximation in the fourth line holds when \(\Delta t\) is close to zero, and we define \(\lambda_{\tau}=e^{-\Delta t/\tau}\). Note that in our simulations, \(\tau_{r}\) and \(\tau_{\rm syn}\) are relatively small, and we therefore neglect the corresponding decay factor in the second term of the last line in Eq. (18). A full set of discrete dynamics is given in Eq. (17).
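The exponential update derived above is easy to verify numerically against the exact solution; a minimal sketch (the driving term \(y(t)\) is an arbitrary choice):

```python
import numpy as np

# Numerical check of the exponential update x <- lam*x + (1-lam)*y(t)
# against the exact solution of the first-order linear ODE.
tau, dt, T = 10e-3, 1e-3, 0.2
lam = np.exp(-dt / tau)                  # lambda_tau = exp(-dt/tau)
y = lambda t: np.sin(40.0 * t)           # arbitrary smooth driving term

x, xs = 0.0, []
for k in range(int(T / dt)):
    x = lam * x + (1 - lam) * y(k * dt)  # discrete update derived above
    xs.append(x)

# exact solution x(t) = (1/tau) e^{-t/tau} int_0^t y(s) e^{s/tau} ds via quadrature
t_fine = np.linspace(0.0, T, 20001)
integral = np.cumsum(y(t_fine) * np.exp(t_fine / tau)) * (t_fine[1] - t_fine[0])
x_exact = np.exp(-t_fine / tau) * integral / tau
# the two trajectories agree up to O(dt) discretization error
```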
## Appendix B Derivation of mode-based learning rules
In this section, we provide details to derive the spiking mode-based learning rules. First, the discrete update rules for the dynamics are given below,
\[\begin{split}\mathbf{r}(t+1)&=\lambda_{d}\mathbf{r}(t)+(1-\lambda_{d})\mathbf{h}(t+1),\\ \mathbf{h}(t+1)&=\lambda_{r}\mathbf{h}(t)+\mathbf{S}(t+1),\\ \mathbf{U}(t+1)&=\Big{(}\lambda_{\rm mem}\mathbf{U}(t)+(1-\lambda_{\rm mem})\mathbf{I}(t+1)\Big{)}\odot\Big{(}\mathcal{I}-\mathbf{S}(t)\Big{)},\\ \mathbf{I}(t+1)&=\lambda_{\rm syn}\mathbf{I}(t)+\boldsymbol{W}^{\rm rec}\mathbf{r}(t)+\boldsymbol{W}^{\rm in}\mathbf{u}(t),\end{split} \tag{17}\]
where \(t\) is a discrete time step (in units of \(\Delta t\)), and \(\lambda_{i}=\exp(-\Delta t/\tau_{i})\). With the loss function \(\mathcal{L}\), we define the error gradient \(\boldsymbol{\mathcal{K}}(t)\equiv\frac{\partial\mathcal{L}}{\partial\mathbf{r}(t)}\); using the chain rule, we immediately obtain the following results.
Case 1 (\(t=T\)):
\[\begin{split}\frac{\partial\mathcal{L}}{\partial\mathbf{r}(T)}& =\boldsymbol{\mathcal{K}}(T),\\ \frac{\partial\mathcal{L}}{\partial\mathbf{I}(T)}&= \boldsymbol{\mathcal{K}}(T)\frac{\partial\mathbf{r}(T)}{\partial\mathbf{I}(T )}.\end{split} \tag{18}\]
Case 2 (\(t<T\)):
\[\begin{split}\frac{\partial\mathcal{L}}{\partial\mathbf{r}(t)}& =\boldsymbol{\mathcal{G}}(t)+\boldsymbol{\mathcal{K}}(t+1)\frac{ \partial r(t+1)}{\partial\mathbf{r}(t)}\\ &=\boldsymbol{\mathcal{G}}(t)+\lambda_{d}\boldsymbol{\mathcal{K}} (t+1),\\ \frac{\partial\mathcal{L}}{\partial\mathbf{I}(t)}& =\frac{\partial\mathcal{L}}{\partial\mathbf{I}(t+1)}\frac{ \partial\mathbf{I}(t+1)}{\partial\mathbf{I}(t)}+\frac{\partial\mathcal{L}}{ \partial\mathbf{r}(t)}\frac{\partial\mathbf{r}(t)}{\partial\mathbf{I}(t)}\\ &=\lambda_{\rm syn}\boldsymbol{\mathcal{K}}(t+1)\frac{\partial \mathbf{r}(t+1)}{\partial\mathbf{I}(t+1)}+\boldsymbol{\mathcal{K}}(t)\frac{ \partial\mathbf{r}(t)}{\partial\mathbf{I}(t)},\end{split} \tag{19}\]
where \(\boldsymbol{\mathcal{G}}(t)\) is the explicit derivative of \(\mathcal{L}\) with respect to \(\mathbf{r}(t)\). Using the surrogate gradient of the step function, we have
\[\begin{split}\mathbf{S}(t)&=\Theta(\mathbf{U}(t)-U_{ \text{thr}}\mathcal{I})\approx\frac{\mathbf{U}(t)-U_{\text{thr}}\mathcal{I}}{1+ k|\mathbf{U}(t)-U_{\text{thr}}\mathcal{I}|},\\ \frac{\partial\mathbf{S}(t)}{\partial\mathbf{U}(t)}& =\text{diag}\Big{(}\frac{1}{(1+k|\mathbf{U}(t)-U_{\text{thr}} \mathcal{I}|)^{2}}\Big{)},\end{split} \tag{10}\]
where \(\mathcal{I}\) represents an all-one vector, and \(|\mathbf{a}|\) denotes the element-wise absolute value of a vector \(\mathbf{a}\). As a result, we can easily obtain
\[\begin{split}\frac{\partial\mathbf{r}(t)}{\partial\mathbf{I}(t)}& =\frac{\partial\mathbf{r}(t)}{\partial\mathbf{h}(t)}\frac{ \partial\mathbf{h}(t)}{\partial\mathbf{S}(t)}\frac{\partial\mathbf{S}(t)}{ \partial\mathbf{U}(t)}\frac{\partial\mathbf{U}(t)}{\partial\mathbf{I}(t)}\\ &=(1-\lambda_{d})(1-\lambda_{\text{mem}})\text{diag}\Big{(}\frac{ 1}{(1+k|\mathbf{U}(t)-U_{\text{thr}}\mathcal{I}|)^{2}}\Big{)}.\end{split} \tag{11}\]
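In practice, the surrogate gradient is implemented as a custom backward pass in automatic-differentiation frameworks; a minimal PyTorch sketch (the sharpness \(k\) and the threshold are placeholder values):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside step forward; surrogate derivative 1/(1+k|U-U_thr|)^2 backward."""
    k, U_thr = 25.0, 1.0   # placeholder sharpness and threshold

    @staticmethod
    def forward(ctx, U):
        ctx.save_for_backward(U)
        return (U >= SurrogateSpike.U_thr).float()   # S(t) = Theta(U - U_thr)

    @staticmethod
    def backward(ctx, grad_out):
        (U,) = ctx.saved_tensors
        sg = 1.0 / (1.0 + SurrogateSpike.k * (U - SurrogateSpike.U_thr).abs()) ** 2
        return grad_out * sg

spike = SurrogateSpike.apply   # drop-in replacement for the step function
```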
Therefore, we can update the mode component and the connectivity importance as follows,
\[\begin{split}\frac{\partial\mathcal{L}}{\partial\mathbf{\xi}^{\text{in}}}&=\sum_{t=1}^{T}\frac{\partial\mathcal{L}}{\partial\mathbf{I}(t)}\frac{\partial\mathbf{I}(t)}{\partial\mathbf{W}^{\text{rec}}}\frac{\partial\mathbf{W}^{\text{rec}}}{\partial\mathbf{\xi}^{\text{in}}}=\sum_{t=1}^{T}\frac{\partial\mathcal{L}}{\partial\mathbf{I}(t)}\mathbf{r}(t-1)\mathbf{\xi}^{\text{out}}\mathbf{\Sigma},\\ \frac{\partial\mathcal{L}}{\partial\lambda_{\mu}}&=\sum_{t=1}^{T}\frac{\partial\mathcal{L}}{\partial\mathbf{I}(t)}\frac{\partial\mathbf{I}(t)}{\partial\mathbf{W}^{\text{rec}}}\frac{\partial\mathbf{W}^{\text{rec}}}{\partial\lambda_{\mu}}=\sum_{t=1}^{T}\frac{\partial\mathcal{L}}{\partial\mathbf{I}(t)}\mathbf{r}(t-1)\mathbf{\xi}^{\text{in}}_{\mu}(\mathbf{\xi}^{\text{out}}_{\mu})^{\top},\\ \frac{\partial\mathcal{L}}{\partial\mathbf{\xi}^{\text{out}}}&=\sum_{t=1}^{T}\frac{\partial\mathcal{L}}{\partial\mathbf{I}(t)}\frac{\partial\mathbf{I}(t)}{\partial\mathbf{W}^{\text{rec}}}\frac{\partial\mathbf{W}^{\text{rec}}}{\partial\mathbf{\xi}^{\text{out}}}=\sum_{t=1}^{T}\frac{\partial\mathcal{L}}{\partial\mathbf{I}(t)}\mathbf{r}(t-1)\mathbf{\xi}^{\text{in}}\mathbf{\Sigma}.\end{split} \tag{12}\]
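Alternatively, instead of hand-deriving these three gradients, one can parametrize \(\boldsymbol{W}^{\rm rec}\) directly by its mode decomposition and let automatic differentiation produce them; a sketch under assumed shapes:

```python
import torch

# Sketch: parametrize W^rec = xi^in Sigma (xi^out)^T / sqrt(PN) so that autograd
# produces the gradients w.r.t. the modes and importance scores automatically.
N, P = 100, 3
xi_in = torch.randn(N, P, requires_grad=True)
xi_out = torch.randn(N, P, requires_grad=True)
lam = torch.randn(P, requires_grad=True)      # importance scores, diagonal of Sigma

W_rec = (xi_in @ torch.diag(lam) @ xi_out.T) / (P * N) ** 0.5
loss = (W_rec @ torch.randn(N)).pow(2).sum()  # toy loss purely for illustration
loss.backward()                               # fills xi_in.grad, lam.grad, xi_out.grad
```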
For the handwritten digit classification task, \(a_{i}=\max_{t}(r_{i}(t))=r_{i}(t^{m}_{i})\), \(\mathbf{o}=\text{softmax}(\boldsymbol{W}^{\text{out}}\mathbf{a})\), and \(\mathcal{L}=-\mathbf{z}^{\top}\ln\mathbf{o}\), where \(t^{m}_{i}\) is the time when the maximal firing rate is reached, and \(\mathbf{z}\) is the one-hot target label. We then have the following equations for updating the time-dependent error gradients.
\[\begin{split}\mathcal{K}_{i}(T)&=\frac{\partial \mathcal{L}}{\partial a_{i}}\sum_{j}\frac{\partial a_{i}}{\partial r_{j}(T)}\\ &=\Big{(}(\mathbf{o}-\mathbf{z})\mathbf{W}^{\text{out}}\Big{)}_{i}\sum _{j}\frac{\partial r_{i}(t^{m}_{i})}{\partial r_{j}(T)}\\ &=\Big{(}(\mathbf{o}-\mathbf{z})\mathbf{W}^{\text{out}}\Big{)}_{i}\sum _{j}\frac{\partial r_{i}(t^{m}_{i})}{\partial r_{j}(t^{m}_{i})}\delta(T-t^{m}_ {i})\\ &=\Big{(}(\mathbf{o}-\mathbf{z})\mathbf{W}^{\text{out}}\Big{)}_{i} \delta(T-t^{m}_{i}),\end{split} \tag{13}\]
and
\[\begin{split}\mathcal{K}_{i}(t)&=\frac{\partial\mathcal{L}}{\partial a_{i}}\sum_{j}\frac{\partial a_{i}}{\partial r_{j}(t)}+\lambda_{d}\mathcal{K}_{i}(t+1)\\ &=\Big{(}(\mathbf{o}-\mathbf{z})\boldsymbol{W}^{\text{out}}\Big{)}_{i}\sum_{j}\frac{\partial a_{i}}{\partial r_{j}(t)}+\lambda_{d}\mathcal{K}_{i}(t+1),\quad\forall t<T.\end{split} \tag{10}\]
The sum in the first term of the last equality in Eq. (10) can be explicitly calculated out, i.e.,
\[\begin{split}\sum_{j}\frac{\partial a_{i}}{\partial r_{j}(t)}& =\sum_{j}\frac{\partial r_{i}(t_{i}^{m})}{\partial r_{j}(t)}\\ &=\sum_{j}\frac{\partial r_{i}(t_{i}^{m})}{\partial r_{j}(t_{i}^{ m})}\delta(t-t_{i}^{m})+\sum_{j}\frac{\partial r_{i}(t_{i}^{m})}{\partial r_{j}(t_{i}^ {m}-1)}\delta(t+1-t_{i}^{m})\\ &=\delta(t+1-t_{i}^{m})\sum_{j}\Big{[}\delta_{ij}\lambda_{d}+ \left[\frac{\partial\mathbf{r}(t_{i}^{m})}{\partial\mathbf{I}(t_{i}^{m})} \right]_{ii}\Big{(}W_{ij}^{\text{rec}}+\lambda_{\text{syn}}\left[\frac{ \partial\mathbf{I}(t_{i}^{m}-1)}{\partial\mathbf{r}(t_{i}^{m}-1)}\right]_{ij} \Big{)}\Big{]}+\delta(t-t_{i}^{m}),\end{split} \tag{11}\]
where the term \(\left[\frac{\partial\mathbf{I}(t_{i}^{m}-1)}{\partial\mathbf{r}(t_{i}^{m}-1)} \right]_{ij}=0\) for \(i\neq j\) [Eq. (11)].
For the contextual integration task, \(o(t)=\boldsymbol{W}^{\text{out}}\mathbf{r}(t)\) and \(\mathcal{L}=\sqrt{\sum_{t=0}^{T}(z(t)-o(t))^{2}}\), where \(z(t)\) is the target output. In a similar way, one can derive the following error gradients,
\[\boldsymbol{\mathcal{K}}(T) =\frac{z(T)-o(T)}{\mathcal{L}}\boldsymbol{W}^{\text{out}}, \tag{12a}\] \[\boldsymbol{\mathcal{K}}(t) =\frac{z(t)-o(t)}{\mathcal{L}}\boldsymbol{W}^{\text{out}}+\lambda _{d}\boldsymbol{\mathcal{K}}(t+1),\forall t<T. \tag{12b}\]
The other update equations are the same as above.
## Appendix C Initialization and model parameters
The initialization scheme is given in Table 2, where the constructed recurrent weights should be multiplied by a factor of \(\frac{1}{\sqrt{PN}}\)[22]. The hyperparameters used in the neural dynamics equations are given in Table 3.
|
2302.09461 | Liveness score-based regression neural networks for face anti-spoofing | Previous anti-spoofing methods have used either pseudo maps or user-defined
labels, and the performance of each approach depends on the accuracy of the
third party networks generating pseudo maps and the way in which the users
define the labels. In this paper, we propose a liveness score-based regression
network for overcoming the dependency on third party networks and users. First,
we introduce a new labeling technique, called pseudo-discretized label encoding
for generating discretized labels indicating the amount of information related
to real images. Secondly, we suggest the expected liveness score based on a
regression network for training the difference between the proposed supervision
and the expected liveness score. Finally, extensive experiments were conducted
on four face anti-spoofing benchmarks to verify our proposed method on both
intra- and cross-dataset tests. The experimental results show our approach
outperforms previous methods. | Youngjun Kwak, Minyoung Jung, Hunjae Yoo, JinHo Shin, Changick Kim | 2023-02-19T02:45:35Z | http://arxiv.org/abs/2302.09461v2 | # Liveness Score-Based Regression Neural Networks for Face Anti-Spoofing
###### Abstract
Previous anti-spoofing methods have used either pseudo maps or user-defined labels, and the performance of each approach depends on the accuracy of the third-party networks generating the pseudo maps and the way in which users define the labels. In this study, we propose a liveness score-based regression network to overcome the dependency on third-party networks and users. First, we introduce a new labeling technique, called pseudo-discretized label encoding, for generating discretized labels indicating the amount of information related to real images. Secondly, we suggest an expected liveness score based on a regression network, trained to minimize the difference between the proposed supervision and the expected liveness score. Finally, extensive experiments were conducted on four face anti-spoofing benchmarks to verify our proposed method on both intra- and cross-dataset tests. The experimental results show that our approach outperforms previous methods.
## 1 Introduction
Face anti-spoofing (FAS) systems have been successfully established in face authentication and are widely used in online banking, electronic payments, and securities as a crucial technique. Despite this substantial success, FAS remains vulnerable to various presentation attacks (PAs), such as printed materials, replay videos, and 3D masks. To alleviate this vulnerability, previous deep learning-based FAS methods (Liu et al., 2018; Yu et al., 2020) learn discriminative features for distinguishing real faces from PAs, mostly treating the FAS problem as a binary classification of whether a given face is real or a spoof, as shown in Fig. 1(a). However, such binary classification-based approaches suffer from non-trivial attacks because they are prone to over-fitting the training data, resulting in poor generalization (Liu et al., 2018). To mitigate the over-fitting problem, regression-based methods (Feng et al., 2018; Bian et al., 2022; Yu et al., 2021, 2020) have been proposed, which find sparse evidence for known spoof types and generalize to unseen attacks. For regression-based neural networks, two kinds of supervision are considered. First, pseudo-define based supervision (Jiang and Sun, 2022; Bian et al., 2022; Yu et al., 2021, 2020; Liu et al., 2018; Fang et al., 2021) is designed for context-agnostic discrimination describing local cues at the pixel level, such as depth and reflection. For example, a pseudo-map based CNN (Feng et al., 2018) utilizes pseudo-depth supervision with the mean square error (MSE) to reconstruct a sparse depth map for a real image and a flattened map for a spoof image, as illustrated in Fig. 1(a). PAL-RW (Fang et al., 2021) is the first approach to utilize partial pixel-wise labels with face masks to train FAS models. Secondly, user-define based supervision (Jiang and Fangling; Wang et al., 2022) is designed for constrained learning using the relative distances among real images and PAs to improve the generalization ability. For instance, ordinal regression (Jiang and Fangling) introduces user-defined ordinal labels, with which the model is trained to finely constrain the relative distances among the features of different spoof categories within the latent space. Another example is PatchNet (Wang et al., 2022), which subdivides binary labels (real or spoof) into fine-grained labels (reals or spoofs). Despite previous efforts, we found that the pseudo-define based
Figure 1: Comparison between previous methods and our method for face anti-spoofing. (a) Previous methods utilize binary supervision to detect spoof cues, pseudo-depth supervision, or both. (b) Our method discretizes binary labels and exchanges patches between real and spoof images for our expected liveness score. The discretized label \(\lambda\) indicates the ratio of real-image content within an image.
supervisions depend on the accuracy of additional networks (e.g., depth-based (Feng et al., 2018) and texture-based (Zhang et al., 2018)), and that user-define based supervision relies on user-specified guides whose correctness is not guaranteed. In this paper, as described in Fig. 1(b), we introduce a discretized label encoding for expanding the data distribution and generating data relationships, with no dependencies on prior works. For our proposed label encoding, we present a novel pre-processing method, called the pseudo-discretized label encoding (PDLE) scheme, in which an image is randomly selected from a mini-batch, an oppositely labeled image is arbitrarily chosen from the whole training set, and parts of the two images are exchanged to generate a new image and its discretized dense label.
Our contributions are as follows:
* We re-formulate face anti-spoofing as a value regression problem that directly optimizes a deep neural network with the mean square error, instead of binary cross-entropy, to improve performance.
* We propose a simple yet effective pseudo-discretized label encoding (PDLE), which enforces the regression network to represent the ratio of real-image information to the total information of the given input image for predicting the liveness score.
* We conduct extensive experiments and obtain state-of-the-art performance on the intra-dataset tests and outstanding performance on the cross-dataset tests.
## 2 Proposed Method
### Overview
To expand the training image and label distributions without corrupting information, we introduce a discretized label encoding scheme that preserves the spoof and real cues in the images and indicates the amount of real-image information within the input image. To leverage the PDLE, we propose learning a value regression neural network using the MSE between the expected liveness scores and the pseudo labels. In addition, we apply a domain-invariant learning scheme (GRL) (Ganin and Lempitsky, 2015) as adversarial training to our regression neural network using the domain labels. The framework of our method is illustrated in Fig. 2.
### Pseudo-Discretized Label Encoding
We assume that \(X=\{x_{s},x_{r}\}\in\mathbb{R}^{H\times W\times 3}\) and \(Y=\{y_{s}=0.0,y_{r}=1.0\}\) denote the spoof and real color image space and the class label space in each. To sample the discretized labels between \(y_{s}\) and \(y_{r}\), we use the following formula:
\[u\sim\mathcal{U}\{1,K\};\quad\lambda=\frac{u}{K}, \tag{1}\]
where \(u\) is sampled from the discrete uniform distribution \(\mathcal{U}\{1,K\}\), and \(K\) is the pre-defined discretization level, i.e., the cardinality of the encoded label set \(\tilde{Y}\) and the number of outputs of the last \(FC\) layer in Fig. 2. \(\lambda\) is a pseudo-discretized label representing the fraction of real-image content within a whole image. Inspired by CutMix (Yun et al., 2019), we first exchange a real image and a spoof image through a random rectangular box as follows:
\[\tilde{x} =M\odot x_{a}+(1-M)\odot x_{b},\text{where }y_{a}\neq y_{b} \tag{2}\] \[\tilde{y} =\begin{cases}1-\lambda,&\text{if }x_{a}=x_{r}\\ \lambda,&\text{otherwise},\end{cases}\]
where \(M\in\{0,1\}^{H\times W}\) is a random rectangular mask whose size is determined by \(\lambda\), with \(0\) and \(1\) indicating inside and outside the mask, respectively. \(\odot\) is the element-wise multiplication operator, \(x_{a}\) is an anchor chosen from the mini-batch, and \(x_{b}\) is an oppositely labeled sample selected from the entire training set. \(\tilde{x}\) denotes the exchanged image, and \(\tilde{y}\) is the pseudo-discretized label determined by whether \(x_{a}\) is a real image. We exchange between images with different labels (\(y_{a}\neq y_{b}\)) to expand the data and label distributions. As shown in Fig. 2, we use \(\tilde{X}\in(x_{s},\tilde{x},x_{r})\) and \(\tilde{Y}\in(y_{s},\tilde{y},y_{r})\)
Figure 2: Overview of our approach for a value regression neural network. Our framework consists of a label encoding (PDLE) for data and label expansion, an encoder network as the feature extractor, an expected liveness score estimator for regression learning, and a discriminator for domain-invariant feature learning.
as the training data and the supervision for the regression network to learn the liveness score.
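A minimal sketch of the PDLE exchange in Eqs. (1)-(2) is given below; the CutMix-style box sampling and the convention that the box covers a \(\lambda\) fraction of the image area are our reading of Eq. (2), and the helper name and tensor layout are assumptions.

```python
import torch

def pdle_exchange(x_a, y_a, x_b, K=10):
    """Sketch of Eqs. (1)-(2): exchange a lambda-area rectangle between two
    oppositely labeled images and return the pseudo-discretized label.

    x_a: (H, W, 3) anchor image; y_a: its binary label (1.0 real, 0.0 spoof);
    x_b: an oppositely labeled image drawn from the training set.
    """
    H, W, _ = x_a.shape
    lam = torch.randint(1, K + 1, (1,)).item() / K          # Eq. (1): lam = u / K
    # CutMix-style box whose area is approximately a lam fraction of the image
    cut_h, cut_w = int(H * lam ** 0.5), int(W * lam ** 0.5)
    cy = torch.randint(0, H, (1,)).item()
    cx = torch.randint(0, W, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, H)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, W)

    x_mix = x_a.clone()
    x_mix[y1:y2, x1:x2] = x_b[y1:y2, x1:x2]   # M = 0 inside the box, Eq. (2)
    y_mix = 1.0 - lam if y_a == 1.0 else lam  # real-image fraction of x_mix
    return x_mix, y_mix
```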
### Expected Liveness Score
Let \(\mathbb{P}:\mathbb{R}^{H\times W\times 3}\rightarrow\mathbb{R}^{K}\) denote the probability of the liveness evidence estimated using \(SoftMax\), \(FC_{k}\), and \(Encoder(\tilde{X})\), as illustrated in Fig. 2. We employ \(K\) in Eq. 1 to formulate a random variable \(C\) with the finite list of values \(\{c_{0},...,c_{K}\}\), whose \(i^{th}\) element \(c_{i}\) is defined as follows:
\[c_{i}=\begin{cases}0.0,&\text{if }i=0\\ interval\times i,&\text{if }i>0\text{ and }i<K\\ 1.0,&\text{if }i=K\end{cases} \tag{3}\]
where \(interval=\frac{y_{r}-y_{s}}{K}\). The random variable value \(c_{i}\) and its probability \(p_{i}\) are used to calculate the expected liveness score as follows:
\[\mathbb{E}[C]=\rho=\sum_{i=0}^{K}c_{i}*p_{i}, \tag{4}\]
where \(p_{i}\) is the \(i^{th}\) element of \(P\), the predicted probability vector of real cues for the input \(\tilde{X}\). We denote \(\mathbb{E}[C]\) by \(\rho\), computed as the sum of the element-wise products of the random-variable values and their corresponding probabilities.
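The computation in Eqs. (3)-(4) reduces to an expectation of a fixed liveness grid under the softmax output; a sketch assuming one logit per grid value \(c_{i}\):

```python
import torch

def expected_liveness_score(logits, K=10):
    """Eq. (4): rho = sum_i c_i * p_i over the liveness grid of Eq. (3).

    logits: (batch, K + 1) outputs of the last FC layer; one logit per grid
    value c_i = i / K is assumed here.
    """
    p = torch.softmax(logits, dim=-1)                          # liveness-evidence probabilities
    c = torch.linspace(0.0, 1.0, K + 1, device=logits.device)  # c_0 = 0, ..., c_K = 1
    return (p * c).sum(dim=-1)                                 # expected liveness score rho
```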
### Objective Function
Our objective function is defined as follows:
\[L^{\rho}_{mse}=\frac{1}{N}\sum_{j=1}^{N}(\rho_{j}-\tilde{Y}_{j})^{2}, \tag{5}\]
where \(N\) is the mini-batch size, and \(\tilde{Y}_{j}\) and \(\rho_{j}\) are the \(j^{th}\) supervision and expected liveness score in the mini-batch, respectively. We minimize the squared distance between \(\tilde{Y}_{j}\) and \(\rho_{j}\) as our main objective \(L^{\rho}_{mse}\).
To further improve the performance, we exploit not only a regression network but also an adversarial learning technique GRL (Ganin & Lempitsky, 2015). Finally, our overall loss function can be formulated as follows:
\[L_{final}=\alpha*L^{\rho}_{mse}+(1-\alpha)*L_{adv}, \tag{6}\]
where \(L^{\rho}_{mse}\) is the liveness score-based regression loss and \(L_{adv}\) is an adversarial loss for jointly training our liveness score-based regression neural network. \(\alpha\) is a non-negative parameter balancing the two losses; we empirically set \(\alpha\) to \(0.5\).
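A minimal sketch of the combined objective is given below; the gradient reversal layer follows Ganin and Lempitsky (2015) and \(\alpha=0.5\) follows the paper, while the discriminator head and tensor shapes are assumed scaffolding.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer (GRL): identity forward, negated gradient backward."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -grad_out

def total_loss(rho, y_tilde, feat, domain_head, domain_labels, alpha=0.5):
    l_mse = torch.mean((rho - y_tilde) ** 2)              # Eq. (5)
    domain_logits = domain_head(GradReverse.apply(feat))  # adversarial branch
    l_adv = F.cross_entropy(domain_logits, domain_labels)
    return alpha * l_mse + (1 - alpha) * l_adv            # Eq. (6)
```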
## 3 Experiments
We demonstrate the effectiveness of the proposed approach on intra- and cross-dataset tests. Based on the experimental results, we also discuss the characteristics of our algorithm in this section.
### Datasets and Metrics
**Datasets.** We employed four public datasets, OULU-NPU (labeled O) (Boulkenafet et al., 2017), CASIA-FASD (labeled C) (Zhang et al., 2012), Replay-Attack (labeled I) (Chingovska et al., 2012), and MSU-MFSD (labeled M) (Wen et al., 2015), for our experiments. OULU-NPU is a high-resolution database with four protocols for validating performance improvements on the intra-dataset tests. The videos of each dataset are recorded under different scenarios with various cameras and subjects, and they are used for cross-dataset testing to validate the generalization ability on testing data with unconstrained distribution shifts.
**Evaluation Metrics.** We utilized the average classification error rate (ACER) for intra-dataset testing on OULU-NPU. The half total error rate (HTER) and area under the curve (AUC) are measured for the cross-dataset testing protocols.
### Implementation Details
**Primitive Data Preparation and Augmentation.** Because the four FAS datasets are in video format, we extracted frames at fixed intervals. After obtaining the images, we used RetinaFace (Deng et al., 2019) to detect faces, and then cropped and resized the color images to a resolution of 256\(\times\)256. Data augmentation, including horizontal flipping and random cropping, was used for training, and center cropping was employed for testing. We empirically set \(K\) to 10 after testing different values of \(K\), as depicted in Fig. 3.
**Experimental Setting.** To train the FAS task, we used ResNet18 (He et al., 2015) as the encoder with the Adam optimizer under an initial learning rate and weight decay of 1e-4 and 2e-4, respectively, for all testing protocols. We trained the models with a batch size of 32 for a maximum of 200 epochs, while decaying the learning rate with an exponential scheduler (gamma 0.99). For the domain labels on the intra-dataset protocols, we used the number of sessions in each protocol.
### Intra-Dataset Testing on OULU-NPU
OULU-NPU has four protocols for evaluating the generalization ability under mobile scenarios with previously unseen sensors and spoof types. As shown in Table 1, our PDLE approach achieves the best performance on all protocols, and the expected liveness scores clearly validate its ability to learn latent embedding features that generalize better. In particular, our proposed PDLE achieves a significant performance
\begin{table}
\begin{tabular}{|c||c|c|c|c|}
\hline
\multirow{2}{*}{Method} & Protocol 1 & Protocol 2 & Protocol 3 & Protocol 4 \\
\cline{2-5}
 & ACER(\%) & ACER(\%) & ACER(\%) & ACER(\%) \\
\hline \hline
Auxiliary (Liu et al., 2018) & 1.6 & 2.7 & 2.941.5 & 5.964.0 \\
\hline
CDCN (Yu et al., 2020) & 1.0 & 1.45 & 2.841.4 & 6.924.9 \\
\hline
Face De-spoofing (Jourabloo et al., 2018) & 1.5 & 4.3 & 3.616.5 & 5.645.7 \\
\hline
DC-CDN (Yu et al., 2021) & 0.4 & 1.3 & 1.941.1 & 4.343.1 \\
\hline
LMFD-PAD (Fang et al., 2022) & 1.5 & 2.0 & 3.423.1 & 3.343.1 \\
\hline
NAS-FAS (Yu et al., 2020) & 0.2 & **1.2** & 1.720.6 & 2.942.8 \\
\hline
PatchNet (Wang et al., 2022a) & **0.2** & **1.2** & 1.184.26 & 2.943.0 \\
\hline
**Ours** & **0** & **1.2** & **0.961.03** & **0.962.04** \\
\hline
\end{tabular}
\end{table}
Table 1: Evaluation results for ACER (%) in comparison with previous methods and the proposed **PDLE** approach on the intra-dataset (OULU-NPU) protocols.
improvement for protocol 4 (unseen lighting, spoof type, and sensor type). The results demonstrate the effectiveness of training a liveness score-based regression neural network using the amount of swapped content as pseudo-discretized labels. Note that our proposed PDLE improves the overall ACER performance over the previous SOTA approach, PatchNet (Wang et al., 2022a).
### Cross-Dataset Testing
To evaluate the proposed method, we select three of the four datasets for training and use the remaining one for testing, denoted by \(\{\cdot\&\cdot\&\cdot\}\) to \(\{\bullet\}\). We compare our method with the latest methods in Table 2. Among ResNet-18 based methods, ours shows outstanding performance on the O&C&I to M, O&C&M to I, and I&C&M to O protocols, and very competitive performance on the remaining protocol, O&M&I to C. Compared with the ResNet-50 based HFN+MP (Cai et al., 2022), our approach shows competitive performance on O&C&I to M and O&M&I to C, which contain a variety of image resolutions, and superior performance on O&C&M to I and I&C&M to O, whose images are collected from various capture devices, unlike the other datasets. By testing separately on each capture device in dataset C, we found that our method performs relatively poorly on low-quality images (93.73% AUC) compared with normal-quality (94.79% AUC) and high-quality (96.47% AUC) images. These results show that the proposed method achieves satisfactory performance on all protocols because our liveness score-based regression network estimates the probabilities of real cues under various presentation attacks.
### Ablation Study
We conducted ablation studies on cross-dataset testing to explore the contribution of each component of our method, as shown in Table 2. To analyze the effect of discretization, we separated the proposed PDLE into patch exchange (PE) and label encoding (LE). We confirmed that each of them is essential for improving performance and observed the best performance when both were used. In addition, we verified the influence of the pre-defined \(K\) in PDLE, which determines the representation power of the liveness of an input image. As shown in Fig. 3, we tested various values of \(K\) on the O&C&M to I protocol to investigate its impact on AUC. With \(K\) between \(2\) and \(17\), our method outperforms the baseline.
## 4 Conclusion
In this paper, we have proposed the PDLE approach for training a face anti-spoofing regression model, whose probability outputs yield our expected liveness score. Our approach not only acts as a data augmentation, since images with different labels and domains are densely exchanged, but also creates new data combinations, resulting in improved domain generalization. Through our experiments, we confirm the effectiveness, robustness, and generalization of the proposed PDLE and the expected liveness score.
## 5 Acknowledgements
This work was supported by KakaoBank Corp., and IITP grant funded by the Korea government (MSIT) (No. 2022-0-00320).
\begin{table}
\begin{tabular}{|c|c||c|c||c|c||c|c||c|c|}
\hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Network} & \multicolumn{2}{c||}{O\&C\&I to M} & \multicolumn{2}{c||}{O\&M\&I to C} & \multicolumn{2}{c||}{O\&C\&M to I} & \multicolumn{2}{c|}{I\&C\&M to O} \\
\cline{3-10}
 & & HTER(\%) & AUC(\%) & HTER(\%) & AUC(\%) & HTER(\%) & AUC(\%) & HTER(\%) & AUC(\%) \\
\hline \hline
NAS-FAS (Yu et al., 2020a) & NAS & 19.53 & 88.63 & 16.54 & 90.18 & 14.51 & 93.84 & 13.80 & 93.43 \\
\hline
NAS-FAS w/ D-Meta (Yu et al., 2020a) & NAS & 16.85 & 90.42 & 15.21 & 92.64 & 11.63 & 96.98 & 13.16 & 94.18 \\
\hline
DRDG (George and Marcel, 2021) & DenseNet & 12.43 & 95.81 & 19.05 & 88.79 & 15.56 & 91.79 & 15.63 & 91.75 \\
\hline
ANRL (Liu et al., 2021) & - & 10.83 & 96.75 & 17.83 & 89.26 & 16.03 & 91.04 & 15.67 & 91.90 \\
\hline
LMFD-PAD (Fang et al., 2022) & Res-50 & 10.48 & 94.55 & 12.50 & 94.17 & 18.49 & 84.72 & 12.41 & 94.95 \\
\hline
DBEI (Jiang and Sun, 2022) & Res-50 & 8.57 & 95.01 & 20.26 & 85.80 & _13.52_ & _93.22_ & 20.22 & 88.48 \\
\hline
HFN+MP (Cai et al., 2022) & Res-50 & 5.24 & 97.28 & _9.17_ & 96.09 & 15.35 & 90.67 & _12.04_ & 94.26 \\
\hline \hline
CAPD (Huang et al., 2022) & Res-18 & 11.64 & 95.27 & 17.51 & 89.98 & 15.08 & 91.92 & 14.27 & 93.04 \\
\hline
SSDG-R (Jia et al., 2020) & Res-18 & 7.38 & 97.17 & 10.44 & 95.94 & 11.71 & 96.59 & 15.61 & 91.54 \\
\hline
SSAN-R (Wang et al., 2022b) & Res-18 & 6.67 & 98.75 & **10.00** & **96.67** & 8.88 & 96.79 & 13.72 & 93.63 \\
\hline
PatchNet (Wang et al., 2022a) & Res-18 & 7.10 & 98.46 & 11.33 & 94.58 & 13.40 & 95.67 & 11.82 & 95.07 \\
\hline
Ours w/o PE\&LE & Res-18 & 10.83 & 94.58 & 15.08 & 91.14 & 14.50 & 93.55 & 13.88 & 93.16 \\
\hline
Ours w/o PE & Res-18 & 10.41 & 94.93 & 13.59 & 91.04 & 11.17 & 93.92 & 12.50 & 94.35 \\
\hline
Ours w/o LE & Res-18 & 9.58 & 94.47 & 12.47 & 92.28 & 12.25 & 94.55 & 13.29 & 93.62 \\
\hline
**Ours** & Res-18 & **5.41** & **98.85** & **10.05** & **94.27** & **8.62** & **97.60** & **11.42** & **95.82** \\
\hline
\end{tabular}
\end{table}
Table 2: Comparison results of cross-domain testing on MSU-MFSD (M), CASIA-MFSD (C), Replay-Attack (I), and OULU-NPU (O). PE and LE mean patch-exchange and label-encoding, respectively. **Bold** and _italic_ denote the best results among Res-18 and Res-50 based methods, respectively.
Figure 3: Ablation study on the discretized level \(K\). |