id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2308.15602 | An Experimental Comparison of Partitioning Strategies for Distributed
Graph Neural Network Training | Recently, graph neural networks (GNNs) have gained much attention as a
growing area of deep learning capable of learning on graph-structured data.
However, the computational and memory requirements for training GNNs on
large-scale graphs make it necessary to distribute the training. A prerequisite
for distributed GNN training is to partition the input graph into smaller parts
that are distributed among multiple machines of a compute cluster. Although
graph partitioning has been studied with regard to graph analytics and graph
databases, its effect on GNN training performance is largely unexplored. As a
consequence, it is unclear whether investing computational efforts into
high-quality graph partitioning would pay off in GNN training scenarios.
In this paper, we study the effectiveness of graph partitioning for
distributed GNN training. Our study aims to understand how different factors
such as GNN parameters, mini-batch size, graph type, feature size, and
scale-out factor influence the effectiveness of graph partitioning. We conduct
experiments with two different GNN systems using vertex and edge partitioning.
We found that high-quality graph partitioning is a very effective optimization
to speed up GNN training and to reduce memory consumption. Furthermore, our
results show that invested partitioning time can quickly be amortized by
reduced GNN training time, making it a relevant optimization for most GNN
scenarios. Compared to research on distributed graph processing, our study
reveals that graph partitioning plays an even more significant role in
distributed GNN training, which motivates further research on the graph
partitioning problem. | Nikolai Merkel, Daniel Stoll, Ruben Mayer, Hans-Arno Jacobsen | 2023-08-29T19:47:31Z | http://arxiv.org/abs/2308.15602v2 | # An Experimental Comparison of Partitioning Strategies for Distributed Graph Neural Network Training
###### Abstract.
Recently, graph neural networks (GNNs) have gained much attention as a growing area of deep learning capable of learning on graph-structured data. However, the computational and memory requirements for training GNNs on large-scale graphs can exceed the capabilities of single machines or GPUs, making distributed GNN training a promising direction for large-scale GNN training. A prerequisite for distributed GNN training is to partition the input graph into smaller parts that are distributed among multiple machines of a compute cluster. Although graph partitioning has been extensively studied with regard to graph analytics and graph databases, its effect on GNN training performance is largely unexplored.
In this paper, we study the effectiveness of graph partitioning for distributed GNN training. Our study aims to understand how different factors such as GNN parameters, mini-batch size, graph type, feature size, and scale-out factor influence the effectiveness of graph partitioning. We conduct experiments with two different GNN systems using vertex and edge partitioning. We found that graph partitioning is a crucial pre-processing step that can heavily reduce the training time and memory footprint. Furthermore, our results show that invested partitioning time can be amortized by reduced GNN training time, making it a relevant optimization.
## 1. Introduction
Our study makes the following contributions:
1. We show that graph partitioning is effective for GNN training, leading to speedups of up to 10.41 and reducing the memory footprint by up to 85.1%. Compared to results known from distributed graph processing, these numbers are much higher, showing the enormous potential of graph partitioning for GNN workloads.
2. We show that partitioning quality properties such as the replication factor or vertex balance can strongly influence GNN training. We find a strong correlation between the replication factor, network communication, and memory footprint. Therefore, minimizing the replication factor is crucial for efficient distributed GNN training. We also show that vertex imbalance can decrease the speedup and leads to severe imbalances in memory utilization.
3. We find that GNN parameters such as the hidden dimension, the number of layers, the mini-batch size and the feature size influence the effectiveness of graph partitioning, both in terms of training time and memory overheads. Our experiments further show that a higher scale-out factor can decrease the effectiveness of vertex partitioning, while the effectiveness increases for edge partitioning.
4. We find that invested partitioning time can be amortized by faster GNN training in typical scenarios, making graph partitioning relevant for production systems.
Our paper is organized as follows. In Section 2, we introduce graph partitioning and graph neural networks. In Section 3, we describe our methodology. Then, we analyze the results for DistGNN in Section 4 and for DistDGL in Section 5. In Section 6, we summarize our main findings and in Section 7 we discuss related work. Finally, we conclude our paper in Section 8.
## 2. Background
Let \(G=(V,E)\) be a graph consisting of a set of vertices \(V\) and a set of edges \(E\subseteq V\times V\). \(N(v)\) represents the set of vertices that are connected to \(v\). In the following, we discuss graph partitioning in Section 2.1 and distributed GNN training in Section 2.2.
### Graph Partitioning
The main approaches for graph partitioning are _edge partitioning_ and _vertex partitioning_ (see Figure 1). In the following, we present both approaches in more detail, along with commonly used partitioning quality metrics.
Edge PartitioningIn edge partitioning (vertex-cut), the set of edges \(E\) is divided into \(k\) partitions by assigning each edge to exactly one partition \(p\in P=\{p_{1},\ldots,p_{k}\}\) with \(\cup_{i=1}^{k}p_{i}=E\). Through this process, vertices can be cut. A cut vertex is replicated to all partitions that have adjacent edges. Each partition \(p_{i}\) covers a set of vertices \(V(p_{i})=\{v\in V\,|\exists e=\{u,v\}\in p_{i}\}\). The goal of edge partitioning (Zhou et al., 2017) is to minimize the number of cut vertices while keeping the partitions' edges \(\alpha\)-balanced, meaning \(\forall p_{i}\in P:|p_{i}|\leq\alpha\cdot\frac{|E|}{k}\).
Commonly used quality metrics to evaluate edge partitioners are the mean _replication factor_ and _edge balance_. The replication factor is defined as \(RF(P)=\frac{1}{|V|}\sum_{i=1}^{k}|V(p_{i})|\) and represents the average number of partitions to which vertices are replicated. This metric is closely related to communication costs because replicated vertices need to synchronize their state via the network. The edge balance is defined as \(\mathit{EB}(P)=\frac{\max(\{|p_{1}|,\ldots,|p_{k}|\})}{\mathrm{mean}(\{|p_{1}|,\ldots,|p_{k}|\})}\) and the vertex balance as \(\mathit{VB}(P)=\frac{\max(\{|V(p_{1})|,\ldots,|V(p_{k})|\})}{\mathrm{mean}(\{|V(p_{1})|,\ldots,|V(p_{k})|\})}\). Most edge partitioners do not explicitly balance vertices because the computational load of many graph algorithms is proportional to the number of edges, as messages are aggregated along the edges (e.g., in the PageRank algorithm).
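To make these definitions concrete, the following minimal Python sketch (our illustration, not code from the paper) computes the replication factor, edge balance, and vertex balance of an explicit edge partitioning:

```python
# Edge-partitioning quality metrics as defined above: RF(P), EB(P), VB(P).
from statistics import mean

def edge_partition_metrics(partitions, num_vertices):
    """partitions: list of k edge sets; each edge is a frozenset {u, v}."""
    # V(p_i): vertices covered by partition i (a cut vertex appears in several sets)
    covered = [{v for e in p for v in e} for p in partitions]
    rf = sum(len(c) for c in covered) / num_vertices                         # RF(P)
    eb = max(len(p) for p in partitions) / mean(len(p) for p in partitions)  # EB(P)
    vb = max(len(c) for c in covered) / mean(len(c) for c in covered)        # VB(P)
    return rf, eb, vb
```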
Vertex PartitioningIn vertex partitioning (edge-cut), the set of vertices \(V\) is divided into \(k\) partitions by assigning each vertex \(v\) to exactly one partition \(p\in P=\{p_{1},\ldots,p_{k}\}\) with \(\cup_{i=1}^{k}p_{i}=V\). Vertex partitioning aims to minimize the number of cut edges while balancing the partition sizes in terms of number of vertices. We define \(E_{cut}\) as the set of cut edges. An edge \(e=\{u,v\}\) is cut if both \(u\) and \(v\) are assigned to different partitions.
Commonly used quality metrics to evaluate vertex partitioners are the _edge-cut ratio_ and _vertex balance_. The edge-cut ratio is defined as \(\lambda=\frac{|E_{cut}|}{|E|}\) and indicates communication costs: messages are sent along edges, and cut edges lead to network communication between machines. The vertex balance is defined as \(\mathit{VB}(P)=\frac{\max(\{|p_{1}|,\ldots,|p_{k}|\})}{\mathrm{mean}(\{|p_{1}|,\ldots,|p_{k}|\})}\) and indicates computation balance.
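Analogously, the vertex-partitioning metrics can be computed from a vertex-to-partition assignment; again, this is an illustrative sketch rather than the systems' code:

```python
# Vertex-partitioning quality metrics: edge-cut ratio λ and vertex balance VB(P).
from statistics import mean
from collections import Counter

def vertex_partition_metrics(edges, assignment, k):
    """edges: list of (u, v) pairs; assignment: vertex -> partition id in [0, k)."""
    cut = sum(1 for u, v in edges if assignment[u] != assignment[v])  # |E_cut|
    sizes = Counter(assignment.values())                              # |p_1|, ..., |p_k|
    edge_cut_ratio = cut / len(edges)                                 # λ = |E_cut| / |E|
    vb = max(sizes.values()) / mean(sizes[i] for i in range(k))       # VB(P)
    return edge_cut_ratio, vb
```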
Partitioner TypesBoth edge and vertex partitioning algorithms can be categorized into (1) _streaming partitioners_, which stream the graph and directly assign vertices or edges to partitions (Zhou et al., 2017; Wang et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). Streaming partitioners can be further divided into _stateless_ partitioners, which do not keep any state, and _stateful_ streaming partitioners, which maintain some state, e.g., the current load per partition or the partitions to which vertices or edges were already assigned, and consider this state for subsequent assignments. (2) _In-memory partitioners_, which load the complete graph into memory (Goh et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). (3) _Hybrid partitioners_, which partition one part of the graph with an in-memory partitioner and the remaining part with a streaming partitioner (Wang et al., 2019).
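The difference between the stateless and stateful streaming styles can be sketched in a few lines. The functions below are our own simplified illustration (not any specific published partitioner such as HDRF or 2PS-L): the stateless variant decides from the edge alone, while the stateful variant consults per-partition load and previous endpoint placements.

```python
# Stateless streaming: the decision depends only on the edge itself (hashing).
def stateless_assign(edge, k):
    return hash(frozenset(edge)) % k

# Stateful streaming: track per-partition load and which partitions already
# host each endpoint, preferring co-location subject to a load cap.
def stateful_assign(edge, k, load, home, capacity):
    u, v = edge
    candidates = (home.get(u, set()) | home.get(v, set())) or set(range(k))
    feasible = [p for p in candidates if load[p] < capacity]
    p = min(feasible, key=load.__getitem__) if feasible \
        else min(range(k), key=load.__getitem__)  # fall back to least-loaded
    load[p] += 1
    home.setdefault(u, set()).add(p)
    home.setdefault(v, set()).add(p)
    return p
```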
### Graph Neural Network Training
Graph Neural Networks (GNNs) are a class of neural networks that operate on graph-structured data. GNNs iteratively learn on graphs by aggregating the local neighborhoods of vertices. Initially, each vertex \(v\) is represented by its feature vector \(h_{v}^{(0)}\). In each layer \(k\), a vertex \(v\) aggregates the learned representations of its neighbors \(N(v)\) from the previous layer, resulting in \(a_{v}^{(k)}\) (Equation 1).
\[a_{v}^{(k)}=\mathit{AGGREGATE}^{(k)}(\{h_{u}^{(k-1)}|u\in N(v)\}) \tag{1}\]
A vertex updates its representation based on \(a_{v}^{(k)}\) and its previous intermediate representation \(h_{v}^{(k-1)}\) in layer \(k-1\) by applying an update function (Equation 2).
\[h_{v}^{(k)}=\mathit{UPDATE}^{(k)}(a_{v}^{(k)},h_{v}^{(k-1)}) \tag{2}\]
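Equations (1) and (2) translate directly into code. The sketch below is our illustration of a single message-passing layer, using mean aggregation and a linear update (one common GraphSAGE-style choice, not necessarily the exact operators of the systems studied here):

```python
import torch

class GNNLayer(torch.nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.update = torch.nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, neighbors):
        # h: [num_vertices, in_dim]; neighbors[v]: list of neighbor ids N(v)
        # (assumes every vertex has at least one neighbor)
        a = torch.stack([h[n].mean(dim=0) for n in neighbors])    # AGGREGATE, Eq. (1)
        return torch.relu(self.update(torch.cat([a, h], dim=1)))  # UPDATE, Eq. (2)
```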
Figure 1. Edge partitioning vs. vertex partitioning.
The main approaches to train GNNs are _full-batch_ and _mini-batch_ training (Golovolovolovolov et al., 2012). In full-batch training, the entire graph is used to update the model once per epoch. In mini-batch training, each epoch contains multiple iterations, where a mini-batch is sampled from the graph and used for training and model update.
## 3. Experimental Methodology
Our study aims to investigate the effect of graph partitioning on the performance of distributed GNN training. We want to answer the following five research questions (RQ):
* **RQ-1:** How effective is graph partitioning for distributed GNN training in reducing training time and memory footprint?
* **RQ-2:** Do classical partitioning quality metrics accurately describe the effectiveness of partitioning algorithms for distributed GNN workloads? Which partitioning quality metric is most crucial?
* **RQ-3:** How much is the partitioning effectiveness influenced by GNN parameters such as the _number of layers_, _hidden dimension_, _feature size_, _mini-batch size_, and _type of graph_?
* **RQ-4:** What is the impact of the scale-out factor on the partitioning effectiveness?
* **RQ-5:** Can the invested partitioning time be amortized by a reduced GNN training time?
To answer these research questions, we conduct various experiments with two state-of-the-art distributed GNN systems: _DistGNN_ and _DistDGL_.
**Training & partitioning time** We measure the time per epoch. In addition, we measure the time spent in the forward pass, backward pass, synchronization of the model, and the optimizer step. Furthermore, we measure the partitioning time.
### Partitioning Performance
We compare the partitioning algorithms regarding their communication costs and computational balance in the following.
_Communication costs._ We observe significant differences in the replication factors of the different partitioners. In all cases, _HEP100_ leads to the lowest (best) replication factor and _Random_ to the largest (worst) one. Figure 2(b) shows, for example, that _HEP100_ at 32 partitions leads to a replication factor of 2.52 on _OR_, which is much smaller than that of _Random_, which leads to a replication factor of 22.2. In general, more partitions lead to larger replication factors. For some partitioners, the replication factor increases more sharply than for others as the number of partitions grows. We observe a strong correlation (\(R^{2}\geq 0.98\)) between the replication factor and network traffic. This correlation is shown in Figure 3 for _OR_ for different numbers of machines and numbers of layers and is also observed for the remaining graphs. The observation is plausible: the higher the replication factor, the more data is communicated via the network, because more vertices are replicated and must synchronize their states.
We conclude that **minimizing the replication factor is crucial for reducing network overhead**.
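Such a linear relationship can be checked by fitting network traffic against the replication factor and computing \(R^{2}\). The snippet below shows the computation with placeholder numbers, not the paper's measurements:

```python
import numpy as np

rf      = np.array([2.5, 4.1, 7.8, 12.9, 22.2])   # replication factors (illustrative)
traffic = np.array([1.1, 1.9, 3.5, 6.0, 10.2])    # network traffic in TB (illustrative)

slope, intercept = np.polyfit(rf, traffic, 1)      # least-squares line
pred = slope * rf + intercept
r2 = 1 - np.sum((traffic - pred) ** 2) / np.sum((traffic - traffic.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```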
_Computational balance._ It is crucial to balance the number of edges and vertices per partition: each edge leads to an aggregation in the GNN, and neural network operations are performed per vertex. We observe a good edge balance of at most \(\alpha\leq 1.11\) for all partitioners. However, we observe significant vertex imbalances (see Figure 4). In particular, the partitioners _2PS-L_, _HEP10_ and _HEP100_ lead to large imbalances between 1.18 and 1.89 on 4 machines (see Figure 4(a)), which can increase up to 2.44 on 32 machines (see Figure 4(b)). The vertex imbalance has a significant influence on the balance of memory utilization. Figure 5 reports the imbalance in memory utilization for all partitioners. We observe that vertex imbalance perfectly correlates with memory utilization imbalance.
We conclude that **minimizing vertex imbalance is crucial for balancing memory utilization**.
_Partitioning time._ Figure 6 reports the partitioning time for partitioning all graphs into 4 and 32 partitions. We observe that the partitioning times of some algorithms, e.g., _Random_, _2PS-L_ and _DBH_, are less dependent on the number of partitions, compared to the remaining partitioners, where more partitions lead to higher partitioning times. For example, _HDRF_ takes much more time to partition the graphs into 32 partitions compared to 4 partitions, which is expected because the complexity of the scoring function depends on the number of partitions.
In the following, we further analyze how the GNN training time is influenced by the partitioning metrics for different numbers of machines, and how GNN parameters influence the effectiveness of the partitioners.
### GNN Training Performance
In Figure 7, we report the speedup distribution for all partitioners compared to random partitioning for all combinations of _feature size_, _hidden dimension_ and _number of layers_ (see Table 3). We observe significant differences in terms of training time between the partitioners. _HEP100_ leads to the largest speedups of up to 3.53, 6.18, 8.15 and 10.41 on the graphs _EU_, _EN_, _OR_ and _HW_, respectively. For example, on _OR_, compared to _Random_, on average _DBH_, _2PS-L_, _HDRF_, _HEP10_ and _HEP100_ lead to speedups of 1.40, 1.46, 1.44, 2.96 and 3.68 on 8 machines, 1.62, 1.61, 1.75, 4.37 and 7.16 on 16 machines and 1.74, 1.95, 2.00, 5.67 and 7.16 on 32 machines. We observe
Figure 4. Vertex balance.
Figure 5. Memory utilization balance (4 machines).
Figure 3. Replication factor vs. network communication for different number of machines and number of layers on _OR_.
Figure 2. Replication factors.
that the effectiveness in terms of speedup increases as the number of machines increases. Only _2PS-L_ on _EU_ shows a slowdown of 0.92, 0.92, and 0.91 on 8, 16 and 32 machines, respectively. We attribute this to the observation that the partitioning is highly imbalanced in terms of vertex balance, which we discuss in the following. We observe that a low replication factor is crucial to achieve large speedups. However, if the replication factors of two partitioners are close, vertex balance becomes important as well. This can, for example, be seen in Figure 8: _2PS-L_ leads to a similar replication factor as _HDRF_ and _DBH_ on _EN_, but _2PS-L_ leads to a large vertex imbalance while _HDRF_ and _DBH_ are perfectly balanced. We observe that _2PS-L_ leads to much smaller speedups, which indicates that the vertex imbalance has a negative effect on the speedup.
In Figure 7, we observe only a small spread in terms of speedup. In other words, the partitioners' speedups are largely independent of the GNN parameters. We make a similar observation for network traffic: the savings are stable and not influenced much by the GNN parameters. This observation seems plausible. The replication factor determines for how many replicas the vertex state needs to be synchronized, while the GNN parameters hidden dimension and feature size determine the size of that state and thus how much state needs to be synchronized. The ratio between the partitioners therefore stays stable.
Figures 9(a) and 9(b) give an overview of how much memory is needed for training with different partitioners in percent of random partitioning. We make two main observations. (1) The high-quality partitioners (_HEP10_ and _HEP100_) are much more effective than the other partitioners. (2) A large deviation indicates that the partitioners' effectiveness depends on the GNN parameters. In the following, we first discuss why the partitioners differ in their effectiveness in terms of memory footprint and then analyze how the GNN parameters influence this effectiveness.
We observe a strong correlation (\(R^{2}\geq 0.99\)) between the replication factor and memory footprint. Further, we observe that the memory footprint can decrease heavily, e.g., _HEP100_ reduces the memory for the graphs _EU_, _OR_, _HW_ and _EN_ by 37%, 53%, 56% and 60% on 8 machines, by 44%, 60%, 65% and 63% on 16 machines, and by 40%, 67%, 66% and 63% on 32 machines, respectively, compared to _Random_. There are also cases where random partitioning leads to out-of-memory errors. For example, in all cases, _DI_ cannot be processed if random partitioning is applied; in contrast, the more advanced partitioners enable the processing in many cases. In classical distributed graph processing, the replication factor is often minimized with the primary goal of reducing network communication. The memory load is less critical, especially if the vertex state is small, which is the case for many graph processing algorithms such as BFS, DFS, Connected Components, PageRank, and K-cores. In contrast, in GNNs, the vertex state consists of large feature vectors, meaning that the vertex state, not the graph structure, dominates the required memory. We conclude that **minimizing the replication factor is crucial for minimizing the memory overhead and can be decisive for GNN training.**
In the following, we analyze how the different GNN parameters influence the effectiveness of the partitioners **in terms of memory**.
_(1) Feature size._ In Figure 10(a), we report the memory footprint for all partitioners in percent of random partitioning, dependent on the feature size. We observe that if we keep all other parameters constant, an increase in the feature size increases the effectiveness. This result seems plausible: a fixed amount of memory is needed, e.g., for storing the graph structure, and with an increasing feature size, the state to replicate grows, making partitioning more effective. We conclude that **the larger the feature size, the more effective graph partitioning is at reducing the memory footprint.**
Figure 8. Replication factor vs. speedup on _EN_ along with the vertex balance in the brackets.
Figure 6. Partitioning time.
Figure 7. Speedup distribution of graph partitioners on 4, 8, 16 and 32 machines for all experiments.
(2) _Hidden dimension._ If the feature size and number of layers are kept constant, there will be some fixed amount of memory for the graph structure, the corresponding features, and the replication of the features. We observe that the higher the hidden dimension, the more effective the partitioners become (see Figure 10(b)). This observation seems reasonable: larger hidden sizes lead to more state (intermediate representations) that needs to be synchronized among the machines, as it serves as the input of the next layer. We conclude that **the larger the hidden dimension, the more effective graph partitioning is at reducing the memory footprint**.
_(3) Number of layers._ The number of layers can influence the effectiveness of graph partitioning. The higher the number of layers, the more intermediate representations need to be stored (one per vertex and layer), as they are needed in the backward pass. The replication factor determines to how many machines the intermediate representations are replicated, and the hidden dimension determines their size. We observe that, especially if the hidden dimension is large and the feature size is small, the effectiveness increases with more layers. This seems reasonable.
If the feature size is large and the hidden dimension is small, the effectiveness of graph partitioning remains relatively unaffected by the number of layers. This is because the state size is large due to the replication of large features. Increasing the number of layers results in more replications of small hidden representations, which does not significantly impact the memory footprint. However, if the feature size is small and the hidden dimension is large, increasing the number of layers leads to more replications of large hidden representations.
We conclude that **the larger the hidden dimension and the smaller the feature size, the more the effectiveness of graph partitioners increases with an increasing number of layers.**
_(4) Scale-out factor._ In the following, we analyze how the scale-out factor influences the partitioners' effectiveness in terms of speedup and memory footprint. Figure 11(a) shows the average speedup for all graph partitioners at different scale-out factors compared to random partitioning. We observe that the effectiveness of all graph partitioners increases if more machines are used for training. However, there are differences in how sharply the effectiveness increases. For the more lightweight partitioners _2PS-L_, _DBH_ and _HDRF_, the speedups increase moderately from 1.57, 1.37, and 1.49 on 4 machines to 1.79, 1.7, and 2.06 on 32 machines, respectively. The partitioners _HEP10_ and _HEP100_ lead to speedups of 1.95 and 2.47 on 4 machines, which increase sharply to 5.41 and 6.77 on 32 machines, respectively. We make similar observations for the memory overheads (see Figure 11(b)). All partitioners lead to substantial savings, which increase with the number of machines. Both observations are plausible: In Figure 11(c), we report the achieved replication factors for all partitioners and all scale-out factors in percentages of _Random_, meaning lower numbers are better. We find that the replication factor of all partitioners increases less sharply than that of _Random_ as the scale-out factor increases. The replication factors of _2PS-L_, _DBH_ and _HDRF_ are 56.74%, 76.49% and 62.16% of _Random_ on 4 machines and 39.99%, 60.81% and 48.58% of _Random_ on 32 machines. _HEP10_ and _HEP100_ achieve replication factors of 49.27% and 36.05% of _Random_ on 4 machines and significantly lower replication factors of 14.05% and 11.37% of _Random_ on 32 machines, respectively.
We conclude that **the effectiveness of graph partitioning increases both in terms of training time and memory overhead as the number of machines increases**.
_(5) Partitioning time amortization._ In Table 4, we report the average number of epochs until the partitioning time is amortized by the faster training time for each combination of graph and partitioner. We assume that random partitioning does not take any time. _DBH_ is the partitioner that amortizes fastest: on average, it takes 1.39, 3.79, 3.05, and 3.83 epochs on the graphs _EN_, _EU_, _HW_ and _OR_ to amortize the partitioning time. _HEP100_, which leads to the largest speedups, amortizes after 4.29, 12.0, 4.7, and 7.03 epochs on the graphs _EN_, _EU_, _HW_ and _OR_. Full-batch training is often performed for hundreds of epochs (Shen et al., 2017). **Therefore, the partitioning time can be amortized.** In addition, a hyper-parameter search is often performed, which requires even more training epochs and makes it even more beneficial to invest in partitioning.
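The break-even point behind Table 4 follows from a simple calculation: partitioning is amortized once the accumulated per-epoch savings exceed the partitioning time. A small sketch with illustrative numbers:

```python
def epochs_to_amortize(partition_time_s, epoch_random_s, epoch_partitioned_s):
    saving = epoch_random_s - epoch_partitioned_s   # per-epoch saving over Random
    if saving <= 0:
        return None  # slowdown: never amortizes ("no" in Table 4)
    return partition_time_s / saving

# e.g., 600 s of partitioning, epochs of 120 s (Random) vs. 40 s (partitioned)
print(epochs_to_amortize(600.0, 120.0, 40.0))  # -> 7.5 epochs
```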
## 5. DistDGL
### Experiments
**Graph partitioning algorithms** We use six state-of-the-art vertex partitioners from different categories: (1) stateless streaming partitioning with random partitioning, (2) stateful streaming with LDG, and (3) in-memory partitioning with ByteGNN, Spinner, Metis, and KaHIP (see Table 2).
Figure 10. Memory footprint in % of random partitioning dependant on different GNN hyper-parameters for _OR_ on 8 machines.
Figure 9. Distribution of memory footprint of graph partitioners in % of random partitioning on 4 and 32 machines.
**Workloads** We selected a representative set of graph neural network architectures commonly used in distributed GNN training, namely, GAT, GraphSage, and GCN. We use the same hyperparameters as for _DistGNN_ (see Table 3). If not mentioned otherwise, we perform neighborhood sampling for all GNN models with the following configuration. Let \(S_{i}\) be the number of neighbors to sample for layer \(i\). For two-layer GNNs, we use \(S_{1}=25\) and \(S_{2}=20\); for three-layer GNNs, \(S_{1}=15\), \(S_{2}=10\) and \(S_{3}=5\); and for four-layer GNNs, \(S_{1}=10\), \(S_{2}=10\), \(S_{3}=5\) and \(S_{4}=5\). We use a global batch size \(GBS\) of 1024 if not stated otherwise. Therefore, each worker \(w_{i}\in W\) trains with \(\frac{GBS}{|W|}\) samples.
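For clarity, this sampling configuration can be written down as a small helper; the structure below is ours for illustration, while DistDGL wires such fanouts into its own neighbor sampler:

```python
FANOUTS = {2: [25, 20], 3: [15, 10, 5], 4: [10, 10, 5, 5]}  # S_1, ..., S_L per depth

def worker_config(num_layers, global_batch_size, num_workers):
    per_worker = global_batch_size // num_workers   # GBS / |W| samples per worker
    return {"fanouts": FANOUTS[num_layers], "batch_size": per_worker}

print(worker_config(3, 1024, 16))  # -> {'fanouts': [15, 10, 5], 'batch_size': 64}
```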
**Partitioning metrics** We compare the partitioners with the commonly used partitioning quality metrics _edge-cut_ and vertex balance introduced in Section 2.1. In addition, we measure the training vertex balance. Further, we measure metrics based on the sampled mini-batches: the number of edges of the computation graphs, the number of local input vertices, and the number of vertices that need to be fetched via the network.
**Training & Partitioning Time** We measure the epoch and step time and all phases (mini-batch sampling, feature loading, forward pass, backward pass, and model update) for each step. In addition, we measure the partitioning time.
### Partitioning Performance
In the following, we compare the graph partitioners regarding communication costs and computational balance.
_Communication Costs._ In Figure 12, we report the achieved edge-cut ratio for each combination of graph, graph partitioning algorithm, and number of partitions. In most cases, _KaHIP_ achieves the lowest edge-cut and random partitioning leads to the largest edge-cut. We observe significant differences between the partitioning algorithms in terms of edge-cut, e.g., _KaHIP_ achieves edge-cut ratios smaller than 0.001 and 0.12 on the graphs _DI_ and _EU_ for 32 partitions, respectively, which is much lower (better) than random partitioning, which leads to edge-cuts of 0.68 and 0.93 for the same graphs. We also observe for all partitioning algorithms that a higher number of partitions leads to a larger edge-cut.
In the following, we investigate the influence of the edge-cut on network communication. There are cases where a lower edge-cut results in less network communication. However, there are also cases where even if the edge-cut of different partitioners is similar, the network communication can differ a lot. For example, we observed that Spinner has an edge-cut lower than Metis on _OR_, but the network communication is much higher.
This observation seems reasonable. Some edges are involved more frequently in the sampling process; if these edges are cut, they lead to more network traffic than edges that are hardly visited during sampling. To ensure that the observed anomaly is related to graph partitioning, we measure for each mini-batch the number of vertices needed for processing the mini-batch that are not local to the respective worker. We define these vertices as _remote vertices_. We observe a strong correlation between the number of remote vertices and network traffic. We conclude that the edge-cut is not always a perfect predictor of network traffic and that there are cases where a lower edge-cut still leads to more remote vertices and higher network traffic.
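The remote-vertex metric itself is straightforward to compute; a minimal sketch (our illustration) for one worker's mini-batch:

```python
def remote_vertex_count(input_vertices, assignment, worker_partition):
    """Input vertices of a mini-batch whose features live on another partition."""
    return sum(1 for v in input_vertices if assignment[v] != worker_partition)

# Fraction of the mini-batch that triggers network feature fetches:
# frac = remote_vertex_count(batch_inputs, vertex2part, my_part) / len(batch_inputs)
```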
_Computation Balance._ For efficient distributed graph processing, it is crucial that the computation is balanced among machines, which is measured via the vertex balance. However, unlike distributed graph processing algorithms such as PageRank, the computational load of mini-batch GNN training depends on the size of the sampled mini-batches. Each worker samples a mini-batch based on the k-hop
Table 4. Number of epochs until graph partitioning time is amortized by faster GNN training time. “No” means no amortization is possible because of slowdown.

| Graph | DBH | 2PS-L | HDRF | HEP10 | HEP100 |
|---|---|---|---|---|---|
| EN | 1.39 | 4.57 | 4.64 | 3.35 | 4.29 |
| EU | 3.79 | no | 8.8 | 10.15 | 12.0 |
| HO | 3.05 | 4.22 | 7.26 | 4.48 | 4.7 |
| OR | 3.83 | 7.39 | 11.69 | 6.64 | 7.03 |
Figure 11. The effectiveness of graph partitioning increases in terms of speedup and memory overhead if the scale-out factor is increased from 4 to 8, 16 and 32 machines.
neighborhood of the training vertices. To ensure load balance, it is essential that the computation graphs of mini-batches are of similar size. We define the vertices that are needed to compute a mini-batch as its _input vertices_, and the _input vertex balance_ per step as the number of input vertices of the largest mini-batch divided by the average number of input vertices per mini-batch in the respective step. In Figures 14(a)-14(b), we report the imbalance of the mini-batches in terms of input vertices. We observe a large imbalance, which increases with the number of partitions.
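The input vertex balance defined above can be computed per step as follows (illustrative numbers, not measurements):

```python
from statistics import mean

def input_vertex_balance(batch_input_counts):
    """batch_input_counts: input vertices of each worker's mini-batch in one step."""
    return max(batch_input_counts) / mean(batch_input_counts)

print(input_vertex_balance([5120, 4800, 7300, 5050]))  # -> ~1.31
```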
_Partitioning Time._ In Figure 15, we report the partitioning time for all graphs and partitioning algorithms for 4 and 32 partitions. We observe that _KaHIP_, the best-performing partitioner in terms of edge-cut, leads to the highest partitioning time.
In the following, we investigate the influence of the partitioning metrics on the actual GNN training time and analyze how the GNN hyper-parameters influence the effectiveness of the graph partitioners.
### GNN Training Performance
In Figure 16, we report the average speedups for all combinations of _feature size_, _hidden dimension_ and _number of layers_ (see Table 3) for all graph partitioners with random partitioning as a baseline on 4, 8, 16 and 32 machines for the GraphSage architecture. In our experiments, _KaHIP_ and _Metis_ lead to the largest speedups of up to 1.84, 1.84, 3.09 and 3.47 on a cluster with 4, 8, 16 and 32 machines, respectively. Therefore, graph partitioning is an important preprocessing step for distributed GNN training.
In Figure 16, we see significant variance in terms of speedup, indicating that the effectiveness of the partitioning algorithms depends on the GNN parameters. Therefore, we conduct a detailed analysis of how different GNN parameters influence the different training phases and how the partitioners differ from each other. On each worker in each step, we measure the phase times of (1) mini-batch sampling, (2) feature loading, (3) forward pass, (4) backward pass, and (5) model update. In each step, we identify the worker with the longest combined mini-batch sampling, feature loading and forward pass time as the straggler. We exclude the time for the backward pass because it also contains the time for the all-reduce operation in which the gradients are synchronized between the workers; the model update time is negligible. Then, we take the phase times of the slowest worker per step and sum them over all steps. In other words, we are interested in how much time the straggler spends on average in each phase. In the following, we investigate how the different GNN model parameters influence the effectiveness of graph partitioning in terms of the speedup of the distributed training compared to random partitioning. In Figure 17, we observe large imbalances for all partitioners, showing that even if the number of training vertices
Figure 16. Speedup distribution of graph partitioners on 4, 8, 16 and 32 machines for all GraphSage experiments.
Figure 14. Balance in terms of input vertices of the mini-batches for the GraphSage model.
Figure 13. Training vertex balance (8 partitions).
Figure 15. Partitioning time on a logarithmic scale.
is balanced, the computation time can be imbalanced. Interestingly, all partitioners lead to large imbalances.
_(1) Feature size._ We observe that **the effectiveness of partitioning increases with larger feature sizes**. Figures 18(a)-18(b) show the speedup for the partitioners compared to random partitioning, dependent on the feature size. For example, in Figure 18(a), training GraphSage with _KaHIP_ leads to a speedup of 1.23 and 1.52 for a feature size of 16 and 512, respectively. This observation is plausible. As feature sizes increase, network communication increases because larger feature vectors are sent over the network, making graph partitioning even more valuable in reducing communication costs.
**Detailed:** For each combination of number of layers and hidden dimension, we vary the feature size. We make the following key observations: (1) The larger the feature size, the longer the feature fetching phase (see Figure 19(a)), while the sampling time stays constant. We also observe that for small feature sizes (up to 64), sampling takes more time than fetching features, but for large feature sizes of 512, the feature fetching time dominates the sampling time by a lot (see Figure 19(a)). In contrast, for the road network _DI_, we observe that sampling always takes more time than fetching features (see Figure 19(b)), which seems plausible because the mean degree in the road network is small and the skew of the degree distribution is low. Therefore, the sampled mini-batches are small, and only a few input vertices must be fetched. We also observe that the edge-cut for the road network is much lower than for the remaining graphs (see Figure 12). (2) The forward and backward pass times increase with larger feature sizes, which is plausible because more computations are performed in the first layer.
We observe that the partitioners differ a lot from each other when varying the feature size and that the feature size influences the different phases. The feature fetching phase is influenced most, which can, for example, be seen in Figure 19(a) for training a three-layer GraphSage with a hidden dimension of 64 on the graph _EU_. In most cases, the better the partitioner in terms of edge-cut, the lower the communication costs, which can speed up both the mini-batch sampling and the feature fetching phase.
_(2) Hidden dimension._ We found that **partitioning becomes less crucial as the hidden dimension increases**. For example, compared with random partitioning, _KaHIP_ leads to speedups of 1.38 and 1.19, and _Metis_ to speedups of 1.31 and 1.15, for hidden dimensions of 16 and 512, respectively. This result is reasonable, since an increased hidden size leads to greater computational costs, potentially dominating the communication costs.
**Detailed:** We vary the hidden dimension for each combination of feature size and number of layers. Our main observations are: (1) Sampling and feature loading times stay constant, which is expected, as only the neural network operations are influenced by the hidden dimension; the larger the hidden dimension, the more time is spent on computation. (2) The effectiveness of partitioners decreases for larger hidden sizes because most of the differences between partitioners lie in feature loading and sampling. If the hidden size increases, computation takes a larger share of the overall training time, so the difference between the partitioners shrinks.
_(3) Number of Layers._ We observe that **the effectiveness of the partitioners remains relatively unaffected by an increasing number of layers.** In some cases, the effectiveness slightly increases or decreases, but the influence is much smaller than that of the feature size and hidden dimension, and there is also no clear trend. This is an unexpected observation. One might expect that the effectiveness of the partitioning algorithms would
Figure 19. Phase times for a 3 layer GraphSAGE model with a hidden dimension of 64 on 4 machines for different feature sizes.
Figure 17. Balance in terms of training time.
Figure 18. Speedup of graph partitioners for the GraphSage model on 4 and 32 machines for different feature sizes.
heavily decrease if the number of layers increases because large parts of the graph will be contained in the mini-batches, but still, the partitioning algorithms lead to different training times, and many partitioners outperform random partitioning.
**Detailed:** We vary the number of layers for each combination of feature size and hidden dimension. We make the following key observations: (1) The run-time of all phases increases with the number of layers. This is expected because more layers lead to larger computation graphs within the mini-batches, which increase the communication costs (more remote accesses in the sampling phase and more remote vertices to fetch via the network) and the computation costs (more neural network operations). (2) We observe, especially for 3 and 4 layer GraphSage, that most of the speedup gained by different partitioning algorithms comes from faster sampling and feature fetching (see Figure 21).
_(4) Scale-out factor._ In the following, we investigate the effectiveness of scaling out distributed GNN training to more machines for all partitioning algorithms. We scale out from 4 to 8, 16 and 32 machines. We make the following observations: (1) For _DI_, scaling out increases the effectiveness of the partitioners. Especially for the partitioners _KaHIP_, _Metis_, _LDG_ and _ByteGNN_, the effectiveness increases a lot. However, for _Spinner_, the effectiveness stays relatively constant. This seems plausible, as the edge-cut for _Random_ and _Spinner_ is far higher on _DI_ than for the remaining partitioners (see Figure 12). (2) For the remaining graphs, we observe that the effectiveness for GraphSage decreases on average (see Figure 24(a)). For example, _KaHIP_ and _Metis_ lead to speedups of 1.32 and 1.27 on 4 machines and to smaller speedups of 1.25 and 1.19 on 32 machines, respectively. We found that the number of remote vertices (see Figure 24(b)) and the edge-cut (see Figure 24(c)) of the partitioners in percentages of _Random_ increase when scaling out to more machines. We also observe that the network communication of the partitioners in percentages of _Random_ increases. In other words, the effectiveness of the partitioners also decreases in terms of partitioning metrics and network communication compared to _Random_ when the number of machines increases. It is worth noting that the feature loading phase scales very well. We found that large feature sizes make partitioning more effective. For large feature vectors and few machines, the feature fetching phase can take a large share of the training time and also leads to large differences between the partitioners (see Figures 25(a) and 25(b)). The feature fetching phase can shrink sharply when scaling out to more machines. Therefore, the difference between the partitioners decreases, resulting in lower effectiveness. We make a similar observation for the sampling phase. We conclude that **in most cases, the effectiveness of partitioning slightly decreases if the scale-out factor increases**.
_(5) Partitioning time amortization._ In Table 5, we report the average number of epochs until the partitioning time is amortized by the faster training time for each combination of graph and partitioner. We observe that **the partitioning time can be amortized by faster GNN training**. However, _KaHIP_, the partitioner that leads to the largest speedups, only amortizes for _DI_, but not for the remaining graphs. In contrast, _Metis_, which also leads to significant speedups, amortizes for all graphs.
Figure 23. Speedup of graph partitioners for the GraphSage model on 4 and 32 machines for different number of layers.
Figure 21. Times of different phases for a GraphSAGE model with a hidden dimension and feature size of 64 on 4 machines for the _OR_ graph.
Figure 20. Speedup of graph partitioners for the GraphSage model on 4 and 32 machines for different hidden dimensions.
Figure 22. Times of different phases for a 3 layer GraphSAGE with a feature size of 64 on 4 machines for the _OR_ graph.
### Influence of mini-batch size on partitioner effectiveness
The following experiments investigate the influence of the _mini-batch size_ on the effectiveness of partitioning. In other words, we want to evaluate whether partitioning becomes more crucial (in terms of reduced training time) as the mini-batch size increases.
We fix the number of workers to 16 and set the mini-batch size to 512, 1024, 2048, 4096, 8192, 16384, and 32768 for a three layer GAT and a three layer GraphSage. For both GNN architectures, we use two configurations: (1) hidden dimension and feature size of 64 (low communication) and (2) hidden dimension of 64 and feature size of 512 (high communication).
We observe for all partitioners that the network traffic relative to _Random_ decreases as the batch size increases (see Figure 26(b)). For example, _KaHIP_ and _Spinner_ lead to network communication of 66% and 77% of _Random_ with a batch size of 512, and of 48% and 67% if the batch size is set to 32768, respectively. We observe a similar trend for the number of remote vertices (see Figure 26(c)). This seems reasonable: many vertices can end up in _different_ mini-batches, but as the mini-batch size increases, the overlap within the larger mini-batches increases, leading to fewer remote vertices.
The effectiveness of the partitioners can decrease or increase with larger batch sizes if the feature size is 64; there is no clear trend. In contrast, if the feature size is 512, the effectiveness of the partitioners increases in most cases for ByteGNN, KaHIP, Metis and Spinner on the graphs _HW_, _EU_, _EN_ and _OR_. For example, Figure 26(a) shows that training with _KaHIP_ and _Metis_ leads to speedups of 1.27 and 1.13 for a small batch size of 512 and to larger speedups of 1.91 and 1.65 if the batch size is set to 32768. We conclude that **larger batch sizes increase the partitioner effectiveness for large feature sizes**.
Table 5. Number of epochs until graph partitioning time is amortized by faster GNN training time. “No” means no amortization is possible because of slowdown.

| Graph | ByteGNN | KaHIP | LDG | SPINNER | METIS |
|---|---|---|---|---|---|
| DI | 0.93 | 2.61 | 0.1 | 14.37 | 1.13 |
| EN | 2.16 | 2501.93 | 0.39 | 54.07 | 16.79 |
| EU | no | 1197.25 | no | 53.8 | 8.14 |
| HO | 0.68 | 347.51 | 0.47 | 77.78 | 10.7 |
| OR | 3.14 | 223.19 | 0.27 | 70.19 | 14.59 |
Figure 24. The effectiveness of partitioning decreases when scaling GraphSage from 4 to 32 machines.
Figure 25. Phase times for a 3 layer GAT and GraphSage with a feature size of 512 and a hidden dimension of 64 on the _OR_ graph trained with 4, 8, 16 and 32 machines.
## 6. Lessons Learned
In the following, we summarize our main findings and relate them to the research questions introduced in Section 3.
**(1) Graph partitioning is effective in speeding up GNN training (RQ-1).** We observed large speedups of up to 10.41 and 3.47 for DistGNN and DistDGL, respectively. The speedups achieved for DistDGL are comparable to those seen in distributed graph processing [29; 30; 32]. However, the speedups for DistGNN are higher. Full-batch training leads to large communication and memory overheads, both heavily influenced by graph partitioning, which makes graph partitioning crucial for efficient distributed full-batch GNN training.
**(2) Graph partitioning is effective in reducing the memory footprint (RQ-1).** We found that the replication factor perfectly correlates with the amount of necessary memory. Different from classical distributed graph processing, where the state of vertices is small, in GNNs the state of vertices is large: the vertices can have large feature vectors and intermediate representations that must be stored for all vertices. Minimizing the replication factor leads to significant memory savings because fewer vertex states are replicated. We observe many cases where advanced partitioning algorithms that lead to small replication factors can reduce the necessary amount of memory by up to 85.1%. Therefore, the replication factor determines whether training a GNN within a given memory budget is possible at all.
**(3) Classical partitioning metrics are relevant for GNN performance (RQ-2).** For both DistDGL and DistGNN, we observed that both communication and balancing metrics are important. Especially for DistGNN, we observed that the replication factor correlates strongly with the network communication, the memory overhead, and ultimately the training time. Minimizing the replication factor to reduce the GNN training time is crucial. We also found that in cases where two different partitioners lead to a similar replication factor, balancing the vertices becomes important. Furthermore, vertex balance perfectly correlates with memory utilization balance, which is crucial for memory-intensive full-batch training. This is an important insight, as most edge partitioners balance the number of edges per partition and do not focus on balancing vertices.
**(4) GNN parameters influence the effectiveness of graph partitioners (RQ-3).** Unlike distributed graph processing, in GNN training, the GNN models have parameters, such as the number of layers, the hidden dimension, or the batch size, and the graphs have attached features. For DistDGL, we observed that GNN parameters influence the partitioners' effectiveness. Graph partitioning is effective, especially if the feature vectors are large and the hidden dimensions are low. We also found that if the feature size is large and the mini-batch size increases, the effectiveness increases a lot. For DistGNN, we observe that effectiveness in terms of training run-time is less dependent on the GNN parameters. However, regarding memory overhead, the effectiveness increases if the feature size, the hidden dimension or the number of layers increases.
**(5) The scale-out factor influences the effectiveness of graph partitioners (RQ-4)**. For DistDGL, we observe that in most cases, the effectiveness of graph partitioning slightly decreases when scaling out to more machines. However, for DistGNN, graph partitioning becomes very important because the network costs increase significantly.
**(6) Partitioning can be amortized by faster GNN training (RQ-5).** For both DistDGL and DistGNN, we found that the invested graph partitioning time can in many cases be amortized already after a few epochs, making graph partitioning an important optimization for distributed GNN training.
## 7. Related Work
Different studies [2; 45; 12; 34] have been conducted to investigate how graph partitioning influences the performance of distributed graph processing systems. Verma et al. [45] study the graph partitioning algorithms available in GraphX [14], PowerGraph [13], and PowerLyra [9] for graph analytics. Abbas et al. [2] study streaming graph partitioners and compare them in a graph processing framework based on Apache Flink [7] for graph analytics. Gill et al. [12] investigate the influence of different partitioning strategies in D-Galois for graph analytics workloads. Pacaci and Ozsu [34] study streaming graph partitioning algorithms for graph analytics with PowerLyra and online graph query workloads with JanusGraph [1]. These studies focus on classical graph workloads. However, distributed GNN training is different. First, GNN training leads to large memory and communication overheads: huge feature vectors and large intermediate states are computed, stored, and sent over the network. Second, the computations consist of computationally expensive neural network operations. Third, GNN workloads are characterized not only by the model architecture but also by GNN parameters such as the number of layers and hidden dimension. Fourth, mini-batch-based training has a complex data loading phase, which consists of distributed multi-hop sampling followed by a communication-intensive feature loading phase.
Graph partitioning is a vibrant research area and many different approaches exist [16; 18; 19; 26; 27; 28; 29; 30; 32; 33; 35; 36; 37; 38; 39; 40; 41; 46; 48; 51]. See [8] for a recent survey about graph partitioning. We selected a representative set of partitioners from different categories.
Many distributed graph neural network systems exist [11; 17; 42; 43; 49; 50]. A recent survey [44] gives an overview of different systems and compares them along different axes, including graph partitioning. We extend this research by experimentally investigating the effectiveness of graph partitioning for GNN training.
## 8. Conclusions
In our work, we performed an experimental evaluation to investigate the effectiveness of graph partitioning for distributed GNN training. We showed that graph partitioning is an essential optimization for distributed GNN training and that different factors such as GNN parameters (e.g., hidden dimension, number of layers, mini-batch size, etc.), the scale-out factor, and the feature size can influence the effectiveness of graph partitioning for GNN training, both in terms of memory footprint and training time. Further, we found that invested partitioning time can be amortized by reduced GNN training time. Based on our findings, we conclude that graph partitioning has great potential to make GNN training more effective. We hope our research can spawn the development of even more effective graph partitioning algorithms in the future. |
2304.11941 | Partitioning and Deployment of Deep Neural Networks on Edge Clusters | Edge inference has become more widespread, as its diverse applications range
from retail to wearable technology. Clusters of networked resource-constrained
edge devices are becoming common, yet no system exists to split a DNN across
these clusters while maximizing the inference throughput of the system.
Additionally, no production-ready orchestration system exists for deploying
said models over such edge networks which adopts the robustness and scalability
of the cloud. We present an algorithm which partitions DNNs and distributes
them across a set of edge devices with the goal of minimizing the bottleneck
latency and therefore maximizing inference throughput. The system scales well
to systems of different node memory capacities and numbers of nodes, while
being node fault-tolerant. We find that we can reduce the bottleneck latency by
10x over a random algorithm and 35% over a greedy joint partitioning-placement
algorithm, although the joint-partitioning algorithm outperforms our algorithm
in most practical use-cases. Furthermore we find empirically that for the set
of representative models we tested, the algorithm produces results within 9.2%
of the optimal bottleneck latency. We then developed a standalone cluster
network emulator on which we tested configurations of up to 20 nodes and found
a steady increase in throughput and decrease in end-to-end latency as the
cluster size scales. In these tests, we observed that our system has multi-node
fault-tolerance as well as network and system IO fault-tolerance. We have
implemented our framework in open-source software that is publicly available to
the research community at https://github.com/ANRGUSC/SEIFER. | Arjun Parthasarathy, Bhaskar Krishnamachari | 2023-04-24T09:21:56Z | http://arxiv.org/abs/2304.11941v1 | # Partitioning and Deployment of Deep Neural Networks on Edge Clusters
###### Abstract.
Edge inference has become more widespread, as its diverse applications range from retail to wearable technology. Clusters of networked resource-constrained edge devices are becoming common, yet no system exists to split a DNN across these clusters while maximizing the inference throughput of the system. Additionally, no production-ready orchestration system exists for deploying said models over such edge networks which adopts the robustness and scalability of the cloud. We present an algorithm which partitions DNNs and distributes them across a set of edge devices with the goal of minimizing the bottleneck latency and therefore maximizing inference throughput. The system scales well to systems of different node memory capacities and numbers of nodes, while being node fault-tolerant. We find that we can reduce the bottleneck latency by 10x over a random algorithm and 35% over a greedy joint partitioning-placement algorithm, although the joint-partitioning algorithm outperforms our algorithm in most practical use-cases. Furthermore we find empirically that for the set of representative models we tested, the algorithm produces results within 9.2% of the optimal bottleneck latency. We then developed a standalone cluster network emulator on which we tested configurations of up to 20 nodes and found a steady increase in throughput and decrease in end-to-end latency as the cluster size scales. In these tests, we observed that our system has multi-node fault-tolerance as well as network and system IO fault-tolerance. We have implemented our framework in open-source software that is publicly available to the research community at [https://github.com/ANRGUSC/SEIFER](https://github.com/ANRGUSC/SEIFER).
Arjun Parthasarathy and Bhaskar Krishnamachari. 2023. Partitioning and Deployment of Deep Neural Networks on Edge Clusters. 1 (April 2023), 27 pages. [https://doi.org/10.1145/nnnnnn.nnnnnn](https://doi.org/10.1145/nnnnnn.nnnnnn)
## 1. Introduction
Deep Neural Networks (DNNs) have greatly accelerated machine learning across different disciplines, such as Computer Vision (Cheng et al., 2017) and Natural Language Processing (Krishnamachari et al., 2020). Edge Inference is becoming an increasingly popular field with multiple facets (Krishnamachari et al., 2020), as sensor-driven computation necessitates DNN inference in the field. Applications for Edge Inference range from retail to wearable technology (Bahdan et al., 2018; Chen et al., 2020).
The edge can come in multiple configurations (Krishnamachari et al., 2020; Parthasarathy et al., 2020), and there are multiple approaches to facilitate edge inference. For cloud-edge hybrid inference, one such approach is model compression (Krishnamachari et al., 2020), which deals exclusively with DNN optimization but does not address the system's runtime configuration. In this paper, we focus on clusters of resource-constrained edge devices. These _edge clusters_ are becoming increasingly common due to their low-cost and scalability at the edge (Krishnamachari et al., 2020). Many lessons in high-availability and application portability can be taken from cloud computing (Krishnamachari et al., 2020). Unlike a cloud data center, the edge brings system resource limitations and communication bottlenecks between devices.
With this in mind, we address the following problem: **How can we take advantage of multi-device edge clusters to enable high-performance DNN inference while respecting computational resource constraints and
taking into account the heterogeneity of communication links? Additionally, can we integrate cloud computing principles such as fault-tolerance, high-availability, and container-based abstractions to make edge inference production-viable?
To partition a deep learning model, we first split the model into components that are executed sequentially. Each partition is assigned to a different edge device, and once each node performs inference with its piece of the model, that intermediate inference result is sent to the next node with the corresponding partition in the sequence. This inference pipeline is shown in Figure 1.
In an edge cluster, although we have a lower computational power in each node, we can take advantage of this inference pipelining to increase system throughput. Since each node can perform inference with its partition individually, prior nodes in the pipeline can send their finished inference results to the subsequent nodes in the pipeline and accept new batches.
We define the _throughput_ metric of a system as the number of inference cycles it can perform per unit time. As we showed in our previous work DEFER [36], we can achieve higher throughput with distributed edge inference as opposed to inference on a single device, provided that the node has enough capacity for the model.
The throughput is defined as the reciprocal of the _bottleneck latency_. For nodes \([k]=\{1,2,\ldots,k\}\), the bottleneck latency \(\beta\) is defined as

\[S=\{c_{k},\gamma_{k}\mid k\in[k]\},\qquad\beta=\max_{s\in S}s \tag{1}\]
where \(c_{k}\) is the compute time of the operations on node \(k\), and \(\gamma_{k}\) is the communication time between node \(k-1\) and \(k\).
We use ResNet50 [22], which is a representative model for our use case. On a Raspberry Pi 4, the inference speed was found to be 225 ms [39]. Next, we found the amount of data transferred between each layer of the model. On average, 10.2 Mbits of data was transferred between layers. Given an average WiFi bandwidth of 6 Mbps for a low-end edge network, this gives us a communication time of \(1.7s\). This is 7.5x slower than the compute time. In reality, many models are larger than ResNet50 and will therefore be split across devices, so each device will have fewer operations to execute. This means that communication time will outweigh compute time as the bottleneck. Therefore, we can simplify our bottleneck latency using Equation 2.

Figure 1: Partitioning and Distributing a Model Across Edge Devices to Create an Inference Pipeline
\[\beta=\max_{k\in[k]}\gamma_{k} \tag{2}\]
Since throughput is defined as \(\frac{1}{\beta}\), by minimizing the bottleneck latency we maximize inference throughput.
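To make the metric concrete, the short Python sketch below (illustrative, not code from the released framework) computes \(\beta\) and the resulting throughput from per-node compute times and per-link communication times, using the ResNet50 estimates above.

```python
def bottleneck_latency(compute_s, comm_s):
    # beta = max over all per-node compute times and per-link
    # communication times (Equation 1)
    return max(compute_s + comm_s)

# Illustrative numbers from the text: 225 ms compute per Raspberry Pi 4,
# ~1.7 s to ship 10.2 Mbits over a 6 Mbps link
compute_s = [0.225, 0.225, 0.225]
comm_s = [1.7, 1.7]
beta = bottleneck_latency(compute_s, comm_s)
print(beta, 1 / beta)  # 1.7 s bottleneck -> ~0.59 inference cycles/s
```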
Additionally, we assume that all nodes are homogeneous in RAM. We discuss the different capabilities of edge devices in Section 5.1. If the devices are not the same capacity, then the algorithm will take the smallest memory capacity across all nodes in the cluster, and take that as the capacity of each node.
In this paper, we primarily analyze image and text models due to their prevalence on the edge for visual analytics applications (Wang et al., 2017; Wang et al., 2018). We build on prior solutions by addressing both system resource limitations and cloud-computing features to create a robust inference framework.
We make two contributions:
1. A partitioning and placement algorithm for DNNs across a cluster of edge devices distributed spatially within the same WiFi network. The algorithm finds the candidate partition points, finds the optimal partition sizes to transfer the least amount of data, and finds the arrangement of nodes with the highest bandwidth. Together, these aim to minimize the resulting bottleneck latency according to the throughput metric.
2. A robust, containerized system to perform inference with the model partitions. The system is node fault-tolerant and dynamically updates the model partitions based on revisions to the model. The framework takes into account system resource limitations to provide a lightweight inference runtime. Our code is available at [https://github.com/ANRGUSC/SEIFER](https://github.com/ANRGUSC/SEIFER).
We found that our algorithm results in a 10x improvement over a random partitioning/placement algorithm, and a 35% reduction in bottleneck latency for systems with 50 compute nodes. We empirically observe an average approximation ratio of 1.092 for the bottleneck latency (i.e. it is 9.2% more than the optimal bottleneck latency, on average).
Additionally, we found that our containerized system has multi-node fault-tolerance and is able to recover from both network and system IO faults.
## 2. Related Work
### Edge Inference
#### 2.1.1. DNN Model Slicing
Some works mathematically perform DNN Model Slicing by layer (Wang et al., 2017; Wang et al., 2018), after calculating layer impact during the training stage. These do not account for communication demands on the edge. Others abstract model layers into certain "execution units" (Wang et al., 2017; Wang et al., 2018), which they then choose to slice based on certain resource requirements. Li _et al._ (Li et al., 2019) regressively predict a layer's latency demand and optimize communication bandwidth accordingly. These works are optimized for a hybrid edge-cloud pipeline and do not address the demands of a cluster of edge devices. Couper (Li et al., 2019) additionally evaluates model slices on a containerized platform and deploys these containers on an existing edge framework. However, it only addresses the case of a few sensors and edge devices, instead of a large edge cluster.
This means that unlike our work, it does not have to optimize the placement of partitions onto devices and instead focuses on minimizing data transfer.
Our prior work DEFER addresses the partitioning and execution of DNNs on edge clusters, but does not address how to find candidate partition points or attempt to minimize bottleneck latency. Additionally, it does not leverage containerization and therefore is not easily portable between edge cluster configurations. We build on our prior work by introducing both containerization and an algorithm which aims to find optimal model partitions and node placement to minimize bottleneck latency.
Our paper builds on these works by addressing the bandwidth limitation of an edge cluster, and aims to maximize inter-node bandwidth to minimize bottleneck latency.
#### 2.1.2. Edge Inference Runtimes
Intermittent edge inference (Krishnamachari et al., 2017) describes how to optimize edge inference for energy use on edge devices, but focuses on compression and pruning of model layers with a specialized inference runtime. Jupiter (Krishnamachari et al., 2017) orchestrates execution of a task on geographically distributed compute nodes based on a given task graph. This framework takes compute time as the bottleneck and uses a dynamic-programming solution to minimize computation time when distributing the task graph. Unlike this work, we take communication time as the bottleneck, which is more in line with real-world edge-cluster characteristics. Another edge task execution framework (Krishnamachari et al., 2017) uses a multi-stage process to evaluate, schedule and containerize tasks on the edge. While creating an efficient inference runtime, this framework does not explicitly address the use case of DNN slicing across edge devices and therefore does not consider the bandwidth between nodes in the cluster.
Our prior work DEFER is a standalone Python application that cannot scale across different node environments nor integrate the high-availability and fault-tolerance of a cluster framework. Couper, mentioned in the previous section, creates a set of containers that can be run by another container orchestration framework. However, it doesn't construct the Kubernetes (Kubernetes, 2018) Pod execution units that run on each node, nor does it manage the lifecycle of the application components. Our framework pre-packages a Kubernetes distribution, allowing it to self-manage a standalone set of cluster resources and the scheduling of application components onto nodes. Broadly speaking, we introduce a containerized inference runtime optimized specifically for DNN partitioning and standalone cluster management based on cloud-computing principles, which prior edge inference runtimes do not address.
## 3. Partitioning and Placement Algorithm
We are given two graphs:
1. An unweighted DAG \(G_{m}\) representing the computation graph of a DNN, where each vertex represents a layer in the model. This DAG can be found using common ML libraries such as Tensorflow (Abadi et al., 2016) and Keras (Keras, 2017).
2. A weighted complete graph \(G_{c}\) representing the communication graph of a cluster of homogeneous physical compute nodes, where each vertex represents a physical compute node and each edge represents the bandwidth between those nodes. The graph is complete because we assume that these edge devices will communicate over the same WiFi network.
Our goal is to optimally partition the model and place these partitions on a set of edge devices. We do so as follows.
### Converting a Complex DAG to a Linear DAG
First, we need to distill \(G_{m}\) into a linear DAG. The vertices where it is possible to partition the model are called "candidate partition points." We illustrate this in Figure 2.
For \(v\in V\), edges \(e\in E\) and source vertex \(s\) of \(G_{m}\), find the longest path from \(s\) to \(v\). This can be done by topologically sorting the DAG and for each vertex in the resulting list, relaxing each neighbor of that vertex. We call the length of this longest path the _topological depth_ of that vertex in the graph. Let \(LP(v)\) denote the length of longest path from \(s\) to \(v\).
To verify that all paths from vertex \(v_{prev}\) go through vertex \(v\), use a modified DFS by recursing on the incident edges of each vertex. If we encounter a vertex with a greater topological depth than \(v\), return false. If we reach vertex \(v\), return true. Let \(AP(v_{prev},v)\) denote the result of this algorithm.
Given the previously found candidate partition point \(p_{k-1}\) and the current vertex \(u\), the next candidate partition point \(p_{k}=u\) iff:
1. \(LP(u)\neq LP(v)\;\forall v\in V\setminus\{u\}\)
2. \(AP(p_{k-1},u)=\text{true}\)
with \(p_{0}=s\).
The time complexity of LP is \(O(V+E)\). AP runs in polynomial time by returning upon reaching a vertex with a greater topological depth. Therefore, this algorithm runs in polynomial time.
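A minimal Python sketch of this search is shown below; it computes \(LP(v)\) by relaxing edges in topological order and keeps the vertices whose depth is unique (condition 1). For brevity it assumes the \(AP(\cdot,\cdot)\) reachability check is applied afterwards, and that the hypothetical `adj` dictionary maps every layer, including sinks, to its successor list.

```python
from collections import Counter, deque

def candidate_partition_points(adj, source):
    # adj: layer -> list of successor layers; every layer appears as a key.
    indeg = Counter(v for succs in adj.values() for v in succs)
    # Kahn's algorithm for a topological order
    order, q = [], deque(u for u in adj if indeg[u] == 0)
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    # Relax edges in topological order to obtain LP(v)
    depth = {v: float("-inf") for v in adj}
    depth[source] = 0
    for u in order:
        for v in adj[u]:
            depth[v] = max(depth[v], depth[u] + 1)
    # Condition (1): keep vertices whose topological depth is unique;
    # the AP(., .) all-paths check still has to confirm each of these.
    counts = Counter(depth.values())
    return [v for v in order if counts[depth[v]] == 1]
```

For a linear chain every vertex is returned; for a NASNet-style fragment as in Figure 4, no vertex has a unique depth and the list comes back empty, matching the observation above.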
Figure 2 shows the candidate partition points at certain sections of the DAG of ResNet50 [22] and InceptionResNetV2 [43].
As shown in Figure 3, almost all the models have at least 25 candidate partition points. This is more than enough granularity to split the model, given the upper bound on the number of partitions we will need described in Section 5.1. Some model architectures, like NASNet [52], do not allow partitioning under our scheme.
As shown in Figure 4, NASNet cannot be partitioned because there is no single point that splits the model into a distinct execution unit without dependencies on a previous or subsequent layer. If we run our LP algorithm, we find that no single layer has a topological depth distinct from all other layers. We found that 64 of the 66 (97%) pretrained Keras models could be partitioned under our scheme, and only the NASNet variants could not.
Figure 2: Partition points for ResNet50 and InceptionResNetV2 models
### Optimal model partitioning and placement
Our goal is to maximize throughput of the system. As previously discussed, this means we need to minimize the bottleneck latency. Latency is defined as \(\frac{\text{data}}{\text{bandwidth}}\). Given a tuple of partition points \(P_{opt}\), their transfer sizes \(T\), and a set of bandwidths \(B\) between compute nodes, the latency between each set of compute node is defined as
\[\gamma_{k}=\frac{T_{opt,k}}{B_{k}}\forall 0\leq k<|P_{opt}| \tag{3}\]
The bottleneck latency for the system is then given by Equation 2. For the purposes of explanation, we separate the problems of optimizing the partitions (thereby optimizing transfer size) and optimizing placement (thereby optimizing bandwidth between nodes). We show empirically that this results in the smallest bottleneck latency. In Section 7, we compare this formulation to an algorithm that tries to jointly optimize transfer size and bandwidth.
Figure 4: Portion of NASNet’s layer DAG
Figure 3: Histogram of number of candidate partition pts
#### 3.2.1. Finding optimal partitions
Our heuristic for finding optimal partitions is the "transfer size" of the partition; i.e how much data will be transferred from that partition to the next. Given the tuple of candidate partition points \(P=(p_{0},p_{1},\ldots,p_{k})\), we now need to find a set of model partitions which minimizes the sum of transfer sizes. Assuming a batch size of 1, the transfer size \(t_{k}\) of candidate partition point \(p_{k}\) is defined as
\[t_{k}=\frac{\eta(p_{k})}{\lambda} \tag{4}\]
The function \(\eta(p_{k})\) finds the size of the output array of candidate partition point \(p_{k}\).
\(\lambda\approx 1.44\times 2.1\approx 3.02\) represents the total compression ratio, given by multiplying the average ZFP compression ratio [30] by the average LZ4 compression ratio [12].
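A sketch of this computation (illustrative; it assumes float32 activations so that \(\eta(p_k)\) is measured in bytes):

```python
import numpy as np

LAMBDA = 1.44 * 2.1  # combined ZFP x LZ4 compression ratio, ~3.02

def transfer_size_bytes(output_shape, dtype=np.float32):
    # t_k = eta(p_k) / lambda for a batch size of 1
    raw_bytes = int(np.prod(output_shape)) * np.dtype(dtype).itemsize
    return raw_bytes / LAMBDA

# e.g. a 56x56x256 activation map at a candidate partition point
print(transfer_size_bytes((56, 56, 256)))  # ~1.06 MB after compression
```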
To better illustrate our algorithm, we classify the transfer size \(t_{k}\) into 3 transfer size classes ("low", "medium", or "high") based on the distribution of the transfer sizes. We discuss how many transfer size classes are actually necessary in section 5.2.1.
\[C=\{L,M,H\}\quad t_{k}\in C\quad\forall 0\leq k<|P| \tag{5}\]
The optimal set of partitions is the scheme which minimizes the sum of the transfer sizes of said partitions.
Let \(G_{p}\) represent a DAG, where each vertex is represented by a possible partition. The vertices are defined as follows:
\[V=\{\{p_{i},p_{i+1},\ldots,p_{j}\}\mid\omega(\{p_{i},p_{i+1},\ldots,p_{j}\})< \kappa\}\quad\forall 0\leq i<|P_{opt}|,0\leq j<|P_{opt}|-i \tag{6}\]
The set of vertices represents every possible contiguous subarray of candidate partition points, where \(\omega(P)\) finds whether the memory use of partition \(P\) is within the memory capacity \(\kappa\) of the compute node. As discussed in Section 5.1, we quantize the models to reduce their memory footprint. However, when calculating the memory footprint of a partition, we do not consider this quantization. This means that we are conservative on partition size and in turn provide extra space on each device for the memory overhead from containerization. We provide a discussion on deriving the model's memory usage in section 5.1. Each partition is a set of layers that fall between the partition points \(p_{i}\) and \(p_{j}\).
The set of edges is defined as follows:
\[E=\{(u,v)\mid(u,v)\in V,\rho(u_{|u|-1})=\rho(v_{0})-1\} \tag{7}\]
The function \(\rho(v)\) finds the index of element \(v\) in \(P_{opt}\). There is an edge between vertices if the last partition point of \(u\)'s partition is adjacent in \(P\) to the first partition point of \(v\)'s partition. For example, if \(u=[1,2]\), \(v=[3,4]\), and \(P=(1,2,3,4)\), then \((u,v)\) is an edge. Each edge has a weight \(w(u,v)\) which corresponds to its transfer size class.
Figure 5 shows an example partition graph, where edges that are the same color will have the same weight. In the figure, "root" vertices have in-degree 0, "leaf" vertices have out-degree 0, and "intermediate" vertices have neither.
Algorithm 1 finds the shortest path in the graph from a root to a leaf. Since edges which bridge the same candidate partition points (and have the same color as shown in Figure 5) will have the same subsequent paths, we can memoize the shortest path. On line 2, we store a map which tells us, for each candidate partition point, the shortest path from that point. Using memoization, Algorithm 1 takes \(O(N)\) to find the shortest path, but \(O(N^{2})\) to construct the partition graph. Therefore the runtime of Algorithm 1 is \(O(N^{2})\), where \(N\) is the number of nodes.
We also need to take into account the latency between the dispatcher node and the first compute node, since the model's input data is very large. After finding the array of optimal partitions, we prepend a special _dispatcher partition_ to the array of optimal partitions, which represents the runtime that sends inference input data. The transfer size
corresponding to this partition is \(\eta(p_{0})\), where \(p_{0}\) is the first candidate partition point and represents the model's input layer. We don't need to worry about the latency between the last compute node and the dispatcher node, because the size of a finished inference result is far smaller than an array of input data. We quantify this size difference in Section 5.2.2.
Let \(\Theta\) represent the set of chosen partitions. For each subarray in \(\Theta\), we take the last element of the subarray, add that to the list of partition points \(Q\), and add its corresponding transfer size to the list \(S\). The resulting list is then sorted based on the topological depth of each partition point, so that the partitions are executed in the order they appear in the model.
#### 3.2.2. Finding optimal model placement
With the set of optimal partitions \(Q\) and their corresponding transfer sizes \(S\), we now need to "match" them to the vertices of \(G_{c}\). We know from Equation 5 that every element of \(S\) is a bandwidth class of \(C\). Let \(c(e)\) return the bandwidth class of a given edge of \(G_{c}\). We use the following threshold function to classify each edge:

\[\tau(X,t):\quad c(e)=\begin{cases}C_{\operatorname{idx}(X)-1}&\text{if }w(e)<t\\ X&\text{if }w(e)\geq t\end{cases}\qquad\forall e\in E_{c} \tag{8}\]

where \(\operatorname{idx}(X)\) is the index of class \(X\) in \(C\). If the edge is greater than or equal to the threshold, it will be classified as class \(X\); otherwise, it will be classified as the class in \(C\) right below \(X\). We provide a discussion on estimating the bandwidth distribution of \(G_{c}\) in Section 5.3.1. In order for our algorithm to work, we set the number of transfer size classes equal to the number of bandwidth classes.
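The classification itself is straightforward; the sketch below is a multi-class generalization of \(\tau\) in Python (illustrative, with hypothetical thresholds), where ascending thresholds split the bandwidth range into \(|C|\) classes:

```python
import bisect

def classify_edges(edge_bw, thresholds):
    # edge_bw: (u, v) -> measured bandwidth; thresholds sorted ascending.
    # Class index 0 is the lowest class (L), the last index the highest (H).
    return {e: bisect.bisect_right(thresholds, bw) for e, bw in edge_bw.items()}

# Two thresholds -> three classes L(0), M(1), H(2)
print(classify_edges({("a", "b"): 3.1, ("b", "c"): 9.8}, [4.0, 8.0]))
# {('a', 'b'): 0, ('b', 'c'): 2}
```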
Figure 6 shows an example communication graph.
Figure 5. Example partition graph, where the partition points are \(P=\{1,2,3,4,5\}\)
```
// Map to store memoized paths
pathFrom ← NEW-MAP()

procedure MIN-COST-PATH(G, v)
    if v.children = ∅ then
        return v, 0
    end if
    partitionLastLayer ← v[v.length - 1]
    if partitionLastLayer ∉ pathFrom then
        paths ← []
        for c ∈ v.children do
            path, cost ← MIN-COST-PATH(G, c)
            paths ← APPEND(paths, (path, cost))
        end for
        pathFrom[partitionLastLayer] ← MIN(paths)
    end if
    minPath, minCost ← pathFrom[partitionLastLayer]
    chosenNode ← minPath[0]
    // Path starting at v and going to a leaf
    newPath ← APPEND([v], ...minPath)
    newCost ← minCost + w(v, chosenNode)
    return newPath, newCost
end procedure

procedure PARTITION(G)
    roots ← GET-ROOT-VERTICES(G)
    for r ∈ roots do
        path, cost ← MIN-COST-PATH(G, r)
        paths ← APPEND(paths, (path, cost))
    end for
    minPath, minCost ← MIN(paths)
    // Prepend dispatcher partition Δ to beginning of optimal partitions array
    bestPath ← APPEND([Δ], ...minPath)
    return bestPath
end procedure

Θ ← PARTITION(G_p)
```
**Algorithm 1** Optimal Partitioning
Given the array of transfer sizes \(S\) and array of communication graph edges \(E_{c}\), the lower bound on bottleneck latency we can achieve is given by Theorem 1.
Theorem 1.: _The lowest bottleneck latency we can achieve is:_
\[\min(\beta)=\frac{\max S}{\max E_{c}} \tag{9}\]
_Therefore, if we achieve \(\min(\beta)\), then we have found the optimal minimum bottleneck latency._
We prove Theorem 1 as follows:
Given the highest transfer size (\(\max S\)), it must be matched with the highest bandwidth (\(\max E_{c}\)) to achieve the lowest bottleneck latency. There are two cases in which the system would have another bottleneck latency:
1. \[\beta=\frac{\max S}{e}\forall_{e}\in E_{c}-max(E_{c})\] (10)
2. \[\beta=\frac{s}{e}\forall\{s\in S-max(S),e\in E_{c}-max(E_{c})\mid\beta\geq \frac{\max S}{\max E_{c}}\}\] (11)
In Equation 10, the latency of the system would be higher than Equation 9, since the transfer size is being matched with a lower bandwidth edge. In Equation 11, some other transfer size \(s\) and bandwidth \(e\) may result in a higher bottleneck latency, in which case Equation 9 still holds. Therefore, Theorem 1 holds.
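In code, the Theorem 1 bound is a one-liner, which is useful as the reference value when reporting approximation ratios (illustrative values below):

```python
def optimal_bottleneck(transfer_sizes, bandwidths):
    # Theorem 1 lower bound: min(beta) = max(S) / max(E_c)
    return max(transfer_sizes) / max(bandwidths)

# e.g. transfer sizes in Mbits and bandwidths in Mbps (made-up values)
print(optimal_bottleneck([4.0, 10.2, 6.1], [6.0, 12.0, 9.5]))  # 0.85 s
```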
We run tests in Section 7 to see how often we get this optimal solution. Algorithm 2 performs the matching between \(S\) and \(G_{c}\) to try to reach the optimal latency as outlined above. Let \(N\) represent the array of nodes that we choose from \(G_{c}\), with length \(|S|\).
In Algorithm 2, we use the color-coding k-path algorithm [2], which finds a path of length \(k\) (where \(k\) is the number of vertices) in \(G_{c}^{X}\) if a \(k\)-path exists, and does so in polynomial time if \(k<\log(|V^{X}|)\). See Section 5.1 for how we can bound the runtime of the algorithm. We use a binary search to find the maximum threshold for which a \(k\)-path exists. On line 3, we sort in descending order so that we can find the maximum viable edge-weight threshold with a binary search. As \(N\) starts to be filled in, the \(k\)-paths have to be found between certain nodes in order for \(N\) to be a contiguous path of nodes. We modify the \(k\)-path algorithm to start at \(s\) and stop once it reaches \(u\). We make the algorithm more efficient by stopping a particular iteration if we reach \(u\) before we have a path of length \(k\). If \(s\) is _null_, we find any \(k\)-path that ends at \(u\). Similarly, if \(u\) is _null_, we find any \(k\)-path that starts at \(s\).
Algorithm 3 performs the \(k\)-path matching of partitions onto vertices of \(G_{c}\).
1:procedureSUBGRAPH-K-PATH(\(X\), \(k\), \(s\), \(u\)) // Sort by weight in descending order \(\mathit{edgeList}\leftarrow\mathrm{SORT}(G_{c}\), \(\{e\in E_{c}\mid w(e)\},\mathit{reverse}=\mathrm{TRUE})\)\(\mathit{low}\gets 0\)\(\mathit{high}\gets\mathit{edgeList.length}\)\(\mathit{bestPath}\leftarrow[]\)while\(\mathit{low}<\mathit{high}\)do \(\mathit{median}\leftarrow\frac{(\mathit{low}+\mathit{high})}{2}\)\(\tau(X,\mathit{edgeList}[\mathit{median}])\)\(\tau(X,\mathit{threshold})\)// \(G_{c}^{X}\) is the induced subgraph of \(G_{c}\) with only bandwidth class \(X\) edges \(G_{c}^{X}\leftarrow\{E_{c}^{X}=\{e\in E_{c}\wedge c(e)=X\mid e\}\mid V_{c}^{X},E_{c}^{X}\}\)\(\mathit{result}\leftarrow\mathrm{K-PATH}(G_{c}^{X},k,s,u)\) if\(\mathit{result}=\mathrm{FALSE}\)then \(\mathit{low}\gets\mathit{median}+1\) else \(\mathit{high}\gets\mathit{median}\)\(\mathit{bestPath}\gets\mathit{result}\) endif endwhile for\(N\in\mathit{bestPath}\)do \(\mathrm{DEL}(G_{c},N)\) endfor endprocedure
```
**Algorithm 2** Finding K-Paths
By starting with the longest \(H\)-subarrays and working to the shortest \(L\) subarrays, we are greedily finding the best bandwidth paths to match with the highest transfer size terms of \(S\). We continue this process until we have found \(k\)-path matchings for all subarrays of \(S\). In some cases, a high number of bandwidth classes will prevent the algorithm from returning a result, because it has very few edges to choose from during each iteration of the matching. In this case, we can re-run the algorithm with fewer bandwidth classes.
## 4. Cluster Architecture
To run on a resource-constrained edge cluster, we propose modifications from our earlier implementation of the DEFER framework (Zhu et al., 2017) which make the system more robust. Rather than using the full Kubernetes framework (which is not suitable for resource-constrained devices), we use microK8s (Krishnamachari et al., 2017), which is adapted to run on the edge. MicroK8s only modifies the Kubernetes control plane and underlying infrastructure, so we still use Kubernetes constructs for our framework. We create Kubernetes services and take advantage of in-cluster DNS to allow pod communication independent of their lifecycle. In the event of node failure, pods can be rescheduled to healthy nodes and the system will continue running.
As outlined in DEFER (Zhu et al., 2017), the system has the configuration step and inference step. We encourage readers to look at our prior work to understand how the configuration and inference steps work, because our objective here is to demonstrate the Kubernetes architecture of each step. We add an additional step, the _system init step_, which configures the edge cluster and allows it to be run on any set of edge devices.
### System Initialization Step
Upon system startup, the process of leader election starts and the _Dispatcher Node_ is chosen. The following events take place:
1. **Scheduling IPerf Jobs**. The system initialization pod launches a job for each node which schedules a pod. Each pod contains a container which runs an IPerf (Krishnamachari et al., 2017) server and an IPerf client, which it uses to find the bandwidth between itself and each other node in the cluster (see the sketch below). Using a leader-follower architecture, the dispatcher directs each compute node when and where to connect in order to run the IPerf job. Each pod then directs the bandwidth info back to the dispatcher.
2. **Scheduling Dispatcher Init Job**. A job is created for the pod that runs the partitioning and placement algorithm. It is scheduled onto the same node that the system initialization pod is running on, which has been chosen as the leader. The dispatcher pod is configured with the bandwidths between all nodes in the graph.
3. **NFS Server**. A cluster-wide NFS server is dynamically provisioned using NFS-Ganesha (Zhu et al., 2017). The NFS server will contain the files necessary to instantiate each node's partition. Since the NFS server has a lifecycle independent of each pod, it will preserve configuration data so that crashed pods can restart their inference runtimes.
Figure 7. Kubernetes Cluster Overview
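The bandwidth measurement in step (1) can be sketched as follows; this is an illustration using iperf3's JSON mode rather than the framework's exact code, and it assumes an iperf3 server is already listening on the peer pod.

```python
import json
import subprocess

def measure_bandwidth_mbps(server_host, seconds=5):
    # Run an iperf3 client against a peer pod and parse the received rate
    out = subprocess.run(
        ["iperf3", "-c", server_host, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    ).stdout
    bps = json.loads(out)["end"]["sum_received"]["bits_per_second"]
    return bps / 1e6
```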
### Configuration Step
The pod within the dispatcher init job runs two containers which perform the DEFER configuration step as follows:
1. **Partitioning Container**. The partitioning container pulls the model from a specified external repository. Using the stored node bandwidth data, the container runs the algorithm specified above to partition the model and assign each partition to a compute node. It quantizes the model (see Section 5.1) and saves the serialized model files to the NFS server.
2. **Deploy Container**. The container creates a separate deployment for each inference pod to manage its lifecycle. Each inference pod is assigned to a certain compute node, and is configured to send its intermediate computed inference to another compute node. Additionally, it creates a deployment for the dispatcher to send and receive inference data during the inference step. This dispatcher deployment is scheduled to whichever node the placement algorithm scheduled the dispatcher partition.
### Inference Step
#### 4.3.1. Inference Pods
Each inference pod has two containers:
1. **Inference Runtime**. This container instantiates a TFLite [45] model from the files on the NFS server. The container contains two FIFOs, which read and write serialized data to the IO container, respectively. Using ZFP [30] and LZ4 [12] compression, the runtime reads and decompresses data, runs it through the model, and then compresses and writes data.
2. **IO Container**. Contains two FIFOs and two TCP sockets. The FIFOs are used to read and write serialized data to the inference runtime, respectively. One TCP socket acts as a server and receives a connection from the previous compute node, while the other acts as a client and sends the computed inference data to the subsequent compute node.
#### 4.3.2. Dispatching Inference Data
Once the inference pods are deployed, the dispatcher runs three containers:
1. **Processing Container**. Runs an HTTP server to read model input. It will convert the model input into the ZFP/LZ4 compressed form used in the system. The container contains two FIFOs, one to send model input data, and the other to receive finished inference results. The container runs an HTTP client which can be configured to send the finished inference data to a certain location.

Figure 8. IPerf job orchestration to find bandwidths between nodes
2. **IO Container**. Contains two FIFOs and two TCP sockets. The FIFOs are used to read and write serialized data to the processing container. One TCP socket acts as a server to receive finished inference results from the final compute node, and the other acts as a client to send model input to the first compute node.
3. **Model Watch Container**. Watches for updates to the model on the external repository, and if it changes, the container will stop the inference pods and restart model partitioning. The cluster only needs to be shut down and restarted from the system initialization step if a new node is added.
In lieu of running on microK8s, these containers can also be packaged and deployed on other edge inference frameworks [(4; 27; 32)].
### System Fault Tolerance
Within each container, the IO mechanisms are all protected from failure. We elaborate on their recovery modes below.
1. Network Failure 1. **IPerf job network failure** - system re-tries connection, until it reads a non-error state from the IPerf JSON output.
Figure 9. Inference Pod

Figure 10. Dispatcher Pod
2. **Client-side TCP connection error** - system checks for connection refused and DNS errors, and re-queries the server until a successful connection (sketched after this list). If the desired pod is still in the ContainerCreating state, there will be a ConnectionRefusedError. If there was a node failure or the desired pod was restarted, there will be a DNSError while the service backing the desired pod waits for a new pod to be scheduled. 3. **Connection Reset Error or EOF** - socket failed while reading data, so re-create the TCP server socket and wait for a new client connection. 4. **Broken Pipe Error** - socket failed while writing data, so re-create the TCP client socket and wait for server to accept connection.
2. File IO Failure 1. **Broken Pipe Error** - other end of FIFO failed while writing data, so pipe is re-created and opened for writing. 2. **Connection Reset Error** - other end of FIFO failed while reading data, so pipe is re-created and opened for reading. 3. **Blocking Error** - pipes are opened for writing in non-blocking mode, so if there's no incoming data on the pipe, the thread needs to wait for incoming IO.
3. Kubernetes Pod Failure 1. **Pod Restart** - because of the RestartAlways PodFailure policy we selected, pods in the deployments will always restart after failure (ex. OOMKilled, Error, etc.).
4. Node Failure 1. **Rescheduling Pods** - Pods will be evicted from their current node and rescheduled to a functioning node, where they will be started up again and configure themselves with TCP socket connections and NFS server data. For inference pods, the neighboring partitions will detect a break in the TCP socket connection and attempt to reconnect to the pod that was just rescheduled to a new node. 2. **Rescheduling Volumes** - if the node containing the NFS server data goes down, then Kubernetes will need to dynamically provision a new PersistentVolume on another healthy node. However, this means that the current model partition files will be lost, and the cluster will need to be restarted. The in-built Kubernetes database, etcd, stores multiple copies of cluster configuration data across multiple nodes, allowing for cluster-wide configuration data to be node fault-tolerant. In our future work, we plan to explore sharding of our NFS server across multiple nodes to prevent single-node dependency.
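The client-side recovery loop from the list above reduces to a retry loop around the connection attempt; a minimal sketch with a fixed back-off (illustrative, not the framework's exact implementation):

```python
import socket
import time

def connect_with_retry(host, port, delay_s=1.0):
    # Retry on refused connections (peer pod still ContainerCreating) and
    # DNS errors (peer pod being rescheduled) until the service resolves
    # and accepts the connection.
    while True:
        try:
            return socket.create_connection((host, port))
        except (ConnectionRefusedError, socket.gaierror):
            time.sleep(delay_s)
```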
## 5. Measurements and Modeling
In this section, we establish some baselines for the evaluation of our framework. In particular, we model the memory footprint of a model partition and some properties of Random Geometric Graphs which help us prove characteristics of the inter-node communication graph. The results from this section inform the configuration of our algorithm and our understanding of its efficacy.
### Memory Footprint of Partitions
We used a sample of models from TFHub (Wang et al., 2017) for four different domains: image, video, audio, and text. Using TFLite (Wang et al., 2017), we performed float16 quantization and dynamic range quantization on each model. Float16 quantization has an insignificant accuracy drop and dynamic range quantization has a minimal accuracy drop, so neither of them should reasonably affect the accuracy of the model in a real-world scenario.
We calculated the peak memory usage of each quantized model using the TFLite Model Benchmark Tool [46], and the results are shown in Figure 11. We focus on image and text models since they are the most memory-intensive and common use-cases for edge inference. By using float16 quantization, we can limit text models to 2000 MB peak memory usage. By using dynamic range quantization, we can limit image models to 2000 MB peak memory usage, and the vast majority of models will use less than 1000 MB. Table 1 quantifies how many devices on average we would need to accommodate the model for each category of edge capability and memory capacity.
For our purposes, low-end edge is a Raspberry Pi Zero [18] with 512 MB of RAM, mid-end edge is the Raspberry Pi 3 Model B [15] with 1 GB of RAM, and high-end edge is the Raspberry Pi 4 Model B [16] with up to 8 GB of RAM. We make this estimation assuming that extra memory will be used to run the MicroK8s control plane. We can infer from our sampling that we will not need more than 4 low-end edge devices to accommodate a real-world model. In Section 3.2.2, we said we could bound the runtime of the \(k\)-path algorithm. The runtime of the algorithm is roughly \(O(4.32^{k})\). Since we just found that there will be at most 4 partitions, we will only ever need to find a path of length \(k\leq 4\). Therefore, we can cap the runtime of the \(k\)-path algorithm at \(O(4.32^{4})<350\) operations, which runs on the order of milliseconds.

Table 1: Number of Devices Necessary to Accommodate Models

|       | Low-End Edge (512 MB) | Mid-End Edge (1 GB) | High-End Edge (8 GB) |
|-------|-----------------------|---------------------|----------------------|
| Image | 3                     | 2                   | 1                    |
| Text  | **4**                 | 3                   | 1                    |

Figure 11: Peak Memory Usage of Popular TFHub models
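The arithmetic behind this cap (assuming the \(O(4.32^{k})\) constant from the color-coding analysis):

```python
print(4.32 ** 4)  # ~348.3 < 350 operations for k <= 4 partitions
```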
### Distribution of Partition Transfer Sizes
Using the same sample of Keras models, we partitioned the models and calculated the transfer sizes of all the candidate partition points.
#### 5.2.1. Transfer Sizes between Model Partitions
Figure 12 shows the number of bins, according to Doane's estimator, that each model's transfer sizes would need to be adequately represented in a histogram-like format. This number of bins is roughly equal to the number of transfer size classes necessary to represent the variation in each model's transfer sizes.

We can see that most of the models require 11 transfer size classes, and almost all of the models require 11-13 classes. This informs the number of transfer size classes that we choose in Section 7. In that section, we test an approximately equal number of transfer size classes below and above 11 to see how this affects the bottleneck latency.
#### 5.2.2. Transfer Sizes between Dispatcher and Model Partitions
The transfer size between the dispatcher and the first model partition is the input size of the model. The transfer size between the last partition and the dispatcher is the size of the computed inference result. For ResNet50 with a batch size of 1, the input is a 3-channel 224x224 pixel image. This comes out to an array size of 150528. The output is a soft-max array of 1000 image classes. This comes out to an array size of 1000. The input size is more than 100x the output size, and this factor is higher for many models that accept larger images but have a similar number of output image classes. Therefore, the transfer size from the last partition to the dispatcher is negligible compared to the transfer size from the dispatcher to the first partition.
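The size gap is easy to verify (element counts, before compression):

```python
import numpy as np

inp = np.zeros((224, 224, 3), dtype=np.float32)  # ResNet50 input image
out = np.zeros((1000,), dtype=np.float32)        # softmax over 1000 classes
print(inp.size, out.size, inp.size / out.size)   # 150528 1000 150.528
```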
### Properties of Communication Graphs
The wireless communication graph formed by a randomly deployed set of nodes can be modeled as an Erdos-Renyi Random Geometric Graph (Zhou et al., 2017), so we can extract certain characteristics from it.
Figure 12. Histogram of number of bins necessary to represent each model's transfer sizes

#### 5.3.1. Average Bandwidth of an Edge Connection

Let \(B=150\) be the range of the WiFi router in meters. We derive the equation for bandwidth given distance from the router based on Shannon's capacity equation (\(C=\log_{2}\left(1+\frac{S}{N}\right)\)). Here, we assume that the signal decays proportionally to the inverse square of the distance between the device and the router. Equation 12 represents the bandwidth of a connection for a device \(d\) meters from the router.

\[D(d)=\log_{2}\left(1+\frac{a}{d^{2}}\right),\quad d\in(1,B) \tag{12}\]
In Equation 12, we found \(a=283230\) by assuming that the bandwidth at 80 m from the router was 5.5 Mbps, which matches the characteristics of a low-power edge network. Then, we can plug in the distance formula to derive the bandwidth given a position (x, y) on the 2D plane:
\[r(x,y)=D\!\left(\sqrt{x^{2}+y^{2}}\right)=\log_{2}\left(1+\frac{a}{x^{2}+y^{2}}\right)\quad x,y\in(-B,-1)\cup(1,B) \tag{13}\]
In Equation 13, we define our function on the domain \(x,y\in(-B,-1)\cup(1,B)\). We do this for two reasons: to satisfy the domain of Equation 12, and to simplify the creation of our geometric graphs for our simulations. From a practical standpoint, this means that we assume that no devices will be within 1 m of the router. Let two continuous random variables \(X,Y\sim\text{Unif}(-B,B);X,Y\notin(-1,1)\). Then their PDFs are
\[f_{X}(x)=f_{Y}(y)=\tfrac{1}{2(B-1)}\quad x,y\in(-B,-1)\cup(1,B) \tag{14}\]
Since \(X\) and \(Y\) are independent, their joint PDF is given by
\[f_{XY}(x,y)=\left(\tfrac{1}{2(B-1)}\right)^{2}\quad x,y\in(-B,-1)\cup(1,B) \tag{15}\]
Now, we can find the expected value of the transformation \(r(x,y)\) over \(X\) and \(Y\).
\[E[r(X,Y)]=\int_{-B}^{B}\int_{-B}^{B}r(x,y)f_{XY}(x,y)dxdy\quad x,y\notin(-1,1) \tag{16}\]
To find the standard deviation, we need to also calculate \(E[r(X,Y)]^{2}\).
\[E[r(X,Y)^{2}]=\int_{-B}^{B}\int_{-B}^{B}r(x,y)^{2}f_{XY}(x,y)dxdy\quad x,y \notin(-1,1) \tag{17}\]
We can then calculate the mean, standard deviation, and coefficient of variation.
\[\begin{array}{l}\mu=E[r(X,Y)]\approx 4.766\\ \sigma=\sqrt{E[r(X,Y)^{2}]-E[r(X,Y)]^{2}}\approx 1.398\\ CV=\frac{\alpha}{\mu}\approx 0.293\end{array} \tag{18}\]
The average bandwidth between any two nodes in our communication graph is 4.766 Mbps and that the distribution of bandwidths in a randomly generated graph is relatively tight. We use this result in the next section to confirm the efficacy of our \(k\)-path algorithm.
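A quick Monte Carlo check of Equation 18, sampling \(X,Y\) uniformly on the stated domain and evaluating Equation 13, should reproduce these moments (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
B, a = 150, 283230
n = 10**6

def sample_coord():
    # Uniform on (-B, -1) U (1, B)
    return rng.uniform(1, B, size=n) * rng.choice([-1, 1], size=n)

x, y = sample_coord(), sample_coord()
bw = np.log2(1 + a / (x**2 + y**2))
print(bw.mean(), bw.std())  # ~4.77 and ~1.40, matching Equation 18
```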
#### 5.3.2. Clustering in RGGs
We can model the induced subgraph of bandwidth class \(H\) edges, \(G_{c}^{H}\), with a random geometric graph.
Consider the case where we only have \(L\) and \(H\) bandwidth classes, and we want to split the graph such that all edges above the average bandwidth are classified as \(H\).
Since the average bandwidth from Section 5.3.1 was found to be \(\mu\approx 4.766\), we find the distance from the router at which this bandwidth would be achieved using Equation 12.
\[D(d)=\mu\quad\Rightarrow\quad d\approx 103.944 \tag{19}\]
Because we take the range of the WiFi router to be \(B=150\), we can scale this result to the region \(r\in[0,1]^{2}\) necessary for an RGG.
\[r=\frac{103.944}{B}\approx 0.693 \tag{20}\]
Now, we can compute the proportion of vertices in the largest cluster and the _cluster coefficient_ of the graph (Gelman, 1998). First, we find the average degree \(\alpha\) of the graph given \(r\). Let \(N=|V|\) represent the number of vertices in \(G_{c}\).
\[\begin{split} a&=\frac{\pi^{\frac{d}{2}}r^{d}}{ \Gamma\left(\frac{d+2}{2}\right)}\\ b&=2^{d}a\\ \alpha&=Nb\end{split} \tag{21}\]
where \(d=2\) is the number of dimensions. We can find the proportion \(P\) of vertices in the largest cluster (connected component) of the graph based on \(\alpha\).
\[P\left(\alpha\right)=1-\frac{1}{\alpha}\sum_{n=1}^{|V^{H}|}\frac{n^{\left(n-1 \right)}}{n!}\left(\alpha e^{-\alpha}\right)^{n} \tag{22}\]
Let's consider two practical cases, where \(N=10\) and \(N=50\).
\[\begin{split} N&=10\qquad\quad N=50\\ \alpha&\approx 60.343\quad\quad\alpha\approx 301.715 \\ P(\alpha)&=1\qquad P(\alpha)=1\end{split} \tag{23}\]
In both of these cases, \(P(\alpha)=1\) means that all of the vertices in the graph are part of the largest cluster, i.e they are all connected. This means that we are guaranteed to find a \(k\)-path of length \(k\leq N\) in \(G_{c}^{H}\).
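Equation 22 transcribes directly into code (truncating the series, which converges rapidly for the values of \(\alpha\) above):

```python
import math

def giant_component_fraction(alpha, n_terms=50):
    # P(alpha) from Equation 22, truncated at n_terms
    s = sum(
        n ** (n - 1) / math.factorial(n) * (alpha * math.exp(-alpha)) ** n
        for n in range(1, n_terms + 1)
    )
    return 1 - s / alpha

print(giant_component_fraction(60.343))   # ~1.0 for N = 10
print(giant_component_fraction(301.715))  # ~1.0 for N = 50
```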
We can also find the _cluster coefficient_\(C\) of the graph, which is a measure of how "cliquish" the graph is. This number is independent of \(N\).
\[\begin{split} H\left(x\right)=\frac{1}{\sqrt{\pi}}\sum_{i=x}^{ \frac{d}{2}}\frac{\Gamma\left(i\right)}{\Gamma\left(i+\frac{1}{2}\right)} \left(\frac{3}{4}\right)^{\left(i+\frac{1}{2}\right)}\\ C=1-H(1)&\approx 0.587\end{split} \tag{24}\]
This means that given any two vertices \(i,j\in V\), if for some common vertex \(k\), \((i,k)\in E\) and \((j,k)\in E\), then \((i,j)\in E\) with probability \(\approx 0.587\).
This indicates that the subgraph of bandwidth class \(H\) edges exhibits cliquish behavior, and due to the high probability of cliquish edges, there is a large variety of \(k\)-paths in the subgraph.
## 6. Evaluation Methodology
### Algorithm Simulations
We simulated a set of randomly placed edge devices using a random complete graph. For each evaluation, we created a random complete graph by drawing the positions of the nodes from a uniform distribution with the range \((-B,-1)\cup(1,B)\) used in Equation 13. Between each pair of nodes, we calculated the edge weight according to Equation 13.
For each model, we ran Algorithm 3 with a certain number of nodes, number of bandwidth classes, and node memory capacity.
We used the following configuration to test the algorithm:
1. **Number of Nodes** - 5, 10, 15, 20, or 50 randomly placed edge devices.
2. **Number of Bandwidth Classes** - 2, 5, 8, 11, 14, 17, or 20 bandwidth classes, which provide granularity in how to classify the transfer sizes and edge bandwidths.
3. **Node Memory Capacity** - 64, 128, 256, or 512 MB of RAM for a compute node.
For each test, we used a different random communication graph generated using the procedure above. With each algorithm result, we then calculated the bottleneck latency according to Equation 3. The resulting bottleneck latency from each configuration of model, node capacity, number of nodes, and number of bandwidth classes was run 50 times and averaged.
We compare the resulting bottleneck latency of our algorithm to that of the following two algorithms:
1. **Random Algorithm** - Select a random node and a random partition that can be accommodated on that node.
2. **Joint-Optimization Algorithm** - Let \(Q\) and \(N\) represent the optimal set of partitions and optimal arrangement of nodes, respectively, chosen under this algorithm. For each node \(n\), do the following: 1. At each step choose the partition with the smallest transfer size that will fit within the node. Add this partition to the set of chosen partitions \(p\). 2. Starting at \(n\), find the neighbor in the communication graph whose edge \(e\) has the highest bandwidth, and add that to the path of chosen nodes \(c\). Then, find the highest bandwidth edge from \(e\), and so on. 3. Compare the bottleneck latency found with \(p\) and \(c\) to the smallest bottleneck found with all nodes \(n\) thus far, and update \(Q\) and \(N\) with \(p\) and \(c\) if the current bottleneck is smaller.
For each of these algorithms, we used the same configuration and methodology as above to find the bottleneck latency. These algorithms don't use bandwidth classes, so we didn't need to include that as part of the configuration.
We also ran 1000 tests of the InceptionResNetV2 model with a random communication graph with 64 MB node memory capacity, 50 compute nodes, and 20 bandwidth classes. We found that the model reached the optimal latency (as defined in Theorem 1) 54 times, i.e., in 5.4% of runs.
### Cluster Simulations
We tested our system by creating a virtual cluster on a single host machine. We generate configurations of communication graphs commonly found in the real world.
#### 6.2.1. Graph Configurations.
1. **Number of Nodes** - 5, 9, or 20 compute nodes
2. **Node Arrangement** - Ring, Grid, or Cluster shape
Figure 13 shows the different configurations that we test for a system of 9 compute nodes. In ChaosMesh, we use Equation 13 to define the bandwidth between a pair of nodes according to their distance apart in the graph.
#### 6.2.2. Test Environment Architecture.
To simulate an edge cluster on a single machine, we used Minikube [25]. Since Minikube has a minimum node memory requirement of 1800 MB, we artificially restrict Algorithm 3 to partition based
on 64 MB node capacity so we can mimic an edge device. To simulate different network bandwidths between nodes, we used ChaosMesh [8], which uses the TC-TBF [26] Linux algorithm. We've packaged our test environment as part of our code release.
Figure 14 depicts the architecture of the test environment. Each node gets a workflow of different _NetworkChaos_ rules. Each NetworkChaos rule specifies a TC-TBF bandwidth limit. Within TC-TBF, we control 3 parameters:
1. **Rate** - the bandwidth in kbps, derived from our communication graph configurations in Figure 13
2. **Limit** - max number of TCP packets that can be queued on the sender side. It's recommended that this number be set to \(2\times\) rate \(\times\) latency (see the arithmetic after this list). Given a max bandwidth of 18 Mbps from Equation 13 and a conservative estimate of \(2s\) for latency, the limit should be 10 MB.
3. **Buffer** - max number of tokens that can be sent instantaneously. If this number is too high, it will allow high burst speeds during the IPerf bandwidth jobs shown in Fig. 8 and result in an abnormally high bandwidth reading. If this number is too low, not enough tokens will be available at a time to send data, resulting in packet loss. In our testing, we found that a burst of 10 KB allows for accurate bandwidth readings with IPerf.

Figure 13: Communication graph configurations for a system with 9 compute nodes

Figure 14: Test Environment Architecture
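The limit arithmetic from item 2 above, for reference:

```python
rate_bps = 18e6   # max bandwidth from Equation 13, ~18 Mbps
latency_s = 2     # conservative latency estimate
limit_mb = 2 * rate_bps * latency_s / 8 / 1e6
print(limit_mb)   # 9.0 MB, rounded up to the 10 MB we configure
```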
Since TC-TBF only affects the egress bandwidth from each pod and uses the IP of the target pod to limit the bandwidth, we need to make sure that TCP packets exiting the pod have the correct source and destination IPs. Since a regular Kubernetes Service has its own cluster-wide IP and forwards packets to the pod it backs, the TC-TBF rules wouldn't take effect for any packets since they would have the destination IP of the _service_ which backs the next inference pod, rather than the IP of the pod itself. Therefore, our test environment uses Kubernetes _headless services_ to back each inference pod. Rather than returning the service's cluster-wide IP, a DNS query for a headless service will directly return the IP of the pod it backs, allowing TCP packets sent to another pod to have the correct destination IP.
## 7. Results
### Simulation Results
In Figure 15, the lack of bottleneck latency values for InceptionResNetV2 with 5 nodes and 64MB node capacity indicates that the model could not be partitioned with these physical constraints.
In Figure 15, the color map was only generated for the node capacities which were too small for the models to fit on a single device of that capacity. All models were able to fit on a single 512 MB device.
Figure 15. Color Map of Bottleneck Latency (s) based on Model, Node Capacity, Number of Nodes, and Number of Bandwidth Classes – Optimal Partitioning/Placement
For each model, the lowest bottleneck latency for a given node capacity comes from the combination of the largest number of bandwidth classes and the largest number of nodes. The lowest overall bottleneck latency comes with the highest node capacity. These results follow from the fact that a larger node capacity and a larger number of nodes allow the partitioning algorithm greater choice in selecting the smallest transfer sizes. Similarly, a high number of bandwidth classes allows the placement algorithm to better perform the \(k\)-path matching.
In Figure 16, the optimal algorithm produces 40x lower bottleneck latency than the random algorithm for MobileNetV2 on different node configurations. The difference is the smallest for ResNet50, with the optimal algorithm producing a 2x lower bottleneck latency. For this selection of models, the optimal algorithm reduces bottleneck latency by 10x on average. The models with the greatest variance in transfer size (see Section 5.2.1) will result in the largest difference in bottleneck latency between the optimal random algorithms. Overall, we see that the optimal algorithm produces a significant reduction in bottleneck latency compared to the random algorithm.
In Figure 17, the joint optimization algorithm tends to perform better for a smaller number of nodes. Since both algorithms use the same optimal partitioning logic, we can compare them solely on their differing placement logic. As the number of nodes increases, our \(k\)-path algorithm performs better. This makes sense, because the difference between the greedy strategy of the joint optimization algorithm and the matching strategy of our algorithm
| Node capacity | K-Path Matching | Joint-Optimization |
|---------------|-----------------|--------------------|
| 16 MB         | 1.45            | **1.12**           |
| 32 MB         | 1.19            | **1.07**           |
| 64 MB         | 1.09            | **1.08**           |

Table 2: Comparison of Approximation Ratios for K-Path Matching vs. Joint Optimization on Keras Pretrained Models
Figure 16: Comparison of Algorithm 3 with Random Algorithm - based on Model, Node Capacity, Number of Nodes
only becomes more apparent as the communication graph grows bigger and there are more options for node paths. In particular, for 50 nodes, our algorithm outperforms the joint optimization algorithm by 35%. We hypothesize that this trend would continue for more complex models which have a greater number of candidate partition points and a greater variance in transfer size, necessitating the \(k\)-path matching strategy to minimize bottleneck latency.
In Table 2, the joint-optimization algorithm outperforms the k-path matching algorithm, although the gap narrows as the node capacity increases.
### Test Environment Results
Table 3 compares our framework to the closely related work Couper and to our prior work DEFER. Since DEFER is simply a multi-threaded Python runtime, it has network and system IO fault-tolerance.
| Number of Nodes | Graph Shape | Inference Throughput (Hz) | End-to-End Latency (s) |
|-----------------|-------------|---------------------------|------------------------|
| 5               | Ring        | 0.072                     | 23.55                  |
| 5               | 1x5 Grid    | 0.113                     | 15.86                  |

Table 4: Throughput and End-to-End Latency based on Graph Shape
Figure 17: Comparison of Algorithm 3 with Joint Optimization - based on Model, Node Capacity, Number of Nodes
However, since Couper is also a container orchestration framework, it additionally has single-node fault-tolerance, but it cannot scale to multi-node fault-tolerance because it is designed to be deployed on clusters with few edge devices. Our framework, in contrast, is designed to run on large edge clusters and has been tested with clusters of up to 50 nodes. It only requires a collective cluster memory equivalent to the memory of the model partitions, plus a single node's worth of storage space for the NFS server that houses the partitions. In Table 4, we report the inference throughput and end-to-end latency for different cluster sizes and graph shapes. We see that a grid shape, due to the proximity of its nodes, outperforms the ring shape in both inference throughput and end-to-end latency. **This table will be updated in a future version to include results for 9 node and 20 node configurations.**
## 8. Conclusion
We have presented a framework to partition and place a model across a set of resource-constrained edge devices, with the goal of maximizing inference throughput. We leverage containerization to increase robustness, scalability, and fault-tolerance of the system.
We show that given certain characteristics about edge devices on a WiFi network, we can infer details about the communication graph and hardware requirements of most image and text models.
We find that we can reduce the bottleneck latency by 10x over a random algorithm and by 35% over a greedy joint partitioning-placement algorithm, although the joint-optimization algorithm outperforms our algorithm in most practical use-cases. Furthermore, we find empirically that for the set of representative models we tested, our algorithm produces results within 9.2% of the optimal bottleneck latency. In tests on our virtual cluster environment, we observed that our system has multi-node fault-tolerance as well as network and system IO fault-tolerance.
Our code is publicly available to the research community at [https://github.com/ANRGUSC/SEIFER](https://github.com/ANRGUSC/SEIFER).
### Future Work
With minor modifications, we could extend our framework to geographically-distributed edge devices for a truly scalable edge inference solution.
Our results from Section 5.1 suggest that with software changes, we could potentially run the average image model on a cluster of micro-controllers. We could use RiotOS (Shi et al., 2017) without any containerization and perform optimizations to run with limited device memory. Some devices we could potentially take advantage of are the Raspberry Pi Pico (Raspberry Pi Pico, 2018) and Arduino Uno (Aro et al., 2018).
Secondly, as model size grows, the model partitions and their weights may occupy more storage space than a node's capacity. In the future, we could explore a more complex NFS database that is sharded across multiple nodes.
Finally, our results from Section 5.1 suggest that we could run parallel streams of our inference framework on the same cluster. To do this, we would use different pod namespaces and cluster RBAC (role-based access control) for different instances of the system.
## 9. Acknowledgements
We would like to acknowledge the helpful input and pointers provided by Prof. Anil Vullikanti from the University of Virginia, particularly in directing us to the color-coding \(k\)-path algorithm.
|
2301.00738 | Training Differentially Private Graph Neural Networks with Random Walk
Sampling | Deep learning models are known to put the privacy of their training data at
risk, which poses challenges for their safe and ethical release to the public.
Differentially private stochastic gradient descent is the de facto standard for
training neural networks without leaking sensitive information about the
training data. However, applying it to models for graph-structured data poses a
novel challenge: unlike with i.i.d. data, sensitive information about a node in
a graph cannot only leak through its gradients, but also through the gradients
of all nodes within a larger neighborhood. In practice, this limits
privacy-preserving deep learning on graphs to very shallow graph neural
networks. We propose to solve this issue by training graph neural networks on
disjoint subgraphs of a given training graph. We develop three
random-walk-based methods for generating such disjoint subgraphs and perform a
careful analysis of the data-generating distributions to provide strong privacy
guarantees. Through extensive experiments, we show that our method greatly
outperforms the state-of-the-art baseline on three large graphs, and matches or
outperforms it on four smaller ones. | Morgane Ayle, Jan Schuchardt, Lukas Gosch, Daniel Zügner, Stephan Günnemann | 2023-01-02T16:14:50Z | http://arxiv.org/abs/2301.00738v1 | # Training Differentially Private Graph Neural Networks with Random Walk Sampling
###### Abstract
Deep learning models are known to put the privacy of their training data at risk, which poses challenges for their safe and ethical release to the public. Differentially private stochastic gradient descent is the de facto standard for training neural networks without leaking sensitive information about the training data. However, applying it to models for graph-structured data poses a novel challenge: unlike with i.i.d. data, sensitive information about a node in a graph cannot only leak through its gradients, but also through the gradients of all nodes within a larger neighborhood. In practice, this limits privacy-preserving deep learning on graphs to very shallow graph neural networks. We propose to solve this issue by training graph neural networks on disjoint subgraphs of a given training graph. We develop three random-walk-based methods for generating such disjoint subgraphs and perform a careful analysis of the data-generating distributions to provide strong privacy guarantees. Through extensive experiments, we show that our method greatly outperforms the state-of-the-art baseline on three large graphs, and matches or outperforms it on four smaller ones.
## 1 Introduction
The introduction of Graph Neural Networks (GNNs) has enabled the training of Deep Learning (DL) models on graph-structured data and for various tasks such as node classification, link prediction or graph classification. However, similar to DL models trained on image [1] or text data [2; 3], GNNs leak information about their training data [4; 5; 6], such as the features of a node, or which nodes are connected by an edge.
In this paper, we analyze the privacy of GNNs under the lens of Differential Privacy (DP) [7]. In particular, we ensure the privacy of all nodes' features in a graph. While DP-SGD [8] is the de facto standard for training DL models with DP, its transfer to GNNs is not straightforward given the non-i.i.d. nature of the data. Indeed, since an \(L\)-layer GNN typically uses the \(L\)-hop neighborhood of a node during the forward pass, the gradient of a node does not depend on that node alone, but on all nodes in its neighborhood. While some works [9; 10] have attempted to apply DP to GNNs, most of them focus on edge-level DP. Methods that can be applied to feature-level DP suffer from
loose privacy guarantees [9], or rely on custom GNN architectures [10]. We propose an adaptation of DP-SGD to train GNNs with feature-level DP while attenuating the aforementioned problem and preserving a high model utility. We experimentally demonstrate that our method can offer significantly stronger privacy guarantees than prior work, particularly on large graphs.
## 2 Background
### Differential privacy
**(\(\epsilon,\delta\))-DP.** Differential Privacy (DP) [7] is a notion of privacy that allows data analysts to extract useful statistics from a dataset, without leaking too much information about the samples in it. More formally, given two neighboring datasets \(D\) and \(D^{\prime}\) - denoted \(D\sim D^{\prime}\) - that differ by one sample (either by deleting, adding or modifying a sample), a randomized algorithm \(\mathcal{M}\) with co-domain \(Y\) is \((\epsilon,\delta)\)-DP if for all \(O\subseteq Y\), and for all \(D\sim D^{\prime}\), \(Pr[\mathcal{M}(D)\in O]\leq\exp(\epsilon)Pr[\mathcal{M}(D^{\prime})\in O]+\delta\). The parameters \(\epsilon\) and \(\delta\) are the privacy budget parameters: the smaller their values, the better the privacy guarantees.
**(\(\alpha,\gamma\))-RDP.** An alternative definition of DP is Renyi Differential Privacy (RDP) [11]. A randomized algorithm \(\mathcal{M}\) is said to be \(\gamma\)-RDP of order \(\alpha\) - or (\(\alpha\), \(\gamma\))-RDP - if for any \(D\sim D^{\prime}\) it holds that \(D_{\alpha}(\mathcal{M}(D),\mathcal{M}(D^{\prime}))\leq\gamma\), where \(D_{\alpha}=\frac{1}{\alpha-1}\log\mathbb{E}_{x\sim Q}\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\) is the Renyi divergence of order \(\alpha\), which measures the similarity of the distributions \(P\) and \(Q\). Note that if \(\mathcal{M}\) is (\(\alpha\), \(\gamma\))-RDP, then it is also (\(\epsilon\), \(\delta\))-DP for any \(0<\delta<1\), where \(\epsilon=f_{\text{RDP}\rightarrow\text{DP}}(\alpha,\gamma,\delta)=\gamma+\log(\frac{\alpha-1}{\alpha})-\frac{\log\delta+\log\alpha}{\alpha-1}\) [12]. We rely on \((\alpha,\gamma)\)-RDP in our analysis, but report our results in terms of \((\epsilon,\delta)\)-DP following prior work.
**The Gaussian mechanism.** Given an algorithm \(\mathcal{A}\) with real-valued output space \(\mathcal{A}:\mathbb{N}^{\mathcal{D}}\rightarrow\mathbb{R}^{d}\), the Gaussian mechanism privatizes the algorithm by adding Gaussian noise to the outputs of \(\mathcal{A}\), i.e. \(\mathcal{M}=\mathcal{G}_{\sigma}\left(\mathcal{A}\left(D\right)\right)=\mathcal{A}(D)+\mathcal{N}(0,\sigma^{2})\). Given that the \(\ell_{2}\) sensitivity of \(\mathcal{A}\) is \(\Delta_{2}\mathcal{A}=\max_{D\sim D^{\prime}}\|\mathcal{A}(D)-\mathcal{A}(D^{\prime})\|_{2}\), the mechanism satisfies (\(\alpha\), \(\gamma(\alpha)\))-RDP, with \(\gamma(\alpha)=\frac{\alpha(\Delta_{2}\mathcal{A})^{2}}{2\sigma^{2}}\). Intuitively, the larger the sensitivity of the function, the more noise must be added to obtain a small privacy budget, and the worse the final performance will be; a small sensitivity is therefore desirable.
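Both the RDP-to-DP conversion and the Gaussian-mechanism budget are simple closed forms; the following sketch is a direct transcription of the two formulas above (the numbers in the example are arbitrary):

```python
import math

def rdp_to_dp(alpha: float, gamma: float, delta: float) -> float:
    """Convert an (alpha, gamma)-RDP guarantee into (epsilon, delta)-DP [12]."""
    return gamma + math.log((alpha - 1) / alpha) \
        - (math.log(delta) + math.log(alpha)) / (alpha - 1)

def gaussian_rdp(alpha: float, sensitivity: float, sigma: float) -> float:
    """RDP budget gamma(alpha) of the Gaussian mechanism."""
    return alpha * sensitivity ** 2 / (2 * sigma ** 2)

# Example: sensitivity 2C with C = 1, noise sigma = 4, order alpha = 8.
gamma = gaussian_rdp(alpha=8, sensitivity=2.0, sigma=4.0)
print(rdp_to_dp(alpha=8, gamma=gamma, delta=1e-5))  # epsilon ~ 2.21
```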
**Amplification by sub-sampling.** A useful property of DP (and RDP) is that, given a mechanism \(S\) that samples a sub-set of the dataset \(D\), applying a private mechanism to \(S(D)\) leads to better privacy guarantees than applying it to the entire dataset \(D\). Intuitively, this is because sub-sampling introduces a non-zero chance that an added or modified sample is not processed by the randomized algorithm at all. Typically, \(S\) is assumed to be a Poisson or uniform sampling over the dataset. Poisson sampling is typically used when the neighboring datasets differ in size, while uniform sampling is used otherwise. In this paper, we rely on uniform sampling.
### Differential privacy in deep learning
Differentially Private Stochastic Gradient Descent (DP-SGD) [13; 14; 8] is the foundation of many works [9; 2; 15] that apply DP to deep learning. It privatizes the weights of a model with respect to the input dataset at every iteration of training, and then accumulates the privacy budget spent over all iterations. One private training iteration consists of batching a set of samples, computing the gradient on each sample independently, clipping the norm of each gradient vector to a maximum norm \(C\), computing the total gradient by summing the clipped gradients and adding calibrated Gaussian noise, and finally performing an update step. The clipping step bounds the sensitivity of the gradients to changes in the input. Then, assuming that two neighboring datasets \(D\) and \(D^{\prime}\) differ in the features of one sample, the sensitivity of the total gradient on a batch of i.i.d. samples is bounded by \(2C\). Through batching (i.e. sub-sampling the dataset using a sampling mechanism \(S\)), amplification by sub-sampling theorems [16; 17] can be exploited to get better privacy guarantees at every iteration. Finally, assuming each iteration \(t\) is \((\alpha,\gamma_{t})\)-RDP, the overall training is then \((\alpha,\sum_{t=0}^{T}\gamma_{t})\)-RDP [11], where \(T\) is the total number of iterations.
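A minimal PyTorch-style sketch of one such iteration is given below; the per-sample loop, the clipping constant \(C\), and the noise scale \(\sigma\) follow the description above, while the loop-based implementation (rather than vectorized per-sample gradients) is purely for clarity:

```python
import torch

def dp_sgd_step(model, batch, loss_fn, optimizer, C=1.0, sigma=1.0):
    """One DP-SGD iteration: per-sample gradients, clipped to norm C,
    summed, noised with N(0, sigma^2), then used for an update step."""
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in batch:                                    # per-sample gradients
        model.zero_grad()
        loss_fn(model(x), y).backward()
        norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
        scale = torch.clamp(C / (norm + 1e-12), max=1.0)  # clip to max norm C
        for s, p in zip(summed, model.parameters()):
            s.add_(p.grad * scale)
    for s, p in zip(summed, model.parameters()):
        p.grad = (s + sigma * torch.randn_like(s)) / len(batch)
    optimizer.step()
```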
### Graph neural networks
**Definition.** In the following, we define a graph as \(G=\{X,A\}\), where \(X\in\mathbb{R}^{N\times d}\) is the feature matrix in which each row corresponds to one node's feature vector, and \(A\in\{0,1\}^{N\times N}\) is the adjacency matrix in which \(A_{ij}\) is 1 if there exists an edge between nodes \(i\) and \(j\) and 0 otherwise. Note that we only consider undirected graphs, therefore \(A=A^{T}\). Graph Neural Networks (GNNs) are a class of models that learn a mapping \(f:G\to Z\in\mathbb{R}^{N\times d^{\prime}}\), where \(Z\) is an updated feature matrix of \(G\) that can be used for various downstream tasks. Each layer of a GNN typically consists of two steps: 1) in the aggregation step, information about the neighborhood of every node is gathered; 2) in the update step, the feature vector of every node is updated based on its current feature vector and the aggregated neighborhood information.
**The receptive field.** The receptive field of a node in a GNN is defined as the region in the input graph that influences the GNN's predictions for that specific node. For a GNN with \(L\) layers, the receptive field of a node \(v\) is the \(L\)-hop neighborhood of \(v\). Thus, for a graph with maximum node degree \(K\), the largest possible receptive field size of any node \(v\) is RF\((v)=\sum_{l=0}^{L}K^{l}=\frac{K^{L+1}-1}{K-1}\), i.e. the receptive field grows exponentially with the number of layers of the GNN.
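To see how quickly this bound grows, consider a two-line check (the values of \(K\) and \(L\) are arbitrary):

```python
def max_receptive_field(K: int, L: int) -> int:
    """Upper bound sum_{l=0}^{L} K^l = (K^{L+1} - 1) / (K - 1), for K > 1."""
    return (K ** (L + 1) - 1) // (K - 1)

print([max_receptive_field(10, L) for L in (1, 2, 3)])  # [11, 111, 1111]
```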
### Differential privacy in graph neural networks
Given that graphs contain two types of attributes - node features and edges - multiple levels of DP [18; 9; 10] can be considered: _edge-level_ DP, where the edges between nodes are private; _feature-level_ DP, where the features of nodes are private; and _node-level_ DP, where both the features and edges of nodes are private. In this work, we focus on feature-level DP using DP-SGD. Contrary to traditional i.i.d. datasets, samples in a graph (i.e. nodes) are not independent: changing the features of one node affects the gradients of all nodes within the receptive field of the modified node. In fact, the sensitivity of the total gradient on a graph is bounded by \(2\frac{K^{L+1}-1}{K-1}C\) (see Appendix A), which grows exponentially with the number of layers \(L\). Given that the Gaussian mechanism adds noise proportional to the sensitivity of the total gradient, this can lead to large amounts of noise being added during training, which in turn leads to poor final model utility.
## 3 Related work
In [19], a node-level differentially private GNN is trained by perturbing features and edges locally before sending them to a global server. This setup is called local DP and differs from our notion of DP, where a central learner is trusted with the real data. The authors in [15] propose to split the graph into disjoint sub-graphs using uniform node sampling, then treat each sub-graph as an independent sample. Note that, contrary to our method, which considers privacy at the individual node-feature level, their approach treats the entire graph as a datapoint to privatize, rather than providing privacy for the individual nodes in the graph. The method in [10] privatizes GNNs at both the node level and the edge level. However, their approach only applies to the GNN architecture they propose and not to arbitrary GNNs, unlike our proposed method. Furthermore, it does not resolve the issue of exponentially growing sensitivity in transductive learning scenarios. For a survey on DP on graph data, refer to [20]. Finally, the authors of [9] propose to reduce the sensitivity of a GNN's gradients by bounding the maximum degree \(K\) of the graph. However, this does not resolve the exponential growth with the number of layers; therefore, they still obtain loose privacy guarantees (\(\epsilon=20\)). Since this method is the closest to our setup, we compare our approach to theirs in our experiments.
## 4 Methodology
### Approach
We propose to adapt DP-SGD to the graph domain to ensure that the weights of a GNN are private with respect to the nodes' features, while overcoming the problem of requiring exponentially more noise with a growing network depth. In the following, we define two graphs \(G\) and \(G^{\prime}\) as neighbors if they share the same structure \(A\) and number of nodes \(N\) but differ in one row of the feature matrix \(X\) corresponding to the modified node \(\tilde{v}\). We want to train the GNN such that for all \(G\sim G^{\prime}\)
\(D_{\alpha}(\mathcal{M}(G),\mathcal{M}(G^{\prime}))\leq\gamma\), where \(\mathcal{M}\) is a randomized algorithm that returns the weights of the GNN.
To adapt DP-SGD to the graph domain, we propose to pre-process the graph into sets of independent subgraphs that do not affect each others' gradients, so that the sensitivity of the total gradient on any batch depends on the gradient of one subgraph only. We summarize our training procedure in Algorithm 1. More precisely, we pre-process the graph into a set of \(M\) disjoint subgraphs \(G_{S}=\{s_{1},s_{2},\ldots,s_{M}\}\), i.e. subgraphs that do not have any nodes in common, using sampling method \(S\). Each subgraph \(s_{i}\) consists of two components: 1) one training node \(v_{i}\), and 2) a set of neighbors \(\mathscr{N}(v_{i})\) that is used for the aggregation step of the GNN. At training time, for every iteration \(t\), we create a batch by sampling \(m\) subgraphs uniformly at random from the set of subgraphs \(G_{S}\). We then compute the gradients \(\nabla_{\mathbf{w}_{t}}\mathcal{L}(v_{j},\mathscr{N}(v_{j}))\) on all training nodes and clip the norm of each to a value C. We compute the total gradient by summing individual gradients and adding Gaussian noise. Finally, we update the weights.
Due to the disjointness of subgraphs, changing one node's features - whether it is a training node or a neighbor - will affect at most one subgraph (i.e. sample) in the batch, which reduces the upper bound on the sensitivity of the total gradient to \(2C\). Since we sample subgraphs uniformly at random, we can leverage the strong amplification by sub-sampling theorem [17], i.e. account for the possibility of the gradient not being affected if the modified node \(\tilde{v}\) is not part of the batch.
We generate these disjoint subgraphs via random walk sampling, which is an effective way of training GNNs [21]. We choose random walk sampling, since it ensures that nodes form a connected subgraph of a training node's neighborhood, while limiting the number of nodes being sampled from that neighborhood (i.e. from the receptive field). In the following, we propose three different random-walk-based sampling methods, which we later compare in our experimental results. Furthermore, we derive for each sampling method a tight upper bound on the probability of sampling the modified node \(\tilde{v}\) in a batch, which is required for applying the amplification by subsampling theorem in [17].
### Sampling methods
Our three sampling methods consist of pre-processing the graph into a set of \(M\) disjoint subgraphs \(G_{S}=\{s_{1},s_{2},\ldots,s_{M}\}\), and then generating a batch \(B\subseteq G_{S}\) by sampling \(m\) subgraphs uniformly at random. An overview of our general approach is depicted in Figure 1. Given a graph with \(M\) generated disjoint subgraphs, the true probability of sampling node \(\tilde{v}\) is \(P[\tilde{v}]=\frac{1}{M}\), since we know that a node is in exactly one of the \(M\) subgraphs. However, to ensure differential privacy, we require a bound that holds for all possible graphs and any run of the sampling procedure. Thus, we use the upper bound \(P[\tilde{v}]=\frac{1}{M}\leq\frac{1}{M_{\min}}\) where \(M_{\min}\) is the minimum number of subgraphs that can be generated in any graph of \(N\) nodes. Then, the probability of sampling \(\tilde{v}\) in a batch of \(m\) subgraphs using sampling mechanism \(S\) is at most \(P_{S}[\tilde{v}]\leq\frac{m}{M_{\min}}\).
Figure 1: Our general sampling method. Starting with a graph, we generate subgraphs by first sampling a root node (depicted in red), and then sampling one or more random walks starting from the root node. Every node appears in exactly one subgraph. Before every iteration, we batch \(m\) many subgraphs, where \(m=2\) in this case. Root nodes are used as training nodes, while remaining nodes are used for aggregation in the GNN only.
**Disjoint random walks.** The first sampling method we propose is called Disjoint Random Walks (DRW). We pre-process the graph once before training and then generate batches at every iteration using the same set of subgraphs. Each subgraph consists of one random walk of length \(L\) (refer to Appendix B for pseudo-code). A random walk of length \(L\) contains at most \(L+1\) nodes, and generating random walks that all have maximal length results in the minimum number of random walks, since a node can only appear in one random walk. Therefore, we get \(M_{\min}=\lceil\frac{N}{L+1}\rceil\) and \(P[\tilde{v}]\leq\frac{1}{\lceil\frac{N}{L+1}\rceil}\). Finally, the upper bound on the probability of sampling a node \(\tilde{v}\) is \(P_{\text{DRW}}[\tilde{v}]\leq\frac{m}{\lceil\frac{N}{L+1}\rceil}\).
**Disjoint random walks with restarts.** To create better subgraphs that contain more nodes for aggregation, we also propose Disjoint Random Walks with Restarts (DRW-R). Similarly to DRW, this sampling method generates subgraphs once before training using random walks, but instead of sampling one random walk per training node we sample \(R\) of them (refer to Appendix B for pseudo-code). Given a random walk length of \(L\) and \(R\) restarts, the minimum number of subgraphs is \(M_{\min}=\lceil\frac{N}{1+R\times L}\rceil\), where \(1+R\times L\) is the maximum size of one subgraph when all random walks have length \(L\); the probability of sampling node \(\tilde{v}\) in a batch of size \(m\) is therefore \(P_{\text{DRW-R}}[\tilde{v}]\leq\frac{m}{\lceil\frac{N}{1+R\times L}\rceil}\).
**Disjoint random walks with dynamic re-sampling.** Finally, we propose a third sampling method in which we pre-process the graph into disjoint subgraphs every \(i\)-th iteration instead of once before training, where \(i\) is a hyper-parameter chosen based on the cost of the sampling procedure on each dataset. This increases the diversity of subgraphs used for training and prevents overfitting on the subgraphs generated in one run of the sampling procedure. We call this procedure DRW-D, where D stands for Dynamic re-sampling of random walks. The probability of sampling node \(\tilde{v}\) is the same as in DRW, namely \(P_{\text{DRW-D}}[\tilde{v}]=P_{\text{DRW}}[\tilde{v}]\leq\frac{m}{\lceil\frac{N}{L+1}\rceil}\). Note that this method simply re-runs the subgraph generation process of DRW at every \(i\)-th iteration instead of once before training, which is reflected in Algorithm 1.
```
Input: Graph G = {V, E}, sampling method S, loss function L,
       initial model weights w_0, noise standard deviation sigma,
       gradient clipping norm C, number of iterations T,
       re-sampling frequency i (used by DRW-D only)

G_S = S(G)                                  # generate subgraphs from G using S
for t in [0, T):
    if t mod i == 0 and S == DRW-D:
        G_S = S(G)                          # dynamically re-sample subgraphs
    sample m subgraphs uniformly at random from G_S to form batch B
    for each subgraph s_j in B:
        compute grad_j = d/dw_t L(v_j, N(v_j))
        g_t(v_j) = clip(grad_j, C)          # clip individual gradients in B
    g_t(B) = (1 / |B|) * (sum_{s_j in B} g_t(v_j) + Normal(0, sigma^2))
    w_{t+1} = update(w_t, g_t(B))           # optimizer-specific update step
end for
```
**Algorithm 1** DP-SGD with random walk sampling
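The DRW pre-processing step (whose pseudo-code the paper defers to Appendix B) can be sketched as follows. The uniform neighbor choice and the early stop at dead ends are our assumptions about the walk semantics; the function only illustrates how disjointness is enforced.

```python
import random

def disjoint_random_walks(adj, L, seed=0):
    """Partition the nodes of a graph (adjacency list `adj`) into disjoint
    subgraphs, each being one random walk of length <= L (DRW sampling)."""
    rng = random.Random(seed)
    unused = set(range(len(adj)))
    subgraphs = []
    while unused:
        root = rng.choice(tuple(unused))    # training node of this subgraph
        unused.discard(root)
        walk, cur = [root], root
        for _ in range(L):
            nbrs = [v for v in adj[cur] if v in unused]
            if not nbrs:                    # dead end: walk shorter than L
                break
            cur = rng.choice(nbrs)
            unused.discard(cur)
            walk.append(cur)
        subgraphs.append(walk)
    return subgraphs                        # at least ceil(N / (L + 1)) subgraphs
```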
## 5 Experimental results
**Experimental setup.** We report our results on seven datasets, in both the transductive and the inductive settings. The dataset sizes in terms of total nodes range from small (Cora [22], Citeseer [22]) to medium (PPI [21], Pubmed [22]) to large (Flickr [21], Arxiv [21], Reddit [21]), or in number of training nodes from small (Pubmed, Citeseer, Cora) to medium (PPI) to large (Flickr, Arxiv, Reddit). We report the exact number of nodes as well as some additional dataset characteristics in Appendix C. We focus on the node classification task, and report our results in terms of F1 Micro score, a metric equivalent to accuracy except on PPI, which is a multi-label classification task. Following prior work, we report our privacy budget using \(\epsilon\) and a fixed \(\delta\) per dataset (see Appendix C). Given a target \(\epsilon\), we keep training while tracking the \((\alpha,\gamma_{t})\) privacy budget being spent until we reach \(\epsilon=f_{\text{RDP}\rightarrow\text{DP}}(\alpha,\sum_{t=0}^{T^{\prime}}\gamma_{t},\delta)\) at iteration \(T^{\prime}\).
We compare our proposed methodology with each sampling method to three baselines: 1) A basic GCN trained with random walk sampling; 2) A basic MLP trained with uniform node sampling; and 3) The method proposed in [9] which we call FDP for Feature-level DP. Note that while they train their models up to an \(\epsilon\) of 20, we only train them until \(\epsilon=8\), since a very large \(\epsilon\) does not have much value in terms of privacy.
**Discussion.** Table 1 summarizes our results. A GCN trained without DP always outperforms the ones trained with DP, which is expected since clipping gradients and especially adding Gaussian noise decrease the utility of the final model. However, in some cases our method can almost match the utility of the basic GCN, whereas the FDP baseline struggles. For example, DRW sampling on Flickr reaches up to 48.7% accuracy - corresponding to 95% of the baseline GCN's performance - whereas FDP reaches only 42.5% accuracy - corresponding to 83% of the baseline GCN's performance. Similarly, our method achieves 87% of the GCN's performance on the challenging Reddit dataset, while FDP reaches only 60%. This shows that our sub-sampling approach is effective at mitigating the exponential growth of the receptive field while approaching the utility of the non-DP GCN baseline, which makes our method attractive for real-world applications. That being said, our method uses fewer training nodes per iteration than are available, even when computational complexity is not an issue (i.e., on small graphs). The effect of this reduction in training samples is exacerbated on small graphs that do not require batching in non-DP training, which leads to our method performing on par with the FDP baseline on small datasets.
**Comparison with variable privacy budget.** Finally, in Figure 2 we expand on our previous results by reporting the accuracy at various \(\epsilon\) checkpoints during training. We report the best results our method achieved across all sampling methods and compare them to the FDP baseline. On all datasets, our method largely outperforms FDP across multiple epsilon values. Moreover, FDP cannot achieve an epsilon lower than 2, whereas our method can, while sometimes outperforming FDP at higher privacy budgets.
| Method | Layers | Width | Cora | CiteSeer | PPI | PubMed | Flickr | Arxiv | Reddit |
|---|---|---|---|---|---|---|---|---|---|
| GCN (non-DP) | 1 | - | 69.8 | 59.5 | 46.2 | 68.7 | 45.6 | 59.7 | 92.5 |
| GCN (non-DP) | 2 | 256 | 77.3 | 63.7 | 58.9 | 72.9 | 51.3 | 69.1 | 94.7 |
| GCN (non-DP) | 2 | 512 | 76.6 | 62.2 | 60.7 | 72.9 | 51.3 | 69.5 | 94.7 |
| MLP (non-DP) | 1 | - | 43.0 | 37.6 | 45.2 | 61.3 | 45.7 | 52.3 | 67.7 |
| MLP (non-DP) | 2 | 256 | 47.3 | 36.1 | 52.1 | 61.5 | 36.2 | 52.6 | 69.8 |
| MLP (non-DP) | 2 | 512 | 44.8 | 39.3 | 53.6 | 63.3 | 38.4 | 52.0 | 69.7 |
| FDP (DP) | 1 | - | 17.1 | 17.5 | 38.4 | 39.6 | 33.6 | 43.8 | 56.7 |
| FDP (DP) | 2 | 256 | 17.6 | 21.5 | **40.7** | 41.4 | 42.5 | 31.9 | 43.7 |
| FDP (DP) | 2 | 512 | 23.2 | **22.1** | 40.0 | 41.2 | 42.4 | 30.2 | 42.3 |
| DRW (ours, DP) | 1 | - | 19.9 | 20.6 | 40.2 | **41.7** | 42.1 | 59.2 | 81.4 |
| DRW (ours, DP) | 2 | 256 | 17.2 | 20.9 | 38.7 | 40.3 | **48.7** | 59.6 | 80.2 |
| DRW (ours, DP) | 2 | 512 | 24.9 | 21.3 | 37.9 | 41.1 | 47.9 | 59.2 | 81.8 |
| DRW-D (ours, DP) | 1 | - | 19.8 | 20.6 | 40.1 | **41.7** | 42.2 | 59.2 | 81.4 |
| DRW-D (ours, DP) | 2 | 256 | 17.2 | 21.3 | 38.6 | 40.2 | 48.5 | **59.7** | 80.2 |
| DRW-D (ours, DP) | 2 | 512 | **25.0** | 21.7 | 37.9 | 41.2 | 47.8 | 59.3 | 81.5 |
| DRW-R (ours, DP) | 1 | - | 18.3 | 19.2 | 40.0 | 40.3 | 42.3 | 59.1 | 82.0 |
| DRW-R (ours, DP) | 2 | 256 | 17.3 | 20.7 | 38.2 | 40.4 | 48.3 | **59.7** | 81.0 |
| DRW-R (ours, DP) | 2 | 512 | 24.5 | 21.3 | 36.9 | 40.4 | 48.5 | 59.4 | **82.2** |

Table 1: Comparison between the F1 Micro score (%) achieved by a basic GCN and MLP, the FDP baseline, and our proposed method with multiple sampling methods. All DP methods are trained with a target budget of \(\epsilon\leq 8\).
## 6 Conclusion
We proposed a novel way of training differentially private graph neural networks. Since graphs consist of inter-connected nodes that influence each other's gradients during training, naively adapting traditional DP methods to graph neural networks can result in unnecessarily large amounts of noise being added to the model during training, which in turn leads to poor utility of the model. We proposed an adapted version of DP-SGD that uses random-walk based sub-sampling to overcome this problem and introduced three sampling methods that generate disjoint subgraphs. For each sampling method, we derived an upper bound on the probability of sampling a modified node in a batch to apply the amplification by sub-sampling theorem and obtain tighter privacy guarantees. Our method achieves a better privacy-utility trade-off compared to the state-of-the-art baseline FDP across multiple datasets, especially for large datasets. A necessary future work direction in this field is to attempt to solve the performance issue on small datasets, which is especially exacerbated on GNNs. For example, pre-training the models on public datasets [2] or using variable signal-to-noise ratios during training are ways of improving the utility in DP. Moreover, different sampling methods that do not necessarily focus on random walks can be explored.
|
2306.10201 | Stretched sinograms for limited-angle tomographic reconstruction with
neural networks | We present a direct method for limited angle tomographic reconstruction using
convolutional networks. The key to our method is to first stretch every tilt
view in the direction perpendicular to the tilt axis by the secant of the tilt
angle. These stretched views are then fed into a 2-D U-Net which directly
outputs the 3-D reconstruction. We train our networks by minimizing the mean
squared error between the network's generated reconstruction and a ground truth
3-D volume. To demonstrate and evaluate our method, we synthesize tilt views
from a 3-D image of fly brain tissue acquired with Focused Ion Beam Scanning
Electron Microscopy. We compare our method to using a U-Net to directly
reconstruct the unstretched tilt views and show that this simple stretching
procedure leads to significantly better reconstructions. We also compare to
using a network to clean up reconstructions generated by backprojection and
filtered backprojection, and find that this simple stretching procedure also
gives lower mean squared error on previously unseen images. | Kyle Luther, Sebastian Seung | 2023-06-16T22:35:12Z | http://arxiv.org/abs/2306.10201v1 | # Stretched sinograms for limited-angle tomographic reconstruction with neural networks
###### Abstract
We present a direct method for limited angle tomographic reconstruction using convolutional networks. The key to our method is to first stretch every tilt view in the direction perpendicular to the tilt axis by the secant of the tilt angle. These stretched views are then fed into a 2-D U-Net which directly outputs the 3-D reconstruction. We train our networks by minimizing the mean squared error between the network's generated reconstruction and a ground truth 3-D volume. To demonstrate and evaluate our method, we synthesize tilt views from a 3-D image of fly brain tissue acquired with Focused Ion Beam Scanning Electron Microscopy. We compare our method to using a U-Net to directly reconstruct the unstretched tilt views and show that this simple stretching procedure leads to significantly better reconstructions. We also compare to using a network to clean up reconstructions generated by backprojection and filtered backprojection, and find that this simple stretching procedure also gives lower mean squared error on previously unseen images.
## 1 Introduction
Electron tomography is an imaging technique that uses transmission electron microscope images acquired from multiple viewpoints to reconstruct the 3-D structure of an object [5]. Linear reconstruction techniques like filtered backprojection (FBP) or iterative reconstruction methods (SART,SIRT) are widely used but display strong artifacts in the presence of limited tilt angles, highly noisy inputs, and complex nonlinear misalignments, all of which are commonplace in electron tomography workflows [16, 33, 15].
**Problem setting** Our goal is to use deep learning to perform reconstruction of electron tomography tilt series. Ideally the implicit prior contained in a neural network will enable higher quality reconstructions using fewer tilts than would be required by classical reconstruction methods like filtered backprojection.
**Related methods** The related fields of X-Ray and positron emission tomography have seen a proliferation of deep learning methods to perform tomographic reconstruction. [28] categorize and review several distinct ways that neural networks have been used to aid in tomographic reconstruction of X-Ray CT data and we review the most relevant methods below.
_Domain transform_ methods use a neural network \(f\) to map sinogram data \(\mathbf{y}\) to a reconstruction \(\hat{\mathbf{x}}=f(\mathbf{y})\). Perhaps the simplest approach from this category is the work of [7] which uses a convolutional network to directly map sinograms to reconstructions. A related approach was used by [29] which used hybrid fully connected/convolutional networks. [34] used fully connected layers and added an additional manifold encoding-decoding strategy.
Notably, these approaches have not attracted the same level of interest as the competing methods we review below, and [31] argue that the generalization performance of domain transform methods has been lackluster, in addition to their typically large computational and memory requirements.
_Image domain_ methods apply neural networks to the output of a classic reconstruction method, typically either backprojection or filtered backprojection. These methods can be thought of as neural post-processing. [10, 2, 20] demonstrate that deep networks can improve the quality of reconstructions that have been generated from low-dose (noisy) views. [9] show that artifacts caused by reconstruction from sparse viewpoints can be removed, and [1] show that artifacts caused by a limited-angle set of views can be removed using deep networks to post-process reconstructions. [32] show that post-processing can even be used to remove metal artifacts. A hybrid method was used in [31, 12] where the first step was to map a sinogram of size \(n_{\mathrm{view}}\times\mathrm{width}\) to a sequence of individual backprojections of size \(n_{\mathrm{view}}\times\mathrm{height}\times\mathrm{width}\).
_Sensor domain_ methods instead apply neural networks to the raw sinograms and are typically only used as a pre-processing step so that classical reconstruction methods are applied to the network outputs. Networks have been shown to remove artifacts [3, 6]. Networks have also been used to inpaint missing views from limited-angle sinograms [21].
_Dual domain_ methods combine both _sensor_ and _image_ domain approaches and apply networks both before and after backprojection [13, 27, 26].
_Dictionary-based reconstruction of affine-aligned tilt views_ On a seemingly unrelated front, dictionary based reconstruction with sparsity priors was applied to electron tomographic reconstruction [8, 25]. Critically, these works first performed an _affine_ alignment to tilt views before reconstruction. Using an affine alignment is non-standard as it stretches every view instead of just translating them. However, both works showed impressive reconstructions when combining the affine-stretched views and a translationally invariant dictionary.
**What is missing?** With the exception of [7], prior neural methods either rely on fully connected architectures [29, 34] or on backprojection to actually perform the reconstruction. Our goal is to generate quality reconstructions with the widely used U-Net architecture [18] while avoiding the use of backprojection.
**Our contributions** We propose a simple backprojection-free pre-processing scheme that significantly improves the ability of a convolutional network to perform tomographic reconstructions. We simply stretch each view in the sinogram by \(\sec\theta\) along the direction perpendicular to the tilt axis. Our method is applicable to parallel beam, limited angle geometries that are typical in electron tomography.
## 2 Method: neural reconstruction of stretched sinograms
Our training and reconstruction procedures are shown in Fig. 1.
### Stretched sinograms
**Notation and assumptions** We assume we are working with a parallel beam geometry and that we have \(n_{\mathrm{view}}\) tilt views of size \(n_{h}\times n_{w}\) over a limited range of angles \((-\theta,+\theta)\) where \(\theta<90^{\circ}\). We denote the 3-D tensor of 2-D tilt views by \(\mathbf{y}\in\mathbb{R}^{n_{\mathrm{view}}\times n_{h}\times n_{w}}\). We assume these views have been log-normalized so that they are related to the object density \(\mathbf{x}\in\mathbb{R}^{n_{d}\times n_{h}\times n_{w}}\) via the Radon transform:
\[\mathbf{y}=P\mathbf{x} \tag{1}\]
**Generating stretched views** To generate the stretched sinogram, we simply stretch every tilt view by \(\sec\theta\) in the direction perpendicular to the tilt axis. This stretching is performed so that the image size is preserved, meaning the stretched sinograms are also \(n_{h}\times n_{w}\) in size. Bilinear interpolation is used to perform the stretching and the stretching is done treating the center of each image as the origin.
If the tilt axis is parallel to the \(\hat{y}\) axis, this means we stretch along the \(\hat{x}\)-axis of each 2D tilt view. We now write the formula for tilt stretching in this case.
Figure 1: Method overview **Inference**: we first stretch every tilt view by \(\sec\theta\) in the direction perpendicular to the tilt axis, then perform reconstruction with a U-Net **Training**: We train our networks using simulated tilt views and mean squared error (MSE) between network outputs and ground truth volumes
Let \(i,j\) index the rows and columns of a tilt image. We use natural coordinates, so \((-1,-1)\) refers to the upper-left and \((+1,+1)\) to the bottom-right corner of the image. Bilinear interpolation is used to evaluate at fractional pixel locations.
\[y^{\rm stretched}(\theta,i,j)=y\left(\theta,i,j\sec\theta\right) \tag{2}\]
This simple stretching procedure can be extended to stacks with arbitrary tilt axes (e.g. dual tilt setups [14]); the formula in Eq. 2 would need to change to stretch along a linear combination of \(i,j\). We can write the stretching procedure in matrix notation, treating bilinear interpolation as a sparse linear operator \(S\):
\[{\bf y}^{\rm stretched}=S{\bf y} \tag{3}\]
This operation maps an \(n_{\rm view}\times n_{h}\times n_{w}\) tensor of raw views to an \(n_{\rm view}\times n_{h}\times n_{w}\) tensor of stretched views.
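A literal implementation of Eq. 2 is straightforward with a standard resampling routine; the sketch below uses SciPy's `map_coordinates` for the bilinear interpolation, with nearest-value boundary handling as an assumption (the paper does not specify out-of-range behavior):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def stretch_sinogram(views, angles_deg):
    """Apply Eq. 2: resample each (n_h, n_w) tilt view along x by sec(theta),
    keeping the image size fixed and treating the image center as the origin."""
    n_view, n_h, n_w = views.shape
    out = np.empty_like(views)
    rows = np.arange(n_h)
    cols = np.arange(n_w)
    cj = (n_w - 1) / 2.0                     # x-origin at the image center
    for k, theta in enumerate(np.deg2rad(angles_deg)):
        # natural coordinate j -> j * sec(theta), converted back to pixels
        src_cols = (cols - cj) / np.cos(theta) + cj
        rr, cc = np.meshgrid(rows, src_cols, indexing="ij")
        out[k] = map_coordinates(views[k], [rr, cc], order=1, mode="nearest")
    return out
```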
**Visualizing the stretched views** In Fig. 2 we compare a simulated sinogram and its corresponding stretched sinogram. We simulate the sinogram by applying the Radon transform to a \(45\times 512\times 512\) volume of FIB-SEM data of fly brain tissue [30; 19] which is further described in Section 3. Specifically we compute 45 projections uniformly spaced over the angles \((-45^{\circ},+45^{\circ})\) with a tilt axis in the \(\hat{y}\) direction. We show the \(x\theta\) view for \(y=1\) and the \(xy\) view for \(\theta=-45^{\circ}\) for both the sinogram and stretched sinogram.
Figure 2: Sinogram vs stretched sinogram representations. The sinogram is generated with a vertical tilt axis and with tilt angles spaced at \(2^{\circ}\) increments between \(-45^{\circ}\) and \(+45^{\circ}\). The stretched sinogram is obtained by stretching each \(xy\) view of the sinogram by \(\sec\theta\) (Eq. 2). In the top row we show an \(x\theta\)-slice of the raw/stretched sinograms. In the bottom row we show \(xy\) slices corresponding to \(\theta=-45^{\circ}\).
### 3-D Reconstruction with a 2-D U-Net
For simplicity we stick closely to the original 2-D U-Net architecture proposed in [18]. More details are provided in the Appendix. We treat the stretched sinogram with \(K\) tilt views as a 2-D image with \(K\) input channels. We generate a 3-D reconstruction from the 2-D U-Net by treating the output channel dimension as the depth dimension of the output volume. To improve training speed we use Instance Normalization layers before every ReLU layer [22]. We found that the more popular Batch Normalization operation gave significantly higher test time error, likely due to the fact that we train our U-Nets with a batch size of 1.
### Supervised network training
We assume we have paired examples of tilts \(\mathbf{y}\) and volumes \(\mathbf{x}\). We train our networks using stochastic gradient descent applied to the mean squared error (MSE) objective between these volumes and network reconstructions. Specifically, we generate the network reconstruction \(\hat{\mathbf{x}}=f_{\omega}(S\mathbf{y})\) using a U-Net applied to the stretched tilt views. We then compute and backpropagate through the MSE:
\[l(\omega)=\frac{1}{n_{\mathrm{pixel}}}\left\|\mathbf{x}-f_{\omega}(S\mathbf{y})\right\|^{2} \tag{4}\]
where \(n_{pixel}=n_{d}\times n_{h}\times n_{w}\) is the number of pixels in the output reconstructions.
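Putting Sections 2.1-2.3 together, one training step might look like the sketch below, where `unet` is any 2-D U-Net with 8 input and 32 output channels (matching the shapes used in Section 3) and `stretch` applies the operator \(S\); both are assumed to be defined elsewhere:

```python
import torch.nn.functional as F

def train_step(unet, optimizer, tilts, volume, stretch):
    """tilts: (1, 8, 512, 512) raw views; volume: (1, 32, 512, 512) target,
    with the 32 output channels read as the depth dimension n_d."""
    optimizer.zero_grad()
    recon = unet(stretch(tilts))        # x_hat = f_w(S y)
    loss = F.mse_loss(recon, volume)    # Eq. (4): mean squared error
    loss.backward()
    optimizer.step()
    return loss.item()
```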
## 3 Experiments with simulated data
### Data and simulated tilt views
We download three \(1k\times 1k\times 1k\) voxel volumes to use as the train, validation, and test sets from the 3D isotropic dataset imaged at \(8\times 8\times 8\) nm\({}^{3}\) resolution by [30]. This dataset contains images of a fruit-fly brain imaged with a focused ion
Figure 3: \(128\times 128\) sized crop from an \(xy\) view of a test set reconstruction. Directly using a U-Net to perform reconstruction or applying a U-Net to clean up FBP causes a neuron boundary to disappear.
beam milling and scanning electron microscope imaging (FIB-SEM) technique. Slices from this dataset are shown in the Appendix.
We sample patches of size \(32\times 512\times 512\) voxels at arbitrary positions from the \(1k\times 1k\times 1k\) voxel training volume. These patches are normalized to be zero-mean, unit-variance. From these patches, simulated tilt series consisting of 8 tilt views ranging uniformly over \((-60^{\circ},+60^{\circ})\) are generated using the ASTRA toolbox [24, 23]. The input shape to our networks is therefore \(8\times 512\times 512\).
For data augmentation we add Gaussian noise with standard deviation of 0.3 times the standard deviation of the projections and randomly shift a chosen number \(n\) of the 8 tilt views in both \(x\) and \(y\) directions by an integer number of pixels between \([-3,+3]\), to simulate misalignments that can occur in real tomographic tilt series. We compare results when we misalign \(n=0,2,4,8\) of the sections. Each tilt view is then normalized to zero mean and unit variance after augmentation.
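A sketch of these two augmentations is shown below; the noise scale of 0.3 and the \(\pm 3\)-pixel shift range come from the text, while details such as the circular shift used to keep array shapes fixed are implementation assumptions:

```python
import numpy as np

def augment_tilts(tilts, n_shift, rng, noise_scale=0.3, max_shift=3):
    """tilts: (8, H, W) array. Add Gaussian noise and misalign n_shift views."""
    out = tilts + rng.normal(0.0, noise_scale * tilts.std(), tilts.shape)
    for k in rng.choice(len(out), size=n_shift, replace=False):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        out[k] = np.roll(out[k], (dy, dx), axis=(0, 1))  # integer-pixel shift
    # normalize each view to zero mean and unit variance
    mean = out.mean(axis=(1, 2), keepdims=True)
    std = out.std(axis=(1, 2), keepdims=True)
    return (out - mean) / std

# rng = np.random.default_rng(0); aug = augment_tilts(tilts, n_shift=4, rng=rng)
```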
### Comparisons
We compare our stretched-sinogram representation to five alternatives: (1) using the raw sinograms as input to a U-Net, (2) using backprojection as input to a U-Net, (3) using filtered backprojection as input to a U-Net (the approach of [9]), (4) backprojection without U-Net post-processing, and (5) filtered backprojection without U-Net post-processing. We use the ASTRA Toolbox to implement both the backprojection and filtered backprojection operations. For filtered backprojection, we use the default _ram-lak_ filter, which implements a ramp function in frequency space. In the case of backprojection and filtered backprojection, the depth dimension is treated as the channel index to the 2-D U-Nets, and the channel dimensionality is 32 instead of 8.
### Training details
The forward and backward projections were implemented on a CPU using the ASTRA toolbox and these were distributed across 64 cpus to speed up inference time. We use a single ASUS RTX3090 Turbo 24GB GPU for inference and backpropagation. Networks were trained using PyTorch [17]. We use the Adam optimizer to train our U-Nets [11]. We trained all networks for up to 72 hours with a step-size of 0.001. Learning curves for all configurations are shown in the Appendix. We used early stopping computed on the validation set so that we perform test-set evaluation using the network with lowest validation set error.
### Results
In Fig. 3 we show a qualitative comparison between various methods for one \(xy\) slice of the reconstruction on the test set (in the setting where 4/8 of the tilt views were misaligned). We evaluate our networks on a \(992\times 1000\times 1000\) test set volume extracted from a neighboring location of FIB-SEM images of fly brain
tissue. This volume is broken into 124 (almost) non-overlapping patches3 of size \(32\times 512\times 512\) and tilt views are simulated and subsequently reconstructed.
Footnote 3: There is a small sliver of width 24 pixels where the patches overlap due to the fact that we are extracting \(512\times 512\) patches from \(1000\times 1000\) sections
**Vary level of noise** In this setup, we use networks trained on non-misaligned data and do not misalign tilts at inference time either; instead, we vary the level of noise at inference time. As we add noise, the MSE of plain filtered backprojection increases dramatically. All neural methods improve over both classical methods. Not stretching the inputs before inference with a U-Net gives nearly 50% higher mean squared error than our method. Stretching provides the lowest MSE, with _bp+U-Net_ following closely behind. The results are shown in Tab. 1.
**Vary level of misalignments**
In this setup, we train and test networks with varying numbers of misaligned tilts. All neural methods improve over both classical methods. We observe that stretching the tilts before inference gives much lower MSE than not stretching. This time, we observe a larger gap between _stretch+U-Net_ and _bp+U-Net_ or _fbp+U-Net_ as we increase the number of misaligned sections. The results are shown in Tab. 2.
| noise | direct U-Net | stretch+U-Net | fbp+U-Net | bp+U-Net | fbp | bp |
|---|---|---|---|---|---|---|
| 0.0 | 0.145 ± 0.001 | **0.092 ± 0.001** | 0.120 ± 0.001 | 0.097 ± 0.001 | 0.389 ± 0.003 | 0.451 ± 0.005 |
| 0.1 | 0.146 ± 0.002 | **0.093 ± 0.001** | 0.119 ± 0.001 | 0.098 ± 0.001 | 0.637 ± 0.003 | 0.452 ± 0.005 |
| 0.2 | 0.150 ± 0.002 | **0.098 ± 0.001** | 0.122 ± 0.001 | 0.103 ± 0.001 | 0.996 ± 0.004 | 0.454 ± 0.005 |
| 0.3 | 0.157 ± 0.002 | **0.104 ± 0.001** | 0.129 ± 0.001 | 0.109 ± 0.001 | 1.244 ± 0.004 | 0.458 ± 0.005 |
| 0.4 | 0.166 ± 0.002 | **0.114 ± 0.001** | 0.140 ± 0.001 | 0.119 ± 0.001 | 1.403 ± 0.004 | 0.462 ± 0.005 |

Table 1: Test set MSE as we vary the level of noise added to tilts at inference time. In each setting we reconstruct \(32\times 512\times 512\) volumes from 8 tilt views of size \(512\times 512\) with tilt angles uniformly spaced between \([-60^{\circ},+60^{\circ}]\).
| n shifts | direct U-Net | stretch+U-Net | fbp+U-Net | bp+U-Net | fbp | bp |
|---|---|---|---|---|---|---|
| 0/8 | 0.157 ± 0.002 | **0.104 ± 0.001** | 0.129 ± 0.001 | 0.109 ± 0.001 | 1.244 ± 0.004 | 0.458 ± 0.005 |
| 2/8 | 0.217 ± 0.002 | **0.118 ± 0.001** | 0.157 ± 0.001 | 0.130 ± 0.001 | 1.274 ± 0.004 | 0.479 ± 0.001 |
| 4/8 | 0.226 ± 0.003 | **0.121 ± 0.001** | 0.166 ± 0.002 | 0.135 ± 0.001 | 1.304 ± 0.004 | 0.481 ± 0.005 |
| 8/8 | 0.235 ± 0.002 | **0.126 ± 0.001** | 0.180 ± 0.001 | 0.164 ± 0.002 | 1.367 ± 0.004 | 0.508 ± 0.006 |

Table 2: Test set MSE as we vary the number of misaligned tilt views (both at train and test time). In each setting we reconstruct \(32\times 512\times 512\) volumes from 8 tilt views of size \(512\times 512\) with tilt angles uniformly spaced between \([-60^{\circ},+60^{\circ}]\).
## 4 Discussion
Why should using the stretched sinogram as network input lead to lower reconstruction error than backprojection or filtered backprojection? Both BP and FBP are reconstruction algorithms, so intuitively it might seem that cleaning up a reconstruction would be easier than generating one from scratch, as the networks in our method do. Our experiments suggest that this intuition is wrong. We speculate that this is because backprojection _attenuates_ high-frequency modes while filtered backprojection _amplifies_ them. Stretching, on the other hand, may suppress less information in the input, which may be helpful for data-driven learning.
**Limitations** In this proof of concept, our tilt views were synthesized from a 3-D image, which could serve as ground truth for the desired reconstruction. In the real world, where would the ground truth for supervised training come from? One possible scenario is that conventional tomography would be used to generate the ground truth for convolutional net tomography. One would first acquire a high-quality, densely sampled, low-noise, well-aligned set of tilt views. These views would be given to classic reconstruction methods like FBP or SIRT to generate 3-D reconstructions. Neural nets would then be trained to perform tomography from subsets of the tilt views (possibly with additional noise and misalignment augmentations). Once trained, the nets could be applied in large-scale tomographic imaging pipelines, where there is a strong motivation to reduce the number of tilt views and/or the acquisition time of each image.
We have relied on mean squared error as the primary metric for evaluating and comparing methods. In cell biology, the goal is to learn something new about the structure of the object being reconstructed. In other fields like connectomics, a central goal is to identify boundaries between cells [4]. Neither of these goals is directly linked to MSE. We have attempted to validate our reconstructions via qualitative inspection and have indeed seen cases where membranes (in particular, membranes that lie in the imaging plane) are washed out by the competing methods (Fig. 3). This suggests that downstream segmentation of neurites may indeed benefit from stretched sinogram representations. However, this point must be remembered when interpreting the results of this study.
**Extensions** As discussed in the limitations section, a central goal is to use tomography as part of a broader pipeline. Many pipelines in the literature already use deep neural networks to generate segmentations of electron microscope images. It seems reasonable to do away with explicit reconstruction altogether and instead directly map tilt views to a segmentation. This has the bonus of avoiding any questions regarding MSE, since the network would directly optimize the quantity of interest: segmentation performance.
|
2304.01575 | The expressive power of pooling in Graph Neural Networks | In Graph Neural Networks (GNNs), hierarchical pooling operators generate
local summaries of the data by coarsening the graph structure and the vertex
features. While considerable attention has been devoted to analyzing the
expressive power of message-passing (MP) layers in GNNs, a study on how graph
pooling affects the expressiveness of a GNN is still lacking. Additionally,
despite the recent advances in the design of pooling operators, there is not a
principled criterion to compare them. In this work, we derive sufficient
conditions for a pooling operator to fully preserve the expressive power of the
MP layers before it. These conditions serve as a universal and theoretically
grounded criterion for choosing among existing pooling operators or designing
new ones. Based on our theoretical findings, we analyze several existing
pooling operators and identify those that fail to satisfy the expressiveness
conditions. Finally, we introduce an experimental setup to verify empirically
the expressive power of a GNN equipped with pooling layers, in terms of its
capability to perform a graph isomorphism test. | Filippo Maria Bianchi, Veronica Lachi | 2023-04-04T07:03:08Z | http://arxiv.org/abs/2304.01575v3 | # The expressive power of pooling in Graph Neural Networks
###### Abstract
In Graph Neural Networks (GNNs), hierarchical pooling operators generate local summaries of the data by coarsening the graph structure and the vertex features. Considerable attention has been devoted to analyzing the expressive power of message-passing (MP) layers in GNNs, while a study on how graph pooling affects the expressiveness of a GNN is still lacking. Additionally, despite the recent advances in the design of pooling operators, there is not a principled criterion to compare them. In this work, we derive sufficient conditions for a pooling operator to fully preserve the expressive power of the MP layers before it. These conditions serve as a universal and theoretically-grounded criterion for choosing among existing pooling operators or designing new ones. Based on our theoretical findings, we analyze several existing pooling operators and identify those that fail to satisfy the expressiveness conditions. Finally, we introduce an experimental setup to verify empirically the expressive power of a GNN equipped with pooling layers, in terms of its capability to perform a graph isomorphism test.
## 1 Introduction
Significant effort has been devoted to characterizing the expressive power of Graph Neural Networks (GNNs) in terms of their capabilities for testing graph isomorphism [32]. This has led to a better understanding of the strengths and weaknesses of GNNs and opened up new avenues for developing advanced GNN models that go beyond the limitations of such algorithms [30]. The more powerful a GNN, the larger the set of non-isomorphic graphs that it can distinguish by generating distinct representations for them. GNNs with appropriately formulated message-passing (MP) layers are as effective as the Weisfeiler-Lehman isomorphism test (WL test) in distinguishing graphs [36], while higher-order GNN architectures can match the expressiveness of the \(k\)-WL test [27]. Several approaches have been developed to enhance the expressive power of GNNs by incorporating random features into the nodes [31; 1], by using randomized weights in the network architecture [40], or by using compositions of invariant and equivariant functions [25]. Despite the progress made in understanding the expressive power of GNNs, the results are still limited to _flat_ GNNs consisting of a stack of MP layers followed by a final readout [10; 38; 21].
Inspired by pooling in convolutional neural networks, recent works introduced hierarchical pooling operators that enable GNNs to learn increasingly abstract and coarser representations of the input graphs [37; 8]. By interleaving MP with pooling layers that gradually distill global graph properties through the computation of local graph summaries, it is possible to build deep GNNs that improve the accuracy in graph classification [7; 4] and node classification tasks [18; 24].
It is not straightforward to evaluate the power of a graph pooling operator and the quality of the coarsened graphs it produces. The most common approach is to simply measure the performance of a GNN with pooling layers on a downstream task, such as graph classification. However, such an approach is highly empirical and provides a rather indirect evaluation that is affected by external factors. One factor is the overall GNN architecture: the pooling layers are combined with different MP layers, activation functions, normalization or dropout layers, and specific optimization algorithms, which makes it difficult to disentangle the contribution of the individual components. Another factor is the dataset at hand: some classification tasks only require isolating a specific motif in the graph [22; 6], while others require considering global properties that depend on the whole graph structure [15]. Two new criteria were recently proposed to evaluate a graph pooling operator in terms of the spectral similarity between the original and the coarsened graph topology and its capability of reconstructing the features of the original graph from the coarsened one [20]. While providing valuable insights, these criteria give results that are, to some extent, contrasting and in disagreement with the traditional evaluation based on the performance of the downstream task.
To address this issue, we introduce a universal and principled criterion that quantifies the power of a pooling operator as its capability to retain the information in the graph from an expressiveness perspective. In particular, we investigate how graph pooling affects the expressive power of GNNs and derive sufficient conditions under which the pooling operator preserves the highest degree of expressiveness. Our contributions are summarized as follows.
* We show that when certain conditions are met in the MP layers and in the pooling operator, their combination produces an injective function between graphs. This implies that the GNN can effectively coarsen the graph to learn high-level data descriptors, without compromising its expressive power.
* Based on our theoretical analysis, we identify commonly used pooling operators that do not satisfy these conditions and may lead to failures in certain scenarios.
* We introduce a simple yet effective experimental setup for measuring, empirically, the expressive power of any GNN in terms of its capability to perform a graph isomorphism test.
Besides providing a criterion for choosing among existing pooling operators and for designing new ones, our findings allow us to debunk criticism and misconceptions about graph pooling.
## 2 Background
### Graph neural networks
Let \(\mathcal{G}=\left(\mathcal{V},\mathcal{E}\right)\) be a graph with node features \(\mathcal{X}^{0}=\left\{\mathbf{x}_{i}^{0}\right\}_{i=1}^{N}\), where \(\left\{\cdot\right\}\) denotes a multiset and \(\left|\mathcal{V}\right|=N\). The principal operations performed by GNNs are those of the MP layers [19], which implement a local computational mechanism to process graphs. Specifically, the information related to a node \(v\) that is stored in a feature vector \(\mathbf{h}_{v}\), is updated by combining the features of neighboring nodes. After \(l\) iterations, the vector \(\mathbf{h}_{v}^{l}\) embeds both the structural information and the node content of the \(l\)-hop neighborhood of \(v\). With enough iterations, the node feature vectors can be used to classify the nodes or the entire graph. More rigorously, the output of the \(l\)-th layer of a MP-GNN is:
\[\mathbf{x}_{v}^{l}=\texttt{COMBINE}^{(l)}(\mathbf{x}_{v}^{l-1},\texttt{AGGREGATE}^{(l )}(\mathbf{x}_{u}^{l-1},\,u\in\mathcal{N}[v])) \tag{1}\]
where \(\texttt{AGGREGATE}^{(l)}\) is a function that aggregates the node features from the neighborhood \(\mathcal{N}[v]\) at the (\(l-1\))-th iteration, and \(\texttt{COMBINE}^{(l)}\) is a function that combines the node's own features with those of its neighbors. This type of MP-GNN implements permutation-invariant feature aggregation functions, and the information propagation is isotropic [33]. In graph classification/regression tasks, a READOUT function typically transforms the feature vectors from the last layer \(L\) to produce the final output:
\[\mathbf{o}\ =\ \texttt{READOUT}(\mathbf{x}_{v}^{L},\,v\in\mathcal{V}). \tag{2}\]
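To make the notation concrete, the following is a minimal dense sketch of Eqs. (1) and (2); the `neighbors` list and the `aggregate` and `combine` callables are placeholders for the operator-specific choices discussed below, not a particular GNN from the literature.

```
import torch

def mp_layer(X, neighbors, aggregate, combine):
    # One MP iteration, Eq. (1): aggregate each neighborhood N[v],
    # then combine the result with the node's own features.
    return torch.stack([combine(X[v], aggregate(X[neighbors[v]]))
                        for v in range(X.shape[0])])

def readout(X_L):
    # Eq. (2): a permutation-invariant readout, here a plain sum
    return X_L.sum(dim=0)
```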
### Expressive power of graph neural networks
When analyzing the expressive power of GNNs, the primary objective is to evaluate their capacity to produce different outputs for non-isomorphic graphs. While an exact test for graph isomorphism
has a combinatorial complexity [2], the WL test for graph isomorphism [35] is an effective and computationally efficient test that can distinguish a broad range of graphs, with some exceptions, such as strongly regular graphs [11]. The algorithm assigns to each graph vertex a color that depends on the multiset of labels of its neighbors and on its own color. At each iteration, the colors of the vertices are updated until convergence is reached.
There is a strong analogy between an iteration of the WL test and the aggregation scheme implemented by MP in GNNs. In fact, it has been proved that MP-GNNs are at most as powerful as the WL test in distinguishing different graph-structured features [36; 27]. Moreover, if the MP operation is injective, then the resulting MP-GNN is as powerful as the WL test [36]. The Graph Isomorphism Network (GIN) implements such an injective multiset function as follows:
\[\mathbf{x}_{v}^{l}=\texttt{MLP}^{(l)}\left((1+\epsilon^{l})\mathbf{x}_{v}^{l-1}+\sum_ {u\in\mathcal{N}[v]}\mathbf{x}_{u}^{l-1}\right). \tag{3}\]
Under the condition that the nodes' features are a countable multiset, the representational power of GIN equals that of the WL test. Some GNNs can surpass the discriminative power of the WL test by using higher-order generalizations of MP operation [27], or by using a composition of invariant and equivariant functions [25], at the price of higher computational complexity. In this work, we focus on the standard MP-GNN, which remains the most widely adopted due to its computational efficiency.
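As an illustration, Eq. (3) admits a very compact dense implementation; the sketch below assumes a dense adjacency matrix \(A\) and is not the sparse implementation used in practice (e.g., GINConv in PyTorch Geometric).

```
import torch

def gin_layer(X, A, mlp, eps=0.0):
    # Eq. (3): x_v = MLP((1 + eps) * x_v + sum of neighbor features)
    # X: (N, F) node features, A: (N, N) dense adjacency matrix
    return mlp((1.0 + eps) * X + A @ X)
```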
### Graph pooling operators
A graph pooling operator implements a function \(\texttt{POOL}:\mathcal{G}\mapsto\mathcal{G}_{P}=(\mathcal{V}_{P},\mathcal{E}_{P})\) such that \(|\mathcal{V}_{P}|=K\), with \(K\leq N\), and \(\mathcal{X}_{P}=\{\mathbf{x}_{P_{i}}\}_{i=1}^{K}\) is the multiset of the pooled node features. To formally describe the \(\texttt{POOL}\) function, we adopt the Select-Reduce-Connect (SRC) framework [20], which expresses a graph pooling operator through the combination of three functions: _selection_, _reduction_, and _connection_. The selection function (\(\mathtt{SEL}\)) clusters the nodes of the input graph into subsets called _supernodes_, namely \(\mathtt{SEL}:\mathcal{G}\mapsto\mathcal{S}=\{\mathcal{S}_{1},\dots,\mathcal{S}_{K}\}\) with \(\mathcal{S}_{j}=\left\{s_{i}^{j}\right\}_{i=1}^{N}\), where \(s_{i}^{j}\) is a membership score that measures how much node \(i\) contributes to supernode \(j\). Typically, a node can be assigned to zero, one, or several supernodes, each with different scores. The reduction function (\(\mathtt{RED}\)) creates the pooled vertex features multiset by aggregating the features of the vertices assigned to the same supernode, that is, \(\mathtt{RED}:(\mathcal{G},\mathcal{S})\mapsto\mathcal{X}_{P}\). Finally, the connect function (\(\mathtt{CON}\)) generates the edges (and, potentially, the edge features) by connecting the supernodes, \(\mathcal{E}_{P}=\{\mathtt{CON}(\mathcal{G},\mathcal{S}_{m},\mathcal{S}_{l})\}_{m,l=1}^{K}\).
Most of the existing graph pooling operators can be specified by a particular implementation of the SRC functions. It is worth noting that the input and output of the SRC functions can be represented as both multisets and matrices. In Section 3.1 we use the multisets representation as it facilitates presenting our main result, while in the rest of the paper, we adopt the matrix representation.
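In matrix form, a minimal SRC sketch reads as follows; the dense \(\mathbf{S}^{\mathsf{T}}\mathbf{A}\mathbf{S}\) connection is only one common choice of CON (used, e.g., by dense operators such as DiffPool), not the definition of CON in general.

```
import torch

def src_pool(X, A, S):
    # S: (N, K) output of SEL holding the membership scores s_i^j
    X_p = S.T @ X      # RED: pooled node features
    A_p = S.T @ A @ S  # CON: one common way to connect the supernodes
    return X_p, A_p
```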
## 3 Expressive power of graph pooling operators
We define the expressive power of a graph pooling operator as its capability of preserving the expressive power of the MP layers that came before it. We first present our main result, which is a formal criterion to verify the expressive power of a pooling operator. In particular, we provide three sufficient (though not necessary) conditions ensuring that if the MP and the pooling layers meet certain criteria, then the latter retains the same level of expressive power as the former. Then, we examine several existing pooling operators and analyze their expressive power based on those criteria.
### Conditions for expressiveness
**Theorem 1**.: _Let \(\mathcal{G}_{1}=(\mathcal{V}_{1},\mathcal{E}_{1})\) with \(|\mathcal{V}_{1}|=N\) and \(\mathcal{G}_{2}=(\mathcal{V}_{2},\mathcal{E}_{2})\) with \(|\mathcal{V}_{2}|=M\), with node features \(\mathcal{X}_{1}\) and \(\mathcal{X}_{2}\) respectively, such that \(\mathcal{G}_{1}\neq_{WL}\mathcal{G}_{2}\). Let \(\mathcal{G}_{1}^{L}\) and \(\mathcal{G}_{2}^{L}\) be the graphs obtained after applying a block of \(L\) MP layers such that \(\mathcal{X}_{1}^{L}=\{\mathbf{x}_{i}^{L}\}_{i=1}^{N}\) and \(\mathcal{X}_{2}^{L}=\{\mathbf{y}_{i}^{L}\}_{i=1}^{M}\) are the new multisets of node features. Let \(\mathtt{POOL}\) be a pooling operator expressed by the functions \(\mathtt{SEL}\), \(\mathtt{RED}\), \(\mathtt{CON}\), which is placed after the MP layers. Let \(\mathcal{G}_{1_{P}}=\mathtt{POOL}(\mathcal{G}_{1})\) and \(\mathcal{G}_{2_{P}}=\mathtt{POOL}(\mathcal{G}_{2})\) with \(|\mathcal{V}_{1_{P}}|=|\mathcal{V}_{2_{P}}|=K\). Let
\(\mathcal{X}_{1_{P}}=\{\mathbf{x}_{P_{j}}\}_{j=1}^{K}\) and \(\mathcal{X}_{2_{P}}=\left\{\mathbf{y}_{P_{j}}\right\}_{j=1}^{K}\) be the node features of the pooled graphs. If the following conditions hold:_
1. \(\sum_{i=1}^{N}\mathbf{x}_{i}^{L}\neq\sum_{i=1}^{M}\mathbf{y}_{i}^{L}\)_;_
2. _For each node_ \(i\)_, the memberships generated by_ SEL _satisfy_ \(\sum_{j=1}^{K}s_{i}^{j}=\lambda\)_, with_ \(\lambda>0\)_;_
3. _The function_ RED _is of type_ \(\mathtt{RED}:\left\{\left(\mathbf{x}_{i}^{L},s_{i}^{j}\right)_{i=1}^{N}\right\}_{j=1}^{K}\mapsto\mathcal{X}_{P}=\left\{\sum_{i=1}^{N}\mathbf{x}_{i}^{L}\cdot s_{i}^{j}\right\}_{j=1}^{K}\)_;_
_then \(\mathcal{X}_{1_{P}}\neq\mathcal{X}_{2_{P}}\)._
The proof can be found in Appendix A and a schematic summary is in Fig. 1.
When the three conditions are satisfied, the combination of the MP layers and pooling operator results in an injective function between graphs. When using a powerful MP layer such as GIN [36], there are theorems for functions defined on sets that guarantee condition 1 to be met. In particular, the sum over a multiset that is countable is an injective function [39]. When \(\mathcal{G}_{1}\neq_{\text{WL}}\mathcal{G}_{2}\) a GNN block with enough GIN layers produces different multisets of node features \(\mathcal{X}_{1}^{L}\neq\mathcal{X}_{2}^{L}\). Note that this is true also when \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) have the same number of nodes, i.e., Th. 1 holds for \(|\mathcal{V}_{1}|=|\mathcal{V}_{2}|=N\).
If the pooling operator satisfies conditions 2 and 3, it will also produce multisets of node features so that \(\mathcal{X}_{1_{P}}\neq\mathcal{X}_{2_{P}}\). Due to the injectiveness of the coloring function of the WL algorithm, two graphs with different multisets of node features will be classified as non-isomorphic by the WL test and, therefore, \(\mathcal{G}_{1_{P}}\neq_{\text{WL}}\mathcal{G}_{2_{P}}\). This means that the pooling operator effectively coarsens the graphs while retaining all the information necessary to differentiate between them and that the composition of GIN layers and appropriate pooling operator maps non-WL equivalent graphs \(\mathcal{G}_{1}\neq_{\text{WL}}\mathcal{G}_{2}\) into non-WL equivalent graphs \(\mathcal{G}_{1_{P}}\neq_{\text{WL}}\mathcal{G}_{2_{P}}\).
Condition 2 implies that all nodes in the original graph must contribute to the supernodes. Moreover, letting the sum of the memberships \(s_{i}^{j}\) be a constant \(\lambda\) (usually, \(\lambda=1\)) places a restriction on the formation of the supernodes. Condition 3 requires that the features of the supernodes \(\mathcal{X}_{P}\) are a convex combination of the node features \(\mathcal{X}^{L}\). It is important to note that the conditions for expressiveness only involve SEL and RED, but not the CON function. Indeed, both the graph's topology and the nodes' features are embedded in the features of the supernodes by MP and pooling layers satisfying the conditions of Th. 1. Nevertheless, even if a badly-behaved CON function does not affect the expressiveness of the pooling operator, it can still compromise the effectiveness of the MP layers that come afterward. This will be discussed further in Sections 3.3 and 4.
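Condition 2 is straightforward to verify numerically for any SEL output; a small sketch of such a check (with an assumed numerical tolerance) is:

```
import torch

def check_condition_2(S, lam=1.0, tol=1e-6):
    # Condition 2 of Th. 1: for every node i, sum_j s_i^j = lambda > 0
    return lam > 0 and bool(torch.all((S.sum(dim=1) - lam).abs() < tol))
```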
### Expressiveness of existing pooling operators
Figure 1: A GNN with expressive MP layers (condition 1) extracts different features \(\mathcal{X}_{1}^{L}\) and \(\mathcal{X}_{2}^{L}\) for two graphs \(\mathcal{G}_{1}\), \(\mathcal{G}_{2}\) that are WL-distinguishable. A pooling layer satisfying conditions 2 and 3 generates coarsened graphs \(\mathcal{G}_{1_{P}}\) and \(\mathcal{G}_{2_{P}}\) that are still WL-distinguishable.

The SRC framework allows building a comprehensive taxonomy of the existing pooling operators, based on the density of supernodes, the trainability of the SEL, RED, and CON functions, and the adaptability of the number of supernodes \(K\) [20]. The density of a pooling operator is defined as the expected value \(\mathbb{E}[|\mathcal{S}_{k}|/N]\), which is the ratio between the cardinality of a supernode \(\mathcal{S}_{k}\) and the number of nodes in the graph \(\mathcal{G}\). A method is referred to as _dense_ if the supernodes have cardinality \(O(N)\), whereas a pooling operator is considered _sparse_ if the supernodes generated have constant cardinality \(O(1)\) [20].
Pooling methods can also be distinguished according to the number of nodes \(K\) of the pooled graph. If \(K\) is constant and independent of the input graph size, the pooling method is _fixed_. On the other hand, if the number of supernodes is a function of the input graph, the method is _adaptive_. Finally, in some pooling operators the SEL, RED, and CON functions can be learned end-to-end along with the other components of the GNN architecture. In this case, the method is said to be _trainable_, meaning that the operator has parameters that are learned by optimizing a task-driven loss function. Otherwise, the methods are _non-trainable_.
**Dense pooling operators.** Prominent methods in this class of pooling operators are DiffPool [37], MinCutPool [7], and DMoN [34]. Besides being dense, all these operators are also trainable and fixed. DiffPool, MinCutPool, and DMoN compute a cluster assignment matrix \(\mathbf{S}\in\mathbb{R}^{N\times K}\) either with an MLP or an MP-layer, which are fed with the node features \(\mathbf{X}^{L}\) and end with a softmax. The main difference among these methods is in how they define unsupervised auxiliary loss functions, which are used to inject a bias in how the clusters are formed. Thanks to the softmax normalization, the cluster assignments sum up to one, ensuring that condition 2 of Th. 1 is satisfied. Moreover, the pooled node features are computed as \(\mathbf{X}_{p}=\mathbf{S}^{\mathsf{T}}\mathbf{X}^{L}\), so that condition 3 is also satisfied.
There are dense pooling operators that use algorithms such as non-negative matrix factorization [3] to obtain a cluster assignment matrix \(\mathbf{S}\), which may not satisfy condition 2. Nonetheless, it is always possible to apply a suitable normalization to \(\mathbf{S}\) to ensure that its rows sum up to one. Therefore, we claim that all dense methods preserve the expressive power of the preceding MP layers.
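Such a normalization is a one-liner; the sketch below assumes non-negative membership scores and adds a small epsilon to guard against all-zero rows.

```
import torch

def normalize_rows(S, eps=1e-12):
    # Rescale each row of S to sum to one, so condition 2 holds with lambda = 1
    return S / (S.sum(dim=1, keepdim=True) + eps)
```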
**Non-expressive sparse pooling operators.** Members of this category are Top-\(k\) [18, 22], ASAPool [29], SAGPool [23] and PanPool [24], which are also trainable and adaptive. These methods reduce the graph by selecting a subset of its nodes based on a ranking score, and they mainly differ in how their SEL function computes such a score. Specifically, the Top-\(k\) method ranks nodes based on a score obtained by multiplying the node features with a trainable projection vector. A node \(i\) is kept (\(s_{i}=1\)) if it is among the top-\(K\) in the ranking and is discarded (\(s_{i}=0\)) otherwise. SAGPool simply replaces the projection vector of Top-\(k\) with an MP layer to account for the graph's structure when scoring nodes. ASAPool, instead, examines all potential local clusters within the input graph given a fixed receptive field, and it employs an attention mechanism to compute the cluster membership of the nodes. The clusters are subsequently scored using a GNN. Finally, in PanPool the scores are obtained from the diagonal entries of a maximal entropy transition matrix, which is a transition matrix that generalizes the graph Laplacian.
Regardless of how the score is computed, all these methods generate a cluster assignment matrix \(\mathbf{S}\) where not all the rows sum to one. Indeed, if a node is not selected, it is not assigned to any supernode in the coarsened graph. Therefore, these methods fail to meet condition 2 of Theorem 1. Additionally, all these methods share the same RED, which involves multiplying the features of each selected node by its ranking score, making condition 3 also unsatisfied.
Intuitively, these operators produce a pooled graph that is a subgraph of the original graph and discard the content of the remaining parts. This hinders the ability to retain all the necessary information for preserving the expressiveness of the preceding MP layers. The limitation of Top-\(k\) is exemplified in Fig. 2: regardless of the projector \(p\), Top-\(k\) maps two WL-distinguishable graphs into two isomorphic graphs, meaning that it cannot preserve the partition on graphs induced by the WL test.
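The following sketch illustrates the problem; the tanh gating follows common Top-\(k\) implementations and is an assumption, but the conclusion does not depend on it: dropped nodes receive zero membership (violating condition 2) and kept features are rescaled by their scores (violating condition 3).

```
import torch

def topk_pool(X, p, ratio=0.1):
    # SEL: rank nodes by a projection of their features onto p
    scores = torch.tanh(X @ p / p.norm())
    k = max(1, int(ratio * X.shape[0]))
    idx = torch.topk(scores, k).indices
    # RED: only the top-k nodes survive, gated by their scores
    return X[idx] * scores[idx].unsqueeze(-1), idx
```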
**Expressive sparse pooling operators.** Not all sparse pooling operators coarsen the graph by selecting a subgraph. In fact, some of them assign each node in the original graph to exactly one supernode and, thus, satisfy condition 2 of Th. 1. In matrix form, the cluster assignment would be represented by a sparse matrix \(\mathbf{S}\) that satisfies \(\mathbf{S}\mathbf{1}_{K}=\mathbf{1}_{N}\) and where every row has one entry equal to one and the others equal to zero. Within this category of sparse pooling operators, notable examples include Graclus [13], ECPool [14], and \(k\)-MISPool [4].
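A quick check of this hard-assignment property, sketched under the convention that memberships are exactly 0 or 1, is:

```
import torch

def is_hard_assignment(S):
    # S 1_K = 1_N with a single unit entry per row: every node belongs
    # to exactly one supernode, as in Graclus, ECPool, and k-MISPool
    binary = bool(torch.logical_or(S == 0, S == 1).all())
    return binary and bool(torch.all(S.sum(dim=1) == 1))
```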
Graclus is a non-trainable, greedy bottom-up spectral clustering algorithm, which matches each vertex with the neighbor that is closest according to the graph connectivity [13]. When Graclus is used to perform graph pooling, the RED function is usually implemented as a max_pool operation between the vertices assigned to the same cluster [12]. In this work, to ensure that condition 3 of Th. 1 is satisfied, we use a sum_pool operation instead. Contrary to Graclus, ECPool and \(k\)-MISPool are trainable. ECPool first assigns to each edge \(e_{i\to j}\) a score \(r_{ij}=f(\mathbf{x}_{i},\mathbf{x}_{j};\mathbf{\Theta})\). Then, it iterates over each edge \(e_{i\to j}\), starting from those with higher scores, and contracts it if neither node \(i\) nor node \(j\) is attached to an already contracted edge. The endpoints of a contracted edge are merged into a new supernode \(\mathcal{S}_{k}=r_{ij}(\mathbf{x}_{i}+\mathbf{x}_{j})\), while the remaining nodes become supernodes themselves. Since each supernode either contains the nodes of a contracted edge or is a node from the original graph, all columns of \(\mathbf{S}\) have either one or two entries equal to one, while each row sums up to one. The RED function can be expressed as \(\mathbf{r}\odot\mathbf{S}^{T}\mathbf{X}^{L}\), where \(\mathbf{r}[k]=r_{ij}\) if \(k\) is the contraction of two nodes \(i\) and \(j\), and \(\mathbf{r}[k]=1\) otherwise. As a result, ECPool meets the expressiveness conditions of Th. 1. Finally, \(k\)-MISPool identifies the supernodes with the centroids of the maximal \(k\)-independent sets of a graph [5]. To speed up computation, the centroids are selected with a greedy approach based on a ranking vector \(\pi\). Since \(\pi\) can be obtained from a trainable projector \(\mathbf{p}\) applied to the vertex features, \(\pi=\mathbf{X}^{L}\mathbf{p}^{T}\), \(k\)-MISPool is a trainable pooling operator. \(k\)-MISPool assigns each graph vertex to one of the centroids and aggregates the features of the vertices assigned to the same centroid with a sum_pool operation to create the features of the supernodes. Therefore, \(k\)-MISPool satisfies the expressiveness conditions of Th. 1.
A common characteristic of these methods is that the number of supernodes \(K\) cannot be directly specified. Graclus and ECPool achieve a pooling ratio of approximately 0.5 by roughly halving the graph size each time they are applied. On the other hand, \(k\)-MISPool can control the coarsening level by computing the maximal independent set from \(\mathcal{G}^{k}\), which is the graph where each node of \(\mathcal{G}\) is connected to its \(k\)-hop neighbours. As the value of \(k\) increases, the pooling ratio decreases.
### Criticism on graph pooling
Recently, the effectiveness of graph pooling has been questioned on the basis of empirical results aimed at exposing the weaknesses of certain pooling operators [26]. The experiments showed that using a randomized cluster assignment matrix \(\mathbf{S}\) (followed by a softmax normalization) gives comparable results to using the assignment matrices learned by Diffpool [37] and MinCutPool [7]. Similarly, applying Graclus [13] on the complementary graph would give a performance similar to using the original graph.
Figure 2: Example of failure of Top-\(k\) pooling operator. Regardless of the value learned for the projector \(p\), two WL-distinguishable graphs are mapped into the same coarsened graph becoming indistinguishable.
We identified potential pitfalls in the proposed evaluation, which considered only pooling operators that are expressive and that, even after being modified, retain their expressive power. Clearly, even if expressiveness ensures that all the information is preserved in the pooled graph, its structure is corrupted when using a randomized \(\mathbf{S}\) or a complementary graph. This hinders the effectiveness of the MP layers that come after pooling, as their inductive biases no longer match the data structure they receive. Notably, this might not affect certain classification tasks, e.g., when the goal is to detect small structures that are already captured by the MP layers before pooling.
To address these limitations, first, we propose to corrupt a pooling operator that is not expressive. In particular, we design a Top-\(k\) pooling operator where the nodes are ranked based on a score that is sampled from a Normal distribution rather than being produced by a trainable layer applied to the vertex features. Second, we evaluate all the modified pooling operators in a setting where the MP layers after pooling are essential for the task and show that the performance drop is significant.
## 4 Experimental Results
To empirically confirm the theoretical results presented in Section 3, we designed a synthetic dataset that is specifically tailored to evaluate the expressive power of a GNN. We considered a GNN with MP layers interleaved with 10 different pooling operators: DiffPool [37], DMoN [34], MinCut [7], ECPool [14], Graclus, \(k\)-MISPool [4], Top-\(k\) [18], PanPool [24], ASAPool [29], and SAGPool [23]. For each pooling method, we used the implementation in PyTorch Geometric [17] with the default configuration. In addition, following the setup used to criticize the effectiveness of graph pooling [26], we considered the following pooling operators: Rand-Dense, a dense pooling operator where the cluster assignment is a normalized random matrix; Rand-Sparse, a sparse operator that ranks nodes based on a score sampled from a Normal distribution; Cmp-Graclus, an operator that applies the Graclus algorithm on the complement graph.
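For clarity, the two randomized SEL functions can be sketched as follows; this is a hypothetical re-implementation consistent with the description above, not the code from [26].

```
import torch

def rand_dense_assignment(num_nodes, k):
    # Rand-Dense: a softmax-normalized random cluster assignment matrix
    return torch.softmax(torch.randn(num_nodes, k), dim=-1)

def rand_sparse_scores(num_nodes):
    # Rand-Sparse: ranking scores sampled from a Normal distribution
    # instead of a trainable projection of the vertex features
    return torch.randn(num_nodes)
```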
### The EXPWL1 dataset
Our experiments aim at evaluating the expressive power of MP layers when combined with pooling layers. However, existing real-world and synthetic benchmark datasets are unsuitable for this purpose as they are not specifically designed to relate the power of GNNs to that of the WL test. Recently, the EXP dataset was proposed to test the capability of special GNNs to achieve higher expressive power than the WL test [1], which, however, goes beyond the scope of our evaluation. Therefore, we introduce a modified version of EXP called EXPWL1, which comprises a collection of graphs \(\{\mathcal{G}_{1},\dots,\mathcal{G}_{N},\mathcal{H}_{1},\dots,\mathcal{H}_{N}\}\) that represent propositional formulas that can be satisfiable or unsatisfiable. Each pair \((\mathcal{G}_{i},\mathcal{H}_{i})\) in EXPWL1 consists of two non-isomorphic graphs distinguishable by a WL test, which encode formulas with opposite SAT outcomes. Therefore, any GNN that has an expressive power equal to the WL test can distinguish them and achieve approximately 100% classification accuracy on the dataset. Compared to the original EXP dataset, we increased the size of the dataset to a total of 3000 graphs and we also increased the size of each graph from an average of 55 nodes to 76 nodes. This was done to make it possible to apply an aggressive pooling without being left with a trivial graph structure. The EXPWL1 dataset and the code to reproduce the experimental results are publicly available 2.
Footnote 2: [https://github.com/FilippoMB/The-expressive-power-of-pooling-in-GNNs](https://github.com/FilippoMB/The-expressive-power-of-pooling-in-GNNs)
### Experimental procedure
To empirically evaluate which pooling operator maintains the expressive power of the MP layers preceding it, we first identified a GNN architecture without pooling layers, which achieves approximately 100% accuracy on the EXPWL1. We found that a GNN with three GIN layers followed by a global_sum_pool reaches the desired accuracy. Then, we inserted a pooling layer between the second and third GIN layers, which performs an aggressive pooling by using a pooling ratio of 0.1 that reduces the graph size by 90%. The details of the GNN configuration are in Appendix B.1. To ensure a fair comparison, when testing each method we shuffled the datasets and created 10 different train/validation/test splits using the same random seed. We trained each model on all splits for 500 epochs and reported the average training time and the average test accuracy obtained by the models that achieved the lowest loss on the validation set.
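A sketch of this architecture is shown below; the `pool` argument is a placeholder whose call signature is assumed to return the coarsened `(x, edge_index, batch)` triplet, whereas the actual PyTorch Geometric operators differ in their interfaces (dense operators, for instance, work on dense tensors).

```
import torch
from torch_geometric.nn import GINConv, global_add_pool

def gin_mlp(i, o):
    return torch.nn.Sequential(torch.nn.Linear(i, o), torch.nn.ReLU(),
                               torch.nn.Linear(o, o))

class PoolingGNN(torch.nn.Module):
    # Two GIN layers, one pooling layer, one GIN layer, global sum readout
    def __init__(self, in_dim, hidden, num_classes, pool):
        super().__init__()
        self.gin1 = GINConv(gin_mlp(in_dim, hidden))
        self.gin2 = GINConv(gin_mlp(hidden, hidden))
        self.pool = pool  # placeholder pooling layer (pooling ratio 0.1)
        self.gin3 = GINConv(gin_mlp(hidden, hidden))
        self.lin = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = self.gin1(x, edge_index).relu()
        x = self.gin2(x, edge_index).relu()
        x, edge_index, batch = self.pool(x, edge_index, batch)
        x = self.gin3(x, edge_index).relu()
        return self.lin(global_add_pool(x, batch))
```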
To validate our experimental approach, we also measured the performance of the proposed GNN architecture equipped with the different pooling layers on popular benchmark datasets for graph classification. In particular, we considered six TUD datasets [28] (NCI1, Proteins, Mutagenicity, COLLAB, Reddit-B, COLORS-3) and an additional synthetic dataset, B-Hard [9].
### Experimental Results
Table 1 reports the performances of different pooling operators on EXPWL1. These results are consistent with our theoretical findings: pooling operators that satisfy the conditions of Th. 1 achieve the highest average accuracy among all the pooling operators. Despite the aggressive pooling, these operators retain all the necessary information and achieve the same performance as the GNN without a pooling layer. On the other hand, non-expressive pooling operators achieve a significantly lower accuracy as they are not able to correctly distinguish all graphs.
Table 1 also shows that employing a pooling operator based on a normalized random cluster assignment matrix (Rand-dense) or the complement graph (Cmp-Graclus) gives a lower performance. First of all, this result disproves the argument that such operators are comparable to the regular ones [26]. Additionally, we notice that the reduction in performance is less significant for Rand-dense and Cmp-Graclus than for Rand-sparse. This outcome is expected because, in terms of expressiveness, Rand-dense and Cmp-Graclus still satisfy the conditions of Th. 1. Nevertheless, their performance is still lower than that of the original pooling operators. The reason is that even if a badly-behaved CON function does not compromise the expressiveness of the pooling operator, the structure of the pooled graph is corrupted when utilizing a randomized \(\mathbf{S}\) or a complementary graph. This, in turn, reduces the effectiveness of the last GIN layer, which is essential to correctly classify the graphs in EXPWL1.
There are two remarks about the experimental evaluation. As discussed in Section 3.2, it is not possible to explicitly specify the pooling ratio in Graclus, ECPool, and \(k\)-MISPool. For \(k\)-MISPool, setting \(k=5\) gives a pooling ratio of approximately 0.1 on EXPWL1. However, for Graclus and ECPool, the only feasible option is to apply the pooling operator recursively until the desired pooling ratio of 0.1 is reached. Unfortunately, this approach is demanding, both in terms of computing time and memory usage. While it was possible to do this with Graclus on EXPWL1, we encountered an out-of-memory error after a few epochs when using ECPool on an RTX A6000 with 48GB of VRAM. Thus, the results for ECPool on EXPWL1 are obtained with a single pooling layer that gives a pooling ratio of approximately 0.5 rather than 0.1. Clearly, a pooling ratio of 0.5 retains more information from the original graph, greatly simplifying the training of ECPool with respect to the other methods. Nevertheless, due to its expressiveness, we argue that ECPool would have reached approximately 100% accuracy on EXPWL1 if implementing a more aggressive pooling had been feasible.
The second remark is that in EXPWL1, when using too many MP layers, at least one node ends up containing enough information to accurately classify the graphs. This was demonstrated by using a model with 3 GIN layers followed by global_max_pool, which achieved an accuracy of \(0.983\pm 0.006\). It should be noted that the baseline model with 3 GIN layers equipped with the more expressive global_sum_pool achieves a slightly higher accuracy of \(0.993\pm 0.003\). In contrast, a model with only 2 GIN layers and global_max_pool gives a significantly lower accuracy of \(0.665\pm 0.018\). Therefore, to ensure that our evaluation is meaningful, no more than 2 MP layers should precede the pooling operator. Since ASAPool and SAGPool implement an additional MP operation internally, we used only 1 GIN layer before them, rather than 2 as for the other pooling methods.

| **Pooling** | **s/epoch** | **GIN layers** | **Pool Ratio** | **Test Acc** | **Expr.** |
|---|---|---|---|---|---|
| _No-pool_ | 0.33s | 2+1 | 0.1 | \(99.3\pm 0.3\) | – |
| DiffPool | 0.69s | 2+1 | 0.1 | \(97.0\pm 2.4\) | ✓ |
| DMoN | 0.75s | 2+1 | 0.1 | \(99.0\pm 0.7\) | ✓ |
| MinCut | 0.72s | 2+1 | 0.1 | \(98.8\pm 0.4\) | ✓ |
| ECPool | 4.79s | 2+1 | 0.5 | \(99.5\pm 0.5\) | ✓ |
| Graclus | 1.00s | 2+1 | 0.1 | \(99.9\pm 0.1\) | ✓ |
| \(k\)-MIS | 1.17s | 2+1 | 0.1 | \(99.9\pm 0.1\) | ✓ |
| Top-\(k\) | 0.47s | 2+1 | 0.1 | \(67.9\pm 13.9\) | ✗ |
| PanPool | 3.82s | 2+1 | 0.1 | \(63.2\pm 7.7\) | ✗ |
| ASAPool | 1.11s | 1+1 | 0.1 | \(83.5\pm 2.5\) | ✗ |
| SAGPool | 0.59s | 1+1 | 0.1 | \(79.5\pm 9.6\) | ✗ |
| Rand-dense | 0.41s | 2+1 | 0.1 | \(91.7\pm 1.3\) | ✓ |
| Cmp-Graclus | 7.42s | 2+1 | 0.5 | \(91.0\pm 1.6\) | ✓ |
| Rand-sparse | 0.47s | 2+1 | 0.1 | \(62.8\pm 1.8\) | ✗ |

Table 1: Classification on **EXPWL1** Dataset.
Finally, Fig. 3 shows the average accuracy and the average run-time obtained on the seven benchmark datasets (the detailed results are in Appendix B.3). These benchmarks are not designed to test the expressive power and, thus, a GNN equipped with a non-expressive pooling operator could achieve good performance. This happens, for example, in those datasets where all the necessary information is captured by the first two GIN layers that come before pooling or in datasets where only a small part of the graph is what determines the class. Nevertheless, this second experiment serves two purposes. First, it demonstrates the soundness of the GNN architecture used in the first experiment, which achieves results comparable to those of models carefully optimized on the benchmark datasets [16]. Second, and most importantly, it shows that the performances on the benchmark datasets and EXPWL1 are aligned; this underlines the relevance of our theoretical result on the expressiveness in practical applications. It is worth noting that on the benchmark datasets, it was not possible to obtain a pooling ratio of 0.1 for both Graclus and ECPool. Using a pooling ratio of 0.5 gives Graclus and ECPool an advantage over other methods, which makes the comparison not completely fair and shows an important limitation of these two methods.
As a concluding remark, we comment on the training time of the dense and sparse pooling methods. A popular argument in favor of sparse pooling methods is their computational advantage compared to the dense ones. Our results show that this is not the case in modern deep-learning pipelines. In fact, ECPool and PanPool are approximately 10 times slower than dense pooling methods, ASAPool is twice as slow, and the only sparse method with training times lower than the dense ones is \(k\)-MIS. While it is true that the sparse methods save memory by avoiding computing intermediate dense matrices, such an advantage is relevant only for extremely large graphs that are rarely encountered in practical applications.
## 5 Conclusions
In this work, we studied for the first time the expressive power of pooling operators in GNNs. We identified the sufficient conditions that a pooling operator must satisfy to fully preserve the expressive power of the original GNN model. Based on our theoretical results, we proposed a principled approach to evaluate the expressive power of existing graph pooling operators by verifying whether they met the conditions for expressiveness.
To empirically test the expressive power of a GNN, we introduced a new dataset that allows verifying if a GNN architecture achieves the same discriminative power as the WL test. We used such a dataset to evaluate the expressiveness of a GNN equipped with different pooling operators and found that the experimental results were consistent with our theoretical findings. We believe that the proposed dataset will be a valuable tool, as it allows, with minimal effort, empirically testing the expressive power of any MP layer and pooling operator within a GNN.
In our experimental evaluation, we also considered popular benchmark datasets for graph classification and found that the expressive pooling operators achieved the highest performance. This confirmed the relevance of our principled criterion in practical applications to select a pooling operator based on its expressiveness. Finally, we focused on the computational time of the pooling methods and found that most sparse pooling methods not only perform worse due to their weak expressive power but are also not faster than the more expressive pooling methods.

Figure 3: Average accuracy and average runtime across the benchmark graph classification datasets.
We hope our work will provide novel insights into the relational deep-learning community and help to debunk misconceptions and criticism towards graph pooling.
### Acknowledgements
We gratefully acknowledge the support of Nvidia Corporation with the donation of the RTX A6000 GPUs used in this work. We also thank Daniele Zambon, Caterina Graziani and Antonio Longa for the useful discussions.
|
2303.16904 | Severity classification of ground-glass opacity via 2-D convolutional
neural network and lung CT scans: a 3-day exploration | Ground-glass opacity is a hallmark of numerous lung diseases, including
patients with COVID19 and pneumonia, pulmonary fibrosis, and tuberculosis. This
brief note presents experimental results of a proof-of-concept framework that
got implemented and tested over three days as driven by the third challenge
entitled "COVID-19 Competition", hosted at the AI-Enabled Medical Image
Analysis Workshop of the 2023 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP 2023). Using a newly built virtual
environment (created on March 17, 2023), we investigated various pre-trained
two-dimensional convolutional neural networks (CNN) such as Dense Neural
Network, Residual Neural Networks (ResNet), and Vision Transformers, as well as
the extent of fine-tuning. Based on empirical experiments, we opted to
fine-tune them using ADAM's optimization algorithm with a standard learning
rate of 0.001 for all CNN architectures and apply early-stopping whenever the
validation loss reached a plateau. For each trained CNN, the model state with
the best validation accuracy achieved during training was stored and later
reloaded for new classifications of unseen samples drawn from the validation
set provided by the challenge organizers. According to the organizers, few of
these 2D CNNs yielded performance comparable to an architecture that combined
ResNet and Recurrent Neural Network (Gated Recurrent Units). As part of the
challenge requirement, the source code produced during the course of this
exercise is posted at https://github.com/lisatwyw/cov19. We also hope that
other researchers may find this light prototype consisting of few Python files
based on PyTorch 1.13.1 and TorchVision 0.14.1 approachable. | Lisa Y. W. Tang | 2023-03-23T22:35:37Z | http://arxiv.org/abs/2303.16904v2 | # Severity classification of ground-glass opacity via
###### Abstract
Ground-glass opacity is a hallmark of numerous lung diseases, including COVID-19, pneumonia, pulmonary fibrosis, and tuberculosis. This brief note presents experimental results of a proof-of-concept framework that was implemented and tested over three days, driven by the third challenge entitled "COVID-19 Competition", hosted at the AI-Enabled Medical Image Analysis Workshop of the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023). Using a newly built virtual environment (created on March 17, 2023), we investigated various pre-trained two-dimensional convolutional neural networks (CNNs) such as Dense Neural Networks, Residual Neural Networks, and Vision Transformers, as well as the extent of fine-tuning that each architecture would require. Based on empirical experiments, we opted to fine-tune them using ADAM's optimization algorithm with a standard learning rate of 0.001 for all CNN architectures and to apply early stopping whenever the validation loss reached a plateau. For each trained CNN, the model state with the best validation accuracy achieved during training was stored and later reloaded for new classifications of unseen samples drawn from the validation set provided by the challenge organizers. According to the organizers, a few of these 2D CNNs yielded performance comparable to the baseline developed by the organizers. As part of the challenge requirement, the source code produced during the course of this exercise is posted at [https://github.com/lisatwyw/cov19](https://github.com/lisatwyw/cov19). We also hope that other researchers will find this prototype, consisting of a few Python files based on PyTorch 1.13.1 and TorchVision 0.14.1, approachable for their own work.
_Keywords: Computed Tomography; pulmonary parenchymal involvement; ground-glass opacity severity; AlexNet; DenseNet; InceptionNet; Residual Network; Wide Residual Network; SqueezeNet; VisionTransform; VGG_
## 1 Introduction
Ground-glass opacity (GGO) as captured in lung computed tomography scans (CTs) is a hallmark of numerous lung diseases and typically signifies lung consolidation [1, 2]. The extent of GGO observed in lung scans is one approach for severity assessment (known as 'grading') that has been used in clinical practice to facilitate the triage of patients in hospitals and acute care centres, although the manual process of grading by visual inspection of lung scans is laborious and time-consuming.
With the advent of deep convolutional neural networks (CNNs), computer-automated and/or computer-assisted classification of image volumes may now be done with high efficiency and accuracy. Countless studies from the past five years, e.g. [4, 20], have further shown the success of transfer learning with CNNs trained on three-channel two-dimensional inputs. Accordingly, we approach severity classification of GGO similarly, leveraging off-the-shelf pretrained networks to facilitate model training with sample sizes smaller than 500. To this end, we explored the use of various CNN architectures: AlexNet [11], VGG [18], Residual Networks [14], Wide Residual Networks [15], DenseNet [12], SqueezeNet [16] and Vision Transformers [17].
This note aims to document the initial experimental work done on the open-source dataset named "COV19CT-DB" (COVID-19 Computed Tomography Database) as part of a challenge submission to the third competition entitled "ICASSP: COVID19 severity detection challenge". Due to time constraints (i.e. three days), we focus on the severity classification problem, where the objective is to label each CT volume as either mild, moderate, severe, or critical [6]. These categories were predetermined by a pre-existing protocol in which experts visually inspected each scan for GGO and manually labeled it as "mild" if less than 26% of the lung volumes were judged to capture "pulmonary parenchymal involvement"; "moderate" if the involvement was 26-50%; "severe" if the involvement was 50-70%; or "critical" if the involvement was greater than 75% [19].
## 2 Materials
The training and validation samples are derived as a subset of a larger dataset named "COV19-CT-DB" that contains over one thousand patient samples of three-dimensional CT chest scans archived in a lossy compression
format (i.e. JPEG) [6, 5, 7, 8, 9, 10]. These CT volumes were collected from different hospitals in the United States between September 1, 2020 and November 30, 2021 [6]. Upon successful enrolment into the ICASSP 2023 challenge, participants were directed to four hyperlinks to OneDrive where zipped archives (.rar or .zip) could be downloaded. Upon decompression, each folder represents a patient scan, with each containing up to 300 individual CT slices in JPEG format. To this end, 430 and 101 CT scans were provided for model training and validation, respectively.
## 3 Methods
On a high level, we propose the deployment of pretrained convolutional neural networks (CNNs) for the classification task. For ease of optimization, our approach explores a previous framework that leveraged two-dimensional (2D) Residual Networks [4] for the identification of a lung disease. The decision to employ 2D networks was motivated by two factors: time constraints and the observation that GGO have been observed in lower lung lobes in COVID-19 patients [6].
### Preprocessing
As the information on field of view and voxel spacing is no longer accessible in the archival format provided by the challenge (i.e. JPEG), the depth of each CT volume was approximated by the file count \(n\) of each scan folder, i.e.:
```
import subprocess

# Approximate the volume depth by counting the JPEG slice files in the folder
output = subprocess.run(['ls', ct_folder], stdout=subprocess.PIPE)
n = len(output.stdout.decode('utf-8').split())
```
The axial section centered at \(z\) was computed deterministically as \(z=\lfloor n\cdot f\rceil\), i.e. \(n\cdot f\) rounded to the nearest integer. The value of \(f=0.25\) was defined based on visual inspection of a subset of the images randomly drawn from the training set. Then, three contiguous slices centered at \(z\) were concatenated to form three-channel inputs of size \(3\times v\times v\), where \(v\) depends on the CNN model [4].
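A minimal sketch of this slice selection, assuming `slice_paths` is the sorted list of JPEG files of one scan and that each scan has at least three slices:

```
import numpy as np
from PIL import Image

def build_input(slice_paths, f=0.25):
    # Deterministic slice selection: z = round(n * f), then stack the
    # three contiguous slices centered at z into a 3-channel input
    n = len(slice_paths)
    z = min(max(int(round(n * f)), 1), n - 2)  # keep z-1 and z+1 in range
    return np.stack([np.asarray(Image.open(slice_paths[i]).convert('L'))
                     for i in (z - 1, z, z + 1)], axis=0)  # (3, H, W)
```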
Prior to resizing, lung masks were also computed to mask out regions outside the rib cage, as shown in Figures 1-3. The implementation code for lung mask generation was adopted from an online resource [22], with the threshold parameter adjusted so that it operates on the compressed data as stored in the JPEG images from COV19-CT-DB, i.e.:

```
from skimage.segmentation import clear_border

# Threshold updated to operate on JPEG-compressed intensity values
init_roi_mask = jpeg_im_slice < 100
roi_mask_v0 = clear_border(init_roi_mask)
...
```

Figures 1-3 provide examples of the image inputs.
### Model-training
Each model architecture was fine-tuned over a maximum of 500 epochs. We used the categorical cross-entropy objective. For all CNN architectures, we applied early-stopping whenever the validation loss reached a plateau. Two optimization algorithms explored were Adaptive Moment Estimation (ADAM) and Stochastic Gradient Descent (SGD). For SGD, the standard setting of using momentum value of 0.9 was used. The following settings were explored initially: batch size BS=\(\{16,128\}\), optimization's learning rate LR=\(\{0.001,0.01\}\).
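A minimal sketch of this training setup is given below; the plateau `patience` value and the data-loader interface are assumptions, while the learning rate, loss, maximum epoch count, and best-accuracy checkpointing follow the description above.

```
import copy
import torch

def fine_tune(model, train_dl, val_dl, lr=0.001, max_epochs=500, patience=20):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = torch.nn.CrossEntropyLoss()
    best_loss, best_acc, best_state, stale = float('inf'), 0.0, None, 0
    for _ in range(max_epochs):
        model.train()
        for x, y in train_dl:
            opt.zero_grad()
            ce(model(x), y).backward()
            opt.step()
        model.eval()
        loss, hits, total = 0.0, 0, 0
        with torch.no_grad():
            for x, y in val_dl:
                out = model(x)
                loss += ce(out, y).item()
                hits += (out.argmax(1) == y).sum().item()
                total += y.numel()
        if hits / total > best_acc:  # keep best-validation-accuracy state
            best_acc = hits / total
            best_state = copy.deepcopy(model.state_dict())
        stale = 0 if loss < best_loss else stale + 1  # stop on plateau
        best_loss = min(best_loss, loss)
        if stale >= patience:
            break
    return best_state
```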
### Extent of fine-tuning
We initialized each CNN with pretrained weights and subsequently explored two levels of network fine-tuning: allowing the network weights of all layers or only the last layer to be changed/optimized.
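A sketch of the last-layer-only variant in PyTorch; the name of the classifier head is architecture-dependent in torchvision ('fc' for ResNet, 'classifier' for DenseNet/VGG/AlexNet, 'heads' for Vision Transformers).

```
def freeze_all_but_last(model, head_name='fc'):
    # Last-layer-only fine-tuning: freeze everything, then unfreeze the head
    for p in model.parameters():
        p.requires_grad = False
    for p in getattr(model, head_name).parameters():
        p.requires_grad = True
```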
### Evaluation metrics and model selection
For each class, we calculated the area under the receiver operating characteristic curve (AUROC) and the F1-macro score, i.e.
\[\textbf{F1-mac}=\frac{1}{K}\sum_{i=1}^{K}\frac{2\times PR_{i}\times RC_{i}}{PR_{i}+RC_{i}} \tag{1}\]
where \(K\) is the number of classes (\(K=4\) severity classes of GGO) and \(PR_{i}\) and \(RC_{i}\) respectively denote the precision and recall for class \(i\), i.e. \(PR_{i}=\frac{TP_{i}}{TP_{i}+FP_{i}}\) and \(RC_{i}=\frac{TP_{i}}{TP_{i}+FN_{i}}\).
For each trained CNN, the state with the best validation accuracy achieved during training was selected for the evaluation of each test sample.
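Assuming the predictions are collected into arrays, both metrics can be computed with scikit-learn (an assumption; any equivalent implementation works):

```
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

def evaluate(y_true, y_prob):
    # y_true: true severity labels, y_prob: (n_samples, 4) class probabilities
    y_pred = np.argmax(y_prob, axis=1)
    f1_mac = f1_score(y_true, y_pred, average='macro')        # Eq. (1)
    auroc = roc_auc_score(y_true, y_prob, multi_class='ovr')  # one-vs-rest
    return f1_mac, auroc
```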
### Evaluation protocol
We set a random subset of the training set as the internal validation set. The entire validation set of \(n\)=101 was left unseen by each CNN. The test set (\(n\)=230) was provided by the organizers a few days before the challenge deadline. The labels of the test set are blinded to participants.
## 4 Results
We first examined the results of fine-tuning the models only on the last layer. Table 1 reports preliminary comparisons of the accuracies (F1-macro score and area under the receiver operating characteristic curve) achieved by individual models as evaluated on the unseen validation set. That is, none of the images from the validation set were observed by the models during training. As Table 1 shows, only DenseNet and Vision Transformer (VTB32) achieved an F1-macro score greater than 51 on the unseen validation set (\(n\)=101).
| Model | Settings | AUROC Val | AUROC Unseen* | F1-macro Val | F1-macro Unseen* | Pred. class distr. |
|---|---|---|---|---|---|---|
| AlexNet | BS16 SGD LR0.001 | 66.8 | 59.0 | 49.6 | 39.5 | 89, 31, 92, 19 |
| AlexNet | BS32 ADAM LR0.001 | 66.6 | 59.8 | 51.1 | 34.0 | 61, 53, 106, 11 |
| AlexNet | BS512 SGD LR0.001 | 67.1 | 59.0 | 39.9 | 30.9 | 93, 18, 120, 0 |
| AlexNet | BS64 ADAM LR0.001 | 66.1 | 62.0 | 49.4 | 45.1 | 78, 41, 100, 12 |
| DenseNet201 | BS16 SGD LR0.001 | 62.1 | 59.0 | 42.9 | 38.8 | 80, 32, 114, 5 |
| DenseNet201 | BS32 ADAM LR0.001 | 63.4 | 63.3 | 46.3 | 46.8 | 57, 73, 99, 2 |
| DenseNet201 | BS512 SGD LR0.001 | 63.1 | 62.5 | 45.6 | 45.1 | 50, 76, 103, 2 |
| DenseNet201 | BS64 ADAM LR0.001 | 61.4 | 62.2 | 41.9 | 36.4 | 57, 63, 109, 2 |
| DenseNet | BS16 ADAM LR0.001 | 63.8 | 60.1 | 34.8 | 35.9 | 86, 15, 130, 0 |
| DenseNet | BS16 SGD LR0.001 | 63.8 | 60.1 | 34.8 | 35.9 | 86, 15, 130, 0 |
| DenseNet | BS32 ADAM LR0.001 | 65.1 | 67.2 | 38.1 | 40.9 | 71, 26, 134, 0 |
| DenseNet | BS512 SGD LR0.001 | 67.5 | 67.0 | 41.5 | 51.4 | 68, 46, 115, 2 |
| DenseNet | BS64 ADAM LR0.001 | 65.8 | 63.4 | 39.2 | 37.6 | 58, 17, 156, 0 |
| InceptionNet | BS16 SGD LR0.001 | 59.5 | 58.2 | 38.1 | 40.0 | 89, 17, 123, 2 |
| InceptionNet | BS32 ADAM LR0.001 | 63.0 | 52.5 | 44.0 | 27.8 | 80, 41, 108, 2 |
| InceptionNet | BS512 SGD LR0.001 | 63.7 | 64.3 | 38.6 | 39.5 | 39, 58, 134, 0 |
| InceptionNet | BS64 ADAM LR0.001 | 60.8 | 55.1 | 40.7 | 36.3 | 61, 77, 91, 2 |
| ResNet152 | BS16 SGD LR0.001 | 63.8 | 54.6 | 36.5 | 28.5 | 76, 51, 104, 0 |
| ResNet152 | BS32 ADAM LR0.001 | 67.7 | 57.5 | 40.7 | 29.5 | 74, 35, 122, 0 |
| ResNet152 | BS512 SGD LR0.001 | 66.6 | 59.5 | 39.4 | 30.8 | 107, 50, 74, 0 |
| ResNet152 | BS64 ADAM LR0.001 | 65.1 | 56.3 | 38.4 | 28.8 | 81, 26, 124, 0 |
| SqueezeNet | BS16 SGD LR0.001 | 68.8 | 58.8 | 42.6 | 32.4 | 74, 34, 123, 0 |
| SqueezeNet | BS32 ADAM LR0.001 | 67.7 | 63.4 | 53.2 | 44.6 | 61, 68, 95, 7 |
| SqueezeNet | BS512 SGD LR0.001 | 65.4 | 58.9 | 50.3 | 39.0 | 60, 54, 112, 5 |
| SqueezeNet | BS64 ADAM LR0.001 | 66.5 | 63.8 | 50.5 | 45.2 | 60, 70, 94, 7 |
| VGG | BS16 SGD LR0.001 | 62.5 | 58.5 | 43.9 | 35.4 | 75, 65, 81, 10 |
| VGG | BS32 ADAM LR0.001 | 62.2 | 63.7 | 45.1 | 44.2 | 89, 65, 72, 5 |
| VGG | BS512 SGD LR0.001 | 63.1 | 60.5 | 44.6 | 33.5 | 66, 78, 82, 5 |
| VGG | BS64 ADAM LR0.001 | 63.6 | 58.2 | 46.8 | 33.0 | 92, 57, 78, 4 |
| VTB32 | BS16 SGD LR0.001 | 68.6 | 68.0 | 54.7 | 52.0 | 77, 45, 95, 14 |
| VTB32 | BS32 ADAM LR0.001 | 67.1 | 65.7 | 52.9 | 48.9 | 78, 48, 96, 9 |
| VTB32 | BS512 SGD LR0.001 | 63.6 | 61.8 | 45.2 | 40.4 | 78, 38, 115, 0 |
| VTB32 | BS64 ADAM LR0.001 | 67.0 | 66.6 | 52.6 | 51.3 | 74, 48, 96, 13 |
| WideResNet101 | BS16 SGD LR0.001 | 67.0 | 60.2 | 40.4 | 33.4 | 84, 27, 120, 0 |
| WideResNet101 | BS32 ADAM LR0.001 | 65.0 | 58.3 | 37.4 | 29.9 | 90, 16, 125, 0 |
| WideResNet101 | BS512 SGD LR0.001 | 65.3 | 57.2 | 48.4 | 31.5 | 73, 55, 101, 2 |
| WideResNet101 | BS64 ADAM LR0.001 | 62.1 | 58.1 | 32.6 | 29.7 | 89, 6, 136, 0 |

Table 1: Performance when model weights were fine-tuned **only on the last layer**, with AUROC and F1-macro scores expressed in percentages. *The "Unseen" columns report performance on the entire validation set, which was not observed during training. "Pred. class distr." denotes the predicted class distribution on the test set.
We next examined the results of fine-tuning the models on all layers. Similar to the previous results, only DenseNet, ResNet152, and Vision Transformer (VTB32) achieved an F1-macro score greater than 51 on the same unseen validation set, as reported in Table 2. The class-wise F1-macro scores are further reported in Table 3.
In summary, when only the training set was used (\(n\)=430) and the entire validation set was left unseen (i.e. only used for evaluation), fine-tuning all network weights seemed to add only a minor improvement to the accuracy of DenseNet and Vision Transformers, while fine-tuning ResNet152 substantially improved accuracy (F1-macro increased from less than 31.0 to greater than 56.0).
Based on these empirical results, we retrained ResNet152 using the same settings as listed in Table 3 but using both the training and validation images for model training.
**NB**. This section will be updated when the third-party evaluations of the two prediction files submitted to the organizers are published.
| Model | Settings | AUROC Val | AUROC Unseen | F1-macro Val | F1-macro Unseen | Pred. class distr. |
|---|---|---|---|---|---|---|
| AlexNet | BS16 SGD LR0.001 | 50.0 | 50.0 | 13.8 | 15.4 | 0, 0, 231, 0 |
| AlexNet | BS32 ADAM LR0.001 | 50.0 | 50.0 | 13.8 | 15.4 | 0, 0, 231, 0 |
| AlexNet | BS512 SGD LR0.001 | 42.6 | 46.7 | 23.8 | 21.2 | 0, 25, 206, 0 |
| AlexNet | BS64 ADAM LR0.001 | 50.0 | 50.0 | 13.8 | 15.4 | 0, 0, 231, 0 |
| DenseNet201 | BS16 SGD LR0.001 | 68.3 | 63.4 | 52.9 | 43.1 | 101, 21, 95, 14 |
| DenseNet201 | BS32 ADAM LR0.001 | 72.1 | 63.1 | 59.4 | 43.7 | 71, 75, 74, 11 |
| DenseNet201 | BS64 ADAM LR0.001 | 72.2 | 66.4 | 59.3 | 49.4 | 59, 81, 85, 6 |
| DenseNet | BS16 ADAM LR0.001 | 68.2 | 69.7 | 54.3 | 51.6 | 81, 64, 79, 7 |
| DenseNet | BS16 SGD LR0.001 | 67.9 | 70.4 | 53.6 | 54.1 | 58, 43, 125, 5 |
| DenseNet | BS32 ADAM LR0.001 | 74.7 | 65.7 | 63.5 | 47.6 | 84, 81, 58, 8 |
| DenseNet | BS64 ADAM LR0.001 | 71.1 | 64.3 | 54.8 | 41.8 | 105, 15, 87, 24 |
| InceptionNet | BS16 SGD LR0.001 | 68.3 | 61.5 | 51.8 | 38.4 | 107, 56, 63, 5 |
| InceptionNet | BS32 ADAM LR0.001 | 73.1 | 66.4 | 60.4 | 48.6 | 86, 52, 79, 14 |
| InceptionNet | BS64 ADAM LR0.001 | 66.8 | 58.6 | 45.6 | 32.2 | 115, 4, 108, 4 |
| ResNet152 | BS16 ADAM LR0.001 | 72.5 | 71.9 | 56.7 | 57.3 | 94, 65, 47, 25 |
| ResNet152 | BS16 SGD LR0.001 | 67.2 | 68.4 | 47.5 | 48.7 | 112, 70, 40, 9 |
| ResNet152 | BS32 ADAM LR0.001 | 68.4 | 65.1 | 52.5 | 47.5 | 87, 51, 88, 5 |
| ResNet152 | BS64 ADAM LR0.001 | 67.6 | 71.7 | 49.4 | 56.2 | 85, 77, 65, 4 |
| SqueezeNet | BS16 SGD LR0.001 | 59.3 | 62.8 | 32.2 | 37.0 | 48, 13, 170, 0 |
| SqueezeNet | BS32 ADAM LR0.001 | 63.7 | 61.1 | 38.0 | 34.4 | 61, 36, 134, 0 |
| SqueezeNet | BS512 SGD LR0.001 | 67.5 | 65.3 | 41.9 | 39.2 | 61, 33, 137, 0 |
| SqueezeNet | BS64 ADAM LR0.001 | 68.8 | 67.5 | 41.4 | 40.1 | 111, 34, 86, 0 |
| VGG | BS16 SGD LR0.001 | 69.8 | 59.8 | 53.8 | 36.8 | 88, 46, 71, 26 |
| VGG | BS32 ADAM LR0.001 | 63.7 | 58.5 | 44.9 | 34.4 | 76, 50, 86, 19 |
| VGG | BS64 ADAM LR0.001 | 70.1 | 68.7 | 54.9 | 49.0 | 116, 64, 35, 16 |
| VTB32 | BS16 SGD LR0.001 | 62.1 | 64.1 | 42.4 | 46.0 | 84, 55, 89, 3 |
| VTB32 | BS32 ADAM LR0.001 | 60.8 | 61.8 | 39.2 | 40.1 | 103, 5, 116, 7 |
| VTB32 | BS512 SGD LR0.001 | 67.7 | 61.9 | 53.4 | 44.6 | 49, 64, 115, 3 |
| VTB32 | BS64 ADAM LR0.001 | 69.1 | 67.4 | 54.2 | 52.1 | 68, 43, 109, 11 |
| WideResNet101 | BS16 SGD LR0.001 | 69.6 | 62.2 | 54.3 | 42.0 | 120, 34, 69, 8 |
| WideResNet101 | BS32 ADAM LR0.001 | 68.1 | 65.9 | 52.2 | 49.5 | 33, 72, 118, 8 |
| WideResNet101 | BS64 ADAM LR0.001 | 68.4 | 63.1 | 50.0 | 44.6 | 80, 90, 57, 4 |

Table 2: Performance when **all** model weights were fine-tuned.
| Model | Settings | Average F1-macro | Mild | Moderate | Severe | Critical |
|---|---|---|---|---|---|---|
| DenseNet | BS16 SGD LR0.001 | 54.1 | 100 | 74.4 | 67.6 | 62.8 |
| ResNet152 | BS16 ADAM LR0.001 | 56.2 | 100 | 79.4 | 68.8 | 71.7 |
| VT (32-bit) | BS64 ADAM LR0.001 | 52.1 | 100 | 74.7 | 70.6 | 65.0 |

Table 3: Summary of performance metrics reported when weights of **all** layers were fine-tuned. The Mild, Moderate, Severe, and Critical columns report class-wise F1-macro scores.
## 5 Conclusion
In this brief note, we shared empirical data exploring the feasibility of severity classification without deploying three-dimensional neural networks. The source code developed during this experimental prototyping period is posted at [https://github.com/lisatwyw/cov19](https://github.com/lisatwyw/cov19). We hope that other researchers will find this quick prototype, consisting of a few Python files based on PyTorch 1.13.1 and TorchVision 0.14.1, approachable.
Figure 1: Example training input used to fine-tune CNNs.
Figure 2: Example test input.
## Acknowledgements
The author sincerely thanks Professor Dimitrios Kollias and the organizing committee for provisioning the COV19-CT-DB dataset and hosting this exciting challenge [6, 5, 7, 8, 9, 10]. The author also expresses deep gratitude to Tong Tsui Shan and Kim Chuen Tang, as well as the staff of Compute Canada/Alliance Canada and the Data Science Institute, for their support.
|
2302.08185 | WHC: Weighted Hybrid Criterion for Filter Pruning on Convolutional
Neural Networks | Filter pruning has attracted increasing attention in recent years for its
capacity in compressing and accelerating convolutional neural networks. Various
data-independent criteria, including norm-based and relationship-based ones,
were proposed to prune the most unimportant filters. However, these
state-of-the-art criteria fail to fully consider the dissimilarity of filters,
and thus might lead to performance degradation. In this paper, we first analyze
the limitation of relationship-based criteria with examples, and then introduce
a new data-independent criterion, Weighted Hybrid Criterion (WHC), to tackle
the problems of both norm-based and relationship-based criteria. By taking the
magnitude of each filter and the linear dependence between filters into
consideration, WHC can robustly recognize the most redundant filters, which can
be safely pruned without introducing severe performance degradation to
networks. Extensive pruning experiments in a simple one-shot manner demonstrate
the effectiveness of the proposed WHC. In particular, WHC can prune ResNet-50
on ImageNet with more than 42% of floating point operations reduced without any
performance loss in top-5 accuracy. | Shaowu Chen, Weize Sun, Lei Huang | 2023-02-16T10:10:40Z | http://arxiv.org/abs/2302.08185v1 | # WHC: Weighted Hybrid Criterion for Filter Pruning on Convolutional Neural Networks
###### Abstract
Filter pruning has attracted increasing attention in recent years for its capacity in compressing and accelerating convolutional neural networks. Various data-independent criteria, including norm-based and relationship-based ones, were proposed to prune the most unimportant filters. However, these state-of-the-art criteria fail to fully consider the dissimilarity of filters, and thus might lead to performance degradation. In this paper, we first analyze the limitation of relationship-based criteria with examples, and then introduce a new data-independent criterion, Weighted Hybrid Criterion (WHC), to tackle the problems of both norm-based and relationship-based criteria. By taking the magnitude of each filter and the linear dependence between filters into consideration, WHC can robustly recognize the most redundant filters, which can be safely pruned without introducing severe performance degradation to networks. Extensive pruning experiments in a simple one-shot manner demonstrate the effectiveness of the proposed WHC. In particular, WHC can prune ResNet-50 on ImageNet with more than 42\(\%\) of floating point operations reduced without any performance loss in top-5 accuracy.
Shaowu Chen, Weize Sun, Lei Huang
Department of Electronic and Information Engineering, Shenzhen University
Filter pruning, CNN compression, acceleration.
## 1 Introduction
Deep convolutional neural networks (CNNs) have achieved great success in various research fields in recent years, and broader or deeper architectures have been derived to obtain better performance [1]. However, state-of-the-art CNNs usually come with an enormously large number of parameters, consuming prohibitive memory and computational resources, which makes them difficult to deploy on resource-limited platforms such as mobile devices [2].
To tackle this problem, pruning methods, including weight pruning [3, 4] and filter pruning approaches [5, 6, 7], have been developed to compress and accelerate CNNs. Weight pruning evaluates the importance of individual elements of weight tensors using criteria such as their absolute value [3], and sets those with the lowest scores to zero to achieve element-wise sparsity. Nevertheless, as the resulting sparsity is unstructured, weight pruning relies on customized software and hardware to accelerate CNNs. By contrast, filter pruning methods remove whole structured filters, yielding slimmer CNNs that run directly on general-purpose hardware through common BLAS libraries with less memory and inference time, and have thus attracted more attention in recent years.
One of the core tasks in filter pruning is to avoid severe performance degradation of CNNs. To this end, many pruning criteria, including data-driven [8, 9, 10] and data-independent ones [11, 12], have been proposed to find the most redundant filters, which can be safely deleted. In this paper, we focus on data-independent criteria, which can be further divided into two categories: norm-based [13] and relationship-based [12, 14, 15, 16]. The former assume that filter norms such as the \(\ell_{1}\) and \(\ell_{2}\) norms [11, 13] indicate importance, and thus prune filters with smaller norms. However, He _et al._[14] argue that norm-based criteria struggle to identify unimportant filters when the variance of the norms is insignificant. To solve this problem, they propose a relationship-based criterion, FPGM, which prunes filters with the shortest Euclidean distances to the others. In line with FPGM, the cosine criterion [15] and CFP [16] were also developed, in which filters with the largest angles to and the weakest correlation with the others, respectively, are considered the most valuable.
Generally speaking, relationship-based methods can overcome the problems introduced by norm-based criteria but still have imperfections of their own. For example, consider the colored filters shown in Figure 1(a), which have similar small norms but different angles. FPGM and the cosine distance criterion will delete the 1st filter while keeping the mutually inverted 2nd and 3rd, since the latter have the largest Euclidean or angle-wise distances to the others. However, the 2nd and 3rd filters will extract strongly (although negatively) correlated feature maps that contain highly redundant information, while the 1st filter, orthogonal to them, may extract entirely different features. Deleting the 1st filter would weaken the representative capacity of the CNN; therefore, it is the 2nd or 3rd filter that should be deleted instead of the 1st one. Furthermore, since the 2nd filter has a smaller norm than the 3rd, it is more reasonable to prune the 2nd filter. However, the cosine distance [15] and CFP [16] criteria may assign the 2nd and 3rd filters the same score and remove one of them at random. A similar situation arises when filters have similar angles, such as
Figure 1: Examples in which relationship-based criteria lose efficacy.
the example shown in Figure 1(b). These examples demonstrate that relationship-based criteria also need improvement.
In this paper, we propose a Weighted Hybrid Criterion (WHC) that considers both the magnitude of filters and the relationships between them to address the problems mentioned above and robustly alleviate performance degradation. Specifically, we value filters that have both larger norms and higher dissimilarity from the others (manifesting as orthogonality, rather than the antiphase favored by FPGM [14] and the cosine distance criterion [15]) while deleting the rest. Moreover, we weight filters' dissimilarity terms differently rather than equally. That is, when evaluating a filter, its dissimilarity terms with larger-norm filters are assigned greater weights, while those with smaller-norm filters receive lower weights. The reason is that dissimilarity, evaluated by the degree of orthogonality, is more trustworthy when the norm of the counterpart is larger. In this manner, WHC can rationally score filters and prune those with the lowest scores, _i.e._, the most redundant ones, and thus alleviate the degradation in CNN performance caused by pruning.
## 2 Methodology
### Notation and Symbols
**Weight tensors of a CNN.** Following the conventions of PyTorch, we assume that a pre-trained \(L\)-layers CNN has weight tensors \(\{\mathcal{W}_{l}\in\mathbb{R}^{N_{l+1}\times N_{l}\times K\times K}|l=1,2, \cdots,L\}\), where \(\mathcal{W}_{l}\), \(K\times K\), \(N_{l}\) and \(N_{l+1}\) stand for the weight tensor of the \(l\)-th convolutional layer, the kernel sizes, the number of input and output channels of the \(l\)-th convolutional layer, respectively.
**Filters.** We use \(\mathcal{F}_{li}\) to represent the \(i\)-th filter of the \(l\)-th layer, where \(\mathcal{F}_{li}=\mathcal{W}_{l}[i,:,:,:]\in\mathbb{R}^{N_{l}\times K\times K}\), _i.e._, the \(i\)-th slide of \(\mathcal{W}_{l}\) along the first dimension.
**Pruning rates.**\(r_{l}=\frac{\#\text{pruned filters}}{N_{l+1}}\in[0,1]\) denotes the proportion of pruned filters in the \(l\)-th layer.
### Weighted Hybrid Criterion (WHC)
Norm-based criteria degrade when the variance of the filter norms is small [14], and relationship-based ones may fail to distinguish unimportant filters in several cases, as illustrated in Figures 1(a) and 1(b). To address these problems, we propose a data-independent Weighted Hybrid Criterion (**WHC**) to robustly prune the most redundant filters, which scores the importance of the \(i\)-th filter \(\mathcal{F}_{li}\) in the \(l\)-th layer by taking into account not only the norm of a filter but also the linear dissimilarity as follows:
\[\text{score}_{li}=\|\mathcal{F}_{li}\|_{2}\sum_{j=1,j\neq i}^{N_{l+1}}\| \mathcal{F}_{lj}\|_{2}\left(1-|\cos\theta_{i,j}|\right), \tag{1}\]
where
\[\cos\theta_{i,j}=\frac{<\mathcal{F}_{li},\mathcal{F}_{lj}>}{\|\mathcal{F}_{ li}\|_{2}\cdot\|\mathcal{F}_{lj}\|_{2}}, \tag{2}\]
and \(\|\mathcal{F}_{lj}\|_{2}\) represents the \(\ell_{2}\) norm of the vectorized \(\mathcal{F}_{lj}\). Note that for a pre-trained model, we can assume that \(\|\mathcal{F}_{lj}\|_{2}>0\).
When applying WHC in Eq. (1) for pruning, filters with lower scores are regarded as more redundant and thus deleted, while those with higher scores are retained. To explain how WHC works in theory, we first discuss the unweighted variant of (1), the Hybrid Criterion (**HC**):
\[\text{score}^{\prime}_{li}=\|\mathcal{F}_{li}\|_{2}\sum_{j=1}^{N_{l+1}}\big{(}1-|\cos\theta_{i,j}|\big{)}. \tag{3}\]
Here \(1-|\cos\theta_{i,j}|\in[0,1]\), the dissimilarity measurement (**DM**) between \(\mathcal{F}_{li}\) and \(\mathcal{F}_{lj}\), acts as a scaling factor for \(\|\mathcal{F}_{li}\|_{2}\), which in effect widens the relative gaps between filter norms and thus tackles the invalidation of norm-based criteria [14] caused by a small variance of norms. Moreover, unlike Euclidean or angle-wise distance-based criteria [14, 15] that prefer filters having \(180^{\circ}\) angles with others, WHC (1) and HC (3) value filters that are more orthogonal to the others, since these have shorter projected lengths onto the others and can extract less redundant features.
Note that in HC (3), the DM terms \(1-|\cos\theta_{i,j}|\) for \(j=1,\cdots,N_{l+1}\) are considered equally valuable. However, WHC takes a different view: when evaluating a filter \(\mathcal{F}_{li}\), the weights for the DM terms should be proportional to \(\|\mathcal{F}_{lj}\|_{2}\) for \(i\neq j\). The reason is that it is less robust to rely on filters with smaller norms when scoring filters. To see this, consider two orthogonal filters, \(\mathcal{F}_{l1}=(100,0)\) and \(\mathcal{F}_{l2}=(0,0.1)\). Since the norm of \(\mathcal{F}_{l2}\) is small, a small additive interference \((-0.1,-0.1)\) can easily change \(\mathcal{F}_{l2}=(0,0.1)\) to \(\mathcal{F}_{l2}^{\prime}=(-0.1,0)\), which radically changes the DM term of \(\mathcal{F}_{l1}\) and \(\mathcal{F}_{l2}\) from the ceiling 1 to the floor 0. To improve robustness, the DM terms should be weighted.
AutoML techniques such as meta-learning [17] could be used to learn the weights, but this would be time-consuming. Alternatively, we directly take the norms of filters as weights in WHC (1), so that the blind spots of norm-based and relationship-based criteria can be eliminated simply yet effectively. As illustrated in Figure 2, when encountering the case shown in Figure 1(a), where the norms of filters differ insignificantly, WHC can use dissimilarity information to score the filters and recognize the most redundant one. Furthermore, WHC is also robust to the case shown in Figure 1(b), in which the filters have similar linear relationships but varied norms, while relationship-based criteria such as the cosine criterion [15] lose effectiveness since they score the filters equally. There is one scenario in which WHC may lose efficacy, _i.e._, when filters have identical norms and DM terms at the same time. However, this indicates that there is no redundancy, and it is therefore unnecessary to prune the corresponding model.
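To make the criterion concrete, below is a short PyTorch sketch of the WHC score in Eq. (1), written for this note rather than taken from the authors' released implementation; the layer shape (64 filters of size 3x3x3) and the 40% pruning rate are arbitrary assumptions.

```
import torch

def whc_scores(W):
    # W: (N_out, N_in, K, K) weight tensor of one convolutional layer
    F = W.flatten(start_dim=1)                 # one row per filter
    norms = F.norm(dim=1)                      # ||F_li||_2, assumed > 0
    cos = (F @ F.t()) / torch.outer(norms, norms)
    dm = 1.0 - cos.abs()                       # DM terms; 0 on the diagonal
    # weighted sum over j != i (the diagonal contributes 0 since dm[i, i] = 0)
    return norms * (dm * norms.unsqueeze(0)).sum(dim=1)

scores = whc_scores(torch.randn(64, 3, 3, 3))
to_prune = scores.argsort()[: int(0.4 * 64)]   # 40% lowest-scoring filters
```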
### Algorithm Description
As described in Algorithm 1, we perform filter pruning using WHC following the common "Pretrain-Prune-Finetune" pipeline in a simple
Figure 2: _Left top_: Magnitude information of filters. _Left bottom_: Measuring dissimilarity of filters. _Right_: WHC scores filters and prune the most redundant ones.
single-shot manner, with all layers pruned under the same pruning rate, _i.e._, \(r_{1}=r_{2}=\cdots=r_{L}\). Although an iterative mechanism [18], Knowledge Distillation [19, 20, 21], sensitivity analysis that decides layer-wise pruning rates [11], and some fine-tuning techniques [22] can improve the performance of pruned CNNs, none of them are included in this paper for ease of presentation and validation.
```
Input: Pre-trained model \(\{\mathcal{W}_{l}\}_{l=1}^{L}\), pruning rates \(r_{l}\), training data, fine-tuning epochs \(epoch_{f}\)
1: for \(l=L\to 1\) do
2:   Score \(\{\mathcal{F}_{li}\}_{i=1}^{N_{l+1}}\) using WHC (1);
3:   Prune \(r_{l}\times N_{l+1}\) filters with the lowest scores to get \(\mathcal{W}_{l}^{\prime}\);
4:   Replace \(\mathcal{W}_{l}\) with \(\mathcal{W}_{l}^{\prime}\);
5: end for
6: Fine-tune \(\{\mathcal{W}_{l}^{\prime}\}_{l=1}^{L}\) for \(epoch_{f}\) epochs.
Output: Compact model \(\{\mathcal{W}_{l}^{\prime}\}_{l=1}^{L}\)
```
**Algorithm 1** WHC for single-shot filter pruning
By pruning \(r_{l}\cdot N_{l+1}\) filters in the \(l\)-th layer, WHC also removes the same number of input channels in the \((l+1)\)-th layer. Suppose the input feature maps of the \(l\)-th layer have dimensions \(H_{l}\times W_{l}\times N_{l}\), and the output feature maps of the \(l\)-th and \((l+1)\)-th layers have dimensions \(H_{l+1}\times W_{l+1}\times N_{l+1}\) and \(H_{l+2}\times W_{l+2}\times N_{l+2}\), respectively. Pruning the \(l\)-th layer at rate \(r_{l}\) then removes \(H_{l+1}W_{l+1}(N_{l+1}r_{l})K^{2}N_{l}+H_{l+2}W_{l+2}N_{l+2}K^{2}(N_{l+1}r_{l})\) floating point operations (**FLOPs**) in total, which greatly accelerates forward inference.
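As a quick check of this expression, the helper below evaluates the saved FLOPs for one layer; all dimensions are hypothetical placeholders.

```
def flops_saved(H1, W1, H2, W2, N_l, N_l1, N_l2, K, r_l):
    pruned = int(N_l1 * r_l)               # filters removed in layer l
    own = H1 * W1 * pruned * K * K * N_l   # outputs no longer computed in layer l
    nxt = H2 * W2 * N_l2 * K * K * pruned  # input channels removed in layer l+1
    return own + nxt

# e.g., 56x56 maps, 64 -> 64 -> 128 channels, 3x3 kernels, 40% pruning
print(flops_saved(56, 56, 28, 28, 64, 64, 128, 3, 0.4))
```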
## 3 Experiment
### Experimental Settings
**Datasets and baseline CNNs.** Following [13, 14], we evaluate the proposed WHC on the compact and widely used ResNet-20/32/56/110 for CIFAR-10 [23] and ResNet-18/34/50/101 for ILSVRC-2012 (ImageNet) [24]. For a fair comparison, we use the same pre-trained models for CIFAR-10 as [14]. For ILSVRC-2012, since part of the pre-trained parameters of [14] are not available, we use official PyTorch pre-trained models [25] with slightly lower accuracy. Code and checkpoints are available at [https://github.com/ShaowuChen/WHC](https://github.com/ShaowuChen/WHC).
**Pruning and fine-tuning.** The experiments are implemented with PyTorch 1.3.1 [25]. We keep all our implementation details, such as data augmentation strategies, pruning settings, and fine-tuning epochs, the same as in [13, 14], except that we use the straightforward single-shot mechanism. In the pruning stage, all convolutional layers in a network are pruned at the same pruning rate, and we report the proportion of FLOPs dropped for ease of comparison.
**Compared methods.** We compare WHC with several criteria, including the data-independent norm-based PFEC [11], SPF [13], ASPF [26], relationship-based FPGM [14], and several data-dependent methods HRank [27], GAL [28], LFPC [29], CP [30], NISP [31], ThiNet [18] and ABC [32].
### Evaluation on CIFAR-10
For CIFAR-10, we repeat each experiment three times and report the average accuracy after fine-tuning. As shown in Table 1, WHC outperforms several state-of-the-art counterparts. WHC can prune 52.3% of the FLOPs in ResNet-110 with even a 0.39% improvement in accuracy, while the norm-based SFP under the same settings suffers a 0.78% degradation. This improvement shows that under moderate pruning rates, WHC can alleviate the overfitting of models without hurting their capacity.
Compared with the iterative ASFP [26], the data-driven HRank [27], and the AutoML-based ABC [32] and LFPC [29], WHC in a single-shot manner also achieves competitive performance. For example, although more FLOPs are reduced, WHC still achieves 0.42% and 0.75% higher accuracy than LFPC on ResNet-56 and ResNet-110, respectively, which demonstrates that WHC can recognize the most redundant filters effectively. Furthermore, under similar pruning rates, the pruned models obtained by WHC suffer less performance degradation as the depth of the CNN increases. The reason is that deeper CNNs contain more redundancy, which WHC can remove robustly without severely hurting the CNNs' capacity.
### Evaluation on ILSVRC-2012
The results are shown in Table 2. Not surprisingly, compared with several state-of-the-art methods, WHC not only achieves the highest top-1 and top-5 accuracy, but also suffers the smallest performance degradation. On ResNet-50, WHC reduces more than 40% of the FLOPs while barely incurring any loss in top-1 and top-5 accuracy, whereas the norm-based SFP suffers a 14% degradation in top-1 accuracy and the other methods more than 0.5%. Compared with norm-based and relationship-based criteria, the superior performance of WHC can be attributed to its use of both the norm and the linear similarity information of filters, and the weights assigned to the different DM terms provide more robust results.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Depth} & \multirow{2}{*}{Method} & Baseline & Pruned & Acc. & FLOPs \\ & & acc. (\%) & acc. (\%) & \(\downarrow\) (\%) & \(\downarrow\) (\%) \\ \hline \multirow{3}{*}{20} & **WHC** & 92.20 (\(\pm\)0.18) & **91.62 (\(\pm\)0.14)** & **0.58** & 42.2 \\ & **WHC** & 92.20 (\(\pm\)0.18) & 90.72 (\(\pm\)0.16) & 1.48 & **54.0** \\ \hline \multirow{3}{*}{32} & **WHC** & 92.63 (\(\pm\)0.70) & **92.71 (\(\pm\)0.08)** & **-0.08** & 41.5 \\ & **WHC** & 92.63 (\(\pm\)0.70) & 92.44 (\(\pm\)0.12) & 0.19 & **53.2** \\ \hline \multirow{7}{*}{56} & PFEC [11] & 93.04 & 93.06 & -0.02 & 27.6 \\ & **WHC** & **93.59 (\(\pm\)0.58)** & **93.91 (\(\pm\)0.06)** & **-0.32** & **28.4** \\ \cline{2-5} & GAI [28] & 93.26 & 93.38 & 0.12 & 37.6 \\ & SFP [13] & **93.59 (\(\pm\)0.58)** & 93.78 (\(\pm\)0.22) & -0.19 & **41.1** \\ & **WHC** & **93.59 (\(\pm\)0.58)** & **93.80 (\(\pm\)0.33)** & **-0.21** & **41.1** \\ \hline \multirow{7}{*}{56} & HRank [27] & 93.26 & 93.17 & 0.09 & 50.0 \\ & SFP [13] & **93.59 (\(\pm\)0.58)** & 93.35 (\(\pm\)0.31) & 0.24 & **52.6** \\ & ASFP [26] & **93.59 (\(\pm\)0.58)** & 93.12 (\(\pm\)0.20) & 0.47 & **52.6** \\ & FFOM [14] & **93.59 (\(\pm\)0.58)** & 93.12 (\(\pm\)0.03) & 0.33 & **52.6** \\ & **WHC** & **93.59 (\(\pm\)0.58)** & **93.47 (\(\pm\)0.19)** & **0.12** & **52.6** \\ \hline \multirow{7}{*}{110} & LFPC [29] & **93.59 (\(\pm\)0.58)** & 93.24 (\(\pm\)0.17) & 0.35 & 52.9 \\ & ABC [32] & 93.26 & 93.23 & 0.03 & 54.1 \\ & **WHC** & **93.59 (\(\pm\)0.58)** & **93.66 (\(\pm\)0.19)** & **-0.07** & **54.8** \\ \hline \multirow{7}{*}{110} & GAI [28] & 93.26 & 91.58 & 1.68 & 60.2 \\ & **WHC** & **93.59 (\(\pm\)0.58)** & **93.29 (\(\pm\)0.11)** & **0.30** & **63.2** \\ \hline \multirow{7}{*}{110} & GAL [28] & 93.50 & 93.59 & -0.09 & 18.7 \\ & PFEC [11] & 93.53 & 93.30 & 0.23 & 38.6 \\ \cline{2-5} & SFP [13] & **93.68 (\(\pm\)0.32)** & 93.86 (\(\pm\)0.21) & -0.18 & **40.8** \\ \cline{2-5} & ASFP [26] & **93.68 (\(\pm\)0.32)** & 93.07 (\(\pm\)0.12) & 0.31 & **40.8** \\ \cline{2-5} & **WHC** & **93.68 (\(\pm\)0.32)** & **94.32 (\(\pm\)0.17)** & **-0.64** & **40.8** \\ \hline \multirow{7}{*}{110} & GAL [28] & 93.26 & 92.74 & 0.76 & 48.5 \\ \cline{2-5} & SFP [13] & **93.68 (\(\pm\)0.32)** &
### Ablation Study
**Decoupling experiment.** To further validate the effectiveness of WHC, we progressively decouple WHC into several criteria, as shown in Table 3. The cosine criterion [15] is also added for comparison. We repeat pruning 40% of the filters in ResNet-32 and ResNet-56 three times and report the raw accuracy (without fine-tuning) and the average drop in accuracy after fine-tuning. Compared with the cosine criterion [15], the DM criterion suffers less degradation in accuracy and is therefore more rational. Taking into account both norm and dissimilarity, HC achieves better performance than \(\ell_{2}\) and DM. Furthermore, by assigning different weights to the DM terms, WHC consistently outperforms all counterparts, especially on the more compact ResNet-32. The \(\ell_{2}\) criterion achieves performance similar to WHC's on ResNet-56, but fails to maintain it on ResNet-32, demonstrating the robustness of the proposed WHC.
**Types of norm and dissimilarity measurement.** We replace the \(\ell_{2}\) norm and \(\cos\theta_{i,j}\) in WHC (1) with the \(\ell_{1}\) norm and the correlation coefficient, respectively. The correlation coefficient can be regarded as a centralized version of \(\cos\theta_{i,j}\). We conduct experiments on ResNet-32 with \(r_{l}=40\%\). The fine-tuned accuracies of the \(\ell_{1}\) and correlation versions of WHC are \((92.50\pm 0.11)\%\) and \((92.62\pm 0.18)\%\), respectively, slightly higher than that of the naive WHC, \((92.44\pm 0.12)\%\). The results indicate that WHC can be further improved with more suitable types of norm and dissimilarity measurement.
### Visualization
We prune 40% of the filters in the first layer of ResNet-50 for ImageNet and visualize the output feature maps, as shown in Figure 3. Among the pruned filters, 3 and 34 can be replaced by 10 and 29, respectively, and filters 13, 32, 48, and 62, among others, fail to extract valuable features. We also compare WHC with the \(\ell_{2}\) norm criterion [13] and the relationship-based FPGM [14], finding that \(\ell_{2}\) and FPGM rank the filters differently from WHC but ultimately give a similar pruning list under the given pruning rate. The only divergence between WHC and the \(\ell_{2}\) criterion arises over filters 29 and 56: WHC keeps 29 and prunes 56, while the \(\ell_{2}\) criterion takes the opposite action. We consider filter 29 to be more valuable than filter 56, since the latter fails to extract significant features, while the former highlights the eyes of the input image. The difference between WHC and \(\ell_{2}\) in the pruning list for a single layer is insignificant, but accumulated over tens of layers it results in a wide gap, making WHC more robust in finding redundant filters.
## 4 Conclusion
We propose a simple but effective data-independent criterion, Weighted Hybrid Criterion (WHC), for filter pruning. Unlike previous norm-based and relationship-based criteria that use a single type of information to rank filters, WHC takes into consideration both the magnitude of filters and the dissimilarity between filter pairs, and can thus recognize the most redundant filters more effectively. Furthermore, by adaptively reweighting the dissimilarity measurements according to the magnitudes of the counterpart filters, WHC robustly alleviates the performance degradation of CNNs caused by pruning.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Depth} & \multirow{2}{*}{Method} & Baseline & Pruned & Top-1 & Baseline & Pruned & Top-5 & FLOPs \\ & & top-1 & top-1 & acc. \(\downarrow\) & top-5 & top-5 & acc. \(\downarrow\) & \(\downarrow\) (\%) \\ & & acc. (\%) & acc. (\%) & acc. (\%) & acc. (\%) & acc. (\%) & (\%) \\ \hline \multirow{4}{*}{18} & SFP [13] & 70.23 & 60.79 & 9.44 & 89.51 & 83.11 & 6.40 & **41.8** \\ & ASF [26] & 70.23 & 68.02 & 2.21 & 89.51 & 88.19 & 1.32 & **41.8** \\ & FPGM [14] & **70.28** & 68.41 & 1.87 & **89.63** & 88.48 & 1.15 & **41.8** \\ & **WHC** & 69.76 & **68.48** & **1.28** & 89.08 & **88.52** & **0.56** & **41.8** \\ \hline \multirow{4}{*}{34} & PFEC [11] & 73.23 & 72.17 & 1.06 & - & - & - & 24.2 \\ & ABC [32] & 73.28 & 70.98 & 2.30 & 91.45 & 90.05 & 1.40 & 41.0 \\ & SFP [13] & **73.92** & 72.29 & 1.63 & **91.62** & 90.90 & **0.72** & **41.1** \\ & ASF [26] & **73.92** & 72.53 & 1.39 & **91.62** & 91.04 & 0.58 & **41.1** \\ & FPGM [14] & **73.92** & 72.54 & 1.38 & **91.62** & 91.13 & 0.49 & **41.1** \\ & **WHC** & 73.31 & **72.92** & **0.40** & 91.42 & **91.14** & **0.28** & **41.1** \\ \hline \multirow{4}{*}{18} & ThiNet [18] & 72.88 & 72.04 & 0.84 & 91.14 & 90.67 & 0.47 & 36.7 \\ & SFP [13] & **76.15** & 62.14 & 14.01 & **92.87** & 84.60 & 8.27 & 41.8 \\ & ASF [26] & **76.15** & 75.53 & 0.62 & **92.87** & 92.73 & 0.14 & 41.8 \\ & FFGM [14] & **76.15** & 75.59 & 0.56 & **92.87** & 92.63 & 0.24 & **42.2** \\ & **WHC** & 76.13 & **76.06** & **0.07** & 92.86 & **92.86** & **92.86** & **0.00** & **42.2** \\ \hline \multirow{4}{*}{50} & HRRank [27] & **76.15** & 74.98 & 1.17 & **92.87** & 92.33 & 0.54 & 43.8 \\ & NISP [31] & - & 0.89 & - & - & - & - & 44.0 \\ & Gal [28] & **76.15** & 71.95 & 4.20 & **92.87** & 90.94 & 1.93 & 43.0 \\ & **WHC** & 73.01 & 73.40 & 1.90 & 92.20 & 91.40 & 0.80 & 49.6 \\ & CP [30] & - & - & 92.20 & 90.80 & 1.40 & 50.0 \\ & FPGM [14] & **76.15** & 74.83 & 1.32 & **92.87** & 92.32 & 0.55 & **53.5** \\ & **WHC** & 76.13 & **75.33** & **0.80** & 92.86 & **92.52** & **0.34** & **53.5** \\ \hline \multirow{4}{*}{101} & GAL [28] & **76.15** & 71.80 & 4.35 & 92.87 & 90.82 & 2.05 & 55.0 \\ & ABC [32] & 76.01 & 73.86 & 2.15 & **92.96** & 91.69 & 1.27 & 54.3 \\ \cline{1-1} & ABC [32] & 76.01 & 73.52 & 2.49 & **92.96** & 91.51 & 1.45 & 56.6 \\ \cline{1-1} & LFFC [29] & **76.15** & 74.46 & 1.69 & 92.87 & 92.04 & **0.53** & 60.8 \\ \cline{1-1} & WHC & 76.13 & **74.64** & **1.49** & 92.86 & **92.16** & **0.70** & **60.9** \\ \hline \multirow{4}{*}{102} & FFGM [14] & **77.37** & 77.32 & 0.05 & **93.56** & 93.56 & 0.00 & **42.2** \\ \cline{1-1} & **WHC** & **77.37** & **77.75** & **-0.38** & 93.55 & **93.84** & **-0.30** & **42.2** \\ \hline \multirow{4}{*}{103} & ABC [32] & **77.38** & 75.82 & 1.56 & **93.59** & 92.74 & 0.85 & 59.8 \\ \cline{1-1} & **WHC** & 77.37 & **76.63** & **0.74** & 93.55 & **93.30** & **0.25** & **60.8** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Pruning results on ILSVRC-2012 (ImageNet). “acc.” and “\(\downarrow\)” stand for “accuracy” and “drop”, respectively.
Figure 3: Visualization of ResNet-50-conv1 output feature maps (after ReLU, BN and MaxPooling, _i.e._, the input of the second Conv layer). The feature maps bounded by red boxes correspond to the pruned filters; 40% of the filters are pruned. The highest values are colored in the brightest green, while the lowest in the darkest blue. |
2305.03077 | Explaining dark matter halo density profiles with neural networks | We use explainable neural networks to connect the evolutionary history of
dark matter halos with their density profiles. The network captures independent
factors of variation in the density profiles within a low-dimensional
representation, which we physically interpret using mutual information. Without
any prior knowledge of the halos' evolution, the network recovers the known
relation between the early time assembly and the inner profile, and discovers
that the profile beyond the virial radius is described by a single parameter
capturing the most recent mass accretion rate. The results illustrate the
potential for machine-assisted scientific discovery in complicated
astrophysical datasets. | Luisa Lucie-Smith, Hiranya V. Peiris, Andrew Pontzen | 2023-05-04T18:00:01Z | http://arxiv.org/abs/2305.03077v2 | # Explaining dark matter halo density profiles with neural networks
###### Abstract
We use explainable neural networks to connect the evolutionary history of dark matter halos with their density profiles. The network captures independent factors of variation in the density profiles within a low-dimensional representation, which we physically interpret using mutual information. Without any prior knowledge of the halos' evolution, the network recovers the known relation between the early time assembly and the inner profile, and discovers that the profile beyond the virial radius is described by a single parameter capturing the most recent mass accretion rate. The results illustrate the potential for machine-assisted scientific discovery in complicated astrophysical datasets.
**Introduction** - Machine-assisted scientific discovery has recently become a key frontier in artificial intelligence (AI) research [1]. It is useful to build the characteristics of good physical models into such frameworks for artificial scientific discovery. Such desiderata include the compression of information within a dataset into a set of minimal ingredients which: can accurately predict new data; can be usefully contextualized within the broader scientific domain; and generalize beyond the specific setting in which the model was originally fitted, to explain other aspects of the physical system. The required compression can be achieved through 'representation learning'; recent works have shown that neural networks can learn low-dimensional representations which match the already-known parameters describing simple physical systems [2]. However, extracting new knowledge from deep learning models requires the development of tools that can explain latent representations in terms of the physics they represent, without knowing the relevant physical parameters _a priori_.
In this work, we demonstrate the use of an _explainable AI_ framework to address an open problem in cosmological structure formation. In the modern picture of structure formation, galaxies form at the center of extended, overdense 'halos' of dark matter, which originate from small fluctuations in the density of matter in the early Universe and undergo highly non-linear dynamical processes throughout their evolution [3; 4; 5; 6; 7]. The dynamical history of a halo then determines the way in which matter is distributed within a halo today. The matter distribution within a halo (known as its 'density profile') strongly affects cosmological analyses that connect the observed galaxy distribution with theoretical predictions of the underlying matter distribution [8], as well as direct and indirect searches for dark matter [9]. Current models of halo density profiles rely on empirical fitting functions calibrated to simulations, such as the Navarro-Frenk-White (NFW) profile [10] and the Einasto profile [11]. Halo density profiles are known to show universal shapes across 20 orders of magnitude in mass [12; 13]. However, no general consensus on an explanation from first principles for the origin of halo density profiles exists.
In Ref. [14], we used an interpretable deep learning framework to create a model of the independent degrees of freedom in the spherically-averaged density profiles of dark matter halos. The model, which we denoted an _interpretable variational encoder_ (IVE), generated a compact, low-dimensional latent representation of the 3D density field in a region containing the halo, which contains all the information used by the neural network to predict the profile. The representation is interpretable because it is disentangled, i.e. we require that each latent component captures different, independent factors of variation in the profiles. We found that three components are required (and are sufficient) for modeling the profiles out to the halo outskirts: one component describes the overall normalization of the profile, the second the shape of the profile within the virial radius, and the third the shape of the profile beyond the virial radius that is affected by infalling material.
In this _Letter_, we turn to the physical interpretation of the learnt IVE latent representation in terms of the evolution history of halos that led to the final halo density profiles. Although the network was trained only with information about the present-day density field, we explore whether the latent parameters carry memory of the evolution history of the halos. We measure the information encoded within each latent about the halos' evolution history using the information-theoretic measure of _mutual information_ (MI). By this metric, the IVE representation and the NFW parametrization similarly highlight a dependence of the profile on physical accretion history. However, the IVE additionally allows us to measure the connection between a halo's recent evolution history and the density in its far outskirts, something that the NFW profile does not capture.
**Background** - We begin by briefly reviewing the current understanding of the physics of halo density profiles. The NFW profile is the most widely used fitting function for the halo density profile. It is a two-parameter functional form given by
\[\rho(r)=\frac{\rho_{s}}{(r/r_{s})\left(1+r/r_{s}\right)^{2}}\,, \tag{1}\]
where \(r_{s}\) and \(\rho_{s}\) are the scale radius and characteristic density, respectively. The scale radius is often re-written in terms of a concentration parameter \(c\equiv r_{200m}/r_{s}\), so that the NFW profile can be parametrized by a virial radius \(r_{200m}\) (or virial mass) and concentration \(c\). The virial radius \(r_{200m}\) is typically
adopted as a proxy for the halo boundary, and defined as the radius which contains a mean density that is 200 times the mean density of the Universe. High-resolution simulations have revealed this functional form to be 'universal': it provides a good fit to stacked profiles of halos for a large range of halo masses [13; 15], for several different cosmological models [16; 17; 18; 19; 20], and even in the absence of hierarchical growth [21; 22; 23]. This suggests that universal density profiles are a generic feature that arises from collisionless gravitational collapse.
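For reference, here is a short sketch evaluating Eq. (1) in the \((r_{200m}, c)\) parametrization described above; the parameter values and units are placeholders.

```
import numpy as np

def nfw_density(r, rho_s, r_200m, c):
    x = r / (r_200m / c)                   # r / r_s with r_s = r_200m / c
    return rho_s / (x * (1.0 + x) ** 2)

r = np.logspace(-2, 0, 50) * 200.0         # radii out to r_200m = 200 (placeholder units)
rho = nfw_density(r, rho_s=1.0, r_200m=200.0, c=10.0)
```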
Despite the lack of a first-principles explanation for the self-similarity of halo density profiles, some insights have been gained from studying the correlation between the NFW concentration and summary statistics of the halo evolution process. Mass, concentration and halo formation time all correlate: on average, low-mass halos assemble earlier and have higher characteristic densities (or concentration), reflecting the larger background density at earlier times [24; 25; 26; 19; 10; 27]. This description can explain the qualitative trend of the mean concentration as a function of halo mass, but not the large residual scatter in concentration seen in simulations [25; 28]. It is also limited to the simplest summary statistic of the halo evolution history, i.e. the halo formation time. Further work has suggested that the self-similarity of halos may be related to the self-similarity of the halo mass assembly history [15], although this has only been validated on stacked profiles of well-behaved,'relaxed' halos.
The situation worsens when modeling profiles beyond the virial radius: the halo outskirts strongly deviate from the NFW form due to the presence of the splashback radius, where particles reach the apocenter of their first orbit. Recent work has focused on modeling the location of the splashback radius, finding that it is sensitive to the late-time mass accretion rate [29; 30; 31; 32; 33; 34]. Modeling the full shape of the outer profile remains a difficult task due to its intrinsically non-equilibrium nature, leading to a reliance on multi-parameter fitting functions with little physical explainability [35; 36].
**Deep learning model** - The IVE architecture used in this work has two main components: the encoder, mapping the 3D density field to a low-dimensional latent representation, and the decoder, mapping the latent representation and the query radius \(\log(r)\) to the output profile \(\log[\rho(r)]\). By design, all the information used by the model to predict the density profiles is captured within the latent representation. An illustration of the model is shown in the top half of Fig. 1. The encoder is a 3D convolutional neural network (CNN) with parameters \(\phi\) that maps the inputs \(\mathbf{x}\) to a multivariate distribution in the latent space \(p_{\phi}(\mathbf{z}|\mathbf{x})\). We choose the latent representation to be a set of independent Gaussians, \(p_{\phi}(\mathbf{z}|\mathbf{x})=\prod_{i=1}^{L}\mathcal{N}(\mu_{i}(\mathbf{x}),\sigma_{i}(\mathbf{x}))\), where \(L\) is the dimensionality of the latent space; under this assumption, the encoder maps the inputs \(\mathbf{x}\) to the vectors \(\mu=(\mu_{1},\dots,\mu_{L})\) and \(\sigma=(\sigma_{1},\dots,\sigma_{L})\). The decoder of the IVE consists of another neural network with parameters \(\theta\) that maps a sampled latent vector \(\mathbf{z}\sim p_{\phi}(\mathbf{z}|\mathbf{x})\) and a value of the query \(\log(r)\) to a single predicted estimate of \(\log[\rho_{\rm pred}(r)]\).
A crucial aspect of the IVE that makes the latent space interpretable is that it is _disentangled_: independent factors of variation in the density profiles are captured by different, independent latents. This is achieved through the design of a loss function that minimizes the mean squared error between predicted and ground-truth profiles, while simultaneously maximizing the degree of independence between the latent variables by encouraging those to be as close as possible to independent Gaussians of mean 0 and variance 1 [37]. More details on the encoder and decoder architectures and the loss function are presented in Ref. [14].
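As a concrete, hypothetical illustration of the decoder mapping, the sketch below implements a small MLP taking a sampled latent vector and \(\log(r)\) to a predicted \(\log[\rho(r)]\); the layer widths and three-dimensional latent are assumptions for illustration, and the actual architecture is specified in Ref. [14].

```
import torch
import torch.nn as nn

class ProfileDecoder(nn.Module):
    def __init__(self, latent_dim=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, log_r):
        # z: (B, latent_dim), log_r: (B, 1) -> predicted log rho(r): (B, 1)
        return self.net(torch.cat([z, log_r], dim=1))

mu, log_sigma = torch.zeros(8, 3), torch.zeros(8, 3)
z = mu + log_sigma.exp() * torch.randn_like(mu)    # reparametrized latent sample
log_rho = ProfileDecoder()(z, torch.linspace(-1, 0, 8).unsqueeze(1))
```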
**Methods** - We trained two IVE models for different tasks: one (\(\text{IVE}_{\rm virial}\)) was trained to model the density profile up to the halo virial radius \(r_{200\rm m}\), and the second (\(\text{IVE}_{\rm infall}\)) was trained to model profiles beyond the halo boundary out to \(2\,r_{200\rm m}\). The former is used for direct comparison with the NFW profile, which is also designed to model the profile out to the virial radius, and the latter is used to investigate the less studied halo outer profile. The innermost radius of the profiles we consider is \(r_{\rm min}=3\,\epsilon\), where \(\epsilon\) is the gravitational softening of the simulation; this choice ensures that we can robustly trust the inner profile. The inputs are given by the 3D density field within a \(N=131^{3}\) sub-box of size \(L_{\rm sub-box}=0.4\,\text{Mpc}\,h^{-1}\) for the \(\text{IVE}_{\rm virial}\) model, and of size \(L_{\rm sub-box}=0.6\,\text{Mpc}\,h^{-1}\) for the \(\text{IVE}_{\rm infall}\) one. For the latter, we further restricted our analysis to halos with \(r_{200\rm m}{\leq 150\,\text{kpc}\,h^{-1}}\). Further discussion on the training data of the \(\text{IVE}_{\rm virial}\) and \(\text{IVE}_{\rm infall}\) models is presented in Ref. [14]. To compare the \(\text{IVE}_{\rm virial}\) results with NFW, we fitted the NFW formula in Eq. (1) to each halo's density profile using least-squares minimization, and recovered the best-fitting parameters \(r_{s}\) and \(\rho_{s}\). The concentration was then derived using \(c=\)\(r_{200\rm m}/r_{s}\). A description of the simulations used for training and testing the IVE models can be found in the Supplemental Material.
The first step of the analysis was to verify that the IVE models learn to predict the density profiles to within \(\sim\!5\%\), comparable to the accuracy of NFW fits (see Supplemental
Figure 1: A neural network is trained to discover the underlying degrees of freedom in halo density profiles in the form of a latent representation, when presented with the full 3D density structure of a halo. We physically interpret the discovered representation by measuring the MI between the latent parameters and the assembly history of the halos.
Material). Crucially, only the \(z=0\) snapshots were used for training the IVE models to construct the latent representations mapping the 3D density field to dark matter halo profiles - i.e., the model had no access to the merger histories of the halos during training. The resulting disentangled latent space directly corresponds to the underlying degrees of freedom in the halo density profiles. Following recent works [38, 39, 14], we then used the MI to (i) quantify the information captured by the latent space about the halo density profiles and (ii) connect the IVE latents to the halo's evolution history, showing how the latter determines the present-day density profile.
The MI was estimated using GMM-MI [39], which performs density estimation using Gaussian mixtures and provides MI uncertainties through bootstrap. We first measured the MI between each latent and the density profile \(\rho(r)\); this allows us to directly link each latent to a degree of freedom in the profile that affects its shape over a certain radial range. We then measured the MI between each latent and the mass assembly history of each halo. This in turn allowed us to connect each degree of freedom describing the density profile of the halo directly to the characteristics of the halos' evolution that determine that component.
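The following sketch illustrates the idea behind Gaussian-mixture MI estimation; it is a simplified stand-in built on scikit-learn, not the GMM-MI package's API, and it omits the bootstrap error bars. It fits a mixture to the joint distribution, reads off the Gaussian marginals, and Monte-Carlo averages \(\log p(x,y)-\log p(x)-\log p(y)\) over the samples (in nats).

```
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def gmm_mi(x, y, n_components=3, seed=0):
    xy = np.column_stack([x, y])
    gmm = GaussianMixture(n_components, covariance_type="full",
                          random_state=seed).fit(xy)
    log_joint = gmm.score_samples(xy)          # log p(x, y) at the samples

    def log_marginal(values, dim):
        # marginal of a full-covariance GMM is a 1D GMM with the same weights
        comp = [np.log(w) + norm.logpdf(values, m[dim], np.sqrt(c[dim, dim]))
                for w, m, c in zip(gmm.weights_, gmm.means_, gmm.covariances_)]
        return logsumexp(comp, axis=0)

    return np.mean(log_joint - log_marginal(x, 0) - log_marginal(y, 1))

rng = np.random.default_rng(0)
a = rng.normal(size=5000)
print(gmm_mi(a, a + 0.5 * rng.normal(size=5000)))  # positive MI for correlated data
```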
**Results** - Figure 2 quantifies the information contained within the latents of the \(\text{IVE}_{\text{infall}}\) (top panel) and the \(\text{IVE}_{\text{virial}}\) (bottom panel) models about the ground-truth density profiles1. We show the MI between each latent parameter and the ground-truth profiles, which we denote as \(\text{MI}_{\rho_{\text{true}}(r)}\). The three latents discovered by the \(\text{IVE}_{\text{infall}}\) describe (i) the normalization of the profile, which dominates the variation in the profiles out to \(\sim r_{200\text{m}}/2\), (ii) the shape of the inner profile, which becomes informative on radial scales approaching \(r_{200\text{m}}\), and (iii) the shape of the outer profile beyond \(r_{200\text{m}}\). The first two are analogous to the two NFW parameters, mass and concentration, respectively. A closer comparison between the inner shape latent of the \(\text{IVE}_{\text{virial}}\) model and concentration (bottom panel of Fig. 2) shows that both parameters carry information about the density in the core and on radial scales close to \(r_{200\text{m}}\). This bimodality is due to a compensation effect between the density in the inner region and that close to the virial boundary: at fixed normalization, halos with denser cores become less dense in the outskirts, and vice versa. The \(\text{MI}_{\rho_{\text{true}}(r)}\) of the inner latent is shifted towards larger radii compared to that of concentration, suggesting that the former is sensitive to variations in the shape of the profile on larger radial scales than the latter; this distinction will become relevant when physically interpreting the latent and comparing it to concentration.
Footnote 1: We verify the conclusions of our previous work in Ref. [14] at higher precision using the new GMM-MI estimator [39].
We now move on to a physical interpretation of the latents in relation to characteristics of the halos' evolution histories. Recall that the network did not have access to this information during training. The interpretation of the normalization latent is straightforward: it captures the \(z=0\) mass of the halo, \(M_{200\text{m}}\). Their MI is \(\sim 2.07\pm 0.01\) nats, implying a strong correlation between the two. This also matches expectations from the literature [10, 11], as halo mass also controls the normalization in the NFW and Einasto fitting functions. To physically interpret the inner and outer shape latents, we measure
Figure 2: The MI between the latent parameters and the ground-truth halo profiles \(\rho_{\text{true}}(r)\) for the \(\text{IVE}_{\text{infall}}\) (_top_) and the \(\text{IVE}_{\text{virial}}\) (_bottom_) models. In the \(\text{IVE}_{\text{infall}}\) case, we also show MI with the NFW concentration. (For clarity we do not show the normalization latent for \(\text{IVE}_{\text{infall}}\), since it behaves identically to the \(\text{IVE}_{\text{virial}}\) normalization latent.)
Figure 3: The MI between the latent parameters and the mass accretion histories (denoted \(\text{MI}_{M(z)}\); top row), and that between the latent parameters and the mass accretion rate (denoted \(\text{MI}_{dM(z)/dz}\); bottom row). The inner shape latent and the NFW concentration carry memory of the early-time mass assembly history, as well as the later-time mass accretion rate. The outer shape latent carries information about the halos’ most recent mass accretion rate over the past dynamical time (indicated by the arrow).
their MI with two quantities that describe the assembly history of the halos over cosmic time. The first is the mass accretion history, \(M_{\rm 200m}(z)/M_{\rm 200m}(z=0)\), which describes the evolution of the halo mass as a function of time \(M_{\rm 200m}(z)\) normalized to the present-day halo mass \(M_{\rm 200m}(z=0)\). The second is the mass accretion rate \(\Gamma(t)\equiv\Delta\ln M_{\rm 200m}(a)/\Delta\ln a\)[33], which describes the rate of change in halo mass with respect to the scale factor \(a(t)\). The value of the accretion rate depends on the time interval used to compute the change in mass and scale factor; we compute \(\Gamma(t)\) by taking the finite difference of the halo masses at each consecutive timestep in the simulation.
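A minimal sketch of this finite-difference estimate is given below; the grid of 91 scale factors echoes the snapshot cadence described in the Supplemental Material, while the power-law mass history is a toy assumption.

```
import numpy as np

def accretion_rate(masses, scale_factors):
    # Gamma = Delta ln M_200m / Delta ln a between consecutive snapshots
    return np.diff(np.log(masses)) / np.diff(np.log(scale_factors))

a = np.linspace(1.0 / 8.0, 1.0, 91)   # toy grid: 91 snapshots from z~7 to z=0
m = 1e12 * a**2.0                     # toy history with constant Gamma = 2
print(accretion_rate(m, a)[:3])       # -> [2. 2. 2.]
```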
Figure 3 shows the MI between the latents and mass accretion history (MI\({}_{M(z)}\); top row) and that between the latents and the mass accretion rate (MI\({}_{\rm dM(z)/dz}\); bottom row). We first focus on the inner shape latent, which we compare to the NFW concentration. The MI\({}_{M(z)}\) of the inner shape latent increases with time during the early formation period, peaks at \(z\sim 1\), and declines rapidly towards \(z=0\); recall that this is the MI with the mass assembly history normalized to the present-day halo mass. This result reveals that the inner shape latent is sensitive to the early assembly history of halos. The MI\({}_{dM(z)/dz}\) of the same latent reveals that the latter is also sensitive to the later time mass accretion rate. This dual dependence explains the bimodal shape of the MI between the inner latent and the profile (Fig. 2, bottom panel): the early assembly phase determines the shape of the profile in the innermost region of the halo, while the later time mass accretion rate determines the shape of the profile close to the virial radius. We further validate this interpretation in the Supplemental Material.
The NFW concentration shows a similar picture to the inner shape latent. However, its MI\({}_{M(z)}\) peaks at earlier times (\(z\sim 0.55\)) compared to the inner shape latent. This implies that the inner shape latent carries information about the build up of mass onto the halo over a longer period of time than concentration, which therefore affects the inner halo structure (and the profile) out to larger scales. The sensitivity of the inner latent to later times/larger scales in the profile explains why the inner latent MI\({}_{\rho_{\rm trw}(r)}\) is shifted towards larger radial scales than that of concentration (Fig. 2). Moreover, we find that the absolute magnitude of the concentration MI\({}_{M(z)}\) is higher than that of the inner shape latent; this is because the closer to the halo core the stronger the correlation with the early assembly history due to halos accreting mass 'inside-out'. As a result, concentration, which is sensitive to the profile on smaller \(r\) than the latent, has a higher MI with the early assembly history than the latent. Finally, the NFW concentration is related to the later time mass accretion rate in a similar way to the inner latent.
Figure 3 also quantifies the information contained within the outer shape latent about the mass evolution history of halos. We find that the outer shape latent is primarily determined by the late-time mass accretion rate. In particular, the outer shape latent is sensitive to the accretion rate over the past \(\sim 5\) Gyr. This timescale corresponds to the halo dynamical time, \(t_{\rm dyn}\equiv 2\times r_{\rm 200m}/v_{\rm 200m}\), defined as the time it takes for material to cross the halo at a typical virial velocity \(v_{\rm 200m}=\sqrt{{\rm G}M_{\rm 200m}/r_{\rm 200m}}\). This result suggests that the outer profile is determined by the accretion of matter infalling onto the halo, which is dynamically out of equilibrium and has not yet virialized within the halo.
**Discussion** - Our results show that the IVE framework has extracted a direct connection between the assembly history of cold dark matter halos and their density profiles, without having access to explicit information about the time evolution of the halos during training. This has deep implications for understanding the origin of universality in dark matter halos. The results suggest that the universality in the profiles, captured by three degrees of freedom alone, is a direct consequence of a universality in the halo assembly histories themselves, since the latents contain comparable amounts of information about both quantities.
Previous work [15] found a resemblance between the shape of the average mass accretion history, expressed in terms of the critical density of the Universe, and the average enclosed mass profile, expressed in terms of its enclosed density, for a selected set of 'well-behaved' halos of similar mass. In the halo outskirts, the profile has been linked to the dynamical accretion history of the halos primarily through the relation between the splashback radius and the mass accretion rate [30; 35]; existing models make use of multi-parameter fitting functions to capture the dynamical impact on the outer profile [36].
By contrast, within the IVE framework, the connection between the density profiles and the _entire_ mass accretion history or mass accretion rate is clearly elucidated through MI. This result was obtained directly using all halos in the simulations, without requiring a curated sample of well-behaved halos. The IVE rediscovers the known correlation between the inner profile and halo formation time [24; 25]; it then additionally demonstrates that the complexity of the dynamical, infalling material is encoded in only a single degree of freedom that captures the recent mass accretion rate. In future work, we will use the connection between assembly history, latents, and density profile captured by the IVE framework to build a model that can determine mass accretion histories from density profiles.
More broadly, our results represent progress towards enabling _new_ machine-assisted scientific discoveries, going beyond artificial rediscovery of known physical laws and concepts [40; 2; 41]. The key requirements for artificial scientific discovery are _interpretability_ and _explainability_; the IVE achieves these by generating a low-dimensional representation that disentangles the independent factors of variation in the output (interpretability) and can be explained in terms of the physics it represents through MI (explainability). The approach shows promise for gaining insight into other emergent properties of the cosmic large-scale structure which currently lack physical explanations.
**Acknowledgments** - LLS thanks Benedikt Diemer and Simon White for useful discussions. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement nos. 818085 GMGalaxies and
101018897 CosmicExplorer). This work has been enabled by support from the research project grant 'Understanding the Dynamic Universe' funded by the Knut and Alice Wallenberg Foundation under Dnr KAW 2018.0067. The work of HVP was additionally supported by the Göran Gustafsson Foundation for Research in Natural Sciences and Medicine. AP was additionally supported by the Royal Society. HVP and LLS acknowledge the hospitality of the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. The participation of HVP and LLS at the Aspen Center for Physics was supported by the Simons Foundation. This work was partially enabled by the UCL Cosmoparticle Initiative.
The contributions from the authors are listed below: **L.L.-S.**: conceptualization; formal analysis; investigation; methodology; software; validation; visualization; writing - original draft, review & editing. **H.V.P.**: conceptualization; methodology; interpretation & validation; writing - review & editing. **A.P.**: methodology; writing - review & editing.
## Supplemental Material
### Simulations
We begin with a description of the simulations used for training the IVE models and subsequent interpretation of the learned representation. We ran four dark-matter-only \(N\)-body cosmological simulations using the publicly-available code GADGET-4 [42] assuming a _Planck_\(\Lambda\)CDM cosmological model [43]. We evolved \(N=512^{3}\) dark matter particles in a \((50\,\mathrm{Mpc}\,h^{-1})^{3}\) box from \(z=99\) to \(z=0\). The four simulations are based on different realizations of the initial Gaussian random field, generated using genetIC[44]. Three simulations were used for training the machine learning model and one was set aside for validation and testing.
Dark matter halos were identified at \(z=0\) using the SUBFIND halo finder [42; 45], as done in Ref. [14]. We restricted our analysis to halos within the mass range \(\log_{10}(M/M_{\odot})\in[11,13]\), in order to fully resolve the inner profile of the lowest-mass halos and not be affected by small-number statistics at the high-mass end. To track the evolution history of the dark matter halos, we saved \(91\) snapshots of the simulations between \(z\sim 7\) and \(z=0\). We used the pynbody and tangos software packages [46] to construct the merger trees of every dark matter halo. tangos matches a halo with its successor in time based on the fraction of common particles between the two objects; the procedure is repeated for every timestep in the simulation, thus yielding halo merger trees from \(z\sim 7\) to \(z=0\). The merger trees were then used to track the mass of each halo's main progenitor over time.
### Predictions for the halo density profiles
Figure 4 shows the accuracy of the predictions of the IVE\({}_{\mathrm{virial}}\) and IVE\({}_{\mathrm{infall}}\) models. We show the mean and 90% confidence interval of the residuals \(\log_{10}[\rho_{\mathrm{predicted}}/\rho_{\mathrm{true}}]\), in every radial bin of the profile used for testing. Since every radial bin contains a different value of \(r\) for different halos, we plot the residuals as a function of \(r_{\mathrm{eff}}\) defined as the median of the distribution of radius values within each bin. The grey band shows the residuals of the NFW fits for comparison. Note that the IVE results include uncertainties in the latent distributions, whereas the NFW fits do not include uncertainties as they were obtained through least-squares optimization. The performance of the IVE models is consistent with that of the NFW profile, meaning that our model contains sufficient predictive accuracy to yield meaningful latent representations.
### Further physical interpretation of the inner shape latent
We present a further investigation of the physical interpretation of the inner shape latent. As shown in Fig. 2, the MI between the inner shape latent and the ground-truth density profile has a bimodal shape. The MI first peaks at \(r_{1}\sim 0.1\,r_{200\mathrm{m}}\) and then again at \(r_{2}\sim 0.9\,r_{200\mathrm{m}}\), meaning that the latent contains information about the shape of the profile in the inner region of the halo and close to the halo virial radius. When physically interpreting the latent, we found that the latent carries information about the early formation history of the halo and the later-time mass accretion rate (Fig. 3). We now verify whether the early formation history is responsible for the shape of the profile in the inner region, while the later-time mass accretion rate is what determines that closer to the virial radius. To do so, we compute the MI between the ground-truth density at the first MI peak, \(\rho(r_{1})\), and both the halo mass assembly history and mass accretion rate. We then repeat the calculation for the ground-truth density at the second MI peak, \(\rho(r_{2})\). Fig. 5 shows the MI between the two ground-truth densities and the mass assembly history, \(M(z)\), in the top panel2
Figure 4: Mean and 90% confidence interval of the residuals \(\log[\rho_{\mathrm{predicted}}/\rho_{\mathrm{true}}]\) of the IVE\({}_{\mathrm{virial}}\) and IVE\({}_{\mathrm{infall}}\) models, as a function of \(r_{\mathrm{eff}}\) defined as the median radius in each bin. The grey band shows the NFW residuals.
Footnote 2: This is because \(\rho(r)\) also depends on the overall normalization and therefore must be compared to \(M_{200\mathrm{m}}(z)\).
|
2310.02861 | Rayleigh Quotient Graph Neural Networks for Graph-level Anomaly
Detection | Graph-level anomaly detection has gained significant attention as it finds
applications in various domains, such as cancer diagnosis and enzyme
prediction. However, existing methods fail to capture the spectral properties
of graph anomalies, resulting in unexplainable framework design and
unsatisfying performance. In this paper, we re-investigate the spectral
differences between anomalous and normal graphs. Our main observation shows a
significant disparity in the accumulated spectral energy between these two
classes. Moreover, we prove that the accumulated spectral energy of the graph
signal can be represented by its Rayleigh Quotient, indicating that the
Rayleigh Quotient is a driving factor behind the anomalous properties of
graphs. Motivated by this, we propose Rayleigh Quotient Graph Neural Network
(RQGNN), the first spectral GNN that explores the inherent spectral features of
anomalous graphs for graph-level anomaly detection. Specifically, we introduce
a novel framework with two components: the Rayleigh Quotient learning component
(RQL) and Chebyshev Wavelet GNN with RQ-pooling (CWGNN-RQ). RQL explicitly
captures the Rayleigh Quotient of graphs and CWGNN-RQ implicitly explores the
spectral space of graphs. Extensive experiments on 10 real-world datasets show
that RQGNN outperforms the best rival by 6.74% in Macro-F1 score and 1.44% in
AUC, demonstrating the effectiveness of our framework. Our code is available at
https://github.com/xydong127/RQGNN. | Xiangyu Dong, Xingyi Zhang, Sibo Wang | 2023-10-04T14:47:27Z | http://arxiv.org/abs/2310.02861v4 | # Rayleigh Quotient Graph Neural Networks for Graph-level Anomaly Detection
###### Abstract
Graph-level anomaly detection has gained significant attention as it finds many applications in various domains, such as cancer diagnosis and enzyme prediction. However, existing methods fail to capture the underlying properties of graph anomalies, resulting in unexplainable framework design and unsatisfying performance. In this paper, we take a step back and re-investigate the spectral differences between anomalous and normal graphs. Our main observation shows a significant disparity in the accumulated spectral energy between these two classes. Moreover, we prove that the accumulated spectral energy of the graph signal can be represented by its Rayleigh Quotient, indicating that the Rayleigh Quotient is a driving factor behind the anomalous properties of graphs. Motivated by this, we propose _Rayleigh Quotient Graph Neural Network (RQGNN)_, the first spectral GNN for graph-level anomaly detection, providing a new perspective on exploring the inherent spectral features of anomalous graphs. Specifically, we introduce a novel framework that consists of two components: the Rayleigh Quotient learning component (RQL) and Chebyshev Wavelet GNN with RQ-pooling (CWGNN-RQ). RQL explicitly captures the Rayleigh Quotient of graphs and CWGNN-RQ implicitly explores the spectral space of graphs. Extensive experiments on 10 real-world datasets show that RQGNN outperforms the best rival by 6.74% in Macro-F1 score and 1.44% in AUC, demonstrating the effectiveness of our framework.
## 1 Introduction
Graph-structured data explicitly express complex relations between items, and thus have attracted much attention from the deep learning community. Extensive efforts have been devoted to deploying GNNs (Kipf and Welling, 2017; Hamilton et al., 2017; Velickovic et al., 2018) on node-level tasks. Recently, researchers have started to shift their focus from local properties to graph-level tasks (Wang et al., 2021; Liu et al., 2022; Yue et al., 2022), and graph-level anomaly detection has become one of the most important graph-level tasks with diverse applications (Ma et al., 2022; Zhang et al., 2022; Qiu et al., 2022), such as cancer diagnosis, enzyme prediction, and brain disease detection. In addition, applications of graph-level anomaly detection can be observed in trending topics, such as spam detection (Li et al., 2019) and rumor detection (Bian et al., 2020).
Following the common design of graph learning models, existing solutions for graph-level anomaly detection mainly employ spatial GNNs with distinct pooling techniques. For example, CAL (Sui et al., 2022) and FAITH (Wang et al., 2022) incorporate node features with topological characteristics of graphs to generate graph representations. Meanwhile, due to the limitations of the average or sum pooling function in certain tasks, researchers have introduced various graph pooling functions (Wu et al., 2022; Hua et al., 2022; Liu et al., 2023). However, to the best of our knowledge, no previous attempt has provided spectral analysis for anomalous graphs, missing an important feature that can help better capture the properties of anomalous graphs.
To address this issue, we start by investigating the spectral energy of the graph Laplacian. Our key findings and theoretical analysis validate that the accumulated spectral energy can be represented by the Rayleigh Quotient. With this connection, we further empirically show that the Rayleigh Quotient distributions of normal graphs and anomalous graphs follow different patterns. In particular, we first
randomly sample \(n_{a}\) anomalous graphs and \(n_{n}\) normal graphs. For each graph, we calculate its corresponding Rayleigh Quotient. Subsequently, we set the maximum and minimum values of the Rayleigh Quotient of graphs as the bounds of the value range, which is then divided into 10 equal-width bins. After that, we assign each value of the Rayleigh Quotient of graphs to its corresponding bin. Finally, we calculate the frequency of values that fall into each bin and normalize them, which can be regarded as the normalized Rayleigh Quotient distribution of the sampled dataset. Figure 1 reports the Rayleigh Quotient distribution on the SN12C dataset, and the results on other datasets can be found in Appendix A.4. As we can observe, regardless of the variations in the sample size, the Rayleigh Quotient distribution of each class exhibits a consistent pattern across different sample sets. In addition, it is evident from Figure 1 that the Rayleigh Quotient distribution of anomalous graphs and that of normal ones are distinct from each other statistically. This observation highlights how the Rayleigh Quotient can reveal the underlying differences between normal and anomalous graphs. Hence, the Rayleigh Quotient should be encoded and explored when identifying anomalous graphs. Additionally, as we establish a connection between the Rayleigh Quotient and the spectral energy of the graph Laplacian, it becomes apparent that the spectral energy distribution exhibits robust statistical patterns. This, in turn, empowers us to leverage spectral graph neural networks for further encoding and utilization of this valuable information.
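To make the binning procedure above concrete, here is a minimal numpy sketch (the function names are ours); each entry of `rq_values` holds the Rayleigh Quotient of one sampled graph:

```python
import numpy as np

def rayleigh_quotient(L, x):
    """Rayleigh Quotient x^T L x / (x^T x) of a signal x on a graph with Laplacian L."""
    return (x @ L @ x) / (x @ x)

def rq_distribution(rq_values, num_bins=10):
    """Normalized frequency of Rayleigh Quotient values over equal-width bins
    spanning [min, max] of the sampled values, as described in the text."""
    rq_values = np.asarray(rq_values)
    counts, _ = np.histogram(rq_values, bins=num_bins,
                             range=(rq_values.min(), rq_values.max()))
    return counts / counts.sum()
```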
Motivated by the observation and theoretical analysis, in this paper, we propose RQGNN, a Rayleigh Quotient-based GNN framework for graph-level anomaly detection tasks. It consists of two main components: the Rayleigh Quotient learning component (RQL) and Chebyshev Wavelet GNN with RQ-pooling (CWGNN-RQ). Firstly, we adopt RQL to derive the Rayleigh Quotient of each graph and then employ a multi-layer perceptron (MLP) to generate the representation of each graph, aiming to capture explicit differences between anomalous and normal graphs guided by their Rayleigh Quotient. Secondly, to obtain the implicit information embedded in the spectral space, we draw inspiration from the Chebyshev Wavelet GNN (CWGNN) and adopt it to learn the inherent information in the graph data. Besides, to alleviate the drawbacks of existing pooling techniques in graph-level anomaly detection, we introduce a powerful spectral-related pooling function called RQ-pooling. Furthermore, we address the challenge of imbalanced data in graph-level anomaly detection via a class-balanced focal loss. The final graph embedding is the combination of representations generated by the RQL and CWGNN-RQ. By combining the explicit information from the Rayleigh Quotient and the implicit information from the CWGNN-RQ, RQGNN effectively captures more inherent information for the detection of anomalous graphs.
In our experiments, we evaluate RQGNN against 10 alternative frameworks across 10 datasets. Extensive experiments demonstrate that our proposed framework consistently outperforms spectral GNNs and the state-of-the-art (SOTA) GNNs for both graph classification task and graph-level anomaly detection task. We summarize our contributions as follows:
* Our main observation and theoretical analysis highlight that the Rayleigh Quotient reveals underlying properties of graph anomalies, providing valuable guidance for future work in this field.
* We propose the first spectral GNNs for the graph-level anomaly detection task, which incorporates explicit and implicit learning components, enhancing the capabilities of anomaly detection.
* Comprehensive experiments show that RQGNN outperforms SOTA models on 10 real-world graph datasets, demonstrating the effectiveness of RQGNN.
Figure 1: Normalized Rayleigh Quotient distribution on SN12C.
## 2 Preliminaries
**Notation.** Let \(G=(\mathbf{A},\mathbf{X})\) denote a connected undirected graph with \(n\) nodes and \(m\) edges, where \(\mathbf{X}\in\mathbb{R}^{n\times F}\) is node features, and \(\mathbf{A}\in\mathbb{R}^{n\times n}\) is the adjacency matrix. We set \(\mathbf{A}_{ij}=1\) if there exists an edge between node \(i\) and \(j\), otherwise \(A_{ij}=0\). Let \(\mathbf{D}\) be the diagonal degree matrix, the Laplacian matrix \(\mathbf{L}\) is then defined as \(\mathbf{D}-\mathbf{A}\) (regular) or as \(\mathbf{I}_{n}-\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\) (normalized), where \(\mathbf{I}_{n}\) is an \(n\times n\) identity matrix. The Laplacian matrix can be eigen-decomposed as \(\mathbf{L}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T}\), where the diagonal matrix \(\mathbf{\Lambda}\) consists of real eigenvalues (graph spectrum).
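As a quick reference, both Laplacian variants can be assembled directly from \(\mathbf{A}\); the numpy sketch below follows the definitions above (the guard for isolated nodes is our addition):

```python
import numpy as np

def laplacians(A):
    """Regular L = D - A and normalized L = I - D^{-1/2} A D^{-1/2}
    for a binary, undirected adjacency matrix A."""
    d = A.sum(axis=1).astype(float)
    L = np.diag(d) - A
    inv_sqrt = np.zeros_like(d)
    inv_sqrt[d > 0] = d[d > 0] ** -0.5          # guard against isolated nodes
    L_norm = np.eye(len(A)) - inv_sqrt[:, None] * A * inv_sqrt[None, :]
    return L, L_norm
```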
Next, we briefly review spectral GNNs and existing work for graph-level anomaly detection and graph classification.
**Spectral GNN.** By processing Laplacian matrix, spectral GNNs manipulate the projection of graph spectrum (Defferrard et al., 2016) and can be viewed as graph signal processing models. It has drawn much attention in the graph learning community. For instance, ChebyNet (Defferrard et al., 2016) and BernNet (He et al., 2021) utilize different approximations of spectral filters to improve the expressiveness of spectral GNN. Specformer (Bo et al., 2023) combines the transformer and spectral GNN to perform self-attention in the spectral domain. BWGNN (Tang et al., 2022) adopts a wavelet filter to generate advanced node representations for node-level anomaly detection, showing the potential ability of wavelet filter in the anomaly detection area.
**Graph-level Anomaly Detection.** To the best of our knowledge, OCGIN (Zhao & Akoglu, 2021) is the first to explore graph-level anomaly detection, which provides analysis to handle the performance flip of several methods on graph classification datasets. After that, OCGTL (Qiu et al., 2022) adopts graph transformation learning to identify anomalous graphs and GLocalKD (Ma et al., 2022) investigates the influence of knowledge distillation on graph-level anomaly detection. A following work, HimNet (Niu et al., 2023) builds a hierarchical memory framework to balance the anomaly-related local and global information. One recent study, iGAD (Zhang et al., 2022) suggests that the anomalous substructures lead to graph anomalies. It proposes an anomalous substructure-aware deep random walk kernel and a node-aware kernel to capture both topological and node features, achieving SOTA performance. Yet, existing solutions only explain the anomalous phenomena from spatial perspectives. In contrast, our RQGNN further explores the spectral aspects of anomalous graphs, leading to an explainable model design and satisfying model performance.
**Graph Classification.** Graph classification models can also be considered as a general framework for our task. GMT (Baek et al., 2021) points out that a simple sum or average pooling function is unlikely to fully collect information for graph classification. The authors propose a multi-head attention-based global pooling layer to capture the interactions between nodes and the topology of graphs. Afterward, Gmixup (Han et al., 2022) applies data augmentation to improve the generalization and robustness of GNN models. Moreover, TVGNN (Hansen & Bianchi, 2023) gradually distills the global label information from the node representations. Even though these models have achieved SOTA performance on the graph classification task, the imbalanced datasets bring a non-negligible problem for such models. Without specifically paying attention to the imbalanced nature of data, these SOTA graph classification models are not able to meet the requirements of the graph-level anomaly detection task, as we will show during the empirical evaluation.
## 3 Our Method: RQGNN
The observation in Section 1 highlights the differences between the Rayleigh Quotient distribution of anomalous and normal graphs. In Section 3.1, we further provide a theoretical analysis of the Rayleigh Quotient. This motivates our design of the Rayleigh Quotient learning component (RQL) in our framework, to be elaborated in Section 3.2. Moreover, our theoretical analysis in Section 3.1 further shows that the accumulated energy of the graph can be represented by the Rayleigh Quotient, which motivates us to apply the spectral GNN to capture the spectral energy information, to be detailed in Section 3.3. We further present a powerful spectral-related pooling function called RQ-pooling in Section 3.3. Section 3.4 elaborates on the design of class-balanced focal loss.
### Rayleigh Quotient and Spectral Analysis
As described in Section 2, the regular Laplacian matrix can be decomposed as \(\mathbf{L}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T}\), where \(\mathbf{U}=(\mathbf{u}_{1},\mathbf{u}_{2},...,\mathbf{u}_{n})\) represents orthonormal eigenvectors and the corresponding eigenvalues are sorted in ascending order, i.e. \(\lambda_{1}\leq...\leq\lambda_{n}\). Let \(\mathbf{x}=(x_{1},x_{2},...,x_{n})^{T}\in\mathbb{R}^{n}\) be a signal on graph \(G\). Then \(\mathbf{\hat{x}}=(\hat{x_{1}},\hat{x_{2}},...,\hat{x_{n}})^{T}=\mathbf{U}^{T}\mathbf{x}\) is the graph Fourier transformation of \(\mathbf{x}\). Following the definition in Horn & Johnson (2012), we define the Rayleigh Quotient as \(\frac{\mathbf{x}^{T}\mathbf{L}\mathbf{x}}{\mathbf{x}^{T}\mathbf{x}}\), where \(\mathbf{L}\) represents the Laplacian matrix of graph \(G\) and \(\mathbf{x}\) is a signal on graph \(G\). The following two theorems show that the change of the Rayleigh Quotient can be bounded given a small perturbation on graph signal \(\mathbf{x}\) and graph Laplacian \(\mathbf{L}\), and proofs can be found in Appendix A.1.
**Theorem 1**.: _For any given graph \(G\), if there exists a perturbation \(\mathbf{\Delta}\) on \(\mathbf{L}\), the change of Rayleigh Quotient can be bounded by \(||\mathbf{\Delta}||_{2}\)._
**Theorem 2**.: _For any given graph \(G\), if there exists a perturbation \(\mathbf{\delta}\) on \(\mathbf{x}\), the change of Rayleigh Quotient can be bounded by \(2\mathbf{x}^{T}\mathbf{L}\mathbf{\delta}+o(\mathbf{\delta})\). If \(\mathbf{\delta}\) is small enough, in which case \(o(\mathbf{\delta})\) can be ignored, the change can be further bounded by \(2\mathbf{x}^{T}\mathbf{L}\mathbf{\delta}\)._
Theorems 1-2 provide valuable guidance in exploring the underlying spectral properties behind anomalous and normal graphs based on the Rayleigh Quotient. Recap from Section 1 that the normalized Rayleigh Quotient distribution of graphs with the same class label statistically exhibits a similar pattern on different sample sizes. If the graph Laplacian \(\mathbf{L}\) and graph signal \(\mathbf{x}\) of two graphs are close, then their Rayleigh Quotients will be close to each other and these two graphs will highly likely belong to the same class. This motivates us to design a component to learn the Rayleigh Quotient of each graph directly, as we will show in Section 3.2.
Besides, we further analyze the relationship between the Rayleigh Quotient and the spectral energy of graphs, which serves as the rationale for incorporating a spectral-related component into our framework. Let \(\hat{x}_{k}^{2}/\sum_{i=1}^{n}\hat{x}_{i}^{2}\) denote the spectral energy of \(\lambda_{k}\). Although this distribution provides valuable guidance for measuring the graph spectrum in mathematics, it is not suitable for GNN training due to the time-consuming eigendecomposition computation. Therefore, in the following, we introduce the accumulated spectral energy and show that it can be transformed into the Rayleigh Quotient, thereby avoiding the expensive matrix decomposition process.
Let \(\sum_{j=1}^{k}\hat{x}_{j}^{2}/\sum_{i=1}^{n}\hat{x}_{i}^{2}\) denote the accumulated spectral energy from \(\lambda_{1}\) to \(\lambda_{k}\). According to previous work (Li et al., 2022; Luan et al., 2022), real-world graph data usually shows heterophily in connection and high-pass graph filters will capture more spectral information. Based on this observation, instead of exploring the original accumulated spectral energy that represents the low-frequency energy, we investigate the high-frequency energy that represents the accumulated spectral energy from \(\lambda_{k}\) to \(\lambda_{n}\). For any \(t\in[\lambda_{k},\lambda_{k+1})\), where \(1\leq k\leq n-1\), we denote \(E(t)=1-\sum_{j=1}^{k}\hat{x}_{j}^{2}/\sum_{i=1}^{n}\hat{x}_{i}^{2}\) as the high-frequency energy. Then we can derive:
\[\int_{0}^{\lambda_{n}}E(t)dt=\frac{\sum_{j=1}^{n}\lambda_{j}\hat{x}_{j}^{2}}{ \sum_{i=1}^{n}\hat{x}_{i}^{2}}=\frac{\mathbf{x}^{T}\mathbf{L}\mathbf{x}}{\mathbf{x}^{T}\mathbf{x}}. \tag{1}\]
This result demonstrates that the accumulated spectral energy can be exactly represented by the Rayleigh Quotient. We summarize this result in the following proposition.
**Proposition 1**.: _Given graph \(G\), the Rayleigh Quotient represents the accumulated spectral energy._
The Proposition 1 indicates the Rayleigh Quotient represents the accumulated spectral energy of the graph, which motivates us to design a spectral GNN and spectral-related pooling function to capture the inherent properties behind anomalous graphs, as we will show in Section 3.3.
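Eq. (1) can also be checked numerically. The following self-contained snippet (ours, not from the paper) integrates the piecewise-constant high-frequency energy \(E(t)\) over \([0,\lambda_{n}]\) and compares the result to the Rayleigh Quotient:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1); A = A + A.T                     # random undirected adjacency
L = (np.diag(A.sum(axis=1)) - A).astype(float)     # regular Laplacian
lam, U = np.linalg.eigh(L)                         # eigenvalues in ascending order
x = rng.normal(size=n)                             # a random graph signal
xhat = U.T @ x                                     # graph Fourier transform

energy = xhat ** 2 / np.sum(xhat ** 2)             # spectral energy of each lambda_k
E = 1.0 - np.cumsum(energy)                        # E(t) on [lambda_k, lambda_{k+1})
integral = lam[0] + np.sum(E[:-1] * np.diff(lam))  # E(t) = 1 on [0, lambda_1)
rq = (x @ L @ x) / (x @ x)
print(np.isclose(integral, rq))                    # True
```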
### Rayleigh Quotient Learning Component
Motivated by Theorems 1-2 and the observation in Section 1, a simple yet powerful component is introduced to capture different trends on the Rayleigh Quotient. Specifically, we first use a two-layer MLP to obtain the latent representation of each node. Then, we calculate the Rayleigh Quotient for each graph as the explicit learning component in our RQGNN. Let \(\mathbf{\tilde{X}}\) denote the node features
after the feature transformation, then the Rayleigh Quotient can be expressed as:
\[RQ(\mathbf{X},\mathbf{L})=diag\left(\frac{\mathbf{\tilde{X}}^{T}\mathbf{L}\mathbf{\tilde{X}}}{\mathbf{ \tilde{X}}^{T}\mathbf{\tilde{X}}}\right), \tag{2}\]
where \(diag(\cdot)\) denotes the diagonal entries of a square matrix. Finally, we employ another two-layer MLP to get the Rayleigh Quotient representation of the entire graph:
\[h_{RQ}^{G}=\text{MLP}\left(RQ(\mathbf{X},\mathbf{L})\right). \tag{3}\]
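A minimal PyTorch sketch of the RQL component is given below; the exact layer widths and the small epsilon added for numerical stability are our assumptions, not taken from the paper:

```python
import torch
import torch.nn as nn

class RQL(nn.Module):
    """Sketch of the Rayleigh Quotient learning component (Eqs. 2-3)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.feat_mlp = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                      nn.Linear(hid_dim, hid_dim))
        self.rq_mlp = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(),
                                    nn.Linear(hid_dim, hid_dim))

    def forward(self, X, L):
        Xt = self.feat_mlp(X)                 # transformed node features, n x d
        num = torch.diagonal(Xt.T @ L @ Xt)   # diag(X^T L X)
        den = torch.diagonal(Xt.T @ Xt)       # diag(X^T X)
        rq = num / (den + 1e-12)              # Eq. (2): one quotient per channel
        return self.rq_mlp(rq)                # Eq. (3): graph representation h_RQ^G
```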
Except for explicitly learning from the Rayleigh Quotient, following the common design of GNN, we need to implicitly learn from the topology and node features of graphs, so that we can collect comprehensive information for graph-level anomaly detection. The details are presented as follows.
### Chebyshev wavelet GNN with RQ-pooling
As described in Section 3.1, the accumulated spectral energy can be represented by the Rayleigh Quotient, which reveals crucial spectral properties for graph-level anomaly detection. This motivates us to design a spectral GNN for learning graph representations. Even though existing spectral GNNs, e.g., ChebyNet (Defferrard et al., 2016) and BernNet (He et al., 2021), have achieved notable success in the node classification task, these models fall short in capturing the underlying properties of anomalous graphs, resulting in inferior performance on the graph-level anomaly detection task, as we will show in our experiments. This can be attributed to two main reasons. Firstly, simple spectral GNNs can be seen as single low-band or high-band graph filters (He et al., 2021). However, as analyzed in Section 3.1, to capture the spectral properties of anomalous graphs, it is necessary to consider the spectral energy with respect to each eigenvalue. Secondly, even using more powerful spectral GNN models, generating improved graph representations remains challenging without a carefully designed pooling function. To address these issues, we present CWGNN-RQ, a novel component that effectively learns the inherent spectral representations of different graphs.
**CWGNN.** Following Hammond et al. (2011), we define \(\psi\) as the graph wavelet and a group of \(q\) wavelets can be denoted as \(\mathbf{W}=(\mathbf{W}_{\psi_{1}},\mathbf{W}_{\psi_{2}},\cdots,\mathbf{W}_{\psi_{q}})\). Each wavelet is denoted as \(\mathbf{W}_{\psi_{i}}=\mathbf{U}g_{i}(\mathbf{\Lambda})\mathbf{U}^{T}\), where \(g_{i}(\cdot)\) is the kernel function defined on \([0,\lambda_{n}]\) in the spectral domain. Then, the general wavelet GNN of a graph signal \(\mathbf{x}\) can be expressed as:
\[\mathbf{W}\mathbf{x}=\left[\mathbf{W}_{\psi_{1}},\mathbf{W}_{\psi_{2}},\cdots,\mathbf{W}_{\psi_{q }}\right]\mathbf{x}=\left[\mathbf{U}g_{1}(\mathbf{\Lambda})\mathbf{U}^{T}\mathbf{x},\mathbf{U}g_{2}( \mathbf{\Lambda})\mathbf{U}^{T}\mathbf{x},\cdots\mathbf{U}g_{q}(\mathbf{\Lambda})\mathbf{U}^{T}\mathbf{x} \right].\]
However, calculating graph wavelets requires decomposing the graph Laplacian, resulting in expensive computational costs. To achieve better computational efficiency, we employ Chebyshev polynomials to calculate approximate wavelet operators. The following lemma shows that the Chebyshev series can be used to represent any function.
**Lemma 1** (Rivlin (1974)).: _There always exists a convergent Chebyshev series for any function \(f(t)\):_
\[f(t)=\frac{1}{2}c_{0}+\sum_{k=1}^{\infty}c_{k}T_{k}(t),\]
_where \(c_{k}=\frac{2}{\pi}\int_{0}^{\pi}cos(k\theta)f(cos(\theta))d\theta\), and \(k\) is the order of the Chebyshev polynomials._
In addition, the Chebyshev polynomials on interval \([-1,1]\) can be iteratively defined as \(T_{k}(t)=2tT_{k-1}(t)-T_{k-2}(t)\) with initial values of \(T_{0}(t)=1\) and \(T_{1}(t)=t\). Given that the eigenvalues of the normalized Laplacian matrix fall in the range of \([0,2]\), we utilize the shifted Laplacian matrix \(\mathbf{L}-\mathbf{I}_{n}\) to compute the following shifted Chebyshev polynomials \(\bar{T}\). Meanwhile, for each wavelet \(i\), we also have a scale function \(s_{i}(\cdot)\) to re-scale the eigenvalue so that it can fit into the domain of \(f\) to calculate the following Chebyshev coefficient \(\bar{c}_{i,k}\). By introducing the truncated Chebyshev polynomials into graph wavelet operators, the \(i\)-th kernel function \(f_{i}(\mathbf{L})\) is designed to capture the first \(iK\)-hop information, which can be expressed as follows:
\[f_{i}(\mathbf{L})=\frac{1}{2}\bar{c}_{i,0}\mathbf{I}_{n}+\sum_{k=1}^{iK}\bar{c}_{i,k} \bar{T}_{k}(\mathbf{L}), \tag{4}\]
where \(\bar{T}_{k}(\mathbf{L})=\frac{4}{\lambda_{n}}(\mathbf{L}-\mathbf{I})\bar{T}_{k-1}(\mathbf{L})-\bar{T}_{k-2}(\mathbf{L})\) with initial values of \(\bar{T}_{0}(\mathbf{L})=\mathbf{I}_{n}\) and \(\bar{T}_{1}(\mathbf{L})=\frac{2}{\lambda_{n}}(\mathbf{L}-\mathbf{I}_{n})\) represents the shifted Chebyshev polynomials and \(\bar{c}_{i,k}=\frac{2}{\pi}\int_{0}^{\pi}\cos(k\theta)f(s_{i}(\frac{\lambda_{n}(\cos(\theta)+1)}{2}))d\theta\) with \(1\leq i\leq q\). Then, the results of \(q\) graph wavelets are concatenated together to generate the representation of node \(j\):
\[\mathbf{h}_{j}=\text{CONCAT}\left(\left(f_{1}(\mathbf{L})\mathbf{\tilde{X}}\right)_{j}, \left(f_{2}(\mathbf{L})\mathbf{\tilde{X}}\right)_{j},\cdots,\left(f_{q}(\mathbf{L})\mathbf{ \tilde{X}}\right)_{j}\right). \tag{5}\]
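The truncated expansion can be applied to features without any eigendecomposition. The sketch below (ours) assumes the coefficients \(\bar{c}_{i,k}\) have been precomputed and reads \(\bar{T}_{1}(\mathbf{L})\) as the shifted operator \(\frac{2}{\lambda_{n}}(\mathbf{L}-\mathbf{I}_{n})\):

```python
import torch

def cheb_wavelet_apply(L, lam_max, coeffs, X):
    """Apply one truncated Chebyshev wavelet filter f_i(L) to features X (Eq. 4),
    given precomputed coefficients coeffs = [c_0, c_1, ..., c_{iK}]."""
    I = torch.eye(L.shape[0], dtype=L.dtype)
    M = (2.0 / lam_max) * (L - I)        # spectrum rescaled into [-1, 1]
    T_prev, T_curr = X, M @ X            # Tbar_0(L) X and Tbar_1(L) X
    out = 0.5 * coeffs[0] * T_prev + coeffs[1] * T_curr
    for c in coeffs[2:]:                 # Tbar_k = (4/lam_max)(L - I) Tbar_{k-1} - Tbar_{k-2}
        T_next = 2.0 * (M @ T_curr) - T_prev
        out = out + c * T_next
        T_prev, T_curr = T_curr, T_next
    return out
```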
After node representations \(\mathbf{h}\) are generated by CWGNN, a pooling function is needed to obtain the representation of the entire graph. Commonly used pooling functions such as average pooling and sum pooling functions (Hamilton et al., 2017) have achieved satisfactory performance in various classification tasks. However, as demonstrated in our experiments, these techniques become ineffective in graph-level anomaly detection. Consequently, this challenge calls for a newly designed pooling function that can effectively guide CWGNN to learn better graph representation.
**RQ-pooling.** In order to incorporate spectral information into node weights, we adopt an attention mechanism to generate the graph representation:
\[\mathbf{h}_{Att}^{G}=\sigma\left(\sum_{j\in V}a_{j}\mathbf{h}_{j}\right), \tag{6}\]
where \(\sigma\) is the non-linear activation function, and \(a_{j}\) is the attention coefficient of node \(j\). Specifically, since the spectral energy corresponds to each graph signal, we set the Rayleigh Quotient as the weight of these signals. Then, the attention coefficient for node \(j\) can be expressed as \(a_{j}=RQ(\mathbf{X},\mathbf{L})\mathbf{h}_{j}\), which is used as the node importance score in RQ-pooling. Such a strategy allows CWGNN to capture more underlying spectral information of graphs. The final representation of the graph is the concatenation of both \(\mathbf{h}_{RQ}^{G}\) and \(\mathbf{h}_{Att}^{G}\):
\[\mathbf{h}^{G}=\text{MLP}\left(\text{CONCAT}\left(\mathbf{h}_{Att}^{G},\mathbf{h}_{RQ}^{G }\right)\right). \tag{7}\]
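In code, RQ-pooling reduces to a single weighted sum over nodes. The sketch below assumes the dimensions of \(RQ(\mathbf{X},\mathbf{L})\) and \(\mathbf{h}_{j}\) match, and uses tanh for \(\sigma\), which is our choice rather than the paper's:

```python
import torch

def rq_pooling(h, rq):
    """RQ-pooling (Eq. 6): node j is weighted by a_j = RQ(X, L) . h_j,
    with h the n x d node representations and rq the Rayleigh Quotient vector."""
    a = h @ rq                                          # attention coefficients, shape (n,)
    return torch.tanh((a.unsqueeze(1) * h).sum(dim=0))  # graph vector h_Att^G
```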
### Class-balanced focal loss
As discussed in Section 2, the imbalanced nature of graph-level anomaly detection brings non-negligible challenges. To tackle this issue, we introduce a re-weighting technique called the class-balanced focal loss, which enhances the anomalous detection capability of our RQGNN.
**Expected number.** As the number of training samples increases, there will be more potential information overlap among different samples. Consequently, the marginal benefit that a model can extract from the data diminishes. To address this issue, the class-balanced focal loss is designed to capture the diminishing marginal benefits by using more data points of a class. This approach ensures that the model effectively utilizes the available data while avoiding redundancy and maximizing its learning potential. Specifically, we define the expected number \(\eta(n_{t})\) as the total number of samples that can be covered by \(n_{t}\) training data and utilize the inverse of this number as the balance factor in our loss function.
**Proposition 2**.: _The expected number \(\eta(n_{t})=\frac{1-\beta^{n_{t}}}{1-\beta}\), where \(\beta=\frac{N-1}{N}\) with \(N\) equaling to the total number of data points in class \(t\)._
In practice, without further information of data for each class, it is difficult to empirically find a set of good \(N\) for all classes. Therefore, we assume \(N\) is dataset-dependent and set the same \(\beta\) for all classes in a dataset. In addition, we also employ focal loss (Lin et al., 2017), an adjusted version of cross-entropy loss. By combining the expected number and focal loss together, we can achieve the goal of reweighting the loss for each class. The class-balanced focal loss is defined as follows:
\[\mathcal{L}_{CB_{focal}}=\frac{\mathcal{L}_{focal}}{\eta(n_{y})}=-\frac{1-\beta}{1-\beta^{n_{y}}}\sum_{i=1}^{C}(1-p_{i})^{\gamma}\log(p_{i}), \tag{8}\]
where \(\beta\) and \(\gamma\) are hyperparameters, \(n_{y}\) is the number of samples in class \(y\) that the current sample belongs to, \(C\) denotes the number of classes, and \(p_{i}=\text{softmax}(\mathbf{h}^{G})_{i}\) is the predicted probability for the current sample belonging to class \(i\).
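Eq. (8) translates directly into a few lines of PyTorch. The defaults below follow Section 4.1; the mean reduction over the batch is our assumption:

```python
import torch

def cb_focal_loss(logits, y, n_per_class, beta=0.999, gamma=1.5):
    """Class-balanced focal loss (Eq. 8). `n_per_class` is a 1-D tensor holding
    the number of training samples in each class; y holds the class indices."""
    p = torch.softmax(logits, dim=1)                           # p_i for every class
    w = (1.0 - beta) / (1.0 - beta ** n_per_class[y].float())  # 1 / eta(n_y) per sample
    focal = -((1.0 - p) ** gamma * torch.log(p + 1e-12)).sum(dim=1)
    return (w * focal).mean()
```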
## 4 Experiments
### Experimental Setup
**Datasets.** We use 10 real-world datasets to investigate the performance of RQGNN, including MCF-7, MOLT-4, PC-3, SW-620, NCI-H23, OVCAR-8, P388, SF-295, SN12C, and UACC257. These datasets are obtained from the TUDataset (Morris et al., 2020), consisting of various chemical compounds and their reactions to different cancer cells. We treat inactive chemical compounds as normal graphs and active ones as anomalous graphs. The anomalous level is measured using the anomalous ratio \(h=\frac{n_{a}}{n_{n}+n_{a}}\). In addition, the attributes are generated from node labels using one-hot encoding. The statistics of these 10 real-world datasets are shown in Table 1.
**Baselines.** We compare RQGNN against 10 SOTA GNN competitors, including spectral GNNs, graph classification models and graph-level anomaly detection models.
* Spectral GNNs with average pooling function: ChebyNet (Defferrard et al., 2016) and BernNet (He et al., 2021).
* Graph classification models: GMT (Baek et al., 2021), Gmixup (Han et al., 2022), and TVGNN (Hansen and Bianchi, 2023).
* Graph-level anomaly detection models: OCGIN (Zhao and Akoglu, 2021), OCGTL (Qiu et al., 2022), GLocalKD (Ma et al., 2022), HimNet (Niu et al., 2023), and iGAD (Zhang et al., 2022).
Also, we investigate two variants of RQGNN. We use RQGNN-1 to indicate the model that replaces the RQ-pooling with the average pooling, and use RQGNN-2 to indicate the model without the RQL component. More details of the datasets and baselines can be found in Appendix A.3.
**Experimental Settings.** We randomly divide each dataset into training/validation/test sets with 70%/15%/15%, respectively. During the sampling process, we ensure that each set maintains a consistent ratio between normal and anomalous graphs. We select the epoch where the models achieve the best Macro-F1 score on the validation set as the best epoch and use the corresponding model for performance evaluation. We set the learning rate as 0.005, the batch size as 512, the hidden dimension \(d=64\), the width of CWGNN-RQ \(q=4\), the depth of CWGNN-RQ \(K=6\), the
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline Dataset & MCF-7 & MOLT-4 & PC-3 & SW-620 & NCI-H23 & OVCAR-8 & P388 & SF-295 & SN12C & UACC257 \\ \hline \(n_{n}\) & 25476 & 36625 & 25941 & 38122 & 38296 & 38437 & 39174 & 38246 & 38049 & 38345 \\ \(n_{a}\) & 2294 & 3140 & 1568 & 2410 & 2057 & 2079 & 2298 & 2025 & 1955 & 1643 \\ \(h\) & 0.0826 & 0.079 & 0.057 & 0.0595 & 0.051 & 0.0513 & 0.0554 & 0.0503 & 0.0489 & 0.0411 \\ \(\bar{n}\) & 26.4 & 26.1 & 26.36 & 26.06 & 26.07 & 26.08 & 22.11 & 26.06 & 26.08 & 26.09 \\ \(\bar{m}\) & 28.53 & 28.14 & 28.49 & 28.09 & 28.1 & 28.11 & 23.56 & 28.09 & 28.11 & 28.13 \\ \(F\) & 46 & 64 & 45 & 65 & 65 & 65 & 72 & 65 & 65 & 64 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of 10 real-world datasets, where \(n_{n}\) is the number of normal graphs, \(n_{a}\) is the number of anomalous graphs, \(h\) is the anomalous ratio, \(\bar{n}\) is the average number of nodes, \(\bar{m}\) is the average number of edges, and \(F\) is the number of attributes.
\begin{table}
\begin{tabular}{c c c|c c c c c c c c c} \hline \hline & \multicolumn{3}{c|}{Spectral GNN} & \multicolumn{3}{c|}{Graph Classification} & \multicolumn{3}{c}{Graph-level Anomaly Detection} \\ \hline Datasets & Metrics & ChebyNet & BernNet & GMT & Gmixup & TVGNN & OCGIN & OCGTL & GLocalKD & HimNet & iGAD & RQGNN-1 & RQGNN-2 & RQGNN \\ \hline MCF-7 & AUC & 0.6612 & 0.6172 & 0.7706 & 0.6974 & 0.7180 & 0.5348 & 0.5866 & 0.6363 & 0.6369 & 0.8146 & 0.8994 & 0.8346 & **0.854** \\ & F1 & 0.4780 & 0.4784 & 0.4784 & 0.4779 & 0.5984 & - & - & - & - & 0.6468 & 0.6266 & 0.7205 & **0.794** \\ \hline MOLT-4 & AUC & 0.6647 & 0.6144 & 0.7660 & 0.6232 & 0.7159 & 0.5259 & 0.6191 & 0.6631 & 0.6633 & 0.8068 & 0.8246 & 0.8196 & **0.8316** \\ & F1 & 0.4858 & 0.4794 & 0.4814 & 0.4789 & 0.4916 & - & - & - & - & 0.6671 & 0.7119 & 0.7113 & **0.7240** \\ \hline PC-3 & AUC & 0.6051 & 0.6084 & 0.7896 & 0.6098 & 0.7974 & 0.3810 & 0.5489 & 0.6277 & 0.6703 & 0.8722 & 0.8533 & 0.8671 & **0.8782** \\ & F1 & 0.4853 & 0.4853 & 0.4853 & 0.4853 & 0.6206 & - & - & - & - & 0.6697 & 0.7003 & **0.7324** & 0.7184 \\ \hline SW-620 & AUC & 0.6759 & 0.6072 & 0.7467 & 0.6479 & 0.7326 & 0.4995 & 0.6398 & 0.6542 & 0.6544 & 0.8512 & 0.8401 & 0.8427 & **0.8560** \\ & F1 & 0.4898 & 0.4847 & 0.4844 & 0.5365 & - & - & - & - & 0.6627 & 0.6941 & 0.7209 & **0.7335** \\ \hline NCI-H23 & AUC & 0.6728 & 0.6114 & 0.8030 & 0.7324 & 0.7782 & 0.4948 & 0.6122 & 0.6837 & 0.6814 & 0.8297 & 0.8413 & 0.8354 & **0.8680** \\ & F1 & 0.4930 & 0.4869 & 0.4869 & 0.4869 & 0.5500 & - & - & - & 0.6646 & 0.6735 & **0.7349** & 0.7214 \\ \hline OVCAR-8 & AUC & 0.6330 & 0.3580 & 0.7692 & 0.3660 & 0.7653 & 0.5298 & 0.6007 & 0.6750 & 0.6570 & 0.8691 & 0.8549 & 0.8560 & **0.8799** \\ & F1 & 0.4900 & 0.4868 & 0.4868 & 0.4869 & 0.5406 & - & - & - & 0.6683 & 0.6866 & 0.6707 & **0.77215** \\ \hline P388 & AUC & 0.7266 & 0.6707 & 0.8498 & 0.6166 & 0.7957 & 0.5252 & 0.6501 & 0.6445 & 0.6667 & 0.8995 & 0.8911 & 0.8904 & **0.9023** \\ & F1 & 0.6365 & 0.5001 & 0.6583 & 0.4856 & 0.3557 & - & - & - & - & 0.7437 & 0.7525 & 0.7788 & **0.7963** \\ \hline SF-295 & AUC & 0.6505 & 0.6503 & 0.7926 & 0.6714 & 0.7346 & 0.4744 & 0.6400 & 0.7069 & 0.7037 & 0.8707 & 0.8691 & **0.8382** \\ & F1 & 0.4871 & 0.4871 & 0.4871 & 0.4866 & 0.4935 & - & - & - & - & 0.6919 & 0.7120 & 0.7335 & **0.7416** \\ \hline SN12C & AUC & 0.6598 & 0.6014 & 0.7919 & 0.7211 & 0.7341 & 0.5004 & 0.5617 & 0.6880 & 0.6916 & 0.8477 & 0.8851 & **0.8904** & 0.8861
dropout rate as 0.4, the hyperparameters of the loss function \(\beta=0.999\), \(\gamma=1.5\), and we use batch normalization for the final graph embeddings. We obtain the source code of all competitors from GitHub and perform these GNN models with default parameter settings suggested by their authors.
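For reference, the reported settings can be collected into a single configuration object (a sketch; the key names are ours):

```python
# Experimental settings reported in Section 4.1.
config = {
    "lr": 0.005, "batch_size": 512, "hidden_dim": 64,
    "width_q": 4, "depth_K": 6, "dropout": 0.4,
    "beta": 0.999, "gamma": 1.5,
    "split": (0.70, 0.15, 0.15),   # train / validation / test
}
```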
### Experimental Results
We first evaluate the performance of RQGNN against different SOTA GNN models. Table 2 reports AUC and Macro-F1 scores of each GNN model on 10 datasets. The best result on each dataset is highlighted in boldface. Since OCGIN, OCGTL, GLocalKD, and HimNet adopt one-class classification to identify anomalous graphs, we only report their AUC scores. As we can see, RQGNN outperforms all baselines on all datasets. Next, we provide our detailed observations.
Firstly, the two SOTA spectral GNNs, ChebyNet and BernNet, fail to learn the underlying anomalous properties from the spectral perspective. In particular, compared with ChebyNet and BernNet, RQGNN takes the lead by 20.72% and 25.28% on these 10 datasets in terms of average AUC score, and takes the lead by 24.40% and 25.33% on these 10 datasets in terms of average Macro-F1 score, respectively. This empirical evidence demonstrates that even though the graph Laplacian matrix is related to graph spectral energy, we still need to carefully design graph filters and pooling functions to capture the underlying anomalous properties in the graph.
Secondly, we carefully check the results of GNNs for graph classification to verify whether graph-level anomaly detection can be easily tackled by graph classification models. From Table 2, we can observe that compared to three GNN models, GMT, Gmixup, and TVGNN, RQGNN takes a lead by 8.38%, 20.59%, and 11.69% in terms of average AUC score and 23.70%, 25.48%, and 19.67% in terms of average Macro-F1 score, respectively. These results demonstrate that graph classification models cannot be directly adopted to handle the graph-level anomaly detection task.
Thirdly, we compare RQGNN with SOTA GNN models designed for graph-level anomaly detection. Despite being specialized for graph-level anomaly detection, OCGIN, OCGTL, GLocalKD, and HimNet fail to outperform other GNN baselines in terms of AUC scores. This can be attributed to their inability to effectively capture the important graph anomalous information. In contrast, RQGNN guided by the Rayleigh Quotient successfully captures the spectral differences between anomalous and normal graphs, resulting in significantly superior performance compared to OCGIN, OCGTL, GLocalKD, and HimNet. In particular, RQGNN takes the lead by an average margin of 34.82%, 25.28%, 20.02%, and 19.78% in terms of AUC score, respectively. Among all the baselines, iGAD stands out as the most competitive model, which incorporates anomalous-aware sub-structural information into node representations. However, it lacks the incorporation of important properties of anomalous graphs, such as the Rayleigh Quotient and spectral energy of graphs, which leads to relatively unsatisfying Macro-F1 scores on all datasets. With the guidance of Rayleigh Quotient, RQGNN outperforms iGAD by 1.44% in terms of AUC score and 6.74% in terms of Macro-F1 score on average across 10 datasets.
Figure 2: Varying the hidden dimension, width, and depth.
### Ablation Study
In this set of experiments, we investigate the effectiveness of each component in RQGNN. Ablation study for the class-balanced focal loss can be found in Appendix A.2. The experimental results of RQGNN variants are shown in Table 2.
Firstly, we use RQGNN-1 to indicate the model that replaces the RQ-pooling with average pooling. Recall from Section 4.1 that it combines the representation of the Rayleigh Quotient and CWGNN with an average pooling function. As we can observe, RQGNN-1 outperforms all other baselines on all datasets in terms of the Macro-F1 scores. In particular, compared with RQGNN-1, RQGNN further boosts the performance and takes the lead by 1.76% in terms of the AUC score and 3.92% in terms of the Macro-F1 score on average. This result demonstrates that the RQ-pooling, which introduces the Rayleigh Quotient as the node weight, captures more crucial information from the spectral domain.
Then, we use RQGNN-2 to indicate the model with only CWGNN-RQ. Specifically, we remove the RQL component that explicitly calculates the Rayleigh Quotient of each graph. Instead, we only compute the Rayleigh Quotient as the node weights for CWGNN. As we can observe, RQGNN-2 outperforms all the other baselines in terms of the Macro-F1 scores, which again shows the effectiveness of the RQ-pooling. Besides, according to Table 2, we can see that RQGNN is 0.09% higher than RQGNN-2 in terms of AUC score and 1.08% higher than RQGNN-2 in terms of Macro-F1 score on average, which further shows the effectiveness of the RQL component. In summary, these results demonstrate the effectiveness of each component in RQGNN.
### Parameter Analysis
Next, we conduct experiments to analyze the effect of representative parameters: the hidden dimension \(d\) of RQGNN, the width \(q\) and depth \(K\) of CWGNN-RQ on MCF-7, SF-295, SN12C, and UACC257 datasets. Figure 2 reports the Macro-F1 score of RQGNN as we vary the hidden dimension \(d\) from \(32\) to \(256\), the width \(q\) from \(3\) to \(6\), and the depth \(K\) from \(5\) to \(8\). As we can observe, when we set the hidden dimension to \(64\), RQGNN achieves relatively satisfactory performance on these four datasets. When the width of CWGNN-RQ is set to \(4\), RQGNN achieves the best results on all four datasets. Hence, we set the width to \(4\) in our experiments. Meanwhile, as we can observe, RQGNN shows a relatively stable and high performance on all four presented datasets when we set the depth to \(6\). As a result, the depth is set to \(6\) in RQGNN.
### Case Study
In this set of experiments, we investigate whether RQGNN learns the trends of the Rayleigh Quotient on anomalous and normal graphs. If RQGNN successfully detects anomalous graphs, the samples in the test set that can be classified correctly by a converged model should have a similar Rayleigh Quotient distribution to that in the training set. Figure 3 illustrates the Rayleigh Quotient distribution of the normal and anomalous graphs in the training and test set on the SN12C dataset. As we can see, graphs that can be classified correctly in the test set exhibit a similar Rayleigh Quotient distribution to that in the training set. Meanwhile, those graphs that RQGNN cannot classify correctly display different distributions from the training graphs. These results demonstrate that the Rayleigh Quotient is an intrinsic characteristic of the graph-level anomaly detection task. Our RQGNN can effectively learn the Rayleigh Quotient as a discriminative feature and thus outperforms SOTA competitors.
## 5 Conclusion
In this paper, we introduce spectral analysis into the graph-level anomaly detection task. We discover differences in the spectral energy distributions between anomalous and normal graphs and further demonstrate the observation through comprehensive experiments and theoretical analysis. The combination of the RQL component that explicitly captures the Rayleigh Quotient of the graph and CWGNN-RQ that implicitly explores graph anomalous information provides different spectral perspectives for this task. Extensive experiments demonstrate that RQGNN consistently outperforms other SOTA competitors by a significant margin. |
2305.18457 | Learning Strong Graph Neural Networks with Weak Information | Graph Neural Networks (GNNs) have exhibited impressive performance in many
graph learning tasks. Nevertheless, the performance of GNNs can deteriorate
when the input graph data suffer from weak information, i.e., incomplete
structure, incomplete features, and insufficient labels. Most prior studies,
which attempt to learn from the graph data with a specific type of weak
information, are far from effective in dealing with the scenario where diverse
data deficiencies exist and mutually affect each other. To fill the gap, in
this paper, we aim to develop an effective and principled approach to the
problem of graph learning with weak information (GLWI). Based on the findings
from our empirical analysis, we derive two design focal points for solving the
problem of GLWI, i.e., enabling long-range propagation in GNNs and allowing
information propagation to those stray nodes isolated from the largest
connected component. Accordingly, we propose D$^2$PT, a dual-channel GNN
framework that performs long-range information propagation not only on the
input graph with incomplete structure, but also on a global graph that encodes
global semantic similarities. We further develop a prototype contrastive
alignment algorithm that aligns the class-level prototypes learned from two
channels, such that the two different information propagation processes can
mutually benefit from each other and the finally learned model can well handle
the GLWI problem. Extensive experiments on eight real-world benchmark datasets
demonstrate the effectiveness and efficiency of our proposed methods in various
GLWI scenarios. | Yixin Liu, Kaize Ding, Jianling Wang, Vincent Lee, Huan Liu, Shirui Pan | 2023-05-29T04:51:09Z | http://arxiv.org/abs/2305.18457v1 | # Learning Strong Graph Neural Networks with Weak Information
###### Abstract.
Graph Neural Networks (GNNs) have exhibited impressive performance in many graph learning tasks. Nevertheless, the performance of GNNs can deteriorate when the input graph data suffer from weak information, i.e., incomplete structure, incomplete features, and insufficient labels. Most prior studies, which attempt to learn from the graph data with a specific type of weak information, are far from effective in dealing with the scenario where diverse data deficiencies exist and mutually affect each other. To fill the gap, in this paper, we aim to develop an effective and principled approach to the problem of graph learning with weak information (GLWI). Based on the findings from our empirical analysis, we derive two design focal points for solving the problem of GLWI, i.e., enabling long-range propagation in GNNs and allowing information propagation to those stray nodes isolated from the largest connected component. Accordingly, we propose D\({}^{2}\)PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities. We further develop a prototype contrastive alignment algorithm that aligns the class-level prototypes learned from two channels, such that the two different information propagation processes can mutually benefit from each other and the finally learned model can well handle the GLWI problem. Extensive experiments on eight real-world benchmark datasets demonstrate the effectiveness and efficiency of our proposed methods in various GLWI scenarios.
Graph Neural Networks, Missing Data, Few-Label Learning

Footnote †: Shirui Pan is the corresponding author.

## 1. Introduction
real-world scenarios. To bridge the gap, a natural research question is: "Can we design a universal and effective GNN for graph learning with weak information (GLWI)?"
To answer this question, in this paper, we first conduct a comprehensive analysis to investigate the performance of GNNs when learning with weak information. Through empirical analysis, we find that information propagation, the fundamental operation in GNNs, plays a crucial role in mitigating data incompleteness. However, the limitations of model architectures and deficiencies of data hinder conventional GNNs from executing effective information propagation on incomplete data. From the perspective of model architectures, we pinpoint that GNNs with _long-range propagation_ enable sufficient information communication, which not only helps recover missing data but also further exploits the observed information. From the perspective of data, we ascribe the performance degradation of graph learning with extremely weak information (Fig. 1(e)) to the incomplete graph structure. Concretely, the scattered nodes isolated from the largest connected component (i.e., _stray nodes_) lead to ineffective information propagation, which impedes feature imputation and supervision signal spreading. These empirical findings shed light on the key criteria that help improve GNNs to handle the GLWI problem.
Following the above design principles, we propose a powerful yet efficient GNN model, _Dual-channel Diffused Propagation then Transformation_ (D\({}^{2}\)PT for short), for GLWI. Our theme is to enable effective information propagation on graph data with weak information by conducting efficient long-range propagation and relieving the stray node problem. More specifically, to enhance the expressive capability and reduce the computational cost of long-range message passing, we design a graph diffusion-based backbone model termed DPT, which enables effective message passing while preserving high running efficiency. To allow information propagation on stray nodes, based on propagated features, we further learn a global graph by connecting nodes sharing similar semantics from a global view. We apply dual-channel training and contrastive prototype alignment mechanisms to D\({}^{2}\)PT, which fully leverages the knowledge of the global graph to optimize the DPT backbone. Extensive experiments on 8 real-world benchmark datasets demonstrate the effectiveness, generalization capability, and efficiency of D\({}^{2}\)PT.
To summarize, our paper makes the following contributions:
* **Problem.** We make the first attempt to investigate the graph learning problem with extremely weak information where structure, features, and labels are incomplete simultaneously, advancing existing research scope from a single angle to multiple intertwined perspectives as a whole.
* **Analysis.** We provide a comprehensive analysis to investigate the impact of data deficiency on GNNs, which further guides our algorithm designs against GLWI problem.
* **Algorithms.** We propose a novel method termed D\({}^{2}\)PT, which provides a universal, effective, and efficient solution for diverse GLWI scenarios.
* **Experiments.** We conduct extensive experiments to demonstrate that D\({}^{2}\)PT can offer superior performance over baseline methods in multiple GLWI tasks.
## 2. Related Works
### Graph Neural Networks
Graph neural networks (GNNs) are a family of neural networks that learn complex dependencies in graph-structured data [25; 56; 51; 5]. Based on the message passing paradigm, existing GNNs are composed of two types of atomic operations: propagation (P) that aggregates representations to adjacent nodes and transformation (T) that updates node representations with learnable non-linear mappings [52; 59; 60]. Different GNNs have their specific designs of P/T functions and orders of P/T operations [51]. Commonly used P functions include averaging [25], summation [52], and attention [45], while T is often defined as perceptron layer(s) [19; 50]. To organize P/T operations, the majority of GNNs follow a PTPT scheme, where multiple entangled "P-T" layers are sequentially stacked [19; 52; 45; 5]. There are also GNNs that execute multiple rounds of one type of operation first and then execute the other type in the following step, i.e., the PPTT scheme [68; 50] and the TPP scheme [6; 15]. Recent efforts extend GNNs to various learning scenarios, such as unsupervised representation learning [65; 67], adversarial attack [55; 57], and architecture search [63; 64].
### Graph Learning with Weak Information
Graph learning with weak information (GLWI) aims to learn graph machine learning models when the input graph data suffer from 1) incomplete structure, 2) incomplete features, and/or 3) insufficient labels. Most existing works focus on learning GNNs on graphs with data insufficiency in a single aspect.
To handle _incomplete structure_, **graph structure learning** aims to jointly learn an optimized graph structure along with the backbone GNN [30; 70]. As representative methods, LDS [14] and GEN [49] use Bernoulli model and stochastic block model respectively to parameterize the adjacency matrix, and train the probabilistic models along with the backbone GNNs. IDGL [4] and Simp-GCN [23] introduce metric learning technique to revise the original graph structure. Pro-GNN [24] directly models the adjacency matrix with learnable parameters and learns it with GNN alternatively.
To resolve _incomplete features_, **attribute completion** aims to recover the missing data from the existing ones [3; 10]. Spinelli et al. [38] first apply a GNN-based autoencoder for missing data imputation. SAT [3] introduces a feature-structure distribution matching mechanism to the node attribute completion model. GCN\({}_{MF}\)[40] uses Gaussian Mixture Model to transform the incomplete features at the first layer of GNN. HGNN-AC [22] employs topological embeddings to benefit attribute completion.
Figure 1. Sketch maps of graph data with (a) ideal information, (b) weak structure, (c) weak features, (d) weak labels, and (e) extremely weak information.
A line of studies termed **label-efficient graph learning** propose to learn GNN models from data with _insufficient labels_[(8; 7; 28; 39)]. IGCN [(28)] is a pioneering work that applies a label-aware low-pass graph filter on GNNs to achieve label efficiency. M3S [(39)] leverages clustering technique to provide extra supervision signals and trains the model in a multi-stage manner. CGPN [(46)] utilizes Poisson network and contrastive learning for label-efficient graph learning. Meta-PN [(8)] generates high-quality pseudo labels with label propagation strategy to augment the scarce training samples.
Despite their success in handling GLWI from a single aspect, to the best of our knowledge, none of the existing works has jointly considered the data insufficiency from three aspects. Moreover, with carefully-crafted learning procedures, most of them require high computational costs for training, damaging their running efficiency on large-scale graphs. To bridge the gaps, in this paper, we aim to propose a general, efficient, and effective approach for GLWI.
## 3. Preliminaries
**Notations**. We consider an attributed and undirected graph as \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X})=(\mathbf{A},\mathbf{X})\), where \(\mathcal{V}=\{v_{1},\cdots,v_{n}\}\) is the node set with size \(n\), \(\mathcal{E}\) is the edge set with size \(m\), \(\mathbf{A}\in\{0,1\}^{n\times n}\) is the binary adjacency matrix (where the \(i\), \(j\)-th entry \(\mathbf{A}_{ij}=1\) means \(v_{i}\) and \(v_{j}\) are connected and vice versa), and \(\mathbf{X}\in\mathbb{R}^{n\times d}\) is the feature matrix (where the \(i\)-th row \(\mathbf{X}_{i}\) is the \(d\)-dimensional feature vector of node \(v_{i}\)). The label of \(\mathcal{G}\) is represented by a label matrix \(\mathbf{Y}\in\mathbb{R}^{n\times c}\), where \(c\) is the number of classes, each row is a one-hot vector, and the \(i\), \(j\)-th entry \(\mathbf{Y}_{ij}=1\) indicates that node \(v_{i}\) belongs to the \(j\)-th class and vice versa. The neighbor set of node \(v_{i}\) is represented by \(\mathbf{\mathcal{N}}_{v_{i}}=\{v_{j}|\mathbf{A}_{ij}=1\}\). The normalized adjacency matrix is represented by \(\tilde{\mathbf{A}}=\mathbf{D}^{-1/2}\mathbf{AD}^{-1/2}\), where \(\mathbf{D}\) is the diagonal degree matrix \(\mathbf{D}_{ii}=\sum_{j}\mathbf{A}_{ij}\).
**Graph neural networks (GNNs)**. Following the message passing paradigm, GNNs can be defined as the stacked combination of two fundamental operations: **propagation** (\(\mathsf{P}\)) that aggregates, for each node, the representations of its neighboring nodes, and **transformation** (\(\mathsf{T}\)) that transforms the node representations with non-linear mappings [59]. With \(\mathbf{h}_{i}^{(i)}\) and \(\mathbf{h}_{i}^{(o)}\) as the input and output representations of node \(v_{i}\) respectively, the \(\mathsf{P}\) operation can be formulated by \(\mathbf{h}_{i}^{(o)}\leftarrow\mathsf{P}(\mathbf{h}_{i}^{(i)},\{\mathbf{h}_{j}^{(i)}|v_{j}\in\mathcal{N}_{v_{i}}\})\), and the \(\mathsf{T}\) operation can be formulated by \(\mathbf{h}_{i}^{(o)}\leftarrow\mathsf{T}(\mathbf{h}_{i}^{(i)})\). Taking GCN [25] as an implementation, \(\mathsf{P}\) and \(\mathsf{T}\) can be written as \(\mathbf{h}_{i}^{(o)}=\sum_{j}\tilde{\mathbf{A}}_{ij}\mathbf{h}_{j}^{(i)}\) and \(\mathbf{h}_{i}^{(o)}=\sigma(\mathbf{W}\mathbf{h}_{i}^{(i)})\), where \(\mathbf{W}\) is a learnable parameter matrix and \(\sigma(\cdot)\) is a non-linear activation function.
According to the manner of stacking \(\mathsf{P}\) and \(\mathsf{T}\) operations, GNNs can be divided into two categories: entangled GNNs (\(\mathsf{PTPT}\)) and disentangled GNNs (\(\mathsf{PPTT}\) or \(\mathsf{TTPP}\)) [60]. For example, a two-layer GCN [25] is an entangled GNN that can be written as \(\mathbf{H}^{(o)}=\mathsf{GCN}(\mathbf{H}^{(i)})=\mathsf{P}(\mathsf{T}(\mathsf{P}(\mathsf{T}(\mathbf{H}^{(i)}))))\), where \(\mathsf{P}\) and \(\mathsf{T}\) are stacked alternately and in couples. A two-layer SGC [50] is a disentangled \(\mathsf{PPTT}\) GNN that can be written as \(\mathbf{H}^{(o)}=\mathsf{SGC}(\mathbf{H}^{(i)})=\mathsf{T}(\mathsf{P}(\mathsf{P}(\mathbf{H}^{(i)})))\), where \(\mathsf{T}\) is executed after all \(\mathsf{P}\) are finished. Given a GNN, we define the iteration times of \(\mathsf{P}\) and \(\mathsf{T}\) as its **propagation step** \(s_{\mathsf{P}}\) and **transformation step** \(s_{\mathsf{T}}\), respectively.
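To make the P/T abstraction concrete, the following minimal NumPy sketch implements the two atomic operations and composes them in the entangled (PTPT, GCN-style) and disentangled (PPTT, SGC-style) orders described above; the toy graph, weights, and dimensions are illustrative placeholders of our own, not taken from any cited implementation.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalized adjacency: D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def P(A_norm, H):
    """Propagation: aggregate representations from adjacent nodes."""
    return A_norm @ H

def T(H, W):
    """Transformation: learnable non-linear mapping (ReLU perceptron layer)."""
    return np.maximum(H @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
A_norm = normalize_adj(A)
X = rng.normal(size=(4, 8))
W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))

# entangled (PTPT), as in a two-layer GCN: P(T(P(T(X))))
H_gcn = P(A_norm, T(P(A_norm, T(X, W1)), W2))
# disentangled (PPTT), as in a two-layer SGC: T(P(P(X)))
H_sgc = T(P(A_norm, P(A_norm, X)), W1)
print(H_gcn.shape, H_sgc.shape)  # (4, 8) (4, 8)
```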
**Semi-supervised node classification**. In this paper, we focus on the semi-supervised node classification task, which is an essential and widespread task in graph machine learning [19; 14; 25; 39; 45; 50]. In this task, only the labels of a small fraction of nodes \(\mathcal{V}_{L}\subset\mathcal{V}\) are available for model training, and the goal in the inference phase is to predict the labels of unlabeled nodes \(\mathcal{V}_{U}\subset\mathcal{V}\), with \(\mathcal{V}_{U}\cap\mathcal{V}_{L}=\emptyset\). We denote the training labels as \(\mathbf{Y}_{L}\in\mathbb{R}^{n_{L}\times c}\), where \(n_{L}=|\mathcal{V}_{L}|\).
**Graph learning with weak information (GLWI)**. To formulate GLWI, we first define ideal graph data for semi-supervised node classification under some ideal conditions.
Definition 3.1 (ideal graph data).: Let ideal graph data be \(\hat{\mathcal{D}}=(\hat{\mathcal{G}},\hat{\mathbf{Y}}_{L})=((\mathcal{V},\hat{ \mathcal{E}},\hat{\mathbf{X}}),\hat{\mathbf{Y}}_{L})\), where \(\hat{\mathcal{E}}\) is an ideal edge set that contains all necessary links, \(\hat{\mathbf{X}}\) is an ideal feature matrix that contains all informative features, and \(\hat{\mathbf{Y}}_{L}\) is an ideal label matrix that contains adequate labels (with number \(\hat{n}_{L}\)) with a balanced distribution.
Note that ideal graph data is a perfect case for graph learning. In real-world scenarios, the data for model training (i.e., observed graph data) are sometimes incomplete and insufficient. Specifically, the structure can be incomplete in graph data with an _incomplete edge set_ \(\mathcal{E}\subset\hat{\mathcal{E}}\) that contains too few edges to provide adequate information for graph learning. Meanwhile, some critical elements in the feature matrix can be missing, which is represented by an _incomplete feature matrix_ \(\mathbf{X}=\mathbf{M}\odot\hat{\mathbf{X}}\), where \(\mathbf{M}\in\{0,1\}^{n\times d}\) is the missing mask matrix. Besides, the available labels for model training can be scarce, indicating an _insufficient label matrix_ \(\mathbf{Y}_{L}\) with training number \(n_{L}\ll\hat{n}_{L}\). Based on the above definitions, the basic GLWI scenarios can be formulated by:
Definition 3.2 (Basic GLWI scenarios).: Let graph data with weak structure, weak features, and weak labels be \(\mathcal{D}_{\text{ws}}=((\mathcal{V},\mathcal{E},\hat{\mathbf{X}}),\hat{\mathbf{Y}}_{L})\), \(\mathcal{D}_{\text{wf}}=((\mathcal{V},\hat{\mathcal{E}},\mathbf{X}),\hat{\mathbf{Y}}_{L})\), and \(\mathcal{D}_{\text{wl}}=((\mathcal{V},\hat{\mathcal{E}},\hat{\mathbf{X}}),\mathbf{Y}_{L})\), respectively. The targets in the graph learning with weak structure, weak features, and weak labels scenarios are to predict the labels of unlabeled nodes \(\mathcal{V}_{U}\) with \(\mathcal{D}_{\text{ws}}\), \(\mathcal{D}_{\text{wf}}\), and \(\mathcal{D}_{\text{wl}}\) for model training, respectively. These three scenarios are defined as basic GLWI scenarios.
In the real world, the data deficiencies often occur, more or less, in three aspects simultaneously, leading to the more intractable extreme GLWI scenario:
Definition 3.3 (Extreme GLWI scenario).: Let graph data with extremely weak information be \(\mathcal{D}_{\text{x}}=((\mathcal{V},\mathcal{E},\mathbf{X}),\mathbf{Y}_{L})\). The target in the extreme GLWI scenario is to predict the labels of unlabeled nodes \(\mathcal{V}_{U}\) with \(\mathcal{D}_{\text{x}}\) for model training.
Notably, in the basic scenarios, the graph data only has one type of weak information; on the contrary, the structure, features, and labels are all deficient in the extreme scenario. Due to the mutual effects among different data deficiencies, the extreme scenario is **more challenging** than the basic scenarios.
## 4. Design Motivation and Analysis
In this section, we show that the key to solving the GLWI problem is to execute _effective information propagation_ in GNNs. Firstly, we discuss the critical roles of information propagation in handling graph data with weak information. Then, with empirical analysis, we find two crucial criteria that enable effective information propagation and hence benefit GLWI, i.e., employing long-range propagation and alleviating the stray node problem.
### Roles of Information Propagation in GLWI
In GNNs, propagation is a fundamental operation that transmits information along edges in graph-structured data. As we detail below, it plays two crucial roles under weak information: completing missing features with contextual knowledge, and spreading scarce supervision signals from labeled to unlabeled nodes. With empirical analysis, we make the following observation:

**Observation:** GNNs with larger \(s_{\mathsf{P}}\) can generally perform better on basic GLWI scenarios.
**Discussion.** 1) Based on the homophily assumption (Zhou et al., 2017; Zhang et al., 2018; Zhang et al., 2019; Zhang et al., 2019), we deduce that nodes with similar features/labels tend to be connected in the graph topology, supporting the effectiveness of long-range propagation in feature imputation and supervision signal spreading. However, if \(s_{\mathsf{P}}\) is too large, noisy information inevitably enters the receptive fields, which may degrade the performance. During grid search, we also find that \(\text{SGC}(s_{\mathsf{P}}=10)\) performs worse than \(\text{SGC}(s_{\mathsf{P}}=5)\), indicating that \(s_{\mathsf{P}}\) should be kept within an appropriate range. Here, we would like to point out that the default \(s_{\mathsf{P}}=2\) of most GNNs is usually ineffective for GLWI. 2) A large \(s_{\mathsf{P}}\) tends to aggravate the over-smoothing issue (Zhou et al., 2017; Zhang et al., 2019). In our experiments, we find that the graph diffusion mechanism can alleviate this issue by adding ego information at each propagation step with a residual connection (Han et al., 2017; Zhang et al., 2019), which allows a larger \(s_{\mathsf{P}}\) while preserving high performance.
**Summary:** With empirical analysis, we derive _Criterion 1_ for handling the GLWI problem: _enabling long-range propagation leads to effective information propagation, which further alleviates the deficiency in structure, features, and labels._
### Stray Nodes Hinder GLWI
In the above subsection, we demonstrate that a larger \(s_{\mathsf{P}}\) leads to effective information propagation and hence benefits basic GLWI scenarios. Then, we are curious about _how large-\(s_{\mathsf{P}}\) models perform in scenarios where data insufficiencies in features/labels/structure are entangled_. To answer this question, we conduct experiments (detailed settings are in Appendix A.2) to compare the performance of \(\text{APPNP}(s_{\mathsf{P}}=20)\) on graph data with different combinations of weak features (WF), weak labels (WL), and weak structure (WS). From the results in Fig. 2(d), we witness sharp decreases in performance when data exhibit multiple types of deficiencies. Even with a larger propagation step, unfortunately, conventional GNNs still suffer from the extremely weak information in graph data.
Based on the results, one might ask: does a larger \(s_{\mathsf{P}}\) make information propagation effective enough for the extreme GLWI scenario? Recalling that the graph structure provides the "bridges" for information propagation, we speculate that the quality of the graph structure can also affect the effectiveness of information propagation. Some clues can be found in Fig. 2(d): among the two-aspect combination scenarios, "WF+WS" and "WL+WS" suffer more severe performance degradation than "WF+WL", which indicates that the incomplete graph structure (WS) is the major factor hindering GLWI.
To understand how weak structure hinders GLWI, we first investigate the difference between ideal structures and weak structures\({}^{1}\). By comparing the distributions of node connection in ideal and incomplete structures, we have an interesting observation: as shown in the upper part of Fig. 3(a), in ideal structures, the vast majority of nodes are connected together to form the largest connected component (LCC); in contrast, in weak structures, there exist more stray subgraphs composed of few nodes and more isolated nodes that are even independent of other nodes. On the Cora dataset, we visualize the distribution of nodes from connected components with different sizes in the lower part of Fig. 3(a). We can see that in the ideal structure, over 90% (2485 out of 2708) of nodes are included in the LCC, while this percentage decreases to 64.9% in the weak structure. Moreover, about 18% of nodes in the weak structure are isolated, while no isolated node exists in the ideal structure.
Footnote 1: In the case study, we construct weak structures by randomly removing 50% of edges.
For simplicity, we denote _stray nodes_ as the nodes from stray subgraphs and the isolated nodes, and denote _LCC nodes_ as the nodes within LCC. With empirical analysis, we find that in extreme GLWI scenario, information propagation is ineffective on stray nodes, leading to sub-optimal performance in extreme GLWI scenario.
**Stray nodes hinder feature completion.** In Sec. 4.1, we illustrate that GNNs are able to complete the features with contextual knowledge via recurrent propagation. However, for stray nodes, the missing information is hard to fill given the limited contextual nodes. As shown in the upper part of Fig. 3(b), if a specific feature is missing in all nodes within a stray subgraph, it cannot be completed by propagation, even if we increase \(s_{\mathsf{P}}\). For the isolated nodes, propagation cannot complete their features at all, which is an even worse case. In Table 1, we show the average L2 distance between raw features and the features after propagation in SGC (Zhang et al., 2019). We find that the distances on LCC nodes are 2x-9x larger than those on stray nodes, demonstrating that LCC nodes receive much more completion than stray ones.
**Stray nodes hinder supervision signal spreading.** In Sec. 4.1, we point out that GNNs also play the role of spreading supervision signals from labeled nodes to unlabeled nodes. Unfortunately, if all the nodes in a stray subgraph are unlabeled, the supervision signals can hardly reach them via propagation (e.g., the nodes with "?" in the lower part of Fig. 3(b)). Meanwhile, for the labeled stray nodes, the supervision signals are trapped in the small connected components and cannot be propagated to most nodes. In this case, when labeled nodes are extremely scarce, a large number of nodes fall outside the coverage of supervision. As illustrated in Table 1, on APPNP (Han et al., 2017), the test accuracy on LCC nodes is 13.4%-60.5% higher than on the stray nodes, which indicates that stray nodes are more likely to be misclassified due to the lack of supervision.

| Dataset | L2 distance (LCC nodes) | L2 distance (stray nodes) | Test accuracy (LCC nodes) | Test accuracy (stray nodes) |
| --- | --- | --- | --- | --- |
| Cora | 2.7870 | 1.0889 | 62.79 | 39.12 |
| CiteSeer | 3.7238 | 1.7312 | 56.82 | 35.99 |
| PubMed | 0.2553 | 0.0278 | 67.54 | 59.93 |

Table 1. Comparison between LCC and stray nodes w.r.t. feature-wise distance/accuracy in the extreme GLWI scenario. For experimental details, please see Appendix A.3.

Figure 3. Sketch maps to illustrate the stray node problem.
**Challenge.** Although the stray nodes are easy to identify in an incomplete graph, it is highly difficult to handle them in GLWI. A feasible solution is to connect the stray nodes to the LCC. However, directly linking irrelevant nodes together may introduce noisy edges into the original graph, which further harms feature imputation and label spreading. Moreover, when features and labels are deficient, it is hard to determine how to build the connections between the stray nodes and other nodes.
**Summary:** By investigating how incomplete structure affects feature imputation and supervision signal spreading, we can summarize _Criterion 2: the key to handling the extreme GLWI scenario is to address the ineffective information propagation problem on the stray nodes isolated from the LCC._
## 5. Methodology
From the analysis in Sec. 4, we pinpoint that the key to addressing the GLWI problem is to enable effective information propagation. To this end, we can design GNN models for GLWI following two crucial criteria: _Criterion 1 - enabling long-range propagation_ and _Criterion 2 - handling the stray node problem_. With the guidance of _Criterion 1_, in this section, we first present a strong base model termed DPT, a large-\(s_{\mathsf{P}}\) GNN that balances effectiveness and efficiency. Then, following _Criterion 2_, we further propose D\({}^{2}\)PT by introducing a dual-channel architecture with an augmented global graph, which relieves the stray node problem.
### Diffused Propagation then Transformation
Motivated by _Criterion 1_, enlarging the propagation step \(s_{\mathsf{P}}\) is critical for effective information propagation; however, the growing computational complexity with the increase of \(s_{\mathsf{P}}\) is also non-negligible. For entangled GNNs (e.g., GCN), every extra propagation step comes with an extra coupled transformation layer, so the training cost grows rapidly with \(s_{\mathsf{P}}\). DPT therefore adopts a disentangled design in which the diffused propagation is pre-computed once to obtain the propagated feature matrix \(\overline{\mathbf{X}}\) (Eq. (1)), and only a lightweight shared MLP is trained on \(\overline{\mathbf{X}}\) with the cross-entropy loss \(\mathcal{L}_{ce}\) (Eq. (2)).
To relieve the stray node problem highlighted by _Criterion 2_, D\({}^{2}\)PT constructs an augmented global graph alongside the original graph, and then extracts knowledge from the global graph during model training.
In D\({}^{2}\)PT, we employ a k-nearest neighbor (kNN) graph as the global graph, which ensures that each node has at least \(k\) neighbors. To leverage the features completed by DPT, we construct the kNN graph from the propagated feature matrix \(\overline{\mathbf{X}}\) instead of the raw features \(\mathbf{X}\). Concretely, the kNN adjacency matrix is written as:
\[\mathbf{A}^{\prime},\text{ where }\mathbf{A}^{\prime}_{ij}=\begin{cases}1,& \text{s}(\overline{\mathbf{X}}_{i},\overline{\mathbf{X}}_{j})\geq\min(\tau( \overline{\mathbf{X}}_{i},k),\tau(\overline{\mathbf{X}}_{j},k)),\\ 0,&\text{otherwise},\end{cases} \tag{3}\]
where \(\text{s}(\overline{\mathbf{X}}_{i},\overline{\mathbf{X}}_{j})\) is the similarity between vectors \(\overline{\mathbf{X}}_{i}\) and \(\overline{\mathbf{X}}_{j}\), and \(\tau(\overline{\mathbf{X}}_{i},k)\) returns the similarity between \(\overline{\mathbf{X}}_{i}\) and its \(k\)-th similar row vector in \(\overline{\mathbf{X}}\). Here \(\mathbf{A}^{\prime}\) is inherently symmetric.
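As an illustration of Eq. (3), the following minimal NumPy sketch builds the global kNN graph from the propagated features, assuming cosine similarity as s(·,·); the efficient global approximation used for large graphs (Appendix C.1) is not reproduced here.

```python
import numpy as np

def knn_graph(X_bar, k):
    """Symmetric kNN adjacency per Eq. (3), with cosine similarity as s(.,.)."""
    Xn = X_bar / np.maximum(np.linalg.norm(X_bar, axis=1, keepdims=True), 1e-12)
    S = Xn @ Xn.T                   # pairwise cosine similarities
    np.fill_diagonal(S, -np.inf)    # exclude self-loops
    # tau(X_i, k): similarity between X_i and its k-th most similar row
    tau = np.sort(S, axis=1)[:, -k]
    thresh = np.minimum(tau[:, None], tau[None, :])
    return (S >= thresh).astype(float)   # symmetric by construction

A_prime = knn_graph(np.random.default_rng(0).normal(size=(6, 4)), k=2)
assert (A_prime == A_prime.T).all()
```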
Then, in a parallel manner, we execute DPT on the original graph and the global graph simultaneously. For the original channel, the computation follows Sec. 5.1. For the global channel, by replacing the adjacency matrix in Eq. (1) with \(\tilde{\mathbf{A}}^{\prime}=\mathbf{D}^{\prime-1/2}\mathbf{A}^{\prime}\mathbf{D}^{\prime-1/2}\), we calculate the global propagated features \(\overline{\mathbf{X}}^{\prime}\). After acquiring the output \(\overline{\mathbf{Y}}^{\prime}\) via Eq. (2), we can finally compute the loss function \(\mathcal{L}^{\prime}_{ce}\) at the global channel. By training the shared-weight MLP model with \(\mathcal{L}_{ce}\) and \(\mathcal{L}^{\prime}_{ce}\) jointly, D\({}^{2}\)PT is able to capture informative knowledge from the two graph views and alleviate the effects of stray nodes. Since \(\overline{\mathbf{X}}^{\prime}\) can also be pre-computed before training, D\({}^{2}\)PT inherits the high efficiency of DPT.
**Contrastive Prototype Alignment.** Now we can train the backbone DPT model on both the original and global views. However, the naive dual-channel pipeline has two limitations. First, although original and global channels share model parameters, the generated representations, especially those of stray nodes, can be significantly different due to input structure differences. Consequently, the model may fail to capture common knowledge from both views or even be confused by the disordered supervision signals. Second, both \(\mathcal{L}_{ce}\) and \(\mathcal{L}^{\prime}_{ce}\) are computed based on labeled nodes, leading to the potential over-fitting problem when training samples are scarce.
To bridge the gaps, we introduce a contrastive prototype alignment loss that enhances the semantic consistency between the two channels and, at the same time, extracts supervision signals from unlabeled samples. As the first step, we employ a linear projection layer to map the representations \(\mathbf{H}\) and \(\mathbf{H}^{\prime}\) into latent embeddings \(\mathbf{Z}=\mathbf{H}\mathbf{W}_{3}\) and \(\mathbf{Z}^{\prime}=\mathbf{H}^{\prime}\mathbf{W}_{3}\). Then, for each class \(j\in[1,\cdots,c]\), we acquire its prototype (Vaswani et al., 2017; Wang et al., 2017) by calculating the weighted average of latent embeddings:
\[\mathbf{p}_{j}=\sum\nolimits_{i:\,\mathrm{argmax}(\overline{\mathbf{Y}}_{i})=j}\frac{s_{i}\mathbf{Z}_{i}}{S_{j}},\quad\mathbf{p}^{\prime}_{j}=\sum\nolimits_{i:\,\mathrm{argmax}(\overline{\mathbf{Y}}^{\prime}_{i})=j}\frac{s_{i}\mathbf{Z}^{\prime}_{i}}{S_{j}}, \tag{4}\]
where the weight \(s_{i}=1\) for labeled nodes and \(s_{i}=\max(\overline{\mathbf{Y}}_{i})\) (i.e., the confidence in prediction) for unlabeled nodes, and \(S_{j}\) is the sum of the weights \(s_{i}\) of all nodes allocated to class \(j\). Once the prototypes are computed, we regularize the prototypes from the two channels with an Info-NCE-based (Chen et al., 2017) contrastive prototype alignment loss:
\[\mathcal{L}_{cpa}=-\frac{1}{2c}\sum_{j=1}^{c}\left(\log\frac{\text{f}(\mathbf{ p}_{j},\mathbf{p}^{\prime}_{j})}{\sum_{q\neq j}\text{f}(\mathbf{p}_{j}, \mathbf{p}^{\prime}_{q})}+\log\frac{\text{f}(\mathbf{p}_{j},\mathbf{p}^{ \prime}_{j})}{\sum_{q\neq j}\text{f}(\mathbf{p}_{q},\mathbf{p}^{\prime}_{j})} \right), \tag{5}\]
where \(\text{f}(\mathbf{a},\mathbf{b})=\text{e}^{\text{cos}(\mathbf{a},\mathbf{b})/ \tau}\), \(\text{cos}(\cdot,\cdot)\) is the cosine similarity function, and \(\tau\) is the temperature hyper-parameter. \(\mathcal{L}_{cpa}\) increases the agreement between the representations of original and global views on labeled and unlabeled nodes, which helps the model distill informative knowledge from the global view, especially for the stray nodes. Moreover, compared to traditional sample-wise contrastive loss (Chen et al., 2017), \(\mathcal{L}_{cpa}\) enables the model to learn class-level information and further leverage labels, and also enjoys much lower time complexity (\(O(ec^{2})\) rather than \(O(en^{2})\)).
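The prototype computation (4) and alignment loss (5) admit a compact implementation. The NumPy sketch below is illustrative only: the variable names are our own, and we assume each channel's soft predictions are used to form that channel's prototypes.

```python
import numpy as np

def prototypes(Z, Y_soft, labeled_mask):
    """Class prototypes per Eq. (4): confidence-weighted means of embeddings."""
    n, c = Y_soft.shape
    pred = Y_soft.argmax(axis=1)
    s = np.where(labeled_mask, 1.0, Y_soft.max(axis=1))  # weights s_i
    P = np.zeros((c, Z.shape[1]))
    for j in range(c):
        idx = pred == j
        S_j = s[idx].sum()
        if S_j > 0:
            P[j] = (s[idx, None] * Z[idx]).sum(axis=0) / S_j
    return P

def l_cpa(P, P_prime, tau=0.5):
    """Contrastive prototype alignment loss per Eq. (5)."""
    def f(a, b):
        na = np.maximum(np.linalg.norm(a, axis=1), 1e-12)
        nb = np.maximum(np.linalg.norm(b, axis=1), 1e-12)
        return np.exp((a @ b.T) / np.outer(na, nb) / tau)
    F = f(P, P_prime)                 # F[j, q] = f(p_j, p'_q)
    c = P.shape[0]
    off = ~np.eye(c, dtype=bool)
    row = np.log(np.diag(F) / (F * off).sum(axis=1))  # negatives over p'_q
    col = np.log(np.diag(F) / (F * off).sum(axis=0))  # negatives over p_q
    return -(row + col).sum() / (2 * c)
```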
**Learning Objective.** By combining the above three losses with trade-off coefficients \(\gamma_{1}\) and \(\gamma_{2}\), the overall objective of D\({}^{2}\)PT is:
\[\mathcal{L}=\mathcal{L}_{ce}+\gamma_{1}\mathcal{L}^{\prime}_{ce}+\gamma_{2} \mathcal{L}_{cpa}. \tag{6}\]
**Scalability Extension.** In order to adapt D\({}^{2}\)PT to large-scale graphs efficiently, we introduce the following mechanisms.
1) To reduce the cost of computing kNN graphs, based on locality-sensitive approximation (Kang et al., 2017; Wang et al., 2017), we design a global approximation algorithm to efficiently construct globally connected kNN graphs. The core idea is to approximate local kNN twice with different batch splits and integrate the two kNN graphs into a connected global graph. For the detailed algorithm, please see Appendix C.1.
2) To reduce the computational cost of the training phase, we adopt a mini-batch semi-supervised learning strategy. In each epoch, we only sample a batch of unlabeled nodes \(\mathcal{V}_{B}\) (\(|\mathcal{V}_{B}|=n_{B}\ll n\)) along with the labeled nodes \(\mathcal{V}_{L}\) for model training. In this case, \(\mathcal{L}_{cpa}\) is calculated on \(\mathcal{V}_{L}\cup\mathcal{V}_{B}\) instead of all nodes, which significantly reduces the memory and time requirements. Finally, the time complexity per training epoch is \(O(e(n_{L}+n_{B})(e+d+c)+c^{2}e)\). The detailed complexity analysis is given in Appendix C.3, and the overall algorithm of DPT is in Appendix C.2.
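For concreteness, a hypothetical helper for this mini-batch strategy could look as follows; the function name, `n_B`, and the index arrays are our own illustrative choices.

```python
import numpy as np

def sample_training_nodes(labeled_idx, unlabeled_idx, n_B, rng):
    """One epoch's node set: all labeled nodes plus a random unlabeled batch."""
    batch = rng.choice(unlabeled_idx, size=min(n_B, len(unlabeled_idx)),
                       replace=False)
    return np.concatenate([labeled_idx, batch])  # V_L ∪ V_B
```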
## 6. Experiments
In this section, we perform an extensive empirical evaluation of our methods (DPT and D\({}^{2}\)PT) on various GLWI scenarios. Our experiments seek to answer the following questions:
**RQ1:** How _effective_ are our methods in extreme GLWI scenario?
**RQ2:** Can our methods _generally perform well_ in various basic GLWI scenarios?
**RQ3:** How _efficient_ are our methods in terms of time and space?
**RQ4:** How do the key designs and hyper-parameters influence the performance of our methods?
### Experimental Setups
**Datasets.** We adopt 8 publicly available real-world graph datasets for evaluation, including Cora (Vaswani et al., 2017), CiteSeer (Vaswani et al., 2017), PubMed (Vaswani et al., 2017), Amazon Photo (Vaswani et al., 2017), Amazon Computers (Vaswani et al., 2017), CoAuthor CS (Vaswani et al., 2017), CoAuthor Physics (Vaswani et al., 2017), and ogbn-arxiv (Vaswani et al., 2017). More details and statistics of datasets are summarized in Appendix D.1.
**GLWI scenario implementations.** We simulate various GLWI scenarios by applying stochastic perturbation on graph data and limiting the number of training nodes. Specifically, to build weak structure, we randomly remove 50% of edges from the original graph structure. To construct weak features, we randomly replace 50% of entries in the feature matrix with 0. For datasets except ogbn-arxiv, we randomly select 5 nodes per class to form the training set in weak-label scenario, while this number in other scenarios
is 20. We sample 30 nodes per class for validation and the rest for testing. For the ogbn-arxiv dataset, we randomly select 2% of nodes for training with weak labels, and the validation and testing sets follow the official setting (Krizhevsky et al., 2014). In the extreme scenario, we construct the insufficient structure, features, and labels via the above strategies respectively and combine them together.
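A sketch of this perturbation protocol is given below (the helper names are our own; dataset loading and the validation/test splits are omitted, and we assume each class has at least `labels_per_class` nodes).

```python
import numpy as np

def make_glwi_data(A, X, labels, rng, edge_keep=0.5, feat_keep=0.5,
                   labels_per_class=5):
    """Simulate the extreme GLWI scenario: weaken structure, features, labels."""
    # weak structure: randomly keep only 50% of the undirected edges
    iu, ju = np.triu_indices_from(A, k=1)
    edge_ids = np.flatnonzero(A[iu, ju])
    keep = rng.choice(edge_ids, size=int(edge_keep * len(edge_ids)),
                      replace=False)
    A_ws = np.zeros_like(A)
    A_ws[iu[keep], ju[keep]] = A_ws[ju[keep], iu[keep]] = 1
    # weak features: zero out ~50% of feature entries via a mask M
    M = rng.random(X.shape) < feat_keep
    X_wf = M * X
    # weak labels: keep only a few labeled nodes per class
    train_idx = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), size=labels_per_class,
                   replace=False)
        for c in np.unique(labels)])
    return A_ws, X_wf, train_idx
```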
**Baselines.** We compare our methods with four groups of baselines: 1) conventional GNNs, including GCN (Krizhevsky et al., 2014), GAT (Krizhevsky et al., 2014), APPNP (Krizhevsky et al., 2014), and SGC (Krizhevsky et al., 2014); 2) GNNs with graph structure learning, including Pro-GNN (Krizhevsky et al., 2014), IDGL (Chen et al., 2016), GEN (Chen et al., 2016), and SimP-GCN (Krizhevsky et al., 2014); 3) GNNs with feature completion, including GINN (Zhu et al., 2017) and GCN\({}_{MF}\) (Chen et al., 2016); 4) label-efficient GNNs, including M3S (Zhu et al., 2017), CGPN (Chen et al., 2016), Meta-PN (Chen et al., 2017), and GRAND (Krizhevsky et al., 2014). In the efficiency analysis, we consider four scalable GNNs, including GraphSAGE (Krizhevsky et al., 2014), ClusterGCN (Chen et al., 2016), PPRGo (Chen et al., 2016), and GAMLP (Chen et al., 2016).
**Experimental Details.** For all experiments, we report the averaged test accuracy and standard deviation over 5 trials. For our methods, we perform grid search to select the best hyper-parameters on validation set. We also search for optimal hyper-parameters for baseline methods during reproduction. More implementation details are demonstrated in Appendix D.2. Our code is available at [https://github.com/yixinliu233/D2PT](https://github.com/yixinliu233/D2PT).
### Performance in Basic GLWI Scenarios (RQ2)

We further evaluate our methods on graph data with weak structure, features, and labels, and report the results in Tables 3, 4, and 5, respectively. From the results, we find that D\({}^{2}\)PT generally outperforms the baseline methods in all scenarios. Moreover, our base model DPT also achieves competitive performance, especially on data with weak features. The superior performance illustrates the powerful generalization capability of our proposed methods in learning from graph data with different imperfections.
### Efficiency Analysis (RQ3)
We analyze the efficiency of DPT and D\({}^{2}\)PT on the ogbn-arxiv dataset in terms of running time per epoch and GPU memory usage. The results are shown in Fig. 5, where DPT\({}^{*}\) and D\({}^{2}\)PT\({}^{*}\) indicate the models that load the full data onto the GPU for efficient validation and testing. From Fig. 5(a), we find that D\({}^{2}\)PT enjoys high running efficiency and state-of-the-art performance compared to the baselines. For instance, D\({}^{2}\)PT is 3.69x faster than APPNP, 30x faster than GAT, and 225x faster than Meta-PN. Meanwhile, DPT has extremely high efficiency (close to SGC) while still achieving excellent performance. From the perspective of memory usage, thanks to the adjacency decoupling and mini-batch semi-supervised learning designs, the memory usages of D\({}^{2}\)PT and DPT are below 2000 MB, which verifies their space efficiency. Even if we load the full dataset for efficient evaluation, their memory usages are still comparable to most baselines. An interesting finding is that two large-\(s_{\mathsf{P}}\) scalable GNNs, PPRGo and GAMLP, also yield competitive performance, illustrating the advantage of long-range propagation in GLWI scenarios.
### Ablation and Parameter Studies (RQ4)
**Effect of key components.** We illustrate the effect of dual-channel training and contrastive prototype alignment by removing the corresponding losses. As shown in the middle block of Table 6, both of these designs bring significant performance improvements over the base model, and the contrastive prototype alignment seems to contribute more. Moreover, D\({}^{2}\)PT, which jointly considers both of them, produces the best results.
**Selection of global graph.** Besides generating kNN graph from \(\overline{\mathbf{X}}\), we also attempt two alternative strategies: generating kNN graph from raw features (kNN from \(\mathbf{X}\)) and directly using raw features as the augmented data (Aug. w/o prop). From Table 6, we can observe that these strategies produce poor performance. The observation demonstrates the high quality of the kNN graph from \(\overline{\mathbf{X}}\) where features are completed by long-range propagation.
**Effect of propagation step.** To study the impact of propagation step \(s_{p}\), we tune the iteration number \(T\) (\(T=s_{p}\) in our methods) from 1 to 50 on two datasets. As shown in Fig. 6(a), the performance of DPT and D\({}^{2}\)PT generally increases with the growth of \(T\). The results verify our statement that GNNs with long-range propagation tend to perform better in GLWI scenarios. Also, we can find that the accuracy becomes stable when \(T\geq 20\), indicating that a moderate propagation step can provide sufficient information for GLWI.
**Visualization.** For qualitative analysis, we provide visualizations of the learned representations \(\mathbf{H}\) via t-SNE (Wang et al., 2017). The results on the Cora dataset are presented in Fig. 6(b), where nodes in the same color are from the same class. We can observe that the result from DPT, where nodes with different labels are mixed together, is not satisfactory. The possible reason is that the embeddings of stray nodes are less distinguishable in the latent space. By introducing the global graph with \(\mathcal{L}^{\prime}_{ce}\) or \(\mathcal{L}_{cpa}\), the decision boundary becomes clearer in (ii) and (iii). With the guidance of the two auxiliary losses, D\({}^{2}\)PT performs best, as evidenced by the more compact structure and distinct boundary of the learned representations in (iv).
| Methods | Cora | CiteSeer | PubMed |
| --- | --- | --- | --- |
| D\({}^{2}\)PT | 66.00±1.20 | 56.99±2.23 | 66.43±2.45 |
| w/o \(\mathcal{L}^{\prime}_{ce}\) | 61.97±5.22 | 53.56±0.04 | 64.33±2.87 |
| w/o \(\mathcal{L}_{cpa}\) | 61.00±1.77 | 48.37±5.23 | 63.97±5.18 |
| Base (DPT) | 56.37±5.97 | 46.06±4.56 | 65.08±3.13 |
| kNN from \(\mathbf{X}\) | 54.52±3.22 | 40.56±3.01 | 63.29±4.78 |
| Aug. w/o prop | 52.68±5.51 | 41.22±4.04 | 64.56±3.51 |

Table 6. Performance of D\({}^{2}\)PT and its variants.
Figure 5. Efficiency analysis on ogbn-arxiv dataset.
| Methods | Cora | CiteSeer | PubMed |
| --- | --- | --- | --- |
| GAT | 74.39±1.97 | 57.62±3.69 | 68.98±3.39 |
| APPNP | 78.08±2.39 | 56.20±5.33 | 71.30±3.11 |
| SGC | 76.28±2.13 | 57.99±3.90 | 66.45±4.69 |
| M3S | 76.30±3.08 | 64.17±2.43 | 69.45±5.48 |
| CGPN | 76.09±1.75 | 64.84±2.56 | 69.94±3.25 |
| Meta-PN | 77.83±1.76 | 61.86±3.82 | 71.96±3.46 |
| GRAND | 78.09±1.96 | 59.52±0.45 | 69.17±3.53 |
| DPT | 77.55±2.40 | 60.11±2.30 | 70.07±2.40 |
| D\({}^{2}\)PT | 79.00±2.05 | 67.94±1.14 | 72.17±1.79 |

Table 5. Results in terms of classification accuracy in graph learning with weak labels.
Figure 6. Parameter study and visualization results.
## 7. Conclusion
In this paper, we make the first attempt towards graph learning with weak information (GLWI), a practical yet challenging learning problem on graph-structured data with incomplete structure, features, and labels. With discussion and empirical analysis, we show that the key to addressing the GLWI problem is effective information propagation, and identify two crucial criteria for model design: enabling long-range propagation and handling stray nodes. Following these criteria, we propose a novel GNN model termed D\({}^{2}\)PT that enjoys high efficiency for long-range propagation and mitigates the stray node problem with an augmented global graph and a dual-channel architecture. Extensive experiments demonstrate the effectiveness of D\({}^{2}\)PT in multiple GLWI scenarios. In the future, promising follow-up directions include: 1) exploring GLWI for data with more challenging data deficiencies, such as noisy features, noisy edges, and imbalanced label distributions; 2) applying GLWI to more downstream tasks, e.g., graph classification and link prediction; and 3) unsupervised graph learning for incomplete data.
###### Acknowledgements.
This work is supported by ARC Future Fellowship (No. FT210100097), Amazon Research Award, NSF (No. 2229461), and ONR (No. N00014-21-1-4002).
|
2307.16203 | Deep Convolutional Neural Networks with Zero-Padding: Feature Extraction
and Learning | This paper studies the performance of deep convolutional neural networks
(DCNNs) with zero-padding in feature extraction and learning. After verifying
the roles of zero-padding in enabling translation-equivalence, and pooling in
its translation-invariance driven nature, we show that with similar number of
free parameters, any deep fully connected networks (DFCNs) can be represented
by DCNNs with zero-padding. This demonstrates that DCNNs with zero-padding is
essentially better than DFCNs in feature extraction. Consequently, we derive
universal consistency of DCNNs with zero-padding and show its
translation-invariance in the learning process. All our theoretical results are
verified by numerical experiments including both toy simulations and real-data
running. | Zhi Han, Baichen Liu, Shao-Bo Lin, Ding-Xuan Zhou | 2023-07-30T11:29:51Z | http://arxiv.org/abs/2307.16203v1 | # Deep Convolutional Neural Networks with Zero-Padding: Feature Extraction and Learning
###### Abstract
This paper studies the performance of deep convolutional neural networks (DCNNs) with zero-padding in feature extraction and learning. After verifying the roles of zero-padding in enabling translation-equivalence, and pooling in its translation-invariance driven nature, we show that with a similar number of free parameters, any deep fully connected network (DFCN) can be represented by DCNNs with zero-padding. This demonstrates that DCNNs with zero-padding are essentially better than DFCNs in feature extraction. Consequently, we derive the universal consistency of DCNNs with zero-padding and show their translation-invariance in the learning process. All our theoretical results are verified by numerical experiments including both toy simulations and real-data experiments.
Deep learning, deep convolutional neural network, zero-padding, pooling, learning theory
## 1 Introduction
In the era of big data, machine learning, especially deep learning, has achieved unprecedented success in numerous application areas including computer vision [1], management science [2], finance [3], economics [4], games [5] and so on. Revealing the mystery behind this success is a current focus and a long-term task of machine learning research, requiring not only an understanding of the running mechanisms of specific learning schemes, but also solid theoretical verifications. As shown in Figure 1, a machine learning scheme can be regarded as a two-step strategy that first searches for suitable feature mappings to obtain a sequence of features and then finds appropriate linear combinations of the obtained features for the learning purpose. In this way, the main difference between learning schemes lies in the different feature mappings. For example, the kernel approach [6] utilizes a kernel-based mapping to extract features, and deep learning [7] employs deep neural networks (deep nets for short) for feature representation.
The great success of deep learning [7, 8, 9] implies that deep nets are excellent feature extractors in representing the translation invariance [10], rotation invariance [11], calibration invariance [12], sparseness [13], manifold structure [14], among others [15]. Furthermore, avid research activities in deep learning theory have proved that, equipped with suitable structures, deep nets outperform the classical shallow neural networks (shallow nets for short) in capturing the smoothness [16], positioning the input [17], realizing the sparseness in frequency and spatial domains [18, 19], reflecting the rotation-invariance [20], and grasping the composite structure [21], group structure [22], manifold structure [23] and hierarchy structure [24]. These encouraging theoretical assertions, together with the notable success in applications, suggest that uncovering the running mechanism of deep learning is on the agenda.
The problem is, however, that there are three challenging gaps between the established theoretical results and the desired running mechanisms of deep learning. At first, the structures of deep nets in theoretical analysis and in applications for the same purpose are totally different. For example, to embody the rotation invariance, practitioners focus on tailoring the network to obtain structured deep nets that automatically extract the feature [11], while theoretical analysis is devoted to proving the existence of a deep fully connected neural network (DFCN) via tuning the weights. Then, as the product of full matrices frequently does not obey the commutative law, DFCN requires strict orders of the extracted features, which consequently implies that the features extracted by DFCNs cannot be combined directly to explain the success of deep learning in practice. At last, practitioners are willing to encode a-priori knowledge into the training process via setting suitable network structures, which is beyond the scope of existing theoretical analysis. The above three gaps between existing theoretical analysis and application requirements, highlighted in Figure 2, significantly dampen the spirits of both practitioners and theoretical analysts, making them drive in totally different directions in understanding deep learning.

Fig. 1: The steps of feature extraction of classical machine learning and deep learning.
Noting these gaps, some theoretical analysis has been carried out on analyzing the learning performance of structured deep nets, with the intuition that there exist some features that can be extracted by the structure of deep nets. Typical examples include [20] for deep nets with tree structures, [25] for deep convolutional neural networks (DCNNs) with multiple channels, [26] for DCNNs with resnet-type structures, and [27, 28] for DCNNs with zero-padding. However, these results seem not to provide sufficient advantages of structured deep neural networks, since they neither present solid theoretical explanations on why structured deep nets outperform others, nor give theoretical guidance on which features can be extracted by specific network structures. This motivates our study in this paper to show why and when DCNN performs better than DFCN and how to tailor the structure (zero-padding, depth, filter length and pooling scheme) of DCNN for a specific learning task.
We study the performance of DCNN with one-dimensional and one-channel convolution in feature extraction and learning. As shown in Figure 3, if no zero-padding is imposed on the convolution structure, DCNN exhibits a contracting nature (we call such a DCNN a cDCNN for short) in the sense that the width decreases with respect to the depth, prohibiting its universality in feature extraction [29, 30]. Therefore, without the help of other structures such as fully-connected layers or zero-padding, there are numerous features that cannot be extracted by cDCNN. Noticing that additional fully-connected layers may destroy the convolutional structure of cDCNN, we are interested in applying zero-padding as in Figure 3 and making the network possess an expansive nature. We call a DCNN with zero-padding as in Figure 3 an eDCNN, and study the performance of eDCNN in feature extraction and learning via considering the following three problems:
\(\diamond\) (P1): What is the role of zero-padding in eDCNN?
\(\diamond\) (P2): How to specify the pooling scheme to improve the performance of eDCNNs?
\(\diamond\) (P3): Why and when are DCNNs better than widely studied deep fully connected networks (DFCNs)?
As shown in Figure 3, zero-padding enlarges the size of the extracted features to guarantee the universality, and therefore plays a crucial role in DCNN. Problem (P1) asks how to exploit the important role of zero-padding in enhancing the representation and learning performances of cDCNN. Pooling works in the opposite direction to zero-padding, shrinking the size via a suitable sub-sampling mechanism. Problem (P2) focuses on the role of pooling in eDCNN and studies its theoretical advantages in improving the generalization performance and enabling the feature extraction of eDCNN. With the help of a detailed analysis of the roles of zero-padding and pooling, theoretical guarantees for the pros and cons of eDCNN, compared with cDCNN and DFCN, should be illustrated, which is the main topic of problem (P3). In a nutshell, the above three problems are crucial for understanding the running mechanisms of structured deep nets and probing into the reason why structured deep nets perform better than DFCN.
Our purpose in this paper is to provide answers to the aforementioned three problems from the representation theory [15] and statistical learning theory viewpoints [31, 32]. The main contributions can be concluded as follows:
\(\bullet\) Methodology development: We study the roles of zero-padding and pooling in eDCNN and find that with a suitable pooling strategy, eDCNN possesses the translation invariance and is universal in feature extraction. These findings show that eDCNN is better than DFCN in encoding the translation invariance into the network structure without sacrificing its performance in extracting other features, and is also better than cDCNN in terms of universality in approximation and learning. Therefore, we actually provide an alternative network structure for deep learning with clearer running mechanisms and excellent performance in feature extraction and learning.
\(\bullet\) Theoretical novelty: We provide solid theoretical verifications of the excellent performance of eDCNN in feature extraction and learning. From the feature extraction viewpoint, we prove that zero-padding enables eDCNN to be translation-equivalent and pooling enables it to be translation-invariant. Furthermore, we prove that when encoding the translation-equivalence (or translation-invariance) into the network, eDCNN performs no worse than DFCN in extracting other features, in the sense that with a similar number of free parameters, eDCNN can approximate DFCN within an arbitrary accuracy but not vice versa. From the learning theory perspective, we prove that eDCNN is capable of yielding universally consistent learners and encoding the translation-invariance.

Fig. 3: Zero-padding and network structure.

Fig. 2: Three challenging gaps between the established theoretical results and desired running mechanisms of deep learning.
\(\bullet\) Application guidance: With the aid of theoretical analysis, we conduct several numerical simulations on both toy data and real-world applications to show the excellent performance of eDCNN in feature extraction and learning. Our numerical results show that eDCNN always performs no worse than DFCN and cDCNN. Furthermore, if the data possess some translation-invariance, then eDCNN is better than the other two networks, which provides guidance on how to use eDCNN.
In summary, we study the feature extraction and learning performances of eDCNN and provide theoretical answers to problems (P1-P3). For (P1), we prove that zero-padding enables DCNN to reflect the translation-equivalence and improve the performance of DCNNs in feature extraction. For (P2), we show that pooling plays a crucial role in reducing the number of parameters of eDCNN without sacrificing its performance in feature extraction. For (P3), we exhibit that if the learning task includes some translation-equivalence or translation-invariance, eDCNN is essentially better than other network structures, showing why and when DCNN outperforms DFCN.
The rest of the paper is organized as follows. In the next section, we introduce eDCNN. In Section 3, we study the role of zero-padding in eDCNN. In Section 4, theoretical analysis is carried out to demonstrate the importance of pooling in eDCNN. In Section 5, we compare eDCNN with DFCN in feature extraction. In Section 6, we verify the universal consistency of eDCNN and show its translation-invariance in the learning process. In Section 7, numerical experiments concerning both toy simulations and real-world applications are made to illustrate the excellent performance of eDCNN and verify our theoretical assertions. In the last section, we draw some conclusions of our results. All proofs of the theoretical assertions are postponed to Supplementary Material of this paper.
## 2 Deep Convolutional Neural Networks
Let \(L\in\mathbb{N}\) be the depth of a deep net, \(d_{0}=d\) and \(d_{\ell}\in\mathbb{N}\) be the width of the \(\ell\)-th hidden layer for \(\ell=1,\ldots,L\). For any \(x\in\mathbb{R}^{d_{\ell-1}}\), define \(\mathcal{J}_{\ell,W^{\ell},\vec{b}^{\ell}}:\mathbb{R}^{d_{\ell-1}}\to\mathbb{R}^{d_{\ell}}\) as the affine operator by

\[\mathcal{J}_{\ell,W^{\ell},\vec{b}^{\ell}}(x):=W^{\ell}x+\vec{b}^{\ell}, \tag{1}\]
where \(W^{\ell}\) is a \(d_{\ell}\times d_{\ell-1}\) weight matrix and \(\vec{b}^{\ell}\in\mathbb{R}^{d_{\ell}}\) is a bias vector. A deep net with depth \(L\) is then defined by
\[\mathcal{N}_{d_{1},\ldots,d_{L}}(x)=\vec{a}^{L}\cdot\sigma\circ\mathcal{J}_{L,W^{L},\vec{b}^{L}}\circ\sigma\circ\cdots\circ\sigma\circ\mathcal{J}_{1,W^{1},\vec{b}^{1}}(x), \tag{2}\]
where \(\vec{a}^{L}\in\mathbb{R}^{d_{L}}\), \(\sigma(t):=\max\{t,0\}\) is the ReLU function, \(\sigma(x)=(\sigma(x^{(1)}),\ldots,\sigma(x^{(d)}))^{T}\) for \(x=(x^{(1)},\ldots,x^{(d)})^{T}\) and \(f\circ g(x)=f(g(x))\). Denote by \(\mathcal{H}_{d_{1},\ldots,d_{L}}\) the set of deep nets with depth \(L\) and width \(d_{\ell}\) in the \(\ell\)-th hidden layer. In this way, deep learning can be regarded as a learning scheme that utilizes the feature mapping \(\vec{V}_{d_{1},\ldots,d_{L}}:\mathbb{R}^{d}\to\mathbb{R}^{d_{L}}\) defined by
\[\vec{V}_{d_{1},\ldots,d_{L}}(x):=\sigma\circ\mathcal{J}_{L,W^{L},\vec{b}^{L}}\circ\sigma\circ\cdots\circ\sigma\circ\mathcal{J}_{1,W^{1},\vec{b}^{1}}(x) \tag{3}\]
to extract data features at first and then uses a simple linear combination of the extracted features for a learning purpose. The quality of learning is then determined by the feature mapping \(\vec{V}_{d_{1},\ldots,d_{L}}\), which depends on the depth \(L\), width \(d_{\ell}\), bias vectors \(\vec{b}^{\ell}\), and more importantly, the structure of weight matrices \(W^{\ell}\), \(\ell=1,\ldots,L\). The structure of the weight matrices actually determines the structure of deep nets. For example, full weight matrices correspond to DFCNs [16], sparse weight matrices are related to deep sparsely connected networks (DSCNs) [33], Toeplitz-type weight matrices refer to eDCNNs. Figure 4 presents four structures of deep nets and the associated weight matrices.
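To make the correspondence between weight-matrix structure and network type concrete, the sketch below builds the Toeplitz-type weight matrix associated with a convolution filter and checks that multiplying by it reproduces a zero-padded convolution; the construction and 0-based indexing are our own illustrative choices (the paper uses 1-based indices).

```python
import numpy as np

def toeplitz_weight(w, d_in):
    """Toeplitz-type weight matrix with W[j, k] = w[j - k], so that
    W @ v equals the zero-padded convolution of v with filter w."""
    s = len(w) - 1                    # filter supported on {0, ..., s}
    W = np.zeros((d_in + s, d_in))
    for j in range(d_in + s):
        for k in range(d_in):
            if 0 <= j - k <= s:
                W[j, k] = w[j - k]
    return W

w = np.array([1.0, -2.0, 3.0])        # s = 2
v = np.arange(4.0)
assert np.allclose(toeplitz_weight(w, 4) @ v, np.convolve(v, w))
```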
The performances of DFCN and DSCN in feature extraction and learning have been extensively studied in theory [17, 18, 19, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]. In particular, [23] proved that DFCN succeeds in capturing the manifold structure of the inputs; [36] verified that DFCN is capable of realizing numerous features such as locality and \(\ell_{1}\)-radial features; and [33] showed that DSCN benefits in extracting the piecewise smooth features of the data. Despite these encouraging developments, there is a crucial challenge: the derived provable properties of DFCNs and DSCNs require totally different structures for different learning tasks, not only in width and depth, but also in the structure of the weight matrices. This makes the running mechanism of deep learning still a mystery, since it is quite difficult to design a unified DFCN or DSCN structure suitable for all learning tasks to embody the power of depth.
Noting this, cDCNN has come into researchers' sight due to its unified structure and popularity in practice [37]. For \(s\in\mathbb{N}\), let \(\vec{w}=(w_{j})_{j=-\infty}^{\infty}\) be a filter of length \(s\), i.e., \(w_{j}\neq 0\) only for \(0\leq j\leq s\). For any \(\vec{v}\in\mathbb{R}^{d^{\prime}}\) and \(d^{\prime}\in\mathbb{N}\), define the one-dimensional and one-channel convolution without zero-padding by
\[(\vec{w}\star\vec{v})_{j}=\sum_{k=j-s}^{j}w_{j-k}v_{k+s},\qquad j=1,\ldots,d^{ \prime}-s. \tag{4}\]
For \(\vec{v}\in\mathbb{R}^{d_{\ell-1}}\) and \(d_{\ell}=d_{\ell-1}-s\), define the contracting convolution operator \(\mathcal{C}^{\star}_{\ell,\vec{w}^{\ell},\vec{b}^{\ell}}:\mathbb{R}^{d_{\ell-1}}\to\mathbb{R}^{d_{\ell}}\) as
\[\mathcal{C}^{\star}_{\ell,\vec{w}^{\ell},\vec{b}^{\ell}}(\vec{v}):=\vec{w}^{ \ell}\star\vec{v}+\vec{b}^{\ell} \tag{5}\]
for \(\vec{w}^{\ell}\) supported on \(\{0,1,\ldots,s\}\) and bias vector \(\vec{b}^{\ell}\in\mathbb{R}^{d_{\ell}}\). Then cDCNN is defined by
\[\mathcal{N}^{\star}_{L,s}(x):=\vec{a}_{L}\cdot\sigma\circ\mathcal{C}^{\star}_{L, \vec{w}_{L},\vec{b}_{L}}\circ\sigma\circ\cdots\circ\sigma\circ\mathcal{C}^{ \star}_{1,\vec{w}_{1},\vec{b}_{1}}(x). \tag{6}\]
Due to the contracting nature of cDCNN, we get \(d_{\ell}\leq d\) for all \(\ell=1,\ldots,L\). This makes cDCNN not even universal in approximation, since the smallest width of a deep net required to guarantee the universality is \(d+1\) according to the minimal-width theory of deep net approximation established in [29]. As a result, though cDCNN is able to extract certain specific features, it is impossible to use the unified cDCNN structure for all learning tasks. A feasible remedy for this drawback of cDCNN is to add several fully connected layers after the convolutional layers to enhance the versatility. The problem is, however, that setting the width and depth of the fully connected layers becomes complicated, making the network structure also unclear. Furthermore, the added fully connected layers may destroy several important properties such as the translation-equivalence, translation-invariance and calibration invariance of cDCNNs [10], making it difficult to theoretically analyze the role of the convolution operator (5) in the learning process.
Another approach to circumvent the non-universality of cDCNN is to use zero-padding to widen the network, just as [28, 30, 38] did. For any \(\vec{v}\in\mathbb{R}^{d^{\prime}}\), define
\[(\vec{w}*\vec{v})_{j}=\sum_{k=1}^{d^{\prime}}w_{j-k}v_{k},\qquad j=1,\ldots,d^ {\prime}+s. \tag{7}\]
Compared with (4), zero-padding is imposed in the convolution operation, making the convolution defined by (7) have an expansive nature, just as Figure 3 purports to show. For \(\vec{v}\in\mathbb{R}^{d_{\ell-1}}\) and \(d_{\ell}=d_{\ell-1}+s\), denote the expansive convolution operator \(\mathcal{C}_{\ell,\vec{w}^{\ell},\vec{b}^{\ell}}:\mathbb{R}^{d_{\ell-1}}\to\mathbb{R}^{d_{\ell}}\) by

\[\mathcal{C}_{\ell,\vec{w}^{\ell},\vec{b}^{\ell}}(\vec{v}):=\vec{w}^{\ell}*\vec{v}+\vec{b}^{\ell} \tag{8}\]

for \(\vec{w}^{\ell}\) supported on \(\{0,1,\ldots,s\}\) and bias vector \(\vec{b}^{\ell}\in\mathbb{R}^{d_{\ell}}\). eDCNN is then mathematically defined by
\[\mathcal{N}_{L,s}(x)=\vec{a}_{L}\cdot\vec{V}_{d_{1},\ldots,d_{L}}^{eDCNN}(x), \tag{9}\]
where
\[\vec{V}_{d_{1},\ldots,d_{L}}^{eDCNN}:=\sigma\circ\mathcal{C}_{L,\vec{w}_{L}, \vec{b}_{L}}\circ\sigma\circ\cdots\circ\sigma\circ\mathcal{C}_{1,\vec{w}_{1}, \vec{b}_{1}}(x). \tag{10}\]
Denote by \(\mathcal{H}_{L,s}\) the set of all eDCNNs formed as (9).
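A minimal NumPy sketch of the two convolutions (4) and (7) and the eDCNN feature map (10) is given below, using 0-based indexing; np.convolve in its default "full" mode realizes exactly the zero-padded sum in (7), and its valid part realizes (4). The filters, biases, and input are random placeholders of our own.

```python
import numpy as np

def conv_contract(w, v):
    """Contracting convolution (4): valid part only, output length len(v) - s."""
    s = len(w) - 1
    return np.convolve(v, w)[s:len(v)]

def conv_expand(w, v):
    """Expansive (zero-padded) convolution (7): output length len(v) + s."""
    return np.convolve(v, w)

def edcnn_features(filters, biases, x):
    """Feature map (10): alternate expansive convolution, bias, and ReLU."""
    v = x
    for w, b in zip(filters, biases):
        v = np.maximum(conv_expand(w, v) + b, 0.0)
    return v

rng = np.random.default_rng(0)
x = rng.normal(size=8)                                  # d = 8
filters = [rng.normal(size=3) for _ in range(4)]        # s = 2, L = 4
biases = [np.zeros(8 + 2 * (l + 1)) for l in range(4)]  # d_l = d + l*s
print(edcnn_features(filters, biases, x).shape)         # (16,)
```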
It is well known that DFCNs do not always obey the commutative law in the sense that there are infinitely many full matrices \(A\) and \(B\) such that \(AB\neq BA\). This means that the order of the affine operators defined by (1) affects the quality of feature extraction significantly and implies that there must be a strict order for DFCNs to extract different features. Changing the order of hidden layers thus leads to totally different running mechanisms of DFCNs. In contrast, the convolutional operators defined in (4) and (7) break through this bottleneck of the affine operator by admitting the commutative law.
**Lemma 1**.: _Let \(s\in\mathbb{N}\). If \(\vec{w}^{1},\vec{w}^{2}\) are supported on \(\{0,\ldots,s\}\), then_
\[\vec{w}^{1}\otimes\vec{w}^{2}=\vec{w}^{2}\otimes\vec{w}^{1}, \tag{11}\]
_where \(\otimes\) denotes either the contracting convolution \(\star\) in (4) or the expansive convolution \(*\) in (7)._
The commutative law established in Lemma 1 shows that DCNN is able to extract different features without considering which feature should be extracted at first, which is totally different from DFCN and presents the outperformance of the convolutional structure over the classical inner product structure in DFCN derived from the affine mapping (1).
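Lemma 1 can also be checked numerically; the snippet below verifies the commutative law for the expansive convolution (the contracting case follows analogously by restricting to the valid entries):

```python
import numpy as np

rng = np.random.default_rng(1)
w1, w2 = rng.normal(size=4), rng.normal(size=4)  # filters on {0,...,s}, s = 3
v = rng.normal(size=10)

# expansive convolution is order-independent in the filters (Lemma 1)
lhs = np.convolve(np.convolve(v, w1), w2)
rhs = np.convolve(np.convolve(v, w2), w1)
assert np.allclose(lhs, rhs)
```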
We then show the advantage of eDCNN over cDCNN in approximation. Due to the contracting nature, the depth of cDCNN is always smaller than \(d/s\), making the maximal number of free parameters of cDCNN not larger than \(d\), which is impossible to guarantee the universality [29]. The following lemma, provided in [28, Theorem 1], illustrates the universality of eDCNN.

Fig. 4: Four structures of eDCNN, cDCNN, DFCN and DSCN.
**Lemma 2**.: _Let \(2\leq s\leq d\). There holds_
\[\lim_{L\to\infty}\inf_{g\in\mathcal{H}_{L,s}}\|f-g\|_{C(\mathbb{I}^{d})}=0, \qquad\forall f\in C(\mathbb{I}^{d}).\]
The universality of eDCNN shows that with sufficiently many hidden layers and appropriately tuned weights, eDCNN can extract any other features, which shows its advantage over cDCNN in approximation.
## 3 The Power of Zero-Padding in Feature Extraction
In this section, we analyze the role of zero-padding to answer problem (P1). Our study starts with the bottleneck of the contracting convolution structure (4) in representing the translation-equivalence. We say that a \(d\)-dimensional vector \(\vec{v}_{p,d,j}\) is supported on \(\{j,j+1,\ldots,j+p-1\}\) with \(j+p\leq d+1\), if
\[\vec{v}_{p,d,j}=(\overbrace{0,\ldots,0}^{j-1},v_{1},\ldots,v_{p},\overbrace{0,\ldots,0}^{d-p-j+1})^{T}. \tag{12}\]
Let \(A_{j,d}\) be the \(d\times d\) matrix whose \((j+i,1+i)\)- components with \(i=0,1,\ldots,d-j\) are \(1\) while the others are \(0\). Then, it is easy to check that \(A_{j,d}\) is a translation operator (or matrix) satisfying
\[A_{j,d}\circ\vec{v}_{p,d,1}=\vec{v}_{p,d,j}. \tag{13}\]
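For illustration, the translation matrix \(A_{j,d}\) and the identity (13) can be realized as follows (0-based NumPy indexing for the 1-indexed definition above):

```python
import numpy as np

def translation_matrix(j, d):
    """A_{j,d}: 1 at positions (j+i, 1+i), i = 0,...,d-j (1-indexed), else 0."""
    A = np.zeros((d, d))
    for i in range(d - j + 1):
        A[j - 1 + i, i] = 1.0
    return A

v = np.array([5.0, 7.0, 0.0, 0.0, 0.0, 0.0])   # v_{p,d,1} with p = 2, d = 6
assert np.allclose(translation_matrix(3, 6) @ v,
                   np.array([0.0, 0.0, 5.0, 7.0, 0.0, 0.0]))  # v_{p,d,3}, cf. (13)
```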
We present definitions of translation-equivalence and translation-invariance [10] as follows.
**Definition 1**.: _Let \(\mathcal{G}_{d^{\prime},d}:\mathbb{R}^{d}\to\mathbb{R}^{d^{\prime}}\) be a linear operator and \(A_{j,d}\) be the translation operator satisfying (13). If_
\[\mathcal{G}_{d^{\prime},d}\circ A_{j,d}\circ\vec{v}_{p,d,1}=A_{j,d^{\prime}} \circ\mathcal{G}_{d^{\prime},d}\circ\vec{v}_{p,d,1},\quad\forall j=1,\ldots,d -p,\]
_then \(\mathcal{G}_{d^{\prime},d}\) is said to be translation-equivalent. Furthermore, if_
\[\mathcal{G}_{d^{\prime},d}\circ A_{j,d}\circ\vec{v}_{p,d,1}=\mathcal{G}_{d^{ \prime},d}\circ\vec{v}_{p,d,1},\quad\forall j=1,\ldots,d-p,\]
_then the linear operator \(\mathcal{G}_{d^{\prime},d}\) is said to be translation-invariant._
Translation-equivalence and translation-invariance are important features in numerous application areas [10, 13]. Taking image recognition as an example, as shown in Figure 5, a judgement about the cat can easily be made independently of the location of the input, showing the translation-invariance of the learning task. Unfortunately, the classical cDCNN is not always capable of encoding the translation-equivalence into the network structure, as the following lemma exhibits.
**Lemma 3**.: _Let \(d^{\prime}\in\mathbb{N},1\leq p\leq d^{\prime}\) and \(2\leq s\leq d^{\prime}\). There exist a \(\vec{w}^{\ell}\) supported on \(\{0,1,\ldots,s\}\) and some \(j\in\{1,\ldots,d^{\prime}-p+1\}\) such that_
\[A_{j,d^{\prime}-s}\circ(\vec{w}^{\ell}\star\vec{v}_{p,d^{\prime},1})\neq\vec{w }^{\ell}\star(A_{j,d^{\prime}}\circ\vec{v}_{p,d^{\prime},1}).\]
It is easy to check that there is a \(\vec{w}^{\ell}\) supported on \(\{0,1,\ldots,s\}\) such that there are \(s\) non-zero items in \(\vec{w}^{\ell}\star\vec{v}_{p,d^{\prime},1}\), and consequently \(s\) non-zero items in \(A_{j,d^{\prime}-s}\circ(\vec{w}^{\ell}\star\vec{v}_{p,d^{\prime},1})\), but for \(s\leq j\leq d^{\prime}-p-s\) there are \(2s-1\) non-zero items in \(\vec{w}^{\ell}\star(A_{j,d^{\prime}}\circ\vec{v}_{p,d^{\prime},1})\), which proves the above lemma directly. Lemma 3 does not show that cDCNN can never encode the translation-invariance, but demonstrates that cDCNN fails to do so when the support of the target lies on the edges. Our result in this section shows that zero-padding, as defined in (7), is capable of breaking through this drawback of cDCNN.
**Proposition 1**.: _Let \(L\in\mathbb{N}\), \(1\leq p\leq d\), \(2\leq s\leq d\), \(d_{0}=d\), \(d_{\ell}=d+\ell s\) and \(\vec{w}^{\ell}\) be supported on \(\{0,1,\ldots,s\}\), \(\ell=1,\ldots,L\). For any \(1\leq j\leq d-p+1\), there holds_
\[\vec{w}^{L}\star\cdots\star\vec{w}^{1}\ast(A_{j,d}\circ\vec{v}_ {p,d,1}) \tag{14}\] \[= \vec{w}^{L}\star\cdots\ast(A_{j,d_{1}}\circ(\vec{w}^{1}\ast\vec{ v}_{p,d,1}))\] \[= \cdots=A_{j,d_{L}}\circ(\vec{w}^{L}\star\cdots\star\vec{w}^{1} \ast\vec{v}_{p,d,1}).\]
The proof of Proposition 1 is given in the Appendix. Proposition 1 shows that if a vector \(v_{p,d,j}\) is translated to \(v_{p,d,j^{\prime}}\) for \(j^{\prime}\neq j\), then the output of the convolved vector \(\vec{w}^{L}\ast\cdots\ast\vec{w}^{1}\ast\vec{v}_{p,d,j}\) is also translated, with step \(j^{\prime}-j\). This illustrates that without tuning weights, the expansive convolution structure succeeds in encoding the translation-equivalence of the data, showing its advantage over the inner product structure in DFCNs and the contracting convolutional structure in cDCNNs.
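Proposition 1 can also be observed numerically. The following sketch again identifies the expansive convolution with NumPy's "full" convolution mode and verifies that translating the input support translates the multi-layer output by the same offset; the dimensions and random filters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
d, s, p, L = 12, 3, 4, 3
ws = [rng.standard_normal(s + 1) for _ in range(L)]
base = rng.standard_normal(p)

def expansive_stack(x):
    for w in ws:
        x = np.convolve(w, x, mode="full")  # zero-padding: length grows by s per layer
    return x

v1 = np.zeros(d); v1[:p] = base             # v_{p,d,1}
out1 = expansive_stack(v1)
for j in range(2, d - p + 2):               # translate the support to position j
    vj = np.zeros(d); vj[j - 1:j - 1 + p] = base
    outj = expansive_stack(vj)
    # The output is translated by the same offset j - 1 (Proposition 1).
    assert np.allclose(outj[j - 1:], out1[:len(out1) - (j - 1)])
    assert np.allclose(outj[:j - 1], 0.0)
```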
## 4 The Importance of Pooling in eDCNN
In this section, we borrow the idea of the location-based pooling (down-sampling) scheme from [38] to equip eDCNN so as to simultaneously reduce the size of the extracted features and enhance its ability to encode the translation-invariance. For any \(d^{\prime}\in\mathbb{N}\), the location-based pooling scheme \(\mathcal{S}_{d^{\prime},u,j}:\mathbb{R}^{d^{\prime}}\to\mathbb{R}^{[d^{\prime}/u]}\) for a vector \(\vec{v}\in\mathbb{R}^{d^{\prime}}\) with scaling parameter \(u\) and location parameter \(0\leq j\leq d^{\prime}\) is defined by
\[\mathcal{S}_{d^{\prime},u,j}(\vec{v})=(v_{ku+j})_{k=1}^{[d^{\prime}/u]}, \tag{15}\]
where \([a]\) denotes the integer part of the real number \(a\) and \(v_{ku+j}=0\) for \(ku+j>d^{\prime}\). For any \(d\in\mathbb{N}\), if we set \(v_{ku+j}=0\) for \(ku+j>d\), then \(\mathcal{S}_{d^{\prime},u,j}\) can be regarded as an operator from \(\mathbb{R}^{d^{\prime}}\) to \(\mathbb{R}^{d}\). As shown in Figure 6, \(\mathcal{S}_{d^{\prime},u,j}\) selects \([d^{\prime}/u]\) neurons from \(d^{\prime}\) features, and the selection rule is based only on the location of the neurons, which is totally different from the classical max-pooling that selects the neurons with the largest values, or the average-pooling that synthesizes the averaged value of several neurons.
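A direct sketch of the location-based pooling operator (15) follows; index handling mirrors the paper's 1-indexed convention, with out-of-range entries set to zero.

```python
import numpy as np

def location_pool(v, u, j):
    """S_{d',u,j}: pick entries v_{ku+j}, k = 1..[d'/u]; zero when out of range."""
    d_prime = len(v)
    out = np.zeros(d_prime // u)
    for k in range(1, d_prime // u + 1):
        idx = k * u + j              # 1-indexed position in the paper's notation
        if idx <= d_prime:
            out[k - 1] = v[idx - 1]
    return out

v = np.arange(1.0, 11.0)             # d' = 10
print(location_pool(v, u=3, j=0))    # entries v_3, v_6, v_9  -> [3. 6. 9.]
print(location_pool(v, u=3, j=2))    # entries v_5, v_8, v_11 -> [5. 8. 0.]
```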
With the proposed location-based pooling scheme, we show that eDCNN succeeds in encoding the translation-equivalence (or translation-invariance) without sacrificing the performance of DFCN in extracting other features.

Fig. 5: Translation-invariance of image recognition.

Given \(d^{\prime},d,\ell\in\mathbb{N}\) satisfying \(\tilde{d}=d^{\prime}+\ell s\), \(\vec{b}\in\mathbb{R}^{d^{\prime}}\) and \(\vec{w}^{1},\ldots,\vec{w}^{L}\) supported on \(\{0,\ldots,s\}\) with \(L=\lceil\frac{dd^{\prime}}{s-1}\rceil\), for any \(x\in\mathbb{R}^{d^{\prime}}\), define a multi-layer convolutional operator with pooling by
\[\mathcal{B}_{L,d^{\prime},k}(x):=\mathcal{S}_{\tilde{d},d^{\prime},k}(\vec{w} ^{L}*\cdots*\vec{w}^{1}*x+\vec{b}). \tag{16}\]
The following proposition is the main result of this section.
**Proposition 2**.: _Let \(J,d\in\mathbb{N}\), \(2\leq s\leq d\), \(d_{0}=d\), \(d_{1},\ldots,d_{J}\in\mathbb{N}\), \(L_{j}=\lceil\frac{d_{j-1}d_{j}}{s-1}\rceil\), and \(x\) be supported on \(\{k,k+1,\ldots,k+p-1\}\) for some \(p,k\in\mathbb{N}\). If \(\vec{V}_{d_{1},\ldots,d_{J}}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d_{J}}\) defined by (3) is an arbitrary feature generated by DFCN, then there exist \(\sum_{j=1}^{J}L_{j}\) filter vectors \(\{\vec{w}^{j,\ell}\}_{\ell=1}^{L_{j}}\) supported on \(\{0,1,\ldots,s\}\) and \(J\) bias vectors \(\vec{b}^{j}\in\mathbb{R}^{d_{j}}\), \(j=1,\ldots,J\), such that for any \(1\leq i\leq d-p+1-k\) there holds_
\[\begin{split}\vec{V}_{d_{1},\ldots,d_{J}}(x)&=\sigma\circ\mathcal{B}_{L_{J},d_{J-1},0}\circ\cdots\circ\sigma\circ\mathcal{B}_{L_{1},d_{0},0}(x)\\ &=\sigma\circ\mathcal{B}_{L_{J},d_{J-1},0}\circ\cdots\circ\sigma\circ\mathcal{B}_{L_{2},d_{1},0}\circ\sigma\circ\mathcal{B}_{L_{1},d_{0},i}(A_{i,d}x).\end{split} \tag{17}\]
Due to the commutative law established in Lemma 1, it is easy to check that the representation presented in (17) is not unique. Noting that there are
\[L_{j}(s+1)=\lceil\frac{d_{j-1}d_{j}}{s-1}\rceil(s+1)\leq 3d_{j}d_{j-1},\]
free parameters involved in the multi-layer convolutional structure \(\vec{w}^{j,L_{j}}*\cdots*\vec{w}^{j,1}\), which is comparable with the \(d_{j}d_{j-1}\) parameters of the classical inner product structure, the location-based pooling scheme enables the affine mapping and the multi-layer convolutional mapping to have the same size without sacrificing performance in feature extraction. Therefore, Proposition 2 yields that, acting as a feature extractor, the convolutional structure together with a suitable pooling scheme outperforms the classical inner product structure in DFCN, in the sense that the former captures the translation-invariance structure without losing the performance of DFCN in extracting other features. A direct corollary of the above proposition is the following translation-invariance of eDCNN with pooling.
**Corollary 1**.: _Let \(L\in\mathbb{N}\), \(2\leq s\leq d\), \(1\leq p\leq d\) and \(\vec{w}^{\ell}\) with \(\ell=1,\ldots,L\) be supported on \(\{0,1,\ldots,s\}\). Then for any \(1\leq j,j^{\prime}\leq d-p+1\) with \(j^{\prime}\neq j\), there holds_
\[\mathcal{S}_{d_{L},d,j}(\vec{w}^{L}*\cdots*\vec{w}^{1}*\vec{v}_{p,d,j})=\mathcal{S}_{d_{L},d,j^{\prime}}(\vec{w}^{L}*\cdots*\vec{w}^{1}*\vec{v }_{p,d,j^{\prime}}). \tag{18}\]
It follows from Corollary 1 that a suitable pooling scheme not only reduces the number of free parameters in a network but also endows the network with important properties. Therefore, if the convolutional structure is employed in a network, designing an appropriate pooling scheme is crucial for improving its performance in feature extraction and learning. Another direct corollary of Proposition 2 is the following comparison between eDCNN with pooling and shallow nets.
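The pooled translation-invariance (18) can be checked in the same NumPy setting; here the scaling parameter equals \(d\), and the dimensions are chosen so that the pooled coordinate lies inside the support of the convolved signal. All concrete values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
d, s, p, L = 8, 3, 4, 2
ws = [rng.standard_normal(s + 1) for _ in range(L)]
base = rng.standard_normal(p)

def conv_stack(x):
    for w in ws:
        x = np.convolve(w, x, mode="full")   # expansive convolution
    return x

def location_pool(v, u, j):                  # Eq. (15), 1-indexed
    return np.array([v[k * u + j - 1] if k * u + j <= len(v) else 0.0
                     for k in range(1, len(v) // u + 1)])

pooled = []
for j in range(1, d - p + 2):                # translate the support to position j
    vj = np.zeros(d); vj[j - 1:j - 1 + p] = base
    pooled.append(location_pool(conv_stack(vj), u=d, j=j))
assert all(np.allclose(q, pooled[0]) for q in pooled)   # Eq. (18): identical features
```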
**Corollary 2**.: _Let \(2\leq s\leq d\), \(n\in\mathbb{N}\), \(x\) be supported on \(\{j,j+1,\ldots,j+p-1\}\) for some \(p,j\in\mathbb{N}\), \(\sigma(Wx+\vec{b})\) be a feature generated by a shallow net with \(n\times d\) weight matrix \(W\) and \(n\)-dimensional bias vector \(\vec{b}\). Then, there exist \(L^{*}=\lceil\frac{nd}{s-1}\rceil\) filter vectors \(\{\vec{w}^{\ell}\}_{\ell=1}^{L^{*}}\) supported on \(\{0,1,\ldots,s\}\) and a bias vector \(\vec{b}^{\prime}\in\mathbb{R}^{n}\) such that for any \(1\leq k\leq d-p+1-j\), there holds_
\[\sigma(Wx+\vec{b})=\sigma\circ\mathcal{B}_{L^{*},d,0}(x)=\sigma\circ\mathcal{B}_{L^{*},d,k}(A_{k,d}x),\]
_where \(A_{k,d}\) is the translation matrix satisfying (13)._
Corollary 2 shows that convolutional structures with an appropriate pooling scheme can provide the translation-invariance without sacrificing the feature extraction capability of classical shallow nets. It implies that no matter where \(x\) is supported, the multi-layer convolutional structure together with a suitable location-based pooling scheme yields the same high-quality output.
## 5 Performance of eDCNNs in Feature Extraction
As discussed in the above two sections, zero-padding and location-based pooling play important roles in improving the performance of the convolutional structure in feature extraction. However, the network provided in Proposition 2 uses both bias vectors and ReLU activation functions and is thus different from the eDCNN defined by (9). In this section, we show that adding suitable bias vectors produces a unified eDCNN structure of the form (9). We then present a comparison between eDCNN and DFCN in feature extraction.
Since adopting different bias values for different neurons in the same layer destroys the translation-equivalence of eDCNN, we study a subset of \(\mathcal{H}_{L,s}\) in which the entries of the bias vector within each layer are identical. For any \(d_{0}^{\prime}=d^{\prime}\in\mathbb{N}\) and \(d_{\ell}^{\prime}=d^{\prime}+\ell s\), define the restricted convolution operator by
\[\mathcal{C}^{R}_{\ell,\vec{w}^{\ell},b^{\ell}}(x):=\vec{w}^{\ell}*x+b^{\ell}\mathbf{1}_{d_{\ell}^{\prime}} \tag{19}\]
for \(\vec{w}^{\ell}\) supported on \(\{0,1,\ldots,s\}\), \(b^{\ell}\in\mathbb{R}\) and \(\mathbf{1}_{d_{\ell}^{\prime}}=(1,\ldots,1)^{T}\in\mathbb{R}^{d_{\ell}^{\prime}}\). Define further the restricted eDCNN feature-extractor from \(\mathbb{R}^{d^{\prime}}\) to \(\mathbb{R}^{d_{\ell}^{\prime}}\) by
\[\vec{V}^{s,R}_{\ell}(x):=\sigma\circ\mathcal{C}^{R}_{\ell,\vec{w}^{\ell},b^{\ell}}\circ\sigma\circ\cdots\circ\sigma\circ\mathcal{C}^{R}_{1,\vec{w}^{1},b^{1}}(x). \tag{20}\]
Fig. 6: Location-based pooling.
Denote by
\[\vec{V}^{s,R,d^{\prime},j}_{\ell}(x):=\mathcal{S}_{d^{\prime}_{\ell},d^{\prime}_{0},j}\circ\sigma\circ\mathcal{C}_{\ell,\vec{w}^{\ell},\vec{b}^{\ell}}\circ\sigma\circ\mathcal{C}^{R}_{\ell-1,\vec{w}^{\ell-1},b^{\ell-1}}\circ\sigma\circ\mathcal{C}^{R}_{\ell-2,\vec{w}^{\ell-2},b^{\ell-2}}\circ\cdots\circ\sigma\circ\mathcal{C}^{R}_{1,\vec{w}^{1},b^{1}}(x) \tag{21}\]
the restricted eDCNN feature extractor with depth \(\ell\), filter length \(s\), and location-based pooling in the \(\ell\)-th layer with scaling parameter \(d^{\prime}\) and location parameter \(j\in\mathbb{N}\). It should be highlighted that when pooling happens, that is, in the \(\ell\)-th layer, we use the convolutional operator (8) rather than the restricted one (19). In the following theorem, we show that eDCNN with pooling performs no worse in feature extraction than shallow nets with a similar number of parameters, while additionally succeeding in encoding the translation-invariance.
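To make the definitions (19)-(21) concrete, here is a minimal NumPy sketch of the restricted eDCNN feature extractor with pooling; \(\sigma\) is taken to be ReLU, and the weights are random placeholders rather than the tuned filters of the theorems below.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def location_pool(v, u, j):                    # Eq. (15), 1-indexed
    return np.array([v[k * u + j - 1] if k * u + j <= len(v) else 0.0
                     for k in range(1, len(v) // u + 1)])

def restricted_edcnn(x, ws, bs, b_last, u, j):
    """Restricted eDCNN feature extractor with pooling in the last layer, Eq. (21).
    ws: L filters supported on {0,...,s}; bs: L-1 shared scalar biases (Eq. (19));
    b_last: full bias vector of the unrestricted last layer."""
    for w, b in zip(ws[:-1], bs):
        x = relu(np.convolve(w, x, mode="full") + b)       # Eq. (19) + ReLU
    x = relu(np.convolve(ws[-1], x, mode="full") + b_last) # unrestricted last layer
    return location_pool(x, u, j)

rng = np.random.default_rng(3)
d, s, L = 10, 3, 4
ws = [rng.standard_normal(s + 1) for _ in range(L)]
bs = rng.standard_normal(L - 1)
b_last = rng.standard_normal(d + L * s)        # last-layer width d + L s
print(restricted_edcnn(rng.standard_normal(d), ws, bs, b_last, u=d, j=0))
```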
**Theorem 1**.: _Let \(2\leq s\leq d\), \(n\in\mathbb{N}\), \(L^{*}=\lceil\frac{nd}{s-1}\rceil\), \(d_{0}=d\), and \(d_{\ell}=d_{\ell-1}+s\). For any \(W\in\mathbb{R}^{n\times d}\) and \(\vec{\theta}\in\mathbb{R}^{n}\), there exist \(b^{\ell}\in\mathbb{R}\) for \(1\leq\ell\leq L^{*}-1\), \(\vec{b}^{L^{*}}\in\mathbb{R}^{d_{L^{*}}}\) and \(L^{*}\) filter vectors \(w^{\ell}\) supported on \(\{0,1,\ldots,s\}\) such that_
\[\sigma(Wx+\vec{\theta})=\vec{V}^{s,R,d,0}_{L^{*}}(x) \tag{22}\]
_If in addition \(x\) is supported on \(\{j,j+1,\ldots,j+p-1\}\) for some \(p,j\in\mathbb{N}\), then for any \(1\leq k\leq d-p+1\), there holds_
\[\vec{V}^{s,R,d,0}_{L^{*}}(x)=\vec{V}^{s,R,d,k}_{L^{*}}(A_{k,d}x). \tag{23}\]
Theorem 1 presents three advantages of eDCNN over shallow nets. First, as far as feature extraction is concerned, eDCNN performs no worse than shallow nets in the sense that it can exactly represent any feature extracted by a shallow net with a comparable number of free parameters. Second, it follows from (23) that with a suitable pooling mechanism, eDCNN succeeds in encoding the translation-invariance into the structure without tuning weights, which is totally different from other network structures. Third, it can be derived from (22) and (23) that even with the structural constraints, the approximation capability of eDCNN is at least no worse than that of shallow nets. In particular, denote by
\[\mathcal{H}^{s,R,d,j}_{\ell}:=\left\{h(x)=\vec{a}\cdot\vec{V}^{s,R,d,j}_{\ell}(x):\vec{a},\vec{b}^{\ell}\in\mathbb{R}^{n},b^{k}\in\mathbb{R}\right\} \tag{24}\]
the set of restricted eDCNNs with pooling defined by (21), with \(\{\vec{w}^{k}\}_{k=1}^{\ell}\) supported on \(\{0,1,\ldots,s\}\). We can derive the following corollary from Theorem 1 directly.
**Corollary 3**.: _Let \(2\leq s\leq d\), \(n\in\mathbb{N}\), \(L^{*}=\lceil\frac{nd}{s-1}\rceil\), \(d_{0}=d\) and \(d_{\ell}=d_{\ell-1}+s\). Then, for any \(f\in C(\mathbb{I}^{d})\), we have_
\[\text{dist}(f,\mathcal{H}^{s,R,d,j}_{\ell})\leq\text{dist}(f,\mathcal{H}_{n}), \tag{25}\]
_where \(\text{dist}(f,\mathcal{H}):=\inf_{g\in\mathcal{H}}\|f-g\|_{C(\mathbb{I}^{d})}\) denotes the distance between the function \(f\) and the set \(\mathcal{H}\) in \(C(\mathbb{I}^{d})\) and \(\mathcal{H}_{n}\) is the set of shallow nets with \(n\) neurons in the hidden layer._
Theorem 1 and Corollary 3 illustrate the outperformance of eDCNN over shallow nets. In the following, we aim to compare eDCNN with DFCN in feature extraction. The following theorem is our second main result.
**Theorem 2**.: _Let \(2\leq s\leq d\), \(L\in\mathbb{N}\), \(d_{1}^{*},\ldots,d_{L}^{*}\in\mathbb{N}\) and \(d_{0}^{*}=d\). If \(L_{\ell}=\lceil\frac{d_{\ell}^{*}d_{\ell-1}^{*}}{s-1}\rceil\) for \(\ell=1,2,\ldots,L\), then for any \(\vec{V}_{d_{1}^{*},\ldots,d_{L}^{*}}(x)\) defined by (3), there exist filter vectors \(\{\vec{w}^{\ell,k}\}_{k=1}^{L_{\ell}}\) supported on \(\{0,1,\ldots,s\}\), constants \(b_{\ell,k}\in\mathbb{R}\) and bias vectors \(\vec{b}^{\,L_{\ell}}\in\mathbb{R}^{d_{L_{\ell}}}\) with \(d_{L_{\ell}}=d+L_{\ell}s\), \(\ell=1,\ldots,L\), such that_
\[\vec{V}_{d_{1}^{*},\ldots,d_{L}^{*}}(x)=\vec{V}^{s,R,d_{L-1}^{*},0}_{L_{L}}\circ\vec{V}^{s,R,d_{L-2}^{*},0}_{L_{L-1}}\circ\cdots\circ\vec{V}^{s,R,d_{0}^{*},0}_{L_{1}}(x). \tag{26}\]
_If in addition \(x\) is supported on \(\{j,j+1,\ldots,j+p-1\}\) for some \(p,j\in\mathbb{N}\), then for any \(1\leq k\leq d-p+1\), there holds_
\[\vec{V}^{s,R,d_{L-1}^{*},0}_{L_{L}}\circ\vec{V}^{s,R,d_{L-2}^{*},0}_{L_{L-1}}\circ\cdots\circ\vec{V}^{s,R,d_{0}^{*},0}_{L_{1}}(x)=\vec{V}^{s,R,d_{L-1}^{*},0}_{L_{L}}\circ\vec{V}^{s,R,d_{L-2}^{*},0}_{L_{L-1}}\circ\cdots\circ\vec{V}^{s,R,d_{0}^{*},k}_{L_{1}}(A_{k,d}x). \tag{27}\]
It should be mentioned that the only difference between eDCNN representing shallow nets and eDCNN representing DFCN is the number of pooling operations. In fact, to represent an \(L\)-layer DFCN, eDCNN requires \(L\) location-based pooling operations. Theorem 2 shows that eDCNN with a suitable pooling scheme can represent any DFCN with a comparable number of free parameters, while additionally encoding the translation-invariance in the network structure. This demonstrates the advantage of eDCNN over DFCN. Denote by \(\mathcal{H}^{s,R,d_{L-1}^{*},\ldots,d_{0}^{*},0}_{L_{L},\ldots,L_{1}}\) the set of all eDCNNs formed as
\[\vec{a}\cdot\vec{V}^{s,R,d_{L-1}^{*},0}_{L_{L}}\circ\vec{V}^{s,R,d_{L-2}^{*},0}_{L_{L-1}}\circ\cdots\circ\vec{V}^{s,R,d_{0}^{*},0}_{L_{1}}(x).\]
We obtain from Theorem 2 the following corollary directly.
**Corollary 4**.: _Let \(2\leq s\leq d\), \(L\in\mathbb{N}\), \(d_{1}^{*},\ldots,d_{L}^{*}\in\mathbb{N}\) and \(d_{0}^{*}=d\). If \(L_{\ell}=\lceil\frac{d_{\ell}^{*}d_{\ell-1}^{*}}{s-1}\rceil\) for \(\ell=1,2,\ldots,L\), then for any \(f\in C(\mathbb{I}^{d})\), we have_
\[\text{dist}(f,\mathcal{H}^{s,R,d_{L-1}^{*},\ldots,d_{0}^{*},0}_{L_{L},\ldots,L_{1}})\leq\text{dist}(f,\mathcal{H}_{d_{1}^{*},\ldots,d_{L}^{*}}). \tag{28}\]
With the help of the location-based pooling scheme and convolutional structures, it is easy to check that the numbers of free parameters involved in \(\mathcal{H}^{s,R,d_{L-1}^{*},\ldots,d_{0}^{*},0}_{L_{L},\ldots,L_{1}}\) and \(\mathcal{H}_{d_{1}^{*},\ldots,d_{L}^{*}}\) are of the same order, which implies that the approximation capability of eDCNN is at least no worse than that of DFCN. The problem, however, is that it is difficult to set the sizes of the pooling operations, since \(\{d_{j}^{*}\}_{j=1}^{L}\) for a specific learning task is usually unknown, making them crucial parameters in eDCNN. The following corollary establishes the universal approximation property of eDCNN with specified \(\{d_{j}^{*}\}_{j=1}^{L}\).
**Corollary 5**.: _Let \(2\leq s\leq d\), \(\ell_{1}=\lceil\frac{d(d+1)}{s-1}\rceil\) and \(\ell_{j}=\lceil\frac{(d+1)^{2}}{s-1}\rceil\) for \(j\geq 2\). For an arbitrary \(\varepsilon>0\) and any \(f\in C(\mathbb{I}^{d})\), there exists an \(L\in\mathbb{N}\) such that_

\[\text{dist}(f,\mathcal{H}^{s,R,d+1,\ldots,d+1,0}_{\ell_{L},\ldots,\ell_{1}})\leq\varepsilon.\]

## 6 Universal Consistency of eDCNN

In this section, we study the learning performance of eDCNN with pooling. For the sake of brevity, we focus on the least-square regression framework [31, 32], although our results also hold for other loss functions.
In least-square regression [31, 32], samples in the data set \(D:=\{z_{i}\}_{i=1}^{m}:=\{(x_{i},y_{i})\}_{i=1}^{m}\) are assumed to be drawn independently and identically distributed (i.i.d.) from an unknown but definite probability distribution \(\rho\) on \(Z:=\mathcal{X}\times\mathcal{Y}\), where \(\mathcal{X}=\mathbb{I}^{d}\) and \(\mathcal{Y}\subseteq\mathbb{R}\). The aim is to learn a function \(f_{D}\) based on \(D\) that minimizes the generalization error \(\mathcal{E}(f):=\int_{Z}(f(x)-y)^{2}d\rho\), which is theoretically minimized by the well-known regression function [31] \(f_{\rho}(x):=\int_{\mathcal{Y}}yd\rho(y|x)\). Since \(\rho\) is unknown, its conditional mean \(f_{\rho}\) is also unknown. Our aim is then to find an estimator \(f_{D}\) that minimizes
\[\mathcal{E}(f)-\mathcal{E}(f_{\rho})=\|f-f_{\rho}\|_{L^{2}_{\rho_{X}}}^{2}, \tag{29}\]
where \(\rho_{X}\) is the marginal distribution of \(\rho\) on \(\mathcal{X}\).
One of the most important properties that a learner should have is that, as the sample size \(m\) grows, the deduced estimator converges to the real relation between the input and output. This property, featured as the strongly universal consistency [31], can be defined as follows.
**Definition 2**.: _A sequence of regression estimators \(\{f_{m}\}_{m=1}^{\infty}\) is called strongly universally consistent, if_
\[\lim_{m\to\infty}\mathcal{E}(f_{m})-\mathcal{E}(f_{\rho})=0\]
_holds with probability one for all Borel probability distributions \(\rho\) satisfying \(\int_{\mathcal{Y}}y^{2}d\rho(y|x)<\infty\)._
In our previous work [30], we showed that running empirical risk minimization (ERM) on eDCNN without pooling yields strongly universally consistent learners. In this section, we show that imposing a pooling scheme on eDCNN maintains the strong universal consistency. For this purpose, we build up the learner via ERM:
\[f^{s,R,d_{L-1}^{*},\ldots,d_{0}^{*},0}_{D,L_{L},\ldots,L_{1}}:=\arg\min_{f\in\mathcal{H}^{s,R,d_{L-1}^{*},\ldots,d_{0}^{*},0}_{L_{L},\ldots,L_{1}}}\mathcal{E}_{D}(f), \tag{30}\]
where \(\mathcal{E}_{D}(f)=\frac{1}{|D|}\sum_{i=1}^{|D|}(f(x_{i})-y_{i})^{2}\) denotes the empirical risk of \(f\). As shown in Definition 2, except for \(\int_{\mathcal{Y}}y^{2}d\rho(y|x)<\infty\), there is no other restriction on \(y\), making the analysis quite difficult. A preferable way is to consider a truncation operator defined by \(\pi_{M}t=\min\{M,|t|\}\cdot\operatorname{sgn}(t)\) applied to \(y\), i.e., \(y_{M_{D}}:=\pi_{M_{D}}y\) with \(M_{D}\to\infty\). Therefore, our final estimator is \(\{\pi_{M_{D}}f^{s,R,d_{L-1}^{*},\ldots,d_{0}^{*},0}_{D,L_{L},\ldots,L_{1}}\}_{|D|=1}^{\infty}\). It should be mentioned that such a truncation operator is widely adopted in demonstrating the universal consistency of numerous learning schemes [30, 31], including local average regression, linear least squares, shallow neural network learning and eDCNN without pooling. The following theorem presents sufficient conditions on designing the pooling scheme and selecting the depth of eDCNN to guarantee the strong universal consistency.
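The truncation operator is elementary; a one-line sketch:

```python
import numpy as np

def truncate(t, M):
    """pi_M t = min(M, |t|) * sgn(t), applied elementwise to predictions."""
    return np.sign(t) * np.minimum(M, np.abs(t))

print(truncate(np.array([-3.0, 0.5, 2.0]), M=1.0))   # [-1.   0.5  1. ]
```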
**Theorem 3**.: _Let \(\theta\in(0,1/2)\) be an arbitrary real number, \(2\leq s\leq d\), and \(L:=L_{D}\in\mathbb{N}\), \(d_{1}^{\ast},\dots,d_{L}^{\ast}\in\mathbb{N}\) possibly depending on \(|D|\). If \(d_{1}^{\ast},\dots,d_{L}^{\ast}\geq d+1\), \(\sum_{j=1}^{L}d_{j-1}^{\ast}d_{j}^{\ast}\to\infty\), \(M=M_{D}\to\infty\), \(M_{D}^{2}|D|^{-\theta}\to 0\) and_
\[\frac{M_{D}^{4}\left(\sum_{j=1}^{L}d_{j-1}^{\ast}d_{j}^{\ast} \right)^{2}\log(L_{D}d_{\max}^{\ast})\log(M_{D}|D|)}{|D|^{1-2\theta}}\to 0, \tag{31}\]
_then \(\pi_{M_{D}}f^{s,R,d_{L-1}^{*},\ldots,d_{0}^{*},0}_{D,L_{L},\ldots,L_{1}}\) is strongly universally consistent. If in addition \(x\) is supported on \(\{j,j+1,\dots,j+p-1\}\) for some \(p,j\in\mathbb{N}\), then for any \(1\leq k\leq d-p+1\), there holds_

\[\pi_{M_{D}}f^{s,R,d_{L-1}^{*},\ldots,d_{0}^{*},0}_{D,L_{L},\ldots,L_{1}}(x)=\pi_{M_{D}}f^{s,R,d_{L-1}^{*},\ldots,d_{0}^{*},k}_{D,L_{L},\ldots,L_{1}}(A_{k,d}x). \tag{32}\]
Theorem 3 presents sufficient conditions on the structure and the pooling scheme for verifying the strong universal consistency of eDCNN. Different from [30], Theorem 3 admits different pooling schemes that guarantee the universal consistency. Theorem 3 shows that with arbitrarily many (\(L\to\infty\)) location-based pooling operators, eDCNN can reduce the width of the network from infinite to finite. The role of the location-based pooling is not only to trim the network and consequently reduce the number of parameters, but also to enhance the translation-invariance of the network structure, as shown in (32). If we set \(L=1\), then Theorem 3 is similar to [30, Theorem 1]. If we set \(d_{1}^{\ast}=\dots=d_{L}^{\ast}=d+1\), then the condition (31) reduces to
\[\frac{M_{D}^{4}L_{D}^{2}\log(L_{D})\log(M_{D}|D|)}{|D|^{1-2\theta}}\to 0, \tag{33}\]
which shows that eDCNN with the structure in Corollary 5 (or Figure 7) is universally consistent.
## 7 Numerical Verifications
In this section, we conduct both toy simulations and real data experiments to verify the excellent performance of eDCNN, compared with cDCNN and DFCN. In all these simulations, we train the networks with an Adam optimizer
Fig. 7: The unified structure and specific pooling scheme.
for 2000 epochs. The piece-wise learning rates are [0.003, 0.001, 0.0003, 0.0001], changing at epochs 800, 1200 and 1500. Each experiment is repeated 10 times; runs in which the training loss does not decrease are treated as outliers and eliminated, and we report the average loss results. All the following experiments are conducted on a single Nvidia GeForce RTX 3090 GPU. Code for our experiments is available at [https://github.com/liubc17/eDCNN_zero_padding](https://github.com/liubc17/eDCNN_zero_padding).
There are mainly six purposes in our experiments. The first is to show the advantage of the expansive convolutional structure in (7) over the inner product structure in DFCN; the second is to show the outperformance of eDCNN in extracting translation-equivalent features, compared with DFCN and cDCNN; the third is to evaluate the ability of eDCNN in learning clean data; the fourth is to study the performance of eDCNN in learning noisy data; the fifth is to show the universal consistency of eDCNN; the final purpose is to verify the feasibility of eDCNN on two real-world datasets for human activity recognition and heartbeat classification.
### _Advantages of the expansive convolutional structure_
In this simulation, we verify the expansive convolutional structure by comparing it with the inner product structure in DFCN and the contracting convolutional structure in cDCNN. Our basic idea is to show that the expansive convolutional structure is capable of representing the inner product structure with much fewer free parameters, but not vice versa. For this purpose, we generate three implicit functions, denoted as "f1", "f2" and "f3" in Figure 8, where "f1" is generated by a 5-layer randomly initialized cDCNN, "f2" by a 5-layer randomly initialized eDCNN, and "f3" by a 5-layer randomly initialized fully connected network. The only difference between "f1" and "f2" is that "f2" pads 2 zeros on both sides in the convolution process. For training each function, 900 training samples and 100 test samples are generated. Each sample is a 30-dimensional vector with 5 consecutive entries uniformly distributed in [0, 1) and zero for the others.
We use multi-level convolutional structures and fully connected structures to fit the three functions. As shown in Table I, we use 1-layer and 2-layer fully connected (denoted as "fc") networks as baselines, and 1-block and 2-block multi-level expansive convolutional (denoted as "multi-conv") structures to factorize the corresponding 1-layer and 2-layer fully connected networks. Each layer of the 1-layer and 2-layer fully connected networks has 10 units and uses ReLU activation, and each block of the 1-block and 2-block multi-level cDCNN or eDCNN convolutional structures has 5 convolution layers. Each layer has 1 filter of filter size 3. The first 4 layers of each block use neither bias nor activation; only after each block is there a bias and an activation. To match the feature dimension of multi-level eDCNN, a max pooling with pooling size 4 is used after the first block and a max pooling with pooling size 2 is used after the second block.
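A minimal PyTorch sketch of the 1-block multi-level eDCNN configuration described above is given here. The hyperparameters (5 layers, 1 filter of size 3, padding 2, pooling size 4) follow the text; the module names and everything else are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class MultiConvBlockEDCNN(nn.Module):
    """One 'multi-conv' eDCNN block: 5 conv layers (1 filter, size 3, 2 zeros padded
    on both sides), bias and ReLU only after the block, then max pooling."""
    def __init__(self, pool_size=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(1, 1, kernel_size=3, padding=2, bias=False) for _ in range(5)
        )
        self.bias = nn.Parameter(torch.zeros(1))
        self.act = nn.ReLU()
        self.pool = nn.MaxPool1d(pool_size)

    def forward(self, x):              # x: (batch, 1, d)
        for conv in self.convs:
            x = conv(x)                # no bias/activation inside the block
        return self.pool(self.act(x + self.bias))

x = torch.randn(8, 1, 30)              # 30-dimensional inputs as in the simulation
print(MultiConvBlockEDCNN()(x).shape)  # length 30 + 5*2 = 40, pooled to 10
```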
Four interesting phenomena can be found in Table I: 1) The multi-level expansive convolutional structure performs at least no worse than the inner product structure in approximating all three functions with much fewer parameters, showing that replacing the inner product structure with the expansive convolutional structure succeeds in reducing the number of parameters without sacrificing the performance of feature extraction; 2) In fitting "f1" and "f2", which possess some translation-equivalence, multi-level convolutional structures achieve much lower test loss than their inner product counterparts, exhibiting the advantage of the convolutional structure in extracting the translation-equivalence; 3) In fitting "f3", although the 1-block multi-level convolutional structure has slightly higher test loss than the 1-layer "fc", the 2-block multi-level convolutional structure achieves the smallest loss. This is mainly due to the fact that "f3" is generated by fully connected networks, so fitting "f3" is easier for the inner product structure. This phenomenon illustrates that with more hidden layers, the inner product can be represented by the multi-convolutional structure, just as Proposition 2 says; 4) The expansive convolutional structure always performs better than the contracting convolutional structure, especially in the 1-block setting. The reason is that the contracting nature limits the representation performance of the corresponding network, although it may encode some invariance into the structure. All these demonstrate the power of the expansive convolutional structure and verify our theoretical assertions in Section 3 and Section 4.
### _eDCNNs for translation-equivalence data_
In this simulation, we verify the outperformance of eDCNNs over DFCNs and cDCNNs in approximating translation-equivalent functions. The used translation-equivalent functions are adapted from "f1" and "f2" in Figure 8 and denoted as "f1m" and "f2m"; the only difference is that "f1m" and "f2m" use the same bias value 0.01 after each convolutional layer. We use fully connected networks, cDCNN and eDCNN to fit "f1m" and "f2m". The depth of the used fully connected networks, cDCNN and eDCNN is set to 4. Each fully connected layer has 10 units and uses ReLU activation. Each convolutional layer of cDCNN and eDCNN has 1 filter of filter size 3, and eDCNN pads 2 zeros on both sides in the convolution process. 900 training samples and 100 test
TABLE I: The fitting results of 3 functions.

| Fit function | Network config. | Test Loss | Params. |
| --- | --- | --- | --- |
| function "f1" | 1-layer fc | 5.96e-3 | 300 |
| function "f1" | 1-block multi-conv cDCNN | 5.67e-3 | 16 |
| function "f1" | 1-block multi-conv eDCNN | 2.23e-3 | 16 |
| function "f1" | 2-layer fc | 7.74e-3 | 400 |
| function "f1" | 2-block multi-conv cDCNN | 2.95e-3 | 32 |
| function "f1" | 2-block multi-conv eDCNN | 2.40e-3 | 32 |
| function "f2" | 1-layer fc | 1.39e-2 | 300 |
| function "f2" | 1-block multi-conv cDCNN | 8.95e-3 | 16 |
| function "f2" | 1-block multi-conv eDCNN | 8.66e-3 | 16 |
| function "f2" | 2-layer fc | 7.06e-3 | 400 |
| function "f2" | 2-block multi-conv cDCNN | 2.89e-3 | 32 |
| function "f2" | 2-block multi-conv eDCNN | 2.29e-3 | 32 |
| function "f3" | 1-layer fc | 1.04e-3 | 300 |
| function "f3" | 1-block multi-conv cDCNN | 8.30e-3 | 16 |
| function "f3" | 1-block multi-conv eDCNN | 1.18e-3 | 16 |
| function "f3" | 2-layer fc | 1.16e-3 | 400 |
| function "f3" | 2-block multi-conv cDCNN | 7.53e-3 | 32 |
| function "f3" | 2-block multi-conv eDCNN | 9.30e-4 | 32 |
samples are generated for our purpose. Each sample is a 30-dimensional vector with 5 consecutive entries uniformly distributed in [0, 1) and zero for the others.
The fitting RMSE loss results of the three network configurations are shown in Figure 9 and Figure 10, where the "Network" axis indicates the network configuration and the "Pooling-Bias" axis indicates the use of pooling and bias: "W" denotes using pooling, "W/o" denotes not using pooling, and "T", "F" and "S" denote trainable, non-trainable and restricting the trainable bias vector to be the same, respectively. The only difference between Figure 9 and Figure 10 is that, for the 100 test samples in Figure 10, we restrict the 5 consecutive entries to lie on the edge of the 30-dimensional vector, to probe the translation-equivalence of eDCNN.
There are mainly five observations presented in the above figures: 1) In approximating translation-equivalent functions, both cDCNN and eDCNN perform much better than DFCN. The main reason is that cDCNN and eDCNN encode the translation-equivalence in the network structure1, which is beyond the capability of DFCN; 2) In Figure 9, where the test points are drawn far from the edge, eDCNN performs at least no worse than cDCNN, while in Figure 10, where the test points are drawn on the edge, eDCNN is much better than cDCNN, showing the advantage of eDCNN over cDCNN in encoding the translation-equivalence and verifying Lemma 3 and Proposition 1; 3) eDCNN with pooling and restricted trainable bias vectors performs stably on all four tasks. The main reason is that eDCNN with restricted trainable bias vectors does not destroy the translation-equivalence of the network and is therefore more suitable for approximating translation-equivalent functions, while "f1m" and "f2m" are biased by 0.01 after each convolutional layer, which verifies Theorem 2; 4) Though the effect of pooling is a little unstable across learning tasks and bias vector selections, it is efficient for the proposed eDCNN with restricted trainable bias vectors, which also verifies Theorem 2; 5) As far as the bias vector selection is concerned, restricting the bias entries to be constant within each hidden layer generally outperforms the other two methods, exhibiting a "less is more" phenomenon in the sense that fewer tunable parameters may lead to better approximation performance. This is due to the fact that parameterizing all elements of the bias vector destroys the translation-equivalence of eDCNN. All these show the advantages of the proposed eDCNN and verify our theoretical assertions in Section 5.
Footnote 1: If the support of the input is far away from the edge, then cDCNN also possesses the translation-equivalence.
### _Learning ability of eDCNN for clean data_
In this section, we show the learning performance of eDCNN in learning clean data \(y_{i}=f(x_{i})\), for \(f\) being either \(f_{1}(x)=x_{1}+x_{2}+x_{3}x_{4}+x_{2}^{2}\), where the entries of \(x\) are uniformly distributed in \((-1,1)\), or \(f_{2}(x)=x_{1}x_{2}x_{3}x_{4}x_{5}+x_{2}x_{3}x_{4}x_{5}x_{6}+\cdots+x_{26}x_{27}x_{28}x_{29}x_{30}\), with 5 consecutive entries uniformly distributed in \([0,1)\) and zero for the others. It is easy to check that \(f_{2}\) is translation-invariant while \(f_{1}\) does not have any transformation-invariance property. For each problem, 1000 training samples and 100 test samples are generated randomly.
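As a sketch of the data generation just described: the paper only specifies 5 consecutive uniform entries for \(f_{2}\), so drawing the support location uniformly at random is our own assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def f2(x):
    # Sum of sliding products of 5 consecutive coordinates (translation-invariant).
    return sum(np.prod(x[i:i + 5]) for i in range(len(x) - 4))

def sample_f2(n, d=30):
    X = np.zeros((n, d))
    starts = rng.integers(0, d - 4, size=n)           # random support location
    for i, s0 in enumerate(starts):
        X[i, s0:s0 + 5] = rng.uniform(0.0, 1.0, 5)    # 5 consecutive uniform entries
    y = np.array([f2(x) for x in X])
    return X, y

X_train, y_train = sample_f2(1000)
X_test, y_test = sample_f2(100)
```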
To show the power of eDCNN, we consider the following 5 network configurations:
1) fully connected networks (denoted as DFCN). Each fully connected layer has 10 units and uses ReLU activation.
2) contracted DCNN (denoted as cDCNN). Each convolutional layer has 1 filter with filter size 3. Each layer has a trainable bias restricted to share the same value, except that the last layer has a trainable bias without any restriction. Each layer does not use padding and is followed by a ReLU activation.
3) contracted DCNN followed by one fully connected layer (denoted as "cDCNN+fc"). The setting is the same as cDCNN configuration, except that there is a fully connected layer after the convolution module. The number of units of the fully connected layer is the same as the output dimension of the convolution module after flatten operation.
4) expansive DCNN (denoted as eDCNN). Each convolutional layer has 1 filter with filter size 3. Each layer has a trainable bias restricted to share the same value, except that the last layer has a trainable bias without any restriction. Each layer pads 2 zeros on both sides in the convolution process and is followed by a ReLU activation.
5) expansive DCNN followed by one pooling layer (denoted as "eDCNN+pl"). The setting is the same as eDCNN configuration, except that there is a max pooling with pooling size 2 after the convolution module.
Fig. 8: The three fitted functions. "f1" is generated by a 5-layer randomly initialized cDCNN, "f2" by a 5-layer randomly initialized eDCNN, and "f3" by a 5-layer randomly initialized fully connected network.
The numerical results can be found in Figure 11. There are three interesting observations shown in the figure: 1) eDCNN as well as "eDCNN+pl" always performs better than the other two structures on both translation-invariant and non-translation-invariant data. Due to its universality in approximation and learning, eDCNN is capable of learning arbitrary regression functions regardless of whether they possess some transformation-invariance, which is far beyond the capability of cDCNN; 2) cDCNN performs worse than DFCN in learning \(f_{1}\) but better than DFCN in learning \(f_{2}\), although cDCNN does not possess the universality in approximation and learning. This is due to the fact that \(f_{2}\) possesses translation-invariance, and the convolutional structure easily captures this invariance while the classical inner product structure fails; 3) eDCNN performs a little better than the popular "cDCNN+fc" in both learning tasks, showing that eDCNN would be a preferable alternative to the classical setting of convolutional neural networks. All these verify our theoretical setting in Section 6 for clean data.
TABLE II: The learning results of 5 network architectures.

| Test Position | DFCN | cDCNN | cDCNN+fc | eDCNN | eDCNN+pl |
| --- | --- | --- | --- | --- | --- |
| beginning | 0.0578 | 0.0496 | 0.0447 | 0.0401 | 0.0406 |
| middle | 0.0597 | 0.0430 | 0.0467 | 0.0401 | 0.0406 |
| end | 0.0698 | 0.0556 | 0.0426 | 0.0385 | 0.0415 |
Fig. 10: Fitting “f1m” and “f2m” of DFCN, cDCNN and eDCNN on the position restricted test samples.
Fig. 9: Fitting “f1m” and “f2m” of DFCN, cDCNN and eDCNN
### _Learning ability of eDCNN for noisy data_
In this simulation, we show the outperformance of eDCNN in learning noisy data. The data are generated by \(y_{i}=f(x_{i})+\sigma_{i}\) with \(f_{3}(x)=\sin{(x_{1}^{2}+\cdots+x_{30}^{2})}+\frac{1}{2}\cos{(x_{1}^{2}+\cdots+x_{30}^{2})}\), where only 5 consecutive entries are uniformly distributed in [0, 1) and the others are zero, and \(\sigma_{i}\) is drawn i.i.d. from the Gaussian distribution \(\mathcal{N}(0,0.01)\). To show the difference between cDCNN and eDCNN in reflecting the translation-invariance, we evaluate on different positions of the 5 consecutive entries. Our numerical results can be found in Table II, where test position "beginning" denotes that the 5 consecutive entries are at positions 1 to 5, "middle" denotes positions 13 to 17, and "end" denotes positions 26 to 30. We also examine the role of depth for the mentioned five structures in Figure 12.
From Table II and Figure 12, it is safe to draw the following two conclusions: 1) eDCNN as well as "eDCNN+pl" performs better than DFCN and cDCNN at all three test positions. Especially for the "beginning" and "end" positions, eDCNN performs much better. This is due to the fact that cDCNN does not encode the translation-equivalence when the support of the function is at the edge, showing the necessity of zero-padding; 2) eDCNN as well as "eDCNN+pl" performs at least no worse than "cDCNN+fc". Furthermore, it follows from Figure 12 that eDCNN and "eDCNN+pl" behave stably with respect to the depth, similarly to "cDCNN+fc". Both observations illustrate that zero-padding is a feasible and efficient alternative to the fully connected layer in learning noisy data.
### _Universal consistency of eDCNN_
In this simulation, we show the universal consistency of eDCNN by examining the relation between the generalization error and the size of the data. In particular, we report this relation on both clean and noisy data for the mentioned 5 network configurations. In detail, we report the smallest loss among different network depths for learning \(f_{2}\), and set the depth to 6 for learning \(f_{3}\). The reason for the different settings is the depth stability shown in the previous subsection. The numerical results can be found in Figure 13.
There are also three interesting observations exhibited in Figure 13: 1) For \(f_{2}\), the test loss curve of eDCNN is much lower than the others while "eDCNN+pl" achieves the second-best results, which validates that eDCNN has better learning capability than DFCNs and cDCNNs in learning clean translation-invariant data; 2) For \(f_{3}\), eDCNN, "eDCNN+pl" and "cDCNN+fc" are better than DFCN and cDCNN, mainly due to the failure of DFCN and cDCNN to capture translation-invariance when \(f_{3}\) is supported at the edge; 3) RMSE roughly decreases with respect to the number of training samples, demonstrating the consistency of the deep net estimates. All these verify our theoretical assertions in Section 6.
### _Real data experiments_
In this part, we aim at showing the usability and efficiency of eDCNN on two real-world datasets.
#### 7.6.1 WISDM dataset
For the human activity recognition task, we evaluate on the WISDM dataset, which includes 1,098,207 samples with 6 categories of movement (walking, running, jogging, etc.).
The network structures are as follows. There are 60 units in each fully connected layer, and 20 filters with filter length 9 in the convolutional layers of cDCNN, "cDCNN+fc", eDCNN and "eDCNN+pl". The number of units of the fully connected layer of "cDCNN+fc" is set to 100. The pooling size of the max pooling layer of "eDCNN+pl" is set to 2, and the max pooling layer lies after the convolutional module. We train the network with an Adam optimizer for 150 epochs. The batch size is set to 512.
Figure 14 shows the classification accuracy results of the 5 network configurations on the WISDM dataset varying with depth. It can be found that "eDCNN+pl" achieves the highest test accuracy and maintains high test accuracy across different depths, which shows the good learning capacity of eDCNN with an appropriate pooling scheme. DFCN is not good at dealing with data that contains temporal information. Even when dealing with such data, cDCNN
Fig. 11: Learning performance of mentioned approaches on clean data
Fig. 12: Relation between RMSE of test position “beginning” and network depth. The number of training samples is 20000.
outperforms "cDCNN+fc" overall, which further validates the disadvantage of the fully connected layer in dealing with such data.
#### 7.6.2 ECG heartbeat dataset
For the ECG heartbeat classification task, we evaluate on the MIT-BIH Arrhythmia Database and the PTB Diagnostic ECG Database. An ECG is a 1D signal that results from recording the electrical activity of the heart using an electrode. It is a very useful tool that cardiologists use to diagnose heart anomalies and diseases. The MIT-BIH Arrhythmia dataset includes 109,446 samples with 5 categories, and the PTB Diagnostic ECG Database includes 14,552 samples with 2 categories. Similarly, the network depth of DFCN, cDCNN, "cDCNN+fc", eDCNN and "eDCNN+pl" varies from 1 to 10. Each fully connected layer has 80 units. Each convolutional layer has 16 filters with filter length 6. The number of units of the fully connected layer is 64. The pooling size of the max pooling layer of "eDCNN+pl" is 2, and the max pooling layer lies after the first convolutional layer. We train the network with an Adam optimizer for 100 epochs.
Fig. 16: Classification accuracy results of the 5 network configurations on PTB dataset.
Fig. 14: Classification accuracy results of the 5 network configurations on WISDM dataset.
Fig. 13: Relation between RMSE and data size for \(f_{2}\) and \(f_{3}(x)\) with test position “beginning”.
Fig. 15: Classification accuracy results of the 5 network configurations on the MIT-BIH dataset.
Figure 15 shows the classification accuracy results of the 5 network configurations on the MIT-BIH dataset varying with depth. "eDCNN+pl" achieves the highest test accuracy and maintains high test accuracy at large network depths, which shows the good learning capacity of eDCNN with an appropriate pooling scheme. DFCN achieves good and stable results across different network depths. The relatively poor results of cDCNN and "cDCNN+fc" may be due to the information loss when convolving at the edge of the feature without bias.
Figure 16 shows the classification accuracy results of the 5 network configurations on the PTB dataset varying with depth. DFCN is not good at dealing with this dataset. The other four configurations achieve good and very close results, which may be because this dataset is relatively easy to classify.
## 8 Conclusion
In this paper, we studied the representation and learning performance of deep convolutional neural networks with zero-padding (eDCNN). After a detailed analysis of the roles of zero-padding, pooling and bias vectors, we found that eDCNN succeeds in encoding the translation-equivalence (or translation-invariance for eDCNN with pooling) without sacrificing the universality of DFCNs in approximation and learning. This demonstrates that eDCNN is essentially better than DFCN in translation-equivalence related applications, since the network structure itself can reflect this data feature without any training. Noting that deep convolutional neural networks without zero-padding (cDCNN) do not possess the universality, we found that eDCNN is essentially better than cDCNN in the sense that eDCNN can extract any data feature if it is sufficiently deep, which is beyond the capability of cDCNN. The same holds for extracting the translation-equivalence (or translation-invariance), in the sense that cDCNN fails to encode it when the support of the data lies on the edge. All these findings, together with solid theoretical analysis and extensive numerical experiments, illustrate the advantages of eDCNN over cDCNN and DFCN, and provide a unified network structure with theoretical verification for feature extraction and learning purposes.
## Acknowledgement
Z. Han was partially supported by the National Key Research and Development Program of China under Grant 2020YFB1313400, the CAS Project for Young Scientists in Basic Research under Grant YSBR-041, the National Natural Science Foundation of China under Grant 61903358, 61773367, 61821005, and the Youth Innovation Promotion Association of the Chinese Academy of Sciences under Grant 2022196, Y202051. S. B. Lin was partially supported by the National Natural Science Foundation of China [Grant Nos. 62276209]. D. X. Zhou was partially supported in part by the Research Grants Council of Hong Kong [Project Nos. CityU 11308020, N_CityU 102/20, C1013-21GF], Hong Kong Institute for Data Science, Germany/Hong Kong Joint Research Scheme [Project No. G-CityU101/20], Laboratory for AI-Powered Financial Technologies, and National Science Foundation of China [Project No. 12061160462].
|
2304.06540 | Temporal Knowledge Sharing enable Spiking Neural Network Learning from Past and Future | Spiking Neural Networks (SNNs) have attracted significant attention from researchers across various domains due to their brain-like information processing mechanism. However, SNNs typically grapple with challenges such as extended time steps, low temporal information utilization, and the requirement for consistent time steps between testing and training. These challenges render SNNs with high latency. Moreover, the constraint on time steps necessitates the retraining of the model for new deployments, reducing adaptability. To address these issues, this paper proposes a novel perspective, viewing the SNN as a temporal aggregation model. We introduce the Temporal Knowledge Sharing (TKS) method, facilitating information interaction between different time points. TKS can be perceived as a form of temporal self-distillation. To validate the efficacy of TKS in information processing, we tested it on static datasets like CIFAR10, CIFAR100, ImageNet-1k, and neuromorphic datasets such as DVS-CIFAR10 and NCALTECH101. Experimental results demonstrate that our method achieves state-of-the-art performance compared to other algorithms. Furthermore, TKS addresses the temporal consistency challenge, endowing the model with superior temporal generalization capabilities. This allows the network to train with longer time steps and maintain high performance during testing with shorter time steps. Such an approach considerably accelerates the deployment of SNNs on edge devices. Finally, we conducted ablation experiments and tested TKS on fine-grained tasks, with results showcasing TKS's enhanced capability to process information efficiently. | Yiting Dong, Dongcheng Zhao, Yi Zeng | 2023-04-13T13:51:26Z | http://arxiv.org/abs/2304.06540v2 | # Temporal Knowledge Sharing enable Spiking Neural Network Learning from Past and Future
###### Abstract
Spiking neural networks have attracted extensive attention from researchers in many fields due to their brain-like information processing mechanism. The proposal of surrogate gradients enables spiking neural networks to migrate to more complex tasks and gradually close the gap with conventional artificial neural networks. Current spiking neural networks utilize the output of all moments to produce the final prediction, which compromises their temporal characteristics and causes a reduction in performance and efficiency. We propose a temporal knowledge sharing approach (TKS) that enables the interaction of information between different moments by selecting the output of specific moments to compose teacher signals that guide the training of the network along with the real labels. We have validated TKS on the static datasets CIFAR10, CIFAR100 and ImageNet-1k, as well as the neuromorphic datasets DVS-CIFAR10 and NCALTECH101. Our experimental results indicate that we have achieved the current optimal performance in comparison with other algorithms. Experiments on fine-grained classification datasets further demonstrate our algorithm's superiority on CUB-200-2011, Stanford Dogs, and Stanford Cars. The TKS algorithm gives the model stronger temporal generalization capability, allowing the network to guarantee performance with large time steps in the training phase and with small time steps in the testing phase. This greatly facilitates the deployment of SNNs on edge devices.
## 1 Introduction
Spiking neural networks (SNNs) are known as the third generation of artificial neural networks (ANNs) because of their brain-like information processing method. Due to their non-differentiable characteristics, the training of spiking neural networks has been extensively studied by researchers. Inspired by the learning mechanism of the brain, some researchers have introduced biologically plausible learning rules into the training of SNNs [14, 15]. Furthermore, the proposal of surrogate gradients enables SNNs to be extended to more complex structures and tasks [21, 15], and SNNs have shown performance comparable to ANNs in many fields.
Researchers have attempted to enhance the performance of SNNs trained with backpropagation from several perspectives. Some researchers improve the performance of SNNs by improving the structure. For example, inspired by biological structures, LISNN [1] introduces lateral interactions in convolutional SNNs, which greatly enhances the performance and robustness of SNNs. Back-EISNN [14] is motivated by the autapse and introduces self-feedback connections to facilitate and accelerate the training of SNNs. In addition, some works have been inspired by classic structures from the field of ANNs, such as ResNet-based [13] and Transformer-based [14] architectures. The representation ability of SNNs may also be improved by improving the dynamic characteristics of spiking neurons, such as the learnable membrane time constant [14] and adaptive thresholds [15]. However, the temporal properties of SNNs are not fully exploited in these works.

Figure 1: The inference process of spiking neural networks over time; the correct predictions made during the process are not reasonably used.
Spiking neural networks are mostly trained over long simulation times, after which firing rates or average membrane potentials are used to determine the final prediction. This results in substantial latency and restricts the deployment of SNNs on edge hardware. In addition, combining the information from all moments to make the final prediction means that each moment contributes equally to the final result, so the SNN's temporal information is not fully utilized. As shown in Fig. 1, the SNN runs through six time steps and performs a five-class classification. The correct prediction is achieved at timesteps 1 and 3 during the process. However, after merging all the timesteps, the final prediction is wrong. The information at the correct timesteps neither assists the model in making a correct prediction nor provides a positive contribution to subsequent model training.
To better leverage the temporal information of spiking neural networks, we view the way SNNs aggregate the results of multiple moments to jointly determine the prediction as an aggregation of models. The membrane potential distribution at the previous moment becomes part of the model parameters and state at the next moment, and we expect each sub-model of the aggregated model to be sufficiently accurate. Considering that knowledge distillation can guide the training of low-performance student networks with the knowledge of high-performance teachers [13, 14], we train the network under the guidance of knowledge transferred between the models at different moments. The information from different moments is merged and represented as soft labels. Also, to prevent the transfer of incorrect information, we only integrate the outputs at the correct moments as the teacher signal. We expect that models at different moments can continuously modify their output through mutual interaction. Not only can the aggregated model obtain the true label information, but it can also receive knowledge from the models at each moment, which we call temporal knowledge sharing (TKS). Temporal knowledge sharing greatly utilizes the temporal information during the training of SNNs and improves their performance. Our contribution can be summarized as follows:
1. We propose a temporal knowledge sharing training strategy, which extracts the temporal information at the correct moments during training and combines it into a teacher signal to guide the learning process.
2. We have conducted experiments on several datasets to verify the superiority of our model. We have achieved state-of-the-art performance on not only the static datasets CIFAR10, CIFAR100 and ImageNet-1K, but also on the neuromorphic datasets DVS-CIFAR10 and NCALTECH101. We also conducted experiments on the fine-grained classification datasets CUB-200-2011, StanfordDogs, and StanfordCars, all of which show significant performance improvements compared to other state-of-the-art algorithms.
3. Temporal knowledge sharing improves the accuracy of the model at each moment, which greatly enhances the stability of SNN training. Furthermore, it allows high-performance inference using only a small number of simulation steps, which facilitates deployment on edge devices.
## 2 Related Work
Several works have investigated exploiting the temporal information of spiking neural networks. TCJA-SNN [15] assigns attention mechanisms to temporal channels by introducing additional networks. AttentionSNN [21] exploits temporal information by optimizing membrane potentials with attention weights. TET [1] supervises each moment individually to exploit temporal information. TEBN [13] assigns different weights to each moment by extending the batch normalization layer to the temporal dimension. BPSTA [2] designs a temporal residual connection to help the error propagate across the spikes in the temporal dimension. These efforts do not take full advantage of the information available in the network's output at the correct moments.
Knowledge distillation helps to transfer knowledge from a high-performing model to a low-performing model to obtain better results than the original [13]. Some SNNs are trained under the guidance of additional network knowledge from pre-trained ANNs [15] or pre-trained large SNNs [16]. These approaches all require the additional training of a larger network, which adds a lot of computational cost. Self-knowledge distillation [15, 14, 13] generates the teacher signal itself, without an additional network. Inspired by self-knowledge distillation, we use the results generated by the network at different moments to guide its training without training an additional network, which greatly reduces the computational effort.
## 3 Method
### Spiking Neuron Model
The spiking neuron is the basic computing unit of the spiking neural network, which uses a differential equation to describe the dynamic behavior of biological neurons, as shown in Equation 1.
\[\begin{split}\tau_{m}\frac{dV_{t}}{dt}&=-V_{t}+I_{t }\quad V_{t}\leq V_{th}\\ S_{t}&=1,\quad V_{t}=V_{rest}\quad V_{t}\geq V_{th }\end{split} \tag{1}\]
By accumulating membrane potential over time, the neuron releases a spike after reaching the threshold \(V_{th}\) and resets to the resting potential \(V_{rest}\), which we set to 0 here. In order to facilitate the simulation, we discretize Equation 1 to get Equation 2.
\[V[t]=(1-\frac{1}{\tau})V[t-1](1-S[t-1])+\frac{1}{\tau}I[t] \tag{2}\]
\(V[t]\) denotes the membrane potential at time \(t\), \(\tau\) denotes the membrane time constant, \(I[t]=\sum_{j}w_{ij}S_{j}\) denotes the input current obtained by collecting pre-synaptic neuron spikes, and \(S[t]\) denotes the spike at time \(t\).
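A minimal sketch of the discrete update in Equation 2 is given below; the threshold and time constant values are illustrative choices.

```python
import torch

def lif_step(v, spike_prev, x, tau=2.0, v_th=1.0):
    """One discrete LIF update, Eq. (2): leak, reset-to-zero after a spike,
    integrate the input current, then fire when the threshold is reached."""
    v = (1.0 - 1.0 / tau) * v * (1.0 - spike_prev) + x / tau
    spike = (v >= v_th).float()
    return v, spike

v = torch.zeros(4)
s = torch.zeros(4)
for t in range(6):
    x = torch.rand(4) * 2.0          # input current I[t]
    v, s = lif_step(v, s, x)
    print(f"t={t}: spikes={s.tolist()}")
```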
### Temporal Knowledge Sharing
For the training of SNNs, the common approach is to use the information of all moments as the final prediction, construct a loss function with the true label, and then perform error backpropagation. Here, we use the average membrane potential of the output layer, as shown in Equation 3:
\[O=\frac{1}{T}\sum_{t=1}^{T}V_{t}^{out} \tag{3}\]
And we use the cross entropy loss:
\[L_{CE}=-\sum_{i=1}^{n}y_{i}log(p_{i}) \tag{4}\]
where \(p_{i}=\frac{e^{O_{i}}}{\sum_{j=1}^{n}e^{O_{j}}}\), and \(y_{i}\) is the label.
As shown in Equations 3 and 4, the model accumulates the outputs of all moments to make its final prediction, which ignores the temporal characteristic of SNNs. For SNNs, the output at each moment contains rich information. Inspired by knowledge distillation, we consider that the outputs at different moments can be used to guide the learning of the network. As shown in Figure 2, we collect the predictions generated at the moments where the classification is correct and use the mean of these signals as the teacher signal \(Z\). If no moment in the model's prediction is correct, we set the prediction probability of the teacher signal to \(1/num_{category}\).
\[Z=\frac{1}{m}\sum_{t=1}^{m}V_{t}^{out} \tag{5}\]
We use the cross-entropy function to measure the gap between the student and the teacher output as shown in Equation 6:
\[L_{TKS}=-\sum_{i=1}^{n}Z_{i}log(q_{i}) \tag{6}\]
As is common in knowledge distillation, \(q_{i}\) is obtained by passing the output through a softmax with temperature, where the temperature parameter makes the prediction smoother:
```
Require: Training dataset \(D=\{(x_{i},y_{i})\}_{i=1}^{N}\), SNN model, simulation length
Ensure: A high-performance, low-latency and stable spiking neural network model
1: for each mini-batch of training data \(D_{i}=\{x_{i},y_{i}\}\) do
2:   Compute the SNN output \(O\) as shown in Eq. 3
3:   Collect the output membrane potentials at the correct moments to form the teacher signal \(Z\) as shown in Eq. 5
4:   Calculate the final loss using Eq. 8
5:   Adjust the synaptic weights through spatial-temporal backpropagation with surrogate gradients
6: end for
```
**Algorithm 1** Temporal Knowledge Sharing For Training Deep Spiking Neural Networks
\[q_{i}=\frac{exp(O_{i}/\tau)}{\sum_{j}exp(O_{j}/\tau)} \tag{7}\]
\(\tau\) denotes the temperature. Then the final loss function can be written as:
\[L_{all}=(1-\alpha)L_{CE}+\alpha\tau^{2}L_{TKS} \tag{8}\]
Figure 2: The overall training process of temporal knowledge sharing. TKS collects information at the correct moments and composes an additional teacher signal to guide the training of the network.
\(\alpha\) is the balance coefficient, which adjusts the trade-off between the loss against the true label and the strength of the teacher signal; a larger \(\alpha\) means the teacher signal contributes more to the weight adjustment of the model. Because the model cannot yet provide an accurate supervised signal in the early stage of training, the true labels are needed as the main guidance at first. We therefore adopt a gradual ascent strategy that initializes \(\alpha\) to a small value and increases it linearly during training. Empirically, the initial value of \(\alpha\) is set to 0 and grows gradually with the training process, reaching its maximum value in the last epoch.
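Putting Equations 3-8 together, the loss can be computed as in the minimal NumPy sketch below. This is our illustration, not the authors' implementation; in particular, we assume that the teacher signal \(Z\) is also passed through the tempered softmax before entering Equation 6 (the text leaves this step implicit), and \(\alpha\) is fixed here, whereas during training it would follow the linear ramp just described.

```python
import numpy as np

def softmax(x, tau=1.0):
    z = (x / tau) - (x / tau).max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def tks_loss(V_out, y, tau=3.0, alpha=0.35):
    """V_out: (T, N, C) output-layer membrane potentials; y: (N,) labels."""
    T, N, C = V_out.shape
    O = V_out.mean(axis=0)                                  # Eq. 3
    p = softmax(O)
    ce = -np.log(p[np.arange(N), y] + 1e-12).mean()         # Eq. 4

    # Teacher signal (Eq. 5): average the moments whose argmax is correct.
    correct = V_out.argmax(axis=2) == y[None, :]            # (T, N)
    Z = np.zeros((N, C))
    for i in range(N):
        if correct[:, i].any():
            Z[i] = V_out[correct[:, i], i].mean(axis=0)
    t_prob = softmax(Z, tau)        # assumption: tempered softmax on teacher
    t_prob[~correct.any(axis=0)] = 1.0 / C  # no correct moment: uniform
    q = softmax(O, tau)                                     # Eq. 7
    tks = -(t_prob * np.log(q + 1e-12)).sum(axis=1).mean()  # Eq. 6
    return (1 - alpha) * ce + alpha * tau**2 * tks          # Eq. 8

loss = tks_loss(np.random.randn(4, 8, 10), np.random.randint(0, 10, 8))
```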
## 4 Experiment
To illustrate the superiority of our algorithm, we validate it on the commonly used static datasets CIFAR10, CIFAR100 and ImageNet-1K, as well as on the neuromorphic datasets DVS-CIFAR10 and NCALTECH101. To further demonstrate its effectiveness, we also conduct experiments on the fine-grained object recognition tasks CUB-200-2011, Stanford Dogs and Stanford Cars. In all of the above experiments, we apply the surrogate function from [11]. Except for ImageNet-1K, the AdamW optimizer is used for model optimization with the weight decay set to 0.01; for ImageNet-1K, we use the Adam optimizer with the weight decay set to 0. The batch size for all experiments is set to 128, and a cosine schedule is used to control the learning rate.
### Evaluation metrics
**Accuracy:** Top-1 and Top-5 accuracy reflect the classification accuracy of the model on the corresponding dataset. Top-1 is the accuracy of the category with the highest predicted probability, while Top-5 is the accuracy of the true label falling among the five highest-probability output categories.
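Both metrics reduce to the same computation for different \(k\); a minimal sketch, assuming logits of shape (N, C) and integer labels:

```python
import numpy as np

def topk_accuracy(logits, labels, k=1):
    """Fraction of samples whose true label is among the k largest scores."""
    topk = np.argsort(logits, axis=1)[:, -k:]   # indices of k highest scores
    return (topk == labels[:, None]).any(axis=1).mean()

logits = np.random.randn(16, 10)
labels = np.random.randint(0, 10, size=16)
print(topk_accuracy(logits, labels, k=1), topk_accuracy(logits, labels, k=5))
```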
**AURC (the Area Under the Risk-coverage Curve)**[1]: The AURC measures the area under the risk-coverage curve (RC) during the convergence process. It
\begin{table}
\begin{tabular}{l l l l|r} \hline \hline Dataset & Model & Method & Architecture & T & Accuracy(\%) \\ \hline \multirow{8}{*}{CIFAR10} & [10] & Hybrid training & ResNet-20 & 250 & 92.22 \\ & [10] & ANN2SNN & ResNet-20 & 2048 & 91.36 \\ & [11] & ANN2SNN & ResNet-20 & 128 & 93.56 \\ & [11] & STBP & CIFARNet & 12 & 89.83 \\ & [12] & DSpike & ResNet-18 & 4 & 93.66 \\ & [12] & TSSL-BP & CIFARNet & 5 & 91.41 \\ & [12] & STBP-tdBN & ResNet-19 & 4 & 92.92 \\ & [13] & TET* & ResNet-19 & 4 & 94.44 \\ & [14] & TEBN* & ResNet-19 & 4 & 95.58 \\ \cline{2-5} & Ours & TKS & ResNet-19 & 4 & **95.30** \\ & Ours & TKS* & ResNet-19 & 4 & **96.35** \\ \cline{2-5} & Ours & TKS* & SEW-ResNet-19 & 4 & **96.76** \\ \hline \multirow{8}{*}{CIFAR100} & [10] & Hybrid training & VGG11 & 125 & 67.87 \\ & [10] & Diet-SNN & ResNet-20 & 5 & 64.07 \\ & [15] & ANN2SNN & ResNet-20 & 2048 & 67.82 \\ & [16] & DSpike & ResNet-18 & 4 & 73.35 \\ & [13] & STBP-tdBN & ResNet-19 & 4 & 70.86 \\ & [13] & TET* & ResNet-19 & 4 & 74.47 \\ & [13] & TEBN* & ResNet-19 & 4 & 78.71 \\ \cline{2-5} & Ours & TKS & ResNet-19 & 4 & **76.20** \\ & Ours & TKS* & SEW-ResNet-19 & 4 & **79.89** \\ \hline \multirow{8}{*}{ImageNet-1K} & [10] & Hybrid training & ResNet-34 & 250 & 61.48 \\ & [13] & SPIKE-NORM & ResNet-34 & 2500 & 69.96 \\ \cline{1-1} & [13] & STBP-tdBN & Spiking-ResNet-34 & 6 & 63.72 \\ \cline{1-1} & [13] & SEW ResNet & SEW-ResNet-34 & 4 & 67.04 \\ \cline{1-1} & [13] & SEW ResNet & SEW-ResNet-50 & 4 & 67.78 \\ \cline{1-1} & [13] & TET & SEW-ResNet-34 & 4 & 68.00 \\ \cline{1-1} & [13] & TEBN & SEW-ResNet-34 & 4 & 68.28 \\ \cline{1-1} \cline{2-5} & Ours & TKS & SEW-ResNet-34 & 4 & **69.60** \\ \cline{1-1} & Ours & TKS & SEW-ResNet-50 & 4 & **70.70** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy on the static datasets CIFAR10, CIFAR100 and ImageNet-1K, where * denotes data augmentation. The best-performing model is indicated in boldface.
describes the quality of probabilistic predictions from the perspective of confidence estimation. It assesses whether correct and incorrect predictions can be separated well.
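One common way to compute the AURC from a model's confidences is sketched below; this is our illustration rather than the paper's evaluation code. Predictions are sorted by confidence, and the risk (error rate) among the most confident fraction of samples is averaged over all coverage levels.

```python
import numpy as np

def aurc(confidences, correct):
    """Area under the risk-coverage curve (lower is better).

    confidences: (N,) confidence per prediction (e.g. max softmax probability).
    correct:     (N,) 1 if the prediction was right, 0 otherwise.
    """
    order = np.argsort(-confidences)                  # most confident first
    errors = 1.0 - np.asarray(correct, dtype=float)[order]
    # risk at coverage i/N = error rate among the i most confident samples
    risks = np.cumsum(errors) / np.arange(1, len(errors) + 1)
    return risks.mean()

conf = np.random.rand(100)
hits = (np.random.rand(100) < 0.8).astype(float)
print(aurc(conf, hits))
```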
### Static Datasets
We validate our TKS algorithm on CIFAR10, CIFAR100 and ImageNet-1K, and compare it with the current state-of-the-art algorithms. For CIFAR10 and CIFAR100 [11], we use the ResNet-19 [22] structure as well as the SEW-ResNet-19 structure [4]. SEW-ResNet-19 improves the original ResNet, allowing residual learning to be performed more effectively in SNNs. For a fair comparison, we use the SEW-ResNet-34 and SEW-ResNet-50 structures for the ImageNet-1K dataset. For CIFAR10 and CIFAR100, the temperature parameter \(\tau\) is set to 3; for the ImageNet-1K dataset, \(\tau\) is set to 1. For all of the experiments above, \(\alpha\) is increased from the initial value 0 to 0.7.
As shown in Table 1, on all datasets our TKS method achieves the best performance compared with other well-known deep SNN algorithms. Compared with TET and TEBN, our TKS algorithm shows a significant improvement on CIFAR10 and CIFAR100. In particular, on the ImageNet-1K dataset, we outperform them by 1.3% and 1.6%, respectively, with the same network structure, SEW-ResNet-34.
### Neuromorphic Datasets
Neuromorphic datasets are usually recorded by event cameras, and the data consist of events. We conduct experiments on the DVS-CIFAR10 [11] and NCALTECH101 [1] datasets. The VGG-SNN [4] model is adopted as the basic architecture. The temperature is set to 5, and \(\alpha\) grows from the initial value 0 to 0.3. As shown in Table 2, compared with the other current best algorithms, TKS obtains the highest accuracy, achieving 85.3% on DVS-CIFAR10 and 84.1% on NCALTECH101.
### Fine-grained Classification Task
Fine-grained datasets generally have similar features across categories [23], which makes them difficult to distinguish, and each category contains only a limited number of samples. We conduct experiments on the CUB-200-2011 [21], Stanford Dogs [24], and Stanford Cars [11] datasets and compare our
\begin{table}
\begin{tabular}{c l l l l r} \hline \hline Dataset & Method & Model & Architecture & T & Accuracy(\%) \\ \hline \multirow{8}{*}{DVS-CIFAR10} & [22] & STBP-tdBN & ResNet-19 & 10 & 67.8 \\ & [20] & Streaming Rollout & DenseNet & 10 & 66.8 \\ & [24] & Conv3D & LIAF-Net & 10 & 71.7 \\ & [21] & LIAF & LIAF-Net & 10 & 70.4 \\ & [21] & DSpike & ResNet-18 & 10 & 75.4 \\ & [21] & PLIF & CNN6 & 20 & 74.8 \\ & [23] & TCJA-SNN & VGGSNN & 10 & 80.7 \\ & [4] & TET & VGGSNN & 10 & 83.2 \\ & [4] & TEBN & VGGSNN & 10 & 84.9 \\ \cline{2-5} & Ours & TKS & VGG-SNN & 10 & **85.3** \\ \hline \multirow{8}{*}{NCALTECH101} & [23] & TCJA-SNN & TCJAnet & 14 & 78.5 \\ & [23] & EventMixer & ResNet-18 & 10 & 79.5 \\ \cline{1-1} & [21] & NDA & ResNet-19 & 10 & 78.6 \\ \cline{1-1} & [24] & HATS & - & - & 64.2 \\ \cline{1-1} & [24] & DART & - & - & 66.8 \\ \cline{1-1} \cline{2-5} & Ours & TKS & VGG-SNN & 10 & **84.1** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Accuracy on the neuromorphic datasets DVS-CIFAR10 and NCALTECH101. The best results are shown in boldface.
\begin{table}
\begin{tabular}{c l l l|l l|l l} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{CUB-200-2011} & \multicolumn{2}{c}{StanfordDogs} & \multicolumn{2}{c}{StanfordCars} \\ \cline{3-8} & & top1 & top5 & top1 & top5 & top1 & top5 \\ \hline \multirow{2}{*}{
\begin{tabular}{c} SEW- \\ ResNet-18 \\ \end{tabular} } & Baseline & 37.94 & 63.55 & 51.54 & 78.10 & 68.59 & 88.96 \\ & TET & 40.49 & 65.53 & 52.14 & 78.69 & 68.52 & 88.56 \\ & TKS & **42.82** & **66.59** & **54.03** & **79.76** & **71.22** & **89.70** \\ & Baseline & 46.77 & 72.35 & 56.79 & **83.57** & 76.04 & 92.71 \\ & TET & 47.38 & 71.06 & 57.06 & 81.98 & 72.94 & 92.81 \\ & TKS & **51.71** & **75.28** & **59.06** & 82.82 & **76.94** & **92.95** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Accuracy on the fine-grained datasets CUB-200-2011, Stanford Dogs and Stanford Cars, reporting both Top-1 and Top-5 accuracy. The best results are shown in boldface.
TKS with the baseline model and the TET model. The temperature parameter \(\tau\) for the fine-grained tasks is 3, and \(\alpha\) grows from the initial value 0 to 0.7. As shown in Table 3, our TKS outperforms the baseline on all datasets and also exceeds the TET algorithm. Fine-grained tasks are usually used to measure the feature extraction ability of a model, and TKS uses the distribution of the output categories as the teacher signal, which helps the model better distinguish categories with similar features.
## 5 Discussion
In this section, we conduct an ablation analysis to show the superiority of TKS, then run experiments to verify the generalization of TKS under different timesteps, and finally visualize the feature embeddings.
### Ablation analysis
To illustrate the effectiveness of the TKS algorithm, we conduct ablation experiments on the CIFAR10, CIFAR100, ImageNet-1K, DVS-CIFAR10, and NCALTECH101 datasets. For a fair comparison with existing methods, we also include LS [21] and TET [10] on the same baseline. LS is a regularization method that can be viewed as passing teacher information following a uniform distribution [12, 23]; we include it to verify that the teacher signal delivered by TKS indeed provides useful information and is not simply a regularization factor. The TET algorithm supervises each moment individually to exploit temporal information. As shown in Table 4, TKS achieves the highest performance compared with the baseline and TET. To further illustrate the superiority of TKS, we also evaluate the models from the perspective of confidence estimation by computing the AURC for each model. As shown in Table 4, a lower AURC represents a higher quality of probabilistic prediction, and TKS improves the model in most cases.
### Robustness with Different Time Steps
In addition to performance, another problem that restricts the development of SNNs is latency. SNNs often
\begin{table}
\begin{tabular}{c|c c c|c} \hline \hline Dataset & Model & Method & Top1 Acc(\%) & Top5 Acc(\%) & AURC (x\(10^{3}\)) \\ \hline \multirow{6}{*}{CIFAR10} & \multirow{6}{*}{ResNet19} & Baseline & 95.76 & **99.94** & 4.10 \\ & & LS & 95.99 & 99.63 & 6.60 \\ & & TET & 96.14 & 99.93 & 3.64 \\ & & TKS & **96.35** & 99.85 & **2.76** \\ & \multirow{6}{*}{SEW-ResNet-19} & Baseline & 96.54 & **99.94** & 3.50 \\ & & LS & 96.54 & 99.70 & 8.44 \\ & & TET & 96.41 & 99.89 & 3.59 \\ & & TKS & **96.76** & 99.86 & **2.75** \\ \hline \multirow{6}{*}{CIFAR100} & \multirow{6}{*}{ResNet19} & Baseline & 76.78 & 94.13 & 78.27 \\ & & LS & 76.77 & 92.04 & 71.96 \\ & & TET & 78.90 & 94.57 & 65.28 \\ & & TKS & **79.89** & **94.77** & **54.07** \\ & & Baseline & 78.35 & 94.38 & 67.55 \\ & & LS & 79.39 & 93.40 & 63.43 \\ & & TET & 79.53 & 94.70 & 58.93 \\ & & TKS & **80.67** & **94.71** & **54.56** \\ \hline \multirow{6}{*}{ImageNet-1K} & SEW-ResNet-34 & Baseline & 68.61 & 87.67 & - \\ & ResNet-34 & TET & 68.68 & 88.13 & - \\ \cline{1-1} & & TKS & 69.60 & 88.40 & - \\ \cline{1-1} & & TKS & **70.70** & **88.77** & - \\ \hline \hline \multirow{6}{*}{DVS-CIFAR10} & \multirow{6}{*}{VGG-SNN} & Baseline & 83.20 & 98.60 & 46.14 \\ & & LS & 84.00 & **98.70** & 40.20 \\ \cline{1-1} & & TET & 84.70 & 97.80 & 43.42 \\ \cline{1-1} & & TKS & **85.30** & 98.00 & **40.18** \\ \hline \multirow{6}{*}{NCAL-TECH101} & \multirow{6}{*}{VGG-SNN} & Baseline & 78.26 & 91.26 & 52.03 \\ \cline{1-1} & & LS & 82.76 & 92.18 & 35.21 \\ \cline{1-1} & & TET & 81.72 & 92.29 & 36.89 \\ \cline{1-1} & & TKS & **84.10** & **93.33** & **30.00** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of the multiple methods implemented on our baseline. The best-performing model is indicated in boldface.
require long simulation timesteps to produce accurate results. Moreover, the affordable latency often varies across edge devices, which means that when changing devices the model must be retrained with a different timestep, greatly reducing efficiency. We hope that SNNs can achieve high performance with large timesteps during training and nearly lossless performance with smaller simulation steps during inference. The performance of conventional SNN models drops sharply when the test timestep is inconsistent with the training timestep. We demonstrate this with the VGG-SNN model trained on the DVS-CIFAR10 dataset: the training timestep is set to 10, and we verify the performance at different test timesteps. As shown in Figure 3, when the test timestep is set to 1, the performance of the baseline SNN model drops by nearly 33%, whereas our TKS still achieves an accuracy of about 75%. Further, we train and test the SNN model with matched timesteps; as can be seen in Figure 3, our TKS model outperforms the others at every timestep. The TKS model thus generalizes strongly across simulation timesteps, which greatly facilitates the development of SNNs.
We also verify the performance on DVS-CIFAR10 and NCALTECH101 using the output of the SNN at each single moment, with the models trained at timestep 10. As shown in Figure 4, with TKS the performance of the output at each moment is higher than that of the baseline model. The teacher signal helps the model achieve better performance at every moment, so the model at each moment acquires the ability to process the data on its own.
### Feature Visualization
We visualize the feature embeddings of the penultimate layer of the model using the t-SNE [22] method. Figure 5 shows the results of TKS and the baseline model for ResNet-19 on CIFAR100 and for VGG-SNN on DVS-CIFAR10, respectively. Compared with the baseline model, TKS separates the data well for both static and neuromorphic images. The teacher signal of TKS enlarges the distance between similar categories by adjusting the distribution of sample features, making it easier for the classifier to separate them.
## 6 Conclusion
The performance and latency of SNNs have been significant constraints on their development. In this paper, we demonstrate that SNNs do not make effective use of the information generated at different moments. Based on this observation, we propose a temporal knowledge sharing (TKS) approach, in which we select the outputs of correctly classified moments as teacher signals to further guide model training. We validate the effectiveness of TKS on the static datasets CIFAR10, CIFAR100 and ImageNet-1K, and on the neuromorphic datasets DVS-CIFAR10 and NCALTECH101, where TKS achieves the current best performance. On fine-grained classification datasets, our algorithm also outperforms the baseline model and other strong algorithms. In addition to performance, TKS supports different simulation steps in the training and inference phases: after securing performance with larger simulation steps during training, the model can be deployed with only a few simulation steps. This greatly facilitates the deployment of SNNs on edge devices.
Figure 4: Accuracy at each timestep on DVS-CIFAR10 (left) and NCALTECH101 (right). The model is trained at timestep 10, and we report the accuracy of the output at each single step.
Figure 5: The t-SNE visualization results. (a) Baseline on CIFAR100. (b) TKS on CIFAR100. (c) Baseline on DVS-CIFAR10. (d) TKS on DVS-CIFAR10.
Figure 3: Accuracy at different test timesteps on DVS-CIFAR10; the model is trained at timestep 10. |
2304.14144 | Categorification of Group Equivariant Neural Networks | We present a novel application of category theory for deep learning. We show
how category theory can be used to understand and work with the linear layer
functions of group equivariant neural networks whose layers are some tensor
power space of $\mathbb{R}^{n}$ for the groups $S_n$, $O(n)$, $Sp(n)$, and
$SO(n)$. By using category theoretic constructions, we build a richer structure
that is not seen in the original formulation of these neural networks, leading
to new insights. In particular, we outline the development of an algorithm for
quickly computing the result of a vector that is passed through an equivariant,
linear layer for each group in question. The success of our approach suggests
that category theory could be beneficial for other areas of deep learning. | Edward Pearce-Crump | 2023-04-27T12:39:28Z | http://arxiv.org/abs/2304.14144v1 | # Categorification of Group Equivariant Neural Networks
###### Abstract
We present a novel application of category theory for deep learning. We show how category theory can be used to understand and work with the linear layer functions of group equivariant neural networks whose layers are some tensor power space of \(\mathbb{R}^{n}\) for the groups \(S_{n}\), \(O(n)\), \(Sp(n)\), and \(SO(n)\). By using category theoretic constructions, we build a richer structure that is not seen in the original formulation of these neural networks, leading to new insights. In particular, we outline the development of an algorithm for quickly computing the result of a vector that is passed through an equivariant, linear layer for each group in question. The success of our approach suggests that category theory could be beneficial for other areas of deep learning.
## 1 Introduction
Despite the numerous advances that have been made in many areas of deep learning, it is well known that the field lacks a rigorous theoretical framework to support the applications that are being developed. Practitioners typically spend a significant amount of time and effort searching for a neural network architecture that works well for the problem they wish to solve. The architectures are often designed using heuristics that have been shown to work well in practice, despite being poorly understood in theory. Notably, small modifications to an architecture can often result in a significant reduction in performance.
It has only become apparent very recently to researchers in the deep learning community that there is potential for category theory to provide a new set of tools for developing the theory of deep learning. The hope is that category theory will provide the rigorous theoretical framework in which all existing and future results can be placed. Category theory is based on the core concept of _compositionality_: that complex systems can be built out of smaller parts, and that the entire system can be understood by studying these smaller parts. Category theory was first used in pure mathematics in the 1940s as a way of establishing a higher-level structure for understanding a number of algebraic objects (sets, vector spaces, topological spaces etc.) and their maps (functions, linear maps, continuous maps etc.) that share similar characteristics. It has since been applied successfully to many other disciplines of science, such as physics, chemistry and computer science. Given that many deep learning architectures share similar characteristics, in that they are typically built out of layers and maps between these layers, it is no surprise that deep learning researchers are looking to category theory to achieve a similar outcome for their own field.
In this paper, we present a novel application of category theory for deep learning. We show that a number of group equivariant neural networks - for the groups \(S_{n}\), \(O(n)\), \(Sp(n)\) and \(SO(n)\) - whose layers are some tensor power of \(\mathbb{R}^{n}\) that have recently appeared in the literature (Maron et al. (2019); Pearce-Crump (2022a,b); Godfrey et al. (2023)) can be understood in category theoretic
terms. We call this the "Categorification of Group Equivariant Neural Networks" because, in proving this result, we replace a number of set-theoretic constructions with category theoretic notions that result in a deeper structure for understanding and working with the layer functions of the neural networks themselves. We wish to emphasise that the outcome of this process is not simply a case of rewriting the existing results in a different language; crucially, we obtain new insights into these neural networks from the richer structure that is established. One particularly important consequence that we show is that any of the weight matrices that appear in the neural networks in question can be understood solely by using a certain type of combinatorial diagram that has a string-like quality to it. By pulling on the strings or dragging their ends to different locations, we can use category theory to obtain new results for these group equivariant neural networks. We describe a very powerful example of this idea, where the properties of the categorification lead to a recovery - by a very different method - of the algorithm proposed by Godfrey et al. (2023) for computing the result of a vector that is passed through a symmetric group equivariant linear layer. We suggest that our approach can be adapted to obtain an algorithm for computing the same procedure for the other groups mentioned in this paper; this result will appear in another paper by the same authors.
## 2 Preliminaries
We choose our field of scalars to be \(\mathbb{R}\) throughout. Tensor products are also taken over \(\mathbb{R}\), unless otherwise stated. Also, we let \([n]\) represent the set \(\{1,\ldots,n\}\).
Recall that a representation of a group \(G\) is a choice of vector space \(V\) over \(\mathbb{R}\) and a group homomorphism
\[\rho_{V}:G\to GL(V) \tag{1}\]
Furthermore, recall that a map \(\phi:V\to W\) between two representations of \(G\) is said to be \(G\)-equivariant if, for all \(g\in G\) and \(v\in V\),
\[\phi(\rho_{V}(g)[v])=\rho_{W}(g)[\phi(v)] \tag{2}\]
We denote the set of all _linear_\(G\)-equivariant maps between \(V\) and \(W\) by \(\operatorname{Hom}_{G}(V,W)\). It can be shown that \(\operatorname{Hom}_{G}(V,W)\) is a vector space over \(\mathbb{R}\). See Segal (2014) for more details.
### Tensor Power Spaces as Group Representations
The groups of interest, namely, \(S_{n}\), \(O(n)\), \(Sp(n)\), and \(SO(n)\), can all be viewed as subgroups of \(GL(n)\). We use the symbol \(G\) to refer to any of these groups in the following. Recall that \(\mathbb{R}^{n}\) has a standard basis that is given by \(\{e_{i}\mid i\in[n]\}\), where \(e_{i}\) has a \(1\) in the \(i^{\text{th}}\) position and is \(0\) otherwise.
(Note that if \(G=Sp(n)\), then \(n=2m\), and we label the indices by \(1,1^{\prime},\ldots,m,m^{\prime}\) and call the standard basis of \(\mathbb{R}^{n}\) the symplectic basis.)
There exists a (left) action of \(G\) on \(\mathbb{R}^{n}\) that is given by left multiplication on the standard basis, which can be extended linearly to obtain a representation \(G\to GL(\mathbb{R}^{n})\).
Moreover, since the elements
\[e_{I}\coloneqq e_{i_{1}}\otimes e_{i_{2}}\otimes\cdots\otimes e_{i_{k}} \tag{3}\]
for all \(I\coloneqq(i_{1},i_{2},\ldots,i_{k})\in[n]^{k}\) form a basis of \((\mathbb{R}^{n})^{\otimes k}\), the \(k\)-tensor power space of \(\mathbb{R}^{n}\), there also exists a (left) action of \(G\) on \((\mathbb{R}^{n})^{\otimes k}\) that is given by
\[g\cdot e_{I}\coloneqq ge_{i_{1}}\otimes ge_{i_{2}}\otimes\cdots\otimes ge_{i_{ k}} \tag{4}\]
Again, this action can be extended linearly to obtain a representation \(\rho_{k}:G\to GL((\mathbb{R}^{n})^{\otimes k})\).
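To make the action in (4) concrete for \(G=S_{n}\), the following NumPy sketch (ours, with 0-indexed indices rather than the paper's 1-indexed convention) applies a permutation of \([n]\) to an order-\(k\) tensor by relabelling the indices of the standard basis elements.

```python
import numpy as np

def rho_k(g, v):
    """Action of a permutation g in S_n on v in (R^n)^{otimes k}, as in (4).

    v is an order-k array of shape (n,)*k; g is a 0-indexed permutation of
    [n], i.e. g[i] is the image of i.  On basis elements,
    g . e_I = e_{(g(i_1), ..., g(i_k))}, extended linearly.
    """
    out = np.zeros_like(v)
    for I in np.ndindex(v.shape):
        out[tuple(g[list(I)])] = v[I]
    return out

n, k = 3, 2
g = np.array([1, 2, 0])                # the 3-cycle sending 0 -> 1 -> 2 -> 0
v = np.arange(n**k, dtype=float).reshape((n,) * k)
w = rho_k(g, v)
assert np.allclose(rho_k(g, rho_k(g, w)), v)   # g has order 3
```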
Figure 1: Examples of \((6,4)\)–partition diagrams. b) is also a \((6,4)\)–Brauer diagram, and c) is also a \(10\backslash 6\)–diagram.
We are interested in the space of \(G\)-equivariant linear maps between any two tensor power spaces of \(\mathbb{R}^{n}\), \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\), since these maps are the linear layer functions in the group equivariant neural networks of interest.
### Set Partition Diagrams
Pearce-Crump (2022a,b) showed that, for the groups \(G\) in question, \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\) can be constructed from certain set partitions of \([l+k]\), and in particular, from their corresponding set partition diagrams. We review these constructions below.
For \(l,k\in\mathbb{Z}_{\geq 0}\), consider the set \([l+k]\coloneqq\{1,\ldots,l+k\}\) having \(l+k\) elements. We can create a set partition of \([l+k]\) by partitioning it into a number of subsets. We call the subsets of a set partition _blocks_. Let \(\Pi_{l+k}\) be the set of all set partitions of \([l+k]\). Then, for each set partition \(\pi\) in \(\Pi_{l+k}\), we can associate to it a diagram \(d_{\pi}\), called a \((k,l)\)-partition diagram, consisting of two rows of vertices and edges between vertices such that there are
* \(l\) vertices on the top row, labelled left to right by \(1,\ldots,l\)
* \(k\) vertices on the bottom row, labelled left to right by \(l+1,\ldots,l+k\), and
* the edges between the vertices correspond to the connected components of \(\pi\).
As a result, \(d_{\pi}\) represents the equivalence class of all diagrams with connected components equal to the blocks of \(\pi\).
There are special types of \((k,l)\)-partition diagrams that we are interested in, namely:
* A \((k,l)\)-Brauer diagram \(d_{\beta}\) is a \((k,l)\)-partition diagram where the size of every block in \(\beta\) is exactly two.
* Given \(k\) and \(l\), an \((l+k)\backslash n\)-diagram \(d_{\alpha}\) is a \((k,l)\)-partition diagram where exactly \(n\) blocks in \(\alpha\) have size one, with the rest having exactly size two. The vertices corresponding to the blocks of size one are called free vertices.
We give examples of these diagrams in Figure 1.
We can form a number of vector spaces as the \(\mathbb{R}\)-linear span of certain subsets of \((k,l)\)-partition diagrams, as follows:
* The partition space \(P^{l}_{k}(n)\) is defined to be the \(\mathbb{R}\)-linear span of the set of all \((k,l)\)-partition diagrams.
* The Brauer space \(B^{l}_{k}(n)\) is defined to be the \(\mathbb{R}\)-linear span of the set of all \((k,l)\)-Brauer diagrams.
* The Brauer-Grood space \(D^{l}_{k}(n)\) is defined to be the \(\mathbb{R}\)-linear span of the set of all \((k,l)\)-Brauer diagrams together with the set of all \((l+k)\backslash n\)-diagrams.
Furthermore, we can define two \(\mathbb{R}\)-bilinear operations on \((k,l)\)-partition diagrams
composition: \[\bullet:P^{m}_{l}(n)\times P^{l}_{k}(n)\to P^{m}_{k}(n)\] (5) tensor product: \[\otimes:P^{l}_{k}(n)\times P^{m}_{q}(n)\to P^{l+m}_{k+q}(n)\] (6)
as follows:
Composition: Let \(d_{\pi_{1}}\in P^{l}_{k}(n)\) and \(d_{\pi_{2}}\in P^{m}_{l}(n)\). First, we concatenate the diagrams, written \(d_{\pi_{2}}\circ d_{\pi_{1}}\), by putting \(d_{\pi_{1}}\) below \(d_{\pi_{2}}\), concatenating the edges in the middle row of vertices, and then removing all connected components that lie entirely in the middle row of the concatenated diagrams. Let \(c(d_{\pi_{2}},d_{\pi_{1}})\) be the number of connected components that are removed from the middle row in \(d_{\pi_{2}}\circ d_{\pi_{1}}\). Then the composition is defined, using infix notation, as
\[d_{\pi_{2}}\bullet d_{\pi_{1}}\coloneqq n^{c(d_{\pi_{2}},d_{\pi_{1}})}(d_{\pi _{2}}\circ d_{\pi_{1}}) \tag{7}\]
Tensor Product: Let \(d_{\pi_{1}}\in P^{l}_{k}(n)\) and \(d_{\pi_{2}}\in P^{m}_{q}(n)\). Then \(d_{\pi_{1}}\otimes d_{\pi_{2}}\) is defined to be the \((k+q,l+m)\)-partition diagram obtained by horizontally placing \(d_{\pi_{1}}\) to the left of \(d_{\pi_{2}}\) without any overlapping of vertices.
It is clear that both of these operations are associative.
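As an illustration of the composition rule (7), the sketch below composes two partition diagrams with a union-find pass over the stacked diagram, counting the connected components removed from the middle row. This is our own sketch, using 0-indexed vertices rather than the paper's 1-indexed convention.

```python
from collections import defaultdict

def compose(d2, d1, m, l, k, n):
    """Compose partition diagrams: d2 in P^m_l(n), d1 in P^l_k(n), as in (7).

    Diagrams are lists of blocks (sets of vertex labels); d2's vertices are
    0..m-1 (top) and m..m+l-1 (bottom), d1's are 0..l-1 (top) and
    l..l+k-1 (bottom).  Returns (n**c, blocks of the composed diagram).
    """
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)

    for block in d2:                     # glue d2 above the middle row
        verts = [('t', v) if v < m else ('m', v - m) for v in block]
        for a, b in zip(verts, verts[1:]):
            union(a, b)
    for block in d1:                     # glue d1 below the middle row
        verts = [('m', v) if v < l else ('b', v - l) for v in block]
        for a, b in zip(verts, verts[1:]):
            union(a, b)
    for i in range(m):                   # register any untouched vertices
        find(('t', i))
    for j in range(l):
        find(('m', j))
    for s in range(k):
        find(('b', s))

    comps = defaultdict(set)
    for v in parent:
        comps[find(v)].add(v)
    c = sum(all(row == 'm' for row, _ in comp) for comp in comps.values())
    blocks = [frozenset(v for v in comp if v[0] != 'm')
              for comp in comps.values()]
    return n ** c, [b for b in blocks if b]

# Composing a "cup" (0 -> 2) with a "cap" (2 -> 0) closes a loop in the
# middle row, which contributes a factor of n, as in (7).
coeff, blocks = compose(d2=[{0, 1}], d1=[{0, 1}], m=0, l=2, k=0, n=5)
assert coeff == 5 and blocks == []
```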
The composition and tensor product operations for \(B^{l}_{k}(n)\) are inherited from the composition and tensor product operations for \(P^{l}_{k}(n)\), defined in (5) and (6) respectively. However, the composition and tensor product operations for \(D^{l}_{k}(n)\) are rather more involved; full details of their formulation can be found in the Technical Appendix.
### Group Equivariant Linear Layers
From the vector spaces defined in Section 2.2, it is possible to obtain either a spanning set or a basis for \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\). We give the form of the spanning sets/bases, expressed in the basis of matrix units for \(\operatorname{Hom}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\), in the Technical Appendix. Here, we reproduce a number of results which describe the existence of a surjective map from each of the vector spaces defined in Section 2.2 onto its corresponding vector space of \(G\)-equivariant linear maps, which arises from the spanning sets/bases.
**Theorem 2.1** (Diagram Basis when \(G=S_{n}\)).: _(Godfrey et al., 2023, Theorem 5.4) For any \(k,l\in\mathbb{Z}_{\geq 0}\) and any \(n\in\mathbb{Z}_{\geq 1}\), there is a surjection of vector spaces_
\[\Theta^{l}_{k,n}:P^{l}_{k}(n)\to\operatorname{Hom}_{S_{n}}((\mathbb{R}^{n})^{ \otimes k},(\mathbb{R}^{n})^{\otimes l}) \tag{8}\]
_that is given by_
\[d_{\pi}\mapsto E_{\pi} \tag{9}\]
_for all \((k,l)\)-partition diagrams \(d_{\pi}\), where \(E_{\pi}\) is given in the Technical Appendix._
_In particular, the set_
\[\{E_{\pi}\mid d_{\pi}\text{ is a }(k,l)\text{-partition diagram having at most }n\text{ blocks}\} \tag{10}\]
_is a basis for \(\operatorname{Hom}_{S_{n}}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\) in the standard basis of \(\mathbb{R}^{n}\), of size \(\operatorname{B}(l+k,n)\coloneqq\sum_{t=1}^{n}\left\{\begin{smallmatrix}l+k\\ t\end{smallmatrix}\right\}\), where \(\left\{\begin{smallmatrix}l+k\\ t\end{smallmatrix}\right\}\) is the Stirling number of the second kind._
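Although the precise formula for \(E_{\pi}\) lives in the Technical Appendix, which is not reproduced in this excerpt, the standard diagram-basis construction can be sketched as follows: the \((I,J)\) entry of \(E_{\pi}\) is 1 exactly when the combined index tuple is constant on every block of \(\pi\). The NumPy code below is ours and rests on that assumption; vertices are 0-indexed, with \(0,\ldots,l-1\) indexing the output factors and \(l,\ldots,l+k-1\) the input factors.

```python
import numpy as np
from itertools import product

def diagram_basis(pi, k, l, n):
    """Matrix for a set partition pi of {0, ..., l+k-1} (assumed convention:
    entry (I, J) is 1 iff the indices agree within each block of pi)."""
    E = np.zeros((n ** l, n ** k))
    for I in product(range(n), repeat=l):
        for J in product(range(n), repeat=k):
            idx = I + J
            if all(len({idx[v] for v in block}) == 1 for block in pi):
                row = int(np.ravel_multi_index(I, (n,) * l)) if l else 0
                col = int(np.ravel_multi_index(J, (n,) * k)) if k else 0
                E[row, col] = 1.0
    return E

# Two sanity checks for (1,1)-partition diagrams:
assert np.array_equal(diagram_basis([{0, 1}], k=1, l=1, n=3), np.eye(3))
assert np.array_equal(diagram_basis([{0}, {1}], k=1, l=1, n=3), np.ones((3, 3)))
```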
**Theorem 2.2** (Spanning set when \(G=O(n)\)).: _(Pearce-Crump, 2022b, Theorem 6.5) For any \(k,l\in\mathbb{Z}_{\geq 0}\) and any \(n\in\mathbb{Z}_{\geq 1}\), there is a surjection of vector spaces_
\[\Phi^{l}_{k,n}:B^{l}_{k}(n)\to\operatorname{Hom}_{O(n)}((\mathbb{R}^{n})^{ \otimes k},(\mathbb{R}^{n})^{\otimes l}) \tag{11}\]
_that is given by_
\[d_{\beta}\mapsto E_{\beta} \tag{12}\]
_for all \((k,l)\)-Brauer diagrams \(d_{\beta}\), where \(E_{\beta}\) is given in the Technical Appendix._
_In particular, the set_
\[\{E_{\beta}\mid d_{\beta}\text{ is a }(k,l)\text{-Brauer diagram}\} \tag{13}\]
_is a spanning set for \(\operatorname{Hom}_{O(n)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\) in the standard basis of \(\mathbb{R}^{n}\), of size \(0\) when \(l+k\) is odd, and of size \((l+k-1)!!\) when \(l+k\) is even._
**Theorem 2.3** (Spanning set when \(G=Sp(n),n=2m\)).: _(Pearce-Crump, 2022b, Theorem 6.6) For any \(k,l\in\mathbb{Z}_{\geq 0}\) and any \(n\in\mathbb{Z}_{\geq 2}\) such that \(n=2m\), there is a surjection of vector spaces_
\[X^{l}_{k,n}:B^{l}_{k}(n)\to\operatorname{Hom}_{Sp(n)}((\mathbb{R}^{n})^{ \otimes k},(\mathbb{R}^{n})^{\otimes l}) \tag{14}\]
_that is given by_
\[d_{\beta}\mapsto F_{\beta} \tag{15}\]
_for all \((k,l)\)-Brauer diagrams \(d_{\beta}\), where \(F_{\beta}\) is given in the Technical Appendix._
_In particular, the set_
\[\{F_{\beta}\mid d_{\beta}\text{ is a }(k,l)\text{-Brauer diagram}\} \tag{16}\]
_is a spanning set for \(\operatorname{Hom}_{Sp(n)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\), for \(n=2m\), in the symplectic basis of \(\mathbb{R}^{n}\), of size \(0\) when \(l+k\) is odd, and of size \((l+k-1)!!\) when \(l+k\) is even._
**Theorem 2.4** (Spanning set when \(G=SO(n)\)).: _(Pearce-Crump, 2022b, Theorem 6.7)_
_For any \(k,l\in\mathbb{Z}_{\geq 0}\) and any \(n\in\mathbb{Z}_{\geq 1}\), there is a surjection of vector spaces_
\[\Psi_{k,n}^{l}:D_{k}^{l}(n)\to\operatorname{Hom}_{SO(n)}((\mathbb{R}^{n})^{ \otimes k},(\mathbb{R}^{n})^{\otimes l}) \tag{17}\]
_that is given by_
\[d_{\beta}\mapsto E_{\beta} \tag{18}\]
_if \(d_{\beta}\) is a \((k,l)\)-Brauer diagram, where \(E_{\beta}\) is given in the Technical Appendix, and by_
\[d_{\alpha}\mapsto H_{\alpha} \tag{19}\]
_if \(d_{\alpha}\) is an \((l+k)\backslash n\)-diagram, where \(H_{\alpha}\) is also given in the Technical Appendix._
_In particular, the set_
\[\{E_{\beta}\}_{\beta}\cup\{H_{\alpha}\}_{\alpha} \tag{20}\]
_where \(d_{\beta}\) is a \((k,l)\)-Brauer diagram, and \(d_{\alpha}\) is a \((l+k)\backslash n\)-diagram, is a spanning set for \(\operatorname{Hom}_{SO(n)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\) in the standard basis of \(\mathbb{R}^{n}\)._
## 3 Strict \(\mathbb{R}\)-Linear Monoidal Categories and String Diagrams
We appreciate that the language of category theory is not commonplace in the machine learning literature. To aid the reader, we have provided some foundational material in the Technical Appendix. Other good references are Mac Lane (1998); Kock (2003); Turaev & Virelizier (2017).
At a basic level, category theory is concerned with objects and the relationships between objects. These relationships are called morphisms. A collection of objects and a collection of morphisms between objects (satisfying some additional conditions) form a category. We can perform operations with morphisms, such as (vertically) composing them, to form new morphisms between objects. Category theory makes it possible to abstract away specific details of structures, to focus instead on the relationships between them. We are interested not only in the relationships within a category but also in how relationships are preserved across different categories. These are described by functors.
In this paper, we are interested in categories that have a specific property called _monoidal_, and the (monoidal) functors between these categories. We will see in Section 5 that it is this property that has important implications for the group equivariant neural networks that we look at in this paper. The monoidal property gives additional structure to the way in which objects and morphisms can be related. In particular, monoidal categories have an additional operation, known as a tensor product, that allows objects and morphisms to be composed in a different way, which we call horizontal. Monoidal functors preserve the tensor product across monoidal categories.
We assume throughout that all categories are _locally small_; that is, that the collection of morphisms between any two objects is a set. In fact, all of the categories that we consider in this paper have morphism sets that are vector spaces: in particular, the morphisms between objects become linear maps. We follow the presentation given in Hu (2019) and Savage (2021) below.
### Strict \(\mathbb{R}\)-Linear Monoidal Categories
_Definition 3.1_.: A category \(\mathcal{C}\) is said to be _strict monoidal_ if it comes with a bifunctor \(\otimes:\mathcal{C}\times\mathcal{C}\to\mathcal{C}\), called the tensor product, and a unit object \(\mathds{1}\), such that, for all objects \(X,Y,Z\) in \(\mathcal{C}\), we have that
\[(X\otimes Y)\otimes Z=X\otimes(Y\otimes Z) \tag{21}\]
\[(\mathds{1}\otimes X)=X=(X\otimes\mathds{1}) \tag{22}\]
and, for all morphisms \(f,g,h\) in \(\mathcal{C}\), we have that
\[(f\otimes g)\otimes h=f\otimes(g\otimes h) \tag{23}\]
\[(1_{\mathds{1}}\otimes f)=f=(f\otimes 1_{\mathds{1}}) \tag{24}\]
where \(1_{\mathds{1}}\) is the identity morphism \(\mathds{1}\to\mathds{1}\).
_Remark 3.2_.: We can assume that all monoidal categories are strict (nonstrict monoidal categories would have isomorphisms in the place of the equalities given in Definition 3.1) owing to a technical result known as Mac Lane's Coherence Theorem. See Mac Lane (1998) for more details.
_Definition 3.3_.: A category \(\mathcal{C}\) is said to be \(\mathbb{R}\)-_linear_ if, for any two objects \(X,Y\) in \(\mathcal{C}\), the morphism space \(\operatorname{Hom}_{\mathcal{C}}(X,Y)\) is a vector space over \(\mathbb{R}\), and the composition of morphisms is \(\mathbb{R}\)-bilinear.
Combining Definitions 3.1 and 3.3, we get
_Definition 3.4_.: A category \(\mathcal{C}\) is said to be _strict \(\mathbb{R}\)-linear monoidal_ if it is a category that is both strict monoidal and \(\mathbb{R}\)-linear, such that the bifunctor \(\otimes\) is \(\mathbb{R}\)-bilinear.
Analogous to how there exists maps between sets, there exists "maps" between categories, known as functors. In particular, we are interested in the following type of functors:
_Definition 3.5_.: Suppose that \((\mathcal{C},\otimes_{\mathcal{C}},\mathds{1}_{\mathcal{C}})\) and \((\mathcal{D},\otimes_{\mathcal{D}},\mathds{1}_{\mathcal{D}})\) are two strict \(\mathbb{R}\)-linear monoidal categories.
A _strict \(\mathbb{R}\)-linear monoidal functor_ from \(\mathcal{C}\) to \(\mathcal{D}\) is a functor \(\mathcal{F}:\mathcal{C}\to\mathcal{D}\) such that
1. for all objects \(X,Y\) in \(\mathcal{C}\), \(\mathcal{F}(X\otimes_{\mathcal{C}}Y)=\mathcal{F}(X)\otimes_{\mathcal{D}} \mathcal{F}(Y)\)
2. for all morphisms \(f,g\) in \(\mathcal{C}\), \(\mathcal{F}(f\otimes_{\mathcal{C}}g)=\mathcal{F}(f)\otimes_{\mathcal{D}} \mathcal{F}(g)\)
3. \(\mathcal{F}(\mathds{1}_{\mathcal{C}})=\mathds{1}_{\mathcal{D}}\), and
4. for all objects \(X,Y\) in \(\mathcal{C}\), the map \[\operatorname{Hom}_{\mathcal{C}}(X,Y)\to\operatorname{Hom}_{\mathcal{D}}( \mathcal{F}(X),\mathcal{F}(Y))\] (25) given by \(f\mapsto\mathcal{F}(f)\) is \(\mathbb{R}\)-linear.
The following definition will also prove to be very important in what follows.
_Definition 3.6_.: Given any two (locally small) categories \(\mathcal{C}\) and \(\mathcal{D}\) (not necessarily strict \(\mathbb{R}\)-linear monoidal), a functor \(\mathcal{F}:\mathcal{C}\to\mathcal{D}\) is said to be _full_ if the map (25) is surjective for all objects \(X,Y\) in \(\mathcal{C}\).
### String Diagrams
Strict monoidal categories are particularly interesting because they can be represented by a very useful diagrammatic language known as string diagrams. As this language is, in some sense, geometric in nature, we will see that it is much easier to work with these diagrams than with their equivalent algebraic form.
_Definition 3.7_ (String Diagrams).: Suppose that \(\mathcal{C}\) is a strict monoidal category. Let \(W,X,Y\) and \(Z\) be objects in \(\mathcal{C}\), and let \(f:X\to Y\), \(g:Y\to Z\), and \(h:W\to Z\) be morphisms in \(\mathcal{C}\). Then we can represent the morphisms \(1_{X}:X\to X\), \(f:X\to Y\), \(g\circ f:X\to Z\) and \(f\otimes h:X\otimes W\to Y\otimes Z\) as diagrams in the following way:
[string diagrams omitted in this extraction] (26)
In particular, the vertical composition of morphisms \(g\circ f\) is obtained by placing \(g\) above \(f\), and the horizontal composition of morphisms \(f\otimes h\) is obtained by horizontally placing \(f\) to the left of \(h\).
We will often omit the labelling of the objects when they are clear or when they are not important.
As an example of how useful string diagrams are when working with strict monoidal categories, the associativity of the bifunctor given in (23) becomes immediately apparent. Another, more involved, example is given by the interchange law that exists for any strict monoidal category. It can be expressed algebraically as
\[(\mathds{1}\otimes g)\circ(f\otimes\mathds{1})=f\otimes g=(f\otimes\mathds{1 })\circ(\mathds{1}\otimes g) \tag{27}\]
Without string diagrams, it is somewhat tedious to prove this result - see (Savage, 2021, Section 2.2) - but with them, the result is intuitively obvious, if we allow ourselves to deform the diagrams by pulling on the strings:
[string-diagram equalities omitted in this extraction] (28)
## 4 Categorification
At this point, we have defined a vector space for each \(k,l\in\mathbb{Z}_{\geq 0}\) that is the \(\mathbb{R}\)-linear span of a certain subset of \((k,l)\)-partition diagrams. However, it should be apparent that, for all values of \(k\) and \(l\), these vector spaces are all similar in nature, in that the set partition diagrams only differ by the number of vertices that appear in each row and by the connections that are made between vertices. Moreover, the astute reader may have noticed that set partition diagrams look like string diagrams. Given that string diagrams represent strict monoidal categories, and that we have a collection of vector spaces for certain subsets of set partition diagrams, this implies that we should have a number of strict \(\mathbb{R}\)-linear monoidal categories! Indeed we do; we formalise this intuition below.
### Category Definitions
We assume throughout that \(n\in\mathbb{Z}_{>0}\).
_Definition 4.1_.: We define the partition category \(\mathcal{P}(n)\) to be the category whose objects are the non-negative integers \(\mathbb{Z}_{\geq 0}=\{0,1,2,\dots\}\), and, for any pair of objects \(k\) and \(l\), the morphism space \(\operatorname{Hom}_{\mathcal{P}(n)}(k,l)\) is \(P_{k}^{l}(n)\).
The vertical composition of morphisms is given by the composition of partition diagrams defined in (5); the bifunctor (the horizontal composition of morphisms) is given by the tensor product of partition diagrams defined in (6); and the unit object is 0.
_Definition 4.2_.: We define the Brauer category \(\mathcal{B}(n)\) to be the category whose objects are the same as those of \(\mathcal{P}(n)\) and, for any pair of objects \(k\) and \(l\), the morphism space \(\operatorname{Hom}_{\mathcal{B}(n)}(k,l)\) is \(B_{k}^{l}(n)\).
The vertical composition of morphisms, the horizontal composition of morphisms and the unit object are the same as those of \(\mathcal{P}(n)\).
_Definition 4.3_.: We define the Brauer-Grood category \(\mathcal{BG}(n)\) to be the category whose objects are the same as those of \(\mathcal{P}(n)\) and, for any pair of objects \(k\) and \(l\), the morphism space \(\operatorname{Hom}_{\mathcal{BG}(n)}(k,l)\) is \(D_{k}^{l}(n)\).
The vertical composition of morphisms and the horizontal composition of morphisms are the same as those defined for \(D_{k}^{l}(n)\), which can be found in the Technical Appendix. The unit object is 0.
It is easy to show that \(\mathcal{P}(n)\), \(\mathcal{B}(n)\) and \(\mathcal{SG}(n)\) are strict \(\mathbb{R}\)-linear monoidal categories.
Also, for the four groups of interest, we can define the following category.
_Definition 4.4_.: If \(G\) is any of the groups \(S_{n},O(n),Sp(n)\) or \(SO(n)\), then we define \(\mathcal{C}(G)\) to be the category whose objects are pairs \(\{((\mathbb{R}^{n})^{\otimes k},\rho_{k})\}_{k\in\mathbb{Z}_{\geq 0}}\), where \(\rho_{k}:G\to GL((\mathbb{R}^{n})^{\otimes k})\) is the representation of \(G\) given in Section 2.1, and, for any pair of objects \(((\mathbb{R}^{n})^{\otimes k},\rho_{k})\) and \(((\mathbb{R}^{n})^{\otimes l},\rho_{l})\), the morphism space, \(\operatorname{Hom}_{\mathcal{C}(G)}(((\mathbb{R}^{n})^{\otimes k},\rho_{k}), ((\mathbb{R}^{n})^{\otimes l},\rho_{l}))\) is precisely \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\).
The vertical composition of morphisms is given by the usual composition of linear maps, the horizontal composition of morphisms is given by the usual tensor product of linear maps, both of which are associative operations, and the unit object is given by \((\mathbb{R},1_{\mathbb{R}})\), where \(1_{\mathbb{R}}\) is the one-dimensional trivial representation of \(G\).
It can be shown that \(\mathcal{C}(G)\) is a subcategory of the category of representations of \(G\), \(\operatorname{Rep}(G)\). See the Technical Appendix for more details. In particular, it is also a strict \(\mathbb{R}\)-linear monoidal category.
### Full, Strict \(\mathbb{R}\)-Linear Monoidal Functors
Given that we have a number of categories from the vector spaces of the different types of partition diagrams, that we have a category from the group equivariant linear maps between tensor power spaces, and that we have a number of maps between these vector spaces - as seen in Section 2.3 - we should have a number of functors between the newly defined categories. Indeed, we have that
**Theorem 4.5**.: _There exists a full, strict \(\mathbb{R}\)-linear monoidal functor_
\[\Theta:\mathcal{P}(n)\to\mathcal{C}(S_{n}) \tag{29}\]
_that is defined on the objects of \(\mathcal{P}(n)\) by \(\Theta(k)\coloneqq((\mathbb{R}^{n})^{\otimes k},\rho_{k})\) and, for any objects \(k,l\) of \(\mathcal{P}(n)\), the map_
\[\operatorname{Hom}_{\mathcal{P}(n)}(k,l)\to\operatorname{Hom}_{\mathcal{C}(S_ {n})}(\Theta(k),\Theta(l)) \tag{30}\]
_is precisely the map_
\[\Theta^{l}_{k,n}:P^{l}_{k}(n)\to\operatorname{Hom}_{S_{n}}((\mathbb{R}^{n})^{ \otimes k},(\mathbb{R}^{n})^{\otimes l}) \tag{31}\]
_given in Theorem 2.1._
**Theorem 4.6**.: _There exists a full, strict \(\mathbb{R}\)-linear monoidal functor_
\[\Phi:\mathcal{B}(n)\to\mathcal{C}(O(n)) \tag{32}\]
_that is defined on the objects of \(\mathcal{B}(n)\) by \(\Phi(k)\coloneqq((\mathbb{R}^{n})^{\otimes k},\rho_{k})\) and, for any objects \(k,l\) of \(\mathcal{B}(n)\), the map_
\[\operatorname{Hom}_{\mathcal{B}(n)}(k,l)\to\operatorname{Hom}_{\mathcal{C}(O (n))}(\Phi(k),\Phi(l)) \tag{33}\]
_is the map_
\[\Phi^{l}_{k,n}:B^{l}_{k}(n)\to\operatorname{Hom}_{O(n)}((\mathbb{R}^{n})^{ \otimes k},(\mathbb{R}^{n})^{\otimes l}) \tag{34}\]
_given in Theorem 2.2._
**Theorem 4.7**.: _There exists a full, strict \(\mathbb{R}\)-linear monoidal functor_
\[X:\mathcal{B}(n)\to\mathcal{C}(Sp(n)) \tag{35}\]
_that is defined on the objects of \(\mathcal{B}(n)\) by \(X(k)\coloneqq((\mathbb{R}^{n})^{\otimes k},\rho_{k})\) and, for any objects \(k,l\) of \(\mathcal{B}(n)\), the map_
\[\operatorname{Hom}_{\mathcal{B}(n)}(k,l)\to\operatorname{Hom}_{\mathcal{C}(Sp(n))}(X(k),X(l)) \tag{36}\]
_is the map_
\[X^{l}_{k,n}:B^{l}_{k}(n)\to\operatorname{Hom}_{Sp(n)}((\mathbb{R}^{n})^{ \otimes k},(\mathbb{R}^{n})^{\otimes l}) \tag{37}\]
_given in Theorem 2.3._
**Theorem 4.8**.: _There exists a full, strict \(\mathbb{R}\)-linear monoidal functor_
\[\Psi:\mathcal{B}\mathcal{G}(n)\to\mathcal{C}(SO(n)) \tag{38}\]
_that is defined on the objects of \(\mathcal{BG}(n)\) by \(\Psi(k)\coloneqq((\mathbb{R}^{n})^{\otimes k},\rho_{k})\) and, for any objects \(k,l\) of \(\mathcal{BG}(n)\), the map_
\[\operatorname{Hom}_{\mathcal{BG}(n)}(k,l)\to\operatorname{Hom}_{\mathcal{C}(SO(n))}(\Psi(k),\Psi(l)) \tag{39}\]
_is the map_
\[\Psi^{l}_{k,n}:D^{l}_{k}(n)\to\operatorname{Hom}_{SO(n)}((\mathbb{R}^{n})^{ \otimes k},(\mathbb{R}^{n})^{\otimes l}) \tag{40}\]
_given in Theorem 2.4._
Proofs of these results are given in the Technical Appendix.
Figure 2: We can use the string-like aspect of \((k,l)\)–partition diagrams to factor them as a composition of a permutation in \(S_{k}\), a _planar_\((k,l)\)–partition diagram, and a permutation in \(S_{l}\).
## 5 Implications for Group Equivariant Neural Networks
The fullness of the functors given in Section 4.2 is especially important. This condition immediately implies that, to understand any \(G\)-equivariant linear map in \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\), it is enough to work with the subset of \((k,l)\)-partition diagrams that correspond to \(G\), since we can apply the appropriate functor to obtain the equivariant maps themselves.
Furthermore, as the \((k,l)\)-partition diagrams have a string-like aspect to them - because they are morphisms in a strict \(\mathbb{R}\)-linear monoidal category - we are able to drag and bend the strings and/or move the vertices to obtain new partition diagrams, and, in the process, new \(G\)-equivariant linear maps via the appropriate functor!
One very powerful use of this idea can be seen in Figure 2. On the left hand side, we have an arbitrary \((5,3)\)-partition diagram. Suppose that we wish to multiply a vector \(v\in(\mathbb{R}^{n})^{\otimes 5}\), expressed in the standard basis, by the matrix that the diagram corresponds to under the functor \(\Theta\) given in Theorem 4.5. We assume here that \(n\geq 4\), since the number of blocks in the set partition corresponding to the \((5,3)\)-partition diagram is \(4\). One option would be to multiply the vector by the matrix as given. However, we can vastly improve the speed of the computation by performing a number of deformations to the diagram as shown in the figure.
At each stage, we drag and bend the strings representing the connected components of the set partition to obtain a factoring of the original \((5,3)\)-partition diagram in terms of a composition of three other diagrams: a \((5,5)\)-partition diagram that is not only Brauer but also a diagram that represents a permutation in the symmetric group \(S_{5}\); another \((5,3)\)-partition diagram that is _planar_ - that is, none of the connected components in the diagram intersect each other - and, finally, a diagram representing another permutation, this time in the symmetric group \(S_{3}\).
Since the middle diagram is planar, this means that it can be decomposed as a tensor product of a number of simpler diagrams, using the strict monoidal property of \(\mathcal{P}(n)\). The tensor product decomposition is shown in Figure 3. The key point is that, under the functor \(\Theta\), the image of the planar partition diagram will be the Kronecker product of the images of these simpler diagrams, since the functor is strict \(\mathbb{R}\)-linear monoidal. This will make the multiplication of any vector by this matrix significantly quicker to perform.
Hence, we have outlined a procedure which shows that, to multiply \(v\) by the linear map that is the image of the left hand diagram under \(\Theta\)_quickly_, we can first apply a permutation to the indices of the standard basis elements appearing in the input vector \(v\), then apply the Kronecker product matrix that is the image under \(\Theta\) of the planar \((5,3)\)-partition diagram, and then finally apply another permutation to the indices of the standard basis elements in \((\mathbb{R}^{n})^{\otimes 3}\) appearing in the resulting vector.
By generalising this example to any \((k,l)\)-partition diagram, we will recover - with one key distinction - the algorithm of Godfrey et al. (2023) for applying symmetric group equivariant layer functions on tensor power spaces of \(\mathbb{R}^{n}\) to input vectors; however, we have used a very different approach to obtain it. The key distinction between the two versions comes from making the middle diagram in the composition planar. Moreover, it is not hard to see that this idea will generalise to give an algorithm for applying group equivariant linear maps to input vectors for the other groups presented in this paper. This result will appear in another paper by the same authors.
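The computational point can be made concrete: under a strict \(\mathbb{R}\)-linear monoidal functor, the planar middle diagram maps to a Kronecker product of small matrices, which can be applied to a vector mode-by-mode without ever forming the full matrix, while the outer permutation diagrams act as cheap index relabellings. The sketch below (ours, not the implementation of Godfrey et al. (2023)) shows the Kronecker-factored matrix-vector product at the heart of this speed-up.

```python
import numpy as np

def kron_matvec(factors, v):
    """Apply (A_1 kron ... kron A_m) to v without forming the big matrix.

    factors: matrices A_i of shape (p_i, q_i); v has length prod(q_i).
    The cost is a sequence of small matrix products instead of one
    prod(p_i) x prod(q_i) multiplication.
    """
    x = v.reshape([A.shape[1] for A in factors])
    for i, A in enumerate(factors):
        # contract A against mode i, then put the new axis back in place
        x = np.moveaxis(np.tensordot(A, x, axes=([1], [i])), 0, i)
    return x.reshape(-1)

# sanity check against the dense Kronecker matrix
rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 3)), rng.standard_normal((4, 5))
v = rng.standard_normal(15)
assert np.allclose(np.kron(A, B) @ v, kron_matvec([A, B], v))
```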
## 6 Related Work
Maron et al. (2019) studied the classification of linear permutation equivariant and invariant neural network layers. They characterised the learnable, linear, permutation equivariant layer functions in \(\operatorname{Hom}_{S_{n}}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\) for \(n\geq k+l\), using the orbit basis. However, Godfrey et al. (2023) discovered that using the diagram basis, first constructed by Jones (1994) in the case \(k=l\), is beneficial for permutation equivariant neural network computations. Pearce-Crump (2022a)
Figure 3: The decomposition of the planar \((5,3)\)–partition diagram into a tensor product of simpler partition diagrams.
established the connection between permutation equivariant linear layers and the partition algebra, using Schur-Weyl duality. They fully characterised these layer functions for all values of \(n\) and tensor power space orders, revealing that the dimension of the layer function space is not independent of \(n\). The partition algebra was introduced by Martin (1990, 1994, 1996) and expanded upon by Jones (1994). Recent papers by Benkart & Halverson (2019a,b) and Benkart et al. (2017) show how the partition algebra can be used to construct the invariant theory of the symmetric group. Comes (2020) discovered the partition category and expressed it in terms of generators and relations. Pearce-Crump (2022b) characterised all learnable, linear, equivariant layer functions in \(\mathrm{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\) for \(G=O(n)\), \(Sp(n)\), and \(SO(n)\), using various sets of set partition diagrams. This characterisation was adapted from the combinatorial representation theory of the Brauer algebra, first developed by Brauer (1937). Grood (1999) studied the representation theory of the Brauer-Grood algebra, while the Brauer category first appeared in Lehrer & Zhang (2012). The same authors investigated the theory behind what we have termed the Brauer-Grood category in Lehrer & Zhang (2018).
## 7 Conclusion
In this paper, we showed how category theory can be applied to the linear layer functions of group equivariant neural networks for the groups \(S_{n}\), \(O(n)\), \(Sp(n)\), and \(SO(n)\), resulting in a richer structure and a deeper understanding of these layer functions. In particular, we outlined the development of an algorithm for computing the result of a vector that is passed through an equivariant, linear layer for each group. The success of our approach suggests that category theory could be beneficial for other areas of deep learning, leading to new insights and approaches.
## 8 Acknowledgments
The author would like to thank his PhD supervisor Professor William J. Knottenbelt for being generous with his time throughout the author's period of research prior to the publication of this paper.
This work was funded by the Doctoral Scholarship for Applied Research which was awarded to the author under Imperial College London's Department of Computing Applied Research scheme. This work will form part of the author's PhD thesis at Imperial College London.
|
2303.00196 | Transformed Low-Rank Parameterization Can Help Robust Generalization for
Tensor Neural Networks | Achieving efficient and robust multi-channel data learning is a challenging
task in data science. By exploiting low-rankness in the transformed domain,
i.e., transformed low-rankness, tensor Singular Value Decomposition (t-SVD) has
achieved extensive success in multi-channel data representation and has
recently been extended to function representation such as Neural Networks with
t-product layers (t-NNs). However, it still remains unclear how t-SVD
theoretically affects the learning behavior of t-NNs. This paper is the first
to answer this question by deriving the upper bounds of the generalization
error of both standard and adversarially trained t-NNs. It reveals that the
t-NNs compressed by exact transformed low-rank parameterization can achieve a
sharper adversarial generalization bound. In practice, although t-NNs rarely
have exactly transformed low-rank weights, our analysis further shows that by
adversarial training with gradient flow (GF), the over-parameterized t-NNs with
ReLU activations are trained with implicit regularization towards transformed
low-rank parameterization under certain conditions. We also establish
adversarial generalization bounds for t-NNs with approximately transformed
low-rank weights. Our analysis indicates that the transformed low-rank
parameterization can promisingly enhance robust generalization for t-NNs. | Andong Wang, Chao Li, Mingyuan Bai, Zhong Jin, Guoxu Zhou, Qibin Zhao | 2023-03-01T03:05:40Z | http://arxiv.org/abs/2303.00196v3 | # Transformed Low-Rank Parameterization Can Help Robust Generalization for Tensor Neural Networks
###### Abstract
Achieving efficient and robust multi-channel data learning is a challenging task in data science. By exploiting low-rankness in the transformed domain, _i.e., transformed low-rankness_, tensor Singular Value Decomposition (t-SVD) has achieved extensive success in multi-channel data representation and has recently been extended to function representation such as Neural Networks with t-product layers (t-NNs). However, it still remains unclear how t-SVD theoretically affects the learning behavior of t-NNs. This paper is the first to answer this question by deriving the upper bounds of the generalization error of both standard and adversarially trained t-NNs. It reveals that the t-NNs compressed by exact transformed low-rank parameterization can achieve a sharper adversarial generalization bound. In practice, although t-NNs rarely have exactly transformed low-rank weights, our analysis further shows that by adversarial training with gradient flow (GF), the over-parameterized t-NNs with ReLU activations are trained with implicit regularization towards transformed low-rank parameterization under certain conditions. We also establish adversarial generalization bounds for t-NNs with approximately transformed low-rank weights. Our analysis indicates that the transformed low-rank parameterization can promisingly enhance robust generalization for t-NNs.
Keywords: Tensor SVD, deep neural networks, adversarial generalization error, implicit bias
## 1 Introduction
_Multi-channel learning_ (MCL) is the task of extracting representations from data with multiple channels, such as multispectral images, time series, and multi-view videos, in an efficient and robust manner (Liu et al., 2020; Hou et al., 2021; Zhang and Ng, 2021, 2022). Among the methods tackling this task, _tensor neural networks_ (t-NNs) have recently come to the fore (Newman et al., 2018; Malik et al., 2021; Wu et al., 2022). Their t-product layers (Kilmer et al., 2013) distinguish them from other networks. Compared with other methods, t-NNs demonstrate effectiveness and robustness when handling multi-channel learning tasks (Newman et al., 2018; Malik et al., 2021; Wu et al., 2022). The intuition is that, as each channel of the data has a different physical attribute (like multispectral
data), one usually needs to extract a few features for each channel individually. The t-product-based layers keep the channel number fixed across layers, equal to the number of input channels, so that changes in individual channels can be explicitly captured in the representation learning phase.
In essence, the key to t-product layers is tensor decomposition, especially tensor singular value decomposition (_t-SVD_) (Liu et al., 2020; Lu et al., 2019; Hou et al., 2021; Zhang and Ng, 2021, 2022; Qiu et al., 2022). Under t-SVD, a tensor is decomposed into a t-product of factors following a linear transformation. Unlike the classic tensor decomposition methods, t-SVD explores the low-rank structure of a tensor in the transformed domain, _i.e._, the _transformed low-rankness_. The imposed transformation is preserved in the t-product layers, providing additional expressivity for the neural networks. Meanwhile, the controllable transformed low-rank structure gives us more flexibility for balancing model accuracy and adversarial robustness (Wu et al., 2022). However, the learning behavior of t-NNs (stacks of t-product layers) still lacks systematic theoretical analysis. More importantly, the additional transformation in the model makes the theoretical analysis technically more challenging than the existing work on general neural networks (Neyshabur et al., 2015; Xiao et al., 2022; Lv and Zhu, 2022; Suzuki et al., 2020).
To this end, we conduct a thorough investigation of the learning behavior of t-NNs, addressing the following fundamentally important questions:
* _Q1: Can we theoretically characterize the generalization behavior of t-NNs?_ Yes. We derive the upper bounds of the generalization gaps for t-NNs under both standard training and adversarial training in Sec. 3, and achieve sharp bounds on the adversarial generalization gap in Sec. 4.1.
* _Q2: How does transformed low-rankness theoretically influence the efficiency of adversarial learning for t-NN?_ We analyze the adversarial generalization gap of t-NNs whose weight tensors are of transformed low-rankness in Sec. 4.1.
* _Q3: How does adversarial learning of t-NNs affect the transformed rank of their weight tensors?_ In Sec. 4.2, we deduce that, under certain conditions, the weight tensors of adversarially trained, highly over-parameterized t-NNs with ReLU activations trained using GF tend to be approximately transformed low-rank.
* _Q4: How is adversarial generalization impacted by approximately transformed low-rank weight tensor in t-NNs?_ In Sec. 4.3, we derive adversarial generalization bounds for t-NNs with approximately transformed low-rank weight tensors in both general cases and a special case where weight tensors have certain patterns of spectral decay in the transformed domain.
## 2 Notations and Preliminaries
In this section, we introduce the notations and give a quick review of t-SVD which plays a central role in the following analysis.
### Notations
We use lowercase, lowercase boldface, and uppercase boldface letters to denote scalars, _e.g._, \(a\in\mathbb{R}\), vectors _e.g._, \(\mathbf{a}\in\mathbb{R}^{m}\), and matrices, _e.g._, \(\mathbf{A}\in\mathbb{R}^{m\times n}\), respectively. Following the standard notations
in (Kilmer et al., 2013), a 3-way tensor of size \(d\times 1\times\mathsf{c}\) is also called a _t-vector_ and denoted by underlined lowercase, _e.g._, \(\underline{\mathbf{x}}\), whereas a 3-way tensor of size \(m\times n\times\mathsf{c}\) is also called a _t-matrix_ and denoted by underlined uppercase, _e.g._, \(\underline{\mathbf{X}}\). We use a t-vector \(\underline{\mathbf{x}}\in\mathbb{R}^{d\times 1\times\mathsf{c}}\) to represent a multi-channel example, where \(\mathsf{c}\) denotes the number of channels and \(d\) is the number of features for each channel.
Given a matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\), its Frobenius norm (F-norm) and spectral norm are defined as \(\left\|\mathbf{A}\right\|_{\mathrm{F}}:=\sqrt{\sum_{i=1}^{\min\{m,n\}}\sigma_{i}^{2}}\) and \(\left\|\mathbf{A}\right\|:=\max_{i}\sigma_{i}\), respectively, where \(\sigma_{i},\,i=1,\cdots,\min\{m,n\}\) are its singular values. The _stable rank_ of a non-zero matrix \(\mathbf{A}\) is defined as the squared ratio of its F-norm and spectral norm, \(r_{\mathsf{stb}}(\mathbf{A}):=\left\|\mathbf{A}\right\|_{\mathrm{F}}^{2}/\left\|\mathbf{A}\right\|^{2}\). Given a tensor \(\underline{\mathbf{T}}\), define its \(l_{p}\)-norm and F-norm respectively as \(\left\|\underline{\mathbf{T}}\right\|_{l_{p}}:=\left\|\mathtt{vec}(\underline{\mathbf{T}})\right\|_{l_{p}}\) and \(\left\|\underline{\mathbf{T}}\right\|_{\mathrm{F}}:=\left\|\mathtt{vec}(\underline{\mathbf{T}})\right\|_{2}\), where \(\mathtt{vec}(\cdot)\) denotes the vectorization operation of a tensor (Kolda and Bader, 2009). Given \(\underline{\mathbf{T}}\in\mathbb{R}^{m\times n\times\mathsf{c}}\), let \(\underline{\mathbf{T}}_{\cdot,\cdot,i}\) denote its \(i\)-th frontal slice. The inner product between two tensors \(\underline{\mathbf{A}},\underline{\mathbf{B}}\) is defined as \(\left\langle\underline{\mathbf{A}},\underline{\mathbf{B}}\right\rangle:=\mathtt{vec}(\underline{\mathbf{A}})^{\top}\mathtt{vec}(\underline{\mathbf{B}}).\) The frontal-slice-wise product of two tensors \(\underline{\mathbf{A}},\underline{\mathbf{B}}\), denoted by \(\underline{\mathbf{A}}\odot\underline{\mathbf{B}}\), equals a tensor \(\underline{\mathbf{T}}\) such that \(\underline{\mathbf{T}}_{\cdot,\cdot,i}=\underline{\mathbf{A}}_{\cdot,\cdot,i}\underline{\mathbf{B}}_{\cdot,\cdot,i},\,\,\forall i=1,\cdots,\mathsf{c}\) (Kilmer et al., 2013). We use \(|\cdot|\) as the absolute value for a scalar and the cardinality for a set. Other notations are introduced at their first appearance.
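For readers who prefer code, the following NumPy sketch mirrors these definitions; the helper names are ours and the inputs are random placeholders.

```python
import numpy as np

def stable_rank(A):
    # r_stb(A) = ||A||_F^2 / ||A||^2; always lies in [1, rank(A)]
    s = np.linalg.svd(A, compute_uv=False)
    return (s ** 2).sum() / s[0] ** 2

def tensor_lp_norm(T, p):
    # ||T||_lp = ||vec(T)||_lp; p = 2 gives the F-norm
    return np.linalg.norm(T.ravel(), p)

A = np.random.randn(5, 3)
T = np.random.randn(4, 2, 3)
print(stable_rank(A), tensor_lp_norm(T, 2))
```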
### Tensor Singular Value Decomposition
The framework of tensor singular value decomposition (t-SVD) is based on the t-product under an invertible linear transform \(M\) (Kernfeld et al., 2015). In recent studies, the transformation matrix \(\mathbf{M}\) defining the transform \(M\) is _restricted to be orthogonal_ (Wang et al., 2021) for better properties, which is also followed in this paper. Given any _orthogonal matrix_\({}^{1}\) \(\mathbf{M}\in\mathbb{R}^{\mathsf{c}\times\mathsf{c}}\), define the associated linear transform \(M(\cdot)\) with its inverse \(M^{-1}(\cdot)\) on any \(\underline{\mathbf{T}}\in\mathbb{R}^{m\times n\times\mathsf{c}}\) as
Footnote 1: Following (Newman et al., 2018), we restrict \(\mathbf{M}\) to be orthogonal for simplicity.
\[M(\underline{\mathbf{T}}):=\underline{\mathbf{T}}\times_{3}\mathbf{M},\,\,\, \text{and}\,\,\,\,\,\,M^{-1}(\underline{\mathbf{T}}):=\underline{\mathbf{T}} \times_{3}\mathbf{M}^{-1}, \tag{1}\]
where \(\times_{3}\) denotes the tensor matrix product on mode-\(3\)(Kernfeld et al., 2015).
**Definition 1** (t-product (Kernfeld et al., 2015)): _The t-product of any \(\underline{\mathbf{A}}\in\mathbb{R}^{m\times n\times\mathsf{c}}\) and \(\underline{\mathbf{B}}\in\mathbb{R}^{n\times k\times\mathsf{c}}\) under the invertible linear transform \(M\) in Eq. (1) is denoted and defined as \(\underline{\mathbf{A}}*_{M}\underline{\mathbf{B}}=\underline{\mathbf{C}}\in\mathbb{R}^{m\times k\times\mathsf{c}}\) such that \(M(\underline{\mathbf{C}})=M(\underline{\mathbf{A}})\odot M(\underline{\mathbf{B}})\)._
**Definition 2** (\(M\)-block-diagonal matrix): _The \(M\)-block-diagonal matrix of any \(\underline{\mathbf{T}}\in\mathbb{R}^{m\times n\times\mathsf{c}}\), denoted by \(\widetilde{\mathbf{T}}_{M}\), is the block diagonal matrix whose diagonal blocks are the frontal slices of \(M(\underline{\mathbf{T}})\):_
\[\widetilde{\mathbf{T}}_{M}:=\mathtt{bdiag}(M(\underline{\mathbf{T}})):=\begin{bmatrix}M(\underline{\mathbf{T}})_{\cdot,\cdot,1}&&&\\ &M(\underline{\mathbf{T}})_{\cdot,\cdot,2}&&\\ &&\ddots&\\ &&&M(\underline{\mathbf{T}})_{\cdot,\cdot,\mathsf{c}}\end{bmatrix}\in\mathbb{R}^{m\mathsf{c}\times n\mathsf{c}}.\]
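To make Definitions 1 and 2 concrete, here is a minimal NumPy sketch of the t-product and the \(M\)-block-diagonal matrix; the helper names (`transform`, `t_product`, `bdiag`) and all sizes are our own choices, not the paper's.

```python
import numpy as np

def transform(T, M):
    # M(T) = T x_3 M: apply M along the third (channel) mode
    return np.einsum('mnk,ck->mnc', T, M)

def t_product(A, B, M):
    # Definition 1: M(C) = M(A) (frontal-slice-wise matrix product) M(B)
    Ah, Bh = transform(A, M), transform(B, M)
    Ch = np.einsum('mnc,nkc->mkc', Ah, Bh)
    return transform(Ch, M.T)           # M^{-1} = M^T for orthogonal M

def bdiag(T, M):
    # Definition 2: block-diagonal matrix of the transformed frontal slices
    Th = transform(T, M)
    c = Th.shape[2]
    zero = np.zeros(Th.shape[:2])
    return np.block([[Th[:, :, i] if i == j else zero for j in range(c)]
                     for i in range(c)])

c = 4
M, _ = np.linalg.qr(np.random.randn(c, c))   # a random orthogonal transform
A, B = np.random.randn(3, 2, c), np.random.randn(2, 5, c)
C = t_product(A, B, M)                       # a (3, 5, c) t-matrix
assert np.allclose(bdiag(C, M), bdiag(A, M) @ bdiag(B, M))
```

The final assertion checks that the t-product corresponds to multiplying the \(M\)-block-diagonal matrices, the fact underlying Eq. (3) below.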
In this paper, we also follow the definition of t-transpose, t-identity tensor, t-orthogonal tensor, and f-diagonal tensor given by Kernfeld et al. (2015), and thus the t-SVD is introduced as follows.
**Definition 3** (t-SVD, tubal rank (Kernfeld et al., 2015)): _Tensor Singular Value Decomposition (t-SVD) of \(\underline{\mathbf{T}}\in\mathbb{R}^{m\times n\times\mathsf{c}}\) under the invertible linear transform \(M\) in Eq. (1) is given as follows_
\[\underline{\mathbf{T}}=\underline{\mathbf{U}}*_{M}\underline{\mathbf{S}}*_{M} \underline{\mathbf{V}}^{\top}, \tag{2}\]
_where \(\underline{\mathbf{U}}\in\mathbb{R}^{m\times m\times\mathsf{c}}\) and \(\underline{\mathbf{V}}\in\mathbb{R}^{n\times n\times\mathsf{c}}\) are t-orthogonal, and \(\underline{\mathbf{S}}\in\mathbb{R}^{m\times n\times\mathsf{c}}\) is f-diagonal._
_The tubal rank of \(\underline{\mathbf{T}}\in\mathbb{R}^{m\times n\times\mathsf{c}}\) is defined as the number of non-zero tubes of \(\underline{\mathbf{S}}\) in its t-SVD in Eq. (2), i.e., \(r_{\text{t}}(\underline{\mathbf{T}}):=|\{i\mid\underline{\mathbf{S}}(i,i,:) \neq\mathbf{0},i\leq\min\{m,n\}\}|\)._
For any \(\underline{\mathbf{T}}\in\mathbb{R}^{m\times n\times\mathsf{c}}\) with the tubal rank \(r_{\text{t}}(\underline{\mathbf{T}})\), we have following relationship between its t-SVD and the matrix SVD of its \(M\)-block-diagonal matrix (Wang et al., 2021; Lu, 2021):
\[\underline{\mathbf{T}}=\underline{\mathbf{U}}*_{M}\underline{\mathbf{S}}*_{M}\underline{\mathbf{V}}^{\top}\;\Leftrightarrow\;\widetilde{\mathbf{T}}_{M}=\widetilde{\mathbf{U}}_{M}\cdot\widetilde{\mathbf{S}}_{M}\cdot\widetilde{\mathbf{V}}_{M}^{\top},\quad\text{and}\quad\mathsf{c}\,r_{\text{t}}(\underline{\mathbf{T}})\geq\text{rank}(\widetilde{\mathbf{T}}_{M}). \tag{3}\]
As the \(M\)-block-diagonal matrix \(\widetilde{\mathbf{T}}_{M}\) is defined after transforming the tensor \(\underline{\mathbf{T}}\) from the original domain to the transformed domain, the relationship \(\mathsf{c}\,r_{\text{t}}(\underline{\mathbf{T}})\geq\text{rank}(\widetilde{\mathbf{T}}_{M})\) indicates that the tubal rank can be chosen as a measure of transformed low-rankness (Wang et al., 2021; Lu, 2021).
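Definition 3 can likewise be computed slice by slice in the transformed domain. The sketch below builds a tensor of tubal rank 2 from two factors and recovers that rank; the function names, sizes, and tolerance are our own.

```python
import numpy as np

def transform(T, M):
    return np.einsum('mnk,ck->mnc', T, M)

def tubal_rank(T, M, tol=1e-10):
    Th = transform(T, M)
    # singular values of each frontal slice in the transformed domain
    S = np.stack([np.linalg.svd(Th[:, :, i], compute_uv=False)
                  for i in range(Th.shape[2])], axis=1)
    # tube (i, i, :) of S in the t-SVD is non-zero iff some slice has sigma_i > 0
    return int((S.max(axis=1) > tol).sum())

c = 4
M, _ = np.linalg.qr(np.random.randn(c, c))
U, V = np.random.randn(5, 2, c), np.random.randn(2, 6, c)
Ch = np.einsum('mnc,nkc->mkc', transform(U, M), transform(V, M))
T = transform(Ch, M.T)                 # T = U *_M V, so tubal rank <= 2
print(tubal_rank(T, M))                # generically prints 2
```

Each transformed slice has rank at most \(r_{\text{t}}(\underline{\mathbf{T}})\), which is how the relationship \(\mathsf{c}\,r_{\text{t}}(\underline{\mathbf{T}})\geq\text{rank}(\widetilde{\mathbf{T}}_{M})\) arises.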
## 3 Neural Networks with t-Product Layer
In this section, we formulate the t-product layer used in t-NNs, and then rigorously prove the standard and adversarial generalization bounds for t-NNs.
### Multi-Channel Feature Learning via t-Product
Consider a multi-channel example represented as a t-vector \(\underline{\mathbf{x}}\in\mathbb{R}^{d\times 1\times\mathsf{c}}\), where \(\mathsf{c}\) is the number of channels and \(d\) is the number of features. An \(L\)-layer t-NN feature extractor \(\mathbf{f}(\underline{\mathbf{x}})\) extracts \(d_{L}\) features for each channel of \(\underline{\mathbf{x}}\) as follows
\[\mathbf{f}(\underline{\mathbf{x}})=\mathbf{f}^{(L)}(\underline{\mathbf{x}}); \quad\mathbf{f}^{(l)}(\underline{\mathbf{x}})=\sigma(\underline{\mathbf{W}}^ {(l)}*_{M}\mathbf{f}^{(l-1)}(\underline{\mathbf{x}})),\;\forall l=1,\cdots,L; \quad\mathbf{f}^{(0)}(\underline{\mathbf{x}})=\underline{\mathbf{x}} \tag{4}\]
where the \(l\)-th layer \(\mathbf{f}^{(l)}\) first conducts a t-product with the weight tensor (t-matrix) \(\underline{\mathbf{W}}^{(l)}\in\mathbb{R}^{d_{l}\times d_{l-1}\times\mathsf{c}}\) on the output of the \((l-1)\)-th layer, the multi-channel features\({}^{2}\) \(\mathbf{f}^{(l-1)}(\underline{\mathbf{x}})\in\mathbb{R}^{d_{l-1}\times 1\times\mathsf{c}}\), to obtain a (\(d_{l}\times 1\times\mathsf{c}\))-dimensional representation, and then applies the entry-wise ReLU\({}^{3}\) activation \(\sigma(x)=\max\{x,0\}\) for nonlinearity.
Footnote 2: For simplicity, let \(d_{0}=d\) by treating the input example \(\underline{\mathbf{x}}\) as the \(0\)-th layer \(\mathbf{f}^{(0)}\).
Footnote 3: Although we consider ReLU activation in this paper, most of the main theoretical results (e.g., Theorems 7, 9, 11, 19 and 21) can be generalized to general Lipschitz activations with slight modifications in the proof.
**Remark 4**: _Unlike Newman et al. (2018), Malik et al. (2021) and Wu et al. (2022) whose nonlinear activation is performed in the transformed domain, the t-NN model in Eq. (4) considers the nonlinear activation in the original domain and hence is consistent with traditional neural networks._
By adding a linear classification module of weight \(\mathbf{w}\in\mathbb{R}^{\mathsf{c}d_{L}}\) after the feature extraction module in Eq. (4), we consider the following t-NN predictor whose sign can be utilized for binary classification:
\[f(\underline{\mathbf{x}};\underline{\mathbf{W}}):=\mathbf{w}^{\top}\mbox{ \tt vec}(\mathbf{f}^{(L)}(\underline{\mathbf{x}}))\in\mathbb{R}. \tag{5}\]
Let \(\underline{\mathbf{W}}:=\{\underline{\mathbf{W}}^{(1)},\cdots,\underline{\mathbf{W}}^{(L)},\mathbf{w}\}\) denote the collection of all the weights. With a slight abuse of notation, let \(\left\|\underline{\mathbf{W}}\right\|_{\text{F}}:=\sqrt{\left\|\mathbf{w}\right\|_{2}^{2}+\sum_{l=1}^{L}\left\|\underline{\mathbf{W}}^{(l)}\right\|_{\text{F}}^{2}}\) denote the overall Euclidean norm of all the weights.
The function class of t-NNs whose weights are bounded in the Euclidean norm is defined as
\[\mathfrak{F}:=\bigg\{f(\underline{\mathbf{x}};\underline{\mathbf{W}})\ \ \ \Big{|}\ \|\mathbf{w}\|_{2}\leq B_{w},\quad\Big\|\underline{\mathbf{W}}^{(l)}\Big\|_{\mathrm{F}}\leq B_{l},\quad\forall l=1,\cdots,L\bigg\}. \tag{6}\]
Let \(B_{\underline{\mathbf{W}}}:=B_{w}\prod_{j=1}^{L}B_{j}\) for simplicity. The goal is to train a t-NN predictor \(f\in\mathfrak{F}\) on the training sample set \(S\) to achieve high classification accuracy for any new multi-channel example \(\underline{\mathbf{x}}\) drawn from the unknown data distribution \(P_{\underline{\mathbf{x}}}\).
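To make Eqs. (4)-(6) concrete, here is a minimal NumPy forward pass of a t-NN predictor; the weights are random placeholders rather than trained parameters, and the `vec` ordering here is NumPy's row-major ravel, a convention of the sketch only.

```python
import numpy as np

def t_product(A, B, M):                     # Definition 1, orthogonal M
    Ah = np.einsum('mnk,ck->mnc', A, M)
    Bh = np.einsum('mnk,ck->mnc', B, M)
    return np.einsum('mnk,ck->mnc', np.einsum('mnc,nkc->mkc', Ah, Bh), M.T)

def t_nn_predict(x, Ws, w, M):
    f = x                                   # f^{(0)}(x) = x
    for W in Ws:                            # f^{(l)} = sigma(W^{(l)} *_M f^{(l-1)})
        f = np.maximum(t_product(W, f, M), 0.0)   # ReLU in the original domain
    return w @ f.ravel()                    # Eq. (5): w^T vec(f^{(L)}(x))

d, c, widths = 6, 3, [8, 4]                 # d_0 = 6, d_1 = 8, d_2 = 4
M, _ = np.linalg.qr(np.random.randn(c, c))
dims = [d] + widths
Ws = [np.random.randn(dims[l + 1], dims[l], c) / np.sqrt(c * dims[l])
      for l in range(len(widths))]
w = np.random.randn(c * widths[-1])         # w lies in R^{c d_L}
x = np.random.randn(d, 1, c)
print(t_nn_predict(x, Ws, w, M))
```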
### Generalization Bounds for t-NNs
Suppose we are given a training multi-channel dataset \(S\) consisting of \(N\) example-label pairs \((\underline{\mathbf{x}}_{i},y_{i})\in\mathbb{R}^{d\times 1\times c}\times\{\pm 1\}\), \(i=1,\cdots,N\), which are _i.i.d._ drawn from an underlying data distribution \(P_{\underline{\mathbf{x}},y}\). The following assumption is made on the input multi-channel data.
**Assumption 5**: _Every input example \(\underline{\mathbf{x}}\in\mathbb{R}^{d\times 1\times\mathsf{c}}\) has upper-bounded F-norm, i.e., \(\|\underline{\mathbf{x}}\|_{\mathrm{F}}\leq B_{x}\)._
When a loss function \(\ell(f(\underline{\mathbf{x}}_{i}),y_{i})\) is considered as the measure of classification quality, we define the empirical and population risk for any predictor \(f\in\mathfrak{F}\)
\[\hat{\mathcal{L}}(f):=\frac{1}{N}\sum_{i=1}^{N}\ell(f(\underline{\mathbf{x}}_ {i}),y_{i}),\quad\mathcal{L}(f):=\mathbb{E}_{P(\underline{\mathbf{x}},y)}\left[ \ell(f(\underline{\mathbf{x}}),y)\right], \tag{7}\]
respectively. Similar to (Lyu and Li, 2020), we also make additional assumptions as follows on the loss function.
**Assumption 6**: _For any t-NN predictor \(h\in\mathfrak{F}\), the loss function \(\ell(h(\underline{\mathbf{x}}),y)\) can be expressed as \(\ell(h(\underline{\mathbf{x}}),y)=\exp(-\mathfrak{f}(yh(\underline{\mathbf{x}})))\) such that: **(A.1)** The range of the loss \(\ell(\cdot,\cdot)\) is \([0,B]\). **(A.2)** \(\mathfrak{f}:\mathbb{R}\to\mathbb{R}\) is \(C^{1}\)-smooth; **(A.3)** \(\mathfrak{f}^{\prime}(x)\geq 0\) for all \(x\in\mathbb{R}\); **(A.4)** there exists \(b_{\mathfrak{f}}\geq 0\) such that \(x\mathfrak{f}^{\prime}(x)\) is non-decreasing for \(x\in(b_{\mathfrak{f}},+\infty)\), and \(x\mathfrak{f}^{\prime}(x)\to+\infty\) as \(x\to+\infty\); **(A.5)** let \(\mathfrak{g}:[\mathfrak{f}(b_{\mathfrak{f}}),+\infty)\to[b_{\mathfrak{f}},+\infty)\) be the inverse function of \(\mathfrak{f}\) on the domain \([b_{\mathfrak{f}},+\infty)\). There exist \(b_{\mathfrak{g}}\geq\max\{2\mathfrak{f}(b_{\mathfrak{f}}),\mathfrak{f}(2b_{\mathfrak{f}})\}\) and \(K\geq 1\), such that \(\mathfrak{g}^{\prime}(x)\leq K\mathfrak{g}^{\prime}(\theta x)\) and \(\mathfrak{f}^{\prime}(y)\leq K\mathfrak{f}^{\prime}(\theta y)\) for all \(x\in(b_{\mathfrak{g}},+\infty),y\in(\mathfrak{g}(b_{\mathfrak{g}}),+\infty)\) and \(\theta\in[1/2,1)\)._
Assumption **(A.1)** is a natural assumption in generalization analysis (Yin et al., 2019; Awasthi et al., 2020), and Assumptions **(A.2)-(A.5)** are the same as Assumption (B3) in Lyu and Li (2020). According to Assumption **(A.2)**, the loss function \(\ell(\cdot,\cdot)\) satisfies the \(L_{\ell}\)-Lipschitz continuity
\[|\ell(h(\underline{\mathbf{x}}_{1}),y_{1})-\ell(h(\underline{\mathbf{x}}_{2}),y_{2})|\leq L_{\ell}|y_{1}h(\underline{\mathbf{x}}_{1})-y_{2}h(\underline{\mathbf{x}}_{2})|,\quad\text{with }L_{\ell}=\sup_{|q|\leq B_{\tilde{f}}}\mathfrak{f}^{\prime}(q)e^{-\mathfrak{f}(q)}, \tag{8}\]
where \(B_{\tilde{f}}\) is an upper bound on the output of any t-NN \(h\in\mathfrak{F}\). The Lipschitz continuity is also widely assumed for generalization analysis of DNNs (Yin et al., 2019; Xiao et al., 2022). Assumption 6 is satisfied by commonly used loss functions such as the logistic loss and the exponential loss.
The generalization gap \(\mathcal{L}(f)-\hat{\mathcal{L}}(f)\) of any function \(f\in\mathfrak{F}\) can be bounded as follows.
**Lemma 7** (Generalization bound for t-NNs): _Under Assumptions 5 and 6, it holds with probability at least \(1-2e^{-t}\) for any \(t>0\) that for any \(f\in\mathfrak{F}\), its generalization error satisfies_
\[\mathcal{L}(f)-\hat{\mathcal{L}}(f)\leq\frac{L_{\ell}B_{x}B_{\underline{\mathbf{ W}}}}{\sqrt{N}}(\sqrt{2\log(2)d}+1)+3B\sqrt{\frac{t}{2N}}. \tag{9}\]
### Adversarial Generalization Bounds for t-NNs
We consider the adversarial generalization behavior of t-NNs in this subsection. We first make the following assumption on the adversarial perturbations.
**Assumption 8**: _Given an input example \(\underline{\mathbf{x}}\), the adversarial perturbation is chosen within a radius-\(\xi\) ball of norm \(R_{\mathsf{a}}(\cdot)\) whose compatibility constant with the Euclidean norm is given as \(\mathsf{C}_{R_{\mathsf{a}}}=\sup_{\underline{\mathbf{x}}\neq\bullet}R_{\mathsf{ a}}(\underline{\mathbf{x}})/\left\|\underline{\mathbf{x}}\right\|_{\mathrm{F}}\)._
The assumption allows for much broader adversary classes than the commonly considered \(l_{p}\)-attacks (Xia and Yuan, 2017; Xiao et al., 2022). For example, if one treats the multi-channel data \(\underline{\mathbf{x}}\in\mathbb{R}^{d\times 1\times\mathsf{c}}\) as a matrix of dimensionality \(d\times\mathsf{c}\) and attacks it with nuclear norm attacks (Kazemi et al., 2020), then the constant \(\mathsf{C}_{R_{\mathsf{a}}}=\sqrt{\min\{d,\mathsf{c}\}}\).
Given an example-label pair \((\underline{\mathbf{x}},y)\), the adversarial loss function for any predictor \(f\) is defined as \(\tilde{\ell}(f(\underline{\mathbf{x}}),y)=\max_{R_{\mathsf{a}}(\underline{\mathbf{x}}^{\prime}-\underline{\mathbf{x}})\leq\xi}\ell(f(\underline{\mathbf{x}}^{\prime}),y)\). The empirical and population adversarial risks are thus defined as
\[\hat{\mathcal{L}}^{\mathrm{adv}}(f)=\frac{1}{N}\sum_{i=1}^{N}\tilde{\ell}(f( \underline{\mathbf{x}}_{i}),y_{i}),\quad\mathcal{L}^{\mathrm{adv}}(f)=\mathbb{ E}_{(\underline{\mathbf{x}},y)}\left[\tilde{\ell}(f(\underline{\mathbf{x}}),y) \right],\]
respectively. The adversarial generalization performance is measured by the adversarial generalization gap (AGP) defined as \(\mathcal{L}^{\mathrm{adv}}(f)-\hat{\mathcal{L}}^{\mathrm{adv}}(f)\). Let \(B_{\tilde{f}}:=(B_{x}+\xi\mathsf{C}_{R_{\mathsf{a}}})B_{\underline{\mathbf{W} }}\) for simplicity. For any t-NN \(f\in\mathfrak{F}\), its AGP is bounded as follows.
**Theorem 9** (Adversarial generalization bound for t-NNs): _Under Assumptions 5 and 6, it holds with probability at least \(1-2e^{-t}\)\((\forall t>0)\), that for any \(f\in\mathfrak{F}\), its adversarial generalization gap satisfies_
\[\mathcal{L}^{\mathrm{adv}}(f)-\hat{\mathcal{L}}^{\mathrm{adv}}(f)\leq \frac{CL_{\ell}B_{\tilde{f}}}{\sqrt{N}}\sqrt{\mathsf{c}\sum_{l=1} ^{L}d_{l-1}d_{l}\log(3(L+1))}+3B\sqrt{\frac{t}{2N}}. \tag{10}\]
**Remark 10**: _When the input example has channel number \(\mathsf{c}=1\) and the attacker uses an \(l_{p}\)-attack, the adversarial generalization bound in Theorem 9 degenerates to Theorem 4 in Xiao et al. (2022)._
## 4 Transformed Low-rank Parameterization for t-NNs
### Generalization Bound under Exact Transformed Low-rank Parameterization
According to the adversarial generalization error bound for t-NNs in Theorem 9, the bound scales like \(O(\sqrt{\mathsf{c}(\sum_{l=1}^{L}d_{l-1}d_{l})/N})\), i.e., with the square root of the parameter complexity, which may require a large number of training examples \(N\) to achieve the expected adversarial accuracy. In addition, high parameter complexity results in high energy, storage, and timing costs when deploying modern large-scale DNN models in the field, especially on resource-constrained embedded and mobile devices.
To this end, we propose a transformed low-rank parameterization scheme to compress the original hypothesis set. Specifically, given a vector of pre-set ranks \(\mathbf{r}=(r_{1},\cdots,r_{L})^{\top}\in\mathbb{N}^{L}\) where \(r_{l}\leq\min\{d_{l},d_{l-1}\}\), we consider the following subset of the original t-NNs:
\[\mathfrak{F}_{\mathbf{r}}:=\bigg{\{}f\biggm{|}f\in\mathfrak{F},\text{and }r_{t}(\underline{\mathbf{W}}^{(l)})\leq r_{l},\forall l=1,\cdots,L\bigg{\}}. \tag{11}\]
In the function set \(\mathfrak{F}_{\mathbf{r}}\), the weight tensor \(\underline{\mathbf{W}}^{(l)}\) of the \(l\)-th layer has upper bounded tubal rank, which means low-rankness in the transformed domain5. We bound the adversarial behavior for any function \(f\in\mathfrak{F}_{\mathbf{r}}\) as follows.
Footnote 5: For real implementations, one can adopt a rank learning strategy similar to (Idelbayev and Carreira-Perpinan, 2020) to select a suitable rank parameter \(\mathbf{r}\). Due to the scope of this paper, we leave this for future work.
**Theorem 11** (Adversarial generalization bound for t-NNs with transformed low-rank weights): _Under Assumptions 5 and 6, it holds with probability at least \(1-2e^{-t},\,\forall t>0\), that for any \(f_{\mathbf{r}}\in\mathfrak{F}_{\mathbf{r}}\), its adversarial generalization error satisfies_
\[\mathcal{L}^{\text{adv}}(f_{\mathbf{r}})-\hat{\mathcal{L}}^{\text{adv}}(f_{ \mathbf{r}})\leq\frac{C^{\prime}L_{\ell}B_{\tilde{f}}}{\sqrt{N}}\sqrt{\text{ c}\sum_{l=1}^{L}r_{l}(d_{l-1}+d_{l})\log(9(L+1))}+3B\sqrt{\frac{t}{2N}}. \tag{12}\]
Comparing Theorem 11 with Theorem 9, it can be seen that the adversarial generalization bound under transformed low-rank parameterization has a better scaling \(O(\sqrt{\mathsf{c}\sum_{l=1}^{L}r_{l}(d_{l-1}+d_{l})/N})\), which indicates that fewer training examples are needed and that a more efficient adversarial training process can be achieved.
### Implicit Bias of Gradient Flow in Adversarial Learning of t-NNs
Although Theorem 11 shows that exact transformed low-rank parameterization leads to lower bounds, well-trained t-NNs on real datasets rarely have exactly transformed low-rank weights. In this section, we prove that highly over-parameterized t-NNs, trained adversarially with gradient flow (GF), are approximately transformed low-rank parameterized under certain conditions.
The proposed t-NN \(f(\underline{\mathbf{x}};\underline{\mathbf{W}})\) is (positively) _homogeneous_, as the following condition holds: \(f(\underline{\mathbf{x}};a\underline{\mathbf{W}})=a^{L+1}f(\underline{\mathbf{x}};\underline{\mathbf{W}})\) for any positive constant \(a\). Motivated by (Lv and Zhu, 2022), we focus on the scale-invariant adversarial perturbations defined as follows.
**Definition 12** (Scale invariant adversarial perturbation, Lv and Zhu 2022): _An adversarial perturbation is said to be a scale invariant adversarial perturbation for \(f(\underline{\mathbf{x}};a\underline{\mathbf{W}})\) if it satisfies \(\underline{\boldsymbol{\delta}}_{i}(a\underline{\mathbf{W}})=\underline{ \boldsymbol{\delta}}_{i}(\underline{\mathbf{W}}),\) for any positive constant \(a\)._
**Lemma 13**: _The \(l_{2}\)-FGM (Miyato et al., 2017), FGSM (Goodfellow et al., 2015), \(l_{2}\)-PGD and \(l_{\infty}\)-PGD (Madry et al., 2018) perturbations for the t-NNs are all scale invariant._
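To illustrate the mechanism behind Lemma 13, the sketch below numerically checks the scale invariance of the \(l_{2}\)-FGM direction. A two-layer ReLU network, which is positively homogeneous like a t-NN, stands in for one here; the gradients are hand-coded and all constants are arbitrary.

```python
import numpy as np

def grad_x(x, W1, w2):
    # d/dx of f(x) = w2^T relu(W1 x), taking relu'(0) = 0 as in this section
    pre = W1 @ x
    return W1.T @ (w2 * (pre > 0))

rng = np.random.default_rng(0)
W1, w2 = rng.normal(size=(7, 5)), rng.normal(size=7)
x, y, xi = rng.normal(size=5), 1.0, 0.1

def l2_fgm(W1, w2):
    g = -y * grad_x(x, W1, w2)        # ascent direction of the loss exp(-y f(x))
    return xi * g / np.linalg.norm(g)

a = 3.7                               # any positive rescaling of all weights
assert np.allclose(l2_fgm(W1, w2), l2_fgm(a * W1, a * w2))
```

Scaling all weights by \(a>0\) multiplies the input gradient by a positive scalar (\(a^{2}\) here, \(a^{L+1}\) for a t-NN), so the normalized perturbation is unchanged.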
Gradient flow can be seen as gradient descent with an infinitesimal step size. ReLU t-NNs are locally Lipschitz, so GF can be used to train them. When using GF for ReLU t-NNs, \(\underline{\mathbf{W}}\) changes continuously with time, and the trajectory of the parameter \(\underline{\mathbf{W}}\) during training is an arc \(\underline{\mathbf{W}}:[0,\infty)\rightarrow\mathbb{R}^{\text{dim}(\underline{\mathbf{W}})},\,t\mapsto\underline{\mathbf{W}}(t)\), that satisfies the differential inclusion (Dutta et al., 2013; Lyu and Li, 2020)
\[\frac{\text{d}\underline{\mathbf{W}}(t)}{\text{d}t}\in-\partial^{\circ}\hat{ \mathcal{L}}^{\text{adv}}(\underline{\mathbf{W}}(t)) \tag{13}\]
for _a.e._ \(t\geq 0\), where \(\partial^{\circ}\hat{\mathcal{L}}^{\text{adv}}\) denotes Clarke's subdifferential (Dutta et al., 2013) with respect to \(\underline{\mathbf{W}}(t)\). If \(\hat{\mathcal{L}}^{\text{adv}}(\underline{\mathbf{W}})\) is actually a \(C^{1}\)-smooth function, the above differential inclusion reduces to
\[\frac{\text{d}\underline{\mathbf{W}}(t)}{\text{d}t}=-\frac{\partial\hat{\mathcal{L}}^{\text{adv}}(\underline{\mathbf{W}}(t))}{\partial\underline{\mathbf{W}}(t)} \tag{14}\]
for all \(t\geq 0\), which corresponds to gradient flow with the differential in the usual sense. However, for simplicity, we follow Vardi and Shamir (2021) and Timor et al. (2023) in using Eq. (14) to denote Eq. (13), with a slight abuse of notation, even when \(\hat{\mathcal{L}}^{\text{adv}}\) is not differentiable but only locally Lipschitz. Note that the ReLU function is not differentiable at \(0\). Practical implementations of gradient methods define the derivative \(\sigma^{\prime}(0)\) to be some constant in \([0,1]\). In this work we assume for convenience that \(\sigma^{\prime}(0)=0\).
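In practice, the flow in Eq. (14) is approximated by gradient descent with a small step size; the toy Euler discretization below illustrates this. The quadratic loss is a placeholder for the paper's adversarial objective, chosen only so that the limit point is obvious.

```python
import numpy as np

def loss_grad(W):                 # gradient of the toy loss L(W) = 0.5 ||W - 1||^2
    return W - 1.0

W, eta = np.zeros(3), 1e-3        # small eta approximates the continuous-time flow
for _ in range(10_000):
    W -= eta * loss_grad(W)       # Euler step for dW/dt = -grad L(W)
print(W)                          # approaches the flow's limit point (all ones)
```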
**Assumption 14** (Existence of separability of adversarial examples during training): _There exists a time \(t_{0}\) such that \(N\hat{\mathcal{L}}^{\text{adv}}(t_{0})\leq\exp(-\mathfrak{f}(b_{\mathfrak{f}}))=\ell(b_{\mathfrak{f}})\)._
This assumption is a generalization of the separability condition in (Lyu and Li, 2020; Lv and Zhu, 2022). Adversarial training can typically achieve this separability in practice, _i.e._, the model can fit adversarial examples of the training dataset, which makes the above assumption a reasonable one.
**Lemma 15** (Convergence to the direction of a KKT point): _Consider the hypothesis class \(\mathfrak{F}\) in Eq. (6). Under Assumptions 6 and 14, the limit point of normalized weights \(\{\underline{\mathbf{W}}(t)/\left\|\underline{\mathbf{W}}(t)\right\|_{ \mathbb{F}}:t\geq 0\}\) of the gradient flow for Eq. (14), i.e., the empirical adversarial risk with scale invariant adversarial perturbations \(\underline{\boldsymbol{\delta}}_{i}(\underline{\mathbf{W}})\), is aligned with the direction of a KKT point of the minimization problem:_
\[\min_{\underline{\mathbf{W}}}\frac{1}{2}\left\|\underline{\mathbf{W}}\right\| _{\mathbb{F}}^{2},\qquad\text{s.t. }y_{i}f(\underline{\mathbf{x}}_{i}+\underline{ \boldsymbol{\delta}}_{i}(\underline{\mathbf{W}});\underline{\mathbf{W}})\geq 1, \forall i=1,\cdots,N. \tag{15}\]
**Theorem 16** (Implicit low-rankness induced by GF for t-NNs): _Suppose that there is an example \(\underline{\mathbf{x}}_{i}\) satisfying \(\left\|\underline{\mathbf{x}}_{i}\right\|_{\mathrm{F}}\leq 1\) in the training set \(S=\{(\underline{\mathbf{x}}_{i},y_{i})\}_{i=1}^{N}\subseteq\mathbb{R}^{d\times 1\times\mathsf{c}}\times\{\pm 1\}\). Assume that there is a \((J+1)\)-layer (\(J\geq 2\)) ReLU t-NN, denoted by \(g(\underline{\mathbf{x}};\underline{\mathbf{V}})\) with parameterization \(\underline{\mathbf{V}}=(\underline{\mathbf{V}}^{(1)},\cdots,\underline{\mathbf{V}}^{(J)},\mathbf{v})\), satisfying the following conditions:_
1. _(c.1) the dimensionality of the weight tensor_ \(\underline{\mathbf{V}}^{(j)}\in\mathbb{R}^{m_{j}\times m_{j-1}\times\mathsf{c}}\) _of the_ \(j\)_-th t-product layer satisfies_ \(m_{j}\geq 2\)_,_ \(\forall j=1,\cdots,J\)_;_
2. _(c.2) there is a constant_ \(B>0\)_, such that the Euclidean norms of the weights_ \(\underline{\mathbf{V}}=(\underline{\mathbf{V}}^{(1)},\cdots,\underline{\mathbf{V}}^{(J)},\mathbf{v})\) _satisfy_ \(\left\|\underline{\mathbf{V}}^{(j)}\right\|_{\mathrm{F}}\leq B\) _for all_ \(j=1,\cdots,J\) _and_ \(\left\|\mathbf{v}\right\|_{2}\leq B\)_; and_
3. _(c.3) for all_ \(i\in\{1,\cdots,N\}\)_, we have_ \(y_{i}g(\underline{\mathbf{x}}_{i}+\underline{\boldsymbol{\delta}}_{i}(\underline{\mathbf{V}});\underline{\mathbf{V}})\geq 1\)_, i.e., all the perturbed training samples are correctly classified._
_Then, we consider the class of over-parameterized t-NNs \(\mathfrak{F}=\{f(\underline{\mathbf{x}};\underline{\mathbf{W}})\}\) defined in Eq. (5) which satisfies_
1. _(c.4) the number_ \(L\) _of t-product layers is much greater than_ \(J\)_, and_
2. _(c.5) the dimensionality of the weight_ \(\underline{\mathbf{W}}^{(l)}\in\mathbb{R}^{d_{l}\times d_{l-1}\times\mathsf{c}}\) _satisfies_ \(d_{l}\gg\max_{j\leq J}\{m_{j}\}\) _for all_ \(l=1,\cdots,L\)_._
_Let \(\underline{\mathbf{W}}^{*}=(\underline{\mathbf{W}}^{*(1)},\cdots,\underline{\mathbf{W}}^{*(L)},\mathbf{w}^{*})\) be a global optimum of Problem (15). Namely, \(\underline{\mathbf{W}}^{*}\) parameterizes a minimum-norm t-NN \(f(\underline{\mathbf{x}};\underline{\mathbf{W}}^{*})\in\mathfrak{F}\) that labels the perturbed datasets correctly with margin 1 under homogeneous adversarial perturbation. Then, we have_
\[\frac{L}{\sum_{l=1}^{L}\left(r_{\text{stb}}(\widetilde{\mathbf{W}}_{M}^{*(l)})\right)^{-1/2}}\leq\frac{1}{\left(1+\frac{1}{L}\right)\left(\frac{1}{B}\right)^{\frac{J+1}{L+1}}\sqrt{\frac{L+1}{(J+1)+\mathsf{c}m_{J}(L-J)}}-\frac{1}{L}} \tag{16}\]
_where \(\widetilde{\mathbf{W}}_{M}^{*(l)}\) denotes the \(M\)-block-diagonal matrix of weight tensor \(\underline{\mathbf{W}}^{*(l)}\) for all \(l=1,\cdots,L\)._
By the above theorem, when \(L\) is sufficiently large, the harmonic mean of the square roots of the stable ranks of \(\widetilde{\mathbf{W}}_{M}^{*(l)}\), i.e., the \(M\)-block-diagonal matrices of the weight tensors \(\underline{\mathbf{W}}^{*(l)}\), is at most roughly \(\sqrt{\mathsf{c}m_{J}}\), which is significantly smaller than the square root of the dimensionality \(\sqrt{\min\{\mathsf{c}d_{l},\mathsf{c}d_{l-1}\}}\) according to condition \((c.5)\) in Theorem 16. Thus, \(f(\underline{\mathbf{x}};\underline{\mathbf{W}}^{*})\) has a nearly low-rank parameterization in the transformed domain. In our case, the weights \(\underline{\mathbf{W}}(t)\) generated by GF tend to grow unboundedly in norm and to converge in direction to a transformed low-rank solution. Moreover, note that the ratio between the spectral and the Frobenius norms is invariant to scaling, and hence it suggests that after a sufficiently long time, GF tends to reach a network with transformed low-rank weight tensors. There is also experimental support for this phenomenon (Langenberg et al., 2019): adversarial training leads to low-rank weights.
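The left-hand side of Eq. (16) is easy to compute for any concrete network. The sketch below does so with random placeholder weights (trained adversarial weights would be needed to actually observe the implicit bias); the helper names are ours.

```python
import numpy as np

def stable_rank(A):
    s = np.linalg.svd(A, compute_uv=False)
    return (s ** 2).sum() / s[0] ** 2

def bdiag(T, M):                                   # Definition 2
    Th = np.einsum('mnk,ck->mnc', T, M)
    c = Th.shape[2]
    zero = np.zeros(Th.shape[:2])
    return np.block([[Th[:, :, i] if i == j else zero for j in range(c)]
                     for i in range(c)])

c, dims = 3, [6, 8, 8, 4]
M, _ = np.linalg.qr(np.random.randn(c, c))
Ws = [np.random.randn(dims[l + 1], dims[l], c) for l in range(len(dims) - 1)]
inv_sqrt = [1.0 / np.sqrt(stable_rank(bdiag(W, M))) for W in Ws]
print(len(Ws) / sum(inv_sqrt))   # harmonic mean of sqrt stable ranks, LHS of Eq. (16)
```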
### Adversarial Generalization Bounds for Approximate Low-Tubal-Rank t-NNs
Theorem 16 shows that for highly over-parameterized adversarial training with GF, the well-trained t-NNs have approximately transformed low-rank parameters under certain conditions. In this section, we characterize the adversarial generalization behavior of approximately transformed low-rank\({}^{6}\) parameterized t-NNs.
Footnote 6: We use the tensor tubal rank as a measure of low-rankness in the transformed domain for notation simplicity. One can also consider the average rank (Wang et al., 2021) or multi-rank (Wang et al., 2021) for more refined bounds with quite similar techniques.
An approximately low-tubal-rank parameterized t-NN \(f\) can be compressed with an exactly low-tubal-rank parameterized t-NN \(g\in\mathfrak{F}_{\mathbf{r}}\), and we can ensure that the compressed function \(g\) has a small distance to the original function \(f\) in parameter space. _Then, can a small parameter distance between \(f\) and \(g\) also indicate a small difference in their adversarial generalization behaviors?_ To answer this question, we first define the \((\delta,\mathbf{r})\)-approximate low-tubal-rank parameterized functions.
**Definition 17** (\((\delta,\mathbf{r})\)-approximate low-tubal-rank parameterization): _A t-NN \(f(\underline{\mathbf{x}};\underline{\mathbf{W}})\in\mathfrak{F}\) with weights \(\underline{\mathbf{W}}=(\mathbf{w},\underline{\mathbf{W}}^{(1)},\cdots,\underline{\mathbf{W}}^{(L)})\) is said to satisfy the \((\delta,\mathbf{r})\)-approximate low-tubal-rank parameterization of tolerance \(\delta>0\) and rank \(\mathbf{r}=(r_{1},\cdots,r_{L})^{\top}\in\mathbb{N}^{L}\), if there is a t-NN \(g(\underline{\mathbf{x}};\underline{\mathbf{W}}_{\mathbf{r}})\in\mathfrak{F}_{\mathbf{r}}\) whose weights \(\underline{\mathbf{W}}_{\mathbf{r}}=(\mathbf{w},\underline{\mathbf{W}}_{r_{1}}^{(1)},\cdots,\underline{\mathbf{W}}_{r_{L}}^{(L)})\) satisfy \(\left\|\underline{\mathbf{W}}_{r_{l}}^{(l)}-\underline{\mathbf{W}}^{(l)}\right\|_{\mathrm{F}}\leq\delta,\forall l\in\{1,\cdots,L\}\)._
Consider the adversarially trained t-NNs with approximately low-tubal-rank weights in the set
\[\mathfrak{F}_{\delta,\mathbf{r}}:=\{f\in\mathfrak{F}\mid f\text{ satisfies the }(\delta,\mathbf{r})\text{-approximate low-tubal-rank parameterization}\}\,. \tag{17}\]
Our idea of bounding the adversarial generalization gap for any \(f\in\mathfrak{F}_{\delta,\mathbf{r}}\) in terms of its low-tubal-rank compression \(g\in\mathfrak{F}_{\mathbf{r}}\) is motivated by the work on compressed bounds for non-compressed but compressible models (Suzuki et al., 2020), which was originally developed for generalization analysis of standard training.
Under Assumption 6, we define the adversarial version of \(\mathfrak{F}_{\delta,\mathbf{r}}\) as \(\mathfrak{F}_{\delta,\mathbf{r}}^{\mathrm{adv}}:=\{\tilde{f}:(\underline{\mathbf{x}},y)\mapsto\min_{R_{\mathsf{a}}(\underline{\mathbf{x}}^{\prime}-\underline{\mathbf{x}})\leq\xi}yf(\underline{\mathbf{x}}^{\prime})\mid f\in\mathfrak{F}_{\delta,\mathbf{r}}\}\). To analyze the adversarial generalization gap of a function \(f\) in \(\mathfrak{F}_{\delta,\mathbf{r}}\) through \(g\in\mathfrak{F}_{\mathbf{r}}\), we consider their adversarial counterparts \(\tilde{f}\in\mathfrak{F}_{\delta,\mathbf{r}}^{\mathrm{adv}}\) and \(\tilde{g}\in\mathfrak{F}_{\mathbf{r}}^{\mathrm{adv}}\), where \(\mathfrak{F}_{\mathbf{r}}^{\mathrm{adv}}\) is defined as \(\mathfrak{F}_{\mathbf{r}}^{\mathrm{adv}}:=\{\tilde{f}:(\underline{\mathbf{x}},y)\mapsto\min_{R_{\mathsf{a}}(\underline{\mathbf{x}}^{\prime}-\underline{\mathbf{x}})\leq\xi}yf(\underline{\mathbf{x}}^{\prime})\mid f\in\mathfrak{F}_{\mathbf{r}}\}\). The Minkowski difference of the sets \(\mathfrak{F}_{\delta,\mathbf{r}}^{\mathrm{adv}}\) and \(\mathfrak{F}_{\mathbf{r}}^{\mathrm{adv}}\) is given by \(\mathfrak{F}_{\delta,\mathbf{r}}^{\mathrm{adv}}-\mathfrak{F}_{\mathbf{r}}^{\mathrm{adv}}:=\{\tilde{f}-\tilde{g}\mid\tilde{f}\in\mathfrak{F}_{\delta,\mathbf{r}}^{\mathrm{adv}},\,\tilde{g}\in\mathfrak{F}_{\mathbf{r}}^{\mathrm{adv}}\}\).
The empirical \(L_{2}\)-norm of a t-NN \(h\in\mathfrak{F}\) on a training sample \(S=(\underline{\mathbf{x}}_{i},y_{i})_{i=1}^{N}\) is defined as \(\left\|h\right\|_{S}:=\sqrt{N^{-1}\sum_{i=1}^{N}h^{2}(\underline{\mathbf{x}}_{i},y_{i})}\), and the population \(L_{2}\)-norm is \(\left\|h\right\|_{L_{2}}:=\sqrt{\mathbb{E}_{P(\underline{\mathbf{x}},y)}[h^{2}(\underline{\mathbf{x}},y)]}\).
In the coming Theorem 19, we show that a small distance in the parameter space of \(f\) and \(g\) results in a small empirical \(L_{2}\)-distance in the adversarial output space. Specifically, for any \(f(\underline{\mathbf{x}};\underline{\mathbf{W}})\in\mathfrak{F}_{\delta,\mathbf{r}}\) with compressed version \(g(\underline{\mathbf{x}};\underline{\mathbf{W}}_{\mathbf{r}})\), the adversarial empirical \(L_{2}\)-norm \(\left\|\tilde{f}(\underline{\mathbf{x}};\underline{\mathbf{W}})-\tilde{g}(\underline{\mathbf{x}};\underline{\mathbf{W}}_{\mathbf{r}})\right\|_{S}\) can be upper bounded by a small constant \(\hat{\mathfrak{r}}>0\) that is linear in \(\delta\). Define the local Rademacher complexity of \(\mathfrak{F}_{\delta,\mathbf{r}}^{\text{adv}}-\mathfrak{F}_{\mathbf{r}}^{\text{adv}}\) of radius \(\mathfrak{r}>0\) (in population \(L_{2}\)-norm) as \(\hat{R}_{\mathfrak{r}}(\mathfrak{F}_{\delta,\mathbf{r}}^{\text{adv}}-\mathfrak{F}_{\mathbf{r}}^{\text{adv}}):=\bar{R}_{N}(\{h\in\mathfrak{F}_{\delta,\mathbf{r}}^{\text{adv}}-\mathfrak{F}_{\mathbf{r}}^{\text{adv}}\mid\left\|h\right\|_{L_{2}}\leq\mathfrak{r}\}),\) where \(\bar{R}_{N}(\mathcal{H})\) denotes the average Rademacher complexity of a function class \(\mathcal{H}\) (Bartlett et al., 2002). We assume the local Rademacher complexity of \(\mathfrak{F}_{\delta,\mathbf{r}}^{\text{adv}}-\mathfrak{F}_{\mathbf{r}}^{\text{adv}}\) can be upper bounded by a concave function of \(\mathfrak{r}\), which is a common assumption for analyzing localized Rademacher complexity (Bartlett et al., 2002; Suzuki et al., 2020).
**Assumption 18**: _Suppose there exists a function \(\phi:[0,\infty)\rightarrow[0,\infty)\) such that \(\hat{R}_{\mathfrak{r}}(\mathfrak{F}_{\delta,\mathbf{r}}^{\text{adv}}-\mathfrak{F}_{\mathbf{r}}^{\text{adv}})\leq\phi(\mathfrak{r})\ \ \text{and}\ \ \phi(2\mathfrak{r})\leq 2\phi(\mathfrak{r}),\ (\forall\mathfrak{r}>0).\)_
We further define \(\mathfrak{r}_{*}=\mathfrak{r}_{*}(t):=\inf\left\{\mathfrak{r}>0\ \big{|}16B_{f}\mathfrak{r}^{-2}\phi(\mathfrak{r})+B_{f}\mathfrak{r}^{-1} \sqrt{2t/N}+2tB_{f}^{2}\mathfrak{r}^{-2}/N\leq 1/2\right\}\) for any \(t>0\), to bound the ratio of the empirical and population \(L_{2}\)-norm of any function \(h\in\mathfrak{F}_{\delta,\mathbf{r}}^{\text{adv}}-\mathfrak{F}_{\mathbf{r}}^{ \text{adv}}\) by \(\left\|h\right\|_{L_{2}}^{2}\leq 2(\left\|h\right\|_{S}^{2}+\mathfrak{r}_{*}^{2})\) with high probability via peeling argument (Steinwart and Christmann, 2008, Theorem 7.7).
Then, we establish an adversarial generalization bound for approximately low-tubal-rank t-NNs.
**Theorem 19** (**Adversarial generalization bound for general approximately low-tubal-rank t-NNs**): _For any \((\delta,\mathbf{r})\)-approximately low-tubal-rank parameterized \(f\in\mathfrak{F}_{\delta,\mathbf{r}}\) with adversarial proxy \(\tilde{f}\in\mathfrak{F}_{\delta,\mathbf{r}}^{\text{adv}}\), there is a \(g\in\mathfrak{F}_{\mathbf{r}}\) with adversarial proxy \(\tilde{g}\in\mathfrak{F}_{\mathbf{r}}^{\text{adv}}\) such that \(\left\|\tilde{f}-\tilde{g}\right\|_{S}\leq\delta B_{\tilde{f}}\sum_{l=1}^{L}B_{ l}^{-1}=:\hat{\mathfrak{r}}\). Let \(\dot{\mathfrak{r}}=\sqrt{2(\hat{\mathfrak{r}}^{2}+\mathfrak{r}_{*}^{2})}\). Then, under Assumptions 5, 6, 18, there exist constants \(C_{1},C_{2}>0\) such that_
\[\begin{split}\mathcal{L}^{\text{adv}}(f)-\hat{\mathcal{L}}^{ \text{adv}}(f)\leq&\underbrace{\frac{C_{1}L_{\ell}B_{\tilde{f}}}{ \sqrt{N}}\sqrt{\mathfrak{c}\sum_{l=1}^{L}r_{l}(d_{l-1}+d_{l})\log(9(L+1))}+B \sqrt{\frac{t}{2N}}}_{\text{main term}}\\ &+\underbrace{C_{2}\left(\Phi(\dot{\mathfrak{r}})+L_{\ell}\dot{ \mathfrak{r}}\sqrt{\frac{t}{N}}+\frac{tL_{\ell}B_{\tilde{f}}}{N}\right)}_{ \text{bias term}}\end{split} \tag{18}\]
_for all \(f\in\mathfrak{F}_{\delta,\mathbf{r}}\) with probability at least \(1-4e^{-t}\) for all \(t>0\), where \(\Phi(\mathfrak{r})\) is defined as_
\[\Phi(\mathfrak{r}):=\bar{R}_{N}\left(\left\{\ell\circ\tilde{f}-\ell\circ \tilde{g}\ \big{|}\ \tilde{f}\in\mathfrak{F}_{\delta,\mathbf{r}}^{\text{adv}},\tilde{g}\in \mathfrak{F}_{\mathbf{r}}^{\text{adv}},\ \left\|\tilde{f}-\tilde{g}\right\|_{L_{2}}\leq\mathfrak{r}\right\} \right).\]
_Here, \(\circ\) denotes the function composition operation._
The main term measures the complexity of the function class \(\mathfrak{F}_{\mathbf{r}}\) with exact low-tubal-rank parameterization in adversarial settings, which could be much smaller than that of \(\mathfrak{F}_{\delta,\mathbf{r}}\). The bias term represents a sample complexity to bridge the approximately low-tubal-rank parameterized \(\mathfrak{F}_{\delta,\mathbf{r}}\) and the exactly low-tubal-rank parameterized \(\mathfrak{F}_{\mathbf{r}}\). Typically we have \(\mathfrak{r}_{*}^{2}=o(1/\sqrt{N})\), and if we set \(\hat{\mathfrak{r}}=o_{p}(1)\), then the bias term can decay faster than the main term, which is \(O(1/\sqrt{N})\).
We further demonstrate how small the obtained bound can be in a special setting where the weights of t-product layers have singular values satisfying a polynomial decay in the transformed domain.
**Assumption 20**: _Consider the setting where any t-NN \(f(\underline{\mathbf{x}};\underline{\mathbf{W}})\in\mathfrak{F}_{\delta,\mathbf{r}}\) has tensor weights \(\underline{\mathbf{W}}^{(l)}\) \((l=1,\cdots,L)\) whose singular values in the transformed domain satisfy \(\sigma_{j}(M(\underline{\mathbf{W}}^{(l)})_{:,:,k})\leq V_{0}\cdot j^{-\alpha}\), where \(V_{0}>0\) is a constant and \(\sigma_{j}(\cdot)\) is the \(j\)-th largest singular value of a matrix._
In this setting, we can see that for any \(1\leq r_{l}\leq\min\{d_{l},d_{l-1}\}\), we can approximate \(\underline{\mathbf{W}}^{(l)}\) with its optimal tubal-rank-\(r_{l}\) approximation tensor \(\underline{\mathbf{W}}_{r_{l}}^{(l)}\) as \(\left\|\underline{\mathbf{W}}^{(l)}-\underline{\mathbf{W}}_{r_{l}}^{(l)} \right\|_{\mathbf{F}}=\left\|M(\underline{\mathbf{W}}^{(l)})-M(\underline{ \mathbf{W}}_{r_{l}}^{(l)})\right\|_{\mathbf{F}}\leq\sqrt{c/(2\alpha-1)}V_{0} (r_{l}-1)^{(1-2\alpha)/2},\) which can be much smaller than \(\left\|\underline{\mathbf{W}}_{r_{l}}^{(l)}\right\|_{\mathbf{F}}\) when \(\alpha>1/2\) is sufficiently large. Thus, we can always find an exactly low-tubal-rank parameterized \(g\in\mathfrak{F}_{\mathbf{r}}\) for any \(f\in\mathfrak{F}_{\delta,\mathbf{r}}\) with approximately low-tubal-rank weights, such that the distance between \(g\) and \(f\) in the parameter space is quite small. The following theorem shows that the small distance in parameter space also leads to small adversarial generalization gaps.
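A quick numeric check of this truncation bound, using the fact that per-slice SVD truncation in the transformed domain yields the optimal tubal-rank-\(r\) approximation; all sizes and constants below are invented.

```python
import numpy as np

c, d, V0, alpha, r = 4, 50, 1.0, 1.5, 6
j = np.arange(1, d + 1)
sigma = V0 * j ** (-alpha)                  # per-slice spectrum from Assumption 20
err = np.sqrt(c * (sigma[r:] ** 2).sum())   # F-norm error of the rank-r truncation
bound = np.sqrt(c / (2 * alpha - 1)) * V0 * (r - 1) ** ((1 - 2 * alpha) / 2)
print(err, bound, err <= bound)             # the bound holds (prints True)
```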
**Theorem 21**: _Under Assumptions 5, 6 and 20, if we let \(\hat{\mathfrak{r}}=V_{0}B_{\tilde{f}}\sum_{l=1}^{L}(r_{l}+1)^{-\alpha}B_{l}^{-1}\), then for any t-NN \(f\in\mathfrak{F}_{\delta,\mathbf{r}}\) there is always a function \(g\in\mathfrak{F}_{\mathbf{r}}\), whose t-product layer weights have tubal rank no greater than \(r_{l}\), that satisfies \(\left\|\tilde{f}-\tilde{g}\right\|_{S}\leq\hat{\mathfrak{r}}\). Then, there is a constant \(C_{\alpha}\) depending only on \(\alpha\) such that for any function \(f\in\mathfrak{F}_{\delta,\mathbf{r}}\), the following bound on its adversarial generalization gap holds for any \(t>0\):_
\[\mathcal{L}^{\text{adv}}(f)-\hat{\mathcal{L}}^{\text{adv}}(f) \tag{19}\] \[\leq C_{\alpha}L_{\ell}\bigg\{B_{\tilde{f}}E_{1}+\hat{\mathfrak{r}}\sqrt{E_{1}}+E_{2}^{\frac{2\alpha}{2\alpha+1}}\left(B_{\tilde{f}}^{\frac{2\alpha-1}{2\alpha+1}}+1\right)+\hat{\mathfrak{r}}^{\frac{2\alpha}{2\alpha+1}}\sqrt{E_{2}}+(\hat{\mathfrak{r}}+\frac{B}{L_{\ell}})\sqrt{\frac{t}{N}}+\frac{1+tB_{\tilde{f}}}{N}\bigg\}\]
_with probability at least \(1-4e^{-t}\), where \(E_{1}=N^{-1}\mathsf{c}\sum_{l=1}^{L}r_{l}(d_{l}+d_{l-1})\log(9NLB_{\tilde{f}}/\sqrt{\mathsf{c}})\) and \(E_{2}=N^{-1}\mathsf{c}\sum_{l=1}^{L}\left(LV_{0}B_{\tilde{f}}B_{l}^{-1}\right)^{1/\alpha}(d_{l}+d_{l-1})\log(9NLB_{\tilde{f}}/\sqrt{\mathsf{c}}).\)_
This indicates that if \(\alpha>1/2\) is large (in other words, each weight matrix is close to rank \(1\)), then we have a better generalization error bound. Note that the rank \(r_{l}\) can be arbitrarily chosen, and \(\hat{\mathfrak{r}}\) and \(E_{1}\) are in a trade-off relation. Hence, by selecting the rank appropriately so that this trade-off is balanced, we obtain the optimal upper bound as in the following corollary.
**Corollary 22**: _Under the same assumption to Theorem 21, if we choose the parameter \(\mathbf{r}\) of tubal ranks in \(\mathfrak{F}_{\mathbf{r}}\) by \(r_{l}=\min\{\lceil\left(LV_{0}B_{\tilde{f}}B_{l}^{-1}\right)^{1/\alpha}\rceil,d _{l},d_{l-1}\}\), then there is a constant \(C_{\alpha}\) only depending on \(\alpha\) such that for any function \(f\in\mathfrak{F}_{\delta,\mathbf{r}}\), its adversarial generalization gap_
\[\mathcal{L}^{\text{adv}}(f)-\hat{\mathcal{L}}^{\text{adv}}(f) \leq C_{\alpha}L_{\ell}\bigg{\{}B_{\tilde{f}}^{1-1/(2\alpha)}\sqrt{ \frac{\mathbf{c}\sum_{l=1}^{L}\left(LV_{0}B_{l}^{-1}\right)^{1/\alpha}(d_{l}+d _{l-1})\log(9NLB_{\tilde{f}}/\sqrt{\mathbf{c}})}{N}} \tag{20}\] \[\qquad\qquad+E_{2}^{\frac{2\alpha}{2\alpha+1}}\left(B_{\tilde{f}} ^{\frac{2\alpha-1}{2\alpha+1}}+1\right)+\sqrt{E_{2}}+\frac{B}{L_{\ell}}\sqrt{ \frac{t}{N}}+\frac{1+tB_{\tilde{f}}}{N}\bigg{\}}\]
_holds with probability at least \(1-4e^{-t}\) for any \(t>0\)._
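The rank choice in Corollary 22 is a one-line computation once the constants are fixed; every constant below is invented purely for illustration.

```python
import math

L, V0, B_f, alpha = 8, 1.0, 2.0, 1.5           # B_f plays the role of B_{f~}
dims = [64, 128, 128, 64, 32, 32, 16, 16, 8]   # d_0, ..., d_L
B = [1.5] * L                                  # per-layer norm bounds B_l
r = [min(math.ceil((L * V0 * B_f / B[l]) ** (1 / alpha)), dims[l + 1], dims[l])
     for l in range(L)]
print(r)                                       # the chosen tubal ranks r_1, ..., r_L
```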
Note that the bound depends, inside the square root, only linearly on the number of neurons of the t-product layers, i.e., \(O\left(\sqrt{\frac{\mathsf{c}\sum_{l}(d_{l}+d_{l-1})}{N}}\right)\), while Theorem 9 shows a dependency on the total number of parameters, i.e., \(O\left(\sqrt{\frac{\mathsf{c}\sum_{l}d_{l}d_{l-1}}{N}}\right)\). This result implies that the low-tubal-rank parameterization of t-NNs might achieve much better adversarial generalization for multi-channel data learning.
## 5 Related Works
**T-SVD-based data and function representation:** What most significantly distinguishes t-SVD-based data representation from classical low-rank decomposition methods is the low-rankness defined in the transformed domain. This low-rankness is vital for modeling real multi-channel data with both smoothness and low-rankness (Wang et al., 2020; Liu et al., 2020; Wang et al., 2021). Utilized in t-product layers in DNNs (Newman et al., 2018; Malik et al., 2021; Wu et al., 2022), t-SVD has also been a workhorse for function representation and achieves impressive empirical performance. In terms of theoretical studies, extensive past research on t-SVD-based signal processing models has been conducted (Zhang and Ng, 2022; Liu et al., 2020; Qiu et al., 2022; Wang et al., 2021). However, t-SVD-based learning models have never been theoretically scrutinized. To the best of our knowledge, this paper is the first theoretical analysis of t-SVD-based learning models.
**Theoretical analysis methods:** The related theoretical analysis methods are norm-based generalization analysis (Neyshabur et al., 2015) and implicit regularization of gradient-descent-based learning (Vardi and Shamir, 2021). Norm-based generalization analysis is important in the standard generalization analysis of DNNs (Golowich et al., 2018), of compressed models (Arora et al., 2018), and of non-compressed models (Suzuki et al., 2020). Note that in compressed models, the standard generalization bound may be driven by intrinsic parameters (Arora et al., 2018). It also plays an important role in adversarial generalization analysis (Yin et al., 2019; Awasthi et al., 2020; Xiao et al., 2022). All the aforementioned works are for models using matrix products. Instead, we adopt norm-based tools in the standard and adversarial generalization analysis of t-NNs. For implicit regularization of gradient-descent-based learning, extensive past research has been conducted on the implicit bias of GF for standard and adversarial training of homogeneous networks built on matrix-product layers (Lyu and Li, 2020; Ji and Telgarsky, 2020). For different networks, Gunasekar et al. (2018) and Yun et al. (2021) proved biases towards sparse linear predictors for linear diagonal networks and linear convolution networks, respectively, whereas Timor et al. (2023) derived the bias towards low-rank solutions for ReLU networks with sufficient depth. Our analysis extends these results to adversarial training of t-NNs with scale-invariant adversarial perturbations, and demonstrates that GF for over-parameterized t-NNs with ReLU activations produces nearly transformed low-rank weights.
## 6 Concluding Remarks
This paper explores how t-SVD influences the learning behavior of t-NNs. We first derive the upper bounds of generalization gaps of both standard and adversarially trained t-NNs. For more efficient adversarial learning, t-NNs are then compressed by imposing the transformed low-rank structure
into the weight tensors, which achieves sharper bounds on the adversarial generalization gap. Although the weights of well-trained models are rarely exactly transformed low-rank, our analysis further shows that for adversarial training with gradient flow in highly over-parameterized settings, the learnt t-NNs tend to have approximately transformed low-rank weights. We also establish adversarial generalization bounds for t-NNs with approximately transformed low-rank weights, both in general cases and in a special case where the weight tensors have certain patterns of spectral decay in the transformed domain.
|
2307.16217 | Text Analysis Using Deep Neural Networks in Digital Humanities and
Information Science | Combining computational technologies and humanities is an ongoing effort
aimed at making resources such as texts, images, audio, video, and other
artifacts digitally available, searchable, and analyzable. In recent years,
deep neural networks (DNN) dominate the field of automatic text analysis and
natural language processing (NLP), in some cases presenting a super-human
performance. DNNs are the state-of-the-art machine learning algorithms solving
many NLP tasks that are relevant for Digital Humanities (DH) research, such as
spell checking, language detection, entity extraction, author detection,
question answering, and other tasks. These supervised algorithms learn patterns
from a large number of "right" and "wrong" examples and apply them to new
examples. However, using DNNs for analyzing the text resources in DH research
presents two main challenges: (un)availability of training data and a need for
domain adaptation. This paper explores these challenges by analyzing multiple
use-cases of DH studies in recent literature and their possible solutions and
lays out a practical decision model for DH experts for when and how to choose
the appropriate deep learning approaches for their research. Moreover, in this
paper, we aim to raise awareness of the benefits of utilizing deep learning
models in the DH community. | Omri Suissa, Avshalom Elmalech, Maayan Zhitomirsky-Geffet | 2023-07-30T12:54:39Z | http://arxiv.org/abs/2307.16217v1 | # Text Analysis Using Deep Neural Networks in Digital Humanities and Information Science
###### Abstract
Combining computational technologies and humanities is an ongoing effort aimed at making resources such as texts, images, audio, video, and other artifacts digitally available, searchable, and analyzable. In recent years, deep neural networks (DNN) dominate the field of automatic text analysis and natural language processing (NLP), in some cases presenting a super-human performance. DNNs are the state-of-the-art machine learning algorithms solving many NLP tasks that are relevant for Digital Humanities (DH) research, such as spell checking, language detection, entity extraction, author detection, question answering, and other tasks. These supervised algorithms learn patterns from a large number of "right" and "wrong" examples and apply them to new examples. However, using DNNs for analyzing the text resources in DH research presents two main challenges: (un)availability of training data and a need for domain adaptation. This paper explores these challenges by analyzing multiple use-cases of DH studies in recent literature and their possible solutions and lays out a practical decision model for DH experts for when and how to choose the appropriate deep learning approaches for their research. Moreover, in this paper, we aim to raise awareness of the benefits of utilizing deep learning models in the DH community.
## Introduction
The research space of digital humanities (DH) applies various methods of computational data analysis to conduct multi-disciplinary research in archaeology (Eiteljorg, 2004; Forte, 2015), history (Thomas, 2004; Zaagsma, 2013), lexicography (Woolridge, 2004), linguistics (Hajic, 2004), literary studies (Rommel, 2004), performing arts (Saltz, 2004), philosophy (Ess, 2004), music (Burgoyne, Fujinaga, & Downie, 2015; Wang, Luo, Wang, & Xing, 2016), religion (Hutchings, 2015) and other fields. The scope of DH continues to expand with the development of new information technologies, and its boundaries remain amorphous (McCarty, 2013). Therefore, DH's definition is unclear and may have different interpretations (Ramsay, 2016; Poole, 2017). Library and Information Science (LIS) and DH research have a similar and overlapping scope and interfaces (Posner, 2013; Koltay, 2016), to the
extent that some propose to integrate and combine both research fields (Sula, 2013; Robinson, Priego, & Bawden, 2015). DH and LIS academic units are often located together (Sula, 2013), and share a significant volume of common topics, such as metadata, linked data and ontologies, information retrieval, collection classification, management, archiving and curation, bibliographic catalogue research, digitization of printed or physical artifacts, preservation of cultural heritage, data mining and visualization, and bibliometrics (Svensson, 2010; Russell 2011; Gold, 2012; Warwick 2012; Sula, 2012; Beaudoin, & Buchanan, 2012; Sula 2013; Drucker, Kim, Salehian, Bushong, 2014; Koltay, 2016; Gold, & Klein 2016). However, regardless of the definition or research scope, many (if not most) of the research in DH/LIS focuses on textual resources, recorded information, and documents (Robinson et al., 2015; Poole, 2017). Therefore, this paper argues that a deep understanding of text analysis methods is a fundamental skill that future (and present) DH/LIS experts must acquire.
Supervised deep neural networks (deep learning) are a subset of machine learning algorithms considered to be the state-of-the-art approach for many NLP tasks, such as entity recognition (Li, Sun, Han, & Li, 2020), machine translation (Yang, Wang, & Chu, 2020), part-of-speech tagging and other tasks (Collobert & Weston, 2008) from which many DH/LIS text analysis research projects can benefit. Therefore, this paper aims to raise the awareness of DH and LIS researchers of state-of-the-art text analysis (NLP using deep neural networks) approaches and techniques. This is not the first attempt to make NLP technologies accessible or highlight the benefits of NLP to the DH/LIS research community (Biemann, Crane, Fellbaum, & Mehler, 2014; Kuhn, 2019; Hinrichs, Hinrichs, Kubler, & Trippel, 2019; McGillivray, Poibeau, & Ruiz Fabo, 2020). However, this paper argues that in addition to bridging between the NLP community and the DH/LIS research community, the DH/LIS research community should cultivate experts with a deep understanding of the technological space, experts that are capable of customizing and developing the technology themselves. Use of "off the shelf" tools and algorithms is no longer sustainable (Kuhn, 2019); the future DH expert must be comfortable using and adapting state-of-the-art NLP methodologies and technologies to the DH-specific tasks. To the best of our knowledge, this is the first attempt to highlight the challenges and analyze the potential solutions of the common usage of deep neural networks for text analysis in the DH/LIS space.
DNN models are often developed by computer scientists and trained, tested, and optimized for generic, open-domain tasks or by commercial enterprises for modern texts (Krapivin, Autaeu, & Marchese, 2009; Rajpurkar, Zhang, Lopyrev, & Liang, 2016). However, applying these DNN models for DH/LIS tasks and textual resources is not straightforward and requires further investigation. This paper presents the practical challenges that DH/LIS experts may encounter when applying DNN models in their research by examining multiple use cases presented in current literature, alongside an overview of the possible solutions, including deep learning technology. Although there might be other methodological challenges (Kuhn, 2019), this paper focuses on the two main practical challenges faced when applying deep learning for almost every DH research:
(1) Training data (un)availability - DH text resources are often domain-specific and niche, and contain a relatively small number of training examples; thus, there is not enough data for the DNN learning process to converge. Even when there is a large DH text corpus, there are no balanced ground truth labeled datasets (i.e., datasets with the distribution of "right" and "wrong" examples representative of the corpus) from which the DNN can learn (McGillivray et al., 2020), and changes or adaptations in the network architecture are required in order to achieve high accuracy for such datasets (Hellrich and Hahn, 2016).
(2) Domain adaptation - in many tasks considered "common" in NLP, the DH interpretation of the task is different from the standard interpretation. Moreover, DH text resources may need to be preprocessed before serving as input to DNNs, due to "noisy" data (biased, contains errors or missing labels or data (Hall, 2020; Prebor et al., 2018)) or non-standard data structure, such as mixed data formats (combining unstructured text, semi-structured and structured data in the same resource). In many cases, these resources are unsuitable for serving as an input into DNN models, or if they are used as-is, the models do not achieve maximum accuracy.
These challenges have unique implications for the utilization of DNNs with DH/LIS resources and tasks and, in various cases, may require different solutions. As a result of this study, a decision model for choosing the appropriate machine-learning approach for DH/LIS research is presented as a practical guideline for experts, and the topics that digital humanists should master are outlined.
### Digital Humanities and Automatic Text Analysis
Natural Language Processing (NLP) is a research area that explores how computational techniques (algorithms) can be used to understand and transform natural language text into structured data and knowledge (Young et al., 2018; Chowdhary, 2020). Until a few years ago, the state-of-the-art techniques that addressed supervised natural language processing challenges were based on a mix of machine learning algorithms. NLP tasks such as text classifications, entity recognition, machine translation, and part-of-speech tagging were solved using various classic supervised machine learning algorithms, such as Support Vector Machine (SVM), Hidden Markov Model (HMM), decision trees, k-nearest neighbors (KNN), and Naive Bayes (Zhou and Su, 2002; Liu et al., 2010; Vijayan et al., 2017). Basically, these algorithms apply a manually selected set of characteristic features to a given task and corpus, and a labeled dataset with "right" and "wrong" examples for training the optimal classifier. Given a new example of the same type, this classifier will be able to automatically predict whether or not this example belongs to the predefined category (e.g., whether a given sentence has a positive sentiment or not).
However, in many cases, it is not easy to decide what features should be used. For example, if a researcher wishes to learn to classify a text's author from the Middle Ages, she will need to use the
features that represent the unique writing styles that distinguish the authors. Unfortunately, it is not easy to describe these features in terms of textual elements. Deep learning solves this central problem by automatically learning representations of features based on examples instead of using explicit predefined features (Deng & Liu, 2018). Deep learning (DL) is a sub-field of machine learning that draws its roots from the neurocognition field (Bengio, Goodfellow, & Courville, 2017). The DL approach uses deep neural network (DNN) models for solving a variety of Artificial Intelligence tasks. The technical details of various DNN models and techniques appear in Appendix I.
DH researchers use NLP algorithms for DH-specific tasks in various domains. For example, Niculae, Zampieri, Dinu, and Ciobanu (2014) used NLP techniques to automatically date a text corpus. They developed a classifier for ranking temporal texts and dating of texts using a machine learning approach based on logistic regression on three historical corpora: the corpus of Late Modern English texts (de Smet, 2005), a Portuguese historical corpus (Zampieri & Becker, 2013) and a Romanian historical corpus (Ciobanu, Dinu, Dinu, Niculae, & Sulea, 2013). To construct social networks among literary characters and historical figures, Elson, Dames, and McKeown (2010) applied "off-the-shelf" machine learning tools for natural language processing and text-based rules on 60 nineteenth-century British novels. Zhitomirsky-Geffet and Prebor (2019) used lexical patterns for Jewish sages disambiguation in the Mishna, and then applied several machine learning methods based on Habernal and Gurevych's (2017) approach for the co-occurrence of sages and pattern-based rules for specific inter-relationship identification in order to formulate a Jewish sages social interactions network. In paleography, the study of historical writing systems and the deciphering and dating of historical manuscripts, Cilia, De Stefano, Fontanella, Marrocco, Molinara, and Freca (2020) utilized MS-COCO (Lin, Maire, Belongie, Hays, Perona, Ramanan, & Zitnick, 2014), a generic corpus of images, and a domain-specific corpus to train DNN models and design a pipeline for medieval writer identification. To predict migration and location of manuscripts, Prebor, Zhitomirsky-Geffet and Miller (2020a, 2020b) devised lexical patterns for disambiguation of named entities (dates and places) in the corpus of the Department of Manuscripts and the Institute of Microfilmed Hebrew Manuscripts in the National Library of Israel. Next, the authors trained a CART machine learning classifier (Classification and regression tree based on Decision Tree learning) (Rokach and Maimon, 2015) to predict the places of manuscripts that were often absent in the corpus. For ancient languages analysis, a study (Dereza, 2018) compared accuracy for lemmatization for early Irish data using a rule-based approach and DNN models, and proved the advantages of using DNN on such a historical language - even with limited data. For historical network analysis, Finegold, Otis, Shalizi, Shore, Wang, and Warren (2016) used named entity recognition tools (Finkel, Grenager, & Manning, 2005; Alias-i, 2008) with manual rules on the Oxford Dictionary of National Biography and then applied a regression method, namely Poisson Graphical Lasso (Yang, Ravikumar, Allen & Liu, 2013) to find correlations between entities (nodes). Nevertheless, as demonstrated by the examples above, although
there is a "computational turn" (Berry, 2011) in the DH research and methodologies, state-of-the-art computational NLP algorithms, like deep neural networks, are still rarely used within the core research area of DH (Kuhn, 2019).
To estimate the potential of deep learning in DH, a comparison was performed with a similar field - Bioinformatics. These fields are comparable since both are characterized by their inter-disciplinarity and because Bioinformatics thrives on the application of computational analysis for exploring and investigating information repositories in a chosen knowledge domain (Ewens and Grant, 2006). A list of leading journals was compiled in each field and searched for articles with "deep neural network" and "machine learning" keywords. For DH, twelve journals were selected, based on Spinaci, Colavizza, & Peroni (2019), all in English and ranked as 1 (exclusively DH). For Bioinformatics, twelve journals were selected based on Google Scholar's top publication list1. The two lists of the journals appear in Appendix III.
Footnote 1: [https://scholar.google.com/citations?view_op=top_venues&hl=en&vq=bio_bioinformatics](https://scholar.google.com/citations?view_op=top_venues&hl=en&vq=bio_bioinformatics)
The comparison was conducted on the articles published in the above journals over the past three years and measured the following: 1) the percentage of articles with each of the two keywords in the selected journals in each field, to ascertain the usage of machine learning (ML) in general vs. deep learning (DL) in particular, in each field; and 2) the percentage of articles mentioning deep learning out of the machine learning articles in each field. As can be observed from Figure 1, in the DH field, only 21% of the articles discussing "machine learning" also discussed "deep learning"; while in Bioinformatics, 52% of the articles discussing "machine learning" also discussed "deep learning". Moreover, in the DH field, only 3.8% of the articles mentioned "deep learning", while in Bioinformatics, 19.5% of the articles mentioned "deep learning" - five times higher. In addition, in the DH field, 18% of the articles discussed "machine learning", while in Bioinformatics, 37% of the articles discussed "machine learning" - only two times higher. These results indicate that the DH field "lags behind" when it comes to using machine learning and especially deep learning state-of-the-art models.
The next section provides an in-depth analysis of challenges and potential solutions for using DNN in DH/LIS, supported by multiple use-case studies from the recent DH literature. The analysis is divided into two main sections dealing with two primary challenges in applying deep learning to DH research: training data (un)availability and domain adaptation.
## Challenges when Using Deep Learning for Digital Humanities Research
### Training Data (Un)availability
Computer scientists often work on generic supervised text analysis tasks with open-domain or modern datasets. Kaggle2, the machine learning community, hosts many of these datasets. For example, the IMDb dataset contains a short description of a movie and its review score, allowing research on sentiment analysis [16]; a question answering system can be developed using the Stanford Question Answering Dataset [17]; and SPAM filtering can be developed using a dedicated dataset [18]. Unfortunately, the DH community has not (as yet) produced large annotated open datasets for researchers (although there are a few in niche areas like [19, 14]). The lack of annotated data is a challenge for both classical machine learning and deep learning supervised algorithms [15]. However, supervised deep learning algorithms require significantly more data than
Figure 1: Deep neural networks and machine learning articles in DH/LIS vs. Bioinformatics.
machine learning algorithms, making it a critical practical obstacle for DH researchers. This is one reason that even when DH/LIS researchers use deep learning, they often use unsupervised algorithms that do not require training data and are limited to specific tasks (Moreno-Ortiz, 2017). This section investigates some of the methods that DH researchers can apply to overcome this challenge.
#### Training Dataset Generation by Humans
Humans are the best alternative for dataset generation due to their domain knowledge and high accuracy. Therefore, the first question when generating a dataset is whether humans can be used for the job. However, humans are not as scalable as computer software. Manual dataset generation by humans is possible when the needed labeling is relatively small or as a baseline for synthetic dataset generation. There are two types of manual dataset generation: crowd-based dataset generation and domain expert-based dataset generation. Crowdsourced dataset generation is a relatively cheap and effective method, but it can only be used when the labeling is "common knowledge". In some cases, for example, in the study aiming to generate a dataset of relationship extraction between characters in literary novels (Chaturvedi et al., 2016), the researchers must use expert annotators who can read and understand a novel, or even annotate themselves when working with historical languages known only to a few, as in Schulz & Ketschik (2019).
Crowdsourcing is based on large groups of non-expert, low-paid workers or volunteers performing various well-defined tasks. Existing studies tested optimization strategies for different tasks, such as extracting keyphrases (Yang, Bansal, Dakka, Ipeirotis, Koudas, & Papadias, 2009), natural language and image annotation (Snow, O'Connor, Jurafsky, & Ng, 2008; Sorokin & Forsyth, 2008), and document summarization (Aker, El-Haj, Albakour, & Kruschwitz, 2012). Crowdsourcing requires quality control to ensure that crowd workers are performing their tasks at a satisfactory level (Elmalech & Grosz, 2017). One of the effective generic (task-agnostic) quality control techniques is filtering out tasks with a low inter-worker agreement (Bernstein, Little, Miller, Hartmann, Ackerman, Karger, Crowell, & Panovich, 2010; Downs, Holbrook, Sheng, & Cranor, 2010; Kittur, Smus, Khamkar, & Kraut, 2011). Another popular approach is breaking tasks into sub-tasks (Bernstein et al., 2010; Kittur et al., 2011).
Employing crowd workers for dataset generation has been carried out in various domains, including DH projects (e.g., Elson, Dames, & McKeown, 2010). Thus, in this use-case study, Elson et al. (2010) utilized crowdsourcing to build a dataset of quoted speech attributions in historical books in order to generate a social network among literary characters. Elson et al. (2010) did not use DNN, but rather classic machine learning methods (Davis, Elson, & Klavans, 2003), but the dataset generating process is the same for classic ML and DL.
Another example of such a use-case is fixing Optical Character Recognition (OCR) errors in historical texts. In the DH/LIS space, there is great interest in investigating historical archives. Therefore, over the past few decades, archives of paper-based historical documents have undergone digitization using OCR technology. OCR algorithms convert scanned images of printed textual content into machine-readable text. The quality of the OCRed text is a critical component for the preservation of historical and cultural heritage. Unsatisfactory OCR quality means that the text will not be searchable or analyzable, or that its analysis may result in wrong conclusions. Unfortunately, while generic OCR techniques and tools achieve good results on modern texts, they are not accurate enough when applied to historical texts. Post-correction of digitized small-scale or niche-language historical archives is a challenge that can be solved using DNNs with high accuracy (Chiron, Doucet, Coustaty, & Moreux, 2017; Rigaud, Doucet, Coustaty, & Moreux, 2019) if an appropriate dataset is attainable. Therefore, the first thing that should be researched is an effective methodology for crowdsourcing this specific task (Suissa, Elmalech, & Zhitomirsky-Geffet, 2019). The details of the crowdsourcing research are outside the scope of this paper. What is essential from the DH/LIS research point of view is that the findings of Suissa et al. (2019) yielded an effective dataset generation approach. Using the developed strategies, DH researchers can optimize the process to achieve better results matching their objectives and priorities. The corrected corpus of OCRed texts created by the optimized crowdsourcing procedure can serve as a training dataset for DNN algorithms.
However, although the crowdsourcing method yields satisfactory results, it is suitable mainly for widely spoken languages like English or Spanish. Other national languages do not have enough crowd workers who speak them to utilize such an approach effectively. Moreover, manually generating a dataset for training a DNN model in order to post-correct OCR errors is expensive and inefficient, even when the task is crowdsourced. Therefore, in practice, this human-only dataset generation should be shifted to a human-in-the-loop solution.
#### Training Dataset Generation using Algorithms
The next range of solutions takes a two-phase approach. In the first phase, humans are used to create a small set of examples; this set of examples is used in the second phase by a different set of algorithms to generate a synthetic dataset with numerous training examples (Pantel, & Pennacchiotti, 2006; Bunescu, & Mooney, 2007). One way is to find recurring patterns in a small number of manually corrected examples, and use them to generate more correct examples. In a use-case study that adopted this approach for automatic training dataset generation in the OCR post-correction domain, Suissa, Elmalech, & Zhitomirsky-Geffet (2020) used crowd workers to fix a relatively small set of OCRed documents. Then, the Needleman-Wunsch alignment algorithm (Needleman, & Wunsch, 1970) was used to find common character confusions from the crowd workers' corrections.
Using this confusion list, a large dataset of "wrong" and "right" sentences was generated and used by a DNN to correct historical OCRed text.
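As an illustration of this synthetic-generation step, the sketch below corrupts clean text with a character-confusion table; the confusion entries, probabilities, and example sentence are hypothetical stand-ins for the statistics that Suissa et al. (2020) mined from crowd-corrected pages.

```python
import random

# Hypothetical character-confusion table mined from crowd-corrected pages:
# each correct string maps to OCR-style misreadings and their probabilities.
CONFUSIONS = {
    "e": [("c", 0.04), ("o", 0.02)],
    "l": [("1", 0.05), ("i", 0.03)],
    "rn": [("m", 0.06)],  # multi-character confusions are common in OCR
}

def corrupt(text: str) -> str:
    """Inject OCR-style errors into clean text to create a 'wrong' example."""
    out, i = [], 0
    while i < len(text):
        for span in (2, 1):  # try multi-character confusions first
            chunk = text[i:i + span]
            options = CONFUSIONS.get(chunk)
            if options is not None:
                for wrong, p in options:
                    if random.random() < p:
                        out.append(wrong)
                        break
                else:
                    out.append(chunk)  # no confusion fired; keep the chunk
                i += span
                break
        else:
            out.append(text[i])  # character has no confusion entry
            i += 1
    return "".join(out)

clean = "the learned clergyman wrote in the margin"
pairs = [(corrupt(clean), clean) for _ in range(5)]  # ("wrong", "right") pairs
```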
Another way to generate a dataset from a small set of manual examples is called "distant supervision" (Mintz, Bills, Snow, & Jurafsky, 2009). In this approach, a classifier is trained on a small set of examples and is applied to a large corpus. The classifier will label the data with relatively low - but sufficiently high - accuracy for the DNN to learn other features from this weak classification. Blanke, Bryant, & Hedges (2020) used this method to perform sentiment analysis on Holocaust testimonial data (Thompson, 2017). In the first phase, they did not use crowd workers for the initial dataset generation but rather applied a dictionary-based approach to find negative and positive sentiment sentences based on the TF-IDF measure (Singhal, 2001). Using these sentences, they trained a classifier to distinguish between positive and negative examples. In the second phase, they used the classifier to produce a large training corpus of positive and negative memories of Holocaust survivors for DNN text analysis. Using this method eliminates the need for humans; however, it is suitable only for specific tasks.
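A minimal sketch of the distant-supervision idea might look as follows; the seed lexicons and sentences are invented for illustration, and scikit-learn's TF-IDF/logistic-regression pipeline stands in for whatever weak classifier a given study actually trains.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed lexicons; Blanke et al. (2020) derived theirs from
# TF-IDF statistics rather than from hand-picked word lists.
POSITIVE = {"liberation", "reunion", "hope", "rescue"}
NEGATIVE = {"hunger", "fear", "deportation", "loss"}

def weak_label(sentence: str):
    words = set(sentence.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos != neg:
        return int(pos > neg)
    return None  # abstain on ambiguous sentences

seed_corpus = [
    "the day of liberation brought hope to everyone",
    "hunger and fear followed the deportation",
    "we spoke about the rescue and the reunion",
    "the loss stayed with us",
]
labeled = [(s, y) for s in seed_corpus if (y := weak_label(s)) is not None]
texts, ys = zip(*labeled)

# Phase 1: train a weak classifier on the dictionary-labeled sentences.
vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), ys)

# Phase 2: apply it to a much larger unlabeled corpus, producing noisy
# ("silver") labels that a DNN can then be trained on.
big_corpus = ["the reunion after the war felt like a rescue",
              "fear filled the camp"]
silver_labels = clf.predict(vec.transform(big_corpus))
```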
A different approach to addressing the unavailability of training datasets is the transfer learning method (Torrey & Shavlik, 2010). In transfer learning, a generic dataset is used; the dataset should be suitable for the task that needs to be solved, but with open-domain or other-domain data. The model is then trained again using a small set of domain-specific examples (generated by humans or artificially). This approach is based on the intuition that humans transfer their knowledge between tasks based on previous experiences. Cilia et al. (2020) utilized transfer learning to identify medieval writers from scanned images. Instead of generating a large dataset, they used a model that was already trained on an open generic dataset, MS-COCO (Lin et al., 2014), and trained it again using a small set of domain-specific examples from the Avila Bible (images of a giant Latin copy of the Bible). Banar, Lasaracina, Daelemans, & Kestemont (2020) applied transfer learning to train neural machine translation between French and Dutch on digital heritage collections. They trained several DNNs on Eubookshop (Skadins, Tiedemann, Rozis, & Deksne, 2014), a French-Dutch aligned corpus. Then, instead of training the DNN models directly on the target domain data, they first trained the models on "intermediate" data from Wikipedia (articles close to the target domain). Only then did they train the models for the third time on the target domain data - the Royal Museums of Fine Arts of Belgium dataset. Using this "intermediate fine-tuning" approach, Banar et al. (2020) achieved high accuracy for French-Dutch translation in the domain of Fine Arts. This method can also solve another challenge for the DH/LIS researcher when using DNN models - the domain adaptation challenge.
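For illustration, a minimal transfer-learning sketch in PyTorch/torchvision is shown below: a network pre-trained on a generic image corpus gets a new classification head and is fine-tuned at a small learning rate. The number of writers, the random batch, and the choice of ResNet-18 are assumptions for the sketch (Cilia et al. (2020) used their own models and data), and the weights enum assumes torchvision >= 0.13.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_WRITERS = 12  # hypothetical number of writers to distinguish

# Start from a model pre-trained on a large generic image corpus.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_WRITERS)  # new task head

# Fine-tune the whole network at a small learning rate so the generic
# features are nudged, not overwritten, by the small domain dataset.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)            # stand-in for scanned pages
labels = torch.randint(0, NUM_WRITERS, (8,))    # stand-in writer labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```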
Recent studies (Radford, Wu, Child, Luan, Amodei, & Sutskever, 2019; Brown, Mann, Ryder, Subbiah, Kaplan, Dhariwal, & Amodei, 2020) show that in some cases, instead of fine-tuning a pre-trained model, a large-scale pre-trained model, such as GPT-3 (Brown et al., 2020), trained on ~500 billion (modern) words, can achieve good results with a limited domain-specific dataset (or without one). Although these methods (named few-shot and zero-shot learning) do not reach the same performance as the fine-tuning method, they are preferable for low-resource domains where dataset generation is impossible. However, most of the models that are pre-trained on a large-scale modern English dataset and suitable for few-shot and zero-shot learning may not reach the same accuracy for DH historical corpora, especially in national languages other than English, due to a bias towards modern language.
### Domain Adaptation
Even with a large dataset ready for DNN training, there are other challenges a DH/LIS expert may encounter when attempting to solve a text analysis task on DH/LIS data with DNNs. As mentioned in the previous section, data is a critical part of DNN's high accuracy. However, specific task/domain adaptation is just as vital, and without adapting the model or the architecture to the specific task and domain, the DNN may perform poorly.
A DNN model is a set of chained mathematical formulas with weights assigned to each node (neuron) expressing a solution to a specific task. Although there are regularization techniques to generalize the DNN model, in many cases training the model with different data will significantly impact the weights. In other words, using the same mathematical formulas, the learning process interprets the same task differently. In this context, transfer learning described in the previous section can also serve as a domain adaptation method, since the DNN model's interpretation of the task is adjusted to the domain-specific data. Moreover, DH/LIS text analysis tasks are not just different in terms of interpretation but also often require a domain-specific preprocessing and analysis pipeline. Therefore, in order to improve the accuracy of DNN models for text analysis tasks, DH/LIS experts should be familiar with methods and techniques for customizing DNN models, preprocessing DH/LIS data, and adapting the analysis pipeline.
#### DNN Optimization for DH-specific Tasks
A DNN model has a large number of architecture components and hyper-parameters that influence the model's training efficacy and accuracy. Selecting suitable domain-specific components and hyper-parameter values may considerably improve the performance of the DNN [1]. Here are a few of the most common architecture components and hyper-parameters that an expert should consider (see Appendix I for technical details):
* Architecture components:
* Type of the model - for instance, RNN-based, SAN-based [23], feed-forward-based, Transformers-based [14].
* Type and size of the layers - including individual layers, such as CNN, LSTM (Hochreiter et al., 1997), GRU (Cho et al., 2014), ResNet (He et al., 2016), AlexNet (Krizhevsky et al., 2012), and multi-layer architectures, such as BERT (Devlin et al., 2018). These can be applied with or without bidirectionality (Schuster et al., 1997), attention (Bahdanau, Cho, & Bengio, 2015), skip-connections (Chang et al., 2017), and other architectural components.
* Type of the input representation - DNN input is a vector (a series of numbers). Each number can represent a word using word-embedding methods, such as Word2Vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014), a single character using one-hot encoding or character-embedding (Char2Vec), encoded features, or contextual embeddings (e.g., BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2019)) based on the surrounding words.
* Type of the overall architecture - for instance, encoder-decoder architecture (Cho et al., 2014).
* Type of the activation functions - including Sigmoid, Tan-h, ReLU, and Softmax.
* Type of the loss function - for regression tasks: i) mean squared error (MSE), ii) mean squared logarithmic error, iii) mean absolute error; for binary classification tasks: i) binary cross-entropy, ii) hinge, iii) squared hinge; for multi-class classification: i) multi-class cross-entropy, ii) sparse multi-class cross-entropy, iii) Kullback-Leibler divergence.
* Type and size of the regularization layers - regularization layers reduce overfitting by adding constraints to the DNN. These constraints, such as dropout (Srivastava et al., 2014), L1, and L2, prevent the model from memorizing the training data and force it to learn the patterns in the data.
* Hyper-parameters:
* Batch size - the number of examples to use in a single training pass.
* Number of epochs and the epochs' size - the number of iterations on the training data and the number of examples to use during the entire training process.
* Learning rate, method, and configuration - such as stochastic gradient descent (SGD), adaptive moment estimation (Adam) (Kingma & Ba, 2014), and Adagrad (Duchi et al., 2011).
Theoretically, architecture components are also hyper-parameters. However, from a practical perspective, once architecture components are chosen, they are usually fixed. There are techniques that can be applied to find and set these architecture components and hyper-parameters automatically. These techniques are called AutoML and are suitable for many different DNN models (and classical
ML models). However, AutoML has its limitations: it is often costly (training the model repeatedly), does not fit large-scale problems, and may lead to overfitting (Feurer & Hutter, 2019). It is advisable to check AutoML optimization methods such as submodular optimization (Jin, Yan, Fu, Jiang, & Zhang, 2016), grid search (Montgomery, 2017), Bayesian optimization (Melis, Dyer, & Blunsom, 2017), neural architecture search (So, Liang, & Le, 2019), and others (Feurer et al., 2019). Alternatively, if the researcher has a hypothesis or intuition about the problem, it is also possible to test multiple combinations of architecture components and hyper-parameters manually. Moreover, training a large DNN language model such as a BERT-based model with standard pre-defined hyper-parameters on public cloud servers costs $2,074-$12,571, depending on the hyper-parameters and the corpus size (Devlin et al., 2018), while using neural architecture search (So et al., 2019) to train a DNN language model with hyper-parameters optimized for the specified task costs $44,055-$3,201,722 (Strubell, Ganesh, & McCallum, 2019). Therefore, the budget is another consideration for using some AutoML methods.
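For researchers taking the manual route, a minimal grid search might look like the following sketch; the search space, the tiny classifier, and the synthetic data are placeholders chosen for brevity rather than a recommended configuration.

```python
import itertools
import torch
import torch.nn as nn

# Synthetic stand-in for a featurized DH dataset (e.g., stylometric features).
X = torch.randn(512, 32)
y = (X[:, 0] > 0).long()
X_train, y_train, X_val, y_val = X[:400], y[:400], X[400:], y[400:]

grid = {"hidden": [16, 64], "dropout": [0.0, 0.5], "lr": [1e-3, 1e-2]}

def train_eval(hidden, dropout, lr, epochs=30):
    """Train a small classifier with one hyper-parameter combination
    and return its validation accuracy."""
    model = nn.Sequential(nn.Linear(32, hidden), nn.ReLU(),
                          nn.Dropout(dropout), nn.Linear(hidden, 2))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X_train), y_train).backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        return (model(X_val).argmax(1) == y_val).float().mean().item()

best = max(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda cfg: train_eval(**cfg),
)
print("best hyper-parameters:", best)
```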
Numerous DH studies have demonstrated the importance and the impact of hyper-parameter optimization on DNN accuracy. Tanasescu, Kesarwani, & Inkpen (2018) optimized hyper-parameters for poetic metaphor classification. They experimented with different activation functions (ReLU and Tan-h for the inner layers, Softmax and Sigmoid for the output layer), number of layers (1-4), number of neurons in each layer (6-306), dropout rate (0-0.9), number of epochs (20-1000), and batch size (20-200). The optimization increased the metaphor classification F-score by 2.9 (from 80.4 to 83.3) and precision by 5.6 (from 69.8 to 75.4). Wang et al. (2016) used a DNN model for Chinese song iambics generation and tested several architecture components. In their research, Wang et al. (2016) added an attention layer (Bahdanau et al., 2015) on top of bidirectional LSTM layers and tested several domain-specific training methods. This DNN domain optimization made it possible to achieve near-human performance. These use-cases emphasize how important it is for DH/LIS experts to understand architecture components and hyper-parameters and their usage.
#### Domain-specific Dataset Adaptation for DNN
Using DNN models in some domains can also require adaptation of the data (preprocessing) prior to inputting it into the DNN model. A use-case study by Won, Murrieta-Flores, & Martins (2018) aimed to perform Named Entity Recognition (NER) on two historical corpora, the Mary Hamilton Papers (modern English from 1750 to 1820) and the Samuel Hartlib collection (early modern English from 1600 to 1660). NER is an NLP task that identifies entity types in text. Entity types can be places, people, or organization names and other "known names". The historical corpus selected in Won et al. (2018) was OCRed and preserved in hierarchical XML files with texts and metadata. DNN models (and the tools used in the study) for NER are not designed to work directly on XML since XML is a graph-based format, and NER is a sequence-based task. It should be noted that there
are graph-based DNN models (e.g., Scarselli, Gori, Tsoi, Hagenbuchner, & Monfardini, 2008), but they are not suitable for the NER task. Therefore, Won et al. (2018) needed to adapt their domain data by "translating" the XML markup into text sequences that a DNN model can receive as input. In this preprocessing phase, the researchers took into account the domain metadata embedded in the XML files, such as authorship, dates, information about the transliteration project, corrections and suggestions made by the transliterators, and particular words and phrases annotated within the body text. Moreover, the square brackets (and their content) added by the transcribers were semi-automatically removed from the text. The metadata was added to the text sequence as labels for the training data to improve the accuracy of the results. Won et al. (2018) did not use DNN models directly but rather used "off the shelf" software to conduct their research. However, they concluded the research with the recognition that using pre-made tools is not sufficient - "_Finally, it must be noted that although this research accomplished the evaluation of the performance of these NER tools, further research is needed to deeply understand how the underlying models work with historical corpora and how they differ._"
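To make the "translating XML into sequences" step concrete, here is a minimal sketch using only Python's standard library; the markup is a hypothetical TEI-like example, not the actual Hamilton or Hartlib schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical TEI-like markup; the actual Hamilton/Hartlib schemas differ.
doc = ET.fromstring("""
<letter author="Mary Hamilton" date="1784">
  <body>We dined with <place>Bath</place> society and met
  <person>Mrs Carter</person>.</body>
</letter>
""")

def xml_to_sequence(letter):
    """Flatten markup into (token, tag) pairs a sequence model can consume."""
    pairs = []
    def walk(node, tag):
        if node.text:
            pairs.extend((tok, tag) for tok in node.text.split())
        for child in node:
            walk(child, child.tag.upper())  # element name becomes the label
            if child.tail:
                pairs.extend((tok, "O") for tok in child.tail.split())
    walk(letter.find("body"), "O")
    # Document-level metadata can be kept as extra features or labels.
    meta = {"author": letter.get("author"), "date": letter.get("date")}
    return pairs, meta

tokens, metadata = xml_to_sequence(doc)
# tokens -> [('We', 'O'), ..., ('Bath', 'PLACE'), ..., ('Carter', 'PERSON'), ...]
```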
#### DNN Pipeline Adaptation
DNN models are designed to work in a certain pipeline of components to solve a specific task. For example, a "naive" DNN-based pipeline for the OCR of a book collection would be: 1) scan a book page, 2) use the image as input to an image-to-text DNN model, 3) use the obtained text or post-process it to correct errors. However, in some cases, it is advisable to design a new domain-specific pipeline to solve the task or increase the model's accuracy. A use-case of such a domain-specific OCR pipeline is presented by Cilia et al. (2020). The goal of the study was to identify the writer of each page of a given medieval manuscript. Medieval handwritten manuscripts present two unique challenges for OCR: 1) first section letters or titles may be drawn as a picture over several lines, and 2) handwritten lines are not always aligned and may reduce accuracy when performing a full-page OCR. Cilia et al. (2020) designed a pipeline for processing handwritten medieval texts with three main steps, using: 1) an object detector to detect lines in the page's scanned image and separate a picture at the top from the text lines, 2) a separate DNN classifier to classify each line, and 3) a majority vote among the DNN classifications obtained for each line and picture object at the line level, in order to make a decision for the classification (writer identification) of the entire page. This pipeline, tailored to the medieval paleography domain, solved the domain's unique challenges by separating picture objects from text lines and classifying each line with a different classifier instead of classifying an entire page with a single DNN model (the naive pipeline). This pipeline's domain adaptation approach, combined with the transfer learning approach described in the previous section, produced an impressive 96% accuracy in identifying writers that would not have been achieved without this adaptation.
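The voting stage of such a pipeline can be sketched in a few lines; `detect_lines` and `classify_line` below are stubs standing in for the object detector and the trained line-level DNN classifiers of Cilia et al. (2020).

```python
from collections import Counter

def identify_page_writer(page_image, detect_lines, classify_line):
    """Classify each detected line separately, then vote for the page label."""
    lines = detect_lines(page_image)                   # step 1: line detection
    votes = Counter(classify_line(l) for l in lines)   # step 2: per-line labels
    writer, count = votes.most_common(1)[0]            # step 3: majority vote
    return writer, count / sum(votes.values())         # label plus vote share

# Toy usage with stub components standing in for the trained models:
page = ["line-a", "line-b", "line-c"]
writer, confidence = identify_page_writer(
    page,
    detect_lines=lambda img: img,
    classify_line=lambda line: "scribe_2" if line != "line-b" else "scribe_7",
)
# -> writer == "scribe_2", confidence ≈ 0.67
```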
Pipeline adaptation is not just pipelining different models or combining ML and DL; it is also re-training and adapting an existing model, i.e., fine-tuning a model. Fine-tuning a model is a subset of transfer learning, in which a model is trained on a different dataset and also changed by setting different hyper-parameters or adding new last layers on top of the model to fit a specific task. In their research, Todorov and Colavizza (2020) fine-tuned a BERT-based model (Devlin et al., 2018) to increase the annotation accuracy of NER in French and German historical corpora. In particular, the Groningen Meaning Bank's Corpus Annotated for NER was applied (Bos, Basile, Evans, Venhuizen, & Bjerva, 2017). To embed words (including sub-words) and characters, four models were applied: (1) newly trained word-embeddings on their historical corpus, (2) in-domain pre-trained embeddings that were trained on another corpus in the same domain, (3) BERT-based embeddings that were trained on French and German Wikipedia, and (4) character-level embeddings learned from the historical corpus training data. As can be observed from Figure 2, Todorov et al. (2020) combined the embeddings (by concatenation) and transferred the unified embeddings to a new layer based on a Bi-LSTM-CRF layer. A Bi-LSTM-CRF layer is a Bidirectional (Schuster et al., 1997) Long Short-Term Memory (Hochreiter et al., 1997) layer that merges the sub-word embedding input into a word-level output and transfers its output to fully connected layers (one layer per entity type), which then output tag (entity type) probabilities for each token using Conditional Random Fields (Lafferty, McCallum, & Pereira, 2001). The Bi-LSTM-CRF method has been shown to be useful and accurate by Lample, Ballesteros, Subramanian, Kawakami, & Dyer (2016). They also changed the LSTM activation function (removing the tan-h function) and tried three different hyper-parameter configurations. Using the domain-specific pipeline, model, and hyper-parameters, the researchers dramatically increased the accuracy (for some entity types by over 20%) of the NER task on French and German historical corpora compared to a state-of-the-art baseline model. Moreover, they tested the impact of the pre-trained generic embedding. They found that (1) without using the open-domain embedding (BERT), their model did not attain high accuracy, and (2) on the other hand, "freezing" the open-domain embedding layers (i.e., using them but re-training only the top layers on the domain-specific historical data) did not affect the accuracy. These findings demonstrate the importance of adapting DNN models to a specific domain and task, while reducing the training time and costs by freezing the large open-domain layers. It is essential to note that besides inputting the historical corpora documents into the DNN model, Todorov et al. (2020) also tested the addition of manually-created features to the documents such as title, numeric and other markups; these features did not have any effect on the accuracy, proving that the DNN model "learned" (or at least did not need) these features.
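A minimal sketch of the "freeze the open-domain layers, train only the top" recipe is shown below, assuming the Hugging Face transformers library and a generic multilingual BERT checkpoint; Todorov and Colavizza's actual system concatenates four embeddings and adds a CRF, which this sketch omits.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

NUM_ENTITY_TYPES = 5  # hypothetical size of the historical tag set

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased")

# Freeze the large open-domain layers; per the findings above, this barely
# affects accuracy while greatly reducing training time and cost.
for param in bert.parameters():
    param.requires_grad = False

class Tagger(nn.Module):
    """A lightweight recurrent tagging head trained on the historical corpus."""
    def __init__(self, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, NUM_ENTITY_TYPES)

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():  # frozen open-domain embeddings
            emb = bert(input_ids=input_ids,
                       attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(emb)
        return self.out(h)  # per-token tag scores (a CRF layer would go on top)

tagger = Tagger()
batch = tokenizer(["Paris, le 3 mai 1793"], return_tensors="pt")
scores = tagger(batch["input_ids"], batch["attention_mask"])
```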
## A Decision Model for Using Deep Learning for Digital Humanities Research
Based on the above analysis of challenges and possible solutions, illustrated by multiple use-case studies described in the recent literature, it is clear that DH/LIS experts must know just enough math, understand the inner workings of ML and DL algorithms, master Python programming, and be able to use ML/DL frameworks and other popular modules (Geron, 2019).
Therefore, this paper argues that DH/LIS researchers can no longer see NLP and ML researchers as their "tool makers", and must learn to apply and adapt deep learning models (DNNs) to their specific research domain. However, since working with DNN models requires significant effort, computational resources, budget, and time, a decision model was formulated for assisting DH experts in determining when it is "worthwhile" to invest in training DNN models. The decision model is based on two strategies: 1) the data availability strategy - how to assess the types of methods and models suitable for the available dataset, and 2) the domain adaptation strategy - how to determine whether and when it is "worthwhile" to invest in domain adaptation.
Figure 3 presents the data availability strategy and leads to three possible recommendations: (1) with no data, either zero-shot DL models, or hard-coded rules/assumptions regarding domain data should be implemented, based on prior knowledge and experience; (2) with limited data, either classical machine learning algorithms, such as SVM or HMM, or few-shot DL models can be used; otherwise (3) it is advisable to use supervised deep learning models for the task. It should be noted that if the
Figure 2: Historical corpora NER fine-tuning pipeline (Todorov and Colavizza, 2020).
DNN model is overfitting (high accuracy on the training dataset and low accuracy on the validation dataset), it is advisable to increase the dataset size by employing expert workers, crowdsourcing, or synthetic data generation. Figure 4 presents the domain adaptation strategy and also leads to three possible recommendations: (1) if strict rules can be defined, there is no need for ML or DL; (2) with limited resources or for low accuracy tasks, ML is the preferable option, and (3) with the appropriate resources and a need for high accuracy, DL with domain adaptation should be utilized. A researcher can use both strategies of the proposed decision model to choose the recommended approach for the given task. Since there are many different text analysis tasks, some aspects of the strategies depend on the expert's assessment; for example, "what is considered a small or a large dataset?" and "what is low or high accuracy?". These assessments should be performed by the researcher based on the concrete task, domain, and needs. Notice that the advice to use DNN models does not mean that it is not recommended to combine them with ML algorithms when suitable.
Figure 3: Data availability strategy for DH researchers
In general, supervised deep learning is recommended when there is a large corpus (or a large corpus can be generated), for complex problems such as unstructured texts, when the researcher has a budget for computational resources (GPU servers), and when accuracy is essential (domain adaptation is always assumed). Since most of the DH corpora are not labeled, dataset generation will most probably be required. When the labeling requires only "common knowledge", it is advisable to use crowdsourcing (if possible); otherwise, the researcher should consider using domain experts or automatically generating synthetic data as explained above in this paper. A step-by-step example of decision model usage for a specific DH task can be found in Appendix II.
It should be noted that the extensive computational resources needed to train DNN models have an impact on the environment. DL may become a major contributor to climate change if the exponential growth in training more and more DNN models continues (Anthony, Kanding, & Selvan, 2020; Hsueh, 2020). It has been estimated that training one transformer model such as a BERT-based model (Devlin et al., 2018) produces an amount of CO\({}_{2}\) similar to that of one person's air travel from NY to SF; using neural architecture search (So et al., 2019), an AutoML method, produces almost five times more CO\({}_{2}\) than an average car produces throughout its lifetime, including the fuel (Strubell et al., 2019). We note that the proposed decision model does not consider environmental impact, yet researchers should be aware of this and take it into consideration.
Figure 4: Domain adaptation strategy for DH researchers
By using this decision model as a guideline and applying the suggested solutions for the two fundamental challenges faced by many DH projects - DH-specific training dataset generation and model adaptation, DH/LIS experts can solve a variety of important tasks in the field for diverse national languages, such as 1) improving OCR post-correction (including restoring damaged text); 2) automated ontology and knowledge graph construction for various DH domains (based on entity/category and relation extraction and NER); and 3) corpus-based stylometric analysis and profiling of DH resources (e.g., identification of an author, date, location, and sentiment of the given text or image).
## Conclusion and Discussion
This paper presents the two main challenges almost every DH/LIS researcher can expect to encounter when using DNN models in her research. Although classic learning techniques based on rules, patterns, or predefined features are no longer considered state-of-the-art in many text processing tasks (e.g., Thyaharajan, Sampath, Durairaraj, & Krishnamoorthy, 2020; Glazkova, 2020), DH/LIS researchers still use them often, even when there is a better alternative such as deep neural networks. The reasons for avoiding deep learning in DH may be the lack of "off-the-shelf" tools tailored for the specified task, lack of training data, as well as time, computational resources, and budget limitations. Based on the presented investigation of the main challenges of using DNNs in DH research and the proposed decision model for handling these challenges, this paper argues that DH/LIS researchers should expand their arsenal of computational skills and methods. A DH expert must acquire in-depth knowledge of mathematics and software programming and have a deep understanding of the usage of deep neural network frameworks. Therefore, we encourage DH/LIS academic departments to introduce the following topics into their academic syllabus, at the applied (rather than theoretical) level:
* Multivariable calculus (partial derivatives, gradients, high order derivatives),
* Linear algebra (vector space, matrices operations, matrices decompositions),
* Probability (distribution, entropy),
* Statistics (Bayesian inference, parameter estimation, overfitting, and underfitting),
* Mathematical optimization (gradient descent, stochastic gradient descent),
* Unsupervised machine learning (k-means, hierarchical clustering, local outlier factor),
* Supervised machine learning (SVM, logistic regression, naive Bayes, kNN),
* Unsupervised and self-supervised deep learning (autoencoders, deep belief networks, generative adversarial networks, embeddings),
* Supervised deep learning (feed-forward, RNN, Self-Attention Network (SAN), CNN),
* Python / R programming (working with data, visualization, ML and DL frameworks, working with GPUs).
Adding these topics to the academic syllabus of DH/LIS experts does not mean that DH/LIS experts will become Computer Science experts, but rather they will be able to comprehend and adapt DL algorithms for their needs. Using this knowledge, DH/LIS experts will no longer be limited to "off the shelf" tools developed for generic open-domain tasks, and will be able to utilize the full potential of the DL algorithms.
Finally, in addition to raising awareness of digital humanities researchers of deep neural networks as the state-of-the-art text analysis method, researchers should be encouraged to generate and release public DH/LIS corpora for training deep neural networks. |
2304.09101 | LaSNN: Layer-wise ANN-to-SNN Distillation for Effective and Efficient
Training in Deep Spiking Neural Networks | Spiking Neural Networks (SNNs) are biologically realistic and practically
promising in low-power computation because of their event-driven mechanism.
Usually, the training of SNNs suffers accuracy loss on various tasks, yielding
an inferior performance compared with ANNs. A conversion scheme is proposed to
obtain competitive accuracy by mapping trained ANNs' parameters to SNNs with
the same structures. However, an enormous number of time steps are required for
these converted SNNs, thus losing the energy-efficient benefit. Utilizing both
the accuracy advantages of ANNs and the computing efficiency of SNNs, a novel
SNN training framework is proposed, namely layer-wise ANN-to-SNN knowledge
distillation (LaSNN). In order to achieve competitive accuracy and reduced
inference latency, LaSNN transfers the learning from a well-trained ANN to a
small SNN by distilling the knowledge other than converting the parameters of
ANN. The information gap between heterogeneous ANN and SNN is bridged by
introducing the attention scheme, the knowledge in an ANN is effectively
compressed and then efficiently transferred by utilizing our layer-wise
distillation paradigm. We conduct detailed experiments to demonstrate the
effectiveness, efficacy, and scalability of LaSNN on three benchmark data sets
(CIFAR-10, CIFAR-100, and Tiny ImageNet). We achieve competitive top-1 accuracy
compared to ANNs and 20x faster inference than converted SNNs with similar
performance. More importantly, LaSNN is dexterous and extensible that can be
effortlessly developed for SNNs with different architectures/depths and input
encoding methods, contributing to their potential development. | Di Hong, Jiangrong Shen, Yu Qi, Yueming Wang | 2023-04-17T03:49:35Z | http://arxiv.org/abs/2304.09101v1 | # LaSNN: Layer-wise ANN-to-SNN Distillation for Effective and Efficient Training in Deep Spiking Neural Networks
###### Abstract
Spiking Neural Networks (SNNs) are biologically realistic and practically promising in low-power computation because of their event-driven mechanism. Usually, the training of SNNs suffers accuracy loss on various tasks, yielding an inferior performance compared with ANNs. A conversion scheme is proposed to obtain competitive accuracy by mapping trained ANNs' parameters to SNNs with the same structures. However, an enormous number of time steps are required for these converted SNNs, thus losing the energy-efficient benefit. Utilizing both the accuracy advantages of ANNs and the computing efficiency of SNNs, a novel SNN training framework is proposed, namely layer-wise ANN-to-SNN knowledge distillation (LaSNN). In order to achieve competitive accuracy and reduced inference latency, LaSNN transfers the learning from a well-trained ANN to a small SNN by distilling the knowledge other than converting the parameters of ANN. The information gap between heterogeneous ANN and SNN is bridged by introducing the attention scheme, the knowledge in an ANN is effectively compressed and then efficiently transferred by utilizing our layer-wise distillation paradigm. We conduct detailed experiments to demonstrate the effectiveness, efficacy, and scalability of LaSNN on three benchmark data sets (CIFAR-10, CIFAR-100, and Tiny ImageNet). We achieve competitive top-1 accuracy compared to ANNs and 20x faster inference than converted SNNs with similar performance. More importantly, LaSNN is dexterous and extensible that can be effortlessly developed for SNNs with different architectures/depths and input encoding methods, contributing to their potential development.
_Index Terms_ -- Spiking Neural Networks (SNNs), knowledge distillation, layer-wise, supervised learning.
## I Introduction
Spiking Neural Networks (SNNs) have shown great promise in low-power computing and are generally termed the third generation of neural networks [1]. Inspired by the principles of brain computing, SNNs imitate the brain by utilizing discrete spikes for representing and transmitting information, which provides significant potential on event-driven and energy-efficient neuromorphic hardware [2, 3, 4]. One critical challenge of SNNs is the effective training of spiking neuron-based networks [5]. Incorporating inspirations from biological nervous systems and structures, one way to address the problem is to adopt the spike-time-dependent plasticity (STDP) learning rule, a specialization of the widely studied Hebbian rule in neuroscience. However, the lack of a global teaching signal means that SNNs trained with the STDP rule cannot be extended to deep networks with good performance and are constrained to shallow structures solving simple tasks (e.g., MNIST) [6, 7, 8]. Due to the inherent non-differentiability of SNNs, the well-known backpropagation (BP) algorithm [9] is not straightforwardly applicable to them. In general, surrogate methods replace the actual gradient in SNNs with a designed continuous function that approximates the discontinuous derivative of spiking neurons [10, 11]. This approximation is adequate for small SNNs, but ineffective when SNNs adopt deep structures or solve challenging tasks (e.g., ImageNet). In addition, SNNs trained with surrogate-gradient methods fail to achieve performance comparable to their artificial neural network (ANN) counterparts. In order to address the performance problem mentioned above, ANN-to-SNN conversion methods were developed that convert the parameters of pre-trained ANNs into SNNs with the same structures [12, 13, 14, 15]. Although these methods achieve nearly equivalent representation even in large networks, the ANN-converted SNNs usually need an enormous number of time steps (hundreds or even thousands) [12, 13], leading to intense power consumption. Moreover, binary computation cannot be accelerated by using GPU devices, leading to high latency (\(T\times\) more time than ANN training). These obstructions run contrary to the objective of low-power computing.
Then it is natural to ask: is there an optimal way that both leverages the guidance of well-trained ANNs (like the conversion scheme) and maintains the efficiency of spike-based computing (like the surrogate scheme)? To this end, by distilling the knowledge from ANNs with a layer-wise scheme, we develop a novel framework called LaSNN. Similar to conversion methods, LaSNN utilizes a well-trained ANN to guide the SNN training process. However, instead of directly converting the network parameters, we propose to _distill the knowledge_ from an ANN to the target SNN for low-power and fast inference.
To achieve this, the critical issue is how to distill knowledge from ANNs to SNNs effectively. Knowledge distillation
generally transfers learning from a cumbersome teacher model with high performance to a smaller student model. Although knowledge distillation is promising for improving classification accuracy and compressing models, its application is mainly constrained to homogeneous models (e.g., from ANN to ANN) [16, 17]. The spike-based representation and computation of SNNs differ greatly from those of ANNs, which use analog values. Therefore, previous distillation approaches and training techniques cannot be directly applied to heterogeneous models.
In order to address these challenges, a new framework is developed, enabling SNNs to learn from ANNs. For clarity, we summarize the significance of our contributions as follows.
1. Instead of directly converting the network parameters from ANNs to SNNs, we propose an ANN-to-SNN knowledge transfer approach, which uses attention as a shared information representation, to bridge the information gap between heterogeneous networks of ANN and SNN.
2. We put forward a layer-wise ANN-to-SNN distillation framework capable of transferring the knowledge from ANNs to SNNs. The training pipeline of LaSNN consists of three stages: firstly, training a cumbersome ANN as the teacher model; secondly, utilizing ANN-to-SNN conversion methods to initialize the parameters of the student SNN; thirdly, training the initial SNN with a layer-wise distillation scheme in which the student SNN is encouraged to imitate the inference of the teacher ANN. The proposed framework enables SNNs to compress the knowledge from a large ANN, achieving accurate and efficient computing.
3. Detailed experiments are conducted on the CIFAR-10, CIFAR-100, and Tiny ImageNet data sets to evaluate our methods. Experimental results illustrate that LaSNN can achieve competitive top-1 accuracy compared to ANNs and is 20x faster than converted SNNs with similar accuracy. More importantly, LaSNN is feasible for different architectures/depths and encoding methods. Our work, thus, strongly suggests the superiority of LaSNN for training deep SNNs.
## II Related Work
Various training algorithms for SNNs have been developed that can be broadly categorized into synaptic plasticity learning rules, surrogate-gradient training methods, and ANN-to-SNN conversion algorithms. Abstracted from the learning rules of biological brains and based on sensitivity to spike timing, synaptic plasticity algorithms update connection weights according to neurons' firing time intervals [6, 8, 18]. Because they lack global information, synaptic plasticity algorithms are typically applied to solving simple tasks and processing neuromorphic images [6, 19, 20]. Therefore, our discussions are mainly focused on surrogate-gradient training and ANN-to-SNN conversion algorithms, which have achieved significant progress on complex tasks and deep structures. In addition, we briefly introduce previous works on knowledge distillation, which refers to transferring the knowledge in a large model or an ensemble of models into a single model that is much easier to deploy, and its application in SNNs.
### _Training of SNNs_
#### II-A1 Surrogate-gradient Algorithms
ANNs have achieved significant success with gradient-based training algorithms. By computing the gradient of each operation during the forward pass, errors can be backpropagated from the output layer to the input layer. However, the derivatives of spike functions, such as those of integrate-and-fire (IF) and leaky-integrate-and-fire (LIF) neurons, are undefined at the time step of spike generation and '0' otherwise. In order to cope with the non-differentiable spiking computations, surrogate gradient methods use a continuous function to approximate the spiking nonlinearity, serving as a surrogate for the actual gradient. Thus, SNNs can be directly optimized by applying backpropagation through time (BPTT), treating the SNN like an RNN [10, 21], or by error backpropagation algorithms. However, the BPTT algorithm needs to compute gradients and backpropagate errors through each time step, and is thus computationally expensive and slow. In addition, the inaccurate gradient approximations accumulate errors, making it difficult to train deep SNNs directly.
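As an illustration of this idea, a minimal PyTorch sketch of a surrogate spike function follows: the forward pass applies the true threshold function, while the backward pass substitutes a rectangular window around the threshold (one common choice; the window width here is arbitrary and not the specific surrogate used in the cited works).

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate backward."""

    @staticmethod
    def forward(ctx, membrane, threshold=1.0):
        ctx.save_for_backward(membrane)
        ctx.threshold = threshold
        return (membrane > threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        # Let gradients pass only near the threshold; the window width (0.5)
        # is an arbitrary choice for this sketch.
        window = (torch.abs(membrane - ctx.threshold) < 0.5).float()
        return grad_output * window, None  # no gradient w.r.t. the threshold

spike = SpikeFn.apply

v = torch.randn(4, requires_grad=True)
spike(v).sum().backward()  # gradients flow through the surrogate window
print(v.grad)
```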
#### II-A2 ANN-to-SNN Conversion
Converting ANNs directly into SNNs has been proven successful for training deep SNNs [12, 13, 14, 7, 10]. In order to facilitate nearly lossless conversion, an ANN is constructed with rectified linear unit (ReLU) neurons and some restrictions (e.g., no bias terms, average pooling, and no batch normalization), then trained with gradient descent. Different mapping strategies are applied for converting the well-trained ANN to an SNN with IF neurons, such as data-based normalization [7], threshold balancing [13, 22], and soft reset (also called the reset-by-subtraction mechanism) [12]. Since the firing rates closely approximate the high-precision activations only when a large number of time steps is introduced, the major bottleneck of these methods is high inference latency (2000-2500 time steps) to obtain satisfying performance. In a recent work, the parameters of the SNN are carefully calibrated layer-by-layer to match the activations after conversion [23]. However, this requires cumbersome architectures and does not scale to small SNNs.
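For intuition, the sketch below implements the core of the data-based normalization idea [7] for a toy fully-connected ANN: each layer's weights are rescaled by the maximum ReLU activation observed on sample data so that, after conversion, firing rates stay within range. Production conversion pipelines (and the layer-wise calibration of [23]) involve considerably more machinery.

```python
import torch
import torch.nn as nn

def normalize_weights(ann: nn.Sequential, sample_batch: torch.Tensor):
    """Rescale each Linear layer so post-ReLU activations peak near 1.0.

    W <- W * (prev_max / cur_max) and b <- b / cur_max keep the network's
    function unchanged (up to scale) while bounding activations, which
    become firing rates once the weights are copied into an IF-neuron SNN.
    """
    with torch.no_grad():
        x = sample_batch
        prev_max, pending = 1.0, None
        for layer in ann:
            x = layer(x)  # activations of the *original* network
            if isinstance(layer, nn.Linear):
                pending = layer  # scale it after observing its ReLU output
            elif isinstance(layer, nn.ReLU) and pending is not None:
                cur_max = x.max().item()  # (a zero-max guard is omitted here)
                pending.weight.mul_(prev_max / cur_max)
                if pending.bias is not None:
                    pending.bias.div_(cur_max)
                prev_max, pending = cur_max, None

ann = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                    nn.Linear(256, 10), nn.ReLU())
normalize_weights(ann, torch.rand(64, 784))
# The rescaled weights can now be mapped onto IF neurons with threshold 1.0.
```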
#### Ii-A3 Hybrid SNN Training
Recently, a hybrid training mechanism combining backpropagation and ANN-to-SNN conversion has been introduced to avoid the intense computation of BPTT on the one hand and to maintain low inference latency (100-250 time steps) on the other [24]. Specifically, SNNs are converted from pre-trained ANNs and then fine-tuned with a surrogate gradient or BPTT. However, this approach still relies only on the network's inherent information to optimize the weights of the SNN and lacks additional guidance. Therefore, the balance between accuracy and latency needs to be optimized further.
### _Knowledge Distillation_
Knowledge distillation has proven to be a practical approach to compress models by transferring the knowledge of a cumbersome model (teacher) to a small model (student). Generally,
soft labels that contain more information than one-hot labels are employed in the cross-entropy loss function to regularize the student model [16]. Previous works mainly focus on ANN-to-ANN distillation by reformulating the supervision signals for more effective knowledge transfer, such as attention transfer [17].
### _SNN Training with Knowledge Distillation_
Previous works have shown that knowledge distillation approaches proposed for ANNs can be extended to SNNs, such as distilling spikes from a large SNN to a small SNN [25] and distilling label-based knowledge from a well-trained ANN to a simple SNN [21, 26]. However, the approaches mentioned above mainly utilize label-based information, so the performance of deep SNNs can be improved further.
In this article, we first introduce an attention-based distillation capable of bridging the information representation and transmission gap between ANNs and SNNs. Based on the attention scheme, we propose a layer-wise distillation framework called LaSNN that effectively and efficiently transfers the learning of models. Moreover, we apply the surrogate-gradient method to optimize the parameters of SNNs during knowledge distillation.
## III The LaSNN Framework
The overall process of layer-wise ANN-to-SNN knowledge distillation is illustrated in Fig. 1 (a). In this section, we first describe the SNN model. Then we give details about the ANN-to-SNN distillation paradigm with a layer-wise supervision strategy. Finally, we present the three-stage training process of LaSNN step-by-step.
### _SNN Model_
#### Iii-A1 Encoding Method
We encode input images (analog values) into spatial-temporal patterns (spike trains) with a temporal coding scheme, since information in SNNs is represented and transmitted as spikes. The pixel values of the input images are normalized to the range \([-1,1]\). Feeding these normalized values into the Poisson encoder produces Poisson spike trains with rates proportional to the intensity of the corresponding pixels. Specifically, for each input image, the Poisson encoder generates a random number at every time step for every pixel, compares it with the normalized pixel value, and outputs a spike if the random number is less than the normalized value. Therefore, averaged over a long time, these Poisson-distributed spike trains are equivalent to the pixel values.
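For concreteness, a minimal sketch of this rate-coding step in PyTorch might look as follows; it is an illustrative implementation rather than the authors' code, and it assumes the pixel intensities have already been scaled to valid firing probabilities.

```python
import torch

def poisson_encode(images: torch.Tensor, num_steps: int) -> torch.Tensor:
    """Rate-code images into binary spike trains.

    One uniform random draw is made per pixel per time step; a spike is
    emitted when the draw falls below the pixel intensity, so the
    long-run spike rate is proportional to that intensity.
    """
    rand = torch.rand((num_steps,) + tuple(images.shape))
    return (rand < images).float()
```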
#### Iii-A2 Spiking Neuron
In this work, we use the LIF spiking neuron model. The iterative model, written in a discrete form for machine learning, is described by:
\[v_{i}^{t}=\lambda v_{i}^{t-1}+\sum_{j}\omega_{ij}o_{j}^{t}-\theta o_{i}^{t-1} \tag{1}\]
\[o_{i}^{t-1}=\begin{cases}1,&if\ v_{i}^{t-1}>\theta\\ 0,&otherwise\end{cases} \tag{2}\]
where \(o\) denotes the spike output, \(v\) represents the membrane potential, subscripts \(i\) and \(j\) denote the post- and pre-neuron, respectively, superscript \(t\) is the time step, \(\omega_{ij}\) is the synaptic weight connecting the post- and pre-neuron, \(\theta\) is the threshold potential, and \(\lambda(<1)\) indicates the leak in membrane potential. To reduce the number of trainable parameters, neurons within the same layer share an identical threshold value, and all neurons share the same leak value. During forward propagation, the membrane potential of a neuron increases as it receives pre-synaptic spikes; once the potential reaches the firing threshold, the neuron emits a post-synaptic spike and the membrane potential is reset to its resting value.
Fig. 1: (a) The framework of the proposed LaSNN. (b) Training pipeline.

However, as shown in equation (3), in order to define the loss function on the spike count, the neuron dynamics in the output layer are modified by removing the leak (\(\lambda=1\)) and integrating the input without firing. The number of neurons in the output layer equals the number of classification targets, and the predicted output distribution \(p\) is given in equation (4),
\[v_{i}^{t}=v_{i}^{t-1}+\sum_{j}\omega_{ij}o_{j}^{t} \tag{3}\]
\[p_{i}=\frac{e^{v_{i}^{T}}}{\sum_{j=1}^{N}e^{v_{j}^{T}}} \tag{4}\]
where \(T\) denotes the total number of time steps, \(v^{T}\) represents a neuron's membrane potential in the output layer accumulated over all time steps, and \(N\) denotes the number of classification targets.
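As a reference, the discrete dynamics of equations (1)-(4) can be sketched in PyTorch as below; the tensor shapes and the reset-by-subtraction step are illustrative assumptions, not the authors' released implementation.

```python
import torch

def lif_step(v, spikes_in, w, theta=1.0, lam=0.9):
    """One discrete LIF update following equations (1)-(2).

    v: (batch, n_post) membrane potentials; spikes_in: (batch, n_pre)
    binary pre-synaptic spikes; w: (n_post, n_pre) synaptic weights.
    """
    v = lam * v + spikes_in @ w.t()        # leaky integration of inputs
    spikes_out = (v > theta).float()       # fire when threshold is crossed
    v = v - theta * spikes_out             # soft reset by subtraction
    return v, spikes_out

def output_distribution(v_out):
    """Softmax over accumulated output potentials, equation (4)."""
    return torch.softmax(v_out, dim=-1)
```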
#### Iii-A3 Network Architectures
The SNN architecture is basically similar to traditional multi-layer feed-forward deep neural networks, such as VGG and residual architectures. However, some details are modified to achieve minimal loss during ANN-to-SNN conversion.
Firstly, since adopting bias terms increases the difficulty of threshold balancing and the probability of conversion loss, no bias terms are used. Batch normalization is also omitted from the ANN, because eliminating the bias term makes the input bias of each layer zero. Dropout [27] is employed in both the ANN and the SNN as an alternative regularizer.
Secondly, we adopt the average pooling operation to reduce the size of the feature maps. Introducing a max pooling operation would result in significant information loss because of the binary activations in SNNs.
Thirdly, we replace the original wide kernel (7x7, stride 2) in residual architectures with an alternative block that consists of three small convolution layers (3x3, stride 1) and two dropout layers in between.
### _ANN-to-SNN Distillation_
We now describe how knowledge is distilled from an ANN to a target SNN. The teacher is a well-trained, cumbersome ANN model containing rich and accurate attention information; the student is a small SNN model with a similar architecture. We explain 1) the activation-based scheme and 2) the gradient-based scheme for representing attention information, which bridge the knowledge gap between a real-valued ANN and a discrete-signal SNN.
#### Iii-B1 Distilling Activation-based Attention
Suppose the importance of a hidden neuron with respect to a particular input is indicated by its absolute activation value; we then define a function \(\mathcal{F}\) that computes statistics of these absolute values across channels to represent the attention knowledge (shown in Fig. 2). Specifically, let A \(\in R^{C\times H\times M}\) denote an ANN convolutional layer's activation tensor with \(C\) convolutional channels and spatial dimensions \(H\times M\). We use the function \(\mathcal{F}\) to map this \(3D\) input tensor to a real-valued output. To discriminate the differences in attention-based knowledge among different targets more clearly, the absolute values are raised to the power of 2. Moreover, the result is averaged over channels and spatial dimensions to ease the impact of individual extreme values and noise on overall performance. The mapping function \(\mathcal{F}\) is defined as follows:
\[\mathcal{F}:R^{C\times H\times M}\to R^{H\times M} \tag{5}\]
\[\mathcal{F}_{mean}(A)=\frac{\sum_{i=1}^{C}\sum_{j=1}^{H}\sum_{k=1}^{M}|A_{i, j,k}|^{2}}{C\times H\times M} \tag{6}\]
where the power and absolute value operations are element-wise. After introducing attention information for the activation tensors of the teacher and student models, we obtain a new attention loss term \(\mathcal{L}_{at}(Ta,St)\):
\[\mathcal{L}_{at}(Ta,St)=\sum_{l\in Z}||\mathcal{F}_{mean}^{l}(Ta)-\mathcal{F }_{mean}^{l}(St)||_{2} \tag{7}\]
which sums the losses of all attention map pairs (\(Z\)) between the teacher (\(Ta\)) and student (\(St\)) networks during the distillation process. Therefore, according to Eq. (7), the total loss function \(\mathcal{L}_{total}\) is:
\[\mathcal{L}_{total}=\mathcal{L}_{ce}+\frac{\alpha}{2}\mathcal{L}_{at}(Ta,St) \tag{8}\]
\[\mathcal{L}_{ce}=-\sum_{i}y_{i}log(p_{i}) \tag{9}\]
where \(\alpha\) is the hyperparameter, and \(\mathcal{L}_{ce}\) denotes the cross-entropy loss between the true output \(y\) and the predicted distribution \(p\).
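A compact sketch of the activation-based attention loss (equations (5)-(9)) is given below; it follows the mapping signature of equation (5), i.e., squared activations averaged over channels to give an \(H\times M\) attention map per layer, which is one plausible reading of the definition — an illustrative assumption rather than the authors' exact code.

```python
import torch
import torch.nn.functional as F

def attention_map(a: torch.Tensor) -> torch.Tensor:
    """Map a (C, H, M) activation tensor to an (H, M) attention map
    by averaging squared absolute activations over channels."""
    return a.abs().pow(2).mean(dim=0)

def attention_loss(teacher_acts, student_acts):
    """L2 distance between paired teacher/student attention maps, eq. (7)."""
    return sum(
        torch.norm(attention_map(ta) - attention_map(st), p=2)
        for ta, st in zip(teacher_acts, student_acts)
    )

def total_loss(logits, labels, teacher_acts, student_acts, alpha=0.9):
    """Cross-entropy plus weighted attention loss, equations (8)-(9)."""
    return F.cross_entropy(logits, labels) + 0.5 * alpha * attention_loss(
        teacher_acts, student_acts)
```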
#### Iii-B2 Distilling Gradient-based Attention
Suppose a small change in a pixel has a significant impact on the model's prediction; then the model pays particular attention to that pixel. We accordingly define a gradient-based attention, which is viewed as the input-sensitivity knowledge learned by the network. In other words, the attention to a specific spatial location of the input encodes the sensitivity of the model's output prediction with respect to changes at that location. Thus, the teacher model's gradient of the loss with respect to the input is defined as follows:
\[G_{Ta}=\frac{\partial}{\partial x}\mathcal{L}(W_{Ta},x) \tag{10}\]
Inspired by the spike activation map (SAM) [28], we use the spike activity in the forward propagation to define the input sensitivity function.
\[G_{St}(h,m,t)=\sum_{C}\sum_{t^{\prime}\in O_{h,m}}e^{-|t-t^{\prime}|}o_{h,m}^{t} \tag{11}\]
where \(t^{\prime}\) represents a previous spike time, and the set \(O_{h,m}\) contains the previous firing times of the neuron located at \((h,m)\). We then define the total loss as:
\[\mathcal{L}_{at}(Ta,St)=\sum_{l\in Z}||G_{Ta}^{l}-G_{St}^{l}||_{2} \tag{12}\]
\[\mathcal{L}_{total}=\mathcal{L}_{ce}+\frac{\alpha}{2}\mathcal{L}_{at}(Ta,St) \tag{13}\]
Fig. 2: Illustration of the spatial attention map of a streamlined convolutional network.
We assume that transfer losses are computed between teacher and student feature maps of the same spatial resolution; if necessary, interpolation or downsampling can be used to match their shapes.
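To make the two attention definitions concrete, the sketch below computes the teacher's input gradient (equation (10)) and a SAM-style spike-activity map (equation (11)); the spike-history layout is an illustrative assumption, since the exact data structures are not specified here.

```python
import torch

def teacher_input_gradient(model, x, labels, loss_fn):
    """Gradient of the teacher loss w.r.t. the input, equation (10)."""
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), labels)
    loss.backward()
    return x.grad

def student_sam(spike_history, t):
    """Spike-activity map at time t, equation (11).

    spike_history: (T, C, H, M) binary spike record of one layer.
    Past firing times contribute with exponentially decaying weight,
    gated by the spikes at the current step and summed over channels.
    """
    past = spike_history[: t + 1]
    steps = torch.arange(past.shape[0], dtype=torch.float)
    decay = torch.exp(-(t - steps).abs()).view(-1, 1, 1, 1)
    influence = (decay * past).sum(dim=0)              # (C, H, M)
    return (influence * spike_history[t]).sum(dim=0)   # (H, M)
```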
### _Layer-wise Supervision Strategy_
Previous works have shown that various parts of each layer in a model contain different attention information [29]. As shown in Fig. 2, in the low layers, neurons reflect intense activation; in the middle layers, neurons' activation tends to focus on recognizable areas, such as feet or eyes; in the top layers, neurons' activation reflects the entire object.
Recent ANN-to-SNN knowledge distillation studies mainly focus on minimizing the distance between the output distributions of a large ANN and a small SNN [21, 26], which fails to achieve satisfactory performance because too little information is available for distillation. Generally, as signals pass through each convolution layer, a streamlined convolutional network outputs various attention maps containing different levels of attention information. In order to achieve minimal loss, we divide the learned attention knowledge into three levels, from low to high, to distill attention information sufficiently from the teacher ANN, and we supervise the transfer losses with the designed loss function (equation (7) or (12)).
### _Three-stage Training Process of LaSNN_
The three-stage LaSNN training pipeline is illustrated in Fig. 1 (b). Firstly, a teacher (cumbersome) ANN is trained with bias terms and batch normalization [30].
Secondly, a single smaller ANN (the intermediate ANN) is converted into the student SNN with IF neurons (the architecture is described in the network architectures subsection). For the conversion, we use threshold balancing, where the weights are kept unchanged and the thresholds are normalized by the maximum pre-activation [13].
Thirdly, after the training of the teacher model and the conversion of the student model are finished, a layer-wise scheme is adopted for transferring the knowledge of the teacher ANN to the student SNN, with the distillation strategy and loss function designed as described in the subsections above. The student SNN is optimized with error back-propagation, utilizing a linear surrogate gradient to approximate the discontinuous gradient [31]. The pseudo-derivative is described as:
\[\frac{\partial o_{i}^{t}}{\partial v_{i}^{t}}=max\left\{0,1-|\theta|\right\} \tag{14}\]
In order to achieve stable performance for deep SNNs, we introduce a decay term \(\gamma<1\) (typically \(\gamma=0.3\)) that dampens the increase of back-propagated errors through spikes:
\[\frac{\partial o_{i}^{t}}{\partial v_{i}^{t}}=\gamma\ max\left\{0,1-|\theta|\right\} \tag{15}\]
Note that gradient propagation itself is not blocked, so gradients can propagate through many time steps under the dynamic threshold. The weight update is calculated as:
\[\Delta\omega_{ij}=\sum_{t}\frac{\partial\mathcal{L}_{total}}{\partial\omega_{ ij}}=\sum_{t}\frac{\partial\mathcal{L}_{ce}}{\partial\omega_{ij}}+\sum_{t} \frac{\partial\mathcal{L}_{at}}{\partial\omega_{ij}} \tag{16}\]
\[\frac{\partial\mathcal{L}_{ce}}{\partial\omega_{ij}}=\sum_{t}\frac{\partial\mathcal{L}_{ce}}{\partial o_{i}^{t}}\frac{\partial o_{i}^{t}}{\partial v_{i}^{t}}\frac{\partial v_{i}^{t}}{\partial\omega_{ij}} \tag{17}\]
where \(\partial o_{i}^{t}/\partial v_{i}^{t}\) is a non-differentiable term, and we approximate it with the linear surrogate gradient (Equation 15). The main training process of the student SNN is described in algorithm 1.
```
1:# Forward propagation
2:for\(t=1\) to \(T\)do
3:\(O_{0}^{t}\gets Poisson\ Encoder(X)\)
4:for\(l=1\) to \(L-1\)do
5:if\(isinstance(St_{l},[Conv,Linear])\)then
6:# Accumulate the weighted output sum of the previous layer
7:\(V_{l}^{t}=\lambda V_{l}^{t-1}+W_{l}O_{l-1}^{t}-V_{l}^{th}\times O_{l}^{t-1}\)
8:# Generate spike when \(V>V_{th}\)
9:\(O_{l}^{t}\gets Surrogate\ Gradient(V_{l}^{t},V_{l}^{th},t)\)
10:if\(O_{l}^{t}==1\)then
11:# Save spike times (StT)
12:\(StT_{l}^{t}=t\)
13:endif
14:elseif\(isinstance(St_{l},AvgPool)\)then
15:\(O_{l}^{t}=St_{l}(O_{l-1}^{t})\)
16:elseif\(isinstance(St_{l},Dropout)\)then
17:\(O_{l}^{t}=Dropout*O_{l-1}^{t}\)
18:endif
19:endfor
20:\(V_{L}^{t}=\lambda V_{L}^{t-1}+W_{L}O_{L-1}^{t}\)
21:endfor
22:# Backward Propagation: compute \(\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}V_{L}}\) by Surrogate Gradient
23:for\(t=T\) to 1do
24:for\(l=L-1\) to 1do
25:\(\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}V_{L}^{t}}=\frac{\mathrm{d}\mathcal{L}}{ \mathrm{d}O_{l}^{t}}\frac{\mathrm{d}O_{l}^{t}}{\mathrm{d}V_{l}^{t}}=\frac{ \mathrm{d}\mathcal{L}}{\mathrm{d}O_{l}^{t}}\times\gamma\ max\left\{0,1-|V_{l} ^{th}|\right\}\)
26:endfor
27:endfor
```
**Algorithm 1** Overall training algorithm
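As a concrete illustration of equation (15), the surrogate spike function can be written as a custom autograd function in PyTorch. Equation (15) is stated in terms of the threshold alone; the sketch below follows the common convention of evaluating the surrogate at the distance of the membrane potential from the threshold, which we assume is the intent — a hedged sketch rather than the authors' exact code.

```python
import torch

class LinearSurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; damped linear surrogate
    gradient, as in equation (15), in the backward pass."""

    @staticmethod
    def forward(ctx, v, threshold=1.0, gamma=0.3):
        ctx.save_for_backward(v)
        ctx.threshold, ctx.gamma = threshold, gamma
        return (v > threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pseudo-derivative: gamma * max(0, 1 - |v - threshold|), which
        # decays linearly away from the firing threshold.
        surrogate = ctx.gamma * torch.clamp(
            1.0 - (v - ctx.threshold).abs(), min=0.0)
        return grad_output * surrogate, None, None
```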
## IV Experiments
### _Data sets and Settings_
We evaluate the performance of the LaSNN framework on CIFAR-10, CIFAR-100, and Tiny ImageNet data sets.
* **CIFAR-10**: The data set contains \(60,000\) labeled images of \(10\) categories, divided into training \((50,000)\) and testing \((10,000)\) sets. The images are of size \(32\times 32\) with three RGB channels.
* **CIFAR-100**: The data set is similar to CIFAR-10 except that it contains \(100\) categories.
* **Tiny ImageNet**: The data set is the subset of ImageNet containing \(200\) categories. Each class has \(500\) training images, \(50\) validation images and \(50\) testing images. The images are of size \(64\times 64\) with RGB channels.
Both the teacher ANN and the intermediate ANN used for conversion to the student SNN are trained from scratch with a batch size of \(256\) and the SGD optimizer. The learning rate is \(0.01\), and the weight decay is \(0.0005\). The teacher ANN reaches convergence after \(300\) epochs, while the intermediate ANN reaches convergence after \(200\) epochs. The time step is set to \(2500\) during the conversion process. After conversion, the LaSNN framework is used to train the student SNN. We use the Adam optimizer [32] when training the student SNN with layer-wise distillation (the proposed work) and with label-based distillation [26], setting the learning rate to \(0.0001\) and the weight decay to \(0.0005\). The weighting term \(\alpha\) for the attention losses is set to \(0.9\), following Hinton et al. [16]. For the student SNN, the number of training epochs is \(100\), the mini-batch size is \(16\), and the number of time steps is \(100\). All performances are evaluated on an NVIDIA GeForce RTX 3090 GPU with 24 GB of memory.
### _Performance and Comparison_
We evaluate the proposed LaSNN framework with activation-based and gradient-based distillation on the CIFAR-10, CIFAR-100, and Tiny ImageNet data sets. All the teacher models use widely adopted ANN architectures (DenseNet121, VGG16, and ResNet20), and the student models use relatively shallow, ANN-like architectures. For simplicity, we first employ activation-based distillation in the comparison experiments, and then explore the effects of activation-based versus gradient-based distillation.
#### Iv-B1 Comparison with Non-distillation Algorithms
In order to demonstrate the effectiveness of our LaSNN framework, LaSNN is compared to three main non-distillation SNN training algorithms under the same conditions on the CIFAR-10 data set: the ANN-to-SNN conversion algorithm, the hybrid training algorithm, and the calibration algorithm.
As shown in Table I, on the CIFAR-10 data set, LaSNN achieves higher performance than ANN-to-SNN conversion models with relatively shallow structures and comparable performance to converted SNNs with deep architectures. Notably, LaSNN is significantly more efficient than ANN-to-SNN conversion methods for both shallow and deep architectures (e.g., VGG16). The advanced accuracy and efficiency of LaSNN are attributed to its layer-wise distillation process, which uses the attention scheme to represent information in both teacher ANNs and student SNNs.
Then, we further investigate the effectiveness of our LaSNN framework by carefully comparing LaSNN with the hybrid training algorithm [24]. Detailed results are summarized in Table II: LaSNN is significantly better than hybrid-trained SNNs, not only for shallow structures but also for deep architectures, across different data sets (CIFAR-10, CIFAR-100, and Tiny ImageNet). In other words, by forcing the student SNN to imitate the attention maps of a cumbersome (teacher) ANN, the performance of the student SNN can be significantly improved. Notably, the improvement achieved by LaSNN grows as tasks become more complex (e.g., on the Tiny ImageNet data set), illustrating that our LaSNN framework may be promising for more challenging tasks.
In addition, we compare the performance of LaSNN and the calibration algorithm on various networks. As shown in Fig. 3, the inference accuracy of SNNs decreases as the network structures become shallower, but the classification accuracy of LaSNN is significantly more stable than that of the calibration algorithm for all networks. In contrast, the performance of the calibration algorithm drops markedly for small architectures, illustrating that our approach scales across SNN sizes better than the calibration algorithm.
#### Iv-B2 Comparison with Distillation Algorithms
In this case, we compare our approach with two distillation algorithms (label-based distillation [26] and spike-based distillation [25]) to evaluate the efficacy of our LaSNN framework.
Compared with LaSNN, ANN-to-SNN distillation only distills label-based knowledge of the last layer from pre-trained ANNs to SNNs. The results are shown in Table II. On all three data sets, LaSNN shows significant improvements and reaches performance comparable to ANNs; the corresponding improvements are 0.62% to 1.38% on CIFAR-10, 0.69% on CIFAR-100, and 1.95% on the Tiny ImageNet data set.

Fig. 3: Performance comparison of the LaSNN framework and the SNN calibration algorithm on the CIFAR-10 data set.
The spike-based knowledge distillation algorithm constructs a three-dimensional matrix as the spiking activation tensor, which is used to represent and transfer knowledge from the outputs of a teacher SNN to a student SNN [25]. As shown in Table III, LaSNN shows remarkable improvements in inference performance compared to distilling spikes with similar small architectures on the CIFAR-10 data set. Along with the spiking activation tensor, the spike-based distillation method uses a sliding window to accumulate the losses over the total number of time steps (\(T\)) during the distillation process, consuming substantial computational resources. Moreover, Kushawaha et al. [25] employed a multi-stage distillation scheme [33] to improve the classification accuracy of student SNNs, which increases resource consumption further. In contrast, our LaSNN framework employs a hybrid training scheme that overcomes the inherent low-performance and high-latency problems of deep SNNs. In addition, attention-based knowledge contains much more helpful information than one-hot labels.
The above results emphasize the validity of distilling attention information from ANNs to SNNs with the layer-wise scheme, which is essential for improving the performance of SNNs.
### _Effect of Distillation Strategies and Different Teacher ANNs_
In this part, ablation experiments are conducted to demonstrate the importance of layer-wise distillation in recognition performance. Then, we evaluate the effectiveness of two attention-based distillation strategies, i.e., activation-based attention and gradient-based attention. Finally, we evaluate the impact of using different teacher ANNs.
#### Iv-C1 Effectiveness of Layer-wise Distillation
In order to show the importance of the layer-wise distillation scheme, we first conduct an ablation experiment with various network structures (ResNet12, VGG7, and VGG16) on the CIFAR-10, CIFAR-100, and Tiny ImageNet data sets. Fig. 1 (b) describes the two processes employed in the training pipeline of student SNNs: an SNN model is initialized from an intermediate ANN by the parameter normalization method, and the initialized SNN is then optimized through the layer-wise distillation scheme. In our experiment, we compare LaSNN with two models: 1) the model without LaSNN, i.e., the initial converted SNN; and 2) the model without the layer-wise distillation scheme, i.e., the initial converted SNN directly trained with the same surrogate gradient. For simplicity, we employ the activation-based distillation scheme in LaSNN, as in the comparison experiments above.
Fig. 4 shows the effect of the layer-wise distillation scheme on accuracy. When the non-LaSNN training scheme is employed, the training algorithm degrades into an ANN-to-SNN conversion algorithm. Likewise, when the non-layer-wise scheme is employed, the training algorithm degrades into the hybrid training method, resulting in decreased performance for all three networks over the three data sets.
Notably, as shown in Fig. 4, employing the layer-wise scheme increases accuracy significantly on the Tiny ImageNet task. This indicates that the layer-wise distillation scheme is promising for more challenging tasks.
#### Iv-C2 Effectiveness of Two Attention-based Distillation Strategies
To check whether distilling knowledge from activation-based attention is more beneficial than from gradient-based attention, we train three networks with different depths (VGG5, VGG9, ResNet12) on the CIFAR-10 data set, employing the two attention-based distillation schemes. Deterministic algorithms and a fixed random seed are adopted in the experiments. Both the activation-based and gradient-based schemes are evaluated under the same experimental settings (details are provided in the experiments section). Accuracy results are shown in Fig. 5 (a). Similar to activation-based attention, employing the gradient-based attention scheme improves performance. However, under the same training settings, the gradient-based scheme achieves weaker performance than the activation-based scheme.
In addition, as the gradient-based scheme needs to calculate gradient maps of the teacher ANN and spiking activation maps of the student SNN, LaSNN with the gradient-based scheme consumes more computing and memory resources, as shown in Fig. 5 (b). More specifically, one epoch of gradient-based (activation-based) training with the VGG5\(/\)VGG9\(/\)ResNet12 structures takes \(76/117/60\) (\(34/56/76\)) minutes and \(7.13/9.59/7.51\) (\(4.89/6.23/4.71\)) GB of GPU memory, respectively.
We find that the most resource-intensive operation is calculating the SAM, which requires computing the activation maps at every time step and accumulating the values to evaluate the contribution of previous spikes to the current neural state. Thus, we additionally trained the same student SNNs (VGG5\(/\)VGG9\(/\)ResNet12) with 30 time steps. Although they achieve weaker performance than the same distillation models with 100 time steps, the memory requirements are reduced to the same level as the activation-based scheme, as shown in Fig. 5 (b). In the future, we plan to explore more efficient gradient-based attention for ANN-to-SNN distillation, because it is so far unclear how gradient-based attention transfers in the form of spikes.
#### Iv-C3 Performance Using Different Teacher ANNs
It has been reported that knowledge distillation suffers severe performance losses when the structures and depths of the teacher and student networks differ [17]. Thus, we investigate the performance of LaSNN with various \(ANN/SNN\) pairs on the CIFAR-10 data set. The detailed results of all combinations are given in Table IV. We chose three widely used ANNs (DenseNet121, VGG16, and ResNet20) as teachers. The VGG5 and VGG9 student SNNs achieve the same performance across teacher ANNs with different structures (the DenseNet121 and ResNet20 teachers) and different depths (the VGG16 teacher). Although the ResNet12 student SNN achieves different performances with different teacher ANNs, its accuracy effectively increases compared to the non-LaSNN training scheme (84.82%) and reaches a level comparable to the ANN (92.46%). The above results highlight the potential of LaSNN to effectively transfer knowledge from various ANNs to SNNs. More importantly, LaSNN is flexible for training deep SNNs; thus, it is not necessary to train corresponding assistant networks during the distillation process [17, 25].
### _Analysis of Input Encoding and Computational Cost_
In this section, we investigate the impact of input encoding and compare the computing efficiency of the LaSNN framework with that of other SNN training methods.
#### Iv-D1 Analysis of Input Encoding
In rate coding, such as Poisson encoding, the analog values indicate the neurons' firing rates, while each neuron outputs a binary value (0 or 1) at each time step. Specifically, an analog value of 0.3 indicates a neuron firing spikes during 30% of the total time steps. Thus, obtaining high accuracy requires an enormous number of time steps, which leads to high inference latency. Recent studies on state-of-the-art SNNs employ the direct encoding scheme to reduce inference latency [10, 34].
Fig. 4: Ablation experiments of the LaSNN framework on different data sets.

Fig. 5: Performance, training time, and memory cost comparison of the two attention-based distillation schemes on the CIFAR-10 data set. (a) Comparison of performance. (b) Comparison of training time (curve) and memory cost (bar graph) for the two attention-based schemes. The abbreviation Grad-based T-30 (T-100) stands for the gradient-based attention method with 30 (100) time steps.

In this part, we analyze the impact of two different input encoding methods on the average spike rate. We train two different student SNNs with VGG5, VGG9, and ResNet12 structures and two input encoding methods: 1) Poisson rate encoding; 2) direct input encoding. Firstly, the SNNs are trained under the same conditions (such as teacher-student pairs, learning rate, and the number of iterations) to evaluate performance. Next, the same student SNNs with different input encoding schemes, learning from the same teacher ANN, are trained to similar accuracy on the CIFAR-10 data set to evaluate the average spike rate. As shown in Fig. 6, the performance of the VGG5 and VGG9 student SNNs with direct input encoding is higher than that of the corresponding SNNs with Poisson input encoding (\(88.55\%/89.66\%\) vs. \(86.68\%/87.58\%\)). Although the ResNet12 student SNN with direct input encoding achieves lower performance than with Poisson input encoding (\(90.56\%\) vs. \(91.34\%\), shown in Fig. 6), it reaches a level close to the hybrid training and ANN-to-SNN distillation methods (\(90.70\%\) and \(90.72\%\), in Table II). The results indicate that LaSNN is feasible for direct input encoding.
In addition, the student SNNs with Poisson input encoding and 100 time steps achieve average spikes of \(111.25/97.16/133.37\) (after evaluating 2000 samples from the CIFAR-10 test set, we sum the spikes over all time steps and then divide by the total number of neurons to calculate the average number of spikes). As shown in Fig. 6, when we replace Poisson input encoding with direct input encoding, the inference latency is reduced to \(30/50\) time steps. With Poisson input encoding, the firing rate is proportional to the input pixel value. In contrast, with direct input encoding, the pixel value is fed into the first layer as the input current, and spikes are generated by the IF or LIF spiking neurons. Therefore, direct input encoding reduces the number of time steps required to encode the input.
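The average-spike statistic described above can be computed as in the short sketch below; the tensor layout is an illustrative assumption.

```python
import torch

def average_spikes(spike_record: torch.Tensor) -> float:
    """spike_record: (num_samples, T, num_neurons) binary spike tensor.
    Sums spikes over all time steps, divides by the number of neurons,
    and averages over the evaluated samples."""
    num_neurons = spike_record.shape[-1]
    total_per_sample = spike_record.sum(dim=(1, 2))
    return (total_per_sample / num_neurons).mean().item()
```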
#### Iv-D2 Analysis of Computational Cost
Since the energy consumption of a single spike in an SNN is constant [12], the fundamental energy consumption analysis depends on the number of spikes as well as the total number of time steps. The result in Fig. 7 illustrates the average number of spikes in every convolutional layer of SNNs with the VGG7 architecture after evaluating 2000 samples from the CIFAR-100 test set. The average number of spikes is compared for a converted SNN, a distillation SNN, and the LaSNN framework; the more intense the spiking activity, the more energy is consumed. Under the same conditions, including inputs, time steps, threshold voltages, and others, the LaSNN framework produces fewer average spikes in most layers and obtains higher performance compared to the converted SNN and the distillation SNN.
## V Conclusion
In this work, we first abstracted an attention representation to bridge the information gap between ANNs and SNNs during the ANN-to-SNN distillation process. Then, we extended ANN-to-SNN distillation with our layer-wise scheme. Moreover, a three-stage training process was introduced to optimize the weights of the student SNN. Finally, we proposed the new LaSNN framework for ANN-to-SNN distillation, utilizing a simple and practical paradigm to transfer the learning effectively and efficiently, in contrast to distilling label-based information from the last layer as in other studies. We evaluated the performance of our new framework with various architectures and two different input encoding methods on three benchmark data sets. Experimental results demonstrated that LaSNN achieved competitive top-1 accuracy compared to ANNs and 20x faster inference than converted SNNs with similar performance. Ablation experiments illustrated that our layer-wise scheme plays a crucial role in distilling knowledge effectively and efficiently. In addition, LaSNN is practical for ANNs and SNNs with different architectures/depths and encoding methods. Accordingly, our LaSNN framework will be more beneficial for developing accurate, efficient, and scalable deep SNNs than other SNN training schemes.

Fig. 6: Effects of employing direct input encoding and Poisson input encoding on the CIFAR-10 data set. The abbreviation T stands for time steps.

Fig. 7: Energy analysis on the CIFAR-100 data set over 2000 samples. The abbreviation V stands for voltage threshold.
|
2302.10906 | Deep Neural Networks for Encrypted Inference with TFHE | Fully homomorphic encryption (FHE) is an encryption method that allows to
perform computation on encrypted data, without decryption. FHE preserves the
privacy of the users of online services that handle sensitive data, such as
health data, biometrics, credit scores and other personal information. A common
way to provide a valuable service on such data is through machine learning and,
at this time, Neural Networks are the dominant machine learning model for
unstructured data. In this work we show how to construct Deep Neural Networks
(DNN) that are compatible with the constraints of TFHE, an FHE scheme that
allows arbitrary depth computation circuits. We discuss the constraints and
show the architecture of DNNs for two computer vision tasks. We benchmark the
architectures using the Concrete stack, an open-source implementation of TFHE. | Andrei Stoian, Jordan Frery, Roman Bredehoft, Luis Montero, Celia Kherfallah, Benoit Chevallier-Mames | 2023-02-13T09:53:31Z | http://arxiv.org/abs/2302.10906v1 | # Deep Neural Networks for Encryppted Inference with TFHE
###### Abstract
Fully homomorphic encryption (FHE) is an encryption method that allows computation to be performed on encrypted data, without decryption. FHE preserves the privacy of the users of online services that handle sensitive data, such as health data, biometrics, credit scores and other personal information. A common way to provide a valuable service on such data is through machine learning and, at this time, Neural Networks are the dominant machine learning model for unstructured data.
In this work we show how to construct Deep Neural Networks (DNN) that are compatible with the constraints of TFHE, an FHE scheme that allows arbitrary depth computation circuits. We discuss the constraints and show the architecture of DNNs for two computer vision tasks. We benchmark the architectures using the Concrete stack1, an open-source implementation of TFHE.
Footnote 1: [https://github.com/zama-ai/concrete-ml](https://github.com/zama-ai/concrete-ml)
## 1 Introduction
Neural Networks (NNs) are machine learning (ML) models that have driven the recent expansion of the field of Artificial Intelligence (AI). Their performance on unstructured data such as images, sound and text is unmatched by other ML techniques. Moreover, deep NNs obviate the need for complex feature engineering and process raw data directly, making them easier to deploy in production. Applications of NNs include image classification, face recognition, voice assistants, and search engines, tools which today are a staple of the user experience online. Deployment of such models in SaaS applications raises a security risk: they are a target of malevolent entities that seek to steal the sensitive user data these models process.
Privacy-preserving technologies, such as multi-party computing (MPC) and fully homomorphic encryption (FHE), provide a solution to the risk of data leaks, eliminating it by design. Notably, FHE encrypts user data and allows a third party to process the data in its encrypted form, without needing to decrypt it. Only the data owner can decrypt the result of the computation. Thus, an attacker can only steal encrypted data they can not decrypt.
In this work we show how to build neural networks that are FHE compatible, while minimizing the cryptography knowledge needed by the machine learning practitioner. We based our work on the Concrete Library [7] which uses TFHE [6], works over integers, provides a fast _programmable_ bootstrapping mechanism, and performs exact computation.
## 2 Related work
Several alternative approaches exist for neural network inference over encrypted data. All use NNs with integer weights and activations and many of them rely on "leveled" fully homomorphic encryption schemes that do not use bootstrapping, such as CKKS [5] and YASHE [3].
CryptoNets [9] uses YASHE, which supports the computation of polynomials of encrypted values. CryptoNets are NNs quantized to integers (of 5-10 bits) with activation functions expressed as low-degree polynomials. CryptoNets achieve 99% accuracy on MNIST using a three-layer network with an inference time of 570 seconds/image.
FHE-DiNN [4], a TFHE based approach, quantizes inputs, intermediate values and weights to binary values. In this case, the training is done with hardSigmoid activation which is swapped for the sign function in inference. However, binary NNs are hard to train and do not perform well in many ML tasks such as object detection and speech processing.
Another TFHE approach, SHE [11], uses bit series representation of encrypted values and boolean gates. They run NNs that fit within a maximum multiplicative depth budget and, by avoiding expensive multi-bit PBSs, they achieve inference of a ShuffleNet on ImageNet with a latency of 18 000 seconds/image. They rely on logarithmic quantization of weights which allows to reduce multiplicative depth for the convolution layers by using bit-shifts. Sums, relu and maxpool are computed using boolean gates.
Leveled approaches such as SHE and CryptoNets are limited by the maximum multiplicative depth budget, which, in turn, limits the supported network types and their depth. Moreover, some schemes such as CKKS are approximate by design, as the noise corrupts some of the message bits.
In this work we propose an approach to train arbitrary NNs which can have any depth, number of neurons and activation functions. Furthermore, our approach performs exact computation in FHE: the noise of the encryption scheme does not corrupt the values that are processed. Thus results in FHE are the same as in the clear - there is no degradation of accuracy when moving to encrypted inference - which is a major advantage when putting models in production.
## 3 Neural Network Training for Encrypted Inference
Training NNs is usually done in floating point, but most FHE schemes, including TFHE, only support integers. Consequently, quantization must be used, and two main approaches exist:
1. Post-training quantization is commonly used [9; 11], but, in this mode, NNs lose accuracy when the quantization bit-width is lower than 7-8 bits. With per-channel quantization, or logarithmic quantization, which are more complex to implement, as few as 4 bits were used for weights and activations without loss of accuracy [14].
2. Quantization-aware Training (QAT), used in this work and in [4], is an approach that adds quantizers to network activations and weights during training. QAT enables extreme quantization with less than 4 bit weights and activations.
To support arbitrarily deep NNs and any activation function, we make use of the programmable bootstrapping mechanism [8] (PBS) of TFHE. PBS reduces the noise in the accumulators of leveled ciphertext operations (addition, multiplication with clear constants) and also allows a table lookup (TLU) to be applied to its input ciphertext.
The TFHE PBS mechanism has a rather high computational cost, and this cost depends on the number of bits of the encrypted value to be bootstrapped. It is convenient to keep the accumulator size low in order to speed up the PBS computation. However, reducing the accumulator bit-width has a negative impact on network prediction performance, so a compromise needs to be found.
We describe here a QAT strategy that can process all the intermediate encrypted values as integers. In this way, training an FHE compatible network becomes purely a machine learning problem and no cryptography knowledge is needed by the practitioner. To build a TFHE compatible NN, the constraints on the network architecture are the following:
* All layers that sum or multiply two encrypted values, such as convolution conv and fully-connected fc, must have quantized inputs. This is easily achieved using QAT frameworks.
* The bit-width of the accumulators of layers such as conv, fc must be bounded. To achieve this, we use pruning.
To control the accumulator bit-width while keeping the training dynamics stable, we use \(L^{1}\)-norm unstructured pruning. Figure 1 shows the impact of pruning on the accumulator size for two quantization modes: narrow and wide range.
While the inputs of conv and fc layers need to be quantized, it is possible to use floating point layers for all univariate operations such as batch normalization, quantization, and activations.
In our FHE-compatible NNs, the outputs of a conv or fc layer are processed by a sequence of univariate operations that ends with quantization. This sequence of functions takes integers and has integer outputs, but the intermediary computations in these operations can use float parameters. Thus, batch normalization, activation functions, neuron biases and any other univariate transformation of conv or fc outputs do not need quantization. Figure 1 shows the architecture of the network during training and inference.
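To make this concrete, the sketch below shows how such a quantization-aware block could be written with Brevitas on top of PyTorch; the bit-widths and layer sizes are illustrative assumptions, and the exact layer classes used by the authors are not specified here.

```python
import torch.nn as nn
import brevitas.nn as qnn

class QATBlock(nn.Module):
    """Quantized conv -> float batch-norm -> quantized activation.

    Only the conv inputs/weights are quantized; the univariate ops in
    between (batch-norm, activation, re-quantization) may stay in float,
    since they are later fused into a single table lookup (PBS).
    """

    def __init__(self, in_ch: int, out_ch: int, bits: int = 2):
        super().__init__()
        self.conv = qnn.QuantConv2d(
            in_ch, out_ch, kernel_size=3, padding=1,
            weight_bit_width=bits, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)          # float univariate op
        self.act = qnn.QuantReLU(bit_width=bits)  # re-quantizes the output

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))
```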
## 4 Neural Network Inference using TFHE
Inference of our FHE NNs is based on quantized implementations of NN operators that add or multiply together encrypted values. Convolutional, fully connected and average pooling layers use the quantized formulation from [10]. Since uniform quantization is used, we can define a quantized value \(r\) as \(r=S(q-Z)\) where \(S\) is the quantization scale, \(Z\) is the quantization zero-point and \(q\) is the integer representation of the value. Next, the fully connected layer, with inputs \(x\), weights \(w\) and outputs \(o\), with per-tensor quantization parameters \((S_{x},Z_{x})\), \((S_{w},Z_{w})\) can be written as:
\[S_{o}(q_{o}^{k}-Z_{o})=\sum_{i=0}^{N}S_{x}(q_{x}^{i}-Z_{x})S_{w}(q_{w}^{(i,k)}- Z_{w})+b^{k} \tag{1}\]
where \(k\) is the index of a neuron in the layer, \(N\) is the number of connections of the neuron, and \(b^{k}\) is the bias of the \(k\)-th neuron. A convolutional layer can be expressed by extending the sum to the height, width and channel dimensions. Equation 1 can be re-written as Equation 2 to separate integer and floating-point computations (note that the zero-points \(Z_{x},Z_{o},Z_{w}\) are integers).
\[q_{o}^{k}=b^{k}+Z_{o}+\frac{S_{x}S_{w}}{S_{o}}\sum_{i=0}^{N}(q_{x}^{i}-Z_{x})( q_{w}^{(i,k)}-Z_{w}) \tag{2}\]
Therefore, we can separate the equation into a floating-point univariate function \(f\) and a sum over products of encrypted inputs and clear weights:

\[q_{o}^{k}=f(\Sigma)\ \ \mbox{where}\ \ f(q)=b^{k}+Z_{o}+\frac{S_{x}S_{w}}{S_{o}}q\ \ \mbox{and}\ \ \Sigma=\sum_{i=0}^{N}(q_{x}^{i}-Z_{x})(q_{w}^{(i,k)}-Z_{w}) \tag{3}\]

Figure 1: Left: accumulator size while varying the number of active neurons during pruning for a 3-layer fully-connected network with 2 bit weights and activations. Two quantization modes are shown: Narrow range uses values \([-2^{b-1}+1,2^{b-1}-1]\), while Wide range uses \([-2^{b-1},2^{b-1}-1]\). Right: the structure of a 2 layer convolutional network in training and during inference. Univariate layers are fused to table-lookups, implemented with PBS.
The univariate function \(f\) in eq. 3 takes integer inputs. We compose this function with the batch-normalization, and, finally, with the quantization function \(Q(x)=floor\left(\frac{x}{S_{x}}\right)+Z_{x}\). Thus \(f\) becomes a function defined on \(\mathbb{Z}\), with values in \(\mathbb{Z}\) and can be implemented as a lookup table with a PBS in FHE, without any loss of precision.
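In the clear, the integer-only evaluation of equation (3) together with the fused table lookup can be sketched as follows; the scales, bit-width, and function names are illustrative assumptions, the final quantizer is folded into a simple floor for brevity, and in actual FHE execution the Python lookup would be performed by a PBS.

```python
import numpy as np

def make_tlu(bias, z_out, scale, acc_bits=8):
    """Tabulate the univariate function f of equation (3), composed with
    quantization, over the accumulator's signed integer range."""
    qs = np.arange(-2**(acc_bits - 1), 2**(acc_bits - 1))
    vals = np.floor(bias + z_out + scale * qs).astype(np.int64)
    return qs, vals

def quantized_fc_neuron(q_x, q_w, z_x, z_w, tlu_domain, tlu_values):
    """Integer sum of products (the encrypted part in FHE), followed by
    the fused table lookup that a PBS would perform. Assumes the
    accumulator stays within the tabulated range."""
    acc = int(np.sum((q_x.astype(np.int64) - z_x)
                     * (q_w.astype(np.int64) - z_w)))
    return tlu_values[np.searchsorted(tlu_domain, acc)]
```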
The complete NN computation can now be expressed over integers using the following operations: multiplication of an encrypted value and a clear constant, sums of encrypted integer values, table lookup of encrypted integer values. In our implementation of TFHE, Concrete, we encode integers in two different ways: integers up to 8 bits are encoded into a single ciphertext, and integers between 9-16 bits are encoded with a CRT representation into several ciphertexts as described in [2]. This contrasts to previous works, such as [11], that encode each bit of an integer as an individual ciphertext and use boolean gates to build arithmetic circuits.
An automated optimization process [2] determines the cryptographic parameters of the circuit, based on several factors: (1) the _circuit bit-width_, defined as the minimum bit-width necessary to encode the largest integer value obtained anywhere in the NN's integer-based evaluation, (2) the maximum 2-norm of the integer weight tensors of the layers, and (3), the desired probability of error of the PBS. The optimization process determines the cryptosystem parameters (LWE dimension, polynomial size, GLWE dimension, etc.) to ensure a fast execution, the target probability of failure and the security level (using the lattice-estimator [1]). We set the PBS error probability sufficiently low to ensure full correctness of the results, i.e. the results in the clear are always the same as those in FHE, up to a user-defined error-rate, e.g. \(10^{-6}\), for one full NN inference.
## 5 Experimental Results
The networks were implemented in PyTorch with Brevitas [13] and converted to FHE with Concrete-ML [12]. We ran experiments on two datasets with several neural network architectures, in two quantization modes (see Figure 1, left). The test machine had an Intel i7-11800H CPU with 8 cores, and we used 16 threads for the experiments.
## 6 Conclusion
Our approach to encrypted inference for Neural Networks shows several advantages over other methods. First, we believe our method is easier to use than other works, since the problem of making an FHE compatible network becomes strictly an ML problem and no cryptography knowledge is needed. Second, the computations in FHE are correct with respect to the computations in the clear and, using TFHE, noise does not corrupt the encrypted values. Thus, once a network is trained incorporating the quantization constraints, the accuracy that is measured on clear data will be the same as that on encrypted data. Finally, our approach, using PBS, shows competitive accuracies in FHE and allows to convert arbitrary depth networks using any activation function to FHE. Networks up to 9 layers were shown, but deeper NNs can easily be implemented.
Preliminary code for the MNIST classifier is available7 and code for the CIFAR10 classifier will be released soon.
Footnote 7: [https://github.com/zama-ai/concrete-ml/tree/release/0.5.x/use_case_examples](https://github.com/zama-ai/concrete-ml/tree/release/0.5.x/use_case_examples)
Many possible strategies can be employed to improve upon this work, in order to support larger models, such as ResNet, on larger data-sets like ImageNet. For example, a better pruning strategy could decrease the PBS count, per-channel quantization can improve accuracy, and faster step functions in FHE could improve the overall speed.
|
2301.04012 | Quantum Multi-Agent Actor-Critic Neural Networks for Internet-Connected
Multi-Robot Coordination in Smart Factory Management | As one of the latest fields of interest in both academia and industry,
quantum computing has garnered significant attention. Among various topics in
quantum computing, variational quantum circuits (VQC) have been noticed for
their ability to carry out quantum deep reinforcement learning (QRL). This
paper verifies the potential of QRL, which will be further realized by
implementing quantum multi-agent reinforcement learning (QMARL) from QRL,
especially for Internet-connected autonomous multi-robot control and
coordination in smart factory applications. However, the extension is not
straightforward due to the non-stationarity of classical MARL. To cope with
this, the centralized training and decentralized execution (CTDE) QMARL
framework is proposed under the Internet connection. A smart factory
environment with the Internet of Things (IoT)-based multiple agents is used to
show the efficacy of the proposed algorithm. The simulation corroborates that
the proposed QMARL-based autonomous multi-robot control and coordination
performs better than the other frameworks. | Won Joon Yun, Jae Pyoung Kim, Soyi Jung, Jae-Hyun Kim, Joongheon Kim | 2023-01-04T04:28:39Z | http://arxiv.org/abs/2301.04012v1 | Quantum Multi-Agent Actor-Critic Neural Networks for Internet-Connected Multi-Robot Coordination in Smart Factory Management
###### Abstract
As one of the latest fields of interest in both academia and industry, quantum computing has garnered significant attention. Among various topics in quantum computing, variational quantum circuits (VQC) have been noticed for their ability to carry out quantum deep reinforcement learning (QRL). This paper verifies the potential of QRL, which will be further realized by implementing quantum multi-agent reinforcement learning (QMARL) from QRL, especially for Internet-connected autonomous multi-robot control and coordination in smart factory applications. However, the extension is not straightforward due to the non-stationarity of classical MARL. To cope with this, the centralized training and decentralized execution (CTDE) QMARL framework is proposed under the Internet connection. A smart factory environment with the Internet of Things (IoT)-based multiple agents is used to show the efficacy of the proposed algorithm. The simulation corroborates that the proposed QMARL-based autonomous multi-robot control and coordination performs better than the other frameworks.
Quantum deep learning, multi-agent reinforcement learning, quantum computing, robot control, smart factory
## I Introduction
In various Industry 4.0 scenarios, the automated and autonomous management of smart factory systems is getting a lot of attention nowadays [2, 3, 4, 5, 6, 7, 8]. For the automation of factory management, the use of multiple autonomous mobile robots is widely studied [9, 10, 11]. According to the Verizon Report [12], _Industry 4.0 is squarely underway in manufacturing. The global market is expected to reach $219.8 billion by 2026, and autonomous mobile robots are becoming key workhorses in this transformation_. To realize efficient and effective autonomous multi-robot control and coordination, multi-agent reinforcement learning (MARL)-based algorithms are essentially required [13, 14].
Recently, revolutionary innovations have been made in distributed learning and MARL due to the remarkable evolution of computing hardware and deep learning algorithms [14]. Moreover, developments in quantum computing hardware and algorithms have placed further emphasis on this trend [15], incentivizing research on quantum machine learning. Nowadays, quantum machine learning is still in its infancy compared to conventional machine learning. For instance, in classification tasks, the performance of quantum machine learning on the MNIST dataset is low, at 32.5% top-1 accuracy on real quantum computers [16] and 74.2% on ideal quantum machines [17]. However, the theoretically discovered advantages (_i.e., quantum supremacy_) have recently begun to be proven experimentally [18, 19, 20]. The potential of quantum algorithms is evident from their ability to downsize model parameters while maintaining accuracy by exploiting quantum entanglement [21]. In addition, the empirical results of [22] show that quantum machine learning can outperform classical machine learning. An outstanding example of this is the variational quantum circuit (VQC) architecture, also known as a quantum neural network (QNN) [23, 24]. A QNN is a quantum circuit that reproduces the function of a classical deep neural network. By combining QNNs and classical deep learning models, hybrid quantum-classical models are built, which allow QRL to be carried out. Compared to RL, QRL uses fewer model parameters yet significantly reduces training and inference time [25, 26], while also consuming fewer computing resources [27]. Thus, it is clear that quantum machine learning using quantum computing will become a major trend in the near future. This paper aims to combine VQC with classical MARL to extend QRL to quantum MARL (QMARL).
The agents in the MARL environment interact with each other by either cooperating or competing. This interaction is realized based on Internet-of-Things (IoT) connectivity technologies. These interactions result in a non-stationary reward for each agent, which hinders the convergence of MARL training. The centralized training and decentralized execution (CTDE) method is used [28] to deal with the non-stationarity of the MARL model. In this scenario, the reward is distributed to all agents concurrently by concatenating their state-action pairs. A naive implementation of a VQC version of CTDE is possible, as shown in [1]. However, such an implementation causes the number of qubits to increase with the number of agents, because the state-action pairs are represented by qubits when QRL is carried out via VQC. Consequently, quantum errors will also increase with the number of qubits [29], significantly
affecting the MARL convergence and scalability. Furthermore, quantum error correction is not yet viable in the current noisy intermediate-scale quantum (NISQ) era.
This paper intends to improve on various pre-existing methods of implementing VQCs, agent policies, and state encoding [25, 30, 31]. Three significant differences exist between our proposed VQC and previous works: parameter sharing, a non-random VQC design, and 2-variables dense encoding. Firstly, parameter sharing refers to sharing model parameter values between agents. All the agents in previous works had individual, distinct policies, meaning that more agents required more policies, resulting in excess computing power consumption to formulate them. In our improved model, there is only one policy shared among the agents, increasing computing efficiency. The second improvement is the non-random VQC design. The VQCs used in previous works are composed of randomly selected quantum gates. Although their performance is remarkable, it cannot be easily reproduced because of this randomness: the same model might not show the same performance in another iteration because of the random quantum gates. We improve on this by designing a fixed circuit and removing the randomness of the previous VQCs, which ensures the reproducibility and stability of the model. Finally, the proposed model utilizes the 2-variables dense encoding method instead of the 4-variables dense encoding method. The original encoding method reduces the dimensions of the given data; although this may suit NISQ-era quantum circuits, it inevitably causes a loss of information. The proposed 2-variables dense encoding method does not reduce the data dimensions but is still compatible with NISQ-era quantum circuits. Thus, information loss is prevented, which improves the performance of this model.
**Contributions.** The major contributions of this research are summarized as follows.
* This paper first provides a quantum-based MARL solution for autonomous multi-robot control and coordination in smart factory applications.
* An improved and novel CTDE QMARL framework which utilizes parameter sharing on policy, VQC design, and 2-variables dense encodings is additionally proposed.
* Lastly, via extensive experiments, the proposed QMARL framework is proven to be superior to the classical MARL model by carrying out simulations in smart factory scenarios. The results show that the proposed model produces higher performance than the others.
**Organization.** The rest of this paper is organized as follows. The preliminaries of this paper are described in Sec. II. Our considering autonomous mobile robots coordination for smart manufacturing is described in Sec. III. Sec. IV introduces our proposed algorithm; and the numerical results and demonstration of the proposed algorithm are shown in Sec. V. Sec. VI concludes this paper and presents future work. Note that the notations in this paper are listed in Table I. Most equations and notations used here are based on the _Dirac_ notations used in [32].
## II Preliminaries of Quantum Computing
**Single Qubit Quantum State.** QC utilizes a _qubit_ as the basic unit of computation. The qubit represents a quantum superposition state between two basis states, denoted as \(|0\rangle\) and \(|1\rangle\). There are two ways to describe a qubit state,
\[|\psi\rangle=\alpha|0\rangle+\beta|1\rangle, \tag{1}\]
where \(\|\alpha\|_{2}^{2}+\|\beta\|_{2}^{2}=1\), as well as,
\[|\psi\rangle=\cos\left(\frac{\delta}{2}\right)|0\rangle+e^{i\varphi}\sin\left( \frac{\delta}{2}\right)|1\rangle, \tag{2}\]
where \(\delta\in[-\pi,\pi]\) and \(\varphi\in[-\pi,\pi]\). The former is based on a normalized 2D complex vector, while the latter is based on polar coordinates \((\delta,\varphi)\) from a geometric viewpoint. The qubit state is mapped into the surface of a 3D unit sphere (_Bloch sphere_). In addition, a quantum gate is a unitary operator transforming a qubit state into another qubit state, which is represented as a \(2\times 2\) matrix with complex entries. The single-qubit Pauli gates \(X\), \(Y\), and \(Z\) are defined as follows,
\[X=\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\quad Y=\begin{bmatrix}0&-i\\ i&0\end{bmatrix},\quad Z=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}. \tag{3}\]
There are additional quantum gates that are frequently used, i.e., \(R_{X}\), \(R_{Y}\), and \(R_{Z}\). These rotation operator gates rotate a single qubit by \(\delta\) around their corresponding axes of the Bloch sphere, and the single-qubit operations can be expressed as the following equations,
\[R_{X}(\delta)=e^{-i\frac{\delta}{2}X},\quad R_{Y}(\delta)=e^{-i\frac{\delta}{2 }Y},\quad R_{Z}(\delta)=e^{-i\frac{\delta}{2}Z}, \tag{4}\]
where the rotation angle is denoted as \(\delta\in[0,2\pi]\). These basic Pauli and rotation gates are unitary matrices, \(U^{\dagger}U=I\), where \(I\) denotes the identity matrix.
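To make the gate algebra above concrete, the following minimal NumPy sketch (our own illustration, not code from this paper) builds the Pauli matrices and the rotation gates in closed form, using \(e^{-i\frac{\delta}{2}\Gamma}=\cos\frac{\delta}{2}I-i\sin\frac{\delta}{2}\Gamma\) for \(\Gamma\in\{X,Y,Z\}\), which holds because \(\Gamma^{2}=I\), and then checks unitarity:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation(gamma, delta):
    # e^{-i (delta/2) Gamma} = cos(delta/2) I - i sin(delta/2) Gamma, since Gamma^2 = I
    return np.cos(delta / 2) * I - 1j * np.sin(delta / 2) * gamma

delta = 0.7
for gamma in (X, Y, Z):
    R = rotation(gamma, delta)
    assert np.allclose(R.conj().T @ R, I)  # unitarity: U† U = I
```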
\begin{table}
\begin{tabular}{c|l} \hline \hline \multicolumn{2}{c}{Scenario Notations} \\ \hline \(N\) & The number of AMR agents \\ \(M\) & The number of sites/warehouses \\ \(T\) & An episode length \\ \(o^{n}\) & The observation of \(n\)-th AMR agent \\ \(a^{n}\) & The action of \(n\)-th AMR agent \\ \(\mathbf{a}\) & The action set of AMR agents, i.e., \(\mathbf{a}=\{a^{n}\}_{n=1}^{N}\) \\ \(s\) & The ground truth state \\ \(c_{t}^{W,m}\) & The load status of \(m\)-th warehouse at time \(t\) \\ \(c_{t}^{A,n}\) & The load status of \(n\)-th AMR agent at time \(t\) \\ \(c_{\max}^{W}\) & The load capacity of a warehouse \\ \(c_{\max}^{A}\) & The load capacity of an AMR agent \\ \hline \multicolumn{2}{c}{Quantum Computing Notations} \\ \hline \(|\psi\rangle\) & Entangled quantum state \\ \(\langle O\rangle\) & Observable \\ \(\Gamma\) & Pauli-\(\Gamma\) gate, e.g., \(\Gamma\in\{X,Y,Z\}\) \\ \(R_{\Gamma}\) & Rotation \(\Gamma\) gate, e.g., \(\Gamma\in\{X,Y,Z\}\) \\ \((\cdot)^{\dagger}\) & Complex conjugate (Hermitian) operator \\ \(\mathcal{M}\) & Measurement operator \\ \hline \hline \end{tabular}
\end{table} TABLE I: List of notations
**Multi-Qubit Quantum State.** A multi-qubit system enables fast quantum computing due to quantum superposition. Well-known quantum algorithms (e.g., Shor's algorithm [33] and Grover search [34]) are based on multi-qubit systems. A quantum state with \(q\) qubits is denoted as \(|\mathbf{\psi}\rangle=|\psi_{1}\rangle\otimes|\psi_{2}\rangle\otimes\cdots\otimes|\psi_{q}\rangle=\sum_{n=0}^{2^{q}-1}\alpha_{n}|n\rangle\), where \(\otimes\), \(\alpha_{n}\), and \(|n\rangle\) stand for the tensor product, the \(n\)-th probability amplitude, and the \(n\)-th basis state of the \(q\)-qubit quantum state, respectively. Note that the sum of the squared magnitudes of the probability amplitudes equals 1, _i.e._, \(\sum_{n=0}^{2^{q}-1}|\alpha_{n}|^{2}=1\)[32]. To realize quantum entanglement, there are quantum gates that operate on multiple qubits, called controlled rotation gates. They act on a qubit according to the state of several control qubits, which generates quantum entanglement between those qubits. Among them, the _Controlled-\(X\)_ (or CNOT) gate \(\textit{CX}=\begin{bmatrix}I&0\\ 0&X\end{bmatrix}\) is one of the most widely used controlled gates. These multi-qubit gates allow quantum algorithms to exploit these features in VQCs, which will eventually be utilized for QMARL.
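As a small illustration of the multi-qubit formalism (again our own sketch), the following builds a two-qubit state with a tensor product, entangles it with the _CX_ gate, and verifies that the squared amplitudes sum to 1:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)          # superposed single-qubit state |+>

psi = np.kron(plus, ket0)                  # |psi> = |+> tensor |0>
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=complex)
bell = CX @ psi                            # (|00> + |11>)/sqrt(2), an entangled state

assert np.isclose(np.sum(np.abs(bell) ** 2), 1.0)  # sum_n |alpha_n|^2 = 1
```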
## III Autonomous Mobile Robots Coordination for Smart Manufacturing
### _Design of an Autonomous Mobile Robot System_
An automated guided vehicle (AGV) is a portable robot that travels along lines or wires marked on the floor or navigates using radio waves, vision cameras, magnets, or lasers. AGVs are widely used in industrial applications to transport heavy materials around large industrial facilities such as factories and warehouses, so they are clearly essential to smart factory management. The autonomous mobile robot (AMR) differs from the AGV in that it has various sensors that enable autonomous localization and exploration by detecting surrounding static and dynamic objects. AMR paths are generated in real time based on static and dynamic obstacles, so that an AMR can travel freely without a predefined path. While such a system is more flexible, real-time path generation poses additional challenges that fleet management systems (FMSs) must deal with, such as dispatching transport orders, routing vehicles, and scheduling task execution. Note that the AMRs are tightly coupled, which leads to high computational complexity. For example, performance suffers when considering all possible AMR paths, even when the numbers of AMRs and transfer orders are relatively small. As a result, centralized AMR fleet management and order execution optimization are often not performed in real time. Therefore, the use of MARL algorithms is widely considered and studied [35]. For further performance improvement, QMARL can additionally be utilized, as we discuss in this paper.
### _Automated LCD Smart Factory with Multiple AMRs_
Thanks to its physical properties, QC has been shown to potentially save many orders of magnitude in energy consumption compared to classical supercomputers [36], and regarding QRL, recent studies show quantum supremacy [22]. In this paper, we consider a liquid crystal display (LCD) smart factory system which utilizes DC-based AMRs. As shown in Fig. 1, a color thin-film transistor (TFT) LCD panel consists of two glass substrates: a TFT array substrate and a color filter substrate. TFT LCD panels are fabricated by a combination of five processes: the TFT array process, color filter process, repair process, cell fabrication process, and module assembly process. The first two processes (i.e., the TFT array and color filter processes) are carried out at Site A; the two substrates are then carried by AMRs, and the rest of the process is carried out at Site B. Each AMR serves flexible areas that require decisions, with the service area divided into several zones. In the process of manufacturing TFT LCDs, various defective LCDs can occur, and these should not be used. Therefore, to prevent the usage of such defective LCDs, the AMRs must identify the defective products and request a quality verification of the LCD. Techniques for detecting defects among LCDs have already been developed and implemented in smart factories [37, 38, 39]. By using the _precision_ parameter proposed in the works above, the AMRs recognize defective LCDs and unload them at a separate collection point dedicated to defects.
In this paper, we assume that all AMRs have optimal trajectory planners and charging schedulers, such as those in [40, 41]. Thus, our proposed QMARL model must plan the trajectory of each AMR such that the defective LCDs are separated while the normal products are properly unloaded. For communication, the QDL server is wire-linked to every site, and each site is wirelessly connected with its AMRs. Since the packet size is small and the transmit power is sufficient in the LCD smart factory, we assume that packet loss is negligible. The QDL server receives observations from the AMRs, reconfigures the state, and finally transmits action decisions to the AMRs. For flexible manufacturing, AMRs should be properly scheduled to load goods, unload goods, and control the quality of LCDs. Moreover, the decision-making process of scheduling and dispatching these resources is essential for optimal utilization and high AMR productivity.
**Problem Definition and Formulation.** In this situation, a quantum deep learning (QDL) server supports the decision-making process for efficiently scheduling material handling systems, under the fundamental concept of the CTDE-based QMARL framework. Specifically, the QDL server makes distributed and sequential decisions for each AMR to determine their goods (i.e., the number of goods to carry in each AMR and requesting quality control) for eliminating the overflow and underflow of delivering goods in each AMR.
## IV Quantum Multi-Agent Actor-Critic Network for Autonomous Multi-Robot Coordination
### _Fundamental MDP Formulation_
The autonomous multi-robot coordination scenario in a smart factory environment that we consider consists of \(M\) sites and \(N\) AMR agents. The smart factory environment is mathematically modeled as a POMDP (see Sec. IV-B). Hereafter, we give the description in terms of the \(m\)-th site, the \(n\)-th agent, and time step \(t\).
#### IV-A1 Load Dynamics
Each site has a warehouse whose load capacity is \(c_{\max}^{\text{W}}\). In addition, the load status of each AMR agent is bounded by the maximum capacity \(c_{\max}^{\text{A}}\). AMR agents receive goods (e.g., LCD panels or TFTs) from other AMRs. In this paper, we denote the received load weights as \(b_{t}^{\text{A},n}\), which follow the uniform distribution \(b_{t}^{\text{A},n}\sim\mathcal{U}(0,w_{\textit{load}}\cdot b_{\max})\), \(\forall n\). The warehouses and AMR agents have loading statuses \(c_{t}^{\text{W},m}\) and \(c_{t}^{\text{A},n}\), which represent the temporarily loaded goods. All AMR agents carry their goods to warehouses. The dynamics are as follows,
\[c_{t+1}^{\xi,n}=\textit{clip}(c_{t}^{\xi,n}-a_{t}^{\xi,n}+b_{t}^{\xi,n},0,c_{ \max}^{\xi}), \tag{5}\]
where \(\xi\in\{\text{W},\text{A}\}\) identifies a warehouse or an AMR agent. The terms \(a_{t}^{\xi,n}\) and \(b_{t}^{\xi,n}\) denote the total delivered goods weight and the received goods weight of the \(m\)-th warehouse or \(n\)-th AMR agent, respectively. Note that \(a_{t}^{A,n}\) is the \(n\)-th AMR agent's action. In addition, the clipping function is defined as \(\textit{clip}(x,x_{\min},x_{\max})\triangleq\min(x_{\max},\max(x,x_{\min}))\).
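A minimal sketch of the load dynamics in Eq. (5) is given below; the numeric values are hypothetical and only illustrate the clipping behavior:

```python
def clip(x, x_min, x_max):
    # clip(x, x_min, x_max) := min(x_max, max(x, x_min))
    return min(x_max, max(x, x_min))

def next_load(c_t, a_t, b_t, c_max):
    # Eq. (5): deliver a_t, receive b_t, keep the load within [0, c_max]
    return clip(c_t - a_t + b_t, 0, c_max)

print(next_load(c_t=120, a_t=60, b_t=18, c_max=500))  # 78 kg remains on the AMR
```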
#### IV-A2 Quality Control
We assume that the loads have been classified by a preceding defect detection process. In the defect detection process, four types are assigned to the loads (i.e., true positives, false positives, false negatives, and true negatives), which yield the quality statistics (_i.e._, precision, recall, and F-score). This paper considers the load status in terms of these quality statistics (_e.g._, precision), which are given to the AMR agents. The AMR agents can make action decisions for re-requesting quality control of the load. If a load is requested for quality control, it undergoes quality verification by quality engineers. We assume that quality engineers can detect all defects on a load perfectly. However, the quality re-assurance process via quality engineers requires an additional time delay of \(\tau_{\textit{qual}}\).
#### IV-A3 Utility Design
We design utilities for quality, time delay, and load balancing. First of all, AMR agents receive goods of the true positive (_TP_) or false positive (_FP_) type. The precision metric represents the positive predictive value, which is written as follows:
\[u_{t}^{q,n}=\frac{\textit{TP}_{t}^{n}}{\textit{TP}_{t}^{n}+\textit{FP}_{t}^{n }}, \tag{6}\]
where \(\textit{TP}_{t}^{n}=\sum_{\textit{load}\in l_{t}^{n}}\mathbbm{1}(\textit{load}=\textit{TP})\) and \(\textit{FP}_{t}^{n}=\sum_{\textit{load}\in l_{t}^{n}}\mathbbm{1}(\textit{load}=\textit{FP})\) stand for the true positives and false positives of the \(n\)-th AMR, respectively. Note that \(l_{t}^{n}\) denotes the whole load defect status of the \(n\)-th AMR. Regarding the delay, we measure the processing time. Thus, the delay utility of the \(n\)-th AMR agent is written as follows:
\[u_{t}^{d,n}=-\Big{(}1+\tau_{\textit{qual}}\cdot q_{t}^{n}\Big{)}, \tag{7}\]
where \(q_{t}^{n}\) denotes the quality control action. If \(q_{t}^{n}=1\), the loads are conveyed to quality engineers; otherwise, the loads are conveyed to the other site. Finally, load balancing aims to minimize the total amount of overflowed load and the events where the load is empty. Thus, the utility for load balancing is written as follows:
\[u_{t}^{b,n}=\mathbbm{1}_{(c_{t+1}^{A,n}=0)}\cdot\tilde{c}_{t}^{A,n}+\mathbbm{1}_{(c_{t+1}^{A,n}=c_{\max}^{A})}\cdot\bar{c}_{t}^{A,n} \tag{8}\] \[u_{t}^{W,m}=\mathbbm{1}_{(c_{t+1}^{W,m}=0)}\cdot\tilde{c}_{t}^{W,m}+\mathbbm{1}_{(c_{t+1}^{W,m}=c_{\max}^{W})}\cdot\bar{c}_{t}^{W,m} \tag{9}\]
where \(\tilde{c}_{t}^{\xi,k}=|c_{t}^{\xi,k}-a_{t}^{\xi,k}+b_{t}^{\xi,k}|\) denotes the unclipped load and \(\bar{c}_{t}^{\xi,k}=|c_{\max}^{\xi}-\tilde{c}_{t}^{\xi,k}|\) the excess over capacity, for \(\xi\in\{\text{W},\text{A}\}\). Note that \(r(s_{t},\textbf{a}_{t})\in[-\infty,0]\) (negative) because this paper treats the occurrence of an abnormal loading status (e.g., load overflow or underflow) as a negative utility. The objective is to maximize the total precision and to minimize the total delay and the overflow/underflow events.
### _POMDP Setup_
This subsection introduces the formal definition of POMDP, i.e., a stochastic decision-making model under uncertainty among agents [42]; our proposed QMARL is mathematically modeled with this fundamental concept of POMDP.
Fig. 1: System model: Liquid crystal display panel manufacturing using quantum multi-agent reinforcement learning
Note that a POMDP is defined as a tuple \(\langle\mathcal{N},\mathcal{S},\mathcal{A},P,r,\mathcal{Z},O,\rho,\gamma,T\rangle\). The sets of states and observations are represented as \(\mathcal{S}\) and \(\mathcal{Z}\), respectively. \(\mathcal{N}:=\{1,\cdots,N\}\) and \(s\in\mathcal{S}\) denote the set of \(N\) agents and the current state of the environment, respectively. The initial state \(s_{0}\sim\rho\) follows the distribution \(\rho\). The action of the \(n\)-th agent \(a^{n}\in\mathcal{A}\) may be discrete or continuous, and the joint action is denoted as \(\mathbf{a}:=\{a^{n}\}_{n=1}^{N}\). The transition is determined by the probability function \(P(s^{\prime}|s,\mathbf{a}):\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\), where \(s^{\prime}\) denotes the next state. The shared reward \(r_{t}=r(s_{t},\mathbf{a}_{t}):\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is given to all agents. In a Dec-POMDP, the true state \(s\) is not directly given to the agents. Each agent \(n\in\mathcal{N}\) receives an observation \(z^{n}\in\mathcal{Z}\) from the observation function \(O(s,a):\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{Z}\). We consider that all agents share a parameter-shared policy denoted as \(\pi_{\theta}\). Thus, the policy \(\pi_{\theta}\) takes the \(n\)-th agent's observation \(z_{t}^{n}\in\mathcal{Z}\) and decides the \(n\)-th agent's action as \(\pi_{\theta}(a|z^{n}):\mathcal{Z}\times\mathcal{A}\rightarrow[0,1]\). The objective of the POMDP is to obtain the optimal policy \(\pi_{\theta}^{*}=\arg\max_{\pi_{\theta}}\mathbb{E}_{\mathbf{a}\sim\pi_{\theta}}[\sum_{t=1}^{T}\gamma^{t-1}\cdot r_{t}]\), where \(\gamma\in[0,1]\) and \(T\) denote the discount factor and the finite time horizon, respectively. Based on this definition, we design the POMDP as follows:
#### IV-B1 Observation
Each AMR agent partially observes the environment. Because the parameter-shared policy \(\pi_{\theta}\) is used, the observation contains a binary agent-index vector \(\mathbf{n}_{2}\). In addition, the \(n\)-th AMR agent makes its action decision with its loading status \(c_{t}^{A,n}\) and the current loading statuses of the warehouses \(\{c_{t}^{W,m}\}_{m=1}^{M}\). In summary, the \(n\)-th agent's observation is defined as \(z_{t}^{n}\triangleq\{\mathbf{n}_{2},c_{t}^{A,n}\}\cup\{c_{t}^{W,m}\}_{m=1}^{M}\).
#### IV-B2 State
A state variable containing the loading statuses of all AMR agents and warehouses is designed. The state variable at time \(t\) is \(s_{t}=\{c_{t}^{A,n},u_{t}^{d,n}\}_{n=1}^{N}\cup\{c_{t}^{W,m}\}_{m=1}^{M}\). Note that the state information is utilized as the input of the quantum critic network.
#### IV-B3 Action
Each AMR agent can choose the warehouse to which it conveys goods, where the destination space is defined as \(\mathcal{I}\triangleq\{1,\cdots,M\}\). In addition, AMR agents can determine the quantity conveyed to the warehouse. The conveying quantity space and the quality control space are defined as \(\mathcal{P}\triangleq\{p_{\min},\cdots,p_{\max}\}\) and \(\mathcal{Q}\triangleq\{0,1\}\), respectively. Finally, the \(n\)-th AMR agent's action and its action space are defined as \(a_{t}^{n}:=(i_{t}^{n},p_{t}^{n},q_{t}^{n})\in\mathcal{A}\equiv\mathcal{I}\times\mathcal{P}\times\mathcal{Q}\).
#### IV-B4 Reward
The objective of the POMDP is to minimize the total amount of overflowed load and the events where the load is empty. Thus, the reward \(r(s_{t},\mathbf{a_{t}})\) is defined as follows,
\[r(s_{t},\mathbf{a_{t}})=\sum_{n=1}^{N}(u_{t}^{q,n}+w_{d}\cdot u_{t}^{d,n}+w_{b}\cdot u_{t}^{b,n})+w_{W}\cdot\sum_{m=1}^{M}u_{t}^{W,m}, \tag{10}\]
where \(w_{d}\), \(w_{b}\), and \(w_{W}\) stand for the reward coefficients for time delay and for the load balancing of AMR agents and sites, respectively.
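The shared reward of Eq. (10) can be sketched as follows; the coefficients \((w_{d},w_{b},w_{W})=(0.1,1,10)\) are taken from Table III, while the utility values in the call are hypothetical:

```python
def shared_reward(u_q, u_d, u_b, u_W, w_d=0.1, w_b=1.0, w_W=10.0):
    # Eq. (10): per-agent utilities plus weighted per-warehouse balancing terms
    agent_part = sum(q + w_d * d + w_b * b for q, d, b in zip(u_q, u_d, u_b))
    site_part = w_W * sum(u_W)
    return agent_part + site_part

# hypothetical utilities for N = 2 agents and M = 2 warehouses
print(shared_reward(u_q=[0.9, 0.8], u_d=[-1.0, -4.0],
                    u_b=[0.0, -3.0], u_W=[0.0, -2.0]))
```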
### _Quantum Multi-Agent Actor-Critic Network Design_
#### IV-C1 State Encoding Circuit
The state encoding circuit is leveraged to feed a state input forward. Fig. 2 presents the two state encoder schemes; Fig. 2(a)/(b) need one gate or two gates per qubit, respectively. Although an encoding system shows the best performance when the number of qubits equals the number of input variables, the number of input variables in RL (i.e., the state dimension) is typically larger than the number of available qubits [26]. Thus, this paper considers two state encoders tailored to the environment, as follows,
\[U_{\textit{enc}}^{a}(z) =\Big{[}\otimes_{k=1}^{K}\left(R_{Y}(x_{k}^{z})\right)\Big{]}\cdot|0\rangle^{\otimes K}, \tag{11}\] \[U_{\textit{enc}}^{c}(s) =\Big{[}\otimes_{k^{\prime}=1}^{K^{\prime}}\left(R_{Y}(x_{2k^{\prime}}^{s})\cdot R_{X}(x_{2k^{\prime}-1}^{s})\right)\Big{]}\cdot|0\rangle^{\otimes K^{\prime}}, \tag{12}\]
where \(x_{k}^{z}\) and \(x_{k^{\prime}}^{s}\) stand for the \(k\)-th entry of observation \(z\) and the \(k^{\prime}\)-th entry of state \(s\), respectively. Note that \(U_{\textit{enc}}^{a}(z)\) and \(U_{\textit{enc}}^{c}(s)\) denote the actor observation encoder and the critic state encoder, which work in \(K\)- and \(K^{\prime}\)-qubit systems, respectively.
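A sketch of the two encoders in Eqs. (11)-(12) is shown below using PennyLane (the paper itself uses torchquantum; the qubit count here is an arbitrary illustration). Note how the critic encoder packs two state entries per qubit, which is the 2-variables dense encoding:

```python
import pennylane as qml
import numpy as np

K = 4  # illustrative number of qubits
dev = qml.device("default.qubit", wires=K)

def actor_encoder(z):
    # Eq. (11): one R_Y per qubit, one observation entry per qubit
    for k in range(K):
        qml.RY(z[k], wires=k)

def critic_encoder(s):
    # Eq. (12): 2-variables dense encoding, R_X then R_Y per qubit
    for k in range(K):
        qml.RX(s[2 * k], wires=k)
        qml.RY(s[2 * k + 1], wires=k)

@qml.qnode(dev)
def encoded_state(s):
    critic_encoder(s)
    return qml.state()

print(encoded_state(np.linspace(0.1, 0.8, 2 * K)).shape)  # (2**K,) amplitudes
```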
#### IV-C2 Parameterized Circuit and Quantum Measurement
A parameterized circuit is a quantum circuit that performs numerical tasks such as estimation, optimization, approximation, and classification using learnable parameters. As shown in Fig. 3(a), the VQC block consists of rotation gates along different axes and the _Controlled-Z_ gate, i.e., \(R_{X}\), \(R_{Y}\), \(R_{Z}\), and \(\textit{CZ}=\begin{bmatrix}I&0\\ 0&Z\end{bmatrix}\). Note that _CZ_ is used to entangle qubits. To improve the circuit's performance, this paper configures the parameterized circuit with multiple VQC blocks, which requires additional trainable parameters \(\theta\), as shown in Fig. 3. To obtain the desired outputs, the measurement \(\mathcal{M}\) is leveraged, which calculates the expected value of the superposed quantum state in the computational basis. In summary, the observable (i.e., expected value) is written as follows:
\[\langle O\rangle_{x,\theta}=\Big{\{}\langle 0|U_{\textit{enc}}^{\dagger}(x)U_{\textit{VQC}}^{\dagger}(\theta)MU_{\textit{VQC}}(\theta)U_{\textit{enc}}(x)|0\rangle\Big{\}}_{M\in\mathcal{M}}, \tag{13}\]
where \(\langle O\rangle_{x,\theta}\) is the output of the VQC with input \(x\) and circuit parameters \(\theta\), and \(\mathcal{M}\) is the set of quantum measurement bases in the VQC, with \(|\mathcal{M}|\leq n_{qubit}\).
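Putting the encoder, the parameterized blocks, and the measurement of Eq. (13) together, a hedged PennyLane sketch of one VQC follows; the block and qubit counts are assumed for illustration and do not reproduce the paper's exact circuit:

```python
import pennylane as qml
import numpy as np

n_qubits, n_blocks = 4, 2           # assumed sizes for illustration
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqc(x, theta):
    for k in range(n_qubits):        # U_enc(x): R_Y state encoding
        qml.RY(x[k], wires=k)
    for b in range(n_blocks):        # U_VQC(theta): multi-block parameterized circuit
        for k in range(n_qubits):
            qml.RX(theta[b, k, 0], wires=k)
            qml.RY(theta[b, k, 1], wires=k)
            qml.RZ(theta[b, k, 2], wires=k)
        for k in range(n_qubits - 1):
            qml.CZ(wires=[k, k + 1])  # CZ entangles neighboring qubits
    # measurement set M: one Pauli-Z expectation per qubit, |M| <= n_qubits
    return [qml.expval(qml.PauliZ(k)) for k in range(n_qubits)]

theta = np.random.uniform(0, 2 * np.pi, size=(n_blocks, n_qubits, 3))
print(vqc(np.array([0.1, 0.2, 0.3, 0.4]), theta))
```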
Fig. 3: The illustration of the parameterized circuit.
Fig. 2: The illustration of the state encoder.
#### IV-C3 Implementation on Quantum Actor-Critic
The proposed QMARL for a smart factory is decentralized for scalability. Every AMR agent in the QMARL has a VQC-based policy, i.e., the agents do not require communication among themselves. The observables of the actor/critic are as follows,
\[\langle O_{a}\rangle_{o,\theta} =\Big{\{}\langle 0|U_{\textit{enc}}^{a\dagger}(o)U_{\textit{VQC}}^{a\dagger}(\theta)MU_{\textit{VQC}}^{a}(\theta)U_{\textit{enc}}^{a}(o)|0\rangle\Big{\}}_{M\in\mathcal{M}_{a}}, \tag{14}\] \[\langle O_{c}\rangle_{s,\phi} =\Big{\{}\langle 0|U_{\textit{enc}}^{c\dagger}(s)U_{\textit{VQC}}^{c\dagger}(\phi)MU_{\textit{VQC}}^{c}(\phi)U_{\textit{enc}}^{c}(s)|0\rangle\Big{\}}_{M\in\mathcal{M}_{c}}. \tag{15}\]
**Quantum Actor.** For the quantum actor, the observable of (14) is used to calculate the probabilities of actions of each AMR agent. Then, the quantum policy is written via a softmax function of its observable,
\[\pi_{\theta}(a_{t}|z_{t})=\textit{softmax}(\beta_{a}\langle O_{a}\rangle_{z_{t }^{n},\theta}), \tag{16}\]
where
\[\textit{softmax}(\textbf{x})\triangleq\Bigg{[}\frac{e^{x_{1}}}{\sum_{i=1}^{ N}e^{x_{i}}};\cdots;\frac{e^{x_{N}}}{\sum_{i=1}^{N}e^{x_{i}}}\Bigg{]} \tag{17}\]
and \(\beta_{a}\) is the scaling factor for the actor observable. At time \(t\), the actor policy of the \(n\)-th agent makes an action decision with the given observation \(o_{t}^{n}\), denoted as \(\pi_{\theta}(a_{t}^{n}|o_{t}^{n})\). Note that \(\theta\) denotes the parameters of the shared actor. Then, the action \(a_{t}^{n}\) is computed as follows,
\[a_{t}^{n}=\arg\max_{a}\pi_{\theta}(a|o_{t}^{n}), \tag{18}\]
and note all agents use the same policy by parameter sharing.
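Eqs. (16)-(18) reduce to a softmax over the scaled observables followed by an argmax; a minimal sketch (with \(\beta_{a}=3\) as in Table III, and hypothetical observable values) is:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))        # numerically stabilized Eq. (17)
    return e / e.sum()

def select_action(actor_observables, beta_a=3.0):
    # Eq. (16): policy = softmax(beta_a * <O_a>); Eq. (18): greedy action
    pi = softmax(beta_a * np.asarray(actor_observables))
    return int(np.argmax(pi)), pi

action, pi = select_action([0.2, -0.5, 0.1, 0.4, -0.1])
print(action, pi)
```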
**Quantum Centralized Critic.** The centralized critic is adopted for CTDE as a state-value function. At time \(t\), the parameterized critic estimates the discounted return given \(s_{t}\) as follows:
\[V_{\phi}(s_{t})=\beta_{c}\langle O_{c}\rangle_{s_{t},\phi}\simeq\mathbb{E}\Big{[}\sum_{t^{\prime}=t}^{T}\gamma^{t^{\prime}-t}\cdot r(s_{t^{\prime}},\textbf{a}_{t^{\prime}})\,\Big{|}\,s_{t}=s\Big{]}, \tag{19}\]
where \(\gamma\in[0,1)\), \(T\), \(\textbf{a}_{t}\), \(\beta_{c}\), and \(r(s_{t^{\prime}},\textbf{a}_{t^{\prime}})\) stand for the discount factor, the episode length, the actions of all agents, the scaling factor for the critic observable, and the reward function given state \(s_{t^{\prime}}\) and action \(\textbf{a}_{t^{\prime}}\), respectively. In addition, \(\phi\) denotes the trainable parameters of the critic. Here, \(s_{t}\) is the ground truth state at time \(t\).
### _Training Algorithm_
The objective of the MARL agents is to maximize the discounted returns. To derive the gradients for this maximization objective, we leverage the joint state-value function \(V_{\phi}\). The actor and \(V_{\phi}\) are trained with a multi-agent policy gradient (MAPG) and a temporal-difference loss, which are formulated as follows,
\[\nabla_{\theta}\mathcal{L}_{\textit{actor}} =-\mathbb{E}_{\textbf{a}\sim\pi_{\theta}}\Bigg{[}\sum_{t=1}^{T} \sum_{n=1}^{N}y_{t}\nabla_{\theta}\!\log\pi_{\theta}(a_{t}^{n}|z_{t}^{n}) \Bigg{]}, \tag{20}\] \[\nabla_{\phi}\mathcal{L}_{\textit{critic}} =\nabla_{\phi}\!\sum_{t=1}^{T}\left\|y_{t}\right\|^{2}, \tag{21}\]
subject to
\[y_{t}=r(s_{t},\textbf{a}_{t})+\gamma V_{\phi^{\mathrm{T}}}(s_{t+1})-V_{\phi}( s_{t}), \tag{22}\]
where \(\phi^{\mathrm{T}}\) is the parameters of target critic network. Note that (20) and (21) are for following the parameter-shift rule [43], written as follows:
\[\frac{\partial\mathcal{L}_{\textit{actor}}}{\partial\theta_{i}} =\frac{\partial\mathcal{L}_{\textit{actor}}}{\partial\pi_{\theta}}\frac{\partial\pi_{\theta}}{\partial\langle O\rangle_{o,\theta}}\Big{[}\langle O\rangle_{o,\theta+\frac{\pi}{2}\textbf{e}_{i}}-\langle O\rangle_{o,\theta-\frac{\pi}{2}\textbf{e}_{i}}\Big{]}, \tag{23}\] \[\frac{\partial\mathcal{L}_{\textit{critic}}}{\partial\phi_{j}} =\frac{\partial\mathcal{L}_{\textit{critic}}}{\partial V_{\phi}}\frac{\partial V_{\phi}}{\partial\langle O\rangle_{s,\phi}}\Big{[}\langle O\rangle_{s,\phi+\frac{\pi}{2}\textbf{e}_{j}}-\langle O\rangle_{s,\phi-\frac{\pi}{2}\textbf{e}_{j}}\Big{]}, \tag{24}\]
where \(\textbf{e}_{i}\) and \(\textbf{e}_{j}\) stand for the \(i\)-th and \(j\)-th standard bases of the parameter vectors \(\theta\) and \(\phi\), respectively. Note that the two leftmost partial derivatives are computed by classical computing, while the bracketed term is obtained by quantum computing. The detailed training procedure is presented in **Algorithm 1**.
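The bracketed quantum terms in Eqs. (23)-(24) come from the parameter-shift rule (here with a \(\pi/2\) shift); the following sketch evaluates it for a toy observable where the rule is exact, so it can be checked against the analytic gradient:

```python
import numpy as np

def parameter_shift_grad(expectation, theta, shift=np.pi / 2):
    # d<O>/d theta_i = ( <O>(theta + shift*e_i) - <O>(theta - shift*e_i) ) / 2
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        e_i = np.zeros_like(theta)
        e_i[i] = shift
        grad[i] = 0.5 * (expectation(theta + e_i) - expectation(theta - e_i))
    return grad

f = lambda th: np.cos(th[0]) * np.cos(th[1])     # toy observable
theta = np.array([0.3, 1.1])
print(parameter_shift_grad(f, theta))
print(np.array([-np.sin(0.3) * np.cos(1.1), -np.cos(0.3) * np.sin(1.1)]))  # analytic
```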
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Schemes** & **Computing method** & **\# of parameters** \\ \hline Proposed & Quantum & \(\approx 110\) \\ Comp1 & Quantum/Classical & \(\approx 110\) \\ Comp2 & Classical & \(\approx 110\) \\ Comp3 & Classical & \(\approx 40\)K \\ Comp4 & Random Walk & None \\ \hline \hline \end{tabular}
\end{table} TABLE II: The benchmark schemes.
## V Performance Evaluation
### _Experimental Setup_
To verify the effectiveness of the proposed QMARL framework for smart factory management (named Proposed), the proposed QMARL-based algorithm is compared with four baseline methods, as listed in Table II. The purposes of these numerical experiments are as follows,
* The comparative experiments with Proposed, Comp1, and Comp2 are conducted to corroborate the quantum advantages. The number of parameters is set equal for a fair comparison.
* This paper compares Proposed and Comp3 to verify that the proposed method can achieve better performance than the latest MARL technique.
* To verify the superiority of MARL, this paper compares the MARL schemes to a random walk scheme, i.e., Comp4.
* To investigate the robustness of quality control, we train the benchmark schemes in various environments with differing precision. The quality of the load is time-varying in the environment, and we corroborate the robustness of the quality control in our proposed scheme.
The simulation parameter settings are listed in Table III. Because the number of qubits used in this paper is small (at most 8 per circuit; see Table III), this paper assumes that quantum noise is negligible. Comp1 is a hybrid quantum-classical method utilizing an A2C critic structure, which was proposed and developed in another work [45]. Moreover, Comp2 and Comp3 are based on a CTDE structure, specifically the value decomposition network (VDN) [46]. For a fair comparison, we compose the neural networks of linear operations and activation functions (i.e., linear or dense layers). Python software libraries (torchquantum and pytorch) are used for deploying the VQCs and DL methods, which support GPU acceleration [16]. In addition, all experiments are conducted on a multi-GPU platform (equipped with 2 NVIDIA Titan XP GPUs using a 1405 MHz main clock and 12 GB memory) for training and inference/testing.
### _Performance of Training_
Fig. 4 presents the numerical results corresponding to the training metrics. This paper adopts total reward, precision,
\begin{table}
\begin{tabular}{l|r} \hline \hline
**Parameters** & **Values** \\ \hline The number of sites (\(M\)) & \(2\) \\ The number of AMRs (\(N\)) & \(6\) \\ The load capacity of warehouse & \(2,000\,\)kg \\ The load capacity of AMR agent & \(500\,\)kg \\ Observation dimension & \(6\) \\ Precision (Reported [39]) & \(\{61.9,95.8,97.1\}\%\) \\ Weight of TFT-LCD (Reported [44]) & \(6\,\)kg \\ Action dimension & \(5\) \\ State dimension & \(8\) \\ Episode length & \(30\,\)timestep \\ Reward coefficient \((w_{d},w_{b},w_{W})\) & \((0.1,1,10)\) \\ Time delay by quality engineers (\(\tau_{qual}\)) & \(3\,\)timestep \\ \hline Actor observable hyperparameter \(\beta_{a}\) & \(3\) \\ Critic observable hyperparameter \(\beta_{c}\) & \(35\) \\ Optimizer & \(\text{Adam optimizer}\) \\ The number of gates in \(U_{p}^{a}\) and \(U_{p}^{c}\) & \(54\) \\ The number of qubits of actor & \(8\) \\ The number of qubits of critic & \(8\) \\ Learning rate of actor & \(1\times 10^{-2}\) \\ Learning rate of critic & \(1\times 10^{-3}\) \\ Weight decay & \(1\times 10^{-5}\) \\ \hline \hline \end{tabular}
\end{table} TABLE III: The experiment parameters.
Fig. 4: Experimental results of various metrics comparing the different MARL frameworks.
processing time, and the loaded/overflowed/underflowed amounts in the AMRs/server as training metrics. As shown in Fig. 4(a), all trained benchmark schemes (_i.e.,_ Proposed, Comp1, Comp2, and Comp3) converge, each to a different expected total reward. Proposed, which utilizes VQCs for both the actor and critic networks, shows an increasing total reward from the beginning of training up to epoch \(980\); the total reward of Proposed then reaches a final value of \(-37\). Comp1 and Comp2, which share a common state-value network composed of a small number of parameters, do not evaluate their values properly, and their rewards remain between \(-205\) and \(-240\). This is lower than \(-200\), the expected total reward of a random walk. However, a classical actor and critic composed of a large number of parameters \((\approx 40\)K\()\) show performance similar to the proposed scheme (_e.g.,_ a \(\pm 10\) difference in total reward).
In Proposed and Comp3, policy evaluation and improvement are trained so as to increase the total reward. However, Comp1 and Comp2, whose classical critic networks have few parameters, are trained (i.e., the actor loss and critic loss are reduced) but not in the direction of increasing reward; in other words, policy evaluation and improvement do not work correctly. The only difference between Proposed and Comp1 is whether the critic is quantum-based or classical, yet there is a huge difference in training performance. In addition, compared with Comp3, the number of parameters is \(364\times\) lower, whereas the performances are almost equivalent.
### _Feasibility Studies in the LCD Smart Factory Environment_
This section investigates the proposed model's performance in the LCD smart factory environment. Fig. 4 shows the results of various metrics during the training process, and Table IV shows the performance after training is finished. The results in Table IV represent the average values over \(100\) inference iterations. Fig. 4(b-c) show the average quality and processing time, respectively. Fig. 4(d-i) show the loaded, overflowed, and underflowed load amounts in the warehouses and AMRs, respectively. In this simulation, the total amounts of overflowed/underflowed load achieve the target performance if the corresponding values become 0. During the training process, the values of these two metrics (i.e., the amounts of overflowed/underflowed load) decrease for Proposed and Comp3. Therefore, it can be inferred that Proposed and Comp3 are trained in the correct direction. On the other hand, overflowed loads occur sparsely in Comp1 and Comp2; however, their underflowed load amounts are the highest, which shows that Comp1 and Comp2 learn in a direction that satisfies only one of the two goals. Furthermore, this tendency also affects the average load status of the warehouses and AMR agents. In the proposed scheme, all four indicators continuously decrease until they reach approximately 0. Hence, it is confirmed that the AMR agent has an average load of 3.6 kg in Fig. 4.
### _Impact on State Encoding Method_
According to [31, 47], state encoding is crucial for the performance of QRL. Therefore, an experiment is designed to demonstrate the importance of state encoding. This experiment aims to transform four random bits into a continuous scalar value. The output value is calculated as \(y=\sum_{i=1}^{4}x_{i}\cdot 2^{1-i}\). In this transformation process, \(\{4,2,1\}\)-variables dense encoding is carried out to compare the dense encoding methods. Note that the number of parameters in the VQCs is fixed to 50. The result is shown in Fig. 6; it is concluded that the performance of \(1\)- and \(2\)-variables dense encoding is high, while the performance of \(4\)-variables dense encoding is low. In other words, the \(2\)-variables encoding used in this paper suffers less performance degradation than the \(4\)-variables encoding technique.
### _Robustness of Quality Control_
We design an experiment to investigate the robustness of the proposed framework. To benchmark robustness, we design a smart factory environment that is time-varying and configure it in four phases. In phase 1, the precision of the load is randomly selected from \(\{61.9,95.8,97.1\}\%\), which is identical to the training environment. Note that the initial precision follows the uniform distribution \(\mathcal{U}[61.9,97.1]\%\). The quality of the LCD load carried by each AMR varies with time (e.g., \(61.9\%\), \(95.8\%\), and \(97.1\%\) for phases 2, 3, and 4, respectively). Then, the average precision is measured for 60 minutes to investigate the
\begin{table}
\begin{tabular}{c||c c c c c} \hline \hline \multicolumn{1}{c||}{**Metric**} & \multicolumn{5}{c}{**Benchmark Scheme**} \\ \multicolumn{1}{c||}{**[SI Unit of (a)-(f): kg]**} & Proposed & Comp1 & Comp2 & Comp3 & Comp4 \\ \hline (a) Avg. load status of AMR & 6.0 & 2.9 & 9.3 & 2.9 & 2.6 \\ (b) Avg. load status of server & 511 & 88 & 87 & 244 & 87 \\ (c) Avg. overflowed load in AMR & **81** & 101 & 224 & 131 & 103 \\ (d) Avg. overflowed load in server & **2.6** & 0 & 0 & 5.1 & 0 \\ (e) Avg. underflowed load in AMR & **77** & 106 & 227 & 136 & 100 \\ (f) Avg. underflowed load in server & **371** & 628 & 630 & 493 & 579 \\ \hline (g) Avg. precision of load [\%] & **92.1\%** & 90.8\% & 90.8\% & 92.2\% & 89.3\% \\ (h) Avg. processing time [Minute] & **292** & 294 & 255 & 371 & 253 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: The numerical results of various metrics corresponding to the different benchmark schemes.
robustness of quality control. The results in Fig. 5 represent the average precision over \(100\) inference iterations. During phase 1, the precision is \(88.4\%\) on average. At \(t=30\), the quality of the input load decreases, i.e., the input load's precision equals \(61.9\%\). Thus, the precision drops to \(72.4\%\), the lowest precision during the episode. In response, the AMR agents try to improve the precision value during phase 2. On the other hand, the AMR agents do not take quality control actions in phases 3 and 4, since the quality of the input load increases from \(61.9\%\) to \(97.1\%\). In summary, the robustness of quality control in our proposed scheme is corroborated by demonstrating the ability of our AMR agents to cope with the unpredictable quality of the input load.
### _Discussion_
This section provides in-depth discussions to explain why the proposed scheme outperforms the other frameworks.
#### V-F1 Expressibility of Trainable Parameters
The authors of [48] have argued that the parameters of VQCs have more expressibility for quantum neural networks than classical neural networks. A small number of trainable parameters in the RL/MARL regime is a vulnerability for classical neural networks. In [30, 31], it is shown that QRL and QMARL can achieve performance similar to classical RL/MARL. In the results of this paper, the classical neural network with few parameters yields lower performance for two reasons: 1) index embedding on the observation, and 2) the parameter-shared policy. The index embedding on agents' observations and the parameter-shared policy are utilized for faster convergence despite a possible loss in performance; furthermore, the expressibility capacity of the neural network is believed to be sufficient, which is why these two methods are used [49]. Unfortunately, the degradation in performance is significant regardless of the expressibility capacity. On the other hand, the quantum circuit operates successfully even with a small number of parameters.
#### V-F2 Dimensional Reduction Corresponding to the State Encoding
Information loss occurs when input variables are lost through dimensional reduction. In the experiments, the dimension of the input is set to four, and the number of variables encoded per qubit is set to four, two, and one for the different schemes, respectively. In 4-variables dense encoding, the information loss is severe because four independent variables are encoded using four rotation gates \(R_{Y}(x_{4})\), \(R_{Y}(x_{3})\), \(R_{Z}(x_{2})\), and \(R_{Z}(x_{1})\) on a single qubit. In the cases of 2-variables and 1-variable dense encoding, dimensional reduction does not occur. This can be shown via the encoding processes on the Bloch sphere: for 2- and 1-variable encoding, the qubits are rotated twice in two orthogonal directions (e.g., the \(y\)-axis and \(z\)-axis directions) or once in one direction, respectively. Consequently, the ranks of the resultant qubit states are guaranteed. Therefore, the 4-variables dense encoding method has the lowest performance and is outperformed by the other aforementioned methods.
## VI Concluding Remarks
This work has investigated the design of QMARL agents based on VQCs for autonomous multi-robot control and coordination in smart factory management, taking the POMDP setting into consideration. When utilizing AMRs as QMARL agents, the 2-variables dense encoding method is implemented to reduce the number of qubits in the proposed model. In addition, this paper adopts a parameter-shared policy with index embedding, which reduces the number of trainable parameters. Using the aforementioned techniques, the quantum policy and state-value function are configured as a quantum multi-agent actor-critic. Extensive numerical results show the superiority of the proposed QMARL-based AMR control in smart factory management. Finally, the proposed QMARL has an explicit performance gain when using the same number of parameters as the classical MARL algorithm, and it does not suffer from the severe dimensional reduction of data seen in other state-encoding methods.
|
2306.03902 | Utterance Classification with Logical Neural Network: Explainable AI for
Mental Disorder Diagnosis | In response to the global challenge of mental health problems, we propose a
Logical Neural Network (LNN) based Neuro-Symbolic AI method for the diagnosis
of mental disorders. Due to the lack of effective therapy coverage for mental
disorders, there is a need for an AI solution that can assist therapists with
the diagnosis. However, current Neural Network models lack explainability and
may not be trusted by therapists. The LNN is a Recurrent Neural Network
architecture that combines the learning capabilities of neural networks with
the reasoning capabilities of classical logic-based AI. The proposed system
uses input predicates from clinical interviews to output a mental disorder
class, and different predicate pruning techniques are used to achieve
scalability and higher scores. In addition, we provide an insight extraction
method to aid therapists with their diagnosis. The proposed system addresses
the lack of explainability of current Neural Network models and provides a more
trustworthy solution for mental disorder diagnosis. | Yeldar Toleubay, Don Joven Agravante, Daiki Kimura, Baihan Lin, Djallel Bouneffouf, Michiaki Tatsubori | 2023-06-06T17:58:44Z | http://arxiv.org/abs/2306.03902v1 | # Utterance Classification with Logical Neural Network:
###### Abstract
In response to the global challenge of mental health problems, we propose a Logical Neural Network (LNN) based Neuro-Symbolic AI method for the diagnosis of mental disorders. Due to the lack of effective therapy coverage for mental disorders, there is a need for an AI solution that can assist therapists with the diagnosis. However, current Neural Network models lack explainability and may not be trusted by therapists. The LNN is a Recurrent Neural Network architecture that combines the learning capabilities of neural networks with the reasoning capabilities of classical logic-based AI. The proposed system uses input predicates from clinical interviews to output a mental disorder class, and different predicate pruning techniques are used to achieve scalability and higher scores. In addition, we provide an insight extraction method to aid therapists with their diagnosis. The proposed system addresses the lack of explainability of current Neural Network models and provides a more trustworthy solution for mental disorder diagnosis.
## 1 Introduction
A mental disorder is a significant deterioration of human thinking, emotional control, or behavior, which is diagnosed clinically and can affect key areas of life. Due to the COVID-19 pandemic, the number of people who suffer from anxiety and depressive illnesses increased greatly in 2020. Initial projections indicate a 26% and 28% increase in anxiety and major depressive disorders, respectively, during the first year of the pandemic (who). Moreover, 703,000 people die by suicide every year, with many more attempting to do so. Although suicide occurs across all ages, it is alarming that in 2019 it was one of the leading causes of death among young people worldwide (Sui). Furthermore, around 24 million people, or 1 in 300 persons (0.32%), suffer from schizophrenia globally. Although it is not as common as other mental disorders, schizophrenia produces psychosis, is associated with significant disability, and may impact all aspects of life, including personal, family, social, educational, and occupational functioning (Sch).
Mental disorders are diagnosed through clinical interviews, where a therapist evaluates the mental health of the patient and identifies possible disorders based on symptoms. However, although many mental health issues can be properly treated at low cost, there is still a wide gap between those who need care and those who have access to it; despite progress in some countries, effective therapy coverage remains severely lacking. Therefore, there is a need for an AI solution that can assist therapists with the diagnosis of mental disorders.
Although current Neural Network (NN) models are powerful and can operate on a wide range of tasks, their effectiveness in mental disorder classification is questionable due to their black-box nature. In this regard, model explainability is a vital property for diagnosing mental disorders. While Neural Network models can achieve high scores, therapists may be hesitant to trust such tools and accept classification results if proper explanations are not provided. Because of the nature of NNs, it is impossible to tell whether their predictions result from robust features or from spurious clues Ribeiro et al. (2020). There have been attempts to provide interpretable insights in mental disorder diagnosis, such as using topic modeling to extract concepts Lin et al. (2023) or inferring psychological properties such as the working alliance Lin et al. (2023). Although such approaches can enable explainable AI systems for passive assistance Lin et al. (2023); Lin (2022) or interventional recommendations Lin et al. (2023); d) to the therapists, applying these insights directly to the classification problem yields suboptimal performance Lin et al. (2022). Furthermore, despite
being able to provide global explanations for their predictions (Mowery et al., 2017), traditional ML models lack scalability and are not generalizable to broader tasks.
In this regard, the Logical Neural Network (Riegel et al., 2020) might be a good solution to the problem. It is a Neuro-Symbolic AI (NSAI) method that combines the learning capabilities of neural networks with the reasoning capabilities of classical logic-based AI. The LNN is a recurrent neural network architecture in which neurons represent a precisely defined notion of weighted real-valued logic, and it has a 1-to-1 correspondence to a system of logical formulae. The main problem with this approach is that it has not yet been implemented for the supervised utterance classification task. Therefore, this work proposes an LNN-based explainable NSAI utterance classification method for mental disorder diagnosis. The model was trained with different predicate pruning techniques to achieve scalability and higher scores. The advantages of the proposed system can be summarized via the following points:
* We propose the design of a supervised NSAI method for the utterance classification task, where the input to the model is predicates from clinical interviews and the output is a mental disorder class. After training, the system outputs a weighted logical rule used to make classifications.
* We propose predicate pruning methods to improve the scalability and generalizability of the model.
* We propose an insight extraction method which can aid therapists with their mental disorder diagnosis.
This paper is organized as follows: Section II details the proposed system, Section III contains experiment results, Section IV provides discussions and future work, and the paper ends with a conclusion.
## 2 Supervised learning with LNN
Although NSAI supports data-driven training of the network, it encodes knowledge into logic rules with predicates as inputs, where a predicate represents a property or a relation. Therefore, the NSAI method requires special preprocessing of the dataset to generate predicates and data samples for training and testing purposes. The proposed system consists of two parts: an Abstract Meaning Representation (AMR) semantic parser (Zhou et al., 2021) and an LNN. Fig. 1 shows the overall pipeline of the system; the first part, containing the AMR parser, is used to convert raw text into classifier input data, and the second part is an LNN model which performs rule-based classification.
### Dataset preparation and preprocessing
Counseling and Psychotherapy Transcripts (ale) is a unique and fully anonymized online series of clinical interviews that allows students and researchers to dive deeply into the patient-therapist relationship and track the progress and setbacks of patients over multiple therapy sessions. These materials bring the mental disorder diagnosis process to life and provide unprecedented levels of access to the widest possible range of clients. Therefore, transcripts for four types of mental disorders, namely anxiety, depression, suicidal thoughts, and schizophrenia, from this dataset are used in our training and evaluation of the model. Table 1 shows the details of the dataset; in our simulations, only 12 sessions of clinical interviews per class have been used due to the computational constraints of semantic parsing. An example from a transcript is shown in Fig. 2. In our experiments, a transcript is a full clinical interview between a patient and a therapist, while an utterance represents a full response of the patient to a specific question from the therapist.
As mentioned before, the LNN requires a special data structure to function. The AMR parser is used for the generation of predicates: it extracts the semantics of an utterance and converts them into a graph, where nodes (keys) represent concepts and edges (values) represent relations to concepts. An example of an AMR representation is shown in Fig. 1. AMR representation keys and values are combined to generate predicates, as shown in Table 2.
Moreover, each training and testing sample that is input to the model is obtained by running the AMR parser over an utterance. Furthermore, a sample contains all predicates that have been mined from the dataset and
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline
**Class** & **Number of total sessions** & **Number of used session** \\ \hline _Anxiety_ & 498 & 12 \\ \hline _Depression_ & 377 & 12 \\ \hline _Suicidal_ & 12 & 12 \\ \hline _Schizophrenia_ & 71 & 12 \\ \hline \end{tabular}
\end{table}
Table 1: Details of the dataset.
the corresponding parser outputs as groundings. The values of the groundings are assigned according to the presence of the particular predicates in the parsed utterance, which means that only predicates resulting from that particular utterance are assigned a _TRUE_ grounding for that sample. In this regard, certain combinations of predicates might repeat over multiple classes, and the proposed design takes this issue into account.
### Proposed system details
The LNN is the core of the model and has only a few differences from a regular neural network. The main difference is that its neural parameters are constrained such that the truth functions of the relevant logical gates govern the behavior of the neurons. Moreover, an LNN neuron has more parameters than a dense neuron, since it keeps both upper and lower bounds on the corresponding subformula or predicate.
The proposed LNN architecture has 4 _AND_ logic gates that act as binary classifiers, one for each mental disorder class. Predicates are the inputs to the logic gates, while the model is trained on samples generated from utterances; those samples provide truth values for the formulae. After training, the model outputs a set of weights for the predicates and, for a particular input, a tensor of lower and upper bounds as a score. In our experiments, each logic gate is evaluated as a binary classifier that classifies according to some threshold; thus, the upper and lower bounds are averaged to obtain a single score. The score \(S\) for each class is obtained via the following equation:
\[S=w_{1}\:P_{1}\:(x_{1})+w_{2}\:P_{2}\:(x_{2})+..+w_{N}\:P_{N}\:(x_{N}) \tag{1}\]
where \(P\) is a predicate, \(w\) is a weight obtained from training, and \(x_{i}\) is the grounding of each predicate in a sample.
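A simplified sketch of Eq. (1) is shown below; a real LNN neuron additionally maintains lower and upper truth bounds, which this toy version collapses into a single weighted sum, and the weights and groundings are hypothetical:

```python
def class_score(weights, groundings):
    # Eq. (1): weighted sum of predicate truth values for one AND gate
    return sum(w * float(x) for w, x in zip(weights, groundings))

# hypothetical learned weights and TRUE/FALSE groundings of one utterance sample
weights = [0.9, 0.4, 0.7]
groundings = [True, False, True]
print(class_score(weights, groundings))  # 1.6
```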
The proposed system is evaluated as separate binary classification models, one per gate, using the True Positive Rate (TPR) and False Positive Rate (FPR) metrics. The TPR indicates the proportion of all available positive samples that contain correct positive results. In contrast, the FPR quantifies the proportion of available negative samples that contain incorrectly positive results.
Figure 1: The overview of the proposed system.
Figure 3: Proposed LNN architecture for mental disorder diagnosis.
Figure 2: Examples of a dataset transcript.
Moreover, the receiver operating characteristic (ROC) curve is created by plotting the TPR against the FPR at various threshold values.
\[TPR=\frac{True\:Positives}{True\:Positives\:+False\:Negatives} \tag{2}\]
\[FPR=\frac{False\:Positives}{True\:Negatives\:+False\:Positives} \tag{3}\]
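Given the averaged bound scores, Eqs. (2)-(3) can be swept over thresholds to trace the ROC curve; a minimal sketch (with synthetic scores and labels, not the paper's data) is:

```python
import numpy as np

def roc_points(scores, labels, thresholds):
    # Eqs. (2)-(3): one (FPR, TPR) point per classification threshold
    points = []
    for t in thresholds:
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        tn = np.sum(~pred & (labels == 0))
        points.append((fp / (fp + tn), tp / (tp + fn)))
    return points

scores = np.array([0.9, 0.7, 0.4, 0.3, 0.1])   # synthetic averaged bounds
labels = np.array([1, 1, 0, 1, 0])
print(roc_points(scores, labels, thresholds=[0.2, 0.5, 0.8]))
```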
### Predicate pruning methods
Predicates play a crucial role in LNN training and can greatly affect the accuracy of the model. Table 4 shows that 48 transcripts result in more than 19,000 predicates. However, according to Table 3, preliminary simulation results show that a linear increase in the number of predicates requires an exponential increase in LNN training time. Therefore, there is a need for predicate pruning methods, which help to choose the predicates that contribute the most towards a correct diagnosis. Thus, similarity-, exclusivity-, and frequency-based predicate pruning methods have been proposed to reduce the number of predicates.
_Similarity pruning_. Simulations have shown that the AMR parser returns multiple variants of values per key. Often, those values contain repeating phrases. Thus, it is possible to group all such
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Input** & \multicolumn{2}{c|}{**Predicates**} & \multicolumn{1}{c|}{**Output**} \\ \hline & \multicolumn{3}{c|}{AMR Representation} & \multicolumn{1}{c|}{} \\ \hline & Keys & Values & Output of \\ & & & Parser \\ \hline & HAS\_POSSESSION & your medication & TRUE \\ \cline{2-4} & HAS\_POSSESSION & any details & FALSE \\ \cline{2-4} & HAS\_POSSESSION & downs & FALSE \\ \cline{2-4} & HAS\_POSSESSION & just awkward thing & FALSE \\ \cline{2-4} & have & your medications & FALSE \\ \cline{2-4} & have & any details & FALSE \\ \cline{2-4} sample0 & have & downs & TRUE \\ \cline{2-4} & have & just awkward thing & FALSE \\ \cline{2-4} & talk & your medication & FALSE \\ \cline{2-4} & talk & any details & FALSE \\ \cline{2-4} & talk & downs & FALSE \\ \cline{2-4} & talk & just awkward thing & FALSE \\ \hline & HAS\_POSSESSION & your medications & TRUE \\ \cline{2-4} & HAS\_POSSESSION & any details & TRUE \\ \cline{2-4} & HAS\_POSSESSION & downs & FALSE \\ \cline{2-4} & HAS\_POSSESSION & just awkward thing & TRUE \\ \cline{2-4} & have & your medications & FALSE \\ \cline{2-4} sample1 & have & any details & FALSE \\ \cline{2-4} & have & downs & FALSE \\ \cline{2-4} & have & just awkward thing & FALSE \\ \cline{2-4} & talk & your medications & FALSE \\ \cline{2-4} & talk & any details & TRUE \\ \cline{2-4} & talk & downs & FALSE \\ \cline{2-4} & talk & just awkward thing & FALSE \\ \hline \end{tabular}
\end{table}
Table 2: LNN for supervised learning inputs and outputs – predicates, data samples and class
\begin{table}
\begin{tabular}{|c|c|} \hline
**\# of predicates** &
\begin{tabular}{c} **Training time** \\ **(s)** \\ \end{tabular} \\ \hline
710 & 4.49 \\ \hline
1415 & 16.54 \\ \hline \end{tabular}
\end{table}
Table 3: Results of training time with different number of predicates for an LNN model with 2 Logic gates.
lookalike predicates by taking one predicate that covers the possible repetitions, e.g., instead of taking both "HAS\_POSSESSION\_my sister's birthday" and "HAS\_POSSESSION\_sister's birthday", one can take only the first.
_Frequency pruning_. In traditional ML, word counts can indicate the importance of some features for a specific class. Using the same logic, it was assumed that predicates that are encountered frequently across sessions will have a higher impact on model training. Thus, the predicates have been analyzed in terms of repetitions across sessions and have been grouped according to the specified frequencies.
_Exclusive pruning_. Since transcripts are conversations between patients and therapists, there are many predicates that repeat between classes. Thus, it was conjectured that predicates belonging to only one class avoid contradictions in the model and have a higher correlation to that specific class. Therefore, predicates repeating between classes, as well as predicates that are repeated only once, have been removed.
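The frequency and exclusive pruning steps can be sketched as plain set operations; the following is our illustrative Python version with hypothetical predicate names, not the paper's exact pipeline:

```python
from collections import Counter

def frequency_prune(sessions, min_freq=2):
    # keep predicates that occur in at least `min_freq` sessions
    counts = Counter(p for session in sessions for p in set(session))
    return {p for p, c in counts.items() if c >= min_freq}

def exclusive_prune(predicates_per_class):
    # keep, per class, only predicates that no other class contains
    pruned = {}
    for cls, preds in predicates_per_class.items():
        others = set().union(*(v for k, v in predicates_per_class.items() if k != cls))
        pruned[cls] = preds - others
    return pruned

# hypothetical mined predicates
sessions = [{"have_downs", "talk_any_details"}, {"have_downs"}, {"feel_sad"}]
print(frequency_prune(sessions))                       # {'have_downs'}
print(exclusive_prune({"depression": {"feel_sad", "have_downs"},
                       "anxiety": {"have_downs", "worry_much"}}))
```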
## 3 Experiment results
In this section, experiment results for predicate pruning and LNN model evaluation are provided. Table 4 shows the number of predicates after each pruning method. The similarity pruning method prunes almost half of the original predicates. Furthermore, the exclusive and frequency pruning methods have been applied on top of the similarity pruning method. The results for exclusive pruning show that the depression class has twice as many predicates as the anxiety and suicidal classes and five times as many as the schizophrenia class. Moreover, the results for the different frequencies show that the majority of the predicates (43%) repeat just once, while the higher frequency bins contain fewer predicates.
The LNN models have been trained using the different pruning methods and compared with Deep Learning (DL) and LNN baselines. The numbers of predicates and training samples are shown in Table 5. The LNN models have been trained with a supervised loss targeting the labels, with a learning rate of 0.05, for 50 epochs. The main difference between the LNN models is in their predicates. The details of each model are summarized below:
* _DL baseline_. As a DL baseline, a pre-trained BERT (Devlin et al., 2018) model and BERT tokenizer with a maximum input sequence length of 256 have been selected for finetuning. The model has been trained for 10 epochs using the Adam optimizer with a learning rate of \(10^{-5}\).
* _LNN baseline_. The predicates for the LNN baseline have been selected randomly from the similarity-pruned predicates. The number of predicates per class varies from 340 to 380.
* _Frequency pruning models_. Several models with different frequencies have been trained to examine the effectiveness of the frequency pruning methods. \(F>Threshold\) stands for the model with predicates repeating with a frequency higher than the threshold value. The \(F>5\ balanced\) ensures that classes are balanced in terms of predicates. The remaining predicates have been chosen from a lower frequency.
* _Exclusive pruning models_. The exclusive pruning method is used in combination with similarity and frequency pruning methods. In the simulations, Frequency pruning prunes predicates that repeat only once. Then the exclusive pruning removes all the repeating predicates between classes.
According to Table 5, frequency predicates do not have a significant effect on model performance when they are applied alone, since the area
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Class**} & **Original** & **Similarity** & **Exclusive** & \multirow{2}{*}{**F=1**} & \multirow{2}{*}{**F=2**} & \multirow{2}{*}{**2\textless{}F\textless{}10**} & \multirow{2}{*}{**F\textgreater{}9**} \\ & **predicates** & **pruning** & **pruning** & & & & \\ \hline _Anxiety_ & 5529 & 2773 & 245 & 3152 & 216 & 150 & 14 \\ \hline _Depression_ & 7227 & 3532 & 472 & 2174 & 454 & 133 & 12 \\ \hline _Suicidal_ & 6067 & 3213 & 230 & 2839 & 197 & 160 & 17 \\ \hline _Schizophrenia_ & 3746 & 1914 & 96 & 1718 & 102 & 87 & 7 \\ \hline \end{tabular}
\end{table}
Table 4: Number of original predicates and number of predicates after similarity, exclusive and frequency pruning methods.
under the ROC curve (AUC) is around 0.5, which is close to a random classifier. Moreover, the LNN baseline with 1000 predicates and 10000 training samples performed surprisingly well for the anxiety class, achieving an AUC of 0.76. The baseline DL model has AUC scores higher than 0.72 for all classes when treated as a binary classifier for each class. However, since the therapist cannot use this data explicitly, accuracy on the multi-class classification task has higher importance for this use case, and the DL model provides only 58% accuracy in that setting. The exclusive-predicates model has shown good performance overall, reaching an AUC of 0.79 for the depression class and 0.57 for schizophrenia.
## 4 Discussions and Future Work
Scaling the LNN is a significant issue that requires selecting the right predicates. Pruning the predicates essentially limits the knowledge base of the LNN, so it is important to understand the effect of the predicates on model performance. The frequency-predicate models have not shown strong results; a possible explanation can be found in the predicate analysis, which shows that predicates with higher frequencies also tend to be shared across several classes. Such predicates might be extracted from common dialogue phrases typical of ordinary conversations. It is therefore more difficult for the LNN to learn in such circumstances, which can lead to behavior similar to a random classifier's. Moreover, varying the frequency threshold did not affect the overall performance of the LNN model. It can thus be concluded that frequency pruning alone cannot provide a quality selection of predicates. Furthermore, in the case of exclusive predicates, the model has learned the depression class better than the others, which can be explained by the depression class possessing more exclusive predicates than the other classes. Interestingly, the model has learned to identify non-schizophrenia samples better than schizophrenia samples; a possible reason is that schizophrenia has fewer predicates than the other classes. Furthermore, some mental disorders share the same symptoms, and exclusive pruning eliminates such predicates from training, which might limit diagnostic ability. Exclusive predicates should therefore be combined with other methods to provide a trade-off between generalization and exclusivity of predicates.
Another challenge of this line of work is the usage of AI for mental disorder diagnosis. As pointed out in (Lin, 2022), one significant challenge is related to the privacy and security of patient data. To train the model, the system requires access to sensitive patient data, which must be protected from unauthorized access or misuse. There is also a concern that the use of AI in mental health diagnosis may lead to the stigmatization of individuals with mental disorders. In this work, we have de-identified all the sessions and all the transcripts are obtained under proper license and consent. We would also like to point out that the system may not work for all individuals, which could lead to misdiagnosis or lack of diagnosis, leading to harm to the patient. Therefore, the ethical challenge lies in ensuring the system's reliability, fairness, and transparency and balancing the use of AI with the need for human involvement in mental health diagnosis and treatment, as part of the future work.
The main advantage of the LNN over DL is its explainability. It is possible to extract the predicates with high weights for each class and to examine which predicates contribute significantly to the result. Table 6 shows the predicate semantics analysis for each class after training. Predicates of the depression and suicidal classes mostly relate to first-person and third-person actions respectively, while people with anxiety tend to talk more about feelings. In addition, predicates of the schizophrenia class tend to relate to medical terms. This overlaps with the overall content of the transcripts, and the predicates that possess high weights can be used to give insights to the therapist during patient diagnosis.
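As an illustration of this workflow, extracting the most influential predicates reduces to sorting the learned per-predicate weights. The dictionary below is a hypothetical stand-in for the trained LNN's parameters; only the ranking step is shown.

```python
# Illustrative only: rank predicates by their learned weight for each class.
def top_predicates(class_weights, k=3):
    return {cls: sorted(w, key=w.get, reverse=True)[:k]
            for cls, w in class_weights.items()}

weights = {"depression": {"do_i": 0.91, "come_they": 0.77,
                          "get_it": 0.12, "resemble_what": 0.64}}
print(top_predicates(weights))
# {'depression': ['do_i', 'come_they', 'resemble_what']}
```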
Figure 4: AUC ROC curves for each class in testing.
### Future work
Overall, the large number of predicates with a frequency of 1 makes it evident that the predicates are too specific, which is a possible explanation for the overall poor performance of the model. The predicates might therefore require some generalization. One promising method is to use synonym-based predicates: using thesaurus dictionaries, it is possible to cluster all the keys and values of the AMR representations and keep only one variant per synonym set. That way, it might be possible to significantly reduce the number of predicates and achieve their generalization.
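A minimal sketch of this idea is given below: every token of a key_value predicate is mapped to a canonical synonym before predicates are deduplicated. The SYNONYMS table is a hypothetical stand-in for a real thesaurus such as WordNet.

```python
# Hypothetical thesaurus: token -> canonical form.
SYNONYMS = {"sad": "unhappy", "down": "unhappy", "scared": "afraid"}

def canonicalise(predicate):
    """Rewrite a key_value predicate using canonical synonyms."""
    key, _, value = predicate.partition("_")
    canon = lambda token: SYNONYMS.get(token, token)
    return f"{canon(key)}_{canon(value)}"

# 'feel_sad' and 'feel_down' both become 'feel_unhappy', merging two
# near-duplicate predicates into one and shrinking the knowledge base.
print({canonicalise(p) for p in ["feel_sad", "feel_down", "feel_scared"]})
```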
Another possible way to enhance the model is to explore an LNN and DL hybrid approach. Using the LNN scores, it is possible to train dense layers with a softmax to predict classes in the multiclass setting. In such a way, it becomes more convenient to compare LNN results with DL solutions while keeping the explainability of the LNN.
## 5 Conclusion
Mental disorders are a significant issue affecting more people every year. Explainable AI-based mental disorder diagnosis through utterance classification can therefore aid therapists in their practice. In this work, a supervised learning setting for the LNN has been proposed to address this issue. Moreover, predicate pruning methods based on the similarity, frequency, and exclusivity of the predicates have been analyzed in terms of training performance. Overall, the model trained with exclusive predicates shows the best results among the pruning methods and achieved an AUC ROC of 0.79 for the depression disorder. Finally, the explainability of the LNN diagnosis has been demonstrated by analyzing significant predicates for each class and extracting the predicates with high weights.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline & **Grouping of predicates** & **Top 1 weight** & **Top 2 weight** & **Top 3 weight** \\ \hline **Depression** & Related to first-person actions & Do\_i & Come\_they & Resemble\_what \\ \hline **Anxiety** & Related to feelings & get\_it & look\_it & have-rel-role\_my \\ \hline **Schizophrenia** & Related to medical terms & give\_me & resemble\_things & do\_it \\ \hline **Suicidal** & Related to third-person actions & have-mann\_sense & put\_it & do\_everything \\ \hline \end{tabular}
\end{table}
Table 6: Analysis of the semantics of the predicates for each class.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline & **\# of training samples** & **\# of predicates** & **Suicidal (AUC)** & **Depression (AUC)** & **Anxiety (AUC)** & **Schizophrenia (AUC)** \\ \hline _Baseline LNN_ & 10000 & 1000 & 0.50 & 0.50 & 0.76 & 0.52 \\ \hline _Baseline DL_ & 10000 & N/A & 0.73 & 0.83 & 0.81 & 0.72 \\ \hline _F\(>\)5_ & 3947 & 87 & 0.55 & 0.58 & 0.55 & 0.52 \\ \hline _F\(>\)5, balanced_ & 3947 & 141 & 0.55 & 0.59 & 0.52 & 0.55 \\ \hline _F\(>\)3_ & 3605 & 349 & 0.54 & 0.56 & 0.53 & 0.52 \\ \hline _F\(>\)6_ & 3947 & 81 & 0.55 & 0.53 & 0.56 & 0.50 \\ \hline _Exclusive predicates and F\(>\)1_ & 3947 & 981 & 0.51 & 0.79 & 0.50 & 0.43 \\ \hline \end{tabular}
\end{table}
Table 5: AUC ROC scores for DL baseline, LNN baseline and proposed pruning methods. |
2301.09522 | Optimising Event-Driven Spiking Neural Network with Regularisation and
Cutoff | Spiking neural network (SNN), next generation of artificial neural network
(ANN) that more closely mimic natural neural networks offers promising
improvements in computational efficiency. However, current SNN training
methodologies predominantly employ a fixed timestep approach, overlooking the
potential of dynamic inference in SNN. In this paper, we strengthen the
marriage between SNN and event-driven processing with a proposal to consider
cutoff in SNN, which can terminate SNN anytime during the inference to achieve
efficient inference. Two novel optimisation techniques are presented to achieve
inference efficient SNN: a Top-K cutoff and a regularisation. The Top-K cutoff
technique optimises the inference of SNN, and the regularisation are proposed
to affect the training and construct SNN with optimised performance for cutoff.
We conduct an extensive set of experiments on multiple benchmark frame-based
datsets, such as Cifar10/100, Tiny-ImageNet and event-based datasets, including
CIFAR10-DVS, N-Caltech101 and DVS128 Gesture. The experimental results
demonstrate the effectiveness of our techniques in both ANN-to-SNN conversion
and direct training, affirming their compatibility and potential benefits in
enhancing accuracy and reducing inference timestep when integrated with
existing methods. Code available:
https://github.com/Dengyu-Wu/SNN-Regularisation-Cutoff | Dengyu Wu, Gaojie Jin, Han Yu, Xinping Yi, Xiaowei Huang | 2023-01-23T16:14:09Z | http://arxiv.org/abs/2301.09522v3 | # Optimising Event-Driven Spiking Neural Network with Regularisation and Cutoff
###### Abstract
Spiking neural networks (SNNs), the next generation of artificial neural networks (ANNs) with the benefit of energy efficiency, have achieved accuracy close to that of their ANN counterparts on benchmark datasets such as CIFAR10/100 and ImageNet. However, compared with frame-based inputs (e.g., images), event-based inputs from, e.g., a Dynamic Vision Sensor (DVS) can make better use of SNNs thanks to the SNNs' asynchronous working mechanism. In this paper, we strengthen the marriage between SNNs and event-based inputs with a proposal to consider anytime optimal inference SNNs, or AOI-SNNs, which can terminate anytime during the inference to achieve the optimal inference result. Two novel optimisation techniques are presented to achieve AOI-SNNs: a regularisation and a cutoff. The regularisation enables the training and construction of SNNs with optimised performance, and the cutoff technique optimises the inference of SNNs on event-driven inputs. We conduct an extensive set of experiments on multiple benchmark event-based datasets, including CIFAR10-DVS, N-Caltech101 and DVS128 Gesture. The experimental results demonstrate that our techniques are superior to the state-of-the-art with respect to accuracy and latency.
## I Introduction
SNNs have recently attracted significant research and industrial interest thanks to their energy efficiency and low latency [1], and there are neuromorphic chips such as Loihi [2] and TrueNorth [3] on which SNNs can be deployed. Mechanistically, SNNs mimic biological neurons, and the neurons process and forward spikes independently. With such an asynchronous working mechanism, only a (small) subset of neurons is activated during inference. That is, energy efficiency is inherent to SNNs.
The asynchronous mechanism also suggests that event-based input may make better use of SNNs. Indeed, neuromorphic sensors such as the Dynamic Vision Sensor [4, 5, 6] and the Dynamic Audio Sensor (DAS) [7] have been developed to generate binary "events", which are ideal inputs to SNNs. For example, unlike conventional frame-based cameras which measure the "absolute" brightness at a constant rate, DVS cameras are bio-inspired sensors that _asynchronously_ measure per-pixel brightness changes (called "events"), and output a stream of events that encode the time, location and sign of the brightness changes [8]. DVS reveals the sparsity and asynchronicity in recognition systems for computational efficiency [9, 10, 11]. To deal with event-based input, we propose to consider anytime optimal inference SNNs, or AOI-SNNs, which allow termination at any time during the inference on a spike train (i.e., an input) and return the best possible inference result. Such SNNs enable cutoff during the inference without (significantly) compromising the performance, and thus can achieve the best in terms of both accuracy and latency.
Regarding the training of SNNs, a mainstream approach is ANN-to-SNN conversion, which adopts the mature training regime of ANNs to first train a high-accuracy ANN and then convert it into an SNN. Such conversions via ANNs have led research to focus on achieving near-zero conversion loss. However, existing conversion methods [12, 13, 14] mostly conduct empirical experiments on frame-based benchmark datasets such as ImageNet [15] and CIFAR10/100 [16]. In this paper, we focus on event-based input, and therefore on AOI-SNNs, and explore effective training and inference methods to improve accuracy and latency together.
When considering ANN-to-SNN conversions to deal with DVS inputs, there are two possible approaches. The first aggregates the sparse events in the DVS stream into a frame-based input, which the SNN processes as a whole. This resembles an ANN processing a static input (such as an image). As explained in Section III-A, the frame-based input is based on the average spike rate, neglecting the spike timing information. The second is to work directly with the event-based input, by considering, e.g., AOI-SNNs. An obvious benefit is that SNNs can exploit the sparse events in the DVS input, enabling energy-efficient operation and reduced latency. In addition, unlike frame-based input, the event-based input does not need an encoder at or before the first layer, which allows SNNs to operate asynchronously and achieve extra low latency (further explained in Section III-A).
This paper makes two key technical contributions. Firstly, we propose a regularisation technique to influence the activation distribution during ANN training, which results in an SNN that can classify with less input information. As will be discussed in Related Work (Section II), with our proposed regulariser, we can train an ANN without clipping and do not need to apply any quantisation-aware technique. Experiments in Section V-D show that we can achieve better accuracy than the state-of-the-art methods on both direct training and ANN-to-SNN conversion. Clipping (and quantisation-aware) techniques have been the status quo in this area due to the recent progress [17, 13, 12, 18] and our result suggests that there is an alternative, and probably better, way to get an improved SNN. Instead of simulating non-differentiable SNN activations during ANN training, our regulariser enables the attainment of a better distribution of SNN current by actively regularising the activations of the possible misclassifications. The regulariser is based on a new theoretical result (Section IV-A) that a smaller ratio of threshold voltages to average accumulated current can result in an SNN that can achieve optimised performance at any time during the inference.
The second contribution is that, instead of setting the inference length to always be \(T_{total}\), we can explore an early cutoff mechanism that enables the SNN model to automatically achieve optimal latency and energy efficiency. As shown in Fig. 1, the SNN model runs a monitoring mechanism to determine when it is sufficiently confident to make a decision. Once such a decision is made at time \(t<T_{total}\), a cutoff action is triggered so that the SNN will not take further inputs until the time \(T_{total}\). Therefore, not only will this lead to lower latency (because the decision is made at time \(t\) rather than \(T_{total}\)), but it will also be more energy-efficient (because no spike will be generated after time \(t\)).
## II Related work
The application of SNNs to a data source can be separated into two phases: training and inference. Broadly speaking, the training algorithms for SNNs can be categorised into direct training (DT) and ANN-to-SNN conversion. Recently, spike-based error backpropagation [20, 21, 22, 23, 24] has been used to directly train a neural network to process the temporal information of input spikes. However, both direct training and conversion algorithms [25, 18] need to collapse the input spikes into frames for training. More specifically, the first layer of a directly trained SNN needs to wait for the full spike train within one frame to generate one spike, while a converted SNN can respond very quickly as soon as it receives spikes. Normally, the number of frames in direct training is kept small to reduce training complexity, and it determines the latency of the SNN at inference. In contrast, ANN-to-SNN conversion can incorporate the maximum number of spikes during training to obtain an SNN with optimal latency.
For ANN-to-SNN conversion, early studies [19, 26] use the maximum value of the activations to normalise the weights from the ANN, and [27] proves that the normalisation can also be achieved by greedily searching for the optimal threshold using the input spike train. A unified conversion framework is studied in [18]. Besides, there are hybrid methods [28, 29] that combine conversion and direct training. Tandem Learning [30] leverages the gradient from the ANN to update the SNN during training. The first two columns of Table I present the technical ingredients of different conversion methods for the training phase. Recent work [12, 18] shows that outlier elimination (OE) in ANN activations can be implemented by applying a _clipping_ operation after the Rectified Linear Unit (ReLU). Based on this, [17, 13] further minimise the quantisation error by quantisation-aware (QA) training. Different from the above methods, we develop a new regulariser to achieve better performance _without_ clipping, and moreover, noticeably, we are _free from_ applying QA training.
For the inference phase, as indicated in the last two columns
Fig. 1: An illustrative diagram showing the regularisation for improving SNN latency and the cutoff mechanism for reducing latency on the Cifar10-DVS dataset. Cutoff is triggered when \(S_{gap}\) is greater than \(\beta\), a value dynamically determined by a confidence rate as introduced in Section IV-C.
of Table I, the soft-reset mechanism [19] and the additive white noise to membrane potential [12, 18, 13] can significantly increase the conversion efficiency. To the best of our knowledge, there is no existing work on cutoff in the inference phase, and our confidence-based method is the first of its kind.
## III Preliminary
In this section, we discuss the event-based input in spiking neuron and introduce the ANN-to-SNN conversion. To facilitate the analysis, we use **bold symbol** to represent vector, \(l\) to denote the layer index, and \(i\) to denote the index of elements. For example, \(\mathbf{a}^{l}\) is a vector and \(a_{i}^{l}\) is the \(i\)-th element in \(\mathbf{a}^{l}\). Inference time \(t\) represents the time length of input. \(T_{total}\) denotes the maximum time length of input and it can be various depending on dataset. \(\mathbf{W}^{l}\) is weight matrix at the \(l\)-th layer.
### _Integrate-and-fire model_
Conversion-based SNNs use the integrate-and-fire (IF) neuron as the basic computing unit to approximate ReLU in the ANN [18]. Fig. 2 illustrates the inference process in IF neurons. The input spike train \(X_{i}(t)\) charges the membrane potential \(V_{i}(t)\) with weighted current. The weighted current and bias current are translated from the weight \(\mathbf{W}^{l}\) and bias \(\mathbf{b}^{l}\) in the ANN. When \(V_{i}(t)\) reaches the threshold \(V_{thr}\), the neuron generates a spike and then resets \(V_{i}(t)\) by subtracting \(V_{thr}\). The _reset by subtraction_ mechanism was first suggested in [19] to reduce information loss during inference. The dynamics of the IF neuron can be described as
\[\mathbf{V}^{l}(t)=\left\{\begin{array}{ll}\mathbf{V}^{l}(t-1)+\mathbf{Z}^{l}(t)-\mathbf{ \theta}^{l}(t)V_{thr}^{l}&l>1\\ \mathbf{Z}^{1}(t)&l=1\end{array}\right. \tag{1}\]
where \(\mathbf{\theta}^{l}(t)\) is a step function i.e., \(\theta_{i}^{l}(t)=1\) if \(V_{i}^{l}(t-1)+Z_{i}^{l}(t)\geq V_{thr}^{l}\) and \(\theta_{i}^{l}(t)=0\) otherwise. \(\mathbf{Z}^{l}(t)\) is the input current such that
\[\mathbf{Z}^{l}(t)=\mathbf{W}^{l}\mathbf{\theta}^{l-1}(t)+\mathbf{b}^{l}\ \ \ \ \ \text{when }l>1. \tag{2}\]
For the event-based inputs (e.g., from a DVS sensor), \(\mathbf{Z}^{l}(t)\) at the first layer, i.e., \(\mathbf{Z}^{1}(t)\), can be initialised as
\[\mathbf{Z}^{1}(t)=\mathbf{W}^{1}\mathbf{X}(t)+\mathbf{b}^{1} \tag{3}\]
where \(\mathbf{X}(t)\) is the time-dependent spike train, i.e., the input may change the charging current over time during inference. To consider the temporal information, we split the spike train into \(F\) frames, with the duration of each frame equal to \(T=T_{total}/F\), where \(F\in\mathbb{Z}^{+}\). We write \(\mathbf{\bar{X}_{f}}\) to represent the average spiking rate of the \(f\)-th frame, i.e., \(\mathbf{\bar{X}_{f}}=1/N_{\max}\sum_{t=T\cdot(f-1)}^{T\cdot f}\mathbf{X}(t)\), where \(N_{\max}\) is the maximum number of spikes over all training frames in the dataset \(D\). The spiking resolution of \(X(t)\) can be roughly computed as \(S_{r}=N_{\max}/T\). For event-based input, SNNs can exhibit faster inference due to their immediate response after receiving the first spike, and the inference completes whenever the spike train ends, i.e., at \(T_{total}\). The event-based benchmarks are further introduced in Section V-B. This characteristic makes a dynamic inference time possible for different inputs. In this paper, with the cutoff technique of Section IV-C, we will show that the average inference latency of the SNN can be further reduced (to some \(t\leq T_{total}\)).
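To make the dynamics concrete, the following NumPy sketch implements one IF layer with reset by subtraction, following Eqs. (1)-(3); the shapes and the random toy input are illustrative assumptions, not tied to any dataset.

```python
import numpy as np

def if_layer(X, W, b, v_thr):
    """X: (T, d_in) binary spike train; returns the (T, d_out) output spikes."""
    T, d_out = X.shape[0], W.shape[0]
    V = np.zeros(d_out)                    # membrane potential
    out = np.zeros((T, d_out))
    for t in range(T):
        V += W @ X[t] + b                  # weighted input current, Eqs. (2)-(3)
        fired = V >= v_thr                 # step function theta^l(t) in Eq. (1)
        V[fired] -= v_thr                  # reset by subtraction
        out[t] = fired
    return out

rng = np.random.default_rng(0)
spikes = if_layer(rng.integers(0, 2, (20, 5)).astype(float),
                  rng.normal(size=(3, 5)), np.zeros(3), v_thr=1.0)
print(spikes.sum(axis=0))                  # spike counts N^l(T) per neuron
```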
### _Temporal training_
Regarding the direct training of SNNs in [22, 23], the resulting SNN can make a decision by averaging the output spikes of consecutive frames. Assuming that the inference of each frame is independent, this process can be approximated in ANN training by letting the loss function be
\[L_{TT}=\frac{1}{F}\sum_{f=1}^{F}L_{CE}(\mathbf{Y}_{f},\mathbf{\hat{Y}}) \tag{4}\]
where \(\mathbf{Y}_{f}\) is the output of \(\mathbf{\bar{X}_{f}}\) after softmax, \(\mathbf{\hat{Y}}\) is the ground truth, and \(L_{CE}\) is the cross-entropy loss. The temporal training loss (\(L_{TT}\)) was suggested in [23] and achieves better generalisation. To simplify the theoretical analysis, we let \(F=1\) in Sections III-C and IV. Further explanation of temporal training is given in Section V-C, including the impact of \(F\) on ANN training and the extension of the theory to \(F>1\). To ensure independence between frames, the membrane potential of the hidden layers is reset after each frame, while that of the output layer is reset after the last frame, which is feasible in hardware implementation [31, 32].
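A hedged PyTorch sketch of Eq. (4) is shown below: the \(F\) frames are folded into the batch dimension, passed through an ordinary ReLU ANN, and the cross-entropy is averaged over all frames. `model` stands for any ANN mapping one averaged frame to class logits; the tensor layout is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def temporal_loss(model, frames, labels):
    """frames: (B, F, ...) averaged spike-rate frames X_bar_f; labels: (B,)."""
    n_frames = frames.shape[1]
    logits = model(frames.flatten(0, 1))          # treat each frame independently
    labels = labels.repeat_interleave(n_frames)   # every frame keeps its label
    return F.cross_entropy(logits, labels)        # mean over B*F realises Eq. (4)
```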
### _ANN-to-SNN conversion_
The conversion method is mainly based on the integrate-and-fire (IF) neuron, which generates spikes depending on the positive accumulated current, corresponding to the ReLU activation in the ANN. An existing conversion method [18] uses current normalisation by letting
\[\frac{1}{T\cdot S_{r}}\sum_{t=0}^{T}\mathbf{Z}^{1}(t)=\mathbf{a}^{1} \tag{5}\]
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & \multicolumn{2}{c|}{Training} & \multicolumn{4}{c}{Inference} \\ \cline{2-6} & OE through & Apply QA & Soft-reset & Additive Noise & Cutoff \\ \hline
[19] & - & - & \(\surd\) & - & - \\
[12] & clipping (COE) & - & \(\surd\) & - & - \\
[18] & clipping (COE) & - & \(\surd\) & \(\surd\) & - \\
[17] & clipping (COE) & \(\surd\) & \(\surd\) & - & - \\
[13] & clipping (COE) & \(\surd\) & \(\surd\) & \(\surd\) & - \\ \hline Ours & regularisation (ROE) & - & \(\surd\) & \(\surd\) & \(\surd\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Technical ingredients of different conversion methods. OE and QA denote Outlier Elimination and Quantisation-aware technique respectively. COE: Clipping for Outlier Elimination; ROE: Regularisation for Outlier Elimination.
where \(\mathbf{a}^{1}\) is the output of ReLU activation at the first layer of ANN. The spiking rate of each SNN neuron at layer \(l\) is defined as \(\mathbf{r}^{l}(t)=\mathbf{N}^{l}(t)/t\), where \(\mathbf{N}^{l}(t)\) is the number of spikes received up to time \(t\) by neuron at layer \(l\). The relationship between spiking rate in SNN and activation in ANN has been theoretically proved in [18], which gives
\[\mathbf{r}^{l}(t)=\frac{1}{V_{thr}^{l}}\Big{(}\mathbf{W}^{l}\mathbf{r}^{l-1}(t)+\mathbf{b}^{l} \Big{)}-\mathbf{\Delta}^{l}(t) \tag{6}\]
where \(\mathbf{\Delta}^{l}(t)\triangleq\mathbf{V}^{l}(t)/(tS_{r}V_{thr}^{l})\) represents the residual spiking rate. The spiking rate at the first layer can be initialised as \(\mathbf{r}^{1}(t)=\mathbf{a}^{1}/V_{thr}^{1}-\mathbf{\Delta}^{1}(t)\). Note that, we use \(tS_{r}\) to represent the timestep in [18]. Then, the current normalisation can be achieved by
\[\tilde{\mathbf{W}}^{l}\leftarrow\mathbf{W}^{l},\ \tilde{\mathbf{b}}^{l}\leftarrow\frac{1}{ \lambda^{l-1}}\mathbf{b}^{l},\ V_{thr}^{l}\leftarrow\frac{\lambda^{l}}{\lambda^{l -1}} \tag{7}\]
where \(\lambda^{l}\) is the maximum value of the activation at layer \(l\). For temporal training, the temporal input frames share the same \(\lambda^{l}\).
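The normalisation in Eq. (7) can be sketched in a few lines. Here `lam` is assumed to be the list \([\lambda^{0},\lambda^{1},\dots,\lambda^{L}]\) of maximum ReLU activations recorded on training data (with \(\lambda^{0}\) taken for the input); the data layout is an illustrative assumption.

```python
def ann_to_snn(weights, biases, lam):
    """Per-layer ANN parameters in; per Eq. (7) weights are kept unchanged,
    biases are rescaled, and IF thresholds are returned."""
    thresholds = []
    for l in range(len(weights)):
        biases[l] = biases[l] / lam[l]            # b~^l = b^l / lambda^{l-1}
        thresholds.append(lam[l + 1] / lam[l])    # V_thr^l = lambda^l / lambda^{l-1}
    return weights, biases, thresholds
```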
## IV Methods
We introduce two novel techniques: one is for the training and the other for the inference. Section IV-A presents the theoretical underpinning of the regulariser, which in turn is introduced in Section IV-B. This is followed by the introduction of cutoff mechanism in Section IV-C for the inference.
### _Anytime optimal inference SNNs_
The regulariser is based on an investigation into the design of AOI-SNNs. An AOI-SNN is able to perform optimally under different settings on the inference time for processing an input. When \(t\) is large enough to make \(\Delta^{l}(t)\) negligible, we define the desired spiking rate as follows:
\[\mathbf{r}_{d}^{l}=\mathbf{a}^{l}/V_{thr}^{1} \tag{8}\]
We start by establishing a theoretical connection between the spiking rate \(\mathbf{r}^{l}(t)\) and its desired value \(\mathbf{r}_{d}^{l}\). Let \(\phi^{l}\) denote the angle between \(\mathbf{r}_{d}^{l}\) and \(\mathbf{r}^{l}(t)\). Then, we follow [33] and use the cosine similarity between \(\mathbf{r}_{d}^{l}\) and \(\mathbf{r}^{l}(t)\), i.e., \(\cos(\phi^{l})\), to measure the performance of the SNN up to time \(t\). Indeed, [33] shows that the cosine similarity between a full-precision and a quantised neural network has a high correlation with the final accuracy of the quantised neural network. Similarly, we expect that a higher cosine similarity between \(\mathbf{r}^{l}(t)\) and \(\mathbf{r}_{d}^{l}\) results in a smaller accuracy drop up to time \(t\).
The following theorem states that, for any \(t\), the performance of the SNN is negatively correlated with the threshold \(V_{thr}^{l}\) and positively correlated with the \(L_{2}\) norm of \(\mathbf{a}^{l}\).
**Theorem IV.1**: _For any inference time, assuming that the residual spiking rate \(\mathbf{\Delta}^{l}(t)\) is independent from \(\mathbf{r}_{d}^{l}\), the cosine similarity between \(\mathbf{r}_{d}^{l}\) and \(\mathbf{r}^{l}(t)\) is inversely proportional to the ratio of threshold to average accumulated current,_
\[\cos(\phi^{l})\propto\Big{(}\sqrt{n^{l}}\frac{V_{thr}^{l}}{\|\mathbf{a}^{l}\|_{2}} \Big{)}^{-1}\]
_where \(n^{l}\) is the dimension of \(\mathbf{a}^{l}\) and \(\|\mathbf{a}^{l}\|_{2}/\sqrt{n^{l}}\) denotes the average accumulated current._
We give a proof sketch of the theorem. Because \(\Delta^{l}(t)\) is independent from \(\mathbf{r}_{d}^{l}\), the angle between these two vectors tends to be \(\pi/2\) at high dimension. Then, by \(\mathbf{r}^{l}(t)=\mathbf{r}_{d}^{l}-\mathbf{\Delta}^{l}(t)\), we get a right angle triangle with \(\mathbf{r}_{d}^{l}\) and \(\Delta^{l}(t)\) as the legs, and \(\mathbf{r}^{l}(t)\) as the hypotenuse, as illustrated in Fig. 3. Moreover, we have
\[\cos(\phi^{l})=\frac{\|\mathbf{r}_{d}^{l}\|_{2}}{\|\mathbf{r}^{l}(t)\|_{2}}\geq\frac{\| \mathbf{r}_{d}^{l}\|_{2}}{\|\mathbf{r}_{d}^{l}\|_{2}+\|\mathbf{\Delta}^{l}(t)\|_{2}} \tag{9}\]
We are interested in increasing the lower bound of Equation 9, so that we have greater \(\cos(\phi^{l})\) for different \(t\). Combining with Equations (6) and (8), we have
\[\begin{split}\cos(\phi^{l})\geq\frac{\|\mathbf{a}^{l}/V_{thr}^{l}\|_{ 2}}{\|\mathbf{a}^{l}/V_{thr}^{l}\|_{2}+\|\mathbf{V}^{l}(t)/(tS_{r}V_{thr}^{l})\|_{2}}\\ =\frac{\|\mathbf{a}^{l}\|_{2}}{\|\mathbf{a}^{l}\|_{2}+\|\mathbf{V}^{l}(t)/(tS_{ r})\|_{2}}\end{split} \tag{10}\]
Fig. 3: Graphic illustration of the desired spiking rate \(\mathbf{r}_{d}^{l}\) and spiking rate \(\mathbf{r}^{l}(t)\)
Fig. 2: Inference in integrate-and-fire (IF) neuron with _reset by subtraction_ mechanism.
Assuming that elements in \(\mathbf{V}^{l}(t)\) satisfy uniform distribution over the time \(t\) and they are in \([0,V_{thr}]\), we can derive \(\mathbb{E}(\|V(t)/(tS_{r})\|_{2})\leq\sqrt{n^{l}}V_{thr}/(\sqrt{3}tS_{r})\) (proof in Appendix A). Moreover, at high dimensions, the relative error made as considering \(\mathbb{E}(\|V(t)/(tS_{r})\|_{2})\) instead of the random variable \(\|V(t)/(tS_{r})\|_{2}\) becomes asymptotically negligible [34, 33]. Therefore, Equation 10 can be computed with the following lower bound
\[\begin{split}\cos(\phi^{l})&\geq\frac{\|\mathbf{a}^{l} \|_{2}}{\|\mathbf{a}^{l}\|_{2}+\sqrt{n^{l}}V_{thr}^{l}/(\sqrt{3}tS_{r})}\\ &\quad=\frac{\sqrt{3}tS_{r}}{\sqrt{3}tS_{r}+\sqrt{n^{l}}V_{thr}^{ l}/\|\mathbf{a}^{l}\|_{2}}\end{split} \tag{11}\]
which explicitly shows that (1) increasing \(t\) to \(t\gg\sqrt{n^{l}}V_{thr}^{l}/\|\mathbf{a}^{l}\|_{2}\) increases the lower bound and (2) it is possible to minimise the term \(\sqrt{n^{l}}V_{thr}^{l}/\|\mathbf{a}^{l}\|_{2}\) to develop an SNN with optimised performance at any time during the inference. In other words, an AOI-SNN requires a small ratio of the threshold voltage \(V_{thr}^{l}\) to the average accumulated current, i.e., \(\|\mathbf{a}^{l}\|_{2}/\sqrt{n^{l}}\), while not degrading SNN classification performance. Point (2) corresponds to the theorem.
### _Regulariser for outlier elimination (ROE)_
This section shows how to design a regulariser based on Theorem IV.1. Recall from Equation (7) that \(V_{thr}^{l}\) is determined by \(\lambda^{l}\) and \(\lambda^{l-1}\), where \(\lambda^{l}\) is the maximum value of the activation in the \(l\)-th layer. To simplify the optimisation, the impact of \(1/\lambda^{l-1}\) is omitted and the ratio of threshold to expected current becomes approximately proportional to \(\lambda^{l}/\|\mathbf{a}^{l}\|_{2}\). Therefore, we design a regulariser that minimises the term \(\lambda^{l}/\|\mathbf{a}^{l}\|_{2}\) to develop an AOI-SNN. Firstly, we use the matrix \(\mathbf{A}^{l}\) to represent a batch of \(\mathbf{a}^{l}\) during training. Secondly, we simply use the maximum value in \(\mathbf{A}^{l}\) to approximate \(\lambda^{l}\), i.e., \(\lambda^{l}\approx\|\mathbf{A}^{l}\|_{\max}\). Then, we write \(\|\mathbf{A}^{l}\|_{2,q}=(\sum_{j}(\sum_{i}(A_{ij}^{l})^{2})^{q/2})^{1/q}\) to denote the \(L_{2,q}\) norm of \(\mathbf{A}^{l}\), where \(A_{ij}^{l}\) denotes the \(i\)-th entry of the \(j\)-th sample \(\mathbf{a}^{l}\) in the batch and \(q\in\mathbb{Z}\). Finally, we let the penalty term be the ratio between \(\|\mathbf{A}^{l}\|_{\max}\) and \(\|\mathbf{A}^{l}\|_{2,q}\) with scale constant \(\sqrt{n^{l}}\), i.e.,
\[R(\mathbf{A}^{l})=\sqrt{n^{l}}\frac{\|\mathbf{A}^{l}\|_{\max}}{\|(\mathbf{A}^{l})\|_{2,q}} \tag{12}\]
We let \(q\) be \(-\infty\) so that the penalty term can focus on the inputs with relatively small accumulated current in the batch. The final training objective is
\[L_{TT}+\alpha\sum_{l}\ln{(R(\mathbf{A}^{l}))} \tag{13}\]
where \(\alpha\) is a hyper-parameter that balances the two loss terms. The logarithm is applied to reduce the impact of extremely large values. Regularisation-based training trains an ANN on \(\mathbf{\bar{X}}_{f}\), resulting in an SNN that then operates on the event-based input (Equation 3). A small \(R(\mathbf{A}^{l})\) implies that \(\lambda^{l}\) is less likely to be an outlier and that \(\|\mathbf{a}^{l}\|_{2}\) is generally large.
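A possible PyTorch realisation of Eqs. (12)-(13) is sketched below. `A` is assumed to be the (batch, \(n^{l}\)) activation matrix of one layer; with \(q=-\infty\), the \(L_{2,q}\) norm reduces to the smallest per-sample \(L_{2}\) norm in the batch, so the penalty focuses on the inputs with the weakest accumulated current.

```python
import torch

def roe_penalty(A, eps=1e-8):
    """ln R(A^l) for one layer; A: (batch, n_l) ReLU activations."""
    n_l = A.shape[1]
    per_sample = A.norm(dim=1)                           # ||a_j||_2 per sample
    R = (n_l ** 0.5) * A.max() / (per_sample.min() + eps)
    return torch.log(R)

# Training objective of Eq. (13), with `activations` collected via forward hooks:
# loss = temporal_loss(...) + alpha * sum(roe_penalty(A) for A in activations)
```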
### _Cutoff mechanism to reduce inference time_
Thanks to the asynchronous working mechanism, event-driven SNNs can make predictions when only part of the spike train has been processed. Nevertheless, a naive cutoff on the length of the spike train (or the sampling time of the event sensor) can easily result in accuracy loss. In this section, we suggest a principled method to determine the inference time. Technically, a new metric, called the confidence rate and denoted \(C(\hat{t},D\{S_{gap}>\beta\})\), is defined based on the statistical characteristics of processing a set \(D\) of inputs with respect to the discrete inference time \(\hat{t}\) and \(S_{gap}\). The condition \(S_{gap}>\beta\) identifies the samples in \(D\) that are suitable for cutoff. In fact, we can plot curves of the confidence rate \(C(\hat{t},D\{S_{gap}>\beta\})\) with respect to the time \(\hat{t}\) and \(\beta\). During the processing of an individual input \(\mathbf{X}\), we monitor the variable \(S_{gap}\), and once the confidence rate is guaranteed to reach a certain level, an early cutoff signal is sent (see Fig. 1). The following provides the details.
We write \(\mathbf{X}[\hat{t}]=\sum_{t=0}^{\hat{t}}\mathbf{X}(t)\) to denote the accumulation of \(\mathbf{X}(t)\) from 0 up to \(\hat{t}\). Then, we let \(f(\mathbf{X}[\hat{t}])\) return the prediction of \(f\) based on the partial input \(\mathbf{X}[\hat{t}]\). Based on this, we define a function
\[g(\mathbf{X})=\arg\min_{\hat{t}}\{\forall\hat{t}_{1}>\hat{t}:\mathbf{1}(f(\mathbf{X}[\hat{t }_{1}])=\mathbf{y})\} \tag{14}\]
to express the earliest time from which the model \(f\) is able to confidently and correctly classify according to the partial input. \(\mathbf{1}(\cdot)\) is the indicator function, i.e., \(\mathbf{1}(x_{1}=x_{2})=1\) and \(\mathbf{1}(x_{1}\neq x_{2})=0\). \(\mathbf{1}(f(\mathbf{X}[\hat{t}_{1}])=\mathbf{y})\) suggests that \(f(\mathbf{X}[\hat{t}])\) is the same as the ground truth \(\mathbf{y}\). Then, recall that \(\mathbf{N}^{L}(t)\) is the number of spikes received by \(t\) by the output layer \(L\). We write \(Top_{k}(\mathbf{N}^{L}(t))\) as the top \(k\) spikes that occur in some neuron of layer \(L\). Then, we let
\[S_{gap}=Top_{1}(\mathbf{N}^{L}(t))-Top_{2}(\mathbf{N}^{L}(t)) \tag{15}\]
be a variable denoting the gap of top-1 and top-2 number of spikes. A large \(S_{gap}\) implies little possibility of switching the prediction results during inference. Then, we let \(D\{\cdot\}\) denote the inputs in subset of \(D\) that satisfy a certain condition. Now, we can define the confidence rate as follows:
_Confidence rate:_ \[C(\hat{t},D\{S_{gap}>\beta\})=\frac{1}{|D\{S_{gap}>\beta\}|}\sum_{\mathbf{X}\in D\{S_{gap}>\beta\}}\mathbf{1}(g(\mathbf{X})\leq\hat{t})\]
which intuitively computes the percentage of inputs in \(D\) that can achieve the prediction success on or before a prespecified time \(\hat{t}\), i.e., \(g(\mathbf{X})\leq\hat{t}\). \(|D\{S_{gap}>\beta\}|\) denotes the number of samples in \(D\) satisfying the condition. It is not hard to see that, when \(\hat{t}=0\), \(C(\hat{t},D\{S_{gap}>\beta\})\) is also \(0\), and with the increase of time \(\hat{t}\), \(C(\hat{t},D\{S_{gap}>\beta\})\) will also increase until reaching \(1\). Our algorithm searches for a minimum \(\beta\in\mathbb{Z}^{+}\) at a specific \(\hat{t}\), as expressed in the following optimisation objective:
\[\arg\min_{\beta}C(\hat{t},D\{S_{gap}>\beta\})\geq 1-\epsilon \tag{16}\]
where \(\epsilon\) is a pre-specified constant such that \(1-\epsilon\) represents an acceptable level of confidence for activating cutoff; a set of \(\beta\) values is extracted for different \(\hat{t}\) using the training samples.
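The calibration of \(\beta\) can be sketched as a simple search over integer gaps. Below, `gaps` holds the \(S_{gap}\) values at checkpoint \(\hat t\) and `g_times` holds \(g(\mathbf{X})\) for each training sample; both arrays are assumed to have been measured beforehand, and the layout is an illustrative assumption rather than the paper's implementation.

```python
import numpy as np

def calibrate_beta(gaps, g_times, t_hat, eps=0.01, beta_max=64):
    """Smallest beta with C(t_hat, D{S_gap > beta}) >= 1 - eps, per Eq. (16)."""
    for beta in range(beta_max):
        mask = gaps > beta
        if mask.any() and np.mean(g_times[mask] <= t_hat) >= 1 - eps:
            return beta
    return None  # no admissible beta at this checkpoint

# At run time, inference is cut off at the first checkpoint t_hat where the
# running top-1/top-2 spike-count gap exceeds the calibrated beta for t_hat.
```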
The confidence rate is visualised in Fig. 4, which shows the impact of the inference time and \(\beta\) on confidence. The time ratio denotes the normalised inference time. We characterise the confidence metric with training samples and eventually use testing samples for evaluation. Note that, on Cifar10-DVS, all models achieve 100% training accuracy; however, they perform differently in terms of confidence. With regularisation, SNN-ROE further improves the confidence over SNN-QA, e.g., it is 0.01 higher at \(0.125T_{total}\) and 0.07 higher at \(0.25T_{total}\). Therefore, SNN-ROE can perform better at any time during the inference, as more inputs join the early cutoff. Fig. 4b shows that inputs with large \(S_{gap}\) have more consistent predictions over time, which supports the use of \(S_{gap}>\beta\) as the cutoff condition.
## V Experiment
We implement ROE and conduct an extensive set of experiments to validate it, comparing it with state-of-the-art ANN-to-SNN conversion methods. In this section, 'SNN-QA' denotes the method in [13], which includes both COE and QA during training and outperforms the other methods on image input. In contrast, 'SNN-COE' denotes the SNN with only COE. Our proposed method is denoted by 'SNN-ROE'. To reduce the accuracy loss during inference, we followed [18, 13] in adding an extra current of \(V_{thr}^{l}/2\) to each neuron.
Our method is validated on three event-based datasets, namely Cifar10-DVS [35], N-Caltech101 [36] and DVS128 Gesture [9]. We train the neural network using TensorFlow with the Keras API and convert it into an SNN with SpKeras [18]. Our work is publicly available 1. As [13] did not cover event-based input, we replicate their work as our baseline for comparison and set the quantisation length of SNN-QA to 16 for all datasets, which yields optimal performance. Note that we use the original input from the DVS camera without any pre-processing for inference, so that the SNN remains asynchronous to the input events.
Footnote 1: [https://github.com/Dengyu-Wu/SNN-Regularisation-Cutoff](https://github.com/Dengyu-Wu/SNN-Regularisation-Cutoff)
### _Experiment setup_
The network architectures for the different datasets are given in Table II; they are modified from VGG-11 [37] for Cifar10-DVS & N-Caltech101 and from a VGG-like structure [22] for DVS128 Gesture.
Batch Normalisation [38] is applied after each convolutional and fully-connected layer to accelerate the convergence of ANN training. For all experiments, the learning rate is set to 0.1 and decays to zero after 300 epochs based on a cosine decay schedule [39]. Weight decay is set to 0.0005. We set \(\alpha\) to 0.003 for the regulariser proposed in Section IV-B and use pixel shifting as the data augmentation for all models, i.e., both width and height are randomly shifted within the range [-20%, 20%]. Dropout is applied after the fully-connected layers for DVS128 Gesture to improve the training, with a dropout rate of 0.2. We set the batch size to 128 for \(F=1\) and 32 for \(F>1\) to reduce memory consumption.
For SNN training, we inherit the conversion methods and most of the notation from [18, 13]. Particularly, the relationship between ReLU and IF is from [18] and the relationship between quantised ReLU and IF is from [13]. Moreover, ROE minimisation is based on [18] and operates as a penalty term during ANN training.
### _Event-based datasets_
The samples in the event-based datasets record the event addresses with on/off events over a period of time. Cifar10-DVS consists of 10,000 samples extracted from Cifar10 [16]. Each sample has a 128\(\times\)128 spatial resolution, and the length of each spike train is at most \(1.3s\). N-Caltech101 has 8709 samples categorised into 101 classes, with between 31 and 800 samples per class. The length of each spike train is about \(0.3s\); the width in the x-direction does not exceed 240 pixels and the height in the y-direction does
\begin{table}
\begin{tabular}{c c} \hline \hline Dataset & Network Architecture \\ \hline Cifar10-DVS & C64k8s4-C64-C128-C256s2-C256-C512s2 \\ N-Caltech101 & -C512-C512s2-C512-AP2-FC512-(10 or 101) \\ \hline DVS128 Gesture & C128k8s4-[C128-MP2]* -FC512-FC128-(11) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Network architectures for the different datasets. C64k8s4 represents a convolutional layer with \(filters=64\), \(kernel\ size=8\) and \(strides=4\). The default kernel size and strides are 3 and 1, respectively. AP2 is the average pooling layer, MP2 is the max pooling layer with \(kernel\ size=2\), and FC is the fully-connected layer.
Fig. 4: Evaluation of confidence on Cifar10-DVS
not exceed 180 pixels. For these two datasets, we use 90% of the samples in each class for training and 10% for testing. DVS128 Gesture consists of 1341 samples in 11 categories; each sample is repetitive over \(6.0s\).
### _Applying temporal training in ANN_
Similar to the direct training in [22], temporal training in an ANN, shown in Fig. 5, reshapes the temporal frames before forwarding them into the neural network and computes the average loss over the multiple outputs for optimisation. However, temporal training uses ReLU as the activation function and has no iterative operation during forward propagation. Although it ignores the correlation between neighbouring frames in the hidden layers, our experiments show that the SNN can still achieve good performance. Normally, iterative operation can be expensive when the number of iterations is large, i.e., large memory is required [22]. Fig. 6 shows that increasing \(F\) improves accuracy on DVS128 Gesture, while it has little effect on Cifar10-DVS. We did not examine \(F\) on N-Caltech101 due to its large size.
Moreover, since temporal training treats consecutive frames as individual frames and the prediction is generated from the accumulated output spikes, the regulariser and cutoff can be directly deployed when \(F>1\).
### _Experimental results_
This section presents a comparison between SNN-ROE, SNN-QA and SNN-COE on accuracy w.r.t. the time ratio and on the performance improvement after cutoff. The inference time is \(t=\text{Time Ratio}\times T_{total}\), where \(T_{total}\) is equal to 1.3\(s\), 0.3\(s\) and 1.2\(s\) for Cifar10-DVS, N-Caltech101 and DVS128 Gesture respectively.
It is easy to see that, with cutoff, the performance of all models improves in Fig. 7(a, b, c): the accuracy curve moves above its original curve, meaning that the same accuracy is reached with less inference time for the same model. Thanks to the increase in confidence (recall the results in Fig. 4a), SNN-ROE has generally higher accuracy before the time point (red dashed line), and it shows consistent results across the datasets. The confidence evaluation for N-Caltech101 and DVS128 Gesture is provided in Fig. A-1. It has been argued in [24] that the temporal information in Cifar10-DVS is not the dominant information, and the situation is similar for N-Caltech101. Unlike N-Caltech101 and Cifar10-DVS, the correlation between temporal events in DVS128 Gesture is high. This phenomenon can also be observed in Fig. 5. Therefore, we set \(F=1\) on N-Caltech101 and Cifar10-DVS for efficient training and \(F=4\) on DVS128 Gesture to incorporate temporal information. Moreover, the result in Fig. 7c shows that cutoff also improves an SNN trained with \(F>1\). Fig. 7d shows that the time ratio becomes adaptive after applying cutoff, and the regulariser generally increases the cutoff performance with more accurate predictions at early inference times.
We collect the results around the time point in Table III, where the models have similar inference times, to show the cutoff performance of each model. SNN-ROE achieves superior performance in both accuracy and latency. The spiking resolution \(S_{r}\) is calculated to estimate the maximum average number of spikes per second.
## VI Conclusions
This paper promotes anytime optimal inference SNNs (AOI-SNNs), which maintain optimal performance throughout the inference stage and are therefore suitable for event-driven inputs such as those from a dynamic vision sensor or a dynamic audio sensor. Two technical novelties are proposed to optimise the attainment of AOI-SNNs: one for the training stage and the other for the inference stage. Our experiments demonstrate superior performance with respect to accuracy and latency, compared to the state-of-the-art.
|
2307.15285 | Optimal Approximation of Zonoids and Uniform Approximation by Shallow
Neural Networks | We study the following two related problems. The first is to determine to
what error an arbitrary zonoid in $\mathbb{R}^{d+1}$ can be approximated in the
Hausdorff distance by a sum of $n$ line segments. The second is to determine
optimal approximation rates in the uniform norm for shallow ReLU$^k$ neural
networks on their variation spaces. The first of these problems has been solved
for $d\neq 2,3$, but when $d=2,3$ a logarithmic gap between the best upper and
lower bounds remains. We close this gap, which completes the solution in all
dimensions. For the second problem, our techniques significantly improve upon
existing approximation rates when $k\geq 1$, and enable uniform approximation
of both the target function and its derivatives. | Jonathan W. Siegel | 2023-07-28T03:43:17Z | http://arxiv.org/abs/2307.15285v2 | # Optimal Approximation of Zonoids and Uniform Approximation by Shallow Neural Networks
###### Abstract
We study the following two related problems. The first is to determine to what error an arbitrary zonoid in \(\mathbb{R}^{d+1}\) can be approximated in the Hausdorff distance by a sum of \(n\) line segments. The second is to determine optimal approximation rates in the uniform norm for shallow \(\mathrm{ReLU}^{k}\) neural networks on their variation spaces. The first of these problems has been solved for \(d\neq 2,3\), but when \(d=2,3\) a logarithmic gap between the best upper and lower bounds remains. We close this gap, which completes the solution in all dimensions. For the second problem, our techniques significantly improve upon existing approximation rates when \(k\geq 1\), and enable uniform approximation of both the target function and its derivatives.
## 1 Introduction
A (centered) zonotope in \(\mathbb{R}^{d+1}\) (so that the sphere \(S^{d}\subset\mathbb{R}^{d+1}\) is of dimension \(d\)) is a convex polytope \(P\) which is the Minkowski sum of finitely many centered line segments, i.e. a body of the form
\[P=\{x_{1}v_{1}+\cdots+x_{n}v_{n},\;x_{i}\in[-1,1]\} \tag{1.1}\]
for some collection of vectors \(v_{i}\in\mathbb{R}^{d+1}\). The number \(n\) is the number of summands of the zonotope. A zonoid is a convex body which is a limit of zonotopes in the Hausdorff metric.
We consider the following problem: Given an arbitrary zonoid \(Z\), how accurately can \(Z\) be approximated by a polytope \(P\) with \(n\)-summands? Here accuracy \(\epsilon\) is taken to mean that \(Z\subset P\subset(1+\epsilon)Z\).
This problem has been studied by a variety of authors (see for instance [7, 8, 9, 10, 20, 22]). Of particular interest is the case when \(Z=B^{d+1}\) is the Euclidean unit ball. In this case the problem has an equivalent formulation as (see [7]): how many directions \(v_{1},...,v_{n}\in S^{d}\) are required to estimate the surface area of a convex body in \(\mathbb{R}^{d+1}\) from the volumes of its \(d\)-dimensional projections orthogonal to each \(v_{i}\)?
In [10], it was shown using spherical harmonics that with \(n\) summands and \(Z=B^{d+1}\) the best error one can achieve is lower bounded by
\[\epsilon(n)\geq c(d)n^{-\frac{1}{2}-\frac{3}{2d}}. \tag{1.2}\]
When \(d=2,3\), this bound was matched up to logarithmic factors in [8], specifically it was shown that for general zonoids \(Z\), we have
\[\epsilon(n)\leq C(d)\begin{cases}n^{-\frac{1}{2}-\frac{3}{2d}}\sqrt{\log(n)}&d =2\\ n^{-\frac{1}{2}-\frac{3}{2d}}\log(n)^{3/2}&d=3.\end{cases} \tag{1.3}\]
For larger values of \(d\) the result in [8] gives the worse upper bound of
\[\epsilon(n)\leq C(d)n^{-\frac{1}{2}-\frac{1}{d-1}}\sqrt{\log(n)}. \tag{1.4}\]
In [9] (see also [22]) it was shown that these bounds can be attained using summands of equal length if \(Z=B^{d+1}\).
The picture was nearly completed in [28] where it was shown that we have
\[\epsilon(n)\leq C(d)\begin{cases}n^{-\frac{1}{2}-\frac{3}{2d}}\sqrt{\log(n)}&d =2,3\\ n^{-\frac{1}{2}-\frac{3}{2d}}&d\geq 4.\end{cases} \tag{1.5}\]
Moreover, it was shown that the upper bound when \(d\geq 4\) can be achieved using summands of equal length for all zonoids \(Z\).
In this work, we remove the logarithmic factors in (1.5) when \(d=2,3\), i.e. we prove that
\[\epsilon(n)\leq C(d)n^{-\frac{1}{2}-\frac{3}{2d}} \tag{1.6}\]
for all \(d\), and thus provide upper bounds exactly matching (up to a constant factor) the lower bound (1.2). To formulate these results, we pass to the dual setting (see [10, 28]). A symmetric convex body \(Z\) is a zonoid iff
\[\|x\|_{Z^{*}}:=\sup_{z\in Z}x\cdot z=\int_{S^{d}}|x\cdot y|d\tau(y) \tag{1.7}\]
for a positive measure \(\tau\) on \(S^{d}\). The body \(Z\) is a zonotope with \(n\) summands iff \(\tau\) is supported on \(n\) points. Since our error measure is scale invariant, we may assume that \(\tau\) is a probability distribution. Given these considerations, the bound (1.6) follows from the following result.
**Theorem 1**.: _There exists a constant \(C=C(d)\) such that for any probability measure \(\tau\) on the sphere \(S^{d}\), there exists a probability measure \(\tau^{\prime}\) on \(S^{d}\) which is supported on \(n\) points, such that_
\[\sup_{x\in S^{d}}\left|\int_{S^{d}}|x\cdot y|d\tau(y)-\int_{S^{d}}|x\cdot y|d \tau^{\prime}(y)\right|\leq Cn^{-\frac{1}{2}-\frac{3}{2d}}. \tag{1.8}\]
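For intuition, the dual statement is easy to probe numerically. The sketch below takes \(\tau\) uniform on \(S^{2}\) (where \(\int_{S^{2}}|x\cdot y|\,d\sigma(y)=1/2\) for every unit \(x\)) and measures the sup error of a plain i.i.d. \(n\)-point empirical measure over a grid of test directions. Note that i.i.d. sampling only achieves roughly \(n^{-1/2}\); attaining the \(n^{-1/2-3/(2d)}\) rate of Theorem 1 requires the constructive argument of the paper, which this illustrative sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(k, d=2):
    """k points drawn uniformly on S^d (a subset of R^{d+1})."""
    v = rng.normal(size=(k, d + 1))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n, m = 512, 2000
X = sphere(m)                                  # test directions x in S^2
emp = np.abs(X @ sphere(n).T).mean(axis=1)     # n-point empirical integral
print("sup error ~", np.abs(emp - 0.5).max())  # exact value is 1/2 for uniform tau
```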
We remark that our method produces summands of unequal length (i.e. a non-uniform distribution \(\tau^{\prime}\)) and we do not know whether this approximation can be achieved using summands of equal length (even for the ball \(B^{d+1}\)) when \(d<4\).
Recently, there has been renewed interest in the zonoid approximation problem due to its connection with approximation by shallow ReLU\({}^{k}\) neural networks [2]. The ReLU\({}^{k}\) activation function (simply called ReLU when \(k=1\)) is defined by
\[\sigma_{k}(x)=x_{+}^{k}:=\begin{cases}x^{k}&x\geq 0\\ 0&x<0,\end{cases} \tag{1.9}\]
where in the case \(k=0\) we interpret \(0^{0}=1\) (so that \(\sigma_{0}\) is the Heaviside function). A shallow ReLU\({}^{k}\) neural network is a function on \(\mathbb{R}^{d}\) of the form
\[f_{n}(x)=\sum_{i=1}^{n}a_{i}\sigma_{k}(\omega_{i}\cdot x+b_{i})=\sum_{i=1}^{n }a_{i}(\omega_{i}\cdot x+b_{i})_{+}^{k}, \tag{1.10}\]
where the \(a_{i}\in\mathbb{R}\) are coefficients, the \(\omega_{i}\in\mathbb{R}^{d}\) are directions, the \(b_{i}\in\mathbb{R}\) are the offsets (or biases) and \(n\) (the number of terms) is called the width of the network.
Shallow neural networks can be viewed as a special case of non-linear dictionary approximation. Given a Banach space \(X\), let \(\mathbb{D}\subset X\) be a bounded subset, i.e. \(\|\mathbb{D}\|:=\sup_{d\in\mathbb{D}}\|d\|_{X}<\infty\), which we call a dictionary. Non-linear dictionary approximation methods seek to approximate a target function \(f\) by elements of the set
\[\Sigma_{n}(\mathbb{D})=\left\{\sum_{i=1}^{n}a_{i}d_{i},\ a_{i}\in\mathbb{R}, \ d_{i}\in\mathbb{D}\right\} \tag{1.11}\]
of \(n\)-term linear combinations of dictionary elements. Note that because the elements \(d_{i}\) are not fixed, this a non-linear set of functions. It is often important to obtain some control on the coefficients \(a_{i}\) in a non-linear dictionary expansion. For this reason, we introduce the set
\[\Sigma_{n}^{M}(\mathbb{D})=\left\{\sum_{i=1}^{n}a_{i}d_{i},\ a_{i}\in\mathbb{ R},\ d_{i}\in\mathbb{D},\ \sum_{i=1}^{n}|a_{i}|\leq M\right\} \tag{1.12}\]
of non-linear dictionary expansions with \(\ell^{1}\)-bounded coefficients.
We consider using shallow ReLU\({}^{k}\) neural networks to approximate functions and derivatives of order up to \(k\) uniformly on a bounded domain \(\Omega\subset\mathbb{R}^{d}\), so we take our Banach space \(X\) to be the Sobolev space \(W^{k}(L_{\infty}(\Omega))\) (see for instance [17]), with norm given by
\[\|f\|_{W^{k}(L_{\infty}(\Omega))}:=\sup_{|\alpha|\leq k}\|D^{\alpha}f\|_{L^{ \infty}(\Omega)}. \tag{1.13}\]
In our analysis, without loss of generality we will often take \(\Omega=B^{d}:=\{x:\ |x|\leq 1\}\) to be the unit ball in \(\mathbb{R}^{d}\) to simplify the presentation.
Shallow neural networks correspond to non-linear approximation with the dictionary
\[\mathbb{D}=\mathbb{P}^{d}_{k}:=\{\sigma_{k}(\omega\cdot x+b),\ \omega\in S^{d-1},\ b\in[a,b]\}\subset W^{k}(L_{\infty}(\Omega)), \tag{1.14}\]
where by positive homogeneity we can take \(\omega\) on the unit sphere, and the biases \(b\) are restricted to an interval depending upon \(\Omega\) to ensure both the boundedness and expressiveness of \(\mathbb{P}^{d}_{k}\) (see for instance [39]). When \(\Omega=B^{d}\) is the unit ball in \(\mathbb{R}^{d}\), we can take \([a,b]=[-1,1]\) for example, since
\[\sigma_{k}(\omega\cdot x+b)+(-1)^{k}\sigma_{k}(-\omega\cdot x-b)\]
for \(\omega\in S^{d-1}\) and \(b\in[-1,1]\) spans the space of polynomials of degree at most \(k\) on \(B^{d}\).
A typical class of functions considered in the context of non-linear dictionary approximation is the variation space of the dictionary \(\mathbb{D}\), defined as follows. Let
\[B_{1}(\mathbb{D}):=\overline{\bigcup_{n=1}^{\infty}\Sigma_{n}^{1}(\mathbb{D})}, \tag{1.15}\]
denote the closed symmetric convex hull of the dictionary \(\mathbb{D}\) and define the variation norm of a function \(f\in X\) by
\[\|f\|_{\mathcal{K}_{1}(\mathbb{D})}:=\inf\{s>0:\ f\in sB_{1}(\mathbb{D})\}. \tag{1.16}\]
This construction is also called the gauge of the set \(B_{1}(\mathbb{D})\) (see for instance [34]), and has the property that the unit ball of the \(\mathcal{K}_{1}(\mathbb{D})\)-norm is exactly the closed convex hull \(B_{1}(\mathbb{D})\). We also write
\[\mathcal{K}_{1}(\mathbb{D}):=\{f\in X;\ \|f\|_{\mathcal{K}_{1}(\mathbb{D})}<\infty\} \tag{1.17}\]
for the space of functions with finite \(\mathcal{K}_{1}(\mathbb{D})\)-norm. The variation norm has been introduced in different forms in the literature and plays an important role in statistics, signal processing, non-linear approximation, and the theory of shallow neural networks (see for instance [4, 5, 12, 13, 19, 31, 32, 36, 37, 41]).
In the case corresponding to shallow ReLU\({}^{k}\) neural networks, \(\mathbb{D}=\mathbb{P}^{d}_{k}\), the variation space can equivalently be defined via integral representations, which were studied for example in [2, 16]. Specifically, we have \(f\in\mathcal{K}_{1}(\mathbb{P}^{d}_{k})\) iff there exists a (signed measure) \(d\mu\) of bounded variation on \(S^{d-1}\times[a,b]\) such that
\[f(x)=\int_{S^{d-1}\times[a,b]}\sigma_{k}(\omega\cdot x+b)d\mu(\omega,b) \tag{1.18}\]
pointwise almost everywhere. Moreover, the variation norm is given by
\[\|f\|_{\mathcal{K}_{1}(\mathbb{P}^{d}_{k})}=\inf\left\{\int_{S^{d-1}\times[a, b]}d|\mu|(\omega,b),\ f(x)=\int_{S^{d-1}\times[a,b]}\sigma_{k}(\omega\cdot x+b)d \mu(\omega,b)\right\}, \tag{1.19}\]
where the infimum above is taken over all measures with finite total variation giving such a representation of \(f\). This is due to the fact that
\[B_{1}(\mathbb{P}^{d}_{k})=\left\{\int_{S^{d-1}\times[a,b]}\sigma_{k}(\omega \cdot x+b)d\mu(\omega,b),\ \int_{S^{d-1}\times[a,b]}d|\mu|(\omega,b)\leq 1\right\}, \tag{1.20}\]
which follows from Lemma 3 in [39], and an 'empirical' discretization of the integral in (1.20) using the fact that half-spaces have bounded VC-dimension [42, 43] (note that the closure in (1.15) is taken in the \(X=W^{k}(L_{\infty})\)-norm). In fact, it follows easily from this that we get the same set \(B_{1}(\mathbb{P}^{d}_{k})\), and thus the same variation space \(\mathcal{K}_{1}(\mathbb{P}^{d}_{k})\), even if we take the closure in (1.15) with respect to a weaker norm such as \(L_{2}\).
One important question is how efficiently functions in the variation space \(\mathcal{K}_{1}(\mathbb{D})\) can be approximated by non-linear dictionary expansions \(\Sigma_{n}(\mathbb{D})\) with \(n\) terms. When the space \(X\) is a Hilbert space (or more generally a type-2 Banach space), we have the bound [4, 19, 33]
\[\inf_{f_{n}\in\Sigma_{n}(\mathbb{D})}\|f-f_{n}\|_{X}\leq C\|f\|_{\mathcal{K}_{1 }(\mathbb{D})}n^{-\frac{1}{2}}. \tag{1.21}\]
The constant here depends only upon the norm of the dictionary \(\|\mathbb{D}\|\) and the type-2 constant of the space \(X\). Moreover, the norm of the coefficients \(a_{i}\) can be controlled, so that if \(f\) is in \(B_{1}(\mathbb{D})\) (the unit ball of \(\mathcal{K}_{1}(\mathbb{D})\)), then \(f_{n}\) can be taken in \(\Sigma^{1}_{n}(\mathbb{D})\). This fact was first applied to neural network approximation by Jones and Barron [4, 19], and forms the basis of the dimension independent approximation rates obtained for shallow neural networks.
For many dictionaries, for example the dictionaries \(\mathbb{P}^{d}_{k}\) corresponding to shallow neural networks, the rate (1.21) can be significantly improved (see for instance [2, 21, 25, 38]). For instance, in the \(L_{2}\)-norm we get the rate
\[\inf_{f_{n}\in\Sigma_{n}(\mathbb{P}^{d}_{k})}\|f-f_{n}\|_{L_{2}(\Omega)}\leq C \|f\|_{\mathcal{K}_{1}(\mathbb{P}^{d}_{k})}n^{-\frac{1}{2}-\frac{2k+1}{2d}}, \tag{1.22}\]
and this rate is optimal up to logarithmic factors if we require even mild control on the coefficients \(a_{i}\) (for instance \(|a_{i}|\leq C\) for a constant \(C\)) [38].
In this work, we consider approximation rates for the dictionary \(\mathbb{P}^{d}_{k}\) on the variation space \(\mathcal{K}_{1}(\mathbb{P}^{d}_{k})\) in the \(W^{m}(L_{\infty})\)-norm for \(m=0,...,k\), i.e. we consider uniform approximation of both \(f\) and its derivatives up to order \(m\). This is a much stronger error norm than the \(L_{2}\)-norm, and approximating derivatives is important for applications of shallow neural networks to scientific computing (see for instance [23, 35, 44]). For an arbitrary (bounded) dictionary \(\mathbb{D}\subset W^{m}(L_{\infty})\), no rate of approximation for \(\mathcal{K}_{1}(\mathbb{D})\) by \(\Sigma_{n}(\mathbb{D})\) can be obtained in general, i.e. there exist dictionaries for which the rate can be arbitrarily slow [14]. Thus, our results rely upon the special structure of the \(\mathbb{P}^{d}_{k}\) dictionary of ReLU\({}^{k}\) atoms.
This problem has previously been considered in the case \(m=0\), i.e. in the \(L_{\infty}\)-norm (see for instance [2, 3, 11, 21, 24, 46]). In this case, when \(k=0\) an approximation rate of
\[\inf_{f_{n}\in\Sigma_{n}(\mathbb{P}^{d}_{0})}\|f-f_{n}\|_{L_{\infty}(\Omega)} \leq C\|f\|_{\mathcal{K}_{1}(\mathbb{P}^{d}_{0})}n^{-\frac{1}{2}-\frac{1}{2d}}, \tag{1.23}\]
was proved in [24] using results from geometric discrepancy theory [27]. For \(k=1\), the aforementioned results on approximating zonoids by zonotopes [28] were used in [2] to get a rate of
\[\inf_{f_{n}\in\Sigma_{n}(\mathbb{P}^{d}_{1})}\|f-f_{n}\|_{L_{\infty}(S^{d})}\leq C\|f\|_{\mathcal{K}_{1}(\mathbb{P}^{d}_{1})}\begin{cases}n^{-\frac{1}{2}-\frac{3}{2d}}\sqrt{\log n}&d=2,3\\ n^{-\frac{1}{2}-\frac{3}{2d}}&d\geq 4\end{cases} \tag{1.24}\]
on the sphere \(S^{d}\). Finally, when \(k\geq 2\), the best known result is [21]
\[\inf_{f_{n}\in\Sigma_{n}(\mathbb{P}^{d}_{k})}\|f-f_{n}\|_{L_{\infty}(\Omega)}\leq C\|f\|_{\mathcal{K}_{1}(\mathbb{P}^{d}_{k})}n^{-\frac{1}{2}-\frac{1}{d}}\sqrt{\log n}. \tag{1.25}\]
We remark that for all of these results, the coefficients of \(f_{n}\) can be controlled. Specifically, if \(f\in B_{1}(\mathbb{P}^{d}_{k})\), then \(f_{n}\) can be taken in \(\Sigma^{1}_{n}(\mathbb{P}^{d}_{k})\).
By refining the techniques of discrepancy theory used to obtain these \(L_{\infty}\) bounds, we are able to prove the following result, which is essentially a generalization of Theorem 1.
**Theorem 2**.: _Let \(k\geq 0\). For any probability distribution \(\tau\) on \(S^{d-1}\times[-1,1]\) there exists a probability distribution \(\tau^{\prime}\) supported on at most \(n\) points such that for any multi-index \(\alpha\) with \(|\alpha|\leq k\) we have_
\[\sup_{x\in B^{d}}\left|D^{\alpha}_{x}\left(\int_{S^{d-1}\times[-1,1]}\sigma_{k}(\omega\cdot x+b)d\tau(\omega,b)-\int_{S^{d-1}\times[-1,1]}\sigma_{k}(\omega\cdot x+b)d\tau^{\prime}(\omega,b)\right)\right|\leq Cn^{-\frac{1}{2}-\frac{2(k-|\alpha|)+1}{2d}}, \tag{1.26}\]
_where \(D^{\alpha}_{x}\) denotes the \(\alpha\)-th order derivative with respect to \(x\). Here the constant \(C=C(d,k)\) depends only upon \(d\) and \(k\)._
As a Corollary, we obtain the following approximation rate.
**Theorem 3**.: _Let \(0\leq m\leq k\) and \(\Omega=B^{d}\). Then we have the bound_
\[\inf_{f_{n}\in\Sigma_{n}(\mathbb{P}^{d}_{k})}\|f-f_{n}\|_{W^{m}(L_{\infty}(\Omega))}\leq C\|f\|_{\mathcal{K}_{1}(\mathbb{P}^{d}_{k})}n^{-\frac{1}{2}-\frac{2(k-m)+1}{2d}}, \tag{1.27}\]
_where \(C=C(d,k)\) is a constant. Moreover, the coefficients of \(f_{n}\) can be controlled, so if \(f\in B_{1}(\mathbb{P}^{d}_{k})\), then \(f_{n}\) can be taken in \(\Sigma^{1}_{n}(\mathbb{P}^{d}_{k})\)._
We remark that by scaling, Theorems 2 and 3 can easily be extended to any bounded domain \(\Omega\subset\mathbb{R}^{d}\). Theorem 3 extends the approximation rates derived in [38] from the \(L_{2}\)-norm to the \(L_{\infty}\)-norm, which significantly improves upon existing results [2, 21, 24] when \(k\geq 1\). In addition, we obtain approximation rates in the \(W^{m}(L_{\infty})\)-norm, which
enables derivatives and function values to be uniformly approximated simultaneously. The approximation rates given in Theorem 3 are an important building block in obtaining approximation rates for shallow \(\operatorname{ReLU}^{k}\) networks on Sobolev and Besov spaces [45].
Theorems 1 and 2 are proved using a modification of the geometric discrepancy argument used in [28], while Theorem 3 is an easy Corollary of Theorem 2. Theorem 1 follows essentially as a special case of Theorem 2 when \(k=1\), except that the ReLU activation function is replaced by the absolute value function and the setting is changed to the sphere. For this reason, we only give the complete proof of Theorem 2. The changes necessary to obtain Theorem 1 are relatively straightforward and left to the reader. We begin by collecting the necessary geometric and combinatorial facts in Section 2. The proofs of Theorems 2 and 3 are given in Section 3.
## 2 Geometric Lemmas
In this section, we collect and prove the geometric Lemmas which are crucial to the proofs of Theorems 1 and 2. In particular, in the proof of Theorem 2 we will need the following covering result.
**Lemma 1**.: _Let \(P\) be an \(N\) point subset of \(S^{d-1}\times[-1,1]\) (i.e. a set of \(N\) halfspaces) and \(0<\delta<1\) be given. Then there exists a subset \(\mathcal{N}\subset B^{d}\) of the unit ball (depending upon both \(P\) and \(\delta\)) with \(|\mathcal{N}|\leq(C/\delta)^{d}\) such that for any \(x\in B^{d}\) there exists a \(z\in\mathcal{N}\) with_
* \(|x-z|\leq C\delta\sqrt{d}\)_._
* \(|P\cap\{(\omega,b)\in S^{d-1}\times[-1,1],\ \operatorname{sgn}\left(\omega \cdot x+b\right)\neq\operatorname{sgn}\left(\omega\cdot z+b\right)\}|\leq \delta N\)_._
_Here \(C\) is an absolute constant._
A version of this Lemma on the sphere, which is required for the proof of Theorem 1 was proved by Matousek [28].
**Lemma 2** (Lemma 6 in [28]).: _Let \(P\) be an \(N\)-point subset of \(S^{d}\) and \(0<\delta<1\) be given. There exists a subset \(\mathcal{N}\subset S^{d}\) (depending upon both \(P\) and \(\delta\)) with \(|\mathcal{N}|\leq C\delta^{-d}\) such that for any \(x\in S^{d}\), there exists a \(z\in\mathcal{N}\) with_
* \(|x-z|\leq\delta\)_._
* \(|P\cap\{y\in S^{d},\ \operatorname{sgn}\left(y\cdot x\right)\neq\operatorname{ sgn}\left(y\cdot z\right)\}|\leq\delta N\)_._
_Here \(C=C(d)\) is a constant independent of \(N,P\) and \(\delta\)._
The proof of Lemma 1 follows essentially the same ideas as the proof of Lemma 2 in [28]. However, for completeness we give the proof here as well. The version given in Lemma 1 explicitly tracks the dimension dependence of the constants and can be used to track the dimension dependence of the constants in Theorems 1 and 2.
We begin by recalling the relevant combinatorial background (see for instance [29], Chapter 5).
**Definition 1**.: _A set system \((X,\mathcal{S})\) consists of a set \(X\) and a collection of subsets \(\mathcal{S}\subset 2^{X}\) of \(X\)._
The particular set system which we will consider in the proof of Lemma 1 is given by
\[X=S^{d-1}\times[-1,1],\ \mathcal{S}=\left\{\{(\omega,b):\ \omega\cdot x+b\geq 0 \},\ x\in B^{d}\right\}. \tag{2.1}\]
In other words, the elements are halfspaces and the sets consists of all halfspaces containing a given point \(x\) in the unit ball.
Given a subset \(Y\subset X\), we write \(\mathcal{S}|_{Y}=\{Y\cap S,\ S\in\mathcal{S}\}\) for the system \(\mathcal{S}\) restricted to the set \(Y\).
**Definition 2** (VC-dimension [43]).: _A subset \(Y\subset X\) is shattered by \(\mathcal{S}\) if \(\mathcal{S}|_{Y}=2^{Y}\). The VC-dimension of the set system \((X,\mathcal{S})\) is the cardinality of the largest finite subset of \(X\) which is shattered by \(\mathcal{S}\)._
An important ingredient in the proof is the following bound on the VC-dimension of the set system given in (2.1).
**Lemma 3** (Lemma 3 in [24]).: _The set system in (2.1) has VC-dimension bounded by \(d\)._
Finally, we will need the following packing Lemma for set systems with bounded VC-dimension.
**Lemma 4**.: _Let \((X,\mathcal{S})\) be a set system and \(\mu\) a probability measure on the set \(X\). Define a distance \(d_{\mu}\) on the collection of subsets \(\mathcal{S}\) by_
\[d_{\mu}(S_{1},S_{2})=\mu(S_{1}\Delta S_{2}). \tag{2.2}\]
_In other words, \(d_{\mu}\) is the probability that a randomly chosen element from the measure \(\mu\) will be in one set but not the other._
_Let \(\epsilon>0\) and suppose that \(S_{1},...,S_{N}\in\mathcal{S}\) are such that \(d_{\mu}(S_{i},S_{j})\geq\epsilon\) for all \(i\neq j\). Then, if \((X,\mathcal{S})\) has VC-dimension at most \(d\), we have_
\[N\leq\left(\frac{C}{\epsilon}\right)^{d} \tag{2.3}\]
_for an absolute constant \(C\) (we can take for instance \(C=50\))._
This was first proved by Haussler in the case where \(X\) is a finite set and \(\mu\) is the counting measure [18], with a weaker result (losing a logarithmic factor) being obtained earlier by Dudley [15]. The generalization to arbitrary probability measures \(\mu\) follows from this result in a relatively simple manner, and has been noted in the case of certain geometric set systems in [26].
Proof of Lemma 4.: Consider drawing \(k\) independent random samples \(x_{1},...,x_{k}\) from the probability distribution \(\mu\) with replacement. We consider the sets
\[\bar{S}_{j}=\{i:x_{i}\in S_{j}\}\subset\{1,...,k\}. \tag{2.4}\]
It is clear that the VC-dimension of the set system \(\bar{S}_{1},...,\bar{S}_{N}\) viewed as subsets of \(\{1,...,k\}\) is also at most \(d\). Applying Theorem 1 in [18], we see that if each pair \(\bar{S}_{i}\) and \(\bar{S}_{j}\) differ in at least \(\delta k\) elements for some \(\delta>0\), then
\[N\leq e(d+1)\left(\frac{2e}{\delta}\right)^{d}\leq\left(\frac{C}{\delta} \right)^{d} \tag{2.5}\]
for \(C=25\) (for example).
Finally, we observe that by choosing \(k\) large enough we can guarantee that any two pairs \(\bar{S}_{i}\) and \(\bar{S}_{j}\) differ in at least \(\delta k\) elements for \(\delta=\epsilon/2\) with positive probability. Indeed, for each pair of sets the fraction of elements that they differ in is an average of Bernoulli random variables with expectation at least \(\epsilon\), since \(d_{\mu}(S_{i},S_{j})\geq\epsilon\). Using standard concentration inequalities combined with a union bound (for \(k\) large enough, note that \(N\) is fixed) completes the proof.
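To make this last step explicit (a routine estimate, spelled out for convenience): for a fixed pair \(i\neq j\), the fraction \(F_{ij}\) of the \(k\) samples landing in \(S_{i}\Delta S_{j}\) is an average of i.i.d. Bernoulli variables with mean \(d_{\mu}(S_{i},S_{j})\geq\epsilon\), so Hoeffding's inequality gives
\[\mathbb{P}\left(F_{ij}<\epsilon/2\right)\leq e^{-k\epsilon^{2}/2},\]
and a union bound over the at most \(N^{2}\) pairs succeeds with positive probability as soon as \(k>2\epsilon^{-2}\ln(N^{2})\).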
Proof of Lemma 1.: Consider the set system given in (2.1) and the probability measure \(\mu\) defined by
\[\mu=\frac{1}{2}\pi+\frac{1}{2}\pi_{P}, \tag{2.6}\]
where \(\pi\) is the uniform probability measure on \(S^{d-1}\times[-1,1]\), and \(\pi_{P}\) is the empirical measure associated to the set of halfspaces \(P\), i.e.
\[\pi_{P}=\frac{1}{|P|}\sum_{(\omega,b)\in P}\delta_{(\omega,b)} \tag{2.7}\]
where \(\delta_{(\omega,b)}\) denotes the Dirac measure at the point \((\omega,b)\).
Let \(x_{1},...,x_{N}\in B^{d}\) (viewed as elements of the set system \(\mathcal{S}\)) be a maximal set of points such that \(d_{\mu}(x_{i},x_{j})\geq\delta/2\). By Lemma 4 and the VC-dimension bound in Lemma 3, we have that
\[N\leq\left(\frac{2C}{\delta}\right)^{d}. \tag{2.8}\]
Moreover, given a \(z\in B^{d}\) there is an \(x_{i}\) such that \(d_{\mu}(x_{i},z)<\delta/2\) by the maximality of the set \(x_{1},...,x_{N}\). From the definition of \(\mu\), this means that \(d_{\pi}(x_{i},z)<\delta\) and \(d_{\pi_{P}}(x_{i},z)<\delta\). Given the form (2.7) of \(\pi_{P}\), the condition \(d_{\pi_{P}}(x_{i},z)<\delta\) is equivalent to
\[|P\cap\{(\omega,b)\in S^{d-1}\times[-1,1],\ \operatorname{sgn}\left(\omega \cdot x_{i}+b\right)\neq\operatorname{sgn}\left(\omega\cdot z+b\right)\}|< \delta|P|=\delta N.\]
On the other hand, \(d_{\mu}(x_{i},z)<\delta\) implies that
\[\mathbb{E}_{\omega\in S^{d-1}}\left[\mathbb{P}(x_{i}\cdot\omega<b<z\cdot\omega\text{ or }z\cdot\omega<b<x_{i}\cdot\omega)\right]=\frac{1}{2}\mathbb{E}_{\omega\in S^{d-1}}\left|(x_{i}-z)\cdot\omega\right|<\delta, \tag{2.9}\]
where the expectation denotes an average over the sphere \(S^{d-1}\), and the probability is over a uniformly random \(b\in[-1,1]\). It is well-known that for any fixed unit vector \(w\in S^{d-1}\) we have
\[\mathbb{E}_{\omega\in S^{d-1}}|w\cdot\omega|\geq cd^{-1/2} \tag{2.10}\]
for an absolute constant \(c\). Together with (2.9), this implies that \(|x_{i}-z|<C\delta\sqrt{d}\) as desired.
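For completeness, we note that (2.10) follows from a direct computation (valid for all \(d\geq 2\)): the marginal density of \(t=w\cdot\omega\) when \(\omega\) is uniform on \(S^{d-1}\) is proportional to \((1-t^{2})^{(d-3)/2}\), which yields
\[\mathbb{E}_{\omega\in S^{d-1}}|w\cdot\omega|=\frac{\Gamma(d/2)}{\sqrt{\pi}\,\Gamma\left(\frac{d+1}{2}\right)}\sim\sqrt{\frac{2}{\pi d}}\]
by the standard asymptotics for ratios of Gamma functions.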
## 3 Approximation by Shallow ReLU\({}^{k}\) Neural Networks
In this section, we give the proof of Theorem 2, which follows easily from the following Proposition.
**Proposition 1**.: _Fix an integer \(k\geq 0\). Let \(\tau\) be a probability distribution on \(S^{d-1}\times[-1,1]\) which is supported on \(N\) points for \(N\) sufficiently large (\(N\geq 6\) is sufficient). Then there exists a probability distribution \(\tau^{\prime}\) supported on at most \((1-c)N\) points such that for all multi-indices \(\alpha\) with \(|\alpha|\leq k\) we have_
\[\sup_{x\in B^{d}}\left|D_{x}^{\alpha}\left(\int_{S^{d-1}\times[-1,1]}\sigma_{k}(\omega\cdot x+b)d\tau(\omega,b)-\int_{S^{d-1}\times[-1,1]}\sigma_{k}(\omega\cdot x+b)d\tau^{\prime}(\omega,b)\right)\right|\leq CN^{-\frac{1}{2}-\frac{2(k-|\alpha|)+1}{2d}}, \tag{3.1}\]
_where \(D_{x}^{\alpha}\) denotes the \(\alpha\)-th order derivative with respect to \(x\). Here \(C=C(d,k)\) and \(c\) is an absolute constant._
Proof of Theorem 2.: We repeatedly apply Proposition 1 to the distribution \(\tau\) until the size of the support set is at most \(n/2\). By Proposition 1, for any multi-index \(\alpha\) the error incurred in the \(\alpha\)-th derivative is bounded by
\[Cn^{-\frac{1}{2}-\frac{2(k-|\alpha|)+1}{2d}}\left(\sum_{j=0}^{\infty}(1-c)^{j\left(\frac{1}{2}+\frac{2(k-|\alpha|)+1}{2d}\right)}\right)\lesssim n^{-\frac{1}{2}-\frac{2(k-|\alpha|)+1}{2d}}, \tag{3.2}\]
since the errors in each step form a geometric series whose sum is bounded by a multiple of its largest term (which is the error made in the last step).
Proof of Theorem 3.: Suppose without loss of generality that \(\left\|f\right\|_{\mathcal{K}_{1}(\mathbb{P}^{d}_{k})}\leq 1\), i.e. that \(f\in B_{1}(\mathbb{P}^{d}_{k})\).
By definition, this means that for any \(\varepsilon>0\) there exist parameters \((\omega_{1},b_{1}),...,(\omega_{N},b_{N})\in S^{d-1}\times[-1,1]\) and weights \(a_{1},...,a_{N}\in\mathbb{R}\) (for a sufficiently large \(N\)) such that
\[\left\|f-\sum_{i=1}^{N}a_{i}\sigma_{k}(\omega_{i}\cdot x+b_{i})\right\|_{W^{k}(L_{\infty}(\Omega))}<\varepsilon, \tag{3.3}\]
and \(\sum_{i=1}^{N}|a_{i}|\leq 1\). The next step is to approximate the sum in (3.3) by an element in \(\Sigma_{n}^{1}(\mathbb{P}^{d}_{k})\). To do this, we split the sum into its positive and negative parts, i.e. we write
\[\sum_{i=1}^{N}a_{i}\sigma_{k}(\omega_{i}\cdot x+b_{i})=\sum_{a_{i}>0}a_{i} \sigma_{k}(\omega_{i}\cdot x+b_{i})-\sum_{a_{i}<0}|a_{i}|\sigma_{k}(\omega_{i }\cdot x+b_{i}). \tag{3.4}\]
By considering the positive and negative pieces separately, we essentially reduce to the case where all \(a_{i}\) are positive. In this case, the sum can be written
\[\sum_{i=1}^{N}a_{i}\sigma_{k}(\omega_{i}\cdot x+b_{i})=\int_{S^{d-1}\times[-1, 1]}\sigma_{k}(\mathbf{\omega}\cdot x+b)d\tau(\mathbf{\omega},b) \tag{3.5}\]
for a probability measure \(\tau\) supported on at most \(N\) points.
Applying Theorem 2 gives an \(f_{n}\in\Sigma_{n/2}^{1}(\mathbb{P}^{d}_{k})\) such that
\[\left\|f_{n}-\sum_{i=1}^{N}a_{i}\sigma_{k}(\omega_{i}\cdot x+b_{i})\right\|_{W^{m}(L_{\infty}(\Omega))}\leq Cn^{-\frac{1}{2}-\frac{2(k-m)+1}{2d}}, \tag{3.6}\]
whenever \(a_{i}\geq 0\) and \(\sum_{i=1}^{N}a_{i}=1\). Applying this to the positive and negative parts in (3.4) and summing them gives an \(f_{n}\in\Sigma_{n}^{1}(\mathbb{P}^{d}_{k})\) such that
\[\left\|f-f_{n}\right\|_{W^{m}(L_{\infty}(\Omega))}\leq Cn^{-\frac{1}{2}-\frac{2(k-m)+1}{2d}}+\varepsilon. \tag{3.7}\]
Since \(\varepsilon>0\) was arbitrary, this completes the proof.
It remains to prove Proposition 1. The proof utilizes the ideas of geometric discrepancy theory and borrows many ideas from the proof of Proposition 9 in [28]. However, Proposition 9 in [28] only deals with uniform distributions, and a few key modifications are required to deal with the case of 'unbalanced' distributions \(\tau\), which enables us to remove the logarithmic factors in all dimensions in Theorems 1, 2, and 3. In addition, dealing with the higher order smoothness of the ReLU\({}^{k}\) activation function introduces significant technical difficulties.
We first introduce some notation. We will need to work with symmetric tensors in order to handle higher order derivatives of multivariate functions. Our tensors will be defined on the ambient space \(\mathbb{R}^{d+1}\) containing the sphere \(S^{d}\), so let \(I=\{1,...,d+1\}\) denote the relevant indexing set. A (symmetric) tensor \(X\) of order \(m\) is an array of numbers indexed by a tuple \(\mathbf{i}\in I^{m}\), which satisfies
\[X_{\mathbf{i}}=X_{\pi(\mathbf{i})} \tag{3.8}\]
for any permutation \(\pi\) of \(\{1,...,m\}\). Here \(\pi(\mathbf{i})_{j}=\mathbf{i}_{\pi(j)}\) for \(j=1,...,m\). Note that vectors in \(\mathbb{R}^{d+1}\) are symmetric tensors of order one. We adopt the \(\ell^{\infty}\) norm on the space of symmetric tensors, i.e.
\[\|X\|:=\max_{\mathbf{i}\in I^{m}}|X_{\mathbf{i}}|. \tag{3.9}\]
Given tensors \(X\) and \(Y\) of orders \(m_{1}\) and \(m_{2}\), their tensor product, which is a tensor of order \(m_{1}+m_{2}\), is defined in the standard way by
\[(X\otimes Y)_{\mathbf{ij}}=X_{\mathbf{i}}Y_{\mathbf{j}}, \tag{3.10}\]
where \(\mathbf{i}\in I^{m_{1}}\), \(\mathbf{j}\in I^{m_{2}}\) and \(\mathbf{ij}\) denotes concatenation. We will also write \(X^{\otimes r}\) for the \(r\)-fold tensor product of \(X\) with itself. Supposing that \(m_{1}\geq m_{2}\), we define the contraction, which is a tensor of order \(m_{1}-m_{2}\) by
\[\langle X,Y\rangle_{\mathbf{i}}=\sum_{\mathbf{j}\in I^{m_{2}}}X_{\mathbf{i}\mathbf{j}}Y_{\mathbf{j}}. \tag{3.11}\]
Note that since we will be exclusively working with \(\mathbb{R}^{d+1}\) with the standard inner product, to simplify the presentation we will not make the distinction between covariance and contravariance in the following.
We remark that repeated contraction can be written in terms of the tensor product in the following way
\[\langle\langle X,Y\rangle,Z\rangle=\langle X,Y\otimes Z\rangle, \tag{3.12}\]
and also note the inequality
\[\|\langle X,Y\rangle\|\leq C\|X\|\|Y\|, \tag{3.13}\]
where \(C=C(d,k)=(d+1)^{k}\).
Given an order \(0\leq m\leq k\), we denote the \(m\)-th derivative (tensor) of the ReLU\({}^{k}\) function (with \(\omega\) and \(b\) fixed) by
\[\sigma_{k}^{(m)}(x;\omega,b)=D_{x}^{m}[\sigma_{k}(\omega\cdot x+b)]=\begin{cases} \frac{k!}{(k-m)!}(\omega\cdot x+b)^{k-m}\omega^{\otimes m}&\omega\cdot x+b \geq 0\\ 0&\omega\cdot x+b<0.\end{cases} \tag{3.14}\]
In order to deal with the additional smoothness of the activation function we will need to utilize higher-order Taylor polynomials. Given points \(x_{1},x_{2}\in S^{d}\), parameters \((\omega,b)\in S^{d-1}\times[-1,1]\), an order \(0\leq m\leq k\), and a number of terms \(0\leq r\leq k-m\), we denote by
\[\mathcal{T}_{x_{1}}^{m,r}(x_{2};\omega,b):=\sum_{q=0}^{r}\frac{1}{q!}\left< \sigma_{k}^{(m+q)}(x_{1};\omega,b),(x_{2}-x_{1})^{\otimes q}\right> \tag{3.15}\]
the \(r\)-th order Taylor polynomial of \(\sigma_{k}^{(m)}(x;\omega,b)\) around \(x_{1}\) evaluated at \(x_{2}\).
Proof of Proposition 1.: Note that since \(\tau\) is supported on \(N\) points the integral which we are trying to approximate in Proposition 1 is given by
\[\int_{S^{d-1}\times[-1,1]}\sigma_{k}(\omega\cdot x+b)d\tau(\omega,b)=\sum_{( \omega,b)\in S}a_{\omega,b}\sigma_{k}(\omega\cdot x+b), \tag{3.16}\]
where \(|S|=N\) and the coefficients \(a_{\omega,b}\) satisfy \(a_{\omega,b}\geq 0\) and \(\sum_{(\omega,b)\in S}a_{\omega,b}=1\).
Let \(M\) denote the median of the coefficients \(a_{\omega,b}\) and set
\[S_{-}=\{(\omega,b)\in S:\ a_{\omega,b}\leq M\},\ \ S_{+}=\{(\omega,b)\in S:a_{ \omega,b}>M\}.\]
This gives a decomposition of the sum in (3.16) in terms of its large and small coefficients
\[\sum_{(\omega,b)\in S}a_{\omega,b}\sigma_{k}(\omega\cdot x+b)=\sum_{(\omega,b )\in S_{-}}a_{\omega,b}\sigma_{k}(\omega\cdot x+b)+\sum_{(\omega,b)\in S_{+}}a _{\omega,b}\sigma_{k}(\omega\cdot x+b). \tag{3.17}\]
We will leave the second, i.e. the large, sum untouched and approximate the small sum by
\[\sum_{(\omega,b)\in S_{-}}a_{\omega,b}\sigma_{k}(\omega\cdot x+b)\approx\sum_{ (\omega,b)\in T}b_{\omega,b}\sigma_{k}(\omega\cdot x+b), \tag{3.18}\]
where \(T\subset S_{-}\) and \(|T|\leq(1-c)|S_{-}|\), the new coefficients \(b_{\omega,b}\geq 0\) and satisfy
\[\sum_{(\omega,b)\in T}b_{\omega,b}=\sum_{(\omega,b)\in S_{-}}a_{\omega,b},\]
and the error of approximation satisfies (using the tensor norm we have introduced)
\[\sup_{x\in B^{d}}\left\|\sum_{(\omega,b)\in S_{-}}a_{\omega,b}\sigma_{k}^{(m)}(x;\omega,b)-\sum_{(\omega,b)\in T}b_{\omega,b}\sigma_{k}^{(m)}(x;\omega,b)\right\|\leq CN^{-\frac{1}{2}-\frac{2(k-m)+1}{2d}} \tag{3.19}\]
for \(m=0,...,k\). Setting
\[\tau^{\prime}=\sum_{(\omega,b)\in T}b_{\omega,b}\delta_{\omega,b}+\sum_{( \omega,b)\in S_{+}}a_{\omega,b}\delta_{\omega,b} \tag{3.20}\]
now completes the proof since \(|S_{-}|\geq N/2\) (here \(\delta_{\omega,b}\) denotes the Dirac delta distribution at \((\omega,b)\)).
We now turn to the heart of the proof, which is constructing an approximation (3.18) which satisfies (3.19). Note first that by construction, we have
\[\max_{(\omega,b)\in S_{-}}a_{\omega,b}\leq M\leq\frac{2}{N}, \tag{3.21}\]
i.e. all of the coefficients in the small half are at most \(2/N\). This holds since at least half (i.e. at least \(N/2\)) of the \(a_{\omega,b}\) are at least as large as the median \(M\) and \(\sum_{(\omega,b)\in S}a_{\omega,b}=1\).
Next we construct a multi-scale covering of the ball using Lemma 1. For \(l=1,...,L\) with \(2^{L}>N\) we apply Lemma 1 with \(P=S_{-}\) and \(\delta=2^{-l}\) to obtain a sequence of sets \(N_{l}\subset B^{d}\) with \(|N_{l}|\leq(C2^{l})^{d}\) such that for any \(x\in B^{d}\) there exists a \(z\in N_{l}\) with \(|x-z|\leq C2^{-l}\sqrt{d}\) and
\[|\{y\in S_{-}:\operatorname{sgn}\left(y\cdot x\right)\neq\operatorname{sgn} \left(y\cdot z\right)\}|\leq 2^{-l}|S_{-}|\leq 2^{-l}N.\]
Given a point \(x\in B^{d}\), we denote by \(\pi_{l}(x)\) the point \(z\in N_{l}\) satisfying these properties (if this point is not unique we choose one arbitrarily for each \(x\)).
For each level \(l=1,...,L\), each point \(x\in N_{l}\), and each index \(m=0,...,k\) we consider the function
\[\phi_{x,l}^{m}(\omega,b)=\begin{cases}\sigma_{k}^{(m)}(x;\omega,b)-\mathcal{T }_{\pi_{l-1}(x)}^{m,k-m}(x;\omega,b)&l\geq 2\\ \sigma_{k}^{(m)}(x;\omega,b)&l=1,\end{cases} \tag{3.22}\]
where \(\mathcal{T}_{\pi_{l-1}(x)}^{m,k-m}(x;\omega,b)\) is the \((k-m)\)-th order Taylor polynomial of \(\sigma_{k}^{(m)}(x;\omega,b)\) defined in (3.15).
We note the following bounds on \(\phi_{x,l}^{m}(\omega,b)\). First, if \(\operatorname{sgn}\left(\omega\cdot x+b\right)=\operatorname{sgn}\left( \omega\cdot\pi_{l-1}(x)+b\right)\) (for \(l\geq 2\)), then
\[\phi_{x,l}^{m}(\omega,b)=0. \tag{3.23}\]
This holds since on the half space \(\{x:\omega\cdot x+b\geq 0\}\) the function \(\sigma_{k}^{(m)}(x;\omega,b)\) is a polynomial of degree \(k-m\) in \(x\). Thus on this half-space it is equal to its \((k-m)\)-th order Taylor polynomial about any point. So if \(x\) and \(\pi_{l-1}(x)\) both lie in this half-space, then the difference in (3.22) vanishes. On the other hand, if \(x\) and \(\pi_{l-1}(x)\) both lie in the complement, then all terms in (3.22) are 0.
On the other hand, for any \(x\in B^{d}\) and \((\omega,b)\in S^{d-1}\times[-1,1]\) we have the bound
\[\|\phi_{x,l}^{m}(\omega,b)\|\leq C2^{-l(k-m)}, \tag{3.24}\]
where \(C=C(d,k)\). This holds since \(\sigma_{k}^{(m)}(x;\omega,b)\) (as a function of \(x\)) has \((k-m)\)-th order derivatives which are bounded by \(C(k)=k!2^{k-m}\) for \(x\in B^{d}\) by (3.14). Thus using Taylor's theorem the difference in (3.22) is bounded by
\[\left\|\sigma_{k}^{(m)}(x;\omega,b)-\mathcal{T}_{\pi_{l-1}(x)}^{m,k-m}(x; \omega,b)\right\|\leq C|x-\pi_{l-1}(x)|^{k-m}\leq C2^{-l(k-m)}, \tag{3.25}\]
for \(C=C(d,k)\). When \(l=1\) we also trivially obtain the bound (3.24).
The next step is to decompose the functions \(\sigma_{k}^{(m)}(x;\omega,b)\) with \((\omega,b)\in S_{-}\) in terms of the \(\phi_{x,l}^{m}(\omega,b)\). This is captured in the following technical Lemma.
**Lemma 5**.: _Let \(\phi_{x,l}^{m}\) be defined by (3.22). For \(x\in B^{d}\) define \(x_{L}=\pi_{L}(x)\) and \(x_{l}=\pi_{l}(x_{l+1})\) for \(l<L\)._
_Then for any \(m=0,...,k\), \(x\in B^{d}\) and \((\omega,b)\in S_{-}\) we have_
\[\sigma_{k}^{(m)}(x;\omega,b)=\sum_{l=1}^{L}\phi_{x_{l},l}^{m}(\omega,b)+\sum_{i=1}^{k-m}\sum_{l=1}^{L}\left\langle\phi_{x_{l},l}^{m+i}(\omega,b),\Gamma_{i,l}^{m}(x)\right\rangle, \tag{3.26}\]
_for a collection of tensors \(\Gamma_{i,l}^{m}(x)\) depending upon \(x\) which satisfy the bound_
\[\|\Gamma_{i,l}^{m}(x)\|\leq C2^{-il}, \tag{3.27}\]
_for a constant \(C(d,k)\)._
The proof of this Lemma is a technical, but relatively straightforward calculation, so we postpone it until the end of this Section and complete the proof of Proposition 1 first.
Utilizing the decomposition (3.26) we write the LHS of (3.19) as
\[\begin{split}\sup_{x\in B^{d}}&\left\|\sum_{l=1}^{L}\left[\sum_{(\omega,b)\in S_{-}}a_{\omega,b}\phi_{x_{l},l}^{m}(\omega,b)-\sum_{(\omega,b)\in T}b_{\omega,b}\phi_{x_{l},l}^{m}(\omega,b)\right]\right.\\ &\left.+\sum_{l=1}^{L}\sum_{i=1}^{k-m}\left\langle\sum_{(\omega,b)\in S_{-}}a_{\omega,b}\phi_{x_{l},l}^{m+i}(\omega,b)-\sum_{(\omega,b)\in T}b_{\omega,b}\phi_{x_{l},l}^{m+i}(\omega,b),\Gamma_{i,l}^{m}(x)\right\rangle\right\|.\end{split} \tag{3.28}\]
Utilizing the triangle inequality, the bound (3.13), and the bound (3.27), we see that it suffices to find the subset \(T\subset S_{-}\) with \(|T|\leq(1-c)|S_{-}|\), and new coefficients \(b_{\omega,b}\geq 0\) satisfying \(\sum_{(\omega,b)\in T}b_{\omega,b}=\sum_{(\omega,b)\in S_{-}}a_{\omega,b}\) such that for \(m=0,...,k\) we have
\[\sum_{l=1}^{L}\sum_{i=0}^{k-m}2^{-il}\sup_{x\in N_{l}}\left\|\sum_{(\omega,b) \in S_{-}}a_{\omega,b}\phi_{x,l}^{m+i}(\omega,b)-\sum_{(\omega,b)\in T}b_{ \omega,b}\phi_{x,l}^{m+i}(\omega,b)\right\|\leq CN^{-\frac{1}{2}-\frac{2(k-m) +1}{2d}} \tag{3.29}\]
for a constant \(C=C(d,k)\).
To find this set \(T\) and new coefficients \(b_{\omega,b}\), we divide the set \(P=S_{-}\) into disjoint subsets \(P_{1},...,P_{t}\) of size \(3\) with
\[\left|\bigcup_{i=1}^{t}P_{i}\right|\geq|S_{-}|/2. \tag{3.30}\]
(Note that here we need \(|S_{-}|\geq 3\) which follows from \(N\geq 6\).) We denote the three elements of each set \(P_{j}\) by
\[P_{j}=\{u_{j},v_{j},w_{j}\},\]
which are ordered so that the coefficients satisfy \(0\leq a_{u_{j}}\leq a_{v_{j}}\leq a_{w_{j}}\). Note that \(S_{-}\) contains halfspaces, so each of the elements \(u_{j},v_{j},w_{j}\) consists of an \((\omega,b)\) tuple.
Based upon the partition \(P_{1},...,P_{t}\), we will use a modification of the partial coloring argument given in [28] (the idea is originally due to Spencer [40] and Beck [6]). The main difference is in how we use a partial coloring to reduce the number of terms in the sum over \(S_{-}\).
Given a 'partial coloring' \(\chi:\{1,...,t\}\rightarrow\{-1,0,1\}\), we transform the sum \(\sum_{(\omega,b)\in S_{-}}a_{\omega,b}\sigma_{k}(\omega\cdot x+b)\) in the following way. If \(\chi(j)=1\), we remove the term corresponding to \(u_{j}\), double the coefficient \(a_{v_{j}}\) of the term corresponding to \(v_{j}\), and add the difference \(a_{u_{j}}-a_{v_{j}}\) to the coefficient \(a_{w_{j}}\) of the term corresponding to \(w_{j}\). If \(\chi(j)=-1\), we do the same but reverse the roles of \(u_{j}\) and \(v_{j}\).
This results in a transformed sum \(\sum_{(\omega,b)\in T}b_{\omega,b}\sigma_{k}(\omega\cdot x+b)\) over a set \(T\subset S_{-}\) and with coefficients \(b_{\omega,b}\) for \((\omega,b)\in T\) described as follows. Let
\[R_{j}=\begin{cases}\emptyset&\chi(j)=0\\ \{u_{j}\}&\chi(j)=1\\ \{v_{j}\}&\chi(j)=-1,\end{cases} \tag{3.31}\]
denote the removed set for each \(P_{j}\). Then the set \(T\) is given by
\[T=S_{-}\setminus\left(\bigcup_{j=1}^{t}R_{j}\right), \tag{3.32}\]
and for \((\omega,b)\in T\) the coefficients \(b_{\omega,b}\) are given by
\[b_{\omega,b}=\begin{cases}a_{\omega,b}&(\omega,b)\notin\bigcup_{j}P_{j}\\ (1+\chi(j))a_{\omega,b}&(\omega,b)=v_{j}\\ (1-\chi(j))a_{\omega,b}&(\omega,b)=u_{j}\\ a_{\omega,b}+\chi(j)(a_{u_{j}}-a_{v_{j}})&(\omega,b)=w_{j}.\end{cases} \tag{3.33}\]
We have constructed this transformation so that
\[\sum_{(\omega,b)\in T}b_{\omega,b}=\sum_{(\omega,b)\in S_{-}}a_{\omega,b},\]
the \(b_{\omega,b}\geq 0\) since \(w_{j}\) has the largest coefficient among the halfspaces in \(P_{j}\), and for any \(x\in B^{d}\) the error in the \(m\)-th derivative is given by
\[\begin{split}\sum_{(\omega,b)\in S_{-}}a_{\omega,b}\sigma^{(m)}_ {k}(x;\omega,b)-\sum_{(\omega,b)\in T}b_{\omega,b}\sigma^{(m)}_{k}(x;\omega,b) =\\ \sum_{j=1}^{t}\chi(j)\left[-a_{u_{j}}\sigma^{(m)}_{k}(x;u_{j})+a _{v_{j}}\sigma^{(m)}_{k}(x;v_{j})+(a_{u_{j}}-a_{v_{j}})\sigma^{(m)}_{k}(x;w_{ j})\right].\end{split} \tag{3.34}\]
Using the linearity of the derivative and the definition of the Taylor polynomial (3.15), this implies that for any \(x\in B^{d}\), any level \(l=1,...,L\), and any \(m=0,...,k\) we have
\[\sum_{(\omega,b)\in S_{-}}a_{\omega,b}\phi^{m}_{x,l}(\omega,b)-\sum_{(\omega,b)\in T}b_{\omega,b}\phi^{m}_{x,l}(\omega,b)=\sum_{j=1}^{t}\chi(j)\Psi^{m}_{x,l,j}, \tag{3.35}\]
where we have defined
\[\Psi^{m}_{x,l,j}=-a_{u_{j}}\phi^{m}_{x,l}(u_{j})+a_{v_{j}}\phi^{m}_{x,l}(v_{j} )+(a_{u_{j}}-a_{v_{j}})\phi^{m}_{x,l}(w_{j}). \tag{3.36}\]
Further, for any index \(j\) such that \(\chi(j)\neq 0\), we have eliminated one term (either \(u_{j}\) or \(v_{j}\)) from the sum. Thus
\[|T|=|S_{-}|-|\{j:\ \chi(j)\neq 0\}|. \tag{3.37}\]
We proceed to find a partial coloring \(\chi:\{1,...,t\}\rightarrow\{-1,0,1\}\) with a positive fraction of non-zero entries, i.e. with \(|\{j:\ \chi(j)\neq 0\}|\geq ct\), such that for \(m=0,...,k\)
\[\sum_{l=1}^{L}\sum_{i=0}^{k-m}2^{-il}\sup_{x\in N_{l}}\left\|\sum_{j=1}^{t}\chi(j)\Psi^{m+i}_{x,l,j}\right\|\leq CN^{-\frac{1}{2}-\frac{2(k-m)+1}{2d}}, \tag{3.38}\]
for a constant \(C=C(d,k)\).
By (3.35) this will guarantee that the LHS in (3.29) is sufficiently small, and by (3.37) this will guarantee that the set \(T\) is small enough, since by (3.30)
\[|T|=|S_{-}|-|\{j:\ \chi(j)\neq 0\}|\leq|S_{-}|-ct\leq\left(1-\frac{c}{6}\right)|S_{-}|. \tag{3.39}\]
The existence of such a partial coloring \(\chi\) follows from a well-known technique in discrepancy theory called the partial coloring method.
Given a (total) coloring \(\varepsilon:\{1,...,t\}\rightarrow\{\pm 1\}\) we consider the quantities
\[E^{m}_{x,l}(\varepsilon):=\sum_{j=1}^{t}\varepsilon(j)\Psi^{m}_{x,l,j} \tag{3.40}\]
for each \(x\in N_{l}\). We would like to find a coloring \(\varepsilon\) such that \(\|E^{m}_{x,l}(\varepsilon)\|\leq\Delta^{m}_{l}\) for all \(l=1,...,L\), \(m=0,...,k\) and \(x\in N_{l}\), where the \(\Delta^{m}_{l}\) are suitable parameters chosen so that
\[\sum_{l=1}^{L}\sum_{i=0}^{k-m}2^{-il}\Delta^{m+i}_{l}\leq CN^{-\frac{1}{2}- \frac{2(k-m)+1}{2d}}, \tag{3.41}\]
for \(m=0,...,k\).
One strategy would be to choose \(\varepsilon\) uniformly at random, bound the tail of the random variable \(E_{x,l}^{m}(\varepsilon)\), and use a union bound over \(x\in N_{l}\). Unfortunately, this strategy will lose a factor \(\sqrt{\log N}\). The ingenious method to get around this, due to Spencer [40] and Beck [6], is to show that instead there exist _two_ colorings \(\varepsilon_{1}\) and \(\varepsilon_{2}\) such that \(\|E_{x,l}^{m}(\varepsilon_{1})-E_{x,l}^{m}(\varepsilon_{2})\|\leq\Delta_{l}^ {m}\) for all \(l=1,...,L\), \(m=0,...,k\), and \(x\in N_{l}\), and such that \(\varepsilon_{1}\) and \(\varepsilon_{2}\) differ in many indices, i.e.
\[|\{j:\,\varepsilon_{1}(j)\neq\varepsilon_{2}(j)\}|\geq ct \tag{3.42}\]
for an absolute constant \(c\). Then \(\chi=\frac{1}{2}(\varepsilon_{1}-\varepsilon_{2})\) gives the desired partial coloring.
We will prove the existence of these two colorings \(\varepsilon_{1}\) and \(\varepsilon_{2}\) for suitably chosen parameters \(\Delta_{l}^{m}\) satisfying (3.41). To help organize this calculation, it is convenient to introduce the notion of the entropy of a discrete distribution (see for instance [1, 29]). (Note that for simplicity all of the logarithms in the following are taken with base 2.)
**Definition 3**.: _Let \(X\) be a discrete random variable, i.e. the range of \(X\) is a countable set \(\Lambda\). The entropy of \(X\) is defined by_
\[H(X)=-\sum_{\lambda\in\Lambda}p_{\lambda}\log(p_{\lambda}), \tag{3.43}\]
_where \(p_{\lambda}=\mathbb{P}(X=\lambda)\) is the probability of the outcome \(\lambda\)._
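For example, a uniform distribution over \(2^{m}\) outcomes has entropy exactly \(m\); in particular, a uniformly random coloring \(\varepsilon:\{1,...,t\}\to\{\pm 1\}\) has entropy \(t\).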
One important property of the entropy we will use is subadditivity, i.e. if \(X=(X_{1},...,X_{r})\), then
\[H(X)\leq\sum_{j=1}^{r}H(X_{j}), \tag{3.44}\]
where we have equality in the above bound when the components \(X_{j}\) of \(X\) are independent.
An important component of the calculation is the following Lemma from [28] (see also [1, 30]).
**Lemma 6** (Lemma 11 in [28]).: _Let \(\varepsilon:\{1,...,t\}\to\{\pm 1\}\) be a uniformly random coloring. Let \(b\) be a function of \(\varepsilon\) and suppose that the entropy satisfies \(H(b(\varepsilon))\leq t/5\). Then there exist two colorings \(\varepsilon_{1},\varepsilon_{2}\) differing in at least \(t/4\) components such that \(b(\varepsilon_{1})=b(\varepsilon_{2})\)._
We utilize this lemma in the following way. Take each entry of the (tensor-valued) random variable \(E_{x,l}^{m}(\varepsilon)\) defined in (3.40) and round it to the nearest multiple of the (still undetermined) parameter \(\Delta_{l}^{m}\). This results in a random variable
\[b_{x,l}^{m}(\varepsilon)=[(\Delta_{l}^{m})^{-1}E_{x,l}^{m}(\varepsilon)], \tag{3.45}\]
where \([\cdot]\) denote the (component-wise) nearest integer function. Note that if \(b_{x,l}^{m}(\varepsilon_{1})=b_{x,l}^{m}(\varepsilon_{2})\), then it follows that \(\|E_{x,l}^{m}(\varepsilon_{1})-E_{x,l}^{m}(\varepsilon_{2})\|\leq\Delta_{l}^ {m}\). Applying Lemma 6 and the subadditivity of the entropy (3.44), we see that if
\[\sum_{l=1}^{L}\sum_{m=0}^{k}\sum_{x\in N_{l}}H(b_{x,l}^{m}(\varepsilon))\leq t/5 \tag{3.46}\]
for an appropriate choice of \(\Delta_{l}^{m}\) satisfying (3.41), then there exist two colorings \(\varepsilon_{1}\) and \(\varepsilon_{2}\) satisfying the desired condition with \(c=1/4\).
It remains to choose the parameters \(\Delta_{l}^{m}\) satisfying (3.41) and to bound the sum in (3.46). For this, we utilize the following Lemma from [28], which bounds the entropy of a 'rounded' random variable in terms of the tails of the underlying real-valued random variable.
**Lemma 7** (Lemma 11 in [28]).: _Let \(E\) be a real valued random variable satisfying the tail estimates_
\[\mathbb{P}(E\geq\alpha M)\leq e^{-\alpha^{2}/2},\;\mathbb{P}(E\leq-\alpha M) \leq e^{-\alpha^{2}/2}, \tag{3.47}\]
_for some parameter \(M\). Let \(b(E)\) denote the random variable obtained by rounding \(E\) to the nearest multiple of \(\Delta=2\lambda M\). Then the entropy of \(b\) satisfies_
\[H(b)\leq G(\lambda):=C_{0}\begin{cases}e^{-\lambda^{2}/9}&\lambda\geq 10\\ 1&0.1<\lambda<10\\ -\log(\lambda)&\lambda\leq 0.1\end{cases} \tag{3.48}\]
_for an absolute constant \(C_{0}\)._
To apply this Lemma, we bound the tails of the random variables \(E^{m}_{x,l}(\varepsilon)\). This follows using Bernstein's inequality as in [28]. Fix an \(l\geq 2\) and an \(x\in N_{l}\). We call an index \(j\) 'good' if
\[\operatorname{sgn}(\omega\cdot x+b)=\operatorname{sgn}(\omega\cdot\pi_{l-1}(x)+b) \tag{3.49}\]
for \((\omega,b)=u_{j},v_{j},\) and \(w_{j}\), and 'bad' otherwise. Using (3.23) we see that for the good indices we have
\[\Psi^{m}_{x,l,j}=0. \tag{3.50}\]
For the bad indices, we utilize (3.24) to get
\[\|\Psi^{m}_{x,l,j}\|\leq C2^{-(k-m)l}N^{-1}, \tag{3.51}\]
since \(a_{u_{j}},a_{v_{j}}\leq 2/N\) by (3.21). Next, we bound the number of bad indices. An index is bad if
\[\operatorname{sgn}(\omega\cdot x+b)\neq\operatorname{sgn}(\omega\cdot\pi_{l- 1}(x)+b), \tag{3.52}\]
for \((\omega,b)=u_{j},v_{j}\) or \(w_{j}\). From the construction of \(N_{l-1}\) using Lemma 1, the number of \((\omega,b)\) for which (3.52) occurs (and thus the number of bad indices) is bounded by \(2^{-(l-1)}|S_{-}|\leq C2^{-l}N\).
Thus, Bernstein's inequality (applied only to the bad indices) gives the following bound on the components of the random variable \(E^{m}_{x,l}\),
\[\mathbb{P}\left((E^{m}_{x,l})_{\mathbf{i}}\geq\alpha M^{m}_{l}\right)\leq e^{-\alpha^{2}/2} \tag{3.53}\]
for all \(\mathbf{i}\in I^{m}\), where
\[M^{m}_{l}=C2^{-\left(k-m+\frac{1}{2}\right)l}N^{-1/2}, \tag{3.54}\]
for a constant \(C=C(d,k)\). The same bound holds also for the negative tails.
The proof is completed via the following calculation (see [28, 30] for similar calculations). We let \(\alpha,\beta>0\) be parameters to be specified in a moment and define a function
\[\Lambda_{\alpha,\beta}(x)=\begin{cases}2^{\alpha x}&x\geq 0\\ 2^{\beta x}&x\leq 0.\end{cases} \tag{3.55}\]
Let \(\kappa>0\) be another parameter and set \(\tau\) to be the largest integer satisfying \(2^{d\tau}\leq\kappa N\). We set the discretization parameter to be
\[\Delta^{m}_{l}=2M^{m}_{l}\Lambda_{\alpha,\beta}(l-\tau). \tag{3.56}\]
Fix an \(m\) with \(0\leq m\leq k\). We calculate
\[\begin{split}\sum_{l=1}^{L}\sum_{i=0}^{k-m}2^{-il}\Delta^{m+i}_{l}&\leq CN^{-1/2}\sum_{i=0}^{k-m}\sum_{l=-\infty}^{\infty}2^{-il}2^{-\left(k-m-i+\frac{1}{2}\right)l}\Lambda_{\alpha,\beta}(l-\tau)\\ &\leq CN^{-1/2}\sum_{l=-\infty}^{\infty}2^{-\left(k-m+\frac{1}{2}\right)l}\Lambda_{\alpha,\beta}(l-\tau),\end{split} \tag{3.57}\]
since all of the terms in the sum over \(i\) are the same. Making a change of variables in the last sum, we get
\[\sum_{l=1}^{L}\sum_{i=0}^{k-m}2^{-il}\Delta^{m+i}_{l}\leq CN^{-1/2}2^{-\left( k-m+\frac{1}{2}\right)\tau}\sum_{l=-\infty}^{\infty}2^{-\left(k-m+\frac{1}{2} \right)l}\Lambda_{\alpha,\beta}(l). \tag{3.58}\]
If we now choose \(\alpha\) and \(\beta\) such that the above sum over \(l\) converges (this will happen as long as \(\alpha<k-m+\frac{1}{2}<\beta\)), then we get
\[\sum_{l=1}^{L}\sum_{i=0}^{k-m}2^{-il}\Delta^{m+i}_{l}\leq CN^{-\frac{1}{2}- \frac{2(k-m)+1}{2d}}, \tag{3.59}\]
since by construction \(2^{\tau}\geq(1/2)(\kappa N)^{1/d}\) (note the constant \(C\) depends upon the choice of \(\kappa\) which will be made shortly). This verifies (3.41).
To verify the entropy condition, we calculate
\[\sum_{l=1}^{L}\sum_{m=0}^{k}\sum_{x\in N_{l}}H(b^{m}_{x,l}(\varepsilon))\leq\sum_{l=1}^{L}\sum_{m=0}^{k}\sum_{x\in N_{l}}\sum_{\mathbf{i}\in I^{m}}H(b^{m}_{x,l}(\varepsilon)_{\mathbf{i}}) \tag{3.60}\]
using subadditivity of the entropy. We now use Lemma 7 combined with the tail bound estimate (3.53) to get
\[H(b^{m}_{x,l}(\varepsilon)_{\mathbf{i}})\leq G(\Delta^{m}_{l}/(2M^{m}_{l}))=G(\Lambda_{\alpha,\beta}(l-\tau)). \tag{3.61}\]
Using that \(|I^{m}|\leq C=C(d,k)\) and \(|N_{l}|\leq C2^{dl}\), we get that
\[\sum_{l=1}^{L}\sum_{m=0}^{k}\sum_{x\in N_{l}}H(b^{m}_{x,l}(\varepsilon))\leq C\sum_{l=1}^{L}2^{dl}G(\Lambda_{\alpha,\beta}(l-\tau))\leq C\sum_{l=-\infty}^{\infty}2^{dl}G(\Lambda_{\alpha,\beta}(l-\tau)). \tag{3.62}\]
Finally, making another change of variables, we get
\[\sum_{l=1}^{L}\sum_{m=0}^{k}\sum_{x\in N_{l}}H(b^{m}_{x,l}(\varepsilon))\leq C2^{d\tau}\sum_{l=-\infty}^{\infty}2^{dl}G(\Lambda_{\alpha,\beta}(l))\leq C\kappa N, \tag{3.63}\]
since it is easy to verify that the above sum over \(l\) converges for any \(\alpha,\beta>0\). Choosing \(\kappa\) sufficiently small so that \(C\kappa\leq 1/60\) will guarantee that the condition (3.46) is satisfied (since \(t\geq N/12\) by (3.30)) and this completes the proof.
Proof of Lemma 5.: We first prove that for \(m=0,...,k\), \((\mathbf{\omega},b)\in S^{d-1}\times[-1,1]\) and \(l=1,...,L\) we have
\[\sigma^{(m)}_{k}(x_{l};\omega,b)=\sum_{j=1}^{l}\phi^{m}_{x_{j},j}(\omega,b)+\sum_{i=1}^{k-m}\sum_{j=1}^{l-1}\left\langle\phi^{m+i}_{x_{j},j}(\omega,b),\Gamma^{m,l}_{i,j}(x)\right\rangle, \tag{3.64}\]
where the tensors \(\Gamma^{m,l}_{i,j}(x)\) satisfy the bound
\[\left\|\Gamma^{m,l}_{i,j}(x)\right\|\leq C2^{-ij}, \tag{3.65}\]
for a constant \(C=C(d,k)\).
We prove this by (reverse) induction on \(m\). Note that if \(m=k\), equation (3.64) holds since the definition of \(\phi^{m}_{x,l}\) in (3.22) becomes
\[\phi^{m}_{x,l}(\omega,b)=\begin{cases}\sigma^{(m)}_{k}(x;\omega,b)-\sigma^{(m)}_{k}(\pi_{l-1}(x);\omega,b)&l\geq 2\\ \sigma^{(m)}_{k}(x;\omega,b)&l=1,\end{cases} \tag{3.66}\]
so that the first sum in (3.64) telescopes and the second sum is empty.
Let \(0\leq m\leq k\) and suppose that (3.64) holds for \(m+1,...,k\). We will show that it also holds for \(m\). Expanding out the Taylor polynomial in the definition of \(\phi^{m}_{x,l}\) for \(x=x_{l}\) we see that
\[\begin{split}\sigma^{(m)}_{k}(x_{l};\omega,b)&=\phi^{m}_{x_{l},l}(\omega,b)+\mathcal{T}^{m,k-m}_{x_{l-1}}(x_{l};\omega,b)\\ &=\phi^{m}_{x_{l},l}(\omega,b)+\sum_{q=0}^{k-m}\frac{1}{q!}\left\langle\sigma^{(m+q)}_{k}(x_{l-1};\omega,b),(x_{l}-x_{l-1})^{\otimes q}\right\rangle\\ &=\phi^{m}_{x_{l},l}(\omega,b)+\sigma^{(m)}_{k}(x_{l-1};\omega,b)+\sum_{q=1}^{k-m}\frac{1}{q!}\left\langle\sigma^{(m+q)}_{k}(x_{l-1};\omega,b),(x_{l}-x_{l-1})^{\otimes q}\right\rangle.\end{split} \tag{3.67}\]
Applying this expansion recursively to \(\sigma^{(m)}_{k}(x_{l-1};\mathbf{\omega},b)\), we get
\[\sigma^{(m)}_{k}(x_{l};\omega,b)=\sum_{j=1}^{l}\phi^{m}_{x_{j},j}(\omega,b)+\sum_{p=1}^{l-1}\sum_{q=1}^{k-m}\frac{1}{q!}\left\langle\sigma^{(m+q)}_{k}(x_{p};\omega,b),(x_{p+1}-x_{p})^{\otimes q}\right\rangle. \tag{3.68}\]
Now we use the inductive assumption to expand \(\sigma^{(m+q)}_{k}(x_{p};\mathbf{\omega},b)\) using (3.64) and apply the identity (3.12) to get
\[\begin{split}\sigma^{(m)}_{k}(x_{l};\omega,b)=&\sum_{j=1}^{l}\phi^{m}_{x_{j},j}(\omega,b)+\sum_{p=1}^{l-1}\sum_{q=1}^{k-m}\frac{1}{q!}\sum_{j=1}^{p}\left\langle\phi^{m+q}_{x_{j},j}(\omega,b),(x_{p+1}-x_{p})^{\otimes q}\right\rangle\\ &+\sum_{p=1}^{l-1}\sum_{q=1}^{k-m}\frac{1}{q!}\sum_{\ell^{\prime}=1}^{k-m-q}\sum_{j=1}^{p-1}\left\langle\phi^{m+q+\ell^{\prime}}_{x_{j},j}(\omega,b),\Gamma^{m+q,p}_{\ell^{\prime},j}(x)\otimes(x_{p+1}-x_{p})^{\otimes q}\right\rangle.\end{split} \tag{3.69}\]
Rearranging this sum, we obtain
\[\sigma^{(m)}_{k}(x_{l};\omega,b)=\sum_{j=1}^{l}\phi^{m}_{x_{j},j}(\omega,b)+\sum_{i=1}^{k-m}\sum_{j=1}^{l-1}\left\langle\phi^{m+i}_{x_{j},j}(\omega,b),\Gamma^{m,l}_{i,j}(x)\right\rangle, \tag{3.70}\]
where the tensors \(\Gamma_{i,j}^{m,l}(x)\) are defined recursively by
\[\Gamma_{i,j}^{m,l}(x)=\frac{1}{i!}\sum_{p=j}^{l-1}(x_{p+1}-x_{p})^{\otimes i}+\sum_{p=j+1}^{l-1}\sum_{q=1}^{i-1}\frac{1}{q!}\Gamma_{i-q,j}^{m+q,p}(x)\otimes(x_{p+1}-x_{p})^{\otimes q}. \tag{3.71}\]
Finally, we bound the norm \(\|\Gamma_{i,j}^{m,l}(x)\|\). By construction, the points \(x_{p}\) satisfy \(|x_{p+1}-x_{p}|\leq C2^{-p}\) for a dimension dependent constant \(C=C(d)\) (in the Euclidean norm which bounds the \(\ell^{\infty}\)-norm). This gives the bound
\[\|\Gamma_{i,j}^{m,l}(x)\|\leq C\left(\frac{1}{i!}\sum_{p=j}^{l-1}2^{-pi}+\sum_{p=j}^{l-1}\sum_{q=1}^{i-1}\frac{1}{q!}2^{-pq}\|\Gamma_{i-q,j}^{m+q,p}\|\right). \tag{3.72}\]
Utilizing the inductive assumption to bound \(\|\Gamma_{i-q,j}^{m+q,p}\|\) we get
\[\begin{split}\|\Gamma_{i,j}^{m,l}\|&\leq C\left(\frac{1}{i!}\sum_{p=j}^{l-1}2^{-pi}+\sum_{p=j}^{l-1}\sum_{q=1}^{i-1}\frac{1}{q!}2^{-pq}2^{-(i-q)j}\right)\\ &\leq C\left(\frac{1}{i!}\sum_{p=j}^{\infty}2^{-pi}+2^{-ij}\sum_{p=j}^{\infty}\sum_{q=1}^{\infty}\frac{1}{q!}2^{-q(p-j)}\right)\leq C2^{-ij},\end{split} \tag{3.73}\]
for a potentially different constant \(C\). However, the induction is completed after a finite number (namely \(k+1\)) steps. Thus the constant \(C=C(d,k)\) can be taken uniform in \(m\). This proves (3.64).
To prove (3.26), we write
\[\begin{split}\sigma_{k}^{(m)}(x;\omega,b)&=\left[\sigma_{k}^{(m)}(x;\omega,b)-\mathcal{T}_{x_{L}}^{m,k-m}(x;\omega,b)\right]+\mathcal{T}_{x_{L}}^{m,k-m}(x;\omega,b)\\ &=\left[\sigma_{k}^{(m)}(x;\omega,b)-\mathcal{T}_{x_{L}}^{m,k-m}(x;\omega,b)\right]+\sum_{q=0}^{k-m}\frac{1}{q!}\left\langle\sigma_{k}^{(m+q)}(x_{L};\omega,b),(x-x_{L})^{\otimes q}\right\rangle.\end{split} \tag{3.74}\]
We claim that if \((\omega,b)\in S_{-}\), then the first term
\[\sigma_{k}^{(m)}(x;\omega,b)-\mathcal{T}_{x_{L}}^{m,k-m}(x;\omega,b)=0. \tag{3.75}\]
This follows since by construction using Lemma 1, we have the bound
\[|S_{-}\cap\{(\omega,b)\in S^{d-1}\times[-1,1],\ \operatorname{sgn}\left( \omega\cdot x+b\right)\neq\operatorname{sgn}\left(\omega\cdot\pi_{L}(x)+b \right)\}|\leq 2^{-L}|S_{-}|<1, \tag{3.76}\]
since \(2^{-L}<N^{-1}\). Thus for all \((\omega,b)\in S_{-}\) and \(x\in B^{d}\), we have \(\operatorname{sgn}\left(\omega\cdot x+b\right)=\operatorname{sgn}\left(\omega\cdot\pi_{L}(x)+b\right)\), and the argument following (3.23) implies that the difference in (3.75) vanishes.
Next, we expand each term in the sum in (3.74) using (3.64) with \(l=L\) to get (using again the identity (3.12))
\[\sigma_{k}^{(m)}(x;\omega,b)=\sum_{q=0}^{k-m}\frac{1}{q!}\sum_{j=1}^{L}\left\langle\phi_{x_{j},j}^{m+q}(\omega,b),(x-x_{L})^{\otimes q}\right\rangle+\sum_{q=0}^{k-m}\frac{1}{q!}\sum_{\ell^{\prime}=1}^{k-m-q}\sum_{j=1}^{L-1}\left\langle\phi_{x_{j},j}^{m+q+\ell^{\prime}}(\omega,b),\Gamma_{\ell^{\prime},j}^{m+q,L}\otimes(x-x_{L})^{\otimes q}\right\rangle. \tag{3.77}\]
Rewriting this, we get
\[\sigma_{k}^{(m)}(x;\omega,b)=\sum_{j=1}^{L}\phi_{x_{j},j}^{m}(\omega,b)+\sum_{i=1}^{k-m}\sum_{j=1}^{L}\left\langle\phi_{x_{j},j}^{m+i}(\omega,b),\Gamma_{i,j}^{m}(x)\right\rangle, \tag{3.78}\]
where the tensors \(\Gamma_{i,j}^{m}(x)\) are given by
\[\Gamma_{i,j}^{m}(x)=\frac{1}{i!}(x-x_{L})^{\otimes i}+\sum_{q=0}^{i-1}\frac{1}{q!}\Gamma_{i-q,j}^{m+q,L}\otimes(x-x_{L})^{\otimes q}. \tag{3.79}\]
Finally, we bound the norm \(\|\Gamma_{i,j}^{m}(x)\|\). Utilizing that \(|x-x_{L}|\leq C2^{-L}\) and the bound (3.65) we get
\[\|\Gamma_{i,j}^{m}(x)\|\leq\frac{2^{-Li}}{i!}+C\sum_{q=0}^{i-1}2^{-(i-q)j}2^{-Lq}\leq C2^{-ij}, \tag{3.80}\]
for a constant \(C(d,k)\), since \(j\leq L\) and \(0\leq i\leq k\). Upon relabelling \(j\) to \(l\) this is exactly Lemma 5.
## 4 Acknowledgements
We would like to thank Ron DeVore, Rob Nowak, Jinchao Xu, and Rahul Parhi for helpful discussions. This work was supported by the National Science Foundation (DMS-2111387 and CCF-2205004).
|
2305.06480 | ST-GIN: An Uncertainty Quantification Approach in Traffic Data
Imputation with Spatio-temporal Graph Attention and Bidirectional Recurrent
United Neural Networks | Traffic data serves as a fundamental component in both research and
applications within intelligent transportation systems. However, real-world
transportation data, collected from loop detectors or similar sources, often
contains missing values (MVs), which can adversely impact associated
applications and research. Instead of discarding this incomplete data,
researchers have sought to recover these missing values through numerical
statistics, tensor decomposition, and deep learning techniques. In this paper,
we propose an innovative deep learning approach for imputing missing data. A
graph attention architecture is employed to capture the spatial correlations
present in traffic data, while a bidirectional neural network is utilized to
learn temporal information. Experimental results indicate that our proposed
method outperforms all other benchmark techniques, thus demonstrating its
effectiveness. | Zepu Wang, Dingyi Zhuang, Yankai Li, Jinhua Zhao, Peng Sun, Shenhao Wang, Yulin Hu | 2023-05-10T22:15:40Z | http://arxiv.org/abs/2305.06480v3 | ST-GIN: An Uncertainty Quantification Approach in Traffic Data Imputation with Spatio-temporal Graph Attention and Bidirectional Recurrent United Neural Networks
###### Abstract
Traffic data serves as a fundamental component in both research and applications within intelligent transportation systems. However, real-world transportation data, collected from loop detectors or similar sources, often contains missing values (MVs), which can adversely impact associated applications and research. Instead of discarding this incomplete data, researchers have sought to recover these missing values through numerical statistics, tensor decomposition, and deep learning techniques. In this paper, we propose an innovative deep learning approach for imputing missing data. A graph attention architecture is employed to capture the spatial correlations present in traffic data, while a bidirectional neural network is utilized to learn temporal information. Experimental results indicate that our proposed method outperforms all other benchmark techniques, thus demonstrating its effectiveness.
## I Introduction
As a crucial public resource responsible for ensuring effective communication among personnel and seamless circulation of materials, the transportation system's efficient and stable operation is pivotal to maintaining the smooth functioning of modern society [1]. In this context, traffic data assumes a fundamental role in facilitating applications and research in the transportation domain. It is indispensable for both individuals seeking route planning solutions and researchers and governments involved in transportation management and control [2].
Notably, traffic data collected from loop detectors or other channels is frequently incomplete, owing to various reasons, which poses challenges for traffic analysis and other operations in practice [3]. In this regard, despite technological advancements, the issue of missing data remains a persistent challenge that is difficult to address. For instance, according to Chandra and colleagues, data collected by loop detectors on I-4 in Orlando, Florida had a missing rate of 15 percent [4]. The ST data collection of the Georgia NaviGAtor system had an average rate of missing data ranging from 4 percent to 14 percent [5].
Traffic data can be categorized as spatial-temporal data, and it exhibits two critical characteristics [6]. Firstly, it demonstrates temporal dependence, implying the existence of non-linear temporal dependencies. For instance, traffic situations may vary dynamically, periodically, and regularly (e.g., during morning and evening rush hours), leading to changes in the correlations between different time steps. Another characteristic of traffic data is its spatial dependence, which implies the presence of dynamic spatial connections on complex networks. This means that the relationships between nodes in the road network can change over time, depending on various traffic situations. For instance, traffic congestion can have a negative effect on traffic upstream, but it may not affect traffic downstream as much.
In recent times, the advancements in deep learning have facilitated the integration of artificial intelligence into numerous real-world applications. In this research, we have proposed an innovative deep learning framework, namely ST-GIN (SpatioTemporal-Graph Imputation Network with Uncertainty Quantification), that combines graph attention neural networks and bidirectional recurrent united neural networks. We have leveraged uncertainty quantification to perform the task of data imputation.
The paper is structured as follows: Section II provides a comprehensive review and summary of previous studies related to traffic data imputation. Section III introduces the problem of missing data imputation using deep learning. The proposed methodology and the adopted loss function are described in Section IV. Section V presents the experimental results that compare the performance of several models against our proposed approach. Finally, the contributions of this study are summarized in Section VI.
## II Literature Review
### _Imputation in traffic data_
To address incomplete traffic data, a basic strategy involves discarding entire rows containing MVs; however, this approach risks losing valuable information. A more effective strategy entails preprocessing the data by imputing MVs, i.e., inferring them from the known parts of the data [7, 8]. Various imputation techniques are proposed to handle missing data in traffic datasets, with the primary categories being tensor decomposition-based and deep learning-based methods.
In decomposition-based methods, the core idea is to represent the original data in a more compact or low-rank
form, then reconstruct the full data to estimate the missing values. Bayesian tensor decomposition methods are widely utilized for transforming inputs into low-rank factors and subsequently reconstructing the complete tensor [9, 10, 11].
Nonetheless, decomposition-based methods are constrained by the optimization norm and model assumptions. In contrast, deep learning methods have recently gained traction due to their powerful representation capabilities. By incorporating both spatial and temporal knowledge, the inductive properties of neural networks offer an effective means of imputing missing values in traffic datasets [12, 13].
### _Deep Learning and graph neural networks_
Deep learning is a machine learning subfield that involves training neural networks with multiple layers to learn and represent complex patterns in data [14]. It has brought about significant advancements in many areas of research and industry. Notable deep learning architectures include Convolutional Neural Networks (CNNs) for image processing [15], Recurrent Neural Networks (RNNs) for sequential data [16], and Generative Adversarial Networks (GANs) for generating realistic images and data [17].
Graph Neural Networks (GNNs) have recently garnered considerable attention in the field of artificial intelligence and machine learning. GNNs are a type of deep learning model that can directly operate on graph-structured data. In transportation research, GNNs have gained popularity due to their capability of handling complex spatial data, such as traffic flow networks and urban transportation systems [18].
### _Uncertainty Quantification_
Research on uncertainty quantification has progressed in various deep learning fields. In this study, we focus on handling data uncertainty, which refers to the irreducible uncertainty inherent in the data generation process. For example, in linear regression, data uncertainty corresponds to the residuals, which are typically assumed to follow a normal distribution.
Data uncertainty can be characterized using parametric methods. Parametric methods involve models that parameterize a probabilistic distribution, which is commonly estimated through Bayesian methods or mean-variance estimation (MVE) [18, 19]. Despite their conceptual appeal, Bayesian methods often necessitate intensive computation, relying on sampling methods and variational inference [20, 21]. In contrast, MVE minimizes the negative log-likelihood (NLL) loss based on a pre-specified distribution of the dependent variable [22, 23]. Although MVE is computationally efficient, it can yield misleading results if probabilistic distributions are misspecified.
## III Problem Statement
Formally, a graph \(G\) is defined as an ordered pair \((V,E)\), where \(V\) is the set of vertices (or nodes) and \(E\) is the set of edges. In the context of traffic data, the topology of a road network can be represented as a graph. The set of edges \(E\) in the graph reflects the connections between road segments, while the set of vertices \(V\) stores the traffic feature (e.g. speed and flow) of the road segments. Thus, the urban road network can be represented as a graph \(G=(V,E)\), where the set of road segments \(V=\{v_{1},v_{2},\cdots,v_{N}\}\) has size \(N\). Equivalently, the connectivity information \(E\) between the road segments is stored in the adjacency matrix \(A\in\mathbb{R}^{N\times N}\), where the entry \(A_{ij}\) indicates the connectivity between the \(i\)th and \(j\)th road segments. In this paper, we adopt a well-known adjacency matrix construction technique from [24]: a value of 0 in this entry indicates no connection, while a positive value gives the weight of the edge between the two vertices \(v_{i}\) and \(v_{j}\). The weights are computed as \(A_{ij}=\exp(-\frac{dist(v_{i},v_{j})^{2}}{\sigma^{2}})\), where \(dist(v_{i},v_{j})\) is the distance between the two road segments and \(\sigma\) is a bandwidth parameter; in particular, the diagonal entries of \(A\) equal 1.
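As an illustration, a minimal sketch of this construction from a pairwise distance matrix is given below (variable names are ours; the optional distance cutoff for declaring two segments unconnected is an assumption, as the text only specifies the Gaussian weighting):

```python
import numpy as np

def gaussian_adjacency(dist, sigma, cutoff=None):
    """Weighted adjacency matrix with A_ij = exp(-dist(v_i, v_j)^2 / sigma^2).

    dist:   (N, N) array of pairwise road-network distances between segments.
    sigma:  bandwidth of the Gaussian kernel.
    cutoff: optional distance beyond which two segments are treated as
            unconnected (A_ij = 0); assumed here, not taken from the paper.
    """
    A = np.exp(-(dist ** 2) / (sigma ** 2))
    if cutoff is not None:
        A[dist > cutoff] = 0.0
    np.fill_diagonal(A, 1.0)  # dist(v_i, v_i) = 0, so diagonal entries equal 1
    return A
```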
The matrix of features \(X\in\mathbb{R}^{N\times T}\) corresponds to the traffic features of \(V\). Here, \(N\) represents the number of sensors, and the data are recorded between time \(t\) and \(t+T\), where \([t,t+1,\ldots,t+T]\) is a sequence of evenly spaced, continuous time instances. The entry \(X_{t_{i}}\) stands for the traffic information recorded for all road segments at time \(t_{i}\), while \(X_{n}\) captures the traffic information of sensor \(n\) between time \(t\) and \(t+T\), where \(n\) ranges from \(1\) to \(N\).
During the operation of these \(N\) sensors, certain situations may arise leading to the problem of missing data. Specifically, data may be partially occluded during the observed time period for a particular sensor \(n\) due to data storage or transmission failure. In this paper, this issue is defined as _random missing_.
On the other hand, there might be instances when some sensors experience system failure leading to the loss or occlusion of all data during the observed time period. In such scenarios, all values within \(X_{n}\) are missing, and this paper defines this situation as _non-random missing_.
Mathematically, \(X_{t}\), \(X_{t+1}\), ..., and \(X_{t+T}\) represent the traffic data, containing both observed and missing values, at the different time steps. Correspondingly, \(\hat{X}_{t}\), \(\hat{X}_{t+1}\), ..., and \(\hat{X}_{t+T}\) represent the data imputed by a specific function \(f\):
\[\left[\hat{X}_{t},\hat{X}_{t+1}\cdots,\hat{X}_{t+T}\right]=f\left(G;(X_{t},X_{ t+1},\cdots,X_{t+T})\right). \tag{1}\]
However, to view this reconstruction problem as a Bayesian perspective, we assume that all the traffic data is generated from an unknown Gaussian distribution:
\[X_{t}\sim\mathcal{N}(\mu_{t},\sigma_{t}^{2}). \tag{2}\]
Hence, in the context of missing data, we would like to approximate (\(\mu_{t}\), \(\sigma_{t}^{2}\)), (\(\mu_{t+1}\), \(\sigma_{t+1}^{2}\)),..., (\(\mu_{t+T}\), \(\sigma_{t+T}^{2}\)) based on the existing data.
## IV Proposed Method
In this section, we first introduce the concept of graph attention networks and bidirectional gated recurrent united networks. Then, we explain the proposed deep learning architecture.
### _Graph Attention Networks_
Graph attention networks (GATs) [25] have emerged as a powerful tool for modeling complex relationships between nodes in a graph. GATs employ attention mechanisms to selectively aggregate information from neighboring nodes, enabling them to capture both local and global patterns in the graph.
To achieve this, GATs first compute attention coefficients \(\alpha_{ij}\) for each pair of neighboring nodes. Once the attention coefficients are computed, they are used to aggregate information from neighboring nodes. Specifically, the embedding for node \(i\) is computed as a weighted sum of the embeddings of its neighbors:
\[h_{i}=\gamma\Bigg{(}\sum_{j\in\mathcal{N}(i)}\alpha_{ij}Wx_{j}\Bigg{)}. \tag{3}\]
Here, \(\mathcal{N}(i)\) represents the set of neighbors of node \(i\), \(W\in\mathbb{R}^{d\times f}\) is a weight matrix, \(x_{j}\) represents the inputs of the layer, and \(\gamma\) is an activation function.
The attention coefficients \(\alpha_{ij}\) are computed as follows:
\[\alpha_{ij}=\frac{\exp(\text{LeakyReLU}(a^{T}[Wx_{i}\|Wx_{j}]))}{\sum_{k\in N (i)}\exp(\text{LeakyReLU}(a^{T}[Wx_{i}\|Wx_{k}]))}, \tag{4}\]
where \(a\in\mathbb{R}^{2d}\) is a weight vector, \(\|\) denotes concatenation, and LeakyReLU is a leaky rectified linear unit activation function with a small fixed negative slope. The attention coefficients \(\alpha_{ij}\) are learned during training via backpropagation, allowing the model to adaptively focus on different parts of the graph as needed. In a traffic network, gathering information from the surrounding nodes is crucial for effectively imputing missing data at a specific node [12].
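A minimal single-head sketch of Eqs. (3)-(4) (ours; the ELU activation for \(\gamma\) and the self-loop assumption on \(A\) are illustrative choices, not specifics from the paper):

```python
import torch
import torch.nn.functional as F

def gat_layer(X, A, W, a, negative_slope: float = 0.2):
    """Single-head GAT layer following Eqs. (3)-(4); an illustrative sketch.

    X: (N, f) node features, A: (N, N) adjacency with self-loops,
    W: (f, d) weight matrix, a: (2d,) attention vector.
    """
    H = X @ W                                       # projected features, (N, d)
    d = H.shape[1]
    # e_ij = LeakyReLU(a^T [W x_i || W x_j]), with a split into two halves
    e = F.leaky_relu((H @ a[:d]).unsqueeze(1) + (H @ a[d:]).unsqueeze(0),
                     negative_slope)
    e = e.masked_fill(A == 0, float("-inf"))        # attend only to neighbours
    alpha = torch.softmax(e, dim=1)                 # Eq. (4), row-normalized
    return F.elu(alpha @ H)                         # Eq. (3), ELU as gamma
```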
### _Bidirectional Recurrent Neural Networks_
Bidirectional Recurrent Neural Networks (Bi-RNNs) are a powerful neural network architecture that enables the flow of information in both the forward and backward directions through the recurrent layer [26]. In particular, the bidirectional gated recurrent unit (BiGRU) is a variant of the recurrent neural network that can effectively process sequential data by analyzing it in both the forward and backward directions. Unlike the traditional gated recurrent unit (GRU) [27], the BiGRU incorporates an additional set of GRU cells that process the data in reverse. As a result, the BiGRU is capable of capturing both the past and future context of a sequence, making it a valuable tool in many applications.
The BiGRU can be represented mathematically as follows: Let \(a_{t}\) be the input at time step \(t\), and let \(h_{t}\) and \(h_{t}^{\prime}\) be the forward and backward hidden states of the BiGRU at time step \(t\), respectively. The final hidden state at time step \(t\) is obtained by concatenating the forward and backward hidden states:
\[\hat{h_{t}}=[h_{t}\|h_{t}^{\prime}]. \tag{5}\]
This final hidden state contains information about both the past and future context of the input sequence.
In the context of data imputation, traditional time series forecasting problems only rely on previous time series data to predict future data. However, data imputation can also utilize future data to backwardly deduct previous time series data. As such, the BiGRU architecture is well-suited to address the data imputation problem, as it can leverage both forward and backward sequential temporal information to effectively fill in missing values.
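In PyTorch, the bidirectional processing and the concatenation of Eq. (5) are provided directly by `nn.GRU`; a minimal sketch with illustrative dimensions:

```python
import torch
import torch.nn as nn

class BiGRUEncoder(nn.Module):
    """Bidirectional GRU; outputs [h_t || h'_t] at every step, as in Eq. (5)."""
    def __init__(self, in_dim: int, hidden: int):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):          # x: (batch, T, in_dim)
        out, _ = self.gru(x)       # (batch, T, 2*hidden): fwd/bwd states concatenated
        return out

# Illustrative usage: 12 time steps, 207 sensors as features per step
enc = BiGRUEncoder(in_dim=207, hidden=64)
h = enc(torch.randn(8, 12, 207))   # -> torch.Size([8, 12, 128])
```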
### _The general structure_
Our proposed approach for imputing missing data in traffic networks employs a two-step process, leveraging GATs and BiGRUs to address both the spatial and temporal dependencies in the data, shown in Figure 1.
Firstly, we utilize a GAT layer to capture spatial dependencies within the traffic road network. This GAT architecture generates a spatial representation of the missing data by considering the relationships between neighboring nodes. The graph attention mechanism assigns different weights to neighboring nodes, emphasizing relevant connections and reducing the impact of distant or unrelated nodes. This helps in understanding localized patterns and road segment interactions, which are essential for accurate imputation.
For the second step, we employ a BiGRU layer to capture the temporal features of the traffic data. BiGRU layer processes the data in both forward and reverse directions, enabling the model to capture historical trends, real-time fluctuations, and future patterns in traffic flow. This dual processing enhances the imputation quality by grasping temporal context and dependencies, leading to more accurate predictions.
By integrating spatial and temporal information, we output the mean \(\hat{\mu}\) and variance \(\hat{\sigma}^{2}\) of the approximated missing data, providing imputation results and uncertainty quantification for the missing values based on Gaussian distributions.
### _Loss function_
During the training process, we implement the training framework of the variational autoencoder [28] with some adjustments. Viewed from another perspective, the architecture in IV-C is identical to the encoder component of the variational autoencoder (VAE) framework. During the training of a variational autoencoder, the loss function is divided into two parts: a reconstruction loss and a regularization loss. The reconstruction loss measures how well the VAE can reconstruct the input data, while the regularization loss encourages the VAE to have a well-behaved latent space.

Fig. 1: The general structure of ST-GIN
#### Iv-B1 reconstruction loss
Instead of sampling the traffic data randomly based on \(\hat{\mu}\) and \(\hat{\sigma}\), we directly take \(\hat{\mu}\) as the generated data, since it is the most probable value under the Gaussian. At the same time, since some of the values are missing, the loss function can only be calculated over the non-missing values. Therefore, the reconstruction loss can be expressed as:
\[L_{Reconstruction}=\frac{1}{k}\sum_{i=1}^{k}(x_{i}-\hat{\mu_{i}})^{2}, \tag{6}\]
here, \(x_{1},\dots,x_{k}\) are non-missing real traffic data, and \((\hat{\mu}_{1},\hat{\sigma}_{1}),\dots,(\hat{\mu}_{k},\hat{\sigma}_{k})\) are the corresponding distributions generated by the neural networks.
#### Iv-B2 Regularization loss
For the regularization loss, we would like to maximize the probability that the existing data are generated; hence, the negative log-likelihood (NLL) loss is adopted:
\[L_{Regularization} =\mathcal{L}(\hat{\mu},\hat{\sigma}|x_{1},x_{2},...,x_{k}) \tag{7}\] \[=-\ln\prod_{i=1}^{k}\frac{1}{\sqrt{2\pi\hat{\sigma}_{i}^{2}}}\exp\Big{(}-\frac{(x_{i}-\hat{\mu}_{i})^{2}}{2\hat{\sigma}_{i}^{2}}\Big{)}\] \[=-\sum_{i=1}^{k}\ln\Big{(}\frac{1}{\sqrt{2\pi\hat{\sigma}_{i}^{2}}}\Big{)}+\sum_{i=1}^{k}\frac{(x_{i}-\hat{\mu}_{i})^{2}}{2\hat{\sigma}_{i}^{2}}\] \[=\sum_{i=1}^{k}\frac{1}{2}\ln\big{(}2\pi\hat{\sigma}_{i}^{2}\big{)}+\sum_{i=1}^{k}\frac{(x_{i}-\hat{\mu}_{i})^{2}}{2\hat{\sigma}_{i}^{2}},\]
here, \(x_{1},\dots,x_{k}\) are non-missing real traffic data, and \((\hat{\mu}_{1},\hat{\sigma}_{1}),\dots,(\hat{\mu}_{k},\hat{\sigma}_{k})\) are the corresponding distributions generated by the neural networks.
#### Iv-B3 Combined loss
In this research, we introduce a hyperparameter \(\lambda\) to control the influence of the two sub-loss functions on the total loss. For the experiments, we select \(\lambda=0.5\) after trial and error.
\[\mathcal{L}=\lambda\cdot L_{Reconstruction}+(1-\lambda)\cdot L_{Regularization} \tag{8}\]
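A minimal sketch of the combined loss in Eqs. (6)-(8), evaluated only over observed entries; the mask convention and the use of per-entry means rather than sums are our illustrative choices:

```python
import math
import torch

def st_gin_loss(x, mask, mu, sigma2, lam: float = 0.5):
    """Combined loss of Eq. (8): masked MSE (Eq. 6) plus Gaussian NLL (Eq. 7).

    x: data, mask: 1 where x is observed / 0 where missing,
    mu, sigma2: predicted Gaussian mean and variance (illustrative names).
    """
    m = mask.bool()
    rec = ((x[m] - mu[m]) ** 2).mean()                       # Eq. (6)
    nll = (0.5 * torch.log(2 * math.pi * sigma2[m])
           + (x[m] - mu[m]) ** 2 / (2 * sigma2[m])).mean()   # Eq. (7), per-entry mean
    return lam * rec + (1 - lam) * nll                       # Eq. (8)
```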
## V Experiment
### _Data Preparation_
In this research, we utilize the METR-LA dataset [24], a publicly available dataset providing traffic speed and flow measurements from 207 detectors in 5-minute intervals on highways in Los Angeles County. The data is collected using inductive loop detectors installed on highways. The collected data is then aggregated and processed to generate traffic flow information, such as average speed and traffic volume, for different segments of the road network. The chosen time period is from March 1, 2012, to March 30, 2012.
To simulate missing values in the dataset, we use two scenarios: random missing and non-random missing. For the random missing case, we employ the traffic speed data and randomly select different portions (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7) of traffic data inputs, setting them as MVs, represented by zeroes. This scenario simulates random temporary sensor failures or random data storage problems. For the non-random missing scenario, we utilize traffic flow data and randomly select different portions (0.1, 0.2, 0.3, 0.4) of sensors, setting all values for these sensors to zeroes. This scenario aims to simulate long-term sensor or system malfunctions.
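A minimal sketch of the two masking schemes (ours; array layout and seeding are illustrative), with \(X\in\mathbb{R}^{N\times T}\) as in Section III:

```python
import numpy as np

def random_missing(X: np.ndarray, ratio: float, seed: int = 0) -> np.ndarray:
    """Zero out a random fraction of individual entries (random missing)."""
    rng = np.random.default_rng(seed)
    X_miss = X.copy()
    X_miss[rng.random(X.shape) < ratio] = 0.0
    return X_miss

def non_random_missing(X: np.ndarray, ratio: float, seed: int = 0) -> np.ndarray:
    """Zero out all readings of a random subset of sensors (non-random missing).

    X is (N, T): N sensors (rows), T time steps (columns).
    """
    rng = np.random.default_rng(seed)
    dead = rng.choice(X.shape[0], size=int(ratio * X.shape[0]), replace=False)
    X_miss = X.copy()
    X_miss[dead, :] = 0.0
    return X_miss
```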
The accuracy of an imputation model can be evaluated by comparing the predicted traffic mean to the actual observed value. Since traffic flow data are non-negative, we clip negative imputed means to zero. We employ two popular metrics, Mean Absolute Error (MAE) and Mean Squared Error (MSE):
\[MAE=\frac{1}{n}\sum_{i=1}^{n}\left|y_{i}-\hat{y_{i}}\right|, \tag{9}\]
\[MSE=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y_{i}})^{2}. \tag{10}\]
### _Baseline Methods_
Here, we introduce the baseline methods that we use in this experiment.
* Average: For missing values in a specific day, fill them in using the average of non-missing values during the same time period.
* Mean: For missing values in a specific day, fill them with the mean value of the same sensor in this specific day.
* Singular Value Decomposition (SVD) [29]: a matrix factorization technique. Fill in missing values by estimating them based on the relationships between other variables in the dataset.
* Temporal Regularized Matrix Factorization (TRMF) [30]: an effective tool for imputing missing data within a given multivariate time series and forecasting time series with missing values.
* Bidirectional Gated Recurrent United neural networks (BiGRU): BiGRU is proficient in dealing with time series problems.
* Graph Convolutional Neural Networks (GCN) [31]: A neural network structure that is efficient in dealing with graph information.
However, TRMF, SVD, Mean, and Average cannot handle the non-random missing situation. Hence, all baseline methods are evaluated on randomly missing data, while only BiGRU and GCN are evaluated on non-randomly missing data.
### _Result and Analysis_
#### V-C1 Random Missing
Table I presents the MSE and MAE for all the random missing cases. The results demonstrate that, except for the Mean method, the imputation accuracy decreases as the amount of missing data increases. However, our proposed ST-GIN method consistently achieves the
highest accuracy compared to other baseline methods for a given portion of missing data.
As previously discussed, traffic data exhibits spatial and temporal dependencies, and effectively capturing the spatial-temporal correlation of the existing information is crucial for successful data imputation. The Mean and Average methods do not utilize the complex road network structures, resulting in their simple imputers being unable to fully recover traffic information.
SVD and TRMF strive to capture the spatial and temporal information by identifying the relationships between rows and columns of a large data matrix. Nevertheless, without considering the prior knowledge \(G\) of the graph, these methods cannot effectively utilize the spatial information, leading to limited performance.
Among the three deep learning methods, GCN focuses solely on the spatial features of the traffic data, while BiGRU captures only the temporal correlations of the speed data. In comparison, ST-GIN, which analyzes both spatial and temporal dependencies, consistently outperforms GCN and BiGRU in terms of imputation accuracy.
#### Iv-A2 Non-random Missing
Table II summarizes the MSE and MAE errors for all the non-random missing cases. As with the random cases, imputation becomes more challenging as the missing values become more consecutive. Many existing imputation methods are not suitable for handling consecutive missing data, either spatially or temporally [13]. When compared to the other two deep learning baselines, ST-GIN generally exhibits better performance, except for the MAE value in the 0.4 missing case.
Figure 2 plots one day of ground-truth values, the imputed mean, and the 95 percent confidence interval of a randomly chosen sensor when 10 percent of the sensor data is missing. The imputed mean generally captures the trend of the real traffic flow data. However, due to the stochastic nonlinear nature of traffic flow, it is difficult to impute the exact values of the traffic data, especially when all values of a sensor are missing. Nevertheless, all real values fall within the 95 percent confidence interval.

TABLE I: MSE and MAE for random missing in traffic speed data

| **Model** | **0.1 (MSE/MAE)** | **0.2 (MSE/MAE)** | **0.3 (MSE/MAE)** | **0.4 (MSE/MAE)** | **0.5 (MSE/MAE)** | **0.6 (MSE/MAE)** | **0.7 (MSE/MAE)** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Average | 0.1537 / 0.3198 | 0.1498 / 0.3167 | 0.1480 / 0.3140 | 0.1578 / 0.3244 | 0.1704 / 0.3368 | 0.1901 / 0.3571 | 0.2175 / 0.3843 |
| Mean | 0.0827 / 0.2491 | 0.0827 / 0.2489 | 0.0826 / 0.2490 | 0.0827 / 0.2490 | 0.0827 / 0.2492 | 0.0828 / 0.2492 | 0.0828 / 0.2493 |
| SVD | 0.00446 / 0.0550 | 0.0153 / 0.1049 | 0.0325 / 0.1542 | 0.0559 / 0.2032 | 0.0865 / 0.2535 | 0.1230 / 0.3027 | 0.1659 / 0.3521 |
| TRMF | 0.0065 / 0.0701 | 0.0082 / 0.0789 | 0.0108 / 0.0900 | 0.0148 / 0.1050 | 0.0217 / 0.1266 | 0.0334 / 0.1583 | 0.0596 / 0.2112 |
| BiGRU | 0.0022 / 0.0429 | 0.0089 / 0.0844 | 0.0210 / 0.1294 | 0.0398 / 0.1794 | 0.0695 / 0.2357 | 0.1024 / 0.2857 | 0.1509 / 0.3489 |
| GCN | 0.0325 / 0.1465 | 0.0432 / 0.1759 | 0.0589 / 0.2071 | 0.0810 / 0.2439 | 0.1126 / 0.2873 | 0.1426 / 0.3231 | 0.1852 / 0.3696 |
| ST-GIN | **0.0011 / 0.0300** | **0.0037 / 0.0538** | **0.0038 / 0.0522** | **0.0137 / 0.0957** | **0.0128 / 0.0537** | **0.0374 / 0.1757** | **0.0450 / 0.1684** |

TABLE II: MSE and MAE in non-random missing traffic flow data

| **Metrics** | **Model** | **0.1** | **0.2** | **0.3** | **0.4** |
| --- | --- | --- | --- | --- | --- |
| MSE | BiGRU | 242.8 | 245.6 | 265.9 | 285.27 |
| MSE | GCN | 252.6 | 286.5 | 322.6 | 375.9 |
| MSE | ST-GIN | **201.5** | **242.8** | **250.7** | **278.1** |
| MAE | BiGRU | 10.72 | 11.62 | 11.79 | **12.75** |
| MAE | GCN | 12.47 | 13.81 | 14.63 | 15.68 |
| MAE | ST-GIN | **10.20** | **11.37** | **11.46** | 12.76 |

Fig. 2: Uncertainty Quantification of 0.1 Non-random Missing Data
## VI Conclusion
In this paper, we introduce a novel deep learning framework, ST-GIN, which effectively addresses the issue of missing data in traffic datasets. This framework leverages graph attention layers to capture the spatial relationships among traffic tensors, while utilizing bidirectional gated recurrent neural networks to learn the temporal correlations of traffic data. Experimental results indicate that our method demonstrates superior performance compared to numerous benchmark techniques for imputing missing traffic data in both random and non-random missing scenarios on the METR-LA dataset.
Several potential avenues can be explored to further enhance and expand upon this research. One such direction includes employing a wider range of data to evaluate the adaptability of our model across various scenarios. This is particularly relevant for urban road networks, which are characterized by higher short-term variations due to uncertain road conditions and fluctuating traffic patterns. Investigating the model's performance in such complex environments will provide valuable insights into its applicability and robustness.
Additionally, integrating advanced deep learning frameworks, such as attention-based models and transformers, could further improve the imputation accuracy of our method. These architectures have demonstrated promising results in capturing intricate dependencies and patterns in diverse data domains, which could potentially benefit the task of imputing missing traffic data.
|
2304.06959 | Convex Dual Theory Analysis of Two-Layer Convolutional Neural Networks
with Soft-Thresholding | Soft-thresholding has been widely used in neural networks. Its basic network
structure is a two-layer convolution neural network with soft-thresholding. Due
to the network's nature of nonlinearity and nonconvexity, the training process
heavily depends on an appropriate initialization of network parameters,
resulting in the difficulty of obtaining a globally optimal solution. To
address this issue, a convex dual network is designed here. We theoretically
analyze the network convexity and numerically confirm that the strong duality
holds. This conclusion is further verified in the linear fitting and denoising
experiments. This work provides a new way to convexify soft-thresholding neural
networks. | Chunyan Xiong, Mengli Lu, Xiaotong Yu, Jian Cao, Zhong Chen, Di Guo, Xiaobo Qu | 2023-04-14T07:04:07Z | http://arxiv.org/abs/2304.06959v1 | # Convex Dual Theory Analysis of Two-Layer Convolutional Neural Networks with Soft-Thresholding
###### Abstract
Soft-thresholding has been widely used in neural networks. Its basic network structure is a two-layer convolution neural network with soft-thresholding. Due to the network's nature of nonlinearity and nonconvexity, the training process heavily depends on an appropriate initialization of network parameters, resulting in the difficulty of obtaining a globally optimal solution. To address this issue, a convex dual network is designed here. We theoretically analyze the network convexity and numerically confirm that the strong duality holds. This conclusion is further verified in the linear fitting and denoising experiments. This work provides a new way to convexify soft-thresholding neural networks.
Soft-thresholding, non-convexity, strong duality, convex optimization.
## I Introduction
Neural networks (NN) have been extensively employed in various applications, including speech and image recognition [1, 2], image classification [3], fast medical imaging [4] and biological spectrum reconstruction [5, 6, 7], etc. NNs, however, easily get stuck at a local optimum or a saddle point due to the non-convexity of the network (Fig. 1) [8]. This limitation prevents NNs from reaching the global optimum [9, 10, 11]. To address this issue, proper initialization of network parameters is required in the training process [12, 13, 14].
Typical initialization strategies have been established [3, 15, 16], but the network may still encounter instability if the NN has multiple layers or branches [13]. For example, the original Transformer model [17] did not converge without initializing the learning rate in a warm-up way [18, 19, 20]. Roberta [21] and GPT-3 [22] had to tune the parameters of the optimizer ADAM [23] for stability under large batch sizes. Recent studies have shown that architecture-specific initialization can promote convergence [19, 24, 25, 26, 27]. Even so, these initialization techniques hardly work to their advantage when conducting architecture searches or training networks with branching or heterogeneous components [13].
Convexifying neural networks is another way to make the solution not depend on initialization [28, 29]. At present, theoretical research on the convexification of NN focuses on finite-width networks which include fully connected networks [30, 31, 32, 33, 34, 35, 36, 37] and convolutional neural networks (CNN) [38, 39]. The former is powerful to learn multi-level features [40] but may require a large space and computational resources if the size of training data is large. The latter avoids this problem by reducing network complexity through local convolutions [41, 42, 43] and have been successfully applied in image processing [44, 45, 46, 47]. Up to now, CNN has been utilized as an example in convexifying networks under a common non-linear function, ReLU [38, 39].
To make the rest of the description clear, following previous theoretical work [38], we adopt the denoising task handled by a basic two-layer ReLU-CNN for the theoretical analysis of convexity.
Fig. 1: Toy example: In some bad cases, the non-convex neural network gets stuck in a local optimum or saddle point. The objective value of a two-layer non-convex and a convex neural network model for 1D vector fitting. The input of the network is \([-2,-1,0,1,2]^{T}\) and its ideal output (also called the label) is \([1,-1,-1,-1,1]^{T}\). Here, the bias term is included by concatenating a column of ones to the input. Under 3 random initialization trials of network parameters, the objective value of the non-convex neural network differs. Convex neural networks, which do not depend on the initialization, can be solved directly with a convex procedure to obtain the optimal value.

Let \(\mathbf{\widetilde{X}}\in\mathbb{R}^{n\times h}\) denote a noise-free 2D image, where \(n\) and \(h\) represent the width and height, respectively. \(\mathbf{\widetilde{X}}\) is contaminated by an additive noise \(\mathbf{E}\), whose entries are drawn from a probability distribution, such as \(\mathcal{N}(0,\sigma^{2})\) in the case of i.i.d. Gaussian noise. Then, the noisy observation \(\mathbf{\widetilde{Y}}\) is modeled as \(\mathbf{\widetilde{Y}}=\mathbf{\widetilde{X}}+\mathbf{E}\). Given a set of convolution filters, \(\mathbf{\widetilde{U}}_{k}\in\mathbb{R}^{m\times m}(k=1,...,K)\), noise-suppressed images are obtained under each filter and then linearly combined according to [38]
\[\sum_{k=1}^{K}\left(\widetilde{\mathbf{Y}}\otimes\widetilde{\mathbf{U}}_{k} \right)_{+}\otimes\widetilde{v}_{k}, \tag{1}\]
where \(\otimes\) represents the 2D convolution operation, \((\cdot)_{+}\) denotes an element-wise ReLU operation, and \(\widetilde{v}_{k}\in\mathbb{R}\) is a 1x1 kernel used as the weight in the linear combination. Then, convolution kernels, i.e. \(\widetilde{\mathbf{U}}_{k}\) and \(v_{k}\), are obtained by minimizing the prediction loss between the noise-free image and the network output as
\[\min_{\widetilde{\mathbf{U}}_{k},\widetilde{v}_{k}}\left\|\sum_{k=1}^{K} \left(\widetilde{\mathbf{Y}}\otimes\widetilde{\mathbf{U}}_{k}\right)_{+} \otimes\widetilde{v}_{k}-\widetilde{\mathbf{X}}\right\|_{F}^{2}. \tag{2}\]
To reduce the network complexity, Eq. (2) is further extended with a regularization term as [38]

\[\min_{\widetilde{\mathbf{U}}_{k},\widetilde{v}_{k}}\left\|\sum_{k=1}^{K}\left(\widetilde{\mathbf{Y}}\otimes\widetilde{\mathbf{U}}_{k}\right)_{+}\otimes\widetilde{v}_{k}-\widetilde{\mathbf{X}}\right\|_{F}^{2}+\beta\sum_{k=1}^{K}\left(\left\|\widetilde{\mathbf{U}}_{k}\right\|_{F}^{2}+|\widetilde{v}_{k}|^{2}\right) \tag{3}\]

by constraining the energy (or the power of the norm) of all convolution kernels. Here, \(\beta>0\) is a hyper-parameter that trades the prediction loss against the convolution kernel energy.
To convexify the primal network in Eq. (3), convex duality theory was introduced to convert Eq. (3) into a dual form, enabling the reach of the global minimum [38]. The absence of a gap between the primal and dual objective values has been demonstrated both theoretically and experimentally [38]. This work inspired us to convexify other networks, for example, by replacing ReLU with soft-thresholding.
Soft-thresholding (ST) is another non-linear function that has been widely adopted in CNNs [48, 49, 50, 51].

## II Preliminaries
We denote
\[S=S_{1}\cup S_{2}\cup S_{3}, \tag{6}\]
where
\[S_{1}=\{i|\ i\in H_{1}\}\cup\{i|\ i\in H_{2}\}, \tag{7}\] \[S_{2}=\{i|\ i\in H_{2}\},\] \[S_{3}=\{i|\ i\in H_{2}\}\cup\{i|\ i\in H_{3}\}.\]
\[\mathbf{Y}=[\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{I}]^{\top}\in \mathbb{R}^{I\times m^{2}},\ \ \mathbf{Q}^{S}\in\mathbb{R}^{I\times I}\text{ is a}\]
diagonal matrix, its diagonal elements are as follows
\[\mathbf{Q}_{ii}=\begin{cases}\frac{\mathbf{y}_{i}^{T}\mathbf{u}_{k}+\lambda}{ \mathbf{y}_{i}^{T}\mathbf{u}_{k}},&\text{if}\quad i\in S_{1},\\ 0,&\text{if}\quad i\in S_{2},\\ \frac{\mathbf{y}_{i}^{T}\mathbf{u}_{k}-\lambda}{\mathbf{y}_{i}^{T}\mathbf{u}_ {k}},&\text{if}\quad i\in S_{3}.\end{cases} \tag{8}\]
\[\mathbf{Q}^{S}=\mathbf{Q}^{S_{1}}+\mathbf{Q}^{S_{2}}+\mathbf{Q}^{S_{3}}, \tag{9}\]
We denote
\[\mathbf{Q}^{S}\text{ in }\mathbf{Q}^{S}\mathbf{Yu}_{k}\geq 0 \text{ as }\mathbf{Q}^{S_{1}},\] \[\mathbf{Q}^{S}\text{ in }\mathbf{Q}^{S}\mathbf{Yu}_{k}=0 \text{ as }\mathbf{Q}^{S_{2}},\] \[\mathbf{Q}^{S}\text{ in }\mathbf{Q}^{S}\mathbf{Yu}_{k}\leq 0 \text{ as }\mathbf{Q}^{S_{3}}.\]
\[P_{S}=\{\mathbf{u}_{k}|\ P_{S_{1}}\cup P_{S_{2}}\cup P_{S_{3}}\}, \tag{10}\]
where
\[P_{S_{1}}=\{\mathbf{u}_{k}|\ \mathbf{Q}^{S_{1}}\mathbf{Yu}_{k} \geq 0,\quad\forall i\in S_{1}\}, \tag{11}\] \[P_{S_{2}}=\{\mathbf{u}_{k}|\ \mathbf{Q}^{S_{2}}\mathbf{Yu}_{k} =0,\quad\forall i\in S_{2}\},\] \[P_{S_{3}}=\{\mathbf{u}_{k}|\ \mathbf{Q}^{S_{3}}\mathbf{Yu}_{k} \leq 0,\quad\forall i\in S_{3}\}.\]
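For intuition, the following minimal sketch (ours, with illustrative names) builds the diagonal matrix \(\mathbf{Q}^{S}\) of Eq. (8) from the pre-activations \(\mathbf{Yu}_{k}\), so that \(\mathbf{Q}^{S}\mathbf{Yu}_{k}\) reproduces element-wise soft-thresholding of \(\mathbf{Yu}_{k}\) (the operator \(\tau(\cdot)_{\lambda}\) defined later in Eq. (19)):

```python
import torch

def build_Q(Y: torch.Tensor, u: torch.Tensor, lam: float) -> torch.Tensor:
    """Diagonal matrix Q^S of Eq. (8), so that Q^S @ (Y u) soft-thresholds Y u.

    Illustrative sketch; Y: (I, m^2) data matrix, u: (m^2,) filter, lam: threshold.
    """
    z = Y @ u                                # pre-activations y_i^T u
    q = torch.zeros_like(z)                  # rows with |z| <= lam stay zero
    neg, pos = z < -lam, z > lam
    q[neg] = (z[neg] + lam) / z[neg]         # (y_i^T u + lam) / (y_i^T u) branch
    q[pos] = (z[pos] - lam) / z[pos]         # (y_i^T u - lam) / (y_i^T u) branch
    return torch.diag(q)

# Sanity check: Q (Y u) equals sign(z) * (|z| - lam)_+
Y, u, lam = torch.randn(6, 4), torch.randn(4), 0.5
z = Y @ u
assert torch.allclose(build_Q(Y, u, lam) @ z,
                      torch.sign(z) * torch.clamp(z.abs() - lam, min=0.0))
```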
### _Basic Lemmas and Definitions_
**Lemma 1** (Slater's condition [52]): Consider the optimization problem
\[\min_{\mathbf{x}}f_{0}(\mathbf{x}) \tag{12}\] \[\text{s.t. }f_{j}(\mathbf{x})\leq 0,\quad j=1,...,J,\quad\mathbf{Ax}=\mathbf{b},\]
where \(\mathbf{A}\in\mathbb{R}^{m\times n}\), \(\mathbf{x}\in\mathbb{R}^{n}\), \(\mathbf{b}\in\mathbb{R}^{m}\), \(f_{0},...,f_{J}\) are convex functions.
If there exists an \(\mathbf{x}^{*}\in\mathbf{relint}D\) (where \(\mathbf{relint}\) denotes the relative interior of the convex set \(D:=\cap_{j=0}^{J}\text{dom}(f_{j})\)), such that
\[f_{j}(\mathbf{x}^{*})<0,\quad j=1,...,J,\quad\mathbf{Ax}^{*}=\mathbf{b}. \tag{13}\]
Such a point is called strictly feasible since the inequality constraints hold with strict inequalities. The strong duality holds if Slater's condition holds (and the problem is convex).
**Lemma 2** (Sion's Minimax theorem [53, 54]): Let \(X\) and \(Y\) be nonvoid convex and compact subsets of two linear topological spaces, and let \(f:X\times Y\rightarrow\mathbb{R}\) be a function that is upper semicontinuous and quasi-concave in the first variable and lower semicontinuous and quasi-convex in the second variable. Then
\[\min_{y\in Y}\max_{x\in X}f(x,y)=\max_{x\in X}\min_{y\in Y}f(x,y). \tag{14}\]
**Lemma 3** (Semi-infinite programming [55]): Semi-infinite programming problems of the form
\[\min_{\mathbf{x}\in\mathbb{R}^{n}}f(\mathbf{x})\text{ subject to }g(\mathbf{x},w) \leq 0,w\in\Omega, \tag{15}\]
where \(\Omega\) is a (possibly infinite) index set, \(\overline{\mathbb{R}}=\mathbb{R}\cup\{+\infty\}\cup\{-\infty\}\) denotes the extended real line, \(f:\mathbb{R}^{n}\rightarrow\overline{\mathbb{R}}\) and \(g:\mathbb{R}^{n}\times\Omega\rightarrow\mathbb{R}\). The above optimization is performed in the finite-dimensional space \(\mathbb{R}^{n}\) and, if the index set \(\Omega\) is infinite, is subject to an infinite number of constraints; therefore, it is referred to as a semi-infinite programming problem.
**Lemma 4** (An extension of Zaslavsky's hyperplane arrangement theory [56]): Consider a deep rectifier network with \(L\) layers, \(n_{l}\) rectified linear units at each layer \(l\), and an input of dimension \(n_{0}\). The maximal number of regions of this neural network is at most
\[\sum_{(j_{1},...,j_{L})}\prod_{l=1}^{L}\binom{n_{l}}{j_{l}}, \tag{16}\]
where \(J=\{(j_{1},...,j_{L})\in\mathbb{Z}^{L}:0\leq j_{l}\leq\text{min}\{n_{0},n_{1 }-j_{1},...,\\ n_{l-1}-j_{l-1},n_{l}\}\ \forall\ l=1,...,L\}\). This bound is tight when \(L=1\).
**Definition 1** (Optimal duality gap [52]): Denote the optimal value of the Lagrange dual problem by \(d^{*}\) and the optimal value of the primal problem by \(p^{*}\). Weak duality states that \(d^{*}\) is a lower bound on \(p^{*}\):
\[d^{*}\leq p^{*}. \tag{17}\]
The difference \(p^{*}-d^{*}\) is called the optimal duality gap of the primal problem.
**Definition 2** (Zero duality gap [52]): If the equality
\[d^{*}=p^{*} \tag{18}\]
holds, i.e. the optimal duality gap is zero, then we say that the strong duality holds. Strong duality means that a best bound, which can be obtained from the Lagrange dual function, is tight.
**Definition 3** (Hyperplanes and halfspaces [52]):
A hyperplane is a set of the form \(\{\mathbf{x}|\ \mathbf{a}^{T}\mathbf{x}=b\}\) where \(\mathbf{a}\in\mathbb{R}^{n},\ \mathbf{x}\in\mathbb{R}^{n\times 1},\ \mathbf{a}\neq 0\) and \(b\in\mathbb{R}\).
A hyperplane divides \(\mathbb{R}^{n}\) into halfspaces. A halfspace is a set of the form \(\{\mathbf{x}|\ \mathbf{a}^{T}\mathbf{x}\leq b\}\) where \(\mathbf{a}\neq 0\), i.e. the solution set of one (nontrivial) linear inequality. This is illustrated in Fig. 4.
## III Model and Theory
### _Proposed Model_
A two-layer primal ST-CNN is expressed as follows
\[p^{*}=\min_{\tilde{\mathbf{U}}_{k},\tilde{\mathbf{v}}_{k}}\left\|\sum_{k=1}^{K} \tau\left(\widetilde{\mathbf{Y}}\otimes\widetilde{\mathbf{U}}_{k}\right)_{ \lambda}\otimes\widetilde{v}_{k}-\widetilde{\mathbf{X}}\right\|_{F}^{2}\]
\[+\beta\sum_{k=1}^{K}(\left\|\widetilde{\mathbf{U}}_{k}\right\|_{F}^{2}+| \widetilde{\upsilon}_{k}|^{2}), \tag{19}\]
where the main difference between Eq. (19) and Eq. (3) is the element-wise soft-thresholding operator \(\tau(a_{ij})_{\lambda}=(|a_{ij}|-\lambda)_{+}\,sign(a_{ij})\), \(\otimes\) represents the 2D convolution operation, \(\widetilde{\mathbf{Y}}\in\mathbb{R}^{n\times h}\) is the input, \(\widetilde{\mathbf{U}}_{k}\in\mathbb{R}^{m\times m}\), \(\widetilde{v}_{k}\in\mathbb{R}\), and \(\beta>0\).
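For reference, the soft-thresholding operator itself is a one-liner; a minimal sketch:

```python
import torch

def soft_threshold(a: torch.Tensor, lam: float) -> torch.Tensor:
    """Element-wise tau(a)_lambda = (|a| - lambda)_+ * sign(a)."""
    return torch.clamp(a.abs() - lam, min=0.0) * torch.sign(a)

# Example: magnitudes below lam are zeroed, the rest shrink by lam.
print(soft_threshold(torch.tensor([-2.0, -0.3, 0.0, 0.4, 1.5]), 0.5))
# tensor([-1.5000, 0.0000, 0.0000, 0.0000, 1.0000])
```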
Replacing convolutional operations with matrix multiplication (Fig. 5), Eq. (19) can be converted into the following form
\[p^{*}= \min_{\mathbf{u}_{k}^{\prime},v_{k}^{\prime}}\left\|\sum_{k=1}^{ K}\tau\left(\mathbf{Yu}_{k}^{\prime}\right)_{\lambda}v_{k}^{\prime}-\mathbf{x} \right\|_{2}^{2}\] \[+\beta\sum_{k=1}^{K}(\left\|\mathbf{u}_{k}^{\prime}\right\|_{2}^ {2}+|v_{k}^{\prime}|^{2}), \tag{20}\]
where \(\mathbf{Y}=[\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{I}]^{\top}\in\mathbb{R}^{I\times m^{2}}\) is the input, \(I=nh\), \(\left\{\mathbf{y}_{i}\in\mathbb{R}^{m^{2}}\right\}_{i=1}^{I}\), \(\mathbf{x}\in\mathbb{R}^{I}\) is the label, and \(\mathbf{u}_{k}^{\prime}\in\mathbb{R}^{m^{2}},v_{k}^{\prime}\in\mathbb{R}\).
Next, we introduce the main theory (Theorem 1) that converts a two-layer primal ST-CNN (Fig. 6(a)) into a convex dual ST-CNN (Fig. 6(b)).
### _Theoretical Analysis_
**Theorem 1** (Main theory): There exists \(k^{*}\leq I\) such that if the number of convolution filters \(K\geq k^{*}+1\), strong duality holds for the two-layer ST-CNN (Eq. 20). Its dual is given by the finite-dimensional convex program
\[d_{3}^{*}= \min_{\mathbf{w}_{i}\in\mathcal{P}_{w},\mathbf{w}_{i}^{\prime} \in\mathcal{P}_{w}}\left\|\sum_{i=1}^{I}\mathbf{Q}^{S}\mathbf{Y}(\mathbf{w}_{ i}^{\prime}-\mathbf{w}_{i})-\mathbf{x}\right\|_{2}^{2} \tag{21}\] \[+2\beta\sum_{i=1}^{I}(\left\|\mathbf{w}_{i}^{\prime}\right\|_{2}+ \left\|\mathbf{w}_{i}\right\|_{2}),\]
where \(\mathbf{Q}^{S}\) is a diagonal matrix whose diagonal elements \(\mathbf{Q}_{ii}\) take the following values

\[\mathbf{Q}_{ii}=\begin{cases}\frac{\mathbf{y}_{i}^{T}\mathbf{u}_{k}+\lambda}{\mathbf{y}_{i}^{T}\mathbf{u}_{k}},&\text{if}\quad i\in S_{1},\\ 0,&\text{if}\quad i\in S_{2},\\ \frac{\mathbf{y}_{i}^{T}\mathbf{u}_{k}-\lambda}{\mathbf{y}_{i}^{T}\mathbf{u}_{k}},&\text{if}\quad i\in S_{3}.\end{cases} \tag{22}\]
\(\mathbf{Y}=[\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{I}]^{\top}\in \mathbb{R}^{I\times m^{2}}\), \(\mathbf{w}_{i}\) and \(\mathbf{w}_{i}^{\prime}\) are both dual variables, and they correspond to \(\mathbf{u}_{k}^{\prime}\) and \(v_{k}^{\prime}\) in Eq. (20) which are learnable parameters. \(\mathbf{x}\in\mathbb{R}^{I}\) is the label.
\[p_{w}=\left\{\mathbf{w}_{i}|\mathbf{Q}^{S_{1}}\mathbf{Y}\mathbf{w}_{i}\geq 0,\mathbf{Q}^{S_{2}}\mathbf{Y}\mathbf{w}_{i}=0,\mathbf{Q}^{S_{3}}\mathbf{Y} \mathbf{w}_{i}\leq 0\right\}, \tag{23}\]
\[p_{w^{\prime}}=\left\{\mathbf{w}_{i}^{\prime}|\mathbf{Q}^{S_{1}}\mathbf{Y} \mathbf{w}_{i}^{\prime}\geq 0,\mathbf{Q}^{S_{2}}\mathbf{Y}\mathbf{w}_{i}^{ \prime}=0,\mathbf{Q}^{S_{3}}\mathbf{Y}\mathbf{w}_{i}^{\prime}\leq 0\right\}.\]
\[\mathbf{Q}^{S}=\mathbf{Q}^{S_{1}}+\mathbf{Q}^{S_{2}}+\mathbf{Q}^{S_{3}}. \tag{24}\]
_Remark:_ The constraints on \(\mathbf{w}\) and \(\mathbf{w}^{\prime}\) in \(p_{w}\) and \(p_{w^{\prime}}\) arise from the piecewise nature of soft-thresholding. We first randomly generate a vector \(\tilde{\mathbf{w}}\), convolve it with the input \(\mathbf{Y}\), and generate the corresponding \(\mathbf{Q}^{S}\) based on the values of \(\mathbf{Y}\tilde{\mathbf{w}}\). Then we feed \(\mathbf{Y}\) and \(\mathbf{Q}^{S}\) into the dual ST-CNN, and Eq. (21) is used as our objective function (objective loss). Because it is an objective function with constraints \(p_{w}\) and \(p_{w^{\prime}}\), we use a hinge loss (adding the constraints to the objective function) as the loss function in the experiments. There exist \(\mathbf{w}_{i}\), \(\mathbf{w}_{i}^{\prime}\) such that \(\tilde{\mathbf{w}}=\mathbf{w}_{i}^{\prime}-\mathbf{w}_{i}\).
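As a concrete reading of this remark, the sketch below (ours; `rho` is an assumed penalty weight, not a quantity from the paper) evaluates the dual objective of Eq. (21) with the constraints of \(p_{w}\) and \(p_{w^{\prime}}\) added as hinge penalties:

```python
import torch
import torch.nn.functional as F

def dual_objective(Y, Q, s1, s2, s3, W, Wp, x, beta: float, rho: float = 1.0):
    """Hinge-penalized evaluation of the dual program (21); illustrative sketch.

    Y: (I, m^2); Q: (I, I) diagonal matrix of Eq. (22); s1, s2, s3: boolean row
    masks for S_1, S_2, S_3; W, Wp: (I, m^2) with rows w_i, w_i'; x: (I,) label.
    """
    fit = ((Q @ (Y @ (Wp - W).sum(0)) - x) ** 2).sum()      # data term of Eq. (21)
    reg = 2 * beta * (W.norm(dim=1).sum() + Wp.norm(dim=1).sum())
    pen = 0.0
    for V in (W, Wp):                       # constraint sets p_w and p_w'
        Z = Q @ (Y @ V.T)                   # column j holds Q^S Y v_j
        pen = pen + F.relu(-Z[s1]).sum()    # rows in S_1 must be >= 0
        pen = pen + Z[s2].abs().sum()       # rows in S_2 must be  = 0
        pen = pen + F.relu(Z[s3]).sum()     # rows in S_3 must be <= 0
    return fit + reg + rho * pen
```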
Before proving the main theory (Theorem 1), we present the following main derivation framework (Fig. 7).
1) Theorem 2: Scaling \(\left\|\mathbf{u}_{k}^{\prime}\right\|_{2}^{2}+|v_{k}^{\prime}|^{2}\) in the primal ST-CNN (Eq. 20);
2) Theorem 3: Eliminating variables to obtain an equivalent convex optimization model under the principle of Lagrangian dual theory;
3) Theorem 4: Convert nonlinear operations to linear operations using a diagonal matrix;
4) Theorem 5: Exact representation of a two-layer ST-CNN;
5) Theorem 6: Prove zero dual gaps (strong duality).
_Theorem 2_: To scale \(\mathbf{u}_{k}^{\prime}\) and \(v_{k}^{\prime}\), let \(\mathbf{u}_{k}=\varepsilon\mathbf{u}_{k}^{\prime}\) and \(v_{k}=\frac{1}{\varepsilon}v_{k}^{\prime}\); then
\[p^{*}= \min_{\mathbf{u}_{k}^{\prime},v_{k}^{\prime}}\left\|\sum_{k=1}^{ K}\tau\left(\mathbf{Y}\mathbf{u}_{k}^{\prime}\right)_{\lambda}v_{k}^{\prime}- \mathbf{x}\right\|_{2}^{2}\] \[+\beta\sum_{k=1}^{K}(\left\|\mathbf{u}_{k}^{\prime}\right\|_{2}^{2}+ |v_{k}^{\prime}|^{2}), \tag{25}\]
Fig. 4: A hyperplane defined by \(\mathbf{a}^{\top}\mathbf{x}=b\) in \(\mathbb{R}^{2}\) determines two halfspaces. The halfspace determined by \(\mathbf{a}^{\top}\mathbf{x}\geq b\) is the halfspace extending in the direction \(\mathbf{a}\). The halfspace determined by \(\mathbf{a}^{\top}\mathbf{x}\leq b\) extends in the direction \(-\mathbf{a}\). The vector \(\mathbf{a}\) is the outward of this halfspace.
Fig. 5: Replacing convolutional operations with matrix multiplication. (a) Convolution in Eq. (19), (b) matrix multiplication in Eq. (20).
the primal ST-CNN can be translated as
\[p^{*} =\min_{\left\|\mathbf{u}_{k}\right\|_{2}\leq 1}\min_{v_{k}\in \mathbb{R}}\left\|\sum_{k=1}^{K}\tau\left(\mathbf{Yu}_{k}\right)_{\lambda}v_{k}- \mathbf{x}\right\|_{2}^{2}\] \[\quad+2\beta\sum_{k=1}^{K}(|v_{k}|), \tag{26}\]
where \(\varepsilon\) is introduced so that the scaling has no effect on the network output. The proof of Theorem 2 is provided in Appendix A.
Then, according to Eq. (26), we can obtain an equivalent convex optimization model by using the Lagrangian dual theory.
**Theorem 3**: \[p^{*} =\min_{\left\|\mathbf{u}_{k}\right\|_{2}\leq 1}\min_{v_{k}\in \mathbb{R}}\left\|\sum_{k=1}^{K}\tau\left(\mathbf{Yu}_{k}\right)_{\lambda}v_{ k}-\mathbf{x}\right\|_{2}^{2}\] \[\quad+2\beta\sum_{k=1}^{K}|v_{k}|.\]
_is equivalent to_
\[d_{1}^{*}=\max_{\mathbf{z}:\left|\mathbf{z}^{T}\tau\left(\mathbf{Yu}_{k}\right)_{\lambda}\right|\leq 2\beta,\ \left\|\mathbf{u}_{k}\right\|_{2}\leq 1}-\frac{1}{4}\left\|\mathbf{z}-2\mathbf{x}\right\|_{2}^{2}+\left\|\mathbf{x}\right\|_{2}^{2}. \tag{27}\]
Proof: By reparameterizing the problem, let
\[\mathbf{r}=\sum_{k=1}^{K}\tau\left(\mathbf{Yu}_{k}\right)_{\lambda}v_{k}- \mathbf{x}, \tag{28}\]
where \(\mathbf{r}\in\mathbb{R}^{I}\), hence, we have
\[d_{1}^{*} =\min_{\left\|\mathbf{u}_{k}\right\|_{2}\leq 1}\min_{v_{k}, \mathbf{r}}\left\|\mathbf{r}\right\|_{2}^{2}+2\beta\sum_{k=1}^{K}|v_{k}|,\] \[s.t. \mathbf{r}=\sum_{k=1}^{K}\tau\left(\mathbf{Yu}_{k}\right)_{\lambda }v_{k}-\mathbf{x}. \tag{29}\]
Introducing the Lagrangian variable \(\mathbf{z}\), and \(\mathbf{z}\in\mathbb{R}^{I}\), \(\mathbf{z}^{T}\in\mathbb{R}^{1\times I}\), and obtaining the Lagrangian dual form of the primal ST-CNN as follows
\[d_{1}^{*} =\min_{\left\|\mathbf{u}_{k}\right\|_{2}\leq 1}\min_{v_{k}, \mathbf{r}}\max_{\mathbf{z}}\left\|\mathbf{r}\right\|_{2}^{2}+2\beta\sum_{k=1 }^{K}|v_{k}|+\mathbf{z}^{T}\mathbf{r}\] \[\quad+\mathbf{z}^{T}\mathbf{x}-\mathbf{z}^{T}\sum_{k=1}^{K}\tau \left(\mathbf{Yu}_{k}\right)_{\lambda}v_{k}. \tag{30}\]
Using Sion's minimax theorem [53, 54] to change the order of maximum and minimum
\[d_{1}^{*} =\min_{\left\|\mathbf{u}_{k}\right\|_{2}\leq 1}\max_{\mathbf{z}} \min_{v_{k},\mathbf{r}}\left\|\mathbf{r}\right\|_{2}^{2}+2\beta\sum_{k=1}^{K} |v_{k}|+\mathbf{z}^{T}\mathbf{r}\] \[\quad+\mathbf{z}^{T}\mathbf{x}-\mathbf{z}^{T}\sum_{k=1}^{K}\tau \left(\mathbf{Yu}_{k}\right)_{\lambda}v_{k}. \tag{31}\]
Minimizing the objective function Eq. (31) with \(\mathbf{r}\) as a variable
\[\left\|\mathbf{r}\right\|_{2}^{2}+\mathbf{z}^{T}\mathbf{r}=\left\|\mathbf{r}+ \frac{1}{2}\mathbf{z}\right\|_{2}^{2}-\frac{1}{4}\left\|\mathbf{z}\right\|_{2 }^{2}. \tag{32}\]
When \(\mathbf{r}=-\frac{1}{2}\mathbf{z}\), Eq. (31) takes the optimal value. Hence, Eq. (31) can be translated to
\[d_{1}^{*} =\min_{\left\|\mathbf{u}_{k}\right\|_{2}\leq 1}\max_{ \mathbf{z}}\min_{v_{k}}-\frac{1}{4}\left\|\mathbf{z}\right\|_{2}^{2}+2\beta \sum_{k=1}^{K}|v_{k}|+\mathbf{z}^{T}\mathbf{x}\] \[\quad-\mathbf{z}^{T}\sum_{k=1}^{K}\tau\left(\mathbf{Yu}_{k}\right) _{\lambda}v_{k}. \tag{33}\]
Fig. 6: Objective value of a two-layer primal ST-CNN and dual ST-CNN trained with ADAM on a one-dimensional dataset. Assuming \(\mathbf{x}=[-1,2,0,1,2]\) and \(\mathbf{y}=[2,1,2,1,2]\), which are the input and output, respectively. (a) Non-convex primal ST-CNN, (b) convex dual ST-CNN.

Fig. 7: The main derivation process.

Let

\[f=\min_{v_{k}}2\beta\sum_{k=1}^{K}|v_{k}|-\mathbf{z}^{T}\sum_{k=1}^{K}\tau\left(\mathbf{Yu}_{k}\right)_{\lambda}v_{k}. \tag{34}\]

Since each term \(2\beta|v_{k}|-\mathbf{z}^{T}\tau\left(\mathbf{Yu}_{k}\right)_{\lambda}v_{k}\) is unbounded below in \(v_{k}\) unless \(\left|\mathbf{z}^{T}\tau\left(\mathbf{Yu}_{k}\right)_{\lambda}\right|\leq 2\beta\), eliminating the variable \(v_{k}\) from the primal ST-CNN yields the constraint

\[\max_{\left\|\mathbf{u}_{k}\right\|_{2}\leq 1}\left|\mathbf{z}^{T}\tau\left(\mathbf{Yu}_{k}\right)_{\lambda}\right|\leq 2\beta. \tag{35}\]
Eq. (33) is equivalent to the following optimization problem
\[d_{1}^{*}=\max_{\mathbf{z}:\left|\mathbf{z}^{T}\tau\left(\mathbf{Yu}_{k}\right)_{\lambda}\right|\leq 2\beta,\ \left\|\mathbf{u}_{k}\right\|_{2}\leq 1}-\frac{1}{4}\left\|\mathbf{z}-2\mathbf{x}\right\|_{2}^{2}+\left\|\mathbf{x}\right\|_{2}^{2}. \tag{36}\]
Next, to divide hyperplanes and provide an exact representation, we convert the nonlinear operation \(\tau(\cdot)_{\lambda}\) into the linear operator using the diagonal matrix \(\mathbf{Q}^{S}\).
_Theorem 4_:
\[d_{1}^{*}=\max_{\mathbf{z}:\left|\mathbf{z}^{T}\tau\left(\mathbf{Yu}_{k}\right)_{\lambda}\right|\leq 2\beta,\ \left\|\mathbf{u}_{k}\right\|_{2}\leq 1}-\frac{1}{4}\left\|\mathbf{z}-2\mathbf{x}\right\|_{2}^{2}+\left\|\mathbf{x}\right\|_{2}^{2},\]
can be represented as a standard finite-dimensional program
\[d_{2}^{*}=\max_{\mathbf{z}}-\frac{1}{4}\left\|\mathbf{z}-2 \mathbf{x}\right\|_{2}^{2}+\left\|\mathbf{x}\right\|_{2}^{2}, \tag{37}\]
s.t.
\[P_{S}=\{\mathbf{u}_{k}|\ P_{S_{1}}\cup P_{S_{2}}\cup P_{S_{3}}\}, \tag{38}\]
where
\[P_{S_{1}} =\{\mathbf{u}_{k}|\ \mathbf{Q}^{S_{1}}\mathbf{Yu}_{k}\geq 0, \forall i\in S_{1}\}, \tag{39}\] \[P_{S_{2}} =\{\mathbf{u}_{k}|\ \mathbf{Q}^{S_{2}}\mathbf{Yu}_{k}=0, \forall i\in S_{2}\},\] \[P_{S_{3}} =\{\mathbf{u}_{k}|\ \mathbf{Q}^{S_{3}}\mathbf{Yu}_{k}\leq 0, \forall i\in S_{3}\}.\]
Proof: First, we analyze the one-sided dual constraint in Eq. (35) as follows
\[\max_{\left\|\mathbf{u}_{k}\right\|_{2}\leq 1}\mathbf{z}^{T}\tau\left(\mathbf{Yu}_{k}\right)_{\lambda}\leq 2\beta. \tag{40}\]
To divide hyperplanes, we divide \(\mathbb{R}^{m^{2}}\) into three subsets to obtain Eq. (4) and Eq. (5). Let \(i\in H_{1}\cup H_{2}\cup H_{3}\), \(|H_{1}|+|H_{2}|+|H_{3}|=nh=I\), \(\mathcal{H}_{X}\) be the set of all hyperplane arrangement patterns for the matrix \(\mathbf{Y}\), defined as the following set [57, 58]
\[\mathcal{H}_{X}=\{sign(\mathbf{Yu}_{k}+\lambda)\cup sign(\mathbf{Yu}_{k}- \lambda)|\mathbf{u}_{k}\in\mathbb{R}^{m^{2}}\}. \tag{41}\]
Next, we take out the positions of the elements corresponding to different symbols and assign them according to
\[S_{1} =\{i|\ i\in H_{1}\}\cup\{i|\ i\in H_{2}\}, \tag{42}\] \[S_{2} =\{i|\ i\in H_{2}\},\] \[S_{3} =\{i|\ i\in H_{2}\}\cup\{i|\ i\in H_{3}\},\] \[S =S_{1}\cup S_{2}\cup S_{3}.\]
To assign a corresponding value to the position of each \(i\) in the above three sets such that the same transformation as the soft threshold function is achieved, the diagonal matrix \(\mathbf{Q}^{S}\) is constructed, and its diagonal elements for \(\mathbf{Q}_{ii}\) as Eq. (8).
Using the diagonal matrix \(\mathbf{Q}^{S}\), the constraints in Eq. (35) are equivalent to the following form
\[\max_{\left\|\mathbf{u}_{k}\right\|_{2}\leq 1,P_{S}}\left|\mathbf{z}^{T} \mathbf{Q}^{S}\left(\mathbf{Yu}_{k}\right)\right|\leq 2\beta, \tag{43}\]
where \(\mathbf{Q}^{S}=\mathbf{Q}^{S_{1}}+\mathbf{Q}^{S_{2}}+\mathbf{Q}^{S_{3}}\).
Hence, Eq. (36) can be finitely parameterized as
\[d_{2}^{*}=\max_{\mathbf{z}}-\frac{1}{4}\left\|\mathbf{z}-2\mathbf{x}\right\|_{2 }^{2}+\left\|\mathbf{x}\right\|_{2}^{2},\]
s.t.
\[\max_{\left\|\mathbf{u}_{k}\right\|_{2}\leq 1,P_{S}}\left|\mathbf{z}^{T} \mathbf{Q}^{S}\mathbf{Yu}_{k}\right|\leq 2\beta. \tag{44}\]
Now, we introduce an exact representation of a two-layer ST-CNN.
_Theorem 5_:
\[d_{2}^{*}=\max_{\mathbf{z}}-\frac{1}{4}\left\|\mathbf{z}-2\mathbf{x}\right\|_{2 }^{2}+\left\|\mathbf{x}\right\|_{2}^{2},\]
s.t.
\[\max_{\left\|\mathbf{u}_{k}\right\|_{2}\leq 1,P_{S}}\left|\mathbf{z}^{T} \mathbf{Q}^{S}\mathbf{Yu}_{k}\right|\leq 2\beta.\]
is equivalent to
\[d_{3}^{*}=\min_{\mathbf{w}_{i}\in p_{w},\,\mathbf{w}_{i}^{\prime}\in p_{w^{\prime}}}\left\|\sum_{i=1}^{I}\mathbf{Q}^{S}\mathbf{Y}(\mathbf{w}_{i}^{\prime}-\mathbf{w}_{i})-\mathbf{x}\right\|_{2}^{2} \tag{45}\] \[+2\beta\sum_{i=1}^{I}(\left\|\mathbf{w}_{i}\right\|_{2}+\left\|\mathbf{w}_{i}^{\prime}\right\|_{2}),\]
where,
\[p_{w} =\left\{\mathbf{w}_{i}|\mathbf{Q}^{S_{1}}\mathbf{Y}\mathbf{w}_{i} \geq 0,\mathbf{Q}^{S_{2}}\mathbf{Y}\mathbf{w}_{i}=0,\mathbf{Q}^{S_{3}} \mathbf{Y}\mathbf{w}_{i}\leq 0\right\},\] \[p_{w^{\prime}} =\left\{\mathbf{w}_{i}^{\prime}|\mathbf{Q}^{S_{1}}\mathbf{Y} \mathbf{w}_{i}^{\prime}\geq 0,\mathbf{Q}^{S_{2}}\mathbf{Y}\mathbf{w}_{i}^{\prime}=0, \mathbf{Q}^{S_{3}}\mathbf{Y}\mathbf{w}_{i}^{\prime}\leq 0\right\}.\]
The proof of Theorem 5 is provided in Appendix B. According to this theorem, we can prove that strong duality holds, i.e., the primal ST-CNN and the dual ST-CNN both achieve global optimality and are theoretically equivalent. This yields Theorem 6.
_Theorem 6_: Let \(p^{*}\) be the optimal value of the primal ST-CNN and \(d_{3}^{*}\) the optimal value of the dual ST-CNN. Then strong duality holds, i.e., \(p^{*}=d_{3}^{*}\).
Proof: The optimal solution to the dual ST-CNN is the same as the optimal solution to the primal ST-CNN model constructed \(\{\mathbf{u}_{k}^{*},v_{k}^{*}\}_{k=1}^{K}\) as follows
\[\left(\mathbf{u}_{k}^{*},v_{k}^{*}\right)=\Big{(}\frac{\mathbf{w}_{i}^{\prime*}}{\sqrt{\left\|\mathbf{w}_{i}^{\prime*}\right\|_{2}}},\ \sqrt{\left\|\mathbf{w}_{i}^{\prime*}\right\|_{2}}\Big{)},\ \ \text{if}\ \ \mathbf{w}_{i}^{\prime*}\neq 0,\] \[\left(\mathbf{u}_{k}^{*},v_{k}^{*}\right)=\Big{(}\frac{\mathbf{w}_{i}^{*}}{\sqrt{\left\|\mathbf{w}_{i}^{*}\right\|_{2}}},\ -\sqrt{\left\|\mathbf{w}_{i}^{*}\right\|_{2}}\Big{)},\ \ \text{if}\ \ \mathbf{w}_{i}^{*}\neq 0, \tag{46}\]

where \(\left\{\mathbf{w}_{i}^{*},\mathbf{w}_{i}^{\prime*}\right\}_{i=1}^{I}\) are the optimal solutions of Eq. (45).
\[p^{*}=\min_{\mathbf{u}_{k}^{\prime}\in\mathbb{R}^{m^{2}},v_{k}^{\prime}\in\mathbb{R}}\left\|\sum_{k=1}^{K}\tau\left(\mathbf{Y}\mathbf{u}_{k}^{\prime}\right)_{\lambda}v_{k}^{\prime}-\mathbf{x}\right\|_{2}^{2}+\beta\sum_{k=1}^{K}(\left\|\mathbf{u}_{k}^{\prime}\right\|_{2}^{2}+|v_{k}^{\prime}|^{2}) \tag{47}\]
\[\leq\left\|\sum_{k=1}^{K}\tau\left(\mathbf{Y}\mathbf{u}_{k}^{*}\right)_{\lambda}v_{k}^{*}-\mathbf{x}\right\|_{2}^{2}+\beta\sum_{k=1}^{K}(\left\|\mathbf{u}_{k}^{*}\right\|_{2}^{2}+|v_{k}^{*}|^{2})\]
\[=\left\|\sum_{i=1}^{I}\mathbf{Q}^{S}\mathbf{Y}(\mathbf{w}_{i}^{\prime*}-\mathbf{w}_{i}^{*})-\mathbf{x}\right\|_{2}^{2}+\beta\sum_{i=1,\mathbf{w}_{i}^{*}\neq 0}^{I}\left(\left\|\frac{\mathbf{w}_{i}^{*}}{\sqrt{\left\|\mathbf{w}_{i}^{*}\right\|_{2}}}\right\|_{2}^{2}+\left|\sqrt{\left\|\mathbf{w}_{i}^{*}\right\|_{2}}\right|^{2}\right)+\beta\sum_{i=1,\mathbf{w}_{i}^{\prime*}\neq 0}^{I}\left(\left\|\frac{\mathbf{w}_{i}^{\prime*}}{\sqrt{\left\|\mathbf{w}_{i}^{\prime*}\right\|_{2}}}\right\|_{2}^{2}+\left|\sqrt{\left\|\mathbf{w}_{i}^{\prime*}\right\|_{2}}\right|^{2}\right)\]
\[=\left\|\sum_{i=1}^{I}\mathbf{Q}^{S}\mathbf{Y}(\mathbf{w}_{i}^{\prime*}-\mathbf{w}_{i}^{*})-\mathbf{x}\right\|_{2}^{2}+2\beta\sum_{i=1}^{I}\left(\left\|\mathbf{w}_{i}^{*}\right\|_{2}+\left\|\mathbf{w}_{i}^{\prime*}\right\|_{2}\right)=d_{3}^{*}.\]
Combining \(p^{*}\leq d_{3}^{*}\), \(p^{*}\geq d_{1}^{*}\) (Theorems 2-3) and \(d_{1}^{*}=d_{2}^{*}=d_{3}^{*}\) (Theorems 4-5), \(p^{*}=d_{3}^{*}\) is proved.
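A minimal sketch of this construction (ours; the sign convention follows the \(\mathbf{Y}(\mathbf{w}_{i}^{\prime}-\mathbf{w}_{i})\) term in Eq. (21)), recovering primal pairs \((\mathbf{u}_{k}^{*},v_{k}^{*})\) from the non-zero dual rows as in Eq. (46):

```python
import torch

def recover_primal(W_opt: torch.Tensor, Wp_opt: torch.Tensor, tol: float = 1e-8):
    """Map optimal dual variables back to primal neurons per Eq. (46).

    Illustrative sketch: rows of W_opt / Wp_opt are w_i* / w_i'*; each row
    with norm > tol yields one primal pair (u_k*, v_k*).
    """
    pairs = []
    for W, sign in ((Wp_opt, 1.0), (W_opt, -1.0)):   # w' enters with +, w with -
        for wi in W:
            n = wi.norm()
            if n > tol:
                pairs.append((wi / n.sqrt(), sign * n.sqrt()))
    return pairs
```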
Based on Lemma 3 [55], at most \(k^{*}+1\) of the total \(I\) filter pairs \((\mathbf{w}_{i},\mathbf{w}_{i}^{\prime})\) are non-zero at the optimum, where \(k^{*}\leq I\) [38, 39].
Finally, by combining Theorems 2-6, the main theory can be proved. Thus, the hyperplane arrangements can be constructed in polynomial time (See proof in Appendix D).
The global optimization of neural networks is NP-hard [59]. Despite the theoretical difficulty, highly accurate models are trained in practice using stochastic gradient methods [60]. Unfortunately, stochastic gradient methods cannot guarantee convergence to an optimum of the non-convex training loss [61], and existing methods rarely certify convergence to a stationary point of any type [62]. Stochastic gradient methods are also sensitive to hyper-parameters: they converge slowly, to different stationary points [63], or even diverge depending on the choice of step size. Parameters like the random seed complicate replication and can produce model churn, where networks learned using the same procedure give different predictions for the same inputs [64, 65].
Therefore, some optimizers were designed to find the optimal solution during the training process. For example, the SGD optimizer [66], the SGD-based averaged gradient optimizer (ASGD) [67], and the adaptive moment estimation (ADAM) optimizer [23] may lead to different training results under the same non-convex optimization objective (see results in Section IV-A) [68].
## IV Experimental Results
Experiments will show three observations: 1) The performance of the primal ST-CNN depends on the chosen optimizer. 2) The performance of the primal ST-CNN relies on initialization. 3) The zero dual gap holds between the primal and dual ST-CNN.
All experiments were implemented on a server equipped with dual Intel Xeon Silver 4210 CPUs, 128 GB RAM, an Nvidia Tesla T4 GPU (16 GB memory), and the PyTorch deep learning library [69]. The test dataset includes simulated data and the MNIST handwritten digits commonly used in deep learning research [38, 70].
Experiments use the MNIST handwritten digits dataset with an image size of 28\(\times\)28. We randomly select 600 of the 60,000 training images as the training set; the 10,000 test images remain unchanged. i.i.d. Gaussian noise drawn from \(\mathcal{N}(0,\sigma^{2})\) is then added to form the training and test data of the primal and dual ST-CNNs. For network training, 600 noisy images and their noise-free counterparts are used as input and label. For the network test, 10,000 noisy images and their noise-free counterparts are used as input and label. The number of training and test images is consistent with that used in the ReLU-based dual theory experiments [38].
### _Primal ST-CNN Relies on Optimiser_
Here, we choose the noise level \(\sigma=0.25\); the primal and dual ST-CNNs are trained using SGD, ASGD [67], and ADAM [23] as optimizers, respectively. The training and testing results are shown in Fig. 8. The optimal solution of the primal ST-CNN depends on the choice of optimizer.
### _Primal ST-CNN Relies on Initialization_
We consider different parameter initializations: Kaiming uniform initialization as well as normal distributions with zero mean and standard deviations of 0.001 and 0.005, respectively. The experimental results are shown in Fig. 9. The objective values of the primal and dual ST-CNNs coincide when both networks are initialized with the Kaiming uniform distribution. However, when the network parameters are initialized with normal distributions of mean 0 and standard deviation 0.001 or 0.005, the objective value of the dual ST-CNN is better than that of the primal ST-CNN.

This observation implies that the primal ST-CNN depends on the selection of the initial values.
### _Verify Zero Dual Gap (Strong Duality)_
Zero dual gap [52] means that, when both the primal and the dual ST-CNN reach the global optimum, their objective values are equal. Therefore, according to the above two experimental results, to make the primal network achieve the global optimum, we choose ADAM [23] as its optimizer and initialize the network parameters with the Kaiming uniform distribution. Under all noise levels \(\sigma\in\{0.25,0.5,0.75\}\), both approaches achieve close objective values (Fig. 10).
It should be noted here that the optimal values of the primal and dual ST-CNNs are not exactly equal, as the theory proves, but are very close. The residual error stems from training on a finite amount of data and is within a negligible range. Hence, the experimental results are consistent with our theory.
## V Conclusion
In this paper, to achieve the global optimum and remove the dependence of solutions on the initial network parameters, a convex dual ST-CNN is proposed to replace its primal ST-CNN (a convolutional neural network with soft-thresholding). Under the principle of convex duality theory, we theoretically prove that strong duality holds between the dual and primal ST-CNN and further verify this observation in image denoising experiments.

Fig. 8: Training and testing with different optimizers. From left to right: (a) Training results of the primal and dual ST-CNNs when SGD is used as the optimizer, (b) training results when the optimizer is ASGD, (c) training results when the optimizer is ADAM, (d) testing results when the optimizer is SGD, (e) testing results when the optimizer is ASGD, (f) testing results when the optimizer is ADAM.

Fig. 9: Example of training and testing with different initializations. The primal and dual ST-CNNs are trained with Kaiming uniform initialization [16] and with normal distributions of mean 0 and standard deviations 0.001 and 0.005, respectively. From left to right: (a) Training of the primal and dual ST-CNNs. (b) Testing of the primal and dual ST-CNNs.
## VI Appendix
### _Proof of Theorem 2_
We use a basic inequality to rescale the parameters \(\left\|\textbf{u}_{k}^{\prime}\right\|_{2}^{2}+|v_{k}^{\prime}|^{2}\) in Eq. (20) [32, 37, 71, 72, 73]. The parameters can be rescaled as \(\textbf{u}_{k}^{\prime}=\varepsilon_{k}\textbf{u}_{k}\) and \(v_{k}^{\prime}=\frac{v_{k}}{\varepsilon_{k}}\) for any \(\varepsilon_{k}>0\).
\[\sum_{k=1}^{K}\tau\left(\textbf{Y}\textbf{u}_{k}^{\prime}\right)_{ \lambda}v_{k}^{\prime}=\sum_{k=1}^{K}\tau\left(\varepsilon_{k}\textbf{Y} \textbf{u}_{k}\right)_{\lambda}\frac{v_{k}}{\varepsilon_{k}}=\sum_{k=1}^{K} \tau\left(\textbf{Y}\textbf{u}_{k}\right)_{\lambda}v_{k}. \tag{48}\]
This proves that the scaling has no effect on the network output. In addition to this, we have the following basic inequality
\[\min_{\varepsilon_{k}}\left(\left\|\textbf{u}_{k}^{\prime}\right\|_{2}^{2}+|v_{k}^{\prime}|^{2}\right)=\min_{\varepsilon_{k}}\left(\left\|\varepsilon_{k}\textbf{u}_{k}\right\|_{2}^{2}+\left|\frac{v_{k}}{\varepsilon_{k}}\right|^{2}\right)=2\left\|\textbf{u}_{k}\right\|_{2}\left|v_{k}\right|, \tag{49}\]
where the minimum is attained at \(\varepsilon_{k}=\left(\left|v_{k}\right|/\left\|\textbf{u}_{k}\right\|_{2}\right)^{\frac{1}{2}}\).
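The last equality is the arithmetic–geometric mean inequality applied to the two terms:
\[\varepsilon_{k}^{2}\left\|\textbf{u}_{k}\right\|_{2}^{2}+\frac{|v_{k}|^{2}}{\varepsilon_{k}^{2}}\geq 2\sqrt{\varepsilon_{k}^{2}\left\|\textbf{u}_{k}\right\|_{2}^{2}\cdot\frac{|v_{k}|^{2}}{\varepsilon_{k}^{2}}}=2\left\|\textbf{u}_{k}\right\|_{2}\left|v_{k}\right|,\]
with equality exactly when \(\varepsilon_{k}^{2}\left\|\textbf{u}_{k}\right\|_{2}^{2}=|v_{k}|^{2}/\varepsilon_{k}^{2}\), which gives the stated \(\varepsilon_{k}\).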
Since the scaling operation has no effect on the right-hand side of the inequality, we can set \(\left\|\textbf{u}_{k}\right\|_{2}=1\), \(\forall k\). Therefore, \(\left\|\textbf{u}_{k}\right\|_{2}\left|v_{k}\right|\) becomes \(\left|v_{k}\right|\).
Now, let us consider a modified version of the problem in which the unit-norm inequality constraint has no effect on the optimal solution. Assume that for a certain index \(k\), an optimal solution has \(\left\|\textbf{u}_{k}\right\|_{2}<1\) with \(v_{k}\neq 0\). This means that the unit-norm inequality constraint is not active for this \(\textbf{u}_{k}\), and hence removing the constraint on \(\textbf{u}_{k}\) would not change the optimal solution.
However, once the constraint is removed, letting \(\left\|\textbf{u}_{k}\right\|_{2}\longrightarrow\infty\) reduces the objective value, since it drives \(v_{k}\) to \(0\).
This contradiction proves that all the constraints corresponding to a nonzero \(v_{k}\) must be active at an optimal solution.
It also shows that replacing \(\left\|\textbf{u}_{k}\right\|_{2}=1\) with \(\left\|\textbf{u}_{k}\right\|_{2}\leq 1\) does not change the solution of the problem.
Hence,
\[p^{*}=\min_{\textbf{u}_{k}\in\mathbb{R}^{m},\,v_{k}\in\mathbb{R}}\left\|\sum_{k=1}^{K}\tau\left(\textbf{Y}\textbf{u}_{k}\right)_{\lambda}v_{k}-\textbf{x}\right\|_{2}^{2}+2\beta\sum_{k=1}^{K}\left(\left\|\textbf{u}_{k}\right\|_{2}\left|v_{k}\right|\right). \tag{50}\]
Relaxing the constraint to \(\left\|\textbf{u}_{k}\right\|_{2}\leq 1\) without changing the optimal solution of the objective function, we obtain
\[p^{*}= \min_{\left\|\textbf{u}_{k}\right\|_{2}\leq 1}\min_{v_{k}\in \mathbb{R}}\left\|\sum_{k=1}^{K}\tau\left(\textbf{Y}\textbf{u}_{k}\right)_{ \lambda}v_{k}-\textbf{x}\right\|_{2}^{2} \tag{51}\] \[+2\beta\sum_{k=1}^{K}|v_{k}|.\]
### _Proof of Theorem 5_
Here,
\[\max_{\left\|\textbf{u}_{k}\right\|_{2}\leq 1,P_{S}}\left|\textbf{z}^{T} \textbf{Q}^{S}\left(\textbf{Y}\textbf{u}_{k}\right)\right|\leq 2\beta, \tag{52}\]
can be split into two constraints
\[\max_{\left\|\textbf{u}_{k}\right\|_{2}\leq 1,P_{S}}\textbf{z}^{T} \textbf{Q}^{S}\left(\textbf{Y}\textbf{u}_{k}\right)\leq 2\beta, \tag{53}\]
Fig. 10: Verification of the zero dual gap (strong duality): the objective values are very close when both the primal ST-CNN and the dual ST-CNN reach global optimality. (a) Training of the primal and dual ST-CNNs under Gaussian noise with mean 0 and standard deviations 0.25, 0.5, and 0.75, respectively. (b) Testing of the primal and dual ST-CNNs under Gaussian noise with mean 0 and standard deviations 0.25, 0.5, and 0.75, respectively.
and
\[\max_{\left\|\mathbf{u}_{k}\right\|_{2}\leq 1,P_{S}}-\mathbf{z}^{T}\mathbf{Q}^{S} \left(\mathbf{Y}\mathbf{u}_{k}\right)\leq 2\beta. \tag{54}\]
We first discuss the former
\[\max_{\left\|\mathbf{u}_{k}\right\|_{2}\leq 1,P_{S}}\mathbf{z}^{T}\mathbf{Q}^{S} \left(\mathbf{Y}\mathbf{u}_{k}\right)\Longrightarrow\min_{\left\|\mathbf{u}_{k} \right\|_{2}\leq 1,P_{S}}-\mathbf{z}^{T}\mathbf{Q}^{S}\left(\mathbf{Y}\mathbf{u}_{k} \right). \tag{55}\]
Introducing \(\mathbf{b},\mathbf{c},\mathbf{e}\in\mathbb{R}^{I}\), the Lagrangian form of the above function is as follows
\[L(\mathbf{u}_{k})= -\mathbf{z}^{T}\mathbf{Q}^{S}\mathbf{Y}\mathbf{u}_{k}-\mathbf{b} \mathbf{Q}^{S_{1}}\mathbf{Y}\mathbf{u}_{k}+\mathbf{c}\mathbf{Q}^{S_{2}}\mathbf{ Y}\mathbf{u}_{k}\] \[+\mathbf{e}\mathbf{Q}^{S_{3}}\mathbf{Y}\mathbf{u}_{k}, \tag{56}\]
where, by the Cauchy–Schwarz inequality,
\[-\left(\mathbf{z}^{T}\mathbf{Q}^{S}\mathbf{Y}\mathbf{u}_{k}+\mathbf{b}\mathbf{Q}^{S_{1}}\mathbf{Y}\mathbf{u}_{k}-\mathbf{c}\mathbf{Q}^{S_{2}}\mathbf{Y}\mathbf{u}_{k}-\mathbf{e}\mathbf{Q}^{S_{3}}\mathbf{Y}\mathbf{u}_{k}\right)\geq-\left\|\mathbf{Y}^{\top}\mathbf{Q}^{S}\mathbf{z}+\mathbf{Y}^{\top}\mathbf{Q}^{S_{1}}\mathbf{b}-\mathbf{Y}^{\top}\mathbf{Q}^{S_{2}}\mathbf{c}-\mathbf{Y}^{\top}\mathbf{Q}^{S_{3}}\mathbf{e}\right\|_{2}\left\|\mathbf{u}_{k}\right\|_{2}.\]
Letting \(\left\|\mathbf{u}_{k}\right\|_{2}=1\),
\[\inf L(\mathbf{u}_{k})=-\left\|\mathbf{Y}^{\top}\mathbf{Q}^{S}\mathbf{z}+\mathbf{Y}^{\top}\mathbf{Q}^{S_{1}}\mathbf{b}-\mathbf{Y}^{\top}\mathbf{Q}^{S_{2}}\mathbf{c}-\mathbf{Y}^{\top}\mathbf{Q}^{S_{3}}\mathbf{e}\right\|_{2}. \tag{57}\]
Then, the dual problem is as follows
\[\min_{\substack{\mathbf{b},\mathbf{c},\mathbf{e}\in\mathbb{R}^{I}\\ \mathbf{b},\mathbf{e}\geq 0}}\left\|\mathbf{Y}^{\top}\mathbf{Q}^{S}\mathbf{z}+\mathbf{Y}^{\top}\mathbf{Q}^{S_{1}}\mathbf{b}-\mathbf{Y}^{\top}\mathbf{Q}^{S_{2}}\mathbf{c}-\mathbf{Y}^{\top}\mathbf{Q}^{S_{3}}\mathbf{e}\right\|_{2}. \tag{58}\]
Hence, the constraint
\[\max_{\left\|\mathbf{u}_{k}\right\|_{2}\leq 1}\mathbf{z}^{T}\tau\left(\mathbf{Y}\mathbf{u}_{k}\right)_{\lambda}\leq 2\beta\]
can be equivalently written as
\[\min_{\substack{\mathbf{b},\mathbf{c},\mathbf{e}\in\mathbb{R}^{I}\\ \mathbf{b},\mathbf{e}\geq 0}}\left\|\mathbf{Y}^{\top}\mathbf{Q}^{S}\mathbf{z}+\mathbf{Y}^{\top}\mathbf{Q}^{S_{1}}\mathbf{b}-\mathbf{Y}^{\top}\mathbf{Q}^{S_{2}}\mathbf{c}-\mathbf{Y}^{\top}\mathbf{Q}^{S_{3}}\mathbf{e}\right\|_{2}\leq 2\beta. \tag{59}\]
According to the above equation, we can deduce that
\[\forall i\in[1,I],\ \exists\quad\mathbf{b}_{i},\mathbf{c}_{i},\mathbf{e}_{i} \in R^{I},\ \ s.t.\ \ \mathbf{b}_{i},\mathbf{e}_{i}\geq 0\] \[\left\|\mathbf{Y}^{\top}\mathbf{Q}^{S}\mathbf{z}+\mathbf{Y}^{ \top}\mathbf{Q}^{S_{1}}\mathbf{b}_{i}-\mathbf{Y}^{\top}\mathbf{Q}^{S_{2}} \mathbf{c}_{i}-\mathbf{Y}^{\top}\mathbf{Q}^{S_{3}}\mathbf{e}_{i}\right\|_{2}\] \[\leq 2\beta. \tag{60}\]
Considering the constraints on both sides, we introduce variables \(\mathbf{b}_{i}^{\prime},\ \mathbf{c}_{i}^{\prime},\ \mathbf{e}_{i}^{\prime}\).
Then, there are \(2I\) constraints, and the problem becomes
\[\max_{\mathbf{z}}\ -\frac{1}{4}\left\|\mathbf{z}-2\mathbf{x}\right\|_{2}^{2}+\left\|\mathbf{x}\right\|_{2}^{2} \tag{61}\]
\[\text{s.t.}\quad\left\|\mathbf{Y}^{\top}\mathbf{Q}^{S}\mathbf{z}+\mathbf{Y}^{\top}\mathbf{Q}^{S_{1}}\mathbf{b}_{i}-\mathbf{Y}^{\top}\mathbf{Q}^{S_{2}}\mathbf{c}_{i}-\mathbf{Y}^{\top}\mathbf{Q}^{S_{3}}\mathbf{e}_{i}\right\|_{2}\leq 2\beta,\]
\[\left\|-\mathbf{Y}^{\top}\mathbf{Q}^{S}\mathbf{z}+\mathbf{Y}^{\top}\mathbf{Q}^{S_{1}}\mathbf{b}_{i}^{\prime}-\mathbf{Y}^{\top}\mathbf{Q}^{S_{2}}\mathbf{c}_{i}^{\prime}-\mathbf{Y}^{\top}\mathbf{Q}^{S_{3}}\mathbf{e}_{i}^{\prime}\right\|_{2}\leq 2\beta,\]
\[\mathbf{b}_{i},\mathbf{c}_{i},\mathbf{e}_{i},\mathbf{b}_{i}^{\prime},\mathbf{c}_{i}^{\prime},\mathbf{e}_{i}^{\prime}\in\mathbb{R}^{I},\quad\mathbf{b}_{i},\mathbf{e}_{i},\mathbf{b}_{i}^{\prime},\mathbf{e}_{i}^{\prime}\geq 0,\quad i=1,\ldots,I.\]
Note that, as long as \(\beta>0\), setting \(\mathbf{b}_{i}=\mathbf{c}_{i}=\mathbf{e}_{i}=\mathbf{b}_{i}^{\prime}=\mathbf{c}_{i}^{\prime}=\mathbf{e}_{i}^{\prime}=\mathbf{z}=\mathbf{0}\) satisfies the above constraints, so strong duality holds by Slater's condition.
The dual problem can be rewritten as
\[\min_{\substack{\lambda_{i},\lambda_{i}^{\prime}\geq 0\\ \mathbf{b}_{i},\mathbf{c}_{i},\mathbf{e}_{i},\mathbf{b}_{i}^{\prime},\mathbf{c}_{i}^{\prime},\mathbf{e}_{i}^{\prime}\in\mathbb{R}^{I}\\ \mathbf{b}_{i},\mathbf{e}_{i},\mathbf{b}_{i}^{\prime},\mathbf{e}_{i}^{\prime}\geq 0}}\ \max_{\mathbf{z}}\ -\frac{1}{4}\left\|\mathbf{z}-2\mathbf{x}\right\|_{2}^{2}+\left\|\mathbf{x}\right\|_{2}^{2}+\sum_{i=1}^{I}2\lambda_{i}\beta+\sum_{i=1}^{I}2\lambda_{i}^{\prime}\beta \tag{62}\]
\[-\sum_{i=1}^{I}\lambda_{i}\left\|\mathbf{Y}^{\top}(\mathbf{Q}^{S}\mathbf{z}+\mathbf{Q}^{S_{1}}\mathbf{b}_{i}-\mathbf{Q}^{S_{2}}\mathbf{c}_{i}-\mathbf{Q}^{S_{3}}\mathbf{e}_{i})\right\|_{2}-\sum_{i=1}^{I}\lambda_{i}^{\prime}\left\|\mathbf{Y}^{\top}(-\mathbf{Q}^{S}\mathbf{z}+\mathbf{Q}^{S_{1}}\mathbf{b}_{i}^{\prime}-\mathbf{Q}^{S_{2}}\mathbf{c}_{i}^{\prime}-\mathbf{Q}^{S_{3}}\mathbf{e}_{i}^{\prime})\right\|_{2}.\]
Introducing variable \(\mathbf{t}_{1},...,\mathbf{t}_{I},\mathbf{t}_{1}^{\prime},...,\mathbf{t}_{I}^{ \prime}\in\mathbb{R}^{m^{2}}\), the above formula can be changed to
\[\min_{\substack{\lambda_{i},\lambda_{i}^{\prime}\geq 0}}\ \max_{\mathbf{z}}\ \min_{\substack{\mathbf{t}_{i},\mathbf{t}_{i}^{\prime}\in\mathbb{R}^{m^{2}},\ \left\|\mathbf{t}_{i}\right\|_{2}\leq 1,\ \left\|\mathbf{t}_{i}^{\prime}\right\|_{2}\leq 1\\ \mathbf{b}_{i},\mathbf{c}_{i},\mathbf{e}_{i},\mathbf{b}_{i}^{\prime},\mathbf{c}_{i}^{\prime},\mathbf{e}_{i}^{\prime}\in\mathbb{R}^{I},\ \mathbf{b}_{i},\mathbf{e}_{i},\mathbf{b}_{i}^{\prime},\mathbf{e}_{i}^{\prime}\geq 0}}\ -\frac{1}{4}\left\|\mathbf{z}-2\mathbf{x}\right\|_{2}^{2}+\left\|\mathbf{x}\right\|_{2}^{2}+\sum_{i=1}^{I}2\lambda_{i}\beta+\sum_{i=1}^{I}2\lambda_{i}^{\prime}\beta \tag{63}\]
\[-\sum_{i=1}^{I}\lambda_{i}\mathbf{t}_{i}^{\top}\mathbf{Y}^{\top}\left(\mathbf{Q}^{S}\mathbf{z}+\mathbf{Q}^{S_{1}}\mathbf{b}_{i}-\mathbf{Q}^{S_{2}}\mathbf{c}_{i}-\mathbf{Q}^{S_{3}}\mathbf{e}_{i}\right)-\sum_{i=1}^{I}\lambda_{i}^{\prime}\mathbf{t}_{i}^{\prime\top}\mathbf{Y}^{\top}\left(-\mathbf{Q}^{S}\mathbf{z}+\mathbf{Q}^{S_{1}}\mathbf{b}_{i}^{\prime}-\mathbf{Q}^{S_{2}}\mathbf{c}_{i}^{\prime}-\mathbf{Q}^{S_{3}}\mathbf{e}_{i}^{\prime}\right).\]
Next, we take \(\mathbf{z}\), \(\mathbf{b}_{i},\mathbf{b}_{i}^{\prime},\mathbf{c}_{i},\mathbf{c}_{i}^{\prime}, \mathbf{e}_{i},\mathbf{e}_{i}^{\prime}\) as variables to analyze the maximum value of the objective function.
First, taking \(\mathbf{z}\) as the variable, we analyze the following terms of the objective function:
\[-\frac{1}{4}\left\|\mathbf{z}-2\mathbf{x}\right\|_{2}^{2}-\sum_{i=1}^{I}\lambda_{i}\mathbf{t}_{i}^{\top}\mathbf{Y}^{\top}\mathbf{Q}^{S}\mathbf{z}+\sum_{i=1}^{I}\lambda_{i}^{\prime}\mathbf{t}_{i}^{\prime\top}\mathbf{Y}^{\top}\mathbf{Q}^{S}\mathbf{z}\]
\[=-\frac{1}{4}\left(\left\|\mathbf{z}\right\|_{2}^{2}-4\mathbf{z}^{\top}\mathbf{x}+4\left\|\mathbf{x}\right\|_{2}^{2}\right)-\sum_{i=1}^{I}\lambda_{i}\mathbf{t}_{i}^{\top}\mathbf{Y}^{\top}\mathbf{Q}^{S}\mathbf{z}+\sum_{i=1}^{I}\lambda_{i}^{\prime}\mathbf{t}_{i}^{\prime\top}\mathbf{Y}^{\top}\mathbf{Q}^{S}\mathbf{z}\]
\[=-\frac{1}{4}\left\|\mathbf{z}-\left(2\mathbf{x}-2\sum_{i=1}^{I}\lambda_{i}\mathbf{Q}^{S}\mathbf{Y}\mathbf{t}_{i}+2\sum_{i=1}^{I}\lambda_{i}^{\prime}\mathbf{Q}^{S}\mathbf{Y}\mathbf{t}_{i}^{\prime}\right)\right\|_{2}^{2}-\left\|\mathbf{x}\right\|_{2}^{2}+\left\|\mathbf{x}-\sum_{i=1}^{I}\lambda_{i}\mathbf{Q}^{S}\mathbf{Y}\mathbf{t}_{i}+\sum_{i=1}^{I}\lambda_{i}^{\prime}\mathbf{Q}^{S}\mathbf{Y}\mathbf{t}_{i}^{\prime}\right\|_{2}^{2}. \tag{64}\]
Letting \(\mathbf{z}=2\mathbf{x}-2\sum_{i=1}^{I}\lambda_{i}\mathbf{Q}^{S}\mathbf{Y}\mathbf{t}_{i}+2\sum_{i=1}^{I}\lambda_{i}^{\prime}\mathbf{Q}^{S}\mathbf{Y}\mathbf{t}_{i}^{\prime}\), the objective function attains its maximum over \(\mathbf{z}\), and the problem reduces to
\[\min_{\substack{\lambda_{i},\lambda_{i}^{\prime}\geq 0,\ \left\|\mathbf{t}_{i}\right\|_{2}\leq 1,\ \left\|\mathbf{t}_{i}^{\prime}\right\|_{2}\leq 1\\ \mathbf{b}_{i},\mathbf{c}_{i},\mathbf{e}_{i},\mathbf{b}_{i}^{\prime},\mathbf{c}_{i}^{\prime},\mathbf{e}_{i}^{\prime}\in\mathbb{R}^{I},\ \mathbf{b}_{i},\mathbf{e}_{i},\mathbf{b}_{i}^{\prime},\mathbf{e}_{i}^{\prime}\geq 0}}\left\|\mathbf{x}-\sum_{i=1}^{I}\left(\lambda_{i}\mathbf{Q}^{S}\mathbf{Y}\mathbf{t}_{i}-\lambda_{i}^{\prime}\mathbf{Q}^{S}\mathbf{Y}\mathbf{t}_{i}^{\prime}\right)\right\|_{2}^{2}+2\beta\sum_{i=1}^{I}\left(\lambda_{i}+\lambda_{i}^{\prime}\right) \tag{65}\]
\[-\sum_{i=1}^{I}\lambda_{i}\mathbf{t}_{i}^{\top}\mathbf{Y}^{\top}\left(\mathbf{Q}^{S_{1}}\mathbf{b}_{i}-\mathbf{Q}^{S_{2}}\mathbf{c}_{i}-\mathbf{Q}^{S_{3}}\mathbf{e}_{i}\right)-\sum_{i=1}^{I}\lambda_{i}^{\prime}\mathbf{t}_{i}^{\prime\top}\mathbf{Y}^{\top}\left(\mathbf{Q}^{S_{1}}\mathbf{b}_{i}^{\prime}-\mathbf{Q}^{S_{2}}\mathbf{c}_{i}^{\prime}-\mathbf{Q}^{S_{3}}\mathbf{e}_{i}^{\prime}\right).\]
The soft-thresholding activation partitions the input space by hyperplanes; one such boundary set is
\[B_{4}=\left\{\mathbf{u}_{2}\,|\ \mathbf{y}_{i}^{T}\mathbf{u}_{2}-\lambda=0\right\}.\]
To investigate the number of linear regions, the following question must be answered: How many regions are generated by the arrangement of \(n\) hyperplanes in \(\mathbb{R}^{m^{2}}\)?
We use Lemma 4 of [56], an extension of Zaslavsky's hyperplane arrangement theory [74]; Lemma 3 of [56] tightens this bound for the special case in which the hyperplanes may not be in general position. It is therefore suitable for analyzing the dual ST-CNN proposed in this paper, which contains many parallel hyperplanes. Consider the hyperplanes in \(\mathbb{R}^{m^{2}}\) defined by the rows of \(\mathbf{Y}\mathbf{u}_{k}+\lambda=0\).
Then, the number of regions induced by the hyperplanes is at most
\[\sum_{j=0}^{rank(\mathbf{Y})}\binom{m^{2}}{j}\,. \tag{73}\]
If \(\mathbf{Y}\) is full rank [75, 76, 77, 57], this expression can be written as
\[3\sum_{j=0}^{r-1}\binom{I-1}{j}\leq 3r\left(\frac{e(I-1)}{r}\right)^{r}, \tag{74}\]
for \(r\leq I\), where \(r:=rank(\mathbf{Y})\).
It is useful to recognize that two-layer soft-thresholding networks with \(K\) hidden neurons can be globally optimized via the convex program Eq. (21). The convex program has \(6I^{2}\) constraints and \(6Im^{2}\) variables, which can be solved in polynomial time with respect to \(I\). The computational complexity is at most \(O(m^{12}(\frac{I}{m^{2}})^{3m^{2}})\) using standard interior-point solvers.
## Acknowledgments
The authors thank Jian-Feng Cai, Peng Li, Zi Wang, Yihui Huang, and Nubwimana Rachel for helpful discussions.
|
2303.14322 | Spatio-Temporal driven Attention Graph Neural Network with Block
Adjacency matrix (STAG-NN-BA) | Despite the recent advances in deep neural networks, standard convolutional
kernels limit the applications of these networks to the Euclidean domain only.
Considering the geodesic nature of the measurement of the earth's surface,
remote sensing is one such area that can benefit from non-Euclidean and
spherical domains. For this purpose, we propose a novel Graph Neural Network
architecture for spatial and spatio-temporal classification using satellite
imagery. We propose a hybrid attention method to learn the relative importance
of irregular neighbors in remote sensing data. Instead of classifying each
pixel, we propose a method based on Simple Linear Iterative Clustering (SLIC)
image segmentation and Graph Attention GAT. The superpixels obtained from SLIC
become the nodes of our Graph Convolution Network (GCN). We then construct a
region adjacency graph (RAG) where each superpixel is connected to every other
adjacent superpixel in the image, enabling information to propagate globally.
Finally, we propose a Spatially driven Attention Graph Neural Network (SAG-NN)
to classify each RAG. We also propose an extension to our SAG-NN for
spatio-temporal data. Unlike regular grids of pixels in images, superpixels are
irregular in nature and cannot be used to create spatio-temporal graphs. We
introduce temporal bias by combining unconnected RAGs from each image into one
supergraph. This is achieved by introducing block adjacency matrices resulting
in novel Spatio-Temporal driven Attention Graph Neural Network with Block
Adjacency matrix (STAG-NN-BA). We evaluate our proposed methods on two remote
sensing datasets namely Asia14 and C2D2. In comparison with both non-graph and
graph-based approaches our SAG-NN and STAG-NN-BA achieved superior accuracy on
all the datasets while incurring less computation cost. The code and dataset
will be made public via our GitHub repository. | U. Nazir, W. Islam, M. Taj | 2023-03-25T01:26:50Z | http://arxiv.org/abs/2303.14322v1 | # Spatio-Temporal driven Attention Graph Neural Network with Block Adjacency matrix (STAG-NN-BA)
###### Abstract.
Despite the recent advances in deep neural networks, standard convolutional kernels limit the applications of these networks to the Euclidean domain only. Considering the geodesic nature of the measurement of the earth's surface, remote sensing is one such area that can benefit from non-Euclidean and spherical domains. For this purpose, we propose a novel Graph Neural Network architecture for spatial and spatio-temporal classification using satellite imagery. We propose a hybrid attention method to learn the relative importance of irregular neighbors in remote sensing data. Instead of classifying each pixel, we propose a method based on Simple Linear Iterative Clustering (SLIC) image segmentation and Graph Attention (GAT) (Wang et al., 2017). The superpixels obtained from SLIC become the nodes of our Graph Convolution Network (GCN). We then construct a region adjacency graph (RAG) where each superpixel is connected to every other adjacent superpixel in the image, enabling information to propagate globally. Finally, we propose a Spatially driven Attention Graph Neural Network (SAG-NN) to classify each RAG. We also propose an extension to our SAG-NN for spatio-temporal data. Unlike regular grids of pixels in images, superpixels are irregular in nature and cannot be used to create spatio-temporal graphs. We introduce temporal bias by combining unconnected RAGs from each image into one supergraph. This is achieved by introducing block adjacency matrices resulting in a novel Spatio-Temporal driven Attention Graph Neural Network with Block Adjacency matrix (STAG-NN-BA). We evaluate our proposed methods on two remote sensing datasets, namely Asia14 and C2D2. In comparison with both non-graph and graph-based approaches, our SAG-NN and STAG-NN-BA achieved superior accuracy on all the datasets while incurring less computation cost. The code and dataset will be made public via our GitHub repository.
Geometric deep learning, Euclidean and non-Euclidean domains, Superpixels, Region adjacency graph, Remote sensing
While the utility of graph neural networks for emerging applications is promising, the complexity of graph data imposes significant challenges on many existing machine learning algorithms. For instance, in the area of image processing, the use of Graph Convolutional Networks (GCNs) is still limited to a few examples only (Nguyen et al., 2017; Wang et al., 2018; Wang et al., 2018). With carefully hand-crafted graph construction methods or other supervised approaches, images can be converted to structured graphs capable of being processed by GCNs. In these GNNs, each pixel of an image is considered as a graph node (Garshan et al., 2016), which is cumbersome and in many cases unnecessary. Instead of learning from raw image pixels, the use of 'superpixels' addresses this concern (Wang et al., 2018; Wang et al., 2018) and helps in reducing the graph size and thereby the computational complexity. The applications of superpixels include saliency estimation (Wang et al., 2018), optical flow estimation (Wang et al., 2018), object detection (Wang et al., 2018), semantic segmentation (Wang et al., 2018), reduced input for subsequent algorithms (Wang et al., 2018), and explainable AI (Wang et al., 2018).
In this paper, we propose a hybrid attention method to incorporate these relational inductive biases in remote sensing data. Instead of classifying each pixel, we propose a method based on Simple Linear Iterative Clustering (SLIC) image segmentation and Graph Attention (GAT) (Yang et al., 2018) to detect socio-economic indicators from remote sensing data (see Table 1). We first over-segment the image into superpixels. These superpixels become the nodes of our Graph Convolution Network (GCN). We then construct a region adjacency graph (RAG) where each superpixel is connected to every other adjacent superpixel in the image, enabling information to propagate globally. Finally, we classify each RAG via a Spatially driven Attention Graph Neural Network (SAG-NN). We also propose an extension of SAG-NN to spatio-temporal data, named Spatio-temporal driven Attention GNN (STAG-NN). Unlike pixels or objects, superpixels are prone to change over time; to address this problem we propose STAG-NN with a Block Adjacency matrix (STAG-NN-BA), which enables us to incorporate both spatial and temporal information in a single time-varying graph. The main novelty of this paper is the SAG-NN and STAG-NN-BA architectures for the prediction of spatio-temporal transition classes (such as construction, destruction, cultivation, and harvesting) from remote sensing data. We also show that this approach incurs less computational cost compared with other deep learning methods. The details of our proposed approach, which is derived from the vanilla GAT (Yang et al., 2018), are presented in Section 4.
## 2. Related Work
**Image Classification**: The availability of high-resolution satellite imagery has paved the way for future planning and geographical studies at large scale across the globe (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). Automated large-scale surveys via remote sensing often make use of image classification (Wang et al., 2018) and segmentation (Wang et al., 2018; Wang et al., 2018). In this study, we focus on image classification, as it plays an important role in land-use land-cover applications. The generic problem of image classification consists of assigning images to object classes (usually, a predefined set of labels). Traditional approaches preprocess images to extract image features (e.g., texture, color) and run a classifier on those features.
Krizhevsky et al.(Krizhevsky et al., 2014) published a seminal study that explored deep neural networks for image classification. They won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012
Figure 1. Flow diagram of our proposed spatial attention graph neural network (SAG-NN). Images are first converted into superpixels using SLIC, a region adjacency graph is then constructed from these superpixels, and finally the spatial attention graph neural network is applied. Graph embeddings are then used for classification via an MLP. (Satellite images courtesy Google Earth).
by a large margin and set a turning point for image classification research. In the following years, networks like GoogLeNet (Shi and Malik, 2015) and Squeeze-and-Excitation (Shi and Malik, 2015) further helped in reducing the top-5 error rate from 15.3% to just 2.51%. More recent approaches, such as ViT-e (Shi and Malik, 2015) and CoAtNet-7 (Dong et al., 2016), combined convolution with attention/transformers (Shi and Malik, 2015) and achieved top-1 accuracies of 90.45% and 90.88%, respectively.
Despite the recent advances in datasets and network architectures, using standard convolutional kernels limits the applications of these networks in problems that do not present a domain based on rectangular grids. For example, panoramas capture a whole 360-degree field of view; similarly, measurements of the earth's surface are geodesic in nature. To handle these issues, some researchers suggested networks designed to adapt to the spherical domain (Dong et al., 2016), while others proposed to learn how to adapt convolutional layers to the spherical domain (Dong et al., 2016). More recently, graph-based methods have been introduced so that such non-Euclidean spaces can be modeled via geometric deep learning (Shi and Malik, 2015).
**Geometric Deep Learning**: Recently, there has been an increasing interest in geometric deep learning (Shi and Malik, 2015), attempting to generalize deep neural models to non-Euclidean structured domains such as graphs and manifolds. Graph-based representations can be used to model a variety of problems and domains. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics (Garshan et al., 2016). In addition, they naturally allow several "multi-resolution" representations of the same object. The same image can be converted to graphs using pixel-level or superpixel-level representations. Superpixel-based representations reduce the input size while also allowing domains such as pinhole and spherical images to be represented as graphs, reducing the computation cost needed for classification. Furthermore, there are several recent advances toward the development of Graph Neural Networks (GNNs) (Shi and Malik, 2015; Shi and Malik, 2015; Shi and Malik, 2015), including Graph Attention Networks (Shi and Malik, 2015), which could bridge the gap between different domains.
**Image Classification via Graphs**: To the best of our knowledge, Monti et al.(Mori et al., 2015) proposed the first application of Graph Neural Networks(GNNs) to image classification and the MoNET framework for dealing with geometric data in general. Their framework works by weighting the neighborhood aggregation through a learned scaling factor based on geometric distances. Velickovic et al. (Velickovic et al., 2016) proposed a model using self-attention for weighting the neighborhood aggregation in GNNs. Although this model was a sub-model of the MoNET framework, it provided extraordinary results on other datasets, namely Cora and Citeseer, two famous citation networks (Shi and Malik, 2015), and on the FAUST humans dataset (Carparpes et al., 2016).
Although Shi and Malik's seminal paper applied graph-based methods directly to images by converting each pixel to a graph node for image segmentation, smaller graphs can be generated with higher-level representations. Each segmentation region is a natural choice for the nodes of a graph, but generating accurate segmentation results is still an open problem. Superpixels are a middle ground between pixel-based graphs and object-related region-based graphs: they group nearby pixels of similar color into meaningful representation units called segments (Shi and Malik, 2015). Several computer vision tasks can be performed on these over-segmented images, including depth estimation, segmentation, and object localization as in (Bai et al., 2016). The work mentioned above on using GNNs for images, alongside the work on adapting self-attention for GNNs and the works on generating superpixels of images, forms the pillars on which we base our experiments.
SplineCNN (Shi and Malik, 2015) and Geo-GCN (Mori et al., 2015) are two other models which extend the MoNET framework to weight neighborhood aggregation based on geometric information. SplineCNN leverages B-spline basis properties in its neighborhood aggregation procedure, while Geo-GCN engineered a learned distance function to perform data augmentation using rotations and conformations. Semi-supervised augmentation for classification is another technique for using GNNs with image data, as in (Velickovic et al., 2016). The main difference of their method is that they extract a feature vector for each image with a convolutional network and then build a graph on which they use their model. Although their technique is useful for semi-supervised learning, it is not directly comparable to ours, since we apply a vanilla GAT-based classifier directly to a graph representing a single image.
In this paper, we propose a unified framework that generalizes geometric deep learning to remote sensing data and learns spatial and spatio-temporal features using superpixels. We improve the GAT scoring function to overcome the following shortcomings of GATv1 (Shi and Malik, 2015) and GATv2 (Shi and Malik, 2015): 1) in GATv1, the learned layers \(\mathbf{W}\) and \(a\) are applied consecutively, and can thus be collapsed into a single linear layer; 2) GATv2 (Shi and Malik, 2015) performs best for a complete bipartite graph. We improve the graph attention scoring function by introducing a relational inductive bias in the data using neighborhood feature aggregation as well as the ranking of attended nodes. Our proposed approach achieves higher accuracy with less computing cost than state-of-the-art graph neural network architectures.
## 3. Challenges
### Heterogeneity in Remote Sensing Data
While considering a large geographic area, several inherent complexities in satellite imagery make automated detection of change in land-use a challenging task. These include, but are not limited to, i) variations in imaging sensors, ii) differences in construction design across countries, iii) dynamic surroundings, and iv) variations in luminosity, seasonal changes, and pollution levels.
Table 1. Related remote sensing studies, the land-use indicators they address (e.g., land cover mapping), and the methods used (e.g., CNN, GCN).
The heterogeneity of land surface covers, in particular, poses a major challenge for spatial and spatio-temporal analysis. High-resolution satellite imagery is drawing much attention from researchers due to the fine spatial details of land surface covers, yet pixel-based classification methods are hardly applicable to such images because of the high interior heterogeneity of those covers: the abundant detail makes it harder to separate the spectral signatures of different land surface covers (Sandhi, 2017). To deal with this challenge, we use superpixel-based classification, which reduces the redundancy in the spatial features of different ground objects. Details of other challenges can be found in (Sandhi, 2017).
### Representation of Images as Graphs
Applying GNNs to images raises unique implementation challenges. Most graph neural frameworks (Golovolovolov et al., 2013; Golovolovolov and LeCun, 2015; LeCun, 2016) are designed for dense representations such as pixel-based graphs. However, pixel-based representations result in a large number of nodes, which increases both the compute and memory costs. Since adjacent pixels carry similar information except at object boundaries, a pixel-based representation is not only cumbersome but also highly redundant. To address this concern, superpixel- and object-based graphs have been extensively used in the literature (see Table 2). Superpixels have been widely used as an effective way to reduce the number of image primitives for subsequent processing.
The literature includes numerous methods for computing a superpixel-based representation of an image, each with different strengths and weaknesses. Recently, many DNN-based methods to identify superpixels have been proposed (Sandhi, 2017; Sandhi, 2017), but the most popular in the GNN literature (on account of generally good results and low compute complexity) are SLIC (Golovolovolov et al., 2013), Quickshift (Quickshift, 2017), and Felzenszwalb (2017). Details of these methods are presented in the following subsections.
#### 3.2.1. SLIC
The SLIC (simple linear iterative clustering) (Golovolovolov et al., 2013) algorithm simply performs an iterative clustering approach in the 5D space of color information and image location. The algorithm quickly gained momentum and is now widely used due to its speed, storage efficiency, and successful segmentation in terms of color boundaries. However, the limitation of SLIC is that it often captures the background pixels as shown in Fig. 2 - Column 1, and therefore does not significantly help in data reduction for the graph generation. But it performs better in capturing built-up and grassy land from satellite imagery as shown in Fig. 5 - Column 2.
#### 3.2.2. Quickshift
Quickshift (Quickshift, 2017) is a relatively recent 2D algorithm based on an approximation of kernelized mean-shift (Golovolovolov et al., 2013). It segments an image based on three parameters: \(\epsilon\) for the standard deviation of the Gaussian function, \(\alpha\) for the weighting of the color term, and \(S\) to limit the calculation to a window of size \(S\times S\). Therefore, it belongs to the family of local mode-seeking algorithms and is applied to the 5D space consisting of color information and image location. One benefit of Quickshift is that it actually computes a hierarchical segmentation on multiple scales simultaneously. As shown in Fig. 2 - Column 2, it does not capture background pixels and also reduces 30% of the input data for graph generation, but it cannot segment built-up and grassy areas perfectly, as shown in Fig. 5 - Column 3.
#### 3.2.3. Felzenszwalb
This fast 2D image segmentation algorithm, proposed in (Golovolovolovolov et al., 2013), has a single scale parameter that influences the segment size. The actual size and number of segments can vary greatly, depending on local contrast. This segmentation appeared to be less suitable in tests on a series of images, as its parameters require a special adjustment, and consequently, a static choice of this parameter leads to unusable results. As shown in Fig. 2 - Column 3 and Fig. 5 - Column 1, it only captures the pixels corresponding to the region of interest pixels but performs poorly in graph generation procedure as shown in Fig. 3 - Column 3.
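To make the comparison concrete, the following minimal sketch (not the authors' code; all parameter values other than SLIC's are illustrative assumptions) shows how the three segmenters are typically invoked via scikit-image:

```python
# Hedged sketch: run the three superpixel methods discussed above on one tile.
import numpy as np
from skimage import io
from skimage.segmentation import felzenszwalb, quickshift, slic

image = io.imread("satellite_tile.png")  # hypothetical 256x256 RGB tile
segments = {
    "SLIC": slic(image, n_segments=75, compactness=10, start_label=1),
    "Quickshift": quickshift(image, kernel_size=3, max_dist=6, ratio=0.5),
    "Felzenszwalb": felzenszwalb(image, scale=100, sigma=0.5, min_size=50),
}
for name, seg in segments.items():
    # Count distinct superpixel labels produced by each method.
    print(f"{name}: {len(np.unique(seg))} superpixels")
```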
Instead of the grid-based placement of pixels in images, superpixels usually yield an irregular representation that depends on image content. Such irregular representations restrict the construction of graphs on spatio-temporal data; this work addresses the issue with the proposed STAG-NN-BA, which resolves it via a block adjacency matrix.
## 4. Proposed Methodology
The proposed methodology consists of the following major steps:
* Generate a superpixel representation of the input images.
* Create a region adjacency graph (RAG) from the superpixel representation, by connecting neighbouring superpixels.
\begin{table}
\begin{tabular}{c c} \hline \hline Graph Type & Proposals \\ \hline Pixel-based Graph & (Golovolovolov et al., 2013; Golovolovolov et al., 2013; Golovolovolov and LeCun, 2015) \\ Superpixel-based Graph & (Golovolovolov et al., 2013; Golovolovolov and LeCun, 2015; LeCun, 2016; LeCun, 2016) \\ Object-based Graph & (Golovolovolovolov et al., 2013; Golovolovolov and LeCun, 2016; LeCun, 2016; LeCun, 2016) \\ \hline \hline \end{tabular}
\end{table}
Table 2. Classification of proposals for graph generation from images
Figure 3. Region Adjacency Graphs (RAG) generation from SLIC, Quickshift and Felzenszwalb superpixels respectively.
Figure 2. Superpixel segmentation techniques on MNIST digit: 9.
* Spatial Attention Graph Neural Network (SAG-NN) from region adjacency graph (RAG) for spatial classification.
* Spatio-temporal driven Graph Attention Neural Network with Block Adjacency matrix (STAG-NN-BA) for classification of transitions or changes in land-use over time.
The following subsections discuss the proposed architecture in detail.
### Superpixel Segmentation
When we apply segmentation techniques to satellite imagery, SLIC (Beng et al., 2015) performs better than Quickshift (Quickshift et al., 2016), Felzenszwalb (2017), and Compact watershed (Wang et al., 2017). As shown in Fig. 4 - Column 2, SLIC captures the color boundaries and perfectly segments the built-up areas and agricultural land. It is more stable for satellite imagery than the other segmentation techniques. Superpixel segmentation using SLIC (Beng et al., 2015) provides an elegant way to divide the satellite image into homogeneous regions, as shown in Fig. 4. We set the number of segments to 75 and the compactness to 10. This resulted in approximately 75 superpixels per image and subsequently a graph of 75 nodes, instead of 65536 nodes when using the raw pixel values of the remote sensing imagery.
### Graph generation from superpixels
After superpixel segmentation, a Region Adjacency Graph (RAG) is generated by treating each superpixel as a node and adding edges between all directly adjacent superpixels. Unlike MoNet (Wang et al., 2017), which uses K-Nearest Neighbours to form connections between nodes, in our graph \(G\) we form connections based on immediate adjacency only. Ours is thus a more compact graph, while information from neighbours of neighbours can still be incorporated via K-hop message passing. Each graph node can have associated features, providing aggregate information based on the characteristics of the superpixel itself. The regions obtained in the segmentation stage are represented as vertices \(V\), and relations between neighboring regions are represented as edges \(E\). The search for the most similar pair of regions is repeated several times per iteration, and every search requires \(\mathcal{O}(N)\) region similarity computations. The graph is utilized so that the search is limited only to the regions that are directly connected by the graph structure.
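As an illustration, the following sketch builds this superpixel-to-RAG pipeline with scikit-image. It is an assumed implementation, not the authors' released code; the SLIC parameters follow the previous subsection (75 segments, compactness 10), and the mean-colour node features are an assumption:

```python
# Hedged sketch of the superpixel-to-RAG step using scikit-image.
import numpy as np
from skimage import io
from skimage.graph import rag_mean_color  # skimage.future.graph in older releases
from skimage.segmentation import slic

image = io.imread("satellite_tile.png")   # hypothetical 256x256 RGB tile
labels = slic(image, n_segments=75, compactness=10, start_label=1)

# One node per superpixel; edges connect directly adjacent superpixels only.
rag = rag_mean_color(image, labels)

# Node features: mean RGB colour of each superpixel (other aggregates possible).
features = np.stack([rag.nodes[n]["mean color"] for n in rag.nodes])
print(features.shape)                      # roughly (75, 3)
```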
Figure 4. Superpixel segmentation techniques on image from Asia14 dataset. Felzenszwalb’s method and quickshift cannot segment perfectly built-up and barren land due to inherent complexities in satellite imagery. On the other hand, compact watershed poorly performed on grassy land. While SLIC works perfectly on satellite imagery.
Figure 5. RAG generation from SLIC superpixels on image from Asia14 dataset (Satellite images courtesy Google Earth).
Figure 8. The block adjacency matrix (Temporal RAG) for land-use classification used in proposed spatio-temporal models: STAG-NN-BA-GCP and STAG-NN-BA-GSP.
Figure 6. RAG Generation from a single geospatial image.
Figure 7. Generation of Temporal RAG from geospatial images of same geolocation from multiple years.
### Spatial Attention Graph Neural Network (SAG-NN)
We will start by describing a single message passing layer, as the sole layer utilized throughout all of the GCN (Srivastava et al., 2017) and GAT (Yang et al., 2017) architectures.
Consider a graph \(G(V,E)\), where \(V\) is the set of \(n\) nodes and \(E\) is the set of \(m\) edges. \(G\) is specified as a set of nodes' initial embeddings (input features) \((\overrightarrow{x_{1}},\overrightarrow{x_{2}},\ldots,\overrightarrow{x_{n}})\) and an adjacency matrix **ADJ**, such that \(\textbf{ADJ}_{i,j}=1\) if \(i\) and \(j\) are connected, and \(0\) otherwise. Node \(i\)'s initial embedding (at step \(k=0\)) is:
\[\overrightarrow{h}_{i}^{(0)}=\overrightarrow{x}_{i},\forall i\in V \tag{1}\]
A graph convolutional layer at step \(k=1,2,\ldots,K\) then computes a set of new node features \((\overrightarrow{h}_{1}^{k},\overrightarrow{h}_{2}^{k},\ldots,\overrightarrow {h}_{n}^{k})\), based on the input features as well as the graph structure. Every graph convolutional layer starts off with a shared feature transformation specified by a weight matrix \(\mathbf{W}\).
In general, to satisfy the localization property, we will define a graph convolutional operator as an aggregation of features across neighbourhoods; defining \(\mathcal{N}_{i}\) as the neighbourhood of node \(i\) (typically consisting of all first-order neighbours of \(i\), including \(i\) itself), we can define the output features of node \(i\) as
\[\overrightarrow{h}_{i}^{(k)}=f^{(k)}\Bigg{(}\mathbf{W}^{(k)}\cdot\Bigg{[}\sum _{j\in\mathcal{N}_{i}}C^{(k)}\overrightarrow{h}_{j}^{(k-1)}+C^{(k)} \overrightarrow{h}_{i}^{(k-1)}\Bigg{]}\Bigg{)} \tag{2}\]
where \(\forall i\in V\) and \(f^{(k)}\) is an activation function. Each neighbour can be assigned different importance as:
\[\overrightarrow{h}_{i}^{(k)}=f^{(k)}\Bigg{(}\mathbf{W}^{(k)}\cdot\Bigg{[}\sum _{j\in\mathcal{N}_{i}}\alpha_{ij}^{(k-1)}\overrightarrow{h}_{j}^{(k-1)}+ \alpha_{ii}^{(k-1)}\overrightarrow{h}_{i}^{(k-1)}\Bigg{]}\Bigg{)} \tag{3}\]
where \(\forall i\in V\) and \(\sum_{j\in\mathcal{N}_{i}}(.)\) is the weighted mean of i's neighbour's embedding at step \(k-1\) and the attention weights \(\alpha^{(k)}\) are generated by an attention mechanism \(\mathbf{A}^{(k)}\), normalized such that the sum over all neighbours of each node \(i\) is 1:
\[\alpha_{ij}^{(k)}=\frac{\mathbf{A}^{(k)}(\overrightarrow{h}_{i}^{(k)}, \overrightarrow{h}_{j}^{(k)})}{\sum_{w\in\mathcal{N}_{i}}\mathbf{A}^{(k)}( \overrightarrow{h}_{i}^{(k)},\overrightarrow{h}_{w}^{(k)})},\,\forall(i,j)\in E \tag{4}\]
In standard GAT (see eq. 3 & 4) \(\alpha_{ij}\) is implicitly defined, employing self-attention over the node features to do so. This choice was not without motivation, as self-attention has previously been shown to be self-sufficient for state-of-the-art-level results on machine translation, as demonstrated by the Transformer architecture (Yang et al., 2017).
Generally, we let \(\alpha_{ij}\) be computed as a byproduct of an attentional mechanism, \(a:\mathcal{R}^{N}\times\mathcal{R}^{N}\longrightarrow\mathcal{R}\) which computes normalized coefficients \(\alpha_{ij}\) across pairs of nodes \(i,j\), based on their features (see eq. 4).
In contrast, in GATv2, every node can attend to any other node using the scoring function shown in Eq. 5.
\[\overrightarrow{h}_{i}^{(k)}=\alpha_{ij}^{(k-1)}\Bigg{[}f^{(k)}\Bigg{(} \mathbf{W}^{(k)}\cdot\sum_{j\in\mathcal{N}_{i}}\overrightarrow{h}_{j}^{(k-1) }+\overrightarrow{h}_{i}^{(k-1)}\Bigg{)}\Bigg{]} \tag{5}\]
The main problem with the standard GAT scoring function (see Eq. 3) is that the learned layers \(\mathbf{W}\) and \(\alpha\) are applied consecutively, and can thus be collapsed into a single linear layer (Chen et al., 2017). To fix this limitation, we impose a relational inductive bias in the data using neighborhood feature aggregation (see Eqs. 6 & 7). In our proposed SAG-NN, node \(i\)'s embedding at step \(k\) for \(k=1\) is:
\[\overrightarrow{h}_{i}^{(k)}=f^{(k)}\Bigg{(}\mathbf{W}^{(k)}\cdot\Bigg{[}AGG _{j\in\mathcal{N}_{i}}(\{\overrightarrow{h}_{j}^{(k-1)}\}),\overrightarrow{h }_{i}^{(k-1)}\Bigg{]}\Bigg{)}, \tag{6}\]
where \(\forall i\in V\) and \(AGG(.)\) is the aggregation of i's neighbour's embeddings at step \(k-1\) and \(h_{i}^{(k-1)}\) is the node i's embedding at step \(k-1\). And node i's embedding at step k for \(k=2,3,\ldots\) upto \(K\) is:
\[\overrightarrow{h}_{i}^{(k)}=f^{(k)}\Bigg{(}\mathbf{W}^{(k)}\cdot\Bigg{[}\sum _{j\in\mathcal{N}_{i}}\alpha_{ij}^{(k-1)}\overrightarrow{h}_{j}^{(k-1)}+ \alpha_{ii}^{(k-1)}\overrightarrow{h}_{i}^{(k-1)}\Bigg{]}\Bigg{)} \tag{7}\]
The proposed solution not only improves the aggregation of features from neighbouring nodes, it also improves the ranking of attended nodes (static attention) as shown in eq. 6 & 7.
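To make the two update rules concrete, the following PyTorch sketch re-implements them for a single dense graph. It is an illustration under assumptions (one attention head, dense adjacency, and a GAT-style additive scoring function as in Eqs. 3 & 4), not the authors' released code:

```python
# Hedged single-graph sketch of the two-stage update in Eqs. (6)-(7).
import torch
import torch.nn.functional as F

def first_hop(H, A, W):
    """Eq. (6): h_i = f(W . [AGG_j(h_j), h_i]) with sum aggregation.
    H: (n, d) node features, A: (n, n) adjacency, W: (d_out, 2d) weights."""
    agg = A @ H                                   # sum of neighbour embeddings
    return F.relu(torch.cat([agg, H], dim=1) @ W.T)

def attention_hop(H, A, W, a_src, a_dst):
    """Eq. (7): attention-weighted aggregation including a self term.
    a_src, a_dst: (d_out,) vectors of an assumed additive scoring function."""
    Z = H @ W.T                                   # shared linear transform
    e = (Z @ a_src).unsqueeze(1) + (Z @ a_dst).unsqueeze(0)  # (n, n) scores
    e = F.leaky_relu(e)
    self_loops = torch.eye(A.size(0), device=A.device)
    e = e.masked_fill((A + self_loops) == 0, float("-inf"))  # mask non-neighbours
    alpha = torch.softmax(e, dim=1)               # normalisation as in Eq. (4)
    return F.relu(alpha @ Z)
```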
#### 4.3.1. Spatio-temporal Classification via SAG-NN-E
Although the proposed SAG-NN architecture is designed to aggregate neighborhood features for learning spatial land-use classes, we also extended it to spatio-temporal classification. Given \(T\) time steps, the resulting ensemble SAG-NN-E has \(T\) copies of SAG-NN, one per time step, connected in parallel. The ensemble has a voting scheme that takes the spatial classification from each SAG-NN and generates the spatio-temporal classification (see Fig. 9). We used this ensemble as a baseline for the evaluation of our proposed Spatio-temporal driven Graph Attention Neural Network, which is discussed next.
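A minimal sketch of the voting stage follows; since the exact voting rule is not spelled out in the text, a simple mapping from the first and last per-timestep spatial predictions to a transition class is assumed here purely for illustration:

```python
# Hedged sketch of the SAG-NN-E voting stage (assumed rule, assumed labels).
TRANSITIONS = {
    ("barren", "built-up"): "construction",
    ("built-up", "barren"): "destruction",
    ("barren", "grass"): "cultivation",
    ("grass", "barren"): "de-cultivation",
}

def vote(per_step_labels):
    """per_step_labels: T spatial class labels, one per SAG-NN copy."""
    key = (per_step_labels[0], per_step_labels[-1])
    return TRANSITIONS.get(key, "no-change")
```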
### Spatio-temporal driven Graph Attention Neural Network with Block Adjacency matrix (STAG-NN-BA)
Images with multiple channels, as in color or multi-spectral images, or sequences of multiple images, are usually represented as a spatio-temporal volume. These spatio-temporal volumes have a fixed spatial dimension (pixels) at each depth of the volume. However, when superpixels are used instead of pixels, the dimension differs at each time step. Thus, graphs built from the superpixels of each image in a sequence cannot be stacked
Figure 9. Spatio-temporal Classification via SAG-NN-E.
together as in the case of pixel-based representations. Furthermore, in GNNs the structure of the graph remains unchanged over multiple layers; only the node representations change (Wang et al., 2017). This restricts the use of GNNs for spatio-temporal classification problems with a varying number of nodes over time.
We address this problem by proposing a novel Temporal-RAG that connects the individual RAGs from each image. To incorporate the temporal change in graphs, we add a fourth dimension to the node features of these RAGs: a numeric index that indicates the chronological order of the image the superpixel belongs to. We then combine the RAGs of these separate images into a supergraph that has the RAGs as unconnected subgraphs; we call this supergraph _Temporal-RAGs_. Figure 7 depicts the creation of _Temporal-RAGs_ from images of the same geolocation from different years. Our proposed Temporal-RAG is an extension of our SAG-NN architecture. The supergraph is generated by combining the adjacency matrices of the individual RAGs into a single adjacency matrix (see Figs. 7 and 8). This results in a block-diagonal adjacency matrix for Temporal-RAGs, yielding the Spatio-temporal driven Graph Attention Neural Network with Block Adjacency matrix (STAG-NN-BA), defined as:
\[\overrightarrow{h}_{i}^{(k)}=ReLU\Bigg{(}\mathbf{W}^{(k)}\cdot\Bigg{[}\sum_{j\in\mathcal{N}_{i}}\alpha_{ij}^{(k-1)}\overrightarrow{h}_{j}^{(k-1)}+\alpha_{ii}^{(k-1)}\overrightarrow{h}_{i}^{(k-1)}\Bigg{]}\Bigg{)}+ReLU\Bigg{(}\mathbf{W}^{(k)}\cdot\Bigg{[}\sum_{j\in\mathcal{N}_{i}}\alpha_{ij}^{(k-1)}\overrightarrow{h}_{j}^{(k-1)}+\alpha_{ii}^{(k-1)}\overrightarrow{h}_{i}^{(k-1)}\Bigg{]}\Bigg{)}+ReLU\Bigg{(}\mathbf{W}^{(k)}\cdot\Bigg{[}\sum_{j\in\mathcal{N}_{i}}\alpha_{ij}^{(k-1)}\overrightarrow{h}_{j}^{(k-1)}+\alpha_{ii}^{(k-1)}\overrightarrow{h}_{i}^{(k-1)}\Bigg{]}\Bigg{)} \tag{8}\]
where the \(\mathbf{+}\) symbol represents the concatenation of features.
In STAG-NN-BA, we aggregate the node embeddings from all the RAGs into one graph embedding \(X_{G}\) of length \(D\). Then, we feed that embedding to a Multi-Layer Perceptron (MLP) for assigning one of the final transition classes. Our proposed architecture allows imposing a relational inductive bias in the data using neighborhood feature aggregation over space as well as time, resulting in a single architecture for data with a varying number of nodes over time (see Fig. 10). Thus, it can be used to classify the transitions or changes in land-use over time in remote sensing data. Since transitions are essentially temporal phenomena, the proposed STAG-NN-BA method can incorporate temporal information into region adjacency graphs. We believe that this method can be extended to other geometric data.
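A hedged sketch of how such a block-diagonal supergraph can be assembled is given below; the SciPy-based assembly and the mean-colour node features are assumptions for illustration, not the authors' implementation:

```python
# Hedged sketch of assembling the Temporal-RAG of Fig. 8: per-year RAG
# adjacency matrices become blocks of one block-diagonal matrix, and a
# chronological index is appended as the fourth node-feature dimension.
import numpy as np
from scipy.sparse import block_diag

def temporal_rag(adjacencies, features):
    """adjacencies: list of T (n_t, n_t) matrices; features: list of (n_t, 3) arrays."""
    A = block_diag(adjacencies).tocsr()     # unconnected subgraphs, one per year
    X = np.vstack([
        np.hstack([f, np.full((f.shape[0], 1), t)])  # append year index t
        for t, f in enumerate(features)
    ])
    return A, X
```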
We do not assign features to the edges, since our model uses an attention mechanism, and we believe that the edge features will be learned according to the features of the connecting nodes. STAG-NN-BA combine ideas of graph convolutions (Wang et al., 2017), which allows graph nodes to aggregate information from their irregular neighbourhoods, with self-attention mechanisms (Wang et al., 2017), which allows nodes to learn the relative importance of each neighbour during the aggregation process.
Although there are many different models that incorporate weights in neighborhood aggregation, such as SplineCNN (Krizhevsky et al., 2015) and GEO-GCN (Krizhevsky et al., 2015), we used three approaches to perform land-use transition classification of temporal images, namely SAG-NN-E (see Section 4.3.1), Global Sum Pooling (STAG-NN-BA-GSP), and Global Concatenated Pooling (STAG-NN-BA-GCP). The last two are discussed as follows:
**Global Sum Pooling (STAG-NN-BA-GSP)**: There exist many different types of order-in-variant read-out layers in the literature, such as Global Average Pooling (Wang et al., 2017), Global Attention Pooling (Wang et al., 2017), Global Max Pooling (Wang et al., 2017), and Global Sum Pooling (Wang et al., 2017).
We use Global Sum Pooling (GSP) for its simplicity, as defined in the equation \(\mathbf{x}_{\mathcal{G}}=\sum_{v\in\mathcal{V}}\mathbf{x}_{v}^{(L)}\), where \(\mathcal{V}\) is the set of vertices, \(\mathbf{x}_{v}^{(L)}\) is the node embedding at the last layer of the graph neural network, and \(\mathbf{x}_{\mathcal{G}}\) is the embedding for the graph obtained as a result of the pooling operation.
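A minimal sketch of this readout, assuming the node embeddings of a mini-batch of graphs together with the usual per-node graph index; the torch_geometric helper is our choice of implementation, not necessarily the authors'.

```python
# Global Sum Pooling sketch: sum all node embeddings of each graph into a
# single D-dimensional graph embedding (torch_geometric helper assumed).
import torch
from torch_geometric.nn import global_add_pool

def graph_embedding_gsp(h, batch):
    """h: (num_nodes, D) last-layer node embeddings;
    batch: (num_nodes,) graph index of each node."""
    return global_add_pool(h, batch)  # (num_graphs, D)
```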
**Global Concatenated Pooling (STAG-NN-BA-GCP)**: We use RAGs of images from three different timestamps combined into one Temporal-RAG for the transition classification. Taking the graph readout in the last layer of the GAT using Global Sum Pooling (GSP) adds all the nodes of the Temporal-RAGs into one \(D\)-dimensional vector. This makes the embedding of a Temporal-RAG indistinguishable from that of a Temporal-RAG in which the underlying RAGs have swapped places. To solve this problem, we introduce a variation of GSP which gives a separate embedding for each underlying RAG, concatenated into one \(n\times D\) vector (see Fig. 10).
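A sketch of this order-preserving variant, which sum-pools each underlying RAG separately using its temporal index and concatenates the results; names and the assumption of three years are illustrative.

```python
# Global Concatenated Pooling sketch: pool each year's RAG separately so
# that swapping the underlying RAGs changes the embedding (illustrative).
import torch

def graph_embedding_gcp(h, time_index, num_years=3):
    """h: (num_nodes, D); time_index: (num_nodes,) values in {0..num_years-1}."""
    parts = [h[time_index == t].sum(dim=0) for t in range(num_years)]
    return torch.cat(parts)  # (num_years * D,)
```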
## 5. Results and Evaluation
### Datasets
We used three datasets for the evaluation of our proposed approach, namely MNIST (Krizhevsky et al., 2015), Asia14 (Krizhevsky et al., 2015) and the C2D2 dataset (Beng et al., 2016). Both Asia14 and C2D2 are remote sensing datasets, for spatial and spatio-temporal classification respectively. These datasets pose graph classification tasks, where graphs are represented in mixed mode: one adjacency matrix, many instances of node features. Details of these datasets are discussed next.
#### 5.1.1. MNIST Pixel-based Dataset
MNIST (Krizhevsky et al., 2015) is an acronym for the Modified National Institute of Standards and Technology dataset. It consists of \(28\times 28\) pixel grayscale images of handwritten single digits between 0 and 9. The MNIST dataset contains \(70,000\) pixel-based region adjacency graphs as described by (Krizhevsky et al., 2015). Every graph is labeled with one of 10 classes.
#### 5.1.2. Asia14 pixel-based and Superpixels Dataset
The _Asia14_ dataset contains samples under varying conditions, as discussed in Section 3.1. Furthermore, unlike street imagery, land-use is subject to significant variations in satellite imagery. To cater for this, we used a subset of a 14-class dataset named _Asia14_ (Krizhevsky et al., 2015). This dataset consists of Digital Globe RGB band images from 2016 and 2017 of resolution \(256\times 256\) at zoom level 20 (corresponding to 0.149 pixels per meter at the equator). We used 9 classes, including brick kilns, houses, roads, tennis courts, grass, dense forest, parking lots, and parks. The issue of sensor variations is handled by diversifying the training data across several spatial locations within the Indo-Pak region of South Asia. We have \(9,000\) pixel-based region adjacency graphs, and we generated superpixels using SLIC (Beng et al., 2016). Then \(9,000\) graphs with 75 nodes each were generated using the region adjacency graph method.
#### 5.1.3. C2D2 Dataset
This dataset contains spatio-temporal data annotated for four fundamental land-use land-change transitions, namely construction, destruction, cultivation, and de-cultivation. The dataset was originally collected and prepared by (Beng et al., 2017). They browsed Digital Globe imagery data for the years 2011, 2013, and 2017 and visited almost \(550,000\) random locations, covering approximately \(5310\ km^{2}\). Along with the lat-long, at each location an image patch of resolution \(256\times 256\) at zoom level 20 (i.e., 0.149 pixels per meter at the equator) was cropped. The provided dataset contained 3D volumes of spatio-temporal images from different years. We had to reverse this process to separate the individual images of a location into directories for each year. We then generated region adjacency graphs (RAGs) from the superpixels of these images, which were produced using SLIC, and used the same annotations as were assigned to the 3D volumes.
### Implementation Details
All the graph neural networks are trained using PyTorch. The optimization method is Adam with an initial learning rate of \(1e^{-3}\). The learning rate is scaled by a factor of 0.1 if the validation loss does not decline for 20 epochs. Instead of using a fixed number of epochs, we use an early stopping criterion; the patience for early stopping is 200. We keep the same train, validation and test splits for all the datasets, i.e., 70%, 15%, and 15% respectively.
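A minimal sketch of this training setup (Adam at \(1e^{-3}\), plateau-based learning-rate scaling with patience 20, early stopping with patience 200); `model`, `train_one_epoch` and `evaluate` are placeholders, and interpreting the 0.1 as a multiplicative scaling factor is our assumption.

```python
# Training-setup sketch; model/train_one_epoch/evaluate are placeholders,
# and the 0.1 learning-rate scaling factor is our reading of the paper.
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=20)

best_val, patience, wait = float("inf"), 200, 0
for epoch in range(100_000):
    train_one_epoch(model, train_loader, optimizer)  # placeholder
    val_loss = evaluate(model, val_loader)           # placeholder
    scheduler.step(val_loss)                         # lr *= 0.1 on plateau
    if val_loss < best_val:
        best_val, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:                         # early stopping
            break
```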
### Evaluation of SAG-NN
We evaluated our Spatial Attention Graph Attention Network (SAG-NN) architecture on two datasets, namely MNIST and Asia14. We performed two experiments: in the first we generated pixel-based graphs, and in the second we used superpixel-based graphs. We performed comparisons with two classical methods, namely Inception-ResNet-v2 (Vinyals et al., 2015) and 2D-ResNet-50 (Vinyals et al., 2015), and seven graph-based state-of-the-art methods, namely MoNet (Vinyals et al., 2015), ChebNet (Chen et al., 2016), GATv1 (Wang et al., 2017), AGNN (Wang et al., 2017), GraphSAGE (Chen et al., 2017), Crystal GCN (Wang et al., 2017), and GATv2 (Wang et al., 2017).
We first trained and validated our SAG-NN, as well as all the other methods, on the MNIST dataset. Our SAG-NN model achieved the highest accuracy of 98.14% on MNIST with 25.69 million parameters on pixel-based RAGs. We then trained and tested SAG-NN, as well as all the other methods, on the Asia14 dataset. Here again our proposed SAG-NN achieved the highest test accuracies of 77.00% and 80.98% on pixel-based and superpixel RAGs, respectively (see Tables 3 and 4).
The experiments in Table 3 show that SAG-NN outperforms the other classical and RAG-based GNN classifiers on pixel-based RAGs. In Table 4, SAG-NN has a comparable number of training parameters and shows higher accuracy when compared with GCN (Wang et al., 2017) and GraphSAGE (Chen et al., 2017). It shows comparably high accuracy when
\begin{table}
\begin{tabular}{|c|c|c|} \hline
Architectures & \#Param (M) & Asia14 \\ \hline
\multicolumn{3}{|c|}{Classical neural network models on the image dataset} \\ \hline
Inception-ResNet-v2 (Vinyals et al., 2015) & 23.50 & 57.70\% \\ \hline
2D-ResNet-50 (Vinyals et al., 2015) & 23.50 & 56.45\% \\ \hline
\multicolumn{3}{|c|}{Graph neural networks} \\ \hline
GCN (Wang et al., 2017) & **0.015** & 9.78\% \\ \hline
GraphSAGE (Chen et al., 2017) & **0.015** & 65.00\% \\ \hline
GATv1 (Wang et al., 2017) & **0.030** & **80.30\%** \\ \hline
GATv2 (Wang et al., 2017) & 0.055 & 72.04\% \\ \hline
SAG-NN (ours) & **0.030** & **80.98\%** \\ \hline
\end{tabular}
\end{table}
Table 4. Spatial classification accuracy on SLIC superpixel-based Region Adjacency Graphs (RAGs) of a subset of the Asia14 (Wang et al., 2017) dataset. Top-2 ranking methods are in bold and, in particular, red (1st) and violet (2nd).
Figure 10. Spatio-Temporal driven Attention Graph Neural Network with Block Adjacency matrix (STAG-NN-BA).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
Architectures & \#Param (M) & MNIST & Asia14 \\ \hline
\multicolumn{4}{|c|}{Classical neural network models on the image dataset} \\ \hline
Inception-ResNet-v2 (Vinyals et al., 2015) & 23.50 & - & 57.70\% \\ \hline
2D-ResNet-50 (Vinyals et al., 2015) & 23.50 & - & 56.45\% \\ \hline
\multicolumn{4}{|c|}{Graph neural networks} \\ \hline
MoNet (Vinyals et al., 2015) & 2.12 & 91.11\% & 66.39\% \\ \hline
ChebNet (Chen et al., 2016) & 12.85 & 75.62\% & 64.60\% \\ \hline
GATv1 (Wang et al., 2017) & 25.70 & 96.19\% & 69.85\% \\ \hline
AGNN (Wang et al., 2017) & **0.41** & 97.98\% & 47.80\% \\ \hline
GraphSAGE (Chen et al., 2017) & 12.85 & 97.27\% & 70.00\% \\ \hline
Crystal GCN (Wang et al., 2017) & **0.41** & **98.04\%** & 63.20\% \\ \hline
GATv2 (Wang et al., 2017) & 25.70 & - & 71.10\% \\ \hline
SAG-NN (ours) & 25.69 & **98.14\%** & **77.00\%** \\ \hline
\end{tabular}
\end{table}
Table 3. Spatial classification accuracy on pixel-based Region Adjacency Graphs (RAGs) of MNIST (Wang et al., 2017) and a subset of the Asia14 (Wang et al., 2017) dataset. Top-2 ranking methods are in bold and, in particular, red (1st) and violet (2nd).
compared with GATv1 (Wang et al., 2017). GATv2 (Chen et al., 2017) was proposed for bipartite graphs, which is why it shows lower performance on pixel-based and superpixel-based region adjacency graphs compared to our proposed model.
### Evaluation of STAG-NN-BA
We compared both variants of our STAG-NN-BA with two other methods, namely 3D-ResNet-34 (Chen et al., 2017) and SAG-NN-E. SAG-NN-E is our extension of SAG-NN for spatio-temporal data and serves as the baseline. 3D-ResNet-34 (Chen et al., 2017), on the other hand, uses 3D convolutions and is the only state-of-the-art method with published results on the C2D2 dataset. In order to compare our results on the C2D2 dataset, we used the same train/test split as in 3D-ResNet-34 (Chen et al., 2017). The transition classification ability of the SAG-NN-E approach depends on the performance of the land-use classification and the voting procedure (see Section 4.3.1). Both STAG-NN-BA-GCP and STAG-NN-BA-GSP outperformed SAG-NN-E and 3D-ResNet-34 (Chen et al., 2017) in terms of both accuracy and compute cost. STAG-NN-BA-GCP and STAG-NN-BA-GSP achieved approximately 7% and 20% higher accuracy, respectively, compared to 3D-ResNet-34. They also achieved 4.88% and 17.81% higher accuracy compared to SAG-NN-E, which indicates the effectiveness of our temporal model STAG-NN-BA over the spatial model SAG-NN. Furthermore, STAG-NN-BA-GSP outperforms all the other methods, which shows that global sum pooling is a better-suited aggregation method than global concatenated pooling.
Table 5 also compares the training parameters, forward pass time, and accuracy of our models used for spatio-temporal land-use classification. It can be seen that the forward pass time of STAG-NN-BA is almost 1ms lower as compared to SAG-NN and much lower as compared to 3D-ResNet-34.
For land-use transition classification, the STAG-NN-BA-GSP approach is the most reliable. However, we also draw a comparison of 3D-ResNet-34 (Chen et al., 2017) with SAG-NN-E and STAG-NN-BA-GCP (see Table 5). Both proposed spatio-temporal models (STAG-NN-BA-GCP and STAG-NN-BA-GSP) achieved higher performance at a lower computational cost on the C2D2 dataset.
### Qualitative Analysis
Fig. 11 shows sample annotations for the _Construction_ transition class. In Fig. 11 (Row 1), SAG-NN-E with the voting mechanism classifies the example as _Cultivation_, which is clearly wrong, as it can be seen from the middle and last images that the land has undergone _Construction_. This type of misclassification is expected from the model, since there are two transitions in the three images of this geolocation, and the voting mechanism tends to get confused when multiple transitions are present in an example. Our proposed model STAG-NN-BA-GSP, however, correctly classifies it as _Construction_. In Fig. 11 (Row 2), all our models (SAG-NN-E, STAG-NN-BA-GCP, and STAG-NN-BA-GSP) classify the example as _Construction_. In Fig. 11 (Row 3), SAG-NN-E and STAG-NN-BA-GSP classify it correctly, but STAG-NN-BA-GCP confuses it with _Destruction_, perhaps because in this example one building is removed while multiple others are added.
## 6. Conclusion and Future Work
This paper proposed two novel graph neural network architectures for spatial and spatio-temporal classification of remote sensing imagery. We also proposed a novel method to represent temporal information in images using region adjacency graphs, called Temporal-RAGs. We evaluated our approaches on two remote sensing datasets, namely Asia14 and C2D2. The comparison with previously existing classical and graph neural network methods showed that our approaches achieve higher performance while greatly reducing the required computation. While working on this paper we recognized two areas that can serve as interesting problems for future work. First, there is an issue of information loss during the generation of graphs from superpixel segmentation: over-segmentation of an image into superpixels causes information loss, which decreases the representational power of pixel-based graphs. Second, the information about the shape of the underlying superpixel segment is lost. We can extract a generic shape embedding into a single \(N\)-dimensional vector using an auto-encoder. While assigning the
Figure 11. Examples showing the change in land-use between 2011 and 2017. In all three examples, more and more land was used for construction purposes over the years. See Section 5.5 for a discussion on results. (Satellite images courtesy Google Earth).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
Model & \#Par. (M) & FPT (ms) & Acc. \\ \hline
3D-ResNet-34 (Chen et al., 2017) & 63.50 & \textgreater{} 3.6 & 57.72\% \\
SAG-NN-E & **0.030** & 3.60 & 60.02\% \\
STAG-NN-BA-GCP (ours) & **0.050** & **2.50** & 64.90\% \\
STAG-NN-BA-GSP (ours) & **0.030** & 2.62 & **77.83\%** \\ \hline
\end{tabular}
\end{table}
Table 5. Spatio-temporal comparative evaluation for land-use transition classification on the C2D2 dataset. (Key: Acc.: Accuracy, Par.: Parameters, M: Millions, FPT: Forward Pass Time in milliseconds for 100 forward passes). Top-2 ranking methods are in bold and, in particular, red (1st) and violet (2nd).
color values as features, this \(N\)-dimensional shape embedding vector can be concatenated into the initial features. This can help incorporate the shape into graph representations.
|
2301.01630 | Equalization of a 10 Gbps IMDD signal by a small silicon photonics time
delayed neural network | A small 4-channels time-delayed complex perceptron is used as a silicon
photonics neural network (NN) device to compensate for chromatic dispersion in
optical fiber links. The NN device is experimentally tested with
non-return-to-zero optical signals at 10 Gbps after propagation through up to
125 km optical fiber link. During the learning phase, a separation-loss
function is optimized in order to maximally separate the transmitted levels of
0s from the 1s, which implies an optimization of the bit-error-rate. Testing of
the NN device shows that the excess losses introduced by the NN device are
compensated by the gain in transmitted signal equalization for a link longer
than 100 km. The measured data are reproduced by a model which accounts for the
optical link and the neural network device. This allows simulating the network
performances for higher data rates, where the device shows improvement with
respect to the benchmark both in terms of performance as well as ease of use. | Emiliano Staffoli, Mattia Mancinelli, Paolo Bettotti, Lorenzo Pavesi | 2023-01-04T14:06:12Z | http://arxiv.org/abs/2301.01630v2 | # Equalization of a 10 Gbps IMDD signal by a small silicon photonics time delayed neural network
###### Abstract
A small 4-channels time-delayed complex perceptron is used as a silicon photonics neural network (NN) device to compensate for chromatic dispersion in optical fiber links. The NN device is experimentally tested with non-return-to-zero optical signals at 10 Gbps after propagation through up to 125 km optical fiber link. During the learning phase, a separation-loss function is optimized in order to maximally separate the transmitted levels of 0s from the 1s, which implies an optimization of the bit-error-rate. Testing of the NN device shows that the excess losses introduced by the NN device are compensated by the gain in transmitted signal equalization for a link longer than 100 km. The measured data are reproduced by a model which accounts for the optical link and the NN device. This allows simulating the network performances for higher data rates, where the device shows improvement with respect to the benchmark both in terms of performance as well as ease of use.
## 1 Introduction
Optical fibers are the backbone of the Internet, since they allow data transmission at large bandwidths and over long distances. To increase the capacity of optical links, large input optical powers are needed to compensate for fiber losses [1]. Under these high-power transmission conditions, both linear and nonlinear effects alter the shape of the transmitted optical pulses [2], which implies the necessity of distortion compensation in the optical network. Nowadays, signal recovery (equalization) is mostly accomplished by digital devices that introduce latency, delay and power consumption [3]. A clear example is observed in the trend to replace simple intensity modulation direct detection (IMDD) transceivers with more performing but costly and power-hungry coherent transceivers [4], where digital signal processing (DSP) devices allow running algorithms to restore the data [5]. Different numerical approaches to correct for both linear and nonlinear optical fiber impairments exist, with an emerging trend to use artificial-intelligence-based algorithms [6].
To reduce the cost and power consumption of optical links, it is desirable to introduce equalization techniques also for simple IMDD systems. Even linear impairments, such as Chromatic Dispersion (CD), Polarization Mode Dispersion (PMD), Symbol Timing Offset and Optical filtering, severely distort the transmission [1]. Among these impairments, one of the most severe is CD, which causes a broadening of the optical pulse and the associated intersymbol interference [7]. To compensate or correct for CD, several types of equalization techniques have been introduced, among which dispersion-compensated optical fiber and Bragg gratings are the most diffused ones [1]. These are based on the use of dispersion-compensating units which recover the initial undispersed signal by counteracting the CD effect. Another approach relies on the use of a dispersion-compensating photonic-integrated programmable lattice filter formed by cascaded Mach-Zehnder interferometers [8]. An alternative to these approaches is the use of integrated photonic neural networks [9]. Their advantages derive from operating the corrections directly in the optical domain, drastically reducing the power demand and the latency, as well as from the flexibility of the equalization, which can be learned directly on the deployed link and, therefore, can be easily adapted to optical link variations. Few hardware implementations of this concept exist [10, 11, 12, 13, 14, 15, 16].
Here, we propose and validate the use of a small silicon photonics 4-channel delayed complex perceptron [17] to equalize a 10 Gbps IMDD 100 km long optical link. In the proposed photonic neural network (NN), the input signal is split into 4 channels where the combined action of delay lines and tunable phase shifters creates the desired interference pattern at the output that counteracts the intersymbol interference. This working principle has been applied to compensate for distortions induced by linear effects during propagation in a single-mode fiber. Equalization is performed on-chip and no external data processing is thus needed, except for the training phase. The NN training is based on a Particle Swarm Optimizer (PSO) [18]. Moreover, since the NN is of the feed-forward type [17], the latency induced in signal processing is maximally reduced.
## 2 Procedures
The small NN device, whose design is shown in Fig. 1, is based on a delayed complex perceptron [17]. The input signal (\(u(t)\), the input complex field) is split into four waveguides by a cascade of 1x2 multimode interferometers (MMIs). On each \(k\)-th waveguide but the first (\(k=\)1), a spiral forms a delay line which adds to the input signal a delay \(\Delta_{k}=(k-1)\Delta_{t}\), where \(\Delta_{t}=50\) ps has been determined from the signal bitrate that, in this case, is 10 Gbps in NRZ (Non-Return-to-Zero) modulation. After the delay stage, the \(k\)-th waveguide hosts a delayed copy of the input \(u(t)\), namely \(u_{k}(t)=u(t-\Delta_{t}(k-1))\) with \(k\)=1,..., 4. Now, the signal undergoes a phase modulation performed by phase shifters realized with current-controlled heaters. In this way, the signals in each waveguide are weighted with \(w_{k}=a_{k}\exp(i\phi_{k})\) where \(a_{k}\) stands for the spiral losses and \(\phi_{k}\) for the added phase. After the weighting section, the four signals are recombined by means of a 1x4 combiner, realized using a cascade of 2x1 MMIs, which performs the operation \(\sum_{k=1}^{4}u_{k}(t)w_{k}\). The output signal is then detected by a fast photodetector, which closes the processing by performing a nonlinear transformation, i.e. the detected signal intensity is \(y(t)=\left|\sum_{k=1}^{4}u_{k}(t)w_{k}\right|^{2}\).
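As a numerical illustration of this output relation, the sketch below computes \(y(t)\) for a sampled input field; variable names are ours, and the wrap-around of `np.roll` is acceptable here because the transmitted PRBS is periodic.

```python
# Numerical sketch of the 4-tap delayed complex perceptron output
# y(t) = |sum_k w_k u(t - (k-1)Δt)|^2 with w_k = a_k exp(i φ_k);
# np.roll's wrap-around is acceptable because the PRBS is periodic.
import numpy as np

def perceptron_output(u, delay_samples, a, phi):
    """u: complex field samples; delay_samples: Δt in samples;
    a, phi: length-4 arrays of tap losses and phases."""
    y = np.zeros_like(u)
    for k in range(4):
        w_k = a[k] * np.exp(1j * phi[k])
        y = y + w_k * np.roll(u, k * delay_samples)  # delayed copy u(t - kΔt)
    return np.abs(y) ** 2  # photodetector intensity
```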
The delayed complex perceptron acts as a 4-tap filter. The complexity of the layout in terms of the number of taps \(N_{T}\) and delay unit \(\Delta_{t}\) is determined in relation to the input bitrate \(B\) and the target propagation distance \(L\). This relation can be empirically described as
\[N_{T}=\mathrm{int}\left(\frac{1/B+|L\beta_{2}\Delta\omega|}{\Delta_{t}}\right). \tag{1}\]
Here the numerator represents an estimate of the new pulse width, obtained as the sum of the initial bit time slot \(1/B\) and the pulse broadening \(\Delta T\) induced by CD on a gaussian pulse propagating in a fiber [1]. \(\beta_{2}\) represents the Group Velocity Dispersion parameter, and \(\Delta\omega\) the pulse bandwidth. Substituting in Eq. 1 the parameters for the propagation of a 10 Gbps NRZ PRBS (\(\Delta\omega\approx 2\pi\times 10\) GHz) through a \(L=100\) km long standard SM G.652D fiber (\(\beta_{2}=-0.021\) ps\({}^{2}\)/m), one obtains a pulse broadening of \(\Delta T=130\) ps, that for \(\Delta_{t}=50\) ps corresponds to \(N_{T}\approx 4\). The choice of \(\Delta_{t}\) is the result of a trade-off between a sufficient sampling of the information of a single-bit time slot at recombination (at least 2 samples per bit) and the aim of having a restricted number of channels to contain the excess losses.
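Plugging the quoted parameters into Eq. (1) reproduces the \(N_{T}\approx 4\) estimate; the snippet below is just this arithmetic.

```python
# Reproducing the N_T ≈ 4 estimate of Eq. (1) for 10 Gbps NRZ over
# 100 km of G.652D fiber with Δt = 50 ps.
import numpy as np

B = 10e9                    # bit rate [bit/s]
L = 100e3                   # fiber length [m]
beta2 = -0.021e-24          # GVD parameter [s^2/m] (-0.021 ps^2/m)
d_omega = 2 * np.pi * 10e9  # pulse bandwidth [rad/s]
dt = 50e-12                 # delay unit [s]

N_T = int((1 / B + abs(L * beta2 * d_omega)) / dt)
print(N_T)  # -> 4
```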
The NN device is fabricated on a Silicon-on-Insulator (SOI) platform within a multi-project wafer run at IMEC-Belgium. The waveguides are 220 nm thick and have a width of 450 nm, allowing for single-mode operation on both polarizations at 1550 nm. The input and output gratings have a footprint of 50 \(\mu\)m \(\times\) 30 \(\mu\)m and fix the polarization to the Transverse Electric (TE). The \(1\times 2\) and \(2\times 1\) MMIs used for splitting and recombination of the optical signal have a footprint of 20 \(\mu\)m \(\times\) 100 \(\mu\)m. The gratings and the MMIs are implemented using the proprietary IMEC PDKs. Phase shifters are based on current-driven heaters realized as 60 \(\mu\)m-long and 0.6 \(\mu\)m-wide with a resistance of 60 \(\Omega\) placed on top of an 800 nm thick Silica
cladding. The delays \(\Delta_{k}\) are realized with spirals whose lengths are \((k-1)\)-th multiples of 3.56 mm (corresponding to a delay unit of \(\Delta_{t}=50\) ps). The optical losses of the spirals, due to the surface roughness of the waveguides and to the bends in the curved optical paths, have been measured to be 6 dB/cm [17]. These result in an attenuation of 2.1 dB, 4.3 dB, and 6.4 dB for \(k=2,\ldots,4\), respectively. The NN device's insertion losses have been estimated to be 8.2 dB at 1550 nm. The chip is placed on a Proportional-Integral-Derivative (PID) controlled Peltier cell that keeps its temperature at 21\({}^{\circ}\)C.
The experimental setup is represented in Fig. 1. In the transmission stage, a tunable laser source (TLS) operating at 1550 nm is modulated as a NRZ 10 Gbps Pseudo-Random binary sequence (PRBS) of order 10 and period 2\({}^{10}\) bits. A 50:50 Fiber Optic Splitter sends half of the signal to a fast photodiode (RX1). The other half is coupled to an optical fiber span, where distortions induced by CD are accumulated. The length of the span goes from a minimum of 0 km to a maximum of 125 km, with a granularity of 25 km. The distorted signal enters the NN device for optical processing. DC current controllers set the currents in the heaters. The output signal from the NN device is coupled to a fast photodiode (RX2) at the receiver stage. Both RX1 and RX2, which monitor \(y_{in}(t)\) and \(y_{out}(t)\), respectively, are connected to a 40 GSa/s oscilloscope with a 16 GHz bandwidth. The Signal-To-Noise ratio (SNR) at the receiver RX2 can be varied by using a Variable Optical Attenuator (VOA2) inserted after the NN device (see Appendix A).
For each measure, the DC controller sends pre-set currents to the NN device and a triggering signal to the oscilloscope. The acquisition is delayed by 1 ms from the arrival of the triggering signal to let the optical signal stabilize at the output of the NN device, according to the thermal relaxation time of the heaters (a few tens of \(\mu\)s). The observation window of the oscilloscope is 1 \(\mu\)s wide, allowing the observation of at least 9 periods of the PRBS at each acquisition. Four samples per bit are available because of the 40 GSa/s sampling rate. In what follows, the samples in each bit are labeled from 1 to 4, the 4\({}^{\text{th}}\) being the most recent. Acquired sequences are then under-sampled, obtaining a sub-sequence constituted by the \(n\)-th sample in each bit of the full trace, the chosen sample being the most representative of the actual bit value (typically the closest to the center of the bit). An operation performed over the under-sampled sequence at the \(n\)-th
Figure 1: Experimental setup. The full link consists of a transmission stage, the optical fiber, the neural network (NN) device and the receiver stage. Two fast photodetectors (RX1 and RX2) allow for measuring the input and the transmitted signals. The inset shows the actual design of the NN device, where one can observe the cascaded 1x4 and 4x1 splitter and combiner, the three spirals, and the four phase shifters (small blue rectangles) connected to the external DC current controller. Details are given in Appendix A or in [17].
sample in each bit is referred to as an operation performed over the \(n\)-th sample.
Input (from RX1) and output (from RX2) signals are aligned by exploiting their cross-correlation, obtaining \(y_{in}\) and \(y_{out}\). The under-sampling at the \(n\)-th sample of the two sequences provides \(\overline{y}_{in}\) and \(\overline{y}_{out}\), respectively. The output signal \(\overline{y}_{out}\) is compared with \(\overline{y}_{in}\) to label the 1 level (\(\overline{y}_{out,H}\)) or 0 level (\(\overline{y}_{out,L}\)).
The NN training procedure is performed off-chip using fully automatized software. An analog loss function \(\mathcal{L}\) is created to obtain the largest possible separation between the distributions of signal levels expected as 1s or 0s in the output signal. This quickly minimizes the associated bit error rate (BER), since the BER is directly linked to the overlap between these two distributions. Indeed, in the presence of random Gaussian noise characterized by standard deviations \(\sigma_{0}\) and \(\sigma_{1}\) affecting the 0s and 1s in the bit sequence, the BER can be computed as [1]
\[\text{BER}=\frac{1}{4}\left[\text{erfc}\left(\frac{I_{1}-I_{D}}{\sigma_{1} \sqrt{2}}\right)+\text{erfc}\left(\frac{I_{D}-I_{0}}{\sigma_{0}\sqrt{2}}\right) \right]. \tag{2}\]
Here \(I_{(0,1)}=\langle\overline{y}_{out,(L,H)}\rangle\) are the average levels for 1s and 0s, \(\sigma_{(0,1)}\) are their standard deviations, \(I_{D}\) is the decision threshold, and erfc is the complementary error function. A BER reduction can thus be obtained by maximizing \(I_{1}-I_{0}\). \(\mathcal{L}\) measures the spacing between the tails of the distributions related to \(\overline{y}_{out,H}\) and \(\overline{y}_{out,L}\). The training's goal is the maximization of this spacing, therefore we call it the separation loss function. Having \(\overline{y}_{out,(L,H)}^{i}\) the measured signal values in a sequence, the separation loss function is expressed as
\[\mathcal{L}=E[0]-E[1]=\frac{1}{N_{L}}\left[\sum_{i=1}^{N_{L}}\overline{y}_{ out,L}^{i}\right]-\frac{1}{N_{H}}\left[\sum_{i=1}^{N_{H}}\overline{y}_{out,H}^{i} \right], \tag{3}\]
\(E[0]\) and \(E[1]\) are estimates of the tail position in the two distributions. In \(E[0]\), \(i\) runs over the samples such that \(\overline{y}_{out,L}^{i}>I_{0}+1.28\sigma_{0}\), namely \(\overline{y}_{out,L}^{i}\) is part of the group of the rightmost \(N_{L}\) points corresponding to the 10% of the population of the \(\overline{y}_{out,L}\) distribution. Similarly, \(\overline{y}_{out,H}^{i}<I_{1}-1.28\sigma_{1}\) is part of the group of the leftmost \(N_{H}\) points corresponding to the 10% of the population of \(\overline{y}_{out,H}\) distribution. The PSO is adopted for training [18], which is performed in a condition of no attenuation in front of RX2, i.e. an average optical power at RX2 of about 0 dBm and a SNR of 11.2 dB.
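A sketch of Eq. (3) as it could be evaluated on measured samples; selecting the extreme 10% of each distribution by sorting is our stand-in for the \(1.28\sigma\) thresholds quoted above.

```python
# Sketch of the separation loss of Eq. (3); selecting the extreme 10% of
# each level distribution by sorting stands in for the 1.28σ thresholds.
import numpy as np

def separation_loss(y_low, y_high):
    """y_low / y_high: under-sampled outputs expected as 0s / 1s."""
    n_l = max(1, int(0.1 * y_low.size))
    n_h = max(1, int(0.1 * y_high.size))
    e0 = np.sort(y_low)[-n_l:].mean()   # rightmost tail of the 0s, E[0]
    e1 = np.sort(y_high)[:n_h].mean()   # leftmost tail of the 1s, E[1]
    return e0 - e1  # most negative when the two levels are well separated
```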
In light of the differentiability of \(\mathcal{L}\) with respect to the currents controlling the induced phase shifts in the device, other choices for the training algorithm are possible, including a Back-Propagation (BP) technique. During the experimental phase, we performed some tests using an adapted version of the Adam algorithm [19], which is a gradient-based alternative in which the descent proceeds with a memory of the previous iterations. Such a weighted adaptation of the gradient is well known to make the trajectory towards the local minimum more robust in the presence of noise, and is often preferred over the standard BP algorithm. The algorithm proved to be more time-efficient but possibly limited by premature termination of the search at a local minimum. Therefore, here we chose to rely on the PSO, which guaranteed the robustness and repeatability of the final outcomes.
After the training phase, the testing phase is performed via a scan over the power at the receiver (PRX) made by varying the attenuation of VOA2 in front of RX2, which corresponds to a scan over the SNR at the RX2. For each PRX value, 50 acquisitions for a total of \(5\times 10^{5}\) bits with the trained currents set are performed, evaluating the BER for each measure. The BER is defined here as the cumulative error between the digitized input and output signals. The digitized signals are obtained by applying a threshold to \(\overline{y}_{in}\) and \(\overline{y}_{out}\). At each evaluation, the optimal sample for the generation of \(\overline{y}_{out}\) and the optimal threshold which minimizes the BER are selected. The threshold is chosen among 10 possible equally spaced levels between the minimum and
maximum of the signal. Training and testing procedures are performed for multiple lengths of the fiber span and then compared with the corresponding reference curves obtained without the NN device.
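A compact sketch of this testing-phase BER evaluation: digitize the output with each of 10 equally spaced thresholds and keep the best one (the scan over the four bit samples is omitted for brevity; names are ours).

```python
# Sketch of the testing-phase BER evaluation: try 10 equally spaced
# thresholds and keep the one that minimizes the cumulative bit error.
import numpy as np

def best_ber(bits_in, y_out):
    """bits_in: reference digitized input bits (bool array);
    y_out: under-sampled output trace, one value per bit."""
    thresholds = np.linspace(y_out.min(), y_out.max(), 10)
    return min(np.mean((y_out > th) != bits_in) for th in thresholds)
```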
The full optical link (from the transmission to the receiver stages) is simulated to model the effect of the NN device (see Appendix B for details). Also in the simulation, the NN's training is performed by optimizing the separation loss function with the PSO. Noise is added as described in the Appendix C. The sampling of the oscilloscope is modeled as well. BER is computed as in the experimental case.
## 3 Results
The equalization effect of the NN device for a span of 125 km is summarized in Fig. 2. The eye diagrams in panels (a-c) show the three aperture conditions reached after the modulation at the transmitter (a), after the fiber propagation (b) and after the equalization performed by the NN device (c), respectively. CD generates a closure of the eye diagram, as a consequence of the intersymbol interference. Particularly evident in Fig. 2(b) is a high-density region between normalized amplitude values of 0.3 and 0.4, crossed by the red-dashed line, which represents a rise of the zero-level induced by the interference of a low bit with neighboring bits in the high state. The action of the NN device partially restores the aperture (panel (c)), eliminating the intermediate level seen in panel (b). The same scenario is presented in panels (d-f), where the histograms report the distributions of the optical power levels expected as 0s or 1s associated with the 2\({}^{\rm nd}\) sample in the bit, in the input (d), non-corrected (e) and corrected (f) output signals, respectively. An example of their time evolution is reported in Fig. 2(g) with normalized amplitudes. Data are collected with an SNR = 11.2 dB at RX2. In this regime, the evaluation of the BER is not limited by the SNR, but by the fiber dispersion that generates intersymbol interference. The distorted output in Fig. 2(g) clearly shows the presence of pulse broadening and the consequent generation of intersymbol interference. Bits expected as 0s preceded or followed by a 1 are raised close to the 1s, thus increasing the probability of errors. As a consequence, the distributions of power levels for 0s and 1s widen and the gap between the distributions reduces, as shown in Fig. 2(e). This leads to an increased BER. As is clear from Fig. 2(f), the corrective action of the NN device partially restores the two distorted distributions of Fig. 2(e), thus decreasing the BER.
The training returns a set of 3 optimal currents, associated with the channels in the NN device. One can then model the relative recombination phase shift \(\phi_{k}\) (with \(k=2,\ldots,4\)) used for the weight \(w_{k}\) in the \(k\)-th channel with respect to the first channel (chosen as reference) as
\[\phi_{k}=\phi_{k}^{0}+i_{k}^{2}\gamma_{k}, \tag{4}\]
where \(\phi_{k}^{0}\) is the relative phase measured at zero current, \(i_{k}\) is the optimal current in the \(k\)-th channel and \(\gamma_{k}\) is the conversion factor between the dissipated thermal power in a resistor and the induced phase shift in the waveguide underneath. Measurements conducted on test resistor structures yield \(\gamma_{k}\approx 0.01\) rad/mA\({}^{2}\)[20], while finding \(\phi_{k}^{0}\) is cumbersome due to the uncertainties in the optical path lengths and widths caused by the finite fabrication resolution. Therefore, for the sake of clarity, we show in Fig. 3(a) the currents used for the trained NN at different fiber lengths, and in Fig. 3(b) the corresponding phase shifts obtained from the simulation. In panel (b) it appears that longer optical links require an increase of the phase shift to about \(2\pi\) in each channel, meaning that the delayed copies constructively contribute to the output [17]. Thus, the NN device gives more weight to the contributions of the longer delay lines, since a larger pulse broadening is to be compensated.
Figures 4 (a,b,c) report the simulated and experimental BER versus PRX profiles obtained for different fiber spans. The Back-to-Back (BTB) configuration (namely with no NN device and no fiber, black curve) measures the TX/RX performance. For low PRX values (namely low SNR), the BER is dominated by noise, which is present in the output signal at the receiver regardless of the length of the fiber. Thus, all the profiles overlap in this region. On the contrary, for higher PRX the SNR increases too, and the dominant contribution to the BER is provided by distortions in the signal induced by the cumulated chromatic dispersion. These distortions become more important for longer fibers, causing a worsening of the BER even at high PRX. In fact, the dispersion length for this system is \(L_{D}=T_{0}^{2}/|\beta_{2}|=2\pi c_{0}T_{0}^{2}/(\lambda^{2}|D|)=450\) km, using \(D=17.2\) ps/nm/km and \(T_{0}=100\) ps. The effects of the corrections operated by the NN device are evident for long fiber lengths (\(\geq 100\) km), when the amount of distortion to be compensated is significant. The NN device almost recovers the BER versus PRX curves to the reference optimal case (BTB).
The gain brought in by the action of the NN device can be quantified starting from the PRX values corresponding to the same BER in the experimental curves. The reference BER value is taken to be \(2\times 10^{-3}\), this being a typical BER threshold for Forward Error Correction (pre-FEC threshold) [21]. The corresponding PRX values are interpolated for each BER versus PRX profile obtained with and without the NN device, producing respectively \(PRX(w)\) and \(PRX(w/o)\). Figure 4(d) reports the corresponding experimental and simulated overall gain, obtained as \(PRX(w/o)-PRX(w)\) minus the Excess Loss (EL) introduced by the trained NN device. Note that the EL depends on the actual weight configuration, since the output signal results from the interference of the weighted and delayed copies of the input [17]. We use the best-case scenario and neglect in the EL calculation the 8.2 dB contribution of the grating losses. Note also that the values of \(PRX(w/o)\) for the 100 km and 125 km fiber spans are extrapolated from the corresponding BER versus PRX curves, since no data at the pre-FEC threshold are available. The horizontal line at the null value highlights the point where the gain generated by the NN device compensates for its excess loss. This happens for fiber lengths above 100 km at the used bit rate. The NN device has to be considered as underperforming, since the 6 dB/cm spiral propagation losses are unusually higher than the expected nominal value of 2 dB/cm for IMEC processing [22]. Improvements in the fabrication could further increase the performance of this already working NN device.
Figure 3: Training outcomes for signal equalization as a function of the fiber link length. Optimized experimental currents (a) and simulated optimized relative phase shifts (b) in channels 2 (blue dots), 3 (green stars), and 4 (red triangles) after the training. Error bars in (a) (barely visible) derive from the instrument output precision, not from statistics. The phase shifts in (b) are measured in each channel with respect to the 1st channel (no spiral). PSO is chosen as the training algorithm in both cases.
Figure 4: (a-c) Experimental (full lines) and simulated (dashed lines) BER versus PRX curves: black discs refer to the back-to-back (BTB) configuration (the transmission stage is directly interfaced to the receiver), the red stars to the transmission by a fiber link, the blue triangles to the transmission by a fiber link with the NN device. Dashed horizontal black lines refer to the pre-FEC threshold value. Error bars are calculated as the standard deviation over multiple acquisitions (see Appendix A). Error-free points are replaced by \(1\times 10^{-7}\) due to the finite dimension of the data set. The used fiber link is 75 km long (a), 100 km long (b) or 125 km long (c). (d) Overall gain provided by the NN device as a function of the fiber link length (blue discs experiment, orange triangles simulation). Gain is given as the improvement of the PRX at BER \(=2\times 10^{-3}\) when the NN device is used with respect to results without the NN device. The dashed line marks the threshold above which the gain guaranteed by the equalization is greater than the NN device excess loss of about 8.5 dB.
## 4 Conclusions
The model of the NN device allows accessing working conditions that are not explorable with the present integrated version of the NN device. Indeed, its versatility is limited by the fixed delay lines which are set for a 10 Gbps data rate. On the contrary, the simulations allow adopting a higher modulation frequency by adapting the delay lines to different bit rates. We can thus compare the performance of our NN device with the results obtained in [10] and [12] which can be considered as a benchmark for the current state of the art for short reach (up to 25 km) access link applications. The first approach [10] is based on the reservoir computing paradigm where a photonic integrated circuit composed of delay lines and beam splitters arranged in a swirl topology forms the reservoir. The second approach [12] is based on the spectral decomposition technique where the spectral content of the optical carrier is divided into slices and analyzed following an all-optical/hybrid approach.
The comparison starts by tuning the parameters of our simulation in order to reproduce the BER versus fiber length profile of [10] at 40 Gbps. In particular, the SNR has been fixed to 12 dB and each BER value is obtained as an average over \(1.024\times 10^{6}\) transmitted bits. The parameters are kept unaltered for the other runs too, including the training and subsequent testing phase for the NN device. The NN has been modeled with delay lines introducing a shift of half (12.5 ps) or three-quarters (18.75 ps) of a bit. To compare our NN device with the ones of [10] and [12] we used the same representative performances as in these works. First, the BER as a function of the link length for the NN device at 40 Gbps is reported in Fig. 5 (a). A clear BER improvement is observed where the equalization provided by the trained NN device ensures an extension of the link reach up to almost 20 km when a delay of 18.75 ps is used. A comparison with the results in [10] shows that the present NN device provides better BER performances up to 20 km fiber length. Despite its simplicity, the NN device outperforms the swirl-based reservoir without the need for electrical data post-processing [10].
Figure 5: (a) BER versus link length with SNR=12 dB at the receiver for the link without the NN device (dashed blue line and triangles), with the NN device and a delay granularity of 12.5 ps (full purple line and stars), and with the NN device and a delay granularity of 18.75 ps (full orange line and circles). (b) SNR penalty at a BER of \(2.26\times 10^{-4}\) as a function of the link length without the NN device (dashed blue line and triangles), with the NN device and a delay granularity of 12.5 ps (full purple line and stars), and with the NN device and a delay granularity of 18.75 ps (full orange line and circles). Penalty is calculated from the back-to-back performance. Curves are interrupted at the last fiber length value for which it was possible to interpolate the chosen BER threshold in the corresponding BER versus PRX profile.
In [12], the SNR penalty is used as a figure of merit. This is defined as the increase in the SNR needed to achieve the same BER as that of the BTB configuration and calculated at the pre-FEC threshold of \(2.26\times 10^{-4}\). The SNR penalty of our trained NN device for an NRZ 40 Gbps data rate as a function of the fiber link length is shown in Fig. 5(b). When the NN device is used with a delay line granularity of 18.75 ps, the SNR penalty stays below 1 dB up to a link length of 18 km. Compared to the performances of the devices discussed in [12], the present NN device is doing better than the 1-stage and 2-stage fully optical devices but worse than the 4-stage fully optical devices, which has however a significantly larger complexity (it requires 30 Mach-Zehnder interferometers) than our NN design.
The performances of our device validate its use for signal equalization, suggesting further studies for the optimization of the layout for in-line applications. We foresee next-generation devices equipped with an augmented number of channels and amplitude modulators in each tap to allow for much larger adaptability to the different transmission scenarios (bitrate, modulation format...). A transceiver-packaged version of these optimized devices would provide significant advantages even at high modulation frequencies (up to 100 Gbps) at metro propagation distances (up to 100 km). These in-line transceivers relieve the computational efforts of complex DSPs both in coherent and IMDD systems, in addition to a latency reduction. Most important for short-reach applications is a significant reduction in power consumption, which for the present NN accounts for 70 mW, to be compared with the typical \(>1\) W for DSP (Table 19.1 in [23]). Thus, simplified DSPs (e.g. less power-hungry) will be required to achieve the same BER over longer distances without reducing the carrier frequency.
In summary, we demonstrated a simple concept of a feed-forward neural network device that is able to correct linear signal distortion both on a metro network (10 Gbps, 100 km) and on a high-speed short-reach access link (40 Gbps, 20 km). For different applications which have different data rates, proper tuning of the nodes' delays is needed.
Figure 6: Experimental setup. The different symbols are self-explanatory and are discussed in the text. The inset shows the design of the NN device.
## Appendix A Experimental setup
Figure 6 presents the experimental setup. The tunable laser source (TLS) consists of an InGaAs-based semiconductor laser that can be thermally tuned around 1550 nm. The source is modulated by a Nested Mach-Zehnder interferometer (NMZI) with 30 GHz of electro-optical bandwidth, driven by an Arbitrary Waveform Generator (AWG) with 30 GHz of electrical bandwidth and a sampling rate of up to 64 GSa/s. After the modulation stage, a Fiber Optic Coupler with 50% coupling ratio sends half of the optical signal to a fast photodiode (RX1, 20 GHz bandwidth) which detects the input signal. A polarization controller allows tuning the local compression and torque applied to the fiber itself, inducing a polarization change.
An Erbium-Doped Fiber Amplifier (EDFA1) amplifies the optical signal to 20 dBm and then a Variable Optical Attenuator (VOA1) controls the effective power level launched into the fiber link made of an SM G.652D fiber with a nominal loss coefficient of 0.2 dB/km. The fiber link length is varied during the experiments from a minimum of 0 km to a maximum of 125 km with a granularity of 25 km. A Semiconductor Optical Amplifier (SOA) with a small signal gain of 13.4 dB is inserted at the end of the fiber link to partially recover the fiber link attenuation. The amplified optical signal is then sent to a switch that allows addressing the optical signal respectively to an Optical Spectrum Analyzer (OSA) or to the input grating of the NN device. Here the optical signal is processed by the NN and, via the output grating coupler, is coupled to the output fiber. Currents sent to the NN device are provided by a terminal-controlled DC current generator.
The output fiber is connected to a Fiber Optic Coupler with a \(99.9:0.1\) coupling ratio to address 0.1% of the optical signal to a Power Monitor (PM1). The other 99.9% is sent to a second EDFA (EDFA2) (small-signal gain of 30 dB) followed by a second VOA (VOA2). The combined action of these last two elements regulates the amount of optical power detected to a level below the damage threshold of the fast photodiode RX2. Then, a tunable optical filter with 30 GHz bandwidth and 5 dB of insertion loss cleans up the signal from the out-of-band amplified spontaneous emission noise added by the amplification stages. Another Fiber Optic coupler with \(99.9:0.1\) coupling splits the signal towards a second Power Monitor (PM2) and to another fast photodiode (RX2, 20 GHz bandwidth) which measures the output signal. Both RX1 and RX2 fast photodiodes are connected to a 40 GSa/s oscilloscope (OSC) with a 16 GHz bandwidth. The bandwidth limit of the transmission line is thus fixed by the oscilloscope, having the narrowest bandwidth in the line.
For each measure, the DC current generator sends pre-set currents to the NN device and a triggering signal to the oscilloscope. The acquisition is delayed by 1 ms from the arrival of the triggering signal to stabilize the NN device response, according to the thermal relaxation time of the heaters. The observation window of the oscilloscope is 1 \(\mu\)s wide, allowing the observation of at least 9 periods of the PRBS at each acquisition. Four samples per bit are available because of the 40 GSa/s sampling rate. Each point in the BER versus PRX profiles (such as those in Fig. 4) is obtained as an average over \(N=100\) measurements, yielding a minimum non-null measurable value of 1/(\(N\times\) number of bits in the sequence). Error bars are obtained as the standard deviation of the measured BER values for that point.
## Appendix B Modeling of the experiment
The main elements of the optical link have been modeled, simulating the modulation apparatus, the structure of the NN device, the propagation of the optical PRBS sequence in the fiber link and the noise contributions at the receiver. In the following, the reported numerical values for the parameters refer to the simulation performed at 10 Gbps, with the values in parentheses referring to the 40 Gbps case.
The model includes a Mach-Zehnder modulator driven by an electrical signal to imprint on the optical carrier a PRBS sequence of order 10 and period \(2^{10}\) bits with an analog bandwidth of 30 GHz, an extinction ratio of 13.9 dB and a null chirp. The sampling frequency of the electrical signal is fixed to 320 GSa/s, in order to preserve the information over a sufficiently large frequency range. The same sampling frequency is thus maintained also in the resulting modulated optical signal propagating across the simulated setup until the final detection process and simulated oscilloscope, which reduces the sampling frequency to 40 (160) GSa/s.
The evolution of the signal is simulated by solving the linear Schrödinger equation in the Fourier domain. This can be derived following the approach proposed in [1], which reduces to
\[\tilde{A}(z,\omega)=\exp\left[iz\frac{\beta_{2}}{2}\omega^{2}-\frac{\alpha}{2} z\right]\tilde{A}(0,\omega) \tag{5}\]
where \(\tilde{A}(z,\omega)\) is the Fourier Transform of the temporal optical field envelope, \(z\) is the propagation distance, \(\beta_{2}\) is the group-velocity dispersion (GVD) parameter and \(\alpha\) stands for the fiber losses. The result of this operation is the propagated temporal optical field envelope \(A(z,t)\), obtained by applying the inverse Fourier transform to \(\tilde{A}(z,\omega)\).
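A minimal implementation of Eq. (5) with FFTs, under the usual numpy convention that `fftfreq` returns \(\omega/2\pi\); parameter names are ours.

```python
# Minimal implementation of Eq. (5): linear fiber propagation in the
# Fourier domain (fftfreq returns omega/2π, hence the 2π factor).
import numpy as np

def propagate(A0, dt, z, beta2, alpha):
    """A0: field envelope samples; dt: sample spacing [s]; z: distance [m];
    beta2: GVD parameter [s^2/m]; alpha: loss coefficient [1/m]."""
    omega = 2 * np.pi * np.fft.fftfreq(A0.size, d=dt)
    H = np.exp(1j * z * beta2 / 2 * omega**2 - alpha / 2 * z)
    return np.fft.ifft(H * np.fft.fft(A0))
```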
This complex optical field signal is then provided as input to a model of the NN device. The model simulates the action of 4 delay lines, each of them associated with a fixed attenuation value measured for the actual spiral length in the NN device. A tunable phase shift is applied to each channel before the output combiner, to simulate the action of the heater-actuated phase shifters. Note that Eq. 5 was not used to simulate the signal propagation inside the NN device, since the dispersion length \(L_{D}\approx 2\) km [1] associated with the spirals is much longer than the length of the spirals themselves ( \(\sim 1\) cm). Effects deriving from chromatic dispersion can thus be neglected inside the NN device. Therefore, the relative delay between the 4 channels has been emulated by inserting a shift of the proper amount of samples between the 4 sequences.
After the combiner, the complex optical field signal is converted into the detected optical power (output signal) through the modulus square operation and, then, treated to account for the noise measured experimentally at the receiver (noise modeling is discussed in the next section). A band-pass filter with a bandwidth of 16 GHz (28 GHz) obtained with a 5\({}^{\rm th}\)-grade Bessel polynomial is then applied to the detected output signal, simulating the electronic bandwidth of the oscilloscope. An 8-bit vertical sampling with 100 mV full-scale is then applied to the output signal, together with a 40 GSa/s (160 GSa/s) horizontal sampling. For each simulated acquisition, the position of the first sample in the first bit is randomly chosen in the first quarter of the duration of the bit itself. Indeed, in the experimental setup, the triggering signal for the oscilloscope comes from the DC generator which controls the phase shifters too. The oscilloscope is then asynchronous with the AWG and, therefore, the position of the first sample in the sequence is different in each acquisition. Depending on where the first sample falls in each bit, the contrast level in the acquired curve changes, possibly leading to a different BER result.
The sampled output signal is then compared with the input signal following the same procedure described in Section 2 for the real experiment. The only difference regards the training phase, during which the loss function is always evaluated at the 3\({}^{\rm rd}\) sample, this being close to the center of the bit and far from the transients. Different fiber length scenarios are simulated using the PSO training algorithm.
After each run, the BER versus PRX curves are calculated. Each BER value appearing in the profiles is obtained as an average over \(N=1000\) measurements, corresponding to a minimum non-null measurable value of 1/(\(N\times\) Number of bits in the sequence).
Noise modeling
In the experimental setup, the optical amplifiers (EDFAs and SOA) act as noise sources, but the presence of the 30 GHz band-pass optical filter reduces their impact in deteriorating the SNR at the receiver. In the studied configuration, their contribution is negligible with respect to that introduced by the fast photodiode (RX2, receiver). The fluctuations in its response to the input optical power can be modeled as follows [1]
\[\sigma^{2}=\langle(\Delta I)^{2}\rangle=\sigma_{s}^{2}+\sigma_{T}^{2}=2q(I_{p}+ I_{d})\Delta f+(4k_{B}T/R_{L})F_{n}\Delta f.\]
The first term accounts for the contribution coming from shot noise, where \(q\) is the electron charge, \(I_{p}\) the average current, \(I_{d}\) the dark current and \(\Delta f\) the effective noise bandwidth of the detector. The second term describes fluctuations induced by thermal noise, where \(k_{B}\) is the Boltzmann constant, \(T\) the temperature, \(R_{L}\) the load resistor of the detector and \(F_{n}\) the noise figure of its internal amplifier. For the current experimental setup, the previous equation becomes \(\sigma^{2}=\langle(\Delta V)^{2}\rangle=mV_{meas}+q\), where \(V_{meas}\) is the measured voltage at the oscilloscope, \(m\) accounts for the proportional term due to shot noise and \(q\) includes the noise contributions deriving from thermal noise and the shot noise associated with the dark current. A characterization of the setup provided us with \(m=0.0189\) mV and \(q=0.2263\) mV\({}^{2}\).
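The fitted model can be applied per sample as in the sketch below, with \(m\) and \(q\) as characterized above; the clipping against negative variances at near-zero voltages is our own safeguard, not part of the fitted model.

```python
# Receiver-noise sketch following sigma^2 = m*V_meas + q (V in mV);
# the clipping is our safeguard, not part of the fitted model.
import numpy as np

def add_receiver_noise(v_mv, m=0.0189, q=0.2263, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    sigma = np.sqrt(np.clip(m * v_mv + q, 0.0, None))  # per-sample std [mV]
    return v_mv + rng.normal(0.0, sigma)
```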
Funding.European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 788793, BACKUP and No 963463, ALPI).
Acknowledgments.We acknowledge a fruitful discussion with Stefano Biasi. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 788793, BACKUP and No 963463, ALPI).
Disclosures.M.M., P.B. and L.P. have filed a patent on the technology here described.
Data Availability Statement.The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2301.03755 | A real neural network state for quantum chemistry | The restricted Boltzmann machine (RBM) has been successfully applied to solve
the many-electron Schr$\ddot{\text{o}}$dinger equation. In this work we propose
a single-layer fully connected neural network adapted from RBM and apply it to
study ab initio quantum chemistry problems. Our contribution is two-fold: 1)
our neural network only uses real numbers to represent the real electronic wave
function, while we obtain comparable precision to RBM for various prototypical
molecules; 2) we show that the knowledge of the Hartree-Fock reference state
can be used to systematically accelerate the convergence of the variational
Monte Carlo algorithm as well as to increase the precision of the final energy. | Yangjun Wu, Xiansong Xu, Dario Poletti, Yi Fan, Chu Guo, Honghui Shang | 2023-01-10T02:21:40Z | http://arxiv.org/abs/2301.03755v1 | # A real neural network state for quantum chemistry
###### Abstract
The restricted Boltzmann machine (RBM) has been successfully applied to solve the many-electron Schrodinger equation. In this work we propose a single-layer fully connected neural network adapted from RBM and apply it to study ab initio quantum chemistry problems. Our contribution is two-fold: 1) our neural network only uses real numbers to represent the real electronic wave function, while we obtain comparable precision to RBM for various prototypical molecules; 2) we show that the knowledge of the Hartree-Fock reference state can be used to systematically accelerate the convergence of the variational Monte Carlo algorithm as well as to increase the precision of the final energy.
## I Introduction
Ab initio electronic structure calculations based on quantum-chemical approaches (Hartree-Fock theory and post-Hartree-Fock methods) have been successfully applied to molecular systems [1]. For strongly correlated many-electron systems, the exponentially growing Hilbert space limits the application scale of most numerical algorithms. For example, full configuration interaction (FCI), which takes the whole Hilbert space into account, is currently limited to around \(24\) orbitals and \(24\) electrons [2]. The density matrix renormalization group (DMRG) algorithm [3; 4] has been used to solve larger chemical systems of several tens of electrons [5; 6]; however, it is essentially limited by the expressive power of its underlying variational ansatz, the matrix product state (MPS), which is a special instance of the one-dimensional tensor network state [7], so DMRG can be extremely difficult to extend to even larger systems. The coupled cluster (CC) [8; 9] method expresses the exact wave function in terms of an exponential form of a variational wave function ansatz, and higher levels of accuracy can be obtained by considering electronic excitations up to doublets in CCSD or triplets in CCSD(T). In practice it is often accurate at a manageable computational cost, and is thus considered the "gold standard" of electronic structure calculations. However, the accuracy of the CC method is restricted to weakly correlated systems [10]. The multi-configuration self-consistent field (MCSCF) [11; 12; 13] method is crucial for describing molecular systems containing nearly degenerate orbitals. It introduces a small number of (active) orbitals, and then the configuration interaction coefficients and the orbital coefficients are optimized to minimize the total energy of the MCSCF state. It has been applied to systems with around 50 active orbitals [14], but it is still limited by the exponential complexity that grows with the system size.
In recent years the variational Monte Carlo (VMC) method, in combination with a neural network ansatz for the underlying quantum state (wave function) [15], referred to as neural network quantum states (NNQS), has been demonstrated to be a scalable and accurate tool for many-spin systems [16; 17; 18] and many-fermion systems [19]. NNQS allow very flexible choices of the neural network ansatz, and with an appropriate variational ansatz they can often achieve comparable or higher accuracy than existing methods. NNQS have also been applied to solve ab initio quantum chemistry systems in real space with up to \(30\) electrons [20; 21; 22], as well as in a discrete basis after second quantization [23; 24; 25]. Various neural networks have been used to date, such as the restricted Boltzmann machine (RBM) [15], convolutional neural networks [16], recurrent neural networks [26] and variational auto-encoders [25]. Among these neural networks, the RBM is a very special instance in that: 1) it has a very simple structure, containing only a fully connected dense layer plus a nonlinear activation; 2) despite this simple structure, the RBM can be more expressive than the MPS [27]; in fact it is equivalent to certain two-dimensional tensor network states [28] and can even represent certain quantum states with volume-law entanglement [29]. In practice the RBM achieves accuracy comparable to more sophisticated neural networks for complicated applications such as frustrated many-spin systems [30; 31].
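For concreteness, the following is a minimal NumPy sketch (not taken from this paper) of the standard real-valued RBM amplitude \(\psi(s)=e^{a\cdot s}\prod_{j}2\cosh(b_{j}+W_{j}\cdot s)\); note that every factor is strictly positive, which anticipates the sign limitation discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 8, 16  # illustrative sizes, not from the paper

# Real RBM parameters: visible bias a, hidden bias b, weights W.
a = rng.normal(scale=0.1, size=n_visible)
b = rng.normal(scale=0.1, size=n_hidden)
W = rng.normal(scale=0.1, size=(n_hidden, n_visible))

def rbm_amplitude(s: np.ndarray) -> float:
    """Unnormalized wave-function amplitude of a real RBM for a configuration
    s in {-1, +1}^n (or {0, 1}^n). Every factor here is strictly positive."""
    theta = b + W @ s
    return float(np.exp(a @ s) * np.prod(2.0 * np.cosh(theta)))

s = rng.choice([-1.0, 1.0], size=n_visible)
print(rbm_amplitude(s))  # always > 0: exp(.) > 0 and cosh(.) > 0
```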
For the ground state of molecular systems, the wave function is real. However, if one uses a real RBM as the variational ansatz for the wave function, then all the amplitudes of the wave function will be positive, which means that it may be good for ferromagnetic states but will be completely wrong for anti-ferromagnetic states. Therefore even for real wave functions one would have to use complex RBMs or two RBMs [32] in general. In this work we propose a neural network with real numbers which is slightly modified from the RBM such that its output can be both positive and negative, and use it as the neural network ansatz to solve quantum chemistry problems. To accelerate convergence of the VMC iterations, we explicitly use the Hartree-Fock reference state as the starting point for the Monte Carlo sampling af |
2302.03205 | An entity-guided text summarization framework with relational
heterogeneous graph neural network | Two crucial issues for text summarization to generate faithful summaries are
to make use of knowledge beyond text and to make use of cross-sentence
relations in text. Intuitive ways for the two issues are Knowledge Graph (KG)
and Graph Neural Network (GNN) respectively. Entities are semantic units in
text and in KG. This paper focuses on both issues by leveraging entities
mentioned in text to connect GNN and KG for summarization. Firstly, entities
are leveraged to construct a sentence-entity graph with weighted multi-type
edges to model sentence relations, and a relational heterogeneous GNN for
summarization is proposed to calculate node encodings. Secondly, entities are
leveraged to link the graph to KG to collect knowledge. Thirdly, entities guide
a two-step summarization framework defining a multi-task selector to select
salient sentences and entities, and using an entity-focused abstractor to
compress the sentences. GNN is connected with KG by constructing
sentence-entity graphs where entity-entity edges are built based on KG,
initializing entity embeddings on KG, and training entity embeddings using
entity-entity edges. The relational heterogeneous GNN utilizes both edge
weights and edge types in GNN to calculate graphs with weighted multi-type
edges. Experiments show the proposed method outperforms extractive baselines
including the HGNN-based HGNNSum and abstractive baselines including the
entity-driven SENECA on CNN/DM, and outperforms most baselines on NYT50.
Experiments on sub-datasets show the density of sentence-entity edges greatly
influences the performance of the proposed method. The greater the density, the
better the performance. Ablations show effectiveness of the method. | Jingqiang Chen | 2023-02-07T02:27:21Z | http://arxiv.org/abs/2302.03205v1 | # An entity-guided text summarization framework with relational heterogeneous graph neural network
###### Abstract
Two crucial issues for text summarization to generate faithful summaries are to make use of knowledge beyond text and to make use of cross-sentence relations in text. Intuitive ways for the two issues are Knowledge Graph (KG) and Graph Neural Network (GNN) respectively. Entities are semantic units in text and in KG. This paper focuses on both issues by leveraging entities mentioned in text to connect GNN and KG for summarization. Firstly, entities are leveraged to construct a sentence-entity graph with weighted multi-type edges to model sentence relations, and a relational heterogeneous GNN for summarization is proposed to calculate node encodings. Secondly, entities are leveraged to link the graph to KG to collect knowledge. Thirdly, entities guide a two-step summarization framework defining a multi-task selector to select salient sentences and entities, and using an entity-focused abstractor to compress the sentences. GNN is connected with KG by constructing sentence-entity graphs where entity-entity edges are built based on KG, initializing entity embeddings on KG, and training entity embeddings using entity-entity edges. The relational heterogeneous GNN utilizes both edge weights and edge types in GNN to calculate graphs with weighted multi-type edges. Experiments show the proposed method outperforms extractive baselines including the HGNN-based HGNNSum and abstractive baselines including the entity-driven SENECA on CNN/DM, and outperforms most baselines on NYT50. Experiments on sub-datasets show the density of sentence-entity edges greatly influences the performance of the proposed method. The greater the density, the better the performance. Ablations show effectiveness of the method.
Summarization · Graph neural network · Knowledge graph · Entity
## 1 Introduction
Automatic text summarization aims to distill long text into concise summaries to facilitate quick information consumption [1]. Extractive summarization directly extracts salient sentences from source documents as summaries. Abstractive summarization can generate text that does not appear in source
documents. Recent progress in text summarization relies on deep learning techniques [2-6]. These methods mainly follow the encoder-decoder framework, where source texts are encoded by encoders in different forms and sentences are labeled or generated by decoders.
Two crucial issues for summarization to generate faithful summaries are to make use of knowledge beyond text [1] and to make use of cross-sentence relations in text [2]. Fig. 1 shows an example. There are five sentences in Fig. 1(a). The sentences s1 and s5 are relatively distant from each other, yet there are cross-sentence relations between s1 and s5 because they contain the same entity Tamil Tigers. Fig. 1(b) is a part of a Knowledge Graph about the entities recognized in the sentences. The Knowledge Graph is constructed outside the news text and contains knowledge beyond the text. Both the cross-sentence relations shown in Fig. 1(a) and the entity knowledge shown in Fig. 1(b) are important for summarization. A feasible way to utilize entity knowledge and cross-sentence relations is to construct a sentence-entity graph as in Fig. 1(c) and to link the graph to the Knowledge Graph.
For the knowledge issue, knowledge for summarization often lies beyond the text [1, 7, 8], while most existing research focuses on the text itself. Knowledge Graph is an important form of knowledge beyond text. Entities mentioned in text can be linked to entities in KG to make use of knowledge both in and beyond the text. Entities in text carry knowledge such as keywords and structural knowledge, and can be used to improve summarization. For example, entities are used as keywords to generate entity-coherent summaries [3], and entities are used as a bridge to cluster sentences and generate summaries for long Chinese articles [9]. To utilize entities in KG to improve summarization, a preliminary study simply injects entity embeddings trained on KG into the encoder-decoder architecture [10]. Recent work uses entities in KG to improve pre-trained language models [11, 12]. These previous studies utilize entities in text or entities in KG for summarization. It is still a challenge to effectively connect structural knowledge in text with knowledge in KG to improve summarization.

Fig. 1: An example taken from CNN about the Tamil Tigers organization of Sri Lanka
For the cross-sentence relations issue, modeling these relations is crucial to extracting summary-worth sentences, which can be further summarized into abstractive summaries. Early traditional work such as LexRank [13] and TextRank [14] uses inter-sentence cosine similarity to build text graphs. Recent progress is based on GNNs [15], which capture long-distance dependencies by modeling cross-sentence relations as graph structures. Various types of graph structures can be built, and different variations of GNNs are proposed to calculate the graphs. In particular for summarization, graphs constructed from text often contain real-valued weighted multi-type edges. A recent study constructs a sentence-word bipartite graph with a single type of weighted edge and proposes a heterogeneous GNN to calculate node encodings [2]. GNNs are also used to model intra- and inter-sentence relations for dialogue summarization [16]. Other work relies on discourse structures to build summarization graphs [17, 18]. For graphs with unweighted multi-type edges, the relational GNN (R-GNN) [19] is proposed. Edge types and edge weights are important information for GNN-based graph calculations, and entities in text are informative semantic units for graph construction. Current studies on summarization do not make full use of entities, edge types, and edge weights for the construction and calculation of graphs that model cross-sentence relations.
As an attempt to tackle the knowledge issue and the cross-sentence relation issue in combination for summarization, this paper proposes an entity-guided summarization framework that leverages entities mentioned in text to connect KG and GNN, and that makes use of edge weights and edge types in GNN for the calculation of graphs with weighted multi-type edges.
Firstly, entities mentioned in text are used to build and calculate a sentence-entity graph with weighted multi-type edges to model sentence relations for summarization. Three edge types, i.e., sentence-entity edges, entity-entity edges, and sentence-sentence edges, are introduced to link sentences and entities. These edges have different weights. The relational heterogeneous GNN (R-HGNN) for summarization is proposed for calculations on this graph, making use of edge types and edge weights in the propagation process. R-HGNN combines the advantage of the traditional GNN [15] in making use of edge weights and the advantage of the traditional R-GNN [19] in making use of edge types.
Secondly, entities are used to link the sentence-entity graph to the Knowledge Graph, using entity linking techniques [20-22], in order to utilize knowledge outside the text for summarization. It is reasonable to believe that external knowledge can improve summarization. The Knowledge Graph is utilized in the GNN in three ways: building entity-entity edges from the co-occurrence counts of two entities in Wikipedia webpages, initializing entity embeddings from the knowledge base with RDF2Vec [23], and supervising entity embeddings with the entity-entity edges.
Thirdly, entities guide the two-step summarization method. A multi-task objective is defined on the sentence-entity graph network to select salient sentences and entities simultaneously. Sentence selection and entity selection can benefit each other, because salient sentences often contain salient entities. An entity-focused abstractor follows to compress the salient sentences using the salient entities as queries.
The main contributions of this paper are summarized as follows:
* To the authors' knowledge, this paper is the first to leverage entities to connect GNN and Knowledge Graph to model cross-sentence relations in text and knowledge beyond text for summarization. GNN is connected with KG by constructing the sentence-entity graph where entity-entity edges are built based on KG, initializing entity embeddings on KG, and training entity embeddings using entity-entity edges.
* The relational heterogeneous GNN for summarization is proposed to calculate graphs with weighted multi-type edges by making use of both edge weights and edge types in the propagation process.
* The proposed method outperforms all existing baselines on the CNN/DM dataset without pre-trained language models, and outperforms most baselines on the NYT50 dataset. Experiments on sub-datasets show that the density of sentence-entity edges greatly influences the performance of the proposed method: the greater the density of sentence-entity edges, the better the performance. Ablation studies show the effectiveness of the proposed method.
## 2 Related work
**Entities and knowledge graphs for summarization** Entity plays an important role in summarization for selecting sentences [3, 24, 25]. Entities connecting sentences are used to extract coherent sentences [26]. More recently, entities are used to select summary-worth sentences by computing an entity context vector which is compared with the sentence context vectors, and are utilized to generate coherent abstractive summaries by compressing the extracted sentences [3]. And named entities extracted from input documents are used to construct entity-predicate-entity graphs, and then the graph2seq method is employed to generate summaries [27]. Knowledge Graph is an important form of knowledge beyond text
for summarization. Another similar work is Semantic Link Network which can be traced to the work of constructing a network models in 1998 and is recently extended to cyber-physical-social space for better modeling cyber-physical-social systems [28]. Most previous studies make use of KG by training entity embeddings based on KG. For example, entity-level knowledge from knowledge graphs are incorporated into the encoder-decoder architecture for summarization [10] by learning entity embeddings through the TransE method [29] and then injecting the entity embeddings into Transformer [30]. With the development of the pre-trained language models, entity-level knowledge can also be used to improve BERT [31] such as ERNIE [11]. The K-Bert model is proposed to inject domain knowledge into BERT through connecting triplets with entities in a sentence to construct a sentence tree [12]. These entity embeddings trained based on KG reflect the structure of entities and relations in KG. For summarization, the cross-sentence structure in the document is also important. It is a promising research front for summarization to make use of knowledge of entity relations in KG and knowledge of cross-sentence relations in text in combination. This paper proposes to inject knowledge in KG into the sentence-entity graph by building entity-entity edges based on KG, initializing entity embeddings on KG, and training entity embeddings using entity-entity edges.
**GNNs for summarization** Graph neural networks were originally designed for homogeneous graphs with nodes of the same type and with weighted edges [32, 33]. Message propagation algorithms of the traditional GNNs use the weighted edges to propagate messages from neighboring nodes when calculating node encodings [33]. The original GNN does not consider edge types. Therefore, the relational GNN (R-GNN) was proposed to utilize edge types by equipping each edge type with a transformation function [19]. The original R-GNN does not consider edge weights. GNNs are an effective approach to model the structure of documents and to capture long-distance relationships in text for summarization. For example, a GNN is employed for extractive summarization by constructing a sentence graph with sentences encoded by an RNN and edge weights computed by counting discourse relation indicators, and then applying the traditional GNN to calculate the graph [17]. An R-GNN-based summarization approach is proposed to capture long-distance relationships in long text by constructing a graph on sentences, words and entities with the relations NEXT, IN and REF [34]. A GNN is used in a discourse-aware neural summarization model to capture long-range dependencies by constructing a structural discourse graph based on RST trees and coreference mentions encoded with the GNN [18]. Recently, heterogeneous GNNs with different types of nodes and edges have been proposed for summarization [2] and other applications [35-40]. For summarization, the heterogeneous graph neural network (HGNN) is
proposed by constructing a sentence-word bipartite graph with only one type of edge to model cross-sentence relations [2], applying GAT [32] to calculate node encodings. For other applications, heterogeneous GNNs are applied to recommendation [34] or to program reidentification [37]. For large-scale graphs, HinSAGE [41] extends GraphSAGE [38] to heterogeneous networks, with three types of aggregation algorithms, i.e., mean, LSTM, and pooling aggregation. Most existing heterogeneous GNNs are designed for graphs with 0/1 (unweighted) edges. To calculate graphs with weighted multi-type edges, both edge weights and edge types should be exploited in the propagation process. This paper proposes the relational heterogeneous GNN for summarization, which makes use of both edge weights and edge types by combining the advantages of the traditional GNN and the traditional R-GNN.
**Two-step summarization approaches** With the development of deep learning techniques, great progress has been made in extractive and abstractive summarization. Most work focuses on the encoder-decoder model based on RNNs [4, 5, 42] or Transformer [6, 43]. To connect extractive and abstractive summarization, the two-step summarization approach has become popular in recent years [3, 44]. The extraction step selects summary-worth sentences, and the abstraction step generates the abstractive summary from the selected sentences [45-47]. Some work applies reinforcement learning to connect the two steps and to improve ROUGE scores [3, 9, 48]. In [3], entities are leveraged to select sentences for abstraction. In [9], a GNN is used to select sentence clusters for abstraction by first clustering sentences containing the same keywords and then linking the clusters. This paper proposes a multi-task selector to select salient sentences and salient entities, and employs an entity-focused abstractor to compress the sentences.
## 3 The proposed framework
The goal is to create both extractive and abstractive summaries for an input document by making use of knowledge beyond text and cross-sentence relations. The input document is denoted as \(D\) and has \(M\) sentences and \(N\) entities. Each entity may be mentioned many times in the document with different mention forms. For example, the entity Tamil Tigers in the example in Fig. 1 may be mentioned as Tigers, Liberation Tigers of Tamil Eelam, LTTE, The Militant Organization, etc. Therefore, an entity in a document is represented as {EntityName, MentionSet}, where EntityName is the entity name and MentionSet is the set of mentions of the entity in the document. Each sentence is a sequence of words, and each entity mention is also a sequence of words.
To create both extractive and abstractive summaries, the two-step summarization approach is employed: salient sentences and salient entities are first selected, and a generator then creates the abstractive summary. A multi-task objective is defined for salient-sentence selection and salient-entity selection, because salient sentences often contain salient entities. Sentence selection is formulated as a sequence labeling task [5], as is entity selection. A label sequence \(y^{S}_{1}, y^{S}_{2}, \ldots, y^{S}_{M}\) is predicted, where \(y^{S}_{i}=1\) denotes that the \(i^{\text{th}}\) sentence is selected as a summary sentence and \(y^{S}_{i}=0\) denotes that it is not. A label sequence \(y^{E}_{1}, y^{E}_{2}, \ldots, y^{E}_{N}\) is predicted, where \(y^{E}_{j}=1\) denotes that the \(j^{\text{th}}\) entity is selected as a salient entity and \(y^{E}_{j}=0\) denotes that it is not. The ground-truth sentence labels (called ORACLE) are obtained using the greedy approach introduced in [3], and the ground-truth entity labels are obtained by collecting entities in the ground-truth manual abstractive summaries. The abstractive summary generator is a seq2seq network equipped with the attention mechanism [48] and the copy mechanism [49], using the selected entities as queries.

Figure 2: The framework of the proposed entity-guided summarization model
To make use of external knowledge and cross-sentence relations, entities serve as guidance in the proposed framework. Entities guide the construction and calculation of the sentence-entity graph where sentences are linked with each other through entities, and entities mentioned in text are linked to YAGO2 [50] to collect external knowledge for summarization. The relational heterogeneous graph neural network (R-HGNN) is proposed to calculate node encodings of the sentence-entity graph for summarization.
Fig. 2 shows the proposed framework. As a concrete example, the sentence-entity graph in Fig. 2 is constructed from the example in Fig. 1. The graph comprises entity nodes and sentence nodes, connected by three types of relational edges, which are shaped differently in Fig. 2. The proposed framework consists of: 1) the graph construction module, which builds and initializes the sentence-entity graph for an input document (§3.1); 2) the graph layer, which applies the relational heterogeneous GNN to calculate node encodings (§3.2); 3) the multi-task selector, which selects salient sentences and entities by training R-HGNN with a multi-task objective (§3.3); and 4) the generator, which generates abstractive summaries with the entity-focused pointer-generator network, taking the selected sentences and entities as input (§3.4). Finally, an RL connector connects the selector and the generator (§3.5). Due to limited computational resources, pre-trained encoders (e.g., BERT) are not applied, which is regarded as future work. Moreover, the proposed model is orthogonal to BERT-based models.
### Constructing a sentence-entity graph for a document
Given a document, a heterogeneous sentence-entity graph is constructed. Previous work [2] uses sentences and words as nodes to build a heterogeneous bipartite graph with only a single type of edge, i.e., the sentence-word edge. This research uses entities instead of words as nodes because: 1) entities are more informative than words, and, more importantly, 2) entities mentioned in text can be linked to Knowledge Graphs to make use of knowledge beyond text. However, as every coin has two sides, entities are sparser than words in text. Therefore, apart from sentence-entity edges, sentence-sentence edges and entity-entity edges are added to the sentence-entity graph as shown in Fig. 2. Each sentence corresponds to a sentence node in the graph, and each entity in the document corresponds to an entity node. Nodes and edges are built and encoded as follows; the Knowledge Graph plays an important part in entity encoding and in constructing entity-entity edges.
**Building entity nodes** Since each entity may have many mentions in the document, the off-the-shelf NER tool is employed to recognize entity mentions in the input document, and the off-the-shelf coreference resolution tool is then used to cluster the mentions. Both tools are from Stanford CoreNLP [51]. Technically, the entity mentions recognized by the NER tool are precise enough, but the number of recognized mentions is limited. Therefore, the coreference resolution tool is used to recall more mentions and to cluster them based on the reference chains. To balance precision and the number of mentions, only the chains containing mentions recognized by the NER tool are used. A mention cluster corresponds to an entity. Entity nodes are then built and encoded as shown in Fig. 3.
In Fig. 3, to connect the sentence-entity graph with the Knowledge Graph, entities in the input document are linked to YAGO2 with entity linking techniques. YAGO2 is an open-source knowledge base extracted from Wikipedia, comprising a huge number of entities and facts [50]. Entity linking aims to link entities mentioned in text to entities registered in the knowledge base. Entity linking is a hard job because entities have ambiguous mentions in text. Recent years have witnessed numerous studies on entity linking driven by the development of Knowledge Graphs and deep learning techniques [20, 21, 22]. Among them, this study adopts the state-of-the-art open-source framework AIDA-light [21] to link each entity mention cluster to entities registered in YAGO2. AIDA-light is a lightweight collective entity linking framework unifying three features: the prior probability of an entity being mentioned, the similarity between the contexts of a mention and a candidate entity, and the coherence among entities.
In Fig. 3, entity node embeddings are obtained by concatenating word-level embeddings and entity-level embeddings. 1) Word-level entity embeddings reflect the word-level literal meanings of entities. The mentions in a mention cluster are ordered as they occur in the input document and are concatenated into one sequence with the special token <SEP> as the separator. A bi-directional recurrent neural network (BiRNN) is employed to encode each mention sequence. The last forward hidden state and the last backward hidden state are concatenated as the word-level entity embedding of the \(j^{\text{th}}\) entity, denoted as \(e_{j}^{w}\). 2) Entity-level embeddings reflect entity relatedness in the Knowledge Graph. RDF2Vec [23] is adopted to learn entity-level embeddings from YAGO2. RDF2Vec first converts the knowledge graph into a set of sequences of entities using graph walks, and then uses those sequences to train a neural language model estimating the likelihood of a sequence of entities appearing in a graph. The learned entity-level embedding of the \(j^{\text{th}}\) entity is denoted as \(e_{j}^{e}\), and the entity-level embeddings of all entities in the document are denoted as the matrix \(E^{E}\), whose \(j^{\text{th}}\) row equals \(e_{j}^{e}\). Since the total number of entities in YAGO2 is huge and it is unnecessary to learn embeddings for all of them, this study maintains entity-level embeddings for a fixed-size entity vocabulary in which each entity occurs frequently in the experimental corpora. The special entity UNK is used for entities that are out of the vocabulary. The word-level embedding \(e_{j}^{w}\) and the entity-level embedding \(e_{j}^{e}\) are concatenated to serve as the initial encoding of the \(j^{\text{th}}\) entity node in the sentence-entity graph, denoted as \(E_{j}^{(0)}=[e_{j}^{w},e_{j}^{e}]\). Take the entity Tamil Tigers as an example. It is mentioned in the document as Tigers, Liberation Tigers of Tamil Eelam, LTTE, and The Militant Organization. These mentions are concatenated with <SEP> and encoded with the BiRNN to get the word-level embedding. This entity is also in YAGO2 and can be encoded with RDF2Vec to get the entity-level embedding. Word- and entity-level embeddings are concatenated as the final embedding of the entity. This way, entity nodes embrace word-level lexical information and entity-level information from the Knowledge Graph.

Fig. 3: Building and encoding entity nodes
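A minimal PyTorch sketch of this entity-node encoding, assuming a GRU-based BiRNN and a pretrained RDF2Vec lookup table (all names here are ours, not the authors' code):

```python
import torch
import torch.nn as nn

class EntityNodeEncoder(nn.Module):
    """Sketch of the entity-node encoding: a BiRNN over the concatenated mention
    sequence gives the word-level embedding e_w; a pretrained RDF2Vec table gives
    the entity-level embedding e_e; the node embedding is E_j^(0) = [e_w, e_e]."""
    def __init__(self, word_emb: nn.Embedding, rdf2vec_emb: nn.Embedding, hidden: int = 128):
        super().__init__()
        self.word_emb = word_emb
        self.rdf2vec_emb = rdf2vec_emb  # entity-level table initialized from RDF2Vec
        self.birnn = nn.GRU(word_emb.embedding_dim, hidden,
                            batch_first=True, bidirectional=True)

    def forward(self, mention_token_ids: torch.Tensor, entity_id: torch.Tensor):
        # mention_token_ids: (1, T) word ids of "m1 <SEP> m2 <SEP> ..." for one entity
        _, h_n = self.birnn(self.word_emb(mention_token_ids))
        e_w = torch.cat([h_n[0, 0], h_n[1, 0]], dim=-1)  # last fwd + last bwd state
        e_e = self.rdf2vec_emb(entity_id).squeeze(0)     # entity-level (RDF2Vec) part
        return torch.cat([e_w, e_e], dim=-1)             # E_j^(0) = [e_w, e_e]
```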
**Building sentence nodes** The \(M\) sentences of the input document are represented as \([s_{1},\,s_{2},\,s_{3},\,\ldots,\,s_{M}]\). The sentence \(s_{i}\) is a sequence of words represented as \([s_{i1},\,s_{i2},\,s_{i3},\,\ldots,\,s_{i|s_{i}|}]\), where \(|s_{i}|\) is the length of the \(i^{\text{th}}\) sentence \(s_{i}\). A two-level BiRNN is then applied to encode the sentence sequence as shown in Fig. 4. Equation (1) employs the word-level BiRNN to encode each sentence word by word, and equation (2) concatenates the last forward hidden state and the last backward hidden state as the sentence representation. Equation (3) employs the sentence-level BiRNN, taking the sentence representations as input, to encode the sentence sequence and preserve the sequential relationships among sentences, and equation (4) concatenates the forward and backward hidden states as the initial sentence node representation \(S_{i}^{(0)}\). As an example, the five sentences in Fig. 1(a) are first encoded with equations (1) and (2) independently, and then encoded with equations (3) and (4) sequentially to get the final sentence representations.
\[[\overrightarrow{h}_{i1},\overrightarrow{h}_{i2},...,\overrightarrow{h}_{i|s_{i}|};\overleftarrow{h}_{i1},\overleftarrow{h}_{i2},...,\overleftarrow{h}_{i|s_{i}|}]=BiRNN1([s_{i1},s_{i2},...,s_{i|s_{i}|}]) \tag{1}\]

\[s\_rep_{i}=[\overrightarrow{h}_{i|s_{i}|},\overleftarrow{h}_{i1}] \tag{2}\]

\[[\overrightarrow{h}_{1},\overrightarrow{h}_{2},...,\overrightarrow{h}_{M};\overleftarrow{h}_{1},\overleftarrow{h}_{2},...,\overleftarrow{h}_{M}]=BiRNN2([s\_rep_{1},s\_rep_{2},...,s\_rep_{M}]) \tag{3}\]

\[S_{i}^{(0)}=[\overrightarrow{h}_{i},\overleftarrow{h}_{i}] \tag{4}\]
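A minimal PyTorch sketch of the two-level encoder in equations (1)-(4), assuming GRU cells and pre-embedded, padded sentences (names are ours):

```python
import torch
import torch.nn as nn

class TwoLevelSentenceEncoder(nn.Module):
    """Sketch of equations (1)-(4): a word-level BiRNN encodes each sentence,
    then a sentence-level BiRNN runs over the sentence representations."""
    def __init__(self, emb_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.word_rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.sent_rnn = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)

    def forward(self, sents: torch.Tensor) -> torch.Tensor:
        # sents: (M, T, emb_dim) -- M sentences of (padded) length T, already embedded
        _, h_n = self.word_rnn(sents)                # eq. (1)
        s_rep = torch.cat([h_n[0], h_n[1]], dim=-1)  # eq. (2): (M, 2*hidden)
        out, _ = self.sent_rnn(s_rep.unsqueeze(0))   # eq. (3): one sequence of M steps
        return out.squeeze(0)                        # eq. (4): S^(0), (M, 2*hidden)
```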
**Building edges** There are three types of edges in the sentence-entity graph. 1) The sentence-entity (SE for short) edge is built between a sentence node and an entity node if the sentence contains any mention of the entity. SE edges reflect containment relationships between sentences and entities, and their weights are set as the occurrence count of the entity in the sentence. 2) The sentence-sentence (SS for short) edge is built between every two adjacent sentences in the input document, so there are \(M-1\) SS edges in the graph of a document of \(M\) sentences. SS edges reflect sequential relationships among sentences, and their weights are set as 1. 3) The entity-entity (EE for short) edge is built between any two entities if they co-occur in a Wikipedia webpage. The EE edge is built based on the external knowledge base, unlike the other two edge types, which are built from the input document itself. The weights of EE edges are set as the number of Wikipedia webpages in which the two entities co-occur. Fortunately, the data on entity-entity co-occurrences in Wikipedia are provided in AIDA. As the example in Fig. 2 shows, there is an EE edge between the entities e2 and e3 because the two entities occur in a same Wikipedia webpage. There is an SE edge between the sentence s3 and the entity e2 because s3 contains e2, as shown in Fig. 1(a). And there is an SS edge between the sentences s2 and s3 because they are adjacent. The three edge types represent three orthogonal relationships, and all edges are undirected.
Figure 4: Encoding sentences
Entity-entity edges are built based on the KG, and entity-level entity embeddings are initialized on the KG. In the following sections, the training of the entity encodings will be supervised using the entity-entity edges.
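A minimal sketch of this edge construction (hypothetical input formats; not the authors' code) is given below.

```python
import numpy as np

def build_adjacency(M, N, sent_entity_counts, ee_cooccur):
    """Sketch of Section 3.1's edge construction for a graph with M sentence
    nodes (ids 0..M-1) and N entity nodes (ids M..M+N-1).
    sent_entity_counts: dict {(i, j): count} -- occurrences of entity j in sentence i.
    ee_cooccur: dict {(j, k): count} -- Wikipedia-page co-occurrences of entities j, k."""
    size = M + N
    A_SS = np.zeros((size, size))
    A_SE = np.zeros((size, size))
    A_EE = np.zeros((size, size))
    for i in range(M - 1):                         # SS edges between adjacent sentences
        A_SS[i, i + 1] = A_SS[i + 1, i] = 1.0
    for (i, j), c in sent_entity_counts.items():   # SE edges weighted by occurrence count
        A_SE[i, M + j] = A_SE[M + j, i] = float(c)
    for (j, k), c in ee_cooccur.items():           # EE edges weighted by co-occurrence count
        A_EE[M + j, M + k] = A_EE[M + k, M + j] = float(c)
    return A_SS, A_SE, A_EE
```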
### The relational heterogeneous GNN for summarization
This section introduces the relational heterogeneous graph neural network, which computes node encodings for the sentence-entity graph with weighted multi-type edges for summarization. Edge types and edge weights are both important information for message propagation in GNNs. The traditional GNN [15] makes use of edge weights to calculate graphs with weighted single-type edges, and the traditional R-GNN [19] makes use of edge types to calculate graphs with unweighted multi-type edges. For the sentence-entity graph with weighted multi-type edges, R-HGNN makes use of both edge types and edge weights by combining the advantages of both GNNs. R-HGNN employs the propagation algorithm of the GNN for intra-edge-type propagation, defines an edge-type-specific transformation function as R-GNN does, and aggregates the resulting edge-type-specific encodings.
**Definition of R-HGNN** Suppose the input graph has \(m\) nodes and \(c\) edge types. The edge types are denoted as \(\{ET_{1}\), \(ET_{2}\),..., \(ET_{c}\}\). Nodes are linked by different types of weighted edges, and each edge type corresponds to an independent adjacency matrix over the nodes. The notations are as follows:
* \(X^{(l)}\in R^{m\times d}\) is the node encodings in the \(l^{\text{th}}\) level of R-HGNN, and \(X^{(0)}\) is the initial node embeddings.
* \(A^{ET_{k}}\in R^{m\times m}\) is the edge-type-specific adjacency matrix for the edge type \(ET_{k}\); the elements of \(A^{ET_{k}}\) are the weights of the corresponding edges.
The goal of R-HGNN is to learn a function \(Z=f(X^{(0)},A^{ET_{1}},A^{ET_{2}},...,A^{ET_{c}})\) where \(Z\) is the high-level hidden features for the nodes, encapsulating the information of edge types, edge weights, and the graph structure. R-HGNN has \(L\) levels, and \(Z=X^{(L)}\). Equations (5) to (7) are the equations for the propagation process of R-HGNN to calculate node encodings in the \(l^{\text{th}}\) level.
\[X^{ET_{k}(l)}={D^{ET_{k}}}^{-\frac{1}{2}}A^{ET_{k}}{D^{ET_{k}}}^{-\frac{1}{2}}g^{ET_{k}}(X^{(l-1)}) \tag{5}\]

\[X^{self(l)}=g^{self}(X^{(l-1)}) \tag{6}\]

\[X^{(l)}=\sigma\Big(\sum_{k=1}^{c}X^{ET_{k}(l)}+X^{self(l)}\Big) \tag{7}\]
Equation (5) calculates the edge-type-specific node encodings by making use of both edge weights and edge types. Firstly, edge weights are used in the propagation process by partly borrowing the idea of the original GNN proposed in [15]. As in [15], the edge-type-specific matrix \(A^{ET_{k}}\) is normalized by the degree matrix \(D^{ET_{k}}\), whose diagonal elements are \(D_{ii}^{ET_{k}}=\sum_{j}A_{ij}^{ET_{k}}\), because directly using the edge-type-specific matrices would change the scale of the node encodings. The theoretical justification of this calculation is provided in [15]. Secondly, edge types are used in the propagation process by partly borrowing the idea of the original R-GNN proposed in [19]. As in [19], each edge type is equipped with an edge-type-specific transformation function \(g^{ET_{k}}(\cdot)\), and the linear transformation \(g^{ET_{k}}(X)=W^{ET_{k}}X\) with a trainable weight matrix \(W^{ET_{k}}\) is chosen. The node encodings are transformed by \(g^{ET_{k}}(\cdot)\) and propagated to neighboring nodes through the edge-type-specific propagation process. Virtual self-edges are added to the graph to account for self-loops in the propagation process, and equation (6) calculates the node encodings for the self-edges.
Equation (7) calculates the encodings of each node in the \(l^{\text{th}}\) level by accumulating all edge-type-specific encodings of the node, where \(\sigma(\cdot)\) is an element-wise activation function such as \(\mathrm{ReLU}(\cdot)=\max(0,\cdot)\). The accumulated encodings in the \(l^{\text{th}}\) level are used for propagation in the \((l+1)^{\text{th}}\) level, and the encodings in the final \(L^{\text{th}}\) level are used as node features in the next subsection.
R-HGNN combines the advantages of both the original GNN and the original R-GNN to make use of edge types and edge weights for graphs with weighted multi-type edges. To some degree, the original GNN and the original R-GNN can be seen as special cases of R-HGNN without edge types and without edge weights, respectively.
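A minimal NumPy sketch of one R-HGNN level, equations (5)-(7), follows; the edge-type transform is written here as a right-multiplication by a trainable matrix, and all names are ours.

```python
import numpy as np

def rhgnn_layer(X, adjacency, weights, W_self):
    """Sketch of equations (5)-(7): one R-HGNN level.
    X: (m, d) node encodings from the previous level.
    adjacency: list of (m, m) edge-type-specific weighted adjacency matrices A^{ET_k}.
    weights: list of (d, d) edge-type-specific transforms W^{ET_k}; W_self: (d, d)."""
    out = X @ W_self                                            # eq. (6): self-loop term
    for A, W in zip(adjacency, weights):
        deg = A.sum(axis=1)
        d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
        A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]  # D^-1/2 A D^-1/2
        out += A_norm @ (X @ W)                                 # eq. (5): per-edge-type term
    return np.maximum(out, 0.0)                                 # eq. (7): ReLU accumulation
```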
**Applying R-HGNN to the sentence-entity graph** The sentence-entity graph constructed in the previous subsection has three types of weighted edges, i.e., the SE edges, the SS edges, and the EE edges. The graph has \(M\) sentence nodes and \(N\) entity nodes. The proposed R-HGNN can be applied to calculate node encodings. The inputs are formalized as follows:
* \(A^{SS}\in R^{(M+N)\times(M+N)}\), the adjacency matrix for the SS edges, where \(A^{SS}_{ij}=1\) if \(i\) and \(j\) are both sentence nodes and are adjacent in the document, else \(A^{SS}_{ij}=0\), as described in Section 3.1.
* \(A^{SE}\in R^{(M+N)\times(M+N)}\), the adjacency matrix for the SE edges, where \(A^{SE}_{ij}\) is set as the occurrence count of the entity in the sentence if \(i\) and \(j\) are nodes of different types, else \(A^{SE}_{ij}=0\), as described in Section 3.1.
* \(A^{EE}\in R^{(M+N)\times(M+N)}\), the adjacency matrix for the EE edges, where \(A^{EE}_{ij}\) is set as the co-occurrence count of the entity \(i\) and the entity \(j\) in Wikipedia web pages if \(i\) and \(j\) are both entity nodes, else \(A^{EE}_{ij}=0\), as described in Section 3.1.
* \(X^{(0)}=\left[\begin{array}{c}S^{(0)}\\ E^{(0)}\end{array}\right]\in R^{(M+N)\times d}\), the initial node embedding matrix, where \(S^{(0)}\in R^{M\times d}\) is the matrix of initial sentence node encodings and \(E^{(0)}\in R^{N\times d}\) is the matrix of initial entity node encodings, calculated as described in Section 3.1. \(X^{(0)}\) is the concatenation of \(S^{(0)}\) and \(E^{(0)}\).

The outputs are the node encodings in the \(L^{\text{th}}\) level of R-HGNN, i.e., the sentence node encodings \(S^{(L)}\) and the entity node encodings \(E^{(L)}\). Take the graph in Fig. 2 as an example. There are five sentences and four entities in the graph. The adjacency matrices \(A^{SS}\), \(A^{SE}\) and \(A^{EE}\) can be computed accordingly, and the node embedding matrix \(X^{(0)}\) can be initialized as described in Section 3.1. R-HGNN is applied to the graph to calculate the sentence node encodings and entity node encodings. In the following subsection, a multi-task selector is defined upon R-HGNN, using these encodings as features.
### The multi-task selector
The multi-task selector has two tasks: selecting salient sentence nodes and selecting salient entity nodes from the sentence-entity graph. The two tasks can benefit each other, because salient sentences often contain salient entities. As the example in Fig. 1(a) shows, the leading sentence is a salient sentence and contains the salient entity Tamil Tigers, so the selection of salient sentences and the selection of salient entities can affect each other. The selected sentences form the extractive summary, and the selected entities can be used as queries for the entity-focused generator introduced in the next subsection. With the node encodings of the last level of R-HGNN as features, the objectives of the multi-task selector are defined as follows:
**Supervising with sentence selecting and entity selecting collectively** Equation (8) computes the probability of each sentence being selected by applying a softmax function over the sentence node encodings with an MLP transformation. Equation (9) computes the probability of each entity being selected by applying a softmax function over the entity node encodings with an MLP transformation.

\[p(\hat{y}_{i}^{S}=1)\sim\mathrm{softmax}(\mathrm{MLP}(S^{(L)})) \tag{8}\]

\[p(\hat{y}_{j}^{E}=1)\sim\mathrm{softmax}(\mathrm{MLP}(E^{(L)})) \tag{9}\]
Cross entropy is adopted to compute the loss. As mentioned, the label \(y_{i}^{S}=1\) means that the \(i^{\text{th}}\) sentence is a summary sentence and \(y_{i}^{S}=0\) means that it is not, so the ground-truth probability that the \(i^{\text{th}}\) sentence is selected is \(p(y_{i}^{S}=1)=\dfrac{y_{i}^{S}}{\sum_{i=1}^{M}y_{i}^{S}}\). The entity label \(y_{j}^{E}=1\) means that the \(j^{\text{th}}\) entity is a summary entity and \(y_{j}^{E}=0\) means that it is not, so the ground-truth probability that the \(j^{\text{th}}\) entity is selected is \(p(y_{j}^{E}=1)=\dfrac{y_{j}^{E}}{\sum_{j=1}^{N}y_{j}^{E}}\). Equations (10) and (11) compute the sentence loss and the entity loss respectively.
\[loss^{S}=CrossEntropy(y^{S},\hat{y}^{S}) \tag{10}\]
\[loss^{E}=CrossEntropy(y^{E},\hat{y}^{E}) \tag{11}\]
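A minimal sketch of the selection heads in equations (8)-(11), with hypothetical helper names and the MLP score networks assumed given:

```python
import torch
import torch.nn.functional as F

def selector_losses(S_L, E_L, y_sent, y_ent, mlp_s, mlp_e):
    """Sketch of equations (8)-(11): softmax selection distributions over sentence
    and entity nodes, trained against normalized 0/1 ORACLE labels.
    S_L: (M, d) sentence encodings; E_L: (N, d) entity encodings;
    y_sent: (M,) 0/1 float labels; y_ent: (N,) 0/1 float labels."""
    log_p_hat_s = F.log_softmax(mlp_s(S_L).squeeze(-1), dim=0)  # eq. (8)
    log_p_hat_e = F.log_softmax(mlp_e(E_L).squeeze(-1), dim=0)  # eq. (9)
    p_s = y_sent / y_sent.sum()                                 # ground-truth distribution
    p_e = y_ent / y_ent.sum()
    loss_s = -(p_s * log_p_hat_s).sum()                         # eq. (10): cross entropy
    loss_e = -(p_e * log_p_hat_e).sum()                         # eq. (11)
    return loss_s, loss_e
```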
**Supervising entity embeddings with entity-entity edges** Entity embeddings consist of word-level embeddings and entity-level embeddings, where the entity-level embeddings are initialized by RDF2Vec to reflect the global relatedness of entities in YAGO2. Since the embeddings are trainable in the proposed model, the relatedness information between entities would be lost or diluted if the training of the entity embeddings were not directly supervised. Moreover, the local entity relatedness reflecting the structure of the constructed sentence-entity graph should also be reinforced in the entity embeddings. It is reasonable to believe that keeping entity relatedness information in the entity embeddings can improve the performance of the sentence selector and the entity selector.
\[r_{i,j}^{EE}=\dfrac{A_{i,j}^{EE}}{\sum_{i,j}A_{i,j}^{EE}} \tag{12}\]
\[\hat{r}_{i,j}^{EE}\sim\mathrm{softmax}(E^{E}{E^{E}}^{T}) \tag{13}\]
\[loss^{EE}=CrossEntropy(r^{EE},\hat{r}^{EE}) \tag{14}\]
Because entity-entity edges reflect entity relatedness in Wikipedia, they are used as the ground-truth entity-entity relatedness to supervise the training of the entity embeddings. Firstly, the entity-entity edges for every two entities in the document are normalized by equation (12), where \(A^{EE}\) is the adjacency matrix for EE edges and the element \(A_{i,j}^{EE}\) is the co-occurrence count of the entity \(i\) and the entity \(j\) in Wikipedia webpages, as calculated in Section 3.1. Only entities in the document are considered, because 1) entities that co-occur in the same document have higher relatedness, which will be reflected in the entity embeddings; and 2) using a small entity set for supervision instead of the entire entity set saves the limited computational resources and speeds up training. Secondly, entity-entity relatedness is predicted by using the softmax activation function to normalize the dot products between the entity-level embedding matrix \(E^{E}\) and its transpose \({E^{E}}^{T}\) in equation (13), where \(E^{E}\) is the matrix of the entity-level embeddings of the entities in the input document, as calculated in Section 3.1. Finally, equation (14) computes the loss as the cross entropy between the ground-truth and the predicted entity-entity relatedness.
**Loss of the multi-task selector** The final loss of the multi-task selector is defined in equation (15) by linearly combining the sentence selection loss, the entity selection loss, and the entity-entity relatedness loss. The hyper-parameters \(\lambda^{E}\) and \(\lambda^{EE}\) are empirically set as 0.42 and 0.33 respectively.

\[loss^{selector}=loss^{S}+\lambda^{E}loss^{E}+\lambda^{EE}loss^{EE} \tag{15}\]
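A minimal sketch of the entity-relatedness supervision in equations (12)-(14) follows; the softmax is applied here over all entity pairs, since the normalization axis is not fully specified above, and the names are ours.

```python
import torch
import torch.nn.functional as F

def ee_relatedness_loss(E_ent, A_EE):
    """Sketch of equations (12)-(14): supervise entity-level embeddings with
    normalized entity-entity co-occurrence edges.
    E_ent: (N, d) entity-level embeddings of the document's entities;
    A_EE: (N, N) co-occurrence counts from Wikipedia (Section 3.1)."""
    r = A_EE / A_EE.sum()                               # eq. (12): target distribution
    logits = E_ent @ E_ent.T                            # pairwise dot products
    log_r_hat = F.log_softmax(logits.flatten(), dim=0)  # eq. (13)
    return -(r.flatten() * log_r_hat).sum()             # eq. (14): cross entropy

# eq. (15): loss = loss_s + 0.42 * loss_e + 0.33 * ee_relatedness_loss(E_ent, A_EE)
```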
**Inference** In the inference stage, sentences and entities are ranked by \(P(\hat{y}_{i}^{S}=1)\) and \(P(\hat{y}_{j}^{E}=1)\) respectively, and the top-ranked ones are selected.
### The entity-focused generator
The entity-focused generator takes as input the salient sentences and salient entities selected by the multi-task selector, and generates abstractive summaries. It extends the state-of-the-art pointer-generator network [49] by using the entities as queries.
The sentences are ordered as they occur in the original document and are concatenated into a whole text. A BiRNN is then applied to encode the text. Let \(\overrightarrow{h}_{i}^{T}\) and \(\overleftarrow{h}_{i}^{T}\) be the forward and backward hidden states of the \(i^{\text{th}}\) word in the concatenated text. \(d\_rep=[\overrightarrow{h}_{m}^{T},\overleftarrow{h}_{1}^{T}]\) is the representation of the concatenated text, where \(m\) is the word count of the text. Let \(h_{i}^{T}=[\overrightarrow{h}_{i}^{T},\overleftarrow{h}_{i}^{T}]\) be the encoding of the \(i^{\text{th}}\) word.
For entity encodings, only the word-level entity embeddings (denoted as \(e_{j}^{w}\) for the \(j^{\text{th}}\) entity, as described in Section 3.1) are adopted to represent the entities. Average pooling is then applied over the encodings of the selected entities to get the encoding of the salient entity set as \(h^{E}=\mathrm{avg}_{j=1}^{N}(e_{j}^{w})\).
As with the original pointer-generator network, the decoder generates words following the vocabulary distributions and pointer distributions of the words as shown below.
\[p(w_{t})=p_{gen}p_{vocab}(w_{t})+(1-p_{gen})\sum_{i:w_{i}=w_{t}}a_{t,i} \tag{16}\]
The salient entities are used to calculate the attention \(a_{t,i}\) and the generation probability \(p_{gen}\). Let \(h_{t}\) be the hidden state at the \(t^{\text{th}}\) decoding step. The attention and \(p_{gen}\) are calculated as follows:

\[a_{t}=\mathrm{softmax}(\alpha_{t}) \tag{17}\]

\[\alpha_{t,i}=v\tanh(W^{aD}h_{t}+W^{aT}h_{i}^{T}+W^{aE}h^{E}+b^{attn}) \tag{18}\]

\[p_{gen}=\mathrm{sigmoid}(W^{pD}h_{t}+W^{pT}h_{i}^{T}+W^{pE}h^{E}+x_{t}+b^{gen}) \tag{19}\]
In these equations, the entity encodings are added as parameters to compute the attention and the generation probability. In this way, entity information is incorporated into summary generation. As with the pointer-generator network, the coverage loss \(cov\_loss_{t}\) is also added.

The loss is the negative log-likelihood of the predicted word plus the coverage loss, i.e., \(loss_{t}^{generator}=-\log(p(w_{t}))+\lambda^{\text{cov}}cov\_loss_{t}\). Readers can refer to [49] for more details.
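The mixture in equation (16) can be sketched as follows; this is an illustrative implementation of the standard pointer-generator mixing step, with hypothetical tensor names, not the authors' code.

```python
import torch

def pointer_generator_step(p_vocab, attn, src_ids, p_gen):
    """Sketch of equation (16): mix the generator's vocabulary distribution with
    the copy (pointer) distribution induced by the attention weights.
    p_vocab: (V,) softmax over the vocabulary; attn: (T,) attention over source words;
    src_ids: (T,) long tensor of vocabulary ids of the source words; p_gen: scalar in (0, 1)."""
    p_copy = torch.zeros_like(p_vocab)
    p_copy.scatter_add_(0, src_ids, attn)  # sum a_{t,i} over positions i with w_i = w_t
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy
```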
### Connecting the selector and the generator
The selector selects salient sentences and salient entities, whereas the generator compresses and paraphrases them. Up to this point, the two networks are trained separately without any parameter sharing. To connect them, the self-critical learning algorithm [52] based on policy gradient is adopted.
In line with the Markov Decision Process formulation, at each time step the selector samples sentences and entities from the input document, and the generator uses the sampled sentences and entities to generate an abstractive summary. This summary is evaluated against the ground-truth summary and receives the ROUGE-1 score [53] as the reward.
Following [52], in the training process the selector samples sentences and entities from the distributions \(p(\hat{y}_{i}^{S}=1)\) and \(p(\hat{y}_{j}^{E}=1)\) introduced in Section 3.3. The generator then creates an abstractive summary and returns the Rouge-1 reward, denoted as \(R\). Let \(Sample^{S}\) be the index set of the sampled sentences in the input document, and \(Sample^{E}\) the index set of the sampled entities. For each \(i\in Sample^{S}\), \(y_{i}^{\prime S}\) is set as 1 and \(p(y_{i}^{\prime S}=1)=\dfrac{y_{i}^{\prime S}}{\sum_{k\in Sample^{S}}y_{k}^{\prime S}}\); for \(i\notin Sample^{S}\), \(p(y_{i}^{\prime S}=1)=0\). For each \(j\in Sample^{E}\), \(y_{j}^{\prime E}\) is set as 1 and \(p(y_{j}^{\prime E}=1)=\dfrac{y_{j}^{\prime E}}{\sum_{k\in Sample^{E}}y_{k}^{\prime E}}\); for \(j\notin Sample^{E}\), \(p(y_{j}^{\prime E}=1)=0\).
As with previous work [3], the parameters of the generator are frozen. Only the selector is trained by reinforcement learning. The following equations compute the loss of reinforcement learning.
\[loss^{RL}=R\cdot\big(CrossEntropy(p(y^{\prime S}),p(\hat{y}^{S}))+\lambda^{E}\,CrossEntropy(p(y^{\prime E}),p(\hat{y}^{E}))\big) \tag{20}\]
If the selector accurately selects salient sentences and entities, the entity-focused generator is more likely to generate high-quality abstractive summaries, which will be encouraged. Otherwise, actions resulting in inferior selections will be discouraged.
Equation (21) recomputes the loss of the selector with the RL loss, where the hyper-parameter \(\lambda^{RL}\) is empirically set as 0.6.
\[loss^{selector\_with\_RL}=loss^{selector}+\lambda^{RL}loss^{RL} \tag{21}\]
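The reward-weighted objective of equations (20)-(21) can be sketched as follows; the function and argument names are ours, and the sampled-action target distributions are assumed precomputed as described above.

```python
import torch

def rl_selector_loss(reward, p_sent_target, p_sent_pred, p_ent_target, p_ent_pred,
                     lambda_e=0.42, eps=1e-12):
    """Sketch of equations (20)-(21): reward-weighted cross entropy between the
    sampled-action distributions and the selector's predicted distributions.
    reward: scalar Rouge-1 reward R of the generated summary."""
    ce_s = -(p_sent_target * torch.log(p_sent_pred + eps)).sum()
    ce_e = -(p_ent_target * torch.log(p_ent_pred + eps)).sum()
    return reward * (ce_s + lambda_e * ce_e)  # eq. (20)

# eq. (21): total = loss_selector + 0.6 * rl_selector_loss(...)
```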
## 4 Experiments
### Corpora and preprocessing
Two popular summarization datasets are used to evaluate the proposed model: The CNN/DailyMail (CNN/DM) dataset [54] and the NYT50 dataset [55].
For CNN/DM, the standard dataset split of 286649/13359/11490 examples is used for training, validation, and test. The preprocessing steps in [54] are followed to obtain the plain text. To obtain entities and mentions, the annotations provided by the original dataset are used; for each entity in an example, the dataset provides the entity id and the start and end positions of all mentions.
NYT50 is a subset of the New York Times Annotated Corpus [56] preprocessed by [55] for document summarization. The dataset contains 110540 articles with summaries and is split into 100834 and 9706 examples for training and test. Following the preprocessing steps in [55], the last 4000 examples of the training set are used as the validation set, and the test set is filtered to 3452 examples. To obtain entities and mentions, the standoff annotations in NYT50 that provide entity mention annotations and coreference annotations are used. Mentions are clustered by coreference ids in the annotation files, and each mention cluster represents an entity. Entity types are also provided in the annotation files. Entities of the location, person, organization, and event types are used for the experiments.
Table 1 shows statistics of the two datasets. The observations are as follows:
\(\bullet\) Though CNN/DM has fewer sentences per document than NYT50, CNN/DM has more entities than NYT50, which indicates that CNN/DM is entity-denser than NYT50.
\(\bullet\) Sentences in CNN/DM contain on average 2.14 entity mentions, while sentences in NYT50 contain only 0.88 mentions on average, indicating that sentences in CNN/DM are more likely to contain the same entities than in NYT50.
\(\bullet\) 74% of the entities in CNN/DM can be linked to YAGO2, almost twice as many as in NYT50.
\(\bullet\) Most importantly, sentence-entity edges in CNN/DM are much denser than in NYT50. _SE.Density_ is defined as follows. Suppose the sentence-entity graph constructed from a document has _SE.Count_ sentence-entity edges, \(M\) sentences, and \(N\) entities; then the _SE.Density_ of the document is calculated as \(\dfrac{SE.Count+1}{M+N}\). The other two edge types are not counted here. _SE.Density_\(\geq\)1 means that the sentence-entity bipartite graph is a connected graph, and _SE.Density_\(<\)1 means that it is not. The bigger the _SE.Density_, the more connected the bipartite graph.
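For reference, _SE.Density_ can be computed as below (illustrative helper, with example numbers chosen to roughly match Table 1):

```python
def se_density(se_count: int, m_sentences: int, n_entities: int) -> float:
    """SE.Density as defined above: (SE.Count + 1) / (M + N). A value >= 1
    corresponds to the edge budget of a connected bipartite graph."""
    return (se_count + 1) / (m_sentences + n_entities)

# Example with CNN/DM-like average sizes (illustrative only):
print(se_density(se_count=47, m_sentences=28, n_entities=23))  # ~0.94
```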
### Implementation and parameter settings
The model is implemented with Tensorflow in Python. The code will be released on Github under the link [https://github.com/jingqiangchen/kbsumm](https://github.com/jingqiangchen/kbsumm). Due to the limited computational resources, the word vocabulary is limited to 40K words and is initialized with 128-dimensional word2vec embeddings [57]. The entity vocabulary of CNN/DM is limited to 500K entities because there are more than 1000K entities in the dataset; entities out of the vocabulary are replaced with the special UNK entity. The entity vocabulary of NYT50 is set as 146894, which covers all entities recognized in NYT50. The entity vocabulary is initialized with 128-dimensional RDF2Vec embeddings [23]. One layer of the GRU cell is adopted as the RNN cell. The number of R-HGNN levels is set as 2. The dimension of graph node embeddings is set as 512. The dimension of the hidden state of the BiRNN encoder is set as 256. The dimension of the hidden state of the RNN decoder is 512. The parameters of Adam are set as those in [58]. The batch size is set to 15. Convergence is reached within 50K-80K training steps. It takes about one day to train 30k-40k steps on a GTX-1080 TI GPU card.

Table 1: The statistics of the two datasets, where Doc.Sent, Sum.Sent, Doc.Ent, Sum.Ent, Sent.Men, Ent.YAGO, and SE.Density stand for the average sentence number in documents, the average sentence number in summaries, the average entity number in documents, the average entity number in summaries, the average entity mention number in sentences, the average number of entities successfully linked to YAGO2, and the density of sentence-entity edges, respectively.

| | Train | Dev | Test | Doc.Sent | Sum.Sent | Doc.Ent | Sum.Ent | Sent.Men | Ent.YAGO | SE.Density |
|---|---|---|---|---|---|---|---|---|---|---|
| CNN/DM | 286649 | 13359 | 11490 | 27.93 | 3.75 | 22.59 | 3.64 | 2.14 | 16.81 | 0.94 |
| NYT50 | 100834 | 4000 | 3452 | 40.81 | 2.94 | 21.21 | 2.59 | 0.88 | 8.75 | 0.57 |
For the multi-task selector, the top-\(k\) ranked sentences are selected as the extractive summary. \(k\) is set as 4 for CNN/DM and as 3 for NYT50, because the average number of sentences in summaries is 3.75 for CNN/DM and 2.94 for NYT50, as shown in Table 1. Similarly, the top-4 ranked entities for CNN/DM and the top-3 ranked entities for NYT50 are selected as salient entities, according to the average number of entities in summaries of the two datasets shown in Table 1. The input document is truncated to a maximum of 100 sentences, and the entity set of the input document to a maximum of 100 entities. For the entity-focused generator, the input text is truncated to a maximum length of 150 words, and the decoding steps of the generator are limited to 100 steps.
### Comparing with existing methods
More than ten baselines are compared with the proposed methods on CNN/DM and NYT50. The following are two versions of the proposed method and three strong existing baseline methods. Other baselines will be introduced when analyzing the evaluation results of the datasets.
* **RHGNNSumExt** The proposed extractive summarization method of this study, obtained by removing the entity-focused generator from the proposed model. The top-\(k\) ranked sentences are extracted as the extractive summary, where \(k\) depends on the dataset.
* **RHGNNSumAbs** The proposed complete model, which first extracts the top-\(k\) ranked sentences with RHGNNSumExt and then rewrites them into an abstractive summary through the entity-focused generator, with an RL connector connecting the selector and the generator.
* **HGNNSum** This is the extractive summarization approach proposed in [2]. It constructs a bipartite sentence-word graph for an input document, where a sentence-word edge is built if the sentence contains the word. HGNNSum applies the heterogeneous GNN to the graph and directly selects salient sentences as the summary with a sentence selector.
* **SENECA** This is the abstractive summarization approach proposed in [3], driven by entities to generate coherent summaries. Entities are first used to select salient sentences, and then an RL-based abstract generation module using coherence, conciseness and clarity as rewards is applied to compress the sentences and generate the final summaries.
* **ASGARD** This is the abstractive summarization approach proposed in [27]. It constructs an entity-entity graph from the input document, where nodes are entities extracted from the document and edges are predicates. The entity-entity graph is encoded with a GNN, and the document is encoded with an LSTM. The abstractive summary is then generated by attending to graph encodings and document encodings. The constructed entity-entity graph is not linked to external knowledge graphs, and sentence-entity relations are not considered in the work. Two versions of ASGARD without reinforcement learning are compared: ASGARD-DOC treats the input document as a whole for encoding, while ASGARD-SEG segments the input document into a set of paragraphs which are encoded independently and then combined with LSTMs.
The following gives the evaluations and analysis on CNN/DM and NYT50, respectively.
**Results on CNN/DM** Table 2 shows the evaluation results on the CNN/DM dataset. The first part is the LEAD-3 baseline, which simply selects the leading three sentences from documents, and the ORACLE upper bound, which is the ground-truth extractive summary obtained as described in Section 3. The second part contains four extractive summarization methods published in recent years. The third part includes four state-of-the-art abstractive summarization methods: the pointer-generator network, the sentence rewriting method, the RL-based method, and the bottom-up method. The last part includes the two strong baselines SENECA and HGNNSum, and the proposed methods in this study.
According to Table 2, RHGNNSumExt achieves the highest Rouge-1 and Rouge-L scores among all the extractive and abstractive methods. In particular, RHGNNSumExt outperforms HGNNSum on Rouge-1 and Rouge-L scores. The main difference between the two models is three-fold: 1) RHGNNSumExt builds a sentence-entity graph with weighted multi-type edges, while HGNNSum builds a sentence-word bipartite graph with a single edge type; 2) RHGNNSumExt utilizes both edge weights and edge types in the propagation process for calculating node encodings; 3) RHGNNSumExt injects external knowledge from YAGO2 into the graph by linking entities to the knowledge base, and trains the entity embeddings to fit the entity-entity edges. As shown in Table 1, each sentence in CNN/DM contains an average of 2.14 entity mentions, and the density of sentence-entity edges is 0.94, so there are enough sentence-entity edges in the sentence-entity graphs of the CNN/DM dataset for message propagation in R-HGNN. Nevertheless, the improvement of RHGNNSumExt over HGNNSum is not very significant. This is mainly because entity linking is a difficult task and suffers from missing or incorrect links, as mentioned in Section 3.1. As shown in Table 1, only 16.81 of the 22.59 entities per document in CNN/DM can be linked to YAGO2, and there can be incorrect links among the 16.81 linked entities. The state-of-the-art entity linking method AIDA-light achieves about 80% precision on a standard small-scale short-news dataset with high-quality manually annotated entity mentions [21]. For CNN/DM, the news articles are much longer and the quality of the mention annotations is much lower.

| Method | Rouge-1 | Rouge-2 | Rouge-L |
|---|---|---|---|
| LEAD-3 [49] | 40.34 | 17.70 | 36.57 |
| ORACLE [59] | 52.59 | 31.24 | 48.87 |
| JECS [60] | 41.70 | 18.50 | 37.90 |
| LSTM+PN [42] | 41.85 | 18.93 | 38.13 |
| HER w/o Policy [60] | 41.70 | 18.30 | 37.10 |
| HER w Policy [61] | 42.30 | 18.90 | 37.60 |
| PG [49] | 39.53 | 17.28 | 36.38 |
| SENTREWRITE [45] | 40.88 | 17.80 | 38.54 |
| DEEPREINFORCE [62] | 41.16 | 15.75 | 39.08 |
| BOTTOM-UP [47] | 41.22 | 18.68 | 38.34 |
| SENECA [3] | 41.52 | 18.36 | 38.09 |
| ASGARD-DOC [27] | 40.38 | 18.40 | 37.51 |
| ASGARD-SEG [27] | 40.09 | 18.30 | 37.30 |
| HGNNSum | 42.31 | **19.51** | 38.74 |
| RHGNNSumExt | **42.39** | 19.45 | **38.85** |
| RHGNNSumAbs | 41.63 | 18.45 | 38.00 |

Table 2: Comparison results on CNN/DM using Rouge F1 at the full summary length. Bold values indicate the best results.
The proposed abstractive method RHGNNSumAbs achieves the highest Rouge-1 score among all the abstractive methods. In particular, RHGNNSumAbs achieves higher Rouge-1 and Rouge-2 scores than SENECA. SENECA is a two-step entity-driven method which first selects salient sentences and then generates coherent abstractive summaries using entities as queries. RHGNNSumAbs and SENECA use different content selection methods: RHGNNSumAbs utilizes the graph-based model, which models sentence relations and incorporates knowledge graphs for content selection, while SENECA employs a single-layer unidirectional LSTM to recurrently extract salient sentences. RHGNNSumAbs also outperforms ASGARD-DOC and ASGARD-SEG. ASGARD does not employ an extractor before abstractive summary generation, and the quality of the extracted sentences greatly determines the quality of the generated summaries.
**Results on NYT50** Table 3 shows the evaluation results on the NYT50 dataset. As with the limited-length Rouge recall used in [55] and [2], the extracted sentences are truncated to the length of the human-written summaries, and recall scores are used instead of F1. The first two lines of Table 3 are baselines reported by [55], and the next two lines are the LEAD-3 baseline and the ORACLE upper bound for extractive summarization reported by [2]. The second part and the third part report the performance of other non-BERT-based studies and the proposed models in this study, respectively. Because SENECA results are not reported on NYT50, Table 3 does not show scores for SENECA.
The proposed model RHGNNSumExt outperforms most baselines on the NYT50 dataset. RHGNNSumExt does not outperform HGNNSum because there are far fewer sentence-entity edges and entity mentions in NYT50 than in CNN/DM, as shown in Table 1, so there are not enough sentence-entity edges in NYT50 for message propagation in R-HGNN. Concretely, sentences in NYT50 contain an average of 0.88 entity mentions and the _SE.Density_ value is 0.57, much lower than the 2.14 and 0.94 of CNN/DM. Moreover, the average number of entities successfully linked to YAGO2 in NYT50 documents is 8.75, also much smaller than the 16.81 for CNN/DM, leading to fewer entity-entity edges for NYT50, because entity-entity edges are built when two entities co-occur on Wikipedia web pages. A detailed discussion of _SE.Density_ is given in the following subsection.
Nevertheless, the proposed abstractive method RHGNNSumAbs outperforms all abstractive baselines and most extractive baselines in terms of Rouge-1 scores in Table 3. In particular, RHGNNSumAbs achieves a higher Rouge-1 score than the pointer-generator network (PG). RHGNNSumAbs is thus a competitive abstractive summarization method for the NYT50 dataset.
### Discussion on density of sentence-entity edges
As shown in the preceding subsection, the proposed method performs better on CNN/DM, which has higher _SE.Density_ values, and worse on NYT50, which has lower _SE.Density_ values, indicating that the _SE.Density_ of a dataset influences the performance of the proposed method. To see how the proposed method performs on datasets with different _SE.Density_ values, and what values of _SE.Density_ are suitable for the proposed method, experiments are carried out on subsets of the two datasets with different _SE.Density_ values.
| Method | Rouge-1 | Rouge-2 | Rouge-L |
|---|---|---|---|
| First sentence [55] | 28.60 | 17.30 | - |
| First k words [55] | 35.70 | 21.60 | - |
| LEAD-3 | 38.99 | 18.74 | 35.35 |
| ORACLE | 60.54 | 40.75 | 57.22 |
| COMPRESS [55] | 42.20 | 24.90 | - |
| SUMO [63] | 42.30 | 22.70 | 38.60 |
| PG [49] | 43.71 | 26.40 | - |
| DRM [62] | 42.94 | 26.02 | - |
| HGNNSum | 46.89 | 26.26 | 42.58 |
| RHGNNSumExt | 45.80 | 25.89 | 40.26 |
| RHGNNSumAbs | 44.30 | 24.33 | 37.03 |

Table 3: Comparison results on NYT50 using Rouge recall at the summary length of 100 words, where the result of the PG model is copied from [59] and '-' means the original paper did not report the result.
Fig. 5 shows sample distributions on CNN/DM and NYT50 under different _SE.Density_ values. As shown, the sample distributions of the two datasets are rather different. Summing up the numbers of samples whose _SE.Density_ is smaller than 0.7 in Fig. 5, there are in total 79470 out of 103656 samples from NYT50. In contrast, there are in total 244485 out of 311491 samples from CNN/DM whose _SE.Density_ is greater than 0.7. There are 25 samples in CNN/DM whose _SE.Density_ is between 0.0 and 0.1, compared to 520 in NYT50. And there are 123247 samples in CNN/DM whose _SE.Density_ is bigger than 1.0, compared to 2147 in NYT50. As a result of these distributions, the average _SE.Density_ of CNN/DM is 0.94 and the average _SE.Density_ of NYT50 is 0.57.
For CNN/DM, four sub-datasets are created for comparison by selecting the samples whose _SE.Density_ values are smaller than 0.5, 0.6, 0.7, and 0.8, respectively. For NYT50, four sub-datasets are created by selecting the samples whose _SE.Density_ values are greater than 0.4, 0.5, 0.6, and 0.7, respectively. For each sub-dataset, the split of train, dev, and test follows the original datasets.
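Sub-dataset construction amounts to thresholding a per-sample density value. A minimal sketch follows, assuming each sample carries a precomputed _SE.Density_ value under the (hypothetical) field name `se_density`:

```python
def make_subsets(samples, thresholds, keep_below):
    """Build sub-datasets by thresholding each sample's precomputed SE.Density.

    samples    : list of dicts, each with a "se_density" field
    thresholds : e.g. [0.5, 0.6, 0.7, 0.8] for CNN/DM (keep_below=True)
                 or   [0.4, 0.5, 0.6, 0.7] for NYT50  (keep_below=False)
    """
    subsets = {}
    for t in thresholds:
        if keep_below:
            subsets[t] = [s for s in samples if s["se_density"] < t]
        else:
            subsets[t] = [s for s in samples if s["se_density"] >= t]
    return subsets
```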
The proposed method RHGNNSumExt and the baseline method HGNNSum are trained and evaluated on the sub-datasets. For HGNNSum, the default settings provided in [2] are used, and the best eval models are adopted for testing. The results are shown in Table 4 and Table 5.
As shown in Table 4, for CNN/DM, the proposed method RHGNNSumExt performs worse than HGNNSum on the sub-datasets with _SE.Density_\(<\)0.5 and _SE.Density_\(<\)0.6, and better than HGNNSum on the sub-datasets with _SE.Density_\(<\)0.7 and _SE.Density_\(<\)0.8. For CNN/DM, the performance of RHGNNSumExt improves as the _SE.Density_ of the sub-datasets increases.
As shown in Table 5, for NYT50, RHGNNSumExt performs better than HGNNSum on the sub-dataset with _SE.Density_\(>\)=0.7, and worse than HGNNSum on the sub-datasets with _SE.Density_\(>\)=0.4, _SE.Density_\(>\)=0.5, and _SE.Density_\(>\)=0.6. Note that the average _SE.Density_ of the entire NYT50 dataset is 0.57, and RHGNNSumExt performs worse than HGNNSum on the entire NYT50 dataset. For NYT50, the performance of RHGNNSumExt also improves as the _SE.Density_ of the sub-datasets increases.
### Ablation studies

**Effects of entity-level entity embeddings**. The entity embeddings partly represent external knowledge from the knowledge base YAGO2 in two ways: 1) entity-level entity embeddings are initialized on YAGO2 using RDF2Vec, and 2) the embeddings are trained with RHGNNSumExt by supervising with entity-entity edges constructed based on YAGO2.
To see the effect of adding entity-level entity embeddings, the embeddings are removed from the model, and the results are shown in the second lines of Table 6 and Table 7. The scores of RHGNNSumExt are higher than those of the model without entity embeddings, indicating that adding entity-level entity embeddings improves the proposed summarization model.
To see the effect of supervising the embeddings with entity-entity edges, this sub-objective is removed from the model, and the results are shown in the third lines of Table 6 and Table 7. The model without entity-entity edge supervision performs worse than the model with it. Entity-entity edge supervision conserves entity-entity relatedness in the entity embeddings and improves the proposed model.
**Effects of edge weights and edge types in R-HGNN**. There are three types of edges with different weights in the sentence-entity graph. R-HGNN utilizes both edge weights and edge types in the propagation process for calculations of node encodings, combining the advantages of both the traditional GNN and the traditional R-GNN.

| Method | Rouge-1 | Rouge-2 | Rouge-L |
|---|---|---|---|
| RHGNNSumExt | 42.39 | 19.45 | 38.85 |
| w/o Entity-Level Entity Embeddings | 42.28 | 19.35 | 38.69 |
| w/o Entity Embedding Supervising | 42.26 | 19.32 | 38.66 |
| w/o Edge Weights | 42.33 | 19.39 | 38.76 |
| w/o Edge Types | 42.25 | 19.24 | 38.62 |
| with Agg.Mean | 42.30 | 19.36 | 38.73 |
| w/o EE Edges & SS Edges | 42.22 | 19.32 | 38.67 |
| RHGNNSumAbs | 41.63 | 18.45 | 38.00 |
| w/o RL Connector | 41.55 | 18.36 | 37.91 |

Table 6: Ablation studies on CNN/DM.

| Method | Rouge-1 | Rouge-2 | Rouge-L |
|---|---|---|---|
| RHGNNSumExt | 45.80 | 25.89 | 40.26 |
| w/o Entity-Level Entity Embeddings | 45.64 | 25.66 | 40.12 |
| w/o Entity Embedding Supervising | 45.61 | 25.55 | 40.02 |
| w/o Edge Weights | 45.69 | 25.79 | 40.22 |
| w/o Edge Types | 45.62 | 25.65 | 40.11 |
| with Agg.Mean | 45.76 | 25.68 | 40.11 |
| w/o EE Edges & SS Edges | 45.54 | 25.57 | 39.99 |
| RHGNNSumAbs | 44.30 | 24.33 | 37.03 |
| w/o RL Connector | 44.24 | 24.28 | 36.94 |

Table 7: Ablation studies on NYT50.
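A single propagation step of such a weight- and type-aware scheme can be sketched as below. This is a simplified NumPy illustration of the idea; the paper's exact update rule and parameterization may differ, and all names here are ours:

```python
import numpy as np

def r_hgnn_step(H, A_w, W_rel, W_self):
    """One simplified R-HGNN propagation step over typed, weighted edges.

    H      : (N, d) current node encodings (sentence and entity nodes)
    A_w    : dict mapping edge type ("SE", "SS", "EE") -> (N, N) weighted adjacency
    W_rel  : dict mapping edge type -> (d, d) relation-specific transform
    W_self : (d, d) self-connection transform
    """
    out = H @ W_self
    for etype, A in A_w.items():
        deg = A.sum(axis=1, keepdims=True) + 1e-9   # weighted degree per node
        msg = (A / deg) @ H                         # edge-weight-aware aggregation
        out = out + msg @ W_rel[etype]              # edge-type-specific transform
    return np.tanh(out)
```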
To see the effect of edge weights, the weights are removed from the graphs and the traditional R-GNN is applied instead of R-HGNN. As shown in the fourth lines of Table 6 and Table 7, R-GNN without edge weights does not perform as well as R-HGNN with edge weights, indicating that edge weights in R-HGNN improve the performance.
To see the effect of edge types, edge types are removed from the graphs and the traditional GNN is applied instead of R-HGNN. As shown in the fifth lines of Table 6 and Table 7, the GNN that does not consider edge types does not perform as well as R-HGNN, indicating that edge types in R-HGNN improve the performance.
**Effects of aggregating algorithms**. To see the effect of the aggregating algorithm on the proposed model, the mean aggregating algorithm of HinSAGE [41] is used instead of the aggregating algorithm of R-HGNN, neglecting edge weights in the propagation process. As shown in the sixth lines of Table 6 and Table 7, RHGNNSumExt with Agg.Mean does not perform as well as RHGNNSumExt, indicating that the original aggregating algorithm, which can make use of edge weights, is more suitable than mean aggregation.
**Effects of EE edges and SS edges**. One difference between the sentence-entity graph constructed in this study and the sentence-word graph constructed in [2] is that entity-entity edges and sentence-sentence edges are added to the sentence-entity graph besides the sentence-entity edges, while the sentence-word graph only has sentence-word edges. The seventh lines of Table 6 and Table 7 show the performance of the proposed model without EE edges and SS edges. The model without the two types of edges performs worse than the model with them, and even worse than the model without entity embeddings. EE edges and SS edges are therefore necessary for the sentence-entity graph.
**Effects of the RL connector**. The last two lines of Table 6 and Table 7 show the Rouge scores of the abstractive models with and without the RL connector, respectively. RHGNNSumAbs outperforms RHGNNSumAbs without the RL connector. Note that RHGNNSumAbs without the RL connector directly applies the generator to the sentences selected by the multi-task selector RHGNNSumExt. The performance of the selector greatly affects the performance of the generator. The RL connector fine-tunes the selector to better fit the generator, and improves the generative summaries.
## Conclusions
This paper proposes an entity-guided text summarization framework that connects Knowledge Graphs and Graph Neural Networks to make use of knowledge beyond the text and cross-sentence relations within the text for creating faithful summaries. The key components of the proposed summarization framework are the method of leveraging entities to connect GNN and KG, and the relational heterogeneous GNN for summarization. The GNN is connected with the KG by constructing the sentence-entity graph, and by initializing and training entity embeddings based on the KG. Concretely, external knowledge in the KG is utilized in the GNN through building entity-entity edges based on Wikipedia webpages, initializing entity node encodings with YAGO2, and supervising entity node encodings with entity-entity edges. The relational heterogeneous GNN calculates node encodings of the sentence-entity graph with weighted multi-type edges by combining the advantages of both the traditional GNN and the traditional R-GNN to make use of edge weights and edge types in the propagation process. Experiments carried out on CNN/DM show that the proposed extractive summarization method outperforms all reported baselines without pre-trained language models. Experiments carried out on NYT50 show that the proposed method outperforms most reported baselines. Experiments on sub-datasets of CNN/DM and NYT50 show that the density of sentence-entity edges in the constructed sentence-entity graphs greatly influences the performance of the proposed model: the greater the density, the better the performance. Ablation studies show the effectiveness of the proposed method, and that R-HGNN outperforms the traditional GNN and the traditional R-GNN in making use of both edge weights and edge types for summarization. The results provide a promising step for other NLP tasks such as multi-modal summarization, question answering, and news image captioning to make use of cross-sentence relations in documents and external knowledge in KGs.
**Acknowledgements** This research was sponsored by the National Natural Science Foundation of China (No.61806101).
**Availability of data and materials** The datasets and codes are available at [https://github.com/jingqiangchen/kbsumm](https://github.com/jingqiangchen/kbsumm).
|
2302.01374 | Neural Network Architecture for Database Augmentation Using Shared
Features | The popularity of learning from data with machine learning and neural
networks has led to the creation of many new datasets for almost every problem
domain. However, even within a single domain, these datasets are often
collected with disparate features, sampled from different sub-populations, and
recorded at different time points. Even with the plethora of individual
datasets, large data science projects can be difficult as it is often not
trivial to merge these smaller datasets. Inherent challenges in some domains
such as medicine also make it very difficult to create large single source
datasets or multi-source datasets with identical features. Instead of trying to
merge these non-matching datasets directly, we propose a neural network
architecture that can provide data augmentation using features common between
these datasets. Our results show that this style of data augmentation can work
for both image and tabular data. | William C. Sleeman IV, Rishabh Kapoor, Preetam Ghosh | 2023-02-02T19:17:06Z | http://arxiv.org/abs/2302.01374v1 | # Neural Network Architecture for Database Augmentation Using Shared Features
###### Abstract
The popularity of learning from data with machine learning and neural networks has led to the creation of many new datasets for almost every problem domain. However, even within a single domain, these datasets are often collected with disparate features, sampled from different sub-populations, and recorded at different time points. Even with the plethora of individual datasets, large data science projects can be difficult as it is often not trivial to merge these smaller datasets. Inherent challenges in some domains such as medicine also make it very difficult to create large single source datasets or multi-source datasets with identical features. Instead of trying to merge these non-matching datasets directly, we propose a neural network architecture that can provide data augmentation using features common between these datasets. Our results show that this style of data augmentation can work for both image and tabular data.
keywords: neural networks, machine learning, databases, autoencoders, classification
Footnote †: journal: Journal of Biomedical Informatics
## 1 Introduction
Data analysis tools from machine learning and artificial intelligence are now used to solve problems within almost every industry and problem domain. Although more data is continuously being collected to support those advanced methods, real world data science problems are often challenged with a limited number of examples or missing features. For example, collecting medical data is costly and time consuming as it requires patient consent, data security for protecting privacy, and the need for subject matter experts. Even if these challenges are addressed, collecting large patient cohorts still may not be possible, as inclusion for a given
study may be restrictive or the medical center may not treat enough such patients each year. Even outside of medicine, many of the available datasets are still relatively small. Within the popular UC Irvine Machine Learning Repository (UCI), half of the datasets contain fewer than 1,600 examples [1].
To make the most of modern machine learning and deep learning algorithms, quality data is needed for training, and more data typically produces better results. Very large, data-centric companies have the capability to construct massive datasets, but this does not scale to many problem domains. Collecting, cleaning, and storing large datasets is expensive, and smaller companies or medical studies focusing on a single diagnosis simply may have no way to generate large amounts of data. Aggregating across multiple data sources is one solution to address small datasets but becomes non-trivial when features do not match, some of the data is missing, or different feature encodings are used. However, it is likely that some features are present across multiple datasets if they belong to the same problem domain. Medical data often includes demographic information like age, sex, race, or diagnosis, and other domains may use industry standards resulting in feature overlap.
In addition to the common features, each dataset likely has other unique features that make direct aggregation difficult. However, the shared context between these datasets may provide enough information to transfer the unique context between these non-matching datasets. Our approach uses autoencoder networks to convert common features from a given dataset (**A**) to its full complement of features. The same common features from a different dataset (**B**) can then be passed through **A**'s network to generate synthetic features for dataset **A**. The unique features from the new synthetic examples can then be added to the existing features for database **B**, creating a new augmented dataset.
The primary motivation for this work is to address the challenges faced when trying to learn from relatively small medical datasets. As previously mentioned, it is often impracticable to create large datasets focusing on specific medical questions. There are many high quality medical datasets and we hypothesized that predictive performance would be improved by including information from other datasets based on the common features. In Section 5.3, we discuss how this data augmentation method can be applied to real world medical datasets.
In summary, we propose an autoencoder based approach for augmenting datasets using common features. Our experiments include both CNN networks for images and fully connected networks for tabular data, providing insight on how feature sharing impacts performance. We also show that this method can improve classifier performance, specifically with the use case of cancer datasets.
This work addresses the challenge of combining knowledge across disparate datasets and our main contributions are the following:
* **Proposed architecture:** We propose an autoencoder based solution for augmenting datasets using common features. This provides a method to create synthetic features that are completely missing from a dataset, which adds more context.
* **Study of the impact of feature sharing:** This initial experimental study is performed on both image and tabular based datasets and the impact of data sharing is investigated.
* **Use case with clinical datasets:** A real world example of cancer data is used to show the proposed method's performance.
* **Software:** We provide a publicly available software package using Python with Keras on GitHub for future research.
## 2 Related Works
One challenge with real world machine learning and deep learning projects is the limited size and quality of training data. Some domains like medicine are notorious for the difficulty of building large datasets with concerns of privacy, data collection cost, and regulatory restrictions. Even when enough data is present, issues like class imbalance can reduce model performance.
Data augmentation is often used to increase the size of a dataset or improve data quality. One of the most common ways to augment image data is to apply shifts, rotations, color adjustment, or zooming. This keeps the main concepts of the image but introduces enough variation to help with overfitting while providing an almost unlimited number of permutations. More recent advances in image augmentation include patch cutout, blurring, and image mixing where two training examples are blended using various methods [2; 3]. The XtremeAugment method was also developed to generate new examples by adding one or more training objects with adjusted color or viewpoint, all on existing or new backgrounds [4].
Instead of perturbing existing data, completely new synthetic examples can also be generated. Techniques like random oversampling, Synthetic Minority Oversampling Technique (SMOTE) [5], including its many derived methods, can be used to add more examples to the training data. Although those algorithms were originally designed for traditional machine learning problems, DeepSMOTE
[6] was later created for deep learning problems. Unlike traditional data augmentation and random oversampling, some of the SMOTE-based methods can create examples in specific portions of the feature space. This allows for placing more focus on areas of interest like decision boundaries, safe regions, or regions specified by clustering algorithms [7; 8; 9]. Undersampling can also be used for class balancing [10; 11] by reducing the size of the majority class. Both over- and undersampling can be done within the same algorithm, as shown by the Combined Synthetic Oversampling and Undersampling Technique for Imbalanced Data Classification (CSMOUTE) method [12].
Another solution for creating larger datasets is to combine multiple smaller dataset that include the same kind of information. However, these datasets may not share the same features or have the same data ranges. Several works proposed solutions to database merging including a method that combined multiple image datasets using only their principal component analysis (PCA) space and did not require the original training images to be kept [13]. PCA was used for feature reduction which could aid in combining datasets with many non-shared features [14]. Another work showed that singular value decomposition (SVD) could be used to combine partially overlapping microarray gene expression datasets [15].
The occurrence of missing feature values is a common challenge as one survey suggested that over half of the UCI datasets had a missing feature rate of at least 30% [16]. Imputation is often used to replace missing values which can be based on methods like simple statistics, regression, hot-deck method, clustering, and other machine learning algorithms [17]. Neural networks have been used for imputation with both traditional autoencoders [18; 19] and GANs [20].
## 3 Proposed Architecture
Applying traditional data augmentation methods may yield sub-optimal results when faced with disparate datasets. Model generalization and class imbalance can be partially addressed with synthetic examples generated from a single source, but this excludes any novel information present in other relevant datasets. Feature reduction methods may remove critical relationships within smaller classes or interesting regions of the feature space. To address some of these limitations, we propose a neural network based method that generates unseen features from common ones, thereby improving the predictive power of learning algorithms.
The core component is an autoencoder network and we investigated the use of both a traditional autoencoder (AE) and a variational autoencoder (VAE) architecture. Figure 1 shows the general architecture used in the experiments found in
Section 4. Input data is passed through the encoder layers and compressed into the latent space. The primary difference between the two architectures is the step right before the latent-space layer, depicted in gray for the AE and blue for the VAE. While the AE uses a layer like the prior encoder layers, the VAE splits into mean (\(\mu\)) and variance (\(\sigma^{2}\)) sub-layers. A stochastic sampling method is then used to produce the layer output when generating the latent values. This also means that the same input data may result in different outputs after the model is frozen, unlike the traditional AE.
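The VAE-specific branch can be sketched in Keras as follows; this is a minimal illustration of the reparameterization step, and the layer dimensions are arbitrary rather than those used in the experiments:

```python
import tensorflow as tf
from tensorflow import keras

class Sampling(keras.layers.Layer):
    """Reparameterization trick: draw z ~ N(mu, sigma^2) from (mu, log_var)."""
    def call(self, inputs):
        mu, log_var = inputs
        eps = tf.random.normal(tf.shape(mu))
        return mu + tf.exp(0.5 * log_var) * eps

# The VAE branch replaces the AE's single pre-latent layer with mu / log-var heads:
enc = keras.Input(shape=(128,))        # encoder output (dimension is arbitrary)
mu = keras.layers.Dense(32)(enc)
log_var = keras.layers.Dense(32)(enc)
z = Sampling()([mu, log_var])          # stochastic latent code
```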
If two datasets, referred to as \(A\) and \(B\), represent different aspects of the same concept, the unique features from each dataset may be correlated. This architecture uses the common features to generate synthetic replacements for feature values that are only present in the other dataset. By combining the existing and synthetic features, the new examples simulate the scenario where all features came from a single complete data source. In the following description of the processing steps, we are augmenting dataset \(A\) with the unique features in dataset \(B\). Below are the detailed steps of this process:
Figure 1: An example autoencoder network, broken down into the input, encoder, decoder, and output sections. This architecture varies between the methods used in the experimental study, with AE shown in gray and VAE in blue.

**Identify Common Features:** First, the features common between the two datasets are identified. Min-max normalization is performed on the common features across both datasets to ensure proper alignment, and the same process is used for the dataset-specific columns, resulting in a range of [0, 1]. Next, new sub-datasets named _CA_ and _CB_ are extracted from the pre-processed \(A\) and \(B\), representing the common features of the examples present in each dataset.
**Fit the Autoencoder Network with _B_:** An autoencoder network is trained to map _CB_ data to the full complement of features found in \(B\).
**Predict with \(B\) Data:** The autoencoder is frozen and _CA_ is passed through the _CB_-_B_ network to predict the \(B\) feature values.
**Append Generated \(B\) Features:** Database \(A\) is now augmented with the synthetic \(B\) features, adding information from the similar, but independent, database \(B\).
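A condensed sketch of these four steps is given below, assuming \(A\) and \(B\) are pandas DataFrames that have already been min-max normalized. The network shape, training settings, and helper names are illustrative, not taken from the released code:

```python
import numpy as np
import pandas as pd
from tensorflow import keras

def build_autoencoder(in_dim, out_dim):
    # Minimal dense AE mapping common features -> all features of B
    return keras.Sequential([
        keras.Input(shape=(in_dim,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(16, activation="relu"),   # latent space
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(out_dim, activation="sigmoid"),  # min-max range [0, 1]
    ])

def augment(A: pd.DataFrame, B: pd.DataFrame) -> pd.DataFrame:
    common = [c for c in A.columns if c in B.columns]        # step 1
    model = build_autoencoder(len(common), B.shape[1])
    model.compile(optimizer="adam", loss="mse")
    model.fit(B[common].values, B.values, epochs=50, verbose=0)  # step 2: CB -> B
    synth_B = model.predict(A[common].values)                # step 3
    unique_B = [c for c in B.columns if c not in common]
    idx = [B.columns.get_loc(c) for c in unique_B]
    return pd.concat([A.reset_index(drop=True),              # step 4
                      pd.DataFrame(synth_B[:, idx], columns=unique_B)], axis=1)
```

Calling the same function with the roles of \(A\) and \(B\) swapped augments \(B\) with synthetic \(A\) features.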
To better visualize this process, Figure 2 shows how the MNIST image data was split by image columns to simulate the two-dataset problem considered above. Dataset \(A\) is given the left half and dataset \(B\) is given the right half of the MNIST image columns.
Common features between these new datasets are simulated by reintroducing some columns from the opposing dataset. To make the experiments more consistent, the resulting training images are padded with zero columns so the final dimensions are the same as in the original dataset. The example in Figure 2 shows this process with six common columns, with the first row showing dataset \(A\) with the 14 left-hand columns, the six common columns, and the result of combining the left-hand and common columns with padding. The same process is performed for the right-hand dataset \(B\), with a comparison to the original image in the middle.
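The splitting scheme can be sketched as follows. This reflects one plausible reading of Figure 2, with the band of \(n\) common columns centered on the split:

```python
import numpy as np

def split_with_common(images, n_common):
    """Split images (n, H, W) into left (A) and right (B) views sharing a band of
    n_common middle columns, zero-padded back to the original width."""
    w = images.shape[2]               # 28 for MNIST, 32 for CIFAR10
    lo = w // 2 - n_common // 2       # first common column
    hi = w // 2 + n_common // 2       # one past the last common column
    A = np.zeros_like(images)
    B = np.zeros_like(images)
    A[:, :, :hi] = images[:, :, :hi]  # left half plus the common band
    B[:, :, lo:] = images[:, :, lo:]  # common band plus the right half
    return A, B
```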
Figure 3 shows the complete diagram of the architecture. Using the split image data described above, the common columns are used to train the _CB_-_B_ network. Common columns from the test \(A\) dataset are encoded using the _CB_-_B_ network and decoded into synthetic \(B\) features. The original \(A\) data is then combined with the synthetic features to create the augmented image. While the resulting image is not perfect, it gets closer to replicating the original image.
## 4 Experimental Study
In this section, we apply the data augmentation process to both image and tabular data using traditional and variational autoencoder networks. Source code used for these experiments is provided at [https://github.com/fsleeman/database-augmentation-network](https://github.com/fsleeman/database-augmentation-network), written in Python with Keras.
### MNIST and CIFAR10
In these first experiments, we test the augmentation process using the MNIST [21] and CIFAR10 [22] datasets. The image features were split using the same process as shown in Figure 2 to simulate datasets with unavailable data. For the purpose of these experiments, each image column was treated as if it were a single feature. The middle columns that overlap with the left and right sides of the original images were marked as common features (shown in Figure 2 as CA and CB). An even number of common columns (\(n\)) was used to provide a symmetry that allowed the same experiments to be performed on both sides of the images. We performed experiments between 2 and _total columns_ - 2 common columns for both datasets, where MNIST had 28 total columns and CIFAR10 had 32. Future experiments can investigate other combinations of feature sharing, but the small size of MNIST and CIFAR10 images limits the number of useful combinations.
Figure 2: Example of how images could be split to simulate two different datasets. Right hand (\(A\)) and its associated middle common columns (\(CA\)) go to the masked database \(A\) image and the left hand (\(B\)) with its common columns (\(CB\)) go to database \(B\).
Figure 3: The proposed data augmentation network. Starting at the bottom, common columns of the masked training example from dataset \(B\) are used to train an autoencoder network. Test data from dataset \(A\) is processed using the \(CB\)-\(B\) network to get synthetic B features. These new features are added to the \(A\) masked test data to generate the augmented image.
The impact of the data augmentation process was evaluated using a simple CNN based classifier. This model used two CNN layers with max pooling, a flatten layer with dropout, a dense layer, one more dropout layer, and a final softmax layer. Both MNIST and CIFAR10 have ten classes, so argmax was used to choose the predicted class. The only difference between how these datasets were processed was that the MNIST networks used one color channel, as MNIST contains grayscale images, while CIFAR10 used three channels for its RGB color images. Twenty percent of the data was held out for testing and the remaining data was used for training and validation. To better represent two completely independent datasets, the training data was split so that database \(A\) got the first half of the examples (by index) and database \(B\) got the second half. This meant that there was no information overlap between those datasets, including the common features. Training was then performed using 5-fold cross validation, and the held-out test data was used to produce the results in Sections 5.1 and 5.2.
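A minimal Keras version of such a classifier is sketched below; the filter counts, dense width, and dropout rates are our assumptions, as the text does not list them:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_classifier(input_shape):
    # input_shape = (28, 28, 1) for MNIST, (32, 32, 3) for CIFAR10
    return keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.25),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(10, activation="softmax"),  # ten classes; argmax picks the label
    ])
```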
### Lung Cancer Case Study
One of the major problems faced with clinically oriented data science projects is the lack of data. While there are many high quality datasets in the field of medicine, they often do not have the same features or formatting that would allow them to be combined directly. One motivation of this work was to devise an approach that would improve model performance with information from independent medical datasets. In these experiments, we apply the proposed data augmentation method to two disjoint lung cancer datasets.
The National Institutes of Health (NIH) has provided a number of cancer related datasets including the Genomic Data Commons (GDC) [23] and the Surveillance, Epidemiology, and End Results (SEER) Program [24]. Started in 2016, the GDC is a harmonized cancer dataset which combines data from several modalities such as gene expression, mutations, pathology images, prescribed drugs, and clinical outcomes across over eighty six thousand patients. SEER has been collecting cancer data since 1973 from cancer registries across the United States which currently has over fifteen million reported cases.
We have chosen to limit the data in our experiments to lung cancer because it is one of the most common diagnoses and has a wide range of survival outcomes. Unlike the SEER dataset which is in a tabular format, GDC is mostly made up of other modalities which further reduced the amount of data used. After filtering for lung cancer, samples with missing values were then removed. There are many methods for addressing missing features but these examples were removed because we wanted to experiment on the cleanest available data. Other approaches
in handling missing features should be further investigated.
The data used in the following experiments resulted in 522 examples for GDC and 674,008 for SEER after the filtering and data cleaning. Seven common features between the two cleaned datasets were then identified as shown in Table 1: sex, year of diagnosis, age at diagnosis, race, International Classification of Diseases version 10 (ICD-10) code, histology, and laterality. Categorical features were split into multiple columns using one-hot encoding and min-max scaling was performed across both datasets for normalization. After one-hot encoding, there were a total of 27 individual common features.
The unique features of the two datasets covered other relevant lung cancer information that likely would add value to the learning process. GDC included details such as smoking history, pathological staging, and ethnicity while SEER included group staging, income, rural vs. urban locale, surgery type, the number of tumors, among others. These unique features were also min-max normalized, resulting in a total of 68 features for GDC and 78 for SEER.
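The preprocessing described above might look as follows; the column names mirror Table 1, and the exact encoding details are our assumptions:

```python
import pandas as pd

CATEGORICAL = ["Sex", "Race", "ICD-10 Code", "Histology", "Laterality"]

def preprocess(gdc: pd.DataFrame, seer: pd.DataFrame):
    both = pd.concat([gdc, seer], keys=["gdc", "seer"])
    both = pd.get_dummies(both, columns=CATEGORICAL).astype(float)  # one-hot encoding
    for col in both.columns:                  # min-max scaling across BOTH datasets
        lo, hi = both[col].min(), both[col].max()
        both[col] = (both[col] - lo) / (hi - lo + 1e-9)
    return both.loc["gdc"], both.loc["seer"]
```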
## 5 Discussion of Results
### MNIST
The results in Table 2 show the \(F_{1}\) classification performance for the images with missing data and with both basic AE and VAE augmentation. Twenty-six total experiments were performed across thirteen common-column combinations on both sides of the images, with one of the augmentation methods providing twenty of the best results. The basic AE usually did better with fewer common columns compared with the VAE, which may suggest that the more complicated VAE network benefits more from additional data. While the overall improvements from augmentation do not seem very significant, they still represent a large portion of the remaining headroom, as most of the \(F_{1}\) scores are already above 99 percent.

| Feature | Type | Possible Values |
|---|---|---|
| Sex | Categorical | Female, Male |
| Year of diagnosis | Numerical | 1957 to 2017 |
| Age at diagnosis | Numerical | 0 to 88 |
| Race | Categorical | American Indian/Alaska Native, Asian or Pacific Islander, Black, White, Unknown |
| ICD-10 Code | Categorical | Main bronchus, Upper lobe, Middle lobe, Lower lobe, Overlapping lesion, Lung (NOS) |
| Histology | Categorical | Adenocarcinoma; Bronchiolo-alveolar carcinoma, non-mucinous; Invasive mucinous adenocarcinoma; Adenocarcinoma with mixed subtypes; Papillary adenocarcinoma (NOS); Clear cell adenocarcinoma (NOS); Mucinous adenocarcinoma; Signet ring cell carcinoma; Acinar cell carcinoma |
| Laterality | Categorical | Left, Right, Other |

Table 1: A list of features common between the GDC and SEER datasets with their data types and valid value ranges.
Figure 4 shows a line graph comparing the performance on MNIST images with missing columns against the AE and VAE based augmentation methods. As expected, including more common columns makes the problem easier both for the missing-data case and with augmentation, but performance flattens out once enough data is available. Because of how the handwritten digits were written, there is almost no useful information at the far left and right sides of the original images, and most of the information is in the middle. The first few common columns are in this critical section, which explains why adding just a few columns makes a significant impact on the result. This is most apparent in the AE plots, where most of the gains were found within the first eight common columns.
| Common Columns | A Only | A + B* AE | A + B* VAE | B Only | A* + B AE | A* + B VAE |
|---|---|---|---|---|---|---|
| 2 | 97.16 | **97.17** | 97.08 | **96.50** | 96.42 | 96.30 |
| 4 | **97.70** | 97.66 | 97.62 | **97.48** | 97.42 | 97.31 |
| 6 | 98.02 | **98.12** | 98.04 | 98.10 | **98.18** | 98.04 |
| 8 | 98.42 | **98.46** | 98.36 | 98.57 | **98.73** | 98.58 |
| 10 | 98.61 | **98.70** | 98.64 | 98.87 | **98.93** | 98.92 |
| 12 | 98.86 | 98.86 | **98.93** | 99.07 | **99.11** | 98.99 |
| 14 | 98.96 | **99.02** | 98.97 | 99.09 | **99.14** | 99.08 |
| 16 | **99.11** | 99.08 | 99.09 | 99.15 | 99.15 | **99.17** |
| 18 | **99.11** | 99.08 | 99.09 | 99.11 | 99.14 | **99.17** |
| 20 | 99.10 | 99.05 | **99.15** | 99.11 | 99.14 | **99.20** |
| 22 | **99.11** | 99.03 | 99.10 | 99.13 | 99.15 | **99.16** |
| 24 | 99.07 | 99.05 | **99.13** | 99.15 | 99.13 | **99.17** |
| 26 | 99.08 | **99.10** | 99.08 | 99.09 | 99.10 | **99.16** |

Table 2: The \(F_{1}\) scores for the MNIST classifier experiments with the left- and right-hand data as the primary datasets. The \(*\) symbol marks the synthetic features that were generated with either the AE or VAE based networks.

Figure 5 shows examples of the generated digits for the \(A\) and \(B\) sides of the images. In most of these cases, the AE and VAE algorithms did a reasonably good job replacing the missing image columns. The largest discrepancy between the methods was on the second-to-last image column for the number five. This was a more difficult example, as the digit was poorly written and the bottom semicircle was completely closed. The AE got much closer when attempting to complete the right-hand side of the image, while the VAE may have mistaken the image for a partially completed eight. A similar problem occurred when completing the left-hand side, as it could be read as a six. While the VAE did appear to make some of the clearest augmentations, its mistakes were more pronounced; they might be reduced with larger training sets.
### CIFAR10
Figure 4: Plot showing the comparison of MNIST images with missing columns against the AE and VAE based augmentation methods.

Like with MNIST, Table 3 shows that the data augmentation process improved \(F_{1}\) scores in 23 of the 30 cases across the AE and VAE experiments. The impact of augmentation was more pronounced for this dataset, which may be attributed to the increased complexity of the data. Many of the experiments showed improvements in the range of 0.5 to 1.0%. The AE method gave most of the top results, although the VAE outperformed the non-augmented data in the majority of experiments. Augmentation tended to work best when there was enough data to learn from but some critical information was still missing; this range was around 8-24 common columns for CIFAR10.
The value of data augmentation for CIFAR10 is more evident, as shown in Figure 6. \(F_{1}\) scores increase as more data is included but do not plateau as with MNIST, since there is useful data throughout the images. Both augmentation methods improve scores in most cases, and the relative performance increase persists as more columns are added.
Figure 5: Comparison of augmented images for both the A and B side of the MNIST dataset. In the augmentation results, A* and B* refer to their corresponding synthetic representations. These images were generated using eight common columns.

Figure 7 shows examples of the generated images for the \(A\) and \(B\) sides of the dataset. Although augmentation improved \(F_{1}\) scores, the quality of the generated image data was much worse than for MNIST. The CIFAR10 dataset is not very large compared to recent deep learning image datasets, and its size was further reduced to create the non-overlapping \(A\) and \(B\) datasets. While there may not be enough training data to produce high-quality image data, it was still enough to help the classifier. In some cases, such as the boats depicted in columns two and three of Figure 7, the new image data does look like a very blurry version of the true image data. However, the frog images in columns 5, 6, and 8 provide almost no information, which may be attributed to their diverse coloring, backgrounds, and orientations. Boats, on the other hand, tend to look more similar, as they often share common colors and backgrounds, and may therefore require fewer training examples.
### GDC and SEER
As mentioned in Section 4.2, these experiments used 522 and 647k examples for the GDC and SEER datasets, respectively. Unlike the prior image based experiments, the number of common columns was naturally given, as the two datasets were already separate. Because the GDC dataset was much smaller, the SEER dataset was randomly undersampled to evaluate the impact that dataset size has on the augmentation process. Classification was performed to predict whether a patient would survive 24 months or longer after lung cancer treatment.
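A small sketch of the label construction and the SEER undersampling follows; the survival field and the seed are illustrative:

```python
import numpy as np

def make_labels(survival_months):
    # Binary target: survived 24 months or longer after treatment.
    return (np.asarray(survival_months) >= 24).astype(int)

def undersample(X, y, n, seed=0):
    # Randomly draw n SEER examples (250 ... 647k in Table 4); X, y are numpy arrays.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n, replace=False)
    return X[idx], y[idx]
```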
| Common Columns | A Only | A + B* AE | A + B* VAE | B Only | A* + B AE | A* + B VAE |
|---|---|---|---|---|---|---|
| 2 | **64.50** | 64.38 | 64.13 | **64.15** | 64.07 | 63.67 |
| 4 | 65.95 | 65.52 | **65.96** | **65.37** | 65.15 | 64.83 |
| 6 | 66.14 | 66.01 | **66.33** | 65.30 | **65.83** | 65.70 |
| 8 | 66.80 | 66.99 | **67.48** | 66.02 | **66.75** | 66.43 |
| 10 | 67.32 | 67.01 | **67.80** | 66.47 | **66.89** | 66.68 |
| 12 | 67.68 | **68.07** | 67.54 | 67.19 | **67.51** | 67.36 |
| 14 | 68.07 | **68.74** | 68.14 | 67.23 | **67.82** | 67.69 |
| 16 | **68.94** | 68.88 | 68.62 | 67.98 | 68.46 | **68.80** |
| 18 | 68.58 | 68.95 | **69.10** | **68.63** | 68.60 | 68.31 |
| 20 | 68.69 | **69.26** | 69.13 | 68.59 | **69.02** | 68.58 |
| 22 | 69.29 | **69.50** | 69.18 | 68.96 | **69.44** | 69.04 |
| 24 | 69.39 | **70.03** | 69.92 | 69.09 | **70.11** | 69.38 |
| 26 | 69.80 | 69.49 | **70.38** | 69.66 | 69.51 | **69.77** |
| 28 | 69.81 | **70.14** | 70.10 | **69.99** | 69.66 | 69.92 |
| 30 | 70.12 | 70.32 | **70.38** | **69.90** | 69.77 | 69.79 |

Table 3: The \(F_{1}\) scores for the CIFAR10 classifier experiments. The \(*\) symbol marks the synthetic features that were generated with either the AE or VAE based networks.

Table 4 shows the \(F_{1}\) scores for the experiments performed with different levels of SEER undersampling. Although using a high percentage of the SEER dataset did not help the GDC classification, using 500 to 10k SEER examples did. This made the two datasets more balanced; using many more SEER examples in the training process could make the latent space used for decoding too cluttered to generate useful features. When using 10k SEER examples for training, the AE network increased the \(F_{1}\) score by almost 2.3%.
When augmenting the SEER dataset, the VAE outperformed the AE method in all cases and improved the overall \(F_{1}\) scores six out of eight times compared to just using the original data. As expected, including more of the original SEER data improved the non-augmented classification results. However, augmentation started to consistently improve results when at least 10k SEER examples were used.
These results show that cross-database feature augmentation can improve classifier performance, especially when the larger dataset is the one being augmented. One potential benefit of this kind of augmentation is that good results may be achievable without the entire dataset. This is important for cases where datasets are very large and slow to train, or when it is difficult to acquire a large training dataset, which is a common challenge with medical data. Augmenting only 100k SEER examples with the small GDC dataset provided better results than the entire SEER dataset without augmentation.
training dataset, which is a common challenge with medical data. Augmenting only 100k SEER examples with the small GDC dataset provided better results than the entire SEER dataset without augmentation.
The smaller GDC dataset got less benefit from augmentation but did see some improvement with smaller selections of SEER examples. Additional tuning of the autoencoding network and feature engineering may result in a more significant impact for even smaller datasets.
### Future Work
In addition to the work presented in this paper, there are a number of other topics that should be further investigated. The data cleaning process performed for the GDC and SEER datasets removed any examples with many missing features, and entire features with a significant number of missing values, to ensure the cleanest possible data. However, some of this removed data could be used if a more permissive imputation process were applied, such as traditional statistics-based replacement or imputation within the autoencoder process itself.
Figure 7: Comparison of augmented images for both the A and B side of the CIFAR10 dataset. In the augmentation results, A* and B* refer to their corresponding synthetic representations. These images were generated using eight common columns.

Class imbalance can negatively impact classifier performance, biasing models towards the majority class; this also affects deep learning problems [25, 26, 27]. Although the MNIST and CIFAR10 datasets were mostly balanced, the GDC and SEER datasets had a much higher imbalance, with the two-year survival examples representing approximately 72% of the dataset. The proposed feature generation system could also be extended to create entirely new examples for class balancing, as previously done with a VAE based network [28]. The potential benefit of majority class undersampling, minority class oversampling, or some combination of both could be further pursued using both traditional machine learning and deep learning algorithms.
Instance level difficulty is another approach used to improve model performance but was not considered in this work. The data augmentation method may benefit from focusing more on specific types of examples. Most of the current work on instance level difficulty has focused on a single dataset, so there is not much research on how the difficulty of examples between multiple datasets would affect mutual encodings.
In this work, only one dataset was used to augment another but there may be cases where the augmentation process could benefit from multiple datasets. Each extra dataset could be used to generate different groups of features to augment the target dataset. There could also be features common between datasets \(A\) and \(B\) but different features common between \(B\) and \(C\).
The simpler AE and VAE networks used in the experiments could be replaced with deeper networks of the same type or more complicated architectures like stacked autoencoders and GANs. These neural networks often perform better on datasets with many more examples or features, such as high-resolution pictures, 3D medical images, or gene expression data. In addition to classification, other types of machine learning algorithms could benefit from the proposed technique, such as regression and clustering.

| SEER Count | GDC | GDC + SEER* AE | GDC + SEER* VAE | SEER | GDC* + SEER AE | GDC* + SEER VAE |
|---|---|---|---|---|---|---|
| 250 | **83.74** | 82.16 | 79.36 | **77.22** | 69.63 | 75.61 |
| 500 | 79.25 | **79.49** | 77.76 | 77.09 | 75.00 | **77.14** |
| 1k | 82.62 | **83.00** | 79.61 | **77.36** | 73.41 | 75.82 |
| 10k | 84.61 | **86.88** | 74.63 | 78.94 | 77.71 | **79.87** |
| 50k | 89.29 | **89.58** | 82.45 | 81.83 | 81.67 | **82.96** |
| 100k | **81.96** | 81.56 | 77.18 | 82.73 | 82.23 | **83.93** |
| 250k | **86.01** | 84.36 | 83.38 | 82.21 | 82.00 | **83.61** |
| 500k | **84.91** | 83.48 | 81.23 | 83.10 | 82.92 | **84.26** |
| 647k | **84.83** | 80.33 | 83.27 | 83.42 | 83.11 | **84.60** |

Table 4: \(F_{1}\) scores for the GDC and SEER experiments. Since the SEER dataset is much larger than GDC, experiments were run with different subsets, with the total number of included examples in the _SEER Count_ column. The two datasets were augmented in the same manner as MNIST and CIFAR10.
## 6 Conclusion
Within many problem domains, there are often many related datasets that do not have matching features, making the curation of large datasets difficult. In this work, we have proposed an autoencoder based solution for data augmentation using common features between these disparate datasets. As long as the datasets are contextually similar, the information unique to each dataset can be used to improve the performance of classifiers. We have shown that this approach can work with both image and tabular data as well as different types of autoencoder networks.
Figure 8: Plot showing the comparison of augmentation with the GDC and SEER datasets using the AE and VAE networks.

Although our experiments used relatively shallow autoencoders, they could be replaced with more complicated or problem-specific architectures such as GANs, stacked autoencoders, and deep pre-trained models. In addition to the MNIST and CIFAR10 examples, we showed that the data augmentation can work on real-world medical datasets. Even the small GDC dataset was able to provide a benefit for classifying the much larger SEER dataset. This method is especially useful for domains like medicine, which are often limited to small datasets collected as part of individual studies.
This initial work on data augmentation using common features has suggested several new areas of research. This approach could be extended to include the use of multiple datasets for augmentation, larger autoencoder style networks, and different styles of imputation. Utilizing data properties like class imbalance or instance level difficulty may provide further benefits.
|
2308.12435 | Characterising representation dynamics in recurrent neural networks for
object recognition | Recurrent neural networks (RNNs) have yielded promising results for both
recognizing objects in challenging conditions and modeling aspects of primate
vision. However, the representational dynamics of recurrent computations remain
poorly understood, especially in large-scale visual models. Here, we studied
such dynamics in RNNs trained for object classification on MiniEcoset, a novel
subset of ecoset. We report two main insights. First, upon inference,
representations continued to evolve after correct classification, suggesting a
lack of the notion of being ``done with classification''. Second, focusing on
``readout zones'' as a way to characterize the activation trajectories, we
observe that misclassified representations exhibit activation patterns with
lower L2 norm, and are positioned more peripherally in the readout zones. Such
arrangements help the misclassified representations move into the correct zones
as time progresses. Our findings generalize to networks with lateral and
top-down connections, and include both additive and multiplicative interactions
with the bottom-up sweep. The results therefore contribute to a general
understanding of RNN dynamics in naturalistic tasks. We hope that the analysis
framework will aid future investigations of other types of RNNs, including
understanding of representational dynamics in primate vision. | Sushrut Thorat, Adrien Doerig, Tim C. Kietzmann | 2023-08-23T21:36:35Z | http://arxiv.org/abs/2308.12435v2 | # Characterising representation dynamics in recurrent neural networks for object recognition
###### Abstract
**Recurrent neural networks (RNNs) have yielded promising results for both recognizing objects in challenging conditions and modeling aspects of primate vision. However, the representational dynamics of recurrent computations remain poorly understood, especially in large-scale visual models. Here, we studied such dynamics in RNNs trained for object classification on MiniEcoset, a novel subset of ecoset. We report two main insights. First, upon inference, representations continued to evolve after correct classification, suggesting a lack of the notion of being "done with classification". Second, focusing on "readout zones" as a way to characterize the activation trajectories, we observe that misclassified representations exhibit activation patterns with lower L2 norm, and are positioned more peripherally in the readout zones. Such arrangements help the misclassified representations move into the correct zones as time progresses. Our findings generalize to networks with lateral and top-down connections, and include both additive and multiplicative interactions with the bottom-up sweep. The results therefore contribute to a general understanding of RNN dynamics in naturalistic tasks. We hope that the analysis framework will aid future investigations of other types of RNNs, including understanding of representational dynamics in primate vision1.**
Footnote 1: This article is a revision of the 2023 Conference on Cognitive Computational Neuroscience (CCN) paper, in which we present a new analysis in the Appendix, and include suggestions made by the CCN reviewers.
**Keywords: recurrent neural networks, object recognition, neural representations, dynamics, naturalistic tasks, readout zones**
## 1 Introduction
Feedback connections are ubiquitous in brains (Felleman and Van Essen, 1991). The resulting recurrent computations are advantageous in challenging conditions such as recognizing objects in clutter (Wyatte, Jilk, & O'Reilly, 2014; Kreiman and Serre, 2020) and natural scenes (Spoerer, Kietzmann, Mehrer, Charest, and Kriegeskorte, 2020). Research into the representation dynamics underlying recurrent computations is nascent but accelerating (Mante, Sussillo, Shenoy, & Newsome, 2013; Zamir et al., 2017; Quax and van Gerven, 2018; Mastrogiuseppe and Ostojic, 2018; van Bergen and Kriegeskorte, 2020; Thorat, Aldegheri, and Kietzmann, 2021; Lindsay, Mrsic-Flogel, & Sahani, 2022; Driscoll, Shenoy, & Sussillo, 2022). Moving to a more naturalistic setting, this work investigates representations and their dynamics in a deep recurrent convolutional neural network (RNN), as they contribute to improving classification responses to natural images. While we provide novel insights into the temporal trajectories of the RNNs, the developed framework applies more broadly to both artificial and biological neural network dynamics, and hence contributes to the toolbox available to researchers interested in modelling vision with deep neural networks (Doerig et al., 2023).
## 2 Model system and dataset
In our RNN models2, lateral or local top-down connections are included (Fig. 1A). Such RNNs have been used as models of human neural dynamics and behavior (Kietzmann et al., 2019; Spoerer et al., 2020; Doerig et al., 2022). The lateral and top-down connections interacted with the bottom-up sweep through either additive or multiplicative interactions. The RNNs were unrolled for 10 timesteps and trained to classify the input images at each timestep (their readouts had no bias terms; see Appendix 6.1.1). The \(64\times 64\,\mathrm{px}\) RGB images were taken from MiniEcoset3, which is a novel subset of ecoset (Mehrer, Spoerer, Jones, Kriegeskorte, & Kietzmann, 2021) containing 100 object classes that follow a hierarchical object structure.
Footnote 2: The training and evaluation scripts can be found at: github.com/KietzmannLab/BLT-Pytorch-CCN23
Footnote 3: MiniEcoset can be found at: osf.io/msna2/
## 3 Analysis
We start our analyses by focusing on an RNN with lateral connections which interact with the feedforward sweep additively. Please note that these results generalize across RNN configurations (Fig. 3A).
### Learned categorical structure
We start our analysis by asking whether the RNN successfully learns the hierarchical structure encoded in the dataset statistics. To do so, we computed the similarities between the readout vectors (rows of the readout weight matrix, corresponding to connections from the final AvgPool layer to each of the readout neurons), as they can give us insight into which classes are considered similar by the RNN. Cosine similarity (\(\bar{A}\cdot\bar{B}/|\bar{A}||\bar{B}|\)) was computed between each pair of the readout vectors.
Hierarchical clustering on the pairwise similarities revealed meaningful clusters (Fig. 1B) resembling the dataset structure and the animacy organization observed in primate brains (Grill-Spector & Weiner, 2014). This suggests that our choice of architecture and dataset leads to an interpretable feature extractor.
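This similarity analysis is straightforward to reproduce. Below is a minimal sketch, with a randomly generated stand-in for the trained readout weight matrix `W` (the trained weights are not reproduced here), that computes the pairwise cosine similarities and feeds the corresponding distances to SciPy's hierarchical clustering:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical stand-in for the trained readout weight matrix:
# one row per class, one column per unit of the final AvgPool layer.
rng = np.random.default_rng(0)
W = rng.normal(size=(100, 512))

# Pairwise cosine similarity between readout vectors.
Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
S = Wn @ Wn.T

# Hierarchical clustering on cosine distance (1 - similarity);
# linkage expects the condensed upper triangle of the distance matrix.
condensed = 1.0 - S[np.triu_indices(len(W), k=1)]
Z = linkage(condensed, method="average")
clusters = fcluster(Z, t=10, criterion="maxclust")  # e.g., 10 coarse clusters
print(clusters[:10])
```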
Figure 1: (A) Architecture of the recurrent neural network. The lateral or top-down connections interact with bottom-up processing additively or multiplicatively. (B) Readout vectors capture referential semantic features of the data.
### Convergent representation dynamics
Next, we moved into analysing the representational dynamics of the RNNs, asking whether they exhibit a signature of being "done with classification", as expected in a stable RNN with attractor dynamics (Linsley, Karkada Ashok, Govindarajan, Liu, & Serre, 2020). Additionally, we asked if the changes in pre-readout representations (i.e., final AvgPool layer activations) are smaller for images that are already correctly classified as opposed to images that are not yet correctly classified. For this analysis, we focused on images that were classified correctly and consistently starting from a given timestep \(t\) (termed stable classification with \(t_{stable}=t\); we only consider these images for subsequent analyses). To define representational changes, we analysed the \(l^{2}\)-norms of the change in representations across time, as a function of \(t_{stable}\).
As seen in Fig. 2A (left), the amount of representational change did not depend on \(t_{stable}\): the changes in representations were not smaller for images that were classified correctly at earlier timesteps. However, the change in all representations did decrease with timesteps. These results indicate that although all representations "settle" across time, the rate of settling is independent of the correctness of classification. Interestingly, as seen in Fig. 2A (right), this reduction in the rate of change was also observed pre-training, suggesting these dynamics are a property of the network architecture. Finally, note that in contrast to previous findings (Linsley et al., 2020), these RNNs exhibit stable state dynamics despite being trained with backpropagation through time (BPTT), as discussed in Appendix 6.3.
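As an illustration, the following sketch computes \(t_{stable}\) per image and the per-timestep norms of representational change; the per-timestep predictions and pre-readout representations are random stand-ins, since the trained models are not included here:

```python
import numpy as np

# Random stand-ins: preds[t, i] is the predicted class of image i at timestep t,
# reps[t, i] its pre-readout representation (the trained models are not included).
T, n_imgs, dim, n_classes = 10, 1000, 128, 5
rng = np.random.default_rng(1)
preds = rng.integers(0, n_classes, size=(T, n_imgs))
labels = rng.integers(0, n_classes, size=n_imgs)
reps = rng.normal(size=(T, n_imgs, dim))

# t_stable: first timestep from which the image stays correctly classified.
correct = preds == labels[None, :]
suffix_ok = np.flip(np.logical_and.accumulate(np.flip(correct, 0), 0), 0)
t_stable = np.where(suffix_ok.any(0), suffix_ok.argmax(0), np.inf)

# L2 norm of the representational change between consecutive timesteps,
# averaged within each t_stable group.
delta = np.linalg.norm(np.diff(reps, axis=0), axis=-1)  # shape (T-1, n_imgs)
for ts in range(T):
    mask = t_stable == ts
    if mask.any():
        print(f"t_stable={ts}: mean change per step =", delta[:, mask].mean(axis=1))
```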
### Signatures of stable classification
Originating from the observation that, on average, representations move the same distance regardless of correct classification, we hypothesized that representations that are able to transition into another class may initially be closer to the decision boundary, whereas the ones that do not transition are initially far from the boundary (and are therefore unable to leave the current class). As we show in the Appendix 6.1, in networks with linear readouts and argmax decisions, the "readout zones", in which representations are assigned to a given class, resemble conical structures (a 2D schematic is shown in Fig. 3B). Given this structure, being closer to the decision boundary either entails having a lower L2 norm or having a lower cosine similarity with the readout vector (see Appendix 6.1.1 for further explanation). To explore this hypothesis, we assessed whether currently incorrectly-classified representations (that will eventually become correct) indeed have lower norms and/or lower cosine similarities with the readout vector of the current class.
At each timestep \(t\), we compared both properties of the representations with \(t_{stable}\leq t\) (i.e., currently correct) and the representations with \(t_{stable}>t\) (i.e., currently incorrect): their norms, and their cosine similarities to the current readout. As seen in Fig. 2B, both properties were smaller for \(t_{stable}>t\) than for \(t_{stable}\leq t\): the norms and cosine similarities were lower for representations that were incorrectly classified at a given timestep. As seen in Fig. 3A, these patterns (averaged across timesteps) are independent of the kind of feedback used or how it interacts with the bottom-up sweep. This confirms the hypothesis that currently incorrect representations are closer to the decision boundary.

Figure 2: (A) The amount of change in representations does not depend on the correctness of classification, in both trained and random RNNs. (B) Signatures of stable correct classification: the norm of the representation and its cosine similarity to the readout vector of the current class are higher. (C) Signature of the future correct class: for currently misclassified representations, the cosine similarity to the correct class readout vector is higher.
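These norm and cosine-similarity signatures are simple to compute from the pre-readout activations. A minimal sketch, again with hypothetical stand-ins for the representations, readout weights, and correctness mask:

```python
import numpy as np

# Hypothetical quantities at timestep t: pre-readout representations reps_t,
# readout weights W, and a boolean mask marking images with t_stable <= t.
rng = np.random.default_rng(2)
reps_t = rng.normal(size=(1000, 512))
W = rng.normal(size=(100, 512))
currently_correct = rng.random(1000) < 0.5

pred = (reps_t @ W.T).argmax(axis=1)             # argmax readout decision
norm = np.linalg.norm(reps_t, axis=1)
cos = np.einsum("id,id->i", reps_t, W[pred]) / (
    norm * np.linalg.norm(W[pred], axis=1))      # similarity to current readout

for label, mask in [("currently correct", currently_correct),
                    ("currently incorrect", ~currently_correct)]:
    print(label, norm[mask].mean(), cos[mask].mean())
```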
What constrains incorrect images to be closer to the decision boundary? There are two main possibilities: either any feedforward sweep, including in a purely feedforward network, automatically projects them to this position, or the feedforward sweep is shaped by the fact that recurrent computations move representations the same distance regardless of correct classification. To answer this question, we tested if the norms of two feedforward networks could predict how fast images are correctly classified by the RNN. They do, as can be seen in Appendix 6.2. This suggests that the requirements of the recurrent computations (representations moving out of a class should be closer to the decision boundary) are satisfied by the representations instantiated by the feedforward sweep. The reason why incorrect images are closer to the decision boundary is independent of recurrence, and the effect of recurrent computations is to move these representations from the incorrect to the correct class. What properties of the images lead to their representations being initialized closer to the decision boundary remains to be explored.
### Signatures of the correct class
We have now established that currently misclassified objects reside closer to the decision boundary (in the incorrect readout zone). Do these currently incorrectly-classified representations exhibit any signatures of their correct classes? Evidence for this would be provided if the cosine similarity of an incorrectly-classified representation to its correct class readout vector was higher than its cosine similarity to the readout vectors corresponding to the correct class of other incorrectly-classified representations in the same readout zone (see Fig. 3B for a schematic). As seen in Fig. 2C, the cosine similarity of the incorrectly-classified representations to the corresponding correct class readout vector is indeed higher than the cosine similarity to other correct classes' readout vectors. Hence, there are signatures of the correct classes in the incorrectly classified representations. This pattern (averaged across timesteps) is independent of the kind of feedback (lateral vs. top-down) and how it interacts with the bottom-up sweep (additive vs. multiplicative; Fig. 3A).
An intriguing question that arises from this is whether and how recurrent computations utilize these nascent features to correct the classification. Future work in understanding these dynamics shall consider: How do the incorrectly classified representations move through other classes to arrive at their correct classes? How do the feedback connections hierarchically (given Fig. 1B) constrain the feedforward sweep to lead to those trajectories? Are similar dynamics/representations found in biological visual systems?
## 4 Conclusions
In the RNNs studied here, the magnitude of changes in network activations is surprisingly similar across images and decreases with model timesteps. This shows that the extent of recurrent dynamics experienced by image representations does not depend on the correctness of classification. In addition, we highlight an interesting representation arrangement, presented schematically in Fig. 3B: image representations that are currently incorrectly classified (red and blue squares) have lower norms, and are closer to the current readout zone's decision boundary. The initial norm of the representation depends on the alignment of the image features with the feedforward weights, and can be seen as indicating the certainty of the network's inference after the feedforward sweep. For representations where certainty is low, recurrence can more easily move them towards the correct readout zone.
This work reported our first advances in deriving a framework for understanding representational dynamics in RNNs trained on naturalistic images, which we hope will further clarify how recurrent systems, both artificial and biological, reach their decisions. Future work should investigate the representation trajectories in other recurrent systems, including spatiotemporal data from the primate visual system.
## 5 Acknowledgments
The project was partially funded by the European Union (ERC, TIME, Project 101039524). Compute resources were funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, Project number 456666331).
|
2305.09957 | Deep quantum neural networks form Gaussian processes | It is well known that artificial neural networks initialized from independent
and identically distributed priors converge to Gaussian processes in the limit
of large number of neurons per hidden layer. In this work we prove an analogous
result for Quantum Neural Networks (QNNs). Namely, we show that the outputs of
certain models based on Haar random unitary or orthogonal deep QNNs converge to
Gaussian processes in the limit of large Hilbert space dimension $d$. The
derivation of this result is more nuanced than in the classical case due to the
role played by the input states, the measurement observable, and the fact that
the entries of unitary matrices are not independent. An important consequence
of our analysis is that the ensuing Gaussian processes cannot be used to
efficiently predict the outputs of the QNN via Bayesian statistics.
Furthermore, our theorems imply that the concentration of measure phenomenon in
Haar random QNNs is worse than previously thought, as we prove that expectation
values and gradients concentrate as $\mathcal{O}\left(\frac{1}{e^d
\sqrt{d}}\right)$. Finally, we discuss how our results improve our
understanding of concentration in $t$-designs. | Diego García-Martín, Martin Larocca, M. Cerezo | 2023-05-17T05:32:45Z | http://arxiv.org/abs/2305.09957v2 | # Deep quantum neural networks form Gaussian processes
###### Abstract
It is well known that artificial neural networks initialized from independent and identically distributed priors converge to Gaussian processes in the limit of large number of neurons per hidden layer. In this work we prove an analogous result for Quantum Neural Networks (QNNs). Namely, we show that the outputs of certain models based on Haar random unitary or orthogonal deep QNNs converge to Gaussian processes in the limit of large Hilbert space dimension \(d\). The derivation of this result is more nuanced than in the classical case due to the role played by the input states, the measurement observable, and the fact that the entries of unitary matrices are not independent. An important consequence of our analysis is that the ensuing Gaussian processes cannot be used to efficiently predict the outputs of the QNN via Bayesian statistics. Furthermore, our theorems imply that the concentration of measure phenomenon in Haar random QNNs is much worse than previously thought, as we prove that expectation values and gradients concentrate as \(\mathcal{O}\left(\frac{1}{e^{d}\sqrt{d}}\right)\) - exponentially in the Hilbert space dimension. Finally, we discuss how our results improve our understanding of concentration in \(t\)-designs.
Neural Networks (NNs) have revolutionized the fields of Machine Learning (ML) and artificial intelligence. Their tremendous success across many fields of research in a wide variety of applications [1; 2; 3] is certainly astonishing. While much of this success has come through heuristics, the past few decades have witnessed a significant increase in our theoretical understanding of their inner workings. One of the most interesting results regarding NNs is that fully-connected models with a single hidden layer converge to Gaussian Processes (GPs) in the limit of large number of hidden neurons, when the parameters are initialized from independent and identically distributed (i.i.d.) priors [4]. More recently, it has been shown that i.i.d.-initialized, fully-connected, multi-layer NNs also converge to GPs in the infinite-width limit [5]. Furthermore, other architectures, such as convolutional NNs [6], transformers [7] or recurrent NNs [8] are also GPs under certain assumptions. More than just a mathematical curiosity, the correspondence between NNs and GPs opened up the possibility of performing exact Bayesian inference for regression and learning tasks using wide NNs [9; 4].
With the advent of quantum computers, there has been an enormous interest in merging quantum computing with ML, leading to the thriving field of Quantum Machine Learning (QML) [10; 11; 12; 13; 14]. Rapid progress has been made in this field, largely fueled by the hope that QML may provide a quantum advantage in the near-term for some practically-relevant problems. While the prospects for such a practical quantum advantage remain unclear [15], a number of promising analytical results have already been put forward [16; 17; 18; 19]. Still, much remains to be known about QML models.
In this work, we contribute to the QML body of knowledge by proving that under certain conditions, the outputs of deep Quantum Neural Networks (QNNs) - i.e., parametrized quantum circuits acting on input states drawn from a training set - converge to GPs in the limit of large Hilbert space dimension (see Fig. 1). Our results are derived for QNNs that are Haar random over the unitary and orthogonal groups. Unlike in the classical case, where the proof of the emergence of GPs stems from the central limit theorem, the situation becomes more intricate in the quantum setting as the entries of the QNN are not independent - the rows and columns of a unitary matrix are constrained to be mutually orthonormal. Hence, our proof strategy boils down to showing that each moment of the QNN's output distribution converges to that of a multivariate Gaussian. In addition, we show that in contrast to classical NNs, the Bayesian distribution of the QNN is inefficient for predicting the model's outputs. We then use our results to provide a precise characterization of the concentration of measure phenomenon in deep random quantum circuits [20; 21; 22; 23; 24; 25]. Here, our theorems indicate that the expectation values, as well as the gradients, of Haar random processes concentrate exponentially faster than reported in previous barren plateau studies [20; 21]. Finally, we discuss how our results can be leveraged to study QNNs that are not fully Haar random but instead form \(t\)-designs, which constitutes a much more practical assumption [26; 27; 28].
## I Gaussian processes and classical machine learning
We begin by introducing GPs.
**Definition 1** (Gaussian process).: _A collection of random variables \(\{X_{1},X_{2},\ldots\}\) is a GP if and only if, for every finite set of indices \(\{1,2,\ldots,m\}\), the vector \((X_{1},X_{2},\ldots,X_{m})\) follows a multivariate Gaussian distribution, which we denote as \(\mathcal{N}(\mathbf{\mu},\mathbf{\Sigma})\). Said otherwise, every linear combination of \(\{X_{1},X_{2},\ldots,X_{m}\}\) follows a univariate Gaussian distribution._
In particular, \(\mathcal{N}(\mathbf{\mu},\mathbf{\Sigma})\) is determined by its \(m\)-dimensional mean vector \(\mathbf{\mu}=(\mathbb{E}[X_{1}],\ldots,\mathbb{E}[X_{m}])\), and its \(m\times m\) dimensional covariance matrix with entries \((\mathbf{\Sigma})_{\alpha\beta}=\mathrm{Cov}[X_{\alpha},X_{\beta}]\).
GPs are extremely important in ML since they can be used as a form of kernel method to solve learning tasks [4, 9]. For instance, consider a regression problem where the data domain is \(\mathscr{X}=\mathbb{R}\) and the label domain is \(\mathscr{Y}=\mathbb{R}\). Instead of finding a single function \(f:\mathscr{X}\rightarrow\mathscr{Y}\) which solves the regression task, a GP instead assigns probabilities to a set of possible \(f(x)\), such that the probabilities are higher for the "more likely" functions. Following a Bayesian inference approach, one then selects the functions that best agree with some set of empirical observations [9, 14].
Under this framework, the output over the distribution of functions \(f(x)\), for \(x\in\mathscr{X}\), is a random variable. Then, given a set of training samples \(x_{1},\ldots,x_{m}\), and some covariance function \(\kappa(x,x^{\prime})\), Definition 1 implies that if one has a GP, the outputs \(f(x_{1}),\ldots,f(x_{m})\) are random variables sampled from some multivariate Gaussian distribution \(\mathcal{N}(\mathbf{\mu},\mathbf{\Sigma})\). From here, the GP is used to make predictions about the output \(f(x_{m+1})\) (for some new data instance \(x_{m+1}\)), given the previous observations \(f(x_{1}),\ldots,f(x_{m})\). Explicitly, one constructs the joint distribution \(P(f(x_{1}),\ldots,f(x_{m}),f(x_{m+1}))\) from the averages and the covariance function \(\kappa\), and obtains the sought-after "predictive distribution" \(P(f(x_{m+1})|f(x_{1}),\ldots,f(x_{m}))\) via marginalization. The power of the GP relies on the fact that this distribution usually contains less uncertainty than \(P(f(x_{m+1}))=\mathcal{N}(\mathbb{E}[f(x_{m+1})],\kappa(x_{m+1},x_{m+1}))\) (see the Methods).
## II Haar random deep QNNs form GPs
In what follows we consider a setting where one is given repeated access to a dataset \(\mathscr{D}\) containing pure quantum states \(\{\rho_{i}\}_{i}\) on a \(d\)-dimensional Hilbert space. We will make no assumptions regarding the origin of these states, as they can correspond to classical data encoded in quantum states [29, 30], or quantum data obtained from some quantum mechanical process [31, 32]. Then, we assume that the states are sent through a deep QNN, denoted \(U\). While in general \(U\) can be parametrized by some set of trainable parameters \(\mathbf{\theta}\), we leave such dependence implicit for the ease of notation. At the output of the circuit one measures the expectation value of a traceless Hermitian operator taken from a set \(\mathscr{O}=\{O_{j}\}_{j}\) such that \(\mathrm{Tr}[O_{j}O_{j^{\prime}}]=d\delta_{j,j^{\prime}}\) and \(O_{j}^{2}=\mathds{1}\), for all \(j,j^{\prime}\) (e.g., Pauli strings). We denote the QNN outputs as
\[C_{j}(\rho_{i})=\mathrm{Tr}\big{[}U\rho_{i}U^{\dagger}O_{j}\big{]}\,. \tag{1}\]
Then, we collect these quantities over some set of states from \(\mathscr{D}\) and some set of measurements from \(\mathscr{O}\) in a vector
\[\mathscr{C}=(C_{j}(\rho_{i}),\ldots,C_{j^{\prime}}(\rho_{i^{\prime}}),\ldots). \tag{2}\]
As we will show below, in the large-\(d\) limit \(\mathscr{C}\) converges to a GP when the QNN unitaries \(U\) are sampled according to the Haar measure on the degree-\(d\) unitary \(\mathbb{U}(d)\) or orthogonal \(\mathbb{O}(d)\) groups (see Fig. 1). We will henceforth use the notation \(\mathbb{E}_{\mathbb{U}(d)}\) and \(\mathbb{E}_{\mathbb{O}(d)}\) to respectively denote Haar averages over \(\mathbb{U}(d)\) and \(\mathbb{O}(d)\). Moreover, we assume that when the circuit is sampled from \(\mathbb{O}(d)\), the states in \(\mathscr{D}\) and the measurement operators in \(\mathscr{O}\) are real valued.
Figure 1: **Schematic of our main results.** It is well known that certain classical NNs with \(N_{h}\) neurons per hidden layer become GPs when \(N_{h}\rightarrow\infty\). That is, given inputs \(x_{1}\) and \(x_{2}\), and corresponding outputs \(y_{1}\) and \(y_{2}\), then the joint probability \(P(y_{1},y_{2})\) is a multivariate Gaussian \(\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\). In this work, we show that a similar result holds under certain conditions for deep QNNs in the limit of large Hilbert space dimension, \(d\rightarrow\infty\). Now, given quantum states \(\rho_{1}\) and \(\rho_{2}\), \(C(\rho)=\mathrm{Tr}[U\rho U^{\dagger}O]\) is such that \(P(C(\rho_{1}),C(\rho_{2}))=\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\).
### Moment computation in the large-\(d\) limit
As we discuss in the Methods, we cannot rely on simple central-limit-theorem arguments to show that \(\mathscr{C}\) forms a GP. Hence, our proof strategy is based on computing all the moments of the vector \(\mathscr{C}\) and showing that they asymptotically match those of a multivariate Gaussian distribution. To conclude the proof we show that these moments unequivocally determine the distribution, for which we can use Carleman's condition [33, 34]. We refer the reader to the Supplemental Information (SI) for the detailed proofs of the results in this manuscript.
First, we present the following lemma.
**Lemma 1**.: _Let \(C_{j}(\rho_{i})\) be the expectation value of a Haar random QNN as in Eq. (1). Then for any \(\rho_{i}\in\mathscr{D}\), \(O_{j}\in\mathscr{O}\),_

\[\mathbb{E}_{\mathbb{U}(d)}[C_{j}(\rho_{i})]=\mathbb{E}_{\mathbb{O}(d)}[C_{j}(\rho_{i})]=0\,. \tag{3}\]

_Moreover, for any pair of states \(\rho_{i},\rho_{i^{\prime}}\in\mathscr{D}\) and operators \(O_{j},O_{j^{\prime}}\in\mathscr{O}\) we have_
\[\operatorname{Cov}_{\mathbb{U}(d)}[C_{j}(\rho_{i})C_{j^{\prime}}(\rho_{i^{ \prime}})]=\operatorname{Cov}_{\mathbb{O}(d)}[C_{j}(\rho_{i})C_{j^{\prime}}( \rho_{i^{\prime}})]=0\,,\]
_if \(j\neq j^{\prime}\) and_
\[\mathbf{\Sigma}_{i,i^{\prime}}^{\mathbb{U}} =\frac{d}{d^{2}-1}\left(\operatorname{Tr}[\rho_{i}\rho_{i^{\prime }}]-\frac{1}{d}\right)\,, \tag{4}\] \[\mathbf{\Sigma}_{i,i^{\prime}}^{\mathbb{O}} =\frac{2(d+1)}{(d+2)(d-1)}\left(\operatorname{Tr}[\rho_{i}\rho_{ i^{\prime}}]\left(1-\frac{1}{d+1}\right)-\frac{1}{d+1}\right)\,, \tag{5}\]
_if \(j=j^{\prime}\). Here, we have defined \(\mathbf{\Sigma}_{i,i^{\prime}}^{G}=\operatorname{Cov}_{G}[C_{j}(\rho_{i})C_{j}( \rho_{i^{\prime}})]\), where \(G=\mathbb{U}(d),\mathbb{O}(d)\)._
Lemma 1 shows that the expectation value of the QNN outputs is always zero. More notably, it indicates that the covariance between the outputs is null if we measure different observables (even if we use the same input state and the same circuit). This implies that the distributions \(C_{j}(\rho_{i})\) and \(C_{j^{\prime}}(\rho_{i^{\prime}})\) are independent if \(j\neq j^{\prime}\). That is, knowledge of the measurement outcomes for one observable and different input states does not provide any information about the outcomes of other measurements, at these or any other input states. Therefore, in what follows we will focus on the case where \(\mathscr{C}\) contains expectation values for different states, but the same operator. In this case, Lemma 1 shows that the covariances will be positive, zero, or negative depending on whether \(\operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}]\) is larger, equal, or smaller than \(\frac{1}{d}\), respectively.
We now state a useful result.
**Lemma 2**.: _Let \(\mathscr{C}\) be a vector of \(k\) expectation values of a Haar random QNN as in Eq. (2), where one measures the same operator \(O_{j}\) over a set of \(k\) states \(\rho_{1},\dots,\rho_{k}\in\mathscr{D}\). In the large-\(d\) limit, if \(k\) is odd then \(\mathbb{E}_{\mathbb{U}(d)}\left[C_{j}(\rho_{1})\cdots C_{j}(\rho_{k})\right]= \mathbb{E}_{\mathbb{O}(d)}\left[C_{j}(\rho_{1})\cdots C_{j}(\rho_{k})\right]=0\). Moreover, if \(k\) is even and if a) \(\operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}]\in\Omega\left(\frac{1}{ \operatorname{poly}(\log(d))}\right)\) for all \(i,i^{\prime}\), we have_
\[\mathbb{E}_{\mathbb{U}(d)}\left[C_{j}(\rho_{1})\cdots C_{j}(\rho _{k})\right] =\frac{1}{d^{k/2}}\sum_{\sigma\in T_{k}}\prod_{\{t,t^{\prime}\} \in\sigma}\operatorname{Tr}[\rho_{t}\rho_{t^{\prime}}] \tag{6}\] \[=\frac{\mathbb{E}_{\mathbb{O}(d)}\left[C_{j}(\rho_{1})\cdots C_{ j}(\rho_{k})\right]}{2^{k/2}}\,,\]
_where the summation runs over all the possible disjoint pairing of indexes in the set \(\{1,2,\dots,k\}\), \(T_{k}\), and the product is over the different pairs in each pairing; while if b) \(\operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}]=0\) for all \(i,i^{\prime}\), we have_
\[\mathbb{E}_{\mathbb{U}(d)}\left[C_{j}(\rho_{1})\cdots C_{j}(\rho_{k})\right] =(-1)^{k/2}\frac{k!}{2^{k/2}(k/2)!}\frac{1}{d^{k}} \tag{7}\] \[=\frac{\mathbb{E}_{\mathbb{O}(d)}\left[C_{j}(\rho_{1})\cdots C_{j}(\rho_{k})\right]}{2^{k/2}}\,.\]
Using Lemma 2 as our main tool, we will be able to prove that deep QNNs form GPs for different types of datasets. In Table 1 we present a summary of our main results.
### Positively correlated GPs
We begin by studying the case when the states in the dataset satisfy \(\operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}]\in\Omega\left(\frac{1}{ \operatorname{poly}(\log(d))}\right)\) for all \(\rho_{i},\rho_{i^{\prime}}\in\mathscr{D}\). According to Lemma 1, this implies that the variables are positively correlated. In the large \(d\) limit, we can derive the following theorem.
**Theorem 1**.: _Under the same conditions for which Lemma 2(a) holds, the vector \(\mathscr{C}\) forms a GP with mean vector \(\mathbf{\mu}=\mathbf{0}\) and covariance matrix given by \(\mathbf{\Sigma}_{i,i^{\prime}}^{\mathbb{U}}=\frac{\mathbf{\Sigma}_{i,i^{\prime}}^{ \mathbb{O}}}{2}=\frac{\operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}]}{d}\)._
| Dataset. For all \(\rho_{i},\rho_{i^{\prime}}\in\mathscr{D}\): | GP | Correlation | Statement |
| --- | --- | --- | --- |
| \(\operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}]\in\Omega\left(\frac{1}{\operatorname{poly}(\log(d))}\right)\) | Yes | Positive | Theorem 1 |
| \(\operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}]=\frac{1}{d}\) | Yes | Null | Theorem 2 |
| \(\operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}]=0\) | Yes | Negative | Theorem 3 |

Table 1: **Summary of main results.** In the first column we present conditions for the states in the dataset under which the deep QNN's outputs form GPs. In the remaining columns we report the correlation in the GP variables and the associated theorem where the main result is stated. In all cases we assume that we measure the same operator \(O_{j}\) for all \(\rho_{i},\rho_{i^{\prime}}\in\mathscr{D}\). In Theorem 5, we extend some of these results to the cases where the conditions are only met on average when sampling states over \(\mathscr{D}\).
Theorem 1 indicates that the covariances for the orthogonal group are twice as large as those arising from the unitary group. In Fig. 2, we present results obtained by numerically simulating a unitary Haar random QNN for a system of \(n=18\) qubits. The circuits were sampled using known results for the asymptotics of the entries of unitary matrices [33]. In the left panels of Fig. 2, we show the corresponding two-dimensional GP obtained for two initial states that satisfy \(\mathrm{Tr}[\rho_{i}\rho_{i^{\prime}}]\in\Omega(1)\). Here, we see that the variables are positively correlated in accordance with the prediction in Theorem 1.
The fact that the outputs of deep QNNs form GPs unravels a deep connection between QNNs and quantum kernel methods. While it has already been pointed out that QNN-based QML constitutes a form of kernel-based learning [35], our results materialize this connection for the case of Haar random circuits. Notably, we can recognize that the kernel arising in the GP covariance matrix is proportional to the Fidelity kernel, i.e., to the Hilbert-Schmidt inner product between the data states [36, 37, 35]. Moreover, since the predictive distribution of a GP can be expressed as a function of the covariance matrix (see the Methods), and thus of the kernel entries, our results further cement the fact that quantum models such as those in Eq. (1) are functions in the reproducing kernel Hilbert space [35].
### Uncorrelated GPs
We now consider the case when \(\mathrm{Tr}[\rho_{i}\rho_{i^{\prime}}]=\frac{1}{d}\) for all \(\rho_{i},\rho_{i^{\prime}}\in\mathscr{D}\). We find the following result.
**Theorem 2**.: _Let \(\mathscr{C}\) be a vector of \(k\) expectation values of operators in \(\mathscr{O}\) over a set of \(k\) states \(\rho_{1},\dots,\rho_{k}\in\mathscr{D}\). If \(\mathrm{Tr}[\rho_{i}\rho_{i^{\prime}}]=\frac{1}{d}\) for all \(i,i^{\prime}\), then in the large \(d\)-limit \(\mathscr{C}\) forms a GP with mean vector \(\mathbf{\mu}=\mathbf{0}\) and diagonal covariance matrix_
\[\mathbf{\Sigma}_{i,i^{\prime}}^{\mathrm{U}}=\frac{\mathbf{\Sigma}_{i,i^{\prime}}^{ \mathrm{O}}}{2}=\begin{cases}\frac{1}{d}&\text{if }i=i^{\prime}\\ 0&\text{if }i\neq i^{\prime}\end{cases}\quad. \tag{8}\]
In the right panel of Fig. 2, we plot the GP corresponding to two initial states such that \(\mathrm{Tr}[\rho_{i}\rho_{i^{\prime}}]=\frac{1}{d}\). In this case, the variables appear uncorrelated, as predicted by Theorem 2.
### Negatively correlated GPs
Here we study the case when \(\mathrm{Tr}[\rho_{i}\rho_{i^{\prime}}]=0\) for all \(\rho_{i},\rho_{i^{\prime}}\in\mathscr{D}\). We prove the following theorem.
**Theorem 3**.: _Under the same conditions for which Lemma 2(b) holds, the vector \(\mathscr{C}\) forms a GP with mean vector \(\mathbf{\mu}=\mathbf{0}\) and covariance matrix_
\[\mathbf{\Sigma}_{i,i^{\prime}}^{\mathrm{U}(d)}=\begin{cases}\frac{1}{d+1}\text{ if }i=i^{\prime}\\ -\frac{1}{(d^{2}-1)}\text{ if }i\neq i^{\prime}\end{cases}\quad, \tag{9}\]
_and_
\[\mathbf{\Sigma}_{i,i^{\prime}}^{\mathbb{O}(d)}=\begin{cases}\frac{2}{d+2}\text{ if }i=i^{\prime}\\ -\frac{2}{(d+2)(d-1)}\text{ if }i\neq i^{\prime}\end{cases}\quad. \tag{10}\]
Note that since we are working in the large \(d\) limit, we could have expressed the entries of the covariance matrices of Theorem 3 as \(\mathbf{\Sigma}_{i,i}^{\mathbb{U}}=\frac{\mathbf{\Sigma}_{i,i}^{\mathbb{O}}}{2}=\frac{1}{d}\), and \(\mathbf{\Sigma}_{i,i^{\prime}}^{\mathbb{U}}=\frac{\mathbf{\Sigma}_{i,i^{\prime}}^{\mathbb{O}}}{2}=-\frac{1}{d^{2}}\) for \(i\neq i^{\prime}\). However, we find it convenient to present their full form as it will be important below.
Figure 2: **Two-dimensional GPs.** We plot the joint probability density function, as well as its scaled marginals, for the measurement outcomes at the output of a unitary Haar random QNN acting on \(n=18\) qubits. The measured observable is \(O_{j}=Z_{1}\), where \(Z_{1}\) denotes the Pauli \(z\) operator on the first qubit. Moreover, the input states are: \(\rho_{1}=|0\rangle\langle 0|^{\otimes n}\) and \(\rho_{2}=|\mathrm{GHZ}\rangle\langle\mathrm{GHZ}|\) with \(|\mathrm{GHZ}\rangle=\frac{1}{\sqrt{2}}(|0\rangle^{\otimes n}+|1\rangle^{ \otimes n})\), for the left panel; \(\rho_{1}\) and \(\rho_{3}=|\Psi\rangle\langle\Psi|\) with \(|\Psi\rangle=\frac{1}{\sqrt{d}}\,|0\rangle^{\otimes n}+\sqrt{1-\frac{1}{d}}\,| 1\rangle^{\otimes n}\) for the right panel. In both cases we took \(10^{4}\) samples.
### Deep QNN outcomes, and their linear combination
In this section, and the following ones, we will study the implications of Theorems 1, 2 and 3. Unless stated otherwise, the corollaries we present can be applied to all considered datasets (see Table 1).
First, we study the univariate probability distribution \(P(C_{j}(\rho_{i}))\).
**Corollary 1**.: _Let \(C_{j}(\rho_{i})\) be the expectation value of a Haar random QNN as in Eq. (1). Then, for any \(\rho_{i}\in\mathscr{D}\) and \(O_{j}\in\mathscr{O}\), we have_
\[P(C_{j}(\rho_{i}))=\mathcal{N}(0,\sigma^{2})\,, \tag{11}\]
_where \(\sigma^{2}=\frac{1}{d},\frac{2}{d}\) when \(U\) is Haar random over \(\mathbb{U}(d)\) and \(\mathbb{O}(d)\), respectively._
Corollary 1 shows that when a single state from \(\mathscr{D}\) is sent through the QNN, and a single operator from \(\mathscr{O}\) is measured, the outcomes follow a Gaussian distribution with a variance that vanishes inversely proportionally to the Hilbert space dimension. This means that for large problem sizes, we can expect the results to be extremely concentrated around their mean (see below for more details). In Fig. 3 we compare the predictions from Corollary 1 to numerical simulations. We find that the simulations match our theoretical results very closely for both the unitary and the orthogonal groups. Moreover, we can observe that the standard deviation for orthogonal Haar random QNNs is larger than that for unitary ones. In Fig. 3 we also plot the quotient \(\frac{\mathbb{E}[C_{j}(\rho_{i})^{k}]}{\mathbb{E}[C_{j}(\rho_{i})^{2}]^{k/2}}\) obtained from our numerics, and we verify that it follows the value \(\frac{k!}{2^{k/2}(k/2)!}\) of a Gaussian distribution.
At this point, it is worth making an important remark. According to Definition 1, if \(\mathscr{C}\) forms a GP, then any linear combination of its entries will follow a univariate Gaussian distribution. In particular, if \(\{C_{j}(\rho_{1}),C_{j}(\rho_{2}),\ldots,C_{j}(\rho_{m})\}\subseteq\mathscr{C}\), then \(P(C_{j}(\widetilde{\rho}))\) with \(\widetilde{\rho}=\sum_{i=1}^{m}c_{i}\rho_{i}\) will be equal to \(\mathcal{N}(0,\widetilde{\sigma}^{2})\) for some \(\widetilde{\sigma}\). Note that the real-valued coefficients \(\{c_{i}\}_{i=1}^{m}\) need not be a probability distribution, meaning that \(\widetilde{\rho}\) is not necessarily a quantum state. The previous then raises an important question: What happens if \(\widetilde{\rho}\propto\openone\)? A direct calculation shows that \(C_{j}(\widetilde{\rho})=\sum_{i=1}^{d}c_{i}C_{j}(\rho_{i})\propto\text{Tr}\big{[}U\openone U^{\dagger}O_{j}\big{]}=\text{Tr}[O_{j}]=0\). How can we then unify these two perspectives? On the one hand \(C_{j}(\widetilde{\rho})\) should be normally distributed, but on the other hand we know that it is always constant. To solve this issue, we note that the only dataset we considered where the identity can be constructed is the one where \(\text{Tr}[\rho_{i}\rho_{i^{\prime}}]=0\) for all \(i,i^{\prime}\)1. In that case, we can leverage Theorem 3 along with the identity \(\widetilde{\sigma}^{2}=\text{Var}_{G}[\sum_{i=1}^{d}C_{j}(\rho_{i})]=\sum_{i,i^{\prime}}\text{Cov}_{G}[C_{j}(\rho_{i}),C_{j}(\rho_{i^{\prime}})]\) to explicitly prove that \(\text{Var}_{G}[\sum_{i=1}^{d}C_{j}(\rho_{i})]=0\) (for \(G=\mathbb{U}(d),\mathbb{O}(d)\)). Hence, we find a zero-variance Gaussian distribution, i.e., a delta distribution in the QNN's outcomes (as expected).
Footnote 1: This follows from the fact that if \(\mathscr{D}\) contains a complete basis then for any \(\widetilde{\rho}\in\mathscr{D}^{\perp}\), one has that if \(\text{Tr}[\widetilde{\rho}\rho_{i}]=0\) for all \(\rho_{i}\in\mathscr{D}\), then \(\widetilde{\rho}=0\). Here, \(\mathscr{D}^{\perp}\) denotes the kernel of the projector onto the subspace spanned by the vectors in \(\mathscr{D}\).
### Predictive power of the deep QNN's GP
Let us now study the predictive distribution of the QNN's GP. We consider a scenario where we send \(k\) states \(\rho_{1},\ldots,\rho_{k}\in\mathscr{D}\) to the QNN and measure the same operator \(O_{j}\) at its output. Moreover, we assume that there exists some (statistical) noise in the measurement process, so that
Figure 3: **Probability density function for \(C_{j}(\rho_{i})\), for Haar random QNNs and different problem sizes.** We consider unitary and orthogonal QNNs with \(n\)-qubits, and we take \(\rho_{i}=|0\rangle\langle 0|^{\otimes n}\), and \(O_{j}=Z_{1}\). The colored histograms are built from \(10^{4}\) samples in each case, and the solid black lines represent the corresponding Gaussian distributions \(\mathcal{N}\left(0,\sigma^{2}\right)\), where \(\sigma^{2}\) is given in Corollary 1. The insets show the numerical versus predicted value of \(\mathbb{E}[C_{j}(\rho_{i})^{k}]/\mathbb{E}[C_{j}(\rho_{i})^{2}]^{k/2}\). For a Gaussian distribution with zero mean, such quotient is \(\frac{k!}{2^{k/2}(k/2)!}\) (solid black line).
we actually estimate the quantities \(y(\rho_{i})=C_{j}(\rho_{i})+\varepsilon_{i}\), where the noise terms \(\varepsilon_{i}\) are assumed to be independently drawn from the same distribution \(P(\varepsilon_{i})=\mathcal{N}(0,\sigma_{N}^{2})\). For simplicity, we assume that the noise is given by finite sampling and that \(\sigma_{N}^{2}=\frac{1}{N}\), where \(N\in\mathcal{O}(\mathrm{poly}(\log(d)))\) is the number of shots used to estimate each \(y(\rho_{i})\). We then prove the following result.
**Theorem 4**.: _Consider a GP obtained from a Haar random QNN. Given the set of observations \((y(\rho_{1}),\ldots,y(\rho_{m}))\) obtained from \(N\in\mathcal{O}(\mathrm{poly}(\log(d)))\) measurements, then the predictive distribution of the GP is trivial:_
\[P(C_{j}(\rho_{m+1})|C_{j}(\rho_{1}),\ldots,C_{j}(\rho_{m}))=P(C_{j}(\rho_{m+1} ))=\mathcal{N}(0,\sigma^{2})\,,\]
_where \(\sigma^{2}\) is given by Corollary 1._
Theorem 4 shows that by spending only a polylogarithmic-in-\(d\) number of measurements, one cannot use Bayesian statistical theory to learn any information about new outcomes given previous ones.
### Concentration of measure
In this section we show that Corollary 1 provides a more precise characterization of the concentration of measure and the barren plateau phenomena for Haar random circuits than that found in the literature [20; 21; 22; 23; 24; 25]. First, it implies that deep orthogonal QNNs will exhibit barren plateaus, a result not previously known. Second, we recall that in standard barren plateau analyses, one only looks at the first two moments of the distribution of cost values \(C_{j}(\rho_{i})\) (or, similarly, of gradient values \(\partial_{\theta}C_{j}(\rho_{i})\)). Then one uses Chebyshev's inequality, which states that for any \(c>0\) and any zero-mean random variable \(X\), \(P(|X|\geqslant c)\leqslant\frac{\mathrm{Var}[X]}{c^{2}}\), to prove that \(P(|C_{j}(\rho_{i})|\geqslant c)\) and \(P(|\partial_{\theta}C_{j}(\rho_{i})|\geqslant c)\) are in \(\mathcal{O}(\frac{1}{d})\)[25; 21]. However, having a full characterization of \(P(C_{j}(\rho_{i}))\) allows us to compute tail probabilities and obtain a much tighter bound. For instance, for \(U\) being Haar random over \(\mathbb{U}(d)\), we find
**Corollary 2**.: _Let \(C_{j}(\rho_{i})\) be the expectation value of a Haar random QNN as in Eq. (1). Assuming that there exists a parametrized gate in \(U\) of the form \(e^{-i\theta H}\) for some Pauli operator \(H\), then_
\[P(|C_{j}(\rho_{i})|\geqslant c),\,P(|\partial_{\theta}C_{j}(\rho_{i})| \geqslant c)\in\mathcal{O}\left(\frac{1}{ce^{dc^{2}}\sqrt{d}}\right)\,.\]
Corollary 2 indicates that the QNN outputs, and their gradients, actually concentrate with a probability which vanishes exponentially with \(d\). In an \(n\)-qubit system, where \(d=2^{n}\), then \(P(|C_{j}(\rho_{i})|\geqslant c)\) and \(P(|\partial_{\theta}C_{j}(\rho_{i})|\geqslant c)\) are doubly exponentially vanishing with \(n\). The tightness of our bound arises from the fact that Chebyshev's inequality is loose for highly narrow Gaussian distributions. Corollary 2 also implies that the narrow gorge region of the landscape [25], i.e., the fraction of non-concentrated \(C_{j}(\rho_{i})\) values, also decreases exponentially with \(d\).
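To make the gap concrete, the exact Gaussian tail implied by Corollary 1 can be compared against the Chebyshev bound used in standard barren plateau analyses. A minimal sketch, where \(n=10\) and \(c=0.1\) are arbitrary illustrative choices:

```python
import numpy as np
from scipy import stats

n = 10                       # number of qubits (illustrative choice)
d, c = 2 ** n, 0.1
sigma = 1 / np.sqrt(d)       # Corollary 1: C_j(rho_i) ~ N(0, 1/d) over U(d)

gauss_tail = 2 * stats.norm.sf(c, scale=sigma)  # exact P(|C| >= c)
chebyshev = sigma ** 2 / c ** 2                  # Var[C] / c^2

print(f"Gaussian tail:   {gauss_tail:.2e}")      # ~1.4e-03
print(f"Chebyshev bound: {chebyshev:.2e}")       # ~9.8e-02
```

The gap widens rapidly with \(d\): the Gaussian tail decays exponentially in \(dc^{2}\), whereas the Chebyshev bound decays only as \(1/d\).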
In the Methods we furthermore show how our results can be used to study the concentration of functions of QNN outcomes, e.g., standard loss functions used in the literature, like the mean-squared error.
### Implications for \(t\)-designs
We now note that our results allow us to characterize the output distribution for QNNs that form \(t\)-designs, i.e., for QNNs whose unitary distributions have the same properties up to the first \(t\) moments as sampling random unitaries from \(\mathbb{U}(d)\) with respect to the Haar measure. With this in mind, one can readily see that the following corollary holds.
**Corollary 3**.: _Let \(U\) be drawn from a \(t\)-design. Then, under the same conditions for which Theorems 1, 2 and 3 hold, the vector \(\mathscr{C}\) matches the first \(t\) moments of a GP._
Corollary 3 extends our results beyond the strict condition of the QNN being Haar random to being a \(t\)-design, which is a more realistic assumption [26; 27; 28]. In particular, we can study the concentration phenomenon in \(t\)-designs: using an extension of Chebyshev's inequality to higher order moments leads to \(P(|C_{j}(\rho_{i})|\geqslant c)\), \(P(|\partial_{\theta}C_{j}(\rho_{i})|\geqslant c)\in\mathcal{O}\left(\frac{(2\lfloor\frac{t}{2}\rfloor)!}{2^{\lfloor\frac{t}{2}\rfloor}(dc^{2})^{\lfloor\frac{t}{2}\rfloor}(\lfloor\frac{t}{2}\rfloor)!}\right)\) (see the SI for a proof). Note that for \(t=2\) we recover the known concentration result for barren plateaus, but for \(t\geqslant 4\) we obtain new polynomially-in-\(d\) tighter bounds.
### Generalized datasets
Up to this point we have derived our theorems by imposing strict conditions on the overlaps between every pair of states in the dataset. However, we can extend these results to the cases where the conditions are only met on average when sampling states over \(\mathscr{D}\).
**Theorem 5**.: _The results of Theorems 1 and 2 will hold, on average, if \(\mathbb{E}_{\rho_{i},\rho_{i^{\prime}}\sim\mathscr{D}}\operatorname{Tr}[\rho_ {i}\rho_{i^{\prime}}]\in\Omega\left(\frac{1}{\mathrm{poly}(\log(d))}\right)\) and \(\mathbb{E}_{\rho_{i},\rho_{i^{\prime}}\sim\mathscr{D}}\operatorname{Tr}[\rho_ {i}\rho_{i^{\prime}}]=\frac{1}{d}\), respectively._
As discussed in the Methods section, these extensions make our results more practical, as \(\mathbb{E}_{\rho_{i},\rho_{i^{\prime}}\sim\mathscr{D}}\operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}]\in\Omega\left(\frac{1}{\mathrm{poly}(\log(d))}\right)\) holds in standard multi-class classification settings [29; 32], while \(\mathbb{E}_{\rho_{i},\rho_{i^{\prime}}\sim\mathscr{D}}\operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}]=\frac{1}{d}\) holds when the dataset is composed of Haar random states.
## III Discussion and Outlook
In this manuscript we have shown that under certain conditions, the output distribution of deep Haar random QNNs converges to a Gaussian process in the limit of large Hilbert space dimension. While this result had been conjectured in [13], a formal proof was still lacking. We remark that although our result mirrors its classical counterpart - that certain classical NNs form GPs - there exist nuances that differentiate our findings from the classical case. For instance, we need to make assumptions on the states processed by the QNN, as well as on the measurement operator. Moreover, some of these assumptions are unavoidable, as Haar random QNNs will not always converge to a GP. As an example, we have that if \(O_{j}\) is a projector onto a computational basis state, then one recovers a Porter-Thomas distribution [38]. Ultimately, these subtleties arise because the entries of unitary matrices are not independent. In contrast, classical NNs are not subject to this constraint.
It is worth noting that our theorems have further implications beyond those discussed here. We envision that our methods and results will be useful in more general settings where Haar random unitaries / \(t\)-designs are considered, such as quantum information scramblers and black holes [24; 39; 40], many-body physics [41], quantum decouplers and quantum error correction [42]. Finally, we leave for future work several potential generalizations of our results. For instance, one could envision proving a general result that combines our Theorems 1, 2, and 3 into a single setting. Moreover, it could be interesting to study if GPs arise in other architectures such as quantum convolutional neural networks [43], or re-uploading circuits [30; 44], among others.
## IV Methods
### Infinitely-wide neural networks as Gaussian processes
Here we will briefly review the seminal work of Ref. [4], which proved that artificial NNs with a single infinitely-wide hidden layer form GPs. Our main motivation for reviewing this result is that, as we will see below, the simple technique used in its derivation cannot be directly applied to the quantum case.
For simplicity let us consider a network consisting of a single input neuron, \(N_{h}\) hidden neurons, and a single output neuron (see Fig. 1). The input of the network is \(x\in\mathbb{R}\), and the output is given by
\[f(x)=b+\sum_{l=1}^{N_{h}}v_{l}h_{l}(x)\,, \tag{12}\]
where \(h_{l}(x)=\phi(a_{l}+u_{l}x)\) models the action of each neuron in the hidden layer. Specifically, \(u_{l}\) is the weight between the input neuron and the \(l\)-th hidden neuron, \(a_{l}\) is the respective bias and \(\phi\) is some (non-linear) activation function such as the hyperbolic tangent or the sigmoid function. Similarly, \(v_{l}\) is the weight connecting the \(l\)-th hidden neuron to the output neuron, and \(b\) is the output bias. From Eq. (12) we can see that the output of the NN is a weighted sum of the hidden neurons' outputs plus some bias.
Next, let us assume that the \(v_{l}\) and \(b\) are taken i.i.d. from Gaussian distributions with zero mean and standard deviations \(\sigma_{v}/\sqrt{N_{h}}\) and \(\sigma_{b}\), respectively. Likewise, one can assume that the hidden neuron weights and biases are taken i.i.d. from some Gaussian distributions. Then, in the limit \(N_{h}\to\infty\), one can conclude via the central limit theorem that, since the NN output is a sum of infinitely many i.i.d. random variables, it will converge to a Gaussian distribution with zero mean and variance \(\sigma_{b}^{2}+\sigma_{v}^{2}\mathbb{E}[h_{l}(x)^{2}]\). Similarly, it can be shown that in the case of multiple inputs \(x_{1},\ldots,x_{m}\) one obtains a multivariate Gaussian distribution for \(f(x_{1}),\ldots,f(x_{m})\), i.e., a GP [4].
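This convergence is simple to observe numerically. The sketch below (our own illustration, not code from Ref. [4]) samples many such single-hidden-layer networks at a fixed input and tracks the excess kurtosis of the output, which tends to zero as \(N_{h}\) grows:

```python
import numpy as np

def sample_outputs(x, n_hidden, n_nets, rng):
    """Outputs f(x) = b + sum_l v_l * tanh(a_l + u_l x) over many random nets."""
    a = rng.normal(size=(n_nets, n_hidden))
    u = rng.normal(size=(n_nets, n_hidden))
    v = rng.normal(size=(n_nets, n_hidden)) / np.sqrt(n_hidden)  # sigma_v/sqrt(N_h)
    b = rng.normal(size=n_nets)
    return b + np.sum(v * np.tanh(a + u * x), axis=1)

rng = np.random.default_rng(0)
for n_hidden in (1, 10, 1000):
    f = sample_outputs(0.5, n_hidden, 10_000, rng)
    excess_kurtosis = np.mean(f ** 4) / np.var(f) ** 2 - 3
    print(n_hidden, round(float(excess_kurtosis), 3))  # -> 0 as N_h grows
```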
Naively, one could try to mimic the technique in Ref. [4] to prove our main results. In particular, we could start by noting that \(C_{j}(\rho_{i})\) can always be expressed as
\[C_{j}(\rho_{i})=\sum_{k,k^{\prime},r,r^{\prime}=1}^{d}u_{kk^{\prime}}\rho_{k^ {\prime}r}u_{r^{\prime}r}^{*}o_{r^{\prime}k}\,, \tag{13}\]
where \(u_{kk^{\prime}}\), \(u_{r^{\prime}r}^{*}\), \(\rho_{k^{\prime}r}\) and \(o_{r^{\prime}k}\) are the matrix entries of \(U\), \(U^{\dagger}\), \(\rho\) and \(O\), respectively. Although Eq. (13) is a summation over a large number of random variables, we cannot apply the central limit theorem (or its variants) here, since the matrix entries \(u_{kk^{\prime}}\) and \(u_{r^{\prime}r}^{*}\) are not independent [33].
In fact, the correlation between the entries in the same row, or column, of a Haar random unitary are of order \(\frac{1}{d}\), while those in different rows, or columns, are of order \(\frac{1}{d^{2}}\). This small, albeit critical, difference makes it such that we cannot simply use the central limit theorem to prove that \(\mathscr{C}\) converges to a GP. Instead, we need to rely on the techniques described in the main text.
### Learning with the Gaussian process
In this section we will review the basic formalism for learning with Gaussian processes. Let \(\mathbf{C}\) be a Gaussian process. Then, by definition, given a collection of inputs \(\{x_{i}\}_{i=1}^{m}\), \(\mathbf{C}\) is determined by its \(m\)-dimensional mean vector \(\mathbf{\mu}\), and its \(m\times m\)-dimensional covariance matrix \(\mathbf{\Sigma}\). In what follows we will assume that the mean of \(\mathbf{C}\) is zero, and that the entries of its covariance matrix are expressed as \(\kappa(x_{i},x_{i^{\prime}})\). That is,
\[P\left(\begin{pmatrix}C(x_{1})\\ \vdots\\ C(x_{m})\end{pmatrix}\right)=\mathcal{N}\left(\mathbf{\mu}=\begin{pmatrix}0\\ \vdots\\ 0\end{pmatrix},\;\mathbf{\Sigma}=\begin{pmatrix}\kappa(x_{1},x_{1})&\cdots&\kappa(x_{1},x_{m})\\ \vdots&&\vdots\\ \kappa(x_{m},x_{1})&\cdots&\kappa(x_{m},x_{m})\end{pmatrix}\right).\]
The previous expression tells us that, _a priori_, the distribution of values for any \(C(x_{i})\) will take the form
\[P(C(x_{i}))=\mathcal{N}(0,\sigma_{i}^{2})\,, \tag{14}\]
with \(\sigma_{i}^{2}=\kappa(x_{i},x_{i})\).
Now, let us consider the task of using \(m\) observations, which we will collect in a vector \(\mathbf{y}\), to predict the value at \(x_{m+1}\). First, if the observations are noiseless, then \(\mathbf{y}=(y(x_{1}),\cdots,y(x_{m}))\) is equal to \(\mathbf{C}=(C(x_{1}),\cdots,C(x_{m}))\). That is, \(\mathbf{C}=\mathbf{y}\). Here, we can use the fact that \(C\) forms a Gaussian process to find [9; 45]
\[P(C(x_{m+1})|\mathbf{C}) =P(C(x_{m+1})|C(x_{1}),C(x_{2}),\ldots,C(x_{m}))\] \[=\mathcal{N}\left(\mu(C(x_{m+1})),\sigma^{2}(C(x_{m+1}))\right)\,, \tag{15}\]
where \(\mu(C(x_{m+1}))\) and \(\sigma^{2}(C(x_{m+1}))\) respectively denote the mean and variance of the associated Gaussian probability distribution, and which are given by
\[\mu(C(x_{m+1})) =\mathbf{m}^{T}\cdot\mathbf{\Sigma}^{-1}\cdot\mathbf{C} \tag{16}\] \[\sigma^{2}(C(x_{m+1})) =\sigma^{2}_{m+1}-\mathbf{m}^{T}\cdot\mathbf{\Sigma}^{-1}\cdot\mathbf{m}\,. \tag{17}\]
The vector \(\mathbf{m}\) has entries \(\mathbf{m}_{i}=\kappa(x_{m+1},x_{i})\). We can compare Eqs. (14) and (15) to see that using Bayesian statistics to obtain the predictive distribution of \(P(C(x_{m+1})|\mathbf{C})\) shifts the mean from zero to \(\mathbf{m}^{T}\cdot\mathbf{\Sigma}^{-1}\cdot\mathbf{C}\) and the variance is decreased from \(\sigma^{2}_{m+1}\) by a quantity \(\mathbf{m}^{T}\cdot\mathbf{\Sigma}^{-1}\cdot\mathbf{m}\). The decrease in variance follows from the fact that we are incorporating knowledge about the observations, and thus decreasing the uncertainty.
In a realistic scenario, we can expect that noise will occur during our observation procedure. For simplicity we model this noise as Gaussian noise, so that \(y(x_{i})=C(x_{i})+\varepsilon_{i}\), where the noise terms \(\varepsilon_{i}\) are assumed to be independently drawn from the same distribution \(P(\varepsilon_{i})=\mathcal{N}(0,\sigma_{N}^{2})\). Now, since we have assumed that the noise is drawn independently, we know that the likelihood of obtaining a set of observations \(\mathbf{y}\) given the model values \(\mathbf{C}\) is given by \(P(\mathbf{y}|\mathbf{C})=\mathcal{N}(\mathbf{C},\sigma_{N}^{2}\mathds{1})\). In this case, we can find the probability distribution [9; 45]
\[P(C(x_{m+1})|\mathbf{y}) =\int d\mathbf{C}\,P(C(x_{m+1})|\mathbf{C})P(\mathbf{C}|\mathbf{y})\] \[=\int d\mathbf{C}\,P(C(x_{m+1})|\mathbf{C})P(\mathbf{y}|\mathbf{C})P(\mathbf{C})/P(\mathbf{y})\] \[=\mathcal{N}\left(\widetilde{\mu}(C(x_{m+1})),\widetilde{\sigma}^{2}(C(x_{m+1}))\right)\,, \tag{18}\]
where now we have
\[\widetilde{\mu}(C(x_{m+1})) =\mathbf{m}^{T}\cdot(\mathbf{\Sigma}+\sigma_{N}^{2}\mathds{1})^{-1}\cdot \mathbf{C} \tag{19}\] \[\widetilde{\sigma}^{2}(C(x_{m+1})) =\sigma_{m+1}^{2}-\mathbf{m}^{T}\cdot(\mathbf{\Sigma}+\sigma_{N}^{2} \mathds{1})^{-1}\cdot\mathbf{m}\,. \tag{20}\]
In the first and the second equality we have used the explicit decomposition of the probability, along with Bayes and marginalization rules. We can see that the probability is still governed by a Gaussian distribution but where the inverse of \(\mathbf{\Sigma}\) has been replaced by the inverse of \(\mathbf{\Sigma}+\sigma_{N}^{2}\mathds{1}\).
### Concentration of functions of QNN outcomes
In the main text we have evaluated the distribution of QNN outcomes and their linear combinations. However, in many cases one is also interested in evaluating a function of the elements of \(\mathscr{C}\). For instance, in a standard QML setting the QNN outcomes are used to compute some loss function \(\mathcal{L}(\mathscr{C})\) which one wishes to optimize [10; 11; 12; 13; 14]. While we do not aim here at exploring all possible relevant functions \(\mathcal{L}\), we will present two simple examples that showcase how our results can be used to study the distribution of \(\mathcal{L}(\mathscr{C})\), as well as its concentration.
First, let us consider the case when \(\mathcal{L}(C_{j}(\rho_{i}))=C_{j}(\rho_{i})^{2}\). It is well known that given a random variable with a Gaussian distribution \(\mathcal{N}(0,\sigma^{2})\), then its square follows a Gamma distribution \(\Gamma(\frac{1}{2},2\sigma^{2})\). Hence, we know that \(P(\mathcal{L}(C_{j}(\rho_{i})))=\Gamma(\frac{1}{2},2\sigma^{2})\). Next, let us consider the case when \(\mathcal{L}(C_{j}(\rho_{i}))=(C_{j}(\rho_{i})-y_{i})^{2}\) for \(y_{i}\in[-1,1]\). This case is relevant for supervised learning as the mean-squared error loss function is composed of a linear combination of such terms. Here, \(y_{i}\) corresponds to the label associated to the state \(\rho_{i}\). We can exactly compute all the moments of \(\mathcal{L}(C_{j}(\rho_{i}))\) as
\[\mathbb{E}_{G}[\mathcal{L}(C_{j}(\rho_{i}))^{k}]=\sum_{r=0}^{2k} \binom{2k}{r}\mathbb{E}_{G}[C_{j}(\rho_{i})^{r}](-y_{i})^{2k-r}\,, \tag{21}\]
for \(G=\mathbb{U}(d),\mathbb{O}(d)\). We can then use Lemma 2 to obtain
\[\mathbb{E}_{\mathbb{U}(d)}[C_{j}(\rho_{i})^{r}]=\frac{r!}{d^{r/2}2^{r/2}(r/2)!} =\frac{\mathbb{E}_{\mathbb{O}(d)}[C_{j}(\rho_{i})^{r}]}{2^{r/2}}\,,\]
if \(r\) is even, and \(\mathbb{E}_{\mathbb{U}(d)}[C_{j}(\rho_{i})^{r}]=\mathbb{E}_{\mathbb{O}(d)}[C_{j}( \rho_{i})^{r}]=0\) if \(r\) is odd. We obtain
\[\mathbb{E}_{\mathbb{U}(d)}[\mathcal{L}(C_{j}(\rho_{i}))^{k}]=\frac{2^{k}}{(-d) ^{k}}M\left(-k,\frac{1}{2},-\frac{dy^{2}}{2}\right)\,, \tag{22}\]
with \(M\) the Kummer's confluent hypergeometric function.
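The Gamma-distribution claim above is easy to verify by sampling; in the following minimal sketch, the value of \(\sigma^{2}\) is an arbitrary illustrative choice:

```python
import numpy as np
from scipy import stats

# Monte Carlo check: if C ~ N(0, sigma^2), then C^2 ~ Gamma(1/2, scale=2 sigma^2).
rng = np.random.default_rng(0)
sigma2 = 1 / 2 ** 10                     # sigma^2 = 1/d for a unitary QNN
c2 = rng.normal(scale=np.sqrt(sigma2), size=100_000) ** 2

print(c2.mean(), sigma2)                 # Gamma mean: (1/2) * 2 sigma^2 = sigma^2
# A large p-value indicates consistency with the Gamma(1/2, 2 sigma^2) fit.
print(stats.kstest(c2, "gamma", args=(0.5, 0, 2 * sigma2)).pvalue)
```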
Furthermore, we can also study the concentration of \(\mathcal{L}(C_{j}(\rho_{i}))\) and show that \(P\left(|\mathcal{L}(C_{j}(\rho_{i}))-\mathbb{E}_{\mathbb{U}(d)}(\mathcal{L}(C _{j}(\rho_{i})))|\geqslant c\right)\), where the average \(\mathbb{E}_{\mathbb{U}(d)}(\mathcal{L}(C_{j}(\rho_{i})))=y_{i}^{2}+\frac{1}{d}\), is in \(\mathcal{O}\left(\frac{1}{|\sqrt{c}+y_{i}|e^{d|\sqrt{c}+y_{i}|^{2}}\sqrt{d}}\right)\).
### Motivation for the generalized datasets
In Theorem 5 we generalized the results of Theorems 1 and 2 to hold on average when a) \(\mathbb{E}_{\rho_{i},\rho_{i^{\prime}}\sim\mathscr{D}}\operatorname{Tr}[\rho_ {i}\rho_{i^{\prime}}]\in\Omega\left(\frac{1}{\operatorname{poly}(\log(d))}\right)\) and b) \(\mathbb{E}_{\rho_{i},\rho_{i^{\prime}}\sim\mathscr{D}}\operatorname{Tr}[\rho _{i}\rho_{i^{\prime}}]=\frac{1}{d}\), respectively. Interestingly, these two cases have practical relevance. Let us start with Case a). Consider a multiclass classification problem, where each state \(\rho_{i}\) in \(\mathscr{D}\) belongs to one of \(Y\) classes, with \(Y\in\mathcal{O}(1)\), and where the dataset is composed of an (approximately) equal number of states from each class. That is, for each \(\rho_{i}\) we can assign a label \(y_{i}=1,\ldots,Y\). Then, we assume that the classes are well separated in the Hilbert feature space, a standard and sufficient assumption for the model to be able to solve the learning task [29; 32]. By well separated we mean that
\[\operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}] \in\Omega\left(\frac{1}{\operatorname{poly}(\log(d))}\right)\,, \quad\text{if}\quad y_{i}=y_{i^{\prime}}\,, \tag{23}\] \[\operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}] \in\mathcal{O}\left(\frac{1}{2^{n}}\right)\,,\quad\text{if}\quad y _{i}\neq y_{i^{\prime}}\,. \tag{24}\]
In this case, it can be verified that for any pair of states \(\rho_{i}\) and \(\rho_{i^{\prime}}\) sampled from \(\mathscr{D}\), one has \(\mathbb{E}_{\rho_{i},\rho_{i^{\prime}}\sim\mathscr{D}}[\operatorname{Tr}[ \rho_{i}\rho_{i^{\prime}}]]\in\Omega\left(\frac{1}{\operatorname{poly}(\log(d) )}\right)\).
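To make this explicit, note that with \(Y\) (approximately) equally populated classes, a random pair of states belongs to the same class with probability close to \(1/Y\), so that

\[\mathbb{E}_{\rho_{i},\rho_{i^{\prime}}\sim\mathscr{D}}[\operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}]]\approx\frac{1}{Y}\,\Omega\!\left(\frac{1}{\operatorname{poly}(\log(d))}\right)+\left(1-\frac{1}{Y}\right)\mathcal{O}\!\left(\frac{1}{2^{n}}\right)\in\Omega\!\left(\frac{1}{\operatorname{poly}(\log(d))}\right)\,,\]

since \(Y\in\mathcal{O}(1)\) and the cross-class term is exponentially suppressed.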
Next, let us evaluate Case b). Such a situation arises precisely when the states in \(\mathscr{D}\) are Haar random. Indeed, we can readily show that
\[\mathbb{E}_{\rho_{i},\rho_{i^{\prime}}\sim\mathscr{D}}[ \operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}]] =\mathbb{E}_{\rho_{i},\rho_{i^{\prime}}\sim\operatorname{Haar}}[ \operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}]]\] \[=\int_{\mathbb{U}(d)}d\mu(U)\int_{\mathbb{U}(d)}d\mu(V)\operatorname{Tr}\!\left[U\rho_{0}U^ {\dagger}V\rho_{0}^{\prime}V^{\dagger}\right]\] \[=\int_{\mathbb{U}(d)}d\mu(U)\operatorname{Tr}\!\left[U\rho_{0}U^ {\dagger}\rho_{0}^{\prime}\right]\] \[=\frac{\operatorname{Tr}[\rho_{0}]\operatorname{Tr}[\rho_{0}^{ \prime}]}{d}\] \[=\frac{1}{d}\,. \tag{25}\]
Here, in the first equality we have used that sampling Haar random pure states \(\rho_{i}\) and \(\rho_{i^{\prime}}\) from the Haar measure is equivalent to taking two reference states \(\rho_{0}\) and \(\rho_{0}^{\prime}\) and evolving them with Haar random unitaries. In the second equality we have used the left-invariance of the Haar measure, and in the third equality we have explicitly performed the integration (see the SI).
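Eq. (25) is also easy to verify numerically: a Haar random pure state can be sampled by normalizing a complex Gaussian vector, and for pure states \(\operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}]=|\langle\psi_{i}|\psi_{i^{\prime}}\rangle|^{2}\). A minimal numpy sketch (dimension and sample count are illustrative):

```python
import numpy as np

def haar_state(d, rng):
    # Haar random pure state: normalize a complex Gaussian vector.
    v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    return v / np.linalg.norm(v)

d, n_samples = 64, 20_000
rng = np.random.default_rng(0)
mean_overlap = np.mean([abs(np.vdot(haar_state(d, rng), haar_state(d, rng))) ** 2
                        for _ in range(n_samples)])
print(mean_overlap, 1 / d)  # both close to 1/d, as in Eq. (25)
```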
### Sketch of the proof of our main results
Since our main results are mostly based on Lemmas 1 and 2, we will here outline the main steps used to prove these Lemmas. In particular, to prove them we need to calculate, in the large \(d\) limit, quantities of the form
\[\mathbb{E}_{G}\left[\operatorname{Tr}\left[U^{\otimes k}\Lambda(U^{\dagger})^{ \otimes k}O^{\otimes k}\right]\right]\,, \tag{26}\]
for arbitrary \(k\), and for \(G=\mathbb{U}(d),\mathbb{O}(d)\). Here, the operator \(\Lambda\) is defined as \(\Lambda=\rho_{1}\otimes\cdots\otimes\rho_{k}\), where the pure states \(\rho_{i}\) belong to \(\mathscr{D}\), and where \(O\) is an operator in \(\mathscr{O}\). The first moment (\(k=1\)), \(\mathbf{\mu}\), and the second moments (\(k=2\)), \(\mathbf{\Sigma}^{G}\), can be directly computed using standard formulas for integration over the unitary and orthogonal groups (see the SI). This readily recovers the results in Lemma 1. However, for larger \(k\) a direct computation quickly becomes intractable, and we need to resort to asymptotic Weingarten calculations. More concretely, let us exemplify our calculations for the unitary group and for the case when the states in the dataset are such that \(\operatorname{Tr}\!\left[\rho_{i}\rho_{i^{\prime}}\right]\in\Omega\left(\frac{1}{\operatorname{poly}(\log(d))}\right)\) for all \(\rho_{i},\rho_{i^{\prime}}\in\mathscr{D}\). As shown in the SI, we can prove the following lemma.
**Lemma 3**.: _Let \(X\) be an operator in \(\mathcal{B}(\mathcal{H}^{\otimes k})\), the set of bounded linear operators acting on the \(k\)-fold tensor product of a \(d\)-dimensional Hilbert space \(\mathcal{H}\). Let \(S_{k}\) be the symmetric group on \(k\) items, and let \(P_{d}\) be the subsystem permuting representation of \(S_{k}\) in \(\mathcal{H}^{\otimes k}\). Then, for large Hilbert space dimension (\(d\to\infty\)), the twirl of \(X\) over \(\mathbb{U}(d)\) is_
\[\mathbb{E}_{\mathbb{U}(d)}[U^{\otimes k}X(U^{\dagger})^{\otimes k}]= \frac{1}{d^{k}}\sum_{\sigma\in S_{k}}\operatorname{Tr}[XP_{d}( \sigma)]P_{d}(\sigma^{-1})\] \[+\frac{1}{d^{k}}\sum_{\sigma,\pi\in S_{k}}c_{\sigma,\pi} \operatorname{Tr}[XP_{d}(\sigma)]P_{d}(\pi)\,,\]

_where the constants \(c_{\sigma,\pi}\) are in \(\mathcal{O}(1/d)\)._
We recall that the subsystem permuting representation of a permutation \(\sigma\in S_{k}\) is
\[P_{d}(\sigma)=\sum_{i_{1},\ldots,i_{k}=0}^{d-1}|i_{\sigma^{-1}(1)},\ldots,i_{ \sigma^{-1}(k)}\rangle\langle i_{1},\ldots,i_{k}|\,. \tag{27}\]
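The representation in Eq. (27) can be built explicitly for small \(k\) and \(d\). The numpy sketch below restricts to involutions (products of disjoint transpositions, for which \(\sigma=\sigma^{-1}\)) and, anticipating the counting argument below, checks that \(\operatorname{Tr}[P_{d}(\sigma)O^{\otimes k}]=d^{r}\) for the traceless involution \(O=Z\):

```python
import numpy as np
from itertools import product

def P(sigma, d):
    # Subsystem-permuting representation of Eq. (27). We only use involutions
    # (sigma = sigma^{-1}), so indexing with sigma itself is correct.
    k = len(sigma)
    op = np.zeros((d**k, d**k))
    for idx in product(range(d), repeat=k):
        dst = tuple(idx[sigma[j]] for j in range(k))
        op[np.ravel_multi_index(dst, (d,) * k),
           np.ravel_multi_index(idx, (d,) * k)] = 1.0
    return op

d = 2
Z = np.diag([1.0, -1.0])                  # traceless and Z^2 = identity
Ok = Z
for _ in range(3):
    Ok = np.kron(Ok, Z)                   # O^{\otimes 4}

print(np.trace(P((1, 0, 3, 2), d) @ Ok))  # (1 2)(3 4): r = 2 -> d^2 = 4
print(np.trace(P((0, 1, 2, 3), d) @ Ok))  # identity: odd cycle lengths -> 0
```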
Lemma 3 implies that (26) is equal to
\[\begin{split}\mathbb{E}_{\mathbb{U}(d)}&\left[\operatorname{ Tr}\left[U^{\otimes k}\Lambda(U^{\dagger})^{\otimes k}O^{\otimes k}\right] \right]\\ &=\frac{1}{d^{k}}\sum_{\sigma\in S_{k}}\operatorname{Tr}[\Lambda P _{d}(\sigma)]\operatorname{Tr}\bigl{[}P_{d}(\sigma^{-1})O^{\otimes k}\bigr{]} \\ &+\frac{1}{d^{k}}\sum_{\sigma,\pi\in S_{k}}c_{\sigma,\pi} \operatorname{Tr}[\Lambda P_{d}(\sigma)]\operatorname{Tr}\bigl{[}P_{d}(\pi)O^ {\otimes k}\bigr{]}\,.\end{split} \tag{28}\]
We now note that, by definition, since \(O\) is traceless and such that \(O^{2}=\openone\), then \(\operatorname{Tr}\bigl{[}P_{d}(\sigma)O^{\otimes k}\bigr{]}=0\) for odd \(k\) (and for all \(\sigma\)). This result implies that all the odd moments are exactly zero, and also that the non-zero contributions in Eq. (28) for the even moments come from permutations consisting of cycles of even length. We remark that as a direct consequence, the first moment, \(\mathbb{E}_{\mathbb{U}(d)}\left[\operatorname{Tr}\bigl{[}U\rho_{i}U^{\dagger} O\bigr{]}\right]\), is zero for any \(\rho_{i}\in\mathscr{D}\), and thus we have \(\boldsymbol{\mu}=\boldsymbol{0}\). To compute higher moments, we show that \(\operatorname{Tr}\bigl{[}P_{d}(\sigma)O^{\otimes k}\bigr{]}=d^{r}\) if \(k\) is even and \(\sigma\) is a product of \(r\) disjoint cycles of even length. The maximum of \(\operatorname{Tr}\bigl{[}P_{d}(\sigma)O^{\otimes k}\bigr{]}\) is therefore achieved when \(r\) is maximal, i.e., when \(\sigma\) is a product of \(k/2\) disjoint transpositions (cycles of length two), leading to \(\operatorname{Tr}\bigl{[}P_{d}(\sigma)O^{\otimes k}\bigr{]}=d^{k/2}\). Then, we look at the factors \(\operatorname{Tr}[\Lambda P_{d}(\sigma)]\) and include them in the analysis. We have that for all \(\pi\) and \(\sigma\) in \(S_{k}\),
\[\frac{1}{d^{k}}\Bigl{|}\bigl{(}c_{\sigma,\pi}\operatorname{Tr}[\Lambda P_{d}(\sigma)]+c_{\sigma^{-1},\pi}\operatorname{Tr}\bigl{[}\Lambda P_{d}(\sigma^{-1})\bigr{]}\bigr{)}\operatorname{Tr}\bigl{[}P_{d}(\pi)O^{\otimes k}\bigr{]}\Bigr{|}\in\mathcal{O}\left(\frac{1}{d^{\frac{k+2}{2}}}\right)\,. \tag{29}\]
Moreover, since \(\operatorname{Tr}[\rho_{i}\rho_{i^{\prime}}]\in\Omega\left(\frac{1}{\operatorname{poly}(\log(d))}\right)\) for all pairs of states \(\rho_{i},\rho_{i^{\prime}}\in\mathscr{D}\), it holds that
\[\frac{1}{d^{k}}\operatorname{Tr}[\Lambda P_{d}(\sigma)]\operatorname{Tr} \bigl{[}P_{d}(\sigma^{-1})O^{\otimes k}\bigr{]}\in\Omega\left(\frac{1}{d^{k/2 }}\right)\,, \tag{30}\]
if \(\sigma\) is a product of \(k/2\) disjoint transpositions, and
\[\begin{split}\frac{1}{d^{k}}\Bigl{|}& \operatorname{Tr}[\Lambda P_{d}(\sigma)]\operatorname{Tr}\bigl{[}P_{d}(\sigma^ {-1})O^{\otimes k}\bigr{]}+\\ &\operatorname{Tr}\bigl{[}\Lambda P_{d}(\sigma^{-1})\bigr{]} \operatorname{Tr}\bigl{[}P_{d}(\sigma)O^{\otimes k}\bigr{]}\Bigr{|}\in \mathcal{O}\left(\frac{1}{d^{\frac{k+2}{2}}}\right)\,,\end{split} \tag{31}\]
for any other \(\sigma\). We remark that if \(\sigma\) consists only of transpositions, then it is its own inverse, that is, \(\sigma=\sigma^{-1}\).
It immediately follows that for fixed \(k\) and \(d\to\infty\), the second sum in Eq. (28) is suppressed, relative to the first, at least inversely proportionally to the dimension of the Hilbert space (i.e., exponentially in the number of qubits for QNNs made out of qubits). Likewise, the contributions to the first sum in Eq. (28) coming from permutations that are not the product of \(k/2\) disjoint transpositions are also suppressed inversely proportionally to the Hilbert space dimension. Therefore, in the large \(d\) limit we arrive at
\[\mathbb{E}_{\mathbb{U}(d)}\left[\operatorname{Tr}\left[U^{\otimes k}\Lambda(U ^{\dagger})^{\otimes k}O^{\otimes k}\right]\right]=\frac{1}{d^{k/2}}\sum_{ \sigma\in T_{k}}\prod_{\{t,t^{\prime}\}\in\sigma}\operatorname{Tr}[\rho_{t} \rho_{t^{\prime}}]\,, \tag{32}\]
where we have defined \(T_{k}\subseteq S_{k}\) as the set of permutations that are exactly given by a product of \(k/2\) disjoint transpositions. Note that this is precisely the statement in Lemma 2.
From here we can easily see that if every state in \(\Lambda\) is the same, i.e., if \(\rho_{i}=\rho\) for \(i=1,\ldots,k\), then \(\operatorname{Tr}[\rho_{t}\rho_{t^{\prime}}]=1\) for all \(t,t^{\prime}\), and we need to count how many terms there are in Eq. (32). Specifically, we need to count how many different ways there are to split \(k\) elements into pairs (with \(k\) even). A straightforward calculation shows that
\[\sum_{\sigma\in T_{k}}\prod_{\{t,t^{\prime}\}\in\sigma}1=\frac{1}{(k/2)!}{k\choose 2,2,\ldots,2}=\frac{k!}{2^{k/2}(k/2)!}\,. \tag{33}\]
Thus, we arrive at
\[\mathbb{E}_{\mathbb{U}(d)}\left[\operatorname{Tr}\left[U^{\otimes k}\Lambda(U^{ \dagger})^{\otimes k}O^{\otimes k}\right]\right]=\frac{1}{d^{k/2}}\frac{k!}{2^ {k/2}(k/2)!}\,. \tag{34}\]
Identifying \(\sigma^{2}=\frac{1}{d}\) implies that the moments \(\mathbb{E}_{\mathbb{U}(d)}\left[\operatorname{Tr}\left[U\rho U^{\dagger}O \right]^{k}\right]\) exactly match those of a Gaussian distribution \(\mathcal{N}(0,\sigma^{2})\).
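Both the pairing count in Eq. (33) and the Gaussian moment match can be checked by brute force for small \(k\): fixed-point-free involutions of \(S_{k}\) are exactly the products of \(k/2\) disjoint transpositions, and \(k!/(2^{k/2}(k/2)!)=(k-1)!!\) reproduces the central moments of \(\mathcal{N}(0,\sigma^{2})\). A small sketch:

```python
from itertools import permutations
from math import factorial

def n_pairings(k):
    # Count fixed-point-free involutions of S_k, i.e. products of
    # k/2 disjoint transpositions.
    return sum(1 for p in permutations(range(k))
               if all(p[p[i]] == i and p[i] != i for i in range(k)))

for k in (2, 4, 6):
    closed_form = factorial(k) // (2**(k // 2) * factorial(k // 2))
    print(k, n_pairings(k), closed_form)   # 1 1, 3 3, 15 15
# These counts are the coefficients of the even Gaussian moments
# sigma^k * k!/(2^{k/2}(k/2)!), so Eq. (34) with sigma^2 = 1/d
# matches the moments of N(0, 1/d).
```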
To prove that these moments unequivocally determine the distribution of \(\mathscr{C}\), we use Carleman's condition.
**Lemma 4** (Carleman's condition, Hamburger case [34]).: _Let \(\gamma_{k}\) be the (finite) moments of the distribution of a random variable \(X\) that can take values on the real line \(\mathbb{R}\). These moments determine uniquely the distribution of \(X\) if_
\[\sum_{k=1}^{\infty}\gamma_{2k}^{-1/2k}=\infty. \tag{35}\]
Explicitly, we have
\[\sum_{k=1}^{\infty}\left(\frac{1}{d^{k}}\frac{(2k)!}{2^{k}k!} \right)^{-1/2k} =\sqrt{2d}\sum_{k=1}^{\infty}\left((2k)\cdots(k+1)\right)^{-1/2k}\] \[\geqslant\sum_{k=1}^{\infty}\left((2k)^{k}\right)^{-1/2k}\] \[=\sum_{k=1}^{\infty}\frac{1}{\sqrt{2k}}=\infty\,. \tag{36}\]
Hence, according to Lemma 4, Carleman's condition is satisfied, and \(P(C_{j}(\rho_{i}))\) is a Gaussian distribution.
A similar argument can be given to show that the moments of \(\mathscr{C}\) match those of a GP. Here, we need to compare Eq. (32) with the \(k\)-th order moments of a GP, which are provided by Isserlis' theorem [46]. Specifically, if we want to compute a \(k\)-th order moment of a GP, we have that \(\mathbb{E}[X_{1}X_{2}\cdots X_{k}]=0\) if \(k\) is odd, and
\[\mathbb{E}[X_{1}X_{2}\cdots X_{k}]=\sum_{\sigma\in T_{k}}\prod_{\{t,t^{ \prime}\}\in\sigma}\mathrm{Cov}[X_{t},X_{t^{\prime}}]\,, \tag{37}\]
if \(k\) is even. Clearly, Eq. (32) matches Eq. (37) upon identifying \(\mathrm{Cov}[X_{t},X_{t^{\prime}}]=\frac{\mathrm{Tr}[\rho_{t}\rho_{t^{\prime}} ]}{d}\). We can again prove that these moments uniquely determine the distribution of \(\mathscr{C}\): since its marginal distributions are determinate via Carleman's condition (see above), so is the distribution of \(\mathscr{C}\) [34]. Hence, \(\mathscr{C}\) forms a GP.
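Isserlis' theorem in Eq. (37) is likewise straightforward to verify by sampling; the sketch below checks the fourth moment \(\mathbb{E}[X_{1}^{2}X_{2}^{2}]=C_{11}C_{22}+2C_{12}^{2}\) for an illustrative covariance:

```python
import numpy as np

cov = np.array([[1.0, 0.3],
                [0.3, 1.0]])
rng = np.random.default_rng(0)
x = rng.multivariate_normal([0.0, 0.0], cov, size=2_000_000)

empirical = np.mean(x[:, 0] ** 2 * x[:, 1] ** 2)
isserlis = cov[0, 0] * cov[1, 1] + 2 * cov[0, 1] ** 2   # Eq. (37) with X3=X1, X4=X2
print(empirical, isserlis)                               # both close to 1.18
```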
## Acknowledgements
We acknowledge Francesco Caravelli, Frederic Sauvage, Lorenzo Leone, and Cinthia Huerta for useful conversations. D.G-M. was supported by the Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory (LANL) under project number 20230049DR. M.L. acknowledges support by the Center for Nonlinear Studies at Los Alamos National Laboratory (LANL). M.C. acknowledges support by the LDRD program of LANL under project number 20230527ECR. This work was also supported by LANL ASC Beyond Moore's Law project.
|
2310.19919 | Meta-Learning Strategies through Value Maximization in Neural Networks | Biological and artificial learning agents face numerous choices about how to
learn, ranging from hyperparameter selection to aspects of task distributions
like curricula. Understanding how to make these meta-learning choices could
offer normative accounts of cognitive control functions in biological learners
and improve engineered systems. Yet optimal strategies remain challenging to
compute in modern deep networks due to the complexity of optimizing through the
entire learning process. Here we theoretically investigate optimal strategies
in a tractable setting. We present a learning effort framework capable of
efficiently optimizing control signals on a fully normative objective:
discounted cumulative performance throughout learning. We obtain computational
tractability by using average dynamical equations for gradient descent,
available for simple neural network architectures. Our framework accommodates a
range of meta-learning and automatic curriculum learning methods in a unified
normative setting. We apply this framework to investigate the effect of
approximations in common meta-learning algorithms; infer aspects of optimal
curricula; and compute optimal neuronal resource allocation in a continual
learning setting. Across settings, we find that control effort is most
beneficial when applied to easier aspects of a task early in learning; followed
by sustained effort on harder aspects. Overall, the learning effort framework
provides a tractable theoretical test bed to study normative benefits of
interventions in a variety of learning systems, as well as a formal account of
optimal cognitive control strategies over learning trajectories posited by
established theories in cognitive neuroscience. | Rodrigo Carrasco-Davis, Javier Masís, Andrew M. Saxe | 2023-10-30T18:29:26Z | http://arxiv.org/abs/2310.19919v2 | # Meta-Learning Strategies through Value Maximization in Neural Networks
###### Abstract
Biological and artificial learning agents face numerous choices about how to learn, ranging from hyperparameter selection to aspects of task distributions like curricula. Understanding how to make these'meta-learning' choices could offer normative accounts of cognitive control functions in biological learners and improve engineered systems. Yet optimal strategies remain challenging to compute in modern deep networks due to the complexity of optimizing through the entire learning process. Here we theoretically investigate optimal strategies in a tractable setting. We present a _learning effort_ framework capable of efficiently optimizing control signals on a fully normative objective: discounted cumulative performance throughout learning. We obtain computational tractability by using average dynamical equations for gradient descent, available for simple neural network architectures. Our framework accommodates a range of meta-learning and automatic curriculum learning methods in a unified normative setting. We apply this framework to investigate the effect of approximations in common meta-learning algorithms; infer aspects of optimal curricula; and compute optimal neuronal resource allocation in a continual learning setting. Across settings, we find that control effort is most beneficial when applied to easier aspects of a task early in learning; followed by sustained effort on harder aspects. Overall, the learning effort framework provides a tractable theoretical test bed to study normative benefits of interventions in a variety of learning systems, as well as a formal account of optimal cognitive control strategies over learning trajectories posited by established theories in cognitive neuroscience.
## 1 Introduction
Deploying a learning system requires making many considered decisions about hyperparameters, architectures, and dataset properties. As learning systems have grown more complex, so have these decisions about how to learn. One approach to managing this complexity is to place these decisions under the control of the agent and meta-learn them. Building on this strategy, a range of meta-learning algorithms have been developed that are capable of fast adaptation to new tasks within a distribution (Finn et al., 2017; Nichol et al., 2018), continual learning (Parisi et al., 2019), and multitasking (Crawshaw, 2020). Meta-learning methods target diverse aspects of a learning system: they can adapt hyperparameters (Franceschi et al., 2018; Baik et al., 2020; Zucchet & Sacramento, 2022); learn weight initializations well-suited to a task distribution (Finn et al., 2017; Baik et al., 2020); manage different modules or architectural components (Andreas et al., 2017); enhance exploration (Gupta et al., 2018; Liu et al., 2021); and order tasks into a suitable curriculum (Stergiadis et al., 2021; Zhang et al., 2022). While this prior work has shown that meta-learning can bring important performance benefits, algorithms are often hand-designed for a specific intervention and a large gap remains in our theoretical understanding of how meta-learning operates (see App. A).
The aim of this paper is to develop a normative framework for investigating optimal meta-strategies in neural networks, implemented in biological and artificial agents. A core difficulty in computing optimal strategies is the complexity of optimizing through the learning process. To tackle this problem, we simplify the inner-loop learning dynamics using simpler tractable network models. We specifically study meta-learning dynamics in deep linear networks, which exhibit complex non-linear dynamics sharing properties of observed non-linear network dynamics (Saxe et al., 2019; Braun et al., 2022). Examining this problem in a reduced setting, we derive optimal meta-learning strategies under various control designs and meta-learning scenarios. We concentrate on questions that are pertinent to the cognitive control literature, such as learning effort allocation, task switching, and attention to multiple tasks. The Expected Value of Control Theory (EVC; Shenhav et al., 2013, 2017; Musslick et al., 2020; Masis et al., 2021) has proposed answers to these questions. It posits that higher-level areas in the brain perform executive functions (cognitive control) over lower-level areas to maximize the cumulative return. The framework we present is a formal and computationally tractable example of the EVC theory that takes into account the impact of the control signal on future learning dynamics (see Appendix E).
**Main contributions**
\(\bullet\): We develop a computationally tractable _learning effort_ framework1 to study diverse and complex meta-learning interventions that normatively maximize value throughout learning.
Footnote 1: Anonymized Python package at [https://anonymous.4open.science/r/neuromod-6A3C/](https://anonymous.4open.science/r/neuromod-6A3C/) for reproducibility.
\(\bullet\): We fully solve learning dynamics as a function of control variables for simple models, and use this to derive efficient optimization procedures that maximize discounted performance throughout learning.
\(\bullet\): We express meta-learning algorithms such as Model Agnostic Meta-Learning (Finn et al., 2017) and Bilevel Programming (Franceschi et al., 2018) in our framework, studying the impact of approximations on their performance.
\(\bullet\): We compute optimized control strategies for a range of settings spanning continual learning, multi-tasking, and curriculum learning, and examine these normative strategies.
\(\bullet\): Due to this framework's normative goal of maximizing expected return, we draw qualitative connections to phenomena in cognitive neuroscience such as task engagement, mental effort, and cognitive control (Shenhav et al., 2013, 2017; Lieder et al., 2018; Masis et al., 2021).
## 2 Learning Effort Framework
We start by defining our framework in a general abstract way before turning to a simple example in Section 2.1. The generality of this description allows the framework to apply to a variety of different settings of interest spanning machine learning (Section 4) and cognitive control (Sections 5 and 6). Consider a learning model trained on a task \(\mathcal{T}\) for a period of time \(T\). Two equations define the learning model. We define the input-output mapping \(f\) and learning dynamics \(h\) as
\[\hat{Y}=f(X;w(t),g(t)),\quad\tau_{w}\frac{dw(t)}{dt}=h(w(t),g(t),\mathcal{T}) \tag{1}\]
respectively. In the first equation, \(f\) is a continuously differentiable function, \(X\) the input, and \(\hat{Y}\) the output. Here \(w(t)\) are the parameters of the learning model (e.g. weights in a neural network) during training with \(T\geq t\geq 0\). We introduce \(g(t)\) as an _effort signal_ (or _control signal_) that crucially will be chosen by the meta-learning optimization. This vector of control signals can model a number of interventions in the learning system, and will be chosen to maximize cumulative learning performance. The learning dynamics equation describes the evolution of the parameters during training and is given by a differential equation over the parameters of the learning model, where \(h\) is a continuously differentiable function; the evolution of the learned parameters \(w(t)\) (starting at \(w(0)\)) may depend on the control signal \(g(t)\) and the task parameters \(\mathcal{T}\).
Given this setup, we can understand the control signal \(g(t)\) as a meta parameter that can be chosen in different ways, and which influences the network's input-output map and learning behavior. In the cognitive neuroscience literature, this could take the form of controlled attention or neural activity modulation. To determine how we choose \(g(t)\), we define a task performance metric during the learning period \(\mathcal{P}(t)\) (e.g. mean squared error during regression). Further, we assume that using the control signal \(g(t)\) is costly, according to a cost function \(C(g(t))\), as commonly used in control theory to describe, for instance, the energy resources needed to exert control, or the mental effort allocated to produce sustained engagement on a task. At any time during the learning of the task \(\mathcal{T}\) we consider an instant reward rate \(R(t)=\eta\mathcal{P}(t)\), where \(\eta\) is a constant that converts performance on the task \(\mathcal{P}(t)\) to reward/time units. We define the instant net reward rate as the difference between scaled performance and the cost of control \(v(t)=R(t)-C(g(t))\). The expected return or value function at the start of training can then be written as the cumulative discounted reward from learning and performing the task from time \(t=0\) to \(t=T\), with a discount factor \(1\geq\gamma>0\),
\[V=\int_{0}^{T}dt\gamma^{t}v(t)=\int_{0}^{T}dt\gamma^{t}\left[\eta\mathcal{P}(t )-C(g(t))\right]. \tag{2}\]
We emphasize this value function measures performance across the whole learning period. Finally, we posit that the goal of meta-learning is to choose \(g(t)\) to maximize the value function in equation 2. To find an approximately optimal \(g(t)\), we take gradient steps
\[g_{k+1}(t)=g_{k}(t)+\alpha_{g}\frac{dV}{dg(t)}. \tag{3}\]
for every \(0\geq t\geq T\), \(k\) being the iteration index. The optimal \(g(t)\) thus depends on a complex interplay of past and future values of the control signal, and how these interact with the whole trajectory of learning. Indeed, computing the gradient in equation 3 is computationally intractable in general. In the remainder of the paper, we carefully choose learning models and settings with rich dynamics but for which we have partial analytical tractability of the learning dynamics, such that efficient computation of the full control signal through time is possible. Further details on the algorithm implemented and estimation of involved quantities can be found in App. C and D.
Figure 1: Learning effort framework. A neural network is under the influence of a control signal \(g(t)\). This control signal is optimized iteratively by initializing \(g(t)\), then: (1) Solving learning dynamics in Eq. equation 1; (2) Computing the performance \(\mathcal{P}(t)\); (3) Integrating performance and control cost to compute the exact cumulative return \(V\) in Eq. 2; (4) Taking the gradient of \(V\) with respect to the control signal \(g(t)\) and update as in Eq. 3, then go back to (1). **(b)**: Multi-step MAML. **(c)**: Learning rate optimization as in Bilevel Programming. **(d)**: Task engagement, where the control signal determines the optimal amount of _engagement_ through time to multiple regression tasks. **(f)**: Category assimilation, where a model is trained to learn a classification task and can control the _engagement_ on each class \(c\) throughout training. **(e)**: Effort allocation, where the control signal (gain modulation of weights) is computed to maximize value throughout the learning of a single task. **(g)**: Task switching, where the gain modulation model is trained to switch tasks repeatedly and the control signal is computed throughout the switches.
By appropriate choice of how \(g(t)\) influences the network and learning dynamics, this general framework can accommodate a variety of possible interventions on a learning system. Some interventions correspond to other meta-learning algorithms such as Multi-Step MAML and Bilevel Programming (Fig. 1b and c). Further connection to the meta-learning literature in machine learning can be found in App. F. The results in subsequent sections investigate several scenarios illustrated in Fig. 1d to g. All of these experiments are variations on the influence of the control signal over the learning dynamics, keeping the rest of the framework as is.
### Single Neuron Example
Having described the general framework, we now turn to a simple case to illustrate it, yet with complex emergent solutions. We consider a _single neuron learning model_ trained on a _two-Gaussians regression task_ where the control signal acts as a _weight gain modulation_. This case offers insights regarding the dependence of the optimal control signal on task parameters and learning model hyperparameters.
**Two Gaussians regression task:** A dataset of examples \(i=1,\cdots,P\) is drawn as follows: A label \(y_{i}\) is first sampled as either \(+1\) or \(-1\) with probability \(1/2\). The input \(x_{i}\) is then sampled from a Gaussian \(x_{i}\sim\mathcal{N}(y_{i}\cdot\mu_{x},\sigma_{x}^{2})\). The task is to predict \(y_{i}\) from the value of \(x_{i}\). The intrinsic difficulty of the task is controlled by how much the Gaussians overlap, controlled by the relative value of \(\mu_{x}\) and \(\sigma_{x}\).
**Single neuron learning model:** The input-output mapping of our single neuron model is \(\hat{y}_{i}=x_{i}\cdot w(t)\left[1+g(t)\right]\), \(w(t)\) is our learned weight parameter, and \(g(t)\) is the control signal which acts as a multiplicative gain. The learning dynamics of \(w(t)\) are given by gradient descent on the loss function \(\mathcal{L}=\frac{1}{2}(y_{i}-\hat{y}_{i})^{2}+\frac{\lambda}{2}w(t)^{2}\). Taking the gradient flow limit (small learning rate (Saxe et al., 2019; Elkabetz & Cohen, 2021)), we find average learning dynamics for the weight described by
\[\tau_{w}\frac{dw}{dt}=-\left\langle\frac{\partial\mathcal{L}}{\partial w} \right\rangle=\mu_{x}\tilde{g}(t)-w(t)\left(\left\langle x^{2}\right\rangle \tilde{g}^{2}(t)+\lambda\right) \tag{4}\]
where \(\tilde{g}(t)=1+g(t)\), \(\left\langle\cdot\right\rangle\) denotes expectation over the data distribution, and \(\tau_{w}\) is the learning time scale of the weight. This gradient depends on \(g(t)\), making the learning dynamics of \(w(t)\) dependent on the control signal. For the single neuron model, we can find a closed form expression for \(w(t)\) as a function of the control signal \(g(t)\), giving us an expression for \(\left\langle\mathcal{L}(t)\right\rangle\) as well (see App. G.1). This tractability allows us to compute average dynamics and the necessary gradient efficiently.
**Control signal optimization**: As a performance and control measure for this model we use \(\mathcal{P}(t)=-\left\langle\mathcal{L}(t)\right\rangle\), \(C(g(t))=\beta g(t)^{2}\) respectively, meaning smaller loss leads to better performance and exerting control has a cost that is monotonic in the control signal magnitude, with cost per unit of control \(\beta\). Note that if \(g(t)=0\) for all \(T\geq t\geq 0\), then \(C(g(t))=0\), and \(\hat{y}_{i}=x_{i}\cdot w(t)\), which means that the weight is learned purely by gradient descent with no influence from the control signal; we call this the _Baseline_ model. Having \(\mathcal{P}(t)\) and \(C(g(t))\), we can compute the value function in equation 2 and find the optimal \(g(t)\) by gradient ascent following equation 3 (algorithm described in App. D). In essence, this setting considers a simple learning scenario in which an agent can adjust the gain of the weights in a neural network that otherwise learns via gradient descent.
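A minimal numpy sketch of this optimization loop is shown below. It discretizes Eq. (4), evaluates the value function of Eq. (2) for the two-Gaussians task, and ascends the gradient of Eq. (3) via finite differences. All parameter values here are illustrative assumptions, not the paper's settings (those are in App. D).

```python
import numpy as np

# Illustrative parameters (assumed for this sketch; see App. D for the paper's).
mu_x, sig_x, lam = 1.0, 1.0, 0.05
tau_w, eta, beta, gamma = 1.0, 1.0, 0.2, 0.99
T, N = 10.0, 100
dt = T / N
x2 = mu_x**2 + sig_x**2                      # <x^2> for the two-Gaussians task

def value(g):
    """Discounted cumulative net reward V (Eq. 2) for a control trajectory g."""
    w, V = 0.0, 0.0
    for t in range(N):
        gt = 1.0 + g[t]
        w_eff = w * gt                       # effective weight w(t)(1 + g(t))
        loss = 0.5 * (1.0 - 2.0 * w_eff * mu_x + w_eff**2 * x2) + 0.5 * lam * w**2
        V += dt * gamma ** (t * dt) * (eta * (-loss) - beta * g[t] ** 2)
        w += (dt / tau_w) * (mu_x * gt - w * (x2 * gt**2 + lam))  # Eq. (4)
    return V

g = np.zeros(N)                              # start from the baseline g(t) = 0
for _ in range(100):                         # Eq. (3) via finite differences
    base = value(g)
    grad = np.array([(value(g + 1e-5 * np.eye(N)[i]) - base) / 1e-5
                     for i in range(N)])
    g += 0.5 * grad                          # optimized g front-loads control, cf. Fig. 2c
```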
**Results:** In Fig. 2a, we show the difference in instant net reward \(v(t)\) for the baseline (\(g(t)=0\) for every \(t\)) and the control case (optimizing \(g(t)\)). The optimal meta-learning strategy that maximizes expected return in equation 2 invests more control at the start of the learning period (Fig. 2c) at the cost of some instant reward, resulting in faster learning (demonstrated by the lower loss for the control case in Fig. 2b). The control signal \(g(t)\) influences the instant net reward rate both at the present \(t\) and at future \(t^{\prime}>t\). The instant change in net reward rate \(v(t)\) will be caused both by the instant change in the effective weight \(\tilde{w}(t)=w(t)\cdot(1+g(t))\) (Fig. 2d) and by \(C(g(t))\), bringing the effective weight \(\tilde{w}(t)\) closer to the solution at early stages. As expected, increasing the discount factor \(\gamma\) leads to higher levels of control, since future net reward will contribute more to the cumulative expected return, compensating for the cost of increasing \(g(t)\) (Fig. 2e,f). Increasing the intrinsic noise of the task \(\sigma_{x}\) reduces the overall optimal control (Fig. 2g,h). Because it is not possible to
overcome this noise, the use of control will generate a cost that cannot be compensated by boosting learning. This inter-temporal choice of allocating effort based on the prospect of future reward has been widely studied in psychology and neuroscience (Masis et al., 2021; Keidel et al., 2021; Fromer et al., 2021; Masis et al., 2023) (App. A), and naturally arises from maximizing the discounted cumulative performance in Eq. 2. For more parameter variations see Fig. 9 in App. K.1.
## 3 Baseline Deep Linear Networks and Datasets
We now generalize the single neuron approach to more complex neural networks. In the case of a two-layer linear neural network, the corresponding input-output mapping in Eq. 1 is \(\hat{Y}=W_{2}(t)W_{1}(t)X\), where \(X\in\mathbb{R}^{I}\), \(\hat{Y}\in\mathbb{R}^{O}\), \(W_{1}(t)\in\mathbb{R}^{H\times I}\) and \(W_{2}(t)\in\mathbb{R}^{O\times H}\) are the first and second layer weights. Training a two-layer network to minimize MSE with weight regularization and taking the gradient flow limit yields the learning dynamics equations (Saxe et al., 2019; Braun et al., 2022)
\[\tau_{w}\frac{dW_{1}}{dt}=W_{2}^{T}\left(\Sigma_{xy}^{T}-W_{2}W_{1}\Sigma_{x} \right)-\lambda W_{1},\ \ \tau_{w}\frac{dW_{2}}{dt}=\left(\Sigma_{xy}^{T}-W_{2}W_{1}\Sigma_{x} \right)W_{1}^{T}-\lambda W_{2} \tag{5}\]
where \(\Sigma_{xy}=\left\langle XY^{T}\right\rangle\), \(\Sigma_{x}=\left\langle XX^{T}\right\rangle\), \(\tau_{w}\) is a learning time-scale of the weights, and \(\lambda\) controls the weight regularization (see App. H). Learning is completely defined by the initial weights \(W_{1}(0)\), \(W_{2}(0)\), the task at hand, and the hyperparameters; it follows non-linear dynamics due to weight coupling and a non-convex loss landscape while remaining computationally tractable. Other frameworks such as the teacher-student setting (Goldt et al. 2019; Ye and Bors 2022) or mean field theory approximations (Mignacco et al. 2020; Bordelon and Pehlevan 2022) provide closed-form dynamics for non-linear networks, but rely on assumptions about task structure and on taking architecture size limits (App. A). With the general framework and tractable models in hand, we now turn to probe the behavior of this two-layer network in a variety of settings. First, in Section 4, we draw out implications for standard meta-learning algorithms. Next, in Section 5 we turn to aspects of curriculum learning and the choice of which tasks to engage with. Finally, in Section 6 we study control interventions in the form of gain modulation throughout a network, of relevance to theories in neuroscience. In all of these sections, we compose meta-learning tasks from a base set of three datasets: (1) **Correlated Gaussian** regression, (2) **Semantic Tasks** with hierarchical concepts, and (3) **MNIST** (details in App. J), from which we can determine the statistics needed to compute the learning dynamics (e.g. \(\Sigma_{x}\) and \(\Sigma_{xy}\)).
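For reference, the gradient-flow dynamics of Eq. (5) can be integrated with a simple explicit Euler scheme. Below is a numpy sketch with random input-output statistics; in the paper the statistics come from the three datasets above, and all sizes here are illustrative.

```python
import numpy as np

def integrate(W1, W2, Sx, Sxy, lam=0.0, tau_w=1.0, dt=0.01, steps=2000):
    """Explicit Euler integration of the two-layer dynamics in Eq. (5)."""
    for _ in range(steps):
        E = Sxy.T - W2 @ W1 @ Sx                 # shared error term
        dW1 = (W2.T @ E - lam * W1) / tau_w
        dW2 = (E @ W1.T - lam * W2) / tau_w
        W1, W2 = W1 + dt * dW1, W2 + dt * dW2
    return W1, W2

# Toy usage: I inputs, H hidden units, O outputs, random data statistics.
rng = np.random.default_rng(0)
I_dim, H, O, P = 5, 4, 3, 200
X = rng.standard_normal((I_dim, P))
Y = rng.standard_normal((O, P))
Sx, Sxy = X @ X.T / P, X @ Y.T / P               # Sigma_x and Sigma_xy
W1 = 0.1 * rng.standard_normal((H, I_dim))
W2 = 0.1 * rng.standard_normal((O, H))
W1, W2 = integrate(W1, W2, Sx, Sxy)
```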
Figure 2: Results in single neuron model throughout the learning period \(0\geq t\geq T\). **(a)** Instant net reward \(v(t)\). **(b)** Loss \(\left\langle\mathcal{L}(t)\right\rangle\) for theoretical predictions (solid) and simulations using SGD (shaded). **(c)** Optimal control signal decreases through learning (Baseline \(g(t)=0\)). **(d)** Weight \(w(t)\) through learning for control and baseline case, \(\hat{w}(t)=w(t)\cdot(1+g(t))\). Dependence of optimal control signal on task parameters. **(e)** and **(g)**: optimal \(g(t)\) when varying discount factor \(\gamma\) and noise level \(\sigma_{x}\) respectively. **(f)** and **(h)**: Difference between instant net rewards \(v(t)\) between control and baseline when varying \(\gamma\) and \(\sigma_{x}\) respectively. Longer time horizons and less noisy tasks recruit more control.
Relation to Meta-Learning Algorithms in Machine Learning
The normative objective in Eq. 2, and the way it is maximized through gradient steps on the control signal \(g(t)\), can describe other meta-learning algorithms. Here we show the connections to two well-established algorithms, Model Agnostic Meta-Learning (MAML; Finn et al., 2017) and Bilevel Programming (Franceschi et al., 2018).
MAML is an instance of our framework where the initial weights \(W_{1}(0)\) and \(W_{2}(0)\) in our deep linear network _are_ the control signal \(g(t)\). By defining the performance as the average loss per task indexed by \(\tau\), \(\mathcal{P}(t)=\sum_{\tau}\left\langle\mathcal{L}_{\tau}(t)\right\rangle\), this becomes the meta-objective in MAML when considering only one step ahead in the value function; that is, \(V_{\text{MAML}}=\mathcal{P}(\delta t)\), with \(\delta t\) the time after one gradient update on \(g(t)\) (see App. F.1). Our framework can also optimize performance after multiple gradient steps, thereby obtaining Multi-Step MAML in a computationally tractable setting (Fig. 1b). We used the two-layer linear network (Section 3) and a set of 5 binary regression tasks with different pairs of digits from MNIST (App. F.1) to simulate Multi-Step MAML. Results in Fig. 3a and b show that the standard MAML loss \(V_{\text{MAML}}\) changes depending on how many steps ahead are considered during the initial weights optimization. \(V_{\text{MAML}}\) decreases when considering a few steps ahead, increasing the capacity to optimize the dynamics. After a certain number of steps considered in the optimization, \(V_{\text{MAML}}\) increases, sacrificing immediate performance to optimize the dynamics in steps further away, as shown in Fig. 3b. These multi-step results are only possible due to the tractability of our setting. We see that one-step MAML can substantially underperform Multi-Step MAML.
We also optimized hyperparameters of the network throughout training. Bilevel Programming optimization can compute this, with the main distinction being the reverse-hypergradient method used to update the meta-parameters (control signal) (Franceschi et al., 2017, 2018). We extend these methods by adding features with intuitive meaning under our normative framework, such as the discount factor \(\gamma\) and control cost \(C\) (App. F.2). We optimized the learning rate throughout time to maximize the cumulative reward in equation 2, and varied \(\gamma\) and \(\beta\) as in the single neuron example to illustrate the normative meaning in a hyperparameter optimization context (Fig. 1c). We observed qualitatively similar behavior to the single neuron model: longer time horizons and a lower cost of increasing the learning rate recruit more control. Our work provides further utility to these meta-learning algorithms by interpreting them under a normative value-based framework. We note that there is a family of meta-learning phenomena that is not explicitly described by our framework, namely, emergent meta-learning agents in which no outer loop or meta-variable is explicitly optimized (Wang et al., 2017). We extend this discussion in App. F.3.
## 5 Engagement Modulation
Next we turn to the question of which tasks among many to engage with over time. We provide the model control over its _engagement_ with a set of available tasks, or with individual classes in a classification problem, during learning. Selecting the optimal control signal in this setting involves improving multi-task capabilities and estimating an optimal curriculum. Consider a set of \(N_{\tau}\) datasets, and a loss function \(\mathcal{L}(\hat{Y}_{\tau},Y_{\tau})\), where \(\hat{Y}_{\tau}\) is the estimation of a model and \(Y_{\tau}\) is the required target for dataset \(\tau\). The average loss for a set of datasets is \(\mathcal{L}=\sum_{\tau=1}^{N_{\tau}}\mathcal{L}(\hat{Y}_{\tau},Y_{\tau})+ \mathcal{R}(W)\) which is used to measure the performance \(\mathcal{P}(t)=-\left\langle\mathcal{L}(t)\right\rangle\) only, and we assume that weights are updated via gradient descent on the auxiliary loss \(\mathcal{L}_{\text{aux}}=\sum_{\tau=1}^{N_{\tau}}\psi_{\tau}(t)\mathcal{L}( \hat{Y}_{\tau},Y_{\tau})+\mathcal{R}(W)\), where \(\psi_{\tau}(t)\) are control signals we call _engagement coefficients_, and \(\mathcal{R}(W)\)
Figure 3: **(a)**: Single step MAML loss \(V=\mathcal{P}(\delta t)\) when considering more steps in the learning dynamics. **(b)**: Resulting learning dynamics from initial parameters found with Multi-Step MAML. **(c) and (d)**: Optimal learning rate when varying discount factor \(\gamma\) and cost coefficient \(\beta\).
is a weight decay regularizer. We use this auxiliary loss only to obtain learning dynamics equations that explicitly depend on the engagement coefficients \(\psi_{\tau}(t)\). Assuming the network receives inputs from all of the datasets at the same time (concatenated in \(X\)) and has specific outputs allocated to each dataset (concatenated in \(Y\)) as schematized in Fig. 1d, we can derive the learning dynamics equations for the weights as a function of \(\psi_{\tau}(t)\) giving
\[\tau_{w}\frac{dW_{1}}{dt} =\sum_{\tau}\psi_{\tau}(t)W_{2\tau}^{T}\left(\Sigma_{xy\tau}^{T}- W_{2\tau}W_{1}\Sigma_{x}\right)-\lambda W_{1},\] \[\tau_{w}\frac{dW_{2}}{dt} =\sum_{\tau}\psi_{\tau}(t)\left(\Sigma_{xy\tau}^{T}-W_{2\tau}W_{1 }\Sigma_{x}\right)W_{1}^{T}-\lambda W_{2}, \tag{6}\]
where \(W_{2\tau}\) denotes the weights of the neurons for the output to dataset \(\tau\) and \(\Sigma_{xy\tau}\) is \(\left\langle XY_{\tau}^{T}\right\rangle\), both padded with zeros to preserve dimension (see App. H.2). Each of the \(\psi_{\tau}(t)\) modulates the amount of learning of each dataset. The auxiliary loss is used only to obtain the learning dynamics, avoiding the trivial solution \(\psi_{\tau}=0\) that would otherwise minimize the loss. We can find the optimal \(\psi_{\tau}(t)\) throughout learning by computing \(\mathcal{P}(t)\), using \(C(\psi(t))=\beta\|\mu_{\psi}-\bar{\psi}(t)\|^{2}\) (\(\bar{\psi}=(\psi_{1}(t),\psi_{2}(t),...)\)), then taking gradient steps on \(\psi_{\tau}(t)\) to maximize \(V\). Taking \(\mu_{\psi}=0\) means that to learn a dataset \(\tau\) (\(\psi_{\tau}(t)>0\)) the agent must pay a cost. We call this case _active engagement_. For \(\mu_{\psi}=1\), the agent must pay a cost to increase or suppress the learning signal from a specific dataset relative to a baseline. We call this case _attentive engagement_. In these cases, each of the elements in \(\bar{\psi}(t)\) is forced to stay in a certain range independently. Finally, we can force \(\bar{\psi}(t)\) to be of a fixed norm by making the cost \(C(\bar{\psi}(t))=\beta\left(\|\bar{\psi}(t)\|^{2}-\Psi\right)^{2}\), such that there is a fixed overall amount of engagement to distribute. We call this case _vector engagement_. For category engagement, which focuses on particular subclasses in a classification problem, a similar set of equations can be derived (see App. H.3), where the engagement on class \(c\) through learning is denoted by \(\phi_{c}(t)\) (Fig. 1f). The meta-learning tasks used to train this model are the following:
**Task engagement**: Given a set of \(N_{\tau}\) datasets, and a total training period of \(T\), we trained the engagement modulation model described in Section 5. The idea of this task is to estimate the optimal learning curriculum (order of datasets presented in the neural network training) that maximizes expected return \(V\) during the time period \(T\). In this task, three binary MNIST classification datasets were used, specifically the digits \((0,1)\), \((7,1)\) and \((8,9)\) ordered by difficulty (easier to harder according to linear separability, see App. H.2).
Figure 4: Results for task engagement experiment. **(a)**, **(c)** and **(e)**: \(\mathcal{L}(t)\) for baseline and control case for Attentive, Active and Vector engagement. **(b)**, **(d)** and **(f)**: Engagement coefficients \(\psi_{\tau}(t)\) for each of the binary classification tasks Attentive, Active and Vector engagement. Mean and standard deviations from 5 independent trainings. **(h)** and **(j)**: Results for category engagement task, improvement in the loss function when using control for MNIST and Semantic dataset respectively. **(i)** and **(k)**: Optimal category engagement coefficients for MNIST and Semantic datasets. **(l)**: Class proportion experiment. **Uniform**: Loss when using uniform distribution for the abundance of classes in each batch. **Balanced**: Loss on a balanced batch, but using the inferred curriculum of classes in the batch to train. **Curriculum**: Loss on curriculum batch when using the curriculum. **(m)**: Loss per class using control (solid lines) and baseline (dashed lines).
**Category engagement**: For a classification task, there might be a better set of classes to learn during different stages of training. We trained the category engagement modulation model (described in Section 5 and App. H.3) to estimate the optimal _engagement_ or _attention_ to each of the categories in a classification task (Semantic and MNIST datasets) through learning. In addition, we trained the gain modulation model (next Section) in this same setting using a _neuron basis_ (see App. H.1).
### Results
**Task engagement**: We simultaneously gave the neural network inputs and targets for three datasets, as described in Section 5, each of them a different binary regression problem from MNIST. Each dataset used was chosen to vary in the level of difficulty to learn: the pair of numbers (0, 1) is easier to classify than (7, 1) and (8, 9) (based on the lowest loss achievable with linear regression in App. K.4). We computed the engagement coefficients \(\psi_{\tau}(t)\), one per dataset, that maximize the expected return in Eq. 2. Learning curves and the evolution of engagement coefficients are depicted in Fig. 4; the baseline case corresponds to simultaneous training on all datasets at the same time (\(\psi_{\tau}(t)=1\) and \(C(\psi(t))=0\)). In the _attentive engagement_ agent, where \(\mu_{\psi}=1\) (shown in Fig. 4a and 4b), the agent just needs to pay a cost to either amplify or suppress engagement on a dataset. In this setting, the agents amplify the engagement of all of the datasets, effectively increasing the learning rate per dataset, and achieving a lower \(\mathcal{L}_{C}(t)\) compared to \(\mathcal{L}_{B}(t)\). The order of learning each of the datasets goes from easier to harder; it is the same order as in the _active engagement_ and _vector engagement_ cases, and none of the datasets are engaged with \(\psi_{\tau}(t)<1\), presumably to avoid forgetting of early amplified datasets. In the case of _active engagement_, where \(\mu_{\psi}=0\) in \(C(\psi(t))=\beta\|\mu_{\psi}-\bar{\psi}(t)\|^{2}\) (shown in Fig. 4c and 4d), the agent must pay a cost to learn any of the tasks (\(\psi_{\tau}(t)>0\)). By distributing the learning between the tasks, the agent is capable of reaching \(\mathcal{L}_{C}(t)\) close to \(\mathcal{L}_{B}(t)\) as shown in the top panel of Fig. 4, without the need to fully engage with all of the datasets at every time step. None of these datasets are fully disengaged at any point, possibly as a mechanism to avoid catastrophic forgetting (Kirkpatrick et al., 2017) of datasets previously engaged during training. The engagement coefficients in the _vector_ case behave similarly. Since the control signal in this case is forced to keep a constant size of \(\Psi\), the agent is not able to fully engage in all of the datasets, and distributes this _attention_ resource on each dataset from easier to harder, as in the _active_ case. The meta-learning strategy found in our setting, repeatedly revisiting previous tasks to maintain performance, is well studied in psychology (Ericsson and Harwell, 2019; Eglington and Pavlik Jr, 2020), and it is also related to memory _replay_ theories as a value-based mechanism that avoids catastrophic forgetting (Mattar and Daw, 2018; Agrawal et al., 2022).
**Category engagement**: In some classification tasks, it might be better to learn some categories first and others later during training. We trained the engagement modulation model to control _engagement_ or _attention_ on categories of a classification dataset. In Fig. 4 we show the results of this model trained on the Semantic dataset and on the MNIST dataset (classifying all digits). The engagement coefficients \(\phi_{c}(t)\) describe the focus on class \(c\) in the classification problem, which scales the error signal for that specific class through training (see App. H.3). Fig. 4h and 4j show the improvement in the loss when optimizing the category assimilation coefficients for both datasets. Fig. 4i and 4k depict the engagement coefficients per class \(\phi_{c}(t)\). In the Semantic task, the engagement coefficients are clustered depending on the level of the hierarchy for the respective output. Higher coefficients are spent on categories in higher levels of the hierarchy, as well as earlier during learning. Because we kept \(\beta\) high for this experiment (\(\beta=5.0\)), the cost of deviating from a control vector of size \(C\) is high (where \(C\) is the number of classes); therefore the amplification of engagement in some categories goes along with suppression for other categories to keep the control signal at constant size. For the MNIST dataset, each \(\phi_{c}(t)\) corresponds to a specific digit, and the order of assimilation that maximizes value shows a consistent order of digits among different runs, being ordered as \((0,1,7,6,4,2,3,9,8,5)\), which is roughly the same as the average linear separation per digit (see App. K.4). As in the task engagement results, we found that it is optimal to assimilate easier elements first, allocating higher \(\phi_{c}(t)\), concentrated in the early stages of learning. More difficult categories are assimilated later, allocating a smaller maximum \(\phi_{c}(t)\) compared to easier classes, but with sustained engagement over time. The benefits of learning from easier to harder aspects of tasks have been shown in cognitive science (Krueger and Dayan, 2009; Wilson et al., 2019) and machine learning (Parisi et al., 2019; Saglietti et al.,
2022; Zhang et al., 2022), and we are able to reproduce this finding in the task engagement and category engagement experiments within our normative framework. The engagement level per class amplifies the error signal of learning a particular class through time, which can be roughly controlled by modifying the proportion of classes in the batch through training. To show this, we trained the baseline network on MNIST (no control, only backpropagation), and used \(\phi_{c}(t)\) to modify the proportion of classes in the batch throughout the training (App. H.3). This gives a better curriculum than sampling each class uniformly to populate the batch, as shown in Fig. 4l and 4m.
## 6 Gain Modulation
Motivated by studies of neuromodulation (Lindsay and Miller, 2018; Ferguson and Cardin, 2020), we finally address a neuroscience-inspired model (Shenhav et al., 2013, 2017) where the _learning effort control signals_ \(G_{1}(t)\in\mathbb{R}^{H\times I}\) and \(G_{2}(t)\in\mathbb{R}^{O\times H}\) modulate the gain of each layer's weights as \(\tilde{W}_{i}(t)=(1+G_{i}(t))\circ W_{i}(t)=\tilde{G}_{i}(t)\circ W_{i}(t)\), where \(\circ\) denotes element-wise multiplication. This control signal will modify the input-output mapping of the network to \(\hat{Y}=\tilde{W}_{2}(t)\tilde{W}_{1}(t)X\). Given the control signals, we assume the weights are learned using gradient descent, yielding the learning dynamics equations
\[\tau_{w}\frac{dW_{1}}{dt} =\left(\tilde{W}_{2}^{T}\Sigma_{xy}^{T}\right)\circ\tilde{G}_{1 }-\left(\tilde{W}_{2}^{T}\tilde{W}_{2}\tilde{W}_{1}\Sigma_{x}\right)\circ \tilde{G}_{1}-\lambda W_{1},\] \[\tau_{w}\frac{dW_{2}}{dt} =\left(\Sigma_{xy}^{T}\tilde{W}_{1}^{T}\right)\circ\tilde{G}_{2 }-\left(\tilde{W}_{2}\tilde{W}_{1}\Sigma_{x}\tilde{W}_{1}^{T}\right)\circ \tilde{G}_{2}-\lambda W_{2}. \tag{7}\]
The effect of the control signal \(G_{i}(t)\) is _similar_ to a time-varying learning rate, except that (1) it is weight-specific (i.e., with coupling between the elements of the control matrix), (2) it does not change the weight decay rate, which is controlled by \(\lambda\) and \(\tau_{w}\), and (3) \(G_{i}(t)\) also changes the input-output mapping. Solving the learning dynamics gives \(\mathcal{P}(t)=-\left\langle L(t)\right\rangle\); using \(C(G(t))=\exp\left(\beta\left(\|G_{1}(t)\|_{F}^{2}+\|G_{2}(t)\|_{F}^{2}\right) \right)-1\), we then estimate \(dV/dG_{i}(t)\) as in App. C and find the control trajectory that maximizes the cumulative reward in Eq. 2 (more details in App. H.1; an exact solution of the learning dynamics of a single-layer network given a control signal \(G(t)\) is provided in App. G.2). In addition, we simulated this model in a non-linear network using approximations (see App. I). The meta-learning tasks used to train this model are the following:
**Effort Allocation**: We train the gain modulation model separately on each of the three datasets for a time period of \(T\), and estimate the control signal that maximizes the expected return \(V\) in Eq. 2.
**Task Switch**: We defined two different Gaussian datasets (App. L). We sequentially train the network on each dataset for a time period \(T_{s}\). The expected reward \(V\) is computed for the whole training period \(T>T_{s}\) of the gain modulation model, and maximized through gradient updates on \(G_{i}(t)\).
### Results
**Effort Allocation**: This setting is similar to the single neuron setting of Section 2.1, but with a two-layer network instead of a single neuron, where every weight in the network has its own gain signal as described in Eq. 7 and schematized in Fig. 1e. The results of the baseline training and controlled training using gain modulation are presented in Fig. 5. In the gain modulation model, we can see the same qualitative behavior as in the single-neuron model when varying parameters of the learning model and control optimization. The control signal that maximizes expected return reduces the instant net reward rate through the use of control in the early stages of learning, to get better performance later, as shown by the lower loss for the controlled case (Fig. 5a and c). By optimizing learning and minimizing \(C(G(t))\) at the same time, gain modulation not only finds a sparser solution (\(L_{1}\) norm in Fig. 5b), using fewer weights than when no control is used; it also learns faster (Fig. 5c, more details in App. K.2). There are times during learning when it is more effective to apply control. As can be seen in the \(L_{2}\) norm of the control matrices \(G_{1}(t)\) and \(G_{2}(t)\), and the absolute value of the time derivative of the loss \(d_{t}\mathcal{L}(t)=|d\mathcal{L}(t)/dt|\) for the baseline and control case (Fig. 5d), the control signal is larger early in training and near the stages of learning when
the increase in performance (\(d_{t}\mathcal{L}(t)\)) is larger (Fig. 5d). The control signal shifts the peaks in \(d_{t}\mathcal{L}(t)\) earlier in learning, leading to better performance and higher reward earlier, compensating for the momentarily increased cost of control. Similar results are obtained when training on the other two datasets (see Fig. 12 and 13 in App. K.2). Neuromodulators are known to be involved in high-level executive tasks such as engagement in learning (Shenhav et al., 2013, 2017; Lieder et al., 2018; Grossman et al., 2022), and some of them are believed to act via gain modulation (Lindsay et al., 2020; Ferguson and Cardin, 2020) (App. A). We provide a testable and tractable setting where neuromodulators could manage task performance and learning to maximize cumulative reward.
**Task Switch**: The task is schematized in Fig. 1g. In Fig. 5e, each peak in the loss is a task switch (every 1800 time steps), and as expected, the baseline loss \(\mathcal{L}_{B}(t)\) is higher than the loss with control \(\mathcal{L}_{C}(t)\) at almost every point throughout learning. After each switch, the control signal manages to iteratively drive the learning dynamics to places in parameter space \(W\) where each switch is less costly (Fig. 5f). Since the linear network is over-parametrized, the adjustment for the next task can be made without meaningfully changing the solution for the current task. The control signal starts acting before the switch (Fig. 5g) to amortize the loss peak at the time of the switch, and to speed up the approach of the weight to the solution, skipping the plateau in the loss. In addition, the sparsity of the weights is higher compared to the baseline case: the cost of using control to switch is transferred to the size of the weights, making it easier to move the effective weight \(\tilde{W}(t)\) by a large amount when changing \(G(t)\) (see App. K.3). This setting poses meta-learning and gain modulation as a neural implementation of task/context switching in real scenarios (Puigbo et al., 2020; Ben-Iwhiwhu et al., 2022).
## 7 Discussion
We present a flexible computationally tractable _learning effort_ framework to study optimal meta-learning with neural network dynamics in a variety of settings, where control signals can influence both learning and performance. Our framework optimizes control signals on a fully normative objective: discounted cumulative performance throughout learning. The aim of our framework is to aid the evaluation of possible interventions in engineered systems, and provide formal underpinnings for cognitive control theories in neuroscience (see App. B). While a limitation of this work is its use of linear network models, we study non-linear network dynamics approximations in App. I. We hope our work will contribute to a greater understanding of how agents should act to maximize their learning abilities, based on their own prospects of learning.
Figure 5: Results of the gain modulation model trained on an MNIST classification task. **(a)**: Instant net reward \(v(t)\), baseline vs controlled. **(b)**: L1 and L2 norms of the weights. **(c)**: Loss \(\mathcal{L}(t)\) throughout learning. **(d)**: normalized \(d_{t}\mathcal{L}(t)\), and normalized L2 norm of the control signal \(G_{1}(t)\) and \(G_{2}(t)\). **(e)**: Results on the task switch meta-task. Comparison of \(\mathcal{L}(t)\) for the baseline and control case. **(f)**: Values of \(\mathcal{L}(t)\) at switch times, along with the normalized cost of control \(C(t)\) at switch times (green line). **(g)**: Zoom of \(\mathcal{L}(t)\) in the top panel, along with the normalized cost of control. |
2306.09552 | Retrospective: EIE: Efficient Inference Engine on Sparse and Compressed
Neural Network | EIE proposed to accelerate pruned and compressed neural networks, exploiting
weight sparsity, activation sparsity, and 4-bit weight-sharing in neural
network accelerators. Since published in ISCA'16, it opened a new design space
to accelerate pruned and sparse neural networks and spawned many
algorithm-hardware co-designs for model compression and acceleration, both in
academia and commercial AI chips. In retrospect, we review the background of
this project, summarize the pros and cons, and discuss new opportunities where
pruning, sparsity, and low precision can accelerate emerging deep learning
workloads. | Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, William J. Dally | 2023-06-15T23:46:35Z | http://arxiv.org/abs/2306.09552v1 | # Retrospective: EIE: Efficient Inference Engine on Sparse and Compressed Neural Network
###### Abstract
EIE proposed to accelerate pruned and compressed neural networks, exploiting weight sparsity, activation sparsity, and 4-bit weight-sharing in neural network accelerators. Since published in ISCA'16, it opened a new design space to accelerate pruned and sparse neural networks and spawned many algorithm-hardware co-designs for model compression and acceleration, both in academia and commercial AI chips. In retrospect, we review the background of this project, summarize the pros and cons, and discuss new opportunities where pruning, sparsity, and low-precision can accelerate emerging deep learning workloads.
## I What we did well
We started this project when deep learning accelerators were bottlenecked by memory footprint. Computation is cheap and memory is expensive. The existing algorithm and hardware stacks accelerated the inference of a neural network "as is." We asked: can we compress the model first? We developed the "Deep Compression" [1, 2] technique, which can compress the weights of a neural network by an order of magnitude through pruning and quantization. Since pruned weights become zero, and zero multiplied by anything is still zero, we can potentially save both computation and memory. However, the resulting neural network is sparse and irregular, which conflicts with massively parallel computing, and it runs inefficiently on general-purpose hardware.
EIE demonstrated that special-purpose hardware can make it cost-effective to do sparse operations with matrices that are up to 50% dense, while in software, density must be much less than 1% to overcome the overhead of the sparse package.
EIE exploits both weight sparsity and activation sparsity. It stores the weights in compressed sparse column (CSC) format, parallelizes the computation by interleaving matrix rows over the processing elements, and detects the leading non-zero in activations. It not only saves energy by skipping zero weights but also saves cycles by not computing them. EIE supports fine-grained sparsity, which allows pruning to achieve a higher pruning ratio.
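The computation EIE maps to hardware can be summarized in a few lines of software. Below is a minimal sketch (our illustration, not EIE's actual microarchitecture) of a CSC-based sparse matrix-vector product that skips both zero weights and zero activations; the PE interleaving and the leading non-zero detector are omitted:

```python
import numpy as np

def csc_spmv(n_rows, values, row_idx, col_ptr, x):
    """y = W @ x with W in CSC form: skip zero activations (x[j] == 0)
    and zero weights (absent from the CSC arrays), as EIE does in hardware."""
    y = np.zeros(n_rows)
    for j in np.flatnonzero(x):                       # activation sparsity
        for k in range(col_ptr[j], col_ptr[j + 1]):   # stored non-zeros only
            y[row_idx[k]] += values[k] * x[j]
    return y

# Tiny demo with a 50%-dense weight matrix.
W = np.array([[0., 2., 0., 1.],
              [3., 0., 0., 0.],
              [0., 0., 4., 0.],
              [0., 5., 0., 6.]])
values, row_idx, col_ptr = [], [], [0]
for j in range(4):                                    # build the CSC arrays
    for i in range(4):
        if W[i, j] != 0:
            values.append(W[i, j])
            row_idx.append(i)
    col_ptr.append(len(values))
x = np.array([1., 0., 2., 0.])                        # sparse activations
assert np.allclose(csc_spmv(4, values, row_idx, col_ptr, x), W @ x)
```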
EIE adopted aggressive weight quantization (4-bit) to save memory footprint. To maintain accuracy, EIE decodes the weights to 16-bit and uses 16-bit arithmetic. This W4A16 approach (4-bit weight, 16-bit activation) is different from the conventional W8A8 approach. Such a design has been reborn in large language models (LLMs). Single-batch text generation of these models is dominated by matrix-vector multiplication -- same as EIE. It is memory-bound, and the weight memory is the bottleneck, not the activation -- 4-bit weights and 16-bit activations become attractive to save memory and maintain accuracy at the same time, as adopted by many software LLM inference engines.1 However, these software solutions use linear integer weights, rather than a K-means codebook, to make the weight decoding simpler and the arithmetic cheaper.
Footnote 1: 4-bit LLM projects such as GPTQ, AWQ, llama.cpp, and MLC LLM
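As a rough sketch of the W4A16 decode path (the tensor shapes, seed, and scale are illustrative assumptions, and real hardware would pack two 4-bit indices per byte):

```python
import numpy as np

rng = np.random.default_rng(0)
# Weight sharing: a 16-entry K-means codebook addressed by 4-bit codes.
codebook = rng.normal(scale=0.1, size=16).astype(np.float16)
idx = rng.integers(0, 16, size=(64, 64), dtype=np.uint8)  # 4-bit indices
x = rng.normal(size=64).astype(np.float16)                # 16-bit activations

W = codebook[idx]   # decode: 4-bit index -> shared 16-bit weight
y = W @ x           # W4A16: arithmetic runs in 16 bits after decoding

# The linear-integer variant used by software LLM engines replaces the
# codebook lookup with an affine map, e.g. w = scale * (q - zero_point).
```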
EIE demonstrates the opportunity for accelerator and neural network co-design. There's plenty of room at the top to compress the neural network before accelerating it (Figure 1). Deep Compression and EIE show the benefit of refactoring the design stack.
## II Later work
EIE generated a new wave of AI accelerator design by opening a new dimension: sparsity. Cambricon-X [3] proposes a prefix-sum-based indexing module and supports sparse CNNs. SCNN [4] utilizes outer product and scatter-add to process sparse CNN while maximizing the input data reuse. Pragmatic [5] skips bit-level zeros and eliminates ineffectual computations. UCNN [6] generalizes the sparsity problem to the repetition of weights with any value instead of zero. Eyeriss V2 [7] proposes a flexible interconnect and PE architecture to accelerate sparse CNN. ExTensor [8] hierarchically eliminates the computation in sparse tensor computations using an efficient intersection architecture. SIGMA [9] proposes flexible interconnect to perform the distribution/reduction of sparse data for DNN training. The Sparse Abstract Machine [10] targets sparse tensor algebra to reconfigurable and fixed-function spatial dataflow accelerators.
EIE had substantial impacts on commercial AI chip design, leveraging pruning and sparsity for higher efficiency. NVDLA [11] gates the pruned weights to save energy. NVIDIA Sparse Tensor Core [12] adopt structured 2:4 sparsity to speed up pruned models. Samsung NPU [13] uses a priority-based search algorithm to skip zeros in activations. Ambarella CV22 [14] supports both structured and unstructured weight sparsity.
Fig. 1: EIE opened a new opportunity to build hardware accelerators for sparse and compressed neural networks.
## III Lessons
Although EIE pioneered sparse acceleration, the technique is not easily applied to arrays of vector processors. Several improved designs solved this issue, including the Sparse Tensor Core [12, 15], which adopted structured (N:M) sparsity so that one PE acts as several effective PEs in a regular manner. Another improvement is load-balance-aware pruning [16] to avoid PE starvation.
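A sketch of the 2:4 magnitude-pruning pattern behind such structured sparsity (our illustration; the hardware additionally stores per-group metadata indexing the kept weights, which is omitted here):

```python
import numpy as np

def prune_2_of_4(W):
    """Zero the two smallest-magnitude weights in every group of four
    consecutive weights (the 2:4 structured-sparsity pattern)."""
    flat = W.copy().reshape(-1, 4)
    drop = np.argsort(np.abs(flat), axis=1)[:, :2]    # 2 smallest |w| per group
    np.put_along_axis(flat, drop, 0.0, axis=1)
    return flat.reshape(W.shape)

W = np.random.default_rng(1).normal(size=(4, 8))
Ws = prune_2_of_4(W)
assert ((Ws.reshape(-1, 4) != 0).sum(axis=1) <= 2).all()  # 2:4 holds
```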
While EIE's special-purpose hardware is orders of magnitude more efficient than a software implementation of sparse M \(\times\) V, the overhead of traversing the CSC structure is non-zero. One PE performs only one MAC, but it is associated with many overhead structures, including the pointer read, sparse matrix access, leading non-zero detector, etc. In EIE, the weight and index are both 4-bit, giving a 50% storage overhead. Other designs use structured sparsity or coarse-grained block sparsity to reduce storage and control overhead.
EIE only accelerates fully connected layers. Later, SCNN [4], Cambricon-X [3] and Eyeriss-V2 [7] could also accelerate sparse convolution layers. EIE stores all the weights in SRAM. Commercially, Cerebras tried this path of putting everything in SRAM. This setting is perfect for vision models, but not easy for LLMs: the number of parameters of recent LLMs ranges from 10 billion to 100 billion, making it difficult to fit in SRAM.
## IV New opportunities
DNN architectures have witnessed rapid change. After EIE, we developed hardware-aware neural architecture search (NAS) techniques, ProxylessNAS [17] and Once-for-all [18], which design small and fast models before model compression.
The first principle of efficient AI computing is to be lazy: avoid redundant computation, quickly reject the work, or delay the work. We show a few more examples.
After compressing the weights, the activation becomes the bottleneck. Therefore, we developed the MCUNet family [19, 20], which aggressively shrinks the activations for TinyML. MCUNet performs not only ImageNet classification but also detection with only 256KB SRAM and 1MB Flash on a microcontroller. With sparse updates and low precision, we can even do on-device training under 256KB of memory [21].
Generative AI: spatial sparsity persists in image editing and image in-painting; users don't edit the whole image. So rather than generating the full image, sparsely generating only the edited regions [22] can speed up inference.
The Transformer has been a major neural architecture after EIE, and the FC layer is back again. The attention layer has no weights to prune. However, not all tokens are useful: SpAtten [23] proposes cascade token pruning, gradually removing the redundant tokens with the smallest attention scores. It also exploits "progressive quantization": it lazily fetches only the MSBs and runs inference; if the confidence is low, it fetches the LSBs.
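A sketch of the core scoring step of such token pruning (the cascade scheduling and progressive quantization are omitted, and all names and shapes here are our own illustration):

```python
import numpy as np

def prune_tokens(attn, tokens, keep):
    """attn: (heads, n, n) attention probabilities; tokens: (n, d).
    Score token j by the attention it receives, summed over heads and
    query positions, then keep only the top-`keep` tokens."""
    importance = attn.sum(axis=(0, 1))                    # (n,) scores
    kept = np.sort(np.argsort(importance)[-keep:])        # top-k, original order
    return tokens[kept], kept

rng = np.random.default_rng(2)
n, d, h = 8, 16, 4
attn = rng.random((h, n, n))
attn /= attn.sum(axis=-1, keepdims=True)                  # rows sum to 1
tokens = rng.normal(size=(n, d))
pruned, kept = prune_tokens(attn, tokens, keep=5)         # drop 3 tokens
```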
Temporal sparsity exists in videos: adjacent frames are similar. Rather than using expensive 3D convolution, the temporal shift [24] can efficiently exploit temporal redundancy with zero FLOPs. Point clouds are spatially sparse. TorchSparse [25] adaptively groups sparse matrices to trade computation for regularity. PointAcc [26] employs a sorting array to perform sparse input-output mapping and avoid zero computation.
We envision that future AI models will be sparse at various granularities and structures. Co-designed with specialized accelerators, sparse models will become more efficient and accessible.
## Acknowledgements
We thank Zhekai Zhang and Yujun Lin for the discussions and collecting data for the figure.
|
2308.00958 | Isolation and Induction: Training Robust Deep Neural Networks against
Model Stealing Attacks | Despite the broad application of Machine Learning models as a Service
(MLaaS), they are vulnerable to model stealing attacks. These attacks can
replicate the model functionality by using the black-box query process without
any prior knowledge of the target victim model. Existing stealing defenses add
deceptive perturbations to the victim's posterior probabilities to mislead the
attackers. However, these defenses are now suffering problems of high inference
computational overheads and unfavorable trade-offs between benign accuracy and
stealing robustness, which challenges the feasibility of deployed models in
practice. To address the problems, this paper proposes Isolation and Induction
(InI), a novel and effective training framework for model stealing defenses.
Instead of deploying auxiliary defense modules that introduce redundant
inference time, InI directly trains a defensive model by isolating the
adversary's training gradient from the expected gradient, which can effectively
reduce the inference computational cost. In contrast to adding perturbations
over model predictions that harm the benign accuracy, we train models to
produce uninformative outputs against stealing queries, which can induce the
adversary to extract little useful knowledge from victim models with minimal
impact on the benign performance. Extensive experiments on several visual
classification datasets (e.g., MNIST and CIFAR10) demonstrate the superior
robustness (up to 48% reduction on stealing accuracy) and speed (up to 25.4x
faster) of our InI over other state-of-the-art methods. Our codes can be found
in https://github.com/DIG-Beihang/InI-Model-Stealing-Defense. | Jun Guo, Aishan Liu, Xingyu Zheng, Siyuan Liang, Yisong Xiao, Yichao Wu, Xianglong Liu | 2023-08-02T05:54:01Z | http://arxiv.org/abs/2308.00958v2 | # Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks
###### Abstract.
Despite the broad application of Machine Learning models as a Service (MLaaS), they are vulnerable to model stealing attacks. These attacks can replicate the model functionality by using the black-box query process without any prior knowledge of the target victim model. Existing stealing defenses add deceptive perturbations to the victim's posterior probabilities to mislead the attackers. However, these defenses now suffer from high inference computational overheads and unfavorable trade-offs between benign accuracy and stealing robustness, which challenges the feasibility of deployed models in practice. To address these problems, this paper proposes _Isolation and Induction_ (InI), a novel and effective training framework for model stealing defenses. Instead of deploying auxiliary defense modules that introduce redundant inference time, InI directly trains a defensive model by isolating the adversary's training gradient from the expected gradient, which can effectively reduce the inference computational cost. In contrast to adding perturbations over model predictions that harm the benign accuracy, we train models to produce uninformative outputs against stealing queries, which can induce the adversary to extract little useful knowledge from victim models with minimal impact on the benign performance. Extensive experiments on several visual classification datasets (_e.g._, MNIST and CIFAR10) demonstrate the superior robustness (up to 48% reduction in stealing accuracy) and speed (up to 25.4\(\times\) faster) of our InI over other state-of-the-art methods. Our code can be found at [https://github.com/DIG-Beihang/InI-Model-Stealing-Defense](https://github.com/DIG-Beihang/InI-Model-Stealing-Defense).
Model Stealing, Stealing Defense, Model Privacy

Footnote †: The corresponding author.
To mitigate the threat of model stealing attacks, several defensive methods (Kariyappa et al., 2017; Dosov et al., 2018; Li et al., 2019; Li et al., 2019) have been devoted to making the victim model hard to steal by introducing perturbations or randomness into the model output. However, the practical feasibility of these defenses is still hampered by certain limitations: (1) Existing defensive methods often incorporate auxiliary modules that validate the input or modify the output of the victim model, which introduces extra computational cost. In practice, the computational burden is concentrated mainly in the inference phase, and higher computational overheads mean longer user response times and increased financial expenditure. (2) Some defenses add perturbations to the model's output to enhance its resilience against model stealing attacks by providing erroneous predictions to attackers. However, this comes at the expense of reduced benign accuracy for legitimate users due to the unfavorable trade-off.
To tackle these concerns, we propose a novel and effective defensive training framework against model stealing attacks, which is called InI. As for the _computational overheads_, distinct from prevailing methodologies that introduce auxiliary inference-time modules, our InI aims to directly train a robust model that is able to defend against stealing attacks without extra inference modules. Based on the fact that DNNs are heavily over-parameterized (Zhu et al., 2017) and can be trained to fit and generalize across diverse data distributions, we, therefore, posit that the victim model can achieve robustness by incorporating the countermeasures within its parameters during training. InI leverages a clone model during training as the surrogate adversary and estimates the adversary's optimization gradient and the expected gradient. With this estimation, the victim can learn to adjust its posterior probabilities to maximize the directional divergence between the two gradients. Therefore, we can isolate the clone model's optimization gradient from the expected gradient. To ameliorate the _trade-off between benign accuracy and stealing robustness_, different from previous studies that add perturbations over all predictions that harm the benign accuracy, we aim to train models that can learn to behave differently on benign and malicious queries. Following the assumption of previous works (Li et al., 2019; Li et al., 2019) that the malicious query samples deviate from the task distribution, InI trains a victim that behaves normally on the benign task yet produces inductive outputs on the malicious samples. Specifically, we introduce an out-of-distribution (OOD) dataset during victim training and minimize the adversary's information gain on it. As a result, during inference, the adversary is induced to extract little useful knowledge from the victim model using the stealing query with OOD examples. In addition, our method can be integrated with existing methods to better obtain defensive performance.
In summary, our **main contributions** are three-fold:
* We propose a novel and effective defensive training framework against model stealing attacks called InI to achieve robustness during training, which provides a new perspective of model stealing defenses.
* For computational overheads, InI incorporates the gradient isolation countermeasure within the victim's parameters; for unfavorable trade-offs, InI produces distinct outputs and induces the adversary to acquire minimal knowledge from malicious queries.
* Extensive experiments have been conducted on multiple datasets which demonstrate the state-of-the-art robustness and speed over other baselines. Moreover, InI shows flexible compatibility with existing methods for better defense.
## 2. Related Work
### Model Stealing Attacks
Model stealing attacks, also referred to as model extraction, aim at inferring hyper-parameters (Zhu et al., 2017; Li et al., 2019), extracting model parameters (Li et al., 2019; Li et al., 2019; Li et al., 2019), or copying the functionality of a certain machine learning model. Our work focuses on stealing the classification functionality of the model, which is the most prevalent and universal stealing attack in deep learning. Tramer _et al._(Tramer et al., 2019) proposed the concept of model stealing: attackers can "steal" the properties of a machine learning model via queries without prior knowledge of the victim model. Generally speaking, many properties can be stolen, _e.g._, model parameters, training data, or functionality. Papernot _et al._(Papernot et al., 2019) proposed a partial-data approach named Jacobian-Based Dataset Augmentation (JBDA), which generates synthetic data by adding small perturbations to a small set of in-distribution samples. Orekondy _et al._(Orekondy et al., 2019) proposed KnockoffNets, employing samples from a surrogate dataset as query inputs to the victim model. The stealing performance of partial-data and surrogate-data approaches degrades when the available data differ from the original training set. In recent years, some data-free stealing methods have been proposed. Kariyappa _et al._(Kariyappa et al., 2017) and Truong _et al._(Truong et al., 2019) are motivated by the framework of data-free knowledge distillation (Zhu et al., 2017; Li et al., 2019) and proposed data-free model
Figure 1. Illustration of attacks and defenses in model stealing. The adversary makes queries using malicious queries to extract knowledge from the victim MLaaS model, and the returned outputs are used to train a clone model. The defender introduces randomness into the model’s outputs, in order to mislead the stealing algorithm.
stealing methods, where they use zeroth-order gradient estimation to calculate the victim gradient in black-box settings. Moreover, Sanyal _et al._[(48)] proposed a model stealing attack in the hard-label setting. They utilize some unrelated proxy data to get a pre-trained data generator, while the stealing process is data-free.
### Model Stealing Defenses
Currently, most model stealing defenses tend to add perturbations to the model outputs, thus disturbing the optimization of the adversary. Lee _et al._(18) proposed an accuracy-preserving defense against model stealing attacks by adding deceptive perturbations to the model outputs while preserving the top-1 label, although it is defeated by hard-label stealing. Other defenses like Maximizing Angular Deviation (MAD) (45) perturb the model outputs with controllable intensity, defending against model stealing attacks at the expense of benign accuracy. Another line of defense takes advantage of the adversaries' data limitation, making the victim model produce dissimilar outputs for in-distribution and out-of-distribution inputs. Kariyappa _et al._(13) proposed the Adaptive Misinformation (AM) defense, which detects OOD inputs and misleads adversaries with modified outputs. Kariyappa _et al._(12) then proposed the Ensemble of Diverse Models (EDM) defense, which introduces randomness into the model output by using an ensemble of diverse models. Models in the ensemble are trained to perform diversely on OOD inputs, making the functionality of the model hard to steal. In addition, there are other types of countermeasures against model stealing, such as digital watermarking (10; 1; 23), which injects an extractable watermark into the victim model and can distinguish whether a model comes from stealing.
Existing defensive approaches take advantage of some limitations of model stealing attacks to mitigate the knowledge leakage, but they suffer from high computational costs and unfavorable trade-offs. In this paper, we are devoted to defending against stealing attacks by incorporating the countermeasure within the victim's parameters, inherently enhancing the robustness against model stealing attacks.
## 3. Threat Model
### Attack Objective
In this paper, we mainly discuss the functionality stealing towards DNNs on image classification tasks. Specifically, an adversary aims at stealing the functionality of a victim model \(\mathcal{V}\) by training a clone model \(\mathcal{C}\). These attacks usually follow the framework of knowledge distillation [(9)], where the victim model \(\mathcal{V}\) plays the role of "teacher", and the knowledge of the teacher is distilled into the "student" model \(\mathcal{C}\). The objective of the adversary is to maximize the classification accuracy of clone model \(Acc(\mathcal{C}(\mathcal{x};\theta_{\mathcal{C}}),y)\) on the victim's target distribution \(\mathcal{D}_{tar}\). Define \(\theta_{\mathcal{V}}\) to be the parameter of \(\mathcal{V}\) and \(\theta_{\mathcal{C}}\) to be the parameter of \(\mathcal{C}\), and the adversary's goal could be formulated as Eqn. 1.
\[\max_{\theta_{\mathcal{C}}}\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathcal{D}_{tar}} \left[Acc(\mathcal{C}(\mathbf{x};\theta_{\mathcal{C}}),\mathbf{y})\right] \tag{1}\]
In most real-world settings, the adversary has no knowledge about the victim's structure, parameters, or training set. The only interaction between the adversary and the victim is the _black-box query process_: the adversary inputs an image \(x\) and the victim returns a softmax probability or logits. Though the original training set is unavailable, the adversary can use synthetic data or surrogate data to query the victim model. For example, JBDA [(46)] synthesizes data from a small part of in-distribution samples, and KnockoffNets [(44)] uses surrogate datasets to query the victim model. Therefore, the adversary's learning objective is a surrogate goal based on the distribution of the query dataset \(\mathcal{D}_{que}\), which can be formulated as follows:
\[\min_{\theta_{\mathcal{C}}}\mathbb{E}_{\mathbf{x}\sim\mathcal{D}_{que}}\left[d( \mathcal{C}(\mathbf{x};\theta_{\mathcal{C}}),\mathcal{V}(\mathbf{x};\theta_{\mathcal{ V}}))\right] \tag{2}\]
In the ideal scenario, if \(\mathcal{D}_{tar}\) and \(\mathcal{D}_{que}\) are close enough and the query budget is sufficient, the model stealing is generally inevitable since the victim \(\mathcal{V}\) must guarantee the performance for benign users on the target distribution. However, in practice, \(\mathcal{D}_{tar}\) and \(\mathcal{D}_{que}\) are dissimilar due to the knowledge limitation of the adversary, and the query budget is limited by the adversary's financial cost. It is these limitations that support existing defensive methods.
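To make the adversary's surrogate objective in Eqn. 2 concrete, here is a minimal sketch of a KnockoffNets-style soft-label stealing loop; the two linear models and the random query batches are stand-ins for the real victim, clone architecture, and surrogate dataset:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins for the real actors: any classifiers with matching output size.
victim = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))   # black-box oracle
clone = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
opt = torch.optim.SGD(clone.parameters(), lr=0.1)

for _ in range(100):                          # query budget, in batches
    x = torch.randn(128, 1, 28, 28)           # surrogate queries (stand-in data)
    with torch.no_grad():                     # the adversary only sees outputs
        y_soft = F.softmax(victim(x), dim=1)
    # soft cross-entropy d(C(x), V(x)) from Eqn. 2
    loss = -(y_soft * F.log_softmax(clone(x), dim=1)).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```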
### Defense Objective
In defenses against model stealing, the defender aims at preventing the functionality of the victim model from being stolen with an acceptable impact on its benign accuracy. To be practical, the victim's benign accuracy should be kept above a minimum threshold \(T\). The objective of the defender is to minimize the classification accuracy of the clone model \(Acc(\mathcal{C}(\mathbf{x};\theta_{\mathcal{C}}),\mathbf{y})\) on the victim's target distribution \(\mathcal{D}_{tar}\), which can be formulated as Eqn. 3.
\[\min_{\theta_{\mathcal{V}}}\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathcal{ D}_{tar}}\left[Acc(\mathcal{C}(\mathbf{x};\theta_{\mathcal{C}}),\mathbf{y})\right],\] \[\text{s.t.}\ \ \mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathcal{D}_{tar}}\left[ Acc(\mathcal{V}(\mathbf{x};\theta_{\mathcal{V}}),\mathbf{y})\right]\geq T \tag{3}\]
Considering the limitations of the adversary, existing defense methods either add adaptive perturbations to multiply the adversary's query cost, or differentiate the victim's behavior on ID and OOD data to deceive the adversary. In this paper, we propose a defensive method against model stealing attacks, which gets rid of the extra computational costs and ameliorates the trade-off between benign accuracy and stealing resistance.
## 4. Methodology
To build a defensive method with low computational costs and high trade-offs against model stealing attacks, we propose InI, a novel and effective defensive training framework. In this section, we first illustrate the training-time gradient isolation methodology that gets rid of auxiliary inference-time modules, and then elaborate on the adversary induction approach that reduces the knowledge leakage. Finally, we explain our overall training framework.
### Gradient Isolation
Existing defensive methods are suffering from extra computational costs during inference since they often employ auxiliary inference-time modules. Studies have revealed that DNNs are heavily over-parameterized [(62)] and can be trained to fit and generalize across diverse data distributions, _e.g._, adversarial examples for adversarially-trained models [(5; 6; 31; 32; 38; 51; 63)] and real-world disturbance
for reinforcement agents (Gutton et al., 2017; Gutton et al., 2018; Gutton et al., 2019; Gutton et al., 2020). Inspired by them, we propose a defensive training framework to directly train a robust model, so that the model can generate deceptive outputs towards stealing attack queries without extra modules.
Generally, as Eqn. 1 shows, the adversary's _expected goal_ is to minimize the disagreement with the ground truth on the target distribution. This objective is not directly related to the victim's parameter \(\mathbf{\theta}_{\mathcal{V}}\), but the adversary must extract knowledge from the victim. As a consequence, the _real objective_ of the adversary is illustrated in Eqn. 2. There exists a gap between the expected goal and the real objective, and the defense can be achieved by isolating the adversary's real objective from the expected goal. As the adversary usually updates its parameters by gradient descent, we propose gradient isolation to isolate the real gradient from the expected gradient, thereby incorporating the robustness within the victim's parameters to mislead the adversary.
To achieve gradient isolation, we need to estimate the above two gradient terms during training. Therefore, we introduce a surrogate white-box clone model \(\mathcal{C}\) into the victim's training process to represent the stealing role of the adversary. To isolate the update gradient from the expected gradient, we maximize the directional divergence between them. Specifically, for a certain batch of data \(\mathbf{x}\), assuming the target of the adversary is \(\mathbf{y}\), the update gradient can be written as:
\[\begin{split}\nabla_{\mathbf{\theta}_{\mathcal{C}}}CE(\mathcal{C}( \mathbf{x};\mathbf{\theta}_{\mathcal{C}}),\mathbf{y})&=-\nabla_{\mathbf{\theta} _{\mathcal{C}}}\sum_{i}y_{i}\log C(\mathbf{x};\mathbf{\theta}_{\mathcal{C}})_{i}\\ &=-\mathbf{y}^{T}G,\end{split} \tag{4}\]
where \(CE(\cdot,\cdot)\) is the soft cross-entropy loss commonly used in model stealing attacks (Sutton et al., 2019; Gutton et al., 2020), and \(G=\nabla_{\mathbf{\theta}_{\mathcal{C}}}\log C(\mathbf{x};\mathbf{\theta}_{\mathcal{C}})\) is a Jacobian matrix. When \(y\) comes from the ground truth, it represents the correct optimization direction; when \(y\) comes from the victim model (denoted by \(\mathbf{\tilde{y}}\)), it represents the actual optimization direction during stealing.
The goal of gradient isolation is to maximize the directional divergence of these gradients, which can be quantified by the cosine similarity. Therefore, the objective of gradient isolation can be written as follows:
\[\mathcal{L}_{\textit{iso}}=\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathcal{D}_{\textit {tar}}}[CS(\mathbf{\tilde{y}}^{T}G,\mathbf{y}^{T}G)], \tag{5}\]
where \(CS(\cdot,\cdot)\) represents the cosine similarity. \(\mathbf{\tilde{y}}^{T}G\) and \(\mathbf{y}^{T}G\) can be calculated by the backward propagation. During the victim training, InI minimizes \(\mathcal{L}_{\textit{iso}}\) via optimization to perform gradient isolation, enhancing the victim's robustness against model stealing attacks.
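A minimal sketch of how \(\mathcal{L}_{iso}\) in Eqn. 5 can be computed with automatic differentiation (a double-backward pass); the tiny linear models and random batch are placeholders for the defender's victim and surrogate clone:

```python
import torch
import torch.nn.functional as F

def isolation_loss(victim, clone, x, y_onehot):
    """Cosine similarity (Eqn. 5) between the adversary's actual update
    direction (soft-CE toward the victim's output) and the expected one
    (CE toward the ground truth), both taken w.r.t. the clone's parameters."""
    params = list(clone.parameters())
    logp = F.log_softmax(clone(x), dim=1)
    y_victim = F.softmax(victim(x), dim=1)   # keeps the graph into the victim
    g_true = torch.autograd.grad(-(y_onehot * logp).sum(), params,
                                 create_graph=True, retain_graph=True)
    g_vic = torch.autograd.grad(-(y_victim * logp).sum(), params,
                                create_graph=True)
    flat = lambda gs: torch.cat([g.reshape(-1) for g in gs])
    return F.cosine_similarity(flat(g_true), flat(g_vic), dim=0)

# Toy usage: minimizing this term w.r.t. the victim isolates the directions.
victim, clone = torch.nn.Linear(20, 5), torch.nn.Linear(20, 5)
x = torch.randn(8, 20)
y = F.one_hot(torch.randint(0, 5, (8,)), num_classes=5).float()
isolation_loss(victim, clone, x, y).backward()   # gradients reach the victim
```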
### Adversary Induction
Some existing defenses add perturbations to the output over all samples, which harm the benign accuracy and cause low tradeoffs. To improve the trade-off between clean accuracy and stealing robustness, we further design an adversary induction approach to train victim models. Thus, the victim model will behave normally for benign users but produce inductive outputs that induce the adversary to optimize without learning too much useful knowledge.
Successfully inducing the adversary stands on the assumption that benign and malicious queries can be distinguished through the distribution (Gutton et al., 2018; Gutton et al., 2020). Practically, the adversary has limited knowledge of the distribution of the victim's training set \(\mathcal{D}_{\textit{tar}}\), and consequently uses a surrogate dataset \(\mathcal{D}_{\textit{quite}}\) to query and steal the victim model. Following this assumption, we regard the query samples of the adversary as out-of-distribution and apply an OOD dataset \(\mathcal{D}_{\textit{out}}\) during defense to substitute them. The over-parameterization property of DNNs ensures the generalization capability across diverse
Figure 2. The overall framework of InI. InI isolates the adversary’s gradients from the expected gradients during training to obtain faster inference speed, and induce the adversary to leak knowledge as little as possible.
distributions and thus makes it possible to train a victim that exhibits divergent behavior on benign in-distribution (ID) queries and malicious out-of-distribution (OOD) queries. Our InI therefore aims to guarantee the victim's benign performance on ID queries, while inducing the adversary to attain little knowledge from uninformative outputs on OOD queries.
In particular, on ID samples, we should guarantee the victim's benign performance. This can be achieved by applying a cross-entropy loss to train the classification model, which is shown as follows:
\[\mathcal{L}_{ben}=\mathbb{E}_{(\mathbf{x},\mathbf{y})-\mathcal{D}_{lat}}[CE(\mathcal{ V}(\mathbf{x};\mathbf{\theta}_{\mathcal{V}}),\mathbf{y})]. \tag{6}\]
On OOD samples, we should induce the adversary to acquire minimal knowledge. Intuitively, the adversary induction can be realized by reducing the adversary's information gain on OOD queries. The information gain can be quantified by the KL divergence between the output probabilities of the clone and the victim model, which can be formulated as below:
\[\mathcal{L}_{ig}=\mathbb{E}_{\mathbf{x}-\mathcal{D}_{out}}[KL(\mathcal{C}(\mathbf{x}; \mathbf{\theta}_{\mathcal{C}}),\mathcal{V}(\mathbf{x};\mathbf{\theta}_{\mathcal{V}}))]. \tag{7}\]
Note that \(\mathcal{L}_{ig}\) is the optimization objective of the adversary during model stealing. Therefore, when \(\mathcal{L}_{ig}\) is minimized before the stealing process, the adversary can gain only little information via optimization and would thus extract little knowledge from the victim model. Since the adversary often uses gradient descent to update its parameters and attain knowledge, minimizing the first-order approximation of the KL divergence can also help to reduce the information gain. Consequently, we can also minimize the norm of \(\nabla_{\mathbf{\theta}_{\mathcal{C}}}\mathcal{L}_{ig}\), _i.e._, the gradient of the information gain. In summary, the objective of adversary induction can be formulated as:
\[\mathcal{L}_{ind}=\mathcal{L}_{ig}+\beta||\nabla_{\mathbf{\theta}_{\mathcal{C}}} \mathcal{L}_{ig}||. \tag{8}\]
\(\mathcal{L}_{ind}\) is a function related to \(\mathbf{\theta}_{\mathcal{V}}\), which can be integrated with our training framework. During the victim's training, we apply a surrogate white-box clone model \(\mathcal{C}\) and an OOD dataset to estimate the adversary's information gain and minimize \(\mathcal{L}_{ind}\) by updating the victim's parameter \(\mathbf{\theta}_{\mathcal{V}}\), thus reducing the knowledge leakage from the victim. The OOD dataset used in training comes from another classification task and is different from the query dataset in stealing attacks.
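A sketch of the induction objective in Eqn. 8; `x_ood` would be a batch from the auxiliary OOD dataset (e.g., TinyImageNet for CIFAR-10), and the KL direction follows the usual distillation convention:

```python
import torch
import torch.nn.functional as F

def induction_loss(victim, clone, x_ood, beta=0.1):
    """Eqn. 8 on OOD queries: the adversary's information gain (KL between
    clone and victim outputs) plus the norm of its gradient w.r.t. the
    clone's parameters, computed via a double-backward pass."""
    log_p_clone = F.log_softmax(clone(x_ood), dim=1)
    p_victim = F.softmax(victim(x_ood), dim=1)   # differentiable w.r.t. victim
    ig = F.kl_div(log_p_clone, p_victim, reduction="batchmean")
    grads = torch.autograd.grad(ig, list(clone.parameters()), create_graph=True)
    grad_norm = torch.cat([g.reshape(-1) for g in grads]).norm()
    return ig + beta * grad_norm

victim, clone = torch.nn.Linear(20, 5), torch.nn.Linear(20, 5)
x_ood = torch.randn(8, 20)                       # stand-in OOD batch
induction_loss(victim, clone, x_ood).backward()  # trains the victim
```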
Though previous defenses (Han et al., 2017; Chen et al., 2018) utilize OOD datasets to train the victim, they only constrain the victim to produce meaningless or diverse outputs and do not take the adversary's role into consideration. In contrast, InI produces inductive outputs that lead the adversary to learn less, reducing the knowledge leakage from the victim.
### Overall Framework
Figure 2 illustrates our overall framework. To train a robust victim \(\mathcal{V}\), we introduce a surrogate clone model \(\mathcal{C}\) into the training process. During training, the victim has white-box access to the surrogate clone model. To jettison the auxiliary modules for low computational costs, InI isolates the adversary's optimization gradient from the expected gradient during training via \(\mathcal{L}_{iso}\) in Eqn. 5. To further improve the trade-off between benign performance and stealing robustness, InI produces uninformative outputs on malicious queries through \(\mathcal{L}_{ind}\) in Eqn. 8 to induce the adversary to acquire less useful knowledge. By incorporating the robustness within the victim's parameters, the victim learns how to resist model stealing attacks with a favorable trade-off and without any auxiliary modules during inference.
During training, all objectives are simultaneously calculated and updated. We use some hyper-parameters \(\gamma_{1}\), \(\gamma_{2}\) to control the trade-off between each loss, and the total loss can be written as:
\[\mathcal{L}=\mathcal{L}_{ben}+\gamma_{1}\mathcal{L}_{iso}+\gamma_{2}\mathcal{L} _{ind} \tag{9}\]
However, during training, conflicts may exist between the optimization directions of the above losses. As shown by the blue bars in Figure 3, we extract the gradients during the first 3 epochs of training and calculate their cosine similarities, observing that the gradients of different objectives do conflict. To mitigate these conflicts, we leverage PCGrad (Zhou et al., 2017). Before the gradient descent update, PCGrad finds the conflicts among these gradients and projects each conflicting gradient onto the orthogonal direction of the other as follows:
\[\mathbf{g}_{i}^{PC}=\mathbf{g}_{i}^{PC}-\frac{\mathbf{g}_{i}^{PC}\cdot\mathbf{g}_{j}}{||\mathbf{g}_ {j}||^{2}}\mathbf{g}_{j}. \tag{10}\]
After PCGrad stabilization, the conflicts are better mitigated and the optimization process is improved (see the orange bars in Figure 3). The overall pseudo-algorithm of our InI can be found in Supplementary Material.
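The projection in Eqn. 10 amounts to a few lines on flattened gradient vectors; a minimal sketch:

```python
import torch

def pcgrad(grads):
    """Project each flattened task gradient onto the orthogonal complement
    of every gradient it conflicts with (negative dot product), Eqn. 10."""
    out = []
    for i, g in enumerate(grads):
        g = g.clone()
        for j, gj in enumerate(grads):
            if i != j and torch.dot(g, gj) < 0:              # conflict detected
                g = g - torch.dot(g, gj) / gj.norm() ** 2 * gj
        out.append(g)
    return out

g1, g2 = torch.tensor([1.0, 0.0]), torch.tensor([-1.0, 1.0])
p1, _ = pcgrad([g1, g2])
assert torch.isclose(torch.dot(p1, g2), torch.tensor(0.0))  # conflict removed
```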
## 5. Experiments
In this section, we first elaborate on the experimental settings; then, we illustrate the defense performance and inference speed analysis on image classification tasks; we finally provide ablation studies.
Figure 3. The cosine similarities of objective pairs during training. We choose the gradients of \(\mathcal{L}_{ben}\), \(\mathcal{L}_{ind}\), \(\nabla_{\mathbf{\theta}_{\mathcal{C}}}\mathcal{L}_{ind}\) and \(\mathcal{L}_{iso}\) at the first 3 epochs, and show the histogram of their cosine similarities. Blue bars indicate cosine similarities before the gradient surgery, and orange bars indicate those after the gradient surgery.
More results such as feature visualization analysis of our defense are provided in Supplementary Materials.
### Experimental Setup
In this part, we elaborate on our experimental settings about datasets, model architectures, defenses, attacks, and evaluation metrics.
**Datasets and architectures.** We evaluate our proposed InI on the most commonly-adopted image classification datasets for model stealing including MNIST (Krizhevsky et al., 2014), FashionMNIST (Zhu et al., 2017), CIFAR-10 (Krizhevsky et al., 2014), and CIFAR-100 (Krizhevsky et al., 2014). We choose ResNet-18 (He et al., 2017) as the backbone of all victim models. We also evaluate results on VGG networks (Zhu et al., 2017) which show similar observations (_c.f._ Supplementary Materials).
**Implementation details.** For the training of InI, we use an SGD optimizer with momentum 0.5 and a weight decay of \(1\times 10^{-3}\). For MNIST and FashionMNIST datasets, we train 50 epochs with a learning rate annealing of 0.1 every 20 epochs, and for CIFAR-10 and CIFAR-100 datasets, we train 150 epochs with a learning rate annealing of 0.1 every 50 epochs. The initial learning rate is 0.1.
**Defenses.** To demonstrate the effectiveness of InI, we compare our method with the commonly-adopted defensive approaches: MAD (Zhu et al., 2017), AM (Krizhevsky et al., 2014), EDM (Krizhevsky et al., 2014). We also report the results of an undefended model denoted by "Vanilla". We referred to the official implementation of these methods. The batch size of all defensive methods is 128. For the auxiliary OOD datasets used by AM, EDM, and InI, we choose KMNIST (Krizhevsky et al., 2014) for MNIST and FashionMNIST, and choose TinyImageNet (Krizhevsky et al., 2014) for CIFAR-10 and CIFAR-100. For the hash dataset used by EDM, we choose KMNIST for MNIST and FashionMNIST, and use SVHN (Zhu et al., 2017) for CIFAR-10 and CIFAR-100.
**Attacks.** Following the previous works (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Zhu et al., 2017), to evaluate the performance of InI against model stealing, we use the commonly-used stealing attacks including KnockoffNets (Zhu et al., 2017) and JBDA (Zhu et al., 2017), and evaluate the integration of InI with other defensive methods. For each attack, we conduct soft-label and hard-label attacks, which means the adversary learns according to the victim's output probability or its top-1 label, respectively. The detailed settings of the attacks are listed as follows:
* _KnockoffNets_: The attack budget is 50000. As for the surrogate datasets, we use EMNISTLetters (Bengio et al., 2015), EMNIST (Bengio et al., 2015), CIFAR-100, and CIFAR-10 as the surrogate datasets for MNIST, FashionMNIST, CIFAR-10, and CIFAR-100, respectively.
* _JBDA_: We choose 150 images from the victim's training set as the seed samples. We use 6 rounds of augmentation and a noise rate of 0.1 to synthesize the query data. The clone model is trained for 10 epochs every augmentation round.
**Evaluation metrics.** Following (Krizhevsky et al., 2014), we evaluate the performance of defenses by comparing the _clone accuracy_ achieved by attacks on the victim's test set. To take the victim's benign accuracy into consideration, we further compare the _relative performance_ of the defenses, which is the ratio of the adversary's clone accuracy to the victim's benign accuracy. For methods that have mutable parameters during inference (_i.e._, MAD and AM), we follow the setting in (Krizhevsky et al., 2014) and adjust the parameters of the defense to obtain similar benign accuracy on the test set. The benign accuracy of these defenses on classification tasks is shown in Table 1. _For all the above metrics, lower values indicate better defenses._
### Defense Results on Stealing Attacks
In this part, we compare the performance of our InI with other model stealing defenses. We report the clone accuracy and the relative performance of existing defenses and InI in Tables 2 and 3. To further improve our defensive performance, we integrate the model trained by InI with MAD and AM and evaluate the performance. AM normally trains the victim jointly; for simplicity, we do not retrain our model in their framework but instead load the parameters from InI into the victim.
**KnockoffNets attacks.** Table 2 presents the defense results against KnockoffNets attacks. We can draw some observations listed below:
* By inducing and isolating the adversary, InI achieves the best defense performance with similar benign accuracy on most of the results, which demonstrates our better trade-offs.
* Note that, on the CIFAR-100 dataset, InI achieves extraordinary defense performance against the soft-label attack, but behaves poorly against the hard-label attack.
**JBDA attacks.** Table 3 shows the defense performance against JBDA attacks. Different from KnockoffNets, which applies surrogate datasets, JBDA uses a set of seed examples from the victim's training set, and thus the results differ from KnockoffNets. From the results, we can observe that:
* The OOD-based defenses (AM, EDM, and InI) behave poorly on soft-label JBDA. This is mainly because the JBDA samples are synthesized from seed examples in the training set and are thus close to the target distribution.
* Instead, MAD, the perturbation-based method, could achieve the best performance on the soft-label attack on FashionMNIST, CIFAR-10, and CIFAR-100 datasets. However, its performance rapidly drops on the hard-label attacks, as it hardly modifies the top-1 label of the output.
* JBDA achieves good stealing performance on simpler datasets like MNIST and FashionMNIST, but does not perform well on more complex datasets like CIFAR-10 and CIFAR-100. The JBDA stealing results on CIFAR-100 (around 0.05\(\times\) on all defenses) are generally unusable.
**Integration with other defenses.** We also provide the experimental results of the integration of our InI with MAD and AM in Tables 2 and 3. We can draw the following observations:
* The integration of our InI with MAD and AM can further improve the trade-off, as it achieves stronger defense performance, with unchanged benign accuracy, than the original
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Defense} & \multicolumn{4}{c}{Benign Accuracy} \\ \cline{2-5} & MNIST & FashionMNIST & CIFAR-10 & CIFAR-100 \\ \hline Vanilla & 99.46 & 93.89 & 94.71 & 76.63 \\ MAD & 99.46 & 93.87 & 94.31 & 75.44 \\ AM & 99.41 & 93.67 & 94.30 & 75.00 \\ EDM & 99.43 & 93.70 & 94.35 & 75.38 \\ InI (**Ours**) & 99.40 & 93.36 & 94.32 & 75.50 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Benign accuracy of each defense on different image classification datasets.
MAD and AM against all the attacks. Additionally, the integration of our InI with AM mostly achieves more robustness than that with MAD.
* On soft-label JBDA attacks, though the integration improves over Vanilla and plain InI, it still cannot surpass the performance of MAD at similar benign accuracy. We attribute this phenomenon to the robustness trade-off between soft-label and hard-label attacks: InI has traded benign accuracy for robustness against hard-label attacks during training, and similar soft-label robustness can only be achieved at the cost of more benign performance.
### Inference Speed Analysis
In this section, we provide the inference speed analysis of each defensive method and evaluate our analysis through experiments. In practice, the model for MLaaS would only train once but would receive millions of inference queries from users, which the service provider would charge for. Therefore, boosting the inference speed has a significant impact on the practical use of defensive methods. Evaluations and discussions about training-time speed are provided in Supplementary Materials.
We first provide some analyses of the inference process of each method. As shown in Table 4, our InI achieves the fastest inference speed among all defensive methods. On the contrary, existing methods employ auxiliary modules during inference, which introduce extra computational operations or even harm the DNN's parallel capability. Detailed analyses are given below.
**Time cost of Vanilla and InI.** We define the time cost of a forward pass as \(M_{f}\) for a single query. The time cost of a model without defense is therefore \(M_{f}\), and the same holds for InI, since InI employs no extra modules.
**Time cost of MAD.** MAD computes the Jacobian matrix \(G=\nabla\log f(x,\mathbf{\theta})\) to maximize the angular deviation between the perturbed gradient and the original gradient. The official code needs \(C\) backward passes, which cost \(CM_{b}\), where \(M_{b}\) is the time cost of one backward pass and \(C\) is the number of classes of the target classification task. After getting \(G\), MAD heuristically searches for a perturbed probability \(y^{*}\), which costs time \(S\). When the victim receives a batch of data with a mini-batch size of \(B\), the backward passes and heuristic search cost \(BCM_{b}\) and \(BS\), as these operations cannot be performed in parallel.
**Time cost of AM.** AM employs an adaptive misinformation module to perturb the victim's output probability. The adaptive misinformation is generated by a DNN with the same architecture as the victim's backbone. As a consequence, the time cost of AM mainly comes from the forward passes of the victim's backbone and the misinformation model, which cost around \(2M_{f}\) in sum.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Defense} & \multicolumn{2}{c}{MNIST} & \multicolumn{2}{c}{FashionMNIST} & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} \\ \cline{2-9} & soft-label & hard-label & soft-label & hard-label & soft-label & hard-label & soft-label & hard-label \\ \hline Vanilla & 99.39(1.00\(\times\)) & 98.84(0.99\(\times\)) & 71.58(0.76\(\times\)) & 57.95(0.62\(\times\)) & 79.55(0.84\(\times\)) & 69.60(0.73\(\times\)) & 50.89(0.66\(\times\)) & 27.44(0.36\(\times\)) \\ MAD & 99.31(1.00\(\times\)) & 99.05(1.00\(\times\)) & 68.84(0.73\(\times\)) & 44.61(0.48\(\times\)) & 70.31(0.75\(\times\)) & 65.07(0.69\(\times\)) & 37.36(0.50\(\times\)) & 18.58(0.25\(\times\)) \\ AM & 98.58(0.99\(\times\)) & 97.14(0.98\(\times\)) & 20.77(0.22\(\times\)) & 14.23(0.15\(\times\)) & 75.32(0.80\(\times\)) & 63.08(0.67\(\times\)) & 24.07(0.32\(\times\)) & **15.99(0.21\(\times\))** \\ EDM & 98.90(0.99\(\times\)) & 97.44(0.98\(\times\)) & 21.42(0.23\(\times\)) & 15.90(0.17\(\times\)) & 72.30(0.77\(\times\)) & 62.31(0.66\(\times\)) & 43.78(0.58\(\times\)) & 20.52(0.27\(\times\)) \\ InI (**Ours**) & **89.02(0.90\(\times\))** & **95.90(0.96\(\times\))** & **20.12(0.22\(\times\))** & **10.82(0.12\(\times\))** & **69.54(0.74\(\times\))** & **60.33(0.64\(\times\))** & **9.71(0.13\(\times\))** & 22.01(0.29\(\times\)) \\ \hline \hline InI + MAD & **88.09(0.89\(\times\))** & **92.50(0.93\(\times\))** & 20.18(0.22\(\times\)) & 10.77(0.12\(\times\)) & 67.45(0.72\(\times\)) & 60.25(0.64\(\times\)) & 9.37(0.12\(\times\)) & 13.06(0.17\(\times\)) \\ InI + AM & 88.22(0.89\(\times\)) & 94.12(0.95\(\times\)) & **15.01(0.16\(\times\))** & **10.27(0.11\(\times\))** & **65.80(0.70\(\times\))** & **58.35(0.62\(\times\))** & **9.36(0.13\(\times\))** & **12.47(0.17\(\times\))** \\ \hline \hline \end{tabular}
\end{table}
Table 2. Experimental results for KnockoffNets attack. We report the clone accuracy and the relative performance on the target test set. Lower clone accuracy/relative performance indicates better defense performance. “InI + MAD” and “InI + AM” indicate the integration of our InI with other defenses.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Defense} & \multicolumn{2}{c}{MNIST} & \multicolumn{2}{c}{FashionMNIST} & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} \\ \cline{2-9} & soft-label & hard-label & soft-label & hard-label & soft-label & hard-label & soft-label & hard-label \\ \hline Vanilla & 73.00(0.73\(\times\)) & 72.77(0.73\(\times\)) & 71.09(0.76\(\times\)) & 67.80(0.72\(\times\)) & 26.19(0.28\(\times\)) & 25.59(0.27\(\times\)) & 4.82(0.06\(\times\)) & 4.09(0.05\(\times\)) \\ MAD & 61.69(0.62\(\times\)) & 72.81(0.73\(\times\)) & **57.70(0.61\(\times\))** & 66.46(0.71\(\times\)) & **18.73(0.20\(\times\))** & 24.89(0.26\(\times\)) & **2.44(0.03\(\times\))** & 3.90(0.05\(\times\)) \\ AM & 81.23(0.82\(\times\)) & 73.17(0.74\(\times\)) & 67.73(0.72\(\times\)) & 66.28(0.71\(\times\)) & 24.33(0.26\(\times\)) & 25.12(0.27\(\times\)) & 4.36(0.06\(\times\)) & 3.29(0.04\(\times\)) \\ EDM & 79.34(0.80\(\times\)) & 78.72(0.79\(\times\)) & 70.08(0.75\(\times\)) & 68.86(0.73\(\times\)) & 25.86(0.27\(\times\)) & 25.71(0.27\(\times\)) & 3.35(0.04\(\times\)) & 3.04(0.04\(\times\)) \\ InI (**Ours**) & **57.94(0.58\(\times\))** & **66.19(0.67\(\times\))** & 70.81(0.76\(\times\)) & **64.99(0.70\(\times\))** & 24.16(0.26\(\times\)) & **24.14(0.26\(\times\))** & 3.81(0.05\(\times\)) & **2.61(0.03\(\times\))** \\ \hline InI + MAD & **55.99(0.56\(\times\))** & 66.09(0.66\(\times\)) & **65.63(0.70\(\times\))** & 64.61(0.69\(\times\)) & 23.04(0.24\(\times\)) & 24.04(0.25\(\times\)) & 3.51(0.05\(\times\)) & 2.61(0.03\(\times\)) \\ InI + AM & 56.42(0.57\(\times\)) & **62.35(0.63\(\times\))** & 68.14(0.73\(\times\)) & **63.72(0.68\(\times\))** & **22.68(0.24\(\times\))** & **22.43(0.24\(\times\))** & **3.33(0.04\(\times\))** & **2.45(0.03\(\times\))** \\ \hline \hline \end{tabular}
\end{table}
Table 3. Experimental results for JBDA attack. We report the clone accuracy and the relative performance on the target test set. Lower clone accuracy/relative performance indicates better defense performance. “InI + MAD” and “InI + AM” indicate the integration of our InI with other defenses.
**Time cost of EDM.** EDM jointly trains an ensemble of \(n\) victim models and selects a result from them according to a hash function. The hash function of EDM is a DNN with a simpler architecture, which costs \(M_{h}\) for a forward pass. For a single query, the time cost of EDM should be \(M_{f}+M_{h}\). However, in the official implementation of EDM, the time cost increases to \(nM_{f}+M_{h}\) when EDM receives a batched query. The reason is that a single model can process a batch of data in parallel, but the forward passes of different models cannot run in parallel. Therefore, all models in the ensemble are accessed, and the time cost increases.
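Putting the analysis together, the per-batch cost model can be evaluated numerically; all unit costs and the ensemble size below are illustrative assumptions, not measured values:

```python
# Illustrative per-batch inference cost under assumed unit times.
B, C, n = 128, 10, 3                     # batch size, #classes, EDM ensemble size (assumed)
M_f, M_b, M_h, S = 1.0, 2.0, 0.2, 0.5    # assumed costs: forward, backward, hash, search

cost = {
    "Vanilla / InI": M_f,                # one batch-parallel forward pass
    "MAD": M_f + B * C * M_b + B * S,    # per-sample backward passes + search
    "AM": 2 * M_f,                       # backbone + misinformation model
    "EDM": n * M_f + M_h,                # every ensemble member runs
}
for name, t in cost.items():
    print(f"{name}: {t / cost['Vanilla / InI']:.1f}x the undefended cost")
```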
**Empirical evaluation.** We perform empirical experiments to evaluate the speed of each defense. The experiment is conducted on an RTX 3080 GPU, and the architecture of the victim's backbone is ResNet-18. We randomly generate input images, execute the inference, and record the time cost. For fair comparisons, we repeat the process 2,000 times and record the total time. As shown in Figure 4, other defenses consume significantly more time during inference compared to our method (1.98\(\times\) to 25.4\(\times\)).
### Ablation Studies
We then conduct ablation studies to understand the contributions of gradient isolation and adversary induction. Specifically, we conduct experiments by training the victim with (1) no extra defense (denoted by "Vanilla"); (2) only the isolation loss \(\mathcal{L}_{iso}\); (3) only the induction loss \(\mathcal{L}_{ind}\); (4) InI without \(\nabla\mathcal{L}_{ig}\); and (5) the full InI (\(\mathcal{L}_{iso}+\mathcal{L}_{ind}\)). We record the defense performance against KnockoffNets and JBDA on the CIFAR-10 dataset. As shown in Table 5, we can draw several observations:
* The model trained with \(\mathcal{L}_{iso}\) causes an apparent drop in stealing performance (_e.g._, 4.34 on soft-label KnockoffNets), which indicates that gradient isolation can bring robustness against model stealing.
* The model trained with \(\mathcal{L}_{ind}\) shows limited defense against model stealing attacks, as it only induces the adversary to learn less instead of directly misleading it.
* With the two methods combined, InI can achieve the best defense performance (69.54 / 60.33 on KnockoffNets and 24.16 / 24.14 on JBDA), implying that the cooperation of gradient isolation and adversary induction plays an important role during training.
## 6. Conclusion
Model stealing attacks have become a rising challenge to the privacy and intellectual property of machine learning models. Existing defensive methods suffer from additional computational costs and unfavorable trade-offs, which impede their practical implementation. To cope with this concern, we propose a novel and efficient training framework named InI. InI embeds the countermeasures within the victim's parameters, isolating the adversary's gradient from the expected gradient to achieve robustness without incurring extra computational overheads. InI leverages the OOD assumption and induces the adversary to acquire minimal knowledge, thereby improving the trade-off. In our evaluations, InI surpasses existing defensive methods in terms of speed and robustness, and its integration with prior defenses renders it even more practical. We hope our proposed method provides a new perspective on defense strategies against model stealing attacks.
## Acknowledgement
This work was supported by the National Natural Science Foundation of China (62022009 and 62206009), the Fundamental Research Funds for the Central Universities, and the State Key Laboratory of Software Development Environment.
|
2303.03851 | Parsing Line Segments of Floor Plan Images Using Graph Neural Networks | In this paper, we present a GNN-based Line Segment Parser (GLSP), which uses
a junction heatmap to predict line segments' endpoints, and graph neural
networks to extract line segments and their categories. Different from previous
floor plan recognition methods, which rely on semantic segmentation, our
proposed method is able to output vectorized line segments and requires fewer
post-processing steps to be put into practical use. Our experiments show that
the methods outperform state-of-the-art line segment detection models on
multi-class line segment detection tasks with floor plan images. In the paper,
we use our floor plan dataset named Large-scale Residential Floor Plan data
(LRFP). The dataset contains a total of 271,035 floor plan images. The label
corresponding to each picture contains the scale information, the categories
and outlines of rooms, and the endpoint positions of line segments such as
doors, windows, and walls. Our augmentation method makes the dataset adaptable
to the drawing styles of as many countries and regions as possible. | Mingxiang Chen, Cihui Pan | 2023-03-07T12:32:19Z | http://arxiv.org/abs/2303.03851v1 | # Parsing Line Segments of Floor Plan Images Using Graph Neural Networks
###### Abstract
In this paper, we present a GNN-based Line Segment Parser (GLSP), which uses a junction heatmap to predict line segments' endpoints, and graph neural networks to extract line segments and their categories. Different from previous floor plan recognition methods, which rely on semantic segmentation, our proposed method is able to output vectorized line segments and requires fewer post-processing steps to be put into practical use. Our experiments show that the methods outperform state-of-the-art line segment detection models on multi-class line segment detection tasks with floor plan images. In the paper, we use our floor plan dataset named Large-scale Residential Floor Plan data (LRFP). The dataset contains a total of 271,035 floor plan images. The label corresponding to each picture contains the scale information, the categories and outlines of rooms, and the endpoint positions of line segments such as doors, windows, and walls. Our augmentation method makes the dataset adaptable to the drawing styles of as many countries and regions as possible.
## 1 Introduction
Floor plan recognition has long been an active research field after deep learning has risen as a promising and stable method regarding computer vision problems. The task is straightforwardly designed, that is to recover the vector-graphic representation of a floor plan from a rasterized image, and re-enable further computing capabilities such as editing, synthesis, or analysis. In 2017, Liu _et al._[14] proposed a method that combines deep learning with integer programming to identify key points, room outlines, and room categories. Some studies [15, 21] show that using optical character recognition (OCR) or object detection methods for auxiliary judgment can further improve the accuracy. Although these methods show good results to some extent, they require heavy post-processing steps to be put into practical use.
In this paper, we propose a novel floor plan recognition method based on line segment detection (LSD) using Graph Neural Networks (GNN). While parsing floor plans in two separate stages may introduce extra complexity, our method is able to extract vectorized line segments from floor plans rather than pixel-wise semantic segments. Despite the recent achievements made by deep learning in the field of LSD, two problems remain unsolved. First, line segments in floor plans have different categories such as doors, windows, or walls, while the detection methods proposed so far are not designed for multi-class line segments. Second, the algorithms performing well in natural scenes may not be the best choice for floor plan recognition tasks. For almost all blueprint images, including floor plans, the line segments are clearly and logically related to each other, which is different from the loose relationship between line segments in natural scenes.
Overall, our contributions are summarized as follows:
Figure 1: The proposed GLSP model can reliably translate images to a set of vectorized line segments with different types rather than semantic segmentation maps in previous studies [14, 28, 15]. Augmentation methods are used so that the floor plans in the dataset have various styles. The walls, doors, and windows in the figures are represented by red, green, and blue line segments, respectively.
* We introduce the task of multi-class line segment detection into the field of floor plan recognition.
* Our proposed method outputs vectorized results of structural elements and requires fewer post-processing steps to put the algorithm into practical use.
* An attention-based graph neural network is used to capture the relationships between line segments accurately. The model achieves better performance compared to the state-of-the-art wireframe parsing model.
The paper is organized as follows. First, we introduce related works in Section 2. The details of our method are explained in Section 3. The settings of experiments and their results are presented in Section 4. The conclusion is discussed in Section 5.
## 2 Related Works
**Floor plan recognition and reconstruction** At present, many methods based on deep learning divide the problem of floor plan recognition and reconstruction into several more typical sub-problems, such as object detection, semantic segmentation, optical character recognition (OCR), etc. The system integrates the recognition results of each model through a series of post-processing methods and outputs standardized floor plans. For example, Liu _et al._[14] use convolutional neural networks to identify the locations and types of junction points and use integer programming to output the information about walls and doors. The room types are recognized by a per-pixel semantic classification model. However, this method will not work if inclined walls are present in the floor plan since the types of each junction point are predefined. Zeng _et al._[28] improves the accuracy of the semantic segmentation by using a room-boundary guided attention mechanism, but does not propose corresponding post-processing methods, so the results obtained are not vectorized floor plans. Surikov _et al._[21] use Faster R-CNN for object detection on floor plans. Lv _et al._[15] improves the algorithm process based on [14], adding steps such as scale prediction, OCR, and object detection, which greatly improves the robustness and usability of the system.
**Datasets of floor plans** To the best of our knowledge, Raster-to-Vec [14] is one of the earliest approaches trying to reconstruct floor plans from images. Its dataset contains 870 vector-graphics annotations. Rent3D [13] is also a very popular open-source floor plan dataset containing 215 floor plan images. Recently, researchers have begun to use larger datasets for model training. Kalervo _et al._[11] provides CubiCasa5K, including 5000 floor plan images from Finland. Lv _et al._[15] mentioned Residential Floor Plan data (RFP) in their paper, which contains 7000 floor plans crawled from the internet. However, the dataset is not open-source. Although the demand for larger-scale datasets is increasing without a doubt, it is difficult to obtain a large amount of floor plan data due to copyright or personal privacy protection. In addition, the existing floor plan datasets are generally only applicable to certain countries because of the drawing styles. Thus, even if the scale of some datasets such as RPLAN [22] is sufficient to build large models, researchers from other places may be reluctant to use them.
**Line segment detection** Specifically, we use line segment detection methods as the pre-processing module in some of our baseline models. Edge detection [1, 4, 5, 16, 23] and perceptual grouping [7, 9, 20] are classic methods often used by pioneers in this field. In addition, the method based on Hough transform [6, 8, 9, 17] is also a group of commonly used line segment detection methods based on traditional image processing. In the era of deep learning, the methods based on junction prediction represented by LCNN [31] and the methods based on dense prediction represented by AFM [25] have each shown excellent results and performance. HAWP [26] combines the two on their basis, that is, to use the holistic attraction field map to propose a series of line segments, use junctions to fine-tune the results of the proposal or remove unreasonable line segments, and output the confidence of the refined line segments. Later, F-Clip [3] further optimizes the above model, abandons the two-stage paradigm, and improves both speed and accuracy. HAWPv3 [27] explores the self-supervised learning paradigms based on HAWP and can be used as a good wireframe parser for the out-of-distribution images. Some researchers [18, 30] have proposed line segment detection methods based on graph networks. However, they do not perform better [18] than the above-mentioned two-phase parsing paradigm when detecting line segments in natural scenes. Recently, as the potential of the Transformer model being continuously explored, LETR [24] builds a transformer-based end-to-end line segment detection algorithm by adding endpoint distance loss and coarse-to-fine decoding to the transformer model DETR [2], which is originally built for object detection.
## 3 Method
Similar to previous researches [26, 31] on line segment detection, the floor plan representations are based on the notation of graph theory. A floor plan is defined on an undirected graph \(\mathcal{G}=(\mathcal{P},\mathcal{A})\) where \(i\in\mathcal{P}=\{1,2,...,n\}\) represents the \(i\)-th endpoint of all line segments, and \(a_{i,j}\in\mathcal{A}\), where \(i,j\in\mathcal{P},i\neq j\), represents the line segment from endpoint \(i\) to \(j\). For each endpoint \(i\), the coordinate in the image space is represented by \(p_{i}\). Different from line segment detection, the line segments in the floor plan have different categories (in this article are null, walls, doors, and windows), which are represented by \(c_{i,j}\).
In this section, we first introduce the dataset used for training and evaluation in Section 3.1. Figure 2 illustrates an overview of our GNN-based Line Segment Parser (GLSP) architecture. For a floor plan image, we create two intermediate feature maps using identical backbone networks. One is used for junction detection (Section 3.2), and the other is used for building the features of potential connections in the graph (Section 3.3.1). We use a graph neural network to classify the connections (Sections 3.3.2 and 3.3.3). Finally, the training strategies and the multi-task loss function are described in Section 3.4.
### Data Description
All the floor plan images and labels in this dataset come from manual annotations by our 3D scanning operators. Each floor plan has been slightly modified, so users would not know their real locations. The houses corresponding to the floor plans are mostly Chinese urban residential buildings. The dataset is randomly split into training, validation, and test sets with 268,035, 1,500, and 1,500 floor plans, respectively. Each sample has 4 floor plan images, including pictures with or without items of furniture and room labels. The images are saved in JPG format with a resolution of \(1080\times 720\). The annotations for floor plans include: 1) the scale of the image represented by millimeters per pixel, 2) the information of lines, including the starting and ending points, thickness, and the category, chosen from _wall_, _door_, and _window_, and 3) the information of rooms, including the category and the contour. Please refer to the supplementary material for augmentation details and statistics about the dataset.
### Junction Detection
We use the stacked Hourglass network [19] as the feature extraction backbone for junction detection and the feature extraction module in the graph-building stage. The network is used in previous line segment detection researches [3, 26, 31] and is also a commonly used model in human keypoint estimation. We choose the same settings as in [26], so that the feature map is \(1/4\) scaled compared to the original image. The features are then up-sampled with a pixel shuffle layer to make the size of the feature map match the input. We make this modification because many endpoints in floor plans are close to each other. Using bins instead of pixels can result in a significant drop in the recall rate (Section 4). Hence, the junction offset module presented in [31] and [26] are removed. The neural network only predicts the junction likelihood \(\hat{J}^{\prime}(p)\), that for each pixel we have
\[\hat{J}^{\prime}(p)=\begin{cases}\hat{J}(p)&\hat{J}(p)=\max_{p^{\prime}\in N(p)}\hat{J}(p^{\prime})\\ 0&\text{otherwise}\end{cases} \tag{1}\]
where a non-maximum suppression is applied.
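This per-pixel non-maximum suppression can be implemented with a max-pooling comparison. Below is a minimal sketch, assuming a \(3\times 3\) neighborhood \(N(p)\), which matches the default NMS kernel size of 3 used in the experiments.

```python
import torch
import torch.nn.functional as F

def junction_nms(heatmap: torch.Tensor, kernel: int = 3) -> torch.Tensor:
    """Eq. (1): keep J(p) only where it is the maximum over N(p).

    heatmap: (B, 1, H, W) junction likelihood map J.
    """
    pad = kernel // 2
    local_max = F.max_pool2d(heatmap, kernel, stride=1, padding=pad)
    return heatmap * (heatmap == local_max)

# usage: J_nms = junction_nms(torch.rand(1, 1, 512, 512))
```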
### Line Segment Classification with Graph
We use an attention-based graph network for line segment classification. Attention mechanisms are now widely used in sequence-based tasks and have the advantage of amplifying the most important parts of graphs, which has proven to be useful for many tasks. To learn the relationships between line segments, the input to the graph neural network, similar to the definition of a dual graph, is defined on an undirected graph \(\mathcal{G}^{\prime}=(\mathcal{V},\mathcal{E})\) where \(v\in\mathcal{V}=\{1,2,...,n\}\) represents the \(v\)-th line segment, whose type is represented by \(c^{\prime}_{v}\). If \(e_{v_{0},v_{1}}=1\) where \(e_{v_{0},v_{1}}\in\mathcal{E}\) and \(v_{0},v_{1}\in\mathcal{V},v_{0}\neq v_{1}\), the \(v_{0}\)-th and \(v_{1}\)-th line segments share a common junction. To clarify, the words "junction" and "line segment" correspond to \(\mathcal{P}\) and \(\mathcal{A}\) in the objective graph \(\mathcal{G}\), respectively, while the words "node" and "edge" correspond to \(\mathcal{V}\) and \(\mathcal{E}\) in the intermediate graph \(\mathcal{G}^{\prime}\), respectively.
#### 3.3.1 Find potential nodes
On average, a floor plan graph contains 50–100 junctions, which means a fully connected graph would involve up to 5,000 line segments. As for the intermediate graph \(\mathcal{G}^{\prime}\), that amounts to thousands of nodes and millions of edges. Hence, we provide two node suppression strategies.
**Non-shortest suppression (NSS)** Similar to non-maximum suppression (NMS), NSS selects line segments out of a fully connected graph by their angles and lengths. If two line segments have a common point and the angle between them is less than \(\mathcal{D}_{NSS}\), the longer line segment is removed. Note that \(\mathcal{D}_{NSS}\) is a dynamic threshold depending on the length and the direction of the longer line segment such that
\[\mathcal{D}_{NSS}=\begin{cases}2^{\circ}&\text{the line is ``potential''}\\ 22.5^{\circ}&\text{otherwise}\end{cases} \tag{2}\]
In this paper, the line is "potential" if its length is less than 20 pixels or

\[\min(\theta_{0},\theta_{1},\theta_{2},\theta_{3})<\frac{200}{l}+2 \tag{3}\]

where \(\theta_{0}\), \(\theta_{1}\), \(\theta_{2}\), and \(\theta_{3}\) are the angles (in degrees) between the line segment and the vectors \((0,1)\), \((0,-1)\), \((1,0)\), \((-1,0)\), respectively. Here, \(l\) represents the length of the line segment (in pixels).

Figure 2: An overview of the model structure.
**Non-diagonal suppression (NDS)** NDS is more aggressive than NSS: a line is discarded if it is not "potential". Note that not all line segments in floor plans are horizontal or vertical lines. Inclined walls are usually longer due to aesthetic and architectural reasons [15], so the line segments of the convex hull are also added to the set of potential line segments regardless of the above suppression.
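The sketch below illustrates one possible reading of the suppression criteria in Eqs. (2)-(3); folding the angle to the horizontal axis into \([0^{\circ},90^{\circ}]\) (so that the minimum over the four axis vectors reduces to \(\min(\theta,90^{\circ}-\theta)\)) is an assumption about the coordinate convention.

```python
import math

def is_potential(p0, p1) -> bool:
    """One reading of Eqs. (2)-(3): test whether a segment is 'potential'."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    if length < 20:                  # short segments are always kept
        return True
    # angle to the horizontal axis, folded into [0, 90] degrees; the minimum
    # over (0,1), (0,-1), (1,0), (-1,0) is then min(theta, 90 - theta)
    theta = math.degrees(math.atan2(abs(dy), abs(dx)))
    return min(theta, 90.0 - theta) < 200.0 / length + 2.0

def nss_threshold(p0, p1) -> float:
    """Eq. (2): dynamic NSS angle threshold for the longer segment."""
    return 2.0 if is_potential(p0, p1) else 22.5
```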
#### 3.3.2 Embeddings
The primitive edge embedding consists of the normalized 2-dimensional coordinate values of the junction, whose range is bounded by \([-1,1)\). The primitive node embedding includes: 1) basic information about the line segment, and 2) line features extracted from the second feature map. These vectors are concatenated to form the node embedding.
The basic information of a line segment contains the normalized coordinates of its midpoint and two endpoints, the length of the line segment, and the absolute cosine value of the angle between the line segment and the horizontal axis. Note that the order of the endpoints should not affect the result, so the endpoints are randomly swapped in our implementation.
To extract the feature vector of each line segment, we introduce Rotated Region of Interest (RRoI) Pooling. It conceptually combines the advantages of LoI pooling and RoI Align. In wireframe detection, each ground-truth line segment usually lies where the color gradient changes drastically, so the relevant region is narrow. However, line segments in a floor plan are usually thick or painted with unique textures to represent special structures such as sliding doors and load-bearing walls. Hence, in addition to selecting sampling points along the direction of the line segment, RRoI also selects sampling points along the normal of the line segment, as shown in Figure 3. The set of distances along the normal used in our model is \(\{-1,0,1\}\), and the number of points along the line is 32. A 2D max-pooling layer with a kernel size of (2,3) is used to reduce the shape of the feature from 32 by 3 to 16 by 1.
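A minimal sketch of this sampling scheme follows, assuming bilinear interpolation via `grid_sample` and ignoring half-pixel coordinate offsets; these conventions are assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def rroi_pool(fmap, p0, p1, n_pts=32, offsets=(-1.0, 0.0, 1.0)):
    """Sample n_pts points along p0->p1 at unit offsets along its normal,
    then max-pool with kernel (2, 3), as described above.

    fmap: (1, C, H, W) feature map; p0, p1: (x, y) pixel coordinates.
    Returns a (C, 16, 1) feature for the line segment.
    """
    _, C, H, W = fmap.shape
    p0 = torch.tensor(p0, dtype=torch.float32)
    p1 = torch.tensor(p1, dtype=torch.float32)
    t = torch.linspace(0, 1, n_pts).unsqueeze(1)            # (n_pts, 1)
    pts = p0 + t * (p1 - p0)                                # on-line samples
    d = (p1 - p0) / torch.norm(p1 - p0).clamp(min=1e-6)
    normal = torch.tensor([-d[1], d[0]])                    # unit normal
    # (n_pts, 3, 2): each point shifted by -1 / 0 / +1 along the normal
    grid = pts.unsqueeze(1) + torch.tensor(offsets).view(1, 3, 1) * normal
    grid = grid / torch.tensor([W, H]) * 2 - 1              # to [-1, 1]
    feat = F.grid_sample(fmap, grid.unsqueeze(0), align_corners=False)
    return F.max_pool2d(feat, kernel_size=(2, 3)).squeeze(0)  # (C, 16, 1)
```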
#### 3.3.3 Connection Classification
We adopt an attention-based Graph Neural Network (GNN) similar in design to the Gated Attention Network (GaAN) [29] as our classification model to capture the relationships between nodes and classify the type of each node:
\[\mathbf{x}_{m,i}=\mathrm{FC}_{\theta_{o}}\big{(}\mathbf{x}_{m-1,i}\oplus\big\|_{k=1}^{K}\sum_{j\in\mathcal{N}_{i}}w_{i,j}^{(k)}\circ\mathrm{FC}_{\theta_{w}^{(k)}}^{h}(\mathbf{z}_{j})\big{)} \tag{4}\]
Here, \(\mathbf{x}_{m,i}\) is the vector of node \(i\) at the \(m\)-th iteration. \(\mathcal{N}_{i}\) represents node \(i\)'s neighbours, and \(\mathbf{z}\) is the reference vector of a neighbour node. \(K\) is the number of heads, and both \(\oplus\) and \(\|\) are the concatenation operation. \(\circ\) represents element-wise multiplication. FC means fully connected layers, and \(\theta\) represents the corresponding parameters. The formulation of the channel-wise attention between node \(i\) and its neighbour \(j\) is
\[w_{i,j,c}^{(k)}=\frac{\exp\left(\phi_{w,c}^{(k)}\left(\mathbf{x}_{i},\mathbf{ z}_{j},\mathbf{e}_{i,j}\right)\right)}{\sum_{l=1}^{|\mathcal{N}_{i}|}\exp\left( \phi_{w,c}^{(k)}\left(\mathbf{x}_{i},\mathbf{z}_{l},\mathbf{e}_{i,l}\right) \right)} \tag{5}\]
where \(c\) represents \(c\)-th channel, and \(\mathbf{e}_{i,j}\) is the feature of the edge from \(i\) to \(j\). The dot product attention is replaced by fully connected layers to aggregate information of edges:
\[\phi_{w}^{(k)}(\mathbf{x},\mathbf{z},\mathbf{e})=\mathrm{FC}_{\theta_{w}^{(k) }}\left(\psi_{w}^{(k)}(\mathbf{x},\mathbf{z},\mathbf{e})\right) \tag{6}\]
Here, \(\psi_{w}^{(k)}(\mathbf{x},\mathbf{z},\mathbf{e})\) concatenates the projected features:
\[\psi_{w}^{(k)}(\mathbf{x},\mathbf{z},\mathbf{e})=\mathrm{FC}_{\theta_{w}^{(k) }}(\mathbf{x})\oplus\mathrm{FC}_{\theta_{w}^{(k)}}(\mathbf{z})\oplus\mathrm{ FC}_{\theta_{e}^{(k)}}(\mathbf{e}) \tag{7}\]
The output vector of node \(i\) is
\[\mathbf{y}_{i}=\sigma(\mathbf{x}_{M,i}) \tag{8}\]
where \(M\) is the depth of the GNN, and \(\sigma(\cdot)\) is the sigmoid function which determines the likelihood of each line segment category.
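A compact, dense-adjacency sketch of one such layer (Eqs. (4)-(7)) is given below; the head count, hidden widths, and masking scheme are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ChannelAttentionLayer(nn.Module):
    """Sketch of Eqs. (4)-(7): channel-wise attention over neighbours with
    edge features, multi-head aggregation, then concatenation with the
    previous node state. Dense (N, N) adjacency is assumed for brevity.
    """
    def __init__(self, dim, edge_dim, heads=4):
        super().__init__()
        self.heads = heads
        self.proj_x = nn.ModuleList(nn.Linear(dim, dim) for _ in range(heads))
        self.proj_z = nn.ModuleList(nn.Linear(dim, dim) for _ in range(heads))
        self.proj_e = nn.ModuleList(nn.Linear(edge_dim, dim) for _ in range(heads))
        self.attn = nn.ModuleList(nn.Linear(3 * dim, dim) for _ in range(heads))
        self.value = nn.ModuleList(nn.Linear(dim, dim) for _ in range(heads))
        self.out = nn.Linear(dim + heads * dim, dim)

    def forward(self, x, e, adj):
        # x: (N, dim) node states, e: (N, N, edge_dim) edge features,
        # adj: (N, N) binary adjacency (adj[i, j] = 1 iff j is a neighbour of i)
        N = x.size(0)
        msgs = []
        for k in range(self.heads):
            xi = self.proj_x[k](x).unsqueeze(1).expand(N, N, -1)
            zj = self.proj_z[k](x).unsqueeze(0).expand(N, N, -1)
            phi = self.attn[k](torch.cat([xi, zj, self.proj_e[k](e)], -1))
            phi = phi.masked_fill(adj.unsqueeze(-1) == 0, -1e9)
            w = torch.softmax(phi, dim=1)                # Eq. (5), per channel
            msgs.append((w * self.value[k](x).unsqueeze(0)).sum(dim=1))
        return self.out(torch.cat([x] + msgs, dim=-1))   # Eq. (4)
```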
### Multi-task Learning
We use the segmentation map of the structural elements (walls, door, and windows) to supervise the intermediate layers of hourglass modules. The proposed model is trained with the following loss function:
\[\mathbb{L}=\mathbb{L}_{Hourglass}+\mathbb{L}_{Junc}+\mathbb{L}_{Graph} \tag{9}\]

where all three losses are binary cross-entropy losses. In our experiments, \(\mathbb{L}_{Graph}\) is not added for the first 10,000 steps. The category of a node in the ground truth of the intermediate graph \(\mathcal{G}^{\prime}\) is regarded as one of the above structural elements if the \(L_{2}\) distance between the node and any of these ground truths is less than \(d_{max}=25\).

Figure 3: Visual comparison of different pooling methods. From left to right, 1) feature map; 2) the proposed line; 3) line of interest (LoI); 4) rotated region of interest (RRoI); 5) region of interest (RoI). The red dots in the 3rd to 5th picture represent the sampling points given by each pooling method.
## 4 Experiments
### Baselines
**Modified HAWP** We choose the previous state-of-the-art model in wireframe parsing as one of our baseline approaches, yet the model is not designed for multi-class line segment detection. Thus, a few modifications are made: 1) The fully connected layers are no longer projecting LoI features to scores but to categories. 2) The junction detection module is aligned with our method described in Section 3.2. 3) The RRoI line feature extraction module is introduced to replace LoI. The effect of each modification is discussed in Section 4.4. As in the original paper, the binary cross entropy loss is used to train the models.
**GLSP as an integratable module** The GLSP model can also be used as an integratable module on a conventional line segment detection algorithm. Here, we choose the modified HAWP as the line segment detection algorithm, and the same techniques described in Section 3.3.1 to build the graph. The line segment classification results given by the modified HAWP are added to the features of nodes (the green line in Figure 4), and the performance of adding line features extracted from the feature map of the modified HAWP model into the GNN (the red line in Figure 4) is also tested in the ablation study. The two modules are trained independently, so the parameters of the modified HAWP would not be updated when training the GNN. The binary cross entropy loss is used to train the GNN.
All models are trained for 2 epochs, where the learning rate is \(2\times 10^{-4}\) for the first epoch, and \(2\times 10^{-5}\) for the second. The batch size equals 8 if GLSP is used as an integratable module and NDS is used as the suppression strategy, and 4 otherwise.
All models are optimized by the ADAM optimizer [12] with the weight decay set to \(1\times 10^{-4}\). HAWP-M, HAWP-M+, and HAWP-M* in the following experiments represent HAWP with the first, the first two, and all modifications in Section 4.1, respectively. The kernel size for NMS is 3 if not mentioned otherwise. In Table 2, the model "HAWP-M* + GNN" does not use line segment features extracted from the feature map of the modified HAWP (the red line in Figure 4), whose effect is evaluated in the ablation study section.
### Evaluation Metrics
We follow the definition of Structural Average Precision (sAP) used in [31] and [26], except that the results are evaluated on ground truths with a resolution of \(512\times 512\) rather than \(128\times 128\) as in [26]. Please refer to the supplementary material for the results of sAP for each class (wall, door, and window). sAP\({}_{N}\) represents Structural Average Precision without considering the class of line segments. The mean of sAP values over different categories is denoted as msAP. We set the threshold of the \(L_{2}\) distance \(\vartheta_{L}\) to 8, 16, 32, and denote the results as msAP\({}^{\vartheta_{L}}\) and sAP\({}^{\vartheta_{L}}_{N}\). The vectorized junction AP (sAP\({}_{J}\)) is designed in a similar way, where the threshold \(\vartheta_{J}\in\{2,4,8\}\).
### Results and Analysis
Table 2 and Figure 6 show the performance of line segment detection for the baseline models and GLSP. The modifications made to HAWP improve its performance, whereas using GLSP as an integratable module or the end-to-end GLSP is a better choice for both junction detection and line segment detection. In Section 3.2, we argue that using bins instead of pixels can result in a significant drop in the recall rate of junction detection, which is verified in Table 1 and Figure 5. By comparing the accuracy of different models, it is not difficult to infer that the accuracy of junction detection influences the accuracy of line segment detection to some extent. We also adjust the kernel size of NMS, and it can be seen from the table that a larger range of NMS harms junction detection. GLSP predicts the junctions much better than HAWP-M*, and we suggest the reason may be that the feature map is only used to detect junctions and does not need to be shared with the line segment proposal module. Please refer to the supplementary material for qualitative examples from the models mentioned above.
BI, PR, DR, and SF represent assigning the basic information of the line segments, assigning line segment detection results (the green line in Figure 4), assigning the pooling results of the line segment detection network (the red line in Figure 4), and assigning line features extracted from the second image feature extraction network (the blue line in Figures 2 and 4) to the nodes of the intermediate graph \(\mathcal{G}^{\prime}\), respectively.
If GLSP is used as an integratable module, adding the line segment classification results is much better than adding line features extracted from the feature map of the modified HAWP model. The latter can even have negative effects in some cases. Therefore, the model "HAWP-M* + GNN" in Table 2 does not adopt this feature. The second feature map can slightly improve the performance of both paradigms. NSS may be a better choice compared to NDS, but the number of nodes in \(\mathcal{G}^{\prime}\) under NSS is approximately 4 times that of NDS.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline BI & PR & DR & SF & NSS & msAP\({}^{8}\) & msAP\({}^{16}\) & msAP\({}^{32}\) & sAP\({}^{8}_{N}\) & sAP\({}^{16}_{N}\) & sAP\({}^{32}_{N}\) \\ \hline HAWP-M* + GNN & & & & & & & & & \\ \hline \(\bigcirc\) & \(\bigcirc\) & & & \(79.66\pm 0.35\) & \(85.07\pm 0.08\) & \(85.96\pm 0.07\) & \(80.19\pm 0.29\) & \(84.38\pm 0.12\) & \(85.04\pm 0.06\) \\ \(\bigcirc\) & & \(\bigcirc\) & & \(85.16\pm 0.32\) & \(90.24\pm 0.14\) & \(91.13\pm 0.04\) & \(81.03\pm 0.20\) & \(84.84\pm 0.11\) & \(85.44\pm 0.05\) \\ \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & & \(85.37\pm 0.32\) & \(90.47\pm 0.16\) & \(91.36\pm 0.06\) & \(81.23\pm 0.20\) & \(84.91\pm 0.11\) & \(85.50\pm 0.05\) \\ \(\bigcirc\) & & \(\bigcirc\) & \(\bigcirc\) & \(85.72\pm 0.27\) & \(91.30\pm 0.14\) & \(92.26\pm 0.03\) & \(81.17\pm 0.21\) & \(85.13\pm 0.11\) & \(85.75\pm 0.05\) \\ \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(85.11\pm 0.29\) & \(90.70\pm 0.11\) & \(91.66\pm 0.02\) & \(80.78\pm 0.28\) & \(84.91\pm 0.12\) & \(85.56\pm 0.05\) \\ \(\bigcirc\) & & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\mathbf{85.86}\pm 0.35\) & \(\mathbf{91.50}\pm 0.16\) & \(\mathbf{92.5}\pm 0.04\) & \(\mathbf{81.38}\pm 0.27\) & \(\mathbf{85.25}\pm 0.13\) & \(\mathbf{85.87}\pm 0.06\) \\ \hline
**GLSP** & & & & & & & & & & \\ \hline \(\bigcirc\) & & & & & & & & & & \\ \(\bigcirc\) & & & & & & & & & & \\ \(\bigcirc\) & & & & & & & & & & \\ \(\bigcirc\) & & & & & & & & & & \\ \(\bigcirc\) & & & & & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: The ablation study of different designs. If NSS is not marked, NDS is used as the suppression strategy.
Figure 7: Precision-Recall (PR) curves of (a) GLSP as an integratable module, and (b) end-to-end GLSP.
Another problem we would like to discuss is adding prior knowledge or limitations to the training process. Here, we introduce two forms of prior knowledge to the loss function after the category \(c^{\prime}_{v_{i}}\) for node \(v_{i}\) is predicted: 1) the graph loss for \(v_{i}\) is doubled if the line segment it represents intersects node \(v_{j}\)'s corresponding line segment, where \(v_{i}\neq v_{j}\), while both are classified as meaningful line segments (a wall, a door, or a window), and 2) the graph loss for a node is doubled if a sequence of nodes \(K\) can be found such that a) \(v_{i}\in K\), b) the line segments represented by the nodes in the sequence are connected end-to-end, and c) \(\forall k\in K\), \(c^{\prime}_{k}\in\{\text{wall},\text{window}\}\), which means the area is not connected to other parts of the floor plan. The losses are introduced after the model has been trained for 20,000 steps. We use PK to represent whether the loss of prior knowledge is used, and \(N_{r}\) and \(N_{R}\) to represent the number of enclosed rooms and the number of enclosed rooms with doors, respectively. As shown in Table 4 and Figure 8, the strategy improves the number of valid rooms in the final results, but slightly reduces the accuracy of the line segment detection.
## 5 Conclusion
In this paper, we present GLSP, a line segment detection algorithm based on a Graph Attention Network. The proposed model can be used as an end-to-end algorithm or as an integratable module on existing line segment detection models. The former performs better than the latter paradigm on our open-source floor plan dataset LRFP. Our proposed methods are capable of outputting vectorized results of junctions and line segments, which may reduce the amount of computation for post-processing when reconstructing editable floor plans from images.
|
2307.07093 | MaxCorrMGNN: A Multi-Graph Neural Network Framework for Generalized
Multimodal Fusion of Medical Data for Outcome Prediction | With the emergence of multimodal electronic health records, the evidence for
an outcome may be captured across multiple modalities ranging from clinical to
imaging and genomic data. Predicting outcomes effectively requires fusion
frameworks capable of modeling fine-grained and multi-faceted complex
interactions between modality features within and across patients. We develop
an innovative fusion approach called MaxCorr MGNN that models non-linear
modality correlations within and across patients through
Hirschfeld-Gebelein-Renyi maximal correlation (MaxCorr) embeddings, resulting
in a multi-layered graph that preserves the identities of the modalities and
patients. We then design, for the first time, a generalized multi-layered graph
neural network (MGNN) for task-informed reasoning in multi-layered graphs, that
learns the parameters defining patient-modality graph connectivity and message
passing in an end-to-end fashion. We evaluate our model on an outcome prediction
task on a Tuberculosis (TB) dataset consistently outperforming several
state-of-the-art neural, graph-based and traditional fusion techniques. | Niharika S. D'Souza, Hongzhi Wang, Andrea Giovannini, Antonio Foncubierta-Rodriguez, Kristen L. Beck, Orest Boyko, Tanveer Syeda-Mahmood | 2023-07-13T23:52:41Z | http://arxiv.org/abs/2307.07093v1 | MaxCorrMGNN: A Multi-Graph Neural Network Framework for Generalized Multimodal Fusion of Medical Data for Outcome Prediction
###### Abstract
With the emergence of multimodal electronic health records, the evidence for an outcome may be captured across multiple modalities ranging from clinical to imaging and genomic data. Predicting outcomes effectively requires fusion frameworks capable of modeling fine-grained and multi-faceted complex interactions between modality features within and across patients. We develop an innovative fusion approach called MaxCorr MGNN that models non-linear modality correlations within and across patients through Hirschfeld-Gebelein-Renyi maximal correlation (MaxCorr) embeddings, resulting in a multi-layered graph that preserves the identities of the modalities and patients. We then design, for the first time, a generalized multi-layered graph neural network (MGNN) for task-informed reasoning in multi-layered graphs, that learns the parameters defining patient-modality graph connectivity and message passing in an end-to-end fashion. We evaluate our model on an outcome prediction task on a Tuberculosis (TB) dataset, consistently outperforming several state-of-the-art neural, graph-based, and traditional fusion techniques.
Keywords:Multimodal Fusion Hirschfeld-Gebelein-Renyi (HGR) maximal correlation Multi-Layered Graphs Multi-Graph Neural Networks
## 1 Introduction
In the age of modern medicine, it is now possible to capture information about a patient through multiple data-rich modalities to give a holistic view of a patient's condition. In complex diseases such as cancer [18], tuberculosis [16] or autism spectrum disorder [9, 8, 7], evidence for a diagnosis or treatment outcome may be present in multiple modalities such as clinical, genomic, molecular, pathological and radiological imaging. Reliable patient-tailored outcome prediction requires fusing information from modality data both within and across patients. This can be achieved by effectively modeling the fine-grained and multi-faceted complex interactions between modality features. In general, this is a challenging problem as it is largely unclear what information is best captured by each modality, how
best to combine modalities, and how to effectively extract predictive patterns from data [14].
### Related Works
Existing attempts to fuse modalities for outcome prediction can be divided into at least three approaches, namely, feature vector-based, statistical, or graph-based approaches. The vector-based approaches perform early, intermediate, or late fusion [18, 1], with the late fusion approach combining the results of prediction rather than fusing the modality features. Due to the restrictive nature of the underlying assumptions, these are often inadequate for characterizing the broader range of relationships among modality features and their relevance for prediction. In statistical approaches, methods such as canonical correlation analysis [19] and its deep learning variants [25] directly model feature correlations either in the native representation or in a latent space [18]. However, these are not guaranteed to learn discriminative patterns in the unsupervised setting and can suffer from scalability issues when integrated into larger predictive models [23]. Recently, graph-based approaches have been developed which form basic [2, 26, 5] or multiplexed graphs [6] from latent embeddings derived from modality features using concatenation [2] or weighted averaging [26]. Task-specific fusion is then achieved through inference via message passing walks between nodes in a graph neural network. In the basic collapsed graph construction, the inter-patient and intra-patient modality correlations are not fully distinguished. Conversely, in the multiplexed formulation [6], only a restricted form of multi-relational dependence is captured between nodes through vertical connections. Since the graph is defined using latent embedding directions, the modality semantics are not preserved. Additionally, the staged training of the graph construction and inference networks does not guarantee that the constructed graphs retain discriminable interaction patterns.
### Our Contributions
We develop a novel end-to-end fusion framework that addresses the limitations mentioned above. The Maximal Correlation Multi-Layered Graph Neural Network, i.e. MaxCorrMGNN, is a general yet interpretable framework for problems of multimodal fusion with unstructured data. Specifically, our approach marries the design principles of statistical representation learning with deep learning models for reasoning from multi-graphs.
The main contributions of this work are three-fold:
* First, we propose to model intra and inter-patient modality relationships explicitly through a novel patient-modality multi-layered graph as shown in Fig. 1. The edges in each layer (plane of the multi-graph) capture the _intra-modality relations_ between patients, while the cross-edges between layers capture _inter-modality relations_ across patients.
* Since these relationships are not known apriori for unstructured data, we propose, for the first time, to use learnable Hirschfeld-Gebelein-Renyi (HGR) maximal correlations. We introduce learnable soft-thresholding to uncover salient connectivity patterns automatically. Effectively, this procedure allows us to express any multimodal dataset as a patient-modality multilayered graph for fusion.
* Third, we develop a multilayered graph neural network (MGNN) from first principles for task-informed reasoning from multi-layered graphs.
To demonstrate the generality of our approach, we evaluate our framework on a large Tuberculosis (TB) dataset for multi-outcome prediction. Through rigorous experimentation, we show our framework outperforms several state-of-the-art graph based and traditional fusion baselines.
## 2 MaxCorrMGNN formulation for Multimodal Fusion
We now describe the four main aspects of our formulation, namely, (a) the multi-layered graph representation, (b) the formalism for maximal correlation, (c) task-specific inference through graph neural networks, and (d) the loss function for end-to-end learning of both graph connectivity and inference.
### Patient-Modality Multi-layered Graph:
Given multimodal data about patients, we model the modality and patient information through a multi-layered graph [3] as shown in Fig. 1. _Here the nodes are grouped into multiple planes, each plane representing edge-connectivity according to an individual modality while each patient is represented by a set of corresponding nodes across the layers_ (called a supra-node).
Figure 1: Our Generalized Framework for Multimodal Fusion **Gray Box:** The features from different modalities are input to the MaxCorr (HGR) formulation. The nodes are the patients of the Multi-Graph, and the planes are the modalities. **Purple Box:** The Multi-Graph Neural Network maps the multi-graph representation to the targets.
Mathematically, we represent the multi-layered graph as: \(\mathcal{G}_{\mathrm{M}}=(\mathcal{V}_{\mathrm{M}},\mathcal{E}_{\mathrm{M}})\), where \(|\mathcal{V}_{\mathrm{M}}|=|\mathcal{V}|\times K\) are the extended supra-nodes and \(\mathcal{E}_{\mathrm{M}}=\{(i,j)\in\mathcal{V}_{\mathrm{M}}\times\mathcal{V}_{\mathrm{M}}\}\) are the edges between supra-nodes. There are \(K\) modality planes, each with adjacency matrices \(\mathbf{A}_{(k)}\in\mathcal{R}^{P\times P}\). The \(K\times K\) pairwise cross-planar connections are given by \(\mathbf{C}_{(l,m)}\in\mathcal{R}^{P\times P}\), where \(P=|\mathcal{V}|\). All edge weights take values in the range \([0,1]\).
### HGR Maximal Correlations for Latent Multi-Graph Learning:
Recall that we would like to learn task informed patient-modality multi-graph representations automatically from unstructured modality data. To this end, we develop the framework illustrated in the Gray Box in Fig. 1.
Let \(\{\mathbf{x}_{n}^{k}\in\mathcal{R}^{D_{k}\times 1}\}_{k=1}^{K}\) be the features from modality \(k\) for patient \(n\). Since features from different modalities lie in different input subspaces, we develop parallel common-space projections to explore the dependence between them. The Hirschfeld-Gebelein-Rényi (HGR) [23] framework in statistics is known to generalize the notion of dependence to abstract and non-linear functional spaces. Such non-linear projections can be parameterized by deep neural networks.
Specifically, let the collection of modality-specific projection networks be given by \(\{\mathbf{f}^{k}(\cdot):\mathcal{R}^{D_{k}\times 1}\rightarrow\mathcal{R}^{D_{p} \times 1}\}\). The HGR maximal correlation is a symmetric measure obtained by solving the following coupled pairwise constrained optimization problem:
\[\sup_{\mathcal{C}_{E},\mathcal{C}_{\mathrm{Cov}}}\rho_{\mathrm{HGR}}(\mathbf{x }^{l},\mathbf{x}^{m})=\sup_{\mathcal{C}_{E},\mathcal{C}_{\mathrm{Cov}}}\mathbb{ E}\Big{[}[\mathbf{f}^{l}(\mathbf{x}^{l})]^{T}\mathbf{f}^{m}(\mathbf{x}^{m}) \Big{]} \tag{1}\]
\(\forall\{l,m\}\) s.t. \(l\neq m\), where \(\mathcal{I}_{D_{p}}\) is a \(D_{p}\times D_{p}\) identity matrix. The constraint sets are given by:
\[\mathcal{C}_{E}:\{\mathbb{E}[\mathbf{f}^{l}(\cdot)]=\mathbb{E}[ \mathbf{f}^{m}(\cdot)]=\mathbf{0}\} \tag{2}\] \[\mathcal{C}_{\mathrm{Cov}}:\{\mathbf{Cov}(\mathbf{f}^{l}(\mathbf{ x}^{l}))=\mathbf{Cov}(\mathbf{f}^{m}(\mathbf{x}^{m}))=\mathcal{I}_{D_{p}}\} \tag{3}\]
Approaches such as deep CCA [25] can be thought of as a special case of this formulation which solve the whitening (empricial covariance) constraints (Eq. (3)) via explicit pairwise de-correlation.
However, for multiple modalities in large datasets, exact whitening is not scalable. To circumvent this issue, we use the approach in [23]. This formulation introduces a relaxation of the exact HGR, named soft-HGR, which substitutes a trace regularizer for the whitening constraint. Eq. (1) can thus be relaxed into an empirical minimization problem \(\min\mathcal{L}_{\mathrm{sHGR}}\), where the sHGR loss is:
\[\mathcal{L}_{\mathrm{sHGR}}=-\frac{1}{N_{z}}\mathbb{E}\Big{[} \mathbf{f}^{l}(\mathbf{x}^{l})^{T}\mathbf{f}^{m}(\mathbf{x}^{m})\Big{]}+\frac {1}{2N_{z}}\operatorname{Tr}\Big{[}\mathbf{Cov}[\mathbf{f}^{l}(\mathbf{x}^{l})] \mathbf{Cov}[\mathbf{f}^{m}(\mathbf{x}^{m})]\Big{]} \tag{4}\] \[=-\frac{1}{N_{z}}\Bigg{(}\sum_{l,m=1}^{K}\Bigg{[}\sum_{n=1}^{N} \frac{\mathbf{f}^{l}(\mathbf{x}_{n}^{l})^{T}\mathbf{f}^{m}(\mathbf{x}_{n}^{m} )}{(N-1)}-\frac{\operatorname{Tr}\Big{[}\mathbf{Cov}[\mathbf{f}^{l}(\mathbf{x} ^{l})]\mathbf{Cov}[\mathbf{f}^{m}(\mathbf{x}^{m})]\Big{]}}{2}\Bigg{]}\Bigg{)} \;\;l\neq m\]
The expectation under the functional transformations, \(\mathbb{E}[\mathbf{f}^{l}(\mathbf{x}^{l})]=\mathbb{E}[\mathbf{f}^{m}(\mathbf{x}^{m})]=\mathbf{0}\), is enforced step-wise by mean subtraction during optimization. Here, \(\mathbf{Cov}(\cdot)\) is the empirical covariance matrix. We parameterize \(\{\mathbf{f}^{k}(\cdot)\}\) as simple two-layered fully connected neural networks, with the normalization factor \(N_{z}=M(M-1)\).
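A minimal PyTorch sketch of this loss is shown below, assuming \(M=K\) modalities and per-batch mean subtraction; it is a direct transcription of Eq. (4), not the authors' released code.

```python
import torch

def soft_hgr_loss(feats):
    """Sketch of the soft-HGR objective in Eq. (4).

    feats: list of K tensors, each (N, D_p), holding f^k(x^k) per modality.
    """
    K, N = len(feats), feats[0].size(0)
    # enforce the zero-mean constraint by mean subtraction
    feats = [f - f.mean(dim=0, keepdim=True) for f in feats]
    covs = [f.t() @ f / (N - 1) for f in feats]        # empirical covariances
    loss = 0.0
    for l in range(K):
        for m in range(K):
            if l == m:
                continue
            cross = (feats[l] * feats[m]).sum() / (N - 1)  # E[f^l(x)^T f^m(x)]
            trace = torch.trace(covs[l] @ covs[m])
            loss = loss + cross - 0.5 * trace
    return -loss / (K * (K - 1))                        # N_z = M(M - 1)
```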
By design, the MaxCorr formulation allows us to utilize the correlation \(\rho_{\mathrm{sHGR}}(\mathbf{x}^{l}_{i},\mathbf{x}^{m}_{j})\) (computed after solving Eq. (4)) to model the dependence between patients \(i\) and \(j\) according to the \(l\) and \(m\) modality features in a general setting. The absolute value of this correlation measure defines the edge weights between nodes in the patient-modality multi-graph. As opposed to existing approaches that collapse modalities into a single graph, this construction preserves the identities of both the patients and the modalities.
Learnable Adaptive Sparsity:Additionally, we would like to have our learning framework automatically discover and retain salient edges that are relevant for prediction. To encourage sparsity in the edges, we utilize a learnable soft-thresholding formulation. We first define a symmetric block sparsity matrix \(\mathbf{S}\). Since edge weights in the multi-graph are in the range \([0-1]\), we normalize it through the sigmoid function as \(\tilde{\mathbf{S}}=\tilde{\mathbf{S}}^{T}=\mathrm{Sigmoid}(\mathbf{S})\in \mathcal{R}^{K\times K}\). The entries of the soft-thresholding matrix \(\tilde{\mathbf{S}}[l,m]\) define learnable thresholds for the cross modal connections when \(l\neq m\) and in-plane connections when \(l=m\). Finally, the cross modal edges and in-plane edges of the multi-graph are given by
\[\mathbf{C}_{(l,m)}[i,j] =\mathrm{ReLU}(\tilde{\rho}_{\mathrm{sHGR}}(\mathbf{x}^{l}_{i}, \mathbf{x}^{m}_{j})-\tilde{\mathbf{S}}[l,m]) \tag{5}\] \[\mathbf{A}_{k}[i,j] =\mathrm{ReLU}(\tilde{\rho}_{\mathrm{sHGR}}(\mathbf{x}^{k}_{i}, \mathbf{x}^{k}_{j})-\tilde{\mathbf{S}}[k,k]) \tag{6}\]
with \(\tilde{\rho}_{\mathrm{sHGR}}=|\rho_{\mathrm{sHGR}}|\) respectively. The adjacency matrices \(\mathbf{A}_{(k)}\) model the dependence within the features of modality \(k\), while the cross planar matrices \(\{\mathbf{C}_{(l,m)}\}\) capture interactions across modalities. Overall, \(\mathbf{S}\) acts as a regularizer that suppresses noisy weak dependencies. These regularization parameters are automatically inferred during training along with the MaxCorr projection parameters \(\{\mathbf{f}^{k}(\cdot)\}\). This effectively adds just \(K(K+1)\) learnable parameters to the MaxCorrMGNN.
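The edge construction in Eqs. (5)-(6) can be sketched as follows; using (approximately unit-scale) inner products of the projected features as the correlation surrogate \(\tilde{\rho}_{\mathrm{sHGR}}\) is an assumption made for brevity.

```python
import torch
import torch.nn.functional as F

def multigraph_edges(feats, S):
    """Sketch of Eqs. (5)-(6): soft-thresholded multi-graph edge weights.

    feats: list of K mean-subtracted, roughly unit-scale (N, D_p) embeddings,
           so inner products stand in for the correlation rho (an assumption).
    S: learnable (K, K) soft-thresholding matrix, symmetrized here.
    """
    K = len(feats)
    S_tilde = torch.sigmoid(0.5 * (S + S.t()))     # thresholds in (0, 1)
    rho = [[(feats[l] @ feats[m].t()).abs() for m in range(K)] for l in range(K)]
    A = [F.relu(rho[k][k] - S_tilde[k, k]) for k in range(K)]       # Eq. (6)
    C = [[F.relu(rho[l][m] - S_tilde[l, m]) if l != m else None     # Eq. (5)
          for m in range(K)] for l in range(K)]
    return A, C
```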
### Multi-Graph Neural Network:
As a standalone optimization, the MaxCorr block is not guaranteed to learn discriminative projections of modality features. A natural next step is to couple the multi-graph representation learning with the classification task. Graph Neural Networks have recently become popular tools for reasoning from graphs and graph-signals. Given the patient-modality multi-graph, we design an extension of traditional graph neural networks to multi-graphs for inference tasks.
Conventional GNNs filter information based on the graph topology (i.e. the adjacency matrix) to map the node features to the targets based on graph traversals. Conceptually, cascading \(l\) GNN layers is analogous to filtered information pooling at each node from its \(l\)-hop neighbors [13] inferred from the powers of the graph adjacency matrix. These neighborhoods can be reached by seeding walks on the graph starting at the desired node. Inspired by this design, we craft a multi-graph neural network (Purple Box in Fig. 1) for outcome prediction. Our
MGNN generalizes structured message passing to the multi-graph in a manner similar to those done for multiplexed graphs [6]. Notably, our formulation is more general, as it avoids using strictly vertical interaction constraints between patients across modalities.
We first construct two supra-adjacency matrices to perform walks on the multi-graph \(\mathcal{G}_{\mathrm{M}}\) for fusion. The first is the _intra-modality adjacency matrix_\(\boldsymbol{\mathcal{A}}\in\mathcal{R}^{PK\times PK}\). The second is the _inter-modality connectivity matrix_\(\boldsymbol{\mathcal{C}}\in\mathcal{R}^{PK\times PK}\), each defined block-wise. Mathematically, we express this as:
\[\boldsymbol{\mathcal{A}}=\bigoplus_{k}\mathbf{A}_{(k)} \tag{7}\] \[\boldsymbol{\hat{\mathcal{C}}}:\boldsymbol{\hat{\mathcal{C}}}[lP: (l+1)P,mP:(m+1)P]=\mathbf{C}_{(l,m)}\ \mathbb{1}(l\neq m)+\boldsymbol{\mathcal{I}_{P}}\ \mathbb{1}(l=m) \tag{8}\]
where \(\bigoplus\) is the direct sum operation and \(\mathbb{1}\) denotes the indicator function. By design, \(\boldsymbol{\mathcal{A}}\) is block-diagonal and allows for within-planar (intra-modality) transitions between nodes. The off-diagonal blocks of \(\boldsymbol{\hat{\mathcal{C}}}\), i.e. \(\mathbf{C}_{(l,m)}\), capture transitions between nodes as per cross-planar (inter-modality) relationships.
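A sketch of this block assembly, assuming dense \(P\times P\) blocks as produced above, is given below.

```python
import torch

def supra_matrices(A, C, P):
    """Sketch of Eqs. (7)-(8): assemble the (PK x PK) supra-adjacency
    matrices from the per-plane A_k and cross-plane C_(l,m) blocks.
    """
    K = len(A)
    supra_A = torch.block_diag(*A)              # direct sum over the K planes
    supra_C = torch.zeros(P * K, P * K)
    for l in range(K):
        for m in range(K):
            block = torch.eye(P) if l == m else C[l][m]
            supra_C[l * P:(l + 1) * P, m * P:(m + 1) * P] = block
    return supra_A, supra_C
```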
Mgnn Message Passing Walks:Walks on \(\mathcal{G}_{\mathrm{M}}\) combine within and across planar steps to reach a patient supra-node \(s_{j}\) from another supra-node \(s_{i}\) (\(s_{i},s_{j}\in\mathcal{V}_{\mathrm{M}}\)). We characterize the multi-hop neighborhoods and transitions using factorized operations involving \(\boldsymbol{\mathcal{A}}\) and \(\boldsymbol{\hat{\mathcal{C}}}\). We can perform a multi-graph walk via two types of distinct steps, i.e., (1) an isolated intra-planar transition or (2) a transition involving an inter-planar step either before or after a within-planar step. These steps can be exhaustively recreated via two factorizations: (I) _after_ one intra-planar step, the walk _may_ continue in the same modal plane or hop to a different one via \(\boldsymbol{\mathcal{A}}\boldsymbol{\hat{\mathcal{C}}}\) and (II) the walk _may_ continue in the current modal plane or hop to a different plane _before_ the intra-planar step via \(\boldsymbol{\hat{\mathcal{C}}}\boldsymbol{\mathcal{A}}\).
The Multi-Graph Neural Network (MGNN) uses these walk operations to automatically mine predictive patterns from the multi-graph given the targets (task-supervision) and the GNN parameters. For supra-node \(s_{i}\), \(\mathbf{h}_{s_{i}}^{(d)}\in\mathcal{R}^{D^{d}\times 1}\) is the feature (supra)-embedding at MGNN depth \(d\). The forward pass operations of the MGNN are as follows:
\[\mathbf{h}_{s_{i},I}^{(d+1)}=\boldsymbol{\phi}_{I}^{(d)}\Big{(}(1+\epsilon)\mathbf{h}_{s_{i}}^{(d)}+\mathrm{wmean}\Big{[}\mathbf{h}_{s_{j}}^{(d)},\boldsymbol{\mathcal{A}}\boldsymbol{\hat{\mathcal{C}}}[s_{i},s_{j}]\ ;\ s_{j}\in\mathcal{N}_{\boldsymbol{\mathcal{A}}\boldsymbol{\hat{\mathcal{C}}}}(s_{i})\Big{]}\Big{)} \tag{9}\]
\[\mathbf{h}_{s_{i},II}^{(d+1)}=\boldsymbol{\phi}_{II}^{(d)}\Big{(}(1+\epsilon)\mathbf{h}_{s_{i}}^{(d)}+\mathrm{wmean}\Big{[}\mathbf{h}_{s_{j}}^{(d)},\boldsymbol{\hat{\mathcal{C}}}\boldsymbol{\mathcal{A}}[s_{i},s_{j}]\ ;\ s_{j}\in\mathcal{N}_{\boldsymbol{\hat{\mathcal{C}}}\boldsymbol{\mathcal{A}}}(s_{i})\Big{]}\Big{)} \tag{10}\]
\[\mathbf{h}_{s_{i}}^{(d+1)}=\mathrm{concat}(\mathbf{h}_{s_{i},I}^{(d+1)},\mathbf{h}_{s_{i},II}^{(d+1)}) \tag{11}\]
\[\mathbf{g}_{o}(\{\mathbf{h}_{s_{i}}^{(L)}\}_{s_{i}\leftrightarrow i})=\hat{\mathbf{Y}}_{i} \tag{12}\]
At the input layer, we have \(\mathbf{h}_{s_{i}}^{(0)}=\mathbf{f}^{k}(\mathbf{x}_{i}^{k})\) computed from the modality features for patient \(i\) after the sHGR transformation from the corresponding modality \(k\). We then concatenate the supra-embeddings as input to the next layer i.e. \(\mathbf{h}_{s_{i}}^{(d+1)}\). Eqs. (9-10) denote the Graph Isomorphism Network (GIN) [24] with \(\{\boldsymbol{\phi}_{I}^{(d)}(\cdot),\boldsymbol{\phi}_{II}^{(d)}(\cdot)\}\) as layerwise linear transformations. This performs message
passing on the multi-graph using the neighborhood relationships and normalized edge weights from the walk matrices in the weighted mean operation \(\text{wmean}(\cdot)\).
From the interpretability standpoint, these _operations keep the semantics of the embeddings intact at both the patient and modality level throughout the MaxCorrMGNN transformations_. Finally, \(\mathbf{g}_{o}(\cdot)\) is a graph readout network that maps to the one-hot encoded outcome \(\mathbf{Y}\), which performs a convex combination of the filtered modality embeddings, followed by a linear readout.
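For concreteness, a minimal sketch of one MGNN layer (Eqs. (9)-(11)) with dense walk matrices is shown below; reducing the GIN update \(\boldsymbol{\phi}^{(d)}\) to a single linear layer with ReLU is a simplifying assumption.

```python
import torch
import torch.nn as nn

class MGNNLayer(nn.Module):
    """Sketch of Eqs. (9)-(11): one multi-graph message-passing layer
    driven by the two factorized walk matrices A*C and C*A.
    """
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.phi_I = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
        self.phi_II = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
        self.eps = nn.Parameter(torch.zeros(1))

    @staticmethod
    def wmean(h, W):
        # weighted mean of neighbour embeddings under walk matrix W
        deg = W.sum(dim=1, keepdim=True).clamp(min=1e-6)
        return (W @ h) / deg

    def forward(self, h, supra_A, supra_C):
        walk_I = supra_A @ supra_C      # intra-planar step, then (optional) hop
        walk_II = supra_C @ supra_A     # (optional) hop, then intra-planar step
        h_I = self.phi_I((1 + self.eps) * h + self.wmean(h, walk_I))     # Eq. (9)
        h_II = self.phi_II((1 + self.eps) * h + self.wmean(h, walk_II))  # Eq. (10)
        return torch.cat([h_I, h_II], dim=-1)                            # Eq. (11)
```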
### End-to-end Learning through task supervision:
Piecing together the constituent components, i.e. the latent graph learning and MGNN inference module, we optimize the following coupled objective function:
\[\mathcal{L}=\lambda\mathcal{L}_{\text{sHGR}}+(1-\lambda)\mathcal{L}_{\text{CE}} (\hat{\mathbf{Y}},\mathbf{Y}) \tag{13}\]
with \(\lambda\in[0,1]\) being a tradeoff parameter and \(\mathcal{L}_{CE}(\cdot)\) being the cross entropy loss. The parameters \(\{\{\mathbf{f}^{k}(\cdot)\},\mathbf{S},\{\boldsymbol{\phi}_{I}^{(d)}(\cdot), \boldsymbol{\phi}_{II}^{(d)}(\cdot),\epsilon\},\mathbf{g}_{o}(\cdot)\}\) of the framework are jointly learned via standard backpropagation.
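One training step under Eq. (13) can be sketched as follows; `model.projections` and `model.mgnn_forward` are hypothetical handles for the sHGR networks \(\{\mathbf{f}^{k}(\cdot)\}\) and the MGNN plus readout described above, and `soft_hgr_loss` refers to the earlier sketch.

```python
import torch.nn.functional as F

def training_step(batch_feats, labels, model, lambda_=0.01):
    """Sketch of the joint objective in Eq. (13)."""
    # hypothetical attributes: model.projections = [f^1, ..., f^K],
    # model.mgnn_forward = multi-graph construction + MGNN + readout g_o
    proj = [f_k(x_k) for f_k, x_k in zip(model.projections, batch_feats)]
    logits = model.mgnn_forward(proj)
    return lambda_ * soft_hgr_loss(proj) + \
           (1 - lambda_) * F.cross_entropy(logits, labels)
```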
Inductive Learning for Multi-Graph Neural Networks: The multi-graph is designed to have subjects as the nodes, which requires us to adapt training to accommodate an inductive learning setup. Specifically, we train the MaxCorrMGNN in a fully supervised fashion by extending the principles outlined in [2] for multi-layered graphs. During training, we use only the supra-node features and induced sub-graph edges (including both cross-modal and intra-planar edges) associated with the subjects in the training set for backpropagation. During validation/testing, we freeze the parameter estimates and add in the edges corresponding to the unseen patients to perform a forward pass for estimation. This procedure ensures that no double dipping occurs in the hyper-parameter estimation, nor in the evaluation step. Additionally, while not the focus of this application, this procedure allows for extending prediction and training to an online setting, where new subject/modality information may dynamically become available.
Implementation Details:We implement the MaxCorr projection networks \(\mathbf{f}^{l}(\cdot)\) as a simple three layered neural network with hidden layer width of 32 and output \(D_{p}=64\) and LeakyReLU activation (negative slope=0.01). The MGNN layers are Graph Isomorphism Network (GIN)[24] with ReLU activation and linear readout (width:64) and batch normalization. \(\mathbf{g}_{o}(\cdot)\) implements a convex combination of the modality embeddings followed by a linear layer. We use the ADAMw optimizer [15] and train on a 64GB CPU RAM, 2.3 GHz 16-Core Intel i9 machine (18-20 min training time per run). We set the hyperparameters for our model (and baselines) using grid-search to \(\lambda=0.01\), learning rate= 0.0001, weight decay= 0.001, epochs= 50, batch size= 128 after pre-training the network on the sHGR loss alone for 50 epochs. All frameworks are implemented on the Deep Graph Library (v=0.6.2) in PyTorch (v=0.10.1).
## 3 Experiments and Results
### Data and Preprocessing
We evaluate our model on the Tuberculosis Data Exploration Portal [10], consisting of 3051 patients with five different treatment outcomes (Died, Still on treatment, Completed, Cured, or Failure), with class frequencies of 0.21/0.11/0.50/0.10/0.08, respectively, and five modalities. We pre-process the data according to the procedure outlined in [6].
For each subject, we have features available from demographic, clinical, regimen, and genomic recordings, with chest CTs available for 1015 of them. We have a total of 4081 genomic, 29 demographic, 1726 clinical, and 233 regimen features that are categorical, and 2048 imaging and 8 miscellaneous continuous features. Information that may be directly related to treatment outcomes, e.g., drug resistance type, was removed from the clinical and regimen features.
For genomic data, 81 single nucleotide polymorphisms (SNPs) from the causative organism _Mycobacterium tuberculosis_ (Mtb) known to be related to drug resistance were used. For 275 of the subjects, we also assemble the raw genome sequence from the NCBI Sequence Read Archive. This provides a more fine-grained description of the biological sequences of the causative pathogen [17]. Briefly, we performed a _de novo_ assembly process on each Mtb genome to yield protein and gene sequences. We utilized InterProScan [12] to further process the protein sequences and extract the functional domains, i.e. sub-sequences located within the protein's amino acid chain responsible for the enzymatic bioactivity of a protein. This provides a total of 4000 functional genomic features. Finally, for the imaging modality, the lung was segmented via multi-atlas segmentation [22] followed by a pre-trained DenseNet [11] to extract a 1024-dimensional feature vector for each axial slice intersecting the lung. The mean and maximum of each feature were then assembled to give a total of 2048 features. Missing features are imputed from the training cohort using mean imputation for all runs.

Figure 2: NIH TB Dataset: Multimodal data for treatment outcome prediction.
### Evaluation Metrics:
Since we have a five-class classification task, we evaluate the prediction performance of the MaxCorrMGNN and the baselines using the AU-ROC (Area Under the Receiver Operating Curve) metric. Given the prediction logits, this metric is computed both class-wise and as a weighted average. Higher per-class and overall AU-ROC indicate improved performance. For our experiments, we use 10 randomly generated train/validation/test splits with ratio 0.7/0.1/0.2 to train our model and each baseline.
Finally, statistical differences between the baselines and our method are measured according to the DeLong [4] test computed class-wise. This test is a sanity check to evaluate whether perceived differences in model performance are robust to sampling.
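A minimal sketch of the AU-ROC computation on placeholder data is shown below; the DeLong test itself requires covariance estimates of the paired AUCs and is omitted here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

y_true = np.random.randint(0, 5, size=200)           # placeholder labels
y_prob = np.random.dirichlet(np.ones(5), size=200)   # placeholder probabilities

# weighted average over the five outcome classes (one-vs-rest)
weighted = roc_auc_score(y_true, y_prob, multi_class="ovr", average="weighted")
# per-class AU-ROC via one-vs-rest binarization
y_bin = label_binarize(y_true, classes=np.arange(5))
per_class = [roc_auc_score(y_bin[:, c], y_prob[:, c]) for c in range(5)]
print(weighted, per_class)
```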
### Baseline Comparisons
We perform a comprehensive evaluation of our framework for the problem of multimodal fusion. Our baseline comparisons can be grouped into three categories, namely, (1) single-modality predictors (no fusion), (2) state-of-the-art conventional fusion (early/late/intermediate) and latent-graph learning models from the literature, and (3) ablation studies.
The ablation studies evaluate the efficacy of the three main constituents of the MaxCorrMGNN, i.e. the MaxCorr graph construction, the Multi-Graph Neural Network and the end-to-end optimization.
* **Single Modality:** For this comparison, we run predictive deep-learning models on the individual modality features without fusing them as a benchmark. We use a two-layer multi-layer perceptron (MLP) with hidden layer widths of 400 and 20 and LeakyReLU activation (negative slope=0.01).
* **Early Fusion:** For early fusion, individual modality features are first concatenated and then fed through a neural network. The predictive model has the same architecture as the previous baseline.
* **Uncertainty Based Late Fusion [21]:** We combine the predictions from the individual modalities in the previous baseline using a state-of-the-art late fusion framework in [21]. This model estimates the uncertainty in the individual classifiers to improve the robustness of outcome prediction. Unlike our work, patient-modality dependence is not explicitly modeled as the modality predictions are only combined after individual modality-specific models have been trained. Hyperparameters are set according to [21].
* **Graph Based Intermediate Fusion [6]:** This is a graph-based neural framework that achieved state-of-the-art performance on multimodal fusion on unstructured data. This model follows a two-step procedure. For each patient, this model first converts the multimodal features into a fused binary multiplex graph (multi-graph where all blocks of \(\widehat{\mathcal{C}}\) are strictly diagonal) between features. The graph connectivity is learned in an unsupervised fashion through auto-encoders. Following this, a multiplexed graph neural network is used for inference. Hyperparameters are set according to [6]. While this framework takes a graph-based approach to fusion, the construction of the graph is not directly coupled with the task supervision.
* **Latent Graph Learning [2]:** This baseline was developed for fusing multimodal data for prediction. It introduces a latent patient-patient graph learning from the concatenated modality features via a graph-attention (GAT-like [20]) formulation. However, unlike our model, the feature concatenation does not distinguish between intra- and inter-modality dependence across patients i.e. it constructs a single-relational (collapsed) graph that is learned as a part of the training.
* **sHGR+ANN [23]:** This is a state-of-the-art multimodal fusion framework [23] that also utilizes the sHGR formulation to infer multi-modal data representations. However, instead of constructing a patient-modality graph, the projected features are combined via concatenation. Then, a two-layer MLP (hidden size: 200) maps to the outcomes, with the two objectives trained end-to-end. This baseline can be thought of as an _ablation_ that evaluates the benefit of using the multi-graph neural network for fine-grained reasoning. Additionally, this and the previous framework help us evaluate the benefit of our patient-modality multi-graph representation for fusion.
* **MaxCorrMGNN w/o sHGR:** Through this comparison, we evaluate the need for using the soft HGR formulation to construct the latent multi-graph. Keeping the architectural components consistent with our model, we set \(\lambda=0\) in Eq. (13). Note that this _ablation_ effectively converts the multi-graph representation learning into a modality specific self/cross attention learning, akin to graph transformers. Overall, this framework helps us evaluate the benefit of our MaxCorr formulation for latent multi-graph learning.
* **Decoupled MaxCorrMGNN:** Finally, this _ablation_ is designed to examine the benefit of coupling the MaxCorr and MGNN into a coupled objective. Therefore, instead of an end-to-end training, we run the sHGR optimization first, followed by the MGNN for prediction.
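For concreteness, the following is a minimal PyTorch sketch of the two-layer MLP used by the single-modality and early-fusion baselines (hidden widths 400 and 20, LeakyReLU with negative slope 0.01). PyTorch and the five-way output head are our assumptions for illustration; the input dimension depends on the modality (e.g. 4081 for the genomic features).

```python
# Hedged sketch of the baseline MLP; the hyperparameters follow the text above,
# everything else (framework, output head) is an assumption for illustration.
import torch
import torch.nn as nn

class BaselineMLP(nn.Module):
    def __init__(self, in_dim: int, n_classes: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 400),
            nn.LeakyReLU(negative_slope=0.01),
            nn.Linear(400, 20),
            nn.LeakyReLU(negative_slope=0.01),
            nn.Linear(20, n_classes),  # prediction logits (assumed output head)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = BaselineMLP(in_dim=4081)      # e.g. the genomic modality
logits = model(torch.randn(8, 4081))  # a batch of 8 feature vectors
```

For the early-fusion baseline, the same architecture is applied to the concatenated modality features instead of a single modality.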
### Outcome Prediction Performance
Fig. 3 illustrates the outcome prediction performance of our framework against the single modality predictors (left), state-of-the-art fusion frameworks (middle), and ablated versions of our model (right). Comparisons marked with \(*\) achieve a statistical significance threshold of \(p<0.01\) across runs as per the DeLong test [4]. Note that our fusion framework outperforms all of the single modality predictors by a large margin. Moreover, the traditional and graph-based fusion baselines also provide improved performance against the single modality predictors. Taken together, these observations highlight the need for fusion of multiple modalities for outcome prediction in TB. This observation is consistent with findings in the treatment outcome prediction literature for TB [6, 16].
The MaxCorrMGNN also provides improved performance when compared to all of the fusion baselines, with most comparisons achieving statistical significance thresholds. The Early Fusion and Uncertainty-based Late Fusion [21] networks provide marked improvements over single modality predictions but still fail to reach the performance level of our model. This is likely due to their limited ability to leverage subtle patient-specific cross-modal interactions.
On the other hand, the latent graph learning in [2] models connectivity between subjects as a part of the supervision. However, this method collapses the different types of dependence into one relation-type, which may be too restrictive for fusion applications. The intermediate fusion framework of [6] was designed to address these limitations by the use of multiplex graphs. However, the artificial separation between the graph construction and inference steps may not inherently extract discriminative multi-graph representations, which could explain the performance gap against our framework.
Finally, the three ablations, the sHGR+ANN [23], MaxCorrMGNN w/o sHGR, and Decoupled MaxCorrMGNN, help us systematically examine the three building blocks of our framework, i.e. the MGNN and MaxCorr networks individually as well as the end-to-end training of the two blocks. We observe a notable performance drop in these baselines, which reinforces the principles we considered in carefully designing the individual components. In fact, the comparison against the Decoupled MaxCorrMGNN illustrates that coupling the two components into a single objective is key to obtaining improved representational power for predictive tasks. Taken together, our results suggest that the MaxCorrMGNN is a powerful framework for multimodal fusion of unstructured data.

Figure 3: We display the mean per-class and weighted average AU-ROC and the standard errors for TB outcome prediction against **(Left)**: Single Modality Predictors **(Middle)**: Traditional and Graph Based Fusion Frameworks **(Right)**: Ablations of the MaxCorrMGNN. * indicates comparisons against the MaxCorrMGNN according to the DeLong test that achieve statistical significance (\(p<0.01\)).
## 4 Discussion
We have developed a novel multi-graph deep learning framework, i.e. the MaxCorrMGNN, for generalized problems of multimodal fusion in medical data. Going one step beyond simple statistical measures, the patient-modality multi-layered graph allows us to uncover nuanced non-linear notions of dependence between modality features via the maximal correlation soft-HGR formulation. The sHGR formulation coupled with the learnable sparsity module allows us to directly translate an abstract measure of interaction across subjects and modalities in any multimodal dataset into a patient-modality multi-layered graph structure for inference. The construction of the multi-graph planes allows the node features to retain their individuality in terms of the plane (modality) and patient (node identity) in the filtered Graph Neural Network representations. This admits more explainable intermediate representations in comparison to the baselines, i.e. it provides us with the ability to explicitly reason at the granularity of both the subjects and the modalities. Conversely, the graph-based/traditional fusion baselines collapse this information, either in the multimodal representation or in the inference step. We believe that this added flexibility in the MaxCorrMGNN contributes to its improved generalization power in practice. Finally, all the individual components (i.e. MaxCorr, learnable soft-thresholding, MGNN message passing) are designed to be fully differentiable deep learning operations, allowing us to directly couple them end-to-end. We demonstrate experimentally that this coupling is key to generalization. As such, this model makes very mild assumptions about the nature of the multimodal data. The general principles and machinery developed in this work would likely be useful to a wide variety of applications beyond the medical realm.
Limitations and Future Work: In problems of multimodal fusion, especially for medical applications, data acquisition is a fairly contrived and expensive process. In many real-world settings, modalities may be only partially observed, missing in totality, or noisy in acquisition. Simple methods such as mean-based imputation may be inadequate for fine-grained reasoning. To address this, an active line of exploration is to extend the framework to handle missing, ambiguous and erroneous data and labels within the multilayered graph representation. This may be achieved by leveraging statistical and graph-theoretic tools that can be integrated directly into the message passing walks. Finally, the multi-graph and HGR construction focuses on uncovering pairwise relationships between subjects and features. A future direction would be to extend these frameworks to model complex multi-set dependencies.
## 5 Conclusion
We have introduced a novel multi-layered graph based neural framework for general inference problems in multimodal fusion. Our framework leverages the HGR MaxCorr formulation to convert unstructured multi-modal data into a patient-modality multi-graph. We design a generalized multi-graph neural network for fine-grained reasoning from this representation. Our design preserves the patient-modality semantics as a part of the architecture, making our representations more readily interpretable rather than fully black-box. The end-to-end optimization of the two components offers a viable tradeoff between flexibility, representational power, and interpretability. We demonstrate the efficacy of the MaxCorrMGNN for fusing disparate information from imaging, genomic and clinical data for outcome prediction in Tuberculosis and demonstrate consistent improvements against competing state-of-the-art baselines developed in literature. Moreover, the framework makes very few assumptions, making it potentially applicable to a variety of fusion problems in other AI domains. Finally, the principles developed in this paper are general and can potentially be applied to problems well beyond multimodal fusion.
|
2302.14157 | Structural constraints on the emergence of oscillations in
multi-population neural networks | Oscillations arise in many real-world systems and are associated with both
functional and dysfunctional states. Whether a network can oscillate can be
estimated if we know the strength of interaction between nodes. But in
real-world networks (in particular in biological networks) it is usually not
possible to know the exact connection weights. Therefore, it is important to
determine the structural properties of a network necessary to generate
oscillations. Here, we use dynamical systems theory to prove that an odd
number of inhibitory nodes and strong enough connections are necessary to
generate oscillations in a single-cycle threshold-linear network.
We illustrate these analytical results in a biologically plausible network with
either firing-rate based or spiking neurons. Our work provides structural
properties necessary to generate oscillations in a network. We use this
knowledge to reconcile recent experimental findings about oscillations in basal
ganglia with classical findings. | Jie Zang, Shenquan Liu, Pascal Helson, Arvind Kumar | 2023-02-27T21:32:00Z | http://arxiv.org/abs/2302.14157v2 | # Structural constraints on the emergence of oscillations in multi-population neural networks
###### Abstract
Oscillations arise in many real-world systems and are associated with both functional and dysfunctional states. Therefore, it is important to determine the causes of oscillations in a network. Whether a network can oscillate can be estimated if we know the strength of interaction between nodes. But in real-world networks (in particular in biological networks) it is usually not possible to know the exact connection weights. Therefore, it is important to determine the structural properties of a network necessary to generate oscillations. Here, we use dynamical systems theory to prove that an odd number of inhibitory nodes and strong enough connections are necessary to generate oscillations in a single-cycle threshold-linear network. We illustrate these analytical results in a biologically plausible network with either firing-rate based or spiking neurons. Our work provides structural properties necessary to generate oscillations in a network. We use this knowledge to reconcile recent experimental findings about oscillations in basal ganglia with classical findings.
Introduction
Oscillations are ubiquitous in dynamical systems [1; 2]. They have important functional consequences but can also cause system malfunction. In the brain for instance, oscillations take part in information transfer [3; 4]. However, persistent beta band (13-30 Hz) oscillations are associated with the pathological symptoms of Parkinson's disease [5]. Therefore, it is important to determine when and how a system of many interacting nodes (network) oscillates.
This question is usually very difficult to answer analytically. The main tool that can be used is the Poincare-Bendixson theorem [6; 7], which is only valid in 2 dimensions, which drastically reduces its applicability. In some cases, when the model parameters are known, it is possible to calculate whether the system will oscillate or not. However, often such parameters cannot be measured experimentally. For example, in most physical, chemical, and biological networks, it is usually not possible to obtain the exact values of the connection strengths. By contrast, it is much easier to know whether two nodes in a system are physically connected and what is the sign (positive or negative) of their interactions. Therefore, it is much more useful to identify necessary structural conditions for the emergence of oscillations. A good example is the conjecture postulated by Thomas [8]: when considering a coupled dynamical system (\(\dot{x}=f(x)\) and \(x(0)\in\mathbb{R}^{n}\)) with a Jacobian matrix that has elements of fixed sign, it can exhibit oscillations only if the directed graph obtained from the nodes' connectivity (Jacobian matrix) admits a negative loop of two or more nodes (a loop with an odd number of inhibitory connections). This conjecture has been proven using graph theory for smooth functions \(f\)[9; 10].
Thomas also conjectured that the assumption on the constant sign of the Jacobian matrix may not be necessary [11], i.e. having a negative loop in some domain of the phase space should be necessary to generate oscillations. This condition is more realistic given the ubiquity of non-linearities in biological systems. For example, in the brain, even though neurons are (usually) either excitatory or inhibitory, the transfer function linking the neurons is non-linear and can thus lead to elements of the Jacobian matrix with non-constant sign. To the best of our knowledge, this last conjecture has not been proved yet, but there are many examples of it. For instance, oscillation can emerge from a simple EI network Wilson-Cowan model [12]. Our study is an example of this conjecture on the threshold-linear
network (TLN) model [13] which can closely capture the neural population dynamics.
Here, we study the long-term behaviour of the TLN model in the case of a single cycle interaction containing all nodes. We show analytically that, regardless of the sign of this loop, the system cannot oscillate when connections are too weak, as the system possesses a unique globally asymptotically stable fixed point. However, when connections are strong enough, the system either possesses two asymptotically stable fixed points (positive loop) or a unique unstable fixed point (negative loop). In addition, the system can be shown to be bounded and thus exhibits one of the following long-term behaviours: a limit cycle, quasi-periodic dynamics, or chaos. Interestingly, we can show that such dynamics can be shut down by introducing positive external input to excited nodes.
Based on our analytical results, we used simulations of basal ganglia (BG) network models with either firing rate-based or spiking neurons to explain recent experimental findings about the origin of oscillations in Parkinson's disease (PD). Traditionally, the subthalamic nucleus and globus pallidus (STN-GPe) subnetwork is considered to be the key network underlying the emergence of oscillations in PD [14; 15; 16]. However, recent experiments have shown that near complete inhibition of GPe but not of STN is sufficient to quench oscillations [17]. This observation contradicts several previous models and even clinical observations in which surgical removal of STN is used to alleviate PD symptoms. Our theory suggests that there are at least 6 possible cycles in the Cortex-BG network that have the potential to oscillate based on the connectivity structure. We show that even if STN is inhibited, other 'cycles' can sustain pathological oscillations. Interestingly, we found that GPe features in 5 out of 6 oscillatory cycles and therefore GPe inhibition is likely to affect PD-related oscillations in most cases.
## II Results
We study how the emergence of oscillations in a network of excitatory and inhibitory populations depends on the connectivity structure. We first consider a network of nodes whose dynamics represent the average firing rate of a population. We derive structural conditions for the emergence of oscillations when the dynamics of individual nodes are described according to the threshold-linear network (TLN) model. Next, we use numerical simulations to test whether such results might still hold in two other models: the Wilson-Cowan population rate-based model [18] and a network model of the basal ganglia with spiking neurons (see Methods).
### Structural conditions to generate oscillations
#### Intuition behind the analytical results
There exist many ways to generate oscillations in a network. Oscillations can arise from individual nodes due to their intrinsic dynamics (a spiking neuron can have a periodic behaviour given its ionic channel composition [19]) or from the weights' dynamics when considering synaptic plasticity [20]. Here we assume that the system's ability to oscillate only depends on the connectivity structure: the presence of positive or negative loops and the connection strengths (Jacobian matrix) within them. That is, neither plasticity nor the biophysics of neurons is considered.
Consider a small network of two nodes. If we connect them mutually with excitatory synapses, intuitively we can say that the two-population network will not oscillate. Instead, the two populations will synchronize. The degree of synchrony will, of course, depend on the external input and the strength of mutual connections. If both these nodes are inhibitory, one of the nodes will emerge as a winner and the other will be suppressed [21]. Hence, a network of two mutually connected inhibitory populations cannot oscillate either. We can extend this argument to three-population networks with three connections that form a closed loop or 'cycle' (Fig. 1a, top). When all three connections in the cycle are excitatory, the three populations will synchronize. Essentially, we will have a single population. Thus, these two- and three-population motifs are not capable of oscillations.
The simplest network motif which is capable of oscillating consists of two mutually connected nodes: one excitatory and one inhibitory (EI motif: Fig. 1a, bottom) [12]. When there are three populations connected with three connections to form a cycle, the potential to oscillate depends on the number of inhibitory connections. A cycle with one inhibitory connection (EEI motif) can be effectively reduced to an EI motif and, therefore, can oscillate. However, when there are two inhibitory connections (EII motif, Fig. 1a, top), the two inhibitory neurons engage in winner-take-all type dynamics and the network is not capable of oscillations. Finally, if there are three inhibitory connections (i.e. all three nodes are inhibitory, III motif), the network enters a winner-less competition [22] and can exhibit oscillations (Fig. 1a, bottom).
These examples of two or three nodes suggest that a network can generate oscillations if there are one or three inhibitory connections in the network. These observations form the basis for the conjecture of Thomas [8] that gives a necessary condition for oscillations to emerge. This condition is of course not sufficient. In the following we find additional constraints (input and minimum connection strength) needed to determine the emergence of oscillations in a network. To this end we use the TLN model which captures the neural population dynamics to a great extent. After proving the key theorems, we test with simulation whether similar results hold on a more realistic Wilson-Cowan model and a model of basal ganglia with spiking neurons.
#### Threshold linear network model
We consider the TLN\((W,b)\) in which individual nodes follow the dynamics
\[\frac{dx_{i}}{dt}=-x_{i}+\left[\sum_{j=1}^{n}W_{ij}x_{j}+b_{i}\right]_{+},\quad i =1,\ldots,n \tag{1}\]
where \(n\) is the number of nodes, \(x_{i}(t)\) is the activity level of the \(i\)th node at time \(t\geq 0\), \(W_{ij}\) is the connection strength from node \(j\) to node \(i\) and \([\,\cdot\,]_{+}\stackrel{{\rm def}}{{=}}\max\{\,\cdot\,,0\}\) is the threshold non-linearity. For all \(i\in[n]\stackrel{{\rm def}}{{=}}\{1,\ldots,n\}\), the external inputs \(b_{i}\in\mathbb{R}\) are assumed to be constant in time. We refer to an \(n\)-neuron network with dynamics given by eq. 1 as TLN\((W,b)\).
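As a minimal illustration (our own sketch, assuming numpy; the weight and input values are illustrative, not taken from the experiments below), the TLN dynamics of eq. 1 can be integrated with forward Euler:

```python
import numpy as np

def simulate_tln(W, b, x0, dt=0.01, T=100.0):
    """Forward-Euler integration of dx/dt = -x + [Wx + b]_+ (eq. 1)."""
    n_steps = int(T / dt)
    x = np.array(x0, dtype=float)
    traj = np.empty((n_steps, x.size))
    for t in range(n_steps):
        x = x + dt * (-x + np.maximum(W @ x + b, 0.0))
        traj[t] = x
    return traj

# III cycle (three inhibitory nodes): node i receives weight -w from node i-1.
w = 3.0                       # > 1/cos(pi/3) = 2, so oscillations are expected
W = np.array([[0.0, 0.0, -w],
              [-w, 0.0, 0.0],
              [0.0, -w, 0.0]])
traj = simulate_tln(W, b=np.ones(3), x0=[0.5, 0.2, 0.1])
```

With \(w=3\) the trajectory settles into a limit cycle, whereas for \(w<2\) it converges to a fixed point, as formalized in Theorem 2 below.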
In order to help the definition of cycle connectivity matrices, we define
\[C_{n}\stackrel{{\rm def}}{{=}}\{(i,j)\in[n]^{2}|i-j=1\}\cup\{(1,n )\}.\]
We denote by \(\delta_{i,I}\) the Kronecker delta which equals 1 when node \(i\) is inhibitory and 0 otherwise (node \(i\) is excitatory). In the following, we use the convention that node 0 is node \(n\) and node \(n+1\) is node 1. For a given set of elements \(\{y_{k}\}_{k\in\mathbb{N}}\) in \(\mathbb{R}\), we will use the convention:
\[\prod_{k=i}^{j}y_{k}=1\mbox{ when }j<i. \tag{2}\]
We define \(\mathcal{A}=\{a_{1},\ldots,a_{n_{I}}\}\) as the ensemble of inhibited nodes (\(\{k\in[n]|\delta_{k-1,I}=1\}\)) put in order such that \(a_{1}<\cdots<a_{n_{I}}\). Denoting by card\((\cdot)\) the cardinal function, we have that card\((\mathcal{A})=n_{I}\). We also use the cycle convention for \(\mathcal{A}\): \(a_{n_{I}+1}=a_{1}\).
#### Analytical results
Figure 1: **Structural condition for oscillations: odd inhibitory cycle rule and its illustrations.****a**: Examples of oscillating motifs and non-oscillating motifs in the Wilson-Cowan model. Motifs that cannot oscillate show features of winner-take-all: the winner will inhibit other nodes with a high activity level. Inversely, the oscillatory ones all show features of winner-less competition, which may contribute to oscillation. **b**: The odd inhibitory cycle rule for oscillation prediction with the sign condition of a network. **c**: Illustrations of oscillation in complex networks. Based on the odd inhibitory cycle rule, Network I can’t oscillate, while Network II could oscillate, by counting the inhibitory links in their cycles. The red or black arrows indicate inhibition or excitation, respectively. Hollow nodes and solid nodes represent excitatory and inhibitory nodes, respectively.

**Theorem 1**.: _Let a network of inhibitory and excitatory nodes be connected through a graph \(G\) which does not contain any directed cycle. Assume that its nodes follow TLN(\(W,b\)) dynamics (eq. 1) with_
\[W_{ij}=\begin{cases}w_{ij}(-1)^{\delta_{j,I}}&\text{when edge $i\gets j\in G$}\\ 0&\text{otherwise},\end{cases}\]
_where \(w_{ij}\in\mathbb{R}^{+}\ \forall\ i,j\in[n]\). Then, TLN(\(W,b\)) has a unique globally asymptotically stable fixed point._
**Theorem 2**.: _Let G be a cyclical graph with \(n_{I}\in\mathbb{N}^{+}\) inhibitory nodes and \(n_{E}\in\mathbb{N}\) excitatory nodes such that \(n_{I}+n_{E}\geq 2\) (\(\geq 3\) when \(n_{I}=1\)). Assume that the nodes follow the TLN(\(W,b\)) dynamics (eq. 1) with, for all \(i,j\in[n]\), \(w_{j}\in\mathbb{R}^{+}\),_
\[W_{ij}=\begin{cases}w_{j}(-1)^{\delta_{j,I}}&\text{when $(i,j)\in C_{n}$}\\ 0&\text{otherwise},\end{cases}\]
_and \(b_{i}=0\) when the node \(i-1\) is excitatory and \(b_{i}>0\) otherwise. Moreover, using convention (eq. 2), assume that the initial state is bounded,_
\[\forall j\in\{x_{a_{k}},\ldots,x_{a_{k+1}-1}\},\quad x_{j}(0)\in[0,b_{a_{k}} \prod_{i=a_{k}}^{j-1}w_{i}]. \tag{3}\]
_Then, the long-term behaviour of the network depends on the following conditions,_
\[\forall k\in[n_{I}],\quad\prod_{i=a_{k}}^{a_{k+1}-1}w_{i}<\frac{b_{a_{k+1}}}{b_{a_{k}}}, \tag{4}\]
\[\forall k\in[n_{I}],\quad\prod_{i=a_{k}}^{a_{k+1}-1}w_{i}>\frac{b_{a_{k+1}}}{b_{a_{k}}}, \tag{5}\]
\[\sqrt[n]{\prod_{i=1}^{n}w_{i}}<\frac{1}{\cos(\pi/n)}, \tag{6}\]
\[\sqrt[n]{\prod_{i=1}^{n}w_{i}}>\frac{1}{\cos(\pi/n)}. \tag{7}\]
_If \(n_{I}\) is even and_
* _eq._ 4 _is satisfied, TLN(_\(W,b\)_) has a unique globally asymptotically stable fixed point with support_ \([n]\)
* _eq._ 5 _is satisfied, TLN_\((W,b)\) _has two asymptotically stable fixed points with strict complementary subsets of_ \([n]\) _as supports._
_If \(n_{I}\) is odd and_
* _eq._ 4 _is satisfied, TLN_\((W,b)\) _has a unique fixed point which is globally asymptotically stable and its support is_ \([n]\)_,_
* _eq._ 5 & eq._ 6 _are satisfied, TLN_\((W,b)\) _has a unique fixed point which is asymptotically stable (not globally) and its support is_ \([n]\)_,_
* _eq._ 5 & eq._ 7 _are satisfied, TLN_\((W,b)\) _has a unique fixed point which is unstable and has_ \([n]\) _as support._
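The conditions of Theorem 2 are easy to evaluate numerically. The sketch below (our own illustration, assuming numpy) classifies the regime of a single-cycle TLN from the weight magnitudes, the inhibitory/excitatory labels of the nodes, and the external inputs; it checks only eq. 4-7, not the initial-state bound of eq. 3, and it assumes at least one inhibitory node.

```python
import numpy as np

def classify_cycle(w, inhibitory, b):
    """w[j]: weight magnitude of the edge leaving node j (0-based);
    inhibitory[j]: True if node j is inhibitory;
    b[j]: external input (only the values at inhibited nodes are used,
    and those must be strictly positive, as in Theorem 2)."""
    n = len(w)
    # "Inhibited" nodes: those whose predecessor in the cycle is inhibitory.
    inhibited = [j for j in range(n) if inhibitory[(j - 1) % n]]
    n_I = sum(inhibitory)
    weak = strong = True
    for k, a in enumerate(inhibited):
        a_next = inhibited[(k + 1) % len(inhibited)]
        prod, i = w[a], (a + 1) % n   # product of w over the chain a..a_next-1
        while i != a_next:
            prod *= w[i]
            i = (i + 1) % n
        weak = weak and (prod < b[a_next] / b[a])      # eq. 4
        strong = strong and (prod > b[a_next] / b[a])  # eq. 5
    gm = np.prod(w) ** (1.0 / n)                       # geometric mean of weights
    if weak:
        return "unique globally asymptotically stable fixed point"
    if strong and n_I % 2 == 0:
        return "two asymptotically stable fixed points"
    if strong and gm < 1.0 / np.cos(np.pi / n):        # eq. 6
        return "unique asymptotically stable fixed point (not global)"
    if strong:                                         # eq. 7
        return "unstable fixed point: oscillations possible"
    return "undetermined (between eq. 4 and eq. 5)"

# III cycle with uniform weight 3 > 1/cos(pi/3) = 2 and inputs 1: oscillations.
print(classify_cycle([3.0, 3.0, 3.0], [True, True, True], [1.0, 1.0, 1.0]))
```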
_Remark 1_.: First, note that eq. 4 implies
\[\sqrt[n]{\prod_{i=1}^{n}w_{i}}<1, \tag{8}\]
and similarly, eq. 5 implies
\[\sqrt[n]{\prod_{i=1}^{n}w_{i}}>1. \tag{9}\]
In addition, the bound on the initial state eq. 3 can be easily removed. We use it because it eases the proof as we then don't need to introduce technical details that are not interesting for this study.
Then, Theorem 2 says that a possible condition for the one-cycle TLN to oscillate is that the number of inhibitory nodes is odd and the connection strengths are strong enough (i.e. eq. 5 & eq. 7 hold). In that case, the system has no stable fixed point, and from Lemma 1 it is bounded, so it exhibits a limit cycle, quasi-periodic, or chaotic behaviour. In particular, Theorem 2 states that an odd number of inhibitory nodes is not sufficient. Indeed, when eq. 4 holds and \(n_{I}\) is odd, no oscillations are possible as the fixed point is globally stable. It is also the case when \(n_{I}\) is even, which corresponds to Thomas' conjecture.
Finally, there is a gap between the conditions (e.g. between eq. 4 and eq. 5) for which the long-term behaviour is not determined.
_Remark 2_.: In particular, if for all \(i\in[n]\), \(w_{i}=w\in\mathbb{R}_{+}^{*}\) and for all \(k\in[n_{I}]\), \(b_{a_{k}}=b\in\mathbb{R}_{+}^{*}\), then the dynamics of the system only depends on \(w\). When \(n_{I}\) is even: \(w<1\) implies that \(\operatorname{TLN}(W,b)\) has a unique globally asymptotically stable fixed point; \(w>1\) implies that the fixed point for \(w<1\) becomes unstable and \(\operatorname{TLN}(W,b)\) has two more asymptotically stable fixed points. If \(n_{I}\) is odd, \(\operatorname{TLN}(W,b)\) only has a unique fixed point, which is asymptotically stable when \(w<\frac{1}{\cos(\pi/n)}\) (globally when \(w<1\)) and unstable when \(w>\frac{1}{\cos(\pi/n)}\).
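To make Remark 2 concrete, consider the III cycle (\(n=n_{I}=3\)) with uniform weight \(w\) and input \(b\) (an illustrative calculation of ours, not part of the original text). The symmetric fixed point is \(x^{*}=b/(1+w)>0\), so the threshold in eq. 1 is inactive in its neighbourhood, and linearizing there gives the eigenvalues

\[\lambda_{k}=-1-w\,e^{2\pi ik/3},\qquad k=0,1,2,\]

whose largest real part is \(-1+w/2\) (attained at \(k=1,2\), since \(\cos(2\pi/3)=-1/2\)). The fixed point therefore loses stability exactly at \(w=2=1/\cos(\pi/3)\), in agreement with the thresholds in eq. 6 and eq. 7.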
_Remark 3_.: In Theorem 2, we assume that the external inputs are absent for excited nodes. Assume that the external input to any excited node, say node \(i\) with \(a_{k}<i<a_{k+1}\), is strictly positive. Then, bounding its dynamics as in Lemma 1, we know that its activity will be more than \(b_{i}\). Hence, the next inhibited node \(a_{k+1}\) can be silenced forever if
\[b_{i}\prod_{j=i}^{a_{k+1}-1}w_{j}>b_{a_{k+1}},\]
destroying the cycle structure and thus preventing oscillations from emerging.
On the other hand, if the external inputs to excited nodes are strictly negative, Theorem 2 conclusion will be similar but now with condition described in eq. 4 replaced by
\[b_{a_{k}}\prod_{i=a_{k}}^{a_{k+1}-1}w_{i}-\sum_{j=a_{k}+1}^{a_{k+1}-1}b_{j} \prod_{i=j}^{a_{k+1}-1}w_{i}<b_{a_{k+1}}.\]
This means that cycles with an even (odd) number of inhibitory nodes need strong enough connections to generate multi-stability (a limit cycle). We now clarify that the latter condition relates to the weights' strength. With \(w=(w_{1},\cdots,w_{n})\) and using the set of increasing functions \((f_{i})_{1\leq i\leq n}\) such that
\[f_{i}^{w,b}(x)=w_{i-1}x-b_{i}\]
one can write the last condition as
\[f_{a_{k+1}}^{w,b}\circ\cdots\circ f_{a_{k}}^{w,b}(b_{a_{k}})<0.\]
Hence, the left-hand side is increasing in each of the weights.
One should also note that under this negative input assumption to excited nodes, when weights are weak, the support of the fixed point might be different from \([n]\). In particular, some excited nodes might not belong to the support.
_Remark 4_.: When the decay rates are not all the same (here all of them are \(-1\)), similar results hold, but then the conditions for stability are more difficult to state precisely. Finally, when considering the EI network (two nodes), the system always admits a unique globally asymptotically stable fixed point with support \(\{1,2\}\). Indeed, it is easy to show that the system will always reach the domain where the activity of the inhibitory node is small enough so that one can remove the threshold function in eq. 1, and thus the eigenvalues of the Jacobian matrix are \(\pm i\sqrt{w_{1}w_{2}}-1\). No oscillations are then possible, which is an easy example showing that negative loops are not sufficient to generate oscillations in non-smooth dynamical systems.
Similar results have been shown by Snoussi [9] and Gouze [10]. Considering dynamical systems of the form \(\dot{x}=f(x)\), where \(f\) is a continuously differentiable function on a given open convex set and \(f\) has a constant-sign Jacobian matrix, they used graph-theoretic methods to show that a negative loop in this matrix is a necessary condition to generate oscillations. In our case, \(f\) is not continuously differentiable, the Jacobian matrix elements can change sign within the state space, and we show that additional constraints are needed for oscillations to arise. A formal proof of the aforementioned theorems is provided in the Appendix using classical dynamical systems tools.
### Intuition behind the proof of the theorems
The idea behind our proof can be explained graphically. We assume that nodes cannot oscillate due to their intrinsic activity, and a fixed external input only drives them to a non-zero activity which does not change over time. Therefore, they need input from their pre-synaptic (upstream) nodes to change their state in a periodic manner to generate oscillations. In such a network, if we perturb node \(i\) with a pulse-like input, it is necessary that the perturbation travels through the network and returns to node \(i\) with a \(180^{\circ}\) phase shift (i.e. with an inverted sign). Otherwise, the perturbation dies out and each node returns to a state imposed by its external input.
In a network without directed cycles, it is possible to sort the nodes into smaller groups where nodes do not connect to each other (Fig. 2a). That is, a network with no directed cycles can be rendered as a feed-forward network in which the network response, by definition, does not return to the node (or group) that was perturbed. Such a network can only oscillate when the intrinsic dynamics of individual nodes allow for oscillatory dynamics.
However, having a directed cycle is no guarantee of oscillations because the network activity must return to the starting node with a 180\({}^{\circ}\) phase shift. This requirement puts a constraint on the number of inhibitory connections in the cycle. When we assume that there are no delays (or the delay is constant) in the connections, excitatory connections do not introduce any phase shift; however, inhibitory connections shift the phase by 180\({}^{\circ}\) (in the simplest case, invert the sign of the perturbation). Given this, when a cycle has an even number of inhibitory connections, the cycle cannot exhibit oscillations (Fig. 2b, top). However, replacing an inhibitory connection with an excitatory one can render this cycle able to oscillate (Fig. 2b, bottom). Therefore, an odd number of inhibitory connections appears to be necessary for oscillations to emerge.
### The effect of network parameters on oscillations
To test the validity of our theorems in more realistic biological neuronal networks, we numerically simulated the dynamics of the Wilson-Cowan model. Specifically, we investigated the role of synaptic transmission delays, synaptic weights, external inputs and self-connection in shaping the oscillations when the network has directed cycles. In particular, we focused on two networks: the III motif with three inhibitory nodes (odd inhibitory links) and the EII motif with one excitatory and two inhibitory nodes (even inhibitory links).
Our numerical simulation showed that for a wide range of parameters (synaptic delays, synaptic weights, external input and self-inhibition), while III network showed oscillations, EII network did not show any oscillations (Fig. 3 and S1). The oscillation frequency however depended on the exact value of the synaptic delays, synaptic weights and external inputs. For instance, increasing the synaptic delay reduced the oscillation frequency (Fig. 3 b). Synaptic delays play a more important role in shaping the oscillations in an EI type network (see Supplementary Fig. S2). The effect of increasing the synaptic strength was contingent on the external inputs. In general, increasing the synaptic strength resulted in a reduction in the oscillation frequency (Fig. 3 c). Next, oscillation frequency changed in a non-monotonic fashion as a function of external input irrespective of the choice of other parameters (Fig. 3 d). Typically, a mid-range input strength resulted in maximum oscillation frequency. Finally, increasing the self-connection of nodes increased the oscillation frequency but beyond a certain self-connection the node was completely silenced and it changed the network topology
Figure 2: **The intuitive explanations of Theorem 1 and 2.****a**: A visual representation of why directed cycles are important in network oscillation. By rearranging all nodes, any network without directed cycles can be seen as a feed-forward network which will make the system reach a stable fixed point. **b**: An intuitive explanation of the odd inhibitory cycle rule by showing the activities of two 6-node-loops. Odd inhibitory connections (bottom) can help the system oscillate, while even inhibitory connections has the opposite effect.
and oscillations disappeared (Fig. 3 e).
Overall, these results are consistent with our rule that an odd number of inhibitory nodes and strong enough connections are necessary to induce oscillations in a directed cycle. The actual frequency of oscillations depends on specific network parameters.
### Oscillators in the cortex-basal ganglia network
Next, we use our theorem to explain recent experimental observations about the mechanisms underlying the emergence of oscillations in the basal ganglia. Emergence of 15-30 Hz (beta band) oscillations in the cortico-basal ganglia (CBG) network is a ubiquitous feature of Parkinson's disease (PD) [23; 24; 25; 26]. Based on their connectivity and activity, the subthalamic nucleus (STN) and globus pallidus externa (GPe) subnetwork has emerged as the most likely generator of beta oscillations [27; 28]. The STN-GPe subnetwork becomes oscillatory when their mutual connectivity is altered [15; 29], neurons become bursty [30; 31], or striatal inputs to GPe increase [16; 32; 33; 34]. However, oscillations might also be generated by the striatum [35], by the interaction between the direct and hyperdirect pathways [36], and even by cortical networks that project to the BG [37]. Recently, de la Crompe et al. [17] used optogenetic manipulations to shed light on the mechanisms underlying oscillation generation in PD. They showed that GPe is essential to generate beta band oscillations while motor cortex and STN are not. These experiments force us to rethink the mechanisms by which beta band oscillations are generated in the CBG network.
To better understand when GPe and/or STN are essential for beta band oscillations, we identified the network motifs which fulfill the odd inhibitory cycle rule. For this analysis, we excluded D1 SPNs because they have a very low firing rate in the PD condition [33]. In addition, the cortex is treated as a single node in the CBG network.
The CBG network can be partitioned into 238 subnetworks with 2, 3, 4, 5 or 6 nodes (see Supplementary Fig. S3-S8). Among these partitions, there are five loops (or cycles) in the CBG network with one or three inhibitory projections: Proto-STN, STN-GPi-Th-cortex, Proto-Arky-D2, Proto-FSN-D2, and Proto-GPi-Th-Cortex-D2 (Fig. 4a). One or more of these 5 loops appeared in 88 (out of 238) subnetworks of CBG (see Fig. 4b, colors indicate different loops). Larger subnetworks consisting of 5 and 6 nodes have multiple smaller subnetworks (with 2 or 3 nodes) that can generate oscillations (boxes with multiple colors in Fig. 4b).

Figure 3: **Influence of network properties on the oscillation frequency in motifs III and EII with the Wilson-Cowan model.****a**: The changed network parameters are shown in the table. Red (green) connections are inhibitory (excitatory) and black arrows are the external inputs. **b-e**: We systematically varied the synaptic delay time (**b**), synaptic weights (**c**), external input (**d**), and self-connection (**e**). These parameters were varied simultaneously for all the synapses, i.e. in each simulation all synapses were homogeneous. Green, orange, red and turquoise respectively show the effect of synaptic delay, synaptic strength, external input and self-inhibition. See Supplementary Fig. S1 and Fig. S2 for more detailed results about the III and EI network motifs.
Based on the odd inhibitory cycle rule applied to the BG, we found three oscillatory subnetworks which do not involve the STN (Fig. 4a, cyan, green and purple subnetworks). However, each of these oscillatory subnetworks involves Prototypical neurons (from the GPe), which receive excitatory input from STN. Therefore, it is not clear whether inhibition of STN can affect oscillations or not. To address this question, we first simulated the dynamics of a four-node motif (Fig. 4c, top) using the Wilson-Cowan type model (see Methods). In this subnetwork, we have three cycles: the Proto-STN loop with one inhibitory connection, the Proto-STN-Arky-D2 loop with three inhibitory connections, and the Proto-Arky-D2 loop with three inhibitory connections.
We systematically varied external inputs to the STN and D2-SPNs and measured the frequency of oscillations (see Methods). We found that, for weak inputs to the D2-SPNs, the Proto-STN subnetwork generated oscillations for weak positive input (Fig. 4c, bottom). However, as the input to D2-SPNs increased, the oscillation frequency decreased and oscillations were observed even for a very strong drive to STN (Fig. 4c, bottom). That is, in this model, the Proto-STN and Proto-D2-Arky subnetworks compete for oscillations, and which subnetwork wins depends on their inputs. To disentangle the oscillations of each of these two subnetworks, we performed 'lesion' experiments in our model (see Methods). These experiments also mimicked lesions performed in non-human primates [30].
When we removed the D2-SPN to Proto projections, the network could oscillate, but only because of the Proto-STN subnetwork (Fig. 4d). In this setting, we get relatively high-frequency beta band oscillations, but only for a small range of excitatory inputs to the STN (Fig. 4d, bottom). In this setting, inhibition of STN would certainly abolish any oscillation. Next, when we removed the STN output (equivalent to inhibition of STN), the Proto-D2-Arky subnetwork generated oscillations for weak positive inputs to the D2-SPNs (Fig. 4e, bottom). Note that, unlike in Fig. 4c, here we injected additional input to Proto to compensate for the loss of excitatory input from STN and to ensure that it had sufficient baseline activity. The frequency of Proto-D2-Arky oscillations was smaller than that observed for the Proto-STN subnetwork because the former involves a three-synapse loop. However, as we have shown earlier, the frequency of oscillation can be changed by scaling the connection weights or external inputs (Fig. 3). Overall, these results suggest that, in principle, it is possible for the CBG network to oscillate even when STN is removed from the network.
Figure 4: **Schematic of the CBG network model with potential oscillators and the interaction between two oscillators in the Wilson-Cowan model.****a**: CBG structure with red lines denoting inhibition and green lines denoting excitation, along with five potential oscillators based on the odd inhibitory cycle rule. **b**: Oscillation in all BG motifs from 2 nodes to 6 nodes based on the odd inhibitory cycle rule. Each grid cell represents a separate motif. We use different colors to mark motifs that can oscillate, and each color corresponds to an oscillator from panel **a**. **c**: The dependence of the oscillation frequency on different external inputs to D2 and STN in a BG subnetwork. External inputs to Proto and Arky are 1 and 3, respectively. **d**: Same as **c** but removing the connection from D2 to Proto. **e**: Same as **c** but removing the connections from STN and increasing the input to Proto from 1 to 4.
#### Oscillations in a model of the basal ganglia with spiking neurons
Thus far we have only illustrated the validity of our theorems in a firing rate-based model. To be of any practical value to brain science, it is important to check whether our theorems can also help in a network with spiking neurons. To this end, we simulated the two subnetworks with 3 inhibitory connections: Proto-D2-FSN and Proto-D2-Arky (see Methods). These subnetworks were simulated using a previous model of BG with spiking neurons [34].
The Proto, Arky, D2-SPN and FSN subnetworks have too little recurrent connectivity to oscillate on their own. We provided Poisson-type external input. All neurons in a subnetwork received the same input rate but a different realization of the Poisson process. Both the Proto-D2-FSN (Fig. 5a) and Proto-D2-Arky (Fig. 5b) subnetworks showed \(\beta\)-band oscillations. In the Proto-D2-FSN loop, D2-SPN neurons have a relatively high firing rate. This could be a criterion to exclude this loop as a potential contributor to the beta oscillations.
Next, we mimicked the STN inhibition experiments performed by de la Crompe et al. [17] in our model. To this end, we simulated the dynamics of BG network excluding D1-SPNs (because of their low firing rate in PD condition) and FSN (because with FSNs in the oscillation loop, D2-SPNs may have non-physiological firing rates). In this reduced model of BG, we changed inputs to operate in a mode where either Proto-D2-Arky (Fig. 5c) or Proto-STN (Fig. 5d) loop was generating the oscillations. In both cases, we systematically increased the inhibition of STN neurons.
In the Proto-D2-Arky mode, as we inhibited STN neurons, the firing rate of the Proto neurons decreased and oscillations in the STN population diminished, but Proto neurons showed clear beta band oscillations (Fig. 5c). By contrast, and as expected when the STN-Proto loop was generating the oscillations, increasing the STN inhibition abolished the oscillations in both STN and Proto neurons (Fig. 5d). In the STN-Proto loop, when STN is inhibited, there is no cycle left in the network and therefore oscillations diminished, whereas the Proto-D2-Arky loop remained unaffected by the STN inhibition (except for a change in the firing rate of the Proto neurons). As shown in Fig. 4c, whether oscillations are generated by the Proto-D2-Arky or STN-Proto loop depends on the relative input to the D2 or STN neurons. So it is possible that, in rodents, D2-SPNs have stronger input from the cortex than STN and, therefore, oscillations survive despite near complete inhibition of STN.
## Discussion
Here we prove in a single-cycle TLN model, and illustrate with numerical simulations of biological networks, that when the number of inhibitory nodes in a directed cycle is odd and connections are strong enough, the system has the potential to oscillate. In 1981, Thomas [8] conjectured that at least one negative feedback loop (i.e., a loop with an odd number of repressors) is needed for gene regulatory networks to have periodic oscillating behavior. This conjecture was proven for smooth dynamical systems by Snoussi [9] and Gouze [10]. But their proof required the node transfer function to be differentiable everywhere. We here prove a more complete theorem for the case where the node transfer function is threshold-linear, as is the case for many networks in the brain. Thus, together with the previous results of Snoussi [9] and Gouze [10], we further expand the scope within which we can comment on the potential of a network to generate oscillations based on the connectivity structure alone. In addition, we complement this condition with one on the connection strengths, stating that the latter need to be strong enough for the system to possibly oscillate. Finally, oscillations can be quenched by adding positive external input to excited nodes.

Figure 5: **Oscillations in a leaky integrate-and-fire (LIF) spiking neuronal network model of specific BG motifs.****a-b**: Average peristimulus time histograms (PSTH) of all neurons in (**a**) the Proto-FSN-D2 and (**b**) the Proto-Arky-D2 motifs under the Parkinson condition, with the power spectral density (PSD) at the top right. **c**: PSTH of Proto and STN in a BG subnetwork with the motif Proto-Arky-D2 as the oscillator under different levels of STN inhibition. **d**: Same as **c** but changing the oscillator from Proto-Arky-D2 to Proto-STN.
A key assumption of our analysis is that there are no delays in the network. Indeed, delays within and between subnetwork connections can have a big effect on the oscillations [38]. In the numerical simulations of the basal ganglia network, we included biologically realistic synaptic delays (i.e. connection delays were shorter than the time constants of the neurons). Our results suggest that such delays do not alter our conclusions; they only determine the oscillation frequency. But it is not possible to comment on how the results may change when delays become longer than the time constant of the node.
#### Interactions between input and network structure
Previous models suggest that when we excite the excitatory node or inhibit the inhibitory node, oscillations can emerge and strengthen [12, 16]. By contrast, when we inhibit the excitatory node or excite the inhibitory node, oscillations are quenched. This can be summarised as the 'Oscillations Sign Rule'. Let us label the excitatory population as positive and the inhibitory as negative. Let us also label excitatory inputs as positive and inhibitory inputs as negative. Now, if we multiply the sign of the node and the sign of the stimulation, we can comment on the fate of oscillations in a qualitative manner. For example, inhibition of inhibitory nodes would be \(-\times-=+\), i.e. oscillations should be increased, and when we inhibit excitatory nodes, it would be \(-\times+=-\), i.e. oscillations should be decreased. The 'Oscillations Sign Rule' scales to larger networks with more nodes. With the 'Odd Cycle Rule', as we have shown, we can comment on whether a directed cycle will oscillate or not from the count of inhibitory links. When we combine the 'Oscillations Sign Rule' with the 'Odd Cycle Rule', we get a more complete qualitative picture of whether stimulating a node in a network will generate oscillations or not.
#### Interaction between node properties and network structure
In our proof we have assumed that nodes follow rather simple dynamics and have a threshold-linear transfer function. In reality, nodes in physical, chemical and biological systems can have more complex dynamics. For instance, biological neurons have the property of spike frequency adaptation or rebound spiking. Similarly, synapses in the brain can increase or decrease their weights based on the recent history of inputs, which is referred to as short-term facilitation or short-term depression [39]. Such biological properties can be absorbed into the network structure in the form of an extra inhibitory or excitatory connection. When nodes can oscillate given their intrinsic dynamics, the question becomes more about whether the network structure can propagate oscillations to other nodes.
#### Oscillations in the basal ganglia
We applied our results to understand the mechanisms underlying the emergence of PD-related pathological oscillations in the basal ganglia. Given that there are 8 key neuron populations in the basal ganglia, we enumerated 238 possible subnetworks. From 2-node motifs to 6-node motifs, our odd cycle rule identified 88 subnetworks containing directed cycles that can generate oscillations. Among these, 81 cycles feature GPe (either the Proto or Arky type, or both) and 66 feature STN. Which specific cycle underlies oscillations depends on the exact input structure. For instance, when input to STN is higher than to the D2 neurons, the STN-GPe network generates oscillations. But when inputs to D2 neurons are stronger, the D2-Proto-Arky cycle can become the oscillator. That is, STN is not necessary to generate oscillations in the basal ganglia. Our results also suggest that, besides focusing on the network connectivity, we should also estimate the inputs to different nodes in order to pinpoint the key nodes underlying the PD-related pathological oscillations - that would be the way to reconcile the recent findings of de la Crompe [17] with previous results.
#### Beyond neural networks
In this work we have used the odd cycle rule to study oscillations in the basal ganglia. However, oscillatory dynamics and the odd cycle rule show up in many chemical, biological and even social systems, such as neuronal networks [40], psychological networks [41], social and political networks [42, 43, 44, 45], resting-state networks in autism [46] and gene networks [47, 48]. In fact, Thomas' conjecture [8] about the structural conditions for oscillations was originally made for gene regulatory networks. Therefore, we think that insights obtained from our analytical work can be extended to many other chemical, biological and social networks. It would be interesting to check to what extent our prediction of quenching oscillations by exciting the excited nodes holds in other systems besides biological neuronal networks.
## Methods
To study the emergence of oscillations in the basal ganglia, we used three models: the threshold-linear network (TLN), the Wilson-Cowan model and a network with spiking neurons. The TLN model (eq. 1) was used to rigorously prove that simple conditions, such as the odd inhibitory cycle rule, can lead to oscillations (Theorems 1 and 2). The Wilson-Cowan type firing rate-based model was used to find the structural constraints on oscillations and to determine the effect of network properties (such as delays, synaptic weights, external inputs, and self-inhibition) on the emergence of oscillations. Finally, to demonstrate the validity of the odd inhibitory cycle rule in a more realistic model, we used a network with spiking neurons.
### Wilson-Cowan dynamics
In the firing rate-based models, we reduced each cortex-basal ganglia (CBG) subnetwork to a single node. To describe firing rate dynamics of such a node, we used the classic Wilson-Cowan model [18]
\[\tau\frac{dr_{i}(t)}{dt}=-r_{i}(t)+F\left(\sum_{j=1}^{n}w_{ij}r_{j}+I_{i}^{ext }\right) \tag{10}\]
where \(r_{i}(t)\) is the firing rate of the \(i\)th node, \(\tau\) is the time constant of the population activity, \(n\) is the number of nodes (or subnetworks), \(w_{ij}\) is the strength of the connection from node \(j\) to node \(i\), and \(I_{i}^{ext}\) is the external input to the population. \(F\) is a nonlinear activation function relating output firing rate to input, given by
\[F(x)=\frac{1}{1+e^{-a(x-\theta)}}-\frac{1}{1+e^{a\theta}} \tag{11}\]
where the parameter \(\theta\) is the position of the inflection point of the sigmoid, and \(\frac{a}{4}\) is the slope at \(\theta\). Here, \(\tau\), \(\theta\), and \(a\) are set to 20, 1.5, and 3, respectively. Other parameters of the model varied with each simulation. The simulation-specific parameters are shown in Tables 1 and 2 for Fig. 3 and in Table 3 for Fig. 4.
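A minimal sketch of this model (our own illustration in Python rather than MATLAB, without synaptic delays) integrates eq. 10 with forward Euler; the values of \(\tau\), \(\theta\) and \(a\) follow the text, and the III-motif weights and inputs follow the defaults of Table 1.

```python
import numpy as np

tau, theta, a = 20.0, 1.5, 3.0  # values quoted in the text above

def F(x):
    """Sigmoid of eq. 11, shifted so that F(0) = 0."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

def simulate_wc(W, I_ext, r0, dt=0.01, T=2000.0):
    """Forward-Euler integration of eq. 10 (no synaptic delays)."""
    n_steps = int(T / dt)
    r = np.array(r0, dtype=float)
    traj = np.empty((n_steps, r.size))
    for t in range(n_steps):
        r = r + dt / tau * (-r + F(W @ r + I_ext))
        traj[t] = r
    return traj

# III motif with the default values of Table 1: weights -15, external input 6.
W = np.array([[0.0, 0.0, -15.0],
              [-15.0, 0.0, 0.0],
              [0.0, -15.0, 0.0]])
traj = simulate_wc(W, I_ext=np.full(3, 6.0), r0=[0.3, 0.2, 0.1])
```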
### Network model with spiking neurons
The basal ganglia network with spiking neurons was taken from a previous model by Chakravarty et al. [34]. Here we describe the model briefly and for details we refer the reader to the paper by Chakravarty et al. [34].
#### Spiking neuron model
Here, we excluded D1-SPNs because they have a rather small firing rate in PD conditions. The striatal D2-type spiny neurons (D2-SPN), fast-spiking neurons (FSNs) and STN neurons were modelled as standard LIF neurons with conductance-based synapses. The membrane potential \(V^{x}(t)\) of these neurons was given by:
\[C_{m}^{x}\frac{dV^{x}(t)}{dt}=I_{e}(t)+I_{syn}(t)-g_{L}^{x}\left[V^{x}(t)-V_{ reset}^{x}\right] \tag{12}\]
where \(x\in\{\)D2-SPN, FSN, and STN\(\}\), \(I_{e}(t)\) is the external current induced by Poisson type spiking inputs (see below), and \(I_{syn}(t)\) is the total synaptic input (including both excitatory and inhibitory inputs). When \(V^{x}\) reached the threshold potential \(V_{th}^{x}\), the neuron was clamped to \(V_{reset}^{x}\) for a refractory duration \(t_{ref}\) = 2 ms. All the parameter values and their meaning for D2-SPN, FSN and STN are summarized in Tables S2, S3 and S4, respectively.
We used the LIF model with exponential adaptation (AdEx) to simulate Proto and Arky neurons of the globus pallidus externa (GPe), with their dynamics defined as
\[C^{x}\frac{dV^{x}(t)}{dt} =-g_{L}^{x}[V^{x}(t)-V_{\rm reset}^{x}]-w^{x}+I_{\rm syn}^{x}(t)+ I_{e}+g_{L}^{x}\Delta_{T}\exp\left(\frac{V^{x}(t)-V_{T}^{x}}{\Delta_{T}}\right)\] \[\tau_{w}\dot{w}^{x} =a\left(V^{x}(t)-V_{\rm reset}^{x}\right)-w^{x}\]
where \(x\in\{\)Proto, Arky\(\}\). Here, when \(V^{x}(t)\) reaches the threshold potential (\(V_{th}^{x}\)), a spike is generated, and \(V^{x}(t)\) and \(w^{x}\) are reset to \(V_{\text{reset}}^{x}\) and \(w^{x}+b\), respectively, where \(b\) denotes the spike-triggered adaptation. The parameter values and their meaning for Proto and Arky are specified in Table S5. Neurons were connected by static conductance-based synapses. The transient of each incoming synaptic current is given by:
\[I_{\text{syn}}^{x}\left(t\right)=g_{\text{syn}}^{x}\left(t\right)\left[V^{x}( t)-E_{rev}^{x}\right]\]
where \(x\in\{\)D2-SPN, FSN, STN, Arky, and Proto\(\}\). \(E_{rev}^{x}\) is the synaptic reversal potential and \(g_{\text{syn}}^{x}(t)\) is the time course of the conductance transient, given as follows:
\[g_{\text{syn}}^{x}(t)=\left\{\begin{array}{l}J_{syn}^{x}\frac{t}{\tau_{syn }}\exp\left(\frac{-(t-\tau_{syn})}{\tau_{syn}}\right),\text{ for }t\geq 0\\ 0,\text{ for t}<0\end{array}\right.,\]
where \(syn\in\{\)exc, inh\(\}\), \(J_{syn}^{x}\) is the peak of the conductance transient and \(\tau_{syn}^{x}\) is the synaptic time constant. The synaptic parameters are shown in Table S6.
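For illustration, the conductance transient above can be written as the following kernel (our own numpy sketch); note that it is normalized so that its peak equals \(J_{syn}^{x}\) at \(t=\tau_{syn}\).

```python
import numpy as np

def g_syn(t, J_syn, tau_syn):
    """Conductance kernel above: peaks at g(tau_syn) = J_syn, zero for t < 0."""
    t = np.asarray(t, dtype=float)
    g = J_syn * (t / tau_syn) * np.exp(-(t - tau_syn) / tau_syn)
    return np.where(t >= 0.0, g, 0.0)
```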
Some of the model parameters were changed to operate the BG model in specific modes dominated by a two- or three-node cycle. Tables S7 and S8 show the parameters for Fig. 5c and Fig. 5d, respectively.
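As a sketch of how the AdEx dynamics above can be advanced in time (our own forward-Euler illustration; the parameter values are placeholders, not the entries of Table S5):

```python
import numpy as np

def adex_step(V, w, I_syn, I_e, p, dt=0.1):
    """One Euler step of the AdEx equations above, including the reset rule."""
    dV = (-p["gL"] * (V - p["V_reset"]) - w + I_syn + I_e
          + p["gL"] * p["DeltaT"] * np.exp((V - p["VT"]) / p["DeltaT"])) / p["C"]
    dw = (p["a"] * (V - p["V_reset"]) - w) / p["tau_w"]
    V, w = V + dt * dV, w + dt * dw
    spiked = V >= p["V_th"]
    V = np.where(spiked, p["V_reset"], V)  # clamp the voltage after a spike
    w = np.where(spiked, w + p["b"], w)    # spike-triggered adaptation b
    return V, w, spiked

p = dict(C=200.0, gL=10.0, V_reset=-60.0, VT=-50.0, DeltaT=2.0,
         a=2.0, tau_w=100.0, b=50.0, V_th=-40.0)  # illustrative values only
V, w, spiked = adex_step(np.array([-55.0]), np.array([0.0]), 0.0, 300.0, p)
```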
### External input
Each neuron in each sub-network of the BG received external input in the form of excitatory Poisson-type spike trains. This input was provided to achieve a physiological level of spiking activity in the network. For more details, please see Chakravarty et al. [34]. Briefly, the external input was modelled as the injection of a Poisson spike train for a brief period of time using the inhomogeneous_poisson_generator device in NEST. The strength of the input stimulation can be controlled by varying the amplitude of the EPSP from the injected spike train.
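In plain numpy, a homogeneous Poisson spike train of the kind used for external input can be generated as below (our own sketch; NEST's inhomogeneous_poisson_generator additionally allows the rate to change over time):

```python
import numpy as np

def poisson_spike_train(rate, T, rng=None):
    """Spike times (ms) of a Poisson process with `rate` spikes/s over T ms."""
    if rng is None:
        rng = np.random.default_rng()
    n_max = int(rate * T / 1000.0 * 1.5) + 10         # generous upper bound
    isi = rng.exponential(1000.0 / rate, size=n_max)  # inter-spike intervals (ms)
    spikes = np.cumsum(isi)
    return spikes[spikes < T]

spikes = poisson_spike_train(rate=500.0, T=1000.0)    # 1 s of 500 spikes/s input
```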
### STN inhibition experiment
We set up a subnetwork of the basal ganglia to study how STN inhibition affects oscillations when different motifs dominate the system. The connections and external inputs to each neuron in Fig. 5c and 5d are shown in Tables S7 and S8. To simulate increasing inhibition of STN, the external input to STN was reduced from 1 pA to -99 pA in Fig. 5c and from 30 pA to -50 pA in Fig. 5d.
### Data analysis
The oscillation frequency of the firing rate-based model was estimated from the power spectral density calculated with the `pwelch` function of MATLAB. The spiking activity of all neurons in a sub-population was pooled and binned (rectangular bins, bin width = 0.1 ms), and the spectrum of the binned activity was then calculated with the same `pwelch` function.
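The spiking-data pipeline can be sketched as follows; this is a Python re-implementation using `scipy.signal.welch` as a stand-in for MATLAB's `pwelch`, with the 0.1 ms bin width stated above, and the `nperseg` value is an illustrative choice.

```python
import numpy as np
from scipy.signal import welch  # Python stand-in for MATLAB's pwelch

def population_spectrum(spike_trains_ms, t_stop_ms, bin_ms=0.1):
    """Pool all spikes of one sub-population, bin them into rectangular
    0.1 ms bins, and estimate the power spectral density of the result."""
    edges = np.arange(0.0, t_stop_ms + bin_ms, bin_ms)
    binned, _ = np.histogram(np.concatenate(spike_trains_ms), bins=edges)
    fs = 1000.0 / bin_ms  # sampling rate in Hz (0.1 ms bins -> 10 kHz)
    return welch(binned - binned.mean(), fs=fs, nperseg=4096)

# The dominant oscillation frequency is the location of the PSD peak:
# freqs, psd = population_spectrum(trains, t_stop_ms=2000.0)
# f_osc = freqs[np.argmax(psd)]
```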
### Simulation tools
The Wilson-Cowan type firing rate-based model was simulated in MATLAB; all relevant differential equations were integrated using the Euler method with a time step of 0.01 ms. The network of spiking neurons was simulated in Python 3 with the simulator NEST 2.20 [49]. During these simulations, the differential equations of the BG neurons were integrated using the Runge-Kutta method with a time step of 0.1 ms.
### Code availability
The simulation code will be made available on GitHub upon publication of the manuscript.
\begin{table}
\begin{tabular}{l c c c c c} & \multicolumn{3}{c}{Synaptic weights} & \multicolumn{2}{c}{population properties} \\ \cline{2-6} Populations & E1 & I1 & I2 & external input & delay \\ \hline E1 & 0 & 0 & -15 (-20 – 0) & 6 & 0 (0 – 10) \\ I1 & 15 (0 – 20) & 0 (-20 – 0) & 0 & 6 (0 – 20) & 0 (0 – 10) \\ I2 & 0 & -15 (-20 – 0) & 0 (-20 – 0) & 6 (0 – 20) & 0 (0 – 10) \\ \end{tabular}
\end{table}
Table 2: Parameters of EII network for Fig. 3
\begin{table}
\begin{tabular}{l c c c c c} & \multicolumn{3}{c}{Synaptic weights} & \multicolumn{2}{c}{population properties} \\ \cline{2-6} Populations & I1 & I2 & I3 & external input & delay \\ \hline I1 & 0 (-20 – 0) & 0 & -15 (-20 – 0) & 6 (0 – 20) & 0 (0 – 10) \\ I2 & -15 (-20 – 0) & 0 (-20 – 0) & 0 & 6 (0 – 20) & 0 (0 – 10) \\ I3 & 0 & -15 (-20 – 0) & 0 (-20 – 0) & 6 (0 – 20) & 0 (0 – 10) \\ \end{tabular}
\end{table}
Table 1: Parameters of III network for Fig. 3 and S1
## Acknowledgments
We thank Kingsshuk Chakravarthy for sharing the code of the basal ganglia network with spiking neurons. We thank Dr. Henri Rhiimaki for helpful comments and suggestions. This work was funded in part by the Swedish Research Council (VR), StratNeuro (to AK), Digital Futures grants (to AK and PH), a Fellowship from the Institute of Advanced Studies, University of Strasbourg, France (to AK), and the National Natural Science Foundation of China under Grant Nos. 11572127 and 11872183 (to SL).
|
2302.11007 | Unification of popular artificial neural network activation functions | We present a unified representation of the most popular neural network
activation functions. Adopting Mittag-Leffler functions of fractional calculus,
we propose a flexible and compact functional form that is able to interpolate
between various activation functions and mitigate common problems in training
neural networks such as vanishing and exploding gradients. The presented gated
representation extends the scope of fixed-shape activation functions to their
adaptive counterparts whose shape can be learnt from the training data. The
derivatives of the proposed functional form can also be expressed in terms of
Mittag-Leffler functions making it a suitable candidate for gradient-based
backpropagation algorithms. By training multiple neural networks of different
complexities on various datasets with different sizes, we demonstrate that
adopting a unified gated representation of activation functions offers a
promising and affordable alternative to individual built-in implementations of
activation functions in conventional machine learning frameworks. | Mohammad Mostafanejad | 2023-02-21T21:20:59Z | http://arxiv.org/abs/2302.11007v3 | # Unification of popular artificial neural network activation functions
###### Abstract
We present a unified representation of the most popular neural network activation functions. Adopting Mittag-Leffler functions of fractional calculus, we propose a flexible and compact functional form that is able to interpolate between various activation functions and mitigate common problems in training neural networks such as vanishing and exploding gradients. The presented gated representation extends the scope of fixed-shape activation functions to their adaptive counterparts whose shape can be learnt from the training data. The derivatives of the proposed functional form can also be expressed in terms of Mittag-Leffler functions, making it a suitable candidate for gradient-based backpropagation algorithms. By training the LeNet-5 neural network on the MNIST and CIFAR-10 datasets, we demonstrate that adopting a unified gated representation of activation functions offers a promising and affordable alternative to individual built-in implementations of activation functions in conventional machine learning frameworks.
## I Introduction
Activation functions are one of the key building blocks in artificial neural networks (ANNs) that control the richness of the neural response and determine the accuracy, efficiency and performance[1] of multilayer neural networks as universal approximators.[2] Due to their biological links[3, 4] and optimization performance, saturating activation functions[5] such as logistic sigmoid and hyperbolic tangent[6] were commonly adopted in early neural networks. Nevertheless, both activation functions suffered from the vanishing gradient problem.[7] Later studies on image classification using restricted Boltzmann machines[8] and deep neural networks[9] demonstrated that rectified linear units (ReLUs) can mitigate the vanishing gradient problem and improve the performance of neural networks. Furthermore, the sparse coding produced by ReLUs not only creates a more robust and disentangled feature representation but also accelerates the learning process.[9]
The computational benefits and the current popularity of ReLUs should be taken with a grain of salt due to their notable disadvantages such as bias shift,[1] ill-conditioned parameter scaling[9] and dying ReLU.[10] Furthermore, the unbounded nature of ReLUs for positive inputs, while potentially helpful for training deep neural networks, can aggravate the exploding gradient problem in recurrent neural networks.[11, 12] In order to address the dying ReLU and the vanishing/exploding gradient problems, a multitude of ReLU variants has been proposed[13, 14, 15, 16, 17] but none has managed to consistently outperform the vanilla ReLUs in a wide range of experiments.[18] Alternative activation functions such as exponential linear units (ELUs)[1] and scaled exponential linear units (SELUs)[19] have also been proposed to build upon the benefits of ReLU and its variants and provide more robustness and resistance towards the input noise. Yet, among the existing slew of activation functions in the literature,[20, 21, 22] no activation function seems to offer global superiority across all modalities and application domains.
Trainable activation functions,[23, 24, 25, 26] whose functional form is learnt from the training data, offer a more flexible option than their fixed-shape counterparts. In order to be able to fine-tune the shape of activation functions during backpropagation,[27] partial derivatives of activation functions with respect to unknown learning parameters are required. It is important to note that some trainable activation functions can also be replaced by simpler multilayer feed-forward subnetworks with
constrained parameters and classical fixed-shape activation functions [21]. The ability to replace a trainable activation function with a simpler sub-neural network highlights a deep connection between the choice of activation functions and performance of neural networks. As such, pre-setting the best possible trainable activation function parameters or fine-tuning the experimental settings [28] such as data preprocessing methods, gradient and weight clipping [12], optimizers [29, 30, 31, 32, 33], regularization methods such as \(L_{1}\), \(L_{2}\) and drop out [34], batch normalization [35], learning rate scheduling [28, 36], (mini-)batch size, or network design [37] variables such as depth (number of layers) and width (number of neurons per layer) of the neural network as well as weight initialization methods [38, 39, 16] becomes an important but challenging task. Several strategies such as neural architecture search [40] and network design space design [41] have been proposed to assist the automation of the network design [37] process but they have to deal with an insurmountable computational cost barrier for practical applications.
In this manuscript, we take a theoretical neuroscientific standpoint [3] towards activation functions by emphasizing the existing connections among them from a mathematical perspective. As such, we resort to the expressive power of rational functions as well as higher transcendental special functions of fractional calculus to propose a unified gated representation of activation functions. The presented functional form is conformant with the outcome of a semi-automated search, performed by Ramachandran _et al._[42], in order to find the optimal functional form of activation functions over a pre-selected set of functions. The unification of activation functions offers several significant benefits: The unified form requires fewer lines of code to be implemented and leads to less confusion in dealing with a wide variety of empirical guidelines on activation functions because individual activation functions correspond to special parameter sets in a single functional representation. The derivatives of the proposed functional form can also be expressed in terms of its constituent special functions, making it a suitable choice for an efficient implementation of backpropagation algorithms for training ANNs. Finally, the proposed functional form can be adopted as a fixed-shape or trainable activation function or both when training neural networks. In other words, one can access different activation functions or interpolate between them by fixing or varying a set of parameters in the gated functional, respectively.
The manuscript is organized as follows: In Sec. II, we introduce Mittag-Leffler functions of one- and two-parameters and discuss their important analytical and numerical properties. Next, we use Mittag-Leffler functions to create a gated representation that can unify a set of most commonly used activation functions. Section III delineates the computational details of our experiments presented in Sec. IV, where we provide numerical evidence for the efficiency and accuracy of the proposed functional form. Concluding remarks and future directions are presented in Sec. V.
## II Theory
### Mittag-Leffler functions of one- and two-parameters
Mittag-Leffler functions, sometimes referred to as "_the queen of functions in fractional calculus_" [43, 44] are one of the most important higher transcendental functions that play a fundamental role in fractional calculus [45, 46, 47]. The interested reader is referred to Refs. [48, 49, 46] for a survey of scientific and engineering applications. The one-parameter Mittag-Leffler function is defined as [50, 49]
\[E_{\alpha}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+1)},\qquad \alpha\in\mathbb{C}, \tag{1}\]
where \(\Gamma(\cdot)\) stands for the Euler Gamma function [51, 52] and \(\mathbb{C}\) denotes the set of complex numbers. For all values of \(\mathrm{Re}(\alpha)>0\), the series in Eq. 1 converges everywhere in the complex plane and the one-parameter Mittag-Leffler function becomes an entire function of the complex variable \(z\).[49] However, when \(\mathrm{Re}(\alpha)<0\), the series in Eq. 1 diverges everywhere on \(\mathbb{C}\setminus\{0\}\). As \(\alpha\to 0^{+}\), the Mittag-Leffler function can be expressed as [49]
\[E_{0}(\pm z)=\frac{1}{1\mp z},\qquad|z|<1. \tag{2}\]
Although the Mittag-Leffler series for \(\alpha=0\) has a finite radius of convergence, the restriction in Eq. 2 can be lifted and the asymptotic geometric-series form can be adopted as part of the definition of the Mittag-Leffler function for \(\alpha=0\).[53] This definition appears to match the implementation of Mittag-Leffler functions in Mathematica 13.2.[54] Note that for \(x>0\) and \(0\leq\alpha\leq 1\), the one-parameter Mittag-Leffler function with negative arguments, \(E_{\alpha}(-x)\), is a completely monotonic [55] function with no real zeros.[49] The two-parameter Mittag-Leffler function can be similarly defined as
\[E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+\beta)}, \qquad\mathrm{where}\quad\mathrm{Re}(\alpha)>0,\quad\mathrm{and}\quad\beta \in\mathbb{C}. \tag{3}\]
The exponential form of the Mittag-Leffler function, \(E_{1}(z)=E_{1,1}(z)\), has no zeros in the complex plane. Nonetheless, for all \(m\in\mathbb{N}\), where \(\mathbb{N}\) is the set of natural numbers, \(E_{1,-m}\) has a single zero of order \(m+1\), located at \(z=0\). All zeros of \(E_{2}(z)\) are simple and can be found on the negative real semi-axis. For a more detailed discussion of the distribution of zeros and the asymptotic properties of Mittag-Leffler functions, see Ref. [49].
Parallel to the study of analytic properties, the realization of accurate and efficient numerical methods for calculating Mittag-Leffler functions is still an open and active area of research.[49, 56] In particular, the existence of free, open-source and accessible software for computing Mittag-Leffler functions is key to their usability in practical applications. We must note that the code base and programmatic details of recent updates to the implementation of Mittag-Leffler functions in Mathematica are not publicly available for further analysis in this manuscript. Nevertheless, several open-source modules for the numerical computation of Mittag-Leffler functions are available in the public domain. Gorenflo _et al.[57]_ have proposed an algorithm for computing two-parameter Mittag-Leffler functions that is suitable for use in Mathematica. Podlubny's algorithm is implemented in MATLAB and allows the computation of Mittag-Leffler functions with arbitrary accuracy.[58] Garrappa has proposed an efficient method for calculating one- and two-parameter Mittag-Leffler functions using hyperbolic path integral transform and quadrature.[59] Both MATLAB [60] and Python[61] implementations of Garrappa's algorithm are also available in the public domain. Zeng and Chen have also constructed global Padé approximations for the special cases of parameters \(0<\alpha\leq 1\) and \(\beta\geq\alpha\), based on the complete monotonicity of \(E_{\alpha,\beta}(-x)\).[62] Another powerful feature of Mittag-Leffler functions is their relation to other higher transcendental special functions such as hypergeometric, Wright, Meijer \(G\) and Fox \(H\)-functions,[49, 51, 52, 63, 64] which allows for more general analytic manipulations and efficient numerical computations. For example, Mathematica automatically simplifies the one-parameter Mittag-Leffler functions with non-negative (half-)integer \(\alpha\) to (sums of) generalized hypergeometric functions.[49, 65] In addition to the algorithm complexity and implementation specifics, the total number of activation functions in a neural network can strongly affect its runtime on computing accelerators such as graphics processing units (GPUs). The neural network architecture[37] is also a major factor in determining the computational cost.[41] We will consider the impact of these factors in our numerical experiments in Sec. IV.
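As a concrete illustration of the series definitions above, the following Python sketch evaluates \(E_{\alpha,\beta}(z)\) by naive truncation of Eq. 3, with the \(\alpha=0\) geometric form of Eq. 2 special-cased. It is adequate only for moderate \(|z|\); the algorithms cited above should be preferred for large arguments or high accuracy.

```python
import numpy as np
from scipy.special import gamma as Gamma

def mittag_leffler(z, alpha, beta=1.0, kmax=100):
    """Truncated power series for E_{alpha,beta}(z); see Eqs. 1-3."""
    z = np.asarray(z, dtype=float)
    if alpha == 0.0 and beta == 1.0:
        return 1.0 / (1.0 - z)  # geometric form of Eq. 2, restriction lifted
    k = np.arange(kmax)
    return np.sum(z[..., None] ** k / Gamma(alpha * k + beta), axis=-1)

# Sanity checks: E_1(z) = e^z and E_2(z^2) = cosh(z)
print(mittag_leffler(1.0, 1.0), np.exp(1.0))
print(mittag_leffler(4.0, 2.0), np.cosh(2.0))
```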
### Gated representation of activation functions
In order to unify the most common classical fixed-shape activation functions, listed in a recent survey,[21] we propose the following functional form
\[x\Phi\bigg{[}x\bigg{|}\gamma\begin{array}{c}\alpha_{1}&\beta_{1}&f\\ \alpha_{2}&\beta_{2}&g\end{array}\bigg{]}:=x\bigg{\{}x^{\gamma-1}\left(\frac{E _{\alpha_{1},\beta_{1}}\left[f(x)\right]}{E_{\alpha_{2},\beta_{2}}\left[g(x) \right]}\right)\bigg{\}}, \tag{4}\]
where the gate function, \(\Phi[f(x),g(x)]\), is a rational binary composition of two "well-behaved" functional mappings \(f,g:\mathbb{R}\rightarrow\mathbb{R}\) and is responsible for generating a (non-)linear neural response. Here, \(\mathbb{R}\) denotes the set of real numbers. The gated representation in Eq. 4 incorporates the functional form obtained from an automated search over a set of pre-selected functions [42] and is consistent with the functional form of popular activation functions such as ReLU and Swish. Throughout this manuscript, we restrict ourselves to \(\gamma\geq 0\), \(\text{Re}(\alpha)>0\) and \(\beta\in\mathbb{R}\).
Table 1 presents a shortlist of popular fixed-shape classical activation functions that are accessible to the proposed gated representation as special cases via different sets of parameters.
The ReLU activation function is commonly represented in a piecewise functional form as \(\max(0,x)\). In order to mimic this behavior, the gate function in Eq. 4 should reduce to identity for \(x>0\) and zero otherwise. The former condition is satisfied when \(\gamma=1\) and \(E_{\alpha_{1},\beta_{1}}\left[f(x)\right]/E_{\alpha_{2},\beta_{2}}\left[g(x) \right]=1\), for which \(\Phi\bigg{[}x\bigg{|}1\begin{smallmatrix}\alpha&\beta&f\\ \alpha&\beta&f\end{smallmatrix}\bigg{]}=1\) is a trivial case. Plots of ReLU activation function and its gated representation are shown in Fig. 1(a). Note that the y-axis label, \(a(x)\), collectively refers to activation functions regardless of their functional form. The gate functional for sigmoid activation function, \(\sigma(x)\), takes \(f(x)=-e^{-x}\) and \(g(x)=0\) to yield
\[x\Phi\bigg{[}x\bigg{|}0\begin{array}{ccc}0&1&-e^{-x}\\ 1&1&0\end{array}\bigg{]}=\frac{1}{1+e^{-x}}=\sigma(x). \tag{5}\]
Plots of the sigmoid activation function and its gated representation are shown in Fig. 1(b). The sigmoid gate function in Eq. 5 can be morphed into that of Swish by setting \(\gamma=1\) and \(f(x)=-e^{-cx}\) to obtain
\[x\Phi\bigg{[}x\bigg{|}1\begin{array}{ccc}0&1&-e^{-cx}\\ 1&1&0\end{array}\bigg{]}=x\,\sigma(cx), \tag{6}\]
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Activation Function & Argument & \(\gamma\) & \(\alpha_{1}\) & \(\alpha_{2}\) & \(\beta_{1}\) & \(\beta_{2}\) & \(f(x)\) & \(g(x)\) \\ \hline ReLU\({}^{a}\) & \(x\) & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\ Sigmoid & \(x\) & 0 & 0 & 1 & 1 & 1 & \(-e^{-x}\) & 0 \\ Swish & \(x\) & 1 & 0 & 1 & 1 & 1 & \(-e^{-cx}\) & 0 \\ Softsign & \(x\) & 1 & 0 & 1 & 1 & 1 & \(-|x|\) & 0 \\ Hyperbolic Tangent & \(x\) & 1 & 2 & 2 & 2 & 1 & \(x^{2}\) & \(x^{2}\) \\ Mish & \(\log(1+e^{x})\) & 2 & 2 & 2 & 2 & 1 & \(x^{2}\) & \(x^{2}\) \\ Bipolar Sigmoid\({}^{b}\) & \(x\) & 0 & 0 & 0 & 1 & 1 & \(-e^{-x}\) & \(e^{-x}\) \\ & \(x/2\) & 1 & 2 & 2 & 2 & 1 & \(x^{2}\) & \(x^{2}\) \\ GELU & \(x\) & 1 & 1/2 & 1 & 1 & 1 & \(x/\sqrt{2}\) & \(x^{2}/2\) \\ \hline \hline \end{tabular} \({}^{a}\) This parameter set applies for \(x\geq 0\); otherwise \(\Phi=0\).
\({}^{b}\) At least two representations exist for the bipolar sigmoid.
\end{table}
Table 1: Special cases of the gate function \(\Phi\) in Eq. 4
Figure 1: Plots of built-in and gated representation of various activation functions
where \(c\) is a trainable parameter. For \(c=1\), the resulting activation function in Eq. 6 is referred to as Swish-1.[42] Plots of Swish-1 activation function and its gated variant are shown in Fig. 1(c). Setting \(f(x)=-|x|\) in the gate function, one can convert Swish into Softsign, defined as
\[x\Phi\bigg{[}x\bigg{|}1\begin{array}{ccc}0&1&-|x|\\ 1&1&0\end{array}\bigg{]}=\frac{x}{1+|x|}. \tag{7}\]
Plots of Softsign activation function and its gated representation are illustrated in Fig. 1(d). The gate functional for the hyperbolic tangent activation function takes \(f(x)=g(x)=x^{2}\) to yield
\[x\Phi\bigg{[}x\bigg{|}1\begin{array}{ccc}2&2&x^{2}\\ 2&1&x^{2}\end{array}\bigg{]}=\tanh(x). \tag{8}\]
Plots of hyperbolic tangent activation function and its gated representation are presented in Fig. 1(f). As mentioned in Sec. I, the gated functional form in Eq. 4 arms us with significant variational flexibility. In addition to accessing a set of fixed-shape activation functions via setting the gate function parameters, we can also interpolate between different functional forms by varying those parameters over a finite domain. Figure 2 illustrates an example where by fixing all parameters in the gated representation of hyperbolic tangent except \(\beta_{2}\), one can smoothly interpolate between linear (\(\beta_{2}=2\)) and hyperbolic tangent (\(\beta_{2}=1\)) activation functions. Thus, it is possible to tune the saturation behavior of gated representation of saturating functions such as hyperbolic tangent and mitigate their vanishing/exploding gradient problem in a controlled fashion.[8; 9; 38] Furthermore, one can turn \(\beta_{2}\) (or in principle, any other parameter) into a trainable parameter and allow the hosting neural network to learn its optimal value from the training data.
Our unification strategy can go beyond the aforementioned list of fixed-shape or trainable activation functions. For instance, Mish[66] can be obtained by passing Softplus, \(\log(1+e^{x})\), to hyperbolic tangent gate function as an argument and setting \(\gamma=2\) to get
\[x\Phi\bigg{[}\log(1+e^{x})\bigg{|}2\begin{array}{ccc}2&2&x^{2}\\ 2&1&x^{2}\end{array}\bigg{]}=x\,\tanh\big{[}\log(1+e^{x})\big{]}. \tag{9}\]
Plots of Mish and its gated representation are illustrated in Fig. 1(e). The bipolar sigmoid function can also be expressed by using the hyperbolic tangent gate function and passing a scaled linear function as an argument to get
\[x\Phi\bigg{[}\frac{x}{2}\bigg{|}1\begin{array}{ccc}2&2&x^{2}\\ 2&1&x^{2}\end{array}\bigg{]}=\tanh\left(\frac{x}{2}\right). \tag{10}\]
Equivalently, one can also express the bipolar sigmoid function with a different set of parameters and arguments in the gate function as
\[x\Phi\bigg{[}x\bigg{|}0\begin{array}{ccc}0&1&-e^{-x}\\ 0&1&e^{-x}\end{array}\bigg{]}=\frac{1-e^{-x}}{1+e^{-x}}. \tag{11}\]
Setting \(f(x)=\frac{x}{\sqrt{2}}\) and \(g(x)=\frac{x^{2}}{2}\), Gaussian error linear units (GELUs) can also be written as
\[\frac{x}{2}\Phi\bigg{[}x\bigg{|}1\begin{array}{ccc}\frac{1}{2}&1&\frac{x}{ \sqrt{2}}\\ 1&1&\frac{x^{2}}{2}\end{array}\bigg{]}=\frac{x}{2}\bigg{[}1+\mathrm{erf}\left( \frac{x}{\sqrt{2}}\right)\bigg{]}, \tag{12}\]
where \(\mathrm{erf}(\cdot)\) is the error function.[51, 52]
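As a numerical sanity check of Eq. 4 and Table 1, the gated form can be evaluated directly and compared against a built-in activation function. The sketch below is written in Python, reusing the `mittag_leffler` series helper sketched earlier; it is illustrative only and is not the Mathematica code used in the experiments that follow.

```python
import numpy as np

def gated_activation(x, gam, a1, b1, f, a2, b2, g):
    """x * Phi of Eq. 4, simplified via x * x^(gam - 1) = x^gam."""
    x = np.asarray(x, dtype=float)
    return (np.power(x, gam)
            * mittag_leffler(f(x), a1, b1) / mittag_leffler(g(x), a2, b2))

# Hyperbolic-tangent row of Table 1:
# gamma = 1, (a1, b1) = (2, 2), (a2, b2) = (2, 1), f(x) = g(x) = x^2
x = np.linspace(-3.0, 3.0, 13)
approx = gated_activation(x, 1.0, 2.0, 2.0, lambda z: z ** 2,
                          2.0, 1.0, lambda z: z ** 2)
print(np.max(np.abs(approx - np.tanh(x))))  # agrees to ~1e-15
```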
## III Computational details
In order to illustrate the efficiency and performance of neural networks armed with gated activation functions, we train the classical LeNet-5 (Fig. 3) neural network[67] on the Modified National Institute of Standards and Technology (MNIST)[68] and CIFAR-10[69] image classification datasets. The MNIST dataset consists of 60,000 training and 10,000 testing grayscale images of hand-written digits \((0,1,2,\ldots,9)\) that are normalized and centered to a fixed 28\(\times\)28 size. The CIFAR-10 dataset contains 50,000 training and 10,000 test images from 10 object classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck); each data point is a 32\(\times\)32 RGB image. Both datasets are pulled from the Wolfram Data Repository[70, 71] and all training experiments are performed using Wolfram Mathematica 13.2.[54] Individual training sessions (excluding that of the baseline) involve replacing all three element-wise ReLU activation layers in the LeNet-5 architecture with their (gated) counterparts from Table 1. Bipolar sigmoid and GELU are excluded from our study because their host networks fail to converge under the selected default settings in Mathematica and would require further modifications to avoid this divergence. The adaptive moment estimation (Adam) stochastic gradient descent optimizer is used for training, with the stability parameter and the first and second moment exponential decay rates set to \(\epsilon=10^{-5}\), \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\), respectively. All network parameters are randomly initialized using "1234" as the seed in order to ensure that each individual training session in the ensemble starts with the same set of initial parameters. We ran each experiment corresponding to the MNIST and CIFAR-10 datasets for 10 and 20 epochs, respectively, with a fixed batch size of 64. Each training session is repeated twenty times in single precision (32-bit floating-point numbers) and the averaged results are rounded to four significant digits, as shown in Tables 2 and 3.
Two computational platforms have been deployed in this study: a single personal laptop armed with an NVIDIA GeForce GTX 1650 GPU, and a data center unit equipped with an NVIDIA A100 80GB PCIe GPU. Results from the former setup can be found in the Supporting Information. The trajectory of all training instances and a simple Mathematica code snippet for reproducing the results can be found in Ref. [72].
## IV Results and discussion
We analyze the performance of the LeNet-5 neural network architecture by replacing the element-wise layers of ReLU activation functions with their counterparts from Table 1 as well as their gated representations. In order to quantify the effect of various activation functions on the performance of the LeNet-5 classifier, we mainly focus on two performance metrics: loss and accuracy. The training loss is measured using the multi-class cross-entropy, which is defined as[73, 74]
\[\mathscr{L}=-\sum_{i=1}^{N}\sum_{k=1}^{K}y_{i,k}\ln(\hat{y}_{i,k}). \tag{13}\]
For each data point \(i\) in a dataset of size \(N\), the cross-entropy measures how well the estimated probabilities, \(\hat{y}_{i,k}\) for each class \(k\), where \(k\in\{1,2,3,\ldots,K\}\), match the target class labels, \(y_{i,k}\). Compared with other loss functions, cross-entropy can also improve the convergence rate of the optimization process in our case by more aggressively penalizing incorrect predictions and generating larger gradients.[74] Accuracy is defined as the fraction of predictions for which the classifier is correct.[73] Since accuracy is not an appropriate metric for imbalanced datasets,[74] we also include the macro-averaged precision, recall, and F1 scores in the Supporting Information, even though the class distributions in the MNIST and CIFAR-10 datasets are balanced. Table 2 shows the training and validation performance results, where those of the LeNet-5 neural network with ReLU element-wise layers are taken as the baseline. For each activation function, there are two entries: the first refers to the results of the built-in activation functions in Mathematica 13.2, and the second corresponds to those of the gated activation functions.
Table 2 reveals that the ensemble average validation accuracy of the LeNet-5 classifier on the MNIST test set is not significantly sensitive to the choice of activation functions in the element-wise layers. In particular, the validation accuracy of the LeNet-5 classifier armed with sigmoid activation functions is slightly smaller than that of the baseline neural network with ReLUs. Furthermore, choosing other activation functions such as Softsign and Mish further deteriorates the
Figure 3: LeNet-5 neural network architecture
corresponding average validation accuracies compared with those of ReLUs in the baseline LeNet-5 architecture. Plots of training/validation loss and accuracy versus epochs can be found in the Supporting Information.
Our main interest in Table 2 is in the average total wall-clock time spent on training LeNet-5 on the MNIST dataset using an NVIDIA A100 80GB PCIe GPU. The average timings reveal that the added cost of calculating one- or two-parameter Mittag-Leffler functions in the gated representation of activation functions is small compared with that of their built-in variants implemented in the Mathematica 13.2 program package.[75] Specifically, the largest measured time gap is observed between the built-in and gated representations of Mish, which mainly stems from the overhead of calculating Softplus and passing it as an argument to two-parameter Mittag-Leffler functions for each neural response in the element-wise activation layers. On the other hand, the computational time gap between the built-in and gated representations of Softsign is very small.
Table 3 presents the performance results for training/validation of the LeNet-5 network on the CIFAR-10 dataset. All results correspond to the average of 20 individual training sessions, each running for 20 epochs. A comparison of the average accuracy values in Table 3 with those in Table 2 reflects the more intricate nature of the CIFAR-10 dataset, which requires deeper neural networks or more advanced architectural designs and training strategies. The interested reader is referred to a study on "Convolutional Deep Belief Neural Networks on CIFAR-10" [76] and to the ImageNet competition for a chronological survey of efforts on this topic.[77] Table 3 reveals that the validation accuracy of the LeNet-5 neural network can be improved by replacing ReLUs with any other activation function considered in this study, with the exception of sigmoid and hyperbolic tangent. In particular, replacing ReLUs with Swish-1 or Mish yields the largest improvements in validation accuracy of \(\approx\) 1.2 %.
The average timings for training the LeNet-5 neural network on the CIFAR-10 dataset show trends similar to those presented in Table 2. Specifically, the time difference between computations pertinent to the built-in and gated representations of the Mish activation function peaks at about 30 seconds. Note that the timings in Table 3 are roughly two times larger in magnitude than their counter
\begin{table}
\begin{tabular}{l c c c} \hline \hline Activation Function\({}^{c}\) & Loss & Accuracy (\%) & Wall-Clock Time (s) \\ \hline Sigmoid & 0.0270 (0.0290) & 99.13 (99.13) & 21.93 \\ & 0.0270 (0.0290) & 99.13 (99.13) & 30.39 \\ Swish-1 & 0.0411 (0.0295) & 98.73 (98.98) & 28.38 \\ & 0.0422 (0.0295) & 98.69 (98.99) & 30.60 \\ Softsign & 0.0138 (0.0324) & 99.56 (99.00) & 28.39 \\ & 0.0137 (0.0315) & 99.56 (99.03) & 28.61 \\ tanh & 0.0146 (0.0329) & 99.52 (98.95) & 20.77 \\ & 0.0126 (0.0325) & 99.58 (99.01) & 29.72 \\ Mish & 0.0279 (0.0299) & 99.14 (99.07) & 24.38 \\ & 0.0339 (0.0304) & 98.96 (99.03) & 40.73 \\ \hline ReLU\({}^{d}\) & 0.0144 (0.0292) & 99.54 (99.16) & 23.78 \\ \hline \hline \end{tabular}
* All results are ensemble averages over 20 independent training and testing experiments.
* All calculations are performed using a NVIDIA A100 80GB PCIe GPU.
* The test results are given in parentheses. The first and second rows in each activation function entry correspond to the built-in and gated representations, respectively.
* The LeNet-5 neural network architecture with ReLU activation functions is taken as the base architecture.
\end{table}
Table 2: The best performance metrics and timings pertinent to training and testing of LeNet neural network on MNIST dataset with various activation functions\({}^{a,b}\)
parts in Table 2 due to the adopted number of training epochs (20 for CIFAR-10 compared with 10 for MNIST). The average timings reported in both tables suggest that a unified implementation of the most popular activation functions using Mittag-Leffler functions is possible at only a small and affordable additional computational cost compared with their built-in individual implementations. The gap between the built-in implementations of activation functions and their gated representations can be further reduced as more efficient algorithms and implementations of special functions such as the Mittag-Leffler function become available. Comparing the aforementioned timings in Tables 2 and 3, obtained using an NVIDIA A100 80GB PCIe GPU, with those in the Supporting Information, computed on an NVIDIA GeForce GTX 1650 GPU, demonstrates that more powerful computing accelerators are yet another asset to benefit from when training ANNs with a large number of gated activation functions.
## V Conclusion and future work
In this manuscript, we have presented a unified representation of some of the most popular neural network activation functions. The proposed functional form not only sheds light on direct analytical connections between several well-established activation functions in the literature, but also allows for interpolating between different functional forms through varying the gate function parameters. Furthermore, the gated functional form and its derivatives can both be expressed in terms of Mittag-Leffler functions which makes them a suitable candidate for training neural networks using backpropagation algorithms. A unified representation of activation functions can also lead to large savings in programming efforts compared with what is otherwise required for individual implementations of activation functions in popular machine learning frameworks via inheritance and/or customized classes. Through training the classic LeNet-5 neural network on standard benchmark datasets such as MNIST and CIFAR-10, we have established the possibility of implementing an efficient unified representation of activation functions without sacrificing the accu
\begin{table}
\begin{tabular}{l c c c} \hline \hline Activation Function\({}^{c}\) & Loss & Accuracy (\%) & Wall-Clock Time (s) \\ \hline \multirow{2}{*}{Sigmoid} & 0.7353 (1.0151) & 74.79 (65.39) & 44.26 \\ & 0.7353 (1.0152) & 74.78 (65.38) & 54.97 \\ \multirow{2}{*}{Swish-1} & 0.7323 (0.9016) & 74.55 (69.39) & 54.44 \\ & 0.7322 (0.9018) & 74.55 (69.40) & 60.77 \\ \multirow{2}{*}{Softsign} & 0.6380 (0.9297) & 77.89 (68.64) & 53.09 \\ & 0.6280 (0.9313) & 78.26 (68.56) & 54.52 \\ \multirow{2}{*}{tanh} & 0.8129 (0.9766) & 71.63 (66.79) & 45.58 \\ & 0.8015 (0.9738) & 72.05 (66.90) & 56.09 \\ \multirow{2}{*}{Mish} & 0.6842 (0.8914) & 76.18 (69.40) & 48.37 \\ & 0.6841 (0.8913) & 76.18 (69.44) & 78.09 \\ \hline ReLU\({}^{d}\) & 0.7299 (0.9279) & 74.57 (68.24) & 45.08 \\ \hline \hline \end{tabular}
* All results are ensemble averages over 20 independent training and testing experiments.
* All calculations are performed using a NVIDIA A100 80GB PCIe GPU.
* The test results are given in parentheses. The first and second rows in each activation function entry correspond to the built-in and gated representations, respectively.
* The LeNet-5 neural network architecture with ReLU activation functions is taken as the base architecture.
\end{table}
Table 3: The best performance metrics and timings pertinent to training and testing of the LeNet-5 neural network on CIFAR-10 dataset with various activation functions\({}^{a,b}\)
racy. The analytic properties of one- and two-parameter Mittag-Leffler functions and their relations to other generalized and special functions [52] such as hypergeometric and Wright functions [44] also open the door to a largely unexplored area of research in fractional ANNs [78] and backpropagation algorithms [79], which is currently under investigation by us.
## Appendix: General formula for derivatives of the Mittag-Leffler function
The differentials of the one- and two-parameter Mittag-Leffler functions can be expressed in terms of Mittag-Leffler functions themselves. This closure property is computationally beneficial for an efficient implementation of gradient descent-based backpropagation algorithms for training ANNs. For a more in-depth discussion of the differential and recurrence relations of Mittag-Leffler functions of one, two and three parameters, see Refs. [49] and [80].
Let \(p\in\mathbb{N}\), where \(\mathbb{N}\) denotes the set of natural numbers. Then, the general derivatives of one-parameter Mittag-Leffler function can be given as
\[\begin{split}\frac{d^{p}}{dz^{p}}E_{p}(z^{p})&=E_{ p}(z^{p}),\qquad\text{and}\\ \frac{d^{p}}{dz^{p}}E_{p/q}(z^{p/q})&=E_{p/q}(z^{p/ q})+\sum_{k=1}^{q-1}\frac{z^{-kp/q}}{\Gamma(1-kp/q)}\qquad q=2,3,\dots.\end{split} \tag{14}\]
Assuming \(\alpha>0\) and \(\beta\in\mathbb{R}\), the first-derivative of the two-parameter Mittag-Leffler functions can be written as a sum of two instances of two-parameter Mittag-Leffler functions as [80]
\[\frac{d}{dz}E_{\alpha,\beta}(z)=\frac{E_{\alpha,\alpha+\beta-1}(z)+(1-\beta)E _{\alpha,\alpha+\beta}(z)}{\alpha}. \tag{15}\]
In general, one can write
\[\frac{d^{m}}{dz^{m}}E_{\alpha,\beta}(z)=\frac{1}{\alpha^{m}}\sum_{k=0}^{m}c_{ k}^{(m)}E_{\alpha,\alpha m+\beta-k}(z),\qquad m\in\mathbb{N}, \tag{16}\]
where \(c_{0}^{(0)}=1\) and the remaining coefficients for \(k=1,2,\dots\) can be computed using the following recurrence relation
\[c_{k}^{(m)}=\begin{cases}\left[1-\beta-\alpha(m-1)\right]c_{0}^{(m-1)},&k=0, \\ c_{k-1}^{(m-1)}+\left[1-\beta-\alpha(m-1)+k\right]c_{k}^{(m-1)},&1\leq k\leq m -1,\\ 1,&k=m.\end{cases} \tag{17}\]
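A direct transcription of the recurrence (17) and the derivative formula (16) into Python, reusing the `mittag_leffler` series helper sketched in Sec. II, might look as follows; it is an illustrative sketch, checked here only against the elementary case \(E_{1,1}(z)=e^{z}\).

```python
import numpy as np

def ml_derivative_coeffs(m, alpha, beta):
    """Coefficients c_k^{(m)} of Eq. 17, built up from c_0^{(0)} = 1."""
    c = [1.0]
    for j in range(1, m + 1):
        new = [0.0] * (j + 1)
        new[0] = (1.0 - beta - alpha * (j - 1)) * c[0]
        for k in range(1, j):
            new[k] = c[k - 1] + (1.0 - beta - alpha * (j - 1) + k) * c[k]
        new[j] = 1.0
        c = new
    return c

def ml_derivative(z, m, alpha, beta):
    """m-th derivative of E_{alpha,beta}(z) via Eq. 16."""
    c = ml_derivative_coeffs(m, alpha, beta)
    return sum(c[k] * mittag_leffler(z, alpha, alpha * m + beta - k)
               for k in range(m + 1)) / alpha ** m

# Check: for alpha = beta = 1, Eq. 15 reduces to d/dz E_{1,1}(z) = e^z
print(ml_derivative(0.5, 1, 1.0, 1.0), np.exp(0.5))
```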
## Acknowledgements
The author would like to thank Dr. Reza Hemmati for proofreading the manuscript, NVIDIA Corporation for the generous Academic Hardware Grant and Virginia Tech for providing an institutional license to Mathematica 13.2. |
2304.04784 | Criticality versus uniformity in deep neural networks | Deep feedforward networks initialized along the edge of chaos exhibit
exponentially superior training ability as quantified by maximum trainable
depth. In this work, we explore the effect of saturation of the tanh activation
function along the edge of chaos. In particular, we determine the line of
uniformity in phase space along which the post-activation distribution has
maximum entropy. This line intersects the edge of chaos, and indicates the
regime beyond which saturation of the activation function begins to impede
training efficiency. Our results suggest that initialization along the edge of
chaos is a necessary but not sufficient condition for optimal trainability. | Aleksandar Bukva, Jurriaan de Gier, Kevin T. Grosvenor, Ro Jefferson, Koenraad Schalm, Eliot Schwander | 2023-04-10T18:00:00Z | http://arxiv.org/abs/2304.04784v1 | # Criticality versus uniformity in deep neural networks
###### Abstract
Deep feedforward networks initialized along the edge of chaos exhibit exponentially superior training ability as quantified by maximum trainable depth. In this work, we explore the effect of saturation of the tanh activation function along the edge of chaos. In particular, we determine the line of uniformity in phase space along which the post-activation distribution has maximum entropy. This line intersects the edge of chaos, and indicates the regime beyond which saturation of the activation function begins to impede training efficiency. Our results suggest that initialization along the edge of chaos is a necessary but not sufficient condition for optimal trainability.
_Introduction._ Over the past decade or so, deep learning has emerged as one of the most powerful tools for processing and analyzing data, and has proven successful on an increasingly wide range of computational challenges. These remarkable feats include highly accurate image classification [1], advanced generative modelling of images [2], natural language processing [3], accurate protein structure predictions [4], and besting humans in a wide range of games [5]. Key to these neural networks' success is the extremely large number of parameters--generally speaking, the _expressivity_ of a neural network increases with depth [6]. Expressivity refers to the range of functions that a network can approximate, with the network being understood as simply a function from the space of inputs to the space of outputs. However, the price we must pay for larger and more powerful networks is that they are more difficult to train; for example, the risk of vanishing or exploding gradients is exacerbated with depth [7]. Hence, an improved understanding of how the network parameters impact trainability is highly valuable, as even small improvements in the initialization of deep neural networks can make intractable problems tractable.
In this work, we study trainability in deep random feedforward neural networks. Such networks are frequently used in the literature due to their analytical tractability: the phase space is two-dimensional and parameterized by the variances of the initial weight and bias distributions: \(\sigma_{w}^{2}\) and \(\sigma_{b}^{2}\).1 This makes them useful models for investigating general features of deep networks. In particular, we will be concerned with the behavior of the pre- and post-activations, in terms of both their distributions as well as the accuracy of the network on a classic image classification task, namely MNIST (numerical digit recognition) and CIFAR-10 (colored images, which we convert to grayscale).
Footnote 1: As is standard in the literature, we restrict to zero-mean networks, as initializing with a small non-zero mean does not qualitatively change our results.
More specifically, we build on previous work [8; 9] which demonstrated the presence of an order-to-chaos phase transition in this class of deep networks. Intuitively, correlations in the input that we wish to learn are exponentially suppressed with depth in the ordered (analogously, low-temperature) phase, and washed-out by noise in the chaotic (high-temperature) phase; these two phases are characterized by vanishing or exploding gradients, respectively. The boundary between these two phases is a critical line called the _edge of chaos_,2 which is a continuous phase transition characterized by a diverging correlation length \(\xi\) for the layer-to-layer two-point function of the neurons. Since the correlation length sets the depth scale at which information can propagate, this theoretically enables networks of arbitrary depth to be trained at criticality (more generally, networks are trainable provided their depth does not exceed the scale set by \(\xi\)). In other words, the deeper the network, the closer one must lie to the edge of chaos; this was demonstrated in [9] along a slice of parameter space at bias variance \(0.05\) and weight variance ranging from \(1\) to \(4\), and subsequently generalized/corroborated in, e.g., [10; 11; 12]
Footnote 2: Technically, this should be called the edge of stability, but we will use edge of chaos synonymously with criticality for consistency with the literature.
Several questions naturally arise from the above work. First, given that the network parameters will evolve under training in order to minimize the specified cost function and, in particular, develop interdependencies, why does the choice of initialization have such a decisive effect on network performance?3 Indeed, it was observed in [12] that the hidden-layer pre-activation distributions (as quantified by their variance) rapidly approach some asymptotic value within \(10\) or fewer layers, and then remain relatively unchanged for arbitrarily many additional layers. We corroborate this fact at the level of the post-activation in fig. 6 of appendix A.
Footnote 3: In other words, why does the network remain near the initialization regime (e.g., the edge of chaos) as it evolves?
Second, what role does the particular distribution of post-activations in a given layer play in determining network performance? For example, the activation function considered in [9] is hyperbolic tangent, which we adopt henceforth. When \(\sigma_{b}^{2}\ll 1\) and \(\sigma_{w}^{2}\lesssim 1\), the pre-activations \(z\) of the hidden layers are approximately Gaussian-distributed with small variance (cf. (8)). In this case, \(\tanh(z)\approx z\), so the network behaves like a linear network. Linear networks are quite restrictive, being incapable of representing functions whose output data are non-linearly separable and cannot be generated by a combination of linearly separable data. In the opposite extreme, for large values of \(\sigma_{w}^{2}\) and \(\sigma_{b}^{2}\), the pre-activation variance becomes so large that the post-activation distribution becomes peaked at \(\pm 1\). In other words, large pre-activation variance saturates the \(\tanh\), causing it to behave like a discrete step-function. One expects this regime to also impair trainability, since the gradients on which the backpropagation algorithm depends become vanishingly small everywhere except near the origin.4 Thus, it seems that one should seek to remain somewhere between these two extremes. Quantifying this is one of the main motivations for the present work.
Footnote 4: Recall that the updates to the weights and biases under gradient descent contain products of the derivatives of the activation functions in all higher layers.
In particular, note that in both the linear and the saturation regimes, one expects the expressibility of the network to be poor. In contrast, between these extremes lies a region in which the post-activation distribution is approximately uniform, and hence we might expect the expressibility of the network to be maximized at this point. To see this, recall that the uniform distribution has maximum entropy, which measures the number of possible states any particular system can have; a step function, in contrast, can only store a single bit of information, and hence has a low entropy of \(\ln 2\). This leads to the conjecture that networks whose internal distributions are approximately uniform, i.e., maximally entropic, have higher expressibility, and hence might enjoy a performance advantage. Of course, given approximately Gaussian pre-activations, the post-activation distribution of \(\tanh\) cannot be exactly uniform, but we can quantify the degree of uniformity via the relative entropy (defined below). In fact, we will show that there is a _line of uniformity_ on the \((\sigma_{w}^{2},\sigma_{b}^{2})\) phase space along which the post-activation distribution is as uniform as possible. This line intersects the aforementioned edge of chaos (see fig. 1), and the relative importance of lying near this line is the primary question we shall explore below.
We shall begin by deriving an expression for the line of uniformity, defined by the condition that the distribution of the final hidden layer minimizes the relative entropy with respect to the uniform distribution. The computation uses many of the same ingredients as [9], and the interested reader is encouraged to turn there for more background. We then examine proximity to this line in relation to the edge of chaos considered in previous works.
We find that for deep networks away from the edge of chaos, the exponential suppression dominates, and no benefit from uniformity is observed. However, along the edge of chaos - where the suppression is only polynomial - we find a relatively sharp fall-off in the post-training accuracy to the right of the line of uniformity. The location of this fall-off depends on the learning rate, since decreasing the learning rate can increase the final accuracy, but at the cost of additional computing time (see fig. 2). This suggests that criticality is a necessary but not sufficient condition for optimal trainability.
This dependence on other hyperparameters illustrates that optimal trainability is not just a matter of final accuracy but also of efficiency, i.e., how quickly the final accuracy is reached. Since computational limits exist, we shall rely on an intuitive notion of efficiency per epochs in addition to accuracy; that is, we consider the accuracy achieved after a fixed number of training epochs. It is conceivable that in the limit of infinite training epochs accuracy differences disappear, so that formally, the configurations are equally good. In a practical sense however, they clearly are not.
Note that there can obviously be very many notions of efficiency depending on which resource(s) one considers most valuable. Here, we are implicitly prioritizing training time, i.e., number of epochs. If one were to put the premium on floating point operations used in training, then one would instead measure efficiency as in [13]. Yet another concept called learning efficiency has to do with how much time it takes to run a learning algorithm and, in particular, how this scales with the size of the input space [14].
Returning to our main question, to isolate the effects of uniformity _away_ from the edge of chaos, we also examine networks which are both shallow (i.e., not yet exponentially suppressed) and narrow (i.e., low expressibility per layer), and confirm that training efficiency, in the sense described above, degrades to the right of the line of uniformity (i.e., away from the origin), though final accuracy need not. In contrast to the edge of chaos, the line of uniformity is not a sharp phase boundary, but it does indicate coarsely the parameter boundary where activation saturation starts to affect training efficiency. This not only establishes the more obvious point that, even in deep random feedforward toy models on the edge of chaos, backpropagation training depends sensitively on activation function choice, as earlier emphasized in [15; 16], but also that for a given activation function choice there are optimal points or regions on the edge of chaos itself.
_The line of uniformity._ We can estimate the location of the line of uniformity by capitalizing on the fact that wide networks, with a large number \(N\) of neurons in each hidden layer, are approximate Gaussian processes. At finite \(N\), the neurons in a given layer are not independent due
to their shared dependence on the neurons in the previous layer. Physically however, the non-Gaussianities that can be seen by marginalizing over the previous layer(s) can be thought of as interactions that are \(1/N\) suppressed [17; 18]. Hence, in the limit \(N\to\infty\), the distribution of pre-activations becomes Gaussian, essentially by the central limit theorem. This greatly simplifies the analysis, and is the reason for the widespread use of such models in previous studies, including [9].5
Footnote 5: One will often see the phrase “mean-field theory” used in place of the central limit theorem in this context; however, as pointed out in [18], this is not technically correct, and mean-field theory does not necessarily correspond to the \(N\to\infty\) limit.
Thus, at large-\(N\), the distribution of pre-activations \(z\) for any hidden layer takes the form
\[p(z;\sigma^{2})=\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{z^{2}}{2\sigma^{2}}}\, \tag{1}\]
where \(\sigma^{2}\) is the variance, and we assume the mean \(\mu=0\) since adding a small finite mean does not qualitatively change our results. If the activation function \(\phi(z)\) is one-to-one and once-differentiable, then the distribution of post-activations \(x\) will be given by
\[p_{\phi}(x;\sigma^{2})=\frac{1}{\sqrt{2\pi}\,\sigma\,\phi^{\prime}\big{(}\phi^ {-1}(x)\big{)}}\,e^{-\frac{\phi^{-1}(x)^{2}}{2\sigma^{2}}}. \tag{2}\]
Concretely, for \(\phi(z)=\tanh(z)\), this yields
\[p_{\phi}(x;\sigma^{2})=\frac{1}{\sqrt{2\pi}\,\sigma(1-x^{2})}\,e^{-\frac{ \mathrm{arctanh}(x)^{2}}{2\sigma^{2}}}\, \tag{3}\]
with \(x\in[-1,1]\). The corresponding variance is given by
\[\sigma_{\phi}^{2}=\int_{-1}^{1}\mathrm{d}x\,x^{2}\,p_{\phi}\big{(}x;\sigma^{2 }\big{)}. \tag{4}\]
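A quick Monte Carlo sanity check of Eqs. (1)-(4), as an illustrative sketch: sample Gaussian pre-activations, push them through \(\tanh\), and compare the empirical statistics against the analytic density.

```python
import numpy as np

rng = np.random.default_rng(1234)
sigma = 0.9
x = np.tanh(rng.normal(0.0, sigma, 1_000_000))  # post-activations

# Analytic density of Eq. (3) on a grid strictly inside (-1, 1)
grid = np.linspace(-0.99, 0.99, 199)
p_phi = (np.exp(-np.arctanh(grid) ** 2 / (2.0 * sigma ** 2))
         / (np.sqrt(2.0 * np.pi) * sigma * (1.0 - grid ** 2)))

# Empirical density and the Monte Carlo estimate of sigma_phi^2, Eq. (4)
hist, edges = np.histogram(x, bins=200, range=(-1.0, 1.0), density=True)
print(np.var(x))
```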
As mentioned above, we quantify the uniformity of the post-activation distribution \(p_{\phi}\) by the relative entropy or Kullback-Leibler divergence with respect to the uniform distribution \(p_{\mathrm{uni}}\),
\[S(p_{\mathrm{uni}}||p_{\phi})=\int_{-1}^{1}\!\mathrm{d}x\;p_{\mathrm{uni}}(x) \ln\frac{p_{\mathrm{uni}}(x)}{p_{\phi}(x)}. \tag{5}\]
Substituting in (3) and \(p_{\mathrm{uni}}=\frac{1}{2}\), this yields
\[S(p_{\mathrm{uni}}||p_{\phi})=\frac{1}{2}\ln(8\pi\sigma^{2})+\frac{\pi^{2}}{2 4\sigma^{2}}-2. \tag{6}\]
This has a minimum at
\[\sigma_{\mathrm{min}}^{2}=\frac{\pi^{2}}{12}\approx 0.822. \tag{7}\]
Therefore, we wish to find the set of points \((\sigma_{\mathrm{e}}^{2},\sigma_{b}^{2})\) at which the variance of the final hidden layer is \(\sigma_{\mathrm{min}}^{2}\); this will define the line of uniformity. To proceed, we use the recursion relation
\[\sigma_{\ell}^{2}=\sigma_{w}^{2}\,\sigma_{\phi,\ell-1}^{2}+\sigma_{b}^{2}\, \tag{8}\]
which follows from the large-\(N\) condition discussed above (i.e., the neurons on any given layer can be treated as i.i.d. random variables). Note that this is exactly the same as eq. (3) of [9], where our \(\sigma_{\ell}^{2}\) is their \(q_{aa}^{\ell}\) and our \(\sigma_{\phi,\ell-1}^{2}\) is the corresponding integral expression.6 This recursion relation ostensibly requires the variance of the first hidden layer, \(\sigma_{1}^{2}\), as an input. However, it turns out that (8) quickly converges to a fixed value \(\sigma_{\star}^{2}\), which (by definition) is a function of \(\sigma_{w}^{2}\) and \(\sigma_{b}^{2}\), but not of \(\sigma_{1}^{2}\):
Footnote 6: Explicitly, the variance can be written as \(\sigma_{\phi}^{2}=\int\mathcal{D}z\,\big{[}\phi(\sigma z)\big{]}^{2}\), where \(\mathcal{D}z=\frac{\mathrm{d}z}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}\) is the standard Gaussian measure.
\[\sigma_{\star}^{2}=\sigma_{w}^{2}\,\sigma_{\phi,\star}^{2}+\sigma_{b}^{2}, \tag{9}\]
where \(\sigma_{\phi,\star}^{2}\) is \(\sigma_{\phi}^{2}\) evaluated at \(\sigma_{\star}^{2}\); see [8] for further discussion of this convergence. In appendix A, we have demonstrated numerically that the corresponding post-activation distribution indeed converges rapidly to one which depends only on the initialization point \((\sigma_{w}^{2},\sigma_{b}^{2})\).
Now, consider a fixed value of \(\sigma_{\star}^{2}\) (and hence also of \(\sigma_{\phi,\star}^{2}\)). Then we can consider (9) as an expression for \(\sigma_{b}^{2}\) as a function of \(\sigma_{w}^{2}\), which defines a line in phase space of the form
\[\sigma_{b}^{2}=\sigma_{\star}^{2}-\sigma_{\phi,\star}^{2}\,\sigma_{w}^{2}. \tag{10}\]
where \(\sigma_{\star}^{2}\) is the \(y\)-intercept, and \(-\sigma_{\phi,\star}^{2}\) is the slope. Since the relative entropy (6) of the final hidden layer is only a function of its variance, the lines of constant \(\sigma_{\star}\) given by (10) are also lines of constant relative entropy. In particular, the line of uniformity (minimum relative entropy) is given by (10) with \(\sigma_{\star}^{2}=\sigma_{\mathrm{min}}^{2}=\frac{\pi^{2}}{12}\), cf. eq. (7). There is no closed-form expression for \(\sigma_{\phi,\mathrm{min}}^{2}\), but we can evaluate (4) numerically to obtain \(\sigma_{\phi,\mathrm{min}}^{2}\approx 0.359\). In summary, the line of uniformity (LOU) is given by
\[\mathrm{LOU}:\quad\sigma_{b}^{2}=\sigma_{\mathrm{min}}^{2}-\sigma_{\phi, \mathrm{min}}^{2}\,\sigma_{w}^{2}, \tag{11}\]
with \(\sigma_{\mathrm{min}}^{2}=\frac{\pi^{2}}{12}\approx 0.822\) and \(\sigma_{\phi,\mathrm{min}}^{2}\approx 0.359\). In the left panel of fig. 1, we present a contour plot of the logarithm of the relative entropy. The line of uniformity is the dashed black line--to the left of it, as one approaches the origin, is the linear regime; and to the right, the activation becomes more and more saturated. For comparison, the edge of chaos is the solid black line.
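The fixed point of the recursion (8) and the line of uniformity (11) are easy to reproduce numerically; the following Gauss-Hermite quadrature sketch (not the original code of [8; 9]) iterates Eq. (8) to convergence and checks that a point on the line (11) indeed yields \(\sigma_{\star}^{2}\approx\pi^{2}/12\).

```python
import numpy as np

xs, ws = np.polynomial.hermite.hermgauss(101)

def gauss_avg(f):
    """E[f(z)] for z ~ N(0,1) via Gauss-Hermite quadrature."""
    return np.dot(ws, f(np.sqrt(2.0) * xs)) / np.sqrt(np.pi)

def sigma_star_sq(sw2, sb2, iters=200, s2=1.0):
    """Iterate the recursion (8) to its fixed point (9)."""
    for _ in range(iters):
        s = np.sqrt(s2)
        s2 = sw2 * gauss_avg(lambda z: np.tanh(s * z) ** 2) + sb2
    return s2

# A point on the line of uniformity (11) with sigma_w^2 = 1:
sw2 = 1.0
sb2 = np.pi ** 2 / 12.0 - 0.359 * sw2
print(sigma_star_sq(sw2, sb2), np.pi ** 2 / 12.0)  # both ~ 0.822
```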
_The edge of chaos._ The method for computing the edge of chaos as a function of \(\sigma_{w}^{2}\) and \(\sigma_{b}^{2}\) is described in
[8; 9]. Once we have \(\sigma_{*}^{2}\), as described previously, then we can define the quantities
\[\chi=\sigma_{w}^{2}\int\mathcal{D}z\left[\phi^{\prime}(\sigma_{*}z)\right]^{2}, \qquad\xi=-\frac{1}{\ln\chi}, \tag{12}\]
where \(\mathcal{D}z\) is the standard Gaussian measure, cf. footnote 6, and \(\xi\) is the correlation length mentioned in the introduction (note that this is denoted \(\xi_{c}\) in [9]).
The meaning of \(\chi\) will be discussed in the next paragraph, while the meaning of \(\xi\) is as follows: we consider two identical copies of the network and feed them slightly different inputs. Then, we can study the correlation (i.e., covariance) between a neuron in one copy and the same neuron in the second copy as a function of the layer. This correlation will decay exponentially for deeper layers with a characteristic length scale, \(\xi\). (Strictly speaking, this is only true in the ordered phase: in the chaotic phase, the quantity \(\xi\) is complex-valued and cannot be interpreted as a correlation length). The edge of chaos is defined as the critical point, where the correlation length \(\xi\) diverges.
As discussed in more detail in [8; 9], \(\chi\) is obtained as the derivative of the aforementioned covariance with respect to that in the previous layer, and probes the stability of the fixed point when the covariance is unity: \(\chi>1\) implies that we approach this point from below (unstable), while \(\chi<1\) implies that we approach this point from above (stable).7 The edge of chaos corresponds to \(\chi=1\), where \(\xi\) diverges.
Footnote 7: See [19] for a pedagogical explanation.
To find the edge of chaos, we can scan over the space of tuples \((\sigma_{w},\sigma_{*})\) to find those which satisfy the condition \(\chi=1\). We then feed these into (8) to find the corresponding value of \(\sigma_{b}\). In this manner, we can find arbitrarily many points on the edge of chaos (EOC). Within some finite range of \(\sigma_{w}^{2}\) values, we can find a good fit to the EOC. In the range \(1\leq\sigma_{w}^{2}\leq 10\), a good polynomial fit is
\[\text{EOC}:\quad\sigma_{b}^{2}=\sum_{n=2}^{9}\frac{c_{n}}{n!}(\sigma_{w}^{2}-1 )^{n}, \tag{13}\]
with fit coefficients
\[\begin{array}{c c|c c}\hline n&c_{n}&n&c_{n}\\ \hline\hline 2&0.0190&6&-1.15\\ 3&0.778&7&0.769\\ 4&-1.07&8&-0.328\\ 5&1.25&9&0.0672\\ \hline\end{array} \tag{14}\]
Of course, we can reduce the number of fit coefficients needed by reducing the range of \(\sigma_{w}^{2}\) values over which we require the fit to be good.
The form of this fit is designed such that it contains the point \((\sigma_{w}^{2},\sigma_{b}^{2})=(1,0)\), and that the edge of chaos has zero slope at this point. We justify these conditions analytically in appendix B. In the right plot in fig. 1, we present a contour plot of \(\chi\). Again, the edge of chaos is drawn as a solid black line and the line of uniformity as a dashed line. The point of intersection of the edge of chaos and line of uniformity is found to be
\[(\sigma_{w}^{2},\sigma_{b}^{2})_{\text{intersect}}=(2.00,0.104). \tag{15}\]
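This scan is easy to reproduce numerically. The sketch below (our illustration, not the original code) parameterizes the EOC by the fixed-point variance \(\sigma_{*}^{2}\): the condition \(\chi=1\) fixes \(\sigma_{w}^{2}\) through (12); the fixed-point form of the recursion (8), \(\sigma_{*}^{2}=\sigma_{w}^{2}\,\sigma_{\phi,*}^{2}+\sigma_{b}^{2}\), then yields \(\sigma_{b}^{2}\); and a root-find against (11) recovers the intersection point (15):

```python
import numpy as np
from scipy.optimize import brentq

# Gauss-Hermite quadrature for E[f(z)] with z ~ N(0, 1)
x, w = np.polynomial.hermite.hermgauss(100)
gauss_expect = lambda f: np.sum(w * f(np.sqrt(2.0) * x)) / np.sqrt(np.pi)
sigma_phi2 = lambda s2: gauss_expect(lambda z: np.tanh(np.sqrt(s2) * z) ** 2)

def eoc_point(sigma_star2):
    """Map a fixed-point variance sigma_*^2 to the EOC point (sigma_w^2, sigma_b^2)."""
    s = np.sqrt(sigma_star2)
    sw2 = 1.0 / gauss_expect(lambda z: np.cosh(s * z) ** -4)  # chi = 1, eq. (12)
    sb2 = sigma_star2 - sw2 * sigma_phi2(sigma_star2)         # fixed point of eq. (8)
    return sw2, sb2

# intersect the EOC with the line of uniformity, eq. (11)
s_min2 = np.pi ** 2 / 12
s_phi_min2 = sigma_phi2(s_min2)

def gap(sigma_star2):
    sw2, sb2 = eoc_point(sigma_star2)
    return sb2 - (s_min2 - s_phi_min2 * sw2)

print(eoc_point(brentq(gap, 0.3, 1.5)))   # ~ (2.00, 0.104), cf. eq. (15)
```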
_The impact of uniformity along the edge of chaos._ To the right of the line of uniformity, neurons begin to saturate the \(\tanh\) activation function, i.e., approach \(\pm 1\). This implies that backpropagation based on gradient descent should be less efficient, and hence networks should reach a lower accuracy in a fixed amount of training time. The
Figure 1: (Left) Contour plot of the logarithm of the relative entropy in the \((\sigma_{w}^{2},\sigma_{b}^{2})\) plane. The dashed line is the line of uniformity—saturation increases to the right of it and linearity increases to the left of it. (Right) Contour plot of \(\chi=e^{-1/\xi}\). The ordered/low-temperature phase is shaded blue, while the chaotic/high-temperature phase is shaded red. In both, the solid black line is the edge of chaos, while the dashed black line is the line of uniformity.
Google Brain collaboration has already established that at the edge of chaos, learning accuracy is enhanced due to polynomial rather than exponential decay of correlations as a function of network depth [9]. Combining the two insights, optimal learning should therefore take place on the edge of chaos near the line of uniformity.
To test this hypothesis, we have performed the MNIST image classification task in networks ranging up to a depth of \(L=100\) hidden layers at various points along the edge of chaos. The resulting learning accuracy is shown in fig. 2. We see that this expectation is partially validated. On the left side of the line of uniformity - but to the right of the linear regime - all points on the edge of chaos are equally good at learning. But beyond a certain point, which lies to the right of the intersection point (15) of the edge of chaos and line of uniformity, the final accuracy decreases. However, this drop-off point is substantially (up to an order of magnitude) displaced to the right of the intersection point, indicating that the line of uniformity is perhaps better thought of as a region rather than a narrow band, and depends on hyperparameters (such as the learning rate) as mentioned above. Nevertheless, for typical learning rates used in the literature of order \(10^{-3}\), such as used in [9], the drop-off point at approximately \(\sigma_{w}^{2}\sim 2.5\) is indeed fairly close to the intersection between the line of uniformity and the edge of chaos at \(\sigma_{w}^{2}=2\).
We repeated this exercise for the CIFAR-10 image classification task, and present the corresponding results in fig. 3. We converted the colored images to grayscale to reduce the input size by a factor of 3. The drop-off in accuracy along the edge of chaos towards larger values of \(\sigma_{w}^{2}\) is still present, though the effect is not as dramatic as it is for MNIST. This is not surprising as CIFAR is a much more difficult task than MNIST and so we expect that the saturation of slightly more or fewer neurons will have a much less decisive effect. We note however that in the regime of extremely small learning rates, where training MNIST becomes highly inefficient, the MNIST and CIFAR results appear similar insofar as neither exhibits the obvious sharp drop-off observed for MNIST at the higher learning rates generally used in practice.
Thus, the line of uniformity is not a sharp boundary, unlike the edge of chaos. This is somewhat inherent in its definition, which selects proximity to the uniform distribution of final hidden layer post-activations as a condition for efficient learning based on the entropic argument given above, but does not specify any particular fall-off behavior. The line of uniformity does, however, give an estimate of where saturation of the activation function should start to affect learning and thereby hinder its efficiency. To summarize: on the left side of the line of uniformity, the distributions are sufficiently narrow that saturation of the tanh activation function does not occur, and all initial weight distributions along the EOC learn equally well. Conversely, on the right side of uniformity, neurons saturate the activation function and hence hamper learning, even along the EOC. This is our main observation. Importantly, we note that the studies by [9] were performed to the left of the point where the line of uniformity crosses the edge of
Figure 2: Accuracies on MNIST for distributions of initial weights along the edge of chaos in a deep (\(L=100\)), wide (\(N=784\)) neural network with tanh activation function, for a range of learning rates, after 30 epochs (left) and 100 epochs (right). We observe a drop-off in accuracy beyond a value \(\sigma_{w}^{2}\) which is up to an order of magnitude larger than the point at which the line of uniformity is crossed. For learning rates of the order typically used in the literature, this point is near the intersection of the LOU and the EOC, but moves to higher values of \(\sigma_{w}^{2}\) for smaller learning rates. When learning rates become extremely small (\(r<10^{-5}\)), learning becomes highly inefficient, and the drop-off less sharp for the training duration considered. Networks were trained via stochastic gradient descent with batch size 64 and momentum 0.8.
chaos and hence at optimal efficiency.
Before moving on to our final set of experiments, we note that the above conclusion is of course specific to saturating activation functions, specifically tanh. This is one motivation for the use of non-saturating activation functions such as ReLU or SWISH, though the unbounded nature of such functions presents its own set of training difficulties. While a similar analysis of uniformity, as quantified by the maximally entropic distribution, for non-saturating activation functions is beyond the scope of this work, a brief inspection of learning efficiency along the EOC for SWISH shows no loss of accuracy in agreement with the absence of saturation effects; see appendix D.8
Footnote 8: For both SWISH and tanh, the edge of chaos is a line of critical initializations through phase space, while for ReLU it is only a single point [17].
_Uniformity away from the EOC._ Thus far, we have examined the impact of uniformity on training efficiency along the edge of chaos. Now, we would like to explore whether the line of uniformity still affords training advantages even for networks initialized far from criticality. In attempting to exhibit this, however, one quickly finds that the edge of chaos represents a far more dominant effect than the line of uniformity. A close inspection of the learning accuracy of deep (\(L=300\)) and wide (\(N=784\)) MNIST learning networks shows that there is no discernible difference in learning accuracy away from the edge of chaos: it is simply poor everywhere (see fig. 7 in appendix C; see also [9]). This can be understood from the form of the correlation functions: away from the edge of chaos, correlations damp exponentially \(\sim e^{-L/\xi}\). For a deep network, this exponential damping will erase any finer difference in accuracy results. Along the edge of chaos, the damping is only polynomial and, therefore, the finer difference remains, as seen in fig. 2. In shallow networks, however, the exponential damping does not have sufficient time to compound, and if the network is also narrow and hence has low expressibility per layer, we can explore the effect of uniformity even away from criticality in such models.
Furthermore, it is common lore that efficient backpropagation needs sufficient gradients, and that such gradients are absent if most of the post-activation functions saturate to a fixed asymptotic value. However, if a sufficient number of weight and bias values are such that there remain trainable paths through a saturated landscape, the model will still learn, even though, distribution-wise, most of the neurons have saturated. Therefore, the inefficiency due to saturation discussed above can be displayed more clearly by choosing narrower networks with smaller \(N\), where we might expect that uniformity - that is, maximally entropic distributions - may afford the most advantage.
The effect of lying near uniformity is therefore strongest in shallow, narrow networks rather than deep, wide networks where the edge of chaos effect dominates. For these small networks, some of the asymptotic analysis above locating the LOU and EOC does not immediately apply, since the network is unable to reach the asymptotic value \(\sigma_{*}^{2}\) of the pre-activation variance.9
Footnote 9: In this sense, we may take “shallow” to mean \(L\leq 5\), since, as fig. 6 shows, the distributions converge to their asymptotic form within about five layers.
Figure 3: Accuracies on CIFAR-10 for distributions of initial weights along the edge of chaos in a deep (\(L=100\)), wide (\(N=1024\)) neural network with tanh activation function, for a range of learning rates, after 30 epochs (left) and 100 epochs (right). The drop-off in accuracy towards higher values of \(\sigma_{w}^{2}\) here is much more gradual than the sharp drop-offs observed for MNIST in fig. 2 (at all but the lowest learning rates, \(r<10^{-5}\)). Networks were trained via stochastic gradient descent with batch size 64 and momentum 0.8.
At the same time, the input variance and mean, \(\sigma_{0}^{2}\) and \(\mu_{0}\), actually _do_ matter in this case and, with this information, we can roughly estimate the location of the line of uniformity. For example, for \(L=1\), we have \(\sigma_{1}^{2}=\sigma_{w}^{2}(\sigma_{0}^{2}+\mu_{0}^{2})+\sigma_{b}^{2}\) and the line of uniformity would be where \(\sigma_{1}^{2}=\sigma_{\rm min}^{2}=\frac{\pi^{2}}{12}\). For MNIST, \(\sigma_{0}^{2}\approx 0.095\) and \(\mu_{0}^{2}\approx 0.017\), so the line of uniformity can be estimated as \(\sigma_{b}^{2}\approx\frac{\pi^{2}}{12}-0.112\sigma_{w}^{2}\). Equivalently, for fixed \(\sigma_{b}^{2}\), this gives a \(\sigma_{w}^{2}\)-threshold of \(\sigma_{w}^{2}\sim 7.35-8.93\sigma_{b}^{2}\) beyond which we expect saturation effects to decrease training efficiency. For \(L=2\), we would iterate the above process once more, passing through the activation function; this gives an estimated threshold of \(\sigma_{w}^{2}\approx 3.5-8.93\sigma_{b}^{2}\).
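These thresholds are easily obtained numerically; the sketch below (ours, with the mean-field recursion assumed as before) solves for the \(\sigma_{w}^{2}\)-threshold at a given depth by iterating the variance map from the quoted MNIST input statistics:

```python
import numpy as np
from scipy.optimize import brentq

x, w = np.polynomial.hermite.hermgauss(100)
gauss_expect = lambda f: np.sum(w * f(np.sqrt(2.0) * x)) / np.sqrt(np.pi)
sigma_phi2 = lambda s2: gauss_expect(lambda z: np.tanh(np.sqrt(s2) * z) ** 2)

MU0_SQ, SIGMA0_SQ = 0.017, 0.095    # MNIST input statistics quoted above
SIGMA_MIN2 = np.pi ** 2 / 12

def lou_threshold(sigma_b2, n_layers):
    """sigma_w^2 at which the layer-L pre-activation variance reaches sigma_min^2."""
    def excess(sw2):
        var = sw2 * (SIGMA0_SQ + MU0_SQ) + sigma_b2    # first hidden layer
        for _ in range(n_layers - 1):                  # iterate the recursion (8)
            var = sw2 * sigma_phi2(var) + sigma_b2
        return var - SIGMA_MIN2
    return brentq(excess, 1e-3, 100.0)

print(lou_threshold(0.0, n_layers=1))   # ~ 7.35
print(lou_threshold(0.0, n_layers=2))   # ~ 3.5
```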
Results for \(L=1\) are shown in fig. 4, and results for \(L=2\) are shown in fig. 5. As predicted, we observe that the accuracy retains a high, approximately constant value up to a \(\sigma_{b}^{2}\)-dependent threshold for \(\sigma_{w}^{2}\), and then decays approximately linearly thereafter. To determine the threshold empirically, we fit the data to a function of the form
\[A_{\rm fit}(\sigma_{w}^{2})=A_{\rm max}-r(\sigma_{w}^{2}-\sigma_{w,\rm thr}^{2} )\,\Theta(\sigma_{w}^{2}-\sigma_{w,\rm thr}^{2}), \tag{16}\]
where \(A_{\rm max}\) is the maximum accuracy, \(\sigma_{w,\rm thr}^{2}\) is the threshold value, \(r\) is the rate of linear decay, and \(\Theta\) is the Heaviside step function. Each accuracy vs. \(\sigma_{w}^{2}\) data point is an average over 20 instantiations of the network and thus comes with its own variance. These propagate into uncertainty bars for the three fit parameters. We plot the threshold for different values of \(\sigma_{b}^{2}\) in fig. 4 for \(L=1\). This qualitatively confirms our expectations, though the empirical value of the threshold is about a factor of 2 greater than the analytical prediction, and the slope about a factor of 8 smaller. However, given that we are applying a large-\(N\) analysis to a relatively narrow network (\(N=8\)), an \(\mathcal{O}(1)\) quantitative discrepancy is reasonable. For \(L=2\) the corresponding results are presented in fig. 5, again showing qualitative agreement. The empirical threshold in this case is about a factor of 4 greater than the theoretical value, and the slope is a factor of 8 smaller.
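In code, this piecewise-linear fit might look as follows; the sketch is illustrative only and substitutes synthetic accuracies (with invented numbers) for the measured ones:

```python
import numpy as np
from scipy.optimize import curve_fit

def a_fit(sw2, a_max, r, sw2_thr):
    """Eq. (16): constant accuracy up to the threshold, then linear decay."""
    return a_max - r * np.clip(sw2 - sw2_thr, 0.0, None)

# synthetic stand-in for the measured mean accuracies and their standard errors
rng = np.random.default_rng(0)
sw2_grid = np.linspace(2.0, 40.0, 25)
acc_mean = a_fit(sw2_grid, 0.92, 0.012, 15.0) + rng.normal(0.0, 0.005, sw2_grid.size)
acc_err = np.full_like(sw2_grid, 0.005)

popt, pcov = curve_fit(a_fit, sw2_grid, acc_mean, p0=[0.9, 0.01, 10.0],
                       sigma=acc_err, absolute_sigma=True)
a_max, r, sw2_thr = popt
fit_errs = np.sqrt(np.diag(pcov))   # propagated uncertainties on the three fit parameters
```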
_Conclusion._ In this work, we establish that for deep random feedforward networks along the edge of chaos, the efficiency of training via stochastic gradient descent still depends on non-saturation of the activation function. Similar points have been made previously in [15; 16], which compared the performance of different activation functions initialized at one point on their respective edges of chaos. However, what we demonstrate for the tanh activation function is that not all points on the edge of chaos are equally efficient at learning. Within a fixed number of training epochs (\(\sim 100\)), activation function saturation eventually impedes learning if we push the weight and bias variances too far to the right of the line of uniformity, defined to be where the final layer post-activation is most uniformly distributed, i.e., maximally entropic. Unlike the edge of chaos, which separates chaotic and ordered outputs, the line of uniformity does not mark an abrupt change in the overall behavior of the network. Rather, it simply indicates roughly the point where the saturation of the activation function begins to impede learning. We demonstrate this for shallow and
Figure 4: For small networks, the learning efficiency exhibits threshold behavior as a function of \(\sigma_{w}^{2}\). Shown are results for MNIST trained on a \(N=8\), \(L=1\) network sampled over 50 network initializations. The inset shows the fits in the threshold region. The bottom figure shows that the location of this threshold in \(\sigma_{w}^{2}\) decreases with increasing \(\sigma_{b}^{2}\) consistent with the trend implied by the line of uniformity threshold. As explained in the text, there is a multiplicative factor involved and the large-\(N\) analysis cannot be straightforwardly transplanted to this small-\(N\) case. The uncertainty bars are propagated from the uncertainties in the accuracy versus \(\sigma_{w}^{2}\) data points.
narrow networks as well, where the exponential damping of neuron correlations away from the edge of chaos becomes much less of a decisive factor in determining training efficiency.
_Acknowledgments._ This research was supported in part by the Dutch Research Council (NWO) project 680-91-116 (_Planckian Dissipation and Quantum Thermalisation: From Black Hole Answers to Strange Metal Questions_) and by the Dutch Research Council (NWO)/Ministry of Education. K.T.G. has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101024967.
## Appendix A Independence of \(\sigma_{*}^{2}\) on \(\sigma_{1}^{2}\)
The exact pre- or post-activation distribution at a given layer obviously does depend on \(\sigma_{1}^{2}\), the pre-activation variance at the first hidden layer. This dependence is generated via the recursion relation (8). However, at the fixed point, the asymptotic distributions do not depend on \(\sigma_{1}^{2}\). Indeed, the relation that the asymptotic pre-activation variance satisfies is eq. (9), which does not depend on \(\sigma_{1}^{2}\) at all. We can demonstrate this fact by plotting the evolution of the post-activation distribution for fixed \(\sigma_{w}^{2}\) and \(\sigma_{b}^{2}\), but for many values of \(\sigma_{1}^{2}\). In fig. 6, we show this for \((\sigma_{w}^{2},\sigma_{b}^{2})=(1.76,0.05)\) for several values of \(\sigma_{1}^{2}\), both less than and greater than \(\sigma_{*}^{2}\), which turns out to be \(\sigma_{*}^{2}\approx 0.57\) in this case. When \(\sigma_{1}^{2}<\sigma_{*}^{2}\), the post-activation distribution starts out narrower and spreads out, whereas when \(\sigma_{1}^{2}>\sigma_{*}^{2}\) it starts out more peaked at \(\pm 1\) and then flattens out.
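The same \(\sigma_{1}^{2}\)-independence is visible at the level of the variance recursion (8) alone; a minimal sketch (ours):

```python
import numpy as np

x, w = np.polynomial.hermite.hermgauss(100)
gauss_expect = lambda f: np.sum(w * f(np.sqrt(2.0) * x)) / np.sqrt(np.pi)
sigma_phi2 = lambda s2: gauss_expect(lambda z: np.tanh(np.sqrt(s2) * z) ** 2)

sw2, sb2 = 1.76, 0.05
for var1 in [0.1, 0.3, 0.57, 1.0, 2.0, 4.0]:    # several first-layer variances sigma_1^2
    var = var1
    for _ in range(10):                          # iterate the recursion (8)
        var = sw2 * sigma_phi2(var) + sb2
    print(f"sigma_1^2 = {var1:4.2f}  ->  converged variance = {var:.3f}")  # all ~ 0.57
```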
## Appendix B Analytic Details of the Fixed Point Computation
In this appendix, we will show that the edge of chaos contains the point \((\sigma_{w}^{2},\sigma_{b}^{2})=(1,0)\) and has zero slope there. At this point, the fixed-point equation (9) reads
\[\sigma_{*}^{2}=\sigma_{\phi,*}^{2}. \tag{B1}\]
The left-hand side is the fixed-point pre-activation variance, whereas the right-hand side is the corresponding post-activation variance. As long as \(|\phi(z)|<|z|\), which is the case for \(\phi(z)=\tanh(z)\) except at \(z=0\), the variance of the post-activation will always be smaller than that of the pre-activation. Therefore, the only solution at this point is \(\sigma_{*}^{2}=\sigma_{\phi,*}^{2}=0\) and thus at this point \(\phi^{\prime}(\sigma_{*}z)=\,\text{sech}^{2}(0)=1\) and \(\chi=1\) or \(\xi=\infty\). Hence, this point is on the edge of chaos.
Now, consider eq. (10) along the edge of chaos rather than along the lines of constant \(\sigma_{*}^{2}\). Let \(\sigma_{w}^{2}\) be our independent parameter along the edge of chaos and take a derivative with respect to it:
\[\frac{\partial\sigma_{b}^{2}}{\partial\sigma_{w}^{2}}=\left(1-\sigma_{w}^{2}\,\frac{\partial\sigma_{\phi,*}^{2}}{\partial\sigma_{*}^{2}}\right)\frac{\partial\sigma_{*}^{2}}{\partial\sigma_{w}^{2}}-\sigma_{\phi,*}^{2}, \tag{B2}\]
Figure 5: Small-network threshold behavior as in fig. 4, for MNIST trained on a network with \(N=8\) and \(L=2\), sampled over 50 initial conditions drawn from \((\sigma_{w}^{2},\sigma_{b}^{2})\). The bottom figure shows that the location of this threshold decreases with \(\sigma_{b}^{2}\) consistent with the trend implied by the line of uniformity.
where we have used the fact that \(\sigma^{2}_{\phi,*}\) depends on \(\sigma^{2}_{w}\) only through its dependence on \(\sigma^{2}_{*}\).
To compute the derivative \(\frac{\partial\sigma_{\phi}^{2}}{\partial\sigma^{2}}\), it is convenient to first rewrite the integral expression for \(\sigma^{2}_{\phi}\) in (4) by changing back to the original pre-activation variable \(z\):
\[\sigma^{2}_{\phi}=\int_{-1}^{1}\mathrm{d}x\,p_{\phi}(x;\sigma^{2})\,x^{2}=\int_{-\infty}^{\infty}\mathrm{d}z\,p(z;\sigma^{2})\,\phi(z)^{2}. \tag{B3}\]
We can easily compute the various derivatives of the pre-activation distribution:
\[\frac{\partial p(z;\sigma^{2})}{\partial\sigma^{2}}=\bigg{(}\frac{z^{2}}{\sigma^{2}}-1\bigg{)}\frac{p(z;\sigma^{2})}{2\sigma^{2}},\qquad\frac{\partial^{2}p(z;\sigma^{2})}{\partial z^{2}}=\bigg{(}\frac{z^{2}}{\sigma^{2}}-1\bigg{)}\frac{p(z;\sigma^{2})}{\sigma^{2}}=2\,\frac{\partial p(z;\sigma^{2})}{\partial\sigma^{2}}. \tag{B4}\]
Therefore, using integration by parts, and the fact that we can ignore boundary terms due to the fast fall-off of the Gaussian, we find
\[\frac{\partial\sigma^{2}_{\phi}}{\partial\sigma^{2}}=\int\mathrm{d}z\,\frac{\partial p(z;\sigma^{2})}{\partial\sigma^{2}}\,\phi(z)^{2}=\frac{1}{2}\int\mathrm{d}z\,\frac{\partial^{2}p(z;\sigma^{2})}{\partial z^{2}}\,\phi(z)^{2}=\int\mathrm{d}z\,p(z;\sigma^{2})\big{(}\phi^{\prime}(z)^{2}+\phi(z)\,\phi^{\prime\prime}(z)\big{)}. \tag{B5}\]
By rescaling the variable to \(\sigma z\), the first integral term above can be written as
\[\int\mathrm{d}z\,p(z;\sigma^{2})\phi^{\prime}(z)^{2}=\int\mathcal{D}z\,\big{[}\phi^{\prime}(\sigma z)\big{]}^{2}. \tag{B6}\]
Note that when this is evaluated at \(\sigma^{2}_{*}\) and multiplied by \(\sigma^{2}_{w}\), we get precisely \(\chi\), as defined in (12). Let us give a name to the remaining integral in (B5) evaluated at \(\sigma^{2}_{*}\). For future convenience, we will put a relative minus sign in the definition below, the reason being that, for \(\phi=\tanh\), the object \(\phi\,\phi^{\prime\prime}\) is _negative_ semi-definite:
\[\tilde{\chi}=-\sigma^{2}_{w}\int\mathrm{d}z\,p(z;\sigma^{2}_{*})\,\phi(z)\,\phi^{\prime\prime}(z)=-\sigma^{2}_{w}\int\mathcal{D}z\,\phi(\sigma_{*}z)\,\phi^{\prime\prime}(\sigma_{*}z). \tag{B7}\]
Figure 6: Layer-to-layer evolution of the post-activation distribution at \((\sigma^{2}_{w},\sigma^{2}_{b})=(1.76,0.05)\) for six different values of the first hidden layer pre-activation variance \(\sigma^{2}_{1}\). The post-activations converge to the asymptotic distribution within about five layers.
Then, (B5) evaluated at \(\sigma_{*}^{2}\) and multiplied by \(\sigma_{w}^{2}\) reads
\[\sigma_{w}^{2}\frac{\partial\sigma_{\phi,*}^{2}}{\partial\sigma_{*}^{2}}=\chi-\tilde{\chi}. \tag{B8}\]
Now, let us define
\[\tilde{\xi}=-\frac{1}{\ln(\chi-\tilde{\chi})}. \tag{B9}\]
This is precisely the object called \(\xi_{q}\) in [9], which is the length scale that controls the exponential decay of information propagation through the neural network from a single input.
Plugging eq. (B8) back into eq. (B2) gives
\[\frac{\partial\sigma_{b}^{2}}{\partial\sigma_{w}^{2}}=(1-\chi+\tilde{\chi})\frac{\partial\sigma_{*}^{2}}{\partial\sigma_{w}^{2}}-\sigma_{\phi,*}^{2}. \tag{B10}\]
Along the edge of chaos, \(\chi=1\), and so
\[\frac{\partial\sigma_{b}^{2}}{\partial\sigma_{w}^{2}}=\tilde{\chi}\frac{\partial\sigma_{*}^{2}}{\partial\sigma_{w}^{2}}-\sigma_{\phi,*}^{2}, \tag{B11}\]
Now, we can establish a simple bound on \(\tilde{\chi}\) by virtue of the fact that \(|\phi(z)|\leq|z|\), for \(\phi=\tanh\). To do this, let us first rewrite \(\tilde{\chi}\) using the identity
\[\phi^{\prime\prime}(z)=-2\tanh(z)\,\text{sech}^{2}(z)=-2\,\phi(z)\,\phi^{\prime}(z). \tag{B12}\]
Therefore,
\[\phi(z)\,\phi^{\prime\prime}(z)=-2\,\phi(z)^{2}\,\phi^{\prime}(z)=-\frac{2}{3}\big{[}\phi(z)^{3}\big{]}^{\prime}, \tag{B13}\]
and
\[\tilde{\chi}=\frac{2\sigma_{w}^{2}}{3}\int\text{d}z\,p(z;\sigma_{*}^{2})\big{[}\phi(z)^{3}\big{]}^{\prime}=-\frac{2\sigma_{w}^{2}}{3}\int\text{d}z\,\frac{\partial p(z;\sigma_{*}^{2})}{\partial z}\,\phi(z)^{3}=\frac{2\sigma_{w}^{2}}{3\sigma_{*}^{2}}\int\text{d}z\,p(z;\sigma_{*}^{2})\,z\,\phi(z)^{3}. \tag{B14}\]
Therefore, since \(|\phi(z)|\leq|z|\) for \(\phi=\tanh\),
\[0\leq\tilde{\chi}\leq\frac{2\sigma_{w}^{2}}{3\sigma_{*}^{2}}\int\text{d}z\,p(z;\sigma_{*}^{2})\,z^{4}=2\,\sigma_{w}^{2}\,\sigma_{*}^{2}. \tag{B15}\]
Therefore, since we have already shown that \(\sigma_{*}^{2}=\sigma_{\phi,*}^{2}=0\) at the point \((\sigma_{w}^{2},\sigma_{b}^{2})=(1,0)\), it follows that \(\tilde{\chi}=0\) at this point as well and, from eq. (B11),
\[\frac{\partial\sigma_{b}^{2}}{\partial\sigma_{w}^{2}}\bigg{|}_{(\sigma_{w}^{2},\sigma_{b}^{2})=(1,0)}=0. \tag{B16}\]
In other words, the edge of chaos has zero slope at the point \((\sigma_{w}^{2},\sigma_{b}^{2})=(1,0)\).
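As a sanity check on the algebra, the key identity (B8) and the bound (B15) can be verified numerically; the following sketch (ours) does so at an illustrative near-critical point:

```python
import numpy as np

x, w = np.polynomial.hermite.hermgauss(120)
gauss_expect = lambda f: np.sum(w * f(np.sqrt(2.0) * x)) / np.sqrt(np.pi)
sigma_phi2 = lambda s2: gauss_expect(lambda z: np.tanh(np.sqrt(s2) * z) ** 2)

sw2, s_star2 = 1.76, 0.57   # illustrative point and its fixed-point variance
s = np.sqrt(s_star2)

chi = sw2 * gauss_expect(lambda z: np.cosh(s * z) ** -4)            # eq. (12)
chi_t = 2.0 * sw2 * gauss_expect(                                   # eq. (B7) using (B12)
    lambda z: np.tanh(s * z) ** 2 / np.cosh(s * z) ** 2)

# finite-difference check of eq. (B8)
eps = 1e-6
lhs = sw2 * (sigma_phi2(s_star2 + eps) - sigma_phi2(s_star2 - eps)) / (2 * eps)
assert np.isclose(lhs, chi - chi_t, rtol=1e-4)

# bound (B15)
assert 0.0 <= chi_t <= 2.0 * sw2 * s_star2
```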
## Appendix C Implementation Details
Throughout this work, we have used a vanilla feedforward neural network of \(L\) hidden layers, each having the same width \(N\). As described, initial weights and biases are drawn from zero-mean Gaussian distributions with variances \(\frac{\sigma_{w}^{2}}{N}\) and \(\sigma_{b}^{2}\), respectively. Both MNIST and CIFAR-10 were trained using the standard cross-entropy loss function and plain stochastic gradient descent without any adaptive optimizer. This reproduces the results of [9] (see fig. 7), confirming critical behavior.
## Appendix D SWISH activation function
Throughout the text, we examined the impact of saturation via the line of uniformity for the \(\tanh\) activation function. For non-saturating activation functions, it is an open question whether a similar notion of uniformity exists. While a full analysis of this is beyond the scope of this work, in this appendix we offer some preliminary results for the SWISH activation function,
\[\text{swish}(z)=\frac{z}{1+e^{-z}}, \tag{D1}\]
which also features a line of critical points separating an ordered and a chaotic phase. Note that unlike the EOC for \(\tanh\), which increases with increasing \(\sigma_{w}^{2}\), the EOC for SWISH decreases with increasing \(\sigma_{w}^{2}\), which prevents us from examining the impact of large weight variances. Conversely, for small values of \(\sigma_{w}^{2}\), the corresponding value of \(\sigma_{b}^{2}\) becomes so large that we are unable to satisfy the critical detection criterion \(\chi=1\) discussed in the main text.10 The EOC for SWISH is plotted in fig. 8, which shows a computable range of approximately \(\sigma_{w}^{2}\in[1.97,3.4]\). The same figure also shows the accuracy for an \(L=40\) network with SWISH activation function trained along the EOC, demonstrating no deterioration of performance within this range, which confirms the absence of saturation effects. See also [15; 16].
Footnote 10: We do not claim that the EOC stops beyond this point, rather that it cannot be computed from the central limit method used in [8; 9]. It is conceivable that this could be computed via the NN/QFT correspondence developed in [18], but this has not been attempted for SWISH.
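For completeness, the EOC scan for SWISH parallels the tanh computation. The sketch below is a rough illustration under the assumption that the mean-field recursion propagates the second moment \(q_{l}\) of the pre-activations; it reproduces the qualitative behavior described above, with \(\sigma_{b}^{2}\) growing rapidly as \(\sigma_{w}^{2}\) decreases toward its lower end:

```python
import numpy as np

x, w = np.polynomial.hermite.hermgauss(150)
gauss_expect = lambda f: np.sum(w * f(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

sig = lambda u: 1.0 / (1.0 + np.exp(-u))
swish = lambda u: u * sig(u)
dswish = lambda u: sig(u) * (1.0 + u * (1.0 - sig(u)))   # swish'(u)

def eoc_point_swish(q_star):
    """(sigma_w^2, sigma_b^2) on the SWISH edge of chaos for fixed-point moment q_*."""
    s = np.sqrt(q_star)
    sw2 = 1.0 / gauss_expect(lambda z: dswish(s * z) ** 2)         # chi = 1
    sb2 = q_star - sw2 * gauss_expect(lambda z: swish(s * z) ** 2)
    return sw2, sb2

for q in [0.5, 1.0, 2.0, 4.0, 8.0]:
    print(q, eoc_point_swish(q))   # sigma_b^2 grows as sigma_w^2 drops toward its lower end
```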
|
2306.11282 | Phase Repair for Time-Domain Convolutional Neural Networks in Music
Super-Resolution | Audio Super-Resolution (SR) is an important topic as low-resolution
recordings are ubiquitous in daily life. In this paper, we focus on the music
SR task, which is challenging due to the wide frequency response and dynamic
range of music. Many models are designed in time domain to jointly process
magnitude and phase of audio signals. However, prior works show that approaches
using Time-Domain Convolutional Neural Network (TD-CNN) tend to produce
annoying artifacts in their waveform outputs, and the cause of the artifacts is
yet to be identified. To the best of our knowledge, this work is the first to
demonstrate the artifacts in TD-CNNs are caused by the phase distortion via a
subjective experiment. We further propose Time-Domain Phase Repair (TD-PR),
which uses a neural vocoder pre-trained on the wide-band data to repair the
phase components in the waveform outputs of TD-CNNs. Although the vocoder and
TD-CNNs are independently trained, the proposed TD-PR obtained better mean
opinion score, significantly improving the perceptual quality of TD-CNN
baselines. Since the proposed TD-PR only repairs the phase components of the
waveforms, the improved perceptual quality in turn indicates that phase
distortion has been the cause of the annoying artifacts of TD-CNNs. Moreover, a
single pretrained vocoder can be directly applied to arbitrary TD-CNNs without
additional adaptation. Therefore, we apply TD-PR to three TD-CNNs that have
different architecture and parameter amount. Consistent improvements are
observed when TD-PR is applied to all three TD-CNN baselines. Audio samples are
available on the demo page. | Yenan Zhang, Guilly Kolkman, Hiroshi Watanabe | 2023-06-20T04:26:02Z | http://arxiv.org/abs/2306.11282v2 | # Phase Repair for Time-Domain Convolutional Neural Networks in Music Super-Resolution
###### Abstract
Audio Super-Resolution (SR) is an important topic in the field of audio processing. Many models are designed in the time domain due to the advantages of waveform processing, such as being able to avoid the phase problem. However, prior works show that Time-Domain Convolutional Neural Network (TD-CNN) approaches tend to produce annoying artifacts in their output. In order to confirm the source of the artifact, we conduct an AB listening test and find phase distortion to be the cause. We further propose Time-Domain Phase Repair (TD-PR), which improves TD-CNNs' performance by repairing the phase of the TD-CNNs' output. In this paper, we focus on the music SR task, which is challenging due to the wide frequency response and dynamic range of music. Our proposed method can handle various narrow bandwidths from 2.5kHz to 4kHz with a target bandwidth of 8kHz. We conduct both objective and subjective evaluations to assess the proposed method. The objective evaluation indicates that the proposed method achieves the SR task effectively. Moreover, the proposed TD-PR obtains much higher mean opinion scores than all TD-CNN baselines, which indicates that TD-PR significantly improves perceptual quality. Samples are available on the demo page1.
Footnote 1: [https://mannmaruko.github.io/demopage/dpr.html](https://mannmaruko.github.io/demopage/dpr.html)
## I Introduction
Audio Super-Resolution (SR), also known as bandwidth extension or bandwidth expansion, aims to predict the High-Resolution (HR) components from Low-Resolution (LR) input audio. LR audio is common in daily life (_e. g._, historical recordings, unprofessionally made modern recordings). Real-world LR recordings have a variety of bandwidths or even ambiguous bandwidth. Therefore, addressing audio SR in the real world is challenging. Deep Neural Networks (DNNs) have become the mainstream for audio SR tasks compared with conventional methods [1, 2, 3, 4], but only a few works focus on music SR [2]. In this paper, we focus on music SR with solo piano music. We perform SR on solo instrument music instead of orchestral music, since the task is simpler on single-source data. Among the many musical instruments, we choose piano as the representative, since it has the broadest frequency range.
Various works have delved deeply into DNN-based approaches for audio SR. Since Frequency-Domain Convolutional Neural Networks (FD-CNNs) can only process the magnitude, they generally require additional signal processing to estimate the corresponding phase information, such as Griffin-Lim algorithms [2] or a neural vocoder [4]. Compared with FD methods, Time-Domain Convolutional Neural Networks (TD-CNNs), which directly learn a wave-to-wave mapping, are considered able to avoid the phase problem in audio SR tasks [2]. However, TD-CNNs (_e. g._, AudioUNet [1]) tend to produce annoying artifacts in their results. To alleviate the artifact, Lim _et al._ proposed a time-frequency model [5] based on AudioUNet. Wang _et al._ modified the objective function by employing an FD loss [6] during TD-CNN training. A data augmentation strategy was proposed [7] to improve the robustness of TD-CNNs. However, none of the above time-domain CNN methods succeeds in removing the artifact, judging from the provided audio samples. This observation encourages us to explore other aspects that may have caused the artifact. In terms of up-sampling ratio, many works perform SR at a fixed ratio (_e. g._, 2\(\times\)) [1, 2], which is a limitation when applying these models to real-world problems.
We investigate the artifact issue in the following ways. First, we train our models to handle LR music with various bandwidths, which is applicable to real-world problems. Second, we conduct an AB listening test which, to the best of our knowledge, is the first to demonstrate via a subjective test that the artifact in TD-CNNs is caused by phase distortion. Last but not least, we propose the Time-Domain Phase Repair (TD-PR) method to improve the performance of TD-CNNs for music SR. TD-PR can be directly applied to general TD-CNN models without parameter adjustment. The subjective evaluation shows that TD-PR significantly improves the perceptual quality of three TD-CNN baselines, which validates its effectiveness and generalization ability.
## II Related Work
Various approaches to audio SR have been developed, and some of them work in the frequency domain. Li _et al._ propose an FD approach for speech SR which consists of two steps [8]. The first step maps the magnitude component from narrow bandwidth to wide bandwidth with a DNN. The second step estimates the corresponding phase by signal processing. Following this work, Hu _et al._ introduce Generative Adversarial Networks (GANs) into both steps and obtain better performance [2]. However, training two GAN-based models is difficult due to the instability of GAN training. Furthermore, this SR system works at a fixed up-sampling ratio, which limits its application to real-world problems. Liu _et al._ use a GAN-based neural vocoder for the second step without using a GAN in the first step, which still performs speech SR successfully with the
2306.10822 | Interpreting Deep Neural Networks with the Package innsight | The R package innsight offers a general toolbox for revealing variable-wise
interpretations of deep neural networks' predictions with so-called feature
attribution methods. Aside from the unified and user-friendly framework, the
package stands out in three ways: It is generally the first R package
implementing feature attribution methods for neural networks. Secondly, it
operates independently of the deep learning library allowing the interpretation
of models from any R package, including keras, torch, neuralnet, and even
custom models. Despite its flexibility, innsight benefits internally from the
torch package's fast and efficient array calculations, which builds on LibTorch
$-$ PyTorch's C++ backend $-$ without a Python dependency. Finally, it offers a
variety of visualization tools for tabular, signal, image data or a combination
of these. Additionally, the plots can be rendered interactively using the
plotly package. | Niklas Koenen, Marvin N. Wright | 2023-06-19T10:12:32Z | http://arxiv.org/abs/2306.10822v2 | # Interpreting Deep Neural Networks with the Package innsight
###### Abstract
The \(\mathsf{R}\) package **innsight** offers a general toolbox for revealing variable-wise interpretations of deep neural networks' predictions with so-called feature attribution methods. Aside from the unified and user-friendly framework, the package stands out in three ways: It is generally the first \(\mathsf{R}\) package implementing feature attribution methods for neural networks. Secondly, it operates independently of the deep learning library allowing the interpretation of models from any \(\mathsf{R}\) package, including **keras**, **torch**, **neuralnet**, and even custom models. Despite its flexibility, **innsight** benefits internally from the **torch** package's fast and efficient array calculations, which builds on **LibTorch** - **PyTorch**'s \(\mathsf{C++}\) backend - without a Python dependency. Finally, it offers a variety of visualization tools for tabular, signal, image data or a combination of these. Additionally, the plots can be rendered interactively using the **plotly** package.
_Keywords_: neural networks, feature attribution, interpretable machine learning, explainable, artificial intelligence, XAI, IML, **torch**, **keras**, \(\mathsf{R}\).
## 1 Introduction
Throughout the past decade, neural networks have unleashed a tremendous surge of attention and infiltrated almost all conceivable domains of science, industry, and public life. Their increasing popularity is mainly due to their natural ability to extract patterns and knowledge from vast amounts of structured raw data thanks to modern computing capacities, delivering outstanding performance (Krizhevsky _et al._, 2017; LeCun _et al._, 2015; Silver _et al._, 2016; Bengio _et al._, 2021). However, the intelligently learned decision-making process of the neural network remains inscrutable and hidden from the user due to its enormous complexity. Interpretations cannot be inferred as straightforwardly from this so-called _black box_ as, for example, from the coefficients of linear models. As a consequence, the gain in predictive accuracy and model flexibility generally comes at the price of an increasingly opaque and intricate machine learning model, as was already noted by Gunning and Aha (2019). Nevertheless, it is precisely this question of interpretability - or, informally speaking: Why did a network make a certain prediction? - that is becoming more and more relevant for applications with high-stake decisions and possibly becoming a legal requirement, e.g., in autonomous systems (O'Sullivan _et al._, 2019), healthcare (Schneeberger _et al._, 2020) or data processing in general (European Union, 2016;
The prediction is produced in the standard forward pass and the class to be explained is selected. In the subsequent method-specific backward pass, each input variable is assigned a relevance score to the chosen output class, which can be visualized in a heat map or bar chart depending on the input type. An overview of the most popular feature attribution methods can be found in Section 2.
To make feature attribution methods widely accessible to users and to provide them with a unified and easy-to-use interface, several software packages have been developed in the last few years. The most popular packages are **iNNvestigate**(Alber _et al._, 2019) for the deep learning library **Keras**(Chollet _et al._, 2015), and **captum**(Kokhlikyan _et al._, 2020) and **zennit**(Anders _et al._, 2021) for **PyTorch**(Paszke _et al._, 2019). However, all these packages are Python-exclusive and only support networks of specific deep learning libraries. In order to make feature attribution methods easily available to the R community, we contribute the software package **innsight** pursuing the following goals:
* _First feature attribution R package_: **innsight** is the first R package that implements the most popular feature attribution methods for neural networks unified in a single user-friendly package.
* _Computationally efficient_: The powerful **torch**(Falbel and Luraschi, 2023) package is utilized internally for all calculations, which builds on **LibTorch**, the C++ variant of **PyTorch**(Paszke _et al._, 2019), and doesn't rely on a Python dependency.
* _Deep-learning-library-agnostic_: The passed trained models are not limited to a specific deep learning library. The package supports models from the R packages **keras**(Kalinowski _et al._, 2023), **torch** and **neuralnet**(Gunther and Fritsch, 2010), but under some constraints, an arbitrary model can be passed as a list to be fully flexible.
* _Visualization tools_: **innsight** offers several visualization methods for individual or summarized results regardless of whether it is tabular, signal, image data or a combination of these. Additionally, interactive plots can be created based on the **plotly** package (Sievert _et al._, 2022).
The **innsight** package is available from the Comprehensive R Archive Network (CRAN) at [https://CRAN.R-project.org/package=innsight](https://CRAN.R-project.org/package=innsight) or from our GitHub repository at [https://github.com/bips-hb/innsight/](https://github.com/bips-hb/innsight/).
Figure 1: General procedure of feature attribution methods: First, an input instance \(\mathbf{x}\) flows through the model \(f\) to obtain a prediction \(\mathbf{\hat{y}}\). Then, the desired output node or class \(\hat{y}_{c}\) to be explained is selected. Finally, the relevance \(R^{c}_{i}\) of the individual input variables \(i\) at the selected output \(c\) is calculated in a backward pass.
The rest of the paper is structured as follows: First, we overview the most popular feature attribution methods for neural networks in Section 2. Then, in Section 3, we elaborate on the package's design, functionality, and capabilities. Next, the package is applied to a basic example on a penguin dataset and an advanced example for melanoma detection based on image and tabular data as input types. In the concluding Section 5, the obtained package's outputs are compared and validated with the already mentioned Python equivalents.
## 2 Methodology of feature attribution
Feature attribution methods for neural networks are a group of local interpretation methods that assign to each input variable the contribution or impact to a chosen model output. For example, suppose an input instance \(\mathbf{x}\in\mathbb{R}^{p}\) with \(p\in\mathbb{N}\) variables is fed forward through a neural network \(f:\mathbb{R}^{p}\to\mathbb{R}^{C}\) resulting in an output \(f(\mathbf{x})=\mathbf{\hat{y}}\in\mathbb{R}^{C}\) with \(C\in\mathbb{N}\) classes or regression outputs. In this case, a feature attribution method assigns relevance scores \(R^{c}_{1},\ldots,R^{c}_{p}\) to each of the input features \(x_{1},\ldots,x_{p}\) of \(\mathbf{x}\) on a chosen output class or node \(\hat{y}_{c}\) of the prediction \(\mathbf{\hat{y}}\) to be explained as already described in Figure 1.
### Gradient-based methods
Gradient-based methods are the fastest and most straightforward interpretation methods because they operate on the default techniques of the high-level deep learning libraries for computing gradients during the training loop. However, these methods - in a sense - calculate the derivatives of the chosen output with respect to the input variables instead of the derivatives of the loss value with respect to the model parameters during gradient descent. Although the terminology of gradient-based methods can often be interpreted more broadly, we only consider techniques that use the default gradient methods. For example, Ancona _et al._ (2018) showed that variants of the _layerwise relevance propagation (LRP)_ and _deep learning important features (DeepLift)_ methods, discussed later in Sections 2.2 and 2.3, can approximately be considered gradient-based. Regardless, they are not mentioned in this section since, on the one hand, the fundamental purpose of these variants is different, and, on the other hand, they require overwriting the standard gradient methods.
#### Vanilla Gradient
One of the first and most intuitive methods for interpreting neural networks is the _Gradient_ method introduced by Simonyan _et al._ (2014), also known as _Vanilla Gradients_ or _Saliency maps_. This method computes the gradients of the selected output with respect to the input variables. Therefore, the resulting relevance values indicate those variables that can be locally perturbed the least to change the outcome the most. Mathematically, this method can be described by the following formula for the input variable \(x_{i}\) with \(\mathbf{x}\in\mathbb{R}^{p}\), the model \(f:\mathbb{R}^{p}\to\mathbb{R}^{C}\) and the output \(\hat{y}_{c}=f(\mathbf{x})_{c}\):
\[\text{Gradient}(\mathbf{x})_{i}^{c}=\frac{\partial\,f(\mathbf{x})_{c}}{\partial\,x_{ i}}=\frac{\partial\,\hat{y}_{c}}{\partial\,x_{i}}.\]
Assuming that the function behaves linearly overall, increasing \(x_{i}\) by one raises the output by \(\text{Gradient}(\mathbf{x})_{i}^{c}\). In general, however, neural networks are highly nonlinear, so this interpretation is only valid for very small changes of \(x_{i}\).
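As a minimal illustration (our sketch, not **innsight**'s implementation; the toy network and its weights are hypothetical), the Gradient method amounts to a single application of the chain rule:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy dense network f(x) = W2 tanh(W1 x + b1) + b2
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def gradient(x, c):
    """Vanilla Gradient: d y_c / d x_i for every input variable i."""
    a1 = np.tanh(W1 @ x + b1)
    dy_dz1 = W2[c] * (1.0 - a1 ** 2)   # tanh'(z) = 1 - tanh(z)^2
    return W1.T @ dy_dz1

x = np.array([0.5, -1.0, 2.0])
print(gradient(x, c=0))
```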
#### Smoothed gradients
The _smoothed gradients (SmoothGrad)_ method, introduced by Smilkov _et al._ (2017), addresses a significant problem of the basic Gradient method. As described in the previous subsection, gradients locally assume a linear behavior, but this is generally no longer the case for deep neural networks with many layers. These have large fluctuations and abruptly change their gradients, making the interpretations of the Gradient method noisier and potentially misleading. Smilkov _et al._ (2017) proposed to compute the gradients of randomly perturbed copies of \(x_{i}\) and determine the average gradient from that instead of calculating only the gradient in \(x_{i}\). For obtaining relevance values with the SmoothGrad method for the individual components \(x_{i}\in\mathbb{R}\) of an instance \(\mathbf{x}\in\mathbb{R}^{p}\), first \(K\in\mathbb{N}\) realizations of a \(p\)-dimensional multivariate Gaussian distribution \(q=\mathcal{N}(0,\sigma^{2}\mathrm{I}_{p})\) are generated describing the random perturbations, i.e., \(\mathbf{\varepsilon}^{1},\ldots,\mathbf{\varepsilon}^{K}\sim q\). Then the empirical mean of the gradients for variable \(x_{i}\) and output index \(c\) are calculated as follows:
\[\mathrm{SmoothGrad}(\mathbf{x})_{i}^{c}=\frac{1}{K}\sum_{j=1}^{K}\frac{\partial\, f(\mathbf{x}+\mathbf{\varepsilon}^{j})_{c}}{\partial\,x_{i}+\varepsilon_{i}^{j}}\approx \mathbb{E}_{\mathbf{\varepsilon}\sim q}\left[\frac{\partial\,f(\mathbf{x}+\mathbf{ \varepsilon})_{c}}{\partial\,x_{i}+\varepsilon_{i}}\right].\]
The number of perturbations \(K\) and the variance \(\sigma^{2}\) are hyperparameters. With the value of \(K\), the estimation accuracy for the mean gradient can be increased, but this goes hand in hand with a higher computational effort. The second parameter \(\sigma^{2}\) is mostly specified indirectly via a noise level \(\lambda\geq 0\) determining the percentage of the total range of the input domain that is covered by the standard deviation \(\sigma\), i.e., \(\lambda=\frac{\sigma}{x_{\text{max}}-x_{\text{min}}}\). Especially for images, this argument can be used to control the visual smoothness of the explanation.
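A corresponding sketch of SmoothGrad on the same toy network (the setup is repeated so the snippet stands alone; parameter names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # toy network as before
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def gradient(x, c):
    a1 = np.tanh(W1 @ x + b1)
    return W1.T @ (W2[c] * (1.0 - a1 ** 2))

def smoothgrad(x, c, n=50, noise_level=0.2):
    """Average the gradient over n Gaussian-perturbed copies of x."""
    sigma = noise_level * (x.max() - x.min())   # lambda = sigma / (x_max - x_min)
    eps = rng.normal(0.0, sigma, size=(n, x.size))
    return np.mean([gradient(x + e, c) for e in eps], axis=0)

x = np.array([0.5, -1.0, 2.0])
print(smoothgrad(x, c=0))
```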
#### Gradient\(\times\)Input and SmoothGrad\(\times\)Input
A simple modification can change both previously discussed methods to the methods _Gradient\(\times\)Input_ and _SmoothGrad\(\times\)Input_. The gradients are calculated as in the respective sections and then multiplied by the corresponding input instance. The _Gradient\(\times\)Input_ method was introduced by Shrikumar _et al._ (2017b) and relies on a well-grounded mathematical background despite its simple idea: The basic concept is decomposing the output prediction \(\hat{y}_{c}\) according to its relevance to each input variable \(x_{i}\), i.e., into variable-wise additive effects
\[\hat{y}_{c}=f(\mathbf{x})_{c}=\sum_{i=1}^{p}R_{i}. \tag{1}\]
Mathematically, this method is based on the first-order Taylor decomposition. Assuming that a function \(g:\mathbb{R}^{p}\rightarrow\mathbb{R}\) is continuously differentiable in \(\mathbf{x}\in\mathbb{R}^{p}\), a remainder term \(\varepsilon(g,\mathbf{z},\mathbf{x}):\mathbb{R}^{p}\rightarrow\mathbb{R}\) with \(\lim_{\mathbf{z}\rightarrow\mathbf{x}}\varepsilon(g,\mathbf{z},\mathbf{x})=0\) exists such that
\[g(\mathbf{z}) =g(\mathbf{x})+\nabla_{\mathbf{x}}g(\mathbf{x})\cdot(\mathbf{z}-\mathbf{x})^{\top}+ \varepsilon(g,\mathbf{z},\mathbf{x})\] \[=g(\mathbf{x})+\sum_{i=1}^{p}\frac{\partial\,g(\mathbf{x})}{\partial\,x _{i}}(z_{i}-x_{i})+\varepsilon(g,\mathbf{z},\mathbf{x}),\quad\mathbf{z}\in\mathbb{R}^{p}.\]
The first-order Taylor formula thus describes a linear approximation of the function \(g\) at the point \(\mathbf{x}\) since only the first derivatives are considered. Consequently, a highly nonlinear and
continuous function \(g\) is well approximated only in a small neighborhood around \(\mathbf{x}\). For larger distances from \(\mathbf{x}\), sufficiently small values of the residual term are not guaranteed anymore. The Gradient\(\times\)Input method considers the data point \(\mathbf{x}\) and sets \(\mathbf{z}=\mathbf{0}\). In addition, the residual term \(\varepsilon(f_{c},\mathbf{x},\mathbf{0})\) and the summand \(f(\mathbf{0})_{c}\) are ignored, which then results in the following approximation of the prediction \(f(\mathbf{x})_{c}\) in variable-wise relevances:
\[f(\mathbf{x})_{c}\approx\sum_{i=1}^{p}\frac{\partial\,f(\mathbf{x})_{c}}{ \partial\,x_{i}}\cdot x_{i},\quad\text{hence}\] \[\text{Gradient}\times\text{Input}(\mathbf{x})_{i}^{c}=\frac{ \partial\,f(\mathbf{x})_{c}}{\partial\,x_{i}}\cdot x_{i}.\]
Analogously, this multiplication can be applied to all gradients in the summation of the SmoothGrad method in order to compensate for local fluctuations:
\[\text{SmoothGrad}\times\text{Input}(\mathbf{x})_{i}^{c}=\frac{1}{K}\sum_{j=1}^{K} \frac{\partial\,f(\mathbf{x}+\mathbf{\varepsilon}^{j})_{c}}{\partial\,x_{i}+\varepsilon _{i}^{j}}\cdot(x_{i}+\varepsilon_{i}^{j}),\quad\mathbf{\varepsilon}^{1},\ldots, \mathbf{\varepsilon}^{K}\sim\mathcal{N}(0,\sigma^{2}\text{I}_{p}).\]
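Both multiplicative variants then follow in one line each; continuing the toy example (setup repeated for self-containedness):

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # toy network as before
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def gradient(x, c):
    a1 = np.tanh(W1 @ x + b1)
    return W1.T @ (W2[c] * (1.0 - a1 ** 2))

x = np.array([0.5, -1.0, 2.0])
grad_x_input = gradient(x, c=0) * x   # Gradient x Input

n, sigma = 50, 0.2 * (x.max() - x.min())
eps = rng.normal(0.0, sigma, size=(n, x.size))
sg_x_input = np.mean([gradient(x + e, 0) * (x + e) for e in eps], axis=0)  # SmoothGrad x Input
```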
### Layer-wise relevance propagation (LRP)
The _layer-wise relevance propagation (LRP)_ method was introduced by Bach _et al._ (2015) and has a similar goal as the Gradient\(\times\)Input approach explained in the previous section: decomposing the output into variable-wise relevances conforming to Equation 1. The distinguishing aspect is that the prediction \(\hat{y}_{c}\) is redistributed layer by layer from the output node back to the inputs according to the layer's weights and intermediate values. The entire procedure is accomplished by rule-based relevance messages defining how to redistribute the upper-layer relevance to the lower layer. Consequently, the LRP method is applied to a model with \(L\) layers as follows, irrespective of the selected rule: As a first step, the model output score \(\hat{y}_{c}\) is used as the upper-layer relevance \(R_{1}^{L}\) to calculate the relevances for the layer of index \(L-1\) before the output layer via the selected relevance message. After that, the obtained values \(R_{i}^{L-1}\) are used as the upper-layer relevances for the layer preceding layer \(L-1\), and so on. Repeating this procedure layer by layer until the input layer is reached yields the relevance values \(R_{i}^{1}\) for each input variable \(i\) of the model input instance and, thus, the results of the LRP method. A visual overview of these steps is given in Figure 2.
In the following, we briefly overview the most popular variations of relevance messages flowing from a node of index \(j\) in layer \(l+1\) to node \(i\) in the preceding layer:
* **The simple rule:** The fundamental rule on which all other variations of relevance messages are more or less based is the _simple rule_ (also known as _LRP-0_). The relevances are redistributed to the lower layers according to the ratio between local and
Figure 2: Backward pass of the LRP method.
global pre-activation. Let \(\mathbf{x}\) be the inputs of the preceding layer, \(\mathbf{w}\) the weight matrix and \(\mathbf{b}\) the bias vector of layer \(l\), and \(R_{j}^{l+1}\) the upper-layer relevance; then \(x_{i}\,w_{i,j}\) is the local and \(z_{j}=b_{j}+\sum_{k}x_{k}\,w_{k,j}\) the global pre-activation, defining the simple rule as \[r_{i\gets j}^{(l,\,l+1)}=\frac{x_{i}\,w_{i,j}}{z_{j}}\,R_{j}^{l+1}.\]
* **The \(\varepsilon\)-rule:** One issue with the simple rule is that it is numerically unstable when the global pre-activation \(z_{j}\) vanishes and causes a division by zero. The \(\varepsilon\)_-rule_ (also known as _LRP-\(\varepsilon\)_) tackles those situations by adding a stabilizer \(\varepsilon>0\) that moves the denominator away from zero, i.e., \[r_{i\gets j}^{(l,\,l+1)}=\frac{x_{i}\,w_{i,j}}{z_{j}+\text{sign}(z_{j})\, \varepsilon}\,R_{j}^{l+1}.\] This inserted value \(\varepsilon\) absorbs some of the relevance and can, therefore, be utilized to achieve sparser and less noisy results for the explanation. As \(\varepsilon\) increases, a greater portion of the relevance is intercepted, sustaining only the most salient relevances for this relevance message.
Both variants have in common that they distribute the upper-layer relevance proportionally downward regarding the local and global pre-activations, i.e., \(x_{i}w_{i,j}\) and \(z_{j}\). Even though the \(\varepsilon\)-rule avoids division by zero, numerical inconsistencies can occur in both variants for very deep models. Since the pre-activations are not necessarily guaranteed to be positive, the local pre-activations may take on substantial positive or negative values that cancel out in the global pre-activation leading to magnified values in the preceding layer. As a result, larger relevances in the lower layers potentially accumulate in deep models and increasingly reach the limits of computational representation of floating point numbers. To prevent this blow-up of relevances, the authors introduced the \(\alpha\)_-\(\beta\)-rule_, which treats positive and negative pre-activations separately:
* **The \(\alpha\)-\(\beta\)-rule:** The \(\alpha\)-\(\beta\)_-rule_ was introduced to avoid numerical instabilities and enable a weighting between positive and negative relevances depending on the user's focus. This relevance message applies the simple rule to the positive and negative parts of the pre-activations, respectively, and takes the weighted sum of both. The weighting can be regulated by the hyperparameters \(\alpha,\beta\in\mathbb{R}\) satisfying \(\alpha+\beta=1\). Mathematically formulated, the rule is defined as follows: \[r_{i\gets j}^{(l,\,l+1)}=\left(\alpha\frac{(x_{i}\,w_{i,j})^{+}}{z_{j}^{+ }}+\beta\,\frac{(x_{i}\,w_{i,j})^{-}}{z_{j}^{-}}\right)\,R_{j}^{l+1}\] with \[z_{j}^{\pm}=(b_{j})^{\pm}+\sum_{k}(x_{k}\,w_{k,j})^{\pm},\quad(\cdot)^{+}= \max(\cdot,0),\quad(\cdot)^{-}=\min(\cdot,0).\]
For any of the rules described above, the relevance of the lower-layer node \(R_{i}^{l}\) is determined by summing up all incoming relevance messages \(r_{i\gets j}^{(l,\,l+1)}\) into the respective node of index \(i\), i.e.,
\[R_{i}^{l}=\sum_{j}r_{i\gets j}^{(l,\,l+1)}. \tag{2}\]
Since the bias vector is included in the computation of the global pre-activations in all presented variants, this term absorbs a certain amount of the upper-layer relevance. Consequently, the LRP methods approximate the output prediction rather than providing an accurate representation of the targeted decomposition in Equation 1.
There are even more variants of relevance messages discussed in the literature suitable for various situations or layer types: For example, the _deep Taylor decomposition_ (also called \(z^{+}\)-_rule_) in ReLU models - also achieved with the \(\alpha\)-\(\beta\)-rule with \(\alpha=1\) - allows filtering out only positive relevances (Montavon _et al._, 2017), or the _\(\gamma\)-rule_ favoring positive over negative relevances (Montavon _et al._, 2019). Moreover, some rules are specifically designed for the input layer (Montavon _et al._, 2017). Due to the rule independence of how the lower-layer relevances are computed from the relevance messages in Equation 2, the rules can also be set individually for each layer, called _composite-rule_(Montavon _et al._, 2019; Kohlbrenner _et al._, 2020).
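To make the backward pass concrete, the following sketch (an illustration of the \(\varepsilon\)-rule, not **innsight**'s internals) propagates relevance through a toy two-layer ReLU network; relevance passes through the activation unchanged, and eq. (2) becomes a matrix product:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # toy ReLU network
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def lrp_epsilon(x, c, eps=0.01):
    # forward pass, keeping the inputs of every linear layer
    a1 = np.maximum(W1 @ x + b1, 0.0)
    y = W2 @ a1 + b2

    def backward(inputs, weights, bias, relevance):
        z = weights @ inputs + bias             # global pre-activations
        s = relevance / (z + np.sign(z) * eps)  # epsilon stabilizer
        return inputs * (weights.T @ s)         # summed relevance messages, eq. (2)

    r1 = backward(a1, W2[c:c + 1], b2[c:c + 1], np.array([y[c]]))  # output -> hidden
    return backward(x, W1, b1, r1)                                 # hidden -> input

x = np.array([0.5, -1.0, 2.0])
print(lrp_epsilon(x, c=0))
```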
### Deep learning important features (DeepLift)
One method that, to some extent, echoes the idea of LRP is the so-called _deep learning important features (DeepLift)_ method introduced by Shrikumar _et al._ (2017a). It behaves similarly to LRP in a layer-by-layer backpropagation fashion from a selected output node back to the input variables, analogous to the simple rule. However, it incorporates a reference value \(\mathbf{\tilde{x}}\) against which the relevances are compared. Hence, the relevances of DeepLift represent the relative effect between the output of the instance to be explained, \(f(\mathbf{x})_{c}\), and the output of the reference value, \(f(\mathbf{\tilde{x}})_{c}\). By taking the difference, the bias term is eliminated in the relevance messages, preventing the relevance absorption and leading to an exact variable-wise decomposition of the difference-from-reference output \(\Delta\hat{y}_{c}=f(\mathbf{x})_{c}-f(\mathbf{\tilde{x}})_{c}\), i.e.,
\[\Delta\hat{y}_{c}=f(\mathbf{x})_{c}-f(\mathbf{\tilde{x}})_{c}=\sum_{i=1}^{p}R_{i}.\]
In contrast to the LRP method, DeepLift defines a multiplier layer by layer, starting from the output layer and propagating to the input layer, instead of directly determining the relevances in each intermediate stage. Based on these multipliers, the contribution of an arbitrary variable to the difference-from-reference output can be obtained by multiplying it by the corresponding difference-from-reference input. For an arbitrary layer with the layer's input \(\mathbf{x}\), reference input \(\mathbf{\tilde{x}}\) and multiplier \(m_{\Delta\mathbf{x}\Delta\hat{y}_{c}}\), this means:
\[\sum_{i}m_{\Delta x_{i}\Delta\hat{y}_{c}}\left(x_{i}-\tilde{x}_{i}\right)=m_{\Delta\mathbf{x}\Delta\hat{y}_{c}}\cdot\left(\Delta\mathbf{x}\right)^{\top}=\Delta\hat{y}_{c}. \tag{3}\]
The multipliers fulfill a chain rule allowing the computation of the multiplier for the preceding layer given the already calculated one \(m_{\Delta\mathbf{t}\,\Delta\hat{y}_{c}}\), i.e.,
\[m_{\Delta x_{i}\Delta\hat{y}_{c}}=\sum_{j}m_{\Delta x_{i}\Delta t_{j}}\,m_{\Delta t_{j}\Delta\hat{y}_{c}}. \tag{4}\]
In other words, the chain rule justifies defining the multipliers for each layer or part of a layer separately before combining them with the upper-layer multipliers. The authors distinguish between the linear and nonlinear components of a layer and provide definitions of the multipliers for each of them:
* **Linear rule:** For the linear components of a layer, such as matrix multiplication in dense or convolution layers, the weights of the corresponding layer are used as the multipliers, i.e., \(m_{\Delta x_{i}\Delta z_{j}}=w_{i,j}\).
* **Rescale rule:** This rule can be used for all nonlinear parts of a layer that can be reduced to a one-dimensional function \(\sigma\), e.g., all activations such as ReLU, tanh, or sigmoid. In this case, the ratio between the difference-from-reference activation \(\Delta\sigma(z)_{j}=\sigma(z_{j})-\sigma(\tilde{z}_{j})\) and the pre-activation \(\Delta z_{j}=z_{j}-\tilde{z}_{j}\) gives the multiplier, i.e., \(m_{\Delta z_{j}\Delta\sigma(z)_{j}}=\frac{\Delta\sigma(z)_{j}}{\Delta z_{j}}\). To avoid numerical instability caused by a vanishing denominator, the gradient of \(\sigma\) at \(z_{j}\) is used instead of the multiplier when \(z_{j}\) is close to its reference value \(\tilde{z}_{j}\).
* **RevealCancel rule:** This rule is designed for non-linearities \(\sigma\) to propagate meaningful relevances for saturated activations and discontinuous gradients through the layer's activation part, even when activations like ReLU eliminate the values. Similar to the separate treatment of the pre-activations in the \(\alpha\)-\(\beta\)-rule for LRP, the positive \(\Delta z_{j}^{+}\) and negative \(\Delta z_{j}^{-}\) difference-from-reference pre-activations are considered separately, ensuring the propagation of expressive contribution scores. Descriptively, the multiplier for the positive part \(m_{\Delta z_{j}^{+}\Delta y_{j}^{+}}\) is the ratio between the average effect of \(\Delta z_{j}^{+}\) on the activation - measured both before and after the negative part \(\Delta z_{j}^{-}\) has been added - and the positive difference-from-reference pre-activation \(\Delta z_{j}^{+}\). In the same way, the negative multiplier \(m_{\Delta z_{j}^{-}\Delta y_{j}^{-}}\) is given by the ratio of the average impact of \(\Delta z_{j}^{-}\) - before and after the positive part \(\Delta z_{j}^{+}\) has been added - to \(\Delta z_{j}^{-}\). Mathematically, the rule is defined as \[m_{\Delta z_{j}^{\pm}\Delta y_{j}^{\pm}}=\frac{\frac{1}{2}\left(\sigma(\tilde{z}_{j}+\Delta z_{j}^{\pm})-\sigma(\tilde{z}_{j})+\sigma(\tilde{z}_{j}+\Delta z_{j}^{\pm}+\Delta z_{j}^{\mp})-\sigma(\tilde{z}_{j}+\Delta z_{j}^{\mp})\right)}{\Delta z_{j}^{\pm}}.\]
These rules, along with the chain rule (Equations 3-4), enable the successive computation of the input variables' contributions \(R_{i}\) to the difference-from-reference output \(\Delta\hat{g}_{c}\) in a single backward pass.
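As a standalone illustration of the Rescale rule (independent of the package internals), the following sketch computes the multiplier for a one-dimensional non-linearity sigma and falls back to the gradient sigma_grad near the reference value, as described above; the threshold eps is an assumed implementation detail:

# secant slope between reference and input, with a gradient fallback
rescale_multiplier <- function(z, z_ref, sigma, sigma_grad, eps = 1e-6) {
  dz <- z - z_ref
  ifelse(abs(dz) < eps,
         sigma_grad(z),                   # fallback: local gradient
         (sigma(z) - sigma(z_ref)) / dz)  # difference-from-reference ratio
}

# e.g., for tanh: rescale_multiplier(z, 0, tanh, function(z) 1 - tanh(z)^2)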
The reference value is the only crucial hyperparameter for the DeepLift method, apart from the rule for non-linearities. This choice depends significantly on the application and usually requires proficient domain-specific knowledge. Nevertheless, the authors suggest asking oneself the question of what one wants to measure an effect against. For images, for example, the background color or a blurred version of the original picture are reasonable choices for the reference value. In many cases, a baseline of zeros is also used. Ancona _et al._ (2018) showed that using the Rescale rule with activations crossing the origin (i.e., \(\sigma(0)=0\)) and a zero baseline as reference value \(\mathbf{\tilde{x}}\) coincides with the Gradient\(\times\)Input method discussed in Section 2.1.3.
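This equivalence can also be checked numerically with **innsight** itself. The following sketch assumes a converted model conv with origin-crossing activations (e.g., tanh) and input data data, following the workflow of Section 3 (get_result() is explained in Section 3.3.1); it is an illustration, not part of the package's documented examples:

# DeepLift with the Rescale rule and the default zero baseline
res_dl <- DeepLift$new(conv, data, rule_name = "rescale")

# Gradient x Input
res_gi <- Gradient$new(conv, data, times_input = TRUE)

# maximal absolute deviation between both explanations (expected: ~0)
max(abs(get_result(res_dl) - get_result(res_gi)))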
### Connection weights
One of the earliest methods specifically designed for neural networks is the _connection weights (CW)_ method invented by Olden _et al._ (2004), resulting in a global relevance score for each input variable. The basic idea of this approach is to multiply all path weights for each possible connection between an input variable \(x_{i}\) and the output node or class \(\hat{y}_{c}\) and then calculate
the sum of all of them. However, this method ignores all bias vectors and all activation functions during calculation. Analogously to the previous methods, CW can also be defined layer by layer, deriving the relevance for layer \(l\) from the upper layer as follows:
\[R_{i}^{l}=\sum_{j}w_{i,j}R_{j}^{l+1}.\]
Since only the model weights are used, this method is independent of input data and, thus, a global interpretation method. Inspired by the method Gradient\(\times\)Input (see Sec. 2.1.3), it can also be extended into a local method by taking the point-wise product of the global CW method and the input data.
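As a minimal standalone sketch of this computation (independent of the package's implementation), the global CW relevances of a stack of dense layers are simply the product of all weight matrices, and the local variant multiplies the input element-wise with the column of the selected class; weights and class_idx are hypothetical placeholders:

# weights: list of weight matrices ordered from the input to the output layer
connection_weights <- function(weights) {
  Reduce(`%*%`, weights)  # (d_in x d_out) matrix of global relevances
}

# local variant for an input x and output class class_idx:
# x * connection_weights(weights)[, class_idx]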
## 3 Functionality and usage
The R package **innsight** combines all the methods discussed in the previous section in a user-friendly structure and a unified step-based workflow from the trained model to the visualization of the relevances of a feature attribution method. For efficient high-dimensional array calculations, the package uses the R package **torch**[18], which builds on **LibTorch** (the C++ variant of **PyTorch**[17]), and consequently runs without a Python dependency (see Fig. 3). The following three steps yield the requested results regardless of the class of the passed model or the chosen feature attribution method:
* Step 1: Convert the model
* Step 2: Apply selected method
* Step 3: Get or visualize results.
Apart from the utilized packages for the internal workflows, calculations, and visualizations discussed in the following sections, the packages **checkmate**[10] and **cli**[11] are generally used for all argument verifications, internal checks, and terminal outputs of messages, warnings, and errors.
### Step 1 - Convert the model
The key step that turns the **innsight** package into a deep-learning-library-agnostic approach and unlocks the provided **torch** toolbox to all methods is this first step, which essentially analyzes a passed model and creates a **torch**-based replication. For the user, however, the internal processes remain hidden, and the entire conversion step is accomplished by creating a new instance of the class Converter:
Converter$new(model, input_dim = NULL, input_names = NULL, output_names = NULL, dtype = "float", save_model_as_list = FALSE)
Figure 3: **innsight** utilizes the package **torch**, which builds directly on the C++ library **LibTorch** without a Python dependency.
This object is implemented using the object-oriented R6 class imported from the equally named **R6** package [12]. The only necessary argument is the passed model, which can be either an nn_sequential object from **torch**, a keras_model object from **keras**[13], a neuralnet object from **neuralnet**[13], or a named list in a specific style. The other arguments input_dim, input_names and output_names are optional - except input_dim in combination with **torch** models - and are used for internal validation of the copied model or to assign labels to the input and output nodes used for the visualizations in Step 3, explained in Section 3.3. In addition, the arguments dtype and save_model_as_list specify the calculations' numerical precision and whether the entire model is saved as a named list in the instance's field model_as_list, which is created as an intermediate step during the conversion process; both are explained in more detail in the next paragraph and Figure 4.
To be as flexible as possible and to interpret almost arbitrary models from any R package, a conversion method is implemented for each of the model classes of the three packages **torch**, **keras**, and **neuralnet** mentioned above, summarizing all decisive components and layers of the passed model in an ordered and unified way into a list. Then, a **torch**-based model ConvertedModel (i.e., a subclass of nn_module) is created internally from this list. In addition, the interpretation methods described in Section 2 are pre-implemented for each valid layer type and can be called layer by layer in the following step. Since the creation of the converted model is consequently independent of the class of the given model, the conversion call can be bypassed by directly passing the desired model as a list. Hence, custom wrappers for other packages' models can be written, allowing an interpretation of models not created by the packages **torch**, **keras** or **neuralnet**. An overview of the individual steps that are performed internally when initializing a new instance of the Converter class is summarized in Figure 4. In addition to the fields shown in Figure 4, there are also fields containing the labels ($input_names, $output_names) and shapes ($input_dim, $output_dim) of the input and output layers in a unified list structure. The list structure required for a model passed as a list, the layers that are generally accepted, and further details are explained in the vignette "In-depth explanation" (see vignette("detailed_overview", package = "innsight") or the online documentation available at [https://bips-hb.github.io/innsight/articles/detailed_overview.html](https://bips-hb.github.io/innsight/articles/detailed_overview.html)).

Figure 4: Internal conversion process executed during initialization of a Converter object.
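For instance, a minimal conversion of a small **torch** model could look as follows; the architecture is made up for illustration, and input_dim is required here because an nn_sequential carries no input shape (the dimensions given exclude the batch axis):

library(torch)
library(innsight)

# a small sequential model with four inputs and three outputs
model <- nn_sequential(
  nn_linear(4, 16),
  nn_tanh(),
  nn_linear(16, 3)
)

conv <- Converter$new(model, input_dim = c(4))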
### Step 2 - Apply selected method
As previously mentioned, the **innsight** package provides the most popular feature attribution techniques in a unified framework. Besides the individual method-specific variations, the overall structure of each method is the same. Internally, this unification is achieved by the R6 super class InterpretingMethod, from which all methods intended for users inherit, only adding method-specific arguments to those of the super class. The basic call for initializing a new method object looks like this:
InterpretingMethod$new(converter, data, channels_first = TRUE, output_idx = NULL, ignore_last_act = TRUE, verbose = interactive(), dtype = "float")
The key arguments for every method are the converter object from the first step (see Sec. 3.1), containing the torch-converted model, and the data to be interpreted. The data can be passed in any format as long as the R base method as.array() can convert it into an array and it matches the expected input dimension of the model. In addition, it is common for image or signal data to place the channel axis either directly after the batch axis or at the last position. Since this placement can generally not be extracted unambiguously from the data, the channels_first argument specifies where the channel axis is located, allowing the use of both formats in **innsight**. The remaining arguments output_idx, ignore_last_act, verbose and dtype determine which output nodes or classes are to be explained, whether the last activation function is ignored, whether a progress bar is displayed, and the numerical precision of the calculations, respectively.
The feature attribution techniques designed for the package user's regular use cases and applications are inheritors of the super class InterpretingMethod and extend it by method-specific arguments. How the methods from Section 2 are realized is summarized below:
* The methods _Gradient_ and _Gradient\(\times\)Input_ are implemented as the R6 class Gradient, which has times_input as the only additional argument apart from the inherited ones. This argument switches between the usual gradients (times_input = FALSE) and the gradients multiplied by the corresponding inputs (times_input = TRUE). With **innsight**, they are applied with the following R code:

  # Gradient
  Gradient$new(converter, data, times_input = FALSE, ...)

  # Gradient x Input
  Gradient$new(converter, data, times_input = TRUE, ...)
* Similarly, the methods _SmoothGrad_ and _SmoothGrad\(\times\)Input_ are realized in the R6 class SmoothGrad, containing the arguments n for the number of perturbations and noise_level for the noise scale in addition to the times_input argument. The call in **innsight** is as follows:

  # SmoothGrad
  SmoothGrad$new(converter, data, times_input = FALSE, n = 50,
                 noise_level = 0.1, ...)

  # SmoothGrad x Input
  SmoothGrad$new(converter, data, times_input = TRUE, n = 50,
                 noise_level = 0.1, ...)
* The _LRP_ method, including the simple rule ("simple"), \(\varepsilon\)-rule ("epsilon"), \(\alpha\)-\(\beta\)-rule ("alpha_beta"), and a composition of these rules, is implemented in the R6 class LRP. The rule and its corresponding parameter (if available) are set with the arguments rule_name and rule_param. For both arguments, named lists can also be passed to assign a rule or parameter to each layer type separately. Since many zeros are produced in a maximum pooling layer during the backward pass due to the selection of the maximum value in the pooling kernel, the argument winner_takes_all can be used to treat a maximum as an average pooling layer in the backward pass instead. The overall call in **innsight** is the following:

  # LRP (with defaults)
  LRP$new(converter, data, rule_name = "simple", rule_param = NULL,
          winner_takes_all = TRUE, ...)

  # LRP with average pooling in the backward pass and
  # rule "alpha_beta" with alpha = 1
  LRP$new(converter, data, rule_name = "alpha_beta", rule_param = 1,
          winner_takes_all = FALSE, ...)
* Analogously, the method _DeepLift_ is realized in the R6 class DeepLift, including the argument rule_name for selecting the _Rescale_ ("rescale") or _RevealCancel_ ("reveal_cancel") rule for non-linearities. The reference value is set with x_ref, defaulting to a baseline of zeros. DeepLift can also run into problems in maximum pooling layers since the maximum values in the pooling kernel from the normal and reference input generally do not coincide. Hence, with the winner_takes_all argument, this layer type can be treated as an average pooling layer in the backward pass. With **innsight**, DeepLift is applied with the following code:

  # DeepLift (with defaults)
  DeepLift$new(converter, data, rule_name = "rescale", x_ref = NULL,
               winner_takes_all = TRUE, ...)

  # DeepLift with average pooling in the backward pass and
  # rule "reveal_cancel"
  DeepLift$new(converter, data, rule_name = "reveal_cancel",
               winner_takes_all = FALSE, ...)
* The last method provided by **innsight** is the _connection weights (CW)_ method realized in the R6 class ConnectionWeights. The argument times_input specifies whether the global result of the CW method is calculated or whether it is additionally multiplied by the inputs to obtain local instance-wise explanations. A notable aspect, in this case, is that the data argument is not needed for the global variant, but it is required for the local one. The call in **innsight** is as follows:

  # Connection weights (global)
  ConnectionWeights$new(converter, times_input = FALSE, ...)

  # Connection weights (local)
  ConnectionWeights$new(converter, data, times_input = TRUE, ...)
### Step 3 - Get and visualize the results
After creating an object of a selected method, the third step is to extract the results and, if required, present them in a descriptive and visual way. For this purpose, the **innsight** package provides three generic methods get_result(), plot() and boxplot() that either return the results as an R object (such as an array, torch_tensor or data.frame) or create visualizations for individual instances or aggregated results over the whole passed dataset. All three generic functions call the respective class methods in the InterpretingMethod super class, which are inherited by all the interpreting methods from the second step by design, i.e., for a method object method, plot(method, ...) is the same as method$plot(...).
#### Generic function get_result()
The function get_result() can be used to obtain the results in various forms, whichever is favored for the user's subsequent workflow or application. This method has only the argument type (besides the method object), which determines the representation of the returned results. By default (type = "array"), the result is returned as an R base array, including the input and output names in the corresponding dimensions specified in the first step in the converter object (see Sec. 3.1). The shape of the array is composed of the input shape including the batch size and the number of computed output nodes, i.e., for a tabular input with ten instances and four input variables, the shape is 10\(\times\)4\(\times\)3 if the method was applied to three output nodes in Step 2. In the same way, type = "torch_tensor" returns a torch_tensor object having the same shape as the array, but without dimension labels. However, both variants can also return a list or a list of lists with the related results as an array or torch_tensor for models with multiple input or output layers. The third and last format of the results is an R base data.frame obtained with type = "data.frame".
Included are columns for the input instance ("data"), the input and output layer of the model ("model_input" and "model_output"), the input variable ("feature") - possibly also a second one for images ("feature_2") and the channel for signal and image data ("channel") - the output node or class ("output_node"), and the relevance ("value") for the corresponding values. A visual overview of this generic function is given in Figure 5.

Figure 5: Overview of the generic get_result() function yielding the results of a method object as either a named array (left), a torch_tensor (middle), or a data.frame (right) depending on the argument type.
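For example, assuming an applied method object method from Step 2, the three formats can be retrieved as follows, and the data.frame variant lends itself to quick aggregations over the columns described above:

res_array <- get_result(method)                          # named array (default)
res_tensor <- get_result(method, type = "torch_tensor")  # torch tensor
res_df <- get_result(method, type = "data.frame")        # long-format data.frame

# e.g., mean absolute relevance per input variable
aggregate(abs(value) ~ feature, data = res_df, FUN = mean)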
#### Generic function plot()
The generic function plot() visualizes individual instances of the result of the previously applied method based on the graphics package ggplot2 (Wickham, 2016) or the package **plotly** (Sievert _et al._, 2022) for interactive graphics if the corresponding argument as_plotly is set. The call is executed as follows:
# ggplot2-based plot
plot(method, data_idx = 1, output_idx = NULL, aggr_channels = "sum",
     as_plotly = FALSE, same_scale = FALSE)

# interactive plotly-based plot
plot(method, data_idx = 1, output_idx = NULL, aggr_channels = "sum",
     as_plotly = TRUE, same_scale = FALSE)
The key arguments for this function are data_idx and output_idx, which specify the indices of the dataset instances and of the desired output nodes or classes whose results are to be visualized. By default, the first data instance and the first computed output node are used. Only indices for which the results were calculated previously in the second step can be passed to output_idx. The further argument aggr_channels can be used to define how the channels are aggregated for image and signal data. Since the visualization depends on the data type, it is internally distinguished between tabular/signal data and image data; accordingly, a bar chart or a raster chart is created, as shown in Figure 6 on the left. The relevances in the bars or the pixels are also scaled by color, facilitating a visual comparison; red means positive, blue negative, and white the absence of relevance. Since, in general, the scales vary significantly between the selected output classes or data instances, the plots are scaled separately for each value in output_idx and data_idx. When several input layers are to be visualized, the remaining argument same_scale can be used to select whether the individual input layers are also scaled separately in terms of color. This decision depends on the use case, as illustrated in the melanoma example in Section 4.2. Furthermore, instead of returning objects from the ggplot2 or **plotly** packages, instances of the S4 classes innsight_ggplot2 and innsight_plotly are produced, which are examined in Section 3.3.4 for advanced visualizations.
#### Generic function boxplot()
Global behavioral patterns and insights into the model's decision-making process can be derived from the results of multiple instances by appropriately summarizing and aggregating them. The generic function boxplot() visualizes these global interpretations over the whole or parts of the given dataset based on the graphics package ggplot2 (Wickham, 2016) or the package **plotly** (Sievert _et al._, 2022) for interactive charts, analogous to the previously discussed function plot(). The call for this function is the following:
boxplot(method, output_idx = NULL, data_idx = "all", ref_data_idx = NULL, aggr_channels = "sum", preprocess_FUN = abs, as_plotly = FALSE, same_scale = FALSE,...)
In addition to the arguments output_idx, aggr_channels, as_plotly, and same_scale known from the plot() function, options for selecting the data points to be aggregated (data_idx), for drawing a reference data point (ref_data_idx) and for pre-processing the results (preprocess_FUN) are added. Again, the visualization style depends on the type of input data, as in the plot() function; tabular and signal data are displayed as box plots, whereas only a raster plot with the pixel-wise median is rendered for image data due to the high dimensionality. However, if the chart is **plotly**-based, there is a slider to select which quantile to display. Basic examples and an overview of the boxplot() function are given in Figure 6 on the right. Despite the creation of ggplot2 or **plotly** graphs, instances of the S4 class innsight_ggplot2 or innsight_plotly are returned, which are explained in the following section.

Figure 6: Overview of the visualization tools plot() and boxplot() provided by the **innsight** package depending on the type of input and the argument as_plotly.
### Advanced visualization
The previous two sections have already explained the basic plot() and boxplot() functions. As mentioned, these functions create either an object of the S4 class innsight_ggplot2 (if as_plotly = FALSE) or one of the S4 class innsight_plotly (if as_plotly = TRUE). These classes are intended to generalize the usual ggplot2 or **plotly** objects since, with these packages, the limits of clear visualization for models with multiple input layers are quickly reached. For example, two charts with different scales need to be generated for each output node or class in a model with images and tabular data as inputs. In this case, a ggplot2-based or **plotly**-based plot is generated for each single input instance and output node and then combined into one large visualization using arrangeGrob() from **gridExtra** (Auguie, 2017) or subplot() from **plotly**, respectively. In contrast, the S4 class innsight_ggplot2 behaves as a wrapper for the ggplot2 object for ordinary models with only one input or output layer. Nevertheless, instances of the innsight_ggplot2 class can be treated and modified as regular ggplot2 objects, providing ggplot2-typical usage by adding, for example, themes, scales or geometric objects; hence, the intermediate step via this class is generally not noticeable to the user. For example, the following code is valid:
plot(method) +
  ggplot2::theme_bw() +
  ggplot2::xlab("My new x label") +
  ggplot2::scale_y_continuous(trans = "pseudo_log") +
  ggplot2::geom_text(ggplot2::aes(label = signif(value)))
Conveniently, all ggplot2 objects are based on the same data.frame, which is also obtained via the get_result() method (see Sec. 3.3.1), i.e., the corresponding column names can be used as variables in the ggplot2 objects, as can be seen in the last line of the code chunk above. For objects of the innsight_plotly class, the entire plot is always created using the plotly::subplot() function. However, this has the consequence that individually assigned modifications are partially overwritten by the grouping, which is why the usual **plotly**-typical adaptations can only be performed after the innsight_plotly object has been printed and returned by the generic print() function for this class, i.e.,
print(plot(method, as_plotly = TRUE)) %>%
  plotly::hide_colorbar() %>%
  plotly::layout(xaxis = list(title = "My new x label"))
In addition, generic functions for both S4 classes are implemented, which provide a deeper and more detailed examination of an already created plot through indexing or indexed modification. Section 4.2 demonstrates the application and illustration of some of these generic methods using visualized explanations of a model that takes tabular and image data as inputs. However, for a more detailed description and usage of these classes, please refer to the vignette "In-depth explanation" (see vignette("detailed_overview", package = "innsight") or the online documentation at [https://bips-hb.github.io/innsight/articles/detailed_overview.html](https://bips-hb.github.io/innsight/articles/detailed_overview.html)).
## 4 Illustrations
To exemplify the methods and step-by-step execution of the **innsight** package, a standard dataset with only numerical tabular inputs on a simple model and a more complex dataset with image and tabular data on an extensive non-sequential network are analyzed in the following. The penguin dataset from the **palmerpenguins** package (Horst _et al._, 2020) is used as the simple dataset, taking only the numerical variables of bill length and depth, flipper length, and body weight as inputs. The melanoma dataset (Rotemberg _et al._, 2020) of the Kaggle competition1 is taken as the second dataset, which classifies the malignancy or benignity of a skin lesion based on images of skin lesions and moles, and patient-level contextual information.
### Example 1: Penguin dataset
In the first example, the penguin dataset provided by the **palmerpenguins** package (Horst _et al._, 2020) is used and a neural network consisting of a single hidden dense layer is trained using the **neuralnet** package (Günther and Fritsch, 2010). Before the **innsight** package can be used, the dataset must be processed and the neural network must be trained on the modified dataset. As a first pre-processing step, only the variables with the species, bill length and depth, flipper length, and body weight are selected, cleaned of missing values, and the numerical variables are normalized:
R> library("palmerpenguins")
R> data <- na.omit(penguins[, c(1, 3, 4, 5, 6)])
R> data[, 2:5] <- scale(data[, 2:5])

Next, the dataset is divided into training data and test data at a ratio of 75% to 25%:
R> train_idx <- sample.int(nrow(data), as.integer(nrow(data) * 0.75))
R> train_data <- data[train_idx, ]
R> test_data <- data[-train_idx, -1]

As the second preparation step, a network with 128 units in a single hidden layer and the logistic function as activation is fitted on the training data train_data:
R> library("neuralnet")
R> model <- neuralnet(species ~ .,
+   data = train_data, hidden = 128, act.fct = "logistic",
+   err.fct = "ce", linear.output = FALSE)

Now, we follow the three steps that provide and visualize an explanation of the model model on the test data test_data, which were described in detail in Section 3. As a reminder, the first step uses the R6 class Converter to convert the given model to a **torch**-based model with the pre-implemented methods in each layer:
R> library("innsight")
R> conv <- Converter$new(model)

Then, in the second step, the desired method is selected and applied to the test data test_data via the corresponding R6 class. In this example, the LRP method is used with the \(\alpha\)-\(\beta\)-rule with \(\alpha=2\):
R> lrp <- LRP$new(conv, test_data, rule_name = "alpha_beta", rule_param = 2)

In the last step, the results are visualized in two ways: Using the plot() function, the relevances of one instance of the species Adelie (data index 1) and one of the species Gentoo (data index 76) are displayed for both corresponding classes (output nodes 1 and 3). Secondly, the results for the two classes, Adelie and Gentoo, are aggregated over the entire test data and box plots are generated using the boxplot() function without pre-processing and including the first data point as a reference. As mentioned in Section 3, these two variants can be treated and modified like ordinary ggplot2 objects, e.g., adding themes or rotating the x-axis labels. Both visualizations are executed by the following code and can be viewed in Figure 7:
R> library("ggplot2")
R> plot(lrp, data_idx = c(1, 76), output_idx = c(1, 3)) +
+   theme_bw() +
+   theme(axis.text.x = element_text(angle = 45, vjust = 0.6))
R> boxplot(lrp, output_idx = c(1, 3), preprocess_FUN = identity,
+   ref_data_idx = 1) +
+   theme_bw() +
+   theme(axis.text.x = element_text(angle = 45, vjust = 0.6))

In Figure 7a it can be seen that the bill length for the chosen penguin of the Adelie class (index 1 in the dataset test_data) is highly relevant - based on the trained model - for this particular class. However, at the same time, this feature also argues against the Gentoo class due to its strong negative relevance. For the Gentoo penguin, the bottom row in Figure 7a reveals that the bill depth is decisively in favor of the Gentoo class and concurrently against the Adelie species. Besides these instance-wise explanations, the boxplot() function provides aggregate insights across the entire test data test_data, summarized in Figure 7b. The box plots show that the bill length has high positive relevance for the Adelie class and consequently strongly influences it. Simultaneously, however, it also negatively affects the Gentoo class in general. It further emerges that the bill depth and flipper length are crucial features for the Gentoo class.
Figure 7: Generated visualizations of LRP results with the \(\alpha\)-\(\beta\)-rule (\(\alpha=2\)) on the penguin dataset. Sub-figure (a) shows the individual results from data points 1 and 76 from the test data test_data for the Adelie and Gentoo classes. In contrast, the summarized results as box plots across the whole test data for the same two classes can be found in (b), including the individual result of the first data point with the red line.
### Example 2: Melanoma dataset
The second example examines the melanoma dataset (Rotemberg _et al._, 2020) from the Kaggle challenge2 in 2020, issued by the Society for Imaging Informatics in Medicine (SIIM) and based on the International Skin Imaging Collaboration (ISIC) archive, the most extensive publicly available collection of quality-controlled dermoscopic images of skin lesions. This dataset consists of 33 126 labeled images with associated patient-level contextual information, such as the age, gender, and image location of the skin lesion or mole.
Footnote 2: See the following link for the official dataset description [https://www.kaggle.com/competitions/siim-isic-melanoma-classification/overview/description](https://www.kaggle.com/competitions/siim-isic-melanoma-classification/overview/description)
Due to the complexity and high dimensionality of the data, training a neural network is not straightforward and not the main focus of this paper; thus, reference is made to the GitHub repository for reproduction ([https://github.com/bips-hb/JSS_innsight/](https://github.com/bips-hb/JSS_innsight/)), and only the most notable points are summarized in the following: The tabular input part's numerical and one-hot encoded categorical variables are fed into a sequential model of dense layers. For the image data, on the other hand, we use an architecture based on the established residual layers (He _et al._, 2016) with skip connections between convolutional layers. Afterward, the two outputs of the respective input parts are merged by concatenation and finally flow into a sequential model with only dense layers to obtain a prediction probability for the skin lesion status. The coarse structure is summarized in Figure 8, where additional dropout layers are used between dense layers. Furthermore, the numerical variable age and the one-hot encoded variables gender and location yield ten features as inputs for the tabular model, and the images are resized to 224\(\times\)224\(\times\)3 for the image model. This model architecture is trained on the melanoma dataset with a validation split of 20% and a batch size of 256 instances using the **Keras** library (Chollet _et al._, 2015) with stochastic gradient descent (SGD) as the optimizer and class-weighted binary cross-entropy as the loss function. The best model is selected based on the highest value of the area under the ROC curve (AUC) on the validation data. This metric is chosen because the dataset is highly imbalanced, with only 584 of the 33 126 images containing a malignant skin lesion. Since the model is trained from scratch and the image model has significantly more parameters than the tabular one, the training starts with 300 warm-up epochs on the image model using the image data only. Then, the image model is joined with the tabular and the dense output model. Afterward, training continues on the image and tabular data, saving the model with the highest value of the AUC metric on the validation data. In addition, the initial learning rate of 0.01 is reduced by a factor of 0.1 after 20 epochs without a validation AUC improvement, and training is terminated after 40 unimproved epochs. With this approach, an AUC value of 87.71% and an accuracy of 84.19% on the validation data are achieved, and the model to be interpreted is selected.
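A condensed sketch of this training configuration in the R interface of **Keras** could look as follows; it is only an approximation of the described setup, the objects model, x, y and class_weights are assumed to exist, and the exact training script is part of the reproduction material:

library(keras)

model %>% compile(
  optimizer = optimizer_sgd(learning_rate = 0.01),
  loss = "binary_crossentropy",   # class weighting is passed to fit() below
  metrics = list(metric_auc(name = "auc"))
)

callbacks <- list(
  # keep the model with the highest validation AUC
  callback_model_checkpoint("best_model", monitor = "val_auc", mode = "max",
                            save_best_only = TRUE),
  # reduce the learning rate by a factor of 0.1 after 20 stagnant epochs
  callback_reduce_lr_on_plateau(monitor = "val_auc", factor = 0.1, patience = 20),
  # terminate training after 40 unimproved epochs
  callback_early_stopping(monitor = "val_auc", mode = "max", patience = 40)
)

model %>% fit(x, y, batch_size = 256, epochs = 1000,  # upper bound; stopped early
              validation_split = 0.2, class_weight = class_weights,
              callbacks = callbacks)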
Figure 8: Model architecture for the melanoma dataset.
Based on this model, the obtained predictions can now be explained using the 3-step approach of **innsight**: In the first step, the trained model is loaded and converted to a **torch**-based model with the pre-implemented feature attribution methods using the Converter class. Since **keras** models do not include names of the input variables and output nodes, these can be passed along when initializing the converter to preserve meaningful labels of the input and output variables in the visualizations. Thus, the first step is executed by the following R code:
R> library("keras")
R> library("innsight")
R> model <- load_model_tf("path/to/model")
R> input_names <- list(
+   list(paste0("C", 1:3), paste0("H", 1:224), paste0("W", 1:224)),
+   list(c("Sex: Male", "Sex: Female", "Age",
+          "Loc: Head/neck", "Loc: Torso", "Loc: Upper extrem.",
+          "Loc: Lower extrem.", "Loc: Palms/soles", "Loc: Oral/genital",
+          "Loc: Missing")))
R> output_name <- c("Probability of malignant lesion")
R> converter <- Converter$new(model, input_names = input_names,
+   output_names = output_name)
Next, the LRP method with composite rules is applied, which selects the propagation rule depending on the layer type. For convolutional layers, the \(\alpha\)-\(\beta\)-rule with \(\alpha=1.5\) is used to favor the positive over the negative relevances. In addition, the \(\varepsilon\)-rule with \(\varepsilon=0.01\) is performed on all dense layers and the simple rule - the rule used by default - on average pooling layers. This second step is performed with **innsight** as follows:
R> rule_name <- list(Conv2D_Layer = "alpha_beta", Dense_Layer = "epsilon")
R> rule_param <- list(Conv2D_Layer = 1.5, Dense_Layer = 0.01)
R> res <- LRP$new(converter, input, channels_first = FALSE,
+   rule_name = rule_name, rule_param = rule_param)
For the sake of simplicity, the loading of the input data input is omitted in the above code snippet; it can be found in the reproduction material together with the whole example. In addition, the channel axis of the images is located at the last position, which is why the argument channels_first must be set to FALSE. The results can be visualized using the implemented plot() function for all interpretability methods. By default, the results are scaled using colors (red for positive and blue for negative relevances) for each instance, each considered output node and each input layer individually. This behavior is especially appropriate for models with multiple input layers consisting of images mixed with tabular data: even if the relevances are the same at the end of the tabular and image model before merging, they are further propagated to only ten input variables for the tabular model but 224\(\times\)224\(\times\)3 variables for the image model, leading to potentially different relevance scales. The following code produces the plot object based on the S4 class innsight_ggplot2 for the first three dataset instances, which can be treated as a ggplot2 object:
R> library("ggplot2")
R> p <- plot(res, data_idx = c(2, 3, 1)) + theme_bw()
The order of the indices of the data instances also specifies the order of appearance in the plot, i.e., in the above code, the second instance is visualized first, then the third, and finally the first instance. Since this model has no standard architecture and the visualization is more extensive, the suggested packages **gridExtra** (Auguie, 2017) and **gtable** (Wickham and Pedersen, 2023) are required. Each individual plot in the object p can now be modified based on the ggplot2 syntax; the indexing follows the matrix-like arrangement of the plots given by the facet rows and columns. Note that each plot object is based on the same data set as created by the method get_result(type = "data.frame"), i.e., the same column names can be used within the ggplot2 syntax. In the following code snippet, the facet and the tabular x-axis labels are changed manually and the plot is visualized, which can be found in Figure 9:
R> p[1, 1] <- p[1, 1, restyle = FALSE] +
+   facet_grid(cols = vars(model_input),
+     labeller = as_labeller(c(Input_1 = "Image input")))
R> p[1, 2] <- p[1, 2, restyle = FALSE] +
+   facet_grid(rows = vars(data), cols = vars(model_input),
+     labeller = as_labeller(c(data_2 = "ISIC_6535558 (87.82%)",
+       Input_2 = "Tabular input")))
R> p[2:3, 2] <- p[2:3, 2, restyle = FALSE] +
+   facet_grid(rows = vars(data),
+     labeller = as_labeller(c(data_3 = "ISIC_7291021 (0.05%)",
+       data_1 = "ISIC_0946787 (47.47%)"))) +
+   theme(axis.text.x = element_text(angle = 45, vjust = 0.6))
R> plot(p, heights = c(0.31, 0.31, 0.38))

The argument restyle is set when indexing the innsight_ggplot2 object, ensuring that the subplots are extracted in the same way as they are displayed in the whole plot. Otherwise, the entire plot's corresponding facet stripes and axis labels are transferred to the selection. In addition, the arguments in the generic function plot() for innsight_ggplot2 objects are forwarded to the function arrangeGrob() when the plot is finally rendered. This feature allows adjusting the relative heights and widths, demonstrated in the last line of code, to slightly compensate for the increased vertical space of the rotated axis labels.
The three instances in Figure 9 describe different explanatory approaches to the trained model's predictions: The top image in Figure 9a of a malignant lesion was recorded on the torso of a 65-year-old female patient. In the associated interpretation generated by **innsight** (top row in Fig. 9b), it can be observed that, on the one hand, the model identifies the lesioned skin area. On the other hand, the darker and patchy pigmentation and the ragged borders positively influenced the prediction of 87.82% for melanoma. This observation is also consistent with the official ABCD checklist for melanoma (Friedman _et al._, 1985), which states that asymmetry, irregular borders, varying color, and large diameters are indicative of a malignant skin lesion. However, the patient's age also positively affected the prediction, as evident from the tabular patient-level information explanation in Figure 9b. A complementary picture results from the middle image in Figure 9a, showing a benign mole located on the lower extremities of a 40-year-old man. The model predicted melanoma with a probability of only 0.05% and explains its decision with the symmetrical shape, uniform color pigmentation, and lack of notched borders. In addition, the age of 40 also has a slightly negative influence on the prediction (middle row in Fig. 9b). The last instance exemplifies a situation where the model is uncertain whether it is a malignant or benign skin lesion. The truly malignant skin area originates from the lower extremities of a 90-year-old woman (bottom image in Fig. 9a). Especially the image input explanation in the last row of Figure 9b shows the model's uncertainty: the mole's upper part looks very regular, arguing for a healthy lesion, and is consistently highlighted with negative relevance by the model's explanation. In contrast, the lower part contains some notches potentially favoring melanoma, which the model also correctly identified. Furthermore, the high age of the 90-year-old patient has a strong positive relevance to the model's prediction, demonstrating the strong effect of the feature age.

Figure 9: The image part of the instances of the melanoma dataset to be explained and the associated visualization generated by **innsight**. Figure (a) shows a (top) malignant lesion image of a 65-year-old female, (middle) benign lesion of a 40-year-old male and (bottom) malignant lesion of a 90-year-old female patient. Figure (b) displays the LRP explanation of the patients from (a) created with the plot() function and subsequent minor modifications such as facet and x-axis labels.
## 5 Validation and runtime
To evaluate the validity and computational performance of **innsight**, the results of the presented feature attribution methods on simulated models and data are compared with the results of the Python implementations **zennit** (Anders _et al._, 2021), **investigate** (Alber _et al._, 2019), **captum** (Kokhlikyan _et al._, 2020) and **deeplift** (Shrikumar _et al._, 2017). The packages **deeplift** and **investigate** are based on the high-level machine learning library **Keras** (Chollet _et al._, 2015) and utilize **TensorFlow** (Abadi _et al._, 2015) as the backend for all calculations. In addition, both packages initially create a replication of the passed model with the interpretation methods pre-implemented in the individual layers, similar to **innsight**. In contrast, the packages **zennit** and **captum** use **PyTorch** (Paszke _et al._, 2019) and run without a conversion step since hooks are used to modify the automated backward pass according to the applied method on the fly. However, this only enables the application of methods that can be considered independent of the preceding and following layers, which complicates, for example, an implementation of DeepLift with the RevealCancel rule. Furthermore, not every package supports all methods. The gradient-based methods Gradient and Gradient\(\times\)Input are provided by all packages. In contrast, DeepLift with the Rescale rule is only implemented in **deeplift** and **captum**, and the RevealCancel rule only in **deeplift**. The LRP methods are available in all packages except for **deeplift**. However, **captum** does not support the \(\alpha\)-\(\beta\)-rule.
### Validity comparison
For the validation, shallow untrained dense and convolutional models with the most commonly used layer types - such as 2D convolution, 2D maximum/average pooling and dense layers - and normally distributed input data are generated. More specifically, 32 different architectures are considered, using ReLU and hyperbolic tangent to include both bounded and unbounded activation functions, with and without bias vectors, with varying pooling layers and a different number of output nodes. From each of these architectures, 50 randomly initialized models are created, resulting in 1600 distinct models, which are evaluated on normally distributed datasets with 32 input instances each. The experimental details can be found in Appendix A.1. Moreover, all figures and results are reproducible using the code in the reproduction material or on GitHub ([https://github.com/bips-hb/JSS_innsight](https://github.com/bips-hb/JSS_innsight)).
As a measure of quality, the mean absolute error (MAE) between the result of **innsight** and the corresponding reference implementation over all input variables and output nodes is considered. Consequently, for each combination of method, model, input instance, and output node, a value for this quality measure is derived, leading to box plots for visualizing the differences. In addition to the box plots, the acceptable error range of up to \(10^{-6}\) is highlighted in light gray to distinguish numerically tolerated differences - caused by calculations with single-precision floating point numbers according to the IEEE 754 standard (IEEE, 2019) - from abnormal discrepancies.

Figure 10: Comparison of feature attribution methods' results of **innsight** and the reference implementations **captum**, **zennit**, **investigate**, and **deeplift** regarding the mean absolute error as box plots over different model architectures and repetitions. It shows the results separated into (a) gradient-based methods, (b) DeepLift and (c) LRP. The shaded gray area indicates the error tolerance of \(10^{-6}\).
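As a minimal illustration of this quality measure, assuming two result arrays of shape instances \(\times\) variables \(\times\) output nodes (e.g., obtained via get_result()), one MAE value per instance and output node can be computed as follows; res_innsight and res_reference are hypothetical placeholders:

# average the absolute differences over the input variables (dimension 2)
mae_per_case <- function(res_innsight, res_reference) {
  apply(abs(res_innsight - res_reference), c(1, 3), mean)
}

# e.g., feed all values into a box plot:
# boxplot(as.vector(mae_per_case(res_innsight, res_reference)))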
The results are summarized in Figure 10. For the gradient-based methods Gradient and Gradient\(\times\)Input, the method's results coincide precisely for the **PyTorch**-based packages, and they differ at most by \(10^{-7}\) for the **Keras**-based packages (see Fig. 10a). The main reason for this discrepancy is that **innsight**, **zennit**, and **captum** compute gradients via **PyTorch** or **PyTorch**'s C++ library **LibTorch**, while **investigate** and **deeplift** use **TensorFlow** for the calculations. A similar picture results for the DeepLift method with the Rescale and RevealCancel rules, but with a few outliers with a maximum error of up to \(10^{-3}\) for the Rescale rule (see Fig. 10b). However, all outliers with an error exceeding \(10^{-6}\) originate from models with the hyperbolic tangent as activation and can thus be explained by numerical inaccuracies due to the saturated activation. In addition, minor discrepancies are probably caused by different treatments of vanishing denominators in the multipliers or numerical uncertainties between the backends **PyTorch**/**LibTorch** and **TensorFlow** in general. For the LRP methods, a few adjustments are needed for the **investigate** and **captum** packages since they use only the simple rule for average pooling layers, which can be modified in **innsight** using composite rules. Apart from that, the results from **innsight** compared to **captum** or **zennit** for the simple, \(\varepsilon\)- and \(\alpha\)-\(\beta\)-rule differ negligibly and are far below the maximally tolerated error of \(10^{-6}\) (see Fig. 10c). For the simple and \(\varepsilon\)-rule, **investigate** is consistent with **innsight** except for a few deviations. Again, 96.8% of the cases with errors exceeding \(10^{-6}\) are caused by a saturated hyperbolic tangent activation; the remaining smaller errors stem from different stabilizers for the denominators in the relevance messages and general numerical inaccuracies between the backends. However, significant discrepancies can be observed using the \(\alpha\)-\(\beta\)-rule, which only occur in models with a bias vector (see Fig. 10c bottom). The reason for this is a different interpretation of the positive or negative part of the bias vector, which is discussed in more detail in Appendix B.
### Runtime comparison
In addition to checking whether **innsight**'s results are consistent with the reference implementations, a runtime comparison is also conducted concerning the number of output nodes, hidden units or filters, hidden layers, the batch size, and, for images, the size of the input images. It must be noted again that the packages based on **Keras** and **innsight** first convert the passed model, while the **PyTorch**-based packages use hooks to overwrite the automated backward pass during execution, making them considerably faster. Therefore, in the results, only the execution time excluding the conversion step - as far as possible - is presented and not the total time. For comparisons of the total time needed to calculate an explanation or of the conversion time only, see Appendix A.3. However, the **investigate** package has a special characteristic in this regard since the entire conversion process and the construction of the underlying graph only happen during the analysis of the first batch of input data3. For this reason, conversion times are hardly present in the results. Since this simulation assumes that an interpretation method is being applied for the first time to a model and only to a single batch of input instances, the results of **investigate** are slightly biased and would be notably quicker if the same model were employed with more input batches. Analogously to the comparison from Section 5.1, untrained dense and convolutional neural networks with the architectures shown in Figure 11 and normally distributed input data are used for the time comparisons. Depending on the type of time comparison, the hyperparameters for the number of output nodes (C), number of hidden units or filter size (U), number of hidden layers (L), batch size (B), and the size of the input images (W) are varied. The hyperparameters not considered in the respective comparison remain unchanged and take default values, i.e., one output node, 10 hidden units for the tabular model and 5 filters for the image model, two layers in total, a batch size of 16 and an input image size of 10. In addition, 20 replicates of each architecture are created to compensate for potential numerical fluctuations. For a more detailed simulation description and an analysis of the results including the conversion time, please refer to Appendix A.2 and A.3, and for a reproduction of the results, see the reproduction material or the code in the GitHub repository at [https://github.com/bips-hb/JSS_innsight/](https://github.com/bips-hb/JSS_innsight/).

Figure 11: Model architectures for the run-time comparison. The hyperparameters for the number of outputs (C), number of hidden units or filter size (U), number of hidden layers (L), batch size (B) and the height/width of the input images (W) were varied in each case.
Footnote 3: See GitHub issues #50 and #129 for the **investigate** package
In general, comparing the runtimes of the different packages reveals that **innsight** is faster than **investigate** and **deeplift** (which are based on **Keras**), but slower than **captum** and **zennit** (which are based on **PyTorch**). This overall trend is particularly evident when adding more layers to the models, where **innsight** is 10-15 times slower than **captum** and **zennit**, but an order of magnitude faster than **investigate** and **deeplift**. One notable exception is that for deeper models using LRP with the \(\alpha\)-\(\beta\)-rule, **innsight** is 20 times faster than **investigate**, and only slightly faster than **deeplift** using the DeepLift method (see Fig. 12a). However, there are both positive and negative deviations from this trend, especially when varying the number of output nodes. In these cases, **innsight** stands out with the LRP and DeepLift methods and performs similarly to or even faster than the **PyTorch**-based packages (see Fig. 12b). The primary reason for this is that the results for several output nodes can be calculated at once in **innsight**. In contrast, all other implementations only allow the calculation for single nodes, and thus the method's results are computed by iterative execution. Although **innsight** and **investigate** become more comparable to the **PyTorch** packages with larger inputs and more filters and hidden units, a weakness of **innsight** can be stated: Even if the results on tabular data almost correspond to those of the **PyTorch**-based packages, **innsight** is slower than all reference implementations for larger image sizes in the DeepLift method and for more filters in LRP and DeepLift (see Fig. 16 and 18 in the appendix). Even though the packages in this section are compared regarding runtime and some packages' weaknesses are revealed, all considered implementations provide an explanation within a reasonable runtime of a few seconds, even for deep neural networks with several large images as inputs.

Figure 12: Package's average evaluation time in seconds over 20 repetitions for applying different feature attribution methods on models with (a) a varying number of hidden layers and (b) a varying number of output nodes (only image data).
## 6 Summary and discussion
In summary, we have presented **innsight**, an R package that provides the most well-known feature attribution methods for interpreting neural network predictions. After a detailed
introduction of the implemented feature attribution methods, we described the internal structure, which utilizes **torch**'s fast array calculations, and showed how the R6 class Converter realizes the deep-learning-library-agnostic approach, enabling the efficient analysis of models from any R package. This flexibility is complemented by a unified 3-step approach from model to plotted results, including multiple visualization tools based on ggplot2 or **plotly** for interactive plots. The step-wise procedure was illustrated using a model on the tabular penguin dataset and a deep neural network on the melanoma dataset consisting of structured patient-level information and images. Furthermore, the results of the simulation study show that **innsight** returns feature-wise explanations identical to those of the reference implementations **captum**, **zennit**, **investigate**, and **deeplift** in Python, except for negligible numerical inaccuracies. In terms of runtime, the package is generally faster than the **Keras**-based packages but slower than **captum** and **zennit**, and it only suffers during convolution if the image height/width or the number of filters is large. Apart from that, the package also has some limitations that could be improved in the future. For example, only sequential models (i.e., nn_sequential) from the **torch** package can be converted because no structured network graph can be extracted from an arbitrary nn_module. Nevertheless, passing a model as a list allows the user to do the conversion step on their own in such cases. Furthermore, an activation function is assigned to a linear or convolutional layer only if it is defined in the layer itself or immediately after the layer. This behavior is especially relevant for the RevealCancel rule in the DeepLift method because **innsight** handles separated activations with the Rescale rule, which is the case, for example, with the layer sequence of convolution, batch normalization, and activation. Moreover, even though it is possible in the **torch** package, the **innsight** package currently only supports computations on CPUs and not on GPUs.
## Computational details
A 64-bit Linux platform running Ubuntu 20.04 with an AMD Ryzen Threadripper 3960X CPU (24 cores, 48 threads), 256 GB of RAM and two NVIDIA Titan RTX GPUs was used for all computations. All comparisons and calculations with the reference implementations were performed in a separate session - created by **callr** (Csardi and Chang, 2022) - using only a single CPU thread per job. An exception was the neural network training on the melanoma dataset using a single GPU, which was also the only code executed in Python and not from R 4.3.0 (R Core Team, 2023). Due to mismatching package requirements, separate environments were created for the **Keras**-based and the **PyTorch**-based packages, i.e.,
* **investigate** 2.0.2: Using Python 3.8.15 with **Keras** 2.10.0 and **TensorFlow** 2.10
* **deeplift** 0.6.13: Using Python 3.6.15 with **Keras** 2.2.4 and **TensorFlow** 1.15
* **captum** 0.6.0 and **zennit** 0.5.0: Using Python 3.8.12 with **PyTorch** 1.13.1 (CPU)
The corresponding environments were loaded in R, and the code was then executed in Python using **reticulate** 1.30 (Ushey _et al._, 2023). In addition, the computer was used exclusively for the runtime measurements of the corresponding job, so the measurements were not distorted by other simultaneous processes.
## Acknowledgments
This project was funded by the German Research Foundation (DFG), Emmy Noether Grant 437611051.
|
2307.03544 | Roman Numeral Analysis with Graph Neural Networks: Onset-wise
Predictions from Note-wise Features | Roman Numeral analysis is the important task of identifying chords and their
functional context in pieces of tonal music. This paper presents a new approach
to automatic Roman Numeral analysis in symbolic music. While existing
techniques rely on an intermediate lossy representation of the score, we
propose a new method based on Graph Neural Networks (GNNs) that enable the
direct description and processing of each individual note in the score. The
proposed architecture can leverage notewise features and interdependencies
between notes but yield onset-wise representation by virtue of our novel edge
contraction algorithm. Our results demonstrate that ChordGNN outperforms
existing state-of-the-art models, achieving higher accuracy in Roman Numeral
analysis on the reference datasets. In addition, we investigate variants of our
model using proposed techniques such as NADE, and post-processing of the chord
predictions. The full source code for this work is available at
https://github.com/manoskary/chordgnn | Emmanouil Karystinaios, Gerhard Widmer | 2023-07-07T12:20:56Z | http://arxiv.org/abs/2307.03544v2 | # Roman numeral analysis with graph neural networks:
###### Abstract
Roman numeral analysis is the important task of identifying chords and their functional context in pieces of tonal music. This paper presents a new approach to automatic Roman numeral analysis in symbolic music. While existing techniques rely on an intermediate lossy representation of the score, we propose a new method based on Graph Neural Networks (GNNs) that enables the direct description and processing of each individual note in the score. The proposed architecture can leverage note-wise features and interdependencies between notes but yields onset-wise representations by virtue of our novel edge contraction algorithm. Our results demonstrate that _ChordGNN_ outperforms existing state-of-the-art models, achieving higher accuracy in Roman Numeral analysis on the reference datasets. In addition, we investigate variants of our model using proposed techniques such as NADE, and post-processing of the chord predictions. The full source code for this work is available at [https://github.com/manoskary/chordgnn](https://github.com/manoskary/chordgnn)
## 1 Introduction
Automatic Chord Recognition is one of the core problems in Music Information Retrieval. The task consists of identifying the harmonies or chords present in a musical piece. Various methods have been proposed to address this task using either an audio or symbolic representation of the music [1]. In the symbolic domain, most approaches focus on the related and arguably more complex problem of Automatic Roman Numeral Analysis, which is a functional harmony analysis problem that has its roots in musicological research of Western classical music.
Roman Numeral Analysis is a notational system used in music theory to analyze chord progressions and identify the relationship between chords in a given key. In this system, each chord in a piece of music is assigned a Roman numeral based on its position within the key's scale. For example, in the key of C major, the I chord is C major, the IV chord is F major, and the V chord is G major. Roman Numerals are an important tool for understanding and analyzing the harmonic structure of music, and they are a valuable resource for musicians, composers, and arrangers alike.
In Music Information Retrieval, a lot of work has been done to automate Roman Numeral analysis. However, current approaches still face significant challenges. Some of these are related to the large chord symbol vocabulary. A common way to address this problem is to divide a Roman Numeral into several components (e.g., key, degree, inversion) and transform the analysis into a multitask learning scenario. However, multitask approaches themselves face challenges with interdependencies among tasks. Lastly, Roman Numeral analysis faces a score representation problem related to existing models such as CNNs whose inputs must be in fixed-sized chunks. Recent state-of-the-art approaches follow an audio-inspired strategy, dividing a musical score into fixed-length time frames ("windows") which are then processed by a Convolutional Recurrent Neural Network (CRNN). However, such a representation is unnatural for scores and has the added practical disadvantage of being time-limited (for example regarding notes extending beyond the current window) and, due to the fixed-length (in terms of score time) constraint, capturing varying amounts of musically relevant context.
In this paper, we propose a new method for automatic Roman Numeral analysis based on Graph Neural Networks that can leverage note-wise information to address the score representation issue. Our model, _ChordGNN_, builds on top of existing multitask approaches but introduces several novel aspects, including a graph convolutional architecture with an edge contraction pooling layer that combines convolution at the note level but yields the learned representation at the onset level.
Our proposed method, _ChordGNN_, is evaluated on a large dataset of Western classical music, and the experimental results demonstrate that it outperforms existing state-of-the-art methods, in terms of the commonly used Chord Symbol Recall measure. To address the interdependencies among tasks we investigate the effect of post-processing and other proposed techniques such as NADE and gradient normalization. Finally, we look at a qualitative musical example and compare our model's predictions with other state-of-the-art models.
## 2 Related Work
There is a big body of literature covering the topic of Automatic Chord Recognition applied in the audio domain; however, in our work, we focus on the problem of automatic Roman Numeral Analysis in the symbolic domain. It consists of labeling the chords and harmonic progressions in a piece of music using Roman Numerals, where each numeral represents a chord built on a particular scale degree. Numerous approaches have tried to automate Roman Numeral analysis or infer harmonic relations between chords. Notable work includes statistical models such as _Melsima_[2], HMM-based models [3], and grammar-based approaches [4].
In recent years, research has shifted towards a deep learning and data-driven approach. Due to the large vocabulary of possible Roman Numerals, the problem has been divided into several component subtasks, thus resulting in a multitask learning setting [5]. As a multitask problem, a Roman Numeral is characterized by the following components: the primary and secondary degree (as illustrated in Figure 1), the local key at the time point of prediction, the root of the chord, the inversion of the chord, and the quality (such as major, minor, 7, etc.). Although the root can be derived from the other components, it was pointed out in [6] that this redundancy helps Roman Numeral analysis systems learn. An example of Roman Numerals and their components can be viewed in Figure 1. Recent state-of-the-art approaches decompose the numeral prediction task into the simultaneous prediction of these 6 components [5, 6, 7, 8, 9].
Most deep learning approaches to Roman Numeral analysis are inspired by work in audio classification, cutting a score into fixed-size chunks (in terms of some constant score time unit; e.g., a 32nd note) and using these as input to deep models. Using this quantized time frame representation, [6] introduced a CRNN architecture to predict Roman Numerals. Other work has continued to build on the latter by introducing more tasks to improve performance such as the _AugmentedNet_ model [7], or introducing intra-dependent layers to inform in an orderly fashion the prediction of one task with the previously predicted task, such as the model introduced by [8]. Other architectures, such as the CSM-T model, have demonstrated good results by introducing modular networks which treat a score as a sequence of notes ordered first by onset and then by pitch [9].
Should a musicologist perform music analysis on a piece of music, they would consider the individual notes existing in the score. Thus, a time frame representation would come across as unnatural for symbolic music and in particular for such an analysis task. In this paper, we present a method that no longer treats the score as a series of quantized frames but rather as a partially ordered set of notes connected by the relations between them, i.e., a graph. A visual comparison of the two representations is shown in Figure 2. Recently, modeling scores as graphs has also been demonstrated to be beneficial for problems such as expressive performance generation [10], cadence detection [11], voice separation [12], or boundary detection [13].
Automatic Roman Numeral analysis, as a multitask problem, is mostly tackled with hard parameter-sharing models. These models share part of the model across all tasks as an encoder, and then the common embeddings are branched to a classification model per task [6, 7, 8]. However, some approaches separate tasks from this paradigm to a more modular or soft parameter sharing approach [9].
In the field of multitask learning, a lot of research has been done on the problem of conflicting gradients during backpropagation in hard parameter-sharing models. Issues with multi-objective optimization have been early addressed by Zhang et al. [14] and recent solutions have been proposed for the multitask setting in the form of dynamic task prioritization [15], gradient normalization [16], rotation matrices [17], or even game-theoretic approaches [18]. In our work, we experimentally evaluate some of these techniques in the multitask setting to investigate whether Roman Numeral analysis subtasks conflict with each other (see Section 5.2).
Figure 1: A Roman Numeral analysis of two bars of four-part harmony in \(C\) major. Capital letters stand for major quality and lowercase for minor quality. The third chord has a dominant seven as its primary degree and the dominant of \(C\) major as its secondary degree. The \(V_{5}^{6}\) indicates a major chord with a seventh quality in second inversion. The bass (lowest chord note) of that chord is \(F\) sharp, the root is \(D\), and the local key is \(C\) major.
Figure 2: Different representations of the score excerpt shown in the middle. Top: quantized time frame representation, bottom: graph representation.
## 3 Methodology
### Roman Numeral Analysis
We already discussed, in Section 2, how Roman Numeral analysis can be viewed as a multi-task problem. In this section, we describe in detail the additional tasks introduced by [7] that we also use for training and prediction. First, let us assume that the prediction can be broken down into specific time points, and each time point is attributed to a unique onset in the score.
The Roman Numeral prediction can be viewed as a simultaneous prediction of the local key, degree (primary and secondary), quality, inversion, and root. Each one of these tasks is a categorical, multiclass classification problem. However, [7] indicated that only three tasks would be sufficient for \(\sim\!98\%\) of the Roman Numeral annotations in our dataset (detailed in Section 4.1). These three tasks comprise the prediction of a restricted vocabulary of common Roman Numeral symbols in combination with the local key and the inversion. We refer to Roman Numeral prediction involving the 5 tasks as _conventional RN_, and to the combined prediction of key, inversion, and the restricted RN vocabulary as _alternative RN_ (\(RN_{alt}\)), in accordance with [7].
Several other tasks have been introduced that have been shown to improve the performance of related models [7]. These include the Harmonic Rhythm, which is used to infer the duration of a Roman Numeral at a given time point; the Tonicization task, a multiclass classification task that refers to a tonicized key implied by the Roman Numeral label and is complementary to the local key; the Pitch Class Sets task, which includes a vocabulary of different pitch class sets, and the Bass task, which aims to predict the lowest note in the Roman Numeral label.
### Graph Representation of Scores
Our approach to automatic Roman Numeral analysis no longer treats the score as a sequence of quantized time frames but rather as a graph, which permits us to specify note-wise information such as pitch spelling, duration, and metrical position. We use graph convolution to model interdependencies between notes. We model our score generally following Karystinaios and Widmer [11], but we opt for a heterogeneous graph convolution approach, i.e., including different edge relations/types. Furthermore, we develop an edge contraction pooling layer that learns onset-wise representations from the note-wise embeddings and therefore yields a sequence.
After the edge contraction, we follow [6, 7, 8] by adding to the graph convolution a sequence model for the hard-sharing part of our model, and simple shallow multi-layer perceptron heads for each task. In essence, we replace the CNN encoder that works on quantized frames of the score in previous approaches, with a graph convolutional encoder followed by an edge contraction layer. Our proposed architecture is shown in Figure 3.
The input to the GNN encoder is an attributed graph \(G=(V,E,X)\) where \(V\) and \(E\) denote its node and edge sets and \(X\) represents the node feature matrix, which contains the features of the notes in the score. For our model, we used pitch spelling, note duration, and metrical position features.
Given a musical piece, the graph-building process creates a set of edges \(E\), with different relation types \(\mathcal{R}\). A labeled edge \((u,r,v)\) of type \(r\) between two notes \(u,v\) belongs to \(E\) if one of the following conditions is met (a code sketch follows the list):
* notes starting at the same time: \(on(u)=on(v)\to r=\) onset
* note starting while the other is sounding: \(on(u)>on(v)\wedge on(u)\leq on(v)+dur(v)\to r=\) during
* note starting when the other ends: \(on(u)+dur(u)=on(v)\to r=\) follow
* note starting after a time frame when no note is sounding: \(on(u)+dur(u)<on(v)\wedge\nexists v^{\prime}\in V,~{}on(v^{\prime})<on(v)\wedge on(v^{\prime})>on(u)+dur(u)\to r=\) silence
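To make these conditions concrete, the following minimal Python sketch builds the four typed (directed) edge lists from arrays of note onsets and durations. The array-based representation, the function name, and the quadratic loop are our own illustrative choices, not the authors' implementation (their code is available in the linked repository).

```python
import numpy as np

def build_score_graph(onset, duration):
    """Builds the four typed edge lists from note onset/duration arrays."""
    onset, duration = np.asarray(onset), np.asarray(duration)
    offset = onset + duration
    edges = {"onset": [], "during": [], "follow": [], "silence": []}
    for u in range(len(onset)):
        for v in range(len(onset)):
            if u == v:
                continue
            if onset[u] == onset[v]:
                edges["onset"].append((u, v))
            elif onset[v] < onset[u] <= offset[v]:
                edges["during"].append((u, v))
            elif offset[u] == onset[v]:
                edges["follow"].append((u, v))
            elif offset[u] < onset[v] and not np.any(
                (onset > offset[u]) & (onset < onset[v])
            ):
                # no note starts inside the silent gap between u and v
                edges["silence"].append((u, v))
    return edges
```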
### Model
In this section, we introduce and describe _ChordGNN_, a Graph Convolutional and Recurrent Neural Network. The structure of the network is visually outlined in Figure 3. _ChordGNN_ uses heterogeneous graphSAGE [19] convolutional blocks defined as:
\[\begin{split}\mathbf{h}^{(l+1)}_{\mathcal{N}_{r}(v)}& =\mathrm{mean}\left(\{\mathbf{h}^{l}_{u},\forall u\in\mathcal{N}_{ r}(v)\}\right)\\ \mathbf{h}^{(l+1)}_{v_{r}}&=\sigma\left(W\cdot \mathrm{concat}(\mathbf{h}^{l}_{v},\mathbf{h}^{l+1}_{\mathcal{N}_{r}(v)}) \right)\\ \mathbf{h}^{(l+1)}_{v}&=\frac{1}{|\mathcal{R}|} \sum_{r\in\mathcal{R}}\mathbf{h}^{(l+1)}_{v_{r}}\end{split} \tag{1}\]
Figure 3: The proposed _ChordGNN_ architecture.
where \(\mathbf{h}_{v}^{(0)}=\mathbf{x}_{v}\), \(\mathbf{x}_{u}\) are the input features for node \(u\), \(\mathcal{N}(u)\) are the neighbors of node \(u\), and \(\sigma\) is a ReLU activation function. We denote the output representations of all nodes after the graphSAGE convolutions by \(H=\{h_{u}^{(L)}\mid u\in V\}\), where \(L\) is the total number of convolutional layers.
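As an illustration, one heterogeneous graphSAGE block in the spirit of Eq. (1) can be sketched in PyTorch as follows. Whether the projection \(W\) is shared across relation types is not spelled out above, so the per-relation weights here, like all names in the snippet, are our own assumptions; the authors' released code may differ.

```python
import torch
import torch.nn as nn

class HeteroSAGELayer(nn.Module):
    # Eq. (1): per-relation mean aggregation, concatenation with the node's
    # own state, a linear map with ReLU, then averaging over relation types.
    def __init__(self, in_dim, out_dim,
                 relations=("onset", "during", "follow", "silence")):
        super().__init__()
        self.relations = relations
        self.lin = nn.ModuleDict(
            {r: nn.Linear(2 * in_dim, out_dim) for r in relations})

    def forward(self, h, edges):
        # h: (num_nodes, in_dim); edges[r]: (2, E_r) long tensor of
        # (source, target) index pairs for relation r
        outs = []
        for r in self.relations:
            src, dst = edges[r]
            agg = torch.zeros_like(h).index_add_(0, dst, h[src])
            deg = torch.zeros(h.size(0), 1).index_add_(
                0, dst, torch.ones(src.size(0), 1)).clamp(min=1)
            msg = agg / deg  # mean over the r-neighbourhood of each node
            outs.append(torch.relu(self.lin[r](torch.cat([h, msg], -1))))
        return torch.stack(outs).mean(dim=0)  # average over relations
```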
Given the hidden representation \(H\) of all nodes, and onset edges \(E_{\text{On}}=\{(u,v)\mid on(u)=on(v)\}\), the onset edge contraction pooling is described by the following equations: first, we update the hidden representation with a learned weight, \(H^{\prime}=HW^{(\text{cp})}\). Subsequently, we unify the representations for every node \(u\), such that \(\forall v\in\mathcal{N}_{\text{On}}(u),\;h_{u}^{(\text{cp})}=h_{v}^{(\text{cp})}\):
\[h_{u}^{(\text{cp})}=h_{u}+\sum_{v\in\mathcal{N}_{\text{On}}(u)}h_{v} \tag{2}\]
where, \(h_{u}\) and \(h_{v}\) belong to \(H^{\prime}\). Subsequently, we filter the vertices:
\[V^{\prime}=\{v\in V|\;\forall u\in V,\;(v,u)\in E_{\text{On}}\implies u\notin V ^{\prime}\} \tag{3}\]
Therefore, \(H^{(cp)}=\{h_{u}^{(cp)}\mid u\in V^{\prime}\}\) are the representations obtained. Sorting the representations by the onset to which they are attributed, we obtain a sequence \(S=[h_{u_{1}}^{(cp)},h_{u_{2}}^{(cp)},\ldots h_{u_{k}}^{(cp)}]\) such that \(on(u_{1})<on(u_{2})<\cdots<on(u_{k})\).
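A minimal sketch of this contraction step, assuming the learned projection of Eq. (2) has already been applied to the note embeddings `h` (the function name and tensor layout are ours):

```python
import torch

def onset_contraction(h, onsets):
    # Eqs. (2)-(3): all notes sharing a score onset are summed into a single
    # vector, and the pooled sequence is ordered by onset.
    onsets = torch.as_tensor(onsets)
    uniq, inverse = torch.unique(onsets, sorted=True, return_inverse=True)
    pooled = torch.zeros(uniq.numel(), h.size(1), dtype=h.dtype)
    pooled.index_add_(0, inverse, h)
    return pooled  # shape (num_onsets, dim): the sequence S
```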
The sequence \(S\) is then passed through an MLP layer and 2 GRU layers. This concludes the hard-sharing part of our model. Thereafter, an MLP head is attached per task, as shown in Figure 3.
For training, we use the dynamically weighted loss introduced by [20]. The total loss \(\mathcal{L}_{tot}\) of our network is calculated as a weighted sum of the individual losses for every task, where the weights are learned during training:
\[\mathcal{L}_{\text{tot}}=\sum_{t\in\mathcal{T}}\left[\frac{\mathcal{L}_{t}}{2\gamma_{t}^{2}}+\log(1+\gamma_{t}^{2})\right] \tag{4}\]
where \(\mathcal{T}\) is the set of tasks; \(\mathcal{L}_{t}\) is the cross-entropy loss relating to task \(t\); the \(\gamma_{t}\) are learned scalars that give the weight for each task \(t\); and the \(\log\) expression is a regularization term [20].
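For concreteness, Eq. (4) can be written as a small PyTorch module with one learnable scalar \(\gamma_{t}\) per task; this is a sketch of the published formula, not the authors' code:

```python
import torch
import torch.nn as nn

class DynamicWeightedLoss(nn.Module):
    # Eq. (4): learnable task weights gamma_t plus a log regulariser.
    def __init__(self, num_tasks):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_tasks))

    def forward(self, task_losses):
        # task_losses: tensor of per-task cross-entropy losses, shape (T,)
        g2 = self.gamma ** 2
        return (task_losses / (2.0 * g2) + torch.log1p(g2)).sum()
```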
#### 3.3.1 Post-processing
We enhance our model with a post-processing phase applied after the model has been trained. The post-processing phase concatenates the logits of all tasks' predictions and feeds them to a single-layer bidirectional LSTM block. The embeddings of this sequential block are then distributed to 11 one-layer MLPs, one for each task. The post-processing block is sketched in Figure 4.
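A possible PyTorch sketch of this block is given below; the hidden size and exact layer shapes are our own assumptions:

```python
import torch
import torch.nn as nn

class PostProcessor(nn.Module):
    # Concatenated task logits -> one BiLSTM layer -> one linear head per task.
    def __init__(self, logit_dims, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(sum(logit_dims), hidden,
                           bidirectional=True, batch_first=True)
        self.heads = nn.ModuleList(
            [nn.Linear(2 * hidden, d) for d in logit_dims])

    def forward(self, logits_per_task):
        # logits_per_task: list of (batch, seq, d_t) tensors, one per task
        x, _ = self.rnn(torch.cat(logits_per_task, dim=-1))
        return [head(x) for head in self.heads]
```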
## 4 Experiments and Corpora
In the experiments, we compare our model, _ChordGNN_, with other recent models for automatic Roman Numeral analysis. We run experiments with our model in exactly the same way as described in [7], including the specific data splits, so that our results are directly comparable to the figures reported there. A detailed comparison of the results is given in Table 1. Furthermore, we develop variants of our model using proposed techniques such as NADE [8] and post-processing of the chord predictions. We report a configuration study of our model on the use of gradient normalization techniques and NADE, which should improve results in multi-task learning scenarios and avoid common multi-task learning problems such as conflicting gradients. Lastly, we compare our model with the updated version _v1.9.1_ of the state-of-the-art model _AugmentedNet_ [21] and its extended datasets.
### Datasets
For training and evaluation, we combined six data sources into a single "Full" Dataset of Roman Numeral annotations in accordance with [7]: the Annotated Beethoven Corpus (ABC) [22]; the annotated Beethoven Piano Sonatas (BPS) dataset [5]; the Haydn String Quartets dataset (HaydnSun) [23]; the TAVERN dataset [24]; a part of the When-in-Rome (WiR) dataset [25, 26]; and the Well-Tempered-Clavier (WTC) dataset [25] which is also part of the WiR dataset.
Training and test splits for the full dataset were also provided by [7]. It is worth noting that the BPS subset splits were already predefined in [5]. In total, approximately 300 pieces were used for training, and 56 pieces were used for testing, proportionally taken from all the different data sources. We draw a distinction for the BPS test set, which includes 32 Sonata first movements and for which we ran an additional experiment. The full test set also includes the 7 Beethoven piano sonatas.
In addition to the above datasets, we include data augmentations identical to the ones described in [7]: texturization and transposition. The texturization is based on a dataset augmentation technique introduced by [27]. The transposition augmentation boils down to transposing a score to all the keys that lie within a range of key signatures that have up to 7 flats or sharps. It should be noted that the augmentations are only applied in the training split.
For our last experiment (to be reported on in Section 5.3 below), we add additional data that were recently introduced by [21]. The additional data include the annotated Mozart
Figure 4: Post-processing of Roman Numeral predictions.
Piano Sonatas (MPS) dataset [28] for which we also applied the aforementioned augmentations.
### Configuration
For all our experiments, we train our network with the AdamW optimizer. We fix our architecture with a hidden size of \(256\), a learning rate of \(0.0015\), a weight decay of \(0.005\), and a dropout of \(0.5\) which is applied to each learning block of our architecture.
## 5 Results
As an evaluation metric, we use Chord Symbol Recall (CSR) [29]: for each piece, we measure the proportion of time during which the estimated label matches the ground-truth label. We apply the CSR at the 32nd-note granularity level, in accordance with [6, 7, 9].
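As a sketch, CSR over a piece discretised into 32nd-note segments could be computed as follows (the list-based representation of labels and durations is our assumption, not the authors' evaluation code):

```python
def chord_symbol_recall(pred, truth, durations):
    # Fraction of score time (in 32nd-note units) on which the predicted
    # label matches the ground-truth label.
    matched = sum(d for p, t, d in zip(pred, truth, durations) if p == t)
    return matched / sum(durations)
```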
### Quantitative Results
In the first experiment, which compares our _ChordGNN_ to existing state-of-the-art approaches, we evaluate on the full dataset, but also on the annotated Beethoven Piano Sonatas (BPS) [5] subset, which many previous approaches have also used. The results are shown in Table 1. We present the CSR scores (where they are applicable) for Local Key, Degree, Quality, Inversion, Root, conventional Roman Numeral, and Alternative Roman Numeral (see Section 3). Furthermore, we include the onset-wise accuracy score for our models' conventional Roman Numeral predictions.
On the BPS subset, we compare our model _ChordGNN_ with the Micchi (2020) model [6], the _CSM-T_ (2021) model [9] and the _AugmentedNet_ 2021 model [7]. Our results on Roman Numeral prediction surpass all previous approaches. Note that the _AugmentedNet_ model exhibits higher prediction scores on the individual Key, Degree, Quality, and Root tasks, which are used jointly for the prediction of the Roman numeral. These results indicate that our model obtains more meaningfully interrelated predictions, with respect to the Roman numeral prediction, resulting in a higher accuracy score.
Moreover, we compare _ChordGNN_ to _AugmentedNet_ on the full test dataset. Our model surpasses _AugmentedNet_ with and without post-processing in all fields apart from local key prediction and quality. Our model obtains up to \(11.6\%\) improvement in conventional Roman Numeral prediction.
In both experiments, post-processing has been shown to improve both \(RN\) and \(RN_{alt}\). However, _ChordGNN_ without post-processing already surpasses the other models.
### Configuration Study
For a systematic study of multitask training, we investigated the effects of extension modules, gradient normalization techniques, and learnable weight loss. In detail, we test 5 configurations using as baseline the _ChordGNN_ model (without post-processing) with standard CE loss and no weighing. Furthermore, we test our proposed architecture using the dynamically weighted loss described in Section 3.3 (same as the model in Table 1), Rotograd [17] and GradNorm [16] for Gradient Normalization, and NADE [8]. The models are run on the Full data set described above and averaged over five runs with random initialization. The results, summarized in Table 2, suggest that using the dynamically weighted loss yields better results compared to other methods such as the Baseline or Gradient Normalization techniques. Furthermore, the dynamically weighted loss is comparable to NADE but also more robust on Conventional Roman Numeral prediction on our datasets.
| Test set | Model | Key | Degree | Quality | Inversion | Root | RN | RN (Onset) | \(RN_{alt}\) |
|---|---|---|---|---|---|---|---|---|---|
| BPS | Micchi (2020) | 82.9 | 68.3 | 76.6 | 72.0 | - | 42.8 | - | - |
| BPS | CSM-T (2021) | 69.4 | - | - | - | 75.4 | 45.9 | - | - |
| BPS | AugNet (2021) | **85.0** | **73.4** | **79.0** | 73.4 | **84.4** | 45.4 | - | 49.3 |
| BPS | ChordGNN (Ours) | 79.9 | 71.1 | 74.8 | 75.7 | 82.3 | 46.2 | 46.6 | 48.6 |
| BPS | ChordGNN+Post (Ours) | 82.0 | 71.5 | 74.1 | **76.5** | 82.5 | **49.1** | **49.4** | **50.4** |
| Full | AugNet (2021) | **82.9** | 67.0 | **79.7** | 78.8 | 83.0 | 46.4 | - | 51.5 |
| Full | ChordGNN (Ours) | 80.9 | 70.1 | 78.4 | 78.8 | 84.8 | 48.9 | 48.4 | 50.4 |
| Full | ChordGNN+Post (Ours) | 81.3 | **71.4** | 78.4 | **80.3** | **84.9** | **51.8** | **51.2** | **52.9** |

Table 1: Model comparison on two different test sets, the Beethoven Piano Sonatas (BPS) and the full test set. \(RN\) stands for Roman Numeral, \(RN_{alt}\) for the alternative Roman Numeral computations discussed in Section 3.1. \(RN(Onset)\) refers to onset-wise prediction accuracy; all other scores use the CSR score (see Section 5). Note that model _CSM-T_ reports _Mode_ instead of _Quality_.
| Variant | RN | \(RN_{alt}\) |
|---|---|---|
| ChordGNN (Baseline) | \(46.1\pm 0.003\) | \(47.8\pm 0.007\) |
| ChordGNN + WLoss | \(\mathbf{48.9}\pm 0.001\) | \(\mathbf{50.4}\pm 0.010\) |
| ChordGNN + Rotograd | \(45.5\pm 0.003\) | \(47.1\pm 0.005\) |
| ChordGNN + R-GradN | \(45.2\pm 0.006\) | \(46.7\pm 0.005\) |
| ChordGNN + NADE | \(48.2\pm 0.005\) | \(49.9\pm 0.005\) |

Table 2: Configuration study: Chord Symbol Recall on Roman Numeral analysis on the full test set. \(RN\) stands for Roman Numeral, \(RN_{alt}\) refers to the alternative Roman Numeral computations discussed in Section 3.1. WLoss stands for the dynamically weighted loss described in Section 3, and R-GradN stands for Rotograd with Gradient Normalization. Every experiment is repeated 5 times with the same ChordGNN model as in Table 1, without post-processing.
### Latest developments
Our last experiment focuses on specific developments that have very recently been published in Napoles Lopez's Ph.D. thesis [21]. In the thesis, three additional tasks, related to predicting the components of a canonical representation of the current chord, as implied by the Roman Numeral, were proposed and the dataset was extended with the Annotated Mozart Piano Sonatas (MPS) corpus [28], as mentioned in Section 4.1 above.
To test the relevance of these updates, we trained an adapted version of our model, now with 11+3=14 individual tasks and including the Mozart data. It turns out that the updated model improves significantly in performance, achieving a \(53.5\) CSR score on conventional Roman Numeral (compare this to row "ChordGNN (Ours)" in Table 1). Furthermore, post-processing can improve the results by up to two additional percentage points. 1
Footnote 1: Unfortunately, we cannot directly compare these numbers to [21], as their results are not reported in comparable terms.
### A Musical Example
In Figure 5, we compare the human annotations with the _AugmentedNet_ and _ChordGNN_ predictions (the musical excerpt is taken from Nápoles López's thesis [21], and the predictions come from the new models trained as described in the previous section). False predictions are marked in red; correct predictions of the model where the ground-truth annotation is wrong are marked in yellow. Both models' predictions are very similar to the human analysis. However, our model correctly predicts the initial pickup measure annotation. In measure 2, the ground-truth annotation marks a tonic in first inversion; however, the viola at that point is lower than the cello, and therefore the chord is actually in root position. Both models obtain a correct prediction at that point. Subsequently, our model predicts a harmonic rhythm of eighth notes, which disagrees with the annotator's half-note marking. Analyzing the underlying harmony in that passage, we can justify our model's choices.
The human annotation suggests that the entire second half of the 2nd measure represents a \(vii^{o}\) chord. However, it should not be in the first inversion, as the cello plays an F# as the lowest note (which is the root of \(vii^{o}\)). The AugNet analysis faces the same issue, in contrast with the predictions of ChordGNN. However, there are two conflicting interpretations of the segment. First, the \(vii^{o}\) on the third beat is seen as a passing chord between the surrounding tonic chords, leading to a dominant chord in the next measure. Alternatively, the \(vii^{o}\) could already be part of a prolonged dominant harmony (with passing chords on the offbeats) leading to the \(V^{7}\). The ChordGNN solution accommodates both interpretations as it does not attempt to group chords at a higher level, treating each eighth note as an individual chord rather than a passing event. The other two solutions prefer the second option.
## 6 Conclusion
In this paper, we presented _ChordGNN_, a model for automatic Roman Numeral analysis in symbolic music, based on a note-level, graph-based score representation. We showed that _ChordGNN_ improves on other state-of-the-art models, and that post-processing can further improve the accuracy of the predictions. A configuration study suggests that gradient normalization techniques or techniques for carrying prediction information across tasks are not particularly beneficial or necessary for such a model.
Follow-up work will focus on strengthening the robustness of our models by pre-training with self-supervised methods on large corpora. We believe that such pre-training can be beneficial for learning helpful intrinsic musical information. Such a step is crucial since more data improves predictions but Roman Numeral annotations are hard to find or produce. Moreover, we aim to enrich the number of tasks for joint prediction by including higher-level analytical targets such as cadence detection and phrase boundary detection. Finally, we aim to extend our method to the audio domain.
Figure 5: A comparison between the human annotation, AugmentedNet, and ChordGNN on a passage of Haydn’s string quartet op. 20 No. 3, movement 4. The red (wrong) markings on Human Analysis and AugNet (2022) are from [21].
## 7 Acknowledgements
We gratefully acknowledge the musical analysis of the \(vii^{o}\) passage in Fig. 5 (Section 5.4) that was offered by an anonymous reviewer, and which we took the liberty of adopting for our text. This work is supported by the European Research Council (ERC) under the EU's Horizon 2020 research & innovation programme, grant agreement No. 101019375 ("Whither Music?"), and the Federal State of Upper Austria (LIT AI Lab).
|
2304.06636 | Neural networks: from the perceptron to deep nets | Artificial networks have been studied through the prism of statistical
mechanics as disordered systems since the 80s, starting from the simple models
of Hopfield's associative memory and the single-neuron perceptron classifier.
Assuming data is generated by a teacher model, asymptotic generalisation
predictions were originally derived using the replica method and the online
learning dynamics has been described in the large system limit. In this
chapter, we review the key original ideas of this literature along with their
heritage in the ongoing quest to understand the efficiency of modern deep
learning algorithms. One goal of current and future research is to characterize
the bias of the learning algorithms toward well-generalising minima in a
complex overparametrized loss landscapes with many solutions perfectly
interpolating the training data. Works on perceptrons, two-layer committee
machines and kernel-like learning machines shed light on these benefits of
overparametrization. Another goal is to understand the advantage of depth while
models now commonly feature tens or hundreds of layers. If replica computations
apparently fall short in describing general deep neural networks learning,
studies of simplified linear or untrained models, as well as the derivation of
scaling laws provide the first elements of answers. | Marylou Gabrié, Surya Ganguli, Carlo Lucibello, Riccardo Zecchina | 2023-04-13T16:04:54Z | http://arxiv.org/abs/2304.06636v1 | # Neural networks: from the perceptron to deep nets
###### Abstract
Artificial networks have been studied through the prism of statistical mechanics as disordered systems since the 80s, starting from the simple models of Hopfield's associative memory and the single-neuron perceptron classifier. Assuming data is generated by a teacher model, asymptotic generalisation predictions were originally derived using the replica method and the online learning dynamics has been described in the large system limit. In this chapter, we review the key original ideas of this literature along with their heritage in the ongoing quest to understand the efficiency of modern deep learning algorithms. One goal of current and future research is to characterize the bias of the learning algorithms toward well-generalising minima in a complex overparametrized loss landscape with many solutions perfectly interpolating the training data. Works on perceptrons, two-layer committee machines and kernel-like learning machines shed light on these benefits of overparametrization. Another goal is to understand the advantage of depth while models now commonly feature tens or hundreds of layers. If replica computations apparently fall short in describing general deep neural networks learning, studies of simplified linear or untrained models, as well as the derivation of scaling laws, provide the first elements of answers. |
2306.13442 | Minibatch training of neural network ensembles via trajectory sampling | Most iterative neural network training methods use estimates of the loss
function over small random subsets (or minibatches) of the data to update the
parameters, which aid in decoupling the training time from the (often very
large) size of the training datasets. Here, we show that a minibatch approach
can also be used to train neural network ensembles (NNEs) via trajectory
methods in a highly efficient manner. We illustrate this approach by training
NNEs to classify images in the MNIST datasets. This method gives an improvement
to the training times, allowing it to scale as the ratio of the size of the
dataset to that of the average minibatch size which, in the case of MNIST,
gives a computational improvement typically of two orders of magnitude. We
highlight the advantage of using longer trajectories to represent NNEs, both
for improved accuracy in inference and reduced update cost in terms of the
samples needed in minibatch updates. | Jamie F. Mair, Luke Causer, Juan P. Garrahan | 2023-06-23T11:12:33Z | http://arxiv.org/abs/2306.13442v2 | # Minibatch training of neural network ensembles via trajectory sampling
###### Abstract
Most iterative neural network training methods use estimates of the loss function over small random subsets (or _minibatches_) of the data to update the parameters, which aid in decoupling the training time from the (often very large) size of the training datasets. Here, we show that a minibatch approach can also be used to train neural network ensembles (NNEs) via trajectory methods in a highly efficient manner. We illustrate this approach by training NNEs to classify images in the MNIST datasets. This method gives an improvement to the training times, allowing it to scale as the ratio of the size of the dataset to that of the average minibatch size which, in the case of MNIST, gives a computational improvement typically of two orders of magnitude. We highlight the advantage of using longer trajectories to represent NNEs, both for improved accuracy in inference and reduced update cost in terms of the samples needed in minibatch updates.
## I Introduction
Traditional machine learning (ML) applications aim to train a single model, usually by adjusting the parameters that define a complex function approximator like a neural network (NN), to perform well on some desired outcome as measured by a proxy loss function. A high performance on an appropriate loss function will entail a high performance on the metrics one cares about, for example accuracy in a classification problem [1]. There is strong empirical evidence from numerical experiments that increasing the scale of single models improves performance, as for example in the timely class of large language models (LLMs) [2].
However, to counteract the seemingly ever-increasing size of LLMs, there has also been significant work towards devising smaller models with similar capabilities in order to reduce the computational cost of training and of inference. A notable recent example is Stanford's Alpaca [3], based on LLaMA [4; 5], which can match GPT3.5 [6] despite being over an order of magnitude smaller. Another possibility is to replace one large model by an _ensemble_ of smaller models which can provide similar or better inferences while also being less costly to train and evaluate [7; 8]. This is the class of problems we focus on here.
Recently, we introduced an approach to train collectively an ensemble of models [9], in particular neural network ensembles (NNEs) where predictions at inference time are aggregated in a committee-like fashion (for classification, the ensemble prediction is the most voted for option, while for scoring, the ensemble prediction is the mean ensemble score). In Ref. [9] we defined an NNE in terms of the trajectory of the model parameters under a simple (discrete in time, diffusive in parameter space) dynamics, and trained it by biasing the trajectory that defines the NNE towards a small time-integrated loss. That is, once training is converged, the NNE corresponds to a discrete trajectory of the model parameters sampled from a distribution of trajectories exponentially "tilted" to have low time-integrated loss. This approach is borrowed from the study of glassy systems [10], where biasing dynamics according to time-integrated observables (e.g. the dynamical activity [11; 12]) is known to access low energy states for the configurations in the trajectory. Such low-loss trajectories can be accessed via importance sampling in trajectory space, such as transition path sampling (TPS) [13] as adapted to stationary dynamics and large deviation problems [14]. The ensuing trained NNE is a collection of NN models correlated by the underlying dynamics of the parameters and with a low value of the total loss due to the tilting.
While Ref. [9] provides a proof of principle of the trajectory sampling approach, it suffers from a significant computational bottleneck: importance sampling is a Monte Carlo scheme on trajectories, where updates are determined according to changes in the (time-integrated) loss evaluated over the _whole training set_, so that each Monte Carlo iteration scales with the size of the training data. For example, when training for the textbook MNIST digit classification problem in Ref. [9], we used only a small fraction of the training dataset (2048 samples of the available 60000) to make the problem tractable for a comprehensive study. At the same time, it is well known that ML models generalise poorly with small datasets [1]. This computational limitation makes the method of Ref. [9] impractical for more complex tasks. This has to be contrasted with gradient descent [1], where there is no need to sample faithfully from a distribution, so that the gradient of the loss can be estimated efficiently on very small subsets of training data, known as _minibatches_, giving rise to stochastic gradient descent (where the noise from the difference between the minibatch estimate and the full loss actually helps convergence to a good local minimum [1]).
In this paper, we resolve the problem above by implementing a minibatch method in the trajectory sampling
used in the training of the NNEs. We build on the approach of Ref. [15] for doing Monte Carlo sampling with small data batches. We show that our new method reduces the training cost by a factor given by the ratio of the average minibatch size (which we determine in an adaptive manner) to the size of the dataset. We illustrate this more efficient method on MNIST classification (using the whole MNIST dataset), showing in this case a computational gain of about two orders of magnitude. Our minibatch approach also allows us to highlight the key features of the trajectory NNE method, showing the advantage of using longer trajectories to represent NNEs both in terms of accuracy and data requirement for training.
The rest of the paper is organised as follows. In Sec. II we describe the theory, reviewing the idea of NNEs as trajectories of a stochastic dynamics, training as tilted trajectory sampling, and the central approach to perform mini-batch trajectory Monte Carlo. In Sec. III we present the adaptive minibatch trajectory sampling method for training NNEs. We illustrate the method with two examples in Sec. IV, an exactly solvable linear perceptron, and the full MNIST digit classification problem. In Sec. V we give our conclusions, and further technical details are provided in the Appendices.
## II Theoretical background
### Neural network ensemble as a trajectory of neural networks
In Ref. [9], we proposed that a NNE could be obtained by evolving the parameters of a NN model under a suitable stochastic dynamics, where the NNE is composed of the sequence of NNs in time. If, at time step \(t\), the NN is defined by \(\mathbf{\theta}_{t}\), this dynamics would give rise to a trajectory \(\mathbf{\theta}_{1}\rightarrow\mathbf{\theta}_{2}\rightarrow\ldots\rightarrow\mathbf{ \theta}_{\tau}\), with the NNE as the set of visited models under the dynamics, \(\mathbf{\Theta}=[\mathbf{\theta}_{1},\mathbf{\theta}_{2},\ldots,\mathbf{\theta}_{\tau}]\). As the aim is to minimise the loss over the ensemble
\[\mathcal{L}(\mathbf{\Theta})=\sum_{t=1}^{\tau}L(\mathbf{\theta}_{t}) \tag{1}\]
where \(L(\mathbf{\theta}_{t})\) is the standard loss for the \(t\)-th model (see below for a specific form of the loss), training is equivalent to finding a suitable dynamics whose typical trajectories are those with low time-aggregated loss, \(\mathcal{L}(\mathbf{\Theta})\).
Once trained, this dynamics is defined in terms of (in general time-dependent) stochastic dynamics \(\mathcal{M}(\tau,\sigma,s)\equiv\{M_{t;\sigma,s}\}_{t=1}^{\tau-1}\), where \(M_{t;\sigma,s}(\mathbf{\theta}^{\prime}|\mathbf{\theta})\) are the transition probabilities at each time step, such that the NNE corresponds to a trajectory generated using dynamics \(\mathcal{M}(\tau,\sigma,s)\). This approach is illustrated in Fig. 1(a): the NNE is a discrete-time trajectory, where each state along the trajectory corresponds to one of the NNs that form the ensemble. Starting from the first model, \(\mathbf{\theta}_{1}\), each subsequent model is sampled according
Figure 1: (a) An NNE as a stochastic trajectory, and a sketch of trajectory sampling. Each state in the trajectory corresponds to one NN model in the NNE. The proposed path sampling update from trajectory \(\mathbf{\Theta}\) to trajectory \(\mathbf{\Theta}^{\prime}\) is via a stochastic bridge in which only one model is modified. (b) The loss function of a model with parameters \(\mathbf{\theta}\). Each layer represents a data element. The loss, \(L(\mathbf{\theta})\), is given by the average of the individual losses for each of the elements of the dataset, \(\{l_{i}(\mathbf{\theta})\}_{1:N}\). The minibatch estimate of the loss is instead the average over a random selection \(\{r_{j}\}_{1:b}\) of \(b\) data points from the dataset. (c) Exactly solvable linear perceptron: mean NNE loss \(\langle\mathcal{L}\rangle\) per NNE size \(\tau\), as a function of \(s\) for various \(\tau\). Lines are analytical results. Symbols are the numerical results using the minibatch TPS algorithm for training (with \(20\times 10^{6}\) TPS epochs as a “burn-in” followed by \(20\times 10^{6}\) epochs for convergence of training).
to \(\mathcal{M}(\tau,\sigma,s)\). Three hyperparameters determine the dynamics that produces the NNE: the final time \(\tau\) sets the number of models in the ensemble; \(\sigma\) sets the "stiffness" of the chain (see below for details), that is, how correlated subsequent models are to each other, with small \(\sigma\) corresponding to large stiffness; and \(s\) controls the level of the overall loss, with larger \(s\) corresponding to lower loss. The central idea is that a correlated chain of models generated as a trajectory from dynamics \(\mathcal{M}(\tau,\sigma,s)\) at large \(s\) provides a well trained NNE [9].
### Learning as a trajectory sampling problem
Obtaining a suitable dynamics \(\mathcal{M}(\tau,\sigma,s)\) that produces well-trained NNEs as its typical trajectories is a difficult task. We can however resolve this problem by means of trajectory sampling techniques [9]. Consider as a starting point an untrained dynamics \(\mathcal{M}(\tau,\sigma,0)\) with the same transition probabilities at every time step \(t\)[9]
\[M_{\sigma}(\mathbf{\theta}_{t+1}|\mathbf{\theta}_{t})\propto\exp\left[-\frac{1}{2 \sigma^{2}}(\mathbf{\theta}_{t}-\mathbf{\theta}_{t+1})^{2}\right], \tag{2}\]
with \(\int_{\mathbf{\theta}^{\prime}}M_{\sigma}(\mathbf{\theta}^{\prime}|\mathbf{\theta})=1\). This dynamics corresponds to a discrete-time Gaussian diffusion process that knows nothing about the loss (1). As such, a typical trajectory drawn from it will correspond to a random (and therefore untrained) NNE. As indicated above, the parameter \(\sigma\) sets the variance of the diffusive steps, so that for smaller \(\sigma\) subsequent models are more correlated, while for \(\sigma\to\infty\) all the models of the chain are uncoupled.
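For illustration, a random (untrained) NNE, i.e. one trajectory of this diffusive dynamics, can be sampled in a few lines of NumPy; the flattened-parameter representation and the function name are our own choices:

```python
import numpy as np

def sample_nne_trajectory(theta1, tau, sigma, rng=None):
    # Eq. (2): each successive model is a Gaussian perturbation of scale
    # sigma of its predecessor; theta1 is the first model's parameter vector.
    rng = rng or np.random.default_rng()
    traj = [np.asarray(theta1, dtype=float)]
    for _ in range(tau - 1):
        traj.append(traj[-1] + sigma * rng.standard_normal(traj[-1].shape))
    return np.stack(traj)  # shape (tau, num_params): one row per NN model
```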
The dynamics (2) produces an _ensemble of trajectories_ (and therefore an ensemble of NNEs) with each trajectory having probability
\[P(\mathbf{\Theta};\sigma)=\frac{1}{\mathcal{Z}_{\tau}(\sigma)}p(\theta_{1})\prod_ {t=1}^{\tau-1}\exp\left[-\frac{1}{2\sigma^{2}}(\mathbf{\theta}_{t}-\mathbf{\theta}_{t+ 1})^{2}\right], \tag{3}\]
given by the product of the \(M_{\sigma}\) at each step. Here \(p(\theta_{1})\) is the probability used to draw the first model, and \(\mathcal{Z}_{\tau}(\sigma)\) a normalisation constant (the "partition sum" of the trajectory ensemble). In order to obtain trajectories with low overall loss, what we aim is to define a new trajectory ensemble that is exponentially "tilted" with respect to (3), as is standard in large deviation studies of dynamics (e.g., Ref. [16]), that is [9]
\[P(\mathbf{\Theta};\sigma,s)=\frac{1}{\mathcal{Z}_{\tau}(\sigma,s)}e^{-s\mathcal{L }(\mathbf{\Theta})}P(\mathbf{\Theta};\sigma). \tag{4}\]
For large \(s\), a typical trajectory from this ensemble will correspond to a NNE with low overall loss. The learned dynamics \(\mathcal{M}(\tau,\sigma,s)\) of the previous subsection would be the dynamics that produces trajectories distributed according to the tilted distribution (4).
One way to avoid having to determine the \(\mathcal{M}(\tau,\sigma,s)\) dynamics explicitly is to directly sample trajectories from the tilted distribution (4). In this way, convergence of the training, that is, finding \(\mathcal{M}(\tau,\sigma,s)\), coincides with convergence of the trajectory sampling of (4), as we do here by means of an importance sampling method in trajectory space based on transition path sampling (TPS) [13].
### Monte Carlo in trajectory space
Consider a Monte Carlo scheme for sampling trajectories, specifically a Metropolis-Hastings approach [13]: given a current trajectory \(\mathbf{\Theta}\), the probability to change to a new trajectory \(\mathbf{\Theta}^{\prime}\) is given by
\[p(\mathbf{\Theta}^{\prime}|\mathbf{\Theta})=g(\mathbf{\Theta}^{\prime}|\mathbf{\Theta})A(\mathbf{ \Theta}^{\prime},\mathbf{\Theta}), \tag{5}\]
where the factor \(g(\mathbf{\Theta}^{\prime}|\mathbf{\Theta})\) is the probability to propose the move, and \(A(\mathbf{\Theta}^{\prime},\mathbf{\Theta})\) is that to accept it. For the above to converge to (4) we need to impose that it obeys detailed balance with respect to (4), which implies
\[\frac{A(\mathbf{\Theta}^{\prime},\mathbf{\Theta})}{A(\mathbf{\Theta},\mathbf{\Theta}^{\prime} )}=\frac{P(\mathbf{\Theta}^{\prime};\sigma)g(\mathbf{\Theta}|\mathbf{\Theta}^{\prime})}{P( \mathbf{\Theta};\sigma)g(\mathbf{\Theta}^{\prime}|\mathbf{\Theta})} \tag{6}\]
If the proposed moves obey detailed balance with respect to the original untilted dynamics (3),
\[\frac{g(\mathbf{\Theta}^{\prime}|\mathbf{\Theta})}{g(\mathbf{\Theta}|\mathbf{\Theta}^{\prime} )}=\frac{P(\mathbf{\Theta}^{\prime};\sigma)}{P(\mathbf{\Theta};\sigma)}, \tag{7}\]
then the acceptance ratio reduces to
\[\frac{A(\mathbf{\Theta}^{\prime},\mathbf{\Theta})}{A(\mathbf{\Theta},\mathbf{\Theta}^{\prime} )}=e^{-s\left[\mathcal{L}(\mathbf{\Theta}^{\prime})-\mathcal{L}(\mathbf{\Theta}) \right]} \tag{8}\]
In standard TPS, (7) is realised by proposing trajectories by simply running the original dynamics (3) (via "shooting" or "shifting" moves, see Ref. [13]). This approach, however, carries an exponential cost in the time extent of trajectories, since the loss difference in the exponent of (8) scales linearly with time. This can be mitigated [9] by proposing small changes in a trajectory, see Fig. 1(a): the proposed trajectory is one where only the state at one time is modified; as this has to obey (7), it has to be done as a _Brownian bridge_[17; 18]. That is, as conditioned dynamics starting in the previous state and returning to the state after the one changed (see Ref. [9] for details).
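For the discrete Gaussian dynamics of Eq. (3), this bridge proposal has a simple closed form: the resampled model is drawn from a Gaussian centred on the midpoint of its two neighbours, with variance \(\sigma^{2}/2\), which satisfies (7) by construction. A sketch (our own, for an interior time step and NumPy arrays):

```python
import numpy as np

def brownian_bridge_proposal(traj, t, sigma, rng):
    # Resample the model at interior time 0 < t < tau-1, conditioned on its
    # neighbours, cf. Fig. 1(a); only this one model changes in the proposal.
    new = traj.copy()
    mean = 0.5 * (traj[t - 1] + traj[t + 1])
    new[t] = mean + (sigma / np.sqrt(2.0)) * rng.standard_normal(traj[t].shape)
    return new
```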
## III Minibatch path sampling
While the Brownian bridge version of TPS ameliorates the exponential-in-time cost in the trajectory sampling, there is another source of computational slowness coming from the evaluation of the trajectory loss, cf. (8). With the Brownian bridge TPS, Fig. 1(a), only a single model changes between the current and proposed trajectory, and the change in trajectory loss in (8) is therefore
given by that the change of that model's loss. If this change is at time \(t\), this requires the evaluation of \(L(\mathbf{\theta}_{t}^{\prime})\), which at training is the average of the loss under that model for each of the \(N\) training data points,
\[L(\mathbf{\theta}_{t}^{\prime})=\frac{1}{N}\sum_{i=1}^{N}l_{i}(\mathbf{\theta}_{t}^{ \prime}) \tag{9}\]
where \(l_{i}(\mathbf{\theta}_{t})\) is the loss for the inference for data point \(i\). This means that in each Monte Carlo iteration, computing the change in loss inevitably scales with the training set size \(N\) (together with a cost that depends on the size and architecture of the NN being considered). This evaluation can therefore become computationally infeasible for larger datasets. For example, in Ref. [9], we had to reduce the training dataset by almost an order of magnitude to show a proof-of-principle of the method for the MNIST classification problem.
In contrast to gradient descent, one cannot simply replace the loss over the whole training set with an estimate based on a small subset, or minibatch. For gradient descent, the error that this introduces becomes a source of noise, converting it into stochastic gradient descent (and its adaptive variants [19, 1, 20]). This in turn gives rise to the usual advantages that an exploit/explore strategy brings, in this case minimising the loss locally by descending the gradient versus exploring the loss landscape. Since Monte Carlo aims to sample from a distribution, Eq. (4) in our case, a straightforward replacement of the loss by a minibatch approximation would lead to failure of the necessary detailed balance condition.
This problem has been considered before in the context of Bayesian inference, where so-called "tall datasets" make Monte Carlo inefficient, see e.g., Refs. [21, 22, 23]. In what follows we build on the approach put forward in Ref. [15] to develop an adaptive minibatch trajectory sampling method.
### Monte Carlo with minibatches
We first describe the scheme of Ref. [15] in the context of the Monte Carlo annealing of a system with degrees of freedom \(\mathbf{\Theta}\) (a NNE in our case) and target distribution (4), and in the next subsection we extend the approach to integrate it with TPS in an adaptive manner.
Let us define the quantity \(\Delta(\mathbf{\Theta}^{\prime},\mathbf{\Theta})\) as the logarithm of the change in weight under a proposed move,
\[\Delta(\mathbf{\Theta}^{\prime},\mathbf{\Theta})=-s\left[\mathcal{L}(\mathbf{\Theta}^{ \prime})-\mathcal{L}(\mathbf{\Theta})\right], \tag{10}\]
and choose our acceptance function as
\[A(\mathbf{\Theta}^{\prime},\mathbf{\Theta})=(1+e^{-\Delta(\mathbf{\Theta}^{\prime},\mathbf{\Theta})})^{-1}, \tag{11}\]
which satisfies the detailed balance condition (8). Monte Carlo works by generating a proposed move from \(g(\mathbf{\Theta}^{\prime}|\mathbf{\Theta})\), and then accepting the move if
\[A(\mathbf{\Theta}^{\prime},\mathbf{\Theta})>V, \tag{12}\]
where \(V\) is a uniformly distributed random number, \(V\sim\mathcal{U}(0,1)\). As (11) is the cumulative distribution function of the (zero-mean) logistic distribution evaluated at \(\Delta(\mathbf{\Theta}^{\prime},\mathbf{\Theta})\), we can equivalently write the acceptance test as \(\Delta(\mathbf{\Theta}^{\prime},\mathbf{\Theta})>X_{\text{log}}\), where \(X_{\text{log}}\) is a logistically distributed random variable. As this distribution is symmetric around zero, we can equally write the test (12) as
\[\Delta(\mathbf{\Theta}^{\prime},\mathbf{\Theta})+X_{\text{log}}>0. \tag{13}\]
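In code, the equivalence between (12) and (13) amounts to swapping the uniform draw for logistic noise; a minimal NumPy sketch (names are ours):

```python
import numpy as np

def logistic_accept(delta, rng):
    # Eq. (13): accept iff delta + X_log > 0, with X_log zero-mean logistic
    # noise; equivalent to drawing V ~ U(0,1) and testing V < A, Eq. (12).
    return delta + rng.logistic() > 0.0
```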
The loss \(\mathcal{L}(\mathbf{\Theta})\) that enters in (10) is the average of the loss over the entire training data set
\[\mathcal{L}(\mathbf{\Theta})=\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}_{i}(\mathbf{\Theta} )=\frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{\tau}l_{i}(\mathbf{\theta}_{t}), \tag{14}\]
where \(\mathcal{L}_{i}(\mathbf{\Theta})\) is the (trajectory) loss for the inference on data point \(i\). Consider now an approximation of the loss in terms of a random minibatch of size \(b\)
\[\mathcal{L}(\mathbf{\Theta})\approx\frac{1}{b}\sum_{j=1}^{b}\mathcal{L}_{r(j)}( \mathbf{\Theta}), \tag{15}\]
where \(r(j)\) specifies a random permutation of the indices of the elements in the training dataset, cf. Fig. 1(b). In terms of the above we can define an approximation to (10) [15],
\[\Delta^{*}(\mathbf{\Theta},\mathbf{\Theta}^{\prime})=-\frac{s}{b}\sum_{j=1}^{b}\left[ \mathcal{L}_{r(j)}(\mathbf{\Theta}^{\prime})-\mathcal{L}_{r(j)}(\mathbf{\Theta}) \right]. \tag{16}\]
If one could replace \(\Delta(\mathbf{\Theta},\mathbf{\Theta}^{\prime})\) by \(\Delta^{*}(\mathbf{\Theta},\mathbf{\Theta}^{\prime})\), then there would be a computational gain that would scale as the ratio of the size \(N\) of the training dataset to that of the minibatch \(b\) (as it is necessary to compute \(\Delta(\mathbf{\Theta},\mathbf{\Theta}^{\prime})\) for each Monte Carlo iteration). The way to do so is as follows [15].
Since the elements of the minibatch are chosen in an identical and independent manner, from the central limit theorem and for large enough \(b\), we expect \(\Delta^{*}\) to be normally distributed around \(\Delta\) with some variance \(\rho^{2}(\Delta^{*})\). That is, \(\Delta^{*}=\Delta+X_{\text{norm}}\), where \(X_{\text{norm}}\) is an approximately normal zero-mean random correction of variance \(\rho^{2}(\Delta^{*})\). As a logistically distributed random variable, \(X_{\text{log}}\), is almost normally distributed, we can write \(X_{\text{log}}\) as \(X_{\text{log}}=X_{\text{norm}}+X_{\text{corr}}\), where \(X_{\text{corr}}\) is the (hopefully small) correction to normality. Inserting this decomposition of \(X_{\text{log}}\) into the acceptance test (13), we can replace \(\Delta\) with the minibatch estimate:
\[\Delta^{*}(\mathbf{\Theta}^{\prime},\mathbf{\Theta})+X_{\text{corr}}>0. \tag{17}\]
This new acceptance only depends on the minibatch estimates of the loss and -- if accurate -- will be efficient for \(b\ll N\). The test (17) will asymptotically give the correct acceptance distribution provided that (i) that the fluctuations \(X_{\text{norm}}\) of \(\Delta^{*}\) around \(\Delta\) are normally distributed (which can be checked to adjust the size of \(b\)), and (ii)
that the distribution \(C_{\rm corr}(X;\rho)\) for the random correction \(X_{\rm corr}\) can be numerically computed with low error (see [15] for analysis of the errors).
Thanks to the CLT, condition (i) is relatively easy to satisfy for a large enough minibatch sample size. If the batch size grows beyond the size of the dataset, we do not need this approximation and can use (12). Condition (ii) holds when the sample error on \(\Delta^{*}\) is sufficiently small [15]. In practice, one can compute the distribution \(C_{\rm corr}(X;\rho)\) accurately enough only for standard deviations of \(X_{\rm norm}\) such that \(\rho\lesssim 1.1\) (see below for our implementation).
As computing \(C_{\rm corr}(X;\rho)\) for each empirical \(\rho\approx 1\) is numerically expensive (see Appendix A for details), we can instead compute a single \(\rho^{2}=1\) correction and then add a further "normal correction" with small variance \(1-\rho^{2}\), so that the total is a normal random variable with fixed variance \(1\). Putting all of these together, we get the minibatch acceptance test that we use
\[\Delta^{*}(\mathbf{\Theta},\mathbf{\Theta}^{\prime})+X_{\rm nc}+X_{\rm corr}>0, \tag{18}\]
where \(X_{\rm nc}\) stands for the normal correction random variable, of zero mean and variance \(1-\rho^{2}\).
### Generalisation to enable training with TPS
The TPS scheme that we use relies on proposing trajectory updates, cf. Fig. 1(a), consisting of _bridging_ moves (for changes in the middle of the trajectory) and _shooting_ moves (for changes at the endpoints); see Ref. [9] for details. In either case, only a single model (i.e. a single time step) is altered, thus reducing the size of the update and improving the acceptance rate.
Once a candidate trajectory is proposed, the minibatch acceptance criterion of the previous subsection is applied. The specific steps are as follows, defining an adaptive minibatch scheme (a runnable sketch follows the list):
* We draw \(m\) random samples from the training set, which are used to calculate an estimate of \(\Delta^{*}(\mathbf{\Theta},\mathbf{\Theta}^{\prime})\) and \(\rho^{2}(\Delta^{*})\), cf. (16). If \(\rho^{2}>1\), \(m\) more samples are drawn without replacement, updating \(\Delta^{*}\) and \(\rho\) accordingly (and terminating once all samples are used). In this way, we form a minibatch of overall size \(b\) such that the sample variance of \(\Delta^{*}\) is less than or equal to \(1\).
* We draw the random correction \(X_{\rm nc}\) from a normal distribution and \(X_{\rm corr}\) from \(C_{\rm corr}(X;\rho)\). With these we use (18) to accept or reject the proposed change to the trajectory. [If the total minibatch size \(b\) equals \(N\), then we use the original test (12), as \(\Delta^{*}\) coincides with \(\Delta\) in that case.] Unlike Ref. [15], exact sampling of (4) is not required to effectively train our NNE, and we do not test the normality assumption of \(\Delta^{*}(\mathbf{\Theta},\mathbf{\Theta}^{\prime})\). This is equivalent to setting the threshold \(\delta\) of Ref. [15] to infinity. Justification for this simplifying choice is given in Appendix C.
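Putting Eqs. (16)-(18) and the fallback test together, a runnable NumPy counterpart of this scheme (and of Algorithm 1 below) might look as follows. The function and argument names, the eager precomputation of per-sample terms, and the exact placement of the cut-off check (discussed after Algorithm 1) are our own simplifications; `sample_x_corr` is assumed to draw from the precomputed \(\rho=1\) correction distribution.

```python
import numpy as np

def minibatch_accept(delta_i, m, sample_x_corr, rng, c0=5.0, c1=10.0):
    # delta_i: per-sample terms -s*[L_i(Theta') - L_i(Theta)] for the whole
    # dataset (a real implementation would compute them lazily, in chunks).
    # Assumes the chunk size m >= 2 so the sample variance is defined.
    N = len(delta_i)
    order = rng.permutation(N)
    b = 0
    while True:
        b = min(b + m, N)
        batch = delta_i[order[:b]]
        delta_star = batch.mean()              # Eq. (16)
        rho2 = batch.var(ddof=1) / b           # sample variance of Delta*
        if b == N:                             # exact test, Eqs. (11)-(12)
            return rng.random() < 1.0 / (1.0 + np.exp(-delta_star))
        if rho2 <= 1.0:
            break                              # minibatch is large enough
        if abs(delta_star) / np.sqrt(rho2) - c1 > c0:
            return delta_star > 0.0            # cut-off test (see below)
    x_nc = rng.normal(0.0, np.sqrt(1.0 - rho2))     # variance 1-rho^2 top-up
    return delta_star + x_nc + sample_x_corr() > 0.0  # Eq. (18)
```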
```
1:input Initial trajectory \(\mathbf{\Theta}_{1}\), dataset \(\{x_{1},\ldots,x_{N}\}\), trajectory length \(\tau\), trajectory coupling \(\sigma\), minibatch chunk size \(m\), pre-computed correction \(C_{\rho=1}(X)\) distribution, cut-off hyperparameters \(c_{0}\) and \(c_{1}\) and training epochs \(E\).
2:output Sequence of trajectories \(\{\mathbf{\Theta}_{1},\mathbf{\Theta}_{2},\ldots,\mathbf{\Theta}_{E+1}\}\)
3:for\(k\in[1,2,\ldots,E]\)do
4: Sample proposal \(\mathbf{\Theta}^{\prime}\) using \(g(\mathbf{\Theta}^{\prime}|\mathbf{\Theta}_{k},\tau,\sigma)\) (i.e. shooting or bridging)
5: Sample \(\Delta^{*}(\mathbf{\Theta}^{\prime},\mathbf{\Theta}_{k})\) and \(\rho^{2}\) using \(m\) randomly selected samples, without replacement
6:\(b\gets N\)
7:while\(\rho^{2}>1\)and\(b<N\), and \(\max(\frac{|\Delta^{*}(\mathbf{\Theta},\mathbf{\Theta}^{\prime})|}{\rho}-c_{1},0)<=c_{0}\)do
8: Select \(m\) more randomly selected samples, without replacement and update estimates for \(\Delta^{*}(\mathbf{\Theta}^{\prime},\mathbf{\Theta}_{k})\) and \(\rho^{2}\)
9:\(b\gets b+m\)
10:endwhile
11:if\(b=N\)then
12: Sample random number \(V\sim\mathcal{U}(0,1)\)
13:if\(V<g(\Delta^{*}(\mathbf{\Theta}^{\prime},\mathbf{\Theta}_{k}))\)then
14: Accept with \(\mathbf{\Theta}_{k+1}\leftarrow\mathbf{\Theta}^{\prime}\)
15:else
16: Reject with \(\mathbf{\Theta}_{k+1}\leftarrow\mathbf{\Theta}_{k}\)
17:endif
18:continue
19:endif
20: Sample \(X_{\rm nc}\sim\mathcal{N}(0,1-\rho^{2})\) and \(X_{\rm corr}\sim C_{\rho=1}(X)\)
21:if\(\Delta^{*}(\mathbf{\Theta}^{\prime},\mathbf{\Theta}_{k})+X_{\rm nc}+X_{\rm corr}>0\)then
22: Accept with \(\mathbf{\Theta}_{k+1}\leftarrow\mathbf{\Theta}^{\prime}\)
23:else
24: Reject with \(\mathbf{\Theta}_{k+1}\leftarrow\mathbf{\Theta}_{k}\)
25:endif
26:endfor
```
**Algorithm 1** Minibatch TPS Training
One issue with this simple algorithm is that the minibatch size grows if the sample variance is much larger than \(1\). The biasing parameter, \(s\), scales the sample variance with \(s^{2}\), while taking \(b\) samples only reduces this variance by a factor of \(b\). For fixed \(\mathbf{\Theta}\) and \(\mathbf{\Theta}^{\prime}\), we would expect the minibatch size to change with \(s\) to compensate for the increased sample variance. For \(s\to\infty\), this would revert the minibatch method to the original full-dataset acceptance test. To avoid this, we introduce a _cut-off test_ which halts increasing the minibatch size when \(\Delta^{*}\) is sufficiently far away from the origin and performs an alternative acceptance test. The alternative acceptance test is broken into two stages. Firstly, we approximate the acceptance function to be equal to \(0\) when \(x<-c_{0}\) and \(1\) when \(x>c_{0}\) for some positive constant \(c_{0}\). We choose \(c_{0}\) to be sufficiently high that the approximate acceptance function is unchanged for \(-c_{0}\leq x\leq c_{0}\) without having to renormalise. Secondly, we choose another threshold for which the true mean \(\Delta\) is approximately guaranteed to be within \(\Delta^{*}\pm c_{1}\rho\). We use \(\max(\rho^{-1}|\Delta^{*}(\mathbf{\Theta},\mathbf{\Theta}^{\prime})|-c_{1},0)>c_{0}\) as our cut-off acceptance test. In our experiments, using
\(c_{1}=10\) and \(c_{0}=5\) gave good results. The final combined algorithm is given in Algorithm 1. Setting \(c_{0}\) or \(c_{1}\) to be high makes this cut-off less likely to be triggered, increasing the minibatch size and therefore the computational cost; setting them too close to \(0\), however, will result in an inaccurate acceptance test which does not obey detailed balance.
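To make the combined procedure concrete, here is a minimal Python sketch (our own illustration, not the authors' Julia implementation) of one accept/reject decision of Algorithm 1, with the cut-off branch of the preceding paragraph made explicit; the helpers `delta_estimate` and `sample_correction` are assumptions standing in for the real estimators, and a logistic form is assumed for the exact acceptance test (12).

```python
import numpy as np

rng = np.random.default_rng(0)

def accept_proposal(delta_estimate, sample_correction, N, m, c0=5.0, c1=10.0):
    """One accept/reject decision of Algorithm 1, cut-off branch included.

    delta_estimate(b) returns (Delta*, rho^2) estimated from a minibatch of
    b samples drawn without replacement; sample_correction() draws X_corr
    from the pre-computed C_{rho=1} table of Appendix A.
    """
    b = m
    delta, rho2 = delta_estimate(b)
    # Grow the minibatch until Var[Delta*] <= 1, the dataset is exhausted,
    # or the cut-off test fires.
    while (rho2 > 1.0 and b < N
           and max(abs(delta) / np.sqrt(rho2) - c1, 0.0) <= c0):
        b = min(b + m, N)
        delta, rho2 = delta_estimate(b)
    if b == N:
        # Full dataset: exact test (12); a logistic acceptance is assumed.
        return rng.uniform() < 1.0 / (1.0 + np.exp(-delta))
    if rho2 > 1.0:
        # The loop was halted by the cut-off test, so |Delta| is almost
        # surely beyond +-c0 and the decision is effectively deterministic.
        return delta > 0.0
    # Minibatch test (18); the loop guarantees rho^2 <= 1 here.
    x_nc = rng.normal(0.0, np.sqrt(1.0 - rho2))
    x_corr = sample_correction()
    return delta + x_nc + x_corr > 0.0

# Smoke test with stand-ins: a synthetic estimator with variance 4/b, and a
# zero stub in place of a genuine draw from C_{rho=1}.
demo_estimate = lambda b: (0.5 + rng.normal(0.0, 2.0 / np.sqrt(b)), 4.0 / b)
print(accept_proposal(demo_estimate, lambda: 0.0, N=1000, m=16))
```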
## IV Examples of training NN Ensembles via minibatch trajectory sampling
We now apply the method of Sec. III for the training of NNEs in two illustrative problems. The first is that of a linear perceptron, which is simple enough to be solved exactly, allowing us to directly compare our method with the expected results. The second is the more complex, but now standard, problem of MNIST digit classification [24].
### NNE of linear perceptrons
We first test the method with a linear classification problem, also considered in Ref. [9]. This problem can be defined as follows: we generate a set of independent random \(D\)-dimensional points, \(\{\mathbf{x}_{i}\}_{i=1}^{N}\) (setting the last component \(x_{D}=1\)), together with a \(D\)-dimensional random weight vector \(\mathbf{w}\) that we use to assign labels \(y_{i}=\mathbf{w}\cdot\mathbf{x}_{i}\) to each of the points \(\mathbf{x}_{i}\). The aim is to train the parameters \(\mathbf{\Theta}\) of an ensemble of \(\tau\) linear perceptrons, where the prediction of the \(t\)-th perceptron for the \(i\)-th data point is \(\mathbf{\theta}_{t}\cdot\mathbf{x}_{i}\). For training, we consider the mean-squared sample loss for each model in the NNE, which for data point \(i\) reads
\[l_{i}^{\rm(MSE)}(\mathbf{\theta}_{t})=\frac{1}{2}(y_{i}-\mathbf{\theta}_{t}\cdot\mathbf{x }_{i})^{2}. \tag{19}\]
The NNE loss over the training dataset, cf. (14), in turn reads
\[\mathcal{L}(\mathbf{\Theta})=\frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{\tau}\frac{1}{2}(y_{i}-\mathbf{\theta}_{t}\cdot\mathbf{x}_{i})^{2}. \tag{20}\]
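As an aside, the loss (20) is straightforward to evaluate numerically; the sketch below is our own minimal NumPy version, with randomly generated data standing in for the setup of Appendix B.1.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, tau = 256, 2, 8                  # data size, input dim, trajectory length
X = rng.uniform(0.0, 1.0, size=(N, D))
X[:, -1] = 1.0                         # last component fixed to 1 (bias trick)
w_true = rng.uniform(-1.0, 1.0, size=D)
y = X @ w_true                         # labels y_i = w . x_i

Theta = rng.normal(size=(tau, D))      # trajectory of tau perceptrons

def nne_loss(Theta, X, y):
    """NNE loss (20): mean over data of the summed per-model MSE (19)."""
    preds = X @ Theta.T                # (N, tau): entry (i, t) is theta_t . x_i
    return 0.5 * np.mean(np.sum((y[:, None] - preds) ** 2, axis=1))

print(nne_loss(Theta, X, y))
```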
We now implement the adaptive minibatch estimation of the trajectory loss described in Alg. 1 to train this NNE for various \(\tau\) and \(s\). Figure 1(c) demonstrates that the numerics obtained in this way coincide with the analytic results from the exact trajectory distribution (4) [9]. This is an elementary proof-of-principle of the method.
Exact distributions for \(\mathbf{w}\) and \(\mathbf{x}\), along with experimental hyperparameters are provided in Appendix B.1.
### NNE for MNIST digit classification
The second problem we consider is that of an ensemble of models for classification of digits using the standard set of handwritten MNIST images [24], see Fig. 2(a). In this case, each NN in the NNE is a small convolutional neural network (CNN) whose architecture is described in Appendix B.2. For a given image \(X\), one of these CNNs with parameters \(\mathbf{\theta}\) provides the probability \(y(k|X;\mathbf{\theta})\) that the image corresponds to digit \(k\), for \(k=0,\ldots,9\). The appropriate loss function is the mean cross entropy, which for the data point \(i\) and the \(t\)-th model reads
\[l_{i}^{\rm(MNIST)}(\mathbf{\theta}_{t})=-\sum_{k=0}^{9}\delta_{z_{i},k}\log y(k|X_ {i};\mathbf{\theta}_{t}), \tag{21}\]
where \(z_{i}\) is the true classification of \(X_{i}\). The training loss for the NNE then reads
\[\mathcal{L}(\mathbf{\Theta})=-\frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{\tau}\sum_{k=0}^{9}\delta_{z_{i},k}\log y(k|X_{i};\mathbf{\theta}_{t}). \tag{22}\]
We train trajectories for a fixed number of epochs \(E\) (i.e., TPS iterations), chosen large enough for the trajectory loss to appear to converge; we track convergence in terms of the minibatch loss estimate to avoid further computational cost. One can observe convergence for a small number of sample runs in Figs. 3(a, b). Specifically, we allow for a "burn-in" of the initial \(20\times 10^{6}\) epochs, and subsequently observe the average trajectory loss for the next \(20\times 10^{6}\) epochs. For each set of hyperparameters \((\tau,s)\) we perform six independent trainings, starting each training run from a random initial seed trajectory. The time-averaged loss thus obtained is shown in Fig. 2(b) as a function of \(s\) for various NNE sizes \(\tau\). We note the following: (i) for every \(\tau\) the loss per model in the trained NNE decreases with \(s\), as should be the case when converging to (4); (ii) the larger \(\tau\), the lower the loss, indicating that longer trajectories give rise to more accurate NNEs; (iii) there appears to be a transition from high to low loss with \(s\), which could indicate (dynamical) phase coexistence, as seen in many other trajectory ensemble problems [25].
A similar trend to that of the loss is observed in the accuracy on the generalisation test set. In Fig. 2(c) we show the accuracy of the final ensembles obtained after all training epochs. We use the NNE to collectively make a prediction on each sample by letting each model "vote" for their predicted class, with the class with the most votes getting selected (in the event of a tie, the smaller digit is selected). We plot this ensemble accuracy, averaged over six independent runs, for the same hyperparameters of panel (b).
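A minimal sketch of this voting rule (our own illustration; `probs` holds each model's class probabilities for a batch of inputs, and ties fall to the smaller digit because `argmax` returns the first maximal index):

```python
import numpy as np

def ensemble_predict(probs):
    """Majority vote over the NNE.

    probs: shape (tau, n_samples, 10), each model's class probabilities.
    np.argmax breaks ties towards the first (i.e. smaller) digit.
    """
    votes = probs.argmax(axis=2)                            # (tau, n_samples)
    counts = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=10), 0, votes)   # (10, n_samples)
    return counts.argmax(axis=0)                            # (n_samples,)

rng = np.random.default_rng(2)
fake_probs = rng.dirichlet(np.ones(10), size=(5, 3))        # 5 models, 3 inputs
print(ensemble_predict(fake_probs))
```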
As a proxy for the computational cost of training, in Fig. 2(d) we show the average batch size per epoch \(\langle b\rangle\) necessary for training to a certain value of the loss per model. Since the size of the training set is \(N=6\times 10^{4}\), the ratio \(N/\langle b\rangle\) gives the computational gain of using the minibatch method. From Figs. 2(b, c), we know that longer trajectories can yield a lower loss at smaller values of \(s\). From Fig. 2(d), we see that longer trajectories (larger NNEs) are also computationally more efficient to
train, requiring a smaller mean minibatch size \(\langle b\rangle\) than smaller NNEs for the same level of overall loss, with a computational gain in excess of two orders of magnitude for the minibatch method over the original full-loss method of Ref. [9].
## V Conclusions
In this paper we have presented a variant of the minibatch Monte Carlo method of Ref. [15] adapted to the sampling of trajectories that correspond to neural network ensembles [9]. We have shown that this technique can be used to train NNEs via trajectory sampling with an improvement in computational efficiency of up to two orders of magnitude. While we have focused, for concreteness, on supervised learning applications, we note that an adaptive trajectory sampling technique like the one presented here should also be very useful in Monte Carlo based _reinforcement learning_ (RL), where datasets do not have a fixed size. We expect that this method will provide a stable training technique on these RL problems, which have exhibited brittle behaviour when continuously trained on changing objectives [26; 27; 28; 29].
Our results here add to the growing number of recent works studying the training dynamics of NNs from the statistical mechanics point of view, see e.g., Refs. [30; 31; 32; 33; 34]. Most of these consider the training of a single NN in terms of a stochastic dynamics akin to thermal annealing, cf. Ref. [32]. In contrast, our approach based on sampling trajectories of NNs shares more similarities with training by quantum annealing, see for example Refs. [35; 36]. Note that this similarity does not refer to actual unitary dynamics, but to the fact that the computation of a trajectory ensemble in (4) is similar to that of a quantum partition sum (in terms of imaginary-time trajectories). Furthermore, the improved computational efficiency provided by the minibatch method we introduced here allowed us to highlight the benefit of larger NNEs (i.e., longer trajectories), capable of accessing lower-loss regions of state space using far less data than single NNs or small ensembles.
## Code availability
Our TPS implementation package is available through GitHub, TransitionPathSampling.jl[37], together with the source code to generate the figures and results in the paper [38].
###### Acknowledgements.
We acknowledge support from EPSRC Grant no. EP/V031201/1 and University of Nottingham grant no. FiF1/3. LC was supported by an EPSRC Doctoral prize from the University of Nottingham. Simulations were performed using the University of Nottingham Augusta HPC cluster, and the Sulis Tier 2 HPC platform hosted by the Scientific Computing Research Technology Platform at the University of Warwick. Sulis is funded by EPSRC Grant EP/T022108/1 and the HPC Midlands+ consortium. We thank the creators and community of the Julia programming language [39], and acknowledge use of the packages CUDA.jl[40; 41], Makie.jl[42] and ForwardDiff.jl[43].
Figure 2: (a) Representative examples of the MNIST digit images used for training. (b) Mean value of the time-averaged loss vs \(s\). Each data point is calculated after \(2\times 10^{7}\) TPS epochs over the following \(2\times 10^{7}\) TPS epochs. (c) Final accuracy on the standard \(10,000\) test images of a single trained NNE (via majority vote), taken at the end of the \(4\times 10^{7}\) epochs. (d) Average batch size per epoch, \(\langle b\rangle\) as a function of the converged mean NNE loss after \(2\times 10^{7}\) TPS epochs. The mean batch size per epoch is obtained from the \(4\times 10^{7}\) TPS training epochs.
## Appendix A Correction Distribution
Seita et al. [15] show that we can calculate the \(X_{\text{corr}}\) distribution numerically. We introduce a parameter \(V\) to specify the range of values to sample the distribution over. We construct two discrete vectors \(X\) and \(Y\) with the elements of \(X\) going linearly from \(-2V\) to \(+2V\) and \(Y\) going from \(-V\) to \(+V\). The vector \(X\) has \(4N+1\) elements and the vector \(Y\) has \(2N+1\) elements.
From here, we define a matrix \(M\) with elements
\[M_{ij}=\Phi_{\sigma}(X_{i}-Y_{j}), \tag{10}\]
where \(\Phi_{\sigma}\) is the cumulative distribution function (CDF) of a normal distribution with variance \(\sigma^{2}\). Additionally, we construct a new vector \(v\) such that
\[v_{i}=S(X_{i}), \tag{11}\]
where \(S\) is the logistic sigmoid function, i.e. the CDF of a logistically distributed random variable. Finally, we define the vector \(u\) to be \(u_{j}=C_{\sigma}(Y_{j})\) which is our target to calculate. This can be calculated using the formula
\[u=(M^{T}M+\lambda I)^{-1}M^{T}v, \tag{12}\]
where \(\lambda\) is a regularisation parameter. We followed recommendations from Seita et al. [15] and used \(V=10\), \(N=4000\) and \(\lambda=10\) to construct our numerical approximation of \(C_{\sigma}\). We set any negative elements equal to zero and re-normalise to ensure the area under the curve equals 1.
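A NumPy transcription of this construction might read as follows (our own sketch; the matrix \(M\) of (A1) is called `Phi` in the code, the discretisation parameter is reduced from \(4000\) to keep the solve cheap, and the grid-spacing constant omitted from the discretised system is absorbed by the final renormalisation).

```python
import numpy as np
from scipy.special import erf, expit   # expit is the logistic sigmoid S(x)

def correction_pdf(sigma=1.0, V=10.0, Nn=400, lam=10.0):
    """Discretised correction density u_j ~ C_sigma(Y_j) via the
    regularised least-squares solve of Eq. (A3)."""
    X = np.linspace(-2 * V, 2 * V, 4 * Nn + 1)
    Y = np.linspace(-V, V, 2 * Nn + 1)
    # Phi_ij = Phi_sigma(X_i - Y_j), the normal-CDF matrix M of Eq. (A1)
    Phi = 0.5 * (1.0 + erf((X[:, None] - Y[None, :]) / (sigma * np.sqrt(2.0))))
    v = expit(X)                       # v_i = S(X_i), Eq. (A2)
    u = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(Y)), Phi.T @ v)
    u = np.clip(u, 0.0, None)          # set negative elements to zero
    u /= u.sum() * (Y[1] - Y[0])       # renormalise so the area equals 1
    return Y, u

Y, u = correction_pdf()
print(u.min() >= 0.0, round(u.sum() * (Y[1] - Y[0]), 6))   # True 1.0
```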
Fortunately, the sampling algorithm allows us to calculate the distribution for a single value of \(\sigma\) to save on computation and memory. This distribution can be calculated once and cached for future use. We can alter the acceptance condition to be
\[\Delta^{*}(\mathbf{\Theta},\mathbf{\Theta}^{\prime})+X_{\text{nc}}+X_{\text{corr}}>0, \tag{13}\]
where \(X_{\text{corr}}\) is sampled when \(\sigma=1\) and \(X_{\text{nc}}\sim\mathcal{N}(0,1-\text{Var}[\Delta^{*}])\), requiring that \(\text{Var}[\Delta^{*}]<1\).
### Sampling the correction distribution
The distribution can be efficiently sampled using the CDF: the cumulative sum of the probability distribution function (PDF). The CDF is a monotonically increasing set of \(y\) values from 0 to 1. These values have corresponding \(X\) values in the domain \(-2V\) to \(2V\). In order to sample this distribution we draw a random number, \(u\), uniformly between 0 and 1. We find the \(X\) which corresponds to the intersection of \(u=C_{\sigma}(X)\) by bisection on the discretised points and then linear interpolation between discretised values.
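A short, self-contained sketch of this sampler (our own illustration; a tabulated standard normal density stands in for \(C_{\sigma}\), so the printed mean and standard deviation should come out near \(0\) and \(1\)):

```python
import numpy as np

def make_sampler(Y, u, rng):
    """Inverse-transform sampler for a PDF u tabulated on the grid Y."""
    dy = Y[1] - Y[0]
    cdf = np.cumsum(u) * dy
    cdf /= cdf[-1]                           # force the CDF to end at 1
    def sample():
        v = rng.uniform()
        j = np.searchsorted(cdf, v)          # bisection on the discrete CDF
        if j == 0:
            return Y[0]
        denom = cdf[j] - cdf[j - 1]
        t = 0.0 if denom == 0.0 else (v - cdf[j - 1]) / denom
        return Y[j - 1] + t * (Y[j] - Y[j - 1])   # linear interpolation
    return sample

# Demo on a tabulated standard normal density standing in for C_sigma.
Y = np.linspace(-10.0, 10.0, 2001)
u = np.exp(-0.5 * Y ** 2) / np.sqrt(2.0 * np.pi)
sample = make_sampler(Y, u, np.random.default_rng(3))
draws = np.array([sample() for _ in range(10000)])
print(round(draws.mean(), 2), round(draws.std(), 2))   # ~0.0 ~1.0
```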
## Appendix B Experiment Configurations
Here, we provide the exact parameters used to generate data provided in the results.
### Linear Perceptron
The linear perceptron model was trained on a simple 1D problem, which was generated via \(\mathbf{y}=m\mathbf{x}+c\), where \(x_{i}\sim\mathcal{U}(0,1)\) and \(m\sim\mathcal{U}(-1,1)\) and \(c\sim\mathcal{U}(-2,0)\). We randomly sampled 256 points to use in the distribution. The minibatch method used batch sizes of 32 and set \(\sigma=0.1\) for the coupling between models in the trajectories. Experiments were run for \(4\times 10^{7}\) epochs. Averages of observables were taken by discarding the first half of the data, to allow for a _burn-in_ time, and using minibatch estimates on the data.
Figure 3: Training curves for MNIST classification NNE. (a) Average ensemble loss per model in the NNE as a function of cumulative data usage \(D\), for \(s=50\) and various NNE sizes \(\tau\). Note that the abscissa is scaled by \(\tau\), so curves are in terms of “per-model” epochs. In terms of \(D/\tau\), for equivalent training time, lower losses are reached for larger NNEs. (Loss curves have been downsampled for clarity.) Inset: same in linear \(D\) scale. (b) Same training curves but now plotted in terms of epochs \(E\).
### MNIST
The convolutional neural network (CNN) model architecture used for our MNIST experiments was as follows:
1. Input \(28\times 28\) single channel image.
2. Convolution layer with a \(5\times 5\) kernel and 16 output channels.
3. \(2\times 2\) max pooling layer.
4. Convolution layer with a \(3\times 3\) kernel and 8 output channels.
5. \(4\times 4\) max pooling layer.
6. Fully connected dense layer with 10 outputs.
7. Softmax layer to normalise probabilities.
This model outputs a normalised probability vector for each input image, specifying the "likelihood" of the image being each digit. The model contained 1906 32-bit floating-point parameters.
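For reference, the architecture can be written out directly. The sketch below is our own PyTorch reconstruction; unpadded ("valid") convolutions are an assumption, made because they reproduce the stated count of 1906 parameters exactly, and no intermediate nonlinearities are inserted since the list above does not mention any.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5),   # 28x28 -> 24x24, 416 parameters
    nn.MaxPool2d(2),                   # 24x24 -> 12x12
    nn.Conv2d(16, 8, kernel_size=3),   # 12x12 -> 10x10, 1160 parameters
    nn.MaxPool2d(4),                   # 10x10 -> 2x2
    nn.Flatten(),                      # 8 * 2 * 2 = 32 features
    nn.Linear(32, 10),                 # 330 parameters
    nn.Softmax(dim=1),                 # normalised class probabilities
)
print(sum(p.numel() for p in model.parameters()))  # 1906
out = model(torch.randn(1, 1, 28, 28))
print(out.shape, out.sum().item())                 # torch.Size([1, 10]), ~1.0
```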
For our experiments, we did not anneal the \(s\) parameter but instead chose a fixed duration of \(2\times 10^{7}\) epochs to allow for some "burn-in" time. The models were then run for another \(2\times 10^{7}\) epochs to allow for measuring the loss as an observable. Accuracies were only measured at the end of the \(4\times 10^{7}\) epochs, due to the high computational demand.
We ran all of our experiments using \(\sigma=0.05\) and only changed a random 25% of the parameters of each model on each perturbation. Each perturbation altered only a single model in the trajectory, chosen uniformly at random.
We ran 6 independent experiments for each set of presented parameters and calculated averages to present the results in Figure 2, with error bars given by the sample variance of these averages.
## Appendix C Normality Investigation
To justify setting the error threshold \(\delta\rightarrow\infty\), we ran a training experiment for two different \(\tau\) at \(s=50\) using a base batch size of 240. These experiments were run for \(20,000\) epochs and sampled at intervals of \(5,000\) epochs. Histograms of the individual \(\Delta\) samples across the entire batch are plotted, along with a fitted curve showing the expected normal distribution given the empirical mean and variance of the batch data. This is presented in Figure 4.
|
2307.12461 | Rates of Approximation by ReLU Shallow Neural Networks | Neural networks activated by the rectified linear unit (ReLU) play a central
role in the recent development of deep learning. The topic of approximating
functions from H\"older spaces by these networks is crucial for understanding
the efficiency of the induced learning algorithms. Although the topic has been
well investigated in the setting of deep neural networks with many layers of
hidden neurons, it is still open for shallow networks having only one hidden
layer. In this paper, we provide rates of uniform approximation by these
networks. We show that ReLU shallow neural networks with $m$ hidden neurons can
uniformly approximate functions from the H\"older space $W_\infty^r([-1, 1]^d)$
with rates $O((\log m)^{\frac{1}{2} +d}m^{-\frac{r}{d}\frac{d+2}{d+4}})$ when
$r<d/2 +2$. Such rates are very close to the optimal one $O(m^{-\frac{r}{d}})$
in the sense that $\frac{d+2}{d+4}$ is close to $1$, when the dimension $d$ is
large. | Tong Mao, Ding-Xuan Zhou | 2023-07-24T00:16:50Z | http://arxiv.org/abs/2307.12461v1 | # Rates of Approximation by ReLU Shallow Neural Networks
###### Abstract
Neural networks activated by the rectified linear unit (ReLU) play a central role in the recent development of deep learning. The topic of approximating functions from Holder spaces by these networks is crucial for understanding the efficiency of the induced learning algorithms. Although the topic has been well investigated in the setting of deep neural networks with many layers of hidden neurons, it is still open for shallow networks having only one hidden layer. In this paper, we provide rates of uniform approximation by these networks. We show that ReLU shallow neural networks with \(m\) hidden neurons can uniformly approximate functions from the Holder space \(W_{\infty}^{r}([-1,1]^{d})\) with rates \(O((\log m)^{\frac{1}{2}+d}m^{-\frac{r}{d}\frac{d+2}{d+4}})\) when \(r<d/2+2\). Such rates are very close to the optimal one \(O(m^{-\frac{r}{d}})\) in the sense that \(\frac{d+2}{d+4}\) is close to \(1\), when the dimension \(d\) is large.
_Keywords_: deep learning, shallow neural networks, ReLU, rates of uniform approximation, Holder space
_Mathematics Subject Classification (2020)_: 68T07, 41A25, 68Q32
## 1 Introduction
The exploration of approximating functions by neural networks has a history of over 30 years. Let \(x=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\) be the data vector and \(m\in\mathbb{N}\). A shallow neural
network of width \(m\) associated with a continuous activation function \(\sigma:\mathbb{R}\to\mathbb{R}\) is defined by
\[f_{m}(x)=\sum_{i=1}^{m}\beta_{i}\sigma(\alpha_{i}\cdot x-b_{i}), \tag{1.1}\]
where \(\{\alpha_{i}\}_{i=1}^{m}\subset\mathbb{R}^{d}\) are connection vectors, \(\{\beta_{i}\}_{i=1}^{m}\subset\mathbb{R}\) weights and \(\{b_{i}\}_{i=1}^{m}\subset\mathbb{R}\) biases. The universality of shallow networks [7; 18; 31] asserts that, for any non-polynomial activation, any continuous function on any compact subset of \(\mathbb{R}^{d}\) can be approximated by output functions of the form (1.1) to arbitrary accuracy when the number \(m\) of hidden neurons is large enough. Rates of approximation by such output functions from the hypothesis space
\[H_{m}=\left\{\sum_{i=1}^{m}\beta_{i}\sigma(\alpha_{i}\cdot x-b_{i}):\ \alpha_{i}\in \mathbb{R}^{d},\ \beta_{i}\in\mathbb{R},\ b_{i}\in\mathbb{R}\right\}. \tag{1.2}\]
were also studied in a large literature when \(\sigma\) is a sigmoid type \(C^{\infty}\) activation function. In [2] it was proved that a function \(f\) on \(\mathbb{R}^{d}\) with its Fourier transform \(\hat{f}\) satisfying \(\int_{\mathbb{R}^{d}}\lvert\omega\rvert\lvert\hat{f}(\omega)\rvert d\omega<\infty\) can be approximated uniformly on \([-1,1]^{d}\) by \(f_{m}\in H_{m}\) with rates \(O(m^{-1/2})\). The error rate is proved (e.g. [25]) to be \(O(m^{-r/d})\) for functions from the Holder space \(W_{\infty}^{r}([-1,1]^{d})\) defined as follows.
**Definition 1**.: _Let \(r\in\mathbb{N}\), \(d\in\mathbb{N}\), and \(\Omega\subset\mathbb{R}^{d}\) be a compact domain with non-empty interior. The Holder space \(W_{\infty}^{r}(\Omega)\) is defined by_
\[W_{\infty}^{r}(\Omega)=\{f\in C(\Omega):\ \max_{0\leq\|\alpha\|_{1}\leq r}\|D^{ \alpha}f\|_{L_{\infty}(\Omega)}<\infty\} \tag{1.3}\]
_with the norm \(\|f\|_{W_{\infty}^{r}(\Omega)}=\max_{0\leq\|\alpha\|_{1}\leq r}\lVert D^{ \alpha}f\rVert_{L_{\infty}(\Omega)}\) defined with the partial derivatives \(D^{\alpha}f\) of order \(\alpha=(\alpha_{1},\cdots,\alpha_{d})\in\mathbb{Z}_{+}^{d}\)._
When training network parameters with gradient descent methods in deep learning, the classical neural networks with sigmoid type \(C^{\infty}\) activation functions often encounter the **vanishing gradient problem**. To solve this problem, in most neural networks used for deep learning applications, the classical sigmoid type \(C^{\infty}\) activation functions are replaced by the **rectified linear unit** (ReLU) \(\sigma\) defined as
\[\sigma(x)=\max\{0,x\},\qquad x\in\mathbb{R}.\]
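For concreteness, an output function of the form (1.1) with this activation can be evaluated in a few lines; the following NumPy sketch (our own illustration, with randomly drawn parameters) does exactly that.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 4, 32                          # input dimension, number of neurons
alpha = rng.normal(size=(m, d))       # connection vectors alpha_i
beta = rng.normal(size=m)             # outer weights beta_i
b = rng.normal(size=m)                # biases b_i

def f_m(x):
    """Shallow ReLU network (1.1): sum_i beta_i * sigma(alpha_i . x - b_i)."""
    return beta @ np.maximum(0.0, alpha @ x - b)

print(f_m(rng.uniform(-1.0, 1.0, size=d)))
```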
Within the last few years, rates of function approximation by ReLU deep neural networks have been obtained when the network has many layers with depth increasing with the number \(m\) of hidden neurons. D. Yarotsky [36] observed that iterates of ReLU can be used to realize piecewise linear interpolations of the univariate quadratic function \(\varphi(u)=u^{2}\) with few parameters. Based on this observation,
he proved that ReLU deep neural networks with \(O(\log m)\) layers and \(O(m\log m)\) neurons can achieve an almost optimal error rate \(O(m^{-r/d})\) for approximating functions from \(W_{\infty}^{r}([-1,1]^{d})\). For approximating smooth functions and piecewise-smooth functions, upper bounds were given in [30, 4, 13, 16]. Applying Yarotsky's method to functions from the Korobov space, a deep neural network with \(O(\log m)\) layers and \(O(m(\log m)^{\frac{3}{2}(d-1)+1})\) neurons, which achieves the rate \(O(m^{-2})\), was constructed in [29].
All the above mentioned results are for ReLU deep neural networks with many layers. Furthermore, in [32], it was proved that on a smooth \(d\)-dimensional compact manifold without boundary, \(C^{2}\) functions can be approximated by a deep network of depth \(4\) at a rate \(O(m^{-2/d})\), which is optimal. Rates of approximation by networks induced by ReLU type activation functions such as \(\left(\max\{0,x\}\right)^{\alpha}\) with \(\alpha>1\) were also obtained in [1, 26]. This recent literature on ReLU type networks and the classical work on sigmoid shallow networks lead to a natural open question: can one derive rates of function approximation by ReLU shallow networks with one hidden layer?
The purpose of this paper is to answer the above open question and present rates of approximating functions from Holder spaces uniformly by **ReLU shallow neural networks**. An approximation theory for shallow neural networks plays a fundamental role as a benchmark for understanding ReLU networks. It has some other applications, such as those for convolutional neural networks to be discussed below. Similar analysis was conducted in [28, Theorem 4.1] for sigmoid networks, which demonstrates how to deduce approximation rates of a sigmoid deep neural network by using those of sigmoid shallow networks.
A nice study of ReLU shallow networks was carried out by Klusowski and Barron [21]. They gave rates \(O(m^{-\frac{1}{2}-\frac{1}{d}})\) of approximating functions \(f\) whose Fourier transform \(\hat{f}\) satisfies \(\int_{\mathbb{R}^{d}}\lvert\hat{f}(\omega)\rvert\lVert\omega\rVert_{1}^{2}d\omega<\infty\). If we want to apply this result to the Holder space \(W_{\infty}^{r}(\Omega)\), then the regularity index \(r\) must satisfy \(r>d/2\), as discussed in [38], which might be too large for some learning problems dealing with data of large dimension \(d\gg 1\).
A possible way to overcome this barrier is to use an intermediate function \(f_{R}(x)=\int_{\lvert\omega\rvert\leq R}\hat{f}(\omega)e^{i\omega x}d\omega\) and to estimate the errors \(\lVert f-f_{R}\rVert_{\infty}\) and \(\lVert f_{R}-f_{m}\rVert_{\infty}\), where \(f_{m}\) is the function in [21] which approximates \(f_{R}\). However, this method does not make full use of frequency domain properties of the Fourier transform. One can only obtain a rate \(O(m^{-\frac{r}{2d}})\), which is much worse than the optimal rate \(O(m^{-r/d})\).
In this paper, we use a novel idea motivated by tools from Fourier analysis [12, 34, 5] to carry out time-frequency analysis for partial sums of Fourier series and decompose the error into multi-level parts according to various frequency levels. Then we are able to show that the error of approximating functions from \(W_{\infty}^{r}([-1,1]^{d})\) by ReLU shallow neural networks can be estimated with rates \(O((\log m)^{\frac{1}{2}+d}m^{-\frac{r}{d}\frac{d+2}{d+4}})\)
when \(r<d/2+2\). This rate of approximation is very close to the optimal one \(O(m^{-r/d})\) when the dimension \(d\) is large. In fact, \(O(m^{-r/d})\) is the lower bound for the approximation by any neural network with \(m\) parameters, which was proved in [8] and will be discussed in Section 5. Throughout the paper we take the domain \(D=[-1,1]^{d}\) of functions for approximation and \(\sigma\) to be the ReLU activation function.
## 2 Main Results
The following theorem, to be proved in Section 4, is our first main result. It gives an upper bound for approximating functions from \(W_{\infty}^{r}(D)\) uniformly by ReLU shallow neural networks.
**Theorem 1**.: _Let \(d,r\in\mathbb{N}\). Then there exists a constant \(C(d,r)\) depending only on \(d\) and \(r\) such that for any \(F\in W_{\infty}^{r}(D)\) and \(m\in\mathbb{N}\), there holds_
\[\inf_{f_{m}\in H_{m}}\|F-f_{m}\|_{L_{\infty}(D)}\leq C(d,r)\|F\|_{W_{\infty}^{ r}(D)}\left\{\begin{array}{ll}(\log m)^{\frac{1}{2}+d}m^{-\frac{r}{d}\frac{d+2}{d +4}},&\qquad\text{if }r<\frac{d}{2}+2,\\ (\log m)^{\frac{3}{2}+d}m^{-\frac{r}{d}\frac{d+2}{d+4}},&\qquad\text{if }r=\frac{d}{2}+2,\\ (\log m)^{\frac{1}{2}}m^{-\frac{1}{2}-\frac{1}{d}},&\qquad\text{if }r>\frac{d}{2}+2. \end{array}\right. \tag{2.1}\]
As all the existing approximation results on ReLU networks are either for deep networks with multiple layers or for shallow networks [21] with a large regularity index \(r>d/2\), Theorem 1 establishes the first approximation theory with almost optimal rates for approximating functions from Holder spaces by ReLU shallow networks.
Theorem 1 holds true only for the Holder space \(W_{\infty}^{r}(D)\) with an integer regularity index \(r\). The restriction on \(r\) is due to a technique in our proof for estimating a quantity involving multiplications of a Fourier series with polynomial sequences. It would be interesting to extend Theorem 1 to Holder spaces with non-integer regularity indices and to Sobolev spaces \(W_{p}^{r}(D)\), which would allow Holder and Sobolev spaces to be defined spectrally [6] and would thereby yield more general approximation estimates in terms of moduli of smoothness.
Theorem 1 may be applied to various ReLU networks. One example is the important family of convolutional neural networks (CNNs), which are widely applied in speech recognition, image classification and many other tasks [17, 22].
Given a sequence \(w=(w_{k})_{k\in\mathbb{Z}}\) on \(\mathbb{Z}\) supported in \(\{0,1,\ldots,s\}\) and another one \(x=(x_{k})_{k\in\mathbb{Z}}\) supported in \(\{1,2,\ldots,t\}\), the convolution of \(w\) and \(x\) is a sequence supported in \(\{1,2,\ldots,t+s\}\) given by
\[(w{*}x)_{i}=\sum_{k\in\mathbb{Z}}w_{i-k}x_{k}=\sum_{k=1}^{t}w_{i-k}x_{k}, \qquad i\in\mathbb{Z}.\]
This convolutional operation induces a convolutional Toeplitz matrix
\[T^{w}:=(w_{i-k})_{1\leq i\leq t+s,1\leq k\leq t}\]
whose size \((t+s)\times t\) depends on that of the input vector in \(\mathbb{R}^{t}\); this corresponds to the zero-padding approach in convolutional networks.
**Definition 2**.: _Let \(x=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\) be the input data vector, \(s,J\in\mathbb{N}\), and \(\{d_{j}\}_{j=1}^{J}\) given by \(d_{0}=d\),_
\[d_{j}=d_{j-1}+s,\qquad j\in\{1,\ldots,J\}.\]
_The **deep CNN**\(\{h^{(j)}:\mathbb{R}^{d}\to\mathbb{R}^{d_{j}}\}_{j=1}^{J}\) with widths \(\{d_{j}\}_{j=1}^{J}\), sequences of filters \(\mathbf{w}:=\{w^{(j)}:\ \mathbb{Z}\to\mathbb{R}\}_{j=1}^{J}\) supported in \(\{0,\ldots,s\}\), and biases \(\mathbf{b}:=\{b^{(j)}\in\mathbb{R}^{d_{j}}\}_{j=1}^{J}\) is defined by_
\[h^{(j)}(x)=\mathcal{A}_{j}\circ\ldots\circ\mathcal{A}_{1}(x),\qquad j=1,\ldots,J, \tag{2.2}\]
_where \(\mathcal{A}_{j}\) is a map from \(\mathbb{R}^{d_{j-1}}\) to \(\mathbb{R}^{d_{j}}\) defined with the \(d_{j}\times d_{j-1}\) convolutional matrix \(T^{w^{(j)}}\) by acting \(\sigma\) componentwise as_
\[\mathcal{A}_{j}(v)=\sigma(T^{w^{(j)}}v-b^{(j)}),\qquad v\in\mathbb{R}^{d_{j-1 }}.\]
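The following NumPy sketch is our own minimal rendering of Definition 2; the filters and biases are drawn at random purely to illustrate the shapes involved (the proofs below choose them deliberately).

```python
import numpy as np

def toeplitz_conv(w, t):
    """(t+s) x t convolutional matrix T^w with (T^w)_{ik} = w_{i-k}."""
    s = len(w) - 1                    # filter w supported on {0, ..., s}
    T = np.zeros((t + s, t))
    for i in range(t + s):
        for k in range(t):
            if 0 <= i - k <= s:
                T[i, k] = w[i - k]
    return T

def deep_cnn(x, filters, biases):
    """Forward pass h^(J)(x) of Definition 2, ReLU acting componentwise."""
    h = x
    for w, b in zip(filters, biases):
        h = np.maximum(0.0, toeplitz_conv(w, len(h)) @ h - b)
    return h

rng = np.random.default_rng(4)
d, s, J = 8, 2, 3
filters = [rng.normal(size=s + 1) for _ in range(J)]
biases = [rng.normal(size=d + (j + 1) * s) for j in range(J)]
print(deep_cnn(rng.normal(size=d), filters, biases).shape)  # (d + J*s,) = (14,)
```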
It was proved in [38] that when \(r>d/2+2\), deep CNNs can approximate functions from \(W_{\infty}^{r}(D)\) uniformly at rate \(O(\sqrt{\log J}J^{-1/2-1/d})\). In this paper, we can use Theorem 1 to derive rates of approximation by deep CNNs for functions from the Holder space with a smaller index \(r\leq d/2+2\), which is our second main result, to be proved in Section 4.
**Corollary 1**.: _Let \(d,\ s\in\mathbb{N}\) and \(F\in W_{\infty}^{r}(D)\) for some integer \(r\in\mathbb{N}\). Then for any \(J\in\mathbb{N}\), there exist \(\mathbf{w}=\{w^{(j)}\}_{j=1}^{J}\) and \(\mathbf{b}=\{b^{(j)}\}_{j=1}^{J}\) such that the space_
\[\mathcal{H}_{J}^{\mathbf{w},\mathbf{b}}=\left\{c\cdot h^{(J)}(x):\ c\in \mathbb{R}^{d_{J}}\right\}\]
_contains an element \(f_{J}^{\mathbf{w},\mathbf{b}}\) satisfying_
\[\|F-f_{J}^{\mathbf{w},\mathbf{b}}\|_{C(D)}\leq\left\{\begin{array}{ll}C_{1} (d,r)\|F\|_{W_{\infty}^{r}(D)}(\log J)^{d+1/2}J^{-\frac{r}{d}\frac{d+2}{d+4}},&\text{if }r<\frac{d}{2}+2,\\ C_{1}(d,r)\|F\|_{W_{\infty}^{r}(D)}(\log J)^{d+3/2}J^{-\frac{r}{d}\frac{d+2}{d+ 4}},&\text{if }r=\frac{d}{2}+2,\\ C_{1}(d,r)\|F\|_{W_{\infty}^{r}(D)}(\log J)^{1/2}J^{-\frac{1}{2}-\frac{1}{d}},&\text{if }r>\frac{d}{2}+2,\end{array}\right.\]
_where \(C_{1}(d,r)\) depends only on \(d\) and \(r\)._
The rates of approximation presented in Corollary 1 are stated in terms of the depth \(J\) or number of layers of the CNN. In many applications of CNNs, the depth \(J\) is large. Some relations between deep CNNs of large depth and fully-connected
networks have been observed recently. In [39], it was proved that the last layer of any fully-connected network is identical to that of a deep CNN with at most 8 times the number of free parameters. For approximating or learning ridge functions [10], radial functions [23], and functions from Korobov spaces [24], deep CNNs can achieve the same accuracy with a much smaller number of free parameters than fully-connected networks. In a recent application of CNNs to the readability of Chinese texts [11], it was found that one or two layers are already efficient. Conducting analysis for approximation and learning by CNNs with small depths would be an interesting task.
## 3 Error Decomposition and Preliminary Analysis
To prove Theorem 1, we need an error decomposition: first we extend \(F\) to a \(2\pi\)-periodic function \(f\) on \(\mathbb{R}^{d}\), then we decompose the error between \(F\) and \(f_{m}\) into two parts, involving the (high-order) Jackson operator.
By the well-known extension theorem (e.g. [33, Chapter 6]), \(F\) can be extended to a \(2\pi\)-periodic continuous function \(f\) on \(\mathbb{R}^{d}\) such that \(F=f\) on \(D\) and
\[\|f\|_{W_{\infty}^{r}(\mathbb{T}^{d})}\leq C_{2}(d,r)\|F\|_{W_{\infty}^{r}(D)},\]
where \(C_{2}(d,r)\) is a constant depending only on \(d\) and \(r\), and \(\mathbb{T}^{d}=[-\pi,\pi]^{d}\).
The periodic function \(f\) has a Fourier series expansion
\[f(x)=\sum_{k\in\mathbb{Z}^{d}}\hat{f}(k)e^{ik\cdot x},\qquad x\in\mathbb{R}^{ d},\]
where \(\left\{\hat{f}(k)=(2\pi)^{-d}\int_{\mathbb{T}^{d}}f(x)e^{-ik\cdot x}dx:k\in \mathbb{Z}^{d}\right\}\) are the Fourier coefficients of \(f\).
Let \(N\) be an integer. We use the Jackson operator \(J_{N,r}\) to approximate \(f\).
We first introduce the univariate Jackson kernel as
\[K_{N,r}^{[1]}(t)=\lambda_{N,r}\left(\frac{\sin Mt/2}{\sin t/2}\right)^{2r}, \qquad t\in\mathbb{R},\]
where \(M:=\lfloor N/r\rfloor+1\) with \(\lfloor u\rfloor\) being the integer part of \(u>0\) and \(\lambda_{N,r}=\left[\int_{\mathbb{T}}(\frac{\sin Mt/2}{\sin t/2})^{2r}dt \right]^{-1}\). The function \(K_{N,r}^{[1]}\) is real-valued and \(2\pi\)-periodic. It also has an expression
\[K_{N,r}^{[1]}(t)=\sum_{k\in\mathbb{Z}}\tilde{a}_{k,N}^{[1]}e^{ikt}=\tilde{a}_ {0,N}^{[1]}+\sum_{k\in\mathbb{N}}2\tilde{a}_{k,N}^{[1]}\cos kt,\]
where \(\{\tilde{a}_{k,N}^{[1]}\}_{k\in\mathbb{Z}}\) is a real-valued even sequence supported in \(\{-N,\ldots,N\}\). To see this, using prosthaphaeresis formulae and the standard expressions of the \(M\)-th Fejer kernel
\[\frac{1}{2M}\left(\frac{\sin Mt/2}{\sin t/2}\right)^{2}=\sum_{\ell=0}^{M-1}b_{ \ell}\cos\ell t,\]
where \(b_{j}=1-\frac{j}{M}\) for \(j\geq 1\) and \(b_{0}=\frac{1}{2}\), we can deduce
\[K_{N,r}^{[1]}(t)=\lambda_{N,r}(2M)^{r}\sum\limits_{k=0}^{r(M-1)}\left(\sum\limits_{\epsilon\in\{-1,1\}^{r}}\ \sum\limits_{\begin{subarray}{c}0\leq\ell_{i}\leq M-1\ \forall i\\ |\epsilon_{1}\ell_{1}+\cdots+\epsilon_{r}\ell_{r}|=k\end{subarray}}\prod\limits_{i=1}^{r}b_{\ell_{i}}\right)\frac{\cos kt}{2^{r}}. \tag{3.1}\]
The asymptotic behavior \(\lambda_{n,r}\sim n^{-2r+1}\) can be found in the literature (e.g., [9, Chapter 7, Lemma 2.1]) and seen easily from the identity \(\int_{\mathbb{T}}(\frac{\sin Mt/2}{\sin t/2})^{2r}dt=2\int_{0}^{\pi}(\frac{ \sin Mt/2}{\sin t/2})^{2r}dt\) and
\[\int_{0}^{\pi}(\frac{\sin Mt/2}{t/2})^{2r}dt\leq\int_{0}^{\pi}(\frac{\sin Mt/2 }{\sin t/2})^{2r}dt\leq\int_{0}^{\pi}(\frac{\sin Mt/2}{(2/\pi)\cdot(t/2)})^{2r}dt\]
by bounding the integral \(\int_{0}^{\pi}(\frac{\sin Mt/2}{t})^{2r}dt=\left(\frac{2}{M}\right)^{1-2r} \int_{0}^{M\pi/2}(\frac{\sin u}{u})^{2r}du\) as
\[\int_{\pi/6}^{5\pi/6}(\frac{\sin u}{u})^{2r}du\leq\int_{0}^{M\pi/2}(\frac{\sin u }{u})^{2r}du\leq\sum\limits_{k=0}^{\infty}\int_{0}^{2\pi}\frac{(\sin u)^{2r}} {(u+2k\pi)^{2r}}du\leq 2\pi+\sum\limits_{k=1}^{\infty}\frac{1}{k^{2r}}.\]
Thus, we can bound \(\tilde{a}_{k,N}^{[1]}\) as
\[|\tilde{a}_{k,N}^{[1]}|\leq 2^{r}\sup\limits_{n\in\mathbb{N}}(\lambda_{n,r}n^{ 2r-1}):=C_{3}(r),\qquad\forall k\in\mathbb{Z}.\]
The classical Jackson Theorem (e.g., (2.8) and (2.11) in [9, Chapter 7]) asserts that for the univariate Jackson operator \(J_{N,r}^{[1]}\) on \(L_{\infty}(\mathbb{T})\) given by
\[J_{N,r}^{[1]}(g,x):=\int_{\mathbb{T}}\left[\sum\limits_{\ell=1}^{r}(-1)^{\ell -1}\binom{r}{\ell}g(x+\ell y)\right]K_{N,r}^{[1]}(y)dy, \tag{3.2}\]
there exists a constant \(C_{4}(r)\) depending only on \(r\) such that
\[\|J_{N,r}^{[1]}(g)-g\|_{L_{\infty}(\mathbb{T})}\leq C_{4}(r)\|g^{(r)}\|_{L_{ \infty}(\mathbb{T})}N^{-r},\qquad\forall g\in W_{\infty}^{r}(\mathbb{T}),\ N\in \mathbb{N}. \tag{3.3}\]
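As a quick numerical sanity check of the rate in (3.3), the sketch below (our own; the operator is evaluated by trapezoidal quadrature, so the printed errors also contain a small discretisation component) approximates \(J_{N,r}^{[1]}(g)-g\) for \(g=\sin\) and \(r=2\).

```python
import numpy as np
from math import comb

def jackson_1d(g, x, N, r, grid=20001):
    """Numerical J^{[1]}_{N,r}(g, x): trapezoidal quadrature of (3.2)."""
    M = N // r + 1
    y = np.linspace(-np.pi, np.pi, grid)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = (np.sin(M * y / 2) / np.sin(y / 2)) ** (2 * r)
    K[~np.isfinite(K)] = float(M) ** (2 * r)      # limiting value at y = 0
    K /= K.sum() * (y[1] - y[0])                  # discrete lambda_{N,r}
    inner = sum((-1) ** (l - 1) * comb(r, l) * g(x + l * y)
                for l in range(1, r + 1))
    return np.sum(inner * K) * (y[1] - y[0])

r = 2
for N in (8, 16, 32, 64):
    err = max(abs(jackson_1d(np.sin, x, N, r) - np.sin(x))
              for x in np.linspace(-np.pi, np.pi, 9))
    print(N, f"{err:.2e}")   # errors should shrink roughly like N^{-2}
```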
Since \(\|K_{N,r}^{[1]}\|_{L_{1}(\mathbb{T})}=1\), (3.2) implies
\[\|J_{N,r}^{[1]}(g)\|_{L_{\infty}(\mathbb{T})}\leq\sum\limits_{\ell=1}^{r}\binom {r}{\ell}\|g\|_{L_{\infty}(\mathbb{T})}\|K_{N,r}^{[1]}\|_{L_{1}(\mathbb{T})} \leq 2^{r}\|g\|_{L_{\infty}(\mathbb{T})},\quad\forall g\in L_{\infty}(\mathbb{ T}). \tag{3.4}\]
Observe from a change of variable and the \(2\pi\)-periodicity of \(g\) that
\[J_{N,r}^{[1]}(g,x)= \sum\limits_{\ell=1}^{r}(-1)^{\ell-1}\binom{r}{\ell}\frac{1}{ \ell}\int_{0}^{2\ell\pi}g(x+t)\sum\limits_{k\in\mathbb{Z}}\tilde{a}_{k,N}^{[1] }e^{ikt/\ell}dt\] \[= \sum\limits_{\ell=1}^{r}(-1)^{\ell-1}\binom{r}{\ell}\frac{1}{ \ell}\int_{0}^{2\pi}g(x+t)\sum\limits_{k\in\mathbb{Z}}\tilde{a}_{k,N}^{[1]} \sum\limits_{\alpha=0}^{\ell-1}e^{ik(t+2\alpha\pi)/\ell}dt.\]
Notice that the summation \(\sum\limits_{\alpha=0}^{\ell-1}e^{ik(t+2\alpha\pi)/\ell}=e^{ikt/\ell}\sum\limits_{\alpha=0}^{\ell-1}\left(e^{i2k\pi/\ell}\right)^{\alpha}\) vanishes when \(k\not\in\ell\mathbb{Z}\) and equals \(\ell e^{ikt/\ell}\) otherwise. Hence
\[J_{N,r}^{[1]}(g,x)= \sum\limits_{\ell=1}^{r}(-1)^{\ell-1}\binom{r}{\ell}\int_{0}^{2 \pi}g(x+t)\sum\limits_{k^{\prime}\in\mathbb{Z}}\tilde{a}_{k^{\prime}\ell,N}^{ [1]}e^{ik^{\prime}t}dt\] \[= \int_{\mathbb{T}}g(x-y)\sum\limits_{k^{\prime}\in\mathbb{Z}}\sum \limits_{\ell=1}^{r}(-1)^{\ell-1}\binom{r}{\ell}\tilde{a}_{-k^{\prime}\ell,N} ^{[1]}e^{ik^{\prime}y}dy.\]
Thus, by introducing a \(2\pi\)-periodic kernel \(G_{N,r}^{[1]}(t)=\sum\limits_{k\in\mathbb{Z}}a_{k,N}^{[1]}e^{ikt}\) with an even sequence of coefficients
\[a_{k,N}^{[1]}=\sum\limits_{\ell=1}^{r}(-1)^{\ell-1}\binom{r}{\ell}\tilde{a}_ {-k\ell,N}^{[1]},\qquad k\in\mathbb{Z},\]
we see that the Jackson operator can be expressed as
\[J_{N,r}^{[1]}(g,x)=\int_{\mathbb{T}}G_{N,r}^{[1]}(y)g(x-y)dy=\int_{\mathbb{T}} G_{N,r}^{[1]}(x-y)g(y)dy.\]
The coefficients of the kernel \(G_{N,r}^{[1]}\) can be bounded as
\[|a_{k,N}^{[1]}|\leq 2^{r}C_{3}(r). \tag{3.5}\]
Now we can define a multidimensional \(2\pi\)-periodic kernel \(G_{N,r}\) on \(\mathbb{R}^{d}\) by
\[G_{N,r}(x)=\prod\limits_{j=1}^{d}G_{N,r}^{[1]}(x_{j})=\sum\limits_{k\in \mathbb{Z}^{d}}a_{k,N}e^{ik\cdot x}. \tag{3.6}\]
where \(a_{k,N}=\prod\limits_{j=1}^{d}a_{k_{j},N}^{[1]}\) for \(k=(k_{1},\ldots,k_{d})\in\mathbb{Z}^{d}\). By means of this kernel, we define the Jackson operator on \(L_{\infty}(\mathbb{T}^{d})\) by
\[J_{N,r}(g,x):=\int_{\mathbb{T}^{d}}G_{N,r}(x-y)g(y)dy=\sum\limits_{\|k\|_{ \infty}\leq N}(2\pi)^{d}a_{k,N}\hat{g}(k)e^{ik\cdot x},\qquad x\in\mathbb{T}^ {d}. \tag{3.7}\]
To approximate the function \(f\), we write \(J_{N,r}(f)\) in terms of the multidimensional kernel (3.6) as \(J_{N,r}(f,x)=\int_{\mathbb{T}^{d}}\prod\limits_{\ell=1}^{d}G_{N,r}^{[1]}(x_{ \ell}-y_{\ell})f(y_{1},\ldots,y_{d})dy_{1}\ldots dy_{d}\). It is a special case with \(j=d\) of the intermediate functions
\[\left\{\int_{\mathbb{T}^{j}}\prod\limits_{\ell=1}^{j}G_{N,r}^{[1]}(x_{\ell}-y _{\ell})f(y_{1},\ldots,y_{j-1},y_{j},x_{j+1},\ldots,x_{d})dy_{1}\ldots dy_{j} \right\}_{j=0}^{d}\]
while \(f(x)=f(x_{1},\ldots,x_{d})\) corresponds to the case with \(j=0\). Then by subtracting and adding the intermediate functions with \(j=d-1,\ldots,1\), we find that the error of approximation \(J_{N,r}(f)-f\) can be expressed as
\[J_{N,r}(f,x)-f(x)=\sum_{j=1}^{d}\int_{\mathbb{T}^{j-1}}\prod_{\ell=1}^{j-1}G_{N,r}^{[1]}(x_{\ell}-y_{\ell})I_{x_{j},\ldots,x_{d}}(y_{1},\ldots,y_{j-1})dy_{1} \ldots dy_{j-1}\]
where \(I_{x_{j},\ldots,x_{d}}(y_{1},\ldots,y_{j-1})\) is a function on \(\mathbb{T}^{j-1}\) indexed by \(x_{j},\ldots,x_{d}\in\mathbb{R}\) given by
\[\int_{\mathbb{T}}G_{N,r}^{[1]}(x_{j}-y_{j})f(y_{1},\ldots,y_{j-1},y_{j},x_{j+1 },\ldots,x_{d})dy_{j}-f(y_{1},\ldots,y_{j-1},x_{j},x_{j+1},\ldots,x_{d}).\]
Applying (3.4) to the function \(\int_{\mathbb{T}^{j-2}}\prod_{\ell=2}^{j-1}G_{N,r}^{[1]}(x_{\ell}-y_{\ell})I_ {x_{j},\ldots,x_{d}}(y_{1},\ldots,y_{j-1})dy_{2}\ldots dy_{j-1}\) of the single variable \(y_{1}\), we see that
\[\sup_{x_{1}\in\mathbb{T}}\left|\int_{\mathbb{T}^{j-1}}\prod_{\ell =1}^{j-1}G_{N,r}^{[1]}(x_{\ell}-y_{\ell})I_{x_{j},\ldots,x_{d}}(y_{1},\ldots,y _{j-1})dy_{1}\ldots dy_{j-1}\right|\] \[\leq 2^{r}\sup_{y_{1}\in\mathbb{T}}\left|\int_{\mathbb{T}^{j-2}}\prod _{\ell=2}^{j-1}G_{N,r}^{[1]}(x_{\ell}-y_{\ell})I_{x_{j},\ldots,x_{d}}(y_{1}, \ldots,y_{j-1})dy_{2}\ldots dy_{j-1}\right|.\]
We have by iteration
\[\sup_{x_{1},\ldots,x_{j-1}\in\mathbb{T}}\left|\int_{\mathbb{T}^{ j-1}}\prod_{\ell=1}^{j-1}G_{N,r}^{[1]}(x_{\ell}-y_{\ell})I_{x_{j},\ldots,x_{d}}(y _{1},\ldots,y_{j-1})dy_{1}\ldots dy_{j-1}\right|\] \[\leq 2^{r(j-1)}\sup_{y_{1},\ldots,y_{j-1}\in\mathbb{T}}\left|I_{x_{j},\ldots,x_{d}}(y_{1},\ldots,y_{j-1})\right|.\]
But \(I_{x_{j},\ldots,x_{d}}(y_{1},\ldots,y_{j-1})=J_{N,r}^{[1]}(h,x_{j})-h(x_{j})\) where \(h\) is the univariate function \(h=f(y_{1},\ldots,y_{j-1},\cdot,x_{j+1},\ldots,x_{d})\) indexed by \(y_{1},\ldots,y_{j-1},x_{j+1},\ldots,x_{d}\). Hence, by (3.3),
\[\left|I_{x_{j},\ldots,x_{d}}(y_{1},\ldots,y_{j-1})\right|\leq C_{4}(r)\|h^{(r )}\|_{L_{\infty}(\mathbb{T})}N^{-r}.\]
Observe that \(\|h^{(r)}\|_{L_{\infty}(\mathbb{T})}\leq\|f\|_{W^{r}_{\infty}(\mathbb{T}^{d})}\). Therefore, the bound \(C_{4}(r)\|f\|_{W^{r}_{\infty}(\mathbb{T}^{d})}N^{-r}\) for \(\left|I_{x_{j},\ldots,x_{d}}(y_{1},\ldots,y_{j-1})\right|\) is independent of \(y_{1},\ldots,y_{j-1},x_{j},\ldots,x_{d}\), and we obtain
\[\|J_{N,r}(f)-f\|_{L_{\infty}(\mathbb{T}^{d})}\leq\sum_{j=1}^{d}2^{r(j-1)}C_{4 }(r)\|f\|_{W^{r}_{\infty}(\mathbb{T}^{d})}N^{-r}\leq d2^{rd}C_{4}(r)\|f\|_{W^{r }_{\infty}(\mathbb{T}^{d})}N^{-r}. \tag{3.8}\]
Shallow ReLU neural networks can approximate the function \(J_{N,r}(f)\) well with error bounds stated in terms of its Fourier coefficients.
**Lemma 1**.: _For \(k\in\mathbb{Z}^{d}\), let \(\widehat{J_{N}}(k)\) be the Fourier coefficient of \(J_{N,r}(f)\) at \(k\) satisfying_
\[J_{N,r}(f,x)=\sum\limits_{k\in\mathbb{Z}^{d}}\widehat{J_{N}}(k)e^{ik\cdot x}.\]
_Then for each \(m\in\mathbb{N}\), there exists a function \(f_{m}(x)=\sum\limits_{k=1}^{m}\beta_{k}\sigma(\alpha_{k}\cdot x-b_{k})\in H_{m}\) such that_
\[\|J_{N,r}(f)-f_{m}\|_{L_{\infty}(D)}\leq C_{5}v_{J_{N},2}\max\left\{\sqrt{\log m },\sqrt{d}\right\}m^{-\frac{1}{2}-\frac{1}{d}}, \tag{3.9}\]
_where \(C_{5}\) is an absolute constant,_
\[v_{J_{N},2}:=\sum\limits_{k\in\mathbb{Z}^{d}}|\widehat{J_{N}}(k)|\|k\|_{1}^{2}, \tag{3.10}\]
_and \(\beta_{k},b_{k}\in\mathbb{R}\), \(\alpha_{k}\in\mathbb{R}^{d}\) can be bounded as_
\[|\beta_{k}|\leq\frac{8\pi^{2}v_{J_{N},2}}{m},\qquad\|\alpha_{k}\|_{1}\leq 1, \qquad 0\leq b_{k}\leq 1,\qquad\forall\ k=1,\ldots,m.\]
The proof of Lemma 1 is similar to that in [21] and is given in detail in the appendix.
The Jackson operator used in this paper may be replaced by some other approximation schemes of the form \(I_{N}(f)=\int_{\mathbb{T}^{d}}G_{N}(\cdot,y)f(y)dy\), where \(G_{N}\) is a family of kernels with a scaling index \(N\). What is challenging is to approximate \(I_{N}(f)\) by the output \(f_{m}\) of a shallow network induced by ReLU and to estimate the error. This is realized in our approach by a key identity (5.3) for the function \(e^{iz}\) valid in the range \(|z|\leq c\) and a concentration inequality for suprema of empirical processes followed by a novel bound for the quantity \(v_{J_{N},2}\) given in the next section. Another possible approach [27] is to use some kernels defined by formulae (6.1), (6.26) in [6] and then apply the related estimates given in Lemma 6.1 and Proposition 4.1 there. It would be interesting to use such an approach and derive rates of approximating \(F\in W_{\infty}^{r}(D)\) with a non-integer index \(r>0\), which extends our result in Theorem 1.
## 4 Proof of the Main Results by Fourier Analysis
The key analysis of this paper concentrates on estimating the quantity \(v_{J_{N},2}\).
Recall \(\widehat{J_{N}}(k)=(2\pi)^{d}a_{k,N}\hat{f}(k)\) is nonzero only when \(\|k\|_{\infty}\leq N\) by (3.7). Let \(L=\lceil\log_{2}N\rceil\), where \(\lceil u\rceil\) denotes the smallest integer no less than \(u>0\). Then \(N\leq 2^{L}\leq 2N\). Applying (3.5) to \(a_{k,N}=\prod\limits_{j=1}^{d}a_{k_{j},N}^{[1]}\) and noticing the term with \(k=0\) vanishes, we have
\[v_{J_{N},2}\leq(2^{r+1}\pi C_{3}(r))^{d}\sum\limits_{k\in\mathbb{Z}^{d}}|\hat {f}(k)|\|k\|_{1}^{2}\leq(2^{r+1}\pi C_{3}(r))^{d}\sum\limits_{\ell=0}^{L}\sum \limits_{2^{\ell-1}<\|k\|_{\infty}\leq 2^{\ell}}|\hat{f}(k)|\|k\|_{1}^{2}.\]
If \(r\geq 2\), by \(\|k\|_{1}\geq\|k\|_{\infty}\), we have \(\|k\|_{1}^{2-r}\leq\|k\|_{\infty}^{2-r}\leq(2^{\ell-1})^{2-r}\) when \(2^{\ell-1}<\|k\|_{\infty}\leq 2^{\ell}\). If \(r=1\), we also have \(\|k\|_{1}^{2-r}\leq d\|k\|_{\infty}^{2-r}\leq d(2^{\ell})=2d(2^{\ell-1})^{2-r}\) when \(2^{\ell-1}<\|k\|_{\infty}\leq 2^{\ell}\). Hence, in either case,
\[v_{J_{N},2}\leq 2d(2^{r+1}\pi C_{3}(r))^{d}\sum_{\ell=0}^{L}(2^{\ell-1})^{2-r} \sum_{2^{\ell-1}<\|k\|_{\infty}\leq 2^{\ell}}|\hat{f}(k)|\|k\|_{1}^{r}. \tag{4.1}\]
Inspired by some methods in harmonic analysis [12, 34, 5, 14], we define a collection of new functions on \(\mathbb{T}^{d}\), which is the novelty of our time-frequency analysis and plays a key role in our error decomposition: for \(\ell=0,1,\ldots,L\), let
\[T_{\ell}f(x)=\sum_{\|k\|_{\infty}\leq 2^{\ell}}\hat{f}(k)\|k\|_{1}^{r}e^{ik \cdot x},\qquad x\in\mathbb{T}^{d}.\]
The problem of bounding \(v_{J_{N},2}\) is then transformed to that of bounding \(\sum\limits_{k\in\mathbb{Z}^{d}}|\widehat{T_{\ell}f}(k)|\), where \(\widehat{T_{\ell}f}(k)=\hat{f}(k)\|k\|_{1}^{r}\) are the Fourier coefficients of \(T_{\ell}f\).
By Parseval's identity,
\[(2\pi)^{-d}\int_{\mathbb{T}^{d}}|T_{\ell}f(x)|^{2}dx=\sum_{k\in\mathbb{Z}^{d}} |\widehat{T_{\ell}f}(k)|^{2},\]
we have
\[\sum_{k\in\mathbb{Z}^{d}}|\widehat{T_{\ell}f}(k)|= \sum_{\|k\|_{\infty}\leq 2^{\ell}}|\widehat{T_{\ell}f}(k)|\leq \left(\sum_{\|k\|_{\infty}\leq 2^{\ell}}|\widehat{T_{\ell}f}(k)|^{2}\right)^{1/2} \left(\sum_{\|k\|_{\infty}\leq 2^{\ell}}1^{2}\right)^{1/2}\] \[\leq \left(2^{\ell+1}+1\right)^{\frac{d}{2}}\left((2\pi)^{-d}\int_{ \mathbb{T}^{d}}|T_{\ell}f(x)|^{2}dx\right)^{1/2}\leq(2^{\ell+1}+1)^{\frac{d}{2 }}\|T_{\ell}f\|_{\infty}. \tag{4.2}\]
Thus we only need to estimate \(\|T_{\ell}f\|_{\infty}\) to obtain an upper bound for \(v_{J_{N},2}\). This is the main analysis in the proof of our main results. It would be interesting to extend our analysis to some other machine learning algorithms which involve spectral decompositions and frequency analysis [15, 19, 20].
Proof of Theorem 1.: Our analysis is based on dividing the set
\[U_{\ell}:=\left\{-2^{\ell},-2^{\ell}+1,\ldots,2^{\ell}-1,2^{\ell}\right\}^{d}\]
into disjoint subsets according to the signs of its components as
\[U_{\ell}=\bigcup_{\epsilon\in\{-1,0,1\}^{d}}\Xi_{\epsilon}, \tag{4.3}\]
where for each \(\epsilon=(\epsilon_{1},\ldots,\epsilon_{d})\in\{-1,0,1\}^{d}\),
\[\Xi_{\epsilon}=\left\{k=(k_{1},\ldots,k_{d})\in U_{\ell}:\ \operatorname{sgn}(k_{j})= \epsilon_{j},\ \forall j=1,\ldots,d\right\}.\]
Then
\[T_{\ell}f(x)=\sum\limits_{\epsilon\in\{-1,0,1\}^{d}}\sum\limits_{k\in\Xi_{ \epsilon}}\hat{f}(k)\|k\|_{1}^{r}e^{ik\cdot x}.\]
Observe from the multinomial formula that for \(k\in\Xi_{\epsilon}\),
\[\|k\|_{1}^{r} =\left(\sum\limits_{j=1}^{d}|k_{j}|\right)^{r}=\left(\sum\limits_{ j=1}^{d}\epsilon_{j}k_{j}\right)^{r}\] \[=\sum\limits_{\begin{subarray}{c}\alpha_{1}+\cdots+\alpha_{d}=r \\ \alpha_{1},\ldots,\alpha_{d}\in\mathbb{Z}_{+}\end{subarray}}\frac{r!}{\alpha_{ 1}!\ldots\alpha_{d}!}\prod\limits_{j=1}^{d}(\epsilon_{j}k_{j})^{\alpha_{j}},\]
where \(\epsilon_{j}^{\alpha_{j}}\) denotes \(1\) when \(\epsilon_{j}=0\) and \(\alpha_{j}=0\). So, for each \(\epsilon\in\{-1,0,1\}^{d}\),
\[\sum\limits_{k\in\Xi_{\epsilon}}\hat{f}(k)\|k\|_{1}^{r}e^{ik\cdot x}\] \[=\sum\limits_{\begin{subarray}{c}\alpha_{1}+\cdots+\alpha_{d}=r \\ \alpha_{1},\ldots,\alpha_{d}\in\mathbb{Z}_{+}\end{subarray}}\frac{r!}{\alpha_{ 1}!\ldots\alpha_{d}!}\left(\prod\limits_{j=1}^{d}\epsilon_{j}^{\alpha_{j}} \right)\left(\sum\limits_{k\in\Xi_{\epsilon}}\left(\prod\limits_{j=1}^{d}k_{j }^{\alpha_{j}}\right)\hat{f}(k)e^{ik\cdot x}\right). \tag{4.4}\]
Putting \(\hat{f}(k)=(2\pi)^{-d}\int_{\mathbb{T}^{d}}f(t)e^{-ik\cdot t}dt\) into the above sum over \(\Xi_{\epsilon}\), we have
\[\sum\limits_{k\in\Xi_{\epsilon}}\left(\prod\limits_{j=1}^{d}k_{j }^{\alpha_{j}}\right)\hat{f}(k)e^{ik\cdot x}=(2\pi)^{-d}\sum\limits_{k\in\Xi_ {\epsilon}}e^{ik\cdot x}\int_{\mathbb{T}^{d}}\left(\prod\limits_{j=1}^{d}k_{j }^{\alpha_{j}}\right)f(t)e^{-ik\cdot t}dt\] \[= (2\pi)^{-d}\sum\limits_{k\in\Xi_{\epsilon}}\int_{\mathbb{T}^{d} }(-i)^{r}\frac{\partial^{r}f}{\partial x_{1}^{\alpha_{1}}\ldots\partial x_{d}^ {\alpha_{d}}}(t)e^{ik\cdot(x-t)}dt\] \[= (2\pi)^{-d}(-i)^{r}\int_{\mathbb{T}^{d}}\frac{\partial^{r}f}{ \partial x_{d}^{\alpha_{d}}\ldots\partial x_{1}^{\alpha_{1}}}(t)\prod\limits _{j=1}^{d}\left(\sum\limits_{k_{j}\in\Xi_{\epsilon}^{[j]}}e^{ik_{j}(x_{j}-t_{ j})}\right)dt, \tag{4.5}\]
where \(\Xi_{\epsilon}^{[j]}=\left\{\gamma\in\{-2^{\ell},\ldots,2^{\ell}\}:\ \operatorname{sgn}(\gamma)=\epsilon_{j}\right\}\) for \(j\in\{1,\ldots,d\}\).
If \(\epsilon_{j}=1\) or \(-1\), then \(\sum\limits_{k_{j}\in\Xi_{\epsilon}^{[j]}}e^{ik_{j}(x_{j}-t_{j})}\) equals \(\sum\limits_{\beta=1}^{2^{\ell}}e^{i\epsilon_{j}\beta(x_{j}-t_{j})}\). Observe that
\[\int_{\mathbb{T}}\left|\sum\limits_{\beta=1}^{2^{\ell}}e^{i\beta t }\right|dt \leq \int_{\mathbb{T}}\left|\sum\limits_{\beta=1}^{2^{\ell}}\cos\beta t \right|dt+\int_{\mathbb{T}}\left|\sum\limits_{\beta=1}^{2^{\ell}}\sin\beta t \right|dt\] \[= \int_{\mathbb{T}}\left|\sum\limits_{\beta=1}^{2^{\ell}}\cos\beta t \right|dt+\int_{\mathbb{T}}\left|\frac{2\sin\frac{2^{\ell}+1}{2}t}{2\sin\frac{ t}{2}}\sin 2^{\ell-1}t\right|dt.\]
Hence
\[\int_{\mathbb{T}}\left|\sum_{\beta=1}^{2^{\ell}}e^{i\beta t}\right|dt \leq \int_{\mathbb{T}}\left|\frac{1}{2}+\sum_{\beta=1}^{2^{\ell}}\cos \beta t-\frac{1}{2}\right|dt+\int_{\mathbb{T}}\left|\frac{\sin\frac{2^{\ell}+1}{ 2}t}{\sin\frac{t}{2}}\right|dt\] \[\leq \int_{\mathbb{T}}\left|\frac{1}{2}D_{2^{\ell}}(t)-\frac{1}{2} \right|dt+\int_{\mathbb{T}}\left|D_{2^{\ell-1}}(t)\right|dt,\]
where \(D_{n}\) with \(n\in\mathbb{N}\) is the Dirichlet kernel on \(\mathbb{T}\) given by \(D_{n}(t)=1+2\sum\limits_{k=1}^{n}\cos kt\). Since the Dirichlet kernel can be bounded [9] as \(\|D_{n}\|_{1}\leq(\frac{4}{\pi}\log n+2\pi+1)\), we have
\[\int_{\mathbb{T}}\left|\sum_{k_{j}\in\Xi_{\epsilon}^{[j]}}e^{ik_{j}(x_{j}-t_{j })}\right|dt=\int_{\mathbb{T}}\left|\sum_{\beta=1}^{2^{\ell}}e^{i\beta t} \right|dt\leq\pi+\frac{1}{2}\|D_{2^{\ell}}\|_{1}+\|D_{2^{\ell-1}}\|_{1}\leq( 4\pi+2)(\ell+1).\]
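This logarithmic growth, and the explicit bound \((4\pi+2)(\ell+1)\), are easy to confirm numerically (our own sketch, using the closed form \(\bigl|\sum_{\beta=1}^{n}e^{i\beta t}\bigr|=\bigl|\sin(nt/2)/\sin(t/2)\bigr|\)).

```python
import numpy as np

# |sum_{beta=1}^{n} e^{i beta t}| = |sin(n t / 2) / sin(t / 2)|
t = np.linspace(-np.pi, np.pi, 400001)
dt = t[1] - t[0]
for ell in range(1, 13):
    n = 2 ** ell
    with np.errstate(divide="ignore", invalid="ignore"):
        S = np.abs(np.sin(n * t / 2) / np.sin(t / 2))
    S[~np.isfinite(S)] = n                   # limiting value at t = 0
    integral = np.sum(S) * dt                # L1 norm over the torus
    print(ell, round(integral, 1), "<=", round((4 * np.pi + 2) * (ell + 1), 1))
```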
If \(\epsilon_{j}=0\), the term in (4.4) with \(\alpha_{j}>0\) vanishes. When \(\alpha_{j}=0\), since \(\Xi_{\epsilon}^{[j]}=\{0\}\),
\[\sum_{k_{j}\in\Xi_{\epsilon}^{[j]}}e^{ik_{j}(x_{j}-t_{j})}=1.\]
Therefore, we can bound the \(L_{1}\)-norm of the product term in (4.5) as
\[\int_{\mathbb{T}^{d}}\left|\prod_{j=1}^{d}\left(\sum_{k_{j}\in\Xi_{\epsilon}^{ [j]}}e^{ik_{j}(x_{j}-t_{j})}\right)\right|dt=\prod_{j=1}^{d}\int_{\mathbb{T}} \left|\sum_{k_{j}\in\Xi_{\epsilon}^{[j]}}e^{ik_{j}(x_{j}-t_{j})}\right|dt_{j} \leq(4\pi+2)^{d}(\ell+1)^{d}.\]
Combining this with (4.4) and (4.5), we obtain
\[\left|\sum_{k\in\Xi_{\epsilon}}\hat{f}(k)\|k\|_{1}^{r}e^{ik\cdot x }\right|\leq\sum_{\begin{subarray}{c}\alpha_{1}+\cdots+\alpha_{d}=r\\ \alpha_{1},\ldots,\alpha_{d}\in\mathbb{Z}_{+}\end{subarray}}\frac{r!}{\alpha _{1}!\ldots\alpha_{d}!}\left|\sum_{k\in\Xi_{\epsilon}}\prod_{j=1}^{d}\epsilon _{j}^{\alpha_{j}}k_{j}^{\alpha_{j}}\hat{f}(k)e^{ik\cdot x}\right|\] \[\leq (2\pi)^{-d}\sum_{\begin{subarray}{c}\alpha_{1}+\cdots+\alpha_{d }=r\\ \alpha_{1},\ldots,\alpha_{d}\in\mathbb{Z}_{+}\end{subarray}}\frac{r!}{\alpha_{ 1}!\ldots\alpha_{d}!}(4\pi+2)^{d}\left\|\frac{\partial^{r}f}{\partial x_{d}^{ \alpha_{d}}\ldots\partial x_{1}^{\alpha_{1}}}\right\|_{\infty}(\ell+1)^{d}.\]
It follows that
\[\|T_{\ell}f\|_{\infty}\leq C_{6}(d,r)\|f\|_{W^{r}_{\infty}(\mathbb{T}^{d})}( \ell+1)^{d}, \tag{4.6}\]
where \(C_{6}(d,r):=3^{d}(2\pi)^{-d}\sum\limits_{\begin{subarray}{c}\alpha_{1}+\cdots +\alpha_{d}=r\\ \alpha_{1},\ldots,\alpha_{d}\in\mathbb{Z}_{+}\end{subarray}}\frac{r!}{\alpha _{1}!\ldots\alpha_{d}!}(4\pi+2)^{d}=(\frac{3}{\pi}+6)^{d}d^{r}\).
Combining (4.1), (4.2), and (4.6) yields
\[v_{J_{N},2} \leq 2d(2^{r+1}\pi C_{3}(r))^{d}\sum_{\ell=0}^{L}(2^{\ell-1})^{2-r}\sum_ {k\in\mathbb{Z}^{d}}|\widehat{T_{\ell}f}(k)|\] \[\leq 2d(2^{r+1}\pi C_{3}(r))^{d}C_{6}(d,r)\|f\|_{W^{r}_{\infty}( \mathbb{T}^{d})}\sum_{\ell=0}^{L}(2^{\ell-1})^{2-r}(2^{\ell+1}+1)^{\frac{d}{2}} (\ell+1)^{d}\] \[\leq 2d(2^{r+1}\pi C_{3}(r))^{d}C_{6}(d,r)\|f\|_{W^{r}_{\infty}( \mathbb{T}^{d})}2^{r-2}\times 3^{\frac{d}{2}}\sum_{\ell=0}^{L}(\ell+1)^{d}(2^{ \ell})^{\frac{d}{2}+2-r}.\]
For \(r<\frac{d}{2}+2\), we have \(\frac{d}{2}+2-r>0\). But \(r\in\mathbb{N}\). Hence \(\frac{d}{2}+2-r\geq\frac{1}{2}\). It follows that
\[\sum_{\ell=0}^{L}(\ell+1)^{d}(2^{\ell})^{\frac{d}{2}+2-r}\leq(L+1 )^{d}\sum_{\ell=0}^{L}(2^{\ell})^{\frac{d}{2}+2-r}\] \[= (L+1)^{d}\frac{2^{\frac{d}{2}+2-r}}{2^{\frac{d}{2}+2-r}-1}\cdot \frac{\left(2^{\frac{d}{2}+2-r}\right)^{L+1}-1}{2^{\frac{d}{2}+2-r}}\leq(\log _{2}N+2)^{d}\frac{\sqrt{2}}{\sqrt{2}-1}(2^{\frac{d}{2}+2-r})^{L}\] \[\leq \frac{\sqrt{2}}{\sqrt{2}-1}(\log_{2}N+2)^{d}(2N)^{\frac{d}{2}+2- r}.\]
For \(r=\frac{d}{2}+2\),
\[\sum_{\ell=0}^{L}(\ell+1)^{d}(2^{\ell})^{\frac{d}{2}+2-r}=\sum_{\ell=0}^{L}( \ell+1)^{d}\leq(L+1)^{d+1}\leq(\log_{2}N+2)^{d+1}.\]
For \(r>\frac{d}{2}+2\), we estimate the sum
\[\sum_{\ell=0}^{L}(\ell+1)^{d}(2^{\ell})^{\frac{d}{2}+2-r}=\sum_{\ell=0}^{L} \left\{(\ell+1)^{d}\left[2^{\frac{1}{2}\left(r-\frac{d}{2}-2\right)}\right]^{- \ell}\cdot\left[2^{\frac{1}{2}\left(r-\frac{d}{2}-2\right)}\right]^{-\ell}\right\}\]
via bounding the factor \((\ell+1)^{d}\left[2^{\frac{1}{2}\left(r-\frac{d}{2}-2\right)}\right]^{-\ell}\), \(\ell\in\{0,1,\ldots,L\}\), by \(\max_{0\leq\ell\leq L}\left\{(\ell+1)^{d}\left[2^{\frac{1}{2}\left(r-\frac{d}{ 2}-2\right)}\right]^{-\ell}\right\}\), and the sum of the remaining terms as
\[\sum_{\ell=0}^{L}\left[2^{\frac{1}{2}\left(r-\frac{d}{2}-2\right)}\right]^{- \ell}\leq 1+\int_{0}^{L}\left[2^{\frac{1}{2}\left(r-\frac{d}{2}-2\right)} \right]^{-t}dt\leq 1+\frac{2}{(r-2-\frac{d}{2})\log 2}.\]
To estimate the above maximum value, we consider a function \(h:(-1,\infty)\rightarrow\mathbb{R}\) given by
\[h(t)=(t+1)^{d}\left[2^{\frac{1}{2}\left(r-\frac{d}{2}-2\right)}\right]^{-t}.\]
From the derivative \(h^{\prime}(t)=(t+1)^{d-1}\left[2^{\frac{1}{2}\left(r-\frac{d}{2}-2\right)}\right]^ {-t}\left\{d-(t+1)\log 2^{\frac{1}{2}\left(r-\frac{d}{2}-2\right)}\right\}\), we see that \(h\) increases on \((-1,t^{*})\) with \(t^{*}=-1+d/\left(\frac{1}{2}\left(r-\frac{d}{2}-2\right)\log 2\right)\), achieves its maximum value at \(t^{*}\), and then decreases on \((t^{*},\infty)\). Hence
\[\max_{0\leq\ell\leq L}\left\{(\ell+1)^{d}\left[2^{\frac{1}{2}\left(r-\frac{d}{ 2}-2\right)}\right]^{-\ell}\right\}\leq h(t^{*})\leq\left(\frac{2d}{\left(r- \frac{d}{2}-2\right)\log 2}\right)^{d}2^{\frac{1}{2}\left(r-\frac{d}{2}-2 \right)}.\]
Therefore,
\[\sum_{\ell=0}^{L}(\ell+1)^{d}(2^{\ell})^{\frac{d}{2}+2-r}\leq\left(1+\frac{2} {\left(r-2-\frac{d}{2}\right)\log 2}\right)\left(\frac{2d}{\left(r-\frac{d}{2}-2 \right)\log 2}\right)^{d}2^{\frac{1}{2}\left(r-\frac{d}{2}-2\right)}.\]
Together with (3.8) and (3.9), the above estimates yield
\[\|f-f_{m}\|_{L_{\infty}(D)} \leq \|f-J_{N,r}(f)\|_{L_{\infty}(D)}+\|J_{N,r}(f)-f_{m}\|_{L_{\infty}( D)}\] \[\leq d2^{rd}C_{4}(r)\|f\|_{W^{r}_{\infty}(\mathbb{T}^{d})}N^{-r}+ \frac{2\sqrt{2}}{\sqrt{2}-1}d6^{\frac{d}{2}}(2^{r+1}\pi C_{3}(r))^{d}C_{5}C_{6 }(d,r)\] \[\times \|f\|_{W^{r}_{\infty}(\mathbb{T}^{d})}\max\left\{\sqrt{d},\sqrt{ \log m}\right\}m^{-\frac{1}{2}-\frac{1}{d}}\] \[\times \left\{\begin{array}{ll}(\log_{2}N+2)^{d}N^{\frac{d}{2}+2-r},& \mbox{if }r<\frac{d}{2}+2,\\ (\log_{2}N+2)^{d+1},&\mbox{if }r=\frac{d}{2}+2,\\ \left(1+\frac{2}{\left(r-2-\frac{d}{2}\right)\log 2}\right)\left(\frac{2d}{ \left(r-\frac{d}{2}-2\right)\log 2}\right)^{d}2^{\frac{3}{2}\left(r-\frac{d}{2}-2 \right)},&\mbox{if }r>\frac{d}{2}+2.\end{array}\right.\]
Finally, by choosing \(N=\lfloor m^{\frac{1}{d}(\frac{d+2}{\max\left(2r,d+4\right)})}\rfloor\) and noting \(\|f\|_{W^{r}_{\infty}(\mathbb{T}^{d})}\leq C_{2}(d,r)\|F\|_{W^{r}_{\infty}(D)}\), we have
\[\|f-f_{m}\|_{L_{\infty}(D)}\leq C(d,r)\|F\|_{W^{r}_{\infty}(D)}\left\{ \begin{array}{ll}(\log m)^{\frac{1}{2}+d}m^{-\frac{r}{d}\frac{d+2}{d+4}},& \mbox{if }r<\frac{d}{2}+2,\\ (\log m)^{\frac{3}{2}+d}m^{-\frac{r}{d}\frac{d+2}{d+4}},&\mbox{if }r=\frac{d}{2}+2,\\ (\log m)^{\frac{1}{2}}m^{-\frac{1}{2}-\frac{1}{d}},&\mbox{if }r>\frac{d}{2}+2, \end{array}\right.\]
where
\[C(d,r)= 2dC_{2}(d,r)2^{rd}C_{4}(r)+\frac{2\sqrt{2}d}{\sqrt{2}-1}6^{ \frac{d}{2}}C_{2}(d,r)(2^{r+1}\pi C_{3}(r))^{d}C_{5}C_{6}(d,r)\sqrt{d}\] \[\times\left\{\begin{array}{ll}1,&\mbox{if }r\leq\frac{d}{2}+2,\\ \left(1+\frac{2}{\left(r-2-\frac{d}{2}\right)\log 2}\right)\left(\frac{2d}{ \left(r-\frac{d}{2}-2\right)\log 2}\right)^{d}2^{\frac{3}{2}\left(r-\frac{d}{2}-2 \right)},&\mbox{if }r>\frac{d}{2}+2.\end{array}\right.\]
Since \(f=F\) on \(D\), this verifies the desired estimate (2.1) and completes the proof of the theorem.
As pointed out by a referee, another way to bound \(T_{\ell}f(x)\) is to view \(e^{ik\cdot x}\) as a univariate function of the variable \(k\cdot x\) and express it using a \(2\pi\)-periodic function which equals the hat function on \([-1,1]\) and vanishes on \([-\pi,\pi]\setminus[-1,1]\). Then some probability estimates might be used to carry out the analysis.
We can now apply Theorem 1 and the construction in [38, 37, 39] to prove our rates of approximation by deep CNNs.
Proof of Corollary 1.: Let \(J\geq\frac{2d}{s-1}\) and \(m=\lfloor\frac{(s-1)J}{d}-1\rfloor\). By Theorem 1, we have \(f_{m}\in H_{m}\) with \(f_{m}(x)=\sum\limits_{i=1}^{m}\beta_{i}\sigma(\alpha_{i}\cdot x-t_{i})\) on \(D\) such that
\[\|F-f_{m}\|_{L_{\infty}(D)}\leq\left\{\begin{array}{ll}C(d,r)\|F\|_{W^{r}_{ \infty}(D)}(\log m)^{\frac{1}{2}+d}m^{-\frac{r}{d}\frac{d+2}{d+4}},&\mbox{if $r< \frac{d}{2}+2$},\\ C(d,r)\|F\|_{W^{r}_{\infty}(D)}(\log m)^{\frac{3}{2}+d}m^{-\frac{r}{d}\frac{d+ 2}{d+4}},&\mbox{if $r=\frac{d}{2}+2$},\\ C(d,r)\|F\|_{W^{r}_{\infty}(D)}(\log m)^{\frac{1}{2}}m^{-\frac{1}{2}-\frac{1}{ d}},&\mbox{if $r>\frac{d}{2}+2$}.\end{array}\right.\]
Now we realize \(f_{m}\) by an output function \(f_{J}^{\mathbf{w},\mathbf{b}}\) of a deep CNN of depth \(J\) constructed in [38, Proof of Theorem 2]. Precisely, first applying [38, Theorem 3] to the sequence \(W=(W_{k})_{-\infty}^{\infty}\) supported in \(\{0,\ldots,md-1\}\) with
\[[W_{md-1}\;\ldots\;W_{1}\;W_{0}]=[\alpha_{m}^{\top}\;\ldots\;\alpha_{2}^{\top }\;\alpha_{1}^{\top}],\]
adding delta filter sequences at the end if necessary, we can obtain filters \(\mathbf{w}=\{w^{(j)}\}_{j=1}^{J}\) supported in \(\{0,\ldots,s\}\) such that \(W=w^{(J)}*w^{(J-1)}*\cdots*w^{(2)}*w^{(1)}\).
Next, we take \(\mathbf{b}=\{b^{(j)}\}_{j=1}^{J-1}\) such that for \(j\in\{1,\ldots,J-1\}\) and \(x\in D\), the components of \(T^{w^{(j)}}h^{(j-1)}(x)-b^{(j)}\) are positive.
Finally, for \(k=1,\ldots,m\), let \(b^{(J)}_{kd}\) be the number which makes the constant term of \(\big{(}h^{(J)}(x)\big{)}_{kd}\) equal to \(t_{k}\). Taking \(c=\left(\sum\limits_{k=1}^{m}\beta_{k}\delta_{kd}(j)\right)_{j=1}^{d_{J}}\) with \(\delta_{i}\) being the delta sequence at \(i\) yields \(f_{J}^{\mathbf{w},\mathbf{b}}\). Then we have \(f_{J}^{\mathbf{w},\mathbf{b}}=f_{m}\). Combining this identity with the fact that \(\frac{1}{2}(s-1)J\leq md\leq(s-1)J\) gives
\[\|F-f_{J}^{\mathbf{w},\mathbf{b}}\|_{C(\Omega)}\leq\left\{\begin{array}{ll} C_{1}(d,r)\|F\|_{W^{r}_{\infty}(D)}(\log J)^{d+1/2}J^{-\frac{r}{d}\frac{d+2}{d+4}},& \mbox{if $r<\frac{d}{2}+2$},\\ C_{1}(d,r)\|F\|_{W^{r}_{\infty}(D)}(\log J)^{d+3/2}J^{-\frac{r}{d}\frac{d+2}{d+4} },&\mbox{if $r=\frac{d}{2}+2$},\\ C_{1}(d,r)\|F\|_{W^{r}_{\infty}(D)}(\log m)^{\frac{1}{2}}m^{-\frac{1}{2}-\frac{ 1}{d}},&\mbox{if $r>\frac{d}{2}+2$},\end{array}\right.\]
where \(C_{1}(d,r)=C(d,r)(2d)^{\frac{r}{d}\frac{d+2}{d+4}}\). This proves Corollary 1.
## 5 Discussion
The rate of uniformly approximating functions \(f\in W^{r}_{\infty}([-1,1]^{d})\) given in Theorem 1 is very close to the following lower bound [8, Theorem 4.2] for neural networks when the data dimension \(d\) is large:
_Let \(d,r\in\mathbb{N}\). Then there exists a constant \(c_{r}\) depending only on \(r\) such that for any \(n\in\mathbb{N}\), map \(\eta:\mathbb{R}^{n}\to C(D)\), and continuous map \(M:W^{r}_{\infty}([-1,1]^{d})\to\mathbb{R}^{n}\) there holds_
\[\sup_{\|f\|_{W^{r}_{\infty}([-1,1]^{d})}\leq 1}\|f-\eta(M(f))\|_{\infty}\geq c_{r}n^{-r/ d}.\]
We apply this lower bound to our setting. If we denote \(H^{p}_{m}\) with \(m\in\mathbb{N}\) to be the set of output functions \(f_{m}\) on \(D\) constructed by ReLU deep neural networks of depth \(p\) with \((d+1)m\) free parameters, then \(H^{1}_{m}=H_{m}\). Let \(A_{m}\) be the collection of these parameters. If there exists a continuous map \(M:W^{r}_{\infty}(D)\mapsto A_{m}\), then for any map \(\eta:A_{m}\to C(D)\) which together with \(M\) produces \(f_{m}=\eta(M(F))\) there holds
\[\sup_{\|F\|_{W^{r}_{\infty}(D)}\leq 1}\|F-f_{m}\|_{L_{\infty}(D)}\geq c_{r}m^{- \frac{r}{d}}. \tag{5.1}\]
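To make this closeness concrete: in the regime \(r<\frac{d}{2}+2\), the exponent of \(m\) in Theorem 1 and the exponent in the lower bound (5.1) differ only by the factor
\[\frac{r}{d}\cdot\frac{d+2}{d+4}\Big/\frac{r}{d}=\frac{d+2}{d+4}\longrightarrow 1\quad\text{as }d\rightarrow\infty,\]
so the upper and lower rates match, up to logarithmic terms, as the data dimension \(d\) grows.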
We end our discussion by remarking that when \(r>d/2+2\), our main result corresponds to that in [21], which was used in [38]. To make this explicit, let
\[W^{r}_{2}(\mathbb{R}^{d}):=\left\{f\in L_{2}(\mathbb{R}^{d}):\ \|f\|_{W^{r}_{2}( \mathbb{R}^{d})}:=\left(\int_{\mathbb{R}^{d}}\lvert\hat{f}(\omega)\rvert^{2}(1+| \omega|^{2r})d\omega\right)^{1/2}<\infty\right\},\]
where \(\hat{f}(\omega)=(2\pi)^{-d}\int_{\mathbb{R}^{d}}f(x)e^{-i\omega\cdot x}dx\) is the Fourier transform of \(f\).
By the extension theorem, there exists a constant \(C_{7}(d,r)\) depending only on \(d\) and \(r\) such that each \(F\in W^{r}_{\infty}(D)\) can be extended to a function \(f\) in \(W^{r}_{2}(\mathbb{R}^{d})\) with
\[\|f\|_{W^{r}_{2}(\mathbb{R}^{d})}\leq C_{7}(d,r)\|F\|_{W^{r}_{2}(D)}\leq(2\pi)^ {d/2}C_{7}(d,r)\|F\|_{W^{r}_{\infty}(D)}.\]
Then \(r>d/2+2\) is a sufficient condition for the finiteness of \(v_{f,2}\) defined in [21] as
\[v_{f,2}:=\int_{\mathbb{R}^{d}}\lvert\hat{f}(\omega)\rvert\|\omega\|_{1}^{2}d \omega<\infty.\]
In fact, we have
\[v_{f,2}= \int_{D}\lvert\hat{f}(\omega)\rvert\|\omega\|_{1}^{2}d\omega+ \int_{\mathbb{R}^{d}\setminus D}\lvert\hat{f}(\omega)\rvert\|\omega\|_{1}^{2}d\omega\] \[\leq \left(\int_{D}d^{2}\lvert\hat{f}(\omega)\rvert^{2}\left|\omega \right|^{4}d\omega\cdot\int_{D}1d\omega\right)^{1/2}\] \[+\left(\int_{\mathbb{R}^{d}\setminus D}d^{2r}\lvert\hat{f}(\omega )\rvert^{2}\left|\omega\right|^{2r}d\omega\cdot\int_{\mathbb{R}^{d}\setminus D }\lVert\omega\rVert_{1}^{4-2r}d\omega\right)^{1/2}\] \[\leq C_{8}(d,r)\left(\int_{\mathbb{R}^{d}}\lvert\hat{f}(\omega) \rvert^{2}\left(1+|\omega|^{2r}\right)d\omega\right)^{1/2}\leq(2\pi)^{d/2}C_{ 7}(d,r)C_{8}(d,r)\|F\|_{W^{r}_{\infty}(D)},\]
where \(C_{8}(d,r)\) is a constant depending only on \(d\) and \(r\). Then for \(f\in W^{r}_{\infty}(D)\),
\[\inf_{f_{m}\in H_{m}}\lVert f-f_{m}\rVert_{L_{\infty}(D)}\leq C_{9}(d,r)\|f\|_{W^ {r}_{\infty}(D)}(\log m)^{\frac{1}{2}}m^{-\frac{1}{2}-\frac{1}{d}}, \tag{5.2}\]
where \(C_{9}(d,r)\) only depends on \(d\) and \(r\). This is the rate of approximation we obtained in (2.1).
When \(r=d/2+2\), we have \(-\frac{r}{d}\frac{d+2}{d+4}=-\frac{1}{2}-\frac{1}{d}\). Then the upper bound in Theorem 1 is the same as that in [21] up to a logarithmic term.
## Appendix
In this appendix, we provide a detailed proof of Lemma 1 for completeness. The method of the proof is borrowed from [21]. Here is an outline of the proof: we first represent the Fourier series basis function \(e^{ik\cdot x}\) as an integral of the shifts \(\sigma(k\cdot x-u)\) of the ReLU multiplied with \(e^{iu}\). Then we express the value at \(x\in D\) of the Jackson operator \(J_{N,r}(f,x)\) as the expectation of a random variable. Finally, we approximate the expectation by an empirical mean. The key part of the proof is to conduct Rademacher analysis for estimating the error between the expectation and the empirical mean, uniformly for \(x\in D\), by applying a concentration inequality for suprema of empirical processes to a collection of random variables indexed by the set \(D\times\{-1,0,1\}\).
Proof of Lemma 1.: The following identity stated as Equation (19) in [21]
\[e^{iz}-iz-1=-\int_{0}^{c}\sigma(z-u)e^{iu}+\sigma(-z-u)e^{-iu}du,\qquad|z|\leq c \tag{5.3}\]
applied to \(c=\|\pi k\|_{1}\) and \(z=k\cdot x\) with \(k\in\mathbb{Z}^{d}\setminus\{0\}\) yields
\[e^{ik\cdot x}=-\int_{0}^{\|\pi k\|_{1}}\sigma(k\cdot x-u)e^{iu}+\sigma(-k\cdot x -u)e^{-iu}du+ik\cdot x+1.\]
Changing the variable \(u\) by \(t=\frac{u}{\|\pi k\|_{1}}\) and using \(\frac{1}{\|\pi k\|_{1}}\sigma(v)=\sigma(\frac{v}{\|\pi k\|_{1}})\) for \(v\in\mathbb{R}\), we have
\[e^{ik\cdot x}=-\|\pi k\|_{1}^{2}\int_{0}^{1}\sigma\left(\frac{k}{\|\pi k\|_{1 }}\cdot x-t\right)e^{i\|\pi k\|_{1}t}+\sigma\left(-\frac{k}{\|\pi k\|_{1}} \cdot x-t\right)e^{-i\|\pi k\|_{1}t}dt+ik\cdot x+1.\]
Putting this expression into the Jackson operator \(J_{N,r}(f,x)=\sum\limits_{k\in\mathbb{Z}^{d}}\widehat{J_{N}}(k)e^{ik\cdot x}\), we find
\[J_{N,r}(f,x) = \sum\limits_{k\in\mathbb{Z}^{d}}\widehat{J_{N}}(k)\bigg{\{}-\|\pi k \|_{1}^{2}\int_{0}^{1}\sigma\left(\frac{k}{\|\pi k\|_{1}}\cdot x-t\right)e^{i \|\pi k\|_{1}t}\] \[+\sigma\left(-\frac{k}{\|\pi k\|_{1}}\cdot x-t\right)e^{-i\|\pi k \|_{1}t}dt\bigg{\}}+\sum\limits_{k\in\mathbb{Z}^{d}}\widehat{J_{N}}(k)(ik\cdot x +1).\]
Take a phase \(b(k)\in(-\pi,\pi]\) of the complex number \(\widehat{J_{N}}(k)\) satisfying
\[\widehat{J_{N}}(k)=|\widehat{J_{N}}(k)|e^{ib(k)}.\]
Then \(\widehat{J_{N}}(k)e^{\pm i\|\pi k\|_{1}t}=|\widehat{J_{N}}(k)|e^{i(\pm\|\pi k\|_ {1}t+b(k))}\). Notice that \(J_{N,r}(f,x)\) is a real number. Then by taking its real part, we have
\[J_{N,r}(f,x) = -\pi^{2}\sum_{k\in\mathbb{Z}^{d}\setminus\{0\}}|\widehat{J_{N}}(k )|\|k\|_{1}^{2}\int_{0}^{1}\sigma\left(\frac{k}{\|\pi k\|_{1}}\cdot x-t\right) \cos(\|\pi k\|_{1}t+b(k))dt\] \[-\pi^{2}\sum_{k\in\mathbb{Z}^{d}\setminus\{0\}}|\widehat{J_{N}}( k)|\|k\|_{1}^{2}\int_{0}^{1}\sigma\left(-\frac{k}{\|\pi k\|_{1}}\cdot x-t \right)\cos(-\|\pi k\|_{1}t+b(k))dt\] \[-\left[\sum_{k\in\mathbb{Z}^{d}}\mathrm{Im}(\widehat{J_{N}}(k)) k\right]\cdot x+\sum_{k\in\mathbb{Z}^{d}}\mathrm{Re}(\widehat{J_{N}}(k)).\]
To approximate the function \(J_{N,r}(f,x)\) by the output \(f_{m}(x)=\sum\limits_{k=1}^{m}\beta_{k}\sigma(\alpha_{k}\cdot x-b_{k})\in H_{m}\) of a shallow network, we regard \(J_{N,r}(f,x)\) as the expectation of a random variable, discretize it, and then estimate the error by a concentration inequality. Here \(x\in D\) is used as an index of a collection of random variables.
We first take a probability measure \(P\) on \(\{-1,1\}\times[0,1]\times\left(\mathbb{Z}^{d}\setminus\{0\}\right)\) by setting for \(z\in\{-1,1\},k\in\mathbb{Z}^{d}\setminus\{0\}\) the density as
\[p(z,t,k)=\frac{\pi^{2}}{v}|\widehat{J_{N}}(k)|\ \|k\|_{1}^{2}\ |\cos(z\|\pi k\|_{1} t+b(k))|,\qquad t\in[0,1],\]
where \(v\) is the normalization constant
\[v=\pi^{2}\sum_{k\in\mathbb{Z}^{d}}\left[\|k\|_{1}^{2}|\widehat{J_{N}}(k)|\int_ {0}^{1}|\cos(\|\pi k\|_{1}t+b(k))|+|\cos(-\|\pi k\|_{1}t+b(k))|dt\right]\leq 2 \pi^{2}v_{J_{N},2}.\]
We then define a collection of random variables \(\{h_{x}\}_{x\in D}\) on \(\{-1,1\}\times[0,1]\times\left(\mathbb{Z}^{d}\setminus\{0\}\right)\) given by
\[h_{x}(z,t,k)=\sigma(z\alpha\cdot x-t)s(zt,k),\qquad z\in\{-1,1\},t\in[0,1],k \in\mathbb{Z}^{d}\setminus\{0\}, \tag{5.4}\]
where \(\alpha=\alpha_{k}:=\frac{k}{\|\pi k\|_{1}}\) and \(s(t,k):=-\mathrm{sgn}(\cos(\|\pi k\|_{1}t+b(k)))\). For each \(x\in D\), the expected value \(\mathbb{E}_{P}[h_{x}]=\int_{\{-1,1\}\times[0,1]\times\left(\mathbb{Z}^{d} \setminus\{0\}\right)}h_{x}(z,t,k)dP(z,t,k)\) of the random variables \(h_{x}\) satisfies
\[J_{N,r}(f,x)+\left[\sum_{k\in\mathbb{Z}^{d}}\mathrm{Im}(\widehat {J_{N}}(k))k\right]\cdot x-\sum_{k\in\mathbb{Z}^{d}}\mathrm{Re}(\widehat{J_{N }}(k))\] \[= v\int_{\{-1,1\}\times[0,1]\times\mathbb{Z}^{d}\setminus\{0\}}h_{ x}(z,t,k)dP(z,t,k)=:g_{x}.\]
The rest of the proof is analogous to [21, Proof of Theorem 1], and we replace \(m\) by \(m^{\prime}=\lceil m/4\rceil\) here.
Let \(\epsilon>0\) be a constant to be determined later. We can partition the set \(\{(z,t,\alpha)\in\{-1,1\}\times[0,1]\times\mathbb{R}^{d}:\ \|\alpha\|_{1}=\pi^{-1}\}\) into a family of subsets \(\{\mathcal{A}_{j}\}_{j=1}^{M^{\prime}}\) of \(\ell^{\infty}\)-diameter at most \(\frac{\epsilon}{d+1}\), where the number \(M^{\prime}\) of the subsets in this family can be chosen to be the integer part of \(2\left(\frac{2d+2}{\pi}\right)^{d-1}(d+1)\epsilon^{-d}\). The diameter restriction yields
\[\sup_{(z,t,\alpha),(\tilde{z},\tilde{t},\tilde{\alpha})\in\mathcal{A}_{j}}\|( z,t,\alpha)-(\tilde{z},\tilde{t},\tilde{\alpha})\|_{\infty}\leq\frac{\epsilon}{d+ 1},\quad j=1,\ldots,M^{\prime},\]
which together with the Lipschitz property of \(\sigma\) implies
\[\sup_{x\in D}\left|\sigma(z\alpha\cdot x-t)-\sigma(\tilde{z}\tilde{\alpha} \cdot x-\tilde{t})\right|\leq\epsilon,\quad\forall(z,t,\alpha),(\tilde{z}, \tilde{t},\tilde{\alpha})\in\mathcal{A}_{j}.\]
For each \(j\), we denote two subsets of \(\{-1,1\}\times[0,1]\times\left(\mathbb{Z}^{d}\setminus\{0\}\right)\) as
\[\mathcal{A}_{j,-} = \left\{(z,t,k):\ \left(z,t,\frac{k}{\|\pi k\|_{1}}\right)\in \mathcal{A}_{j},\ s(zt,k)=-1\right\},\] \[\mathcal{A}_{j,+} = \left\{(z,t,k):\ \left(z,t,\frac{k}{\|\pi k\|_{1}}\right)\in \mathcal{A}_{j},\ s(zt,k)=1\right\},\]
and set the collection \(\{\mathcal{A}_{j,-},\mathcal{A}_{j,+}:j=1,\ldots,M^{\prime}\}\) as \(\{\mathcal{B}_{i}\}_{i=1}^{M}\) with \(M=2M^{\prime}\). Then \(\{\mathcal{B}_{1},\ldots,\mathcal{B}_{M}\}\) form a partition of the set \(\Lambda=\{-1,1\}\times[0,1]\times\left(\mathbb{Z}^{d}\setminus\{0\}\right)\) and satisfy
\[\sup_{(z,t,k),(\tilde{z},\tilde{t},\tilde{k})\in\mathcal{B}_{i}}\sup_{x\in D }\left|h_{x}\left(\tilde{z},\tilde{t},\tilde{k}\right)-h_{x}\left(z,t,k\right) \right|\leq\epsilon,\quad i=1,\ldots,M. \tag{5.5}\]
We restrict the probability measure \(P\) onto the subsets in this partition and define a collection of probability measures \(\{P_{i}\}_{i=1}^{M}\) on \(\Lambda\) by
\[dP_{i}(z,t,k)=\frac{1}{L_{i}}dP(z,t,k)\mathbf{1}\{(z,t,k)\in\mathcal{B}_{i}\},\]
where \(L_{i}=\int_{\mathcal{B}_{i}}dP(z,t,k)\) is the normalization constant to make \(P_{i}\) a probability measure. Correspondingly, we set \(m_{i}=m^{\prime}L_{i}\) and take a random sample
\[\underline{a}=\{(z_{j,i},t_{j,i},k_{j,i})\}_{1\leq j\leq n_{i},\ 1\leq i\leq M}\]
of sizes \(\{n_{i}=\lceil m_{i}\rceil\}\) independently according to \(\{P_{i}\}_{i=1}^{M}\). Thus, we split the population domain \(\Lambda\) into \(M\) "strata" \(\mathcal{B}_{1},\ldots,\mathcal{B}_{M}\) and allocate the number of within-stratum samples to be proportional to the "size" of the stratum \(m_{1},\ldots,m_{M}\) (i.e., proportionate allocation). Note from \(\sum\limits_{i=1}^{M}L_{i}=1\) that
\[\sum\limits_{i=1}^{M}n_{i}\leq m^{\prime}+M. \tag{5.6}\]
Let
\[g_{i,x}=\frac{v}{n_{i}}\sum_{j=1}^{n_{i}}h_{x}(z_{j,i},t_{j,i},k_{j,i}),\qquad i=1, \ldots,M\]
and
\[\overline{g}_{x}=\sum_{i=1}^{M}\frac{m_{i}}{m^{\prime}}g_{i,x}.\]
We apply \(L_{i}=m_{i}/m^{\prime}\) to get
\[\mathbb{E}\left[\sup_{x\in D}|\overline{g}_{x}-g_{x}|\right]= \mathbb{E}\left[\sup_{x\in D}\left|\sum_{i=1}^{M}L_{i}g_{i,x}-v\sum_{i=1}^{M}L_ {i}\int_{\mathcal{B}_{i}}h_{x}(z,t,k)dP_{i}(z,t,k)\right|\right] \tag{5.7}\] \[= \frac{v}{m^{\prime}}\mathbb{E}\left[\sup_{x\in D}\left|\sum_{i=1 }^{M}\frac{m_{i}}{n_{i}}\sum_{j=1}^{n_{i}}\left(h_{x}(z_{j,i},t_{j,i},k_{j,i}) -\mathbb{E}_{P_{i}}[h_{x}]\right)\right|\right].\]
To carry out Rademacher analysis for the quantity in (5.7), we let \(\underline{\sigma}=\{\sigma_{j,i}\}\) be a sequence of independent identically distributed Rademacher variables and \(\{\mu_{i}\}_{i=1}^{M}\) be a sequence of functions defined on \(D\) by \(\mu_{i}(x)=h_{x}(z_{i},t_{i},k_{i})\) with a random sample \((z_{i},t_{i},k_{i})\in\mathcal{B}_{i}\) drawn according to \(P_{i}\). We get from [35, Lemma 2.3.6] that
\[\mathbb{E}\left[\sup_{x\in D}\left|\sum_{i=1}^{M}\frac{m_{i}}{n_{ i}}\sum_{j=1}^{n_{i}}\left(h_{x}(z_{j,i},t_{j,i},k_{j,i})-\mathbb{E}_{P_{i}}[h_{ x}]\right)\right|\right] \tag{5.8}\] \[\leq 2\mathbb{E}\left[\sup_{x\in D}\left|\sum_{i=1}^{M}\frac{m_{i}}{ n_{i}}\sum_{j=1}^{n_{i}}\sigma_{j,i}\left(h_{x}(z_{j,i},t_{j,i},k_{j,i})-\mu_{i}(x) \right)\right|\right].\]
For notational brevity, we denote \(\tilde{h}_{j,i}(x)=\frac{m_{i}}{n_{i}}(h_{x}(z_{j,i},t_{j,i},k_{j,i})-\mu_{i}( x))\). Observe that \(\sup_{y\in\{-1,0,1\}}\left(\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}\sigma_{j,i}y \tilde{h}_{j,i}(x)\right)=\left|\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}\sigma_{j,i} \tilde{h}_{j,i}(x)\right|\). Fix \(\underline{a}\). We apply a concentration inequality [3, Corollary 13.2] for suprema of empirical processes involving a collection of random variables \(\left\{\sum\limits_{k=1}^{n}\alpha_{k,t}\epsilon_{k}:t\in\mathcal{T}\right\}\) induced by a sequence of independent Rademacher variables \(\{\epsilon_{k}\}_{k=1}^{n}\) and a collection of coefficient sequences \(\{\alpha_{k,t}\}_{k=1}^{n}\) indexed by a set \(\mathcal{T}\) with the distance \(\mbox{dist}(t,t^{\prime})=\left\{\sum_{k=1}^{n}\left(\alpha_{k,t}-\alpha_{k,t^ {\prime}}\right)^{2}\right\}^{1/2}\) for \(t,t^{\prime}\in\mathcal{T}\). In our situation, \(\underline{\sigma}=\{\sigma_{j,i}\}\) is the sequence of independent Rademacher variables. The collection of coefficient sequences is \(\left\{\left(y\tilde{h}_{j,i}(x)\right)_{j,i}:(x,y)\in\mathcal{T}\right\}\) indexed by the set \(\mathcal{T}:=D\times\{-1,0,1\}\) with the distance \(\kappa\) given by
\[\kappa\left((x,y),(x^{\prime},y^{\prime})\right)=\left(\sum_{i=1}^{M}\sum_{j=1 }^{n_{i}}\left(y\tilde{h}_{j,i}(x)-y^{\prime}\tilde{h}_{j,i}(x^{\prime})\right) ^{2}\right)^{1/2},\qquad(x,y),(x^{\prime},y^{\prime})\in\mathcal{T}.\]
Hence we can apply [3, Corollary 13.2] and obtain
\[\mathbb{E}_{\underline{\sigma}}\left[\sup_{x\in D}\left|\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}\sigma_{j,i}\tilde{h}_{j,i}(x)\right|\right] \tag{5.9}\] \[= \mathbb{E}_{\underline{\sigma}}\left[\sup_{(x,y)\in D\times\{-1,0,1\}}\left(\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}\sigma_{j,i}y\tilde{h}_{j,i}(x)\right)-0\right]\] \[\leq 12\int_{0}^{\delta/2}\sqrt{N(u,\mathcal{T})}du,\]
where \(N(u,\mathcal{T})\) is the \(u\)-metric entropy of \(\mathcal{T}\) with respect to the metric \(\kappa\) (i.e., the logarithm of the smallest size of \(u\)-nets that cover \(\mathcal{T}\) with respect to \(\kappa\)) and \(\delta=\left(\sup_{(x,y)\in D\times\{-1,0,1\}}\sum_{i=1}^{M}\sum_{j=1}^{n_{i}} \left(y\tilde{h}_{j,i}(x)\right)^{2}\right)^{1/2}\).
To estimate the metric entropy \(N(u,\mathcal{T})\), we observe from a simple covering of the interval \([-1,1]\) by \(1+1/\eta\) intervals of radius \(\eta>0\) that the cube \(D=[-1,1]^{d}\) can be covered by \((1+1/\eta)^{d}\leq(2/\eta)^{d}\) balls of radius \(\eta\) in the \(\ell_{\infty}\)-norm for \(\eta\leq 1\). Combining this with a metric relation
\[\kappa\left((x,y),(x^{\prime},y)\right)=|y|\left(\sum_{i=1}^{M}\sum_{j=1}^{n_{ i}}\left(\tilde{h}_{j,i}(x)-\tilde{h}_{j,i}(x^{\prime})\right)^{2}\right)^{1/2} \leq 2\sqrt{m^{\prime}+M}\|x-x^{\prime}\|_{\infty}\]
seen from the Lipschitz property of \(\sigma\) and the definition of \(\tilde{h}_{j,i}\), we find that any \(\eta\)-covering of \(D\) with respect to the \(\ell_{\infty}\)-norm induces a \(2\sqrt{m^{\prime}+M}\eta\) covering of \(D\times\{y\}\) with respect to the \(\kappa\)-metric. Therefore, by taking \(\eta=u/(2\sqrt{m^{\prime}+M})\leq\frac{\delta}{2}/(2\sqrt{m^{\prime}+M})<1\), for the covering numbers \(\mathcal{N}(u,D\times\{y\})\) and \(\mathcal{N}(u,\mathcal{T})\), we have
\[\mathcal{N}(u,D\times\{y\})\leq(2/\eta)^{d}\leq\left(\frac{4\sqrt{m^{\prime}+ M}}{u}\right)^{d}\]
and
\[\mathcal{N}(u,\mathcal{T})\leq\sum_{y\in\{1,0,-1\}}\mathcal{N}(u,D\times\{y\}) \leq 3\left(\frac{4\sqrt{m^{\prime}+M}}{u}\right)^{d}. \tag{5.10}\]
It follows from (5.5) and (5.6) that \(\delta\leq\sqrt{m^{\prime}+M}\epsilon\) and from (5.10) that \(N(u,\mathcal{T})\leq d\log\left(4\sqrt{m^{\prime}+M}/u\right)+\log 3\).
Now we determine \(\epsilon>0\) by
\[\epsilon=\frac{2(d+1)\pi^{-1+1/d}}{\lceil m/4\rceil^{1/d}}.\]
This choice together with the definition of \(M^{\prime}\) gives \(M^{\prime}=2\left((2d+2)/\pi\right)^{d-1}\left(d+1\right)\epsilon^{-d}=\lceil m/4\rceil\). Hence \(M^{\prime}=m^{\prime}=\lceil m/4\rceil\) and \(M=2M^{\prime}=2\lceil m/4\rceil\). Then by evaluating the integral, we can bound (5.9) as
\[\mathbb{E}_{\underline{\sigma}}\left[\sup_{x\in D}\left|\sum_{i=1}^{M}\sum_{j=1 }^{n_{i}}\sigma_{j,i}\tilde{h}_{j,i}(x)\right|\right]\leq 12(3+2\sqrt{\log m}) \sqrt{d}\sqrt{\frac{m}{2}}\epsilon. \tag{5.11}\]
Thus by taking expectation over \(\underline{a}\in\Lambda\), we obtain
\[\mathbb{E}\left[\sup_{x\in D}\left|\sum_{i=1}^{M}\frac{m_{i}}{n_ {i}}\sum_{j=1}^{n_{i}}\sigma_{j,i}\left(h_{x}(z_{j,i},t_{j,i},k_{j,i})-\mu_{i }(x)\right)\right|\right] \tag{5.12}\] \[= \mathbb{E}_{\underline{a}}\mathbb{E}_{\underline{\sigma}}\left[ \sup_{x\in D}\left|\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}\sigma_{j,i}\tilde{h}_{j,i }(x)\right|\right]\leq 12(3+2\sqrt{\log m})\sqrt{d}\sqrt{\frac{m}{2}}\epsilon.\]
Together with (5.7) and (5.8), we conclude that
\[\mathbb{E}\left[\sup_{x\in D}|\overline{g}_{x}-g_{x}|\right]\leq\frac{C_{5}}{2\pi^{2}}vd^{3/2}\sqrt{\log m}\,m^{-\frac{1}{2}-\frac{1}{d}} \tag{5.13}\]
holds for some absolute constant \(C_{5}\). Since this inequality holds on average, by (5.6) we know that there is a realization
\[\overline{g}_{x}=\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}\frac{vm_{i}}{m^{\prime}n_{i}}h_{x}(z_{j,i},t_{j,i},k_{j,i})=:\sum_{k=1}^{3\lceil m/4\rceil}\beta_{k}\sigma(\alpha_{k}\cdot x-b_{k})\in H_{3\lceil m/4\rceil}\]
such that
\[\sup_{x\in D}\left|J_{N,r}(f,x)+\left[\sum_{k\in\mathbb{Z}^{d}} \operatorname{Im}(\widehat{J_{N}}(k))k\right]\cdot x-\sum_{k\in\mathbb{Z}^{d} }\operatorname{Re}(\widehat{J_{N}}(k))-\overline{g}_{x}\right|\] \[\leq C_{5}v_{J_{N},2}d^{3/2}\sqrt{\log m}\,m^{-\frac{1}{2}-\frac{1}{d}}.\]
Moreover, from the definition (5.4) of \(h_{x}\), we can get bounds of the parameters as
\[|\beta_{k}|\leq\frac{v}{\lceil m/4\rceil}\leq\frac{8\pi^{2}v_{J_{N},2}}{m}, \qquad\|\alpha_{k}\|_{1}\leq 1,\qquad 0\leq b_{k}\leq 1.\]
To complete the proof, notice that \(u=\sigma(u)-\sigma(-u)\) for \(u\in\mathbb{R}\). Then for \(m\geq 20\), a function of the form \(\sum\limits_{k=3\lceil m/4\rceil+1}^{m}\beta_{k}\sigma(\alpha_{k}\cdot x-b_{k})\) can realize the affine function \(\left[\sum\limits_{k\in\mathbb{Z}^{d}}\operatorname{Im}(\widehat{J_{N}}(k))k \right]\cdot x-\sum\limits_{k\in\mathbb{Z}^{d}}\operatorname{Re}(\widehat{J_ {N}}(k))\) with the parameters bounded as
\[|\beta_{k}|\leq\frac{8v_{J_{N},2}}{m},\qquad\|\alpha_{k}\|_{1}\leq 1,\qquad 0 \leq b_{k}\leq 1,\]
and the desired bound (3.9) is verified. The bound is trivially true for \(m<20\). This completes the proof of Lemma 1.
## Acknowledgments
The first version of the paper was written when the authors were at City University of Hong Kong, supported partially by NSFC/RGC Joint Research Scheme [RGC Project No. N_CityU102/20 and NSFC Project No. 12061160462], Germany/Hong Kong Joint Research Scheme [Project No. G-CityU101/20], Hong Kong Institute for Data Science, and InnoHK initiative, The Government of the HKSAR, and Laboratory for AI-Powered Financial Technologies. The authors would like to thank Hrushikesh Mhaskar and the referees for their constructive comments and suggestions.
|
2302.10899 | Feature Affinity Assisted Knowledge Distillation and Quantization of
Deep Neural Networks on Label-Free Data | In this paper, we propose a feature affinity (FA) assisted knowledge
distillation (KD) method to improve quantization-aware training of deep neural
networks (DNN). The FA loss on intermediate feature maps of DNNs plays the role
of teaching middle steps of a solution to a student instead of only giving
final answers in the conventional KD where the loss acts on the network logits
at the output level. Combining logit loss and FA loss, we found that the
quantized student network receives stronger supervision than from the labeled
ground-truth data. The resulting FAQD is capable of compressing model on
label-free data, which brings immediate practical benefits as pre-trained
teacher models are readily available and unlabeled data are abundant. In
contrast, data labeling is often laborious and expensive. Finally, we propose a
fast feature affinity (FFA) loss that accurately approximates FA loss with a
lower order of computational complexity, which helps speed up training for high
resolution image input. | Zhijian Li, Biao Yang, Penghang Yin, Yingyong Qi, Jack Xin | 2023-02-10T01:00:49Z | http://arxiv.org/abs/2302.10899v3 | Feature Affinity Assisted Knowledge Distillation and Quantization of Deep Neural Networks on Label-Free Data
###### Abstract
In this paper, we propose a feature affinity (FA) assisted knowledge distillation (KD) method to improve quantization-aware training of deep neural networks (DNN). The FA loss on intermediate feature maps of DNNs plays the role of teaching middle steps of a solution to a student instead of only giving final answers in the conventional KD where the loss acts on the network logits at the output level. Combining logit loss and FA loss, we found that the quantized student network receives stronger supervision than from the labeled ground-truth data. The resulting FAQD is capable of compressing model on label-free data, which brings immediate practical benefits as pre-trained teacher models are readily available and unlabeled data are abundant. In contrast, data labeling is often laborious and expensive. Finally, we propose a fast feature affinity (FFA) loss that accurately approximates FA loss with a lower order of computational complexity, which helps speed up training for high resolution image input.
Quantization, Convolutional Neural Network, Knowledge Distillation, Model Compression, Image Classification
## I Introduction
Quantization is one of the most popular methods for deep neural network compression: it projects network weights and activation functions to lower precision, thereby accelerating computation and reducing memory consumption. However, there is an inevitable loss of accuracy in the low-bit regime. One way to mitigate this issue is through knowledge distillation (KD [10]). In this paper, we study a feature affinity assisted KD in which the student and teacher networks not only match their logits at the output level but also match feature maps in the intermediate stages. This is similar to teaching a student the intermediate steps of a solution instead of just showing the final answer (as in conventional KD [10]). Our method does not rely on ground-truth labels while enhancing student network learning and closing the gap between full- and low-precision models.
### _Weight Quantization of Neural Network_
Quantization-aware training (QAT) searches for the optimal quantized weights during training. Given an objective \(L\), the classical QAT scheme ([6, 21]) is formulated as
\[\begin{cases}w^{t+1}=w^{t}-\nabla_{u}L(u^{t}),\\ u^{t+1}=\text{Quant}(w^{t+1}),\end{cases} \tag{1}\]
where Quant is projection to a low precision quantized space. Yin et al. [28] proposed BinaryRelax, a relaxation form of QAT, which replaces the second update of (1) by
\[\begin{split} u^{t+1}=\frac{w^{t+1}+\lambda^{t+1}\text{Quant}(w^{ t+1})}{1+\lambda^{t+1}},\\ \lambda^{t+1}=\eta\lambda^{t}\quad\text{with }\eta>1.\end{split} \tag{2}\]
Darkhorn et al. [7] further improved (2) by designing a more sophisticated learnable growing scheme for \(\lambda^{t}\) and factoring a learnable parameter into Quant(.). Polino et al. [19] proposed quantized distillation (QD), a QAT framework that leverages knowledge distillation for quantization. Under QD, the quantized model receives supervision from both ground-truth (GT) labels and a trained teacher in float precision (FP). The objective function has the generalized form (\(\alpha\in(0,1)\)):
\[\mathcal{L}_{QD}=\alpha\mathcal{L}_{KD}+(1-\alpha)\mathcal{L}_{GT} \tag{3}\]
where \(\mathcal{L}_{KD}\) is the Kullback-Leibler divergence (KL) loss, and \(\mathcal{L}_{GT}\) is the negative log-likelihood (NLL) loss. In order to compare different methods fairly, we introduce two technical terms: end-to-end quantization and fine-tuning quantization. End-to-end quantization trains a quantized model from scratch, while fine-tuning quantization trains a quantized model starting from a pre-trained float precision (FP) model. With the same method, the latter usually yields a better result than the former. Li et al. [15] proposed a mixed quantization (a.k.a. BRECQ) that takes a pre-trained model and partially retrains it on a small subset of data. We list the performance of some previous works on weight quantization, which will serve as baselines for this work.
\begin{table}
\begin{tabular}{c|c|c|c} Method & 1-bit & 2-bit & 4-bit \\ \hline \hline \multicolumn{4}{c}{Model: ResNet20} \\ \hline QAT ([6, 21]) & 87.07\% & 90.26\% & 91.47\% \\ \hline BinaryRelax [28] & 88.64\% & 90.47\% & 91.75\% \\ \hline QD [19] & 89.06\% & 90.86\% & 91.89\% \\ \hline DSQ [8] & 90.24\% & 91.06\% & 91.92\% \\ \hline BRECQ [15] & N/A & 81.31\% & 83.98\% \\ \end{tabular}
\end{table} TABLE I: Quantization accuracies of some existing quantization-aware training methods on CIFAR-10 dataset. All methods except BRECQ are end-to-end.
### _Activation Quantization_
In addition to weight quantization, the inference of neural networks can be further accelerated through activation quantization. Given a resolution \(\alpha>0\), a quantized ReLU activation function of bit-width \(b\in\mathbb{N}\) is formulated as:
\[\sigma(x,\alpha)=\begin{cases}0&x<0\\ k\alpha&(k-1)\alpha\leq x<k\alpha,\ \ 1\leq k\leq 2^{b}-1\\ (2^{b}-1)\alpha&x\geq(2^{b}-1)\alpha\end{cases} \tag{4}\]
where the resolution parameter \(\alpha\) is learned from data. A plot of the \(2\)-bit quantized ReLU is shown in figure 1. However, such a quantized activation function leads to vanishing gradients during training, which makes standard backpropagation inapplicable. Indeed, it is clear that \(\frac{\partial\sigma}{\partial x}=0\) almost everywhere. Bengio et al. [2] proposed to use a straight through estimator (STE) in the backward pass to handle the zero gradient issue. The idea is to simply replace the vanished \(\frac{\partial\sigma}{\partial x}\) with the non-trivial derivative \(\frac{\partial\tilde{\sigma}}{\partial x}\) of a surrogate function \(\tilde{\sigma}(x,\alpha)\). Theoretical studies on STE and convergence vs. recurrence issues of training algorithms have been conducted in ([17, 27]). Among a variety of STE choices, a widely-used STE is the \(x\)-derivative of the so-called clipped ReLU [3], \(\tilde{\sigma}(x,\alpha)=\min\{\max\{x,0\},(2^{b}-1)\alpha\}\), namely,
\[\frac{\partial\tilde{\sigma}}{\partial x}=\begin{cases}1&0<x<(2^{b}-1)\alpha \\ 0&\text{else}.\end{cases}\]
In addition, a few proxies of \(\frac{\partial\sigma}{\partial\alpha}\) have been proposed ([4, 29]). In this work, we follow [29] and use the three-valued proxy:
\[\frac{\partial\sigma}{\partial\alpha}\approx\begin{cases}0&x\leq 0\\ 2^{b-1}&0<x<(2^{b}-1)\alpha\\ 2^{b}-1&x\geq(2^{b}-1)\alpha.\end{cases} \tag{5}\]
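A minimal PyTorch sketch of the \(b\)-bit quantized ReLU (4) whose backward pass uses the clipped-ReLU STE for \(x\) and the three-valued proxy (5) for \(\alpha\); the class name and the choice of a single learnable scalar \(\alpha\) per layer are our assumptions.

```python
import torch

class QuantReLU(torch.autograd.Function):
    """b-bit quantized ReLU (4); STE gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, alpha, bits):
        levels = 2 ** bits - 1
        ctx.save_for_backward(x, alpha)
        ctx.bits = bits
        # Forward: round x up to a multiple of alpha (boundary ties aside),
        # clipped to [0, levels * alpha].
        k = torch.clamp(torch.ceil(x / alpha), min=0, max=levels)
        return k * alpha

    @staticmethod
    def backward(ctx, grad_out):
        x, alpha = ctx.saved_tensors
        levels = 2 ** ctx.bits - 1
        # STE for x: derivative of the clipped ReLU min{max{x, 0}, levels * alpha}.
        grad_x = grad_out * ((x > 0) & (x < levels * alpha)).to(grad_out.dtype)
        # Three-valued proxy (5) for d(sigma)/d(alpha).
        dalpha = torch.zeros_like(x)
        dalpha[(x > 0) & (x < levels * alpha)] = 2.0 ** (ctx.bits - 1)
        dalpha[x >= levels * alpha] = float(levels)
        grad_alpha = (grad_out * dalpha).sum()
        return grad_x, grad_alpha, None  # no gradient for the integer bits

# Usage: alpha = torch.tensor(0.5, requires_grad=True); y = QuantReLU.apply(x, alpha, 2)
```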
### _Knowledge Distillation_
Several works have proposed to impose closeness between teacher and student networks beyond the output distributions, e.g., similarity between feature maps. A flow of solution procedure (FSP) matrix in [26] measures the information exchange between two layers of a given model; an \(l_{2}\) loss then regularizes the distance between the FSP matrices of teacher and student in knowledge distillation. An attention transform (AT) loss [30] directly measures the distance between feature maps output by teacher and student, which enhances the learning of the student from the teacher. Similarly, the feature affinity (FA) loss [24] measures the distance between two feature maps. In a dual learning framework for semantic segmentation [24], the FA loss is applied to the output feature maps of a segmentation decoder and a high-resolution decoder. In [25], FA loss on multi-resolution paths also improves lightweight semantic segmentation. Given two feature maps with the same height and width (interpolated if different), \(\mathbf{F}^{S}\in\mathbb{R}^{C_{1}\times H\times W}\) and \(\mathbf{F}^{T}\in\mathbb{R}^{C_{2}\times H\times W}\), we first normalize the feature maps along the channel dimension. Viewing each pixel of a feature map as a vector \(\mathbf{F}_{i}\in\mathbb{R}^{C}\), we construct an affinity matrix \(\mathbf{S}\in\mathbb{R}^{WH\times WH}\) as:
\[\mathbf{S}_{ij}=\|\mathbf{F}_{i}-\mathbf{F}_{j}\|_{\theta}:=\cos\theta_{ij}= \frac{\langle\mathbf{F}_{i},\mathbf{F}_{j}\rangle}{||\mathbf{F}_{i}||||\mathbf{ F}_{j}||}.\]
where \(\theta_{ij}\) is the angle between \(\mathbf{F}_{i}\) and \(\mathbf{F}_{j}\). Hence, the FA loss measures the similarity of pairwise angular distances between pixels of two feature maps, and can be formulated as
\[L_{fa}(\mathbf{F}^{S},\mathbf{F}^{T})=\frac{1}{W^{2}H^{2}}||\mathbf{S}^{T}- \mathbf{S}^{S}||_{2}^{2}. \tag{6}\]
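A minimal PyTorch sketch of (6): each feature map is flattened to \(HW\times C\) and \(\ell_{2}\)-normalized per pixel so that \(\mathbf{F}\mathbf{F}^{T}\) holds the pairwise cosine similarities \(\mathbf{S}\); the function names are ours.

```python
import torch
import torch.nn.functional as F

def affinity(feat):
    # feat: (C, H, W). Rows of the reshaped matrix are per-pixel feature
    # vectors; after normalization, (rows @ rows.T)[i, j] = cos(theta_ij).
    c, h, w = feat.shape
    rows = F.normalize(feat.reshape(c, h * w).t(), dim=1)  # (HW, C)
    return rows @ rows.t()                                 # (HW, HW)

def fa_loss(feat_s, feat_t):
    # Eq. (6): squared Frobenius distance of affinity matrices, scaled by 1/(WH)^2.
    if feat_s.shape[1:] != feat_t.shape[1:]:
        # Interpolate the student features to the teacher's spatial size.
        feat_s = F.interpolate(feat_s[None], size=feat_t.shape[1:],
                               mode="bilinear", align_corners=False)[0]
    s_s, s_t = affinity(feat_s), affinity(feat_t)
    return ((s_t - s_s) ** 2).sum() / s_t.shape[0] ** 2
```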
### _Contributions_
In this paper, our main contributions are:
1. We find that using mean squares error (MSE) gives better performance than KL on QAT, which is a significant improvement to QD ([19]).
2. We consistently improve the accuracies of various quantized student networks by imposing the FA loss on feature maps of each convolutional block. We also unveil the theoretical underpinning of feature affinity loss in terms of the celebrated Johnson-Lindenstrass lemma for low-dimensional embeddings.
3. We achieve state-of-the-art quantization accuracy on CIFAR-10 and CIFAR-100. Our FAQD framework _can train a quantized student network on unlabeled data_.
4. We propose a randomized Fast FA (FFA) loss to accelerate the computation of training loss, and prove its convergence and error bound.
## II Feature Affinity Assisted Distillation and Quantization
### _Feature Affinity Loss_
In the quantization setting, it is unreasonable to require that \(F^{S}\) be close to \(F^{T}\), as they typically lie in different spaces (\(F^{S}\in\mathcal{Q}\) in full quantization) and have different dimensions. However, \(F^{S}\) can be viewed as a compression of \(F^{T}\) in dimension, and preserving information under such compression has been studied in compressed sensing. Researchers ([20, 22]) have proposed to compress graph embeddings to lower dimensions so that graph convolutions can be computed efficiently. In the K-means clustering problem, several methods ([1, 18]) have been designed to project the data into a low-dimensional space such that
\[||\text{Proj}(\mathbf{x})-\text{Proj}(\mathbf{y})||\approx||\mathbf{x}-\mathbf{ y}||,\ \ \forall\ (\mathbf{x},\mathbf{y}), \tag{7}\]
and so pairwise distances from data points to the centroids can be computed at a lower cost.
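As a quick numerical illustration of (7) (ours, not from [1, 18]), a scaled Gaussian random projection approximately preserves pairwise Euclidean distances:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 4096, 512
X = rng.standard_normal((n, d))
A = rng.standard_normal((d, k)) / np.sqrt(k)  # random projection matrix
Y = X @ A                                     # Proj(x) = x A

i, j = 3, 57
print(np.linalg.norm(X[i] - X[j]))  # distance in the original space
print(np.linalg.norm(Y[i] - Y[j]))  # nearly the same after projection
# With k = 512, the two values typically agree to within a few percent.
```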
Viewing the feature maps of the student model as a compression of the teacher's feature maps, we impose a similar property in terms of pairwise angular distances:
\[||\mathbf{F}_{i}^{S}-\mathbf{F}_{j}^{S}||_{\theta}\approx||\mathbf{F}_{i}^{T}-\mathbf{F}_{j}^{T} ||_{\theta},\ \forall\left(i,j\right)\]
which is realized by minimizing the feature affinity loss. On the other hand, a Johnson-Lindenstrauss (JL [11]) like lemma can guarantee that the student's feature affinity matrix is close to the teacher's, provided that the number of channels of the student network is not too small. In contrast, the classical JL lemma states that a set of points in a high-dimensional space can be embedded into a space of much lower dimension in such a way that the _Euclidean_ distances between the points are nearly preserved. To tailor it to our application, we prove the following JL-like lemma in the angular distance case:
**Theorem II.1** (Johnson-Lindenstrauss lemma, Angular Case): _Given any \(\epsilon\in(0,1)\), an embedding matrix \(\mathbf{F}\in\mathbb{R}^{n\times d}\), for \(k\in(16\epsilon^{-2}\ln n,d)\), there exists a linear map \(T(\mathbf{F})\in\mathbb{R}^{n\times k}\) so that_
\[\begin{split}(1-\epsilon)||\mathbf{F}_{i}-\mathbf{F}_{j}||_{ \theta}\leq||T(\mathbf{F})_{i}-T(\mathbf{F})_{j}||_{\theta}\\ \leq(1+\epsilon)||\mathbf{F}_{i}-\mathbf{F}_{j}||_{\theta},\ \ \forall\ \ 1\leq i,j\leq n\end{split} \tag{8}\]
_where \(||\mathbf{F}_{i}-\mathbf{F}_{j}||_{\theta}=\frac{\langle\mathbf{F}_{i},\mathbf{F}_{j}\rangle}{\|\mathbf{F}_{i}\|\,\|\mathbf{F}_{j}\|}\) is the angular distance._
It is thus possible to reduce the embedding dimension from \(d\) down to \(k\) while roughly preserving the pairwise angular distances between the points. In a convolutional neural network, we can view the intermediate feature maps as \(\mathbf{F}^{S}\in\mathbb{R}^{HW\times C_{1}}\) and \(\mathbf{F}^{T}\in\mathbb{R}^{HW\times C_{2}}\), and the feature affinity loss helps the student learn a compressed feature embedding. The FA loss can be flexibly placed between teacher and student at different positions (encoder/decoder, residual block, etc.) for different models. In the standard implementation of ResNet, residual blocks with the same number of output channels are grouped into a sequential layer. We apply the FA loss to the features of such layers.
\[\mathcal{L}_{FA}=\sum_{l=1}^{L}L_{fa}(\mathbf{F}_{l}^{T},\mathbf{F}_{l}^{S})\]
where \(\mathbf{F}_{l}^{T}\) and \(\mathbf{F}_{l}^{S}\) are the feature maps of teacher and student respectively. For example, the residual network family of ResNet20, ResNet56, ResNet110, and ResNet164 have \(L=3\), whereas the family of ResNet18, ResNet34, and ResNet50 have \(L=4\).
### _Choice of Loss Functions_
In this work, we propose two sets of loss function choices, one for end-to-end quantization and one for fine-tuning quantization, where end-to-end quantization refers to training a student model from randomly initialized weights. We investigate both scenarios and propose a strategy for each.
The Kullback-Leibler divergence (KL) is a metric of the similarity between two probabilistic distributions. Given a ground-truth distribution \(P\), it computes the relative entropy of a given distribution \(Q\) from \(P\):
\[\mathcal{L}_{KL}(P||Q)=\sum_{x\in\mathcal{X}}P(x)\ln\frac{P(x)}{Q(x)}. \tag{9}\]
While KD is usually coupled with KL loss ([19, 10]), it is not unconventional to choose other loss functions. Kim et al. [14] showed that MSE, in certain cases, can outperform KL in the classic teacher-student knowledge distillation setting. KL loss is also widely used for trade-off between accuracy and robustness under adversarial attacks, which can be considered as self-knowledge distillation. Given a classifier \(f\), an original data point \(\mathbf{x}\) and its adversarial example \(\mathbf{x}^{\prime}\), TRADES [31] is formulated as
\[L_{TRADES}=\mathcal{L}_{CE}(f(\mathbf{x}),y)+\mathcal{L}_{KL}\big{(}f(\mathbf{x})||f(\mathbf{x}^{\prime})\big{)}\]
Li et al. [16] showed that \(L_{CE}\big{(}f(\mathbf{x}^{\prime}),y\big{)}\) outperforms \(\mathcal{L}_{KL}(f(\mathbf{x})||f(\mathbf{x}^{\prime}))\) both experimentally and theoretically.
Inspired by the studies above, we conduct experiments on different choices of the loss function. We compare KD on quantization from scratch (end-to-end). As shown in table II, MSE outperforms KL in quantization.
On the other hand, we find that KL loss works better for fine-tuning quantization. One possible explanation is that when training from scratch, the ratio \(\frac{P(x)}{Q(x)}\) is large; since the derivative of the logarithm is small at large values, convergence is slower and potentially worse. On the other hand, when \(\frac{P(x)}{Q(x)}\) is close to 1, the logarithm has a sharp slope and converges fast.
### _Feature Affinity Assisted Distillation and Quantization_
Inspired by previous studies ([13, 15, 19]), we propose a feature affinity assisted quantized distillation (FAQD). The end-to-end quantization objective function is formulated as:
\[\begin{split}\mathcal{L}=\alpha\,\mathcal{L}_{KD}+\beta\, \mathcal{L}_{FA}+\gamma\,\mathcal{L}_{GT}\\ =\alpha\,\mathcal{L}_{MSE}\big{(}f^{T}(\mathbf{x}),f^{S}(\mathbf{ x})\big{)}&+\beta\,\sum_{l=1}^{L}\mathcal{L}_{fa}(\mathbf{F}_{l}^{T}, \mathbf{F}_{l}^{S})\\ &+\gamma\,\mathcal{L}_{NLL}(f^{S}(\mathbf{x}),y).\end{split} \tag{10}\]
In fine-tuning quantization, we replace MSE loss in (10) by KL divergence loss. In FAQD, the student model learns not only the final logits of the teacher but also the intermediate extracted feature maps of the teacher using feature affinity norm computed as in [24].
\begin{table}
\begin{tabular}{c|c|c|c|c} Student & Teacher & 1-bit & 2-bit & 4-bit \\ \hline \hline \multicolumn{4}{c}{\(\mathcal{L}_{KD}\)} & \multicolumn{1}{c|}{KL loss in (3)} \\ \hline \hline ResNet20 & ResNet110 & 89.06\% & 90.86\% & 91.89\% \\ \hline \multicolumn{4}{c}{\(\mathcal{L}_{KD}\)} & \multicolumn{1}{c|}{MSE in (3)} \\ \hline ResNet20 & ResNet110 & **90.00\%** & **91.01\%** & **92.05\%** \\ \hline \end{tabular}
\end{table} TABLE II: Comparision of KL loss and MSE loss on CIFAR-10 data set. All teachers are pre-trained FP models, and all students are initial models (end-to-end quantization).
In addition to (10), we also propose a label-free objective which does not require the knowledge of labels:
\[\mathcal{L}_{\text{label-free}}=\alpha\mathcal{L}_{MSE}\big{(}f^{T}(\mathbf{x}),f^{S}(\mathbf{x})\big{)}+\beta\sum_{l=1}^{L}\mathcal{L}_{fa}(\mathbf{F}_{l}^{T },\mathbf{F}_{l}^{S}). \tag{11}\]
While pre-trained computer vision models are readily available from cloud services such as AWS and image/video data are abundantly collected, data labeling is still expensive and time-consuming. Therefore, a label-free quantization framework has significant value in the real world. In this work, we verify that the FA loss can significantly improve KD performance. The label-free loss in Eq. (11) can outperform the baseline methods in table I as well as the prior supervised QD in (3).
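Putting (10) and (11) together, here is a minimal PyTorch sketch of the FAQD objective for one sample; `fa_loss` refers to the sketch above, and the weights `alpha`, `beta`, `gamma` together with the assumption that each network returns its logits along with a list of per-block feature maps are ours.

```python
import torch.nn.functional as F

def faqd_loss(student_out, teacher_out, labels=None,
              alpha=1.0, beta=1.0, gamma=1.0):
    logits_s, feats_s = student_out  # quantized student
    logits_t, feats_t = teacher_out  # frozen float-precision teacher
    # L_KD: MSE on raw logits (end-to-end setting; replace with KL for fine-tuning).
    loss = alpha * F.mse_loss(logits_s, logits_t.detach())
    # L_FA: feature affinity loss over the L convolutional blocks, as in (10).
    loss = loss + beta * sum(fa_loss(fs, ft.detach())
                             for fs, ft in zip(feats_s, feats_t))
    # L_GT: optional ground-truth supervision; omit for the label-free loss (11).
    if labels is not None:
        loss = loss + gamma * F.nll_loss(F.log_softmax(logits_s, dim=1), labels)
    return loss
```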
## III Experimental Results
### _Weight Quantization_
In this section, we test FAQD on the CIFAR-10 dataset. First, we experiment on fine-tuning quantization. The float precision (FP) ResNet110 teaches ResNet20 and ResNet56. The teacher has 93.91% accuracy, and the two pre-trained student models have accuracy 92.11% and 93.31%, respectively. While both SGD and Adam optimization work well on this problem, we found that KL loss with Adam slightly outperforms SGD in this scenario. The objective is
\[\mathcal{L}=\mathcal{L}_{KL}+\mathcal{L}_{FA}\]
for label-free quantization. When calibrating with ground-truth labels, the cross-entropy loss \(\mathcal{L}_{NLL}\) is used as the supervision criterion.
For end-to-end quantization, we found that MSE loss performs better than KL loss. Adam optimization struggles to reach acceptable performance on end-to-end quantization (with either KL or MSE loss). We further test the performance of FAQD on the larger dataset CIFAR-100, where an FP ResNet164 teaches a quantized ResNet110. We report the accuracies for both label-free and label-present supervision, and evaluate FAQD on both fine-tuning quantization and end-to-end quantization.
In the ResNet experiment, the teacher ResNet164 has 74.50% testing accuracy. For the pretrained FAQD, the FP student ResNet110 has 72.96% accuracy. As shown in table VI
Fig. 2: FAQD framework. The intermediate feature maps are supervised by FA loss, and the raw logits by MSE loss.
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline \multicolumn{4}{c}{Teacher ResNet110: 93.91\%} \\ \hline \hline Method & 1-bit & 2-bit & 4-bit \\ \hline Pre-trained FP student & ResNet20: 92.21\% & \\ \hline Label-free FAQD & 89.97\% & 91.40\% & 92.55\% \\ \hline FAQD with Supervision & 90.92\% & 91.93\% & 92.74\% \\ \hline Pre-trained FP student & ResNet56: 93.31\% & \\ \hline Label-free FAQD & 92.34\% & 92.91\% & 93.52\% \\ \hline FAQD with Supervision & 92.83\% & 93.14\% & 93.77\% \\ \hline \end{tabular}
\end{table} TABLE III: Fine-tuning knowledge distillation for quantization of all convolutional layers.
and table V, FAQD has surprisingly superior performance on CIFAR-100. The binarized student almost reaches the accuracy of the FP model, and the 4-bit model surpasses the FP teacher.
### _Full Quantization_
In this section, we extend our results to full quantization, where the activation function is also quantized. As shown in table VII, the 4W4A fine-tuning quantization has accuracy similar to the float ResNet20. Meanwhile, we fill in the long-existing performance gap [9] when reducing precision from 1W2A to 1W1A on the CIFAR-10 dataset, as the accuracy drop is linear (with respect to activation precision) and small.
## IV Fast Feature Affinity Loss
### _Proposed Method_
Despite the significant gains in KD performance, we note that introducing the FA loss increases the training time. Even if we normalize the feature maps by row beforehand, computing the FA loss between multiple intermediate feature maps can be expensive:
\[\mathcal{L}_{fa}(F_{1},F_{2})=\|F_{1}F_{1}^{T}-F_{2}F_{2}^{T}\|_{2}^{2}. \tag{12}\]
As we freeze the pre-trained teacher, the feature map of the teacher model \(F_{1}=f^{T}(\mathbf{x})\) is a constant, in contrast to the student feature map \(F_{2}=f^{S}(\Theta,\mathbf{x})\). Denote \(\mathbf{S}_{1}=F_{1}F_{1}^{T}\in\mathbb{R}^{WH\times WH}\) and \(g(\Theta,\mathbf{x})=f^{S}(\Theta,\mathbf{x})[f^{S}(\Theta,\mathbf{x})]^{T}\). The feature affinity loss can be formulated as
\[\mathcal{L}_{fa}(\Theta)=\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}\| \mathbf{S}_{1}-g(\Theta,\mathbf{x})\|_{2}^{2}. \tag{13}\]
Computing \(\mathbf{S}_{1}\) and \(g(\Theta,\mathbf{x})\) requires \(\mathcal{O}(W^{2}H^{2}C)\) complexity each (\(C\) is the number of channels), which is quite expensive. We introduce a random estimator of \(\mathcal{L}_{fa}(\Theta)\):
\[\mathcal{L}_{fa}(F_{1},F_{2},\mathbf{z})=\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x} \in\mathcal{X}}\|(\mathbf{S}_{1}-g(\Theta,\mathbf{x}))\mathbf{z}\|_{2}^{2}, \tag{14}\]
where \(\mathbf{z}\in\mathbb{R}^{HW}\) is a vector with i.i.d unit normal components \(N(0,1)\). We show below that Eq. (14) is an unbiased estimator of FA loss (13).
**Proposition 1**: \[\mathbb{E}_{\mathbf{z}\sim N(0,1)}[\mathcal{L}_{fa}(F_{1},F_{2},\mathbf{z})]=\mathcal{L}_{fa}(\Theta).\]
This estimator can achieve computing complexity \(\mathcal{O}(HWC)\) by performing matrix-vector multiplications of the form \(F_{1}\big{(}F_{1}^{T}\mathbf{z}\big{)}\). We define the Fast Feature Affinity (FFA) loss to be the \(k\)-ensemble of (14):
\[\mathcal{L}_{ffa,k}(\Theta)=\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X} }\frac{1}{k}\|(\mathbf{S}_{1}-g(\Theta,\mathbf{x}))Z_{k}\|_{2}^{2} \tag{15}\]
where \(Z_{k}\in\mathbb{R}^{HW\times k}\) with i.i.d \(\mathcal{N}(0,1)\) components, and we have \(k\ll WH\). The computational complexity of \(\mathcal{L}_{ffa,k}(\Theta)\) is \(\mathcal{O}(kWHC)\).
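A minimal PyTorch sketch of the FFA estimator (15); the \(\mathcal{O}(kWHC)\) cost comes from never forming the \(HW\times HW\) affinity matrices, only the matrix-vector products \(F\big{(}F^{T}Z_{k}\big{)}\). The function name and the row-normalization convention are ours.

```python
import torch

def ffa_loss(f_s, f_t, k=10):
    # f_s: (HW, C1) student features, f_t: (HW, C2) teacher features,
    # rows unit-normalized so that F @ F.T is the affinity matrix.
    hw = f_t.shape[0]
    z = torch.randn(hw, k, device=f_t.device)  # Gaussian probes Z_k
    # (S_1 - g(Theta, x)) Z_k evaluated as F (F^T Z_k): O(k * HW * C) per term.
    diff = f_t @ (f_t.t() @ z) - f_s @ (f_s.t() @ z)
    # Unbiased estimate of || F_t F_t^T - F_s F_s^T ||_F^2, as in (14)-(15).
    return (diff ** 2).sum() / k
```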
Finally, we remark that the FFA loss can accelerate the computation of pairwise Euclidean distances in dimension reduction such as in (7). The popular way to compute the pairwise distances between the rows of a matrix \(A\in\mathbb{R}^{n\times c}\) is to broadcast the vector of row norms and compute \(AA^{T}\). Given the squared row norm vector \(v=(\|A_{1}\|^{2},\cdots,\|A_{n}\|^{2})\), the similarity matrix \((\mathbf{S}_{ij})\), \(\mathbf{S}_{ij}=\|A_{i}-A_{j}\|^{2}\), is computed as
\[\mathbf{S}=\mathbf{1}\otimes v-2AA^{T}+v\otimes\mathbf{1}.\]
The term \(2AA^{T}\) can be efficiently approximated by FFA loss.
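For completeness, a NumPy sketch of the broadcasting identity above, checking \(\mathbf{S}_{ij}=\|A_{i}\|^{2}-2A_{i}\cdot A_{j}+\|A_{j}\|^{2}\):

```python
import numpy as np

A = np.random.default_rng(1).standard_normal((6, 3))
v = (A ** 2).sum(axis=1)                   # squared row norms ||A_i||^2
S = v[None, :] - 2 * A @ A.T + v[:, None]  # S_ij = ||A_i - A_j||^2

i, j = 1, 4
assert np.isclose(S[i, j], np.linalg.norm(A[i] - A[j]) ** 2)
```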
### _Experimental Results_
We test the Fast FA loss on CIFAR-10. As mentioned in the previous section, ResNet-20 has 3 groups of residual blocks. The corresponding widths and heights of the feature maps are 32, 16, and 8 (\(H=W\) for all groups), so the dimensions (\(HW\)) of the similarity matrices are 1024, 256, and 64. We test the fast FA loss with \(k=1\), \(5\), and \(15\). The results are shown in table VIII. Meanwhile, the FFA loss takes less training time per step. When \(k=1\), the accuracies are inconsistent due to large variance: with too few samples in the estimator, the fast FA norm is too noisy and jeopardizes the distillation. When \(k=5\), the fast FA loss stabilizes and shows a significant improvement over the baseline \(\mathcal{L}=\mathcal{L}_{MSE}+\mathcal{L}_{CE}\) in table II. When \(k\) increases to \(15\), the performance of the fast FA loss is comparable with that of the exact FA loss (table IV). Moreover, we measure the time consumption for computing the FA loss and the FFA loss, plotting the time in log scale vs. \(H\) (\(H=W\)) for the feature maps. The theoretical time complexity for computing the exact FA loss is \(\mathcal{O}(H^{4})\) and that for computing the FFA loss is \(\mathcal{O}(H^{2})\). We see that figure 4(a) agrees with the theoretical estimate.
The larger \(H\) is, the greater the advantage of the FFA loss. For (medical) images with resolutions in the thousands, the FFA loss yields significant computational savings.
### _Theoretical Analysis of FFA Loss_
As shown in proposition 4.1, the FFA loss is a \(k\) ensemble unbiased estimator of FA loss. By the strong law of large numbers, the FFA loss converges to the exact FA loss with probability 1.
**Theorem IV.1**: _For given \(\Theta\), suppose that \(|\mathcal{L}_{fa}(\Theta)|<\infty\), then_
\[\forall\epsilon>0,\exists N\ \ s.t.\ \ \forall k>N,\ \ |\mathcal{L}_{ffa,k}( \Theta)-\mathcal{L}_{fa}(\Theta)|<\epsilon.\]
_Namely, the FFA loss converges to FA loss pointwise:_
\[\forall\Theta,\ \lim_{k\rightarrow\infty}\mathcal{L}_{ffa,k}(\Theta)=\mathcal{ L}_{fa}(\Theta).\]
_We also establish the following error bound for finite \(k\)._
**Proposition 2**: \[\mathbb{P}\big{(}|\mathcal{L}_{ffa,k}(\Theta)-\mathcal{L}_{fa}(\Theta)|> \epsilon\big{)}\leq\frac{C}{\epsilon^{2}k},\]
_where \(C\leq 3\,\|\mathcal{L}_{fa}(\Theta)\|_{2}^{4}\)._
Proposition IV.2 says that the probability that the FFA estimation has an error beyond a target value decays like \(\mathcal{O}(\frac{1}{k})\). The analysis guarantees the accuracy of FFA loss as an efficient estimator of FA loss. Another question one might ask is whether minimizing the FFA loss is equivalent to minimizing the FA loss. Denote \(\Theta^{*}=\arg\min L_{fa}(\Theta)\) and \(\Theta^{*}_{k}=\arg\min L_{ffa,k}(\Theta)\), and assume the minimum is unique for each function. In order to substitute FA loss by FFA loss, one would hope that \(\Theta^{*}_{k}\) converges to \(\Theta^{*}\). Unfortunately, the point-wise convergence in Theorem IV.1 is not sufficient to guarantee the convergence of the optimal points, as a counter-example can be easily constructed. In the rest of this section, we show that such convergence can be established under an additional assumption.
**Theorem IV.2** (Convergence in the general case): _Suppose that \(\mathcal{L}_{ffa,k}(\Theta)\) converges to \(\mathcal{L}_{fa}(\Theta)\) uniformly, that is_
\[\forall\epsilon>0,\exists N\ \ s.t.\ \ \forall k>N,\ \ |\mathcal{L}_{ffa,k}( \Theta)-\mathcal{L}_{fa}(\Theta)|<\epsilon\]
\(\forall\Theta\) _and_
\[|\mathcal{L}_{fa}(\Theta)|<\infty,\ \forall\Theta.\]
_Then_
\[\lim_{k\rightarrow\infty}||\Theta^{*}_{k}-\Theta^{*}||^{2}=0. \tag{16}\]
_The uniform convergence can be relaxed if \(\mathcal{L}_{fa}\) is convex in \(\Theta\). We would like to present a consequence of Theorem IV.2._
**Corollary IV.2.1** (Convergence in the convex case): _Suppose that \(L_{fa}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is convex and \(L\)-smooth, and that there \(\exists M\) such that \(||\Theta^{*}_{k}||\leq M,\,\forall k\). Then, \(\forall k,\ \ \mathcal{L}_{ffa,k}\) is also convex, and \(\lim_{k\rightarrow\infty}||\Theta^{*}_{k}-\Theta^{*}||^{2}=0\)._
\begin{table}
\begin{tabular}{c|c|c|c|c} Model & k & 2W32A & 4W32A & 4W4A \\ \hline \hline \multicolumn{5}{c}{Fast FA Loss Accuracy} \\ \hline ResNet20 & 1 & 90.83\(\pm\)2.75\% & 91.83\(\pm\)3.01\% & 90.32\(\pm\) 2.35\% \\ ResNet20 & 5 & 91.37\% & 92.14\% & 91.12\% \\ ResNet20 & 15 & 91.45\% & 92.39\% & 91.53\% \\ \hline \multicolumn{5}{c}{Fast FA Loss Computing Time Per Step} \\ \hline ResNet20 & 1 & 44 ms & 65 ms & 187 ms \\ ResNet20 & 5 & 46 ms & 66 ms & 188 ms \\ ResNet20 & 15 & 48 ms & 68 ms & 190 ms \\ \hline \multicolumn{5}{c}{Exact FA Loss Computing Time Per Step} \\ \hline ResNet20 & N/A & 60 ms & 80 ms & 205 ms \\ \hline \end{tabular}
\end{table} TABLE VIII: Test FFA loss for different \(k\) on CIFAR-10 (end-to-end quantization). As \(k\) increases, the performance is approaching the exact FA loss. The \(4\)-bit projection is more time consuming than \(2\)-bit. STE and \(\alpha\) update in full-quantization add extra time in milliseconds (ms).
Fig. 4: Plots for inference time of FA loss and FFA loss with \(k=1\).
## V Conclusion
We presented FAQD, a feature affinity (FA) assisted knowledge distillation method for quantization-aware training. It couples MSE loss with FA loss and significantly improves the accuracy of the quantized student. FAQD works for both weight-only and full quantization, and outperforms baseline ResNets on CIFAR-10 and CIFAR-100. We also analyzed an efficient randomized approximation (FFA) of the FA loss for feature maps with large dimensions. This theoretically founded FFA loss benefits training models on high-resolution images.
## VI Appendix
**Proof of Theorem 2.1:** It suffices to prove that for any set of \(n\) unit vectors in \(\mathbb{R}^{d}\), there is a linear map nearly preserving pairwise angular distances, because the angular distance is scale-invariant.
Let \(T\) be a linear transformation induced by a random Gaussian matrix \(\frac{1}{\sqrt{k}}A\in\mathbb{R}^{k\times d}\) such that \(T(\mathbf{F})=\mathbf{F}A^{T}\). Define the events \(\mathcal{A}^{-}_{ij}=\{T:(1-\epsilon)\|\mathbf{F}_{i}-\mathbf{F}_{j}\|^{2}\leq\|T(\mathbf{F })_{i}-T(\mathbf{F})_{j}\|^{2}\leq(1+\epsilon)\|\mathbf{F}_{i}-\mathbf{F}_{j}\|^{2}\) fails\(\}\) and \(\mathcal{A}^{+}_{ij}=\{T:(1-\epsilon)\|\mathbf{F}_{i}+\mathbf{F}_{j}\|^{2}\leq\|T(\mathbf{F}) _{i}+T(\mathbf{F})_{j}\|^{2}\leq(1+\epsilon)\|\mathbf{F}_{i}+\mathbf{F}_{j}\|^{2}\) fails\(\}\).
Following the proof of the classical JL lemma in the Euclidean case [23], we have:
\[P(\mathcal{A}^{-}_{ij})\leq 2e^{-\frac{(\epsilon^{2}-\epsilon^{3})k}{4}},\ \ P( \mathcal{A}^{+}_{ij})\leq 2e^{-\frac{(\epsilon^{2}-\epsilon^{3})k}{4}}. \tag{17}\]
Let \(\mathcal{B}_{ij}=\{T:|\mathbf{F}_{i}\cdot\mathbf{F}_{j}-T(\mathbf{F})_{i}\cdot T(\mathbf{F})_{ j}|>\epsilon\}\), where \(\cdot\) is the shorthand for inner product. We show that \(\mathcal{B}_{ij}\subset A^{-}_{ij}\cup\mathcal{A}^{+}_{ij}\) for \(\|F_{i}\|=\|F_{j}\|=1\) by showing \(\mathcal{A}^{-C}_{ij}\cap\mathcal{A}^{+C}_{ij}\subset\mathcal{B}^{C}_{ij}\).
If \(\mathcal{A}^{-C}_{ij}\cap\mathcal{A}^{+C}_{ij}\) holds, we have
\[4T(\mathbf{F})_{i}\cdot T(\mathbf{F})_{j}\] \[= \|T(\mathbf{F})_{i}+T(\mathbf{F})_{j}\|^{2}-\|T(\mathbf{F})_{i}-T(\mathbf{F})_{j} \|^{2}\] \[\leq (1+\epsilon)\|\mathbf{F}_{i}+\mathbf{F}_{j}\|^{2}-(1-\epsilon)\|\mathbf{F}_ {i}-\mathbf{F}_{j}\|^{2}\] \[= 4\mathbf{F}_{i}\cdot\mathbf{F}_{j}+2\epsilon(\|F_{i}\|_{2}^{2}+\|F_{j} \|^{2})\] \[= 4\mathbf{F}_{i}\cdot\mathbf{F}_{j}+4\epsilon.\]
Therefore, \(\mathbf{F}_{i}\cdot\mathbf{F}_{j}-T(\mathbf{F})_{i}\cdot T(\mathbf{F})_{j}\geq-\epsilon\). By a similar argument, we have \(\mathbf{F}_{i}\cdot\mathbf{F}_{j}-T(\mathbf{F})_{i}\cdot T(\mathbf{F})_{j}\leq\epsilon\). Then we have \(\mathcal{A}^{-C}_{ij}\cap\mathcal{A}^{+C}_{ij}\subset\mathcal{B}^{C}_{ij}\), and thus
\[\mathbb{P}(\mathcal{B}_{ij})\leq\mathbb{P}(\mathcal{A}^{-}_{ij}\cup A^{+}_{ij} )\leq 4\exp\{-\frac{(\epsilon^{2}-\epsilon^{3})k}{4}\}\]
and
\[\mathbb{P}(\cup_{i<j}\mathcal{B}_{ij})\leq\sum_{i<j}\mathbb{P}(\mathcal{B}_{ij})\leq 4n^{2}\exp\{-\frac{(\epsilon^{2}-\epsilon^{3})k}{4}\}.\]
This probability is less than \(1\) if we take \(k>\frac{16\ln n}{\epsilon^{2}}\). Therefore, there must exist a \(T\) such that \(\cap_{i<j}\mathcal{B}^{C}_{ij}\) holds, which completes the proof.
**Proof of Proposition 4.1:** Letting \(N=WH\), \(a_{ij}=(F_{1}F_{1}^{T})_{ij}\), and \(b_{ij}=(F_{2}F_{2}^{T})_{ij}\) in equation (14), we have:
\[\mathbb{E}_{\mathbf{z}}\mathcal{L}_{ffa}(F_{1},F_{2},\mathbf{z})=\mathbb{E}_{\mathbf{z}}\sum _{i=1}^{N}(\sum_{j=1}^{N}|a_{ij}-b_{ij}|z_{j})^{2}\] \[=\mathbb{E}_{\mathbf{z}}\sum_{i=1}^{N}(\sum_{j=1}^{N}|a_{ij}-b_{ij}|^{2}z_ {j}^{2}+2\sum_{j\neq k}|a_{ij}-b_{ij}||a_{ik}-b_{ik}|z_{j}z_{k})\] \[=\mathbb{E}_{\mathbf{z}}\sum_{i=1}^{N}\sum_{j=1}^{N}|a_{ij}-b_{ij}|^{2}z_ {j}^{2}+2\sum_{i=1}^{N}\sum_{j\neq k}|a_{ij}-b_{ij}||a_{ik}-b_{ik}|z_{j}z_{k}\] \[=\sum_{i=1}^{N}\sum_{j=1}^{N}|a_{ij}-b_{ij}|^{2}\mathbb{E}_{\mathbf{z}}z_ {j}^{2}+2\sum_{i=1}^{N}\sum_{j\neq k}|a_{ij}-b_{ij}||a_{ik}-b_{ik}| \mathbb{E}_{\mathbf{z}}z_{j}z_{k}\] \[=\sum_{i=1}^{N}\sum_{j=1}^{N}|a_{ij}-b_{ij}|^{2}=\mathcal{L}_{fa} (F_{1},F_{2}).\]
**Proof of Theorem 4.1:** Given a Gaussian matrix \(Z_{k}=[\mathbf{z}_{1},\cdots,\mathbf{z}_{k}]\in\mathbb{R}^{n\times k}\),
\[\mathcal{L}_{ffa,k}(\Theta)=\frac{1}{k}\sum_{l=1}^{k}\mathcal{L}_{ffa}(F_{1}, F_{2},\mathbf{z}_{l}).\]
For any fixed \(\Theta\), \(\mathcal{L}_{ffa}(F_{1},F_{2},\mathbf{z}_{l})\), \(l=1,\cdots,k\), are i.i.d. random variables. Assuming the first moment of each random variable is finite, by the strong law of large numbers, \(\mathcal{L}_{ffa,k}(\Theta)\) converges to \(\mathbb{E}[\mathcal{L}_{ffa}(F_{1},F_{2},\mathbf{z}_{1})]\) almost surely. In other words, \(\lim_{k\rightarrow\infty}\mathcal{L}_{ffa,k}(\Theta)=\mathcal{L}_{fa}(\Theta)\) with probability 1.
**Proof of Proposition 4.2:** By Chebyshev's inequality, we have
\[\mathbb{P}\big{(}\big{|}\mathcal{L}_{ffa,k}(\Theta)-\mathbb{E}[ \mathcal{L}_{ffa,k}(\Theta)]\big{|}>\epsilon\big{)}\leq\] \[\frac{\text{Var}(\mathcal{L}_{ffa,k}(\Theta))}{\epsilon^{2}}=\frac{ \text{Var}(\mathcal{L}_{ffa}(F_{1},F_{2},\mathbf{z}_{1}))}{\epsilon^{2}k}. \tag{18}\]
In order to estimate
\[\text{Var}(\mathcal{L}_{ffa}(F_{1},F_{2},\mathbf{z}_{1}))=\mathbb{E}[\mathcal{L}_{ffa}^{2}(F_{1},F_{2},\mathbf{z}_{1})]-\big{(}\mathbb{E}[\mathcal{L}_{ffa}(F_{1},F_{2},\mathbf{z}_{1})]\big{)}^{2}, \tag{19}\]
it suffices to estimate
\[\mathbb{E}[\mathcal{L}_{ffa}^{2}(F_{1},F_{2},\mathbf{z}_{1})]=\] \[\mathbb{E}_{z}\big{(}\sum_{i=1}^{N}\sum_{j=1}^{N}|a_{ij}-b_{ij}|^{ 2}z_{j}^{2}+\sum_{i=1}^{N}\sum_{j\neq k}|a_{ij}-b_{ij}||a_{ik}-b_{ik}|z_{j}z_ {k}\big{)}^{2}\]
which equals (as cross terms are zero):
\[=\mathbb{E}_{z}\big{(}\sum_{i=1}^{N}\sum_{j=1}^{N}|a_{ij}-b_{ij}|^{2}z_{j}^{2} \big{)}^{2}\]
Direct computation yields:
\[\sum_{i=1}^{N}\sum_{j=1}^{N}|a_{ij}-b_{ij}|^{4}z_{j}^{4}+\sum_{i=1}^{ N}\sum_{j=1}^{N}\sum_{l\neq i}^{N}|a_{ij}-b_{ij}|^{2}|a_{lj}-b_{lj}|^{2}z_{j}^{4}\] \[+2\sum_{i=1}^{N}\sum_{j=1}^{N}\sum_{l\neq j}^{N}|a_{ij}-b_{ij}|^{ 2}|a_{il}-b_{il}|^{2}z_{j}^{2}z_{l}^{2}\] \[+\sum_{i=1}^{N}\sum_{j=1}^{N}\sum_{k=1}^{N}\sum_{l\neq j}^{N}|a_{ ij}-b_{ij}|^{2}|a_{kl}-b_{kl}|^{2}z_{j}^{2}z_{l}^{2}\]
Notice that \(\mathbb{E}[z_{i}^{4}]=3\). Taking \(\mathbb{E}[\cdot]\), we derive the upper bound \(3\|\mathcal{L}_{fa}\|_{2}^{4}\).
**Proof of Theorem 4.2:** Since \(\lim\limits_{k\to\infty}\mathcal{L}_{ffa,k}(\Theta^{*})=\mathcal{L}_{fa}( \Theta^{*})\), it suffices to show that
\[\lim\limits_{k\to\infty}\ \inf\limits_{\Theta}\mathcal{L}_{ffa,k}(\Theta)= \mathcal{L}_{fa}(\Theta^{*}).\]
Note that
\[\inf\limits_{\Theta}\mathcal{L}_{ffa,k}(\Theta)\leq\mathcal{L}_{ffa,k}(\Theta^{*})\quad\text{and}\quad\lim\limits_{k\to\infty}\mathcal{L}_{ffa,k}(\Theta^{*})=\mathcal{L}_{fa}(\Theta^{*}).\]
Then,
\[\mathcal{L}_{fa}(\Theta^{*})\geq\lim\limits_{k\to\infty}\inf\limits_{\Theta}\mathcal{L}_{ffa,k}(\Theta).\]
On the other hand, for arbitrary \(\epsilon>0\), we have:
\[\exists N\ \ s.t.\ \ \forall k>N\ \ |\mathcal{L}_{ffa,k}(\Theta)-\mathcal{L}_{ fa}(\Theta)|<\frac{\epsilon}{2},\,\forall\Theta\]
and there exists a sequence \(\{\Theta_{k}\}\) s.t.
\[\mathcal{L}_{ffa,k}(\Theta_{k})<\inf\limits_{\Theta}\mathcal{L}_{ffa,k}( \Theta)+\frac{\epsilon}{2}.\]
Note that \(|\mathcal{L}_{ffa,k}(\Theta_{k})-\mathcal{L}_{fa}(\Theta_{k})|<\frac{\epsilon }{2}\) for \(k>N\), so:
\[\mathcal{L}_{fa}(\Theta^{*})-\epsilon\leq\mathcal{L}_{fa}(\Theta_{k})- \epsilon<\inf\limits_{\Theta}\mathcal{L}_{ffa,k}(\Theta),\ \ \forall k>N.\]
Since \(\epsilon\) is arbitrary, taking \(k\to\infty\), we have
\[\mathcal{L}_{fa}(\Theta^{*})\leq\lim\limits_{k\to\infty}\inf\limits_{\Theta} \mathcal{L}_{ffa,k}(\Theta).\]
**Proof of Corollary 4.2.1:** For readability, we shorthand: \(\mathcal{L}_{ffa,k}=f_{k}\) and \(\mathcal{L}_{fa}=f\). Let
\[\mathbf{H}=\nabla^{2}_{\Theta}f\succcurlyeq\mathbf{0}\in\mathbb{R}^{n\times n}\]
be the Hessian matrix of the FA loss, which is positive semi-definite by convexity of \(\mathcal{L}_{fa}\). Then,
\[\nabla^{2}_{\Theta}f_{k}=Z_{k}^{T}\mathbf{H}Z_{k}\succcurlyeq\mathbf{0}\in\mathbb{R}^{k\times k}\]
which implies the convexity of \(f_{k}\) for all \(k\). Moreover, it is clear that \(f_{k}\) is smooth for all \(k\) since
\[\|\nabla f_{k}(\mathbf{x})-\nabla f_{k}(\mathbf{y})\|=\|Z_{k}(\nabla f (\mathbf{x})-\nabla f(\mathbf{y}))\|\\ \leq L\cdot\|Z_{k}\|\cdot\|\mathbf{x}-\mathbf{y}\|. \tag{20}\]
Although we cannot claim equi-smoothness, since \(\|Z_{k}\|\) cannot be bounded uniformly in \(k\), the above is sufficient for proving the desired result.
For all \(k\), given any initial parameters \(\Theta^{0}\), by smoothness and convexity of \(f_{k}\), it is well known that
\[\|\Theta_{k}^{t}-\Theta_{k}^{*}\|\leq\|\Theta^{0}-\Theta_{k}^{*}\|\]
where \(\Theta_{k}^{t}\) is the parameter we arrive at after \(t\) steps of gradient descent. Hence, we can pick a compact set \(\mathbf{K}=\overline{B_{R}(\Theta^{*})}\) for \(R\) large enough such that \(\{\Theta_{k}^{*}\}_{k=1}^{\infty}\subset\mathbf{K}\) (denoting \(\Theta_{\infty}^{*}=\Theta^{*}\)). Now, it suffices to prove that \(f_{k}\) converges to \(f\) uniformly on \(\mathbf{K}\). In fact, \(f_{k}\) converges to \(f\) uniformly on any compact set. To begin with, we state a known result from functional analysis ([5, 12]):
**Lemma 6.1**: _(Uniform boundness and equi-Lipschitz) Let \(\mathcal{F}\) be a family of convex function on \(\mathbb{R}^{n}\) and \(K\subset\mathbb{R}^{n}\) be a compact subset. Then, \(\mathcal{F}\) is equi-bounded and equi-Lipschitz on \(K\)._
This result is established in any Banach space in [12], so it automatically holds in finite-dimensional Euclidean space. By Lemma 6.1, the sequence \(\{f_{k}\}_{k=1}^{\infty}\), where \(f_{\infty}=f\), is equi-Lipschitz: \(\forall\epsilon>0\), \(\exists\,\delta>0\) s.t. \(|f_{k}(x)-f_{k}(y)|<\epsilon\) for all \(k\) and \(x,y\in K\) when \(|x-y|<\delta\). Since \(\{B(x,\delta)\}_{x\in K}\) forms an open cover of \(K\), we have a finite sub-cover \(\{B(x_{j},\delta)\}_{j=1}^{m}\) of \(K\). Since there are finitely many points \(x_{j}\), there exists \(N_{\epsilon}\) such that
\[\forall k>N_{\epsilon},\ \ |f_{k}(x_{j})-f(x_{j})|<\epsilon,\text{ for }j=1, \cdots,m.\]
For any \(x\in K\), \(x\in B(x_{j^{*}},\delta)\) for some \(j^{*}\). For all \(k>N_{\epsilon}\), we have
\[|f_{k}(x)-f(x)|\leq\\ |f_{k}(x)-f_{k}(x_{j^{*}})|+|f_{k}(x_{j^{*}})-f(x_{j^{*}})|+|f(x _{j^{*}})-f(x)|\\ \leq(2\tilde{L}+1)\epsilon \tag{21}\]
where \(\tilde{L}\) is the Lipschitz constant for equi-Lipschitz family. Therefore, \(f_{k}\) converges to \(f\) uniformly on \(K\). |
2310.06232 | Spiking PointNet: Spiking Neural Networks for Point Clouds | Recently, Spiking Neural Networks (SNNs), enjoying extreme energy efficiency,
have drawn much research attention on 2D visual recognition and shown gradually
increasing application potential. However, it still remains underexplored
whether SNNs can be generalized to 3D recognition. To this end, we present
Spiking PointNet in the paper, the first spiking neural model for efficient
deep learning on point clouds. We discover that the two huge obstacles limiting
the application of SNNs in point clouds are: the intrinsic optimization
obstacle of SNNs that impedes the training of a big spiking model with large
time steps, and the expensive memory and computation cost of PointNet that
makes training a big spiking point model unrealistic. To solve the problems
simultaneously, we present a trained-less but learning-more paradigm for
Spiking PointNet with theoretical justifications and in-depth experimental
analysis. In specific, our Spiking PointNet is trained with only a single time
step but can obtain better performance with multiple time steps inference,
compared to the one trained directly with multiple time steps. We conduct
various experiments on ModelNet10, ModelNet40 to demonstrate the effectiveness
of Spiking PointNet. Notably, our Spiking PointNet even can outperform its ANN
counterpart, which is rare in the SNN field thus providing a potential research
direction for the following work. Moreover, Spiking PointNet shows impressive
speedup and storage saving in the training phase. | Dayong Ren, Zhe Ma, Yuanpei Chen, Weihang Peng, Xiaode Liu, Yuhan Zhang, Yufei Guo | 2023-10-10T00:59:26Z | http://arxiv.org/abs/2310.06232v1 | # Spiking PointNet: Spiking Neural Networks for Point Clouds
###### Abstract
Recently, Spiking Neural Networks (SNNs), enjoying extreme energy efficiency, have drawn much research attention on 2D visual recognition and shown gradually increasing application potential. However, it still remains underexplored whether SNNs can be generalized to 3D recognition. To this end, we present Spiking PointNet in the paper, the first spiking neural model for efficient deep learning on point clouds. We discover that the two huge obstacles limiting the application of SNNs in point clouds are: the intrinsic optimization obstacle of SNNs that impedes the training of a big spiking model with large time steps, and the expensive memory and computation cost of PointNet that makes training a big spiking point model unrealistic. To solve the problems simultaneously, we present a trained-less but learning-more paradigm for Spiking PointNet with theoretical justifications and in-depth experimental analysis. In specific, our Spiking PointNet is trained with only a single time step but can obtain better performance with multiple time steps inference, compared to the one trained directly with multiple time steps. We conduct various experiments on ModelNet10, ModelNet40 to demonstrate the effectiveness of Spiking PointNet. Notably, our Spiking PointNet even can outperform its ANN counterpart, which is rare in the SNN field thus providing a potential research direction for the following work. Moreover, Spiking PointNet shows impressive speedup and storage saving in the training phase. Our code is open-sourced at Spiking-PointNet.
## 1 Introduction
The advent of deep learning technologies, notably PointNet [38], has considerably amplified our capabilities to comprehend and manipulate intricate 3D data from real-world settings. With autonomous driving and augmented reality, which often require real-time interaction and fast response, becoming increasingly prevalent, the reliance on efficient point cloud processing techniques has been escalated. However, computation for the point cloud is energy-hungry and usually needs powerful devices.
Spiking Neural Networks (SNNs) [40; 4; 11; 12; 39; 35; 2; 55; 22; 57; 56; 47; 52; 44; 53; 54], seen as more energy efficient than Artificial Neural Networks (ANNs) due to their event-driven computation mechanism and the energy-saving multiplication-addition transformation advantage, have received extensive attention recently in many fields. For example, in [36], SNNs were used to handle sequential learning and show better performance and less energy cost on sequential learning compared to ANNs with similar scales. In [31], SNNs were leveraged to study the Human Activity Recognition (HAR) task. The results show that the SNN can reduce up to 94% energy consumption while being comparable to homogeneous ANN counterparts in accuracy. There are also some works that apply SNNs in autonomous driving. LaneSNNs [45] presented an SNN-based approach to detect
the lanes with an event-based camera input with a very low power consumption of about 1 W. For the more challenging point cloud task, a question is naturally raised: Could SNNs be transferred to the 3D domain and retain the energy-efficient advantage?
To this end, we present **Spiking PointNet**, the first spiking neural network approach to deep learning on point clouds. To better apply SNNs in the point cloud field, we focus on solving two huge obstacles standing in the way. The first is the difficulty of optimization. Though the binary spike information transmission paradigm makes SNNs very energy efficient, it also introduces a training challenge, since the gradient of the spiking neuron's firing process is not well-defined: it is zero almost everywhere and infinite at the firing threshold. Such an all-or-nothing gradient makes it impossible to train SNNs via gradient-based optimization methods as for ANNs. To handle this problem, various Surrogate Gradient (SG) methods have been proposed [35; 49; 39; 30; 14]. This kind of method finds an alternative function to replace the firing function during back-propagation through the spiking neurons. Thus, the SNN can also be trained with the current gradient-based optimization framework. However, it is not easy to find a suitable surrogate function, especially for SNNs with large time steps. As the number of time steps increases, the gradient explosion/vanishing problem and the gradient error problem become severe. We provide a detailed analysis in Sec. 3.3.
The second problem is that training networks for point clouds requires more memory and computation than for images, since point cloud data needs more dimensions to describe itself. To overcome this limitation, researchers have proposed various model simplification strategies, including, but not limited to, sparse convolution [7], optimization during the data processing phase [27], and optimization at the local feature extraction stage [34; 33]. However, when applying an SNN to point clouds, the memory and computation still grow greatly with the number of time steps, and the above methods cannot handle this problem well. Thus, there is no existing way to train SNNs with large time steps on common deep-learning devices.
To solve the above problems simultaneously, we present a trained-less but learning-more paradigm for Spiking PointNet. Specifically, we propose a new framework for Spiking PointNet in which we train the SNN with a suitable SG method using only a single time step and run inference with multiple time steps to obtain better performance. We prove theoretically and experimentally in Sec. 3.4 that this framework can result in a better SNN than training it with multiple time steps directly. To improve the framework further, we also embed a membrane potential perturbation method in it, based on the observation that, for static point cloud datasets, the residual membrane potential of the SNN carried over from the previous time step does not transmit temporal information but instead acts as a perturbation that increases generalization. The overall workflow of the framework is visualized in Fig. 1.
The contributions of our paper are as follows:
* We show, with theoretical justification and in-depth experimental analysis, that it is not easy to directly train a well-performing SNN with large time steps for point clouds, and propose Spiking PointNet with a trained-less but learning-more framework, the first simple yet effective SNN framework for point clouds.

Figure 1: Overview of the trained-less but learning-more framework. The Spiking PointNet is trained with only a single time step, while it is used with multiple time steps in the inference phase. To improve the performance of the SNN, we also add some membrane potential perturbation during training.
* Furthermore, we also propose a membrane potential perturbation method for the framework to increase the SNN generalization.
* We evaluate our methods on various datasets and the experimental results show the effectiveness of our method. Notably, our Spiking PointNet can even outperform its ANN counterpart, which is very rare in the SNN field.
## 2 Related Work
### Spiking Neural Networks
Generally, there are three kinds of methods to train SNNs [16]: (1) spike-timing-dependent plasticity (STDP) [1] approaches, (2) ANN-to-SNN conversion approaches [25; 24; 32; 8; 10; 3; 22; 29], and (3) direct training approaches [6; 35; 49; 39; 30; 46; 47; 18; 21; 17; 13]. STDP is a biology-inspired method [23; 9] that updates the weights with the unsupervised learning algorithm called Hebbian learning [43]. However, it is still limited to small-scale datasets. The ANN-to-SNN conversion [8; 29] converts a well-trained ANN checkpoint to an SNN counterpart. Since training an ANN is much faster than training an SNN, this kind of method provides a fast way to obtain an SNN without using gradient descent for SNNs at all. However, the resulting SNN does not have its own learned features; specifically, all the converted SNN does is mimic the ANN. Moreover, this type of method requires many time steps to obtain a high-accuracy SNN. The direct training method tries to find an alternative function to replace the firing function of the spiking neurons when doing back-propagation. This kind of method can reduce the number of time steps greatly, even to fewer than 5 [20; 15; 14], and has hence received much attention recently. However, it is not easy to find a suitable surrogate function for SNNs with large time steps. In this work, we focus on solving this problem.
### Deep Learning on Point Clouds
Training networks for point clouds needs expensive memory and computation. To address these challenges, researchers have proposed a series of model simplification strategies to overcome the limitations of current point cloud models in practical applications [7; 41; 27; 34; 33; 42]. For instance, Lee _et al_. [28] introduced PillarAcc, an innovative algorithm-hardware co-design that significantly enhances the performance and energy efficiency of 3D object detection. However, its reliance on complex sparse convolution and dynamic pillar pruning may introduce additional complexity in the design and implementation process. Choy _et al_. [7] proposed MinkowskiConv, which provides a comprehensive solution for handling sparse spatio-temporal data, greatly enhancing its ability to capture complex temporal patterns in the data. Nevertheless, the inherent computational complexity and memory demands of 4D convolutions present new challenges. Hu _et al_. [27] introduced RandLA-Net to conserve computational resources in point cloud analysis by leveraging random sampling and an efficient local feature aggregation module. However, a limitation of RandLA-Net is that random sampling may lead to the loss of critical information, and it cannot be seamlessly applied to existing networks without a decline in performance. In comparison, the SNN version of PointNet offers an effective solution by significantly improving execution efficiency without altering the overall network structure, reducing the dependence on high-performance devices during inference. This enables general-purpose networks to more effectively address the computational resource consumption challenges of practical point cloud networks without the need to redesign network structures. However, when applying an SNN to point clouds, memory and computation still grow greatly with the number of time steps during training, and there is no existing way to train SNNs with large time steps on common deep-learning devices.
## 3 Preliminary and Methodology
In the paper, we mainly apply the SNN paradigm to PointNet [38], the first deep learning model that processes raw point clouds directly, and modify it into Spiking PointNet. Here, we first introduce PointNet and the widely used SNN neuron model, the Leaky Integrate-and-Fire (LIF) model, in detail. Then we elucidate the difficulty of optimizing the Spiking PointNet with large time steps. Next,
a trained-less but learning-more framework to solve the above problem will be presented. Finally, we further improve it with a membrane potential perturbation method.
### PointNet
PointNet represents a novel application of deep learning to process point cloud data [38]. It effectively addresses two primary challenges: permutation invariance, the unordered nature of point cloud data, and rotational invariance, the freedom to rotate the point cloud in 3D space without altering the represented object. Specifically, to tackle these challenges, PointNet employs a symmetric function in conjunction with a spatial transformer network. It processes each point through a shared fully connected network, followed by a max pooling operation. This approach inherently ensures permutation invariance as it remains indifferent to the order of input points. Formally, given point cloud data \(\{x_{1},x_{2},...,x_{n}\}\), each point \(x_{i}\) is transformed via a shared Multi-Layer Perceptron (MLP) denoted by \(h\), followed by a max pooling operation to enforce symmetry, yielding a global feature descriptor. Therefore, PointNet approximates a general function \(f\) defined on a point set by applying a symmetric function \(g\) on transformed elements in the set:
\[f\left(\{x_{1},\ldots,x_{n}\}\right)\approx g\left(h\left(x_{1}\right),\ldots, h\left(x_{n}\right)\right), \tag{1}\]
where \(f:2^{\mathbb{R}^{N}}\rightarrow\mathbb{R},h:\mathbb{R}^{N}\rightarrow\mathbb{ R}^{K}\) and \(g:\underbrace{\mathbb{R}^{K}\times\cdots\times\mathbb{R}^{K}}_{n}\rightarrow \mathbb{R}\) is a symmetric function.
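A minimal PyTorch sketch of this idea (ours, not the full PointNet: it omits the T-Net spatial transformer discussed next, and the layer sizes are illustrative) shows how the shared MLP \(h\) and the symmetric max pooling \(g\) yield permutation invariance:

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """f({x_1,...,x_n}) ~= g(h(x_1),...,h(x_n)) with g = max pooling."""
    def __init__(self, in_dim=3, feat_dim=128, num_classes=40):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                               nn.Linear(64, feat_dim), nn.ReLU())
        self.cls = nn.Linear(feat_dim, num_classes)

    def forward(self, pts):                        # pts: (batch, n_points, 3)
        per_point = self.h(pts)                    # shared MLP h over points
        global_feat = per_point.max(dim=1).values  # symmetric function g
        return self.cls(global_feat)

pts, model = torch.randn(2, 1024, 3), TinyPointNet()
perm = torch.randperm(1024)
# Max pooling ignores point order, so the output is permutation invariant.
assert torch.allclose(model(pts), model(pts[:, perm]), atol=1e-6)
```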
For rotational invariance, PointNet introduces a spatial transformer network - a specialized neural network proficient at predicting the required spatial transformation matrix for the point cloud, thereby enabling PointNet to manage rotating point cloud data.
The principal divergence between PointNet and conventional point cloud processing methodologies resides in the implementation of deep neural networks. This represents a significant leap from the traditional approach of manually designed features to Artificial Neural Networks (ANNs). The proposed model, Spiking PointNet, advances this progression by transitioning from ANNs to Spiking Neural Networks (SNNs). SNNs, which emulate the neural mechanisms of the brain more closely, promise to enhance the efficiency and precision of point cloud processing outcomes.
### Explicitly Iterative LIF Model
SNNs use spiking neurons, which are inspired by the brain's natural mechanisms, to transmit information. A spiking neuron receives input spike trains from the previous layer's neurons over time to update its membrane potential, \(u\). In the paper, we adopt the widely used leaky integrate-and-fire (LIF) neuron model, which can be described as follows:
\[\tau_{\mathrm{m}}\frac{du}{dt}=-\left(u-u_{\mathrm{rest}}\right)+R\cdot I(t), \quad u<V_{\mathrm{th}}. \tag{2}\]
In the above equation, \(I\) represents the input current, \(V_{\mathrm{th}}\) is the threshold, and \(R\) and \(\tau_{\mathrm{m}}\) are the resistance and time constant, respectively. A spike will be generated when \(u\) reaches \(V_{th}\), and \(u\) is subsequently reset to the resting potential \(u=u_{\mathrm{rest}}\), typically set to zero [30; 11; 39].
To use mature machine learning frameworks (_e.g._, TensorFlow, PyTorch) to train SNNs, an explicitly iterative LIF spiking model was proposed in [49], given by
\[\begin{gathered} u_{i}[t+1]=\lambda\left(u_{i}[t]-V_{\mathrm{th}}s _{i}[t]\right)+\sum_{j}w_{ij}s_{j}[t]+b_{i},\\ s_{i}[t+1]=H\left(u_{i}[t+1]-V_{\mathrm{th}}\right).\end{gathered} \tag{3}\]
Here, \(I_{i}(t)=\sum_{j}w_{ij}s_{j}(t)+b_{i}\), where the subscript \(i\) denotes the \(i\)-th current neuron, \(w_{ij}\) is the weight from the \(j\)-th neuron in the previous layer connected to the current neuron \(i\), and \(b_{i}\) is a bias. \(H(x)\) signifies the Heaviside step function, \(s_{i}[t]\) is the spike train of neuron \(i\) at discrete time step \(t\), and \(\lambda<1\) is a leaky term equal to \(1-\frac{1}{\tau_{\mathrm{m}}}\), typically 0.20 or 0.25 as in [30; 6; 39; 20].
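For concreteness, the following PyTorch sketch (ours, not from the paper; names are illustrative) rolls out the iterative dynamics of Eq. (3) for one fully connected spiking layer, using the constants stated in the text:

```python
import torch

def lif_forward(x, w, b, v_th=0.5, lam=0.25):
    """Iterative LIF dynamics of Eq. (3); x: input spikes (T, batch, in_dim)."""
    u = torch.zeros(x.shape[1], w.shape[1])      # membrane potential, u[0] = 0
    s = torch.zeros(x.shape[1], w.shape[1])      # no spikes before t = 0
    out = []
    for t in range(x.shape[0]):
        u = lam * (u - v_th * s) + x[t] @ w + b  # leak, reset, then integrate
        s = (u >= v_th).float()                  # Heaviside firing H(u - V_th)
        out.append(s)
    return torch.stack(out)                      # output spikes (T, batch, out_dim)
```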
The main difference between ANNs and SNNs is the nonlinear computational neuron. Replacing the ReLU neuron from PointNet with LIF spiking neuron will transform the PointNet to Spiking PointNet.
### Optimizing Difficulty for SNNs with Large Time Steps
A notorious problem in SNN training is the non-differentiability of the firing function, see Eq. (3). To discuss this problem concretely, we denote the loss function as \(L\) and calculate the gradients w.r.t. weights using the chain rule following [51] shown in Fig. 2 and given by
\[\frac{\partial L}{\partial\mathbf{W}^{l}}=\sum_{t=1}^{T}\frac{\partial L}{\partial\mathbf{s}^{l+1}[t]}\frac{\partial\mathbf{s}^{l+1}[t]}{\partial\mathbf{u}^{l+1}[t]}\left(\frac{\partial\mathbf{u}^{l+1}[t]}{\partial\mathbf{W}^{l}}+\sum_{\tau<t}\prod_{i=t-1}^{\tau}\left(\frac{\partial\mathbf{u}^{l+1}[i+1]}{\partial\mathbf{u}^{l+1}[i]}+\frac{\partial\mathbf{u}^{l+1}[i+1]}{\partial\mathbf{s}^{l+1}[i]}\frac{\partial\mathbf{s}^{l+1}[i]}{\partial\mathbf{u}^{l+1}[i]}\right)\frac{\partial\mathbf{u}^{l+1}[\tau]}{\partial\mathbf{W}^{l}}\right), \tag{4}\]
where \(\mathbf{W}^{l}\) represents the weights from layer \(l\) to \(l+1\), \(T\) is the total time steps, and \(L\) is the loss. The terms \(\frac{\partial\mathbf{s}^{l}[t]}{\partial\mathbf{u}^{l}[t]}\) for firing function is non-differentiable. Its gradient is 0 almost everywhere except for the threshold. Therefore, the actual updates for weights would either be 0 or infinity when recalling the gradient descent. To handle this problem, many surrogate gradient methods are proposed [49; 58; 19]. In this kind of method, when performing the forward pass, the firing function remains exactly the same, while, when for the backward pass, the firing function will become a surrogate function, and the surrogate gradient is computed based on it. A typically surrogate function may refer to the tanh-like function [14; 5; 30], given by
\[\varphi(x)=\frac{1}{2}\tanh\left(k\left(x-V_{\mathrm{th}}\right)\right)+\frac{ 1}{2}, \tag{5}\]
where \(k\) is a constant. The \(\varphi(x)\) and its gradient can be seen in Fig. 3. The surrogate gradient can be adjusted by changing \(k\). Other widely used surrogate functions also enjoy the same characteristic, such as rectangular or sigmoid surrogate functions proposed in [49].
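In frameworks with automatic differentiation, this is typically implemented as a custom autograd function: the Heaviside step is used in the forward pass and the derivative of Eq. (5) in the backward pass. A minimal PyTorch sketch (ours, not from the paper) is given below:

```python
import torch

class TanhSurrogateSpike(torch.autograd.Function):
    """Forward: Heaviside firing. Backward: gradient of Eq. (5)."""
    @staticmethod
    def forward(ctx, u, v_th, k):
        ctx.save_for_backward(u)
        ctx.v_th, ctx.k = v_th, k
        return (u >= v_th).float()

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        # d/du [0.5 tanh(k (u - v_th)) + 0.5] = 0.5 k (1 - tanh^2(k (u - v_th)))
        g = 0.5 * ctx.k * (1 - torch.tanh(ctx.k * (u - ctx.v_th)) ** 2)
        return grad_out * g, None, None

u = torch.randn(8, requires_grad=True)
s = TanhSurrogateSpike.apply(u, 0.5, 5.0)   # binary spikes, differentiable via SG
s.sum().backward()                          # u.grad now holds the surrogate slope
```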
It can be seen that when \(k\) is set to a large value, a more accurate gradient can be obtained in the backward pass, _i.e._, the gradient will be sharp in a narrow range and nearly flat elsewhere. However, the gradient explosion or vanishing problem becomes more severe in this case, since the final
Figure 3: The surrogate function (left) under different values of the coefficient, \(k\) and its corresponding gradient (right). The blue curves represent the firing function (left) and its true gradient (right).
Figure 2: Chain rule graph for gradients w.r.t. weights of SNNs
weight gradient is calculated by multiplying many surrogate gradients through layers and time steps according to Eq. (4), and thus tends to be either very big or very small. Conversely, when \(k\) is set to a small value, a more inaccurate gradient is obtained in the backward pass [14]. Hence the gradient error accumulates through layers and time steps, hurting the performance of the SNN too [48]. Consequently, it is very difficult to train a well-performing SNN with large time steps directly, limited by the fact that there is no suitable surrogate gradient for this kind of SNN.
### The Trained-less But Learning-more Framework
As aforementioned, except for the optimizing difficulty, there is no existing suitable way to train SNNs with large time steps on common deep-learning devices for point clouds, since training network on point clouds is much energy and memory hungry. To handle these two problems simultaneously, we propose a trained-less but learning-more framework.
To better describe the paradigm, we first show the gradient distributions of the first layer of Spiking PointNet on ModelNet40 in Fig. 4. Here, we have several baselines: (1) the Spiking PointNet using 1 single time step with \(k=0.5,5,20\), respectively; (2) the Spiking PointNet using 4 time steps with \(k=0.5,5,20\), respectively. It can be seen that when \(k=5\), the gradient distribution for the Spiking PointNet with 1 single time step is relatively suitable. When \(k=20\), the gradient explosion or vanishing problem is very significant, and when \(k=0.5\), the distribution is relatively flat, which means it differs greatly from the actual gradient and the gradient error is huge. Hence, neither a small \(k\) nor a large \(k\) is a good idea for SNNs. The results in Tab. 1 also show that a small \(k\) or a large \(k\) reduces the SNN accuracy.
Nevertheless, we can still find a relatively suitable surrogate function for an SNN with few time steps. However, the gradient explosion or vanishing problem and the gradient error problem become more severe as the number of time steps increases. It can be seen that, although \(k=5\) is a good choice for the Spiking PointNet with 1 single time step, the explosion or vanishing problem becomes very severe for the Spiking PointNet with 4 time steps. Meanwhile, with increasing time steps, the gradient error problem becomes severe too. Note that when \(k=0.5\), the gradient distribution for the Spiking PointNet with 4 time steps becomes flatter, which means a larger gradient error.
Figure 4: The gradient distributions of the first layer for Spiking PointNet on ModelNet40 with different \(k\) and time steps. (a), (b), and (c) show the distributions for the Spiking PointNet using 1 single time steps with \(k=0.5,5,20\), respectively. (e), (d), and (f) show the distributions for the Spiking PointNet using 4 time steps with \(k=0.5,5,20\), respectively.
Consequently, it is not easy to train a Spiking PointNet with large time steps. Tab. 1 also shows that the Spiking PointNet with 4 time steps even performs worse than the one with only a single time step. To this end, we propose a trained-less but learning-more framework. Specifically, we train our Spiking PointNet with only a single time step but use it with multiple time steps at inference time. By training SNNs with only a single time step, the gradient explosion or vanishing problem is greatly mitigated. Thus we can choose a relatively large \(k\), and meanwhile, the gradient error is reduced at the same time. In the paper, we choose \(k\) as 5. Tab. 2 shows the results of our trained-less but learning-more framework for Spiking PointNet on ModelNet10 and ModelNet40. It can be seen that training the Spiking PointNet with a suitable surrogate function for a single time step outperforms training with 4 time steps, and if we infer the trained model with multiple time steps, the accuracy still increases somewhat. Thus we name this paradigm the trained-less but learning-more framework.
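A minimal sketch of the inference side of this framework (ours, not from the paper) is shown below; it assumes a hypothetical `model` whose forward call runs one time step, whose spiking neurons keep their membrane potentials across calls, and which exposes a `reset()` helper to clear them:

```python
import torch

@torch.no_grad()
def infer_multi_step(model, points, T=4):
    """Run a single-time-step-trained Spiking PointNet for T steps and
    average the logits over time (an ensemble over time steps)."""
    model.reset()                              # hypothetical state-reset helper
    logits = [model(points) for _ in range(T)]
    return torch.stack(logits).mean(dim=0)
```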
### Membrane Potential Perturbation Method
An interesting phenomenon in our trained-less but learning-more framework is that, although the Spiking PointNet is trained with only 1 single time step, its inference accuracy tends to increase with the number of time steps. Some works [37; 31] show that SNNs can extract spatio-temporal features from sequential data with multiple time steps. However, point clouds are static data, so there are no temporal features to extract. We conjecture that the reason for the accuracy increase of Spiking PointNet with multiple time steps is that it becomes an ensemble. The residual membrane potential along time steps in the spiking neuron can be seen as a perturbation. The perturbation provides different initializations for the Spiking PointNet along time steps. Thus the Spiking PointNet at every time step can be seen as a different model, and averaging their outputs can improve the uncertainty estimation and thus may lead to an enhancement in SNN accuracy.
To verify our conjecture, in this section we conducted a series of ablation experiments on ModelNet40. We trained the Spiking PointNet with 4 time steps and evaluated its accuracy at every time step and over all time steps, respectively. The results are shown in Tab. 3. It can be seen that the collective results outperform those obtained from individual steps, implying that the performance improvement associated with larger time steps might be more related to an ensemble learning effect rather than a direct result of the increased time steps. Specifically, the Spiking PointNet at each time step can be seen as an independent model casting a vote towards the final prediction. This ensemble learning strategy increases the robustness of the model and subsequently improves the prediction accuracy. Our study suggests that a rethinking and optimization of time steps in SNNs is warranted. The inherent ensemble learning effect, which is underappreciated in conventional SNN design, could be a viable strategy to enhance the performance of SNNs while also managing computational resources. Our insights provide valuable implications for future design and optimization strategies in the field of SNNs.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & Training: 4 T & \multicolumn{4}{c}{Training: 1 T} \\ \cline{2-6} & Inferring: 4 T & Inferring: 1 T & Inferring: 2 T & Inferring: 3 T & Inferring: 4 T \\ \hline ModelNet10 & 91.05\% & 91.99\% & 92.43\% & 92.53\% & 92.32\% \\ \hline ModelNet40 & 86.70\% & 86.98\% & 87.26\% & 87.21\% & 87.13\% \\ \hline \hline \end{tabular}
Training: \(n\) T denotes training the Spiking PointNet with \(n\) time steps. Inferring: \(n\) T denotes Inferring the Spiking PointNet with \(n\) time steps.
\end{table}
Table 2: The ablation study for the trained-less but learning-more framework.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Time step} & \multicolumn{3}{c}{\(k\)} \\ \cline{2-4} & 0.5 & 5 & 20 \\ \hline
1 & 80.34\% & 86.98\% & 83.46\% \\ \hline
4 & 76.73\% & 86.70\% & 75.36\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: The accuracy for Spiking PointNet with different time steps and \(k\) on ModelNet40.
From the perspective that the residual membrane potential of the SNN carried over from the previous time step does not transmit temporal information for static point cloud datasets but instead acts as a perturbation that increases generalization, we further propose a membrane potential perturbation method for the framework. Specifically, we add a random membrane potential perturbation to initialize the spiking neurons of the Spiking PointNet at each epoch in the training phase, so that the generalization of the model trained with only 1 single time step is improved like that of models trained with multiple time steps. The results for the trained-less-based Spiking PointNet with membrane potential perturbation are shown in Tab. 4. It can be seen that with the perturbation method, the Spiking PointNet gets another performance lift, amounting to 93.31% and 88.61% final accuracy on ModelNet10 and ModelNet40, respectively.
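A minimal sketch of this perturbation (ours, not from the paper) is given below; it assumes hypothetical LIF modules that store their membrane potential in an attribute `u`, and draws the perturbation uniformly from \([0,0.5]\) as in our experiments:

```python
import torch

def perturb_membrane(model, delta_max=0.5):
    """Before each training epoch: initialize every spiking neuron's
    membrane potential with a random value in [0, delta_max] instead of 0."""
    for m in model.modules():
        if hasattr(m, "u") and m.u is not None:  # hypothetical LIF state
            m.u = torch.rand_like(m.u) * delta_max
```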
## 4 Experiments
In this section, we conduct extensive experiments on ModelNet10 and ModelNet40 [50] to demonstrate the superior performance of our method. ModelNet10 and ModelNet40 are two widely recognized public datasets used for 3D object classification, curated and maintained by a research team at Princeton University. ModelNet10 is a compact dataset comprising 4,899 3D models that span 10 distinct categories such as tables, chairs, bathtubs, and guitars. This dataset is a subset of ModelNet40, offering fewer categories but with more pronounced differences between categories. This characteristic makes ModelNet10 an excellent starting point for evaluating the performance of 3D classification algorithms. ModelNet40 is a more comprehensive dataset, containing approximately 12,311 3D models across 40 different categories, including tables, chairs, airplanes, guitars, and more. With an expanded array of categories and samples, ModelNet40 serves as a robust benchmark for gauging the performance of 3D classification algorithms in more complex and challenging tasks. We leverage the PointNet architecture for point cloud classification tasks. For all our SNN models, we set \(V_{\rm th}\) to 0.5. The initial perturbations, \(\delta\), range from 0 to 0.5.
### Ablation Studies
We first conducted thorough ablation experiments of our method against the vanilla SNN for PointNet on the ModelNet10/40 datasets. Tab. 5 displays the performance of the various methods under different training and testing time steps. On the ModelNet10 dataset, our Spiking PointNet with membrane potential perturbation (MPP) reaches an accuracy of 93.31% with a testing time step of 4, which outperforms both the one without MPP (92.32%) and the ANN-based approach (92.98%). Even with a testing time step of 1, our Spiking PointNet with MPP still achieves an accuracy of 91.66%, surpassing the performance of the vanilla Spiking PointNet trained with 4 time steps (89.62%). This validates the effectiveness of our method. Further, on the ModelNet40 dataset, our Spiking PointNet with MPP attains an accuracy of 88.61% with a testing time step of 4, also outperforming
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multicolumn{4}{c}{Training: 1 T} \\ \cline{3-6} & & Inferring: 1 T & Inferring: 2 T & Inferring: 3 T & Inferring: 4 T \\ \hline \multirow{2}{*}{ModelNet10} & without MPP & 91.99\% & 92.43\% & 92.53\% & 92.32\% \\ \cline{2-6} & with MPP & 91.66\% & 92.98\% & 92.98\% & 93.31\% \\ \hline \multirow{2}{*}{ModelNet40} & without MPP & 86.98\% & 87.26\% & 87.21\% & 87.13\% \\ \cline{2-6} & with MPP & 87.72\% & 88.46\% & 88.25\% & 88.61\% \\ \hline \hline \end{tabular} MPP denotes membrane potential perturbation.
\end{table}
Table 4: The ablation study for the membrane potential perturbation.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & 1st time step & 2nd time step & 3rd time step & 4th time step & Averaging all \\ \hline Accuracy & 83.70\% & 84.65\% & 85.70\% & 85.29\% & 86.70\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: The verification test for the effect of the time step on the static dataset.
the one without MPP (87.13%) and the vanilla Spiking PointNet (86.70%). Similarly, even with a testing time step of 1, our Spiking PointNet with MPP achieves an accuracy of 87.72%, still superior to the performance of the vanilla one trained with 4 time steps (85.59%).
### Energy Efficiency
In this section, we conducted a comprehensive investigation into the hardware efficiency of our proposed framework, with a focus on quantifying the energy consumption of computation on ModelNet10. For an ANN model, the dot product, or Multiply-Accumulate (MAC) operation, involves both addition and multiplication operations. The SNN, however, leverages the multiplication-addition transformation advantage, eliminating the need for multiplication operations in all layers except the first. Remarkably, in the absence of spikes, hardware can employ sparse computation to completely avoid addition operations. To estimate energy consumption, we adopted the methodology based on 45nm CMOS technology following [26; 39]. The MAC operation in an ANN consumes 4.6pJ of energy, while the accumulation operation in an SNN requires only 0.9pJ. Notably, in line with our trained-less but learning-more paradigm, we achieved a spike firing rate of 18.7% with \(k=5\). Based on our findings, we computed the energy cost and present the results in Tab. 6. Our network exhibits remarkable energy efficiency, necessitating only \(9.2\times 10^{6}\)pJ of energy per forward pass, which equates to a 15.2-fold reduction in comparison to conventional ANNs. Moreover, when we conduct inference with four time steps, the performance reaches 93.31%, while the energy required is still about 3.8 times less than that of its ANN counterpart.
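The reported ratios follow directly from the per-pass energies in Tab. 6, as this small snippet (ours, for illustration) verifies:

```python
# Energy per forward pass from Tab. 6, in pJ.
ann, snn_t1, snn_t4 = 1.4e8, 9.2e6, 3.7e7
print(f"T=1: {ann / snn_t1:.1f}x less energy than the ANN")  # ~15.2x
print(f"T=4: {ann / snn_t4:.1f}x less energy than the ANN")  # ~3.8x
```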
## 5 Conclusion
In this paper, we have presented Spiking PointNet, the first spiking neural network (SNN) specifically designed for efficient deep learning on point clouds. This work was motivated by the tremendous potential of SNNs in energy efficiency and the rising demand for efficient point cloud processing techniques, especially in fields such as autonomous driving and augmented reality. We identified two main challenges hindering the application of SNNs in point cloud tasks: the intrinsic optimization difficulty of SNNs, and the high computational and memory cost of point cloud processing, especially for large time steps. To address these obstacles, we proposed a novel trained-less but learning-more paradigm. This paradigm allows for the training of Spiking PointNet with only a single time step, but is capable of achieving superior performance through multiple time step inference. Theoretical justifications and experimental analysis provided in the paper support our method's effectiveness. Additionally, we introduced a membrane potential perturbation method, which significantly enhanced
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multicolumn{1}{c}{**Method**} & \multicolumn{1}{c}{**Time step**} & \multicolumn{1}{c}{**Acc.**} & \multicolumn{1}{c}{**\#Add.**} & \multicolumn{1}{c}{**\#Mult.**} & \multicolumn{1}{c}{**Energy**} \\ \hline PointNet & - & 92.98\% & 0.03M & 13.94M & \(1.4\times 10^{8}\)pJ \\ \hline Spiking PointNet & 1 & 91.66\% & 0.45M & 0.45M & \(9.2\times 10^{6}\)pJ \\ & 4 & 93.31\% & 1.8M & 1.8M & \(3.7\times 10^{7}\)pJ \\ \hline \hline \end{tabular}
\end{table}
Table 6: Energy estimation of ANN (PointNet) and SNNs (Spiking PointNet) of computation.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{Methods} & \multirow{2}{*}{Training time steps} & \multicolumn{4}{c}{Testing time steps} \\ \cline{4-7} & & & 1 & 2 & 3 & 4 \\ \hline \multirow{4}{*}{ModelNet10} & ANN & - & \multicolumn{4}{c}{92.98\%} \\ & Vanilla SNN & 4 & 89.62\% & 90.83\% & 91.05\% & 91.05\% \\ & Ours without MPP & 1 & 91.99\% & 92.43\% & 92.53\% & 92.32\% \\ & Ours with MPP & 1 & 91.66\% & 92.98\% & 92.98\% & 93.31\% \\ \hline \multirow{4}{*}{ModelNet40} & ANN & - & \multicolumn{4}{c}{89.20\%} \\ & Vanilla SNN & 4 & 85.59\% & 86.58\% & 86.34\% & 86.70\% \\ \cline{1-1} & Ours without MPP & 1 & 86.98\% & 87.26\% & 87.21\% & 87.13\% \\ \cline{1-1} & Ours with MPP & 1 & 87.72\% & 88.46\% & 88.25\% & 88.61\% \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison between our method and the vanilla SNN on ModelNet10/40 datasets.
the generalization ability of the Spiking PointNet without increasing computational and storage requirements. Our extensive experiments on multiple datasets, including ModelNet10 and ModelNet40, demonstrated the robustness and superiority of Spiking PointNet. Notably, in certain scenarios, Spiking PointNet was even able to outperform its Artificial Neural Network counterparts, an uncommon achievement in the SNN field.
## Acknowledgment
This work is supported by grants from the National Natural Science Foundation of China under contracts No.12202412 and No.12202413.
|
2305.01933 | An Exploration of Conditioning Methods in Graph Neural Networks | The flexibility and effectiveness of message passing based graph neural
networks (GNNs) induced considerable advances in deep learning on
graph-structured data. In such approaches, GNNs recursively update node
representations based on their neighbors and they gain expressivity through the
use of node and edge attribute vectors. E.g., in computational tasks such as
physics and chemistry usage of edge attributes such as relative position or
distance proved to be essential. In this work, we address not what kind of
attributes to use, but how to condition on this information to improve model
performance. We consider three types of conditioning; weak, strong, and pure,
which respectively relate to concatenation-based conditioning, gating, and
transformations that are causally dependent on the attributes. This
categorization provides a unifying viewpoint on different classes of GNNs, from
separable convolutions to various forms of message passing networks. We provide
an empirical study on the effect of conditioning methods in several tasks in
computational chemistry. | Yeskendir Koishekenov, Erik J. Bekkers | 2023-05-03T07:14:12Z | http://arxiv.org/abs/2305.01933v1 | # An Exploration of Conditioning Methods in Graph Neural Networks
###### Abstract
The flexibility and effectiveness of message passing based graph neural networks (GNNs) induced considerable advances in deep learning on graph-structured data. In such approaches, GNNs recursively update node representations based on their neighbors and they gain expressivity through the use of node and edge attribute vectors. E.g., in computational tasks such as physics and chemistry usage of edge attributes such as relative position or distance proved to be essential. In this work, we address not what kind of attributes to use, but _how to condition_ on this information to improve model performance. We consider three types of conditioning; weak, strong, and pure, which respectively relate to concatenation-based conditioning, gating, and transformations that are causally dependent on the attributes. This categorization provides a unifying viewpoint on different classes of GNNs, from separable convolutions to various forms of message passing networks. We provide an empirical study on the effect of conditioning methods in several tasks in computational chemistry.
## 1 Introduction
Graph neural networks (GNNs) are a family of neural networks that can learn from graph-structured data. Starting with the success of GCN (Kipf and Welling, 2016) in achieving state-of-the-art performance on semi-supervised classification, several variants of GNNs have been developed for this task, including Graph-SAGE (Hamilton et al., 2017), GAT(Velickovic et al., 2017), GATv2 (Brody et al., 2021), EGNN (Satorras et al., 2021) to name a few most recent ones.
Most models based on the message-passing framework utilize conditional linear layers. We define _"conditioning"_ as using additional information, such as edge attributes, together with the feature vectors of neighboring nodes. For example, EGNN (Satorras et al., 2021) conditions message vectors on the distance between two nodes, and DimeNet (Gasteiger et al., 2020) additionally utilizes angle information. Many neural network models use conditioning in their layers without exploring its different variants. Therefore, improving upon the type of conditioning could still improve most state-of-the-art models. We believe that this is the first work that analyzes different conditioning methods in GNNs.
In this paper, we categorize three conditioning methods: weak, strong, and pure. They differ in their level of dependency on a given quantity, such as edge attributes, and in complexity. A message passing neural network (MPNN) using the _weak conditioning_ method _concatenates_ attributes with node features. In this scenario, linear layers effectively gain an attribute-dependent bias, which we consider a weak type of conditioning as it does not guarantee that the attribute is actually utilized, i.e., it could be ignored. On the other hand, we have the _pure conditioning_ method, which forces the model to always use the attributes by letting them causally parametrize transformation matrices. However, from a practical perspective, pure conditioning is computationally expensive, and it can be simplified to a _strong conditioning_ method, which corresponds to an attribute-dependent _gating_ of the outputs of linear layers. We experiment with these three conditioning methods in variations of the EGNN model (Satorras et al., 2021) on the computational chemistry datasets QM9 (Ramakrishnan et al., 2014) and MD17 (Chmiela et al., 2017) and show the advantage of strong conditioning over weak conditioning in performance, and over pure conditioning in training time.
The main contributions of this paper are:
1. A unifying analysis of geometric message passing by formulating conditional transformations in terms of various forms of conditional linear layers.
2. An intuitive exposition of different conditioning methods in the context of convolutional message passing.
3. Empirical studies that show the benefit of _strong conditioning_ methods, as well as the benefit of _deep conditioning_ in multi-layer perceptron-based message functions.
In this work, we address not what kind of attributes to use, but _how to condition_ on this information to improve model performance. As such, we focus on an intuitive analysis, and in the experimental section, we do not intend to achieve the best performance but focus on ablation studies in order to obtain general take-home messages.
## 2 Preliminaries
In this section, we introduce the relevant materials on graph neural networks on top of which we will later complement our analysis and definitions of conditioning methods.
### Graph Neural Network
In this work, we consider the graph regression task as an example. A graph is represented by \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with nodes \(v_{i}\in\mathcal{V}\) and edges \(e_{ij}\in\mathcal{E}\). A typical message passing layer (Gilmer et al., 2017) is defined as:
\[\mathbf{m}_{ij}=\phi_{e}(\mathbf{h}_{i}^{l},\mathbf{h}_{j}^{l},\mathbf{a}_{ij}) \tag{1}\]
\[\mathbf{m}_{i}=\sum_{j\in N(i)}\mathbf{m}_{ij} \tag{2}\]
\[\mathbf{h}_{i}^{l+1}=\phi_{h}(\mathbf{h}_{i}^{l},\mathbf{m}_{i}) \tag{3}\]
where \(\mathbf{h}_{i}^{l}\) is the embedding of node \(v_{i}\) at layer \(l\), \(\mathbf{a}_{ij}\) is the edge attribute of nodes \(v_{i}\) and \(v_{j}\), and \(N(i)\) is the set of neighbors of node \(v_{i}\). Finally, \(\phi_{e}\) and \(\phi_{h}\) are the message (edge) and update (node) functions, respectively, which are commonly parametrized by Multilayer Perceptrons (MLPs).
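A minimal PyTorch sketch of Eqs. (1)-(3) (ours, not tied to any specific model; layer sizes are illustrative) is given below; note that concatenating \(\mathbf{a}_{ij}\) into the input of \(\phi_{e}\) is precisely what Sec. 3 will call weak conditioning:

```python
import torch
import torch.nn as nn

class MPNNLayer(nn.Module):
    """One message passing layer with MLP message and update functions."""
    def __init__(self, h_dim, a_dim):
        super().__init__()
        self.phi_e = nn.Sequential(nn.Linear(2 * h_dim + a_dim, h_dim), nn.SiLU(),
                                   nn.Linear(h_dim, h_dim))
        self.phi_h = nn.Sequential(nn.Linear(2 * h_dim, h_dim), nn.SiLU(),
                                   nn.Linear(h_dim, h_dim))

    def forward(self, h, edge_index, a):        # h: (N, h_dim), a: (E, a_dim)
        src, dst = edge_index                   # edges j -> i, each of shape (E,)
        m_ij = self.phi_e(torch.cat([h[dst], h[src], a], dim=-1))  # Eq. (1)
        m_i = torch.zeros_like(h).index_add_(0, dst, m_ij)         # Eq. (2)
        return self.phi_h(torch.cat([h, m_i], dim=-1))             # Eq. (3)
```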
### Geometric Graph Neural Networks
When the graphs have an embedding in Euclidean space, i.e., each node \(v_{i}\) has an associated position \(\mathbf{x}_{i}\in\mathbb{R}^{n}\), we want to leverage this geometric information whilst preserving stability/invariance to rigid-body transformations. That is, many tasks are invariant to Euclidean distance preserving transformations in \(E(n)\). E.g., the prediction of energy of a system of atoms is invariant to its global position and orientation in space. Several works have shown how to build equivariant message passing based graph neural networks for such geometric graphs.
Central in those works is the conditioning of the message and update functions on invariant geometric attributes, such as the pairwise distance \(\mathbf{a}_{ij}=\|\mathbf{x}_{j}-\mathbf{x}_{i}\|\), as popularized in (Satorras et al., 2021), or covariant spherical/circular harmonic embeddings of relative position \(\mathbf{a}_{ij}=Y(\mathbf{x}_{j}-\mathbf{x}_{i})\), as is common in steerable group convolution-based graph NNs (Brandstetter et al., 2021). Here we consider attributes that transform predictably via representations of \(E(n)\) as _covariants_, and those that remain invariant as _invariants_. Such covariants typically contain more (directional) information but require specialized operations such as the Clebsch-Gordan tensor product (Thomas et al., 2018; Anderson et al., 2019) in order to preserve equivariance of the graph NNs. Satorras et al. (2021) show that with a simple recipe based on invariant attributes, one can often obtain equally powerful graph NNs. As such, we focus this paper on the use of \(\mathbf{a}_{ij}=\|\mathbf{x}_{j}-\mathbf{x}_{i}\|\) as a sufficiently expressive attribute, and model the message and update functions \(\phi_{e}\) and \(\phi_{h}\) as regular MLPs.
Our objective then is to understand what is the most effective way of utilizing attributes in geometric graph NNs. To make this notion of conditioning explicit, we will denote the message and update function as
\[\phi_{e}(\mathbf{h}_{i}^{l},\mathbf{h}_{j}^{l}\,|\,\mathbf{a}_{ij})\quad\text{ and}\quad\phi_{h}(\mathbf{h}_{i}^{l}\,|\,\mathbf{a}_{i})\,,\]
where we note that, although uncommon, it is possible to define invariant or covariant geometric node attributes \(\mathbf{a}_{i}\)(Brandstetter et al., 2021).
## 3 Analysis of Conditioning Methods
In the following, we unify several conditioning methods used in the literature through the notion of conditional linear layers, and by discussing them in relation to the prevalent convolution layer and its variations. As a starting point, we use the fact that convolution is a simple form of message passing with linear message functions conditioned on relative position, i.e.,
\[\mathbf{m}_{ij}=\phi_{e}(\mathbf{f}_{j}\mid\mathbf{x}_{j}-\mathbf{x}_{i})= \mathbf{W}(\mathbf{x}_{j}-\mathbf{x}_{i})\mathbf{f}_{j}\:, \tag{4}\]
and the following update function typically is the application of a point-wise activation function \(\sigma\), i.e., \(\phi_{h}(\mathbf{f}_{i})=\sigma(\mathbf{f}_{i})\), possibly with a skip connection as in ResNets (He et al., 2016). In general, geometric graph NNs do not just _linearly_ transform node features, but generally do this _non-linearly_ via message/update functions parametrized by MLPs, leading to a notion of non-linear convolutions when the attributes are invariant/covariant quantities (Brandstetter et al., 2021).
Importantly, these MLPs themselves are parametrized by linear layers, intertwined with non-linear activation functions, and nothing prevents from conditioning each of these linear layers on the attributes. It is the purpose of this paper to categorize several options for conditioning and understand what effect this has on performance.
### Conditional linear layers
We propose the following modifications of the linear layer, as to make them conditional on attributes
\[\begin{aligned} \mathbf{W}_{\mathbf{a}}\,\mathbf{h}&:=\mathbf{W}\,\mathbf{h} &&\textit{no conditioning} &&(5)\\ \mathbf{W}_{\mathbf{a}}\,\mathbf{h}&:=\mathbf{W}\left(\mathbf{h}\oplus\mathbf{a}\right) &&\textit{weak} &&(6)\\ \mathbf{W}_{\mathbf{a}}\,\mathbf{h}&:=(\mathbf{W}^{a}\,\mathbf{a})\odot(\mathbf{W}^{h}\,\mathbf{h}) &&\textit{strong} &&(7)\\ \mathbf{W}_{\mathbf{a}}\,\mathbf{h}&:=\mathbf{W}(\mathbf{a})\,\mathbf{h} &&\textit{pure} &&(8)\end{aligned}\]
where, to keep the similarity to the common notation for linear transformations, we use the notation \(\mathbf{W}_{\mathbf{a}}\) to denote that the linear transformation is conditioned on \(\mathbf{a}\). We further use the notation \(\mathbf{h}\oplus\mathbf{a}\) to denote the concatenation of vectors \(\mathbf{h}\) and \(\mathbf{a}\), and use \(\odot\) to denote element-wise multiplication. To distinguish matrices applied to different attributes, we use italic labels, e.g. \(\mathbf{W}^{h}\).
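A minimal PyTorch sketch of the four variants (ours, for illustration; the `mode` names are ours) is given below; the pure case uses a bilinear layer, anticipating the analysis in Sec. 3.2:

```python
import torch
import torch.nn as nn

class ConditionalLinear(nn.Module):
    """The conditional linear layers of Eqs. (5)-(8)."""
    def __init__(self, h_dim, a_dim, out_dim, mode="strong"):
        super().__init__()
        self.mode = mode
        if mode == "no":
            self.W = nn.Linear(h_dim, out_dim)
        elif mode == "weak":                 # concatenation: adaptive bias
            self.W = nn.Linear(h_dim + a_dim, out_dim)
        elif mode == "strong":               # attribute-dependent gating
            self.W_h = nn.Linear(h_dim, out_dim)
            self.W_a = nn.Linear(a_dim, out_dim)
        elif mode == "pure":                 # transformation W(a) parametrized by a
            self.W = nn.Bilinear(a_dim, h_dim, out_dim)

    def forward(self, h, a):
        if self.mode == "no":
            return self.W(h)
        if self.mode == "weak":
            return self.W(torch.cat([h, a], dim=-1))
        if self.mode == "strong":
            return self.W_a(a) * self.W_h(h)
        return self.W(a, h)                  # pure
```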
We stress the hierarchy in terms of the dependence of the transformation on \(\mathbf{a}\). The most direct dependence is in the _pure_ method, in which the transformation is causally parametrized by \(\mathbf{a}\), followed by _strong_ conditioning, in which a standard unconditional transformation \(\mathbf{W}^{h}\,\mathbf{h}\) is _gated_ by a vector \(\mathbf{W}^{a}\,\mathbf{a}\). Both the pure and strong methods are by construction forced to utilize the attribute, as, in particular in the strong case, the transformation would not exist without the attribute \(\mathbf{a}\). We refer to equation 6 as _weak_ conditioning, as in principle the transformation would still exist in the absence of the attribute; moreover, if the dimensionality of \(\mathbf{h}\) is much larger than that of \(\mathbf{a}\), the transformation of the attribute contributes only to a small extent to the output of this layer. We hypothesize that this hierarchy correlates with performance and experimentally test this hypothesis in Sec. 5. What follows is a brief analysis of these types of conditioning.
### Pure conditioning corresponds to bi-linear layers
As common for implementations of convolutions, one typically expands the convolution kernel \(\mathbf{W}(\mathbf{x}_{j}-\mathbf{x}_{i})\in\mathbb{R}^{d_{o}\times d_{i}}\), i.e., a transformation matrix with elements \(W_{oi}(\mathbf{x}_{j}-\mathbf{x}_{i})\) that depends on relative position, in a basis \(\{\phi_{b}:\mathbb{R}^{n}\rightarrow\mathbb{R}\}_{b=1}^{d_{b}}\) via
\[W_{oi}(\mathbf{x}_{j}-\mathbf{x}_{i})=\sum_{b=1}^{d_{b}}W_{boi}\:\phi_{b}( \mathbf{x}_{j}-\mathbf{x}_{i})\:. \tag{9}\]
The basis could be the usual \(3\times 3\) pixel basis, or it could be a continuous basis for when the continuous structure of the data is to be respected, such as circular or spherical harmonics (Worrall et al., 2017; Weiler and Cesa, 2019; Thomas et al., 2018; Anderson et al., 2019), B-splines (Bekkers, 2020; Fey et al., 2018), or hermite polynomials (Sosnovik et al., 2020). In recent works on the parametrization of continuous functions as Neural Fields (Xie et al., 2022) or in transformer-based methods (Vaswani et al., 2017), such basis functions are often referred to as _coordinate embeddings_ or _position encodings_.
An important observation is that, given such a parametrization through basis functions, the pure conditioning layer corresponds to a bi-linear layer
\[h_{o}^{l+1}=\sum_{b}\sum_{i}\phi_{b}(\mathbf{a})\,W_{boi}\,h_{i}^{l}\,,\quad\text{i.e., }\mathbf{h}^{l+1}\text{ is bilinear in }\boldsymbol{\phi}(\mathbf{a})\text{ and }\mathbf{h}^{l}\,, \tag{10}\]
where we use \(i\) to index the input feature vector, \(o\) the output feature vector, and \(b\) the basis functions.
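Concretely, given a precomputed embedding \(\boldsymbol{\phi}(\mathbf{a})\), Eq. (10) is a single tensor contraction, as in the following sketch (ours, for illustration; all shapes are illustrative):

```python
import torch

def pure_conditioning(phi_a, W, h):
    """Eq. (10): out[e, o] = sum_{b, i} phi_a[e, b] * W[b, o, i] * h[e, i].

    phi_a: (E, B) basis embedding of the attribute per edge,
    W:     (B, O, I) learnable weights, h: (E, I) features per edge."""
    return torch.einsum("eb,boi,ei->eo", phi_a, W, h)

E, B, I, O = 32, 8, 16, 16
out = pure_conditioning(torch.randn(E, B), torch.randn(B, O, I), torch.randn(E, I))
print(out.shape)  # torch.Size([32, 16])
```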
Such bilinear layers are often implicitly used in convolutional architectures, where the expansion in the basis function is typically hard-coded or pre-computed. On continuous data, such as point cloud methods, the bilinear layer is ubiquitous. Notably, in the context of steerable group equivariant convolutions, the transformations happen via bilinear operators called the Clebsch-Gordan tensor product, in combination with spherical harmonic embeddings of relative position (Brandstetter et al., 2021). Outside of the (group) convolution literature, bilinear layers are often used to explicitly model conditioning on geometric attributes, for which a relevant example to our current work is DimeNet (Gasteiger et al., 2020). DimeNet uses an advanced message passing framework in which messages and updates are conditioned on geometric quantities (embedded as spherical harmonics and radial basis functions) via combinations of weak, strong, and pure conditioning.
### Strong conditioning corresponds to depth-wise separable convolutions
In later works, DimeNet was improved by DimeNet++ (Gasteiger et al., 2020), both in performance and speed, by replacing the compute-heavy bilinear layers (pure conditioning) with the more efficient gating-type conditioning (strong conditioning). The computational bottleneck of pure conditioning motivates the use of strong conditioning, a route that has proven successful in convolutional architectures in computer vision as well, via the use of so-called depth-wise separable convolutions (Sifre and Mallat, 2014; Chollet, 2017).
Chollet (2017) shows huge efficiency gains, both in terms of compute and performance, when factorizing convolution kernels into two parts. One part does the channel mixing and does not depend on the relative position, while the other part depends on the relative position and scales/gates the output. That is, if the kernel is given by
\[W_{oi}(\mathbf{x}_{j}-\mathbf{x}_{i})=W_{o}^{a}(\mathbf{x}_{j}-\mathbf{x}_{i} )W_{oi}^{h}\,, \tag{11}\]
the convolution boils down to message passing with conditional linear layers of the _strong_ type, as we can write
\[\mathbf{h}^{l+1}=\mathbf{W}^{a}(\mathbf{x}_{j}-\mathbf{x}_{i})\odot(\mathbf{W}^{h}\,\mathbf{h}^{l})\,, \tag{12}\]
where we can define \(\mathbf{W}^{a}(\mathbf{x}_{j}-\mathbf{x}_{i})=\mathbf{W}^{a}\mathbf{a}_{ij}\) as the linear transformation of a coordinate embedding \(\mathbf{a}_{ij}=\boldsymbol{\phi}(\mathbf{x}_{j}-\mathbf{x}_{i})\) if we want to make the connection to equation 7 explicit. The fact that the transformation is linear overall and that it splits into a part that does and a part that does not depend on pair-wise attributes allows for very efficient implementations that first perform a group-wise convolution, followed by channel mixing. This principle is at the core of the recently popularized ConvNeXt architecture (Liu et al., 2022) and plays an essential role in equivariant graph NNs on (molecular) point clouds in order to be able to scale up (Thomas et al., 2018). Separability recently also proved necessary for equivariant convolutional NNs to scale up to large groups, such as the scale-rotation-translation group (Knigge et al., 2022). In the context of conditional NNs used to parametrize _neural fields_ (Xie et al., 2022), strong conditioning commonly appears in the form of so-called FiLM layers (Perez et al., 2018).
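For reference, a depth-wise separable convolution of the kind discussed above can be sketched in PyTorch as a grouped (per-channel) spatial convolution followed by a 1x1 channel-mixing convolution; note that Eq. (11) corresponds to the transposed factorization (channel mixing gated per output channel), while this sketch follows the standard depthwise-then-pointwise ordering of Chollet (2017).

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv2d(nn.Module):
    """Per-channel spatial kernel (depth-wise) followed by 1x1 channel mixing."""
    def __init__(self, c_in: int, c_out: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, kernel_size,
                                   padding=kernel_size // 2, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 8, 32, 32)
y = DepthwiseSeparableConv2d(8, 16)(x)   # (1, 16, 32, 32)
```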
### Weak conditioning corresponds to linear layers with a conditional bias
Weak conditioning (Eq. 6) corresponds to a standard linear layer with an adaptive bias:
\[\mathbf{W}_{\mathbf{a}}\,\mathbf{h}=\boldsymbol{W}\left(\mathbf{h}\oplus\mathbf{a}\right)=\underbrace{\boldsymbol{W}^{{}^{\prime}}\mathbf{h}}_{\text{no condition}}+\underbrace{\boldsymbol{W}^{{}^{\prime\prime}}\mathbf{a}}_{\text{conditional bias}}=\mathbf{W}^{{}^{\prime}}\,\mathbf{h}+\mathbf{b}(\mathbf{a})\,, \tag{13}\]
where \(\mathbf{W}^{{}^{\prime}}\) are the first \(d_{h}\) rows of \(\mathbf{W}\) that are applied to the \(\mathbf{h}\in\mathbb{R}^{d_{h}}\) part of the concatenated vector, and \(\mathbf{W}^{{}^{\prime\prime}}\) are the last \(d_{a}\) rows of \(\mathbf{W}\) that are applied to \(\mathbf{a}\in\mathbb{R}^{d_{a}}\). This simple form of conditioning is the most commonly used in the literature to condition any type of NN on the conditioning vector \(\mathbf{a}\), from message passing methods (Gilmer et al., 2017; Satorras et al., 2021), to conditional neural fields, e.g. as a simple but effective form of modulation in sirens (Dupont et al., 2022), to conditional variational auto-encoders (Sohn et al., 2015).
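The identity in Eq. 13 can be checked numerically by slicing the weight matrix of a linear layer acting on the concatenation; this is a sanity-check sketch with arbitrary dimensions, not code from the paper.

```python
import torch
import torch.nn as nn

d_h, d_a, d_out = 5, 3, 4
lin = nn.Linear(d_h + d_a, d_out, bias=False)    # W acting on h ⊕ a
h, a = torch.randn(7, d_h), torch.randn(7, d_a)

W1 = lin.weight[:, :d_h]    # block of W acting on h (W')
W2 = lin.weight[:, d_h:]    # block of W acting on a (W''), i.e. the bias b(a)
lhs = lin(torch.cat([h, a], dim=-1))
rhs = h @ W1.T + a @ W2.T   # W' h + b(a)
assert torch.allclose(lhs, rhs, atol=1e-6)
```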
### Conditional MLPs
With the various forms of conditional linear layers given in equations 6, 7, and 8 we can build conditional MLPs by simply replacing the usual linear layers with conditional ones1 (see the sketch below). Such MLPs could e.g. be denoted \(\operatorname{MLP}(\mathbf{h}\,|\,\mathbf{a})\). Usually, only the first layer of such a conditional MLP is conditional, and usually of the weak type, as in EGNN (Satorras et al., 2021). In the experiments we show that this simple choice is sub-optimal in the context of geometric message passing à la EGNN, and that improvements can be made either by conditioning more layers or by switching to strong conditioning.
Footnote 1: Code is available at [https://github.com/YeskendirK/conditioning-GNNs](https://github.com/YeskendirK/conditioning-GNNs)
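A conditional MLP with a configurable conditioning depth could then look as follows, reusing the `StrongConditionalLinear` module from the earlier sketch; the layer count, activation, and defaults are illustrative assumptions.

```python
import torch.nn as nn

class ConditionalMLP(nn.Module):
    """MLP(h | a) in which the first `cond_depth` layers are conditional
    (strong type); cond_depth=1 with weak conditioning mimics the EGNN default."""
    def __init__(self, d_h: int, d_a: int, n_layers: int = 2, cond_depth: int = 2):
        super().__init__()
        self.layers = nn.ModuleList(
            [StrongConditionalLinear(d_h, d_a, d_h) if i < cond_depth
             else nn.Linear(d_h, d_h) for i in range(n_layers)])
        self.act = nn.SiLU()

    def forward(self, h, a):
        for layer in self.layers:
            h = layer(h, a) if isinstance(layer, StrongConditionalLinear) else layer(h)
            h = self.act(h)
        return h
```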
## 4 Related work: Geometric Message Passing for Computational Chemistry
In the previous section, we discussed several options for conditioning the message/update functions for use in message passing graph neural networks, as well as their use in other fields of deep learning. In our experiments we benchmark the three conditioning methods (equations 6, 7, 8) in the context of geometric message passing for computational chemistry. Recent works in this category, and the types of conditioning used in those works, are as follows.
EGNN (Satorras et al., 2021) is a message passing neural network (MPNN) that uses a _weak_ conditioning method to utilize the distance between nodes.
DimeNet (Gasteiger et al., 2020) is a type of MPNN where message embeddings interact based on the distance between atoms and the angle directions. _Pure_ conditioning is adopted to utilize angle directions in the message update and aggregation. Tensor Field Network (Thomas et al., 2018) is a neural network for 3D point clouds. Each point in TFN is associated with a vector in a representation of \(SO(3)\). To condition one representation on another, a tensor product of representations is used, which in its general form corresponds to _pure_ conditioning. However, many works of the steerable message passing kind, including TFN and NequIP (Batzner et al., 2022), implement a _separable_ variation (_strong_ conditioning) for the sake of computational efficiency. The exception is steerable EGNN (Brandstetter et al., 2021), which uses steerable MLPs with _pure_ conditioning in each layer, i.e., not only in the first layer of the message function as in EGNN.
DimeNet++ (Klicpera et al., 2020) is an extension of DimeNet that changed the conditioning method from pure to strong to increase efficiency. Klicpera et al. (2020) showed that changing the conditioning method decreased the runtime of the original DimeNet by a factor of 5. SchNet (Schutt et al., 2017) is another example of an MPNN that utilizes atom locations using a strong conditioning method. PaiNN (Schutt et al., 2021) further extends SchNet by projecting the interatomic distances via radial basis functions and iteratively updating the vectors along with the scalar features, also using a strong conditioning method.
## 5 Experiments
In this section, we design experiments to evaluate the effectiveness of the three conditioning methods shown in equations 6, 7, and 8 on two real-world datasets: QM9 (Ramakrishnan et al., 2014) and MD17 (Chmiela et al., 2017). We demonstrate the effect of the choice of conditioning method and present the results of some other models on these benchmarks to provide context. In our experiments, we report the Mean Absolute Error (MAE) between model predictions and ground truth.
Implementation details. As a baseline architecture, we use the EGNN model that consists of 7 layers, 128 features per hidden layer, and a 2-layer message (edge) function \(\phi_{e}=\operatorname{MLP}(\mathbf{h}_{i}\oplus\mathbf{h}_{j}\,|\,\|\mathbf{x}_{j}-\mathbf{x}_{i}\|^{2})\) with only weak conditioning via equation 6 in the first layer. We will test modifications of this model by changing the weak conditioning to strong or pure, and explore the effect of conditioning multiple layers in the edge-function MLP.
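As a rough sketch of this baseline edge function with weak conditioning in the first layer (layer sizes and the SiLU activation follow the EGNN convention, but the class and variable names are ours):

```python
import torch
import torch.nn as nn

class EdgeFunction(nn.Module):
    """φ_e = MLP(h_i ⊕ h_j | ‖x_j − x_i‖²): weak conditioning in the first
    of two layers, via concatenation of the squared distance."""
    def __init__(self, d_h: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * d_h + 1, d_h), nn.SiLU(),
                                 nn.Linear(d_h, d_h), nn.SiLU())

    def forward(self, h_i, h_j, x_i, x_j):
        d2 = ((x_j - x_i) ** 2).sum(dim=-1, keepdim=True)  # squared distance
        return self.mlp(torch.cat([h_i, h_j, d2], dim=-1))
```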
We found it useful to pass geometric distances through a two-layer MLP, as a form of vectorization/coordinate embedding of the pairwise distance, for the QM9 dataset, and through a Random Fourier Feature layer (Rahimi & Recht, 2007) for the MD17 dataset.
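A Random Fourier Feature embedding of pairwise distances can be sketched as below; the frequency scale `sigma` and the output dimension are hypothetical hyperparameters, not values reported in the paper.

```python
import math
import torch
import torch.nn as nn

class RandomFourierFeatures(nn.Module):
    """Embed scalar distances d -> [sin(2π d f), cos(2π d f)] with random
    frequencies f; d_out must be even."""
    def __init__(self, d_out: int, sigma: float = 1.0):
        super().__init__()
        self.register_buffer('freqs', torch.randn(d_out // 2) * sigma)

    def forward(self, d: torch.Tensor) -> torch.Tensor:  # d: (..., 1)
        proj = 2 * math.pi * d * self.freqs               # (..., d_out // 2)
        return torch.cat([proj.sin(), proj.cos()], dim=-1)
```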
We use the same hyperparameter values as the EGNN paper (Satorras et al., 2021) for both datasets: we trained each model on the QM9/MD17 dataset for a total of 1,000/500 epochs, used the Adam optimizer, batch size 96 (64 for pure-EGNN), weight decay 1e-16, and cosine decay for the learning rate starting at 5e-4.
QM9. The QM9 (Ramakrishnan et al., 2014) dataset consists of small molecules represented as sets of atoms (up to 29 atoms per molecule), each atom having an associated 3D position and a five-dimensional one-hot node embedding that describes the atom type (H, C, N, O, F). Since the QM9 dataset provides the 3D coordinates of each atom, we use the distance between two atoms (nodes) as an edge attribute. The dataset comprises 12 quantum properties for each of the molecules. We use 100k molecules for training, 18k for validation, and 13k for testing.
Table 1 shows the Mean Absolute Error for the prediction of 12 molecular properties for the weak and strong conditioning methods. We can clearly see that _strong conditioning yields lower MAE than weak conditioning for all properties, by an average of 10%_. Due to the large number of data points and our limited computational resources, we limit our experiments on this dataset to the weak and strong conditioning methods: training one epoch of pure-EGNN took on average 267x longer than training strong-EGNN, which made it intractable to validate pure conditioning here.
The advantage in performance of strong conditioning over weak conditioning is clearly shown in Table 1. We also hypothesize that the advantage of the strong conditioning method will be more evident in the constrained setting of shallow networks. In order to test whether strong conditioning provides a more efficient parametrization than weak conditioning, we repeat the experiments from Table 1 on three molecular properties, but with a shallower network. Table 2a shows the MAE for the prediction of three molecular properties for the weak and strong conditioning methods for models with 7 and 3 layers. On average, the improvement of strong conditioning over weak conditioning was more pronounced for the shallow network than for the deep network (10.8% vs 7.7%), which demonstrates the importance of the conditioning method in models with less capacity.
We define the conditioning depth as the number of conditional layers in the message function. In the experiments above, the conditioning depth was 2, as we conditioned both layers in the message function. Here we test the effect of the conditioning depth for the different conditioning methods. Table 2b shows the MAE for the prediction of five molecular properties for the weak and strong conditioning methods for models with conditioning depths 1 and 2. The weak-EGNN shows better performance with a single conditional layer, while strong-EGNN benefits more from two conditional layers.
Md17MD17 (Chmiela et al., 2017) is a dataset of eight small organic molecules containing up to 17 total atoms composed of the atoms H, C, N, O, F. For each molecule, an ab-initio molecular dynamics simulation was run using DFT to calculate the ground state energy and forces. At intermittent timesteps, the energy, forces, and configuration (positions of each atom) were recorded. We uniformly sample 50k molecules for training, 10k for validation, and 10k for testing.
Table 3 shows the Mean Absolute Error for the prediction of energies for 8 molecules for three conditioning methods. From Table 3 we can see that strong conditioning improved the performance of 5 molecules by an average of 14.5%. It did not show improvement in molecules that initially had high MAE. In experiments with pure conditioning, to decrease computational cost we used half of the model: \(2\to 1\) conditional layers, hidden embedding size of \(128\to 64\), and \(7\to 3\) layers. With
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c} \hline Task & \(\alpha\) & \(\Delta\epsilon\) & \(\epsilon_{\textit{HOMO}}\) & \(\epsilon_{\textit{LUMO}}\) & \(\mu\) & \(C_{v}\) & \(G\) & \(H\) & \(R^{2}\) & \(U\) & \(U_{0}\) & \(ZPVE\) \\ Units & \(bohr^{3}\) & meV & meV & meV & D & cal/mol K & meV & meV & \(bohr^{3}\) & meV & meV & meV \\ \hline \hline SchNet & .235 & 69 & 43 & 38 & .030 & .040 & 19 & 17 & .180 & 20 & 20 & 1.50 \\ DimeNet++ & .044 & 33 & 25 & 20 & .030 & .023 & 8 & 7 & .331 & 6 & 6 & 1.21 \\ EGNN (baseline) & .071 & 48 & 29 & 25 & .029 & .031 & 12 & 12 & .106 & 12 & 11 & 1.55 \\ \hline \hline weak-EGNN (ours) & .067 & 50.52 & 28.11 & 25.74 & .032 & .031 & 9.81 & 10.51 & .138 & 11.14 & 9.94 & 1.455 \\ strong-EGNN (ours) & .061 & 44.52 & 27.48 & 23.83 & .023 & .029 & 9.79 & 10.29 & .088 & 9.39 & 9.88 & 1.45 \\ \hline \end{tabular}
\end{table}
Table 1: Mean Absolute Error for the molecular property prediction benchmark on the QM9 dataset.
these changes, a pure-EGNN model was 14.3 and 16 times slower than strong-EGNN and weak-EGNN, respectively. The negative impact of decreasing the number of layers on the performance of strong-EGNN on the QM9 dataset can be seen in Table 2a. Considering that we halved the number of layers, the hidden embedding size, and the conditioning depth, pure-EGNN shows competitive performance with weak/strong-EGNN on 5 molecules and outperforms them on some. This potential gain in performance might, however, not outweigh the computational cost of pure over strong or weak conditioning.
## 6 Conclusion
In this work, we explore how graph neural networks can recursively update node embeddings with edge-attribute information such as geometric distance. We provide a unifying analysis of several works in the literature that utilize attributes, through the notion of conditional linear layers. We present three conditioning methods to this end: weak, strong, and pure. The weak conditioning method concatenates edge attributes to node features, the strong conditioning method gates node features, and in pure conditioning, edge attributes causally parametrize the transformation matrices. We explain the intuition behind each method and apply them to the EGNN model to empirically show their differences in performance and computational cost on the QM9 and MD17 datasets.
Our conclusion is that strong conditioning (gating) generally beats weak conditioning (concatenation) in the message passing framework. We also conclude that pure conditioning is computationally prohibitive in geometric message passing, whilst it can achieve performance competitive with the weak and strong conditioning methods using a smaller network. This confirms the impact observed by other works on separable convolutions, such as depth-wise separable convolutions (Chollet, 2017) and ConvNeXt (Liu et al., 2022), and justifies the use of separable (group) convolutions in steerable tensor-based methods such as TFN (Thomas et al., 2018). While these methods can be formulated as linear message passing methods of the convolutional form, we show that performance gains can be achieved through multi-layer conditional message functions, as a form of non-linear convolution (Brandstetter et al., 2021). In this setting, we show that it can be beneficial to condition all layers in the conditional MLP, rather than only the first layer as is the convention.
Table 2: Mean Absolute Error (MAE) for the molecular property prediction benchmark in the QM9 dataset for different numbers of layers and conditioning depths. \(\Delta\) is the percentage difference of MAE; lower \(\Delta\) is better.
Table 3: Mean Absolute Error (MAE) for the conformational energy (meV) prediction benchmark on the MD17 dataset.
We believe that our categorization of conditioning methods, combined with our empirical findings can be used as a guideline in designing the next generation of geometric graph neural network architectures.
**Acknowledgements:** This work is part of the research programme VENI with project "context-aware AI" with number 17290, which is (partly) financed by the Dutch Research Council (NWO).
|
2302.02523 | Principle of learning sign rules by neural networks in qubit lattice
models | A neural network is a powerful tool that can uncover hidden laws beyond human
intuition. However, it often appears as a black box due to its complicated
nonlinear structures. By drawing upon the Gutzwiller mean-field theory, we can
showcase a principle of sign rules for ordered states in qubit lattice models.
We introduce a shallow feed-forward neural network with a single hidden neuron
to present these sign rules. We conduct systematical benchmarks in various
models, including the generalized Ising, spin-$1/2$ XY, (frustrated) Heisenberg
rings, triangular XY antiferromagnet on a torus, and the Fermi-Hubbard ring at
an arbitrary filling. These benchmarks show that all the leading-order sign
rule characteristics can be visualized in classical forms, such as pitch
angles. Besides, quantum fluctuations can result in an imperfect accuracy rate
quantitatively. | Jin Cao, Shijie Hu, Zhiping Yin, Ke Xia | 2023-02-06T01:37:18Z | http://arxiv.org/abs/2302.02523v3 | # The principle of learning sign rules by neural networks in qubit lattice models
###### Abstract
A neural network is a powerful tool for uncovering hidden laws beyond human intuition; however, it often appears as a black box due to its complicated nonlinear structures. Based on the Gutzwiller mean-field theory, we exhibit a principle of learning sign rules for the ordered states in qubit lattice models. Accordingly, we construct a shallow feed-forward neural network with a single hidden neuron and systematically perform benchmarks on the generalized Ising, XY, and frustrated Heisenberg chains, the antiferromagnetic XY model on the triangular lattice, and the Fermi-Hubbard chain at arbitrary filling. All the leading-order or mean-field sign rule characteristics are visualized in classical forms, such as gauge field gradients, pitch angles, etc. Besides, quantum fluctuations violate the sign rule and quantitatively yield an imperfect accuracy rate in the prediction.
Hidden information encoded in the wave function of the ground state benefits the understanding of properties of closed quantum systems at zero temperature, including orders, correlations, and even intricate entanglement features [1; 2; 3; 4]. Especially for a real Hamiltonian, the phases of elements in the wave function reduce to a _sign rule_ in a selected representation, e.g. the Perron-Frobenius theorem for a class of Hamiltonians having only non-positive off-diagonal elements [5; 6], or the Marshall-Peierls rule (MPR) for antiferromagnetic spin models on bipartite lattices [7; 8; 9]. Historically, it has been recognized as an essential origin of the volume law for the Rényi entanglement entropies [10], which is tightly linked to various physical phenomena [11; 12; 13; 14].
Similar to the matrix product state (MPS), successfully applied to (quasi-)one-dimensional (1D) lattice models [15; 16; 17; 18], the neural network quantum state (NNQS) and fast-developing machine learning (ML) techniques offer a new way of multi-scale compression of the wave function, which has been vigorously promoted for 1D as well as higher-dimensional quantum many-body systems [19; 20; 21; 22]. With an appropriate choice of the empirical activation function _cosine_ in the hidden layer of the NNQS, the complicated sign rules in qubit lattice models can be read out of the wave function [23], and relevant studies have drawn much attention in recent years [24; 25; 26; 27].
According to previous studies, the relatively higher complexity of the sign rule usually demands more hidden layers or neurons for a required representation precision [23; 24; 25; 26; 27]. On the one hand, it is beneficial in applications to improve sign rules by designing new architectures or by brute-force enhancement of the complexity of the NNQS; benefitting from this, quantum Monte Carlo (QMC) simulations can reach higher numeric precision, and the early proposal of avoiding the "negative sign problem" comes true [28; 29; 30]. On the other hand, recent concerns focus on interpreting the meaning of increasingly sophisticated neural networks [31; 32; 33] and preferably finding links to existing physical insights [34; 35; 36], which strongly motivates our work.
In this work, we establish a Gutzwiller mean-field (GWMF) principle in qubit lattice models, where the sign rules can be well learned by a single-hidden-neuron feed-forward neural network (shn-FNN). All findings, tested on various spin and fermion models, point out that an existing leading-order sign rule has a vivid physical scenario tightly related to orders in spins or charges. The structure of the paper is organized as follows. In Sec. I, we unveil the GWMF picture of the sign rules for ordered ground states in qubit lattice models. In Sec. II, we introduce the shn-FNN in detail, which matches the GWMF picture and can easily be interpreted. In Sec. III, we adopt the shn-FNN to extract the GWMF sign rules in spin models and the Fermi-Hubbard model. Besides, we discuss the influence of frustration and global symmetries. At last, we summarize our conclusions and make a brief discussion in Sec. IV.
## I I. Gutzwiller mean-field sign rules
A qubit denotes a two-level quantum state \(|n\rangle\) widely used in condensed matter physics, such as a spin-1/2 in quantum magnets [37], a single fermion state in ultra-cold atomic systems [38], a two-level atom in quantum cavities [39], and so on [40; 41; 42], where a binary value \(n=0/1\) denotes either an empty/occupied fermion level or a spin-1/2 polarizing \(\downarrow/\uparrow\) along the z-axis. For a lattice model, a state is depicted by a wave function \(|\psi\rangle=\sum_{\{\mathbf{n}\}}c_{\mathbf{n}}|\mathbf{n}\rangle\), where the real expansion coefficient \(c_{\mathbf{n}}=s_{\mathbf{n}}a_{\mathbf{n}}\) consists of the sign \(s_{\mathbf{n}}\) and amplitude \(a_{\mathbf{n}}\) in the representation \(|\mathbf{n}\rangle=\otimes_{l=1}^{L}|n_{l}\rangle\) for \(L\) qubits or, equivalently, lattice sites. Here \(|n_{l}\rangle\) is the local basis at site-\(l\) with the quantum index \(n_{l}\), and these indices form a vector \({\bf n}=(n_{1},\,\ldots,\,n_{L})^{T}\).
Without loss of generality, we take a spin-\(1/2\) as an example. A spin \(\hat{\bf S}=(\hat{S}^{x},\,\hat{S}^{y},\,\hat{S}^{z})\) has three components along the \(x\), \(y\) and \(z\)-axes, respectively, as well as the spin-flipping-up and down operators \(\hat{S}^{\pm}\). In the \(\hat{S}^{z}\)-representation, only two free real variables remain out of the two complex coefficients in front of the bases. They make up a spin-coherent state \(|\Omega\rangle=c^{\uparrow}\,|\uparrow\rangle+c^{\downarrow}\,|\downarrow\rangle\), where \(c^{\uparrow}=\cos(\theta/2)e^{-i\phi}\) and \(c^{\downarrow}=\sin(\theta/2)\) are defined by a pair of angles \(\theta\) and \(\phi\) in a solid angle [43]. As convention, the site-dependent \(\theta\in[0,\,\pi]\) and \(\phi\in[0,\,2\pi]\). In this case, a spin-\(1/2\) \({\bf S}=\langle\hat{\bf S}\rangle={\bf\Omega}/2\) behaves like half of the unit vector \({\bf\Omega}=(\sin\theta\cos\phi,\,\sin\theta\sin\phi,\,\cos\theta)\) in three-dimensional Cartesian coordinates. So the sign of a basis with a non-vanishing amplitude only depends on \(\phi\), because both \(\cos(\theta/2)\) and \(\sin(\theta/2)\) are positive definite. Besides, an irrelevant free parameter \(h\) modulates the phase in front of the spin-coherent state, i.e. \(e^{ih}|\Omega\rangle\).
In the GWMF theory [44; 45], the wave function of a state is a product \(|\psi\rangle=\bigotimes_{l=1}^{L}|\Omega_{l}\rangle\) of bases for \(L\) spins. Therefore, after substituting the local spin-coherent states into the GWMF wave function, we can easily prove that a basis \(|{\bf s}\rangle\) with \({\bf s}=(s_{1},\,\cdots,\,s_{L})^{T}\) has a sign \(s_{\bf n}={\rm Sgn}[\cos({\bf w}\cdot{\bf n}+\tilde{h})]\) with \({\bf n}=1/2+{\bf s}\), referenced from the all-spin-up basis \((\uparrow,\,\ldots,\,\uparrow)^{T}\). The characteristic phase vector \({\bf w}=(\phi_{1},\,\cdots,\,\phi_{L})\) is defined by the phase \(w_{l}=\phi_{l}\) at site-\(l\), which reflects the order of the spins; we call this the leading-order or GWMF sign rule. The constant \(\tilde{h}=\sum_{l}h_{l}+h_{0}\) stems from the dynamical phase factors \(h_{l}\) of all sites, while \(h_{0}\), or equivalently \(\tilde{h}\), is determined by other necessarily preserved global symmetries, e.g. translational and inversion symmetries.
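As a quick numerical illustration of this sign rule, the following NumPy snippet evaluates \(s_{\mathbf{n}}={\rm Sgn}[\cos({\bf w}\cdot{\bf n}+\tilde{h})]\) for the pitch-angle vector \(w_{l}=\pi l\) (with \(\tilde{h}=0\)) and checks that it reproduces the Marshall-Peierls rule; the system size and the random configuration are arbitrary choices.

```python
import numpy as np

L = 8
w = np.pi * np.arange(1, L + 1)        # pitch-angle vector w_l = π l
n = np.random.randint(0, 2, size=L)    # a random basis configuration
s = np.sign(np.cos(w @ n))             # GWMF sign with h̃ = 0

# Marshall-Peierls: s_n = (-1)^{N_O}, N_O = number of up-spins on odd sites.
assert s == (-1) ** n[0::2].sum()      # odd sites l = 1, 3, ... are indices 0, 2, ...
```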
In the case of a general magnetic order, referenced from a state \(|\Omega_{0}\rangle\), the local basis \(|\Omega_{l}\rangle=\prod_{p=1}^{N_{c}}\hat{R}_{l}({\bf\Omega}_{p},{\bf Q}_{p}\cdot{\bf r}_{l})|\Omega_{0}\rangle\) at site-\(l\) with displacement \({\bf r}_{l}\) contains \(N_{c}\) spin rotations \(\hat{R}_{l}({\bf\Omega},q)=\exp(iq\hat{\bf S}_{l}\cdot{\bf\Omega})\) by an angle \(q\) about the axis \({\bf\Omega}\) [46], where \({\bf\Omega}_{p}\) and \({\bf Q}_{p}\) are a characteristic solid angle and the corresponding momentum, respectively. The sign rule \(w_{l}\) is also hidden in the \(|\Omega_{l}\rangle\) generated by the multi-\(Q\) rotations.
## II II. Single-hidden-neuron feed-forward neural network
Feed-forward neural networks (FNNs), considered multi-layer nested functions of variational parameters, are optimized according to training sets and goal functions [47; 48; 49], which enables them to approximate any continuous function [50; 51] and to sort samples by discrete values of characters [49]. As FNNs become deeper, the growing complexity prevents us from understanding the sign rule or making links to meaningful physical insights. We choose an shn-FNN analogous to previous shallow FNNs [32; 35], distinct from recently developed operations in a compact latent space [52; 53]. Let us briefly introduce how it works to learn the GWMF sign rules.
The double-valued sign \(s_{\bf n}\), either positive or negative for an arbitrary basis \(|{\bf n}\rangle\), can be classified through the FNN structure shown in Fig. 1, consisting of an input layer, a hidden layer, and an output layer. We assign the details of the configuration \({\bf n}\) to the FNN without adornment, so the input layer simply holds the \(L\)-dimensional vector \({\bf y}_{I}={\bf n}\). The \(N_{H}\) neurons in the hidden layer give a vector \({\bf y}_{H}=(y_{H,1},\,\cdots,\,y_{H,N_{H}})^{T}\), and two neurons in the output layer form a one-hot vector \({\bf y}_{O}=(y_{O,1},\,y_{O,2})^{T}\). The three vectors are linked by two weight matrices \({\bf w}\) and \({\bf w}_{O}\) and the corresponding activation functions. The activation function cosine is empirically chosen in the hidden layer so that \({\bf y}_{H}=\cos({\bf w}\cdot{\bf n})\), which performs excellently in a random search [23]. The softmax function defined in App. A performs normalization via exponential functions so that the outputs acquire the meaning of probabilities before the sign classification. At last, the predicted sign \(y_{\rm sign}\) is determined by \({\bf y}_{O}={\rm softmax}({\bf w}_{O}\cdot{\bf y}_{H})\): \(y_{\rm sign}\) is positive only if \(y_{O,1}>y_{O,2}\). For a configuration \({\bf n}\), we usually add a superscript \(({\bf n})\) to each variable, such as \({\bf y}_{O}^{({\bf n})}=(y_{O,1}^{({\bf n})},\,y_{O,2}^{({\bf n})})\) in the output layer.
Unless otherwise stated, we choose \(N_{H}=1\) and two unequal elements of \({\bf w}_{O}\), which gives the structure of shn-FNN exactly. So the shn-FNN representation for the sign rule reduces to a function
\[y_{\rm sign}={\rm Sgn}\left[\cos({\bf w}\cdot{\bf n})\right] \tag{1}\]
Figure 1: The feed-forward neural network (FNN) is designed to learn the sign rules for a quantum state of \(L\) qubits. There are \(L\), \(N_{H}\), and \(2\) neurons in the input (black squares), hidden (blue hexagons), and output (red circles) layers, respectively, which are linked by two weight matrices \({\bf w}\) (blue lines) and \({\bf w}_{O}\) (red lines). In particular, the hidden and output layers are activated separately by a cosine and a softmax functions. The sign suggested by FNN is positive if \(y_{O,1}>y_{O,2}\) and negative otherwise.
of the input configuration \(\mathbf{n}\). The signs are uniquely encoded into \(\mathbf{w}\), which is precisely equal to the GWMF sign rule mentioned above except for the constant \(\tilde{h}\). Later on, we will see that different constants related to global symmetries of the ground state can also be learned.
Besides, we make brief statements about the choice of data sets and the training methods. We prepare the total data set \(\mathbf{T}\) after the ground-state wave function is obtained by the exact diagonalization (ED) method. We sort samples in descending order of amplitude and discard those with \(a_{\mathbf{n}}<10^{-15}\) to avoid artificial effects caused by the limited numeric precision of floating-point numbers. For a sample in \(\mathbf{T}\), the corresponding configuration \(\mathbf{n}\) carries a sign \(s_{\mathbf{n}}\) encoded in a one-hot vector \(\mathbf{y}^{(\mathbf{n})}\), which allows an easy quantitative comparison with the vector in the output layer. The vector \(\mathbf{y}^{(\mathbf{n})}=(y_{1}^{(\mathbf{n})},\,y_{2}^{(\mathbf{n})})\) only takes two valid values: either (1, 0) for a positive \(s_{\mathbf{n}}\) or (0, 1) for a negative one. In practice, the adopted part of \(\mathbf{T}\) contains the first \(N_{s}\) samples, and the choice of \(N_{s}\) depends on the specific demand. Without particular notification, these samples are randomly regrouped into two parts: four out of five samples are used for training, while the others form the testing set.
During training, the back-propagation (BP) [54] algorithm is used to optimize the variables in \(\mathbf{w}\) and \(\mathbf{w}_{O}\) of the FNN, together with an adaptive adjustment of the learning rate by the Adam algorithm [55]. The process is equivalent to the minimization of the cross entropy
\[\mathcal{S}_{\times}=-\sum_{\{\mathbf{n}\}}\left(y_{1}^{(\mathbf{n})}\ln y_{O,1}^{(\mathbf{n})}+y_{2}^{(\mathbf{n})}\ln y_{O,2}^{(\mathbf{n})}\right) \tag{2}\]
of the two one-hot vectors, collecting contributions from all samples in the training set. To reduce the computational cost of training with a large data set, we usually use the so-called _mini-batch_ [49, 56] method based on stochastic gradient descent (SGD). In this work, 100 configurations are randomly selected from the training set to calculate the gradients of \(\mathbf{w}\) and \(\mathbf{w}_{O}\) according to Eq. (2) at each step. It performs very well in terms of both accuracy and speed-up. In addition, the randomness of the selection also suppresses harmful effects induced by over-fitting. Moreover, we implement the FNN and the Adam optimization with the ML library "TensorFlow" [57].
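A minimal Keras realization of this setup, consistent with the description above (one cosine-activated hidden neuron, softmax outputs, Adam, categorical cross entropy, mini-batches of 100), might look as follows; the learning rate and epoch count are illustrative choices, and `configs`/`labels` stand for the prepared training arrays.

```python
import tensorflow as tf

L = 16
model = tf.keras.Sequential([
    tf.keras.Input(shape=(L,)),
    tf.keras.layers.Dense(1, use_bias=False, activation=tf.cos),     # y_H = cos(w·n)
    tf.keras.layers.Dense(2, use_bias=False, activation='softmax'),  # w_O, Eq. (3)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss='categorical_crossentropy',   # Eq. (2)
              metrics=['accuracy'])
# configs: (N_s, L) array of configurations n; labels: (N_s, 2) one-hot signs.
# model.fit(configs, labels, batch_size=100, epochs=200)
```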
To evaluate the performance of the FNN or to trace the trajectory of the optimization process, we define the accuracy rate (AR) \(\text{AR}=\mathcal{N}_{s}^{c}/\mathcal{N}_{s}\), supposing that \(\mathcal{N}_{s}^{c}\) out of \(\mathcal{N}_{s}\) configurations in a set are successfully classified. To correctly capture the GWMF sign rules through the shn-FNN, we gradually increase \(N_{s}\) until AR reaches its maximum.
## III III. Qubit Lattice Models
Next, we systematically analyze the GWMF sign rules learned by shn-FNN for various ordered states in qubit lattice models, including non-frustrated spin models in Sec. III A and frustrated ones in Sec. III B, and interacting fermions in Sec. III C.
## III A. Non-frustrated spin models
_Ising chains_. A generalized Ising chain has the Hamiltonian \(\hat{H}_{\text{Ising}}(J,\,\boldsymbol{\Omega})=J\sum_{l=1}^{L}(\hat{\mathbf{S}}_{l}\cdot\boldsymbol{\Omega})(\hat{\mathbf{S}}_{l+1}\cdot\boldsymbol{\Omega})\). For a ferromagnetic coupling \(J<0\), the ground state favors all spins aligned along the same axis, either \(\boldsymbol{\Omega}\) or \(-\boldsymbol{\Omega}\) exactly, which suggests a uniform \(w_{l}=\phi\) or \(\pi+\phi\). In one of the doubly degenerate manifolds for \(J>0\), spins at even sites are parallel to \(\boldsymbol{\Omega}\) with \(w_{l}=\phi\), while those at odd sites are parallel to \(-\boldsymbol{\Omega}\) with \(w_{l}=\pi+\phi\). For \(J<0\) and \(\boldsymbol{\Omega}\) along the x-axis \(\hat{x}\), the combination of \(\tilde{h}=0\) and \(w_{l}=0\) hits 100% AR, as shown in Fig. 2(a).
_Spin-\(1/2\) XY chains_. For spin-\(1/2\) XY chains with the Hamiltonian \(\hat{H}_{\text{XY}}^{P}(J)=J\sum_{l=1}^{L}(\hat{S}_{l}^{x}\hat{S}_{l+1}^{x}+\hat{S}_{l}^{y}\hat{S}_{l+1}^{y})=(1/2)(\sum_{l=1}^{L}J\hat{S}_{l}^{+}\hat{S}_{l+1}^{-}+\text{h.c.})\), the spin couplings along the \(x\) and \(y\)-axes are equal. Once \(J<0\) with the periodic boundary condition (PBC), all spins in the GWMF ground state are aligned to the same polarization direction confined in the xy-plane, with \(\theta_{l}=\pi/2\) and \(\phi_{l}=\varphi\). The infinitely many degenerate manifolds are connected by two-dimensional \(O(2)\) rotations and imply a rotation-invariant combination \(|\psi\rangle=\int_{0}^{2\pi}(d\varphi/2\pi)\exp(im\varphi)[\bigotimes_{l}|\Omega_{l}\rangle]\) with an integer or half-integer \(m\). After the integral, the non-vanishing bases obey a conservation law of
Figure 2: Accuracy rate AR as a function of the controllable parameters \(\varphi\) and \(\tilde{h}\) in GWMF or, equivalently, in the shn-FNN. We consider ground states of (a) a generalized ferromagnetic Ising chain with \(\boldsymbol{\Omega}=\hat{x}\), ferromagnetic XY chains with (b) PBC and (c) TBC (even parity), and (d) an antiferromagnetic XY chain with PBC, respectively. Black-filled triangles mark the parameters given by the optimized shn-FNN with AR = 1 or 100%. For (a) and (b), the phase vector is \(w_{l}=\varphi\), while for (c) and (d) \(w_{l}=\varphi l\). We always set \(L=16\).
\(\sum_{l=1}^{L}\hat{S}_{l}^{z}=m\) due to a recovered \(U(1)\) symmetry, that is, \(e^{i\varphi\sum_{l}\hat{S}_{l}^{z}}|\psi\rangle=e^{im\varphi}|\psi\rangle\) for an arbitrary angle \(\varphi\). A global rotation can tune the sign rule with a uniform \(w_{l}=\varphi\). For \(\varphi=0\), the sign of an arbitrary configuration in the ground state is always positive, which historically is summarized as the Perron-Frobenius theorem [5; 6]. In a numerical variation, e.g. ED or others, it is hard to eliminate the angle, so all combinations with \(\tilde{h}=0\) and \(\varphi\in[\pi/16,\,3\pi/16]\) reach 100% AR in Fig. 2(b).
For the twisted boundary condition (TBC), an antiferromagnetic bond connects the two edge lattice sites in the Hamiltonian \(\hat{H}_{XY}^{T}(J)=(1/2)(\sum_{l=1}^{L-1}J\hat{S}_{l}^{+}\hat{S}_{l+1}^{-}-J\hat{S}_{L}^{+}\hat{S}_{1}^{-}+\text{h.c.})\). Under a rotation \(\hat{U}_{\delta}=\prod_{l=1}^{L}\hat{R}_{l}(\hat{z},\,l\delta)\) with a gradient angle \(\delta=\pi/L\), the twisting effect from the edge bond is absorbed into a gauge field \(\tilde{J}=\exp(i\delta)\) in a new Hamiltonian \(\hat{H}_{XY}^{P}(J\tilde{J})\). Meanwhile, \(\hat{U}_{\delta}^{\dagger}\hat{S}_{l}^{\pm}\hat{U}_{\delta}=\hat{S}_{l}^{\pm}\exp(\mp il\delta)\). Because \(s_{\mathbf{n}}\) is always positive definite in the ground state \(|\psi_{XY}^{P}(J\tilde{J})\rangle\) of \(\hat{H}_{XY}^{P}(J\tilde{J})\), as argued in App. B, the ground state \(|\psi_{XY}^{T}(J)\rangle\) of \(\hat{H}_{XY}^{T}(J)\) carries a nonzero complex phase due to the rotation mentioned above, i.e. \(|\psi_{XY}^{T}(J)\rangle=\hat{U}_{\delta}|\psi_{XY}^{P}(J\tilde{J})\rangle\). After combining \(\pm\delta\), we obtain a real and inversion-symmetric wave function \(|\psi_{XY}^{T}(J)\rangle=\cos[\sum_{l}w_{l}\hat{S}_{l}^{z}]|\psi_{XY}^{P}(J\tilde{J})\rangle\), whose sign rule is \(w_{l}=\delta l\) with \(\tilde{h}=-(L+1)\pi/4\), i.e. an extra phase gradient \(\delta\) uniform in space, which is successfully dug out by the shn-FNN in Fig. 2(c).
Under another transformation \(\hat{U}_{\pi}=\prod_{l=1}^{L}\hat{R}_{l}(\hat{z},\,l\pi)=\prod_{l\in\text{odd}}\hat{R}_{l}(\hat{z},\,\pi)\), we have \(\hat{S}^{\pm}\rightarrow-\hat{S}^{\pm}\) at all odd sites, so the ferromagnetic XY chain for \(J<0\) is linked to its antiferromagnetic counterpart for \(J>0\). Hence, in the antiferromagnetic XY chain with PBC, the ground state acquires an extra MPR, i.e. \(s_{\mathbf{n}}=(-1)^{N_{O}}\) where \(N_{O}=\sum_{l\in\text{odd}}\langle\hat{S}_{l}^{z}+1/2\rangle\) sums over all odd sites, or equivalently \(w_{l}=\pi l\), as shown in Fig. 2(d).
Therefore, in XY chains, the above leading-order sign rules take the general form \(w_{l}=\phi_{l}=\varphi l\) with \(\varphi=0\), \(\delta\) or \(\pi\), which is related to a rotation of the spins with a specific angle gradient and can easily be read out by optimizing an shn-FNN on the whole data set. The resulting long-range correlation \(\langle\hat{\mathbf{S}}_{l}\cdot\hat{\mathbf{S}}_{l^{\prime}}\rangle=\mathbf{S}_{l}\cdot\mathbf{S}_{l^{\prime}}=\cos(\phi_{l}-\phi_{l^{\prime}})=\cos[\varphi(l-l^{\prime})]\) oscillates in space, which implies double peaks of the structure factor \(\mathcal{S}_{k}=(1/L^{2})\sum_{l,l^{\prime}}\exp[ik(l-l^{\prime})]\langle\hat{\mathbf{S}}_{l}\cdot\hat{\mathbf{S}}_{l^{\prime}}\rangle\) at the momenta \(k=\pm\varphi\), correspondingly, in Fig. 3(a).
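The double-peak structure can be verified directly from the mean-field correlation; the following NumPy sketch (with the arbitrary choices \(\varphi=\pi/2\) and \(L=16\)) confirms that \(\mathcal{S}_{k}\) is maximal at \(k=\pm\varphi\).

```python
import numpy as np

L, phi = 16, np.pi / 2
l = np.arange(L)
delta = l[:, None] - l[None, :]
corr = np.cos(phi * delta)                        # GWMF correlation cos[φ(l - l')]
ks = 2 * np.pi * np.arange(-L // 2, L // 2) / L
S = np.array([(np.exp(1j * k * delta) * corr).sum().real for k in ks]) / L**2
print(ks[np.argsort(S)[-2:]])                     # -> ±π/2 (up to ordering)
```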
_Heisenberg chain_. In a pure antiferromagnetic Heisenberg chain (AFHC) with equal nearest-neighboring antiferromagnetic couplings \(J_{1}=1\) along the \(x\), \(y\), and \(z\) axes, spins are aligned to the direction \(\mathbf{\Omega}\) at even sites and the inverse \(-\mathbf{\Omega}\) at odd sites according to GWMF. A pair of degenerate manifolds are linked by a three-dimensional, or equivalently two-dimensional, rotation by an angle about a specified axis. In GWMF, \(|\Psi\rangle=\int(d\Omega/4\pi)Y_{l,m}(\theta,\phi)[\bigotimes_{l}|\Omega_{l}\rangle]\) still obeys MPR since \(\theta\) does not contribute any sign, where \(Y_{l,m}(\theta,\phi)\) is the standard spherical harmonic function for the orbit with total angular momentum \(l\) and magnetization \(m\) along the z-axis. Although the actual ground state behaves as a Tomonaga-Luttinger liquid (TLL) [59; 60; 61], it has been proven that MPR is still ideally obeyed [7; 14]. Here the optimized shn-FNN consistently shows that \(w_{l}=\pi l\) modulo \(2\pi\), see Fig. 3(f).
## III B. Frustrated spin models
_\(J_{1}\)-\(J_{2}\) AFHC_. We now turn on the antiferromagnetic next-nearest-neighboring (NNN) Heisenberg coupling \(J_{2}>0\) in the frustrated spin-\(1/2\) \(J_{1}\)-\(J_{2}\) AFHC \(\hat{H}_{J_{1}-J_{2}}=\sum_{l=1}^{L}(J_{1}\hat{\mathbf{S}}_{l}\cdot\hat{\mathbf{S}}_{l+1}+J_{2}\hat{\mathbf{S}}_{l}\cdot\hat{\mathbf{S}}_{l+2})\), with a dimensionless ratio \(\alpha=J_{2}/J_{1}\). At the Majumdar-Ghosh (MG) point \(\alpha_{\text{MG}}=1/2\), the dimerized (DM) ground state is a product of nearest-neighboring spin-\(1/2\) singlet pairs and obeys MPR strictly. As \(\alpha\) grows infinitely large, the two decoupled chains, consisting of odd and even sites separately, each obey MPR individually. Away from that limit, a relatively small deviation still favors a stable commensurate spin order with a pitch angle \(\varphi=\pi/2\). Once \(\alpha<\alpha_{\text{DC}}\approx 0.99\) for \(L=16\), commensurability breaks due to emerging triplet defects [62; 63]. In between \(\alpha_{\text{MG}}\) and \(\alpha_{\text{DC}}\), the ground state undergoes an incommensurate crossover cut into several intervals, as shown in Figs. 3(f, g), each of
Figure 3: \(J_{1}\)-\(J_{2}\) antiferromagnetic Heisenberg chain (AFHC). (a-e) The structure factor \(\mathcal{S}_{k}\) as a function of the momentum \(k\) for \(L=16\). We choose (a) \(\alpha=J_{2}/J_{1}=0.35\), (b) 0.52, (c) 0.58, (d) 0.73, and (e) 1.2, respectively. According to the optimized shn-FNN, we obtain (f) the phase gradient \(\varphi\) in the GWMF sign rule \(w_{l}=\varphi l\), (g) the corresponding accuracy rate AR (black dots) and correction rate \(w_{c}\) (red dots). A series of level crossings at \(\alpha_{\text{MG}}=1/2\), \(\alpha_{c1}\approx 0.53\), \(\alpha_{c2}\approx 0.64\), and \(\alpha_{\text{DC}}\approx 0.99\) are marked. (h) At \(\alpha=0.35\), bases in the ground state are regrouped into an MPR-obeying set (red dots) and an MPR-violating set (or \(\overline{\text{MPR}}\), blue crosses). The violation only takes place in the bases with small amplitudes. (i) Sign-fidelity susceptibility density \(\chi_{f}\) in the region \(\alpha\in[0\), \(\alpha_{\text{MG}}=1/2)\) for \(L=8\), 12, 16, 20, 22 and 24. The peaks depart from the real BKT transition point \(\alpha_{\text{BKT}}\approx 0.241\) because of an abnormal scaling hypothesis [58].
which carries a specific pitch angle \(\varphi=2p\pi/L\) with an integer \(p\) ranging from \(L/2\) down to \(L/4\) [62, 63], suggesting \(w_{l}=\varphi l\). For a finite system size \(L\), the ground state keeps the translation symmetry with a conserved momentum \(0\) or \(\pi\) depending on \(p\). The additional inversion symmetry about the center of the chain leads to the constraint \(w_{l}+w_{L+1-l}=2\pi p(L+1)/L\), and the resulting sign rule has \(\tilde{h}=p\pi/2\). Note that the data set in frustrated spin systems is chosen from the first \(N_{s}\) samples, where \(N_{s}\) should be carefully adjusted during training.
Due to the interplay of interactions, strong quantum fluctuations usually violate the leading-order sign rule with a pitch angle \(\varphi\), resulting in AR \(<100\%\). For a small \(\alpha=0.35\) and \(L=16\) in Fig. 3(h), the bases in \(\mathbf{T}_{\text{MPR}}\) with large weights still obey MPR, while wrong predictions occur in the MPR-violating set \(\mathbf{T}_{\overline{\text{MPR}}}\) with smaller weights in principle. Thus, the point where the most significant weight in \(\mathbf{T}_{\overline{\text{MPR}}}\) becomes detectable, as it exceeds the limited numeric precision of floating-point numbers, gives an artificial critical point [11, 13, 14].
To quantitatively estimate the violation of MPR, we define a sign-fidelity \(f=\langle\psi^{\text{MPR}}|\psi\rangle\), referenced from a state \(|\psi^{\text{MPR}}\rangle=\sum_{\{\mathbf{n}\}}s_{\mathbf{n}}^{\text{MPR}}a_{\mathbf{n}}|\mathbf{n}\rangle\) fully obeying MPR. In fact, \(f=2w_{c}-1\) with a correct rate \(w_{c}=\sum_{\mathbf{n}\in\mathbf{T}_{\text{MPR}}}|a_{\mathbf{n}}|^{2}\) summing over all bases in \(\mathbf{T}_{\text{MPR}}\), as is conventional [11, 13, 14]. In the vicinity of a continuous transition point, the minimum of \(f\) or \(w_{c}\) is expected, which means the most complicated sign rule [23, 64]. Like the orthogonality catastrophe for free fermions [65], the fidelity is a power-law function of the system size \(L\). In principle, the relevant sign-fidelity susceptibility density \(\chi_{f}=-(\ln f)/L\) is well capable of locating continuous transition points [66]. However, caused by the abnormal behavior of the exponential closing of gaps at the famous Berezinskii-Kosterlitz-Thouless (BKT) transition point \(\alpha_{\text{BKT}}\approx 0.241\), the maximum of \(\chi_{f}\) is located at \(\alpha_{\text{peak}}\approx 0.43>\alpha_{\text{BKT}}\) in the DM region [67], where \(\chi_{f}\) approaches an \(L\)-independent function as shown in Fig. 3(i) [58]. Similarly, in the incommensurate crossover region, the correct rate still shows a clear structure of staircases in Fig. 3(g), which is believed to vanish in the thermodynamic limit (TDL).
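Given the amplitudes and signs of an ED wave function together with predicted signs, \(w_{c}\), \(f\), and \(\chi_{f}\) can be computed in a few lines; this helper is our own sketch and assumes the wave function is stored as flat NumPy arrays.

```python
import numpy as np

def sign_fidelity(amps, signs, pred_signs, L):
    """w_c, f = 2 w_c - 1, and χ_f = -ln(f) / L from flat arrays of
    amplitudes, exact signs, and predicted (e.g. MPR) signs."""
    w_c = (amps[signs == pred_signs] ** 2).sum() / (amps ** 2).sum()
    f = 2.0 * w_c - 1.0
    return w_c, f, -np.log(f) / L
```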
_Antiferromagnetic spin-\(1/2\) XY model on the triangular lattice_. The FNN is also capable of learning sign rules for the ground states of 2D quantum models, such as the XY model on the triangular lattice with \(L_{x}\times L_{y}\) sites shown in Fig. 4(a), where the corresponding Hamiltonian reads \(\hat{H}_{\triangle}=\sum_{\langle l,l^{\prime}\rangle}(\hat{S}_{l}^{+}\hat{S}_{l^{\prime}}^{-}+\text{h.c.})\) and \(\langle l,l^{\prime}\rangle\) runs over all nearest-neighboring sites \(l\) and \(l^{\prime}\). In the XC geometry, the lattice site \(l=l_{y}L_{x}+l_{x}+1\) with displacement \(\mathbf{r}_{l}\) is labeled by the binary indices \((l_{x},l_{y})\) with \(l_{x}=0,\,\cdots,\,L_{x}-1\) and \(l_{y}=0,\,\cdots,\,L_{y}-1\). According to previous studies, the ground state on a torus exhibits a coplanar \(120^{\circ}\) order [68], i.e. the angle between spins at neighboring sites is always \(2\pi/3\). To respect the translation symmetry, \(L_{x}\) is selected as a multiple of \(3\) to guarantee hitting the relevant high-symmetry momentum points \(K^{\pm}\) in the first Brillouin zone, as shown in Fig. 4(b). The desired leading-order sign rule \(w_{(l_{x},l_{y})}=(2\pi/3)(l_{x}+[l_{y}])\) is obtained in Fig. 4(c), where \([l_{y}]=1\) if \(l_{y}\) is even and \(0\) otherwise.
Meanwhile, the ground state inherits the point-group symmetries of the triangular lattice. The expectation values of the symmetry operations listed in Table 1 are \(+1/-1\), corresponding to either the symmetric/even or antisymmetric/odd sector of the group representation. We take the mirror inversion \(\mathcal{M}_{y}\) about the y-axis as an example. Supposing that the basis \(|\mathbf{n}\rangle\) becomes \(|\mathbf{n}^{\prime}\rangle\) under \(\mathcal{M}_{y}\), we easily find that \(\mathbf{w}\cdot\mathbf{n}^{\prime}=L_{y}\pi-\mathbf{w}\cdot\mathbf{n}\) modulo \(2\pi\). For even \(L_{y}/2\), such as the \(3\times 4\) lattice, the symmetric ground state makes the activation function select a cosine form with \(\tilde{h}=0\), maintaining \(\text{Sgn}[\cos(\mathbf{w}\cdot\mathbf{n})]=\text{Sgn}[\cos(\mathbf{w}\cdot\mathbf{n}^{\prime})]\). In contrast, for odd \(L_{y}/2\), e.g. the geometry of \(3\times 6\) lattices, the antisymmetric ground state prefers \(\tilde{h}=\pi/2\), i.e. \(\text{Sgn}[\sin(\mathbf{w}\cdot\mathbf{n})]=-\text{Sgn}[\sin(\mathbf{w}\cdot\mathbf{n}^{\prime})]\). This discrepancy is captured by the shn-FNN in Fig. 4(c-d).
Besides, based on the mean-field picture of spinless Dirac fermions coupled to Chern-Simons gauge fields [69,
Figure 4: Antiferromagnetic spin-\(1/2\) XY model on the triangular lattice. (a) Lattice structure with XC torus geometry. \(a_{1}\) and \(a_{2}\) denote the two primitive vectors. All sites are labeled with indices. (b) Structure factor of the spin-flipping correlation \(\mathcal{S}_{\mathbf{k}}^{+-}=(1/S^{2})\sum_{l,l^{\prime}}\exp[i\mathbf{k}\cdot(\mathbf{r}_{l}-\mathbf{r}_{l^{\prime}})]\langle\hat{S}_{l}^{+}\hat{S}_{l^{\prime}}^{-}\rangle\) in the first Brillouin zone, where \(S\) is the area. Filled circles mark all allowed momenta for \(L_{x}=3\) and \(L_{y}=4\). The largest amplitude accumulates at the high-symmetry points \(K^{\pm}\) (red circles), indicating a \(120^{\circ}\) order. (c) Phase distribution \(w_{l}\) in \(\mathbf{w}\) for the geometry \(3\times 4\). (d) Accuracy rate AR (black squares) and correction rate \(w_{c}\) (red squares) as a function of the geometry \(L_{x}\times L_{y}\). Note that the optimized shn-FNN for the ground state always suggests \(\tilde{h}=0\) for even \(L_{y}/2\) but \(\tilde{h}=\pi/2\) for odd \(L_{y}/2\).
70], for different lattice geometries with finite \(L_{x}\) and \(L_{y}\), non-condensed BCS pairs of spinons from the high-symmetry points \(K^{\pm}\) would violate the leading-order sign rule, so that both AR and \(w_{c}\) deviate from 1. However, the subtle relationship between the lattice geometry and the discrepancy from GWMF remains to be understood.
## III C. Fermi-Hubbard chain
The Fermi-Hubbard model is the simplest model describing the physics of strongly correlated electron systems, being closely related to magnetism, the metal-insulator transition, and the promising theory of high-temperature superconductivity [71; 72; 73]. In one dimension, the Hamiltonian for two species of fermions reads \(\hat{H}_{F}=\sum_{l=1}^{L}[-t\sum_{\sigma}(\hat{c}_{l,\sigma}^{\dagger}\hat{c}_{l+1,\sigma}+\text{h.c.})+U\hat{n}_{l,\uparrow}\hat{n}_{l,\downarrow}]\), where \(\hat{c}_{l,\sigma}^{\dagger}\), \(\hat{c}_{l,\sigma}\) and \(\hat{n}_{l,\sigma}=\hat{c}_{l,\sigma}^{\dagger}\hat{c}_{l,\sigma}\) denote the creation, annihilation and particle number operators of a fermion at site-\(l\), respectively, \(\sigma=\uparrow\) or \(\downarrow\) the spin polarization, \(t>0\) the hopping amplitude between two nearest-neighboring sites, and \(U\) the onsite Coulomb repulsion.
In the Fock space, each basis is a product of the local fermion bases with spin-up and spin-down, that is \(|\mathbf{n}\rangle=[\bigotimes_{l=1}^{L}|n_{l,\uparrow}\rangle][\bigotimes_{l=1}^{L}|n_{l,\downarrow}\rangle]\). By the conventional Jordan-Wigner transformation \(\hat{S}_{l,\sigma}^{+}=\hat{\Pi}_{l,\sigma}\hat{c}_{l,\sigma}^{\dagger}\) with \(\hat{\Pi}_{l,\sigma}=\prod_{k=1}^{l-1}\hat{F}_{k,\sigma}\) and \(\hat{F}_{k,\sigma}=\exp(i\pi\hat{n}_{k,\sigma})\) [74], we obtain a two-leg spin-1/2 ladder \(\hat{H}_{\text{Ladder}}=\sum_{\sigma}\hat{H}_{\parallel,\sigma}+\hat{H}_{\perp}\), consisting of a Hamiltonian \(\hat{H}_{\parallel,\sigma}=-t\) [\(\sum_{l=1}^{L-1}\hat{S}_{l,\sigma}^{+}\hat{S}_{l+1,\sigma}^{-}+(-1)^{\hat{N}_{\sigma}-1}\hat{S}_{L,\sigma}^{+}\hat{S}_{1,\sigma}^{-}+\text{h.c.}\)] in the transverse \(x\) and \(y\)-axes and a Hamiltonian \(\hat{H}_{\perp}=U\sum_{l=1}^{L}(\hat{S}_{l,\uparrow}^{z}+1/2)(\hat{S}_{l,\downarrow}^{z}+1/2)\) in the longitudinal z-axis, with \(\hat{N}_{\sigma}=\sum_{l=1}^{L}\hat{n}_{l,\sigma}\) the total particle number of species \(\sigma\).
Once \(U=0\), even numbers of spin-up and spin-down fermions impose TBC on the legs of the spin ladder, which leads to a unique phase vector \(w_{l}=-(l-1)\pi/L+\pi/2-\pi/(2L)\) in the sign rule \(\text{Sgn}[\cos(\sum_{l}n_{l,\sigma}w_{l})]\) for the even-parity state of species \(\sigma\), as shown in Fig. 5(a). For the odd-parity state, the cosine function is replaced by a sine, i.e. \(\text{Sgn}[\sin(\sum_{l}n_{l,\sigma}w_{l})]\). In this section, we do not discuss the trivial case of odd fermion numbers, where the induced PBC in the spin ladder implies zero phases everywhere, i.e. \(w_{l}=0\).
For small \(U/t>0\), the ground state for arbitrary finite \(N_{\uparrow}\) and \(N_{\downarrow}\) keeps even parity, which is a combination of \(\text{Sgn}[\alpha_{1}\cos(\sum_{l}n_{l,\uparrow}w_{l})\cos(\sum_{l}n_{l,\downarrow}w_{l})+\alpha_{2}\sin(\sum_{l}n_{l,\uparrow}w_{l})\sin(\sum_{l}n_{l,\downarrow}w_{l})]\). After training with \(U/t=0.1\), \(N_{\uparrow}=14\), \(N_{\downarrow}=2\) and \(L=16\), the optimized shn-FNN suggests a maximum AR \(\approx 97\%\) at \(\alpha_{1}=-1\) and \(\alpha_{2}=1\) in Fig. 5(b). Thus, the GWMF sign rule for the Fermi-Hubbard model becomes \(\text{Sgn}[\cos(\sum_{l,\sigma}n_{l,\sigma}w_{l})]\). We also find that the GWMF sign rule is robust and depends only weakly on the filling fraction and the system size \(L\). For the case of \(U/t=0.1\) and \(L=16\) in Fig. 5(c), AR \(>96.8\%\) persists for different even numbers of spin-down fermions. Moreover, AR for \(N_{\downarrow}=2\) gets closer to 100% as \(L\) grows.
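For concreteness, a small NumPy sketch of this combined GWMF sign rule with the \(U=0\) phase vector defined above (the array indexing and the helper name are our own choices):

```python
import numpy as np

L = 16
w = -np.arange(L) * np.pi / L + np.pi / 2 - np.pi / (2 * L)  # w_l, l = 1, ..., L

def gwmf_sign(n_up, n_dn):
    """Sgn[cos(Σ_{l,σ} n_{l,σ} w_l)] for occupation vectors in {0, 1}^L."""
    return np.sign(np.cos(w @ (n_up + n_dn)))
```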
In the dominantly large \(U\) limit, only single occupations survive in the ground state because of a considerable charge gap, so spin fluctuations in the reduced Hilbert space of either spin-up \(\hat{c}_{l,\uparrow}^{\dagger}|0\rangle\) or spin-down \(-\hat{c}_{l,\downarrow}^{\dagger}|0\rangle\) are described by the effective antiferromagnetic Heisenberg chain, equivalent to MPR. Returning to the fermion bases, it is easy to prove that \(w_{l}\) is the same as in the GWMF sign rule. For instance, the correction rate \(w_{c}\approx 1\) for the GWMF sign rule once \(U/t\geq 8\) in
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(L_{x}\times L_{y}\) & \(\langle\mathcal{T}_{x}\rangle\) & \(\langle\mathcal{T}_{y}\rangle\) & \(\langle\mathcal{M}_{x}\rangle\) & \(\langle\mathcal{M}_{y}\rangle\) & \(\langle\mathcal{I}_{c}\rangle\) \\ \hline \(3\times 4\) & \(+1\) & \(+1\) & \(+1\) & \(+1\) & \(+1\) \\ \(3\times 6\) & \(+1\) & \(+1\) & \(+1\) & \(-1\) & \(-1\) \\ \(3\times 8\) & \(+1\) & \(+1\) & \(+1\) & \(+1\) & \(+1\) \\ \(6\times 4\) & \(+1\) & \(+1\) & \(+1\) & \(+1\) & \(+1\) \\ \(3\times 10\) & \(+1\) & \(+1\) & \(+1\) & \(-1\) & \(-1\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Measurement of the symmetry operations on the ground state of the antiferromagnetic spin-1/2 XY model on the triangular lattice. We consider the translation by one site \(\mathcal{T}_{x}\) along the x-axis and \(\mathcal{T}_{y}\) along the y-axis, the mirror inversions \(\mathcal{M}_{x}\) about the x-axis and \(\mathcal{M}_{y}\) about the y-axis, and the center inversion \(\mathcal{I}_{c}\).
Fig. 5(d).
According to the Bethe ansatz solution [75], the Fermi liquid only survives at \(U=0\) in the TDL. However, because of the tiny charge gap close to \(U=0\), fermions behave like a Fermi liquid in the ground state for a limited system size \(L\leq 20\), much smaller than the correlation length. A quasi-critical point appears at finite system size, in the vicinity of which strong quantum fluctuations violate the GWMF sign rules and cause \(w_{c}\) to drop suddenly. As \(L\) grows in Fig. 5(d), the quasi-critical point gradually approaches \(U=0\).
In an alternative combination scheme \(|{\bf n}\rangle=\bigotimes_{l=1}^{L}[|n_{l,\uparrow}\rangle|n_{l,\downarrow}\rangle]\), the Jordan-Wigner transformation changes accordingly, and an additional nonlinear factor \((-1)^{\sum_{l=2}^{L}n_{l,\uparrow}\sum_{k=1}^{l-1}n_{k,\downarrow}}\) appears in front of the predicted sign rules, which is not easily generalized in an shn-FNN.
## IV IV. Summary and Discussions
We successfully establish a Gutzwiller mean-field theory of sign rules for ordered ground states in qubit lattice models, which perfectly matches the sign predicted by a shallow FNN with a single hidden neuron, called shn-FNN. Based on this principle, we not only consistently explain the excellent performance of the activation functions in the neural network, but also exhibit a way of vividly interpreting the sign rule represented by the FNN.
We test our theory by systematic benchmarks on various spin models and the Fermi-Hubbard chain. For non-frustrated spin-1/2 models, such as a generalized Ising chain, XY chains with periodic or twisted boundary conditions, and an antiferromagnetic Heisenberg chain, the sign rules for the ground states with magnetic orders can be fully captured by the shn-FNN, where the accuracy rate of the prediction reaches exactly 100%. In contrast, the competition between interactions in frustrated models strongly enhances the complexity of the sign rules for the ground states and reduces the prediction accuracy. However, the leading-order or mean-field sign rules obtained by optimizing the shn-FNN are still capable of visualizing the pictorial scenario of spin orders, where the characteristic phase vector is tightly related to pitch angles, gauge field gradients, etc. We can also obtain a unified mean-field sign rule by choosing suitable bases in the Fermi-Hubbard chain.
Our theory is a simple starting point obtained by removing short-range details of the ordered states. It would be interesting to further decode information from high-order microscopic processes beyond the leading-order ones. Of course, the theory for general lattice models also deserves profound studies in the future.
## V Acknowledgement
We thank Tao Li, Rui Wang, Ji-Lu He, Wei Su, and Wei Pan for the grateful discussion. S. H. acknowledges funding from the Ministry of Science and Technology of China (Grant No. 2017YFA0302904) and the National Science Foundation of China (Grants No. 12174020). Z. P. Y. acknowledges funding from the National Science Foundation of China (Grants No. 12074041). S. H. and K. X. further acknowledge support from Grant NSAF-U2230402. The computations were performed on the Tianhe-2JK at the Beijing Computational Science Research Center (CSRC) and the high-performance computing cluster of Beijing Normal University in Zhuhai.
### A. Function softmax in FNN
In our work, we only have a single neuron \(y_{H}\) in the hidden layer and the normalization function softmax gives two neurons (\(y_{O,1}\), \(y_{O,2}\)) in the output layer, i.e.
\[\begin{split}& y_{O,1}=\frac{(e^{y_{H}})^{a}}{(e^{y_{H}})^{a}+(e^{y_{H}})^ {b}},\\ & y_{O,2}=\frac{(e^{y_{H}})^{b}}{(e^{y_{H}})^{a}+(e^{y_{H}})^{b}}. \end{split} \tag{3}\]
In practice, we choose an appropriate discrepancy between the exponents \(a\) and \(b\) so that \((e^{y_{H}})^{a}\) and \((e^{y_{H}})^{b}\) are distinguishable. Taking \(a>b\) as an example, \(y_{H}>0\) leads to \(y_{O,1}>1/2\) and a positive predicted sign, and otherwise a negative one.
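As a concrete illustration, this decision rule can be sketched numerically as follows; the exponent values \(a=1\), \(b=-1\) and the function name are our illustrative choices, not taken from the original implementation.

```python
import numpy as np

def predict_sign(y_h, a=1.0, b=-1.0):
    """Sign readout of the shn-FNN output layer, Eq. (3): softmax over the
    exponents a > b applied to the single hidden activation y_h."""
    w = np.exp(np.array([a, b]) * y_h)   # (e^{y_h})^a and (e^{y_h})^b
    y_o = w / w.sum()                    # (y_{O,1}, y_{O,2})
    return 1 if y_o[0] > 0.5 else -1    # y_{O,1} > 1/2  <=>  y_h > 0 for a > b
```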
### B. Sign rule for the Spin-1/2 XY model
Here we explicitly prove that the ground state of the Hamiltonian \(\hat{H}_{XY}^{P}(J)\) in the main text always has a positive sign for an arbitrary configuration \({\bf n}\), i.e. \(s_{\bf n}>0\).
Under the Jordan-Wigner transformation [74]
\[\begin{split}&\hat{S}_{l}^{+}=\hat{c}_{l}^{\dagger}\exp(i\pi\sum_{k<l}\hat{n}_{k}),\\ &\hat{S}_{l}^{-}=\hat{c}_{l}\exp(-i\pi\sum_{k<l}\hat{n}_{k})\end{split} \tag{4}\]
with the creation (annihilation) operator \(\hat{c}_{l}^{\dagger}\) (\(\hat{c}_{l}\)) and the particle number operator \(\hat{n}_{k}=\hat{c}_{k}^{\dagger}\hat{c}_{k}\) of the fermion, the Hamiltonian \(\hat{H}_{XY}^{P}(J)\) becomes \(\hat{H}_{F}^{P/T}(J)\) for spinless free fermions at half-filling, where
\[\hat{H}_{F}^{P/T}(J)=-J\left(\sum_{l=1}^{L-1}\hat{c}_{l}^{\dagger}\hat{c}_{l+ 1}\pm\hat{c}_{L}^{\dagger}\hat{c}_{1}\right)+{\rm h.c.}. \tag{5}\]
The periodic/twisted boundary condition is selected if \(N\) is odd/even. When \(N\) is odd, the single-particle energy levels are still described by plane waves with discrete momenta \(k_{m}=2m\pi/L\) for integers \(m=0,\ldots,L-1\). If \(L\) is large enough, the many-body wave function, built from these single-particle energy levels, is the same as a ground state of the Hamiltonian \(\hat{H}_{F}^{P}(J)\). When \(N\) is even, by an analogous argument, the ground state is the same as that of the Hamiltonian \(\hat{H}_{F}^{T}(J)\). Both of the above Hamiltonians can be transformed back to \(\hat{H}_{XY}^{P}(J)\) by the inverse Jordan-Wigner transformation, under which the sign of every configuration remains positive.
|
2310.19474 | Structure-Informed Neural Networks for Boundary Observation Problems | We introduce Structure Informed Neural Networks (SINNs), a novel method for solving boundary observation problems involving PDEs. The SINN methodology is a data-driven framework for creating approximate solutions to internal variables on the interior of a domain, given only boundary data. The key idea is to use neural networks to identify a co-ordinate transformation to a latent space, upon which a well-posed elliptic system of partial differential equations is constructed. The use of elliptic systems enables the low-cost transfer of information from the domain's boundary to its interior. This enables approximate solutions to PDE boundary observation problems to be constructed for generic, and even ill-posed, problems. A further advantage of the proposed method is its ability to be trained on experimental or numerical data without any knowledge of the underlying PDE. We demonstrate the ability of SINNs to accurately solve boundary observation problems by considering two challenging examples of a non-linear heat equation and boundary observation for the Navier-Stokes equations. | Jakub Horsky, Andrew Wynn | 2023-10-30T12:01:41Z | http://arxiv.org/abs/2310.19474v1 | # Structure-Informed Neural Networks for Boundary Observation Problems
###### Abstract
We introduce _Structure Informed Neural Networks_ (SINNs), a novel method for solving boundary observation problems involving PDEs. The SINN methodology is a data-driven framework for creating approximate solutions to internal variables on the interior of a domain, given only boundary data. The key idea is to use neural networks to identify a co-ordinate transformation to a latent space, upon which a well-posed elliptic system of partial differential equations is constructed. The use of elliptic systems enables the low-cost transfer of information from the domain's boundary to its interior. This enables approximate solutions to PDE boundary observation problems to be constructed for generic, and even ill-posed, problems. A further advantage of the proposed method is its ability to be trained on experimental or numerical data without any knowledge of the underlying PDE. We demonstrate the ability of SINNs to accurately solve boundary observation problems by considering two challenging examples of a non-linear heat equation and boundary observation for the Navier-Stokes equations.
_Key words:_ Data driven scientific computing, Reduced order modeling, Machine learning, Partial differential equations, Operator learning
## 1 Introduction
Boundary observation problems aim to discover the value of a physical quantity inside a domain by using only observations from its boundary. If possible, this means that potentially complex physical information can be obtained without the need for invasive internal sensors. Many fundamental problems in engineering and physics have this form with applications, for example, in fluid mechanics [4], medical imaging [16], geophysics [15], and thermal sensing [2].
Typically, the internal physical quantity of interest is linked to the boundary observations by a partial differential equation (PDE). In many applications, this can make the problem highly challenging to analyse analytically and computationally impractical to solve numerically. In fluid mechanics, for example, the nonlinear Navier-Stokes equations govern the relation between the internal
fluid properties, such as its velocity or temperature, and boundary data which are convenient to observe experimentally, such as pressure or shear stress. The well-known complexity of solutions to such nonlinear PDEs implies that solving boundary observation problems of practical importance is a significant challenge.
In this paper, we propose a new data-driven methodology, called _Structure Informed Neural Networks_ (SINNs), for solving boundary observation problems involving nonlinear PDEs. The idea is to embed an inherently well-posed structure for boundary observation problems into a data-driven framework with the aim of enabling efficient, low-order, approximate solutions. This is achieved in a three-stage process, indicated schematically in Figure 1. First, a neural network encodes both the boundary data and the structure of the boundary geometry into a simpler _latent space_ of boundary variables. Information is then passed from the boundary to the interior of the latent space using an _elliptic system_ [3]. This embeds a general class of well-posed PDEs into the SINN. Finally, a second neural network is used to decode the interior latent variables to physical variables.
The idea of using elliptic systems in a data-driven approach is the main novelty of this paper. Boundary value problems for elliptic systems were widely studied in the "golden age" of PDE analysis in the 1950s [11]. Our motivation for using them now is that they can describe a significant range of boundary value problems, are numerically tractable to solve, and can be defined with only a small number of parameters. The second major contribution of this paper is to develop an operator-theoretic framework for embedding elliptic systems within the classical encoder-decoder structure of neural network-based reduced order modelling. This underpins the efficient numerical identification of SINNs, enables a powerful coupling of elliptic systems with deep neural networks, and opens the door to the data-driven solution of a wide range of challenging non-linear boundary observation problems.
The structure-informed neural networks (SINNs) developed here have some similarities to, and take inspiration from, a number of existing data-driven methods for PDE analysis. For example, Koopman-based modal decomposition methods [1, 14, 10] possess the same three-stage mapping structure as in Figure 1; Physics-Inspired Neural Networks (PINNs) [13] use neural networks to efficiently solve PDEs, including boundary value problems [7]; and Neural Operators [5] use kernel-based neural networks to construct solution operators for PDE parameter identification. To enable a full discussion of the relation and distinction between SINNs and existing methods in §1.3, we must first define the mathematical structure of the boundary observation problems we aim to solve and give an overview of the SINN methodology.
### Boundary observation problems
Consider a physical domain \(\Omega\subset\mathbb{R}^{d}\), with \(d=2\) or \(3\), and let \(\partial\Omega\) denote its boundary. At each point \(\mathbf{x}\in\Omega\), we want to recover the value of an \(n\)-dimensional physical variable \(\mathbf{u}(\mathbf{x})\in\mathbb{R}^{n}\). To do this, we can only make use of boundary data \(\mathbf{b}(\mathbf{z})\in\mathbb{R}^{n_{\partial}}\) which can be measured at each point \(\mathbf{z}\in\partial\Omega\). It will be assumed that both interior and boundary data are square-integrable functions
in the sense that \(\mathbf{u}\in X\), where
\[X=L^{2}(\Omega,\mathbb{R}^{n})=\left\{f:\Omega\to\mathbb{R}^{n}:\int_{\Omega}\|f( \mathbf{x})\|^{2}d\mathbf{x}<\infty\right\},\]
and that \(\mathbf{b}\in Y\), where
\[Y=L^{2}(\partial\Omega,\mathbb{R}^{n_{\partial}})=\left\{f:\partial\Omega\to \mathbb{R}^{n_{\partial}}:\int_{\partial\Omega}\|f(\mathbf{z})\|^{2}d\mathbf{z}<\infty \right\}.\]
A typical situation in which such data arises is if the internal and boundary data satisfy a PDE of the form
\[\begin{split}\mathcal{L}(\mathbf{u},\mathbf{\lambda})&=0, \qquad\text{in }\Omega\\ \mathcal{B}(\mathbf{u})&=\mathbf{b},\qquad\text{on }\partial\Omega, \end{split} \tag{1}\]
where \(\mathcal{L}\) is a differential operator, \(\mathbf{\lambda}\) are any parameters, and \(\mathcal{B}\) is an output operator linking the interior to boundary variables. We do not assume that (1) is well-posed in the sense that for any boundary function \(\mathbf{b}\), there is a unique solution \(\mathbf{u}\) satisfying the PDE. Instead, the output operator \(\mathcal{B}\) should be viewed simply as shorthand for the "available information" which may be observed on the boundary, given that a physical system is in state \(\mathbf{u}\) inside the domain.
In this abstract language, the structure-informed neural networks (SINNs) that will be constructed in this paper are operators
\[\begin{split}\mathcal{F}:& Y\to X\\ \mathbf{b}&\mapsto\mathbf{u}\end{split} \tag{2}\]
acting between the function space \(Y\) of observable measurements and the space \(X\) of all possible distributions of interior physical values. The fact we will identify _operators_ is important since it means that a single SINN \(\mathcal{F}\) is able to
approximate the internal variables \(\mathbf{u}\), given any possible boundary observation \(\mathbf{b}\in Y\). As will be discussed in §1.3, this operator-based philosophy places SINNs within the recent class of neural-network-based operator identification methods such as Neural Operators [5] or DeepONets [9].
A key objective of this paper is to identify SINN operators from data. We assume that a data ensemble
\[\mathcal{U}:=\{(\mathbf{u}_{i},\mathbf{b}_{i})\}_{i=1}^{N_{T}}\subset(X\times Y)^{N_{T}}\]
is available consisting of \(N_{T}\) pairs of internal and boundary data, each arising from a solution to (1). The idea will be to construct a mapping of the form (2) which optimally fits the data \(\mathcal{U}\). Given the infinite-dimensional nature of the underlying problem and the finite-dimensional nature of the data \(\mathcal{U}\), however, any numerically tractable method must make an a-priori restriction on the possible forms that \(\mathcal{F}\) can take.
### SINN operators
We assume that \(\mathcal{F}\) is the composition of three operators
\[\mathcal{F}=\delta\circ\mathcal{E}_{\mathcal{A}}\circ\epsilon^{\partial}, \tag{3}\]
the structure of which is shown schematically in Figure 1. The first operator is called a _boundary encoder_\(\epsilon^{\partial}:Y\to Y_{L}\). This is a nonlinear operator, defined in terms of a neural network, which maps both the boundary data and geometry into a boundary latent space, \(Y_{L}:=L^{2}(\partial\Omega,\mathbb{R}^{r})\), where the parameter \(r\) governs the order and complexity of the latent space.
To enable data-driven training we must further restrict the form of the operator \(\epsilon^{\partial}\), and we assume that \(\epsilon^{\partial}\) acts _semi-locally_ in the following sense. Given boundary data \(\mathbf{b}\in Y\), the value of \((\epsilon^{\partial}\mathbf{b})(\mathbf{z})\) at any \(\mathbf{z}\in\partial\Omega\) can only depend on the values of \(\mathbf{b}\) in a small neighbourhood \(N_{\mathbf{z}}\subset\partial\Omega\) of \(\mathbf{z}\). Practically, this will be achieved by training a neural network\({}^{1}\) \(\mathcal{N}^{\partial}:\{\mathbf{b}(\mathbf{\xi}):\mathbf{\xi}\in N_{\mathbf{z}}\}\mapsto(\epsilon^{\partial}\mathbf{b})(\mathbf{z})\). As will be described in detail in §2.2, the fact that the input to \(\mathcal{N}^{\partial}\) is defined in terms of a local neighbourhood enables a single neural network to be applied repeatedly to build up the definition of \(\epsilon^{\partial}:Y\to Y_{L}\). This allows a wide class of nonlinear operators to be considered without significantly increasing the number of optimisation parameters.
Footnote 1: A formal mathematical definition of neural networks used in this paper is given in §4.5.
The purpose of introducing latent variables is to define a common structure within which information can be passed from the boundary latent space \(Y_{L}\) to an interior latent space \(X_{L}=L^{2}(\Omega,\mathbb{R}^{r})\). A SINN implements this transfer of information by using an _elliptic system_ of PDEs. An elliptic system is governed by a second-order differential operator
\[D_{\mathcal{A}}\mathbf{\ell}=\sum_{i,j=1}^{d}A_{ij}\frac{\partial^{2}\mathbf{\ell}}{ \partial x_{i}\partial x_{j}},\]
where \(A_{ij}\in\mathbb{R}^{r\times r}\) are symmetric matrices satisfying the two conditions: i) that \(A_{ij}=A_{ji}\), for any \(i,j=1,\ldots,d\); and ii) that the block matrix \(\mathcal{A}=(A_{ij})\in\mathbb{R}^{rd\times rd}\) is strictly positive definite.
Information is passed from the boundary latent space \(Y_{L}\) to an interior latent space \(X_{L}\) by solving the following boundary value problem:
\[\begin{split} D_{\mathcal{A}}\boldsymbol{\ell}&=0,\qquad\quad\text{in }\Omega\\ \boldsymbol{\ell}&=\epsilon^{\partial}(\boldsymbol{b }),\qquad\text{on }\partial\Omega.\end{split} \tag{4}\]
The assumption that \(\mathcal{A}\) is positive definite is crucial. This implies that (4) is a _strongly elliptic system_ of PDEs. It then follows, under appropriate smoothness conditions [3] on the latent boundary data \(\epsilon^{\partial}(\boldsymbol{b})\) and the boundary geometry, that (4) has a unique solution \(\boldsymbol{\ell}\in X_{L}\). We let \(\mathcal{E}_{\mathcal{A}}:Y_{L}\to X_{L}\) denote the operator which maps boundary data to internal variables when solving the elliptic boundary value problem (4). The structure of the SINN mapping (3) is hence specifically designed to create a latent space in which passage of data from boundary to the interior is well-posed. This is achieved irrespective of the properties of the PDE or the observation mapping structure (1) from which the physical data was sampled.
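To make condition ii) concrete, the following minimal sketch (our own illustration, assuming dimensions \(r=3\), \(d=2\)) constructs a generating matrix with symmetric blocks satisfying \(A_{ij}=A_{ji}\), shifts it to be strictly positive definite, and verifies this with a Cholesky factorisation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_elliptic_gf(r, d):
    """Sample an elliptic-system generating matrix: symmetric r x r blocks
    A_ij with A_ij = A_ji, shifted so the block matrix is strictly PD."""
    A = np.zeros((r * d, r * d))
    for i in range(d):
        for j in range(i, d):
            S = rng.standard_normal((r, r))
            S = 0.5 * (S + S.T)                      # each block symmetric
            A[i*r:(i+1)*r, j*r:(j+1)*r] = S
            A[j*r:(j+1)*r, i*r:(i+1)*r] = S          # A_ij = A_ji
    lam_min = np.linalg.eigvalsh(A).min()
    A += (abs(lam_min) + 1e-2) * np.eye(r * d)       # enforce strict PD
    return A

A = make_elliptic_gf(r=3, d=2)
np.linalg.cholesky(A)    # succeeds only for a strictly positive definite matrix
```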
The third, and final, component of a SINN operator (3) is a _decoder_
\[\begin{split}\delta:X_{L}&\longrightarrow X\\ \boldsymbol{\ell}&\longmapsto\boldsymbol{u}\end{split} \tag{5}\]
which lifts a distribution of interior latent variables \(\boldsymbol{\ell}\in X_{L}\) back into physical space \(\boldsymbol{u}\in X\). Analogous to the boundary encoder, \(\delta\) is assumed to be nonlinear and semi-local. That is, for any \(\boldsymbol{x}\in\Omega\), the value of \((\delta\boldsymbol{\ell})(\boldsymbol{x})\) must only depend on the values of \(\boldsymbol{\ell}\) in a small neighbourhood \(N_{\boldsymbol{x}}\subset\Omega\) of \(\boldsymbol{x}\). Again, this can be implemented using a single neural network \(\mathcal{N}:\{\boldsymbol{\ell}(\boldsymbol{y}):\boldsymbol{y}\in N_{\boldsymbol{x}}\}\mapsto\boldsymbol{u}(\boldsymbol{x})\) which is applied repeatedly to form the definition of the operator \(\delta\), as described in detail in §2.3.
In summary, a structure-informed neural network (SINN) \(\mathcal{F}\) is an operator of the following form
\[\mathcal{F}=\left\{\begin{array}{c}\text{semi-local}\\ \text{nonlinear NN}\\ \delta:X_{L}\to X\end{array}\right\}\circ\left\{\begin{array}{c}\text{ global}\\ \text{elliptic system}\\ \mathcal{E}_{\mathcal{A}}:Y_{L}\to X_{L}\end{array}\right\}\circ\left\{ \begin{array}{c}\text{semi-local}\\ \text{nonlinear NN}\\ \epsilon^{\partial}:Y\to Y_{L}\end{array}\right\}\]
The semi-local architecture of the encoder and decoder mappings is chosen specifically so as to restrict the number of degrees of freedom involved in defining the nonlinear components of the operator. The global transfer of information from boundary to interior is performed in the latent space via a well-posed elliptic system. This embeds a natural, yet very general, object into a SINN which is specifically tailored to the structure of the boundary observation problems that are our aim to solve. Furthermore, as will be described in SS3, a key advantage of using elliptic systems of PDEs is that identification of their coefficients can be
performed in a computationally-efficient manner using only local training data. However, once trained, the resulting elliptic system can then be applied globally to give a SINN solution to the original boundary observation problem.
In §2 we introduce the concept of a _generating function_ which underpins the semi-local structure of the encoder and decoder operators, before introducing these operators formally and deriving their inherited mathematical properties. The method of training SINNs from data is described in §3 and its numerical implementation discussed in §4. Implementation of our approach on a pair of challenging test-cases is given in §5. Before this, we first comment briefly on the relation between the proposed SINN architecture and other, related, data-driven approaches to PDE analysis.
### Relation of SINNs to existing methods
The use of neural networks to solve PDEs has received much recent interest with the development of Physics-inspired Neural Networks (PINNs) [13]. In the context of solving a PDE of the form (1), the idea is to view the solution \(\mathbf{u}\) as a mapping \(\mathbb{R}^{d}\ni\mathbf{x}\mapsto\mathbf{u}(\mathbf{x})\in\mathbb{R}^{n}\) and to therefore seek to construct a neural network \(\mathcal{N}_{P}:\mathbb{R}^{d}\to\mathbb{R}^{n}\) which approximates the solution. The crucial step is to add so-called physics-inspired constraints, namely \(\mathcal{L}(\mathcal{N}_{P}(\mathbf{x}),\mathbf{\lambda})_{|_{\Omega}}=0\) and \(\left[\mathcal{B}(\mathcal{N}_{P}(\mathbf{x}))-\mathbf{b}\right]_{|_{\partial\Omega}}=0\), to force the constructed solution to satisfy the underlying PDE.
In contrast to the SINN operators \(\mathcal{F}:Y\to X\), which act between function spaces, PINNs are finite-dimensional mappings that directly attempt to replicate the solution mapping \(\mathbf{x}\mapsto\mathbf{u}(\mathbf{x})\). They require knowledge of the underlying PDE they seek to solve (i.e., of \(\mathcal{L},\mathbf{\lambda}\) and \(\mathcal{B}\)) and, when applied to boundary observation problems, must be trained using knowledge of the specific boundary data \(\mathbf{b}\). In contrast, SINNs do not require such information: the identification of operators means that such specific boundary data is not required in the SINN methodology.
The three-operator structure of the mapping \(\mathcal{F}=\delta\circ\mathcal{E}_{\mathcal{A}}\circ\epsilon^{\partial}\) is widely used in a variety of data-driven approaches to low-order modelling. In Koopman-based modelling, for example, operators with this three-level structure are used to approximate the time-evolution of chaotic, infinite-dimensional, dynamical systems. In these approaches, the role of the central operator \(\mathcal{E}_{\mathcal{A}}\) is to model temporal evolution, rather than the passage of information from a domain's boundary to its interior as in this paper. The main distinction between SINNs and the Koopman methodology is that, in the latter approach, the latent space is finite dimensional and the temporal operator is a finite-dimensional ODE.
This represents an important distinction with the SINN methodology. To explain, consider the case of the decoder operator \(\delta\), and assume that it maps from a finite-dimensional latent space, say \(\mathbb{R}^{r}\), into the infinite-dimensional space of physical variables \(X=L^{2}(\Omega,\mathbb{R}^{n})\). The mismatch in dimensions between latent and physical space implies that the decoder must have an inherent method of translating finite-dimensional latent variables to infinite-dimensional functions. In Koopman-based approaches, this is typically achieved by considering a basis of functions \(\{\Phi_{i}\}_{i=1}^{N}\subset X\) and letting \(\delta\) involve a mapping from the latent space \(\mathbb{R}^{r}\) to the coefficients \(\{\hat{f}_{i}\}\subset\mathbb{R}^{N}\) of a series expansion \(\sum_{i=1}^{N}\hat{f}_{i}\Phi_{i}\in X\). A major challenge of this approach is the choice of an appropriate basis \(\{\Phi_{i}\}\), and attempting to solve this problem has motivated a range of different Koopman-based methods [17, 6].
In contrast, in the SINN approach developed here, the use of an elliptic system \(\mathcal{E}_{\mathcal{A}}\) removes the need for assigning or identifying a set of basis functions, and therefore the imposition of unnecessary structure on the operator \(\mathcal{F}\). Constructing an appropriate elliptic system only requires identifying the PDE coefficient matrix \(\mathcal{A}\), which potentially offers a significant reduction in dimension compared to identifying a set of basis functions \(\{\Phi_{i}\}\subset X\). This advantage comes at the cost of requiring the solution of a PDE, as opposed to an ODE, as the central component of the model \(\mathcal{F}\). However, the SINN methodology deliberately imposes a well-posed elliptic structure which, in many cases, enables this PDE to be solved accurately and at low cost using existing algorithms. In addition, as will be explained in §3, since our aim is to identify a PDE, the cost function for SINN training can be chosen to involve only low-cost, local solutions to elliptic systems during training. However, once trained, the identified elliptic systems can then be used to transfer information across a domain globally.
Finally, the philosophy taken in this paper to identify _operators_ using the SINN methodology is related to the recent interest in using neural networks to identify operators between function spaces, such as Neural Operators [5] or DeepONets [9]. The Neural Operator framework [5] seeks to construct solution operators \(G:\boldsymbol{\lambda}\mapsto\boldsymbol{u}\) which solve PDEs of the form (1) with Dirichlet boundary conditions \(\boldsymbol{b}=0\) using knowledge of their distributed parameters \(\boldsymbol{\lambda}\). In this approach, \(G\) transfers information globally in the domain \(\Omega\) using an iterative sequence of integral operators whose kernels are identified using neural networks. For practicable computational implementation in model training, structure needs to be imposed on the integral kernels, such as using low-rank approximations, Convolutional Neural Networks, Graph Neural Networks [12], or Fourier Neural Operators [8]. Any such choice of structure is philosophically similar to the need to prescribe a functional basis in the Koopman-based methodology described previously. Again, the contrast with the SINN methodology is that by training an elliptic operator, only a relatively small number of coefficients are required to enable the global transfer of problem information, and this is achieved without the need to impose any additional structure on the operator ansatz. We note, finally, that the DeepONet methodology [9], which can be viewed as a special case of the Neural Operator approach, also essentially requires the identification of a functional basis during training.
## 2 Encoders and Decoders for SINNs
In this section, we give a detailed description of the mathematical structure of the encoder and decoder operators required to create a SINN. We will describe three classes of operator: interior encoders, boundary encoders, and decoders.
As indicated in Figure 1, only the boundary encoder and decoder are required to define a SINN mapping. However, as will be explained in §3, interior encoders will be required to enable data-driven training.
The semi-local structure of all encoder and decoder operators will be implemented by defining _generating functions_ (GFs), which act as the building blocks of the SINN methodology. In each of the following sections we first introduce a generating function, use it to define the respective operator, and then comment on the regularity properties inherited by that operator.
### Interior Encoders
For the purposes of model training only, we will construct interior encoders \(\epsilon\) which, given any distribution of physical variables \(\mathbf{u}\in X\), transform these into a distribution of latent variables \(\mathbf{\ell}=\epsilon\mathbf{u}\) on the domain interior.
_Interior Encoder GFs:_ Given a compact set \(0\in E\subset\mathbb{R}^{d}\), a generating function for an interior encoder is any continuous, compact\({}^{2}\), generally nonlinear mapping
Footnote 2: A compact mapping is one which maps bounded subsets to relatively compact subsets.
\[e:L^{2}(E,\mathbb{R}^{n})\longrightarrow\mathbb{R}^{r}. \tag{6}\]
This should be thought of as a mapping
\[e:\left\{\begin{array}{c}\text{Local patch of}\\ \text{ interior data}\end{array}\right\}\longmapsto\left\{\begin{array}{c} \text{Latent}\\ \text{variables}\end{array}\right\}.\]
which will be used to endow the interior encoder \(\epsilon\) with the desired semi-local structure.
_Definition of Interior Encoders:_ For any \(\mathbf{x}\in\Omega\), define a local neighbourhood
\[E_{\mathbf{x}}:=\mathbf{x}+E=\{\mathbf{x}+\mathbf{y}:\mathbf{y}\in E\},\]
and let \(\Omega_{E}:=\{\mathbf{x}\in\Omega:E_{\mathbf{x}}\subset\Omega\}\) be the set of points whose neighbourhoods \(E_{\mathbf{x}}\) are entirely contained in \(\Omega\). These sets are shown in Figure 3.
Next let \(\mathbf{u}\in X\). For any \(\mathbf{x}\in\Omega_{E}\), a local function \(\mathbf{u}_{\mathbf{x}}:E\rightarrow\mathbb{R}^{n}\) can be defined by
\[\mathbf{u}_{\mathbf{x}}(\mathbf{y}):=\mathbf{u}(\mathbf{x}+\mathbf{y}),\qquad\mathbf{y}\in E.\]
Given a generating function \(e:L^{2}(E,\mathbb{R}^{n})\rightarrow\mathbb{R}^{r}\), we then define an interior encoder by
\[\left(\epsilon\mathbf{u}\right)(\mathbf{x}):=e\left(\mathbf{u}_{\mathbf{x}}\right),\qquad\mathbf{ x}\in\Omega_{E}, \tag{7}\]
This definition should be thought of as mapping the physical data \(\mathbf{u}\), viewed as a _function_ in \(X=L^{2}(\Omega,\mathbb{R}^{n})\), to a new function \(\epsilon\mathbf{u}:\Omega_{E}\rightarrow\mathbb{R}^{r}\). This allows the latent variables \(\mathbf{\ell}(\mathbf{x})=(\epsilon\mathbf{u})(\mathbf{x})\) corresponding to \(\mathbf{u}\) to be defined on the subdomain \(\Omega_{E}\).
It follows trivially from its definition that an interior encoder is an operator satisfying \(\epsilon:L^{2}(\Omega,\mathbb{R}^{n})\to L^{2}(\Omega_{E},\mathbb{R}^{r})\). However, the following result shows that the latent variable fields created using the encoder \(\epsilon\) are, in fact, continuous, uniformly bounded functions.
**Lemma 1**.: _Let \(e:L^{2}(E,\mathbb{R}^{n})\to\mathbb{R}^{r}\) be an interior encoder generating function and let \(\epsilon\) be defined by (7). Then \(\epsilon:L^{2}(\Omega,\mathbb{R}^{n})\to C(\Omega_{E},\mathbb{R}^{r})\)._
Proof.: See Appendix 8.1.
### Boundary Encoders
We describe how to construct a boundary encoder \(\epsilon^{\partial}\) which, given any distribution of boundary values \(\boldsymbol{b}\in Y\), transforms these into a distribution of latent variables \(\boldsymbol{\ell}=\epsilon^{\partial}\boldsymbol{b}\) on the boundary \(\partial\Omega\). The construction is analogous to that of the interior encoder in (7) but with the added complication of including information about the boundary geometry.
_Boundary encoder GFs:_ Given a fixed, compact set \(0\in E_{\partial}\subset\mathbb{R}^{d-1}\), a generating function for the boundary encoder is any continuous, compact, and generally nonlinear function
\[e^{\partial}:L^{2}(E_{\partial},\mathbb{R}^{n_{\partial}})\times L^{2}(E_{ \partial},\mathbb{R}^{d})\to\mathbb{R}^{r}. \tag{8}\]
This should be understood as a mapping
\[e^{\partial}:\left\{\begin{array}{c}\text{Section of}\\ \text{boundary data}\end{array}\right\}\times\left\{\begin{array}{c}\text{ Section of}\\ \text{boundary geometry}\end{array}\right\}\longmapsto\left\{\begin{array} []{c}\text{Latent}\\ \text{variables}\end{array}\right\}\]
which will be used repeatedly to define a semi-local boundary encoder operator.
_Definition of Boundary Encoders:_ We assume throughout that \(\partial\Omega\) is sufficiently regular that a normal vector \(\mathbf{n}(\mathbf{z})\in\mathbb{R}^{d}\) and a tangent plane \(T_{\mathbf{z}}\subset\mathbb{R}^{d-1}\) exist for every \(\mathbf{z}\in\partial\Omega\). Each tangent plane \(T_{\mathbf{z}}\) is defined in terms of a local coordinate system with origin at \(\mathbf{z}\) and whose basis vectors \((\mathbf{e}_{i}^{\mathbf{z}})_{i=1}^{d-1}\) are orthogonal to \(\mathbf{n}(\mathbf{z})\). We assume further that there exists a ball \(B_{R}(\mathbf{z})\subset\mathbb{R}^{d}\) of radius \(R\) such that the local projection \(P_{\mathbf{z}}:\partial\Omega\cap B_{R}(\mathbf{z})\to T_{\mathbf{z}}\) from the boundary to the tangent plane \(T_{\mathbf{z}}\) is one-to-one and, in addition, that there exists \(\tau>0\), independent of \(\mathbf{z}\in\partial\Omega\), such that
\[\{t_{i}\mathbf{e}_{i}^{\mathbf{z}}:0\leq t_{i}<\tau\}\subset P_{\mathbf{z}}(B_{R}(\mathbf{z}) \cap\partial\Omega)\subset T_{\mathbf{z}},\qquad\mathbf{z}\in\partial\Omega \tag{9}\]
and we also assume that
\[E_{\partial}\subseteq(0,\tau)^{d-1}. \tag{10}\]
Property (10) implies that we can view \(E_{\partial}\) as a subset of the tangent plane, while (9) then implies that a well-defined, continuous, inverse \(P_{\mathbf{z}}^{-1}:\{t_{i}\mathbf{e}_{i}^{\mathbf{z}}:\mathbf{t}\in E_{\partial}\}\to\partial\Omega\) exists. A schematic illustration of this construction is shown in Figure 3.
This technical construction allows us, for each \(\mathbf{z}\in\partial\Omega\), to define a function \(\mathbf{b_{z}}:E_{\partial}\to\mathbb{R}^{n_{\partial}}\), which depends on the boundary data local to \(\mathbf{z}\), by
\[\mathbf{b_{z}}(\mathbf{t}):=\mathbf{b}\left(P_{\mathbf{z}}^{-1}(t_{i}\mathbf{e}_{i}^{\mathbf{z}}) \right),\qquad\mathbf{t}=(t_{i})_{i=1}^{d-1}\in E_{\partial}, \tag{11}\]
Similarly, we can also define a function \(\mathbf{n_{z}}:E_{\partial}\to\mathbb{R}^{d}\) which describes the boundary geometry local to \(\mathbf{z}\) by
\[\mathbf{n_{z}}(\mathbf{t}):=\mathbf{n}\left(P_{\mathbf{z}}^{-1}(t_{i}\mathbf{e}_{i}^{\mathbf{z}}) \right),\qquad\mathbf{t}=(t_{i})_{i=1}^{d-1}\in E_{\partial}. \tag{12}\]
Next, using the boundary generating function \(e^{\partial}\), the corresponding boundary encoder is defined by
\[\left(\epsilon^{\partial}\mathbf{b}\right)(\mathbf{z}):=e^{\partial}(\mathbf{b_{z}},\mathbf{ n_{z}}),\qquad\mathbf{z}\in\partial\Omega. \tag{13}\]
Similar to the case of the interior encoder, since \(e^{\partial}\) is assumed to be compact and continuous, an analogous proof to that of Lemma 1 implies that
\[\epsilon^{\partial}:L^{2}(\partial\Omega,\mathbb{R}^{n_{\partial}})\to C( \partial\Omega,\mathbb{R}^{r}).\]
Hence, the boundary latent variables \(\mathbf{\ell}_{|_{\partial\Omega}}=\epsilon^{\partial}\mathbf{b}\) are continuous functions.
### Decoders
We describe how to construct a decoder mapping \(\delta\) such that, given a distribution of latent variables \(\ell\in X_{L}\), one can transform these into a distribution of physical variables \(\mathbf{u}=\delta\mathbf{\ell}\) on the domain interior.
_Decoder GFs:_ Given a compact, symmetric, set \(0\in D\subset\mathbb{R}^{d}\), a decoder generating function is any continuous, compact, and generally nonlinear mapping
\[d:\mathbb{R}^{r}\to C(D,\mathbb{R}^{n}). \tag{14}\]
This GF should be thought of as follows: given latent variables \(\boldsymbol{\ell}(\boldsymbol{y})\in\mathbb{R}^{r}\) at a point \(\boldsymbol{y}\in\Omega\), then \(d(\boldsymbol{\ell}(\boldsymbol{y}))(\boldsymbol{x})\) gives a local prediction of the physical variables \(\boldsymbol{u}(\boldsymbol{x})\in\mathbb{R}^{n}\) for any \(\boldsymbol{x}\in D_{\boldsymbol{y}}=\boldsymbol{y}+D\). The idea is to use this map repeatedly to build up a semi-local decoder operator.
We define decoders in two situations, which we refer to as partition decoders and averaging decoders.
_Definition of Partition Decoders:_ In this case, it is assumed that there exist points \(\{\boldsymbol{y}_{i}\}_{i=1}^{N_{d}}\subset\Omega\) such that the collections of sets \((\boldsymbol{y}_{i}+D)_{i=1}^{N_{d}}\) forms a disjoint partition of \(\Omega\). Now, let \(\boldsymbol{\ell}\in X_{L}\) be a latent variable distribution and let \(\boldsymbol{x}\in\Omega\). Due to the assumed partition property, there is a unique index \(j\in\{1,\ldots,N_{d}\}\) such that \(\boldsymbol{x}\in\boldsymbol{y}_{j}+D\). Consequently, \(\boldsymbol{x}-\boldsymbol{y}_{j}\in D\) and we define a decoded value \(\boldsymbol{u}_{\boldsymbol{\ell}}(\boldsymbol{x})\) by
\[(\delta\boldsymbol{\ell})(\boldsymbol{x}):=d(\boldsymbol{\ell}(\boldsymbol{y }_{j}))(\boldsymbol{x}-\boldsymbol{y}_{j}).\]
Consequently, we can view \(\delta\) as an operator \(\delta:X_{L}\to X\) and we can create an approximation to the physical variables by letting \(\boldsymbol{u}(\boldsymbol{x})=(\delta\boldsymbol{\ell})(\boldsymbol{x})\).
An advantage of using a partition decoder is that if \(D\) is chosen as a coarse discretization of \(\Omega\), then this can reduce the computational cost of implementing the decoder. However, there are two potential disadvantages of this choice. First, requiring the decoder to extrapolate from latent to physical variables over a large set \(D\) may introduce approximation errors into the solution. Second, while \(\delta\boldsymbol{\ell}\) is guaranteed to be square-integrable (as an element of \(X\)), there is no guarantee that the resulting physical solution \(\delta\boldsymbol{\ell}\) is smooth, or even continuous.
If such a property is desirable, then it is possible to instead implement the following notion of an averaging decoder.
_Definition of Averaging Decoders:_ Let \(\boldsymbol{\ell}\in X_{L}\) be a latent variable distribution and fix \(\boldsymbol{x}\in\Omega\). Now, for any point \(\boldsymbol{y}\) such that \(\boldsymbol{x}\in D_{\boldsymbol{y}}\), it follows from the definition of decoder GFs that a prediction of the physical variables at \(\boldsymbol{x}\) can be obtained using the function \(d(\boldsymbol{\ell}(\boldsymbol{y}))\). The idea is to average all such possible predictions. To simplify the resulting expression, note that since \(D\) is symmetric,
\[\boldsymbol{x}\in D_{\boldsymbol{y}}\Leftrightarrow\boldsymbol{x}-\boldsymbol {y}\in D\Leftrightarrow\boldsymbol{y}-\boldsymbol{x}\in D\Leftrightarrow \boldsymbol{y}\in D_{\boldsymbol{x}},\]
meaning that \(\boldsymbol{x}\) can be predicted from any point \(\boldsymbol{y}\in D_{\boldsymbol{x}}\cap\Omega\), as illustrated schematically in Figure 4, and that the value of the prediction from the point \(\boldsymbol{y}\) at \(\boldsymbol{x}\) is \(d(\boldsymbol{\ell}(\boldsymbol{y}))(\boldsymbol{x}-\boldsymbol{y})\).
Consequently, given a function \(\ell\in C(\Omega,\mathbb{R}^{r})\), we define a decoder mapping \(\delta\) by
\[(\delta\boldsymbol{\ell})(\boldsymbol{x}):=\frac{1}{|D_{\boldsymbol{x}}\cap \Omega|}\int_{D_{\boldsymbol{x}}\cap\Omega}d(\boldsymbol{\ell}(\boldsymbol{y} ))(\boldsymbol{x}-\boldsymbol{y})\,d\boldsymbol{y},\qquad\boldsymbol{x}\in\Omega. \tag{15}\]
The following lemma shows that continuous generating functions create decoders which themselves produce continuous functions on the entire domain \(\Omega\).
**Lemma 2**.: _Let \(d:\mathbb{R}^{r}\to C(D,\mathbb{R}^{n})\) be a decoder generating function and let \(\delta\) be defined by (15). Then \(\delta:C(\Omega,\mathbb{R}^{r})\to C(\Omega,\mathbb{R}^{n})\)._
Proof.: See Appendix 8.2.
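On a regular grid, the average (15) can be evaluated directly. The sketch below is our own illustration, assuming scalar physical variables (\(n=1\)), a square set \(D\) covering \((2m+1)^{2}\) grid cells, and a user-supplied decoder generating function `d_gf`:

```python
import numpy as np

def averaging_decoder(ell, d_gf, m):
    """Averaging decoder (15) on an H x W grid: ell is the (H, W, r) latent
    field; d_gf maps R^r to a (2m+1, 2m+1) local prediction centred at a
    point. Each grid point averages all predictions that cover it."""
    H, W, _ = ell.shape
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = d_gf(ell[i, j])                   # prediction on D_{(i,j)}
            i0, i1 = max(i - m, 0), min(i + m + 1, H)
            j0, j1 = max(j - m, 0), min(j + m + 1, W)
            acc[i0:i1, j0:j1] += patch[i0-(i-m):i1-(i-m), j0-(j-m):j1-(j-m)]
            cnt[i0:i1, j0:j1] += 1.0                  # |D_x ∩ Ω| counter
    return acc / cnt
```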
### Elliptic Systems
The final component required to define a SINN operator is an elliptic system. We simply refer to any symmetric, strictly positive definite, matrix
\[\mathcal{A}\in\mathbb{S}_{++}^{(rd)^{2}} \tag{16}\]
as a generating function from which an elliptic operator \(D_{\mathcal{A}}\) and the associated boundary value problem (4) can be defined. This then generates the solution operator \(\mathcal{E}_{\mathcal{A}}:Y_{L}\to X_{L}\).
## 3 Training generating functions
It is worth summarising the constructions developed in SS2. Given localisation sets \(G:=(D,E,E_{\partial})\) and local generating functions \((d,e,e^{\partial},\mathcal{A})\), one can use (7), (13) and (15) to define a globalisation mapping
\[\mathcal{G}_{G}:(d,e,e^{\partial},\mathcal{A})\mapsto(\delta,\epsilon,\epsilon ^{\partial},\mathcal{E}_{\mathcal{A}})\]
which outputs an interior encoder \(\epsilon\), boundary encoder \(\epsilon^{\partial}\), decoder \(\delta\) and elliptic system solution operator \(\mathcal{E}_{\mathcal{A}}\). These components can then be combined to give a SINN operator \(\mathcal{F}=\delta\circ\mathcal{E}_{\mathcal{A}}\circ\epsilon^{\partial}\) in (3). The aim now is to use the available data ensemble
\[\mathcal{U}=\left(\mathbf{u}_{j}(x),\mathbf{b}_{j}(\mathbf{z})\right)_{j=1}^{N_{T}},\qquad \mathbf{x}\in\Omega,\mathbf{z}\in\partial\Omega, \tag{17}\]
to identify optimal generating functions and, consequently, optimal SINN operators.
### The cost function for SINN training
Training will be posed as a minimisation problem, and a schematic for the cost function to be minimised is given by the four-stage process shown in Figure 5. In the following, it is assumed that \((\mathbf{u},\mathbf{b})\in\mathcal{U}\) is a snapshot selected from the training data ensemble.
**Stage 1:** Fix a set \(\{\mathbf{p}_{i}\}_{i=1}^{M}\subset\Omega_{E}\) of _training points_. At each training point, it is assumed that a _training patch_ exists, which is defined as the convex hull\({}^{3}\) of a set of points \(\{\mathbf{q}_{ij}\}_{j=1}^{N}\subset\Omega\cup\partial\Omega\) local to \(\mathbf{p}_{i}\), an example of which is shown in Figure 5 (a). Specifically, for each \(i\), we assume that there exist points satisfying
Footnote 3: The convex hull of a set of points is the smallest convex subset containing all such points.
* \(\mathbf{q}_{ij}\in\Omega_{E}\cup\partial\Omega\), for each \(j=1,\ldots,N\);
* \(\mathbf{p}_{i}\in\mathrm{int}(Q_{i})\) where \(Q_{i}=\mathrm{conv}\{\mathbf{q}_{ij}:j=1,\ldots,N\}\);
* \(\mathbf{q}_{ij}\in\partial Q_{i}\), for each \(j=1,\ldots,N\).
For any generating functions \(e,e^{\partial}\), assumption (i) implies that we can compute the latent variables \(\boldsymbol{\ell}_{i}:=e(\boldsymbol{u}_{\boldsymbol{p}_{i}})\) and
\[\boldsymbol{\ell}_{ij}:=\left\{\begin{array}{cc}e(\boldsymbol{u}_{\boldsymbol{q}_{ij}}),&\quad\text{if }\boldsymbol{q}_{ij}\in\Omega;\\ e^{\partial}(\boldsymbol{b}_{\boldsymbol{q}_{ij}},\boldsymbol{n}_{\boldsymbol{q}_{ij}}),&\quad\text{if }\boldsymbol{q}_{ij}\in\partial\Omega,\end{array}\right.\qquad j=1,\ldots,N.\]
**Stage 2:** Conditions \((ii)\) and \((iii)\) from Stage 1 imply that the convex hull \(Q_{i}\subset\mathbb{R}^{d}\) is a polytope and that each \(\boldsymbol{q}_{ij}\) is an exterior point on its boundary \(\partial Q_{i}\subset\mathbb{R}^{d-1}\). Linear interpolation can then be used to obtain a function \(\boldsymbol{f}_{i}\in C(\partial Q_{i},\mathbb{R}^{r})\) satisfying \(\boldsymbol{f}_{i}(\boldsymbol{q}_{ij})=\boldsymbol{\ell}_{ij}\) for each \(j=1,\ldots,N\). This process is indicated in Figure 5 (b).
Figure 5: Schematic overview of variables involved in a training run. (a) A training patch \(Q\), containing central point \(\boldsymbol{p}\), formed as the convex hull of exterior points \(\{\boldsymbol{q}_{i}\}\). (b) Latent variables computed to form a function \(\boldsymbol{f}:\partial Q\to\mathbb{R}^{r}\) on the training patch boundary. (c) Elliptic extension defines latent variables in \(Q\), in particular at \(\hat{\boldsymbol{\ell}}(\boldsymbol{p})\). (d) The decoder allows a prediction of the physical variables at \(\boldsymbol{p}\).

**Stage 3:** Given a generating matrix \(\mathcal{A}\in\mathbb{R}^{dr\times dr}\), define the associated linear elliptic operator \(D_{\mathcal{A}}\), and solve the boundary value problem
\[\begin{split} D_{\mathcal{A}}\hat{\boldsymbol{\ell}}& =0\qquad\text{in }Q_{i},\\ \hat{\boldsymbol{\ell}}_{|_{\partial Q_{i}}}&= \boldsymbol{f}_{i}\qquad\text{on }\partial Q_{i},\end{split} \tag{18}\]
on the training patch \(Q_{i}\). From this, a predicted value \(\hat{\boldsymbol{\ell}}_{i}:=\hat{\boldsymbol{\ell}}(\boldsymbol{p}_{i})\in \mathbb{R}^{r}\) can be obtained, as shown in Figure 5 (c). It is then natural to define the error function
\[\Psi_{1}((\boldsymbol{u},\boldsymbol{b}),(e,e^{\partial},\mathcal{A})):=\frac {1}{Mr}\sum_{i=1}^{M}\left\|\boldsymbol{\ell}_{i}-\hat{\boldsymbol{\ell}}_{i} \right\|_{2}^{2}.\]
which quantifies the error, averaged over all training points \(\{\boldsymbol{p}_{i}\}_{i=1}^{M}\), between the encoded latent variables computed using the generating function \(e\), and their predictions from the boundary \(\partial Q_{i}\) using the elliptic system \(\mathcal{E}_{\mathcal{A}}\).
**Stage 4:** Given a decoder generating function \(d:\mathbb{R}^{r}\to C(D,\mathbb{R}^{n})\), we create predictions for the physical variables, in the local sets \(D_{\boldsymbol{p}_{i}}\) of points close to the training points, using
\[\hat{\boldsymbol{u}}(\boldsymbol{x}):=d(\hat{\boldsymbol{\ell}}_{i})( \boldsymbol{x}-\boldsymbol{p}_{i}),\qquad\boldsymbol{x}\in D_{\boldsymbol{p}_ {i}}.\]
A second error function
\[\Psi_{2}((\boldsymbol{u},\boldsymbol{b}),(d,e,e^{\partial},\mathcal{A})):=\frac {1}{M|D|}\sum_{i=1}^{M}\int_{D}|\boldsymbol{u}(\boldsymbol{p}_{i}+\boldsymbol{ y})-\hat{\boldsymbol{u}}(\boldsymbol{p}_{i}+\boldsymbol{y})|_{2}^{2}d \boldsymbol{y}.\]
then quantifies whether the decoder generating function \(d\) is able to accurately recreate the physical data, averaged across a subset of \(\Omega\) local to the chosen training points.
The four-stage process described above allows us, for each data point \((\boldsymbol{u},\boldsymbol{b})\in\mathcal{U}\), to define two functions \(\Psi_{1},\Psi_{2}\) which quantify the error associated with a given quadruple of generating functions \(\mathcal{X}:=(d,e,e^{\partial},\mathcal{A})\). We therefore define the ensemble cost function by
\[\Psi\left(\mathcal{U},\mathcal{X}\right):=\frac{1}{N_{T}}\sum_{j=1}^{N_{T}} \left[\Psi_{1}((\boldsymbol{u}_{j},\boldsymbol{b}_{j}),\mathcal{X})+\alpha \Psi_{2}((\boldsymbol{u}_{j},\boldsymbol{b}_{j}),\mathcal{X})\right], \tag{19}\]
where \(\alpha>0\) is a weighting parameter.
## 4 Numerical Implementation
In §2 it was shown how the generating functions \((e,e^{\partial},d,\mathcal{A})\) act as building blocks for SINNs. The generating functions \((e,e^{\partial},d)\) are, however, still _infinite-dimensional_ nonlinear functionals. A feasible approach to SINN training must therefore parameterise the infinite-dimensional generating functions \((e,e^{\partial},d)\) by mappings between finite-dimensional spaces. The assumed semi-local structure of these operators will be used to achieve this in a simple manner, avoiding the need to impose any restrictive structure on the resulting approximation.
### Interior generating functions
The aim is to create an ensemble of possible mappings \(e:L^{2}(E,\mathbb{R}^{n})\to\mathbb{R}^{r}\) using only finite dimensional functionals. The first step is to partition \(E\subset\mathbb{R}^{d}\) into a union of \(N_{E}\) subsets,
\[E=\bigcup_{i=1}^{N_{E}}E_{i}. \tag{20}\]
Then, given any function \(\mathbf{u}\in L^{2}(E,\mathbb{R}^{n})\), we can create a vector \(\mathcal{D}\mathbf{u}\in\mathbb{R}^{n\times N_{E}}\) by taking the average of \(\mathbf{u}\) over each of the \(N_{E}\) subsets. Consequently, any _finite-dimensional_ mapping \(\tilde{e}:\mathbb{R}^{n\times N_{E}}\to\mathbb{R}^{r}\) can be used to define an encoder \(e:L^{2}(E,\mathbb{R}^{n})\to\mathbb{R}^{r}\) by forming the composition \(e=\tilde{e}\circ\mathcal{D}\).
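As a minimal illustration of the composition \(e=\tilde{e}\circ\mathcal{D}\) (our own sketch, with an assumed partition of \(E\) into \(k\times k\) equal square cells and hypothetical function names):

```python
import numpy as np

def D_average(u_patch, k):
    """Averaging map D: field values on E -> R^{N_E}, for a partition of E
    into k x k equal square cells (patch side lengths divisible by k)."""
    H, W = u_patch.shape
    cells = u_patch.reshape(k, H // k, k, W // k)
    return cells.mean(axis=(1, 3)).ravel()      # one average per cell

def encode(u_patch, e_tilde, k=3):
    """Interior encoder generating function e = e_tilde o D, where
    e_tilde is a trained map R^{N_E} -> R^r (e.g. an MLP)."""
    return e_tilde(D_average(u_patch, k))
```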
### Boundary generating functions
The aim is to create boundary generating function \(e^{\partial}:L^{2}(E_{\partial},\mathbb{R}^{n_{\partial}})\times L^{2}(E_{ \partial},\mathbb{R}^{d})\to\mathbb{R}^{r}\). Using the same idea as before, partition \(E_{\partial}\subset\mathbb{R}^{d-1}\) into \(N_{E_{\partial}}\) sets
\[E_{\partial}=\bigcup_{i=1}^{N_{E_{\partial}}}(E_{\partial})_{i}. \tag{21}\]
Then, for any functions \(\mathbf{f}\in L^{2}(E_{\partial},\mathbb{R}^{n_{\partial}})\) and \(\mathbf{\eta}\in L^{2}(E_{\partial},\mathbb{R}^{d})\) we can form vectors \(\mathcal{D}^{\partial}\mathbf{f}\in\mathbb{R}^{n_{\partial}\times N_{E_{\partial}}}\) and \(\mathcal{D}^{\partial}\mathbf{\eta}\in\mathbb{R}^{d\times N_{E_{\partial}}}\) by averaging these functions over each subset \((E_{\partial})_{i}\). Consequently, any finite-dimensional map \(\tilde{e}^{\partial}:\mathbb{R}^{(n_{\partial}+d)\times N_{E_{\partial}}}\to \mathbb{R}^{r}\) induces a boundary encoder \(e^{\partial}:L^{2}(E_{\partial},\mathbb{R}^{n_{\partial}})\times L^{2}(E_{ \partial},\mathbb{R}^{d})\to\mathbb{R}^{r}\) via the composition \(e^{\partial}:=\tilde{e}^{\partial}\circ\mathcal{D}^{\partial}\).
A summary of the constructions developed so-far is given in Table 1. It should be emphasised that due to the _semi-local_ role of the encoder generating function \(e\), it is not important to impose any particular structure on the partitions (20) or (21). The only requirement is to form a sufficiently resolved local approximation to the underlying data.
| | Interior Encoder | Boundary Encoder |
| --- | --- | --- |
| Semi-local | \(\epsilon:L^{2}(\Omega)\to C(\Omega)\), \(\mathbf{u}\overset{(7)}{\mapsto}e(\mathbf{u}_{\mathbf{x}})\) | \(\epsilon^{\partial}:L^{2}(\partial\Omega)\times L^{2}(\partial\Omega)\to C(\partial\Omega)\), \((\mathbf{b},\mathbf{n})\overset{(13)}{\mapsto}e^{\partial}(\mathbf{b}_{\mathbf{z}},\mathbf{n}_{\mathbf{z}})\) |
| GF | \(e:L^{2}(E)\to\mathbb{R}^{r}\), \(e=\tilde{e}\circ\mathcal{D}\) | \(e^{\partial}:L^{2}(E_{\partial})\times L^{2}(E_{\partial})\to\mathbb{R}^{r}\), \(e^{\partial}=\tilde{e}^{\partial}\circ\mathcal{D}^{\partial}\) |
| Finite Dim. | \(\tilde{e}:\mathbb{R}^{n\times N_{E}}\to\mathbb{R}^{r}\) | \(\tilde{e}^{\partial}:\mathbb{R}^{(n_{\partial}+d)\times N_{E_{\partial}}}\to\mathbb{R}^{r}\) |

Table 1: Structure of the Interior and Boundary Encoders.
### Decoder Generating Functions
We aim to create functions \(d:\mathbb{R}^{r}\to C(D,\mathbb{R}^{n})\). To do this, choose \(N_{D}\) points \(\{d_{i}\}_{i=1}^{N_{D}}\subset\mathbb{R}^{d}\) whose convex hull contains \(D\subset\mathbb{R}^{d}\), and let \(\mathcal{I}:\mathbb{R}^{N_{D}}\to C(D,\mathbb{R}^{n})\) be any interpolation operator, such as linear interpolation, which continuously extends known functional values at the points \(d_{i}\) to the whole of \(D\). Then, any mapping \(\tilde{d}:\mathbb{R}^{r}\to\mathbb{R}^{N_{D}}\) induces a decoder generating function via \(d:=\mathcal{I}\circ\tilde{d}\).
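One possible realisation of \(d=\mathcal{I}\circ\tilde{d}\), sketched under the assumption that the points \(\{d_{i}\}\) form a regular grid so that SciPy's linear grid interpolation can play the role of \(\mathcal{I}\):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def decoder_gf(latent, d_tilde, pts):
    """Decoder generating function d = I o d_tilde: d_tilde maps R^r to
    values at the N_D points {d_i} (assumed to form a regular grid), and
    linear interpolation extends them continuously to all of D."""
    k = len(pts)
    vals = np.asarray(d_tilde(latent)).reshape(k, k)
    return RegularGridInterpolator((pts, pts), vals, method="linear")

# usage: pts = np.linspace(-0.5, 0.5, 3) gives N_D = 9 points; the returned
# callable evaluates the local prediction u_hat(x) for x in D, e.g.
#   u_hat = decoder_gf(l, d_tilde, pts);  u_hat([[0.1, -0.2]])
```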
### Elliptic system solution on training patches
Effective training of interior and boundary encoders requires a choice of training patches which provide a good sample of both the domain interior and the domain boundary. For simplicity, we only describe the case of rectangular domains \(\Omega\). In this case, by considering training patches \(Q=\operatorname{conv}\{\boldsymbol{q}_{i}\}\subset\bar{\Omega}\) whose boundary \(\partial Q\) is also rectangular, these may be chosen to either lie entirely in the domain interior, or to coincide with a portion of the boundary \(\partial\Omega\).
After appropriate interpolation of the latent variables \(\ell(\boldsymbol{q}_{i})\) to the rectangular boundary \(\partial Q\), standard second-order finite difference schemes can be used to solve the required elliptic system (18) on each training patch to obtain the prediction \(\hat{\boldsymbol{\ell}}(\boldsymbol{p})\) at the training point \(\boldsymbol{p}\in Q\).
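For a scalar latent space (\(r=1\)), the patch solve (18) reduces to \(a\,\ell_{xx}+2b\,\ell_{xy}+c\,\ell_{yy}=0\) with \(\left(\begin{smallmatrix}a&b\\ b&c\end{smallmatrix}\right)\) positive definite. A minimal second-order finite-difference sketch of this solve (our own; a square patch and the function name are illustrative assumptions) is:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def solve_patch(a, b, c, g, N):
    """Solve a*l_xx + 2*b*l_xy + c*l_yy = 0 on the interior of an
    (N+2) x (N+2) grid with Dirichlet data given by the boundary entries
    of g; the grid-spacing factor h^2 cancels since the RHS is zero."""
    idx = lambda i, j: (i - 1) * N + (j - 1)
    M = lil_matrix((N * N, N * N))
    rhs = np.zeros(N * N)
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            k = idx(i, j)
            # central differences for l_xx, l_yy and the four-point cross
            # stencil for l_xy (weights b/2 on ++/--, -b/2 on +-/-+)
            stencil = {(i + 1, j): a, (i - 1, j): a,
                       (i, j + 1): c, (i, j - 1): c,
                       (i, j): -2.0 * (a + c),
                       (i + 1, j + 1): b / 2, (i - 1, j - 1): b / 2,
                       (i + 1, j - 1): -b / 2, (i - 1, j + 1): -b / 2}
            for (p, q), w in stencil.items():
                if 1 <= p <= N and 1 <= q <= N:
                    M[k, idx(p, q)] += w
                else:
                    rhs[k] -= w * g[p, q]   # known Dirichlet boundary value
    sol = np.asarray(g, dtype=float).copy()
    sol[1:N + 1, 1:N + 1] = spsolve(M.tocsr(), rhs).reshape(N, N)
    return sol
```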
### Cost function minimisation
Suppose that an ensemble \(\mathcal{U}\) of training data of the form (17) is available. Suppose that the latent space dimension \(r\), the partition dimensions \(N_{E},N_{E_{\partial}}\) for the encoders, and the decoder discretisation dimension \(N_{D}\) are all given. The problem of model training is now reduced to finding mappings \(\tilde{e}:\mathbb{R}^{n\times N_{E}}\to\mathbb{R}^{r}\), \(\tilde{e}^{\partial}:\mathbb{R}^{(n_{\partial}+d)\times N_{E_{\partial}}}\to\mathbb{R}^{r}\), \(\tilde{d}:\mathbb{R}^{r}\to\mathbb{R}^{N_{D}}\) and a matrix \(\mathcal{A}\in\mathbb{R}^{rd\times rd}\) for which the cost function (19) is minimised.
Each of the generating functions \(\tilde{e},\tilde{e}^{\partial}\) and \(d\) are assumed to be fully connected feed-forward neural networks. A neural network (NN) with \(L\) layers is a mapping \(\mathcal{N}:\mathbb{R}^{m_{I}}\to\mathbb{R}^{m_{O}}\) formed by repeated composition of a prescribed nonlinear _activation function_\(\phi:\mathbb{R}\to\mathbb{R}\) and a series of affine maps \(A_{i}:\boldsymbol{x}\mapsto W_{i}\boldsymbol{x}+\boldsymbol{b}_{i}\), for \(i=1,\ldots,L\). Here, \(W_{i}\in\mathbb{R}^{r_{i}\times r_{i-1}}\) and \(\boldsymbol{b}_{i}\in\mathbb{R}^{r_{i}}\) are the free parameters of the neural network, \(r_{i}\) are the number of neurons in the \(i^{\text{th}}\) layer, and \(r_{0}=m_{I},r_{L}=m_{O}\). The output of the \(i^{\text{th}}\) layer of the network is given by \(\mathcal{N}_{i}(\boldsymbol{x}):=W_{i}\phi(\mathcal{N}_{i-1}(\boldsymbol{x})) +\boldsymbol{b}_{i}\), \(i\geq 2\), where \(\phi\) acts component-wise. For an input \(\boldsymbol{x}\in\mathbb{R}^{m_{I}}\), and letting \(\mathcal{N}_{1}\boldsymbol{x}=W_{1}\boldsymbol{x}+\boldsymbol{b}_{1}\), the output of the neural network, after iterating its \(L\) layers, is \(\mathcal{N}(\boldsymbol{x})=\mathcal{N}_{L}(\boldsymbol{x})\). In this paper, we fix
\[\phi(x)=\operatorname{ReLU}(x)=\left\{\begin{array}{cc}x,&x\geq 0,\\ 0&x<0,\end{array}\right.\]
meaning that the tunable parameters of the considered neural networks are \(\Theta=(W_{i},\boldsymbol{b}_{i})_{i=1}^{L}\). We then write \(\mathcal{N}=\mathcal{N}_{\Theta}\) to emphasise this dependency.
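In code, each finite-dimensional generating function can be realised as such a network. A minimal Keras sketch is given below; the layer widths and the dimensions \(n=1\), \(d=2\), \(r=3\), \(N_{E}=9\), \(N_{E_{\partial}}=3\), \(N_{D}=9\) are illustrative assumptions, not the values used in the paper:

```python
import tensorflow as tf

def make_mlp(m_in, m_out, hidden=(64, 64)):
    """Fully connected ReLU network N_Theta : R^{m_in} -> R^{m_out};
    the hidden widths are illustrative choices."""
    layers = [tf.keras.layers.Dense(hidden[0], activation="relu",
                                    input_shape=(m_in,))]
    layers += [tf.keras.layers.Dense(w, activation="relu") for w in hidden[1:]]
    layers.append(tf.keras.layers.Dense(m_out))   # final affine map
    return tf.keras.Sequential(layers)

# the three finite-dimensional generating functions, for the assumed
# dimensions n = 1, d = 2, r = 3, N_E = 9, N_{E_d} = 3, N_D = 9
e_tilde  = make_mlp(1 * 9, 3)        # interior encoder e~
ed_tilde = make_mlp((1 + 2) * 3, 3)  # boundary encoder e~^d
d_tilde  = make_mlp(3, 9)            # decoder d~
```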
The aim of model training is to find a quadruple \(\mathcal{X}:=(e,e^{\partial},d,\mathcal{A})\) which minimises the modelling residual \(\Psi(\mathcal{U},\mathcal{X})\) defined in (19). To impose the positive definiteness constraint (16) on \(\mathcal{A}\), we use a logarithmic barrier function and instead solve the optimisation problem
\[\begin{split}\min_{\Theta,\mathcal{A}}&\Psi(\mathcal{U},\mathcal{X})-\rho\log\left(\det\mathcal{A}\right)\\ &\Theta=(\Theta_{e},\Theta_{e^{\partial}},\Theta_{d}),\\ &\mathcal{A}\in\mathbb{R}^{dr\times dr},\\ & e=\tilde{e}\circ\mathcal{D},\;e^{\partial}:=\tilde{e}^{\partial}\circ\mathcal{D}^{\partial},\;d:=\mathcal{I}\circ\tilde{d},\\ &\tilde{e}=\mathcal{N}_{\Theta_{e}},\;\tilde{e}^{\partial}=\mathcal{N}_{\Theta_{e^{\partial}}},\;\tilde{d}=\mathcal{N}_{\Theta_{d}},\\ &\rho>0,\alpha>0.\end{split} \tag{22}\]
The input and output dimensions of each neural network in (22) are prescribed by the encoder and decoder structure described in §4.1, §4.2, and §4.3. The number of layers and neurons in each neural network is problem dependent and will be specified in §5 for the particular numerical examples considered. The \(-\log\left(\det\mathcal{A}\right)\) term provides a _barrier_ in the sense that \(-\log\left(\det\mathcal{A}\right)\to\infty\) as \(\det\mathcal{A}\to 0\), which penalises sign-changes of the eigenvalues of \(\mathcal{A}\), and hence promotes positive definiteness.
The optimisation problem (22) is solved using a stochastic gradient descent approach. At each iteration, a random set of training patches of the form described in §4.4 is created. The gradient of the cost function (19), which depends on the chosen training patches, is then computed using automatic differentiation, and an appropriate step in the decision variables is taken. The Adam gradient-based optimisation algorithm implemented in the TensorFlow package is used to identify a local minimum. Since solutions to the elliptic system (4) are invariant under rescaling of \(\mathcal{A}\), after each iteration the elliptic decision variables are updated via \(\mathcal{A}\mapsto\mathcal{A}/(\det\mathcal{A})^{1/(rd)}\) to keep the determinant equal to unity. The process of random training patch selection and gradient-based weight updates is then iterated until the cost \(\Psi\) has converged to a local minimum.
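A schematic, eager-mode TensorFlow training step consistent with this procedure is sketched below. The function `cost_fn` (evaluating \(\Psi\) over a batch of training patches), the simplified symmetric parameterisation of \(\mathcal{A}\), and all hyper-parameter values are placeholders of ours, not the authors' implementation:

```python
import tensorflow as tf

r, d = 3, 2                                  # assumed latent/spatial dims
rd = r * d
P = tf.Variable(tf.random.normal((rd, rd)))  # unconstrained parameters
opt = tf.keras.optimizers.Adam(learning_rate=1e-3)
rho = 1e-2                                   # barrier weight (illustrative)

def A_matrix():
    return P + tf.transpose(P)               # symmetric by construction

def train_step(batch, net_vars, cost_fn):
    """One stochastic step on the barrier-augmented cost (22); assumes the
    barrier keeps det(A) > 0 throughout training."""
    trainables = list(net_vars) + [P]
    with tf.GradientTape() as tape:
        A = A_matrix()
        loss = cost_fn(batch, A) - rho * tf.math.log(tf.linalg.det(A))
    grads = tape.gradient(loss, trainables)
    opt.apply_gradients(zip(grads, trainables))
    # rescale so det(A) returns to unity: det(A/c) = det(A) / c^{rd}
    c = tf.linalg.det(A_matrix()) ** (1.0 / rd)
    P.assign(P / c)
    return loss
```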
## 5 Numerical Examples
We consider two nonlinear PDEs to demonstrate the performance of SINNs for solving boundary observation problems.
### A nonlinear heat equation
Let \(\Omega=[0,1]\times[0,1]\subset\mathbb{R}^{2}\) and suppose that \(u(x,y)\) satisfies the PDE
\[\begin{split}\nabla\cdot(e^{u}\nabla u)&=0,\qquad \text{in}\;\Omega,\\ u&=g,\qquad\text{on}\;\partial\Omega.\end{split} \tag{23}\]
This can be viewed as the steady solution of a nonlinear diffusion equation for which the diffusivity, \(e^{u(x,y)}\), depends on the local solution \(u(x,y)\). An alternative view is that (23) is equivalent to the nonlinear PDE \(\Delta u=-|\nabla u|^{2}\). The motivation for studying this example is that the linearised PDE is simply the Laplace equation \(\Delta u=0\), meaning that this example will facilitate a careful comparison of the SINN methodology to more traditional approaches which directly employ a system's linearised dynamics with an encoder/decoder architecture.
Data for training and testing is obtained by first creating \(10^{3}\) boundary functions \(g_{i}\in L^{2}(\partial\Omega,\mathbb{R})\), with \(N_{T}=900\) of these used for training and the remaining 100 boundary functions used for testing. The boundary functions \(g_{i}\) are created as random sums of sinusoids. To describe this process, for any boundary point \(\mathbf{z}=\mathbf{z}(x,y)\in\partial\Omega\), let \(\alpha(\mathbf{z})\) be the angle, measured anticlockwise, using a co-ordinate system with origin at the centre of the square domain \(\Omega\), namely
\[\alpha(\mathbf{z}(x,y))=\text{atan}2\left(x-0.5,y-0.5\right),\qquad\mathbf{z}\in \partial\Omega.\]
By sampling coefficients \(X_{n},Y_{n}\sim N(0,1)\) from standard normal distributions, we first let
\[\tilde{g}_{i}(\mathbf{z})=\sum_{n=1}^{4}\frac{X_{n}\sin(n\alpha(\mathbf{z}))+Y_{n}\cos (n\alpha(\mathbf{z}))}{n},\qquad\mathbf{z}\in\partial\Omega.\]
The cosine component creates a random phase shift, while higher-order sinusoids are moderately attenuated to favour lower-frequency data. Each sampled boundary function is then normalised to define the final boundary data function \(g_{i}=\tilde{g}_{i}/\Delta_{g}\), where \(\Delta_{g}\) is randomly sampled from a triangular distribution with pdf
\[f(\Delta_{g})=\begin{cases}\Delta_{g}/8,&\text{if }0\leq\Delta_{g}\leq 4,\\ 0,&\text{otherwise}.\end{cases}\]
This somewhat involved approach creates an ensemble of boundary functions, most of which have a large difference between their largest and smallest values.
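For reference, the sampling recipe above can be sketched as follows (the function name is ours; `rng.triangular(0, 4, 4)` has exactly the pdf \(f(\Delta_{g})=\Delta_{g}/8\) on \([0,4]\)):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_boundary_function(z):
    """Sample one boundary datum g_i at boundary points z (an (M, 2) array
    of (x, y) coordinates on the unit square), following the recipe above."""
    alpha = np.arctan2(z[:, 0] - 0.5, z[:, 1] - 0.5)   # atan2(x-0.5, y-0.5)
    g = np.zeros(len(z))
    for n in range(1, 5):
        X, Y = rng.standard_normal(2)
        g += (X * np.sin(n * alpha) + Y * np.cos(n * alpha)) / n
    delta = rng.triangular(0.0, 4.0, 4.0)              # pdf x/8 on [0, 4]
    return g / delta
```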
#### 5.1.1 Numerical solution and SINN implementation
For each boundary data function \(g_{i}\in L^{2}(\partial\Omega,\mathbb{R})\), the PDE (23) is solved on a uniform \(38\times 38\) grid using a Newton linearisation method with finite-difference equations to obtain solution data \(u_{i}\in L^{2}(\Omega,\mathbb{R})\). Given the computational domain discretisation, we view \(\Omega\) as the union of \(38\times 38\) square elements \(\mathcal{T}\) of side length \(1/38\), with any solution \(u(x,y)\) to (23) assumed to take a single value in each element.
To define interior encoders, we let \(E\) be a square, centred at \((0,0)\in\mathbb{R}^{2}\), and formed of the union of \(N_{E}=(2m_{e}+1)^{2}\) elements \(\mathcal{T}\) for some \(m_{e}\in\mathbb{N}\). In this way, the value of any interior latent variable at \((x,y)\in\Omega\) depends only on the solution values in the \((2m_{e}+1)^{2}\) elements symmetrically surrounding \((x,y)\in\Omega\).
Boundary latent variables \(\ell(\mathbf{z})\) are defined at the centre of each exterior element, using the construction described in Section 4.2, with \(E_{\partial}\) chosen to be a line segment, centred at \(0\in\mathbb{R}\), and formed of \(N_{E_{\partial}}=2m_{e}+1\) line segments whose lengths are equal to the side length of the tile \(\mathcal{T}\). Consequently, boundary latent variables \(\ell(\mathbf{z})\) depend on the boundary values \(g(\mathbf{z})\) on the \(2m_{e}+1\) tile boundaries symmetrically surrounding \(\mathbf{z}\in\partial\Omega\). Note that it has been assumed for simplicity that \(N_{E}=N_{E_{\partial}}^{2}\).
Decoders are created using the construction in §4.3 by letting \(D\) be a square, centred at \((0,0)\in\mathbb{R}^{2}\), formed of the union of \(N_{D}=(2m_{d}+1)^{2}\) tiles \(\mathcal{T}\). Consequently, decoders seek to use latent values at a point \((x,y)\in\Omega\) to predict the solution \(u(x,y)\) on the \(N_{D}\) tiles symmetrically surrounding \((x,y)\). Since the decoder output shape is square, the domain can be fully tiled by the decoder outputs, and we use partition decoders as described in §2.3.
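To make the element-patch construction concrete, the following numpy sketch (illustrative only; function and array names are ours) extracts the \((2m_{e}+1)\times(2m_{e}+1)\) interior input patches consumed by the encoder:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def encoder_inputs(u, m_e):
    """u: (38, 38) array of element values of one solution; returns one
    flattened (2*m_e+1)**2 patch per interior point at which the encoder
    is evaluated."""
    k = 2 * m_e + 1
    patches = sliding_window_view(u, (k, k))  # shape (38-k+1, 38-k+1, k, k)
    return patches.reshape(*patches.shape[:2], k * k)
```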
As previously described, the training was split into batches, where each batch needs to contain samples from both the interior and the domain boundary. Since the domain is square, a distinction will also be made between edge training patches, which do not include the corner points:
\[(x_{c},y_{c})\in\{(0,0),(0,1),(1,0),(1,1)\}\]
and corner training patches which include a corner point. Each batch contained 128 internal samples, 128 edge samples and 32 corner samples which were equally divided between the 4 sides of the square.
Finally, to implement the symmetric matrix \(\mathcal{A}\) as an optimisation variable, we define matrices \(P_{11},P_{12},P_{22}\in\mathbb{R}^{r\times r}\), let \(A_{ij}:=P_{ij}+P_{ij}^{\top}\), and form the block matrix \(\mathcal{A}=(A_{ij})_{i,j=1}^{2}\in\mathbb{R}^{2r\times 2r}\). The corresponding elliptic PDE for a latent variable function \(\mathbf{\ell}:\Omega\to\mathbb{R}^{r}\) is then given by
\[D_{\mathcal{A}}\mathbf{\ell}=A_{11}\mathbf{\ell}_{xx}+2A_{12}\mathbf{\ell}_{xy}+A_{22}\mathbf{\ell}_{yy}=0.\]
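A minimal numpy sketch of this parameterisation, with illustrative names, is:

```python
import numpy as np

def build_elliptic_coefficients(P11, P12, P22):
    """Form symmetric blocks A_ij = P_ij + P_ij^T from unconstrained
    r x r matrices, and assemble the symmetric 2r x 2r block matrix A."""
    A11, A12, A22 = (P + P.T for P in (P11, P12, P22))
    A = np.block([[A11, A12], [A12, A22]])
    return A11, A12, A22, A
```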
We consider the performance of SINN models for two types of boundary data. In the first, for each boundary data function \(g_{i}\), we only assume that the boundary encoder can access pure boundary data via
\[\mathbf{b}_{i}(\mathbf{z})=\left(g_{i}(\mathbf{z}),\mathbf{n}(\mathbf{z})\right),\qquad\mathbf{z}\in \partial\Omega, \tag{24}\]
where at the corners of the square domain, the boundary normal vector is defined diagonally in an outward-pointing manner (e.g. \((-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})\) for the north-west corner). Following this, we will also consider the case in which, for each boundary data function \(g_{i}\), the boundary derivative of its associated solution \(u_{i}\) is available to the encoder, by letting
\[\mathbf{b}_{i}(\mathbf{z})=\left(g_{i}(\mathbf{z}),\frac{\partial u_{i}}{\partial\mathbf{n}}(\mathbf{z}),\mathbf{n}(\mathbf{z})\right),\qquad\mathbf{z}\in\partial\Omega. \tag{25}\]
The code for generating these results is available at: [https://github.com/jh6220/SINNs-for-boundary-observvtion-problems.git](https://github.com/jh6220/SINNs-for-boundary-observvtion-problems.git)
#### 5.1.2 SINN performance with pure boundary data (24)
Table 2 shows, for different choices of the latent space dimension \(r\) and the encoder and decoder complexities \(N_{E},N_{D}\), the mean square error between the SINN solutions \(\mathcal{F}(\mathbf{b}_{i})\) for the \(N_{\text{test}}\) test boundary functions and the ground truth, namely
\[\mathcal{E}=\frac{1}{N_{\text{test}}}\sum_{i=1}^{N_{\text{test}}}\|u_{i}- \mathcal{F}(\mathbf{b}_{i})\|_{L^{2}(\Omega,\mathbb{R})}^{2}\]
In the above equation, integrals are interpreted as sums over the \(38^{2}\) square elements comprising the domain \(\Omega\). The results in Table 2 use SINNs in which each component (encoder, decoder, boundary encoder) is parameterised using a neural network with 5 hidden layers of 60 nodes.
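In discrete form, this metric might be evaluated as in the following sketch (the array shapes are our assumptions):

```python
import numpy as np

def ensemble_error(u_true, u_sinn):
    """Mean square L2 error over the test ensemble. Inputs have shape
    (N_test, 38, 38); each integral is a sum over the 38 x 38 square
    elements, each of area 1/38**2."""
    element_area = 1.0 / 38**2
    return np.mean(np.sum((u_true - u_sinn) ** 2, axis=(1, 2)) * element_area)
```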
It is evident that increasing either the latent dimension \(r\) or the encoder complexity \(N_{E}\) reduces the SINN error \(\mathcal{E}\). The former allows for a more complex latent space, while the latter effectively allows the encoder to access higher-order derivatives of the underlying data. For example, when \(N_{E}=3\) an encoder has access to nine local function values and therefore has the potential to access approximate second-order derivatives. Conversely, increasing the decoder dimension \(N_{D}\) increases the error \(\mathcal{E}\), which occurs due to the choice of partition decoder used for this example. In this case, increasing \(N_{D}\) corresponds to requiring the decoder to extrapolate to a larger set, naturally increasing \(\mathcal{E}\). However, if the number of degrees of freedom, i.e. \(r/N_{D}^{2}\), of a trained SINN is considered, it can be seen from the penultimate row of Table 2 that a higher value of \(N_{D}\) can possibly be viewed as computationally advantageous.
To discuss the influence of latent variable dimension \(r\), we consider the trained internal elliptic models \(D_{\mathcal{A}}\) in two cases. In the simplest case (\(r=1,N_{E}=1,N_{D}=1\)), the internal elliptic model is
\[1.608\,\ell_{xx}+0.001\,\ell_{xy}+1.613\,\ell_{yy}=0,\]
which is very close to the linearised PDE \(\Delta u=0\). On the other hand, when more modelling degrees of freedom are available for the case (\(r=3,N_{E}=3,N_{D}=1\)), the trained internal elliptic system
\[\left(\begin{smallmatrix}1.46&0.49&-1.14\\ 0.49&1.57&-0.72\\ -1.14&-0.72&0.36\end{smallmatrix}\right)\mathbf{\ell}_{xx}+\left(\begin{smallmatrix}0.01&-0.03&-0.02\\ -0.03&0.02&-0.02\\ -0.02&-0.02&0.06\end{smallmatrix}\right)\mathbf{\ell}_{xy}+\left(\begin{smallmatrix}0.92&1.66&-1.30\\ 1.66&0.74&-1.58\\ -1.30&-1.58&1.69\end{smallmatrix}\right)\mathbf{\ell}_{yy}=0\]
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \(r\) & 1 & 2 & 3 & 8 & 2 & 3 & 5 & 8 \\ \(N_{E}\) & 1 & 3 & 3 & 3 & 3 & 3 & 3 & 3 \\ \(N_{D}\) & 1 & 1 & 1 & 1 & 3 & 3 & 3 & 3 \\ \(r/N_{D}^{2}\) & 1 & 2 & 3 & 8 & 2/9 & 1/3 & 5/9 & 8/9 \\ \hline \(\mathcal{E}\) & \(4.54\times 10^{-3}\) & \(7.38\times 10^{-4}\) & \(5.3\times 10^{-4}\) & \(6.56\times 10^{-5}\) & \(2.18\times 10^{-2}\) & \(8.14\times 10^{-4}\) & \(6.74\times 10^{-4}\) & \(5.29\times 10^{-4}\) \\ \hline \end{tabular}
\end{table}
Table 2: Mean square error \(\mathcal{E}\) of SINNs using pure boundary data \(\mathbf{b}=(u_{|\partial\Omega},\mathbf{n}_{|\partial\Omega})\) for the nonlinear heat equation (23).
of the SINN is non-trivial, and the resulting error \(\mathcal{E}\) is an order of magnitude lower than that of the simplest model.
We next discuss the structure of the trained encoder and boundary encoders. For the simplest case (\(r=1,N_{E}=1,N_{D}=1\)), Figure 6 visualises, for a selected test data pair \((u,\mathbf{b})\), the SINN solution \(\mathcal{F}(\mathbf{b})\) computed using the boundary data in (e); the encoded latent variable \(\epsilon u\) in (a); and the reconstructed latent variable \((\mathcal{E}_{\mathcal{A}}\circ\epsilon^{\partial})\mathbf{b}\) in (b). The error fields of both the SINN and the internal latent variables are also shown in Figure 6 (c),(f), and these show small, but non-trivial, discrepancies.
Since the trained PDE is approximately equivalent to the linearised PDE, and the latent variable space is scalar-valued (\(r=1\)), it is not surprising that the encoded latent variable \(\epsilon u\) and the original data \(u\) are superficially similar. Model accuracy in this case is achieved purely from the nonlinearity of the boundary encoder and decoder. Indeed, Figure 7 (a-b) shows two slices of the SINN solution \(\mathcal{F}(\mathbf{b})\), the ground-truth data \(u\), and the solution obtained by extending the boundary data using just the linearised PDE \(\Delta u=0\). It is clear that the nonlinear SINN solution is substantially more accurate than the linearised model. To understand the precise way in which the nonlinear structure achieves this increase in accuracy, Figure 7 (c) shows both the true boundary data
Figure 6: Indicative example of SINN performance with (\(r=1,N_{E}=1,N_{D}=1\)). (a) shows the encoded latent variable \(\epsilon u\); (b) the solved latent variable \(\left(\mathcal{E}_{\mathcal{A}}\circ\epsilon^{\partial}\right)\mathbf{b}\); (c) latent variable error; (d) original data \(u\); (e) SINN solution; (f) SINN error. The SINN error for this data pair is comparable to the mean value reported in Table 2.
and the encoded data \(\epsilon^{\partial}\mathbf{b}\). The encoder \(\epsilon^{\partial}\) behaves asymmetrically in the sense that it attenuates positive boundary values and amplifies negative ones. The reason for this behaviour is that the local diffusion coefficient \(e^{u(x,y)}\) of the PDE (23) increases exponentially with \(u(x,y)\). Thus, positive boundary values imply diffusion on shorter length scales compared to negative boundary values. The boundary encoder's behaviour can now be interpreted as \(\epsilon^{\partial}\) reflecting this local nonlinear structure of the underlying nonlinear diffusion coefficient, and this improves the accuracy of the SINN operator \(\mathcal{F}\). While this behaviour of the boundary encoder is now interpretable, there is a persistent error if a simple model with \(r=1\) is used. As indicated in Table 2, this error can be avoided by increasing the dimension \(r\) of the latent space.
Finally, we seek to understand the geometric properties of the latent variables for cases in which \(r=3\) and \(r=5\). While the latent variables do not have a strict physical meaning, one can use sensitivity analysis to extract their underlying structure. In particular, Figure 8 shows
\[\frac{\partial\delta}{\partial\mathbf{\ell}}(\bar{\mathbf{\ell}})\]
where \(\bar{\mathbf{\ell}}\) is the mean value of the latent variables computed across the entire data ensemble. The gradient was computed numerically using a central finite-difference method, with a step size equal to the standard deviation of each latent dimension, computed from the data set in a similar manner to the mean.
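For concreteness, such a computation might look as follows, where `decoder` is a hypothetical callable implementing \(\delta\):

```python
import numpy as np

def decoder_sensitivity(decoder, ell_mean, ell_std):
    """Central finite differences of the decoder about the ensemble-mean
    latent value ell_mean, with per-dimension steps given by ell_std."""
    grads = []
    for i in range(ell_mean.size):
        step = np.zeros_like(ell_mean)
        step[i] = ell_std[i]
        grads.append(
            (decoder(ell_mean + step) - decoder(ell_mean - step)) / (2 * ell_std[i])
        )
    return np.stack(grads)  # one sensitivity field per latent dimension
```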
The results of the sensitivity analysis for two models with \((r=3,N_{E}=3,N_{D}=3)\) and \((r=5,N_{E}=3,N_{D}=3)\) are shown in Figure 8. It can be observed that the implied latent variables are spatially coherent. In the case \(r=3\), the structures are approximately orthogonal linear surfaces, while for \(r=5\) more complex, yet still coherent, spatial structures can be observed.
#### 5.1.3 SINN performance with extended boundary data (25)
We now consider the case in which extra boundary data, namely the normal boundary derivatives, are available to the boundary encoder. Table 3 shows
Figure 7: Indicative example of SINN performance for \((r=1,N_{E}=1,N_{D}=1)\). (a) shows solutions on a slice through the domain at \(x=0.5\); (b) the respective variables on the slice \(y=0.5\); (c) Original and encoded boundary data.
SINN errors \(\mathcal{E}\) for a variety of model choices. These follow a similar trend to the case of pure boundary conditions, although the final row of Table 3 indicates that the extra boundary information gives a consistent performance improvement, and that this improvement is more pronounced for higher values of \(N_{D}\) and of \(r\).
Figure 9 shows, for a test function \(u\) whose error is indicative of the mean values presented in Table 3, slices through the true solution and SINN solutions. Both slices at \(x=0.5\) and \(y=0.5\) in Figure 9 (a-b) show very good agreement of the SINN solution \(\mathcal{F}(\mathbf{b})\) with the true solution \(u\). Consider first the SINN models with \(N_{D}=1\), whose decoders are not required to extrapolate. It can be seen in both error plots of Figure 9 (c-d) that the pointwise error decreases uniformly as \(r\) increases. Next, consider the SINN models with \(N_{D}=3\), whose decoders must extrapolate to two adjacent elements. In this case, while the absolute error also decreases with increasing \(r\), the error plots are oscillatory as a result of the extrapolation error involved with the chosen partition decoder.
Finally, we discuss the influence of the underlying neural network complexity on the SINN error. For the case (\(r=5,N_{E}=3,N_{D}=3\)), Table 4 shows the average SINN error \(\mathcal{E}\) over the testing ensemble for different choices of neural network dimensions. Three different neural network structures are considered, in terms of the number of layers and nodes per layer, with each case applying
Figure 9: An indicative example of SINN performance for the parameters shown in Table 3. Results are plotted along slices through the domain where \(x=0.5\) (left) and \(y=0.5\) (right). The upper plots (a-b) show the test data \(u\) and SINN reconstructions \(\mathcal{F}(\mathbf{b})\); the lower plots (c–d) show errors \(u_{\text{error}}=u-\mathcal{F}(\mathbf{b})\).
to the encoder, boundary encoder, and decoder. It is interesting to note that \(\mathcal{E}\) decreases with neural network complexity, suggesting that model overfitting has not occurred for these parametric values. This highlights a potential benefit of the SINN methodology in that, due to the use of training patches, significant training information can be obtained from each training data pair \((\mathbf{u}_{i},\mathbf{b}_{i})\). Consequently, the SINN approach appears robust to overfitting, even when employing only a relatively small training data ensemble.
### Steady laminar fluid flow
Let \(\Omega=[0,1]\times[0,1]\subset\mathbb{R}^{2}\) and suppose that \(\mathbf{u}:\Omega\to\mathbb{R}^{2}\) and \(p:\Omega\to\mathbb{R}\) satisfy the steady, incompressible, Navier-Stokes equations
\[\mathbf{u}\cdot\nabla\mathbf{u}+\nabla p =\nu\Delta\mathbf{u}, \text{in}\ \Omega \tag{26}\] \[\nabla\cdot\mathbf{u} =0, \text{in}\ \Omega,\] \[\mathbf{u} =\mathbf{g}, \text{on}\ \partial\Omega.\]
Here, \(\mathbf{u}=(u_{x},u_{y})\) represents the velocity components of a fluid contained in the square domain \(\Omega\), \(p\) is the pressure of the fluid, and \(\nu>0\) is the kinematic viscosity of the fluid. In this example, \(\Omega\) should be thought of as a control volume in a larger fluid flow. The corresponding velocity boundary conditions must, by the divergence theorem and incompressibility, then satisfy
\[\int_{\partial\Omega}\mathbf{g}\cdot\mathbf{n}\,dS=\int_{\Omega}\nabla\cdot\mathbf{u}\,dV =0. \tag{27}\]
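One simple way to enforce the compatibility condition (27) on randomly sampled boundary velocities, assumed here for illustration rather than taken from the paper, is to subtract the mean normal flux:

```python
import numpy as np

def enforce_zero_net_flux(g, normals, ds):
    """g: (N, 2) sampled boundary velocities; normals: (N, 2) unit outward
    normals; ds: (N,) boundary segment lengths. Returns g corrected so that
    the discrete integral of g . n dS over the boundary vanishes."""
    net_flux = np.sum(np.einsum("ij,ij->i", g, normals) * ds)
    return g - (net_flux / np.sum(ds)) * normals
```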
The aim of this example is to study whether SINNs can reconstruct the fluid velocity in the domain \(\Omega\), using only boundary velocity data. This presents a more complex and challenging example than the nonlinear heat equation in the previous section, in view of the three-dimensional state space comprising two velocity components and the pressure. Furthermore, to increase the challenge presented by this example, we will not seek to exploit any pressure information, meaning that its influence must be automatically discovered during SINN training. We note that, although we do not consider such an example here, the SINN methodology could analogously be applied to a fluid flow example with
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(N_{\text{layers}}\) & 4 & 5 & 5 \\ \(N_{\text{nodes}}\) & 40 & 60 & 200 \\ \(N_{\text{parameters}}\) & \(5.53\times 10^{3}\) & \(1.55\times 10^{4}\) & \(1.64\times 10^{5}\) \\ \hline \(\mathcal{E}\) & \(1.03\times 10^{-3}\) & \(1.43\times 10^{-4}\) & \(2.34\times 10^{-5}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: SINN error \(\mathcal{E}\) as a function of the neural network parameters for the case \((r=5,N_{E}=3,N_{D}=3)\).
no-slip (i.e. Dirichlet) boundary conditions in which the interior fluid velocity must be recovered using only boundary pressure data.
A data ensemble is created using a similar method to that in §5.1. Here, \(2\times 10^{3}\) random, sinusoidal, boundary velocity functions \(\mathbf{g}_{i}=((u_{x})_{i},(u_{y})_{i})\) are generated, each of which also satisfies the constraint (27). For each boundary data function, the PDE (26) was solved using a SIMPLE algorithm on a staggered grid, with the domain \(\Omega\) discretised into \(30\times 30\) rectangular cells, with the pressure \(p\) computed at each cell centre, and the velocity components \(u_{x},u_{y}\) computed at the mid-points of the cell's sides. The chosen grid was non-uniform, with greater resolution near the boundaries to aid numerical convergence. The resulting "ground-truth" solutions were then re-sampled onto a \(38\times 38\) uniform grid with the same properties as in §4.2.
The generating functions and training loop were implemented in an equivalent manner to the example in §5.1. The only difference for this example is that encoders and boundary encoders take the two velocity components \(u_{x},u_{y}\) as inputs. We note again that pressure information is not available during training. Here, for brevity, we focus on the case in which extra boundary information \(\mathbf{b}=(\mathbf{u}_{|\partial\Omega}=\mathbf{g},\frac{\partial\mathbf{u}}{\partial\mathbf{n}}_{|\partial\Omega},\mathbf{n})\) is available. Each trained SINN model in this section has the same architecture for generating functions \(\epsilon\), \(\epsilon^{\partial}\) and uses a partition decoder \(\delta\). The models with \(N_{D}=1\) have 5 hidden layers of 60 nodes, and the models with \(N_{D}=3\) have 5 hidden layers of 200 nodes to compensate for the higher-dimensional decoder output.
Table 5 shows the testing errors \(\mathcal{E}\) of SINN models created with selected parameter values. The errors exhibit the same trends as observed for the nonlinear heat equation in §5.1, with modelling error decreasing with increasing latent space dimension \(r\) and encoder dimension \(N_{E}\), and with errors increasing as the extrapolation dimension \(N_{D}\) of the partition decoder is increased. The final row of Table 5 shows the increase in error if only standard boundary conditions \((\mathbf{u}_{|_{\partial\Omega}},\mathbf{n})\) are available in model training, although, for brevity, we do not discuss these results in detail here.
An indicative visualisation of the internal structure of two of the trained SINNs is given in Figure 10 for the cases \((r=6,N_{E}=3,N_{D}=1)\) and \((r=10,N_{E}=3,N_{D}=3)\). The true test function velocity components \(u_{x},u_{y}\) are shown in the top row. Both models give a solution to the boundary value
problem \(\mathcal{F}(\mathbf{b})\) which is a very good approximation to the true flow, as is shown in the right-hand column of Figure 10. The main noticeable difference is that the SINN model with more latent variables, \(r=10\), is able to more accurately capture the elongated vertical structure corresponding to values \(u_{y}<0\). Both models have coherent, yet non-trivial, latent variables, which are shown in the middle two columns of Figure 10. The additional degrees of freedom enjoyed by the SINN with \(r=10\) allow for more accurate reconstruction of the finer-scale flow features than the simpler model with \(r=6\).
To look more closely at SINN performance for boundary observation of the Navier-Stokes PDE (26), Figure 11 shows, for one indicative example, slices through the SINN solutions \(\mathcal{F}(\mathbf{b})\) at \(x=0.5\) and \(y=0.5\) for all models considered in Table 5. All models exhibit a good approximation to the main trends of the true data, with approximation error decreasing with increasing latent space dimension \(r\). For this more challenging example, modelling errors have
Figure 10: An indicative example of SINN performance and latent variables when solving the boundary value problem (26). The true solution, \(\mathbf{u}\), is shown in the top-right corner, and its SINN approximations and error fields for the two indicated models shown in the right-hand column. The encoded latent variables \(\epsilon\mathbf{u}\) and solved latent variables \((\mathcal{E}_{\mathcal{A}}\circ\epsilon^{\partial})(\mathbf{b})\) are shown in the left-hand and middle columns, respectively.
not yet converged for the parameter values shown. However, as can be observed from the reconstructed flow fields shown in Figure 10, all SINN models are able to reconstruct a very close approximation to the dominant internal vortex structures in the flow domain. Such performance, if replicated in experimental applications, for example, would be of great practical use.
Finally, we perform a latent variable sensitivity analysis for the two trained models with \((r=8,N_{E}=3,N_{D}=3)\) and \((r=10,N_{E}=3,N_{D}=3)\). The decoder sensitivities \(\frac{\partial\delta}{\partial\boldsymbol{\ell}_{i}}(\boldsymbol{\overline{ \ell}})\) each have two components which correspond to the two velocity components in the \(x\) and \(y\) directions. These sensitivities are shown in Figure 12. The model with \(r=8\) exhibits approximately planar sensitivities about the mean latent variable value \(\boldsymbol{\overline{\ell}}\), suggesting that the \(r=8\) latent variables are being used by the trained model to enable planar perturbations to the solution in four directions for each of the two velocity components. Conversely, the model with \(r=10\) clearly exhibits nonlinear, yet spatially coherent, latent-variable sensitivities which appear to enable a more accurate solution to the underlying boundary observation problem.
Figure 11: An indicative example of SINN performance for the parameters shown in Table 5. Results are plotted along slices of the domain where \(x=0.5\) (left) and \(y=0.5\) (right), with the original “ground truth” solution to the PDE (26) also shown.
Figure 12: Latent variable sensitivities \((\nabla\delta)(\bar{\mathbf{\ell}})\) at the mean ensemble latent variable value \(\bar{\mathbf{\ell}}\).
## 6 Discussion
The numerical examples discussed in §5 suggest that SINNs are able to provide very good approximations to the nonlinear solution operators
\[\mathcal{F}:L^{2}(\partial\Omega,\mathbb{R}^{n_{\partial}})\to L^{2}(\Omega, \mathbb{R}^{n})\]
for nonlinear boundary observation problems of the form (1). The power of the SINN approach is that, via training only finite-dimensional neural networks, it provides nonlinear infinite-dimensional operators \(\mathcal{F}\) which can give approximate solutions \(\mathcal{F}(\boldsymbol{b})\in L^{2}(\Omega,\mathbb{R}^{n})\) to a boundary observation problem for _any given_ boundary data function \(\boldsymbol{b}\in L^{2}(\partial\Omega,\mathbb{R}^{n_{\partial}})\). This represents a step-change in utility in comparison to data-driven approaches in which model training, and hence also the trained models, directly depends on a fixed instance of the boundary data.
From the viewpoint of operator identification, the fact that ensemble errors in the range of \(\mathcal{O}(10^{-3})\) to \(\mathcal{O}(10^{-5})\) can be obtained by SINNs with very few latent variables (\(3\leq r\leq 10\)) indicates that the approach has strong potential to be successfully applied to more complex examples. This is supported by the evidence, discussed in §5.1, that the semi-local structure of the SINN training algorithm endows the approach with significant robustness against overfitting. A further advantage of our data-driven approach is that SINNs can be obtained regardless of whether the available boundary data renders the underlying PDE boundary observation problem over- or under-determined. The data-driven operator \(\mathcal{F}\) merely attempts to find an optimal approximation to the PDE solution, given the available training ensemble. We also emphasise that SINN training does not require knowledge of the underlying PDE, meaning that our method can be applied to experimental data and subsequently used to solve unseen boundary conditions.
A natural question is whether the approximation error will converge to zero with increased complexity of the trained SINN operator (e.g., as \(r,N_{E}\to\infty\), or with the complexity of the underlying neural networks). Since our aim is to identify operators which solve nonlinear boundary observation problems which may have no closed-form solutions, and in view of the fact that linear elliptic systems are used as the central non-local building blocks of SINNs, it is unlikely that such convergence will hold in general. However, even without such a property, the numerical evidence presented in this paper suggests that SINNs can provide very good, low-complexity approximations to nonlinear boundary observation problems which, furthermore, capture key physical features of the solution.
Viewing performance from the standpoint of approximation accuracy is not out of line with the motivation for many well-established approaches to the simulation of complex nonlinear PDEs. For example, in fluid mechanics, if one numerically solves the Reynolds-Averaged Navier-Stokes (RANS) equations, there is no expectation that the solution will agree with a fully resolved direct numerical simulation (DNS) of the governing Navier-Stokes equations. However, in many practical cases, a RANS solution may provide sufficient physical insight
at a substantially reduced computational cost compared to DNS. A similar philosophy applies to more accurate, yet still approximate, numerical approaches such as Large Eddy Simulation (LES). From the perspective of creating low-cost SINN models, it should be noted that there is technically no limit to using a significantly larger extrapolation dimension \(N_{D}\) than those used in the numerical examples considered in this paper. Furthermore, even if a SINN is trained using a finely-resolved spatial grid, the fact that an elliptic PDE is identified implies that the SINN operator \(\mathcal{F}=\delta\circ\mathcal{E}_{\mathcal{A}}\circ\epsilon^{\partial}\) can be implemented using an arbitrary-resolution, and potentially low-cost, solution to the central elliptic system \(\mathcal{D}_{\mathcal{A}}\boldsymbol{\ell}=0\).
The boundary encoding and internal decoding can be computationally expensive if large neural networks are used but, unlike the latent elliptic system, this computation is applied to each section of the domain independently and is trivial to parallelise. Given a discretisation grid of \(n_{i}\) internal points, the computational complexity of decoding scales linearly as \(\mathcal{O}(n_{i})\), and the boundary encoding scales even more favourably, since a typical choice of the number of boundary points \(n_{b}\) is lower (e.g., for a 2D domain it may be assumed to scale as \(n_{b}\propto\sqrt{n_{i}}\)). On the other hand, the latent elliptic system requires solving a linear system with \(n_{i}\times r\) variables. The computational complexity of this step depends on the specific algorithm, with direct methods such as Cholesky decomposition scaling as \(\mathcal{O}((n_{i}r)^{3})\). Since the SINN method only provides an approximate solution, this precision is not required, and so an iterative method with a smaller per-iteration complexity could be used. This cost is nevertheless significantly higher than that of the encoding and decoding steps, so at large enough \(n_{i}\) the elliptic solve would dominate the computational cost. In summary, SINNs scale very well for large neural networks and, as shown in §5.1, this can significantly increase modelling accuracy.
Finally, we comment on the computational cost of SINN training. A potential bottleneck is that, for each update to the elliptic system coefficients \(\mathcal{A}\), one must repeatedly solve a new elliptic system of PDEs on each training patch that is used to build up the cost function \(\Psi(\mathcal{U},\mathcal{X})\). The cost of evaluating the cost function can be controlled by using a fixed number of training patch geometries \(Q\), and by parallelising the elliptic system solutions on each training patch. To give an example, suppose that each training patch is as shown in Figure 5 and requires the solution of an elliptic system at \(n_{p}\) internal points \(\boldsymbol{p}_{i}\) in the training patch. Solution of this elliptic system on the training patch domain involves solving a linear system:
\[A(\mathcal{A},\boldsymbol{p}_{i},\boldsymbol{q}_{j})\boldsymbol{x}= \boldsymbol{b}(\mathcal{A},\boldsymbol{p}_{i},\boldsymbol{q}_{j},\boldsymbol{ \ell}(\boldsymbol{q}_{j}))\]
where \(A\in\mathbb{R}^{(n_{p}r)\times(n_{p}r)}\) is a symmetric positive definite matrix which depends linearly on the coefficients of \(\mathcal{A}\), on the boundary points \(\boldsymbol{q}_{j}\), and on the interior points \(\boldsymbol{p}_{i}\). The vector \(\boldsymbol{b}\in\mathbb{R}^{n_{p}r}\) depends on \(\mathcal{A}\), \(\boldsymbol{q}_{j}\), \(\boldsymbol{p}_{i}\), and the encoded boundary values \(\boldsymbol{\ell}(\boldsymbol{q}_{j})\). Solving the above linear system can be performed in two steps: forming the Cholesky decomposition \(A=LL^{\top}\), then using \(L\) to solve the linear system via \(\boldsymbol{x}=(LL^{\top})^{-1}\boldsymbol{b}\). If a common training patch geometry is used, the
first step only needs to be computed once per training iterate, with the matrix \(L\) stored in memory. This computation can be performed in parallel across all training patches required to compute \(\Psi\). In a similar manner, any required evaluations of the encoder and decoder can also be parallelised. These steps imply that very efficient training of SINNs is possible.
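As a concrete illustration of this factor-once, solve-many pattern, consider the following scipy sketch (argument names are illustrative):

```python
from scipy.linalg import cho_factor, cho_solve

def solve_patch_systems(A_patch, encoded_boundary_rhs):
    """Solve A x = b for every training patch sharing one patch geometry.
    The Cholesky factor A = L L^T is computed once per training iterate and
    reused across all right-hand sides b(ell)."""
    factor = cho_factor(A_patch)
    return [cho_solve(factor, b) for b in encoded_boundary_rhs]
```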
## 7 Conclusions
We have presented a data-driven method for solving boundary observation problems which identifies a solution operator that can approximate the PDE solution for arbitrary boundary data. The constructed models, referred to here as Structure Informed Neural Networks (SINNs), embed an elliptic system into a classical encoder/decoder neural-network architecture for reduced-order modelling. The use of elliptic systems, which are well-posed with respect to the global passage of problem data, enables very efficient model training to be performed on small patches of the underlying domain. Numerical evidence suggests that this endows the proposed SINN methodology with significant robustness to overfitting.
The methodology presented in this paper can be used to solve boundary observation problems which are both time-independent and have boundary data which is known on the entire boundary. Future research will investigate the possibility of extending the SINN methodology to handle cases in which only partial boundary data is available for training or testing, the potential for SINN operators to be embedded in time-dependent algorithms for boundary observation, and the application of the developed methodology to more complex domain geometries.
## 8 Appendix
We present the proofs of the regularity results stated in the paper.
### Proof of Lemma 1
_Regularity of \(\epsilon\mathbf{u}\):_ Given \(\mathbf{x},\mathbf{y}\in\Omega_{E}\), note that
\[|(\epsilon\mathbf{u})(\mathbf{x})-(\epsilon\mathbf{u})(\mathbf{y})|=|e(\mathbf{u_{x}})-e(\mathbf{u_{y }})|. \tag{28}\]
Now, if \(\mathbf{x}\to\mathbf{y}\) in \(\Omega_{E}\), then by a standard approximation argument, \(\|\mathbf{u_{x}}-\mathbf{u_{y}}\|_{L^{2}(E)}\to 0\). It then follows from (28) and the assumed continuity of the generating function \(e\) that \((\epsilon\mathbf{u})(\mathbf{x})\to(\epsilon\mathbf{u})(\mathbf{y})\), meaning that \((\epsilon\mathbf{u}):\Omega_{E}\to\mathbb{R}^{r}\) is continuous.
To prove uniform boundedness of \(\epsilon\mathbf{u}\), note that for any \(\mathbf{u}\in L^{2}(\Omega)\),
\[\sup_{x\in\Omega_{E}}\|\mathbf{u_{x}}\|_{L^{2}(E)}^{2}=\sup_{x\in\Omega_{E}}\int_ {E}|u(\mathbf{x}+\mathbf{y})|^{2}d\mathbf{y}\leq\|\mathbf{u}\|_{L^{2}(\Omega)}^{2}\]
Since \(e:L^{2}(E)\to\mathbb{R}^{r}\) is compact, it maps bounded subsets of \(L^{2}(E)\) to bounded subsets of \(\mathbb{R}^{r}\). Hence,
\[\sup_{x\in\Omega_{E}}|(\epsilon\mathbf{u})(\mathbf{x})|=\sup_{x\in\Omega_{E}}|e(\mathbf{u}_{ \mathbf{x}})|_{2}<\infty.\]
Consequently, \(\epsilon\mathbf{u}\in C(\Omega_{E},\mathbb{R}^{r})\).
### Proof of Lemma 2
Let \(\mathbf{\ell}\in C(\Omega,\mathbb{R}^{r})\) and let \(\epsilon>0\). Let \(\mathbf{x},\mathbf{z}\in\Omega\) and define sets \(D_{\mathbf{x}\mathbf{z}}:=(D_{\mathbf{x}}\cap D_{\mathbf{z}}\cap\Omega)\) and
\[D_{\mathbf{x}\setminus\mathbf{z}}=(D_{\mathbf{x}}\cap\Omega)\setminus D_{\mathbf{x}\mathbf{z}},\quad D_{\mathbf{z}\setminus\mathbf{x}}=(D_{\mathbf{z}}\cap\Omega)\setminus D_{\mathbf{x}\mathbf{z}}\]
and set volumes by
\[c_{\mathbf{x}}=|D_{\mathbf{x}}\cap\Omega|,\quad c_{\mathbf{z}}=|D_{\mathbf{z}}\cap\Omega|.\]
For convenience, we also let \(f(\cdot):=(\delta\mathbf{\ell})(\cdot)\) and \(g_{\mathbf{y}}(\cdot):=d(\ell(\mathbf{y}))(\cdot)\). Then,
\[|f(\mathbf{x})-f(\mathbf{z})| =\left|\frac{1}{c_{\mathbf{x}}}\int_{D_{\mathbf{x}}}g_{\mathbf{y}}(\mathbf{x}- \mathbf{y})d\mathbf{y}-\frac{1}{c_{\mathbf{z}}}\int_{D_{\mathbf{z}}}g_{\mathbf{y}}(\mathbf{z}-\mathbf{y}) d\mathbf{y}\right|\] \[\leq\underbrace{\frac{1}{c_{\mathbf{x}}}\int_{D_{\mathbf{x}\setminus\bm {z}}}|g_{\mathbf{y}}(\mathbf{x}-\mathbf{y})|d\mathbf{y}+\frac{1}{c_{\mathbf{z}}}\int_{D_{\mathbf{z} \setminus\mathbf{x}}}|g_{\mathbf{y}}(\mathbf{z}-\mathbf{y})|d\mathbf{y}}_{:=I_{1}}\] \[\quad+\underbrace{\frac{1}{c_{\mathbf{x}}}\int_{D_{\mathbf{x}\mathbf{z}}}|g_ {\mathbf{y}}(\mathbf{x}-\mathbf{y})-g_{\mathbf{y}}(\mathbf{z}-\mathbf{y})|\,d\mathbf{y}}_{:=I_{2}}\] \[\quad+\underbrace{\left|\frac{1}{c_{\mathbf{x}}}-\frac{1}{c_{\mathbf{z}} }\right|\int_{D_{\mathbf{x}\mathbf{z}}}|g_{\mathbf{y}}(\mathbf{z}-\mathbf{y})|d\mathbf{y}}_{:=I_{3}}\]
Now, since \(\mathbf{\ell}\in C(\Omega,\mathbb{R}^{r})\), it follows that \(\mathbf{\ell}(\Omega)\subset\mathbb{R}^{r}\) is bounded. Then, using compactness of the decoder generating function \(d\), it follows that \(\{d(\mathbf{\ell}(\mathbf{y}))\}_{\mathbf{y}\in\Omega}=\{g_{\mathbf{y}}\}_{\mathbf{y}\in\Omega}\) is a bounded subset of \(C(\Omega,\mathbb{R}^{n})\). Hence, there exists \(K>0\) such that
\[\sup_{\mathbf{y}\in\Omega}\|g_{\mathbf{y}}\|_{C(\Omega,\mathbb{R}^{n})}\leq K<\infty. \tag{29}\]
Then, since \(|D_{\mathbf{x}\setminus\mathbf{z}}|,|D_{\mathbf{z}\setminus\mathbf{x}}|\to 0\) and \(c_{\mathbf{x}}-c_{\mathbf{z}}\to 0\) as \(\mathbf{x}\to\mathbf{z}\), it follows that there exists \(\delta_{1}>0\) such that
\[I_{1}+I_{3}\leq K\left(|D_{\mathbf{x}\setminus\mathbf{z}}|+|D_{\mathbf{z}\setminus\mathbf{x}} |+|D_{\mathbf{x}\mathbf{z}}|\left|\frac{1}{c_{\mathbf{x}}}-\frac{1}{c_{\mathbf{z}}}\right| \right)<\frac{\epsilon}{2}.\]
whenever \(|\mathbf{x}-\mathbf{z}|<\delta_{1}\).
Finally, since \(d\) is continuous and \(\boldsymbol{\ell}(\tilde{\Omega})\subset\mathbb{R}^{r}\) is compact, it follows that \(\mathcal{F}:=\{g_{\boldsymbol{y}}\}_{\boldsymbol{y}\in\tilde{\Omega}}\) is a compact subset of \(C(\Omega,\mathbb{R}^{n})\). Consequently, the set of functions \(\mathcal{F}\) is equicontinuous and, hence, there exists \(\delta_{2}>0\) such that \(|g_{\boldsymbol{y}}(\boldsymbol{x}-\boldsymbol{y})-g_{\boldsymbol{y}}(\boldsymbol{z}-\boldsymbol{y})|<\epsilon/2\), for any \(\boldsymbol{y}\in\Omega\), whenever \(|\boldsymbol{x}-\boldsymbol{z}|<\delta_{2}\). Hence,
\[|f(\boldsymbol{x})-f(\boldsymbol{z})|\leq I_{1}+I_{2}+I_{3}\leq\epsilon\]
whenever \(|\boldsymbol{x}-\boldsymbol{z}|<\min\{\delta_{1},\delta_{2}\}\), meaning that \(f=\delta\boldsymbol{\ell}\) is continuous. That \(\delta\boldsymbol{\ell}\in C(\Omega,\mathbb{R}^{n})\) then follows from the upper bound (29).
|
2302.06751 | OpenHLS: High-Level Synthesis for Low-Latency Deep Neural Networks for
Experimental Science | In many experiment-driven scientific domains, such as high-energy physics,
material science, and cosmology, high data rate experiments impose hard
constraints on data acquisition systems: collected data must either be
indiscriminately stored for post-processing and analysis, thereby necessitating
large storage capacity, or accurately filtered in real-time, thereby
necessitating low-latency processing. Deep neural networks, effective in other
filtering tasks, have not been widely employed in such data acquisition
systems, due to design and deployment difficulties. We present an open source,
lightweight, compiler framework, without any proprietary dependencies, OpenHLS,
based on high-level synthesis techniques, for translating high-level
representations of deep neural networks to low-level representations, suitable
for deployment to near-sensor devices such as field-programmable gate arrays.
We evaluate OpenHLS on various workloads and present a case-study
implementation of a deep neural network for Bragg peak detection in the context
of high-energy diffraction microscopy. We show OpenHLS is able to produce an
implementation of the network with a throughput 4.8 $\mu$s/sample, which is
approximately a 4$\times$ improvement over the existing implementation | Maksim Levental, Arham Khan, Ryan Chard, Kazutomo Yoshii, Kyle Chard, Ian Foster | 2023-02-13T23:25:55Z | http://arxiv.org/abs/2302.06751v4 | # OpenHLS: High-Level Synthesis for Low-Latency Deep Neural Networks for Experimental Science
###### Abstract.
In many experiment-driven scientific domains, such as high-energy physics, material science, and cosmology, high data rate experiments impose hard constraints on data acquisition systems: collected data must either be indiscriminately stored for post-processing and analysis, thereby necessitating large storage capacity, or accurately filtered in real-time, thereby necessitating low-latency processing. Deep neural networks, effective in other filtering tasks, have not been widely employed in such data acquisition systems, due to design and deployment difficulties. We present an open source, lightweight, compiler framework, without any proprietary dependencies, OpenHLS, based on high-level synthesis techniques, for translating high-level representations of deep neural networks to low-level representations, suitable for deployment to near-sensor devices such as field-programmable gate arrays. We evaluate OpenHLS on various workloads and present a case-study implementation of a deep neural network for Bragg peak detection in the context of high-energy diffraction microscopy. We show OpenHLS is able to produce an implementation of the network with a throughput of 4.8 \(\mu\)s/sample, which is approximately a 4\(\times\) improvement over the existing implementation.
[4], MXNet [11]), which abstract all implementation details, thereby making portability of model architectures to unsupported hardware platforms (e.g., FPGAs and ASICs) close to non-existent (barring almost wholesale reimplementations of the frameworks).
These three barriers demand a solution that can translate a high-level DNN representation to a low-level representation, suitable for FPGA deployment, while simultaneously optimizing resource usage and minimizing latency. In general, the task of _lowering_ high-level representations of programs to low-level representations is the domain of a compiler. Similarly, the task of _synthesizing_ a _register-transfer level_ (RTL) _design_, rendered in a _hardware description language_ (HDL), from a program, is the domain of high-level synthesis (HLS) [34] tools. Existing HLS tools [10; 18; 48] struggle to perform needed optimizations in reasonable amounts of time (see Section 2.2) despite, often, bundling robust optimizing compilers.
Recently, deep learning compilers (e.g., TVM [12], MLIR [26], and Glow [42]) have demonstrated the ability to reduce dramatically inference DNN latencies [28], training times [49], and memory usage [13]. These compilers function by extracting intermediate-level representations (IRs) of the DNNs from the representations produced by the frameworks, and performing various optimizations (e.g., kernel fusion [6], vectorization [32], and memory planning [13]) on those IRs. The highly optimized IR is then used to generate code for various target hardware platforms. Given the successes of these compilers, it is natural to wonder whether they can be adapted to the task of sufficiently optimizing a DNN such that it might be synthesized to RTL, for deployment to FPGA.
In this paper, we present OpenHLS, an open-source1, lightweight compiler and HLS framework that can translate DNNs defined as PyTorch models to FPGA-compatible implementations. OpenHLS uses a combination of compiler and HLS techniques to compile the entire DNN into fully scheduled RTL, thereby eliminating all synchronization overheads and achieving low latency. OpenHLS is general and supports a wide range of DNN layer types, and thus a wide range of DNNs. To the best of our knowledge, OpenHLS is the first HLS framework that enables the use of DNNs, free of a dependence on expensive and opaque proprietary HLS tools, for science experiments that demand low-latency inference. In summary, our specific contributions include:
Footnote 1: Available at [https://github.com/makkeventral/openhls](https://github.com/makkeventral/openhls)
1. We describe and implement a compiler framework, OpenHLS, that can efficiently transform, without use of proprietary HLS tools, unoptimized, hardware-agnostic PyTorch models into low-latency RTL suitable for deployment to FPGAs;
2. We show that OpenHLS generates lower latency designs than does a state-of-the-art commercial HLS tool (Xilinx's Vitis HLS) for many DNN layer types. In particular we show that OpenHLS can produce synthesizable designs that meet placement, routing, and timing constraints for BraggNN, a DNN designed for analyzing Bragg diffraction peaks;
3. We discuss challenges faced even after successful synthesis of RTL from a high-level representation of a DNN, namely during the place and route phases of implementation.
Note that while we focus here, for illustrative purposes, on optimizations relevant to a DNN used for identifying Bragg diffraction peaks in materials science, OpenHLS supports a wide range of DNNs, limited only by upstream support for DNN layers.
The rest of this paper is organized as follows: Section 2 reviews key concepts from compilers, high-level synthesis, and RTL design for FPGA, as well as related work. Section 3 describes the OpenHLS compiler and HLS framework in detail. Section 4 evaluates OpenHLS's performance, scalability, and competitiveness with designs generated by Vitis HLS, and describes a case study in which OpenHLS is applied to BraggNN, a Bragg peak detection DNN with a target latency of 1 \(\upmu\)s/sample. Finally, Section 5 concludes and discusses future work.
## 2. Background
We briefly review relevant concepts from DNN frameworks and compilers, high-level synthesis, and FPGA design. Each subsection corresponds to a phase in the translation from high-level DNN to feasible FPGA implementation.
### Compilers: The path from high to low
The path from a high-level, abstract, DNN representation to a register-transfer level representation can be viewed as a sequence of lowerings between adjacent levels of abstraction. Each level of abstraction is rendered as a programming language, IR, or HDL, and thus we describe each lowering in terms of the representations and tools used by OpenHLS to manipulate those representations:
1. An imperative, _define-by-run_, Python representation, in PyTorch;
2. High-level data-flow graph representation, in TorchScript;
3. Low-level data and control flow graph representation, in Multi-Level Intermediate Representation (MLIR).
#### 2.1.1. PyTorch and TorchScript
Typically DNN models are represented in terms of high-level frameworks, themselves implemented within general purpose programming languages. Such frameworks are popular because of their ease of use and large library of example implementations of various DNN model architectures. OpenHLS targets the PyTorch framework. DNNs developed within PyTorch are _defined-by-run_: the author describes the DNN imperatively in terms of high-level operations, using Python, which, when executed, materializes the (partial) high-level data-flow graph (DFG) corresponding to the DNN (e.g., for the purposes of reverse-mode automatic differentiation). From the perspective of the user, define-by-run enables fast iteration at development time, possibly at the cost of some runtime performance.
Yet from the perspective of compilation, define-by-run precludes efficient extraction of the high-level DFG; since the DFG is materialized only at runtime, it cannot easily be statically inferred from the textual representation (i.e., the Python source) of the DNN. Furthermore, a priori, the runtime-materialized DFG is only partially materialized [37], and only as an in-memory data structure. Thus, framework support is necessary for efficiently extracting the full DFG. For this purpose, PyTorch supports a Static Single Assignment (SSA) IR, called TorchScript (TS) IR, and an accompanying tracing mechanism (the TS JIT), which generates TS IR from conventionally defined PyTorch models. Lowering from PyTorch to TS IR enables various useful analyses and transformations on a DNN at the level of the high-level DFG, but targeting FPGAs requires
a broader collection of transformations. To this end, we turn to a recent addition to the compiler ecosystem, MLIR.
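As a brief illustration of this first lowering, the following sketch uses the public torch.jit API to obtain TS IR from a toy define-by-run model (the model itself is ours, for illustration only):

```python
import torch

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(1, 8, kernel_size=3)

    def forward(self, x):
        return torch.relu(self.conv(x))

# Tracing executes the model once on example inputs and records the
# materialized dataflow graph as SSA-form TorchScript IR.
traced = torch.jit.trace(TinyNet().eval(), torch.randn(1, 1, 16, 16))
print(traced.graph)
```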
#### 2.1.2. MLIR
MLIR (Mikrishnan et al., 2017) presents a new approach to building reusable and extensible compiler infrastructure. MLIR is composed of a set of _dialect_ IRs, subsets of which are mutually compatible, either directly or by way of translation/legalization. The various dialects aim to capture and formalize the semantics of compute-intensive programs at varying levels of abstraction, as well as namespace-related sets of IR transformations. The entrypoint into this compiler framework from PyTorch is the torch dialect (Mikrishnan et al., 2017), a high-fidelity mapping from TS IR to MLIR native IR, which, in addition to performing the translation to MLIR, fully refines all shapes of intermediate tensors in the DNN (i.e., computes concrete values for all dimensions of each tensor), a necessary step for downstream optimizations and eliminating inconsistencies in the DNN (Mikrishnan et al., 2017).
While necessary for lowering to MLIR and shape refinement, the torch dialect represents a DNN at the same level of abstraction as TS IR: it does not capture the precise data and control flow needed for de novo implementations of DNN operations (e.g., for FPGA). Fortunately, MLIR supports lower-level dialects, such as linalg, affine, and scf. The scf (structured control flow) dialect describes standard control flow primitives, such as conditionals and loops, and is mutually compatible with the arith (arithmetic operations) and memref (memory buffers) dialects. The affine dialect, on the other hand, provides a formalization of semantics that lend themselves to polyhedral compilation techniques (Bleifer and Belfelfs, 2017) that enable loop dependence analysis and loop transformations. Such loop transformations, particularly loop unrolling, are crucial for achieving lowest possible latencies (Mikrishnan et al., 2017) because loop nests directly inform the concurrency and parallelism of the final RTL design.
### High-level synthesis
High-level synthesis tools produce RTL descriptions of designs from high-level representations, such as C or C++ (Bleifer and Belfs, 2017; Belfs and Belfs, 2017). In particular, Xilinx's Vitis HLS, based on the Autopilot project (Xilinx, 2017), is a state-of-the-art HLS tool. Given a high-level, procedural, representation, HLS carries out three fundamental tasks, in order to produce a corresponding RTL design:
1. HLS schedules operations (such as mulf, addf, load, store) in order to determine which operations should occur during each clock cycle; such a schedule depends on three characteristics of the high-level representation: (a) the topological ordering of the DFG of the procedural representation (i.e., the dependencies of operations on results of other operations and resources); (b) the delay for each operation; and (c) the user's desired clock rate/frequency.
2. HLS associates (_binds_) floating point operations to RTL instantiations of intellectual property (IP) for those operations; for example whether to associate an addition operation followed by a multiply operation to IPs for each, or whether to associate them both with a single IP, designed to perform a fused multiply-accumulate (MAC). In the case of floating-point arithmetic operations, HLS also (with user guidance) determines the precision of the floating-point representation.
3. HLS builds a finite-state machine (FSM) that implements the schedule of operations as control logic, i.e., logic that initiates operations during the appropriate stages of the schedule.
In addition to fulfilling these three fundamental tasks, HLS aims to optimize the program. In particular, HLS attempts to maximize concurrency and parallelism (the number of concurrent operations scheduled during a clock cycle) in order to maximize the throughput and minimize the latency of the final implementation. Maximizing concurrency entails pipelining operations: operations are executed such that they overlap in time when possible, subject to available resources. Maximizing parallelism entails partitioning the DNN into subsets of operations that can be computed independently and simultaneously and whose results are aggregated upon completion.
While HLS aims to optimize various characteristics of a design automatically, there are challenges associated with this automation. In particular, maximizing concurrency and parallelism necessitates data-flow analysis in order to identify data dependencies amongst operations, both for scheduling and for identifying potential data hazards. Such data-flow analysis is expensive, and its runtime grows as better performance is pursued. This can be understood in terms of loop-nest representations of DNN operations.
For example, consider the convolution in Listing 1. A schedule that parallelizes (some of) the arithmetic operations for this loop nest can be computed by first unrolling the loops up to some "trip count" and then computing the topological sort of the operations. When using this _list scheduling_ algorithm, the degree to which the loops are unrolled determines how many arithmetic operations can be scheduled in parallel. The issue is that the stores and loads
on the output array prevent reconstruction of explicit relationships between the inputs and outputs of the arithmetic operations across loop iterations. The conventional resolution to this loss of information is to perform _store-load forwarding_: pairs of store and load operations on the same memory address are eliminated, with the operand of the store forwarded to the uses of the load (see Listing 2). Ensuring correctness of this transformation (i.e., that it preserves program semantics) requires verifying, for each pair of candidate store and load operations, that there is no intervening memory operation on the same memory address. These verifications are non-trivial since the iteration spaces of the loops need not be regular; in general, each verification might involve solving a small constraint satisfaction program (Steintein and Tanner, 1996). Furthermore, the number of required verifications grows polynomially in the convolution parameters, since the loop nest unrolls into \(b\times c_{out}\times h\times w\times c_{in}\times k^{2}\) store-load pairs on the output array.
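Since Listings 1 and 2 are not reproduced here, the following Python sketch (our analog, not the paper's original listings) illustrates the transformation: the first version repeatedly stores to and loads from the output array, while the forwarded version keeps the running sum in a scalar accumulator so that each multiply feeds the next add directly.

```python
# Innermost convolution loops with explicit store/load pairs on the output
# (analog of Listing 1): every iteration both reads and writes out[0].
def conv_with_memory_traffic(inp, wgt, out, c_in, k):
    for ci in range(c_in):
        for kh in range(k):
            for kw in range(k):
                out[0] = out[0] + inp[ci][kh][kw] * wgt[ci][kh][kw]

# After store-load forwarding (analog of Listing 2): intermediate values live
# in a scalar, exposing the producer-consumer chain to the scheduler, and a
# single store remains at the end.
def conv_forwarded(inp, wgt, out, c_in, k):
    acc = out[0]
    for ci in range(c_in):
        for kh in range(k):
            for kw in range(k):
                acc += inp[ci][kh][kw] * wgt[ci][kh][kw]
    out[0] = acc
```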
Finally, note that although greedy solutions to the scheduling problem solved by HLS are possible, the scheduling problem can, in principle, be formulated as an integer linear program (ILP), for which the corresponding decision problem is NP-complete. In summary, HLS tools solve computationally intensive problems in order to produce an RTL description of a high-level representation of a DNN. These phases of the HLS process incur "development time" costs (i.e., runtime of the tools) and impose practical limitations on the amount of design space exploration (for the purpose of achieving latency goals) which can be performed. OpenHLS addresses these issues by enabling the user to employ heuristics during both the parallelization and scheduling phases which, while not guaranteed to be correct (they can be _behaviorally verified_), have much lower runtimes (see Section 3.1).
### FPGA design
Broadly, at the register-transfer level of abstraction, there remain two more steps prior to being able to deploy a design to an FPGA: a final lowering, so-called logic synthesis, and place and route (P&R). The entire process may be carried out by Xilinx's Vivado tool.
Logic synthesis is the process of mapping RTL to actual hardware primitives on the FPGA (so-called _technology mapping_), such as lookup tables (LUTs), block RAMs (BRAMs), flip-flops (FFs), and digital signal processors (DSPs). Logic synthesis produces a network list (_netlist_) describing the logical connectivity of various parts of the design. Logic synthesis, for example, determines the implementation of floating-point operations in terms of DSPs; depending on user parameters and other design features, DSP resource consumption for floating-point multiplication and addition can differ greatly. Logic synthesis also determines the number of LUTs and DSPs which a high-level representation of a DNN corresponds to, which is relevant to both the performance and feasibility of that DNN when deployed to FPGA.
After the netlist has been produced, the entire design undergoes P&R to determine which configurable logic block within an FPGA should implement each of the units of logic required by the digital design. P&R algorithms need to minimize distances between related units of functionality (in order to minimize wire delay), balance wire density across the entire fabric of the FPGA (in order to reduce route congestion), and maximize the clock speed of the design (a function of wire delay, logic complexity, and route congestion). The final, routed design can then be deployed to the FPGA by producing a proprietary _bitstream_, which configures the FPGA.
### Related work
Several projects aim to support translation from high-level representations of DNNs to feasible FPGA designs. Typically they rely on commercial HLS tools for the scheduling, binding, and RTL emission phases of the translation, such as in the cases of DaCeML (Dae et al., 2017), hls4ml (Dae et al., 2017), and ScaleHLS (Dae et al., 2017), which all rely on Xilinx's Vitis HLS. Thus, they fail to efficiently (i.e., without incurring the aforementioned runtime costs) produce feasible and low-latency designs. One notable recent work is the SODA Synthesizer (Dae et al., 2017), which does not rely on a commercial tool but instead on the open-source PandA-Bambu HLS tool (Dae et al., 2017); though PandA-Bambu is open-source and mature, we found in our own tests that it also could not handle fully unrolled designs efficiently.
Alternatively, some projects do not rely on HLS for scheduling, binding, and RTL emission, and also attempt to translate from high-level representations of DNNs to feasible FPGA designs, such as DNN Weaver (Wang et al., 2017) and NNGen (Wang et al., 2018). Both of the cited projects function as parameterized/templatized RTL generators and thus lack sufficient generality for our needs; primarily they seek to produce implementations of kernels that emulate GPU architectures (i.e., optimizing for throughput rather than latency). In our experiments they were unable to generate low-latency implementations, either by achieving unacceptable latencies or by simply failing outright. (NNGen, due to the nature of templates, supports only limited composition, and produced "recursion" errors.)
## 3. The Compiler and HLS Framework
OpenHLS is an open source compiler and HLS framework that employs MLIR for extracting loop-nest representations of DNNs. Implemented in Python for ease of use and extensibility, it handles the DNN transformations as well as scheduling, binding, and FSM extraction. Importantly, there is no dependence on commercial HLS tools, a property that uniquely enables its use for applications that require the flexibility of open-source tools (e.g., the ability to inspect and modify internals in order to adapt to special cases), such as low-latency physical-science experiments. Figure 1 shows its overall architecture. OpenHLS first lowers DNNs from PyTorch to MLIR through TorchScript and the torch dialect (see Section 2.1.2) and then from the torch dialect to the scf dialect (through the linalg dialect). Such a representation lends itself to a straightforward translation to Python (compare Listing 1 to Listing 3) and indeed OpenHLS performs this translation.
The benefits of translating scf dialect to Python are manifold: see Section 3.1. Ultimately, OpenHLS produces a representation of the DNN that is then fully scheduled by using the scheduling infrastructure in CIRCT (Wang et al., 2018) (an MLIR adjacent project). After scheduling, OpenHLS emits corresponding RTL (as Verilog).
OpenHLS delegates to the FloPoCo (Kipper and Kipper, 2018) IP generator the task of generating pipelined implementations of the standard floating-point arithmetic operations (mulf, divf, addf, subf, sqrtf) at various precisions. In addition, we implement a few generic (parameterized by bit width) operators in order to support a broad range of DNN operations: two-operand maximum (max), unary negation (neg), and the rectified linear unit (relu). Transcendental functions, such as exp, are implemented by using a Taylor series expansion to \(k\)-th order (where \(k\) is determined on a case-by-case basis). Note that FloPoCo's floating-point representation differs slightly from IEEE754, foregoing subnormals and differently encoding zeroes, infinities and NaNs (for the benefit of reduced complexity) and our implementations max, neg, relu are adjusted appropriately.
We now discuss some aspects of OpenHLS in more detail.
### Symbolic interpretation for fun and profit
As noted in Section 2.2, maximizing concurrency and parallelism for a design entails unrolling loops and analyzing the data flow of their operations. As illustrated in Figure 2, the formally correct approach
Figure 1. OpenHLS framework overview.
to unrolling loop nests can be prohibitively expensive in terms of runtime. In the case of BraggNN (see Listing 5), for example, the high cost of unrolling precluded effective search of the design space for an RTL representation achieving the target latency. Translating scf dialect to Python enables OpenHLS to overcome this barrier by letting us use the Python interpreter as a _symbolic interpreter_. Interpreting the resulting Python loop nests (i.e., running the Python program) while treating the arithmetic and memory operations on SSA values as operations on symbols (i.e., Python classes with overloaded methods) enables us to:
1. Partially evaluate functions of iteration variables (for example, %3 = arith.addi %i3, %i6) to determine array index operands of all stores and loads (for example, memref.load %input[%i1, %i5, %i3, %i4]) and thereupon perform memory dependence checks, thus transforming the problem of statically verifying memory dependence into one of checking assertions at runtime;
2. Unroll loops by recording each floating-point arithmetic operation executed while enforcing SSA; e.g., for a loop whose body has repeated assignments to the same SSA value (ostensibly violating SSA), we execute the loop and instantiate new, uniquely identified, symbols for the result of each operation;
3. Reconstruct all data flow through arithmetic operations and memory operations by interpreting memrefs as _geometric symbol tables_ (i.e., symbol tables indexed by array indices rather than identifiers/names) and stores and loads as reads and writes on those symbol tables;
4. Swap evaluation rules in order to support various functional modes, e.g., evaluating floating-point arithmetic operations by using (Python) bindings to FloPoCo's C++ functional models, thereby enabling behavioral verification of our designs.
See Figure 3 for the translation rules from MLIR dialects to Python.
### AST transformations and verification
Prior to interpretation, OpenHLS performs some simple AST transformations on the Python generated from scf dialect:
1. **Hoist globals**: Move fixed DNN tensors (i.e., weights) out of the body of the generated Python function (OpenHLS translates the MLIR module corresponding to the DNN into a single Python function in order to simplify analysis and interpretation) and into the parameter list, for the purpose of ultimately exposing them at the RTL module interface.
2. **Remove if expressions**: DNN relu operations are lowered to the scf dialect as a decomposition into arith.cmpf ugt and arith.select; this transformation recomposes them into a relu.
3. **Remove MACs**: Schedule sequences of load-multiply-add-store (common in DNN implementations) jointly, coalescing them into a single fmac operation.
4. **Reduce fors**: Implement the reduction tree structure for non-parallelizable loop nests mentioned in Section 3.3.
These transformations on the Python AST are simple (implemented with procedural pattern matching), extensible, and efficient (marginal runtime cost) because no effort is made to verify their formal correctness. Thus, OpenHLS trades formal correctness for development time performance. This tradeoff enables quick design space iteration, which for example, enabled us to achieve low latency implementations for BraggNN (see Section 4.2).
OpenHLS supports behavioral rather than formal verification. Specifically, OpenHLS can generate testbenches for all synthesized RTL. The test vectors for these testbenches are generated by evaluating the generated Python representation of the DNN on randomly generated inputs but with floating-point operations now evaluated using functional models of the corresponding FloPoCo operators. The testbenches can then be run using any IEEE 1364 compliant simulator. We run a battery of such testbenches (corresponding to various DNN operation types), using coccotb (Kumar et al., 2017) and iverilog(Kumar et al., 2017), as a part of our continuous integration (CI) process.
### Scheduling
Recall that HLS must schedule operations during each clock cycle in a way that preserves the DNN's data-flow graph. That schedule then informs the construction of a corresponding FSM. As already mentioned, scheduling an arbitrary DNN involves formulating and solving an ILP. In the resource-unconstrained case, due to the precedence relations induced by data flow, the constraint matrix of the associated ILP is a _totally unimodular matrix_ and the feasible region of the problem is an integral polyhedron. In such cases, the scheduling problem can be solved optimally in polynomial time with an LP solver (Lewis and Sack, 2017). In the resource-constrained case, resource constraints can also be transformed into precedence constraints by picking a particular (possibly heuristic) linear ordering on the resource-constrained operations. This transformation partitions resource-constrained operations into distinct clock cycles, thereby guaranteeing that sufficient resources are available for all operations scheduled within the same clock cycle (Lewis and Sack, 2018).
OpenHLS uses the explicit parallelism of the scf.parallel loop-nest representation to inform such a linear ordering on resource-constrained operations. By assumption, for loop nests which can
Figure 2. 3\(\times\)3-kernel convolution (cf. Listing 3) full unrolling time vs. input (square) image size, with store-load forwarding using MLIR's -affine-scalrep pass. The longest time is 577,419 s (\(\approx\)160 h) for a loop nest with a trip count of \(128\times 128\times 3\times 3=147{,}456\).
be represented as scf.parallel loop nests (see Listing 4), each instance of a floating-point arithmetic operation in the body corresponding to unique values of the iteration variables (e.g., %i1, %i2, %i3, %i4 for Listing 4) is independent of all other such instances, although data flow within a loop body must still be respected. This exactly determines total resource usage per loop nest; for example, the convolution in Listing 4 would bind to \(2K_{i}\) DSPs (assuming mulf and addf bind to one DSP each), where:
\[\begin{aligned}K_{i}\;\coloneqq\;&|\{\texttt{\%i1}=\texttt{\%c0}+\texttt{\%c1}\times\mathbb{N}\,\wedge\,\texttt{\%i1}<b\}|\;\times\\ &|\{\texttt{\%i2}=\texttt{\%c0}+\texttt{\%c1}\times\mathbb{N}\,\wedge\,\texttt{\%i2}<c_{out}\}|\;\times\\ &|\{\texttt{\%i3}=\texttt{\%c0}+\texttt{\%c1}\times\mathbb{N}\,\wedge\,\texttt{\%i3}<h\}|\;\times\\ &|\{\texttt{\%i4}=\texttt{\%c0}+\texttt{\%c1}\times\mathbb{N}\,\wedge\,\texttt{\%i4}<w\}|\end{aligned}\]
with \(\texttt{\%c1}\times\mathbb{N}\) representing all multiples of \(\texttt{\%c1}\). That is to say, \(K_{i}\) is the cardinality of the cartesian product of the iteration spaces of the parallel iteration variables.
Defining \(K\coloneqq\max_{i}K_{i}\) across all scf.parallel loop nests, we can infer peak usage of any resource. Then, after indexing available hardware resources \(j=1,\ldots,K\), we can bind the operations of any particular loop nest. This leads to a linear ordering on resource-constrained operations such that operations bound to the same hardware resource index \(j\) must be ordered according to their execution order during symbolic interpretation.2 Note that this ordering coincides with the higher-level structure of the DNN, which determines the ordering of scf.parallel loop nests (and thus interpretation order during execution of the Python program).
Footnote 2: OpenHLS only needs to construct a partial precedence ordering \(\omega_{a}<\omega_{b}\) for operations \(\omega_{a},\omega_{b}\), which CIRCT then combines with the delays of the operations to construct constraints such as \(\texttt{start\_op}_{a}+\texttt{delay}_{a}\leq\texttt{start\_op}_{b}\).
For DNN operations that lower to sequential loop nests rather than scf.parallel loop nests (e.g., sum, max, or prod), we fully unroll the loops and transform the resulting sequential operations
| MLIR (scf, arith, memref) | Python |
| --- | --- |
| `%5 = ...` (SSA value) | `v5 = Val("%5")` |
| `memref.alloca() : memref<b×c_in×h×w>` | `MemRef(b, c_in, h, w)` |
| `%5 = memref.load %input[%i1, %i5, %i3, %i4]` | `[%5] = [%input].__getitem__(([%i1], [%i5], [%i3], [%i4]))` |
| `memref.store %9, %output[%i1, %i5, %i3, %i4]` | `[%output].__setitem__(([%i1], [%i5], [%i3], [%i4]), [%9])` |
| `scf.for %i1 = %c0 to b step %c1` | `for [%i1] in range([%c0], b, [%c1])` |
| `%3 = arith.addi %i3, %i6` | `[%3] = [%i3] + [%i6]` |
| `%8 = arith.mulf %5, %6` | `[%8] = [%5].__mul__([%6])` |
| `%9 = arith.addf %7, %8` | `[%9] = [%7].__add__([%8])` |
| `%63 = arith.cmpf ugt %10, %cst` ∧ `%64 = arith.select %63, %10, %cst` | `[%64] = relu([%10])` |
| `%8 = arith.mulf %5, %6` ∧ `%9 = arith.addf %7, %8` | `[%9] = fma([%5], [%6], [%7])` |

Figure 3. Translation rules for mapping scf, arith, and memref dialects to Python.
into a reduction tree; we use As-Late-As-Possible scheduling (Brandt et al., 2017) amongst the subtrees of such reduction trees.
## 4. Evaluation
We evaluate OpenHLS both on individual DNN layers, and end-to-end, on our use-case BraggNN. We compare OpenHLS to Xilinx's Vitis HLS by comparing the latencies and resource usages of the final designs generated by each. We also compare the runtimes of the tools themselves. Both OpenHLS and Vitis HLS produce Verilog RTL, on which we run a synthesis pass using Xilinx's Vivado. The particular FPGA target is a Xilinx Alveo U280. We measure LUT, DSP, BRAM, and FF usage. For the DNN layer evaluations, we use FloPoCo (5,11)-floating point representations (5-bit exponent, 11-bit mantissa), corresponding to Vitis HLS's IEEE half-precision IPs. We synthesize all designs for a 10 ns target clock period and report end-to-end latency as the product of the total schedule interval count of the design and the achieved clock period (\(10-\mathit{WNS}\), where _WNS_ is the worst negative slack reported). In the case of Vitis HLS, which potentially explicitly pipelines the design and therefore implements with an initiation interval strictly less than the total schedule interval count, we report in terms of the best possible interval count (LatencyBest from the Vitis HLS reports). All other measurements are collected from Vivado synthesis reports. As Vitis HLS operates on C++ representations, we generate such a representation for our test cases by first lowering each DNN layer to the affine dialect and then applying the scalehls-translate tool of the ScaleHLS project (Zhao et al., 2017) to emit C++. Importantly, we do not make any use of the scalehls-opt optimization tool (of the same project).
Since our ultimate goal is low-latency inference, and since the strategy that OpenHLS employs in pursuit of this goal is loop unrolling, in order to produce a like-for-like comparison we similarly unroll the representation that is passed to Vitis HLS. Thus, all Vitis HLS measurements are reported in terms of _unroll factor_: an unroll factor of \(k\) corresponds to a \(k\)-fold increase in the number of statements in the body of a loop and a commensurate \(k\)-fold decrease in the trip count of the loop. For loop nests, we unroll inside out: if \(k\) is greater than the trip count \(t\) of the innermost loop, we unroll the innermost loop completely and then unroll the enclosing loop by a factor of \(k-t\). We do not perform any store-load forwarding during this preprocessing, but we annotate all arrays with the directive array_partition complete dim=1 so that Vitis HLS can pipeline effectively. All representations generated by OpenHLS correspond to full unrolling of the loop nests.
### DNN layers
We evaluate OpenHLS vs. Xilinx's Vitis HLS by comparing the latency of the final design on five DNN layer types, chosen to cover a range of arithmetic operations (mulf, divf, addf, subf, sqrt) and data access patterns (iteration, accumulation, reduction):
* addm(a, b, c) : Matrix multiply: a \(\times\) b + c;
* batch_norm_2d(num_features) : Batch normalization over a 4D input (Khan et al., 2017);
* conv_2d(\(c_{in},\ c_{out},\ k\)) : 2D convolution with bias, with \(k\times k\) kernel, over a \(b\times c_{in}\times h\times w\) input, producing \(b\times c_{out}\times h^{\prime}\times w^{\prime}\) output;
* max_pool_2d(\(k,\ stride\)) : 2D max pooling, with \(k\times k\) kernel, and striding;
* soft_max : softmax (\(x\)) := \(\left[\frac{\exp\left(x_{i}\right)}{\sum_{j}\exp\left(x_{j}\right)}\right]\)
The parameter values and input dimensions used during evaluation are summarized in Table 1.
Figure 4 shows Vitis HLS vs. OpenHLS resource usage and latency vs. unroll factor, and Figure 5 shows the runtimes of Vitis HLS as a function of increasing unroll factor. We observe that while Vitis HLS end-to-end latencies decrease with increased unroll factor, they never match that achieved by OpenHLS. Even at an unroll factor of 1024 (which corresponds to full unrolling of all loop nests comprising these layer types), Vitis HLS is only within 10% of OpenHLS. We attribute this to Vitis HLS's inability to pipeline effectively, due to its inability to eliminate memory dependencies, either through store-load forwarding or further array partitioning. Conversely, OpenHLS's ability to effectively perform store-load forwarding is evident in the complete lack of BRAM usage: all weights are kept in FFs or LUTs. While infeasible for larger designs (which would be constrained by the number of available FFs), this unconstrained usage of FFs is acceptable for our use case. The increasing latency (as a function of unroll factor) in the max_pool_2d case is due to Vitis HLS's failure to meet timing, i.e., while the interval count decreases as a function of unroll factor, the clock period increases.
### BraggNN case study
High-energy diffraction microscopy (HEDM) enables non-destructive characterization for a broad class of single-crystal and polycrystalline materials. A critical step in a typical HEDM experiment is an analysis to determine precise Bragg diffraction peak characteristics. Peak characteristics are typically computed by fitting the peaks to a probability distribution, e.g., Gaussian, Lorentzian, Voigt, or Pseudo-Voigt. As noted in Section 1, HEDM experiments can collect data at more than 80 GB/s. These data rates, though more modest than at the LHC, merit exploring low-latency approaches in order to enable experiment modalities that depend on measurement-based feedback (i.e., experiment steering).
BraggNN (Zhao et al., 2017), a DNN aimed at efficiently characterizing Bragg diffraction peaks, achieves a throughput (via batch inference) of approximately 22 µs/sample on a state-of-the-art GPU: a large speedup over classical pseudo-Voigt peak fitting methods, but still far short of the 1 µs/sample needed to handle 1 MHz sampling rates. In addition, a data-center-class GPU such as an NVIDIA V100 (or even
| Layer | Parameter values | Input dimensions |
| --- | --- | --- |
| addm | N/A | a, b, c : \((16,16)\) |
| batch_norm_2d | num_features = 2 | input : \((10,2,3,3)\) |
| conv_2d | \(c_{in}=1,\ c_{out}=k=3\) | input : \((1,1,16,16)\) |
| max_pool_2d | \(k=3,\ stride=2\) | input : \((1,3,16,16)\) |
| soft_max | N/A | input : \((1,3,16,16)\) |

Table 1. DNN layers used for evaluation of OpenHLS.
a workstation-class GPU such as an NVIDIA RTX 2080Ti) required to run the current BraggNN implementation cannot be deployed at the edge, i.e., adjacent or proximal to the high-energy microscopy equipment. With the goal of reducing both per-sample time and the
Figure 4. Vitis HLS vs. OpenHLS resource usage and latency vs. unroll factor for five DNN modules, exhibiting the large runtime cost incurred in using Vitis HLS to search the design space (of possible low-latency designs for each layer). The lines give latencies (left axes); the bars give the % of the resource used (right axes). All \(y\)-scales are log.
deployment footprint, we applied OpenHLS to the PyTorch representation of BraggNN(s=1) (see Listing 5) and achieved an RTL implementation which synthesizes to a 1238-interval-count design that places, routes, and meets timing closure for a clock period of 10 ns (for a Xilinx Alveo U280). The design consists of a three-stage pipeline with the longest stage measuring 480 intervals, for a throughput of 4.8 µs/sample. See Figure 6 for a comparison with designs generated by Vitis HLS (using the same flow as in Section 4.1).
The most challenging aspect of implementing BraggNN was minimizing latency while satisfying compute resource constraints (LUTs, DSPs, BRAMs) and achieving routing closure, i.e., not exceeding available routing resources and avoiding congestion. We made two design choices to reduce resource consumption. The first was to reduce the precision used for the floating-point operations, from half precision to FloPoCo (5,4)-precision (5-bit exponent, 4-bit mantissa), a choice justified by examination of the distribution of the weights of the fully trained BraggNN (see Figure 7).
Reducing the precision enabled the second design choice, to eliminate BRAMs from the design, since, at the lower precision, all weights can be represented as registered constants. The reduced precision also drove the Vivado synthesizer to infer implementations of the floating-point operations that make no use of DSPs, likely because the DSP48 hardware block includes an 18-bit by 25-bit signed multiplier and a 48-bit adder [2], neither of which neatly divides the bit width of FloPoCo (5,4)-precision cores. (The actual width for FloPoCo (5,4)-precision is 12 bits: 1 extra bit is needed for the sign and 2 for the handling of exceptional conditions.)
Achieving routing closure was difficult due to the nature of Xilinx's UltraScale architecture, of which the Alveo U280 is an instance. The UltraScale architecture achieves its scale through Stacked Silicon Interconnect (SSI) technology [27], which implies multiple distinct FPGA dies, called Super Logic Regions (SLRs), on the same chip, connected by interposers. Adjacent SLRs communicate with each other over a limited set of Super Long Lines (SLLs), which determine the maximum bus width that spans two SLRs. On the Alveo U280 there are exactly 23,040 SLLs available between adjacent SLRs, and at (5,4)-precision BraggNN(s=1) needs 23,328 SLLs between SLR2 and SLR1. [We route from SLR2 to SLR1 the outputs of cnn_layers_1 (1\(\times\)16\(\times\)9\(\times\)9\(\times\)12 wires) and softmax(theta_layer \(\times\) phi_layer) \(\times\) g_layer (1\(\times\)8\(\times\)9\(\times\)9\(\times\)12 wires).] Thus, we further reduced the precision to (5,3). Finally, since multiple dies constitute independent clock domains, the SLLs that cross SLRs are sensitive to hold time violations due to the higher multi-die variability [1]. This multi-die variability leads to high congestion if not addressed.
Figure 5. Vitis HLS vs. OpenHLS runtime vs. unroll factor, illustrating the large runtime cost incurred in using Vitis HLS to search over possible low-latency BraggNN designs.
Figure 6. BraggNN Vitis HLS vs. OpenHLS resource usage and latency vs. unroll factor (with both at half-precision) throughout the design space of possible low-latency designs.
Thus, routing across SLRs needs to be handled manually, using placement and routing constraints for logic in each SLR and the addition of so-called "launch" and "latch" registers in each SLR. Figure 8 illustrates the effect of using launch and latch registers as well as placement and routing constraints.
Thus, these design choices (in combination with compiler-level optimizations performed by OpenHLS) plus careful management of routing constraints enable us to lower, compile, synthesize, place, and route BraggNN(s=1) to Xilinx's Alveo U280 at a throughput of 4.8 µs/sample: \(\sim\)5\(\times\) higher latency than the target 1 µs/sample, but a \(\sim\)4\(\times\) improvement over the PyTorch GPU implementation.
## 5. Conclusion
We have presented OpenHLS, an MLIR-based HLS compilation framework that supports translating DNN models to RTL without the use of commercial HLS tools. The OpenHLS end-to-end compilation pipeline provides a PyTorch front-end and Verilog emission backend. An extensible Python intermediate layer supports use-case-specific optimizations (e.g., store-load forwarding) that are not possible otherwise. Experimental results demonstrate that OpenHLS outperforms, in terms of end-to-end latency, Vitis HLS on a range of DNN layer types and on a case-study DNN.
We note three directions for future work, primarily with respect to scheduling: (1) better integration between the Python layer and MLIR: it would be preferable for the transformations on the Python representation to make use of various MLIR facilities, such as affine analysis, for the purpose of exploring loop transformations that improve latency; (2) expanding the set of scheduling algorithms available: for example, resource-aware scheduling (Krishnan et al., 2017); and (3) integration of scheduling-aware placement and vice versa (placement-aware scheduling): currently OpenHLS can be used to inform placement but does not explicitly emit placement constraints (see Section 4.2); a more precise approach, such as in (Krishnan et al., 2017), would potentially enable better pipelining and thus higher throughput.
|
2310.02068 | Well-posedness and numerical analysis of an elapsed time model with
strongly coupled neural networks | The elapsed time equation is an age-structured model that describes dynamics
of interconnected spiking neurons through the elapsed time since the last
discharge, leading to many interesting questions on the evolution of the system
from a mathematical and biological point of view. In this work, we first deal
with the case when transmission after a spike is instantaneous and the case
when there exists a distributed delay that depends on previous history of the
system, which is a more realistic assumption. Then we study the well-posedness
and the numerical analysis of the elapsed time models. For existence and
uniqueness we improve the previous works by relaxing some hypotheses on the
nonlinearity, including the strongly excitatory case, while for the numerical
analysis we prove that the approximation given by the explicit upwind scheme
converges to the solution of the non-linear problem. We also show some
numerical simulations to compare the behavior of the system in the case of
instantaneous transmission with the case of distributed delay under different
parameters, leading to solutions with different asymptotic profiles. | Mauricio Sepulveda, Nicolas Torres, Luis Miguel Villada | 2023-10-03T14:10:50Z | http://arxiv.org/abs/2310.02068v2 | Well-posedness and numerical analysis of an elapsed time model with strongly coupled neural networks
###### Abstract
The elapsed time equation is an age-structured model that describes dynamics of interconnected spiking neurons through the elapsed time since the last discharge, leading to many interesting questions on the evolution of the system from a mathematical and biological point of view. In this work, we first deal with the case when transmission after a spike is instantaneous and the case when there exists a distributed delay that depends on previous history of the system, which is a more realistic assumption. Then we study the well-posedness and the numerical analysis of the elapsed time models. For existence and uniqueness we improve the previous works by relaxing some hypotheses on the non-linearity, including the strongly excitatory case, while for the numerical analysis we prove that the approximation given by the explicit upwind scheme converges to the solution of the non-linear problem. We also show some numerical simulations to compare the behavior of the system in the case of instantaneous transmission with the case of distributed delay under different parameters, leading to solutions with different asymptotic profiles.
2010 _Mathematics Subject Classification._ 35A35, 35F20, 35R09, 65M06.
_Keywords and phrases._ Structured equations; Mathematical neuroscience; Delay differential equations, Well-posedness, Numerical analysis; Periodic solutions.
## 1 Introduction
Structured equations have been widely studied in the modeling of biological systems. In particular, in the context of neuroscience, age-structured models are an interesting approach to modeling the dynamics of interconnected spiking neurons. The study of the precise mechanisms of brain processes, leading to synchronous regular or irregular activities, has been a challenge for mathematicians and biologists, with many interesting models including discrete systems, differential equations and stochastic processes (for a reference see for example [1]).
One of these equations is the well-known elapsed time model, where a given population of neurons is described by the time elapsed since their last discharge. In this network neurons
are subject to random discharges, which are related to changes in the membrane potential and stimulate other neurons to spike. One of the main motivations of this model is to predict the brain activity through the previous history of spikes, and the key element that determines the evolution of the system is the way neurons interact within the network, leading to different possible behaviors of the neural activity and pattern formation. This equation is a mean-field limit of a microscopic model that establishes a bridge between the dynamics of a single neuron and a population-based approach, whose aspects have been investigated in [2, 3, 4, 5, 6, 7].
The elapsed time model has been studied by many authors. The pioneering works on this model are due to Pakdaman et al. [8, 9, 10], and some important results on exponential convergence to equilibrium for weak non-linearities were proved in [11, 12, 13] through different techniques such as the entropy method, semi-group theory and spectral arguments. Results on strong non-linearities have been obtained in [14], where existence of periodic solutions with jump discontinuities was established. Moreover, different extensions of the elapsed time model have been studied by incorporating new elements such as the fragmentation equation [10], spatial dependence with a connectivity kernel in [15], a multiple-renewal equation in [16] and a leaky memory variable in [17].
The classical elapsed time model with instantaneous transmission, which we will refer to throughout this article as the ITM, is given by the following non-linear age-structured equation
\[\text{(ITM)}\begin{cases}\partial_{t}n+\partial_{s}n+p(s,N(t))n=0&t>0,\;s>0 \\ N(t)=n(t,s=0)=\int_{0}^{+\infty}p(s,N(t))n(t,s)\mathrm{d}s&t>0\\ n(0,s)=n^{0}(s)\geq 0&s\geq 0,\\ \int_{0}^{\infty}n^{0}(s)\mathrm{d}s=1,\end{cases} \tag{1}\]
where \(n(t,s)\) is the probability density of finding a neuron at time \(t\), whose elapsed time since last discharge is \(s\geq 0\) and the function \(N(t)\) represents the flux of discharging neurons.
For this equation, we assume that when a neuron spikes, its interactions with other neurons are instantaneous, so for simplicity in this model the total activity of the network is simply given by the value of \(N(t)\). The crucial non-linearity is given by the function \(p\colon[0,\infty)\times[0,\infty)\mapsto[0,\infty)\), which is called the hazard rate and describes the susceptibility of neurons to spike. We assume that \(p\) depends on the elapsed time \(s\) and the activity \(N\) and without loss of generality we consider that \(p\in W^{1,\infty}(\mathbb{R}^{+}\times\mathbb{R}^{+})\), though the regularity is not a crucial assumption as we will see in the numerical examples. We stick to these names for each term of the system, though they may differ in the literature.
In this setting, neurons discharge at the rate given by \(p\) and then the elapsed time is immediately reset to zero, as stated by the integral boundary condition for \(n\) at \(s=0\). Following the terminology of age-structured equations, the elapsed time corresponds to the "age" of a neuron. When a neuron discharges, it is considered to "die", with \(p(s,N)n\) the corresponding "death" term. After a neuron spikes, it instantaneously re-enters the cycle and is considered to be "reborn", with the boundary condition at \(s=0\) representing the "birth" term.
Moreover, we assume that the rate \(p\) is increasing with respect to the age \(s\), which means that neurons are more prone to spike when the elapsed time since the last discharge is large. According to the dependence of the rate \(p\) on the total activity, different regimes are possible. When \(p\) is increasing with respect to \(N\) we say that the network is excitatory, which means that under high activity neurons are more susceptible to discharge. Similarly, when \(p\) is decreasing we say that the network is inhibitory and we have the opposite effect on the
network. Moreover, if the following condition holds
\[\|\partial_{N}p\|_{\infty}<1, \tag{2}\]
we say that the network is under a weakly interconnected regime, which means that the non-linearity is weak.
For the initial data \(n^{0}\in L^{1}(\mathbb{R}^{+})\) we assume that is a probability density and we formally have the following mass-conservation property
\[\int_{0}^{\infty}n(t,s)\,\mathrm{d}s=\int_{0}^{\infty}n^{0}(s)\,\mathrm{d}s,\qquad\forall t\geq 0, \tag{3}\]
which will be crucial in the analysis of Equation (1). Throughout this article we consider solutions in the weak sense but for simplicity we simply refer to them as solutions.
**Remark 1.1**.: _Since we look for solution \(n\in\mathcal{C}([0,T],L^{1}(\mathbb{R}^{+}))\) for some \(T>0\), observe that \(N(0)\) formally satisfies the following equation_
\[N(0)=\int_{0}^{\infty}p(s,N(0))n^{0}(s)\,\mathrm{d}s. \tag{4}\]
_In the inhibitory case and the weak interconnections regime this equation has a unique solution for \(N(0)\), while for the excitatory case we may have multiple solutions and thus the solution of Equation (1) is not unique. Moreover, we remark that in general \(N(0)\neq n^{0}(0)\), which might imply that \(n\) has discontinuities along the line \(\{(t,s)\in\mathbb{R}^{2}\colon t=s\}\)._
Concerning stationary solutions of the non-linear System (1), these are given by solutions the problem
\[\begin{cases}\partial_{s}n+p(s,N)n=0&s>0,\\ N=n(s=0)\coloneqq\int_{0}^{\infty}p(s,N)n(s)\,\mathrm{d}s,\\ \int_{0}^{\infty}n(s)\,\mathrm{d}s=1,\quad n(s)\geq 0.\end{cases} \tag{5}\]
If the activity \(N\) is given, we can determine the stationary density through the formula
\[n(s)=Ne^{-\int_{0}^{s}p(u,N)\,\mathrm{d}u}. \tag{6}\]
Thus by integrating with respect to \(s\), we get that \((n,N)\) corresponds to a stationary solution of System (1) if the activity satisfies the fixed point equation
\[N=F(N)\coloneqq\left(\int_{0}^{\infty}e^{-\int_{0}^{s}p(u,N)\,\mathrm{d}u}\, \mathrm{d}s\right)^{-1}. \tag{7}\]
Thus, depending on the rate \(p\), we have a unique solution in the inhibitory case and in the weak interconnections regime. In the excitatory case we may have multiple steady-states.
We also remark that a prototypical form of the function \(p\) is given by
\[p(s,N)=\varphi(N)\chi_{\{s>\sigma\}}, \tag{8}\]
which represents a hazard rate with an absolute refractory period \(\sigma>0\) so that for an age \(s<\sigma\) neurons are not susceptible to discharge. For \(s>\sigma\) neurons are able to discharge
and the density \(n\) decays exponentially according to \(\varphi(N)\). The function \(\varphi\) is assumed to be smooth and satisfies the following bounds
\[p_{0}\leq\varphi(N)\leq p_{1}\qquad\forall N\geq 0,\]
for some constants \(p_{0},p_{1}>0\). This special case has been studied in [14, 8], where convergence was proved for the inhibitory case and periodic solutions were constructed for the excitatory case. Another possible example is to consider a variable refractory period depending on the total activity
\[p(s,N)=\chi_{\{s>\sigma(N)\}}. \tag{9}\]
This type of hazard rate has been studied in [9], where existence of periodic solutions was studied as well.
From a biophysical point of view, when neurons spike it is reasonable to consider a delay in the transmission to other neurons. In order to take into account this effect, Pakdaman et al. [8] considered a modification of the elapsed time model incorporating a distributed delay, which corresponds to the following variant of Equation (1) that we will call as DDM
\[\text{(DDM)}\begin{cases}\partial_{t}n+\partial_{s}n+p(s,X(t))n=0&t>0,\,s>0\\ N(t)=n(t,s=0)=\int_{0}^{+\infty}p(s,X(t))n(t,s)\mathrm{d}s&t>0\\ X(t)=\int_{0}^{t}\alpha(t-\tau)N(\tau)\,\mathrm{d}\tau&t>0\\ n(0,s)=n^{0}(s)\geq 0&s\geq 0,\\ \int_{0}^{\infty}n^{0}(s)\mathrm{d}s=1.\end{cases} \tag{10}\]
The kernel \(\alpha\in L^{1}(\mathbb{R}^{+})\) with \(\alpha\geq 0\) corresponds to the distributed delay, and for simplicity we may assume that \(\alpha\) is smooth and uniformly bounded with \(\int_{0}^{\infty}\alpha(\tau)\mathrm{d}\tau=1\), but the theoretical results are still valid if we only assume that \(\alpha\) is integrable. For this model \(N(t)\) is the flux of discharging neurons and \(X(t)\) is the total activity, which depends on the values taken by \(N\) in the past (i.e. in the interval \([0,t]\)) through the convolution with \(\alpha\). Unlike the ITM, the rate \(p\) depends on the total activity \(X(t)\) instead of the discharging flux. Properties like mass conservation remain valid for this modified model. We remark that under the condition \(\int_{0}^{\infty}\alpha(\tau)\mathrm{d}\tau=1\), the steady states of the DDM equation (10) are the same as those of the ITM equation.
In particular when \(\alpha(t)\) approaches in the sense of distributions to the Dirac's mass \(\delta(t)\), then we formally get \(X(t)=N(t)\) and thus we recover the ITM equation (1). An important example studied in [8] is the exponential delay given by \(\alpha(t)=\frac{1}{\lambda}e^{-t/\lambda}\) so that \(X(t)\) satisfies the following differential equation
\[\begin{cases}\lambda X^{\prime}(t)+X(t)=N(t),\\ X(0)=0.\end{cases} \tag{11}\]
giving a simple way to compute numerical solutions of this system.
Similarly, when \(\alpha(t)\) approaches in the sense of distributions to the Dirac's mass \(\delta(t-d)\), then we formally get \(X(t)=N(t-d)\) and we recover a version of the classical elapsed time equation with a single discrete delay
\[\begin{cases}\partial_{t}n+\partial_{s}n+p(s,N(t-d))n=0&t>0,\,s>0\\ N(t)=n(t,s=0)=\int_{0}^{+\infty}p(s,N(t-d))n(t,s)\mathrm{d}s&t>0\\ n(0,s)=n^{0}(s)\geq 0&s\geq 0,\\ \int_{0}^{\infty}n^{0}(s)\mathrm{d}s=1.\end{cases} \tag{12}\]
Given the flexibility of the DDM model through the kernel \(\alpha\), in this article we will focus on the ITM and DDM equations, but the results and techniques presented in this work are also valid for Equation (12).
This article is devoted to two aspects of the elapsed time model: well-posedness and numerical analysis. On one hand, we aim to give a straightforward proof of well-posedness for both the ITM and DDM equations, improving the proofs given in [8, 11]. In the work of Pakdaman et al. [8], well-posedness was proved when the rate \(p\) is of the form (8) or (9), under some asymptotic conditions on the growth of \(p\) with respect to the discharging flux \(N\) (or the total activity \(X\) in the DDM equation), while the work of Canizo et al. [11] mainly focused on the weak interconnections regime for the ITM equation (1). In this context, we give a proof for a wider class of hazard rates \(p\) under some simple and general assumptions.
On the other hand, we aim to carry out a numerical analysis of these non-linear models by proving the convergence of a first-order explicit upwind scheme. Previous works on the numerical analysis of age-structured equations include [18, 19], and further generalizations to solutions in the space of positive regular measures \(\mathcal{M}^{+}(\mathbb{R}^{+})\) have been investigated in [20, 21, 22] through the particle method. This method consists in approximating measure solutions by a sum of Dirac masses, which are then transported according to the structured equation, so that by compactness through tightness of measures the approximation is shown to converge to a solution of the non-linear equation. Following the spirit of these ideas, we study the evolution of the finite-volume approximation and then, by an estimate of the bounded variation norm, we get the necessary compactness to conclude the convergence of the numerical method. This BV-estimate proving the correctness of the scheme is a novelty that was missing in the literature.
The article is organized as follows. In Section 2, we study the ITM equation by giving a proof of well-posedness in the inhibitory case (including the weak interconnections case) and explaining how the arguments can be extended to the general excitatory case. Then we proceed to explain the scheme to solve the ITM equation (1) numerically and prove the necessary estimates that ensure the convergence of the numerical method. In Section 3, we carry out the analogous analysis for the DDM equation by adapting the arguments applied to the ITM equation. Finally, in Section 4, we present numerical simulations to compare both the ITM and DDM equations under different choices of parameters, including the inhibitory and excitatory regimes. In particular, we consider different types of delay kernel \(\alpha\) in order to observe the limit cases when \(\alpha(t)\) approaches a Dirac mass, and the possible asymptotic behaviors thereof. This extends the numerical simulations of Pakdaman et al. [8], where mainly the exponential kernel in the DDM equation (10) was considered.
Instantaneous Transmission Model (ITM)
### Well-posedness of the ITM
In this subsection we prove that Equation (1) is well-posed in the inhibitory case and the weak interconnections regime. We improve the ideas of Pakdaman et al. [8] by considering more general forms for the rate \(p\), and we improve the weak-interconnections result of [11] by extending existence and uniqueness of a solution to the case where the absolute value in Condition (2) is dropped. The main idea of the proof is to propose the appropriate fixed point problem that eventually leads to a solution of the ITM equation (1) through the contraction principle.
**Theorem 2.1**.: _Consider a non-negative \(n^{0}\in L^{1}(\mathbb{R}^{+})\). Assume that \(p\in W^{1,\infty}(\mathbb{R}^{+}\times\mathbb{R}^{+})\) and let \(\gamma\coloneqq\sup_{s,N}\partial_{N}p(s,N)\) with \(\gamma\|n^{0}\|_{1}<1\), then Equation (1) has a unique solution \(n\in\mathcal{C}\left([0,\infty),L^{1}(\mathbb{R}^{+})\right)\) and \(N\in\mathcal{C}[0,\infty)\). Moreover \(n\) satisfies the mass-conservation property (3)._
**Remark 2.1**.: _We remark that the regularity of the rate \(p\) is not fundamental for the proof and Theorem 2.1 is still valid for a wider class of functions such as_
\[p(s,N)=\varphi(N)\chi_{\{s>\sigma(N)\}},\]
_with \(\varphi\) and \(\sigma\) Lipschitz bounded functions, as they are studied in [8, 14]. So under a similar conditions for the inhibitory and weakly excitatory regimes, we can apply the arguments used in Theorem 2.1 to get well-posedness in these cases._
_Moreover, from Remark 1.1 we know that in the excitatory case multiple solutions may arise [14] and the proof of the theorem can be replicated to prove the existence of solutions with \(n\in\mathcal{C}\left([0,T],L^{1}(\mathbb{R}^{+})\right)\) for some \(T>0\). Indeed, from the proof of Lemma 2.2 we can apply the implicit function theorem as long as the following invertibility condition holds_
\[\Psi(N(t),n(t,\cdot)):=1-\int_{0}^{\infty}\partial_{N}p(s,N(t))n(t,s)\mathrm{d }s\neq 0\qquad\forall t\in[0,T], \tag{13}\]
_where \(\Psi:\mathbb{R}^{+}\times L^{1}(\mathbb{R}^{+})\longmapsto\mathbb{R}\), so that we obtain existence of a solution (or possible branches of solutions) of Equation (1) defined locally in time by applying the arguments in the proof of Theorem (2.1). If \(\Psi(N(t^{*}),n(t^{*},\cdot))=0\) for some \(t^{*}>0\), then the continuity of solutions is not ensured and jump discontinuities might arise. We explore this aspect in the section of numerical simulations._
For the proof we need the following lemmas, which contain the key ideas used throughout this article. We start with the following result on the linear case.
**Lemma 2.1**.: _Assume that \(n^{0}\in L^{1}(\mathbb{R}^{+})\) and \(p\in L^{\infty}(\mathbb{R}^{+}\times\mathbb{R}^{+})\). Then for a given \(N\in\mathcal{C}[0,T]\), the linear equation_
\[\begin{cases}\partial_{t}n+\partial_{s}n+p(s,N(t))n=0&t>0,\;s>0,\\ n(t,s=0)=\int_{0}^{+\infty}p(s,N(t))n(t,s)\mathrm{d}s&t>0,\\ n(0,s)=n^{0}(s)\geq 0&s\geq 0,\end{cases} \tag{14}\]
_has a unique weak solution \(n\in\mathcal{C}([0,T],L^{1}(\mathbb{R}^{+}))\). Moreover \(n\) is non-negative and verifies the mass conservation property (3) for \(t\in[0,T]\)._
Proof.: For a proof see the linear theory developed in [23, 8, 11].
By using the implicit function theorem, we prove the following lemma that is the keystone in the proof of our theorem.
**Lemma 2.2**.: _Consider a non-negative function \(n^{0}\in L^{1}(\mathbb{R}^{+})\). Let \(\gamma\coloneqq\sup_{s,N}\partial_{N}p(s,N)\) and assume that \(\gamma\|n^{0}\|_{1}<1\). Then there exists a unique solution for \(N\) of the equation_
\[N=F(N)\coloneqq\int_{0}^{\infty}p(s,N)n^{0}(s)\,\mathrm{d}s, \tag{15}\]
_that we call \(N\coloneqq\psi(n^{0})\), where the map \(\psi\colon L^{1}(\mathbb{R}^{+})\mapsto\mathbb{R}\) satisfies the following estimate_
\[|\psi(n^{1})-\psi(n^{2})|\leq\frac{\|p\|_{\infty}}{1-\gamma\|n^{0}\|_{1}}\int_ {0}^{\infty}|n^{1}-n^{2}|(s)\,\mathrm{d}s \tag{16}\]
_for non-negative integrable functions \(n^{1},n^{2}\) with \(\|n^{1}\|_{1}=\|n^{2}\|_{1}=\|n^{0}\|_{1}\)._
Proof.: Observe that \(F\) is continuous and bounded with respect to \(N\). Indeed, for all \(N\) we have
\[0\leq F(N)\leq\|p\|_{\infty}\|n^{0}\|_{1}\]
and hence there exists \(N\in[0,\|p\|_{\infty}\|n^{0}\|_{1}]\) such that \(N=F(N)\). Moreover the function \(g(N)=N-F(N)\) is strictly increasing. Indeed,
\[g^{\prime}(N)=1-F^{\prime}(N)=1-\int_{0}^{\infty}\partial_{N}p(s,N)n^{0}(s)\, \mathrm{d}s\geq 1-\gamma\|n^{0}\|_{1}>0,\]
and therefore the equation \(N=F(N)\) has a unique solution, which we call \(\overline{N}=\psi(n^{0})\). Consider the set \(U\) defined by
\[U\coloneqq\{n\in L^{1}(\mathbb{R}^{+})\colon\gamma\|n\|_{1}<1\}\]
and the map \(G\colon L^{1}(\mathbb{R}^{+})\times\mathbb{R}\mapsto\mathbb{R}\) defined by
\[G(n,N)=N-\int_{0}^{\infty}p(s,N)n(s)\,\mathrm{d}s.\]
Observe that for \((n,N)\in U\times\mathbb{R}\) with \(G(n,N)=0\) we get \(\partial_{N}G(n,N)>0\). By the implicit function theorem, \(\psi\) is a differentiable map on \(U\). Moreover \(D\psi\colon L^{1}(\mathbb{R}^{+})\mapsto\mathbb{R}\) at a point \((n,N)\in U\times[0,\infty)\) is given by
\[D\psi[h]=\frac{\int_{0}^{\infty}p(s,N)h(s)\,\mathrm{d}s}{1-\int_{0}^{\infty}\partial_{N}p(s,N)n(s)\,\mathrm{d}s},\]
and we have the following estimate in the operator norm at the point \((n^{0},N^{0})\)
\[\|D\psi\|\leq\frac{\|p\|_{\infty}}{1-\gamma\|n^{0}\|_{1}},\]
thus for \(n^{1},n^{2}\) with the same norm as \(n^{0}\), the inequality (16) readily follows.
With this lemma, we continue the proof of Theorem 2.1.
Proof.: Consider \(T>0\) and a fixed non-negative \(N\in\mathcal{C}[0,T]\). For this function \(N\) we define \(n[N]\in\mathcal{C}([0,T],L^{1}(\mathbb{R}^{+}))\) as the weak solution of the linear equation (14), which can be expressed through the method of characteristics
\[n(t,s)=n^{0}(s-t)e^{-\int_{0}^{t}p(s-t+t^{\prime},N(t^{\prime}))\mathrm{d}t^{ \prime}}\chi_{\{s>t\}}+N(t-s)e^{-\int_{0}^{s}p(s^{\prime},N(t-s+s^{\prime})) \mathrm{d}s^{\prime}}\chi_{\{t>s\}}.\]
From the mass conservation property we know that \(\|n[N](t)\|_{1}=\|n^{0}\|_{1}\).
On the other hand, for a given \(n\in\mathcal{C}([0,T],L^{1}(\mathbb{R}^{+}))\), let \(\psi(n)\in\mathcal{C}[0,T]\) denote the unique solution \(X\in\mathcal{C}[0,T]\) of the equation
\[X=\int_{0}^{\infty}p(s,X)n(t,s)\,\mathrm{d}s.\]
Under this setting we get a solution of the non-linear equation (1) if and only if we find \(N\in\mathcal{C}[0,T]\) that is a solution of the equation
\[N=H(N)\coloneqq\psi\left(n[N]\right),\]
where \(H\colon\mathcal{C}[0,T]\mapsto\mathcal{C}[0,T]\).
We assert that for \(T\) small enough the map \(H\) is a contraction. For non-negative functions \(N_{1},N_{2}\in\mathcal{C}[0,T]\), the method of characteristics yields the following inequality
\[\int_{0}^{\infty}|n[N_{1}]-n[N_{2}]|(t,s)\,\mathrm{d}s\leq A_{1}+A_{2}+A_{3},\]
where \(A_{1},A_{2},A_{3}\) are given by
\[A_{1} = \int_{0}^{\infty}n^{0}(s)\left|e^{-\int_{0}^{t}p(s+t^{\prime},N_{1 }(t^{\prime}))\mathrm{d}t^{\prime}}-e^{-\int_{0}^{t}p(s+t^{\prime},N_{2}(t^{ \prime}))\mathrm{d}t^{\prime}}\right|\,\mathrm{d}s\] \[A_{2} = \int_{0}^{t}|N_{1}-N_{2}|(t-s)e^{-\int_{0}^{s}p(s^{\prime},N_{1}( t-s+s^{\prime}))\mathrm{d}s^{\prime}}\mathrm{d}s\] \[A_{3} = \int_{0}^{t}N_{2}(t-s)\left|e^{-\int_{0}^{s}p(s^{\prime},N_{1}(t- s+s^{\prime}))\mathrm{d}s^{\prime}}-e^{-\int_{0}^{s}p(s^{\prime},N_{2}(t-s+s^{ \prime}))\mathrm{d}s^{\prime}}\right|\mathrm{d}s.\]
We proceed by estimating each term. For simplicity we write \(n_{1}=n[N_{1}],n_{2}=n[N_{2}]\), so that for \(A_{1}\) we have
\[A_{1}\leq\int_{0}^{\infty}n^{0}(s)\int_{0}^{t}|p(s+t^{\prime},N_{1}(t^{\prime} ))-p(s+t^{\prime},N_{2}(t^{\prime}))|\,\mathrm{d}t^{\prime}\,\mathrm{d}s\leq T \|n^{0}\|_{1}\|\partial_{N}p\|_{\infty}\|N_{1}-N_{2}\|_{\infty},\]
while for \(A_{2}\) we have
\[A_{2}\leq\int_{0}^{t}|N_{1}-N_{2}|(t-s)\mathrm{d}s\leq T\|N_{1}-N_{2}\|_{ \infty},\]
and for \(A_{3}\) we get
\[A_{3} \leq\|p\|_{\infty}\|n^{0}\|_{1}\int_{0}^{t}\int_{0}^{s}|p(s^{ \prime},N_{1}(t-s+s^{\prime}))-p(s^{\prime},N_{2}(t-s+s^{\prime}))|\mathrm{d}s^ {\prime}\,\mathrm{d}s\] \[\leq\frac{T^{2}}{2}\|p\|_{\infty}\|n^{0}\|_{1}\|\partial_{N}p\|_{ \infty}\|N_{1}-N_{2}\|_{\infty}.\]
Combining these inequalities, we finally obtain
\[\sup_{t\in[0,T]}\|n_{1}(t,\cdot)-n_{2}(t,\cdot)\|_{1}\leq\left(\|n^{0}\|_{1}\| \partial_{N}p\|_{\infty}+\frac{T}{2}\|p\|_{\infty}\|n^{0}\|_{1}\|\partial_{N}p \|_{\infty}+1\right)T\|N_{1}-N_{2}\|_{\infty}. \tag{17}\]
Since the mass is conserved, we apply Lemma 2.2 to get
\[\sup_{t\in[0,T]}|\psi(n^{1}(t,\cdot))-\psi(n^{2}(t,\cdot))|\leq\frac{\|p\|_{ \infty}}{1-\gamma\|n^{0}\|_{1}}\sup_{t\in[0,T]}\|n^{1}(t,\cdot)-n^{2}(t,\cdot) \|_{1}, \tag{18}\]
and therefore, we deduce from (17) and (18) that the following estimate holds for \(H\)
\[\|H(N_{1})-H(N_{2})\|_{\infty}\leq\frac{\|p\|_{\infty}T}{1-\gamma\|n^{0}\|_{1} }\left(\frac{T}{2}\|p\|_{\infty}\|n^{0}\|_{1}\|\partial_{N}p\|_{\infty}+\|n^{0 }\|_{1}\|\partial_{N}p\|_{\infty}+1\right)\|N_{1}-N_{2}\|_{\infty},\]
so that for \(T>0\) small enough we get a unique fixed point of \(H\) by the contraction principle, implying the existence of a unique solution \(n\in\mathcal{C}\left([0,T],L^{1}(\mathbb{R}^{+})\right)\) with \(N\in\mathcal{C}[0,T]\) of Equation (1). Since the mass is conserved, we can iterate this argument to conclude that the solution is indeed defined for all \(T>0\).
### Numerical scheme for ITM
In this subsection, a first-order explicit upwind scheme to approximate the ITM equation (1) is introduced, based on the finite-volume framework [24, 25, 26]. We consider a uniform discretization of \(\Omega=[0,T]\times[0,+\infty)\) with cells \(I_{j}=[s_{j-\frac{1}{2}},s_{j+\frac{1}{2}})\), interface points \(s_{j+\frac{1}{2}}=j\Delta s>0\) and centers \(s_{j}=(j-\frac{1}{2})\Delta s\), \(j\in\mathbb{N}\), such that \([0,+\infty)=\cup_{j\in\mathbb{N}}I_{j}\), and \(I^{m}=[t^{m-1},t^{m}]\), \(t^{m}=m\Delta t>0\), \(m\in\mathbb{N}\), \(T=M\Delta t\), such that \([0,T]=\cup_{m=1}^{M}I^{m}\). Let \(n_{j}^{m}=\frac{1}{\Delta s}\int_{I_{j}}n(t^{m},s)\mathrm{d}s\) be the cell average of \(n(t,s)\) at time \(t^{m}\) in the cell \(I_{j}\); then, applying an explicit finite-volume approximation with an upwind discretization of the convective term in Equation (1), we obtain
\[\frac{n_{j}^{m+1}-n_{j}^{m}}{\Delta t}+\frac{n_{j}^{m}-n_{j-1}^{m}}{\Delta s} +p(s_{j},N(t^{m}))n_{j}^{m}=0,\quad j\in\mathbb{N},\,m\in\mathbb{N}, \tag{19}\]
and
\[N(t^{m})=n(t^{m},0)\approx\Delta s\sum_{j\in\mathbb{N}}p(s_{j},N(t^{m}))n_{j} ^{m},\quad m\in\mathbb{N}.\]
Now, if we define \(N^{m}:=N(t^{m})\), the partial differential equation in (1) can be approximated by the explicit upwind scheme
\[n_{j}^{m+1}=n_{j}^{m}-\frac{\Delta t}{\Delta s}(n_{j}^{m}-n_{j-1}^{m})-\Delta tp (s_{j},N^{m})n_{j}^{m},\quad j\in\mathbb{N},\,m\in\mathbb{N}. \tag{20}\]
In particular for \(j=1\) we have
\[n_{1}^{m+1}=n_{1}^{m}-\frac{\Delta t}{\Delta s}(n_{1}^{m}-N^{m})-\Delta tp(s_{1 },N^{m})n_{1}^{m}.\]
The explicit upwind scheme (20) is stable if the following CFL condition holds
\[1-\Delta t\left(\frac{1}{\Delta s}+p(s_{j},N^{m})\right)\geq 0,\quad j\in \mathbb{N},\,m\in\mathbb{N}.\]
In that regard, the numerical scheme for \(n_{j}^{m}\) and \(N^{m}\) can be summarized in the next algorithm.
**Algorithm 2.1**.: _ITM numerical scheme_
_Input: Approximate initial data \(\{n_{j}^{0}\}_{j\in\mathbb{N}}\)_
_Solve for \(N^{0}\)_
\[N^{0}=\sum_{j\in\mathbb{N}}\Delta sp(s_{j},N^{0})n_{j}^{0}. \tag{21}\]
_Choose \(\Delta t\) such that_
\[\Delta t\leq\left(\frac{1}{\Delta s}+\|p\|_{\infty}\right)^{-1}. \tag{22}\]
**For \(m\in\mathbb{N}_{0}\) do**
**For \(j\in\mathbb{N}\) do**
\[n_{j}^{m+1}\leftarrow\begin{cases}n_{1}^{m}-\frac{\Delta t}{\Delta s}(n_{1}^{ m}-N^{m})-\Delta tp(s_{1},N^{m})n_{1}^{m}&j=1,\\ n_{j}^{m}-\frac{\Delta t}{\Delta s}(n_{j}^{m}-n_{j-1}^{m})-\Delta tp(s_{j},N^{m })n_{j}^{m}&j>1.\end{cases}\]
**end**
_Solve for \(N^{m+1}\)_
\[N^{m+1}=\sum_{j\in\mathbb{N}}\Delta sp(s_{j},N^{m+1})n_{j}^{m+1}. \tag{23}\]
**end**
_Output: Approximate solution \(\{n_{j}^{m+1}\}_{j\in\mathbb{N}}\) and \(N^{m+1}\) at time \(t^{m+1}=(m+1)\Delta t\)_
Equation (23) for \(N^{m+1}\) can be solved with different numerical methods such as Newton-Raphson, bisection or inverse quadratic interpolation. In particular, for the inhibitory case and the weak interconnections regime, the solution \(N^{m+1}\) of Equation (23) can be approximated in terms of \(N^{m}\) through the following formula if \(\Delta t\) is small enough:
\[N^{m+1}=\sum_{j\in\mathbb{N}}\Delta sp(s_{j},N^{m})n_{j}^{m+1}.\]
For simplicity in the estimates we assume that we can compute the solution of Equation (23) exactly, but the results remain valid if we take into account a specific method to get an approximation.
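For concreteness, the following minimal NumPy sketch implements Algorithm 2.1 together with a fixed-point solve of (23); the inhibitory rate, grid sizes and initial datum below are illustrative assumptions, not prescriptions of the scheme.

```python
import numpy as np

# A minimal sketch of Algorithm 2.1; the hazard rate, grid and initial
# datum are illustrative assumptions.
def p(s, N):
    return np.exp(-9.0 * N) * (s > 0.5)        # inhibitory rate, ||p||_inf = 1

ds, S, T = 0.02, 20.0, 10.0
s = (np.arange(int(S / ds)) + 0.5) * ds        # cell centers s_j = (j - 1/2) ds
n = 0.5 * np.exp(-np.maximum(s - 1.0, 0.0))    # cell averages n_j^0
dt = 0.9 / (1.0 / ds + 1.0)                    # CFL condition (22)

def solve_N(n, N_start):
    # Fixed-point iteration for Eq. (23): N = ds * sum_j p(s_j, N) n_j,
    # a contraction in the inhibitory / weak-interconnection regime.
    N = N_start
    for _ in range(100):
        N_next = ds * np.sum(p(s, N) * n)
        if abs(N_next - N) < 1e-12:
            break
        N = N_next
    return N_next

N = solve_N(n, 0.0)                            # Eq. (21)
for m in range(int(T / dt)):
    flux = np.empty_like(n)
    flux[0] = n[0] - N                         # boundary value n(t, 0) = N^m
    flux[1:] = n[1:] - n[:-1]
    n = n - (dt / ds) * flux - dt * p(s, N) * n  # upwind update (20)
    N = solve_N(n, N)                          # Eq. (23) at the new time level
```

On a truncated domain the mass \(\Delta s\sum_{j}n_{j}^{m}\) is conserved up to the (negligible) outflow at the right boundary, in agreement with Lemma 2.3.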
**Remark 2.2**.: _Analogously to Remark 1.1, we will prove in this section that Equations (21) and (23) have a unique solution in the inhibitory case and the weak interconnections regime. In the excitatory case, we may have multiple solutions for \(N^{0}\) that lead to different branches of numerical solutions for the ITM equation (1) defined in some interval of time. Depending on how we calculate the solution of the fixed point problem (23), the numerical method will approximate one of the multiple possible solutions._
In order to prove the convergence of the upwind scheme, we follow the ideas of the previous subsection on well-posedness and prove a BV-estimate that will be crucial in the analysis. For simplicity we assume that the initial data \(n^{0}\) is compactly supported, but the theoretical results still hold when the initial data \(n^{0}\) vanishes at infinity.
We start with some lemmas that will be useful in the sequel.
**Lemma 2.3**.: _(\(L^{1}\)-norm) The numerical approximation obtained with Algorithm 2.1 satisfies_
\[\|n^{m}\|_{1}:=\sum_{j\in\mathbb{N}}\Delta sn_{j}^{m}=\|n^{0}\|_{1},\quad m\in \mathbb{N}.\]
Proof.: Multiplying equation (20) by \(\Delta s\), summing over \(j\in\mathbb{N}\) and using the boundary condition, we obtain

\[\sum_{j\in\mathbb{N}}\Delta sn_{j}^{m+1}=\sum_{j\in\mathbb{N}}\Delta sn_{j}^{m }-\Delta t\left(N^{m}-\sum_{j\in\mathbb{N}}\Delta sp(s_{j},N^{m})n_{j}^{m} \right),\]

and the term in parentheses vanishes by the definition of \(N^{m}\) in (23).
**Lemma 2.4**.: _(\(L^{\infty}\)-norm) Assume the initial data \(n^{0}\in L^{\infty}(\mathbb{R}^{+})\) is non-negative. Then under the CFL restriction (22) the numerical solution obtained with Algorithm 2.1 satisfies_
\[0\leq n_{j}^{m}\leq\|n^{0}\|_{\infty},\ \ 0\leq N^{m}\leq\|p\|_{\infty}\|n^{0} \|_{1}\ \ \text{for all}\,j,m\in\mathbb{N}.\]
Proof.: By the CFL condition, observe that
\[0\leq n_{j}^{1}=\left(1-\Delta t\left(\frac{1}{\Delta s}+p(s_{j},N^{0})\right) \right)n_{j}^{0}+\frac{\Delta t}{\Delta s}n_{j-1}^{0}\leq\|n^{0}\|_{\infty}\ \ \ \text{for all}\,j\in\mathbb{N},\]
and by induction in \(m\) we conclude that \(0\leq n_{j}^{m}\leq\|n^{0}\|_{\infty}\) for all \(j,m\in\mathbb{N}\). Now for the estimates involving \(N^{m}\), we apply the previous lemma to conclude the following inequality
\[0\leq N^{m}\leq\|p\|_{\infty}\sum_{j\in\mathbb{N}}\Delta sn_{j}^{m}=\|p\|_{ \infty}\sum_{j\in\mathbb{N}}\Delta sn_{j}^{0}=\|p\|_{\infty}\|n^{0}\|_{1},\]
and we get the desired result.
We now prove that, at each iteration in \(m\) of the numerical method, Equation (23) has indeed a unique solution \(N^{m}\) in the inhibitory case and the weak interconnection regime. The following result corresponds to the discrete version of Lemma 2.2.
**Lemma 2.5**.: _Consider a discretization \(n^{0}=(n_{j}^{0})_{j\in\mathbb{N}}\) of non-negative terms. Let \(\gamma\coloneqq\sup_{s,N}\partial_{N}p(s,N)\) and assume that \(\gamma\|n^{0}\|_{1}<1\). Then there exists a unique solution for \(N\) of the equation_
\[N=F(N)\coloneqq\sum_{j\in\mathbb{N}}\Delta sp(s_{j},N)n_{j}^{0},\]
_that we call \(N\coloneqq\psi(n^{0})\). Moreover, the map \(\psi\) satisfies the following estimate_
\[|\psi(n^{1})-\psi(n^{2})|\leq\frac{\|p\|_{\infty}}{1-\gamma\|n^{0}\|_{1}}\sum_ {j\in\mathbb{N}}\Delta s|n_{j}^{1}-n_{j}^{2}| \tag{24}\]
_for sequences \(n^{1}=(n_{j}^{1})_{j\in\mathbb{N}}\), \(n^{2}=(n_{j}^{2})_{j\in\mathbb{N}}\) of non-negative terms with \(\|n^{1}\|_{1}=\|n^{2}\|_{1}=\|n^{0}\|_{1}\)._
Proof.: The proof is similar to that of Lemma 2.2, replacing the integral terms with discrete summations.
**Remark 2.3**.: _In the excitatory case, the ideas of the previous lemma can be applied as long as the following invertibility condition holds_
\[\Psi(N,n):=1-\Delta s\sum_{j\in\mathbb{N}}\partial_{N}p(s_{j},N)n_{j}\neq 0, \tag{25}\]
_so that we can extend the numerical solution through the implicit function theorem in order to approximate a continuous solution of the ITM equation (1) in some interval \([0,T]\). This is the analogue of the invertibility condition (13)._
We now establish some BV-lemmas on the discretization given by the upwind scheme to prove that the numerical method approximates the solution of Equation (1) when \(\Delta t\) and \(\Delta s\) converge to zero. For the discretization \(n=(n_{j})_{j\in\mathbb{N}}\) we define the total variation as
\[TV(n):=\sum_{j=0}^{\infty}|n_{j+1}-n_{j}|. \tag{26}\]
In this context, we prove the following key lemma.
**Lemma 2.6**.: _(**BV**-estimate) Assume that \(TV(n^{0})<\infty\) and the CFL condition (22). Then there exist constants \(C_{1},C_{2}>0\) (depending only on \(p\) and the norms of \(n^{0}\)) such that for \(m\in\mathbb{N}\) we have_
\[TV(n^{m})\leq e^{C_{1}T}TV(n^{0})+C_{2}(e^{C_{1}T}-1), \tag{27}\]
_with \(T=m\Delta t\) and \(TV(n^{m})=\sum_{j=0}^{\infty}|n_{j+1}^{m}-n_{j}^{m}|\)._
Proof.: Using the notation \(\Delta^{+}n_{j}^{m}=n_{j+1}^{m}-n_{j}^{m}\), we have
\[\Delta^{+}n_{j}^{m+1} = \Delta^{+}n_{j}^{m}-\frac{\Delta t}{\Delta s}\left(\Delta^{+}n_{ j}^{m}-\Delta^{+}n_{j-1}^{m}\right)-\Delta t\left(p(s_{j+1},N^{m})n_{j+1}^{m}-p(s_ {j},N^{m})n_{j}^{m}\right)\] \[= \Delta^{+}n_{j}^{m}-\frac{\Delta t}{\Delta s}\left(\Delta^{+}n_{ j}^{m}-\Delta^{+}n_{j-1}^{m}\right)-\Delta tp(s_{j+1},N^{m})\Delta^{+}n_{j}^{m}\] \[-\Delta tn_{j}^{m}(p(s_{j+1},N^{m})-p(s_{j},N^{m})).\]
Now, applying (8), taking absolute values and using the CFL condition (22), we obtain
\[|\Delta^{+}n_{j}^{m+1}|\leq\left(1-\Delta t\left(\frac{1}{\Delta s}+p(s_{j+1 },N^{m})\right)\right)|\Delta^{+}n_{j}^{m}|+\frac{\Delta t}{\Delta s}|\Delta^ {+}n_{j-1}^{m}|+\Delta t\Delta s\|\partial_{s}p\|_{\infty}n_{j}^{m}.\]
Now by summing over all \(j\geq 1\), we have
\[\sum_{j=1}^{\infty}|\Delta^{+}n_{j}^{m+1}|\leq\sum_{j=1}^{\infty}|\Delta^{+}n _{j}^{m}|+\frac{\Delta t}{\Delta s}|\Delta^{+}n_{0}^{m}|-\Delta t\sum_{j=1}^{ \infty}p(s_{j+1},N^{m})|\Delta^{+}n_{j}^{m}|+\Delta t\Delta s\|\partial_{s}p\| _{\infty}\sum_{j=1}^{\infty}n_{j}^{m},\]
and from the mass conservation we deduce
\[\sum_{j=1}^{\infty}|\Delta^{+}n_{j}^{m+1}|\leq\sum_{j=1}^{\infty}|\Delta^{+}n _{j}^{m}|+\frac{\Delta t}{\Delta s}|\Delta^{+}n_{0}^{m}|+\Delta t\|\partial_{ s}p\|_{\infty}\|n^{0}\|_{1}. \tag{28}\]
On the other hand, by assuming \(\Delta t\) small enough we have
\[\begin{split}|\Delta^{+}n_{0}^{m+1}|&=|n_{1}^{m+1}-N^{ m+1}|\\ &\leq\left(1-\frac{\Delta t}{\Delta s}\right)|n_{1}^{m}-N^{m}|+|N^{m+1}-N^{m}|+\Delta tp (s_{1},N^{m})n_{1}^{m}\\ &\leq\left(1-\frac{\Delta t}{\Delta s}\right)|\Delta^{+}n_{0}^{m} |+|N^{m+1}-N^{m}|+\Delta t\|p\|_{\infty}\|n^{0}\|_{\infty}.\end{split} \tag{29}\]
Furthermore, from Lemma 2.5 there exists \(C_{1}>0\), depending only on \(p\) and \(\|n^{0}\|_{1}\), such that
\[|N^{m+1}-N^{m}| = \left|\psi(n^{m+1})-\psi(n^{m})\right| \tag{30}\] \[\leq C_{1}\sum_{j\in\mathbb{N}}\Delta s|n_{j}^{m+1}-n_{j}^{m}|\] \[\leq C_{1}\Delta t\left(\sum_{j\in\mathbb{N}}|n_{j}^{m}-n_{j-1}^{m}|+ \|p\|_{\infty}\sum_{j\in\mathbb{N}}\Delta sn_{j}^{m}\right),\]
and replacing (30) in (29), we obtain from the mass conservation
\[\begin{split}|\Delta^{+}n_{0}^{m+1}|&\leq\left(1- \frac{\Delta t}{\Delta s}\right)|\Delta^{+}n_{0}^{m}|+C_{1}\Delta t\left(\sum_ {j\in\mathbb{N}}|\Delta^{+}n_{j-1}^{m}|+\|p\|_{\infty}\|n^{0}\|_{1}\right)\\ &\qquad+\Delta t\|p\|_{\infty}\|n^{0}\|_{\infty}.\end{split} \tag{31}\]
Now, summing (28) and (31), we deduce
\[\sum_{j=0}^{\infty}|\Delta^{+}n_{j}^{m+1}|\leq(1+C_{1}\Delta t)\sum_{j=0}^{ \infty}|\Delta^{+}n_{j}^{m}|+C_{2}\Delta t, \tag{32}\]
with \(C_{2}:=\|\partial_{s}p\|_{\infty}\|n^{0}\|_{1}+\|p\|_{\infty}C_{1}\|n^{0}\|_{ 1}+\|p\|_{\infty}\|n^{0}\|_{\infty}\).
Finally, proceeding recursively on \(m\), we obtain
\[\sum_{j=0}^{\infty}|\Delta^{+}n_{j}^{m}|\leq(1+C_{1}\Delta t)^{m}\sum_{j=0}^{ \infty}|\Delta^{+}n_{j}^{0}|+\frac{C_{2}}{C_{1}}\left((1+C_{1}\Delta t)^{m}-1 \right),\]
and the estimate (27) readily follows.
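The quantity controlled by (27) is cheap to monitor along a run of Algorithm 2.1; a minimal sketch, assuming the cell averages are stored as a NumPy array:

```python
import numpy as np

def total_variation(n, N):
    # Discrete total variation (26) of the numerical solution, where the
    # j = 0 jump compares the first cell with the boundary value
    # n(t, 0) = N, matching the term |Delta^+ n_0^m| used in the proof.
    return abs(n[0] - N) + float(np.sum(np.abs(np.diff(n))))
```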
**Remark 2.4**.: _The previous lemma is also valid for the case when the hazard rate is of the form_
\[p(s,N)=\varphi(N)\chi_{\{s>\sigma(N)\}}\]
_with \(\varphi\) and \(\sigma\) Lipschitz bounded functions and satisfying the analogous hypothesis on \(\partial_{N}p\). Indeed, the inequality in (28) is replaced by_
\[\sum_{j=1}^{\infty}|\Delta^{+}n_{j}^{m+1}| \leq\sum_{j=1}^{\infty}|\Delta^{+}n_{j}^{m}|+\frac{\Delta t}{\Delta s }|\Delta^{+}n_{0}^{m}|+\Delta t\varphi(N^{m})\sum_{j\in\mathbb{N}}n_{j}^{m}\left| \chi_{\{s_{j+1}>\sigma(N^{m})\}}-\chi_{\{s_{j}>\sigma(N^{m})\}}\right|\] \[\leq\sum_{j=1}^{\infty}|\Delta^{+}n_{j}^{m}|+\frac{\Delta t}{ \Delta s}|\Delta^{+}n_{0}^{m}|+\Delta t\|p\|_{\infty}n_{j_{m}}^{m}\] \[\leq\sum_{j=1}^{\infty}|\Delta^{+}n_{j}^{m}|+\frac{\Delta t}{ \Delta s}|\Delta^{+}n_{0}^{m}|+\Delta t\|p\|_{\infty}\|n^{0}\|_{\infty},\]
_where \(j_{m}\coloneqq\min\{j\in\mathbb{N}\colon s_{j}\geq\sigma(N^{m})\}=\left\lceil \frac{\sigma(N^{m})}{\Delta s}+\frac{1}{2}\right\rceil\) and the rest of the proof is analogous._
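In an implementation, the index \(j_{m}\) is obtained directly from its closed form; a small sketch:

```python
import math

# j_m = min{ j : s_j >= sigma } = ceil(sigma / ds + 1/2) from Remark 2.4,
# for the cell centers s_j = (j - 1/2) * ds (1-based index j).
def first_active_cell(sigma_value, ds):
    return math.ceil(sigma_value / ds + 0.5)
```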
Now we prove that the numerical approximation of the solution \(n(t,s)\) of Equation (1), which is constructed by a simple piecewise-linear interpolation, has a limit when the time step \(\Delta t\) and the age step \(\Delta s\) converge to \(0\). For simplicity we assume that the initial data \(n^{0}\in BV(\mathbb{R}^{+})\) has compact support.
**Lemma 2.7**.: _Assume that \(n^{0}\in BV(\mathbb{R}^{+})\) is compactly supported and the rate \(p\) satisfies the hypothesis of Theorem 2.1. Consider the function \(n_{\Delta t,\Delta s}\in\mathcal{C}\left([0,T],L^{1}(\mathbb{R}^{+})\right)\) defined by_
\[n_{\Delta t,\Delta s}(t,s)\coloneqq\frac{t^{m}-t}{\Delta t}\sum_{j\in\mathbb{ N}}n_{j}^{m-1}\chi_{[s_{j-\frac{1}{2}},s_{j+\frac{1}{2}}]}(s)+\frac{t-t^{m-1}}{ \Delta t}\sum_{j\in\mathbb{N}}n_{j}^{m}\chi_{[s_{j-\frac{1}{2}},s_{j+\frac{1}{ 2}}]}(s)\;\text{if}\;t\in[t^{m-1},t^{m}]. \tag{33}\]
_Then there exists a sub-sequence \((\Delta t_{k},\Delta s_{k})\to(0,0)\) when \(k\to\infty\) and a function \(\overline{n}(t,s)\) such that \(n_{\Delta t_{k},\Delta s_{k}}\to\overline{n}\) in \(\mathcal{C}\left([0,T],L^{1}(\mathbb{R}^{+})\right)\). Moreover, if we define the function \(N_{\Delta t,\Delta s}(t)\) as the unique solution of the equation_
\[N(t)=\int_{0}^{\infty}p(s,N(t))n_{\Delta t,\Delta s}(t,s)\,\mathrm{d}s,\]
_then there exists \(\overline{N}\) such that \(N_{\Delta t_{k},\Delta s_{k}}\to\overline{N}\) in \(\mathcal{C}[0,T]\) and \(\overline{N}\) is a solution of the equation_
\[\overline{N}(t)=\int_{0}^{\infty}p(s,\overline{N}(t))\overline{n}(t,s)\, \mathrm{d}s. \tag{34}\]
Proof.: The idea is to apply the compactness criterion in \(\mathcal{C}([0,T],L^{1}(\mathbb{R}^{+}))\) in order to extract a convergent sub-sequence of \(n_{\Delta t,\Delta s}\) when \(\Delta t\) and \(\Delta s\) converge to \(0\). From Lemma 2.3, we deduce that
\[\|n_{\Delta t,\Delta s}(t,\cdot)\|_{1}=\|n^{0}\|_{1},\qquad\forall t\in[0,T].\]
We prove that the sequence \(n_{\Delta t,\Delta s}\) has a modulus of continuity in \(L^{1}\) in both variables. In the variable \(s\), for \(t\in[t^{m-1},t^{m}]\) and \(|h|<\varepsilon\) the following estimate holds

\[\int_{0}^{\infty}|n_{\Delta t,\Delta s}(t,s+h)-n_{\Delta t,\Delta s}(t,s)|\, \mathrm{d}s\leq\frac{t^{m}-t}{\Delta t}|h|\,TV(n^{m-1})+\frac{t-t^{m-1}}{ \Delta t}|h|\,TV(n^{m})\leq\varepsilon\left(e^{C_{1}T}TV(n^{0})+C_{2}(e^{C_{1} T}-1)\right),\]

where the last inequality follows from the BV-estimate of Lemma 2.6.
Now we prove that we have a modulus of continuity in the variable \(t\). Consider \(t_{1},t_{2}\in[0,T]\) and without loss of generality assume that \(t_{1},t_{2}\in[t^{m-1},t^{m}]\) for some \(m\in\mathbb{N}\). Then from Lemma 2.6 we have the following estimate
\[\int_{0}^{\infty}|n_{\Delta t,\Delta s}(t_{1},s)-n_{\Delta t,\Delta s }(t_{2},s)|\,\mathrm{d}s \leq|t_{1}-t_{2}|\sum_{j\in\mathbb{N}}\Delta s\frac{|n_{j}^{m}-n _{j}^{m-1}|}{\Delta t}\] \[\leq|t_{1}-t_{2}|\left(\sum_{j\in\mathbb{N}}|n_{j}^{m-1}-n_{j-1} ^{m-1}|+\Delta sp(s_{j},N^{m-1})n_{j}^{m-1}\right)\] \[\leq|t_{1}-t_{2}|\left(C_{T}\,TV(n^{0})+\|p\|_{\infty}\|n^{0}\|_{ 1}\right), \tag{35}\]
thus we have the modulus of continuity in time.
Since \(n^{0}\) has its support contained in some interval \([0,R]\), there exists \(K\in\mathbb{N}\) such that \(n_{j}^{0}=0\) for \(j\geq K\) and \(s_{K}=(K-\frac{1}{2})\Delta s\geq R\). From the numerical scheme we deduce that \(n_{j}^{m}=0\) for \(j\geq K+M\), which implies that \(n_{\Delta t,\Delta s}\) vanishes for
\[s\geq s_{K+M}=(K+M-\frac{1}{2})\Delta s\geq R+M\Delta t\,\frac{\Delta s}{\Delta t }\geq R+T,\]
so \(n_{\Delta t,\Delta s}\) also has compact support. From the estimates on the modulus of continuity, we can apply the compactness criterion in \(\mathcal{C}([0,T],L^{1}(\mathbb{R}^{+}))\) in order to extract a convergent sub-sequence of \(n_{\Delta t,\Delta s}\) when \(\Delta t\) and \(\Delta s\) converge to \(0\).
Observe now that \(N_{\Delta t,\Delta s}\in\mathcal{C}[0,T]\). Indeed from Lemma 2.2, for each \(t\in[0,T]\) we get that
\[N_{\Delta t,\Delta s}(t)=\psi(n_{\Delta t,\Delta s})\]
and from the continuity of \(\psi\) and \(n_{\Delta t,\Delta s}\in\mathcal{C}([0,T],L^{1}(\mathbb{R}^{+}))\) we obtain that \(N_{\Delta t,\Delta s}\in\mathcal{C}[0,T]\). Since \(\|n_{\Delta t,\Delta s}(t,\cdot)\|_{1}=\|n^{0}\|_{1}\) for all \(t\in[0,T]\), we have the following estimate
\[\|N_{\Delta t,\Delta s}\|_{\infty}\leq\|p\|_{\infty}\|n^{0}\|_{1},\]
so that \(N_{\Delta t,\Delta s}\) is uniformly bounded.
We now prove that the family \(N_{\Delta t,\Delta s}\) is equicontinuous. For \(t_{1},t_{2}\in[0,T]\) we deduce from Lemma 2.2 and estimate (35) the following inequality
\[|N_{\Delta t,\Delta s}(t_{1})-N_{\Delta t,\Delta s}(t_{2})| =|\psi(n_{\Delta t,\Delta s})(t_{1})-\psi(n_{\Delta t,\Delta s})( t_{2})|\] \[\leq C\int_{0}^{\infty}|n_{\Delta t,\Delta s}(t_{1},s)-n_{\Delta t,\Delta s}(t_{2},s)|\,\mathrm{d}s\] \[\leq|t_{1}-t_{2}|\left(C_{T}\,TV(n^{0})+\|p\|_{\infty}\|n^{0}\| _{1}\right),\]
where \(C\) is a constant independent of \(\Delta t\) and \(\Delta s\). Therefore the family \(N_{\Delta t,\Delta s}\) is equicontinuous in \([0,T]\) and we can extract a convergent sub-sequence by applying the Arzelà-Ascoli theorem. Finally, by passing to the limit in \(\Delta t\) and \(\Delta s\) we obtain Equation (34).
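In code, the reconstruction (33) is a plain linear interpolation between two stored time levels; a minimal sketch:

```python
import numpy as np

# Piecewise-linear-in-time reconstruction (33): returns the cell values of
# n_{dt,ds}(t, .) for t in [t^{m-1}, t^m], given the levels n^{m-1}, n^m.
def reconstruct(n_prev, n_curr, t, t_prev, dt):
    theta = (t - t_prev) / dt       # theta = 0 at t^{m-1}, theta = 1 at t^m
    return (1.0 - theta) * np.asarray(n_prev) + theta * np.asarray(n_curr)
```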
With the previous lemmas, we are now ready to prove the following theorem on the convergence of the numerical scheme for the ITM equation (1).
**Theorem 2.2** (Convergence of the numerical scheme).: _Assume that \(n^{0}\in BV(\mathbb{R}^{+})\) is compactly supported and the rate \(p\) satisfies the hypothesis of Theorem 2.1. Then for all \(T>0\), the numerical approximation (33) given by the upwind scheme converges to the unique weak solution \(n\in\mathcal{C}([0,T],L^{1}(\mathbb{R}^{+}))\) of the ITM equation (1)._
Proof.: Consider the functions \(n_{\Delta t,\Delta s}\) and \(N_{\Delta t,\Delta s}\) defined in Lemma 2.7. From this result we obtain Equation (34). Now we take a test function \(\varphi\in\mathcal{C}^{1}_{c}([0,T)\times[0,\infty))\). If we multiply Equation (19) by \(\varphi_{j}^{m}\coloneqq\varphi(t^{m},s_{j})\) and compute the discrete integral, we get
\[\sum_{m=0}^{M}\sum_{j\in\mathbb{N}}\Delta s(n_{j}^{m+1}-n_{j}^{m})\varphi_{j}^ {m}+\sum_{m=0}^{M}\sum_{j\in\mathbb{N}}\Delta t(n_{j}^{m}-n_{j-1}^{m})\varphi_ {j}^{m}+\sum_{m=0}^{M}\sum_{j\in\mathbb{N}}\Delta t\Delta sp(s_{j},N^{m})n_{j} ^{m}\varphi_{j}^{m}=0. \tag{36}\]
We study each term of Equation (36). From summation by parts we have the following identity for the first term
\[\sum_{m=0}^{M}\sum_{j\in\mathbb{N}}\Delta s(n_{j}^{m+1}-n_{j}^{m})\varphi_{j}^ {m}=-\sum_{j\in\mathbb{N}}\Delta s\varphi_{j}^{0}n_{j}^{0}-\sum_{m=1}^{M}\sum_ {j\in\mathbb{N}}\Delta sn_{j}^{m}(\varphi_{j}^{m}-\varphi_{j}^{m-1}),\]
thus applying Lemma 2.7, we get the following limit when \((\Delta t,\Delta s)\to 0\)
\[\sum_{m=0}^{M}\sum_{j\in\mathbb{N}}\Delta s(n_{j}^{m+1}-n_{j}^{m})\varphi_{j}^ {m}\to-\int_{0}^{\infty}\varphi(0,s)n^{0}(s)\,\mathrm{d}s-\int_{0}^{T}\int_{0 }^{\infty}\overline{n}(t,s)\partial_{t}\varphi(t,s)\,\mathrm{d}s\,\mathrm{d}t.\]
Similarly, for the second term of Equation (36) the following equality holds
\[\sum_{m=0}^{M}\sum_{j\in\mathbb{N}}\Delta t(n_{j}^{m}-n_{j-1}^{m})\varphi_{j}^ {m}=-\Delta t\varphi_{0}^{0}n_{0}^{0}-\sum_{m=1}^{M}\Delta t\varphi_{0}^{m}N^ {m}-\sum_{m=0}^{M}\sum_{j\in\mathbb{N}}\Delta tn_{j}^{m}(\varphi_{j}^{m}- \varphi_{j-1}^{m}),\]
and by passing to the limit in \((\Delta t,\Delta s)\) we get
\[\sum_{m=0}^{M}\sum_{j\in\mathbb{N}}\Delta t(n_{j}^{m}-n_{j-1}^{m})\varphi_{j}^ {m}\to-\int_{0}^{T}\varphi(t,0)\overline{N}(t)\,\mathrm{d}t-\int_{0}^{T}\int_{ 0}^{\infty}\overline{n}(t,s)\partial_{s}\varphi(t,s)\,\mathrm{d}s\,\mathrm{d}t,\]
and in the same way
\[\sum_{m=0}^{M}\sum_{j\in\mathbb{N}}\Delta t\Delta s\,p(s_{j},N^{m})n_{j}^{m} \varphi_{j}^{m}\to\int_{0}^{T}\int_{0}^{\infty}p(s,\overline{N}(t))\overline {n}(t,s)\varphi(t,s)\,\mathrm{d}s\,\mathrm{d}t.\]
Therefore \(\overline{n}(t,s)\) is a weak solution of the ITM equation (1), and since this solution is unique by Theorem 2.1, the whole family \(n_{\Delta t,\Delta s}\) converges to it.
## 3 Distributed Delay Model (DDM)
### Well-posedness of DDM
In this subsection we prove the well-posedness of the DDM equation (10). As in Section 2, we improve the proof of [8] by extending existence and uniqueness to a more general class of hazard rates \(p\). We essentially follow the same ideas as in the previous section, with some slight modifications.
**Theorem 3.1**.: _Assume that \(p\in W^{1,\infty}(\mathbb{R}^{+}\times\mathbb{R}^{+})\) and \(\alpha\in L^{1}(\mathbb{R}^{+})\) is bounded. Then for a non-negative \(n^{0}\in L^{1}(\mathbb{R}^{+})\), Equation (10) has a unique solution \(n\in\mathcal{C}\left([0,\infty),L^{1}(\mathbb{R}^{+})\right)\) and \(N,X\in\mathcal{C}[0,\infty)\)._
For the proof we need the following lemma, which is the analogue of Lemma 2.2.
**Lemma 3.1**.: _Consider a non-negative function \(n\in\mathcal{C}\left([0,T],L^{1}(\mathbb{R}^{+})\right)\). Then for \(T\) small enough, there exists a unique solution \(X\in\mathcal{C}([0,T])\) of the integral equation_
\[X(t)=F(X(t))\coloneqq\int_{0}^{t}\int_{0}^{\infty}\alpha(t-\tau)p(s,X(\tau))n( \tau,s)\,\mathrm{d}s\,\mathrm{d}\tau, \tag{37}\]
_that we call \(X\coloneqq\psi(n)\), where the map \(\psi\colon\mathcal{C}\left([0,T],L^{1}(\mathbb{R}^{+})\right)\mapsto\mathcal{ C}([0,T])\) satisfies the following estimate_
\[\|\psi(n^{1})-\psi(n^{2})\|_{\infty}\leq\tfrac{T\|\alpha\|_{\infty}\|p\|_{ \infty}}{1-T\|\alpha\|_{\infty}\|\partial_{X}p\|_{\infty}\|n^{0}\|_{1}}\sup_{t \in[0,T]}\|n^{1}(t,\cdot)-n^{2}(t,\cdot)\|_{1} \tag{38}\]
_for non-negative integrable functions \(n^{1},n^{2}\) with \(\|n^{1}(t,\cdot)\|_{1}=\|n^{2}(t,\cdot)\|_{1}=\|n^{0}\|_{1}\) for all \(t\in[0,T]\)._
Proof.: Observe that the map \(F\colon\mathcal{C}[0,T]\mapsto\mathcal{C}[0,T]\) satisfies the following estimate
\[\|F(X_{1})-F(X_{2})\|_{\infty}\leq T\|\alpha\|_{\infty}\|\partial_{X}p\|_{ \infty}\|n\|\|X_{1}-X_{2}\|_{\infty},\]
with \(\|n\|=\sup_{t\in[0,T]}\|n(t,\cdot)\|_{1}\). Hence for \(T>0\) such that \(T\|\alpha\|_{\infty}\|\partial_{X}p\|_{\infty}\|n\|<1\) the map \(F\) is a contraction and then the map \(\psi\) is well-defined.
Let \(n^{1},n^{2}\in\mathcal{C}([0,T],L^{1}(\mathbb{R}^{+}))\) with \(\|n^{1}(t,\cdot)\|_{1}=\|n^{2}(t,\cdot)\|_{1}=\|n^{0}\|_{1}\) for all \(t\in[0,T]\). Then we have the following inequality
\[|X_{1}-X_{2}|(t) \leq\int_{0}^{t}\int_{0}^{\infty}\alpha(t-\tau)\left(|p(s,X_{1}( \tau))-p(s,X_{2}(\tau))|n^{1}(\tau,s)+p(s,X_{2}(\tau))|n^{1}-n^{2}|(\tau,s) \right)\mathrm{d}s\mathrm{d}\tau\] \[\leq T\|\alpha\|_{\infty}\|\partial_{X}p\|_{\infty}\|n^{0}\|_{1} \|X_{1}-X_{2}\|_{\infty}+T\|\alpha\|_{\infty}\|p\|_{\infty}\sup_{t\in[0,T]}\| n^{1}(t,\cdot)-n^{2}(t,\cdot)\|_{1},\]
and therefore for \(T>0\) such that \(T\|\alpha\|_{\infty}\|\partial_{X}p\|_{\infty}\|n^{0}\|_{1}<1\) we get
\[\|X_{1}-X_{2}\|_{\infty}\leq\frac{T\|\alpha\|_{\infty}\|p\|_{\infty}}{1-T\| \alpha\|_{\infty}\|\partial_{X}p\|_{\infty}\|n^{0}\|_{1}}\sup_{t\in[0,T]}\|n^{ 1}(t,\cdot)-n^{2}(t,\cdot)\|_{1},\]
and estimate (38) holds.
With this lemma, we continue with the proof of Theorem 3.1.
Proof.: Consider \(T>0\) and a given non-negative \(X\in\mathcal{C}[0,T]\). As in the proof of Theorem 2.1, we define \(n[X]\in\mathcal{C}([0,T],L^{1}(\mathbb{R}^{+}))\) as the weak solution of the linear equation
\[\begin{cases}\partial_{t}n+\partial_{s}n+p(s,X(t))n=0&t>0,\,s>0,\\ n(t,s=0)=N(t)=\int_{0}^{+\infty}p(s,X(t))n(t,s)\mathrm{d}s&t>0,\\ n(0,s)=n^{0}(s)\geq 0&s\geq 0.\end{cases}\]
which can be expressed through the method of characteristics
\[n(t,s)=n^{0}(s-t)e^{-\int_{0}^{t}p(s-t+t^{\prime},X(t^{\prime}))\mathrm{d}t^{ \prime}}\chi_{\{s>t\}}+N(t-s)e^{-\int_{0}^{s}p(s^{\prime},X(t-s+s^{\prime})) \mathrm{d}s^{\prime}}\chi_{\{t>s\}}.\]
From the mass conservation property we know that \(\|n[X](t)\|_{1}=\|n^{0}\|_{1}\).
On the other hand, for a given \(n\in\mathcal{C}([0,T],L^{1}(\mathbb{R}^{+}))\) and for \(T>0\) small enough, let \(\psi\colon\mathcal{C}\left([0,T],L^{1}(\mathbb{R}^{+})\right)\mapsto\mathcal{ C}[0,T]\) map \(n\) to the unique solution \(X\in\mathcal{C}[0,T]\) of Equation (37). Under this setting, we get a solution of the non-linear equation (10) if and only if we find \(X\in\mathcal{C}[0,T]\) which is a solution of the equation
\[X=H(X)\coloneqq\psi\left(n[X]\right),\]
where \(H\colon\mathcal{C}[0,T]\mapsto\mathcal{C}[0,T]\).
We assert that for \(T\) small enough the map \(H\) is a contraction, following the proof of Theorem 2.1. For non-negative functions \(X_{1},X_{2}\in\mathcal{C}[0,T]\), the method of characteristics yields the following inequality
\[\int_{0}^{\infty}|n[X_{1}]-n[X_{2}]|(t,s)\mathrm{d}s\leq A_{1}+A_{2}+A_{3},\]
where \(A_{1},A_{2},A_{3}\) are given by
\[A_{1} = \int_{0}^{\infty}n^{0}(s)\left|e^{-\int_{0}^{t}p(s+t^{\prime},X_{ 1}(t^{\prime}))\mathrm{d}t^{\prime}}-e^{-\int_{0}^{t}p(s+t^{\prime},X_{2}(t^{ \prime}))\mathrm{d}t^{\prime}}\right|\,\mathrm{d}s\] \[A_{2} = \int_{0}^{t}|N_{1}-N_{2}|(t-s)e^{-\int_{0}^{s}p(s^{\prime},X_{1}( t-s+s^{\prime}))\mathrm{d}s^{\prime}}\mathrm{d}s\] \[A_{3} = \int_{0}^{t}N_{2}(t-s)\left|e^{-\int_{0}^{s}p(s^{\prime},X_{1}(t-s +s^{\prime}))\mathrm{d}s^{\prime}}-e^{-\int_{0}^{s}p(s^{\prime},X_{2}(t-s+s^{ \prime}))\mathrm{d}s^{\prime}}\right|\mathrm{d}s.\]
We proceed by estimating each term. For simplicity we write \(n_{1}=n[X_{1}],n_{2}=n[X_{2}]\), so that for \(A_{1}\) we have
\[A_{1}\leq\int_{0}^{\infty}n^{0}(s)\int_{0}^{t}|p(s+t^{\prime},X_{1}(t^{\prime} ))-p(s+t^{\prime},X_{2}(t^{\prime}))|\,\mathrm{d}t^{\prime}\,\mathrm{d}s\leq T \|n^{0}\|_{1}\|\partial_{N}p\|_{\infty}\|X_{1}-X_{2}\|_{\infty},\]
while for \(A_{2}\) we have
\[A_{2} \leq\int_{0}^{t}\int_{0}^{\infty}|p(s^{\prime},X_{1}(s))-p(s^{ \prime},X_{2}(s))|n_{1}(s,s^{\prime})\mathrm{d}s^{\prime}\,\mathrm{d}s+\int_{ 0}^{t}\int_{0}^{\infty}p(s^{\prime},X_{2}(s))|n_{1}-n_{2}|(s,s^{\prime}) \mathrm{d}s^{\prime}\,\mathrm{d}s,\] \[\leq T\|\partial_{X}p\|_{\infty}\|n^{0}\|_{1}\|X_{1}-X_{2}\|_{ \infty}+T\|p\|_{\infty}\sup_{t\in[0,T]}\|n_{1}(t,\cdot)-n_{2}(t,\cdot)\|_{1},\]
and for \(A_{3}\) we get
\[A_{3} \leq\|p\|_{\infty}\|n^{0}\|_{1}\int_{0}^{t}\int_{0}^{s}|p(s^{ \prime},X_{1}(t-s+s^{\prime}))-p(s^{\prime},X_{2}(t-s+s^{\prime}))|\mathrm{d}s ^{\prime}\,\mathrm{d}s\] \[\leq\frac{T^{2}}{2}\|p\|_{\infty}\|n^{0}\|_{1}\|\partial_{N}p\|_{ \infty}\|X_{1}-X_{2}\|_{\infty}.\]
By combining these estimates, we get for \(T<\frac{1}{\|p\|_{\infty}}\) the following inequality
\[\sup_{t\in[0,T]}\|n_{1}(t,\cdot)-n_{2}(t,\cdot)\|_{1}\leq\frac{T}{1-T\|p\|_{ \infty}}\left(2\|n^{0}\|_{1}\|\partial_{N}p\|_{\infty}+\frac{T}{2}\|p\|_{ \infty}\|n^{0}\|_{1}\|\partial_{N}p\|_{\infty}\right)\|X_{1}-X_{2}\|_{\infty}. \tag{39}\]
From the mass conservation property, we get from (38) and (39) that the following estimate holds for \(H\)
\[\|H(X_{1})-H(X_{2})\|_{\infty}\leq\tfrac{T^{2}\|p\|_{\infty}^{2}\| \alpha\|_{\infty}}{(1-T\|\alpha\|_{\infty}\|n^{0}\|_{1}\|\partial_{X}p\|_{ \infty})(1-T\|p\|_{\infty})}\] \[\qquad\qquad\left(2\|n^{0}\|_{1}\|\partial_{N}p\|_{\infty}+\frac{ T}{2}\|p\|_{\infty}\|n^{0}\|_{1}\|\partial_{N}p\|_{\infty}\right)\|X_{1}-X_{2}\|_{ \infty}.\]
For \(T>0\) small enough we get a unique fixed point of \(H\) by contraction principle, implying the existence of a unique solution \(n\in\mathcal{C}\left([0,T],L^{1}(\mathbb{R}^{+})\right)\) with \(N,X\in\mathcal{C}[0,T]\) of Equation (10). In order to extend the solution for all times, we split the integral involving the distributed delay
\[X(t)=\int_{0}^{T}\alpha(t-\tau)N(\tau)\mathrm{d}\tau+\int_{T}^{t}\alpha(t- \tau)N(\tau)\mathrm{d}\tau.\]
Since the first term is already known and \(T\) is independent of the initial data, we can reapply the existence argument to obtain the solution of Equation (10) defined for all \(t\in[T,2T]\). By iterating the splitting argument involving \(X(t)\), we conclude that the solution of Equation (10) is defined for all \(t>0\).
**Remark 3.1**.: _As in Theorem 2.1, the regularity of the rate \(p\) is not fundamental for the proof, and Theorem 3.1 is still valid for rates \(p\) of the form_
\[p(s,X)=\varphi(X)\chi_{\{s>\sigma(X)\}},\]
_with \(\varphi\) and \(\sigma\) Lipschitz bounded functions. Moreover, the proof can be adapted when \(p\) is not necessarily bounded, but the continuous solution might not be defined for all \(t>0\). We can extend the solution as long as \(X(t)<\infty\)._
### Numerical scheme for DDM
In this subsection we carry out the corresponding numerical analysis for the DDM equation (10). Using the same discretization and notation as in Section 2, Equation (10) can be solved numerically by the explicit scheme given by
\[n_{j}^{m+1}=n_{j}^{m}-\frac{\Delta t}{\Delta s}(n_{j}^{m}-n_{j-1}^{m})-\Delta tp (s_{j},X(t^{m}))n_{j}^{m},\quad j\in\mathbb{N}. \tag{40}\]
In particular for \(j=1\)
\[n_{1}^{m+1}=n_{1}^{m}-\frac{\Delta t}{\Delta s}(n_{1}^{m}-N(t^{m}))-\Delta tp (s_{1},X(t^{m}))n_{1}^{m},\]
where
\[N(t^{m})=n(t^{m},0)\approx\Delta s\sum_{j\in\mathbb{N}}p(s_{j},X(t^{m}))n_{j} ^{m} \tag{41}\]
and using a trapezoidal quadrature rule we have
\[X(t^{m})=\int_{0}^{t^{m}}N(t^{m}-s)\alpha(s)\,\mathrm{d}s\approx\frac{\Delta t }{2}\sum_{k=0}^{m}N(t^{k})\alpha_{m-k}, \tag{42}\]
where \(\alpha_{m}=\alpha(t^{m})\) and we denote \(\|\alpha\|_{1}\coloneqq\sum_{k\in\mathbb{N}}\Delta t|\alpha_{k}|\). If we set \(X^{m}\coloneqq X(t^{m})\) and \(N^{m}\coloneqq N(t^{m})\), we can combine Equations (41) and (42) to solve for \(X^{m}\)
\[X^{m}=\frac{\Delta t}{2}\left(\Delta s\sum_{j\in\mathbb{N}}p(s_{j},X^{m})n_{j} ^{m}\alpha_{0}+\sum_{k=0}^{m-1}N^{k}\alpha_{m-k}\right) \tag{43}\]
and then we obtain \(N^{m}\) from (41). In that regard, the numerical method is given as follows.
**Algorithm 3.1**.: _DDM numerical scheme_
_Input: Approximate initial data \(\{n_{j}^{0}\}_{j\in\mathbb{N}}\)_
\[X^{0}\gets 0,\quad N^{0}\leftarrow\Delta s\sum_{j\in\mathbb{N}}p(s_{j},X^ {0})n_{j}^{0}.\]
_Choose \(\Delta t\) such that_
\[\Delta t<\min\left\{\left(\frac{1}{\Delta s}+\|p\|_{\infty}\right)^{-1}, \frac{2}{\alpha_{0}\|\partial_{X}p\|_{\infty}\|n^{0}\|_{1}}\right\} \tag{44}\]
**For \(m\in\mathbb{N}_{0}\) do**
**For \(j\in\mathbb{N}\) do**
\[n_{j}^{m+1}\leftarrow\begin{cases}n_{1}^{m}-\frac{\Delta t}{\Delta s}(n_{1}^{ m}-N^{m})-\Delta tp(s_{1},X^{m})n_{1}^{m}&j=1,\\ n_{j}^{m}-\frac{\Delta t}{\Delta s}(n_{j}^{m}-n_{j-1}^{m})-\Delta tp(s_{j},X^{m })n_{j}^{m}&j>1.\end{cases}\]
**end**
_Solve for \(X^{m+1}\)_
\[X^{m+1}=\frac{\Delta t}{2}\left(\Delta s\sum_{j\in\mathbb{N}}p(s_{j},X^{m+1})n _{j}^{m+1}\alpha_{0}+\sum_{k=0}^{m}N^{k}\alpha_{m+1-k}\right) \tag{45}\]
\[N^{m+1}\leftarrow\Delta s\sum_{j\in\mathbb{N}}p(s_{j},X^{m+1})n_{j}^{m+1}\]
**end**
_Output: Approximate solution \(\{n_{j}^{m+1}\}_{j\in\mathbb{N}}\) and \(X^{m+1},\,N^{m+1}\) at time \(t^{m+1}=(m+1)\Delta t\)._
Analogously to the numerical scheme for the ITM equation, the solution of Equation (45) for \(X^{m+1}\) can be computed with different numerical methods. Unlike the ITM equation, no restriction on the rate \(p\) is needed for Equation (45) to have a unique solution, provided \(\Delta t\) satisfies (44). Hence, by following the idea of Lemma 3.1 and the contraction principle, the solution \(X^{m+1}\) of Equation (45) can be approximated in terms of \(X^{m}\) through the following formula if \(\Delta t\) is small enough:
\[X^{m+1}=\frac{\Delta t}{2}\left(\Delta s\sum_{j\in\mathbb{N}}p(s_{j},X^{m})n_{ j}^{m+1}\alpha_{0}+\sum_{k=0}^{m}N^{k}\alpha_{m+1-k}\right).\]
For simplicity in the estimates we assume that we can compute the solution of Equation (45) exactly, but the results remain valid if we take into account a specific method to get an approximation.
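A minimal sketch of the fixed-point solve of (45), with the history part of the trapezoidal rule precomputed; here `alpha` holds the kernel samples \(\alpha_{k}=\alpha(t^{k})\), `N_hist` the past values \(N^{0},\dots,N^{m}\), and the warm start from \(X^{m}\) corresponds to the approximation above:

```python
import numpy as np

# Fixed-point solve of Eq. (45) for X^{m+1}; Condition (44) makes the map
# below a contraction. `p` is the hazard rate, vectorized over the cell
# centers `s`, and `n_new` stores the cell averages n_j^{m+1}.
def solve_X(n_new, N_hist, X_prev, alpha, s, ds, dt, p, itmax=100):
    m = len(N_hist) - 1
    hist = sum(N_hist[k] * alpha[m + 1 - k] for k in range(m + 1))
    X = X_prev                               # warm start with X^m
    for _ in range(itmax):
        X_next = 0.5 * dt * (ds * np.sum(p(s, X) * n_new) * alpha[0] + hist)
        if abs(X_next - X) < 1e-12:
            break
        X = X_next
    return X_next
```

The discharging flux is then recovered as \(N^{m+1}=\Delta s\sum_{j}p(s_{j},X^{m+1})n_{j}^{m+1}\).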
In order to prove the convergence of the upwind scheme, we follow the ideas of the previous subsection on well-posedness and prove a BV-estimate that will be crucial in the analysis. For simplicity we assume that the initial data \(n^{0}\) is compactly supported, but the theoretical results still hold when the initial data \(n^{0}\) vanishes at infinity.
As in the case of the ITM equation, we get the corresponding lemmas on \(L^{1}\) and \(L^{\infty}\) norms.
**Lemma 3.2**.: _(\(L^{1}\)-norm) The numerical approximation obtained with Algorithm 3.1 satisfies_
\[\|n^{m}\|_{1}:=\sum_{j\in\mathbb{N}}\Delta sn_{j}^{m}=\|n^{0}\|_{1},\quad m\in \mathbb{N}.\]
Proof.: The proof is the same as that of Lemma 2.3.
**Lemma 3.3**.: _(\(L^{\infty}\)-norm) Assume that \(n^{0}\in L^{\infty}(\mathbb{R}^{+})\) is non-negative. Then, under Condition (44), Equation (45) has a unique solution and the numerical solution obtained by Algorithm 3.1 satisfies the following estimates_
\[0\leq n_{j}^{m}\leq\|n^{0}\|_{\infty},\ \ 0\leq X^{m}\leq\|p\|_{\infty}\|n^{0} \|_{1}\|\alpha\|_{1},\ \ 0\leq N^{m}\leq\|p\|_{\infty}\|n^{0}\|_{1}\quad\text{for all}\;j,m\in \mathbb{N}.\]
Proof.: First, observe that \(X^{0}=0\) and \(N^{0}=\Delta s\sum_{j\in\mathbb{N}}p(s_{j},0)n_{j}^{0}\leq\|p\|_{\infty}\|n^{0} \|_{1}\). Now, using the CFL condition we have
\[0\leq n_{j}^{1}=n_{j}^{0}\left(1-\Delta t\left(\frac{1}{\Delta s}+p(s_{j},0) \right)\right)+\frac{\Delta t}{\Delta s}n_{j-1}^{0}\leq\|n^{0}\|_{\infty},\]
for \(j\geq 1\). In order to prove the existence of \(X^{1}\) satisfying Equation (43), we consider the function
\[F(X)=\frac{\Delta t}{2}\left(\Delta s\sum_{j\in\mathbb{N}}p(s_{j},X)n_{j}^{1} \alpha_{0}+N^{0}\alpha_{1}\right),\]
and observe that
\[0\leq F(X)\leq\|p\|_{\infty}\|n^{0}\|_{1}\|\alpha\|_{1}, \text{for all}\;X\geq 0,\] \[|F(X_{1})-F(X_{2})|\leq\tfrac{\Delta t}{2}\|n^{0}\|_{1}\alpha_{0 }\|\partial_{X}p\|_{\infty}|X_{1}-X_{2}|, \text{for all}\;X_{1},X_{2}\geq 0.\]
From Condition (44) we have
\[\frac{\Delta t}{2}\alpha_{0}\|\partial_{X}p\|_{\infty}\|n^{0}\|_{1}<1,\]
so that by the contraction principle there exists a unique \(X^{1}\) such that
\[X^{1}=F(X^{1})=\frac{\Delta t}{2}\left(\Delta s\sum_{j\in\mathbb{N}}p(s_{j},X ^{1})n_{j}^{1}\alpha_{0}+N^{0}\alpha_{1}\right),\]
and we have that \(0\leq X^{1}\leq\|p\|_{\infty}\|n^{0}\|_{1}\|\alpha\|_{1}\). Moreover we get the following estimate
\[N^{1}:=\Delta s\sum_{j\in\mathbb{N}}p(s_{j},X^{1})n_{j}^{1}\leq\|p\|_{\infty} \|n^{0}\|_{1},\]
and we conclude the desired result by iterating this argument for all \(m\in\mathbb{N}\).
We note that in Condition (44), the first term on the right-hand side corresponds to the CFL condition of the explicit scheme, while the second term ensures that Equation (45) has a unique solution, so that \(X^{m}\) is well-defined.
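In practice one simply takes the more restrictive of the two bounds in (44) with a safety factor; a sketch, where `p_max`, `dXp_max` and `mass0` stand for \(\|p\|_{\infty}\), \(\|\partial_{X}p\|_{\infty}\) and \(\|n^{0}\|_{1}\):

```python
# Time-step selection enforcing Condition (44): the CFL bound for the
# upwind part and the contraction bound that makes Eq. (45) uniquely
# solvable for X^{m+1}.
def choose_dt(ds, p_max, alpha0, dXp_max, mass0, safety=0.9):
    cfl = 1.0 / (1.0 / ds + p_max)
    contraction = 2.0 / (alpha0 * dXp_max * mass0)
    return safety * min(cfl, contraction)
```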
Next, we proceed with the corresponding BV-estimate that gives the necessary compactness to prove the convergence of the numerical scheme.
**Lemma 3.4**.: _(**BV**-estimate) Assume that \(TV(n^{0})<\infty\) and Condition (44). Then there exist constants \(C_{1},C_{2}>0\) (depending only on \(p,\alpha\) and the norms of \(n^{0}\)) such that for \(m\in\mathbb{N}\) we have_
\[TV(n^{m})\leq e^{C_{1}T}TV(n^{0})+C_{2}(e^{C_{1}T}-1), \tag{46}\]
_with \(T=m\Delta t\) and \(TV(n^{m})=\sum_{j=0}^{\infty}|n_{j+1}^{m}-n_{j}^{m}|\)._
Proof.: Using the notation \(\Delta^{+}n_{j}^{m}=n_{j+1}^{m}-n_{j}^{m}\), we have
\[\Delta^{+}n_{j}^{m+1} =\Delta^{+}n_{j}^{m}-\frac{\Delta t}{\Delta s}\left(\Delta^{+}n_ {j}^{m}-\Delta^{+}n_{j-1}^{m}\right)-\Delta tp(s_{j+1},X^{m})\Delta^{+}n_{j}^ {m}\] \[\qquad-\Delta tn_{j}^{m}(p(s_{j+1},X^{m})-p(s_{j},X^{m})).\]
Next, from the CFL condition (22) we obtain
\[|\Delta^{+}n_{j}^{m+1}|\leq\left(1-\Delta t\left(\frac{1}{\Delta s}+p(s_{j+1},X^{m})\right)\right)|\Delta^{+}n_{j}^{m}|+\frac{\Delta t}{\Delta s}|\Delta^{ +}n_{j-1}^{m}|+\Delta t\Delta s\|\partial_{s}p\|_{\infty}n_{j}^{m}.\]
By summing over all \(j\geq 1\) we deduce that
\[\sum_{j=1}^{\infty}|\Delta^{+}n_{j}^{m+1}|\leq\sum_{j=1}^{\infty}|\Delta^{+}n _{j}^{m}|+\frac{\Delta t}{\Delta s}|\Delta^{+}n_{0}^{m}|+\Delta t\|\partial_{ s}p\|_{\infty}\|n^{0}\|_{1}. \tag{47}\]
We now take into account the boundary term. From the numerical scheme we have
\[|\Delta^{+}n_{0}^{m+1}| =|n_{1}^{m+1}-N^{m+1}|\] \[\leq\left(1-\frac{\Delta t}{\Delta s}\right)|n_{1}^{m}-N^{m}|+|N ^{m+1}-N^{m}|+\Delta tp(s_{1},X^{m})n_{1}^{m}\] \[\leq\left(1-\frac{\Delta t}{\Delta s}\right)|\Delta^{+}n_{0}^{m} |+|N^{m+1}-N^{m}|+\Delta t\|p\|_{\infty}\|n^{0}\|_{\infty}.\]
Next we estimate the second term of the last inequality. First note that
\[N^{m+1}-N^{m} =\Delta s\sum_{j\in\mathbb{N}}p(s_{j},X^{m+1})n_{j}^{m+1}-\Delta s \sum_{j\in\mathbb{N}}p(s_{j},X^{m})n_{j}^{m}\] \[=\Delta s\sum_{j\in\mathbb{N}}(p(s_{j},X^{m+1})-p(s_{j},X^{m}))n_ {j}^{m+1}+\Delta s\sum_{j\in\mathbb{N}}p(s_{j},X^{m})(n_{j}^{m+1}-n_{j}^{m}),\]
thus we have
\[\begin{split}|N^{m+1}-N^{m}|&\leq\|\partial_{X}p\|_{ \infty}\|n^{0}\|_{1}|X^{m+1}-X^{m}|+\|p\|_{\infty}\sum_{j\in\mathbb{N}}\Delta s |n_{j}^{m+1}-n_{j}^{m}|\\ &\leq\|\partial_{X}p\|_{\infty}\|n^{0}\|_{1}|X^{m+1}-X^{m}|+\|p \|_{\infty}\Delta t\left(\sum_{j\in\mathbb{N}}|n_{j}^{m}-n_{j-1}^{m}|+\|p\|_{ \infty}\sum_{j\in\mathbb{N}}\Delta sn_{j}^{m}\right)\\ &\leq\|\partial_{X}p\|_{\infty}\|n^{0}\|_{1}|X^{m+1}-X^{m}|+\|p \|_{\infty}\Delta t\sum_{j\in\mathbb{N}}|\Delta^{+}n_{j-1}^{m}|+\Delta t\|p\|_ {\infty}^{2}\|n^{0}\|_{1}\end{split} \tag{48}\]
Similarly, for \(|X^{m+1}-X^{m}|\), using Equations (43) and (45) we get
\[\begin{split}|X^{m+1}-X^{m}|&\leq\frac{\Delta t}{2} \alpha_{0}\|\partial_{X}p\|_{\infty}\|n^{0}\|_{1}|X^{m+1}-X^{m}|+\frac{\Delta t }{2}\alpha_{0}\|p\|_{\infty}\sum_{j\in\mathbb{N}}\Delta s|n_{j}^{m+1}-n_{j}^{m} |\\ &\qquad+\frac{\Delta t}{2}\left|\sum_{k=0}^{m}N^{k}\alpha_{m+1-k} -\sum_{k=0}^{m-1}N^{k}\alpha_{m-k}\right|,\end{split}\]
so we get
\[\begin{split}|X^{m+1}-X^{m}|&\leq\frac{\Delta t}{2} \alpha_{0}\left(\|\partial_{X}p\|_{\infty}\|n^{0}\|_{1}|X^{m+1}-X^{m}|+\Delta t \|p\|_{\infty}\sum_{j\in\mathbb{N}}|\Delta^{+}n_{j-1}^{m}|+\Delta t\|p\|_{ \infty}^{2}\|n^{0}\|_{1}\right)\\ &\qquad+\frac{\Delta t}{2}N^{m}\alpha_{1}+\frac{\Delta t}{2}\|p\|_ {\infty}\|n^{0}\|_{1}\sum_{k=0}^{m}|\alpha_{m+1-k}-\alpha_{m-k}|,\end{split}\]
and therefore we have the following estimate
\[\begin{split}|X^{m+1}-X^{m}|&\leq\frac{\Delta t}{2(1 -\frac{\Delta t}{2}\alpha_{0}\|\partial_{X}p\|_{\infty}\|n^{0}\|_{1})}\left( \Delta t\alpha_{0}\|p\|_{\infty}\sum_{j\in\mathbb{N}}|\Delta^{+}n_{j-1}^{m}|+ \Delta t\alpha_{0}\|p\|_{\infty}^{2}\|n^{0}\|_{1}\right.\\ &\qquad+\|p\|_{\infty}\|n^{0}\|_{1}\|\alpha\|_{\infty}+\|p\|_{ \infty}\|n^{0}\|_{1}TV(\alpha)\right).\end{split} \tag{49}\]
By plugging (49) into (48) we obtain for \(\Delta t\) small
\[|N^{m+1}-N^{m}|\leq A_{1}\Delta t\sum_{j\in\mathbb{N}}|\Delta^{+}n_{j-1}^{m}|+ B_{1}\Delta t,\]
where \(A_{1},B_{1}>0\) are constants depending on \(p,\alpha\) and the norms of \(n^{0}\), so that the boundary term is estimated as follows
\[|\Delta^{+}n_{0}^{m+1}|\leq\left(1-\frac{\Delta t}{\Delta s}\right)|\Delta^{+} n_{0}^{m}|+A_{1}\Delta t\sum_{j\in\mathbb{N}}|\Delta^{+}n_{j-1}^{m}|+B_{2} \Delta t, \tag{50}\]
with \(B_{2}:=B_{1}+\|p\|_{\infty}\|n^{0}\|_{\infty}\), and by adding (50) and (47) we obtain
\[\sum_{j=0}^{\infty}|\Delta^{+}n_{j}^{m+1}|\leq(1+C_{1}\Delta t)\sum_{j=0}^{ \infty}|\Delta^{+}n_{j}^{m}|+C_{2}\Delta t, \tag{51}\]
where \(C_{1},C_{2}>0\) are constants depending only on \(p,\alpha\) and the norms of \(n^{0}\).
Finally, proceeding recursively on \(m\), we obtain
\[\sum_{j=0}^{\infty}|\Delta^{+}n_{j}^{m}|\leq(1+C_{1}\Delta t)^{m}\sum_{j=0}^{ \infty}|\Delta^{+}n_{j}^{0}|+\frac{C_{2}}{C_{1}}\left((1+C_{1}\Delta t)^{m}-1 \right),\]
and the estimate (46) readily follows.
As we did in Section 2 for the ITM equation, we now prove that the numerical approximation of the solution \(n(t,s)\) of Equation (10), which is constructed by a simple piecewise-linear interpolation, has a limit when the time step \(\Delta t\) and the age step \(\Delta s\) converge to \(0\).
**Lemma 3.5**.: _Assume that \(n^{0}\in BV(\mathbb{R}^{+})\) is compactly supported and the rate \(p\) satisfies the hypothesis of Theorem 3.1. Consider the function \(n_{\Delta t,\Delta s}(t,s)\in\mathcal{C}\left([0,T],L^{1}(\mathbb{R}^{+})\right)\) defined by_
\[n_{\Delta t,\Delta s}(t,s)\coloneqq\frac{t^{m}-t}{\Delta t}\sum_{j\in\mathbb{N }}n_{j}^{m-1}\chi_{[s_{j-\frac{1}{2}},s_{j+\frac{1}{2}}]}(s)+\frac{t-t^{m-1}}{ \Delta t}\sum_{j\in\mathbb{N}}n_{j}^{m}\chi_{[s_{j-\frac{1}{2}},s_{j+\frac{1}{ 2}}]}(s)\;\text{if}\;t\in[t^{m-1},t^{m}].\]
_Then there exists a sub-sequence \((\Delta t_{k},\Delta s_{k})\to(0,0)\) when \(k\to\infty\) and a function \(\overline{n}(t,s)\) such that \(n_{\Delta t_{k},\Delta s_{k}}\to\overline{n}\) in \(\mathcal{C}\left([0,T],L^{1}(\mathbb{R}^{+})\right)\). Moreover, if we define the function \(X_{\Delta t,\Delta s}(t)\in\mathcal{C}[0,T]\) as the unique solution of the integral equation_
\[X(t)=\int_{0}^{t}\int_{0}^{\infty}\alpha(t-\tau)p(s,X(\tau))n_{\Delta t,\Delta s }(\tau,s)\,\mathrm{d}s\,\mathrm{d}\tau,\]
_then there exists \(\overline{X}\in\mathcal{C}[0,T]\) such that \(X_{\Delta t_{k},\Delta s_{k}}\to\overline{X}\) in \(\mathcal{C}[0,T]\) and \(\overline{X}\) is a solution of the equation_
\[\overline{X}(t)=\int_{0}^{t}\int_{0}^{\infty}\alpha(t-\tau)p(s,\overline{X}( \tau))\overline{n}(\tau,s)\,\mathrm{d}s\,\mathrm{d}\tau, \tag{52}\]
_and similarly \(N_{\Delta t,\Delta s}\in\mathcal{C}[0,T]\) defined as_
\[N_{\Delta t,\Delta s}(t)=\int_{0}^{\infty}p\left(s,X_{\Delta t,\Delta s}(t) \right)n_{\Delta t,\Delta s}(t,s)\,\mathrm{d}s\]
_converges to \(\overline{N}\in\mathcal{C}[0,T]\), where \(\overline{N}\) satisfies the equality_
\[\overline{N}(t)=\int_{0}^{\infty}p(s,\overline{X}(t))\overline{n}(t,s)\, \mathrm{d}s. \tag{53}\]
Proof.: The proof of the compactness of \(n_{\Delta t,\Delta s}\) is the same as in Lemma 2.7, using Lemma 3.4. Hence there exists a sub-sequence \((\Delta t_{k},\Delta s_{k})\to(0,0)\) when \(k\to\infty\) and a function \(\overline{n}(t,s)\) such that \(n_{\Delta t_{k},\Delta s_{k}}\to\overline{n}\) in \(\mathcal{C}\left([0,T],L^{1}(\mathbb{R}^{+})\right)\). For the sequence \(X_{\Delta t,\Delta s}\) observe that
\[\|X_{\Delta t,\Delta s}\|_{\infty}\leq\|\alpha\|_{1}\|p\|_{\infty}\|n^{0}\|_{1}\]
and for the derivative we get
\[\frac{d}{dt}X_{\Delta t,\Delta s}(t) =\alpha(0)\int_{0}^{\infty}p(s,X_{\Delta t,\Delta s}(t))n_{\Delta t,\Delta s}(t,s)\,\mathrm{d}s\] \[\qquad+\int_{0}^{t}\int_{0}^{\infty}\alpha^{\prime}(t-\tau)p(s,X_ {\Delta t,\Delta s}(\tau))n_{\Delta t,\Delta s}(\tau,s)\,\mathrm{d}s\,\mathrm{ d}\tau,\]
thus we have the following estimate
\[\left\|\frac{d}{dt}X_{\Delta t,\Delta s}\right\|_{\infty}\leq\alpha(0)\|p\|_{ \infty}\|n^{0}\|_{1}+T\|\alpha^{\prime}\|_{\infty}\|p\|_{\infty}\|n^{0}\|_{1},\]
and from the Arzelà-Ascoli theorem we conclude that \(X_{\Delta t,\Delta s}\) converges to some \(\overline{X}\) in \(\mathcal{C}[0,T]\) along a sub-sequence. By passing to the limit, Equations (52) and (53) readily follow.
With this result, we finally obtain the convergence of the numerical scheme for the DDM equation (10).
**Theorem 3.2** (Convergence of the numerical scheme).: _Assume that \(n^{0}\in BV(\mathbb{R}^{+})\) is compactly supported and the rate \(p\) satisfies the hypothesis of Theorem 3.1. Then for all \(T>0\), the numerical scheme converges to the unique weak solution \(n\in\mathcal{C}([0,T],L^{1}(\mathbb{R}^{+}))\) of the DDM equation (10)._
Proof.: The proof is the same as that of Theorem 2.2.
**Remark 3.2**.: _The previous results are also valid for the case when the rate \(p\) is of the form_
\[p(s,X)=\varphi(X)\chi_{\{s>\sigma(X)\}},\]
_with \(\varphi\) and \(\sigma\) Lipschitz bounded functions._
## 4 Numerical Results
In order to illustrate the theoretical results of the previous sections, we present different scenarios for the dynamics of the ITM equation (1) and the DDM equation (10), which are solved by the finite-volume method described in Algorithms 2.1 and 3.1 respectively, where the non-linear problems (23) and (45) were solved for \(N\) or \(X\) using the Newton-Raphson iterative method with a relative error less than \(10^{-12}\). For all numerical tests, we consider the prototypical rate \(p\) with absolute refractory period \(\sigma>0\) given by
\[p(s,N)=\varphi(N)\chi_{\{s>\sigma\}}(s),\]
where \(\varphi(N)\) and \(\sigma\) are specified in each example. For the DDM equation we will consider for the delay kernel \(\alpha(t)\) the following examples
\[\alpha_{1}(t)=\frac{e^{-t/\lambda}}{\lambda}\quad\text{or}\quad\alpha_{2}(t) =\frac{1}{\sqrt{2\pi}\lambda}e^{-\frac{1}{2}(\frac{t-d}{\lambda})^{2}},\quad \text{with }\lambda=10^{-3}.\]
For this choice of \(\lambda\) we essentially consider the approximation \(\alpha_{1}(t)\approx\delta(t)\), where we are interested in comparing both ITM and DDM equations close to this limit case. Similarly, for the second kernel we get \(\alpha_{2}(t)\approx\delta(t-d)\) and we study the behavior of the DDM equation (10) for different values of the parameter \(d\).
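To illustrate how the non-linear problems were handled, the following sketch applies Newton-Raphson to the residual of (23) for the prototypical rate above; note that the derivative of the residual is precisely the invertibility factor \(\Psi(N,n)\) of (25). The function names and arguments are illustrative.

```python
import numpy as np

# Newton-Raphson for Eq. (23) with the prototypical rate
# p(s, N) = phi(N) * chi_{s > sigma}; `phi`, `dphi` are the rate and its
# derivative in N, and `n` stores the cell averages on the centers `s`.
def newton_N(n, s, ds, phi, dphi, sigma, N_start, itmax=50):
    tail = ds * float(np.sum(n[s > sigma]))  # mass of n on {s > sigma}
    N = N_start
    for _ in range(itmax):
        G = N - phi(N) * tail                # residual of Eq. (23)
        dG = 1.0 - dphi(N) * tail            # equals Psi(N, n), Eq. (25)
        N_next = N - G / dG
        if abs(N_next - N) <= 1e-12 * max(abs(N_next), 1.0):
            return N_next
        N = N_next
    return N
```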
### Example 1: A strongly inhibitory case
We start with an inhibitory hazard rate, i.e. \(\varphi^{\prime}(N)<0\), given by
\[\varphi(N)=e^{-9N},\quad\sigma=\frac{1}{2},\quad n^{0}(s)=\frac{1}{2}e^{-(s-1)^ {+}}. \tag{54}\]
With this choice of parameters, Equation (1) has a unique steady state with \(N^{*}\approx 0.1800\), obtained by solving Equation (15). For the ITM equation we display in Figures 1(a-b) the numerical solution for \(n(t,s)\) with \((t,s)\in[0,30]\times[0,40]\) and \(N(t)\) with \(t\in[0,10]\), where we observe that the total activity \(N(t)\) converges to the unique steady state \(N^{*}\), while for \(n(t,s)\) the initial condition moves to the right (with respect to age \(s\)) as it decays exponentially and approaches the equilibrium density given by Equation (6). Moreover, this example is clearly consistent with the theory on the convergence to the equilibrium for the strongly inhibitory case studied in [14].
On the other hand, we solve the DDM equation (10) with the parameters in (54) and choose \(d=\frac{1}{2}\) so that \(\alpha_{2}(t)\approx\delta(t-\frac{1}{2})\). In Figures 1(c,d) we display the numerical solution for \(n(t,s)\) for \((t,s)\in[0,15]\times[0,20]\), and \(N(t)\) and \(X(t)\) for \(t\in[0,20]\). Unlike the ITM, for the DDM equation the solutions for \(N\) and \(X\) converge to a periodic profile that we conjecture to be \(2d\)-periodic, and they tend to differ by a time shift equal to \(d\), which means that
Figure 1: An inhibitory case. (a-b) Density \(n(t,s)\) and discharging flux \(N(t)\) for ITM. (c-d) Density \(n(t,s)\), discharging flux \(N(t)\) and total activity \(X(t)\) for DDM with \(\alpha_{2}(t)\approx\delta(t-\frac{1}{2})\).
\(|N(t-d)-X(t)|\approx 0\) for \(t\) large. The density \(n(t,s)\) is asymptotic to its respective periodic profile in time. In this case, we observe a periodic solution induced by a negative feedback delay, which is a classical behavior in the context of delay differential equations. This negative feedback corresponds to the inhibition determined by the rate \(p\), so that, when combined with the delay, it induces cycles of increase and decrease in the discharging flux \(N\) and the total activity \(X\), which are favorable to the formation of periodic solutions (see [27] for a reference).
### Example 2: An excitatory case with a unique steady state
Now we consider an excitatory case, i.e. \(\varphi^{\prime}(N)>0\), given by
\[\varphi(N)=\frac{10N^{2}}{N^{2}+1}+0.5,\quad\sigma=1,\quad n^{0}(s)=e^{-(s-1) ^{+}}\chi_{\{s>1\}}(s). \tag{55}\]
This example was previously studied in [14] for the ITM equation. We know that under this choice of parameters Equation (1) has a unique steady state with \(N^{*}\approx 0.8186\).
For the ITM equation, in Figure 2(a) we display the numerical solution for \(n(t,s)\) for \((t,s)\in[0,20]\times[0,4]\), and in the blue curve of Figure 2(b) we show \(N(t)\) for \(t\in[0,20]\), where the solution is asymptotic to a periodic pattern with jump discontinuities. This is due to the invertibility condition \(\Psi(N,n)\) in (13) being close to zero, as we show in Figure 2(c), where we plot \(\Psi(N(t),n(t,\cdot))\) for \(t\in[0,10]\). We observe that when a discontinuity arises for \(N(t)\) in Figure 2(b), \(\Psi(N,n)\) is close to zero in Figure 2(c). This means that the invertibility condition (13) is a key criterion that determines the existence and continuity of solutions.
For the DDM equation with \(\alpha_{1}(t)\approx\delta(t)\), we observe in the red curve of Figure 2(b) that the respective discharging flux is a smooth approximation of the activity \(N(t)\) of the ITM equation. This is due to the regularizing effect of the delay kernel \(\alpha\) through the convolution.
Next we take \(d=1\) so that \(\alpha_{2}(t)\approx\delta(t-1)\). In Figure 3(a) we display \(n(t,s)\) for \((t,s)\in[0,6]\times[0,15]\) and in Figure 3(b) we display the graphs of \(N(t)\) and \(X(t)\) for \(t\in[0,15]\), where we observe an asymptotic periodic pattern for both the discharging flux \(N\) and the total activity \(X\), which we conjecture to be \(d\)-periodic. In this case we observe a synchronization phenomenon, which means that \(|N(t)-X(t)|\approx 0\) for large \(t\), unlike the inhibitory case shown in Figure 1(d) where they tend to differ in time by \(d\). In this excitatory case we conjecture that periodic solutions are due to the effect of the refractory period \(\sigma\), as was studied in [14], and the solutions are continuous due to the regularizing effect of the kernel \(\alpha\). Therefore the periodic solutions that may arise in the inhibitory and excitatory cases are of a different nature.
### Example 3: An excitatory case with multiple steady states
Next, we consider a case where \(\varphi^{\prime}(N)>0\) with parameters given by
\[\varphi(N)=\frac{1}{1+e^{-9N+3.5}},\quad\sigma=\frac{1}{2},\quad n^{0}(s)=e^ {-(s-\frac{1}{2})^{+}}\chi_{\{s>\frac{1}{2}\}}(s). \tag{56}\]
This example was previously studied in [14] for the ITM equation. Under this choice of parameters, we have three different solutions for \(N(0)\) according to Equation (4), given by \(N_{1}^{0}\approx 0.0410\), \(N_{2}^{0}\approx 0.3650\) and \(N_{3}^{0}\approx 0.6118\). These values determine three different branches of local continuous solutions. For the ITM equation we display the numerical solution for \(n(t,s)\) and \(N(t)\) in Figure 4. The dynamics of the discharging flux \(N\) is determined by the initial condition \(N(0)\), and thus it also determines the dynamics of \(n\). In this case, we
Figure 3: A periodic solution for the DDM equation. (a-b) Density \(n(t,s)\), discharging flux \(N(t)\) and total activity \(X(t)\) with \(d=1\) and \(\alpha_{2}(t)\approx\delta(t-1)\), \(X(t)\approx N(t-1)\).
Figure 2: An excitatory case with periodic patterns. (a) Density \(n(t,s)\) for ITM, (b) Comparison of \(N(t)\) for both ITM and DDM equations with \(\alpha_{1}(t)\approx\delta(t)\), (c) Invertibility condition \(\Psi(N,n)\) for ITM.
observe three different numerical approximations for \(N(t)\), which converge to two different equilibrium points \(N_{1}^{*}\) and \(N_{2}^{*}\).
In the DDM equation, for both \(\alpha_{1}(t)\approx\delta(t)\) and \(\alpha_{2}(t)\approx\delta(t-d)\), we observe in Figure 5 that \(N(t)\) converges to the first equilibrium of the ITM equation. Moreover, when the ITM equation has multiple branches of solutions for the same initial condition, i.e. multiple solutions for \(N(0)\), we conjecture that when \(\alpha(t)\) converges to \(\delta(t)\) in the sense of distributions, the total activity \(X(t)\) in the DDM equation converges a.e. to the solution \(N(t)\) of the ITM equation whose value of \(N(0)\) is the closest one to zero, and we expect \(L^{1}\)-convergence for the corresponding probability densities. This is due to the condition \(X(0)=0\) imposed for the DDM equation. We observe in Figure 5(b) that \(X(t)\) and \(N(t)\) follow the same behavior as in Figure 4(b), and in particular the total activity \(X(t)\) grows fast from \(X(0)=0\) until it approaches the solution \(N(t)\).
Similarly, when \(\alpha(t)\) converges to \(\delta(t-d)\), we conjecture that the total activity \(X(t)\) in the DDM equation converges to the solution \(N(t)\) of Equation (12) that satisfies \(N(t)\equiv 0\) for \(t\in[-d,0]\), as suggested by the numerical solution in Figure 5(d), since we would formally get \(X(t)=N(t-d)\) and \(X(t)=0\) for \(t\in[0,d]\).
### Example 4: A variable refractory period [8]
Based on Example 2 in [8], we consider a hazard rate with variable refractory period as in Equation (9) for the DDM equation, with parameters given by
\[p(s,X)=\chi_{\{s>\sigma(X)\}}(s),\quad\sigma(X)=2-\frac{X^{4}}{X^{4}+1},\quad \alpha(t)=J\alpha_{1}(t),\quad n^{0}(s)=e^{-(s-1)}\chi_{\{s>1\}}(s), \tag{57}\]
where \(J>0\) is the connectivity parameter of the network. As was studied in [8], the system has different behaviors depending on the value of \(J\). When \(J\) is small the network is weakly connected and the dynamics are close to the linear case, while if \(J\) is large the network is strongly connected and different asymptotic behaviors are possible. We recall that when \(\alpha(t)=\delta_{0}\), we formally obtain the ITM equation where
\[X(t)=JN(t)\quad\text{and}\quad p(s,N)=\chi_{\{s>\sigma(JN)\}}(s). \tag{58}\]
Taking \(J=2.5\) as in [8], in Figure 6 we compare the numerical approximation of \(N(t)\) with \(t\in[0,14]\) for both the ITM and DDM equations. In Figure 6(a) we observe that the solution of the ITM equation is asymptotic to a periodic pattern with jump discontinuities, where this type of solution was also observed in [14]. We also observe that when a jump discontinuity arises for \(N(t)\) in Figure 6(a), the function \(\Psi(N,n)\) is close to zero, as we see in Figure 6(b), verifying numerically the invertibility condition (13) that ensures the continuity of solutions.
In Figure 6(c) we observe that the discharging flux \(N(t)\) in the DDM equation is a smooth approximation of the solution observed in Figure 6(a), and we see the same phenomenon for \(\tilde{X}(t)\coloneqq X(t)/J\), corresponding to a normalization of the total activity in order to compare these quantities. Finally, in Figure 6(d) we display the corresponding numerical approximation of the DDM equation with \(n(t,s)\) for \((t,s)\in[0,14]\times[0,8]\), which also follows a periodic pattern.
Figure 4: An excitatory problem with multiple solutions for the ITM equation with different initial approximations for \(N^{0}\) in (23). (a-b) Density \(n(t,s)\) and discharging flux \(N(t)\) for \(N_{1}^{0}\approx 0.0281\), (c-d) for \(N_{2}^{0}\approx 0.4089\) and (e-f) for \(N_{3}^{0}\approx 0.7114\).
## Conclusion and perspectives
In this article we improved the proofs of [8, 11] on well-posedness for both the ITM and DDM equations, allowing us to extend the theory to wider classes of hazard rates \(p\). The key idea is to apply the implicit function theorem to the correct fixed-point problem, and the arguments can be extended when this rate is not necessarily bounded. This motivates the study of the elapsed time model when the activity of neurons may increase to infinity and blow-up or other special phenomena might arise.
Another interesting question is the convergence of the delay kernel \(\alpha(t)\) to the Dirac mass \(\delta(t)\) in order to compare both the ITM and DDM equations. We conjecture that the total activity \(X(t)\) of the DDM equation converges almost everywhere (or in some norm) to the discharging flux \(N(t)\) of the ITM equation. In particular, we believe that the convergence holds for every \(t>0\) except when \(N(t)\) has a jump discontinuity. This motivates determining whether the assumption of instantaneous transmission is actually a good approximation of the neural dynamics, which indeed have a certain delay. Similarly, when the delay kernel \(\alpha(t)\) converges to the Dirac mass \(\delta(t-d)\) in the sense of distributions, we conjecture that solutions of the DDM equation converge to the solutions of Equation (12).
Figure 5: An excitatory case for the DDM equation. (a-b) Density \(n(t,s)\), discharging flux \(N(t)\) and total activity \(X(t)\) for the DDM equation with \(\alpha(t)\approx\delta(t)\). (c-d) Same variables of the system with \(\alpha(t)\approx\delta(t-1)\).
From a numerical point of view, we proved the convergence of the explicit upwind scheme for the elapsed time model, relying on the mass-preserving property, the analysis of the fixed-point equations (23), (45) and the key BV-estimate, from which we obtain the compactness needed to conclude the result. We can extend the analysis of the elapsed time model by considering implicit or semi-discrete schemes, but a more detailed analysis of the mass conservation and the estimates must be considered. Other possible discretizations to solve the equations include for example high-order Runge-Kutta WENO methods (see for example [24, 25, 28, 26]). These alternatives might be useful to analyse numerically the elapsed time equation when the rate \(p\) is not bounded, which implies that the total activity may also be unbounded. Furthermore, this numerical analysis can be considered for other extensions of the elapsed time equation such as the model with fragmentation [10], spatial dependence [15], the multiple-renewal equation [16] and the model with a leaky memory variable in [17], or other types of structured equations.
Figure 6: Hazard rate with variable refractory period. Comparing numerical approximation of \(N(t)\) for both ITM and DDM equations. (a) Discharging flux \(N(t)\) for ITM equation, (b) Invertibility condition \(\Psi(N,n)\) for ITM. (c) \(N(t)\) and \(X(t)/J\) for DDM equation, (d) Density \(n(t,s)\) for DDM.
### Acknowledgements
This work has been supported by ANID project ECOS200018. MS and LMV were partially supported by ANID-Chile through Centro de Modelamiento Matematico (FB210005) and the INRIA Associated team ANACONDA. MS was supported by Fondecyt-ANID project 1220869, and the Jean d'Alembert fellowship program, Universite de Paris-Saclay. NT was supported by the grant Juan de la Cierva FJC2021-046894-I funded by MCIN/AEI and the European Union NextGenerationEU/PRTR.
### ORCID iDs
Mauricio Sepulveda: [https://orcid.org/0000-0001-8463-3830](https://orcid.org/0000-0001-8463-3830).
Nicolas Torres: [https://orcid.org/0000-0001-6059-9754](https://orcid.org/0000-0001-6059-9754).
Luis Miguel Villada: [https://orcid.org/0000-0002-4860-4431](https://orcid.org/0000-0002-4860-4431).
|
2302.07301 | Simulating a full-sky high resolution Galactic synchrotron spectral
index map using neural networks | We present a model for the full-sky diffuse Galactic synchrotron spectral
index with an appropriate level of spatial structure for a resolution of 56
arcmin (to match the resolution of the Haslam 408 MHz data). Observational data
at 408 MHz and 23 GHz have been used to provide spectral indices at a
resolution of 5 degrees. In this work we make use of convolutional neural
networks to provide a realistic proxy for the higher resolution information, in
place of the genuine structure. Our deep learning algorithm has been trained
using 14.4 arcmin observational data from the 1.4 GHz Parkes radio continuum
survey. We compare synchrotron emission maps constructed by extrapolating the
Haslam data using various spectral index maps, of different angular resolution,
with the Global Sky Model. We add these foreground maps to a total emission
model for a 21 cm intensity mapping experiment, then attempt to remove the
foregrounds. The different models all display different spectral or spatial
behaviour and so each provide a useful and different tool to the community for
testing component separation techniques. We find that for an experiment
operating using a cosine aperture taper beam with a primary Full Width at Half
Maximum between 1.1 and 1.6 degrees, and the principal component analysis
technique of foreground removal, there is a discernible difference between
synchrotron spectral index models with a resolution larger than 5 degrees but
that no greater resolution than 5 degrees is required. | M. O. Irfan | 2023-02-14T19:38:06Z | http://arxiv.org/abs/2302.07301v1 | # Simulating a full-sky high resolution Galactic synchrotron spectral index map using neural networks
###### Abstract
We present a model for the full-sky diffuse Galactic synchrotron spectral index with an appropriate level of spatial structure for a resolution of 56 arcmin (to match the resolution of the Haslam 408 MHz data). Observational data at 408 MHz and 23 GHz have been used to provide spectral indices at a resolution of 5 degrees. In this work we make use of convolutional neural networks to provide a realistic proxy for the higher resolution information, in place of the genuine structure. Our deep learning algorithm has been trained using 14.4 arcmin observational data from the 1.4 GHz Parkes radio continuum survey. We compare synchrotron emission maps constructed by extrapolating the Haslam data using various spectral index maps, of different angular resolution, with the Global Sky Model. We add these foreground maps to a total emission model for a 21 cm intensity mapping experiment, then attempt to remove the foregrounds. The different models all display different spectral or spatial behaviour and so each provide a useful and different tool to the community for testing component separation techniques. We find that for an experiment operating using a cosine aperture taper beam with a primary Full Width at Half Maximum between 1.1 and 1.6 degrees, and the principal component analysis technique of foreground removal, there is a discernible difference between synchrotron spectral index models with a resolution larger than 5 degrees but that no greater resolution than 5 degrees is required.
keywords: Cosmology: diffuse radiation, Methods: statistical, Radio continuum: ISM
## 1 Introduction
Component separation has proven fundamental to observational cosmology; disentangling diffuse Galactic foregrounds from a cosmological signal of interest has been a central theme for Cosmic Microwave Background studies for decades (Bennett et al., 2003; Gold et al., 2009, 2011; Leach et al., 2008; Delabrouille et al., 2013; Planck Collaboration et al., 2014, 2016, 2020). More recently a plethora of low-frequency (\(<1.5\) GHz) radio cosmology experiments have started observing with the aim of measuring the redshifted 21cm hydrogen line in order to probe Cosmic Dawn (Eastwood et al., 2019; DeBoer et al., 2017), the Epoch of Reionisation (Parsons et al., 2010; Tingay et al., 2013) or the formation of large scale structure (Newburgh et al., 2014; Battye et al., 2012; Nan et al., 2011; Santos et al., 2016). At these frequencies diffuse Galactic synchrotron emission dwarfs the cosmological signal of interest; this emission is typically modelled as a power law with a spectral index which scales the temperature across frequency. The synchrotron spectral index changes both spatially and gradually across frequency. Numerous works have investigated the mitigation of simulated foreground contamination on the detection of the simulated Hi signal, e.g. Wolz et al. (2014); Shaw et al. (2015); Bigot-Sazy et al. (2015); Alonso et al. (2015); Chapman et al. (2016); Zhang et al. (2016); Mertens et al. (2018); Carucci et al. (2020); Licardo et al. (2011); Cunnington et al. (2021); Irfan & Bull (2021); Makinen et al. (2021); Yohana et al. (2021); Soares et al. (2022); Spinelli et al. (2022). It is vital that these simulated foregrounds contain an accurate level of spatial and spectral complexity to prevent a misleading simplification of the component separation problem.
Publicly available repositories of diffuse Galactic foreground models are of enormous use to the community as they provide a test-bed on which to assess the qualities and deficiencies of component separation methods. Such resources include the Global Sky Model (Zheng et al., 2017), which uses observational data to produce all-sky maps of diffuse Galactic emission between 10 MHz and 5 THz. The Global Sky Model (GSM) performs principal component analysis on empirical data sets to determine the statistically independent components within the sky maps and then interpolates this information to model the total diffuse emission temperature at any frequency within the 10 MHz to 5 THz range. Additionally, there are the Planck
Sky (Delabrouille et al., 2013) and Python Sky (Thorne et al., 2017) models which model the different physical components expected in the Galaxy due to differing emission mechanisms and use spectral index information to scale these maps across frequency.
The 408 MHz all-sky map of Haslam et al. (1982) is typically taken as a proxy for an all-sky map of synchrotron emission because synchrotron is thought to be the dominant diffuse emission at this frequency across the majority of the sky (excluding the central Galactic plane and a few specific molecular clouds). The simplest way to scale the Haslam map from 408 MHz to any other frequency (\(\nu\)) is to use the power law parametrisation:
\[T_{\nu}(p)=T_{408}(p)\,\left(\frac{\nu}{408}\right)^{\beta}\,, \tag{1}\]
and assume a single value for the spectral index (\(\beta\)) which remains constant across pixels (p). However, the synchrotron spectral index is known to vary spatially due to energy losses of the relativistic, charged particles responsible for the emission (Bennett et al., 2003). Previous works have highlighted the need to consider a variable synchrotron spectral index for the problem of 21 cm foreground removal; demonstrating that the combination of a frequency-changing beam plus a spatially-varying spectral index is a more challenging foreground removal problem than just the situation of a frequency-changing beam plus a constant spectral index (Bernardi et al., 2015; Mozdzen et al., 2016; Anstey et al., 2021).
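For illustration, a minimal Python sketch of Equation 1 is given below, assuming healpy for map I/O; the file names are placeholders, and the per-pixel \(\beta\) map stands in for any of the spectral index models discussed in this section.

```python
import healpy as hp

nu0, nu = 408.0, 1400.0  # MHz: Haslam reference frequency and target frequency

t408 = hp.read_map("haslam_408MHz.fits")   # placeholder file name
beta = hp.read_map("spectral_index.fits")  # per-pixel beta, same Nside

# Equation (1) with a constant spectral index: one scaling factor for all pixels.
t_nu_constant = t408 * (nu / nu0) ** -2.7

# Equation (1) with a spatially varying spectral index: per-pixel scaling.
t_nu_varying = t408 * (nu / nu0) ** beta
```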
Miville-Deschenes et al. (2008) used the 408 MHz all-sky map together with the _WMAP_ 23 GHz polarisation map to determine an all-sky model for the synchrotron spectral index between 0.408 and 23 GHz. The 23 GHz map of polarisation intensity was specifically used because at GHz frequencies synchrotron emission is no longer the dominant diffuse Galactic emission in intensity, but it is the only emission expected to be present in polarised intensity. To increase the signal-to-noise ratio of the polarised data the maps were smoothed to 5 degrees, enabling the production of an all-sky synchrotron spectral index map at 5 degree resolution. This map is used in both the Planck Sky Model and Python Sky Model to scale the 408 MHz Haslam map across different frequencies. While it is generally accepted that the synchrotron spectral index changes spatially, the relationship between spectral index steepness and Galactic latitude is still not fully understood. Some spectral index maps, such as the 45 to 408 MHz Guzman et al. (2011) map, show a range of values with the shallowest indices occurring within the Galactic plane. Dickinson et al. (2009) display six different spectral index maps from the literature, all with very different spatial features, some of which display a trend in spectral index steepness with Galactic latitude while some do not, and discuss how the visible spatial features depend on the variation assumptions made for the spectral index calculation. Guzman et al. (2011) attribute the presence of shallow spectral indices across the Galactic plane to free-free emission (spectral index \(\sim-2.1\)) due to the increased amount of warm ionised hydrogen within the Galactic plane. Miville-Deschenes et al. (2008) actually present three distinct maps of the synchrotron spectral index. Two are made from the Haslam 0.408 and _WMAP_ 23 GHz intensity maps; at 23 GHz in intensity several diffuse Galactic emissions are present across the sky. While model 1 models the free-free contribution at 23 GHz, model 2 models both the free-free and anomalous microwave emission contributions. Both of these models also show shallower indices running across the plane. The third model, however, is made using the Haslam 408 MHz intensity map and the _WMAP_ 23 GHz polarised intensity map. As synchrotron emission is believed to be the only non-negligible emission present in polarised intensity at 23 GHz, this is the only spectral index model that does not rely on a free-free or anomalous microwave emission model; this is also the only model out of the three which does not display shallow spectral indices within the Galactic plane. It is this spectral index model which is used in the Planck Sky Model and the Python Sky Model; we therefore follow suit and also use this spectral index map as our basis.
Previous works have already demonstrated the need for a spatially complex synchrotron spectral index model and provided such models; the highest resolution model available being at 5 degrees. We aim to determine if this level of spatial accuracy is enough or if a spectral index map with the same resolution as the Haslam data themselves would further complicate the foreground removal process. Inspired by the recent success of Krachmalnicoff and Puglisi (2021) who used Convolutional Neural Networks (CNNs) to learn high resolution features of thermal dust models from low resolution input models, we aim to simulate high resolution information for the Miville-Deschenes et al. (2008) spectral index map. We use a CNN trained on a spectral index map constructed using both Haslam and Parkes (CHIPASS) (Calabretta et al., 2014) observational data. CNNs are a deep learning technique often employed for the task of image segmentation (assigning a label to each pixel of an image). We do not attempt a physically motivated model of synchrotron emission, as in Waelkens et al. (2009) and Fauvet et al. (2011) where the Galactic magnetic field itself is modelled. We simply attempt to construct a plausible model representative of the synchrotron spectral index for use in the testing of Hi data reduction pipelines. Previous models of the synchrotron spectral index and emission maps have used Gaussian realisations to provide additional spatial resolution; for instance Remazeilles et al. (2015) use Gaussian realisations to provide a higher resolution estimate of the 408 MHz map. Component separation methods, however, behave differently when attempting to clean Gaussian or non-Gaussian structure; Spinelli et al. (2022) show that a variety of different techniques all struggle to deal with non-Gaussian foregrounds as viewed through a non-Gaussian (Airy) beam, while the same techniques do a better job of approaching the Hi power level when only Gaussian foregrounds are considered. Therefore we aim to determine if the additional complexity of non-Gaussian, 56 arcmin resolution spatial structure will provide a more accurately challenging test-bed for such techniques.
In this work we create a new all-sky spectral index template and assess how high resolution, non-Gaussian structure impacts the ability of a blind component separation method, Principal Component Analysis (PCA), to clean an emission cube of diffuse Galactic synchrotron emission in an attempt to measure the Hi auto-correlation power spectrum. Numerous foreground cleaning methods, both blind and parametric, are available for use and each method has different advantages and disadvantages when faced with spatial and spectral structure. It is not the aim of this work to provide a complete review of all existing component separation methods; as such, we select one mainstream (i.e. often applied to intensity mapping data) technique to test here and make our spectral index map publicly available for the community to test alongside all other component separation techniques. We use our high resolution spectral index map, alongside the Haslam data at 408 MHz to form diffuse synchrotron emission templates at various frequencies, which can then be compared to existing synchrotron emission models. In this work we compare four models for synchrotron emission: 1) the GSM model, 2) the Haslam data scaled using the 5 degree Miville-Deschenes et al. (2008) spectral index map, 3) the Haslam data scaled using a version of the 5 degree Miville-Deschenes et al. (2008) spectral index map, which has had higher resolution angular detail up to 56 arcmin added to it using Gaussian realisations and 4)
the Haslam data scaled using the 5 degree Miville-Deschenes et al. (2008) spectral index map, which has had higher resolution angular detail up to 56 arcmin added to it using our trained CNN.
We choose to only consider one foreground emission: diffuse Galactic synchrotron emission, so as to assess the effect our different models have on cleaning. In reality, low level diffuse free-free emission, extragalactic point sources, residual radio frequency interference (RFI) and possibly even residual ground emission pick-up may also be present in observational data. The simulations include Hi emission, white noise and diffuse synchrotron emission and each frequency channel is convolved with a frequency-dependent beam. We use the online Hi simulations repository FastBox1 to provide the test-bed set-up and so these simulations are specifically focused on foreground removal for a single-dish intensity mapping experiment using the MeerKAT dishes; an experiment such as MeerKLASS (Santos et al., 2016). However, the spectral index map that we have produced can be used by any experiment (single-dish or interferometric) alongside the Haslam data to get an estimate of diffuse synchrotron emission.
Footnote 1: [https://github.com/philbull/FastBox](https://github.com/philbull/FastBox)
The paper is laid out as follows: Sect. 2 is split between the description of the construction of our high resolution spectral index map using CNNs and the description of our simulation test set-up i.e. each of the simulated components and the chosen component separation method. We also assess the success of our CNN in building a realistic level of high resolution structure within our spectral index map in Sect. 2. In Sect. 3 we go on to use spherically-averaged auto-correlation power spectra to assess the impact of the different foreground maps on the cleaning ability of PCA. Our conclusions are presented in Sect. 4.
## 2 Method
### Constructing a high resolution spectral index map
Spectral index maps can be formed from data taken at two different frequencies (\(\nu_{A}\) and \(\nu_{B}\)); the map produced gives the average spectral index per pixel required to scale the emission from \(\nu_{A}\) to \(\nu_{B}\) (as long as said emission can be modelled as a power law). The derived spectral index is an average over frequency, given that the spectral index is believed to change across frequency. It is important to select two sets of observational data which can be believed to only contain the emission of interest. For example, in this work we want to create a map of the synchrotron spectral index and so we need two data sets observed across either frequencies or regions of the sky we believe to be dominated by synchrotron emission. The Haslam 408 MHz data is the standard proxy for diffuse synchrotron emission. To create a spectral index map using these data we required another synchrotron dominated observational data set of the same, or higher resolution. Figure 1 shows both full and partial radio continuum maps, publicly available from the _WMAP_ legacy archive 2 as filled circles and two possible sources of future radio maps, from the MeerKLASS (Santos et al., 2016) and Bingo (Battye et al., 2012) Hi intensity mapping experiments as dotted lines. As the MeerKLASS data span a range of resolutions they are denoted using a dotted rectangle. The only two maps available with higher resolution than the Haslam data (which can also be seen in Figure 1) are both at 1.4 GHz. We choose to use the CHIPASS radio continuum survey, which is available at 14.4 arcmin resolution.
Footnote 2: [https://lambda.gsfc.nasa.gov/](https://lambda.gsfc.nasa.gov/)
We only consider publicly available maps under 1.5 GHz, as at higher frequencies free-free emission is no longer a negligible component across the full sky. In Figure 2 we plot the typical decrease in synchrotron and free-free emission temperatures across frequency for both low and high Galactic latitudes. The _Planck_ full focal plane simulation data were used to provide the emission amplitudes 3; we used the mean synchrotron and free-free emission temperatures within a 5 degree squared region centred at Galactic latitudes 75\({}^{\circ}\) (to represent high latitudes) and 10\({}^{\circ}\) (to represent low latitudes). Both emissions are modelled as a power law with a synchrotron spectral index of -2.7 (Irfan et al., 2022) and a free-free spectral index of -2.1 (Bennett et al., 1992).
Footnote 3: [https://pla.esac.esa.int/maps](https://pla.esac.esa.int/maps)
#### 2.1.1 The training and testing data
To train our network we used high resolution (14.4 arcmin) data from the CHIPASS experiment (Calabretta et al., 2014) which cover the Southern sky at declinations < 25\({}^{\circ}\). Even if the data were full sky, it would be risky to try to construct a full sky synchrotron spectral index between 408 and 1400 MHz using the Haslam and CHIPASS data. At frequencies over 1 GHz free-free emission is no longer believed to be negligible across the majority of the sky; free-free emission can contribute up to 50 per cent of the total emission close to the Galactic plane (Platania et al., 1998). Therefore the full data were not used to train our network; instead we focused on the North Polar Spur (NPS) region which is believed to be a strong synchrotron emission feature (Haslam et al., 1964). This region is highlighted using red dotted lines in Figure 3 which shows the full CHIPASS data in Galactic coordinates. We restricted our training spectral index maps to come from this red dotted region only.
Figure 1: Publicly available, full and partial sky observational data under 1.5 GHz, as collated by the _WMAP_ legacy archive. The survey central frequencies and resolutions are shown and each survey is labelled by observatory name; some surveys are made up from several telescope observations and in those cases only one observatory name has been listed. Full survey details can be found in the relevant papers: DRAO 10 MHz (Caswell, 1976), DRAO 22 MHz (Roger et al., 1999), MU (Guzman et al., 2011), Parkes 85 MHz (Landecker and Wielebinski, 1970), Parkes 150 MHz (Landecker and Wielebinski, 1970), EDA2 (Kriele et al., 2022), Jodrell (Haslam et al., 1982), Dwingeloo (Berkhuijsen, 1972), Parkes 1.4 GHz (Calabretta et al., 2014), Stokert (Reich and Reich, 1986). Two future radio surveys, using the Bingo and MeerKLASS telescopes, are also plotted using dotted lines. The grey strip highlights surveys with resolutions higher than or equal to 56 arcmin.
The 1.4 GHz CHIPASS map was smoothed (assuming Gaussian beams) to 56 arcmin and used alongside the 56 arcmin, 408 MHz Haslam map to produce a spectral index map:
\[\beta=\ln\left(\frac{T_{\rm{\nu_{1}}}}{T_{\rm{\nu_{0}}}}\right)/\ln\left(\frac{ \nu_{1}}{\nu_{0}}\right), \tag{2}\]
where \(\nu_{0}=408\) MHz and \(\nu_{1}=1.4\) GHz. The destriped, reprocessed version of the Haslam map (Remazeilles et al., 2015) was used. As this map is available at HEALPix (Gorski et al., 2005) N\({}_{\rm side}\) 512, we downgraded the N\({}_{\rm side}\) 1024 CHIPASS map to N\({}_{\rm side}\) 512.
For the synchrotron spectral index map to have the correct mean both the Haslam map and the CHIPASS data must have the correct zero-levels. As the Haslam data are so widely used as a proxy for synchrotron emission, considerable work has already been done to determine the map zero-level. Wehus et al. (2017) used linear regression between multiple data sets to fit a zero-level of 8.9 K to the Haslam data. We adopt that value and subtract it from the Haslam data. To find the zero-level of the CHIPASS data we then used the same temperature-temperature linear regression technique used in Wehus et al. (2017). The linear regression between the CHIPASS and Haslam data within our selected NPS region is given as:
\[T_{\rm{1400}}(p)=m\times T_{\rm{408}}(p)+c, \tag{3}\]
where
\[c=m\times c_{\rm{408}}+c_{\rm{1400}}, \tag{4}\]
where \(T_{\rm{1400}}\) is the temperature per pixel within the NPS region in the 1400 MHz map, \(T_{\rm{408}}\) is the equivalent at 408 MHz, \(m\) is the gradient fitted from the linear regression and the fitted offset (\(c\)) is a combination of the offsets in both maps. By taking the Haslam offset (\(c_{\rm{408}}\)) as 8.9 K, we could then calculate the CHIPASS offset. Figure 4 shows the linear regression within the NPS region between the Haslam and CHIPASS data; the Haslam data have already had the map zero-level of 8.9 K removed. We found a fitted zero-level of 3.21 K for the CHIPASS data; both the Haslam and CHIPASS maps were then used with their respective zero-levels subtracted to form a spectral index map. We show the spectral indices for our NPS region in Figure 5. In Sect. A we explore the uncertainty on our fitted zero-level of 3.21 K by performing linear regression across regions of different Galactic latitude and examine how this uncertainty propagates to uncertainties within our final spectral index map. We find a 1\(\sigma\) deviation of 0.1 K on the zero-level, which affects both the mean level of the spectral indices determined, by 3 per cent, as well as their spatial variations, by 11 per cent at the resolution of 56 arcmin.
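The map preparation described above can be summarised in a short healpy sketch; this is illustrative only, with placeholder file names, and it applies the downgrading, zero-levels and smoothing in a convenient order rather than reproducing our exact pipeline.

```python
import healpy as hp
import numpy as np

# Zero-levels subtracted before forming the index: 8.9 K (Haslam, Wehus et al.
# 2017) and 3.21 K (CHIPASS, fitted above). File names are placeholders.
t408 = hp.read_map("haslam_408MHz_nside512.fits") - 8.9
t1400 = hp.ud_grade(hp.read_map("chipass_1400MHz_nside1024.fits"), 512) - 3.21

# Bring CHIPASS (14.4 arcmin) to 56 arcmin; for Gaussian beams the required
# smoothing kernel is the quadrature difference of the two FWHMs.
kernel_deg = np.sqrt(56.0**2 - 14.4**2) / 60.0
t1400 = hp.smoothing(t1400, fwhm=np.radians(kernel_deg))

beta = np.log(t1400 / t408) / np.log(1400.0 / 408.0)  # Equation (2)
```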
The spectral index map produced was then smoothed to a resolution of 5 degrees, to provide both a low and high resolution perspective of the same data, and the data within our NPS region was cut up into numerous smaller sized spectral index maps. To increase the ease with which the network learnt the features across the numerous training spectral index maps, each separate map was normalised to contain spectral indices spanning from -1 to 1. Normalising the spectral index maps set the mean-level for all the maps to zero. This was not a problem however, as our interest is in the spatial variations around this mean; specifically the relationship between these variations at 56 arcmin and at 5 degree resolution. We selected 39 overlapping patches within this region, each spanning 7.3 degrees and having dimensions \(64\times 64\) pixels. A smaller number of patches of larger dimensions could have been chosen, but we opted for smaller images to reduce the number of features the network had to learn and to increase the number of training examples. Additionally, to maximise the number of training spectral index maps, we rotated each map three times by 90 degrees giving a total of \(39\times 4\) training maps. We formed these maps at both 56 arcmin and 5 degrees resulting in 156 pairs of maps.
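A minimal numpy sketch of the normalisation and rotation augmentation described above; `pairs` is a hypothetical list of (5 degree, 56 arcmin) 64 x 64 patch arrays.

```python
import numpy as np

def normalise(patch):
    """Rescale a spectral index patch so its values span [-1, 1]."""
    lo, hi = patch.min(), patch.max()
    return 2.0 * (patch - lo) / (hi - lo) - 1.0

def augment(pairs):
    """Rotate each patch pair by 0, 90, 180 and 270 degrees (39 x 4 maps)."""
    out = []
    for low, high in pairs:
        for k in range(4):  # k = 0 keeps the original orientation
            out.append((np.rot90(normalise(low), k),
                        np.rot90(normalise(high), k)))
    return out
```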
Figure 4: Linear regression between the Haslam and CHIPASS data within the NPS region. The fitted offset is stated on the plot.
Figure 3: The CHIPASS 1.4 GHz map. The region used to create our synchrotron spectral index training data is highlighted in red.
Figure 2: Typical temperature variation across frequency for both synchrotron and free-free emission at high (\(b_{\rm{hi}}\)) and low (\(b_{\rm{ho}}\)) Galactic latitudes. The typical emission amplitude values were taken from the _Planck_ full focal plane simulations and power law models were assumed for both with a spectral index of -2.7 for synchrotron and -2.1 for free-free emission.
#### 2.1.2 The network
CNNs can be built up using a variety of architectures. Krachmalnicoff and Puglisi (2021) use generative adversarial neural networks (GANs) to learn the 12 arcmin features from pairs of 12 and 80 arcmin thermal dust images taken from the GNILC all-sky thermal dust model (Planck Collaboration et al., 2016). Initially we began by training our CNN using a GAN architecture but found this structure failed to converge for our training data, possibly due to the small amount of training data available as we could only use a small, synchrotron dominated fraction of the full sky. In the future, when experiments like MeerKLASS and Bingo publicly release large-area maps of the MHz sky, the GAN architecture can be revisited. For the current level of data availability, however, we found the U-Net architecture optimal for our goals (Ronneberger et al., 2015) and made use of the Keras python library. U-Net CNNs use a 'U'-shaped symmetric structure of convolutional layers followed by deconvolution layers.
We have 156 pairs of synchrotron spectral index maps, each of dimension \(64\times 64\) pixels. Table 1 shows the details of each layer in the U-Net network used in this work. The first layer has 8 filters expanding the original map size from \(64\times 64\) to \(64\times 64\times 8\); for each following layer the number of filters is doubled and a stride of 2 is used. We picked a kernel size of 8, ensuring that the kernel size is divisible by the stride size to reduce the 'checkerboard' effect in the final images 4. Following the example of the generator 5 used in Krachmalnicoff and Puglisi (2021) we used the LeakyReLU activation function with a slope of 0.2, batch normalisation (to reinstate a zero mean and a variance of 1) after each convolution and a tanh activation for the final layer.
Footnote 4: [https://distill.pub/2016/deconv-checkerboard/](https://distill.pub/2016/deconv-checkerboard/)
Footnote 5: [https://github.com/ai4cmb/ForSE](https://github.com/ai4cmb/ForSE)
We held back 25 per cent of the spectral index map pairs for testing and used the other 75 per cent for training. To train our network we minimised the Mean Squared Error (MSE) loss function using the Adam optimiser for gradient descent with an initial learning rate of 0.0001. Use of the ReduceLROnPlateau option enabled the network to reduce the learning rate by a factor of 0.1 after seven iterations (epochs) with zero improvement in the loss function. The network was set to train in batches of size six and was allowed to stop whenever the loss function ceased to decrease after ten iterations. The testing spectral index maps were not used to train the network, but instead to evaluate it (see Sect. 2.1.4).
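Putting the architecture and training choices together, a minimal Keras sketch is given below. The kernel size, activations, optimiser, loss, batch size and callback settings follow the text; the decoder layout, the skip connections and the epoch budget are assumptions, and `x_lowres`/`y_highres` stand in for the training patch arrays of shape (N, 64, 64, 1).

```python
from tensorflow.keras import layers, models, optimizers, callbacks

def conv_block(x, filters, strides):
    x = layers.Conv2D(filters, kernel_size=8, strides=strides, padding="same")(x)
    x = layers.LeakyReLU(0.2)(x)
    return layers.BatchNormalization(momentum=0.5)(x)

inp = layers.Input(shape=(64, 64, 1))        # one normalised 5 degree patch
enc = [conv_block(inp, 8, strides=1)]        # 64x64x8
for f in (16, 32, 64, 128, 256):             # filters double, stride 2
    enc.append(conv_block(enc[-1], f, strides=2))  # down to 2x2x256

x = enc[-1]
for f, skip in zip((128, 64, 32, 16, 8), reversed(enc[:-1])):
    x = layers.UpSampling2D()(x)             # deconvolution branch of the 'U'
    x = conv_block(x, f, strides=1)
    x = layers.Concatenate()([x, skip])      # U-Net skip connection (assumed)

out = layers.Conv2D(1, kernel_size=8, padding="same", activation="tanh")(x)
model = models.Model(inp, out)

model.compile(optimizer=optimizers.Adam(learning_rate=1e-4), loss="mse")
model.fit(x_lowres, y_highres, batch_size=6, epochs=200,
          callbacks=[callbacks.ReduceLROnPlateau(monitor="loss",
                                                 factor=0.1, patience=7),
                     callbacks.EarlyStopping(monitor="loss", patience=10)])
```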
#### 2.1.3 Processing full-sky data
After the network had been trained the goal was to take the full-sky spectral index map of Miville-Deschenes et al. (2008) at 5 degree resolution and use it as the input for the network to generate a 56 arcmin resolution version. We obtained the Miville-Deschenes et al. (2008) map from the _Planck_ Full Focal Plane simulations (Planck Collaboration et al., 2016), using Equation 2, where \(T_{\nu_{1}}\) and \(T_{\nu_{0}}\) are the simulated synchrotron temperature maps at 353 and 217 GHz. The _Planck_ Full Focal Plane synchrotron simulations are available at N\({}_{\rm side}\) 2048, so we downgraded the synchrotron spectral index map to N\({}_{\rm side}\) 512. The \(12\times 512\times 512\) pixels were then projected into
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Operation** & **Dimensions** & **Hyperparameters** \\ \hline
Input & \(64\times 64\times 1\) & \\
Convolution 2D & \(64\times 64\times 8\) & \\
Leaky ReLU & & \(\alpha=0.2\) \\
Batch normalisation & & Momentum = 0.5 \\
Convolution 2D & \(32\times 32\times 16\) & \\
Leaky ReLU & & \(\alpha=0.2\) \\
Batch normalisation & & Momentum = 0.5 \\
Convolution 2D & \(16\times 16\times 32\) & \\
Leaky ReLU & & \(\alpha=0.2\) \\
Batch normalisation & & Momentum = 0.5 \\
Convolution 2D & \(8\times 8\times 64\) & \\
Leaky ReLU & & \(\alpha=0.2\) \\
Batch normalisation & & Momentum = 0.5 \\
Convolution 2D & \(4\times 4\times 128\) & \\
Leaky ReLU & & \(\alpha=0.2\) \\
Batch normalisation & & Momentum = 0.5 \\
Convolution 2D & \(2\times 2\times 256\) & \\
Leaky ReLU & & \(\alpha=0.2\) \\
Batch normalisation & & Momentum = 0.5 \\
Up sampling 2D + Convolution 2D & \(4\times 4\times 128\) & \\
Leaky ReLU & & \(\alpha=0.2\) \\
Batch normalisation & & Momentum = 0.5 \\
Up sampling 2D + Convolution 2D & \(8\times 8\times 64\) & \\
Leaky ReLU & & \(\alpha=0.2\) \\
Batch normalisation & & Momentum = 0.5 \\
Up sampling 2D + Convolution 2D & \(16\times 16\times 32\) & \\
Leaky ReLU & & \(\alpha=0.2\) \\
Batch normalisation & & Momentum = 0.5 \\
Up sampling 2D + Convolution 2D & \(32\times 32\times 16\) & \\
Leaky ReLU & & \(\alpha=0.2\) \\
Batch normalisation & & Momentum = 0.5 \\
Up sampling 2D + Convolution 2D & \(64\times 64\times 8\) & \\
Leaky ReLU & & \(\alpha=0.2\) \\
Batch normalisation & & Momentum = 0.5 \\
Convolution 2D & \(64\times 64\times 1\) & tanh activation \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Details of each layer in the U-Net network used in this work. The decoder layers mirror the encoder, with each up sampling followed by a convolution, Leaky ReLU activation and batch normalisation.
768 spectral index maps of dimensions \(64\times 64\) pixels using healpy functions (Zonca et al., 2019).
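The sphere-to-plane step can be performed with healpy's gnomonic projector; the sketch below is illustrative, where `lon` and `lat` are a hypothetical patch centre in Galactic coordinates and `m` is the N\({}_{\rm side}\) 512 spectral index map.

```python
import healpy as hp

def extract_patch(m, lon, lat, nside=512):
    """Project a 64x64 gnomonic patch at 6.87 arcmin/pixel (~7.3 deg across)."""
    proj = hp.projector.GnomonicProj(rot=(lon, lat), xsize=64, reso=6.87)
    return proj.projmap(m, lambda x, y, z: hp.vec2pix(nside, x, y, z))
```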
Once the 768 generated spectral index maps were projected back onto the sphere, we enlisted the technique of Cycle-spinning (Coifman and Donoho, 1995) to remove any border effects caused as a result of the projections between the 3D sphere and the 2D image plane. Cycle-spinning involves rotating the full-sky, 5 degree spectral index map, breaking up the map into 2D maps which are then used by the network to estimate high resolution 2D maps, re-projecting these maps to form a high resolution all-sky map and then performing the inverse rotation. We perform 12 rotations in total, plus the original map: Axis (X,Y): \((0^{\circ},0^{\circ})\), \((45^{\circ},0^{\circ})\), \((0^{\circ},90^{\circ})\), \((-45^{\circ},0^{\circ})\), \((0^{\circ},-90^{\circ})\), \((45^{\circ},90^{\circ})\), \((45^{\circ},-90^{\circ})\), \((-45^{\circ},90^{\circ})\), \((-45^{\circ},-90^{\circ})\), \((90^{\circ},90^{\circ})\), \((90^{\circ},-90^{\circ})\), \((-90^{\circ},90^{\circ})\) and \((-90^{\circ},-90^{\circ})\). These thirteen maps were then averaged together. Lastly we smoothed the maps from the pixel resolution (6.87 arcmin for an N\({}_{\rm side}\) 512 map) to 56 arcmin.
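A minimal sketch of the cycle-spinning average is given below; `enhance` is a hypothetical callable wrapping the flat-sky projection, CNN inference and re-projection for a single full-sky map.

```python
import healpy as hp
import numpy as np

# The 13 (X, Y) rotation axes listed above; (0, 0) is the unrotated map.
ROTATIONS = [(0, 0), (45, 0), (0, 90), (-45, 0), (0, -90), (45, 90),
             (45, -90), (-45, 90), (-45, -90), (90, 90), (90, -90),
             (-90, 90), (-90, -90)]

def cycle_spin(beta_5deg, enhance):
    """Average CNN outputs over rotations to suppress patch border effects."""
    outputs = []
    for rot in ROTATIONS:
        fwd = hp.Rotator(rot=rot, deg=True)
        rotated = fwd.rotate_map_pixel(beta_5deg)   # rotate the low-res map
        enhanced = enhance(rotated)                 # CNN adds 56 arcmin detail
        back = hp.Rotator(rot=rot, deg=True, inv=True)
        outputs.append(back.rotate_map_pixel(enhanced))  # undo the rotation
    return np.mean(outputs, axis=0)
```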
#### 2.1.4 Assessing the synchrotron spectral index map
An example of a training spectral index map pair given to the network is shown in Figure 6. The top left panel shows a normalised \(64\times 64\) pixel spectral index map at 5 degree resolution, while the top right shows the same but for the 56 arcmin spectral index map. In the lower panel we have the map generated by the trained network having been given the 5 degree map as input. The network can be seen to have learnt the key features in the high resolution map. There are clearly pixel effects in the generated map, such as a faint checkerboard effect and the odd spurious (not true to the high resolution map) pixel. However, as we are not attempting to create a spectral index map at the pixel resolution (6.87 arcmin for an N\({}_{\rm side}\) 512 map) these spurious effects get removed by the Cycle spinning and smoothing processes described in Sect. 2.1.3.
To see how well the network can generate high resolution spectral index maps from a low resolution map not used in the training, we use a map from the test subset. The top left panel of Figure 7 shows a low resolution map from the test subset, and the accompanying high resolution map in the top right panel. The network generated map is shown in the lower panel. It can be seen that the reproduction of the small scale structure is far less faithful to the true high resolution map than in the case of the training data shown in Figure 6. However, on visual inspection the level of spatial structure in the generated map seems appropriately detailed. The histogram distributions of map pixels can be used as a method to assess image complexity. In Figure 8 we show the histograms for the same maps displayed in Figure 7. The low resolution test map has one strong peak but other than that has a very flat histogram distribution, while both the 56 arcmin map and the network generated map show considerably more structure. The generated test maps are not correct, in that they are not identical to the true 56 arcmin test data, but they contain high resolution spatial structure and they remain faithful to the large scale structure in the map; therefore, they can be used to provide a high resolution spectral index map.
The original 5 degree resolution, all-sky spectral index map of Miville-Deschenes et al. (2008) is displayed in the top panel of Figure 9, whilst our 56 arcmin version is presented in the lower panel. Our 56 arcmin map is publicly available 6. Figure 10 selects
Figure 8: Histogram distributions of the normalised spectral indices in the test high and low resolution maps and the network generated high resolution model. These distributions are for the same data shown in Figure 7.
Figure 6: _Top left:_ One of the normalised 5 degree spectral index training maps, _top right:_ the equivalent 56 arcmin version of the map, _bottom:_ the equivalent map generated by the trained network.
Figure 7: _Top left:_ One of the normalised 5 degree spectral index test maps, _top right:_ the equivalent 56 arcmin version of the map, _bottom:_ the equivalent map generated by the trained network.
a smaller region of the all-sky maps to clearly demonstrate the additional detail in the high resolution version of the Miville-Deschenes et al. (2008) spectral index map. The healpy anafast library was used to calculate the angular power spectra for both the original and our new spectral index map, shown in Figure 11. The angular power spectrum for the original map drops off at \(\ell\sim 40\) / \(\theta\sim 4.5^{\circ}\) while the CNN generated spectral index map power only starts to drop off at \(\ell\sim 150\) / \(\theta\sim 1.2^{\circ}\). We can see that the high resolution spectral index map suffers from a slight power loss (between 5 and 15 per cent) relative to the original map between \(\sim 6\) and 18 degrees, as our CNN fails to perfectly reconstruct high-resolution images different to those that it has been trained with. We also add the angular power spectrum of a synchrotron spectral index map generated by adding high resolution structure to the 5 degree map using a Gaussian realisation with Gaussian structure up to 56 arcmin. The high resolution Gaussian structure was added using a power-law in \(\ell\): \(A\times(36/\ell)^{\beta}\), where the amplitude was set by the 5 degree spectral index map power at \(\ell=36\) and \(\beta=2.4\) following the parametrisation for synchrotron emission detailed in Santos and Cooray (2006). This will be pertinent for the next section, where we discuss the full simulation and four options for synchrotron emission models.
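For reference, the Gaussian comparison map can be generated in a few lines with healpy. The sketch below assumes `beta_5deg` is the 5 degree map at N\({}_{\rm side}\) 512 and, as an assumption, only adds the power-law structure at \(\ell\geq 36\); the exact blending of scales in our pipeline may differ.

```python
import healpy as hp
import numpy as np

nside = 512
lmax = 3 * nside - 1

cl_5deg = hp.anafast(beta_5deg)  # power spectrum of the 5 degree map

# Power-law small-scale power: C_ell = A (36 / ell)^2.4, with the amplitude A
# matched to the 5 degree map at ell = 36.
ell = np.arange(lmax + 1)
cl_gauss = np.zeros(lmax + 1)
cl_gauss[36:] = cl_5deg[36] * (36.0 / ell[36:]) ** 2.4

beta_gauss = beta_5deg + hp.synfast(cl_gauss, nside)
```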
### The simulation
Having constructed a high resolution spectral index map, we wished to investigate whether or not such a map would impact the results of a foreground cleaning simulation. We used the Hi simulation suite FastBox to set up two data cubes of dimensions \(128\times 128\times 128\). Both cubes cover a 3600 square degree region with a pixel resolution of \(0.47^{\circ}\) (\(60^{\circ}\) across 128 pixels). The first data cube spans the frequency range 1220 to 1363 MHz over 128 pixels, giving a frequency resolution of 1.1 MHz and the seconds cube spans from 1084 to 1226 MHz, also with a frequency resolution of 1.1 MHz. The total data cubes consist of Hi emission plus diffuse synchrotron emission and are convolved with the MeerKAT beam, instrumental white noise is then added into the simulations after the beam convolution.
The FastBox Hi signal is simulated as:
\[\Delta T_{b}(\vec{x},z)=\overline{T}_{b}(z)\,b_{\rm HI}(z)\,\delta_{m}(\vec{x },z), \tag{5}\]
details of the mean brightness temperature, HI bias and fractional HI density used can be found in Irfan and Bull (2021). Log-normal transformations of the Gaussian field are applied to ensure a physical density distribution and the effect of Redshift Space Distortions is included by shifting each 3D pixel of the transformed field.
For the foreground contribution we only simulate diffuse synchrotron emission, as we wish to explore different synchrotron models. All our foreground models cover the 3600 degree square region of \(25^{\circ}<\) Galactic latitude (\(b\)) \(<85^{\circ}\) and \(220^{\circ}<\) Galactic longitude (\(l\)) \(<280^{\circ}\). The first model is the Global Sky Model; the GSM is in fact a model of the total diffuse Galactic emission at the user-selected frequency. As the simulated data are at high Galactic latitude and MHz frequencies the assumption is that synchrotron emission will be the dominant emission, but this assumption may not necessarily hold true within the GSM map which will contain some fractional level of extrapolated free-free, anomalous microwave and thermal dust emission. Models two, three and four are all models of pure synchrotron emission assuming a power law model form and using the Haslam 56 arcmin data as the emission amplitude at 408 MHz. These three models only differ in the resolution of the spectral index map used. The 5 degree spectral index map of Miville-Deschenes et al. (2008) is the base for all three spectral index maps and provides all the spectral index information for model two. Model three uses the new spectral index map produced by this work, which has high resolution information between 56 arcmin and 5 degrees provided by our CNN and model four has the high resolution information
Figure 11: Angular power spectra of the 5 degree, CNN and Gaussian generated synchrotron spectral index maps.
Figure 10: A \(20^{\circ}\) by \(20^{\circ}\) zoom-in of the all-sky spectral index map at (_left:_) 5 degree and (_right:_) 56 arcmin resolution.
Figure 9: Full sky synchrotron spectral index between 408 MHz and 23 GHz at 5 degrees (_top_) and 56 arcmin (_bottom_) resolution.
simulated using a Gaussian realisation. Figure 12 shows the three different spectral indices per pixel for our 3600 square degree region. Common large-scale structure is clear in all maps but the 5 degree map is clearly missing the high resolution structure seen in the other two. The Gaussian high resolution map looks to be simply more noisy than the CNN map, as opposed to containing high resolution structure. The four foreground emission models are summarised in Table 2.
By introducing high resolution spatial structure we aim to determine whether said structure poses a problem for component separation methods after the sky signal has been convolved with a complex beam structure, i.e. a frequency changing beam with sidelobes which cannot be modelled at each frequency as Gaussian. Through the FastBox set-up we make use of the L-Band, Stokes I katbeam model 7 which models the frequency changing beam as a cosine aperture taper (Mauch et al., 2020). A 1D slice through the 2D beam pattern as a function of frequency is shown in Figure 13. The approximate Gaussian FWHM for the MeerKAT beam for the frequency ranges under investigation in this work is 1.54\({}^{\circ}\) to 1.13\({}^{\circ}\).
Footnote 7: [https://github.com/ska-sa/Katbeam](https://github.com/ska-sa/Katbeam)
Only the combination of the synchrotron emission plus the Hi signal is convolved with the beam; the final constituent of the total emission model, instrumental noise, is added after beam convolution. We assume Gaussian instrumental noise with a standard deviation calculated from the radiometer equation:
\[\sigma_{\rm rms}=\frac{T_{\rm sys}}{\sqrt{N_{d}\,t_{\rm res}\,\delta\nu}}, \tag{6}\]
where
\[T_{\rm sys}=T_{r}+60\left(\frac{\nu}{300}\right)^{-2.5}, \tag{7}\]
where \(T_{r}=16\) K (based on typical MeerKAT receiver temperatures (Wang et al., 2021)), \(N_{d}\) is the number of available dishes which we set to 64 to match the MeerKAT array, \(\delta\nu\) is the frequency resolution and \(t_{\rm res}\) is the observational time per pixel. Figure 14 shows the total emission (Hi plus synchrotron emission convolved with the beam and then added to the instrumental noise) at 1273 MHz for the four different synchrotron emission models. The three models which use a scaling of the Haslam data for the foregrounds are indistinguishable by eye, whereas the total emission model which uses the GSM to provide the synchrotron emission template is quite distinct. All the models share common identifiable features however, such as regions of compact emission.
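The noise level follows directly from Equations (6) and (7); a minimal sketch, where the 100 s of observing time per pixel is a nominal stand-in since the integration time is not quoted here:

```python
import numpy as np

def sigma_rms(nu_mhz, t_res_s, delta_nu_hz, t_r=16.0, n_dish=64):
    """White-noise rms per pixel from Equations (6) and (7); temperatures in K."""
    t_sys = t_r + 60.0 * (nu_mhz / 300.0) ** -2.5  # Equation (7)
    return t_sys / np.sqrt(n_dish * t_res_s * delta_nu_hz)  # Equation (6)

# One 128x128 noise realisation for a 1.1 MHz channel at 1273 MHz.
noise = np.random.normal(0.0, sigma_rms(1273.0, 100.0, 1.1e6), size=(128, 128))
```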
In Figure 15 the spectral form for a single pixel in each of the four total emission models is shown. The pixel temperature is multiplied by frequency squared in order to highlight any deviations from a simple power law model in the spectral form. The GSM total emission cube can be seen to display a completely different spectral form to the other three models, which is unsurprising as it is the only model not formed using a power law parameterisation. The other three models were formed from power laws but convolution with the MeerKAT beam has resulted in the averaging together of neighbouring pixels, which destroys the simple power law spectral form over frequency. The more complex the beam, the more complex these spectral perturbations and in turn the harder it becomes to remove foreground structure when using a component separation technique that relies on spectral smoothness.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Model name** & **Emission** & **Resolution** \\ \hline
GSM & total diffuse Galactic & 56 arcmin \\
5\({}^{\circ}\) & diffuse Galactic synchrotron & 56 arcmin / 5\({}^{\circ}\) \\
CNN & diffuse Galactic synchrotron & 56 arcmin / 56 arcmin \\
Gaussian & diffuse Galactic synchrotron & 56 arcmin / 56 arcmin \\ \hline \hline
\end{tabular}
\end{table}
Table 2: The four foreground emission models investigated in this work and their resolutions. The last three models require both amplitude and spectral index templates, which is why two resolutions are stated for each; the first is for the amplitude, the second for the spectral index.
Figure 12: The three spectral index maps investigated in this work: the 5\({}^{\circ}\) model is on top, the CNN model is in the middle and the model with high resolution information filled in using a Gaussian realisation is on the bottom row.
Figure 13: A 1D slice through the katbeam beam model as a function of frequency.
### The foreground clean
Numerous methods of component separation can be used to attempt to clean total emission maps from foregrounds. Our aim is to determine if an individual technique behaves differently for our four sets of total emission cubes. Therefore we select a single method of foreground cleaning and explore the successes and failures of that technique for our four simulation sets. The method of choice for this work is Principal Component Analysis, which is a blind technique, meaning that it requires no parametric information about the foreground emissions. PCA relies on the assumption that the foreground emission is smooth over frequency, whilst the cosmological signal is mainly stochastic over frequency. The technique works by performing an eigendecomposition of the data frequency-frequency covariance matrix:
\[C(\nu,\nu^{\prime})=\frac{1}{N_{\rm pix}}\sum_{j=1}^{N_{\rm pix}}\delta T_{j}(\nu)\delta T_{j}(\nu^{\prime}). \tag{8}\]
The largest eigenmodes are then labelled as foregrounds and subtracted away from the total data. The number of these large eigenmodes to remove is the only input parameter required by the PCA algorithm from the user.
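A minimal numpy sketch of this clean, assuming the cube is arranged with frequency as the first axis; `n_modes` is the single user-set parameter and `total_cube` is a hypothetical (128, 128, 128) simulation cube:

```python
import numpy as np

def pca_clean(cube, n_modes):
    """Remove the n_modes largest eigenmodes of the frequency covariance.

    cube: array of shape (n_freq, n_pix); mean-subtracted per channel below.
    """
    d = cube - cube.mean(axis=1, keepdims=True)
    cov = d @ d.T / d.shape[1]               # Equation (8)
    eigval, eigvec = np.linalg.eigh(cov)     # eigenvalues in ascending order
    fg_modes = eigvec[:, -n_modes:]          # largest modes labelled foregrounds
    foregrounds = fg_modes @ (fg_modes.T @ d)
    return d - foregrounds

# e.g. a 1-mode clean of a frequency-first cube
cleaned = pca_clean(total_cube.reshape(128, -1), 1).reshape(128, 128, 128)
```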
## 3 Results
To assess the impact of our four different total emission models on the success of a PCA clean to the full data cube we enlist the use of spherically averaged auto-correlation power spectra. In place of plotting the power for each cube, we plot the ratio between the cleaned cube power spectrum and the Hi plus white noise power spectrum. An ideal foreground removal technique would remove all the synchrotron emission leaving only the Hi signal (after convolution with the beam) plus the instrumental noise, thus giving a power spectra ratio of 1 on our plots. However, our focus is not on which foreground model results in the ratio closest to one but rather whether there are significant differences between the different total emission models.
In Figure 16 we summarise the current understanding of synchrotron spectral index modelling for the problem of component separation. The auto-correlation power spectra shown are all for PCA-cleaned total emission cubes; the number of PCA modes removed from the total emission cubes is always 1. The synchrotron emission models for the four spectra were provided by the Haslam data scaled to different frequencies using four different spectral index maps. The first, and simplest, spectral index map is simply a constant value at each pixel. We use the mean value of the 5 degree Miville-Deschenes et al. (2008) spectral index map, -2.93, as our constant spectral index value. The second map is a Gaussian distribution of values with a mean of -2.93 and the same standard deviation as the 5 degree spectral index map, 0.06, with a spatial resolution of 5 degrees. The third map is the actual 5 degree spectral index map and the fourth map is the third map smoothed to a resolution of 10 degrees. As previously identified in the literature, using a single, constant value for the synchrotron spectral index across all
Figure 14: The total emission models at 1273 MHz for _top to bottom_: the GSM model, the 5\({}^{\circ}\) model, the CNN model and the Gaussian model.
Figure 15: The spectral form of a single pixel in each of the four total emission models.
pixels presents a simplified version of the true problem to the component separation algorithm. The fact that the three 5 and 10 degree power spectra display the largest power excesses over the Hi plus white noise power indicates that the foreground removal is particularly challenging for a spatially complex synchrotron spectral index. The Gaussian 5 degree spectral index map, however, can be seen to place an excess of power across different (smaller) angular scales to the true 5 degree spectral index map and therefore is not an accurate representation of the foreground cleaning problem at medium angular scales. In contrast, the power spectra ratios with the largest values at large scales (small \(k\) values), as opposed to medium scales, are the two where a non-Gaussian, spatially changing spectral index has been used. As there is a significant difference between the cleaned power spectra for the true 10 and 5 degree spectral index maps, this plot motivates the goal of this paper, which is to determine if a new spectral index map with non-Gaussian spatial structure and a resolution of 56 arcmin is required to further bolster Hi component separation test-beds.
In the next plot we return to the four emission models set up in Sect. 2.2: namely the GSM, the 5 degree spectral index model, the 56 arcmin CNN spectral index model and the 56 arcmin Gaussian spectral index model. Figure 17 presents the auto-correlation power spectra plots for the four total emission models across the two frequency ranges investigated. The top plot is for the 1220.0 to 1362.6 MHz frequency range whilst the bottom two are for the 1084.0 to 1225.6 MHz frequency range. At lower frequencies the foreground emission is brighter and so a larger number of modes may be needed to achieve a suitably effective clean, which is why we include both the 1 and 2 mode removal results for the 1084.0 to 1225.6 MHz range. The GSM total emission cube was only ever cleaned using 2 mode removal as 1 mode left visible foreground structure in the cleaned maps. It is also clear from the plots in Figure 17 that the GSM model results in a completely different cleaned result compared to the other three models which are formed from a power law extrapolation of the Haslam data. The GSM is a total diffuse emission model and therefore should be used as such, as opposed to as a proxy for purely synchrotron emission. It has a more complex spectral structure than a simple power law, which is an advantage to those who wish to test their component separation technique of choice on such a data set. However, for those who intend to add their own compact sources, free-free emission, residual RFI etc., using the GSM as a proxy for solely synchrotron emission may make for an unrealistically hard-to-clean total emission cube.
The three models which are based on a power law extrapolation of the Haslam data can be seen to perform similarly in cleaning from Figure 17. In fact for the 1084.0 to 1225.6 MHz frequency range when two foreground modes are removed and the total emission cube is over-cleaned, i.e. Hi signal itself is removed causing the power spectra ratio to drop below one, the three power law models are essentially identical. For the 1 mode clean, for both frequency ranges investigated, we can see a difference in the three power law models at the large scale, low \(k\) end of the power spectra. At the largest scale plotted the 5 degree model and the Gaussian model behave similarly, while the CNN model gives different results. For the first frequency range the cleaned maps are between 3 and 3.5 times higher than the ideal measured signal (Hi plus white noise), the difference between 3 and 3.5 is noteworthy, although within the error budget. However for the second range, all three methods are more than 100 times higher in power than the ideal signal and
Figure 16: Spherically averaged auto-correlation power spectra. The power spectra ratio between cleaned total emission cubes and data cubes containing only Hi emission convolved with the beam plus instrumental noise is plotted. In all four synchrotron emission models the Haslam data have been scaled using a spectral index map. The four different plots are for four different synchrotron spectral index maps.
Figure 17: Spherically averaged auto-correlation power spectra. The power spectra ratio between cleaned total emission cubes and data cubes containing only Hi emission convolved with the beam plus instrumental noise is plotted. The top plot is for total emission cubes covering the 1220.0 to 1362.6 MHz frequency range while the bottom two plots are for the 1084.0 to 1225.6 MHz range. The difference between the middle and the bottom plot is the number of foreground modes removed for the PCA clean.
so the difference between the three methods becomes insignificant compared to the overall inability to recover the Hi signal. The conclusion to be taken from this is that it is worthwhile to pursue a more spatially complex foreground model, such as the CNN model presented in this work, if the component separation method under testing is performing well. If on the other hand, a new method is being honed which is orders of magnitude away from recovering the Hi signal, there is no need to pursue a more complex test-bed set-up than using the Haslam map extrapolated to different frequencies with the Miville-Deschenes et al. (2008) spectral index map. Specifically, for the MeerKLASS experimental set-up and the foreground cleaning method of PCA there is no significant difference between any of our 56 arcmin and the 5 degree spectral index models i.e. spectral index spatial complexity beyond a 5 degree resolution is not required for this particular test set-up.
## 4 Conclusions
Prior to this work it was known that assuming a spatially constant synchrotron spectral index would be an unrealistic simplification of the component separation problem for 21 cm intensity mapping experiments with frequency-dependent beams. It had also been shown that Gaussian foreground realisations are easier to separate from the Hi signal than non-Gaussian foregrounds when using blind foreground removal techniques. Motivated by the knowledge that increasingly spatially complex synchrotron spectral index models have been required to test the limitations of component separation techniques, we created a 56 arcmin spectral index model to act as a proxy until future observational data allow us to measure the all-sky synchrotron spectral index at high angular resolution.
We have presented a model of the diffuse Galactic synchrotron emission spectral index between 0.408 and 23 GHz created from the 5 degree map of Miville-Deschenes et al. (2008) using a CNN trained on low and high resolution spectral index maps from CHIPASS 1.4 GHz observational data. Our map contains spatial structure up to 56 arcmin resolution. The intent is for this map to be used alongside the 56 arcmin 408 MHz map to create more realistic models for diffuse synchrotron emission across frequency for use as part of any simulation suite designed to test component separation techniques. It can be seen from the test subset of the data used to train the CNN that the small-scale structure generated is not an accurate representation of the true (empirically measured) small-scale structure; this, coupled with the mid-range resolution power loss seen in the angular power spectrum, means that our spectral index map cannot be used for science analysis of diffuse Galactic synchrotron emission. However, we believe that the CNN model presented in this work offers a useful contribution to simulation test-beds designed to probe the advantages and disadvantages of different component separation techniques.
Deep convolutional generative adversarial networks (DC-GANs) have been implemented for the simulation of thermal dust emission maps (Krachmalnicoff and Puglisi, 2021; Aylor et al., 2021) but for our input resolution (5 degrees) and relatively small number of training maps (156) we found it problematic to get a GAN network to converge and so instead found the U-Net CNN architecture to be the optimum set-up for our purposes. The availability of high resolution CHIPASS data enabled the calculation of spectral indices between 408 and 1400 MHz at 56 arcmin which could then be smoothed so our network could be trained using pairs of 56 arcmin and 5 degree spectral index maps. Ideally we would have used publicly available high resolution MHz data, as at 1.4 GHz free-free emission is no longer negligible in certain regions of the sky. To combat the lack of available MHz data, we used the CHIPASS data but only within the North Polar Spur region; an area known to be dominated by diffuse synchrotron emission. In the future, however, we will be able to redo this analysis using MeerKAT or Bingo data to train our CNN.
To emphasize the use of a spatially complex foreground emission model, we set up four different total emission models and tested the ability of PCA to clean away the foregrounds, leaving the Hi plus instrumental noise. Our four emission models each contained the same Hi signal and the same level of instrumental Gaussian noise, and in each total emission model the foreground and sky were convolved with the frequency-dependent katbeam beam model. The only difference between the four models was the foreground contributions. Emission model one had the synchrotron emission provided by the Global Sky Model; emission models two to four were all power law models of synchrotron emission made by extrapolating the Haslam map over frequency using a spectral index map. For model two the spectral index map used was the 5 degree map of Miville-Deschenes et al. (2008), for model three our high resolution CNN spectral index map was used, and for model four the high resolution spatial information for the 5 degree spectral index map was provided using a Gaussian realisation. None of these four models is perfect: the GSM model is for total emission and therefore has a far more complex spectral form than a simple power law, and the other three models assume that only synchrotron emission is present in the Haslam data. Having a selection of possible models with different characteristics, however, is useful for testing component separation techniques as that allows for an investigation into how each technique responds to different contaminants. Using the GSM allows for the testing of a technique in the presence of a complex spectral structure caused by multiple emission sources, whilst using a power law model with a spatially varying spectral index simulates the complex spectral structure caused by the interaction between a foreground emission and the telescope beam pattern. For our experimental setup (cosine aperture taper beam with a FWHM between 1.1 and 1.6 degrees and foregrounds cleaned using PCA) and an upper resolution bound of 5 degrees we have shown that increasing the resolution of the spectral index spatial structure changes the cleaned power spectrum and that this change occurs over different \(k\) scales depending on whether the spatial structure is Gaussian or non-Gaussian. At resolutions equal to or greater than 5 degrees, however, there is no significant difference between using the Miville-Deschenes et al. (2008) 5 degree spectral index map, a Gaussian 56 arcmin spectral index map or a non-Gaussian 56 arcmin spectral index map. This case-study is of particular pertinence for the MeerKLASS (Santos et al., 2016) component separation effort, while the 56 arcmin spectral index map made publicly available by this work is relevant to any other 21 cm experiment testing component separation techniques on their own unique experimental set-up.
In this work we have chosen to use the Miville-Deschenes et al. (2008) all-sky map as our estimate for the synchrotron spectral index at all resolutions larger than 5 degrees. As such, our CNN spectral index map is completely tied to this per-pixel spectral index estimate. We believe this is an appropriate choice given the prevalence of this spectral index map within both the CMB and intensity mapping communities, as it is used within both the Python Sky Model and the Planck Sky Model. However, it must be noted that any limitations or inaccuracies associated with the Miville-Deschenes et al. (2008) 5 degree spectral index map are shared by the CNN 56 arcmin spectral index map presented in this paper.
An additional complexity for the synchrotron spectral index is the very likely possibility that it changes, not only across pixels but also over frequency. Our CNN spectral index map, like the 5 degree spectral index map, provides the average (across frequency) spectral index per pixel between 0.408 and 23 GHz. The per-pixel spectral index value can, however, be scaled across frequency using a curvature model from the literature (Kogut (2012) for example), if required. We leave it to the community to scale the spectral index map as desired and to extend the tests shown in this work to include any other component separation technique and other non-Gaussian beam models.
## Data Availability
The 56 arcmin map of the simulated synchrotron spectral index is publicly available here: [https://github.com/melisirfan/synchrotron_emission](https://github.com/melisirfan/synchrotron_emission) and the Jupyter notebooks used in this analysis have been added to the Fastbox repository.
## Acknowledgements
M.I. acknowledges support from the South African Radio Astronomy Observatory and National Research Foundation (Grant No. 84156) and would like to thank Mosima Nagip for her valuable insight on CNN architecture and Phil Bull and Mario Santos for the useful discussions.
|
2301.13142 | Self-Compressing Neural Networks | This work focuses on reducing neural network size, which is a major driver of
neural network execution time, power consumption, bandwidth, and memory
footprint. A key challenge is to reduce size in a manner that can be exploited
readily for efficient training and inference without the need for specialized
hardware. We propose Self-Compression: a simple, general method that
simultaneously achieves two goals: (1) removing redundant weights, and (2)
reducing the number of bits required to represent the remaining weights. This
is achieved using a generalized loss function to minimize overall network size.
In our experiments we demonstrate floating point accuracy with as few as 3% of
the bits and 18% of the weights remaining in the network. | Szabolcs Cséfalvay, James Imber | 2023-01-30T18:22:28Z | http://arxiv.org/abs/2301.13142v2 | # Self-Compressing Neural Networks
###### Abstract
This work focuses on reducing neural network size, which is a major driver of neural network execution time, power consumption, bandwidth, and memory footprint. A key challenge is to reduce size in a manner that can be exploited readily for efficient training and inference without the need for specialized hardware. We propose Self-Compression: a simple, general method that simultaneously achieves two goals: (1) removing redundant weights, and (2) reducing the number of bits required to represent the remaining weights. This is achieved using a generalized loss function to minimize overall network size. In our experiments we demonstrate floating point accuracy with as few as 3% of the bits and 18% of the weights remaining in the network.
## Introduction
The ongoing revolution in the capabilities of machine learning models can in large part be attributed to their increasing size. For example, the exceptional capabilities of recent state-of-the-art language models [1] have only been achieved at the expense of immense network size, slow training and execution, and high energy/carbon consumption [1]. However, performance optimization, particularly for power- and area-efficient inference on dedicated accelerators, has been relatively neglected, which limits the deployment of powerful models on resource-limited devices [1].
In this work our objective is threefold: (1) to compress networks _during_ training to realize benefits in training time; (2) to reduce the size of weight and activation tensors by eliminating redundant channels; and (3) to reduce the number of bits required to represent weights. The second and third points produce a smaller network expected to execute more efficiently on devices supporting variable bit depth weight formats [10]. Despite being conceptually simple, the approach we take is effective and we demonstrate high compression rates on an example classification network. We achieve the following advantages:
* Fewer weights in the final network.
* Fewer bits in the remaining parameters (depending on the target device).
* Reduced training and execution time.
* Freeing the network designer from manually optimizing architectural hyperparameters such as layer widths and bit depths.
* No requirement for special hardware to take advantage of most optimizations (e.g., no need for sparse matrix multiplication [10] or support for hash functions [11]).
We achieve this by means of a novel quantization-aware training (QAT) scheme in which the quantization nodes are differentiable in their exponents and number of bits. This allows bit depths to be reduced simultaneously with maximizing accuracy on the task being trained for. Redundant channels are automatically detected when they reach zero bits and periodically eliminated, leading to a speedup in both training and inference due to reduced bandwidth and compute requirements.
## Related Work
Our proposed solution bridges multiple active research areas: low bit depth neural networks, QAT, and induced sparsity (particularly channel pruning).
Early contributions in the field of low bit depth neural networks showed that it is possible to achieve reasonable accuracy at very low bit depths with specialized operators [14, 15]. Where specialized operators are needed, specialized inference hardware may also be required [12]. The present work is designed to yield networks that may be deployed efficiently on low-bit-depth integer pipelines, as are available in many GPUs and neural network accelerators.
There exist many methods for performing QAT for network parameters. One important advance is the Straight-Through Estimator (STE) for rounding [1], which allows gradient updates to be propagated to weights through a rounding operation during training. Other methods smooth the rounding function, using stochastic rounding [1] or explicit smoothing [1]. Importantly, Defossez et al. (2022) take QAT a step further by also learning bit depths.
The literature on induced network sparsity started with Le Cun et al. (1989). Recent related developments include methods for efficient inference of sparse networks [1] and Ferhatosmanoglu (2021), and induced structured sparsity such as channel pruning (He et al., 2017).
In our experiments we compare with the related method of Defossez et al. (2022), as described in more detail in the _Experiments_ section below. The following differences with our method should be noted:
1. We allow bit depths to reduce to zero, eliminating some weights, instead of limiting minimum compression to 1 bit.
2. We define the quantization function in such a way that it is fully differentiable with respect to all parameters, including the number format parameters (scale/exponent and bit depth; Jacob et al., 2017). Importantly, this turns all number format parameters into network parameters that can be trained directly as if they were weights.
3. We use the basic STE for all training instead of using pseudo-quantization noise.
4. We use a coarser grouping of weights: instead of using groups of 4, 8 or 16 weights, we group all weights in a channel, achieving greater stability and less forgetting during training. This also allows for a significant reduction in compute requirements without requiring specialized hardware by a complete elimination of channels.
## Self-Compression and Differentiable Quantization
In this paper, our experiments use a differentiable number format (eq. 1) that is shared by a group of weights, represented as signed integers with floating point exponents \(e\) and bit depths \(b\) (however, this is fully expected to generalize to other formats such as Q8A). Our quantization function is as follows:
\[q(x,b,e)=\ 2^{e}[\min(\max(2^{-e}x,-2^{b-1}),2^{b-1}-1)] \tag{1}\]
Where \([\cdot]\) is the rounding function which rounds to nearest integer with ties to nearest even. Since this formula is only valid for non-negative values of \(b\), we constrain the range of \(b\) to be greater than or equal to zero. Use of the STE to redefine the derivative of the rounding function makes it possible to optimize an objective function with respect to the quantization parameters \(b\) and \(e\).
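As a concrete illustration, here is a minimal PyTorch sketch of this quantizer. The function and variable names are ours rather than the authors' code, and it assumes `b` and `e` are trainable tensors shaped to broadcast against `x` (e.g., one pair per output channel):

```python
import torch

def round_ste(x: torch.Tensor) -> torch.Tensor:
    # Round to nearest (ties to even) in the forward pass; the subtraction
    # trick gives an identity gradient, i.e., the Straight-Through Estimator.
    return x + (torch.round(x) - x).detach()

def quantize(x: torch.Tensor, b: torch.Tensor, e: torch.Tensor) -> torch.Tensor:
    """Eq. (1): quantize x with (trainable) bit depth b >= 0 and exponent e."""
    b = torch.clamp(b, min=0.0)                       # b must be non-negative
    lo = -(2.0 ** (b - 1.0))                          # smallest representable integer
    hi = 2.0 ** (b - 1.0) - 1.0                       # largest representable integer
    q = torch.clamp(x * 2.0 ** (-e), min=lo, max=hi)  # scale and clip
    return (2.0 ** e) * round_ste(q)                  # gradients reach both b and e
```

Note that with `b = 0` the clipping range collapses to `[-0.5, -0.5]`, and rounding with ties to even yields exactly zero, which is what makes zero-bit channels safe to prune.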
The choice of rounding mode is important: when \(b=0\) the output of the \(q\) function is always zero. Therefore, when a weight is represented with zero bits, it makes no contribution to the output of the network, and may be removed without changing the result. By sharing the quantization parameters across entire channels, it becomes possible to remove (prune) zero bit channels without impacting the network's output. This has the effect both of reducing the size of weight and activation tensors in the network (Figure 1) and of accelerating training over time (Figure 2), without affecting the accuracy of the final network.
Reducing a network's size by removing channels has the advantage of not requiring specialized hardware to handle the reduced network. Our proposed method therefore proceeds as follows:
1. Quantizing each output channel of the weights with a single quantization parameter pair of bit depth and exponent (\(b\),\(e\)).
2. Training the network using a loss function that maximizes accuracy on the original task whilst penalizing the number of bits used.
3. Removing network parameters (i.e. weight output channels) when the corresponding bit depths reach zero. This is also propagated to subsequent ops that consumed the removed output channel, resulting in a reduction in the size of following layers, and the removal of the corresponding input channel of a following convolution, where present.
Although the method described in this work learns to compress and eliminate channels, it is expected to generalize to other hardware-exploitable learned sparsity patterns.
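A hedged sketch of the channel-removal step (step 3 above) for a pair of consecutive convolutions is given below. `prune_zero_bit_channels` is a hypothetical helper, not the authors' code; it assumes directly connected `groups=1` convolutions and that the biases of zero-bit channels have already been driven to zero:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def prune_zero_bit_channels(conv: nn.Conv2d, next_conv: nn.Conv2d,
                            bits: torch.Tensor) -> torch.Tensor:
    """Remove output channels of `conv` whose learned bit depth is zero,
    together with the matching input channels of `next_conv`.

    Removal is output-preserving only once the pruned channels' biases
    have been reduced to zero by the L1 loss described later.
    """
    keep = (bits > 0).nonzero(as_tuple=True)[0]
    conv.weight = nn.Parameter(conv.weight[keep])               # [O', I, kH, kW]
    if conv.bias is not None:
        conv.bias = nn.Parameter(conv.bias[keep])
    conv.out_channels = keep.numel()
    next_conv.weight = nn.Parameter(next_conv.weight[:, keep])  # [O2, O', kH, kW]
    next_conv.in_channels = keep.numel()
    return keep
```

As noted below, the optimizer state (e.g., momentum vectors) for the removed parameters must be dropped at the same time.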
Figure 1: Using the proposed method, network size (number of bits) shrinks quickly early in training, with further reductions becoming progressively more gradual.
Figure 2: Training time accelerates as parameters are removed from the network.
When removing parts of a network during training, the optimizer state must also be modified by removing the meta-parameters (e.g. momentum vectors) corresponding to the removed parameters.
### Optimization Objective
In this work, it is shown that an optimization objective may be defined that improves one or more aspects of neural network performance in addition to the usual objective of reducing error on the training dataset. These aspects could include the network's size, total bandwidth consumed, number of hardware operations, power consumption, energy per inference, performance on a specific target hardware, etc. All of the above can be minimized by using bit depths as a proxy. In this work we therefore chose to minimize the number of bits, which additionally makes direct use of our proposed differentiable number format (1) for learning quantization parameters. We do this by including a new term \(\gamma Q\) in the optimization objective:
\[\Lambda(x)=\Lambda_{0}(x)+\gamma Q \tag{2}\]
Where \(\Lambda_{0}\) is the original loss of the network, \(\gamma\) is the compression factor (a larger \(\gamma\) produces a smaller, less accurate network), and \(Q\) is the average bit depth. \(Q\) is defined as the sum of the sizes \(z_{l}\) of all layers \(l\), divided by the total number of weights \(N\) in the starting network:
\[Q=\frac{1}{N}{\sum}_{l=1}^{L}z_{l} \tag{3}\]
The size of a layer can be expressed as the total number of bits used to represent its output channels:
\[z_{l}=I_{l}H_{l}W_{l}\sum\nolimits_{i=1}^{O_{l}}b_{l}^{i} \tag{4}\]
Where \(O_{l}\), \(I_{l}\), \(H_{l}\) and \(W_{l}\) are the output, input, height, and width dimensions of the weight tensor of layer \(l\) respectively, and \(b_{l}^{i}\) is the bit depth of output channel \(i\) of layer \(l\). When this metric is minimized, some \(b_{l}^{i}\) can reach zero. When this happens the corresponding output channel can often be removed from the network without losing accuracy.
In addition, if the output of layer \(l^{\prime}\) is directly consumed by a layer \(l\), then whenever an output channel of \(l^{\prime}\) is compressed to zero bits, the corresponding input channel of layer \(l\) also becomes redundant. Therefore, the compression loss may be improved by including this relationship:
\[z_{l}=H_{l}W_{l}\sum_{j=1}^{I_{l}}\mathbf{1}_{b_{l^{\prime}}^{j}>0}\sum_{i=1}^{O_{l}}b_{l}^{i}+H_{l}W_{l}\sum_{i=1}^{O_{l}}\mathbf{1}_{b_{l}^{i}>0}\sum_{j=1}^{I_{l}}b_{l^{\prime}}^{j} \tag{5}\]
Where \(b_{l^{\prime}}\) is the vector of bit depths used to encode the previous convolution layer's output (where present).
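The size term can be computed directly from the per-channel bit depths. The following sketch (our naming, not the authors' code) implements eq. (4)/(5) for one layer; the indicator functions are piecewise constant, so gradients flow only through the bit-depth sums:

```python
import torch
from typing import Optional

def layer_size_bits(weight: torch.Tensor, b_out: torch.Tensor,
                    b_in: Optional[torch.Tensor] = None) -> torch.Tensor:
    """Differentiable layer size in bits for a conv weight of shape [O, I, H, W].

    b_out: bit depths of this layer's output channels (length O).
    b_in:  bit depths of the preceding layer's output channels (length I),
           if such a layer exists; enables the cross-layer term of eq. (5).
    """
    O, I, H, W = weight.shape
    if b_in is None:
        return I * H * W * b_out.sum()                   # eq. (4)
    live_in = (b_in > 0).float().sum()                   # non-pruned input channels
    live_out = (b_out > 0).float().sum()                 # non-pruned output channels
    return H * W * (live_in * b_out.sum() + live_out * b_in.sum())  # eq. (5)

# Total objective, eqs. (2)-(3):
#   loss = task_loss + gamma * sum(layer_size_bits(...) over layers) / N
```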
Once a channel can be compressed to zero bits, it becomes a candidate for removal. However, removing a channel that only outputs zeros could still significantly change the network's output if a bias were added to that channel. A sudden change to the network's output can irreversibly disrupt the training, so to handle this, an \(L_{1}\) loss is applied to biases operating on zero-bit channels to reduce them to zero. Only when the biases are reduced to zero are these output channels (and corresponding input channels from the next layer) removed, since at this point removing such a channel does not change the network's output.
A sudden change of quantization parameters can also irreversibly degrade the network during training, a problem described in the next section.
### Irreversible Forgetting
Compressing networks in this way can be challenging. We conjecture that the network is continuously trying to remove (forget) channels (or more generally groups of weights quantized by a common bit depth parameter) that are not necessary to produce a low error _at that moment_ in training. However, this process could erroneously remove parts of a network that are useful, albeit not heavily used during processing of recent minibatches. For example, one might consider a network channel in the first layer trained to match horizontal lines. If multiple subsequent training batches contain no horizontal lines affecting the output, the training might determine that horizontal lines are not necessary and reduce the channel's quantization bit depth too much, possibly to a point where the training can no longer relearn the feature by recovering the corresponding bit depth, even if it is needed later. We will call this _irreversible forgetting_.
This phenomenon is more likely to occur deeper in the network in wider layers where more abstract (and less often-needed) features are located. We have identified ways to mitigate irreversible forgetting, including:
1. Having more weights share the same quantization parameters. Even if some of the weights in a group seem unnecessary, their encoding bit depth will stay high if other weights in the group are being used.
2. Using the Adam optimizer, which adapts the learning rate when gradients are noisy, with a relatively high epsilon parameter to reduce the "acceleration" of bit depth parameters during the early phase of training.
Another factor that might affect the compression rate of the network is the error function's smoothness, but exploring this aspect is left for future work.
## Experiments
To demonstrate the proposed method, a fast-training classification network was chosen (Page 2019). This is important for being able to iterate algorithm development quickly, and to explore the tradeoff space between training time (Figure 2), network size (Figure 3), and accuracy in reasonable time.
Experiments were conducted on the CIFAR-10 dataset using the following data augmentation methods, applied in the following order: (1) 4 pixel padding; (2) PyTorch AutoAugment policy for CIFAR-10; (3) random horizontal flip; (4) 32x32 random crop; (5) random erasing; and (6) normalization.
The optimizer used was Adam. For training the quantization parameters and weights we used a learning rate of 0.5 and \(10^{-3}\) respectively, and an \(\epsilon\) parameter of \(10^{-3}\) and \(10^{-5}\) respectively. An \(L_{2}\) decay of \(5\times 10^{-4}\) was applied only to the weights. Training was run for 850 iterations, then the network was allowed to "anneal" to a final state by using PyTorch's ReduceLROnPlateau scheduler until convergence.
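In code, these hyperparameters amount to two Adam parameter groups. A minimal sketch follows, under our own assumptions: `model` is the network, and its quantization parameters are assumed to be registered with names ending in `.b` and `.e` (this naming convention is ours):

```python
import torch

# Hypothetical naming: quantization parameters end in '.b' (bit depths)
# and '.e' (exponents); everything else is an ordinary weight.
quant_params = [p for n, p in model.named_parameters() if n.endswith(('.b', '.e'))]
weight_params = [p for n, p in model.named_parameters() if not n.endswith(('.b', '.e'))]

optimizer = torch.optim.Adam([
    {'params': quant_params,  'lr': 0.5,  'eps': 1e-3, 'weight_decay': 0.0},
    {'params': weight_params, 'lr': 1e-3, 'eps': 1e-5, 'weight_decay': 5e-4},
])
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
```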
The same training method was used when implementing the method of Defossez et al. (2022) for fair comparison with our method.
## Results
A major advantage of Self-Compression is a parameterized trade-off between size and accuracy, in our case governed by the parameter \(\gamma\) (Equation 2). The network was trained with \(\gamma\) log-uniformly sampled from the interval [\(10^{-3}\), \(10^{-0.5}\)]. As can be seen in Figure 3, this forms a locus in a plot of accuracy against final network size, wherein high values of \(\gamma\) correspond to higher compression/lower accuracy. Baseline 32-bit float accuracy on this network is 95.69 \(\pm\) 0.22, which we can match down to as few as 3% of the network weight bits (18% of weights) remaining.
Also shown in Figure 3 are results for the method of Defossez et al. (2022) which also learns bit depths simultaneously with optimization of accuracy. Their method typically achieves floating point accuracy when the final size is above \(\sim\)8% of the original number of bits. However, our proposed method maintains high accuracy at lower numbers of bits. We also note that the locus of their method is considerably noisier, which may be due to their use of a smaller weight granularity and stochastic rounding. One key difference between our proposed method and that of Defossez et al. is the form of the quantization function (STE vs. stochastic rounding). For this reason, we also include results in Figure 3 for their method using the STE instead of stochastic rounding, which results in a modest improvement in accuracy.
Figure 4 shows the number of channels before and after Self-Compression is applied with \(\gamma\) = 0.015. The boxes represent convolution blocks, comprising convolutions with optional batch norm and bias. The numbers on the arrows indicate number of activation channels, and the numbers on the convolution blocks represent the number of output channels. Where a summation has been performed, the number of input channels is instead noted.
## Conclusion
We have introduced Self-Compression: an efficient, conceptually simple means of learning the bit depths used to represent a network's parameters simultaneously with learning its weights, so that network size is reduced during training while accuracy on the task is maximized. Results on the CIFAR-10 classification task indicate that accuracy close to 32-bit floating point can be achieved with as few as 1-3% of the original bits remaining. Importantly, performance improvements are realizable on typical hardware for accelerating neural networks including
Figure 4: An overview of the number of weight channels in the example classification network before (top) and after (bottom) applying Self-Compression.
Figure 3: Top-1 accuracy on CIFAR-10 for different choices of compression factor \(\gamma\). Also shown are results from the method of Defossez et al. (2022). Bit depths are determined using eq. (3) and (4) after channels of zeroes have been removed.
CPUs, GPUs, and neural network accelerators, without the need for specialized hardware or execution algorithms.
## Acknowledgments
Our special thanks go to Timothy Gale and Gunduz Vehbi Demirci. We would also like to thank our other colleagues at Imagination Technologies who supported this work.
|
2302.13406 | GNNDelete: A General Strategy for Unlearning in Graph Neural Networks | Graph unlearning, which involves deleting graph elements such as nodes, node
labels, and relationships from a trained graph neural network (GNN) model, is
crucial for real-world applications where data elements may become irrelevant,
inaccurate, or privacy-sensitive. However, existing methods for graph
unlearning either deteriorate model weights shared across all nodes or fail to
effectively delete edges due to their strong dependence on local graph
neighborhoods. To address these limitations, we introduce GNNDelete, a novel
model-agnostic layer-wise operator that optimizes two critical properties,
namely, Deleted Edge Consistency and Neighborhood Influence, for graph
unlearning. Deleted Edge Consistency ensures that the influence of deleted
elements is removed from both model weights and neighboring representations,
while Neighborhood Influence guarantees that the remaining model knowledge is
preserved after deletion. GNNDelete updates representations to delete nodes and
edges from the model while retaining the rest of the learned knowledge. We
conduct experiments on seven real-world graphs, showing that GNNDelete
outperforms existing approaches by up to 38.8% (AUC) on edge, node, and node
feature deletion tasks, and 32.2% on distinguishing deleted edges from
non-deleted ones. Additionally, GNNDelete is efficient, taking 12.3x less time
and 9.3x less space than retraining GNN from scratch on WordNet18. | Jiali Cheng, George Dasoulas, Huan He, Chirag Agarwal, Marinka Zitnik | 2023-02-26T21:04:53Z | http://arxiv.org/abs/2302.13406v1 | # GNNDelete: A General Strategy for
###### Abstract
Graph unlearning, which involves deleting graph elements such as nodes, node labels, and relationships from a trained graph neural network (GNN) model, is crucial for real-world applications where data elements may become irrelevant, inaccurate, or privacy-sensitive. However, existing methods for graph unlearning either deteriorate model weights shared across all nodes or fail to effectively delete edges due to their strong dependence on local graph neighborhoods. To address these limitations, we introduce GNNDelete, a novel model-agnostic layer-wise operator that optimizes two critical properties, namely, Deleted Edge Consistency and Neighborhood Influence, for graph unlearning. Deleted Edge Consistency ensures that the influence of deleted elements is removed from both model weights and neighboring representations, while Neighborhood Influence guarantees that the remaining model knowledge is preserved after deletion. GNNDelete updates representations to delete nodes and edges from the model while retaining the rest of the learned knowledge. We conduct experiments on seven real-world graphs, showing that GNNDelete outperforms existing approaches by up to 38.8% (AUC) on edge, node, and node feature deletion tasks, and 32.2% on distinguishing deleted edges from non-deleted ones. Additionally, GNNDelete is efficient, taking 12.3x less time and 9.3x less space than retraining GNN from scratch on WordNet18.
## 1 Introduction
Graph neural networks (GNNs) are being increasingly used in a variety of real-world applications (Li et al., 2022; Ying et al., 2019; Xu et al., 2022, 2019; Huang et al., 2021; Morselli Gysi et al., 2021; Hu et al., 2020), with the underlying graphs often evolving over time. Machine learning approaches typically involve offline training of a model on a complete training dataset, which is then used for inference without further updates. In contrast, online training methods allow for the model to be updated using new data points as they become available (Orabona, 2019; Nagabandi et al., 2019). However, neither offline nor online learning approaches can address the problem of data deletion (Cao and Yang, 2015; Ginart et al., 2019), which involves removing the influence of a data point from a trained model without sacrificing model performance. When data needs to be deleted from a model, the model must be updated accordingly (Fu et al., 2022). In the face of evolving datasets and growing demands for privacy, GNNs must therefore not only generalize to new tasks and graphs but also be capable of effectively handling information deletion for graph elements from a trained model.
Despite the development of methods for machine unlearning, none of these approaches are applicable to GNNs due to fundamental differences arising from the dependencies between nodes connected by edges (which we show in this paper). Existing machine unlearning methods are unsuitable for data with underlying geometric and relational structure, as graph elements can exert a strong influence on other elements in their immediate vicinity. Furthermore, since the effectiveness of GNN models is based on the exchange of information across local graph neighborhoods, an adversarial agent can easily infer the presence of a data point from its neighbors if the impact of the data point on its local neighborhood is not limited. Given the wide range of GNN applications and the lack of graph unlearning methods, there is a pressing need to develop algorithms that enable GNN models to unlearn previously learned information. This would ensure that inaccurate, outdated, or
privacy-sensitive graph elements are no longer used by the model, thereby preventing security risks and performance degradation. In this paper, we take a step towards building an efficient and general-purpose graph unlearning method for GNNs.
Designing graph unlearning methods is a challenging task. Merely removing data is insufficient to comply with recent demands for increased data privacy because models trained on the original data may still contain information about removed features. A naive approach is to delete the data and retrain a model from scratch, but this can be prohibitively expensive, especially in large datasets. Recently, efforts have been made to achieve efficient unlearning based on exact unlearning (Brophy & Lowd, 2021; Sekhari et al., 2021; Hase et al., 2021; Ullah et al., 2021). The core idea is to retrain several independent models by dividing a dataset into separate shards and then aggregating their predictions during inference. Such methods guarantee the removal of all information associated with the deleted data. However, in the context of GNNs, dividing graphs destroys the structure of the input graph, leading to poor performance on node-, edge- and graph-level tasks. To address this issue, Chen et al. (2022b) uses a graph partitioning method to preserve graph structural information and aggregates predictions across individually retrained shards to produce predictions. However, this approach is still less efficient as the cost increases as the number of shards grows. In addition, choosing the optimal number of shards is still unresolved and may require extra hyperparameter tuning. Several approximation-based approaches (Guo et al., 2020; Ullah et al., 2021; He et al., 2021; Shibata et al., 2021) avoid retraining a model from scratch on data subsets. While these approaches have shown promise, Mitchell et al. (2022) demonstrated that these unlearning methods change the underlying predictive model in a way that can harm model performance.
**Present Work.** We introduce GNNDelete1, a general approach for graph unlearning that can delete nodes, node labels, and relationships from any trained GNN model. We formalize two essential properties that GNN deletion methods should satisfy: 1) _Deleted Edge Consistency:_ predicted probabilities for deleted edges in the unlearned model should be similar to those for nonexistent edges. This property forces GNNDelete to unlearn information such that deleted edges appear as unconnected nodes. 2) _Neighborhood Influence:_ we establish a connection between graph unlearning and Granger causality (Granger, 1969) to ensure that predictions in the local vicinity of the deletion maintain their original performance and are not affected by the deletion. However, existing graph unlearning methods do not consider this essential property, meaning they do not consider the influence of local connectivity, which can lead to sub-optimal deletion. To achieve both efficiency and scalability, GNNDelete uses a layer-wise deletion operator to revise a trained GNN model. When receiving deletion requests, GNNDelete freezes the model weights and learns additional small weight matrices that are shared across nodes in the graph. Unlike methods that attempt to
Figure 1: **a.** Illustration of Deleted Edge Consistency: It suggests that the predicted probability of deleted edges after unlearning should be random, such that it looks like the deleted data was not used for training before. **b.** Illustration of Neighborhood Influence: It implies that an appropriate unlearning should not change the representations of the local neighborhood (nodes in the subgraph, not the nodes themselves) to maintain the original causality. **c.** Overview of GNNDelete: Given a trained GNN model and an edge deletion request, GNNDelete outputs unlearned representations efficiently by only learning a small deletion operator \(W_{D}\). It also ensures representation quality by minimizing a loss function that satisfies the two key properties proposed above.
retrain several small models from scratch or directly update model weights, which can be inefficient and suboptimal, GNNDelete learns small matrices for inference without changing GNN model weights. To optimize GNNDelete, we specify a novel objective function that satisfies Deleted Edge Consistency and Neighborhood Influence and achieves strong deletion performance.
**Our Contributions.** We present our contributions as follows: (1) We formalize the problem of graph unlearning and define two key properties, _Deleted Edge Consistency_ and _Neighborhood Influence_, for effective unlearning on graph data. (2) We propose GNNDelete, a GNN data deletion approach that achieves more efficient unlearning than existing methods without sacrificing predictive performance. GNNDelete is model-agnostic and can be applied to various GNN architectures and training methodologies. (3) The difference between node representations returned by the baseline GNN model and those revised by GNNDelete is theoretically bounded, ensuring the strong performance of GNNDelete. (4) We demonstrate the flexibility of GNNDelete through empirical evaluations on both link and node deletion tasks. Results show that GNNDelete achieves effective unlearning with 12.3x less time and 9.3x less computation than retraining from scratch.
## 2 Related Work
**Machine Unlearning.** We organize machine unlearning research into four categories: 1) _Retraining:_ Retraining models from scratch to unlearn is a simple yet often inefficient approach, despite recent efforts to develop efficient data partitioning and retraining methods (Bourtoule et al., 2021; Wu et al., 2020; Liu et al., 2022; Cao and Yang, 2015; Golatkar et al., 2020; Izzo et al., 2021). However, partitioning graphs can be challenging because the graph structure and learned node representations of the partitioned graphs may be significantly different from the original graph. Furthermore, such methods may not scale well to large datasets. _2) Output modification:_ Methods such as UNSIR (Tarun et al., 2021) directly modify model outputs to reduce computational overhead. UNSIR first learns an error matrix that destroys the output and then trains the destroyed model for two epochs with clean data to repair the outputs. However, for graphs, the error destroys outputs for all edges, and the training after that falls back to retraining the whole model. _3) Logit manipulation:_ Other methods achieve unlearning by manipulating the model logits (Izzo et al., 2021; Baumhauer et al., 2022), but these methods only apply to linear or logit-based models. _4) Weight modification:_ Unlearning via weight modification is achieved by running an optimization algorithm. For example, Ullah et al. (2021) proposed an unlearning method based on noisy stochastic gradient descent, while Guo et al. (2020) achieves certified removal based on Newton updates. Other optimization methods that modify weights include Thudi et al. (2022) and Neel et al. (2021). Recent unlearning methods perturb gradients (Ma et al., 2022) or model weights (Chen et al., 2021a). However, weight modification approaches lack unique features for graphs and incur computation overheads, such as calculating the inverse Hessian. In Appendix A, we provide details on unlearning methods for other models.
**Graph Unlearning.** We present an overview of the current state of the art in graph unlearning research. GraphEraser (Chen et al., 2022b) attempts to address the graph unlearning problem by utilizing graph partitioning and efficient retraining. They use a clustering algorithm to divide a graph into shards based on both node features and structural information. A learnable aggregator is optimized to combine the predictions from sharded models. However, a key limitation of GraphEraser (Chen et al., 2022b) is that it supports only node deletion. GraphEditor (Cong and Mahdavi, 2023) provides a closed-form solution for linear GNNs to guarantee information deletion, including node deletion, edge deletion, and node feature update. Additional fine-tuning can improve predictive performance. However, GraphEditor is only applicable to linear structures, which is the case for most unlearning algorithms, not only those designed for graph-structured data. As a result, it is not possible to use existing non-linear GNNs or knowledge graphs with GraphEditor, and it struggles to process larger deletion requests. Recently, Chien et al. (2022) proposed the first framework for certified graph unlearning of GNNs. Their approach provides theoretical guarantees for approximate graph unlearning. However, the framework is currently limited to certain GNN architectures and requires further development to become a more practical solution for the broader range of GNNs. For more details on related work, we refer the reader to Appendix A.
**Connection with Adversarial Attacks and Defense for GNNs.** To determine whether a data point has been used to train a model, the success of a membership inference (MI) attack can be a suitable measure for the quality of unlearning (Yeom et al., 2019; Sablayrolles et al., 2019). Defending
against MI attacks is also a challenge that we care about when building unlearning models. Thudi et al. (2022c) proposed using a novel privacy amplification scheme based on a new tighter bound and subsampling strategy. Olatunji et al. (2021) showed that all GNN models are vulnerable to MI attacks and proposed two defense mechanisms based on output perturbation and query neighborhood perturbation. Liu et al. (2022a) treated the data to be unlearned as backdoored data. While defense strategies against MI attacks can provide valuable insights for evaluating unlearning, it is important to note that they serve a different purpose than unlearning itself.
## 3 Preliminaries
Let \(G=(\mathcal{V},\mathcal{E},\mathbf{X})\) be an attributed graph with \(n=|\mathcal{V}|\) nodes, set of edges \(\mathcal{E}\), and \(n_{f}\)-dimensional node features \(\mathbf{X}=\{\mathbf{x}_{0},\dots,\mathbf{x}_{n-1}\}\) where \(\mathbf{x}_{i}\in\mathbb{R}^{n_{f}}\). We use \(\mathbf{A}\) to denote the adjacency matrix of \(G\) and \(\text{deg}_{G}:\mathcal{V}\rightarrow\mathbb{N}\) to denote the degree distribution of graph \(G\). Further, we use \(\mathcal{S}_{uv}^{k}=(\mathcal{V}_{uv}^{k},\mathcal{E}_{uv}^{k},\mathbf{X}_{uv}^{k})\) to represent a \(k\)-hop enclosing subgraph around nodes \(u\) and \(v\).
**Graph Neural Networks (GNNs).** A GNN layer \(g\) can be expressed as a series of transformation functions: \(g(G)=(\text{Upd}\circ\text{Agg}\circ\text{Msg})(G)\) that takes \(G\) as input and produces \(n\)\(d\)-dimensional node representations \(\mathbf{h}_{u}\) for \(u\in\mathcal{V}\) (Figure 1). Within layer \(l\), \(\text{Msg}\) specifies neural messages that are exchanged between nodes \(u\) and \(v\) following edges in \(\mathbf{A}_{uv}\) by calculating \(\mathbf{p}_{uv}^{l}=\text{Msg}(\mathbf{h}_{u}^{l-1},\mathbf{h}_{v}^{l-1},\mathbf{A}_{uv})\). The \(\text{Agg}\) defines how every node \(u\) combines neural messages from its neighbors \(\mathcal{N}_{u}\) and computes the aggregated message \(\mathbf{P}_{u}^{l}=\text{Agg}((\mathbf{p}_{uv}^{l}|v\in\mathcal{N}_{u}))\). Finally, \(\text{Upd}\) defines how the aggregated messages \(\mathbf{P}_{u}^{l}\) and hidden node states from the previous layer are combined to produce \(\mathbf{h}_{u}^{l}\), i.e., final outputs of \(l\)-th layer \(\mathbf{h}_{u}^{l}=\text{Upd}(\mathbf{P}_{u}^{l},\mathbf{h}_{u}^{l-1})\). The output of the last GNN layer is the final node representation, \(\mathbf{z}_{u}=\mathbf{h}_{u}^{L}\), where \(L\) is the number of GNN layers in the model.
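For concreteness, a minimal sketch of one such layer with sum aggregation is shown below; this is a generic instance of the Msg/Agg/Upd formalism, not a specific architecture from the paper:

```python
import torch

def gnn_layer(h: torch.Tensor, edge_index: torch.Tensor,
              W_msg: torch.Tensor, W_upd: torch.Tensor) -> torch.Tensor:
    """One Msg -> Agg -> Upd pass with sum aggregation.

    h: [n, d] node states from the previous layer; edge_index: [2, |E|]
    holding (source, target) node indices for every edge.
    """
    src, dst = edge_index
    msgs = h[src] @ W_msg                                  # Msg: one message per edge
    agg = torch.zeros(h.size(0), W_msg.size(1),
                      dtype=h.dtype, device=h.device)
    agg.index_add_(0, dst, msgs)                           # Agg: sum over in-neighbors
    return torch.relu(agg + h @ W_upd)                     # Upd: combine with h^{l-1}
```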
**Unlearning for GNNs.** Let \(\mathcal{E}_{d}\subseteq\mathcal{E}\) denote the set of edges to be deleted and \(\mathcal{E}_{r}=\mathcal{E}\backslash\mathcal{E}_{d}\) be the remaining edges after the deletion of \(\mathcal{E}_{d}\) from \(G\). We use \(G_{r}=(\mathcal{V}_{r},\mathcal{E}_{r},\mathbf{X}_{r})\) to represent the resulting graph after deleting edge \(\mathcal{E}_{d}\). Here, \(\mathcal{V}_{r}=\{u\in V|\deg_{G_{r}}(u)>0\}\) denotes the set of nodes that are still connected to nodes in \(G_{r}\), and \(\mathbf{X}_{r}\) denotes the corresponding node attributes. Although the above notations are specific to edge deletion, GNNDelete can also be applied to node deletion by removing all edges incident to the node that needs to be deleted from the model.
To unlearn an edge \(e_{uv}\), the model must erase all the information and influence associated with \(e_{uv}\) as if it was never seen during the training while minimizing the change in downstream performance. To this end, we need to modify both the predicted probability of \(e_{uv}\) and remove its information from its local neighborhood. Therefore, post-processing and logit manipulation are ineffective for deleting edges from a GNN model because these strategies do not affect the rest of the graph. We denote a classification layer \(f\) that takes node representations \(\mathbf{h}_{u}^{L}\) and \(\mathbf{h}_{v}^{L}\) as input and outputs the prediction probability for edge \(e_{uv}\). Given \(L\) layers and \(f\), a GNN model \(m:G\rightarrow\mathbb{R}^{|\mathcal{E}|}\) can be expressed as \(m(G)=(f\circ g^{L}\cdots\circ g^{1})(G)\), where \(g^{i}\) is the \(i^{th}\) GNN layer. The unlearned model \(m^{\prime}:G\rightarrow\mathbb{R}^{|\mathcal{E}_{r}|}\) can be written as \(m^{\prime}(G_{r})=(f\circ g^{\prime L}\cdots\circ g^{\prime 1})(G_{r})\), which are \(L\) stacked unlearned GNN layers \(g^{\prime i}\) operating on the graph \(G_{r}\).
## 4 GNNDelete: A General Strategy for Graph Unlearning
To ensure effective edge deletion from graphs, the GNN model should ignore edges in \(\mathcal{E}_{d}\) and not be able to recognize whether a deleted edge \(e_{uv}\in\mathcal{E}_{d}\) is part of the graph. Furthermore, the model should ignore any influence that a deleted edge has in its neighborhood. To this end, we introduce two properties for effective graph unlearning and a layer-wise deletion operator that implements the properties and can be used with any GNN to process deletions in \(\mathcal{E}_{d}\).
**Problem Formulation (Graph Unlearning).**_Given a graph \(G=(\mathcal{V},\mathcal{E},\mathbf{X})\) and a fully trained GNN model \(m(G)\), we aim to unlearn every edge \(e_{uv}\in\mathcal{E}_{d}\) from \(m(G)\), where \(\mathcal{E}_{d}\) is a set of edges to be deleted. The goal is to obtain an unlearned model \(m^{\prime}(G)\) that is close to the model output that would have been obtained had the edges in \(\mathcal{E}_{d}\) been omitted from training. To achieve this, we require that the following properties hold:_
* _Deleted Edge Consistency: If_ \(e_{uv}\in\mathcal{E}_{d}\)_, then_ \(m^{\prime}(G)\) _should output a prediction that is independent of the existence of the edge_ \(e_{uv}\)_, i.e., the deletion of_ \(e_{uv}\) _should not have any influence on the predicted output._
* _Neighborhood Influence: If_ \(e_{uv}\notin\mathcal{E}_{d}\)_, then_ \(m^{\prime}(G)\) _should output a prediction that is close to_ \(m(G)\)_, i.e., the deletion of edges in_ \(\mathcal{E}_{d}\) _should not have any significant impact on predictions in the rest of the graph._
### Required Properties for Successful Deletion on Graphs
Deleting information from a graph is not a trivial task because the representations of nodes and edges are dependent on the combined neighborhood representations. The following two properties capture intuitive requirements on the deletion operator for effective unlearning in GNNs:
**1) Deleted Edge Consistency.** The predicted probability from the unlearned model \(m^{\prime}\) for an edge \(e_{uv}\) should be such that it is hard to determine whether it is a true edge or not. The unlearned GNN layer \(g^{\prime l}\) should not be aware of the edge's existence. Formally, we define the following property:
**Definition 1** (Deleted Edge Consistency).: _Let \(e_{uv}\) denote an edge to be deleted, \(g^{l}\) be the \(l\)-th layer in a GNN with output node representation vectors \(\mathbf{h}^{l}_{u}\), and the unlearned GNN layer \(g^{\prime l}\) with \(\mathbf{h}^{\prime l}_{u}\). The unlearned layer \(g^{\prime l}\) satisfies the Deleted Edge Consistency property if it minimizes the difference between node-pair representations \(\phi(\mathbf{h}^{\prime l}_{u},\mathbf{h}^{\prime l}_{v})\) and \(\phi(\mathbf{h}_{p},\mathbf{h}_{q})\) of two randomly chosen nodes \(p,q\in\mathcal{V}\):_
\[\underset{p,q\in_{R}\mathcal{V}}{\mathbb{E}}[\phi(\mathbf{h}^{\prime l}_{u},\mathbf{h}^{\prime l}_{v})-\phi(\mathbf{h}_{p},\mathbf{h}_{q})]=\delta, \tag{1}\]
where \(\phi\) is a readout function (e.g., dot product, concatenation) that combines node representations \(\mathbf{h}^{\prime l}_{u}\) and \(\mathbf{h}^{\prime l}_{v}\), \(\in_{R}\) denotes a random choice from \(\mathcal{V}\), and \(\delta\) is an infinitesimal constant.
**2) Neighborhood Influence.** While the notion of causality has been used in explainable machine learning, to the best of our knowledge, we propose the first effort of modifying a knowledge graph using a causal perspective. Formally, removing edge \(e_{uv}\) from the graph requires unlearning the influence of \(e_{uv}\) from the subgraphs of both nodes \(u\) and \(v\). In this work, we propose the _Neighborhood Influence_ property which leverages the notion of Granger causality (Granger, 1969; Bressler & Seth, 2011) and declares a causal relationship \(\psi(\{\mathbf{h}_{u}|u\in\mathcal{S}_{uv}\})\to e_{uv}\) between variables \(\psi(\{\mathbf{h}_{u}|u\in\mathcal{S}_{uv}\})\) and \(e_{uv}\) if we are better able to predict edge \(e_{uv}\) using all available node representations in \(\mathcal{S}_{uv}\) than if the information apart from \(\psi(\{\mathbf{h}_{u}|u\in\mathcal{S}_{uv}\})\) had been used. Here, \(\psi(\cdot)\) is an operator that combines the node representations in subgraph \(\mathcal{S}_{uv}\). In the context of graph unlearning, if the absence of node representations decreases the prediction confidence of \(e_{uv}\), then there is a causal relationship between the node representation and the \(e_{uv}\) prediction.
Here, we characterize the notion of deletion by extending Granger causality to local subgraph causality, i.e., an edge \(e_{uv}\) dependent on the subgraphs associated with both nodes \(u\) and \(v\). In particular, removing \(e_{uv}\) should not affect the predictions of \(\mathcal{S}_{uv}\) yielding the following property:
**Definition 2** (Neighborhood Influence).: _Let \(e_{uv}\in\mathcal{E}_{d}\) denote an edge in \(G\) to be deleted, \(g^{l}\) be the \(l\)-th layer in a GNN with output node representation vectors \(\mathbf{h}^{l}_{u}\), and the unlearned GNN layer \(g^{\prime l}\) with \(\mathbf{h}^{\prime l}_{u}\). The unlearned layer \(g^{\prime l}\) satisfies the Neighborhood Influence property if it minimizes the difference of all node-subset representations \(\psi(\{\mathbf{h}^{l}_{w}|w\in\mathcal{S}_{uv}\})\) comprising \(e_{uv}\) with their corresponding node-subset representations \(\psi(\{\mathbf{h}^{\prime l}_{w}|w\in\mathcal{S}_{uv/e_{uv}}\})\) where \(e_{uv}\) is deleted, i.e.,_
\[\psi(\{\mathbf{h}^{l}_{w}|w\in\mathcal{S}_{uv}\})-\psi(\{\mathbf{h}^{\prime l}_{w}|w\in\mathcal{S}_{uv/e_{uv}}\})=\delta, \tag{2}\]
where \(\psi\) is an operator that combines the elements of \(S_{uv}\) (e.g., concatenation, summation), \(\mathcal{S}_{uv/e_{uv}}\) represents the subgraph excluding the information from \(e_{uv}\), and \(\delta\) is an infinitesimal constant.
### Layer-Wise Deletion Operator
To achieve effective deletion of an edge \(e_{uv}\) from a graph \(G\), it is important to eliminate signals with minor contributions to predicting the edges and develop mechanisms that can tune or perturb any source of node or edge information that aids in the prediction of \(e_{uv}\). Perturbing weights or other hyperparameters of the GNN model can affect decisions for multiple nodes and edges in \(G\) due to information propagation through the local neighborhood of each node. In order to allow for the
deletion of specific nodes and edges, we introduce a model-agnostic deletion operator Del that can be applied to any GNN layer.
**Deletion Operator.** Following the notations of Section 3, for the \(l\)-th GNN layer \(g^{l}(G)\) with output dimension \(d^{l}\), we define an extended GNN layer with unlearning capability as \((\textsc{Del}^{l}\circ g^{l})(G)\) with the same output dimension \(d^{l}\). Given an edge \(e_{uv}\) that is to be removed, Del is applied to the node representations and is defined as:
\[\textsc{Del}^{l}=\begin{cases}\phi&\text{if }w\in S^{l}_{uv}\\ \mathbbm{1}&\text{otherwise}\end{cases}, \tag{3}\]
where \(\mathbbm{1}\) is the identity function, and \(\phi:\mathbb{R}^{n\times d^{l}}\rightarrow\mathbb{R}^{n\times d^{l}}\) can be any differentiable function that takes as input the output node representations of \(g\). In this work, \(\phi\) is considered as an MLP with weight parameters \(\mathbf{W}^{l}_{D}\). Similarly to other GNN operators, the weights \(\mathbf{W}^{l}_{D}\) of our Del operator are shared across all nodes to achieve efficiency and scalability.
**Local Update.** Defining an operator that acts only in the local neighborhood \(\mathcal{S}_{uv}\) enables targeted unlearning, keeping the previously learned knowledge intact as much as possible. If node \(u\) is within the local neighborhood of \(e_{uv}\), Del is activated. For other nodes, Del remains deactivated and does not affect the hidden states of the nodes. This ensures that the model will not forget the knowledge it has gained before during training and the predictive performance on \(\mathcal{E}\backslash S_{uv}\) will not drop.
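A sketch of the operator is given below, under our own naming: the base GNN layer is frozen, and only the shared matrix \(\mathbf{W}_{D}\) (here a single linear map \(\phi\)) is trained; `affected` marks the nodes inside \(\mathcal{S}^{l}_{uv}\) of some deleted edge:

```python
import torch
import torch.nn as nn

class DeletionOperator(nn.Module):
    """Layer-wise Del of eq. (3): an MLP phi (weights W_D, shared across all
    nodes) applied only to nodes inside the local neighborhood of deleted
    edges; the identity everywhere else. A sketch, not the authors' code."""

    def __init__(self, dim: int):
        super().__init__()
        self.phi = nn.Linear(dim, dim)   # W_D for this layer

    def forward(self, h: torch.Tensor, affected: torch.Tensor) -> torch.Tensor:
        # h: [n, d] output of the (frozen) l-th GNN layer.
        # affected: boolean mask marking nodes in S^l_uv of some deleted edge.
        out = h.clone()
        out[affected] = self.phi(h[affected])
        return out
```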
By applying the deletion operator Del to every GNN layer, we expect the final representations to reflect the unlearned information in the downstream task. Next, we show a theoretical observation over the unlearned node representations that indicates a stable behavior of the deletion operator:
**Theorem 1**.: _(Bounding edge prediction using initial model \(m\) and unlearned model \(m^{\prime}\)) Let \(e_{uv}\) be an edge to be removed, \(\mathbf{W}^{L}_{D}\) be the weight matrix of the deletion operator \(\textsc{Del}^{L}\), and normalized Lipschitz activation function \(\sigma(\cdot)\). Then, the norm difference between the dot product of the final node representations from the initial model \(\mathbf{z}_{u},\mathbf{z}_{v}\) and from the unlearned one \(\mathbf{z}^{\prime}_{u},\mathbf{z}^{\prime}_{v}\) is bounded by:_
\[\langle\mathbf{z}_{u},\mathbf{z}_{v}\rangle-\langle\mathbf{z}^{\prime}_{u},\mathbf{z}^{\prime }_{v}\rangle\geq-\frac{1+\|\mathbf{W}^{L}_{D}\|^{2}}{2}\|\mathbf{z}_{u}-\mathbf{z}_{v}\|^ {2}, \tag{4}\]
_where \(\mathbf{W}^{L}_{D}\) denotes the weight matrix of the deletion operator for the final (\(L\)-th) GNN layer._
The proof is in Appendix B. By Theorem 1, \(\langle\mathbf{z}^{\prime}_{u},\mathbf{z}^{\prime}_{v}\rangle\), and consequently the prediction probability for edge \(e_{uv}\) from the unlearned model, cannot be arbitrarily dissimilar from the baseline. Moreover, since Del is a layer-wise operator, it provides stable node embeddings compared to the initial ones.
### Model Unlearning
Moving from a layer-wise operator to the whole GNN model, our method GNNDelete applies Del to every GNN layer, leading to a total number of trainable parameters \(\sum_{l}(d^{l})^{2}\). As the number of trainable parameters in GNNDelete is independent of the size of the graph, it is compact and scalable to larger graphs and the number of deletion requests. Considering the properties defined in Section 4.1, we design two loss functions and compute them in a layer-wise manner. Specifically, for the \(l\)-th GNN layer we first compute the _Deleted Edge Consistency_ loss:
\[\mathcal{L}^{l}_{\text{DEC}}=\mathcal{L}(\{[\mathbf{h}^{l}_{u};\mathbf{h}^{l}_{v}]|e_{ uv}\in\mathcal{E}_{d}\},\{[\mathbf{h}^{l}_{u};\mathbf{h}^{l}_{v}]|u,v\in_{R}\mathcal{V}\}), \tag{5}\]
and the _Neighborhood Influence_ loss:
\[\mathcal{L}^{l}_{\text{NI}}=\mathcal{L}(\|_{w}\ \{\mathbf{h}^{l}_{w}|w\in\mathcal{S}^{l }_{uv}/e_{uv}\},\|_{w}\ \{\mathbf{h}^{l}_{w}|w\in\mathcal{S}^{l}_{uv}\}), \tag{6}\]
where \([\mathbf{h}^{l}_{u};\ \mathbf{h}^{l}_{v}]\) denotes the concatenation of two vectors, and \(\|\) denotes the concatenation of multiple vectors. Note that according to Equations 1 and 2, we choose the functions \(\phi,\psi\) to be the concatenation operators. During the backward pass, the deletion operator at the \(l\)-th GNN layer is only optimized based on the weighted total loss at the \(l\)-th layer, i.e.
\[\mathbf{W}^{l^{*}}_{D}=\arg\min_{\mathbf{W}^{l}_{D}}\mathcal{L}^{l}=\arg\min_{\mathbf{W}^{l}_{D}}\lambda\mathcal{L}^{l}_{\text{DEC}}+(1-\lambda)\mathcal{L}^{l}_{\text{NI}}, \tag{7}\]
where \(\lambda\in[0,1]\) is a regularization coefficient that balances the trade-off between the two properties, \(\mathcal{L}\) refers to the distance function. We use Mean Squared Error (MSE) throughout the experiments.
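Putting eqs. (5)-(7) together, one layer's objective can be sketched as follows. The naming is ours; the random node-pair sampling and the exact construction of the local-neighborhood node set are implementation choices not fully specified here:

```python
import torch
import torch.nn.functional as F

def gnndelete_layer_loss(h_new: torch.Tensor, h_old: torch.Tensor,
                         del_edges: torch.Tensor, subgraph_nodes: torch.Tensor,
                         lam: float = 0.5) -> torch.Tensor:
    """Layer-wise objective of eqs. (5)-(7), with MSE as the distance L.

    h_new: [n, d] representations after the Del operator; h_old: [n, d]
    representations of the frozen, pre-deletion model at the same layer.
    del_edges: [2, |E_d|] deleted edges; subgraph_nodes: indices of nodes in
    the k-hop neighborhoods of the deleted edges.
    """
    u, v = del_edges
    n = h_old.size(0)
    # Deleted Edge Consistency: deleted pairs should look like random pairs.
    p = torch.randint(0, n, (u.numel(),), device=h_old.device)
    q = torch.randint(0, n, (u.numel(),), device=h_old.device)
    loss_dec = F.mse_loss(torch.cat([h_new[u], h_new[v]], dim=-1),
                          torch.cat([h_old[p], h_old[q]], dim=-1))
    # Neighborhood Influence: local representations should stay unchanged.
    loss_ni = F.mse_loss(h_new[subgraph_nodes], h_old[subgraph_nodes])
    return lam * loss_dec + (1 - lam) * loss_ni
```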
**Broad Applicability of GNNDelete.** GNNDelete treats node representations in a model-agnostic manner, allowing us to consider graph unlearning in models beyond GNNs. Graph transformers (Ying et al., 2021; Rampasek et al., 2022) have been proposed recently as an extension of the Transformer architecture (Vaswani et al., 2017) for learning representations on graphs. The Del operator can also be applied after the computation of the node representations in such models. For example, the MPNN\({}_{e}^{l}(\mathbf{X}^{l},\mathbf{E}^{l},\mathbf{A})\) layer in GraphGPS (Rampasek et al., 2022, Equation 2) can be replaced with the unlearned version \((\textsc{Del}\circ\textsc{MPNN}_{e}^{l})\). Similarly, in the Graphormer layer, the Del operator can be applied after the multi-head attention (MHA) layer (Rampasek et al., 2022, Equation 8).
## 5 Experiments
We proceed with the empirical evaluation of GNNDelete. We examine the following questions: **Q1**) How does GNNDelete perform compared to existing state-of-the-art unlearning methods? **Q2**) Can GNNDelete support various unlearning tasks including node, node label, and edge deletion? **Q3**) How does the interplay between Deleted Edge Consistency and Neighborhood Influence property affect deletion performance? Appendix C.1 provides a detailed definition of performance metrics.
### Experimental setup
**Datasets.** We evaluate GNNDelete on several widely-used graphs at various scales. We use 5 homogeneous graphs: Cora (Bojchevski & Gunnemann, 2018), PubMed (Bojchevski & Gunnemann, 2018), DBLP (Bojchevski & Gunnemann, 2018), CS (Bojchevski & Gunnemann, 2018), OGB-Collab (Hu et al., 2020), and 2 heterogeneous graphs: OGB-BioKG (Hu et al., 2020), and WordNet18RR (Dettmers et al., 2018). Table 4 includes details on graph datasets.
**GNNs and Baselines.** We test with five GNN architectures and two graph types to show the flexibility of our GNNDelete operator. In particular, we test on GCN (Kipf & Welling, 2017), GAT (Velickovic et al., 2018), and GIN (Xu et al., 2019) for homogeneous graphs, and R-GCN (Schlichtkrull et al., 2018) and R-GAT (Chen et al., 2021b) for heterogeneous graphs. We consider five baseline methods: i) GraphEditor (Cong & Mahdavi, 2023), a method that fine-tunes a closed-form solution of linear GNN models; ii) CertUnLearn (Chien et al., 2022), a certified unlearning approach based on linear GNNs; iii) GraphEraser (Chen et al., 2022b), a re-training-based machine unlearning method for graphs; iv) GradAscent, which performs gradient ascent on \(\mathcal{E}_{d}\) with cross-entropy loss; and v) Descent-to-Delete (Neel et al., 2021), a general machine unlearning method.
**Unlearning Tasks and Downstream Tasks.** Requests for graph unlearning can be broadly classified into three categories: 1) edge deletion, which involves removing a set of edges \(\mathcal{E}_{d}\) from the training graph, 2) node deletion, which involves removing a set of nodes \(\mathcal{N}_{d}\) from the training graph, and 3) node feature unlearning, which involves removing the node feature \(X_{d}\) from the nodes \(\mathcal{N}_{d}\). Deletion of information can have a significant impact on several downstream tasks. Therefore, we evaluate the effects of graph unlearning on three different downstream tasks, namely, link prediction, node classification, and graph classification.
**Setup.** We evaluate the effectiveness of GNNDelete on edge deletion tasks and also demonstrate its ability to handle node deletion and node feature unlearning tasks. We perform experiments on two settings: i) an easier setting where we delete information far away from test set \(\mathcal{E}_{t}\) in the graph, and ii) a harder setting where we delete information proximal to test set \(\mathcal{E}_{t}\) in the graph. To perform edge deletion tasks, we delete a varying proportion of edges in \(\mathcal{E}_{d}\) between [0.5%-5.0%] of the total edges, with a step size of 0.5%. For larger datasets such as OGB (Hu et al., 2020), we limit the maximum deletion ratio to 2.5%. We report the average and standard error of the unlearning performance across five independent runs. We use AUROC to evaluate the performance of GNNDelete for link prediction tasks, as well as Membership Inference (MI) (Thudi et al., 2022a) for node deletion. Performance metrics are described in Appendix C.1. Additionally, we consider two sampling strategies for \(\mathcal{E}_{d}\) (Appendix C.2).
### **Q1**: Results - Comparison to Existing Unlearning Strategies
We compare GNNDelete to the baseline unlearning techniques and present the results in Table 1. Across four GNN architectures, we find that GNNDelete achieves the best performance on the test edge set \(\mathcal{E}_{t}\), outperforming GraphEditor, CertUnlearn and GraphEraser by 13.9%, 19.7% and 38.8%, respectively. Further, we observe that GNNDelete achieves the highest AUROC on \(\mathcal{E}_{d}\), outperforming GraphEditor, CertUnlearn and GraphEraser by 32.2%, 27.9% and 25.4%. GNNDelete even outperforms Retrain-from-Scratch by 21.7% under this setting, demonstrating its capability to effectively unlearn the deleted edges. Interestingly, none of the existing baseline methods achieves performance comparable to GNNDelete on these metrics, including GraphEraser, which ignores the global connectivity pattern and overfits to specific shards, as well as GraphEditor and CertUnlearn, whose choice of linear architecture strongly limits the power of GNN unlearning. Our results demonstrate that baselines like Descent-to-Delete and GradAscent almost entirely lose their ability to make meaningful predictions and to distinguish deleted edges, because their weight updates are independent of the unlearning task and affect all nodes, including those associated with \(\mathcal{E}_{t}\). In addition, CertUnlearn and GraphEditor are not applicable to some of these settings due to their linear architectures. Please refer to the Appendix for results on Cora (Tables 13-14), PubMed (Tables 15-16), DBLP (Tables 17-18), OGB-Collab (Tables 19-20), and WordNet18 (Tables 21-22) using deletion ratios of 0.5%, 2.5%, and 5%.
Results in Table 2 show the Membership Inference (MI) performance of the baselines and GNNDelete on DBLP and WordNet18 with a deletion ratio of \(2.5\%\). GNNDelete outperforms the baselines for most GNN models, highlighting its effectiveness in hiding deleted data. Across five GNN architectures, we find that GNNDelete improves on the MI ratio score of all baselines: GraphEditor (+0.083), CertUnlearn (+0.169), GraphEraser (+0.154), Retrain-from-Scratch (+0.047), GradAscent (+0.134), and Descent-to-Delete (+0.086).
**Node Deletion.** We also evaluate each unlearning method on node classification and Membership Inference attacks. Results in Table 8 show that GNNDelete outperforms the baselines on node classification while deleting nodes. GNNDelete outperforms GraphEditor and CertUnlearn by 4.7% and 4.0% in accuracy, respectively. It is also 0.139 and 0.267 better than GraphEditor and CertUnlearn, respectively, in terms of membership inference attacks. Tables 9 and 10 show results for node feature unlearning and sequential unlearning.
**Time and Space Efficiency.** We demonstrate that GNNDelete is time-efficient compared to most unlearning baselines. For all methods, we use a 2-layer GCN/R-GCN architecture with trainable entity and relation embeddings and 128, 64, and 32 hidden dimensions, trained on three datasets (PubMed, CS, and OGB-Collab). We present wall-clock time vs. graph size in Figure 2 and observe that GNNDelete consistently takes less time than existing graph unlearning methods. In particular, GNNDelete is \(\mathbf{12.3\times}\) faster than Retrain-from-Scratch on WordNet. For smaller graphs like DBLP, GNNDelete takes 185 seconds less (18.5% faster) than the pre-training stage of GraphEditor. Despite their lower runtimes, the predictive performance of Descent-to-Delete and GradAscent is poor compared to GNNDelete because they are not tailored to incorporate the graph structure for unlearning. Regarding space efficiency, we measure the number of trainable parameters and find that GNNDelete has the smallest model size. Moreover, its number of trainable parameters does not scale with the graph size, demonstrating the efficiency and scalability of GNNDelete. For instance, GNNDelete requires \(\mathbf{9.3\times}\) less computation than GraphEraser. We further show that GNNDelete can be made even more efficient by inserting a deletion operator only after the last layer, without losing much performance. Additional results and details are in Tables 5-6.
### Q3: Results - Deleted Edge Consistency vs. Neighborhood Influence
We conducted ablations on two key properties of GNNDelete, namely Deleted Edge Consistency and Neighborhood Influence, by varying the regularization parameter \(\lambda\) in Equation 7. The results presented in Table 3 demonstrate that both properties are necessary for achieving high AUROC on both \(\mathcal{E}_{t}\) and \(\mathcal{E}_{d}\). We observed that as \(\lambda\) decreases, GNNDelete focuses more on Neighborhood Influence, which explains why the model's performance on \(\mathcal{E}_{t}\) is close to the original, while it cannot distinguish \(\mathcal{E}_{d}\) from the remaining edges. Conversely, for higher values of \(\lambda\), GNNDelete focuses more on optimizing the Deleted Edge Consistency property and can better distinguish between \(\mathcal{E}_{d}\) and \(\mathcal{E}_{r}\). In summary, we observed a 5.56% improvement in the average AUROC for \(\lambda=0.5\).
## 6 Conclusion
We introduce GNNDelete, a novel deletion operator that is both flexible and easy-to-use, and can be applied to any type of graph neural network (GNN) model. We also introduce two properties, denoted as Deleted Edge Consistency and Neighborhood Influence, which can contribute to more
\begin{table}
\begin{tabular}{c|c c c} \hline \(\lambda\) & AUROC on \(\mathcal{E}_{t}\) & AUROC on \(\mathcal{E}_{d}\) & Avg. AUROC (Gap) \\ \hline
0.0 & 0.964 \(\pm\)0.003 & 0.492 \(\pm\)0.012 & 0.728 (0.473) \\
0.2 & 0.961 \(\pm\)0.003 & 0.593 \(\pm\)0.011 & 0.777 (0.368) \\
0.4 & 0.950 \(\pm\)0.005 & 0.691 \(\pm\)0.010 & 0.821 (0.259) \\
0.5 & 0.934 \(\pm\)0.002 & 0.748 \(\pm\)0.005 & **0.841** (**0.185**) \\
0.6 & 0.927 \(\pm\)0.001 & 0.739 \(\pm\)0.006 & 0.834 (0.185) \\
0.8 & 0.893 \(\pm\)0.003 & 0.759 \(\pm\)0.008 & 0.823 (0.134) \\
1.0 & 0.858 \(\pm\)0.004 & 0.757 \(\pm\)0.004 & 0.808 (0.101) \\ \hline \end{tabular}
\end{table}
Table 3: Ablation study on the interplay of Deleted Edge Consistency and Neighborhood Influence property. **Unlearning task: 2.5% edge deletion. Evaluation: link prediction. Dataset: DBLP.** The gap is calculated as: \(|\text{AUROC}(\mathcal{E}_{t})-\text{AUROC}(\mathcal{E}_{d})|\). Best overall deletion performance is achieved for \(\lambda=0.5\), indicating that both properties are necessary to successfully delete information from the GNN model while minimizing negative effects on overall model performance.
Figure 2: Comparison of efficiency on three datasets (PubMed, CS, and OGB-Collab). We plot the retraining approach in solid lines, general unlearning methods in dotted lines, and graph unlearning methods in dash-dotted lines. Results show that GNNDelete scales better than existing graph unlearning methods, as its execution time is consistently lower than other methods, especially for larger graphs.
effective graph unlearning. By combining the deletion operator with these two properties, we define a novel loss function for graph unlearning. We evaluate GNNDelete across a wide range of deletion tasks including edge deletion, node deletion, and node feature unlearning, and demonstrate that it outperforms existing graph unlearning models. Our experiments show that GNNDelete performs consistently well across a variety of tasks and is easy to use. Results demonstrate the potential of GNNDelete as a general strategy for graph unlearning.
## Acknowledgements
We gratefully acknowledge the support of the Under Secretary of Defense for Research and Engineering under Air Force Contract No. FA8702-15-D-0001 and awards from Harvard Data Science Initiative, Amazon Research Award, Bayer Early Excellence in Science Award, AstraZeneca Research, and Roche Alliance with Distinguished Scientists Award. G.D. is supported by the Harvard Data Science Initiative Postdoctoral Fellowship. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funders. The authors declare that there are no conflict of interests.
|
2301.08210 | Everything is Connected: Graph Neural Networks | In many ways, graphs are the main modality of data we receive from nature.
This is due to the fact that most of the patterns we see, both in natural and
artificial systems, are elegantly representable using the language of graph
structures. Prominent examples include molecules (represented as graphs of
atoms and bonds), social networks and transportation networks. This potential
has already been seen by key scientific and industrial groups, with
already-impacted application areas including traffic forecasting, drug
discovery, social network analysis and recommender systems. Further, some of
the most successful domains of application for machine learning in previous
years -- images, text and speech processing -- can be seen as special cases of
graph representation learning, and consequently there has been significant
exchange of information between these areas. The main aim of this short survey
is to enable the reader to assimilate the key concepts in the area, and
position graph representation learning in a proper context with related fields. | Petar Veličković | 2023-01-19T18:09:43Z | http://arxiv.org/abs/2301.08210v1 | # Everything is Connected: Graph Neural Networks
###### Abstract
In many ways, **graphs** are the main modality of data we receive from **nature**. This is due to the fact that most of the patterns we see, both in natural and artificial systems, are elegantly representable using the language of graph structures. Prominent examples include molecules (represented as graphs of atoms and bonds), social networks and transportation networks. This potential has already been seen by key scientific and industrial groups, with already-impacted application areas including traffic forecasting, drug discovery, social network analysis and recommender systems. Further, some of the most successful domains of application for machine learning in previous years--images, text and speech processing--can be seen as special cases of graph representation learning, and consequently there has been significant exchange of information between these areas. The main aim of this short survey is to enable the reader to assimilate the key concepts in the area, and position graph representation learning in a proper context with related fields.
## 1 Introduction: Why study data on graphs?
In this survey, I will present a vibrant and exciting area of deep learning research: graph representation learning. Or, put simply, building machine learning models over data that lives on _graphs_ (interconnected structures of _nodes_ connected by _edges_). These models are commonly known as _graph neural networks_, or **GNNs** for short.
There is very good reason to study data on graphs. From the molecule (a graph of _atoms_ connected by chemical _bonds_) all the way to the connectomic structure of the brain (a graph of _neurons_ connected by _synapses_), graphs are a universal language for describing living organisms, at all levels of organisation. Similarly, most relevant artificial constructs of interest to humans, from the transportation network (a graph of _intersections_ connected by _roads_) to the social network (a graph of _users_ connected by _friendship links_), are best reasoned about in terms of graphs.
This potential has been realised in recent years by both scientific and industrial groups, with GNNs now being used to discover novel potent antibiotics (Stokes et al., 2020), serve estimated travel times in Google Maps (Derrow-Pinion et al., 2021), power content recommendations in Pinterest (Ying et al., 2018) and product recommendations in Amazon (Hao et al., 2020), and design the latest generation of machine learning hardware: the TPUv5 (Mirhoseini et al., 2021). Further, GNN-based systems have helped mathematicians uncover the hidden structure of mathematical objects (Davies et al., 2021), leading to new top-tier conjectures in the area of representation theory (Blundell et al., 2021). It would not be an understatement to say that billions of people are coming into contact with predictions of a GNN, on a day-to-day basis. As such, it is likely a valuable pursuit to study GNNs, even without aiming to directly contribute to their development.
Beyond this, it is likely that the very _cognition_ processes driving our reasoning and decision-making are, in some sense, _graph-structured_. That is, paraphrasing a quote from Forrester (1971), nobody really imagines in their head all the information known to them; rather, they imagine only selected _concepts_, and _relationships_ between them, and use those to represent the real system. If we subscribe to this interpretation of cognition, it is quite unlikely that we will be able to build a generally intelligent system without some component relying on graph representation learning. Note that this finding does not clash with the fact that many recent skillful ML systems are based on the Transformer architecture (Vaswani et al., 2017)--as we will uncover in this review, Transformers are themselves a special case of GNNs.
## 2 The fundamentals: Permutation equivariance and invariance
In the previous section, we saw _why_ it is a good idea to study data that lives on graphs. Now we will see _how to learn_ useful functions over graph-structured data. The exposition largely follows Bronstein et al. (2021).
With graph-structured inputs, we typically assume a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\); that is, we have a set of _nodes_ \(\mathcal{V}\) and a set of _edges_ \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\), which specifies pairs of nodes in \(\mathcal{V}\) that are connected.
As we are interested in representation learning over the nodes, we attach to each node \(u\in\mathcal{V}\) a feature vector, \(\mathbf{x}_{u}\in\mathbb{R}^{k}\). The main way in which this data is _presented_ to a machine learning model is in the form of a _node feature matrix_. That is, a matrix \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times k}\) is prepared by stacking these features:
\[\mathbf{X}=\left[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{|\mathcal{V }|}\right]^{\top} \tag{1}\]
that is, the \(i\)th row of \(\mathbf{X}\) corresponds to \(\mathbf{x}_{i}\).
There are many ways to represent \(\mathcal{E}\); since our context is one of linear algebra, we will use the _adjacency matrix_, \(\mathbf{A}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\):
\[a_{uv}=\begin{cases}1&(u,v)\in\mathcal{E}\\ 0&(u,v)\notin\mathcal{E}\end{cases} \tag{2}\]
Note that it is often possible, especially in biochemical inputs, that we want to attach more information to the edges (such as distance scalars, or even entire feature vectors). I deliberately do not consider such cases to retain clarity--the conclusions we make would be the same in those cases.
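As a toy illustration of these two objects, the snippet below builds \(\mathbf{X}\) and \(\mathbf{A}\) for a small hypothetical graph; the features and edge list are made up for the example.

```python
import numpy as np

# Hypothetical toy graph: |V| = 4 nodes, k = 2 features per node.
X = np.array([[0.1, 1.0],
              [0.5, 0.2],
              [0.9, 0.7],
              [0.3, 0.4]])                     # node feature matrix, Equation 1

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]       # the edge set E
A = np.zeros((4, 4))
for u, v in edges:
    A[u, v] = 1.0                              # adjacency matrix, Equation 2
```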
However, the very act of using the above representations imposes a _node ordering_, and is therefore an arbitrary choice which does not align with the nodes and edges being unordered! Hence, we need to make sure that permuting the nodes and edges (that is, applying \(\mathbf{PX}\) and \(\mathbf{PAP}^{\top}\) for a permutation matrix \(\mathbf{P}\)) does not change the outputs. We recover the following rules a GNN must satisfy:
\[f(\mathbf{PX},\mathbf{PAP}^{\top}) =f(\mathbf{X},\mathbf{A})\] (Invariance) (3) \[\mathbf{F}(\mathbf{PX},\mathbf{PAP}^{\top}) =\mathbf{PF}(\mathbf{X},\mathbf{A})\] (Equivariance) (4)
Here we assumed for simplicity that the functions \(f\), \(\mathbf{F}\) do not change the adjacency matrix, so we assume they only return graph or node-level outputs.
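Both constraints are cheap to verify numerically. The following sketch checks the equivariance rule (Equation 4) for an illustrative mean-aggregation layer; the layer is a stand-in chosen for brevity, not a specific published model.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_layer(X, A, W):
    """Illustrative permutation-equivariant layer: mean over neighbours, project."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)   # avoid division by zero
    return np.maximum(((A @ X) / deg) @ W, 0.0)           # ReLU(D^{-1} A X W)

n, k = 5, 3
X = rng.normal(size=(n, k))
W = rng.normal(size=(k, k))
A = (rng.random((n, n)) < 0.4).astype(float)

P = np.eye(n)[rng.permutation(n)]                         # permutation matrix
lhs = gnn_layer(P @ X, P @ A @ P.T, W)                    # permute the inputs
rhs = P @ gnn_layer(X, A, W)                              # permute the outputs
assert np.allclose(lhs, rhs)                              # Equation 4 holds
```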
Further, the graph's edges allow for a _locality_ constraint in these functions. Much like how a CNN operates over a small neighbourhood of each pixel of an image, a GNN can operate over a neighbourhood of a node. One standard way to define this neighbourhood, \(\mathcal{N}_{u}\), is as follows:
\[\mathcal{N}_{u}=\{v\ |\ (u,v)\in\mathcal{E}\vee(v,u)\in\mathcal{E}\} \tag{5}\]
Accordingly, we can define the multiset of all neighbourhood features, \(\mathbf{X}_{\mathcal{N}_{u}}\):
\[\mathbf{X}_{\mathcal{N}_{u}}=\{\!\{\mathbf{x}_{v}\ |\ v\in\mathcal{N}_{u}\}\!\} \tag{6}\]
And our local function, \(\phi\), can take into account the neighbourhood; that is:
\[\mathbf{h}_{u}=\phi(\mathbf{x}_{u},\mathbf{X}_{\mathcal{N}_{u}})\qquad\qquad \mathbf{F}(\mathbf{X})=\left[\mathbf{h}_{1},\mathbf{h}_{2},\ldots,\mathbf{h}_ {|\mathcal{V}|}\right]^{\top} \tag{7}\]
Through simple linear algebra manipulation, it is possible to show that if \(\phi\) is permutation invariant in \(\mathbf{X}_{\mathcal{N}_{u}}\), then \(\mathbf{F}\) will be permutation equivariant. The remaining question is, how do we define \(\phi\)?
## 3 Graph Neural Networks
Needless to say, defining \(\phi\) is one of the most active areas of machine learning research today. Depending on the literature context, it may be referred to as "diffusion", "propagation", or "message passing". As claimed by Bronstein et al. (2021), most such layers can be classified into one of three spatial flavours:
\[\mathbf{h}_{u} =\phi\left(\mathbf{x}_{u},\bigoplus_{v\in\mathcal{N}_{u}}c_{vu} \psi(\mathbf{x}_{v})\right)\] (Convolutional) (8) \[\mathbf{h}_{u} =\phi\left(\mathbf{x}_{u},\bigoplus_{v\in\mathcal{N}_{u}}a( \mathbf{x}_{u},\mathbf{x}_{v})\psi(\mathbf{x}_{v})\right)\] (Attentional) (9) \[\mathbf{h}_{u} =\phi\left(\mathbf{x}_{u},\bigoplus_{v\in\mathcal{N}_{u}}\psi( \mathbf{x}_{u},\mathbf{x}_{v})\right)\] (Message-passing) (10)
where \(\psi\) and \(\phi\) are neural networks--e.g. \(\psi(\mathbf{x})=\text{ReLU}(\mathbf{W}\mathbf{x}+\mathbf{b})\), and \(\bigoplus\) is any permutation-invariant aggregator, such as \(\sum\), averaging, or max. The expressive power of the GNN progressively increases going from Equation 8 to 10, at the cost of interpretability, scalability, or learning stability. For most tasks, a careful tradeoff is needed when choosing the right flavour.
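As an illustration of the attentional flavour (Equation 9), here is a minimal GAT-style layer. The tanh scoring function is an arbitrary stand-in for the learned mechanism \(a(\cdot,\cdot)\) (GAT itself uses a LeakyReLU scorer), and self-loops are added so the softmax over \(\mathcal{N}_{u}\) is always well defined.

```python
import numpy as np

rng = np.random.default_rng(1)

def attentional_layer(X, A, W, a_vec):
    """Sketch of Equation 9: h_u aggregates a(x_u, x_v) * psi(x_v) over N_u."""
    H = X @ W                                   # psi(x_v): shared linear map
    n = len(X)
    Ahat = A + np.eye(n)                        # self-loops keep softmax defined
    scores = np.full((n, n), -np.inf)
    for u in range(n):
        for v in range(n):
            if Ahat[u, v] > 0:                  # attend only over neighbours
                scores[u, v] = np.tanh(a_vec @ np.concatenate([H[u], H[v]]))
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)   # normalise a(x_u, x_v)
    return alpha @ H                            # aggregate with the sum operator

X = rng.normal(size=(5, 3))
A = (rng.random((5, 5)) < 0.4).astype(float)
W = rng.normal(size=(3, 4))
a_vec = rng.normal(size=8)                      # scores the concatenation [H_u, H_v]
H_out = attentional_layer(X, A, W, a_vec)       # shape (5, 4)
```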
This review does not attempt to be a comprehensive overview of specific GNN layers. That being said: representative _convolutional_ GNNs include the Chebyshev network (Defferrard et al., 2016, ChebyNet), graph convolutional network (Kipf and Welling, 2017, GCN) and the simplified graph convolution (Wu et al., 2019, SGC); representative _attentional_ GNNs include the mixture model CNN (Monti et al., 2017, MoNet), graph attention network (Velickovic et al., 2018, GAT) and its recent "v2" variant (Brody et al., 2022, GATv2); and representative _message-passing_ GNNs include interaction networks (Battaglia et al., 2016, IN), message passing neural networks (Gilmer et al., 2017, MPNN) and graph networks (Battaglia et al., 2018, GN).
Given such a GNN layer, we can learn (m)any interesting tasks over a graph, by appropriately combining \(\mathbf{h}_{u}\). I exemplify the three principal such tasks, grounded in biological examples:
**Node classification.** If the aim is to predict targets for each node \(u\in\mathcal{V}\), then our output is equivariant, and we can learn a shared classifier directly on \(\mathbf{h}_{u}\). A canonical example of this is classifying protein functions (e.g. using gene ontology data (Zitnik and Leskovec, 2017)) in a given protein-protein interaction network, as first done by GraphSAGE (Hamilton et al., 2017).
**Graph classification.** If we want to predict targets for the entire graph, then we want an invariant output, hence need to first _reduce_ all the \(\mathbf{h}_{u}\) into a common representation, e.g. by performing \(\bigoplus_{u\in\mathcal{V}}\mathbf{h}_{u}\), then learning a classifier over the resulting flat vector. A canonical example is classifying molecules for their quantum-chemical properties (Gilmer et al., 2017), estimating pharmacological properties like toxicity or solubility (Duvenaud et al., 2015; Xiong et al., 2019; Jiang et al., 2021) or virtual drug screening (Stokes et al., 2020).
**Link prediction.** Lastly, we may be interested in predicting properties of _edges_\((u,v)\), or even predicting whether an edge _exists_; giving rise to the name "link prediction". In this case, a classifier can be learnt over the concatenation of features \(\mathbf{h}_{u}\|\mathbf{h}_{v}\), along with any given edge-level features. Canonical tasks include predicting links between drugs and diseases--drug repurposing (Morselli Gysi et al., 2021), drugs and targets--binding affinity prediction (Lim et al., 2019; Jiang et al., 2020), or drugs and drugs--predicting adverse side-effects from polypharmacy (Zitnik et al., 2018; Deac et al., 2019).
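The three readouts differ only in how the final node representations \(\mathbf{h}_{u}\) are combined; a schematic sketch is given below, with sum pooling as one possible choice of \(\bigoplus\) and hypothetical classifier weights.

```python
import numpy as np

def node_logits(H, W_cls):
    """Node classification: equivariant, one prediction per row of H."""
    return H @ W_cls

def graph_logits(H, W_cls):
    """Graph classification: invariant, pool all nodes before classifying."""
    return H.sum(axis=0) @ W_cls

def link_score(H, u, v, w_edge):
    """Link prediction: score a candidate edge from h_u concatenated with h_v."""
    return float(np.concatenate([H[u], H[v]]) @ w_edge)
```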
It is possible to use the building blocks from the principal tasks above to go beyond classifying the entities given by the input graph, and have systems that _produce_ novel molecules (Mercado et al., 2021) or even perform _retrosynthesis_--the estimation of which reactions to utilise to synthesise given molecules (Somnath et al., 2021; Liu et al., 2022).
A natural question arises, following similar discussions over sets (Zaheer et al.,
2017; Wagstaff et al., 2019): Do GNNs, as given by Equation 10, represent _all_ of the valid permutation-equivariant functions over graphs? Opinions are divided. Key results in previous years seem to indicate that such models are fundamentally limited in terms of problems they can solve (Xu et al., 2019; Morris et al., 2019). However, most, if not all, of the proposals for addressing those limitations are still expressible using the pairwise message passing formalism of Equation 10; the main requirement is to carefully modify the _graph_ over which the equation is applied (Velickovic, 2022). To supplement this further, Loukas (2020) showed that, under proper initial features, sufficient depth-width product (\(\#\)layers \(\times\)\(\dim\mathbf{h}_{u}\)), and correct choices of \(\psi\) and \(\phi\), GNNs in Equation 10 are _Turing universal_--likely to be able to simulate _any_ computation which any computer can perform over such inputs.
All points considered, it is the author's opinion that the formalism in this section is likely all we need to build powerful GNNs--although, of course, different perspectives may benefit different problems, and existence of a powerful GNN does not mean it is easy to find using stochastic gradient descent.
## 4 GNNs without a graph: Deep Sets and Transformers
Throughout the prior section, we have made a seemingly innocent assumption: that we are _given_ an input graph (through \(\mathbf{A}\)). However, very often, not only will there not be a clear choice of \(\mathbf{A}\), but we may not have any prior belief on what \(\mathbf{A}\) even is. Further, even if a ground-truth \(\mathbf{A}\) is given _without noise_, it may not be the optimal _computation graph_: that is, passing messages over it may be problematic--for example, due to bottlenecks (Alon and Yahav, 2021). As such, it is generally a useful pursuit to study GNNs that are capable of modulating the input graph structure.
Accordingly, let us assume we only have a node feature matrix \(\mathbf{X}\), but no adjacency. One simple option is the "pessimistic" one: assume there are no edges at all, i.e. \(\mathbf{A}=\mathbf{I}\), or \(\mathcal{N}_{u}=\{u\}\). Under such an assumption, Equations 8-10 _all_ reduce to \(\mathbf{h}_{u}=\phi(\mathbf{x}_{u})\), yielding the Deep Sets model (Zaheer et al., 2017). Therefore, no power from graph-based modelling is exploited here.
The converse option (the "lazy" one) is to, instead, assume a _fully-connected_ graph; that is \(\mathbf{A}=\mathbf{1}\mathbf{1}^{\top}\), or \(\mathcal{N}_{u}=\mathcal{V}\). This then gives the GNN the full potential to exploit any edges deemed suitable, and is a very popular choice for smaller numbers of nodes. It can be shown that convolutional GNNs (Equation 8) would still reduce to Deep Sets in this case, which motivates the use of a stronger GNN. The next model
in the hierarchy, attentional GNNs (Equation 9), reduce to the following equation:
\[\mathbf{h}_{u}=\phi\left(\mathbf{x}_{u},\bigoplus_{v\in\mathcal{V}}a(\mathbf{x}_{ u},\mathbf{x}_{v})\psi(\mathbf{x}_{v})\right) \tag{11}\]
which is essentially the forward pass of a Transformer (Vaswani et al., 2017). To reverse-engineer why Transformers appear here, let us consider the NLP perspective. Namely, words in a sentence _interact_ (e.g. subject-object, adverb-verb). Further, these interactions are not trivial, and certainly not _sequential_--that is, words can interact even if they are many sentences apart1. Hence, we may want to use a _graph_ between them. But what _is_ this graph? Not even annotators tend to agree, and the optimal graph may well be task-dependent. In such a setting, a common assumption is to use a complete graph, and let the network infer relations by itself--at this point, the Transformer is all but rederived. For an in-depth rederivation, see Joshi (2020).
Footnote 1: This insight may also partly explain why RNNs or CNNs have been seen as suboptimal language models: they implicitly assume only neighbouring words directly interact.
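A minimal sketch makes the correspondence explicit: instantiating Equation 11 with \(\mathcal{N}_{u}=\mathcal{V}\) and scaled dot-product attention for \(a(\cdot,\cdot)\) recovers (single-head, and ignoring residual/normalisation details) Transformer self-attention. The projection matrices are hypothetical learned parameters.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Equation 11 over a fully-connected graph with dot-product attention."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])            # a(x_u, x_v) before softmax
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)  # softmax over all v in V
    return alpha @ V                                  # aggregate psi(x_v) = Wv^T x_v
```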
Another reason why Transformers have become such a dominant GNN variant is the fact that using a fully connected graph structure allows all model computations to be expressed using _dense matrix products_, and hence their computations align very well with current prevalent accelerators (GPUs and TPUs). Further, they have a more favourable storage complexity than the message passing variant (Equation 10). Accordingly, Transformers can be seen as GNNs that are currently winning the hardware lottery (Hooker, 2021)!
Before closing this section, it is worth noting a _third_ option to learning a GNN without an input graph: to _infer_ a graph structure to be used as edges for a GNN. This is an emerging area known as _latent graph inference_. It is typically quite challenging, since edge selection is a non-differentiable operation, and various paradigms have been proposed in recent years to overcome this challenge: nonparametric (Wang et al., 2019; Deac et al., 2022), supervised (Velickovic et al., 2020), variational (Kipf et al., 2018), reinforcement (Kazi et al., 2022) and self-supervised learning (Fatemi et al., 2021).
## 5 GNNs beyond permutation equivariance: Geometric Graphs
To conclude our discussion, we revisit another assumption: we have assumed our graphs to be a discrete, unordered, collection of nodes and edges--hence, only susceptible to permutation symmetries. But in many cases, this is not the entire story!
The graph, in fact, may often be endowed with some spatial _geometry_, which will be very useful to exploit. Molecules, and their three-dimensional conformer structure, are a classical example of this.
In general, we will assume our inputs are _geometric graphs_: nodes are endowed with both _features_, \(\mathbf{f}_{u}\), and _coordinates_, \(\mathbf{x}_{u}\in\mathbb{R}^{3}\). We may be interested in designing a model that is equivariant not only to permutations, but also 3D rotations, translations and reflections (the Euclidean group, \(\mathrm{E}(3)\)).
An \(\mathrm{E}(3)\)-equivariant message passing layer transforms the coordinates and features, and yields updated features \(\mathbf{f}_{u}^{\prime}\) and coordinates \(\mathbf{x}_{u}^{\prime}\). There exist many GNN layers that obey \(\mathrm{E}(n)\) equivariance, and one particularly elegant solution was proposed by Satorras et al. (2021):
\[\mathbf{f}_{u}^{\prime} =\phi\left(\mathbf{f}_{u},\bigoplus_{v\in\mathcal{N}_{u}}\psi_{ \mathrm{f}}\left(\mathbf{f}_{u},\mathbf{f}_{v},\|\mathbf{x}_{u}-\mathbf{x}_{v} \|^{2}\right)\right) \tag{12}\] \[\mathbf{x}_{u}^{\prime} =\mathbf{x}_{u}+\sum_{v\neq u}(\mathbf{x}_{u}-\mathbf{x}_{v}) \psi_{\mathrm{x}}\left(\mathbf{f}_{u},\mathbf{f}_{v},\|\mathbf{x}_{u}- \mathbf{x}_{v}\|^{2}\right) \tag{13}\]
The key insight behind this model is that rotating, translating or reflecting coordinates does not change their distances \(\|\mathbf{x}_{u}-\mathbf{x}_{v}\|^{2}\), i.e., such operations are _isometries_. Hence, if we roto-translate all nodes as \(\mathbf{x}_{u}\leftarrow\mathbf{R}\mathbf{x}_{u}+\mathbf{b}\), the output features \(\mathbf{f}_{u}^{\prime}\) remain unchanged, while the output coordinates transform accordingly: \(\mathbf{x}_{u}^{\prime}\leftarrow\mathbf{R}\mathbf{x}_{u}^{\prime}+\mathbf{b}\).
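This behaviour is easy to check numerically. In the sketch below, the networks \(\psi_{\mathrm{f}}\), \(\psi_{\mathrm{x}}\), and \(\phi\) of Equations 12-13 are replaced by fixed illustrative maps; the point is the equivariance structure, not the parameterisation.

```python
import numpy as np

rng = np.random.default_rng(2)

def egnn_layer(F, Xc, A):
    """Toy E(3)-equivariant layer in the spirit of Equations 12-13."""
    diffs = Xc[:, None, :] - Xc[None, :, :]           # x_u - x_v, shape (n, n, 3)
    sq = (diffs ** 2).sum(-1)                         # ||x_u - x_v||^2, invariant
    msg = np.tanh(F @ F.T + sq) * A                   # stand-in for psi_f
    F_new = np.tanh(F + msg @ F)                      # stand-in for phi
    coef = np.tanh(F @ F.T + sq)                      # stand-in for psi_x (scalars)
    np.fill_diagonal(coef, 0.0)                       # the sum runs over v != u
    X_new = Xc + (coef[:, :, None] * diffs).sum(axis=1)
    return F_new, X_new

n = 6
F = rng.normal(size=(n, 4))
Xc = rng.normal(size=(n, 3))
A = (rng.random((n, n)) < 0.5).astype(float)

R, _ = np.linalg.qr(rng.normal(size=(3, 3)))          # random orthogonal matrix
b = rng.normal(size=3)                                # random translation

F1, X1 = egnn_layer(F, Xc, A)
F2, X2 = egnn_layer(F, Xc @ R.T + b, A)
assert np.allclose(F1, F2)                            # features: invariant
assert np.allclose(X2, X1 @ R.T + b)                  # coordinates: equivariant
```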
While indeed highly elegant, a model like this hides a caveat: it only works over _scalar_ features \(\mathbf{f}_{u}\). If our model needs to support any kind of _vector_ input (e.g. _forces_ between atoms), the model in Equation 12 would not suffice, because the vectors would need to appropriately rotate with \(\mathbf{R}\). Satorras et al. (2021) do propose a variant that allows for explicitly updating vector features, \(\mathbf{v}_{u}\):
\[\mathbf{v}_{u}^{\prime}=\phi_{\mathrm{v}}(\mathbf{h}_{u})\mathbf{v}_{u}+C \sum_{v\neq u}(\mathbf{x}_{u}-\mathbf{x}_{v})\phi_{\mathrm{x}}\left(\mathbf{ f}_{u},\mathbf{f}_{v},\|\mathbf{x}_{u}-\mathbf{x}_{v}\|^{2}\right),\quad \mathbf{x}_{u}^{\prime}=\mathbf{x}_{u}+\mathbf{v}_{u}^{\prime} \tag{14}\]
But these issues will continue to arise, as features get "tensored up". Hence, in such circumstances, it might be useful to instead characterise a generic equation that supports _all_ possible roto-translation equivariant models, and then learning its parameters. Such an analysis was done in Tensor Field Networks (Thomas et al., 2018) for point clouds, and then extended to \(\mathrm{SE}(3)\)-Transformers for general graphs (Fuchs et al., 2020).
Perhaps a fitting conclusion of this survey is a simple realisation: having showed how both Transformers and geometric equivariance constraints play a part within the context of GNNs, we now have all of the key building blocks to define some
of the most famous geometric GNN architectures in the wild, such as AlphaFold 2 (Jumper et al., 2021), but also similar protein-related papers which made headlines in both Nature Methods (Gainza et al., 2020, MaSIF) and Nature Machine Intelligence (Mendez-Lucio et al., 2021). It seems that protein folding, protein design, and protein binding prediction (Stark et al., 2022) all appear to be an extremely potent area of attack for geometric GNNs; just one of many solid reasons why the field of structural biology would benefit from these recent developments (Bouatta et al., 2021).
|
2302.01002 | Over-parameterised Shallow Neural Networks with Asymmetrical Node
Scaling: Global Convergence Guarantees and Feature Learning | We consider the optimisation of large and shallow neural networks via
gradient flow, where the output of each hidden node is scaled by some positive
parameter. We focus on the case where the node scalings are non-identical,
differing from the classical Neural Tangent Kernel (NTK) parameterisation. We
prove that, for large neural networks, with high probability, gradient flow
converges to a global minimum AND can learn features, unlike in the NTK regime.
We also provide experiments on synthetic and real-world datasets illustrating
our theoretical results and showing the benefit of such scaling in terms of
pruning and transfer learning. | Francois Caron, Fadhel Ayed, Paul Jung, Hoil Lee, Juho Lee, Hongseok Yang | 2023-02-02T10:40:06Z | http://arxiv.org/abs/2302.01002v1 | # Over-parameterised Shallow Neural Networks
###### Abstract
We consider the optimisation of large and shallow neural networks via gradient flow, where the output of each hidden node is scaled by some positive parameter. We focus on the case where the node scalings are non-identical, differing from the classical Neural Tangent Kernel (NTK) parameterisation. We prove that, for large neural networks, with high probability, gradient flow converges to a global minimum AND can learn features, unlike in the NTK regime. We also provide experiments on synthetic and real-world datasets illustrating our theoretical results and showing the benefit of such scaling in terms of pruning and transfer learning.
## 1 Introduction
The training of neural networks typically involves the minimisation of a non-convex objective function. However, first-order optimisation methods, such as gradient descent (GD) and its variants, often find solutions with low training error. To gain a better understanding of this phenomenon, one fruitful direction of research has been to analyse the properties of GD training of over-parameterised (or large-width) neural networks; that is, neural networks where the number \(m\) of hidden nodes is very large. In particular, under a "\(\sqrt{1/m}\)" scaling of the hidden nodes, Jacot et al. (2018) have shown that, as the number of nodes \(m\) tends to infinity, the solution obtained by GD coincides with that of kernel regression under a so-called limiting _Neural Tangent Kernel_ (NTK). Under the same node scaling, called _NTK scaling_, theoretical guarantees for the global convergence and generalisation properties have then been obtained for large (but finite) width neural networks (Du et al., 2019a,b; Oymak and Soltanolkotabi, 2020; Arora et al., 2019; Bartlett et al., 2021).
However, it has been noted in a number of articles (Chizat et al., 2019; Yang, 2019; Arora et al., 2019) that in this large-width regime, feature learning does not occur; consequently, for large-width neural networks under the NTK scaling, GD training is performed in a _lazy training_ regime, in contrast with the typical feature-learning regime exhibited in deep neural networks.
### Summary of the main contributions
In this work, we investigate global convergence properties and feature learning in gradient-type training of neural networks under a more general asymmetrical node scaling. In particular, we consider that each node \(j=1,\ldots,m\) has a fixed node-specific scaling \(\sqrt{\lambda_{m,j}}\), where
\[\lambda_{m,j}=\gamma\cdot\frac{1}{m}+(1-\gamma)\cdot\frac{\widetilde{\lambda }_{j}}{\sum_{k=1}^{m}\widetilde{\lambda}_{k}} \tag{1}\]
where \(\gamma\in[0,1]\) and \(1\geq\widetilde{\lambda}_{1}\geq\widetilde{\lambda}_{2}\geq\ldots\geq 0\) are nonnegative fixed scalars with \(\sum_{j=1}^{\infty}\widetilde{\lambda}_{j}=1\). Note that \(\gamma=1\) corresponds to the NTK scaling \(\sqrt{1/m}\). If \(\gamma<1\), the node scaling is necessarily asymmetrical for large-width networks.1 For \((\widetilde{\lambda}_{j})_{j\geq 1}\), one can take for instance \(\widetilde{\lambda}_{j}=6\pi^{-2}j^{-2}\) for all \(j\geq 1\); another example is \(\widetilde{\lambda}_{1}=1\) and \(\widetilde{\lambda}_{j}=0\) for \(j\geq 2\).
Footnote 1: That is, for \(m\) large enough, there exists \(j\) and \(k\) such that \(\lambda_{m,j}\neq\lambda_{m,k}\).
We consider a shallow neural network with a ReLU or smooth activation function and without bias, where the first layer weights are trained via gradient flow and empirical risk minimisation under the \(\ell_{2}\) loss. We show that, under similar assumptions as in (Du et al., 2019a,b) on the data, activation function, and initialisation, when the number of nodes \(m\) is sufficiently large:
* if \(\gamma>0\), then the training error goes to 0 at a linear rate with high probability; and
* feature learning arises if and only if \(\gamma<1\).
We also provide numerical experiments illustrating the theoretical results, and demonstrating empirically that such node-scaling is also useful for network pruning and transfer learning.
### Organisation of the paper
Section 2 introduces the neural network model with asymmetrical node scaling, the gradient flow updates, and the main assumptions on the data, activation function, and initialisation. Section 3 discusses the properties of the NTK at initialisation, and its infinite-width limit. Section 4 derives our main results on the convergence to a global minimum of gradient flow and sketches their proofs. Section 5 gives the main results regarding feature learning. Section 6 describes our experiments on simulated and real datasets, whose results illustrate our theoretical results and their potential applications. Missing detailed proofs can be found in the Appendix.
### Related work
**Large-width neural networks.** The analysis of large-width neural networks dates back to the work of Neal (1996) who showed the connection between Gaussian processes and neural network models in the infinite-width limit. Recent work has further explored this connection under different assumptions (Matthews et al., 2018; Lee et al., 2018; Yang, 2019; Favaro et al., 2020; Bracale et al., 2021; Jung et al., 2021; Lee et al., 2022).
**Large-width neural networks under the NTK scaling.** Following the seminal work of Jacot et al. (2018), a number of articles have investigated the benefits of over-parameterisation for gradient descent training with the "\(1/\sqrt{m}\)" NTK scaling (Arora et al., 2019a,b; Lee et al., 2019; Zou and Gu, 2019; Oymak and Soltanolkotabi, 2020; Zou et al., 2020). Crucially, when the size of the network is large enough with respect to the size of the training set, the training loss converges to a global minimum at a linear rate under gradient descent. However, under the NTK scaling, the hidden-layer features do not move significantly when the width is large, and the NTK scaling has for this reason been coined the lazy training regime (Chizat et al., 2019; Woodworth et al., 2020).
**Large-width neural networks under the mean-field scaling.** An alternative scaling is the "\(1/m\)" mean field scaling (Rotskoff and Vanden-Eijnden, 2018; Mei et al., 2018, 2019; Sirignano and Spiliopoulos, 2020; Chen et al., 2021). The mean field scaling is also equivalent, up to symmetry, to the \(\mu P\) parameterisation of Yang and Hu (2021) in the case of shallow networks. While there is feature learning under the mean-field and \(\mu P\) scalings, the relative contribution of each individual activation to the network output becomes infinitesimal in the large-width limit. In contrast, under asymmetrical scaling, individual activations retain a nontrivial contribution even in the large-width limit.
**Asymmetrical scaling in neural networks.** The idea of using asymmetrical scaling parameters in the context of (stochastic) gradient descent optimisation of deep neural networks has been previously introduced by Wolinski et al. (2020). The focus of the work of Wolinski et al. (2020) was on the (empirical) usefulness in terms of pruning. Indeed, our experiments in Section 6 are also in line with their findings. Wolinski et al. (2020), however, only consider asymmetrical scaling with \(\gamma=0\) (no fixed part), and did not investigate the global convergence properties under such scaling. The property of random neural networks under (random) asymmetrical node scaling in the large-width limit has also been considered by Lee et al. (2022); but this paper did not investigate the training properties under gradient descent or gradient flow.
### Notations
For an integer \(n\geq 1\), let \([n]=\{1,\ldots,n\}\). For a multivariate real-valued function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\), the gradient \(\nabla_{\mathbf{v}}f(\mathbf{v})\) is the \(n\)-dimensional column vector of partial derivatives \(\nabla_{\mathbf{v}}f(\mathbf{v})=\left(\frac{\partial f}{\partial v_{1}}( \mathbf{v}),\ldots,\frac{\partial f}{\partial v_{n}}(\mathbf{v})\right)^{\top}\) where \(\mathbf{v}=(v_{1},\ldots,v_{n})^{\top}\). For a square matrix \(B\), we denote its minimum and maximum eigenvalues by \(\mathrm{eig}_{\min}(B)\)
and \(\mathrm{eig}_{\max}(B)\), respectively. For a vector \(\mathbf{v}\in\mathbb{R}^{n}\), we denote by \(B=\mathrm{diag}(\mathbf{v})\) the \(n\)-by-\(n\) diagonal matrix with \(B_{ii}=v_{i}\) for \(i\in[n]\).
## 2 Problem setup
### Statistical model
We consider a shallow feedforward neural network (FFNN) with one hidden layer of \(m\geq 1\) hidden nodes and a scalar output. To simplify the analysis, we assume that there is no bias term. Let \(\mathbf{x}\in\mathbb{R}^{d}\) be some input vector, where \(d\geq 1\) is the input dimension. The model is defined as
\[\begin{split} Z_{j}(\mathbf{x};\mathbf{W})&=\frac{ 1}{\sqrt{d}}\mathbf{w}_{j}^{\top}\mathbf{x},\ \text{ for }j\in[m]\\ f_{m}(\mathbf{x};\mathbf{W})&=\sum_{j=1}^{m}\sqrt {\lambda_{m,j}}a_{j}\sigma(Z_{j}(\mathbf{x};\mathbf{W}))\end{split} \tag{2}\]
where \(f_{m}(\mathbf{x};\mathbf{W})\) is the scalar output of the neural network, \(Z_{j}(\mathbf{x};\mathbf{W})\) is the pre-activation of the \(j\)-th hidden node, \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) is the activation function, \(\mathbf{w}_{j}\in\mathbb{R}^{d}\) is the column vector of weights between node \(j\) at the hidden layer and the input nodes, and \(a_{j}\in\mathbb{R}\) is the weight between the hidden node \(j\) and the output node. The \(\lambda_{m,j}\)'s for \(j\in[m]\) are nonnegative scaling parameters for hidden nodes. Let \(\mathbf{W}=(\mathbf{w}_{1}^{\top},\dots,\mathbf{w}_{m}^{\top})^{\top}\) be a column vector of dimension \(md\) corresponding to the parameters to be optimised. We assume that \(\sigma\) admits a (weak) derivative \(\sigma^{\prime}\). For \(n\geq 1\), let \(\mathbf{\sigma}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) (resp. \(\mathbf{\sigma}^{\prime}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\)) be the vector-valued multivariate function that applies the real-valued function \(\sigma\) (resp. \(\sigma^{\prime}\)) element-wise on each of the \(n\) input variables.
To simplify the analysis, we will assume throughout this article that the output weights \(a_{j}\) are randomly initialised and fixed afterwards: for all \(j\geq 1\),
\[a_{j}\mathop{\sim}\limits^{\text{iid}}\mathrm{Uniform}(\{-1,1\}). \tag{3}\]
This simplifying assumption is often made when analysing large shallow networks (see e.g. (Du et al., 2019b; Bartlett et al., 2021)), and the analysis can also be extended to the case where both layers are trained.
The scaling parameters \(\lambda_{m,j}\) are fixed and parameterised as in Equation (1). By construction, we have \(\lambda_{m,1}>0\) and \(\sum_{j=1}^{m}\lambda_{m,j}=1\) for all \(m\geq 1\).
Recall that the case \(\gamma=1\) corresponds to the NTK scaling. The case where \(\gamma=0\), with \(\widetilde{\lambda}_{j}=\frac{1}{K}\) for \(j\in[K]\) (for some \(K\leq m\)) and \(\widetilde{\lambda}_{j}=0\) otherwise, corresponds to a finite neural network of width \(K\). In the experiments, we will consider that \((\widetilde{\lambda}_{j})_{j\geq 1}\) is the probability mass of a Zipf law:
\[\widetilde{\lambda}_{j}=\frac{1}{\zeta(1/\alpha)}\frac{1}{j^{1/ \alpha}},\ \ j\geq 1 \tag{4}\]
for some \(\alpha\in(0,1)\), where \(\zeta\) is the Riemann zeta function. The parameter \(\alpha\) tunes how quickly \(\widetilde{\lambda}_{j}\) decreases with \(j\), smaller values corresponding to a more rapid decrease and a more asymmetrical node scaling.
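To make the parameterisation concrete, here is a minimal NumPy sketch of the node scalings in Equations (1) and (4) and of the forward pass in Equation (2), with a ReLU activation and the initialisation of Equations (3) and (8). Note that, following Equation (1), the truncated Zipf weights are renormalised over the first \(m\) terms rather than via the zeta function; the hyperparameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def node_scalings(m, gamma, alpha):
    """lambda_{m,j} of Equation (1) with Zipf tilde-lambdas (Equation 4)."""
    j = np.arange(1, m + 1)
    lam_tilde = j ** (-1.0 / alpha)
    lam_tilde = lam_tilde / lam_tilde.sum()   # normalise over the first m terms
    return gamma / m + (1.0 - gamma) * lam_tilde

def forward(X, W, a, lam):
    """f_m(x; W) = sum_j sqrt(lam_j) a_j ReLU(w_j^T x / sqrt(d)), per row of X."""
    d = X.shape[1]
    Z = X @ W.T / np.sqrt(d)                  # pre-activations Z_j, shape (n, m)
    return np.maximum(Z, 0.0) @ (np.sqrt(lam) * a)

m, d, n = 512, 8, 16
lam = node_scalings(m, gamma=0.5, alpha=0.5)  # sums to 1 by construction
W = rng.normal(size=(m, d))                   # w_j ~ N(0, I_d), Equation (8)
a = rng.choice([-1.0, 1.0], size=m)           # a_j ~ Uniform({-1, 1}), Equation (3)
X = rng.normal(size=(n, d))
f = forward(X, W, a, lam)                     # network outputs for n inputs
```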
### Training with gradient flow
Let \(\mathcal{D}_{n}=\{(\mathbf{x}_{i},y_{i})\}_{i\in[n]}\) be the training dataset, where \(n\geq 1\) is the number of observations. Let \(\mathbf{X}\) denote the \(n\)-by-\(d\) matrix whose \(i\)th row is \(\mathbf{x}_{i}^{\top}\). We want to minimise the empirical risk under the \(\ell_{2}\) loss. Consider the objective function
\[L_{m}(\mathbf{W})=\frac{1}{2}\sum_{i=1}^{n}(y_{i}-f_{m}(\mathbf{ x}_{i};\mathbf{W}))^{2} \tag{5}\]
which is non-convex in general. For a given dataset \(\mathcal{D}_{n}\), width \(m\geq 1\), output weights \(a_{j}\), and scaling parameters \((\lambda_{m,j})_{j\in[m]}\), we aim to estimate the trainable parameters \(\mathbf{W}\) by minimising \(L_{m}(\mathbf{W})\) using continuous-time gradient descent (a.k.a. gradient flow). Let \(\mathbf{W}_{0}\) be some initialisation and \((\mathbf{W}_{t})_{t>0}\) be the solution to the ordinary differential equation (ODE)
\[\frac{d\mathbf{W}_{t}}{dt}=-\nabla_{\mathbf{W}}L_{m}(\mathbf{W}_{t})\]
with \(\lim_{t\to 0}\mathbf{W}_{t}=\mathbf{W}_{0}\). Let \(\mathbf{w}_{tj}\) be the value of the parameter \(\mathbf{w}_{j}\) at time \(t\), and define \(Z_{tj}(\mathbf{x})=Z_{j}(\mathbf{x};\mathbf{W}_{t})\). Note that
\[\nabla_{\mathbf{w}_{j}}f_{m}(\mathbf{x};\mathbf{W}) =\frac{\partial f_{m}(\mathbf{x};\mathbf{W})}{\partial Z_{j}( \mathbf{x};\mathbf{W})}\nabla_{\mathbf{w}_{j}}Z_{j}(\mathbf{x};\mathbf{W})\] \[=\sqrt{\lambda_{m,j}}a_{j}\sigma^{\prime}(Z_{j}(\mathbf{x}; \mathbf{W}))\cdot\frac{1}{\sqrt{d}}\mathbf{x}.\]
Thus, under gradient flow, for \(j\in[m]\) and \(\mathbf{x}\in\mathbb{R}^{d}\),
\[\frac{d\mathbf{w}_{tj}}{dt}=\sum_{i=1}^{n}\nabla_{\mathbf{w}_{j}}f_{m}(\mathbf{x}_{i};\mathbf{W}_{t})\cdot(y_{i}-f_{m}(\mathbf{x}_{i};\mathbf{W}_{t}))=\sqrt{\lambda_{m,j}}\frac{a_{j}}{\sqrt{d}}\sum_{i=1}^{n}\sigma^{\prime}(Z_{tj}(\mathbf{x}_{i}))\mathbf{x}_{i}\cdot(y_{i}-f_{m}(\mathbf{x}_{i};\mathbf{W}_{t})),\] \[\frac{dZ_{tj}(\mathbf{x})}{dt}=(\nabla_{\mathbf{w}_{j}}Z_{j}(\mathbf{x};\mathbf{W}_{t}))^{\top}\frac{d\mathbf{w}_{tj}}{dt}=\sqrt{\lambda_{m,j}}\frac{a_{j}}{d}\mathbf{x}^{\top}\sum_{i=1}^{n}\sigma^{\prime}(Z_{tj}(\mathbf{x}_{i}))\mathbf{x}_{i}\cdot(y_{i}-f_{m}(\mathbf{x}_{i};\mathbf{W}_{t})).\]
Note that the derivatives associated with each hidden node \(j\) are scaled by \(\sqrt{\lambda_{m,j}}\). We may view this per-node scaling as using different learning rates for different hidden nodes.
For an input \(\mathbf{x}\in\mathbb{R}^{d}\), the output of the neural network therefore satisfies the ODE
\[\frac{df_{m}(\mathbf{x};\mathbf{W}_{t})}{dt} =\nabla_{\mathbf{W}}f_{m}(\mathbf{x};\mathbf{W}_{t})^{\top}\frac {d\mathbf{W}_{t}}{dt}\] \[=\sum_{i=1}^{n}\Theta_{m}(\mathbf{x},\mathbf{x}_{i};\mathbf{W}_{t })\cdot(y_{i}-f_{m}(\mathbf{x}_{i};\mathbf{W}_{t})),\]
where \(\Theta_{m}:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) is the neural tangent kernel, defined by
\[\Theta_{m}(\mathbf{x},\mathbf{x}^{\prime};\mathbf{W}) =\nabla_{\mathbf{W}}f_{m}(\mathbf{x};\mathbf{W})^{\top}\nabla_{ \mathbf{W}}f_{m}(\mathbf{x}^{\prime};\mathbf{W})\] \[=\frac{1}{d}\mathbf{x}^{\top}\mathbf{x}^{\prime}\sum_{j=1}^{m} \lambda_{m,j}\sigma^{\prime}(Z_{j}(\mathbf{x};\mathbf{W}))\sigma^{\prime}(Z_{ j}(\mathbf{x}^{\prime};\mathbf{W})). \tag{6}\]
The associated neural tangent Gram (NTG) matrix \(\widehat{\Theta}_{m}(\mathbf{X};\mathbf{W})\) is the \(n\)-by-\(n\) symmetric positive semi-definite matrix whose \((i,j)\)-th entry is \(\Theta_{m}(\mathbf{x}_{i},\mathbf{x}_{j};\mathbf{W})\). It takes the form
\[\widehat{\Theta}_{m}(\mathbf{X};\mathbf{W})=\frac{1}{d}\sum_{j=1}^{m}\lambda_{ m,j}\operatorname{diag}\biggl{(}\sigma^{\prime}\biggl{(}\frac{\mathbf{X} \mathbf{w}_{j}}{\sqrt{d}}\biggr{)}\biggr{)}\mathbf{X}\mathbf{X}^{\top} \operatorname{diag}\biggl{(}\sigma^{\prime}\biggl{(}\frac{\mathbf{X}\mathbf{w} _{j}}{\sqrt{d}}\biggr{)}\biggr{)}. \tag{7}\]
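Continuing the sketch from Section 2.1 (and reusing `X`, `W`, and `lam` from it), the NTG matrix of Equation (7) for the ReLU network reduces to an elementwise product, since \(\sigma^{\prime}(z)=\mathbf{1}_{\{z>0\}}\):

```python
def ntk_gram(X, W, lam):
    """Empirical NTG (Equation 7) for ReLU, where sigma'(z) = 1{z > 0}."""
    n, d = X.shape
    S = (X @ W.T / np.sqrt(d) > 0).astype(float)   # sigma'(Z), shape (n, m)
    G = (X @ X.T) / d                              # pairwise x_i^T x_i' / d
    # sum_j lam_j diag(S[:, j]) G diag(S[:, j]) collapses to an elementwise product:
    return G * ((S * lam) @ S.T)

Theta = ntk_gram(X, W, lam)
min_eig = np.linalg.eigvalsh(Theta).min()          # cf. kappa_n in Section 3
```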
### Main assumptions
In this section, we give the main assumptions used in this article. The first set of assumptions is on the training dataset \(\mathcal{D}_{n}=\{(\mathbf{x}_{i},y_{i})\}_{i\in[n]}\). The assumptions are rather mild, and similar to other assumptions used in the literature.
**Assumption 2.1** (Dataset).:
1. All inputs are non-zero and their norms are at most \(1\): \(0<\|\mathbf{x}_{i}\|\leq 1\) for all \(i\geq 1\).
2. For all \(i\neq i^{\prime}\) and all \(c\in\mathbb{R}\), \(\mathbf{x}_{i}\neq c\,\mathbf{x}_{i^{\prime}}\).
3. There exists \(C>0\) such that \(|y_{i}|\leq C\) for all \(i\geq 1\).
The next assumption concerns the activation function \(\sigma\). Standard activation functions ((leaky) ReLU, softplus, tanh, sigmoid, linear) satisfy this assumption.
**Assumption 2.2** (Activation function).: The activation function is either
1. the ReLU function \(\sigma(x)=\max(0,x)\), with weak derivative \(\sigma^{\prime}(x)=\mathbf{1}_{\{x>0\}}\); or
2. analytic, with \(|\sigma^{\prime}(x)|\leq 1\) and \(|\sigma^{\prime\prime}(x)|\leq M\) for some \(M>0\), and not a polynomial.
The last assumption, which is standard, is on the initialisation of the weights.
**Assumption 2.3** (Initialisation).: For \(j\in[m]\),
\[\mathbf{w}_{0j}\overset{\text{iid}}{\sim}\mathcal{N}\left(0,\mathrm{I}_{d}\right) \tag{8}\]
where \(\mathrm{I}_{d}\) is the \(d\)-by-\(d\) identity matrix.
## 3 Neural Tangent Kernel at initialisation and its limit
### Mean NTG at initialisation and its minimum eigenvalue
Let \(\mathbf{W}_{0}\) be a random initialisation from Equation (8). Consider the mean NTK at initialisation
\[\Theta^{*}(\mathbf{x},\mathbf{x}^{\prime}) =\mathbb{E}\left[\Theta_{m}(\mathbf{x},\mathbf{x}^{\prime}; \mathbf{W}_{0})\right]\] \[=\frac{1}{d}\mathbf{x}^{\top}\mathbf{x}^{\prime}\mathbb{E}\left[ \sigma^{\prime}\left(\frac{1}{\sqrt{d}}\mathbf{w}_{01}^{\top}\mathbf{x}\right) \sigma^{\prime}\left(\frac{1}{\sqrt{d}}\mathbf{w}_{01}^{\top}\mathbf{x}^{ \prime}\right)\right]. \tag{9}\]
Note that this mean NTK, which is also, by the law of large numbers, the limiting NTK under the \(1/\sqrt{m}\) scaling (Jacot et al., 2018), does not depend on \((\lambda_{m,j})_{j\geq 1}\) nor \(m\). Let \(\widehat{\Theta}^{*}(\mathbf{X})=\mathbb{E}\left[\widehat{\Theta}_{m}( \mathbf{X};\mathbf{W}_{0})\right]\) be the associated \(n\)-by-\(n\) mean NTG matrix at initialisation, whose \((i,i^{\prime})\)-th entry is \(\Theta^{*}(\mathbf{x}_{i},\mathbf{x}_{i^{\prime}})\). Let
\[\kappa_{n}=\mathrm{eig}_{\min}(\widehat{\Theta}^{*}(\mathbf{X})) \tag{10}\]
be the minimum eigenvalue of the mean NTG matrix at initialisation. This minimum eigenvalue plays an important role in the analysis of the global convergence properties in the symmetrical NTK regime. Many authors (see e.g. (El Karoui, 2010; Nguyen et al., 2021)) have shown that, under some assumptions on the data, activation function, and initialisation, \(\kappa_{n}\) is strictly positive or bounded away from zero. We now state such a result, under the Assumptions of Section 2.3.
**Proposition 3.1** ((Du et al., 2019b, Theorem 3.1) and (Du et al., 2019a, Proposition F.1)).: _Under Assumptions 2.1 to 2.3, \(\kappa_{n}>0\)._
_Remark 3.2_.: Du et al. (2019b,a) make the assumption that the \(\mathbf{x}_{i}\) have unit norm. But their proof holds under the less strict Assumption 2.1(a).
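For ReLU, the expectation in Equation (9) has a closed form: a standard Gaussian orthant computation gives \(\mathbb{E}\big[\mathbf{1}_{\{\mathbf{w}^{\top}\mathbf{x}>0\}}\mathbf{1}_{\{\mathbf{w}^{\top}\mathbf{x}^{\prime}>0\}}\big]=(\pi-\theta)/(2\pi)\), where \(\theta\) is the angle between \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\) (the degree-zero arc-cosine kernel), so \(\widehat{\Theta}^{*}(\mathbf{X})\) and hence \(\kappa_{n}\) can be computed exactly. A minimal sketch, reusing `X` from the earlier snippet:

```python
def mean_ntk_relu(X):
    """Mean NTG (Equation 9) for ReLU: (x^T x' / d) * (pi - angle) / (2 pi)."""
    d = X.shape[1]
    G = X @ X.T
    norms = np.sqrt(np.diag(G))
    cos = np.clip(G / np.outer(norms, norms), -1.0, 1.0)
    return (G / d) * (np.pi - np.arccos(cos)) / (2.0 * np.pi)

kappa_n = np.linalg.eigvalsh(mean_ntk_relu(X)).min()   # Equation (10)
```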
### Limiting NTG
To give some intuition, we now describe the limiting behaviour of the NTG, for a fixed sample size \(n\), as the width \(m\) goes to infinity. The proof, given in Appendix B, follows from the triangle inequality and the law of large numbers, together with the facts that \(|\sigma^{\prime}(z)|\leq 1\) and \(\sum_{j\geq 1}\widetilde{\lambda}_{j}=1\).
**Proposition 3.3**.: _Consider a sequence \((\mathbf{w}_{0j})_{j\geq 1}\) of iid random vectors distributed as in Assumption 2.3. Assume Assumption 2.2 holds. Then,_
\[\widehat{\Theta}_{m}(\mathbf{X};\mathbf{W}_{0})\to\widehat{\Theta}_{\infty}( \mathbf{X};\mathbf{W}_{0}) \tag{11}\]
_almost surely as \(m\to\infty\), where_
\[\widehat{\Theta}_{\infty}(\mathbf{X};\mathbf{W}_{0})=\gamma\widehat{\Theta}^{ *}(\mathbf{X})+(1-\gamma)\widehat{\Theta}_{\infty}^{(2)}(\mathbf{X};\mathbf{W }_{0})\]
_with \(\widehat{\Theta}_{\infty}^{(2)}(\mathbf{X};\mathbf{W}_{0})\) a random positive semi-definite matrix:_
\[\widehat{\Theta}_{\infty}^{(2)}(\mathbf{X};\mathbf{W}_{0})=\frac{1}{d}\sum_{ j=1}^{\infty}\widetilde{\lambda}_{j}\,\mathrm{diag}\bigg{(}\sigma^{\prime} \bigg{(}\frac{\mathbf{X}\mathbf{w}_{0j}}{\sqrt{d}}\bigg{)}\bigg{)}\mathbf{X} \mathbf{X}^{\top}\,\mathrm{diag}\bigg{(}\sigma^{\prime}\bigg{(}\frac{\mathbf{X }\mathbf{w}_{0j}}{\sqrt{d}}\bigg{)}\bigg{)}.\]
_Also, \(\mathbb{E}[\widehat{\Theta}_{\infty}(\mathbf{X};\mathbf{W}_{0})]=\mathbb{E}[ \widehat{\Theta}_{\infty}^{(2)}(\mathbf{X};\mathbf{W}_{0})]=\widehat{\Theta}^{ *}(\mathbf{X})\), and_
\[\mathbb{E}\left[||\widehat{\Theta}_{\infty}(\mathbf{X};\mathbf{W}_{0})-\widehat {\Theta}^{*}(\mathbf{X})||_{F}^{2}\right]=C_{0}(\mathbf{X})(1-\gamma)^{2}\sum_ {j\geq 1}\widetilde{\lambda}_{j}^{2} \tag{12}\]
_where \(||\cdot||_{F}\) denotes the Frobenius norm, and \(C_{0}(\mathbf{X})\geq 0\) is some positive constant equal to_
\[\sum_{1\leq i,i^{\prime}\leq n}\!\left(\frac{\mathbf{x}_{i}^{\top}\mathbf{x}_ {i^{\prime}}}{d}\right)^{2}\!\mathrm{Var}\bigg{(}\sigma^{\prime}\bigg{(}\frac{ 1}{\sqrt{d}}\mathbf{w}_{01}^{\top}\mathbf{x}_{i}\bigg{)}\sigma^{\prime}\bigg{(} \frac{1}{\sqrt{d}}\mathbf{w}_{01}^{\top}\mathbf{x}_{i^{\prime}}\bigg{)}\bigg{)}.\]
When \(\gamma=1\) (symmetric NTK scaling), the NTG converges to a constant matrix, and solutions obtained by GD coincide with that of kernel regression. Whenever \(\gamma<1\), Proposition 3.3 shows that the NTG is random at initialisation, even in the infinite-width limit, suggesting that we are not operating in the kernel regime asymptotically. As shown in Equation (12), the departure from the kernel regime, as measured by the total variance of the limiting random NTG, can be quantified by the nonnegative constant
\[(1-\gamma)^{2}\left(\sum_{j\geq 1}\widetilde{\lambda}_{j}^{2}\right)\in[0,1].\]
When this constant is close to 0, we approach the kernel regime; increasing this value leads to a departure from the kernel regime, and increases the amount of feature learning (see Section 5). The quantity \(\sum_{j\geq 1}\widetilde{\lambda}_{j}^{2}\in(0,1]\) is always strictly positive. More rapid decrease of the \(\widetilde{\lambda}_{j}\) as \(j\) increases will lead to higher values of \(\sum_{j\geq 1}\widetilde{\lambda}_{j}^{2}\). For example, when using the Zipf weights in Equation (4), we have \(\sum_{j\geq 1}\widetilde{\lambda}_{j}^{2}=\frac{\zeta(2/\alpha)}{\zeta(1/ \alpha)^{2}}\), which decreases with \(\alpha\), as shown in Figure 5 in the Appendix.
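The Zipf example can be checked directly; a short sanity check of \((1-\gamma)^{2}\sum_{j\geq 1}\widetilde{\lambda}_{j}^{2}=(1-\gamma)^{2}\,\zeta(2/\alpha)/\zeta(1/\alpha)^{2}\), using SciPy's Riemann zeta function:

```python
from scipy.special import zeta

def departure_constant(gamma, alpha):
    """(1 - gamma)^2 * sum_j tilde_lambda_j^2 for the Zipf weights of Equation (4)."""
    return (1.0 - gamma) ** 2 * zeta(2.0 / alpha) / zeta(1.0 / alpha) ** 2

print(departure_constant(gamma=1.0, alpha=0.5))  # 0.0: exact NTK / kernel regime
print(departure_constant(gamma=0.0, alpha=0.5))  # zeta(4)/zeta(2)^2 = 0.4
```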
In this section, we described the behaviour of the NTG at initialisation in the infinite-width limit, and provided some intuition on the node scaling parameters. In the next two sections, we state results on the global convergence and feature learning properties of large, but finite, neural networks under such asymmetrical scaling.
## 4 Global convergence analysis
### Main results
**ReLU case.** Our main theorem for ReLU activation functions, which is given below, explains what happens during training via gradient flow. It says that with high probability, the loss decays exponentially fast with respect to \(\kappa_{n}\) and the training time \(t\), while the weights \(\mathbf{w}_{tj}\) and the NTG matrix change by \(O\left(\frac{n\lambda_{m,j}^{1/2}}{\kappa_{n}d^{1/2}}\right)\) and \(O\left(\frac{n^{2}\sum_{j=1}^{m}\lambda_{m,j}^{3/2}}{\kappa_{n}d^{3/2}}+\frac{n^{3/2}\sqrt{\sum_{j=1}^{m}\lambda_{m,j}^{3/2}}}{\kappa_{n}^{1/2}d^{5/4}}\right)\), respectively.
**Theorem 4.1** (Global convergence, ReLU).: _Consider \(\delta\in(0,1)\). Let \(D_{0}=\sqrt{2C^{2}+(2/d)}\). Assume Assumption 2.1, Assumption 2.2(a), and Assumption 2.3. Assume \(\gamma>0\) and_
\[m\geq\max\left(\frac{2^{3}n\log\frac{4n}{\delta}}{\kappa_{n}d},\ \frac{2^{25}n^{4}D_{0}^{2}}{\kappa_{n}^{4}d^{3}\gamma^{2}\delta^{5}},\ \frac{2^{35}n^{6}D_{0}^{2}}{\kappa_{n}^{6}d^{5}\gamma^{2}\delta^{5}}\right).\]
_Then, with probability at least \(1-\delta\), the following properties hold for all \(t\geq 0\):_
1. \(\mathrm{eig}_{\min}(\widehat{\Theta}_{m}(\mathbf{X};\mathbf{W}_{t}))\geq\frac{\gamma\kappa_{n}}{4}\)_;_
2. \(L_{m}(\mathbf{W}_{t})\leq e^{-(\gamma\kappa_{n}t)/2}L_{m}(\mathbf{W}_{0})\)_;_
3. \(\|\mathbf{w}_{tj}-\mathbf{w}_{0j}\|\leq\frac{2^{3}nD_{0}\sqrt{\lambda_{m,j}}}{\kappa_{n}d^{1/2}\gamma\delta^{5/2}}\) _for all_ \(j\in[m]\)_;_
4. \(\|\widehat{\Theta}_{m}(\mathbf{X};\mathbf{W}_{t})-\widehat{\Theta}_{m}(\mathbf{X};\mathbf{W}_{0})\|_{2}\leq\left(\frac{2^{9}n^{2}D_{0}}{\kappa_{n}d^{3/2}\gamma\delta^{5/2}}\cdot\sum_{j=1}^{m}\lambda_{m,j}^{3/2}\right)+\left(\frac{2^{6}n^{3/2}D_{0}^{1/2}}{\kappa_{n}^{1/2}\gamma^{1/2}\delta^{5/4}d^{5/4}}\cdot\sqrt{\sum_{j=1}^{m}\lambda_{m,j}^{3/2}}\right)\)_._
The above theorem implies that, whenever \(\gamma>0\), the training error converges to 0 exponentially fast. Additionally, the weight change is bounded by a factor \(\sqrt{\lambda_{m,j}}\) and the NTG change by a factor \(\sqrt{\sum_{j=1}^{m}\lambda_{m,j}^{3/2}}\). We have (see Appendix A.2), as \(m\rightarrow\infty\), \(\lambda_{m,j}\rightarrow(1-\gamma)\widetilde{\lambda}_{j}\) for \(j\geq 1\), and \(\sum_{j=1}^{m}\lambda_{m,j}^{3/2}\rightarrow(1-\gamma)^{3/2}\sum_{j=1}^{\infty}\widetilde{\lambda}_{j}^{3/2}\). If \(\widetilde{\lambda}_{j}>0\) (note that we necessarily have \(\widetilde{\lambda}_{1}>0\)), the upper bound in (c) therefore vanishes in the infinite-width limit if and only if \(\gamma=1\) (NTK regime); similarly, the upper bound in (d) vanishes if and only if \(\gamma=1\). Although we were not able to obtain matching lower bounds, we show in Section 5 that feature learning indeed arises whenever \(\gamma<1\).
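Theorem 4.1(b) is easy to probe empirically with a forward-Euler discretisation of the gradient flow. The sketch below reuses the names from the forward-pass snippet in Section 2.1 together with synthetic targets; the learning rate and step count are arbitrary, and for \(\gamma>0\) the recorded losses should decay roughly geometrically.

```python
def train_euler(X, y, W, a, lam, lr=0.05, steps=300):
    """Forward-Euler discretisation of the gradient flow on L_m (Equation 5)."""
    d = X.shape[1]
    losses = []
    for _ in range(steps):
        Z = X @ W.T / np.sqrt(d)                     # pre-activations, (n, m)
        resid = y - np.maximum(Z, 0.0) @ (np.sqrt(lam) * a)
        losses.append(0.5 * float(resid @ resid))    # L_m(W_t)
        S = (Z > 0).astype(float)                    # sigma'(Z) for ReLU
        # dL/dw_j = -sqrt(lam_j) a_j / sqrt(d) * sum_i sigma'(Z_ij) resid_i x_i
        grad = -(np.sqrt(lam) * a)[:, None] * ((S * resid[:, None]).T @ X) / np.sqrt(d)
        W = W - lr * grad
    return W, losses

y = rng.normal(size=len(X))                          # synthetic targets
W_final, losses = train_euler(X, y, W.copy(), a, lam)
```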
**Smooth activation case.** We now state a similar result in the smooth activation case. First define
\[C_{1}=\sup_{c\in(0,1]}\mathbb{E}_{z\sim\mathcal{N}(0,1)}\left[\sigma\left(\frac {cz}{\sqrt{d}}\right)^{2}\right]. \tag{13}\]
**Theorem 4.2**.: _(Global convergence, smooth activation) Consider \(\delta\in(0,1)\). Assume Assumption 2.1, Assumption 2.2(b), and Assumption 2.3. Assume \(\gamma>0\) and_
\[m\geq\max\bigg{(}\frac{2^{3}n\log\frac{2n}{\delta}}{\kappa_{n}d},\ \frac{2^{10}n^{3}M^{2}(C^{2}+C_{1})}{ \kappa_{n}^{4}d^{3}\gamma^{2}\delta},\frac{2^{15}n^{4}M^{2}(C^{2}+C_{1})}{ \kappa_{n}^{4}d^{4}\gamma^{2}\delta}\bigg{)}.\]
_Then, with probability at least \(1-\delta\), the following properties hold for all \(t\geq 0\):_
1. \(\mathrm{eig}_{\min}(\widehat{\Theta}_{m}(\mathbf{X};\mathbf{W}_{t}))\geq\frac{\gamma\kappa_{n}}{4}\)_;_
2. \(L_{m}(\mathbf{W}_{t})\leq e^{-(\gamma\kappa_{n}t)/2}L_{m}(\mathbf{W}_{0})\)_;_
3. \(\|\mathbf{w}_{tj}-\mathbf{w}_{0j}\|\leq\sqrt{\lambda_{m,j}}\times\frac{n}{\kappa_{n}d^{3/2}}\sqrt{\frac{2^{7}(C^{2}+C_{1})}{\gamma\delta}}\) _for all_ \(j\in[m]\)_;_
4. \(\|\widehat{\Theta}_{m}(\mathbf{X};\mathbf{W}_{t})-\widehat{\Theta}_{m}(\mathbf{X};\mathbf{W}_{0})\|_{2}\leq\Big{(}\frac{2^{7}n^{3}M^{2}(C^{2}+C_{1})}{\kappa_{n}^{2}d^{3}\gamma^{2}\delta}\cdot\sum_{j=1}^{m}\lambda_{m,j}^{2}\Big{)}+\Big{(}\frac{2^{5}n^{2}M(C^{2}+C_{1})^{1/2}}{\kappa_{n}d^{2}\gamma\delta^{1/2}}\cdot\sqrt{\sum_{j=1}^{m}\lambda_{m,j}^{2}}\Big{)}\)_._
The same comments as for the ReLU case apply here. The proof is slightly different from the ReLU case, and leads to a different factor \(\sqrt{\sum_{j=1}^{m}\lambda_{m,j}^{2}}\) for the change of the NTG in (d).
### Sketch of the proofs
We give here a sketch of the proofs of Theorems 4.1 and 4.2. The detailed proofs are given in Appendices E and F, with secondary lemmas and propositions given in Appendices C and D.
The structures of the proofs of Theorems 4.1 and 4.2 are similar to that of (Du et al., 2019, Theorem 3.2), which showed analogous results on the global convergence under the NTK scaling. There are some key differences however which we highlight below.
Gradient flow converges to a global minimum of the objective function if the minimum eigenvalue of the NTG matrix \(\widehat{\Theta}_{m}(\mathbf{X};\mathbf{W}_{t})\) is bounded away from zero by some positive constant for all \(t\geq 0\). In the NTK scaling case, Du et al. (2019) showed that it is satisfied as, for \(m\) sufficiently large, with high probability:
1. the NTG matrix at initialisation is close to its mean, and the minimum eigenvalue is close to that of the mean NTG,
2. the weights \(\mathbf{w}_{tj}\) are almost constant over time, which implies
3. the NTG matrix is almost constant over time, hence
4. the minimum eigenvalue of the NTG matrix at time \(t\) is close to that at initialisation, which is bounded away from zero.
However, in the case of asymmetrical node scaling (\(\gamma<1\)), none of the points (i-iv) holds. At initialisation, the random NTG matrix may be significantly different from its mean. Additionally, as we will discuss in Section 5, both the weights and the NTG matrix substantially change over time. This therefore requires a somewhat different approach that we now describe.
Let \(\lambda_{m,j}^{(1)}=\frac{\gamma}{m}\) and \(\lambda_{m,j}^{(2)}=(1-\gamma)\frac{\widetilde{\lambda}_{j}}{\sum_{k=1}^{m}\widetilde{\lambda}_{k}}\), and note that \(\lambda_{m,j}^{(1)}+\lambda_{m,j}^{(2)}=\lambda_{m,j}\). For \(k\in\{1,2\}\), let \(\widehat{\Theta}_{m}^{(k)}\) be the \(n\)-by-\(n\) symmetric positive semi-definite matrix defined by Equation (7), with \(\lambda_{m,j}\) replaced by \(\lambda_{m,j}^{(k)}\). Note that
\[\widehat{\Theta}_{m}(\mathbf{X};\mathbf{W}_{t})=\widehat{\Theta}_{m}^{(1)}( \mathbf{X};\mathbf{W}_{t})+\widehat{\Theta}_{m}^{(2)}(\mathbf{X};\mathbf{W}_{ t}) \tag{14}\]
with \(\mathbb{E}[\widehat{\Theta}_{m}^{(1)}(\mathbf{X};\mathbf{W}_{0})]=\gamma \widehat{\Theta}^{*}(\mathbf{X})\).
The key idea of the proof is to use the above decomposition of the NTG matrix as a sum of two terms, and to show that, while the second term may change over time, the first term is close to its mean at initialisation, and does not change over time. The important points of the proof are as follows. For large \(m\), with high probability,
1. \(\widehat{\Theta}_{m}^{(1)}(\mathbf{X};\mathbf{W}_{0})\) is close to its mean \(\gamma\widehat{\Theta}^{*}(\mathbf{X})\) and its minimum eigenvalue is therefore lower bounded by \(\frac{\gamma\kappa_{n}}{2}\);
2. while the weights \(\mathbf{W}_{t}\) may change significantly over time, \(\widehat{\Theta}_{m}^{(1)}(\mathbf{X};\mathbf{W}_{t})\) remains almost constant over time;
3. as a result, the minimum eigenvalue of \(\widehat{\Theta}_{m}^{(1)}(\mathbf{X};\mathbf{W}_{t})\) can be lower bounded by \(\frac{\gamma\kappa_{n}}{4}\);
4. this implies that the minimum eigenvalue of the overall NTG matrix \(\widehat{\Theta}_{m}(\mathbf{X};\mathbf{W}_{t})\) is lower bounded by \(\frac{\gamma\kappa_{n}}{4}\). A small numerical sketch of the decomposition (14) is given below.
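The sketch below probes the decomposition numerically. It assumes, purely for illustration, per-node gradients of the form \(\sqrt{\lambda_{m,j}}\,\sigma^{\prime}(\mathbf{w}_{j}^{\top}\mathbf{x}/\sqrt{d})\,\mathbf{x}/\sqrt{d}\) as a stand-in for Equation (7), which is not reproduced here; since the NTG is linear in the \(\lambda_{m,j}\), the split in Equation (14) is exact regardless of the specific gradient form.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, gamma, alpha = 20, 10, 5000, 0.5, 0.5

# Node scaling lambda_{m,j} = gamma/m + (1 - gamma) * tl_j / sum_k tl_k,
# split into the two parts lambda^{(1)} and lambda^{(2)} used in the proof.
tl = np.arange(1, m + 1) ** (-1.0 / alpha)
lam1 = np.full(m, gamma / m)
lam2 = (1.0 - gamma) * tl / tl.sum()
lam = lam1 + lam2

X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # inputs on the unit sphere
W = rng.standard_normal((m, d))                 # first-layer weights at initialisation

def ntg(lams):
    # NTG with illustrative per-node gradients sqrt(lam_j) * sigma'(.) * x / sqrt(d).
    s_prime = ((X @ W.T) / np.sqrt(d) > 0).astype(float)   # ReLU derivative, (n, m)
    G = s_prime * np.sqrt(lams)
    return (G @ G.T) * (X @ X.T) / d

theta, theta1, theta2 = ntg(lam), ntg(lam1), ntg(lam2)
print(np.allclose(theta, theta1 + theta2))      # decomposition (14) holds exactly
print(np.linalg.eigvalsh(theta1).min())         # min eigenvalue of the gamma-part
```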
## 5 Feature learning analysis
In this section we provide some results about feature learning when \(\gamma<1\).
### Weight change by the first gradient update
We first show that on average, each individual weight in the network changes on the order of \(\lambda_{m,j}\) by the first gradient-update step. To state this formally, for \(j\in[m]\) and \(k\in[d]\), let \(w_{0jk}\) be the \(k\)-th component of the weight vector \(\mathbf{w}_{0j}\) at initialisation, and define a function \(g_{1}:\mathbb{R}^{d}\to\mathbb{R}\) by
\[g_{1}(\mathbf{x})=\mathbb{E}_{z\sim\mathcal{N}(0,1)}\left[\sigma\left(\frac{z \|\mathbf{x}\|}{\sqrt{d}}\right)\sigma^{\prime}\left(\frac{z\|\mathbf{x}\|}{ \sqrt{d}}\right)\right].\]
In the ReLU case, we have \(g_{1}(\mathbf{x})=\|\mathbf{x}\|/\sqrt{2\pi d}\). The next theorem describes our result on weight change, and its proof appears in Appendix G.
**Theorem 5.1**.: _Assume Assumption 2.2 holds. For all \(j\in[m]\) and \(k\in[d]\), we have_
\[\mathbb{E}\left[\left.\frac{dw_{tjk}}{dt}\right|_{t=0}\right]=-\frac{\lambda_{m,j}}{\sqrt{d}}\sum_{i=1}^{n}x_{ik}g_{1}(\mathbf{x}_{i}).\]
_In particular, if \(\sigma\) is the ReLU function, this expectation is \(-\frac{\lambda_{m,j}}{d\sqrt{2\pi}}\sum_{i=1}^{n}x_{ik}\|\mathbf{x}_{i}\|\)._
Recall that \(\lambda_{m,j}\to(1-\gamma)\widetilde{\lambda}_{j}\) as \(m\to\infty\). Hence, if \(\gamma<1\) and \(\widetilde{\lambda}_{j}>0\), the expected change of the weight vector \(\mathbf{w}_{j}\) at the first GD update is non-vanishing in the infinite-width limit.
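Theorem 5.1 can be sanity-checked by Monte Carlo. The sketch below assumes, as an illustrative stand-in for the Section 2.1 model, a network \(f(\mathbf{x})=\sum_{j}\sqrt{\lambda_{m,j}}\,a_{j}\,\sigma(\mathbf{w}_{j}^{\top}\mathbf{x}/\sqrt{d})\) with independent random signs \(a_{j}\in\{\pm 1\}\) and squared-error gradient flow; only the target expression on the right-hand side is taken from the theorem.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m, trials = 5, 8, 50, 100_000

X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
lam = np.full(m, 1.0 / m)      # any node scaling works; uniform for simplicity
j, k = 0, 0                    # which weight coordinate to track

acc = 0.0
for _ in range(trials):
    W = rng.standard_normal((m, d))
    a = rng.choice([-1.0, 1.0], size=m)
    pre = (X @ W.T) / np.sqrt(d)                                # (n, m) pre-activations
    f = (np.maximum(pre, 0.0) * np.sqrt(lam) * a).sum(axis=1)   # network outputs
    # velocity of w_{jk} at t = 0 under gradient flow on sum_i (f_i - y_i)^2 / 2
    grad_jk = np.sqrt(lam[j]) * a[j] * (pre[:, j] > 0) * X[:, k] / np.sqrt(d)
    acc += -np.sum((f - y) * grad_jk)

print("Monte Carlo :", acc / trials)
print("Theorem 5.1 :", -lam[j] / (d * np.sqrt(2.0 * np.pi))
      * np.sum(X[:, k] * np.linalg.norm(X, axis=1)))
```

The two printed values should approximately agree, up to Monte Carlo error.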
### NTK change by the first gradient update in the smooth activation case
We now consider a smooth activation function \(\sigma\) satisfying Assumption 2.2(b). The next result characterises how much the NTK at time \(0\) changes on average, using the neural tangent hierarchy (Huang and Yau, 2020). Let \(g_{2}\) be a function of type \((\mathbb{R}^{d}\setminus\{0\})\times(\mathbb{R}^{d}\setminus\{0\})\times(\mathbb{R}^{d}\setminus\{0\})\to\mathbb{R}\) defined by
\[g_{2}(\mathbf{x}_{k},\mathbf{x}_{\ell},\mathbf{x}_{i})=\mathbb{E}_{(z_{1},z_{ 2},z_{3})}\left[\sigma^{\prime\prime}(z_{1})\sigma^{\prime}(z_{2})\sigma^{ \prime}(z_{3})\sigma(z_{3})\right],\]
where \((z_{1},z_{2},z_{3})\in\mathbb{R}^{3}\) is distributed as a multivariate zero-mean Gaussian with the following covariance matrix:
\[\Sigma=\frac{1}{d}\left(\begin{array}{ccc}\|\mathbf{x}_{k}\|^{2}&\mathbf{x }_{k}^{\top}\mathbf{x}_{\ell}&\mathbf{x}_{k}^{\top}\mathbf{x}_{i}\\ \mathbf{x}_{k}^{\top}\mathbf{x}_{\ell}&\|\mathbf{x}_{\ell}\|^{2}&\mathbf{x}_{ \ell}^{\top}\mathbf{x}_{i}\\ \mathbf{x}_{i}^{\top}\mathbf{x}_{k}&\mathbf{x}_{i}^{\top}\mathbf{x}_{\ell}&\| \mathbf{x}_{i}\|^{2}\end{array}\right).\]
The evolution equation for \(\Theta_{m}(\mathbf{x}_{k},\mathbf{x}_{\ell};\mathbf{W}_{t})\) is
\[\frac{d\Theta_{m}(\mathbf{x}_{k},\mathbf{x}_{\ell};\mathbf{W}_{ t})}{dt} =\Big{(}\nabla_{\mathbf{W}}\Theta_{m}(\mathbf{x}_{k},\mathbf{x}_{ \ell};\mathbf{W}_{t})\Big{)}^{\top}\,\frac{d\mathbf{W}_{t}}{dt}\] \[=-\sum_{i=1}^{n}(f_{m}(\mathbf{x}_{i};\mathbf{W}_{t})-y_{i})\times \Big{\langle}\nabla_{\mathbf{W}}\Theta_{m}(\mathbf{x}_{k},\mathbf{x}_{\ell}; \mathbf{W}_{t}),\,\nabla_{\mathbf{W}}f_{m}(\mathbf{x}_{i};\mathbf{W}_{t}) \Big{\rangle}.\]
**Theorem 5.2**.: _Assume Assumption 2.2(b) holds. For all \(\mathbf{x}_{k},\mathbf{x}_{\ell}\in(\mathbb{R}^{d}\setminus\{0\})\), we have_
\[\mathbb{E}\left[\left.\frac{d\Theta_{m}(\mathbf{x}_{k},\mathbf{x}_{\ell}; \mathbf{W}_{t})}{dt}\right|_{t=0}\right]=-\frac{\mathbf{x}_{k}^{\top}\mathbf{x }_{\ell}}{d^{3/2}}\times\left(\sum_{j=1}^{m}\lambda_{m,j}^{2}\right)\times \left[\sum_{i=1}^{n}\big{(}(\mathbf{x}_{k}^{\top}\mathbf{x}_{i})g_{2}(\mathbf{ x}_{k},\mathbf{x}_{\ell},\mathbf{x}_{i})+(\mathbf{x}_{\ell}^{\top}\mathbf{x}_{i})g_{2}( \mathbf{x}_{\ell},\mathbf{x}_{k},\mathbf{x}_{i})\big{)}\right].\]
The above theorem shows that the expected change to the NTK at initialisation is scaled by the factor \(\sum_{j=1}^{m}\lambda_{m,j}^{2}\), which converges to \((1-\gamma)^{2}\sum_{j}\widetilde{\lambda}_{j}^{2}\) as \(m\to\infty\). The expected change in the NTK at the first GD iteration is therefore bounded away from zero when \(\gamma<1\).
## 6 Experiments
The code to reproduce the experiments in this section can be found in this repository.
Figure 4: Results for Cifar10 data. From left to right, 1) test risk through training, 2) the differences in weight norms \(\|\mathbf{w}_{ij}-\mathbf{w}_{0j}\|\) with \(j\)s being the neurons having the maximum difference at the end of the training, 3) the test risks of the pruned models, and 4) test accuracies of the pruned models.
Figure 3: A subset of results for MNIST dataset. (Left) test accuracies of the pruned models, and (right) test accuracies of transferred models.
Figure 2: A subset of results for the regression experiments. From left to right, 1) the differences in weight norms \(\|\mathbf{w}_{tj}-\mathbf{w}_{0j}\|\) with \(j\)s being the neurons having the maximum difference at the end of the training for concrete dataset, 2) the differences in NTG matrices for energy dataset, 3) training risks of pruned models for airfoil dataset, and 4) test risks of transferred models for plant dataset.
### Simulated data
We first illustrate our theory on simulated data. We generate a dataset with \(n=100\) observations where, for \(i=1,\ldots,n\), \(\mathbf{x}_{i}\) is \(d=50\) dimensional and sampled uniformly on the unit sphere and \(y_{i}=\frac{5}{d}\sum_{j=1}^{d}\sin(\pi x_{i,j})+\varepsilon_{i}\) where \(\varepsilon_{i}\overset{\text{iid}}{\sim}\mathcal{N}(0,1)\). We use a wide shallow neural network as described in Section 2.1, with \(m=2000\) hidden nodes and scaling parameters \(\lambda_{m,j}\) defined as in Equation (1), where \(\gamma\in[0,1]\) and \((\widetilde{\lambda}_{j})_{j\geq 1}\) is the probability mass of a Zipf law with parameter \(\alpha\in(0,1)\), see Equation (4). We consider four values for the pair of parameters \((\gamma,\alpha)\): \(\gamma=1\), \((\gamma,\alpha)=(0.5,0.7)\), \((\gamma,\alpha)=(0.5,0.5)\), and \((\gamma,\alpha)=(0,0.4)\). For each setting, we run gradient descent with a learning rate of 1.0 for 50 000 steps, which is repeated five times to get average results.
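The data-generating process and the node scaling parameters can be reproduced in a few lines (a sketch; the seed is arbitrary, and we normalise the Zipf masses by their truncated sum \(\sum_{k=1}^{m}\widetilde{\lambda}_{k}\), matching the decomposition used in Section 4.2):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, gamma, alpha = 100, 50, 2000, 0.5, 0.7

# Inputs uniform on the unit sphere; targets y_i = (5/d) * sum_j sin(pi * x_ij) + noise.
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = (5.0 / d) * np.sin(np.pi * X).sum(axis=1) + rng.standard_normal(n)

# Node scaling lambda_{m,j} with Zipf weights of parameter alpha.
tl = np.arange(1, m + 1) ** (-1.0 / alpha)
lam = gamma / m + (1.0 - gamma) * tl / tl.sum()
```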
We summarise the results in Figure 1, which shows the training error and the evolution of the weights, NTG, and minimum eigenvalue of the NTG as a function of the GD iterations. We see a clear correspondence between the theory and the empirical results. For \(\gamma>0\), GD achieves near-zero training error. The minimum eigenvalue and the training rates increase with the value of \(\gamma\). For \(\gamma=1\), we have the highest minimum eigenvalue and the fastest training rate; however, there is no/very little feature learning: the weights and the NTG do not change significantly over the GD iterations. When \(\gamma<1\), there is clear evidence of feature learning: both the weights and the NTG change significantly over time; the smaller \(\gamma\) and \(\alpha\), the more feature learning arises.
### Regression
We also validate our model on four real-world regression datasets from the UCI repository2: concrete (concrete compressive strength, \((n,d)=(1030,9)\)), energy (energy efficiency, \((n,d)=(768,8)\)), airfoil (airfoil self-noise, \((n,d)=(1503,6)\)), and plant (combined cycle power plant, \((n,d)=(9568,4)\)). We split each dataset into training (40%), test (20%), and validation sets (40%), and the validation set is used to test transfer learning. We use the same parameters as in Section 6.1, and we now train our neural networks for 100 000 steps in each run.
Footnote 2: [https://archive.ics.uci.edu/ml/datasets.php](https://archive.ics.uci.edu/ml/datasets.php)
In addition, to further highlight the presence of feature learning in our model, we conduct two experiments. Firstly, we test the prunability of the networks. We gradually prune those hidden nodes which have small feature importance and measure risks after pruning. Here, feature importance is measured as \((\lambda_{m,j}\|\mathbf{w}_{t,j}\|^{2})_{j\in[m]}\), and our theory suggests that a model with smaller \(\gamma\) and/or \(\alpha\) values is likely to be more robust with respect to pruning, as long as \(\gamma<1\). Similar empirical findings were made by Wolinski et al. (2020) about the benefits of asymmetrical scaling for neural network pruning, in the case \(\gamma=0\). Secondly, we test the transferability of the features learnt from our networks as follows. We first split the validation set into a held-out training set (50%) and a test set (50%), and extract features of the held-out training set using the neural networks trained with the original training set. The features are taken to be the output of the hidden layers, so each data point in the validation set is represented with a \(m=2000\) dimensional vector. Then, we sort the feature dimensions with respect to feature importance as above and use the top-\(k\) of these to train an external model. The chosen external model is a FFNN with a single hidden layer having 64 neurons and a ReLU activation function, and it is trained for 5,000 steps of gradient descent with a learning rate of 1.0. Again, our theory suggests that a model with smaller \(\gamma\) and \(\alpha\) values is likely to exhibit better transfer learning.
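A minimal sketch of the pruning criterion just described (the stand-in weights and the `keep` budget are placeholders; only the importance score \(\lambda_{m,j}\|\mathbf{w}_{t,j}\|^{2}\) is taken from the text):

```python
import numpy as np

def prune_by_importance(lam, W_t, keep):
    # Keep the `keep` hidden nodes with the largest importance lam_j * ||w_{t,j}||^2.
    importance = lam * np.sum(W_t ** 2, axis=1)
    return np.argsort(-importance)[:keep]

# Example with random stand-ins for trained quantities:
m, d = 2000, 8
lam = np.arange(1, m + 1, dtype=float) ** (-2.0)
lam /= lam.sum()
W_t = np.random.default_rng(0).standard_normal((m, d))
kept = prune_by_importance(lam, W_t, keep=100)
mask = np.zeros(m, dtype=bool)
mask[kept] = True   # nodes outside the mask are dropped (or their features discarded)
```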
We summarise a subset of our results in Figure 2; additional results can be found in Appendix I. In line with the simulated data experiments, we observe a stronger presence of feature learning, in terms of weight-norm changes and NTG changes, for smaller values of \(\gamma\) and \(\alpha\). Also, we observe that models with smaller \(\gamma\) values are more robust to pruning, allowing them to retain relatively low training risks for larger numbers of pruned nodes. A similar phenomenon is seen in transfer learning, where models with smaller values of \(\gamma\) have lower risks when a small number of features are used for the transfer. The interpretation is that those models are able to learn a sufficient number of representative features using a relatively small number of neurons.
### Classification
Finally, we apply our model on image classification tasks. We conduct two experiments, 1) a small-scale experiment under the same setting assumed in our theory, and 2) a larger-scale experiment under a more realistic setting.
**MNIST.** We take a subset of size 5,000 from the MNIST dataset and train the same models used in the previous experiments. We also test pruning and transfer learning, where we use an additional subset of size 5,000 to train an external FFNN having a single hidden layer with 128 nodes. To match our theory, instead of using cross-entropy loss, we use the MSE loss by treating one-hot class labels as continuous-valued targets. The outputs of the models are 10 dimensional, so we compute the NTG matrices using only the first dimension of the outputs. In general, we get similar results in line with our previous experiments. The pruning and transfer learning results are displayed in Figure 3. Other results can be found in Figure 10 in the Appendix.
**CIFAR10.** We consider a more challenging image classification task, CIFAR10. The dataset is composed of 60 000 images, among which 50 000 are used for training and the rest for testing. There are ten different classes. We illustrate the described benefits of asymmetrical node scaling and show they hold for this more challenging problem. In many applications, one uses a large model pre-trained on a general task and then performs fine-tuning or transfer learning to adapt it to the task at hand. This is the approach we implement here. We consider a Resnet-18 model, pre-trained on the ImageNet dataset. Using this model, we transform each one of the original images into a vector of dimension \(512\). We then train wide shallow neural networks as described in Section 2.1, with \(m=2000\) and output dimension \(10\). The experimental setup differs from the previous results in that 1) we use stochastic gradient descent with a mini-batch size of \(64\) instead of full-batch gradient descent, where a step is now a mini-batch step; and 2) we use the cross-entropy loss instead of the MSE. All experiments are run five times, and the learning rate is set to \(5.0\). In Figure 4, we report the results for the same four \((\gamma,\alpha)\) pairs. We can see that the main conclusions of the previous experiments hold in this setting as well, even though the theory does not apply directly. Additional results, investigating the effect of \(\gamma\), are given in Figure 11 in the Appendix.
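A sketch of the feature-extraction step (the torchvision weight identifier, the input resizing, and the normalisation constants are standard library conventions and our assumptions; they are not specified above):

```python
import torch
import torchvision
from torchvision import transforms

# ImageNet-pretrained ResNet-18 as a fixed 512-dimensional feature extractor
# (weights string per torchvision >= 0.13).
resnet = torchvision.models.resnet18(weights="IMAGENET1K_V1")
resnet.fc = torch.nn.Identity()   # drop the classification head
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
cifar = torchvision.datasets.CIFAR10(root="./data", train=True, download=True,
                                     transform=preprocess)
loader = torch.utils.data.DataLoader(cifar, batch_size=64)

with torch.no_grad():
    images, _ = next(iter(loader))
    feats = resnet(images)        # shape (64, 512); input to the wide shallow network
```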
2304.07070 | Who breaks early, looses: goal oriented training of deep neural networks
based on port Hamiltonian dynamics | The highly structured energy landscape of the loss as a function of
parameters for deep neural networks makes it necessary to use sophisticated
optimization strategies in order to discover (local) minima that guarantee
reasonable performance. Overcoming less suitable local minima is an important
prerequisite and often momentum methods are employed to achieve this. As in
other non local optimization procedures, this however creates the necessity to
balance between exploration and exploitation. In this work, we suggest an event
based control mechanism for switching from exploration to exploitation based on
reaching a predefined reduction of the loss function. As we give the momentum
method a port Hamiltonian interpretation, we apply the 'heavy ball with
friction' interpretation and trigger breaking (or friction) when achieving
certain goals. We benchmark our method against standard stochastic gradient
descent and provide experimental evidence for improved performance of deep
neural networks when our strategy is applied. | Julian Burghoff, Marc Heinrich Monells, Hanno Gottschalk | 2023-04-14T11:47:52Z | http://arxiv.org/abs/2304.07070v1 | Who breaks early, looses: goal oriented training of deep neural networks based on port Hamiltonian dynamics
###### Abstract
The highly structured energy landscape of the loss as a function of parameters for deep neural networks makes it necessary to use sophisticated optimization strategies in order to discover (local) minima that guarantee reasonable performance. Overcoming less suitable local minima is an important prerequisite and often momentum methods are employed to achieve this. As in other non local optimization procedures, this however creates the necessity to balance between exploration and exploitation. In this work, we suggest an event based control mechanism for switching from exploration to exploitation based on reaching a predefined reduction of the loss function. As we give the momentum method a port Hamiltonian interpretation, we apply the 'heavy ball with friction' interpretation and trigger breaking (or friction) when achieving certain goals. We benchmark our method against standard stochastic gradient descent and provide experimental evidence for improved performance of deep neural networks when our strategy is applied.
neural nets, momentum, goal oriented search, port Hamiltonian systems
## 1 Introduction
The success of deep neural networks (DNNs) significantly depends on the cheap computation of gradients using back-propagation, enabling gradient based minimization of the loss functions. As the parameter count of DNNs ranges between several tens of thousands in small classification networks and several billion in large scale generative models, there seems to be no alternative to the use of gradients. However, gradient based optimization is beset with the problem of local minima, of which the energy landscape of DNNs offers plenty. Exploitation of a local minimum with gradient descent comes with guarantees for progress relative to previous optimization steps, but does not guarantee a decent level of performance. To make the search more global, momentum methods have therefore been introduced to overcome local minima.
As compared to gradient descent, momentum based methods have more parameters to adjust. Besides the strength of the inertial forces controlled by the 'mass' parameter, a 'friction' parameter has to be determined, which is responsible for slowing down the search motion and ultimately bringing it to rest. Finally, the learning rate needs to be controlled throughout the progress of the optimization process, like in gradient descent.
The complexity in setting and controlling the aforementioned hyper-parameters can be alleviated by an interpretation of the optimization process in physical terms, as already indicated by the physical connotations of 'mass' and 'friction'. It has recently been proposed to cast the optimization process in a port Hamiltonian framework, which makes the convergence of the optimization process to a stationary point transparent via energy based considerations, where the loss corresponds to potential and the momentum to kinetic energy, whereas 'friction' accounts for energy dissipation and interdicts motion at high pace for unlimited time. It is clear that the friction / energy dissipation parameter is essential for the (non) locality of the optimization process: if it is high, friction essentially damps out all momentum and the procedure 'just flows down the hill' as in gradient descent, resulting in low exploration and high exploitation. If it is low, the motion will go on essentially un-damped, will not come to rest, and will thereby explore all of the accessible parameter space. Exploration is high, and exploitation is low in this setting.
That parameter settings can be modified over time or controlled adaptively as part of the optimization algorithm is a familiar thought. The physics based intuition of port Hamiltonian systems can be helpful in the design of such adaptive strategies. Here we suggest a simple, event based adaptive parameter selection strategy that starts the optimization in an exploratory phase with low friction and turns over to exploitation by 'heavy breaking', once the potential energy (i.e. the loss function) is sufficiently reduced. Sufficiency is pre-defined as the minimum reduction goal of the optimization, which can be set, e.g., as the reduction of the loss obtained in previous trials.
In this paper, we show that the proposed strategy actually works for some classical examples in deep learning and improves the optimization loss and also the test accuracy for standard LeNet-5 [1] based architectures on two well known academic classification tasks solved by deep learning, namely the CIFAR10 [2] and the FashionMNIST [3] data-sets.
In order to focus on the optimization only, we do not employ data augmentation or pre-training and thereby do not achieve SOTA performance in our experiments. We however consistently achieve an advantage over the widely used stochastic gradient descent as a benchmark. We also observe consistent gains in performance after 'heavy breaking' is finally triggered.
Our paper is organized as follows: in Section 2 we give an overview over related work and in Section 3 we present the port Hamiltonian view on gradient based optimization with momentum and energy dissipation. Our experimental setup as well as our results are documented in Section 4. In the final Section 5 we present our conclusions and give an outlook to future research.
## 2 Related Work
The fact that neural networks with parameter counts ranging from some tens of thousands to several hundreds of billions can actually be trained largely depends on the cheap computation of gradients, see [4, 5] for original work and [6] for a recent reference. Gradient based optimization itself has been studied since the days of Newton, see e.g. [7, 8]. In the context of deep learning, the formation of randomly sub-sampled mini-batches is necessary as big data often exceeds the working memory available [9]. One therefore has to pass over to the stochastic gradient descent method (SGD) [10, 11].
One of the problems in neural network training is the complex, non convex structure of the energy landscapes [12]. This makes it necessary to avoid local minima, which is mostly done by the momentum method [13, 14, 15]. From a theoretical side, momentum can be understood as a discretized version of a second order ordinary differential equation, which also provides theoretical insight into convergence to critical points [16, 17, 18], see also [19, 20, 21] for recent extensions.
The momentum method has recently been cast in a modern port Hamiltonian language [22, 23, 24]. Port Hamiltonian systems [25] are particularly suited to understand the long-time behaviour and hence the convergence properties of momentum based methods.
For a long time, the control of hyperparameters in the training of neural networks has been a topic of interest in the deep learning community [26]. While learning rate schedules [27, 28] determine the setting for one specific parameter upfront, it has also been proposed to modify the dissipation parameter in momentum based optimization [17, 29, 30]. Other strategies, like the much used ADAM algorithm, rely on adaptive parameter control [31, 32].
One specific adaptive strategy however much less considered is the goal oriented search, where one pre-defines the target value to achieve during optimization, see e.g. [33].
In our work, we thus make the following contributions:
* For the first time, we use the port Hamiltonian language in the training of reasonably _deep_ neural networks in contrast to [22, 23] where networks are shallow.
* We also introduce an adaptive, goal oriented strategy for the control of the friction constant, which goes in the opposite direction as [17, 29, 30] but is well-motivated in terms of combining exploration and exploitation in one algorithm.
* We show experimentally for standard deep learning problems in image recognition that this strategy consistently produces improvements over fixed-parameter strategies. We also provide a considerable amount of ablation studies related to our parameter settings.
## 3 The Goal Oriented PHS Method
The simple gradient descent algorithm to minimize a differentiable loss function \(\mathscr{L}(\theta)\), namely \(\theta_{k+1}=\theta_{k}-\alpha\nabla_{\theta}\mathscr{L}(\theta_{k})\) can be seen as a first order Euler discretization of the gradient flow
\[\dot{\theta}(t)=-\nabla_{\theta}\mathscr{L}(\theta),\ \ \theta(0)=\theta_{0}. \tag{1}\]
It is well known that under adequate conditions on \(\mathscr{L}(\theta)\), the flow \(\theta(t)\) converges for \(t\to\infty\) to a critical point \(\theta^{*}\) with \(\nabla_{\theta}\mathscr{L}(\theta^{*})=0\), see e.g. [22, 23]. Likewise, the gradient descent algorithm converges for \(k\to\infty\) to a critical point, provided the step length \(\alpha\) is suitably controlled, confer [16, 17].
As mentioned in the introduction, the problem with gradient descent in the context of highly non-convex loss functions \(\mathscr{L}(\theta)\), as especially in the context of the training of deep neural networks [6], lies in the fact that gradient flows and gradient descent algorithms get stuck in local minima.
To overcome the strict locality of gradient flow and gradient descent, momentum based methods have been introduced. The update rule of gradient descent is changed to
\[\begin{split}\theta_{k+1}&=\theta_{k}+\alpha\frac{1}{m}p_{k}\\ p_{k+1}&=p_{k}-\alpha\frac{\gamma}{m}p_{k}-\alpha\nabla_{\theta}\mathscr{L}(\theta_{k})\end{split}\tag{2}\]
where \(m,\gamma>0\) are parameters called mass and friction coefficient. \(p_{k}\) is the so-called momentum at iteration \(k\). In fact, (2) can be understood as the discretized version of the following Hamiltonian set of equations
\[\begin{split}\dot{\theta}(t)&=\frac{1}{m}p(t)\\ \dot{p}(t)&=-\frac{\gamma}{m}p(t)-\nabla_{\theta}\mathscr{L}(\theta(t))\end{split}\tag{3}\]
with initial conditions \(\theta(0)=\theta_{0}\) and \(p(0)=p_{0}\).
To understand the global properties of the Hamiltonian dynamics, it is convenient to define a state variable \(x(t)=\begin{pmatrix}\theta(t)\\ p(t)\end{pmatrix}\), the Hamiltonian function \(H(x)=\frac{\|p\|^{2}}{2m}+\mathscr{L}(\theta)\) and the symplectic matrix \(J=\left(\begin{array}{cc}0&-1\\ 1&0\end{array}\right)\), as well as a symmetric, positive semi-definite resistive matrix \(R=\left(\begin{array}{cc}0&0\\ 0&\frac{\gamma}{m}\end{array}\right)\), so that we can rewrite (3) in the compact, port-Hamiltonian form
\[\dot{x}(t)=\left(J-R\right)\nabla_{x}H(x). \tag{4}\]
Using the chain-rule, (4) and \(\nabla_{x}H(x(\tau))^{\top}J\nabla_{x}H(x(\tau))=0\) by the skew-symmetry of \(J\), it is now easy to see that the following energy balance holds for the dissipated total 'energy' measured by \(H(x)\), where \(\frac{\|p\|^{2}}{2m}\) takes the role of kinetic energy and the loss \(\mathscr{L}(\theta)\) the role of potential energy
\[H(x(t))-H(x(0))=-\int_{0}^{t}\nabla_{x}H(x(\tau))^{\top}R\nabla_{x}H(x(\tau)) \,\mathrm{d}\tau. \tag{5}\]
From this exposition it is intuitive, and in fact can be proven mathematically [16, 17], that due to dissipation the state \(x(t)\) ultimately has to come to rest, if \(\mathscr{L}(\theta)\) is bounded from below. Thus, if the stationary points \(x^{*}\) with \(\nabla_{x}H(x^{*})=0\) of the system are isolated, \(x(t)\) will asymptotically converge to a stationary point. Furthermore, for \(x^{*}=\left(\begin{smallmatrix}\theta^{*}\\ p^{*}\end{smallmatrix}\right)\), we find \(p^{*}=0\) and \(\nabla_{\theta}\mathscr{L}(\theta^{*})=0\), hence the \(\theta\)-components of stationary points are in one to one correspondence with the critical points of the original optimization problem.
Energy dissipation (5) thus is the key component that determines how fast \(x(t)\) comes to rest, which conceptually corresponds to convergence of the optimization algorithm. Apparently, the matrix \(R\) and thus the friction coefficient \(\gamma\) controls dissipation.
In fact, if \(\gamma\approx 0\), essentially no energy is lost and the dynamics \(x(t)\) will either move on for a very long time, or, in very rare cases, come to rest at a local maximum or saddle point. This perpetual motion through the accessible part of the 'phase space' can be seen as an explorative strategy.
In contrast, if \(\gamma\) gets large, the friction essentially disperses energy and momentum, and the motion of \(x(t)\) becomes highly viscous, i.e. essentially determined by the equality
\[-\frac{\gamma}{m}p(t)-\nabla_{\theta}\mathscr{L}(\theta)\approx 0 \Leftrightarrow \dot{\theta}(t)\approx-\frac{1}{\gamma}\nabla_{\theta}\mathscr{L}(\theta), \tag{6}\]
from which we see that in this high viscosity regime the port Hamiltonian flow essentially behaves like gradient descent (with a modified step length). Despite working with momentum, we are thus back in the exploitation phase of local minima.
The idea of this article is to use this physics based intuition to efficiently control the behavior of our port Hamiltonian optimization strategy in a goal oriented search. We thus propose to 'keep on moving' as long as we have not yet reached a predefined reduction of the initial loss function \(\mathscr{L}(\theta_{0})\). In many cases, it is known that \(\mathscr{L}(\theta)\) is lower bounded by zero, and we can thus demand a \(90\%\), \(95\%\), \(\ldots\) reduction in \(\mathscr{L}(\theta(t))\), before we, upon reaching this target, instantaneously increase the value of \(\gamma\) in order to switch over from the low-viscous exploration phase to high-viscous exploitation. In this sense, our proposed optimization algorithm resembles the 'chicken game': who breaks too early, looses.
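A minimal sketch of this event-based control (our own PyTorch implementation of the update rule (2); the default target fraction and braking factor are placeholders chosen from the ranges used in Section 4, where a 90% loss-reduction goal corresponds to `target_fraction=0.1` and the friction is increased by a factor between 5 and 99):

```python
import torch

class GoalOrientedPHS:
    """Momentum update (2) with an event-based switch: once the loss falls below
    `target_fraction` of its initial value, gamma is multiplied by `brake_factor`,
    moving from low-viscous exploration to high-viscous exploitation."""

    def __init__(self, params, lr=0.01, mass=1.0, gamma=0.1,
                 target_fraction=0.1, brake_factor=50.0):
        self.params = list(params)
        self.lr, self.mass, self.gamma = lr, mass, gamma
        self.target_fraction, self.brake_factor = target_fraction, brake_factor
        self.momenta = [torch.zeros_like(p) for p in self.params]
        self.initial_loss, self.braked = None, False

    @torch.no_grad()
    def step(self, loss_value):
        # Call after loss.backward(); `loss_value` is the current (mini-batch) loss.
        if self.initial_loss is None:
            self.initial_loss = loss_value
        if not self.braked and loss_value <= self.target_fraction * self.initial_loss:
            self.gamma *= self.brake_factor   # the event: 'heavy breaking'
            self.braked = True
        for p, mom in zip(self.params, self.momenta):
            p.add_(mom, alpha=self.lr / self.mass)            # theta update
            mom.mul_(1.0 - self.lr * self.gamma / self.mass)  # friction damping
            mom.sub_(p.grad, alpha=self.lr)                   # gradient kick
            # (for a stable discretization, lr * gamma / mass should stay below 1)
```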
Before we come to the implementation and numerical tests of this strategy in deep learning, we discuss some peculiarities of the loss function in this case. We would like to learn a conditional probability density \(p(y|x,\theta)\) from data independently sampled from the same distribution \(\{(y_{i},x_{i})\}_{i=1}^{n}\), where \(x_{i}\) is some input and \(y_{i}\) takes values in some prescribed label space \(\mathscr{C}=\{c_{1},\ldots,c_{q}\}\). In applications in image recognition, \(p(y|x,\theta)\) often consists of several stacked convolutional and fully connected layers and an ultimate softmax layer, cf. [6]. The 'cross entropy'/negative log likelihood loss is given by
\[\mathscr{L}(\theta)=-\frac{1}{n}\sum_{i=1}^{n}\log p(y_{i}|x_{i},\theta). \tag{7}\]
The numerical problem in implementing (7) directly lies in the memory constraints that do not permit loading the entire data set \(\{(y_{i},x_{i})\}_{i=1}^{n}\) into the working memory. Therefore, mini batches \(B_{j}\), i.e. small random subsets of \(\{1,\ldots,n\}\), are drawn and an update step of the parameters \(\theta_{k}\) and the associated momentum is executed for a loss \(\mathscr{L}_{B_{j}}(\theta)\) with the original data set replaced by \(\{(y_{i},x_{i})\}_{i\in B_{j}}\). Nevertheless, as in image classification the batch size \(|B_{j}|\) is often quite large (\(\gtrapprox 10\)), \(\mathscr{L}_{B_{j}}(\theta)\) and \(\mathscr{L}(\theta)\) tend to behave similarly by the law of large numbers. In our numerical experiments, we therefore observe the behavior of the algorithm in accordance with intuition.
## 4 Experiments and results
For our experiments, we use a Convolutional Neural Net (CNN) similar to LeNet-5 [1], which consists of two convolutional, one pooling and two fully connected layers, as shown in Figure 2, and has a total of 44,426 weights. For the implementation we use the PyTorch framework [34]. This network is chosen as it is a widely used standard architecture, although it cannot compete with more sophisticated ResNet [35] or Transformer [36] architectures. Furthermore, in order to focus on training exclusively, the networks are trained from scratch on the data sets and we use neither pre-training nor augmentation. The training is performed with respect to the usual cross-entropy loss without regularization.

Figure 1: Selecting the hyperparameters learning rate (here: \(\alpha=0.1\)), mass and friction based on the accuracy on the Fashion-MNIST dataset
On the hardware side, we use a workstation with an Intel(R) Core(TM) i7-6850K 3.6GHz and two NVIDIA Titan Xp graphics units with 12GB VRAM each for our experiments.
For a comparison with SGD and PHS, i.e. the traditional momentum method, we test our goal oriented PHS search on the two data sets CIFAR10 and FashionMNIST introduced above. We furthermore run trainings for a number of different learning rates \(\alpha\) and for several settings of the mass and baseline friction parameters. To establish which parameter settings are rewarding, we consider the accuracies that the PHS can achieve for different learning rates (\(0.0001\leq\alpha\leq 0.1\)) when mass and friction are included. This is shown in Figure 1 for the example of \(\alpha=0.1\) on the Fashion-MNIST dataset. As one can see, the trainings for many parameter settings work significantly worse or not at all. Therefore, only experiments that lie in a parameter range leading to reasonable results are included in our result tables. Concerning goal orientation, we aim at a reduction of the initial loss of 65% to 90% and then increase the friction significantly, by a factor between 5 and 99. The results are given in Table 1 for CIFAR10 and Table 2 for FashionMNIST.
The gains in test accuracy over the SGD baseline are around, and in many cases above, 0.5% throughout parameter settings and the two data sets employed, as documented in Table 1 for CIFAR10 and Table 2 for FashionMNIST.
The history of the test accuracy over the iteration count of the optimization procedure is shown in Figure 3 for two example configurations of each dataset. As we observe, the sudden 'breaking' exploits a local minimum better and avoids overfitting (as can be seen especially in Figure 3(a)), i.e. the decline in test accuracy of the ordinary PHS method as the optimization proceeds. Interestingly, this hints that overfitting is rather a 'global' phenomenon associated with ongoing exploration, whereas exploitation of the local minimum seems less beset by overfitting issues. This is consistent with our observation that the training loss after 'breaking' quickly converges, whereas the training loss for SGD or PHS is further reduced. This suggests that the onset of overfitting could also be a useful triggering event for 'breaking', instead of the goal orientation employed here.
## 5 Discussion and Outlook
In our paper, we have introduced a new goal oriented strategy for the training of deep neural networks. Through the physics-motivated interpretation of momentum in a port Hamiltonian framework, we explained how different settings of the friction / dissipation parameter correspond to an exploration or exploitation phase in the progress of optimization. By switching from exploration to exploitation when a certain minimal reduction of the loss function of a deep neural network is achieved, we obtain improved accuracy of image classification networks as compared with simple stochastic gradient descent or a momentum based optimization with fixed friction.
The outlined strategy can be extended in several ways. First, in the case where the minimal reduction is not achieved for a long time, the exploitation phase could be executed nevertheless, starting from the best parameter setting found so far, or the target could be adjusted. This will robustify our algorithm. Second, after a first exploitation phase, a re-acceleration could be executed, e.g. by an external force or 'port', so that multiple promising local minima can be visited.
**Acknowledgements:** The authors thank Onur T. Doganay, Kathrin Klamroth, Matthias Rottmann and Claudia Totzeck for interesting discussions. This work is partially funded by the German Federal Ministry for Economic Affairs and Climate Action, within the project "KI Delta Learning", grant no. 19A19013Q.

Figure 3: History of the accuracies over the epochs depending on the choosable hyperparameters learning rate \(\alpha\), friction and mass. PHS in orange, goal-oriented approach in blue.
|
2305.10625 | Measuring and Mitigating Local Instability in Deep Neural Networks | Deep Neural Networks (DNNs) are becoming integral components of real world
services relied upon by millions of users. Unfortunately, architects of these
systems can find it difficult to ensure reliable performance as irrelevant
details like random initialization can unexpectedly change the outputs of a
trained system with potentially disastrous consequences. We formulate the model
stability problem by studying how the predictions of a model change, even when
it is retrained on the same data, as a consequence of stochasticity in the
training process. For Natural Language Understanding (NLU) tasks, we find
instability in predictions for a significant fraction of queries. We formulate
principled metrics, like per-sample ``label entropy'' across training runs or
within a single training run, to quantify this phenomenon. Intriguingly, we
find that unstable predictions do not appear at random, but rather appear to be
clustered in data-specific ways. We study data-agnostic regularization methods
to improve stability and propose new data-centric methods that exploit our
local stability estimates. We find that our localized data-specific mitigation
strategy dramatically outperforms data-agnostic methods, and comes within 90%
of the gold standard, achieved by ensembling, at a fraction of the
computational cost | Arghya Datta, Subhrangshu Nandi, Jingcheng Xu, Greg Ver Steeg, He Xie, Anoop Kumar, Aram Galstyan | 2023-05-18T00:34:15Z | http://arxiv.org/abs/2305.10625v2 | # Measuring and Mitigating Local Instability in Deep Neural Networks
###### Abstract
Deep Neural Networks (DNNs) are becoming integral components of real world services relied upon by millions of users. Unfortunately, architects of these systems can find it difficult to ensure reliable performance as irrelevant details like random initialization can unexpectedly change the outputs of a trained system with potentially disastrous consequences. We formulate the model stability problem by studying how the predictions of a model change, even when it is retrained on the same data, as a consequence of stochasticity in the training process. For Natural Language Understanding (NLU) tasks, we find instability in predictions for a significant fraction of queries. We formulate principled metrics, like per-sample "label entropy" across training runs or within a single training run, to quantify this phenomenon. Intriguingly, we find that unstable predictions do not appear at random, but rather appear to be clustered in data-specific ways. We study data-agnostic regularization methods to improve stability and propose new data-centric methods that exploit our local stability estimates. We find that our localized data-specific mitigation strategy dramatically outperforms data-agnostic methods, and comes within 90% of the gold standard, achieved by ensembling, at a fraction of the computational cost.
## 1 Introduction
When training large deep neural networks on the same data and hyperparameters can lead to many distinct solutions with similar loss, we say the model is _underspecified_ (D'Amour et al., 2022). One tangible manifestation of underspecification is that a model prediction on a single data point can change across different training runs, without any change in the training data or hyperparameter settings, due to stochasticity in the training procedure. This extreme sensitivity of model output, which has been termed _model variance/instability_ or _model jitter/churn_ (Hidey et al., 2022; Milani Fard et al., 2016), is highly undesirable as it prohibits comparing models across different experiments (Dodge et al., 2019). We refer to this problem as _local instability_1, a term that highlights our focus on the non-uniformity of instability across data points. Local instability can lead to highly undesirable consequences for deployed industrial systems, as it can cause inconsistent model behavior across time, eroding trust in AI systems (Dodge et al., 2020; D'Amour et al., 2020). The problem is further exacerbated by the fact that industry models are typically more complex and trained on diverse datasets with a potentially higher proportion of noise.
Footnote 1: We use _local instability_ to mean _local model instability_
Table 1 shows examples of local instability for a domain classification problem, where we used a pre-trained language model DistilBERT (Sanh et al., 2019) to train 50 independent classifiers
| **Utterance (gold label)** | \(\hat{p}_{[min-max]}\), \(\sigma_{m}\) | **Label predictions over 50 runs** |
| --- | --- | --- |
| funny joke (general) | [0.98-0.99], 0.003 (low) | general: 50 |
| start house cleanup (IOT) | [0.002-0.97], 0.17 (high) | lists: 26, IOT: 6, general: 6, play: 5, news: 3, social: 1, calendar: 1 |
| search for gluten free menus (lists) | [0.002-0.693], 0.06 (low) | lists: 28, takeaway: 18, social: 1, music: 1, cooking: 1, play: 1 |

Table 1: Utterances from Massive data show different predictions over 50 model runs with different seeds. \(\hat{p}\) is the prediction score on the gold label and \(\sigma_{m}\) is the standard deviation over the _m_ultiple model outputs \(\hat{p_{1}},\ldots,\hat{p_{50}}\). For example, _start house cleanup_ with gold label _IOT_ is predicted to label _lists_ in 26 out of the 50 model runs. Its prediction score on _IOT_ ranges between 0.002 and 0.97. Green: low variability, predictions match the gold label; red: high predicted label switching.
(with random initial conditions) on the Massive dataset FitzGerald et al. (2022). It shows that a validation set utterance _start house cleanup_ with gold label _IOT_ gets assigned seven different predicted labels over the 50 runs, with the predicted confidence on the gold label \(\hat{p}\) ranging between 0.002 and 0.97, and a high \(\sigma_{m}\) (the standard deviation of \(\{\hat{p_{i}}\}_{i=1}^{50}\)) of 0.17. In comparison, _search for gluten free menus_ gets 6 different predicted labels over 50 runs, with a relatively low \(\sigma_{m}\) of 0.06. The differences in stability across examples demonstrate that the phenomenon is localized to certain data points. See Figures 4 and 5 in the Appendix. The examples in Table 1 also highlight that variability in confidence is not perfectly aligned with stability of predictions.
**Measuring Local Model Instability.** While detecting and quantifying local instability across multiple runs is trivial for toy problems, it becomes infeasible with much larger industrial datasets. Swayamdipta et al. (2020) suggested using single-run training dynamics to estimate the variance in prediction scores over multiple epochs. However, as shown in Table 1, low prediction variance does not always lead to less label switching, which is the defining feature of local instability. Instead, here we introduce _label switching entropy_ as a new metric for characterizing local instability. Furthermore, we demonstrate that label switching entropy calculated over the training epochs of a single run is a good proxy for label switching over multiple runs, so that data points with high prediction instability over time also exhibit high instability across training runs.
**Mitigating Local Model Instability.** One straightforward strategy for mitigating local instability is to train an ensemble of \(n\) models and average their weights or their predictions. Unfortunately, ensembling neural networks such as large language models is often computationally infeasible in practice, as it requires multiplying both the training cost and the test time inference cost by a factor of \(n\). Therefore, we propose and compare more economical options for mitigating local instability.
Here we propose a more efficient smoothing-based approach where we train just two models. The first (teacher) model is trained using the one-hot encoded gold labels as the target. Once the model has converged and is no longer in the transient learning regime (after \(N\) training or optimization steps), we compute the temporal average of the predicted probability vector over \(K\) classes after each optimization step, which is then adjusted by temperature \(T\) to obtain the smoothed predicted probability vector. A student model is then trained using these "soft" labels instead of the one-hot encoded gold labels. We call this Temporal Guided Temperature Scaled Smoothing (TGTSS). TGTSS allows local mitigation of local instability, as each datapoint is trained towards its own unique label in the student model. In contrast to existing methods such as stochastic weight averaging Izmailov et al. (2018) or regularizing options such as adding an L2 penalty, TGTSS significantly outperforms these methods and reaches within 90% of the gold standard of ensemble averaging.
We summarize our contributions as follows:
* We propose a new measure of local instability that is computationally efficient and descriptive of actual prediction changes.
* We introduce a data-centric strategy to mitigate local instability by leveraging temporally guided label smoothing.
* We conduct extensive experiments with two public datasets and demonstrate the effectiveness of the proposed mitigation strategy compared to existing baselines.
## 2 Related work
Sophisticated, real-world applications of Deep Neural Networks (DNNs) introduce challenges that require going beyond a myopic focus on accuracy. Uncertainty estimation is increasingly important for deciding when a DNN's prediction should be trusted, by designing calibrated confidence measures that may even account for differences between training and test data Nado et al. (2021). Progress on uncertainty estimation is largely orthogonal to another critical goal for many engineered systems: _consistency_ and _reliability_. Will a system that works for a particular task today continue to work in the same way tomorrow? One reason for inconsistent performance in real-world systems is that even if a system is re-trained with the same data, predictions may significantly change, a phenomenon that has been called model _churn_ Milani Fard et al. (2016). The reason for this variability is that neural networks are under-specified D'Amour et al. (2020), in the sense that there are many different neural networks that
have nearly equivalent average performance for the target task. While randomness could be trivially removed by fixing seeds, in practice tiny changes to data will still significantly alter stochasticity and results. We will explore the case of altering training data in future studies. Studying how stochasticity affects model churn addresses a key obstacle in re-training engineered systems while maintaining consistency with previous results.
The most common thread for reducing model churn focuses on adding constraints to a system so that predictions for re-trained system match some reference model. This can be accomplished by adding hard constraints Cotter et al. (2019) or distillation Milani Fard et al. (2016); Jiang et al. (2021); Bhojanapalli et al. (2021).
We adopt a subtly different goal which is to train at the outset in a way that reduces variability in predictions due to stochasticity in training. Hidey et al. (2022) suggest a co-distillation procedure to achieve this. Label smoothing, which reduces over-confidence Muller et al. (2019), has also been suggested to reduce variance, with a local smoothing approach to reduce model churn appearing in Bahri and Jiang (2021).
A distinctive feature of our approach is a focus on how properties of the data lead to instability. Inspired by dataset cartography Swayamdipta et al. (2020) which explored variance in predictions over time during training of a single model, we investigate how different data points vary in predictions across training runs. Non-trivial patterns emerge, and we use sample-specific instability to motivate a new approach to reducing model churn.
Our work draws connections between model stability and recent tractable approximations for Bayesian learning Izmailov et al. (2018); Maddox et al. (2019). Recent Bayesian learning work focuses on the benefits of Bayesian model ensembling for confidence calibration, but an optimal Bayesian ensemble would also be stable. Bayesian approximations exploit the fact that SGD training dynamics approximate MCMC sampling, and therefore samples of models over a single training run can approximate samples of models across training runs, although not perfectly Fort et al. (2019); Wenzel et al. (2020); Izmailov et al. (2021). We study connections between prediction variability within a training run and across training runs, and use this connection to devise practical metrics and mitigation strategies.
Similar to BANNs Furlanello et al. (2018), our teacher and corresponding student models use the same model architecture with the same number of parameters, rather than using a high-capacity teacher model; however, unlike BANNs, our work is geared towards addressing model instability. Architecturally, our methodology (TGTSS) uses a temperature scaled, temporally smoothed vector that is obtained from the last \(N\) checkpoints of the teacher model instead of the finalized teacher model, and does not use the annotated labels for the utterances.
## 3 Model instability measurement
The examples in Table 1 show that re-training a model with different random seeds can lead to wildly different predictions. The variance of predictions across models, \(\sigma_{m}^{2}\), is intuitive, but is expensive to compute and does not necessarily align with user experience since changes in confidence may not change predictions. A changed prediction, on the other hand, may break functionality that users had come to rely on. Hence we want to include a metric which measures how often predictions change.
Therefore, we propose to study the label switching entropy. Given a setup with training data \(\{x_{i},y_{i}\}\in X\) where \(X\) are utterances, \(y\in\{1,...,K\}\) are the corresponding gold labels, the multi-run Label Entropy (\(LE_{m}\)) over \(N\) independent runs for an utterance \(x_{i}\) can be computed as,
\[LE_{m}^{(i)}=\sum_{k=1}^{K}-\frac{n_{k}^{(i)}}{N}\log(\frac{n_{k}^{(i)}}{N}) \tag{1}\]
where \(n_{k}\) is the number of times utterance \(i\) was predicted to be in class \(k\) across \(N\) models trained with different random seeds. For example, if an utterance gets labeled to three classes A, B and C for 90%, 5% and 5% of the time respectively, then its multi-run label entropy (\(LE_{m}^{(i)}\)) will be \(-(0.9\log 0.9+0.05\log 0.05+0.05\log 0.05)=0.39\). Similarly, an utterance that is consistently predicted to belong to one class over \(N\) runs will have a \(LE_{m}^{(i)}\) of 0 (even if it is consistently put in the _wrong_ class). We can compute the overall \(LE_{m}\) by averaging \(LE_{m}^{(i)}\) over all the utterances. Empirically, we also observe a relatively strong linear relationship between \(LE_{m}\) and \(\sigma_{m}\) (Figure 1).
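A minimal sketch of the per-utterance computation in Equation (1), taking the list of predicted labels collected across the \(N\) runs:

```python
import numpy as np
from collections import Counter

def label_entropy(predicted_labels):
    # Entropy of the empirical distribution of predicted labels for one utterance.
    N = len(predicted_labels)
    p = np.array(list(Counter(predicted_labels).values())) / N
    return float(-(p * np.log(p)).sum())

print(label_entropy(["A"] * 90 + ["B"] * 5 + ["C"] * 5))   # ~0.39, the example above
print(label_entropy(["A"] * 50))                           # 0.0, perfectly stable
```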
Since computing \(LE_{m}\) is computationally expensive, due to training \(N\) independent models, we propose using _single-run Label Entropy_ (\(LE_{s}\)), which can be computed over a single model run. Mathematically, the formula for label entropy stays the same for the multi-run and single-run settings; however, \(LE_{s}\) is computed across different model checkpoints. In our analyses, we computed \(LE_{s}\) by accumulating the predicted class after each optimization step, whereas \(LE_{m}\) was computed by accumulating the final predicted class across \(N\) models on the validation set.
Empirically, we found that there exists a strong linear relationship between \(LE_{s}\) and \(LE_{m}\) (Figure 2). This demonstrates that utterances that suffer from local instability across multiple independent runs exhibit similar instability across multiple optimization steps for a single model. This finding supports our hypothesis that \(LE_{s}\) is a suitable proxy for \(LE_{m}\) in real world production settings for NLU systems.
## 4 Model instability mitigation
In our study, we have explored 3 baseline mitigation strategies to address model instability: ensembling, stochastic weight averaging (SWA) and uniform label smoothing. These methodologies have been used in numerous other works to improve generalization as well as predictive accuracy across a diverse range of applications. The performance of the ensembling strategy serves as our upper bound in reducing model instability. We propose a novel model instability mitigation strategy, temporal guided temperature scaled label smoothing, that recovers 90% of the reduction in model instability achieved by ensembling, at a fraction of the training time and computational cost. We describe all the mitigation strategies below.
### Ensemble averaging and regularizing
In this setting, we trained \(N\) independent models, initialized with different random seeds, using the standard cross-entropy loss, computed between the ground truth labels and the predicted probability vector. For every utterance in the test set, we recorded the mean predicted probability of the gold label, the predicted label and our proposed local instability metric, label entropy, across \(N\) models. We also trained another baseline by leveraging \(L2\) regularization. No other mitigation strategies were used in the process since our aim was to emulate the current model training scenario in natural language understanding(NLU) production settings.
### Stochastic Weight Averaging
Stochastic weight averaging (SWA) Izmailov et al. (2018) is a simple yet effective model training methodology that improves generalization performance in deep learning networks. SWA performs a uniform average of the weights traversed by stochastic gradient descent based optimization algorithms with a modified learning rate. In our implementation, we equally averaged the weights at the end of the last two training epochs. We also explored equal averaging of weights from two randomly selected epochs out of the final 3 epochs, but that strategy did not yield better results. We left the work of using a modified learning rate to a future study with a significantly larger training dataset.
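A minimal sketch of the equal checkpoint averaging used for this baseline (assuming floating-point parameters throughout; the file names are placeholders):

```python
import torch

def average_checkpoints(state_a, state_b):
    # Equal average of two model checkpoints, e.g. the last two training epochs.
    return {key: 0.5 * (state_a[key] + state_b[key]) for key in state_a}

# averaged = average_checkpoints(torch.load("epoch_final.pt"),
#                                torch.load("epoch_prev.pt"))
# model.load_state_dict(averaged)
```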
Figure 1: \(LE_{m}\) vs \(\sigma_{m}\) for Massive dataset shows a strong linear relationship. Each data point is an utterance with \(LE_{m}^{(i)}\) vs \(\sigma_{m}^{(i)}\) values.
Figure 2: \(LE_{s}\) vs \(LE_{m}\) for Massive dataset shows a strong linear relationship. Each data point is an utterance with \(LE_{s}^{(i)}\) vs \(LE_{m}^{(i)}\) values. Zero entropy corresponds to utterances with confidence scores close to 1 for a class with very low variability.
### Label smoothing
Label smoothing [23] is a popular technique to improve performance, robustness and calibration in deep learning models. Instead of using "hard" one-hot labels when computing the cross-entropy loss with the model predictions, label smoothing introduces "soft" labels that are essentially a weighted mixture of one-hot labels with the uniform distribution. For utterances \(\{x_{i},y_{i}\}\) where \(y\in\{1,...,K\}\) for \(K\) classes, the new "soft" label is given by \(y^{LS}=(1-\alpha)*y+\alpha/K\) where \(\alpha\) is the label smoothing parameter. The "soft" labels are then used in the softmax cross-entropy loss.
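A minimal PyTorch sketch of this loss (names ours), directly implementing \(y^{LS}=(1-\alpha)*y+\alpha/K\):

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, alpha, num_classes):
    """Cross-entropy against soft labels y_LS = (1 - alpha) * one_hot + alpha / K."""
    one_hot = F.one_hot(targets, num_classes).float()
    soft = (1.0 - alpha) * one_hot + alpha / num_classes
    return -(soft * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```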
### Ensemble baseline
To obtain consistent predictions with low local instability, ensembling is often utilized as the default mitigation strategy. Given a problem setup with training data \(\{x_{i},y_{i}\}\), where \(x_{i}\in X\) are utterances and \(y\in\{1,...,K\}\) are the corresponding gold labels, ensembling over \(N\) independent models, where \(N\) is sufficiently large, will intuitively converge to the average predicted probability by the law of large numbers. Hence, using a sufficiently large ensemble of independently trained models would give stable predictions in general.
In our study, we used ensembling to aggregate (uniformly average) predictions for each utterance across \(N\) independently trained models. Each model was trained using the softmax cross-entropy loss between the predicted logits \(z_{i}\) over \(K\) classes and the one-hot encoded vector representing the gold label. For an utterance \(x_{i}\), the uniform average predicted probability vector \(\bar{p}_{i}\) across \(N\) models over all \(K\) classes (a softmax probability vector of length \(K\)) is adjusted by a temperature \(T\) to obtain the smoothed predicted probability vector \(q_{i}\):
\[q_{i}=\frac{\bar{p}_{i}^{\,T}}{\sum_{k=1}^{K}\bar{p}_{i,k}^{\,T}} \tag{2}\]
The temperature \(T\) can be used to control the entropy of the distribution. The smoothed probability vector \(q\) is then used as the "soft" label to train a model instead of the "hard" one-hot encoded gold labels, and the resultant model is robust to local instability. One challenge for ensembling is that it requires training, storing, and running inference on a large number of models, which is often infeasible for large-scale NLU systems.
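A minimal NumPy sketch of Eq. (2), using the \(T=0.5\) chosen in our experiments:

```python
import numpy as np

def temperature_scale(p_bar, T=0.5):
    """Eq. (2): exponentiate the averaged probabilities by T and renormalize;
    with T < 1 the distribution is flattened (smoothed)."""
    scaled = p_bar ** T
    return scaled / scaled.sum(axis=-1, keepdims=True)

p_bar = np.array([0.90, 0.05, 0.05])    # uniform average over N models
print(temperature_scale(p_bar))         # -> approx. [0.68, 0.16, 0.16]
```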
### Temporal guided temperature scaled smoothing (TGTSS)
Since ensembling is infeasible for large models in practice, we propose temporal guided label smoothing that does not require training large ensembles to compute the soft labels.
In this setup, we train a pair of models as opposed to training a large ensemble of models. The first (teacher) model is trained using the one-hot encoded gold labels as the target. Once the model has converged and is no longer in the transient training state (after \(N\) training or optimization steps), we compute the uniform average predicted probability vector (\(\bar{p_{i}}\)) after each optimization step of the model, which is then adjusted by temperature \(T\) to obtain the smoothed predicted probability vector \(q_{i}\) using (2). A suitable \(N\) can be chosen by looking at the cross-entropy loss curve for the validation dataset. The second (student) model is now trained using \(q_{i}\) as the "soft" label instead of the one-hot encoded gold labels.
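The sketch below renders the teacher side of this procedure in PyTorch-style code (all identifiers are ours; running a full inference pass after every step follows the most literal reading of the text, and the loader producing the soft labels is assumed to iterate the utterances in a fixed, unshuffled order):

```python
import torch
import torch.nn.functional as F

def tgtss_soft_labels(teacher, optimizer, train_loader, eval_loader,
                      num_epochs, n_warmup, T=0.5):
    """Train the teacher with standard cross-entropy; after n_warmup
    optimization steps (past the transient state), accumulate a running
    average of its predicted probabilities on the training utterances,
    then temperature-scale the average into soft labels for the student."""
    p_sum, n_snapshots, step = None, 0, 0
    for _ in range(num_epochs):
        for x, y in train_loader:
            loss = F.cross_entropy(teacher(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            step += 1
            if step > n_warmup:
                with torch.no_grad():
                    probs = torch.cat([F.softmax(teacher(xb), dim=-1)
                                       for xb, _ in eval_loader])
                p_sum = probs if p_sum is None else p_sum + probs
                n_snapshots += 1
    p_bar = p_sum / n_snapshots
    q = p_bar ** T
    return q / q.sum(dim=-1, keepdim=True)   # soft labels, one row per utterance
```

The student is then trained against these soft labels with the same soft-target cross-entropy used for label smoothing above.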
The significant advantage of TGTSS over ensembling is that it does not require training, storing, or inferring over large ensembles. A key feature of TGTSS is that it uniformly averages predictions over numerous training steps instead of averaging predictions over numerous independent models. This saves the cost of training multiple models. Moreover, we never need to store multiple models for TGTSS since we can store a running average of the predictions over time. Finally, at inference time we only need to call a single model (the trained student model), as opposed to \(N\) models for the ensemble.
## 5 Experimental setup and results for mitigation
### Base model architecture
For all our experiments, we used DistilBERT [17] as the pre-trained language model. We used the implementation of _DistilBERT-base-uncased_ from the _Huggingface_ library by leveraging _AutoModelForSequenceClassification_. The pre-trained language model is then fine-tuned on the benchmark datasets by using the training set. DistilBERT is a widely used pre-trained language model that is currently used in production in many large-scale NLU systems. One key advantage of DistilBERT is that it recovers more than 90% of the performance of the larger _BERT-base-uncased_ model while using 40% fewer parameters on the
GLUE language understanding benchmark Wang et al. (2018). Using other BERT models as the pre-trained language model was outside the scope of this study.
### Datasets
To study local instability and compare different mitigation strategies, we used two open source benchmark datasets (Table 2): Massive and Clinc150.
* Massive: Massive (FitzGerald et al., 2022) is an open-source multilingual NLU dataset from the Amazon Alexa NLU system consisting of 1 million labeled utterances spanning 51 languages. For our experiments, we only used the _en-US_ utterances for the domain classification task across 18 domains (alarm, audio, general, music, recommendation, etc.).
* Clinc150 DialoGLUE: Clinc150 Larson et al. (2019) is an open source dataset from DialoGLUE Mehri et al. (2020), a conversational AI benchmark collection. We utilized Clinc150 for intent classification task across 150 intents (translate, transfer, time-zone, taxes, etc).
### Training and Evaluation Protocol
We compared the performance of our proposed mitigation strategy, _temporal guided temperature scaled smoothing_ (TGTSS), with the baseline mitigation strategies: ensemble averaging, L2 regularization, uniform label smoothing, and SWA. We trained 50 independent models with the same hyper-parameters for each mitigation strategy using different random initialization seeds. We reported the mean \(\pm\) std. dev domain classification accuracy for the Massive dataset and mean \(\pm\) std. dev intent classification accuracy for the Clinc150 dataset. For both datasets, we also reported the percentage reduction in \(LE_{m}\) when compared to the control baseline over 50 independent model runs for all the utterances as well as for high label entropy utterances whose label entropy was over 0.56 in the control baseline. For each method, we computed the sum of \(LE_{m}\) over all the \(N\) utterances in the test set as \(\sum_{i=1}^{N}LE_{m_{i}}\). The \(\Delta LE_{m}\) is then computed as the percentage reduction between these values for each method and the control baseline. We perform similar computations for \(\Delta LE_{s}\) in Table 4.
The \(LE_{m}\) value 0.56 for an utterance indicates that if the utterance was assigned to 2 different labels over 50 independent model runs, then its membership is split 75%-25% between the two labels. A lower value of label entropy indicates better model robustness and consequently, lower local instability. An utterance will have \(LE_{m}=0\) if it is consistently predicted to be the same label across 50 independent model runs. All the results for both the benchmark datasets have been reported on an unseen holdout set. A model having high overall accuracy and low label entropy is usually preferred.
#### 5.3.1 Hyper-parameters
In our empirical analyses, all the models across different mitigation strategies were trained using the ADAM optimizer Kingma and Ba (2014) with a learning rate of 0.0001. For both the benchmark datasets, all the models were trained for 5 epochs with a batch size of 256. For the control baseline with L2 regularization, we selected a weight decay value of 0.001. For the ensemble baseline, we selected \(N\) as 200, i.e., the pre-temperature-scaled "soft" labels were computed after uniformly averaging outputs from 200 independent models for each utterance in the training set. In the uniform label smoothing mitigation strategy, we used \(\alpha\) as 0.5 for the Clinc150 dataset and \(\alpha\) as 0.1 for the Massive dataset. For SWA, we equally averaged the model weights after the last 2 epochs. For experiments using temporal guided temperature scaled smoothing on the Clinc150 dataset, we used \(N\) as 200, whereas for the Massive dataset, we set \(N\) as 180. This indicates that model outputs after the first 200 training or optimization steps were recorded for the Clinc150 dataset and uniformly averaged for each utterance before temperature scaling. Similarly, for the Massive dataset, model outputs were
\begin{table}
\begin{tabular}{l|r|r} \hline \hline
**Attribute** & **MASSIVE** & **CLINC150** \\ \hline Source & Amazon & DialoGLUE \\ & Alexa AI & \\ \hline Domains & 18 & - \\ Intents & 60 & 150 \\ \hline Train & 11,514 & 15,000 \\ Holdout(Unseen) & 2974 & 3,000 \\ \hline Balanced? & No. & Yes. 100 \\ & & per intent \\ \hline Classification task & Domain & Intent \\ \hline \hline \end{tabular}
\end{table}
Table 2: Benchmark dataset statistics
recorded after 180 training steps. For both the ensemble guided and temporal guided temperature scaled smoothing mitigation strategies, we set the temperature \(T\) at 0.5.
### Results
We compared the proposed mitigation strategy with other baselines described in Section 4.1. We highlight the effectiveness of our proposed local instability metric, _label entropy_, in capturing local instability over 50 independent model runs as well as a single model run.
**Ensemble is the best mitigation strategy**
In our empirical analyses, we found that the ensemble baseline is the best-performing mitigation strategy in terms of both model accuracy and \(LE_{m}\) for both benchmark datasets (Table 3).
**TGTSS is comparable to ensembling at a fraction of the computational cost**
We found that TGTSS is able to recover about 91% of the performance of ensembling in the multi-run experiments. TGTSS trains only one teacher-student pair and drastically reduces the computational cost of ensembling. Hence, it is much more feasible to deploy TGTSS in production NLU systems. We also found that TGTSS is significantly better than model-centric local instability mitigation strategies such as SWA and L2 regularization.
However, as mentioned in Section 4.5, TGTSS computes "soft" labels across multiple optimization steps, which leads to multiple inference cycles. In our experiments, we ran inference after each optimization step once the model was no longer in the transient training state. It may be possible to further reduce the number of inference cycles by running inference only after every \(X\) optimization steps; this is left for future studies.
**Efficacy of single run label entropy (\(LE_{s}\)) as a local instability metric**
In Table 3, we demonstrated how TGTSS is able to reduce local instability in terms of our proposed metric \(LE_{m}\) over multiple independent runs of the model and recover 91% of the performance of ensembling. We propose \(LE_{s}\) as a more practical metric for local instability. We show that TGTSS is still able to recover more than 90% of the performance of ensembling for the Clinc150 and the Massive datasets (Table 4). For high \(LE_{m}\) utterances in the control baseline, TGTSS was able to considerably reduce \(LE_{s}\) (Appendix Table 6).
In Figure 3, we observe that TGTSS significantly reduces the variation in prediction scores compared to the control baseline. The top panels show utterances that are easy to learn, for which the classifier converges to the gold label within 2 epochs. The bottom panels show examples that exhibit high variation in prediction scores throughout the training process and, consequently, high \(LE_{s}\). After mitigation by TGTSS, the bottom-right panel shows a significant reduction in prediction score variation and \(LE_{s}\). Figure 8 in the Appendix shows more examples of the reduction in \(LE_{s}\) over the course of training.
**Global label smoothing is not as effective**
In our empirical analyses, we found that uniform label smoothing reduces local instability by 7-9% compared to the control baseline but falls short of ensembling and TGTSS. Label smoothing computes a weighted mixture of hard targets with the uniform distribution, whereas both ensembling and TGTSS use the model's average predictions over multiple runs and multiple optimization steps, respectively. Tuning the smoothing factor (\(\alpha\)) did not improve model stability in terms of label entropy.
**Importance of temperature scaling for TGTSS**
We conducted ablation studies to understand how temperature scaling affects the performance of TGTSS. Temperature scaling uses a parameter \(T<1\) for all the classes to scale the uniformly averaged predictions. We found that the proposed methodology reduces label entropy by 17.5% over the control baseline without temperature scaling for the Massive dataset on the validation set (31.5% reduction with temperature scaling). This also indicates that temporal uniform averaging is independently able to significantly reduce label entropy.
## 6 Conclusion
In this work, we study the problem of model instability/churn in deep neural networks in the context of large-scale NLU systems. Assigning different labels to the same training data over multiple training runs can be detrimental to many applications based on DNNs. We notice that the instability of model predictions is non-uniform over the data, hence we call it local instability. We propose a new metric, _label switching entropy_, that is able to quantify model instability over multiple runs as well as within a single training run. We also introduce _Temporal Guided Temperature Scaled Smoothing_ that
reduces model churn by a considerable margin. We show in experiments that TGTSS is able to recover up to 91% of the performance of ensembling at a fraction of computational cost for training and storing, thereby providing a viable alternative to ensembling in large scale production systems. Future directions of research include expanding our analysis to multi-modal data and further dissecting the root causes behind local model instability.
### Limitations
Even though our proposed methodology, TGTSS, was able to significantly reduce model instability, there is still a gap in performance with the gold standard ensembling techniques. More work needs to be done to bridge this gap. In our empirical analysis, we used two open source datasets, Massive
\begin{table}
\begin{tabular}{l|c c} \hline \hline & \multicolumn{2}{c}{\(\Delta LE_{s}\) (\%) \(\uparrow\)} \\ \hline Methods & Massive & Clinc150 \\ \hline Label Smoothing & 37.9 & 40.5 \\ Ensemble baseline & **55.5** & **61.7** \\ \hline TGTSS (Ours) & 53.4 & 55.9 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Empirical analysis highlighting that Temporal guided temperature scaled smoothing (TGTSS) reduces \(LE_{s}\) with respect to the single-run control baseline across different optimization steps when a single model is trained. \(\Delta LE_{s}\) (%) is computed as the percentage reduction between the sum of per-utterance \(LE_{s}\) for each method and that of the control baseline. A negative sign indicates an increase in label entropy over the control baseline.
Figure 3: Training trajectories between pre-mitigation and post-mitigation stages show that TGTSS was able to significantly reduce the variability of raw confidence scores on the gold labels as well as reduce model churn in Massive dataset. [Top] shows some utterances where the model predictions are stable (no label switching), [Bottom] shows some utterance where TGTSS significantly reduced model churn as measured using \(LE_{s}\).
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c}{**Massive**} & \multicolumn{3}{c}{**Clinc150**} \\ \hline Methods & Accuracy(\%) & \(\Delta LE_{m}(\%)\uparrow\) & \% of \(E_{b}\) & Accuracy(\%) & \(\Delta LE_{m}(\%)\uparrow\) & \% of \(E_{b}\) \\ \hline Control baseline & 90.6 \(\pm\) 0.6 & - & - & 95.1 \(\pm\) 0.8 & - & - \\ Ensemble baseline (\(E_{b}\)) & 91.3 \(\pm\) 0.5 & 34.5 & - & 95.4 \(\pm\) 0.6 & 31.1 & - \\ \hline L2 Regularization & 90.3 \(\pm\) 0.5 & -2.3 & -7 & 94.9 \(\pm\) 0.7 & -0.6 & -2 \\ SWA & 91.0 \(\pm\) 0.5 & 17.6 & 51 & 95.2 \(\pm\) 0.7 & 7.3 & 23 \\ Label Smoothing & 90.8 \(\pm\) 0.5 & 5.7 & 17 & 95.2 \(\pm\) 0.8 & 6.1 & 20 \\ \hline TGTSS (Ours) & **91.3 \(\pm\) 0.6** & **31.4** & **91** & **95.3 \(\pm\) 0.8** & **26.7** & **86** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Reduction of multi-run entropy \(LE_{m}\) across 50 independent model runs for different methods. \(\Delta LE_{m}(\%)\) is calculated as percentage reduction between the sum of per-utterance \(LE_{m}\) for each method and that of the control baseline. A higher percentage indicates greater reduction in \(LE_{m}\) over control baseline and thus better performance. The values for % of \(E_{b}\) indicates the reduction in \(LE_{m}\) as a percentage of the gold standard ensemble baseline. A negative sign in label entropy reduction indicates an increase in \(LE_{m}\). Our method TGTSS shows the best results among the competing methods, coming within 91% of gold standard ensemble baseline.
and Clinc150. Both these datasets are small and may not represent the complexity of real-world production datasets, which may contain substantially more noise. In our proposed methodology, we train a pair of models successively, a teacher and a student, which is significantly better than ensembling in terms of computational cost. However, this setup may still be challenging in many sophisticated real-world production NLU systems. More work needs to be done to reduce the computational complexity of training and inference for these systems.
## Ethics Statement
The authors foresee no ethical concerns with the research presented in this work.
## Acknowledgement
The authors would like to thank the anonymous reviewers and area chairs for their suggestions and comments.
|
2305.19167 | Reduced Precision Floating-Point Optimization for Deep Neural Network
On-Device Learning on MicroControllers | Enabling On-Device Learning (ODL) for Ultra-Low-Power Micro-Controller Units
(MCUs) is a key step for post-deployment adaptation and fine-tuning of Deep
Neural Network (DNN) models in future TinyML applications. This paper tackles
this challenge by introducing a novel reduced precision optimization technique
for ODL primitives on MCU-class devices, leveraging the State-of-Art
advancements in RISC-V RV32 architectures with support for vectorized 16-bit
floating-point (FP16) Single-Instruction Multiple-Data (SIMD) operations. Our
approach for the Forward and Backward steps of the Back-Propagation training
algorithm is composed of specialized shape transform operators and Matrix
Multiplication (MM) kernels, accelerated with parallelization and loop
unrolling. When evaluated on a single training step of a 2D Convolution layer,
the SIMD-optimized FP16 primitives result up to 1.72$\times$ faster than the
FP32 baseline on a RISC-V-based 8+1-core MCU. An average computing efficiency
of 3.11 Multiply and Accumulate operations per clock cycle (MAC/clk) and 0.81
MAC/clk is measured for the end-to-end training tasks of a ResNet8 and a DS-CNN
for Image Classification and Keyword Spotting, respectively -- requiring 17.1
ms and 6.4 ms on the target platform to compute a training step on a single
sample. Overall, our approach results more than two orders of magnitude faster
than existing ODL software frameworks for single-core MCUs and outperforms by
1.6 $\times$ previous FP32 parallel implementations on a Continual Learning
setup. | Davide Nadalini, Manuele Rusci, Luca Benini, Francesco Conti | 2023-05-30T16:14:16Z | http://arxiv.org/abs/2305.19167v1 | # Reduced Precision Floating-Point Optimization for Deep Neural Network On-Device Learning on MicroControllers
###### Abstract
Enabling On-Device Learning (ODL) for Ultra-Low-Power Micro-Controller Units (MCUs) is a key step for post-deployment adaptation and fine-tuning of Deep Neural Network (DNN) models in future TinyML applications. This paper tackles this challenge by introducing a novel reduced precision optimization technique for ODL primitives on MCU-class devices, leveraging the State-of-Art advancements in RISC-V RV32 architectures with support for vectorized 16-bit floating-point (FP16) Single-Instruction Multiple-Data (SIMD) operations. Our approach for the Forward and Backward steps of the Back-Propagation training algorithm is composed of specialized shape transform operators and Matrix Multiplication (MM) kernels, accelerated with parallelization and loop unrolling. When evaluated on a single training step of a 2D Convolution layer, the SIMD-optimized FP16 primitives result up to 1.72\(\times\) faster than the FP32 baseline on a RISC-V-based 8+1-core MCU. An average computing efficiency of 3.11 Multiply and Accumulate operations per clock cycle (MAC/clk) and 0.81 MAC/clk is measured for the end-to-end training tasks of a ResNet8 and a DS-CNN for Image Classification and Keyword Spotting, respectively - requiring 17.1 ms and 6.4 ms on the target platform to compute a training step on a single sample. Overall, our approach results more than two orders of magnitude faster than existing ODL software frameworks for single-core MCUs and outperforms by 1.6 \(\times\) previous FP32 parallel implementations on a Continual Learning setup.
keywords: Parallel Computing, Computer Architecture, Open Source Software, Open Architecture Platforms, Deep Learning +
Footnote †: journal: Future Generation Computer Systems
## 1 Introduction
In recent years, the Internet-of-Things (IoT) ecosystem has been enriched by tiny battery-powered devices that can capture and locally analyze the sensed data [1; 2]. Notable examples include smart cameras for face recognition [3], nano-drones with autonomous navigation capabilities [4], hearable aids featuring noise cancelling [5], wearable healthcare devices [6] or smart agriculture [7] systems, and more. To cope with the severely constrained energy budget and the form factor requirements, these _smart_ devices rely on MicroController Units (MCUs) as their main computational unit for processing the data coming from the tightly coupled sensors. Differently from more capable engines such as edge GPUs or mobile-class CPUs (e.g., ARM Cortex-A multi-cores), MCUs present a power envelope lower than a few hundred mW to comply with battery-powered operation. On the other side, running complex processing pipelines, i.e., based on modern Deep Neural Networks (DNNs), on these platforms can be extremely challenging because of the limited compute and memory budget, which typically amounts to only a few MBs of on-chip memory [8].
The commonly adopted design flow to bring DNN inference models to low-power MCUs is composed of an initial training phase, typically performed in a GPU-equipped data-center machine, followed by the deployment of the frozen trained model on the end-point device. This rigid scheme, indicated as _train-once-deploy-everywhere_, has started to be questioned because of the lack of robustness observed when testing smart devices in the real world. A major error source concerns the nature of the data sensed in the field that differs substantially from the training data, e.g., when a device is sensing an unknown environment not well-represented in the train set [9]. Because of this mismatch, the prediction accuracy can be drastically reduced compared to the accuracy scored on the test dataset used at design time. Transfer Learning [10] or recently proposed Continual Learning [11] techniques address this issue by fine-tuning the trained DNN, i.e., updating the model coefficients, over new data coming from a new domain. Unfortunately, these adaptive solutions cannot scale if relying on external servers for the training tasks, considering that every individual device can face a different domain that may be subjected to rapid changes.
To address this challenge, we focus on the _On-Device Learning_ (ODL) paradigm [12]. According to this, end-point devices rely on the local compute capacity for the (incremental) learning task rather than running inference-only workloads. Instead of continuously exchanging data and parameters between nodes
and a central server for the DNN model updates, the ODL policy reduces communication costs and lowers the workload on the server side. Additionally, ODL brings benefits to bypass major privacy concerns: local execution prevents the sharing of personal (labeled) data to third-parties cloud services used for the training process.
All these points motivate us to address the fundamental research question concerning the _feasibility of On-Device Learning on low-end IoT devices_ powered by tiny edge devices such as MCUs. We refer to a learning scheme using the Back-Propagation (BP) algorithm, a gradient-based optimization strategy typically used for DNN training. Several recent works targeting resource-constrained devices addressed this problem by introducing hard restrictions to the BP algorithm. The TinyOL framework [13] and the STM32 NanoEdge AI Studio1, for example, enable learning on MCUs by updating the parameters - i.e., fine-tuning - of only the last layer of the deployed DNN, and only in limited scenarios, with respect to in-field data. Here, the BP computation leverages full-precision floating-point (FP32) arithmetic. The rest of the model, previously quantized to low-bitwidth, is kept frozen. Restricting training to the last layer drastically reduces the expressiveness of the method, i.e., the complexity of what can be learned, limiting the overall effectiveness. On the other hand, Tiny Training Engine [14] applies gradient scaling to use 8-bit arithmetic for the backward pass in combination with a sparse weight update logic. This approach covers the entire network but still compromises between the training complexity and its efficacy; the algorithm cannot, therefore, have the same general applicability as conventional BP. Only a recent work, _AIfES_2, has focused, instead, on deploying the full BP algorithm on MCUs. This library covers several full-precision operators, emphasizing completeness at the expense of speed, as it does not support any optimization such as multi-core execution, optimal loop unrolling, or half-precision floating-point execution.
Footnote 1: STM32 NanoEdgeAI: [https://www.st.com/en/development-tools/nanoedgeaistudio.html](https://www.st.com/en/development-tools/nanoedgeaistudio.html)
Footnote 2: AIfES for Arduino: [https://github.com/Fraunhofer-IMS/AIFES_for_Arduino](https://github.com/Fraunhofer-IMS/AIFES_for_Arduino)
In this work, we explore the feasibility of Back-Propagation-based ODL on an ultra-low-power device from a novel perspective, providing an in-depth exploration of software-based optimization strategies to accelerate the full BP algorithm for multi-core MCUs at the frontier of the State-of-the-Art. First, we propose a comprehensive computational analysis of the basic primitives required for training a DNN on an MCU, decomposing them into shape transform operations (e.g., _Im2Col_) combined with Matrix Multiplications (MM) - essentially, extending to training the work conducted by Lai et al. [15] for DNN inference. Second, leveraging recent advances in MCU architecture design, we deploy our work on a 22nm silicon embodiment of the Parallel Ultra Low Power (PULP) platform [16], GreenWaves GAP9, which is a RISC-V multi-core design with support for SIMD-accelerated half-precision (16-bit) floating-point (FP16). We ascertain whether the architectural improvements related to parallelism, reduced precision, and SIMD translate to proportional improvements in performance and energy efficiency, compared both to a single-core highly optimized full-precision floating-point (FP32) baseline tested on the same platform and on a single commercial STM32 MCU. In particular, we design a set of optimized software primitives leveraging FP16 arithmetic, which is nowadays widely adopted on the server side for efficiently training DNN models without accuracy penalties with respect to FP32 [17; 18; 19]. Furthermore, the choice of the specific FP16 format can be tuned by the user in accordance with the target device's specifications (e.g., Vega [16], which supports both IEEE FP16 and Bfloat16). Finally, to investigate whether the proposed optimization can make ODL feasible in realistic use cases, we consider the class-incremental Continual Learning case study proposed by Pellegrini et al. [20], and we compare, in terms of latency and energy consumption, the solutions obtained using AIfES - the most complete MCU training framework currently available - and the proposed library.
In detail, this work makes the following contributions towards the State-of-the-Art for MCU-based ODL:
* We analyze the training primitives for a DNN, focusing on the Conv2D case, and derive foundational abstractions for the basic operators, discussing the impact of data layout (channel-height-width / CHW vs. height-width-channel / HWC) on the underlying computational structure.
* We introduce latency-optimized software primitives for MM kernels that exploit loop unrolling, parallelization, and FP16 SIMD, introducing transposed MM (MM\({}_{T}\)) and _Im2Row_ transformations to minimize transposition overheads in SIMD-vectorized MM.
* We analyze in detail the latency impact of transform operators needed by every training kernel and quantify the impact, in terms of latency, on the learning task.
* We assess the execution latency and the energy consumption of our primitives on individual DNN layers, comparing baseline and optimized layers on the target Green-Waves GAP9.
* We explore optimized MM primitives, inspired by the same principles, on an STMicroelectronics STM32L4 to provide a further testing point for our approach.
* We compare the training of the end-to-end case study proposed by Pellegrini et al. [20] between our proposed framework on GAP9 and _AIfES_ on STM32L4.
Our optimized MM functions, which are the core kernels of the proposed ODL primitives, achieve a peak performance of 7.89 MAC/clk on GAP9 when leveraging FP16 SIMD instructions and 8-core parallelism, 1.91\(\times\) faster than the FP32 counterpart. When benchmarking a complete training step of a Conv2D layer, the computational efficiency reduces to 6.62 MAC/clk because of the overhead of the shape transform functions, which account for 12.5% of the computation time. Such overhead is mitigated by using an HWC data layout, which is 11% faster than a CHW-based implementation. Overall, latencies of 17.1 ms and 6.4 ms are measured to run the forward and backward steps of a ResNet8 for Image Classification and a DS-CNN for Keyword Spotting, respectively, on a GAP9 SoC clocked at 370 MHz while consuming 60.5 mW on average. Our evaluation of a Continual Learning case study shows that our solution is 1.63\(\times\) and 767\(\times\) faster than previous solutions based on, respectively, FP32 training primitives running on the same platform and a single-core MCU using the open-source _AIfES_ library.
To foster future research on MCU-based On-Device Learning, we release the code of our library as open-source software at: [https://github.com/pulp-platform/pulp-trainlib](https://github.com/pulp-platform/pulp-trainlib).
## 2 Related Work
### On-Device Learning on Tiny Edge Devices
To review the existing techniques, we first analyze lightweight ad-hoc methods for ODL. Secondly, we describe the existing ODL applications and implementations targeting MCU devices.
#### 2.1.1 Restrictions to Backpropagation
Several works, summarized in Table 1, address the ODL problem by reducing the computational burden of the Backpropagation (BP) algorithm, either by applying specific restrictions or by directly replacing BP with a proxy. Focusing on time series analysis for Anomaly Detection, De Vita et al. [21] extended the functionalities of STMicroelectronics' X-CUBE-AI3 by introducing support for On-Device Training of Echo State Networks. The method was tested on an STM32 MCU with a memory occupation of less than 100 kB. To enable lightweight transfer learning, TinyOL [13] proposes to insert a single trainable layer on top of a frozen and quantized model. This extra layer is trained in a few milliseconds using ARM-Cortex-equipped Arduino boards in both supervised and unsupervised setups. Similarly, Train++ [24] implemented ODL for on-device targets but targeted shallow single-layer networks for binary classification problems. To reduce the memory footprint of the activation tensors during training, TinyTL [22] proposes to limit the backpropagation to biases only. Within a transfer learning context, this approach can reduce the memory requirements by up to 12.9\(\times\) with respect to also training the weight parameters, at the cost of an extra custom residual layer for preserving the accuracy level. These works only train a subset of the weight parameters to avoid implementing the costly full Backpropagation algorithm on resource-constrained MCUs. In contrast, we address this challenge by developing an optimized software methodology that exploits advanced multi-core MCU designs with reduced-precision FPU support.
Footnote 3: STM X-CUBE-AI: [https://www.st.com/en/embedded-software/x-cube-ai.html](https://www.st.com/en/embedded-software/x-cube-ai.html)
To bring ODL to resource-constrained devices lacking FPU support, PocketNN [25] presented a training methodology to exploit integer-only computation based on Direct Feedback Alignment [32]. To reach the same goal, Tiny Training Engine (TTE) [26] combined gradient tensor pruning via offline calibration with a novel Quantization-Aware strategy for scaling the gradient magnitude and fitting the limited integer range. Thanks to this approach, the authors demonstrated a training procedure for low-end MCUs leveraging 8-bit computation kernels. In contrast to these approaches, our work does not impose modifications or custom training algorithms for ODL, nor does it require an additional offline calibration procedure. Rather, we support and accelerate the canonical and commonly used Backpropagation to broaden the scope of ODL without incurring the accuracy degradation caused by limited integer ranges [33].
#### 2.1.2 ODL Implementations for MCUs
Table 2 reports the works addressing the application and implementation of complete Backpropagation on Ultra-Low-Power MCUs. Targeting the problem of noise domain shift in audio keyword spotting, Cioflan et al. [27] propose to increase the accuracy of their classification model using On-Device Domain Adaptation (ODDA). Thanks to this strategy, they achieve an accuracy improvement of 1.43% at a memory cost of only 1.47 MB; still, a latency higher than 100 s prevents real-time application. Gimenez et al. [28] used simple DNNs composed of Fully-Connected layers to learn simple audio commands in-the-field within several milliseconds on tiny MCUs. The same authors [29] later extended this approach to a distributed setup using Federated Learning [34; 35]. This method presents a memory footprint of less than 256 kB but requires a training latency of several hundred seconds for a full federated update. These works focused on the applications of ODL rather than addressing performance optimization, as we consider in our work.
In contrast, a small group of works targeted the design of a complete training framework for ODL on MCUs. AIfES by Fraunhofer IMS is currently a state-of-the-art library for ODL on Arduino and ARM Cortex-based MCUs, supporting Fully-Connected and Convolutional layers and a variety of activation functions, optimizers, and commodity functions for training. They rely on CMSIS-DSP MM kernels for latency-optimized ODL. PULP-TrainLib [31] is an ODL framework for RISC-V Multicore MCUs, featuring a set of FP32 performance-tunable training primitives for Fully-Connected and Convolutional layers. To find the fastest configuration for each training step and DNN layer, PULP-TrainLib employs an Autotuner to select the fastest MM algorithm. Ravaglia et al. [23] exploited an early prototype of PULP-TrainLib to demonstrate Continual Learning (CL) for image recognition on MCUs. In this work, we leverage the PULP-TrainLib templates and extend them with novel latency-optimized software primitives that take advantage of multi-core RISC-V MCUs with FP16 SIMD support. To the best of our knowledge, our design results in the fastest ODL library for MCU targets.
### HW Support for Reduced Precision
The opportunity to accelerate the computation by exploiting low-bitwidth precisions is fostering research on new HW concepts and MCU sub-systems. In this context, J. Lee et al. [36] presented LNPU, a Sparse DNN Processor which allows Fine-Grained Mixed Precision between FP16 and FP8 to enable on-chip training. LNPU features 16 sparse Deep Learning cores, orchestrated by a single Central Core, a SIMD core, and a RISC controller, with a power consumption of 43.1 to 367 mW, at an operational frequency of 50 and 200 MHz, respectively. Furthermore, LNPU features a peak efficiency of up to 25.3 TFLOPS/W while processing inputs with 90% sparsity. Targeting RISC-V cores as the main computational cores, F. Montagna et al. [37] present a multi-core transprecision computing cluster that aims at minimizing the power consumption of near-sensor applications. The authors combine hardware sub-word vectorization and a dedicated interconnect to efficiently share multiple Floating Point Units (FPUs) among up to 16 parallel cores while providing a complete software infrastructure to enable efficient parallel programming. In terms of performance, their solution achieves a peak performance of 2.9 GFLOPS with a power consumption of 43 mW, which is compatible with the low-power environment. Other works focus on general-purpose strategies to provide Reduced Precision on Ultra-Low-Power SoCs. Among the RISC-V4 MCU architectures, D. Rossi et al. [16] presented Vega, a ten-core System-on-Chip (SoC) based on the Parallel Ultra-Low Power (PULP) platform5. Vega's cores are equipped with a set of floating-point units (FPUs) capable of dealing with different floating-point formats, as wide as 32-bit and Single-Instruction Multiple-Data (SIMD) 16-bit, as well as two programmable Machine Learning accelerators. Thanks to these features, Vega achieves a State-of-the-Art performance of up to 129 GFLOPS/W for FP16 computations. In this paper, we refer to this HW concept to design a set of software primitives optimized to exploit the Reduced Precision FPUs for DNN training tasks. On the other side, the recently introduced ARMv8.1-M6 enabled FP16 processing as part of the new Helium M-Profile Vector Extension (MVE), which allows up to 15\(\times\) speedup on Machine Learning applications and 5\(\times\) speedup on DSP, with respect to ARMv8 instructions. However, to the best of our knowledge, no off-the-shelf MCU equipped with ARM MVE is yet available on the market.
Footnote 4: RISC-V International: [https://riscv.org/](https://riscv.org/)
Footnote 5: PULP Platform: [https://pulp-platform.org/](https://pulp-platform.org/)
### BLAS Optimization of DNN Training Primitives
Many Machine Learning and Deep Learning workloads can be computationally expressed in terms of Basic Linear Algebra Subroutines (BLAS), and Matrix Multiplication (MM) in particular [38]. The problem of BLAS optimization for Deep Learning applications is the target of several works concerning server-side applications.
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline
**ODL Implementation** & **Target Task** & **Retrainable Layers** & **Kernel Optimizations** & **Target Device** & **Data Type** \\ \hline ODDA [27] & Keyword Spotting & All & None & Vega [16], Raspberry PI-4B, Snapdragon 888 & FP32 \\ \hline Giménez [28; 29] & Keyword Spotting & All (Fully-Connected) & None & Arduino Nano 33 BLE, Arduino Portenta H7 & FP32 \\ \hline AIFES [30] & General Purpose & All & Matrix Multiplication (ARM CMSIS-NN) & Arduino boards, ARM Cortex-M cores & FP32 \\ \hline PULP-TrainLib [31] & General Purpose & All & Matrix Multiplication (FP32) & RISC-V Multicore MCUs, STM32 boards & FP32 \\ \hline
**This Work** & General Purpose & All & Matrix Multiplication, Im2Co/lm2Row (FP16) & RISC-V Multicore MCUs with FP16 SIMD & FP16 \\ & & & & FPU & FPU \\ \hline \end{tabular}
\end{table}
Table 2: Backpropagation-Based On-Device Learning Implementations on MCUs
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline
**ODL Method** & **Target Task** & **Retrainable Layers** & **Variations to Backpropagation** & **Target Device** & **Data Type** \\ \hline De Vita [21] & Anomaly Detection & ESN layer & Custom Echo State Network (ESN) for time series & STM32 boards & FP32 \\ \hline TinyOL [13] & Image Classification & Only last layer & Extra custom trainable layer on bottom of a frozen DNN & Arduino Nano 33 BLE & FP32 \\ \hline TinyTL [22] & Image Classification & All (biases only) & Bias training only (reduction via size) & Generic embedded device & FP32 \\ \hline Ravaglia [23] & Image Classification & Last N layers & Continual Learning with quantized Latent Replays & Vega [16], STM32L4 & FP32 \\ \hline Train++ [24] & Binary Classification & All & Custom incremental learning algorithm for binary classification & ARM Cortex-equipped MCUs, ESP32 & FP32 \\ \hline PocketNN [25] & Image Classification & All & Integer-Only Direct Feedback Alignment (no Backprop) & Generic edge device & INT8 \\ \hline Tiny Training Engine [26] & Image Classification / General Purpose & All (quantized) & Automatic gradient scaling to fit INT8 precision + gradient pruning & STM32F746, Other MCUs & INT8 \\ \hline \end{tabular}
\end{table}
Table 1: On-Device Learning Methods for Ultra-Low-Power Devices
Approximate methods can be employed to speed up the computation of MM kernels. In this context, Osawa et al. [39] proposed to apply Low-Rank Approximation to reduce the computational burden of convolutions in server applications. With this approach, they showed up to a 25\(\times\) performance improvement on wide matrices while maintaining a negligible accuracy loss. The cost of MM kernels in Convolutional and Fully-Connected layers can also be reduced by means of software approaches like the Strassen algorithm [40]. With this aim, Tschannen et al. [41] modified the Strassen algorithm, introducing a method capable of learning fast approximations of MM algorithms for the end-to-end execution of DNNs. With their approach, the authors claimed a 99.5% reduction in the total number of multiplications in image classification models without accuracy drops.
Hardware-software solutions are common in the acceleration of computationally intensive tasks. In the effort to optimize General Matrix Multiplication (GEMM) with multiple data precisions, Moss et al. [42] provided a hardware-software GEMM framework based on an Intel HARPv2 processor, which is able to accelerate DNN models like AlexNet by up to 4\(\times\) using a mixed CPU+FPGA approach. Similarly, Juan et al. [43] presented a multi-threaded MM implementation for Deep Learning applications, exploiting 16-bit integer precision and the hardware SIMD capabilities of ARM Cortex-A processors to extend the BLIS framework. Thanks to their vectorized integer kernels, they obtained a 20% speedup with respect to FP32 models like AlexNet or VGG16 while saving 25% energy.
The acceleration of MM workloads on MCUs is often delegated to SIMD integer computation due to the hard restrictions in terms of memory, computation, and power. In this context, ARM CMSIS-NN [15] presented a method to exploit fixed-point quantization in the form of INT16 and INT8 data to accelerate convolutions by 4.6\(\times\) on ARM Cortex-M processors equipped with integer SIMD hardware. Similarly, PULP-NN [44] provided a method to accelerate integer MM kernels on multicore RISC-V MCUs, exploiting SIMD integer computation. Thanks to their approach, they showed up to 15.5 MAC/clk in INT8 format on eight parallel RISC-V cores. Similar to many previous works, we rely on common linear algebra kernels, i.e., MM, to solve our problem but, differently from others in this context, we focus on on-device learning based on the backpropagation algorithm with a reduced-precision-capable MCU system.
## 3 Background
This section reviews the Back-Propagation (BP) algorithm, a gradient-based optimization technique commonly used for DNN training. Let us consider a DNN model composed of \(N\) layers. Every layer applies a non-linear function \(f_{i}(\cdot),\ i=0,\ldots,N-1\), parameterized by the coefficient tensor \(W_{i}\), which is learned during the training process.
During the Forward (FW) pass, which corresponds to the DNN inference phase, the model's input data \(X_{0}\) propagates layer-by-layer through the composite function \(\{f_{0}\circ f_{1}\circ\cdots\circ f_{N-1}\}\). In the case of convolutional layers, the layer-wise operation of the FW step can be expressed as:
\[Y_{i}=W_{i}*X_{i} \tag{1}\]
where \(*\) denotes the cross-correlation operator (commonly denoted as _convolution_), and \(X_{i}\) and \(Y_{i}\) are the input and output activation feature maps, respectively. Note that \(X_{i}\equiv Y_{i-1}\). We omit the bias term for simplicity. Fig. 1-a) visually represents the FW step of a Convolution layer, operating on a single feature map. To visually describe this layer, we refer to the notation introduced in [45]. The output of the last layer \(Y_{N-1}\) represents the DNN model prediction, e.g., the class scores in case of a classification task.
To train a DNN model, a loss function \(\mathcal{L}\) is used to estimate the classification error with respect to the ground-truth labels of a set of labelled data, i.e., the train set. The BP algorithm has the purpose of backward-propagating the prediction error to compute the gradients of the loss function with respect to every layer's weights \(W_{i}\), i.e., \(\nabla\mathcal{L}_{W_{i}}=dW_{i}\). Once the latter is computed, an optimization procedure, such as Stochastic Gradient Descent (SGD), updates \(W_{i}\) according to a certain learning rate \(\eta\), e.g., \(W_{i}\gets W_{i}-\eta\cdot dW_{i}\).
The Backward (BW) step consists of an application of the gradient's chain rule to compute \(dW_{i}\). Starting from the network output, the gradient \(\nabla\mathcal{L}_{Y_{N-1}}\) is first calculated as the derivative of the prediction error with respect to the model output. This value is backpropagated into the DNN model to compute the Intermediate Gradient (IG) tensors, denoted as \(dX_{i}\). As for the FW case, note that \(dY_{i-1}=dX_{i}\). For a convolutional layer (Fig.1-c)), the IG tensors are computed as:
\[dX_{i}=dY_{i}*\delta Y_{i}/\delta X_{i}=dY_{i}*W_{i}^{R} \tag{2}\]
This operation is referred to as the BW-IG step.
Figure 1: Training steps of a Convolution layer with a single input/output channel: a) Forward (FW) step; b) Weight Gradient (BW-WG) step, to compute the gradient of the weights; c) Input Gradient (BW-IG) step, which back-propagates the prediction error to the previous layer. We indicate as \(w_{i}\) the filter elements. We refer to the notation introduced in [45].
Note that, for convolutional layers, \(\delta Y_{i}/\delta X_{i}\) corresponds to the \(W_{i}\) tensor, opportunely transformed to \(W_{i}^{R}\) by inverting the element order, as described in Sec. 5.4. The output gradient \(dY_{i}\) may need to be padded to produce an output tensor \(dX_{i}\) with the correct size. Following the same differentiation rule, the weight gradient \(dW_{i}\) is computed during the BW-WG step (Fig. 1-b):
\[dW_{i}=\delta Y_{i}/\delta W_{i}*dY_{i}=X_{i}*dY_{i} \tag{3}\]
In the case of convolutional layers, \(\delta Y_{i}/\delta W_{i}\) is equivalent to the \(X_{i}\) activation tensor computed during the FW pass.
In the rest of the paper, we narrow down the scope to the workload analysis and implementation of individual layers. Hence, for simplicity of notation, we omit the index \(i\) when referring to individual tensors. Tab. 3 summarizes the used symbols.
## 4 ODL kernels
For convolutional DNNs - i.e., DNNs whose layers mainly consist of Convolutions - the layer-wise training primitives (FW, BW-IG, BW-WG steps) reduce to convolutions as described, respectively, by Eq. 1, 2, 3. Previous works [15; 44] showed that FW convolutions can be reshaped as Matrix Multiplications (MMs) after applying a shape transformation operator (e.g., _Im2Row_ or _Im2Col_, described below) to the input activation tensor. In this section, we discuss how to extend this concept to the BW steps when targeting execution on low-end MCUs. It is important to remark that we consider a batch size of 1 for the training task, which is equivalent to computing the weight gradients in a sample-by-sample streaming fashion.
As a template for most DNN operators, we consider a 2D Convolution (Conv2D7) layer with weight shape \(C_{O}\times C_{I}\times k_{h}\times k_{w}\), where \(C_{O}\) is the number of output channels, \(C_{I}\) is the number of input channels, and \(k_{h}\times k_{w}\) is the spatial filter size. Input and output activations (and gradients) are 3-dimensional tensors, featuring two spatial dimensions (\(H\) and \(W\)) and a channel dimension (\(C_{I}\) channels for \(X\) and \(dX\) and \(C_{O}\) channels for \(Y\) and \(dY\)). We consider the two commonly used data layouts, denoted as CHW and HWC, that differ in the ordering of the tensor dimensions in memory. The CHW convention presents the channel size as the outermost dimension, while the HWC convention stores the channel dimension as the innermost. In matrix form, a CHW tensor is reshaped as a matrix with \(C\) rows and \(H\times W\) columns, where elements in a row are stored contiguously in memory. An HWC tensor is obtained by transposing the CHW matrix, i.e., a (\(H\times W\))\(\times C\) sized matrix. On the other hand, weight tensors (and gradients) are 4-dimensional tensors, featuring \(C_{O}\) as the outermost dimension. The spatial filter sizes \(k_{h}\) and \(k_{w}\) and the input channels \(C_{I}\) are shuffled in the inner dimensions according to the chosen layout: in case of HWC, the filter sizes are sorted as \((C_{O},k_{h},k_{w},C_{I})\), while in CHW as \((C_{O},C_{I},k_{h},k_{w})\). Although input, weight, and output tensors of an individual layer could feature different memory layouts, in the following we only consider homogeneous schemes where all tensors are formatted as HWC or CHW.
Footnote 7: We refer to Pytorch’s _Conv2d_ notation.
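A small NumPy illustration of the two layouts and their matrix views, using the toy sizes of Fig. 2 (\(C=2\), \(3\times 3\) spatial extent):

```python
import numpy as np

# A toy activation tensor with C = 2 channels and 3x3 spatial extent.
x_chw = np.arange(2 * 3 * 3).reshape(2, 3, 3)   # (C, H, W) layout
x_hwc = x_chw.transpose(1, 2, 0)                # (H, W, C) layout

m_chw = x_chw.reshape(2, 9)      # CHW matrix view: C rows, H*W columns
m_hwc = x_hwc.reshape(9, 2)      # HWC matrix view: H*W rows, C columns
assert (m_hwc == m_chw.T).all()  # one layout is the transpose of the other
```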
Fig. 2 analyzes in detail the FW step of a Conv2D with HWC layout. The input tensor \(X\) is initially stored in matrix form. Then, the _Image-to-Row (Im2Row)_ shape transform function copies the values under the moving window of the convolution filter (of size \(k_{h}\times k_{w}\times C_{I}\)) to a new matrix _Im2Row_(X) of size \((H_{O}\times W_{O})\times(k_{h}\times k_{w}\times C_{I})\). The result of the convolution is then computed by means of a Matrix Multiplication between the _Im2Row_(X) matrix and the weight tensor, also stored in matrix form. Differently from \(X\), the weight tensor \(W\) is stored in matrix form by placing the elements of each filter in a separate column, so that the \(C_{O}\) elements along the output channels are adjacent in memory.
More in detail, the _Im2Row_ operator copies data chunks of size \(k_{w}\times C_{I}\) from the matrix \(X\) with HWC layout to the destination matrix. On the contrary, with an input featuring a CHW layout, the chunk size of the _Im2Row_ data transfer is reduced to the \(k_{w}\) elements that are stored contiguously in memory. A strided access
\begin{table}
\begin{tabular}{|c|c|} \hline Acronym & Meaning \\ \hline _X, dX_ & Input activation and gradient of a DNN layer \\ _W, dW_ & Weight data and gradient of a DNN layer \\ _Y, dY_ & Output activation and gradient of a DNN layer \\ _Im2Col_ & Image-to-Column Operator \\ _Im2Row_ & Image-to-Row Operator \\ _B-T_ & Block-Transpose Operator (weights only) \\ _Tr_ & Matrix Transposition Operator \\ MM & Matrix Multiplication \\ MM\({}_{T}\) & Row-Row Matrix Multiplication \\ \hline \end{tabular}
\end{table}
Table 3: Acronyms and Symbols
Figure 2: Matrix representation of a FW step of a Conv2D layer with HWC data layout. The input size is \(H_{I}=3\), \(W_{I}=3\), \(C_{I}=2\). The weight tensor is \(k_{w}=k_{h}=2\), \(C_{I}=2\) and \(C_{O}=3\). The input tensor is transformed with _Im2Row_ before performing the Matrix Multiplication.
is performed to load the next elements that fall under the weight filter. The memory layout is thus key to the efficiency of the _Im2Row_ transform function, as discussed in the experimental section, in particular when the datatype is set to FP16.
In case of CHW, the expression of the FW step is adapted, as shown in the top of Fig. 3, to produce a transposed output matrix with respect to the HWC case. Differently from the HWC expression: (i) operands are transposed and (ii) the MM switches the operand order. To handle the transposition of the input activation \(X\), the _Im2Row_ operator is replaced with the _Image-to-Column (Im2Col)_ operator, where elements under the filter are copied onto a column of the destination matrix. In terms of performance, similar considerations to the ones drawn for _Im2Row_ also hold for _Im2Col_. In general, the resultant _Im2Row(\(X\))_ (or _Im2Col(\(X\))_) matrix features a memory footprint larger than \(X\) because every element of the input tensor contributes to the computation of multiple output values. If the memory requirements of _Im2Row/Im2Col_ exceed the available memory, they can be reduced using tensor tiling - i.e., instead of processing the full tensor, multiple partial sub-tensors can be copied in sequence in a temporary buffer, using the transform operators, and processed at minimal computation overhead with respect to processing the full tensor [46].
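The following NumPy sketch (stride 1, no padding; function names are ours) mirrors Fig. 2: the _Im2Row_ matrix of an HWC input, multiplied by the weight matrix whose columns are the flattened filters, yields the \((H_{O}\times W_{O})\times C_{O}\) output:

```python
import numpy as np

def im2row_hwc(x, kh, kw):
    """x: (H, W, C_in) input; returns the (H_o * W_o, kh * kw * C_in) matrix
    whose rows are the flattened receptive fields (stride 1, no padding)."""
    H, W, C = x.shape
    Ho, Wo = H - kh + 1, W - kw + 1
    rows = np.empty((Ho * Wo, kh * kw * C), dtype=x.dtype)
    for i in range(Ho):
        for j in range(Wo):
            rows[i * Wo + j] = x[i:i + kh, j:j + kw, :].ravel()
    return rows

def conv2d_fw_hwc(x, w):
    """w: (C_out, kh, kw, C_in) HWC-layout weights. FW-MM: the im2row matrix
    times the (kh*kw*C_in, C_out) weight matrix gives (H_o * W_o, C_out)."""
    co, kh, kw, ci = w.shape
    w_mat = w.reshape(co, kh * kw * ci).T     # each filter becomes a column
    return im2row_hwc(x, kh, kw) @ w_mat

# Toy sizes of Fig. 2: H_I = W_I = 3, C_I = 2, k = 2x2, C_O = 3.
x = np.random.rand(3, 3, 2).astype(np.float32)
w = np.random.rand(3, 2, 2, 2).astype(np.float32)
y = conv2d_fw_hwc(x, w)                       # shape (4, 3) = (H_o * W_o, C_O)
```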
In addition to the FW step of the Conv2D layer, Fig. 3 visually shows the core operations of the BW training primitives operating on tensors with HWC and CHW layouts. In the plot, we denote the Matrix Multiplications that implement each training step as FW-MM for the forward step and BW-WG-MM and BW-IG-MM for the backward steps. Similarly to the FW, the BW-WG convolution is turned into a Matrix Multiplication to compute the weight gradient \(dW\). This step takes as inputs the activation input \(X\), stored after the FW pass, and the gradient tensor \(dY\). Note that the \(dY\) tensor has the same size as \(Y\). Differently from the FW-MM, the _Im2Col_ transform is applied over the \(X\) tensor with an HWC layout. _Im2Row(\(X\))_ is instead used for CHW. Lastly, the BW-IG step is reshaped into a Matrix Multiplication (BW-IG-MM) between the output gradient \(dY\) and the weight tensor \(W\). Differently from the other steps, the BW-IG step requires the weight tensor to be transformed using a _Block-Transpose (\(B\)-\(T\))_ operator before feeding the BW-IG-MM. Fig. 4 illustrates the workflow of the \(B\)-\(T\) operator applied to a weight matrix \(W\). First, the \(k_{h}\times k_{w}\) elements of the filters are placed in reverse order, i.e., the reversed-order matrix \(W^{R}\) of Eq. 2; second, the weight input and output channels are _block-transposed_: elements belonging to the same input channel are transposed into rows.
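A compact NumPy rendering of the \(B\)-\(T\) workflow of Fig. 4 under the HWC convention used above (the exact index bookkeeping in the library may differ):

```python
import numpy as np

def block_transpose_hwc(w):
    """B-T on HWC weights (C_out, kh, kw, C_in): reverse the spatial order
    of each filter (the W^R of Eq. 2), then block-transpose so that input
    channels take the output-channel role in the BW-IG matrix multiplication."""
    w_rev = w[:, ::-1, ::-1, :]                               # reversed filter elements
    return np.ascontiguousarray(w_rev.transpose(3, 1, 2, 0))  # (C_in, kh, kw, C_out)

# BW-IG can then reuse the FW machinery of the previous sketch:
# dX = conv2d_fw_hwc(zero_padded_dY, block_transpose_hwc(W)).
```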
## 5 ODL on a MultiCore MCU with FP16 support
This section describes our design methodology for latency-optimized ODL software kernels targeting a multi-core platform with HW support for reduced-precision FP16 SIMD instructions.
### The PULP Platform
Fig. 5 shows the RISC-V-based Parallel Ultra-Low Power (PULP) platform targeted by our approach [16], as embodied in the GreenWaves GAP9 SoC. The system features an MCU domain, namely the PULP SoC region (depicted in light blue in the figure), which includes a single RISC-V core for control-related tasks, and a Cluster domain (in yellow) with 8+1 RISC-V cores to accelerate compute-intensive tasks. All the cores support the RV32IMFC ISA, extended with DSP-oriented instructions, like post-increment load/store instructions and 2-level hardware loops. Every CPU is also granted access to a mixed-precision
Figure 4: Workflow of the _Block-Transpose (\(B\)-\(T\))_ operator applied to a HWC weight matrix of size \(3\times 2\times 2\times 2\).
Figure 5: PULP SoC Architecture with 8 RISC-V Cores. The PULP Cluster is equipped with 4 shared Mixed Precision Floating Point Units (FPUs) to compute FP32 and FP16 operations.
Figure 3: ODL training primitives of a Conv2D Layer. On the left, the non-optimized HWC expressions of the Conv2D training primitives for ODL; on the right, the same expressions in CHW format.
Floating Point Unit (FPU), operating full-precision (FP32) and half-precision (FP16) floating-point instructions. More in detail, every FPU can process 1x FP32 MAC in a single clock cycle or 2 MAC/clk if using FP16 SIMD instructions.
From a system-level viewpoint, the PULP SoC features a multi-level memory hierarchy with up to 2 MB of L2 SRAM, directly accessible by the MCU core in a single clock cycle, and an on-chip non-volatile MRAM memory of up to 4 MB. On the Cluster side, an L1 data scratchpad memory of up to 256 kB is shared among the multiple cores. The Cluster DMA can be used to efficiently copy data between the L1 and the L2 memories in the background of the CPUs' operation. Data in the L1 memory can be accessed in a single clock cycle by the cluster cores.
In our setup, we consider the 8+1-core PULP Cluster for the acceleration of the DNN training primitives. Out of the 9 cores in total, the first 8 are devoted to parallel computation and can access the 4 shared mixed-precision FPUs. The 9th core, instead, acts as a Cluster Controller: this core is in charge of programming the Cluster DMA and dispatching parallel tasks to the other 8 compute cores.
### Matrix Multiplication Optimization
As highlighted in Section 4, the MM algorithm is the computational core of the ODL primitives. Therefore, we first study the acceleration of the MM on the targeted platform using either FP32 or FP16 datatypes.
Let us consider a generic matrix multiplication with \(A\in\mathbb{R}^{N\times K}\) and \(B\in\mathbb{R}^{K\times M}\) as inputs and \(C\in\mathbb{R}^{N\times M}\) as output. \(A\) and \(B\) are stored in memory as arrays. The elements of a row (\(K\) elements in the case of \(A\)) are adjacent in memory. Conversely, successive elements of a column (\(N\) elements in the case of \(A\)) are stored with a stride equal to the row length. Fig. 6-a) shows the pseudo-code of a naive MM implementation that uses 3 nested for loops and has a time complexity of \(\mathcal{O}(N\times K\times M)\). For every iteration of the inner loop, the CPU operates a MAC between elements loaded from the \(A\) and \(B\) arrays. Our baseline implementation assumes data stored in low-level memory, i.e., the L1 memory of the PULP cluster. Hence, elements from the \(A\) and \(B\) arrays are loaded in a single clock cycle by the Cluster cores.
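A minimal C rendition of this baseline, mirroring the pseudo-code of Fig. 6-a (names are illustrative), is:

```c
// Naive (1x1) MM baseline: C = A * B with A (N x K), B (K x M),
// C (N x M), all row-major. One MAC per inner-loop iteration.
void mm_naive_fp32(const float *A, const float *B, float *C,
                   int N, int K, int M) {
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < M; j++) {
            float acc = 0.0f;
            for (int k = 0; k < K; k++) {
                // B is traversed column-wise: a strided load every MAC.
                acc += A[i * K + k] * B[k * M + j];
            }
            C[i * M + j] = acc;
        }
    }
}
```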
Fig. 6-b) shows the memory access pattern used to compute the dot product between a row vector of matrix \(A\) and a column vector of matrix \(B\). While the elements from a row of matrix \(A\) are accessed from a contiguous memory area, strided accesses are required to load the elements belonging to a column of matrix \(B\). If we consider that every element is an FP16 number, the access pattern of this _Row-Column Dot-Product_ is inefficient in loading columns, as it cannot use 32-bit load/store instructions to load two FP16 elements with a single one-cycle instruction. This motivates us to consider an MM\({}_{T}\) operator that expects the second operand \(B\) in a transposed form according to:
\[MM(A,B)=MM_{T}(A,Tr(B)) \tag{4}\]
where \(Tr()\) is the transpose operator. Differently from the MM baseline, the MM\({}_{T}\) performs a series of dot products between the row vectors of \(A\) and the row vectors of \(Tr(B)\). As a major benefit, this Row-Row Dot Product scheme, depicted in Fig. 6-c), gains a sequential memory access pattern by design for both matrix \(A\) and matrix \(B\), favoring the usage of SIMD FP16 load/store instructions. On the other hand, the transposition of the \(B\) matrix represents a potential computation overhead. However, this extra cost can be cancelled by transposing the \(B\) matrix before deployment on the target platform when possible, e.g., for the matrix of weight values. This cost can also be absorbed by the shape transform operator of the ODL training primitives, i.e., by replacing an \(Im2Col\) function with an \(Im2Row\) or vice versa. This strategy will be discussed in further detail in Section 5.4.
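For reference, a scalar sketch of the resulting MM\({}_{T}\) kernel is shown below (illustrative names; `Bt` holds \(Tr(B)\) row-major):

```c
// MM_T sketch: C = A * Tr(Bt), with Bt = Tr(B) of shape M x K, row-major.
// Both operands are now traversed with unit stride, which is what enables
// packed 32-bit loads of adjacent FP16 pairs in the vectorized version.
void mm_t_fp32(const float *A, const float *Bt, float *C,
               int N, int K, int M) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++) {
            float acc = 0.0f;
            for (int k = 0; k < K; k++)
                acc += A[i * K + k] * Bt[j * K + k];  // unit stride on both
            C[i * M + j] = acc;
        }
}
```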
### Loop Unrolling and Parallelization
As proposed in [31], we exploit loop unrolling and parallelization to speed up the MM and MM\({}_{T}\) kernels. Loop unrolling maximizes the data reuse of loaded elements. We refer to an unrolling factor of \(U\times V\) to indicate an MM kernel that computes \(U\times V\) elements of the output matrix \(C\) within the inner loop. An MM with a higher unrolling factor executes fewer instructions because it requires fewer load operations. If the MM dimensions are not divisible by the unrolling factors, ancillary _leftover_ loops take care of the remainders using a naive, non-unrolled strategy.
Fig. 7-a) shows the pseudo-code of an FP32 MM with a \(2\times 4\) unrolling. This kernel computes 8 partial results of the output matrix \(C\) within the innermost loop, obtained from the \(2\times 4\) MAC operations. In this case, the CPU loads only 6 values instead of 16, because every element from the \(A\) and \(B\) arrays is reused 4 and 2 times, respectively. Hence, the utilization of the MAC units in the innermost loop increases from 33% to 57% compared to a naive implementation (Fig. 6-a) - i.e., the number of MAC instructions in the inner loop is increased in exchange for fewer load instructions. Fig. 7-b) shows the pseudo-code of an FP16 MM\({}_{T}\) exploiting 1\(\times\)2 loop unrolling. Thanks to the SIMD instructions, the inner loop computes 4 MAC at the cost of 3 load operations, reaching the same MAC utilization as the previous FP32 kernel but with a lower unrolling factor.

Figure 6: a) Pseudo-code of the FP32 non-unrolled (1\(\times\)1) naive MM algorithm that we refer to as baseline; b) and c) show the difference between the memory access patterns of the MM (b) and MM\({}_{T}\) (c) kernels. MM\({}_{T}\) favours SIMD loads and MAC instructions, as \(B\) matrix elements are adjacent in memory. Row elements of the matrices are stored adjacently, while column elements are separated by a stride equal to the row length.
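As an illustration, the kernel below sketches the 1\(\times\)2 FP16 MM\({}_{T}\) scheme of Fig. 7-b; the `v2f16` vector type (declared via the GCC vector-size extension), the 4-byte alignment, and the even K and M sizes are our assumptions, and real kernels handle the remainders with leftover loops.

```c
// Two-lane half-precision SIMD vector (assumes a toolchain with _Float16).
typedef _Float16 v2f16 __attribute__((vector_size(4)));

// 1x2-unrolled FP16 MM_T sketch: two output elements per inner iteration.
// Assumes K and M even and 4-byte-aligned rows; Bt = Tr(B), shape M x K.
void mm_t_fp16_1x2(const _Float16 *A, const _Float16 *Bt, _Float16 *C,
                   int N, int K, int M) {
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < M; j += 2) {
            v2f16 acc0 = (v2f16){0, 0};
            v2f16 acc1 = (v2f16){0, 0};
            for (int k = 0; k < K; k += 2) {
                // 3 packed loads feed 2 SIMD MACs, i.e., 4 scalar MACs.
                v2f16 va  = *(const v2f16 *)&A[i * K + k];
                v2f16 vb0 = *(const v2f16 *)&Bt[j * K + k];
                v2f16 vb1 = *(const v2f16 *)&Bt[(j + 1) * K + k];
                acc0 += va * vb0;
                acc1 += va * vb1;
            }
            // Horizontal reductions of the two-lane accumulators.
            C[i * M + j]     = acc0[0] + acc0[1];
            C[i * M + j + 1] = acc1[0] + acc1[1];
        }
    }
}
```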
Lastly, we exploit the multi-core architecture of the PULP Cluster and the native parallelism of the MM computation. The parallelization strategy we adopt splits the iterations of the outermost loop dimension across the available cores (8 in our case). Thanks to this, the execution throughput gains a parallel speedup that increases almost linearly with the number of cores. For instance, a naive FP32 MM with \(32\times 32\)-shaped matrices parallelized on 8 RISC-V cores shows a parallel speed-up of up to 7.47\(\times\) vs. a theoretical limit of 8\(\times\).
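A minimal sketch of this chunking scheme follows; `core_id()` stands in for the runtime's core-index primitive (e.g., `pi_core_id()` on PULP) and is an assumption here, not the exact API.

```c
#define NUM_CORES 8

extern int core_id(void);  // assumed runtime primitive (e.g., pi_core_id())

// Each core computes a contiguous chunk of the N output rows of C = A * B.
void mm_parallel_fp32(const float *A, const float *B, float *C,
                      int N, int K, int M) {
    const int chunk = (N + NUM_CORES - 1) / NUM_CORES;
    const int start = core_id() * chunk;
    const int stop  = (start + chunk < N) ? (start + chunk) : N;
    for (int i = start; i < stop; i++)
        for (int j = 0; j < M; j++) {
            float acc = 0.0f;
            for (int k = 0; k < K; k++)
                acc += A[i * K + k] * B[k * M + j];
            C[i * M + j] = acc;
        }
    // A cluster barrier would follow here before the result is consumed.
}
```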
### FP16 ODL Primitives
The design of the FP16 ODL primitives is based on the software templates of PULP-TrainLib [31]. As this library only included CHW FP32 training kernels, we _i)_ extended it with the shape transform operators for both HWC and CHW layouts and _ii)_ introduced FP16 primitives. In the remainder of the discussion, we focus mainly on the Conv2D case, but similar considerations and design strategies have been applied to the other main layers composing typical DNNs.
Fig. 8 graphically depicts our FP16 ODL training primitives for a Conv2D layer with an HWC data layout, which exploit the MM\({}_{T}\) kernels. Differently from the FP32 case, we replace the MM with MM\({}_{T}\) to fully benefit from the row-by-row SIMD dot products. To amortize the computational cost of the transpose operator required by Eq. 4, we store the weight parameters of our HWC primitives in transposed form, so as to comply with the layout required by the MM\({}_{T}\) kernels. This choice impacts the BW-WG step: the produced weight gradient \(dW\) must also be transposed for a convenient update of the weight tensor. For this reason, we further transpose the BW-WG-MM expression and feed it \(Tr(dY)\). This additional transform has a negligible impact on the execution costs, since it introduces a latency overhead lower than 5%.
Tab. 4 provides a summary of the operations of the FP32 and FP16 ODL training primitives. We consider Conv2D and PointWise layers. In more detail, the table highlights the transforms and MM kernels for the FW, BW-WG, and BW-IG steps when an HWC or a CHW layout is used. Differently from the FP32 Conv2D operations, which use MM kernels (reported in Fig. 3), the FP16 Conv2D implementations make use of MM\({}_{T}\) and shape transform functions to transpose the \(B\) operand.
Unlike the other cases, the primitives for FP16 HWC require the weights to be stored in memory in a transposed form (see Eq. 4). In the table, we denote this weight tensor as \(W^{T}\). Because of this transformation, the HWC FW and BW-IG steps feature the same order of operands as their FP32 counterparts in the convolution expression. Conversely, the operands of the FP16 BW-WG step are switched to produce a transposed weight gradient, which can be directly summed to \(W^{T}\) during the weight update phase. On the contrary, the FP16 Conv2D CHW primitives do not require the weights to be stored in transposed form. In this case, the transposition of \(B\) is obtained, instead, by replacing each \(Im2Row\) operator with \(Im2Col\) and vice versa.
Despite the fact that we only discussed the Conv2D case, our ODL kernel design methodology can be applied to every convolutional DNN layer. As a notable example, Tab. 4 also lists the internal operations for the training steps of a Pointwise (PW) Convolution, which is frequently used in DNN models, e.g., DepthWise Separable layers [47]. Given a filter size of \(k_{w}=k_{h}=1\), the ODL steps do not include any _Im2Col_ or _Im2Row_ transforms. On the contrary, the weights are transposed during the BW-IG step. Similarly to the Conv2D primitives, the FP16 HWC primitives require transposed weights to be stored and the operands to be swapped in the BW-WG step. CHW primitives, instead, only transpose the \(B\) operand. Furthermore, an additional transposition is required in the FP16 CHW FW step, unlike HWC.

Figure 7: Pseudo-code of optimized MM algorithms: a) shows an FP32 MM with 2\(\times\)4 unrolling; b) presents an FP16 1\(\times\)2 MM\({}_{T}\) which makes use of SIMD to load adjacent \(A\) and \(B\) row elements. While performing the same amount of MAC instructions per iteration, the SIMD MM\({}_{T}\) reduces the inner-loop instruction count by 43%.

Figure 8: ODL matrix expressions of the FP16 training primitives of a Conv2D, which make use of the MM\({}_{T}\) kernels.
## 6 Experimental Results
### Implementation Details
We evaluate our software design on a RISC-V-based multi-core MCU, Greenwaves Technologies' GAP9. This platform embodies an instance of the PULP Platform with a 9-core Cluster equipped with 4 shared FPUs and HW support for SIMD FP16 instructions. In our implementation, input (output) data and gradients are stored in the large off-cluster L2 memory and are copied to (from) L1 using the Cluster DMA before (after) the computation of each training step. The _Im2Col_ and _Im2Row_ transform functions are executed by the cluster cores to gain data load/store parallelism; the destination matrix, also placed in the L1 memory, feeds the MM kernel. When the FP16 datatype is used, the amount of data to be copied is reduced by 2\(\times\) with respect to FP32, leading to faster shape transform operators. Padding can introduce a large overhead because of the additional control instructions in the inner loop of the copy that check for zero-insertion. For example, an _Im2Col_ operator applied to an input activation of size \(8\times 8\times 1\) may suffer up to a 60% cycle increase in the case of a \(3\times 3\) filter. Larger input channel sizes help reduce this overhead.
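To illustrate the data movement described above, the sketch below outlines one tiled FW step; the DMA helpers and kernel signatures are illustrative placeholders, not the actual PULP-TrainLib or PMSIS interfaces.

```c
#include <stddef.h>

// Assumed helpers: blocking L2<->L1 copies via the Cluster DMA and the
// per-tile transform/MM kernels sketched earlier in this section.
extern void dma_copy_in(void *l1_dst, const void *l2_src, size_t bytes);
extern void dma_copy_out(void *l2_dst, const void *l1_src, size_t bytes);
extern void im2row_tile(const float *x_l1, float *mat_l1);
extern void mm_tile(const float *mat_l1, const float *w_l1, float *y_l1);

// One FW step over n_tiles tiles: copy-in, reshape, MM, copy-out.
void fw_step_tiled(const float *x_l2, const float *w_l1, float *y_l2,
                   float *x_l1, float *mat_l1, float *y_l1,
                   int n_tiles, int in_elems, int out_elems) {
    for (int t = 0; t < n_tiles; t++) {
        dma_copy_in(x_l1, x_l2 + (size_t)t * in_elems,
                    (size_t)in_elems * sizeof(float));
        im2row_tile(x_l1, mat_l1);    // shape transform, run by the cores
        mm_tile(mat_l1, w_l1, y_l1);  // compute on single-cycle L1 data
        dma_copy_out(y_l2 + (size_t)t * out_elems, y_l1,
                     (size_t)out_elems * sizeof(float));
    }
}
```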
### FP16 and FP32 Optimized Matrix Multiplications
First, we study the performance of the optimized MM kernels on the targeted platform. Fig. 9 shows the throughput expressed as the ratio between the number of MAC operations and the measured clock cycles, i.e., MAC/clk; a higher MAC/clk score corresponds to faster execution. We analyze single-core runs of multiple-sized MM and MM\({}_{T}\) functions featuring different unrolling factors for both the FP16 and FP32 datatypes. For every setting, we report the upper-bound limit (yellow bar), computed by excluding from the cycle count all instructions except MAC, loads, and stores. For comparison purposes, we also benchmark our FP32 MM kernels on an STM32L4 MCU.
The FP32 MM baseline without loop unrolling (marked as \(1\times 1\) in the plot) presents an average throughput of 0.24 MAC/clk for all three considered cases. The same performance is measured for the transposed form MM\({}_{T}\), because the latency spent to access and process 32-bit data is the same. This is up to 2.4\(\times\) faster than the same kernel running on the STM32L4 device, thanks to the compilation toolchain, which fully exploits the underlying hardware by leveraging post-increment load/store instructions and hardware loops to reduce the iteration overheads. Using \(2\times 4\) loop unrolling, the throughput of the MM kernels further increases by 2.11\(\times\) with respect to the baseline. Compared to an equivalently unrolled STM32L4 port, this is also 2.36\(\times\) faster. Further increasing the unrolling factor is detrimental: the register file pressure of unrolling forces frequent register spilling to the stack, leading to severe slowdowns.
Introducing FP16 SIMD vectorization and MM\({}_{T}\) kernels, a maximum speed-up of 1.91\(\times\) is measured vs. the fully unrolled FP32 case, reaching a top performance of 1.07 MAC/clk. For both of the analysed unrolling factors, the performance varies across the matrix sizes but is generally superior to that of the FP16 MM kernels. When the innermost K dimension is large (e.g., Fig. 9-a), the MM\({}_{T}\) with \(2\times 4\) loop unrolling shows a \(\sim\)25% performance gain, which leads to performance near the theoretical upper bound. This condition is common in the deep layers of convolutional models, featuring a relatively large number of channels (e.g., more than 8 input and output channels and a spatial size of 8). When K is small, the gain is less substantial but still present (\(\sim\)6% in Fig. 9-c). The reason for this effect is that innermost loops with more iterations positively impact the execution of the unrolled MM and MM\({}_{T}\) kernels, amortizing the overhead introduced by the result accumulation and store instructions of the outer loops.
### Conv2D ODL Primitives
In this section, we evaluate the primitive-level optimizations introduced in Sec. 5. Tab. 5 reports the shapes of the four Conv2D layers and the PointWise layer under analysis. The sizes are chosen as portions - or tiles - of several input and weight tensors that fit layers from the ResNet8 model in Fig. 13. In particular, CONV1, CONV2 and CONV3 are possible tiles of Layers 2 and 3, featuring large \(H\times W\) and \(C\) sizes. Conversely, CONV4 is a possible tile of the input layer, while PW CONV represents Layer 9 of the ResNet8 model. In these experiments, FP32 layers use the MM kernels, while FP16 ones employ MM\({}_{T}\). An HWC data layout is adopted.

\begin{table}
\begin{tabular}{|l||c|c|c||l||c|c|c|} \hline
**Conv2D - HWC** & MM Kernel & \(A\) (\(1^{st}\) Operand) & \(B\) (\(2^{nd}\) Operand) & **PW Conv - HWC** & MM Kernel & \(A\) (\(1^{st}\) Operand) & \(B\) (\(2^{nd}\) Operand) \\ \hline
_FP32 FW_ & & _Im2Row_(X) & W & _FP32 FW_ & & X & W \\
_FP32 BW-WG_ & MM & _Im2Col_(X) & dY & _FP32 BW-WG_ & MM & Tr(X) & dY \\
_FP32 BW-IG_ & & _Im2Row_(dY) & B-T(W) & _FP32 BW-IG_ & & dY & Tr(W) \\ \hline
_FP16 FW_ & & _Im2Row_(X) & W\({}^{T}\) & _FP16 FW_ & & X & W\({}^{T}\) \\
_FP16 BW-WG_ & MM\({}_{T}\) & Tr(dY) & _Im2Col_(X) & _FP16 BW-WG_ & MM\({}_{T}\) & Tr(dY) & Tr(X) \\
_FP16 BW-IG_ & & _Im2Row_(dY) & B-T(W\({}^{T}\)) & _FP16 BW-IG_ & & dY & Tr(W\({}^{T}\)) \\ \hline \hline
**Conv2D - CHW** & MM Kernel & \(A\) (\(1^{st}\) Operand) & \(B\) (\(2^{nd}\) Operand) & **PW Conv - CHW** & MM Kernel & \(A\) (\(1^{st}\) Operand) & \(B\) (\(2^{nd}\) Operand) \\ \hline
_FP32 FW_ & & W & _Im2Col_(X) & _FP32 FW_ & & W & X \\
_FP32 BW-WG_ & MM & dY & _Im2Row_(X) & _FP32 BW-WG_ & MM & dY & Tr(X) \\
_FP32 BW-IG_ & & B-T(W) & _Im2Col_(dY) & _FP32 BW-IG_ & & Tr(W) & dY \\ \hline
_FP16 FW_ & & W & _Im2Row_(X) & _FP16 FW_ & & W & Tr(X) \\
_FP16 BW-WG_ & MM\({}_{T}\) & dY & _Im2Col_(X) & _FP16 BW-WG_ & MM\({}_{T}\) & dY & X \\
_FP16 BW-IG_ & & B-T(W) & _Im2Row_(dY) & _FP16 BW-IG_ & & Tr(W) & Tr(dY) \\ \hline
\end{tabular}
\end{table}
Table 4: On-Device Learning operations for FP32 and FP16 Conv2D and PW Conv layers. \(A\) and \(B\) denote the first and second operand of the used MM kernel. \(W^{T}\) indicates a tensor already transposed in memory. _Tr(W)_ instead transposes the operand at runtime.
Fig. 10 shows the latency of a complete training step, consisting of the FW and BW passes on a single data point, for the considered layers when leveraging 8-core processing. Measurements are shown in terms of normalized latency, measured in clock cycles, with respect to the total number of MAC operations (cycles/MAC). A lower score indicates a smaller latency. In the top row, we highlight the latency breakdown among the FW, BW-IG, and BW-WG phases, while in the bottom rows, we provide a detailed report of the internal operations: the MM/MM\({}_{T}\) kernels, the _Tr/B-T_ transpose operators, _Im2Col/Im2Row_, and the DMA transfers between the L1 and L2 memories.
For every layer shape, the training step is dominated by the MM kernels in their fully-unrolled version. When applied to Conv2D training, the FP32 2\(\times\)4 MM kernels achieve up to 3.66 MAC/clk when executing both the FW and BW-IG steps of CONV1. In the same case, the FP16 2\(\times\)4 SIMD MM\({}_{T}\) kernels achieve 6.63 MAC/clk, outperforming the FP32 kernels by 1.81\(\times\) on 8 parallel cores. A similar result has already been observed for the single-core case (Fig. 9).
In the case of a single FW or BW step, the FP16 SIMD optimizations of the \(MM_{T}\) kernel achieve up to a 1.72\(\times\) performance increase, as observed for the CONV1, CONV2, CONV3 and PW CONV scenarios. This speedup is uniform across all the training steps. Only for CONV4 does the usage of FP16 SIMD bring slower execution than the FP32 computation. In this corner case, both the innermost loop and the external loops feature a reduced size because of the single channel of the input image, preventing the kernel from exploiting the acceleration opportunities given by loop unrolling. Instead, a leftover subroutine is invoked to handle the operation, slowing down the process. This infrequent case may appear in the first layer of a model; a kernel without loop unrolling, or using FP32, should be preferred in this case. However, when considering a full model design, e.g., ResNet8, this type of layer has a very limited impact on the total computation time, i.e., less than 3% on a ResNet8.
The share of the total latency spent in the MM and MM\({}_{T}\) kernels depends on the layer's size. In particular, a larger channel size increases the weight tensor size and, therefore, the impact of the DMA transfers. On the contrary, large H and W sizes increase the computational intensity. As a result, for shapes like CONV1, the total execution latency is dominated by the MM/MM\({}_{T}\) kernels, which account for more than 76% on average. In other cases like CONV2 and CONV3, instead, the MM/MM\({}_{T}\) kernels account for 68% of the latency, due to increased DMA activity (CONV2, because of a larger weight tensor than CONV1) and shape transformation (CONV3, because of a larger activation tensor) overheads. These extra costs determine a slight decrease in the performance of the ODL primitives, whose latency increases by up to 11% even with large channel and spatial sizes. The throughput of the MM kernels is substantially reduced, as already observed, in the case of small channel sizes, as for CONV4. In the case of PW CONV, the compute efficiency reaches 4.76 MAC/clk in FP16 on average, 4.5% less than CONV1; the MM/MM\({}_{T}\) kernels represent 84% of the total latency.
The execution of the Conv2D ODL primitives is also highly influenced by the _Im2Col_ and _Im2Row_ operators, which can represent a large overhead in each training step. The impact of these shape transform operators is particularly relevant when the sizes of the input and output tensors widely exceed the size of the weight tensor. In the case of CONV3 and CONV4 with the FP32 datatype, these overheads represent on average 19% and 33% of the total latency cost, respectively. On the contrary, layers with smaller input and output tensor sizes are less impacted by the compute cost of these shape transforms. The latency due to the _Im2Col/Im2Row_ operators is reduced by 1.2\(\times\) thanks to FP16 SIMD. This depends on the capability to move two contiguous data elements with a single 32-bit load/store. This speedup is almost constant for each layer shape, including corner cases such as CONV4. On the other hand, the _Im2Col/Im2Row_ operators are not used by the PointWise layer PW CONV because of the \(1\times 1\) filter size.
In the baseline FP32 implementation of the Conv2D primitives, the _B-T_ operator typically represents less than 3% of the total latency of a training step, depending on the number of input and output channels. FP16 optimizations introduce additional transposition operators in the ODL primitives to fully exploit the MM\({}_{T}\) kernels (Tab. 4). This extra overhead is however limited to at most 4% for typical layer and tile sizes, reaching 8% only for CONV4, where the impact of the MM/MM\({}_{T}\) is reduced. In the case of PW CONV, no \(B\)-\(T\) operator is required (Tab. 4). Extra latency costs are however incurred by the transpose operators, reaching up to 4% and 6% of the total workload for the FP32 and FP16 formats, respectively.

Figure 9: MAC/clk of different MM kernels on both an STM32L476RG and 1 RISC-V core. All cases perform 32768 MAC. Thanks to vectorization, we achieve a top 25.17% performance gain. However, the structure of the FP16 MM kernels limits performance when K is small, due to the accumulation of the inner product introduced by the intermediate loop.
Lastly, the DMA data transfers between the L2 and L1 memories represent 7% of the execution time for typical tile shapes, like CONV1 and CONV3. Given the large channel size, CONV2 features an increased weight tensor size with respect to CONV1 and CONV3. This increases the DMA latency, which reaches up to 14% of the total time. DMA transfers may occupy up to 20% in corner cases like CONV4. When FP16 is used, the time to transfer data is reduced by 1.6\(\times\) with respect to FP32 on average, thanks to the halved memory footprint. In the case of PW CONV, both the FP32 and FP16 DMA transfers are responsible for at most 12% of the total latency, representing the largest overhead term.
### Energy Evaluation
In this section, we evaluate the energy consumption of the layer primitives of Tab. 5. To this aim, we measure the power consumption of the building blocks of the training steps on a GAP9 SoC with a supply voltage of 0.8 V and a clock frequency of 370 MHz. The energy profile is then calculated by taking into account the latency of the primitives shown in Fig. 10. Fig. 11 shows the power costs (in mW) of CONV 1-4 and PW CONV. We break down the contributions of the different components (MM, DMA and the transform operators) for both the FP32 and FP16 kernels. The power consumption is plotted after averaging the measurements across the training steps; a low variance was observed because of the similar workload composition.
The average power consumption of CONV 1-4 reaches up to 63.6 mW for both FP32 and FP16. This is a result of the prominence of the MM/MM\({}_{T}\) operators (70% of the total latency), whose consumption surpasses 66 mW in FP32 and 59 mW in FP16. In the FP16 case, the MM\({}_{T}\) kernels have a lower power consumption, suggesting that the hardware is not fully utilized due to a reduction of the parallel efficiency of the FP16 MM\({}_{T}\) kernels: the maximum speedup is limited to 6.23\(\times\) on 8 cores. This is due to the matrix shapes of each training step in FP16, which do not offer favorable parallelization schemes. This effect can also be observed in terms of MAC/clk: while a theoretical 7.13 MAC/clk would be expected (2\(\times\) the FP32 performance on the same layer), only 6.37 MAC/clk is measured on the FP16 primitives. This 12% difference is reflected in the decrease of power consumption.
The FP32 and FP16 _Im2Row/Im2Col_ operators feature an average power consumption of 55.3 mW. The similar cost of these operators indicates a similar activity of the hardware units. In the case of PW CONV, the average power consumption approximates 60 mW, in line with CONV 1-4. In this case, the power consumption due to _Tr/B-T_ entirely depends on the transposition operators, which consume 69.4 mW on average, independently of the datatype. Both CONV 1-4 and PW CONV feature similar power consumption for both the MM kernels and the DMA transfers. The latter represent the smallest power overhead, as their consumption is as little as 33.5 mW on average.
When analyzing the energy consumption, we account for a minimum of 4 \(\mu J\) in the case of a full FP32 training step of CONV4, and a maximum of 25.4 \(\mu J\) for CONV1. In the case of FP16, the range goes from 3.8 \(\mu J\) for CONV4 (a corner case with a latency similar to FP32) to 17.4 \(\mu J\) for CONV3. Reflecting the latency breakdown observed in Fig. 10, the transform operators (i.e., _Im2Col/Im2Row_ and _Tr/B-T_) of CONV 1-3 account for up to 18.6% of a single training step in FP32 and 27.3% in FP16. The same operators in CONV4 consume up to 47% of the total, because of the reduced channel size of the input activation, which makes the BW-IG _Im2Row/Im2Col_ more impactful and limits the maximum performance of the MM/MM\({}_{T}\) kernels.

\begin{table}
\begin{tabular}{c||c|c|c|c|c|c|c|c} \hline
**Layer** & \(C_{i}\) & \(H_{i}\) & \(W_{i}\) & \(k_{h}\) & \(k_{w}\) & \(C_{o}\) & \(H_{o}\) & \(W_{o}\) \\ \hline \hline
CONV1 & 16 & 8 & 8 & 3 & 3 & 16 & 8 & 8 \\ \hline
CONV2 & 16 & 4 & 4 & 3 & 3 & 32 & 4 & 4 \\ \hline
CONV3 & 8 & 16 & 16 & 3 & 3 & 8 & 16 & 16 \\ \hline
CONV4 & 1 & 8 & 8 & 3 & 3 & 16 & 8 & 8 \\ \hline
PW CONV & 32 & 8 & 8 & 1 & 1 & 64 & 8 & 8 \\ \hline
\end{tabular}
\end{table}
Table 5: Tile shapes of the ResNet8 Conv2D and PointWise layers.

Figure 10: Training latency (top row) of four 2D Convolution layers, namely CONV1–4, and a PointWise Convolution layer, namely PW CONV, whose shapes are reported in Tab. 5. Measurements are expressed in terms of cycles/MAC. A latency breakdown of the total performance is provided in the bottom rows (lower results are faster).
### Conv2D HWC/CHW Layout Comparison
Fig. 12 shows a comparison of the performance achieved with both the HWC and CHW formats on the same Conv2D shape (CONV1). In the case of the FP32 training primitives, the MM kernels are on average responsible for the largest share of the execution time of each training step, both for the CHW primitives (81%) and for the HWC primitives (83%). The FP32 DMA transfers and transpositions present the same latency, since data can be accessed and manipulated element-by-element, disregarding vectorization. However, the contribution coming from the _Im2Col/Im2Row_ operators is reduced by 36% using the HWC format. This depends on the structure of the HWC _Im2Col_ and _Im2Row_ algorithms, which load and reshape \(k_{h}\times k_{w}\times C_{i}\) tensor elements in each iteration of the inner loop. On the contrary, the CHW operators manipulate only \(k_{h}\times k_{w}\) elements per iteration, leading to larger reshaping overheads due to additional control instructions. Furthermore, the HWC format allows slightly faster execution of the MM kernels in some cases. This depends on the shape of the involved matrices, which can allow larger sizes on the N dimension. In turn, this may enable larger chunks when parallelizing the outer loops on multiple cores. Overall, the FP32 HWC-shaped Conv2D kernels show up to 6% lower latency with respect to CHW.
In the case of the FP16 primitives, the MM\({}_{T}\) kernels occupy 68% of the latency of the CHW primitives. The dominance of the MM\({}_{T}\) kernels is even larger in the case of the HWC kernels, where MM\({}_{T}\) occupies 78% of the latency. The impact of the DMA transfers is the same in both FP32 and FP16. In terms of \(MM_{T}\) kernels, the CHW and HWC formats have similar performance, since both formats provide similar matrix shapes in each training step. In the HWC case, the FP16 _Im2Col_ and _Im2Row_ kernels achieve 2\(\times\) faster performance than in the CHW case, thanks to fully vectorizable data accesses (see Fig. 2), impacting the overall training step latency by only 12%. In the CHW case, the transfer chunks are smaller than in HWC, yielding larger overheads that impact the overall latency by as much as 24%. In absolute terms, the CHW FP16 _Im2Col/Im2Row_ algorithms suffer a 32% slowdown with respect to FP32 due to poorer vectorization. Instead, the same FP16 HWC operators prove to be 1.25\(\times\) faster than FP32. Therefore, FP16 SIMD execution in the HWC data format proves to be 11% faster than CHW at the same data precision.
### End-To-End TinyML Model Training
In this section, we apply the proposed methodology to the layers of complete TinyML models - a ResNet8 for Image Classification and a DS-CNN for Audio Keyword Spotting, whose architectures are represented in Fig. 13. For both models, we estimate the latency for running a training step on a single input sample (i.e., batch size of 1) using our FP16 or FP32 primitives. We use an HWC format to implement the model layers; only the DepthWise Convolutions are represented in memory using a CHW layout.

\begin{table}
\begin{tabular}{l|c c|c c} \hline
 & \multicolumn{2}{c|}{ResNet8} & \multicolumn{2}{c}{DS-CNN} \\ \hline
**GAP9 Training [This work]** & FP32 & FP16 & FP32 & FP16 \\ \hline
Total Clock Cycles [Millions] & 11.9 & 6.3 & 4.3 & 2.4 \\
\% MM & 0.73 & 0.74 & 0.71 & 0.75 \\
Latency GAP9@240MHz [ms] & 49.5 & 26.3 & 17.7 & 9.9 \\
Latency GAP9@370MHz [ms] & 32.1 & 17.1 & 11.5 & 6.4 \\ \hline
\end{tabular}
\end{table}
Table 6: Latency on GAP9 for a complete training step of the ResNet8 and DS-CNN models and comparison with top inference-only scores on MCUs.

Figure 11: Average power analysis of CONV 1-4 and PW CONV, whose latency analysis was presented in Fig. 10. Results are presented in mW.

Figure 12: Comparison of the CHW and HWC formats for a Conv2D layer of the same size with both FP32 and FP16. The considered Conv2D has the same shape as Fig. 10's CONV1.

Figure 13: ResNet8 (top) and DS-CNN (bottom) model architectures, including the shapes of the activation tensors and the layer types.
From a memory viewpoint, we consider storing the activation tensors, the weights, and their gradients in the on-chip L2 RAM memory. The total memory footprint of the FP32 ResNet8 amounts to 893 kB and reaches 772 kB in the case of the DS-CNN model. These costs are reduced by 2\(\times\) if the FP16 datatype is used, dropping to 443 kB and 386 kB, respectively. On the other side, both the FP32 and FP16 versions maximize the utilization of the L1 memory buffer, which we limit to 64 kB for storing the tensor tiles. E.g., in the case of ResNet8, the computation of the FP32 FW step of Layer 2 can be split into 32 tiles, each one featuring a memory cost of 47 kB. Thanks to the 2\(\times\) compression of FP16, the number of channels per tile is doubled: the workload splits into 16 tiles with a requirement of 53 kB per tile. A similar tiling logic is applied to the other layers and to the DS-CNN layers.
Tab. 6 shows the latencies measured on GAP9 in terms of clock cycles. In the case of the ResNet8 model, the overall training time is dominated by the Conv2D layers (up to 98%) for both the FP32 and FP16 formats. The PointWise layers, instead, represent up to 2.3% of the workload. Using the FP16 primitives accelerates the training time by 1.88\(\times\) vs. the FP32 implementation. The MM operator presents the largest execution time, up to 74% of the total training time. Moreover, these kernels show high efficiency (4.2 MAC/clk on average for MM in ResNet8), because large tile sizes can be used, similarly to CONV1 and CONV2 of Fig. 10. The _Im2Row/Im2Col_ shape transforms (up to 14% for FP16) and the DMA transfers (7%) constitute the other most expensive tasks. On the other side, the workload of the DS-CNN is dominated by the five DepthWise Separable Convolutions and, in particular, by the PointWise layers, which take more than 54% of the computation in both FP32 and FP16.
Running on GAP9 clocked at 370 MHz, a complete training step of ResNet8 and DS-CNN takes, respectively, 17.1 ms and 6.4 ms. This result paves the way for real-time ODL on low-end MCUs using the canonical backpropagation algorithm. Considering a data streaming scenario, the ResNet8 and DS-CNN models can sustain the throughput of, respectively, image sensor acquisition (typically 10-30 fps for embedded applications) and audio frame processing every 0.5 s, as used for Keyword Spotting. This also holds when GAP9 works in low-power mode at 240 MHz, where the average power consumption of the training task reduces to 27.18 mW, 2.23\(\times\) lower than when working at full speed. Given the existing slack between the training latency and typical sensor data rates, we speculate on the feasibility of a real-time mini-batch-based ODL framework, which we will investigate in future work.
### Continual Learning on MCU case-study
Lastly, we evaluate our design on the class-incremental Continual Learning approach proposed by Pellegrini _et al._[20]. A MobileNet128 trained on a 10-class image classification task learns to recognize a new class after acquiring 100 image samples of the new objects, without forgetting the previously learned classes. The new data are labelled and mixed with 500 pre-stored embeddings of the other classes. This dataset is used to fine-tune the last layers of the MobileNet128 to learn the 11th class; SGD optimization is applied over 8 epochs.
Table 7 compares our solution with other state-of-the-art ODL frameworks on this task. In more detail, we estimate the latency and the energy consumption required to train the weight parameters of the last 7 layers (up to layer **DW21**), the last 5 layers (**DW23**), or only the last layer (**LIN27**). For comparison purposes, we consider the most mature ODL software libraries for MCUs: AIfES [30], targeting an STM32L476RG, and PULP-TrainLib [31], which targets multi-core RISC-V MCUs but only supports FP32 ODL kernels.
When fine-tuning multiple layers, our solution proves to be up to 1.6\(\times\) and 767\(\times\) faster than PULP-TrainLib on GAP9 and AIfES on an STM32L4 MCU, respectively. While the first gain is motivated by the shift from FP32 to FP16, the second, larger gap depends on multiple factors: the different ISA and CPU micro-architecture (\(\sim\)2.4\(\times\)), the reduced precision (\(\sim\)1.6\(\times\)), the 8-core parallelization (\(\sim\)7.5\(\times\)), the GAP9 clock frequency being higher than the STM32's (\(\sim\)4.6\(\times\)), and our HW/SW optimizations, which we estimate to contribute \(\sim\)6\(\times\). Conversely, the gain is reduced when retraining only the last layer. FP16 fully-connected training primitives are not yet fully optimized with SIMD vectorization; in this case, our solution achieves the same performance level as the FP32 implementation of PULP-TrainLib. Moreover, our design is 292.02\(\times\) more energy-efficient than AIfES; the lower gain compared with the performance speedup is due to GAP9's higher power consumption than an STM32L4 at maximum speed. Thanks to the proposed technique, the adaptation time for highly accurate Continual Learning solutions [20] is reduced from whole days (entirely infeasible!) to 3-5 minutes, depending on the embedding layer depth (**DW21** or **DW23**).

\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline
 & **Platform** & **Layers** & **Latency (s)** & **Energy (J)** \\ \hline
\multirow{3}{*}{AIfES [30]} & \multirow{3}{*}{STM32L4 @ 80 MHz} & **DW21** & 256055 & 7799 \\
 & & **DW23** & 142157 & 469 \\
 & & **LIN27** & 102.4 & 3.35 \\ \hline
\multirow{3}{*}{PULP-TrainLib [31]} & \multirow{3}{*}{Greenwaves GAP9 @ 370 MHz} & **DW21** & 504.1 & 32.24 \\
 & & **DW23** & 303.8 & 19.43 \\
 & & **LIN27** & 0.89 & 0.66 \\ \hline
\multirow{3}{*}{This Work} & \multirow{3}{*}{Greenwaves GAP9 @ 370 MHz} & **DW21** & 308.5 & 18.68 \\
 & & **DW23** & 189.3 & 11.26 \\
 & & **LIN27** & 0.89 & 0.66 \\ \hline \hline
\end{tabular}
\end{table}
Table 7: Latency and energy evaluation for Continual Learning on MCUs.

## 7 Conclusion

In this paper, we introduced a novel methodology to optimize the execution of DNN primitives for On-Device Learning on multi-core MCUs powered by FP16 SIMD FPUs. We proposed a strategy to reshape the training kernels into Matrix Multiplication operators. Furthermore, we provided an efficient implementation of the MM kernel exploiting FP16 SIMD instructions, and we defined the transform functions needed for the different steps of the training algorithm. When benchmarked on a Continual Learning scenario, our solution deployed on a multi-core RISC-V MCU was demonstrated to be more than two orders of magnitude faster than the other available ODL library for single-core MCUs. To foster future research on MCU-based On-Device Learning, we release the code of our library as open-source software at: [https://github.com/pulp-platform/pulp-trainlib](https://github.com/pulp-platform/pulp-trainlib).
|
2309.02692 | Hy-DeFake: Hypergraph Neural Networks for Detecting Fake News in Online
Social Networks | Nowadays social media is the primary platform for people to obtain news and
share information. Combating online fake news has become an urgent task to
reduce the damage it causes to society. Existing methods typically improve
their fake news detection performances by utilizing textual auxiliary
information (such as relevant retweets and comments) or simple structural
information (i.e., graph construction). However, these methods face two
challenges. First, an increasing number of users tend to directly forward the
source news without adding comments, resulting in a lack of textual auxiliary
information. Second, simple graphs are unable to extract complex relations
beyond pairwise association in a social context. Given that real-world social
networks are intricate and involve high-order relations, we argue that
exploring beyond pairwise relations between news and users is crucial for fake
news detection. Therefore, we propose constructing an attributed hypergraph to
represent non-textual and high-order relations for user participation in news
spreading. We also introduce a hypergraph neural network-based method called
Hy-DeFake to tackle the challenges. Our proposed method captures semantic
information from news content, credibility information from involved users, and
high-order correlations between news and users to learn distinctive embeddings
for fake news detection. The superiority of Hy-DeFake is demonstrated through
experiments conducted on four widely-used datasets, and it is compared against
eight baselines using four evaluation metrics. | Xing Su, Jian Yang, Jia Wu, Zitai Qiu | 2023-09-06T04:00:21Z | http://arxiv.org/abs/2309.02692v2 | # Hy-DeFake: Hypergraph Neural Networks for Detecting Fake News in Online Social Networks
###### Abstract
Nowadays social media is the primary platform for people to obtain news and share information. Combating online fake news has become an urgent task to reduce the damage it causes to society. Existing methods typically improve their fake news detection performances by utilizing textual auxiliary information (such as relevant retweets and comments) or simple structural information (_i.e._, graph construction). However, these methods face two challenges. First, an increasing number of users tend to directly forward the source news without adding comments, resulting in a lack of textual auxiliary information. Second, simple graphs are unable to extract complex relations beyond pairwise association in a social context. Given that real-world social networks are intricate and involve high-order relations, we argue that exploring beyond pairwise relations between news and users is crucial for fake news detection. Therefore, we propose constructing an attributed hypergraph to represent non-textual and high-order relations for user participation in news spreading. We also introduce a hypergraph neural network-based method called Hy-DeFake to overcome the challenges. Our proposed method captures semantic information from news content, credibility information from involved users, and high-order correlations between news and users to learn distinctive embeddings for fake news detection. The superiority of Hy-DeFake is demonstrated through experiments conducted on four widely-used datasets, and it is compared against six baselines using four evaluation metrics.
Misinformation; Fake News Detection; High-order Relation; Hypergraph Neural Networks
## I Introduction
Nowadays, social media platforms such as Facebook and Twitter have emerged as primary sources for accessing timely news and sharing information. While these platforms offer the advantage of instant access to real-time news from diverse perspectives, the ease of engagement and interaction has led to the rapid proliferation of fake news. This spread of misinformation has resulted in significant physical and mental harm to individuals. For instance, over 5,800 people were admitted to hospital after falling victim to a piece of COVID-19 fake news claiming that high-concentration alcohol can sanitize the body and eradicate the virus [1]. Fake news not only poses a threat to public health but also undermines trust in governments and alters public perception [2]. For example, research has shown that fake news circulating on social media played a role in shaping the vote for the Brexit referendum [3]. During the Brexit campaign, a significant amount of false information circulated on social platforms, with pro-Leave fake news spreading more widely on Twitter than pro-Remain fake news. The widespread dissemination of fake news has had detrimental social consequences. Therefore, identifying fake news online is an urgent task to ensure the public receives trustworthy information and reliable guidance.
Various methods have been developed for the detection of fake news in online social networks [4, 5]. A simple technique to identify fake news is text classification, but the results indicate that such approaches have limited performance since they only consider the news content itself and overlook the social context [6]. To achieve more accurate results, some studies adopt textual data from the social context as auxiliary information, such as comments, tweets, and retweets of the source news [7, 8, 9]. However, these methods face a practical problem: many users tend to retweet the news directly without adding comments or expressing their stances, which results in the absence of textual side information. Let us take the spreading of a piece of COVID-19 news as an example, as shown in Fig. 1. Most users participate in sharing this news by directly retweeting it, such as _#User1_ and _#User7_. The comments published during retweeting are often insignificant, like the symbols from _#User8_ and the expression from _#User9_. As of May 11, 2023, the news in Fig. 1 had 10.2K retweets without comments, but only 380 retweets with comments. Therefore, the challenge in real-world fake news detection is how to exploit non-textual auxiliary information from the social context for accurate results, which we refer to as _challenge #1_.
To address _challenge #1_, some models utilizing non-textual structural information have been proposed. For example, some models construct propagation graphs for tweets and retweets [10], while others employ heterogeneous graphs for news and users [11]. These approaches have demonstrated notable performance in identifying fake news, as the extracted structural information aligns well with the network formed by news spreading or news-user interaction in social media. However, these methods still have a limitation: they typically capture only pairwise relations in social networks. Real-world social networks contain numerous high-order and intricate relations that these methods fail to capture beyond pairwise relations. For instance, a pairwise relation can represent the relationship between a user and a single piece of news he or she posts, but it cannot represent the relations among multiple news pieces posted by the same user. Thus, extracting high-order relations from social networks to gather in-depth information for enhancing fake news detection presents _challenge #2_.

Fig. 1: An illustrative example of a piece of news spreading by retweeting.
In this work, to address _challenge #1_, we leverage user information to complement the plain text of news content. Given the participatory and user-driven nature of news dissemination on online social networks, we argue that users' engagement and dissemination patterns may vary between fake and real news, providing valuable insights for fake news detection. Specifically, to incorporate non-textual information from the social context, we aim to extract the inherent attributes of users as well as the structures of both news and users during news spreading. To tackle _challenge #2_, we employ the concept of a hypergraph to capture the intricate high-order relations between news and users [12]. Hypergraphs are generalizations of simple graphs in which an edge can connect an arbitrary number of nodes. As depicted in Figure 2, a simple graph only connects two nodes per edge, and its adjacency matrix illustrates the pairwise relation between nodes. In contrast, the incidence matrix of a hypergraph shows that multiple nodes are assigned to one edge, enabling the extraction of high-order information. Therefore, in this work users are abstracted as nodes, and news items are represented as hyperedges. This hypergraph construction allows us to capture, within each hyperedge, the intricate high-order relation between a source news piece and the users who participated in its dissemination, enabling the learning of distinctive information for fake news detection. Within this hypergraph, node attributes represent user properties related to credibility, while hyperedge attributes encompass the textual contents of news.
To comprehensively learn the latent high-order correlation between news and users in our constructed hypergraph and overcome the two challenges, we propose a method based on **H**ypergraph neural networks for **D**etecting **F**ake news, abbreviated as Hy-DeFake. Hy-DeFake consists of four main parts, taking the attributed hypergraph as input: (1) News semantic channel: it updates hyperedge features by learning semantic embeddings of news contents, as each hyperedge represents a piece of news; (2) User credibility channel: it learns node features by embedding both user credibility information and the high-order structural information between news and users; (3) Consistency-based feature fusion: it integrates the semantic embeddings of news (_i.e._, hyperedges) and the credibility embeddings of the relevant users (_i.e._, nodes) by constraining their consistency in the low-dimensional space; (4) Fake news detection: it models our task as hyperedge classification and classifies the news based on the integrated embeddings. Specifically, Hy-DeFake simultaneously trains the news semantic channel and the user credibility channel to acquire diverse information for identifying fake news. The news semantic channel leverages a language model to exploit semantic embeddings of news. The user credibility channel employs a hypergraph autoencoder, whose encoder is a hypergraph convolutional network that learns the high-order information between nodes and hyperedges. Through this process, the two channels learn, respectively, the semantic embeddings of news and the user embeddings that capture both user credibility and the high-order correlation between news and users. This work presents the following key contributions:
* We introduce an innovative approach by constructing an attributed hypergraph to represent the process of news spreading in online social networks. By abstracting fake news detection as hyperedge classification, we capture the intricate high-order relation between news and users in social contexts, enabling us to achieve accurate results.
* The proposed Hy-DeFake inventively utilizes hypergraph neural networks for fake news detection. It effectively captures the credibility information of users and the high-order correlation between news and users. Both of these aspects provide distinctive information that contributes to fake news detection.
* Extensive experiments demonstrate that Hy-DeFake generally surpasses six baseline methods on four real-world datasets from different domains.
* We uncover a positive correlation between news authority and user credibility. Users who spread fake news exhibit more intensive interaction compared to those who spread real news, resulting in the formation of a denser community.
This paper consists of the following parts. Section II reviews the relevant literature on hypergraph learning, fake news detection, and the distinctions between Hy-DeFake and existing methods. The relevant definitions and hypergraph construction are formulated in Section III. Following that, section IV presents each part of the proposed framework, and section V shows experimental evaluations. Finally, we draw conclusions in this work and outline our future work in Section VI.
## II Related work
To give an overview of the relevant research, this section first reviews the work on hypergraph learning and its applications, then further discusses the work on fake news detection.
### _Hypergraph Learning_
The development of graph neural networks (GNN) [13] has demonstrated its success in various tasks and has garnered significant attention for its ability to uncover structural relations [14, 15]. However, traditional graph learning methods primarily focus on pairwise connections, limiting their capacity to express complex high-order relations that extend
Fig. 2: Difference between simple graph and hypergraph.
beyond pairwise associations. Therefore, as a promising and flexible technique for modeling complex high-order relations, hypergraph learning is emerging recently. The first work that applies GNNs to hypergraphs is HGNN [16]. To leverage complex and high-order relations for better representation learning, it designs a hyperedge convolution operation. HyperGCN [17] trains a GCN on hypergraph by adopting tools from the spectral theory of hypergraphs and extending its faster variant. Hyper-SAGNN [18] develops a self-attention based GNN for homogeneous and heterogeneous hypergraphs. DHGNN (Dynamic Hypergraph Neural Networks) [19] is a framework that exploits adjusted feature embeddings to dynamically update hypergraph structure. HyperGCL [20] is a hypergraph generative model that applies contrastive learning for robust and fair hypergraph representation learning.
Due to its capacity of capturing complicated high-order relations, hypergraph neural networks have found applications in various domains. For instance, SHINE [21] is a subhypergraph inductive neural network for simultaneously capturing functional relations for genes and pathways. HHGR [22] and DH-GCN [23] are two examples of exploiting hypergraph neural networks for social recommendation. GroupNet [24] introduces a multiscale hypergraph neural network for trajectory prediction. We can conclude that a hypergraph neural network is suitable for numerous real-world scenarios, particularly in the context of complex social networks. However, its application in the domain of detecting fake news in online social networks is relatively limited. Available approaches utilizing hypergraphs for fake news detection are quite limited [25, 26] and primarily focus on news contents, lacking the integration of high-order relations of both users and news. Thus, this work is proposed to address such issue.
### _Fake News Detection_
#### Ii-B1 Text-based Detection
From the perspective of news contents, text-based approaches detect fake news by distinguishing linguistic features and writing styles between real and fake news. Perez-Rosas _et al._, for instance, [6] introduced a text-based method focusing on learning linguistic features to identify fake news. In FakeBERT [27], Kaliyar _et al._ integrated BERT [28] with multiple parallel blocks of 1d-CNN for fake news detection. Apart from analyzing the textual contents of the news itself, auxiliary textual information is also integrated to enhance fake news detection. dEFEND [7], for example, adopts a hierarchical co-attention mechanism to capture the explainable sentences from both news and the relevant comments. Similarly, STANKER [8] utilizes auxiliary features through comments and leverages the level-grained attention-masked BERT model to uncover fake news. Furthermore, mining stance or emotion in news content can provide side information as well. For example, Zhang _et al._[29] specifically examined the impact of emotional cues in real and fake news, and incorporated dual emotion features to represent both individual emotions and the relationship between them. Yang _et al._ proposed two tree-structured models (TD-MIL and BU-MIL) [30] to simultaneously verify rumorous claims and detect the stances behind the relevant posts. While text-based models utilize textual information and auxiliary features for fake news detection, they commonly analyze each news piece in isolation, overlooking the intrinsic news-spreading mechanism. As a result, they may neglect valuable propagation information. These methods often lack user information which is crucial for understanding the social context to accurately identify fake news. Consequently, the performance of these approaches becomes limited.
#### Ii-B2 Graph-based Detection
Due to the compatibility between the propagation nature of news and the structural property of graphs, graph learning shows high-quality results in fake news detection. In graph-based methods, researchers typically extract structural relations of words or sentences, construct news propagation graphs or incorporate social contexts (such as user information) to detect fake news from both textual and structural information perspectives. For instance, Yao _et al._ used GCN [31] to learn news-level structure information. They constructed graphs to represent individual news pieces [32]. Wu _et al._[33] developed a causal graph to depict the relations between news and evidence in a counterfactual inference framework. Additionally, Ma _et al._[9] created a tree-structured graph based on tweets and utilized recursive neural networks to represent the graph in both top-down and bottom-up fashion. Similarly, Bi-GCN [10] is a bi-directional graph neural network that understands patterns of news propagation and dispersion by GCN [31]. To sum up, graph-based approaches have the ability to represent the textural and structural properties of debunking fake news.
To fully harness the potential of graph structure for accurate results, researchers also incorporate structural side information. CompareNet [34] exploits knowledge graphs to detect fake news. For capturing various connections, Huang _et al._[11] and HGAT [35] are two works constructing heterogeneous information networks (HINs). For considering user information in social contexts, UPFD [36] considers user preferences by summarizing historical posts, while Us-DeFake [37] captures news-user relations in a dual-layer graph. MFAN [38] integrates texts, visions, and social graph features in a unified framework, to obtain complementary relations between different modalities. Furthermore, Mehta _et al._[39] used inference operators to reveal interactions like the similarity between news content and user engagement patterns. SureFact [40] is based on multiple heterogeneous subgraphs that extract diverse information from claims, posts, keywords, and users. Similarly, Jin _et al._ constructs a claim-evidence graph by news articles, posts, and users, and then proposes [41] a fine-grained reasoning model FinerFact. PSIN [42] models the post and user interactions through a divide-and-conquer strategy to detect fake news. Albeit the variety of methods considers user information in social contexts, they are inadequate in modeling the complex high-order relations between news and users.
Therefore, we propose Hy-DeFake to address the aforementioned problems. Hy-DeFake goes beyond considering only textual information and incorporates structural information within a social context. Specifically, it also captures the high-order correlation between news and users to acquire distinctive representations for accurate identification of fake news.
## III Preliminaries
In this section, we begin by describing the notations and definitions used in this paper, and then present how a social network with news and users is modeled as a hypergraph. Finally, we state our problem of detecting fake news.
### _Notations and Definition_
Let \(\mathcal{N}=\{n_{1},n_{2},\cdots,n_{t}\}\) denote the news set, where \(t\) represents the number of news pieces and \(n_{i}\) represents the textual contents of news \(i\). Let \(\mathcal{U}=\{u_{1},u_{2},\cdots,u_{m}\}\) denote the user set, where \(m\) is the number of users. The corresponding user attributes are defined as \(\mathbf{X}\in\mathbb{R}^{m\times d}\), where \(d\) refers to the dimension of the user attribute vectors.
_Hypergraph Definition_: A hypergraph is a generalization of a graph and can be formulated as \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X})\), where \(\mathcal{V}\) and \(\mathbf{X}\) indicate the node set and the corresponding node attributes. \(\mathcal{E}\) signifies the set of hyperedges, where each hyperedge connects a subset of nodes rather than exactly two nodes. The diagonal matrices of edge degrees and node degrees are denoted \(\mathbf{D}_{e}\) and \(\mathbf{D}_{v}\), respectively. \(\mathbf{H}\in\{0,1\}^{|\mathcal{V}|\times|\mathcal{E}|}\) denotes the incidence matrix, where \(\mathbf{h}(v,e)=1\) if node \(v\) is on hyperedge \(e\) (_i.e._, \(v\in e\)), and \(\mathbf{h}(v,e)=0\) otherwise.
### _Hypergraph Construction_
In this work, to model the relation between each news piece and the users involved in its spreading in online social networks for the purpose of debunking fake news, we define an attributed hypergraph \(\mathcal{G}=(\mathcal{U},\mathcal{E},\mathbf{X},\mathcal{N})\) to represent these beyond-pairwise relations. Here the node set \(\mathcal{U}\) denotes the users who are involved in news spreading. The hyperedge set \(\mathcal{E}\) denotes the relation between news and users, where each hyperedge \(e_{i}\) represents a piece of news \(i\) and can connect more than two users who are involved in the spread of this news. \(\mathbf{X}\) and \(\mathcal{N}\) represent the attributes of nodes (_i.e._, users) and hyperedges (_i.e._, news), respectively. The node attributes of users are user properties, such as the number of followers and verified status, while the hyperedge attributes are the textual contents of news. Figure 3 shows how the raw data are transformed by hypergraph construction.
Thus, the task of this work is abstracted as distinguishing news in a hypergraph, _i.e._, a hyperedge classification task. Treating fake news detection as a binary classification task, we associate a binary label \(y\in\{0,1\}\) with each hyperedge \(e_{i}\) in the hypergraph. A label of 0 indicates that the corresponding hyperedge represents real news, whereas a label of 1 indicates fake news. Thus, the hypergraph for fake news detection can be denoted as \(\mathcal{G}=(\mathcal{U},\mathcal{E},\mathbf{X},\mathcal{N},\mathcal{Y})\), where \(\mathcal{Y}\) represents the label set assigned to the news.
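To make the construction concrete, the following minimal PyTorch sketch (all names are illustrative) builds the incidence matrix \(\mathbf{H}\) and the degree matrices \(\mathbf{D}_{v}\) and \(\mathbf{D}_{e}\) from news-user spreading records:

```python
import torch

def build_incidence(num_users, news_to_users):
    """Build the |V| x |E| incidence matrix H of the attributed hypergraph.

    news_to_users: list where entry i holds the indices of the users involved
    in spreading news piece i (each news piece is one hyperedge).
    """
    num_news = len(news_to_users)
    H = torch.zeros(num_users, num_news)
    for e, users in enumerate(news_to_users):
        H[users, e] = 1.0  # h(v, e) = 1 iff user v participates in news e
    # Diagonal degree matrices later used by the hypergraph convolution (Eq. 2)
    D_v = torch.diag(H.sum(dim=1))  # node (user) degrees
    D_e = torch.diag(H.sum(dim=0))  # hyperedge (news) degrees
    return H, D_v, D_e

# Toy example: 5 users, 2 news pieces; user 2 spreads both
H, D_v, D_e = build_incidence(5, [[0, 1, 2], [2, 3, 4]])
```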
## IV Methodology
In this section, we provide a detailed explanation of our method Hy-DeFake. The model's input is our constructed hypergraph. To fully capture the credibility information of users and the semantic information of news, we design two channels that learn user and news information individually. As depicted in Fig. 4, Hy-DeFake comprises four parts: 1) the news semantic channel, 2) the user credibility channel, 3) the consistency-based feature fusion, and 4) the fake news detection. The news semantic channel adopts a language model to update hyperedge features and understand the semantics of news. The user credibility channel exploits an unsupervised hypergraph autoencoder to obtain credibility features of users, preserving the high-order information between news and users. Then, the consistency-based feature fusion module aggregates the node features of the users involved in each piece of news to obtain a representative credibility embedding, and minimizes the distance between the distributions of news embeddings and user embeddings in low-dimensional space to optimize the final hyperedge embeddings. Finally, the fake news classifier distinguishes the news based on the integrated embeddings.
### _News Semantic Channel: Updating Hyperedge Features_
When detecting fake news, news contents play a crucial role in discriminating the lexical features and writing styles of fake news from real news. Thus, we devise a news semantic channel and take the source news contents as the hyperedge attributes in the input hypergraph of our proposed model, where each hyperedge represents a piece of news. To update the textual features of news to contain semantic information, we utilize a pre-trained language model \(\mathcal{M}\) to derive hyperedge features. Here we employ RoBERTa [43], a robustly optimized BERT pre-trained model. The process is as follows:
\[\mathbf{z}_{i}^{e}=\mathcal{M}(n_{i},\forall n_{i}\in\mathcal{N}), \tag{1}\]
where \(\mathbf{z}_{i}^{e}\) stands for the updated hyperedge feature of a piece of news \(n_{i}\). Through fine-tuning the language model, the textual contents of news are processed into vector embeddings that encode the semantic information of news. In Hy-DeFake, since each piece of news is abstracted as a hyperedge, we take these textual embeddings as the hyperedge features \(\mathbf{Z}^{e}\) to provide semantic information in the hypergraph for accurate fake news detection.
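As an illustration, news contents could be embedded with the HuggingFace implementation of RoBERTa roughly as follows (using the first token as the sentence embedding is our assumption, since the paper only states that RoBERTa is employed and fine-tuned):

```python
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
encoder = RobertaModel.from_pretrained("roberta-base")

def embed_news(news_texts):
    """Map raw news contents to hyperedge features Z^e (Eq. 1)."""
    batch = tokenizer(news_texts, padding=True, truncation=True,
                      max_length=512, return_tensors="pt")
    with torch.no_grad():  # drop this guard when fine-tuning, as in the paper
        out = encoder(**batch)
    # First (<s>) token taken as the news embedding: a pooling assumption
    return out.last_hidden_state[:, 0, :]  # shape: [num_news, 768]

Z_e = embed_news(["Claim A spreads rapidly online...", "Official report B..."])
```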
### _User Credibility Channel: Learning Node Features_
Users are indispensable creators and disseminators of news on online social media, capable of making a piece of news influential by spreading it. In general, credible users are inclined to forward or propagate trustworthy news; they usually keep a wait-and-see or skeptical attitude toward uncertain or false news and do not forward it indiscriminately.
Fig. 3: Hypergraph construction.
In contrast, non-credible users, including potentially malicious individuals, are prone to disseminating false or misleading news. Based on this phenomenon, we suppose there exists a correlation between news and users that is worth exploring for fake news detection; this work therefore designs a user credibility channel to investigate the impact of users in identifying fake news. Since real-world social networks are complex, with beyond-pairwise connections, this channel learns not only user credibility features but also the high-order correlation between news and users.
To learn the complicated correlation between news and users together with user credibility information, we propose a hypergraph autoencoder, consisting of an encoder and a decoder in an unsupervised setting. Because traditional graph encoders are limited to encoding pairwise connections, Hy-DeFake adopts a hypergraph convolution network, _i.e._, HGNN [16], as the encoder. HGNN introduces a hyperedge convolution operation for learning the high-order latent correlation between nodes and hyperedges. Specifically, it performs a node-edge-node transform, which captures user-news-user relations and refines the node features of user credibility using the hypergraph structure. The hypergraph convolution is defined by
\[\mathbf{Z}^{(l)}=\sigma(\mathbf{D}_{v}^{-1/2}\mathbf{H}\mathbf{W}\mathbf{D}_{e}^{-1}\mathbf{H}^{\top}\mathbf{D}_{v}^{-1/2}\mathbf{Z}^{(l-1)}\Theta^{(l-1)}), \tag{2}\]
where \(\mathbf{W}\) signifies the learnable parameter during the training process, \(\mathbf{Z}^{(0)}=\mathbf{X}\), \(\Theta\) is the filter, and \(\sigma\) indicates the nonlinear activation function.
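For concreteness, one way to implement the hyperedge convolution of Eq. (2) in PyTorch is sketched below (the hyperedge weights \(\mathbf{W}\) default to the identity, and all names are illustrative):

```python
import torch
import torch.nn as nn

class HypergraphConv(nn.Module):
    """One hyperedge convolution layer implementing Eq. (2)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)  # filter Theta

    def forward(self, Z, H, edge_weight=None):
        # W: diagonal hyperedge weights; identity when not provided
        w = edge_weight if edge_weight is not None else torch.ones(H.size(1))
        d_v = (H * w).sum(dim=1).clamp(min=1)   # weighted node degrees
        d_e = H.sum(dim=0).clamp(min=1)         # hyperedge degrees
        dv_inv_sqrt = d_v.pow(-0.5)
        # node -> hyperedge -> node propagation (user-news-user transform)
        msg = dv_inv_sqrt.unsqueeze(1) * Z              # D_v^{-1/2} Z
        msg = (H.t() @ msg) / d_e.unsqueeze(1)          # D_e^{-1} H^T ...
        msg = H @ (w.unsqueeze(1) * msg)                # H W ...
        msg = dv_inv_sqrt.unsqueeze(1) * msg            # D_v^{-1/2} ...
        return torch.relu(self.theta(msg))              # sigma(... Theta)
```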
Through the aforementioned encoding process, the encoder maps the input data to the embeddings of users which contain user credibility information and high-order correlation between users and news. For unsupervised training of this channel, the decoder maps the embeddings back to reconstruct the input user attributes. The formulation is as follows:
\[\begin{split}\hat{\mathbf{x}}_{i}^{(1)}&=\sigma(\tilde {W}^{(1)}\mathbf{z}_{i}+b^{(1)}),\\ \cdots&\\ \hat{\mathbf{x}}_{i}^{(k)}&=\sigma(\tilde{W}^{(k)}\hat{ \mathbf{x}}_{i}^{(k-1)}+b^{(k)}).\end{split} \tag{3}\]
In Equation 3, \(\mathbf{z}_{i}\) represents the \(i\)-th node's latent representation learned by Equation 2, \(\hat{\mathbf{x}}_{i}^{(k)}\) is the reconstructed attribute of node \(i\), and \(\{\tilde{W}^{(1)},\cdots,\tilde{W}^{(k)},b^{(1)},\cdots,b^{(k)}\}\) are the parameters of the decoder with \(k\) layers. The hypergraph autoencoder is trained to minimize the reconstruction error between the input user attributes and the reconstructed user features. The loss function is as follows:
\[\begin{split}\mathcal{L}_{u}&=\sum_{i=1}^{m}\parallel \mathbf{x}_{i}-\hat{\mathbf{x}}_{i}\parallel^{2}=\parallel\mathbf{X}-\hat{\mathbf{X}} \parallel^{2}\\ =&\parallel\mathbf{X}-\mathbf{D}_{v}^{-1/2}\mathbf{H} \mathbf{W}\mathbf{D}_{e}^{-1}\mathbf{H}^{\top}\mathbf{D}_{v}^{-1/2}\mathbf{X} \parallel^{2}.\end{split} \tag{4}\]
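A hedged sketch of the decoder (Eq. 3) and the reconstruction loss \(\mathcal{L}_{u}\) (Eq. 4) could look as follows (the three fully-connected layers follow the setup stated in Section V-A3; the hidden width is our assumption):

```python
import torch.nn as nn

class AttributeDecoder(nn.Module):
    """Three fully-connected layers mapping user embeddings z_i back to
    reconstructed attributes x_hat_i (Eq. 3)."""
    def __init__(self, emb_dim, attr_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, attr_dim),
        )

    def forward(self, Z):
        return self.net(Z)

def reconstruction_loss(X, X_hat):
    """Squared reconstruction error L_u between input and rebuilt attributes (Eq. 4)."""
    return ((X - X_hat) ** 2).sum()
```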
Fig. 4: **The Framework of Hy-DeFake. (a) Framework** shows the overall architecture of Hy-DeFake. It takes a hypergraph of news and users as input. Firstly, Hy-DeFake designs two channels to learn features from news and users, respectively. Hyperedge attributes (_i.e._, news content) are fed into the news semantic channel to learn semantic features of textual data (the process in blue arrows). The hypergraph with node attributes is fed into the user credibility channel to learn node features with credibility information of users and high-order structural information between news and users (the process in yellow arrows). After that, a consistency-based feature fusion module fuses hyperedge features and representative node features to obtain integrated embeddings for fake news detection. Finally, the fused embeddings are fed into the fake news detection component to classify the news into real and fake. **(b) User Credibility Channel** shows the details of this channel: it adopts a hypergraph neural network (HGNN) in an autoencoder architecture to learn the node features \(\mathbf{Z}\). Then, it fuses \(\mathbf{Z}\) on each hyperedge to aggregate the representative user embeddings \(\mathbf{U}\) for further fusion with news features. **(c) News Semantic Channel** shows the learning process of hyperedge attributes \(\mathcal{N}\) to update news features \(\mathbf{Z}^{e}\) with a pre-trained large language model (LLM).
### _Consistency-based Feature Fusion_
Through the learning process in the two channels described above, Hy-DeFake obtains the news semantic features embedded on hyperedges, and the user credibility features and high-order relations embedded on nodes of the hypergraph. Consistency-based feature fusion is proposed to integrate these semantic features, credibility features, and high-order relations into rich embeddings for the precise debunking of fake news. We first fuse the features of the users involved in each news piece to obtain a representative user credibility embedding for that piece. Then, we fuse the news semantic embeddings with the representative user credibility embeddings to obtain the integrated embeddings for fake news detection.
Each hyperedge has multiple nodes, indicating that the news is propagated by multiple users. We define \(\mathcal{U}^{s}_{i}\subseteq\mathcal{U}\) to represent a subset of users involved in news \(i\). To obtain the representative embedding of users \(\mathcal{U}^{s}_{i}\), we aggregate credibility embeddings of users involved in news \(i\) by the element-wise mean calculation, which can be expressed as follows:
\[\mathbf{u}_{i}=\text{MEAN}\{\mathbf{z}_{j},\forall j\in\mathcal{U}^{s}_{i}\subseteq \mathcal{U}\}, \tag{5}\]
where \(\mathbf{u}_{i}\) is the representative credibility embedding of users involved in the dissemination of news \(i\), and \(\mathbf{z}_{j}\) is the credibility embedding of \(j\)-th user in \(\mathcal{U}^{s}_{i}\). Thus, we obtain the representative user credibility matrix \(\mathbf{U}\).
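In code, the aggregation of Eq. (5) amounts to one matrix product with the incidence matrix; a minimal sketch:

```python
def aggregate_user_embeddings(H, Z):
    """Element-wise mean of the embeddings of the users on each hyperedge (Eq. 5).

    H: [num_users, num_news] incidence matrix; Z: [num_users, emb_dim].
    Returns U: [num_news, emb_dim], one credibility embedding per news piece.
    """
    counts = H.sum(dim=0).clamp(min=1).unsqueeze(1)  # users per news piece
    return (H.t() @ Z) / counts
```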
As uncovered in Section IV-B, users with high trustworthiness tend to share trustworthy news, whereas users with low trustworthiness are more likely to disseminate untrustworthy news, indicating a possible positive correlation between news and users. Therefore, we enforce consistency between news semantic embeddings and user credibility embeddings in low-dimensional space, so that associated news and users are drawn closer while unassociated news and users are pushed further apart. We choose a symmetric Jensen-Shannon-style divergence to measure the distance between the distribution of each news embedding and that of its representative user credibility embedding. The loss is minimized via the following equation:
\[\begin{split}\mathcal{L}_{con}&=\mathcal{D}_{KL}( \mathbf{Z}^{\prime}||\mathbf{U}^{\prime})+\mathcal{D}_{KL}(\mathbf{U}^{\prime}||\mathbf{Z}^{ \prime}),\\ &=\sum_{i}\mathbf{z}_{i}^{\prime}\log\frac{\mathbf{z}_{i}^{\prime}}{\mathbf{u }_{i}^{\prime}}+\sum_{i}\mathbf{u}_{i}^{\prime}\log\frac{\mathbf{u}_{i}^{\prime}}{\mathbf{ z}_{i}^{\prime}},\end{split} \tag{6}\]
where \(\mathcal{D}_{KL}\) indicates the Kullback-Leibler divergence, \(\mathbf{Z}^{\prime}\) and \(\mathbf{U}^{\prime}\) are the normalized embeddings of \(\mathbf{Z}^{e}\) and \(\mathbf{U}\), respectively. The normalization of \(\mathbf{U}\) is as follows:
\[\text{SOFTMAX}(\mathbf{U})_{p}=\frac{e^{\mathbf{u}_{p}}}{\sum_{d=1}^{D}e^{\mathbf{u}_{d}}}, \tag{7}\]
where \(\text{SOFTMAX}(\mathbf{U})_{p}\) indicates the \(p\)-th dimension value of \(D\)-dimensional embedding \(\mathbf{u}\) after softmax. The normalization of \(\mathbf{Z}^{e}\) is the same as the normalization of \(\mathbf{U}\).
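A small PyTorch sketch of this symmetric divergence (Eqs. 6-7) could be (the epsilon term for numerical stability is our addition):

```python
import torch.nn.functional as F

def consistency_loss(Z_e, U, eps=1e-8):
    """Symmetric KL divergence between softmax-normalized news embeddings Z'
    and user embeddings U' (Eqs. 6-7)."""
    p = F.softmax(Z_e, dim=1)  # Z': row-wise softmax over the D dimensions
    q = F.softmax(U, dim=1)    # U'
    kl_pq = (p * ((p + eps).log() - (q + eps).log())).sum()
    kl_qp = (q * ((q + eps).log() - (p + eps).log())).sum()
    return kl_pq + kl_qp
```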
Under the assumption of positive correlation, minimizing the loss function in Eq. 6 brings the distributions of credible users and real news closer, brings the distributions of non-credible users and fake news closer, and at the same time pushes the two distributions further apart from each other. Therefore, Hy-DeFake learns distinctive embeddings for the task of identifying fake news.
### _Fake News Detection_
After obtaining the final embeddings that fuse news semantic information and user credibility information, these integrated embeddings can be regarded as hyperedge embeddings, because each hyperedge represents an individual news piece. The task of discovering fake news thus becomes hyperedge classification. In Hy-DeFake, the integrated hyperedge embeddings act as input to the hyperedge classifier. We pass the hyperedge embeddings through a multilayer perceptron (MLP) followed by a softmax layer for the final news prediction, which is formulated as follows:
\[\mathbf{z}_{i}^{o}=f(W^{\prime(k)}(\mathbf{z}_{i}^{e}\oplus\mathbf{u}_{i})+b^ {\prime(k)}),k=1,\cdots,K \tag{8}\] \[\hat{y}=\text{SOFTMAX}(\mathbf{z}_{i}^{o}), \tag{9}\]
where \(\mathbf{z}_{i}^{o}\) is the output embedding of news \(i\) after MLP, \(\oplus\) denotes the concatenation operation, \(W^{\prime(k)}\) and \(b^{\prime(k)}\) denote the parameters on layer \(k\). \(\hat{y}\) represents the predicted label. For each piece of news, _i.e._, each hyperedge, this component's objective is to minimize the cross-entropy loss:
\[\mathcal{L}_{d}=-y\log\hat{y}-(1-y)\log(1-\hat{y}), \tag{10}\]
where \(y\in\{0,1\}\) represents the ground truth label of news. \(\hat{y}\) denotes the predicted value, which indicates the probability that the news is fake. In our model Hy-DeFake, we combine the loss functions of user credibility encoding, consistency of news and users, and hyperedge classification in a unified framework. The total loss function to be minimized becomes:
\[\mathcal{L}=\mathcal{L}_{u}+\mathcal{L}_{con}+\mathcal{L}_{d}. \tag{11}\]
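Putting the pieces together, a hedged PyTorch sketch of the classifier in Eqs. (8)-(9) and the joint objective in Eq. (11) might look as follows (the hidden width is our assumption; cross-entropy over two logits is equivalent to the binary cross-entropy of Eq. 10):

```python
import torch
import torch.nn as nn

class HyperedgeClassifier(nn.Module):
    """Three-layer MLP over the concatenated news/user embedding (Eqs. 8-9)."""
    def __init__(self, emb_dim, hidden=256, num_classes=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, Z_e, U):
        return self.mlp(torch.cat([Z_e, U], dim=1))  # class logits

def total_loss(loss_u, loss_con, logits, y):
    """Joint objective of Eq. (11): reconstruction + consistency + detection."""
    loss_d = nn.functional.cross_entropy(logits, y)  # L_d (Eq. 10)
    return loss_u + loss_con + loss_d
```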
## V Experiments
This section validates the performance of Hy-DeFake by presenting the experimental setup and results. Specifically, after introducing the experimental setup, we assess the overall performance of Hy-DeFake by comparing it against six popular baselines, and analyze the effectiveness of each module in Hy-DeFake. Further, we verify the explainability, robustness, and efficiency of Hy-DeFake.
### _Experimental Setup_
#### V-A1 Datasets
To investigate the efficacy of Hy-DeFake on fake news detection, we choose four real-world datasets, named Politifact [44], ReCOVery [45], MM-COVID [46], and Gossipcop [44]. These datasets cover various domains, with Politifact focusing on political news, ReCOVery and MM-COVID containing data related to COVID-19, and Gossipcop collecting news related to entertainment. All the datasets
\begin{table}
\begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{**Datasets**} & **Nodes** & **Hyperedges** & \multirow{2}{*}{**Labels**} \\ & **(Users)** & **(News)** & \\ \hline
**Politifact** & 27,682 & 635 & R: 344 / F: 291 \\
**ReCOVery** & 36,741 & 2,026 & R: 1,364 / F: 662 \\
**MM-COVID** & 18,672 & 3,992 & R: 1,894 / F: 2,098 \\
**Gossipcop** & 72,083 & 10,084 & R: 5,444 / F: 4,640 \\ \hline \hline \end{tabular}
\end{table} TABLE I: The Dataset Statistics.
include fact-checked source news, and the relevant social context, _i.e._, related tweets along with users who participate in the news spread. To identify fake news by incorporating the natural pattern of news outbreak and spreading, we used Twitter API1 to crawl user information. Table I summarizes the dataset statistics, where each "Hyperedge" represents individual news, and "Nodes" represent the users participating in the news spreading. The labels assigned to the hyperedges are "real" and "fake".
Footnote 1: [https://developer.twitter.com/en/docs/twitter-api](https://developer.twitter.com/en/docs/twitter-api)
#### V-A2 Baselines and Metrics
In our evaluation, we use four widely employed metrics to compare the performance of our method with six popular baseline algorithms. Here is a brief introduction to the baseline algorithms:
* TextCNN [47] is a CNN (convolutional neural network) based method for sentence classification; it builds CNNs and applies convolution filters to _word2vec_ representations of news contents to capture textual features at different granularities.
* HAN [48] (hierarchical attention network) is proposed to classify documental content. It adopts an attention mechanism at both the sentence and word levels to encode news content.
* BERT [28] is a prominent pre-trained language model for natural language processing (NLP) tasks that utilizes a transformer-based architecture with a bidirectional encoder and self-attention heads. Here we fine-tune the BERT model specifically for identifying fake news.
* TextGCN [32] is a graph-based approach that extends GCN [31] to perform text classification. It captures semantic relationships between words and sentences in the graph representation.
* HyperGAT [26] is a hypergraph-based inductive method for textual data. It constructs a document-level hypergraph attention network for each piece of news to learn the textual representations in fake news detection.
* DualEmo [29] is a fake news detection model that mines dual emotions from publishers and users in a social context. It identifies the emotional gap between source news and relevant comments to distinguish fake news.
In this paper, we measure the performance of our proposed Hy-DeFake and six baseline methods for the detection of fake news using four widely-used metrics: Accuracy (ACC), Precision (Pre), Recall (Rec), and F1 score (F1). The evaluation is conducted using 5-fold cross-validation. The average results are reported for each method.
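For reference, a minimal sketch of this evaluation protocol with scikit-learn could look as follows (`train_and_predict` is a hypothetical placeholder for fitting a detector on one fold; the macro averaging is our assumption, as the paper does not state the averaging strategy):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import StratifiedKFold

def evaluate(y_true, y_pred):
    """Compute the four reported metrics for one fold."""
    return {"ACC": accuracy_score(y_true, y_pred),
            "Pre": precision_score(y_true, y_pred, average="macro"),
            "Rec": recall_score(y_true, y_pred, average="macro"),
            "F1": f1_score(y_true, y_pred, average="macro")}

def cross_validate(X, y, train_and_predict, n_splits=5):
    """5-fold cross-validation; averages each metric over the folds."""
    scores = []
    for tr, te in StratifiedKFold(n_splits=n_splits, shuffle=True).split(X, y):
        y_pred = train_and_predict(X[tr], y[tr], X[te])
        scores.append(evaluate(y[te], y_pred))
    return {k: np.mean([s[k] for s in scores]) for k in scores[0]}
```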
#### V-A3 Experimental Setup and Implementation
The datasets are split into train-validation-test sets with a ratio of 70%-10%-20%. In Hy-DeFake, the embedding size is set to 768. We use 2 hypergraph convolution layers and 3 fully-connected layers in both the decoder and hyperedge classification. The model optimization is performed using the Adam optimizer with a learning rate of 0.0001 for all datasets. The number of epochs is set to 600.
### _Performance_
The performance of Hy-DeFake, along with six popular baseline methods, is evaluated on four real-world datasets. The results are presented in Table II, with the optimal results highlighted in bold. As shown in the table, Hy-DeFake is superior to the baselines on three datasets, and achieves optimal or sub-optimal performance on the remaining one, _i.e._, ReCOVery. This illustrates that the embeddings of Hy-DeFake, which incorporate news semantics and user credibility, learn richer information than those of other methods. Further, capturing the high-order correlation between news and users is effective for identifying fake news.
Regarding the baseline performances, the overall results of DualEmo are worse than those of Hy-DeFake but better than those of the other baseline methods, because this method also considers social context when detecting fake news. It verifies that considering social context, _e.g._, dual emotion between
\begin{table}
\begin{tabular}{c c|c c c c c c c} \hline \hline
**Datasets** & **Metrics** & **TextCNN**[47] & **HAN**[48] & **BERT**[28] & **TextGCN**[32] & **HyperGAT**[26] & **DualEmo**[29] & **Hy-DeFake** \\ \hline \multirow{4}{*}{**Politicalfact**} & **ACC** & \(0.506\pm 0.038\) & \(0.521\pm 0.015\) & \(0.786\pm 0.027\) & \(0.74\pm 0.033\) & \(0.869\pm 0.021\) & \(0.83\pm 0.027\) & \(\mathbf{0.903\pm 0.012}\) \\ & **Pre** & \(0.302\pm 0.067\) & \(0.416\pm 0.052\) & \(0.869\pm 0.029\) & \(0.741\pm 0.034\) & \(0.871\pm 0.022\) & \(0.831\pm 0.027\) & \(\mathbf{0.901\pm 0.011}\) \\ & **Rec** & \(0.529\pm 0.115\) & \(0.529\pm 0.055\) & \(0.808\pm 0.037\) & \(0.741\pm 0.033\) & \(0.869\pm 0.022\) & \(0.825\pm 0.028\) & \(\mathbf{0.914\pm 0.013}\) \\ & **F1** & \(0.378\pm 0.067\) & \(0.465\pm 0.05\) & \(0.841\pm 0.016\) & \(0.739\pm 0.032\) & \(0.868\pm 0.022\) & \(0.824\pm 0.028\) & \(\mathbf{0.9\pm 0.012}\) \\ \hline \multirow{4}{*}{**ReCOVery**} & **ACC** & \(0.406\pm 0.041\) & \(0.385\pm 0.028\) & \(0.756\pm 0.017\) & \(0.701\pm 0.024\) & \(0.648\pm 0.009\) & \(0.818\pm 0.031\) & \(0.814\pm 0.007\) \\ & **Pre** & \(0.266\pm 0.06\) & \(0.224\pm 0.03\) & \(0.778\pm 0.019\) & \(0.688\pm 0.078\) & \(0.581\pm 0.014\) & \(0.781\pm 0.037\) & \(\mathbf{0.823\pm 0.011}\) \\ & **Rec** & \(0.691\pm 0.016\) & \(0.675\pm 0.066\) & \(0.749\pm 0.026\) & \(0.616\pm 0.059\) & \(0.58\pm 0.014\) & \(0.763\pm 0.041\) & \(0.748\pm 0.019\) \\ & **F1** & \(0.381\pm 0.066\) & \(0.336\pm 0.041\) & \(0.751\pm 0.013\) & \(0.597\pm 0.094\) & \(0.579\pm 0.014\) & \(0.769\pm 0.037\) & \(0.766\pm 0.016\) \\ \hline \multirow{4}{*}{**MM-COVID**} & **ACC** & \(0.496\pm 0.016\) & \(0.502\pm 0.009\) & \(0.879\pm 0.019\) & \(0.787\pm 0.157\) & – & \(0.883\pm 0.021\) & \(\mathbf{0.917\pm 0.013}\) \\ & **Pre** & \(0.535\pm 0.015\) & \(0.419\pm 0.053\) & \(0.873\pm 0.02\) & \(0.716\pm 0.258\) & – & \(0.881\pm 0.024\) & \(\mathbf{0.918\pm 0.013}\) \\ & **Rec** & \(0.48\pm 0.016\) & \(0.48\pm 0.019\) & \(0.881\pm 0.016\) & \(0.744\pm 0.198\) & – & \(0.884\pm 0.021\) & \(\mathbf{0.917\pm 0.013}\) \\ & **F1** & \(0.506\pm 0.01\) & \(0.446\pm 0.036\) & \(0.877\pm 0.014\) & \(0.696\pm 0.259\) & – & \(0.881\pm 0.022\) & \(\mathbf{0.917\pm 0.013}\) \\ \hline \multirow{4}{*}{**Gossipcop**} & **ACC** & \(0.487\pm 0.008\) & \(0.502\pm 0.009\) & \(0.776\pm 0.007\) & \(0.675\pm 0.111\) & \(0.781\pm 0.004\) & \(0.816\pm 0.008\) & \(\mathbf{0.847\pm 0.016}\) \\ & **Pre** & \(0.332\pm 0.08\) & \(0.494\pm 0.038\) & \(0.803\pm 0.058\) & \(0.59\pm 0.213\) & \(0.781\pm 0.006\) & \(0.817\pm 0.009\) & \(\mathbf{0.852\pm 0.014}\) \\ \cline{1-1} & **Rec** & \(0.545\pm 0.013\) & \(0.544\pm 0.007\) & \(0.769\pm 0.022\) & \(0.656\pm 0.13\) & \(0.777\pm 0.004\) & \(0.814\pm 0.01\) & \(\mathbf{0.842\pm 0.018}\) \\ \cline{1-1} & **F1** & \(0.406\pm 0.059\) & \(0.517\pm 0.022\) & \(0.792\pm 0.015\) & \(0.598\pm 0.201\) & \(0.778\pm 0.004\) & \(0.814\pm 0.009\) & \(\mathbf{0.844\pm 0.017}\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Overall Performance of Different Methods.
publishers and users, is valuable for distinguishing between real and fake news. HyperGAT is the only baseline method that also uses the concept of a hypergraph, but it builds hypergraphs over the words of a news piece, which leads to worse performance than our method because user information is ignored. Moreover, this construction requires sufficiently long text: a hypergraph cannot be built if the text is too short. Since most COVID-19 news comes in the form of short tweets, such as those in the MM-COVID dataset, HyperGAT does not work on this dataset (marked as "-" in Table II). HyperGAT can run on the other three datasets because they provide statements for each news piece; we incorporate these textual data as supplementary information so that the data meet HyperGAT's requirements. TextGCN builds graphs at the sentence level, so that the structural information of sentences can be taken into account. Nevertheless, for short news in tweets, little enriched structural information among sentences can be learned, resulting in mediocre results.
The aforementioned three baseline methods (_i.e._, DualEmo, HyperGAT, and TextGCN) take structural or social context information into account, while the remaining baselines only consider textual information from news content. Another baseline algorithm that performs stably is BERT, indicating that BERT can learn effective textual features of news. However, BERT's performance in fake news detection is mediocre and clearly worse than that of Hy-DeFake and DualEmo, which reflects that learning only textual information while ignoring social context such as user information leads to limited results. TextCNN and HAN perform the worst because they do not learn structural information, are not pre-trained on a large-scale corpus, and do not take social contexts into account.
Comparing the overall results in Table II shows that methods considering side information perform better than methods using only textual information. This demonstrates that social context and structural information are indeed instrumental in detecting fake news. Furthermore, all methods perform worse on the ReCOVery dataset than on the other three datasets, and our proposed Hy-DeFake achieves only one optimal result and three sub-optimal results there. Referring to the dataset information in Table I, the ReCOVery dataset is more unbalanced than the others, with about twice as much real news as fake news. This indicates that all methods have limited ability to handle the imbalance between real and fake news. Solving this problem requires further exploration, given the disparity in the volume of fake news compared to real news in the real world. Overall, Hy-DeFake consistently delivers precise and reliable outcomes for distinguishing fake news across various domains.
### _Ablation Study_
To assess the impact of the key components in Hy-DeFake, ablation experiments are conducted to evaluate the contributions of the news semantic channel, the user credibility channel, and the consistency-based feature fusion individually. The outcomes of these ablation studies are presented in Fig. 5, where "Text" and "User" indicate that only the news semantic channel or only the user credibility channel is used, respectively. "T+U"
Fig. 5: Ablation Study.
denotes the model incorporating the news semantic channel and the user credibility channel, while excluding consistency-based feature fusion. "Hy-DeFake" is the final model with all components.
The ablation studies show that "User" has the worst performance, especially on the two COVID-related datasets, _i.e._, ReCOVery and MM-COVID. This is because users lack the expertise to judge health- or medicine-related news, and some credible users may spread fake news unintentionally. In this case, user attributes play a limited role, and without the textual information of news, accurate fake news detection is hard to accomplish. In datasets from other domains where more users are involved in news spreading, "User" achieves satisfactory results based solely on user credibility, particularly on the Gossipcop dataset.
When relying solely on textual information from news content, the outcomes of "Text" are similar to those obtained by text-based language models, since both exclusively consider semantic information and disregard the significance of social context in detecting fake news. However, when user information is considered simultaneously, the direct concatenation of news embeddings and user embeddings in "T+U" yields a significant improvement over "Text". This implies that incorporating social contexts is crucial for superior performance. By introducing consistency-based feature fusion, our final model Hy-DeFake improves further. This indicates that imposing consistency between news and users in a low-dimensional space accentuates the distinction between trustworthy and untrustworthy users as well as between real and fake news, thereby rendering the learned representations more discriminative.
To conclude, each individual component has a significant contribution to the proposed Hy-DeFake for detecting fake news in online social networks.
### _Case Study_
To examine the influence of user credibility on news reliability, we choose the Politifact dataset to perform a case study. Here we analyze the distinctions between users who are involved in spreading fake news versus real news.
We analyze the disparities in user attributes between real and fake news by presenting key statistics of the users involved in news spreading in the Politifact dataset, as displayed in Table III. The average number of users per news piece shows that fake news tends to attract more attention than real news. For real news, the average number of followers significantly exceeds the average number of accounts followed, indicating that the users involved in real news are influential and potentially represent public accounts; this discrepancy is less obvious for fake news. Furthermore, the number of verified users in real news is nearly 10 times that in fake news. This attribute, which reflects user credibility, is positively correlated with news authority. Comparing the total number of users across all news with the average number of users per news piece, we observe that the total number of users involved in fake news is smaller than that in real news, but the average number of users per fake news piece surpasses that of real news. This illustrates that users involved in fake news spreading are more active, and some may be malicious users who spread false news in large numbers.
Fig. 6: User relation when spreading news in Politifact dataset (Green nodes: users who spread real news; Red nodes: users who spread fake news; Edges: connect any two users who participate in the same piece of news).
\begin{table}
\begin{tabular}{c|c c} \hline \hline
**Statistics** & **Real News** & **Fake News** \\ \hline Average number of users in each news & 43 & 46 \\ \hline Average number of following of involved users & 2,555 & 3,468 \\ \hline Average number of followers of involved users & 48,150 & 9,140 \\ \hline Verified Users & 1,053 & 167 \\ \hline Total number of users in all news & 14,822 & 13,371 \\ \hline \hline \end{tabular}
\end{table} TABLE III: User Statistics in Politifact Dataset.
In addition to the above analysis of the relationship between news and users from the perspective of statistics and attributes, we also analyze it from a structural perspective in Fig. 6. In this figure, the red nodes represent users involved in fake news, while the green nodes stand for users who spread real news. The edges link pairs of users involved in the same piece of news. We find that users who spread fake news are more densely connected, while users involved in real news are more evenly distributed. The users spreading fake news form a dense community, as some malicious users deliberately participate in the spread of multiple pieces of fake news. Therefore, Fig. 6 demonstrates that exploring structural information in the social context is essential for fake news detection.
### _Parameter Sensitivity_
To examine the sensitivity of Hy-DeFake to different parameters, we evaluate the results for different embedding sizes \(d\) using ACC, Pre, Rec, and F1 on the four datasets. As shown in Fig. 7, \(d\) is the dimension of both the news embeddings and the user embeddings. Note that RoBERTa, which we use to update the news embeddings, requires the embedding dimension to be a multiple of its number of attention heads (12). Moreover, the consistency loss between the news and user embeddings requires them to have the same dimension. The results demonstrate that Hy-DeFake performs satisfactorily and stably at embedding dimensions of 1020, 768, and 516.
Based on this analysis, we set the dimension to 768, because our model performs best on most datasets at this dimension, and 768 is also the default dimension of the textual features learned by RoBERTa. Although at dimension 1020 our model also obtains optimal results on some metrics, such a high dimension is inefficient and the results differ little from those at 768, so we do not use 1020. Hy-DeFake performs slightly worse at dimension 516 than at 768. As the dimension decreases from 252 to 60, the performance of our model gradually deteriorates, owing to the gradual loss of latent textual and structural information. Overall, our method maintains satisfactory results across most parameter settings and outperforms the baseline methods.
### _Time Efficiency_
To evaluate the execution time of Hy-DeFake, we record its training time on the four datasets. Owing to variations in input (_e.g._, the hypergraph describing how users spread the news in Hy-DeFake is not considered by the other baselines), comparing the runtime directly with other baselines would be unfair. Table IV presents the average training time per epoch for Hy-DeFake on each dataset. The results demonstrate that Hy-DeFake achieves efficient and effective training, with superior performance on the four datasets. The training time on Gossipcop is the longest, owing to the large number of users in this dataset. Overall, Hy-DeFake learns the most informative embeddings
Fig. 7: Parameter Analysis.
in an acceptable time and achieves superior and more stable results than baseline methods.
### _Discussion_
In this section, we delve deeper into the impact of users on fake news detection. Our ablation study (Section V-C) reveals that on datasets with a higher number of users, relying solely on user credibility information yields more accurate fake news detection than relying on textual information. For instance, using only user information achieves better performance in distinguishing real and fake news than using only text information on the Gossipcop dataset. This demonstrates that it is feasible to identify fake news without text analysis, by leveraging high-order relationships among users in social networks, _i.e._, integrating user attributes with their structural connections. This observation is further supported by the findings in Section V-D (Case Study), which reveal distinct differences in both attributes and high-order structures between users who engage with real versus fake news. This phenomenon presents a potential opportunity to distinguish real and fake news in specific domains without using news content. For instance, detecting fake news in the real world typically necessitates expertise from professionals; identifying such news with the aid of high-order user relations could prove to be an effective alternative. This is an area we plan to explore in future work.
## VI Conclusion
In this work, our objective is to detect fake news in online social networks, and we argue that existing methods face two challenges in doing so. To address these challenges, we explore the high-order correlation between news and users. Firstly, we construct an attributed hypergraph to capture the intricate relationships between news and users in online social networks. Subsequently, we propose Hy-DeFake, a hypergraph neural network-based method for detecting fake news. Hy-DeFake not only learns semantic embeddings of news content and credibility embeddings of users, but also incorporates the high-order correlations between news and users. By integrating these embeddings, Hy-DeFake provides informative embeddings for real and fake news classification. Extensive experiments on four real-world datasets demonstrate the superior performance of Hy-DeFake, indicating that it learns distinctive embeddings containing rich semantic and credibility information as well as high-order correlations. Furthermore, our findings reveal a positive correlation between news authority and user credibility. Users who spread fake news exhibit more intensive interaction than those who spread real news, resulting in the formation of a denser community.
In our future work, we aim to further investigate the significance of user credibility in the detection of fake news. Additionally, as discussed in Section V-G, we plan to explore the possibility of utilizing high-order relations between news and users to detect fake news without relying on textual information in certain domains where distinguishing textual information is challenging due to a lack of professional knowledge.
|
2305.05228 | Semantic Embedded Deep Neural Network: A Generic Approach to Boost
Multi-Label Image Classification Performance | Fine-grained multi-label classification models have broad applications in
e-commerce, such as visual based label predictions ranging from fashion
attribute detection to brand recognition. One challenge to achieve satisfactory
performance for those classification tasks in real world is the wild visual
background signal that contains irrelevant pixels which confuses model to focus
onto the region of interest and make prediction upon the specific region. In
this paper, we introduce a generic semantic-embedding deep neural network to
apply the spatial awareness semantic feature incorporating a channel-wise
attention based model to leverage the localization guidance to boost model
performance for multi-label prediction. We observed an Avg.relative improvement
of 15.27% in terms of AUC score across all labels compared to the baseline
approach. Core experiment and ablation studies involve multi-label fashion
attribute classification performed on Instagram fashion apparels' image. We
compared the model performances among our approach, baseline approach, and 3
alternative approaches to leverage semantic features. Results show favorable
performance for our approach. | Xin Shen, Xiaonan Zhao, Rui Luo | 2023-05-09T07:44:52Z | http://arxiv.org/abs/2305.05228v4 | Semantic Embedded Deep Neural Network: A Generic Approach to Boost Multi-Label Image Classification Performance
###### Abstract
Fine-grained multi-label classification models have broad applications in e-commerce, such as visual based label predictions ranging from fashion attribute detection to brand recognition. One challenge to achieving satisfactory performance for those classification tasks in the real world is the wild visual background signal containing irrelevant pixels, which confuses the model's focus on the region of interest and its prediction upon that specific region. In this paper, we introduce a generic semantic-embedding deep neural network that applies a spatially aware semantic feature, incorporating a channel-wise attention based model to leverage the localization guidance and boost model performance for multi-label prediction. We observed an Avg. relative improvement of 15.27% in terms of AUC score across all labels compared to the baseline approach. The core experiment and ablation studies involve multi-label fashion attribute classification performed on Instagram fashion apparel images. We compared the model performance among our approach, the baseline approach, and 3 alternative approaches to leveraging semantic features. Results show favorable performance for our approach.
## I Introduction
One big issue in developing fashion pattern classification models for today's industry is achieving high precision. The challenges lie in 2 aspects: 1) it is a multi-label problem where some patterns are rare relative to the whole population; 2) the model works on images with wild backgrounds (e.g., from Instagram). The multi-label model is used in the following applications: 1) fashion trend attribute extraction, 2) image filtering according to fashion attributes, 3) fashion collection creation. Therefore, its accuracy is critical to the success of the downstream applications.
Thus, we want to develop a multi-label image classification model that can accurately process images with wild backgrounds, considering that some of our applications use images from Instagram. For example, given a fashion image with a model wearing a pinstripe dress standing in front of flowers, we want the network to predict the attribute "pinstripe", but the network can be confused by irrelevant floral-like pixels from the wild background. Therefore, we want the deep learning model to focus on the pixels of the fashion garment in the image, so as to make more accurate predictions. To achieve this, we need the classification model to be guided by regional semantic information. In previous work by Mahdi M. Kalayeh et al. [17], the authors concatenate a semantic segmentation mask with the raw image as the input feature, so that all CNN layers are aware of regional semantic information, which significantly improves classification performance. However, this approach requires a semantic segmentation model with pixel-level annotation, which cannot be used when such labels are not available.
Following this idea, we propose to solve the problem with an innovative model that has 3 components: 1) we use classification labels to train an image classifier, but use it as a class activation map (CAM) generator, which contains regional information; 2) we use the CAM as input to learn a semantic embedding for each pixel, where regional semantic information is encoded; 3) we concatenate the semantic embedding with the raw image tensor as input features and feed them into a channel-wise attention based image classification model to enhance fine-grained multi-label prediction.
## II Related Work
**Semantic guidance.** Semantic guidance, _i.e._, using semantic information as an extra input or feature to guide feature extraction or classification, is widely adopted to boost target tasks such as classification, visual search, GAN based image rectification and semantic based regression [19, 20], crowd counting [23], etc. Some previous work [17, 21] uses semantic information as the input of CNN networks to train a CNN classifier. Other works perform attention between semantic segmentation output and feature maps from a CNN backbone [3, 6, 10, 11], but these methods rely on a supervised semantic segmentation model to generate the semantic segmentation map. Some research uses class activation maps to help CNN models extract fine-grained visual features [25]. Human semantic parsing [6] is used for person re-ID. Dense pose [16] is used to train generative models for outfit transfer or outfit generation [8]. In the work on crowd counting [23], the semantic segmentation map is used to correct the final prediction from the regression model.
The multi-label classification problem is hard mainly because of its imbalanced label distribution. Some previous works also leverage semantic information to alleviate the problem. In [27], regional latent semantic dependencies are learned by a combination of CNN and LSTM models. Similarly, the work in [24] uses a Mask R-CNN model to generate object bounding boxes and then applies a graph-based multi-label classifier. To the best of our knowledge, all these works rely on a pre-trained semantic segmentation model or an object detection model that has the same semantic dictionary [5, 28] as the target task. This is a major drawback for generalization to arbitrary tasks.
## III Method
Firstly, we fine-tune a pre-trained ResNet50 [4], incorporating the attention-based SGE module [12], with a multi-label Sigmoid Cross-Entropy loss (denoted as BCEWithLogitsLoss in PyTorch). We added a convolution layer after the ResNet50 to transform the feature map from \([h^{\prime},w^{\prime},2048]\) to \([h^{\prime},w^{\prime},c]\), where c equals the number of classes (see Fig. 2). The class activation map (\([h^{\prime},w^{\prime},c]\)) is used as input to generate semantic embeddings in the next component.
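A minimal PyTorch sketch of this CAM generator could look as follows (the SGE module is omitted for brevity, and deriving the multi-label logits by global average pooling of the CAM is our assumption):

```python
import torch.nn as nn
import torchvision

class CAMGenerator(nn.Module):
    """ResNet50 backbone with a 1x1 conv head producing a per-class
    activation map of shape [B, c, h', w']."""
    def __init__(self, num_classes):
        super().__init__()
        backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
        # Keep everything up to the last conv stage (drop avgpool and fc)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.cam_head = nn.Conv2d(2048, num_classes, kernel_size=1)

    def forward(self, x):
        cam = self.cam_head(self.features(x))  # [B, c, h', w']
        logits = cam.mean(dim=(2, 3))          # GAP -> multi-label logits
        return logits, cam

model = CAMGenerator(num_classes=11)
loss_fn = nn.BCEWithLogitsLoss()  # multi-label Sigmoid Cross-Entropy
```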
As shown in Fig. 1, we utilize the CAM model to generate a class activation map (CAM) \(\lambda\) of size \([Batch,W,H,num\_class]\). We then feed the CAM into a Semantic Feature Embedding Module, a 5-layer deconvolution network. Instance normalization [22] is plugged into the deconvolution layers to normalize the output of the regional semantic embedding \(\lambda^{{}^{\prime}}\). The regional semantic embedding \(\lambda^{{}^{\prime}}\) from the embedding layer, of size \([Batch,256,256,D]\), is concatenated with the raw image as \(D\) extra channel(s).
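A hedged sketch of such a 5-layer deconvolution module is given below (channel widths and the 8x8 CAM input size are our assumptions; each stride-2 transposed convolution doubles the spatial size, so 8 grows to 256 over five layers):

```python
import torch.nn as nn

class SemanticEmbedding(nn.Module):
    """Five-layer deconvolution network turning a CAM [B, c, 8, 8] into a
    regional semantic embedding [B, D, 256, 256]."""
    def __init__(self, num_classes, out_channels=1, widths=(128, 64, 32, 16)):
        super().__init__()
        chans = (num_classes, *widths, out_channels)
        layers = []
        for i in range(5):
            # kernel=4, stride=2, padding=1 exactly doubles the spatial size
            layers.append(nn.ConvTranspose2d(chans[i], chans[i + 1],
                                             kernel_size=4, stride=2, padding=1))
            if i < 4:  # instance norm + nonlinearity on the hidden layers
                layers += [nn.InstanceNorm2d(chans[i + 1]), nn.ReLU()]
        self.net = nn.Sequential(*layers)

    def forward(self, cam):
        return self.net(cam)  # concatenated with the RGB image downstream
```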
The concatenated tensor is fed into the channel-wise attention based multi-label image classifier (ResNest50 [26]). We found that channel-wise attention is better than plain convolutions at learning the correlation between the regional semantic embedding and the raw RGB channels. We further added fully connected layers (FCs) to introduce more non-linearity.
## IV Experiments
### _Dataset Description_
We collected 34,158 Instagram fashion apparel images of women's clothing, such as dresses, tops, and bottoms, where the images contain wilder background noise, as shown in the right image of Fig. 3. Images are annotated in a multi-label fashion with 11 pattern attributes: _[Solid, Plaid, Floral, Stripe, Check, Graphic, Tie Dye, Animal, Words/Letters, Dot, Paisley]_. The dataset shows a severe label imbalance issue; see Fig. 4. These images were randomly partitioned into training (70%) and testing (30%).
### _Problem Setting and Evaluation Metrics_
We defined this visual detection work as a multi-label classification problem, given that fashion attributes can co-exist in the same garment, as shown in the left image of Fig. 3. Accordingly, the ground truth is labeled as a multi-label one-hot vector, such as _[1,0,0,0,0,0,1,1,0,0,0]_, and the model output is the corresponding probability for each label, with the same dimension as the ground truth. The model performance
Fig. 1: The model structure of the proposed Semantic Embedded ResNest50 Classifier
Fig. 3: The left image shows a women’s shift containing pattern attributes of ”Animal” and ”Dots” and ”Graphics”. The right image shows a sample of the image domain we develop and test the model on, where the background behind the model is quite noisier, which makes it more challenging for a DNN model to predict the pattern attribute from such images.
Fig. 2: Model structure and training scheme of the semantic generator
is evaluated using the area under the curve (AUC) of the precision-recall curve on the test set for each label, as shown in Fig. 5.
### _Network and Training_
We selected 2 models for experimental comparison: 1) the baseline model, a ResNet50 [26] without any semantic guidance (referred to as _ResNet50_), and 2) our Semantic Embedded ResNest50 approach (referred to as _Semantic Embedded ResNest50_). Model parameters can be found in Fig. 1. We also found that increasing the output channel size of the last deconvolution layer from 1 to 3 does not consistently improve model performance (see Ablation Study). Thus, we set the output channel size to 1. We trained the models using the Sigmoid Cross-Entropy loss. We reduced the learning rate when the loss stopped improving for 4 epochs. The learning rate was initialized to \(1e^{-5}\) and the Adam optimizer [7] was adopted.
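The stated training recipe could be sketched in PyTorch as follows (`model` and `train_loader` are placeholders for the classifier and the data pipeline):

```python
import torch
import torch.nn as nn

def train(model, train_loader, num_epochs):
    """Training sketch matching the stated setup: BCEWithLogitsLoss, Adam at
    1e-5, and LR reduction once the loss stops improving for 4 epochs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=4)
    loss_fn = nn.BCEWithLogitsLoss()
    for epoch in range(num_epochs):
        running = 0.0
        for images, targets in train_loader:  # targets: [B, 11] multi-hot
            optimizer.zero_grad()
            loss = loss_fn(model(images), targets.float())
            loss.backward()
            optimizer.step()
            running += loss.item()
        scheduler.step(running / len(train_loader))  # plateau-based LR decay
```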
## V Results and Discussion
Table I compares the model performance between the baseline approach (plain ResNet50) and our approach (Semantic Embedded ResNest50). Our approach outperformed the baseline across ALL labels in terms of AUC score, with relative improvements ranging from **7.54%** to **35.41%**. This shows that using the semantic embedding as guidance significantly boosts model performance, as the embedded localization information helps the network focus on the region of interest and ignore irrelevant pixels.
We consider majority classes to be labels with over 5% of image samples; the rest are minorities. Accordingly, the majority classes are _Solid_, _Stripe_ and _Animal_, while the minorities are _Plaid_, _Check_, _Graphics_, _Paisley_, _Tie Dye_, _Dot_, _Words/Letters_, _and Floral_. We compared the average relative improvement across all majorities and all minorities, respectively. According to Table II, our approach yields a higher improvement on minority classes (image samples under 5%) than on majorities. From this perspective, our approach can be leveraged to alleviate the label imbalance issue.
Further, Fig. 6 visualizes four image samples with the multi-label predictions made by the baseline approach and our approach, respectively. Our approach produces more precise predictions than the baseline and better captures the patterns of minority classes. When an image contains both a majority class and a minority class, our approach does not focus solely on the majority class but also makes a correct prediction for the minority class (see Image.A and Image.B). Meanwhile, we observed that for the label "tie dye", the baseline approach tends to be confused by the wild background pixels (see Image.C) and by the weaker, less salient pixel signal of "tie dye" (see Image.D), while our approach successfully extracts the needed pixels for a correct prediction.
### _Ablation Study_
The classification module is composed of 2 parts: 1) the Semantic Embedding Module, which embeds a raw CAM into a semantic embedding of shape \([Batch,256,256,1]\), and 2) ResNest50 as the classification backbone, providing the network with a channel-wise attention mechanism. We conducted the following 3 ablation studies: 1) we
Fig. 4: The label distribution visualization
Fig. 5: Sample of Model Performance Visualization of PR- Curve. The number in the legend manifests this approach’s AUC score
compared our Semantic Embedding Module for embedding a raw CAM against a simple interpolation based tensor reshape, performing the tensor reshape and a channel-wise summation with min-max normalization to obtain a 1-D feature map of shape \([Batch,256,256,1]\); 2) we wanted to understand whether a plain convolution network without channel-wise attention, ResNet50, can substitute for the ResNest50 (referred to as _Semantic Embedded Resnet50_); 3) we tested whether a 3-channel embedded semantic feature generated by the Semantic Embedding Module could achieve better model performance (referred to as _3-channel Semantic Embedded ResNest50_) compared to the 1-D semantic embedding.
The comparison between the \(2^{nd}\) row and the \(1^{st}\) row of Table III shows heavily degraded performance for the approach of concatenating a manually reshaped and normalized semantic map, indicating that this approach harms the overall learning process. The interpolation based reshaping and channel-wise summation used to reshape a raw CAM from dimension _[w', h', c]_ to _[256, 256, 1]_ introduce much noise into the original CAM. Moreover, this manual step is not learnable; thus, at the inference stage, the extra information provided by the manually reshaped semantic feature appears as a random pattern to the DNN. This gives the DNN more irrelevant noise to process and in turn hurts model performance.
Further, the comparison between Semantic Embedded Resnet50 and Semantic Embedded ResNest50, shown in the \(3^{rd}\) row and the \(1^{st}\) row, shows that this alternative approach fails to yield a boosted performance. The average relative decrease in AUC score across all labels for this approach compared to ours is **4.39%**. This shows that our approach, Semantic Embedded ResNest50, which leverages the channel-wise attention mechanism, is more sensitive to the semantic embedding guidance. The channel-wise attention mechanism enables a soft-attention-like effect via the semantic embedding guidance, helping the classifier learn the localization information better and in turn focus more on the region of interest for better model performance.
Lastly, the last row of Table III compares our approach using a 1-channel semantic embedding against an alternative using a 3-channel semantic embedding. The mixed results indicate that there is no significant correlation between the channel size of the semantic embedding and model performance. The Semantic Embedding Module we built, shown in Fig. 1, expands the spatial dimension of the input raw CAM by 2X per layer without changing the channel size over the first 4 deconvolution layers, and the last deconvolution layer performs both spatial reshaping and channel squeezing. Thus, the localization information from the raw CAM is retained throughout the entire 5 deconvolution layers regardless of whether the last layer convolves the feature into 3 channels or 1 channel. That is, the 1-channel semantic embedding embeds the information more densely, while the semantic information in a 3-channel semantic embedding is more diluted and distributed across 3 channels.
## VI Industrial Impact
Our work presents a generic deep neural network structure for building multi-label classification models with high precision and recall. Such a model serves as a crucial upstream component for many downstream applications across e-commerce, such as the fashion industry. For example, to understand how a fashion attribute evolves over a time period, we need the temporal signal of a
Fig. 6: The visualization showing the comparison between the baseline’s approach and our approach. All four images contain at least 1 minority class and might/ might not contain majority class. For example, Image.A contains one majority class as solid, and one minority class as floral
certain attribute over a time period, and our work can contribute to attribute signal extraction and aggregation. Further, this model can be used for image content understanding in customer query-to-product matching at the fashion attribute level. For example, when a customer searches for "Floral Tie Dye Shirt", most existing query-to-product matching mechanisms are based on the similarity between the query and product titles. However, a free-text signal may contain more noise and even fake attribute signals (sellers tend to include extra attribute information to make a title attractive) compared to a pure visual signal representation. Hence, our model can serve as a proficient encoder for semantic embeddings, providing valuable support to a wide range of downstream applications, such as text/image embedding based query-to-product matching and sequential item retrieval, where our model's capabilities can significantly enhance overall performance and effectiveness [1, 2, 9]. More importantly, we introduced a generic deep learning approach to achieve a high-performance backbone beyond the fashion detection domain. Many other industries, such as the mechanical industry, could leverage our approach to build their own high-performance backbones for tasks including but not limited to object relocation and deviation detection. By reframing the conventional approach of object status detection, from traditional geometry-based computer vision into a machine learning based classification and regression problem, our solution offers a compelling alternative to existing works [14, 15] that heavily depend on costly hardware. In doing so, we introduce an efficient methodology that addresses this challenge with remarkable efficacy.
## VII Conclusion
Overall, our proposed approach, Semantic Embedded ResNest50, enables us to obtain boosted model performance for fine-grained multi-label prediction. The trainable semantic embedding layer provides a dynamic mechanism to reshape and embed a raw CAM into the desired shape without introducing extra noise or losing signal during deconvolution, while the channel-wise attention based classifier backbone enables a soft-attention-like mechanism that leverages the localization information in the embedded semantic map to help the network focus more on the region of interest in the RGB channels. Experimental results on fashion pattern attribute classification show a significant boost from the proposed approach, with a well-rounded improvement across all labels ranging from 7.54% to 35.41% in relative AUC compared to the baseline. Moreover, our approach performs favorably on the minority classes, which improves the model's robustness on them. To further improve the model's generalization on minority classes, we could incorporate label distribution stabilization techniques [13, 18].
Our next step is to investigate an end-to-end solution, from generating the raw CAM to embedding the semantic map to the prediction layer, using a single DNN model. This merged deep learning approach would remove the training effort of a separate CAM generator, which in turn removes a dependency.
|
2310.15299 | Neural Network with Local Converging Input (NNLCI) for Supersonic Flow
Problems with Unstructured Grids | In recent years, surrogate models based on deep neural networks (DNN) have
been widely used to solve partial differential equations, which were
traditionally handled by means of numerical simulations. This kind of surrogate
models, however, focuses on global interpolation of the training dataset, and
thus requires a large network structure. The process is both time consuming and
computationally costly, thereby restricting their use for high-fidelity
prediction of complex physical problems. In the present study, we develop a
neural network with local converging input (NNLCI) for high-fidelity prediction
using unstructured data. The framework utilizes the local domain of dependence
with converging coarse solutions as input, which greatly reduces computational
resource and training time. As a validation case, the NNLCI method is applied
to study inviscid supersonic flows in channels with bumps. Different bump
geometries and locations are considered to benchmark the effectiveness and
versatility of the proposed approach. Detailed flow structures, including
shock-wave interactions, are examined systematically. | Weiming Ding, Haoxiang Huang, Tzu Jung Lee, Yingjie Liu, Vigor Yang | 2023-10-23T19:03:37Z | http://arxiv.org/abs/2310.15299v1 | Neural Network with Local Converging Input (NNLCI) for Supersonic Flow Problems with Unstructured Grids
###### Abstract
In recent years, surrogate models based on deep neural networks (DNN) have been widely used to solve partial differential equations, which were traditionally handled by means of numerical simulations. Such surrogate models, however, focus on global interpolation of the training dataset, and thus require a large network structure. The process is both time consuming and computationally costly, thereby restricting their use for high-fidelity prediction of complex physical problems. In the present study, we develop a neural network with local converging input (NNLCI) for high-fidelity prediction using unstructured data. The framework utilizes the local domain of dependence with converging coarse solutions as input, which greatly reduces computational resources and training time. As a validation case, the NNLCI method is applied to study inviscid supersonic flows in channels with bumps. Different bump geometries and locations are considered to benchmark the effectiveness and versatility of the proposed approach. Detailed flow structures, including shock-wave interactions, are examined systematically.
## 1 Introduction
Numerical solution of partial differential equations is an essential aspect of learning about physical phenomena in many natural and engineering science disciplines. The spatiotemporal discretization of the underlying governing equations is computationally expensive and time-consuming. In recent years, with the rapid development of machine learning techniques, researchers have proposed alternative ways to handle the problems more efficiently. Data-driven surrogate models have been developed to improve, or even replace, traditional numerical simulations by mapping between the problem setting and its solution. Yang and colleagues [1; 2; 3; 4; 5] developed a POD-based surrogate model for emulating detailed spatio-temporally
evolving flows in swirl injectors. Josee et al. [6] utilized U-Net for the parametric study of shear coaxial injector flow evolution. Milan et al. [7] applied a deep neural network (DNN) to automotive fuel injector design. On the other hand, the deep Galerkin method, the deep Ritz method, and physics-informed neural networks (PINN) have been extensively studied for solving partial differential equations (PDE) and physical problems by enforcing the boundary and initial conditions and the functional forms of the underlying governing equations [8; 9; 10]. The use of neural operators further improves the ability to address various problems [11; 12; 13]. Researchers have also implemented multi-fidelity approximation of high-resolution solutions (also known as super-resolution or upscaling), motivated by limitations in computational resources or high-fidelity data. For instance, Kennedy [14] proposed a co-Kriging model, which uses both high- and low-fidelity data. It was then applied to the uncertainty quantification of beam frequency and the prediction of the airfoil pressure field [15; 16]. Erichson et al. [17] employed a shallow neural network technique to reconstruct high-resolution solutions from limited observations.
Although the above methods have shown promising results, they have limits that prohibit their application to the high-resolution prediction of complex nonlinear physical problems. Most existing DNN-based surrogate models are based on global computation of the physical field. The mapping between the low-fidelity latent space and the high-fidelity data over the entire computational domain needs to be developed, which requires a large covariance matrix or neural network structure and is computationally intensive. In addition, in physics-based methods [8; 9; 10], the computation of the functional forms adds a burden to the training process of the network. This limits their application to complicated problems involving discontinuities, particularly interactions among such discontinuities.
To address this issue, locally based surrogate models have been proposed. Trask et al. [18] developed a convolutional neural network (CNN) with a generalized moving least squares (GMLS) method, in which scattered data inputs are used to construct local regression functions. However, the local features and underlying information of the data are not fully utilized. Huang et al. [19; 20] established a novel method, known as Neural Networks with Local Converging Inputs (NNLCI), to solve conservation laws at low cost. This method predicts the high-resolution solution at a space-time location from two converging, low-fidelity local input solution patches. With the use of a local domain of dependence, the method extracts important local features for accurate predictions, while reducing the need for computational resources and training data. The NNLCI method has shown great prediction accuracy for solving the 1D [19] and 2D [20] Euler equations and Maxwell's equations [21].
The application of NNLCI to structured data is relatively easy to implement, owing to its use of a local domain of dependence. Many scientific and engineering problems, however, involve irregularly structured data sets, and the extension of NNLCI to such situations is necessary. In the present work, we develop a new NNLCI method for unstructured data. As a validation case, inviscid flow through a converging-diverging channel with two smooth Gaussian bumps is studied systematically. The new model is capable of capturing the flow behavior in the entire field, including regions with smooth variations and shock discontinuities.
This paper is structured as follows. Section 2 describes the theoretical formulation and numerical scheme for inviscid channel flows. The data generation and pre-processing steps are also discussed. Section 3 introduces the neural network with local converging inputs (NNLCI). The method for determining the local domain of dependence is developed for unstructured grids. Section 4 presents the results of the proposed method. The effectiveness of the new approach is demonstrated by a variety of channel geometries. In Section 5, the conclusions are reported.
## 2 Theoretical and Numerical Framework
### Problem Setup
In this study, we consider a 2D inviscid flow through a channel with two smooth Gaussian bumps, as shown schematically in Fig. 1. The computational domain is bounded by \(x\in[-1.5,1.5]\) horizontally, and \(y\in[0.0,0.8]\) in the y-direction. Two Gaussian bumps are placed on the top and bottom walls of the channel. The lower bump geometry is fixed and defined by:
\[y=0.0625e^{-25x^{2}}. \tag{1}\]
The upper bump is perturbed from the original location and is defined by:
\[y=0.8-0.0625e^{-25(x-\Delta x)^{2}}. \tag{2}\]
where \(\Delta x\) stands for the perturbation of the bump location.
The problem is governed by the 2D Euler equations for compressible flows:
\[\frac{\partial\rho}{\partial t}+\frac{\partial(\rho u)}{\partial x}+\frac{ \partial(\rho v)}{\partial y}=0, \tag{3}\]
\[\frac{\partial(\rho u)}{\partial t}+\frac{\partial(\rho u^{2}+p)}{\partial x }+\frac{\partial(\rho uv)}{\partial y}=0, \tag{4}\]
\[\frac{\partial(\rho v)}{\partial t}+\frac{\partial(\rho vu)}{\partial x}+ \frac{\partial(\rho v^{2}+p)}{\partial y}=0, \tag{5}\]
\[\frac{\partial(\rho E)}{\partial t}+\frac{\partial(\rho uH)}{\partial x}+ \frac{\partial(\rho vH)}{\partial y}=0, \tag{6}\]
where \(\rho\), \(u\), \(v\), \(p\), and \(E\) are density, x-velocity, y-velocity, pressure, and total energy, respectively, and \(H=E+p/\rho\) is the total enthalpy. A calorically perfect gas is assumed. The resulting equation of state takes the form:

\[p=(\gamma-1)\left(\rho E-\frac{1}{2}\rho\|\mathbf{v}\|_{2}^{2}\right). \tag{7}\]

Figure 1: Computational domain for inviscid flow through a channel with two smooth Gaussian bumps
The ratio of specific heats \(\gamma\) is taken as 1.4. At the top and bottom wall, we apply the inviscid wall boundary condition. At the inflow, we specify the total temperature \(T_{t}\) and total pressure \(p_{t}\) as:
\[\frac{T_{t}}{T_{\infty}}=1+\frac{\gamma-1}{2}M_{\infty}^{2}, \tag{8}\]
\[\frac{p_{t}}{p_{\infty}}=(\frac{T_{t}}{T_{\infty}})^{\gamma/(\gamma-1)}. \tag{9}\]
Both the inlet total temperature and total pressure depend on the inlet Mach number. For this study, we choose the inlet Mach number to be \(M_{\infty}=2.0\).
A finite-volume solver is implemented for the problem using the MUSCL scheme [22] with the Rusanov flux [23]. The improved Euler method, a total variation diminishing (TVD) second-order Runge-Kutta (RK2) scheme [24], is used for the explicit time marching, from which a steady-state solution is obtained.
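As a rough illustration of this time-marching strategy, the sketch below advances a 1D Burgers problem with a Rusanov interface flux and the TVD RK2 (improved Euler) update. It uses first-order reconstruction and frozen boundary cells for brevity, so it is a schematic of the scheme's building blocks rather than the paper's 2D Euler MUSCL solver.

```python
import numpy as np

def rusanov_flux(uL, uR):
    """Rusanov (local Lax-Friedrichs) flux for 1D Burgers, f(u) = u^2 / 2."""
    f = lambda u: 0.5 * u ** 2
    a = np.maximum(np.abs(uL), np.abs(uR))  # local maximum wave speed
    return 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL)

def residual(u, dx):
    """Negative flux divergence; boundary cells are held fixed for brevity."""
    F = rusanov_flux(u[:-1], u[1:])  # one flux per interior interface
    L = np.zeros_like(u)
    L[1:-1] = -(F[1:] - F[:-1]) / dx
    return L

def tvd_rk2_step(u, dt, dx):
    """TVD second-order Runge-Kutta ('improved Euler') step:
    u1 = u + dt*L(u);  u_next = (u + u1 + dt*L(u1)) / 2."""
    u1 = u + dt * residual(u, dx)
    return 0.5 * (u + u1 + dt * residual(u1, dx))

x = np.linspace(0.0, 1.0, 201)
u = np.sin(2.0 * np.pi * x)
for _ in range(200):
    u = tvd_rk2_step(u, dt=1e-3, dx=x[1] - x[0])
```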
### Data Generation and Pre-processing
To generate the training and testing data set, different bump location perturbations need to be considered. We translate the upper bump location in the x-direction with \(\Delta x=0.00\), \(\pm 0.15\), \(\pm 0.30\), \(\pm 0.45\), and \(\pm 0.60\). Similarly, for the testing dataset, we randomly perturb the upper bump location within the range of \([-0.60,0.60]\). In addition, for the training cases, the free-stream Mach number is perturbed by \(\pm 5\%\). This ensures that the training dataset contains all the features of highly nonlinear flow behavior and increases the robustness of the network. Table 1 shows the details of the training and testing dataset.
For each geometry setting, we solve the problem on unstructured triangular grids at three different resolutions: coarse, finer, and high-resolution. To obtain the finer and high-resolution meshes, we apply uniform mesh adaptation to the coarse grid; the numbers of cells are 200, 800, and 12800, respectively. Figs. 2 and 3 show the calculated density and Mach-number fields, respectively, for upper bump translations of \(\Delta x=-0.60\), \(-0.30\), \(0.00\), \(0.30\) and \(0.60\). For each case, the coarse and finer inputs and the high-fidelity results are shown from left to right. As the upper bump is translated, the flow behavior in the shock and downstream regions changes considerably. In addition, the high-fidelity simulation results exhibit much richer detail in the shock intersection and expansion regions.
| Set | Description | \(M_{\infty}\) Perturbation |
| --- | --- | --- |
| Training | \(\Delta x=0.00\), \(\pm 0.15\), \(\pm 0.30\), \(\pm 0.45\), and \(\pm 0.60\) | \(\Delta M_{\infty}=0\), \(\pm 5\%\) |
| Testing | \(\Delta x=0.12\), \(-0.19\), \(-0.35\), \(0.44\) | None |

Table 1: Summary of the training and testing cases. For each training case, the free-stream Mach number \(M_{\infty}\) is perturbed by \(\pm 5\%\).
To fully utilize the low-fidelity information and extract common flow features from different bump translation cases, special pre-processing needs to be performed on the coarse and finer grid data. Fig. 4 shows an example of the data selection process. Only the cell centroids of the coarse grids (red dots in plot) are selected as the training locations. The data at the corresponding locations in the finer and high-resolution grids are extracted to form the input-image pairs for the training dataset. That is, 200 points are used in each training case. This avoids the computationally expensive interpolation of a large dataset, and accelerates the training process.
For each data point, all four state variables \(\mathbf{u}=[\rho,\rho u,\rho v,\rho E]\) are used. Normalization and standardization are needed to facilitate the training of the neural network. In this study, the data are rescaled by the difference between maximum and minimum values:
\[\tilde{\mathbf{u}}=\frac{\mathbf{u}-min(\mathbf{u})}{max(\mathbf{u})-min(\mathbf{u})} \tag{10}\]
Figure 2: Example of density fields for case \(\Delta x=-0.60\), \(-0.30\), \(0.00\), \(0.30\) and \(0.60\), \(M_{\infty}=2.0\). The coarse, finer and high-resolution solutions are shown, from left to right.
Correspondingly, the Tanh function is used as the activation function in the neural network:
\[f(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}} \tag{11}\]
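For completeness, a minimal sketch of the rescaling of Eq. (10), assuming the samples are stacked row-wise with one state variable per column:

```python
import numpy as np

def minmax_rescale(u):
    """Rescale each state variable (column) to [0, 1] by its range, per Eq. (10)."""
    u_min, u_max = u.min(axis=0), u.max(axis=0)
    return (u - u_min) / (u_max - u_min)

# e.g., u has shape (num_points, 4) for [rho, rho*u, rho*v, rho*E]
```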
## 3 Neural Network with Local Converging Inputs
In this section, we introduce the setup of Neural Network with Local Converging Inputs (NNLCI) for unstructured data. Fig. 5 provides an overview of the NNLCI.
Figure 3: Calculated Mach-number fields for cases \(\Delta x=-0.60\), \(-0.30\), \(0.00\), \(0.30\) and \(0.60\), \(M_{\infty}=2.0\). The coarse, finer inputs, and high-fidelity results are shown, from left to right.

First, we obtain the solutions from the simulations on the coarse and finer grids. Then, for each data location \((x,y)\), the local domain of dependence is determined. The intention is to filter the data at a proper scale so as to include all the local features needed for accurate prediction, while discarding far-field information to keep the training cost low. Fig. 6 gives two examples of this process. For the selected locations (red dots) on a given grid, the corresponding cells \(E\) (blue lines) can be located efficiently by adopting the hierarchical data structure described in Ref. [25]. The computational domain is divided into a binary tree of blocks of cells based on the cell centroid locations. By comparing the data location \((x,y)\) with the cell centroids, one can descend from the root to sub-blocks and find the cell \(E\) containing the point. This greatly reduces the computational cost and search time. The adjacent cells \(E_{adj}\) that share edges or vertices with \(E\) are then identified from the connectivity information. Next, the local cell sizes are calculated. Here, the local cell size \(h_{E}\) of a triangular cell \(E\) is defined as the average cell size of itself and its adjacent cells:
\[h_{E}=\frac{1}{N}\sum_{k\in E_{adj}}\sqrt{A_{k}} \tag{12}\]
where \(A_{k}\) is the area of cell \(k\), \(E_{adj}\) denotes the set of cells adjacent to cell \(E\), and \(N\) is the number of terms in the summation. We use a \(5\times 5\) rectangular stencil in
\[[x-2h_{coarse},x+2h_{coarse}]\times[y-2h_{coarse},y+2h_{coarse}]\]
based on the coarse grid (where \(h_{coarse}\) is the local cell size of the coarse grid near \((x,y)\)) as the local domain of dependence, shown by the green points. It should be noted that the size of the local domain of dependence varies with the local cell size. Also, during this process, some of the data points near the boundaries are discarded, since their stencil points fall out of bounds.

Figure 4: Example of the unstructured NNLCI training dataset. The coarse and finer data are used as the inputs, and the finest data is used as the image for training.

Figure 5: Overview of the NNLCI structure. Four major steps are involved: multi-fidelity simulation, local domain of dependence, neural network regression and high-fidelity prediction.
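A short sketch of these two ingredients, the local cell size of Eq. (12) and the \(5\times 5\) stencil, follows; the adjacency list passed in is assumed to contain the cell itself along with its neighbors:

```python
import numpy as np

def local_cell_size(areas, neighborhood):
    """Average sqrt-area over a cell and its adjacent cells (Eq. 12).
    `neighborhood` lists the indices of the cell and its neighbors."""
    return float(np.mean(np.sqrt(areas[neighborhood])))

def stencil_points(x, y, h_coarse, n=5):
    """n x n local domain of dependence centered at (x, y), spanning
    [x - 2h, x + 2h] x [y - 2h, y + 2h] based on the coarse-grid cell size."""
    xs = np.linspace(x - 2.0 * h_coarse, x + 2.0 * h_coarse, n)
    ys = np.linspace(y - 2.0 * h_coarse, y + 2.0 * h_coarse, n)
    X, Y = np.meshgrid(xs, ys)
    return np.stack([X.ravel(), Y.ravel()], axis=1)  # shape (25, 2) for n = 5
```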
Once a stencil is determined, the values of the state variables at the \(5\times 5\) locations, obtained through interpolation from the coarse and finer meshes, are used as the input of a neural network. Note that the input also includes \(h_{coarse}\) and \(h_{finer}\), the local cell sizes at \((x,y)\) for the coarse and finer meshes. These two values are critical for the neural network to approximate the non-uniform properties, given the states interpolated from the coarse and finer meshes. The output is the predicted state variables at \((x,y)\). The procedure for determining the values of the state variables is as follows. Let \(\vec{\boldsymbol{x}}_{s}=(x_{s},y_{s})\) be one of the \(5\times 5\) stencil points of a particular stencil. We first locate the corresponding cell \(E_{0}\) that contains this point using the hierarchical approach described above. The cell center of \(E_{0}\) is denoted as \(\vec{\boldsymbol{x}}_{0}=(x_{0},y_{0})\) and the state variables of the cell as \(\mathbf{u}_{0}\). Then, the three adjacent triangles \(E_{1}\), \(E_{2}\) and \(E_{3}\) are located, with cell centers \((x_{i},y_{i})\) and states \(\mathbf{u}_{i}\), where \(i=1,2,3\). The state at the stencil point location \((x_{s},y_{s})\) can be interpolated with a linear polynomial:
\[\boldsymbol{u}(\vec{\boldsymbol{x}}_{s}-\vec{\boldsymbol{x}}_{0})=\boldsymbol {a}_{0}+\boldsymbol{a}_{1}(x_{s}-x_{0})+\boldsymbol{a}_{2}(y_{s}-y_{0}) \tag{13}\]
It is easily seen that \(\boldsymbol{a}_{0}=\boldsymbol{u}_{0}\). To solve for the coefficients \(\boldsymbol{a}_{1}\) and \(\boldsymbol{a}_{2}\), three combinations of cells can be used: \((E_{0},E_{1},E_{2})\), \((E_{0},E_{2},E_{3})\) and \((E_{0},E_{1},E_{3})\). Each combination \(k\) determines a set of candidate values for \(\boldsymbol{a}_{1}\) and \(\boldsymbol{a}_{2}\), say \(\boldsymbol{a}_{1\,k}\) and \(\boldsymbol{a}_{2\,k}\). Then, the minmod function is applied to determine the best coefficient values:

\[\boldsymbol{a}_{i}=\operatorname{minmod}(\boldsymbol{a}_{i\,1},\boldsymbol{a}_{i\,2},\boldsymbol{a}_{i\,3}),\quad i=1,2, \tag{14}\]

where the minmod function is defined as:

\[\operatorname{minmod}(a_{1},a_{2},...,a_{n})=\begin{cases}\min(a_{1},a_{2},...,a_{n}),&\text{if all }a_{i}>0\\ \max(a_{1},a_{2},...,a_{n}),&\text{if all }a_{i}<0\\ 0,&\text{otherwise}\end{cases} \tag{15}\]

Figure 6: Example of \(5\times 5\) stencil for the NNLCI local domain of dependence. Red dots: selected locations; green dots: local domain of dependence; blue lines: corresponding cells.
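A vectorized sketch of the limiter in Eq. (15), applied componentwise to the candidate coefficient sets:

```python
import numpy as np

def minmod(*candidates):
    """Componentwise minmod of Eq. (15): returns the smallest-magnitude value
    when all candidates share a sign, and zero otherwise."""
    a = np.stack([np.asarray(c, dtype=float) for c in candidates])
    all_pos = np.all(a > 0, axis=0)
    all_neg = np.all(a < 0, axis=0)
    return np.where(all_pos, a.min(axis=0), np.where(all_neg, a.max(axis=0), 0.0))

# e.g., a1 = minmod(a1_1, a1_2, a1_3) selects the limited coefficients of Eq. (14).
```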
The procedure is repeated on the coarse, finer, and high-fidelity solutions to generate the input and reference values for the neural network. The local cell sizes are appended to the network input to include the local mesh size information, which gives an input size of 202: the four state variables at the \(5\times 5\) locations on both the coarse and finer grids, plus the two local cell sizes. The output size is 4. The input-output pair is shown in Fig. 7.
Table 2 lists the hyperparameters of the neural network selected in the present study, determined by manual search. A network of 10 hidden layers with 500 nodes each is designed to learn the mapping from low-fidelity solutions in a local domain to the high-fidelity solution at a point. The network is trained using the Adam optimizer with a learning rate of \(1\times 10^{-4}\) and \(1\times 10^{-8}\)\(L_{2}\) regularization. The Tanh function is used as the activation function. The relative mean squared error (RMSE) is selected as the loss function to measure the difference between the NNLCI prediction and the true high-fidelity data:
\[\mathcal{L}=\frac{\sum_{k}\|\mathbf{\tilde{u}_{k}}-\mathbf{u_{k}}\|_{2}^{2}}{\sum_{k} \|\mathbf{u_{k}}\|_{2}^{2}} \tag{16}\]
where \(\mathbf{\tilde{u}_{k}}\) is the prediction from the NNLCI and \(\mathbf{u_{k}}\) the high-fidelity simulation data. For use on unstructured data, the RMSE needs to be weighted by the cell area. The cell-weighted RMSE is given by:
\[\mathcal{L}=\frac{\sum_{k}A_{k}\|\mathbf{\tilde{u}_{k}}-\mathbf{u_{k}}\|_{2}^{2}}{\sum _{k}A_{k}\|\mathbf{u_{k}}\|_{2}^{2}} \tag{17}\]
| Hyperparameter | Value |
| --- | --- |
| Number of epochs | 50000 |
| Number of hidden layers | 10 |
| Network structure | [202, \(10\times 500\), 4] |
| Learning rate | \(1\times 10^{-4}\) |
| \(L_{2}\) regularization | \(1\times 10^{-8}\) |
| Activation function | Tanh |
| Optimizer | Adam |

Table 2: Hyperparameters of the neural network with local converging inputs (NNLCI)
Figure 7: Input-output pair of the unstructured NNLCI. The input contains the state variable values on 5\(\times\)5 stencil points, and the local mesh sizes of coarse and finer grids. The output is the state variable value at the center.
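For reference, the network of Table 2 and the cell-weighted loss of Eq. (17) can be sketched in PyTorch as follows; using Adam's `weight_decay` as the \(L_{2}\) regularization is an assumption of this sketch:

```python
import torch
import torch.nn as nn

# [202, 10 x 500, 4] fully connected network with Tanh activations (Table 2).
dims = [202] + [500] * 10 + [4]
layers = []
for i in range(len(dims) - 1):
    layers.append(nn.Linear(dims[i], dims[i + 1]))
    if i < len(dims) - 2:
        layers.append(nn.Tanh())
net = nn.Sequential(*layers)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4, weight_decay=1e-8)

def cell_weighted_loss(pred, target, areas):
    """Cell-area-weighted relative squared error of Eq. (17); areas: (batch,)."""
    num = (areas * ((pred - target) ** 2).sum(dim=1)).sum()
    den = (areas * (target ** 2).sum(dim=1)).sum()
    return num / den
```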
## 4 Results and Discussion
In this section, we present the unstructured NNLCI predictions for several bump geometries. The coarse and finer solutions of each case are used as the inputs for the neural network. Unlike in the training cases, we select the cell centroids of the finest grids as the prediction locations to enhance the resolution of the final prediction results. Fig. 8 shows an example of the input-image pairs. The red dots denote the selected locations for prediction. The interpolation technique described in Sec. 3 is used to obtain the data at these locations on the coarse and finer grids.
Fig. 9 shows the NNLCI-predicted Mach-number fields for upper bump translations of \(\Delta x=-0.35\), \(-0.19\), \(0.12\) and \(0.44\). High-fidelity simulation results are also presented for comparison. The flowfields exhibit different behaviors and features, depending on the bump translation. In the case of \(\Delta x=-0.35\), the upper shock develops upstream and intersects the lower shock near the lower bump. As a consequence, the flow expansion area is shifted toward the lower wall. The secondary shock is well developed downstream of the upper bump, while it can hardly be observed near the lower bump. The case of \(\Delta x=0.44\) shows the opposite behavior. For the cases of \(\Delta x=0.12\) and \(-0.19\), a secondary shock is observed for both the upper and lower bumps, with the shock intersection near the center of the channel. Despite the complex flow features, the NNLCI method achieves promising results. For all four cases, it accurately captures the primary shock location and structure. The complex nonlinear intersection of the upper and lower shocks is precisely predicted, and the expansion region and secondary shock are well reconstructed.
Table 3 summarizes the prediction error over the four cases. Here, the area-weighted relative \(L_{1}\) norm is used as the measure for performance:
\[\mathcal{L}_{1}=\frac{\sum_{k}A_{k}\|\tilde{\mathbf{u}}_{\mathbf{k}}-\mathbf{u}_{\mathbf{k}} \|_{1}}{\sum_{k}A_{k}\|\mathbf{u}_{\mathbf{k}}\|_{1}} \tag{18}\]
Figure 8: Example of the unstructured NNLCI prediction process. Data at prediction locations are interpolated on coarse and finer grids.
For comparison, the area-weighted relative root mean square error (RRMSE) is also computed:
\[\mathcal{L}=\sqrt{\frac{\sum_{k}A_{k}\|\tilde{\mathbf{u}}_{\mathbf{k}}-\mathbf{u}_{\mathbf{k}}\|_ {2}^{2}}{\sum_{k}A_{k}\|\mathbf{u}_{\mathbf{k}}\|_{2}^{2}}} \tag{19}\]
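Both error measures reduce to a few array operations; this sketch assumes the predictions and references are arrays of shape (num_cells, 4) with a matching vector of cell areas:

```python
import numpy as np

def weighted_errors(pred, ref, areas):
    """Area-weighted relative L1 norm (Eq. 18) and RRMSE (Eq. 19)."""
    l1 = (areas * np.abs(pred - ref).sum(axis=1)).sum() \
         / (areas * np.abs(ref).sum(axis=1)).sum()
    rrmse = np.sqrt((areas * ((pred - ref) ** 2).sum(axis=1)).sum()
                    / (areas * (ref ** 2).sum(axis=1)).sum())
    return l1, rrmse
```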
For all four prediction cases, the NNLCI method achieves a relative \(L_{1}\) error of around one percent. As a comparison, the relative \(L_{1}\) error is also measured on the low-fidelity simulation data, interpolated with the same method at the desired prediction locations and used as the inputs for the NNLCI. It can be seen that the low-fidelity error is around 6.5%, while the overall error of the NNLCI method is below 1%. The prediction accuracy is thus improved by about 10 times.

| Bump Translation | Relative \(L_{1}\) Norm | RRMSE | Low-fidelity Relative \(L_{1}\) Norm |
| --- | --- | --- | --- |
| -0.35 | 0.701% | 1.122% | 6.421% |
| -0.19 | 0.738% | 1.115% | 6.345% |
| 0.12 | 0.757% | 1.097% | 6.813% |
| 0.44 | 0.449% | 0.699% | 6.647% |
| Total | 0.659% | 1.021% | 6.554% |

Table 3: Relative \(L_{1}\) norm and relative root mean square error for the prediction cases

Figure 9: Mach number fields of the NNLCI prediction (left) and high-resolution simulation (right). Upper bump translations are \(\Delta x=-0.35,-0.19,0.12,0.44\), respectively.
Fig. 10 shows the Mach-number distribution along the centerline of the channel, \(y=0.4\). The NNLCI prediction (red line) agrees well with the high-fidelity simulation result (green line). The finer input from the low-resolution simulations is also presented (blue line). The flow development and shock interaction are accurately captured by the NNLCI prediction for all the test cases. The prediction of the shock shape and location is greatly enhanced compared with the low-fidelity simulation. These results testify to the effectiveness of the NNLCI method for flow prediction over the entire field, including regions with smooth evolution and shock discontinuities [19; 20].
As introduced earlier, the NNLCI network predicts the four state variables simultaneously. It is thus interesting to examine the NNLCI performance on each state variable separately. Figs. 11, 12 and 13 show the density fields, density contour gradients, and pressure fields. The NNLCI method accurately captures the shape and magnitude of the discontinuities for all variables. In addition, the prediction in the smooth regions closely matches the simulation results. The NNLCI method achieves an accuracy of more than 99% for all cases. For a new bump configuration, the NNLCI method can predict all the state variables with one neural network, eliminating the need for repeated training of multiple networks for different variables.
Compared with physics-informed methods, the NNLCI method can capture shocks and shock interactions accurately. In addition, unlike conventional global-to-global deep learning methods, the NNLCI method avoids complex neural network structures and eliminates the need for a large dataset, with a small training gap. This greatly reduces the training time and computational cost. The prediction accuracy of local patches is improved with richer details. For the present study, the wall times for the low- and high-fidelity simulations of each design setting are 10 seconds and 30 minutes, respectively, on a single CPU (Intel Core i7-10750H). In comparison, the NNLCI is able to predict a new case in less than 1 second on the same hardware, a time saving of more than a factor of 100.

Figure 10: Mach-number distribution along the centerline of the channel, \(y=0.4\)
To further evaluate the effectiveness and flexibility of the NNLCI method, two different bump geometries are investigated: a triangular bump and a Gaussian bump with changing variance, as shown in Fig. 14. The shape of the Gaussian bump is varied by tuning the variance \(\lambda\) of the governing function, with the amplitude fixed.
\[y=0.0625e^{-\lambda x^{2}}. \tag{20}\]
Figure 11: Density fields: NNLCI prediction (left) and high-fidelity simulation results (right). Upper bump translations are \(\Delta x=-0.35,-0.19,0.12,0.44\), respectively.
Second, the Gaussian bump is replaced by a triangular wedge. The height of the wedge \(h\) is fixed at \(0.1\), while the length of the wedge \(L\) is varied over the range \([0.3,0.6]\) in increments of \(0.1\). For both cases, the angle of attack varies with the bump shape, and the flow exhibits different behaviors.
Figs. 15 and 16 show numerical simulation results of the Mach number fields for the two different kinds of bump shapes. Tuning the geometric parameter \(\lambda\) or \(L\) changes the angle of attack of the incoming flow, and the shock structure and dynamics vary as a result. With the triangular bump, as the angle of attack decreases, the secondary shock structure moves downstream and the Mach number changes significantly. Such a large change in flow behavior greatly increases the difficulty of prediction. In the present study, instead of constructing a separate neural network, the new cases are trained together with the bump translation dataset. The resulting NNLCI is expected to predict the flowfields for all bump geometries with high accuracy simultaneously. Table 4 summarizes the training and validation cases. For each bump shape, the free-stream Mach number \(M_{\infty}\) is perturbed by \(\pm 5\%\).

As shown in the plots, the shock develops from different locations as the bump length varies. For example, the wedge bump case has a larger oblique shock angle, due to the larger angle of attack. As a consequence, the subsonic region is compressed and the secondary shock is weaker, due to insufficient expansion. A well-developed smooth region is observed further downstream. The Gaussian bump result exhibits the opposite behavior, but the NNLCI method accurately predicts the shock structure and the smooth-region behavior in both cases.
| Set | Bump Type | Description | \(M_{\infty}\) Perturbation |
| --- | --- | --- | --- |
| Training | Gaussian Bump | \(\lambda=10,25,40\) | \(\Delta M_{\infty}=0\), \(\pm 5\%\) |
| Training | Triangular Wedge | \(L=0.3,0.4,0.5,0.6\) | \(\Delta M_{\infty}=0\), \(\pm 5\%\) |
| Testing | Gaussian Bump | \(\lambda=28\) | None |
| Testing | Triangular Wedge | \(L=0.38\) | None |

Table 4: Training and validation cases for Gaussian and triangular bumps.
Figure 13: Pressure fields: NNLCI prediction (left) and high-fidelity simulation results (right). Upper bump translations are \(\Delta x=-0.35,-0.19,0.12,0.44\) respectively.
The prediction error is summarized in Table 5. In both cases, the NNLCI achieves an accuracy of more than \(98\%\), while maintaining precision on the bump translation cases. Compared to the low-fidelity simulation results, the NNLCI improves precision by a factor of three.
As noted above, only one NNLCI is built and trained for all the cases. This avoids the construction and training of new neural networks for different geometries. In practice, the NNLCI method can greatly reduce design turnaround time; for the prediction of a new design feature, one can train the existing NNLCI model with supplementary data and obtain results in a short time. Overall, this further validates the effectiveness of the NNLCI method for real applications.

Figure 14: Gaussian and triangular bumps with parameters \(\lambda\) and \(L\).

Figure 15: Simulation results of Mach-number fields for Gaussian bumps with \(\lambda=10\), \(25\), and \(40\). The coarse, finer inputs, and high-fidelity results are shown, from left to right.
A cross-sectional plot of the Mach number contour is shown in Fig. 18. For the two cases with different bump shapes, the NNLCI is able to accurately capture the shock location and amplitude, despite the large difference in flow behavior. In addition, Figs. 19, 20 and 21 show the density fields, density contour gradients, and pressure fields for the two prediction cases. Despite some deviation near the boundary of the upper bump due to a lack of data, the NNLCI captures the development and variation of the shock behavior with high accuracy.
## 5 Conclusion
This paper presents a neural network with local converging inputs (NNLCI) for unstructured data. The proposed model employs a novel sampling and interpolation technique to construct local converging inputs from low-resolution simulation results. The neural network builds up the regression from the local converging inputs to the high-fidelity prediction. To demonstrate the method, the unstructured NNLCI is applied to predict supersonic inviscid flow in a channel with two bumps. The upper bump geometry is perturbed to create complex shock intersection structures. A detailed comparison between the unstructured NNLCI prediction and the high-resolution simulation results is carried out. The NNLCI is able to accurately capture the shock structure. The density and pressure profiles are examined to demonstrate the capability of the NNLCI to predict multiple flow variables simultaneously. Furthermore, the unstructured NNLCI is successfully applied to other bump geometries. Without the construction of a new model, the NNLCI can absorb additional features from new training data and produce accurate predictions of new design geometries.

Figure 17: Mach-number fields calculated by the NNLCI method (left) and high-fidelity simulation (right) for the Gaussian bump \(\lambda=28\) (upper) and triangular bump \(L=0.38\) (lower).

Figure 18: Cross-sectional plot of the Mach number contour at the center of the channel \(y=0.4\) for the Gaussian bump case \(\lambda=28\) (left) and triangular bump \(L=0.38\) (right).
Figure 19: Density fields: NNLCI prediction (left) and high-fidelity simulation results (right) for the Gaussian bump case \(\lambda=28\) (upper) and triangular bump \(L=0.38\) (lower).

Figure 20: Density contour gradient: NNLCI prediction (left) and high-fidelity simulation results (right) for the Gaussian bump case \(\lambda=28\) (upper) and triangular bump \(L=0.38\) (lower).

Figure 21: Pressure fields: NNLCI prediction (left) and high-fidelity simulation results (right) for the Gaussian bump case \(\lambda=28\) (upper) and triangular bump \(L=0.38\) (lower). |
2305.10157 | Efficient Error Certification for Physics-Informed Neural Networks | Recent work provides promising evidence that Physics-Informed Neural Networks
(PINN) can efficiently solve partial differential equations (PDE). However,
previous works have failed to provide guarantees on the worst-case residual
error of a PINN across the spatio-temporal domain - a measure akin to the
tolerance of numerical solvers - focusing instead on point-wise comparisons
between their solution and the ones obtained by a solver on a set of inputs. In
real-world applications, one cannot consider tests on a finite set of points to
be sufficient grounds for deployment, as the performance could be substantially
worse on a different set. To alleviate this issue, we establish guaranteed
error-based conditions for PINNs over their continuous applicability domain. To
verify the extent to which they hold, we introduce $\partial$-CROWN: a general,
efficient and scalable post-training framework to bound PINN residual errors.
We demonstrate its effectiveness in obtaining tight certificates by applying it
to two classically studied PINNs - Burgers' and Schr\"odinger's equations -,
and two more challenging ones with real-world applications - the Allan-Cahn and
Diffusion-Sorption equations. | Francisco Eiras, Adel Bibi, Rudy Bunel, Krishnamurthy Dj Dvijotham, Philip Torr, M. Pawan Kumar | 2023-05-17T12:19:43Z | http://arxiv.org/abs/2305.10157v2 | # Provably Correct Physics-Informed Neural Networks
###### Abstract
Recent work provides promising evidence that Physics-informed neural networks (PINN) can efficiently solve partial differential equations (PDE). However, previous works have failed to provide guarantees on the _worst-case_ residual error of a PINN across the spatio-temporal domain - a measure akin to the tolerance of numerical solvers - focusing instead on point-wise comparisons between their solution and the ones obtained by a solver on a set of inputs. In real-world applications, one cannot consider tests on a finite set of points to be sufficient grounds for deployment, as the performance could be substantially worse on a different set. To alleviate this issue, we establish tolerance-based _correctness_ conditions for PINNs over the _entire_ input domain. To verify the extent to which they hold, we introduce \(\partial\)-CROWN: a general, efficient and scalable post-training framework to bound PINN residual errors. We demonstrate its effectiveness in obtaining tight certificates by applying it to two classically studied PDEs - Burgers' and Schrödinger's equations -, and two more challenging ones with real-world applications - the Allen-Cahn and Diffusion-Sorption equations.
## 1 Introduction
Accurately predicting the evolution of complex systems through simulation is a difficult, yet necessary, process in the physical sciences. Many of these systems are represented by partial differential equations (PDE) the solutions of which, while well understood, pose a major computational challenge to solve at an appropriate spatio-temporal resolution (Raissi et al., 2019; Kochkov et al., 2019). Inspired by the success of machine learning in other domains, recent work has attempted to overcome the aforementioned challenge through _physics-informed neural networks_ (PINN) (Raissi et al., 2019; Sun et al., 2020; Pang et al., 2019). For example, the Diffusion-Sorption equation - which has real-world applications in the modeling of groundwater contaminant transport - takes 59.83s to solve per inference point using a classical PDE solver, while inference in its PINN version from Takamoto et al. (2022) takes only \(2.7\times 10^{-3}\)s, a speed-up of more than \(10^{4}\) times.
The parameters of a PINN are estimated by minimizing the residual of the given PDE, together with its initial and boundary conditions, over a set of spatio-temporal training inputs. Its accuracy is then empirically estimated by measuring the output over separate held-out inputs, and (typically) comparing them to standard numerical PDE solvers. In other words, most current work on PINNs provides no formal correctness guarantees that are applicable for _every_ input within the feasible domain. We argue that, while testing on a finite set of points provides a good initial signal on the accuracy of the PINN, such an approach cannot be relied upon in practice, and error certification is needed to understand the quality of the PINN trained (Hillebrecht and Unger, 2022).
In order to alleviate the deficiencies of previous evaluation criteria, we introduce formal, tolerance-based _correctness_ conditions for PINNs. These require that the residual error is _globally_ upper bounded across the domain by a tolerance parameter. To compute this bound and verify the correctness conditions, we build on the progress that has been made in the neural network verification literature. Specifically, we extend the CROWN framework (Zhang et al., 2018) by deriving linear upper and lower bounds for the various nonlinear terms that appear in PINNs, and devise a novel customized bound propagation strategy for the task at hand.
Our contributions are threefold. **(i)** We formally define global correctness conditions for general PINNs that approximate solutions of PDEs. **(ii)** We introduce a general, efficient, and scalable post-training _correctness certification framework_ (\(\partial\)-CROWN) to theoretically verify PINNs over their entire spatio-temporal domains. **(iii)** We demonstrate our post-training framework on two widely studied PDEs in the context of PINNs, Burgers' and Schrödinger's equations (Raissi et al., 2019), and two more challenging ones with real-world applications, the Allen-Cahn equation (Monaco and Apiletti, 2023) and the Diffusion-Sorption equation (Takamoto et al., 2022).
## 2 Related work
Since our certification framework for PINNs is based on the verification literature of image classifiers, in this section we explore related work on PINNs and previous work on NN robustness verification.
**Physics-informed Neural Networks.** Dissanayake and Phan-Thien (1994) first discussed using neural networks to approximate PDE solutions under a supervised learning paradigm. More recently, Raissi et al. (2019) introduced PINNs, which leverage automatic differentiation to obtain approximate solutions to the underlying PDE. Since then, a variety of different PINNs have emerged in a range of applications, from fluid dynamics (Raissi et al., 2019, 2020; Sun et al., 2020; Jin et al., 2021) to metamaterial design (Liu and Wang, 2019; Fang and Zhan, 2019; Chen et al., 2020), for different classes of PDEs (Pang et al., 2019; Fang and Zhan, 2019; Zhang et al., 2020). A few works analyze the convergence of the training process of PINNs under specific conditions (Shin et al., 2020; Wang et al., 2022). Mishra and Molinaro (2022) approximated the generalization error of various PINNs under specific stability and training process assumptions, and others introduced approximation bounds under regularity assumptions (Ryck and Mishra, 2022; Hillebrecht and Unger, 2022). Our verification framework is applicable to any PINN where the solution is modeled by a fully connected network.
**Robustness Verification of Neural Networks.** The presence of adversarial examples, _i.e._, small local input perturbations that lead to large output changes, was established by Szegedy et al. (2013) in image classifiers. As robust classifiers emerged (Madry et al., 2017), so did attempts to certify them formally. Those methods can be divided into _exact_, _i.e._, complete (Katz et al., 2017; Ehlers, 2017; Huang et al., 2017; Lomuscio and Maganti, 2017; Bunel et al., 2018), or _conservative_, _i.e._, sound but incomplete (Gowal et al., 2018; Mirman et al., 2018; Wang et al., 2018; Wong and Kolter, 2018; Ayers et al., 2020). A promising set of conservative methods poses the problem as a convex relaxation of the original nonlinear network architecture, and solves it using a linear programming solver (Salman et al., 2019) or by obtaining closed-form bounds (Zhang et al., 2018; Wang et al., 2021). The latter are especially appealing due to their efficiency. Examples include CROWN (Zhang et al., 2018) and \(\alpha\)-CROWN (Xu et al., 2020). Xu et al. (2020) extended the linear relaxation framework from Zhang et al. (2018) to general computation graphs, but its purely backward propagation nature makes it potentially less efficient than custom bounds/hybrid approaches (Shi et al., 2020).
We use techniques from robustness verification typically applied in a local input neighborhoods to certify the _full_ applicability domains of PINNs. To the best of our knowledge, ours is the first application of these methods to a '_global_' specification, and within a scientific context.
## 3 Preliminaries
### Notation
Given vector \(\mathbf{a}\in\mathbb{R}^{d}\), \(\mathbf{a}_{i}\) refers to its \(i\)-th component. We use \(\partial_{\mathbf{x}_{i}^{j}}f\) and \(\frac{\partial^{j}f}{(\partial\mathbf{x}_{i})^{j}}\) interchangeably to refer to the \(j\)-th partial derivative of a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) with respect to the \(i\)-th component of its
input, \(\mathbf{x}_{i}\). Where it is clear, we use \(f(\mathbf{x})\) and \(f\) interchangeably. We take \(\mathbb{L}^{(i)}_{\mathbf{W},\mathbf{b}}(\mathbf{x})=\mathbf{W}^{(i)}\mathbf{x}+ \mathbf{b}^{(i)}\) to be a function of \(\mathbf{x}\) parameterized by weights \(\mathbf{W}^{(i)}\) and bias \(\mathbf{b}^{(i)}\). We define an \(L\)-layer _fully connected neural network_\(g:\mathbb{R}^{d_{0}}\rightarrow\mathbb{R}^{d_{L}}\) for an input \(\mathbf{x}\) as \(g(\mathbf{x})=y^{(L)}(\mathbf{x})\) where \(y^{(k)}(\mathbf{x})=\mathbb{L}^{(k)}_{\mathbf{W},\mathbf{b}}(z^{(k-1)}( \mathbf{x}))\), \(z^{(k-1)}(\mathbf{x})=\sigma(y^{(k-1)}(\mathbf{x}))\), \(z^{(0)}(\mathbf{x})=\mathbf{x}\), in which \(\mathbf{W}^{(k)}\in\mathbb{R}^{d_{k}\times d_{k-1}}\) and \(\mathbf{b}^{(k)}\in\mathbb{R}^{d_{k}}\) are the weight and bias of layer \(k\), \(\sigma\) is the nonlinear activation, and \(k\in\{1,\ldots,L\}\).
### Physics-informed neural networks (PINNs)
We consider general nonlinear PDEs of the form:
\[f(t,\hat{\mathbf{x}})=\partial_{t}u(t,\hat{\mathbf{x}})+\mathcal{N}[u](t, \hat{\mathbf{x}})=0,\ \hat{\mathbf{x}}\in\mathcal{D},t\in[0,T], \tag{1}\]
where \(f\) is the residual of the PDE, \(t\) is the temporal and \(\hat{\mathbf{x}}\) is the spatial components of the input, \(u:[0,T]\times\mathcal{D}\rightarrow\mathbb{R}\) is the solution, \(\mathcal{N}\) is a nonlinear differential operator on \(u\), \(T\in\mathbb{R}^{+}\), and \(\mathcal{D}\subset\mathbb{R}^{D}\). Where possible, for conciseness we will use \(\mathbf{x}=(t,\hat{\mathbf{x}})\), for \(\mathbf{x}\in\mathcal{C}=[0,T]\times\mathcal{D}\), with \(\mathbf{x}_{0}=t\).
We assume \(f\) is the residual of an \(R^{th}\) order PDE where the differential operators of \(\mathcal{N}\) applied to \(u\) yield the partial derivatives for orders \(\{0,...,R\}\) as: \(u\in\mathcal{N}^{(0)}\), \(\partial_{\mathbf{x}_{i}}u\in\mathcal{N}^{(1)}\), \(\partial_{\mathbf{x}_{i}^{2}}u\in\mathcal{N}^{(2)}\), \(\ldots\), \(\partial_{\mathbf{x}_{i}^{R}}u\in\mathcal{N}^{(R)}\) for \(i\in\{0,\ldots,D\}\)1. With these, we can re-write \(f=\mathcal{P}(u,\partial_{\mathbf{x}_{0}}u,\ldots,\partial_{\mathbf{x}_{D}}u,\ldots,\partial_{\mathbf{x}_{D}^{R}}u)\), where \(\mathcal{P}\) is a nonlinear function of those terms. Furthermore, the PDE is defined under (1) initial conditions, _i.e._, \(u(0,\hat{\mathbf{x}})=u_{0}(\hat{\mathbf{x}})\), for \(\hat{\mathbf{x}}\in\mathcal{D}\), and (2) general Robin boundary conditions, _i.e._, \(au(t,\hat{\mathbf{x}})+b\partial_{n}u(t,\hat{\mathbf{x}})=u_{b}(t,\hat{\mathbf{x}})\) for \(a,b\in\mathbb{R}\), \(\hat{\mathbf{x}}\in\delta\mathcal{D}\) and \(t\in[0,T]\), where \(\partial_{n}u\) is the normal derivative at the boundary with respect to some components of \(\hat{\mathbf{x}}\).
Footnote 1: For simplicity, we assume \(\mathcal{N}\) does not contain any cross-derivative operators, yet an extension would be trivial to derive.
Continuous-time PINNs (Raissi et al., 2019) result from approximating the solution, \(u(\mathbf{x})\), using a neural network parameterized by \(\theta\), \(u_{\theta}(\mathbf{x})\). We refer to this network as the _approximate solution_. In that context, the _physics-informed neural network_ (or residual) is \(f_{\theta}(\mathbf{x})=\partial_{t}u_{\theta}(\mathbf{x})+\mathcal{N}[u_{ \theta}](\mathbf{x})\). For example, the one-dimensional Burgers' equation (explored in detail in Section 6) is defined as:
\[f_{\theta}(\mathbf{x})=\partial_{t}u_{\theta}(\mathbf{x})+u_{\theta}(\mathbf{x })\partial_{x}u_{\theta}(\mathbf{x})-(0.01/\pi)\partial_{x^{2}}u_{\theta}( \mathbf{x}). \tag{2}\]
Note \(f_{\theta}\) has the same order as \(f\), and can be described similarly as a nonlinear function with the partial derivatives applied to \(u_{\theta}\) instead of \(u\). For example, Burgers' equation from above has one \(0^{th}\) order term (\(u_{\theta}\)), two \(1^{st}\) order ones (\(\partial_{t}u_{\theta}\) and \(\partial_{x}u_{\theta}\)), and a \(2^{nd}\) order partial derivative (\(\partial_{x^{2}}u_{\theta}\)), while \(u_{\theta}(\mathbf{x})\partial_{x}u_{\theta}(\mathbf{x})\) is a nonlinear term of the \(f_{\theta}\) polynomial.
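For concreteness, the residual of Eq. (2) can be evaluated with automatic differentiation; a minimal sketch assuming `u_theta` is a scalar-output PyTorch network taking batched inputs \((t,x)\):

```python
import math
import torch

def burgers_residual(u_theta, tx):
    """PINN residual f = u_t + u*u_x - (0.01/pi)*u_xx at inputs tx of shape (N, 2)."""
    tx = tx.clone().requires_grad_(True)
    u = u_theta(tx).squeeze(-1)
    grads = torch.autograd.grad(u.sum(), tx, create_graph=True)[0]
    u_t, u_x = grads[:, 0], grads[:, 1]
    u_xx = torch.autograd.grad(u_x.sum(), tx, create_graph=True)[0][:, 1]
    return u_t + u * u_x - (0.01 / math.pi) * u_xx
```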
### Bounding neural network outputs using CROWN (Zhang et al., 2018)
The computation of upper/lower bounds on the output of neural networks over a domain has been widely studied within verification of image classifiers (Katz et al., 2017; Mirman et al., 2018; Zhang et al., 2018). For the sake of computational efficiency, we consider the bounds obtained using CROWN (Zhang et al., 2018)/\(\alpha\)-CROWN (Xu et al., 2020) as the base for our framework.
Take \(g\) to be the fully connected neural network (as defined in Section 3.1) that we are interested in bounding. The goal is to compute \(\max/\min_{\mathbf{x}\in\mathcal{C}}g(\mathbf{x})\), where \(\mathcal{C}\) is the applicability domain. Typically, within verification of image classifiers, \(\mathcal{C}=\mathbb{B}^{p}_{\mathbf{x},\epsilon}=\{\mathbf{x}^{\prime}:\|\mathbf{x}^{\prime}-\mathbf{x}\|_{p}\leq\epsilon\}\), _i.e._, a _local_ \(\ell_{p}\)-ball of radius \(\epsilon\) around an input \(\mathbf{x}\) from the test set.
CROWN solves the optimization problem by _back-propagating_ linear bounds of \(g(\mathbf{x})\) through each hidden layer of the network until the input is reached. To do so, assuming constant bounds on \(y^{(k)}(\mathbf{x})\) are known for \(\mathbf{x}\in\mathcal{C}\), _i.e._, \(\forall\mathbf{x}\in\mathcal{C}:y^{(k),L}\leq y^{(k)}(\mathbf{x})\leq y^{(k),U}\), CROWN relaxes the nonlinearities of each \(z^{(k)}\) using a linear lower and upper bound approximation that contains the full possible range of \(\sigma(y^{(k)}(\mathbf{x}))\). By relaxing the activations of each layer and back-propagating it through \(z^{(k)}\), CROWN obtains a bound on each \(y^{(k)}\) as a function of \(y^{(k-1)}\). Back-substituting from the output \(y^{(L)}=g(\mathbf{x})\) until the input \(\mathbf{x}\) results in:
\[\min_{\mathbf{x}\in\mathcal{C}}\ g(\mathbf{x})\geq\min_{\mathbf{x}\in\mathcal{C} }\mathbf{A}^{L}\mathbf{x}+\mathbf{a}^{L},\ \max_{\mathbf{x}\in\mathcal{C}}\ g(\mathbf{x})\leq\max_{\mathbf{x}\in\mathcal{C}} \mathbf{A}^{U}\mathbf{x}+\mathbf{a}^{U},\]
where \(\mathbf{A}^{L}\), \(\mathbf{a}^{L}\), \(\mathbf{A}^{U}\) and \(\mathbf{a}^{U}\) are computed in polynomial time from \(\mathbf{W}^{(k)},\mathbf{b}^{(k)}\), and the linear relaxation parameters. The solution to the optimization problems above given simple constraints \(\mathcal{C}\) can be obtained in closed-form. \(\alpha\)-CROWN (Xu et al., 2020b) improves these bounds by optimizing the linear relaxations for tightness.
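Once the linear coefficients are available, the optimization over a box-shaped \(\mathcal{C}\) is solved coordinate-wise in closed form, since each coordinate attains its extreme at a corner of the box selected by the sign of the corresponding coefficient; a short sketch:

```python
import numpy as np

def bound_linear_over_box(A, a, x_lo, x_hi):
    """Closed-form min/max of A @ x + a over {x : x_lo <= x <= x_hi}."""
    A_pos, A_neg = np.maximum(A, 0.0), np.minimum(A, 0.0)
    lower = A_pos @ x_lo + A_neg @ x_hi + a  # negative coefficients take x_hi
    upper = A_pos @ x_hi + A_neg @ x_lo + a  # and vice versa for the maximum
    return lower, upper
```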
## 4 Correctness Conditions for PINNs
By definition, \(u_{\theta}\) is a correct solution to the PINN \(f_{\theta}\) - and therefore the PDE \(f(\mathbf{x})=0\) - if 3 conditions are met: (1) the norm of the solution error with respect to the initial condition is upper bounded within an acceptable tolerance, (2) the norm of the solution error with respect to the boundary conditions is bounded within an acceptable tolerance, and (3) the norm of the residual is bounded within an acceptable convergence tolerance. We define these as PINN _correctness conditions_, and formalize them in Definition 1.
**Definition 1** (Correctness Conditions for PINNs).: \(u_{\theta}:[0,T]\times\mathcal{D}\rightarrow\mathbb{R}\) _is a \(\delta_{0},\delta_{b},\varepsilon\)-globally correct approximation of the exact solution \(u:[0,T]\times\mathcal{D}\rightarrow\mathbb{R}\) if:_
\[\begin{array}{ll}(1)&\max_{\hat{\mathbf{x}}\in\mathcal{D}}|u_{\theta}(0,\hat{\mathbf{x}})-u_{0}(\hat{\mathbf{x}})|^{2}\leq\delta_{0},\\ (2)&\max_{t\in[0,T],\hat{\mathbf{x}}\in\delta\mathcal{D}}|au_{\theta}(t,\hat{\mathbf{x}})+b\partial_{n}u_{\theta}(t,\hat{\mathbf{x}})-u_{b}(t,\hat{\mathbf{x}})|^{2}\leq\delta_{b},\\ (3)&\max_{\mathbf{x}\in\mathcal{C}}|f_{\theta}(\mathbf{x})|^{2}\leq\varepsilon.\end{array}\]
Previous works deriving from Raissi et al. (2019) have measured the correctness of the approximation \(u_{\theta}\) empirically through the error between \(u_{\theta}\) and a solution obtained via either analytical or numerical solvers for \(f\), satisfying a relaxed, empirical version of these conditions only. In practice, \(\delta_{0}\), \(\delta_{b}\), and \(\varepsilon\) correspond to tolerances similar to the ones given by numerical solvers for \(f\).
## 5 \(\partial\)-CROWN: PINN Correctness Certification Framework
The verification of the PINN correctness conditions from Definition 1 requires bounding a linear function of \(u_{\theta}\) for (1). Moreover, it requires bounds for a linear function of \(u_{\theta}\) and \(\partial_{n}u_{\theta}\) for (2), and for the PINN output, \(f_{\theta}\), in (3). To achieve (1), assuming \(u_{\theta}\) is a standard fully connected neural network as in Raissi et al. (2019), we can directly use CROWN/\(\alpha\)-CROWN (Zhang et al., 2018; Xu et al., 2020). However, bounding (2) and (3) with a linear function in \(\mathbf{x}\) efficiently requires a method to bound linear and nonlinear functions of the partial derivatives of \(u_{\theta}\).
We propose \(\partial\)-CROWN, an efficient framework to: (i) compute closed-form bounds on the partial derivatives of an arbitrary fully-connected network \(u_{\theta}\) (Section 5.1), and (ii) bound a nonlinear function of those partial derivative terms, _i.e._, \(f_{\theta}\) (Section 5.2). Throughout this section, we assume \(u_{\theta}(\mathbf{x})=g(\mathbf{x})\) as defined in Section 3.1, with \(d_{0}=1+D\). Proofs for lemmas and theorems presented in this section are in Appendix B.
### Bounding Partial Derivatives of \(u_{\theta}\)
The computation of the bounds for the \(0^{th}\) order derivative, _i.e._, \(u_{\theta}\), and intermediate pre-activations can be done using CROWN/\(\alpha\)-CROWN (Zhang et al., 2018; Xu et al., 2020). As such, for what follows, we assume that for \(\mathbf{x}\in\mathcal{C}\), both the bounds on \(u_{\theta}\) and \(y^{(k)},\ \forall k\) are given.
**Assumption 1**.: _The pre-activation layer outputs of \(u_{\theta}\), \(y^{(k)}=\mathbb{L}_{\mathbf{W},\mathbf{b}}^{(k)}(z^{(k-1)})\), are lower and upper bounded by linear functions \(\mathbb{L}_{\mathbf{A},\mathbf{a}}^{(k),L}(\mathbf{x})\leq y^{(k)}\leq\mathbb{ L}_{\mathbf{A},\mathbf{a}}^{(k),U}(\mathbf{x})\). Moreover, for \(\mathbf{x}\in\mathcal{C}\), we have \(y^{(k),L}\leq y^{(k)}\leq y^{(k),U}\)._
Note that using CROWN/\(\alpha\)-CROWN, \(\mathbf{A}^{(k),L}\), \(\mathbf{a}^{(k),L}\), \(\mathbf{A}^{(k),U}\), \(\mathbf{a}^{(k),U}\) are functions of all the previous layers' parameters. For \(1^{st}\) order derivatives, we start by explicitly obtaining the expression of \(\partial_{\mathbf{x}_{i}}u_{\theta}\).
**Lemma 1** (Computing \(\partial_{\mathbf{x}_{i}}u_{\theta}\)).: _For \(i\in\{1,\ldots,d_{0}\}\), the partial derivative of \(u_{\theta}\) with respect to \(\mathbf{x}_{i}\) can be computed recursively as \(\partial_{\mathbf{x}_{i}}u_{\theta}=\mathbf{W}^{(L)}\partial_{\mathbf{x}_{i}}z ^{(L-1)}\) for:_
\[\partial_{\mathbf{x}_{i}}z^{(k)}=\partial_{z^{(k-1)}}z^{(k)}\partial_{\mathbf{ x}_{i}}z^{(k-1)},\quad\partial_{\mathbf{x}_{i}}z^{(0)}=\mathbf{e}_{i},\]
_for \(k\in\{1,\ldots,L-1\}\), and where \(\partial_{z^{(k-1)}}z^{(k)}=\text{diag}\left[\sigma^{\prime}\left(y^{(k)}\right) \right]\mathbf{W}^{(k)}\)._
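The recursion itself can be evaluated exactly at a given input, which is a useful reference when checking the bounds; a sketch for a Tanh network, with the assumption that the weights and biases are stored in lists `Ws` and `bs`:

```python
import torch

def du_dxi(Ws, bs, x, i):
    """Forward evaluation of Lemma 1's recursion for a Tanh network at one input x."""
    dz = torch.zeros_like(x)
    dz[i] = 1.0                                     # d z^(0) / d x_i = e_i
    z = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        y = W @ z + b
        dz = (1.0 - torch.tanh(y) ** 2) * (W @ dz)  # diag(sigma'(y^(k))) W^(k) dz
        z = torch.tanh(y)
    return Ws[-1] @ dz                              # W^(L) d z^(L-1) / d x_i
```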
Using Lemma 1, we can efficiently lower and upper bound \(\partial_{\mathbf{x}_{i}}u_{\theta}\) with linear functions.
**Theorem 1** (\(\partial\)-CROWN: Linear Bounding \(\partial_{\mathbf{x}_{i}}u_{\theta}\)).: _There exist two linear functions \(\partial_{\mathbf{x}_{i}}u_{\theta}^{U}\) and \(\partial_{\mathbf{x}_{i}}u_{\theta}^{L}\) such that, \(\forall\mathbf{x}\in\mathcal{C}\) it holds that \(\partial_{\mathbf{x}_{i}}u_{\theta}^{L}\leq\partial_{\mathbf{x}_{i}}u_{\theta }\leq\partial_{\mathbf{x}_{i}}u_{\theta}^{U}\), where the linear coefficients can be computed recursively in closed-form in \(\mathcal{O}(L)\) time._
The formal statement of Theorem 1 and expressions for \(\partial_{\mathbf{x}_{i}}u_{\theta}^{L}\) and \(\partial_{\mathbf{x}_{i}}u_{\theta}^{U}\) are provided in Appendix B.3. Note that this bound is not computed using fully backward propagation as in Xu et al. (2020). Instead we use a _hybrid_ scheme in the spirit of Shi et al. (2020) for the sake of efficiency. We perform backward propagation to compute \(\partial_{z^{(k-1)}}z^{(k)}\) as a function of \(y^{(k)}\), and forward-substitute the pre-computed CROWN bounds \(\mathbb{L}_{\mathbf{A},\mathbf{a}}^{(k),L}(\mathbf{x})\leq y^{(k)}\leq \mathbb{L}_{\mathbf{A},\mathbf{a}}^{(k),U}(\mathbf{x})\) at that point instead of fully backward propagating which would have \(\mathcal{O}(L^{2})\) complexity. This induces a significant speed-up while achieving tight enough bounds. Figure 1 showcases the back-propagation and forward substitution paths for bounding \(\partial_{\mathbf{x}_{i}}u_{\theta}\) in blue. Similarly to CROWN with the activation \(\sigma\), this bound requires relaxing \(\sigma^{\prime}(y^{(k)})\).
Similarly, we can linearly bound \(\partial_{\mathbf{x}_{i}^{2}}u_{\theta}\), a requirement to bound \(f_{\theta}\) in \(2^{nd}\) order PINNs.
**Lemma 2** (Expression for \(\partial_{\mathbf{x}_{i}^{2}}u_{\theta}(\mathbf{x})\)).: _For \(i\in\{1,\ldots,d_{0}\}\), the second partial derivative of \(u_{\theta}\) with respect to \(\mathbf{x}_{i}\) can be computed recursively as \(\partial_{\mathbf{x}_{i}^{2}}u_{\theta}=\mathbf{W}^{(L)}\partial_{\mathbf{x}_ {i}^{2}}z^{(L-1)}\) where:_
\[\partial_{\mathbf{x}_{i}^{2}}z^{(k)}=\partial_{x_{i}z^{(k-1)}}z^{(k)}\partial _{\mathbf{x}_{i}}z^{(k-1)}+\partial_{z^{(k-1)}}z^{(k)}\partial_{\mathbf{x}_ {i}^{2}}z^{(k-1)},\]
_and \(\partial_{\mathbf{x}_{i}^{2}}z^{(0)}=\mathbf{0}\), for \(k\in\{1,\ldots,L-1\}\), with \(\partial_{\mathbf{x}_{i}}z^{(k-1)}\) and \(\partial_{z^{(k-1)}}z^{(k)}\) as per in Lemma 1, and \(\partial_{x_{i}z^{(k-1)}}z^{(k)}=\text{diag}\left[\sigma^{\prime\prime}\left( y^{(k)}\right)\left(\mathbf{W}^{(k)}\partial_{\mathbf{x}_{i}}z^{(k-1)}\right) \right]\mathbf{W}^{(k)}\)._
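The same pattern extends Lemma 2 to a joint forward recursion over \(z\), \(\partial_{\mathbf{x}_{i}}z\) and \(\partial_{\mathbf{x}_{i}^{2}}z\); the weights below are again illustrative placeholders, and the result is validated with a second-order central difference:

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.tanh
ds = lambda y: 1.0 - np.tanh(y) ** 2                          # sigma'
d2s = lambda y: -2.0 * np.tanh(y) * (1.0 - np.tanh(y) ** 2)   # sigma''

Ws = [rng.standard_normal((20, 2)), rng.standard_normal((20, 20)),
      rng.standard_normal((1, 20))]
bs = [rng.standard_normal(20), rng.standard_normal(20), rng.standard_normal(1)]

def u_du_d2u(x, i):
    # Lemma 2: jointly propagate z, dz = dz/dx_i and d2z = d^2 z/dx_i^2.
    z, dz, d2z = x, np.eye(len(x))[i], np.zeros(len(x))
    for W, b in zip(Ws[:-1], bs[:-1]):
        y, dy = W @ z + b, W @ dz
        d2z = d2s(y) * dy ** 2 + ds(y) * (W @ d2z)  # the two Lemma 2 terms
        dz, z = ds(y) * dy, s(y)
    return (Ws[-1] @ z + bs[-1])[0], (Ws[-1] @ dz)[0], (Ws[-1] @ d2z)[0]

u0 = lambda t: u_du_d2u(np.array([t, -0.7]), 0)[0]
h = 1e-4
fd2 = (u0(0.3 + h) - 2.0 * u0(0.3) + u0(0.3 - h)) / h ** 2
print(u_du_d2u(np.array([0.3, -0.7]), 0)[2], fd2)   # should agree to ~1e-6
```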
**Theorem 2** (\(\partial\)-CROWN: Linear Bounding \(\partial_{\mathbf{x}_{i}^{2}}u_{\theta}\)).: _Assume that through a previous bounding of \(\partial_{\mathbf{x}_{i}}u_{\theta}\), we have linear lower and upper bounds on \(\partial_{\mathbf{x}_{i}}z^{(k-1)}\) and \(\partial_{z^{(k-1)}}z^{(k)}\). There exist two linear functions \(\partial_{\mathbf{x}_{i}^{2}}u_{\theta}^{U}\) and \(\partial_{\mathbf{x}_{i}^{2}}u_{\theta}^{L}\) such that, \(\forall\mathbf{x}\in\mathcal{C}\) it holds that \(\partial_{\mathbf{x}_{i}^{2}}u_{\theta}^{L}\leq\partial_{\mathbf{x}_{i}^{2}}u_{\theta}\leq\partial_{\mathbf{x}_{i}^{2}}u_{\theta}^{U}\), where the linear coefficients can be computed recursively in closed-form in \(\mathcal{O}(L)\) time._
The formal statement of Theorem 2 and expressions for \(\partial_{\mathbf{x}_{i}^{2}}u_{\theta}^{L}\) and \(\partial_{\mathbf{x}_{i}^{2}}u_{\theta}^{U}\) are in Appendix B.4. As with the first derivative, this bound requires a relaxation of \(\sigma^{\prime\prime}(y^{(k)})\). Note that this also follows a hybrid computation scheme, with the back-propagation and forward substitution paths for bounding \(\partial_{\mathbf{x}_{i}^{2}}u_{\theta}\) computations shown in green in Figure 1.
Figure 1: **Bounding Partial Derivatives with \(\partial\)-CROWN**: our hybrid scheme for bounding \(\partial_{\mathbf{x}_{i}}u_{\theta}\) and \(\partial_{\mathbf{x}_{i}^{2}}u_{\theta}\) uses back-propagation and forward substitution (inspired by Shi et al. (2020)) to compute bounds in \(\mathcal{O}(L)\) instead of the \(\mathcal{O}(L^{2})\) complexity of full back-propagation as in Xu et al. (2020).
Assuming \(\mathcal{C}=\{\mathbf{x}\in\mathbb{R}^{d_{0}}:\mathbf{x}^{L}\leq\mathbf{x}\leq\mathbf{x}^{U}\}\), we can obtain closed-form expressions for constant global bounds on the linear functions \(\partial_{\mathbf{x}_{i}}u_{\theta}^{U}\), \(\partial_{\mathbf{x}_{i}}u_{\theta}^{L}\), \(\partial_{\mathbf{x}_{i}^{2}}u_{\theta}^{U}\), \(\partial_{\mathbf{x}_{i}^{2}}u_{\theta}^{L}\), which we formulate and prove in Appendix B.5. While here we only compute the expression for the second derivative with respect to the same input, it would be trivial to extend it to cross derivatives (_i.e._, \(\partial_{\mathbf{x}_{i}\mathbf{x}_{j}}u_{\theta}\) for \(i\neq j\)), as well as to higher order ones.
Footnote 2: Note that this is different from the CROWN case in which \(\mathcal{C}\) is assumed to be an \(\epsilon\)-ball around an input \(\mathbf{x}\).
### Bounding \(f_{\theta}\)
With the partial derivative terms bounded, to bound \(f_{\theta}\), we use McCormick envelopes (McCormick, 1976) to obtain linear lower and upper bound functions \(f_{\theta}^{L}\leq f_{\theta}\leq f_{\theta}^{U}\):
\[f_{\theta}^{U}=\mu_{0}^{U}+\mu_{1}^{U}u_{\theta}+\sum_{j=1}^{r}\sum_{\partial_{\mathbf{x}_{i}^{j}}\in\mathcal{N}^{(j)}}\mu_{j,i}^{U}\,\partial_{\mathbf{x}_{i}^{j}}u_{\theta},\qquad f_{\theta}^{L}=\mu_{0}^{L}+\mu_{1}^{L}u_{\theta}+\sum_{j=1}^{r}\sum_{\partial_{\mathbf{x}_{i}^{j}}\in\mathcal{N}^{(j)}}\mu_{j,i}^{L}\,\partial_{\mathbf{x}_{i}^{j}}u_{\theta},\]

where \(\mu_{0}^{U}\), \(\mu_{1}^{U}\), and \(\mu_{j,i}^{U}\) are functions of the global lower and upper bounds of \(u_{\theta}\) and \(\partial_{\mathbf{x}_{i}^{j}}u_{\theta}\). In the example of Burgers' equation (Equation 2), \(f_{\theta}^{U}=\mu_{0}^{U}+\mu_{1}^{U}u_{\theta}+\mu_{1,0}^{U}\partial_{\mathbf{x}_{0}}u_{\theta}+\mu_{1,1}^{U}\partial_{\mathbf{x}_{1}}u_{\theta}+\mu_{2,1}^{U}\partial_{\mathbf{x}_{1}^{2}}u_{\theta}\) (and similarly for \(f_{\theta}^{L}\) with \(\mu^{L}\)).

To get \(f_{\theta}^{U}\) and \(f_{\theta}^{L}\) as linear functions of \(\mathbf{x}\), we replace \(u_{\theta}\) and \(\partial_{\mathbf{x}_{i}^{j}}u_{\theta}\) with the lower and upper bound linear expressions from Section 5.1, depending on the sign of the coefficients \(\mu^{U}\) and \(\mu^{L}\). As in Section 5.1, since \(\mathcal{C}=\{\mathbf{x}\in\mathbb{R}^{d_{0}}:\mathbf{x}^{L}\leq\mathbf{x}\leq\mathbf{x}^{U}\}\) we can then solve \(\max_{\mathbf{x}\in\mathcal{C}}f_{\theta}^{U}\) and \(\min_{\mathbf{x}\in\mathcal{C}}f_{\theta}^{L}\) in closed-form (see Appendix B.5), obtaining constant bounds for \(f_{\theta}\) in \(\mathcal{C}\).
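For intuition, the constant (concretized) version of these McCormick bounds reduces, for a single bilinear term, to taking the extrema over the corner products of the input intervals. The sketch below assembles constant residual bounds for Burgers' equation from hypothetical interval bounds on \(u_{\theta}\) and its derivatives; the actual method propagates the linear (not constant) envelopes before concretizing:

```python
import itertools
import numpy as np

def bilinear_bounds(xl, xu, yl, yu):
    """Constant bounds on x*y over [xl, xu] x [yl, yu]; the constant relaxation
    of the McCormick envelopes of a bilinear term (McCormick, 1976)."""
    corners = [a * b for a, b in itertools.product((xl, xu), (yl, yu))]
    return min(corners), max(corners)

# Hypothetical interval bounds on u and its derivatives over one region C,
# standing in for the outputs of the Section 5.1 procedure.
u, ut = (-0.8, 0.9), (-3.0, 3.0)
ux, uxx = (-2.0, 1.5), (-30.0, 25.0)

# Burgers' residual: f = u_t + u * u_x - (0.01/pi) * u_xx.
pl, pu = bilinear_bounds(*u, *ux)
nu = 0.01 / np.pi
f_lo = ut[0] + pl - nu * uxx[1]
f_hi = ut[1] + pu - nu * uxx[0]
print(f_lo, f_hi)                 # constant bounds on f over C
```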
### Tighter Bounds via Greedy Input Branching
Using \(\partial\)-CROWN we can compute a bound on a nonlinear function of the derivatives of \(u_{\theta}\), which we will generally refer to as \(h\), for \(\mathbf{x}\in\mathcal{C}\). However, given the approximations used throughout the bounding process, it is likely that such bounds will be too loose to be useful when compared to the true lower and upper bound of \(h\).
To improve these bounds, we introduce _greedy input branching_ in Algorithm 1. The idea behind it is to recursively divide the input domain (DomainSplit, line 9) - exploring the areas where the _current bounds are further from the empirical optima_ obtained via sampling (Sample, line 3) - and globally bound the output of \(h\) as the worst-case of all the branches (line 13). As the number of splits, \(N_{b}\), increases, so does the tightness of our global bounds. For small dimensional spaces, it suffices to split each branch \(\mathcal{C}_{i}\) into \(N_{d}=2^{d_{0}}\) equal branches. Note that in higher dimensional spaces, a non-equal splitting function, DomainSplit, can lead to improved convergence to the tighter bounds. The time complexity of greedy input branching is \(\mathcal{O}(N_{b}N_{d}\mathcal{M})\), where \(\mathcal{M}\) is the complexity of running \(\partial\)-CROWN for each branch.
```
Input: function \(h\), input domain \(\mathcal{C}\), # splits \(N_{b}\), # empirical samples \(N_{s}\), # branches per split \(N_{d}\)
Result: lower bound \(h_{lb}\), upper bound \(h_{ub}\)
1: \(\mathcal{B}=\emptyset\)
2: \(\mathcal{B}_{\Delta}=\emptyset\)
3: \(\hat{h}_{lb},\hat{h}_{ub}=\min,\max\;h(\text{Sample}(\mathcal{C},N_{s}))\)
4: \(h_{lb},h_{ub}=\partial\text{-CROWN}(h,\mathcal{C})\)
5: \(\mathcal{B}[\mathcal{C}]=(h_{lb},h_{ub})\)
6: \(\mathcal{B}_{\Delta}[\mathcal{C}]=\max(\hat{h}_{lb}-h_{lb},\;h_{ub}-\hat{h}_{ub})\)
7: for \(i\in\{1,\ldots,N_{b}\}\) do
8:     \(\mathcal{C}_{i}=\mathcal{B}.\text{Pop}(\arg\max_{\mathcal{C}^{\prime}}\mathcal{B}_{\Delta}[\mathcal{C}^{\prime}])\)
9:     for each \(\mathcal{C}^{\prime}\in\text{DomainSplit}(\mathcal{C}_{i},N_{d})\) do
10:        \(h_{lb}^{\prime},h_{ub}^{\prime}=\partial\text{-CROWN}(h,\mathcal{C}^{\prime})\)
11:        \(\mathcal{B}[\mathcal{C}^{\prime}]=(h_{lb}^{\prime},h_{ub}^{\prime})\)
12:        \(\mathcal{B}_{\Delta}[\mathcal{C}^{\prime}]=\max(\hat{h}_{lb}-h_{lb}^{\prime},\;h_{ub}^{\prime}-\hat{h}_{ub})\)
       end for
   end for
13: \(h_{lb},h_{ub}=\min_{\mathcal{C}^{\prime}}\mathcal{B}[\mathcal{C}^{\prime}]_{0},\;\max_{\mathcal{C}^{\prime}}\mathcal{B}[\mathcal{C}^{\prime}]_{1}\)
14: return \(h_{lb},h_{ub}\)
```
**Algorithm 1** Greedy Input Branching
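A compact runnable transcription of Algorithm 1 for a one-dimensional domain is sketched below; the Lipschitz-based bound function is only a stand-in for \(\partial\)-CROWN, and all names and constants are illustrative:

```python
import numpy as np

def greedy_input_branching(h, bound_fn, lo, hi, n_b=200, n_s=4096, n_d=2):
    """Algorithm 1 on a 1-D domain [lo, hi]; bound_fn stands in for d-CROWN."""
    rng = np.random.default_rng(0)
    samples = h(rng.uniform(lo, hi, n_s))
    emp_lo, emp_hi = samples.min(), samples.max()         # empirical optima
    B = {(lo, hi): bound_fn(lo, hi)}
    gap = lambda c: max(emp_lo - B[c][0], B[c][1] - emp_hi)
    for _ in range(n_b):
        a, b = max(B, key=gap)                            # loosest branch first
        B.pop((a, b))
        edges = np.linspace(a, b, n_d + 1)                # DomainSplit
        for l, r in zip(edges[:-1], edges[1:]):
            B[(l, r)] = bound_fn(l, r)
    return min(v[0] for v in B.values()), max(v[1] for v in B.values())

h = lambda x: np.sin(3 * x) + 0.5 * x                     # toy target function
def lipschitz_bound(a, b):                                # |h'(x)| <= 3.5 everywhere
    mid, rad = 0.5 * (a + b), 0.5 * (b - a)
    return h(mid) - 3.5 * rad, h(mid) + 3.5 * rad

print(greedy_input_branching(h, lipschitz_bound, -2.0, 2.0))
```

As the loosest branches are repeatedly split, the returned global bounds tighten toward the sampled empirical optima, mirroring the behavior described above.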
## 6 Experiments
The aim of this experimental section is to (i) showcase that the Definition 1 certificates obtained with \(\partial\)-CROWN are tight compared to empirical errors computed with a large number of samples (Section 6.1), (ii) highlight the relationship between our residual-based certificates and the commonly reported solution errors (Section 6.2), and (iii) qualitatively analyze the importance of greedy input branching in the success of our method (Section 6.3).
### Certifying with \(\partial\)-CROWN
To achieve (i), we apply our post-training certification framework \(\partial\)-CROWN to two widely studied PINNs from Raissi et al. (2019), Burgers' and Schrödinger's equations, as well as to the more complex Allen-Cahn equation from Monaco and Apiletti (2023), and the Diffusion-Sorption equation from Takamoto et al. (2022). Since \(u_{\theta}\) for these PINNs uses \(\sigma=\tanh\) activations, we need to be able to linearly relax \(\sigma^{\prime}\) and \(\sigma^{\prime\prime}\) given pre-activation bounds. We propose a practical relaxation in Appendix C. All timing results were obtained on a MacBook Pro with a 10-core M1 Max CPU.
**Burgers' Equation.** This one-dimensional PDE is used in several areas of mathematics, fluid dynamics, nonlinear acoustics, gas dynamics and traffic flow, and is derived from the Navier-Stokes equations for the velocity field by dropping the pressure gradient (Raissi et al., 2019). It is defined on a temporal domain \(t\in[0,1]\) and spatial domain \(x\in[-1,1]\) as:
\[\partial_{t}u(t,x)+u(t,x)\partial_{x}u(t,x)-(0.01/\pi)\partial_{x^{2}}u(t,x)=0, \tag{3}\]
for \(u(0,x)=-\sin(\pi x)\), \(u(t,-1)=u(t,1)=0\). The solution \(u_{\theta}:\mathbb{R}^{2}\to\mathbb{R}\) is modeled by an 8-hidden layer, 20 neurons per layer network (Raissi et al., 2019). The training process took \(\sim 13.35\) minutes, and resulted in a mean \(\ell_{2}\) error of \(6.1\cdot 10^{-4}\), with a visualization in Figure 2(a).
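For reference, the residual \(f_{\theta}\) can be evaluated pointwise with automatic differentiation. The sketch below uses a randomly initialized stand-in for the trained 8-hidden-layer, 20-neuron network:

```python
import torch

torch.manual_seed(0)
dims = [2] + [20] * 8 + [1]                       # (t, x) -> u: 8 hidden layers of 20
layers = []
for i in range(len(dims) - 1):
    layers.append(torch.nn.Linear(dims[i], dims[i + 1]))
    if i < len(dims) - 2:
        layers.append(torch.nn.Tanh())
u_theta = torch.nn.Sequential(*layers)            # untrained stand-in for the PINN

def burgers_residual(t, x):
    """f_theta = u_t + u * u_x - (0.01/pi) * u_xx, via autograd."""
    t, x = t.clone().requires_grad_(True), x.clone().requires_grad_(True)
    u = u_theta(torch.stack([t, x], dim=-1)).squeeze(-1)
    g = lambda out, inp: torch.autograd.grad(out.sum(), inp, create_graph=True)[0]
    u_t, u_x = g(u, t), g(u, x)
    u_xx = g(u_x, x)
    return u_t + u * u_x - (0.01 / torch.pi) * u_xx

t, x = torch.rand(128), 2 * torch.rand(128) - 1   # samples in [0, 1] x [-1, 1]
print(burgers_residual(t, x).abs().max())         # empirical max |f_theta|
```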
**Schrödinger's Equation.** Schrödinger's equation is a classical field equation used to study quantum mechanical systems. In Raissi et al. (2019), Schrödinger's equation is defined with the temporal domain \(t\in[0,\pi/2]\) and spatial domain \(x\in[-5,5]\) as:
\[i\,\partial_{t}u(t,x)+0.5\,\partial_{xx}u(t,x)+|u(t,x)|^{2}u(t,x)=0, \tag{4}\]
where \(u:[0,\pi/2]\times\mathcal{D}\to\mathbb{C}\) is a complex-valued solution, for initial conditions \(u(0,x)=2\operatorname{sech}(x)\), and periodic boundary conditions \(u(t,-5)=u(t,5)\) and \(\partial_{x}u(t,-5)=\partial_{x}u(t,5)\). As in Raissi et al. (2019), \(u_{\theta}:\mathbb{R}^{2}\to\mathbb{R}^{2}\) is a 5-hidden layer, 100 neurons per layer network. The training took \(\sim 23.67\) minutes, and resulted in a mean \(\ell_{2}\) error of \(1.74\cdot 10^{-3}\), with a visualization in Figure 2(b).
**Allen-Cahn Equation.** The Allen-Cahn equation is a form of reaction-diffusion equation, describing the phase separation in multi-component alloy systems (Monaco and Apiletti, 2023). In 1D, it is defined on a temporal domain \(t\in[0,1]\) and spatial domain \(x\in[-1,1]\) as:
\[\partial_{t}u(t,x)+\rho u(t,x)(u^{2}(t,x)-1)-\nu\partial_{x^{2}}u(t,x)=0, \tag{5}\]
for \(\rho=5\), \(\nu=10^{-4}\), and \(u(0,x)=x^{2}\cos(\pi x)\), \(u(t,-1)=u(t,1)\). The solution \(u_{\theta}:\mathbb{R}^{2}\to\mathbb{R}\) is modeled by a 6-hidden layer, 40 neurons per layer network, and due to its complexity, it is trained using the Causal training scheme from Monaco and Apiletti (2023). The training process took \(\sim 18.56\) minutes, and resulted in a mean \(\ell_{2}\) error of \(7.9\cdot 10^{-3}\), with a visualization in Figure 2(c).
Figure 2: **Certifying with \(\partial\)-CROWN**: visualization of the time evolution of \(u_{\theta}\), and the residual errors as a function of the spatio-temporal domain (log-scale), \(|f_{\theta}|\), for **(a)** Burgers’ equation (Raissi et al., 2019), **(b)** Schrödinger’s equation (Raissi et al., 2019), **(c)** Allen-Cahn’s equation (Monaco and Apiletti, 2023), and **(d)** the Diffusion-Sorption equation (Takamoto et al., 2022).
**Diffusion-Sorption.** The diffusion-sorption equation models a diffusion system which is retarded by a sorption process, with one of the most prominent applications being groundwater contaminant transport (Takamoto et al., 2022). In Takamoto et al. (2022), the equation is defined on a temporal domain \(t\in[0,500]\) and spatial domain \(x\in[0,1]\) as:
\[\partial_{t}u(t,x)-D/R(u(t,x))\partial_{x^{2}}u(t,x)=0, \tag{6}\]
where \(D=5\times 10^{-4}\) is the effective diffusion coefficient, and \(R(u(t,x))\) is the retardation factor representing the sorption that hinders the diffusion process (Takamoto et al., 2022). In particular, we consider \(R(u(t,x))=1+\frac{1-\phi}{\phi}\rho_{s}kn_{f}u^{n_{f}-1}(t,x)\), where \(\phi=0.29\) is the porosity of the porous medium, \(\rho_{s}=2880\) is the bulk density, \(k=3.5\times 10^{-4}\) is Freundlich's parameter, and \(n_{f}=0.874\) is Freundlich's exponent. The initial and boundary conditions are defined as \(u(0,x)=0\), \(u(t,0)=1\) and \(u(t,1)=D\partial_{x}u(t,1)\). The solution \(u_{\theta}:\mathbb{R}^{2}\rightarrow\mathbb{R}\) is modeled by a 7-hidden layer, 40 neurons per layer network, and we obtain the trained parameters from Takamoto et al. (2022). The mean \(\ell_{2}\) solution error is \(9.9\cdot 10^{-2}\), with a visualization in Figure 2(d).
**\(\partial\)-CROWN certification.** We verify the global correctness conditions of the PINNs by applying the framework from Section 5. We report in Table 1 our verification of the initial conditions (1) using \(N_{b}=5k\) splits, the boundary conditions (2) using \(N_{b}=5k\) splits, and the certified bounds on the residual condition (3) using \(N_{b}=2M\) splits. We observe that \(\partial\)-CROWN approaches the empirical bounds obtained using Monte Carlo sampling while providing the guarantee that no point within the domain breaks those bounds, effectively establishing the tolerances from Definition 1.
### Empirical relation of \(|f_{\theta}|\) and \(|u_{\theta}-u|\)
One question that might arise from our certification procedure is the relationship between the PINN residual error, \(|f_{\theta}|\), and the solution error with respect to the true solution \(u\), \(|u_{\theta}-u|\), across the domain.
\begin{table}
\begin{tabular}{l l l l}
 & MC max (\(10^{4}\)) & MC max (\(10^{6}\)) & \(\partial\)-CROWN \(u_{b}\) (time [s]) \\ \hline \hline
\multicolumn{4}{l}{(a) **Burgers** (Raissi et al., 2019)} \\ \hline
(1) \(|u_{\theta}(0,x)-u_{0}(x)|^{2}\) & \(1.59\times 10^{-6}\) & \(1.59\times 10^{-6}\) & \(2.63\times 10^{-6}\) (\(116.5\)) \\ \hline
(2) \(|u_{\theta}(t,-1)|^{2}\) & \(8.08\times 10^{-8}\) & \(8.08\times 10^{-8}\) & \(6.63\times 10^{-7}\) (\(86.7\)) \\
(2) \(|u_{\theta}(t,1)|^{2}\) & \(6.54\times 10^{-8}\) & \(6.54\times 10^{-8}\) & \(9.39\times 10^{-7}\) (\(89.8\)) \\ \hline
(3) \(|f_{\theta}(x,t)|^{2}\) & \(1.23\times 10^{-2}\) & \(1.80\times 10^{-2}\) & \(1.03\times 10^{-1}\) (\(2.8\times 10^{5}\)) \\ \hline
\multicolumn{4}{l}{(b) **Schrödinger** (Raissi et al., 2019)} \\ \hline
(1) \(|u_{\theta}(0,x)-u_{0}(x)|^{2}\) & \(7.06\times 10^{-5}\) & \(7.06\times 10^{-5}\) & \(8.35\times 10^{-5}\) (\(305.2\)) \\ \hline
(2) \(|u_{\theta}(t,5)-u_{\theta}(t,-5)|^{2}\) & \(7.38\times 10^{-7}\) & \(7.38\times 10^{-7}\) & \(5.73\times 10^{-6}\) (\(545.4\)) \\
(2) \(|\partial_{x}u_{\theta}(t,5)-\partial_{x}u_{\theta}(t,-5)|^{2}\) & \(1.14\times 10^{-5}\) & \(1.14\times 10^{-5}\) & \(5.31\times 10^{-5}\) (\(2.4\times 10^{3}\)) \\ \hline
(3) \(|f_{\theta}(x,t)|^{2}\) & \(7.28\times 10^{-4}\) & \(7.67\times 10^{-4}\) & \(5.55\times 10^{-3}\) (\(1.2\times 10^{6}\)) \\ \hline
\multicolumn{4}{l}{(c) **Allen-Cahn** (Monaco and Apiletti, 2023)} \\ \hline
(1) \(|u_{\theta}(0,x)-u_{0}(x)|^{2}\) & \(1.60\times 10^{-3}\) & \(1.60\times 10^{-3}\) & \(1.61\times 10^{-3}\) (\(52.7\)) \\ \hline
(2) \(|u_{\theta}(t,-1)-u_{\theta}(t,1)|^{2}\) & \(5.66\times 10^{-6}\) & \(5.66\times 10^{-6}\) & \(5.66\times 10^{-6}\) (\(95.4\)) \\ \hline
(3) \(|f_{\theta}(x,t)|^{2}\) & \(10.74\) & \(10.76\) & \(10.84\) (\(6.7\times 10^{5}\)) \\ \hline
\multicolumn{4}{l}{(d) **Diffusion-Sorption** (Takamoto et al., 2022)} \\ \hline
(1) \(|u_{\theta}(0,x)|^{2}\) & \(0.0\) & \(0.0\) & \(0.0\) (\(0.2\)) \\ \hline
(2) \(|u_{\theta}(t,0)-1|^{2}\) & \(4.22\times 10^{-4}\) & \(4.39\times 10^{-4}\) & \(1.09\times 10^{-3}\) (\(72.5\)) \\
(2) \(|u_{\theta}(t,1)-D\partial_{x}u_{\theta}(t,1)|^{2}\) & \(2.30\times 10^{-5}\) & \(2.34\times 10^{-5}\) & \(2.37\times 10^{-5}\) (\(226.4\)) \\ \hline
(3) \(|f_{\theta}(x,t)|^{2}\) & \(1.10\times 10^{-3}\) & \(21.09\) & \(21.34\) (\(2.4\times 10^{6}\)) \\ \hline
\end{tabular}
\end{table} Table 1: Monte Carlo (MC) estimates of the maxima of the correctness conditions from Definition 1 versus the certified upper bounds \(u_{b}\) obtained with \(\partial\)-CROWN (run times in seconds), for the four PINNs considered.
By definition, achieving a low \(|f_{\theta}|\) implies \(u_{\theta}\) is a valid solution for the PDE, but there is no formal guarantee related to \(|u_{\theta}-u|\) within our framework.
Obtaining a bound on \(|u_{\theta}-u|\) is typically non-trivial: \(u\) might not be unique, and it does not necessarily admit an analytical solution, in which case it can only be approximated using a numerical solver. While some recent works perform this analysis for specific PDEs by exploiting their structure and/or smoothness properties (Mishra and Molinaro, 2022; Ryck and Mishra, 2022; Wang et al., 2022), these methods typically suffer from scalability and bound tightness issues. As such, we perform an empirical analysis on Burgers' equation using a numerical, finite-difference solver to obtain \(\tilde{u}(\mathbf{x})\) for sampled points \(\mathbf{x}\). We randomly sample \(10^{6}\) domain points (\(\mathcal{S}^{\prime}\)), and compute the maximum residual error, \(\max_{\mathbf{x}\in\mathcal{S}^{\prime}}|f_{\theta}(\mathbf{x})|\), and the empirical maximum solution error, \(\max_{\mathbf{x}\in\mathcal{S}^{\prime}}|u_{\theta}(\mathbf{x})-\tilde{u}(\mathbf{x})|\), for networks obtained at different epochs of the training process. We report the results in Figure 3, with each point corresponding to an instance of a network. As expected, there is a correlation between these errors obtained using a numerical solver, suggesting a similar correlation holds for \(|u_{\theta}-u|\).
### On the importance of greedy input branching
A key factor in the success of \(\partial\)-CROWN in achieving tight bounds of the residual is the greedy input branching procedure from Algorithm 1. To illustrate that a uniform sampling strategy would be significantly more computationally expensive, we plot in Figure 4 the relative density of branches (_i.e._, the percentage of branches per unit of input domain) in the case of Burgers' and Schrödinger's equations. As can be observed, there are clear imbalances in the branching distribution - with areas away from relative optima of \(u_{\theta}\) being comparatively under-sampled yet still achieving tight bounds - showcasing the efficiency of our strategy.
## 7 Discussion
We show that \(\partial\)-CROWN is able to obtain tight upper bounds on the correctness conditions established in Definition 1. Of particular relevance is the case of the residual condition (3) for the Diffusion-Sorption equation, for which varying the number of MC samples leads to distinct results - using \(10^{4}\) estimates puts the maximum at \(1.10\times 10^{-3}\), while \(10^{6}\) samples give an estimate of \(21.09\) - highlighting the need for our framework to obtain guarantees across the full domain. Note that the absolute values of the residual errors can be seen as a function of the PDE itself, and thus cannot be compared across different PINNs. As shown in Section 6.2, they are instead connected to PDE solution errors, and can be compared within the same system. In Appendix A we study how the training method from Shekarpaz et al. (2022) can lead to a reduction in empirical and certified errors.
One of the limitations of our method is unquestionably the running time, which for residual verification is on the order of \(10^{5}\)-\(10^{6}\) seconds for each of the PINNs studied. This is mainly due to the need to perform a high number of branchings (\(2M\)) as a result of the looseness of the bounds obtained by \(\partial\)-CROWN on each individual one. These issues become more pronounced as the input dimension grows, since the number of branches is expected to grow exponentially. In future work we aim to improve the tightness of the bounds to be able to apply our framework to larger, higher dimensional PINNs.
Figure 4: **Branching densities**: relative density of the input branching distribution obtained via Algorithm 1 applied to Burgers’ (top) and Schrödinger’s (bottom) equations.
Figure 3: **Residual and solution errors**: connection of the maximum residual error (\(\max_{\mathcal{S}^{\prime}}|f_{\theta}|\)) and the maximum solution error, \(\max_{\mathcal{S}^{\prime}}|u_{\theta}-\tilde{u}|\), for networks at different epochs of the training process (in orange). |
2305.04122 | ConvPIM: Evaluating Digital Processing-in-Memory through Convolutional
Neural Network Acceleration | Processing-in-memory (PIM) architectures are emerging to reduce data movement
in data-intensive applications. These architectures seek to exploit the same
physical devices for both information storage and logic, thereby dwarfing the
required data transfer and utilizing the full internal memory bandwidth.
Whereas analog PIM utilizes the inherent connectivity of crossbar arrays for
approximate matrix-vector multiplication in the analog domain, digital PIM
architectures enable bitwise logic operations with massive parallelism across
columns of data within memory arrays. Several recent works have extended the
computational capabilities of digital PIM architectures towards the
full-precision (single-precision floating-point) acceleration of convolutional
neural networks (CNNs); yet, they lack a comprehensive comparison to GPUs. In
this paper, we examine the potential of digital PIM for CNN acceleration
through an updated quantitative comparison with GPUs, supplemented with an
analysis of the overall limitations of digital PIM. We begin by investigating
the different PIM architectures from a theoretical perspective to understand
the underlying performance limitations and improvements compared to
state-of-the-art hardware. We then uncover the tradeoffs between the different
strategies through a series of benchmarks ranging from memory-bound vectored
arithmetic to CNN acceleration. We conclude with insights into the general
performance of digital PIM architectures for different data-intensive
applications. | Orian Leitersdorf, Ronny Ronen, Shahar Kvatinsky | 2023-05-06T19:23:10Z | http://arxiv.org/abs/2305.04122v1 | # ConvPIM: Evaluating Digital Processing-in-Memory through Convolutional Neural Network Acceleration
###### Abstract
Processing-in-memory (PIM) architectures are emerging to reduce data movement in data-intensive applications. These architectures seek to exploit the same physical devices for both information storage and logic, thereby dwarfing the required data transfer and utilizing the full internal memory bandwidth. Whereas analog PIM utilizes the inherent connectivity of crossbar arrays for _approximate_ matrix-vector multiplication in the analog domain, digital PIM architectures enable bitwise logic operations with massive parallelism across columns of data within memory arrays. Several recent works have extended the computational capabilities of _digital PIM_ architectures towards the _full-precision_ (single-precision floating-point) acceleration of convolutional neural networks (CNNs); yet, they lack a comprehensive comparison to GPUs. In this paper, we examine the potential of digital PIM for CNN acceleration through an updated quantitative comparison with GPUs, supplemented with an analysis of the overall limitations of digital PIM. We begin by investigating the different PIM architectures from a theoretical perspective to understand the underlying performance limitations and improvements compared to state-of-the-art hardware. We then uncover the tradeoffs between the different strategies through a series of benchmarks ranging from memory-bound vectored arithmetic to CNN acceleration. We conclude with insights into the general performance of digital PIM architectures for different data-intensive applications.
Digital processing-in-memory (PIM), memory wall, convolutional neural networks (CNNs), floating-point numbers.
## 1 Introduction
The _memory wall_ serves as a fundamental bottleneck to computing systems for _memory-intensive_ applications as the memory access occasionally becomes orders of magnitude more expensive than the computation itself [1]. Therefore, processing-in-memory (PIM) solutions aim to tackle the memory wall via memory architectures with embedded processing capabilities. That is, the traditional read/write interface is supplemented with _logic_ operations that perform vectord computation within the memory without transferring the data through the memory wall bottleneck. While early proposals for PIM [2, 3] integrated small processors within the memory architecture, recent solutions perform logic by using the physical properties of the underlying memory devices themselves.
Numerous emerging PIM architectures exploit these physical properties towards _digital bitwise_ logic within memory arrays [4]. For example, memristive stateful logic [5, 6, 7, 8, 9] utilizes the memristor [10], an emerging physical device with variable resistance, to perform logic in the resistive domain on binary values (e.g., low resistance is logical one and high resistance is logical zero). For memristors connected as seen in Figure 1(a), applying fixed voltages at the memristor terminals causes the resistance of the output memristor to become conditional on the resistances of the input memristors (e.g., the logical NOR of the inputs). As this circuit appears within every row of a crossbar array of memristors (see Figure 1(b)), we find that applying fixed voltages on the bitlines of the array can simultaneously induce a logic gate within every row of the crossbar array. Abstractly, we can consider a crossbar array as a binary matrix of memory, and stateful logic enables logic operations on columns of bits with \(O(1)\) time (e.g., NOR of two columns into a third column), see Figure 1(e). Furthermore, this abstract model covers additional architectures such as in-DRAM computing [11, 12]. This bitwise parallelism can be exploited towards high-throughput arithmetic (e.g., addition, multiplication) for both fixed-point and floating-point numbers within the memory in a bit-serial element-parallel fashion: the logic gates that construct the arithmetic function are performed serially, yet in parallel across all rows of an array for parallel vectored execution. This can provide massive throughput and energy benefits over traditional hardware such as GPUs _when the cache locality is low_ (e.g., vectored arithmetic on vectors stored only in the main memory) [13].
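A minimal simulation of this abstract model, with the crossbar as a binary matrix and NOR applied across whole columns in a single step, could look as follows (array sizes are illustrative):

```python
import numpy as np

# Figure 1(e) abstraction: a crossbar is a binary matrix, and one cycle applies
# a bitwise NOR across entire columns, in parallel over all rows.
rng = np.random.default_rng(0)
xbar = rng.integers(0, 2, size=(1024, 16), dtype=np.uint8)   # r x c memory array

def nor_columns(mem, a, b, out):
    """One PIM cycle: mem[:, out] = NOR(mem[:, a], mem[:, b]) in every row."""
    mem[:, out] = 1 - (mem[:, a] | mem[:, b])

nor_columns(xbar, 0, 1, 2)                 # O(1) time regardless of row count
nor_columns(xbar, 2, 2, 3)                 # NOT(x) = NOR(x, x)
assert np.all(xbar[:, 3] == (xbar[:, 0] | xbar[:, 1]))
```

Since every row computes simultaneously, the cost of `nor_columns` is independent of the number of rows, which is the source of the throughput benefits discussed below.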
Recent works [14, 15, 16, 17] have proposed utilizing digital PIM architectures towards convolutional neural network (CNN) acceleration. While emerging _analog PIM_ approaches [18] provide significant acceleration over GPUs, they suffer from low accuracy due to noise in the analog domain and high costs for conversion between digital and analog domains. By supporting full-precision floating-point computation, digital
Fig. 1: Examples of digital PIM using (a, b) memristive [5, 6, 7] and (c, d) DRAM [11, 12] memories. Both follow (e) an abstract model of bitwise column operations in \(O(1)\) time. Figure adapted from [13].
PIM approaches have the potential to overcome these flaws for reliable CNN acceleration. FloatPIM [14] was the first such work - presenting vast improvement over GPU performance; several additional recent works [15, 16, 17] have since built upon the FloatPIM architecture and evaluation. Yet, there is a need for an updated comprehensive evaluation of these works and a comparison to state-of-the-art hardware as:
1. FloatPIM utilized several routines that were either erroneous (e.g., the floating-point addition algorithm only supported unsigned numbers [13, 15]) or have been since updated (e.g., convolution [19]).
2. The GPU baseline considered in FloatPIM (and later utilized in the additional works) stores the weights in the CPU memory during the computation [14]. In this paper, we demonstrate that storing the weights in the GPU memory significantly improves the GPU baseline.
This paper aims to simultaneously provide an updated evaluation of CNN acceleration with digital PIM while highlighting the underlying limitations of digital PIM architectures to give further insight into new applications. The paper is structured as follows. Section 2 presents the evaluation methodology of comparing memristive and DRAM PIM architectures to the NVIDIA A6000 GPU. The paper then progresses through several benchmarks in Sections 3, 4, and 5, first considering routines utilized in CNN acceleration and then various large-scale CNN models. We analyze the unique factors involved in each benchmark and develop several metrics that provide further insight into the performance of digital PIM architectures. Section 6 discusses the implications of these results, revealing the key characteristics that may indicate the potential of future applications for digital PIM acceleration.
## 2 Methodology
This section details the evaluation methodology1 that is utilized throughout this paper to compare the digital PIM architectures to GPUs, as summarized in Table I. While the exact parameters may vary depending on the specific GPU or PIM implementation, the overall trends evaluated in this paper remain.
Footnote 1: The code repository for this paper is publicly available at [https://github.com/deitersdorf/ConvPIM](https://github.com/deitersdorf/ConvPIM).
### _GPU_
The GPU results provided throughout this paper are based on the NVIDIA A6000 GPU, including both experimental and theoretical results. The experimental results are derived from the PyTorch [20] library for general-purpose neural network acceleration, utilizing the built-in PyTorch profiler connected to the NVIDIA Nsight Systems profiler for GPU metrics (e.g., DRAM bandwidth, L1/L2 hit rate) and NVIDIA NVML [21] for power measurements. The theoretical results reflect _compute-bound_ performance and are thus derived from the theoretical peak computation throughput provided in the datasheet [22].
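As a hedged sketch of the PyTorch side of this measurement pipeline (the Nsight Systems and NVML hooks are external tools, and the workload below is a placeholder rather than one of the evaluated benchmarks):

```python
import torch
from torch.profiler import profile, ProfilerActivity

model, x = torch.nn.Linear(4096, 4096), torch.randn(256, 4096)  # placeholder workload
acts = [ProfilerActivity.CPU]
if torch.cuda.is_available():                 # profile CUDA kernels when a GPU exists
    model, x = model.cuda(), x.cuda()
    acts.append(ProfilerActivity.CUDA)

with profile(activities=acts, record_shapes=True) as prof:
    for _ in range(10):
        y = model(x)
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=5))
```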
### _Digital PIM_
We consider a simple digital PIM architecture consisting of several crossbar arrays that may all operate simultaneously according to the abstract model depicted in Figure 1(e). We construct the architecture to match the overall GPU memory size of 48GB. The crossbar dimension, per-gate energy, and clock frequency are derived from state-of-the-art digital PIM architectures for memristive [23, 24] and DRAM [11, 12] PIM. The maximum power consumption is derived from the maximal parallelism at full duty cycle. We compare both throughput and normalized throughput per Watt (energy efficiency).
## 3 High-Throughput Arithmetic
We begin by analyzing in this section the elementary benchmark of memory-bound vectored arithmetic. Assume that two \(n\)-dimensional vectors \(\mathbf{u},\mathbf{v}\) of \(N\)-bit numbers (fixed-point or floating-point) reside in the main memory; the goal is to compute an element-wise elementary operation \(\circ\in\{+,-,*,/\}\) and store the result as an \(n\)-dimensional vector \(\mathbf{z}\) in the memory.
The bit-serial element-parallel approach extends the bitwise parallelism of digital PIM towards maximal arithmetic throughput. Consider an \(r\times c\) crossbar with two \(r\)-dimensional vectors \(\mathbf{u},\mathbf{v}\) stored with a single \(N\)-bit element per row (from each vector), as shown in Figure 2. This approach performs the arithmetic in parallel across all rows and all crossbars by constructing the arithmetic operation from a _serial_ sequence of logic gates. For example, \(N\)-bit fixed-point addition is performed by first constructing a 1-bit full-adder from 9 serial NOR gates [13, 25], and then performing ripple-carry addition by serially executing \(N\) full-adders. While the latency is high at \(9\cdot N=O(N)\) cycles, the throughput is also high at \(R/O(N)\) operations per cycle, where \(R\) is the total number of rows in the memory (i.e., \(r\) multiplied by the number of crossbars). Floating-point operations were originally considered incompatible with digital PIM [26] due to the control flow involved with the alignment and normalization of floating-point numbers. This hurdle was first overcome by FloatPIM [14]; however, several aspects of the design were erroneous [13, 15], and it also required a Content Addressable Memory (CAM) within every array. Conversely, AritPIM [13] recently proposed a suite of floating-point algorithms that adheres to the IEEE 754 standard exactly while requiring no modifications to the abstract model.
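The following sketch simulates this bit-serial element-parallel adder on the abstract column model: each NOR acts on an entire column of rows at once, and the full adder below is one known 9-gate NOR decomposition (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, R = 32, 1024                                   # 32-bit words, 1024 parallel rows
NOR = lambda p, q: 1 - (p | q)                    # one O(1) column operation

def full_adder(a, b, c):
    """1-bit full adder built from 9 column-wise NOR gates."""
    g1 = NOR(a, b); g2 = NOR(a, g1); g3 = NOR(b, g1)
    g4 = NOR(g2, g3)                              # XNOR(a, b)
    g5 = NOR(g4, c); g6 = NOR(g4, g5); g7 = NOR(c, g5)
    return NOR(g6, g7), NOR(g1, g5)               # (sum, carry_out)

def pim_add(A, B):
    """Bit-serial element-parallel add: 9*N gate cycles, all R rows at once."""
    S = np.zeros_like(A)
    c = np.zeros(R, dtype=np.uint8)
    for k in range(N):                            # LSB-first ripple carry
        S[:, k], c = full_adder(A[:, k], B[:, k], c)
    return S

to_bits = lambda v: ((v[:, None] >> np.arange(N, dtype=np.uint64)) & 1).astype(np.uint8)
x = rng.integers(0, 2**31, size=R, dtype=np.uint64)
y = rng.integers(0, 2**31, size=R, dtype=np.uint64)
S = pim_add(to_bits(x), to_bits(y))
assert np.all(S.dot(1 << np.arange(N, dtype=np.uint64)) == x + y)
```

Counting the gate invocations reproduces the \(9\cdot N\)-cycle latency quoted above, while all \(R\) additions complete together.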
\begin{table}
\begin{tabular}{|c|l|} \hline
**Configuration** & **Parameters** \\ \hline \hline
NVIDIA A6000 GPU & _Number of Cores:_ 10752; _Memory Size:_ 48 GB; _Memory Bandwidth:_ 768 GB/s; _Clock Frequency:_ 1410 MHz; _Max Power:_ 300 W \\ \hline
Memristive / DRAM PIM & _Crossbar Size:_ 1024 / 65536 \(\times\) 1024; _Memory Size:_ 48 GB; _Gate Energy:_ 6.4 fJ / 391 fJ; _Clock Frequency:_ 333 MHz / 0.5 MHz; _Max Power:_ 860 W / 80 W \\ \hline
\end{tabular}
\end{table} TABLE I: Evaluation Parameters

Fig. 2: The bit-serial element-parallel approach to high-throughput in-memory arithmetic that performs vectored operations in an \(r\times c\) crossbar as a sequence of parallel logic operations on columns [13].

We compare the performance of the suite of arithmetic functions proposed in AritPIM [13] to experimental and theoretical GPU performance in Figure 3. The experimental GPU performance is bounded by the memory bandwidth for reading the input vectors \(\mathbf{u},\mathbf{v}\) and writing the result \(\mathbf{z}\), as indicated by the \(>94\%\) DRAM memory bandwidth recorded across all functions. Therefore, it depends only on the bit width of the underlying arithmetic operation. The theoretical compute-bound GPU performance reflects the theoretical throughput provided in an ideal circumstance where memory operations are not required. For digital PIM, in the spirit of [27], we define the _compute complexity_ (CC) as the number of logic gates performed per bit (e.g., \(9N/(3N)=3\) for \(N\)-bit fixed-point addition as the inputs and outputs occupy \(3N\) bits). Naturally, we find an inverse relationship between the CC of an arithmetic operation and the improvement over experimental GPU performance, as shown in Figure 4. Therefore, PIM is most effective compared to memory-bound GPU when the CC is low. Notice that 16-bit and 32-bit addition possess the same CC as PIM addition latency is linear in \(N\) (doubling \(N\) doubles the PIM latency).
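The CC metric and the resulting throughput model can be reproduced with back-of-the-envelope arithmetic from the Table I parameters; these are rough estimates rather than the measured results of Figures 3 and 4:

```python
# Back-of-the-envelope throughput model from the Table I parameters.
N = 32                                    # bits per element
gates_add = 9 * N                         # serial NOR gates for N-bit fixed-point add
cc = gates_add / (3 * N)                  # compute complexity: gates per bit of I/O
print(f"CC(fixed add) = {cc:.1f}")        # -> 3.0, as in the text

# Memory-bound GPU ceiling: read u, v and write z through 768 GB/s.
gpu_ops = 768e9 / (3 * N / 8)             # operations/s limited by 12 bytes per op
# Memristive PIM: all rows compute in parallel, one gate per cycle at 333 MHz.
rows = 48 * 2**33 // 1024                 # 48 GB of memory as 1024-bit rows
pim_ops = rows * 333e6 / gates_add
print(f"GPU (memory-bound) ~ {gpu_ops:.2e} op/s, PIM ~ {pim_ops:.2e} op/s")
```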
## 4 Matrix Multiplication and Convolution
This section investigates the performance of matrix multiplication and 2D convolution via digital PIM. The extension of the arithmetic parallelism provided in the previous section to such matrix operations was first investigated in FloatPIM [14] and then generalized in MatPIM [19]. These works express the matrix operations as a serial sequence of vectored arithmetic operations, thereby utilizing digital PIM for the vector parallelism. The matrix operations are characterized by high data reuse, and thus the experimental memory-bound GPU approaches the theoretical compute-bound GPU performance.
Figure 5 compares the performance of digital PIM approaches to experimental and theoretical GPU performance for _batched_ matrix multiplication on many pairs of matrices (both of dimension \(n\times n\)). In this operation, we find reuse of \(O(n)\) as there is a total of \(O(n^{3})\) operations operating on \(O(n^{2})\) data. Therefore, as \(n\) increases, we expect the performance gap between experimental memory-bound GPU and theoretical compute-bound GPU will diminish. Indeed, we find in Figure 5 that the gap shrinks with increasing \(n\) (e.g., \(n=32\) has a significantly larger gap than \(n=128\)). Overall, we conclude that starting at \(n=128\), the GPU performance surpasses that of digital PIM due to the data reuse mitigating the memory wall bottleneck. Two-dimensional convolution with a \(k\times k\) kernel on a \(W\times H\) image possesses similar considerations with data reuse of \(O(k^{2})\) (\(O(WHk^{2})\) operations on \(O(WH)\) data).
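The shrinking gap can be anticipated from arithmetic intensity alone. Assuming an ideal single pass over the three matrices (real kernels tile and re-read data), the intensity \(2n^{3}/(12n^{2})=n/6\) flop/byte can be compared against the machine balance implied by the Table I parameters; both numbers below are our rough estimates:

```python
# Arithmetic intensity of n x n matmul vs. an estimated A6000 machine balance.
peak_flops = 2 * 10752 * 1410e6               # FMA peak from the Table I parameters
balance = peak_flops / 768e9                  # ~39 flop/byte to become compute-bound
for n in (32, 64, 128, 256):
    intensity = 2 * n ** 3 / (3 * n ** 2 * 4) # float32, single pass over A, B, C
    regime = "compute-bound" if intensity > balance else "memory-bound"
    print(n, f"{intensity:.1f} flop/byte", regime)
```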
## 5 CNN Inference and Training
We culminate the benchmark analysis by evaluating full-precision inference and training of convolutional neural networks (CNN). The acceleration of such neural networks primarily involves matrix multiplication for fully connected layers and 2D convolution for the convolutional layers, as well as element-wise operations for activation functions (e.g., ReLU). We consider a benchmark consisting of the AlexNet [28], GoogLeNet [29] and ResNet-50 [30] convolutional neural networks with the ImageNet [31] dataset. We evaluate the digital PIM performance by considering only the required matrix multiplication and 2D convolution operations, thereby providing an upper bound on the digital PIM performance that is also close to the true performance. The experimental GPU performance is measured through the PyTorch [20] implementations of the CNNs with random input images of size \(224\times 224\times 3\)[31].
In Figure 6, we compare the upper-bound digital PIM performance with the GPU performance on CNN inference.
Fig. 4: Inverse relationship between compute complexity and digital PIM improvement over memory-bound GPU (experimental).
Fig. 5: Comparison of throughput (matrix multiplications per second) and normalized throughput per Watt (energy efficiency) for matrix multiplication with \(n\times n\) matrices of 32-bit floating-point numbers.
Fig. 3: Comparison of throughput (operations per second) and normalized throughput per Watt (energy efficiency) for the addition and multiplication of 32-bit fixed-point and 32-bit floating-point (FP) numbers.
We find that the experimental GPU performance is close to the theoretical peak performance across all models due to the moderately-high data reuse (\(55-67\%\) L2 hit rate) - notice that the gap in ResNet and GoogLeNet is more significant than in AlexNet since some of their operations have low reuse (e.g., residual connections, \(1\times 1\) convolutions). For the same reason, we find that the digital PIM performance is not significantly better than the GPU performance, and digital PIM energy efficiency is slightly worse. The results for CNN training are provided in the code repository and portray similar trends to Figure 6.
## 6 Discussion and Conclusion
Recent works have proposed to utilize emerging digital bit-wise PIM approaches towards the full-precision acceleration of CNNs as an alternative to analog PIM. This approach benefits from the high accuracy of digital floating point operations and the reduction in data transfer due to the in-memory operations. However, through a series of benchmarks starting with element-wise vector operations and culminating in CNN inference and training, we demonstrate that digital PIM approaches, with current parameters, are unable to surpass GPU performance for full-precision CNN acceleration.
We identify the poor performance of digital PIM in full-precision CNN acceleration as arising from a combination of two factors: high compute complexity for the underlying arithmetic operations, and high data reuse in CNN architectures. The analysis in Figure 4 reveals that floating-point multiplication possesses relatively high compute complexity and thus already has a low throughput improvement over GPU performance. Furthermore, Figure 3 demonstrates that this improvement originates entirely from the memory wall bottleneck throttling the GPU performance as the theoretical compute-bound GPU results surpass digital PIM. Therefore, when we increase the data reuse in Figure 5, we find that the PIM performance becomes inferior to GPU performance as the memory wall is no longer the bottleneck. That is, we find that it is the combination of the high compute complexity and the high data reuse that leads to the inferior digital PIM performance in CNN acceleration. Hence, future work may focus on applications that prioritize arithmetic operations with low compute complexity or low data reuse in GPUs.
## Acknowledgments
This work was supported by the European Research Council through the European Union's Horizon 2020 Research and Innovation Programme under Grant 757259, by the European Research Council through the European Union's Horizon Research and Innovation Programme under Grant 101069336, and by the Israel Science Foundation under Grant 1514/17.
|
2303.08496 | Psychophysics of Artificial Neural Networks Questions Classical Hue
Cancellation Experiments | We show that classical hue cancellation experiments lead to human-like
opponent curves even if the task is done by trivial (identity) artificial
networks. Specifically, human-like opponent spectral sensitivities always
emerge in artificial networks as long as (i) the retina converts the input
radiation into any tristimulus-like representation, and (ii) the post-retinal
network solves the standard hue cancellation task, e.g. the network looks for
the weights of the cancelling lights so that every monochromatic stimulus plus
the weighted cancelling lights match a grey reference in the (arbitrary) color
representation used by the network. In fact, the specific cancellation lights
(and not the network architecture) are key to obtain human-like curves: results
show that the classical choice of the lights is the one that leads to the best
(more human-like) result, and any other choices lead to progressively different
spectral sensitivities. We show this in two ways: through artificial
psychophysics using a range of networks with different architectures and a
range of cancellation lights, and through a change-of-basis theoretical analogy
of the experiments. This suggests that the opponent curves of the classical
experiment are just a by-product of the front-end photoreceptors and of a very
specific experimental choice but they do not inform about the downstream color
representation. In fact, the architecture of the post-retinal network (signal
recombination or internal color space) seems irrelevant for the emergence of
the curves in the classical experiment. This result in artificial networks
questions the conventional interpretation of the classical result in humans by
Jameson and Hurvich. | Jorge Vila-Tomás, Pablo Hernández-Cámara, Jesús Malo | 2023-03-15T10:13:34Z | http://arxiv.org/abs/2303.08496v2 | # Psychophysics of Artificial Neural Networks
###### Abstract
We show that classical hue cancellation experiments lead to human-like opponent curves even if the task is done by trivial (_identity_) artificial networks. Specifically, human-like opponent spectral sensitivities always emerge in artificial networks as long as (i) the _retina_ converts the input radiation into any tristimulus-like representation, and (ii) the post-retinal _network_ solves the standard hue cancellation task, e.g. the network looks for the weights of the cancelling lights so that every monochromatic stimulus plus the weighted cancelling lights match a grey reference in the (arbitrary) color representation used by the network. In fact, the specific cancellation lights (and not the network architecture) are key to obtain human-like curves: results show that the classical choice of the lights is the one that leads to the best (more human-like) result, and any other choices lead to progressively different spectral sensitivities. We show this in two ways: through _artificial psychophysics_ using a range of networks with different architectures and a range of cancellation lights, and through a _change-of-basis theoretical analogy_ of the experiments. This suggests that the opponent curves of the classical experiment are just a by-product of the front-end photoreceptors and of a very specific experimental choice but they do not inform about the downstream color representation. In fact, the architecture of the post-retinal network (signal recombination or internal color space) seems irrelevant for the emergence of the curves in the classical experiment. This result in artificial networks questions the conventional interpretation of the classical result in humans by Jameson and Hurvich.
Artificial Psychophysics. Spectral Sensitivity of Artificial Networks. Visual Neuroscience. Hue Cancellation Experiments. Opponent Color Coding.
## 1 Introduction
The classical hue cancellation experiments [1, 2] are usually considered as the first psychophysical quantification of Hering's intuition on opponent color coding in the human brain [3, 4, 5]. As an example, an influential textbook on visual neuroscience [6] introduces hue cancellation as follows: _"Several experimental observations, beginning in the mid-1950s, catapulted opponent-colors theory from a special-purpose model, known only to color specialists, to a central idea in Vision Science. The first was a behavioral experiment that defined a procedure for measuring opponent-colors, the hue cancellation experiment. By providing a method of quantifying the opponent-colors insight, Hurvich and Jameson made the idea accessible to other scientists, opening a major line of inquiry."_
The scientific question to be solved by the _hue cancellation experiment_ is about the post-retinal neural architecture, or recombination of color signals after photodetection. This is illustrated by Fig. 1.a, based on the original diagram in [2]. The authors confront the Young-Helmholtz trichromatic theories of color vision with the qualitative opponent theory of Hering. They propose an architecture to get the Achromatic, Tritanopic (red-green) and Deuteranopic (yellow-blue) sensors (ATD) from the front-end photoreceptors tuned to Long, Medium, and Short (LMS) wavelengths, and hue cancellation would be the tool to quantify the spectral sensitivity of the ATD mechanisms in the proposed architecture.
In this work we present a counter-example based on artificial networks (on automatic differentiation) that suggests that the results of conventional hue cancellation experiments do not provide conclusive information on the inner color representation of the system that mediates the task (the post-retinal network, black box in Fig.1.b). Therefore, strictly speaking, the curves from the classical hue cancellation experiments would not be measuring the sensitivity of those ATD mechanisms.
In particular, we show that _identity networks_ develop opponent red-green and yellow-blue color valence functions which are quite similar to the human curves independently of the color representation (LMS, RGB or ATD). What we refer to as _identity network_ is a trivial architecture whose (3-dimensional) output is exactly the same as its (3-dimensional) input in each spatial location. This trivial network, which already operates in a tristimulus-related representation (say a certain standard LMS cone space [7], or even an arbitrary, device-dependent, digital count RGB space [8, 9]), may apply no opponent color coding whatsoever and still get the human-like curves (in contrast to the specific architecture assumed in Fig. 1.a). Therefore, the opponent curves that emerge do not strictly inform about the inner (eventually opponent) color representation of the post-retinal neural network. Instead, they are a by-product of the (retinal) tristimulus representation of the input radiation and of the choices in the conventional experimental setting (e.g. the wavelengths of the spectral cancellation lights). To explore this result in more detail, we perform multiple hue cancellation experiments with cancellation lights different from the classical ones and we obtain a clear dependence on the choice of the spectral cancellation lights, achieving the best human-like behaviour only in the case of the classical cancellation lights. This result is confirmed by an analysis of the hue cancellation experiment using a change-of-basis analogy.
Figure 1: **(a)** Elements of the competing theories of Young-Helmholtz vs Hering, and **(b)** Learning process to get the weights that cancel the hue of a certain monochromatic stimulus of wavelength \(\lambda\). Following the original diagram in [2], **Figure 1.a** displays the sensors of the Young-Helmholtz theory, with all-positive sensitivities tuned to _Long_, _Medium_, and _Short_ (LMS) wavelengths, and a possible architecture of a network that would lead to the sensors of the Hering theory: two chromatic sensors with opponent sensitivities, the _Tritanopic_ sensor (T) tuned to red-green and the _Deuteranopic_ sensor (D) tuned to yellow-blue, together with an _Achromatic_ sensor (A) with a wide all-positive sensitivity. **Figure 1.b** illustrates the hue cancellation experiment: the (natural or artificial) observer _looks for_ the weights of the spectral cancelling lights so that a mixture of these cancellation stimuli with the original monochromatic input matches a grey reference (a stimulus with no hue). In this setting, hue cancellation reduces to distance minimization between the responses \(R^{\prime}\) to the white and to the considered \(\lambda\) plus the weighted cancelling lights.
**The question** is whether this search of the weights reveals something about the computation or architecture of the _brain-network_ module in Fig. 1.b that transforms \(R\) into \(R^{\prime}\), or about the nature of the inner color representation \(R^{\prime}\).
## 2 Methods: hue cancellation experiments in artificial networks
### General setting
In this work the artificial hue cancellation experiment is a matching problem in the color representation used by the artificial network. Take the setting represented in Fig 1.b: for any arbitrary spectral input of wavelength \(\lambda\), \(E_{\lambda}\), and a grey reference, \(W\), the network takes the input retinal representation of stimulus and reference, \(R(E_{\lambda})\) and \(R(W)\), and transforms them into the inner representation \(R^{\prime}(E_{\lambda})\) and \(R^{\prime}(W)\). We make no assumption about the nature of this representation \(R^{\prime}\). In Fig 1.b, \(R^{\prime}\) is represented by red, green and blue layers just for visualization; this does not mean we assume them to be LMS-like. In the initial situation, when no cancelling lights are added, the distance \(|R^{\prime}(W)-R^{\prime}(E_{\lambda})|\) will have a large value. The goal in this matching problem is looking for the optimal weights \(w_{\lambda_{c}}^{\star}(\lambda)\) of the cancelling lights that minimize the distance between the reference and the monochromatic stimulus plus the weighted cancelling lights:
\[w_{\lambda_{c}}^{\star}(\lambda)=\operatorname*{arg\,min}_{w_{\lambda_{c}}( \lambda)}\left|R^{\prime}(W)-R^{\prime}\left(E_{\lambda}\oplus\sum_{\lambda_{ c}}w_{\lambda_{c}}(\lambda)\,E_{\lambda_{c}}\right)\right| \tag{1}\]
where the subtraction in the distance is regular subtraction between vectors, but \(\oplus\) stands for additive superposition of radiations. Physical superposition is always positive so, in this case, as conventionally done in color matching experiments [10], we assume that _negative_ weights in the superposition to \(E_{\lambda}\) physically mean the corresponding amount of _positive_ superposition to \(W\). In short, the cancellation experiment should tell us about the change of color representations, from the input space \(R\) to the output \(R^{\prime}\). In principle, the goal function in Eq. 1 can be applied to regular tristimulus vectors (where vector summation has perceptual meaning) but also to arbitrary, engineering-oriented, device-dependent color representations such as digital counts in RGB.
The matching problem described above is just a difference minimization problem which is well suited for learning based on automatic differentiation. In this _artificial psychophysics_ setting, the network architecture of the black-box in Fig. 1.b is fixed but the energy of the cancelling lights (the weights \(w_{\lambda_{c}}\)) is modified in each iteration to minimize the distance in Eq. 1.
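A minimal sketch of this loop is given below. The \(3\times N\) sensor matrix, the trivial identity network, and all constants are illustrative placeholders rather than the models used in the experiments; negative weights are routed to the reference, following the convention of Eq. 1:

```python
import torch

torch.manual_seed(0)
wl = torch.linspace(400.0, 700.0, 301)                   # 1 nm wavelength grid
gauss = lambda mu, s=8.0: torch.exp(-0.5 * ((wl - mu) / s) ** 2)

# Placeholder retina: 3 smooth, all-positive sensitivities (not human fundamentals).
M = torch.stack([gauss(mu, 60.0) for mu in (600.0, 540.0, 450.0)])
retina = lambda spectrum: M @ spectrum                   # R(.)
network = lambda r: r                                    # trivial identity network

W_ref = torch.full_like(wl, 0.1)                         # equienergetic grey reference
E_test = gauss(520.0)                                    # quasi-monochromatic stimulus
cancels = torch.stack([gauss(l) for l in (475.0, 500.0, 580.0, 700.0)])

w = torch.zeros(4, requires_grad=True)
opt = torch.optim.Adam([w], lr=0.05)
for _ in range(3000):
    opt.zero_grad()
    mix = E_test + torch.clamp(w, min=0) @ cancels       # positive weights join the test
    ref = W_ref + torch.clamp(-w, min=0) @ cancels       # negative ones join the white
    loss = (network(retina(ref)) - network(retina(mix))).pow(2).sum()
    loss.backward()
    opt.step()
print(w.detach(), loss.item())                           # w*_{lambda_c} at 520 nm
```

Sweeping the test wavelength over the visible range and recording the converged weights yields the four weighting functions that are later combined into valence curves.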
Appendix A elaborates on how to approximate monochromatic stimuli for artificial networks intended to work with restricted stimuli such as regular digital images. Appendix B elaborates on how the four individual weighting functions we get from the artificial nets, \(w_{\lambda_{c}}^{\star}(\lambda)\), are combined into the final valence functions (that happen to be red-green and yellow-blue in the case of the conventional \(\lambda_{c}\)'s).
### Hue cancellation with artificial networks beyond the classical setting
This artificial simulation of the hue cancellation experiment can be applied with any architecture in the fixed network (black box in Fig. 1.b) and with any choice of \(\lambda_{c}\)'s for the cancelling lights.
If human-like opponent channels emerge from the simulations even if the network does not have a biologically plausible architecture, and independently of the post-retinal space, this means that the result of the classical experiment cannot be interpreted as an indication of the existence of post-retinal mechanisms performing the computation suggested in Fig. 1.a.
Refutation of the conventional interpretation of the classical experiment is stronger if the emergence of opponent curves mainly happens with a particular choice of \(\lambda_{c}\)'s. This would mean that, instead of arising from interesting properties of the post-retinal mechanisms, the result comes from a fortunate selection of the experimental setting. For this reason it is interesting to simulate hue cancellation for a range of alternative \(\lambda_{c}\)'s different from the classical experiment.
### Differences with the experimental setting for humans
In the original experiments with humans, the cancelling lights had the same energy and their wavelengths were slightly different for the two observers J/H: \(467/475\) nm (blue), \(490/500\) nm (green), \(588/580\) nm (yellow), and \(700/700\) nm (red). In all our simulations the cancelling lights always had the same initial energy and we used an equienergetic stimulus as grey reference. In simulating the classical setting, our wavelengths were the ones for observer H (\(475\), \(500\), \(580\) and \(700\) nm). In our experiments we use (without loss of generality) quasi-monochromatic lights so that they can be properly represented in digital values to be processed by conventional artificial networks. These stimuli are defined by a narrow Gaussian spectral radiance added on top of a low-radiance equienergetic background. Appendix A shows examples of these stimuli.
In solving the distance minimization problem, the iterative variation of the weights was applied to the height of the narrow Gaussian of the quasi-monochromatic cancelling lights. These differences (cancelling wavelengths similar to the ones in the classical experiment and narrow-spectrum quasi-monochromatic stimuli) do not imply fundamental differences with the classical setting.
Human observers in the classical experiment do not change all 4 weights at the same time, but (just for the observer's convenience) they move one at a time (judging how the complementary hue disappears) and repeat the experiment 4 times. This is not a fundamental difference because (at the expense of a longer time per wavelength) after the "first cancellation" the observer could also cancel the remaining hue and then match the response to a grey. Additionally, in any part of the spectrum, it is the experimenter in the classical experiment who lets the observer use "the appropriate" cancellation light. This is not a fundamental difference either because if the observers could look for the cancellation lights in pairs, simultaneous modification of the opponent cancellation lights would null each other and the effect would be the same as using a single one.
In the setting that we propose to simulate hue cancellation in artificial systems, the only difference with regard to the experiments in humans is that humans may not need an achromatic reference since they already have the concept of what an achromatic stimulus is, and hence they modify the weights of the cancellation lights to match this mental concept. In the case of artificial systems, obtaining the concept of an achromatic reference for hue cancellation is not a problem either: it could be computed from natural images using the classical _grey world assumption_ [11], or one can simply take a flat-spectrum reference, as we do here.
### The trivial identity network
The counter-example presented in this note is based on a trivial network architecture. Its output is the same as the input: for a color \(C\), represented at the input by the array \(R(C)\), the response \(R^{\prime}(C)\) is just:
\[R^{\prime}(R(C))=I\cdot R(C)=R(C) \tag{2}\]
This, clearly non-human, trivial architecture preserves whatever previous color representation coming from the sensors. This trivial network is a good counter-example for the eventual human-like results because in the brain, the color representation in the retina certainly changes downstream [12, 13].
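As a sketch of how such a simulation can be run with the identity network, the following minimal Python example iteratively adjusts the four cancellation weights to minimize the distance to the grey reference, in the spirit of Eq. 1. The arrays `R_test`, `R_canc` and `R_grey` are hypothetical placeholders for tristimulus vectors in whatever input space \(R\) is used:

```python
import numpy as np
from scipy.optimize import minimize

def hue_cancellation_weights(R_test, R_canc, R_grey):
    """Find the four weights w*_{lambda_c} so that the test light plus the
    weighted cancelling lights matches the grey reference (cf. Eq. 1).
    R_test, R_grey: 3-vectors; R_canc: 4x3 array (one row per cancelling light).
    With the identity network, the response R'(x) is simply x."""
    def distance(w):
        response = R_test + w @ R_canc   # identity network: R'(x) = x
        return np.sum((response - R_grey) ** 2)
    return minimize(distance, x0=np.zeros(4)).x
```

Running this minimization once per \(\lambda\) in the visible range yields the four weight curves \(w^{\star}_{\lambda_{c}}(\lambda)\) that are later combined into valence functions (Appendix B).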
## 3 Experiments and Results
As stated in the _Methods_ section, the conventional interpretation of the classical hue cancellation experiment can be questioned if one finds a counter example showing that human-like opponent valence curves may emerge for the classical choice of \(\lambda_{c}\)'s regardless of the post-retinal network architecture and color representation. Moreover, refutation would be stronger if one finds that the human-like results are mainly obtained for the classical choice of \(\lambda_{c}\)'s while other choices lead to progressively different curves regardless of the input color representation space.
According to this, we perform two sets of experiments: (1) we look for counter examples with the classical hue cancellation lights using trivial identity networks working with different color representations (LMS, ATD and digital RGB). (2) we consider a range of experiments with alternative cancellation lights different from the classical choice using the same trivial identity networks operating either in LMS, ATD or digital RGB.
### Counter examples in the classical setting
In order to check the emergence of human-like curves in hue cancellation even with the trivial identity network, we perform three experiments assuming different _input_ representations \(R\):
* **Experiment 1:** Identity network working in an arbitrary non-human color representation: a device-dependent digital RGB.
* **Experiment 2:** Identity network working in a standard LMS cone space, as for instance [7].
* **Experiment 3:** Identity network working in a standard opponent space as for instance, the Jameson and Hurvich model [1, 14].
Note that the above three identity networks would correspond to color representations with quite different qualitative features: (a) if the input is digital RGB, the problem is solved by a system with wide-band overlapping all-positive spectral sensitivities (different from LMS) and compressive nonlinear response in the retina, (b) if the inputs are standard LMS tristimulus values, one has a purely linear LMS color code with all-positive sensitivities in the retina, and (c) if the input representation \(R\) is an opponent system with an achromatic channel and two chromatic channels, the network is fed with a fundamentally different color coding.
Figure 2 shows the results of these three hue cancellation experiments together with the experimental results for humans reported in [1].
Appendix C shows that (1) the final matches make sense (found at the yellow-blue and red-green curves) and are close to perfect (almost zero difference after the addition of \(w^{\star}_{\lambda_{c}}(\lambda)E_{\lambda_{c}}\)), and (2) the difference minimization process with the different networks is remarkably similar.
The results show that all identity networks, regardless of the space where they operate, lead to similar hue cancellation curves, and these are remarkably similar to the human curves.
### Alternative \(\lambda_{c}\)'s: control experiments and theoretical analysis
The previous artificial experiments question the traditional interpretation of hue cancellation with the classical \(\lambda_{c}\)'s because not only opponent systems but also trichromatic systems lead to similar opponent results. As anticipated above, the fortunate selection of the cancellation \(\lambda_{c}\)'s is _somehow_ biasing the matching towards the opponent curves.
In order to confirm that this is the case, we propose additional control experiments with artificial networks (experiments 4, 5 and 6), and we introduce a _change-of-basis analogy_ of the hue cancellation to understand the results. We show the predictions of this _change-of-basis analogy_ in the experiment 7:
* **Experiment 4:** Numerical results of hue cancellation for a range of \(\lambda_{c}\)'s away from the classical choice using the identity network working in a device-dependent digital RGB space.
* **Experiment 5:** Numerical results of hue cancellation for a range of \(\lambda_{c}\)'s away from the classical choice using the identity network working in a standard LMS space [7].
* **Experiment 6:** Numerical results of hue cancellation for a range of \(\lambda_{c}\)'s away from the classical choice using the identity network working in a standard ATD space [1, 14].
* **Experiment 7:** Exhaustive exploration of (analytical) changes of basis that are similar to hue cancellation experiments for \(\lambda_{c}\)'s very different from the classical choice.

Figure 2: Opponent curves for the trivial identity network operating in different color representation spaces.
First, let us introduce the idea of the _change-of-basis analogy_ of the hue cancellation experiments, and then we present the results of experiments 4-6 together with the theory-based simulation (experiment 7).
Consider the case in which the cancellation lights are complementary in pairs. For instance, in Fig. 3, see the pair [\(\lambda_{1}\), \(\lambda_{3}\)] and the pair formed by \(\lambda_{2}\) and the magenta referred to as \(\lambda_{4}\). In that situation, the determination of \(w^{\star}_{\lambda_{c}}\) is equivalent to a change to a color basis where two of the primaries go in the directions of the pair of complementary wavelengths (e.g. the red and green vectors in Fig. 3). By choosing a third linearly-independent vector (e.g. in the direction of an achromatic color as the vector in blue perpendicular to the triangle of the chromatic diagram) one has a _new basis_ of the color space perfectly defined by the _new_ primaries, \(P^{\star}_{i}\), with \(i=1,2,3\). These _new_ primaries are defined by their tristimulus vectors, \(R(P^{\star}_{i})\), in the basis of _old_ primaries, \(P_{i}\), with \(i=1,2,3\). They have chromatic coordinates \(r(P^{\star}_{i})\), and, as in every array of chromatic coordinates and tristimulus vectors, they are proportional: \(R(P^{\star}_{i})=\gamma_{i}r(P^{\star}_{i})\).
In this situation, taking \(P_{i}\) as the input color representation (as in Fig. 1.b), hue cancellation with the four lights is analogous to a _change-of-basis_ from \(P_{i}\) to \(P^{\star}_{i}\). Therefore, looking for \(w^{\star}_{\lambda_{1}}(\lambda)\) and \(w^{\star}_{\lambda_{2}}(\lambda)\) is analogous to the computation of the tristimulus values of the monochromatic components of the equienergetic white \(R^{\star}_{1}(E_{\lambda})\) and \(R^{\star}_{2}(E_{\lambda})\). Under this _change-of-basis analogy_, the valence functions can be computed analytically from the color matching functions (the vectors \(R(E_{\lambda})\), \(\forall\,\lambda\)), and the matrix \(M_{PP^{\star}}\) that changes the vectors from the basis \(P_{i}\) to the basis \(P^{\star}_{i}\):
\[R^{\star}(E_{\lambda})=M_{PP^{\star}}\cdot R(E_{\lambda}) \tag{3}\]
where, as in any standard change of basis [10], the matrix is:
\[M_{PP^{\star}}=\left(\begin{array}{ccc}R_{1}(P^{\star}_{1})&R_{1}(P^{\star}_ {2})&R_{1}(P^{\star}_{3})\\ R_{2}(P^{\star}_{1})&R_{2}(P^{\star}_{2})&R_{2}(P^{\star}_{3})\\ R_{3}(P^{\star}_{1})&R_{3}(P^{\star}_{2})&R_{3}(P^{\star}_{3})\end{array} \right)^{-1}=\left(\begin{array}{ccc}\gamma_{1}^{-1}&0&0\\ 0&\gamma_{2}^{-1}&0\\ 0&0&\gamma_{3}^{-1}\end{array}\right)\cdot\left(\begin{array}{ccc}r_{1}(P^{ \star}_{1})&r_{1}(P^{\star}_{2})&r_{1}(P^{\star}_{3})\\ r_{2}(P^{\star}_{1})&r_{2}(P^{\star}_{2})&r_{2}(P^{\star}_{3})\\ r_{3}(P^{\star}_{1})&r_{3}(P^{\star}_{2})&r_{3}(P^{\star}_{3})\end{array} \right)^{-1}\]
In this _change-of-basis analogy_ the hue cancellation valence functions are obtained from the color matching functions in the input representation transformed by the matrix in Eq. 3. Note that the weights \(\gamma_{i}\) associated to the (arbitrary) length of the vectors, \(R(P_{i}^{\star})\), will scale each output \(R_{i}^{\star}(E_{\lambda})\). Therefore, although the shape of the curves is fixed by the matrix of chromatic coordinates of the new basis, the global scale of the predicted functions can be varied via the length of the primaries. As a result, in the simulations using this analogy, given certain cancellation \(\lambda_{c}\)'s, the length of the basis vectors will be adjusted to obtain the best possible match between the predicted function and the classical curves of Jameson and Hurvich.

Figure 3: The _change-of-basis analogy_: Hue cancellation experiment as combination of vectors of a new basis. Note that the primaries \(P^{\star}_{i}\) (based on the cancelling lights) are not related to the unknown primaries of the unknown representation \(R^{\prime}\). The primaries \(P^{\star}_{i}\) (either in option 1 or 2) are just an _artifice_ to compute analytically the weights \(w^{\star}_{i}(\lambda)\) from the tristimulus values \(R^{\star}_{1}(E_{\lambda})\) and \(R^{\star}_{2}(E_{\lambda})\). Given two arbitrary \(\lambda_{1}\) and \(\Delta\lambda\), the difference between _option 1_ and _option 2_ is that in the second the primaries \(P^{\star}_{1}\) and \(P^{\star}_{2}\) are taken to be orthogonal to the one that goes in the direction of the White, \(P^{\star}_{3}\propto W\), so that they convey _less_ information about brightness.
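As an illustration of Eq. 3, the following minimal sketch (assuming NumPy, with hypothetical inputs: a 3×3 array `r_P` whose columns are the chromatic coordinates \(r(P^{\star}_{i})\), a length-3 array `gammas` of primary lengths \(\gamma_{i}\), and a 3×N array `cmf` of color matching functions, one column per wavelength) computes the change-of-basis matrix and the predicted valence curves:

```python
import numpy as np

def change_of_basis_matrix(r_P, gammas):
    """M_{PP*} of Eq. 3: R(P*_i) = gamma_i * r(P*_i), and M is the inverse
    of the matrix whose columns are the tristimulus vectors R(P*_i)."""
    R_cols = r_P * gammas                  # scale column i by gamma_i
    return np.linalg.inv(R_cols)

def predicted_valences(r_P, gammas, cmf):
    """Valence curves R*(E_lambda) for all wavelengths at once."""
    return change_of_basis_matrix(r_P, gammas) @ cmf
```

In practice the `gammas` act only as global scale factors, which is why they can be fitted a posteriori to match the human curves while the spectral shape stays fixed.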
As explained in Appendix B, in the settings where the cancelling lights are not strictly complementary (as in the classical setting by Jameson and Hurvich) the curves can be obtained from alternative instrumental lights which are complementary. Then, the contribution of these instrumental lights can always be assigned back to the considered cancelling lights. Therefore, (1) the classical setting can be understood using this _change-of-basis analogy_, and (2) this analogy can be used to explore multiple combinations of axes \((\lambda_{1},\lambda_{3})\) and \((\lambda_{2}=\lambda_{1}+\Delta\lambda,\lambda_{4})\). These configurations can include the original experiment and also other, progressively different, alternatives.
In the experiments 4-6 we execute artificial hue cancellation experiments with identity networks using complementary cancelling lights selected according to the _change-of-basis analogy_ described above. We explore a range of \(\lambda_{1}\) over the visible spectrum, and for each \(\lambda_{1}\), we select \(\lambda_{2}=\lambda_{1}+\Delta\lambda\) with a range of \(\Delta\lambda\) so that \(\lambda_{2}\) is still visible. Then, the 3rd and 4th cancellation lights are the complementary lights of \(\lambda_{1}\) and \(\lambda_{2}\). Sometimes the complementary cancellation lights are purple-magenta, as in the arbitrary example of Fig. 3, but that is not a conceptual problem for applying the change-of-basis analogy. We take the wavelengths in these control experiments along a uniform grid over the spectral space. The analytical solution of the change-of-basis analogy (Fig. 3 and Eq. 3) can, of course, be used in this range of \(\lambda_{c}\)'s. Moreover, its analytical nature implies that one can efficiently sample the spectral space at higher rates. On top of the coarse regular grid shown below, we also perform the artificial hue cancellation at the configurations where the theory predicts better agreement with the opponent curves, which incidentally coincide with the wavelengths chosen in the classic experiment.
For every considered configuration of cancellation lights we compute the cancellation (or valence) curves and the departure between this result and the human curves of Jameson and Hurvich. Fig. 4 shows the error of these predicted valence curves obtained either through the identity networks operating in different color spaces (experiments 4-6), or through the analytical change-of-basis analogy (experiment 7).
The results of experiments 4-7 stress the role of the choice of the cancellation lights in these experiments. Note that _all the error surfaces_ have the same specific structure:
* The theoretical surfaces of experiment 7 (which could be densely sampled since they are faster to compute) show two clear minima consistent with the setting selected in the classical experiment. The diagram shows that these two minima are actually equivalent. Moreover, they display a clear pattern of secondary minima. The pattern is more distinct in the setting where the _chromatic_ primaries \(P_{1}^{\star}\) and \(P_{2}^{\star}\) are chosen to be orthogonal to the White.
* The errors checked at the grid in the artificial hue cancellation experiments 4-6 are consistent with the theoretical surfaces although the sampling grid is coarser. The reason for a coarser grid is merely computational1. In some cases the deepest minimum is not at the classical point, but the difference is always very small, i.e. in the classical setting the artificial curves are also very similar to the human curves. Footnote 1: Each location involves the estimation of the two valence curves at 50 \(\lambda\)'s. Therefore, it involves 50 hue cancellation experiments, i.e. 50 minimizations, one per \(\lambda\) in the visible range.
* The artificial experiments lead to more marked differences between the agreement in the singular locations of small error (blueish points) and the rest. Note that the errors in the artificial experiments seem to increase faster as one goes away from the regions of small error.
These results (which are consistent regardless of the use of trichromatic representations or opponent representations) suggest that the emergence of the classical curves is more linked to the selection of the cancellation lights than to the inner color representation \(R^{\prime}\).
## 4 Discussion
### Summary of results
When using trivial (identity) artificial networks in the classical hue cancellation setting, opponent red-green and yellow-blue valence functions emerge regardless of the actual color representation used by the networks (as long as it is a tristimulus representation or even tristimulus-like digital-RGB representations that include mild nonlinearities).
This suggests that these opponent curves do not inform us about the inner workings of the considered system, but about the properties of color mixtures in the tristimulus representations. Since the mixture of opponent spectral cancellation lights lies on the line between them in the chromatic diagram, changing the energy of these cancellation stimuli will always lead to displacements along these lines and hence to a proper match with the grey reference (or proper hue cancellation) using the correct proportion of cancellation lights: humans, and also trivial machines forced to use spectral (or quasi-spectral) cancellation lights, would arrive at the same conclusion.
The reasoning is not as (analytically) obvious in nonlinear representations (as the digital-RGB) but results show that it follows the same trends, thus stressing the generality of the result.
The actual variation of the mixture when modifying the weights in the hue cancellation process only depends on the properties of the additive color mixture, and the path in the diagram is determined by the (classical) choice of the spectral cancellation lights, and not by the inner color representations. Results suggest that a fortunate selection of the cancellation \(\lambda_{c}\)'s is _somehow_ biasing the matching towards the correct opponent curves. If a range of alternative cancellation lights are considered, the results are progressively different from the classical opponent functions.
With the classical \(\lambda_{c}\)'s, the different color representations only imply different metric spaces to compute the error in the match, but in the absence of neural noise (or with moderate neural noise), this would mean minor variations in the result of the minimization, and hence one cannot rule out trichromatic LMS-like representations.
Figure 4: Results of the control experiments (regular grid) together with the results in the original configuration (see the two dots off the regular grid). **Top row** shows the errors of the experiments 4-6 with a blue-yellow colorbar scale where blue means low error (good reproduction of the human opponent curves) and yellow means high departure from the human result. The color code of the departure represents the Mean Squared Error between the human and the artificial curves. **Bottom row (right):** these surfaces represent the same kind of errors, with the same color code for the two options of the change-of-basis analogy. The circles in red and magenta indicate the minima of the theoretical surfaces. **Bottom row (left):** the chromatic diagram shows that the two minima found by the theoretical simulations actually correspond to the same choice of cancellation lights, and coincide with the classical setting (see appendix B for more information on the auxiliary magenta).
### Previous criticisms to hue cancellation experiments
Certainly, there have been a number of well-founded criticisms of the classical hue cancellation results. For instance, [6] makes this point: to what extent can we generalize from the valence measurements using monochromatic lights to other lights? If the human behavior for polychromatic light does not follow from the behavior for monochromatic lights, then the data represent only an interesting (but non-generalizable) collection of observations. In general, the linearity assumption is only an approximation [15, 16, 17, 18]. As a result, we need a more complete (nonlinear) model before we can apply the hue cancellation data to predict the opponent-colors appearance of polychromatic lights. Other criticisms refer to overestimation of valence in certain spectral regions in hue cancellation versus other psychophysical methods [17, 19, 20].
However, the problem implied by the systematic emergence of the opponent curves from the identity networks is different. It is not restricted to the linearity assumption. In fact, the systems with nets operating in the LMS or ATD spaces are linear by definition. The emergence of the same result in two different (linear) trivial cases implies that the curves do not give a conclusive message about the inner working of the system.
### Emergence of human-like opponent curves in artificial systems
Emergence of human-like behavior in artificial systems has been an inspiration for functional (or principled) explanations in theoretical neuroscience [21, 22, 23].
In particular, due in part to the current success of artificial networks in vision tasks [24], there is a growing interest to compare their behavior with humans [25, 26, 27] or with human-like models of traditional visual neuroscience [28, 29, 30, 31, 32].
In this context, we set up a low-level conventional psychophysics program to check the basic behavior of artificial networks in light of known basic human behavior [33, 34]. To our surprise, our first experiments with artificial networks (with markedly non-human color representation) actually displayed human-like behavior in hue cancellation [35].
That was the origin of this research because the emergence of human-like curves in hue cancellation in networks where opponency had not been built in (nor assumed in the training tasks) could have two implications:
* **Hypothesis A:** On the positive side, it could imply that the considered tasks used to train the nets actually lead to human behavior in scenarios different from the training. These evidences are interesting in the debate about the kind of tasks that may lead to human behavior. Note that certain tasks (e.g. assessing image quality or enhancing the retinal image), may lead to positive or negative results in reproducing human behavior depending on the architecture of the net. Consider examples in [28, 36] for the emergence of contrast nonlinearities, examples in [30, 32] for the emergence of the Contrast Sensitivity Functions, or examples in [37, 38] for the visibility of distortions.
* **Hypothesis B:** On the negative side, it could also be that the experimental setting somehow forces the result. In this case the opponent curves would not tell much about the inner color representation of the system, but about the selected _opponent_ spectral cancelling lights and about the properties of additive mixtures in tristimulus spaces. These elements (alien to the specific color coding in the network) could also explain the human-like opponent curves.
According to the results reported here, the second hypothesis seems to be the true one.
### Implications in Visual Neuroscience
Direct physiological recording of the opponent spectral sensitivity of cells [39, 40] is (of course) the strongest indication of opponent color coding in the brain. However, following our results with trivial networks, the consistent emergence of the opponent curves in hue cancellation experiments suggests that other psychophysical techniques [41] may be more appropriate than hue cancellation to reveal the opponent mechanisms. Similarly, our results suggest that indirect statistical arguments actually give stronger evidences in favour of opponent color coding than hue cancellation experiments. Statistical arguments are not limited to classical linear decorrelation [42, 43], but also include more recent, nonlinear measures of dependence [44, 45, 46, 47].
## Appendix A: Quasi-monochromatic spectrum and cancellation lights
Monochromatic lights live on the spectral locus of the color diagram. However, it is not possible to represent perfect monochromatic lights in digital values, so we used a quasi-monochromatic approximation to perform the experiments.
To do that, we generate the quasi-monochromatic radiation as a narrow Gaussian spectral radiance of a determined height and width over a low-radiance equienergetic background. Fig. 5 left shows the quasi-monochromatic spectrum generated for different lambdas. For the experiments we use a Gaussian height of \(1.5\times 10^{-3}\,W\cdot m^{-2}\cdot sr^{-1}\cdot nm^{-1}\) over an equienergetic background of \(0.5\times 10^{-4}\,W\cdot m^{-2}\cdot sr^{-1}\cdot nm^{-1}\) and a Gaussian width of \(10\,nm\).
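Such stimuli can be generated with a short script; a minimal sketch follows (assuming NumPy; the `width` parameter is taken here as the Gaussian standard deviation, which is an assumption on how the 10 nm width is defined):

```python
import numpy as np

def quasi_monochromatic(wavelengths, center, height=1.5e-3,
                        background=0.5e-4, width=10.0):
    """Narrow Gaussian spectral radiance (in W m^-2 sr^-1 nm^-1) on top
    of a low-radiance equienergetic background."""
    return background + height * np.exp(
        -0.5 * ((wavelengths - center) / width) ** 2)

wavelengths = np.arange(380, 781)      # visible range, 1 nm steps
spectrum = quasi_monochromatic(wavelengths, center=580.0)
```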
Fig. 5 right shows where the classical monochromatic lights live on the CIE 1931 color space (B, G, Y and R points) and the quasi-monochromatic reference wavelengths used in the experiments (inner blue points, which lie inside the inner triangle representing the color space that can be represented in digital values). The diagram also shows the equivalent opponent magenta, marked by the red point on the B-R line, that we used to combine the weights (see Appendix B for more details).
## Appendix B: valence functions from optimal \(w^{\star}_{\lambda_{c}}(\lambda)\)
The result of our experimental settings with identity nets (as in the classical experiment) are four weights, \(w^{\star}_{\lambda_{c}}(\lambda)\), obtained after solving Eq. 1. However, as in the classical experiment, we need to combine the four weights to obtain two curves: the _red-green_ and the _yellow-blue_ curves when the classical cancellation lights are used. The way to combine the weights depends on the cancellation lights.
### B.1 When the cancellation lights _are_ complementary
Some cancellation stimuli can have _complementary wavelengths_: for instance, in the conventional setting, \(\lambda_{475}\) and \(\lambda_{580}\) are approximately _complementary_ because their mixture can lead to a grey (approximately equal to the equienergetic white). The mixture of these two lights leading to the white can be obtained by solving the following equation:
\[R(W)=\kappa_{1}\cdot R(E_{475})+\kappa_{2}\cdot R(E_{580}) \tag{4}\]
where \(R(W)\) and \(R(E_{\lambda})\) represent the tristimulus vectors of the white and the cancellation lights respectively, and \(\kappa_{i}\) are the corresponding weights so that the sum of the two lights gives the white. Then, the corresponding cancellation weights (i.e. \(w^{\star}_{475}\) and \(w^{\star}_{580}\)) are straightforward to mix because a positive increase in one of them can be compensated (in terms of hue) by a corresponding positive increase in the other with the corresponding \(\kappa_{i}\) factors. With such same-sign increases, the mixture will remain at the same point in the chromatic diagram and hence the hue is not modified. As a result, these same-sign increments cancel. Similarly, weights of different sign in complementary \(\lambda_{c}\)'s contribute to the change of hue in the same way (moving the mixture in the same direction). Therefore, such opposite-sign increases should not cancel, but should be added in absolute value. In these opposite-sign cases, the resulting sign depends on the criterion taken to define the chromatic channel: for instance, if we decide to build a _yellow-blue_ channel (meaning positive values for long wavelengths and negative values for short wavelengths), the sum
Figure 5: Monochromatic cancellation lights and quasi-monochromatic approximation of the spectral locus. The auxiliary colors in _yellow_ and _magenta_ represent alternative methods to get the valence cancellation curves (see Appendix B) in case the complementary of some of the selected cancellation wavelengths is not a monochromatic stimulus (i.e. it is in the purple region) as is the case in the classical setting depicted here.
of modulus should be given a positive value when \(w^{\star}_{475}<0\) and \(w^{\star}_{580}>0\). In short, the yellow-blue valence function, \(V_{\text{YB}}\), is:
\[V_{\text{YB}}=\pm\kappa_{1}\cdot|w^{\star}_{475}|\pm\kappa_{2}\cdot|w^{\star}_{5 80}| \tag{5}\]
where, the sign criterion we have just discussed above leads to these four cases:
\[\left\{\begin{array}{l}\text{if}\;\;w^{\star}_{475}\geq 0,w^{\star}_{580}\geq 0\implies V_{\text{YB}}=sign(w^{\star}_{475}-w^{\star}_{580})\left|\kappa_{1}\cdot w^{\star}_{475}-\kappa_{2}\cdot w^{\star}_{580}\right|,\\ \text{if}\;\;w^{\star}_{475}<0,w^{\star}_{580}<0\implies V_{\text{YB}}=sign(w^{\star}_{580}-w^{\star}_{475})\left|\kappa_{1}\cdot w^{\star}_{475}-\kappa_{2}\cdot w^{\star}_{580}\right|,\\ \text{if}\;\;w^{\star}_{475}\geq 0,w^{\star}_{580}<0\implies V_{\text{YB}}=-\left(\kappa_{1}\cdot|w^{\star}_{475}|+\kappa_{2}\cdot|w^{\star}_{580}|\right),\\ \text{if}\;\;w^{\star}_{475}<0,w^{\star}_{580}\geq 0\implies V_{\text{YB}}=\kappa_{1}\cdot|w^{\star}_{475}|+\kappa_{2}\cdot|w^{\star}_{580}|\end{array}\right. \tag{6}\]
The prescription is equivalent for any arbitrary pair of complementary cancellation \(\lambda_{c}\)'s.
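A minimal sketch of this sign prescription follows (hypothetical helper; `w_b` and `w_y` are the optimal weights of the two complementary lights, and `k1`, `k2` are the mixture coefficients of Eq. 4):

```python
import numpy as np

def valence_yb(w_b, w_y, k1, k2):
    """Combine the weights of two complementary cancellation lights
    into a yellow-blue valence value, following the cases of Eq. 6."""
    if w_b >= 0 and w_y >= 0:
        return np.sign(w_b - w_y) * abs(k1 * w_b - k2 * w_y)
    if w_b < 0 and w_y < 0:
        return np.sign(w_y - w_b) * abs(k1 * w_b - k2 * w_y)
    if w_b >= 0 and w_y < 0:
        return -(k1 * abs(w_b) + k2 * abs(w_y))
    return k1 * abs(w_b) + k2 * abs(w_y)   # w_b < 0, w_y >= 0
```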
### B.2 When the cancellation lights are _not_ complementary
In the case of the red-green channel, the complementary direction of \(\lambda_{500}\) is not in the direction of \(\lambda_{700}\): the actual complementary color is in the purple region. In that case, the summation of \(w^{\star}_{500}\) and \(w^{\star}_{700}\) is not as straightforward because grey is not a sum of these cancellation lights.
#### Method 1: cancelling the reddish-greenish appearance (matching an auxiliary yellow instead of the white)
The authors of the classical experiment [1] considered \(\lambda_{700}\) and \(\lambda_{500}\) as complementary because they were not cancelling at the white, but looking to cancel the reddish or greenish hue. This is equivalent to (4) but changing \(W\) by an _auxiliary yellow_, \(\mathcal{Y}\), at the intersection of the YB line with the line that connects the green \(\lambda=500\,nm\) with the red \(\lambda=700\,nm\). See this line and the auxiliary yellow in the diagram of Fig. 5; the auxiliary yellow can be obtained from this mixture:
\[R(\mathcal{Y})=\kappa_{3}\cdot R(E_{500})+\kappa_{4}\cdot R(E_{700}) \tag{7}\]
where we set the (arbitrary) luminance of this auxiliary Yellow as the sum of the luminances of \(E_{500}\) and \(E_{700}\), and \(\kappa_{i}\) are the corresponding weights so that the sum of the two corresponding lights gives a color which is neither red nor green. After that, we combine the obtained weights following the same sign criterion as in (6).
#### Method 2: matching the white through an auxiliary magenta
There is yet another way to solve the problem: in order to be able to cancel \(\lambda_{500}\) to the white, we need to find its complementary, and we can also impose that it lies in the \(BR\) line (magenta point in the diagram of Fig. 5) so that we can relate it with the other \(\lambda_{c}\)'s in use. We calculate this auxiliary _magenta_, as \(R(\mathcal{M})=\alpha_{M1}\cdot R(E_{475})+\alpha_{M2}\cdot R(E_{700})\), and we impose that it has the same energy as the other cancelling lights. We can consider, without loss of generality, that this magenta is complementary of \(\lambda_{500}\) so that, when they are mixed with the appropriate weights, they generate the White. This magenta is only an artifice to get the red-green curve from the obtained \(w^{\star}_{i}\); it has not been used in the optimization process. Its equivalent cancellation curve can be obtained via \(w^{\star}_{M}=\alpha_{M1}\cdot w^{\star}_{475}+\alpha_{M2}\cdot w^{\star}_{700}\). Then, we can impose the White sum condition as before to get the corresponding weights \(\kappa_{i}\):
\[R(W)=\kappa_{M1}\cdot R(E_{500})+\kappa_{M2}\cdot R(\mathcal{M}) \tag{8}\]
Now we can obtain the red-green valence curve, \(V_{\text{RG}}\), as a sum of \(w^{\star}_{500}\) and \(w^{\star}_{M}\) as follows (taking into account the same sign criteria stated in Eq. 6):
\[V_{\text{RG}} =\pm\kappa_{M1}\cdot|w^{\star}_{500}|\pm\kappa_{M2}\cdot|w^{ \star}_{M}|=\pm\kappa_{M1}\cdot|w^{\star}_{500}|\pm\kappa_{M2}\cdot(\alpha_{M1 }\cdot|w^{\star}_{475}|+\alpha_{M2}\cdot|w^{\star}_{700}|)=\] \[=\pm\kappa_{M1}\cdot|w^{\star}_{500}|\pm\kappa_{M2}\cdot\alpha_{M 1}\cdot|w^{\star}_{475}|\pm\kappa_{M2}\cdot\alpha_{M2}\cdot|w^{\star}_{700}| \tag{9}\]
By doing this calculation, we are using \(w^{\star}_{475}\) to get the two curves, which is something that our algorithm has not taken into account. To avoid using the energy of \(\lambda=475\) nm twice, we must remove from \(V_{\text{RG}}\) the amount of \(w^{\star}_{475}\) that we used in \(V_{\text{YB}}\). Doing so, Eq. 9 becomes:
\[V_{\text{RG}}=\pm\kappa_{M1}\cdot|w^{\star}_{500}|\pm(\kappa_{M2}\cdot\alpha_{ M1}-\kappa_{1})\cdot|w^{\star}_{475}|\pm\kappa_{M2}\cdot\alpha_{M2}\cdot|w^{ \star}_{700}| \tag{10}\]
Fig. 6 right shows the \(\lambda_{475}\) (blue), \(\lambda_{580}\) (yellow), \(\lambda_{500}\) (green) and auxiliary-magenta curves that are summed to give the yellow-blue and red-green curves.
The procedure described here can be applied to other choices of cancelling \(\lambda_{c}\)'s. When exploring the whole range of possible cancelling \(\lambda_{c}\) to simulate the hue cancellation experiment in situations beyond the conventional choice of cancelling lights, we always compute first the _complementary_ curves (one or two) when possible and then, when necessary, compute the complementary of \(\lambda_{c}\) with the red or blue extremes to get the last curve. Note that we always use \(\lambda_{700}\) when only one component lies in the purple line, but we use both \(\lambda_{400}\) and \(\lambda_{700}\) when there are two.
Finally, a note on the scaling of the valence curves. The shape of the curves and their relative scale determine how the matchings are made for each \(\lambda\). According to the change-of-basis analogy in Eq. 3, the scale of the curves is associated to the _arbitrary_ length of the associated primaries \(P_{i}^{\star}\). Therefore, once the minimization is finished, we keep the spectral shape constant and we look for the optimal lengths of \(P_{i}^{\star}\) to obtain the best match to the human-opponent curves.
## Appendix C: Visualization of hue cancellation matches with classical \(\lambda_{c}\)'s
It is important to check that the algorithm we used to minimize the distance has converged. In Fig. 7 we represent the hue cancellation solutions after solving Eq. 1 for experiments 1-3. Blue points represent the initial quasi-monochromatic stimuli, before the addition of the cancelling lights (i.e. \(w_{i}^{\star}=0\)). Black and red points represent the colors of the spectral and reference stimuli modified by the addition of the optimal \(E_{\lambda_{c}}\) found by solving Eq. 1. We find that, independently of the color representation, the identity network gets the match at the directions determined by the selected \(\lambda_{c}\)'s in a very consistent way. Interestingly, the _red-green_ axis consistent with the magenta complementary of \(\lambda_{c}=500\) nm was not imposed in any way because the minimization was done by modifying the energy of \(\lambda_{c}=700\) nm. Of course (as in any learning process prone to errors due to early stopping), the networks do not find the absolute minimum (in a perfect match the difference between the red and the black stimuli should be zero). However, the final differences (black lines) are substantially smaller than the initial differences (blue lines).
Figure 6: Optimal weights, \(w_{i}^{\star}(\lambda)\), of the classical cancellation lights for the trivial identity network operating in different photoreceptor color spaces. Here we show the magenta curve (built from the blue curve and the red curve) that can be directly subtracted from the green curve to obtain \(V_{\text{RG}}\)
## Acknowledgments
The authors thank A. Parraga, A. Akbarinia, J. Vazquez-Corral, X. Otazu, M. Bertalmio, F. Wichmann, and particularly V. Laparra, for interesting discussions on the preliminary results [35] that led to this research. This work was supported in part by MICIIN/FEDER/UE under Grant PID2020-118071GB-I00 and PDC2021-121522-C21, in part by Spanish MIU under Grant FPU21/02256 and in part by Generalitat Valenciana under Projects GV/2021/074, CIPROM/2021/056 and CIAPOT/2021/9. Some computer resources were provided by Artemisa, funded by the European Union ERDF and Comunitat Valenciana, as well as the technical support provided by the Instituto de Fisica Corpuscular, IFIC (CSIC-UV).
|
2302.00485 | Equivariant Message Passing Neural Network for Crystal Material
Discovery | Automatic material discovery with desired properties is a fundamental
challenge for material sciences. Considerable attention has recently been
devoted to generating stable crystal structures. While existing work has shown
impressive success on supervised tasks such as property prediction, the
progress on unsupervised tasks such as material generation is still hampered by
the limited extent to which the equivalent geometric representations of the
same crystal are considered. To address this challenge, we propose EMPNN a
periodic equivariant message-passing neural network that learns crystal lattice
deformation in an unsupervised fashion. Our model equivalently acts on lattice
according to the deformation action that must be performed, making it suitable
for crystal generation, relaxation and optimisation. We present experimental
evaluations that demonstrate the effectiveness of our approach. | Astrid Klipfel, Olivier Peltre, Najwa Harrati, Yaël Fregier, Adlane Sayede, Zied Bouraoui | 2023-02-01T14:48:18Z | http://arxiv.org/abs/2302.00485v1 | # Equivariant Message Passing Neural Network for Crystal Material Discovery
###### Abstract
Automatic material discovery with desired properties is a fundamental challenge for material sciences. Considerable attention has recently been devoted to generating stable crystal structures. While existing work has shown impressive success on supervised tasks such as property prediction, the progress on unsupervised tasks such as material generation is still hampered by the limited extent to which the equivalent geometric representations of the same crystal are considered. To address this challenge, we propose EMPNN a periodic equivariant message-passing neural network that learns crystal lattice deformation in an unsupervised fashion. Our model equivalently acts on lattice according to the deformation action that must be performed, making it suitable for crystal generation, relaxation and optimisation. We present experimental evaluations that demonstrate the effectiveness of our approach.
## 1 Introduction
Discovering thermodynamically stable materials with desired properties is a fundamental challenge for material sciences. Considerable attention has recently been devoted to crystalline (crystal) material generation. Crystals are involved everywhere in our modern society, from metal alloys to semiconductors. Contrary to organic molecules, which are mostly composed of wide carbon chains with a limited variety of atoms, crystals are three-dimensional periodic structures composed of a wider variety of chemical bonds and atoms. The periodic structure is often represented as a parallelepiped tiling, a.k.a. crystal lattice or unit cell.
Within the broad aim of automated stable (crystal) material discovery, various strategies, mainly based on simulation or Machine Learning (ML), can be explored. Simulation allows the properties of a given structure to be predicted by applying physical laws, while ML consists of modelling and predicting the physical properties. Notice that simulation can also be used for material relaxation, i.e. modifying a structure to improve its stability. The success of ML has led to a paradigm shift in materials science. In particular, ML techniques are used for molecule design, for modelling physical properties, or at the early stage of material discovery. Recently, several works have been introduced to manipulate crystal structures, e.g. [12, 13]. Most notably, models based on geometrically equivariant ML techniques such as Message Passing Neural Networks (MPNNs) have shown good performance in theoretical chemistry, in particular on supervised tasks such as property prediction on both organic and crystalline structures, e.g. [11, 12]. However, the majority of existing models are not fully equivariant, making them unsuitable for unsupervised tasks such as generation or representation learning. For example, the method from [11] is only equivariant to SO(3) (the rotation group), making it unsuitable for crystal lattice deformation, where the shape of the structure is unknown in advance. To this end, some methods have been proposed to approximate Density Functional Theory (DFT) simulation using MPNNs for unsupervised tasks, e.g. [11, 12]. They rely on self-simulations to gather information about the interaction forces of a few specific structures to perform generation. However, discovering new materials requires a consequent amount of data to obtain out-of-distribution generalization, i.e. the knowledge needed to generalise to unknown structures and perform arbitrary lattice deformation.
We propose EMPNN, an equivariant MPNN that acts on crystal lattices without any labels for the interaction forces and stress tensors. Previous works already showed the advantage of using MPNNs acting on atomic positions for both organic molecules and crystals, but acting on crystal lattices without explicit stress tensors remains a challenging problem. Our model enforces a structuring bias adapted to crystals using group actions incorporated through the equivariance property of MPNN layers. To illustrate the intuition: given a pair of atoms, if we know their interaction force in a given state, we can generalize this interaction to any other orientation as long as the state and the relative distance remain the same. Hence, we can take advantage of this property and of the equivariant representation to enhance the generalisation capability. This allows our model to equivalently act on the crystal lattice according to the deformation action that needs to be performed. We consider equivariance with respect to the Euclidean group \(Euc(3)\) and the \(\text{SL}_{3}(\mathbb{Z})\) group. To the best of our knowledge, our model is the first general framework that
formulates an equivariant MPNN on periodic structures. To demonstrate the effectiveness of our model, we propose a number of evaluation tasks to compare multiple equivariant MPNNs and losses.
## 2 Related works
Within the area of automatic stable material discovery, we can identify three classes of related work according to the molecular descriptors used to represent data.
**Fingerprint.** This class of methods uses handcrafted features of the materials. They are based on fingerprint representations that include atomic positions and lattice parameters [14]. Additional information such as electronegativity, atomic radius or interatomic distances can also be incorporated, e.g. [13, 15]. Those works mainly rely on Feed-forward Neural Network (FFN) architectures to build Variational Autoencoders [12] or Generative Adversarial Networks [11] to achieve generation or optimization tasks. However, fingerprints do not satisfy the uniqueness property, i.e. the same crystal can have different representations. As FFNs are not equivariant to permutation, alternative representations of the same material can be processed differently. The same observation can be made for other group actions. Finally, existing models do not take periodicity into account.
**Voxel.** Offering a convenient way to represent data in 3-dimensional space, voxels allow encoding lattice parameters and atomic positions [13, 14, 15, 16]. However, voxel-based representation is limited since input data are by nature sparse and discontinuous in the space. Moreover, voxels do not take into account periodicity, which can lead to an edge effect. Finally, the aforementioned methods are not equivariant. As shown in section 4, there are multiple equivalent representations of a given material. Therefore, a set of equivalent representations may lead to inconsistent results. This is a clear limitation of voxel-based representation models.
**Graph-based Representation.** Graph representations of materials can capture the local environment of each atom as well as structure periodicity. Recent works suggested using Graph Neural Networks (GNNs) for materials [10]. MPNNs can process sparse data and can be designed to be invariant or equivariant to many group actions. Most of the existing works are equivariant to \(\text{SO}(3)\) [17] thanks to a spherical basis that allows lattice properties to be predicted and simulations to be performed. However, these methods are not able to deform crystal lattices where the shape of the lattice is unknown in advance. In addition, these works are equivariant to subgroups of the Euclidean group but do not consider other group actions such as \(\text{SL}_{3}(\mathbb{Z})\). Several methods have been proposed to approximate DFT simulation with GNNs. These methods work by learning interaction forces and stress tensors to lower the total energy of a structure with methods analogous to DFT calculations [18, 19, 20, 21]. These equivariant methods require a lot of additional information about interaction forces, which is not always available. They mainly use self-simulations to gather data, but only for a few specific structures. To discover new materials, we need a lot of data and cannot rely on randomly generated structures, as they generally lead to unstable structures.
## 3 Problem Setting
Crystalline materials can be defined as infinite point clouds. A periodic structure can be represented as a network where a group of points is repeated by a discrete translation, which is equivalent to a parallelepiped tiling containing a cloud of atoms, as illustrated in Figure 1. A crystal can be described by atomic positions \(x_{i}\in[0,1[^{3}\) with an associated feature space \(F\) representing the chemical information of each atom \(z_{i}\in F\), and a lattice \(\rho\in\text{GL}_{3}(\mathbb{R})\) representing the material periodicity. The infinite point cloud generated by this representation can be defined as follows:
\[\left\{\left(\rho(x_{i}+\tau),\;z_{i}\right)|\;\tau\in\mathbb{Z}^{3},\;1\leq i \leq n\right\}\ \subseteq\ \mathbb{R}^{3}\times F \tag{1}\]
where \(\tau\) acts as a \(\mathbb{Z}^{3}\) vector that translates the point cloud. Equation 1 defines the space in which the atoms are located as a torus: when atoms leave by one side of the lattice they enter by the opposite side with the same orientation. \(\rho\in\text{GL}_{3}(\mathbb{R})\) defines the shape of the lattice, i.e. the periodicity. \(F\) is the feature space that can encode chemical information such as atomic number or charge. For crystal generation, we need to define a model capable of deforming the geometry of a structure in order to minimize the total energy and hence obtain a stable structure. Such actions are performed on the material lattice \(\rho\), resulting in the updated lattice \(\rho^{\prime}\), and on the atomic positions \(x_{i}\), resulting in the updated positions \(x_{i}^{\prime}\).
\[\begin{cases}\rho^{\prime}=h\rho\\ x_{i}^{\prime}=[x_{i}+h_{i}]\end{cases}. \tag{2}\]
We aim to predict the action \(h\in\text{GL}_{3}(\mathbb{R})\) on the lattice and the actions \(h_{i}\in\mathbb{R}^{3}\) on the atomic positions. The atomic positions are brought back into the crystal lattice by truncation. In the following, we introduce our model that learns arbitrary deformations of crystal lattices. We first explain, in Section 4, why group actions are needed for materials, recall the notion of equivariance, and define our group actions on crystals while providing their properties. Finally, Section 5 gives an explicit description of our model along with equivariance results. Proofs and additional materials are provided in an online ArXiv appendix.
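A minimal NumPy sketch of this representation and of the deformation action of Eq. 2 follows (the function names and the finite expansion window are illustrative assumptions; the true point cloud of Eq. 1 is infinite):

```python
import numpy as np
from itertools import product

def point_cloud(rho, x, z, reps=1):
    """Finite window of the infinite point cloud of Eq. 1:
    points rho @ (x_i + tau) for tau in a bounded subset of Z^3."""
    cloud = []
    for tau in product(range(-reps, reps + 1), repeat=3):
        for xi, zi in zip(x, z):
            cloud.append((rho @ (xi + np.array(tau)), zi))
    return cloud

def apply_deformation(rho, x, h, h_atoms):
    """Eq. 2: update the lattice by h and the positions by h_i,
    wrapping atoms back into the unit cell by truncation."""
    return h @ rho, (x + h_atoms) % 1.0
```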
Figure 1: Periodic structure represented as a lattice (in dotted lines). The multi-graph associated with a material (blue arrow) can overlap on the adjacent repetition of the lattice and a pair of nodes can have multiple connections.
Equivariance and Group Actions
Crystal materials can be seen as infinite clouds of atoms \(\langle M\rangle\subseteq\mathbb{R}^{d}\times F\). As such, equivalences between materials are defined by isometries, i.e. by the group action of \(\mathrm{Euc}(d)\), regardless of lattice generators1. As a crystal lattice can have multiple space-tiling representations resulting in an identical infinite atomic cloud, the \(SL_{d}(\mathbb{Z})\) group action is needed for paving. Consequently, the group \(G=\mathrm{Euc}(d)\times SL_{d}(\mathbb{Z})\times\mathfrak{S}_{n}\) acts on the lattice without affecting its properties. \(\mathfrak{S}_{n}\) is the permutation group that acts by changing the numbering of atoms, where \(n\) is the number of atoms. Please note that the atoms stay in the same place, but not with the same index. As chirality has an impact on the properties of a chemical structure, the reflection action should be excluded; the special Euclidean group, which does not include reflections, should then be considered. However, in this work, we consider \(\mathrm{Euc}(d)\), which acts on the chirality, assuming that this limitation will not be problematic for inorganic materials. We consider crystals described by an infinite cloud of atoms that is invariant under a discrete subgroup \(L\subseteq\mathbb{R}^{d}\) of maximal rank. For any choice of generators \((\tau_{1},\ldots,\tau_{d})\in L\), we consider the unique automorphism \(\rho\in GL_{d}(\mathbb{R})\) that maps the canonical basis of \(\mathbb{R}^{d}\) to the generating basis of \(L\) to represent \(L\).
Footnote 1: A generator is a lattice property that defines the pattern repetition; the equivalence also holds regardless of atom indices.
**Definition 1**.: _The representation space of featured materials \(\mathcal{M}^{F}\) is the disjoint union \(\coprod_{n\in\mathbb{N}}\mathcal{M}^{F}_{n}\) where:_
\[\mathcal{M}^{F}_{n}=\left\{(\rho,x,z)\,|\,\rho\in GL_{d}(\mathbb{R}),\;x\in[0,1[^{n\times d},\;z\in F^{n}\right\}\]
_Chemical materials are represented in \(\mathcal{M}=\mathcal{M}^{\mathbb{N}}\), with atomic numbers as feature sequence \(z\)._
\(\mathcal{M}^{F}_{n}\) is an infinite set of triplets \((\rho,x,z)\) that represents all possible materials with \(n\) atoms. The atomic number refers to the chemical element, e.g. 1 for hydrogen or 6 for carbon.
**Definition 2**.: _The infinite point cloud \(\langle M\rangle\) associated to a material \(M=(\rho,x,z)\) in \(\mathcal{M}^{F}_{n}\) is defined as:_
\[\langle M\rangle=\left\{\left(\rho\cdot(x_{i}+\tau),\,z_{i}\right)|\,\tau\in \mathbb{Z}^{d},\,1\leq i\leq n\right\}\ \subseteq\ \mathbb{R}^{d}\times F\]
_The cloud \(\langle M\rangle\) is invariant under the action of the lattice \(L=\rho\cdot\mathbb{Z}^{d}\subseteq\mathbb{R}^{d}\)._
The \(\mathrm{Euc}(d)\) group acts naturally on subsets of \(\mathbb{R}^{d}\) and two materials \(M\) and \(M^{\prime}\) should be considered physically identical if they span isometric point clouds. Let us write \(M\sim M^{\prime}\) if there exists an isometry \(g\in\mathrm{Euc}(d)\) such that \(\langle M^{\prime}\rangle=g\cdot\langle M\rangle\). Let \(\langle\mathcal{M}^{F}\rangle\) be the image of \(\mathcal{M}^{F}\) in \(\mathcal{P}(\mathbb{R}^{d}\times F)\) under \(\langle-\rangle\). The quotient space \(\mathcal{M}/\sim\) of equivalent materials is then defined by the corresponding universal quotient diagram.
Infinite point clouds can only be represented by non-intrinsic representatives \(M\in\mathcal{M}^{F}\). In the following, we describe how the relation \(\sim\) is related to group actions on \(\mathcal{M}^{F}\). The following proposition introduces the group actions that don't change the properties of materials, i.e. actions that lead to producing equivalent materials.
**Proposition 1**.: _The following actions on \(\mathcal{M}^{F}_{n}\) preserve the equivalence class of a material:_
* \(\mathfrak{S}_{n}\)_permutation group, acting by_ \(\sigma\cdot(\rho,x,z)=(\rho,x\circ\sigma^{-1},z\circ\sigma^{-1})\)__
* \(O(d)\) _orthogonal group, acting by_ \(g\cdot(\rho,x,z)=(g\cdot\rho,x,z)\)__
* \(E\) _translation group_2_, acting by_ \(v\cdot(\rho,x,z)=(\rho,[x+\rho^{-1}v],z)\)__
Footnote 2: The actions of \(E\) and \(\mathbb{R}^{d}\) are equivalent, being simply intertwined by the isomorphism \(\rho:\mathbb{R}^{d}\to E\). The action of \(E\) is more natural, extending the action of \(O(d)\) to \(\mathrm{Euc}(d)\) but the action of \(\mathbb{R}^{d}\) is more convenient in our representation space.
Footnote 3: Permutations acting trivially on \(\langle\mathcal{M}\rangle\).
_These actions are free and proper on \(\mathcal{M}^{F}_{n}\). The point cloud map \(\langle-\rangle\) commutes with these actions3._
Performing modifications by permutations and isometries is not enough to get a faithful representation of \(\mathcal{M}^{F}/\sim\): different choices of lattice \(L\subseteq\mathbb{R}^{d}\) lead to different primitive point clouds in \([0,1[^{d}\). The action of \(SL_{d}(\mathbb{Z})\) on \(GL_{d}(\mathbb{R})\) describes all the possible choices of generators for \(L\). However, \(SL_{d}(\mathbb{Z})\) cannot simply act by left multiplication on \(\mathcal{M}^{F}\) like \(\mathrm{Euc}(d)\) without distorting the relative positions of atoms in the primitive cell \(\rho\cdot[0,1[^{d}\). We complete Proposition 1 by specifying how to repave the space while staying equivalent to the structure we start with.
**Proposition 2**.: _The group \(SL_{d}(\mathbb{Z})\) acts on \(\mathcal{M}^{F}\) by letting for every change of lattice generators \(g\):_
\[g\cdot(\rho,x,z)=(\rho\cdot g^{-1},[gx],z)\]
_where \([gx]_{i}\) denotes the unique element of \([0,1[^{d}\) in the orbit of \(gx_{i}\) under \(\mathbb{Z}^{d}\). Identifying the reference cell \([0,1[^{d}\) with the torus \(\mathbb{T}^{d}\), the action of \(SL_{d}(\mathbb{Z})\) on \(\mathcal{M}^{F}\simeq GL_{d}(\mathbb{R})\times(\mathbb{T}^{d})^{n}\times F^{n}\) is free and proper. The point cloud map is invariant under the action of \(SL_{d}(\mathbb{Z})\)._
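A minimal sketch of this action follows (assuming NumPy; `rho` holds the lattice generators as columns, `x` the fractional positions as rows, and `g` is a hypothetical unimodular integer matrix):

```python
import numpy as np

def change_of_generators(rho, x, g):
    """Proposition 2: g in SL_3(Z) acts by (rho . g^{-1}, [g x]);
    the generated infinite point cloud is unchanged."""
    assert round(abs(np.linalg.det(g))) == 1     # unimodular check
    rho_new = rho @ np.linalg.inv(g)
    x_new = (x @ g.T) % 1.0                      # [g x]: back to [0,1)^3
    return rho_new, x_new
```

The invariance of the point cloud follows because \(g\) is a bijection of \(\mathbb{Z}^{3}\): the set \(\{\rho g^{-1}(gx_{i}+\tau)\}\) over all \(\tau\) coincides with \(\{\rho(x_{i}+\tau)\}\).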
The reference cell is the base cell we use to pave the space with \(L\): a parallelepiped of atoms, where \(L\) is the set of translations that move the parallelepiped to pave the space.
Figure 2: In definition 2 the point cloud is a space tiling (top left corner). The actions from \(\mathrm{Euc}(2)\) and \(SL_{2}(\mathbb{Z})\) groups commute and do not affect interatomic distances.
**Proposition 3**.: _The actions of \(\mathrm{Euc}(d)\), \(\mathfrak{S}_{n}\) and \(SL_{d}(\mathbb{Z})\) on \(\mathcal{M}_{n}^{F}\) commute as shown in Figure 2._
Let \(G\) be the product of \(\mathrm{Euc}(d)\times\mathfrak{S}_{n}\times SL_{d}(\mathbb{Z})\). Propositions 1 and 2 imply that the quotient of \(\mathcal{M}^{F}\) under the action of \(G\) is a well-formed topological space. This quotient is not the space \(\mathcal{M}^{F}/\sim\) of equivalent materials, because the lattice associated with a material representation \(M\in\mathcal{M}^{F}\) is not always a maximal symmetry subgroup of its point cloud.
**Graph equivariance.** Internal forces acting on a crystal structure are equivariant to the aforementioned group actions. As the properties of a crystal depend on interatomic interactions, equivariance can then be considered as the way to obtain generalization capability. In this work, we take advantage of the equivariance of the graph representation of materials under \(G\), the product \(\mathrm{Euc}(d)\times\mathfrak{S}_{n}\times SL_{d}(\mathbb{Z})\).
**Definition 3**.: _A neural network \(f_{\theta}:\mathcal{M}^{F}\rightarrow\mathbb{R}^{k}\) is said invariant under \(G\) if for all \(g\in G\):_
\[f_{\theta}(g\cdot M)=f_{\theta}(M)\]
**Definition 4**.: _A neural network \(\varphi_{\theta}:\mathcal{M}^{F}\rightarrow\mathcal{M}^{F^{\prime}}\) is said equivariant under \(G\) if for all \(g\in G\):_
\[\varphi_{\theta}(g\cdot M)=g\cdot\varphi_{\theta}(M)\]
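These properties can be verified numerically; a minimal sketch follows (`phi` is any candidate network mapping a material \((\rho,x,z)\) to a material, and `act` is a function applying a fixed group element — both are hypothetical placeholders):

```python
import numpy as np

def check_equivariance(phi, act, material, atol=1e-5):
    """Numerical test of Definition 4: phi(g . M) == g . phi(M)."""
    left = phi(act(material))
    right = act(phi(material))
    return all(np.allclose(a, b, atol=atol) for a, b in zip(left, right))
```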
## 5 Equivariant GNN for Materials
We now introduce our MPNN that performs arbitrary deformation by reasoning on relative atomic distances and angles. A spatial equivariance is enforced by the MPNN. We first associate a graph with a material and then take advantage of the local invariance (input quantities are themselves invariant: distance, angle, etc.) and equivariance of the graph to define equivariant actions on crystal lattices.
**Definition 5**.: _We call directed 2-graph \(\Gamma=(\Gamma_{0},\Gamma_{1},\Gamma_{2})\) a triplet of sets together with applications:_
* \(\pi_{1}:\Gamma_{1}\rightarrow\Gamma_{0}\times\Gamma_{0}\)_, written_ \(\pi_{1}(\gamma)=(\mathrm{src}(\gamma),\mathrm{tgt}(\gamma))\)__
* \(\pi_{2}:\Gamma_{2}\rightarrow\Gamma_{0}\times\Gamma_{0}\times\Gamma_{0}\)__
_We call \(\Gamma\) a directed 1-graph when \(\Gamma_{2}=\varnothing\)._
The aforementioned graphs are often called "multi"-graphs; recall that \(\pi_{1}\) and \(\pi_{2}\) may not be injective. They are also called "hyper"-graphs, because they generalise 1-graphs to dimensions \(\geq 1\), and "directed" because we do not assume any symmetry of \(\Gamma\) w.r.t. vertex permutations.
**Definition 6**.: _Let \(M=(\rho,x,z)\) in \(\mathcal{M}_{n}^{F}\) be a material and \(c_{i}>0\) for \(1\leq i\leq n\) denotes cutoff distances. We define a directed 2-graph \(\Gamma=\Gamma_{M,c}\) by the graded components:_
* \(\Gamma_{0}=\{1,\ldots,n\}\)__
* \(\Gamma_{1}=\left\{(i,j,\tau)\in\Gamma_{0}\times\Gamma_{0}\times\mathbb{Z}^{d }\,\big{|}\,||\rho(x_{j}-x_{i}+\tau)||<c_{i}\right\}\)__
* \(\Gamma_{2}=\left\{(\gamma,\gamma^{\prime})\in\Gamma_{1}\times\Gamma_{1}\, \big{|}\,\mathrm{tgt}(\gamma)=\mathrm{src}(\gamma^{\prime})\right\}\)__
_with obvious projections, i.e. with \(\pi_{1}:(i,j,\tau)\mapsto(i,j)\) and \(\pi_{2}:(\gamma,\gamma^{\prime})\mapsto(\mathrm{src}(\gamma),\mathrm{tgt}( \gamma),\mathrm{tgt}(\gamma^{\prime}))\)._
This graph construction includes many definitions of material graphs, making it versatile and usable in most contexts, since a material graph is built from the local environment of atoms. This definition includes a graph built from a constant cutoff distance (i.e. \(c_{i}\) is constant), a graph built from the \(k\) nearest neighbours, or one built from chemical properties such as the covalent radii. Definition 6 generalizes most of the graphs defined in previous works [1, 1, 1, 10]. The key feature of this construction is the invariance of edges and triplets: as interatomic distances and unoriented angles are invariant to the \(\mathrm{Euc}(d)\) and \(SL_{d}(\mathbb{Z})\) groups, any graph constructed from the local environment of the atoms will be invariant. More details about graph construction are in the appendix. We now introduce the notations needed to define our model.
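A brute-force sketch of the edge set \(\Gamma_{1}\) of Definition 6 follows (assuming NumPy; the search over periodic images \(\tau\) is truncated to a small window `reps`, which is an assumption that holds when the cutoffs are smaller than the cell size):

```python
import numpy as np
from itertools import product

def build_edges(rho, x, cutoffs, reps=1):
    """Gamma_1 of Definition 6: triplets (i, j, tau) such that
    ||rho @ (x_j - x_i + tau)|| < c_i."""
    edges = []
    for tau in product(range(-reps, reps + 1), repeat=3):
        tau = np.array(tau)
        for i in range(len(x)):
            for j in range(len(x)):
                r = np.linalg.norm(rho @ (x[j] - x[i] + tau))
                if 0.0 < r < cutoffs[i]:     # skip the trivial pair (i, i, 0)
                    edges.append((i, j, tuple(tau)))
    return edges
```

The triplet set \(\Gamma_{2}\) is then obtained by pairing edges that share a middle vertex, i.e. \(\mathrm{tgt}(\gamma)=\mathrm{src}(\gamma^{\prime})\).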
**Definition 7**.: _Let us consider \(M=(\rho,x,z)\in\mathcal{M}^{F}\) and \(\Gamma=\Gamma_{M,c}\); we introduce the following notations:_
* \(e_{ij}^{\tau}=(x_{j}-x_{i}+\tau)\) _for edge vector in lattice coordinates,_
* \(v_{ij}^{\tau}=\rho(e_{ij}^{\tau})\) _for the edge vector in physical space,_
* \(r_{ij}^{\tau}=||v_{ij}^{\tau}||\) _for the physical edge length,_
* \(\theta_{ijk}^{\tau\tau^{\prime}}\) _as the unoriented angle between_ \(v_{ij}^{\tau}\) _and_ \(v_{jk}^{\tau^{\prime}}\)__
* \(\mathcal{A}_{ijk}^{\tau\tau^{\prime}}\) _as the area of the triangle_ \(x_{i}\)_,_ \(x_{j}+\tau\) _and_ \(x_{k}+\tau^{\prime}\)__
_Let us also write \(e_{\gamma},v_{\gamma},r_{\gamma},\theta_{\gamma\gamma^{\prime}},\mathcal{A}_{\gamma\gamma^{\prime}}\) for the same quantities when we do not need to make the vertices explicit. Note that \(r_{\gamma}\), \(\theta_{\gamma\gamma^{\prime}}\) and \(\mathcal{A}_{\gamma\gamma^{\prime}}\) are natural Euclid invariants._
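The invariants of Definition 7 are cheap to evaluate. The sketch below (our illustrative code, assuming \(d=3\)) computes the edge length, the unoriented angle and the triangle area from the two physical edge vectors of a 2-chain.

```python
import numpy as np

def edge_quantities(rho, x, i, j, tau):
    """Edge vector in physical space and its length r (Definition 7)."""
    v = rho @ (x[j] - x[i] + np.asarray(tau))
    return v, np.linalg.norm(v)

def triplet_invariants(rho, x, i, j, k, tau, tau2):
    """Unoriented angle theta and triangle area A for the 2-chain i ->tau j ->tau2 k."""
    v_ij, r_ij = edge_quantities(rho, x, i, j, tau)
    v_jk, r_jk = edge_quantities(rho, x, j, k, tau2)
    cos = np.clip(v_ij @ v_jk / (r_ij * r_jk), -1.0, 1.0)
    theta = np.arccos(cos)                             # invariant to Euc(3) and SL_3(Z)
    area = 0.5 * np.linalg.norm(np.cross(v_ij, v_jk))  # area spanned by the two edges
    return theta, area
```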
### Gradient of the invariant geometry
To build the vector fields of our equivariant MPNN, we take advantage of the gradient of the invariant geometry of crystal graphs. For 0-chains, i.e. vertices \(i\in\Gamma_{0}\), the Euclid group acts transitively on spatial coordinates, such that \(I_{i}\) is trivial (a point) and \(r_{i}\) is a constant. For 1-chains, i.e. directed edges \(\gamma\in\Gamma_{1}\), the only Euclid invariant is the length of the associated vector. We set \(I_{\gamma}=\mathbb{R}\) and, for \(\gamma:i\xrightarrow{\tau}j\), we let:
\[r_{\gamma}(x_{\gamma})=r_{ij}^{\tau} \tag{3}\]
For 2-chains \(\bar{\gamma}=i\xrightarrow{\tau}j\xrightarrow{\tau^{\prime}}k\), we find it more convenient to define the invariants as the two vector lengths and the angle at their common point, i.e. \(I_{\bar{\gamma}}=\mathbb{R}^{3}\) with:
\[r_{\bar{\gamma}}=\left(\theta_{ijk}^{\tau\tau^{\prime}},\,r_{ij}^{\tau},\,r_{ jk}^{\tau^{\prime}}\right) \tag{4}\]
For a tangent vector at \(\rho\in GL_{d}(\mathbb{R})\), we have:
\[\frac{\partial v_{ij}^{\tau}}{\partial\rho}=\dot{\rho}\cdot(x_{j}-x_{i}+\tau)=\dot{\rho}\cdot e_{ij}^{\tau} \tag{5}\]
The differential of the edge length with respect to \(\rho\) projects onto the source and image edge vectors \(e_{ij}^{\tau}\) and \(u_{ij}^{\tau}\), respectively: it equals 1 on the rank-1 linear map \(|u_{ij}^{\tau}\rangle\langle e_{ij}^{\tau}|\). Here \(u_{ij}^{\tau}\) denotes the normalized vector \(v_{ij}^{\tau}\), i.e. \(u_{ij}^{\tau}=v_{ij}^{\tau}/r_{ij}^{\tau}\).
\[\frac{\partial r_{ij}^{\tau}}{\partial\rho}=\langle u_{ij}^{\tau},\,\dot{\rho}\cdot e_{ij}^{\tau}\rangle \tag{6}\]
The angle differentials with respect to \(\rho\) are computed by assuming that the middle point is fixed (this is true up to a translation in the target space, which does not alter the angle). \(\omega_{ijk}^{\tau\tau^{\prime}}\) denotes the unit normal vector to \((v_{ij}^{\tau},v_{jk}^{\tau^{\prime}})\):
\[\frac{\partial\theta_{ijk}^{\tau\tau^{\prime}}}{\partial\rho}=\langle\omega_{ijk}^{\tau\tau^{\prime}}\times u_{jk}^{\tau^{\prime}},\,\dot{\rho}\cdot e_{jk}^{\tau^{\prime}}\rangle-\langle\omega_{ijk}^{\tau\tau^{\prime}}\times u_{ij}^{\tau},\,\dot{\rho}\cdot e_{ij}^{\tau}\rangle \tag{7}\]
The mixed product coincides with the determinant and is invariant under cyclic permutations.
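Pairing Eqs. 6-7 with the Frobenius inner product turns these differentials into explicit gradient matrices (rank 1 for the length, a difference of rank-1 maps for the angle). The following sketch transcribes them; it reflects our reading of Eqs. 6-7, and a finite-difference check is advisable before reuse.

```python
import numpy as np

def grad_edge_length(rho, e):
    """d r / d rho for r = ||rho @ e||: the rank-1 map |u><e| of Eq. 6."""
    v = rho @ e
    u = v / np.linalg.norm(v)
    return np.outer(u, e)

def grad_angle(rho, e_ij, e_jk):
    """d theta / d rho from Eq. 7, with omega the unit normal to (v_ij, v_jk)."""
    v_ij, v_jk = rho @ e_ij, rho @ e_jk
    u_ij = v_ij / np.linalg.norm(v_ij)
    u_jk = v_jk / np.linalg.norm(v_jk)
    w = np.cross(v_ij, v_jk)
    w = w / np.linalg.norm(w)  # unit normal vector omega
    return np.outer(np.cross(w, u_jk), e_jk) - np.outer(np.cross(w, u_ij), e_ij)
```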
### Equivariant Message Passing Neural Network
We now introduce a general definition of our equivariant MPNN based on vector fields. We formally define \(\lambda\) as the vector field used in Equation 10; it specifies how the GNN acts on the crystal lattice.
**Definition 8**.: _To every edge \(\gamma\in\Gamma_{1}\) and every 2-chain \(\gamma\gamma^{\prime}\in\Gamma_{2}\) we associate the infinitesimal lattice deformations \(\lambda_{\bar{\gamma}}:\mathcal{M}_{\bar{\gamma}}\to\mathfrak{gl}_{d}\) defined by:_
* \(\lambda_{\gamma}(M_{\bar{\gamma}})=\left|\,u_{\gamma}\,\right\rangle\left\langle\,u_{\gamma}\,\right|\)__
* \(\lambda_{\gamma\gamma^{\prime}}(M_{\bar{\gamma}})=\left|\,u_{\gamma}\,\right\rangle \left\langle\,u_{\gamma^{\prime}}\,\right|+\left|\,u_{\gamma^{\prime}}\,\right\rangle \left\langle\,u_{\gamma}\,\right|\)__
_The \(\left|\,\cdot\,\right\rangle\left\langle\,\cdot\,\right|\) is a notation from quantum physics denoting the matrix obtained as the product of a column vector (\(\left|\,V\,\right\rangle\) is \(V\) seen as a column) and a row vector (\(\left\langle\,W\,\right|\) is \(W\) seen as a row). In our case, for two vectors \(u,v\in\mathbb{R}^{d}\) we have \(\left|\,u\,\right\rangle\left\langle\,v\,\right|=uv^{\intercal}\). Alternatively, we can directly use gradients of the geometric invariants such as:_
* \(\lambda_{\gamma}(M_{\bar{\gamma}})=\frac{\partial r_{ij}^{\tau}}{\partial\rho}\)__
* \(\lambda_{\gamma\gamma^{\prime}}(M_{\bar{\gamma}})=\frac{\partial r_{ij}^{\tau}}{\partial\rho}\) _or_ \(\frac{\partial r_{jk}^{\tau^{\prime}}}{\partial\rho}\) _or_ \(\frac{\partial\theta_{ijk}^{\tau\tau^{\prime}}}{\partial\rho}\) _or_ \(\frac{\partial\mathcal{A}_{ijk}^{\tau\tau^{\prime}}}{\partial\rho}\)__
_To ensure transversality with \(\mathfrak{so}_{d}\), \(\lambda_{\bar{\gamma}}\) is symmetric for all \(\bar{\gamma}\in\Gamma\), as equivariance means that the lattice is searched within an equivalence class in \(GL_{d}(\mathbb{R})/SO_{d}\)._
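A minimal realization of the ket-bra fields of Definition 8 follows (illustrative code; the function names are ours):

```python
import numpy as np

def ket_bra(u, v):
    """|u><v| = u v^T for u, v in R^d."""
    return np.outer(u, v)

def lambda_edge(u_gamma):
    """Edge field |u_gamma><u_gamma|; symmetric by construction."""
    return ket_bra(u_gamma, u_gamma)

def lambda_triplet(u_gamma, u_gamma2):
    """Symmetrised triplet field |u><u'| + |u'><u|, transverse to so_d."""
    return ket_bra(u_gamma, u_gamma2) + ket_bra(u_gamma2, u_gamma)
```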
An equivariant GNN that acts on materials is as follows:
**Proposition 4**.: _A neural network \(\varphi_{\theta}:\mathcal{M}_{n}^{F}\to\mathcal{M}_{n}^{F^{\prime}}\), written \(\varphi_{\theta}:(\rho,x,z)\mapsto\rho^{\prime}\) is decomposed as follows:_
_The generation of messages from the edges and the triplets of the graph is given by \(\varphi_{\theta}^{m^{(k)}}:\mathbb{R}^{f^{(k)}\times\Gamma_{k}}\to\mathbb{R}^{h^{(k)}\times\Gamma_{k}}\):_
\[m_{ij}^{\tau}= \varphi_{\theta}^{m^{(1)}}(z_{i},z_{j},||v_{ij}^{\tau}||) \tag{8a}\] \[m_{\gamma\gamma^{\prime}}= \varphi_{\theta}^{m^{(2)}}(z_{i},z_{j},z_{k},||v_{\gamma}||,||v_{\gamma^{\prime}}||,\theta_{\gamma\gamma^{\prime}}) \tag{8b}\]
_The aggregation and update of the messages at each node is \(\varphi_{\theta}^{z^{(k)}}:\mathbb{R}^{h^{(k)}\times\Gamma_{k}}\to\mathbb{R}^{h^{\prime(k)}\times\Gamma_{k}}\) and \(\varphi_{\theta}^{u}:\mathbb{R}^{z\times\Gamma_{0}}\times\mathbb{R}^{h^{\prime(1)}\times\Gamma_{1}}\times\mathbb{R}^{h^{\prime(2)}\times\Gamma_{2}}\to\mathbb{R}^{z\times\Gamma_{0}}\)_
\[z_{i}^{\prime}= \varphi_{\theta}^{u}(z_{i},\sum_{\gamma\in\Gamma_{1}(i)}\varphi _{\theta}^{z^{(1)}}(m_{\gamma}),\sum_{(\gamma,\gamma^{\prime})\in\Gamma_{2}(i )}\varphi_{\theta}^{z^{(2)}}(m_{\gamma\gamma^{\prime}})) \tag{9}\]
\(\varphi_{\theta}^{\rho^{(k)}}\) _produces the weight of a vector field \(\lambda_{\bar{\gamma}}\), with \(\varphi_{\theta}^{\rho^{(k)}}:\mathbb{R}^{f^{(k)}\times\Gamma_{k}}\to\mathbb{R}^{\Gamma_{k}}\):_
\[\rho^{\prime}=\exp\left(\frac{1}{|\Gamma_{1}|}\sum_{\gamma\in \Gamma_{1}}\varphi_{\theta}^{\rho^{(1)}}(m_{\gamma})\cdot\lambda_{\gamma} \right)\cdot\rho \tag{10a}\] \[\rho^{\prime}=\exp\left(\frac{1}{|\Gamma_{2}|}\sum_{(\gamma, \gamma^{\prime})\in\Gamma_{2}}\varphi_{\theta}^{\rho^{(2)}}(m_{\gamma},m_{ \gamma^{\prime}},\theta_{\gamma\gamma^{\prime}})\cdot\lambda_{\gamma\gamma^{ \prime}}\right)\cdot\rho \tag{10b}\]
\(\varphi_{\theta}\) _is equivariant under \(G=\mathrm{Euc}(d)\times\mathfrak{S}_{n}\times SL_{d}(\mathbb{Z})\) if the vector field \(\lambda_{\bar{\gamma}}\) is invariant to \(SL_{d}(\mathbb{Z})\) and equivariant to \(\mathrm{Euc}(d)\), i.e. \(\lambda_{\bar{\gamma}}(g\cdot M)=g\lambda_{\bar{\gamma}}(M)g^{-1}\) for all \(g\in O(d)\), as translations do not act on the crystal lattice._
Proposition 4 thus states that a GNN architecture acting on crystal materials and satisfying Equations 8-10 is equivariant.
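The lattice update of Eq. 10 then amounts to exponentiating a weighted mean of the fields and applying it to \(\rho\). A sketch (ours) follows; the EMPNN of the next section replaces the exponential by its first-order approximation \(I+A\).

```python
import numpy as np
from scipy.linalg import expm

def update_lattice(rho, weights, fields):
    """Eq. 10: rho' = exp( (1/K) * sum_k w_k * lambda_k ) @ rho.

    weights : (K,) scalars phi_theta^rho(m_k) for the K edges or triplets.
    fields  : (K, d, d) matching vector fields lambda_k (assumed symmetric).
    """
    A = np.tensordot(weights, fields, axes=1) / len(weights)
    return expm(A) @ rho  # first-order variant: (np.eye(len(rho)) + A) @ rho
```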
### EMPNN for Crystal Lattice Deformation
To empirically evaluate our approach, we define EMPNN as a simple but effective GNN model that fits Proposition 4. We keep the model simple to facilitate the comparison between multiple vector fields. The architecture is illustrated in Figure 3. We slightly adapt Equation 10 by using a first-order approximation of the matrix exponential for both the vector fields over the edges and those over the triplets. Further details are given in Section B.2 of the appendix.
**Loss functions.** The goal of a loss function is to reproduce the shape and volume of the target crystal, i.e. \(\varphi(\tilde{\rho}\cdot h^{-1})=g\cdot\rho\cdot h^{-1}\) with \(g\in O(3)\) and \(h\in SL_{3}(\mathbb{Z})\) (as \(\mathrm{Euc}(3)\) acts on \(\rho\) as \(O(3)\)). There are multiple ways to define loss functions, but every definition carries an implicit bias. To evaluate this bias, we use a classical loss function over the normalized lattice parameters. Another approach is to compute a matrix distance between the metric tensors. Both the lattice-parameter and metric-tensor losses are invariant to the Euclidean group but equivariant to \(SL_{3}(\mathbb{Z})\). We tested the mean absolute error (MAE) \(\mathcal{L}_{\text{mae}}^{\text{Param}}\) and the mean squared error (MSE) \(\mathcal{L}_{\text{mse}}^{\text{Param}}\) of the normalized lattice parameters. We also tested the MAE \(\mathcal{L}_{\text{mae}}^{\rho}\), the MSE \(\mathcal{L}_{\text{mse}}^{\rho}\) and the invariant Riemannian metric \(\mathcal{L}_{\text{Riemann}}^{\rho}\) of the metric tensors. The loss expressions are available in the appendix.
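As an illustration, a metric-tensor loss can be written as below. The sketch assumes the rows of \(\rho\) are the lattice vectors, so the Gram matrix \(\rho\rho^{\intercal}\) is unchanged by rotations of the physical frame; the paper's exact \(\mathcal{L}_{\text{Riemann}}^{\rho}\) expression is in the appendix.

```python
import torch

def metric_tensor(rho):
    """Gram matrix g = rho @ rho^T; invariant to O(3) rotations of the frame."""
    return rho @ rho.transpose(-1, -2)

def loss_metric_mse(rho_pred, rho_target):
    """MSE between metric tensors: Euclid-invariant, SL_3(Z)-equivariant."""
    return torch.mean((metric_tensor(rho_pred) - metric_tensor(rho_target)) ** 2)
```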
## 6 Experiments
Our main goal is to show the capability of our EMPNN to perform arbitrary crystal lattice deformations that improve the total energy of crystal structures, i.e. their thermodynamic stability. We rely on denoising of the crystal lattice as the evaluation task. We consider datasets of stable crystals where each structure lies at a local minimum of the formation energy. Applying a small random deformation to a structure leads to a less stable one with a higher energy level (as the energy increases locally in all directions). We can then generate pairs of stable and less stable structures, which we use to teach our model how to deform the less stable structure into a stable one. In general, denoising tasks are more insightful than generative tasks, as they show how a model acts on a
Figure 3: (a) The EMPNN model comprises an embedding layer, standard MPNN layers and EMPNN layers that perform the deformation. (b) An EMPNN layer is composed of an MPNN with vector fields deforming the lattice \(\rho\).
crystal lattice. More specifically, external bias can be better controlled when performing denoising. The chemical composition and atomic positions have an important impact on the outcome. For example, binary and ternary compounds with a light element are known to be significantly easier to generate than ternary compounds without light elements or quaternary compounds. Consequently, a generator may tend to produce simple stable materials instead of a representative sample; in this case, an improvement in the metrics may not reflect an improvement of the lattice. The quality of a crystal is also more difficult to evaluate: if a generative model cannot produce some specific lattice shapes, quantitative metrics will struggle to measure this bias. Therefore, generator performance is not a good measure of the performance of our model on arbitrary lattice deformation.
**Evaluation metrics.** We introduce three evaluation metrics defined as the average improvement of the lattice parameters and of the total energy. Let us denote the lattice parameters by \(abc\in\mathbb{R}^{3}\) and \(\alpha\beta\gamma\in\mathbb{R}^{3}\) and the total energy by \(E\in\mathbb{R}\). Given a parameter \(y\), let \(\tilde{y}\) be the noisy parameter and \(y^{\prime}\) the denoised parameter. The metrics are defined as follows:
\[\text{length}= \frac{1}{3N}\sum_{k=1}^{N}l1(\widetilde{abc}_{k},abc_{k})-l1(abc_{k }^{\prime},abc_{k}) \tag{11}\] \[\text{angle}= \frac{1}{3N}\sum_{k=1}^{N}l1(\widetilde{\alpha\beta\gamma}_{k}, \alpha\beta\gamma_{k})-l1(\alpha\beta\gamma_{k}^{\prime},\alpha\beta\gamma_{k})\] (12) \[\text{energy}= \frac{1}{N}\sum_{k=1}^{N}E_{k}^{\prime}-\tilde{E}_{k} \tag{13}\]
The improvement can thus be geometrical, i.e. based on the lattice parameters, or chemical, i.e. a lowering of the formation energy. Evaluating the formation energy is computationally expensive and is only done on a small subset of the test set.
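For reference, Eqs. 11-12 reduce to the following computation (our illustrative code; `noisy`, `denoised` and `target` are \((N,3)\) arrays of either \(abc\) or \(\alpha\beta\gamma\)):

```python
import numpy as np

def lattice_improvement(noisy, denoised, target):
    """Average l1 improvement of Eqs. 11-12; positive means denoising helped."""
    l1 = lambda a, b: np.abs(a - b).sum(axis=-1)
    return np.mean(l1(noisy, target) - l1(denoised, target)) / noisy.shape[-1]
```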
**Experimental setting and datasets.** We consider three datasets of stable crystals on which we perform denoising: Perov-5 [10, 11], Carbon-24 [24] and Mp-20 [12]. Perov-5 contains perovskite (cubic) structures that have highly uniform shapes but different chemical compositions. Carbon-24 is composed of carbon atoms with a large variety of shapes; this dataset is used to evaluate the performance of our EMPNN without the negative bias of a poor chemical encoding of atoms. Mp-20 is a subset of the Materials Project proposed in [10] that has a large sample of shapes and chemical compositions; it is the most representative of ordinary structures. We use the same training, validation and test splits as [10]. To train our model, we apply random deformations to the lattices \(\rho\) as \(\tilde{\rho}=\exp(A)\rho\) with \(A\sim\mathcal{N}(0,\sigma)\). All conducted experiments use a grid search on hyperparameters. More information about the experiments is given in the supplementary materials. We conducted three experiments to evaluate (1) the loss functions of Section 5.3, (2) the vector fields and (3) the reconstruction capability of our model.
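The random lattice deformation used to build the training pairs can be sketched as follows (illustrative code):

```python
import numpy as np
from scipy.linalg import expm

def deform_lattice(rho, sigma, rng=np.random.default_rng()):
    """Noisy lattice rho_tilde = exp(A) @ rho with entries of A drawn from N(0, sigma)."""
    A = rng.normal(0.0, sigma, size=rho.shape)
    return expm(A) @ rho
```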
**Loss functions evaluation.** Table 1 shows the relationship between the geometrical and chemical metrics. The best energy improvements are generally associated with the best lattice-error improvements. Under the lattice-parameter comparison, the losses based on lattice parameters obtain better performance; we might, however, expect this evaluation to be biased. Since the energy-based metrics show results similar to the geometry-based metrics, we conclude that the bias is negligible.
**Force field evaluation.** We evaluated several vector field configurations acting on the lattice. We first considered edge information: \(\{|\gamma\rangle\langle\gamma|\subseteq\Gamma_{1}\}\) and \(\{r_{\gamma}\subseteq\Gamma_{1}\}\). Second, we considered triplet information without angle and area: \(\{|\gamma\rangle\langle\gamma|\subseteq\Gamma_{2}\}\) and \(\{r_{\gamma},r_{\gamma^{\prime}}\subseteq\Gamma_{2}\}\). As geometrical information such as angles can determine crystal properties, we also include triplet information with unoriented angles and areas: \(\{|\gamma\rangle\langle\gamma^{\prime}|\subseteq\Gamma_{2}\}\) and \(\{|\gamma\rangle\langle\gamma|,|\gamma\rangle\langle\gamma^{\prime}|_{\text{sym}}\subseteq\Gamma_{2}\}\). \(\cup\) represents the union of several vector fields and \(\bullet\) denotes a wildcard that takes all vector fields into account for a given n-graph. We also evaluate the benefit of symmetric actions on the lattice, as suggested in Definition 8. Any matrix in \(GL_{d}(\mathbb{R})\) can be seen as the composition of a rotation and a symmetric matrix (polar decomposition, \(M=RS\) with \(R\in\mathit{SO}_{d}\) and \(S\in GL_{d}(\mathbb{R})/SO_{d}\)). As a rotation does not act on material properties, acting on the lattice with \(M\) is equivalent to acting on the lattice with the symmetric matrix \(S\); forcing the action to be a symmetric matrix may therefore lead to interesting results. We conduct experiments with the relaxed symmetry constraint ("sym") when the symmetric vector fields are used.
As baselines, we first consider the feed-forward (FF) method proposed in [10], which is an invariant
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} loss & \multicolumn{3}{c|}{Carbon-24} & \multicolumn{3}{c|}{Mp-20} & \multicolumn{3}{c}{Perov-5} \\ & lengths & angle & energy & lengths & angle & energy & lengths & angle & energy \\ \hline \(\mathcal{L}_{\text{mae}}^{\text{Param}}\) & **0.696** & **8.390** & **-0.655** (62.5) & **0.785** & **5.093** & **3.124** (51.7) & **0.967** & **15.227** & -3.426 (93.8) \\ \(\mathcal{L}_{\text{mse}}^{\text{Param}}\) & 0.677 & 8.148 & -0.413 (65.6) & 0.710 & 4.752 & 5.485 (44.8) & 0.983 & 15.437 & -3.634 (93.8) \\ \(\mathcal{L}_{\text{mae}}^{\rho}\) & 0.599 & 4.306 & 0.526 (62.5) & 0.540 & 1.674 & 11.268 (40.7) & 0.964 & 15.074 & -1.518 (90.6) \\ \(\mathcal{L}_{\text{mse}}^{\rho}\) & 0.655 & 5.563 & 2.432 (40.6) & 0.683 & 2.645 & 10.964 (18.5) & 0.974 & 15.047 & **-3.741** (93.8) \\ \(\mathcal{L}_{\text{Riemann}}^{\rho}\) & 0.637 & 5.352 & 0.864 (43.8) & 0.729 & 3.777 & 6.859 (51.7) & 0.967 & 15.367 & -3.088 (93.8) \\ \end{tabular}
\end{table}
Table 1: Metrics are defined as the average improvement of the lattice parameters and the average improvement of the total energy, computed between the noisy structure and the denoised structure. Lengths are given in Å (ångström), angles in degrees (higher is better) and energies in eV/atom (lower is better). The value in parentheses is the percentage of structures with lower energy. Energy is calculated with VASP [10, 10, 11] on a subset of 32 structures because of the high computational cost of DFT calculations.
method that aims to predict the lattice parameters (distances and angles) using an invariant encoder with a simple FF head. This allows us to compare the performance of our model with an invariant model. The second baseline (DFT) is a DFT calculation that evaluates the stress tensor of the crystal and optimizes its geometry; the configuration of the DFT calculation is given in the supplementary materials. DFT is not based on ML and, as such, is computationally heavy compared to EMPNN; it is also unsuited for generating crystals without additional optimization techniques. It therefore cannot really be compared with the ML models (baselines and our model), but we use it to provide insight into the metrics.
Table 2 shows an enhanced denoising capability of our model for most of the proposed variants. Including triplet information improves the results when the vector fields are defined from the gradient of the invariant geometry (Section 5.1). However, vector fields defined from edge information achieve more consistent results than those defined from triplets, especially for the ket-bra fields. Our model outperforms FF on Carbon-24 and Mp-20 with a significant improvement of the lattice parameters, but not on Perov-5 (although the performance is very close). This suggests the importance of equivariance: FF is not capable of the fine-grained deformations achievable with vector fields. In fact, FF converges much faster during the first training steps but cannot improve the loss beyond a certain threshold. The only case where FF outperforms our model is when the crystal shapes are extremely uniform, which is the case for Perov-5, where all the structures are cubic. On Perov-5, the angle improvement is not relevant, as FF uses normalized lattice parameters; a random model or a constant prediction would produce similar results. Regarding DFT, it improves the lattice parameters on Carbon-24 and Mp-20 but not on Perov-5. This suggests that, on Carbon-24 and Mp-20, the crystals probably remain close to local minima of the formation energy after the random deformation, but not on Perov-5. Our method can take advantage of the biased distribution of Perov-5, while DFT cannot. Finally, comparing multiple configurations of vector fields shows that ket-bra fields work better on the 1-graph, while gradient-based vector fields work better on the 2-graph. Triplet vector fields obtain better results with the area and angle information.
**Reconstruction task evaluation.** Reconstruction is close to a generative task and aims to build a crystal lattice from scratch; this cannot be performed with chemical simulation techniques such as DFT. We start from the point cloud as if it were in a cubic lattice of one Å on a side. From this cubic lattice, the EMPNN performs the reconstruction. The main hypothesis is that there is a single stable cell corresponding to the starting atomic positions. Our model consistently outperforms the FF model, as shown in Table 3.
## 7 Conclusion
We proposed a general equivariant MPNN framework for materials science that takes into consideration the \(SL_{3}(\mathbb{Z})\) group action on crystal materials. In particular, our model uses multiple vector fields to act on crystal lattices. We showed the benefits of our model compared to equivariant baselines that do not consider \(SL_{3}(\mathbb{Z})\). We also compared different loss functions and results with DFT calculations to give insight into methods based on lattice reconstruction, such as those using auto-encoders.
\begin{table}
\begin{tabular}{c c|c c|c c|c c} & & \multicolumn{2}{c|}{carbon-24} & \multicolumn{2}{c|}{Mp-20} & \multicolumn{2}{c}{Perov-5} \\ & method & lengths & angle & lengths & angle & lengths & angle \\ \hline \multirow{5}{*}{\(|\bullet\rangle\langle\bullet|\)} & \(\{|\bullet\rangle\langle\bullet|\subseteq\Gamma_{1}\}\) & **0.084** & **1.266** & **0.115** & **1.437** & 0.290 & 5.487 \\ & \(\{|\gamma\rangle\langle\gamma|\subseteq\Gamma_{2}\}\) & 0.056 & 0.596 & 0.053 & 0.283 & 0.287 & 5.209 \\ & \(\{|\bullet\rangle\langle\bullet|\subseteq\Gamma_{1}\}\cup\{|\gamma\rangle\langle\gamma|\subseteq\Gamma_{2}\}\) & 0.063 & 0.454 & 0.063 & 0.270 & **0.296** & 5.733 \\ & \(\{|\bullet\rangle\langle\bullet|\subseteq\Gamma_{1}\}\cup\{|\gamma\rangle\langle\gamma|,|\gamma\rangle\langle\gamma^{\prime}|_{\text{sym}}\subseteq\Gamma_{2}\}\) & 0.065 & 0.670 & 0.066 & 0.353 & **0.296** & 5.733 \\ & \(\{|\bullet\rangle\langle\bullet|\subseteq\Gamma_{1}\}\cup\{|\bullet\rangle\langle\bullet|\subseteq\Gamma_{2}\}\) & 0.065 & 0.725 & 0.066 & 0.420 & **0.296** & **5.765** \\ \hline \multirow{5}{*}{\(\nabla\)} & \(\{r_{\gamma}\subseteq\Gamma_{1}\}\) & 0.075 & 1.183 & 0.102 & **1.479** & 0.259 & 4.654 \\ & \(\{r_{\gamma},r_{\gamma^{\prime}}\subseteq\Gamma_{2}\}\) & 0.060 & 0.488 & 0.085 & 0.391 & 0.289 & **5.560** \\ & \(\{r_{\gamma}\subseteq\Gamma_{1}\}\cup\{r_{\gamma},r_{\gamma^{\prime}}\subseteq\Gamma_{2}\}\) & 0.101 & 1.232 & 0.101 & 0.541 & 0.292 & 5.514 \\ & \(\{r_{\gamma}\subseteq\Gamma_{1}\}\cup\{r_{\gamma},r_{\gamma^{\prime}},\mathcal{A}_{\gamma\gamma^{\prime}}\subseteq\Gamma_{2}\}\) & 0.087 & 1.093 & **0.106** & 0.717 & 0.265 & 4.990 \\ & \(\{r_{\gamma}\subseteq\Gamma_{1}\}\cup\{r_{\gamma},r_{\gamma^{\prime}},\theta_{\gamma\gamma^{\prime}}\subseteq\Gamma_{2}\}\) & **0.107** & **1.283** & 0.088 & 0.617 & **0.293** & 5.550 \\ \hline \multirow{5}{*}{\(\nabla_{\text{sym}}\)} & \(\{r_{\gamma}\subseteq\Gamma_{1}\}\) & 0.083 & 1.307 & 0.064 & 0.816 & 0.281 & 5.134 \\ & \(\{r_{\gamma},r_{\gamma^{\prime}}\subseteq\Gamma_{2}\}\) & **0.100** & 1.188 & 0.101 & 0.503 & 0.281 & 4.959 \\ & \(\{r_{\gamma}\subseteq\Gamma_{1}\}\cup\{r_{\gamma},r_{\gamma^{\prime}}\subseteq\Gamma_{2}\}\) & 0.097 & **1.375** & 0.098 & 0.672 & 0.226 & 3.188 \\ & \(\{r_{\gamma}\subseteq\Gamma_{1}\}\cup\{r_{\gamma},r_{\gamma^{\prime}},\mathcal{A}_{\gamma\gamma^{\prime}}\subseteq\Gamma_{2}\}\) & 0.099 & 1.328 & **0.124** & **1.160** & 0.285 & 5.457 \\ & \(\{r_{\gamma}\subseteq\Gamma_{1}\}\cup\{r_{\gamma},r_{\gamma^{\prime}},\theta_{\gamma\gamma^{\prime}}\subseteq\Gamma_{2}\}\) & **0.100** & 1.289 & -0.001 & -0.007 & **0.291** & **5.617** \\ \hline \multicolumn{2}{l|}{feed forward (FF)} & -0.191 & -5.277 & -0.304 & -3.304 & 0.303 & 6.438 \\ \hline \multicolumn{2}{l|}{DFT} & 0.164 & 5.442 & 0.345 & 5.648 & 0.150 & -1.446 \\ \end{tabular}
\end{table}
Table 2: Metrics are defined as the average improvement of the lattice parameters. The experiment is split into five categories of vector fields: ket-bra fields \(|\bullet\rangle\langle\bullet|\), gradients of the invariant geometry without symmetric action \(\nabla\), gradients with symmetric action \(\nabla_{\text{sym}}\), the lattice predicted by a FF readout function, and the lattice obtained from a DFT calculation with VASP.
\begin{table}
\begin{tabular}{c|c c|c c} model & \multicolumn{2}{c|}{carbon-24} & \multicolumn{2}{c}{mp-20} \\ & length & angles & length & angles \\ \hline EMPNN & **0.200** & **3.199** & **0.174** & **1.965** \\ baseline & 0.469 & 13.693 & 0.534 & 6.324 \\ \end{tabular}
\end{table}
Table 3: MAE between the lattice parameters of the original cell and the reconstructed cell (Å and degrees).
## 8 Acknowledgments
This work has been supported by ANR-22-CE23-0002 ERI-ANA, ANR-20-THIA-0004 and by HPC resources from GENCI-IDRIS (Grant 2022-[AD011013338]).
|
2302.02292 | RRNet: Towards ReLU-Reduced Neural Network for Two-party Computation
Based Private Inference | The proliferation of deep learning (DL) has led to the emergence of privacy
and security concerns. To address these issues, secure Two-party computation
(2PC) has been proposed as a means of enabling privacy-preserving DL
computation. However, in practice, 2PC methods often incur high computation and
communication overhead, which can impede their use in large-scale systems. To
address this challenge, we introduce RRNet, a systematic framework that aims to
jointly reduce the overhead of MPC comparison protocols and accelerate
computation through hardware acceleration. Our approach integrates the hardware
latency of cryptographic building blocks into the DNN loss function, resulting
in improved energy efficiency, accuracy, and security guarantees. Furthermore,
we propose a cryptographic hardware scheduler and corresponding performance
model for Field Programmable Gate Arrays (FPGAs) to further enhance the
efficiency of our framework. Experiments show RRNet achieved a much higher ReLU
reduction performance than all SOTA works on CIFAR-10 dataset. | Hongwu Peng, Shanglin Zhou, Yukui Luo, Nuo Xu, Shijin Duan, Ran Ran, Jiahui Zhao, Shaoyi Huang, Xi Xie, Chenghong Wang, Tong Geng, Wujie Wen, Xiaolin Xu, Caiwen Ding | 2023-02-05T04:02:13Z | http://arxiv.org/abs/2302.02292v2 | # RRNet: Towards ReLU-Reduced Neural Network for Two-party Computation Based Private Inference
###### Abstract
The proliferation of deep learning (DL) has led to the emergence of privacy and security concerns. To address these issues, secure Two-party computation (2PC) has been proposed as a means of enabling privacy-preserving DL computation. However, in practice, 2PC methods often incur high computation and communication overhead, which can impede their use in large-scale systems. To address this challenge, we introduce RRNet, a systematic framework that aims to jointly reduce the overhead of MPC comparison protocols and accelerate computation through hardware acceleration. Our approach integrates the hardware latency of cryptographic building blocks into the DNN loss function, resulting in improved energy efficiency, accuracy, and security guarantees. Furthermore, we propose a cryptographic hardware scheduler and corresponding performance model for Field Programmable Gate Arrays (FPGAs) to further enhance the efficiency of our framework. Experiments show RRNet achieved a much higher ReLU reduction performance than all SOTA works on CIFAR-10 dataset.
## I Introduction
Machine-Learning-as-a-Service (MLaaS) has emerged as a popular solution for accelerating inference in various applications [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. The challenges of MLaaS are twofold: inference latency and privacy. To accelerate MLaaS training and inference, accelerated gradient sparsification [12, 13] and model compression methods [14, 15, 16, 17, 18, 19, 20, 21, 22] have been proposed. On the other hand, a major limitation of MLaaS is the requirement for clients to reveal raw input data to the service provider, which may compromise the privacy of users. This issue has been highlighted in previous studies such as [23]. In this work, we aim to address this challenge by proposing a novel approach for privacy-preserving MLaaS. Our method enables clients to maintain the confidentiality of their input data while still allowing for efficient and accurate inference. Homomorphic Encryption (HE) is a powerful tool for securing small to medium-scale deep neural networks (DNNs) without incurring the high costs associated with bootstrapping or significant communication overhead. Other secure multiparty computation (MPC) protocols such as secret-sharing [24] and Yao's Garbled Circuits (GC) [25] have also been proposed to support the evaluation of operator blocks in large-scale networks. However, our focus in this work is on the use of secure two-party computation (2PC) [24] as a means of protecting DNN models.
The main challenge in 2PC-based private inference (PI) is the overhead associated with the comparison protocol for non-linear operators [26]. To address this challenge, existing works have focused on optimizing the cost of the ReLU operator by minimizing ReLU counts (e.g., DeepReduce [27], CryptoNAS [28]) or replacing ReLUs with polynomials (e.g., CryptoNets [29], Delphi [30], SAFENet [31]). Another trend in the field has been the use of hardware acceleration for PI, such as using Graphics Processing Units (GPUs) [24, 32] to speed up MPC-based DNNs. However, both of these approaches have limitations in effectively exploring the design space of 2PC-based PI. In this work, we address these limitations by proposing a novel approach that jointly optimizes the cost of non-linear operators and the hardware acceleration for PI, enabling effective design-space exploration.
Current approaches for optimizing the performance of 2PC-based private inference (PI) rely on heuristic methods for evaluating the impact of different non-linear operators on system performance. In this work, we propose a novel approach, the **ReLU-Reduced Neural Architecture Search (RRNet)** framework, that jointly optimizes the structure of the deep neural network (DNN) model and the hardware architecture to support high-performance MPC-based PI. Our framework eliminates the need for manual heuristic analysis by automating the process of exploring the design space and identifying the optimal configuration of DNN models and hardware architectures for 2PC-based PI. We use FPGA accelerator design as a demonstration and summarize our contributions:
1. We propose a novel approach to addressing the high computational cost of non-linear operators in 2PC-based PI. We introduce a _straight-through polynomial activation initialization_ method that enables a trainable polynomial activation function as an alternative to the computationally expensive ReLU operator.
2. We develop a cryptographic hardware scheduler and a corresponding performance model for the FPGA platform. We also construct a latency lookup table to optimize the scheduling of cryptographic operations for improved performance and energy efficiency.
3. We propose a differentiable NAS framework that takes into account the constraints and latencies of cryptographic operators. Our framework enables the selection of appropriate polynomial or non-polynomial activation functions based on the specific needs of the task and
the computational resources available. By integrating cryptographic considerations into the NAS process, our framework ensures that the resulting DNN models are both accurate and secure, while also being optimized for the target hardware platform.
## II **Basic of Cryptographic Operators**
### _Secret Sharing_
**2PC setup.** We consider a scheme involving two semi-honest servers in an MLaaS application [33], where the two servers receive confidential input shares and jointly invoke the evaluation.
**Additive Secret Sharing.** In this work, we evaluate 2PC secret sharing. As a symbolic representation, for a secret value \(x\in\mathbb{Z}_{m}\), \(\llbracket x\rrbracket\leftarrow(x_{S_{0}},x_{S_{1}})\) denotes the two shares, where \(x_{S_{i}},i\in\{0,1\}\), belongs to server \(S_{i}\). Other notations are as follows:
* _Share Generation_\(\text{shr}(x)\): A random value \(r\) in \(\mathbb{Z}_{m}\) is sampled, and shares are generated as \(\llbracket x\rrbracket\leftarrow(r,x-r)\).
* _Share Recovering_\(\text{rec}(\llbracket x\rrbracket)\): Given shares \(\llbracket x\rrbracket\leftarrow(x_{S_{0}},x_{S_{1}})\), it computes \(x\gets x_{S_{0}}+x_{S_{1}}\) to recover \(x\).
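A minimal simulation of these two primitives over a 32-bit ring is sketched below (our illustrative code; inputs are assumed to be already reduced to \([0,m)\), and signed values such as those of Fig. 1 correspond to the two's-complement reading of the residues):

```python
import numpy as np

RING = 2 ** 32  # ring Z_m with a 32-bit modulus

def shr(x, rng=np.random.default_rng()):
    """Share generation shr(x): sample r uniformly and return (r, x - r) mod m."""
    x = np.asarray(x, dtype=np.uint64)
    r = rng.integers(0, RING, size=x.shape, dtype=np.uint64)
    return r, (x - r) % RING  # uint64 wrap-around is harmless; we reduce mod 2^32

def rec(shares):
    """Share recovery rec([[x]]): x = x_S0 + x_S1 mod m."""
    x0, x1 = shares
    return (x0 + x1) % RING
```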
An example of plaintext vs. secret-shared ciphertext evaluation is given in Fig. 1, where the ring size is 4 bits and \(\mathbb{Z}_{m}=\{-8,-7,...,7\}\). Details are given in the following sections.
### _Polynomial Operators Over Secret-Shared Data_
**Scaling and Addition.** We denote secret shared matrices as \(\llbracket X\rrbracket\) and \(\llbracket Y\rrbracket\). The encrypted evaluation is given in Eq. 1.
\[\llbracket aX+Y\rrbracket\leftarrow(aX_{S_{0}}+Y_{S_{0}},aX_{S_{1}}+Y_{S_{1}}) \tag{1}\]
**Multiplication.** We consider the matrix multiplicative operation \(\llbracket R\rrbracket\leftarrow\llbracket X\rrbracket\otimes\llbracket Y\rrbracket\) in the secret-sharing pattern. We use an oblivious transfer (OT) [34] based approach. To make the multiplicative computation secure, an extra Beaver triple [35] should be generated as \(\llbracket Z\rrbracket=\llbracket A\rrbracket\otimes\llbracket B\rrbracket\), where \(A\) and \(B\) are randomly initialized. Specifically, their secret shares are denoted as \(\llbracket Z\rrbracket=(Z_{S_{0}},Z_{S_{1}})\), \(\llbracket A\rrbracket=(A_{S_{0}},A_{S_{1}})\), and \(\llbracket B\rrbracket=(B_{S_{0}},B_{S_{1}})\). Then, two matrices are derived from the given shares, \(E_{S_{i}}=X_{S_{i}}-A_{S_{i}}\) and \(F_{S_{i}}=Y_{S_{i}}-B_{S_{i}}\), at each party's end separately. The intermediate shares are jointly recovered as \(E\leftarrow\text{rec}(\llbracket E\rrbracket)\) and \(F\leftarrow\text{rec}(\llbracket F\rrbracket)\). Finally, each party, i.e., server \(S_{i}\), calculates the secret-shared \(R_{S_{i}}\) locally:
\[R_{S_{i}}=-i\cdot E\otimes F+X_{S_{i}}\otimes F+E\otimes Y_{S_{i}}+Z_{S_{i}} \tag{2}\]
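The correctness of this protocol is easy to check by simulating both servers in a single process. The sketch below is illustrative: the ring reduction and the OT-based triple generation are omitted, and a trusted dealer samples the Beaver triple instead.

```python
import numpy as np

def shr(x, rng):
    """Additive sharing over the integers (ring reduction omitted for readability)."""
    r = rng.integers(-10**6, 10**6, size=x.shape)
    return r, x - r

def beaver_matmul(X, Y, rng=np.random.default_rng(0)):
    """Secret-shared matrix product following Eq. 2, with both servers simulated."""
    X0, X1 = shr(X, rng)
    Y0, Y1 = shr(Y, rng)
    A = rng.integers(-100, 100, size=X.shape)  # dealer's Beaver triple: Z = A @ B
    B = rng.integers(-100, 100, size=Y.shape)
    (A0, A1), (B0, B1), (Z0, Z1) = shr(A, rng), shr(B, rng), shr(A @ B, rng)
    # Each server computes its share of E = X - A and F = Y - B locally; E and F
    # are jointly recovered, which leaks nothing since A and B are uniform masks.
    E = (X0 - A0) + (X1 - A1)
    F = (Y0 - B0) + (Y1 - B1)
    R0 = X0 @ F + E @ Y0 + Z0                  # Eq. 2 with i = 0 (the -i*E@F term vanishes)
    R1 = -(E @ F) + X1 @ F + E @ Y1 + Z1       # Eq. 2 with i = 1
    return R0 + R1                             # rec([[R]]) equals X @ Y
```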
**Square.** For the element-wise square operator \(\llbracket R\rrbracket\leftarrow\llbracket X\rrbracket\otimes\llbracket X\rrbracket\), we need to generate a Beaver pair \(\llbracket Z\rrbracket\) and \(\llbracket A\rrbracket\), where \(\llbracket Z\rrbracket=\llbracket A\rrbracket\otimes\llbracket A\rrbracket\) and \(\llbracket A\rrbracket\) is randomly initialized. The parties then evaluate \(\llbracket E\rrbracket=\llbracket X\rrbracket-\llbracket A\rrbracket\) and jointly recover \(E\leftarrow\text{rec}(\llbracket E\rrbracket)\). The result \(R\) can be obtained through Eq. 3.
\[R_{S_{i}}=Z_{S_{i}}+2E\otimes A_{S_{i}}+E\otimes E \tag{3}\]
### _Non-Polynomial Operator Modules_
Non-polynomial operators such as ReLU and MaxPool are evaluated using a secure comparison protocol.
**Secure 2PC Comparison.** The 2PC comparison, a.k.a. the millionaires' protocol, determines which of the values held by the two parties is larger, without disclosing the exact values to each other. We adopt the protocol of [26] for 2PC comparison.
## III **The RRNet Framework**
The overview of the framework is given in Fig. 2. This section introduces the new cryptography-friendly activation function and its initialization method. The modeling of DNN operators is conducted under the 2PC setup on FPGA. Finally, the hardware-aware NAS framework is proposed to find a proper DNN architecture.
### _Trainable \(X^{2}act\) Non-linear Function._
We use a hardware-friendly trainable second-order polynomial activation function as a non-linear function candidate, shown in Eq. 4, where \(w_{1}\), \(w_{2}\) and \(b\) are all trainable parameters. We propose the _straight-through polynomial activation initialization_ (**STPAI**) method, which sets \(w_{1}\) and \(b\) in Eq. 4 to be small enough and \(w_{2}\) to be close to 1 at initialization.
\[\delta(x)=\frac{c}{\sqrt{N_{x}}}w_{1}x^{2}+w_{2}x+b \tag{4}\]
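A PyTorch sketch of the \(X^{2}act\) module with STPAI initialization is given below; the module name and the concrete value used for "small enough" (\(10^{-3}\)) are our assumptions.

```python
import torch
import torch.nn as nn

class X2Act(nn.Module):
    """Trainable second-order polynomial activation of Eq. 4."""

    def __init__(self, n_x, c=1.0, eps=1e-3):
        super().__init__()
        self.scale = c / n_x ** 0.5  # the c / sqrt(N_x) factor of Eq. 4
        # STPAI: w1 and b start near 0 and w2 near 1, so the activation initially
        # passes inputs straight through, like an identity function.
        self.w1 = nn.Parameter(torch.full((1,), eps))
        self.w2 = nn.Parameter(torch.ones(1))
        self.b = nn.Parameter(torch.full((1,), eps))

    def forward(self, x):
        return self.scale * self.w1 * x ** 2 + self.w2 * x + self.b
```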
### _Search Space of Hardware-aware NAS._
We focus on convolutional neural networks (CNNs) in our study. CNNs are mostly composed of Conv-Act-Pool and Conv-Act blocks. In this work, we use regular backbone models as the search baseline, such as the VGG family, MobileNetV3, and the ResNet family. A toy example is shown in Fig. 2, where a two-layer supernet is constructed; the first layer is Conv-Act-Pool and the second layer is Conv-Act. The first layer has four combinations: Conv-ReLU-Pool\({}_{\text{m}}\), Conv-ReLU-Pool\({}_{\text{a}}\), Conv-\(X^{2}act\)-Pool\({}_{\text{m}}\), and Conv-\(X^{2}act\)-Pool\({}_{\text{a}}\). The second layer has two combinations: Conv-ReLU and Conv-\(X^{2}act\). The Conv block's parameters can be either shared among candidates or trained separately during the search.
### _Differentiable Harware Aware NAS Algorithm_
In this work, we incorporate a latency constraint into the target loss function of the DARTS framework [36] and develop a differentiable cryptographic-hardware-aware micro-architecture search framework. We first determine a supernet model for NAS and introduce gated operators \(OP_{l}(x)\), which parametrize the selection among candidate operators \(OP_{l,j}(x)\) with trainable weights \(\alpha_{l,k}\) (Eq. 5). For example, a gated pooling operator consists of MaxPool and AvgPool operators and 2 trainable parameters for the pooling selection. The latency of the operators is determined by the performance predictor. A parameterized latency constraint is given as \(Lat(\alpha)=\sum_{l=1}^{n}\sum_{j=1}^{m}\theta_{l,j}Lat(OP_{l,j})\), where the latencies of the gated operators are weighted by \(\theta_{l,j}\). We incorporate the latency constraint into the loss function as \(\zeta(\omega,\alpha)=\zeta_{CE}(\omega,\alpha)+\lambda Lat(\alpha)\), penalizing the latency \(Lat(\alpha)\) by \(\lambda\).
Fig. 1: An example of 4-bit plaintext vs. ciphertext evaluation.
\[\theta_{l,j}=\frac{\exp(\alpha_{l,j})}{\sum_{k=1}^{m}\exp(\alpha_{l,k})},\;OP_{l}( x)=\sum_{k=1}^{m}\theta_{l,k}OP_{l,k}(x) \tag{5}\]
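A compact realization of the gated operator and the latency-penalized loss is sketched below (our illustrative code; the per-candidate latencies would come from the FPGA latency lookup table).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedOp(nn.Module):
    """Gated operator of Eq. 5: a softmax over alpha mixes the candidate operators."""

    def __init__(self, candidates, latencies):
        super().__init__()
        self.ops = nn.ModuleList(candidates)
        self.alpha = nn.Parameter(torch.zeros(len(candidates)))  # architecture weights
        self.register_buffer("lat", torch.tensor(latencies))     # Lat(OP_{l,k})

    def forward(self, x):
        theta = F.softmax(self.alpha, dim=0)
        return sum(t * op(x) for t, op in zip(theta, self.ops))

    def expected_latency(self):
        return (F.softmax(self.alpha, dim=0) * self.lat).sum()

def nas_loss(logits, target, gated_ops, lam):
    """zeta(omega, alpha) = zeta_CE + lambda * Lat(alpha)."""
    return F.cross_entropy(logits, target) + lam * sum(g.expected_latency() for g in gated_ops)
```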
The optimization objective of our design is shown in Eq. 6: we aim to minimize the validation loss \(\zeta_{val}(\omega^{*},\alpha)\) with respect to the architecture parameters \(\alpha\). The optimal weights \(\omega^{*}\) are obtained by minimizing the training loss. A second-order approximation of the optimal weights is given as \(\omega^{*}\approx\omega^{\prime}=\omega-\xi\,\delta\zeta_{trn}(\omega,\alpha)/\delta\omega\); the approximation is based on the current weights and their gradient. The virtual learning rate \(\xi\) can be set equal to that of the weight optimizer.
\[\text{argmin}_{\alpha}\;\zeta_{val}(\omega^{*},\alpha),\;s.t.\;\omega^{*}= \text{argmin}_{\omega}\;\zeta_{trn}(\omega,\alpha) \tag{6}\]
Eq. 7 gives the approximate \(\alpha\) gradient using the chain rule; the second term of the \(\alpha\) gradient can be further approximated using a small perturbation \(\varepsilon\), where the weights are \(\omega^{\pm}=\omega\pm\varepsilon\,\delta\zeta_{val}(\omega^{\prime},\alpha)/\delta\omega^{\prime}\), and Eq. 8 is used for the final \(\alpha\) gradient.
\[\frac{\delta\zeta_{val}(\omega^{\prime},\alpha)}{\delta\alpha}-\xi\,\frac{\delta^{2}\zeta_{trn}(\omega,\alpha)}{\delta\omega\,\delta\alpha}\,\frac{\delta\zeta_{val}(\omega^{\prime},\alpha)}{\delta\omega^{\prime}} \tag{7}\]
\[\frac{\delta^{2}\zeta_{trn}(\omega,\alpha)}{\delta\omega\,\delta\alpha}=\frac{\delta\left(\zeta_{trn}(\omega^{+},\alpha)-\zeta_{trn}(\omega^{-},\alpha)\right)}{2\varepsilon\,\delta\alpha} \tag{8}\]
With the help of the analytical modeling of the optimization objective, we derive the differentiable polynomial architecture search framework in Algo. 1. The inputs of the search framework are the backbone model \(M_{b}\), the dataset \(D\), the latency lookup table \(Lat(OP)\), and the hardware resources \(H\). The algorithm returns a searched polynomial model \(M_{p}\). It iteratively trains the architecture parameters \(\alpha\) and the weight parameters \(\omega\) until convergence. Each \(\alpha\) update requires 4 forward passes and 5 backward passes according to Eq. 6 to Eq. 8, and each \(\omega\) update needs 1 forward pass and 1 backward pass. After the training loop converges, the algorithm returns a deterministic model architecture by applying \(OP_{l}(x)=OP_{l,k^{*}}(x),\;s.t.\;k^{*}=\text{argmax}_{k}\;\alpha_{l,k}\). The returned architecture is then used for 2PC-based PI evaluation.
## IV Evaluation
**Hardware setup.** Our experimental platform is based on two ZCU104 MPSoCs, both connected to a router with \(Rt_{bw}=1\,GB/s\) through LAN. The load/store bus width is 128 bits and our data is 32 bits; thus, we simultaneously load and store four data elements and run the kernel at \(freq=200\,MHz\). The fixed-point ring size is set to 32 bits for 2PC inference.
**Datasets and Backbone Models.** We evaluate RRNet on two public datasets: CIFAR-10 [37] and ImageNet [38] for image classification tasks.
Fig. 2: Overview of the RRNet framework for the 2PC DNN-based private inference setup.
**Systems Setup.** All polynomial architecture search experiments are conducted in the plaintext domain on Ubuntu 18.04 with an Nvidia Quadro RTX 6000 GPU with 24 GB of GPU memory. The cryptographic DNN inference experiments are conducted on an FPGA-based accelerator for the 2PC DNN setup. Two ZCU104 boards are used as server 0 and server 1; each is equipped with an XCZU7EV MPSoC for the PS-PL system. The two boards are connected to a router in an Ethernet LAN setup. The FPGA accelerators are optimized with coarse-grained and fine-grained pipeline structures.
### **Hardware-aware NAS Evaluation**
Our hardware-aware NAS experiment (algorithm described in Sec. III-C) was conducted on the CIFAR-10 training dataset. New training and validation datasets are randomly sampled from the CIFAR-10 training dataset with a 50%-50% split ratio.
The hardware latency is modeled through the FPGA performance predictor, and the \(\lambda\) for the latency constraint in the loss function is tuned to generate architectures with different latency-accuracy trade-offs. Before the search starts, the main model parameters are randomly initialized, and the polynomial activation function is initialized through the **STPAI** method. We use VGG-16 [39], ResNet-18, ResNet-34, ResNet-50 [40], and MobileNetV2 [41] as backbone model structures to evaluate our RRNet framework.
The finetuned model accuracy under the 2PC setting with regard to the \(\lambda\) setting can be found in Fig. 3(a). The baseline model with all-ReLU activations and the all-polynomial model are also included in the figure for comparison. Generally, a higher polynomial replacement ratio leads to lower accuracy. The VGG-16 model is the most vulnerable model in the study: complete polynomial replacement leads to a 3.2% accuracy degradation (baseline 93.5%). On the other hand, the ResNet family is very robust to full polynomial replacement, with only a \(0.26\%\) to \(0.34\%\) accuracy drop for ResNet-18 (baseline 93.7%), ResNet-34 (baseline 93.8%) and ResNet-50 (baseline 95.6%). MobileNetV2's performance is in between those of VGG and ResNet: full polynomial replacement leads to a \(1.27\%\) degradation (baseline 94.09%).
On the other hand, Fig. 3(b) presents the latency profiling results of the searched models on the CIFAR-10 dataset under the 2PC setting. Full polynomial replacement leads to a 20\(\times\) speedup on VGG-16 (baseline 382 ms), a 15\(\times\) speedup on MobileNetV2 (baseline 1543 ms), a 26\(\times\) speedup on ResNet-18 (baseline 324 ms), a 19\(\times\) speedup on ResNet-34 (baseline 435 ms), and a 25\(\times\) speedup on ResNet-50 (baseline 922 ms). With the strictest constraint \(\lambda\), the searched models have the lowest latency.
### **Cross-work ReLU Reduction Performance Comparison**
A further accuracy vs. ReLU-count analysis is conducted and compared with SOTA ReLU-reduction works: DeepReduce [27], DELPHI [30], CryptoNAS [28], and SNI [42]. As shown in Fig. 4, we generate the Pareto frontier with the best accuracy vs. ReLU-count trade-off from our architecture search results. We name the selected models **RRNet** and compare them with other works. The accuracy vs. ReLU-count comparison is shown in Fig. 5. Our work achieves a much better accuracy vs. ReLU-count trade-off than existing works, especially in the regime with extremely few ReLU operations.
## V **Conclusion**
In this work, to reduce the high comparison-protocol overhead of the non-linear operators in 2PC-based privacy-preserving DL, we propose the RRNet framework, which enables low-latency, high-energy-efficiency, and high-accuracy 2PC-DL. Experiments show that RRNet achieves a much higher ReLU reduction performance than all SOTA works on the CIFAR-10 dataset.
Fig. 4: Accuracy-ReLU count trade-off on CIFAR-10.
Fig. 5: ReLU reduction comparison on CIFAR-10.
Fig. 3: RRNet framework evaluation under 2PC PI setup. Network bandwidth: 1 GB/s. Device: ZCU104. |
2305.09348 | One-Shot Online Testing of Deep Neural Networks Based on Distribution
Shift Detection | Neural networks (NNs) are capable of learning complex patterns and
relationships in data to make predictions with high accuracy, making them
useful for various tasks. However, NNs are both computation-intensive and
memory-intensive methods, making them challenging for edge applications. To
accelerate the most common operations (matrix-vector multiplication) in NNs,
hardware accelerator architectures such as computation-in-memory (CiM) with
non-volatile memristive crossbars are utilized. Although they offer benefits
such as power efficiency, parallelism, and nonvolatility, they suffer from
various faults and variations, both during manufacturing and lifetime
operations. This can lead to faulty computations and, in turn, degradation of
post-mapping inference accuracy, which is unacceptable for many applications,
including safety-critical applications. Therefore, proper testing of NN
hardware accelerators is required. In this paper, we propose a \emph{one-shot}
testing approach that can test NNs accelerated on memristive crossbars with
only one test vector, making it very suitable for online testing applications.
Our approach can consistently achieve $100\%$ fault coverage across several
large topologies with up to $201$ layers and challenging tasks like semantic
segmentation. Nevertheless, compared to existing methods, the fault coverage is
improved by up to $24\%$, the memory overhead is only $0.0123$ MB, a reduction
of up to $19980\times$ and the number of test vectors is reduced by
$10000\times$. | Soyed Tuhin Ahmed, Mehdi B. Tahoori | 2023-05-16T11:06:09Z | http://arxiv.org/abs/2305.09348v1 | # One-Shot Online Testing of Deep Neural Networks Based on Distribution Shift Detection
###### Abstract
Neural networks (NNs) are capable of learning complex patterns and relationships in data to make predictions with high accuracy, making them useful for various tasks. However, NNs are both computation-intensive and memory-intensive methods, making them challenging for edge applications. To accelerate the most common operations (matrix-vector multiplication) in NNs, hardware accelerator architectures such as computation-in-memory (CiM) with non-volatile memristive crossbars are utilized. Although they offer benefits such as power efficiency, parallelism, and nonvolatility, they suffer from various faults and variations, both during manufacturing and lifetime operations. This can lead to faulty computations and, in turn, degradation of post-mapping inference accuracy, which is unacceptable for many applications, including safety-critical applications. Therefore, proper testing of NN hardware accelerators is required. In this paper, we propose a _one-shot_ testing approach that can test NNs accelerated on memristive crossbars with only one test vector, making it very suitable for online testing applications. Our approach can consistently achieve \(100\%\) fault coverage across several large topologies with up to \(201\) layers and challenging tasks like semantic segmentation. Nevertheless, compared to existing methods, the fault coverage is improved by up to \(24\%\), the memory overhead is only \(0.0123\) MB, a reduction of up to \(19980\times\) and the number of test vectors is reduced by \(10000\times\).
one-shot testing, single-shot testing, functional testing, Memristor
## I Introduction
Deep learning algorithms have been the driving force behind substantial advancements in various domains, such as computer vision, natural language processing, and speech recognition. Recently, deep learning algorithms have been increasingly deployed in safety- and security-critical domains such as autonomous driving, medical imaging, and malware detection. At the heart of deep learning systems are multi-layered neural networks (NNs) that learn hierarchical representations from the training dataset and make actionable predictions on inference data. Despite the algorithmic success of NNs, they are computationally demanding, and their conventional hardware implementation suffers from a memory bottleneck due to von Neumann architectures, where memory and processing units are physically separated, leading to significant data movement and energy consumption.
Therefore, several specialized architectures and hardware accelerators, such as computation-in-memory (CiM) architectures [1], have been explored to accelerate NNs in hardware. CiM leverages emerging non-volatile memory (NVM) technologies, such as Resistive Random-Access Memory (ReRAM) [2], Phase Change Memory (PCM) [3], and Spin Transfer Torque Magnetic Random Access Memory (STT-MRAM) [4], to perform computations directly in memory, mitigating the memory bottleneck. Emerging NVM technologies offer benefits such as zero leakage power, non-volatility, high switching speed, and endurance compared to conventional CMOS-based memories.
However, emerging NVM technologies exhibit several post-manufacturing and online non-idealities, including read disturb errors [5], retention faults [6], manufacturing variations, and online thermal variations [7]. These non-idealities can adversely impact the online functionality of memristive chips for deep learning applications and negatively impact the prediction ability of NNs [8]. Therefore, testing such hardware systems is crucial to ensure their reliability and correct functionality, especially in safety-critical applications.
Nevertheless, testing NN hardware accelerators presents a unique set of challenges due to their complexity, inherent non-linearity, and vast number of layers and parameters. Unlike traditional hardware or software testing approaches, NNs cannot be exhaustively tested with all possible input combinations, as there can be millions of them, leading to high testing overhead. Furthermore, specialized CiM-based NN hardware accelerators do not contain conventional digital Design for Test (DfT) infrastructure, such as scan chains. A testing approach is therefore desired that does not require access to training data, treats the NN as intellectual property (IP), i.e., does not require access to intermediate results or backdoors to the model (non-invasive), and can test the NN in _one shot_.
One-shot testing, the extreme form of test compaction, can test an NN model and its hardware realization with a single test vector and forward pass. It reduces testing time, the required computation, and system downtime to a minimum. Even memristor chip-specific testing approaches can lead to long system downtime due to the large number
Fig. 1: Flow diagram of our proposed one-shot testing approach. A KL-divergence value greater than a predefined threshold indicates faults or variation in the memristive NN.
of test vectors [9]. Longer system downtime can be unacceptable for many applications, including "always-on" scenarios, e.g., real-time object detection and tracking, voice assistants, anomaly detection, and predictive maintenance, particularly in mission-critical applications.
In this paper, we propose a comprehensive _one-shot_ testing framework for CiM-based memristive deep learning hardware accelerators that treats NNs as a black box and does not require access to training datasets or intermediate results. Our approach is capable of testing large-scale NNs with hundreds of layers using a single test vector, significantly reducing the number of forward passes and the computational overhead during testing. We evaluate our approach on several large CNN topologies with up to \(201\) layers and on several difficult tasks, e.g., ImageNet classification with \(1000\) classes and semantic segmentation on real-world biomedical data. Nevertheless, we consistently achieve \(100\%\) test coverage across different fault types and fault severities.
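To make the distribution-shift test of Fig. 1 concrete, the sketch below flags a fault when the observed output statistics deviate from the expected unit Gaussian, using the closed-form KL divergence between Gaussians. The statistic, the function names, and the threshold calibration are our illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def kl_to_unit_gaussian(outputs):
    """Moment-matched KL( N(mu, var) || N(0, 1) ) of the observed outputs."""
    mu, var = outputs.mean(), outputs.var()
    return 0.5 * (var + mu ** 2 - 1.0 - np.log(var))

def one_shot_test(model_fn, test_vector, threshold):
    """One forward pass; a KL value above the threshold indicates faults/variations."""
    return kl_to_unit_gaussian(model_fn(test_vector)) > threshold
```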
The rest of the paper is organized as follows: Section II provides background information on deep learning, NVM technologies, and their non-idealities. Section III describes our proposed one-shot testing method in detail, Section IV describes the fault injection framework, evaluates our approach, and presents the results, and finally, Section V concludes the paper.
## II Preliminaries
### _Memristor Devices and Non-idealities_
Memristor technologies, including ReRAM [2], PCM [3], and STT-MRAM [4], are two-terminal nanoscale devices that form the basic building block of in-memory computing for the acceleration of NNs. The number of stable states varies across technologies: STT-MRAM can be programmed to a Low Resistance State \(LRS\)/High Conductance State \(G_{on}\) or a High Resistance State \(HRS\)/Low Conductance State \(G_{off}\), while ReRAM and PCM can be programmed into multiple stable states [10]. Multilevel cells can also be designed using multiple STT-MRAM devices.
Despite their promising characteristics, memristive devices exhibit a number of non-idealities [11, 12, 13, 14, 15, 16, 17] that can be broadly categorized as either permanent or soft faults.
Permanent faults refer to those that irreversibly alter the conductance state of memristor cells, preventing them from being programmed to the desired resistance/conductance state for encoding NN parameters. Cells with permanent faults cannot be restored to their original fault-free values.
On the contrary, soft faults refer to those that temporarily alter the conductance state of memristor cells but can still cause deviations in NN parameters. Faulty memristor cells can, however, be restored to their original values.
Irrespective of the specific type of fault, they occur during both the manufacturing process and in-field operation. As a result, the model parameters and activations of NNs can deviate from their expected values after their hardware mapping (post-mapping) due to manufacturing faults and post-deployment due to runtime faults. In this section, common memristor device faults and their corresponding fault models are discussed.
**Stuck-at faults.** Among all kinds of hard faults, stuck-at faults appear most frequently in memristive crossbars. Stuck-at faults are modelled as the memristor cell conductance becoming stuck at high conductance (stuck-at-\(G_{on}\)) or low conductance (stuck-at-\(G_{off}\)). Depending on their cause, stuck-at faults can be categorized as either soft or permanent faults: stuck-at faults caused by the limited endurance under repeated reading are categorized as soft faults, whereas manufacturing defects that cause stuck-at faults are categorized as permanent faults. In a memristive crossbar array, stuck-at faults are randomly distributed, and their rate can be as high as 10% [12]. Defects like stuck-open or short can also be modeled as stuck-at-\(G_{off}\) and stuck-at-\(G_{on}\)[11, 12]. Consequently, the parameters of the memristive NN implementation deviate from their intended values. The corresponding parameter bit changes, depending on the encoding, can be represented as either stuck-at-0 or stuck-at-1.
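For fault-injection simulation, this model can be applied to a crossbar conductance matrix as follows (illustrative sketch; the even split between the two stuck polarities is our assumption).

```python
import numpy as np

def inject_stuck_at(G, p_fault, g_on, g_off, rng=np.random.default_rng()):
    """Randomly distribute stuck-at-G_on / stuck-at-G_off faults over the crossbar."""
    G = G.copy()
    faulty = rng.random(G.shape) < p_fault  # fault rates of up to 10% are reported
    stuck_on = rng.random(G.shape) < 0.5
    G[faulty & stuck_on] = g_on
    G[faulty & ~stuck_on] = g_off
    return G
```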
**Manufacturing and In-field Variations.** Device variability occurs when the conductance of memristors follows a distribution rather than taking a fixed value, due to factors such as manufacturing process variations. In-field variations, on the other hand, emerge from dynamic changes in the memristor's environment, including temperature and other environmental factors, which can cause the conductance to fluctuate. Both types of variation can alter the current sum on the bit-lines of the crossbar and reduce the sensing margin, leading to incorrectly sensed values.
**Read/write disturbance.** Both memristor reading (inference) and writing (parameter mapping) can be affected by read and write currents impacting other memristor cells sharing the same bit-line in the crossbar array. Such faults can lead to unintentional switching of memristor conductance states during read operations. Moreover, write disturbance faults influence the data (NN parameters) stored in memristor cells [12, 15].
**Slow-Write Fault.** During NN parameter mapping, defective memristor cells might experience longer write delays, referred to as slow-write faults. In ReRAM, slow-write faults can emerge from repeated write operations. Switching in MTJ and PCM is inherently stochastic, causing non-deterministic write delays even when the environmental factors remain constant. A write failure can happen if the MTJ does not switch within a specified time or the switching pulse is
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Symbol & Explanation & Symbol & Explanation \\ \hline \(t\) & Threshold for fault detection & \(\hat{\mathcal{N}}\) & Output distribution of a memristive NN \\ \hline \(D_{\mathrm{KL}}(\hat{\mathcal{N}}\parallel\mathcal{N})\) & KL divergence between \(\hat{\mathcal{N}}\) and \(\mathcal{N}\) & \(\mathcal{N}\) & Expected output distribution (unit Gaussian) \\ \hline \(\mathbf{W}\) & Weight matrix & \(\bar{x}\) & One-shot test vector \\ \hline \(\mathbf{b}\) & Bias vector & \(\mathcal{L}\) & Loss function \\ \hline \(\mathbf{\hat{y}}\) & Output of the memristive NN & \(\mathcal{F}\) & Pre-trained NN \\ \hline \(Q\) & Learning rate decay interval & \(\alpha\) & Learning rate \\ \hline \(\nabla\mathcal{L}\) & Gradient from backpropagation & \(\eta_{0}\) & Noise scale for variations \\ \hline \(\mathcal{P}_{flip}\) & Percentage of faults & \(y^{\prime}\) & The ground truth for optimization \\ \hline \(\mathbf{z}\) & Intermediate activations & \(\theta\) & Learnable parameters of a NN \\ \hline \(\mu\) & Mean & \(\sigma^{2}\) & Variance \\ \hline \(N\) & Number of output classes & \(\mathcal{M}\) & Number of Monte Carlo fault runs \\ \hline \end{tabular}
\end{table} TABLE I: Notations used in this paper.
truncated before the switching operation is completed [13, 14].
### _Neural Networks (NNs)_
Neural Networks (NNs) are computational models inspired by the structure and operation of biological neural networks. NNs are composed of multiple layers of neurons organized into a single input layer, a single output layer, and multiple hidden layers. The input layer performs no computation and only receives the input data, whereas the hidden layers compute intermediate activations \(\mathbf{z}\) and the output layer generates the final result \(\mathbf{y}\). The basic computation of a layer \(l\) consists of the weighted sum of its inputs and the element-wise addition of a bias, followed by a non-linear activation function \(\phi(\cdot)\). The overall computation of a NN is as follows:
\[\mathbf{z}^{(0)}=\mathbf{x}, \tag{1}\] \[\mathbf{z}^{(l)}=\phi^{(l)}\left(\mathbf{W}^{(l)}\mathbf{z}^{(l-1)}+\mathbf{b}^{(l)}\right),\quad l=1,2,\dots,L-1, \tag{2}\] \[\mathbf{\hat{y}}=\bar{\phi}^{(L)}\left(\mathbf{W}^{(L)}\mathbf{z}^{(L-1)}+\mathbf{b}^{(L)}\right), \tag{3}\]
where \(\mathbf{W}\), \(L\), and \(\bar{\phi}\) represent the weight matrix, the total number of layers, and the final transformation, e.g., SoftMax, respectively. SoftMax rescales the output values to lie between \(0\) and \(1\).
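For concreteness, Eqs. (1)-(3) translate into the following minimal PyTorch sketch; the ReLU activation and the layer sizes are illustrative assumptions rather than choices prescribed by this paper.

```python
import torch

def forward(x, weights, biases):
    # Eq. (1): the input layer performs no computation.
    z = x
    # Eq. (2): each hidden layer computes phi(W z + b); phi assumed ReLU.
    for W, b in zip(weights[:-1], biases[:-1]):
        z = torch.relu(W @ z + b)
    # Eq. (3): the output layer applies SoftMax, rescaling to [0, 1].
    return torch.softmax(weights[-1] @ z + biases[-1], dim=0)

# Illustrative MLP with two hidden layers and random parameters.
sizes = [784, 128, 64, 10]
weights = [0.05 * torch.randn(o, i) for i, o in zip(sizes[:-1], sizes[1:])]
biases = [torch.zeros(o) for o in sizes[1:]]
y_hat = forward(torch.randn(784), weights, biases)  # entries sum to 1
```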
NNs can be categorized based on their layer types and arrangements within the network. One popular type is the Convolutional Neural Network (CNN), which incorporates convolutional and linear layers. Another type is the Multi-Layer Perceptron (MLP), which uses linear layers only. CNNs are particularly powerful and are commonly applied to tasks involving image, audio, and video data. Thus, our methodology is assessed using state-of-the-art (SOTA) CNN architectures.
Normalization techniques, such as batch normalization, are widely used to enhance the convergence speed and stability of the learning process. Batch normalization normalizes the activations of each neuron during training before applying two learnable parameters \(\beta\) and \(\gamma\) that scale and shift the normalized activations as follows:
\[\overline{\mathbf{z}}^{(l)}=\frac{\mathbf{W}^{(l)}\mathbf{z}^{(l-1)}+\mathbf{b}^{(l)}-\mu^{(l)}}{\sqrt{\sigma^{2(l)}+\epsilon}}\beta+\gamma, \tag{4}\]
where the batch mean and variance are denoted by \(\mu^{(l)}\) and \(\sigma^{2(l)}\), respectively, and \(\epsilon\) is a small constant added for numerical stability.
The NN training procedure consists of learning the parameters \(\theta\), which summarize all learnable parameters, given a training dataset \(\mathcal{D}\subset(\mathbf{x},\mathbf{y})\) with \(N\) training examples, by minimizing a task-specific loss function \(\mathcal{L}\):
\[\boldsymbol{\theta}^{\ast}=\operatorname*{arg\,min}_{\boldsymbol{\theta}}\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}_{\theta}(\mathbf{y}_{i},\mathbf{\hat{y}}_{i}). \tag{5}\]
### _Deep Learning Acceleration with Memristor-based Crossbars_
Memristive devices can be arranged into crossbar arrays, with a memristive device at each cross point, as depicted in Fig. 2. The weighted sum computation required for the inference stage of a NN can therefore be carried out directly in memory by leveraging Ohm's Law (V = IR) and Kirchhoff's Current Law in constant \(O(1)\) time, without any data movement between the processing element and the memory.
Due to the finite number of conductance states of memristors, the trained parameters \(\theta\) are first quantized to signed 8-bit precision using a post-training quantization approach, with negligible performance penalties. However, quantization-aware training should be performed for lower-bit precision quantization.
Afterwards, the quantized parameters \(\theta\) of the NN are mapped to the memristor-based crossbar arrays with an NVM technology-specific encoding. For emerging NVM technologies with two stable states, e.g., STT-MRAM, each bit of a parameter is encoded as a high conductance \(G_{on}\) or a low conductance \(G_{off}\); each cell in the crossbar array therefore represents a single bit (\(0\) or \(1\)). For multi-level NVM technologies, e.g., ReRAM or PCM with 128 resistance states, the sign can be represented with a single bit, \(G_{off}\) (0) for positive and \(G_{on}\) (1) for negative, while the magnitude (0 to 127) can be represented with multi-level cells. A look-up table can be used to map the magnitudes to the conductance values of the ReRAM cells.
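A rough sketch of this mapping is given below; the symmetric scaling factor and the helper names are our own simplifying assumptions, not the exact encoding pipeline of a particular chip.

```python
import numpy as np

def quantize_int8(w):
    """Post-training quantization of float weights to signed 8-bit."""
    scale = np.abs(w).max() / 127.0 or 1.0   # guard against all-zero w
    w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return w_q, scale

def encode_multilevel(w_q):
    """Multi-level encoding: one sign bit (G_off = 0 for positive,
    G_on = 1 for negative) plus a 0..127 magnitude per cell."""
    sign = (w_q < 0).astype(np.uint8)
    magnitude = np.abs(w_q).astype(np.uint8)
    return sign, magnitude

w_q, scale = quantize_int8(np.random.randn(64, 128).astype(np.float32))
sign, mag = encode_multilevel(w_q)
# For two-state cells (e.g. STT-MRAM), all 8 bits of each weight would
# instead be stored in separate cells: np.unpackbits(w_q.view(np.uint8)).
```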
The input vector \(x\) is converted to continuous voltages and then streamed into the word-lines of the crossbar array for inference. Multiple word-lines of the crossbar are activated simultaneously for parallel computation, and the current that flows into each bit-line of the crossbar represents the result of the weighted sum operation. Ultimately, an Analog-to-Digital Converter (ADC) circuit digitizes the sensed currents, after which the remaining computations, e.g., bias addition, batch normalization, and non-linear activation, are performed in the digital domain. Note that ADC circuits are also subject to variations. However, we consider them to be robust
Fig. 2: Mapping a layer of NN into a memristor-based crossbar array.
to variations and do not add any noise to the NN activation.
Note that popular CNN topologies use skip connections, which allow information to bypass a few intermediate layers and add to the output of another layer. It can be implemented in a memristive crossbar NN by routing output signals through specific crossbars and adding the resulting outputs using digital summing circuits. However, since the NN computations are done sequentially, signals for the skip connections can be stored in the buffer memory.
### _Related Works_
In the literature, several testing approaches have been proposed for testing memristor-based crossbar arrays. March-based algorithms serially program and read the memristor cells under test to a specified conductance level to identify faults [18]. However, March-based algorithms are not practical for memristor-mapped NN applications due to their large number of memory cells, which results in extended test times. Additionally, testing multi-level cells requires setting the memristive cells to all possible levels, further increasing test time.
An alternative approach for fault detection involves analyzing the deviation in the inference accuracy of either original training data or synthetic testing data in the presence of faults [19, 20, 21]. Synthetic testing data can be generated using adversarial examples [20], watermarking the training data, and re-training the NN on the testing data to create a backdoor [19]. Although such methods efficiently detect deviations, they necessitate a large amount of testing data, on-chip storage (depending on the availability of on-chip retraining data), and an invasive test generation process. The performance of back-dooring when common data augmentation techniques, such as corner padding and center-cropping, are used is unclear, since data augmentation can either partially or completely remove the watermarks. Additionally, watermarking relies on the translation invariance feature of CNN to achieve high accuracy on the test dataset and similar performance on the original task. However, since MLPs are not translation invariant, this method may not be suitable for MLPs. The work in [21] proposed back-propagating to the input image and using the gradient of the input image as standalone testing data or combining it with training data as a perturbation, similar to [20] which employs the fast gradient sign method (FGSM). However, their "pause-and-test" method leads to long periods of system downtime. The work presented in [22] proposes monitoring the dynamic power consumption of crossbar arrays to detect faults. To achieve this, an adder tree is implemented to continuously monitor the dynamic power consumption, which adds hardware overhead.
A compact functional testing method has been studied in the work by [23]. Their method can achieve high testing coverage with a sufficiently large number of testing vectors, typically ranging from 16 to 64. However, it has been observed that their method does not work well when the number of testing vectors is small, i.e., less than 10. Furthermore, their method relies on access to the training dataset.
In contrast, our one-shot vector generation and testing method **a)** does not require access to training data, **b)** is non-invasive, **c)** is generalizable across different classes of NNs, **d)** requires only a single test query, **e)** needs negligible storage, power, and testing time, and **f)** can achieve high fault coverage.
## III One-shot Testing of Memristive NNs
### _Motivation and hypothesis of our approach_
As discussed earlier, in the presence of faults or variations, the parameters of memristor-mapped NNs deviate from the expected (trained) parameters, resulting in degraded performance. According to Equation 1, non-ideal parameters directly affect the weighted sum and, in turn, the activation of a layer. Since the activation of a layer becomes the input of the following layer, the cascading effect of non-ideal parameters ultimately propagates to the overall output \(\hat{y}\) of the NN. Therefore, the distribution of \(\hat{y}\) is also expected to change, as shown in Fig. 3(c).
We introduce a novel _one-shot testing_ method based on the observation and hypothesis that faults and variations in memristive NN parameters influence the distribution of \(\hat{y}\). Our approach aims to detect distribution shifts in the model output using a single test vector that is specifically designed to produce two distinct output distributions for faulty and fault-free cases. Consequently, faults and variations can be easily detected by evaluating the output distribution of a memristive NN after applying the one-shot test vector.
However, there are several challenges associated with this approach. The primary challenges include standardizing the output distribution using one test vector, estimating the change in distribution for pre-trained models, and designing an effective one-shot test vector for various model architectures. We discuss them in the following section with their respective solutions.
### _Proposed deviation detection method_
Since the expected distribution of a model is unknown and likely varies from one model to another and from one test vector to another (as shown in the top half of Fig 3(a) and (b)), it is difficult to estimate the change in distribution for a pre-trained model. Therefore, we propose standardizing the output distribution of each _model under test_ (MUT) to a unit Gaussian distribution, \(\hat{y}\sim\mathcal{N}(\mu\approx 0,\,\sigma^{2}\approx 1)\,\), which means zero mean \(\mu\approx 0\) and unit variance \(\sigma^{2}\approx 1\). Standardizing the output distribution is crucial for the one-shot testing method, as it ensures a consistent and comparable metric across various models and test vectors. Also, it reduces the likelihood of false-positive deviation detection and enhances the sensitivity to non-ideal parameters.
Let \(\hat{y}\sim\mathcal{\tilde{N}}(\mu,\,\sigma^{2})\) be the output distribution of a memristive NN model. Faults and variations in the parameters of memristive NNs can be detected by evaluating the Kullback-Leibler (KL) divergence between the expected output distribution \(\mathcal{N}\) and the output distribution of memristive NNs
\[D_{\mathrm{KL}}(\hat{\mathcal{N}}\parallel\mathcal{N})=\sum_{i=1}^{n}\hat{ \mathcal{N}}(i)\log\frac{\hat{\mathcal{N}}(i)}{\mathcal{N}(i)} \tag{6}\]
which can be simplified for two normal distributions as:
\[D_{\mathrm{KL}}(\hat{\mathcal{N}}\parallel\mathcal{N})=\log\frac{\sigma_{ \mathcal{N}}}{\sigma_{\hat{\mathcal{N}}}}+\frac{\sigma_{\hat{\mathcal{N}}}^{2} +(\mu_{\hat{\mathcal{N}}}-\mu_{\mathcal{N}})^{2}}{2\sigma_{\mathcal{N}}^{2}}- \frac{1}{2}. \tag{7}\]
We assumed that the distributions \(\hat{\mathcal{N}}\) and \(\mathcal{N}\) were discrete, since we quantized the parameters of the memristive NN. Here, we denote the mean and standard deviation of the output distribution \(\hat{\mathcal{N}}\) of the memristive NN as \(\mu_{\hat{\mathcal{N}}}\) and \(\sigma_{\hat{\mathcal{N}}}\), respectively. Similarly, \(\mu_{\mathcal{N}}\) and \(\sigma_{\mathcal{N}}\) represent the mean and standard deviation of the expected output distribution \(\mathcal{N}\). Since \(\mu_{\mathcal{N}}\) and \(\sigma_{\mathcal{N}}\) are defined as 0 and 1, respectively, the equation can be further simplified as:
\[D_{\mathrm{KL}}(\hat{\mathcal{N}}\parallel\mathcal{N})=\log\frac{1}{\sigma_{ \hat{\mathcal{N}}}}+\frac{\sigma_{\hat{\mathcal{N}}}^{2}+\mu_{\hat{\mathcal{N }}}^{2}}{2}-\frac{1}{2}. \tag{8}\]
The KL divergence measures how one probability distribution differs from another. A larger value of \(D_{\mathrm{KL}}(\hat{\mathcal{N}}\parallel\mathcal{N})\) indicates that the output distribution of the memristive NN is different from the expected distribution due to non-idealities in the parameters. Specifically, a threshold \(t\) can be defined, where \(D_{\mathrm{KL}}(\hat{\mathcal{N}}\parallel\mathcal{N})\geq t\) indicates non-ideal parameters in the memristive NN. The specific choice of \(t\) depends on several factors, which will be discussed later.
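A direct numerical reading of Eq. (8) is given below: the empirical mean and standard deviation of the output on the one-shot test vector determine the divergence from the unit Gaussian, and a threshold \(t\) flags a fault. This is a minimal sketch; the threshold value is a placeholder, not a tuned constant from this work.

```python
import torch

def kl_to_unit_gaussian(y_hat):
    """Eq. (8): KL divergence between a Gaussian fitted to the output of
    the model under test and the expected unit Gaussian N(0, 1)."""
    mu = y_hat.mean()
    sigma = y_hat.std(unbiased=False)
    return torch.log(1.0 / sigma) + (sigma**2 + mu**2) / 2.0 - 0.5

def one_shot_test(model, test_vector, t=0.05):  # t: placeholder threshold
    y_hat = model(test_vector).flatten()        # single inference query
    return kl_to_unit_gaussian(y_hat).item() >= t  # True -> fault detected
```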
Please note that other distance functions, such as the Jensen-Shannon divergence (which is a symmetrized version of the KL divergence), can also be used. Alternatively, for simplicity, evaluating only the \(\mu_{\hat{\mathcal{N}}}\) and \(\sigma_{\hat{\mathcal{N}}}\) values may be sufficient for fault and variation detection.
The test vector (stored in the hardware) can be applied periodically during online operation, and the deviation from the expected distribution can be used as an indicator for faults and variation in the memristive NN. The overall flow diagram of our one-shot testing approach is depicted in Fig. 1.
### _Proposed test vector generation method_
In order to make the proposed one-shot testing method possible, the distribution of \(\hat{y}\) should not only be standardized but also done with a single test vector, i.e., _one-shot_. We generate a special test vector for this purpose with a specific learning objective. However, there are several challenges associated with this.
#### III-C1 Learning objective
Since our learning objective is to produce a standard Gaussian distribution for \(\hat{y}\), several loss functions can be designed to encourage the \(\hat{y}\) distribution \(\hat{\mathcal{N}}\) to have a mean of \(0\) and a standard deviation of \(1\). For example:
\[\operatorname*{arg\,min}_{\mu_{\mathcal{N}}\to 0,\sigma_{\mathcal{N}}\to 1} \frac{1}{N}\sum_{i=1}^{N}\hat{y}_{i}\log\frac{\hat{y}_{i}}{y_{i}^{\prime}}, \tag{9}\]
Fig. 3: a) Change in output distribution depicted for different NN models but on the same test vector, b) comparison of the change in the output distribution for two different test vectors but on the same NN model (ResNet-18). While the conventional method reveals a change in output distribution across different models and test vectors, our approach ensures standardized output distributions (distributions overlap) irrespective of models or test vectors. c) We compared the relative change in output distribution for the same noise level between the proposed and conventional test vectors. The output distribution is more sensitive to noise for our proposed one-shot test vector. In the conventional method, the test vectors are randomly sampled from the ImageNet validation dataset.
minimizes pointwise KL-divergence loss between NN output \(\hat{y}\) and ground truth value \(y^{\prime}\). Alternatively,
\[\operatorname*{arg\,min}_{\mu_{\hat{\mathcal{N}}}\to 0,\sigma_{\hat{ \mathcal{N}}}\to 1}(\mu_{\hat{\mathcal{N}}})^{2}+(1-\sigma_{\hat{\mathcal{N}}})^{2}, \tag{10}\]
encourages \(\mu_{\hat{\mathcal{N}}}\) and \(\sigma_{\hat{\mathcal{N}}}\) to be close to 0 and 1, respectively. A regression loss, such as
\[\operatorname*{arg\,min}_{\mu_{\hat{\mathcal{N}}}\to 0,\sigma_{\hat{ \mathcal{N}}}\to 1}\frac{1}{N}\sum_{i=1}^{N}(\hat{y}_{i}-y^{\prime}_{i})^{2}, \tag{11}\]
can also be used. Here, \(N\) denotes the number of output classes in the NN.
The ground truth \(y^{\prime}\) for the training can be defined as
\[y^{\prime}=\frac{\hat{y}-\mu_{\hat{\mathcal{N}}}}{\sigma_{\hat{ \mathcal{N}}}}, \tag{12}\]
or can be sampled from a unit Gaussian distribution. The number of samples should equal the number of output classes of the NN model. Our learning objective can thus be considered a form of supervised learning.
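For instance, the moment-matching objective of Eq. (10) and the standardized ground truth of Eq. (12) can be written as the following differentiable helpers (a sketch; the exact loss variant used in an experiment may differ):

```python
import torch

def moment_loss(y_hat):
    """Eq. (10): drive the output distribution toward mu = 0, sigma = 1."""
    mu = y_hat.mean()
    sigma = y_hat.std(unbiased=False)
    return mu**2 + (1.0 - sigma)**2

def standardized_target(y_hat):
    """Eq. (12): ground truth y' for the KL/regression losses (Eqs. 9, 11)."""
    return (y_hat - y_hat.mean()) / y_hat.std(unbiased=False)
```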
The proposed one-shot test vector produces a standardized output distribution across different models and generated test vectors, as shown in the bottom half of Fig. 3(a) and (b). Additionally, the relative deviation of the output distribution for the one-shot test vector is significantly higher, as demonstrated in Fig. 3(c). As a result, our one-shot test vector is considerably more sensitive to non-ideal parameters.
#### III-C2 Initialization
Let \(\bar{x}\) be the learnable one-shot test vector with shape [H, W, C] (assuming a colored image) that is optimized based on one of the loss functions (9)-(11). Here, H, W, and C denote height, width, and number of channels, respectively. Initializing \(\bar{x}\), i.e., assigning initial values to each of its pixels before training, is crucial for proper learning: the convergence speed and the final loss value greatly depend on the initialization. Appropriate initialization is especially important for deeper networks, as the gradient is propagated all the way back to the input.
We initialize \(\bar{x}\) element-wise with random values drawn from a unit Gaussian distribution as follows:
\[\bar{x}_{i,j,k}\sim\mathcal{N}(0,1),\qquad i=1,\dots,H,\;\;j=1,\dots,W,\;\;k=1,\dots,C. \tag{13}\]
Here, i, j, and k are the indices of the elements in the input tensor. Element-wise initialization enables fine-grained control over the initialization process and is commonly used in deep learning.
Alternatively, initialization from out-of-distribution data, i.e., data that does not belong to the training set, also works well. This means that stock images from the internet can be used for initialization, so access to training data is still not necessary. Fig. 4 shows some examples of generated test vectors together with their initial images. Our optimization procedure makes only minute adjustments to the stock photos, so the optimized vectors are visually indistinguishable from the originals.
The overall algorithm for the proposed one-shot test vector generation is summarized in Algorithm 1 (a runnable PyTorch sketch of it is given at the end of this section). To accelerate the learning process, we propose optimizing the test vector with an exponentially decaying learning rate, reduced every \(Q\) iterations.
### _Relevance of Conventional Normalization Methods for Standardizing the Output Distribution_
As previously mentioned, normalization methods such as batch normalization standardize neuron activations before applying affine transformations. However, they are not suitable for our proposed one-shot testing method due to the following reasons:
**a)** Conventional normalization methods are typically applied to the intermediate activations (see Equation 4) of a NN and are not designed to directly standardize the output distribution, which is the primary goal of our one-shot testing method.
Fig. 4: Some examples of the proposed one-shot test vector for the DenseNet-121 topology. To the naked eye, optimized stock images appear identical to their original images. Nevertheless, they differ marginally.
**Algorithm 1**: One-shot test vector generation using gradient descent with an exponential decaying learning rate
```
0: Pre-trained network \(\mathcal{F}\), loss function \(\mathcal{L}(\hat{y},y^{\prime})\), initial learning rate \(\alpha_{0}\), number of iterations \(K\), Decay rate \(Q\), and shape of the test vector \(\bar{x}\) [H, W, C].
0: One-shot test vector \(\bar{x}\)
1: Initialize \(\bar{x}\) element-wise with random values from a unit Gaussian distribution
2:for\(k=1\dots K\)do
3: Perform forward pass through \(\mathcal{F}\) with input \(\bar{x}\) to obtain output \(\hat{y}\)
4: Compute loss \(\mathcal{L}(\hat{y},y^{\prime})\)
5: Calculate gradient \(\nabla\mathcal{L}\) with respect to \(\bar{x}\)
6: Compute the current learning rate \(\alpha_{k}\): \[\alpha_{k}=\begin{cases}\alpha_{k-1}&\text{if }k\bmod Q\neq 0\\ \alpha_{k-1}/10&\text{if }k\bmod Q=0\end{cases}\]
7: Update \(\bar{x}\) using gradient descent with the current learning rate: \(\bar{x}\leftarrow\bar{x}-\alpha_{k}\nabla\mathcal{L}\)
8:endfor
```
**b)** Batch normalization, as an example, requires multiple test vectors to estimate the mean and variance of the distribution, conflicting with the one-shot nature of our method that relies on a single test vector for output distribution standardization. Although other normalization techniques, such as group normalization [24], have been proposed for small batch sizes, they are designed for specific tasks like sequence-to-sequence learning, recurrent neural networks (RNNs), or style transfer, and may not be directly applicable or easily adaptable to all deep learning tasks.
**c)** Finally, normalizing the model output may necessitate retraining the model using the entire training dataset, which could be computationally expensive, require access to the training data, and potentially negatively impact the model's performance, as it may not generalize well to unseen data.
Thus, our unique approach to standardizing the output distribution aligns well with the requirements and objectives of our one-shot testing method.
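For completeness, a compact PyTorch rendering of Algorithm 1 is sketched below; the loss follows Eq. (10), and the test vector shape, initial learning rate, and iteration counts are placeholder assumptions rather than the exact values used in our experiments.

```python
import torch

def generate_one_shot_vector(model, shape=(1, 3, 224, 224),
                             alpha0=0.1, K=1000, Q=250):
    """Algorithm 1: optimize a single test vector so that the fault-free
    model output approaches a unit Gaussian."""
    model.eval()
    x_bar = torch.randn(shape, requires_grad=True)  # Eq. (13) initialization
    alpha = alpha0
    for k in range(1, K + 1):
        y_hat = model(x_bar).flatten()
        loss = y_hat.mean()**2 + (1.0 - y_hat.std(unbiased=False))**2
        grad, = torch.autograd.grad(loss, x_bar)
        if k % Q == 0:
            alpha /= 10.0                           # exponential decay step
        with torch.no_grad():
            x_bar -= alpha * grad                   # gradient descent update
    return x_bar.detach()
```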
## IV Simulation Results
### _Fault Modelling and Injection Framework_
#### IV-A1 Modelling Conductance Variations
Memristive technology and external environmental conditions influence conductance variation during online operations and in the manufacturing process. In this paper, we employ the variation model proposed in [23] which considers both device-to-device manufacturing variations (spatial fluctuation) and thermal variations (temporal fluctuation). This model injects multiplicative and additive Gaussian noise into the weight matrix of all layers as random noise, with a noise scale of \(\eta_{0}\) used to control the severity of the noise. For each fault run, a different random sample is taken from the variation model.
#### IV-A2 Modelling Online and Manufacturing Faults
We consider two different kinds of fault models depending on the mapping employed: bit-wise and level-wise. As mentioned in Section II, NN parameters can be encoded bit-wise, with eight memristive cells representing a single parameter. Our bit-wise fault model targets this kind of parameter encoding and can be expressed as:
\[W_{flip}=f(\mathcal{P}_{flip},W_{orig}) \tag{14}\]
Here, \(\mathcal{P}_{flip}\) and \(f(\cdot)\) represent the percentage of injected bit-flip faults and the fault model function, respectively. Specifically, the fault model \(f(\cdot)\) randomly samples \(\mathcal{P}_{flip}\)% of the weight bits in each layer and flips them from \(1\) to \(0\) and vice versa. For parameter mapping with multi-level memristive cells, however, the (level-wise) fault model \(f(\cdot)\) randomly sets the affected weights to a value between \(-127\) and \(127\).
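Both fault models can be mimicked in a few lines; the sampling scheme below is a minimal sketch of Eq. (14), assuming signed 8-bit weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_bit_flips(w_q, p_flip):
    """Bit-wise fault model f(.): flip p_flip percent of all weight bits."""
    bits = np.unpackbits(w_q.view(np.uint8))        # 8 bits per weight
    idx = rng.choice(bits.size, int(bits.size * p_flip / 100), replace=False)
    bits[idx] ^= 1                                  # 0 <-> 1
    return np.packbits(bits).view(np.int8).reshape(w_q.shape)

def inject_level_faults(w_q, p_flip):
    """Level-wise fault model: set faulty multi-level weights to a random
    level between -127 and 127."""
    w = w_q.copy().ravel()
    idx = rng.choice(w.size, int(w.size * p_flip / 100), replace=False)
    w[idx] = rng.integers(-127, 128, idx.size)
    return w.reshape(w_q.shape)
```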
The ultimate effect of permanent faults is flipping the affected memristive cell from its desired level into another one. Therefore, the fault model \(f(\cdot)\) considers read/write disturbance as well as permanent faults, including stuck-at faults.
### _Simulation Setup_
In this paper, we abstract circuit-level details and evaluate our proposed one-shot approach using PyTorch-based simulation. We target _hard-to-detect_ deviations in memristive crossbars. As the name suggests, detecting these deviations is hard, as they cause only subtle changes in the output distribution and, in turn, in the inference accuracy. Conversely, we have found that a large change in accuracy correlates with a large relative shift in the output distribution, which is comparatively easy to detect with our approach.
Furthermore, instead of a simpler dataset like MNIST, we evaluated our method on larger pre-trained topologies, with up to 201 layers, trained on the more challenging ImageNet dataset [25], a large-scale image recognition dataset with 1000 classes, approximately 1.3 million training data points, and 50,000 validation data points. Additionally, we tested our approach on popular semantic segmentation topologies trained on both a real-world brain MRI dataset and Microsoft's COCO benchmark dataset [26, 27]. Semantic segmentation is considerably more challenging than image classification, since it involves assigning a label to every individual pixel of an image. Table II summarizes all the evaluated pre-trained models, their accuracy, and their number of parameters. All the pre-trained models are accessible through PyTorch Hub.
For the fault coverage analysis, we performed Monte Carlo simulations of the effect of per-chip and online variations, as well as of various faults modeled as bit-flips. Specifically, \(1000\) memristive crossbar instances are evaluated for each noise level of the variations and for each fault percentage.
We report fault coverage as the ratio between detected faults (\(D_{\mathrm{KL}}(\hat{\mathcal{N}}\parallel\mathcal{N})\geq t\)) and the overall number of fault runs (\(\mathcal{M}\)), and can be described as
\[\text{fault coverage}=\frac{\text{\# of }D_{\mathrm{KL}}(\hat{\mathcal{N}} \parallel\mathcal{N})\geq t}{\mathcal{M}}\times 100. \tag{15}\]
Note that, although we inject variations and faults into all parameters mapped to memristive cells, our fault coverage does not necessarily encompass all possible faults that may occur.
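In code, the Monte Carlo estimate of Eq. (15) reduces to a counting loop; here model_factory is a hypothetical helper returning a freshly perturbed crossbar model, and kl_to_unit_gaussian refers to the sketch in Section III.

```python
def fault_coverage(model_factory, test_vector, t, M=1000):
    """Eq. (15): percentage of M faulty instances detected by D_KL >= t."""
    detected = 0
    for _ in range(M):
        faulty_model = model_factory()   # fresh random faults/variations
        y_hat = faulty_model(test_vector).flatten()
        if kl_to_unit_gaussian(y_hat).item() >= t:
            detected += 1
    return 100.0 * detected / M
```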
### _Detecting Variations in a One-Shot_
For classification tasks using the ImageNet dataset, Table III evaluates the fault coverage achieved by the proposed one-shot method under multiplicative and additive variations. The six state-of-the-art (SOTA) models consistently achieve \(100\%\) fault coverage across various noise scales (\(\eta_{0}\)). These results indicate the robustness of the one-shot method across diverse noise levels.
Similarly, for semantic segmentation tasks, as shown in Table IV, the proposed one-shot method on multiplicative and additive variations with a range of noise scales (\(\eta_{0}\)) consistently achieves 100% fault coverage. This further underscores the robustness of our one-shot method across different tasks.
### _Detecting Faults in a One-Shot_
For ImageNet classification on various SOTA topologies, Table VI demonstrates a similarly high level of fault coverage under both bit-flip and level-flip fault conditions. For each model, as the fault rate increases, the percentage of fault coverage generally improves. At higher fault rates, such as 0.05% and 0.1%, most of the models achieved 100% fault coverage. At lower fault rates, the shift in the output distribution is very low, resulting in a few false-negative cases. However, by reducing the threshold to a value closer to the KL-divergence value on a fault-free model, the number of false-negative cases can be reduced (see Table V). Nevertheless, our results indicate the resilience of our one-shot approach to various types of faults at different rates.
Similarly, our proposed method can achieve a high fault-coverage on both bit-flip and level-flip fault conditions for semantic segmentation tasks on two state-of-the-art (SOTA) topologies, as demonstrated in Table VII. The trend in fault-coverage percentage for each model is similar to that of the models used for ImageNet classification. We also found that lowering the threshold can have a similar effect on fault-coverage, as observed in the ImageNet classification models.
### _Comparison with State of the Art and Overhead Analysis_
Our proposed method is compared against the related work that uses the functional test generation method and focuses on test pattern compaction. With only one test vector and test query, the proposed one-shot testing method outperforms existing methods [19, 20, 21], and [23] on all metrics listed in Table VIII. Therefore, the proposed method requires significantly fewer test vectors and queries compared to other methods. Furthermore, the proposed approach consistently achieves 100% fault coverage, outperforming methods [19, 20, 21] which range from 76% to 99.27% coverage. Additionally, the proposed method is the most memory-efficient, requiring only 0.012288 MB, which is much lower than the other methods, regardless of whether re-training data is stored in hardware or not. Moreover, our method does not rely on storing re-training data in hardware to reduce memory consumption, unlike methods proposed by [19], and [23].
The test application time (latency) and test energy are directly proportional to the number of test vectors used for testing. For example, the testing method [20] requires \(1024\) test vectors, therefore, their method requires \(1024\times\) more matrix-vector operations and power consumption. In our comparisons, we assume the hardware implementation, NN topology, and NVM technology are the same.
The analysis presented in Table VIII is based on the numbers reported in related works. To calculate memory consumption, we utilized the bit-width reported in [19] for the images and test labels. Note that our approach does not require storing any labels.
### _Discussion and Future Works_
The proposed one-shot testing method determines the output distribution by calculating the mean and standard deviation. To avoid biased estimation of mean and standard deviation, it is important to have a sufficiently large number of output classes, such as 20 or more. For cases with fewer output classes like 2 or 5, alternative statistical methods like Bayesian approaches could be considered, but they are beyond the scope of this paper.
Additionally, the proposed one-shot testing method uses full precision (32-bit floating point) for the test vector, allowing high precision in the gradient-descent optimization process. While quantizing the one-shot test vector can provide benefits such as reduced memory requirements and computational complexity, the limited representation ability and the quantization error may
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{Classification} \\ \hline Model & Inference Acc. & Parameters & Layers & Dataset \\ \hline \hline ResNet-18 [28] & 69.76\% & 11.7\(\times 10^{6}\) & 18 & ImageNet [25] \\ \hline ResNet-50 [28] & 76.13\% & 25.6\(\times 10^{6}\) & 50 & ImageNet [25] \\ \hline ResNet-101 [28] & 81.89\% & 44.5\(\times 10^{6}\) & 101 & ImageNet [25] \\ \hline DenseNet-121 [29] & 74.43\% & 8\(\times 10^{6}\) & 121 & ImageNet [25] \\ \hline DenseNet-201 [29] & 76.89\% & 20\(\times 10^{6}\) & 201 & ImageNet [25] \\ \hline MobileNet-V2 [30] & 71.87\% & 3.5\(\times 10^{6}\) & 52 & ImageNet [25] \\ \hline \multicolumn{5}{|c|}{Semantic Segmentation} \\ \hline Model & Pixelwise Acc. & Parameters & Layers & Dataset \\ \hline \hline U-Net [31] & 98.75\% & 7.76\(\times 10^{6}\) & 23 & Brain MRI [26] \\ \hline DeepLab-V3 [32] & 91.2\% & 11.03\(\times 10^{6}\) & 72 & COCO [27] \\ \hline \end{tabular}
\end{table} TABLE II: Evaluated (pre-trained) models for classification and semantic segmentation tasks, with their inference accuracy, number of parameters, number of layers, and training dataset.
impact the reliability of the output distribution estimations. As a result, a full-precision test vector is preferred.
## V Conclusion
In this work, we have introduced a one-shot testing method, and a corresponding test generation method, for deep learning hardware accelerators based on memristor crossbars, requiring only a single test vector. Our method builds on the hypothesis that memristive non-idealities correlate with a change in the output distribution, and our testing procedure detects this distribution shift with a single test vector. The proposed approach demonstrates superior performance in fault coverage, memory storage overhead, and the number of test queries required, highlighting its effectiveness and efficiency compared to existing methods. Our work therefore allows significantly faster detection of faults and variations at a negligible overhead.
|
2307.12111 | Noise tailoring, noise annealing and external noise injection strategies
in memristive Hopfield neural networks | The commercial introduction of a novel electronic device is often preceded by
a lengthy material optimization phase devoted to the suppression of device
noise as much as possible. The emergence of novel computing architectures,
however, triggers a paradigm change in noise engineering, demonstrating that a
non-suppressed, but properly tailored noise can be harvested as a computational
resource in probabilistic computing schemes. Such strategy was recently
realized on the hardware level in memristive Hopfield neural networks
delivering fast and highly energy efficient optimization performance. Inspired
by these achievements we perform a thorough analysis of simulated memristive
Hopfield neural networks relying on realistic noise characteristics acquired on
various memristive devices. These characteristics highlight the possibility of
orders of magnitude variations in the noise level depending on the material
choice as well as on the resistance state (and the corresponding active region
volume) of the devices. Our simulations separate the effects of various device
non-idealities on the operation of the Hopfield neural network by investigating
the role of the programming accuracy, as well as the noise type and noise
amplitude of the ON and OFF states. Relying on these results we propose
optimized noise tailoring, noise annealing, and external noise injection
strategies. | János Gergő Fehérvári, Zoltán Balogh, Tímea Nóra Török, András Halbritter | 2023-07-22T15:44:09Z | http://arxiv.org/abs/2307.12111v1 | Noise tailoring, noise annealing and external noise injection strategies in memristive Hopfield neural networks
###### Abstract
The commercial introduction of a novel electronic device is often preceded by a lengthy material optimization phase devoted to the suppression of device noise as much as possible. The emergence of novel computing architectures, however, triggers a paradigm change in noise engineering, demonstrating that a non-suppressed, but properly tailored noise can be harvested as a computational resource in probabilistic computing schemes. Such strategy was recently realized on the hardware level in memristive Hopfield neural networks delivering fast and highly energy efficient optimization performance. Inspired by these achievements we perform a thorough analysis of simulated memristive Hopfield neural networks relying on realistic noise characteristics acquired on various memristive devices. These characteristics highlight the possibility of orders of magnitude variations in the noise level depending on the material choice as well as on the resistance state (and the corresponding active region volume) of the devices. Our simulations separate the effects of various device non-idealities on the operation of the Hopfield neural network by investigating the role of the programming accuracy, as well as the noise type and noise amplitude of the ON and OFF states. Relying on these results we propose optimized noise tailoring, noise annealing, and external noise injection strategies.
## I Introduction
Memristive crossbar arrays are promising candidates as the hardware components of artificial neural networks,[1; 2; 3; 4; 5] including advanced applications in feed-forward neural networks,[6; 7; 8; 9] convolutional layers,[10; 11] 3D architectures,[12; 13; 14; 15] unsupervised neural networks,[16; 17; 18] as well as recurrent neural networks.[19; 11] In these applications the tunable conductance of a memristor unit encodes a synaptic weight in the network, and once the properly trained weights are programmed to each memristor cell, the memristive crossbar array is able to perform the vector-matrix multiplication, i.e. the key mathematical operation of the network inference in a single time-step.[1; 2; 6] This equips the artificial neural networks with a highly energy efficient hardware component compared to software solutions, where the evaluation of the input vector at a layer with \(N\) neurons requires \(N^{2}\) multiplication operations. In most of the neural network applications the highest resolution of the memristive synaptic weights is desirable,[20] and therefore the memristor non-idealities, like their programming inaccuracy or their stochastic noise properties should be eliminated as much as possible. A special class of the memristive networks, however, relies on probabilistic optimization,[21; 22; 23; 24] where it is well known that tunable stochasticity, such as customizable device noise, is not a disturbing factor, but a useful computational resource. Similar strategy was recently experimentally realized in \(60\times 60\) memristive Hopfield neural networks (HNNs),[24] demonstrating the efficient solution of max-cut graph segmentation problems, and delivering over four orders of magnitude higher solution throughput per power consumption than digital or quantum annealing approaches.
Inspired by these achievements, we perform a thorough analysis of simulated memristive Hopfield neural networks putting a key emphasis on the effect of the device noise on the network operation. To this end, first the realistic noise characteristics of memristive devices[25; 26; 27; 28] are discussed (Sec. III), and a general noise model, describing the conductance-dependent noise characteristics in the filamentary and broken filamentary regimes is proposed. Afterwards, various benchmark max-cut problems are solved by simulated memristive HNNs (Sec. IV), relying on the proposed noise model. These simulations demonstrate rather well-defined relative noise values, at which the network operation is optimized, regardless of the network size and the type of the noise spectrum. We also demonstrate a simplified, easily implementable double-step noise annealing scheme (Sec. IV.3), which further enhances the convergence probability of the network. These optimized noise levels, however, are at the top border of the experimentally observed noise amplitudes, which raises the need for external injection of stochasticity (Sec. IV.6). For the latter, two strategies are tested, including external noise injection and the introduction of chaotic behavior through the self-feedback of the neurons in the network.[29] Finally, the effect of further device-non-idealities are tested separately (Sec. IV.5), analyzing the effect of the programming inaccuracy, and the finite OFF-state conductance. The presentation of all these results is preceded by the brief overview of Hopfield neural networks, and their implementation by memristive crossbar arrays (Sec. II).
## II Memristive Hopfield neural networks
### Hopfield Networks
The Hopfield Neural Network (HNN), introduced by John Hopfield,[30] was shown to be capable of solving complex problems by Hopfield and Tank[31] and has been used for optimization ever since.[32] A Hopfield network's main allure is its simplicity and its power to provide reasonable solutions for high-complexity problems. The network consists of fully connected binary neurons (without self-connections); the \(\underline{W}\) synaptic weight matrix encodes the optimization problem, and the \(\underline{x}\) state of the neurons represents the possible states of the system, including the desired solution(s) (see Fig. 1A). The network is operated in an iterative fashion: at each time step \(t\) the activation
\[\underline{a}^{(t)}=\underline{W}\cdot\underline{x}^{(t)} \tag{1}\]
is calculated. Then an index \(j\) is picked at random and a single neuron is updated according to the rule
\[x_{j}^{(t+1)}=\begin{cases}+1&\text{if }a_{j}^{(t)}\geq\theta_{j},\\ -1&\text{if }a_{j}^{(t)}<\theta_{j},\end{cases} \tag{2}\]
where \(\theta_{j}\) is a component of a predefined threshold vector \(\underline{\theta}\). It can be shown that this simple update rule never increases the energy function
\[E^{(t)}\left(\underline{x}^{(t)};\underline{W},\underline{\theta}\right)=- \frac{1}{2}\left(\underline{x}^{(t)}\right)^{\text{T}}\underline{W}\ \underline{x}^{(t)}+\underline{\theta}\ \underline{x}^{(t)} \tag{3}\]
in every iteration step (\(E^{(t+1)}\leq E^{(t)}\)). Due to this property, Hopfield neural networks are widely used to solve complex problems that can be encoded in the form of the effective energy function in Eq. 3. This can be applied in an associative memory scheme,[30] where \(\underline{W}\) and \(\underline{\theta}\) are set such that each local minimum of \(E(\underline{x})\) encodes a predefined pattern (e.g. images), and the update rule drives the system from an arbitrary initial state \(\underline{x}^{(0)}\) to the closest local minimum, i.e. the network finds the predefined pattern most similar to the initial state. Alternatively, the Hopfield neural network may find the global solution of a complex problem, like the max-cut graph segmentation problem.
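As an illustration, the update rule of Eqs. (1)-(2) and the energy of Eq. (3) can be simulated in a few lines of NumPy; the seed and iteration count below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def hopfield_energy(x, W, theta):
    """Eq. (3): E = -1/2 x^T W x + theta . x"""
    return -0.5 * x @ W @ x + theta @ x

def run_hnn(W, theta, n_iter=10000):
    n = W.shape[0]
    x = rng.choice([-1, 1], size=n)          # random initial state
    for _ in range(n_iter):
        j = rng.integers(n)                  # pick one neuron at random
        a_j = W[j] @ x                       # Eq. (1), single component
        x[j] = 1 if a_j >= theta[j] else -1  # Eq. (2)
    return x
```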
### Max-cut problem
The NP-hard max-cut problem is formulated for an arbitrary undirected graph \(G(V,E)\) with \(V\) vertices and \(E\) edges. The goal is to find a partitioning of \(V\) into two disjoint sets \(S\) and \(K\) so that the total weight of the crossing edges between the two sets is maximal (see Fig. 1B).[33] Though abstract at first sight, many practical problems can be mapped to the max-cut problem, such as the conflict-graph formulation of the layer assignment problem in very large scale integration (VLSI) design, where the position of functional blocks is optimized in a multilayer chip.[34] If a graph is given by its adjacency matrix \(\underline{A}\), a cut can be encoded by \(\underline{x}\) simply as
\[x_{k}=\begin{cases}+1&\text{if }V_{k}\in S,\\ -1&\text{if }V_{k}\in K.\end{cases} \tag{4}\]
and the maximum cut can be found by minimizing the \(E(\underline{x};-\underline{A},0)\) energy function (Eq. 3). For an unweighted graph, the absolute value of the energy is proportional to the number of edges running between \(S\) and \(K\), so the problem is directly addressable by a HNN.[33] In that case, however, the global minimum of the energy function is to be found, while the conventional operation of a Hopfield neural network would yield dead ends at the local minima of the energy landscape. This
Figure 1: (A) Illustration of a Hopfield neural network with five neurons. The orange (yellow) circles illustrate the ‘+1’ (‘-1’) binary states, whereas the lines represent the synaptic weights between the neurons. A HNN excludes self-connections, however, a self-connection with negative weight (dark green arrow) introduces chaotic nature to the network operation, which helps to find the global solution. Stochasticity can be also introduced by the temporal variation (noise) of the synaptic weights, as well as by external noise injection (light green arrow). (B) Illustration of the max-cut problem: the goal is to find a partitioning of the vertices into two adjacent sets so that the total number of crossing edges between the two sets (red lines with dashed-line cut) is maximal. (C) Illustration of the energy landscape of a HNN. A noiseless operation with the conventional update rule may yield dead-ends in local minima (grey line). Properly tailored stochasticity, however, helps escaping from the local minima, and finding the global solution (blue line). (D) Experimental realization of the discrete HNN by a memristor crossbar array. The \(V_{i}=\pm|V|\) voltage inputs at the horizontal lines represent the states of the neurons, which are updated according to the \(I_{j}\) current outputs at the vertical lines, the latter representing the \(a_{j}^{(t)}\) activation. The synaptic weights are encoded in the \(G_{i,j}\) conductance matrix of the memristors in the crossbar. In a conventional HNN the lack of self-connections is represented by the \(\approx 0\) conductance values at the diagonal of the crossbar (dark green memristors). The light green arrow illustrates the possibility of external noise injection. (E) In a memristive HNN the ‘1’ and ‘0’ synaptic weights are encoded in \(G_{\text{ON}}\) and \(G_{\text{OFF}}\) conductance values. These, however, exhibit device-to-device variations described by the \(\Delta G_{\text{static}}\) variance. (F) The stochastic temporal variation (i.e. the noise) of the \(G_{\text{ON}}\) and \(G_{\text{OFF}}\) conductance values also introduces a device non-ideality described by the \(\Delta G_{\text{dynamic}}\) variance. The proper tailoring of the noise, however, aids the network operation.
problem can be eliminated by introducing proper stochasticity to the network, such as a finite noise, which helps the network escape from the local minima and which is reduced as the states evolve towards the global solution (see the illustration in Fig. 1C).
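Following Eq. (4), the cut value of a state vector can be evaluated directly from the adjacency matrix; a small helper in our own notation:

```python
import numpy as np

def cut_value(x, A):
    """Number of crossing edges for the partition encoded by x (+/-1):
    an edge (i, j) crosses iff x_i != x_j, i.e. (1 - x_i x_j)/2 = 1.
    The factor 0.25 compensates the double counting in the symmetric A."""
    return 0.25 * np.sum(A * (1 - np.outer(x, x)))

# Maximizing the cut is equivalent to minimizing E(x; -A, 0), i.e. to
# running the HNN update rule above with W = -A and theta = 0.
```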
### Hardware implementation of HNNs by memristive crossbar arrays
The so-called crossbar structure is a popular scheme for the physical realization of a matrix via memristors.[2; 35] As seen in Fig. 1D, it is essentially a set of horizontal and vertical wires (word and bit lines) with a memristor placed at each crossing point of the lines. Operating this arrangement in the linear, sub-threshold regime of the memristive units, the output current vector at the bit lines is obtained as the product of the input voltage vector at the word lines and the conductance matrix of the memristors at the crosspoints, \(I_{j}=\sum_{i}G_{i,j}\cdot V_{i}\). Once the proper conductance weights are programmed to the crossbar, the vector-matrix multiplication is performed on the hardware level within a single clock cycle. This scheme is also applicable to Hopfield neural networks, where the diagonal values of the \(G_{i,j}\) conductance matrix are zero due to the lack of self-connections in the HNN.
The special case of the max-cut problem is mathematically formulated by a weight matrix whose entries are '1' or '0' depending on whether the corresponding vertices are connected or not. This problem can be mapped to a memristive HNN by setting a constant \(G_{\text{ON}}\) conductance and a \(G_{\text{OFF}}\ll G_{\text{ON}}\) conductance instead of the 1 and 0 values, respectively. The \(x_{i}\) binary state vector elements are represented by \(V_{i}^{(t)}=x_{i}^{(t)}\cdot|V|\) input voltages at the crossbar word lines. This scheme was experimentally realized in Ref. [24] using memristive HNNs up to \(60\times 60\) matrix sizes.
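In simulation, one crossbar read then amounts to a single matrix product; the sketch below uses assumed \(G_{\text{ON}}\), \(G_{\text{OFF}}\), and \(|V|\) values, and the sign handling implied by \(-\underline{A}\) is left to the thresholding step.

```python
import numpy as np

G_ON, G_OFF, V_READ = 1e-4, 1e-7, 0.2   # siemens and volts; assumed values

def to_conductance(A):
    """Map the '1'/'0' adjacency entries to G_ON/G_OFF; the zero diagonal
    encodes the absence of self-connections."""
    G = np.where(A > 0.5, G_ON, G_OFF)
    np.fill_diagonal(G, 0.0)
    return G

def crossbar_currents(G, x):
    """One read cycle: I_j = sum_i G_ij * V_i with V_i = x_i * |V|."""
    return G.T @ (x * V_READ)
```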
### Non-idealities and stochasticity in memristive HNNs
Figures 1E,F demonstrate the key device non-idealities in a memristive Hopfield neural network: the temporal stochastic variation (noise) of the programmed \(G_{\text{ON}}\) and \(G_{\text{OFF}}\) conductances (F), as well as the programming inaccuracy, i.e. the device-to-device variation of the time-averaged \(\overline{G_{i,j}(t)}\) conductances for memristor cells programmed to the same ON/OFF binary state (E). These non-idealities are measured by the temporal variance (\(\Delta G_{\text{dynamic}}\)) and the device-to-device variance (\(\Delta G_{\text{static}}\)) around the ideal \(G_{\text{ON}}\) and \(G_{\text{OFF}}\) values. It is noted that \(G_{\text{ON}}\) can be chosen arbitrarily in the mapping of the '0' and '1' synaptic weights to the \(G_{\text{OFF}}\) and \(G_{\text{ON}}\) conductances; however, a finite \(G_{\text{OFF}}\) already represents a device non-ideality, which may modify the operation of the network. Finite wire resistances and non-linear \(I(V)\) characteristics may also be considered as non-idealities,[24] but these two are not considered in our following analysis.
Among these non-idealities the noise plays a distinguished role, as it does not necessarily hamper the network operation; a properly tailored noise may even help to find the global solution. However, the injection of external stochasticity might become necessary once the internal noise of the memristor elements in the crossbar array is not large enough for optimal network operation. The latter possibility is illustrated by the light green arrows in Figs. 1A,D as well as by the dark green arrow in Fig. 1A, respectively illustrating external noise injection and the introduction of chaotic behavior via a negative self-feedback of the neurons.[24; 29]
## III Realistic noise properties of memristive devices
In the following we analyze the realistic noise characteristics of various memristive systems (Fig. 2), which are a key ingredient of the operation of memristive HNNs.
### Typical noise spectra of memristive units
The \(S_{I}(f)\) spectral density of the current noise is defined as the \((\Delta I)^{2}|_{f_{0},\Delta f}\) mean squared deviation of the current within a narrow \(\Delta f\) band around the central frequency \(f_{0}\), normalized to the bandwidth, \(S_{I}(f_{0})=(\Delta I)^{2}|_{f_{0},\Delta f}/\Delta f\); in practice \(S_{I}(f)\) is calculated from the absolute value squared of the Fourier transform of the measured \(I(t)\) fluctuating current signal.[26] In memristive devices \(S_{I}(f)\) typically exhibits a Lorentzian spectrum (blue curve in Fig. 2B), a 1/f-type spectrum (pink curve in Fig. 2B), or a mixture of the two (purple curve in Fig. 2B). In the first case the noise is dominated by a single fluctuator introducing a steady-state resistance fluctuation with a typical time constant \(\tau_{0}\), yielding a spectrum which is constant at \(2\pi f<\tau_{0}^{-1}\) and decays as \(1/f^{2}\) at \(2\pi f>\tau_{0}^{-1}\).[26] If multiple fluctuators with different time constants contribute to the device noise, the Lorentzian spectra of the individual fluctuators sum up to a spectrum with \(S_{I}\sim f^{-\beta}\), where \(\beta\) is usually close to unity (pink noise, pink curve in Fig. 2B).[26] Alternatively, a single fluctuator positioned at the device bottleneck may dominate the device noise, while a larger ensemble of more remote fluctuators also gives a significant contribution.[26] This situation yields the mixture of Lorentzian and 1/f-type noise (purple curve in Fig. 2B). Without any steady-state resistance fluctuations a finite thermal noise is still observed, the latter exhibiting a constant (frequency-independent) spectrum (white noise). Integrating the current noise over the frequency band of the measurement yields the mean squared deviation of the current in this band, \((\Delta I)^{2}=\int_{f_{\text{A}}}^{f_{\text{B}}}S_{I}(f)\,\mathrm{d}f\).
### Proper metrics of the noise characteristics
At low enough sub-threshold voltages the memristive conductances exhibit steady-state fluctuations, i.e. the applied voltage is only used for the readout of the noise, but it does not excite any fluctuations. In this case \((\Delta I)^{2}=(\Delta G)^{2}\cdot V^{2}\) holds according to Ohm's law, i.e. the voltage-dependent current fluctuation is not a good measure of the noise properties.
The \(\Delta I/I\) relative current fluctuation, however, is a voltage-independent metric of the fluctuations, which equals the relative fluctuation of the conductance or the resistance in the linear regime, \(\Delta I/I=\Delta G/G=\Delta R/R\).[26] This metric will be used throughout the paper to describe the noise characteristics, where \((\Delta G/G)_{\rm dynamic}\) describes the relative temporal fluctuations of a certain element of the memristor conductance matrix \(G_{i,j}(t)\). It is noted that \((\Delta G/G)_{\rm dynamic}\) depends on the bandwidth. The high-frequency cutoff is determined by the integration time of the current readout (\(\tau_{\rm readout}=2\,\mu\)s in our simulation, yielding \(f_{B}=1/(2\tau_{\rm readout})=250\,\)kHz), whereas the \(f_{A}=10\) Hz bottom end of the frequency band is determined by the time period for which the network is operated (0.1 s in our simulation, corresponding to 10000 iteration steps and a \(4\tau_{\rm readout}\) waiting time between the current readout events, simulating the finite time of the neural updates and the multiplexing). Increasing the number of iteration steps would naturally increase \((\Delta G/G)_{\rm dynamic}\), but this dependence is characteristic of the nature of the noise spectrum. In case of a Lorentzian spectrum the noise amplitude hardly depends on the bandwidth once the \((2\pi\tau_{0})^{-1}\) characteristic frequency of the fluctuator is well inside the band. This is consistent with the \((\Delta G)^{2}\sim\arctan(2\pi f\tau_{0})|_{f_{A}}^{f_{B}}\) relation for the Lorentzian spectrum. The other experimentally relevant, \(1/f\)-type spectrum yields \((\Delta G)^{2}\sim\ln(f_{B}/f_{A})\), which is also a very weak dependence on the bandwidth, yielding only a \(\approx 30\)% increase of \((\Delta G/G)_{\rm dynamic}\) once the number of iteration steps is increased from \(10^{4}\) to \(10^{7}\), i.e. the above bandwidth is increased by three orders of magnitude. According to these considerations, the results of our simulations depend only weakly on our specific choice of the bandwidth.
### Variation of the noise with the device conductance
Several studies have pointed out that the relative noise amplitude of a memristive device exhibits a strong and specific dependence on the device conductance, i.e. the multilevel programmability is accompanied by a tuning of the relative noise level.[25; 27; 39; 40; 41; 42; 43; 44; 45; 46; 47] Fig. 2A shows four examples of this behavior, demonstrating the conductance-dependent noise characteristics of Ag\({}_{2}\)S (green),[26; 27] Ta\({}_{2}\)O\({}_{5}\) (red),[25] Nb\({}_{2}\)O\({}_{5}\) (orange)[25] and SiO\({}_{x}\) (blue) memristive units, integrated for the same 10 Hz-250 kHz frequency band. It is clear that the overall noise amplitude, the characteristic conductance range of the operation, as well as the dependence of \(\Delta G/G\) on the conductance is a kind of device fingerprint, exhibiting significant differences between various material systems. However, a rather general trend of the noise characteristics can be identified: in the low-conductance region of the operation regime \(\Delta G/G\) is very weakly dependent on the conductance, whereas in the high conductance region a strong \(\Delta G/G\sim G^{-\gamma}\) power-law dependence is typical.
In the latter case, a metallic filamentary conduction is envisioned, where the relative noise amplitude obviously increases as the filament diameter is reduced.[26] A rather generally observed tendency is related to the volume-distributed fluctuators in a diffusive filament, where \(\Delta G/G\sim G^{-3/2}\) was obtained from theoretical considerations.[25; 27] This is also confirmed by the experimental data in Fig. 2A, where the validity of the \(\gamma=3/2\) exponent was approved for the Ag\({}_{2}\)S, Ta\({}_{2}\)O\({}_{5}\) and Nb\({}_{2}\)O\({}_{5}\) systems,[25; 27] whereas our new data on SiO\({}_{x}\) memristors exhibits a somewhat shallower dependence with \(\gamma=1.13\) (see the dashed lines representing the best fitting
Figure 2: (A) Relative conductance noise as a function of device conductance for a variety of memristive materials. The data for Ag\({}_{2}\)S (green), Ta\({}_{2}\)O\({}_{5}\) (red) and Nb\({}_{2}\)O\({}_{5}\) (orange) memristive devices are reproduced from Refs. [25; 26; 27], recalculating the integrated noise amplitudes for the \([f_{A},f_{B}]=[10\,{\rm Hz},250\,{\rm kHz}]\) band. For these material systems the validity of the diffusive noise model with volume-distributed fluctuators was verified in the high conductance regime, the lines with the corresponding colors represent the best fitting trends with the corresponding \(\gamma=3/2\) exponent.[25; 26; 27] At somewhat smaller conductances a rather narrow ballistic conductance region is observed with significantly shallower, \(\gamma=1/4\) exponent.[25; 26; 27] Finally, in the sub-conductance-quantum interval a broken filamentary regime is observed with \(\approx\) constant relative noise level, which is best resolved for the Ag\({}_{2}\)S system.[26] The blue data represent new measurements on graphene-SiO\({}_{x}\)-graphene lateral devices, using the sample preparation protocol as in Ref. [28; 36]. Here the switching relies on the voltage-controlled transitions between well-conducting crystalline and poorly conducting amorphous regions.[37; 38] The low conductance \(\approx\) constant and high conductance \(\sim G^{-\gamma},\gamma=1.13\) dependencies are clearly seen for this system spanning 5 orders of magnitude (3 orders of magnitude) range along the conductance (relative noise) axis. The \(G_{C}\) crossover conductance is well below \(G_{0}\) indicating a barrier-like component even in the metallic regime. (B) Illustrative Lorentzian (blue) 1/f-type (pink) and mixed (purple) noise spectra measured on Ta\({}_{2}\)O\({}_{5}\) devices. The bottom curve is on true scale, while the middle and top curves are artificially shifted upwards by one and two orders of magnitude. (C) Proposed noise model with constant relative noise in the barrier-like regime (\(G<G_{C}\)) and \(\sim G^{-\gamma}\) relative noise in the metallic nanojunction regime (\(G>G_{C}\)), the \(G_{\rm ON}\) (red circle) and \(G_{\rm OFF}\) (blue circle) conductances can be set to arbitrary positions along the noise model.
tendencies with the given \(\gamma\) exponents). It is noted, however, that the \(\gamma\) exponent may depend on the transport mechanism, the device geometry, the dimensionality (2D/3D devices), as well as the distribution of the fluctuators (single or multiple fluctuators, surface- or volume-distributed fluctuators, etc.).[26]
In contrast, the saturated noise characteristics in the low-conductance regime are attributed to broken filaments, where a barrier-like transport is envisioned. In the simplest case of a tunnel barrier, the \(G=A\cdot\exp(-\alpha\cdot d)\) relation yields a conductance-independent \(\Delta G/G=\alpha\cdot\Delta d\) relative conductance noise for a constant \(\Delta d\) fluctuation of the barrier width.[26] More complex transport phenomena, like the Frenkel-Poole mechanism[48] or a hopping-type transport,[49] require more sophisticated descriptions, but the overall trend, i.e. the independence, or the very weak dependence, of \(\Delta G/G\) on \(G\) is left unchanged due to the exponential dependence of the conductance on a relevant fluctuating parameter.
According to these considerations, in the following simulations we rely on a simplified noise model (see Fig. 2C), where \(\Delta G/G\) is constant below a certain threshold conductance \(G_{C}\) (see the red barrier-like regime in Fig. 2C), whereas a general \(\Delta G/G\sim G^{-\gamma}\) power-law dependence is considered at \(G>G_{C}\) (see the blue metallic nanojunction regime in Fig. 2C). The \(G_{\rm OFF}\) and \(G_{\rm ON}\) conductances of the memristive HNN can be fixed at arbitrary positions along this noise model, as demonstrated by the red and blue circles in Fig. 2C. This simplified model has three free parameters: the \(G_{C}\) threshold, the \(\gamma\) slope, and the \(\Delta G/G\) relative fluctuation in the barrier-like regime. Note that according to the experimental results in Fig. 2A the latter can reach a few tens of percent, \(G_{C}\) is not necessarily, but reasonably, close to the \(G_{0}=2e^{2}/h\) conductance quantum unit, whereas the variation of \(\Delta G/G\) can span up to three orders of magnitude in the metallic nanojunction regime. For the exponent, values of \(\gamma=1.13-1.5\) are observed; however, we emphasize that fundamentally different slopes are also possible.[26]
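The piecewise noise model can be summarized in a few lines of code. The following sketch uses illustrative parameter values (\(G_{C}\) close to \(G_{0}\approx 77.5\,\mu\)S, \(\gamma=3/2\), 30% barrier-regime noise); these are assumptions for demonstration, not fits to any particular device.

```python
import numpy as np

def relative_noise(G, G_C=7.75e-5, gamma=1.5, dGG_barrier=0.3):
    """Simplified relative conductance noise model of Fig. 2C.

    Below G_C (barrier-like regime) Delta G / G is constant; above G_C
    (metallic nanojunction regime) it decays as (G / G_C)**(-gamma).
    G_C, gamma and dGG_barrier are the three free model parameters.
    """
    G = np.asarray(G, dtype=float)
    metallic = dGG_barrier * (G / G_C) ** (-gamma)
    return np.where(G < G_C, dGG_barrier, metallic)

# Example: noise levels deep in the barrier-like and metallic regimes
print(relative_noise([1e-6, 1e-3]))  # -> [0.3, ~0.0065]
```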
## IV Simulation of Memristive HNNs with realistic noise characteristics
We have simulated memristive HNNs with realistic noise characteristics relying on the standardized Biq Mac Library,[50] which provides exact globally optimal energies for Max-Cut instances of undirected and unweighted graphs of sizes \(n\in\{60,80,100\}\). Following the results of Ref. [24], we studied Erdős-Rényi graphs with 50% connection probability between the vertices. We have simulated the HNNs starting from \(K=200\) randomly picked initial state vectors, performing \(N=10000\) iterations for each run. The neurons were iterated in a predetermined random order. The \(K\) runs are evaluated according to two figures of merit: the proportion of runs where the network was in the globally optimal state at the \(N^{\rm th}\) step (\(\mathbb{P}_{\rm conv}\) convergence probability), and the number of edges between the two subsets (i.e. the \(C\) number of cuts) after \(N\) iteration steps, averaged over the \(K\) random initial vectors, \(\overline{C}=\frac{1}{K}\sum_{i=1}^{K}C\big(x_{i}^{(N)}\big)\).
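As a minimal illustration of this procedure, the sketch below runs one asynchronous Hopfield iteration sequence and counts the cuts of the resulting partition; the mapping \(W=-A\) from the adjacency matrix \(A\) to the Hopfield weights and the zero thresholds are our assumptions for a standard Max-Cut encoding, not necessarily the exact encoding used in the simulations.

```python
import numpy as np

def run_hnn(W, n_iter=10_000, rng=None):
    """One run of a binary Hopfield network (a sketch).

    W is the weight matrix with zero diagonal; neurons take values
    +/-1 and are updated asynchronously in a predetermined random order.
    """
    rng = rng or np.random.default_rng()
    n = W.shape[0]
    x = rng.choice([-1, 1], size=n)      # random initial state vector
    order = rng.permutation(n)           # predetermined random order
    for t in range(n_iter):
        j = order[t % n]
        x[j] = 1 if W[j] @ x >= 0 else -1    # threshold theta_j = 0
    return x

def cut_value(A, x):
    """Number of cut edges for the +/-1 partition x of adjacency A."""
    return int(A[np.ix_(x > 0, x < 0)].sum())

# For Max-Cut, one common Hopfield encoding is W = -A:
# x_final = run_hnn(-A); C = cut_value(A, x_final)
```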
Instead of the ideal '1' and '0' values of the \(W_{i,j}\) weight matrix, realistic \(G_{i,j}\) conductances of the memristive HNN were used in the simulations. At the matrix positions with a value of '1' in the original problem, an average conductance of \(G_{\rm ON}\) was applied, considering both \((\Delta G/G)_{\rm static}\) device-to-device variations and \((\Delta G/G)_{\rm dynamic}\) temporal fluctuations around this mean value. To simulate the latter, independent \(G(t)\) time traces were generated for all the memristive elements in the ON state, using either a Lorentzian, pink or white noise spectrum. Carson's theorem and method were applied[51] to generate the \(G(t)\) temporal noise traces (i.e. temporal conductance variations) from the chosen \(S_{G}(f)\) noise spectrum.
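In practice, such traces can be synthesized by assigning spectrum-weighted amplitudes and random phases to the Fourier components, which is equivalent in spirit to Carson's sum-of-sinusoids construction. The sketch below is a minimal frequency-domain variant of this idea, not the exact implementation used for the simulations.

```python
import numpy as np

def noise_trace(n_samples, fs, psd, rng=None):
    """Generate a zero-mean, unit-variance noise trace with a
    prescribed power spectral density psd(f), e.g. lambda f: 1/f for
    pink noise, or a Lorentzian for random-telegraph-like noise."""
    rng = rng or np.random.default_rng()
    f = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spec = np.zeros(f.size, dtype=complex)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=f.size - 1)
    spec[1:] = np.sqrt(psd(f[1:])) * np.exp(1j * phases)  # skip DC
    trace = np.fft.irfft(spec, n=n_samples)
    return trace / trace.std()

# A fluctuating ON conductance with 13.8% relative pink noise:
# G_t = G_on * (1 + 0.138 * noise_trace(10_000, 250e3, lambda f: 1/f))
```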
According to the HNN scheme, the diagonal elements of the crossbar matrix (self-connections) are set to exactly zero. This can be physically implemented by omitting the memristors at the diagonal positions, either by switching off their transistor in a 1T1R arrangement,[52; 53; 54] or by omitting their electroforming procedure. In Sec. IV.6, however, we also discuss the case of finite self-feedback, which introduces a chaotic nature to the network.
The off-diagonal elements of the weight matrix with values of '0' are generally represented by the \(G_{\rm OFF}\) conductance, with the corresponding relative conductance noise. In a part of the simulations, however, \(G_{\rm OFF}=0\) is applied according to the following considerations.
### Contribution of the OFF and ON state elements to the current output and the noise of the memristive HNN
In the following we provide simple considerations on the relative current and current fluctuation contributions of the ON and OFF state elements, which helps to identify the most relevant contributions.
(i) _Relative current contribution of OFF-state memristive elements in the crossbar._ For bit line \(j\) the number of '1' values in the original weight matrix is denoted by \(d_{j}\), yielding an average value of \(\overline{d}_{j}=(n-1)\cdot p_{\rm c}\) according to the random connection probability between the \(n\) vertices, for which \(p_{\rm c}=0.5\) is applied in the following. The current contribution of the ON and OFF state elements in a certain bit line \(j\), however, also depends on the distribution of the '\(+1\)' and '\(-1\)' values in the \(x_{i}\) state vector, which varies during operation. For the ensemble average of the adjacency matrices with the same random connection probability, however, an ensemble-averaged current can be calculated
\[\overline{I_{j}}=\underbrace{\sum_{i}x_{i}\cdot|V|\cdot p_{\rm c}\cdot G_{\rm ON }}_{\overline{I_{\rm ONj}}}+\underbrace{\sum_{i}x_{i}\cdot|V|\cdot(1-p_{\rm c}) \cdot G_{\rm OFF}}_{\overline{I_{\rm OFFj}}}, \tag{5}\]
from which the \(\overline{I_{\rm OFFj}}/\overline{I_{\rm ONj}}=(G_{\rm OFF}/G_{\rm ON})\cdot(1-p_{\rm c})/p_{\rm c}\) ratio gives an indication of the OFF and ON state memristors' relative current contribution in column \(j\). For the special case of \(p_{\rm c}=0.5\) this simplifies to \(\overline{I_{\rm OFF}}/\overline{I_{\rm ON}}=G_{\rm OFF}/G_{\rm ON}\). This demonstrates that at a large enough ON/OFF conductance ratio (e.g. \(G_{\rm ON}/G_{\rm OFF}>100\)), the replacement of \(G_{\rm OFF}\) by zero is a reasonable simplification for a densely connected graph. Later on (Sec. IV.5.2), we numerically analyze how a nonzero \(G_{\rm OFF}\) value modifies the network operation at moderate ON/OFF conductance ratios.
(ii) _Relative noise contribution of OFF-state memristive elements in the crossbar._ Whereas the current in a certain bit line strongly depends on the actual \(x_{i}\) state vector values, the mean squared deviation of the current is independent of that, and can be exactly deduced, once the \(d_{j}\) number of ON-state elements in column \(j\) is known:
\[(\Delta I)_{\rm j}^{2} =\sum_{i\ (\neq j)}\ (\Delta G)_{\rm j,i}^{2}\cdot|V_{i}|^{2}= \tag{6}\] \[=\underbrace{(\Delta G)_{\rm OFF}^{2}\cdot(n-d_{j}-1)\cdot|V|^{2} }_{(\Delta I)_{\rm OFF,j}^{2}}+\underbrace{(\Delta G)_{\rm ON}^{2}\cdot d_{j} \cdot|V|^{2}}_{(\Delta I)_{\rm ON,j}^{2}}.\]
From this the relative noise contributions of the OFF and ON state elements in bit line \(j\) can be calculated considering our simplified noise model (Fig. 2C). First, we treat the _mixed barrier-like and metallic_ regime, where \(G_{\rm ON}>G_{\rm C}>G_{\rm OFF}\), yielding:
\[\frac{\Delta I_{\rm OFF,j}}{\Delta I_{\rm ON,j}}=\frac{G_{\rm OFF}}{G_{\rm C} }\left(\frac{G_{\rm C}}{G_{\rm ON}}\right)^{1-\gamma}\cdot\sqrt{\frac{n-d_{j} -1}{d_{j}}}. \tag{7}\]
Note that the square-root term gives unity once \(d_{j}\) is replaced by its average value at 50% connection probability. This formula yields a negligible OFF-state noise contribution for arbitrary \(\gamma\), once \(G_{\rm OFF}\) is chosen deep in the barrier-like regime (\(G_{\rm OFF}/G_{\rm C}\ll 1\)), whereas \(G_{\rm ON}\) remains reasonably close to \(G_{\rm C}\).
It is worth discussing another limit as well, where the entire crossbar is operated in the metallic nanojunction regime (see Fig. 2C), i.e. \(G_{\rm ON}>G_{\rm OFF}>G_{\rm C}\) (_pure metallic regime_). In the metallic nanojunction regime the memristive elements exhibit much more linear subthreshold \(I(V)\) characteristics than in the barrier-like regime, which is a favorable property for the high-precision vector-matrix multiplication operation of the memristive crossbar. This limit yields
\[\frac{\Delta I_{\rm OFF,j}}{\Delta I_{\rm ON,j}}=\left(\frac{G_{\rm OFF}}{G_{ \rm ON}}\right)^{1-\gamma}\cdot\sqrt{\frac{n-d_{j}-1}{d_{j}}}, \tag{8}\]
emphasizing the dominance of the OFF-state elements' noise contribution at any \(\gamma>1\) value, i.e. for all the memristive units demonstrated in Fig. 2A. Furthermore, in this pure metallic operation regime the \(G_{\rm ON}/G_{\rm OFF}\) conductance ratio is restricted to rather limited values, spanning one order of magnitude in the diffusive regime of Ag\({}_{2}\)S, Ta\({}_{2}\)O\({}_{5}\) and Nb\({}_{2}\)O\({}_{5}\) memristors, and less than two orders of magnitude in SiO\({}_{x}\) memristors (see Fig. 2A), which may distort the network operation compared to networks operated with orders of magnitude larger \(G_{\rm ON}/G_{\rm OFF}\) ratios in the mixed barrier-like and metallic regime (Eq. 7). The choice of the operation regime is therefore a tradeoff between high-precision linearity and the proper representation of the '0' values in the weight matrix. In the following subsections, we discuss the results of our simulations for both operation regimes using Max-Cut benchmarks with 50% connection probability and a noise model with \(\gamma=3/2\). Note, however, that the above formulae are also suitable for discussing more general situations, including arbitrary conductances and \(\gamma\) scaling exponents. Furthermore, arbitrary graphs (i.e. any \(d_{j}\) and \(n\) values) can also be analyzed, where the replacement of the applied dense graph with a sparse graph \((d_{j}/n\ll 1/2)\) would yield a further enhancement of the OFF noise contribution.
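Using the relative_noise helper sketched earlier, the ratio of the OFF- and ON-state noise contributions of Eqs. 7-8 can be evaluated for an arbitrary placement of the two conductance states; the snippet below is a sketch under the same illustrative model parameters.

```python
import numpy as np

def off_on_noise_ratio(G_off, G_on, n, d_j, G_C=7.75e-5, gamma=1.5):
    """Delta I_OFF / Delta I_ON for one bit line (cf. Eqs. 6-8).

    Both conductances are mapped onto the noise model of Fig. 2C, so
    the same expression covers the mixed barrier-like/metallic regime
    (Eq. 7) and the pure metallic regime (Eq. 8).
    """
    dG_off = relative_noise(G_off, G_C, gamma) * G_off  # absolute Delta G
    dG_on = relative_noise(G_on, G_C, gamma) * G_on
    return (dG_off / dG_on) * np.sqrt((n - d_j - 1) / d_j)

# Pure metallic regime, n = 60, near-average d_j: OFF noise dominates
print(off_on_noise_ratio(1e-3, 1e-2, n=60, d_j=29))  # > 1 for gamma > 1
```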
### Optimal noise level, and the role of the noise color
We have simulated max-cut problems using graphs with 50% connection probability and different sizes (\(n=60,80,100,300\)). First, the mixed barrier-like and metallic operation regime is analyzed (Eq. 7) with \(G_{\rm OFF}/G_{\rm ON}\ll 1\), and according to the above considerations \(G_{\rm OFF}=0\) was chosen, whereas a finite \(G_{\rm ON}\) value with variable noise was applied. The time-averaged conductance was the same for all the ON elements, i.e. \((\Delta G/G)_{\rm static}=0\) was applied. Simulations were run with three different noise types: Lorentzian noise (blue symbols in Fig. 3), \(1/f\) noise (pink symbols), and white noise (grey symbols). The related noise spectra and time traces generated from these spectra are shown in Figs. 3C,D. For all three spectra, the \((\Delta G/G)_{\rm dynamic}\) metric was used to measure the relative noise.
Figs. 3A,B demonstrate the \(\mathbb{P}_{\rm conv}\) convergence probability and the \(\overline{C}\) average number of cuts after \(N\) iteration steps for the \(60\times 60\) benchmark Max-Cut problem also applied in Ref. [24]. The red line in panel B shows the maximum number of cuts, i.e. the global solution to the problem.
It is clear that at zero noise level the convergence probability is poor (\(\mathbb{P}_{\rm conv}=1.5\%\)) and the achieved number of cuts is far away from the global solution. As the relative amplitude of the dynamic noise is increased, the convergence probability (Fig. 3A) exhibits a stochastic resonance phenomenon, similarly to the results of Ref. [24]: irrespective of the noise color, \(\mathbb{P}_{\rm conv}\) shows a peak at \((\Delta G/G)_{\rm dynamic}\approx 13.8\%\), leading to a \(\mathbb{P}_{\rm conv}=40-50\%\) chance of convergence. This implies that at a lower noise level the system sticks to local minima, which prevents the convergence to the global solution, whereas at too high a noise level the system is able to escape from the global minimum, which also hampers the convergence. As an interesting conclusion, however, the results of the simulation are very similar for the different noise colors, i.e. the temporal correlations in the noise spectra are irrelevant, and \((\Delta G/G)_{\rm dynamic}\) seems a proper, noise-type-independent metric to find the optimal noise level. This also allows the simplification of the simulations by easily generated white noise spectra. Furthermore, it is emphasized that the optimal \((\Delta G/G)_{\rm dynamic}\approx 13.8\%\) noise level corresponds to the top end of the experimentally observed relative noise values (Fig. 2A), i.e. the experimentally relevant noise levels do not hamper the network operation; on the contrary, they might not even be sufficient to realize the optimal noise level if stochasticity is solely introduced by the noise of the memristor elements of the crossbar matrix.
We have repeated these simulations for numerous benchmark problems from the Biq Mac library spanning matrix
sizes of \(60\times 60\) (circles in Fig. 3E), \(80\times 80\) (pentagons) and \(100\times 100\) (stars), using white, pink and Lorentzian spectra (grey, pink and blue symbols). At the larger matrix sizes, only white noise was applied. For these problems, the symbols in Fig. 3E represent the relative noise values where the convergence probability is maximal. Furthermore, we have generated an even larger weight matrix (\(300\times 300\), '\(+\)' symbol in the last column). Here, the global solution is not known, therefore the symbol represents the noise value where \(\overline{C}\) is maximal. Whereas the convergence probability and \(\overline{C}\) strongly vary for the different problems, the optimal noise level scatters around a common \(\Delta G/G=13.2\%\) average value (horizontal solid line) with a small variance of \(2.6\%\) (horizontal dashed lines). This analysis does not show any systematic tendencies as a function of the matrix size; even the largest matrix with 90000 memristor elements exhibits optimal operation close to this average value. We note that the system size dependence of the optimal noise level was analyzed in Ref. [24] as well. However, in the latter analysis the current noise of the entire bit lines was considered. The system-size-independent optimal noise level of the individual devices (Fig. 3E) yields a bit line current variance scaling with the square root of the array size due to the \(\Delta I_{j}\sim\sqrt{d_{j}}\) relation (see Eq. 6), i.e. the results of Ref. [24] on the optimal noise level (Fig. 5C in Ref. [24]) are consistent with our observations.
### Annealing schemes
The greatest challenge with randomization algorithms is that stochasticity helps to escape local minima, but there is no guarantee for the system to stay in the global minimum once it is first reached. A common approach to overcome this difficulty is to "cool" the system, i.e. gradually decrease the extent of stochastic behavior during the optimization process. For an experimentally realized HNN, the straightforward method is to harvest and tune the inherent device noise utilizing the multilevel programming of the conductance states.
According to the work of Cai et al.,[24] the optimal trend for the cooling process in a HNN is superlinear. We have implemented this cooling scheme in our simulations, applying a parameterless superlinear annealing protocol on the stochastic variation of the conductance:
\[G_{\text{anneal}}^{(t)}=\log\left(10-\frac{9\cdot t}{N}\right)\cdot G^{(t)}, \tag{9}\]
where the \(G(t)\) noise signal is generated according to the chosen spectrum and the initial \(\Delta G/G\) value, and the noise signal is accordingly attenuated as the iterations evolve. The temporal decrease of the noise amplitude generated this way is illustrated by the pink curve in Fig. 4D.
As illustrated in Fig. 4C, the OFF-state conductance (red dot) is chosen deep in the barrier-like regime (and accordingly \(G_{\text{OFF}}=0\) is applied), while the ON-state (blue dot) is prepared in the metallic regime with non-zero dynamic noise. During the \(N\) steps the blue dot is moved towards higher conductances so that the relative dynamic noise gradually decreases (Fig. 4D). All simulations were performed using the experimentally motivated pink noise.
The results achieved by this continuous annealing protocol are scattered as pink circles in Figs. 4A,B. Here, the horizontal axis represents the initial noise value. To compare this annealing scheme to the network operation with constant noise, the corresponding results from Figs. 3A,B are reproduced as pink lines. It is clear that the annealing procedure, started from a high enough noise level, delivers significantly better convergence probability than the constant noise simulation using the optimal noise level, which is consistent with the observations in Ref. [24]. However, the results plotted in Figs. 4A,B demonstrate an unexpected phenomenon: if the annealing is started from a noise level at or below the optimal 13.8% constant noise level, the convergence probability does not show any improvement compared to the related constant noise simulation anymore. This implies an important conclusion: it is
Figure 3: Simulation results for test problem g05_60.0 [50] with different dynamic noise spectra at various constant noise levels. (A) Convergence probability as a function of dynamic noise level for the three noise types. White noise, pink noise, and Lorentzian noise are respectively marked by grey, pink, and blue symbols in all panels. (B) \(\overline{C}\) as a function of dynamic noise level for the three noise types. (C,D) Noise spectra and example \(G(t)\) traces generated from these spectra for the three noise types. (E) Optimal noise level for various max-cut problems using graphs with randomly generated 50% connection probabilities, and sizes of \(60\times 60\) (circles), \(80\times 80\) (pentagons), \(100\times 100\) (stars) and \(300\times 300\) (plus symbol).
not vital to decrease the noise level well below the optimal noise level during the annealing process; however, it is beneficial if the annealing is started from a higher noise level than the optimal constant noise level. In other words, the optimal constant noise level does not cause a significant escape probability from the global solution, whereas an initially higher noise level helps to escape from the local minima, driving the system more efficiently towards the global solution.
Utilizing this finding, we propose a highly simplified double-step annealing protocol (orange illustrations in panels C and D), where the initial noise level is decreased to its \(2/3\) and \(1/3\) value at \(1/3\) and \(2/3\) of the iteration steps (see the sketch after this paragraph). According to panels A and B, this simplified annealing protocol (orange symbols) delivers similar results as continuous annealing. This is highly beneficial for the network operation, as continuous noise annealing would be a demanding task due to the frequent reprogramming of all memristive cells. The double reprogramming along all the iteration steps is a reasonable trade-off between the time-consuming continuous annealing and the constant noise operation, where the convergence probability is worse and it is unrealistic to precisely know the optimal noise level in advance.
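The two annealing schedules reduce to simple attenuation factors applied to the generated noise signal. A sketch is given below; the base-10 logarithm in the continuous scheme is our assumption, chosen so that the factor decays from 1 at \(t=0\) to 0 at \(t=N\).

```python
import numpy as np

def anneal_continuous(t, N):
    """Superlinear attenuation of the noise amplitude (cf. Eq. 9);
    base-10 logarithm assumed, giving a factor of 1 at t = 0."""
    return np.log10(10.0 - 9.0 * t / N)

def anneal_double_step(t, N):
    """Simplified double-step protocol: the initial noise level is
    reduced to 2/3 and 1/3 of its value at 1/3 and 2/3 of the steps."""
    return 1.0 if t < N / 3 else (2.0 / 3.0 if t < 2 * N / 3 else 1.0 / 3.0)

# The annealed trace is then e.g. G_on * (1 + dGG0 * factor * noise[t]).
```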
### Memristive HNN operated in the metallic nanojunction regime
In the discussion of Eq. 8 we have seen that the noise of the OFF-state elements dominates the network once both the OFF and the ON states are positioned in the metallic nanojunction regime, which is described by a \(\gamma>1\) exponent. Next, we analyze the network operation in this _pure metallic regime_ by varying the dominant \(\Delta G_{\text{OFF}}/G_{\text{OFF}}\) relative noise level using \(G_{\text{ON}}/G_{\text{OFF}}=10\) (purple symbols in Figs. 5A,B) and \(G_{\text{ON}}/G_{\text{OFF}}=100\) (orange symbols in Figs. 5A,B) conductance ratios and \(\gamma=3/2\). Here, the ON-state noise level is also simulated according to the scaling in Eq. 7. In this case the convergence probability and \(\overline{C}\) (Figs. 5A,B) exhibit significantly worse results, even at the highest 30% relative noise, than the optimal network operation in Figs. 3A,B at 13.8% relative ON-state noise level. This result, however, follows directly from Eqs. 6-7. According to these formulae, arbitrary OFF and ON state noise levels along the noise model can be converted to an equivalent situation, where the OFF elements are noiseless, but the equivalent relative ON-state noise, \(\left(\Delta G_{\text{ON}}/G_{\text{ON}}\right)_{\text{equivalent}}\), is set such that the overall current noise of the given bit line remains the same. According to Eq. 7 the \(\gamma=3/2\) and \(p_{\text{c}}=0.5\) parameters yield \(\left(\Delta G_{\text{ON}}/G_{\text{ON}}\right)_{\text{equivalent}}=\left(\Delta G_{\text{OFF}}/G_{\text{OFF}}\right)/\sqrt{1+G_{\text{ON}}/G_{\text{OFF}}}\). In Figs. 5C,D the results of Figs. 5A,B and Figs. 3A,B are plotted as a function of the equivalent ON-state noise level, demonstrating that the curves indeed follow the same tendency. From this we can conclude that the pure metallic nanojunction regime yields the dominance of the OFF-state elements' noise; however, a given OFF-state noise corresponds to a significantly smaller equivalent ON-state noise, i.e. even the largest 30% OFF-state noise is too small to reach the optimal operation.
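The conversion to the equivalent ON-state noise is a one-liner and reproduces the values quoted in the caption of Fig. 5:

```python
def equivalent_on_noise(dGG_off, ratio):
    """Equivalent relative ON-state noise for the pure metallic regime
    with gamma = 3/2 and p_c = 0.5; ratio = G_ON / G_OFF."""
    return dGG_off / (1.0 + ratio) ** 0.5

# 30% OFF-state noise maps to ~0.09 and ~0.03 for ratios 10 and 100:
print(equivalent_on_noise(0.3, 10), equivalent_on_noise(0.3, 100))
```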
### Further non-idealities
After a detailed analysis of the memristive HNNs' noise properties and their impact on the network operation, we analyze the role of further device non-idealities, such as the programming inaccuracy and the finite \(G_{\text{OFF}}\) conductance.
#### iv.5.1 Programming inaccuracy
In Figs. 6A,B we analyze the role of the \(\left(\Delta G/G\right)_{\text{static}}\) measure of the programming inaccuracy, i.e. the device-to-device variance of the time-averaged conductance normalized to the average conductance. Here, we also consider the mixed barrier-like and metallic regime using the approximation of \(G_{\text{OFF}}=0\), i.e. solely analyzing the programming inaccuracy of the ON-state conductances. The network's operation for an increasing \(\left(\Delta G/G\right)_{\text{static}}\) is demonstrated in Figs. 6A,B with no
Figure 4: Simulation results for test problem g05_60.0 [50] using annealed pink dynamic noise. (A) Convergence probability as a function of dynamic noise level for the different schemes: constant noise (pink line), continuous annealing (pink circles), double-step annealing (orange squares). (B) \(\overline{C}\) as a function of dynamic noise level for the different schemes (same colors as in (A)). (C) Operation scheme with annealing. A memristor has two operational regimes based on the dynamic noise. The matrix elements representing zero are set to the far OFF state, giving essentially zero contribution, whereas matrix elements representing one are programmed to an ON state at the desired initial noise level. During the \(N=10^{4}\) steps, for each of the \(K=200\) starting vectors, the system’s ON state is gradually reprogrammed to a lower dynamic noise level. (D) Example \(G_{\text{anneal}}^{(t)}\) noise signals for continuous logarithmic and discrete double-step annealing schemes.
dynamical noise (pink line), and constant pink noise at optimal amplitude (pink circles).
In the noiseless network, device-to-device variations up to \(\approx 15\%\) leave the poor noiseless network performance practically unchanged, whereas larger \((\Delta G/G)_{\text{static}}\) values make the network operation even worse. It is to be emphasized that static device-to-device variations seemingly produce a stochastic deviation of the bit line currents from the expected values of the original ideal HNN along the temporal evolution of the neural states.[24] However, a finite \((\Delta G/G)_{\text{static}}\) only deforms the weight matrix of the HNN, and still an ideal noiseless HNN is realized. This means that a finite \((\Delta G/G)_{\text{static}}\) and \((\Delta G/G)_{\text{dynamic}}=0\) yield a modified ideal HNN, where the energy can only be reduced along the operation, yielding similar dead-ends in the local minima as the original noiseless HNN. Therefore, the device-to-device variations are not suitable for performance enhancement in the memristive HNN; for that, either true stochasticity (noise) is required, or a non-ideal HNN with somewhat chaotic energy trajectories should be realized. The latter is possible by the introduction of a diagonal feedback (see Sec. IV.6.2), and presumably nonlinear device characteristics also yield similar non-ideal chaotic behavior; the latter, however, is not analyzed in this paper.
It is also interesting to analyze the role of the programming inaccuracy if it is accompanied by optimal dynamical noise characteristics. According to the pink circles in Fig. 6, already \((\Delta G/G)_{\text{static}}>0.025\) values yield a sharp decrease in the convergence probability. An annealed network would show a very similar decay of the convergence probability (not shown). Accordingly, proper programming accuracy is vital in the network. Such accuracy has already been experimentally demonstrated in memristors with 2048 distinct conductance levels (corresponding to 11-bit resolution), where a special denoising process was applied to maximize programming accuracy.[20] The states were programmed between 4144 \(\mu\)S and 50 \(\mu\)S, with a 2 \(\mu\)S resolution, which roughly corresponds to \(\Delta G/G\approx 0.0005-0.04\), i.e. if the network is operated at the high-conductance end of this conductance regime, the envisioned \((\Delta G/G)_{\text{static}}<0.025\) condition (see Fig. 6A) is easily satisfied even at a much worse conductance resolution.
#### iv.5.2 Finite OFF conductance
We have seen that from the noise perspective one can always find an equivalent picture where the OFF state is noiseless, i.e. in this sense the partitioning of the noise between the ON and OFF elements is irrelevant; just the overall noise matters. However, even at zero noise, a finite OFF-state conductance may modify the network operation due to the imperfect representation of zero states. To analyze this, simulations were run at different \(G_{\text{OFF}}/G_{\text{ON}}\) values, ranging from 0 to 0.3, with no dynamical noise, and with constant pink noise with optimal equivalent amplitude. The results can be seen in Figs. 6C,D. No significant change is
Figure 5: Simulation results for test problem g05_60.0 [50] operating the noise model in the pure metallic regime, and using pink noise. (A,B) Convergence probability and \(\overline{C}\) as a function of the relative dynamic OFF-state noise level (i.e. the dominant noise contribution) using \(G_{\text{ON}}/G_{\text{OFF}}=10\) and \(G_{\text{ON}}/G_{\text{OFF}}=100\) conductance ratios (pink and orange). (C,D) The same data rescaled to the equivalent relative ON-state noise level (see text). The pink line reproduces the simulation using solely ON-state noise in the mixed barrier-like and metallic regime (pink data in Figs. 3A,B), which actually represents the \(G_{\text{ON}}/G_{\text{OFF}}=\infty\) limit. Note that the largest 30% relative OFF-state noise levels in panels (A,B) correspond to equivalent noise values of \(\approx 0.1\) and \(\approx 0.03\) for the \(G_{\text{ON}}/G_{\text{OFF}}=10\) and \(G_{\text{ON}}/G_{\text{OFF}}=100\) conductance ratios, respectively.
Figure 6: Convergence probability (A,C) and \(\overline{C}\) (B,D) as a function of the \((\Delta G/G)_{\text{static}}\) device-to-device conductance variations (A,B) and the \(G_{\text{OFF}}/G_{\text{ON}}\) conductance ratio at finite OFF conductance (C,D). Pink circles (lines) represent the results for optimal equivalent dynamic noise level (zero dynamic noise level). Device-to-device variations are modeled by a Gaussian conductance distribution.
observed in the noiseless network (pink line), but in a network with optimal equivalent noise the finite, \((G_{\rm OFF}/G_{\rm ON})>0.1\) values already yield a shallow but significant reduction of the convergence probability (pink circles).
### Externally induced stochasticity in memristive HNNs with suboptimal internal noise level
#### iv.6.1 External injection of current noise
The above considerations have demonstrated that rather large, \(\approx 11-16\%\) relative equivalent ON-state noise levels are required for the best network operation, which can be further boosted by annealing the noise from an even higher initial level. These noise levels are already at the border of the experimentally observed noise values, and especially in the pure metallic regime it is hardly possible to reach the optimal noise level in the network. On the other hand, this also means that the network is easily set to an operation regime where the overall noise is definitely smaller than the optimal level, i.e. in this regime it is possible to apply external noise injection with which the stochastic operation is optimized. This scheme is demonstrated in Figs. 7A,B (see also Figs. 1A,D), where the light green arrows illustrate noise injection to the bit-line current from an external tunable noise source. Mathematically this is represented by the proper modification of the update rule (Eq. 2):
\[x_{j}^{(t+1)}=\begin{cases}+1&\text{if }a_{j}^{(t)}\geq\theta_{j}+\xi_{j}^{(t)}, \\ -1&\text{if }a_{j}^{(t)}<\theta_{j}+\xi_{j}^{(t)},\end{cases} \tag{10}\]
where \(\xi_{j}^{(t)}\) is a stochastic variable with \(\sigma_{j}\) standard deviation representing the external noise injection (note that for the Max-Cut problem \(\theta_{j}=0\) applies). As proposed in Ref. [55], an additional memristive crossbar line with high-amplitude tunable noise characteristics could be applied to tailor the noise level in the bit lines separately. Here, we apply an even simpler scheme with a single external memristive (or non-memristive) tunable noise source representing a \(\sigma\) standard deviation of the \(\xi\) random variables. Along the multiplexing this external noise is added to the randomly chosen bit line. The green line (green symbols) in Figs. 7A,B represent the convergence probability and \(\overline{C}\) as a function of \(\sigma\) for constant (annealed) external noise amplitudes. These curves highly resemble the results where a constant (annealed) internal noise of the crossbar elements was applied (Figs. 3A,B and 4A,B). This is, however, an easily deducible correspondence, as the \(\sigma_{j}\) standard deviation of the external noise can be converted to an equivalent \(\left(\Delta G_{\rm ON}/G_{\rm ON}\right)_{\rm equivalent}\) relative conductance noise of the crossbar elements.
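A minimal sketch of the modified update rule of Eq. 10 is given below; the Gaussian form of the externally injected \(\xi\) is our assumption for illustration (white external noise was applied in the simulations).

```python
import numpy as np

def update_neuron(W, x, j, theta, sigma, rng=None):
    """Stochastic threshold update of Eq. 10: an externally injected
    noise term xi with standard deviation sigma is added to the
    threshold of the selected bit line j."""
    rng = rng or np.random.default_rng()
    a = W[j] @ x                    # bit-line activation a_j
    xi = rng.normal(0.0, sigma)     # external noise injection
    return 1 if a >= theta[j] + xi else -1
```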
complex optimization tasks thanks to their single-step matrix-vector multiplication capability, but the intrinsic noise of the memristive elements can be exploited as a hardware resource, introducing proper stochasticity to the network. As a main focus, we simulated the operation of the memristive HNNs relying on experimentally deduced, realistic noise characteristics. Based on a broad range of conductance-dependent noise characteristics in various memristive systems, we proposed a noise model describing the typical noise evolution along the variation of the conductance states. Relying on this model, we demonstrated distinct operation regimes, where either the ON-state or the OFF-state noise provides the dominant contribution. We also demonstrated that the relative conductance variation is not only a good measure of the noise amplitude, but is a highly relevant parameter describing the operation of the memristive HNNs: according to our simulations, the relative noise level required for the optimal network operation is found to be in the range of \(\Delta G/G\approx 11-16\%\), regardless of the color of the noise spectrum (white, pink or Lorentzian noise) or the size of the problem (\(60\times 60-300\times 300\)). We have shown that further performance enhancement can be achieved by noise annealing; however, a highly simplified and easily implemented double-step noise annealing scheme provides similar performance as the more refined, continuous superlinear noise annealing scheme. It is also found that the optimal noise level is at the top edge of the experimentally achievable relative noise levels, which means that the network is easily tuned to an operation regime with a suboptimal relative noise level, where the optimal operation can be set either by external noise injection or by a negative diagonal feedback, the latter introducing a chaotic network behavior. Finally, we have explored the effects of further non-idealities, such as the limited programming accuracy and the finite OFF-state conductance of the memristors. We have argued that any static non-ideality that deforms the weight matrix but still implements a noise-free HNN can only lead to a degradation of the network performance, i.e. for performance enhancement either true stochasticity (noise) or a non-ideal HNN with somewhat chaotic energy trajectories is required. It was, however, also found that static non-idealities, especially device-to-device variations with \((\Delta G/G)_{\text{static}}>0.025\), cause severe performance degradation in networks with an optimized dynamical noise level.
The rapidly growing field of memristor research is expected to deliver radically new IT solutions in the near future. We believe that our results contribute to this field by exploring the prospects of fully connected memristive networks utilizing the inherent stochasticity of memristors for probabilistic optimization algorithms.
###### Acknowledgements.
This research was supported by the Ministry of Culture and Innovation and the National Research, Development and Innovation Office within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004), the ÚNKP-22-2-1-BME-73 and ÚNKP-22-5-BME-288 New National Excellence Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund, and the NKFI K143169 and K143282 grants. Project no. 963575 has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the KDP-2020 funding scheme. Z.B. acknowledges the support of the Bolyai János Research Scholarship of the Hungarian Academy of Sciences. The authors are grateful to Dávid Krisztián and Péter Balázs for their contribution to the noise measurements on SiO\({}_{\text{x}}\) resistive switches.
## Author Declarations
### Conflict of Interest
The authors have no conflicts to disclose.
Figure 7: (A,B) The effect of external noise injection on the performance of the memristive HNN using the simulation results for the g05_60.0 [50] benchmark problem. The green line (circles) represents the convergence probability (A) and \(\overline{C}\) (B) as a function of the external noise’s standard deviation, \(\sigma\). As a reference, \(\mathbb{P}_{\text{conv}}\) and \(\overline{C}\) are also plotted for the case of constant and annealed internal noise (purple line and circles) using the rescaled, equivalent ON-state noise axis (see top axes). For the external noise injection white noise was applied, and the annealing protocol (circles) followed the continuous annealing scheme (Eq. 9). (C,D) The effect of diagonal self-feedback on the performance of the same memristive HNN using finite negative \(w\) values and zero noise. \(\mathbb{P}_{\text{conv}}\) and \(\overline{C}\) are plotted as a function of \(|w|\) for constant \(w\) (blue lines) and for continuously annealed \(w\) (see Eq. 9). The insets in (A) and (C) illustrate the external noise injection and diagonal feedback schemes similarly to Fig. 1A.
### Author Contributions
The program codes were developed by J.G.F. with supporting contribution from T.N.T. The simulations were run and the data analysis was performed by J.G.F., and the work was revised by Z.B. The noise measurements on SiO\({}_{\text{x}}\) were performed by former graduate students and J.G.F. under the supervision of Z.B. and A.H. Model calculations were performed by J.G.F., Z.B. and A.H. The manuscript was written by J.G.F. and A.H. All authors contributed to the discussions. The project was supervised by A.H. with support from Z.B. and T.N.T.
## Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2302.08797 | Deep comparisons of Neural Networks from the EEGNet family | Most of the Brain-Computer Interface (BCI) publications, which propose
artificial neural networks for Motor Imagery (MI) Electroencephalography (EEG)
signal classification, are presented using one of the BCI Competition datasets.
However, these databases contain MI EEG data from less than or equal to 10
subjects. In addition, these algorithms usually include only bandpass
filtering to reduce noise and increase signal quality. In this article, we
compared 5 well-known neural networks (Shallow ConvNet, Deep ConvNet, EEGNet,
EEGNet Fusion, MI-EEGNet) using open-access databases with many subjects next
to the BCI Competition 4 2a dataset to acquire statistically significant
results. We removed artifacts from the EEG using the FASTER algorithm as a
signal processing step. Moreover, we investigated whether transfer learning can
further improve the classification results on artifact filtered data. We aimed
to rank the neural networks; therefore, next to the classification accuracy, we
introduced two additional metrics: the accuracy improvement from chance level
and the effect of transfer learning. The former can be used with different
class-numbered databases, while the latter can highlight neural networks with
sufficient generalization abilities. Our metrics showed that the researchers
should not avoid Shallow ConvNet and Deep ConvNet because they can perform
better than the later published ones from the EEGNet family. | Csaba Márton Köllőd, András Adolf, Gergely Márton, István Ulbert | 2023-02-17T10:39:09Z | http://arxiv.org/abs/2302.08797v1 | # Deep comparisons of Neural Networks from the EEGNet family
###### Abstract
Most of the Brain-Computer Interface (BCI) publications, which propose artificial neural networks for Motor Imagery (MI) Electroencephalography (EEG) signal classification, are presented using one of the BCI Competition datasets. However, these databases contain MI EEG data from less than or equal to 10 subjects. In addition, these algorithms usually include only bandpass filtering to reduce noise and increase signal quality. In this article, we compared 5 well-known neural networks (Shallow ConvNet, Deep ConvNet, EEGNet, EEGNet Fusion, MI-EEGNet) using open-access databases with many subjects next to the BCI Competition 4 2a dataset to acquire statistically significant results. We removed artifacts from the EEG using the FASTER algorithm as a signal processing step. Moreover, we investigated whether transfer learning can further improve the classification results on artifact filtered data. We aimed to rank the neural networks; therefore, next to the classification accuracy, we introduced two additional metrics: the accuracy improvement from chance level and the effect of transfer learning. The former can be used with different class-numbered databases, while the latter can highlight neural networks with sufficient generalization abilities. Our metrics showed that the researchers should not avoid Shallow ConvNet and Deep ConvNet because they can perform better than the later published ones from the EEGNet family.
keywords: BCI, EEG, Neural Networks, EEGNet
## 1 Introduction
Artificial Neural Networks made one of the earliest significant impacts in the field of Brain-Computer Interfaces (BCI) when Schirrmeister et al. introduced Deep ConvNet and Shallow ConvNet in 2017 [1] for electroencephalographic (EEG) signal classification. Since then, neural networks have become one of the hottest topics in BCI literature.
BCIs are integrated systems that include software and hardware components. As presented by Wolpaw et al. [2], these systems record bioelectrical signals from the brain, extract useful information from the EEG-noise mixture, and convert it to computer commands. EEG is defined as the postsynaptic membrane potential fluctuation of neurons, recorded from the surface of the head. Figure 1 presents the components of a BCI system.
If a new system is developed for motor imagery (MI) signal classification, it is often tested and compared with the previously published ones on one of the BCI Competition databases: [3; 4; 5; 6]. However, these datasets contain records from less than or equal to 10 subjects. Other open-access databases
Figure 1: Components of a Brain-Computer Interface system
have EEG records from more than 50 subjects but are mostly avoided by researchers. One is the MI EEG dataset on PhysioNet [7], recorded with the BCI2000 software [8], which includes EEG records from 109 subjects. The other one was recorded utilizing the OpenBMI toolbox [9] and contains data from 54 subjects. Each subject in this dataset participated in two experimental days. In addition, we have recorded our own dataset, which includes 25 experiments from 9 subjects [10]. We hypothesize that databases with more than 20 experimental days are sufficient for BCI system comparison.
Next to the offline comparisons, the Cybathlon competition [11] was introduced. It aimed to investigate the reliability of BCI systems working in real-time, out-of-the-lab situations. 11 teams participated successfully in the BCI discipline of Cybathlon 2016 [12], and two published their concepts, training protocols, and BCI systems after the competition [13], [14]. As a continuation of this competition, the 2019 Cybathlon BCI Series and the 2020 Cybathlon Global Edition were organized, from which multiple teams shared their preparation and results [15]-[20].
Before neural networks, scientists focused on investigating and developing hand-crafted feature extraction methods combined with simple classification algorithms. Blankertz et al. [21] successfully used the Common Spatial Patterns (CSP) algorithm with a Linear Discriminant Analysis (LDA) classifier to control a cursor in one dimension. Barachant et al. [22] introduced Riemannian geometry for BCI with an LDA classifier to successfully classify EEG covariance matrices. Lotte and Guan [23] introduced a unifying theoretical framework for regularizing the CSP and compared it with 10 other regularized versions of the CSP algorithm. Another feature extraction algorithm building on the CSP is the Filter Bank Common Spatial Pattern (FBCSP) with a Naive Bayesian Parzen Window classifier [24], which was compared with the ConvNets [1], [25] on the BCI Competition IV 2a database. The winner of the BCI discipline of the Cybathlon competitions used the power spectral density of the EEG signals as a feature [13], [19] with a Gaussian classifier.
With the introduction of the Deep and Shallow ConvNets, a new trend started in BCI development. The focus shifted from hand-crafted features to creating neural networks which not just classify the signal but also include the feature extraction step. Lawhern et al. [25] introduced the EEGNet, which was inspired by former neural networks designed for EEG signal processing, including MI-based BCIs [1], [26]-[28]. It was demonstrated that EEGNet performs a feature extraction similar to the FBCSP. This
neural network inspired many scientists, which resulted in many improved versions of the EEGNet [29]-[42], creating a whole family of neural networks. Other publications outside the EEGNet family [43]-[49] highlight the importance of neural network-based BCI research.
Along with the development of neural networks, scientists started investigating the effect of transfer learning [50]. This method aims to transfer knowledge between two domains and increase the classification accuracy. Khademi et al. [51] used a CNN-LSTM hybrid model, pretrained on the ILSVRC subset of the ImageNet dataset, to classify MI EEG signals. This way, they aimed to transfer the knowledge of image classification and use it on spatial EEG images generated with the continuous wavelet transform using a complex Morlet mother wavelet. Another strategy is to utilize the entire EEG dataset and combine cross-subject and within-subject training, as presented in [49], [52], [53]. In this case, the knowledge is granted from subjects not used in the test set of the neural network. The network is pretrained on data from all but one subject, as it would be in a cross-subject training procedure. Then the data of the test subject is also split into train and test sets, as in within-subject training, and the training part is used for fine-tuning the pretrained neural network. We selected the latter version of transfer learning because it is architecture-independent, and we aimed to use it after artifact filtering.
In this article, all the experiments were conducted on data purified from artifacts because eye and muscle movement activity can distort the EEG signal [54]. Moreover, it was demonstrated that artifacts can successfully be used for BCI purposes [55]; however, in our view, a true BCI should not depend on artifacts, only on pure brain waves.
To reduce the computational time of the experiments, we have arbitrarily selected Shallow and Deep ConvNet [1] as predecessors of EEGNet, the EEGNet itself [25], the EEGNet Fusion [41], and the MI-EEGNet [39] from the EEGNet family.
## 2 Materials and Methods
The databases and neural networks are presented in this section, with the experimental setups and concepts. The code used in this study is available at: [https://github.com/kolcs/bionic_apps](https://github.com/kolcs/bionic_apps)
### Databases
In the following, we present the datasets which were used for the EEGNet family comparisons. The databases were processed in an "independent days" configuration, meaning that if a subject participated in an experiment multiple times on different experimental days, the data were handled as if they had been recorded from different subjects. According to our knowledge, EEG data can vary greatly due to many factors, such as the recording setup, the period of the day, and the mental state of the subjects. The latter could lead to poorer performance if the data from different days are merged per subject. With the independent days configuration, we aimed to overcome this problem and extend the number of subjects to strengthen the results of the statistical analyses.
#### 2.1.1 Physionet
The open-access database PhysioNet (Goldberger Ary L. et al., 2000) is a valuable resource for numerous physiological datasets. Among these datasets is the EEG Motor Movement/Imagery Dataset, which was captured by Schalk et al. [8] utilizing the BCI2000 paradigm control program. For convenience, we will refer to this specific dataset as the Physionet database. It encompasses EEG signals for four MI tasks obtained from 109 individuals: Left Hand, Right Hand, Both Hands, and Both Legs. The MI periods are of 4-second duration and are interspersed with 4-second-long Rest periods. The recordings were sampled at 160 Hz, over 64 channels, without using hardware filters.
Four subjects out of the 109 were dropped from the database before the experiments. For subject 89, the labels were incorrect. In the case of subjects 88, 92, and 100, the timing was incorrect: the execution of the MI tasks and the resting phases were 5.125 and 1.375 seconds, respectively. Moreover, the sampling frequency was 128 Hz instead of 160 Hz. Other articles using the Physionet database [41], [52], [56] also reported these problems.
#### 2.1.2 Giga
Lee et al. [9] published an EEG dataset that included three paradigms: MI, event-related potential, and steady-state visually evoked potential. The experimental paradigms were conducted with the OpenMBI toolbox, custom written in MATLAB. From these three paradigms, we selected the files corresponding to the MI EEG paradigm, which contains a 2-class classification problem, where the tasks are Left Hand and Right Hand movement imagination. The EEG signals were recorded with a 62-channel BrainAmp
amplifier system with a sampling rate of 1000 Hz. 54 subjects participated in the experiments, and each subject was present on two different experimental days. Therefore, concerning our independent days configuration, this dataset contains 108 subjects. To shrink the size of the raw EEG files, we resampled the data to a 500 Hz sampling frequency.
#### 2.1.3 BCI Competition IV 2a
Tangermann et al. [6] published the well-known and widely used BCI Competition IV database, which includes 5 sub-datasets with different paradigms and challenges. This popular dataset is used as a standard in the BCI literature to compare developed methods and algorithms. This article uses only the 2a sub-dataset, an MI dataset with Left Hand, Right Hand, Both Feet, and Tongue tasks. The EEG signals were recorded at a 250 Hz sampling frequency on 22 electrodes. The amplifier included a hardware bandpass filter between 0.5 and 100 Hz and a notch filter at 50 Hz to remove the powerline noise.
This dataset was recorded with the help of 9 experimental subjects, and each subject participated in two different experimental days. Therefore, concerning the independent days configuration, this dataset contains 18 subjects.
#### 2.1.4 Ttk
The TTK database [10] was recorded at the Research Centre for Natural Sciences (TTK, as a Hungarian abbreviation). A 64-channel ActiChamp amplifier system (Brain Products GmbH, Gilching, Germany) was used to capture the EEG signals, operated at a 500 Hz sampling frequency.
The EEG signals were recorded with a custom-designed, MATLAB-based paradigm control code called General Offline Paradigm (GoPar). GoPar was presented in the Supplementary Materials of [57] and is available at [https://github.com/kolcs/GoPar](https://github.com/kolcs/GoPar). The paradigm of the Physionet database inspired this code. GoPar was designed to conduct multiple different MI paradigms with 4 tasks. In the case of the TTK dataset, these tasks were Left Hand, Right Hand, Left Foot, and Right Foot. The paradigm started with a one-minute-long eye-open session, followed by a one-minute-long eye-closed one as the initial task. These tasks aimed to get the subjects' full attention, preparing them for the core part of the experiment, and served as a baseline. The paradigm continued with 2 warmup sessions where two out of the four MI tasks were selected and practiced overtly and covertly. The warmup sessions
aimed to instruct the subjects on how to execute MI tasks. After the warmup sessions, the pure MI sessions followed, with the four MI tasks presented in randomized order.
In total 25 experiments were conducted with 9 subjects. No hardware or software filters were applied.
### Signal processing
As a first step, the EEG signals were filtered with a 5th-order Butterworth bandpass filter between 1 and 45 Hz. Then a customized FASTER algorithm [54] was applied, as presented in [57], to remove artifacts corresponding to eye movement or muscle activity. The first step removed EEG channels which were assumed to be constantly noisy throughout an entire experiment, concerning the variance, the correlation, and the Hurst exponent. Secondly, epochs containing motion artifacts (chewing, yawning) were discarded by measuring the deviation from the channel average, the amplitude range, and the variance parameters. In the third step, eye-related artifacts were filtered out utilizing independent component analysis. The final step filtered out EEG channels from epochs individually which were still considered noisy, concerning the variance, median gradient, amplitude range, and channel deviation parameters. The fifth step of the original FASTER algorithm, which detected artifacts across subjects, was omitted because we aimed for our signal processing algorithm to be subject-specific.
The purified 4-second-long epochs were split into 2-second-long windows with 0.1-second shifts to increase the number of samples. Then the signals were normalized using standard scaling, where the mean of the data is set to zero and the standard deviation to one. These processed EEG windows were used to train and test the classifiers of the BCI system.
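A minimal sketch of this windowing and scaling step is given below; scaling each window independently is our reading of the description, as the exact scope of the standard scaling (per window, channel, or recording) is not detailed here.

```python
import numpy as np

def epoch_to_windows(epoch, fs, win_len=2.0, shift=0.1):
    """Split one artifact-filtered epoch (channels x samples) into
    overlapping 2 s windows with 0.1 s shifts."""
    w, s = int(win_len * fs), int(shift * fs)
    starts = range(0, epoch.shape[1] - w + 1, s)
    return np.stack([epoch[:, i:i + w] for i in starts])

def standard_scale(windows):
    """Zero mean and unit standard deviation (here: per window)."""
    mean = windows.mean(axis=(-2, -1), keepdims=True)
    std = windows.std(axis=(-2, -1), keepdims=True)
    return (windows - mean) / std
```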
In the case of within-subject classification, 5-fold cross-validation was conducted subject-wise, where the database was split on the epoch level to ensure that windows originating from the same epoch are exclusively used in either the train or test set. Approximately 10 % of the data from the training set were used as a validation set, splitting it on the epoch level.
### Nerual Networks
This section presents the used neural networks, our methods, and modifications concerning the original ones.
#### 2.3.1 Callbacks
During the training of the neural networks, a modified early stopping and model-saving strategy was applied. The conventional early stopping approach [58] monitors the validation loss and halts the learning process when it increases, to prevent the network from overfitting. Additionally, a patience parameter can be defined to determine the number of training epochs to wait for the monitored value to improve again. We extended this strategy with an additional patience-like value called "give up". This strategy aims to handle training situations where the validation loss first increases above its initial value and the neural network only starts to learn, i.e. the validation loss only starts to decrease, after a certain time. The give up value defines how many training epochs should be waited until the validation loss returns to its initial value. The original patience value is activated if the initial loss has been reached within the give up limit. Otherwise, the training is terminated.
Our model-saving strategy was designed to mirror the modified early stopping strategy. Until the initial validation loss was reached, model weights with the highest validation accuracy were saved. After reaching the initial validation loss, model weights were only saved if improvements were detected in both the validation loss and validation accuracy. Before the test phase, the best model weights were restored.
We conducted our experiments by setting the training epochs to 500, the give up value to 100, and the patience to 20.
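The give-up logic can be sketched as a custom Keras callback. The snippet below is our minimal reading of the strategy, not the authors' exact implementation (which is in the linked repository); only the stopping logic is shown, and the model-saving part is omitted.

```python
import tensorflow as tf

class GiveUpEarlyStopping(tf.keras.callbacks.Callback):
    """Early stopping with an extra 'give up' budget (a sketch).

    Until the validation loss first drops back to its initial value,
    training may run for at most `give_up` epochs; afterwards the
    conventional `patience` counter takes over.
    """

    def __init__(self, patience=20, give_up=100):
        super().__init__()
        self.patience, self.give_up = patience, give_up
        self.initial_loss = None
        self.best_loss = float("inf")
        self.wait = 0
        self.reached_initial = False

    def on_epoch_end(self, epoch, logs=None):
        loss = logs["val_loss"]
        if self.initial_loss is None:
            self.initial_loss = loss          # baseline validation loss
            return
        if not self.reached_initial:
            if loss <= self.initial_loss:
                self.reached_initial = True   # switch to patience mode
            elif epoch >= self.give_up:
                self.model.stop_training = True  # never recovered: give up
            return
        if loss < self.best_loss:
            self.best_loss, self.wait = loss, 0
        else:
            self.wait += 1
            if self.wait > self.patience:
                self.model.stop_training = True
```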
#### 2.3.2 ConvNets
For implementing the Deep and Shallow ConvNets, the source code of [25] was used, which applies a few modified parameters with respect to the originally published ones in [1]. No additional modifications were made to the architecture of the networks.
#### 2.3.3 EEGNets
The networks of the EEGNet family, the EEGNet [25], the EEGNet Fusion [41], and the MI-EEGNet [39], were all modified in such a way that they can automatically be used on databases with different sampling frequencies, instead of setting up the input parameters manually. In the article of the EEGNet [25], the authors explicitly state that the filter size of the first convolutional block should be half of the sampling rate. Therefore, in our implementation, instead of directly defining the size of the kernel,
it is calculated from the used signals sampling frequency. We followed this strategy in the case of the other two networks.
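For concreteness, this sampling-frequency-aware first convolution can be sketched as follows; the helper name and filter count are illustrative, and only the kernel-length rule (half the sampling rate, as suggested in [25]) comes from the text.

```python
# Sketch of the fs-derived first temporal convolution; names are ours.
import tensorflow as tf

def first_temporal_conv(fs, n_filters=8):
    kern_length = int(fs // 2)  # half the sampling rate, as in [25]
    return tf.keras.layers.Conv2D(n_filters, (1, kern_length),
                                  padding="same", use_bias=False)
```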
### Transfer learning
In addition to subject-wise learning, we also investigated the effect of transfer learning. First, the test subjects were selected as distinct groups of 10. The remaining subjects, called pre-train subjects, were used to set the initial weights of the neural networks. A validation set was separated from the pre-train data to use with our modified early stopping and model-saving strategies. When the pre-training phase converged, either by reaching the maximum number of training epochs or by being stopped by the early stopping strategy, the best weights of the network were stored. For each test subject, 5-fold within-subject cross-validation was conducted as described in the third paragraph of Section 2.2. Before each cross-validation step, the saved model weights were loaded, and the training set of the selected test subject was used as fine-tuning data for the neural networks. During fine-tuning, validation sets were again used with our early stopping and model-saving strategies.
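A hedged sketch of this protocol is shown below; `build_model`, `load_fold`, and the data variables are hypothetical placeholders, and `GiveUpEarlyStopping` is the callback sketched in Section 2.3.1.

```python
# Hedged sketch of the transfer-learning protocol; build_model, load_fold,
# test_subject, and the data arrays are placeholders, not the authors' API.
model = build_model()
model.fit(x_pretrain, y_pretrain, validation_data=(x_val, y_val),
          epochs=500, callbacks=[GiveUpEarlyStopping()])
model.save_weights("pretrained.h5")

for fold in range(5):  # 5-fold within-subject CV for one test subject
    x_tr, y_tr, x_va, y_va = load_fold(test_subject, fold)  # epoch-level split
    tuned = build_model()
    tuned.load_weights("pretrained.h5")  # start from the pre-trained weights
    tuned.fit(x_tr, y_tr, validation_data=(x_va, y_va),
              epochs=500, callbacks=[GiveUpEarlyStopping()])
```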
### EEGNet family comparison
Many computational experiments were conducted on each database (Physionet, Giga, TTK, and BCI Competition IV 2a) to compare the neural networks of the EEGNet family (Shallow ConvNet, Deep ConvNet, EEGNet, EEGNet Fusion, MI-EEGNet). If an experimental subject participated in multiple experiments on different days, the data were handled as if multiple distinct subjects had participated instead; we call this the independent days configuration. However, on the BCI Competition IV 2a dataset, we also conducted experiments where the data of a subject were combined regardless of the recording date, because this makes the results more comparable with previous BCI studies. We refer to this experiment as the "merged subject data" configuration.
A within-subject and a transfer learning phase were conducted on each database for each neural network. The results of the cross-validations were collected, and a normality test was used to select the proper statistical test: the paired t-test for normally distributed and the Wilcoxon signed-rank test for non-normally distributed accuracy levels. The received p-values were adjusted with Bonferroni correction. The significance level was preset to 0.05.
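The test selection can be sketched as follows; the Shapiro-Wilk test is our assumption for the normality check, as the text does not name the specific test used.

```python
# Sketch of the statistical pipeline; acc_a and acc_b are two networks'
# cross-validated accuracies on the same folds. Shapiro-Wilk is an assumption.
from scipy import stats

def compare(acc_a, acc_b, n_comparisons, alpha=0.05):
    normal = (stats.shapiro(acc_a).pvalue > alpha and
              stats.shapiro(acc_b).pvalue > alpha)
    test = stats.ttest_rel if normal else stats.wilcoxon
    p = test(acc_a, acc_b).pvalue
    return min(p * n_comparisons, 1.0)  # Bonferroni-adjusted p-value
```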
We aimed to rank the neural networks; therefore, two additional metrics were introduced next to the pure accuracy comparison. These metrics were investigated on the databases in the independent days configuration. The first metric is the accuracy improvement of the EEGNet family with respect to the chance level. One advantage of this metric is that it can be used on databases with different numbers of classes. This metric was calculated and averaged for both within-subject and transfer learning.
The second metric focused on the effect of transfer learning, which was investigated by comparing the results of within-subject classification with those of transfer learning classification. The difference between the two methods was calculated for each database in the independent days configuration.
### Significance investigation of databases
The number and quality of significant differences were investigated to quantitatively test our hypothesis that only databases with more than 20 experimental days are sufficient for BCI system comparison. For each database configuration, we calculated two numbers: the sum of the significance levels, where the levels were categorized as presented in Table 1, and the count of significant differences. We correlated these numbers with the number of subjects in the databases.
## 3 Results
After receiving the 5-fold cross-validated accuracy levels for all combinations of the 4 databases, 5 neural networks, and 2 learning methods (within-subject and transfer learning), the normality tests showed that the accuracies were not normally distributed. Therefore, the Wilcoxon statistical test was used with Bonferroni correction for the significance analysis. The results are presented in Figure 2. In general, it can be concluded that transfer learning significantly improved the results on all databases except BCI Competition IV 2a.
| **Level** | **p-value range** |
| --- | --- |
| 1 | 1e-2 < p <= 5e-2 |
| 2 | 1e-3 < p <= 1e-2 |
| 3 | 1e-4 < p <= 1e-3 |
| 4 | p <= 1e-4 |

Table 1: Levels of the significance tests
Figure 2: EEGNet family comparison on 4 databases, handling the datasets in the independent days configuration.
The p-value annotation legend is the following: ns: 5e-2 < p; *: 1e-2 < p <= 5e-2; **: 1e-3 < p <= 1e-2; ***: 1e-4 < p <= 1e-3; ****: p <= 1e-4. The mean of the data is marked with the '+' symbol.
On the Physionet (Figure 2A), in the case of within-subject classification, the MI-EEGNet reached the significantly highest accuracy (0.4646) compared to the other methods, while in the case of transfer learning, the Deep ConvNet showed the significantly highest performance (0.5377).
On the Giga database (Figure 2B), the MI-EEGNet reached the highest accuracies, 0.725 and 0.7724, in the case of within-subject and transfer learning, respectively. This network significantly outperformed the others, except compared to the Shallow ConvNet in transfer learning mode.
Analyzing the results from the TTK dataset (Figure 2C), it can be concluded that the EEGNet reached the highest accuracies: 0.4437 and 0.4724 in the case of within-subject and transfer learning, respectively. These results were significantly higher than the other networks except for the MI-EEGNet.
In the case of the BCI Competition IV 2a dataset handled as independent days (Figure 2D), the Shallow ConvNet reached accuracies of 0.719 and 0.733 in the case of within-subject and transfer learning, respectively. In the case of transfer learning, the difference was significant compared to the other neural networks. However, concerning the within-subject classification, the results were comparable with the EEGNet and the MI-EEGNet. On the other hand, when the data corresponding to one subject were merged, ignoring the experimental days, the Shallow ConvNet again reached the highest accuracies, 0.749 and 0.7533 in the case of within-subject and transfer learning, respectively; however, the differences between the neural networks were insignificant.
To establish a hierarchy among the neural networks, we analyzed the accuracy improvement of the EEGNet family relative to the chance level. Table 3 displays the ranking of these networks based on their training modes. Among all the databases in the independent days setup, MI-EEGNet displayed the largest average improvement in within-subject classification. On the other hand, the Shallow ConvNet outperformed the other networks in transfer learning.
Another factor we considered was the enhancement of the neural networks through transfer learning, which we present in Table 2. Deep ConvNet showed the most substantial improvement, achieving results 0.1 higher on average than in within-subject classification mode. In contrast, Shallow ConvNet, ranked first in transfer learning performance, improved by only 0.05 compared to within-subject classification.
Finally, the databases were ranked concerning the number of significant differences. Table 4 presents the sum of the significance ranges (corresponding to the number of stars in the figures) and the count of significant differences
Figure 3: EEGNet family comparison on BCI Competition IV 2a. The p-value annotation legend is the following: ns: 5e-2 < p; *: 1e-2 < p <= 5e-2; **: 1e-3 < p <= 1e-2; ***: 1e-4 < p <= 1e-3; ****: p <= 1e-4. The mean of the data is marked with the '+' symbol.
| Mode | Rank | Classifier | Avg. acc. impr. from chance level |
| --- | --- | --- | --- |
| Within subject | 1 | MI-EEGNet | 0.2306 |
| | 2 | Shallow ConvNet | 0.2071 |
| | 3 | EEGNet | 0.1997 |
| | 4 | EEGNet Fusion | 0.1871 |
| | 5 | Deep ConvNet | 0.1249 |
| Transfer learning | 1 | Shallow ConvNet | 0.2721 |
| | 2 | Deep ConvNet | 0.2598 |
| | 3 | MI-EEGNet | 0.2537 |
| | 4 | EEGNet | 0.2521 |
| | 5 | EEGNet Fusion | 0.2312 |

Table 3: Ranking of the neural networks by average accuracy improvement from the chance level on the databases in the independent days configuration
| Rank | Neural network | Physionet | Giga | TTK | BCI Comp IV 2a | Avg. impr. |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Deep ConvNet | 0.1557 | 0.1418 | 0.0708 | 0.0614 | 0.1075 |
| 2 | Shallow ConvNet | 0.0928 | 0.0497 | 0.0509 | 0.0141 | 0.0519 |
| 3 | EEGNet | 0.0716 | 0.0487 | 0.0288 | -0.0065 | 0.0357 |
| 4 | EEGNet Fusion | 0.0381 | 0.0586 | 0.0379 | 0.0007 | 0.0338 |
| 5 | MI-EEGNet | -0.0058 | 0.0475 | 0.0564 | -0.0015 | 0.0241 |

Table 2: Classification improvements by transfer learning on the databases in the independent days configuration
together with the number of subjects in the databases. The sum of the significance ranges and the number of subjects in the databases were found to be strongly correlated, r(3) = 0.7709, but the correlation was not significant (p = 0.127014 > 0.05).
## 4 Discussion
Most articles presenting MI EEG signal classification with artificial neural networks of the EEGNet family present and compare their results on one of the BCI competition databases. This article aimed to highlight that datasets with many subjects are required for statistically significant comparisons. Therefore, we compared 5 neural networks of the EEGNet family on 4 databases with various numbers of subjects. Concerning the datasets, we introduced the independent days configuration, where the data of a subject who participated in multiple experimental days were counted as if multiple subjects had participated instead. With this configuration, we aimed to extend the number of experiments and increase the significance level of the comparisons. All four databases, namely the BCI Competition IV 2a database [6], the Physionet [7], [8], the Giga [9], and our TTK dataset [10], were used in this configuration. In the case of Physionet, the authors state that the experiments were conducted with 109 volunteers; therefore, the independent days configuration is irrelevant here. Concerning the BCI Competition IV 2a database, we conducted another experiment where the data of each subject were combined into one ("merged subject data"). With this configuration, we aimed to conduct classifications comparable with other reports. In addition, we used these results to test our hypothesis about the correlation between
| Database | Significance sum | Significance count | Subjects |
| --- | --- | --- | --- |
| Physionet | 63 | 18 | 105 |
| Giga | 49 | 15 | 108 |
| TTK | 45 | 16 | 25 |
| BCI Comp IV 2a | 31 | 15 | 18 |
| BCI Comp IV 2a – merged subject data | | | |

Table 4: Significance investigation
the number of subjects in the database and the number of significant comparisons (Table 4). The correlation between the number of subjects and our significance metric was strong but not significant. However, Table 4 shows that a database with 9 subjects is insufficient for significance testing. Therefore, we suggest that databases with many subjects, such as the Physionet or the Giga datasets, should be used to compare BCI systems. To further investigate our hypothesis, additional open-access MI EEG databases are required.
We would also like to highlight that our experiments used artifact-filtered EEG data, unlike the articles introducing the investigated neural networks [1], [25], [39], [41], which only included bandpass filtering and standardization before classification. In our signal processing step, in addition to the 5th-order Butterworth bandpass filter from 1 to 45 Hz, we utilized the FASTER [54] artifact removal algorithm to detect and remove artifacts corresponding to eye movement and muscle activity. This is highly important; otherwise, artifacts may be classified instead of pure EEG signals. It was demonstrated in [55] that electromyography can be successfully used for BCI purposes.
Most articles investigating the effect of transfer learning present results on datasets without artifact filtering [49], [51]-[53], [59]. We showed that on databases with many subjects (Physionet and Giga), even after artifact filtering, transfer learning significantly improves the accuracy of the neural networks compared to within-subject classification (Figure 2 A & B). We also showed that Deep ConvNet gains the most from transfer learning considering all the databases (Table 2). On the other hand, the Shallow ConvNet reached the highest results concerning our "improvement from the chance level" metric among all transfer-learning-trained neural networks (Table 3). Nevertheless, the difference between the ConvNets is insignificant on the Physionet and the Giga databases (Figure 2 A & B).
Our results highlight that various aspects should be considered for a sound ranking of neural networks. Using unfiltered datasets with few subjects and considering only the accuracy differences between networks may lead to unreliable conclusions.
In future work, it would be interesting to conduct transfer learning utilizing data from multiple databases. However, one must overcome the problem that different datasets are recorded with different EEG amplifiers; therefore, the positions and number of electrodes and the sampling frequency can differ.
## 5 Conclusion
In this paper, we critically compared neural networks of the EEGNet family, namely the Shallow ConvNet, the Deep ConvNet, the EEGNet, the EEGNet Fusion, and the MI-EEGNet, on MI EEG signal classification tasks. The comparisons were conducted utilizing the BCI Competition IV 2a database, as well as the Giga and the Physionet databases, which include many subjects. In addition, we also used our TTK dataset. Within-subject and transfer learning classifications were conducted for each combination of database configuration and neural network. All the received results were 5-fold cross-validated. The classification results were acquired after cleaning the raw signals from artifacts with the FASTER algorithm.
To the best of our knowledge, no articles have been published before that compare neural networks of the EEGNet family on artifact-filtered databases that include high numbers of subjects (>20) with cross-validated results. We also demonstrated that transfer learning can improve the classification results, even on artifact-filtered MI EEG data. To rank the neural networks, we introduced two metrics. The first considered the neural networks' accuracy improvement from the chance level, while the second investigated the classification improvements by transfer learning. These metrics showed that Shallow ConvNet and Deep ConvNet can perform better than the later published networks of the EEGNet family. Finally, we showed that databases with few subjects (\(\leq\)10) are not sufficient for statistically comparing BCI systems.
## CRediT Authorship contribution statement
**Csaba Kollod**: Conceptualization, Methodology, Experiments, Software, Statistics, Visualization, Writing - original draft
**Andras Adolf**: Software - FASTER algorithm implementation, Experiments, Writing - review & editing
**Gergely Marton**: Experiments, Writing - review & editing
**Istvan Ulbert**: Supervision, Project administration, Writing - review & editing
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work
reported in this paper.
## Acknowledgement
We thank our Pilots for their weekly availability and participation in the experiments. We are also grateful for their regular feedback about the system.
Prepared with the professional support of the Doctoral Student Scholarship Program of the Co-operative Doctoral Program of the Ministry of Innovation and Technology financed from the National Research, Development and Innovation Fund.
## Data availability statement
Databases and source codes are available under the following links.
**Source codes:**
**Datasets:**
## Supplementary Materials
Excel tables about database accuracies: Supplementary Materials.xlsx |
2306.08274 | A Simple and Scalable Graph Neural Network for Large Directed Graphs | Node classification is one of the hottest tasks in graph analysis. Though
existing studies have explored various node representations in directed and
undirected graphs, they have overlooked the distinctions of their capabilities
to capture the information of graphs. To tackle the limitation, we investigate
various combinations of node representations (aggregated features vs. adjacency
lists) and edge direction awareness within an input graph (directed vs.
undirected). We address the first empirical study to benchmark the performance
of various GNNs that use either combination of node representations and edge
direction awareness. Our experiments demonstrate that no single combination
stably achieves state-of-the-art results across datasets, which indicates that
we need to select appropriate combinations depending on the dataset
characteristics. In response, we propose a simple yet holistic classification
method A2DUG which leverages all combinations of node representations in
directed and undirected graphs. We demonstrate that A2DUG stably performs well
on various datasets and improves the accuracy up to 11.29 compared with the
state-of-the-art methods. To spur the development of new methods, we publicly
release our complete codebase under the MIT license. | Seiji Maekawa, Yuya Sasaki, Makoto Onizuka | 2023-06-14T06:24:58Z | http://arxiv.org/abs/2306.08274v2 | # Why Using Either Aggregated Features or Adjacency
###### Abstract
Node classification is one of the hottest tasks in graph analysis. In this paper, we focus on the choices of node representations (aggregated features vs. adjacency lists) and the edge direction of an input graph (directed vs. undirected), which have a large influence on classification results. We address the first empirical study to benchmark the performance of various GNNs that use either combination of node representations and edge directions. Our experiments demonstrate that no single combination stably achieves state-of-the-art results across datasets, which indicates that we need to select appropriate combinations depending on the characteristics of datasets. In response, we propose a simple yet holistic classification method **A2DUG** which leverages all combinations of node representation variants in directed and undirected graphs. We demonstrate that A2DUG stably performs well on various datasets. Surprisingly, it largely outperforms the current state-of-the-art methods in several datasets. This result validates the importance of the adaptive effect control on the combinations of node representations and edge directions.
## 1 Introduction
The semi-supervised node classification task is one of the hottest topics in graph analysis. Its goal is to predict the unknown labels of nodes by using the topology structure and node attributes of partially labeled networks. Many algorithms [2; 6; 8; 12; 13; 14; 16; 21; 23; 32; 35; 38; 42; 43] have been proposed to tackle this task, and they have gained wide research interest from various domains including chemistry [4; 7], physics [29], social science [12], and neuroscience [41]. Since nodes and edges are core components of graphs, we focus on to what extent we can improve the classification accuracy by leveraging various node representations and edge directions. Existing techniques primarily emphasize two key aspects: 1) node representations (aggregated features vs. adjacency lists), and 2) edge directions within input graphs (directed vs. undirected).
**Aggregated feature vs. adjacency list.** There are two major approaches to constructing node representations: 1) Graph Neural Networks (GNNs) adopting feature aggregation [2; 12; 17; 30; 32; 39] and 2) methods using adjacency lists as node features [14; 15; 16]. As for the first approach, GNNs adopt feature aggregation, which aggregates node features/embeddings from local neighbor nodes to obtain denoised features (which we call _aggregated features_). The second approach utilizes an adjacency matrix, i.e., the adjacency list of a node (node ID list), as its node feature. Hence, aggregated features and adjacency lists capture different characteristics of nodes and complement each other.
Figure 1 depicts two motivating examples in which methods using only either aggregated features or adjacency lists cannot correctly predict the node labels. In these example graphs, nodes indicate papers, edges indicate citations, and each node has its attribute vector.
In Figure 1(a), the prediction target is the publication year of each paper. Using adjacency lists, nodes \(p\) and \(p^{\prime}\) are distinguishable as they connect to different nodes. Since nodes \(p\) and \(q\) share the same adjacency lists and node features, the label of node \(p\) is correctly predicted as \(2016\), the same label as that of \(q\). Similarly, the label of node \(p^{\prime}\) is accurately predicted as \(2018\) using information from labeled node \(q^{\prime}\). However, methods utilizing feature aggregation fail to predict the distinct labels of nodes \(p\) and \(p^{\prime}\) because they share the same aggregated features, i.e., the same node features within 1-hop and 2-hop, resulting in identical embeddings. Next, Figure 1(b) shows another example graph in which labels indicate the research fields of papers, e.g., NLP and DB. In this example, the methods using adjacency lists fail to predict the label of node \(t\) because there are no intersections between the adjacency lists of node \(t\) and the labeled node \(x\), which shares the same label as node \(t\). As for methods using feature aggregation, node \(t\) obtains an embedding similar to those of its locally neighboring nodes \(u,v,\) and \(w\). As a result, they are expected to correctly predict the label of node \(t\), i.e., "DB".
**Directed graph vs. Undirected graph.** Most methods focus on undirected graphs [2, 12, 13, 21, 35, 38, 42, 43]. Since edges in most graphs are potentially associated with direction information that may contribute to classification quality, several existing studies have addressed a node classification task for directed graphs. First, a small number of GNN algorithms [30, 31, 39] utilizes feature aggregation in directed graphs rather than in undirected graphs. Second, as for methods using adjacency lists, users can choose whether to use a directed or undirected adjacency list for input. However, the choice of utilizing a directed or undirected graph depends on the characteristics of datasets.
We give two examples to show that users need to choose whether to use the input graph as directed or undirected. In Figure 1(a), since the prediction target is the publication years of papers, edge directions play a crucial role, i.e., edge directions indicate time directions cited from new papers to old ones. For instance, nodes \(r\) and \(s\) have no incoming edge, and thus methods using directed graphs can predict that they are recent papers, i.e., their labels are \(2022\). If we choose the input graph as undirected, methods cannot utilize the time direction. We also give another example in which an undirected graph is preferable in Figure 1(b). In this example, the nodes in a densely connected subgraph have the same label. For instance, node \(t\) does not receive any information from its neighbors since it has no incoming edge. This means that methods using only directed graphs fail to utilize the existence of these edges. Hence, users need to adaptively select a directed or undirected graph depending on the dataset.
**Issues in existing studies.** The above discussions on node representations and edge directions clarify that users need to select whether to use aggregated features or adjacency lists and whether to treat an input graph as directed or undirected. However, it is burdensome and difficult to choose an appropriate combination for each dataset. Nevertheless, no studies have comprehensively investigated the performance of methods using either combination of node representations and edge directions with various directed graphs.
**Contributions.** First, we address the following research question: **Q1.** _To what extent do node representations (aggregated features vs. adjacency lists) and edge directions (directed vs. undirected) affect node classification quality?_ To answer this question, we conduct experiments using various existing methods including state-of-the-art methods on eight directed graphs. The experiments demonstrate that classification results highly depend on the characteristics of graphs, e.g., methods using edge directions perform well on a graph in which the prediction target is publication years (arxiv-year [16]) and methods ignoring edge directions perform well on a graph in which the prediction target is research fields (ogbn-arxiv [10]). Hence, we address the following research question: **Q2.** _Can we adaptively choose the best combination of node representations and edge
Figure 1: Motivating examples. The red and white nodes indicate labeled and unlabeled nodes belonging to training and validation/test sets, respectively. To make the explanation easier to understand, we show the ground truth labels of all nodes.
directions for each dataset?_ To answer this question, we propose a simple yet holistic method, **A2DUG**, that leverages all combinations of node representation variants in directed and undirected graphs, as illustrated in Figure 2. Further, it achieves high scalability by adopting precomputation-based GNNs [21; 34]. We summarize key insights through our experiments as follows.
* Our empirical study demonstrates that no existing methods stably perform well across datasets since they use only either combination of node representations and edge directions.
* By incorporating both aggregated features and adjacency lists in directed/undirected graphs into the model training, the performance of A2DUG is robust across various datasets. Surprisingly, it achieves better results than state-of-the-art methods by large margins in arxiv-year, snap-patents, and wiki, despite its simple architecture. This is because A2DUG can adaptively combine and control the effects of aggregated features and adjacency lists in directed/undirected graphs rather than simply selecting either combination.
* A2DUG scales well to the largest dataset wiki having more than \(0.3\) billion edges because it adopts scalable batchwise training, similarly to precomputation-based GNNs.
We hope that this paper demonstrates the potential of future research on the combination of aggregated features and adjacency lists in directed/undirected graphs. To spur the development of new methods, we publicly release our complete codebase2 under the MIT license.
Footnote 2: [https://github.com/seijimaekawa/A2DUG](https://github.com/seijimaekawa/A2DUG)
## 2 Preliminaries
An _attributed graph with class labels_ is a triple \(G=(\mathbf{A},\mathbf{X},\mathbf{C})\) where \(\mathbf{A}\in\{0,1\}^{n\times n}\) is an adjacency matrix, \(\mathbf{X}\in\mathbb{R}^{n\times d}\) is an attribute matrix assigning attributes to nodes, and a class matrix \(\mathbf{C}\in\{0,1\}^{n\times y}\) contains class information of each node, and \(n,d,y\) are the numbers of nodes, attributes, and classes, respectively. We denote a transposed adjacency matrix by \(\mathbf{A}^{\top}\), where \(\mathbf{A}^{\top}_{ij}=\mathbf{A}_{ji}\). Given an adjacency matrix \(\mathbf{A}\), we can obtain an undirected adjacency matrix \(\mathbf{A}^{\text{und}}\in\{0,1\}^{n\times n}\), where \(\mathbf{A}^{\text{und}}_{ij}=\mathbf{A}^{\text{und}}_{ji}=1\) if \(\mathbf{A}_{ij}=1\) or \(\mathbf{A}_{ji}=1\) otherwise \(\mathbf{A}^{\text{und}}_{ij}=\mathbf{A}^{\text{und}}_{ji}=0\). Also, we define \(\Omega_{l}\) as the class for label \(l\), i.e., the set of nodes labeled with \(l\). We define the degree matrix \(\mathbf{D}=\text{diag}(D_{1},\dots,D_{n})\in\mathbb{R}^{n\times n}\) as a diagonal matrix, where \(D_{i}\) expresses the degree of node \(i\). We also define an identity matrix \(\mathbf{I}=\text{diag}(1,\dots,1)\in\mathbb{R}^{n\times n}\).
**Problem Definition (Node classification).** We split nodes into train/validation/test sets. Given an adjacency matrix \(\mathbf{A}\), an attribute matrix \(\mathbf{X}\), and a partial class matrix \(\mathbf{C}^{\prime}\) which contains class information of nodes in the train/validation sets, we predict the labels of the nodes in the test set.
### Prior Work of Node Classification
To address the aforementioned node classification task, existing studies have proposed a variety of methods including GNNs and methods using adjacency lists. We briefly explain their motivations and several key instances. We summarize the input of existing methods and A2DUG in Table 1.
Figure 2: Our proposal **A2DUG** leverages both aggregated features and adjacency lists in directed/undirected graphs through MLP and GNN, respectively. \(\text{MLP}_{\mathbf{X}}\), \(\text{MLP}_{\mathbf{A}}\), and \(\text{MLP}_{\mathbf{A}^{\text{und}}}\) create node representations for node features \(\mathbf{X}\), adjacency lists \(\mathbf{A}\), and undirected adjacency lists \(\mathbf{A}^{\text{und}}\). GNN and \(\text{GNN}^{\text{und}}\) compute aggregated features by inputting node features and adjacency lists. Finally, it concatenates the embeddings from MLP and GNN, and then outputs the results via \(\text{MLP}_{\text{final}}\). For the sake of brevity, we omit a transposed adjacency list in this figure (see the details in Section 3).
**GNNs using feature aggregation in undirected graphs.** Feature aggregation is an effective approach to obtaining node representations denoised by locally neighboring nodes, e.g., the average, sum, or max pooling of neighboring nodes, specifically for homophilous graphs3. Feature aggregation is operated in the graph convolution proposed in [12]. Since then, many successor GNN techniques [2; 6; 8; 13; 21; 23; 32; 35; 38; 42; 43] have adopted feature aggregation in order to obtain node representations/embeddings suitable for various downstream tasks, including node classification and link prediction. These GNNs do not directly use adjacency lists as node features and focus only on undirected graphs.
Footnote 3: In homophilous graphs, nodes in the same class tend to be connected.
**GNNs for directed graphs.** While many studies in the literature on GNNs have addressed developing techniques for undirected graphs, several studies [18; 30; 31; 39] have proposed techniques for node classification in directed graphs. DiGraph [30] extends the spectral-based graph convolution to directed graphs by leveraging the inherent connections between the graph Laplacian and stationary distributions of PageRank [24]. MagNet [39] utilizes a complex Hermitian matrix known as the magnetic Laplacian, whose real and imaginary parts represent the undirected and directed edges, respectively. However, similar to GNNs for undirected graphs, these GNNs for directed graphs do not use adjacency lists as node features. Several studies [9; 33] have addressed recommendation tasks, rather than node classification, in directed graphs; they are out of our scope.
**Methods using adjacency lists as node features.** Several node embedding methods [11; 15; 36] leverage an adjacency matrix as node features. LINKX [16] achieves state-of-the-art performance on non-homophilous graphs. However, LINKX obtains lower accuracy on homophilous graphs than existing GNNs since it does not adopt the graph convolution that is tailored to capture homophily. A recent work, GloGNN++ [14], creates node features by combining an adjacency matrix and an attribute matrix. It then aggregates information from global nodes in the graph, i.e., nodes having similar combined node features, while other GNNs aggregate information from local neighbors4. Since these methods use adjacency lists as node features, they can easily be applied to directed graphs by replacing undirected adjacency lists with directed adjacency lists. However, users need to decide whether to use a directed or undirected graph for each dataset. Also, in directed graphs, inverse edges may have different semantics from the original edges. Hence, choosing an appropriate input for these methods increases the users' burden. In summary, no existing methods combine aggregated features and adjacency lists in directed/undirected graphs even though they complement each other.
Footnote 4: As we explained in this subsection, we categorize GloGNN++ into methods using adjacency lists since it utilizes adjacency lists for node features and does not use local feature aggregation.
### Related Work of Benchmarking GNNs
Several studies [3; 5; 37] empirically evaluate various GNNs to benchmark their performance from diverse perspectives such as scalability to large-scale graphs, accuracy in various tasks, and the impact of components in GNN architectures. Other studies [19; 25] aim to clarify the strengths and weaknesses of GNNs by generating various synthetic graphs. To the best of our knowledge, no study has comprehensively explored both GNNs and methods utilizing adjacency lists with respect to edge directions.
| Methods | Aggregated feature | Adjacency list | Directed graph | Undirected graph |
| --- | --- | --- | --- | --- |
| GNNs for undirected graphs | ✓ | ✗ | ✗ | ✓ |
| GNNs for directed graphs | ✓ | ✗ | ✓ | ✗ |
| Methods using adjacency lists | ✗ | ✓ | ✓* | ✓* |
| A2DUG (proposal) | ✓ | ✓ | ✓ | ✓ |

Table 1: Node representations and edge directions of existing methods and A2DUG. ✓ (✗) indicates that methods do (do not) support the node representation or edge direction. ✓* indicates that methods use either a directed or an undirected graph.
## 3 A2DUG: A Simple yet Holistic Method Leveraging Aggregated Features and Adjacency Lists in Directed/Undirected Graphs
We propose a simple yet holistic method leveraging **A**ggregated features and **A**djacency lists in **D**irected/**U**ndirected **G**raphs (**A2DUG**) that satisfies two key design criteria for effectiveness and scalability: **D1** leveraging both aggregated features and adjacency lists in directed/undirected graphs, and **D2** adopting precomputation-based GNNs for minibatch training.
**D1: Leveraging both aggregated features and adjacency lists in directed/undirected graphs.** First, to adaptively control the effects of aggregated features and adjacency lists in directed/undirected graphs during model training, we simply input all the combinations to multi-layer perceptrons (MLPs) as follows:
\[\mathbf{Y}=\text{MLP}_{\text{final}}\Big{(}\sigma(\mathbf{H_{X}}\|\mathbf{H_{A}}\|\mathbf{H_{A^{\top}}}\|\mathbf{H_{A^{\text{und}}}}\|\mathbf{H_{\text{GNN}}}\|\mathbf{H_{\text{GNN}^{\top}}}\|\mathbf{H_{\text{GNN}^{\text{und}}}})\Big{)}, \tag{1}\]
where \(\mathbf{Y}\) denotes the predicted labels, \(\|\) indicates a concatenation operation, \(\sigma\) indicates an activation function, e.g., ReLU, and \(\mathbf{H_{X}},\mathbf{H_{A}},\mathbf{H_{A^{\top}}},\mathbf{H_{A^{\text{und}}}},\mathbf{H_{\text{GNN}}},\mathbf{H_{\text{GNN}^{\top}}},\mathbf{H_{\text{GNN}^{\text{und}}}}\) are the node representation matrices of node features, adjacency lists, transposed adjacency lists, undirected adjacency lists, aggregated features in directed graphs, transposed aggregated features in directed graphs, and aggregated features in undirected graphs, respectively. We include the node representations from _transposed_ directed graphs5, i.e., \(\mathbf{H_{A^{\top}}}\) and \(\mathbf{H_{\text{GNN}^{\top}}}\), since inverse edges may have different semantics from the original edges. Each \(\mathbf{H}\) is formulated as follows:
Footnote 5: Transposed directed graphs indicate graphs where edge directions are opposite to their original graphs.
\[\mathbf{H_{X}}=\text{MLP}_{\mathbf{X}}(\mathbf{X}),\quad\mathbf{H_{A}}=\text{MLP}_{\mathbf{A}}(\mathbf{A}),\quad\mathbf{H_{A^{\top}}}=\text{MLP}_{\mathbf{A}^{\top}}(\mathbf{A}^{\top}),\quad\mathbf{H_{A^{\text{und}}}}=\text{MLP}_{\mathbf{A}^{\text{und}}}(\mathbf{A}^{\text{und}}), \tag{2}\]
\[\mathbf{H_{\text{GNN}}}=\text{GNN}(\mathbf{A},\mathbf{X}),\quad\mathbf{H_{\text{GNN}^{\top}}}=\text{GNN}^{\top}(\mathbf{A}^{\top},\mathbf{X}),\quad\mathbf{H_{\text{GNN}^{\text{und}}}}=\text{GNN}^{\text{und}}(\mathbf{A}^{\text{und}},\mathbf{X}),\]
where \(\text{GNN}\), \(\text{GNN}^{\top}\), and \(\text{GNN}^{\text{und}}\) are GNN-based encoders. Since A2DUG leverages both aggregated features and adjacency lists in directed/undirected graphs, it satisfies the first design criterion6.
Footnote 6: While other model architectures may satisfy our design criteria, our model architecture is the simplest among those satisfying them.
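A minimal PyTorch sketch of Eqs. (1)-(2) is given below; all module and variable names are ours, and the aggregated-feature views are assumed to be precomputed as described under D2.

```python
# Sketch of the D1 architecture: one MLP per input view, concatenation,
# activation, and a final MLP as in Eq. (1). Names and sizes are ours.
import torch
import torch.nn as nn

class A2DUGSketch(nn.Module):
    def __init__(self, dims, hidden, n_classes):
        # dims: input dimension of each of the 7 views in Eq. (1)
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                           nn.Linear(hidden, hidden)) for d in dims])
        self.final = nn.Linear(len(dims) * hidden, n_classes)

    def forward(self, views):
        # views: [X, A, A^T, A_und, GNN feats, GNN^T feats, GNN_und feats]
        h = torch.cat([enc(v) for enc, v in zip(self.encoders, views)], dim=1)
        return self.final(torch.relu(h))  # sigma(concat) -> MLP_final
```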
**D2: Adopting precomputation-based GNNs for minibatch training.** Second, to ensure the scalability of our proposal, we adopt precomputation-based GNNs, which ensure high scalability while showing classification quality competitive with other GNNs7. Existing precomputation-based GNNs [6, 21, 34] compute the feature aggregation as a preprocessing step. Their models can thus be trained with small input batches of node features once the features are computed, leading to high scalability. Also, motivated by existing GNNs [21], we utilize both adjacency matrices with and without self-loops, i.e., \(\mathbf{A}\) and \(\hat{\mathbf{A}}=\mathbf{A}+\mathbf{I}\), which improves the classification quality in both homophilous and non-homophilous graphs. We formulate a \(k\)-layer GNN-based graph encoder as follows:
Footnote 7: Though other GNNs can also be applied to our proposal, its scalability cannot be ensured in that case.
\[\text{GNN}_{\text{precomp}}=\text{MLP}(\mathbf{A}\mathbf{X}\|\hat{\mathbf{A}}\mathbf{X}\|\mathbf{A}^{2}\mathbf{X}\|\hat{\mathbf{A}}^{2}\mathbf{X}\|\ldots\|\mathbf{A}^{k}\mathbf{X}\|\hat{\mathbf{A}}^{k}\mathbf{X}). \tag{3}\]
Also, we derive the formula for \(\text{GNN}_{\text{precomp}}^{\top}\) by replacing \(\mathbf{A}\) in Eq (3) with \(\mathbf{A}^{\top}\).
As for feature aggregation in an undirected graph, an adjacency matrix is typically normalized by node degrees [12, 21, 32, 34]. Following this, we apply the node degree-normalization to an undirected adjacency matrix and formulate a GNN-based graph encoder for undirected graphs as follows:
\[\text{GNN}_{\text{precomp}}^{\text{und}}=\text{MLP}(\mathbf{S}\mathbf{X}\|\hat{\mathbf{S}}\mathbf{X}\|\mathbf{S}^{2}\mathbf{X}\|\hat{\mathbf{S}}^{2}\mathbf{X}\|\ldots\|\mathbf{S}^{k}\mathbf{X}\|\hat{\mathbf{S}}^{k}\mathbf{X}), \tag{4}\]
where \(\mathbf{S}\) is a degree-normalized adjacency matrix, i.e., \(\mathbf{S}=(\mathbf{D}^{\text{und}})^{-\frac{1}{2}}\mathbf{A}^{\text{und}}(\mathbf{D}^{\text{und}})^{-\frac{1}{2}}\), and \(\hat{\mathbf{S}}\) is a degree-normalized adjacency matrix with self-loops.
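A hedged SciPy sketch of this precomputation is shown below; `A` is assumed to be a `scipy.sparse` CSR adjacency matrix and `X` a dense NumPy feature matrix, and the guard against zero degrees is our addition.

```python
# Sketch of the feature precomputation in Eqs. (3)-(4); names are ours.
import numpy as np
import scipy.sparse as sp

def precompute(A, X, k):
    """Return [AX, ÂX, A²X, Â²X, ..., A^k X, Â^k X] for one operator A."""
    A_hat = A + sp.eye(A.shape[0], format="csr")  # Â = A + I (self-loops)
    feats, x, x_hat = [], X, X
    for _ in range(k):
        x, x_hat = A @ x, A_hat @ x_hat           # one more propagation hop
        feats += [x, x_hat]
    return feats

def normalized_undirected(A):
    """S = D^{-1/2} A_und D^{-1/2} as in Eq. (4)."""
    A_und = ((A + A.T) > 0).astype(np.float64)
    deg = np.asarray(A_und.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1.0)))  # guard deg=0
    return d_inv_sqrt @ A_und @ d_inv_sqrt
```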
**Training procedure.** We train our proposed model in an end-to-end training framework for a node classification task. To scale the training of the model to large-scale graphs, we adopt the batchwise training introduced in existing precomputation-based GNNs [21, 34] and methods using adjacency lists [16]. To be concrete, we can handle \(\mathbf{X},\mathbf{A},\mathbf{A}^{\top},\mathbf{A}^{\text{und}},\mathbf{A}\mathbf{X},\hat{\mathbf{A}}\mathbf{X},\mathbf{A}^{\top}\mathbf{X},\hat{\mathbf{A}}^{\top}\mathbf{X},\mathbf{S}\mathbf{X},\hat{\mathbf{S}}\mathbf{X},\ldots,\mathbf{A}^{k}\mathbf{X},\hat{\mathbf{A}}^{k}\mathbf{X},(\mathbf{A}^{\top})^{k}\mathbf{X},(\hat{\mathbf{A}}^{\top})^{k}\mathbf{X},\mathbf{S}^{k}\mathbf{X}\), and \(\hat{\mathbf{S}}^{k}\mathbf{X}\) as node features and execute a row-wise decomposition for batchwise training. Hence, the proposed method satisfies the second key design criterion, i.e., it scales well to large-scale graphs.
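The row-wise decomposition can be sketched as follows; since every precomputed view is indexed by the same node ids, shuffled row blocks directly form minibatches (names are ours).

```python
# Sketch of batchwise training via row-wise decomposition; feats is the
# list of per-node view matrices (dense or CSR), labels the class vector.
import numpy as np

def minibatches(feats, labels, batch_size, seed=0):
    idx = np.random.default_rng(seed).permutation(len(labels))
    for s in range(0, len(idx), batch_size):
        b = idx[s:s + batch_size]
        yield [f[b] for f in feats], labels[b]  # same node ids in every view
```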
**Connection with existing methods.** Since A2DUG is based on a general and holistic form of leveraging both aggregated features and adjacency lists in directed/undirected graphs, it can employ any existing method as a component in Eq. (1). For example, A2DUG can imitate LINKX and GloGNN++, which are methods using adjacency lists, by utilizing their formulations to generate one of the node representations in Eq. (1), e.g., \(\mathbf{H_{A}}\) or \(\mathbf{H_{A^{\text{und}}}}\). Other examples are GCN [12], FSGNN [21], and ACMGCN [17], which can be used as \(\mathbf{H_{\text{GNN}^{\text{und}}}}\). In this paper, to ensure high scalability, A2DUG adopts simple and scalable components in its model architecture (see Eqs. (2), (3), and (4)).
**Complexity.** The precomputation and training steps of A2DUG have a time complexity of \(\mathcal{O}(dk|E|+(h|E|+ndkh+nh^{2}L)T)\), in which \(h\) is the hidden dimension, \(|E|\) is the number of edges, \(L\) is the number of MLP layers, and \(T\) is the number of epochs. This is because it requires \(\mathcal{O}(dk|E|)\) to precompute the aggregated features within \(k\) hops. Then, the training step requires an \(\mathcal{O}(h|E|)\) cost for the first linear mapping of \(\mathbf{A},\mathbf{A}^{\top}\), and \(\mathbf{A}^{\text{und}}\), an \(\mathcal{O}(ndkh)\) cost for the first linear mapping of the aggregated features within \(k\) hops, and an \(\mathcal{O}(nh^{2}L)\) cost for the MLP operations on hidden features. This complexity is comparable with existing scalable methods using aggregated features or adjacency lists, e.g., FSGNN or LINKX, since A2DUG employs their efficient feature precomputation and model architectures. A detailed discussion is provided in our supplementary material.
## 4 Empirical Studies
We aim to answer the following questions: **Q1.**_How effectively and efficiently do existing methods perform in various graphs?_ **Q2.**_How does_ A2DUG _perform well?_ **Q3.**_To what extent do aggregated features and adjacency lists in directed/undirected graphs affect node classification?_
**Datasets.** We use eight directed graphs: two small-scale graphs, chameleon and squirrel [26]8; three middle-scale graphs, genius, arxiv-year [16], and ogbn-arxiv [10]; and three large-scale graphs, pokec, snap-patents, and wiki [16]. We note that arxiv-year and ogbn-arxiv have the same graph structure and node features but different classes. We follow the same way of splitting the training/validation/test sets as the papers that propose the datasets [10; 16; 26]. We summarize the statistics of the datasets in the supplementary materials.
Footnote 8: Since a study [26] pointed out the presence of a large number of duplicate nodes in the original datasets of chameleon and squirrel, we use the filtered versions that do not have any duplicate node.
**Baselines.** For GNNs using feature aggregation in undirected graphs, we use GCN [12], SGC [34], GPRGNN [2], FSGNN [21], and ACMGCN [17]. For GNNs for directed graphs, we use DGCN [31], DiGraph [30], its variant DiGraphIB, and MagNet [39]. As for methods using adjacency lists as node features, we use LINK [40], LINKX [16], and GloGNN++ [14]. We also execute a graph-agnostic classifier, a multi-layer perceptron (MLP), as a baseline, i.e., it ignores the topology structure.
**Settings.** We report the performance as the mean classification accuracy and standard deviation over five random runs with different random seeds. Following [16], we use ROC-AUC as the metric for the class-imbalanced genius dataset (about \(80\%\) of nodes are in the majority class), as it is less sensitive to class imbalance than accuracy. For the methods using adjacency lists (i.e., LINK, LINKX, and GloGNN++), we evaluate both directed and undirected graphs as their inputs. More detailed settings are described in the supplementary material.
We use a single NVIDIA A100-PCIE-40GB GPU for all our experiments. For the large datasets, snap-patents, pokec, and wiki, we use minibatch training, setting the batch sizes to \(10\), \(10\), and \(20\), respectively. To efficiently precompute the aggregated features on the largest dataset, wiki, which does not fit into GPU memory, we utilize the block-wise precomputation scheme proposed in [20].
### Q1. Benchmarking Existing Methods
To answer the first question, we benchmark the performance of existing methods on directed and undirected graphs. Tables 2 and 3 show the node classification results on small-/middle-scale and large-scale graphs, respectively. We observe that no single existing method stably achieves high classification quality across datasets. This motivates us to propose our new method, A2DUG.
From Table 2, we can see that FSGNN, which uses aggregated features in undirected graphs, obtains the highest accuracy for chameleon, squirrel, and ogbn-arxiv. In contrast, on arxiv-year, the methods that can capture edge directions, such as LINK directed, LINKX directed, GloGNN++ directed, and Magnet, achieve much higher accuracy than FSGNN. Also, on genius, LINKX undirected and GloGNN++ undirected achieve higher accuracy than the others. We note that DGCN, Digraph, and DigraphIB show low scalability due to computationally heavy operations, e.g., the computation of the second-order proximity and of the eigenvectors of an adjacency matrix.
From Table 3, we can see that MLP, SGC, FSGNN, LINK, and LINKX, which adopt minibatch training, scale well to large-scale graphs. On large-scale graphs, the methods using adjacency lists, LINK, LINKX, and GloGNN++, outperform the other methods adopting feature aggregation. This means that the information from adjacency lists is more important for predicting labels in these datasets than that from aggregated features. Comparing LINK and LINKX using directed/undirected graphs, on one hand, LINK/LINKX directed perform well in snap-patents and wiki. On the other hand, LINK/LINKX undirected achieve higher accuracy in pokec.
These results validate that an appropriate selection of directed/undirected graphs is necessary to obtain state-of-the-art classification quality. In summary, through our empirical studies with small-, middle-, and large-scale datasets, no single existing method stably obtains state-of-the-art classification results.
### Q2. Effectiveness and Efficiency of A2DUG
**Effectiveness.** In Tables 2 and 3, we observe that A2DUG achieves results within the top two across all datasets. This shows the performance stability of A2DUG on various datasets.
| Method | chameleon | squirrel | genius | arxiv-year | ogbn-arxiv |
| --- | --- | --- | --- | --- | --- |
| MLP | 37.65±3.10 | 35.13±3.87 | 85.84±0.88 | 36.92±0.23 | 53.78±0.29 |
| GCN | 37.17±3.37 | 32.26±2.96 | 82.23±3.42 | 43.73±0.22 | 69.30±0.16 |
| SGC | 38.64±3.79 | 38.50±1.93 | 80.08±2.82 | 38.79±0.22 | 67.40±0.02 |
| GPRGNN | 38.22±5.05 | 35.03±1.66 | 89.89±0.54 | 40.21±0.33 | 67.52±0.80 |
| FSGNN | 44.96±0.02 | 41.12±2.58 | 88.95±1.51 | 45.99±0.35 | 71.26±0.30 |
| ACMGCN | 39.44±9.32 | 38.85±0.56 | 73.16±8.27 | 43.30±0.90 | 67.51±0.69 |
| LINK undirected | 41.60±5.62 | 36.31±1.90 | 69.16±0.11 | 48.43±0.10 | 63.33±0.04 |
| LINKX undirected | 41.55±2.45 | 40.10±2.64 | 89.27±1.11 | 47.90±0.20 | 61.78±0.40 |
| GloGNN++ undirected | 38.49±9.02 | 34.48±7.13 | 90.00±0.38 | 50.55±0.12 | 46.30±2.74 |
| LINK directed | 38.21±2.82 | 33.06±2.62 | 55.46±0.11 | 51.71±0.22 | 57.17±0.04 |
| LINKX directed | 39.64±5.95 | 36.00±1.18 | 88.35±0.45 | 52.61±0.26 | 59.81±0.43 |
| GloGNN++ directed | 40.07±3.36 | 34.87±8.12 | 87.82±0.13 | 53.67±0.32 | 55.36±20.59 |
| DGCN | 42.42±4.60 | 41.04±2.07 | (TO) | (TO) | (TO) |
| Digraph | 34.29±0.82 | 35.01±0.60 | (M) | (M) | (M) |
| DigraphIB | 38.28±2.69 | 33.90±0.20 | (M) | (M) | (M) |
| Magnet | 35.66±3.91 | 31.96±1.48 | 86.68±2.78 | 52.10±0.32 | 68.50±0.11 |
| **A2DUG** | 42.78±4.79 | 42.28±2.36 | 89.85±3.15 | 59.14±0.48 | 69.51±0.24 |

Table 2: Experimental results on small- and middle-scale datasets. Test accuracy is displayed for most datasets, while genius displays test ROC-AUC. Standard deviations are over five runs with different random seeds. (M) denotes that some (or all) hyperparameter settings run out of memory, and (TO) denotes that the runs do not finish in 24 hours.
| Method | snap-patents | pokec | wiki |
| --- | --- | --- | --- |
| MLP | 31.49±0.07 | 62.53±0.04 | 39.74±0.28 |
| GCN | 39.99±0.35 | 63.05±4.23 | (M) |
| SGC | 35.26±0.04 | 69.83±0.32 | 45.07±0.09 |
| GPRGNN | 32.44±0.25 | 65.79±8.59 | (M) |
| FSGNN | 45.44±0.05 | 78.21±1.09 | 58.40±0.26 |
| ACMGCN | 40.07±0.40 | 69.91±0.06 | (M) |
| LINK undirected | 49.93±0.07 | 79.17±0.05 | 58.42±0.04 |
| LINKX undirected | 51.40±0.11 | 79.44±0.13 | 61.02±0.36 |
| GloGNN++ undirected | (M) | 82.66±0.07 | (M) |
| LINK directed | 57.54±0.07 | 71.53±0.09 | 59.69±0.03 |
| LINKX directed | 61.09±0.07 | 71.88±0.09 | 62.08±0.14 |
| GloGNN++ directed | (M) | 75.36±0.07 | (M) |
| Magnet | (M) | 75.14±1.59 | (M) |
| **A2DUG** | 72.38±0.10 | 82.55±0.08 | 65.13±0.07 |

Table 3: Experiments on large-scale datasets. (M) denotes that some hyperparameter settings run out of memory. We omit methods that cannot work on large-scale datasets.
Further, despite its simple architecture, A2DUG outperforms other state-of-the-art methods by large margins in arxiv-year (see Table 2), snap-patents, and wiki (see Table 3). These surprising results demonstrate that A2DUG can adaptively control the effects of aggregated features and adjacency lists in directed/undirected graphs rather than simply selecting which feature is important for the classification.
Next, we discuss the importance of aggregated features and adjacency lists. The variant "wo aggregation" obtains better results than "wo adjacency" in pokec and wiki. This means that adjacency lists play more important roles than aggregated features in these datasets. In contrast, the accuracy of "wo adjacency" is better than that of "wo aggregation" in snap-patents. This indicates that the importance of aggregated features and adjacency lists also depends on the dataset. These observations validate that adaptive control of the effects of aggregated features and adjacency lists is required. We also observe that "wo transpose" obtains lower results than A2DUG in most cases. This validates the effectiveness of using inverse edges in directed graphs.
Through the above observations, we conclude that the importance of aggregated features and adjacency lists in directed/undirected graphs highly depends on datasets and their prediction target. As a result, it is important to adaptively control the effects from aggregated features and adjacency lists in directed/undirected graphs. Thus, A2DUG is robust across various prediction targets and achieves superior results to existing methods on datasets in which the combination of multiple factors is necessary since it can leverage the information from all of them.
## 5 Conclusion
We demonstrated that no existing methods stably obtain state-of-the-art results on various graphs since the importance of combinations depends on datasets. We proposed a simple yet holistic method, A2DUG, that leverages all the combinations of node representations and edge directions. Our empirical studies showed that A2DUG stably achieves state-of-the-art results across various datasets.
**Limitations.** This work shows the potential improvement of GNNs using all the combinations of node representations and edge directions by proposing a simple method combining all of them. In other words, we do not focus on developing the optimal model architecture. For instance, our ablation study in Section 4.3 demonstrated that removing features unnecessary for classification can contribute to performance improvements. Hence, it is an interesting direction to combine our method with feature selection techniques such as [22]. Also, it is an open question how to construct an appropriate model architecture for further improving node classification quality. Indeed, concurrent works [27; 28] have addressed developing sophisticated but complicated model architectures and have thus achieved state-of-the-art classification quality. It is our future work to incorporate these methods into our approach while ensuring scalability.
## Acknowledgments and Disclosure of Funding
This work was supported by JSPS KAKENHI Grant Numbers JP20H00583 and JST PRESTO Grant Number JPMJPR21C5.
|
2306.12548 | Finite-time Lyapunov exponents of deep neural networks | We compute how small input perturbations affect the output of deep neural
networks, exploring an analogy between deep networks and dynamical systems,
where the growth or decay of local perturbations is characterised by
finite-time Lyapunov exponents. We show that the maximal exponent forms
geometrical structures in input space, akin to coherent structures in dynamical
systems. Ridges of large positive exponents divide input space into different
regions that the network associates with different classes. These ridges
visualise the geometry that deep networks construct in input space, shedding
light on the fundamental mechanisms underlying their learning capabilities. | L. Storm, H. Linander, J. Bec, K. Gustavsson, B. Mehlig | 2023-06-21T20:21:23Z | http://arxiv.org/abs/2306.12548v1 | # Finite-time Lyapunov exponents of deep neural networks
###### Abstract
We compute how small input perturbations affect the output of deep neural networks, exploring an analogy between deep networks and dynamical systems, where the growth or decay of local perturbations is characterised by finite-time Lyapunov exponents. We show that the maximal exponent forms geometrical structures in input space, akin to coherent structures in dynamical systems. Ridges of large positive exponents divide input space into different regions that the network associates with different classes. These ridges visualise the geometry that deep networks construct in input space, shedding light on the fundamental mechanisms underlying their learning capabilities.
Deep neural networks can be trained to model complex functional relationships [1]. The expressivity of such neural networks - their ability to unfold intricate data structures - increases exponentially as the number of layers increases [2]. However, deeper networks are harder to train, due to the multiplicative growth or decay of signals as they propagate through the network. This multiplicative amplification, also known as the unstable-gradient problem [3], causes signals to either explode or vanish in magnitude if the number of layers is too large. A second important problem is that we lack insight into the learning mechanisms. Although there is some intuition for shallow networks [4], there is still no general understanding of the principles that cause some architectures to fail, while others work better.
For a common type of deep networks, the so-called multi-layer perceptrons [Fig. 1(**a**)], we show that these two problems are closely related. We exploit the fact that such networks are discrete dynamical systems; inputs \(\mathbf{x}^{(0)}\) are mapped iteratively through \(x_{i}^{(\ell)}=g(\sum_{j=1}^{N_{\ell}}w_{ij}^{(\ell)}x_{j}^{(\ell-1)}-\theta_ {i}^{(\ell)})\). Here, \(g(\cdot)\) is a non-linear activation function [3], the layer index \(\ell=0,\ldots,L+1\) plays the role of time, \(L\) is the number of hidden layers, \(N_{\ell}\) is the number of neurons in layer \(\ell\), and the weights \(w_{ij}^{(\ell)}\) and thresholds \(\theta_{i}^{(\ell)}\) are parameters. Sensitivity of \(\mathbf{x}^{(\ell)}\) to small changes in the inputs \(\mathbf{x}^{(0)}=\mathbf{x}\) corresponds to exponentially growing perturbations in a chaotic system with positive maximal Lyapunov exponent [5; 6]\(\lim_{\ell\to\infty}\lambda_{1}^{(\ell)}(\mathbf{x})\), with growth rate \(\lambda_{1}^{(\ell)}(\mathbf{x})=\ell^{-1}\log|\delta\mathbf{x}^{(\ell)}|/|\delta\bm {x}|\). The multiplicative ergodic theorem [5] guarantees that \(\lambda_{1}^{(L)}(\mathbf{x})\) converges as \(L\to\infty\) to a limit that is independent of \(\mathbf{x}\).
The standard way of initialising network parameters is to choose zero thresholds and random weight matrices with independent Gaussian-distributed elements with zero mean, and variance \(\sigma_{w}^{2}\). In this case, the Lyapunov exponents are determined by a product of random matrices [7], and in the mean-field limit of \(N_{\ell}=N\to\infty\), one finds \(\lambda_{1}^{(L)}\sim\log(GN\sigma_{w}^{2})\) independent of \(\mathbf{x}\) and \(L\). Here, the constant \(G\) depends on the choice of activation function [8]. This relation explains why the initial weight variance should be chosen so that \(GN\sigma_{w}^{2}=1\), because then signals neither contract nor expand [8; 9; 10], stabilising the learning. The maximal Lyapunov exponent also determines the success or failure in predicting chaotic time series with recurrent networks [11; 12; 13] that use large reservoirs of neurons with random weights. In that case, the mean-field limit \(N\to\infty\) works very well [13].
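This scaling is straightforward to probe numerically. The following minimal sketch (our illustration, not code from the paper) estimates the maximal Lyapunov exponent of a randomly initialised tanh network by propagating a normalised perturbation through the linearised layer map and accumulating its log-growth; with the critical choice \(N\sigma_{w}^{2}=1\) (for tanh, \(g^{\prime}(0)=1\)), the estimate should come out close to zero:

```python
import torch

torch.manual_seed(0)
N, L = 200, 100
sigma_w = N ** -0.5          # critical scaling: N * sigma_w^2 = 1 (assumed; G ~ 1 for tanh near zero)

x = 1e-2 * torch.randn(N)    # input pattern, kept small so tanh stays near its linear regime
v = torch.randn(N)
v /= v.norm()                # initial perturbation direction

log_growth = 0.0
for _ in range(L):
    W = sigma_w * torch.randn(N, N)
    b = W @ x                            # local fields (thresholds zero, as at initialisation)
    x = torch.tanh(b)
    v = (1 - torch.tanh(b) ** 2) * (W @ v)  # D W v, since d tanh(b)/db = 1 - tanh(b)^2
    log_growth += torch.log(v.norm())
    v /= v.norm()                        # renormalise to avoid under-/overflow

lambda_1 = log_growth / L                # finite-time estimate of the maximal Lyapunov exponent
print(float(lambda_1))
```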
For finite \(N\) and \(L\), the maximal finite-time Lyapunov exponent (FTLE) \(\lambda_{1}^{(L)}(\mathbf{x})\) depends on the input \(\mathbf{x}\). Averaging over input patterns yields an estimate for the Lyapunov exponent [14], but even if the average \(\langle\lambda_{1}^{(L)}(\mathbf{x})\rangle\) over inputs vanishes, some patterns may exhibit large positive exponents, causing the training to fail.
Moreover, the weights of a trained network are not random but should reflect what the network has learned about the inputs. This raises the question: does the maximal exponent form geometric structures in input space, just as in dynamical systems where the ridges of high FTLE define Lagrangian coherent structures [15; 16; 17]? How do the variations of the maximal FTLE in input space depend on the number \(L\) of layers of the network, and on its width \(N\)?
To answer these questions, we computed the maximal FTLEs for fully connected deep neural networks with different widths and numbers of layers. For a simple classification problem with two-dimensional inputs \(\mathbf{x}\) divided into two classes with targets \(t(\mathbf{x})=\pm 1\) [Fig. 1(**b**)], we show how the \(\mathbf{x}\)-dependence of the maximal FTLE changes when changing \(N\) and \(L\). For narrow networks (small \(N\)), we find that the maximal FTLE forms ridges of large exponents in the input plane, much like Lagrangian coherent structures in high-dimensional dynamical systems [15; 16; 17]. These ridges provide insight into the learning process, illustrating how the network learns to change its output by order unity in response to a small shift of the input pattern across the decision boundary. However, as the network width grows, we see that the ridges disappear, suggesting a different learning mechanism. Similar conclusions hold for a more complex classification problem using the MNIST data set of handwritten digits, where FTLE structures in input space explain variations in classification accuracy and predictive uncertainty.
_Finite-time Lyapunov exponents_. Figure 1(**a**) shows a multi-layer perceptron [3], a fully-connected feed-forward neural network with \(L\) hidden layers, \(N_{0}\) input components, \(N\) neurons per hidden layer with non-linear activation functions, and \(N_{L+1}\) output neurons. The network maps every input \(\mathbf{x}^{(0)}=\mathbf{x}\) to an output \(x^{(L\!+\!1)}\). Weights and thresholds are varied to minimise the output error \([x^{(L\!+\!1)}-t(\mathbf{x})]^{2}\), so that the network predicts the correct target \(t(\mathbf{x})\) for each input \(\mathbf{x}\). The sensitivity of \(\mathbf{x}^{(\ell)}\) to small changes \(\delta\mathbf{x}\) is determined by linearisation,
\[\delta\mathbf{x}^{(\ell)}\!=\!\mathbb{D}^{(\ell)}\mathbb{W}^{(\ell)}\!\cdots\, \mathbb{D}^{(2)}\mathbb{W}^{(2)}\mathbb{D}^{(1)}\mathbb{W}^{(1)}\delta\mathbf{x} \!\equiv\!\mathbb{J}_{\ell}\delta\mathbf{x}\,. \tag{1}\]
Here, \(\mathbb{W}^{(\ell)}\) are the weight matrices, and \(\mathbb{D}^{(\ell)}\) are diagonal matrices with elements \(D^{(\ell)}_{ij}=g^{\prime}(b^{(\ell)}_{i})\delta_{ij}\), with \(b^{(\ell)}_{i}=\sum_{j=1}^{N_{\ell}}w^{(\ell)}_{ij}x^{(\ell-1)}_{j}-\theta^{(\ell)}_{i}\) and \(g^{\prime}(b^{(\ell)}_{i})=\frac{\mathrm{d}}{\mathrm{d}b}g(b)\big|_{b^{(\ell)}_{i}}\). The Jacobian \(\mathbb{J}_{\ell}(\mathbf{x})\) characterises the growth or decay of small perturbations to \(\mathbf{x}\) [5; 6]. Its maximal singular value \(\Lambda^{(\ell)}_{1}(\mathbf{x})\) increases or decreases exponentially as a function of \(\ell\), with rate \(\lambda^{(\ell)}_{1}(\mathbf{x})\equiv\ell^{-1}\log\Lambda^{(\ell)}_{1}(\mathbf{x})\). The singular values \(\Lambda^{(\ell)}_{1}(\mathbf{x})>\Lambda^{(\ell)}_{2}(\mathbf{x})>\dots\) are the square roots of the non-negative eigenvalues of the right Cauchy-Green tensor \(\mathbb{J}^{\top}_{\ell}(\mathbf{x})\mathbb{J}_{\ell}(\mathbf{x})\). The maximal eigenvector of \(\mathbb{J}^{\top}_{\ell}(\mathbf{x})\mathbb{J}_{\ell}(\mathbf{x})\) determines the direction of maximal stretching, i.e. in which input direction the output changes the most, starting from a given input \(\mathbf{x}\).
FTLEs and Cauchy-Green tensors are used in solid mechanics to identify elastic deformation patterns [18], and to find regions of instability in plastic deformation [19] and crack initiation [20]. More generally, FTLEs help to characterise the sensitivity of complex dynamics to initial conditions [21; 22; 23]. In fluid mechanics, they explain the alignment of particles transported by the fluid [24; 25], providing valuable insight into the stretching and rotation of fluid elements over time and space [26]. FTLEs allow one to identify Lagrangian coherent structures [15; 16; 17]: strongly repelling fluid-velocity structures that help to organise and understand flow patterns [27]. These geometrical structures appear as surfaces of large maximal FTLEs, orthogonal to the maximal stretching direction.
In applying these methods to neural networks, one should recognise several facts. First, in deep neural networks, the weights change from layer to layer. Therefore the corresponding dynamical system is not autonomous. Second, the number \(N_{\ell}\) of neurons per layer may change as a function of \(\ell\), corresponding to a changing phase-space dimension. Third, the neural-network weights are trained. This limits the exponential growth of the maximal singular value, as we show below. Fourth, one can use different activation functions, such as the piecewise linear ReLU function [3], or the smooth tanh function [8]. Here we use \(g(b)=\tanh(b)\), so that the network map is continuously differentiable just like the dynamical systems for which Lagrangian coherent structures were found and analysed.
_Two-dimensional data set_. To illustrate the geometric structures formed by the maximal FTLE, we first consider a toy problem. The data set [Fig. 1(**b**)] comprises \(4\times 10^{4}\) input patterns, with 90% used for training, the rest for testing. We trained fully connected feed-forward networks on this data set by stochastic gradient descent, minimising the output error \([x^{(L\!+\!1)}-t(\mathbf{x})]^{2}\). In this way we obtained classification accuracies of at least 98%. We considered different network layouts, changing the numbers of layers and hidden neurons per layer. The weights were initialised as independent Gaussian random numbers with zero mean and variance \(\sigma_{w}^{2}\sim N^{-1}\), while the thresholds were initially set to zero. After training, we computed the maximal FTLE in layer \(L\) and the associated stretching direction from Eq. (1) as described in Refs. [28; 29].
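Concretely, the maximal FTLE and the associated stretching direction can be obtained from a singular-value decomposition of the network Jacobian. The following is a minimal sketch (our illustration, assuming a `model` that maps an input of shape \((N_{0},)\) to its layer-\(L\) representation; the function name is ours):

```python
import torch

def max_ftle(model, x, L):
    """Maximal FTLE lambda_1^(L)(x) and the direction of maximal stretching.

    `model` is assumed to map an input of shape (N_0,) to the hidden
    representation x^(L) of shape (N_L,), so `J` corresponds to J_L in Eq. (1).
    """
    J = torch.autograd.functional.jacobian(model, x)   # shape (N_L, N_0)
    U, S, Vh = torch.linalg.svd(J)                     # S holds Lambda_1 > Lambda_2 > ...
    lam1 = torch.log(S[0]) / L                         # lambda_1 = L^{-1} log Lambda_1
    direction = Vh[0]                                  # maximal eigenvector of J^T J in input space
    return lam1, direction
```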
The results are summarised in Figure 2, which shows maximal-FTLE fields for trained networks with different layouts. First, we see that the ridges of large positive \(\lambda^{(L)}_{1}(\mathbf{x})\) align with the decision boundary between the two classes [Fig. 1(**b**)]. The ridges are most prominent for small \(N\) and large \(L\). In this case, the network learns by grouping the inputs into two different basins of attraction for \(t=\pm 1\), separated by a ridge of positive \(\lambda^{(L)}_{1}(\mathbf{x})\). A small shift of the input across the decision boundary leads to a substantial change in the output.
Second, the contrast increases as \(L\) becomes larger, quantifying the exponential expressivity of deep neural networks. For larger \(L\), the network can resolve smaller input distances \(\delta\mathbf{x}\) because the singular values increase/decrease exponentially from layer to layer. Comparing networks of different depths, we find that \(L\lambda^{(L)}_{1}(\mathbf{x})\) saturates for large \(L\), on the ridge. This is a consequence of the training: the network learns to produce output differences on the order of \(\delta x^{(L+1)}\sim 1\), and to resolve input differences \(\delta\mathbf{x}\) on the scale of the mean distance between neighbouring patterns over the decision boundary. Therefore, the saturation value is larger when the number density of input patterns is higher (not shown). Even though \(\Lambda_{1}^{(L)}(\mathbf{x})\) saturates, \(\Lambda_{2}^{(L)}(\mathbf{x})<1\) decreases exponentially as \(L\) grows (not shown), thereby causing \(\Lambda_{1}^{(L)}(\mathbf{x})/\Lambda_{2}^{(L)}(\mathbf{x})\) to increase exponentially, as in dynamical systems [30].

Figure 1: Classification with a fully connected feed-forward network. (**a**) Layout with two input components \(x^{(0)}_{1}\) and \(x^{(0)}_{2}\), \(L\) hidden layers with \(N=5\) neurons, and one output \(x^{(L\!+\!1)}\) for classification. (**b**) Two-dimensional input plane (schematic) for a classification problem with a circular decision boundary that separates input patterns with targets \(t=+1\) (\(\blacksquare\)) from those with \(t=-1\) (\(\square\), green).
Third, the ridges gradually disappear as the number \(N\) of hidden neurons per layer increases, because the maximal singular value of \(\mathbb{J}_{L}(\mathbf{x})\) approaches a definite \(\mathbf{x}\)-independent limit as \(N\to\infty\) at fixed \(L\). In the infinite-width limit, training is equivalent to kernel regression with a kernel that is independent of the inputs in the training data set [31, 32]. But how can the network distinguish inputs with different targets in this case, without ridges indicating decision boundaries? One possibility is that the large number of hidden neurons allows the network to embed the inputs into a high-dimensional space where they can be separated thanks to the universal approximation theorem [33]. In this case, training only the output weights (and threshold) should suffice. Figure 3(**a**) confirms this, as the classification error decreases with increasing embedding dimension, for random hidden weights. We remark that the classification error of the fully trained network is smaller than the error with random hidden weights. This is not surprising, since different random embeddings have different classification errors when the number of patterns exceeds twice the embedding dimension [34, 3].
Fourth, Figure 2 also shows the maximal stretching directions. For large \(L\) they become orthogonal to the ridges of large \(\lambda_{1}^{(L)}(\mathbf{x})\). This demonstrates that there is a stringent analogy between the FTLE ridges of deep neural networks and Lagrangian coherent structures. The stretching patterns appear to exhibit singular points where the maximal stretching tends to zero [35], reflecting topological constraints imposed on the direction field.
Fifth, one may wonder how the FTLE structures depend on weight initialisation. When weights are initialised with a small variance, \(\sigma_{w}\ll 1\), most FTLEs are negative initially [blue in Fig. 3(**b**)]. This implies a slowing down of the initial training (vanishing-gradient problem). To see this, consider the fundamental forward-backward dichotomy of deep neural networks [3]: weight updates in the stochastic-gradient algorithm are given by \(\delta w_{mn}^{(\ell)}\propto\Delta_{m}^{(\ell)}x_{n}^{(\ell-1)}\), where
\[[\mathbf{\Delta}^{(\ell)}]^{\mathsf{T}}=[\mathbf{\Delta}^{(L)}]^{\mathsf{T}}\mathbb{D} ^{(L)}\mathbb{W}^{(L)}\cdots\mathbb{D}^{(\ell+1)}\mathbb{W}^{(\ell+1)} \mathbb{D}^{(\ell)}\,, \tag{2}\]
and \(\Delta_{j}^{(L)}=g^{\prime}(b^{(L+1)})[x^{(L+1)}-t(\mathbf{x})]g^{\prime}(b_{j}^{(L)})w_{j}^{(L+1)}\). It follows from Eq. (2) that negative FTLEs cause small weight increments \(\delta w_{mn}^{(\ell)}\). Conversely, when the maximal FTLE is positive and too large, the weights grow rapidly, leading to training instabilities. Remarkably, Figure 3(**b**) demonstrates a self-organising effect due to training: the distributions of the maximal FTLE converge to centre around zero. This is explained by the fact that the network learns by creating maximal-FTLE ridges in input space: to accommodate positive and negative \(\lambda_{1}^{(L)}(\mathbf{x})\), the distribution centres around zero, alleviating the unstable-gradient problem.

Figure 2: Geometrical FTLE structures in input space for different widths \(N\) and depths \(L\) of fully-connected feed-forward neural networks trained on the data set shown schematically in Fig. 1(**b**). Shown is the colour-coded magnitude of \(L\lambda_{1}^{(L)}(\mathbf{x})\), and the maximal stretching directions (black lines).

Figure 3: (**a**) Classification error for a fully connected feed-forward network with \(L=2\) hidden layers with random weights (not trained), and trained output weights, as a function of the number \(N\) of hidden neurons per layer (solid black line). Also shown is the classification error for the fully trained network (dashed line). Both curves were obtained for the data set shown schematically in Fig. 1(**b**). (**b**) Evolution of maximal-FTLE distribution as a function of training time measured in epochs [3], for a network with \(L=8\) hidden layers with \(N=50\) neurons per layer. The weights were initialised with different variances, \(\log GN\sigma_{w}^{2}=-0.2\) (blue), \(0\) (green), and \(0.2\) (red).
_MNIST data set._ This data set consists of 60,000 images of handwritten digits 0 to 9. Each grayscale image has \(28\times 28\) pixels and was pre-processed to facilitate machine learning [36]. Deep neural networks can achieve high precision in classifying this data, with accuracies of up to 99.77% on a test set of 10,000 digits [37].
We determined the maximal-FTLE field for this data set for a network with \(L=16\) hidden layers, each containing \(N=20\) neurons, and a standard softmax layer with ten outputs [3]. To visualise the geometrical structures in the \(28^{2}\)-dimensional input space, we projected it to two dimensions as follows. We added a bottleneck layer with two neurons to the fully trained network, just before the softmax-output layer. We retrained only the weights and thresholds of this additional layer and the output layer, keeping all other hidden neurons unchanged. The local fields \(b_{1}\) and \(b_{2}\) of the two bottleneck neurons are the coordinates of the two-dimensional representation shown in Figure 4(**a**). We see that the input data separate into ten distinct clusters corresponding to the ten digits. The maximal FTLEs at the centre of these clusters are very small or even negative, indicating that the output is not sensitive to small input changes. These regions are delineated by areas with significantly larger positive FTLEs [see 3\(\times\)zoom in panel (**a**)]. Figure 2 leads us to expect that patterns with large \(\lambda_{1}^{(L)}(\mathbf{x})\) are located near the decision boundaries in high-dimensional input space. This is verified by strong correlations between \(\lambda_{1}^{(L)}(\mathbf{x})\) and both the classification error and the predictive uncertainty. Figure 4(**b**) shows that the classification error on the test set is larger for inputs \(\mathbf{x}\) with larger \(\lambda_{1}^{(L)}(\mathbf{x})\). Figure 4(**c**) shows that large values of \(\lambda_{1}^{(L)}(\mathbf{x})\) correlate with high predictive uncertainty, measured by the entropy \(H\) of the posterior predictive distribution [38]. For softmax outputs, where \(x_{i}^{(L+1)}\) can be interpreted as probabilities, \(H=-\sum_{i}\langle x_{i}^{(L+1)}\rangle\log\langle x_{i}^{(L+1)}\rangle\), where \(\langle\cdot\rangle\) denotes an average over an ensemble of networks (here with ten members) with the same layout but different weight initialisations [39, 40]. These observations confirm that ridges of maximal FTLEs localise the decision boundaries.
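A sketch of this projection procedure is given below (our reconstruction under stated assumptions: the `trunk` stand-in for the trained hidden layers, the random batch, and the training-loop details are hypothetical):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the trained network's hidden layers (L = 16, N = 20);
# in practice these weights come from the fully trained network and stay frozen.
trunk = nn.Sequential(*[nn.Sequential(nn.Linear(28 * 28 if i == 0 else 20, 20), nn.Tanh())
                        for i in range(16)])
for p in trunk.parameters():
    p.requires_grad = False           # keep all other hidden neurons unchanged

bottleneck = nn.Linear(20, 2)         # two bottleneck neurons before the output layer
head = nn.Linear(2, 10)               # retrained softmax-output layer (softmax in the loss)

opt = torch.optim.Adam(list(bottleneck.parameters()) + list(head.parameters()))
loss_fn = nn.CrossEntropyLoss()

def local_fields(x):
    """b1, b2 of the two bottleneck neurons: the 2-D coordinates used for plotting."""
    return bottleneck(trunk(x))

# One hypothetical training step on a batch (x, y) of flattened MNIST digits:
x, y = torch.randn(32, 28 * 28), torch.randint(0, 10, (32,))
loss = loss_fn(head(torch.tanh(local_fields(x))), y)
opt.zero_grad()
loss.backward()
opt.step()
```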
Figure 4(**a**) also shows \(\lambda_{1}^{(L)}(\mathbf{x})\) along a path generated by an adversarial attack. The attack begins from a sample within the cluster corresponding to the digit 9 and aims to transform it into a digit 4 by making small perturbations to the input data [41] toward class 4. We see that the maximal FTLE is small at first, then increases as the path approaches the decision boundary, and eventually decreases again. This indicates that our conclusions regarding the correlations between large maximal FTLEs and decision boundaries extend to neighbourhoods of the MNIST training set that contain patterns the network has not encountered during training.
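Such a path can be produced by iterating small targeted gradient steps. The sketch below is our illustration and may differ from the exact attack of Ref. [41]; `model(x)` is assumed to return the vector of ten class outputs:

```python
import torch

def targeted_step(model, x, target=4, eps=1e-2):
    """One small perturbation of input `x` toward class `target`.

    Iterating this step traces a path such as the 9 -> 4 one in Fig. 4(a).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = -model(x)[target]          # push the target-class output up
    loss.backward()
    return (x - eps * x.grad.sign()).detach()
```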
_Conclusions._ We explored geometrical structures formed by the maximal FTLE in input space, for deep neural networks trained on different classification problems. We found that ridges of positive exponents define the decision boundaries, for a two-dimensional toy classification problem, and for a high-dimensional data set of hand-written digits. In the latter case, we projected the high-dimensional input space to two dimensions, and found that the network maps digits into distinct clusters surrounded by FTLE ridges at the decision boundaries. This conclusion is supported by the fact that the locations of the FTLE ridges correlate with low classification accuracy and high predictive uncertainty.
The network layout determines how prominent the FTLE structures are. As the number of layers increases, the ridges sharpen, emphasising their role in learning and classification. However, as the number of hidden neurons per layer tends to infinity, the FTLE structures disappear. In this limit, the network separates the inputs by embedding them into a high-dimensional space, rendering training of the hidden neurons unnecessary.
It is important to underscore that the two different ways to learn, by FTLE ridges or embedding, result in qualitative differences regarding classification errors and predictive uncertainties, and may also affect how susceptible a network is to adversarial attacks. The geometrical method presented here extends to other network architectures (such as convolutional networks), and will help to visualise and understand the mechanisms that allow such neural networks to learn.

Figure 4: Maximal-FTLE field for the MNIST data [36]. A fully connected feed-forward network with \(N=20\) neurons per hidden layer, \(L=16\) hidden layers, and a softmax layer with ten outputs was trained to a classification accuracy of \(98.88\%\). The maximal FTLE was calculated for each of the \(28^{2}\)-dimensional inputs and projected to two dimensions (see text). (**a**) Training data in the non-linear projection. For each input, the maximal FTLE \(\lambda_{1}^{(L)}\) is shown colour-coded (legend). The box contains 93% of the recognised digits 0. A threefold blow up of this box is also shown. The line represents a sequence of adversarial attacks from 9 to 4 (see text), with \(\lambda_{1}^{(L)}(\mathbf{x})\) colour-coded. (**b**) Classification error on the test set as a function of \(\lambda_{1}^{(L)}(\mathbf{x})\). (**c**) Predictive uncertainty \(H\) (see text) as a function of \(\lambda_{1}^{(L)}(\mathbf{x})\).
LS was supported by grants from the Knut and Alice Wallenberg (KAW) Foundation (no. 2019.0079) and Vetenskapsrådet (VR), no. 2021-4452. JB received support from UCA-JEDI Future Investments (grant no. ANR-15-IDEX-01). HL was supported by a grant from the KAW Foundation. KG was supported by VR grant 2018-03974, and BM by VR grant 2021-4452. Part of the numerical computations for this project were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC).
|
2306.08510 | Permutation Invariant Recurrent Neural Networks for Sound Source
Tracking Applications | Many multi-source localization and tracking models based on neural networks
use one or several recurrent layers at their final stages to track the movement
of the sources. Conventional recurrent neural networks (RNNs), such as the long
short-term memories (LSTMs) or the gated recurrent units (GRUs), take a vector
as their input and use another vector to store their state. However, this
approach results in the information from all the sources being contained in a
single ordered vector, which is not optimal for permutation-invariant problems
such as multi-source tracking. In this paper, we present a new recurrent
architecture that uses unordered sets to represent both its input and its state
and that is invariant to the permutations of the input set and equivariant to
the permutations of the state set. Hence, the information of every sound source
is represented in an individual embedding and the new estimates are assigned to
the tracked trajectories regardless of their order. | David Diaz-Guerra, Archontis Politis, Antonio Miguel, Jose R. Beltran, Tuomas Virtanen | 2023-06-14T13:53:31Z | http://arxiv.org/abs/2306.08510v1 | # Permutation Invariant Recurrent Neural Networks for
###### Abstract
Many multi-source localization and tracking models based on neural networks use one or several recurrent layers at their final stages to track the movement of the sources. Conventional recurrent neural networks (RNNs), such as the long short-term memories (LSTMs) or the gated recurrent units (GRUs), take a vector as their input and use another vector to store their state. However, this approach results in the information from all the sources being contained in a single ordered vector, which is not optimal for permutation-invariant problems such as multi-source tracking. In this paper, we present a new recurrent architecture that uses unordered sets to represent both its input and its state and that is invariant to the permutations of the input set and equivariant to the permutations of the state set. Hence, the information of every sound source is represented in an individual embedding and the new estimates are assigned to the tracked trajectories regardless of their order.
Sound source tracking (SST), permutation-invariant recurrent neural networks (PI-RNN)
## 1 Introduction
In recent years, the state-of-the-art of sound source localization established by classic signal processing techniques has been surpassed by new systems using deep-learning models [1]. These models use different input features and network architectures, but most of them track the temporal evolution of the signals using convolutional layers followed by recurrent layers [2, 3, 4]. Using these architectures, the latent representations at every hidden layer are difficult to interpret and we cannot exploit the permutation invariance of the tracking problem where, if we cannot apply any criteria to order or classify the sources, any permutation of the sources should be considered equally correct.
In [5], we proposed an icosahedral convolutional neural network (icoCNN) for single source localization where the output of the last convolutional layer can be interpreted as the probability distribution of the direction of arrival (DOA) and we can obtain the estimated DOA as its expected value. Extending this model to multi-source scenarios is straightforward: we just need to increase the number of channels of the last convolutional layers to the maximum number of concurrent sources \(M\) that the model should be able to localize. Following this approach, after computing the expected value of every one of the \(M\) probability distributions generated by the icoCNN, we obtain a set of \(M\) DOAs that should be considered invariant to the permutations of its elements. In order to incorporate a recurrent neural network (RNN) after the localization model to increase its temporal perceptive field and improve its tracking capabilities, we could concatenate every element of the DOA set into a single vector and use it as the input of a gated recurrent unit (GRU) [6] or a long short-term memory (LSTM) layer [7]. However, we should expect the output of a tracking system to not be affected by the order of the new estimates at every time frame (i.e., to be invariant to the permutations of the input set), and a conventional RNN operating over the concatenation of the estimates would need to learn this property during training instead of having it built into its architecture. In addition, in a tracking system, we can also expect the association of new estimates to the tracked trajectories to be done regardless of their order (i.e., to be equivariant to the permutations of the state set), but the state vector generated by a conventional RNN would contain the information of every tracked trajectory in an unstructured way, so we would not be able to exploit this property either.
In this paper we present a permutation-invariant recurrent neural network (PI-RNN) that takes an unordered set of embeddings as input (each one with the information of one of the sources detected by the localization network) and generates a recursive output, or state, that is also an unordered set of embeddings with the information of every tracked trajectory. As we could expect from a tracking system, the proposed architecture associates the embeddings in the input set to the embeddings of the state set in a way that is invariant to the permutations of the input set and equivariant to the permutations of the state set.
To the best of our knowledge, this is the first recurrent layer that works with sets instead of with vectors. The closest proposal in the literature is probably the TrackFormer [8], a model for multiple object tracking on video signals that is based on the DETR transformer [9, 10], a model for object detection on images. The recursivity of the TrackFormer model is built around the decoder of the DETR transformer by using the output obtained for a video frame as the input for the following frame. Compared with the TrackFormer, the PI-RNN is not a model but a layer that can be integrated easily into many different models. In addition, it is based on an architecture, the conventional GRU, that, unlike the transformer, was designed to be used in recurrent loops.
By taking into account the symmetries of the problem, the proposed PI-RNN, compared with a conventional RNN, scales better with the number of tracked sources and the amount of information stored for each one. Furthermore, we present experiments showing that it can obtain better tracking results than conventional GRUs.
## 2 Network Architecture
Conventional RNNs use a vector \(\mathbf{h}(t)\in\mathbb{R}^{d_{h}}\) to store the tracking state, which is updated at every time frame based on an input vector \(\mathbf{x}(t)\in\mathbb{R}^{d_{x}}\) using fully connected perceptrons whose computational complexity and number of trainable parameters grow linearly with \(d_{x}\) and quadratically with \(d_{h}\). When applied to track up to \(M\) sources, the information of all the sources and tracked trajectories is stored in these vectors without any structure, so there is a trade-off between the number of sources \(M\) that we can track, the amount of information that we store about each one, and the model size and complexity.
In contrast to conventional RNNs, we propose to replace the input and state vectors \(\mathbf{x}(t)\) and \(\mathbf{h}(t)\) with the sets of embeddings \(\mathbf{X}(t)=\{\mathbf{x}_{1}(t),\mathbf{x}_{2}(t),...,\mathbf{x}_{M_{X}}(t)\}\) and \(\mathbf{H}(t)=\{\mathbf{h}_{1}(t),\mathbf{h}_{2}(t),...,\mathbf{h}_{M_{H}}(t)\}\), where every element \(\mathbf{x}_{i}(t)\in\mathbb{R}^{d_{x}}\) and \(\mathbf{h}_{i}(t)\in\mathbb{R}^{d_{h}}\) contains information about a single input detection or a single tracked trajectory, respectively. For the sake of simplicity, we will keep \(M_{X}=M_{H}=M\) and \(d_{x}=d_{h}=d\) during the rest of the paper; however, the proposed architecture can work with \(M_{X}\neq M_{H}\) or even with dynamic values that change over time, and it can be easily extended to configurations with \(d_{x}\neq d_{h}\).
In order to match every new embedding of the input set with the embeddings of the state set, we can use a multi-head attention module [11], which is well known for its use in transformer models and is invariant to the permutation of the elements of its input sets:
\[\mathbf{C}(t)=\mathrm{MultiHead}(\mathbf{H}(t-1), \tag{1}\] \[\mathbf{X}(t)\cup\mathbf{H}(t-1),\] \[\mathbf{X}(t)\cup\mathbf{H}(t-1))\] \[\mathrm{MultiHead}(\mathbf{Q},\mathbf{K},\mathbf{V})=\mathrm{ Concat}(\mathbf{head}_{1},...,\mathbf{head}_{N_{\text{data}}})\] (2) \[\mathbf{head}_{i}=\mathrm{Attention}(\mathbf{Q}\mathbf{W}_{i}^{ \mathbf{Q}},\mathbf{K}\mathbf{W}_{i}^{K},\mathbf{V}\mathbf{W}_{i}^{V}) \tag{3}\]
\[\mathrm{Attention}(\mathbf{Q}_{i},\mathbf{K}_{i},\mathbf{V}_{i})=\mathrm{softmax}\left(\frac{\mathbf{Q}_{i}\mathbf{K}_{i}^{T}}{\sqrt{d_{k}}}\right)\mathbf{V}_{i}, \tag{4}\]

with the \(\mathrm{softmax}(\cdot)\) operating across rows. With this configuration, the generated set \(\mathbf{C}(t)\) is invariant to the permutations of \(\mathbf{X}(t)\) and equivariant to the permutations of \(\mathbf{H}(t-1)\) as we would expect from a tracking system.

Figure 1: Architecture of the proposed permutation invariant recurrent layer.
Finally, as shown in Fig. 1, once we have assigned the input embeddings to their corresponding state embeddings, we can update every element of the state set according to this assignment:
\[\mathbf{h}_{i}(t)=[1-\mathbf{z}_{i}(t)]\odot\mathbf{h}_{i}(t-1)+\tilde{ \mathbf{h}}_{i}(t) \tag{5}\]
\[\mathbf{z}_{i}(t)=\sigma(\mathbf{c}_{i}(t)\mathbf{W}^{z}) \tag{6}\]
\[\tilde{\mathbf{h}}_{i}(t)=\tanh(\mathbf{c}_{i}(t)\mathbf{W}^{h}), \tag{7}\]
where \(\odot\) denotes element-wise vector multiplication and \(\sigma(\cdot),\tanh(\cdot)\) denote sigmoid and hyperbolic tangent functions respectively, applied to each element of their vector arguments. This gated architecture is based on a simplified version of the minimal gated recurrent unit [12], but we could design different architectures based on different conventional recurrent architectures. As in conventional RNNs, the number of trainable parameters grows quadratically with \(d\), but in the case of the PI-GRUs we have \(M\) embeddings of size \(d\) containing the information of every tracked trajectory. Hence, we can expect our model to scale better when we increase the number of sources we want to track or the amount of information that we want to be able to represent for each one of them.
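A minimal sketch of the full recurrent update, Eqs. (1)-(7), using PyTorch's built-in multi-head attention (our reconstruction; `num_heads` is an assumed hyperparameter, and the learned initial state and the resetting of inactive trajectories are omitted):

```python
import torch
import torch.nn as nn

class PIRNNCell(nn.Module):
    """Permutation-invariant recurrent cell operating on sets of embeddings."""

    def __init__(self, d, num_heads=4):
        super().__init__()
        self.att = nn.MultiheadAttention(d, num_heads, batch_first=True)
        self.Wz = nn.Linear(d, d)   # update gate, Eq. (6)
        self.Wh = nn.Linear(d, d)   # candidate state, Eq. (7)

    def forward(self, X, H):
        # X: (B, M_X, d) input set, H: (B, M_H, d) state set.
        KV = torch.cat([X, H], dim=1)      # keys/values: X(t) ∪ H(t-1)
        C, _ = self.att(H, KV, KV)         # queries: H(t-1); Eq. (1)
        z = torch.sigmoid(self.Wz(C))      # Eq. (6)
        h_tilde = torch.tanh(self.Wh(C))   # Eq. (7)
        return (1 - z) * H + h_tilde       # Eq. (5), as given above
```

Because the attention module and the element-wise gate treat the set elements symmetrically, permuting \(\mathbf{X}(t)\) leaves the output unchanged, while permuting \(\mathbf{H}(t-1)\) permutes the output accordingly.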
## 3 Evaluation
### Experiment design
As a preliminary study of the performance of this new architecture, we decided to add a PI-RNN after the icoCNN presented in [5] for single source localization. As shown in Fig. 2, in order to extend the icoCNN to multi-source localization, we just increased the number of output channels from 1 to \(M\). Fig. 3 represents the PI-RNN we used after the icoCNN: we first used a multi-layer perceptron to project every ACCDOA [13] generated by the icoCNN into an embedding of size \(d\) and then we used those embeddings as the input set of our PI-RNN. After the PI-RNN had associated every new estimate from the icoCNN to one of the tracked trajectories, we added a conventional GRU (operating independently over the embedding of every tracked trajectory so it did not break the permutation invariance of the model) and, finally, we used a linear layer to project the \(d\)-size embedding into a 3D ACCDOA. The initial state of every embedding of the state set of the PI-RNN was learned during the training of the model while, at every time frame, the embeddings of all the inactive trajectories were reset (i.e., those that had led to ACCDOAs with a norm lower than 0.5).
The method was compared to two baselines, a) the icoCNN without any kind of recurrent layers, and b) the icoCNN with two conventional GRUs designed to have a similar number of trainable parameters as the evaluated model (see Fig. 4). In order to avoid identity switches (IDSs) in the tracked trajectories, we trained all the models using sliding permutation invariant training (sPIT) [14]. To facilitate the training of the icoCNN, we added an auxiliary frame-level permutation invariant training (fPIT) at its output in the models that included recurrent layers after it.

Figure 2: Architecture of the icoCNN used for evaluation. B is the batch size, T is the number of temporal frames of the acoustic scenes, \(H=2^{r}=8\) and \(W=2^{r+1}=16\) are the height and the width of the projections of the icosahedral grid.

Figure 3: Architecture of the PI-RNN used after the icoCNN in the evaluated model.

Figure 4: Architecture of the conventional RNN used after the icoCNN in the baseline model.
We used the same synthetic dataset as in [14], where acoustic sources randomly appeared and disappeared along 20-second-length scenes. As source signals, we used speech utterances from the LibriSpeech corpus and we simulated them following random trajectories in rooms with reverberation times from \(T_{60}=0.2\) to \(1.3\)s with the image source method. The maximum number of concurrent active sources in a time frame was 3.
We used \(M=10\) as the number of ACCDOA outputs of all our models since we observed that it was beneficial to use a higher number than the maximum possible number of active sources in the dataset (i.e., 3) and we used \(d=128\) as embedding size for the input and state sets of the PI-RNN. This is a preliminary study of this new architecture and further experiments should be conducted for a better optimization of these hyperparameters.
### Results
As we can see in Fig. 5, the proposed PI-RNN clearly outperforms the baselines in terms of localization error and the frequency of identity switches while, as we can see in the detection error tradeoff (DET) curve, the trade-off between false positives and misses remains similar for all the evaluated models. It is worth noting that both the conventional and the permutation-invariant RNNs receive only spatial information about the estimated sources. By modifying the model to include spectral information in their input, we could expect both models to improve their performance, with the PI-RNNs scaling better with the amount of spectral information of each source and therefore being able to better exploit it.
As an example, in Fig. 6 we can see one of the test acoustic scenes. We can see how the output of the icoCNN had a high number of identity switches even when only one source was active but the PI-RNN was able to fix these switches and also reduce the localization error.
Figure 5: Evaluation metrics for proposed PI-RNN and the baseline models.
Figure 6: Example of one of the test acoustic scenes. The solid line represents the ground truth trajectories of the sources, the crosses the detections estimated by the icoCNN (i.e., the input of the PI-RNN), and the dashed line the trajectories estimated by the whole model (i.e., the output of the PI-RNN). The color indicates to which of the outputs they correspond, so the IDSs are visible.
### Attention matrices
We can interpret the attention matrix of the multi-head attention module of the PI-RNN as an assignment matrix where each row indicates which elements of the input and state set were employed to compute each element of the output set.
The attention matrix shown in Fig. 7(a) corresponds to the first frame where a source appeared and we can see how it was detected at the 8th output of the icoCNN (i.e., the 8th input of the PI-RNN) and the PI-RNN assigned it to its 9th output. In the attention matrix of the next time frame (Fig. 7(b)) we can see that the 9th output of the PI-RNN was computed combining the information of the new estimate at that frame with the corresponding recurrent state. A new source was detected by the icoCNN at its 4th output in the time frame corresponding to Fig. 7(c) and it was assigned to the 10th output of the PI-RNN. Finally, in Fig. 7(d) we can see how, after an identity switch at the output of the icoCNN, the PI-RNN was able to assign every new estimate to the correct tracked trajectory, fixing the identity switch.
## 4 Conclusions
We have presented a new RNN architecture whose input and state are represented by sets instead of vectors and that is invariant to the permutations of the elements of the input set and equivariant to the permutations of the elements of the state set. This new architecture is able to exploit the permutation symmetries of the tracking problem and to outperform conventional RNNs in the preliminary experiments presented in this paper. We expect the difference between the performance of the PI-RNNs and the conventional RNNs to become even greater when including more information about every source at their input.
|
2301.02924 | Reducing Over-smoothing in Graph Neural Networks Using Relational
Embeddings | Graph Neural Networks (GNNs) have achieved a lot of success with
graph-structured data. However, it is observed that the performance of GNNs
does not improve (or even worsen) as the number of layers increases. This
effect is known as over-smoothing, which means that the representations of the
graph nodes of different classes would become indistinguishable when stacking
multiple layers. In this work, we propose a new simple, and efficient method to
alleviate the effect of the over-smoothing problem in GNNs by explicitly using
relations between node embeddings. Experiments on real-world datasets
demonstrate that utilizing node embedding relations makes GNN models such as
Graph Attention Network more robust to over-smoothing and achieves better
performance with deeper GNNs. Our method can be used in combination with other
methods to give the best performance. GNN applications are endless and depend
on the user's objective and the type of data that they possess. Solving
over-smoothing issues can potentially improve the performance of models on all
these tasks. | Yeskendir Koishekenov | 2023-01-07T19:26:04Z | http://arxiv.org/abs/2301.02924v1 | # Reducing Over-smoothing in Graph Neural Networks Using Relational Embeddings
###### Abstract
Graph Neural Networks (GNNs) have achieved a lot of success with graph-structured data. However, it is observed that the performance of GNNs does not improve (or even worsen) as the number of layers increases. This effect is known as over-smoothing, which means that the representations of the graph nodes of different classes would become indistinguishable when stacking multiple layers. In this work, we propose a new, simple, and efficient method to alleviate the effect of the over-smoothing problem in GNNs by explicitly using relations between node embeddings. Experiments on real-world datasets demonstrate that utilizing node embedding relations makes GNN models such as Graph Attention Network more robust to over-smoothing and achieves better performance with deeper GNNs. Our method can be used in combination with other methods to give the best performance. GNN applications are endless and depend on the user's objective and the type of data that they possess. Solving over-smoothing issues can potentially improve the performance of models on all these tasks.
University of Amsterdam
[email protected]
## 1 Introduction
Graph neural networks (GNNs) are a family of neural networks that can learn from graph-structured data. Starting with the success of GCN (Kipf and Welling, 2016) in achieving state-of-the-art performance on semi-supervised classification, several variants of GNNs have been developed for this task, including GraphSAGE (Hamilton et al., 2017), GAT (Velickovic et al., 2017), GATv2 (Brody et al., 2021), and EGNN (Satorras et al., 2021), to name a few of the most recent ones.
A key issue with GNNs is their depth limitation. It has been observed that stacking the layers often results in significantly worse performance for GNNs, such as GCN and GAT. One of the factors associated with this performance drop is the phenomenon called _over-smoothing_. The first to call attention to the over-smoothing problem was the work of Li et al. (2018). Having shown that graph convolution is a type of Laplacian smoothing, they proved that after repeatedly applying Laplacian smoothing many times, the features of the nodes in the (connected) graph would converge to similar values. Later, several other works have alluded to the same problem (Li et al., 2019; Luan et al., 2019).
The main research question of this paper is how to keep two node representations distinguishable as we increase their receptive fields. We propose that stressing the difference between two nodes will prevent their embeddings from becoming too similar. The main contributions of this paper are:
* A method to reduce the over-smoothing in GNNs, specifically in GAT, by using not only node embeddings as features but also explicitly using their relations. To validate our method, we use different metrics that quantify the over-smoothing phenomenon.
* We empirically show that deeper GAT equipped with our proposed method improves node classification accuracy in a real-world scenario where graphs have missing node features.
* Improve other approaches to tackling over-smoothing by employing our method.
## 2 Related Work
### Over-smoothing
A straightforward way to reduce the effect of over-smoothing is to simply reduce the number of layers. However, this implies not exploiting the multi-hop information in the case of complex-structured data and consequently limiting end-task performance. Therefore, with over-smoothing as an issue, researchers encounter a trade-off between a low-efficiency model and a model with more depth but less expressivity in terms of node representations. Oono and Suzuki (2019) and Cai and Wang (2020) performed extensive analyses of the expressive power of GNNs.
There have been several attempts to make GNNs more robust to over-smoothing. Xu et al. (2018) introduced Jumping Knowledge Networks, which employ skip connections for multi-hop message passing and also enable different neighborhood ranges. Klicpera et al. (2019) proposed a propagation scheme based on personalized PageRank that ensures locality (via teleports) which in turn prevents over-smoothing. Li et al. (2019) built on ideas from ResNet to use residual as well as dense connections to train deep GCNs. Rong et al. (2019) proposed DropEdge to alleviate over-smoothing through message passing reduction via removing a certain fraction of edges at random from the input graph.
Recently, utilizing normalization layers showed effectiveness in preventing node embeddings from becoming too similar. Zhao and Akoglu (2019) proposed PairNorm, a normalization scheme that ensures that the total pairwise node feature distances remain constant across layers. Zhou et al. (2020) introduced DGN which clusters nodes and prevents distinct groups from having close features.
It is essential to quantify over-smoothing to validate solutions. The main goal is to improve or at least prevent the drop in accuracy as the number of layers increases. However, different factors besides over-smoothing can impact it. Therefore, proposed approaches should be validated on various quantitative metrics that directly measure over-smoothing in graphs such as group distance ratio and Instance information gain (Zhou et al., 2020), or row-diff. and col-diff. (Zhao and Akoglu, 2019).
### Node Relations
In Natural Language Processing tasks such as Natural Language Inference, i.e., understanding entailment and contradiction, sentence embedding relations were used to distinguish two vector representations (Conneau et al., 2017; Nie et al., 2017). For example, Conneau et al. (2017) applied 3 matching methods to extract relations between two sentence embeddings: concatenation of the two representations, element-wise product, and absolute element-wise difference. In a similar manner, Nie et al. (2019), in addition to sentence embeddings, used a fixed set of common pair-wise vector operations: subtraction, multiplication, and average. They showed empirically that non-linear interactions between feature vectors are needed. Motivated by these works, we apply similar techniques in GNNs to disentangle node representations, hence alleviating over-smoothing.
## 3 Preliminaries
In this work, we consider the semi-supervised node classification task as an example and illustrate how to handle the over-smoothing issue. A graph is represented by \(G=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) and \(\mathcal{E}\) represent the sets of nodes and edges, respectively. Each node \(i\in\mathcal{V}\) is associated with a feature vector \(x_{i}\in\mathbb{R}^{d}\), and \(X=[x_{1},...,x_{n}]^{T}\) denotes the feature matrix. A subset \(\mathcal{V}_{l}\subset\mathcal{V}\) of the nodes is labeled, i.e. \(y_{i}\in\{1,...,C\}\) for each \(i\in\mathcal{V}_{l}\), where \(C\) is the number of classes. The task is to learn a hypothesis that predicts \(y_{i}\) from \(x_{i}\) and generalizes to the unlabeled nodes \(\mathcal{V}_{u}=\mathcal{V}\setminus\mathcal{V}_{l}\).
### Graph Neural Network
A GNN layer updates every node representation by aggregating its neighbors' representations. A layer's input is a set of node representations \(\{h_{i}\in\mathbb{R}^{d}\,|\,i\in\mathcal{V}\}\) and the set of edges \(\mathcal{E}\). A layer outputs a new set of node representations \(\{h^{{}^{\prime}}_{i}\in\mathbb{R}^{d}\,|\,i\in\mathcal{V}\}\), where the same parametric function is applied to every node given its neighbors \(\mathcal{N}_{i}=\{j\in\mathcal{V}\,|\,(j,i)\in\mathcal{E}\}\):
\[h^{{}^{\prime}}_{i}=f_{\theta}(h_{i},Aggregate(h_{j}|j\in\mathcal{N}_{i})) \tag{1}\]
The design of \(f\) and \(Aggregate\) is what mostly distinguishes one type of GNN from the other. For example, GAT (Velickovic et al., 2017) instantiates Equation 1 by computing a learned weighted average of the representations of \(\mathcal{N}_{i}\). A scoring function \(e:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) computes a score for every edge \((j,i)\), which indicates the importance of the features of the neighbor \(j\) to the node \(i\):
\[e_{i,j}=\sigma(a^{T}\cdot[Wh_{i}||Wh_{j}]) \tag{2}\]
where \(a\in\mathbb{R}^{2d^{\prime}},W\in\mathbb{R}^{d^{\prime}\times d}\) are parameters learned, \(\sigma\) is a non-linear activation function (e.g. \(LeakyReLU\)), and \(||\) denotes vector concatenation. These attention scores are normalized across all neighbors \(j\in\mathcal{N}_{i}\) using softmax, and the attention function is defined as:
\[\alpha_{ij}=softmax_{j}(e_{i,j})=\frac{exp(e_{i,j})}{\sum_{j^{\prime}\in \mathcal{N}_{i}}exp(e_{i,j^{\prime}})} \tag{3}\]
Then GAT computes a weighted average of the transformed features of the neighbor nodes (followed by nonlinearity \(\sigma\)) as the new representation of \(i\), using the normalized attention coefficients:
\[h^{{}^{\prime}}_{i}=\sigma(\sum_{j\in\mathcal{N}_{i}}\alpha_{ij}\cdot Wh_{j}). \tag{4}\]
## 4 Approach
In this work, we propose to alleviate the over-smoothing problem by utilizing node relations. In the message-passing framework, each node aggregates feature vectors from neighboring nodes via a permutation equivariant function and outputs a message vector. Then it updates its own embedding using this message vector. Traditionally, the message vector is computed as in Equations 2 - 4. We propose to additionally utilize the relation of two node embeddings. In this work, we will empirically search for the best pair-wise vector operation between two embeddings that can help to solve the problem. We extend Equation 2 to utilize the relation of node embeddings as shown below:
\[e_{i,j}=\sigma(a^{T}\,[W^{{}^{\prime}}h_{i}||W^{{}^{\prime}}h_{j}||W^{{}^{ \prime\prime}}\,relation(h_{i},h_{j})]) \tag{5}\]
where \(W^{{}^{\prime}}\) and \(W^{{}^{\prime\prime}}\) parameterize embeddings and their relations. The intuitive explanation of our approach is that utilizing node relation information stresses the difference between two nodes, which in turn will prevent node embeddings from becoming indistinguishable. Analogous approaches were used in the Natural Language Inference task (Conneau et al., 2017; Nie et al., 2019) to distinguish two vector representations.
We experiment with four matching methods, \(relation(h_{i},h_{j})\) in Equation 5, to extract relations between two nodes:
* difference: \(h_{i}-h_{j}\)
* absolute difference: \(|h_{i}-h_{j}|\)
* element-wise product: \(h_{i}*h_{j}\)
* concatenation of absolute difference and element-wise product
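As a concrete illustration, the scoring function of Equation 5 with the absolute-difference relation could be implemented as follows (a sketch under our naming and tensor conventions, not the authors' code); the resulting scores are then normalized per neighborhood with the softmax of Equation 3:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationalAttentionScore(nn.Module):
    """Edge score e_{i,j} of Equation 5 with relation(h_i, h_j) = |h_i - h_j|."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.W_node = nn.Linear(d_in, d_out, bias=False)   # W'  in Equation 5
        self.W_rel = nn.Linear(d_in, d_out, bias=False)    # W'' in Equation 5
        self.a = nn.Linear(3 * d_out, 1, bias=False)       # attention vector a

    def forward(self, h, edge_index):
        # h: (n, d_in) node features; edge_index: (2, E) with edges (j, i), j -> i.
        src, dst = edge_index
        hi, hj = self.W_node(h[dst]), self.W_node(h[src])
        rel = self.W_rel(torch.abs(h[dst] - h[src]))       # relational embedding
        return F.leaky_relu(self.a(torch.cat([hi, hj, rel], dim=-1))).squeeze(-1)
```

The other matching methods only change the `rel` line (e.g., `h[dst] * h[src]` for the element-wise product, or a concatenation of both terms).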
## 5 Experiments
We now empirically evaluate the effectiveness of our method on real-world datasets.
### Experiment Setup
**Datasets.** We use well-known benchmark datasets in the GNN domain: _Cora_, _Pubmed_, and _Citeseer_ (Yang, Cohen, and Salakhudinov, 2016). These citation network datasets contain sparse bag-of-words feature vectors for each document (node) and a list of citation links (edges) between documents. We treat the citation links as (undirected) edges and construct a binary, symmetric adjacency matrix. Each document has a class label. We use the same data splits as Kipf and Welling (2016), where all nodes outside the train and validation sets are used as a test set.
**Implementations.** As a base model, we use GAT (Velickovic et al., 2017). Following previous settings, we choose the hyperparameters of the model and optimizer as follows. We set the number of hidden units to 64, and the number of attention heads in GAT is 1. During training, we use the Adam optimizer (Kingma and Ba, 2014) with a dropout rate of 0.6 and weight decay 5e-4 (\(L_{2}\) regularization). In experiments with PairNorm (Zhao and Akoglu, 2019) and DGN (Zhou et al., 2020), we use exactly the same hyperparameters provided in the authors' papers or released code. We run each experiment on an NVIDIA TITAN RTX within 1000 epochs 4 times with different seeds and report the average performance.
### Measuring Over-smoothing
In addition to standard test accuracy, we use the following metrics to quantify over-smoothing and validate our proposed approach. These metrics consider both pairwise and group information in graphs.
**Row-diff and Col-diff.** Zhao and Akoglu (2019) introduced two metrics to quantify node-wise and feature-wise over-smoothing. The row-diff measure is the average of all pairwise distances between the node features and quantifies node-wise over-smoothing. The col-diff is the average of all pairwise distances between the columns of the representation matrix and quantifies feature-wise over-smoothing.

Figure 1: GAT's test accuracy with an increasing number of layers on the Cora dataset.

Figure 2: The row-diff., col-diff., instance information gain, and group distance ratio of GAT on the Cora dataset with an increasing number of layers.
**Group Distance Ratio and Instance Information Gain.** Zhou et al. (2020) introduced two metrics: the Group Distance Ratio, \(R_{Group}\), and the Instance Information Gain, \(G_{Ins}\). The Group Distance Ratio first clusters nodes of the same class label into a group to formulate the labeled node community, then measures the ratio of the inter-group distance over the intra-group distance in the Euclidean space. A small \(R_{Group}\) leads to the over-smoothing issue where all groups are mixed together. The Instance Information Gain, \(G_{Ins}\), is defined as how much input feature information is contained in the final representation. \(G_{Ins}\) measures the dependency between node feature and representation via their mutual information.
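For reference, the two distance-based metrics can be computed directly from the final representation matrix. A sketch under our naming (X is the \(n\times d\) matrix of node representations; the L1 normalization of the columns is an assumption here):

```python
import torch

def row_diff(X):
    """Average pairwise distance between node representations (rows of X)."""
    return torch.cdist(X, X).mean()

def col_diff(X):
    """Average pairwise distance between L1-normalized feature columns of X."""
    C = X.t()
    C = C / (C.norm(p=1, dim=1, keepdim=True) + 1e-12)
    return torch.cdist(C, C).mean()
```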
### Experiment Analysis
**Choosing effective relational embeddings.** We first show in Figure 1 the test accuracy of GAT on the _Cora_ dataset as we increase the number of layers with different relational embeddings. Employing node embedding relations such as their absolute difference, and the concatenation of the absolute difference with their element-wise product, improves the performance of deeper models. We can notice that the baseline curve is shifted to the right, i.e., towards deeper models. Therefore, we do further analysis focusing on these node relation features.
**Reducing over-smoothing.** We show in Figure 2 the metrics quantifying over-smoothing, i.e., row-diff, col-diff, instance information gain, and group distance ratio, of the GAT model on the _Cora_ dataset as we increase the number of layers with different relational embeddings. Here we observe the same trend as with test accuracy: for all metrics, adding relational embeddings shifts the baseline curves to the right. In other words, the receptive field of the nodes increased while maintaining performance. This indicates that utilizing node relation features such as the absolute difference or the element-wise product of two embeddings yields an improvement over the baseline. The improvement across four different metrics quantifying over-smoothing supports that our method reduces the representation similarity both between groups and between pairs of nodes, hence alleviating the over-smoothness of nodes over a graph.
**Case where deeper is better.** We demonstrated that our method makes deeper models more robust to over-smoothing. However, the overall test accuracy did not improve significantly. This is due to the fact that architectures with no more than 2-4 layers are sufficient for the popular graph benchmark datasets. Our method shows its power in settings where a large number of layers is required to achieve the best performance. One example is the real-world scenario in which a notable portion of the nodes lack feature vectors. This variant of the task is called semi-supervised node classification with missing vectors (Zhao and Akoglu, 2019). In Table 1 we show the globally best test accuracy of GAT on the Cora, Pubmed, and Citeseer datasets, along with the optimal layer number #L, under varying feature missing rates. As the missing rate increases, the GAT model utilizing node relations consistently outperforms its vanilla version by tackling over-smoothing. As mentioned, in the setting with missing feature vectors, deeper models achieve the best performance.
Improving other methods with node relational embeddingsUntil now, we showed that the model robustness
\begin{table}
\begin{tabular}{l|l|c c|c c|c c|c c|c c} & \multicolumn{2}{c|}{Missing Percentage} & \multicolumn{2}{c|}{0} & \multicolumn{2}{c|}{20} & \multicolumn{2}{c|}{40} & \multicolumn{2}{c|}{60} & \multicolumn{2}{c|}{80} & \multicolumn{2}{c}{100} \\ Dataset & \multicolumn{2}{c|}{Relational Embedding} & Acc. & \#L & Acc. & \#L & Acc. & \#L & Acc. & \#L & Acc. & \#L & Acc. & \#L \\ \hline \hline Cora & none & 81.903 & 3 & **81.262** & 2 & 79.05 & 2 & 77.539 & 3 & 74.758 & 4 & 71.639 & 8 \\ & + abs. difference & **82.374** & 2 & 81.25 & 2 & 79.171 & 2 & 77.563 & 4 & 75.0 & 7 & 71.434 & 9 \\ & + abs. diff \& elem. prod. & 81.854 & 3 & 80.936 & 2 & **79.678** & 4 & **77.817** & 4 & **75.459** & 4 & **71.954** & 8 \\ \hline Pubmed & none & **77.34** & 2 & **77.611** & 2 & 77.5 & 7 & 77.095 & 7 & 76.784 & 7 & 70.187 & 8 \\ & + abs. difference & 77.32 & 8 & 77.123 & 2 & 77.465 & 7 & 77.708 & 8 & 77.1 & 8 & 69.822 & 7 \\ & + abs. diff \& elem. prod. & 77.042 & 6 & 77.327 & 8 & **77.519** & 8 & **77.691** & 8 & **77.664** & 10 & **71.334** & 10 \\ \hline Citeseer & none & **68.563** & 2 & 67.187 & 2 & 63.881 & 2 & 60.251 & 4 & 55.006 & 5 & **46.417** & 6 \\ & + abs. difference & 69.025 & 2 & 67.603 & 2 & 63.613 & 3 & **61.119** & 4 & 55.218 & 4 & 46.204 & 6 \\ & + abs. diff \& elem. prod. & 69.256 & 2 & **67.409** & 2 & **64.01** & 2 & 60.537 & 3 & **55.43** & 5 & 45.586 & 6 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of test accuracies of GAT with and without relational embeddings with varying missing percentages on different datasets. #L denotes the optimal layer numbers where the model achieves the highest performance.
\begin{table}
\begin{tabular}{l|l|c|c|c c|c c|c c|c c} & \multicolumn{2}{c|}{Dataset} & \multicolumn{4}{c|}{Cora} & \multicolumn{4}{c|}{Pubmed} & \multicolumn{4}{c}{Citeseer} \\ & Missing Percentage & 0 & & 100 & & 0 & 100 & & 0 & & 100 \\ Method & Relational Embedding & Acc. & \#L & Acc. & \#L & Acc. & \#L & Acc. & \#L & Acc. & \#L & Acc. & \#L \\ \hline \hline PairNorm & none & **78.506** & 2 & 69.5 & 7 & **77.051** & 8 & 73.491 & 16 & 67.15 & 3 & 50.933 & 5 \\ & + abs. difference & 78.433 & 2 & 70.43 & 6 & 76.707 & 8 & 73.345 & 16 & 67.205 & 1 & **51.487** & 5 \\ & + abs. diff \& elem. product & 78.179 & 2 & **70.636** & 4 & 76.848 & 10 & **73.833** & 16 & **67.732** & 1 & 51.321 & 5 \\ \hline DGN & none & 81.093 & 2 & 69.85 & 4 & **77.259** & 4 & 63.481 & 4 & **67.723** & 2 & 48.735 & 4 \\ & + abs. difference & 81.081 & 2 & 70.261 & 4 & 76.707 & 8 & **63.61** & 4 & 67.372 & 1 & **49.926** & 4 \\ & + abs. diff \& elem. product & **81.564** & 2 & **70.563** & 4 & 77.105 & 4 & 61.095 & 4 & 67.464 & 2 & 49.621 & 4 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of test accuracies of GAT with PairNorm and DGN normalizations with and without relational embeddings on different datasets. #L denotes the optimal layer numbers where the model achieves the highest performance.
to over-smoothing increases when it uses relational embeddings. However, the absolute results are not better than those of alternative methods tackling over-smoothing, such as PairNorm [15] or DGN [1]. The strong advantage of our approach is that it can easily be used in combination with other methods to bolster their performance. Table 2 shows the performance of other methods with and without relational embeddings. We can see that our method is most effective in the case of missing node features.
## 6 Conclusion
In this work, we proposed a new simple but effective approach to reducing the impact of over-smoothing on training graph neural networks. We showed that utilizing non-linear interactions between node embeddings, such as the absolute difference and the element-wise product, can mitigate the over-smoothing problem in the Graph Attention Network. To this end, we validated our approach not only on accuracy, but also on over-smoothing-specific metrics such as row-diff., col-diff., instance information gain, and group distance ratio. Our approach also showed its effectiveness in combination with other methods, bolstering their performance.
|
2307.09269 | End-to-End Neural Network Training for Hyperbox-Based Classification | Hyperbox-based classification has been seen as a promising technique in which
decisions on the data are represented as a series of orthogonal,
multidimensional boxes (i.e., hyperboxes) that are often interpretable and
human-readable. However, existing methods are no longer capable of efficiently
handling the increasing volume of data many application domains face nowadays.
We address this gap by proposing a novel, fully differentiable framework for
hyperbox-based classification via neural networks. In contrast to previous
work, our hyperbox models can be efficiently trained in an end-to-end fashion,
which leads to significantly reduced training times and superior classification
results. | Denis Mayr Lima Martins, Christian Lülf, Fabian Gieseke | 2023-07-18T13:52:12Z | http://arxiv.org/abs/2307.09269v2 | # End-to-End Neural Network Training for Hyperbox-Based Classification
###### Abstract
Hyperbox-based classification has been seen as a promising technique in which decisions on the data are represented as a series of orthogonal, multidimensional boxes (i.e., hyperboxes) that are often interpretable and human-readable. However, existing methods are no longer capable of efficiently handling the increasing volume of data many application domains face nowadays. We address this gap by proposing a novel, fully differentiable framework for hyperbox-based classification via neural networks. In contrast to previous work, our hyperbox models can be efficiently trained in an end-to-end fashion, which leads to significantly reduced training times and superior classification results.
## 1 Introduction
Hyperbox-based classification has been widely studied in the context of machine learning and data mining [1, 2, 3]. The goal of the corresponding approaches is to identify/produce a set of hyperboxes (i.e., multidimensional rectangles) that collectively cover the data of interest (e.g., data points belonging to a class of interest in the context of classification scenarios) [4], as shown in Figure 1.
Using hyperboxes to represent regions of interest in the data has various advantages. One of them is that the resulting models can be interpreted more easily. For instance, identifying such hyperboxes allows selecting representative data points or providing user-friendly predicates/decision rules to describe objects belonging to a specific class. While there is no binary tree associated with such decisions, as is the case for decision trees, the "individual rules are often simpler" [1]. Another advantage of simple predicates is the fact that they can give rise to orthogonal range queries in low-dimensional sub-spaces, which can efficiently be supported via indexing structures in the context of modern database management systems [5]. These characteristics make hyperbox-based models promising alternatives to classic, opaque models (e.g., deep neural networks) for data-intense tasks in medicine, healthcare, pharmaceutical, and cybersecurity domains [3].
Figure 1: Hyperbox-based classification for the Iris data set. Only a user-defined target class (black squares) is covered by two axis-aligned boxes.
Among existing approaches, the _patient rule induction method_ (PRIM) [1] and _fuzzy min-max neural networks_ (FMMs) [6] have been the _de facto_ standards for hyperbox-based classification. These approaches are, however, not yet capable of coping with the increasing amounts of data many domains are confronted with. Also, one generally has little to no control over the number, size, and dimensionality of the induced hyperboxes. In particular, current hyperbox-based neural networks [3] rely on non-differentiable modules, which prevents both end-to-end training via gradient-based optimization and the use of modern optimizers (see Table 1).
In this work, we introduce HyperNN, a novel neural network method for hyperbox-based classification that can be trained in an end-to-end fashion. We demonstrate via our experimental analysis that HyperNN achieves competitive if not superior classification performance compared to other state-of-the-art approaches, while reducing both training and inference times. Hence, to the best of our knowledge, this is the first work to propose a fully differentiable, end-to-end approach for hyperbox-based classification, which can be easily adapted via the use of appropriate loss functions and regularizers, and readily combined with modern deep neural networks (e.g., ResNets [7]) for enhanced classification.
## 2 Problem Formulation
Given a \(d\)-dimensional space, a hyperbox \(B=B_{\boldsymbol{\theta}_{m},\boldsymbol{\theta}_{l}}=\{\mathbf{x}\in\mathbb{ R}^{d}\mid\boldsymbol{\theta}_{m}\leq\mathbf{x}\leq\boldsymbol{\theta}_{m}+ \boldsymbol{\theta}_{l}\}\subset\mathbb{R}^{d}\) can be characterized via its minimal point \(\boldsymbol{\theta}_{m}\in\mathbb{R}^{d}\) along with a vector \(\mathbf{0}\leq\boldsymbol{\theta}_{l}\in\mathbb{R}^{d}\) containing the length spans. For a point \(\mathbf{x}\in\mathbb{R}^{d}\), let \(\mathbb{1}_{B}(\mathbf{x})=1\) if \(\mathbf{x}\in B\) and \(\mathbb{1}_{B}(\mathbf{x})=0\), otherwise. Accordingly, for the union \(\mathcal{B}=\bigcup_{k=1}^{M}B_{k}\) of \(M\) hyperboxes \(B_{1},\ldots,B_{M}\), we have \(\mathbb{1}_{\mathcal{B}}(\mathbf{x})=\max(\mathbb{1}_{B_{1}}(\mathbf{x}), \ldots,\mathbb{1}_{B_{M}}(\mathbf{x}))\).
We consider binary classification tasks with training sets of the form \(T=\{(\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{N},y_{N})\}\subset\mathbb{R}^{ d}\times\{0,1\}\), where each instance \(i\) is represented by a feature vector \(\mathbf{x}_{i}\) and an associated class label \(y_{i}\). The goal of the learning process is to find a set \(B_{1},\ldots,B_{M}\) of \(M\) hyperboxes such that the binary classification model \(\mathbb{1}_{\mathcal{B}}:\mathbb{R}^{d}\rightarrow\{0,1\}\) induced by the union \(\mathcal{B}\) of those boxes minimizes \(G(\mathcal{B})=\nicefrac{{1}}{{N}}\sum_{i=1}^{N}\mathcal{L}(\mathbb{1}_{ \mathcal{B}}(\mathbf{x}_{i}),y_{i})\), where \(\mathcal{L}:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\) is a suitable loss function. Here, we use the binary cross entropy (BCE), which leads to \(G_{BCE}(\mathcal{B})=-\nicefrac{{1}}{{N}}\sum_{i=1}^{N}y_{i}\log(\mathbb{1}_{ \mathcal{B}}(\mathbf{x}_{i}))+(1-y_{i})\log(1-\mathbb{1}_{\mathcal{B}}(\mathbf{ x}_{i}))\) as objective.
For the sake of simplicity, this work focuses on binary classification tasks and numerical features. However, our approach can be readily adapted to target other data types such as image and text (with an additional feature extraction step), or alternative tasks such as multi-class classification (by modifying \(\mathcal{L}\)).
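To make the formulation concrete, the following minimal NumPy sketch evaluates the hard (non-differentiable) hyperbox-union classifier \(\mathbb{1}_{\mathcal{B}}\) defined above; all function names and the toy boxes are our own illustrative choices, not part of the paper.

```python
import numpy as np

def in_box(X, theta_m, theta_l):
    """Indicator of a single hyperbox B_{theta_m, theta_l} for a batch X of shape (N, d)."""
    return np.all((X >= theta_m) & (X <= theta_m + theta_l), axis=1)

def in_union(X, boxes):
    """1 if a point lies in at least one of the M hyperboxes, 0 otherwise."""
    return np.max(np.stack([in_box(X, m, l) for m, l in boxes]), axis=0).astype(int)

# toy example with two boxes in d = 2
boxes = [(np.array([0.0, 0.0]), np.array([1.0, 1.0])),
         (np.array([2.0, 2.0]), np.array([0.5, 0.5]))]
X = np.array([[0.5, 0.5], [2.2, 2.1], [1.5, 1.5]])
print(in_union(X, boxes))  # [1 1 0]
```

Because \(\mathbb{1}_{\mathcal{B}}\) is piecewise constant, plugging it directly into \(G_{BCE}\) yields zero gradients almost everywhere, which is exactly the obstacle the differentiable construction in Section 3 removes.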
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline Approach & Training & Large \(d\) & Large \(N\) & End-to-end & Mult. hyperboxes \\ \hline PRIM [4] & Hill climbing & ✗ & ✗ & ✗ & ✓ \\ FMM [6] & Fuzzy membership & ✗ & ✗ & ✗ & ✓ \\
**HyperNN (Ours)** & Gradient-based & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of hyperbox-based classification methods.
## 3 Differentiable Hyperbox-Based Classification
The HyperNN architecture in Figure 2(a) is similar to the one introduced by Simpson [6], where each neuron in the hidden layer represents a hyperbox characterized by two trainable weight vectors (i.e., model parameters) \(\mathbf{\theta}_{m}\in\mathbb{R}^{d}\) and \(\mathbf{\theta}_{l}\in\mathbb{R}^{d}\). Such hidden neurons are named _hyperbox neurons_ hereafter. The number of neurons in the hidden layer corresponds to the maximum number of hyperboxes to be induced, which is controlled by the hyperparameter \(M\).
In a nutshell, the hidden layer is responsible for checking individual hyperbox containment, i.e., each hyperbox neuron checks whether a data instance is covered by its associated hyperbox. The output layer, in turn, consists of a single neuron that checks whether a data instance is contained in _at least one_ of the hyperboxes. The sequence of operations performed by each hyperbox neuron is depicted in Figure 2(b). We detail these operations next.
Let \(h_{\mathcal{B}}\) be a HyperNN network including \(M\) hyperbox neurons \(h_{B_{1}},\ldots,h_{B_{M}}\); see again Figure 2(a). In a first step, for each hyperbox neuron \(h_{B_{k}},1\leq k\leq M\), upper hyperbox bounds are computed as \(\mathbf{\theta}_{u}^{k}=\mathbf{\theta}_{m}^{k}+\mathbf{\theta}_{l}^{k}\), where \(\mathbf{\theta}_{m}^{k}\) and \(\mathbf{\theta}_{l}^{k}\) are the two trainable weight vectors of neuron \(h_{B_{k}}\). Generally, a hyperbox containment check \(h_{B_{k}}(\mathbf{x})\) for a data instance \(\mathbf{x}=[x_{1},\ldots,x_{d}]^{\top}\) could be performed using \(h_{B_{k}}(\mathbf{x})=\mathbb{1}_{B_{k}}(\mathbf{x})\). However, such an indicator function formulation would lead to a gradient of zero during backpropagation, which, in turn, would render gradient-based optimization inapplicable. Instead, we implement the containment check by computing \(\delta_{u}^{k}(\mathbf{x})=\mathbf{\theta}_{u}^{k}-\mathbf{x}\) and \(\delta_{m}^{k}(\mathbf{x})=\mathbf{x}-\mathbf{\theta}_{m}^{k}\).
Note that, for \(\mathbf{x}\) to be covered by the hyperbox represented by neuron \(h_{B_{k}}\), both \(\delta_{m}^{k}(\mathbf{x})\) and \(\delta_{u}^{k}(\mathbf{x})\) must be non-negative for all the \(d\) dimensions. As before, in order to obtain meaningful gradient information in the backpropagation phase, we cannot resort to element-wise step functions to check for this property (i.e., \(S_{j}(z)=1\) if \(z\geq 0\), and \(S_{j}(z)=0\) otherwise, for \(j=1,\ldots,d\)). Instead, we resort to a differentiable surrogate applied to the minimum value (across all \(d\) dimensions) of both \(\delta_{m}^{k}(\mathbf{x})\) and \(\delta_{u}^{k}(\mathbf{x})\), respectively. More precisely, for \(\delta_{m}^{k}(\mathbf{x})\), we implement this check via a generalized sigmoid function:
\[\sigma_{\tau}(\min(\delta^{k}_{m}(\mathbf{x})))=\frac{1}{1+\exp\left(-\min(\delta^{k}_{m}(\mathbf{x}))/\tau\right)},\]
where \(\tau\) is a temperature hyperparameter that controls the smoothness of the containment check. Small values of \(\tau\) lead to a close approximation of the original indicator function \(\mathbb{1}_{B_{k}}(\mathbf{x})\), while still providing valuable gradient information. Accordingly, we implement the upper bound check via \(\sigma_{\tau}(\min(\delta^{k}_{u}(\mathbf{x})))\).

Figure 2: Architecture of HyperNN.
Hence, each hyperbox neuron outputs a value in \([0,1]\) that expresses the degree of containment of \(\mathbf{x}\) within its associated hyperbox.
Likewise, the neural network output \(h_{\mathcal{B}}(\mathbf{x})\) must indicate whether at least one of the hyperboxes represented by the hidden neurons contains the input data point \(\mathbf{x}\). This could be achieved by simply taking the maximum over all the outputs \(h_{B_{1}}(\mathbf{x}),\ldots,h_{B_{M}}(\mathbf{x})\).
However, using the maximum only yields gradient information for a single box. Instead, we resort to a smooth maximum function \(\mathcal{S}_{\phi}\) to conduct this step, where values close to one denote containment of \(\mathbf{x}\), and \(\phi\) controls smoothness of \(\mathcal{S}_{\phi}\), as follows:
\[\mathcal{S}_{\phi}(h_{B_{1}}(\mathbf{x}),\ldots,h_{B_{M}}(\mathbf{x}))=\frac {\sum_{k=1}^{M}h_{B_{k}}(\mathbf{x})\exp({h_{B_{k}}(\mathbf{x})}/\phi)}{\sum_{ k=1}^{M}\exp({h_{B_{k}}(\mathbf{x})}/\phi)}.\]
Overall, we obtain meaningful gradient information via the simple, yet crucial, modifications described above, which allows training the network in an end-to-end fashion.
Training \(h_{\mathcal{B}}\) involves finding, for each neuron \(h_{B_{k}}\), suitable assignments for the associated weight vectors \(\boldsymbol{\theta}^{k}_{m}\) and \(\boldsymbol{\theta}^{k}_{l}\), in order to minimize the loss function introduced in Section 2.
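A minimal PyTorch sketch of the layer described above may help fix ideas. The initialization, the hyperparameter values, and the way the two one-sided checks are combined (here, a product) are our own assumptions for illustration; the reference implementation is linked in Section 4.

```python
import torch
import torch.nn as nn

class HyperNN(nn.Module):
    """Sketch: M hyperbox neurons followed by a smooth maximum."""
    def __init__(self, d, M=10, tau=0.1, phi=0.05):
        super().__init__()
        self.theta_m = nn.Parameter(torch.rand(M, d))   # minimal points (illustrative init)
        self.theta_l = nn.Parameter(torch.rand(M, d))   # length spans
        self.tau, self.phi = tau, phi

    def forward(self, x):                               # x: (N, d)
        theta_l = torch.relu(self.theta_l)              # keep spans non-negative (assumption)
        theta_u = self.theta_m + theta_l                # upper corners
        delta_m = x.unsqueeze(1) - self.theta_m         # (N, M, d)
        delta_u = theta_u - x.unsqueeze(1)              # (N, M, d)
        # soft one-sided checks: sigma_tau(min over the d dimensions)
        lower = torch.sigmoid(delta_m.min(dim=2).values / self.tau)
        upper = torch.sigmoid(delta_u.min(dim=2).values / self.tau)
        h = lower * upper                               # (N, M): degree of containment
        # smooth maximum S_phi over the M hyperbox outputs
        w = torch.softmax(h / self.phi, dim=1)
        return (w * h).sum(dim=1)                       # (N,)

model = HyperNN(d=4)
loss = nn.BCELoss()(model(torch.rand(8, 4)), torch.randint(0, 2, (8,)).float())
loss.backward()   # gradients reach theta_m and theta_l end-to-end
```

The weighted sum in the last two lines of `forward` is exactly the smooth maximum \(\mathcal{S}_{\phi}\) from above, since the softmax weights equal \(\exp(h_{B_{k}}(\mathbf{x})/\phi)/\sum_{k^{\prime}}\exp(h_{B_{k^{\prime}}}(\mathbf{x})/\phi)\).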
## 4 Experiments and Results
We report an experimental design and analysis on several benchmark datasets, with focus on (1) effectiveness of our approach in comparison to widely-used baselines; (2) efficiency in terms of training and inference times; (3) sensitivity to the number of hyperboxes (\(M\)).
### Experimental Design
We consider nine data sets included in the UCI Repository (see Table 2, where \(c\) denotes the number of distinct classes). We employ a "one-versus-all" strategy to transform the original task into a binary classification task. We use a 70/30 ratio to split the data into training and test sets, and evaluate all methods in terms of \(F_{1}\)-score, training time (\(\mathcal{T}_{train}\)), and inference time (\(\mathcal{T}_{pred}\)).
\begin{table}
\begin{tabular}{l r r r} \hline \hline Data set & \(N\) & \(d\) & \(c\) \\ \hline iris & 150 & 4 & 3 \\ wine & 178 & 13 & 3 \\ cancer & 569 & 30 & 2 \\ blood & 748 & 5 & 2 \\ cars & 1,728 & 6 & 4 \\ satimage & 6,430 & 36 & 6 \\ letter & 20,000 & 16 & 26 \\ sensit & 98,528 & 100 & 3 \\ covtype & 581,012 & 54 & 7 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Data Sets.
For comparison, we use the PRIM implementation provided by David Hadka1, and the recent FMM implementation by Thanh Tung Khuat2, while HyperNN is implemented in Python/PyTorch3. In all experiments, we conduct hyperparameter tuning using grid search. Best-performing models are selected via the averaged \(F_{1}\)-score over 5-fold cross-validation. We set the maximum number of training epochs to \(10,000\), with early stopping after 200 epochs without further improvement on a holdout validation data set. For HyperNN, we use the Adam optimizer. All experiments are conducted on an Ubuntu 18.04 server with 24 AMD EPYC 7402P cores, 192 GB RAM, and an NVIDIA GeForce RTX 3090 GPU. In contrast to HyperNN, both PRIM and FMM _do not_ make use of a GPU for fast computations.
Footnote 1: [https://github.com/Project-Platybus/PRIM](https://github.com/Project-Platybus/PRIM)
Footnote 2: [https://github.com/UTS-AAi/comparative-gfmm](https://github.com/UTS-AAi/comparative-gfmm)
Footnote 3: [https://github.com/mlde-ms/hypernn](https://github.com/mlde-ms/hypernn)
### Results
Figure 3 reports results averaged over three runs using different random seeds. Note that we do not report FMM results on the larger data sets, since training had not concluded within a pre-defined time limit of ten hours. Both PRIM and FMM achieve high classification performance in terms of \(F_{1}\)-score on all data sets. For large data sets such as satimage and sensit, however, these results come at the cost of high training times. In contrast, HyperNN shows similar classification performance while keeping training times lower for almost all data sets. For satimage and sensit, HyperNN achieves an \(F_{1}\)-score close to that of PRIM in a fraction of the latter's training time.
We also explore how sensitive HyperNN is to changes in its main hyperparameters. Figure 4 shows the effect of \(M\) in terms of \(F_{1}\)-score, \(\mathcal{T}_{train}\), and \(\mathcal{T}_{pred}\), where HyperNN shows a stable scalability and generalization performance for an increasing \(M\). For small datasets, such as iris, wine, and cancer, increasing \(M\)
Figure 3: Mean \(F_{1}\)-score (above) and \(\mathcal{T}_{train}\) (below) obtained in our experiments.
brings almost no benefit in terms of \(F_{1}\)-score. In contrast, for letter, sensit, and covtype, a high \(M\) rapidly improves classification performance, at a cost of higher training and prediction times. However, for blood, increasing \(M\) from 10 to 20 decreases \(F_{1}\)-score due to overfitting. Such a degradation in classification performance could be alleviated by, e.g., an adaptive training procedure where \(M\) is adapted (i.e., increased or decreased) if the validation loss deteriorates.
## 5 Conclusion
We propose HyperNN, a fully differentiable approach for hyperbox-based classification. We provide an efficient, GPU-ready implementation that produces highly competitive models in terms of both classification and runtime performance when compared to state-of-the-art techniques such as PRIM and FMM. As future work, we plan to apply HyperNN to image data, in combination with other modern deep learning models (e.g., CNNs, ResNets), where both suitable features and hyperboxes must be learned jointly in an end-to-end fashion.
|
2305.15987 | A graphon-signal analysis of graph neural networks | We present an approach for analyzing message passing graph neural networks
(MPNNs) based on an extension of graphon analysis to a so called graphon-signal
analysis. A MPNN is a function that takes a graph and a signal on the graph (a
graph-signal) and returns some value. Since the input space of MPNNs is
non-Euclidean, i.e., graphs can be of any size and topology, properties such as
generalization are less well understood for MPNNs than for Euclidean neural
networks. We claim that one important missing ingredient in past work is a
meaningful notion of graph-signal similarity measure, that endows the space of
inputs to MPNNs with a regular structure. We present such a similarity measure,
called the graphon-signal cut distance, which makes the space of all
graph-signals a dense subset of a compact metric space -- the graphon-signal
space. Informally, two deterministic graph-signals are close in cut distance if
they ``look like'' they were sampled from the same random graph-signal model.
Hence, our cut distance is a natural notion of graph-signal similarity, which
allows comparing any pair of graph-signals of any size and topology. We prove
that MPNNs are Lipschitz continuous functions over the graphon-signal metric
space. We then give two applications of this result: 1) a generalization bound
for MPNNs, and, 2) the stability of MPNNs to subsampling of graph-signals. Our
results apply to any regular enough MPNN on any distribution of graph-signals,
making the analysis rather universal. | Ron Levie | 2023-05-25T12:27:35Z | http://arxiv.org/abs/2305.15987v2 | # A graphon-signal analysis of graph neural networks
###### Abstract
We present an approach for analyzing message passing graph neural networks (MPNNs) based on an extension of graphon analysis to a so called graphon-signal analysis. A MPNN is a function that takes a graph and a signal on the graph (a graph-signal) and returns some value. Since the input space of MPNNs is non-Euclidean, i.e., graphs can be of any size and topology, properties such as generalization are less well understood for MPNNs than for Euclidean neural networks. We claim that one important missing ingredient in past work is a meaningful notion of graph-signal similarity measure, that endows the space of inputs to MPNNs with a regular structure. We present such a similarity measure, called the graphon-signal cut distance, which makes the space of all graph-signals a dense subset of a compact metric space - the graphon-signal space. Informally, two deterministic graph-signals are close in cut distance if they "look like" they were sampled from the same random graph-signal model. Hence, our cut distance is a natural notion of graph-signal similarity, which allows comparing any pair of graph-signals of any size and topology. We prove that MPNNs are Lipschitz continuous functions over the graphon-signal metric space. We then give two applications of this result: 1) a generalization bound for MPNNs, and, 2) the stability of MPNNs to subsampling of graph-signals. Our results apply to any regular enough MPNN on any distribution of graph-signals, making the analysis rather universal.
## 1 Introduction
In recent years, the need to accommodate non-regular structures in data science has brought a boom in machine learning methods on graphs. Graph deep learning (GDL) has already made a significant impact on the applied sciences and industry, with ground-breaking achievements in computational biology [16, 2, 27, 9], and a wide adoption as a general-purpose tool in social media, e-commerce, and online marketing platforms, among others. These achievements pose exciting theoretical challenges: can the success of GDL models be grounded in solid mathematical frameworks? Since the input space of a GDL model is non-Euclidean, i.e., graphs can be of any size and any topology, less is known about GDL than standard neural networks. We claim that contemporary theories of GDL are missing an important ingredient: meaningful notions of metric on the input space, namely, graph similarity measures that are defined for _all graphs of any size_, which respect and describe in some sense the behavior of GDL models. In this paper, we aim at providing an analysis of GDL by introducing such appropriate metrics, using _graphon theory_.
A graphon is an extension of the notion of a graph, where the node set is parameterized by a probability space instead of a finite set. Graphons can be seen as limit objects of graphs, as the number of nodes increases to infinity, under an appropriate metric. One result from graphon theory (that reformulates Szemeredi's regularity lemma from discrete mathematics) states that any sufficiently large graph behaves as if it was randomly sampled from a stochastic block model with a fixed number of classes. This result poses an "upper bound" on the complexity of graphs: while deterministic large graphs may appear to be complex and intricate, they are actually approximately regular and behave random-like.
In this paper we extend this regularity result to an appropriate setting for message passing neural networks (MPNNs), a popular GDL model. Since MPNNs take as input a graph with a signal defined over the nodes (a graph-signal), we extend graphon theory from a theory of graphs to a theory of graph-signals. We define a metric, called the _graph-signal cut distance_, and formalize regularity statements for MPNNs of the following sort.
(1) Any deterministic graph-signal behaves as if it was randomly sampled from a stochastic block model, where the number of blocks only depends on how much we want the graph-signal to look random-like, and not on the graph-signal itself.
(2) If two graph-signals behave as if they were sampled from the same stochastic block model, then any (regular enough) MPNN attains approximately the same value on both.
Formally, (1) is proven by extending Szemeredi's weak regularity lemma to graphon-signals. As a result of this new version of the regularity lemma, we show that the space of graph-signals is a dense subset of the space of graphon-signals, which is shown to be compact. Point (2) is formalized by proving that MPNNs with Lipschitz continuous message functions are Lipschitz continuous mappings from the space of graph-signals to an output space, in the graphon-signal cut distance.
We argue that the above regularity result is a powerful property of MPNNs. To illustrate this, we use the new regularity result to prove two corollaries. First, a generalization bound of MPNNs, showing that if the learned MPNN performs well on the training graph-signals, it is guaranteed to also perform well on test graph-signals. This is shown by first bounding the covering number of the graphon-signal space, and then using the Lipschitzness of MPNNs. Second, we prove that MPNNs are stable to graph-signal subsampling. This is done by first showing that randomly subsampling a graphon-signal produces a graph-signal which is close in cut distance to the graphon-signal, and then using the Lipschitzness of MPNNs.
As opposed to past works that analyze MPNNs using graphon analysis, we do not assume any generative model on the data. Our results apply to any regular enough MPNN on any distribution of graph-signals, making the analysis rather universal.
The problem with graph-signal domains. Since the input space of MPNNs is non-Euclidean, results like universal approximation theorems and generalization bounds are less well developed for MPNNs than for Euclidean deep learning models. For example, analyses such as the one in [6] are limited to graphs of fixed sizes, seen as adjacency matrices. The graph metric induced by the Euclidean metric on adjacency matrices is called _edit-distance_. This reduction of the graph problem to the Euclidean case does not describe the full complexity of the problem. Indeed, the edit-distance is defined for weighted graphs, and non-isomorphic simple graphs are always far apart in this metric. This is an unnatural description of the reality of machine learning on graphs, where different large non-isomorphic simple graphs can describe the same large-scale phenomenon and have similar outputs for the same MPNN.
Other papers that consider graphs of arbitrary but bounded size are based on taking the union of the Euclidean edit-distance spaces up to a certain graph size [3]. If one omits the assumption that all graphs are limited by a predefined size, the edit-metric becomes non-compact - a topology too fine to explain the behavior of real MPNNs. For example, two graphs with different number of nodes are always far apart in edit-distance, while most MPNN architectures in practice are not sensitive to the addition of one node to a large graph. In [18], the expressivity of GNNs is analyzed on spaces of graphons. It is assumed that graphons are Lipschitz continuous kernels. The metric on the graphon space is taken as the \(L_{\infty}\) distance between graphons as functions. We claim that the Lipschitz continuity of the graphons in [18], the choice of the \(L_{\infty}\) metric, and the choice of an arbitrary compact subset therein, are not justified as natural models for graphs, and are not grounded in theory. Note that graphon analysis is measure theoretic, and results like the regularity lemma are no longer true when requiring Lipschitz continuity for the graphons. Lastly, in papers like [30, 17, 25, 26], the data is assumed to be generated by one, or a few graphons,
which limits the data distribution significantly. We claim that this discrepancy between theory and practice is an artifact of the inappropriate choices of the metric on the space of graphs, and the choice of a limiting generative model for graphs.
## 2 Background
For \(n\in\mathbb{N}\), we denote \([n]=\{1,\ldots,n\}\). We denote the Lebesgue \(p\) space over the measure space \(\mathcal{X}\) by \(\mathcal{L}^{p}(\mathcal{X})\), or, in short, \(\mathcal{L}^{p}\). We denote by \(\mu\) the standard Lebesgue measure on \([0,1]\). A _partition_ is a sequence \(\mathcal{P}_{k}=\{P_{1},\ldots,P_{k}\}\) of disjoint measurable subsets of \([0,1]\) such that \(\bigcup_{j=1}^{k}P_{j}=[0,1]\). The partition is called _equipartition_ if \(\mu(P_{i})=\mu(P_{j})\) for every \(i,j\in[k]\). We denote the indicator function of a set \(S\) by \(\mathds{1}_{S}\). See Appendix A for more details.
### Message passing neural networks
Most graph neural networks used in practice are special cases of MPNNs (see [14] and [10] for a list of methods). MPNNs process graphs with node features by repeatedly updating the feature at each node using the information from its neighbors. The information is sent between the different nodes along the edges of the graph, and hence, this process is called _message passing_. Each node merges all messages sent from its neighbors using an _aggregation scheme_, where typical choices are to sum, average, or take the coordinate-wise maximum of the messages. In this paper we focus on normalized sum aggregation (see Section 4.1). For more details on MPNNs we refer the reader to Appendix E.
### Szemeredi weak regularity lemma
The following is taken from [12, 24]. Let \(G=\{V,E\}\) be a simple graph with nodes \(V\) and edges \(E\). For any two subsets \(U,S\subset V\), denote the number of edges with one endpoint in \(U\) and the other in \(S\) by \(e_{G}(U,S)\). Let \(\mathcal{P}=\{V_{1},\ldots,V_{k}\}\) be a partition of \(V\). The partition is called _equipartition_ if \(||V_{i}|-|V_{j}||\leq 1\) for every \(i,j\in[k]\). Given two node sets \(U,S\subset V\), if the edges between each pair of classes \(V_{i}\) and \(V_{j}\) were random, we would expect the number of edges of \(G\) connecting \(U\) and \(S\) to be close to the expected value \(e_{\mathcal{P}}(U,S):=\sum_{i=1}^{k}\sum_{j=1}^{k}\frac{e_{G}(V_{i},V_{j})}{|V_{i}||V_{j}|}\,|V_{i}\cap U|\,|V_{j}\cap S|\). Hence, the _irregularity_, which measures how non-random-like the edges between \(\{V_{j}\}_{j=1}^{k}\) are, is defined to be
\[\operatorname{irreg}_{G}(\mathcal{P})=\max_{U,S\subset V}|e_{G}(U,S)-e_{ \mathcal{P}}(U,S)|\,/\,|V|^{2}\,. \tag{1}\]
**Theorem 2.1** (Weak Regularity Lemma [12]).: _For every \(\epsilon>0\) and every graph \(G=(V,E)\), there is an equipartition \(\mathcal{P}=\{V_{1},\ldots,V_{k}\}\) of \(V\) into \(k\leq 2^{c/\epsilon^{2}}\) classes such that \(\operatorname{irreg}_{G}(\mathcal{P})\leq\epsilon\). Here, \(c\) is a universal constant that does not depend on \(G\) and \(\epsilon\)._
Theorem 2.1 asserts that we can represent any large graph \(G\) by a smaller, coarse-grained version of it: the weighted graph \(G^{\epsilon}\) with node set \(V^{\epsilon}=\{V_{1},\ldots,V_{k}\}\), where the edge weight between the nodes \(V_{i}\) and \(V_{j}\) is \(\frac{e_{G}(V_{i},V_{j})}{|V_{i}||V_{j}|}\). The "large-scale" structure of \(G\) is given by \(G^{\epsilon}\), and the number of edges between any two subsets of nodes \(U_{i}\subset V_{i}\) and \(U_{j}\subset V_{j}\) is close to the "expected value" \(e_{\mathcal{P}}(U_{i},U_{j})\). Hence, the deterministic graph \(G\) "behaves" as if it was randomly sampled from \(G^{\epsilon}\).
### graphon analysis
A graphon [4, 23] can be seen as a weighted graph with a "continuous" node set, or more accurately, the nodes are parameterized by an atomless standard probability space called the _graphon domain_. Since all such graphon domains are equivalent to \([0,1]\) with the standard Lebesgue measure (up to a measure preserving bijection), we take \([0,1]\) as the node set. The space of graphons \(\mathcal{W}_{0}\) is defined to be the set of all measurable symmetric function \(W:[0,1]^{2}\to[0,1]\), \(W(x,y)=W(y,x)\)
The edge weight \(W(x,y)\) of a graphon \(W\in\mathcal{W}_{0}\) can be seen as the probability of having an edge between the nodes \(x\) and \(y\).
Graphs can be seen as special graphons. Let \(\mathcal{I}_{m}=\{I_{1},\ldots,I_{m}\}\) be an _interval equipartition_: a partition of \([0,1]\) into intervals of equal length. The graph \(G=\{V,E\}\) with adjacency matrix \(A=\{a_{i,j}\}_{i,j=1}^{m}\)_induces_ the graphon \(W_{G}\), defined by \(W_{G}(x,y)=a_{\lceil xm\rceil,\lceil ym\rceil}\)1. Note that \(W_{G}\) is piecewise constant on the partition \(\mathcal{I}_{m}\). We hence identify graphs with their induced graphons. A graphon can also be seen as a generative model of graphs. Given a graphon \(W\), a corresponding random graph is generated by sampling i.i.d. nodes \(\{X_{n}\}\) from the graphon domain, and connecting each pair \(X_{n},X_{m}\) with probability \(W(X_{n},X_{m})\) to obtain the edges of the graph.
Footnote 1: In the definition of \(W_{G}\), the convention is that \(\lceil 0\rceil=1\).
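For illustration, the induced graphon \(W_{G}\) is straightforward to evaluate; a short Python sketch (identifiers and the toy example are ours):

```python
import numpy as np

def induced_graphon(A):
    """Return W_G with W_G(x, y) = a_{ceil(x m), ceil(y m)} for an m x m adjacency matrix A."""
    m = A.shape[0]
    def W(x, y):
        i = max(int(np.ceil(x * m)), 1) - 1   # convention: ceil(0) = 1
        j = max(int(np.ceil(y * m)), 1) - 1
        return A[i, j]
    return W

A = np.array([[0, 1], [1, 0]])    # a single edge between two nodes
W = induced_graphon(A)
print(W(0.25, 0.75))              # 1: the block I_1 x I_2 carries the edge weight
```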
### Regularity lemma for graphons
A simple way to formulate the regularity lemma in the graphon language is via stochastic block models (SBM). A SBM is a piecewise constant graphon, defined on a partition of the graphon domain \([0,1]\). The _number of classes_ of the SBM is defined to be the number of sets in the partition. A SBM is seen as a generative model for graphs, where graphs are randomly sampled from the graphon underlying the SBM, as explained above. Szemeredi weak regularity lemma asserts that for any error tolerance \(\epsilon\), there is a number of classes \(k\), such that any deterministic graph (of any size and topology) behaves as if it was randomly sampled from a SBM with \(k\) classes, up to error \(\epsilon\). Hence, in some sense, every graph is approximately _quasi-random_.
To write the weak regularity lemma in the graphon language, the notion of irregularity (Equation (1)) is extended to graphons. For any measurable \(W:[0,1]^{2}\to\mathbb{R}\) the _cut norm_ is defined to be
\[\|W\|_{\square}=\sup_{U,S\subset[0,1]}\left|\int_{U\times S}W(x,y)dxdy\right|,\]
where \(U,S\subset[0,1]\) are measurable. It can be verified that the irregularity (Equation (1)) is equal to the cut norm of a difference between graphons induced by adequate graphs. The cut metric between two graphons \(W,V\in\mathcal{W}_{0}\) is defined to be \(d_{\square}(W,V)=\|W-V\|_{\square}\). The _cut distance_ is defined to be
\[\delta_{\square}(W,V)=\inf_{\phi\in S_{[0,1]}}\|W-V^{\phi}\|_{\square},\]
where \(S_{[0,1]}\) is the space of measure preserving bijections \([0,1]\to[0,1]\), and \(V^{\phi}(x,y)=V(\phi(x),\phi(y))\) (see Section 3.1 and Appendix A.3 for more details). The cut distance is a pseudo metric on the space of graphons. By considering equivalence classes of graphons with zero cut distance, we can construct a metric space \(\widetilde{\mathcal{W}}_{0}\) for which \(\delta_{\square}\) is a metric. The following version of the weak regularity lemma is from [24, Lemma 7].
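The supremum in the cut norm ranges over all measurable sets, but for step graphons (see Theorem 2.2 below) the integral is bilinear in the masses assigned to the partition classes, so it is maximized on unions of classes and can be evaluated by exhaustive search when the number of blocks is small. A sketch under these assumptions (identifiers ours):

```python
import itertools
import numpy as np

def cut_norm_step(C, mu):
    """Cut norm of a step function with k x k block values C and block measures mu.
    For step functions the supremum is attained on unions of blocks, so we enumerate."""
    k = len(mu)
    best = 0.0
    for U in itertools.product([0, 1], repeat=k):
        for S in itertools.product([0, 1], repeat=k):
            u, s = np.array(U) * mu, np.array(S) * mu
            best = max(best, abs(u @ C @ s))
    return best

# d_square between two stochastic block models on the same 3-block equipartition
mu = np.full(3, 1.0 / 3)
W = np.array([[0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9]])
V = np.full((3, 3), 0.3)
print(cut_norm_step(W - V, mu))
```

Note that this computes \(d_{\square}\) for one fixed alignment of the two step graphons; the cut distance \(\delta_{\square}\) additionally minimizes over measure preserving rearrangements.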
**Theorem 2.2**.: _For every graphon \(W\in\mathcal{W}_{0}\) and \(\epsilon>0\) there exists a step graphon \(W^{\prime}\in\mathcal{W}_{0}\) with respect to a partition of at most \(\lceil 2^{c/\epsilon^{2}}\rceil\) sets such that \(\delta_{\square}(W,W^{\prime})\leq\epsilon\), for some universal constant \(c\)._
The exact definition of a step graphon is given in Definition 3.3. It is possible to show, using Theorem 2.2, that \(\widetilde{\mathcal{W}}_{0}\) is a compact metric space [24, Lemma 8]. Instead of recalling this construction here, we refer to Section 3.4 for the extension of this construction to graphon-signals.
## 3 Graphon-signal analysis
A graph-signal \((G,\mathbf{f})\) is a graph \(G\), that may be weighted or simple, with node set \([n]\), and a signal \(\mathbf{f}\in\mathbb{R}^{n\times k}\) that assigns the value \(f_{j}\in\mathbb{R}^{k}\) for every node \(j\in[n]\). A graphon-signal will be defined in Section 3.1 similarly to a graph-signal, but over the node set \([0,1]\). In this section, we show how to extend classical results in graphon theory to a so called graphon-signal theory. All proofs are given in the appendix.
### The graphon signal space
For any \(r>0\), define the _signal space_
\[\mathcal{L}_{r}^{\infty}[0,1]:=\{f\in\mathcal{L}^{\infty}[0,1]\ |\ \forall x\in[0,1],\ \ |f(x)|\leq r\}. \tag{2}\]
We define the following "norm" on \(\mathcal{L}_{r}^{\infty}[0,1]\) (which is not a vector space).
**Definition 3.1** (Cut norm of a signal).: _For a signal \(f:[0,1]\to\mathbb{R}\), the cut norm \(\|f\|_{\square}\) is defined as_
\[\|f\|_{\square}:=\sup_{S\subseteq[0,1]}\bigg{|}\int_{S}f(x)d\mu(x)\bigg{|}, \tag{3}\]
_where the supremum is taken over the measurable subsets \(S\subset[0,1]\)._
In Appendix A.2 we prove basic properties of signal cut norm. One important property is the equivalence of the signal cut norm to the \(L_{1}\) norm
\[\forall f\in\mathcal{L}_{r}^{\infty}[0,1],\quad\|f\|_{\square}\leq\|f\|_{1} \leq 2\|f\|_{\square}.\]
Given a bound \(r\) on the signals, we define the space of _graphon-signals_ to be the set of pairs \(\mathcal{WL}_{r}:=\mathcal{W}_{0}\times\mathcal{L}_{r}^{\infty}[0,1]\). We define the _graphon-signal cut norm_, for measurable \(W,V:[0,1]^{2}\to\mathbb{R}\) and \(f,g:[0,1]\to\mathbb{R}\), by
\[\|(W,f)\|_{\square}=\|W\|_{\square}+\|f\|_{\square}.\]
We define the _graphon-signal cut metric_ by \(d_{\square}\big{(}(W,f),(V,g)\big{)}=\|(W,f)-(V,g)\|_{\square}\).
We next define a pseudo metric that makes the space of graphon-signals a compact space. Let \(S^{\prime}_{[0,1]}\) be the set of measurable measure preserving bijections between co-null sets of \([0,1]\), namely,
\[S^{\prime}_{[0,1]}=\{\phi:A\to B\ |\ A,B\ \text{co-null in}\ [0,1],\ \text{ and }\ \forall S\in A,\ \mu(S)=\mu(\phi(S))\},\]
where \(\phi\) is a measurable bijection and \(A,B,S\) are measurable. For \(\phi\in S^{\prime}_{[0,1]}\), we define \(W^{\phi}(x,y):=W(\phi(x),\phi(y))\), and \(f^{\phi}(z)=f(\phi(z))\). Note that \(W^{\phi}\) and \(f^{\phi}\) are only defined up to a null set, and we arbitrarily set \(W,W^{\phi},f\) and \(f^{\phi}\) to \(0\) on their respective null sets, which does not affect our analysis. Define the _cut distance_ between two graphon-signals \((W,f),(V,g)\in\mathcal{WL}_{r}\) by
\[\delta_{\square}\big{(}(W,f),(V,g)\big{)}=\inf_{\phi\in S^{\prime}_{[0,1]}}d _{\square}\big{(}(W,f),(V,g)^{\phi}\big{)}. \tag{4}\]
Here, \((V,g)^{\phi}:=(V^{\phi},g^{\phi})\). More details on this construction are given in Appendix A.3.
The graphon-signal cut distance \(\delta_{\square}\) is a pseudo-metric, and can be made into a metric by introducing the equivalence relation: \((W,f)\sim(V,g)\) if \(\delta_{\square}((W,f),(V,g))=0\). The quotient space \(\widetilde{\mathcal{WL}_{r}}:=\mathcal{WL}_{r}/\sim\) of equivalence classes \([(W,f)]\) of graphon-signals \((W,f)\) is a metric space with the metric \(\delta_{\square}([(W,f)],[(V,g)])=\delta_{\square}((W,f),(V,g))\). By abuse of terminology, we also call elements of \(\widetilde{\mathcal{WL}_{r}}\) graphon-signals. A graphon-signal in \(\widetilde{\mathcal{WL}_{r}}\) is defined irrespective of a specific "indexing" of the nodes in \([0,1]\).
### Induced graphon-signals
Any graph-signal can be identified with a corresponding graphon-signal as follows.
**Definition 3.2**.: _Let \((G,\mathbf{f})\) be a graph-signal with node set \([n]\) and adjacency matrix \(A=\{a_{i,j}\}_{i,j\in[n]}\). Let \(\{I_{k}\}_{k=1}^{n}\) with \(I_{k}=[(k-1)/n,k/n)\) be the equipartition of \([0,1]\) into \(n\) intervals. The graphon-signal \((W,f)_{(G,\mathbf{f})}=(W_{G},f_{\mathbf{f}})\) induced by \((G,\mathbf{f})\) is defined by_
\[W_{G}(x,y)=\sum_{i,j=1}^{n}a_{ij}\mathds{1}_{I_{i}}(x)\mathds{1}_{I_{j}}(y),\quad\text{ and }\quad f_{\mathbf{f}}(z)=\sum_{i=1}^{n}f_{i}\mathds{1}_{I_{i}}(z).\]
We denote \((W,f)_{(G,\mathbf{f})}=(W_{G},f_{\mathbf{f}})\). We identify any graph-signal with its induced graphon-signal. This way, we define the cut distance between a graph-signal and a graphon-signal. As before, the cut distance between a graph-signal \((G,\mathbf{f})\) and a graphon-signal \((W,g)\) can be interpreted as how much the deterministic graph-signal \((G,\mathbf{f})\) "looks like" it was randomly sampled from \((W,g)\).
### graphon-signal regularity lemma
To formulate our regularity lemma, we first define spaces of step functions.
**Definition 3.3**.: _Given a partition \(\mathcal{P}_{k}\), and \(d\in\mathbb{N}\), we define the space \(\mathcal{S}^{d}_{\mathcal{P}_{k}}\) of step functions of dimension \(d\) over the partition \(\mathcal{P}_{k}\) to be the space of functions \(F:[0,1]^{d}\to\mathbb{R}\) of the form_
\[F(x_{1},\ldots,x_{d})=\sum_{j=(j_{1},\ldots,j_{d})\in[k]^{d}}c_{j}\prod_{l=1}^{ d}\mathds{1}_{P_{j_{l}}}(x_{l}), \tag{5}\]
_for any choice of \(\{c_{j}\in\mathbb{R}\}_{j\in[k]^{d}}\)._
We call any element of \(\mathcal{W}_{0}\cap\mathcal{S}^{2}_{\mathcal{P}_{k}}\) a _step graphon_ with respect to \(\mathcal{P}_{k}\). A step graphon is also called a _stochastic block model (SBM)_. We call any element of \(\mathcal{L}^{\infty}_{r}[0,1]\cap\mathcal{S}^{1}_{\mathcal{P}_{k}}\) a _step signal_. We also call \([\mathcal{WL}_{r}]_{\mathcal{P}_{k}}:=(\mathcal{W}_{0}\cap\mathcal{S}^{2}_{ \mathcal{P}_{k}})\times(\mathcal{L}^{\infty}_{r}[0,1]\cap\mathcal{S}^{1}_{ \mathcal{P}_{k}})\) the space of SBMs with respect to \(\mathcal{P}_{k}\).
In Appendix B.2 we give a number of versions of the graphon-signal regularity lemma. Here, we show one version in which the partition is fixed regardless of the graphon-signal.
**Theorem 3.4** (Regularity lemma for graphon-signals - equipartition).: _For any \(c>1\), and any sufficiently small \(\epsilon>0\), for every \(n\geq 2^{\lceil 2c/\epsilon^{2}\rceil}\) and every \((W,f)\in\mathcal{WL}_{r}\), there exists a step graphon-signal \((W_{n},f_{n})\in[\mathcal{WL}_{r}]_{\mathcal{I}_{n}}\) such that_
\[\delta_{\square}\big{(}(W,f),(W_{n},f_{n})\big{)}\leq\epsilon, \tag{6}\]
_where \(\mathcal{I}_{n}\) is the equipartition of \([0,1]\) into \(n\) intervals._
By identifying graph-signals with their induced graphon-signals, Equation (6) shows that the space of graph-signals is dense in the space of graphon-signals with cut distance. Similarly to the classical case, Theorem 3.4 is interpreted as follows. While deterministic graph-signals may seem intricate and complex, they are actually regular, and "look like" random graph-signals that were sampled from SBMs, where the number of blocks of the SBM only depends on the desired approximation error between the SBM and the graph-signal, and not on the graph-signal itself.
### Compactness of the graphon-signal space and its covering number
We prove that \(\widetilde{\mathcal{WL}_{r}}\) is compact using Theorem 3.4, similarly to [24, Lemma 8]. Moreover, we can bound the number of balls of radius \(\epsilon\) required to cover \(\widetilde{\mathcal{WL}_{r}}\).
**Theorem 3.5**.: _The metric space \((\widetilde{\mathcal{WL}_{r}},\delta_{\square})\) is compact. Moreover, given \(r>0\) and \(c>1\), for every sufficiently small \(\epsilon>0\), the space \(\widetilde{\mathcal{WL}_{r}}\) can be covered by_
\[\kappa(\epsilon)=2^{k^{2}} \tag{7}\]
_balls of radius \(\epsilon\), where \(k=\lceil 2^{2c/\epsilon^{2}}\rceil\)._
The proof of Theorem 3.5 is given in Appendix C. This is a powerful result: the space of arbitrarily large graph-signals is dense in the "small" space \(\widetilde{\mathcal{WL}_{r}}\). We will use this property in Section 4.3 to prove a generalization bound for MPNNs.
### Graphon-signal sampling lemmas
In this section we prove that randomly sampling a graphon signal produces a graph-signal that is close in cut distance to the graphon signal. Let us first describe the sampling setting. More details on the construction are given in Appendix D.1. Let \(\Lambda=(\lambda_{1},\ldots\lambda_{k})\in[0,1]^{k}\) be \(k\) independent uniform random samples from \([0,1]\), and \((W,f)\in\mathcal{WL}_{r}\). We define the _random weighted graph_\(W(\Lambda)\) as the weighted graph with \(k\) nodes and edge weight \(w_{i,j}=W(\lambda_{i},\lambda_{j})\) between node \(i\) and node \(j\). We similarly define the _random sampled signal_\(f(\Lambda)\) with value \(f_{i}=f(\lambda_{i})\) at each node \(i\). Note that \(W(\Lambda)\) and \(f(\Lambda)\) share the sample points \(\Lambda\). We then define a random simple graph as follows. We treat each \(w_{i,j}=W(\lambda_{i},\lambda_{j})\) as the parameter of a Bernoulli variable \(e_{i,j}\), where \(\mathbb{P}(e_{i,j}=1)=w_{i,j}\) and \(\mathbb{P}(e_{i,j}=0)=1-w_{i,j}\). We define the _random simple graph_\(\mathbb{G}(W,\Lambda)\) as the simple graph with an edge between each node \(i\) and node \(j\) if and only if \(e_{i,j}=1\).
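This sampling procedure is easy to simulate; the following NumPy sketch (all identifiers ours) draws the shared samples \(\Lambda\), the weighted graph-signal \((W(\Lambda),f(\Lambda))\), and the simple graph \(\mathbb{G}(W,\Lambda)\):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_graph_signal(W, f, k, simple=True):
    """Sample W(Lambda), f(Lambda), and optionally the Bernoulli simple graph G(W, Lambda)."""
    lam = rng.uniform(0.0, 1.0, size=k)                 # shared node samples Lambda
    weights = np.array([[W(x, y) for y in lam] for x in lam])
    signal = np.array([f(x) for x in lam])
    if not simple:
        return weights, signal
    upper = np.triu(rng.random((k, k)) < weights, 1)    # e_ij ~ Bernoulli(w_ij)
    return (upper | upper.T).astype(int), signal

W = lambda x, y: x * y                 # an example graphon
f = lambda x: np.sin(2 * np.pi * x)    # an example signal
A, s = sample_graph_signal(W, f, k=100)
```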
We note that, given a graph signal \((G,\mathbf{f})\), sampling a graph-signal from \((W,f)_{(G,\mathbf{f})}\) is equivalent to subsampling the nodes of \(G\) independently and uniformly (with repetitions), and considering the resulting subgraph and subsignal. Hence, we can study the more general case of sampling a graphon-signal, where graph-signal sub-sampling is a special case. We now extend [23, Lemma 10.16], which bounds the cut distance between a graphon and its sampled graph, to the case of a sampled graphon-signal.
**Theorem 3.6** (Sampling lemma for graphon-signals).: _Let \(r>1\). There exists a constant \(K_{0}>0\) that depends on \(r\), such that for every \(k\geq K_{0}\), every \((W,f)\in\mathcal{WL}_{r}\), and for \(\Lambda=(\lambda_{1},\ldots\lambda_{k})\in[0,1]^{k}\) independent uniform random samples from \([0,1]\), we have_
\[\mathbb{E}\bigg{(}\delta_{\square}\Big{(}\big{(}W,f\big{)},\big{(}W(\Lambda),f(\Lambda)\big{)}\Big{)}\bigg{)}<\frac{15}{\sqrt{\log(k)}},\]
_and_
\[\mathbb{E}\bigg{(}\delta_{\square}\Big{(}\big{(}W,f\big{)},\big{(}\mathbb{G}( W,\Lambda),f(\Lambda)\big{)}\Big{)}\bigg{)}<\frac{15}{\sqrt{\log(k)}}.\]
The proof of Theorem 3.6 is given in Appendix D.2.
## 4 Graphon-signal analysis of MPNN
In this section, we propose utilizing the compactness of the graphon-signal space under cut distance, and the sampling lemma, to prove regularity results for MPNNs, uniform generalization bounds, and stability to subsampling theorems.
### MPNN on graphon signals
Next, we define MPNNs on graphon-signals, in such a way that the application of a MPNN on an induced graphon-signal is equivalent to applying the MPNN on the graph-signal and then inducing it. A similar construction was presented in [26], for average aggregation, but we use normalized sum aggregation.
At each layer, we define the message function \(\Phi(x,y)\) as a linear combination of simple tensors as follows. Let \(K\in\mathbb{N}\). For every \(k\in[K]\), let \(\xi_{\mathrm{r}}^{k},\xi_{\mathrm{t}}^{k}:\mathbb{R}^{d}\to\mathbb{R}^{p}\) be Lipschitz continuous functions that we call the _receiver_ and _transmitter message functions_ respectively. Define the _message function_\(\Phi:\mathbb{R}^{2d}\to\mathbb{R}^{p}\) by
\[\Phi(a,b)=\sum_{k=1}^{K}\xi_{\mathrm{r}}^{k}(a)\xi_{\mathrm{t}}^{k}(b).\]
Given a signal \(f\), define the _message kernel_\(\Phi_{f}:[0,1]^{2}\to\mathbb{R}^{p}\) by
\[\Phi_{f}(x,y)=\Phi(f(x),f(y))=\sum_{k=1}^{K}\xi_{\mathrm{r}}^{k}(f(x))\xi_{ \mathrm{t}}^{k}(f(y)).\]
We see the \(x\) variable of \(\Phi_{f}(x,y)\) as the receiver of the message, and \(y\) as the transmitter. Define the aggregation of a message kernel \(Q:[0,1]^{2}\to\mathbb{R}^{p}\), with respect to the graphon \(W\in\mathcal{W}_{0}\), to be the signal \(\mathrm{Agg}(W,Q)\in\mathcal{L}_{r}^{\infty}[0,1]\), defined by
\[\mathrm{Agg}(W,Q)(x)=\int_{0}^{1}W(x,y)Q(x,y)dy,\]
for an appropriate \(r>0\). A _message passing layer (MPL)_ takes the form \(f^{(t)}\mapsto\mathrm{Agg}(W,\Phi_{f^{(t)}}^{(t+1)})\), where \(f^{(t)}\) is the signal at layer \(t\). Each MPL is optionally followed by an _update layer_, which updates the signal pointwise via \(f^{(t+1)}=\mu^{(t+1)}\big{(}f^{(t)}(x),\mathrm{Agg}(W,\Phi_{f^{(t)}}^{(t+1)})( x)\big{)}\), where \(\mu^{(t+1)}\) is a learnable mapping called the _update function_. A MPNN is defined by choosing the number of layers \(T\), and defining message and update functions \(\{\mu^{\epsilon},(\xi_{\mathrm{r}}^{k}),(\xi_{\mathrm{t}}^{k})\}_{k\in[K],t \in[T]}\). A MPNN only modifies the signal, and keeps the graph/graphon intact. We denote by \(\Theta_{t}(W,f)\) the output of the MPNN applied on \((W,f)\in\mathcal{WL}_{r}\) at layer \(t\in[T]\). More details on the construction are given in Appendix E.1.
The above construction is rather general. Indeed, it is well known that many classes of functions \(F:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}^{C}\) (e.g., \(L^{2}\) functions) can be approximated by (finite) linear combinations of simple tensors \(F(a,b)\approx\sum_{k=1}^{K}\xi_{1}^{k}(a)\xi_{2}^{k}(b)\). Hence, message passing based on general message functions \(\Phi:\mathbb{R}^{2d}\to\mathbb{R}^{p}\) can be approximated by our construction. Moreover, many well-known MPNNs can be written using our formulation with a small \(K\), e.g., [29, 34] and spectral convolutional networks [8, 19, 21], if we replace the aggregation in these method with normalized sum aggregation.
In Appendix E.1 we show that for any graph-signal \((G,\mathbf{f})\), we have \(\Theta_{t}(W,f)_{(G,\mathbf{f})}=(W,f)_{\Theta_{t}(G,\mathbf{f})}\), where the MPNN on a graph-signal is defined with normalized sum aggregation
\[\big{(}\mathrm{Agg}(G,\Phi_{\mathbf{f}})\big{)}_{i}=\frac{1}{n}\sum_{j\in[n]} a_{i,j}(\Phi_{\mathbf{f}})_{i,j}.\]
Here, \(n\) is the number of nodes, and \(\{a_{i,j}\}_{i,j\in[n]}\) is the adjacency matrix of \(G\). Hence, we may identify graph-signals with their induced graphon-signals when analyzing MPNNs.
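To illustrate the construction on a finite graph-signal, here is a minimal PyTorch sketch of one MPL with normalized sum aggregation; the linear message functions, the update via concatenation, and the toy graph are our own illustrative choices.

```python
import torch
import torch.nn as nn

class MPL(nn.Module):
    """One message passing layer: Agg(G, Phi_f)_i = (1/n) * sum_j a_ij * Phi(f_i, f_j),
    with Phi(a, b) = sum_k xi_r^k(a) * xi_t^k(b) (element-wise products in R^p)."""
    def __init__(self, d, p, K=2):
        super().__init__()
        self.xi_r = nn.ModuleList([nn.Linear(d, p) for _ in range(K)])  # receiver messages
        self.xi_t = nn.ModuleList([nn.Linear(d, p) for _ in range(K)])  # transmitter messages
        self.mu = nn.Linear(d + p, d)                                   # update function

    def forward(self, A, f):     # A: (n, n) adjacency, f: (n, d) node signal
        n = f.shape[0]
        agg = sum(xr(f) * (A @ xt(f)) for xr, xt in zip(self.xi_r, self.xi_t)) / n
        return self.mu(torch.cat([f, agg], dim=1))

# toy usage: a random simple graph with a 3-dimensional signal
A = torch.bernoulli(torch.full((50, 50), 0.2)).triu(1)
A = A + A.t()
out = MPL(d=3, p=8)(A, torch.randn(50, 3))   # updated signal of shape (50, 3)
```

The division by \(n\) (rather than by node degrees) is the normalized sum aggregation that the continuity results below rely on.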
### Lipschitz continuity of MPNNs
We now show that, under the above construction, MPNNs are Lipschitz continuous with respect to cut distance.
**Theorem 4.1**.: _Let \(\Theta\) be a MPNN with \(T\) layers. Suppose that there exist constants \(L,B>0\) such that for every layer \(t\in[T]\), every \(\mathrm{y}\in\{\mathrm{t},\mathrm{r}\}\) and every \(k\in[K]\),_
\[\big{|}\mu^{t}(0)\big{|},\ \big{|}{}^{t}\xi_{\mathrm{y}}^{k}(0)\big{|}\leq B,\quad\text{and}\quad L_{\mu^{t}},\ L_{{}^{t}\xi_{\mathrm{y}}^{k}}<L,\]
_where \(L_{\mu^{t}}\) and \(L_{{}^{t}\xi_{\mathrm{y}}^{k}}\) are the Lipschitz constants of \(\mu^{t}\) and \({}^{t}\xi_{\mathrm{y}}^{k}\). Then, there exists a constant \(L_{\Theta}\) (that depends on \(T,K,B\) and \(L\)) such that for every \((W,f),(V,g)\in\mathcal{WL}_{r}\),_
\[\|\Theta(W,f)-\Theta(V,g)\|_{\square}\leq L_{\Theta}\Big{(}\|f-g\|_{\square}+ \|W-V\|_{\square}\Big{)}.\]
The constant \(L_{\Theta}\) depends exponentially on \(T\), and polynomially on \(K,B\) and \(L\). For formulas of \(L_{\Theta}\), under different assumptions on the hypothesis class of the MPNN, we refer to Appendix F.
### A generalization theorem for MPNN
In this section we prove a uniform generalization bound for MPNNs. For background on generalization analysis, we refer the reader to Appendix G.1. While uniform generalization bounds are considered a classical approach in standard neural networks, the approach is less developed in the case of MPNNs. For some works on generalization theorems of MPNNs, see [31, 13, 22, 26, 28].
When a MPNN is used for classification or regression, \(\Theta_{T}\) is followed by global pooling. Namely, for the output signal \(g:[0,1]\to\mathbb{R}^{p}\), we return \(\int g(x)dx\in\mathbb{R}^{p}\). This is then typically followed by a learnable mapping \(\mathbb{R}^{p}\to\mathbb{R}^{C}\). In our analysis, we see this mapping as part of the loss, which can hence be learnable. The combined loss is assumed to be Lipschitz continuous2.
Footnote 2: We note that loss functions like cross-entropy are not Lipschitz continuous. However, the composition of cross-entropy on softmax is Lipschitz, which is the standard way of using cross-entropy.
We model the ground truth classifier into \(C\) classes as a piecewise constant function \(\mathcal{C}:\widetilde{\mathcal{W}\mathcal{L}_{r}}\to\{0,1\}^{C}\), where the steps, which are Borel measurable subsets of \(\widetilde{\mathcal{W}\mathcal{L}_{r}}\), correspond to the different classes. We consider an arbitrary probability Borel measure \(\nu\) on \(\widetilde{\mathcal{W}\mathcal{L}_{r}}\) as the data distribution. More details on the construction are given in Appendix G.2.
Let \(\operatorname{Lip}(\widetilde{\mathcal{W}\mathcal{L}_{r}},L_{1})\) be the space of Lipschitz continuous mappings \(\Upsilon:\widetilde{\mathcal{W}\mathcal{L}_{r}}\to\mathbb{R}^{C}\) with Lipschitz constant \(L_{1}\). By Theorem 4.1, we may assume that our hypothesis class of MPNNs is a subset of \(\operatorname{Lip}(\widetilde{\mathcal{W}\mathcal{L}_{r}},L_{1})\) for some given \(L_{1}\). Let \(\mathbf{X}=(X_{1},\dots,X_{N})\) be independent random samples from the data distribution \((\widetilde{\mathcal{W}\mathcal{L}_{r}},\nu)\). Let \(\Upsilon_{\mathbf{X}}\) be a model that may depend on the sampled data, e.g., via training. Let \(\mathcal{E}\) be a Lipschitz continuous loss function3 with Lipschitz constant \(L_{2}\). For every function \(\Upsilon\) in the hypothesis class \(\operatorname{Lip}(\widetilde{\mathcal{W}\mathcal{L}_{r}},L_{1})\) (i.e. \(\Upsilon_{\mathbf{X}}\)), define the _statistical risk_
Footnote 3: The loss \(\mathcal{E}\) may have a learnable component (that depends on the dataset \(\mathbf{X}\)), as long as the total Lipschitz bound of \(\mathcal{E}\) is \(L_{2}\).
\[\mathcal{R}(\Upsilon)=\mathbb{E}\big{(}\mathcal{E}(\Upsilon, \mathcal{C})\big{)}=\int\mathcal{E}(\Upsilon(x),\mathcal{C}(x))d\nu(x). \tag{8}\]
We define the empirical risk
\[\hat{\mathcal{R}}(\Upsilon_{\mathbf{X}},\mathbf{X})=\frac{1}{N}\sum_{i=1}^{N}\mathcal{E}\big{(}\Upsilon_{\mathbf{X}}(X_{i}),\mathcal{C}(X_{i})\big{)}. \tag{9}\]
**Theorem 4.2** (MPNN generalization theorem).: _Consider the above classification setting, and let \(L=L_{1}L_{2}\). Let \(X_{1},\dots,X_{N}\) be independent random samples from the data distribution \((\widetilde{\mathcal{W}\mathcal{L}_{r}},\nu)\). Then, for every \(p>0\), there exists an event \(\mathcal{U}^{p}\subset\widetilde{\mathcal{W}\mathcal{L}_{r}}^{N}\), with probability_
\[\nu^{N}(\mathcal{U}^{p})\geq 1-Cp-2\frac{C^{2}}{N},\]
_in which_
\[\Big{|}\mathcal{R}(\Upsilon_{\mathbf{X}})-\hat{\mathcal{R}}( \Upsilon_{\mathbf{X}},\mathbf{X})\Big{|}\leq\xi^{-1}(N/2C)\Big{(}2L+\frac{1}{ \sqrt{2}}\big{(}L+\mathcal{E}(0,0)\big{)}\big{(}1+\sqrt{\log(2/p)}\big{)} \Big{)}, \tag{10}\]
_where \(\xi(\epsilon)=\frac{\kappa(\epsilon)^{2}\log(\kappa(\epsilon))}{\epsilon^{2}}\), \(\kappa\) is the covering number of \(\widetilde{\mathcal{W}\mathcal{L}_{r}}\) given in (Equation (7)), and \(\xi^{-1}\) is the inverse function of \(\xi\)._
The theorem is proved in Appendix G.4. Note that the term \(\xi^{-1}(N/2C)\) in Equation (10) decreases to zero as the size of the training set \(N\) goes to infinity.
### Stability of MPNNs to graph-signal subsampling
When working with very large graphs, it is often the practice to subsample the large graph, and apply a MPNN to the smaller subsampled graph [15, 5, 7]. Here, we show that such an approach is justified theoretically. Namely, any (Lipschitz continuous) MPNN has approximately the same outcome on the large graph and its subsampled version.
Transferability analysis [20, 30, 17, 25] often studies a related setting. Namely, it is shown that a MPNN applied on a randomly sampled graph \(G\) approximates the MPNN on the graphon \(W\) from which the graph is sampled. However, previous analyses assumed that the generating graphon \(W\) has metric properties. Namely, it is assumed that there is some probability metric space \(\mathcal{M}\) which
is the graphon domain, and the graphon \(W:\mathcal{M}\times\mathcal{M}\to[0,1]\) is Lipschitz continuous with respect to \(\mathcal{M}\), where the dimension of \(\mathcal{M}\) affects the asymptotics. This is an unnatural setting, as general graphons are only assumed to be measurable, not continuous. Constraining the construction to Lipschitz continuous graphons with a uniformly bounded Lipschitz constant only accounts for a small subset of \(\mathcal{W}\mathcal{L}_{r}\), and, hence, limits the analysis significantly. In comparison, our analysis applies to any graphon-signal in \(\mathcal{W}\mathcal{L}_{r}\).
**Theorem 4.3**.: _Consider the setting of Theorem 4.2, and let \(\Theta\) be a MPNN with Lipschitz constant \(L\). Denote_
\[\Sigma=\big{(}W,\Theta(W,f)\big{)},\quad\text{and}\quad\Sigma(\Lambda)=\Big{(} \mathbb{G}(W,\Lambda),\Theta\big{(}\mathbb{G}(W,\Lambda),f(\Lambda)\big{)} \Big{)}.\]
_Then_
\[\mathbb{E}\Big{(}\delta_{\square}\big{(}\Sigma,\Sigma(\Lambda)\big{)}\Big{)}< \frac{15}{\sqrt{\log(k)}}L.\]
## 5 Discussion
We presented an extension of graphon theory to a graphon-signal theory. In particular, we extended well-known regularity, compactness, and sampling lemmas from graphons to graphon-signals. We then showed that the normalized sum aggregation of MPNNs is in some sense compatible with the graphon-signal cut distance, which leads to the Lipschitz continuity of MPNNs with respect to cut distance. This then allowed us to derive generalization and sampling theorems for MPNNs. The strength of our analysis is in its generality and simplicity: it is based on a natural notion of graph similarity that allows studying the space of _all_ graph-signals, it applies to any graph-signal data distribution, and it does not impose any restriction on the number of parameters of the MPNNs, only on their regularity through the Lipschitzness of the message functions. The main limitation of the theory is the very slow asymptotics of the generalization and subsampling theorems. This follows from the fact that the covering number of the compact space \(\widetilde{\mathcal{W}\mathcal{L}_{r}}\) grows faster than the covering number of any finite-dimensional compact space. Yet, we believe that our work can serve as a point of departure for future works, which 1) will model subspaces of \(\widetilde{\mathcal{W}\mathcal{L}_{r}}\) of lower complexity, approximating the support of the data distribution in real-life settings of graph machine learning, and 2) will lead to improved asymptotics.
### Acknowledgments
I would like to thank Manish Krishan Lal for many interesting discussions on optimization, geometric deep learning, and graphon analysis, during his internship in my lab.
|
2306.08538 | Fast and Private Inference of Deep Neural Networks by Co-designing
Activation Functions | Machine Learning as a Service (MLaaS) is an increasingly popular design where
a company with abundant computing resources trains a deep neural network and
offers query access for tasks like image classification. The challenge with
this design is that MLaaS requires the client to reveal their potentially
sensitive queries to the company hosting the model. Multi-party computation
(MPC) protects the client's data by allowing encrypted inferences. However,
current approaches suffer from prohibitively large inference times. The
inference time bottleneck in MPC is the evaluation of non-linear layers such as
ReLU activation functions. Motivated by the success of previous work
co-designing machine learning and MPC, we develop an activation function
co-design. We replace all ReLUs with a polynomial approximation and evaluate
them with single-round MPC protocols, which give state-of-the-art inference
times in wide-area networks. Furthermore, to address the accuracy issues
previously encountered with polynomial activations, we propose a novel training
algorithm that gives accuracy competitive with plaintext models. Our evaluation
shows between $3$ and $110\times$ speedups in inference time on large models
with up to $23$ million parameters while maintaining competitive inference
accuracy. | Abdulrahman Diaa, Lucas Fenaux, Thomas Humphries, Marian Dietz, Faezeh Ebrahimianghazani, Bailey Kacsmar, Xinda Li, Nils Lukas, Rasoul Akhavan Mahdavi, Simon Oya, Ehsan Amjadian, Florian Kerschbaum | 2023-06-14T14:38:25Z | http://arxiv.org/abs/2306.08538v2 | # Fast and Private Inference of Deep Neural Networks by Co-designing Activation Functions
###### Abstract
Machine Learning as a Service (MLaaS) is an increasingly popular design where a company with abundant computing resources trains a deep neural network and offers query access for tasks like image classification. The challenge with this design is that MLaaS requires the client to reveal their potentially sensitive queries to the company hosting the model. Multi-party computation (MPC) protects the client's data by allowing encrypted inferences. However, current approaches suffer from prohibitively large inference times. The inference time bottleneck in MPC is the evaluation of non-linear layers such as ReLU activation functions. Motivated by the success of previous work co-designing machine learning and MPC aspects, we develop an activation function co-design. We replace all ReLUs with a polynomial approximation and evaluate them with single-round MPC protocols, which give state-of-the-art inference times in wide-area networks. Furthermore, to address the accuracy issues previously encountered with polynomial activations, we propose a novel training algorithm that gives accuracy competitive with plaintext models. Our evaluation shows between 4 and 90\(\times\) speedups in inference time on large models with up to 23 million parameters while maintaining competitive inference accuracy.
## 1 Introduction
The rapid development of increasingly capable machine learning (ML) models has resulted in significant demand for products like machine learning as a service (MLaaS). In this scenario, big tech companies with vast computing resources train large machine learning models and provide users with query access. The major pitfall with MLaaS is that it requires clients to submit potentially sensitive queries to an untrusted entity. A promising solution to this problem is to employ cryptography to ensure the queries and inferences are hidden from the model owner. Secure inference is an active field of research with many solutions and different threat models as summarized in a recent SoK [30]. The challenge is that despite recent advances, the inference times are still prohibitively large compared to plaintext inferences.
This work focuses on reducing the runtime of secure inference on image data, under realistic network conditions, while maintaining classification accuracy. We consider the two-party setting using multi-party computation (MPC), where the server holds the ML model, and the client holds the data to query the model. Recent state-of-the-art works in this space employ various co-design approaches to reduce the inference time [30]. For example, COINN co-designs ML models optimized for quantization with efficient MPC protocols tailored to the custom models [17]. COINN substantially compresses the model and makes numerous optimizations to the architecture to achieve fast inferences. Another example is GForce, which tailors the cryptography needed for ML to high-speed GPU hardware [29]. By offloading vast amounts of work to
the pre-computation phase, they are able to achieve state-of-the-art runtime and accuracy in secure inference [29]. Similarly, CryptGPU [32] modifies the CrypTen framework [20] to run efficiently on the GPU and give state-of-the-art inference times in wide area networks. However, despite making major steps towards practical inference, none of these works remove a crucial bottleneck in secure inference: the non-linear layers.

Figure 1: Summary of the inference time in seconds vs. test accuracy for each state-of-the-art approach on the CIFAR-10 dataset in the WAN (100 ms roundtrip delay).
It is well known that the non-linear layers are the bottleneck of secure inference [12, 13, 17, 27]. This is because secure computation on arithmetic shares is optimized for multiplications and additions, not for non-linear operations such as ReLU activation functions or MaxPool layers. In order to compute these non-linear functions, expensive conversions between different types of MPC protocols are required. Specifically, in more realistic network settings with high latency, the inference time is substantially degraded due to each conversion taking many rounds of communication. This problem is particularly prevalent in deep neural networks (DNNs), where a non-linear activation separates each of the many linear layers.
This work addresses the non-linear layers by taking a co-design approach between the activation functions and MPC. We take the approach of replacing classic ReLU activation functions with a polynomial approximation to avoid conversions altogether. Previous work has considered this approach but with limited success [12]. We propose two modifications to make this approach practical. First, we develop and evaluate new single-round MPC protocols that give the fastest evaluation of polynomials to date. The challenge with using polynomials is they severely impact model accuracy [12, 17, 27]. Previous work could not successfully train DNNs with more than 11 layers due to exploding gradients [12]. Thus, our second contribution is tailoring the ML training process to ensure high accuracy and stable training using polynomials. Our approach utilizes a new type of regularization that focuses on keeping the input to each activation function within a small range. We achieve close to plaintext accuracy on models as deep as ResNet-110 [16] and as large as a ResNet-50 on ImageNet [10] (23 million parameters). The combination of these approaches yields a co-design with state-of-the-art inference times and the highest accuracy for polynomial models.
We compare our work with three solutions representing the state-of-the-art approaches in secure inference according to Ng and Chow [30]. We summarize our results in Figure 1. Combining the single-round MPC protocols with our activation regularization achieves significantly faster inference times than all other solutions. Specifically, our solution is faster than CryptGPU by 4\(\times\), GForce by 5\(\times\), and COINN by 18\(\times\) on average in wide area networks. Our approach also scales to large models on ImageNet with a 90\(\times\) speedup over COINN and a 3\(\times\) speedup over CryptGPU. Furthermore, our inference accuracy remains competitive with all other solutions. CryptGPU often gives slightly higher accuracy as it can evaluate any plaintext model (albeit slower than our work). Thus, the challenge for future work is to further close the ML accuracy gap between plain and polynomial models.
## 2 Background
### Multi Party Computation
Secure multi-party computation (MPC) allows a set of parties to jointly compute a function while keeping their inputs to the function private. We focus on a variant of MPC which performs operations over shares of the data [5]. We use \([[s]]=[[s]_{a},[s]_{b}]\) to denote that the value of \(s\) is shared among participants, where \([s]_{a}\) is the share held by party \(a\) and \([s]_{b}\) by party \(b\). Arithmetic MPC protocols utilize a linear secret sharing scheme, such as an additive secret sharing scheme to compute complex circuits using combinations of additions and multiplications. Given constants \(v_{1},v_{2},v_{3}\) and shares of values \([[x]],[[y]]\), one can locally compute
\[v_{1}[[x]]+v_{2}[[y]]+v_{3}=[[v_{1}\cdot x+v_{2}\cdot y+v_{3}]] \tag{1}\]
to obtain shares of the value \(v_{1}\cdot x+v_{2}\cdot y+v_{3}\). For multiplication, one can use Beaver's trick to multiply using a single round of communication between parties [4]. Specifically, we assume a triplet of random numbers \(a,b,c\) (called a Beaver triplet) was generated such that \(a\cdot b=c\) and secret shared among all parties ahead of time (typically in an offline pre-computation phase). Then the parties compute \([[x\cdot y]]\) by first locally computing \([[a+x]]=A\) and \([[b+y]]=B\) and reconstructing \(A\) and \(B\) so that both parties have them in plaintext. This reconstruction is the single round required. Using these values, the parties compute the result locally as \([[x\cdot y]]=A[[y]]+(-B)[[a]]+[[c]]\), using the linearity property in equation 1.
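To make the round structure concrete, the following is a minimal Python sketch that simulates both parties of Beaver multiplication in a single process; the ring size and helper names are our own choices for illustration, not CrypTen's API.

```python
import secrets

Q = 2 ** 64  # size of the ring Z_{2^64}

def share(x):
    """Additively secret-share x between two parties."""
    r = secrets.randbelow(Q)
    return r, (x - r) % Q

def beaver_mul(x_sh, y_sh, triple):
    """One-round multiplication of [[x]] and [[y]] using a Beaver triple."""
    (a0, a1), (b0, b1), (c0, c1) = triple
    # The single round: open A = x + a and B = y + b.
    A = (x_sh[0] + a0 + x_sh[1] + a1) % Q
    B = (y_sh[0] + b0 + y_sh[1] + b1) % Q
    # Locally, using linearity (equation 1): [[x*y]] = A[[y]] - B[[a]] + [[c]].
    z0 = (A * y_sh[0] - B * a0 + c0) % Q
    z1 = (A * y_sh[1] - B * a1 + c1) % Q
    return z0, z1

# Offline phase: a triple with a*b = c, secret-shared among the parties.
a, b = secrets.randbelow(Q), secrets.randbelow(Q)
triple = (share(a), share(b), share(a * b % Q))

z = beaver_mul(share(6), share(7), triple)
assert sum(z) % Q == 42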
Arithmetic MPC protocols are limited to basic multiplications and additions. Thus, for computing non-linear operations such as comparisons, other techniques such as converting to binary secret shares or using Yao's garbled circuits are common [9]. A binary secret sharing scheme is an arithmetic scheme carried out bitwise in the ring \(\mathbb{Z}_{2}\). Specifically, the difference is that we first decompose \(x\) into its bits and have a separate arithmetic share of each bit. By maintaining this bitwise structure, operations such as XOR or bit shifts are trivial. We describe how to use a binary and arithmetic secret-sharing scheme together to compute non-linear functions in Section 3.3.
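For intuition, binary sharing over \(\mathbb{Z}_{2}\) reduces to XOR, as the following small sketch (helper names are ours) illustrates:

```python
import secrets

def bit_share(x, bits=8):
    """Binary (XOR) sharing: each bit of x is an arithmetic share in Z_2."""
    r = secrets.randbits(bits)
    return r, x ^ r

s0, s1 = bit_share(0b10110101)
assert s0 ^ s1 == 0b10110101  # XOR and bit shifts act share-wise, for free
```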
### Neural Network Inference
We consider DNN classifiers with domain \(\mathcal{X}\subseteq\mathbb{R}^{d}\) and range \(\mathcal{Y}\subseteq\mathbb{R}^{c}\). DNN classifiers consist of a sequence of layers, each performing either a linear or a non-linear operation. The ResNet [16] architecture we consider is composed of (i) convolutional, (ii) fully connected, (iii) pooling, (iv) batch normalization, and (v) ReLU layers. All layers are linear, except for \(\text{ReLU}(x)=\max(x,0)\) and max pooling (that can
be replaced with average pooling). To classify an input \(x\), a classifier \(h\), passes the input sequentially through each layer. Upon reaching the last layer, the prediction is obtained by taking \(\arg\max_{i\in\{1..c\}}h(x)_{i}\), where we call \(h\) the logit function for a classifier \(h:X\rightarrow\mathcal{Y}\). The output of the secure inference protocol for an encrypted input \(x\) is the encrypted output \(h(x)\) of the logit function.
## 3 Problem Setup and Motivation
### Problem Setup
We follow the same threat model as prior work for two-party secure inference [29, 20, 17]. Specifically, we follow the two-party client-server model where the server has a machine learning model (a DNN) they have trained, and the client holds private data upon which they would like to make an inference. The server's input to the protocol is the weights of their trained model, which they do not want to leak to the client (due to intellectual property or protecting their MLaaS business [33]). The client has a private input (typically an image) they would like to classify using the model but do not want to leak this input or the prediction to the server. That is, the MPC function can be written as \(f(\text{image},\text{model})=(\text{label},\emptyset)\). Following previous work [29, 20, 17], we consider the semi-honest model, introduced by Goldreich [15, §7.2.2], where adversaries do not deviate from the protocol but may gather information to infer private information. Also, in line with previous work, we assume the model architecture is known to both parties. This includes the dimensions and type of each layer and parameters such as field size used for inference. The mean and standard deviation of the training set are also known to both parties following CrypTen [20].
### Privacy During Model Training
We focus only on the inference phase of machine learning. However, the privacy of the training process and training data is an orthogonal but essential problem. We recommend that the model owner take appropriate steps to protect the privacy of the training data, such as training using differential privacy [1] or rounding the output of the inference. Furthermore, during training, care should be taken to protect against threats such as model stealing, which can be launched using only the inference result [18]. To summarize, the model owner learns nothing other than the fact that a query was made. We ensure only the inference is revealed to the client; however, ML attacks that only require black-box query access [18] must be defended against during the training process.
### Motivating the Co-Design of Activation Functions
It has been well established in the literature that activation functions such as ReLU are the bottleneck in MPC-based secure inference, taking up to 93% of the inference time [12, 13, 27, 17]. The reason for this is that current approaches use different types of MPC protocols for a model's linear and non-linear layers [29, 20, 17, 19, 27]. The linear layers are typically computed using standard arithmetic secret-sharing protocols tailored for additions and multiplications. The non-linear layers are computed using garbled circuits or binary secret share-based protocols. The bottleneck in wide area networks is typically the conversions between these protocols as they require a large number of communication rounds. A typical DNN architecture has many linear layers, each followed by a non-linear layer resulting in a prohibitively large number of conversions.
Consider CrypTen, a PyTorch-based secure ML library, as a baseline approach [20]. CrypTen uses binary shares to evaluate boolean non-linear layers such as ReLUs and MaxPooling layers. Specifically, all linear layers are computed using standard multiplication and addition protocols over arithmetic shares. To compute \([[ReLU(x)]]\) at each layer, \([[x]]\) is first converted to binary shares using a carry look-ahead adder. Once in binary shares, CrypTen extracts the sign bit to compute \([[x>0]]\) (a local operation). The sign bit, \([[x>0]]\), is then converted back to arithmetic shares (trivial for a single bit) and multiplied with \([[x]]\) to get \([[ReLU(x)]]\). The problem with this approach is that each conversion takes \(O(log(L))\) communication rounds. Taking into account the additional round needed for multiplication, we observe nine communication rounds per ReLU in practice (under 64-bit precision). Recently, more sophisticated MPC protocols have been proposed that reduce the number of rounds needed for comparisons in arithmetic shares [6, 7] or reduce the cost of binary share conversions [11]. However, even if one were to implement these protocols in CrypTen, the number of rounds needed for non-linear layers would still outweigh the number needed for linear layers.
Motivated by this bottleneck, several works have focused on either reducing the number of ReLUs or replacing ReLUs altogether [12, 13, 27, 28, 19, 22]. One approach is to approximate each ReLU with a high degree polynomial [12, 22]. The advantage of using polynomials is that they can be computed in arithmetic shares, thus removing the need for expensive conversions and improving the total inference time. A significant challenge with polynomials is maintaining model accuracy [12, 13, 17, 19, 27].
Thus, this work aims to provide a secure inference protocol with state-of-the-art inference time and accuracy in realistic networks with high latency. To do this, we take a co-design approach to balance accuracy and fast inference time. In Section 4, we develop MPC protocols that achieve the fastest
evaluation of polynomials to date, assuming a modified ML architecture. In Section 5, we tailor the ML training procedure to achieve high accuracy using this modified architecture.
## 4 Faster Evaluation of Polynomials
In this section, we evaluate the speed-up of replacing ReLUs with a naive polynomial approximation. We then develop our single-round protocols and show that they drastically reduce the activation function evaluation time in wide area networks.
### The Polynomial Advantage
To highlight the speed-up of polynomials over standard ReLUs, we first evaluate the runtime of a single layer with \(2^{15}\) ReLU activation functions in Figure 2. (See Section 6 for more implementation details.) First, we plot an unmodified version of CrypTen (using CryptGPU [32]) with the conversion to binary shares. Next, we replace the ReLU with a degree four polynomial fitted using least squares polynomial regression (see Section 5 for the details). We can see that using polynomials in off-the-shelf CrypTen is much faster across all network speeds than the default mixed arithmetic and binary protocol. This difference becomes more pronounced as we add more network delay or scale to deeper models with more ReLUs.
Despite the significant speed-up, naively computing a polynomial is still expensive in MPC with a non-trivial number of communication rounds. For example, Horner's method (an iterative approach to evaluating polynomials) uses \(O(n)\) communication rounds (where \(n\) is the degree of the polynomial). However, most MPC libraries (including CrypTen) use the square-and-multiply algorithm for exponentiation, followed by multiplying and summing the coefficients locally. The square-and-multiply algorithm requires \(O(log(n))\) multiplications (and thus rounds) in MPC. In practice, the default square and multiply implementation in CrypTen uses two rounds per ReLU for a degree four polynomial. To increase the advantage of using polynomial activation functions even further, we develop a new single-round protocol for evaluating polynomials in MPC.
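To see why square-and-multiply takes \(O(\log n)\) rounds, the sketch below (our own illustration, not CrypTen's implementation) batches all multiplications whose operands are already available into a single round; `mul` stands in for a one-round secure multiplication.

```python
def powers_by_squaring(x, degree, mul):
    """Compute x^1..x^degree; each while-iteration models one MPC round."""
    powers, rounds = {1: x}, 0
    while max(powers) < degree:
        known = dict(powers)  # products of known powers can be batched
        for i in known:
            for j in known:
                if i + j <= degree and (i + j) not in powers:
                    powers[i + j] = mul(known[i], known[j])
        rounds += 1
    return powers, rounds

# Degree four needs two rounds: x^2 first, then x^3 and x^4 in parallel.
_, r = powers_by_squaring(3, 4, lambda a, b: a * b)
assert r == 2
```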
### ESPN: Exponentiating Secret Shared Values using Pascal's triangle
We present our single-round, highly parallelizable protocol ESPN for computing high-degree polynomials. The fundamental idea is utilizing the binomial theorem (Pascal's triangle) to achieve faster exponentiation. We begin by describing our protocol for raising a number \([[x]]\) to the power \(k\), in MPC (see Algorithm 1 for an overview). Using the additive secret sharing scheme, the exponentiation corresponds to \((x_{a}+x_{b})^{k}\) where \(x_{a}\) represents the first party's share and \(x_{b}\) represents the second (such that \(x_{a}+x_{b}=x\)). The binomial theorem expands this expression as:
\[x^{k}=(x_{a}+x_{b})^{k}=\sum_{i=0}^{k}\binom{k}{i}x_{a}^{k-i}x_{b}^{i} \tag{2}\]
We observe that, for each \(i\) in the sum, party \(a\) can compute \(a_{i}=x_{a}^{k-i}\) without needing to communicate with party \(b\) (Alg. 1 line 4). Similarly, party \(b\) can compute \(x_{b}^{i}\) without communicating with party \(a\) (Alg. 1 line 5). Finally, \(\binom{k}{i}\) can be computed by any party (or pre-computed ahead of time). For simplicity, we assign the computation of \(\binom{k}{i}\) to party \(b\). Thus party \(b\) computes \(b_{i}=\binom{k}{i}x_{b}^{i}\).
Once each party has computed their respective vectors, we multiply \(a_{i}\cdot b_{i}\) for each \(i\) in parallel (Alg. 1 line 6). We carry out this multiplication using standard MPC protocols in one round. To use these multiplication protocols, each party must have a share of the input. We use a trivial additive secret sharing, where the other party inputs zero as their share to the protocol (Alg. 1 line 2). Finally, after the multiplication, the sum of the binomial theorem can be efficiently computed with no communication (Alg. 1 line 7).
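The following plaintext simulation of Algorithm 1 illustrates the data flow; here `mul` abstracts the single parallel round of Beaver multiplications (in a real deployment it would return shares rather than values).

```python
import secrets
from math import comb

Q = 2 ** 64

def espn_pow(x_a, x_b, k, mul):
    """[[x^k]] from additive shares x_a + x_b = x (mod Q), one round total."""
    # Communication-free local work (Alg. 1 lines 4-5):
    avec = [pow(x_a, k - i, Q) for i in range(k + 1)]               # party a
    bvec = [comb(k, i) * pow(x_b, i, Q) % Q for i in range(k + 1)]  # party b
    # One parallel round of pairwise multiplications, then a local sum:
    return sum(mul(ai, bi) for ai, bi in zip(avec, bvec)) % Q

x = 12
x_a = secrets.randbelow(Q)
x_b = (x - x_a) % Q
assert espn_pow(x_a, x_b, 5, lambda u, v: u * v % Q) == pow(x, 5, Q)
```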
Using our exponentiation protocol, we can now efficiently compute high-degree polynomials in a single round. The first step in evaluating a polynomial is to compute all needed exponents of the input (i.e. \(\{x^{k}|k\in[n]\}\) for a degree \(n\) polynomial) by calling Algorithm 1 in parallel. We note there will be a significant overlap in each party's local powers from the binomial theorem. Thus, a simple cache can improve performance significantly. Furthermore, the binomial coefficients \(\binom{k}{i}\) can also benefit from basic dynamic programming by reusing the previous result (\(k-1\)) to compute the next (\(k\)). After computing the exponents, the output of the polynomial can be computed by multiplying the coefficients (public values) and summing, all of which can be done locally.

Figure 2: Benchmarking the secure evaluation of ReLU activation functions using various approaches. The \(x\)-axis is the network delay in ms and the \(y\)-axis is the mean runtime in seconds averaged over 20 runs with the shaded area representing the 95% confidence intervals.
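For completeness, the coefficient reuse mentioned above can be realized as follows (the function name is ours):

```python
def binomials(k):
    # C(k, 0..k) iteratively: C(k, i) = C(k, i-1) * (k - i + 1) // i
    row = [1]
    for i in range(1, k + 1):
        row.append(row[-1] * (k - i + 1) // i)
    return row

assert binomials(4) == [1, 4, 6, 4, 1]
```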
In Figure 2, we plot this approach alongside the previous approaches to evaluate the runtime. ESPN incurs slightly more overhead in the LAN setting; however, it scales significantly better (the confidence intervals do not overlap) to wide area networks that can be expected in practice.
### Alternative Single Round Protocol: HoneyBadger
Like ESPN, Lu et al. give a single round protocol for exponentiation in MPC [26]. Despite focusing on a completely different problem (anonymous communication), they provide an MPC protocol of independent interest for exponentiation, which we also utilize in our work. They take a very different approach to our work that yields different trade-offs. Instead of the binomial theorem, their work utilizes the following factoring rule
\[x^{k}-r^{k}=(x-r)\sum_{i=0}^{k-1}x^{k-i-1}r^{i} \tag{3}\]
where \(r\) is a random secret-shared number derived during pre-computation. We assume each party has a share of \(x\) and a share of \(r^{i}\) for \(i\in\{1,\ldots,k\}\) before beginning the protocol (instead of the more common Beaver triplets). The first step in the protocol is to compute and reveal \(x-r\) (\(x\) blinded by \(r\)), which uses a single round. Once revealed, this value becomes a public constant \(C\). After some algebraic manipulation of (3), Lu et al. obtain a recursive formula for \(x^{k}r^{j}\) given below.
\[[[x^{k}r^{j}]]=[[r^{k+j}]]+C\sum_{i=0}^{k-1}[[x^{k-i-1}r^{i+j}]] \tag{4}\]
Using dynamic programming, the parties can then compute any power (\(x^{k}r^{0}\)) using only additions of previously computed terms and powers of \(r\). To compute polynomials using this protocol, we need to multiply the coefficients and sum the terms, precisely as we did in ESPN.
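The dynamic program of Equation (4) can be sketched in plaintext as follows; here \(r\) and its powers stand in for the pre-shared values, and opening \(x-r\) constitutes the protocol's single round.

```python
def honeybadger_powers(x, r, degree, Q=2 ** 64):
    """Plaintext sketch of computing x^1..x^degree via Equation (4)."""
    C = (x - r) % Q  # opened in the protocol's single round
    r_pow = [pow(r, j, Q) for j in range(degree + 1)]
    xr = {(0, j): r_pow[j] for j in range(degree)}  # x^0 * r^j = r^j
    for k in range(1, degree + 1):
        for j in range(degree - k + 1):
            acc = r_pow[k + j]
            for i in range(k):
                acc = (acc + C * xr[(k - i - 1, i + j)]) % Q
            xr[(k, j)] = acc
    return [xr[(k, 0)] for k in range(1, degree + 1)]

assert honeybadger_powers(9, 5, 4) == [pow(9, k, 2 ** 64) for k in range(1, 5)]
```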
The advantage of Lu et al.'s protocol is that the communication is small (only the opening of \(x-r\)). The first disadvantage is that the protocol requires a modified pre-computation phase, which is as difficult to pre-compute securely as the original problem (it is exponentiation). On the contrary, our binomial protocol uses standard beaver triplets commonly found in MPC frameworks. There are well established protocols for efficiently computing these triplets, and the parties may already have them due to the popularity of Beaver's trick. The second disadvantage of Lu et al.'s solution is that, while the protocol requires very little communication, it is not locally parallelizable as each dynamic programming step depends on the previous one. In contrast, our entire protocol can be executed in parallel.
We also consider the runtime of using HoneyBadger to compute polynomial approximations of ReLUs in Figure 2. We emphasize this is a runtime-only evaluation. Without our training algorithm in Section 5, none of the polynomial-based solutions can attain usable accuracy. We find that ESPN and HoneyBadger perform similarly in practice, with HoneyBadger gaining a slight advantage in very low network delay. Due to the trade-offs and similarities, we will evaluate both approaches for the remainder of this work.
### Floating Point Considerations
We note that the exponentiation protocols we have discussed are designed for integers. Extending to floating point values is straightforward, but requires rescaling (a standard practice in fixed-point arithmetic). Furthermore, in both protocols (ESPN and HoneyBadger), we can only rescale once for each polynomial computation (since the majority of the computation is local). Thus, to ensure correctness, we must scale up each exponent by \(n+1-k\) where \(n\) is the degree of the polynomial and \(k\) is the exponent (Algorithm 1). This scaling up is a scalar multiplication by a public constant and thus incurs no additional rounds. At the end of the computation, we must then scale down by \(n\) for correctness. We use CrypTen's two-party truncation protocol that also incurs no additional rounds. However, there is a negligible chance of an incorrect result from this truncation protocol due to wrap-around in the ring. Specifically, the probability of an incorrect result when truncating \(x\) is \(\frac{x}{2^{L}}\) where \(2^{L}\) is the size of the ring [20]. This implies \(x\) must be small compared to the ring for this fast truncation protocol to be correct. We observe that \(x\) can also be very large while maintaining correctness due to symmetry. Following the proof of Mohassel and Zhang [28], we get that the probability of failure is \(\min\{\frac{x}{2^{L}},\frac{2^{L}-x}{2^{L}}\}\). Since we upscale all terms by \(n+1-k\) we ensure \(x\) is sufficiently large and thus can take advantage of this fast truncation protocol with no additional rounds.
In addition to scaling, we must ensure that the polynomial computation does not overflow in the ring by choosing the correct precision. Consider using degree \(n\) polynomials fitted
to a range \([-\lambda,\lambda]\). If the global precision (size of the ring) is \(L\)-bit, then the working precision of each value must be \(p\)-bit where
\[\log\left[n\cdot\frac{1-\lambda^{n}}{1-\lambda}\right]+n\cdot p\leq L-1 \tag{5}\]
For our experiments, we use CrypTen, with \(L=64\)[20]. Assuming default values of \(n=4\) and \(\lambda=5\) we get that 10-bit is the maximum working precision. However, this is a pessimistic upper bound, and in practice, we find that 12-bit precision gives the best results.
## 5 PILLAR: Polynomial Activation Regularization
Our initial benchmark in Section 4 showed a significant speed-up when replacing ReLU functions with polynomials implemented using ESPN and HoneyBadger. However, a notable challenge neglected thus far is that replacing a ReLU with a polynomial can drastically reduce the accuracy of the model [12, 17]. This section discusses the causes of the accuracy degradation and describes our mitigation techniques. Finally, we give empirical results showcasing the high accuracy of our modified training procedures across various architectures and datasets.
### The Problem with Polynomial Activation Functions
Escaping Activations.The first step in replacing an activation function with a polynomial is to design a polynomial that approximates the original function as closely as possible. A common approach for this is the least-squares polynomial fitting. In this approach, a small discretized range is chosen to fit the polynomial on. A table of values is created for the function over all values in the range. This creates a system of equations for the polynomial coefficients that can be solved with least squares. The challenge with this approach is that, outside of this range, the polynomial no longer resembles the original activation function and often diverges rapidly. This leads to a problem called escaping activations, first identified by Garimella et al. [12]. If one naively swaps a ReLU for its polynomial approximation, with no additional modifications to the training, all weights will become infinite within a few training epochs. We give an example of this degradation in Figure 4. We can see that, without modifying the training procedure, a polynomial can completely destroy the accuracy.
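For concreteness, the standard fit, and its divergence outside the fitted range, can be reproduced in a few lines (the grid density is an arbitrary choice of ours):

```python
import numpy as np

lam = 5.0
xs = np.linspace(-lam, lam, 2048)  # discretized fitting range
poly = np.poly1d(np.polyfit(xs, np.maximum(xs, 0.0), deg=4))

print(poly(2.0))    # close to ReLU(2) = 2 inside the range
print(poly(20.0))   # diverges rapidly outside the fitted range
```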
To illustrate the problem more clearly, we conduct an experiment using a polynomial of degree four fitted on the range \([-5,5]\) (\(\lambda\)=5) as the activation function for a three-layer model on CIFAR-10 [21]. In Figure 4, we plot this polynomial activation function and the \(\ell_{\infty}\)-norm of the input and output to each activation function. We note that, with no modification (except replacing ReLUs with polynomials), the weights of this model become undefined within approximately three epochs of training. First, we note the divergent behaviour of the polynomial outside the fitted range. Second, we observe the effect of the divergence on the outputs of the activation function. Specifically, we wait until the model weights become undefined (NaN in Python) and then observe the behaviour leading up to the explosion. We can see that three steps before the model weights become undefined (NaN), the input values of each activation are out-of-range, but the outputs still behave similarly to a ReLU. However, in the next iteration (two steps before NaN), a single value in the first layer goes too far out of range. This causes a ripple effect for the other two layers, creating an extremely large output (approx \(-2000\)) in the final activation function. This large value creates a large gradient, and after another iteration of training, the values become so large that the gradients (and weights) become undefined (NaN). We find that training the model by minimizing the classification loss alone is not enough to keep the model in
range as the gradients explode before decreasing the loss.
Truncated Polynomial Coefficients.An additional challenge is that we will evaluate the fitted polynomial in a finite ring with limited precision. This significantly impacts the polynomial coefficients, which tend to be relatively small, especially for the higher-order terms. Specifically, these small coefficients can get truncated to zero in limited precision, which causes the polynomial to diverge even inside the fitted range. We give an example of this in Appendix B.
### Defining PILLAR
Our approach, which we call PILLAR, is the combination of the components we describe in this section. Activation function regularization is our primary approach for mitigating escaping activation functions. However, to scale to larger models, we find that the additional steps of clipping, regularization warm-up, and adding batch normalization are beneficial.
Quantization-Aware Polynomial Fitting.We begin by solving the problem of truncated polynomial coefficients. To address this, we fit the polynomial with the precision constraint in mind. We do this by using mixed integer non-linear programming. Let \(X\) be the set of all values between \([-\lambda,\lambda]\) in \(p\)-bit precision (the domain we want to fit on). First, we generate \(Y=ReLU(X)\cdot 2^{p}\), a table of values for a standard ReLU scaled up by the precision. Scaling the output of the ReLU allows us to work in the integer domain (similar to fixed point arithmetic). We then compute a matrix \(B\) where each column is the different powers of \(X\) used in a polynomial (\(B=[X^{0},X^{1},X^{2},\ldots]\)).
Next, we solve the system \(AB=Y\) for \(A\) using mixed integer linear programming with \(A\in[-2^{p}-1,2^{p}-1]\) to get the coefficients \(A\) that minimize the error between the polynomial \(AB\) and the ReLU values \(Y\). Finally, we scale the resulting coefficients down by \(2^{p}\). We note that \(A\in[-2^{p}-1,2^{p}-1]\) corresponds to coefficients being bounded by \([-1,1]\) after we scale down. We use 10 bits of precision for all polynomials motivated by our derivations in Section 4. As we observe in Appendix B, our quantized polynomial fitting addresses the problems of exploding activations within the range. However, the issue of going out-of-range requires additional treatment.
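A simplified stand-in for this procedure is sketched below for intuition only: it replaces the integer program with least squares followed by rounding to the \(p\)-bit grid, which is not our exact MILP formulation.

```python
import numpy as np

def quantized_relu_fit(lam=5.0, p=10, deg=4):
    """Least-squares fit of 2^p * ReLU with coefficients rounded to integers."""
    xs = np.arange(-lam, lam, 2.0 ** -p)          # p-bit discretized domain X
    Y = np.maximum(xs, 0.0) * 2 ** p              # table Y = ReLU(X) * 2^p
    B = np.vander(xs, deg + 1, increasing=True)   # columns X^0 .. X^deg
    A, *_ = np.linalg.lstsq(B, Y, rcond=None)
    A = np.clip(np.round(A), -(2 ** p) + 1, 2 ** p - 1)  # bounded integers
    return A / 2 ** p                             # scale down: roughly [-1, 1]
```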
Activation Regularization.Following the observations of Section 5.1 and Garimella et al. [12], it is clear that minimizing the classification loss alone is not sufficient to prevent escaping activations. Garimella et al. proposed QuaIL, a method that trains one layer of the model at a time, focusing not on classification accuracy but the similarity of the layer to a standard ReLU model [12]. QuaIL showed much better accuracy than naive training but only scaled to models with at most 11 layers.
In our work, we address the cause of the problem directly by regularizing the input to each activation function during training. We add an exponential penalty to the loss function when the model inputs out-of-range values to the polynomial activation function. Let \(x\) be the input to the activation function, and \(\lambda_{reg}\) be the upper bound of the symmetric range \([-\lambda_{reg},\lambda_{reg}]\) in which we would like the input to be contained. Then, we define our penalty function as
\[p(x)=\left(\frac{x}{\lambda_{reg}}\right)^{\gamma} \tag{6}\]
where \(\gamma\) is a large even number (to handle negative values) determining the severity of the penalty. We find that values between six and ten work best in practice, with \(\gamma=10\) being the default in our experiments. This penalty function gives negligible penalties (less than 1) for \(|x|<\lambda_{reg}\) and rapidly grows (in the degree of \(\gamma\)) as \(|x|>\lambda_{reg}\).
We aggregate \(p(x)\) over \(I\), the set of inputs to all activation functions, by taking the average over each activation layer in the model. After aggregation, we scale the penalty using a regularization coefficient \(\beta\) and add it to the existing cross-entropy loss function of the model \(\ell_{c}\). Specifically, the modified loss function \(\ell^{\prime}\) is defined as:
\[\ell^{\prime}(\cdot)=\ell_{c}(\cdot)+\frac{\beta}{K}\sum_{x\in I}p(x) \tag{7}\]
where \(K\) is the number of activation layers in the model. This allows us to tune the importance of classification loss vs. the cost of going out-of-range.
Clipping.Although activation regularization teaches the model not to go out-of-range over time, the model still needs to avoid going to infinity during the early stages of training. Thus, during training, we apply a clipping function to the input of the activation function such that if any input goes out of range, it is truncated to the range's maximum (or minimum) value. This clipping function does not affect the penalty as it is applied after the penalty function has been computed. We emphasize that this clipping function is only used during training and is removed during inference. The intuition is that the model should learn not to go out-of-range during training and thus no longer requires this clipping function during inference. Additionally, we often find that setting the \(\lambda_{reg}\) of the penalty to be smaller than the range used for clipping and polynomial fitting can yield even better results as it gives room for going slightly out-of-range in the inference.
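A minimal PyTorch sketch combining the penalty of Equation (6), the aggregation of Equation (7), and training-only clipping could look as follows; the class and argument names are illustrative, with defaults following Section 6.1.

```python
import torch
import torch.nn as nn

class PolyReLU(nn.Module):
    def __init__(self, coeffs, lam=5.0, lam_reg=4.8, gamma=10):
        super().__init__()
        self.register_buffer("coeffs", torch.tensor(coeffs))  # lowest degree first
        self.lam, self.lam_reg, self.gamma = lam, lam_reg, gamma
        self.penalty = torch.tensor(0.0)

    def forward(self, x):
        # Equation (6), averaged over this layer's inputs; computed before clipping.
        self.penalty = ((x / self.lam_reg) ** self.gamma).mean()
        if self.training:                      # clipping is training-only
            x = x.clamp(-self.lam, self.lam)
        return sum(c * x ** k for k, c in enumerate(self.coeffs))

def pillar_loss(ce_loss, model, beta=5e-5):
    # Equation (7): cross-entropy plus the scaled mean per-layer penalty.
    pens = [m.penalty for m in model.modules() if isinstance(m, PolyReLU)]
    return ce_loss + beta * torch.stack(pens).mean()
```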
Regularization Warm-up.We find the minimum requirements for successfully training a model with polynomial activation functions are activation regularization and clipping. However, for larger models, the penalty term can be extremely large in the first few epochs (until the model learns to stay in range). In some cases, the loss can become infinite due
to our regularization penalty. To address this challenge, we adopt a regularization scheduler for the first four epochs that slowly increases both \(\gamma\) and \(\beta\) to the values used for the rest of the training. Empirically, the following schedule works well and avoids infinite loss. We let \(\gamma^{\prime}\in\{4,6,\dots,\gamma,\gamma,\dots\}\) and \(\beta^{\prime}\in\{\beta/100,\beta/50,\beta/10,\beta/5,\beta,\beta,\dots\}\).
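One way to encode this schedule (the epoch indexing is illustrative):

```python
def warmup_schedule(epoch, gamma=10, beta=5e-5):
    """Ramp gamma and beta over the first four epochs, then hold them fixed."""
    gammas = [4, 6, 8, gamma]
    betas = [beta / 100, beta / 50, beta / 10, beta / 5]
    return (gammas[epoch], betas[epoch]) if epoch < 4 else (gamma, beta)
```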
BatchNorm Layers.Garimella et al. also investigated using normalization to help prevent the escaping activation functions [12]. They proposed a min-max normalization approach where each layer's minimum and maximum values are approximated using a weighted moving average of the true minimum and maximum. These values are frozen during inference. Garimella et al. observed that this approach alone was insufficient, as activations still escaped the range during inference. We observe this operation is similar to the batch norm layer commonly added to ML models. The main difference is that the mean and standard deviation of the batch are used to normalize the layer instead of the minimum and maximum values. By fixing the approximation of the mean and standard deviation during inference (following CrypTen [20]), this operation is very efficient in MPC. We study the effect of BatchNorm in Appendix A. We find that batch norm layers considerably improve the accuracy of PILLAR. This is an intuitive result as batch normalization helps to keep each layer's output bounded and thus reduces the work of our regularization function.
### Measuring PILLAR's Effectiveness
The Regularization Coefficient.To show the effect of our regularization and coefficient \(\beta\), we conduct an experiment using the same three-layer model on CIFAR-10 from Section 5.1. In Figure 5, the left \(y\)-axis gives the out-of-range ratio (OOR), defined as the ratio of activation function inputs that were not within the interval \([-5,5]\). The right \(y\)-axis is standard classification accuracy, and the \(x\)-axis varies the regularization coefficient \(\beta\). We observe that when the coefficient, \(\beta\), is small, the model goes out-of-range often and thus has poor accuracy. As we increase \(\beta\), the out-of-range ratio decreases, and accuracy increases. However, if we increase the coefficient too much, the accuracy decreases again.
End-to-end Accuracies.We evaluate PILLAR across a range of different models and architectures considered in related work [32, 17, 29]. We summarize the results in Table 1. All results are averaged over five random seeds, and we show the 95% confidence interval. The only exception is ResNet50 on ImageNet, where we only train a single model due to the size of the dataset. We defer to Section 6 for the details of the experimental setup. We note that these results are using PyTorch with no cryptography or quantization. We give a complete evaluation using MPC in Section 6 where quantization has an effect. Table 1 provides preliminary evidence that our polynomial training approach yields high accuracies competitive with state-of-the-art ReLU models across a range of models and datasets.
## 6 Evaluation of Co-Design
In this section, we provide an end-to-end comparison of our co-design against state-of-the-art solutions in secure inference. We determine the state-of-the-art works following a recent SoK by Ng and Chow [30]. Specifically, we consider three solutions on the Pareto front of latency and accuracy as determined by Ng and Chow. These works are COINN [17], GForce [29] and CrypTen (CryptGPU) [20, 32]. We will evaluate the metrics of latency (or runtime of a single sample) and encrypted accuracy. We begin with the experimental setup, then evaluate both metrics against each related work. Section 6.2 evaluates the ResNet-18 architecture, which gives our state-of-the-art performance. Section 6.3 evaluates the VGG-16 architecture, the only architecture GForce evaluates.
\begin{table}
\begin{tabular}{l l l} \hline \hline Dataset & Model & Accuracy \(\pm\) CI \\ \hline \multirow{4}{*}{CIFAR-10} & MiniONN & 88.1 \(\pm\) 0.26 \\ & VGG-16 & 90.8 \(\pm\) 0.11 \\ & ResNet-18 & 93.4 \(\pm\) 0.14 \\ & ResNet-110 & 91.4 \(\pm\) 0.18 \\ \hline \multirow{3}{*}{CIFAR-100} & VGG-16 & 66.3 \(\pm\) 0.22 \\ & ResNet-32 & 67.8 \(\pm\) 0.32 \\ & ResNet-18 & 74.9 \(\pm\) 0.14 \\ \hline ImageNet & ResNet-50 & 77.7 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Plain-text Accuracy of PILLAR (5 runs).
Figure 6: Summary of the inference time vs accuracy for each state-of-the-art approach on the CIFAR-100 dataset in the WAN (100 ms roundtrip delay).
Section 6.4 considers other ResNets and the MiniONN architecture following COINN. Finally, in Section 6.5, we evaluate ImageNet.
Results Summary.We plot a summary of the accuracy and inference time for CIFAR-100 in Figure 6. For both datasets (recall Figure 1 for CIFAR-10), we observe that our work always gives the solution with the fastest inference time by a statistically significant amount. In terms of accuracy our work is competitive with the state-of-the-art but CryptGPU is always the most accurate as it can infer unmodified plaintext models. We find that our solution is faster than CryptGPU by \(4\times\), GForce by \(5\times\), and COINN by \(18\times\) on average in wide area networks. Our accuracies are competitive with state-of-the-art and plaintext solutions and stay stable (no escaping activations) with models containing up to 110 layers and 23 million parameters.
### Experimental Setup
We develop an experimental setup that follows as closely as possible to the works we compare to [17, 32, 29]. We use CIFAR-10/100 [21] and ImageNet [10], the same common benchmark datasets as related work. Our model architectures include: MiniONN [24], VGG [31], and ResNets [16]. This covers models of depth 7 to 110 layers with the number of trainable parameters ranging from 0.2 to 23 million. See Appendix C for further details on architectures and datasets.
Implementation Details.We implement PILLAR using GEKKO's [3] mixed integer linear programming solver for the quantization-aware polynomial fitting. We then cache these polynomial coefficients and use them for all models and datasets. We train all our models using PyTorch and implement a custom activation function module for the polynomial approximations. We call this the PolyReLU layer and use it to replace all ReLU layers. This module takes the cached coefficients and computes the output of the polynomial approximation for forward passes. When training, we compute the regularization penalty by taking a snapshot of the inputs passed to the PolyReLU layers and computing our modified loss function from equation 7. After training, we export the model as an ONNX model, which is then imported into CrypTen.
We implement ESPN and HoneyBadger in the CrypTen interface. We add configuration parameters that allow the user to specify which type of activation function they would like from ReLU or PolyReLU. Similarly, the polynomial evaluation method is parametrized in the config file between ESPN, HoneyBadger, and the default CrypTen polynomial evaluation. We follow CrypTen's default trusted first-party provider, which assumes a first party will generate and distribute all needed pre-computed values, from Beaver triplets to binary shares. To ensure we accurately measure the online phase of the inference, we move some additional operations to the pre-computation phase. Specifically, random-number generation in the Pseudo-Random Zero-Sharings (PRZs) is fixed to zero to simulate being computed in the offline phase.
For ESPN, we implement the idea directly into CrypTen's polynomial evaluation module. As discussed in Section 4, we compute the powers for the last term in the polynomial first, and then reuse these powers for all lower-degree terms. For HoneyBadger, we modify the Trusted First-Party Provider to provide the additional sets of random numbers and exponents needed for their protocol. We then implement the idea directly into the polynomial module of ArithmeticSharedTensors. We use HoneyBadger's proposed dynamic programming method with the memory optimization technique of keeping only the current and previous iterations in memory.
When running our experiments on the GPU, we observed overflows not present in the CPU version of CrypTen. Upon inspection, we found that the default number of blocks (4) set in CryptGPU is not adequate for our use-case. Increasing this parameter to 5 fixed the overflow issue completely. All experiments are run on a machine with 32 CPU cores @ 3.7 GHz and 1 TB of RAM with two NVIDIA A100 GPUs with 80 GB of memory each. We simulate network delay by calling the sleep function for the appropriate time whenever the client and server communicate. We simulate the LAN with 0.25 ms roundtrip delay and the WAN with 100 ms, following COINN [17]. All experiments (except ImageNet) are repeated over multiple random seeds, and we report the mean and 95% confidence interval as shaded areas.
Since COINN does not include source code, only executables, we use the numbers reported in the paper for their work. For all other works, we run our own benchmarks. To evaluate GForce, we use their code unmodified. In order to use CryptGPU in practice, one must first train a model in PyTorch. For ImageNet, PyTorch provides pre-trained models that we can use. However, we will need to train a model for all other architectures and datasets. We simply use the same configurations as our PolyReLU models but with standard ReLUs. We include all source code to reproduce our results1.
Footnote 1: [https://github.com/LucasFenaux/PILLAR-ESPN](https://github.com/LucasFenaux/PILLAR-ESPN)
Hyperparameters.We introduce five new hyperparameters associated with our techniques: polynomial degree (\(n\)), polynomial approximation range (\(\lambda\)), polynomial regularization range (\(\lambda_{reg}\)), polynomial regularization coefficient (\(\beta\)), and polynomial regularization exponent (\(\gamma\)). We found that, for all models and datasets, a value of \(\gamma=10\) performs well as it introduces a strong enough incentive for PolyReLU inputs to stay in range while keeping the penalization for in-range values practically 0 (if an input \(x\) is within range, then \(|x/\lambda_{reg}|<1\Rightarrow(x/\lambda_{reg})^{10}\approx 0\)). We also found that a polynomial approximation range \(\lambda=5\) provides a good compromise between regularization (larger ranges require less regularization) and quantization (lower range values use less precision).
Similarly, we found the optimal quantization-aware polynomial degree (\(n\)) to be 4. This value provides a good trade-off between approximation quality while avoiding overflow by keeping the total precision less than 64-bits. We vary the polynomial regularization range (\(\lambda_{reg}\)) and polynomial regularization coefficient (\(\beta\)) per model and dataset, although we found \(\lambda_{reg}=4.8\) (slightly tighter than \(\lambda=5\)) and \(\beta=5\times 10^{-5}\) to be good default values. We use a precision of \(p=12\) as this allows us to work in CrypTen's 64-bit ring.
We used Stochastic Gradient Descent as the optimizer with a learning rate of 0.013 as the default. This includes a Cosine Annealing Learning Rate Scheduler with a Linear Learning Rate Warmup of 5 epochs and decay 0.01. We use a weight decay of either \(10^{-4}\) or \(5\times 10^{-4}\) and a momentum of 0.9. We used a default batch size of 128 and set the default number of epochs to 185. For some models, we tuned the learning rate, number of epochs, and regularization coefficient to achieve a slightly higher accuracy. We detail the hyperparameters in our source code repository.
### ResNet-18 Architecture
In this section, we use a ResNet-18 architecture as it is the architecture that yields the best inference time and accuracy over all CIFAR-10 and CIFAR-100 experiments. For this comparison we focus on CryptGPU, which has been shown to be a state-of-the-art solution [30]. CryptGPU (or CrypTen) [32] serves as a baseline in all our comparisons including those against GForce and COINN in Sections 6.3 and 6.4. Neither COINN nor GForce support the ResNet-18 architecture evaluated in this section.
Inference Time.We measure the inference time of a single input image over varying network delays. The results are given in Figure 7. We include the result for CIFAR-100 and omit the plot for CIFAR-10 as it displays similar trends. We observe that both PILLAR + HoneyBadger and PILLAR + ESPN outperform CryptGPU with statistical significance across all roundtrip delays (as the shaded areas do not overlap). In the WAN (100 ms), this corresponds to a 4\(\times\) speedup over CryptGPU. Furthermore, we find HoneyBadger and ESPN perform similarly as observed in Section 4, with PILLAR + HoneyBadger having a slight advantage.
Accuracy.We measure the accuracy of the models on the testing set both in plain (using PyTorch) and encrypted (using CrypTen). We give the result in Table 2. First, we observe the plain and encrypted accuracies are very similar, indicating that quantization has a minor effect despite not considering this in training. We find that CryptGPU and PILLAR give similar accuracies, with CryptGPU performing slightly better, as is to be expected since they use unmodified activation functions. However, we argue this slight loss in accuracy is well justified by the significant decrease in inference time.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Dataset & Technique & Plain Acc & Enc Acc \\ \hline \multirow{2}{*}{CIFAR-10} & PILLAR & 93.4 \(\pm\) 0.14 & 93.3 \(\pm\) 0.15 \\ & CryptGPU & 94.7 \(\pm\) 0.09 & 94.6 \(\pm\) 0.10 \\ \hline \multirow{2}{*}{CIFAR-100} & PILLAR & 74.9 \(\pm\) 0.14 & 74.4 \(\pm\) 0.21 \\ & CryptGPU & 76.6 \(\pm\) 0.07 & 76.6 \(\pm\) 0.13 \\ \hline \hline \end{tabular}
\end{table}
Table 2: ResNet-18 accuracy comparison (5 runs).
Figure 8: VGG-16 evaluation on CIFAR-100 (20 runs).
Figure 7: ResNet-18 evaluation on CIFAR-100 (20 runs).
### VGG-16 Architecture
In this section, we compare with GForce, the current state-of-the-art as shown by Ng and Chow [30]. GForce focused on a modified VGG-16 [31] architecture and compared it to all other works (including those using different architectures). For completeness, we evaluate the VGG-16 architecture using our techniques, and CryptGPU although we note that the ResNet-18 architecture outperforms VGG-16 in both inference time and accuracy. COINN [17] does not give results for VGG-16, so we exclude it from this section.
Inference Time.Since our work aims to reduce the rounds needed by binary non-linear layers, we replace the MaxPool layers in the VGG-16 with AvgPool for all solutions (including CryptGPU and GForce). We give the inference times over various delays in Figure 8. First, we note that GForce significantly outperforms all other solutions in the LAN. However, for more realistic high latency networks (>5ms roundtrip delay), we observe our solutions significantly outperform GForce (5\(\times\) speedup in WAN). Once again, our solutions outperform CryptGPU for all network delays.
### Other ResNets and MiniONN

Inference Time.We once again outperform CryptGPU in all evaluations2 with a 4\(\times\) speedup on average in the WAN.
Footnote 2: All of which are statistically significant except MiniONN in LAN where the confidence intervals overlap slightly.
Encrypted Accuracy.We give the results in Table 4. We observe that PILLAR is competitive with related work in all models, although we remark that, once again, our ResNet-18 models outperform all others. We also note that, while COINN does quantization-aware training, PILLAR does not and still only loses a small amount of accuracy in encryption vs. plaintext.
### Scaling to ImageNet
In this section, we evaluate the scalability of our approach on the ImageNet dataset using a ResNet-50 architecture with 23 million parameters. This architecture was previously too large for training with polynomial activation functions [12]. We compare our approach to COINN and CryptGPU and exclude GForce as they do not consider ImageNet.
Inference Time.We plot the inference time in Figure 10. We observe a significant reduction over COINN across all network delays with a 28\(\times\) reduction in the LAN (0.25 ms) and a 90\(\times\) reduction in the WAN (100 ms). Compared to CryptGPU we find that PILLAR + HoneyBadger is the fastest in all network delays by 3\(\times\) on average. PILLAR + ESPN is slightly slower in the LAN, but once again outperforms CryptGPU in the WAN.
Encrypted Accuracy.We present a summary of the accuracies in Table 5. We observe a much higher encrypted accuracy for PILLAR compared to COINN and thus, our solution is Pareto dominant. For CryptGPU, we use a pre-trained PyTorch model with state-of-the-art accuracy. Therefore, as expected, CryptGPU has an accuracy 3% higher than the model we trained from scratch. We note that with a higher degree polynomial, we were able to train a 79.2% polynomial model. However, this model cannot be inferred in the 64-bit ring used by CrypTen (as higher degrees need more precision by equation 5). We discuss future directions to further improve this result in Section 7.
## 7 Discussion
Our experimental evaluation in Section 6 showed our algorithms significantly outperform all related work in inference time when the network latency is high. While state-of-the-art compared to other polynomial training approaches, PILLAR still incurs a minor accuracy degradation compared to standard models with ReLUs. We posit a few directions for future work to further close this gap between polynomials and ReLUs.
Quantization.Note that aside from our quantization-aware polynomial fitting described in Section 5, we have made no efforts to reduce the effects of quantization. COINN developed training algorithms to help the model be robust to the overflow and quantization present in MPC [17]. An interesting future work would be to combine the COINN methods with PILLAR to see if further accuracy gains are possible.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Dataset/Model & Technique & Plain Acc & Enc Acc \\ \hline \multirow{2}{*}{CIFAR-10 / MiniONN} & PILLAR & 88.1 \(\pm\) 0.26 & 87.85 \(\pm\) 0.36 \\ & CryptGPU & 91.2 \(\pm\) 0.17 & 91.2 \(\pm\) 0.16 \\ & COINN & - & 87.6 \\ \hline \multirow{2}{*}{CIFAR-10 / ResNet-110} & PILLAR & 91.4 \(\pm\) 0.18 & 91.4 \(\pm\) 0.18 \\ & CryptGPU & 92.8 \(\pm\) 0.27 & 92.7 \(\pm\) 0.26 \\ & COINN & - & 93.4 \\ \hline \multirow{2}{*}{CIFAR-100 / ResNet-32} & PILLAR & 67.8 \(\pm\) 0.32 & 67.84 \(\pm\) 0.35 \\ & CryptGPU & 68.4 \(\pm\) 0.46 & 68.5 \(\pm\) 0.45 \\ \cline{1-1} & COINN & - & 68.1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Accuracy comparison on the various architectures considered in COINN (5 runs).
Figure 10: ImageNet evaluation on ResNet-50 (20 runs).
\begin{table}
\begin{tabular}{l l l} \hline \hline Technique & Plain Acc & Enc Acc \\ \hline PILLAR & 77.7 & 77.5 \\ CryptGPU & 80.8 & 80.8 \\ COINN & - & 73.9 \\ \hline \hline \end{tabular}
\end{table}
Table 5: ImageNet accuracy comparison (1 run).
Precision.By using CrypTen as our backend, we were limited to a 64-bit ring for cryptographic operations. As discussed in Section 4, this precision determines the degree and range of polynomials we can use. Interesting future work is to increase this precision to enable higher-degree polynomials and study the performance-accuracy trade-off. Our initial results on ImageNet show that we can train up to a degree eight polynomial without suffering escaping activations. However, we could not increase the ring size to study the effect of higher degrees on inference time.
MaxPools.We recall that a MaxPool layer requires comparisons and, thus, expensive conversions to binary shares (like ReLUs). Therefore, we replaced all MaxPools with AvgPool layers. However, in some architectures, such as VGG-16 [31], we found that swapping MaxPool for AvgPool degraded accuracy by up to 6%. Finding an efficient MaxPool alternative for architectures like VGG is important for future work. However, since the ResNet models give high accuracy using AvgPool layers we did not pursue this issue further.
## 8 Related Work
This work focuses on achieving state-of-the-art run time and accuracy in two-party secure inference. We measure this objective by evaluating against the current state-of-the-art as determined by a recent SoK by Ng and Chow [30]. Namely, we compare to COINN [17], GForce [29] and CrypTen [20, 32] in Section 6 as they represent the Pareto front according to Ng and Chow [30]. Another potential candidate on the Pareto front is Falcon, which offers low latency but lower accuracy [23]. We did not evaluate Falcon as the accuracy drop was too significant (over 10% [30]). Furthermore, GForce is shown to outperform Falcon in both latency and accuracy, and we outperform GForce [29]. For a complete list of other works not on the Pareto front, we defer to Ng and Chow's work [30]. Notably, many works consider different threat models or use different approaches, such as homomorphic encryption. We leave extending our polynomial activation functions to these settings for future work. For the remainder of this section, we discuss works with a similar approach to ours that are not state-of-the-art or not evaluated by Ng and Chow [30].
Replacing or Reducing ReLUs. It has been established that non-linear functions such as ReLU are the bottleneck for secure computation [12, 13, 27, 17]. Several works initially focused on reducing the number of ReLU activations, optimizing for the best trade-off between accuracy and runtime [13, 19]. A faster approach is to replace all ReLUs entirely using polynomial approximations [12]. In Section 5, we discussed the most recent work in this space, Sisyphus [12]. While making significant progress toward training models with polynomial activations, Sisyphus could not overcome the escaping activation problem for models with more than 11 layers. Before Sisyphus, there were a handful of works on smaller models that typically focused on partial replacement (some ReLUs remained) [27, 28, 14]. An interesting exception from Lee et al. used degree-29 polynomials in HE but suffered prohibitively high runtimes [22]. Our work is the first to make high-accuracy polynomial training feasible (without escaping activations) in deep neural networks.
A notable recent work is PolyKervNets [2]. Inspired by the computer vision literature, PolyKervNets remove the activation functions and instead exponentiate the output of each convolutional layer [2]. The problem with this approach is that, similar to polynomial activation functions, the exponents make the training unstable. Aremu and Nandakumar note that exploding gradients prevent their approach from scaling to ResNet models deeper than ResNet-18 (using degree-2 polynomials). Furthermore, PolyKervNets allow for only a single fully connected layer, which reduces the accuracy of the models. Conversely, PILLAR scales to deeper models such as ResNet-110 and to much higher degrees. Moreover, we achieve significantly better plaintext accuracy on ResNet-18 (93.4 vs. 90.1 on CIFAR-10 and 74.9 vs. 71.3 on CIFAR-100).
Polynomial Evaluation in MPC. Our work focuses on co-designing the activation functions with the cryptography by using polynomials. However, the problem of computing polynomials in MPC is of independent interest and has also been studied in the literature. The state of the art in this space is HoneyBadger, as discussed in Section 4. Other notable works include the initial inspiration for HoneyBadger from Damgard et al. [8]. This early approach conducts exponentiation by blinding and reconstructing the number to be exponentiated so that the powers can be computed in plaintext [8]. Building on this idea, Polymath constructs a constant-round protocol for evaluating polynomials, focused on matrices [25]. However, HoneyBadger outperforms Polymath by reducing both the number of rounds and the number of reconstructions to one.
## 9 Conclusion
In this work, we co-designed the ML and MPC aspects of secure inference to remove the bottleneck of non-linear layers. PILLAR maintains a competitive inference accuracy while being significantly faster in wide area networks using novel single round MPC protocols (ESPN and HoneyBadger). Our state-of-the-art inference times motivate future work to further improve the ML accuracy of polynomial activations in DNNs.
## Acknowledgments
We gratefully acknowledge the support of the Natural Sciences and Engineering Research Council (NSERC) for grants RGPIN-05849, and IRC-537591, the Royal Bank of Canada, and Amazon Web Services Canada.
## Availability
We make all source code to reproduce our experiments available here: [https://github.com/LucasFenaux/PILLAR-ESPN](https://github.com/LucasFenaux/PILLAR-ESPN).
|
2302.08965 | Multi-body wave function of ground and low-lying excited states using
unornamented deep neural networks | We propose a method to calculate wave functions and energies not only of the
ground state but also of low-lying excited states using a deep neural network
and the unsupervised machine learning technique. For systems composed of
identical particles, a simple method to perform symmetrization for bosonic
systems and antisymmetrization for fermionic systems is also proposed. | Tomoya Naito, Hisashi Naito, Koji Hashimoto | 2023-02-17T15:57:18Z | http://arxiv.org/abs/2302.08965v3 | A simple method for multi-body wave function of ground and low-lying excited states using deep neural network
###### Abstract
We propose a method to calculate wave functions and energies not only of the ground state but also of low-lying excited states using a deep neural network and the unsupervised machine learning technique. For systems composed of identical particles, a simple method to perform symmetrization for bosonic systems and antisymmetrization for fermionic systems is also proposed.
Footnote †: preprint: RIKEN-iTHEMS-Report-23, KUNS-2954
## I Introduction
Atoms, molecules, and solids are composed of many electrons and ions, and atomic nuclei are composed of many nucleons. In principle, once the Schrodinger equation of these systems is solved, most properties can be described. However, in practice, they are quantum many-fermion systems, which are difficult to solve directly. Hence, it has been an important issue to solve the Schrodinger equation for quantum many-fermion systems efficiently and accurately; in fact, many numerical methods, for instance, the Faddeev calculation [1], several methods for few-body systems [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12], the quantum Monte Carlo method (QMC) including the variational Monte Carlo and diffusion Monte Carlo (DMC) methods [13; 14; 15; 16; 17], the configuration interaction method [18; 19; 20], the coupled cluster method [21; 22; 23; 24], and the density functional theory (DFT) [25; 26; 27], have been proposed in recent decades. These methods can be classified into two categories: methods based on the diagonalization of the Hamiltonian and methods based on the variational principle.
Among the above, DFT and QMC are classified into the latter. The variational principle [28] guarantees that the ground-state energy \(E_{\rm gs}\) of a Hamiltonian \(H\) satisfies
\[E_{\rm gs}=\inf\frac{\langle\Psi|H|\Psi\rangle}{\langle\Psi|\Psi\rangle}, \tag{1}\]
where all the possible functions are considered in the infimum. The minimizer corresponds to the ground-state wave function. Indeed, it is scarcely possible to consider all the possible functions when minimizing the energy expectation value; hence, in practice, the calculation accuracy of a method based on the variational principle depends on the ansatz of a trial wave function. In other words, the size of the space of trial wave functions determines the calculation accuracy.
For instance, a trial wave function of DFT is a Slater determinant, which is the simplest antisymmetric trial wave function. Owing to the simplicity of the ansatz, the numerical cost is drastically reduced, while it is known that the inter-particle correlation is missing [29].
In the QMC calculation, a Jastrow-type trial wave function [30] is often used. A Jastrow-type wave function \(|\Psi\rangle\) consists of a single- (or sometimes multi-) Slater determinant \(|\Phi_{0}\rangle\) and a correlation factor \(F\), \(|\Psi\rangle=F\,|\Phi_{0}\rangle\). By assuming that \(F\) is a symmetric function, \(|\Psi\rangle\) satisfies the antisymmetry. Owing to the introduction of the factor \(F\), inter-particle correlations are described better than with a single Slater determinant. Nevertheless, most QMC calculations optimize mainly \(F\), while \(|\Phi_{0}\rangle\) is optimized only around an initial ansatz [31]. In addition, an ansatz is introduced even for \(F\); hence, the accuracy also depends on this ansatz. Recently, based on the QMC calculation, a deep neural network (DNN) has been used as the ansatz of a trial wave function [32; 33; 34]. Since deep neural networks span a much wider space, the calculation accuracy is much improved.
Another problem of variational-principle-based methods is the calculation of excitation spectra. Excitation spectra are important quantities of molecules and atomic nuclei, while the variational principle [Eq. (1)] obtains the ground state only. Hence, another technique is needed to calculate excited states based on the variational principle. Indeed, a method to calculate low-lying excited states using the DMC calculation was proposed [35] by considering the orthogonality of wave functions, while its application has still been limited [36], whereas the calculation of excited states on top of the DFT ground state has been widely performed by using the random phase approximation or some other techniques [37; 38; 39; 40; 41], while there is still room for improvement. Recently, low-lying excited states
were also obtained in Ref. [42] using the combination of the DNN, QMC, and the orthogonal condition.
In this paper, we propose a new method to calculate energies and wave functions of the ground state and low-lying excited states based on the variational principle. The ground-state wave function is assumed to be a DNN, which, in principle, is able to represent any function [43; 44]. Using an essence of machine learning--the minimization of a loss function--the wave function is directly optimized. References [45; 46] also assumed the ground-state wave function to be a deep neural network that was optimized by using the machine learning technique: Ref. [45] performed calculations of few-body bosonic systems and Ref. [46] performed a calculation of the simplest realistic system--a deuteron. Although these papers are pioneering works of unsupervised machine learning for quantum many-body problems, the fermionic antisymmetrization was not considered and excited states were not studied, while in quantum many-body problems both ground and excited states of many-fermion systems are of general interest. Recently, Ref. [47] proposed a method to obtain the ground state of the many-body Schrodinger equation for fermionic systems by using a tensor neural network, while its implementation is involved and the antisymmetrization is, indeed, not perfectly guaranteed.
In this paper, on top of the method in Refs. [45; 46], a simple method of antisymmetrization for many-fermion systems or symmetrization for many-boson systems is introduced. Then, low-lying excited states are sequentially calculated by using the orthogonality conditions and the variational principle. In this method, there is no need to discover a DNN architecture that generates (anti)symmetric wave functions: the (anti)symmetrization is imposed at the level of the loss function. Furthermore, the symmetrization and the antisymmetrization are implemented in almost the same way, and the wave function perfectly satisfies the (anti)symmetry. Thanks to the simplicity of the implementation, the numerical cost is quite small. We show that our method works successfully for popular examples of bosonic and fermionic quantum mechanical systems, providing a fundamental basis of the DNN method for quantum mechanics.
This paper is organized as follows: Section II is devoted to the calculation of the ground state, where the novel (anti)symmetrization is introduced. Section III is devoted to the calculation of low-lying excited states. All the calculations are performed on a MacBook Pro with the Apple M1 chip (MacBook Pro (13-inch, M1, 2020): MacBookPro17,1) and \(16\,\mathrm{GB}\) of memory. Section IV gives a summary of this paper.
## II Ground-state calculation
In this section, the ground-state wave function and energy are calculated by using a DNN and the machine learning technique. Throughout the paper, the machine learning framework Tensorflow [48] is used.
### Network structure and machine learning technique
In general, a wave function of a \(d\)-dimensional \(N\)-particle system is a function of the spatial coordinates of all the particles \(\mathbf{r}_{j}=(r_{j1},r_{j2},\ldots,r_{jd})\) (\(j=1,\ 2,\ \ldots,\ N\)). Here, for the sake of simplicity, we neglect the spin and isospin dependence of the wave functions, and \(\mathbf{R}\) denotes \(\mathbf{R}=(\mathbf{r}_{1},\mathbf{r}_{2},\ldots,\mathbf{r}_{N})=(r_{11},r_{12},\ldots,r_{1d}, r_{21},r_{22},\ldots,r_{2d},\ldots,r_{N1},r_{N2},\ldots,r_{Nd})\).
In this work, the wave function is represented by a deep neural network with \(Nd\) input units, which correspond to the spatial coordinate \(\mathbf{R}\), and one output unit, which corresponds to the value of the wave function \(\psi\left(\mathbf{R}\right)\). Between the input and output layers, there are hidden layers. Each unit is connected to all the units in the layers immediately before and after. A schematic figure of the deep neural network is shown in Fig. 1.
\[\mathrm{softplus}\left(x\right)=\log\left(1+e^{x}\right) \tag{2}\]
is used for an activation function and the adam optimizer [49] is used for the optimization process.
As is the normal procedure in the numerical calculation of wave functions, the spatial coordinate is discretized. Each mesh point is treated as a sample of the batch of the machine learning. In other words, if the spatial coordinate of each direction is discretized with \(M\) meshes, the batch size is the same as the number of meshes, \(M^{dN}\). The mini-batch technique is not used.

Figure 1: Schematic figure of the deep neural network representing a one-dimensional three-body system.
Once the spatial coordinates are discretized, the Hamiltonian
\[H=-\frac{\hbar^{2}}{2m}\sum_{j}\Delta_{j}+\sum_{j}V^{\mathrm{ext}}\left(\mathbf{r }_{j}\right)+\frac{1}{2}\sum_{j\neq k}V^{\mathrm{int}}\left(\mathbf{r}_{j}, \mathbf{r}_{k}\right) \tag{3}\]
can be written as a matrix, where \(m\) is the mass of the particles, \(V^{\mathrm{ext}}\) is the external potential, and \(V^{\mathrm{int}}\) is the inter-particle interaction. The matrices of the external potential and the interaction are diagonal and that of the kinetic energy is sparse. Hence, the expectation value of the Hamiltonian \(\left\langle H\right\rangle\) can be calculated by using the sparse-matrix technique. The ground-state wave function minimizes \(\left\langle H\right\rangle\); hence, \(\left\langle H\right\rangle\) is regarded as a loss function. Note that all the calculations are performed with double precision floating point numbers (float64). For simplicity, \(m=\hbar=1\) is assumed.
The procedure in the Tensorflow code is as follows:
1. Construct a model of the deep neural network;
2. Define the loss function as \(\left\langle H\right\rangle\); note that although a Tensorflow subroutine specification for the loss function technically requires two inputs--the training data (true_value) and the network output (predicts)--the former is not referred to in our training;
3. Fit the model (model.fit) where the initial value of predicts consists of positive random numbers;
4. The final wave function output_wf is given by using model.predict;
5. The ground-state energy is calculated using the wave function obtained by the last step.
The third step (model.fit) corresponds to determining the parameters inside the DNN; the fourth step (model.predict) corresponds to storing the wave function obtained in the previous step; the fifth step corresponds to calculating the ground-state energy using the wave function obtained in the third step. Note that predicts and the final wave function should be normalized whenever generated.
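As an illustration of the second step, the custom loss can be sketched as follows (a minimal sketch under our naming, not the authors' exact code), assuming h_sparse holds the discretized Hamiltonian introduced below as a tf.sparse.SparseTensor and that training is performed full-batch, so that predicts holds the entire discretized wave function:

```python
import tensorflow as tf

def make_energy_loss(h_sparse):
    # h_sparse: the discretized Hamiltonian as a tf.sparse.SparseTensor.
    def energy_loss(true_value, predicts):
        # `true_value` is demanded by the Keras loss signature but unused.
        psi = predicts / tf.norm(predicts)     # normalize whenever generated
        h_psi = tf.sparse.sparse_dense_matmul(h_sparse, psi)
        return tf.reduce_sum(psi * h_psi)      # <psi| H |psi>
    return energy_loss

# model.compile(optimizer="adam", loss=make_energy_loss(h_sparse))
# model.fit(coords, dummy_targets, batch_size=n_mesh_points, ...)
```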
### One-dimensional one-particle systems
In this section, benchmark calculations of one-dimensional systems are shown. The dependence of the calculation accuracy on the numbers of units and layers is also discussed. Since there exists only one particle, there is no interaction, \(V^{\mathrm{int}}\equiv 0\); thus, the Hamiltonian reads
\[H=-\frac{1}{2}\frac{d^{2}}{dx^{2}}+V^{\mathrm{ext}}\left(x\right). \tag{4}\]
Since we focus only on bound states in this paper, it is enough to deal with a limited spatial region. In the calculation, \(\left|x\right|\leq x_{\mathrm{max}}\) is considered and the box is discretized into \(1024\) meshes, i.e., \(M=1024\). The Dirichlet boundary condition (\(\psi\left(\pm x_{\mathrm{max}}\right)=0\)) is used. For the second derivative, the three-point formula is used for simplicity, while it can be straightforwardly improved for higher accuracy [50]. Then, \(H\) is discretized as
\[H\simeq\tilde{H} =-\frac{1}{2h^{2}}\tilde{T}+\tilde{V}^{\mathrm{ext}}, \tag{5a}\] \[\tilde{T} =\begin{pmatrix}-2&1&0&\ldots&0&0&0\\ 1&-2&1&\ldots&0&0&0\\ 0&1&-2&\ldots&0&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\ldots&-2&1&0\\ 0&0&0&\ldots&1&-2&1\\ 0&0&0&\ldots&0&1&-2\end{pmatrix},\] (5b) \[\tilde{V}^{\mathrm{ext}} =\begin{pmatrix}V_{1}^{\mathrm{ext}}&0&0&\ldots&0&0&0\\ 0&V_{2}^{\mathrm{ext}}&0&\ldots&0&0&0\\ 0&0&V_{3}^{\mathrm{ext}}&\ldots&0&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\ldots&V_{M-3}^{\mathrm{ext}}&0&0\\ 0&0&0&\ldots&0&V_{M-2}^{\mathrm{ext}}&0\\ 0&0&0&\ldots&0&0&V_{M-1}^{\mathrm{ext}}\end{pmatrix} \tag{5c}\]
and the wave function is also discretized as a \((M-1)\)-dimensional vector
\[\psi\simeq\tilde{\psi}=\begin{pmatrix}\psi_{1}\\ \psi_{2}\\ \psi_{3}\\ \vdots\\ \psi_{M-3}\\ \psi_{M-2}\\ \psi_{M-1}\end{pmatrix}, \tag{6}\]
where \(\tilde{\psi}\) is assumed to be normalized, i.e., \(h\sum_{j}\tilde{\psi}_{j}^{2}=1\), \(V_{j}^{\mathrm{ext}}=V^{\mathrm{ext}}\left(x_{j}\right)\), \(\psi_{j}=\psi\left(x_{j}\right)\), \(x_{j}=-x_{\mathrm{max}}+hj\), and \(h\) denotes the mesh size \(h=2x_{\mathrm{max}}/M\)[51]. This \(\tilde{\psi}\) is used for predicts and output_wf. Here, a tilde denotes the discretized form. Then, \(\left\langle H\right\rangle\) can be calculated as
\[\left\langle H\right\rangle\simeq\left\langle\tilde{H}\right\rangle=\tilde{ \psi}^{\mathrm{T}}\tilde{H}\tilde{\psi}. \tag{7}\]
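As a self-contained numerical check of Eqs. (5)-(7) (our illustration, carrying the mesh factor \(h\) explicitly and assuming the harmonic potential of the next subsection with \(\omega=1\)), the discrete expectation value of the exact Gaussian ground state indeed comes out close to \(1/2\):

```python
import numpy as np
import scipy.sparse as sp

M, x_max = 1024, 5.0
h = 2 * x_max / M
x = np.linspace(-x_max + h, x_max - h, M - 1)   # interior mesh points

# Eq. (5): three-point kinetic stencil plus the diagonal potential.
T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(M - 1, M - 1))
H = -T / (2 * h**2) + sp.diags(0.5 * x**2)

psi = np.exp(-x**2 / 2)                    # exact Gaussian ground state
psi /= np.sqrt(h) * np.linalg.norm(psi)    # h * sum_j psi_j^2 = 1
print(h * psi @ (H @ psi))                 # ~0.5, the exact E_gs
```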
#### ii.2.1 Harmonic oscillator
First of all, the harmonic oscillator potential
\[V^{\mathrm{ext}}\left(x\right)=\frac{1}{2}\omega^{2}x^{2} \tag{8}\]
is tested. The ground-state wave function \(\psi_{\mathrm{gs}}\) and energy \(E_{\mathrm{gs}}\) are, respectively, known exactly as [28]
\[\psi_{\mathrm{gs}}\left(x\right) =\left(\frac{\omega}{\pi}\right)^{1/4}\exp\left(-\frac{\omega x^{ 2}}{2}\right), \tag{9a}\] \[E_{\mathrm{gs}} =\frac{\omega}{2}. \tag{9b}\]
In this calculation, \(x_{\text{max}}=5\) is used.
Table 1 shows the summary of the calculations. In general, all the calculations give almost the correct energy [Eq. (9b)]. On the one hand, the total optimization takes a similar amount of time in all the calculations. On the other hand, different setups require different numbers of epochs and different times per epoch for optimizing the DNN. A small DNN tends to take a shorter time for each epoch, while it requires more epochs. It seems that 32 units per layer is too large, as it requires more epochs and a longer time per epoch. It should be noted that the number of epochs differs from run to run since the initial condition of the fitting procedure is generated by random numbers. In addition, if one uses a different value of the learning rate, the number of epochs can be different.
Figure 2 shows relative errors of the loss function, \(\left\langle H\right\rangle\), to the exact ground-state energy \(E_{\text{gs}}\) as functions of the number of epochs. It can be seen that, although the loss function achieves a relative error of \(1.0\times 10^{-8}\), the final accuracy is about \(1.0\times 10^{-4}\). This may be due to the precision of the Tensorflow code.
Figure 3 shows the calculated wave functions. The red thick lines correspond to the exact solution given in Eq. (9a), while the thin lines correspond to the results given in this work, where different colors correspond to different numbers of units and layers. The relative errors of the DNN wave function, \(\psi^{\text{DNN}}\), to the exact one, \(\psi^{\text{exact}}\),
\[\delta\psi\left(x\right)=\frac{\left|\psi^{\text{DNN}}\left(x\right)-\psi^{ \text{exact}}\left(x\right)\right|}{\psi^{\text{exact}}\left(x\right)} \tag{10}\]
are shown in Fig. 4. It can be seen that the DNN calculation basically reproduces the exact solution in the region of interest within an accuracy of \(10^{-4}\) or better. This deviation can be reduced if we use a tighter convergence criterion [52]. In the tail region, the deviation \(\delta\psi\left(x\right)\) diverges because the denominator of Eq. (10), \(\psi^{\text{exact}}\left(x\right)\), approaches zero. The deviation looks larger if \(\omega\) is smaller, which is related to the cutoff parameter for the spatial mesh \(x_{\text{min}}\). The exact value of \(\psi_{\text{gs}}\left(x_{\text{min}}\right)\) is \(2.8\times 10^{-6}\) for \(\omega=1\), while it is \(8.1\times 10^{-28}\) for \(\omega=5\) and much smaller for \(\omega=10\); in the numerical calculation, these values are approximated as zero. The value \(2.8\times 10^{-6}\) may be too large to be assumed zero.
It should be noted that a rather small DNN is enough to reproduce the wave function. Owing to this simplicity, it is easy to analyse the weights and biases of the DNN. For instance, the DNN wave function for the single layer with four units includes only 13 parameters; the ground-state DNN wave function for \(\omega=1\) can be written as
\[\psi_{\text{gs}}\left(x\right) =\frac{1}{3.7451}\,\text{softplus}\left(a_{\text{gs}}\left(x \right)\right), \tag{11a}\] \[a_{\text{gs}}\left(x\right) =2.4069a_{1}\left(x\right)-1.8344a_{2}\left(x\right)-1.9778a_{3} \left(x\right)+2.3484a_{4}\left(x\right)-4.8998,\] (11b) \[a_{1}\left(x\right) =\text{softplus}\left(0.35953x+3.9226\right),\] (11c) \[a_{2}\left(x\right) =\text{softplus}\left(2.5821x+0.033213\right),\] (11d) \[a_{3}\left(x\right) =\text{softplus}\left(-0.65170x+2.9574\right),\] (11e) \[a_{4}\left(x\right) =\text{softplus}\left(0.15421x+2.2016\right), \tag{11f}\]
where the first coefficient of Eq. (11a) (\(1/3.7451\)) is obtained not by the DNN but by the normalization. Hence, a smaller DNN is better not only for the calculation cost but also for the analysis of the structure of the DNN.
Let us provide our interpretation of the obtained wave function [Eq. (11a)]. The rectified linear function (ReLU)
\[\text{ReLU}\left(x\right)=\begin{cases}0&\left(x<0\right),\\ x&\left(x\geq 0\right)\end{cases} \tag{12}\]
is a widely-used activation function, and the softplus function can be regarded as a smoothed version of the ReLU. Here, for interpreting Eq. (11a), we shall simply replace the softplus function with the ReLU. The wave function obtained by the DNN [Eq. (11a)] and the obtained function before the output layer [Eq. (11b)] are shown in Fig. 5, where the normalization factor (\(1/3.7451\)) of \(\psi_{\text{gs}}\) is ignored. Equations (11a) and (11b) in which the softplus function is replaced with the ReLU function are also plotted as \(\psi_{\text{gs}}^{\text{ReLU}}\) and \(a_{\text{gs}}^{\text{ReLU}}\), respectively.
\[\psi_{\text{gs}}\approx\begin{cases}-ax+b&\left(0\leq x\leq b/a\right),\\ ax+b&\left(-b/a\leq x\leq 0\right),\\ 0&\left(\text{otherwise}\right),\end{cases} \tag{13}\]
where \(a\) and \(b\) are positive numbers. The ReLU function at the output layer guarantees that the wave function vanishes for \(x<-b/a\) and \(x>b/a\), and thus \(a_{\text{gs}}\) should be \(\pm ax+b\). This function can be represented by just two ReLU functions. Hence, even two units in the hidden layer are enough to describe the rough structure of the ground-state wave function, and with an increasing number of units, the ground-state wave function is reproduced easily. Since the ReLU function is not differentiable at \(x=0\), the ReLU wave function is not differentiable. Hence, the softplus function is better suited to describe a differentiable function, while the ReLU function can also describe a differentiable function approximately if the number of units is large enough. In the case of \(N\)-body systems, a function similar to Eq. (11a) can be represented by just \(2^{N}\) ReLU functions.
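As a quick numerical illustration of Eq. (13) (our sketch, with illustrative names), the tent shape can indeed be assembled from two hidden ReLU units and a ReLU output:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent(x, a=1.0, b=1.0):
    # b - a*relu(x) - a*relu(-x) equals b - a|x|, i.e., a_gs before the
    # output unit; the outer ReLU clips it to zero for |x| > b/a.
    return relu(b - a * relu(x) - a * relu(-x))

x = np.linspace(-3.0, 3.0, 7)
print(tent(x))   # [0. 0. 0. 1. 0. 0. 0]: the piecewise-linear tent of Eq. (13)
```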
#### iii.2.2 Square wall potential
Next, the square wall potential
\[V^{\mathrm{ext}}\left(x\right)=\begin{cases}-V_{0}&(|x|<x_{0}),\\ 0&(\text{otherwise})\end{cases} \tag{14}\]
is tested (\(V_{0}>0\)). The analytical forms of the ground-state wave function \(\psi_{\mathrm{gs}}\) and energy \(E_{\mathrm{gs}}\) are unknown; thus, our values of the energy will be compared with the numerical result obtained by the orthodox method of Hamiltonian diagonalization. In this calculation, \(x_{\mathrm{max}}=20\) and \(x_{0}=1\) are used.
Table 2 shows the summary of the calculations. In general, all the calculations give almost the correct energy. The calculation time per epoch and the number of epochs with respect to the number of layers and units are slightly larger than in the case of the harmonic oscillator.
Figure 6 shows relative errors of the loss function, \(\langle H\rangle\), to the exact ground-state energy \(E_{\mathrm{gs}}\) as functions of the number of epochs. It can be seen that, although the loss function achieves a relative error of \(1.0\times 10^{-7}\), the final accuracy is about \(1.0\times 10^{-2}\). Note that in this calculation, the convergence criterion needs to be set looser than in the other cases; otherwise, the calculation does not reach convergence. This may be related to the shape of the potential: the asymptotic region of the square wall potential is zero, while the harmonic oscillator potential increases rapidly. It will be shown later that the double-well potential, which is close to the latter situation, reaches convergence with the tight criterion.
Figure 7 shows the calculated wave functions. The red thick lines correspond to the solution given by the exact diagonalization, where the same mesh size and matrix form are used for comparison; the thin lines correspond to the results given in this work. It can be seen that the DNN calculation basically reproduces the solutions given by the exact diagonalization.
### One-dimensional many-particle systems
When one considers systems composed of many identical particles, the symmetrization for bosonic systems or the antisymmetrization for fermionic systems of the wave function must be considered. The ground state of a bosonic system is identical to that of distinguishable particles; hence, it involves no extra difficulty, as was shown in Ref. [45], while the antisymmetrization is rather difficult. In this section, a simple method of (anti)symmetrization of the DNN wave function is provided, in which the symmetrization and the antisymmetrization can be performed on an equal footing.
#### iii.3.1 Hamiltonian matrix
As was done in the last section, the discretized Hamiltonian \(\tilde{H}\) should be represented in a matrix form and the discretized wave function \(\tilde{\psi}\) should be represented in a vector form. Here, one-dimensional two-body systems are considered as an example, and their coordinates are denoted by \(x\) and \(y\). Each direction is discretized with \(M\) meshes, i.e., in total \(M\times M\) meshes. Then, the discretized wave function \(\tilde{\psi}\) is
\[\tilde{\psi}=\begin{pmatrix}\psi_{11}\\ \psi_{12}\\ \vdots\\ \psi_{1(M-1)}\\ \psi_{21}\\ \psi_{22}\\ \vdots\\ \psi_{2(M-1)}\\ \vdots\\ \psi_{(M-1)1}\\ \psi_{(M-1)2}\\ \vdots\\ \psi_{(M-1)(M-1)}\end{pmatrix}, \tag{15}\]
where \(\psi_{jk}=\psi\left(x_{j},y_{k}\right)\), \(x_{j}=-x_{\mathrm{max}}+hj\), \(y_{k}=-y_{\mathrm{max}}+hk\), and \(x_{\mathrm{max}}=y_{\mathrm{max}}\). Accordingly, the discretized Hamiltonian \(\tilde{H}\) reads
\[\tilde{H}=-\frac{1}{2h^{2}}\tilde{T}_{1}-\frac{1}{2h^{2}}\tilde{T}_{2}+\tilde{ V}_{\mathrm{ext}}^{1}+\tilde{V}_{\mathrm{ext}}^{2}+\tilde{V}_{\mathrm{int}}, \tag{16}\]
where \(\tilde{T}_{1}\) and \(\tilde{T}_{2}\) are the kinetic energy matrices
\[\tilde{T}_{1} =T\otimes I_{2}, \tag{17a}\] \[\tilde{T}_{2} =I_{2}\otimes T, \tag{17b}\]
\(\tilde{V}_{\mathrm{ext}}^{1}\) and \(\tilde{V}_{\mathrm{ext}}^{2}\) are the external potential matrices
\[\tilde{V}_{\mathrm{ext}}^{1} =V^{\mathrm{ext}}\otimes I_{2}, \tag{18a}\] \[\tilde{V}_{\mathrm{ext}}^{2} =I_{2}\otimes V^{\mathrm{ext}}, \tag{18b}\]
and \(\tilde{V}_{\mathrm{int}}\) are the interaction matrix whose matrix elements are
\[\left(\tilde{V}_{\text{int}}\right)_{i+j(M-1),k+l(M-1)}=\begin{cases}\frac{1}{2} \left[V^{\text{int}}\left(x_{i},y_{j}\right)+V^{\text{int}}\left(y_{j},x_{i} \right)\right]=V^{\text{int}}\left(x_{i},y_{j}\right)&\text{(for $i=k$, $j=l$)},\\ 0&\text{(otherwise)},\end{cases} \tag{19}\]
and \(\otimes\) is the Kronecker product. For instance, the matrix elements of Eqs. (17) reads
\[\left(\tilde{T}_{1}\right)_{i+j(M-1),k+l(M-1)}=\begin{cases}-2&\text{(for $i=k$, $j=l$)},\\ 1&\text{(for $i=k\pm 1$, $j=l$)},\\ 0&\text{(otherwise)},\end{cases} \tag{20a}\] \[\left(\tilde{T}_{2}\right)_{i+j(M-1),k+l(M-1)}=\begin{cases}-2&\text{(for $i =k$, $j=l$)},\\ 1&\text{(for $i=k$, $j=l\pm 1$)},\\ 0&\text{(otherwise)}.\end{cases} \tag{20b}\]
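In code, Eqs. (16)-(20) can be assembled compactly with sparse Kronecker products (our sketch, assuming the one-body matrices T and V_ext of Eq. (5) and the mesh size h are already available, e.g., from the snippet in Sec. II.2):

```python
import scipy.sparse as sp

I = sp.identity(T.shape[0])
T1 = sp.kron(T, I)        # Eq. (17a): kinetic stencil of the first particle
T2 = sp.kron(I, T)        # Eq. (17b): kinetic stencil of the second particle
V1 = sp.kron(V_ext, I)    # Eq. (18a)
V2 = sp.kron(I, V_ext)    # Eq. (18b)

# Eq. (16), up to the diagonal interaction matrix of Eq. (19):
H2 = -(T1 + T2) / (2 * h**2) + V1 + V2
```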
Table 1: Calculation summary of a one-body problem under the harmonic oscillator potential (columns: \(\omega\), # of units in the 1st/2nd layers, kinetic/potential/total energies, # of epochs, time per epoch in \(\mu\)s). Rows with “—” in the column “# of units for the 2nd layer” correspond to calculations performed with only one layer. (Table body lost in extraction.)
#### ii.1.2 Symmetrization and antisymmetrization
The discretized wave function \(\tilde{\psi}\) should be symmetrized or antisymmetrized. In general, for an arbitrary function \(f\left(x,y\right)\), \(f\left(x,y\right)+f\left(y,x\right)\) is a symmetric function and \(f\left(x,y\right)-f\left(y,x\right)\) is an antisymmetric function.
In order to perform (anti)symmetrization in TensorFlow, instead of the simple \(\tilde{\psi}\),
\[\tilde{\psi}_{\pm}=\left(\begin{array}{c}\psi_{11}\\ \psi_{12}\\ \vdots\\ \psi_{1(M-1)}\\ \psi_{21}\\ \psi_{22}\\ \vdots\\ \psi_{2(M-1)}\\ \vdots\\ \psi_{(M-1)(M-1)}\end{array}\right)\pm\left(\begin{array}{c}\psi_{11}\\ \psi_{21}\\ \vdots\\ \psi_{12}\\ \psi_{22}\\ \vdots\\ \psi_{(M-1)2}\\ \vdots\\ \psi_{1(M-1)}\\ \psi_{2(M-1)}\\ \vdots\\ \psi_{(M-1)(M-1)}\end{array}\right) \tag{22}\]
is assumed to be a trial wave function. In the Tensorflow code, instead of the original predicts and output_wf, \(\tilde{\psi}_{\pm}\) is used in the second (the calculation process of the loss function) and fifth (the calculation of the ground-state energy) steps in Sec. II.1.

Figure 2: Relative error of \(\left\langle H\right\rangle\) to the exact ground-state energy \(E_{\text{gs}}\) for the harmonic oscillator potential as functions of the number of epochs.

Figure 3: Wave function under the harmonic oscillator potential. The red thick line corresponds to the exact solution [Eq. (9a)], while thin lines correspond to results of the DNN calculation. Different thin lines correspond to different numbers of units. We observe that all the simulated results overlap with the exact solution.

Note that this process can be easily done by using the following commands:

1. predicts_transpose = tf.reshape(predicts, [m, m]),
2. predicts_transpose = tf.transpose(predicts_transpose),
3. predicts_transpose = tf.reshape(predicts_transpose, [m**2, 1]),
4. predicts = tf.add(predicts, predicts_transpose) for bosonic systems or predicts = tf.subtract(predicts, predicts_transpose) for fermionic systems,
5. the final predicts is used to evaluate the loss function.
Here, m corresponds to the number of meshes \(M\). Note that this method can be straightforwardly extended to multi-body systems.
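Collecting the commands above into a single function gives the following sketch (our illustration; sign = +1 for bosons and -1 for fermions):

```python
import tensorflow as tf

def symmetrize(predicts, m, sign=1.0):
    # predicts holds the raw output psi(x_j, y_k) flattened to [m**2, 1];
    # swapping the two mesh indices realizes psi(y_k, x_j), and the sum
    # or difference yields the (anti)symmetrized trial state of Eq. (22).
    grid = tf.reshape(predicts, [m, m])
    swapped = tf.transpose(grid)
    return tf.reshape(grid + sign * swapped, [m**2, 1])

# psi_boson   = symmetrize(predicts, m, sign=+1.0)
# psi_fermion = symmetrize(predicts, m, sign=-1.0)
```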
#### iii.2.3 Two-body systems
Two-body systems under the harmonic oscillator potential [Eq. (8)] are tested. If there is no interaction, \(V^{\mathrm{int}}\equiv 0\), the ground-state wave function \(\psi_{\mathrm{gs}}\) and energy \(E_{\mathrm{gs}}\) are known exactly. If one considers bosonic systems, they read
\[\psi_{\mathrm{gs}}\left(x,y\right) =\sqrt{\frac{\omega}{\pi}}\exp\left[-\frac{\omega\left(x^{2}+y^{2 }\right)}{2}\right], \tag{23a}\] \[E_{\mathrm{gs}} =\omega,\] (23b) and if one considers the fermionic systems, they read \[\psi_{\mathrm{gs}}\left(x,y\right) =\frac{\omega}{\sqrt{\pi}}\left(x-y\right)\exp\left[-\frac{ \omega\left(x^{2}+y^{2}\right)}{2}\right],\] (24a) \[E_{\mathrm{gs}} =2\omega. \tag{24b}\]
Figures 8 and 9, respectively, show wave functions for bosonic and fermionic systems obtained in this work. The total energies and the calculation time are shown in Table 3. For comparison, the exact wave functions [Eq. (23a) or (24a)] are also shown. Here, \(x_{\mathrm{max}}=y_{\mathrm{max}}=5\) and \(M_{x}=M_{y}=256\) are used for the spatial mesh and two layers each of which contains 32 units are used for the DNN. For the interaction, the Gaussian-type interaction
\[V^{\mathrm{int}}\left(x,y\right)=\lambda\exp\left(-\left|x-y\right|\right) \tag{25}\]
is used, where \(\lambda\) is the strength of the interaction. The DNN results with \(\lambda=0\) show a good agreement with the exact results, demonstrating that the DNN technique works well.
The behaviour of the wave functions for nonzero \(\lambda\) is qualitatively consistent with our expectation: if \(\lambda\) is negative, i.e., the interaction is attractive, the wave function tends to collapse, and if \(\lambda\) is positive, i.e., the interaction is repulsive, the wave function tends to be broad. The time cost per epoch is almost universal among all the calculations, while more epochs are required to reach convergence for fermionic systems than for bosonic systems. This may be because all the values are positive for the initial condition, while there are negative values in fermionic ground-state wave functions. More epochs are required for the repulsive interaction (\(\lambda>0\)) than for the attractive interaction (\(\lambda<0\)). This may be because the topology of the wave function is more complicated and extended in the repulsive case than in the attractive case.

Figure 4: Relative error of the DNN wave function to the exact one. Different thin lines correspond to different numbers of units. In the region of large \(\left|x\right|\), the deviation \(\delta\psi\left(x\right)\) diverges because the denominator of Eq. (10), \(\psi^{\mathrm{exact}}\left(x\right)\), approaches zero. Hence, the figures plot only \(\left|\delta\psi\left(x\right)\right|<10^{-1}\).

Figure 5: Wave function obtained by the DNN [Eq. (11a)] and the obtained function before the output layer [Eq. (11b)], where the normalization is ignored. Equations (11a) and (11b) in which the softplus function is replaced with the ReLU function are also plotted as \(\psi_{\mathrm{gs}}^{\mathrm{ReLU}}\) and \(a_{\mathrm{gs}}^{\mathrm{ReLU}}\), respectively.
Finally, we point out a strange behaviour of the obtained wave function of the two-body system with \(\omega=1\) and without the interaction. Here, for simplicity, the two-layer DNN in which each layer is composed of 4 units is used. In the case of two layers, the function obtained by
Table 2: Calculation summary of a one-body problem under the square wall potential (columns: \(V_{0}\), # of units in the 1st/2nd layers, kinetic/potential/total energies, # of epochs, time per epoch in \(\mu\)s). (Table body lost in extraction.)
the optimized weights of the DNN is
\[u_{\text{gs}}\left(x,y\right) =A\,\text{softplus}\left(\sum_{j=1}^{N_{\text{unit}}}w_{2j}u_{2j} \left(x,y\right)+b_{2}\right), \tag{26a}\] \[u_{2j}\left(x,y\right) =\text{softplus}\left(\sum_{k=1}^{N_{\text{unit}}}w_{1jk}u_{1k} \left(x,y\right)+b_{1j}\right),\] (26b) \[u_{1k}\left(x,y\right) =\text{softplus}\left(w_{0k0}x+w_{0k1}y+b_{0k}\right), \tag{26c}\]
where \(A\) is the normalization constant, \(N_{\text{unit}}\) is the number of units in each layer, \(w\) is a weight, and \(b\) is a bias. The left column of Fig. 10 shows \(u_{\text{gs}}\left(x,y\right)\). The upper and lower rows, respectively, correspond to the results obtained by minimizing the bosonic or fermionic energy expectation value. It is shown that the obtained function \(u_{\text{gs}}\), which is referred to as the raw _wave function_, is neither symmetric nor antisymmetric. After the symmetrization for bosonic systems \(\psi_{\text{boson}}\left(x,y\right)=\left[u_{\text{gs}}\left(x,y\right)+u_{\text{gs}}\left(y,x\right)\right]/A_{\text{boson}}\) or the antisymmetrization for fermionic systems \(\psi_{\text{fermion}}\left(x,y\right)=\left[u_{\text{gs}}\left(x,y\right)-u_{\text{gs}}\left(y,x\right)\right]/A_{\text{fermion}}\) is performed with the normalization constant \(A_{\text{boson}}\) or \(A_{\text{fermion}}\), \(\psi_{\text{boson}}\) or \(\psi_{\text{fermion}}\) can be regarded as the bosonic or fermionic ground-state wave function, respectively. The energy expectation values of the raw (\(u_{\text{gs}}\)), the symmetrized (\(\psi_{\text{boson}}\)), and the antisymmetrized (\(\psi_{\text{fermion}}\)) wave functions are shown in Table 4. A surprising fact is that even if the raw _wave function_ is obtained by minimizing the bosonic (fermionic) expectation value, the fermionic (bosonic) energy expectation value is close to the correct fermionic (bosonic) energy eigenvalue, and _vice versa_. The (anti)symmetrization corresponds to the projection of the raw _wave function_ onto the bosonic (fermionic) subspace, while the remaining part is not supposed to be optimized well. This unexpected coincidence may be due to the smallness of the number of parameters in the DNN architecture, and deserves further study.

Figure 6: Relative error of \(\langle H\rangle\) to the exact ground-state energy \(E_{\text{gs}}\) for the square wall potential as functions of the number of epochs.
#### iv.2.4 Three-body systems
Three-body systems under the harmonic oscillator potential [Eq. (8)] are tested. For simplicity, we consider a system without any interaction, \(V^{\text{int}}\equiv 0\). Then, the ground-state wave function \(\psi_{\text{gs}}\) and energy \(E_{\text{gs}}\) are known exactly as
\[\psi_{\text{gs}}\left(x,y,z\right)=\left(\frac{\omega}{\pi}\right)^{3/4}\exp\left[-\frac{\omega\left(x^{2}+y^{2}+z^{2}\right)}{2}\right], \tag{27a}\]
\[E_{\text{gs}}=\frac{3}{2}\omega \tag{27b}\]
for bosonic systems and
\[\psi_{\rm gs}\left(x,y,z\right)=\left(\frac{\omega}{\pi}\right)^{3/4}\sqrt{\frac{ \omega}{6}}\left[\left(x-y\right)\left(1-2\omega z^{2}\right)+\left(y-z\right) \left(1-2\omega x^{2}\right)+\left(z-x\right)\left(1-2\omega y^{2}\right) \right]\exp\left[-\frac{\omega\left(x^{2}+y^{2}+z^{2}\right)}{2}\right], \tag{28a}\] \[E_{\rm gs}=\frac{9}{2}\omega \tag{28b}\]
for fermionic systems.
Figures 11 and 12, respectively, show the wave functions for bosonic and fermionic systems obtained in this work. The total energies and the calculation times are shown in Table 5. For comparison, the exact wave functions [Eq. (27a) or (28a)] are also shown. Here, \(x_{\rm max}=y_{\rm max}=z_{\rm max}=5\) and \(M=64\) are used for the spatial mesh, and two layers, each of which contains 32 units, are used for the DNN. The interaction is not considered.
The DNN calculations reproduce the exact ground-
| | Raw | Symmetrized | Antisymmetrized |
| --- | --- | --- | --- |
| Boson | 1.13190 | 1.00012 | 2.05195 |
| Fermion | 1.13628 | 1.08235 | 2.00056 |

Table 4: Energy expectation values of the raw (\(u_{\rm gs}\) in Eq. (26a)), the symmetrized, and the antisymmetrized wave functions. The rows named “Boson” and “Fermion”, respectively, correspond to the results obtained by minimizing the bosonic or fermionic energy expectation values.
| Particles | \(\omega\) | \(\lambda\) | Energy | # of Epochs | Time per Epoch (ms) |
| --- | --- | --- | --- | --- | --- |
| Boson | 1.0 | -1.00 | -89.869381 | 8490 | 23.583 |
| Boson | 1.0 | -0.25 | -19.848949 | 3595 | 23.618 |
| Boson | 1.0 | +0.00 | +0.999927 | 27242 | 23.424 |
| Boson | 1.0 | +0.25 | +3.298725 | 20040 | 23.529 |
| Boson | 1.0 | +1.00 | +3.835173 | 21763 | 23.677 |
| Boson | 5.0 | -1.00 | -87.554311 | 10194 | 23.573 |
| Boson | 5.0 | -0.25 | -17.203647 | 10893 | 23.712 |
| Boson | 5.0 | +0.00 | +4.997829 | 24477 | 23.648 |
| Boson | 5.0 | +0.25 | +21.149827 | 20721 | 23.797 |
| Boson | 5.0 | +1.00 | +31.804917 | 21718 | 23.591 |
| Boson | 10.0 | -1.00 | -84.129658 | 18635 | 23.566 |
| Boson | 10.0 | -0.25 | -13.118213 | 24380 | 23.601 |
| Boson | 10.0 | +0.00 | +9.991009 | 27489 | 23.692 |
| Boson | 10.0 | +0.25 | +32.287424 | 23810 | 23.816 |
| Boson | 10.0 | +1.00 | +72.688350 | 19591 | 23.601 |
| Fermion | 1.0 | -1.00 | -71.409493 | 18794 | 23.928 |
| Fermion | 1.0 | -0.25 | -11.369632 | 17999 | 23.818 |
| Fermion | 1.0 | +0.00 | +1.999931 | 19215 | 23.843 |
| Fermion | 1.0 | +0.25 | +3.298786 | 25187 | 23.804 |
| Fermion | 1.0 | +1.00 | +3.839178 | 106163 | 24.136 |
| Fermion | 5.0 | -1.00 | -68.409494 | 18558 | 23.731 |
| Fermion | 5.0 | -0.25 | -7.207718 | 19618 | 23.787 |
| Fermion | 5.0 | +0.00 | +9.995902 | 75208 | 23.841 |
| Fermion | 5.0 | +0.25 | +21.385884 | 51691 | 23.607 |
| Fermion | 5.0 | +1.00 | +31.804915 | 27885 | 23.799 |
| Fermion | 10.0 | -1.00 | -63.026667 | 26603 | 23.894 |
| Fermion | 10.0 | -0.25 | +0.302312 | 21783 | 23.805 |
| Fermion | 10.0 | +0.00 | +19.975414 | 10135 | 23.741 |
| Fermion | 10.0 | +0.25 | +38.060907 | 79973 | 24.054 |
| Fermion | 10.0 | +1.00 | +73.245097 | 59289 | 23.972 |

Table 3: Calculation summary of a two-body problem under the harmonic oscillator potential.
state energies. The DNN wave functions are consistent with the exact solutions. The number of epochs for the three-body systems is comparable with that for the two-body systems, where the numbers of units and layers are identical in the two cases. In contrast, the time per epoch for the three-body systems is about four times that for the two-body systems. This is related to the number of spatial meshes: \(256\times 256=65536\) meshes are used for the two-body systems and \(64\times 64\times 64=262144\) meshes are used for the three-body systems; thus, the number of meshes for the three-body systems is four times larger than that for the two-body systems. Hence, it can be concluded that the time per epoch is almost proportional to the number of spatial meshes. This is reasonable since we use numerical methods for sparse matrices, in which the number of non-zero matrix elements is \(O\left(M^{Nd}\right)\).
Figure 8: Two-body wave function under the harmonic oscillator potential for bosonic systems. The exact wave function without the interaction is shown in the left-most column.
Figure 9: Same as Fig. 8 but for fermionic systems.
## III Excited-state calculation
In this section, based on the variational principle, a method to calculate low-lying excited states sequentially is explained. Assume that the wave functions of the ground state and \(n\) excited states, \(\ket{\psi_{0}}\), \(\ket{\psi_{1}}\), \(\dots\), \(\ket{\psi_{n}}\), have been derived, where \(\ket{\psi_{0}}=\ket{\psi_{\text{gs}}}\). We consider the problem of finding the \((n+1)\)-th excited state \(\ket{\psi_{n+1}}\), which satisfies the orthonormal condition
\[\left\langle\psi_{j}|\psi_{n+1}\right\rangle=\delta_{j,n+1}, \tag{29}\]
by using a trial wave function \(\ket{\psi}\). The \((n+1)\)-th wave function can be obtained by minimizing the expectation value
\[\left\langle H\right\rangle=\frac{\left\langle\psi|H|\psi\right\rangle}{ \left\langle\psi|\psi\right\rangle}, \tag{30}\]
where \(\ket{\psi}\) is assumed to be orthogonal to \(\ket{\psi_{j}}\) (\(j=0\), \(1\), \(\dots\), \(n\)). This can be implemented in Tensorflow by assuming that
\[\ket{\psi}-\sum_{j=0}^{n}\left\langle\psi_{j}|\psi\right\rangle\ket{\psi_{j}} \tag{31}\]
is a trial wave function, instead of the simple \(\ket{\psi}\). For the one-body problem, \(x_{\text{max}}=5\) and \(M=1024\) are used for the spatial mesh, and the single-layer DNN with eight units is adopted.
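The projection of Eq. (31) can be sketched as follows (our illustration, assuming each element of lower_states is an already-converged, normalized wave-function vector stored as a tf constant); the projected and renormalized vector is then fed into the same energy loss as in the ground-state calculation:

```python
import tensorflow as tf

def project_out(psi, lower_states):
    # Remove the components along the already-obtained states [Eq. (31)].
    for phi in lower_states:
        psi = psi - tf.reduce_sum(phi * psi) * phi   # subtract <phi|psi> phi
    return psi / tf.norm(psi)
```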
| Particles | Energy | # of Epochs | Time per Epoch (ms) |
| --- | --- | --- | --- |
| Boson | +1.497880 | 20183 | 101.216 |
| Fermion | +4.486830 | 22770 | 98.356 |

Table 5: Calculation summary of a three-body problem under the harmonic oscillator potential. Calculation is performed with \(\omega=1.0\).
Figure 11: (Left) Three-body wave function under the harmonic oscillator potential without inter-particle interaction for bosonic systems. (Right) Slice of the three-body wave function at the plane \(x+y+z=0\).
Figure 10: DNN wave functions of the raw (\(u_{\text{gs}}\) in Eq. (26a)), the symmetrized, and the antisymmetrized wave functions. The rows named “Boson” and “Fermion”, respectively, correspond to the results obtained by minimizing the bosonic or fermionic energy expectation values.
### Harmonic oscillator potential
One-body one-dimensional harmonic oscillators are taken as examples. The exact wave functions for several low-lying excited states are [28]
\[\psi_{0}\left(x\right) =\left(\frac{\omega}{\pi}\right)^{1/4}\exp\left(-\frac{\omega x^{2 }}{2}\right), \tag{32a}\] \[\psi_{1}\left(x\right) =\left(\frac{\omega}{\pi}\right)^{1/4}\sqrt{2\omega}x\exp\left(- \frac{\omega x^{2}}{2}\right),\] (32b) \[\psi_{2}\left(x\right) =\left(\frac{\omega}{\pi}\right)^{1/4}\frac{2\omega x^{2}-1}{ \sqrt{2}}\exp\left(-\frac{\omega x^{2}}{2}\right),\] (32c) \[\psi_{3}\left(x\right) =\left(\frac{\omega}{\pi}\right)^{1/4}\sqrt{\frac{\omega}{3}} \left(2\omega x^{2}-3\right)x\exp\left(-\frac{\omega x^{2}}{2}\right),\] (32d) \[\psi_{4}\left(x\right) =\left(\frac{\omega}{\pi}\right)^{1/4}\frac{4\omega^{2}x^{4}-12 \omega x^{2}+3}{2\sqrt{6}}\exp\left(-\frac{\omega x^{2}}{2}\right), \tag{32e}\]
where \(\psi_{n}\) is the \(n\)-th excited state, and the energies are
\[E_{n}=\left(n+\frac{1}{2}\right)\omega. \tag{33}\]
Figure 13 shows the wave functions of the ground state and the first, second, third, and fourth excited states. Table 6 shows the summary of the calculations. Basically, not only the ground-state but also the low-lying excited-state wave functions and energies are successfully calculated.
One can find that the DNN solutions are consistent with the exact solutions. Thus, it can be concluded that the method to calculate low-lying excited states proposed here works well.
The number of epochs is almost universal for all the states calculated here. In contrast, the time per epoch for a higher excited state is slightly longer since the calculation for the orthogonality condition [Eq. (31)] needs to be performed, while it takes just a few \(\mu\)s.
Let us explain why our simple DNN can describe even the excited states correctly. For simplicity, the single-layer DNN with four units is used. The obtained functions for the ground state (\(u_{\text{gs}}\)), the first (\(u_{\text{1st}}\)), and the second (\(u_{\text{2nd}}\)) excited states are shown in Fig. 14. The obtained function for the \(n\)-th excited state is

\[u_{n\text{-th}}\left(x\right)=\sum_{j=0}^{n}a_{j}\psi_{j\text{-th}}\left(x\right) \tag{34}\]

with \(\sum_{j=0}^{n}\left|a_{j}\right|^{2}=1\), where \(\psi_{j\text{-th}}\) is the \(j\)-th excited-state wave function. In the case of the first and second excited states, the obtained functions read
\[u_{\text{1st}}\left(x\right) =0.981758\psi_{\text{0th}}\left(x\right)+0.190136\psi_{\text{1st }}\left(x\right), \tag{35a}\] \[u_{\text{2nd}}\left(x\right) =0.969707\psi_{\text{0th}}\left(x\right)-0.0445505\psi_{\text{1st }}\left(x\right)+0.240173\psi_{\text{2nd}}\left(x\right) \tag{35b}\]
### Double-well potential
In order to see the effect of degeneracy, we also test the double-well potential
\[V^{\text{ext}}\left(x\right)=\left(x^{2}-\alpha^{2}\right)^{2}. \tag{36}\]
If the central barrier is low enough, i.e., \(\alpha\) is small enough, the states are not degenerate. In contrast, if the central barrier is high, i.e., \(\alpha\) is large, low-lying states below the central barrier are twofold degenerate: one state \(\psi_{\text{L}}\) is localized in the left (\(x<0\)) region, while the other state \(\psi_{\text{R}}\) is localized in the right (\(x>0\)) region, and \(\psi_{\text{L}}\left(x\right)=\psi_{\text{R}}\left(-x\right)\) holds. Using linear combinations of these two degenerate states, each degenerate pair can equivalently be represented by the following two states: \(\psi_{\pm}\left(x\right)=\left[\psi_{\text{L}}\left(x\right)\pm\psi_{\text{R}}\left(x\right)\right]/\sqrt{2}\), where \(\psi_{+}\) (\(\psi_{-}\)) is a positive- (negative-) parity state. According to the exact diagonalization, \(\alpha=1.0\) and \(1.25\) give non-degenerate ground and low-lying excited states, and thus \(\psi_{j}\) is simply the \(j\)-th excited state, while \(\alpha=2.0\) and \(3.0\) give ground and low-lying excited states that are almost twofold degenerate:
| State | Energy | Epochs | Time per Epoch (\(\mu\)s) |
| --- | --- | --- | --- |
| 0th | +0.499998 | 23419 | 516.238 |
| 1st | +1.499991 | 25646 | 519.026 |
| 2nd | +2.499986 | 23157 | 527.849 |
| 3rd | +3.500193 | 37880 | 534.115 |
| 4th | +4.500201 | 19101 | 542.224 |

Table 6: Calculation summary of excited states for a one-body problem under the harmonic oscillator potential. Calculation is performed with \(\omega=1.0\).
\(\psi_{0}\) and \(\psi_{1}\) correspond to the ground states, and \(\psi_{2}\) and \(\psi_{3}\) correspond to the first excited states.
Figure 15 shows the wave functions of the ground state and the first, second, third, and fourth excited states. Table 7 shows the summary of the calculations. Basically, not only the ground-state but also the low-lying excited-state wave functions and energies are successfully calculated, even for the degenerate states. It is not obvious a priori which representation a calculation gives: the left-right basis (\(\psi_{\mathrm{L}}\) and \(\psi_{\mathrm{R}}\)), the parity basis (\(\psi_{+}\) and \(\psi_{-}\)), or even a general linear combination. The DNN calculations for both \(\alpha=2\) and \(3\) obtained wave functions in the left-right basis, while this may depend on the initial condition. Note that the exact diagonalization for \(\alpha=2\) obtained wave functions in the parity basis; in Fig. 15, wave functions in the left-right basis, obtained by taking linear combinations, are plotted to ease the comparison with the DNN results.
### Two-body systems
Two-body one-dimensional harmonic oscillators are taken as the last examples. Here, \(x_{\mathrm{max}}=y_{\mathrm{max}}=5\) and \(M_{x}=M_{y}=256\) are used for the spatial mesh, and two layers, each of which contains \(32\) units, are used for the DNN. The inter-particle interaction is not considered. The exact wave functions for several low-lying excited states can be written as (anti)symmetrized products of the one-body states of Eqs. (32):
\[\Psi_{0}\left(x,y\right) =\psi_{0}\left(x\right)\psi_{0}\left(y\right), \tag{37a}\] \[\Psi_{1}\left(x,y\right) =\frac{1}{\sqrt{2}}\left[\psi_{0}\left(x\right)\psi_{1}\left(y \right)+\psi_{1}\left(x\right)\psi_{0}\left(y\right)\right],\] (37b) \[\Psi_{2}\left(x,y\right) =\frac{1}{\sqrt{2}}\left[\psi_{0}\left(x\right)\psi_{2}\left(y \right)+\psi_{2}\left(x\right)\psi_{0}\left(y\right)\right],\] (37c) \[\Psi_{3}\left(x,y\right) =\psi_{1}\left(x\right)\psi_{1}\left(y\right), \tag{37d}\]
where the energy eigenvalues of \(\Psi_{0}\), \(\Psi_{1}\), \(\Psi_{2}\), and \(\Psi_{3}\) are equal to, respectively, \(1\), \(2\), \(3\), and \(3\) for bosonic systems, and
\[\Psi_{0}\left(x,y\right) =\frac{1}{\sqrt{2}}\left|\begin{matrix}\psi_{0}\left(x\right)& \psi_{1}\left(x\right)\\ \psi_{0}\left(y\right)&\psi_{1}\left(y\right)\end{matrix}\right|, \tag{38a}\] \[\Psi_{1}\left(x,y\right) =\frac{1}{\sqrt{2}}\left|\begin{matrix}\psi_{0}\left(x\right)& \psi_{2}\left(x\right)\\ \psi_{0}\left(y\right)&\psi_{2}\left(y\right)\end{matrix}\right|,\] (38b) \[\Psi_{2}\left(x,y\right) =\frac{1}{\sqrt{2}}\left|\begin{matrix}\psi_{0}\left(x\right)& \psi_{3}\left(x\right)\\ \psi_{0}\left(y\right)&\psi_{3}\left(y\right)\end{matrix}\right|,\] (38c) \[\Psi_{3}\left(x,y\right) =\frac{1}{\sqrt{2}}\left|\begin{matrix}\psi_{1}\left(x\right)& \psi_{2}\left(x\right)\\ \psi_{1}\left(y\right)&\psi_{2}\left(y\right)\end{matrix}\right|, \tag{38d}\]
where the energy eigenvalues of \(\Psi_{0}\), \(\Psi_{1}\), \(\Psi_{2}\), and \(\Psi_{3}\) are equal to, respectively, \(2\), \(3\), \(4\), and \(4\) for fermionic systems. Note that the second excited states, \(\Psi_{2}\) and \(\Psi_{3}\), are twofold degenerate in both the bosonic and fermionic systems.
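As a concrete illustration of the (anti)symmetrization, the states of Eqs. (37) and (38) can be assembled directly from one-body harmonic-oscillator eigenfunctions; the following sketch (with \(\hbar=m=\omega=1\), consistent with \(\omega=1.0\) used above) verifies the exchange symmetry numerically.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def psi(j, x):
    """One-body harmonic-oscillator eigenfunction psi_j (hbar = m = omega = 1)."""
    coef = np.zeros(j + 1); coef[j] = 1.0
    norm = np.pi ** -0.25 / math.sqrt(2.0 ** j * math.factorial(j))
    return norm * hermval(x, coef) * np.exp(-x ** 2 / 2)

x = np.linspace(-5, 5, 256)
X, Y = np.meshgrid(x, x, indexing="ij")

# Bosonic first excited state, Eq. (37b): symmetrized product of psi_0 and psi_1.
Psi_b = (psi(0, X) * psi(1, Y) + psi(1, X) * psi(0, Y)) / np.sqrt(2)
# Fermionic ground state, Eq. (38a): 2x2 Slater determinant of psi_0 and psi_1.
Psi_f = (psi(0, X) * psi(1, Y) - psi(1, X) * psi(0, Y)) / np.sqrt(2)

# Exchange symmetry: Psi(y, x) = +Psi(x, y) for bosons, -Psi(x, y) for fermions.
assert np.allclose(Psi_b, Psi_b.T) and np.allclose(Psi_f, -Psi_f.T)
```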
Figures 16 and 17, respectively, show the wave functions of the ground state and first and second excited
Figure 14: Optimized function \(u\) obtained by the DNN for the ground-state (0th), the first and the second excited states.
Figure 13: Wave functions of the ground and low-lying excited states under the harmonic oscillator potential. The top panel shows the exact wave functions and the bottom one shows the DNN wave functions. In order to make consistency for the phase factor, \(-\psi_{3}\left(x\right)\) is plotted for the exact wave function of the third excited state.
states. Table 8 shows the summary of the calculations. Note that \(\left[\Psi_{2}\left(x,y\right)\pm\Psi_{3}\left(x,y\right)\right]/\sqrt{2}\) are plotted as the second excited states of the exact solutions. Not only the ground-state but also the low-lying excited-state wave functions and energies are successfully calculated, even for two-body systems. In addition, as in the one-body problems, the numerical cost for a low-lying excited state is almost the same as that for the ground state. Thus, this method of calculating low-lying excited states works even for multi-body systems at a reasonable numerical cost.
## IV Summary
In this paper, we proposed a method to calculate the wave functions and energies of not only the ground state but also low-lying excited states of quantum multi-body systems using a deep neural network and an unsupervised machine learning technique. In order to treat many-particle systems of identical particles, a simple method of symmetrization for bosonic systems
Figure 15: Wave functions of the ground and low-lying excited states under the double-well potential. The top panels show the exact wave functions and the bottom ones show the DNN wave functions.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline \(\alpha\) & \(j\) & Energy & Epochs & Time per Epoch (\(\mu\)s) \\ \cline{3-5} & & Exact diagonalization & Deep neural network & \\ \hline
1.0 & 0 & +0.869573 & +0.869706 & 29108 & 513.203 \\
1.0 & 1 & +1.661393 & +1.661685 & 35596 & 523.556 \\
1.0 & 2 & +3.543667 & +3.544327 & 55037 & 524.351 \\
1.0 & 3 & +5.665058 & +5.666010 & 20083 & 536.858 \\ \hline
1.25 & 0 & +1.417858 & +1.417886 & 30284 & 512.083 \\
1.25 & 1 & +1.725904 & +1.726013 & 36992 & 523.092 \\
1.25 & 2 & +3.717933 & +3.717949 & 30943 & 528.675 \\
1.25 & 3 & +5.424725 & +5.424850 & 24474 & 534.907 \\ \hline
2.0 & 0 & +2.762317 & +2.762333 & 23663 & 519.261 \\
2.0 & 1 & +2.762333 & +2.762343 & 24764 & 529.129 \\
2.0 & 2 & +7.988520 & +7.989654 & 19491 & 532.193 \\
2.0 & 3 & +7.990618 & +7.989601 & 18662 & 534.543 \\ \hline
3.0 & 0 & +4.214229 & +4.214253 & 20526 & 515.042 \\
3.0 & 1 & +4.214229 & +4.214284 & 20290 & 521.291 \\
3.0 & 2 & +12.526202 & +12.526214 & 51131 & 519.062 \\
3.0 & 3 & +12.526202 & +12.526278 & 12488 & 534.752 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Calculation summary of a one-body problem under the double-well potential.
and antisymmetrization for fermionic systems was also proposed.
The obtained wave functions and energies are consistent with the exact solutions. We found that the neural network need not be large for one-body systems, which also enables us to analyze the internal structure of the deep neural network used. For instance, a single hidden layer with four units is enough to describe the ground-state wave function of the harmonic oscillator. This can be understood in terms of a piecewise approximation with linear functions. We confirmed that our simple (anti)symmetrization method works for multi-body systems. The numerical cost per epoch for fermionic systems is almost the same as that for bosonic systems. The numerical cost is almost proportional to the number of spatial meshes since a sparse matrix representation is used. In addition, the numerical cost for a low-lying excited state is almost the same as that for the ground state.
The deep neural network has been applied to solve many-fermion systems where the ground-state wave function is assumed to be a Jastrow wave function [33; 34; 42]. The method proposed in this paper can be an alternative for solving many-fermion systems, since the ansatz for the ground-state wave function is less restrictive, and the symmetrization and antisymmetrization are treated on an equal footing.
Since the numerical cost is modest and our (anti)symmetrization is quite simple, this method can serve as an alternative for calculating wave functions and
Figure 16: Two-body wave function for the ground and low-lying excited states under the harmonic oscillator potential for bosonic systems without the interaction. The exact wave function is shown in the top row.
Figure 17: Same as Fig. 16 but for fermionic systems.
energies of the ground and low-lying excited states, for instance, for the electronic structure of molecules and solids, for the nuclear structure of atomic nuclei including a tetra neutron [53; 54; 55], and for cold atoms [56].
At this moment, we only considered one-dimensional systems, while most problems of interest are three-dimensional. In addition, spin components, or even isospin components for nuclear systems, are often important. The restricted Boltzmann machine has been applied to obtain the ground- and low-lying excited-state wave functions [57; 32; 58]. Since the inputs of spin systems are discrete variables, the Boltzmann machine is suitable there. Such pioneering works may help to incorporate the spin (or isospin) components in this work. Such extensions are possible within our framework and remain for future work.
As far as we know, all the calculations using deep neural networks for wave functions are static, while many phenomena are dynamical, including the interaction between matter and laser fields [59; 60], ion-cluster collisions [61], heavy-ion collisions [62], nuclear fission [63; 64], and fusion [65]. In order to describe such phenomena, the time evolution from a state obtained by the deep neural network is also interesting, although it is left for a future study.
Finally, let us comment on the interpretation of the wave functions obtained in our work. As we have shown, thanks to the simplicity of the deep neural network, we could easily interpret the structure of the network. We found that replacing the softmax function with the ReLU activation provides a piecewise-linear function that approximates the ground-state wave function. Since any wave function, including those of excited states, is naturally continuous and can be approximated by a piecewise-linear function, we intuitively conclude that the neural network representation can work for any physical quantum mechanical system in any dimension. The physical meaning of the piecewise-linear functions is as follows. First of all, linear functions are solutions of the free Schrodinger equation with no potential term, so it is natural to start with linear functions in physical systems. The inclusion of the potential term in the Hamiltonian then causes the curvature of the wave function, which is determined by the interplay between the Laplacian and the potential term. The kink structure of the wave function is thus dictated by the Hamiltonian. The kinks correspond to the ReLU activations, so in effect the nonlinearity in the Hamiltonian corresponds to the neural network structure. This reminds us of the work [66], in which the deep layers of the deep Boltzmann machine representing the ground-state wave functions of spin systems were interpreted as a Euclidean Hamiltonian evolution, or the works [67; 68; 69], in which the deep layers of the sparse neural network used for the AdS/CFT correspondence were interpreted as a bulk curved geometry. Further interplay between the sparsity of interpretable neural networks and the Hamiltonians of physical systems remains to be discovered.
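To make this piecewise-linear picture concrete, the following toy sketch fits a single-hidden-layer ReLU network with four units to the harmonic-oscillator ground state by least squares on the output layer; the hidden weights and kink positions are illustrative choices, not the trained values of this work.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

x = np.linspace(-5, 5, 256)
target = np.pi ** -0.25 * np.exp(-x ** 2 / 2)        # HO ground state (hbar = m = omega = 1)

# Four ReLU units -> a piecewise-linear function with kinks at x = -2, 0, 2.
w1 = np.array([1.0, 1.0, -1.0, -1.0])
b1 = np.array([0.0, 2.0, 0.0, 2.0])
h = relu(np.outer(x, w1) + b1)                       # hidden activations, shape (256, 4)

# Fit the linear output layer (plus bias) by least squares.
A = np.hstack([h, np.ones((x.size, 1))])
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
print("max abs error:", np.abs(A @ coef - target).max())
```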
###### Acknowledgements.
The authors acknowledge for the fruitful discussion with Haozhao Liang, Masaaki Kimura, and Hiroyuki Tajima. T. N. acknowledges the RIKEN Special Postdoctoral Researcher Program, the Science and Technology Hub Collaborative Research Program from RIKEN Cluster for Science, Technology and Innovation Hub (RC-STI), and the JSPS Grant-in-Aid for Research Activity Start-up under Grant No. JP22K20372. H. N. acknowledges the JSPS Grant-in-Aid for Scientific Research (C) under Grant No. JP19K03488. The work of K. H. was supported in part by JSPS KAKENHI Grant No. JP22H01217, JP22H05111 and JP22H05115.
|
2302.03830 | TetCNN: Convolutional Neural Networks on Tetrahedral Meshes | Convolutional neural networks (CNN) have been broadly studied on images,
videos, graphs, and triangular meshes. However, it has seldom been studied on
tetrahedral meshes. Given the merits of using volumetric meshes in applications
like brain image analysis, we introduce a novel interpretable graph CNN
framework for the tetrahedral mesh structure. Inspired by ChebyNet, our model
exploits the volumetric Laplace-Beltrami Operator (LBO) to define filters over
commonly used graph Laplacian which lacks the Riemannian metric information of
3D manifolds. For pooling adaptation, we introduce new objective functions for
localized minimum cuts in the Graclus algorithm based on the LBO. We employ a
piece-wise constant approximation scheme that uses the clustering assignment
matrix to estimate the LBO on sampled meshes after each pooling. Finally,
adapting the Gradient-weighted Class Activation Mapping algorithm for
tetrahedral meshes, we use the obtained heatmaps to visualize discovered
regions-of-interest as biomarkers. We demonstrate the effectiveness of our
model on cortical tetrahedral meshes from patients with Alzheimer's disease, as
there is scientific evidence showing the correlation of cortical thickness to
neurodegenerative disease progression. Our results show the superiority of our
LBO-based convolution layer and adapted pooling over the conventionally used
unitary cortical thickness, graph Laplacian, and point cloud representation. | Mohammad Farazi, Zhangsihao Yang, Wenhui Zhu, Peijie Qiu, Yalin Wang | 2023-02-08T01:52:48Z | http://arxiv.org/abs/2302.03830v2 | # TetCNN: Convolutional Neural Networks on Tetrahedral Meshes
###### Abstract
Convolutional neural networks (CNN) have been broadly studied on images, videos, graphs, and triangular meshes. However, it has seldom been studied on tetrahedral meshes. Given the merits of using volumetric meshes in applications like brain image analysis, we introduce a novel interpretable graph CNN framework for the tetrahedral mesh structure. Inspired by ChebyNet, our model exploits the volumetric Laplace-Beltrami Operator (LBO) to define filters over commonly used graph Laplacian which lacks the Riemannian metric information of 3D manifolds. For pooling adaptation, we introduce new objective functions for localized minimum cuts in the Graclus algorithm based on the LBO. We employ a piece-wise constant approximation scheme that uses the clustering assignment matrix to estimate the LBO on sampled meshes after each pooling. Finally, adapting the Gradient-weighted Class Activation Mapping algorithm for tetrahedral meshes, we use the obtained heatmaps to visualize discovered regions-of-interest as biomarkers. We demonstrate the effectiveness of our model on cortical tetrahedral meshes from patients with Alzheimer's disease, as there is scientific evidence showing the correlation of cortical thickness to neurodegenerative disease progression. Our results show the superiority of our LBO-based convolution layer and adapted pooling over the conventionally used unitary cortical thickness, graph Laplacian, and point cloud representation.
Keywords:Magnetic resonance imaging Convolutional neural networks Tetrahedral meshes Laplace-Beltrami operator.
## 1 Introduction
Since the emergence of geometric deep learning research, many researchers have sought to develop learning methods on non-Euclidean domains like point clouds, surface meshes, and graphs [3]. In brain magnetic resonance imaging (MRI) analysis, geometric deep learning has been widely employed for applications in brain network analysis, parcellation of brain regions, and brain cortical surface analysis[9, 29, 2, 5]. In a benchmark study [9], authors addressed the common limitations of widely used graph neural networks (GNNs) on cortical surface meshes.
While the majority of these studies use voxel representations or surface meshes, limitations such as finite grid resolution prevent a precise characterization of complex curved surfaces [26]. Cortical thickness is a remarkable AD imaging biomarker; therefore, learning-based methods that exploit volumetric meshes rather than surface meshes are advantageous, since the thickness is inherently embedded in the volume [4]. Using a volumetric mesh representation also potentially alleviates the over-squashing of Message Passing Neural Networks (MPNN) by letting long-range nodes interact through interior nodes. Thus, developing efficient volumetric deep learning methods to analyze grey matter morphometry may provide a means to analyze the totality of available brain shape information and open up new opportunities to study brain development and intervention outcomes.
While a manifold can be represented as a graph in order to employ graph convolutional networks, the majority of GNN models are not suitable for volumetric mesh data. First, the Riemannian metric is absent in a uniform graph representation. Second, and foremost, these methods are mostly not scalable to very large inputs such as tetrahedral meshes with millions of edges and vertices. Although a few methods are tailored specifically to 3D surface meshes, like MeshCNN (Convolutional Neural Networks) [14], they are designed for triangular meshes and do not scale to meshes with a large number of vertices. Consequently, to obtain a method that is both scalable to tetrahedral meshes and computationally inexpensive, a framework like ChebyNet [6], modified to use the volumetric LBO instead of the graph Laplacian, is an appropriate candidate to adapt to tetrahedral meshes.
Although deep learning on meshes has been studied in recent years, few studies have employed explainable methods for qualitative assessment. Generally, several methods have been proposed in recent years to better explain geometric deep learning models. Gradient-weighted Class Activation Map (Grad-CAM) on graphs [22] is one of the first such methods, using the gradient of the target task flowing back to the final convolution layer to create a localization map that visualizes the nodes important for the prediction. This technique is commonplace in CNN and GNN models; however, it is rarely generalized to surface mesh data [1]. Specifically, Grad-CAM has never been investigated on volumetric meshes, and generalizing such an explainable technique is worthwhile for the medical image analysis community to better interpret deep learning models.
Motivated by the prior work [6, 15], here we propose to develop Tetrahedral Mesh CNN (TetCNN) to address the issues mentioned above. Using the tetrahedral Laplace-Beltrami operator (LBO) over graph Laplacian, we use the Riemannian metric in tetrahedral meshes to capture intrinsic geometric features. Fig. 1 demonstrates that the LBO successfully characterizes the difference between two mesh structures while the graph Laplacian fails. Additionally, we propose novel designs on the pooling layers and adopt the polynomial approximation [13] for computational efficiency. The main contributions of this paper, thus, are summarized as follows: **(1).** TetCNN is the first of its kind and an exclusive geometric
deep-learning model on tetrahedral meshes. **(2).** We use volumetric LBO to replace graph Laplacian adopted in ChebyNet [6]. **(3).** We re-define the Graclus algorithm [7] used in [6, 1] by adapting a localized minimum-cut objective function using the cotangent and mass matrix. **(4).** We approximate the LBO on down-sampled mesh with the piece-wise linear approximation function. This avoids the re-computation of Laplacian in deeper layers. **(5).** We demonstrate the generalization of Grad-CAM to the tetrahedral mesh may be used for biomarker identification. Our extensive experiments demonstrate the effectiveness of our proposed TCNN framework for AD research.
## 2 Methods
In our LBO-based TetCNN framework, first, we pre-compute the volumetric LBO for each tetrahedral mesh. Secondly, together with the LBO, we feed into the network a set of input features for each vertex, like the 3D coordinates of each vertex. Having built a new graph convolution layer based on the LBO, we need to down-sample the mesh with an efficient down-sampling and pooling layer to learn hierarchical feature representation for the large-sized input data. In Fig. 2, we illustrate the pipeline for the binary classification task by defining specific components of our deep learning model.
Figure 1: Illustration of the comparison between LBO- and graph Laplacian-based spectral filters represented by \(k^{th}\)-order polynomials of a mesh in the 1-ring neighborhood of a given vertex (\(i\) in \(M_{1}\), \(i^{\prime}\) in \(M_{2}\)). Based on [13], spectral filters given by \(k^{th}\)-order polynomials of the Laplacian are exactly \(k\)-localized; therefore, we use \((L_{m}^{i})_{i,j}\) to denote the 1-localized Laplacian and \((A_{m}^{i})_{i,j}\) the 1-localized adjacency matrix around vertices \(i\) and \(i^{\prime}\) in this example. As depicted, the two meshes have similar corresponding edge lengths within the 1-ring of vertices \(i\) and \(i^{\prime}\). Thus, the **1-localized** graph Laplacians of both meshes are similar, while their **1-localized** LBOs differ due to differences in the cotangent matrix weights. A surface mesh is used for simplified intuition. \({}^{*}\) is based on Lemma 5.2 in [13].
### Tetrahedral Laplace Beltrami Operator (LBO)
Let \(T\) represent the tetrahedral mesh with a set of vertices \(\{v_{i}\}_{i=1}^{n}\) where \(n\) denotes the total number of vertices, and \(\Delta_{tet}\) be the volumetric LBO on \(T\), which is a linear differential operator. For a Riemannian manifold, given \(f\in C^{2}\), a real-valued function, the eigen-system of Laplacian is \(\Delta_{tet}f=-\lambda f\). The solution to this eigen-system problem can be approximated by a piece-wise linear function \(f\) over the tetrahedral mesh \(T\)[26]. As proposed in [26], the lumped discrete LBO on \(T\) is defined as follows:
\[\Delta f(v_{i})=\frac{1}{d_{i}}\sum_{j\in N(i)}k_{i,j}(f(v_{i})-f(v_{j})) \tag{1}\]
where \(N(i)\) includes the adjacent vertices of vertex \(v_{i}\), \(d_{i}\) is total tetrahedral volume of all adjacent tetrahedra to vertex \(v_{i}\), and \(k_{i,j}\) is the string constant. Now, we define the stiffness matrix as \(A=W-K\) in which \(W=diag(w_{1},w_{2},...,w_{n})\) is the diagonal matrix comprised of weights \(w_{i}=\sum_{j\in N(i)}k_{i,j}\). For \(A_{ij}\) we have:
\[A_{i,j}=\begin{cases}k_{i,j}=\frac{1}{12}\sum_{m=1}^{k}l_{m}^{(i,j)}cot(\theta _{m}^{(i,j)}),&\text{if }(i,j)\in E.\\ 0,&\text{if }(i,j)\notin E.\\ -\sum_{q\subseteq N(i)}k_{i,q}=-\sum_{q\subseteq N(i)}\frac{1}{12}\sum_{m=1}^ {k}l_{m}^{(i,q)}cot(\theta_{m}^{(i,q)}),&\text{if }i=j,\end{cases} \tag{2}\]
where \(l_{m}^{(i,j)}\) is the length of the opposite edge to \((v_{i},v_{j})\) in tetrahedron \(m\) sharing \((v_{i},v_{j})\), \(N(i)\) is the set of adjacent vertices to \((v_{i})\), \(E\) is the set of all edges in \(T\), and finally \(\theta_{m}^{i,j}\) is the dihedral angle of \((v_{i},v_{j})\) in tetrahedron \(m\). Now, we
Figure 2: TetCNN architecture for the classification task. Pre-computed LBO and \(xyz\) features are fed to the network with 5 layers. Each layer includes a down-sampling of size 1/4 and a pooling layer afterward except for _“conv5”_, which consists of a global average pooling (GAP). Fully connected (FC) layers and a Sigmoid activation function are used for the binary classification at the end. Grad-CAM is adopted to visualize important biomarkers.
define the lumped discrete tetrahedral LBO \(L_{tet}\) given \(A\) and the volume mass matrix \(D\)[26]:
\[L_{tet}=D^{-1}A, \tag{3}\]
in which \(D=diag(d_{1},d_{2},...,d_{n})\).
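For concreteness, a direct (unoptimized) assembly of the lumped operator from Eqs. (2)-(3) may be sketched as follows; the dihedral angle is taken along the opposite edge, which is one reading of the convention in Eq. (2), and degenerate tetrahedra and isolated vertices are assumed absent.

```python
import numpy as np
from itertools import combinations
from scipy.sparse import coo_matrix, diags

def tet_lbo(verts, tets):
    """Lumped tetrahedral LBO L = D^{-1} A of Eqs. (1)-(3).
    verts: (n, 3) float array of vertex positions; tets: (m, 4) int array."""
    n = len(verts)
    rows, cols, vals = [], [], []
    d = np.zeros(n)                                  # d_i: total adjacent tet volume
    for tet in tets:
        p = verts[tet]
        d[tet] += abs(np.linalg.det(p[1:] - p[0])) / 6.0
        for a, b in combinations(range(4), 2):       # the six edges of the tet
            c, e = [v for v in range(4) if v not in (a, b)]
            l_op = np.linalg.norm(p[c] - p[e])       # length of the opposite edge
            # dihedral angle along the opposite edge (c, e)
            n1 = np.cross(p[e] - p[c], p[a] - p[c])
            n2 = np.cross(p[e] - p[c], p[b] - p[c])
            cos_t = n1 @ n2 / (np.linalg.norm(n1) * np.linalg.norm(n2))
            k = l_op / np.tan(np.arccos(np.clip(cos_t, -1.0, 1.0))) / 12.0
            i, j = tet[a], tet[b]
            rows += [i, j, i, j]; cols += [j, i, i, j]
            vals += [k, k, -k, -k]                   # A_ij = k_ij, A_ii = -sum_q k_iq
    A = coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
    return diags(1.0 / d) @ A
```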
### Spectral Filtering of Mesh Signals with Chebyshev Polynomial Approximation
We define the input signal on the mesh as \(x_{in}\in R^{N}\) and the output of the convolved signal with filter \(g\) as \(x_{out}\in R^{M}\). We denote the convolution operator on tetrahedral mesh \(T\) with \(*_{T}\). Following the duality property of convolution in the time domain, and having the eigenvalue and eigen-functions of tetrahedral LBO at hand, we define the convolution as:
\[x_{out}=g*_{T}x_{in}=\Phi((\Phi^{T}g)\odot(\Phi^{T}x_{in}))=\Phi f(\Lambda) \Phi^{T}x_{in}, \tag{4}\]
in which \(\odot\) is the element-wise product, \(f(\Lambda)\) is general function based on the eigen-value matrix \(\Lambda\), and \(\Phi\) is the eigen-vector matrix. In [6], authors approximated the function \(f\) with the linear combination of k-order power of \(\Lambda\) matrix as polynomial filters:
\[f(\Lambda)=\sum_{m=0}^{K-1}\theta_{m}\Lambda_{tet}^{m}, \tag{5}\]
This formulation is localized in space and computationally less expensive than an arbitrary non-parametric filter \(f(\Lambda)\). Per [6], the convolution of kernel \(f(.)\) centered at vertex \(i\) with delta function \(\delta_{i}\) given by \((f(L)\delta_{i})_{j}=\sum_{k}\theta_{k}(L^{k})_{i,j}\) gives the value at vertex \(j\). Interestingly, since the \((L^{k})_{i,j}\) is \(K-\)localized, i.e., \((L^{k})_{i,j}=0\) if \(d(i,j)>K\), the locality is guaranteed with spectral filters approximated with \(k-th\) polynomials of LBO [6].
Now, by plugging Eq. 5 in Eq. 4, the convolution can be expressed in terms of the Laplacian itself without any further need to calculate the eigen-functions. Chebyshev polynomials provide a boost in computational efficiency with a closed recursive formulation:
\[x_{out}=\sum_{m=0}^{K}\theta_{m}T_{m}(L_{tet})x_{in}, \tag{6}\]
where \(\theta_{m}\) are a set of learnable model parameters denoting the coefficients of the polynomials, and \(T_{m}\in R^{n*n}\) is the Chebyshev polynomial of order \(k\).
**Recursive formulation of Chebyshev polynomials.** The main idea of using the polynomial approximation is to avoid the costly eigendecomposition and multiplication with \(\Phi\). Therefore, we parameterize \(f(\Lambda_{tet})\) with the LBO, i.e., \(f(L_{tet})\), using the recursive formulation of Chebyshev polynomials. The cost immediately reduces to \(\mathcal{O}(K|\varepsilon|)\ll\mathcal{O}(n^{2})\), which is desirable for graph convolution on big graphs and 3D meshes. In Eq. 5, the Chebyshev polynomial \(T_{m}\) can be computed recursively using the form \(T_{m}(x)=2xT_{m-1}(x)-T_{m-2}(x)\) with \(T_{0}=1\) and \(T_{1}=x\) [6]. Here, the \(T_{m}\) form an orthogonal basis for \(L^{2}([-1,1],\mu)\), the Hilbert space of square-integrable functions with measure \(d\mu(y)=\frac{dy}{\sqrt{1-y^{2}}}\). Given this recurrence, \(T_{m}(L_{tet})\) in Eq. 6 is evaluated at \(\tilde{L}_{tet}=\frac{2L_{tet}}{\lambda_{max}}-I\), and the recurrence is initialized with \(\tilde{x}_{0}=x_{in}\) and \(\tilde{x}_{1}=\tilde{L}_{tet}x_{in}\), where \(\tilde{x}_{m}\) represents \(T_{m}(\tilde{L}_{tet})x_{in}\) in Eq. 6.
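As a concrete sketch of this recursion, a filtering layer may be implemented as follows, assuming a precomputed sparse \(\tilde{L}_{tet}\) (rescaled as above) and a list of learnable weight matrices; this is an illustrative implementation, not the authors' code.

```python
import torch

def cheb_conv(L_tilde, x, theta):
    """Chebyshev spectral filtering of Eq. (6).
    L_tilde: (n, n) sparse rescaled LBO, i.e., 2 L / lambda_max - I
    x:       (n, f_in) input features on the mesh vertices
    theta:   list of K+1 learnable (f_in, f_out) weight matrices"""
    t_prev, t_cur = x, torch.sparse.mm(L_tilde, x)       # T_0 x and T_1 x
    out = t_prev @ theta[0] + t_cur @ theta[1]
    for k in range(2, len(theta)):
        # recurrence T_m = 2 L_tilde T_{m-1} - T_{m-2}, applied to the features
        t_next = 2.0 * torch.sparse.mm(L_tilde, t_cur) - t_prev
        out = out + t_next @ theta[k]
        t_prev, t_cur = t_cur, t_next
    return out
```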
### Mesh Coarsening and Pooling Operation
Although graph coarsening and mesh coarsening methods differ, tetrahedral mesh down-sampling based on methods like Qslim [12] or learning-based methods like [30] is both expensive and infeasible here, as those methods are template-based and require registered shapes. We do not register the tetrahedral meshes, and the number of vertices varies from mesh to mesh. Therefore, we propose a sub-sampling approach similar to that in [6] but with a spectral-aware configuration. The method is similar to Graclus clustering, exploiting the Laplacian and the matrix \(D\) defined in the previous section.
**Defining the Normalized Min-Cut Based on Tetrahedral LBO.** Here, the objective function is based on a normalized cut acting on the vertices of a tetrahedral mesh. We need an affinity value between \((v_{i},v_{j})\) and a volume \(vol(.)\) capturing the volume of each node. For the volume in the normalized cut problem of a simple graph, the node degree is used; in surface and volumetric meshes, however, this notion refers to the area and volume of the adjacent triangles and tetrahedra of the vertex, respectively. The proposed affinity, or edge distance, must be consistent with the \(A\) and \(D\) in Eq. 3. Thus, the proposed affinity distance, as a new objective function for the local normalized cut, is
\[d(v_{i},v_{j})=-A_{i,j}(\frac{1}{D_{ii}}+\frac{1}{D_{jj}}), \tag{7}\]
Using this clustering objective function, we decimate the mesh by a factor of two at each step. Consequently, the entries \(D_{c}(i,i)\), with \(c\) denoting the coarsened graph, are updated as the sums of the weights of the newly matched vertices. The algorithm repeats until all vertices are matched. Typically, we apply two or three consecutive pooling steps at each convolution layer, since tetrahedral meshes are very large.
After coarsening, the challenging part is to match the new set of vertices with that of the previous ones. As proposed in [6], we use the same approach of exploiting a balanced binary tree and rearrangement of vertices by creating necessary fake nodes in the binary tree structure. For an exhaustive description of this approach please see Sec 2.3 in [6].
**Approximation of LBO on Down-sampled Mesh.** After each pooling, we have a coarsened mesh that needs updated LBO to pass it to the new convolution layer. We adopt the piece-wise constant approximation approach [20] where the clustering assignment matrix \(G\) is used. This choice of \(G\) is the most simple yet efficient one as the matrix is already computed for Graclus clustering. In
some literature, \(G\) is referred to as the prolongation operator. The updated Laplacian \(\hat{L}\) can then be derived using the following equation:
\[\hat{L}=G^{T}LG, \tag{8}\]
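A simplified, unoptimized sketch of the matching and coarsening steps, assuming the stiffness matrix \(A\) in CSR format and the vertex volumes \(D\) as a vector, might read:

```python
import numpy as np
from scipy.sparse import csr_matrix

def graclus_lbo_matching(A, D):
    """Greedy pairwise matching with the affinity distance of Eq. (7).
    A: stiffness matrix (csr, n x n); D: (n,) array of lumped vertex volumes."""
    n = A.shape[0]
    cluster = -np.ones(n, dtype=int)
    nc = 0
    for i in range(n):
        if cluster[i] >= 0:
            continue
        nbrs = A.indices[A.indptr[i]:A.indptr[i + 1]]
        cand = [j for j in nbrs if j != i and cluster[j] < 0]
        if cand:
            # d(v_i, v_j) = -A_ij (1/D_ii + 1/D_jj); merge the closest pair
            dist = [-A[i, j] * (1.0 / D[i] + 1.0 / D[j]) for j in cand]
            cluster[cand[int(np.argmin(dist))]] = nc
        cluster[i] = nc
        nc += 1
    # assignment (prolongation) matrix G, one column per coarse vertex
    return csr_matrix((np.ones(n), (np.arange(n), cluster)), shape=(n, nc))

def coarsen(L, D, G):
    L_hat = (G.T @ L @ G).tocsr()                    # Eq. (8)
    D_hat = G.T @ D                                  # matched volumes add up
    return L_hat, D_hat
```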
**Grad-CAM for Tetrahedral Mesh.** To utilize Grad-CAM in our framework, we adapt the Grad-CAM of [22] to tetrahedral meshes. We use the \(k\)-th feature after the GAP layer, denoted \(f_{k}\), which is calculated from the last-layer feature map \(X_{k,n}^{L}\). Here, \(n\) and \(L\) refer to the \(n\)-th node and the last layer of the network, respectively. The Grad-CAM weights for class \(c\) of feature \(k\) on a tetrahedral mesh are calculated using
\[\alpha_{k}^{l,c}=\frac{1}{N}\sum_{n=1}^{N}\frac{\partial y^{c}}{\partial X_{k,n}^{L}} \tag{9}\]
To calculate the final heat map, we apply an activation function such as ReLU and an upsampling method to project the weights back to the original input mesh. For upsampling, we use _KNN_ interpolation. The final heat map of the last layer \(H\) is then
\[H_{c}^{L,n}=\text{ReLU}(\sum_{k}\alpha_{k}^{l,c}X_{k,n}^{L}) \tag{10}\]
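A minimal sketch of Eqs. (9)-(10), assuming hypothetical `model.features` and `model.classify` interfaces for the convolutional backbone and the GAP/FC head, is given below; projecting the heat map back to the input mesh via KNN interpolation is omitted.

```python
import torch

def tet_grad_cam(model, x, L, target_class):
    """Grad-CAM heat map of Eqs. (9)-(10) on the coarsest mesh."""
    feats = model.features(x, L)                     # last-layer map X^L, shape (n_L, K)
    feats.retain_grad()                              # keep gradients of a non-leaf tensor
    score = model.classify(feats)[target_class]      # class score y^c after GAP/FC
    score.backward()
    alpha = feats.grad.mean(dim=0)                   # Eq. (9): node-averaged gradients
    heat = torch.relu((feats * alpha).sum(dim=1))    # Eq. (10)
    return heat / (heat.max() + 1e-12)               # normalized per-node importance
```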
## 3 Experimental Results
**Data Processing.** In our experiment, we study the diagnosis task for Alzheimer's disease. Our dataset contained 116 Alzheimer's disease (AD) patients, and 137 normal controls (NC) from the Alzheimer's Disease Neuroimaging Initiative phase 2 (ADNI-2) baseline initial-visit dataset [17]. All the subjects underwent the whole-brain MRI scan using a 3-Tesla MRI scanner. More details regarding the scans can be found at [http://adni.loni.usc.edu/wp-content/uploads/2010/05/ADNI2_GE_3T_22.0_T2.pdf](http://adni.loni.usc.edu/wp-content/uploads/2010/05/ADNI2_GE_3T_22.0_T2.pdf).
**Cortical Tetrahedral Mesh Generation.** We followed the procedure in [8] to create cortical tetrahedral meshes. First, pial and white surfaces were processed and created by FreeSurfer [11]. To remove self-intersections while combining pial
Figure 3: Procedure of creating a cortical tetrahedral mesh of a closed surface from white and pial surface pre-processed and segmented by FreeSurfe [11].
and white surfaces, we repeatedly moved erroneous nodes and their small neighborhoods along the inward normal direction by a small step size until the intersection was removed, and then applied local smoothing to the modified nodes. Finally, we used TetGen [25] to create tetrahedral meshes of the closed surfaces. Fig. 3 illustrates the cortical tetrahedral mesh generation process. The number of vertices in each tetrahedral mesh was around \(150k\). To validate the robustness of our model, we used the raw _xyz_ coordinates as input features and normalized them using min-max normalization. We avoided using more informative features, as they might contribute to the final performance rather than the TetCNN itself. We pre-computed the lumped LBO for all meshes and embedded it in our customized data loader.
**Classification Model Setup.** To compare different manifold spectral models, we tested our model with both the tetrahedral LBO and the graph Laplacian. For a fair comparison between settings, we used the same network architecture and hyper-parameters. We used 5 TetCNN layers, each followed by a ReLU activation function [21] and batch normalization [16]. Before the two fully connected layers, we applied a GAP to ensure the same size of feature
\begin{table}
\begin{tabular}{l|l|l|l} \hline Method & _ACC_ & _SEN_ & _SPE_ \\ \hline Thickness* & \(76.2\%\) & \(77.0\%\) & \(78.6\%\) \\ LBO(1) & \(\mathbf{91.7\%\pm 2.1}\) & \(89.1\%\pm 5.1\) & \(\mathbf{93.3\%\pm 3.5}\) \\ LBO(2) & \(90.8\%\pm 2.0\) & \(87.5\%\pm 4.8\) & \(92.1\%\pm 3.1\) \\ GL(1) & \(87.1\%\pm 1.8\) & \(90.4\%\pm 4.7\) & \(89.5\%\pm 3.1\) \\ GL(2) & \(85.7\%\pm 2.1\) & \(\mathbf{90.0\%\pm 4.2}\) & \(87.5\%\pm 2.9\) \\ \hline LBO(1)+E\_pool & \(84.1\%\pm 2.4\) & \(83.5\%\pm 4.9\) & \(87.1\%\pm 3.5\) \\ LBO(1)+LBO\_pool & \(91.7\%\pm 2.1\) & \(89.1\%\pm 5.1\) & \(92.1\%\pm 3.5\) \\ \hline \end{tabular}
\end{table}
Table 1: Classification results between AD vs. NC under different settings and parameters (GL = graph Laplacian, (.) defines the polynomial order \(k\) for LBO and GL, E_pool = Euclidean-based pooling). *Cortical thickness generated by FreeSurfer
\begin{table}
\begin{tabular}{l|l|l|l|l} \hline Study & _ACC_ & _SEN_ & _SPE_ & _Subject_ \\ & & & & _Split_ \\ \hline GF-Net [28] & \(94.1\%\pm 2.8\) & \(93.2\%\pm 2.4\) & \(90.6\%\pm 2.6\) & (188,229) \\ _Qiu et al._[24] & \(83.4\%\) & \(76.7\%\) & \(88.9\) & (188,229) \\ ViT3D [28] & \(85.5\%\pm 2.9\) & \(87.9\%\pm 3.6\) & \(86.8\%\pm 3.7\) & (188,229) \\ _Huang et al._[15] & \(90.9\%\pm 0.6\) & \(91.3\%\pm 0.1\) & \(90.7\%\pm 0.5\) & (261,400) \\ H-FCN [19] & \(90.5\%\) & \(90.5\%\) & \(91.3\%\) & (389,400) \\ ResNet3D [28] & \(87.7\%\pm 3.5\) & \(90.2\%\pm 2.8\) & \(89.7\%\pm 3.0\) & (188,229) \\ DA-Net [31] & \(92.4\%\) & \(91.0\%\) & \(93.8\%\) & (389,400) \\ \hline
**Ours** & \(91.7\%\pm 2.1\) & \(89.1\%\pm 5.1\) & \(92.1\%\pm 3.5\) & (116,137) \\ \hline \end{tabular}
\end{table}
Table 2: Classification results between AD vs. NC comparison to the baseline using different data representation. The number of different subjects is also used for fair comparison.
space among all mini-batches. We used 10-fold cross-validation and picked 15% of the training set for validation. We set the hyper-parameter \(k\) to two different values, as shown in Table 1. The batch size for all TetCNN experiments was 8, and the loss function was cross-entropy. The ADAM optimizer [18] with a learning rate of \(10^{-3}\), a weight decay of \(10^{-4}\), and 150 epochs were used for training the model. For the AD vs. NC classification performance evaluation, we used three measures: accuracy (_ACC_), sensitivity (_SEN_), and specificity (_SPE_). As a benchmark, we also used the FreeSurfer thickness features to train an AdaBoost classifier.
**Point Clouds Model Setup for Classification.** Point clouds have been widely used in deep learning literature to study manifold data. In our work, we further implemented DGCNN [27] and PointNet [23] as our baseline models to analyze volume data. For both PointNet and DGCNN, we trained the network with batch size 1 to feed the whole data without losing points for a fair comparison. All experiments were implemented in Python 3.7 with Pytorch Geometric 1.8 library [10] using NVIDIA GeForce Titan X GPU.
**Age Prediction Setup.** To further compare TetCNN using the volumetric LBO with its graph Laplacian counterpart, we used a regression model for age prediction. We used the same processed ADNI data, but trained the model only on normal subjects. We then tested the trained model on both normal subjects and an independent set of AD subjects to assess the accuracy and the effect of AD on age prediction. For an unbiased age prediction on the AD cohort, we built a test set that matched the age distribution of the normal subjects. We used 5-fold cross-validation. As for the AD subjects, we randomly chose 25 subjects to test on the trained model and repeated the experiment 5 times. All network parameters are the same as in the classification model, except that the output dimension of the last fully connected layer is one, as we predict a number instead of discrete class labels.
**Classification Results.** As we see in Table 1, TetCNN with \(k=1\) outperformed any other setting, including graph Laplacian with the same parameter.
We expected that increasing \(k\) would boost performance; however, the results are marginally worse. We presume this behavior demonstrates that the 1-ring neighborhood already provides sufficient information, so that enlarging the receptive field does not necessarily contribute more discriminative features. Overall, TetCNN with the LBO-based setting outperformed its graph Laplacian counterpart, presumably owing to both the rich geometric features learned using
\begin{table}
\begin{tabular}{l|l|l} \hline Method & _RMSE (NC)_ & _RMSE (AD)_ \\ \hline LBO(1) & \(\mathbf{6.3\pm 0.5}yr\) & \(\mathbf{7.2\pm 0.7}yr\) \\ LBO(2) & \(6.5\pm 0.6yr\) & \(7.4\pm 0.4yr\) \\ GL(1) & \(7.2\pm 0.4yr\) & \(7.9\pm 0.4yr\) \\ GL(2) & \(7.1\pm 0.5yr\) & \(8.1\pm 0.5yr\) \\ \hline \end{tabular}
\end{table}
Table 4: Age prediction result.
\begin{table}
\begin{tabular}{c|c|c|c} \hline & _DGCNN_[27] & _PointNet_[23] & _TetCNN_ \\ \hline _ACC_ & 73.45\% & 77.35\% & **91.7\%** \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of TetCNN with DGCNN [27] and PointNet [23].
the LBO, as exhaustively depicted in Fig. 1, and the efficient spectral-based mesh downsampling using the proposed objective function in Eq. 7. We also tested our new Graclus based on the LBO and compared it to the default localized min-cut based on the Euclidean distance between two vertices. The LBO-based objective function clearly outperformed the one used in [6], which is not suitable for mesh structures since the node degrees in a mesh are nearly uniform.
Regarding the comparison with point cloud learning frameworks, Table 3 shows that DGCNN and PointNet could not provide results comparable with our method, owing to the lack of deformation sensitivity in the point cloud representation. These methods produce state-of-the-art results for the classification of completely distinct objects but fail to compete with mesh structures at learning subtle deformations in volume data.
Lastly, we compared TetCNN with other methods in the literature that are based on brain networks, surface meshes, or voxel representations. Although trained on a smaller dataset, our results are comparable to those of state-of-the-art models.
**Grad-CAM Results.** In Fig. 4, we illustrate the Grad-CAM results on the left grey-matter tetrahedral mesh of an AD subject, trained with both the LBO-based (A) and the graph Laplacian-based scheme (B). As illustrated, the regions important for the AD class differ between the two approaches. The ROIs identified with the LBO are more centered at the medial temporal lobe, frontal lobe, and posterior cingulate, areas that are affected by AD, whereas the ROIs from the graph Laplacian are more scattered, without concise ROIs. Although more validation is desired, the current results demonstrate that our interpretable model may identify important AD biomarkers.
**Age Prediction Results.** We tested our model on a regression task to compare Graph Laplacian and LBO. Furthermore, we aimed to see if the age prediction in AD patients has a larger margin of error with respect to normal subjects.
Figure 4: Grad-CAM results for AD class showing the important regions. Comparison between LBO-based (top) and graph Laplacian (bottom) on the left hemisphere of the brain. A-B From left: Lateral-Medial view. C-D From top: Sagittal-Coronal view. Darker colors show more importance, hence greater weight.
Results in Table 4 show the consistent outperformance of the LBO-based TetCNN over its graph Laplacian counterpart. They also show an error margin about one year larger for AD patients, which is expected given that cortical thickness changes in AD patients are more severe.
**Complexity.** Finally, in terms of computational complexity, the parameterized filter introduced in Eq. 5 addresses the spatial non-locality and the \(\mathcal{O}(n)\) learning complexity of a non-parametric filter by employing the polynomial approximation of the tetrahedral LBO. Our approach reduces the learning complexity to the polynomial order \(k\), hence \(\mathcal{O}(k)\).
## 4 Conclusion and Future Work
In this study, we proposed a graph neural network based on the volumetric LBO, with modified pooling and down-sampling, for tetrahedral meshes of varying sizes. Results show that the model outperforms ChebyNet with the graph Laplacian. Moreover, the Grad-CAM adapted to tetrahedral meshes highlighted regions within the surface and volume of the brain cortex affected in AD patients, consistent with findings in the literature. Our learning framework is general and applicable to other LBO definitions. Therefore, it may be extended to triangular mesh representations and point clouds, which could help solve many challenging medical imaging problems, including shape analysis and shape correspondence. In the future, we will also study brain parcellation and segmentation tasks with our LBO-based TetCNN.
|
2310.10308 | Time integration schemes based on neural networks for solving partial
differential equations on coarse grids | The accuracy of solving partial differential equations (PDEs) on coarse grids
is greatly affected by the choice of discretization schemes. In this work, we
propose to learn time integration schemes based on neural networks which
satisfy three distinct sets of mathematical constraints, i.e., unconstrained,
semi-constrained with the root condition, and fully-constrained with both root
and consistency conditions. We focus on the learning of 3-step linear multistep
methods, which we subsequently applied to solve three model PDEs, i.e., the
one-dimensional heat equation, the one-dimensional wave equation, and the
one-dimensional Burgers' equation. The results show that the prediction error
of the learned fully-constrained scheme is close to that of the Runge-Kutta
method and Adams-Bashforth method. Compared to the traditional methods, the
learned unconstrained and semi-constrained schemes significantly reduce the
prediction error on coarse grids. On a grid that is 4 times coarser than the
reference grid, the mean square error shows a reduction of up to an order of
magnitude for some of the heat equation cases, and a substantial improvement in
phase prediction for the wave equation. On a 32 times coarser grid, the mean
square error for the Burgers' equation can be reduced by up to 35% to 40%. | Xinxin Yan, Zhideng Zhou, Xiaohan Cheng, Xiaolei Yang | 2023-10-16T11:43:08Z | http://arxiv.org/abs/2310.10308v1 | Time integration schemes based on neural networks for solving partial differential equations on coarse grids
###### Abstract
The accuracy of solving partial differential equations (PDEs) on coarse grids is greatly affected by the choice of discretization schemes. In this work, we propose to learn time integration schemes based on neural networks which satisfy three distinct sets of mathematical constraints, i.e., unconstrained, semi-constrained with the root condition, and fully-constrained with both root and consistency conditions. We focus on the learning of 3-step linear multistep methods, which we subsequently applied to solve three model PDEs, i.e., the one-dimensional heat equation, the one-dimensional wave equation, and the one-dimensional Burgers' equation. The results show that the prediction error of the learned fully-constrained scheme is close to that of the Runge-Kutta method and Adams-Bashforth method. Compared to the traditional methods, the learned unconstrained and semi-constrained schemes significantly reduce the prediction error on coarse grids. On a grid that is \(4\times\) coarser than the reference grid, the mean square error shows a reduction of up to an order of magnitude for some of the heat equation cases, and a substantial improvement in phase prediction for the wave equation. On a \(32\times\) coarser grid, the mean square error for the Burgers' equation can be reduced by up to \(35\%\) to \(40\%\).
**Keywords:** Time integration scheme, Partial differential equation, Numerical simulation on coarse grids, Neural networks, Mathematical constraints
## 1 Introduction
Many environmental and engineering problems, encompassing a broad spectrum of spatial and temporal scales, are described by partial differential equations (PDEs) of high dimension. Since resolving all the scales often demands exceedingly extensive computational resources, solving the high-dimensional PDEs poses a great challenge to the current computing systems. For example, direct numerical simulations of high-Reynolds number turbulent flows can be particularly challenging in this regard. Solving the PDEs on coarse grids, reducing the dimension of the problem, can substantially reduce the computational costs, while meanwhile introduces discretization errors. Discretization errors depend not only on the spatial discretization schemes but also on the time integration schemes. In this study, we propose to learn time integration schemes to reduce the error of solving PDEs on coarse grids.
The traditional approach to reducing the prediction error on coarse grids is to directly account for the effect of the physics of the unresolved scales. One way is to solve the spatially filtered PDEs instead of the original PDE. However, the filtering procedure applied to the nonlinear term often introduces a new unclosed term, i.e., the subgrid term. The subgrid term can be modelled explicitly by establishing its relation with the resolved scales; for instance, the eddy viscosity model for large-eddy simulation (LES) of turbulent flows governed by the Navier-Stokes (NS) equations [1, 2]. The subgrid term can also be modelled implicitly by changing the properties of the schemes for discretizing the spatial derivatives; one example is the Implicit Large-Eddy Simulation (ILES) method for solving turbulent flows [3]. The success of such approaches largely depends on our understanding of the physics of the unresolved scales. Moreover, the developed models are often valid for only a range of unresolved scales. When the dominant physics of the unresolved scales change from one to the other as the cut-off scale changes, a grey region appears where neither of the models in
each regime works. For instance, the grey region in the microscale and mesoscale simulations of the atmospheric flow in the meteorology research [4, 5].
The machine learning methods are revolutionizing the numerical methods for solving PDEs [6, 7, 8, 9, 10, 11]. Such approaches approximate the solution based on machine learning models [12, 13], e.g., neural networks [14], avoiding the need for a grid to discretize the computational domain. In the review paper by [15], three types of the methods based on neural networks were reviewed: 1) the physics-informed neural networks (PINN) [6, 16, 17], in which the PDEs with initial and boundary conditions are approximately enforced via loss functions; 2) the methods based on the Feynman-Kac formula [18], which approximate the solution of a PDE as the expectation of a stochastic process; and 3) the methods based on the solution of backward stochastic differential equations [19, 20], in which deep neural networks (DNN) are employed for computing the gradient of solutions. As noted in the review [15], the last two methods showed promising results for high-dimensional linear and semilinear systems. The methods based on PINN are capable of handling complex nonlinear PDEs and inverse problems with incomplete models and imperfect data, while they are not competitive for solving well-posed, high-dimensional forward problems when compared with the traditional grid-based numerical methods [12, 15].
Improving the predictive capability of grid-based numerical methods using the machine learning methods received lots of attention as well [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]. Some efforts are particularly focused on improving the traditional subgrid models for solving PDEs on coarse grids, such as models for turbulent flow simulations [21, 22, 23, 24, 25, 26, 27, 28]. Other attempts were made to improve the performance of the spatial discretization schemes for PDEs [29, 30, 31]. Particularly, the spatial discretization schemes for solving PDEs on coarse grids were learned using neural networks in the work by Bar-Sinai et al. [31]. The results of the one-dimensional Burgers' equation showed that the learned discretization schemes can reduce the prediction error and extend the median valid simulation time for very coarse grids.
The stability and accuracy of solving PDEs on coarse grids depend on both temporal and spatial discretization schemes. Empowering time integration schemes through machine learning methods, however, has been overlooked in existing studies. When solving PDEs on coarse grids, errors in the solution due to coarse spatial resolution accumulate over time. For example, this can lead to phase errors when solving the wave equation on coarse grids. In this study, we employ neural networks to learn time integration schemes to reduce the error of solving PDEs on coarse grids. Specifically, we propose three learning approaches with different constraints and test the learned schemes using the one-dimensional heat equation, wave equation, and Burgers' equation.
In the rest of the paper, the three learning approaches are first described and applied to learn a 3-step linear multistep method in section 2. Then, the test results for the one-dimension heat equation, wave equation, and Burgers' equation are presented sequentially in section 3. Lastly, the conclusions are drawn in section 4.
## 2 Learning of time integration schemes
In this section, we describe the approach for developing data-driven time integration schemes. The 3-step linear multistep method is taken as an example. Similar ideas can be applied to other time integration schemes. The general formulation of the linear multistep method is given in section 2.1 focusing on the constraints for the stability and consistency of the method. The three different models for imposing the constraints are then presented in section 2.2. The definition of the loss function is given in section 2.3. Lastly, the details for the application in a 3-step linear multistep method is given in section 2.4.
### General formulation of the linear multistep method
We consider the following ordinary differential equation (ODE) with independent variable \(t\)
\[\frac{\partial v}{\partial t}=F(x,t), \tag{1}\]
where \(x\) is a parameter (which can be the spatial coordinate). The linear multistep method of \(k\)-step for discretizing the above ODE can be expressed as
\[\alpha_{k}v^{n+1}=-\sum_{i=0}^{k-1}\alpha_{i}v^{n+i-k+1}+\Delta t \sum_{i=0}^{k}\beta_{i}F_{n+i-k+1}, \tag{2}\]
where \(\alpha_{i}\) and \(\beta_{i}\) are real coefficients, \(v^{n}\) represents the approximate value of \(v(t_{n})\) at \(t=t_{n}\), \(\Delta t\) is the time step size, and \(F_{n+i}=F(x,t_{n+i})\) with \(t_{n+i}=t_{n}+i\Delta t\). For the sake of simplicity, we assume that \(\alpha_{k}=1\) and \(\alpha_{0}^{2}+\beta_{0}^{2}\neq 0\).
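For illustration, a single update of Eq. (2) for an explicit scheme can be written compactly; the sketch below is a minimal implementation, with the Adams-Bashforth coefficients noted as a sanity check.

```python
import numpy as np

def multistep_step(v_hist, F_hist, alpha, beta, dt):
    """One update of the explicit k-step scheme of Eq. (2), assuming
    alpha_k = 1 and beta_k = 0 (explicit method).
    v_hist = [v^{n-k+1}, ..., v^n], F_hist = [F_{n-k+1}, ..., F_n],
    alpha  = (alpha_0, ..., alpha_{k-1}), beta = (beta_0, ..., beta_{k-1})."""
    v_new = -sum(a * v for a, v in zip(alpha, v_hist))
    return v_new + dt * sum(b * F for b, F in zip(beta, F_hist))

# Sanity check: the 3-step Adams-Bashforth method corresponds to
# alpha = (0, 0, -1), beta = (5/12, -16/12, 23/12), recovering
# v^{n+1} = v^n + dt (23 F_n - 16 F_{n-1} + 5 F_{n-2}) / 12.
```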
The generating polynomials of the multistep method, which can be written as
\[\rho(\chi)=\alpha_{k}\chi^{k}+\alpha_{k-1}\chi^{k-1}+\cdots+ \alpha_{0}, \tag{3}\] \[\sigma(\chi)=\beta_{k}\chi^{k}+\beta_{k-1}\chi^{k-1}+\cdots+\beta _{0}, \tag{4}\]
are employed to obtain the consistency and stability constraints [32]. The obtained consistency condition is
\[\rho(1)=0,\quad\rho^{\prime}(1)=\sigma(1), \tag{5}\]
which is also the condition that the linear multistep method should satisfy to be of first-order accuracy 1.
Footnote 1: According to Taylor expansion, the necessary and sufficient condition for a linear multistep method of \(k\)-step to be of order \(P\) is \(\sum_{i=0}^{k}\alpha_{i}=0\), \(\sum_{i=0}^{k}\alpha_{i}i^{x}=s\sum_{i=0}^{k}\beta_{i}i^{x-1},s=1,2,\cdots P\).
The stability condition employed here is zero-stability, i.e., a method is stable in the limit \(\Delta t\to 0\). A linear multistep method is stable if \(\rho(\chi)\) satisfies the root condition [32]: the moduli of the roots of \(\rho(\chi)\) are less than or equal to 1, and any root of modulus 1 must be simple. That is to say, the roots of \(\rho(\chi)\) lie on or within the unit circle, and the roots on the unit circle are simple. It is worth mentioning that, under the premise of the consistency condition, the convergence and stability of the linear multistep method are equivalent.
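The root condition can be checked numerically from the coefficients \(\alpha_{i}\); a minimal sketch (the roundoff tolerance is an implementation choice) is:

```python
import numpy as np

def is_zero_stable(alpha, tol=1e-9):
    """Root condition for rho(chi) = chi^k + alpha_{k-1} chi^{k-1} + ... + alpha_0,
    with alpha given as (alpha_0, ..., alpha_{k-1})."""
    roots = np.roots([1.0, *alpha[::-1]])            # highest-degree coefficient first
    for r in roots:
        if abs(r) > 1 + tol:                         # root outside the unit circle
            return False
        if abs(abs(r) - 1) <= tol:                   # root on the circle: must be simple
            if np.sum(np.abs(roots - r) <= tol) > 1:
                return False
    return True

# Example: 3-step Adams-Bashforth has rho(chi) = chi^3 - chi^2,
# roots {1, 0, 0}, hence zero-stable.
assert is_zero_stable((0.0, 0.0, -1.0))
```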
### Three models for learning time integration schemes
The stability and consistency constraints discussed in the last subsection are imposed during the training of the neural network model, with the aim to increase the interpretability and stability of the learned time integration schemes. Three different strategies for applying the constraints are employed, i.e., 1) the BP (backpropagation) neural network model with no constraints (the unconstrained model, Un-con); 2) the BP neural network model with root condition (stability condition) enforced (the semi-constrained model, Semi-con); 3) the BP neural network model with both root condition and consistency condition enforced (the fully-constrained model, Full-con). In the following, the root condition and consistency condition will be reformulated in a way to facilitate their implementation as constraints for training the neural network models. As the results from the explicit Runge-Kutta method of order 3(2) with adaptive time-stepping will be employed as the reference for examining the accuracy, in the following we focus on deriving the constraints for the data-driven explicit linear multistep method of 3-step (\(k=3\) in Eq. (2)).
The consistency condition, which is equivalent to imposing the first-order accuracy constraint on the 3-step explicit linear multistep method, is implemented in such a way that some parameters are direct outputs of the neural network, with the remaining parameters (i.e., \(\alpha_{i}\) and \(\beta_{i}\) in Eq. (2)) adjusted to satisfy the constraint (i.e., Eq. (5)).
In regard to the root condition (stability condition), i.e., that the moduli of the roots of \(\rho(\chi)\) are less than or equal to 1 (\(\|\chi\|\leq 1\)) and any root of modulus 1 is simple, we employ the transform \(\chi=\frac{1+z}{1-z}\) to map this constraint to the requirement that the real parts of all roots of a real-coefficient polynomial in \(z\) are negative. With \(Re(z)<0\), the original root condition is replaced by a stricter one, \(\|\chi\|<1\). Therefore, a \(\rho(\chi)\) with the root condition guaranteed can be turned into a Hurwitz polynomial \(\psi(z)\) [33]. Specifically, the Hurwitz polynomials take the following forms,
\[\psi_{2}(z)=(1-z)^{2}\rho_{2}\left(\frac{1+z}{1-z}\right)=(1-\alpha_{1}+ \alpha_{0})z^{2}+2(1-\alpha_{0})z+(1+\alpha_{1}+\alpha_{0}), \tag{6}\]
\[\psi_{3}(z)=(1-z)^{3}\rho_{3}\left(\frac{1+z}{1-z}\right)=(1-\alpha_{2}+ \alpha_{1}-\alpha_{0})z^{3}+(3-\alpha_{2}-\alpha_{1}+3\alpha_{0})z^{2}+(3+ \alpha_{2}-\alpha_{1}-3\alpha_{0})z+(1+\alpha_{2}+\alpha_{1}+\alpha_{0}), \tag{7}\]
which correspond to the following quadratic and cubic \(\rho(\chi)\) polynomials \(\rho_{2}(\chi)=\chi^{2}+\alpha_{1}\chi+\alpha_{0}\) and \(\rho_{3}(\chi)=\chi^{3}+\alpha_{2}\chi^{2}+\alpha_{1}\chi+\alpha_{0}\), respectively.
According to the Routh-Hurwitz criterion [33], the constraint that the real parts of the roots of \(\psi_{2}(z)\) and \(\psi_{3}(z)\) are negative is equivalent to the condition that their polynomial coefficients satisfy the following inequalities,
\[\begin{cases}1-\alpha_{1}+\alpha_{0}>0,\\ 1-\alpha_{0}>0,\\ 1+\alpha_{1}+\alpha_{0}>0,\end{cases}\quad\text{(a)}\qquad\qquad\begin{cases}1-\alpha_{2}+\alpha_{1}-\alpha_{0}>0,\\ 1+\alpha_{2}+\alpha_{1}+\alpha_{0}>0,\\ 1-\alpha_{1}+\alpha_{2}\alpha_{0}-\alpha_{0}^{2}>0,\\ 1-\alpha_{0}>0,\\ 1+\alpha_{0}>0,\end{cases}\quad\text{(b)} \tag{8}\]

which correspond to \(\psi_{2}(z)\) and \(\psi_{3}(z)\), respectively. For the fully-constrained model, the consistency condition \(\rho(1)=0\) is enforced by factoring out the root \(\chi=1\) of the cubic generating polynomial,

\[\rho_{3}(\chi)=(\chi-1)(\chi^{2}+p\chi+q). \tag{9}\]
The root condition is then reduced to requiring that the remaining two roots, i.e., the roots of \(\chi^{2}+p\chi+q\), lie within the unit circle; the coefficients of \(\chi^{2}+p\chi+q\) should therefore meet the Routh-Hurwitz criterion, i.e., Eq. (8a). With \(p\), \(q\) given by the learned neural network model, the values of \(\alpha_{0}\), \(\alpha_{1}\) and \(\alpha_{2}\) can then be obtained.
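As a concrete check of these conditions, the following NumPy sketch (ours, not from the paper) verifies the root condition both directly from the root moduli and via the Routh-Hurwitz inequalities of Eq. (8); the function names are illustrative only.

```python
import numpy as np

def root_condition_holds(alpha):
    """Strict root condition for rho_3(chi) = chi^3 + a2*chi^2 + a1*chi + a0,
    checked directly from the moduli of its roots (the strict form |chi| < 1
    used in the Hurwitz-polynomial reformulation)."""
    a0, a1, a2 = alpha
    roots = np.roots([1.0, a2, a1, a0])
    return bool(np.all(np.abs(roots) < 1.0 + 1e-12))

def routh_hurwitz_3(alpha):
    """Equivalent check via the coefficient inequalities of Eq. (8b)."""
    a0, a1, a2 = alpha
    return (1 - a2 + a1 - a0 > 0 and 1 + a2 + a1 + a0 > 0
            and 1 - a1 + a2 * a0 - a0**2 > 0
            and 1 - a0 > 0 and 1 + a0 > 0)

# Adams-Bashforth 3: rho(chi) = chi^3 - chi^2, i.e. (a0, a1, a2) = (0, 0, -1).
# chi = 1 lies on the boundary, so the strict check fails; after factoring out
# (chi - 1), the remaining quadratic chi^2 + p*chi + q with p = q = 0 passes.
print(routh_hurwitz_3([0.0, 0.0, -1.0]))                # False (boundary)
print(np.all(np.abs(np.roots([1.0, 0.0, 0.0])) < 1))    # True (double root at 0)
```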
### Loss function
The loss function consists of two parts: one related to the error of the solution predicted by the learned time integration scheme, and the other enforcing the constraints imposed on the scheme coefficients given by the neural network model. The first part (\(\text{MSE}(v_{pred},v_{true})\)) is defined as the mean squared error between the predictions on the coarse grid (\(v_{pred}\)) and the solutions on the same grid produced by coarsening the high-resolution solutions (\(v_{true}\)). The second part takes the form of an internal penalty function, which originates from the inequalities of the root condition constraint. The internal penalty function, also called a barrier function, establishes a barrier at the boundary of the feasible region to prevent the iteration from leaving it [34]. When the output values of the neural network approach the boundary of the feasible region, the value of the internal penalty function tends to infinity. In this work, a barrier function of the following reciprocal form,
\[B_{1}(\vec{\alpha})=\sum_{i}\frac{1}{|g_{i}(\vec{\alpha})|+\epsilon},\quad \text{(a)}\qquad\qquad B_{2}(p,q)=\sum_{i}\frac{1}{g_{i}(p,q)},\quad\text{(b)} \tag{10}\]
where the feasible region is defined by \(g_{i}(\cdot)>0\), is employed. The barrier function for the semi-constrained model, Eq. (10a), sets the constraints on the values of \(\alpha_{i}\) directly. Because the initial output value may lie on the boundary of the feasible region, the absolute value and the small constant \(\epsilon\) prevent the denominator from vanishing during training, which would otherwise make subsequent training impossible. In addition, the output of the semi-constrained model is tested after each training run; if the output coefficients are not in the feasible domain, the model is retrained with modified initial weights.
For the fully-constrained model, the barrier function Eq. (10b) is enforced on \(p\), \(q\) as shown in Eq. (9), in which the consistency condition \(\rho(1)=0\) is guaranteed. Since the initial output of the fully-constrained model lies within the feasible domain, it does not leave the feasible domain at small learning rates, and no technique is needed to keep the denominator away from zero. The specific initial network settings are given in subsection 2.4 and section 3. As a consequence, the loss function can be written as
\[\text{loss}=\text{MSE}(v_{pred},v_{true})+\gamma B, \tag{11}\]
where \(\gamma\) is an adjustable hyperparameter, set to 0 for the unconstrained model. The \(\epsilon\) in Eq. (10a) is a small quantity and should be adjusted as \(\gamma\) changes.
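For illustration, a minimal NumPy sketch of the barrier functions in Eq. (10) and the combined loss in Eq. (11) is given below; the function names and the choice of the Routh-Hurwitz left-hand sides as the \(g_{i}\) are our assumptions.

```python
import numpy as np

def barrier_semi(alpha, eps=1e-6):
    """Reciprocal barrier of Eq. (10a) for the semi-constrained model.
    g_list holds the left-hand sides of the Routh-Hurwitz inequalities
    (Eq. (8b)); |.| + eps keeps the denominator away from zero when the
    iterate starts on the boundary of the feasible region."""
    a0, a1, a2 = alpha
    g_list = [1 - a2 + a1 - a0, 1 + a2 + a1 + a0,
              1 - a1 + a2 * a0 - a0**2, 1 - a0, 1 + a0]
    return sum(1.0 / (abs(g) + eps) for g in g_list)

def barrier_full(p, q):
    """Barrier of Eq. (10b) for the fully-constrained model, applied to the
    quadratic factor chi^2 + p*chi + q of Eq. (9) via Eq. (8a)."""
    g_list = [1 - p + q, 1 - q, 1 + p + q]
    return sum(1.0 / g for g in g_list)

def total_loss(v_pred, v_true, barrier_value, gamma):
    """Eq. (11): data misfit plus the weighted barrier penalty."""
    return np.mean((v_pred - v_true) ** 2) + gamma * barrier_value
```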
### Learning of 3-step linear multistep methods
The data preparation and model training of a 3-step linear multistep method are presented to illustrate the procedure of developing data-driven time integration schemes based on backpropagation (BP) neural networks, which is then applied to the one-dimensional heat equation, wave equation and Burgers' equation.
The outputs of the neural network model are the coefficients \(\alpha_{i}\) and \(\beta_{i}\) of the linear multistep formulation (Eq. (2)). The inputs consist of coarse-grained solutions from the previous time step, i.e., \(v^{n}\) at different spatial locations (with the solutions \(v^{n-1}\), \(v^{n-2}\) used to compute the right-hand-side term of the 3-step linear multistep method). Consequently, the learned coefficients are influenced by spatial and temporal variations in the solutions. Since the internal penalty function is employed, the initial outputs of the neural network must be specified in the feasible region. A small learning rate is employed, such that the step size for updating the network parameters is small, which keeps the outputs within the feasible region. Min-max normalization is applied to the inputs during training and testing to ensure the constraints remain valid during testing.
To train the data-driven model for a 3-step linear multistep method, the coefficients of the third-order explicit Adams methods, which can be written as
\[v^{n+1}=v^{n}+\Delta t\left(\frac{23}{12}F_{n}-\frac{16}{12}F_{n-1}+\frac{5}{ 12}F_{n-2}\right), \tag{12}\]
are employed to set the initial output of the neural network. To prevent the training from falling into local optima, small perturbations are added to the coefficients. Under the assumption that the optimal coefficients are close to this initial guess, the initial biases of the network are set manually, and the initial weights are set as small random numbers, so that the optimization proceeds in small step sizes and the output coefficients remain in the feasible region even when the inputs are not in the training set.
A schematic of the training process is shown in Figure 1. During each iteration of model training, the solution from the previous time step is first given as the input. The scheme coefficients are then produced by the NN model (neural network model). Finally, the loss (Eq. (11)) is computed from the mean squared error of the predicted solution \(v_{pred}^{n+1}\) together with the penalty for the coefficient constraints, where \(v_{pred}^{n+1}\) is obtained by advancing the equation one step using the output coefficients of the NN model, the solutions at previous time steps, and the right-hand-side terms.
The employed neural network is composed of three layers: an input layer with the number of neurons equal to the number of grid points in \(x\), a hidden layer with 20 neurons, and an output layer with 6 neurons for the unconstrained and semi-constrained models, and 4 neurons for the fully-constrained model. As the initial loss is below \(10^{-5}\), small learning rates, on the order of \(10^{-7}\), are employed to adjust the initial guess of the coefficients. The three models are all trained using the Adam optimizer, and the activation function is ReLU. Since the first two layers contain more bias terms than the output layer, the additional biases default to the last value of the manually set output-layer biases. The models are trained in the framework of TensorFlow 1.0.
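A minimal sketch of this network in tf.keras might look as follows (the original work uses TensorFlow 1.0; the layer setup, the bias-assignment step, and leaving the hidden-layer biases at zero are simplifications on our part).

```python
import numpy as np
import tensorflow as tf

n_x = 16    # input size: number of grid points on the coarse grid
n_out = 6   # alpha_{0,1,2} and beta_{0,1,2} (4 outputs for the fully-constrained model)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        20, activation="relu", input_shape=(n_x,),
        kernel_initializer=tf.keras.initializers.RandomUniform(-5e-4, 5e-4)),
    tf.keras.layers.Dense(
        n_out,
        kernel_initializer=tf.keras.initializers.RandomUniform(-5e-4, 5e-4)),
])

# Bias the output layer toward the third-order Adams coefficients of Eq. (12),
# so the initial outputs start inside the feasible region.
adams = np.array([0.0, 0.0, -1.0, 5/12, -4/3, 23/12], dtype=np.float32)
model.layers[-1].bias.assign(adams)

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-7)
```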
The training data are generated by coarsening the numerical solution on the fine grid using the cell-averaging approach. The spatial discretization is based on the finite volume method, which has good stability and conservation properties and is consistent with the cell-averaging approach employed for coarsening.
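A possible NumPy implementation of the cell-averaging coarsening step, assuming the fine-grid size is divisible by the coarsening factor, is sketched below.

```python
import numpy as np

def coarsen_cell_average(v_fine, factor=4):
    """Cell-averaging coarsening of a finite-volume solution: each coarse-cell
    value is the mean of the `factor` fine-cell averages it contains.
    v_fine has shape (..., n_fine) with n_fine divisible by factor."""
    n_fine = v_fine.shape[-1]
    assert n_fine % factor == 0
    return v_fine.reshape(*v_fine.shape[:-1], n_fine // factor, factor).mean(axis=-1)

# e.g. a 64-point fine-grid snapshot -> 16-point training sample
x_fine = np.linspace(0, 1, 64, endpoint=False)
v_coarse = coarsen_cell_average(np.sin(2 * np.pi * x_fine))
```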
The procedure for applying the learned data-driven 3-step time integration scheme is shown in Figure 2, where the green arrows indicate the advancement in time. At the beginning, the solutions \(v^{1}\) and \(v^{2}\) are calculated from the initial solution \(v^{0}\) using the Runge-Kutta method. The black arrows indicate the application of the data-driven time integration scheme, while the red arrows indicate the calculation of the right-hand-side terms.
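The following sketch illustrates one step of the learned scheme under the sign convention we infer from the initial biases (with which \(\alpha_{2}=-1\), \(\alpha_{1}=\alpha_{0}=0\) reproduces Eq. (12)); the exact form of Eq. (2) should be consulted if it differs.

```python
def advance(v_hist, F_hist, coeffs, dt):
    """One step of the learned 3-step scheme.
    v_hist = [v^{n-2}, v^{n-1}, v^n], F_hist = [F_{n-2}, F_{n-1}, F_n],
    coeffs = (a0, a1, a2, b0, b1, b2). Assumed convention:
        v^{n+1} + a2 v^n + a1 v^{n-1} + a0 v^{n-2}
            = dt * (b2 F_n + b1 F_{n-1} + b0 F_{n-2}),
    which reduces to Eq. (12) for (0, 0, -1, 5/12, -4/3, 23/12)."""
    a0, a1, a2, b0, b1, b2 = coeffs
    return (-(a2 * v_hist[2] + a1 * v_hist[1] + a0 * v_hist[0])
            + dt * (b2 * F_hist[2] + b1 * F_hist[1] + b0 * F_hist[0]))

# Bootstrapping: v^1 and v^2 come from a Runge-Kutta starter (not shown);
# thereafter the NN supplies coeffs from v^n and the histories slide forward.
```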
## 3 Results
In this section, we present the results from the data-driven time integration schemes and compare them with those from the Runge-Kutta method of order 3(2) with adaptive time-stepping. In subsection 3.1, we apply the data-driven time integration schemes with the finite volume method to solve 1-D heat equations with different thermal diffusivities on a grid \(4\times\) coarser than the reference grid. We then consider first-order wave equations with varying wave speeds in subsection 3.2, and analyze why data-driven time integration schemes can achieve highly accurate results. In subsection 3.3, we test the data-driven temporal schemes on the Burgers' equation on a grid \(32\times\) coarser than the reference grid, with the spatial derivatives approximated using the data-driven schemes proposed in [31].

Figure 1: A schematic for the training procedure of a data-driven 3-step linear multistep scheme.

Figure 2: A schematic for the procedure of using the learned 3-step linear multistep scheme.
### 1-D Heat Equation
The heat equation considered here is
\[\begin{cases}\frac{\partial v}{\partial t}=\frac{\partial}{\partial x}\left( \lambda\frac{\partial v}{\partial x}\right),&0\leqslant x\leqslant 1,\ 0\leqslant t\leqslant 1,\\ v(x,0)=\sin(2\pi x),&0\leqslant x\leqslant 1,\end{cases} \tag{13}\]
where \(\lambda\) is the thermal diffusivity, which expresses how readily a system tends toward a uniform temperature during heating or cooling [35]. This equation describes heat conduction or diffusion in a one-dimensional isotropic medium. Eq. (13) employs the periodic boundary condition with a domain of size \(L=1\). The exact solution is
\[v(x,t)=\mathrm{e}^{-4\pi^{2}\lambda t}\sin(2\pi x). \tag{14}\]
The thermal diffusivity \(\lambda\) here is set in the range of \(0.1\) to \(1\). We apply the data-driven time integration schemes trained under a particular thermal diffusivity to other thermal diffusivities in this range. The first-order finite-volume method is employed for spatial discretization during both training and testing.
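As an illustration, a standard central flux-difference form of the finite-volume right-hand side for Eq. (13) with periodic boundaries is sketched below; the paper does not spell out its exact flux formula, so this is one plausible choice.

```python
import numpy as np

def heat_rhs(v, lam, dx):
    """Right-hand-side term F = d/dx(lam * dv/dx) for the 1-D heat equation on
    cell averages with periodic boundaries, built from face fluxes."""
    flux = lam * (np.roll(v, -1) - v) / dx   # diffusive flux at right cell faces
    return (flux - np.roll(flux, 1)) / dx    # flux divergence per cell
```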
The three data-driven time integration schemes with/without constraints (i.e., the unconstrained, the semi-constrained, and the fully-constrained schemes) are trained on a training set obtained by coarsening a fine-grid solution by a factor of \(4\) for \(\lambda=0.5\), and then tested on \(\lambda\in\{0.1,0.2,0.3,0.4,0.6,0.7,0.8,0.9,1.0\}\). We assume that if the time integration schemes have lower errors at, for example, \(\lambda=0.7\) and \(0.8\), then they are highly likely to perform better for \(\lambda\in[0.7,0.8]\) as well. The fine grid has \(64\) grid points in the \(x\) direction for \(0\leqslant x\leqslant 1\), which corresponds to \(16\) grid points on the \(4\times\) coarse grid. The size of the time step is set to \(0.0001\).
Specifically, to learn the data-driven 3-step linear multistep model, the initial weights are uniformly distributed in the range from \(-0.0005\) to \(0.0005\) for the unconstrained model and the semi-constrained model, with the same initial biases \([0,0,-1,5/12,-4/3,23/12]\) for \(\alpha_{0,1,2}\) and \(\beta_{0,1,2}\). With the consistency condition enforced, the number of outputs of the fully-constrained model is \(4\), i.e., \(p\), \(q\), \(\beta_{0}\) and \(\beta_{1}\). Considering that \(p,q=0\) for the generating polynomial \(\rho(\chi)\) of Eq. (12) when written in the form of Eq. (9), the initial biases of the fully-constrained model are set as \([0,0,5/12,-4/3]\), and the initial weights are uniformly distributed in the range from \(-0.005\) to \(0.005\). The learning rate is \(10^{-7}\) for both the unconstrained and semi-constrained models and \(5\times 10^{-7}\) for the fully-constrained model. The three models are trained with the Adam optimizer for \(8000\) steps. The \(\gamma\) in the loss function Eq. (11) is set to \(10^{-18}\) for the semi-constrained model and \(10^{-12}\) for the fully-constrained model, so that when the output coefficients lie inside the feasible domain, the penalty term has minimal impact on the main part of the loss, i.e., the prediction error.
Figure 3 shows the results of the 1-D heat equation for \(\lambda=0.3,0.7\) and \(1.0\). Since the solution decays to a nearly constant state for \(t>0.5\), only the solution and error for \(t\) from \(0\) to \(0.5\) are plotted here. The results for a larger range of \(t\) and other \(\lambda\) can be found in Table 4 in the Appendix. From Figure 3 and Table 4, it can be seen that the unconstrained and semi-constrained models have lower values of mean square error (MSE) and mean absolute error (MAE) than the Runge-Kutta method. Little difference is observed between the predictions from the fully-constrained model and the Runge-Kutta method. We believe this is because the fully-constrained model needs to find coefficients satisfying both the root condition and the consistency condition, leaving almost no space for optimization.
As can be seen from Table 4 and Figure 3, while the unconstrained model works well for \(\lambda=0.3\) to \(1\) (especially for \(\lambda\) close to the training value of \(0.5\)), it performs the worst for \(\lambda=0.1\) and \(0.2\), with an MSE one to two orders of magnitude greater than the Runge-Kutta method. This is understandable, as the unconstrained model has the space to learn a mechanism for error reduction, but with a narrow scope of generalization as a result of not satisfying certain mathematical constraints.
In contrast, the generalisation ability of the semi-constrained model is well demonstrated, with its predictions closer to the exact solution when compared with the Runge-Kutta method for the cases with \(\lambda\) from \(0.2\) to \(1\). This can be explained by the fact that the root constraint enhances the stability of the learned scheme, and meanwhile leaves room for optimisation.
### 1-D Wave Equation
In this subsection one-dimensional first-order wave equation (i.e. linear convection equation [35]) with wave speed \(c\) is considered, which can be written as
\[\begin{cases}\frac{\partial v}{\partial t}+c\frac{\partial v}{\partial x}=0,\quad 0\leqslant x\leqslant 1,\ 0\leqslant t\leqslant 1,\\ v(x,0)=\sin(4\pi x),\quad 0\leqslant x\leqslant 1,\end{cases} \tag{15}\]
where the wave speed \(c>0\). The wave equation describes the propagation of a wave in a homogeneous medium with a velocity \(c\) in the \(x\) direction. Eq. (15) employs the periodic boundary condition with the domain of size \(L=1\). The exact solution is
\[v(x,t)=\sin\left[4\pi(x-ct)\right]. \tag{16}\]
The wave speed \(c\) here is set from \(0.1\) to \(1\). The spatial discretization scheme is the second-order finite-volume method. The training set is generated by coarsening the numerical solution on the fine grid with \(c=0.5\). The test set covers \(c\in\{0.1,0.2,0.3,0.4,0.6,0.7,0.8,0.9,1.0\}\). The fine grid has \(64\) points in the \(x\)-direction, and the cell-averaging method is employed to coarsen the fine-grid solution and obtain the training data. The size of the time step is set to \(0.0001\).
To train the model, the initial weights are set uniformly distributed in the range of \([-0.0005,0.0005]\) for the unconstrained model, \([0,0.0005]\) for the semi-constrained model, and \([-0.005,0.005]\) for the fully-constrained model. The initial biases, learning rate, and optimizer are identical to those of the 1-D heat equation case, but training lasts for 10000 steps.

Figure 3: Test results of the 1-D heat equation for (A) \(\lambda=0.3\), (B) \(\lambda=0.7\) and (C) \(\lambda=1.0\) and their curves of error over time (D, E, F, G, H, I). The subgraph on the top of (A, B, C) is the exact solution obtained from Eq. (14), followed by realizations of solutions and absolute error distributions from different time integration schemes. The numbers in brackets on the right of (A, B, C) are the mean square error averaged over the whole domain. The error shown in (D, E, F) is the mean squared error obtained by averaging the error in space \(([0,1])\) and time \(([0,t])\). The error shown in (G, H, I) is the mean absolute error averaged over space \(([0,1])\) at instant \(t\). (D, G) are the error curves for (A): \(\lambda=0.3\), (E, H) for (B): \(\lambda=0.7\), and (F, I) for (C): \(\lambda=1.0\).
Figure 4 shows the results of the 1-D wave equation for \(c=0.2,0.7\) and 1.0. The MSE and MAE for other values of \(c\) can be found in Table 5 in the Appendix. Here we clearly see that the coarse-grained prediction of the unconstrained model for \(c\) from 0.1 to 1 is the closest to the exact solution among the considered methods, with the values of MSE and MAE reduced by at least one order of magnitude in comparison with the Runge-Kutta method and the third-order Adams method. Significant and consistent improvements are also observed for the semi-constrained model, with at least about a 90 percent reduction in MSE and a 60 percent reduction in MAE for each \(c\). The fully-constrained model, on the other hand, predicts almost the same solutions as the Runge-Kutta method and the third-order Adams method. For the sake of comparison, the third-order Adams method is used here as the baseline method.
In the following, we further analyze the error of the 1-D wave equation to provide some understanding of the improvement obtained from the learned time integration scheme. As shown by Fourier analysis, the spatial discretization scheme introduces significant dispersion errors on coarse grids for the wave equation, manifesting here as a gradual phase lag during propagation. Figure 5 illustrates the solutions of the different models for different \(c\) at \(t=1\). It is evident that the phase error increases as \(c\) increases. Among them, the Runge-Kutta method, the third-order Adams method, and the fully-constrained model lag in phase the most.
Specifically, we aim to investigate how the coefficients of the unconstrained and semi-constrained models lead to error reductions. The third-order explicit linear multistep method combined with a second-order finite-volume method corresponds to a four-level explicit scheme. Assuming that the solutions \(v^{n-2},v^{n-1}\) and \(v^{n}\) from the three previous time steps are given by the exact solution Eq. (16), we examine the phase error introduced by the discretization scheme after advancing one time step to \(n+1\).

Figure 4: Test results of the wave equation for (A) \(c=0.2\), (B) \(c=0.7\) and (C) \(c=1.0\) and their curves of error over time (D, E, F, G, H, I). The subgraph on the top of (A, B, C) is the exact solution obtained from Eq. (16), followed by realizations of solutions and absolute error distributions from different time integration schemes. The numbers in brackets on the right of (A, B, C) are the mean square error averaged over the whole domain. The error shown in (D, E, F) is the mean squared error obtained by averaging the error in space \(([0,1])\) and time \(([0,t])\). The error shown in (G, H, I) is the mean absolute error averaged over space \(([0,1])\) at instant \(t\). (D, G) are the error curves for (A): \(c=0.2\), (E, H) for (B): \(c=0.7\), and (F, I) for (C): \(c=1.0\).
It can be proven that the phase displacement per time step is a constant for the coefficients given by a linear multistep method with the initial condition in Eq. (15). Figure 6 shows the phase displacement per time step for cases with different \(c\). Since the coefficients from the data-driven models vary with the inputs, we plot the time-averaged phase displacements for these models. Ideally, the phase would move by \(c\Delta t\) in one time step, so the exact slope equals \(\Delta t\), as shown in Figure 6. It is seen that the phase displacement from the unconstrained model is the closest to the exact one for different values of \(c\). Improvements are also observed for the semi-constrained model. For the fully-constrained model and the third-order Adams method, however, phase displacements lower by approximately 10 percent are observed, consistent with the observations in Figure 5. Overall, Figure 6 demonstrates that the learned time integration scheme is capable of correcting the numerical phase displacement, which reduces the dispersion error caused by the spatial discretization. More details on the mathematical derivations can be found in Appendix C.
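The per-step phase displacement can be estimated from the phase change of the dominant Fourier mode, as in the following NumPy sketch (our construction; the paper's exact measurement procedure is detailed in Appendix C).

```python
import numpy as np

def phase_displacement(v_prev, v_next, mode=2, L=1.0):
    """Estimate how far the dominant Fourier mode moved in one time step.
    For v(x,0) = sin(4*pi*x) on L = 1, the dominant mode index is 2.
    Returns the displacement in x-units; the exact value would be c*dt."""
    c_prev = np.fft.rfft(v_prev)[mode]
    c_next = np.fft.rfft(v_next)[mode]
    dphi = np.angle(c_next / c_prev)          # wrapped phase change of the mode
    return -dphi * L / (2 * np.pi * mode)     # rightward shift is positive

# sanity check against the exact solution Eq. (16)
x = np.linspace(0, 1, 64, endpoint=False)
c, dt = 0.7, 1e-4
print(phase_displacement(np.sin(4 * np.pi * x),
                         np.sin(4 * np.pi * (x - c * dt))))  # ~ c*dt = 7e-5
```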
### 1-D Burgers' Equation
In this section, we present the results of the Burgers' equation from the data-driven time integration schemes and compare them with those from the Runge-Kutta method of order 3(2). Three different test settings are considered: different forcing terms, long integration times, and a large computational domain. The performance of the data-driven time integration schemes is evaluated by applying them to simulations on a coarse grid that is \(32\times\) coarser than the fine grid (512 grid points) employed for generating the training data. In the \(32\times\) coarse-grid simulations, the data-driven spatial discretization schemes trained using the method of reference [31] (with slightly different numbers of training steps and learning rates) are employed.
The considered one-dimensional Burgers' equation is in the following form,
\[\frac{\partial v}{\partial t}+\frac{\partial}{\partial x}\left(\frac{v^{2}}{2} -\eta\frac{\partial v}{\partial x}\right)=f(x,t), \tag{17}\]
where \(\eta=0.01\) is the viscosity and \(f\) is the forcing term. The initial condition is \(v(x,t=0)=0\). The periodic boundary condition is applied in \(x\)-direction. The forcing term \(f\) is given as follows [31],
\[f(x,t)=\sum_{i=1}^{20}A_{i}\sin(\omega_{i}t+2\pi l_{i}x/L+\phi_{i}), \tag{18}\]
where \(A_{i}\in[-0.5,0.5]\), \(\omega_{i}\in[-0.4,0.4]\), \(\phi_{i}\in[0,2\pi]\) and \(l_{i}\in\{3,4,5,6\}\). The initial weights are uniformly distributed in the range of \(-0.0001\) to \(0.0001\) and \(-0.001\) to \(0.001\) for the unconstrained model and the semi-constrained model, respectively, with the same initial biases \([0,0,-1,1/2,-4/3,23/12]\). The initial biases of the fully-constrained model are set as \([0,0,1/3,-4/3]\), and the initial weights are uniformly distributed in the range of \(-0.01\) to \(0.01\). The \(\gamma\) in the loss function Eq. (11) is set to \(10^{-15}\) for the semi-constrained model and \(10^{-12}\) for the fully-constrained model. The three coarse-grained solutions of the Burgers' equation for \(t\in[0,20]\) with different forcing terms in the training set do not intersect with the test cases.

Figure 5: The snapshots of the wave equation for \(c\in\{0.1,0.2,0.3,0.4,0.6,0.7,0.8,0.9,1.0\}\) at \(t=1\). The horizontal axis represents the spatial position, and the vertical axis represents the coarse-grained waves predicted by different methods at \(t=1\).
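A possible NumPy sketch for drawing a random forcing realization of Eq. (18) is given below, assuming the \(\omega_{i}\) are drawn independently per term as in reference [31].

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_forcing(L=2 * np.pi, n_terms=20):
    """Draw one random forcing realization f(x, t) for Eq. (18)."""
    A = rng.uniform(-0.5, 0.5, n_terms)
    omega = rng.uniform(-0.4, 0.4, n_terms)
    phi = rng.uniform(0.0, 2 * np.pi, n_terms)
    l = rng.choice([3, 4, 5, 6], n_terms)

    def f(x, t):
        # x is a 1-D grid array, t a scalar time; sum over the 20 modes
        return np.sum(A[:, None] * np.sin(omega[:, None] * t
                      + 2 * np.pi * l[:, None] * x[None, :] / L
                      + phi[:, None]), axis=0)
    return f
```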
First, we examine the performance of the three data-driven time integration schemes using test cases with forcing terms different from the training dataset. Figure 7 shows the results from two typical cases, with (B, D, green-background title) and without (A, C, blue-background title) significant improvements, respectively. The predictions from the different time integration schemes are essentially the same at early times. As time advances, the unconstrained and semi-constrained models perform better than the other two, as measured by MSE and MAE. Little difference is observed between the predictions from the fully-constrained model and the Runge-Kutta method, as not much room is left for improvement when the constraints for deriving a third-order method are fully enforced during training.
A more demanding test for the proposed data-driven time integration schemes is their performance over an integration time longer than that of the training dataset. Figure 8 compares the predictions from the three learned time integration schemes trained on a temporal domain with \(t\in[0,20]\) with those from the explicit Runge-Kutta method on a temporal domain with \(t\in[0,100]\). It is clear that the unconstrained and semi-constrained models outperform the fully-constrained model and the Runge-Kutta method, as shown by the MSE and MAE over a long integration time. To systematically evaluate the performance of the data-driven time integration schemes, 20 cases with vastly different random forcing terms are carried out. The exact values of the errors and the percentages of error reduction are given in Appendix D. As shown in Figure 9, the errors of the learned unconstrained (purple diamond) and semi-constrained (orange square) discretization schemes, which are close to each other, are less than those of the Runge-Kutta method (blue triangle) in terms of MSE and MAE for most cases. For the fully-constrained discretization scheme, the values of MSE and MAE (green pentagon) are approximately the same as those from the Runge-Kutta method.
Furthermore, the differences between the errors of the three data-driven time integration schemes and the errors of the Runge-Kutta method are analyzed using paired t-tests. The p-values for the three schemes are shown in Table 1. At a significance level of \(0.01\), the errors of the learned unconstrained and semi-constrained time integration schemes are significantly different from (lower than) the errors of the Runge-Kutta method.
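Such a comparison can be reproduced with scipy's paired t-test; the sketch below uses placeholder error arrays standing in for the 20 per-sample MSEs of Appendix D.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder per-sample MSEs standing in for the 20 values of Appendix D.
mse_rk = rng.uniform(0.008, 0.016, 20)                 # Runge-Kutta baseline
mse_uncon = mse_rk - rng.uniform(0.001, 0.004, 20)     # learned scheme errs less

t_stat, p_value = stats.ttest_rel(mse_uncon, mse_rk)
print(f"p = {p_value:.3e}")   # significant at the 0.01 level if p < 0.01
```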
Figure 6: Phase displacement per time step for different time integration schemes for the 1-D wave equation with different values of \(c\). The grey dashed lines, from top to bottom, are fitted by the unconstrained, semi-constrained, fully-constrained, and third-order Adams methods, with the latter two almost overlapping. The two pink dashed lines with \(k=1.000e-4\) indicate the slope of the exact solution, which equals the size of the time step. The values in the grey box correspond to the slopes of the various lines in the figure. Since \(c=0.5\) is used to generate the training data, the corresponding results are not included.
Overall, the test results from these cases with different forcing terms and a long integration time (\(t\in[0,100]\) compared with \(t\in[0,20]\) for the training dataset) demonstrate that the data-driven unconstrained and semi-constrained time integration schemes can reduce the error caused by the spatial coarse-graining. The fully-constrained model, however, can hardly reduce the error; we attribute this to the fully enforced constraint conditions, which leave very little room for improvement when training the model.
So far, the size of the spatial domain (\([0,2\pi]\)) employed in the test cases is the same as that of the training dataset. In the following, we test the data-driven time integration schemes on cases with a larger spatial domain, in which only part of the spatial points are used as input to determine the coefficients of the time integration schemes. Specifically, a \(10\times\) larger domain is employed: the periodic boundary condition is applied for \(x\in[0,20\pi]\), and the forcing term \(f\) is modified as in reference [31]. This poses the challenge of selecting the grid points whose solutions are fed to the data-driven model. In the tests of this work, the numerical solution on the first 16 grid points, i.e., for \(x\in[0,2\pi]\), is fed into the data-driven time integration schemes. The length of the integration time is 40.
Figure 10 illustrates one experiment of prediction on \(x\in[0,20\pi]\). It is seen that the unconstrained and semi-constrained models still outperform the Runge-Kutta method and the fully-constrained model, maintaining lower values of MSE and MAE as time advances. Figure 11 shows the errors of 10 tests on the domains \([0,20\pi]\times[0,40]\) and \([0,20\pi]\times[0,100]\), respectively. An overall improvement is obtained, although the performance deteriorates for several cases as the time advances further to 100.
Given the small initial weights of our neural network, the time-varying coefficients are approximately constant when rounded to several decimal places. For the data-driven unconstrained and semi-constrained time integration schemes, these constant optimized coefficients are used to recompute the above 20 samples on the domain \(t\in[0,100]\), \(x\in[0,2\pi]\). Paired t-tests similar to the above are carried out. The exact values of the errors are shown in Appendix D and the p-values
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Un-Con & Semi-Con & Full-Con \\ \hline Mean MSEs & 0.009832659 & 0.010041047 & 0.011981383 \\ MSE p-value & 0.000147995 & 0.000201141 & 0.087204851 \\ MAE p-value & 0.000111214 & 0.000180005 & 0.008416531 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The average of the mean square errors and the p-values of paired t-tests on the mean square error and the mean absolute error for the three data-driven time integration schemes (the mean MSE is 0.011989284 for the Runge-Kutta method).
Figure 7: Test results for two distinct forcing terms for (A, B) the realizations of solutions and error distribution and (C, D) the corresponding mean square error curves and mean absolute error curves. The employed grid is 32 times coarser than the reference fine grid. The numbers in brackets below each subgraph in (A, B) are the mean square error averaged over the whole domain. The error shown in (C, D) is obtained by averaging the error in space \(([0,2\pi])\) and time \(([0,t])\)
are shown in Table 2. It is seen that the constant optimized coefficients produce results similar to those of the time-varying coefficients. Although the p-values change slightly, they can still be considered acceptable at the significance level of 0.01. From the perspective of computational efficiency, the obtained constant optimized coefficients can therefore be employed.
Last but not least, we attempt to explore the reason for the improved performance of the data-driven time integration schemes. Can the improvements be obtained by increasing the scheme's order of accuracy? To probe this, the Adams-Bashforth schemes of different orders of accuracy shown in Table 3 are tested and compared with the constant optimized coefficients obtained from the unconstrained model.

Figure 8: Test results for a long integration time with \(t\in[0,100]\) for (A) a realization of the solution and the corresponding error distribution in space and time, (B,D) the mean square error curves, and (C,E) the mean absolute error curves. The employed grid is 32 times coarser than the reference fine grid. The error shown in (B, C) is obtained by averaging the error in space (\([0,2\pi]\)) and time (\([0,t]\)). The error shown in (D,E) is the error averaged over space (\([0,2\pi]\)) at instant \(t\).

Figure 9: Mean square error and mean absolute error of 20 samples for comparison with the time periods of 40 and 100. (A) is the mean square error of these samples (\(0\leq t\leq 40\)) solved by four different time integration schemes. (B) is the mean absolute error of the samples (\(0\leq t\leq 40\)) solved by four different time integration schemes. (C) and (D) are the same as (A) and (B), but for \(0\leq t\leq 100\).
As shown in Figure 12 for a representative sample, increasing the order of accuracy of the time integration scheme does not reduce the error as time advances. The low spatial resolution employed in coarse-grained simulations is the major source of error. The data-driven time integration scheme provides a new mechanism for canceling the error due to the coarse graining in space, at the price of sacrificing some properties and/or conditions employed for developing the conventional time integration schemes. Therefore, it is not surprising that imposing fewer or no constraints during model training, which leaves more space for optimizing the scheme coefficients, achieves an overall better performance. However, it is likely that the data-driven time integration schemes learned for certain types of cases (e.g., one specific coarsening or one specific PDE) may not improve the performance for other cases, as the learned schemes are uniquely tuned to reduce the error of a specific grid resolution for a specific PDE discretized in space using a specific scheme. In the worst case, the data-driven schemes have to be learned case by case, which is difficult for problems requiring a large amount of computational resources, e.g., high-Reynolds-number turbulent flows. This issue of generalization ability exists for almost all data-driven models, and further investigations need to be carried out in future work.

\begin{table}
\begin{tabular}{l c c} \hline \hline & Const-coef (Un-Con) & Const-coef (Semi-Con) \\ \hline Mean MSEs & 0.009864970 & 0.009423226 \\ MSE p-value & 0.000160390 & 0.000341949 \\ MAE p-value & 0.000120366 & 0.000350297 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The average of the mean square errors and the p-values of paired t-tests on the mean square error and the mean absolute error for the constant optimized coefficients obtained from the unconstrained and semi-constrained models.

Figure 10: Test results on a 10\(\times\) larger spatial domain for (A) a realization of the solution, (B) the corresponding error distribution in space and time, (C,E) the mean square error curves, and (D,F) the mean absolute error curves. The spatial resolution is the same as in the 32\(\times\) coarse test, for the spatial domain \([0,20\pi]\). The error shown in (C, D) is obtained by averaging the error in space (\([0,20\pi]\)) and time (\([0,t]\)). The error shown in (E,F) is the error averaged over space (\([0,20\pi]\)) at instant \(t\).
## 4 Conclusions
In this work, we proposed to learn time integration schemes using neural networks for solving partial differential equations on coarse grids, and tested the learned 3-step linear multistep method on the one-dimensional heat equation, the one-dimensional wave equation, and the one-dimensional Burgers' equation. During the training of the model, mathematical constraints, i.e., the consistency condition and the root condition (stability condition), are enforced. A backpropagation neural network with three layers and a low learning rate was employed, with the initial values of the coefficients given as those of the conventional time integration schemes with perturbations. Three distinct time integration schemes were trained: the unconstrained model, the semi-constrained model with the root condition enforced, and the fully-constrained model with both root and consistency conditions enforced.

\begin{table}
\begin{tabular}{l c c} \hline \hline k-step & Schemes & Truncation error \\ \hline \(k=3\) & \(v^{n+1}=v^{n}+\frac{\Delta t}{12}\left(23F_{n}-16F_{n-1}+5F_{n-2}\right)\) & \(\frac{3}{8}(\Delta t)^{4}\frac{d^{4}v}{dt^{4}}\big|_{t=\zeta_{n}}\) \\ \(k=4\) & \(v^{n+1}=v^{n}+\frac{\Delta t}{24}\left(55F_{n}-59F_{n-1}+37F_{n-2}-9F_{n-3}\right)\) & \(\frac{251}{720}(\Delta t)^{5}\frac{d^{5}v}{dt^{5}}\big|_{t=\zeta_{n}}\) \\ \(k=5\) & \(v^{n+1}=v^{n}+\frac{\Delta t}{720}\left(1901F_{n}-2774F_{n-1}+2616F_{n-2}-1274F_{n-3}+251F_{n-4}\right)\) & \(\frac{95}{288}(\Delta t)^{6}\frac{d^{6}v}{dt^{6}}\big|_{t=\zeta_{n}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Adams-Bashforth schemes for \(k=3,4,5\) and the corresponding expressions for the truncation errors.

Figure 12: Test results of Adams-Bashforth (k=3,4,5) and constant optimized coefficients (Un-Con) for (A) a realization of the solution and the corresponding error distribution in space and time, (B,D) the mean square error curves, and (C,E) the mean absolute error curves. The meaning of the subgraphs is the same as in Figure 8.
The test results showed that the time integration schemes learned using the semi-constrained and unconstrained models are capable of reducing the mean square error (MSE) and the mean absolute error (MAE) for most cases, with error reductions as high as an order of magnitude for the 1-D heat and wave equations, and error reductions of up to 35% to 40% for the 1-D Burgers' equation. For the fully-constrained model, the prediction errors are close to those of the conventional time integration schemes.
Analysis of the 1-D wave equation case revealed that the learned scheme effectively mitigates the dispersion error induced by the coarse grid. Further analysis of the results for the Burgers' equation indicates that, rather than increasing the order of accuracy, the data-driven model learns a mechanism to offset the error due to low spatial resolution. Such a mechanism is not readily analyzed using existing methods such as Taylor analysis or Fourier analysis. To develop discretization schemes that exhibit strong generalization and interpretability, future research should focus on developing mathematical theories and tools for analyzing and evaluating these learned schemes.
## Acknowledgment
This work was funded by the NSFC Basic Science Center Program for "Multiscale Problems in Nonlinear Mechanics" (NO. 11988102), National Natural Science Foundation of China (NO. 12172360), Institute of Mechanics CAS, and Chinese Academy of Sciences.
|
2306.05785 | End-to-End Neural Network Compression via $\frac{\ell_1}{\ell_2}$
Regularized Latency Surrogates | Neural network (NN) compression via techniques such as pruning, quantization
requires setting compression hyperparameters (e.g., number of channels to be
pruned, bitwidths for quantization) for each layer either manually or via
neural architecture search (NAS) which can be computationally expensive. We
address this problem by providing an end-to-end technique that optimizes for
model's Floating Point Operations (FLOPs) or for on-device latency via a novel
$\frac{\ell_1}{\ell_2}$ latency surrogate. Our algorithm is versatile and can
be used with many popular compression methods including pruning, low-rank
factorization, and quantization. Crucially, it is fast and runs in almost the
same amount of time as single model training; which is a significant training
speed-up over standard NAS methods. For BERT compression on GLUE fine-tuning
tasks, we achieve $50\%$ reduction in FLOPs with only $1\%$ drop in
performance. For compressing MobileNetV3 on ImageNet-1K, we achieve $15\%$
reduction in FLOPs, and $11\%$ reduction in on-device latency without drop in
accuracy, while still requiring $3\times$ less training compute than SOTA
compression techniques. Finally, for transfer learning on smaller datasets, our
technique identifies $1.2\times$-$1.4\times$ cheaper architectures than
standard MobileNetV3, EfficientNet suite of architectures at almost the same
training cost and accuracy. | Anshul Nasery, Hardik Shah, Arun Sai Suggala, Prateek Jain | 2023-06-09T09:57:17Z | http://arxiv.org/abs/2306.05785v2 | End-to-End Neural Network Compression via \(\frac{\ell_{1}}{\ell_{2}}\) Regularized Latency Surrogates
###### Abstract
Neural network (NN) compression via techniques such as pruning, quantization requires setting compression hyperparameters (_e.g.,_ number of channels to be pruned, bitwidths for quantization) for each layer either manually or via neural architecture search (NAS) which can be computationally expensive. We address this problem by providing an end-to-end technique that optimizes for model's Floating Point Operations (FLOPs) or for on-device latency via a novel \(\frac{\ell_{1}}{\ell_{2}}\) latency surrogate. Our algorithm is versatile and can be used with many popular compression methods including pruning, low-rank factorization, and quantization. Crucially, it is fast and runs in almost the same amount of time as _single model training_; which is a significant training speed-up over standard NAS methods. For BERT compression on GLUE fine-tuning tasks, we achieve \(50\%\) reduction in FLOPs with only \(1\%\) drop in performance. For compressing MobileNetV3 on ImageNet-1K, we achieve \(15\%\) reduction in FLOPs, and \(11\%\) reduction in on-device latency _without drop in accuracy_, while still requiring \(3\times\) less training compute than SOTA compression techniques. Finally, for transfer learning on smaller datasets, our technique identifies \(1.2\times\)-\(1.4\times\) cheaper architectures than standard MobileNetV3, EfficientNet suite of architectures at almost the same training cost and accuracy.
## 1 Introduction
Large-scale neural networks consistently provide state-of-the-art performance on complex learning tasks [1; 2; 3]. But they place heavy burden on compute resources such as battery, memory or processor making them hard to deploy on edge devices such as phones, cameras and wearables. Several recent works have designed techniques to compress ML models and make them efficient for inference. However, as detailed below, many of these techniques are hard to use in practice, and often achieve sub-optimal accuracy _vs_ inference time trade-offs.
**Hyperparameter search for compression.** Existing works typically rely on one of the following building blocks to design efficient models: unstructured weights sparsity [4; 5], pruning entire neurons or low-rank factorization [6], quantization [7], distillation [8]. Figuring out an optimal way to combine these building blocks (or to figure out hyper-parameters such as amount of sparsity associated with each block) while satisfying a global latency/FLOPs/resource constraint is difficult and involves a combinatorial search. This problem is further exacerbated when multiple building blocks are used for model compression (_e.g.,_ simultaneous low rank factorization, sparsity/pruning of weights).
Over the past few years, there has been a large body of work that addresses the problem of finding hyperparameters for model compression. Existing literature in this space can be broadly categorized as: (a) methods to find hyperparameters of a _specific_ building block such as unstructured pruning of weights, (b) Neural Architecture Search (NAS) techniques to find hyperparameters of _any_ efficient block. The first set of techniques is naturally limited, but even for the specific blocks considered, their performance on real-world benchmarks is unstable and can in fact be sub-optimal compared to the more general approach that we propose in this work (see the left plot in Figure 1). The NAS-based techniques are much more generally applicable, but their computational cost is prohibitive as they typically don't exploit any specific attributes of popular efficient blocks such as sparsity and pruning.
**Neuron Pruning**: Among the category (a) techniques mentioned above, a prominent line of work has focused on unstructured pruning of weights with non-uniform budget allocation across layers [4, 9, 10, 5]. However, any gain in FLOPs using unstructured pruning is hard to translate to real latency gain as modern hardware - like GPUs, TPUs - are more geared towards dense matrix operations. So it is more fruitful to focus on neuron pruning, which removes entire neurons/channels, and low-rank factorization of weights, which is closely related to neuron pruning. Recent techniques in this line of work add a latency/FLOPs regularizer to the standard cross entropy loss [11, 12] to bias the model towards lower number of neurons. Unfortunately the resulting objective is discrete and difficult to optimize. To alleviate this, existing works have designed continuous surrogates that are more amenable to SGD style optimization. These methods either work in the space of probability distributions over pruned models and optimize the "expected objective" [12, 13, 6] or replace the discontinuous FLOPs regularizer with a continuous surrogate such as \(\ell_{1}\) norms of the weights of the network [11]. However, the former class of techniques are often unstable, hard to implement in practice, and empirical studies indicate that their performance is similar to that of simple magnitude based pruning [14] (also see left plot of Fig. 1). Furthermore, as we show in this work, the latter class of techniques fail to enforce sparsity in the presence of batch, layer normalization (see Section 3).
**NAS**: Several works in category (b) formulate model compression as a black-box Neural Architecture Search (NAS) problem and rely on state-of-the-art NAS techniques to search for efficient models [15, 16, 17, 18, 19]. These techniques directly take the latency/FLOPs into account and have the potential to identify the optimal per-layer budget allocation for a wide variety of efficient blocks/compression mechanisms. However, these approaches are often computationally expensive as they take a blackbox view of the problem and perform combinatorial search over the space of architectures. Despite recent advances such as TuNAS [18] and DARTS [20], these techniques can be an order of magnitude slower and less accurate than our proposed method (see Fig 1).
**Our Approach**: In this work, we propose an approach that sits right in the middle of the two categories mentioned above. That is, our approach applies to a large class of efficient building blocks - like unstructured sparsity, neuron pruning, quantization - for which the FLOPs computation can be written with a continuous surrogate (see Table 1). Furthermore, to ensure that our FLOPs and latency regularizers work even in the presence of batchnorm and layernorm, we propose a novel surrogate based on the \(\frac{\ell_{1}}{\ell_{2}}\) norm. While our surrogates are continuous, they are non-differentiable, and in such cases standard optimizers such as SGD and Adam can be quite slow to converge [24]. To overcome this, we propose a projection operation on the mask variables after each SGD step. Our proposed method speeds up convergence and also outputs _exact sparse solutions_, thus eradicating the need for post-hoc thresholding, while being simple enough not to increase training time significantly.

Figure 1: Left plot compares various techniques for BERT compression on GLUE tasks (averaged across tasks). \(x\)-axis is the relative number of FLOPs as compared to BERTBASE. \(y\)-axis is the relative drop in accuracy from the baseline. Pruning SOTA numbers are taken from [21], while distillation baselines are from [22; 23]. Right plot compares various techniques for MobileNetV3 compression on ImageNet-1K dataset. _MobileNetV3_ corresponds to MobileNetV3 models with different width multipliers. _TuNAS, MorphNet_ are SOTA techniques for scalable compression. TuNAS takes a blackbox approach to model compression, whereas MorphNet takes a more direct approach by optimizing a FLOPs regularized objective.
We implement our algorithm with multiple building blocks including pruning, low-rank factorization, and quantization, and apply it to multiple problems in image classification and NLP. In particular, we demonstrate the effectiveness of our technique for MobileNetV3 compression on ImageNet (see Fig. 1), where our method can learn an architecture with up to 15% (\(11\%\)) lower FLOPs (latency) on Pixel 6 mobile phones, without any drop in accuracy. Here our approach is more accurate than MorphNet, a SOTA technique which focuses exclusively on neuron pruning, as well as TuNAS, a SOTA NAS technique. Furthermore, in terms of training time, our method is \(3\times\) cheaper than TuNAS. We would like to highlight that MobileNetV3 is a highly optimized architecture found using efficient NAS techniques [25], and our technique is able to compress this architecture further.
One exciting application of our work is that we can apply it to optimize certain "foundational" baseline models for individual fine-tuning tasks. For example, for compression of BERT on GLUE benchmarks, our method achieved \(40-50\%\) reduction in FLOPs with only \(1\%\) drop in accuracy (see Fig 1). Moreover, our technique dominates standard model compression baselines. Similarly for smaller vision classification tasks, our technique compresses MobileNetV3, EfficientNet suite of architectures and identifies \(1.2\times\)-\(1.4\times\) cheaper architectures without significant loss in accuracy (see Figure 3). We would like to note that all these results are obtained at almost the same cost as that of training a single model for the task. Finally, we also demonstrate the versatility of our method by using it to quantize a CNN on CIFAR-10, and learning the bit-widths (\(2,4,8,16\)) for each of its layers. Our technique found a model that is 55% smaller than the baseline float-16 model, while achieving the same accuracy (see Figure 5). While low-bit quantization is not usually exploited by general purpose accelerators to speed up computation, it can still lead to reduction in inference times of large language models such as GPT as these models are memory bandwidth bound [26]. Here is a summary of our contributions:
**(1).** We provide an end-to-end neural network compression technique that directly optimizes the FLOPs/latency regularized objective during training, so that compression happens as part of training. Our algorithm can be used with many popular efficient building blocks including pruning, low-rank factorization, and quantization, and can optimize for on-device inference latency.
**(2).** We design a novel \(\frac{\ell_{1}}{\ell_{2}}\) regularized surrogate for latency that works even in the presence of batchnorm, layernorm. Our algorithm is fast and runs in the same amount of time as single model training, and doesn't require any post-processing steps.
**(3).** We demonstrate the performance of our technique on both language and vision tasks. Moreover, for transfer learning settings where the goal is to take a baseline architecture and optimize it for individual tasks, our techniques outperform SOTA techniques in the broad-domain of automated neural compression.
## 2 Related Work
### Neural Architecture Search
Early works on NAS treated the problem as a purely blackbox optimization (BO) problem. These works relied on BO techniques such as random search [27], Gaussian process optimization [17], zeroth-order gradient descent [15; 16], and evolutionary algorithms to optimize the NAS objective and identify a good architecture. Several works have improved upon these algorithms using heuristics such as early stopping [27]. Nonetheless, these techniques are computationally expensive, as evaluating the optimization objective at any point requires training a neural network from scratch. Moreover, due to their computational complexity, these techniques perform a very coarse-grained search and are not suited for fine-grained search over sparsity or low-rank structures.
Recent works have tried to open the blackbox a bit. In these techniques, the search space is first transformed to the space of probability distributions over architectures. Next, a surrogate model (which takes an architecture as input and tries to output the optimal set of weights for the architecture) is trained to quickly evaluate the optimization objective at any input [18; 20; 28; 29; 12]. While these techniques are fast, they involve joint training of the surrogate model during the search process. This joint training often makes the optimization process unstable [30].
**NAS for Efficient ML.** Several recent works at the intersection of efficient ML and NAS have realized the importance of explicitly accounting for the hardware in the search process [15; 31; 32; 33; 34; 35]. These works incorporate the actual inference time in their search objectives, instead of surrogates such as FLOPs. The inference time maybe estimated using another neural network, or through latency tables for basic arithmetic operations on the target platform [19]. Many of these works rely on greedy, random search heuristics to solve the resulting objective [32; 33]. However, these heuristics either take a lot of time to find the optimal architecture or are not guaranteed to converge to an optimal solution. There are some works that rely on the NAS algorithms described above [15; 31; 18]. However, these techniques face the same issues as previously mentioned.
**Hardware, Neural Architecture codesign.** Certain hardware level parameters such as tiling configurations of tensors significantly affect the inference time of a model. Recent hardware-aware NAS techniques expose these hardware level parameters to the NAS algorithm and simultaneously search over neural architectures and hardware configurations [35]. These techniques have the potential to achieve better performance than vanilla NAS techniques which do not search over hardware configurations.
### Model Compression
The field of model compression is vast. Here, we focus on techniques that perform training-time compression (as opposed to post-training compression) using the following building blocks: unstructured sparsity, pruning, and low-rank factorization. Early works in unstructured sparsity and pruning relied on magnitude- and gradient-based pruning [4; 36; 14]. Several works have explored more sophisticated scoring metrics for pruning [37; 38; 39; 40; 41]. Other techniques include adding sparsity-inducing norms such as \(\ell_{0},\ell_{1}\) to the training objective [13; 5]. A number of works have also explored low-rank factorization for model compression [42; 43; 44]. Some of these techniques again rely on sparsity-inducing regularizers to induce the low-rank structure [6]; others rely on SVD-based pruning. Some recent works try to optimize a FLOPs-regularized objective to perform pruning and low-rank factorization [11; 12]. However, as discussed in the introduction, the resulting optimization techniques are often unstable and difficult to use in practice.
## 3 Method
In this section, we describe our approach for model compression. For simplicity of presentation, we illustrate our technique on feed-forward networks and restrict ourselves to pruning. The ideas here can be extended to other architectures (_e.g.,_ 1x1 convolutions in CNNs) and other efficient building blocks (_e.g.,_ unstructured sparsity, low-rank factorization, quantization) in a straightforward manner (see Table 1 for details). Consider the following problem: we are given a pre-trained feed-forward neural network (FFN) \(f^{*}(x)=\sigma(W_{D}^{*}\sigma(W_{D-1}^{*}\sigma(\ldots\sigma(W_{1}^{*}x))))\), where \(W_{i}^{*}\in\mathbb{R}^{d_{i+1}\times d_{i}}\) for all \(i\in[D]\), and a dataset \(\{(x_{i},y_{i})\}_{i=1}^{n}\). Our goal is to compress \(f^{*}\) while simultaneously performing well on the learning task. This problem can be formulated as the following optimization problem
\[\min_{\mathcal{W}}\frac{1}{n}\sum_{i=1}^{n}\ell(x_{i},y_{i};\mathcal{W})+ \lambda\times\text{Latency}(\mathcal{W}). \tag{1}\]
Here \(\mathcal{W}=\{W_{i}\}_{i=1}^{D},\) with \(W_{i}\in\mathbb{R}^{d_{i+1}^{\prime}\times d_{i}^{\prime}}\) being the weight matrix at layer \(i\), \(\lambda\) is the regularization parameter which trades off latency with accuracy, and \(\ell\) is the supervised loss.1. Directly optimizing the above objective is intractable because \(\text{Latency}(\mathcal{W})\) is a discrete function of the dimensions of the weight matrices, and is hardware specific.
Footnote 1: In this objective, we search over \(d_{i}^{\prime}\) such that \(d_{i}^{\prime}\leq d_{i}\)
We now present our technique for solving Equation (1). To begin with, we substitute \(\text{Latency}(\mathcal{W})\) with \(\text{FLOPs}(\mathcal{W})\)2. Later, we extend it to actual latency. The objective in this case is given by
Footnote 2: FLOPs is also a discrete function of dimensions of \(W_{i}\), and the resulting optimization problem is still intractable
\[\min_{\mathcal{W}}\frac{1}{n}\sum_{i=1}^{n}\ell(x_{i},y_{i};\mathcal{W})+ \lambda\sum_{i=1}^{D}d_{i}^{\prime}d_{i+1}^{\prime}. \tag{2}\]
To solve this objective, we associate masks with each neuron in the network. In particular, we parameterize the weight matrix in the \(i^{th}\) layer as \(W_{i}\times\text{diag}(\alpha_{i})\). Here \(\alpha_{i}\in\{0,1\}^{d_{i}}\) are the mask variables of layer \(i\). If \(\alpha_{i,j}\) is set to \(0\), then the \(j^{th}\) neuron in the \((i-1)^{th}\) layer will be pruned. The FLOPs regularizer can now be written in terms of masks as \(\sum_{i=1}^{D}\|\alpha_{i}\|_{0}\|\alpha_{i+1}\|_{0}\), where \(\alpha_{D+1}\) is the static vector of all \(1\)'s. The resulting objective though is not continuous. To make it continuous and amenable to gradient based optimization, one class of techniques place a Bernoulli distribution \(\text{Bern}(p_{i,j})\) over each of the masks \(\alpha_{i,j}\) and solve the following smoothed objective [12; 13; 6]
\[\min_{\mathcal{W},p}\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\ell(x_{i},y_{i}; p,\mathcal{W})+\lambda\sum_{i=1}^{D}\|\alpha_{i}\|_{0}\|\alpha_{i+1}\|_{0} \right].\]
The expectation above is taken w.r.t the random masks \(\alpha_{i}\)'s. It is easy to see that the above objective is equivalent to Equation (2), and is consequently as hard as solving the latter. In fact, the above problem can be shown to be NP-hard using the observation that sparse linear regression is a special case of it [45]. Furthermore, the discrete nature of \(\alpha_{i}\)'s makes the optimization process unstable [13]. To overcome this, [12; 13; 6] rely on a heuristic which involves relaxing Bernoulli distribution to a continuous distribution such as LogisticSigmoid. However, the main drawback of the resulting algorithm is that it is hard to implement in practice and requires very careful annealing of the parameters of LogisticSigmoid distribution. Another drawback of this class of techniques is that their performance is not well understood theoretically, even for simple and fundamental problems such as sparse linear regression.
Another approach to convert the discrete objective in Equation (2) into a continuous function is to replace the \(\ell_{0}\) norm on the \(\alpha_{i}\)'s with the \(\ell_{1}\) norm
\[\min_{\mathcal{W},\alpha_{i}\in\mathbb{R}^{d_{i}}}\frac{1}{n}\sum_{i=1}^{n} \ell(x_{i},y_{i};\alpha,\mathcal{W})+\lambda\sum_{i=1}^{D}\|\alpha_{i}\|_{1} \|\alpha_{i+1}\|_{1}. \tag{3}\]
This approach is much more attractive than the previous one, as it is known to recover optimal sparse solutions for a variety of statistical problems, including sparse linear regression and low-rank matrix completion [46; 47]. Furthermore, it is much simpler to implement in practice, with numerous algorithms proposed for fast convergence to stationary points of the objective [24; 48]. Consequently, recent SOTA compression techniques have relied on \(\ell_{1}\) norm surrogates to compute the FLOPs regularizer [11]. A major drawback of the \(\ell_{1}\) norm, though, is that it does not promote sparsity in the presence of batch normalization and layer normalization [49; 50]. To see this, consider the following \(1\)-hidden-layer network: \(\sigma(\text{BN}(W_{2}\text{diag}(\alpha_{2})\sigma(\text{BN}(W_{1}\text{ diag}(\alpha_{1})x))))\). One can scale down all entries of \(\alpha_{1}\) and scale up the weights \(W_{1}\) without affecting the output of the network. Doing this reduces the objective value in Equation (3), but doesn't induce any sparsity in the network. In practice, we indeed observe this behaviour during optimization of Equation (3), which leads to sub-optimal solutions (see Section 3.2). Note that adding an \(\ell_{2}\) penalty on the weights (_i.e.,_ weight decay) doesn't mitigate this issue, as any scaling of the \(\alpha\)'s can be absorbed by the batch norm parameters without changing the output of the network.
### Inducing sparsity through \(\frac{\ell_{1}}{\ell_{2}}\) regularizer
We now introduce our approach for making the objective in Equation (2) continuous. We replace the \(\ell_{0}\) norm over masks (\(\|\alpha_{i}\|_{0}\)) with the \(\frac{\ell_{1}}{\ell_{2}}\) penalty (\(\sqrt{d_{i}}\|\alpha_{i}\|_{1}/\|\alpha_{i}\|_{2}\)) and solve the following optimization problem
\[\min_{\mathcal{W},\alpha_{i}\in\mathbb{R}^{d_{i}}}\frac{1}{n}\sum_{i=1}^{n} \ell(x_{i},y_{i};\alpha,\mathcal{W})+\lambda\sum_{i=1}^{D}\frac{\sqrt{d_{i}} \|\alpha_{i}\|_{1}}{\|\alpha_{i}\|_{2}}\frac{\sqrt{d_{i+1}}\|\alpha_{i+1}\|_{1 }}{\|\alpha_{i+1}\|_{2}}. \tag{4}\]
The \(\sqrt{d_{i}}\) term in the numerator normalizes the penalty to lie in \([0,d_{i}]\). When the \(\alpha_{i}\)'s are all \(1\)'s, the regularizer evaluates exactly to the FLOPs. Observe that this regularizer is invariant to scaling of the \(\alpha\)'s; consequently, its value cannot be reduced by simply scaling down the \(\alpha_{i}\)'s. In our experiments in Section 3.2 and Appendix C.2, we show that this handles batch and layer normalization better than the \(\ell_{1}\) regularizer. Several works have studied this regularizer in the context of sparse linear regression and showed that it recovers the underlying sparse signal under mild conditions on the data [51; 52; 53]. [54] used a similar \(\frac{\ell_{1}}{\ell_{2}}\) regularizer for network pruning, but their technique doesn't optimize latency or FLOPs, and relies on post-training thresholding to obtain sparsity.
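As a sketch, the regularizer in Equation (4) amounts to a few lines of PyTorch; the small epsilon guarding against division by zero is our addition:

```python
import torch

def flops_surrogate(alphas, eps=1e-12):
    """Eq. (4) regularizer: sum_i s(alpha_i) * s(alpha_{i+1}), where
    s(a) = sqrt(d) * ||a||_1 / ||a||_2 is the scale-invariant proxy for ||a||_0.
    `alphas` holds the D mask vectors followed by the all-ones alpha_{D+1}."""
    def s(a):
        return (a.numel() ** 0.5) * a.abs().sum() / (a.norm(p=2) + eps)
    return sum(s(a) * s(b) for a, b in zip(alphas[:-1], alphas[1:]))
```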
For certain technical reasons described later, we add a positivity constraint on \(\alpha_{i}\)'s and solve the following objective
\[\min_{\mathcal{W},\alpha_{i}\in\mathbb{R}^{d_{i}}_{+}}\frac{1}{n}\sum_{i=1}^{n }\ell(x_{i},y_{i};\alpha,\mathcal{W})+\lambda\sum_{i=1}^{D}\frac{\sqrt{d_{i}} \sum_{j=1}^{d_{i}}\alpha_{i,j}}{\|\alpha_{i}\|_{2}}\frac{\sqrt{d_{i+1}}\sum_{ j=1}^{d_{i+1}}\alpha_{i+1,j}}{\|\alpha_{i+1}\|_{2}}. \tag{5}\]
Note that we consider \(\alpha_{i}\in\mathbb{R}^{d_{i}}_{+}\) rather than discrete or bounded values. We would like to highlight that this change doesn't reduce the representational power of our model; it is mainly done for computational reasons. In the sequel, we use the shorthand \(\|\alpha_{i}\|_{1p}\) (\(p\) for positive) to denote \(\sum_{j=1}^{d_{i}}\alpha_{i,j}\).
**Importance of positivity constraints.** The objective in Equation (4) is continuous, but not smooth. For such losses, standard optimization techniques such as SGD and Adam are slow to converge to stationary points [55]. Furthermore, these algorithms don't output exactly sparse solutions. This forces additional post-processing steps to be introduced into the compression pipeline. For example, [11; 54] rely on the Adam optimizer and add a pruning step at the end, where masks that are close to \(0\) are pruned away. This is quite cumbersome in practice, as one needs to choose appropriate thresholds for pruning (an additional tunable hyper-parameter) and re-train after pruning. To overcome this, we add a positivity constraint to the mask variables and modify the objective to Equation (5). This makes the regularizer smooth (except at the all-\(0\)'s vector) and easy to optimize using SGD or Adam. After each SGD/Adam update, we simply project the masks back to the space of non-negative real numbers. The overall update looks as follows
\[\mathcal{W}\leftarrow\mathcal{W}-\eta\nabla_{\mathcal{W}}(\mathcal{L}(\alpha, \mathcal{W})+\lambda\mathcal{R}(\alpha)),\quad\alpha\leftarrow\max(0,\alpha- \eta\nabla_{\alpha}(\mathcal{L}(\alpha,\mathcal{W})+\lambda\mathcal{R}( \alpha))).\]
Here \(\mathcal{L}(\alpha,\mathcal{W})\) is the empirical risk and \(\mathcal{R}(\alpha)\) is the regularizer. Notice that the only additional step compared to traditional optimization is the clipping of the \(\alpha\)'s. In our ablation studies in Section 3.2 and Appendix C.2, we validate the importance of this projection step, together with the \(\frac{\ell_{1}}{\ell_{2}}\) norm, in encouraging sparse solutions.
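In code, the projection adds a single clamp after the usual optimizer step. The sketch below reuses `flops_surrogate` from above and assumes a cross-entropy loss for illustration:

```python
import torch

def train_step(model, alphas, x, y, optimizer, lam):
    loss = torch.nn.functional.cross_entropy(model(x), y) \
           + lam * flops_surrogate(alphas)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        for a in alphas:          # projection onto the non-negative orthant
            a.clamp_(min=0.0)
    return loss.item()
```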
### Verification of design choices
To empirically demonstrate the drawbacks of using the \(\ell_{1}\) penalty for model compression, we perform experiments on the FashionMNIST dataset with a single-hidden-layer fully-connected network which has a batch norm layer after the first linear layer. We prune the input to the network using a mask \(\alpha\). We compare the performance of networks compressed using the FLOPs regularizer induced by the \(\ell_{1}\) and \(\frac{\ell_{1}}{\ell_{2}}\) norms. We use SGD to optimize both objectives. Furthermore, we pre-train the network using the standard CE loss, and initialize \(\alpha=\mathbf{1}\). We track the variance of the absolute values of the entries of \(\alpha\), _i.e.,_ \(\frac{\sum_{i=1}^{d}(|\alpha_{i}|-\mu_{\alpha})^{2}}{d}\), where \(\mu_{\alpha}=\frac{\sum_{i=1}^{d}|\alpha_{i}|}{d}\). We also track the mean \(\mu_{\alpha}\) of the absolute values of the entries of \(\alpha\). Finally, we plot the curve of the actual FLOPs against the considered norm of \(\alpha\) (_i.e.,_ \(\ell_{1}\), \(\frac{\ell_{1}}{\ell_{2}}\)). Figure 2 presents the results from these experiments. We can see that the \(\ell_{1}\) objective is misaligned with the actual value of FLOPs, while the regularizer computed using \(\frac{\ell_{1}}{\ell_{2}}\) is a better proxy. We also find that the mean and variance of the \(\alpha\)'s decrease sharply when the \(\ell_{1}\)-induced FLOPs regularizer is used for compression. This indicates that all entries of \(\alpha\) are uniformly scaled down to a small, non-zero value, reducing the value of the regularizer while not providing any sparsity. As seen from the figure, \(\frac{\ell_{1}}{\ell_{2}}\) does not suffer from this drawback. Finally, we note that the Frobenius norm of the weight matrix \(W\) increases when \(\ell_{1}\) regularization is used on \(\alpha\), suggesting that the network is simply scaling down the \(\alpha\)'s and scaling up the weights to evade the regularizer.

Figure 2: **Comparison of the \(\ell_{1}\)- and \(\frac{\ell_{1}}{\ell_{2}}\)-induced FLOPs regularizers for pruning on FashionMNIST:** Figures (a) and (b) depict the evolution of the statistics of the mask variables (\(\alpha\)) as training progresses. Figure (c) shows the relation between the actual FLOPs of the model and the value of the proxy computed by Equations 3, 4. Figure (d) shows the evolution of the Frobenius norm of the weight matrix.
### Hardware aware model compression
In this section, we extend the FLOPs regularizer to take the latency on the target hardware into account. The resulting regularizer is especially useful for performing hardware-aware network compression. Our key observation is that inference on a neural network can be broken down into a series of matrix multiplication operations. For example, inference on a depth-\(D\) FFN involves \(D\) matrix-vector multiplications, which take up the majority of the time. So, getting a good estimate of the inference time of the overall network boils down to having a good estimate of the latency of matrix-vector multiplication. To this end, we rely on lookup tables. Before the start of the pruning phase, we construct a \(2\)-dimensional lookup table \(T\) whose \((d_{1},d_{2})^{th}\) entry is the on-device latency of multiplying a matrix of size \(d_{1}\times d_{2}\) with a vector of size \(d_{2}\). Such a table is easy to construct, given access to the target device. Next, to incorporate the lookup table \(T\) into our pruning algorithm, we convert it into a continuous function by performing linear interpolation on the entries in the table [56]. To be precise, for any \((x,y)\in[d_{1},d_{1}+1]\times[d_{2},d_{2}+1]\), where \(d_{1},d_{2}\in\mathbb{N}\cup\{0\}\), we define \(T(x,y)\) as: \(T(x,y)=t_{1}+(t_{2}-t_{1})(y-d_{2})\), where \(t_{1}=T(d_{1},d_{2})+(T(d_{1}+1,d_{2})-T(d_{1},d_{2}))(x-d_{1})\) and \(t_{2}=T(d_{1},d_{2}+1)+(T(d_{1}+1,d_{2}+1)-T(d_{1},d_{2}+1))(x-d_{1})\). Note that in contrast to black-box NAS techniques like [19], which search over a discrete space of numbers of filters for each block, our approach needs the latency surrogate to be differentiable, and hence we need interpolated latency tables. See the appendix for details on how we construct the tables.
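A differentiable lookup can be sketched as follows; this is our illustrative implementation of the interpolation formula above (clamping at the table boundary is our assumption):

```python
import torch

def interp_latency(table, x, y):
    """Bilinear interpolation of a 2-D latency table, differentiable in x, y.
    table[d1, d2] = measured latency of a (d1 x d2) matrix-vector multiply."""
    x = x.clamp(0, table.shape[0] - 1.001)   # keep floor(x)+1 inside the table
    y = y.clamp(0, table.shape[1] - 1.001)
    x0, y0 = x.floor().long(), y.floor().long()
    fx, fy = x - x0, y - y0                  # fractional parts carry the gradient
    t1 = table[x0, y0] + (table[x0 + 1, y0] - table[x0, y0]) * fx
    t2 = table[x0, y0 + 1] + (table[x0 + 1, y0 + 1] - table[x0, y0 + 1]) * fx
    return t1 + (t2 - t1) * fy
```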
We use this interpolated lookup table to construct our _latency_ regularizer as follows
\[\sum_{i=1}^{D}T\left(\frac{\sqrt{d_{i}}\|\alpha_{i}\|_{1p}}{\|\alpha_{i}\|_{ 2}},\frac{\sqrt{d_{i+1}}\|\alpha_{i+1}\|_{1p}}{\|\alpha_{i+1}\|_{2}}\right). \tag{6}\]
In the above expression, our differentiable surrogate for \(\|\alpha_{i}\|_{0}\) (_i.e.,_ \(\sqrt{d_{i}}\|\alpha_{i}\|_{1p}/\|\alpha_{i}\|_{2}\)) is used to index the lookup table. We note that the \(\frac{\ell_{1}}{\ell_{2}}\) norm is crucial for this technique to be successful. This is because \(\frac{\sqrt{d_{i}}\|\alpha_{i}\|_{1p}}{\|\alpha_{i}\|_{2}}\) is normalized and always lies in \([0,d_{i}]\). In contrast, using an \(\ell_{1}\) norm surrogate in the regularizer gives us \(T(\|\alpha_{i}\|_{1},\|\alpha_{i+1}\|_{1})\). Scaling \(\alpha_{i}\) by a constant can drastically change this regularizer, and makes the optimization unstable.
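Putting the pieces together, Equation (6) evaluates the interpolated table at the effective widths given by the \(\frac{\ell_{1}}{\ell_{2}}\) surrogate for each pair of adjacent layers. A sketch, reusing `interp_latency` from above:

```python
import torch

def latency_reg(table, alphas, eps=1e-12):
    """Eq. (6): latency surrogate obtained by indexing the interpolated
    lookup table with the l1/l2 'effective width' of each mask vector."""
    def s(a):  # proxy for ||a||_0; positivity makes ||a||_1p equal to a.sum()
        return (a.numel() ** 0.5) * a.sum() / (a.norm(p=2) + eps)
    return sum(interp_latency(table, s(a), s(b))
               for a, b in zip(alphas[:-1], alphas[1:]))
```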
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Efficient Building Block** & **Parameterization of \(W_{i}\)** & **FLOPs** & **Regularizer (FLOPs surrogate)** \\ \hline
Pruning (structured) & \(W_{i}\times\text{diag}(\alpha_{i})\) & \(\|\alpha_{i}\|_{0}\|\alpha_{i+1}\|_{0}\) & \(\frac{\sqrt{d_{i}}\|\alpha_{i}\|_{1p}}{\|\alpha_{i}\|_{2}}\cdot\frac{\sqrt{d_{i+1}}\|\alpha_{i+1}\|_{1p}}{\|\alpha_{i+1}\|_{2}}\) \\ \hline
Pruning (unstructured) & \(W_{i}\odot\alpha_{i}\), where \(\alpha_{i}\in\mathbb{R}^{d_{i+1}\times d_{i}}\) and \(\odot\) is elementwise multiplication & \(\|\text{Vec}(\alpha_{i})\|_{0}\) & \(\frac{\sqrt{d_{i}d_{i+1}}\|\text{Vec}(\alpha_{i})\|_{1p}}{\|\text{Vec}(\alpha_{i})\|_{2}}\) \\ \hline
Low-rank factorization & \(U_{i}\,\text{diag}(\beta_{i})\,V_{i}^{T}\), with \(\beta_{i}\in\mathbb{R}^{d_{i,r}}\), \(d_{i,r}=\min\{d_{i},d_{i+1}\}\) & \((d_{i}+d_{i+1})\|\beta_{i}\|_{0}\) & \((d_{i}+d_{i+1})\frac{\sqrt{d_{i,r}}\|\beta_{i}\|_{1p}}{\|\beta_{i}\|_{2}}\) \\ \hline
Quantization (1, 2, 4 bit) & \(W_{i,1}+\alpha_{i,2}(\Delta_{i,2}+\alpha_{i,4}\Delta_{i,4})\), where \(\alpha_{i,2},\alpha_{i,4}\in[0,1]\) are mask variables, \(W_{i,b}\) is the \(b\)-bit quantization of \(W_{i}\), \(\Delta_{i,2}=W_{i,2}-W_{i,1}\), \(\Delta_{i,4}=W_{i,4}-W_{i,2}\) & \(\|1-\alpha_{i,2}\|_{0}d_{i}d_{i+1}+2\|\alpha_{i,2}(1-\alpha_{i,4})\|_{0}d_{i}d_{i+1}+4\|\alpha_{i,2}\alpha_{i,4}\|_{0}d_{i}d_{i+1}\) & \(\frac{\ell_{1}}{\ell_{2}}\) surrogate applied to each \(\ell_{0}\) term (see Appendix) \\ \hline
Pruning + low-rank factorization & \(U_{i}\,\text{diag}(\beta_{i})\,V_{i}^{T}\,\text{diag}(\alpha_{i})\), where \(U_{i}\in\mathbb{R}^{d_{i+1}\times d_{i,r}}\), \(d_{i,r}=\min\{d_{i},d_{i+1}\}\) & \((\|\alpha_{i}\|_{0}+\|\alpha_{i+1}\|_{0})\|\beta_{i}\|_{0}\) & \(\left(\frac{\sqrt{d_{i}}\|\alpha_{i}\|_{1p}}{\|\alpha_{i}\|_{2}}+\frac{\sqrt{d_{i+1}}\|\alpha_{i+1}\|_{1p}}{\|\alpha_{i+1}\|_{2}}\right)\frac{\sqrt{d_{i,r}}\|\beta_{i}\|_{1p}}{\|\beta_{i}\|_{2}}\) \\ \hline
\end{tabular}
\end{table}
Table 1: Regularizers used by our technique for various efficient building blocks (refer to Appendix for details on quantization). One can easily design regularizers for a combination of building blocks; for example, the last row presents the regularizer for low-rank + pruning, which we use in our large-scale experiments.
## 4 Experiments
In this section, we apply our framework to large-scale pre-training and transfer learning tasks on standard language and vision benchmarks. To demonstrate the versatility of our technique, we perform experiments on multiple model families (MobileNet, EfficientNet [2], BERT) and multiple building blocks (pruning, low-rank factorization, quantization). We also present a case study using the actual on-device latency instead of FLOPs. See Appendix C.2 for other ablation studies.
### ImageNet Pre-training
We begin by comparing the performance of our technique with baselines on MobileNetV3 compression for ImageNet classification. We rely on low-rank factorization + pruning for the compression. The results from this experiment are presented in Figure 1. By varying the strength of our regularization, we obtain models with different MACs and accuracies. We find that models produced by our method significantly outperform MobileNetV3 and TuNAS in the high- and mid-MACs regimes. In particular, for the same accuracy as MobileNetV3Large, our approach finds a model with \(15\%\) fewer MACs. In comparison with TuNAS, we achieve a 30% reduction in MACs at the same level of accuracy. We however find that our model is on par with MobileNetV3Small in the low-MACs regime, indicating that the latter is already well-tuned for this task. In terms of compute needed for training, TuNAS is the most expensive among all the techniques we tried; it took 2 days to train with our hardware setup. In contrast, our method took 13 hours (\(3-4\times\) faster than TuNAS), and MorphNet took 10 hours.
### Transfer Learning
A common paradigm in deploying machine learning models today is to first pre-train them on a large scale dataset such as ImageNet, and then fine-tune them for the desired target task. However, deploying large models is not feasible on edge devices. Our technique provides a light-weight modification to the standard fine-tuning procedure by producing a compressed model with comparable transfer learning performance on the specific task. We demonstrate this on vision and language tasks.
**Vision tasks.** We consider the task of fine-tuning an ImageNet pre-trained model for a smaller dataset. We consider Cars196 [57] and Food101 [58] as the target datasets, and compare against the MobileNetV3 and EfficientNet families of models. We use ImageNet pre-trained models for initialization. We plot the FLOPs-accuracy curves in Fig 3. We compress the MobileNetV3Large, EfficientNet-B4, and EfficientNet-B2 architectures while transferring them to the target task. We find that our method consistently improves over baseline architectures across various FLOPs regimes. This is because our technique is able to adaptively prune the model based on the difficulty of the classification task. On both tasks, we see 1% accuracy gains over MobileNetV3Small. The accuracy gains persist at the latency footprint of MobileNetV3Large-0.75, where we see over 1.5% accuracy gains on both datasets. On EfficientNet, we see up to 40% reduction in FLOPs without any drop in accuracy on Food101, and around 20% reduction in FLOPs on the Cars196 dataset for the largest models (B4). We also see around 30% FLOPs reduction while maintaining the transfer learning performance of the B1 and B0 variants. This demonstrates that our learnt models can scale better than the heuristic scaling described in [2]. See the appendix for additional results.
**Fine-tuning BERT on GLUE.** We consider 5 datasets of the GLUE benchmark [59] that are commonly used in the literature, and fine-tune a pre-trained BERT-Base model with our FLOPs regularizer. We re-parameterize the weight matrices of the feed forward network of each transformer block with our low-rank+sparse parameterization. We compare our approach against model pruning,
Figure 3: **Accuracy-FLOPs trade-off on transfer learning tasks:** Figures (a) and (b) depict the fine-tuning performance of models found by our method while compressing MobileNetV3Large, alongside the baseline MobileNetV3, on the Cars-196 and Food-101 datasets. Figures (c) and (d) show the performance on the EfficientNet family of architectures, where the baselines are EfficientNet B0-B4, while our method compresses EfficientNet B4 and B2.
where SOTA numbers are taken from Fig. 6 of [21], reporting the maximum accuracy among [60; 61; 62; 63; 64; 65]. We also report the performance of widely-used distillation-based baselines [22; 23]. Figure 1 presents the average performance on the 5 datasets, and Figure 6 in the appendix presents the individual performance. In both these figures, we plot the relative FLOPs of the compressed model w.r.t. BERT-base against the drop in accuracy w.r.t. BERT-base (similar to [21]). We find that on 4 of the 5 datasets considered, our technique provides a higher accuracy for the same number of FLOPs, indicating the efficacy of our method. On MRPC, a dataset with very few samples, our method is worse off at higher FLOPs, but outperforms the baselines in the low-FLOPs regime.
### Additional Experiments
**Using the latency regularizer.** In Eq 6, we propose a latency surrogate for optimizing the actual on-device inference latency. In this section, we provide empirical evidence of the effectiveness of this approach for MobileNetV3 on Pixel 6. We compare the accuracy-latency curves of models produced using the FLOPs and latency regularizers (see Fig 4). Observe that using the latency regularizer leads to models with smaller latencies and consequently a better latency-accuracy tradeoff compared to using the FLOPs regularizer. We also find these models to have better performance than MobileNetV3 (\(0.5-2\%\) improvement in accuracy for similar latency), despite MobileNetV3 being hand-crafted for faster inference on mobile devices.
**Quantization.** In this set of experiments, we consider CIFAR-10 classification and compress a 3-layer CNN using quantization. We use the quantization formulation presented in Table 1 and search over \(\{2,4,8,16\}\)-bit quantizations for each layer. We compare with a baseline which uses the same level of quantization at each layer. Fig 5 presents the results from this experiment. The details of the implementation can be found in the appendix. We find that our technique compresses the model size by almost 55% without a drop in accuracy (as compared to a model with 16-bit weights). Our technique also outputs a model which is 1.4% more accurate than a 2-bit quantized model with only 4% more FLOPs. In the plot on the right in Fig 5, we visualize the learned bit-widths of our models. We find that later layers are assigned a smaller bit-width, indicating the importance of learning expressive filters early in the network. The different models in our plots were found by varying the value of the regularizer coefficient, and hence no combinatorial search over bit-widths is required.
## 5 Conclusion and Future Work
In this work, we presented an end-to-end technique for neural network compression. Our approach applies to a wide variety of efficient blocks including pruning, unstructured sparsity, quantization. At
Figure 4: The left plot shows the accuracy-latency curves of models obtained using the FLOPs and latency regularizers. The right table compares the performance of our latency-regularized models with the MobileNetV3 baseline.
Figure 5: **Quantization on CIFAR-10:** Figure (a) compares the performance of our technique for dynamic quantization against fixed-bit quantization for a 4-layer CNN on CIFAR-10. The baselines have weights quantized to 2, 4, 8, and 16 bits. Figure (b) depicts the learnt bit-widths for different layers of the models found by our technique, with the labels denoting the number of MACs (in Bn) of the models.
the core of our algorithm is a novel surrogate for FLOPs and latency that relies on \(\frac{\ell_{1}}{\ell_{2}}\) norms and works with batch norm and layer norm. Our algorithm is computationally efficient and runs in the same amount of time as training a single model. We demonstrated the efficacy of our approach on various pre-training and transfer learning tasks on standard language and vision benchmarks. As future work, it will be useful to incorporate more efficient building blocks, such as block diagonal matrices, into our framework. Another interesting direction would be to make our technique more hardware aware by incorporating hardware-level parameters such as tiling into our search process.
|
2302.13376 | Efficient Ensemble for Multimodal Punctuation Restoration using
Time-Delay Neural Network | Punctuation restoration plays an essential role in the post-processing
procedure of automatic speech recognition, but model efficiency is a key
requirement for this task. To that end, we present EfficientPunct, an ensemble
method with a multimodal time-delay neural network that outperforms the current
best model by 1.0 F1 points, using less than a tenth of its inference network
parameters. We streamline a speech recognizer to efficiently output hidden
layer acoustic embeddings for punctuation restoration, as well as BERT to
extract meaningful text embeddings. By using forced alignment and temporal
convolutions, we eliminate the need for attention-based fusion, greatly
increasing computational efficiency and raising performance. EfficientPunct
sets a new state of the art with an ensemble that weights BERT's purely
language-based predictions slightly more than the multimodal network's
predictions. Our code is available at
https://github.com/lxy-peter/EfficientPunct. | Xing Yi Liu, Homayoon Beigi | 2023-02-26T18:28:20Z | http://arxiv.org/abs/2302.13376v2 | Efficient Ensemble Architecture for Multimodal Acoustic and Textual Embeddings in Punctuation Restoration using Time-Delay Neural Networks
###### Abstract
Punctuation restoration plays an essential role in the post-processing procedure of automatic speech recognition, but model efficiency is a key requirement for this task. To that end, we present EfficientPunct, an ensemble method with a multimodal time-delay neural network that outperforms the current best model by 1.0 F1 points, using less than a tenth of its parameters to process embeddings. We streamline a speech recognizer to efficiently output hidden layer latent vectors as acoustic embeddings for punctuation restoration, as well as BERT to extract meaningful text embeddings. By using forced alignment and temporal convolutions, we eliminate the need for multi-head attention-based fusion, greatly increasing computational efficiency but also raising performance. EfficientPunct sets a new state of the art, in terms of both performance and efficiency, with an ensemble that weights BERT's purely language-based predictions slightly more than the multimodal network's predictions.
**Recognition Technologies, Inc. Technical Report: RTI-20230224-01**
**DOI: 10.13140/RG.2.2.29800.75528**
_Xing Yi Liu\({}^{1}\) and Homayoon Beigi\({}^{1,2}\)_
\({}^{1}\)Columbia University, New York, USA
\({}^{2}\)Recognition Technologies, Inc., New York, USA
[email protected], [email protected]
**Index Terms**: speech recognition, punctuation restoration, multimodal learning
## 1 Introduction
Automatic speech recognition (ASR) systems' transformation of audio into text opens up possibilities for a wide range of downstream tasks. With natural language text, applications like machine translation and voice assistants are enabled. However, raw ASR outputs lack punctuation and hence the full meaning of texts, which must be restored for usage by the aforementioned tasks. To illustrate the importance of punctuation, consider how the meaning of the sentence, "I have a favorite, family," differs drastically from the unpunctuated version, "I have a favorite family." Punctuation restoration is therefore also important for readability of transcribed speech and accuracy of conveyed message.
Following the standard for the punctuation restoration task, we focus on the three punctuation marks which occur most commonly and play critical roles in language: commas (,), full stops (.), and question marks (?). We also consider no punctuation (NP) as a fourth class requiring our model's consideration.
### Related work
Many works and proposed architectures have been devoted to restoring punctuation, and two main research categories have emerged: (1) considering only text output from ASR, and (2) considering both text output from ASR and the original audio.
Most consider text only, effectively forming a natural language processing task. They usually train and evaluate on the benchmark textual datasets from IWSLT 2011 and 2012. Researchers have studied a wide variety of methods, including \(n\)-gram models [1], recurrent neural networks [2, 3, 4], adversarial models [5], contrastive learning [6], and transformers [7, 8]. Conditional random fields [9, 10, 11, 12] had particularly notable success. Direct fine-tuning of BERT [13] has also proven effective, which we demonstrate in Section 4.1.
In the other category, both audio and text modalities are considered. Earlier techniques involved statistical models like finite state machines [14], but unsurprisingly, more recently we see the exploration of neural networks [15, 16] and re-purposing existing models to take audio-based input and predict punctuation [17, 18]. Current state of the art models begin in separate branches: one to tokenize and process text and the other to process raw audio waveforms. They then use the attention mechanism [19] to fuse text and acoustic embeddings [20, 21].
### Significance of multimodal approach
Despite research in multimodal punctuation restoration being far less numerous than the text-only category, [17] explicitly demonstrated the value of added acoustic information. Intuitively, audio provides more diverse features from which models may learn [22]. As a simple example, long pauses in speech are definitive indicators of a full stop's (.) occurrence. Similarly, shorter pauses may indicate a comma (,), and rising pitch is often associated with question marks (?).
The substantial benefit of involving both the transcribed text and original speech audio is that, in practical applications, we can design a highly streamlined system for restoring punctuation. Speech can first be transcribed into text by forward-passing audio signals through an ASR network, but one may preserve a hidden layer's latent representation for further usage as input (along with the transcribed text's embeddings) to a separate punctuation model. Then, the concatenated input would embed not only textual information, but also acoustics and prosody.
Our work is precisely motivated by this potential for high-speed punctuation labeling after receiving ASR output. We present EfficientPunct, a model that surpasses state of the art performance while requiring far fewer parameters, enabling practical usage.
## 2 Method
We formulate the problem as follows. We are given a spoken audio signal \(\mathbf{a}=(a_{1},a_{2},\ldots,a_{S})\) and transcription words \(\mathbf{t}=(t_{1},t_{2},\ldots,t_{W})\). Here, \(S\) is the number of samples in the audio, and \(W\) is the number of words. The goal is to predict the punctuation labels \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{W})\) that follow each word, where each \(y_{i}\in\{\text{``,''},\ \text{``.''},\ \text{``?''},\ \text{NP}\}\).
As illustrated in Figure 1, EfficientPunct begins in two branches which separately process the audio signal \(\mathbf{a}\) and transcription text \(\mathbf{t}\). Their details are as follows.
### Text encoder
First, the text sequence \(\mathbf{t}\) is passed through the default WordPiece tokenizer used by BERT. Then, using a pre-trained BERT model which we have fine-tuned for predicting the four previously described punctuation classes, we obtain final hidden layer text embeddings
\[H_{t}=\text{BERT}(\mathbf{t}). \tag{1}\]
\(H_{t}\) is a matrix whose columns are \(768\)-dimensional vectors representing token embeddings. These text embeddings contain each token's context-aware grammatical and linguistic information.
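As an illustration, the text-embedding step can be sketched with the HuggingFace `transformers` library (our choice for the sketch; the paper does not tie itself to a particular implementation, and in practice the fine-tuned checkpoint would replace the stock weights):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")  # fine-tuned weights in practice

def text_embeddings(transcript):
    """Return H_t: one 768-dim final-hidden-layer vector per WordPiece token."""
    inputs = tokenizer(transcript, return_tensors="pt")
    with torch.no_grad():
        H_t = bert(**inputs).last_hidden_state   # shape (1, n_tokens, 768)
    return H_t.squeeze(0)                        # shape (n_tokens, 768)
```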
### Audio encoder
To process raw spoken audio waveforms and obtain meaningful acoustic embeddings, we use a pre-trained model built using the Kaldi speech recognition toolkit [23]. This is directly analogous to previous works' usage of wav2vec 2.0 [24] as their pre-trained audio encoder. Kaldi's TED-LIUM 3 [25] framework first extracts Mel frequency cepstral coefficients (MFCCs) [22] and i-vectors, which are then passed to a time-delay neural network for speech recognition. We extract the 12th layer's representation of the input audio for further usage in the punctuation model:
\[H_{a}=\text{KaldiTedlium12}(\mathbf{a}). \tag{2}\]
\(H_{a}\) is a matrix whose columns are \(1024\)-dimensional embedding vectors. The number of columns is equal to the number of frames in the original audio.
### Alignment and fusion
The first step in fusing the \(768\)-dimensional embedding vectors from \(H_{t}\) and the \(1024\)-dimensional embedding vectors from \(H_{a}\) is to find correspondences between columns in each matrix. In other words, we must determine the text token being spoken during each frame of audio. This is performed through forced alignment. Based on the columns matched between the two modalities' embeddings, we concatenate them into columns of \(1792\)-dimensional embedding vectors. To fuse the two concatenated portions of each vector, we use a linear layer to learn affine transformations of the embeddings which may be useful for punctuation restoration.
Many related works opt for attention-based fusion of the two modalities, but we found forced alignment and a simple linear layer to be the most parameter-efficient and competitive approach. Through experiments, we determined that more sophisticated fusion methods were counterproductive.
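A minimal sketch of this step, assuming the forced aligner has already produced a frame-to-token index map (the fusion layer's output width is our assumption; the paper specifies only that a single linear layer is used):

```python
import torch
import torch.nn as nn

fuse = nn.Linear(768 + 1024, 1792)  # learned affine transformation of fused embeddings

def fuse_embeddings(H_t, H_a, frame_to_token):
    """H_t: (n_tokens, 768) text embeddings; H_a: (n_frames, 1024) acoustic
    embeddings; frame_to_token[f] = token spoken during frame f (forced alignment)."""
    aligned_text = H_t[frame_to_token]             # (n_frames, 768)
    pairs = torch.cat([aligned_text, H_a], dim=1)  # (n_frames, 1792)
    return fuse(pairs)                             # (n_frames, 1792)
```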
### Time-delay neural network
Next, the fused embeddings are passed through a time-delay neural network (TDNN) [26]. It contains a series of 1D convolution layers to capture temporal properties of the features, with a gradually decreasing number of channels. At the last convolution layer, there are \(4\) channels, with each one corresponding to a punctuation class. The channels are passed through two linear layers with weights and biases shared among the channels to output \(4\) values for softmax activation.
### Ensemble method
To complete EfficientPunct, we create an ensemble of the main TDNN and a predictor using BERT's text embeddings only. We pre-trained BERT using the dark- and light-blue modules in Figure 1, which can still be used at inference time to obtain a set of predictions that considers only text, grammar, and linguistics. The other set of predictions, obtained from the TDNN, considers both text and audio.
Let \(\alpha\in[0,1]\) be the weight assigned to the TDNN's predictions and \(1-\alpha\) be the weight assigned to BERT's predictions. Our final predicted punctuation will be
\[f(\mathbf{a},\mathbf{t},\alpha)=\arg\max\left[\alpha y_{a}+(1-\alpha)y_{t} \right], \tag{3}\]
where \(y_{a}\) is the TDNN's vector of softmax values and \(y_{t}\) is BERT's. Essentially, if either the TDNN or BERT outputs a maximum class probability much lower than \(1\), then the other model may help resolve the ambiguity in predicting a punctuation mark.
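In code, Equation (3) reduces to a one-line weighted vote (\(\alpha=0.4\) is the best-performing weight reported in Section 4.2):

```python
import torch

def ensemble_predict(y_a, y_t, alpha=0.4):
    """Eq. (3): y_a, y_t are the TDNN and BERT softmax vectors over the 4 classes."""
    return torch.argmax(alpha * y_a + (1 - alpha) * y_t, dim=-1)
```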
## 3 Experiments
### Data
Our primary dataset is the publicly available MuST-C version 1 [27], the same as that used by UniPunc [21], for the sake of fair comparison. This dataset was compiled from TED talks. We also use the same training and test set splits as the original authors, whose details are available on GitHub. We further split the original training set into 90% for training and 10% for validation. Please see Table 1 for full information.
Each sample is an English audio piece of approximately \(10\,\mathrm{s}\) to \(30\,\mathrm{s}\) with the corresponding transcription text. In Kaldi, we use a frame duration of \(10\,\mathrm{ms}\) for MFCCs, i-vectors, and 12th layer acoustic embeddings. We follow the procedure described in Section 2.3 to generate a matrix of aligned embeddings for each data sample. Then, to obtain examples for training and inference, we consider segments of \(301\) frames, or \(3\,\mathrm{s}\), wherein the exact middle frame is the point of transition from one text token to the next. The resulting example will thus be labeled with the punctuation following the prior token and occurring at the middle frame. We use a context window of \(3\,\mathrm{s}\), because this duration should be sufficient to capture all acoustic and prosodic information relevant to a punctuation mark, such as pauses and pitch rises. At the same time, this duration is not so long as to include much unnecessary information, such as extensions into adjacent words.
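A sketch of this windowing procedure, with hypothetical helper names (`fused` comes from the alignment-and-fusion step above, and `labels[w]` is the punctuation class following token `w`):

```python
def make_examples(fused, frame_to_token, labels, context=301):
    """Slice context-frame (~3 s) windows whose middle frame is a token transition;
    each window is labeled with the punctuation following the prior token."""
    half = context // 2
    examples = []
    for f in range(half, fused.shape[0] - half):
        if frame_to_token[f] != frame_to_token[f - 1]:    # token boundary at f
            window = fused[f - half : f + half + 1]       # (301, 1792)
            # transpose to (channels, frames) for the downstream Conv1d layers
            examples.append((window.T, labels[frame_to_token[f - 1]]))
    return examples
```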
The punctuation label distribution over the entire dataset is shown in Table 2. Due to the highly imbalanced nature of the dataset, we sampled the less frequent classes more often during training so that, in effect, all class counts are equal and the network avoids learning only the prior probability distribution.
Moreover, since BERT was already pre-trained on massive corpora, we fine-tune it for punctuation prediction using the National Speech Corpus [28] of Singaporean English, in addition to MuST-C.
### Training
To fine-tune BERT and pre-train the text encoder, we place two linear layers on top of the base, uncased BERT's last hidden layer for four-way classification. For the pre-trained audio encoder, we use the TED-LIUM 3 [25] framework in Kaldi.
Our main TDNN module for punctuation restoration comprises seven 1-dimensional convolution layers, with said dimension spanning across time. Figure 1 shows the number of input and output channels of each layer. The kernel sizes used are, in order: \(9\), \(9\), \(5\), \(5\), \(7\), \(7\), \(5\), alternating between no dilation and a dilation of \(2\). The stride was kept at \(1\) in all layers. Additionally, we apply ReLU activation and batch normalization [29] to the output of each layer. We trained using stochastic gradient descent [30] with learning rate \(0.00001\) and momentum \(0.9\), instead of the typically used Adam optimizer [31]. This allowed for greater generalizability but still reasonable training speed [32].
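The architecture just described can be sketched as below; the kernel sizes, dilations, and activation/normalization pattern are as stated, while the intermediate channel counts are illustrative assumptions (the exact values appear only in Figure 1):

```python
import torch.nn as nn

channels  = [1792, 512, 256, 128, 64, 32, 16, 4]  # intermediate counts are assumed
kernels   = [9, 9, 5, 5, 7, 7, 5]
dilations = [1, 2, 1, 2, 1, 2, 1]

layers = []
for c_in, c_out, k, d in zip(channels[:-1], channels[1:], kernels, dilations):
    layers += [nn.Conv1d(c_in, c_out, kernel_size=k, dilation=d, stride=1),
               nn.ReLU(),
               nn.BatchNorm1d(c_out)]
tdnn = nn.Sequential(*layers)
# Input: (batch, 1792, 301); output: (batch, 4, T'). Each of the 4 channels is then
# mapped to a logit by two linear layers shared across channels, then softmax.
```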
To experiment with our ensemble, we explored the effect of varying \(\alpha\), the weight assigned to the TDNN for final predictions. \(1-\alpha\) is the weight assigned to BERT.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Set** & **Number of samples** & **Total duration** (\(\mathrm{h}\)) \\ \hline
Training & 92,723 & 392.0 \\
Validation & 10,301 & 43.5 \\
Test & 490 & 2.8 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Training, validation, and test set information
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Label** & **Number of examples** & **\% of total** \\ \hline
No punctuation (NP) & 3,567,572 & 86.9\% \\
Comma (,) & 280,446 & 6.8\% \\
Full stop (.) & 238,213 & 5.8\% \\
Question mark (?) & 20,897 & 0.5\% \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Punctuation label distributions
Figure 1: The EfficientPunct framework. The top branch predicts using text only, while the bottom branch predicts using text and audio.
In Section 4, we report results for \(\alpha=0.3\) to \(\alpha=0.7\) in \(0.1\) increments.
We used a standard Linux computing environment hosted on Google Cloud Platform with a single NVIDIA Tesla P100 GPU. Training took roughly 2 days, and inference can be performed on CPU-only machines 50 times faster than real time, or in about 0.02 seconds per second of audio.
## 4 Results
Our results, reported in Table 3, include a comparison with the current state-of-the-art (SOTA) and best-performing models, MuSe [20] and UniPunc [21]. We also divide the reporting of EfficientPunct's results into three categories:
1. _EfficientPunct-BERT_ considers text only, which is equivalent to the fine-tuned BERT model.
2. _EfficientPunct-TDNN_ considers text and audio via our TDNN.
3. _EfficientPunct_ is an ensemble of predictions from categories (1) and (2) with \(\alpha=0.4\), the best performing weight as reported in Section 4.2.
Categories (1), (2), and (3) are reported in the third, fourth, and fifth rows of Table 3, respectively.
As is standard in punctuation restoration research, we report the F1 scores of commas, full stops, and question marks. The "overall" F1 score aggregates these while considering the imbalanced classes' varying numbers of examples. We also state each model's number of parameters to provide an indication of computational efficiency.
### EfficientPunct and submodules
Our main EfficientPunct model achieves an overall F1 score of 79.5, outperforming all current state-of-the-art frameworks by 1.0 or more points. We also achieve the highest F1 score for each individual punctuation mark, with the most significant improvement occurring for question marks. These results were accomplished with EfficientPunct using less than half of the total number of parameters of UniPunc, which achieved the previous best results. The significant improvement in recognizing question marks may be attributed to our audio encoder, Kaldi's TED-LIUM 3 framework, aiming explicitly at phone recognition. In this process, the acoustics surrounding question marks may be more pronounced in its embedding representation than in those of other acoustic models.
Even more lightweight models are EfficientPunct-BERT and EfficientPunct-TDNN. EfficientPunct-BERT simply stacks two linear layers and a softmax layer on top of BERT. With the incorporation of audio features, we observe that EfficientPunct-TDNN indeed performs slightly better.
These results validate the strength of TDNNs, traditionally used in speech and speaker recognition, in punctuation restoration. UniPunc and MuSe both used attention-based mechanisms for fusing text and acoustic embeddings, but alignments learned as such rely on trainable attention weights. Our forced alignment strategy likely generated more precise temporal matches between text and audio. Combined with a TDNN architecture, we achieved a significantly more efficient model.
### Ensemble weights
In this section, we observe the effect of ensemble weights on EfficientPunct's performance. Equation 3 details the role of \(\alpha\) in weighting predictions made by the TDNN and BERT, with \(\alpha=0\) meaning pure consideration of BERT, and \(\alpha=1\) meaning pure consideration of the TDNN.
Table 4 reports the effect of \(\alpha\) on model performance. When both BERT and the TDNN play an approximately equal role in the ensemble, a fair voting mechanism is enabled, and we achieve the highest F1 scores. However, notice that \(\alpha=0.4\), a weight that considers BERT slightly more strongly than the TDNN, achieves the maximum overall F1. This gain comes mostly from sharper comma predictions, which are notoriously difficult due to varying grammatical and (transcription) writing styles. We reason that \(\alpha=0.4\) excels because a stronger reliance on BERT's language-modeling perspective yields more linguistically correct punctuation, as agreed upon by the countless writers whose texts make up BERT's training corpora.
The strength of our ensemble method is that, in cases of uncertain predictions by either party, i.e. approximately equal softmax probabilities over all classes, the other can provide guidance to resolve the ambiguity. This process demands very few additional parameters through which the input must be passed, as shown by the last two rows of Table 3, but greatly advances state-of-the-art performance.
### Parameter Breakdown
In order to show the specific modules in which we attain superior efficiency, we further break down the parameter count from the last column of Table 3. In Table 5, we detail the number of parameters each model devotes to extracting embeddings and to performing inference on those embeddings to make punctuation decisions.
EfficientPunct incurs much lower computational cost in both the embedding extraction and inference stages. Our usage of Kaldi's TED-LIUM 3 model brought massive efficiency gains compared to MuSe's and UniPunc's usage of wav2vec 2.0. Moreover, our inference module uses less than a tenth of the parameters of UniPunc's corresponding module, which achieved the previous best results.
## 5 Conclusion
In this paper, we explored the application of time-delay neural networks in punctuation restoration, which proved to be more computationally efficient than, and as effective as, previous approaches. Combined with BERT in an ensemble, EfficientPunct establishes a strong new state of the art with a fraction of previous approaches' parameters. A key factor in our model's success is removing the need for attention-based fusion of text and audio features. In previous approaches, multiple attention heads added extraordinary overhead in the punctuation prediction stage. We demonstrated that forced alignment of text and acoustic embeddings, in conjunction with temporal convolutions, renders attention unnecessary.
Additionally, we studied the effect of different weights assigned to members of the ensemble. We found that a slightly stronger weighting of BERT against the multimodal TDNN optimized performance by emphasizing language rules associated with punctuation.
In future works, the effectiveness of jointly training ensemble weights and the TDNN may be examined, which could allow the learning of an optimal ensemble. Jointly training with the text and audio encoders may also be considered, but this procedure should not inhibit the encoders' generalizability for purposes other than punctuation restoration. Finally, we would like to explore the applicability of EfficientPunct in more languages and a similar framework for other post-processing tasks of speech recognition.
|
2308.01729 | Telematics Combined Actuarial Neural Networks for Cross-Sectional and
Longitudinal Claim Count Data | We present novel cross-sectional and longitudinal claim count models for
vehicle insurance built upon the Combined Actuarial Neural Network (CANN)
framework proposed by Mario W\"uthrich and Michael Merz. The CANN approach
combines a classical actuarial model, such as a generalized linear model, with
a neural network. This blending of models results in a two-component model
comprising a classical regression model and a neural network part. The CANN
model leverages the strengths of both components, providing a solid foundation
and interpretability from the classical model while harnessing the flexibility
and capacity to capture intricate relationships and interactions offered by the
neural network. In our proposed models, we use well-known log-linear claim
count regression models for the classical regression part and a multilayer
perceptron (MLP) for the neural network part. The MLP part is used to process
telematics car driving data given as a vector characterizing the driving
behavior of each insured driver. In addition to the Poisson and negative
binomial distributions for cross-sectional data, we propose a procedure for
training our CANN model with a multivariate negative binomial (MVNB)
specification. By doing so, we introduce a longitudinal model that accounts for
the dependence between contracts from the same insured. Our results reveal that
the CANN models exhibit superior performance compared to log-linear models that
rely on manually engineered telematics features. | Francis Duval, Jean-Philippe Boucher, Mathieu Pigeon | 2023-08-03T12:40:44Z | http://arxiv.org/abs/2308.01729v2 | # Telematics Combined Actuarial Neural Networks for Cross-Sectional and Longitudinal Claim Count Data
###### Abstract
We present novel cross-sectional and longitudinal claim count models for vehicle insurance built upon the Combined Actuarial Neural Network (CANN) framework proposed by Mario Wuthrich and Michael Merz. The CANN approach combines a classical actuarial model, such as a generalized linear model, with a neural network. This blending of models results in a two-component model comprising a classical regression model and a neural network part. The CANN model leverages the strengths of both components, providing a solid foundation and interpretability from the classical model while harnessing the flexibility and capacity to capture intricate relationships and interactions offered by the neural network. In our proposed models, we use well-known log-linear claim count regression models for the classical regression part and a multilayer perceptron (MLP) for the neural network part. The MLP part is used to process telematics car driving data given as a vector characterizing the driving behavior of each insured driver. In addition to the Poisson and negative binomial distributions for cross-sectional data, we propose a procedure for training our CANN model with a multivariate
negative binomial (MVNB) specification. By doing so, we introduce a longitudinal model that accounts for the dependence between contracts from the same insured. Our results reveal that the CANN models exhibit superior performance compared to log-linear models that rely on manually engineered telematics features.
Automobile insurance \(\cdot\) Combined Actuarial Neural Network \(\cdot\) Deep Learning \(\cdot\) Claim count data \(\cdot\) Multivariate negative binomial
## 1 Introduction and Motivations
Vehicle insurance products have traditionally been priced based on self-reported attributes provided by insureds. These attributes commonly include various risk factors, including gender, age, vehicle usage, and claim history. Insurers rely on this information to assess the level of risk associated with each insurance contract and determine appropriate premium rates. With the introduction of telematics technology, insurers can now collect a wide range of driving data through devices installed in the vehicles of policyholders or through mobile applications. This includes information such as vehicle speed, acceleration and braking behavior, mileage, location data, and factors like the time of day or types of roads frequently traveled. By leveraging this wealth of data, insurers can gain a more accurate and objective understanding of each individual's driving habits and style, enabling them to customize insurance offerings and pricing based on their actual driving behavior. This emerging paradigm, known as Usage-Based Insurance (UBI), revolutionizes the insurance landscape in various ways. For insurers, telematics data means more accurate risk assessment algorithms, which can often translate into a competitive advantage. For insureds, it means fairer premium rates that align more closely with their actual risk profiles rather than being computed based on broad demographic categories. It also means that they are priced based on risk indicators over which they have control. From a societal perspective, UBI also offers many advantages. One of the key benefits is the potential to improve road safety. By giving incentives for safe driving behavior and reduced mileage, UBI not only helps reduce the frequency and severity of accidents, ultimately saving lives and reducing the economic burden associated with road accidents, but also contributes to reducing greenhouse gas emissions. Additionally, telematics provide insurers with viable alternatives to sensitive risk factors, thereby helping to prevent unfair discrimination. For a more extensive overview of the benefits of UBI, we refer to the works of (Litman, 2007), (Bordoff and Noel, 2008) and (Ziakopoulos et al., 2022).
One of the most prominent questions related to UBI is how to make the most of the collected driving data. A significant subset of the literature has focused on incorporating mileage into pricing models due to its acknowledged importance as a risk factor in assessing risk and determining premium rates (see, for instance, (Boucher et al., 2017), (Lemaire et al., 2015), and (Turcotte and Boucher, 2023)). However, mileage alone fails to tell the whole story about an insured individual's driving behavior, prompting researchers to consider additional telematics information. One prevalent approach involves drawing upon domain knowledge to craft telematics features from raw data. By applying their expertise in the field, researchers can engineer features that capture critical aspects of driving behavior, specifically driving characteristics that are thought to be correlated with the risk of accident. Common examples of such features include harsh braking/acceleration events, cornering events, speeding, distracted driving, the fraction of driving during different time slots (e.g., rush hour, late-night hours, weekdays) and on different road types (e.g., urban roads, highways), as well as the fraction of driving in different speed slots. While this approach captures signals missed by traditional risk factors and mileage (thereby improving pricing accuracy), it relies heavily on human judgment, with its inherent flaws and biases. With countless possible telematics features that can be engineered from raw telematics data, selecting the optimal ones for pricing is not straightforward. Furthermore, this process
necessitates the setting of thresholds. For example, how should night driving or harsh braking be precisely defined?
The limitations of the aforementioned approach have motivated researchers to explore a new set of methods that rely more on data and decrease the need for human judgment. As highlighted in a recent study by [Embrechts and Wuthrich, 2022], the increasing amount of data available presents a challenge in manually designing features, leading actuaries to increasingly depend on tools like neural networks to learn and extract meaningful representations from the data. [Blier-Wong et al., 2021] underline the importance of learning valuable representations from emerging data sources such as text, image, and sensor data. These sources, which include telematics car driving data, can enrich traditional data and offer improved insights for predicting future losses in insurance contracts. Neural networks are regarded as the most effective means for automatically extracting valuable features from raw data, which validates their practical application. In recent years, researchers have successfully applied the toolbox of deep learning, namely neural networks architectures with a large number of hidden layers, to handle telematics data and other types of unstructured data. In their work, [Wuthrich, 2017] introduce the speed-acceleration heatmap, a matrix representation that characterizes the driving style of an insured driver, which is well-suited for processing by deep learning algorithms. Subsequent studies ([Gao and Wuthrich, 2018], [Gao et al., 2019], [Gao et al., 2022]) have effectively leveraged these heatmaps by employing neural networks to learn representations from them. In [Meng et al., 2022], the authors propose a supervised driving risk scoring convolutional neural network model that uses telematics car driving data to improve automobile insurance claims frequency prediction. [Blier-Wong et al., 2020] propose a Convolutional Regional Autoencoder model for generating geographical risk encodings using convolutional neural networks. The resulting encodings, which aim to replace the traditional territory variable, proved beneficial for risk-related regression tasks.
In this paper, we present novel claim count models based on the Combined Actuarial Neural Network (CANN) approach, initially proposed by [Wuthrich and Merz, 2019]. The CANN approach involves embedding a classical regression model, such as a generalized linear model (GLM; see [Nelder and Wedderburn, 1972] and [Dionne and Vanasse, 1989]), into a neural network, achieved by blending the regression functions of both models. Consequently, the resulting model comprises the two following components: the classical regression (or actuarial) model and the neural network. This blending process can be interpreted as a form of neural network boosting for the actuarial model, combining the strengths of both approaches. The calibration of the CANN neural network is performed using the classical actuarial model as the initial value in the gradient descent algorithm, with the negative log-likelihood of the specified distribution used as the loss function. One of the key benefits of this specific architecture is the solid foundation offered by the classical model, complemented by the network component's flexibility and pattern recognition capabilities. Neural networks excel in approximating highly nonlinear functions and possess the ability to compute valuable interactions between input variables automatically. Consequently, the CANN approach combines the best of both worlds, leveraging the interpretability and reliability of the classical model while capitalizing on the power of neural networks to capture complex relationships and patterns in the data. A few studies have successfully leveraged this approach: [Schelldorfer and Wuthrich, 2019] present a case study where a Poisson GLM for predicting claims frequencies is initially used, then enhanced through generalized additive models (GAMs) with natural cubic splines and finally combined with a neural network, resulting in a CANN approach. The study also explores the use of embedding layers for more efficient treatment of categorical variables; [Gabrielli et al., 2020] boost an overdispersed Poisson model with a multilayer perceptron to improve individual loss reserving; [Tzougas and Kutzkov, 2023] use the CANN approach to enhance binary classification; [Laporta et al., 2023] apply the CANN architecture in the context of quantile regression.
Our models employ a log-linear model for the actuarial model part and a multilayer perceptron (MLP) for the network part. Telematics information is incorporated into the MLP as a telematics vector, which is given as
input to represent the driving behavior of each insured driver. The MLP part additionally includes traditional risk factors as inputs, enabling interactions between traditional and telematics inputs, while the log-linear part, constrained in estimating complex functions, is only given traditional risk factors. We explore three distinct distribution specifications for the claim count: Poisson and negative binomial for cross-sectional analysis, and multivariate negative binomial (MVNB; see [Hausman et al., 1984] and [Boucher et al., 2008]), also known as _negative multinomial_, for longitudinal analysis. The MVNB distribution is a popular choice for modeling longitudinal claim count data, as it captures the dependence between contracts from the same insured. However, to our knowledge, this specification has never been adapted to a neural network model for claim count regression. In this study, we extend the application of the MVNB distribution by incorporating it into the neural network framework, specifically the CANN architecture, for modeling longitudinal claim count data. This adaptation allows us to leverage the strengths of both the MVNB distribution and the neural network architecture. Our findings indicate that the CANN models perform better than their log-linear counterparts that rely on manually engineered telematics features. Furthermore, the CANN model using the MVNB specification exhibits a significant improvement compared to the two cross-sectional specifications.
In Section 2, we present the two datasets available to us: the contract dataset and the telematics dataset. Following that, in Section 3, we delve into the theory behind the CANN claim count models and also discuss the log-linear models that serve as benchmarks. Moving on to Section 4, we provide an explanation of how we apply the models on our specific dataset and show how we preprocess telematics data. In Section 5, we assess the performance of the models on a holdout sample and interpret the CANN models through permutation feature importance and partial dependence plots. Lastly, we conclude in Section 6.
## 2 Data
We have access to data from a Canadian property and casualty insurance company, which comes in two distinct datasets: the contract and the telematics dataset.
### Contract dataset
In the contract dataset, each row represents a unique insurance contract. Contracts typically last for one year, but there are instances where their duration may be shorter or longer. Each vehicle is observed over one or more contracts; therefore, one vehicle can be represented by one or more rows in this dataset. Based on risk factors, a premium must be computed for each contract. When using a cross-sectional data model, contracts from the same vehicle are assumed to be independent of each other. On the other hand, a longitudinal data model assumes a dependence between contracts, allowing it to use information from previous contracts (including traditional risk factors, telematics data, past claims, etc.) to compute the premium. The contract dataset includes attributes commonly used in vehicle insurance pricing models. These traditional risk factors, displayed in Table 1, are recorded for 117,268 insurance contracts initiated between December 15\({}^{\text{th}}\), 2015 and December 31\({}^{\text{st}}\), 2018. In cases where multiple drivers are associated with a particular contract, attributes of the principal driver are used. Additionally, the dataset includes the vehicle identification number (VIN), allowing us to identify the insured vehicle accurately, alongside the reported claim count. As our goal is to perform claim count regression on contracts, the claim count variable will serve as the response for our supervised learning algorithms, namely the log-linear and the CANN models. The 117,268 contracts are associated with 49,671 distinct vehicles, resulting in an average of approximately 2.36 contracts per vehicle. The histogram of the number of contracts per vehicle is shown in Figure 1.
### Telematics dataset
All 117,268 contracts have been logged using an on-board diagnostics (OBD) device, capturing driving information. This data is stored as trip summaries in the telematics dataset, which comprises 117,566,259 trips. Each row in the dataset represents a specific trip, and every trip is described by 4 attributes: the departure and arrival date and time, the distance driven, and the maximum speed reached. Additionally, each trip is associated with a VIN, and with the date information, it is thus possible to link each trip with one of the 117,268 contracts. An extract from the telematics dataset is presented in Table 2.
### Training, validation, and testing datasets
In supervised learning analysis, splitting the available data into training, validation, and testing sets is paramount for ensuring the reliability and generalizability of the learned model.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Variable name** & **Description** & **Type** \\ \hline vin & Unique vehicle identifier & ID \\ \hline annual\_distance & Annual distance declared by the insured & Numeric \\ commute\_distance & Distance to the place of work declared by the insured & Numeric \\ conv\_count\_3\_yrs\_minor & Number of minor contraventions in the last three years & Numeric \\ distance & Real distance driven & Numeric \\ expo & Contract duration in years & Numeric \\ gender & Gender of the insured & Categorical \\ marital\_status & Marital status of the insured & Categorical \\ pmt\_plan & Payment plan chosen by the insured & Categorical \\ veh\_age & Vehicle age & Numeric \\ veh\_use & Use of the vehicle & Categorical \\ years\_licensed & Number of years since obtaining driver’s license & Numeric \\ \hline nb\_claims & Number of claims & Numeric \\ \hline \hline \end{tabular}
\end{table}
Table 1: Variables of the contract dataset.
Figure 1: Number of contracts per vehicle.
The training set, which usually comprises the largest portion of the data, is used to train the model's parameters and optimize its performance. However, relying solely on the training set for performance assessment can lead to overfitting, particularly when the model has a high capacity. To address this, the validation set is used during the modeling process to assess the model's performance on unseen data. It plays an important role in tuning hyperparameters, selecting the optimal model architecture, and preventing overfitting. By assessing the model's performance on the validation set, one can obtain an estimate of its generalization performance and make necessary adjustments to improve its ability to generalize well to new, unseen examples. However, it is important to note that the back-and-forth process of evaluating the model on the validation set and adjusting its hyperparameters can introduce information leakage from the validation set into the training set. This can create an illusion of better performance than the model would exhibit in real-world scenarios. As a result, the testing set is reserved for the final evaluation of the learned model. It serves as an unbiased assessment of how well the model will perform on completely unseen data. This final evaluation provides an estimate of the model's true performance and helps determine its reliability in real-world scenarios. By keeping the testing set separate from the training and validation sets, we can ensure an unbiased evaluation and avoid any potential data leakage. We partition the data as outlined in Table 3 for our analysis. Approximately 60% of the vehicles are allocated for training, while approximately 20% each is assigned to the validation and testing sets.
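To make the grouped nature of this split concrete, here is a minimal R sketch, assuming the contracts are stored in a data frame `contracts` keyed by the vin column of Table 1; sampling vehicles rather than contracts keeps all contracts of a vehicle in the same set, matching the counts of Table 3.

```r
set.seed(2023)                                # arbitrary seed for reproducibility
vins       <- unique(contracts$vin)
train_vins <- sample(vins, 30000)
rest       <- setdiff(vins, train_vins)
valid_vins <- sample(rest, 10000)
test_vins  <- setdiff(rest, valid_vins)       # remaining 9,671 vehicles

train_df <- contracts[contracts$vin %in% train_vins, ]
valid_df <- contracts[contracts$vin %in% valid_vins, ]
test_df  <- contracts[contracts$vin %in% test_vins, ]
```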
## 3 Count Regression Models
We consider a training dataset denoted as \(\mathcal{T}_{r}\), which consists of \(|\mathcal{T}_{r}|\) rows representing vehicle insurance contracts. Contracts are grouped by vehicle and each vehicle \(i\) is observed over \(T_{i}\) contracts. We define \(Y_{it}\) as a discrete random variable denoting the number of claims during the \(t^{\text{th}}\) contract of vehicle \(i\). Furthermore, we have \(\mathbf{x}_{it}\), a vector containing relevant predictor variables associated with the \(t^{\text{th}}\) contract of vehicle \(i\). Importantly, we assume independence among all insured vehicles.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**VIN** & **Trip ID** & **Departure datetime** & **Arrival datetime** & **Distance** & **Maximum speed** \\ \hline A & \(1\) & \(2017\)-05-02 19:04:15 & \(2017\)-05-02 19:24:24 & \(25.0\) & \(104\) \\ A & \(2\) & \(2017\)-05-02 21:31:29 & \(2017\)-05-02 21:31:29 & \(6.4\) & \(66\) \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\ A & \(2320\) & \(2018\)-04-30 21:17:22 & \(2018\)-04-30 21:18:44 & \(0.2\) & \(27\) \\ \hline B & \(1\) & \(2017\)-03-26 11:46:07 & \(2017\)-03-26 11:53:29 & \(1.5\) & \(76\) \\ B & \(2\) & \(2017\)-03-26 15:18:23 & \(2017\)-03-26 15:51:46 & \(35.1\) & \(119\) \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\ B & \(1485\) & \(2018\)-03-23 20:07:08 & \(2018\)-03-23 20:20:30 & \(10.1\) & \(92\) \\ \hline C & \(1\) & \(2017\)-11-20 08:14:34 & \(2017\)-11-20 08:40:21 & \(9.7\) & \(78\) \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Extract from the telematics dataset. Dates are displayed in the yyyy-mm-dd format. The actual VINs have been hidden for privacy purposes.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Set** & **Symbol** & **Number of vehicles** & **Number of contracts** & **Number of trips** \\ \hline Training & \(\mathcal{T}_{r}\) & \(30{,}000\) & \(70{,}451\) & \(71{,}416{,}560\) \\ Validation & \(\mathcal{V}_{a}\) & \(10{,}000\) & \(23{,}368\) & \(22{,}611{,}829\) \\ Testing & \(\mathcal{T}_{e}\) & \(9{,}671\) & \(23{,}449\) & \(23{,}537{,}870\) \\ \hline Total & – & \(49{,}671\) & \(117{,}268\) & \(117{,}566{,}259\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Data partitioning
In claim count regression, the ultimate goal is to estimate the probability mass function (PMF) of the number of claims, given all past and current information about the vehicle. Mathematically, we seek to estimate:
\[\mathbb{P}\left(Y_{it}=y_{it}|\mathbf{y}_{i,(1:t-1)},\mathbf{x}_{i,(1:t)}\right),\quad y _{it}\in\mathbb{N}, \tag{1}\]
where \(\mathbf{y}_{i,(1:t-1)}=(y_{i1},\ldots,y_{i,t-1})\) is the vector of past claims and \(\mathbf{x}_{i,(1:t)}=\{\mathbf{x}_{i1},\ldots,\mathbf{x}_{it}\}\) is the set of past and current covariate vectors for vehicle \(i\).
### Cross-sectional models
In addition to assuming independence between vehicles, cross-sectional models also assume independence between contracts from the same vehicle. Consequently, these models do not use the history of a vehicle to estimate its future risk. The PMF of the number of claims can thus be written as:
\[\mathbb{P}\left(Y_{it}=y_{it}|\mathbf{y}_{i,(1:t-1)},\mathbf{x}_{i,(1:t)}\right)= \mathbb{P}\left(Y_{it}=y_{it}|\mathbf{x}_{it}\right),\quad y_{it}\in\mathbb{N}. \tag{2}\]
#### 3.1.1 Poisson regression
The Poisson distribution is widely used in supervised learning analysis for claim count data due to its good properties and simplicity. Under the Poisson specification, the PMF of the claim count for the \(t^{\text{th}}\) contract of vehicle \(i\), denoted by \(Y_{it}\), given its predictor vector, denoted by \(\mathbf{x}_{it}\), is defined by
\[\mathbb{P}(Y_{it}=y_{it}|\mathbf{x}_{it})=\frac{e^{-\mu(\mathbf{x}_{it})}\mu(\mathbf{x}_{ it})^{y_{it}}}{y_{it}!},\quad\text{for}\quad y_{it}\in\mathbb{N}, \tag{3}\]
with \(\mathbb{E}[Y_{it}|\mathbf{X}_{it}=\mathbf{x}_{it}]=\text{Var}[Y_{it}|\mathbf{X}_{it}=\mathbf{x }_{it}]=\mu(\mathbf{x}_{it})\). The mean parameter \(\mu(\mathbf{x}_{it})\) denotes the conditional expectation (and conditional variance) of \(Y_{it}\). The regression function \(\mu(\cdot)\) captures the relationship between the predictors \(\mathbf{x}_{it}\) and the mean parameter in the Poisson distribution, indicating how the conditional expected count is influenced by the predictors. Subsequently, one must choose a specific functional form for \(\mu(\cdot)\), which defines a hypothesis function space \(\mathcal{H}\) that includes all the candidate functions for modeling \(\mu(\cdot)\). The next step involves selecting the optimal function \(\widehat{\mu}\in\mathcal{H}\), equivalent to estimating the parameters of the specified functional form based on the available data.
In order to define what constitutes a good regression function, it is necessary to select a suitable loss function that quantifies the dissimilarity between the estimated probability mass and the true label. The goal is to minimize this dissimilarity, improving the model's predictive performance. The cross-entropy loss, also known as the negative log-likelihood loss, is a commonly chosen option. For a specific observation \(i\), the cross-entropy loss is given by \(-\ln(p_{i})\), where \(p_{i}\) is the estimated probability of observing the true label \(y_{i}\). This loss function assigns a higher penalty to larger discrepancies between the true label and the predicted probability, incentivizing the model to converge towards more accurate predictions. To estimate the parameters, we typically aim to minimize the average loss function over the training set, also called the _empirical risk_. In the case of Poisson regression, this involves minimizing the average Poisson cross-entropy by solving the following optimization problem:
\[\widehat{\mu}=\operatorname*{argmin}_{\mu\in\mathcal{H}}\left\{-\frac{1}{|\mathcal{T}_{r}|}\sum_{(i,t)\in\mathcal{T}_{r}}\Big(y_{it}\ln[\mu(\mathbf{x}_{it})]-\mu(\mathbf{x}_{it})-\ln(y_{it}!)\Big)\right\}. \tag{4}\]
Note that this is equivalent to maximizing the likelihood function. For some specifications of \(\mu(\cdot)\), notably the log-linear specification, the criterion in Equation (4) is convex, which enables various convex optimization techniques to be applied. Alternative estimation techniques can also be used; a common option is regularization, including lasso, ridge, and elastic-net regressions. Instead of solely minimizing the average
cross-entropy, these methods involve minimizing a modified objective function that includes a penalty term. Regularization is particularly beneficial for addressing common issues such as multicollinearity and overfitting.
Log-linear Poisson regression. In the Poisson regression context, one notable specification for the regression function is the log-linear form, where the mean parameter is expressed as the exponential of a linear function of the predictors:
\[\mu^{\text{LL}}(\mathbf{x};\mathbf{\beta})=\exp\left\{\langle\mathbf{x},\mathbf{ \beta}\rangle\right\}, \tag{5}\]
where \(\mathbf{\beta}\) denotes a vector of parameters, and \(\langle\mathbf{x},\mathbf{\beta}\rangle\) stands for the inner product between the predictor vector \(\mathbf{x}\) and the coefficient vector \(\mathbf{\beta}\). The use of the exponential function ensures that the mean parameter remains positive. Log-linear Poisson regression has favorable properties, notably its interpretability stemming from the quasi-linearity of the link function \(\mu(\cdot)\). Moreover, when maximum likelihood is used for parameter estimation, this regression model falls within the framework of generalized linear models. GLMs provide valuable properties, such as the asymptotic Gaussian distribution of the parameters \(\mathbf{\beta}\), allowing for the estimation of standard errors, hypothesis testing, and construction of confidence intervals.
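As an illustration, such a model can be fitted by maximum likelihood with base R's glm(); this is a minimal sketch assuming a data frame `train_df` holding the (preprocessed) covariates of Table 1 together with the nb_claims response.

```r
# Log-linear Poisson benchmark fitted by maximum likelihood;
# the vin identifier is excluded from the regression
fit_pois <- glm(nb_claims ~ . - vin, family = poisson(link = "log"),
                data = train_df)
summary(fit_pois)   # coefficients, standard errors, and z-tests
```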
However, log-linear regression does have a significant drawback - its regression function, being linear, lacks flexibility. To address this limitation, various techniques can be employed. In fact, any supervised learning technique could be used for the specification of \(\mu(\cdot)\). One simple approach to incorporating non-linearity involves adding polynomial terms of the predictors to the model alongside the linear terms. Splines, on the other hand, offer a flexible and powerful method for modeling non-linear relationships. Instead of fitting a single global function, splines divide the predictor range into smaller intervals and fit separate polynomial functions within each interval. This approach enables more localized and flexible modeling of the relationship between the predictors and the mean parameter.
CANN Poisson regression. In some cases, the supervised learning problem may require even more flexibility, and neural networks are particularly useful in such scenarios. Neural networks are formidable function approximation machines, well-known for their ability to estimate a wide range of highly non-linear multivariate functions. One of the key advantages of neural networks is their ability to handle raw and unstructured data effectively. Because we deal with detailed telematics data, this capability forms the basis for adopting the Combined Actuarial Neural Network (CANN) approach of [Wuthrich and Merz, 2019], which embeds a classical actuarial model into a neural network architecture. A CANN model consists of two distinct components: the classical regression model component and the neural network component. This architecture offers great flexibility, allowing for seamless integration of any classical model whose regression function is compatible with a neural network architecture. Likewise, the neural network component can employ various types of supervised architectures, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and other architectures tailored to the specific problem at hand. The classical regression model provides good initial estimations and serves as a guide for the neural network component. It offers a starting point for the network's optimization process, enabling faster convergence. The neural network component, in turn, refines the initial estimations, capturing additional signals and uncovering patterns that may have been missed by the classical model alone.
In our specific case, we use log-linear count regression as the classical model and a multilayer perceptron (MLP) as the neural network component in the CANN model. As a result, we have the following specification for the regression function:
\[\mu^{\text{CANN}}(\mathbf{x};\mathbf{\beta},\mathbf{\theta})=\mu^{\text{LL}} (\mathbf{x};\mathbf{\beta})\times\mu^{\text{MLP}}(\mathbf{x};\mathbf{\theta}), \tag{6}\]
where \(\mu^{\text{MLP}}(\cdot)\) is the regression function learned by a multilayer perceptron parametrized with \(\mathbf{\theta}\). In a nutshell, an MLP consists of interconnected layers, including an input layer, hidden layers, and an output layer. Each layer applies an affine transformation to the inputs it receives, followed by a non-linear activation function. This combination of linear transformations and non-linear activations allows MLPs to model complex non-linear relationships in the data. To delve into the mathematical description of an MLP, we can break down its structure starting from the input layer and progressing towards the output layer:
1. **Input layer (\(l=0\))**: The input layer consists of \(n_{0}\) nodes representing the input variables \(\mathbf{x}=[x_{1},x_{2},\dots,x_{n_{0}}]\).
2. **First hidden layer (\(l=1\)):** The first hidden layer contains \(n_{1}\) nodes, connected to the nodes from the input layer (\(l=0\)) and the nodes in the subsequent layer (\(l=2\)). The computations in the first hidden layer involve an affine transformation of the input variables followed by the application of a non-linear activation function, which introduces non-linearity into the network. Let us denote the weight matrix between layers \(l=0\) and \(l=1\) as \(\mathbf{W}^{(1)}\) with dimensions \((n_{1},n_{0})\) and the bias vector as \(\mathbf{b}^{(1)}\) with dimensions \((n_{1},1)\). The activation function applied to the transformed inputs is denoted as \(\phi\). The computations in the first hidden layer can then be expressed as: \[\mathbf{a}^{(1)}=\mathbf{W}^{(1)}\mathbf{x}+\mathbf{b}^{(1)},\quad\mathbf{z}^{(1)}=\phi\left(\mathbf{ a}^{(1)}\right),\] (7) where \(\mathbf{a}^{(1)}\) represents the preactivation values in the first hidden layer, and \(\mathbf{z}^{(1)}\) represents the post-activation values. It is worth noting that the activation function \(\phi\) is applied element-wise on the preactivation vector \(\mathbf{a}^{(1)}\).
3. **Subsequent hidden layers (\(l=2,3,\dots,L-2\)):** Each subsequent hidden layer \(l\) contains \(n_{l}\) nodes, connected to the nodes from the previous layer (\(l-1\)) and the nodes in the following layer (\(l+1\)). Similar to the first layer, the computations in the subsequent hidden layers involve an affine transformation of the inputs \(\mathbf{z}^{(l-1)}\) followed by the application of the non-linear activation function \(\phi\). Let us denote the weight matrix between layers \(l-1\) and \(l\) as \(\mathbf{W}^{(l)}\) with dimensions \((n_{l},n_{l-1})\) and the bias vector as \(\mathbf{b}^{(l)}\) with dimensions \((n_{l},1)\). The computations in the hidden layers can then be expressed as: \[\mathbf{a}^{(l)}=\mathbf{W}^{(l)}\mathbf{z}^{(l-1)}+\mathbf{b}^{(l)},\quad\mathbf{z}^{(l)}=\phi \left(\mathbf{a}^{(l)}\right),\] (8) where \(\mathbf{a}^{(l)}\) represents the preactivation values in the \(l^{\text{th}}\) hidden layer, and \(\mathbf{z}^{(l)}\) represents the post-activation values.
4. **Output layer (\(l=L-1\))**: The output layer consists of \(n_{L-1}\) nodes, representing the final output(s) of the MLP. Similar to the hidden layers, the output layer involves an affine transformation followed by an activation function \(g\). We denote the weight matrix between layers \(L-2\) (last hidden layer) and \(L-1\) as \(\mathbf{W}^{(L-1)}\) with dimensions \((n_{L-1},n_{L-2})\) and the bias vector as \(\mathbf{b}^{(L-1)}\) with dimensions \((n_{L-1},1)\). The computations within the output layer can be expressed as: \[\mathbf{a}^{(L-1)}=\mathbf{W}^{(L-1)}\mathbf{z}^{(L-2)}+\mathbf{b}^{(L-1)},\quad\mathbf{z}^{(L-1)}= g\left(\mathbf{a}^{(L-1)}\right).\] (9)
Note that the number of output neurons \(n_{L-1}\) should match the number of modeled distribution parameters. In the context of Poisson regression, where we are modeling a single parameter \(\mu\), only one output neuron is necessary. The choice of the output activation function \(g(\cdot)\) is important and should be aligned with the specific problem being tackled since it determines the range and properties of the output values. For instance, in the classic case of a multi-class classification problem (where the multinoulli distribution is used as a specification for the target variable), each output neuron represents a class, and the predicted probabilities for each class should be positive and sum up to 1. In this scenario, a common choice for the activation function is the softmax
function, which normalizes the outputs and ensures they are positive and sum up to 1. In our case, we need to ensure that the parameter \(\mu\), which represents the expected count, is always positive. While the exponential function is a natural choice to enforce positivity, it can sometimes lead to numerical instability, especially for large input values. As a better alternative, we choose to use the softplus function as the activation function for the output layer, defined as \(\zeta(x)=\log(1+\exp(x))\). The softplus function is well-behaved even for large input values, mitigating the issue of numerical instability that can arise with the exponential function. The parameters of the generic MLP described above, consisting of weight matrices and bias vectors, can be denoted as:
\[\mathbf{\theta}=\left\{\mathbf{W}^{(1)},\mathbf{b}^{(1)},\mathbf{W}^{(2)},\mathbf{b}^{(2)},\dots, \mathbf{W}^{(L-1)},\mathbf{b}^{(L-1)}\right\}. \tag{10}\]
Naturally, these parameters must be estimated. However, the criterion in Equation (4) is typically not convex, making it challenging to find the global minimum of the empirical risk. In practice, the goal is to find a "good enough" local minimum that yields satisfactory performance on the task. Gradient descent algorithms, such as stochastic gradient descent (SGD) and its variants, are commonly employed to update iteratively the parameters \(\mathbf{\theta}\) in the direction of the steepest descent. The backpropagation algorithm efficiently computes the gradients and propagates them through the network, enabling parameter updates. With the introduced notation for the MLP, we can now express the specification in (6) as
\[\mu^{\text{CANN}}(\mathbf{x};\mathbf{\beta},\mathbf{\theta})=\zeta\left\{\langle\mathbf{x},\mathbf{\beta}\rangle+\mathbf{a}^{(L-1)}(\mathbf{x};\mathbf{\theta})\right\}. \tag{11}\]
The corresponding computational graph for \(L=5\) (i.e., 3 hidden layers) is shown in Figure 2.
Figure 2: CANN architecture for the Poisson specification. The MLP’s preactivation output value \(a_{1}^{(4)}\) is added to the log-linear model’s preactivation output value \(\langle\mathbf{x},\mathbf{\beta}\rangle\) before being transformed with the softplus function \(\zeta(\cdot)\). The resulting \(\mu\) value is compared to the ground truth \(y\) using Poisson cross-entropy loss. The architecture shown employs a 3-hidden-layer MLP, but can be customized with any number of layers.

Like a standard MLP, the network parameters \(\mathbf{\beta}\) and \(\mathbf{\theta}\) can be estimated using gradient descent. A simplified pseudo-algorithm for the training of the Poisson CANN model is provided in Algorithm 1. The learning rate \(\eta\) is a hyperparameter that determines the step size taken every time a gradient descent step is performed. In other words, it controls how quickly or slowly the network parameters are updated during training. A higher learning rate allows for larger steps, which can lead to faster convergence. However, an excessively high learning rate may cause the optimization process to overshoot or oscillate around the minimum, hindering convergence. Conversely, a very low learning rate might result in slow convergence, requiring more iterations to reach an acceptable solution. Finding the right learning rate is important and is typically an empirical process that requires experimentation and tuning. In practice, mini-batch gradient descent is commonly used for training neural networks. It works by dividing the training data into smaller subsets, called mini-batches, and computing the gradients and parameter updates based on these mini-batches. This approach offers computational efficiency and improved generalization compared to regular gradient descent, making it a preferred choice in practice. For a comprehensive understanding of neural networks, we refer to the excellent book [Goodfellow et al., 2016].
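Algorithm 1 itself is not reproduced here; as a rough illustration of the training loop it describes, the following is a minimal sketch in the torch library for R (the library used later in Section 4). The module name cann_poisson is ours, the MLP dimensions follow Figure 2, and in Algorithm 1 the glm layer would additionally be initialized with the maximum likelihood estimates of the log-linear model.

```r
library(torch)

cann_poisson <- nn_module(
  "cann_poisson",
  initialize = function(n_inputs) {
    self$glm <- nn_linear(n_inputs, 1)   # log-linear (actuarial) component
    self$mlp <- nn_sequential(           # network component of Figure 2
      nn_linear(n_inputs, 128), nn_relu(),
      nn_linear(128, 64), nn_relu(),
      nn_linear(64, 32), nn_relu(),
      nn_linear(32, 1)
    )
  },
  forward = function(x) {
    # Sum of the two preactivation outputs, mapped to a positive mean
    nnf_softplus(self$glm(x) + self$mlp(x))
  }
)

# One gradient-descent step on a mini-batch; x is a (batch, n_inputs)
# float tensor and y a (batch, 1) float tensor. The Poisson cross-entropy
# is written up to the constant ln(y!) term of Equation (4).
train_step <- function(model, optimizer, x, y) {
  optimizer$zero_grad()
  mu <- model(x)
  loss <- torch_mean(mu - y * torch_log(mu))
  loss$backward()
  optimizer$step()
  loss$item()
}
```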
#### 3.1.2 Negative binomial regression
One issue with the Poisson distribution is its equidispersion assumption. Indeed, we have that \(\mu(\mathbf{x})=\mathbb{E}[Y_{it}|\mathbf{X}=\mathbf{x}]=\text{Var}[Y_{it}|\mathbf{X}=\mathbf{x}]\). In practice, claim count data often exhibit overdispersion, where the observed variance of the claim count is greater than the mean. To address this limitation, alternative distributions allowing for overdispersion can be used. Among them, the negative binomial distribution (see, for instance, [Denuit et al., 2007] and [Cameron and Trivedi, 2013]) stands out as a common choice. Under the negative binomial specification, the PMF of the claim count for the \(t^{\text{th}}\) contract of vehicle \(i\) (\(Y_{it}\)), given its predictor
vector (\(\mathbf{x}_{it}\)), can be written as
\[\mathbb{P}(Y_{it}=y_{it}|\mathbf{x}_{it})=\frac{\Gamma(y_{it}+\phi)}{y_{it}!\Gamma(\phi)}\left(\frac{\phi}{\phi+\mu(\mathbf{x}_{it})}\right)^{\phi}\left(\frac{ \mu(\mathbf{x}_{it})}{\mu(\mathbf{x}_{it})+\phi}\right)^{y_{it}},\quad\text{for}\quad y _{it}\in\mathbb{N}, \tag{12}\]
where \(\phi>0\) is a dispersion parameter. This can be seen as a generalization of the Poisson distribution. Indeed, the Poisson distribution is recovered when \(\frac{1}{\phi}\to 0\). The first two centered moments are given by:
\[\mathbb{E}[Y_{it}|\mathbf{X}_{it}=\mathbf{x}_{it}]=\mu(\mathbf{x}_{it})\quad \text{and}\quad\text{Var}[Y_{it}|\mathbf{X}_{it}=\mathbf{x}_{it}]=\mu(\mathbf{x}_{it})+ \frac{\mu(\mathbf{x}_{it})^{2}}{\phi}. \tag{13}\]
As can be seen, the negative binomial specification assumes overdispersion since \(\text{Var}[Y_{it}|\mathbf{X}_{it}=\mathbf{x}_{it}]>\mathbb{E}[Y_{it}|\mathbf{X}_{it}=\mathbf{x} _{it}]\). Once the specification for the regression function \(\mu(\cdot)\) has been chosen, which defines a set of candidate functions \(\mathcal{H}\), one can estimate the parameters of the regression function \(\mu(\cdot)\) along with the dispersion parameter \(\phi\) by maximum likelihood or, equivalently, by minimizing the empirical risk over the training set:
\[\{\widehat{\mu},\widehat{\phi}\}=\operatorname*{argmin}_{\mu\in\mathcal{H},\phi>0}\left\{-\frac{1}{|\mathcal{T}_{r}|}\sum_{(i,t)\in\mathcal{T}_{r}}\left(\ln\left[\frac{\Gamma(y_{it}+\phi)}{y_{it}!\Gamma(\phi)}\right]+\phi\ln\left[\frac{\phi}{\phi+\mu(\mathbf{x}_{it})}\right]+y_{it}\ln\left[\frac{\mu(\mathbf{x}_{it})}{\mu(\mathbf{x}_{it})+\phi}\right]\right)\right\}. \tag{14}\]
Log-linear negative binomial regression. As in the Poisson case, a common specification for \(\mu(\cdot)\) is the log-linear form, defined in Equation (5). In this case, the criterion in (14) is convex, and convex optimization can be used to estimate \(\mathbf{\beta}\) and \(\phi\).
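For illustration, this benchmark can be fitted with MASS::glm.nb(), whose dispersion parameter theta plays the role of \(\phi\) here; `train_df` is the assumed data frame of the earlier sketch.

```r
library(MASS)
fit_nb <- glm.nb(nb_claims ~ . - vin, data = train_df)
fit_nb$theta   # estimated dispersion parameter, i.e., phi
```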
CANN negative binomial regression. Similar to the approach used for the Poisson case, a CANN architecture can be used to model the mean parameter in the negative binomial distribution. The specification for the regression function \(\mu(\cdot)\) remains identical to the Poisson case, as defined in Equation (11). In order to incorporate the extra distribution parameter \(\phi\), an additional output neuron is introduced in the network. This output neuron is connected to a neural network weight \(w_{\phi}\in\mathbb{R}\) through the softplus function, ensuring that \(\phi\) remains positive, i.e., \(\phi=\zeta(w_{\phi})\). It is important to highlight that the distribution parameter \(\phi\) is not directly connected to the input variables \(\mathbf{x}\). As a result, no heterogeneity is incorporated into this parameter, and a common estimated value \(\widehat{\phi}\) is used for all observations. The exact architecture for the negative binomial CANN model is depicted in Figure 3. The network parameters \(\mathbf{\beta}\), \(\mathbf{\theta}\), and \(w_{\phi}\) can be learned by minimizing the criterion in Equation (14) using the procedure described in Algorithm 2.
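As a rough sketch of the two ingredients added on top of the Poisson CANN, the loss of Equation (14) (up to the constant \(\ln(y_{it}!)\) term) and the softplus-transformed weight \(w_{\phi}\) could be written in R torch as follows; the comments indicate where each piece would plug into the earlier sketch.

```r
# Negative binomial cross-entropy of Equation (14), dropping the ln(y!)
# constant; y, mu, and phi are broadcastable float tensors
nb_nll <- function(y, mu, phi) {
  ll <- torch_lgamma(y + phi) - torch_lgamma(phi) +
    phi * torch_log(phi / (phi + mu)) +
    y * torch_log(mu / (mu + phi))
  -torch_mean(ll)
}

# In the module's initialize():  self$w_phi <- nn_parameter(torch_zeros(1))
# In the training step:          phi <- nnf_softplus(model$w_phi)
```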
### Longitudinal models
Cross-sectional models assume independence between all contracts. However, in our case, the data exhibits clustering due to contracts being grouped by vehicle. While it is reasonable to assume independence between contracts from distinct vehicles, this assumption is less valid for contracts from the same vehicle. In reality, the claim counts of contracts within the same vehicle may be influenced by shared vehicle-specific characteristics, unobserved risk factors, or policy-level effects, resulting in dependence between observations within each vehicle cluster. To appropriately address this dependence, we transition from cross-sectional to longitudinal models, enabling the introduction of within-vehicle dependence. In the case of claim count data, a longitudinal model can efficiently leverage the history of the vehicles to refine the risk estimation for future contracts.
While various models are available to analyze longitudinal data, such as random effects models, fixed effects models, generalized estimating equations (GEE), and autoregressive models (AR), among others, empirical evidence in the context of claim count regression supports the effectiveness of random (or mixed) effects models (see [Boucher et al., 2008]). In these models, a random effect, which is a random variable, is introduced in the
specified distribution. For instance, in the case of count data, the specified distribution could be the Poisson distribution. The random effect is assumed to follow a certain distribution, such as a normal, gamma, or another appropriate distribution. The inclusion of the random effect allows for capturing the unobserved heterogeneity or individual-specific effects that cannot be accounted for by the observed covariates. It introduces additional variability into the model and accounts for the dependence within clusters. In longitudinal analysis, we need, for each vehicle \(i\), to model the random vector of claim counts \(\mathbf{Y}_{i,(1:T_{i})}=(Y_{i1},\ldots,Y_{i,T_{i}})\). The joint PMF can be expressed with
\[\mathbb{P}\left(\mathbf{Y}_{i,(1:T_{i})}=\mathbf{y}_{i,(1:T_{i})}|\mathbf{x}_{i,(1:T_{i})} \right)=\int_{-\infty}^{\infty}\left(\prod_{t=1}^{T_{i}}\mathbb{P}(Y_{it}=y_{it }|\mathbf{x}_{i,(1:T_{i})},\theta_{i})\right)f(\theta_{i})d\theta_{i}, \tag{15}\]
where \(f(\theta_{i})\) is the PDF of the random effect.
#### 3.2.1 Multivariate negative binomial regression
Figure 3: CANN architecture for the negative binomial specification. The MLP’s preactivation output value \(a_{1}^{(4)}\) is added to the log-linear model’s preactivation output value \(\langle\mathbf{x},\mathbf{\beta}\rangle\) before being transformed with the softplus function \(\zeta(\cdot)\) to obtain the \(\mu\) value of the negative binomial distribution. The \(\phi\) value is obtained by transforming a real-valued parameter \(w_{\phi}\) through the softplus function. The resulting parameters \(\mu\) and \(\phi\) are then compared to the ground truth \(y\) using negative binomial cross-entropy loss. The architecture shown employs a 3-hidden-layer MLP, but can be customized with any number of layers.

A multivariate negative binomial regression model is obtained by introducing a gamma-distributed random effect in the mean parameter of the Poisson distribution. Specifically, we assume that the conditional distribution of \(Y_{it}\), given \(\Theta_{i}=\theta_{i}\), follows a Poisson distribution with mean \(\mu(\mathbf{x}_{it})\theta_{i}\), where \(\Theta_{i}\) is a gamma-distributed random variable with mean \(1\) and variance \(1/\phi\). The density of \(\Theta_{i}\) is given by
\[f_{\Theta_{i}}(\theta_{i})=\frac{\phi^{\phi}}{\Gamma(\phi)}\theta_{i}^{\phi-1}e^ {-\phi\theta_{i}},\quad\theta_{i}>0. \tag{16}\]
By using Equation (15), one can derive the joint distribution for the vector of claim counts:
\[\mathbb{P}\left(\mathbf{Y}_{i,(1:T_{i})}=\mathbf{y}_{i,(1:T_{i})}|\mathbf{x}_{i,(1:T_{i})}\right)=\prod_{t=1}^{T_{i}}\left(\frac{\mu(\mathbf{x}_{it})^{y_{it}}}{y_{it}!}\right)\frac{\Gamma(y_{i\bullet}+\phi)}{\Gamma(\phi)}\left(\frac{\phi}{\mu_{i\bullet}+\phi}\right)^{\phi}\left(\frac{1}{\mu_{i\bullet}+\phi}\right)^{y_{i\bullet}}, \tag{17}\]
where \(\mu_{i\bullet}=\sum_{t=1}^{T_{i}}\mu_{it}\) and \(y_{i\bullet}=\sum_{t=1}^{T_{i}}y_{it}\). This joint distribution is commonly referred to as the multivariate negative binomial (MVNB) or negative multinomial distribution. Note that the Poisson distribution is retrieved when \(\frac{1}{\phi}\to 0\). Furthermore, given the past claim history denoted as \(\mathbf{y}_{i,(1:t-1)}=(y_{i1},\ldots,y_{i,t-1})\) as well as current and past covariate vectors denoted as \(\mathbf{x}_{i,(1:t)}=(\mathbf{x}_{i1},\ldots,\mathbf{x}_{it})\), one can show that the number of claims at time (or contract) \(t\) follows a negative binomial distribution. The probability of observing \(y_{it}\) claims at time \(t\), given the past claim history as well as past and current covariate vectors, is thus expressed with
\[\mathbb{P}(Y_{it}=y_{it}|\mathbf{y}_{i,(1:t-1)},\mathbf{x}_{i,1:t})=\frac{\Gamma(y_{it} +\alpha_{it})}{y_{it}!\Gamma(\alpha_{it})}\left(\frac{\gamma_{it}}{\gamma_{it} +\mu(\mathbf{x}_{it})}\right)^{\alpha_{it}}\left(\frac{\mu(\mathbf{x}_{it})}{\mu(\mathbf{ x}_{it})+\gamma_{it}}\right)^{y_{it}},\quad t=1,2,\ldots,T_{i}, \tag{18}\]
where \(\alpha_{it}=\phi+\Sigma_{it}^{(y)}\) and \(\gamma_{it}=\phi+\Sigma_{it}^{(\mu)}\). \(\Sigma_{it}^{(y)}=\sum_{t^{\prime}=1}^{t-1}y_{it^{\prime}}\) and \(\Sigma_{it}^{(\mu)}=\sum_{t^{\prime}=1}^{t-1}\mu(\mathbf{x}_{it^{\prime}})\) represent the number of past claims and the sum of past \(\mu\) values for contract \((i,t)\), respectively. In the special case when \(t=1\), there is no past history and we set \(\Sigma_{it}^{(y)}=\Sigma_{it}^{(\mu)}=0\), which yields \(\alpha_{i1}=\gamma_{i1}=\phi\). The expected claim count, given the past history, is given by:
\[\mathbb{E}\Big{[}Y_{it}|\mathbf{y}_{i,(1:t-1)},\mathbf{x}_{i,1:t}\Big{]} =\mu(\mathbf{x}_{it})\left(\frac{\phi+\Sigma_{it}^{(y)}}{\phi+\Sigma_ {it}^{(\mu)}}\right) \tag{19}\] \[=\mu(\mathbf{x}_{it})\left(\frac{\alpha_{it}}{\gamma_{it}}\right). \tag{20}\]
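Equation (20) translates directly into a small helper; the following R sketch computes the a posteriori expected claim count from the a priori mean, the vehicle's past claims, its past \(\mu\) values, and \(\phi\) (the numbers in the usage line are illustrative only).

```r
# Direct transcription of Equation (20): the a priori mean mu_t is
# corrected by the ratio of (phi + past claims) to (phi + past mu values)
mvnb_predictive_mean <- function(mu_t, past_y, past_mu, phi) {
  alpha <- phi + sum(past_y)
  gamma <- phi + sum(past_mu)
  mu_t * alpha / gamma
}

# Illustrative values: 2 claims over 3 past contracts, each with mu = 0.06
mvnb_predictive_mean(0.06, past_y = c(1, 0, 1), past_mu = rep(0.06, 3), phi = 3.9)
```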
Fitting an MVNB model, therefore, amounts to fitting a negative binomial model, where the parameters \(\alpha_{it}\) and \(\gamma_{it}\) depend on the vehicle's history. Once the specification for the regression function \(\mu(\cdot)\) is chosen, the parameter \(\phi\) and the parameters in the regression function \(\mu(\cdot)\) can be estimated by minimizing the empirical risk over the training set. This can be achieved through the following optimization problem:
\[\{\widehat{\mu},\widehat{\phi}\}=\operatorname*{argmin}_{\mu\in\mathcal{H},\phi>0}\left\{-\frac{1}{|\mathcal{T}_{r}|}\sum_{(i,t)\in\mathcal{T}_{r}}\left(\ln\left[\frac{\Gamma(y_{it}+\alpha_{it})}{y_{it}!\Gamma(\alpha_{it})}\right]+\alpha_{it}\ln\left[\frac{\gamma_{it}}{\gamma_{it}+\mu(\mathbf{x}_{it})}\right]+y_{it}\ln\left[\frac{\mu(\mathbf{x}_{it})}{\mu(\mathbf{x}_{it})+\gamma_{it}}\right]\right)\right\}. \tag{21}\]
Log-linear multivariate negative binomial regression. If the specification for \(\mu(\cdot)\) is the log-linear form, defined in Equation (5), the criterion in (21) is convex, and convex optimization can be used to estimate \(\mathbf{\beta}\) and \(\phi\).
CANN multivariate negative binomial regression. In the MVNB case, the CANN architecture, as defined in Equation (11), can also be used as a specification for the regression function \(\mu(\cdot)\). To incorporate the additional distribution parameters \(\alpha_{it}\) and \(\gamma_{it}\), two additional output neurons are introduced in the network, as depicted in Figure 4. The distribution parameters \(\alpha_{it}\) and \(\gamma_{it}\) stem from a common parameter \(\phi>0\), and for the \(t^{\text{th}}\) contract of vehicle \(i\), we have \(\alpha_{it}=\phi+\Sigma_{it}^{(y)}\) and \(\gamma_{it}=\phi+\Sigma_{it}^{(\mu)}\). The neuron representing \(\phi\) is connected to a network weight \(w_{\phi}\in\mathbb{R}\) through the softplus function, i.e., \(\phi=\zeta(w_{\phi})\). An MVNB CANN model can be trained with backpropagation and gradient descent, as outlined in Algorithm 3. Notice that for a vehicle \(i\), the parameter \(\gamma_{it}\) depends on the \(\mu\) parameter values for its past contracts. As the training procedure of the CANN model is iterative, the estimated \(\mu\) values change at each iteration. Hence, it is crucial to update \(\Sigma_{it}^{(\mu)}\) for each contract \((i,t)\) at every iteration. This updating procedure is carried out in step 2 of Algorithm 3.
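As a sketch of these two MVNB-specific ingredients, the loss of Equation (21) (again up to the \(\ln(y_{it}!)\) constant) and the per-epoch refresh of \(\Sigma_{it}^{(y)}\) and \(\Sigma_{it}^{(\mu)}\) could look as follows in R; contract_start_date and mu_hat are assumed column names, not fields of the original datasets.

```r
# MVNB cross-entropy of Equation (21); alpha and gamma carry each
# contract's history and are tensors aligned with y and mu
mvnb_nll <- function(y, mu, alpha, gamma) {
  ll <- torch_lgamma(y + alpha) - torch_lgamma(alpha) +
    alpha * torch_log(gamma / (gamma + mu)) +
    y * torch_log(mu / (mu + gamma))
  -torch_mean(ll)
}

# Step 2 of Algorithm 3: refresh the sums of past claims and past mu
# values within each vehicle, using the mu estimates of the current epoch
library(dplyr)
refresh_history <- function(contracts) {
  contracts |>
    group_by(vin) |>
    arrange(contract_start_date, .by_group = TRUE) |>
    mutate(
      sum_past_y  = cumsum(nb_claims) - nb_claims,  # excludes current contract
      sum_past_mu = cumsum(mu_hat) - mu_hat
    ) |>
    ungroup()
}
```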
## 4 Practical Application with Telematics Data
In this section, we explain how our CANN regression models are applied to our dataset. Additionally, we describe the application of the log-linear models, which serve as benchmark models in our analysis.
### Log-linear models
The Poisson, negative binomial, and MVNB log-linear models are benchmarks for the Poisson, negative binomial, and MVNB CANN models. These models incorporate all 11 traditional risk factors from Table 1, including the real distance driven (although not strictly classified as a traditional risk factor). For each contract \((i,t)\), these traditional risk factors are denoted by the vector \(\mathbf{x}_{it}^{(\text{trad})}\). Notice that among the 11 traditional risk factors, 4 are categorical: gender, marital\_status, pmt\_plan, and veh\_use. For these risk factors, the approach involves initially grouping all rare categories, defined as those representing 5% or less of the total number of observations, and labeling them as "others." We then encode them numerically using dummy encoding.
All the resulting traditional covariates are then centered and scaled. Moreover, commute_distance contains missing values, which we fill in using median imputation.
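A minimal R sketch of this preprocessing, assuming the raw covariates sit in a data frame `df`, could look as follows; the 5% threshold, median imputation, and centering/scaling match the description above, while the function name is ours.

```r
prep_traditional <- function(df, cat_vars) {
  # Group rare categories (<= 5% of observations) into "others"
  for (v in cat_vars) {
    freq <- prop.table(table(df[[v]]))
    rare <- names(freq)[freq <= 0.05]
    df[[v]] <- factor(ifelse(df[[v]] %in% rare, "others", as.character(df[[v]])))
  }
  # Median imputation of the missing commute distances
  med <- median(df$commute_distance, na.rm = TRUE)
  df$commute_distance[is.na(df$commute_distance)] <- med
  # Dummy encoding (drop the intercept column), then center and scale
  X <- model.matrix(~ ., data = df)[, -1]
  scale(X)
}

X_trad <- prep_traditional(
  train_df[, !(names(train_df) %in% c("vin", "nb_claims"))],
  cat_vars = c("gender", "marital_status", "pmt_plan", "veh_use")
)
```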
Unlike neural networks, log-linear models do not have the ability to learn features directly from raw data. As a result, we must manually engineer features from the telematics data used by these models. These 13 telematics features, described in Table 4, were specifically engineered from the telematics dataset as risk factors potentially correlated with the claiming risk. For each contract \((i,t)\), these numerical handcrafted telematics features are denoted by the vector \(\mathbf{x}_{it}^{\text{(hand)}}\). Note that these handcrafted telematics features are also centered and scaled prior to being input into the log-linear models. The regression function for the \(\mu\) parameter can thus be written as
\[\mu\left(\mathbf{x}^{(\text{trad, hand})};\mathbf{\beta}\right)=\exp\left(\langle\mathbf{x}^{(\text{trad, hand})},\mathbf{\beta}\rangle\right), \tag{22}\]
where \(\mathbf{x}^{(\text{trad, hand})}\) is the concatenation of \(\mathbf{x}^{(\text{trad})}\) and \(\mathbf{x}^{(\text{hand})}\). The parameters estimated on the training set
are shown in Table 5. Notably, when using telematics information, the estimated \(\phi\) parameter in the MVNB log-linear model is higher. A higher \(\phi\) value brings the correcting factor in Equation 20 closer to one, indicating reduced importance on past experience when telematics features are used. This underscores the relevance of the engineered telematics features.
### CANN models
For the CANN regression models, we extract low-level descriptor vectors specifically designed to describe, as accurately as our dataset allows, the driving patterns within a particular contract. We expect the MLP component within the CANN models to learn meaningful high-level features from these low-level vectors. The hope is that the learned features in the hidden layers will be more relevant than the handcrafted features of Table 4. Each contract \((i,t)\) is described by the following descriptor vectors, which provide a summary of its telematics information:
\[\mathbf{x}_{it}^{(h)} =\left(x_{it,1}^{(h)},\ldots,x_{it,24}^{(h)}\right)\in\mathbb{R}^ {24},\] \[\mathbf{x}_{it}^{(d)} =\left(x_{it,1}^{(d)},\ldots,x_{it,7}^{(d)}\right)\in\mathbb{R}^ {7},\] \[\mathbf{x}_{it}^{(a)} =\left(x_{it,1}^{(a)},\ldots,x_{it,14}^{(a)}\right)\in\mathbb{R}^ {14},\] \[\mathbf{x}_{it}^{(m)} =\left(x_{it,1}^{(m)},\ldots,x_{it,16}^{(m)}\right)\in\mathbb{R}^ {16},\] \[\mathbf{x}_{it}^{(k)} =\left(x_{it,1}^{(k)},\ldots,x_{it,10}^{(k)}\right)\in\mathbb{R}^ {10}.\]
* The elements in vector \(\mathbf{x}_{it}^{(h)}\) represent the fraction of driving during each of the 24 hours of the day. Therefore, \(x_{it,j}^{(h)}\) is the fraction of driving during the \(j^{\text{th}}\) hour of the day for contract \((i,t)\).
* The elements in vector \(\mathbf{x}_{it}^{(d)}\) represent the fraction of driving during each of the 7 days of the week. Therefore, \(x_{it,j}^{(d)}\) is the fraction of driving during the \(j^{\text{th}}\) day of the week for contract \((i,t)\). Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, and Sunday are denoted by \(j=1,2,3,4,5,6,7\), respectively.
\begin{table}
\begin{tabular}{l l} \hline \hline
**Feature name** & **Description** \\ \hline avg\_daily\_nb\_trips & Average daily number of trips \\ frac\_expo\_evening & Fraction of evening driving \\ frac\_expo\_fri\_sat & Fraction of driving on Friday and Saturday \\ frac\_expo\_mon\_to\_thu & Fraction of driving on Monday to Thursday \\ frac\_expo\_night & Fraction of night driving \\ frac\_expo\_noon & Fraction of midday driving \\ frac\_expo\_peak\_evening & Fraction of evening rush hour driving \\ frac\_expo\_peak\_morning & Fraction of morning rush hour driving \\ max\_trip\_max\_speed & Maximum of the maximum speeds of the trips \\ med\_trip\_avg\_speed & Median of the average speeds of the trips \\ med\_trip\_distance & Median of the distances of the trips \\ med\_trip\_max\_speed & Median of the maximum speeds of the trips \\ prop\_long\_trip & Proportion of long trips (\(>100\) km) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Handcrafted telematics features extracted from the telematics dataset.
* The elements in vector \(\mathbf{x}_{it}^{(a)}\) represent the fraction of trips made in different average speed slots. For instance, \(x_{it,j}^{(a)}\) denotes the fraction of trips made at an average speed between \(10(j-1)\) and \(10j\) kilometers per hour.
* The elements in vector \(\mathbf{x}_{it}^{(m)}\) represent the fraction of trips made in different maximum speed slots. For instance, \(x_{it,j}^{(m)}\) denotes the fraction of trips made where the maximum speed reached falls between \(10(j-1)\) and \(10j\) kilometers per hour.
* The elements in vector \(\mathbf{x}_{it}^{(k)}\) represent the fraction of trips made in different distance slots. For instance, \(x_{it,j}^{(k)}\) denotes the fraction of trips between \(5(j-1)\) and \(5j\) kilometers.
These descriptor vectors capture specific aspects of the driving patterns, such as the hourly, weekly, average speed, maximum speed, and trip distance distributions, providing valuable information for the MLPs. Since MLPs can only accept vectors as input, we concatenate these five vectors into a global telematics vector:
\[\mathbf{x}_{it}^{(tele)}=\left(\mathbf{x}_{it}^{(h)},\mathbf{x}_{it}^{(d)},\mathbf{x}_{it}^{(a) },\mathbf{x}_{it}^{(m)},\mathbf{x}_{it}^{(k)}\right).\]
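As an illustration of how one slice of this vector can be computed from the trip summaries of Table 2, the following dplyr sketch builds \(\mathbf{x}_{it}^{(m)}\); `trips`, max_speed, and contract_id are assumed names, with contract_id standing for the VIN-and-date link described in Section 2.2.

```r
library(dplyr)
x_m <- trips |>
  mutate(slot = pmin(floor(max_speed / 10) + 1, 16L)) |>  # 10 km/h slots, fastest trips pooled in slot 16
  count(vin, contract_id, slot) |>
  group_by(vin, contract_id) |>
  mutate(frac = n / sum(n)) |>   # fraction of the contract's trips per slot
  ungroup()
```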
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**No telematics**} & \multicolumn{3}{c}{**With telematics**} \\
**Parameters** & Poisson & Negative binomial & MVNB & Poisson & Negative binomial & MVNB \\ \hline Intercept & \(-2.8310\) & \(-2.8311\) & \(-2.8281\) & \(-2.8454\) & \(-2.8456\) & \(-2.8430\) \\ \hline annual\_distance & \(0.0273\) & \(0.0280\) & \(0.0288\) & \(0.0353\) & \(0.0359\) & \(0.0367\) \\ commute\_distance & \(0.0055\) & \(0.0055\) & \(0.0056\) & \(0.0159\) & \(0.0159\) & \(0.0157\) \\ conv\_count\_3\_yrs\_minor & \(0.0470\) & \(0.0474\) & \(0.0469\) & \(0.0381\) & \(0.0384\) & \(0.0380\) \\ distance & \(0.1697\) & \(0.1706\) & \(0.1681\) & \(0.1244\) & \(0.1252\) & \(0.1232\) \\ expo & \(0.1945\) & \(0.1943\) & \(0.1957\) & \(0.1812\) & \(0.1813\) & \(0.1834\) \\ gender\_Male & \(-0.0234\) & \(-0.0238\) & \(-0.0236\) & \(-0.0409\) & \(-0.0415\) & \(-0.0415\) \\ marital\_status\_Single & \(0.0241\) & \(0.0243\) & \(0.0243\) & \(0.0194\) & \(0.0192\) \\ marital\_status\_other & \(0.0342\) & \(0.0341\) & \(0.0341\) & \(0.0299\) & \(0.0297\) & \(0.0298\) \\ pmt\_plan\_EFT.Monthly & \(0.0963\) & \(0.0965\) & \(0.0969\) & \(0.0828\) & \(0.0830\) & \(0.0833\) \\ pmt\_plan\_Monthly & \(0.0856\) & \(0.0854\) & \(0.0850\) & \(0.0773\) & \(0.0771\) & \(0.0768\) \\ pmt\_plan\_other & \(0.0134\) & \(0.0135\) & \(0.0131\) & \(0.0111\) & \(0.0111\) & \(0.0107\) \\ veh\_age & \(-0.1552\) & \(-0.1543\) & \(-0.1540\) & \(-0.1433\) & \(-0.1425\) & \(-0.1422\) \\ veh\_use\_other & \(-0.0085\) & \(-0.0083\) & \(-0.0084\) & \(-0.0100\) & \(-0.0098\) & \(-0.0100\) \\ veh\_use\_pleasure & \(-0.0025\) & \(-0.0023\) & \(-0.0027\) & \(-0.0014\) & \(-0.0013\) & \(-0.0018\) \\ years\_licensed & \(-0.1061\) & \(-0.1064\) & \(-0.1076\) & \(-0.0538\) & \(-0.0539\) & \(-0.0547\) \\ \hline avg\_daily\_nb\_trips & \(-\) & \(-\) & \(0.0428\) & \(0.0424\) & \(0.0411\) \\ frac\_expo\_evening & \(-\) & \(-\) & \(-\) & \(0.0734\) & \(0.0738\) & \(0.0741\) \\ frac\_expo\_fri\_sat & \(-\) & \(-\) & \(-\) & \(0.0290\) & \(0.0288\) & \(0.0294\) \\ frac\_expo\_mon\_to\_thu & \(-\) & \(-\) & \(-\) & \(0.0854\) & \(0.0852\) & \(0.0857\) \\ frac\_expo\_night & \(-\) & \(-\) & \(-\) & \(0.0192\) & \(0.0193\) & \(0.0198\) \\ frac\_expo\_noon & \(-\) & \(-\) & \(-\) & \(0.0103\) & \(0.0100\) & \(0.0092\) \\ frac\_expo\_peak\_evening & \(-\) & \(-\) & \(-\) & \(0.0049\) & \(0.0047\) & \(0.0046\) \\ frac\_expo\_peak\_morning & \(-\) & \(-\) & \(-\) & \(0.0072\) & \(0.0073\) & \(0.0071\) \\ max\_trip\_max\_speed & \(-\) & \(-\) & \(-\) & \(0.1084\) & \(0.1087\) & \(0.1079\) \\ med\_trip\_avg\_speed & \(-\) & \(-\) & \(-\) & \(-0.1465\) & \(-0.1470\) & \(-0.1472\) \\ med\_trip\_distance & \(-\) & \(-\) & \(-\) & \(0.0082\) & \(0.0088\) & \(0.0081\) \\ med\_trip\_max\_speed & \(-\) & \(-\) & \(-\) & \(0.0725\) & \(0.0723\) & \(0.0736\) \\ prop\_long\_trip & \(-\) & \(-\) & \(-\) & \(0.0310\) & \(0.0314\) & \(0.0322\) \\ \hline \(\phi\) & \(-\) & \(2.8397\) & \(3.4868\) & \(-\) & \(3.1193\) & \(3.9119\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Estimated parameters of the log-linear models on the training set.
We incorporate this telematics vector into the MLP component of the CANN models, together with the traditional risk factors \(\mathbf{x}^{(\text{trad})}\), enabling interactions between telematics and traditional inputs. In contrast, the log-linear part of the CANN models only includes the traditional risk factors, given its limited ability to process low-level information. The regression function for the \(\mu\) parameter can thus be written as
\[\mu^{\text{CANN}}\left(\mathbf{x}^{(\text{trad, tele})};\mathbf{\beta},\mathbf{\theta}\right)=\zeta\left\{\langle\mathbf{x}^{(\text{trad})},\mathbf{\beta}\rangle+\mathbf{a}^{(L-1)}(\mathbf{x}^{(\text{trad, tele})};\mathbf{\theta})\right\}, \tag{23}\]
where \(\mathbf{x}^{(\text{trad, tele})}\) is the concatenation of \(\mathbf{x}^{(\text{trad})}\) and \(\mathbf{x}^{(\text{tele})}\).
The CANN models are trained using the torch library in the R programming language, using mini-batch gradient descent with 256 observations per batch. The optimizer we use to perform gradient descent is the Adam optimizer, which is a fairly popular choice for training neural networks. Additionally, we use the reduce-on-plateau learning rate scheduler, which dynamically adjusts the learning rate based on the model's performance, automatically reducing it when the improvement plateaus, allowing for better optimization and convergence during training. For the MLP component of our CANN models, we opt for 3 hidden layers (\(L=5\)) with 128, 64, and 32 hidden units, respectively (\(n_{1}=128,n_{2}=64,n_{3}=32\)). We choose the rectified linear unit (ReLU) as the activation function \(\phi(\cdot)\) used in the hidden layers. Additionally, we add batch normalization and dropout layers in-between fully connected layers. Batch normalization applies a normalization transformation to the input of a layer by subtracting the mini-batch mean and dividing by the mini-batch standard deviation. By maintaining a stable mean and variance throughout the network, it can mitigate the vanishing or exploding gradients problem, enabling more effective and efficient training. The dropout layers, on the other hand, serve as a regularization technique that helps prevent overfitting. During training, dropout randomly sets a fraction of the hidden units of a given hidden layer to zero at each iteration, which forces the network to learn redundant representations and reduces the reliance on specific features. This regularization technique improves the model's ability to generalize well to unseen data.
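A sketch of this architecture and training setup in R torch could look as follows; p = 0.4 and lr = 1e-5 anticipate the values retained in Section 4.3, and the scheduler call assumes a version of the torch package that provides lr_reduce_on_plateau(). `n_inputs` and `model` are assumed to be defined as in the earlier sketch.

```r
# MLP component with the batch normalization and dropout layers described above
mlp <- nn_sequential(
  nn_linear(n_inputs, 128), nn_batch_norm1d(128), nn_relu(), nn_dropout(p = 0.4),
  nn_linear(128, 64),  nn_batch_norm1d(64),  nn_relu(), nn_dropout(p = 0.4),
  nn_linear(64, 32),   nn_batch_norm1d(32),  nn_relu(), nn_dropout(p = 0.4),
  nn_linear(32, 1)
)

# Adam optimizer and reduce-on-plateau learning rate scheduler
optimizer <- optim_adam(model$parameters, lr = 1e-5)
scheduler <- lr_reduce_on_plateau(optimizer, factor = 0.3, patience = 2)
# After each epoch:  scheduler$step(val_loss)
```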
### CANN hyperparameter tuning
To maximize the performance of our CANN models, we use grid search for hyperparameter tuning, with the average loss observed on the validation dataset \(\mathcal{V}_{a}\) as our optimization criterion. Additionally, we incorporate a regularization technique known as "early stopping" to determine the best number of epochs. This approach allows us to prevent overfitting and select the optimal number of epochs based on the lowest average loss achieved during training. We focus on three key hyperparameters: p, which represents the probability of dropout in the dropout layers, lr_start, denoting the initial learning rate used in the reduce-on-plateau learning rate scheduler, and factor, indicating the factor by which the learning rate is multiplied upon reaching a plateau. A plateau is the point where there is no observed improvement in the validation loss for two consecutive epochs. We compute the average validation loss for all 45 combinations derived from the following hyperparameter values (a sketch of this search follows the list):
* lr_start: \(0.00001,0.00005,0.0001,0.0005,0.001\)
* factor: \(0.3,0.4,0.5\)
* p: \(0.2,0.3,0.4\).
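A sketch of this grid search, with train_cann() as a hypothetical wrapper that trains one CANN configuration for up to 30 epochs with early stopping and returns its best validation loss and epoch:

```r
grid <- expand.grid(
  lr_start = c(0.00001, 0.00005, 0.0001, 0.0005, 0.001),
  factor   = c(0.3, 0.4, 0.5),
  p        = c(0.2, 0.3, 0.4)
)

results <- do.call(rbind, lapply(seq_len(nrow(grid)), function(i) {
  fit <- train_cann(lr_start = grid$lr_start[i], factor = grid$factor[i],
                    p = grid$p[i], max_epochs = 30)   # hypothetical wrapper
  cbind(grid[i, ], val_loss = fit$best_val_loss, best_epoch = fit$best_epoch)
}))
results[order(results$val_loss), ][1, ]   # retained combination
```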
Remember that the network parameters in the classical components of the CANN models are initialized with the maximum likelihood estimators of the corresponding log-linear model, which is why we use relatively small learning rates. At the initialization stage, the network already produces reasonable predictions, reducing the need for large gradient descent steps. The validation loss for each of the 45 combinations and the three specifications is presented in Table 6. It is worth noting that each model is trained for 30 epochs, and as early stopping is employed, the displayed average validation loss is based on the optimal number of epochs, which
can be less than 30. As can be seen, for all learning rates higher than 0.00001, the minimum average validation loss is achieved after a very small number of epochs, indicating that the network learns too quickly. Although the negative binomial and MVNB models perform best at a learning rate of 0.0001, we believe that with more epochs, we could achieve a lower average loss with a learning rate of 0.00001. This is particularly true since the average losses are quite similar for lr_start = 0.00001 and lr_start = 0.0001. When examining the first 9 rows of Table 6, it becomes apparent that the factor hyperparameter has a negligible effect on the validation loss. On the other hand, the p hyperparameter only seems to have an impact on the validation loss for the Poisson model, performing best when p = 0.4. Although the dropout rate does not significantly affect the performance for both the negative binomial and MVNB models, we also choose p = 0.4 for these two models since the best performance is achieved at a high number of epochs (29 and 30 epochs, respectively). This suggests that with more epochs, there is potential for further performance improvement. Therefore, we select lr_start = 0.00001, factor = 0.3, and p = 0.4 as the hyperparameters for all three specifications. We train the models again on the training set, this time for 100 epochs. The performance of the three models on the validation set is displayed in Table 7. As can be seen, all 3 specifications require 35 epochs to minimize the average validation loss.
## 5 Analyses
### Performance assessment on the testing set
After carefully tuning the hyperparameters of our CANN models, we have at hand promising claim count models that are now nearing implementation. The next crucial step is to estimate their generalization capabilities accurately. To achieve this, we cannot rely on the validation set, as it has been extensively used during the hyperparameter tuning process. Instead, we assess the models' generalization performance using the testing set \(\mathcal{T}_{e}\), which has remained untouched until now. Using this independent dataset, we can estimate the models' predictive performance on unseen data points and determine their suitability for real-world applications. Furthermore, we perform a comparative analysis between the CANN and the benchmark models, namely the log-linear models that use telematics information in the form of handcrafted telematics features. This comparative assessment allows us to evaluate our CANN models' relative performance and effectiveness against established approaches. In order to fully capture the value of telematics data, we also evaluate the performance of all 6 models (Poisson, negative binomial and MVNB log-linear and CANN models) using only the 11 traditional risk factors as covariates. In the CANN models, the MLP component therefore only comprises the 11 traditional risk factors. This analysis helps us understand the contribution of telematics information in improving the predictive power of the models.
All 12 models are trained on the learning set, and their performance is evaluated on the testing set. To assess the performance, we employ 3 different scoring rules, namely the Poisson deviance, the logarithmic score, and the squared error. For each scoring rule, we compute the average value on the testing set. To gauge the magnitude of the achieved performance, we begin by calculating the average scoring rule values for a baseline model. This baseline model is defined as a homogeneous Poisson log-linear model, where the mean (and variance) parameter \(\mu\) is estimated by the average number of claims per contract observed in the learning set:
\[\hat{\mu}=\frac{1}{|\mathcal{T}_{r}\cup\mathcal{V}_{a}|}\sum_{(i,t)\in\mathcal{T}_{r}\cup\mathcal{V}_{a}}y_{it}. \tag{24}\]
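As an illustration, the three average scores can be computed with a few lines of base R; the Poisson deviance is written with the usual convention that the \(y\ln(y/\mu)\) term vanishes at \(y=0\), and the logarithmic score shown assumes the Poisson PMF (the PMFs of Equations (12) and (18) would replace dpois for the other two specifications).

```r
# Average scoring rules for observed counts y and predicted means mu
avg_scores <- function(y, mu) {
  c(poisson_deviance = mean(2 * (ifelse(y > 0, y * log(y / mu), 0) - (y - mu))),
    log_score        = mean(-dpois(y, mu, log = TRUE)),
    squared_error    = mean((y - mu)^2))
}
```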
The average scoring rule values for this baseline model on the testing set are reported in Table 8. We can then evaluate the performance of each of the 6 models in terms of percentage improvement over the baseline model, as shown in Table 9. As can be seen, our CANN models consistently outperform their corresponding
log-linear benchmark models across all scoring rules. Moreover, our longitudinal MVNB CANN model offers a significant improvement over both the Poisson and negative binomial specifications, suggesting a substantial dependence among contracts within a vehicle.
### Permutation feature importance and partial dependence plots
One substantial drawback of neural networks is their difficulty of interpretation. However, researchers have developed tools to shed light on the inner workings of these black box algorithms. Two particularly useful tools in this context are permutation feature importance and partial dependence plots.
Permutation feature importance is a model-agnostic technique that computes an importance score for each input (or variable) in a supervised learning algorithm. It achieves this by randomly permuting the values of a specific input while holding the other inputs constant and observing the resulting effect on the model's performance. By comparing the original model's performance with the permuted performance, we can determine the variable's importance relative to the chosen performance metric. Suppose we have a trained model and a holdout sample for evaluation purposes. We can initially score the model on this sample and measure its performance using a chosen metric, such as the average loss. Let us denote the average loss obtained with the original holdout sample as \(\ell_{\text{original}}\). To assess the importance of input \(j\) in the prediction process, we randomly shuffle the values of input \(j\) in the holdout sample and rescore the model. This process yields a new average loss, denoted as \(\ell_{\text{permuted}}^{(j)}\), where the superscript \(j\) indicates that input \(j\) has been permuted. If input \(j\) is indeed important for the model's prediction, the permuted average loss \(\ell_{\text{permuted}}^{(j)}\) is expected to be greater than the original average loss \(\ell_{\text{original}}\). This suggests that permuting the values of input \(j\) has a detrimental effect on the model's performance. To obtain an importance score for input \(j\), we can compute the difference between the permuted average loss and the original average loss, resulting in the feature importance score \(\text{FI}_{j}\):
\[\text{FI}_{j}=\ell_{\text{permuted}}^{(j)}-\ell_{\text{original}}.\]
To obtain a more reliable estimate of the importance score, this procedure can be repeated a certain number of times for input \(j\), creating a distribution of the increase or decrease in the average loss. The whole procedure can then be repeated for all inputs. In Figure 5, the importance scores of the 20 most important variables for our best model, the MVNB CANN, are visualized using boxplots. Note that the names used for the telematics inputs in Figure 5 differ from the introduced notation; a translation table is provided in Table 10 of Appendix A to clarify the correspondence between the names used and the introduced notation. Each boxplot represents the distribution of the 100 importance scores assigned to a specific input, obtained by shuffling and assessing the model 100 times. The performance metric used is the average cross-entropy loss. The analysis reveals interesting findings regarding the claim count model. As can be seen, the top 5 most important variables are from our set of 11 traditional risk factors. Notably, veh_age, distance, and expo play a significant role in the model's performance. When it comes to telematics inputs, those related to maximum speed demonstrate a substantial impact on the model's performance. In particular, vma_16, representing the fraction of trips made at a maximum speed exceeding 150 kilometers per hour, stands out as the most important input. In general, the fraction of trips made at high maximum speeds, such as vma_14, vma_15, and vma_16, proves to be valuable for predicting claims. Additionally, it is interesting to observe that h_22 and h_2, which represent the fraction of driving during night hours, contribute substantially to the assessment of risk. Importantly, the gender variable, often used by insurers as a risk factor, is rendered useless in the presence of telematics inputs: it ranks as the 70th most important variable (not shown in Figure 5), indicating its insignificance in the model's predictive power.
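The shuffling procedure above is straightforward to implement. Below is a minimal NumPy sketch, assuming a fitted `model` exposing a `predict` method and an average-loss function `loss_fn` (both hypothetical placeholders; in our setting the loss would be the average cross-entropy):

```python
import numpy as np

def permutation_importance(model, X, y, loss_fn, j, n_repeats=100, seed=None):
    """Importance of input j: increase in average loss after shuffling column j."""
    rng = np.random.default_rng(seed)
    ell_original = loss_fn(y, model.predict(X))
    scores = np.empty(n_repeats)
    for r in range(n_repeats):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, j])  # permute input j, keep all other inputs fixed
        scores[r] = loss_fn(y, model.predict(X_perm)) - ell_original  # FI_j
    return scores  # distribution of the 100 scores shown as one boxplot in Figure 5
```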
Partial Dependence Plots (PDP) are valuable tools for understanding the relationship between a specific input variable and the output of a supervised learning model. PDPs are also model-agnostic, meaning they can be
applied to different types of models. They provide insights into how changes in a particular input variable influence the model's predictions while keeping all other variables at fixed values. In other words, they illustrate the marginal effect of an input variable on the predicted outcome. To compute a PDP for a specific input variable \(j\), the process involves the following steps. First, a grid of values is defined to cover the entire or plausible range of the variable's values. Next, while holding all other variables fixed, the value of input \(j\) in the holdout dataset is sequentially replaced with each value from the defined grid, and predictions are obtained with the trained model on the modified holdout dataset for each grid value. By plotting the input variable values on the x-axis and the corresponding average prediction on the y-axis, the resulting PDP visually showcases the relationship between the input variable and the model's predictions. Figure 6 displays the PDPs of the 8 most important telematics inputs in the MVNB CANN model. The plots reveal that the risk, expressed as the expected number of claims, appears to increase in a linear fashion with the proportion of trips made at high maximum speeds, indicated by the input variables vma_14, vma_15, and vma_16. Additionally, there appears to be a positive linear association between the expected number of claims and the proportion of driving taking place during nighttime hours, specifically between 9 p.m. and 10 p.m. (h_22) and between 1 a.m. and 2 a.m. (h_2). It is important to emphasize that caution must be exercised when interpreting partial dependence plots, as the procedure assumes that the input variables are independent of each other. In particular, the interpretation of the PDPs related to the fraction of driving on Tuesdays (p_2) is challenging due to the correlation between the proportions of driving on different days of the week. For instance, if an insured individual drives in smaller proportions on Tuesdays, they will systematically drive in larger proportions on other days of the week.
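The grid-sweep procedure behind a PDP can be sketched in a few lines; as before, `model` and its `predict` method are hypothetical placeholders:

```python
import numpy as np

def partial_dependence(model, X, j, grid):
    """Average prediction as input j sweeps over `grid`, all other inputs fixed."""
    pdp = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, j] = v                      # replace input j everywhere with v
        pdp.append(model.predict(X_mod).mean())
    return np.array(pdp)                     # y-values of the PDP curve
```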
## 6 Conclusions
In this study, we developed three novel claim count regression models leveraging telematics data in the form of trip summaries. Our models are based on the Combined Actuarial Neural Network architecture, specifically designed to address actuarial problems and harness rich and complex information such as data provided by telematics technology. One key aspect of our work is the adaptation of the CANN architecture to accommodate the MVNB distribution specification. This adaptation allows us to effectively capture the time dependence between insurance contracts, which is important for accurately modeling claim counts. Furthermore, our findings highlight the importance of telematics inputs related to the maximum speed reached during trips in the claim count models. With partial dependence plots, we found that claim frequency is positively correlated with the fraction of trips made at high maximum speeds. Overall, the new approaches developed in this article represent a significant advancement in accurately modeling claim counts and enhancing the performance of predictive models in the context of usage-based insurance. Remarkably, the CANN regression models consistently outperform traditional log-linear models using handcrafted telematics features, as demonstrated by the superior performance across three performance metrics. These results are further supported by the use of a proper machine learning methodology that effectively prevents data leakage and mitigates the risk of producing falsely optimistic results.
While the available telematics data has been instrumental in improving our claim count models, we believe that further improvement can be achieved with access to richer data. For instance, if second-by-second data or additional information such as harsh acceleration/braking and distracted driving were accessible, we believe the performance could be further improved. Depending on the data format, different types of neural networks, such as convolutional and recurrent neural networks, could be used as the network component in the CANN models. Additionally, we acknowledge that with more time and computational power, a more comprehensive fine-tuning process of the CANN models could yield even better results than what we achieved. Notably, we were constrained in adjusting the number of hidden layers and units in the MLP components of the CANN models due to time and computational limitations. Moreover, a more advanced tuning method, beyond the
grid search approach used in this study, could be employed to optimize model performance. In this study, we used the MVNB distribution as our longitudinal specification. However, alternative longitudinal specifications, such as the beta-binomial distribution, exist and could be easily implemented as they share similarities with the MVNB specification. Finally, it would be interesting to conduct further research investigating the impact of using a longitudinal model on telematics variables. It is expected that the importance of certain telematics variables would decrease when considering past claim history, as this historical data can provide insights into the claiming risk of an insured.
Figure 4: CANN architecture for the MVNB specification. The MLP’s preactivation output value \(a_{\cdot}^{(4)}\) is added to the log-linear model’s preactivation output value \(\langle\mathbf{x},\mathbf{\beta}\rangle\) before being transformed with the softplus function \(\zeta(\cdot)\) to obtain the \(\mu\) value of the negative binomial distribution of Equation (18). The \(\phi\) value is obtained by transforming a real-valued parameter \(w_{\phi}\) through the softplus function. To obtain \(\alpha\), the sum of past claims \(\Sigma^{(y)}\) is added to the \(\phi\) parameter, while for \(\gamma\), the sum of past \(\mu\) values \(\Sigma^{(\mu)}\) is added to the same \(\phi\) parameter. The resulting distribution parameters \(\mu\), \(\alpha\) and \(\gamma\) are then compared to the ground truth \(y\) using negative binomial cross-entropy loss. The architecture shown employs a 3-hidden-layer MLP, but can be customized with any number of layers.
\begin{table}
\begin{tabular}{c c c|c c c|c c c} \hline \hline \multicolumn{3}{l|}{**Hyperparameter values**} & \multicolumn{3}{c|}{**Average validation loss**} & \multicolumn{3}{c}{**Number of epochs**} \\ \hline l\_start & factor & p & Poisson & Negative binomial & MVNB & Poisson & Negative binomial & MVNB \\ \hline \hline \end{tabular}
\end{table}
Table 6: Coarse hyperparameter tuning for the CANN models. The training process is stopped after 30 epochs. The provided validation loss corresponds to the optimal number of epochs, consistent with the early stopping procedure.
\begin{table}
\begin{tabular}{l|c c} \hline \hline
**Specification** & **Average validation loss** & **Number of epochs** \\ \hline Poisson & 0.2352 & 35 \\ Negative binomial & 0.2351 & 35 \\ MVNB & 0.2349 & 35 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Optimal CANN models’ performance on the validation set.
\begin{table}
\end{table}
Table 8: Performance of the baseline model on the testing set.
Figure 5: Importance scores of the 20 most important variables obtained for the MVNB CANN model.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline
\multirow{2}{*}{**Scoring rule**} & \multicolumn{2}{c|}{**No telematics**} & \multicolumn{2}{c}{**With telematics**} \\ \cline{2-5}
 & Log-linear model & CANN model & Log-linear model & CANN model \\ \hline \hline \multicolumn{5}{c}{Poisson} \\ \hline Poisson deviance & 5.23 \% & 5.53 \% & 5.68 \% & 5.78 \% \\ Logarithmic score & 3.90 \% & 4.12 \% & 4.23 \% & 4.31 \% \\ Squared error & 2.10 \% & 2.26 \% & 2.30 \% & 2.38 \% \\ \hline \multicolumn{5}{c}{Negative binomial} \\ \hline Poisson deviance & 5.24 \% & 5.58 \% & 5.68 \% & 5.81 \% \\ Logarithmic score & 3.99 \% & 4.24 \% & 4.31 \% & 4.41 \% \\ Squared error & 2.10 \% & 2.27 \% & 2.30 \% & 2.37 \% \\ \hline \multicolumn{5}{c}{MVNB} \\ \hline Poisson deviance & 5.36 \% & 5.65 \% & 5.79 \% & 5.90 \% \\ Logarithmic score & 4.07 \% & 4.27 \% & 4.38 \% & 4.46 \% \\ Squared error & 2.13 \% & 2.29 \% & 2.34 \% & 2.41 \% \\ \hline \hline \end{tabular}
\end{table}
Table 9: Performance comparison of the CANN models and their corresponding log-linear benchmark model on the testing set.
Figure 6: Partial dependence plots showcasing the 8 most important telematics inputs in the MVNB CANN model. The histogram above each line plot shows the input's distribution.
## Acknowledgement
The authors gratefully acknowledge The Co-operators for their generous financial support and for providing the data used in this paper through the Co-operators Chair in Actuarial Risk Analysis. Additionally, the authors would like to extend their sincere appreciation to Marc Morin from the Research and Innovation team at The Co-operators for his invaluable assistance with the torch library.
## Funding
The authors thank The Co-operators, the Natural Sciences and Engineering Research Council of Canada and Les fonds de recherche du Quebec for funding.
|
2303.10542 | Wheat Head Counting by Estimating a Density Map with Convolutional
Neural Networks | Wheat is one of the most significant crop species with an annual worldwide
grain production of 700 million tonnes. Assessing the production of wheat
spikes can help us measure the grain production. Thus, detecting and
characterizing spikes from images of wheat fields is an essential component in
a wheat breeding process. In this study, we propose three wheat head counting
networks (WHCNet\_1, WHCNet\_2 and WHCNet\_3) to accurately estimate the wheat
head count from an individual image and construct high quality density map,
which illustrates the distribution of wheat heads in the image. The WHCNets are
composed of two major components: a convolutional neural network (CNN) as the
front-end for wheat head image feature extraction and a CNN with skip
connections for the back-end to generate high-quality density maps. The dataset
used in this study is the Global Wheat Head Detection (GWHD) dataset, which is
a large, diverse, and well-labelled dataset of wheat images and built by a
joint international collaborative effort. We compare our methods with CSRNet, a
deep learning method developed for highly congested scene understanding
and performing accurate count estimation as well as presenting high quality
density maps. By taking the advantage of the skip connections between CNN
layers, WHCNets integrate features from low CNN layers to high CNN layers,
thus, the output density maps have both high spatial resolution and detailed
representations of the input images. The experiments showed that our methods
outperformed CSRNet in terms of the evaluation metrics, mean absolute error
(MAE) and the root mean squared error (RMSE) with smaller model sizes. The code
has been deposited on GitHub (\url{https://github.com/hyguozz}). | Hongyu Guo | 2023-03-19T02:45:53Z | http://arxiv.org/abs/2303.10542v1 | # Wheat Head Counting by Estimating a Density Map with Convolutional Neural Networks
###### Abstract
Wheat is one of the most significant crop species, with an annual worldwide grain production of 700 million tonnes. Assessing the production of wheat spikes can help us measure the grain production. Thus, detecting and characterizing spikes from images of wheat fields is an essential component of a wheat breeding process. In this study, we propose three wheat head counting networks (WHCNet_1, WHCNet_2 and WHCNet_3) to accurately estimate the wheat head count from an individual image and construct a high quality density map, which illustrates the distribution of wheat heads in the image. The WHCNets are composed of two major components: a convolutional neural network (CNN) as the front-end for wheat head image feature extraction and a CNN with skip connections for the back-end to generate high-quality density maps. The dataset used in this study is the Global Wheat Head Detection (GWHD) dataset, which is a large, diverse, and well-labelled dataset of wheat images, built by a joint international collaborative effort. We compare our methods with CSRNet, a deep learning method developed for understanding highly congested scenes, performing accurate count estimation and presenting high quality density maps. By taking advantage of the skip connections between CNN layers, WHCNets integrate features from low CNN layers into high CNN layers; thus, the output density maps have both high spatial resolution and detailed representations of the input images. The experiments showed that our methods outperformed CSRNet in terms of the evaluation metrics, mean absolute error (MAE) and root mean squared error (RMSE), with smaller model sizes. The code has been deposited on GitHub ([https://github.com/hyguozz](https://github.com/hyguozz)).
Wheat head counting · Object counting · Density map · Convolutional neural network · Deep learning
## 1 Introduction
Wheat is an important primary food for a large proportion of the world's population, so methods to estimate and enhance its yield have received significant research attention Bognar et al. (2017). Genomic selection and high-throughput phenotyping techniques are essential in selecting important wheat traits, such as yield potential, disease resistance, or adaptation to abiotic stress. Developing efficient and robust models for trait extraction from raw data is challenging Etienne et al. (2020). Wheat head density, the number of wheat heads per unit ground area, is a significant yield trait. However, wheat head counting and density estimation still rely mainly on manual evaluation, which is labour-intensive and inaccurate, resulting in around 10% measurement errors Madec et al. (2019). Therefore, automated wheat head detection and counting methods based on machine learning technology can help to estimate wheat yield and discover potential traits of wheat phenotyping Zhang et al. (2007). The computer vision based object counting task is the estimation of the number of objects present in images or videos. Given its wide range of potential real-world applications, such as public safety, traffic control, agriculture monitoring, and cell counting, object counting has been extensively explored by many researchers Jiang et al. (2020); Lempitsky and Zisserman (2010). Object detection methods can localize and identify wheat heads in images, so the head density of wheat populations can be estimated. Wheat head counting can also reveal additional wheat traits, including the spatial distribution between rows, the presence of awns, size, inclination, colour, grain filling stage, and health. Thus, wheat head counting could help farmers to manage their crops scientifically Etienne et al. (2020).
The task of counting the number of objects in an image can be classified into two categories: counting by detection and counting by regression Lempitsky and Zisserman (2010); Segui et al. (2015). Counting by detection uses an object detector to localize individual objects in the image; given the bounding boxes of all instances, or a single dot on each object instance in each image Lempitsky and Zisserman (2010), counting can be easily performed. However, object detection is very far from being solved. The extreme overlap of objects, the size of the instances, scene perspective, etc. can affect the performance of supervised object counting systems; thus, much research has turned to density map based object counting models Arteta et al. (2014); Zhang et al. (2015); Lempitsky and Zisserman (2010). Density map based object counting methods tackle the counting problem by learning a regression function that projects the image appearance onto an object density map, and then obtain the object count by integration. Moreover, a density map preserves more information and gives the spatial distribution of the objects in a given image Jiang et al. (2020).
However, for supervised counting methods, the quality of the image annotations is crucial to the accuracy of the counting task; besides, object detection requires time-consuming inference. To avoid these disadvantages, unsupervised object counting methods were proposed, which tackle counting problems by grouping self-similarities or motion similarities, thus avoiding complicated object detection processing Rabaud and Belongie (2006); Ahuja and Todorovic (2007).
Image-based plant phenotyping has received increasing interest in recent years. Organ counting is a common task in image-based plant phenotyping, covering leaves, heads, pods, fruits, etc. Ayalew et al. (2020) proposed a domain-adversarial learning approach for domain adaptation of density map estimation for the purposes of object counting, evaluated on wheat spikelet counting and leaf counting. Ubbens et al. (2020) used a fully unsupervised method for the plant organ counting task, namely a convolutional network-based unsupervised segmentation method followed by two post-hoc optimization steps. Neupane et al. (2019) proposed a deep learning based method to detect and count banana plants on a farm, exclusive of other plants, using high resolution RGB aerial images collected from an Unmanned Aerial Vehicle (UAV). Since available plant phenotyping datasets are often small and the costs associated with generating new data are high, Ubbens et al. (2018) proposed a method for augmenting plant phenotyping datasets using rendered images of synthetic plants, evaluated on a CNN based leaf counting task.
In this study, we take a supervised learning approach to solve the wheat head counting problem and therefore require a set of training wheat head images with annotations. The Global Wheat Head Detection (GWHD) dataset Etienne et al. (2020) is a large, diverse, and well-labelled dataset of wheat images, built by a joint international collaborative effort. The GWHD dataset is publicly available at [http://www.global-wheat.com/](http://www.global-wheat.com/). Based on the GWHD dataset, we propose three deep learning based wheat head counting networks (WHCNet_1, WHCNet_2 and WHCNet_3) to detect and count wheat spikes in wheat field images. The WHCNets are composed of two major components: a CNN based front end for wheat head image feature extraction and a CNN with skip connections for the back end to generate high-quality density maps and, consequently, accomplish the wheat head counting task.
## 2 Methods
In density map based wheat head counting, the input is a wheat head image and the output is a density map of the wheat heads, which shows how many wheat heads there are per unit area and how they are spatially distributed in that image; this is very useful in many applications, such as estimating grain yield potential. Consequently, the number of wheat heads in an image can be obtained by integrating its density map. In this section, we first introduce the dataset and data preprocessing, and then discuss how to generate the ground truth density maps from wheat head images. For comparison purposes, we introduce a baseline network, CSRNet, which has achieved state-of-the-art performance on dense crowd counting and vehicle counting tasks. We then present three wheat head counting networks, WHCNet_1, WHCNet_2 and WHCNet_3, which learn density maps from input wheat head images via fully convolutional networks. The loss function and evaluation metrics are described as well.
### Dataset and data preprocessing
The Global Wheat Head Detection (GWHD) dataset Etienne et al. (2020) was collected from several countries around the world, at different growth stages and with a wide range of genotypes, with the aim of developing and benchmarking methods for wheat head detection. Among phenotyping datasets for object detection, the GWHD dataset is currently the largest open labelled dataset freely available for field plant phenotyping.
In this study, we downloaded the GWHD dataset from [https://www.kaggle.com/c/global-wheat-detection](https://www.kaggle.com/c/global-wheat-detection), which contains 3422 high-resolution RGB images for training and 10 high-resolution RGB images for testing, with 147,793 wheat heads annotated with bounding boxes, averaging 40 heads per image. Figure 1 shows the distribution of the number of bounding boxes per image. As can be seen from the figure, most of the images have 20-60 wheat heads, and only a few images, specifically 4, contain more than 100 heads, with a maximum of 116 heads. Moreover, 49 images in the dataset contain no heads.
As deep learning frameworks require a large amount of training data, we crop four patches from the four corners of each image, each 1/4 the size of the original image. We then vertically flip the patches to further double the wheat head image dataset, thus increasing the size of our training set by a factor of 8. As mentioned above, the GWHD dataset (Kaggle version) has 3422 images in the training folder and 10 images in the testing folder. We only augmented the images of the training folder and split the resulting patches into a training set, a validation set and a testing set. For the 10 images in the testing folder, as no annotation information is given, we leave them out and use them to verify the performance of our models. Since the GWHD dataset comprises sub-datasets from different regions of the world, we shuffle the augmented image patches to ensure that images from different regions are distributed evenly across our training, validation and testing sets. As a result, we selected 12,000 patches for training, 1,600 patches for validation and 1,600 patches for testing.
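The 8x augmentation described above amounts to four corner crops plus a vertical flip of each. A minimal sketch, assuming images are NumPy arrays in (height, width, channels) layout:

```python
import numpy as np

def eightfold_augment(image):
    """Four corner patches, each 1/4 of the image area (e.g. 512x512 from a
    1024x1024 image), plus a vertically flipped copy of each: 8 patches."""
    h, w = image.shape[:2]
    corners = [image[:h // 2, :w // 2], image[:h // 2, w // 2:],
               image[h // 2:, :w // 2], image[h // 2:, w // 2:]]
    return corners + [np.flipud(p) for p in corners]  # vertical flips double the set
```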
### Ground truth density map generation
The wheat head counting solution requires a set of annotated wheat head images, where all the wheat heads are marked by dots. The ground truth density map \(D_{I}\), for a wheat head image \(I\), is defined as a sum of Gaussian functions centered on each dot annotation,
\[D_{I}(p)=\sum_{\mu\in A_{I}}N(p;\mu,\sigma^{2}) \tag{1}\]
where \(A_{I}\) is the set of 2D points annotated for the image \(I\), and \(N(p;\mu,\sigma^{2})\) represents the evaluation of a normalized 2D Gaussian function, with mean \(\mu\) and isotropic co-variance matrix \(\sigma^{2}\), evaluated at the pixel position defined by \(p\). With the density map \(D_{I}\), the total wheat head count \(N_{I}\) can be directly obtained by integrating the density map values
Figure 1: The distribution of the number of bounding boxes per image.
over the entire wheat head image, as follows,
\[N_{I}=\sum_{p\in I}D_{I}(p). \tag{2}\]
Since all the Gaussians are summed, the total wheat head count is preserved even when there is overlap between wheat heads. The purpose of our wheat head counting model is to learn a mapping from the input wheat head image to a wheat head density map.
However, in this definition of the density function, each object is treated as an independent sample in the image; thus, the perspective distortion, and the fact that pixels associated with different samples correspond to areas of different sizes in the scene, are neglected. The geometry-adaptive kernels Zhang et al. (2016) take the distortion caused by the homography between the ground plane and the image plane into account by assuming that, around each object, the objects are evenly distributed; then the average distance between this object and its nearest \(k\) neighbors (in the image) gives a reasonable estimate of the geometric distortion (caused by the perspective effect). Therefore, for the density maps of dense scenes, the spread parameter \(\sigma\) for each object can be determined based on its average distance to its neighbors, formalized by
\[\sigma_{i}=\beta\bar{d}_{i} \tag{3}\]
where \(\bar{d}_{i}\) represents the average distance to the \(k\) nearest neighbors of the \(i\)th object. Thus, the Gaussian kernel variance \(\sigma_{i}\) is proportional to \(\bar{d}_{i}\), with \(\beta\) a regulating parameter.
In this study, we adopt the geometry-adaptive kernels to generate the ground truth for the wheat head images, because most wheat heads are densely distributed in our images, similar to the dense crowd scenes in the study of Li et al. (2018). As the GWHD dataset provides bounding box annotations, the dot annotations are first obtained by calculating the centroid of each bounding box; then the ground truth density maps for all wheat head images are generated. Figure 2 shows bounding box labeled wheat head images and their corresponding density maps generated using the centroids of the bounding boxes. We set \(\beta=0.3\) and \(k=3\), following the configuration in Li et al. (2018), since a wheat head image is a dense image, similar to the crowd counting setting Li et al. (2018).
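Putting Equations (1)-(3) together, the ground truth generation can be sketched as follows. This is a simple, unoptimized SciPy sketch (it filters a full-size impulse per head); the fallback sigma for images with a single head is our assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.spatial import KDTree

def geometry_adaptive_density_map(points, shape, beta=0.3, k=3):
    """Ground-truth density map with geometry-adaptive kernels (Eqs. 1-3).

    `points` holds the (row, col) centroids of the annotated bounding boxes.
    Each head contributes a unit-mass Gaussian whose sigma is beta times the
    average distance to its k nearest neighbours (Eq. 3), so the map sums to
    the head count (Eq. 2) even when heads overlap."""
    dmap = np.zeros(shape, dtype=np.float32)
    n = len(points)
    if n == 0:
        return dmap
    if n > 1:
        # query k+1 neighbours: the nearest neighbour of a point is itself
        dists, _ = KDTree(points).query(points, k=min(k + 1, n))
    for i, (r, c) in enumerate(points):
        delta = np.zeros(shape, dtype=np.float32)
        delta[int(r), int(c)] = 1.0
        sigma = beta * dists[i, 1:].mean() if n > 1 else 15.0  # fallback (assumption)
        dmap += gaussian_filter(delta, sigma=sigma)
    return dmap
```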
### Baseline network
Li et al. (2018) proposed CSRNet for congested scene recognition, which can perform crowd count estimation and generate high quality density maps. CSRNet obtained state-of-the-art performance on four crowd counting datasets (the ShanghaiTech dataset, the UCF CC 50 dataset, the WorldEXPO'10 dataset, and the UCSD dataset) Zhang et al. (2016) compared with previous state-of-the-art methods, and achieved the best accuracy on the vehicle counting task as well. Since wheat heads in our images often overlap and occlude each other at high planting densities, wheat head counting is a dense counting problem as well. Therefore, we take CSRNet as the baseline of our study.
Figure 3 illustrates the architecture of CSRNet. As shown in the figure, CSRNet is composed of two parts: a CNN as the front-end and a dilated CNN as the back-end; it is a fully convolutional network. Specifically, the front-end of CSRNet is composed of the top 10 convolutional layers and 3 max-pooling layers of VGG-16 Simonyan and Zisserman (2014), while the back-end is a CNN with 6 consecutive convolutional layers. The architecture is concise; however, the high level features extracted from the deeper layers are more abstract, and some location information from the lower layers may be lost. Although abstract high level features can help to improve performance in classification tasks, where only one class prediction is mapped from the input image, they are not enough to reconstruct the location information of the input image when used to generate a density map. Therefore, the deeper CNN layers in CSRNet cannot preserve enough spatial information to generate accurate density maps from the input images.
### WHCNet architecture
As discussed in the last section, the back end of CSRNet may lose location information of the input image, since high level features are more abstract than low level features; thus, the quality of the output density map can be degraded. To address this issue, we introduce forward skip connections into the back end of our models, aiming to pass the location information of wheat heads from low layers to high layers, to avoid information degradation during the training process, and to integrate low layer features with high level features for the inference of density maps. We propose three architectures: WHCNet_1 (see Figure 4), WHCNet_2 (see Figure 5) and WHCNet_3 (see Figure 6).
There are two major components in our models: the front end CNN and the back end CNN. We use the same front end as CSRNet, which includes the top 10 convolutional layers and 3 max-pooling layers of the VGG-16 model, to handle low level feature extraction. Besides, the pre-trained VGG-16 model has previously been trained on a large dataset and contains weights and biases that represent the features of that dataset. Thus, by using the pre-trained VGG-16 model, we not only save training time but also obtain more accurate weights than random initialization.
As for the back end, we propose three architectures, described in the following sections. The last layer in WHCNet_1, WHCNet_2 and WHCNet_3 is a convolutional layer with kernel size \(1\times 1\), which outputs the density map of the input image. In addition, the kernel size for all convolutions, including the transpose convolution, is set to \(3\times 3\), which has shown excellent image recognition performance Simonyan and Zisserman (2014). Moreover, rectified linear unit (ReLU) activation layers are added after each convolutional layer, as this has been demonstrated to make models easier to train and achieve better performance. Besides, it is worth mentioning that the size of the input image can be arbitrary, since our network essentially performs pixel-wise prediction.
Figure 2: Wheat head images with dense and medium-dense wheat head distributions. The left side shows the bounding box labeled wheat head images. The right side shows the corresponding density maps generated by the geometry-adaptive kernel with \(\beta=0.3,k=3\).
Figure 4: The overall architecture of WHCNet_1. All convolutional layers use padding to maintain the previous size. The convolutional layers' parameters are denoted as '(number of layers)-conv-(kernel size)-(number of filters)-(dilation rate, if applicable)'; max-pooling layers are conducted over a \(2\times 2\) pixel window with stride 2. Thus, the configuration of WHCNet_1 is as follows: Front end ( 2-conv3-64, 1-max-pooling, 2-conv3-128, 1-max-pooling, 3-conv3-256, 1-max-pooling, 3-conv3-512), Back end ( 1-max-pooling, 1-conv3-1024, 1-conv3-512, 1-convtranspose3, 1-conv3-512, 1-conv3-256-2, 1-conv3-128, 1-conv3-64-2), Output layer ( 1-conv1-1)
Figure 3: The architecture of the baseline model CSRNet. All convolutional layers use padding to maintain the previous size. The convolutional layers’ parameters are denoted as ’(number of layers) -conv-(kernel size)-(number of filters)- (dilation rate, if applicable)’, max-pooling layers are conducted over a \(2\times 2\) pixel window with stride 2. Thus, the configuration of CSRNet is as follows: Front end ( 2-conv3-64, 1-max-pooling, 2-conv3-128, 1-max-pooling, 3-conv3-256, 1-max-pooling, 3-conv3-512), Back end ( 3-conv3-512-2, 1-conv3-256-2, 1-conv3-128-2, 1-conv3-64-2), Output layer ( 1-conv1-1)
#### 2.4.1 WHCNet_1
WHCNet_1 is inspired by U-Net Ronneberger et al. (2015), an architecture composed of a contracting path to capture context and a symmetric expanding path that enables precise localization; U-Net uses the upsampling part to propagate context information to deeper layers. In this study, to further extract features from the input image, we build a downsampling and upsampling part by adding one max-pooling layer with stride 2 at the beginning of the back end; after two consecutive convolutional layers, an upsampling layer is applied to restore the spatial resolution, and subsequently the output of the deeper layer is concatenated with the low level features, i.e. the output of the front end part. Thus, we can achieve a performance improvement while the network goes deeper. Specifically, we use a transpose convolution layer as the upsampling layer, since it is a learnable upsampling layer.
Consequently, consecutive convolutional layers with braided skip connections are used to construct the density map. These skip connections from earlier layers in the network provide the detail necessary to reconstruct accurate shapes in the density maps. In addition, two dilated convolutional layers with dilation rate 2 are included in the back end. Dilated convolution (also called atrous convolution) can arbitrarily enlarge the field-of-view of filters at deep CNN layers Yu and Koltun (2016). Dilated convolution with rate \(r\) introduces \(r-1\) zeros between consecutive filter values, thus enlarging the kernel size of a \(k\times k\) filter to \(k+(k-1)(r-1)\) without increasing the number of parameters. This property enlarges the receptive field without increasing the number of parameters or the amount of computation. Dilated convolutional layers have been demonstrated to bring significant accuracy improvements in segmentation tasks Chen et al. (2018). The dilated convolution layers in WHCNet_1 are used to extract multi-scale representations from wheat head images.
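The parameter-free enlargement of the receptive field is easy to verify in PyTorch; the snippet below is an illustrative sketch, not part of the WHCNet code:

```python
import torch
import torch.nn as nn

# A 3x3 kernel with dilation rate r=2 has an effective size of
# k + (k-1)(r-1) = 3 + 2*1 = 5, with no extra parameters.
conv_plain   = nn.Conv2d(1, 1, kernel_size=3, padding=1)
conv_dilated = nn.Conv2d(1, 1, kernel_size=3, padding=2, dilation=2)

x = torch.randn(1, 1, 64, 64)
print(conv_plain(x).shape, conv_dilated(x).shape)   # both (1, 1, 64, 64)
print(sum(p.numel() for p in conv_plain.parameters()),
      sum(p.numel() for p in conv_dilated.parameters()))  # same count: 10 and 10
```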
WHCNet_1 utilizes the downsampling and upsampling part to extract high level features, a braided skip connection part to carry location information of the input image from lower layers to deeper layers, and dilated convolutional layers to extract multi-scale features. However, the architecture of WHCNet_1 is somewhat complicated compared with the baseline CSRNet. To overcome this disadvantage, we propose WHCNet_2, a simpler architecture than WHCNet_1, in the next section.
#### 2.4.2 WHCNet_2
In WHCNet_2 (see Figure 5), we use one skip connection from the output of the front end of the model to the output of 5 stacked convolutional layers, combining the location information from the low layers with the high level features so that the output density map preserves both the higher level representation and accurate location information. Different from other skip connections He et al. (2016); Ronneberger et al. (2015), we add one convolutional layer in the connection path, aiming to keep the output shape of the front end part consistent with the output shape of its prior consecutive CNN
Figure 5: The overall architecture of WHCNet_2. All convolutional layers use padding to maintain the previous size. The convolutional layers' parameters are denoted as '(number of layers)-conv-(kernel size)-(number of filters)-(dilation rate, if applicable)'; max-pooling layers are conducted over a \(2\times 2\) pixel window with stride 2. Thus, the configuration of WHCNet_2 is as follows: Front end ( 2-conv3-64, 1-max-pooling, 2-conv3-128, 1-max-pooling, 3-conv3-256, 1-max-pooling, 3-conv3-512), Back end ( skip connection part [ 1-conv3-512, 2-conv3-256, 2-conv3-128, parallel with 1-conv3-128], consecutive part [ 2-conv3-128, 1-conv3-64]), Output layer ( 1-conv1-1)
layers, so that the concatenation layer receives balanced information, with equal portions of low and high level features.
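To make the skip connection concrete, the following PyTorch sketch follows the layer configuration listed in the Figure 5 caption; the exact layer details of our implementation may differ, so treat this as an illustration:

```python
import torch
import torch.nn as nn

class WHCNet2Backend(nn.Module):
    """Deep branch and a one-conv skip branch over the front-end features,
    concatenated so the density map keeps both high-level and location cues."""
    def __init__(self):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                                 nn.ReLU(inplace=True))
        self.deep = nn.Sequential(block(512, 512), block(512, 256), block(256, 256),
                                  block(256, 128), block(128, 128))
        self.skip = block(512, 128)     # keeps shapes consistent for concatenation
        self.tail = nn.Sequential(block(256, 128), block(128, 128), block(128, 64),
                                  nn.Conv2d(64, 1, 1))  # 1x1 conv -> density map

    def forward(self, f):               # f: VGG-16 front-end features (N, 512, H/8, W/8)
        z = torch.cat([self.deep(f), self.skip(f)], dim=1)
        return self.tail(z)
```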
#### 2.4.3 WHCNet_3
To further reduce the computational cost of WHCNet_2, we design WHCNet_3, which has an architecture similar to WHCNet_2 but with fewer convolutional layers and smaller filter sizes. The architecture of WHCNet_3 is shown in Figure 6.
### Loss function
The Euclidean distance is used to measure the difference between the estimated density map and the ground truth Li et al. (2018); Zhang et al. (2016); Shi et al. (2019). The loss function is defined as follows:
\[L(\Theta)=\frac{1}{2N}\sum_{i=1}^{N}\|\Psi(Img_{i};\Theta)-GT_{i}\|_{2}^{2} \tag{4}\]
where \(\Theta\) is the set of trainable parameters of our deep CNN \(\Psi\), \(N\) is the number of training images in the batch, \(Img_{i}\) is the input wheat head image and \(GT_{i}\) is the corresponding ground truth density map. Therefore, \(L\) is the loss between the estimated density maps and the ground truth density maps.
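Equation (4) is half the batch-averaged squared Euclidean distance between density maps; a minimal PyTorch sketch:

```python
import torch

def density_loss(pred, gt):
    """Eq. (4): (1/2N) * sum over the batch of the squared Euclidean
    distance between predicted and ground truth density maps."""
    n = pred.size(0)
    diff = (pred - gt).view(n, -1)           # flatten each density map
    return 0.5 * diff.pow(2).sum(dim=1).mean()
```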
### Evaluation metrics
The mean absolute error (MAE) and the root mean squared error (RMSE) between the predicted and ground truth counts are used to evaluate the counting performance in this study; they are defined as follows:
\[MAE=\frac{1}{N}\sum_{i=1}^{N}\left|Cest_{i}-Cgt_{i}\right| \tag{5}\]
\[RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left|Cest_{i}-Cgt_{i}\right|^{2}} \tag{6}\]
where \(N\) is the number of test images, \(Cest_{i}\) stands for the estimated counting number of the \(i\)th image \(Img_{i}\), and \(Cgt_{i}\) represents the corresponding ground truth of counting.
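Since the estimated count \(Cest_{i}\) is the integral (sum) of the predicted density map, both metrics can be computed as in the following minimal sketch:

```python
import numpy as np

def mae_rmse(pred_maps, gt_counts):
    """Eqs. (5)-(6): counts are obtained by summing each predicted
    density map, then compared with the ground-truth counts."""
    est = np.array([m.sum() for m in pred_maps])
    gt = np.asarray(gt_counts, dtype=float)
    mae = np.abs(est - gt).mean()
    rmse = np.sqrt(((est - gt) ** 2).mean())
    return mae, rmse
```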
Figure 6: The overall architecture of WHCNet_3. All convolutional layers use padding to maintain the previous size. The convolutional layers' parameters are denoted as '(number of layers)-conv-(kernel size)-(number of filters)-(dilation rate, if applicable)'; max-pooling layers are conducted over a \(2\times 2\) pixel window with stride 2. Thus, the configuration of WHCNet_3 is as follows: Front end ( 2-conv3-64, 1-max-pooling, 2-conv3-128, 1-max-pooling, 3-conv3-256, 1-max-pooling, 3-conv3-512), Back end ( skip connection part [ 2-conv3-256, 2-conv3-128, 2-conv3-64, parallel with 1-conv3-64], consecutive part [ 2-conv3-64]), Output layer ( 1-conv1-1)
## 3 Results
We train the four models, CSRNet, WHCNet_1, WHCNet_2 and WHCNet_3, on the training set, which is composed of 12,000 wheat head image patches, and validate them on the validation set of 1,600 patches. We test the four models on the testing set of 1,600 patches (of size \(512\times 512\)), and we also test them on full sized wheat head images (of size \(1024\times 1024\)) by randomly selecting 350 wheat head images from GWHD.
The proposed networks and their experiments are implemented with the PyTorch framework. The initialisation of the network weights is important, since bad initialisation slows the learning process due to gradient instability in deep nets. We use the pretrained VGG-16 model as the front-end in this study. For the other layers, we use Gaussian initialization with a standard deviation of 0.01. Stochastic gradient descent (SGD) is applied with a fixed learning rate of 1e-6 during training.
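For reference, the front end extraction and initialisation described above can be sketched in PyTorch as follows; the slice index assumes torchvision's VGG-16 layer ordering, and the model and optimizer names are placeholders:

```python
import torch
import torch.nn as nn
from torchvision import models

# Front end: the first 10 conv layers and 3 max-pool layers of a pretrained
# VGG-16 correspond to features[:23] in torchvision's layer indexing.
vgg = models.vgg16(pretrained=True)
front_end = vgg.features[:23]

def init_backend(module):
    """Gaussian initialisation (std 0.01) for the non-pretrained layers."""
    if isinstance(module, nn.Conv2d):
        nn.init.normal_(module.weight, std=0.01)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# model.backend.apply(init_backend)                      # hypothetical model
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)
```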
The experimental results are compared in Table 1.
As shown in Table 1, WHCNet_1 achieved the best MAE and RMSE on both the testing patches and the whole images; however, its model size is the largest of the four models, because WHCNet_1 includes a downsampling and upsampling part and four skip connections, making its architecture more complicated than the other models. To reduce the complexity and the computation cost, we removed the down and up sampling part and optimized the skip connection part, keeping only one skip connection in WHCNet_2 and WHCNet_3; as a result, their model sizes decreased, while their performance dropped only slightly in terms of MAE and RMSE. Although WHCNet_2 and WHCNet_3 have smaller model sizes than CSRNet, our proposed models outperformed the baseline model CSRNet on the MAE and RMSE evaluation metrics. This confirms that the skip connection scheme can improve performance compared with the purely consecutive layer scheme in our wheat head counting task.
The comparison of the density maps constructed by our proposed models and CSRNet is illustrated in Figure 7. As can be seen from the figure, our proposed models detect more detailed information than CSRNet; for example, the small spike in the region of Patch 1 was missed by CSRNet but clearly labeled by our proposed models, and the pattern of Patch 2 is more detailed in our proposed models than in CSRNet. Moreover, Patch 3 is detected by WHCNet_2 only: the span of its skip connection is longer than those in WHCNet_1, so the loss of location information of wheat heads is smaller. The architecture of CSRNet, by contrast, is a purely consecutive CNN; as the network goes deeper, the location information of wheat heads degrades, which affects the quality of the output density map. Since there is no ground truth in the test folder of the Kaggle version of GWHD, the corresponding ground truth density map is not shown in Figure 7. Figure 8 illustrates another sample from our experiments, which includes the ground truth density map. In conclusion, the quality of the density map generated by WHCNet_2 is the best when compared with the original image. The reason is that the skip connection in WHCNet_2 passes the location information to the deeper layers directly, unlike the braided skip connections in WHCNet_1.
## 4 Discussion
Our WHCNets introduce skip connections in the back end to avoid the degradation of the location information, which is represented in the lower layers, during the training of a deep network, and thereby improve the quality of the generated density maps. Compared with an architecture of simply stacked layers, skip connections improve the quality of the density maps by passing the low level features to deeper layers. WHCNet_1 achieved the best evaluation scores in terms of MAE and RMSE, but its model size is the largest of the four models, and it also needs a longer training time. The architecture of WHCNet_1 is complicated, including braided skip connections and a down-up sampling part; therefore, an ablation study is needed to find an optimized model. We will explore such an ablation study in our future work.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Patches} & \multicolumn{2}{c}{Whole image} & \multirow{2}{*}{Size} \\ \cline{2-5} & MAE & RMSE & MAE & RMSE & \\ \hline WHCNet 1 & **1.895** & **2.431** & **6.251** & **7.972** & 202 M \\ WHCNet 2 & 1.955 & 2.523 & 6.329 & 8.089 & 102 M \\ WHCNet 3 & 2.01 & 2.591 & 6.627 & 8.43 & **79 M** \\ CSRNet & 2.352 & 3.006 & 7.923 & 9.911 & 124 M \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of the performances of WHCNets and the baseline model CSRNet.
WHCNet_2 achieved the best quality of density maps, as observed in Figure 7 and Figure 8, but its MAE and RMSE scores were not the best. We assume the reasons are complicated and that several factors should be taken into consideration. Firstly, our proposed method is a supervised method; thus, the annotation quality plays a very important role in the performance of a supervised task. Wheat heads in field plants are often occluded or partly covered by leaves, and the environment is very complicated as well, so they are hard to annotate accurately. Secondly, finding a proper ground truth generation algorithm is crucial. In this study, we use the geometry-adaptive kernel, which was designed for dense crowd counting; however, in our wheat head images, some wheat heads are densely distributed while others are sparsely distributed. Besides, different from crowds, wheat heads are characterized by an elongated shape; thus, the geometry-adaptive kernel may not be able to delineate such long, strip-shaped objects. Therefore, a ground truth density map generation method suited to the wheat head counting task is needed. GWHD has a revised version which takes the orientation of wheat heads into account; as the project time was limited, we will use the revised GWHD version to develop an optimized ground truth generation algorithm in our further
Figure 7: Comparison of density maps generated by CSRNet, WHCNet_1 and WHCNet_2. The original wheat head image is one of the 10 test images provided by Kaggle version GWHD. Red squares labels main differences among density maps. Patch 1, Patch 3 and Patch 4 are not detected by the CSRNet model, but detected by our proposed models. The pattern of Patch 2 is more detailed in our proposed models than CSRNet. Patch 3 is detected by WHCNet 2 only.
research. Thirdly, the data augmentation methods used in this study are simply cropping and flipping, so the training dataset is limited; hence, the model may not generalize well to other situations. We will apply more image augmentation methods to enhance the performance of our models. Last but not least, since MAE and RMSE are evaluated based on the ground truth density maps and the predicted density maps, errors in the ground truth density maps may propagate into the MAE and RMSE. Therefore, more evaluation methods should be considered in this study.
The performance of WHCNet_3 dropped slightly compared with WHCNet_1 and WHCNet_2 because its architecture is simpler; we could add more layers or increase the number of filters to improve it. Nevertheless, WHCNet_2 and WHCNet_3 outperformed the baseline model in terms of MAE, RMSE and model size.
## 5 Conclusion
In this study, we have proposed three novel models, called WHCNet_1, WHCNet_2 and WHCNet_3, for the wheat head counting task and for high-quality density map generation from wheat head images with CNNs. We trained and tested our approaches on the GWHD dataset, a large, diverse, and well-labelled dataset of wheat images built by a joint international collaborative effort. Our WHCNets are composed of two major components: a CNN as the front-end for wheat head image feature extraction and a skip connected CNN as the back-end to generate high-quality density maps and accomplish the wheat head counting task. We compared our methods with CSRNet, a deep learning method that can understand highly congested scenes and perform accurate count estimation, as well as present high quality density maps. By taking advantage of the skip connections between CNN layers, WHCNets combine low level features, carrying local information, with high level features to make density map predictions; thus, the density maps can have both high spatial
Figure 8: The first row shows one wheat head image and its ground truth density map. The second row presents the density maps generated by WHCNet_1, WHCNet_2, WHCNet_3 and CSRNet. The density maps generated by our proposed models present more detailed information than those of the baseline model, CSRNet.
resolution and detailed representations of the input images. Experiments demonstrated that our methods outperformed CSRNet in terms of the evaluation metrics MAE and RMSE.
|
2307.01806 | DeepFlorist: Rethinking Deep Neural Networks and Ensemble Learning as A
Meta-Classifier For Object Classification | In this paper, we propose a novel learning paradigm called "DeepFlorist" for
flower classification using ensemble learning as a meta-classifier. DeepFlorist
combines the power of deep learning with the robustness of ensemble methods to
achieve accurate and reliable flower classification results. The proposed
network architecture leverages a combination of dense convolutional and
convolutional neural networks (DCNNs and CNNs) to extract high-level features
from flower images, followed by a fully connected layer for classification. To
enhance the performance and generalization of DeepFlorist, an ensemble learning
approach is employed, incorporating multiple diverse models to improve the
classification accuracy. Experimental results on benchmark flower datasets
demonstrate the effectiveness of DeepFlorist, outperforming state-of-the-art
methods in terms of accuracy and robustness. The proposed framework holds
significant potential for automated flower recognition systems in real-world
applications, enabling advancements in plant taxonomy, conservation efforts,
and ecological studies. | Afshin Khadangi | 2023-07-04T16:21:39Z | http://arxiv.org/abs/2307.01806v1 | DeepFlorist: Rethinking Deep Neural Networks and Ensemble Learning as A Meta-Classifier For Object Classification
###### Abstract
In this paper, we propose a novel learning paradigm called "DeepFlorist" for flower classification using ensemble learning as a meta-classifier. DeepFlorist combines the power of deep learning with the robustness of ensemble methods to achieve accurate and reliable flower classification results. The proposed network architecture leverages a combination of dense convolutional and convolutional neural networks (DCNNs and CNNs) to extract high-level features from flower images, followed by a fully connected layer for classification. To enhance the performance and generalization of DeepFlorist, an ensemble learning approach is employed, incorporating multiple diverse models to improve the classification accuracy. Experimental results on benchmark flower datasets demonstrate the effectiveness of DeepFlorist, outperforming state-of-the-art methods in terms of accuracy and robustness. The proposed framework holds significant potential for automated flower recognition systems in real-world applications, enabling advancements in plant taxonomy, conservation efforts, and ecological studies.
## 1 Introduction
Flower classification is a fundamental task in the field of computer vision with numerous applications in ecological studies, botanical research, and horticulture. Accurately identifying and classifying flower species from images can provide valuable insights into biodiversity assessment, ecosystem monitoring, and plant species conservation efforts. With the advent of deep learning techniques, deep neural networks (DNNs) have shown remarkable success in various image classification tasks.
In recent years, convolutional neural networks (CNNs) have emerged as the state-of-the-art models for image classification tasks, including flower classification. CNNs can automatically learn hierarchical representations from raw image data, capturing both low-level visual features and high-level semantic information. Several CNN-based architectures, such as AlexNet [1], VGG [2], ResNet [3] and DenseNet [4], have demonstrated outstanding performance in large-scale image classification benchmarks.
However, despite the impressive achievements of DCNNs and CNNs, flower classification remains a challenging task due to several factors. Flowers exhibit diverse color patterns, shapes, and textures, often leading to high intra-class variability and inter-class similarities. Moreover, limited availability of labeled flower datasets and the potential presence of noise and occlusions in real-world flower images further exacerbate the classification difficulty. To address these challenges and improve the accuracy of flower classification, we propose a novel deep neural network architecture called "DeepFlorist." DeepFlorist is specifically designed to effectively capture and utilize the discriminative visual characteristics of flowers for accurate classification. It combines the strengths of CNNs in feature learning with ensemble learning techniques to enhance the robustness and generalization capability of the classification model.
Ensemble learning has proven to be a powerful approach to improve classification performance by combining the decisions of multiple base classifiers. The ensemble model aggregates the predictions from individual classifiers to make the final classification decision, reducing the impact of individual classifier errors and enhancing overall accuracy. By integrating ensemble learning as a meta classifier within DeepFlorist, we aim to exploit the diversity of learned features and decision boundaries from different CNN models, resulting in improved flower classification performance. In this paper, we present a comprehensive investigation of DeepFlorist's architecture and its effectiveness for flower classification. We evaluate the performance of DeepFlorist in comparison to state-of-the-art flower classification methods on the Google Flower Classification with TPUs challenge. Additionally, we analyze the contributions of ensemble learning and demonstrate its impact on enhancing the classification accuracy and robustness of DeepFlorist.
The contributions of this work can be summarized as follows: (1) the introduction of DeepFlorist, a novel deep neural network architecture specifically tailored for flower classification, (2) the utilization of ensemble learning as a meta-classifier within DeepFlorist to improve the classification accuracy and robustness, and (3) comprehensive experimental evaluations and comparisons with state-of-the-art methods, demonstrating the superior performance of DeepFlorist and the effectiveness of ensemble learning for flower classification tasks. The remainder of this paper is organized as follows. Section 2 provides a literature review on flower classification methods and deep learning techniques. Section 3 presents the details of the proposed DeepFlorist architecture, including the network design, training process, and ensemble learning integration. Section 4 describes the experimental setup and presents the results and analysis. Finally, Section 5 concludes the paper and discusses potential future directions in flower classification research.
## 2 Background
Flower classification is an important task in the field of computer vision and has gained significant attention due to its applications in various domains, including ecology, botany, horticulture, and agriculture. The ability to accurately identify and classify flowers enables researchers to study floral biodiversity, monitor ecological changes, and facilitate plant breeding programs. In recent years, significant progress has been made in developing automated flower classification systems, driven by advancements in deep learning techniques and the availability of large-scale flower datasets.
Early Approaches: Early approaches to flower classification predominantly relied on handcrafted features and traditional machine learning algorithms. These methods involved extracting various features such as color, shape, and texture, and then employing classifiers such as Support Vector Machines (SVMs) or Random Forests for classification [5, 6, 7, 8]. While these techniques achieved reasonable accuracy, their performance was limited by the difficulty of designing effective features that capture the intricate characteristics of flowers.
Deep Learning: The advent of deep learning has revolutionized the field of flower classification by enabling the automatic extraction of discriminative features from raw image data. Convolutional Neural Networks (CNNs) have emerged as the primary architecture for deep learning-based flower classification models. CNNs can learn hierarchical representations of images, capturing both local and global patterns, which are essential for accurate flower recognition. Several studies have explored the use of CNNs for flower classification. Krizhevsky et al. [1] introduced the pioneering AlexNet architecture, which achieved breakthrough performance on the ImageNet dataset. Inspired by this success, researchers adapted CNN architectures such as VGG [2], GoogLeNet [9], and ResNet [3] for flower classification tasks. These models demonstrated superior performance in terms of accuracy and robustness, surpassing traditional methods. More recently, researchers have modified the architectures of these networks to classify flower images [10, 11, 12, 13, 14].
Data Augmentation: To mitigate the challenges posed by limited annotated flower datasets, data augmentation techniques have been widely employed. Data augmentation involves applying transformations such as rotation, scaling, and flipping to expand the training dataset artificially. This approach helps prevent overfitting and improves the generalization ability of the models [15, 16, 17, 18, 19]. Techniques like random cropping, Gaussian blur, and color jittering have been used to generate diverse training samples and enhance model performance [10, 11, 12, 13, 14].
Transfer Learning: Transfer learning has also been extensively utilized in flower classification tasks. Pretrained CNN models, trained on large-scale datasets such as ImageNet, are fine-tuned on flower datasets to leverage the learned features. This approach is particularly effective when limited labeled flower data is available. By transferring knowledge from general image features, the models can achieve better performance and faster convergence [20, 18, 21].
Ensemble Learning: Ensemble learning techniques have been employed to further boost the classification accuracy in image recognition tasks. Ensemble models combine predictions from multiple base classifiers, such as CNNs
or SVMs, to make final decisions. Bagging, boosting, and stacking are popular ensemble methods used in object classification. These techniques provide improved generalization, robustness to noise, and enhanced classification performance [22, 23].
In conclusion, flower classification has seen significant advancements in recent years, driven by the integration of deep learning architectures, data augmentation techniques, transfer learning, and ensemble learning. CNN-based models have demonstrated superior performance in capturing intricate flower characteristics. The use of data augmentation and transfer learning has addressed the challenges posed by limited labeled data. Ensemble learning techniques have further enhanced the classification accuracy and robustness. Future research in flower classification should focus on exploring novel architectures, developing specialized flower datasets, and investigating interpretability and explainability aspects to make flower classification models more reliable and transparent in their decision-making processes.
## 3 Methods
### Google Flower Classification using TPUs
We participated in the Google Flower Classification with TPUs challenge to classify a dataset which consists of a large collection of flower images. We split the data into 16465 training samples, 3712 validation images and 7382 test instances spanning 104 different flower species. The challenge was hosted on Kaggle 1. Figures 1 and 2 illustrate a random batch of the training and test samples, respectively. For data augmentation, we applied random rotation, shearing, zooming, shifting and image flipping. Our code is publicly available on Kaggle as a notebook 2.
Footnote 1: [https://www.kaggle.com/competitions/flower-classification-with-tpus](https://www.kaggle.com/competitions/flower-classification-with-tpus)
Footnote 2: [https://www.kaggle.com/code/afshin/flower-classification-focal-loss-0-98/notebook](https://www.kaggle.com/code/afshin/flower-classification-focal-loss-0-98/notebook)
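For illustration, an augmentation pipeline of this kind can be sketched with Keras preprocessing layers as below; the transform factors are illustrative assumptions rather than the exact challenge settings, and shearing (which has no built-in layer here) is omitted, with `keras_cv.layers.RandomShear` being one option.

```python
import tensorflow as tf

# Affine augmentations named in the text; factors are illustrative assumptions.
augmenter = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomTranslation(0.1, 0.1),
])

def augment(image, label):
    # training=True keeps the random transforms active inside a tf.data map
    return augmenter(image, training=True), label

# ds = ds.map(augment, num_parallel_calls=tf.data.AUTOTUNE)
```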
### Model: Meta Classifier
In object classification tasks, the use of ensemble learning techniques has gained significant attention due to their ability to improve the performance and robustness of classifiers. The meta-classifier, also known as the ensemble model, plays a vital role in aggregating the decisions from individual classifiers and making the final prediction. In this section, we describe the meta-classifier used in our object classification framework. The proposed meta-classifier is designed based on the concept of combining multiple classifiers to achieve better classification accuracy. We employ a diverse set of base classifiers including DenseNet201 [4] and EfficientNet-B4, B5 and B6 [24], each trained on the training data with different training parameters. This diversity allows the ensemble model to capture a wide range of features and pattern variations, leading to improved generalization and robustness. Figure 3 illustrates the architecture of the proposed meta-classifier. We initialised DenseNet201 with \(imagenet\) weights and all the EfficientNet variants with \(noisy\)-\(student\) weights. The parameters of the sequential base classifiers were frozen while tuning the fully-connected layer of the meta-classifier. Figures 4 and 5 present graph visualisations of DeepFlorist compiled using Graphcore Poplar [25].
The fusion of classifier decisions is performed using an average voting scheme, in which each base classifier contributes equally to the final decision. Alternatively, the weight assigned to each classifier can be determined using techniques such as accuracy-based weighting, entropy-based weighting, or dynamic weighting based on classifier confidence scores. These weighting schemes ensure that the ensemble model benefits more from the classifiers that have shown better performance on the given task. Our results showed that submissions to the leaderboard using weighted approaches led to a better evaluation F1-score.
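A minimal sketch of how such a meta-classifier can be assembled in Keras follows, assuming the base classifiers have already been trained and saved; `base_model_paths`, the input resolution, and the fusion helper are illustrative assumptions, not the exact training code.

```python
import tensorflow as tf

def build_meta_classifier(base_model_paths, num_classes=104):
    """Concatenate frozen base-classifier outputs and fine-tune a dense head."""
    inputs = tf.keras.Input(shape=(512, 512, 3))     # assumed input resolution
    branch_outputs = []
    for path in base_model_paths:
        base = tf.keras.models.load_model(path)      # a trained base classifier
        base.trainable = False                       # freeze everything before Concatenate
        branch_outputs.append(base(inputs))
    merged = tf.keras.layers.Concatenate()(branch_outputs)
    head = tf.keras.layers.Dense(num_classes, activation="softmax")(merged)
    return tf.keras.Model(inputs, head)

# Equal-weight (average) fusion of pre-computed predictions; per-classifier
# weights could be substituted for accuracy- or confidence-based schemes.
def average_vote(pred_list):
    return tf.add_n(pred_list) / float(len(pred_list))
```

Since all backbone parameters are frozen, fine-tuning then amounts to a standard Keras fit on the dense head alone.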
### Training
We trained DeepFlorist on a Google TPU (via gRPC) using TensorFlow's Mirrored Strategy across 8 replicas [26]. We used a batch size of 128 to train DeepFlorist by minimizing the categorical focal loss. A learning rate scheduler was also utilised for better convergence, exploration and generalisation. The categorical focal loss is defined as follows:
\[CategoricalFocalLoss=-\sum_{i=1}^{C}y_{i}\cdot(1-p_{i})^{\gamma}\cdot\log(p_{i}) \tag{1}\]
where:
* \(C\) is the number of classes in the classification problem,
* \(y_{i}\) is the one-hot encoded ground truth label for class \(i\),
* \(p_{i}\) is the predicted probability of class \(i\) outputted by the model,
* \(\gamma\) is the focusing parameter that controls the degree of emphasis on hard-to-classify examples.
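For illustration, Eq. (1) can be implemented directly as a TensorFlow loss; in the sketch below, the clipping constant and the default \(\gamma\) are assumptions, as the paper does not report its focusing parameter.

```python
import tensorflow as tf

def categorical_focal_loss(gamma=2.0, eps=1e-7):
    """Categorical focal loss per Eq. (1); gamma controls the emphasis
    placed on hard-to-classify examples."""
    def loss(y_true, y_pred):
        p = tf.clip_by_value(y_pred, eps, 1.0 - eps)   # avoid log(0)
        # sum over classes of -y_i * (1 - p_i)^gamma * log(p_i)
        per_sample = -tf.reduce_sum(
            y_true * tf.pow(1.0 - p, gamma) * tf.math.log(p), axis=-1)
        return tf.reduce_mean(per_sample)
    return loss

# e.g., model.compile(optimizer="adam", loss=categorical_focal_loss())
```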
For the learning rate, we used exponential decay with ramp-up and sustain phases of 4 batches each, starting at 0.00001, peaking at 0.00040, and bounded below by the starting learning rate. The exponential decay coefficient was set to 0.8. We used Adam [27] to optimise DeepFlorist and selected model snapshots based on the validation Macro F1-score.
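A schedule of this shape is commonly expressed as a function passed to a Keras `LearningRateScheduler` callback; the sketch below reproduces the stated values (start 1e-5, peak 4e-4, decay coefficient 0.8, ramp-up and sustain of 4 units each), with the scheduling unit left as an assumption.

```python
def lrfn(step,
         lr_start=1e-5, lr_max=4e-4, lr_min=1e-5,
         ramp_up=4, sustain=4, decay=0.8):
    """Exponential decay with ramp-up and sustain phases."""
    if step < ramp_up:                        # linear ramp-up to the peak
        return (lr_max - lr_start) / ramp_up * step + lr_start
    if step < ramp_up + sustain:              # hold at the peak
        return lr_max
    # exponential decay toward the minimum
    return (lr_max - lr_min) * decay ** (step - ramp_up - sustain) + lr_min

# e.g., tf.keras.callbacks.LearningRateScheduler(lrfn, verbose=True)
```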
We trained the base classifiers with the same procedure as highlighted above. After the base classifiers had been trained, we used the trained parameters as the nested sequential models, where we froze all the parameters of the base classifiers (Figure 3, before \(Concatenate\)) and only left the fully-connected layer for fine-tuning (Figure 3, \(Dense\)).
## 4 Results and Discussion
### DeepFlorist ranked 4th among more than 800 teams
In this section, we present the results achieved by DeepFlorist in the Google Flower Classification Competition using TPUs on Kaggle, where we secured an impressive 4th place out of more than 800 participating teams. The competition organisers employed the Macro F1-score as the evaluation metric to assess the performance of submissions on the test dataset.
The performance of DeepFlorist was evaluated on a diverse set of flower images from the competition dataset. Our model demonstrated remarkable classification accuracy and robustness, achieving a Macro F1-score of 0.989823 on the test dataset. This exceptional performance highlights the effectiveness of our proposed architecture and training strategies for object classification tasks.
Footnote 3: Kaggle username: RReddington
Compared to other participating teams, our model exhibited several key strengths. Firstly, DeepFlorist effectively captured intricate patterns and discriminative features present in the flower images, enabling accurate classification across different flower species. The ensemble nature of the base classifiers facilitated the learning of local and global image representations, enhancing the model's ability to discriminate between visually similar flower classes.
Moreover, we incorporated a targeted objective function, which helped mitigate underfitting and improved the generalization capability of our model. This allowed DeepFlorist to maintain strong performance on unseen test data, which is crucial for real-world applications. Additionally, hyperparameter optimization played a significant role in fine-tuning the model's performance. We extensively explored different configurations of learning rates, batch sizes and ensemble strategies to find the optimal settings. This meticulous tuning process enabled DeepFlorist to achieve its remarkable performance in the competition.
### Meta-classifier > base-classifiers
Our submissions to the public leaderboard (\(30\%\) of the test data) showed that DeepFlorist performed better than the individual base classifiers in terms of the test Macro F1-score. Figure 6 shows the results of our submissions as base-classifiers along with the aggregated meta-classifiers. As shown, DeepFlorist achieved a better test Macro F1-score across all our submissions to the leaderboard when compared to DenseNet201, EfficientNet-B4, B5 and B6.
### Discussion
Although our proposed model attained an impressive placement, there are potential areas for further improvement. One such avenue is the incorporation of graph learning techniques, which have shown promise in enhancing the model performance. This is especially important for training complex architectures like DeepFlorist, where training is computationally demanding. We suggest exploring the Graphcore Intelligence Processing Units (IPUs), as new possibilities have emerged for training meta-classifiers without the need to freeze the network parameters [25].
Graphcore IPUs are specifically designed to leverage massive parallelism, making them highly efficient for training deep neural networks. Compared to TPUs and GPUs, IPUs can handle a higher number of operations per second, leading to reduced training times and increased overall computational efficiency. Moreover, Graphcore IPUs also excel in memory bandwidth, which is crucial for handling large-scale models and datasets. One other significant advantage of utilizing Graphcore IPUs is their ability to support dynamic model updates during training without the need for parameter freezing. Unlike TPUs and GPUs, which often require freezing the network parameters during meta-classifier training, IPUs allow continuous adaptation of the model without interrupting the training process.
In conclusion, our proposed meta-classifier is scalable, and we encourage the community to explore its potential across other domains of object recognition. The outstanding performance of DeepFlorist, as reflected by its high Macro F1-score, highlights the effectiveness of our proposed architecture, methods, and hyperparameter optimization. The success of our approach reinforces the value of meta-learning techniques in achieving state-of-the-art results in object classification tasks.
Figure 1: Illustration of a sample batch from the training set. The labels for flower species can be seen on top of each tile.
Figure 2: Illustration of a sample batch from the test set.
Figure 3: Illustration of the DeepFlorist architecture. sequential 5, 6, 7 and 8 represent DenseNet201, EfficientNet-B4, B5 and B6, respectively.
Figure 4: Illustration of the DeepFlorist architecture in graph mode compiled through Graphcore Poplar [25]. DeepFlorist has been visualised as a fully-trainable network. The aggregation node can be identified as a single component at the centre composed of 4 base classifiers. DenseNet201 feature maps can be seen at the left corner (orange), where the other 3 clusters (blue) correspond to EfficientNet-B6, B5 and B4 in clockwise order, respectively.
Figure 5: Illustration of the DeepFlorist architecture in graph mode compiled through Graphcore Poplar [25]. DeepFlorist has been visualised as a partially-trainable network. The aggregation node can be identified as a modular component at the centre composed of 4 base classifiers. DenseNet201 feature maps can be seen at the left corner (orange), where the other 3 clusters (blue) correspond to EfficientNet-B6, B5 and B4 in clockwise order, respectively.
Figure 6: Results of our submissions as base-classifiers outputs along with the aggregated meta-classifiers. As shown, DeepFlorist achieves better test Macro F1-score across all our submissions to the leaderboard in comparison with the base models. |
2310.04369 | MBTFNet: Multi-Band Temporal-Frequency Neural Network For Singing Voice
Enhancement | A typical neural speech enhancement (SE) approach mainly handles speech and
noise mixtures, which is not optimal for singing voice enhancement scenarios.
Music source separation (MSS) models treat vocals and various accompaniment
components equally, which may reduce performance compared to the model that
only considers vocal enhancement. In this paper, we propose a novel multi-band
temporal-frequency neural network (MBTFNet) for singing voice enhancement,
which particularly removes background music, noise and even backing vocals from
singing recordings. MBTFNet combines inter and intra-band modeling for better
processing of full-band signals. Dual-path modeling are introduced to expand
the receptive field of the model. We propose an implicit personalized
enhancement (IPE) stage based on signal-to-noise ratio (SNR) estimation, which
further improves the performance of MBTFNet. Experiments show that our proposed
model significantly outperforms several state-of-the-art SE and MSS models. | Weiming Xu, Zhouxuan Chen, Zhili Tan, Shubo Lv, Runduo Han, Wenjiang Zhou, Weifeng Zhao, Lei Xie | 2023-10-06T16:44:47Z | http://arxiv.org/abs/2310.04369v1 | # MBTFNet: Multi-band Temporal-Frequency Neural Network for Singing Voice Enhancement
###### Abstract
A typical neural speech enhancement (SE) approach mainly handles speech and noise mixtures, which is not optimal for singing voice enhancement scenarios where singing is often mixed with vocal-correlated accompanies and singing has substantial differences from speaking. Music source separation (MSS) models treat vocals and various accompaniment components equally, which may reduce performance compared to the model that only considers vocal enhancement. In this paper, we propose a novel multi-band temporal-frequency neural network (MBTFNet) for singing voice enhancement, which particularly removes background music, noise and even backing vocals from singing recordings. MBTFNet combines inter and intra-band modeling for better processing of full-band signals. Dual-path modeling in the temporal and frequency axis and temporal dilation blocks are introduced to expand the receptive field of the model. Particularly for removing backing vocals, we propose an implicit personalized enhancement (IPE) stage based on signal-to-noise ratio (SNR) estimation, which further improves the performance of MBTFNet. Experiments show that our proposed model significantly outperforms several state-of-the-art SE and MSS models.
Weiming Xu\({}^{1}\), Zhouxuan Chen\({}^{2}\), Zhili Tan\({}^{2}\), Shubo Lv\({}^{1}\), Runduo Han\({}^{1}\),
Wenjiang Zhou\({}^{2}\), Weifeng Zhao\({}^{2}\), Lei Xie\({}^{1*}\)+\({}^{1}\)Audio, Speech and Language Processing Group (ASLP@NPU),
Northwestern Polytechnical University, Xi'an, China
\({}^{2}\)Lyra Lab, Tencent Music Entertainment, Shenzhen, China singing-voice enhancement, implicit personalized enhancement, MBTFNet
Footnote †: Corresponding author.
## 1 Introduction
With easy access to the internet, user-generated content (UGC) has become popular on platforms like TikTok, YouTube, and various karaoke apps. Among UGC, singing recordings provided by users are proliferating. However, these recordings are often accompanied by background noise, reverberation, and accompaniment, as they are recorded in ordinary daily environments. To improve the listening quality and enable further processing such as remixing and singing transcription, the interference needs to be removed.
Recent advances in neural speech enhancement (SE) and music source separation (MSS) can be sensibly leveraged to remove the above interference in the singing recordings. Mask-based time-frequency (TF) domain approaches are prevalent in speech enhancement. In these approaches, a neural network is designed to estimate a _mask_ in TF domain from simulated clean-noisy speech pairs. The _mask_ is then applied to the noisy signal at runtime to obtain the clean signal. Early approaches only considered the magnitude part of the noisy signal until the complex ratio mask (CRM) [1] was proposed with explicit consideration of phase. Then complex-valued neural network approaches have become popular for their superior denoising performance. These approaches explicitly model the real and imaginary parts of the speech spectrum typically by a U-net-shaped encoder-decoder structure [2, 3, 4, 5, 6]. Research interests have gradually shifted from wide-band (16 kHz) to super-wide-band and full-band [7, 8, 9], triggered by the deep noise suppression challenge (DNS) series [10, 11, 12]. However, increasing the sampling rate leads to a higher modeling complexity. To address this challenge, S-DCCRN [13] divides the frequency bands into two parts and performs intra-band and inter-band modeling respectively. MTFAANet [7] expands the receptive fields of the time-axis and frequency-axis with the specifically designed T-F convolution module (TFCM) to model the challenging full-band signal. HGCN [14] and HGCN+ [9] particularly focus on speech harmonics recovery by a harmonic gated compensation network. Recently, personalized speech enhancement (PSE) or target speaker extraction has received a lot of attention [15, 16, 17, 18]. In these approaches, enrollment speech from a target speaker can be adopted as a prior feed to the denoising network, leading to superior performance, especially in the case of overlapping speech.
With a similar rationale, the goal of music source separation (MSS) is to particularly separate vocals from background music. In this area, complex-valued U-nets are also dominant [19, 20, 21, 22, 23, 24]. Since the harmonic components that need to be separated between vocals and other instrumen
tal components have a specific frequency band distribution, a sub-band division strategy is usually employed to make the model more focused on a certain frequency band and source type. As a typical approach, ResUNetDecouple [21] uses a very deep structure, a residual UNet architecture with up to 143 layers, and achieves state-of-the-art separation performance on the popular MUSDB18 [25] dataset.
Although both speaking and singing originate from the same human vocal system, they have substantial differences in phoneme usage, tonality, diction, breathing, and volume. For example, singing has a higher average intensity level than speech and typically features a wider intensity variation. Likewise, singing occurs at higher frequency levels than speaking and within a wider range of frequencies. Pronunciation also differs from speaking: in singing, vowels are extended as much as possible because they carry most of the sound, while consonants are usually shortened as they are much harder to project. Singing has a specific rhythm and melody to adhere to, and sustained notes and vibrato further differentiate it from speech. Sometimes in singing, the lead vocal is also accompanied by backing vocals. On the other hand, the background music associated with singing also has unique characteristics. As a coherent background, musical accompaniments are mostly harmonic, broadband, and highly correlated with the singing.
In this paper, we present a neural network approach designed specifically for enhancing singing voices. Our goal is to remove musical accompaniments, various types of noise, and even backing vocals. Our work is inspired by the recent advances in SE and MSS reviewed above, but it makes substantial improvements that target the unique characteristics of singing. Specifically, we propose a novel multi-band temporal-frequency neural network (MBTFNet) with the following designs:
* We design an inter-band and intra-band modeling structure to make it easier in distinguishing harmonic structures of vocals and background music at the _frequency_ scale.
* To better distinguish fine-grained harmonic structures between vocals and background music at a _temporal_ scale, we introduce time-axis dilation block (TDB), dual-path RNN (DPRNN) [26] and squeezed-TCM (STCM) [5] to expand the receptive field of the singing enhancement model.
* Inspired by the recent advances in personalized speech enhancement, we propose an implicit personalized singing enhancement module as a secondary stage to further remove residuals and backing vocals. Using a signal-to-noise ratio (SNR) estimator, the module can dynamically leverage the singer's singing as a speaker embedding without requiring explicit voice enrollment from the singer.
Experiments show that the proposed MBTFNet outperforms several state-of-the-art SE and MSS models in singing voice enhancement by a large margin. With the help of the implicit personalized enhancement module, a further performance gain can be obtained including the challenging case for backing vocal removal.
## 2 MBTFNet
The noisy singing signal can be described as:
\[Y(t,f)=X(t,f)+N(t,f)+M(t,f)+B(t,f) \tag{1}\]
where \(Y(t,f)\), \(X(t,f)\), \(B(t,f)\), \(M(t,f)\) and \(N(t,f)\) represent the noisy signal, clean singing voice, backing vocal voice, background music and noise TF-bins, respectively. In our scenario, the goal is to extract the clean singing voice \(X(t,f)\) from the input noisy signal \(Y(t,f)\). In the singing voice enhancement (SVE) stage, we introduce the multi-band temporal-frequency neural network (MBTFNet), consisting of both inter and intra-band modeling, to remove the background music \(M(t,f)\) and noise \(N(t,f)\) from the input signal \(Y(t,f)\). For the rest part, we further eliminate the backing vocal signal \(B(t,f)\) by an implicit personalized enhancement (IPE) stage. Fig. 1 shows the overall structure of MBTFNet with SVE and IPE stages.
### Inter- and Intra-Band Modeling
Unlike noise, background music contains rich harmonic structure that is correlated with the vocals, which makes singing voice enhancement difficult.
Directly modeling full-band signals is computationally expensive, and the harmonic components of musical instruments are often distributed within specific frequency bands. We therefore design an inter-band and intra-band modeling structure. Meanwhile, modules with larger receptive fields are introduced to better distinguish the harmonic structures of vocals and music. The overall process is shown in Fig. 1, where the input audio signal \(y\) is first decomposed into \(C\) sub-band signals using a Pseudo Quadrature Mirror Filter (PQMF) [27]. The complex-valued spectrograms \(Y_{i}\in\mathbb{C}^{F\times T}\), \(i=1,...,C\) are computed by STFT, where \(F\) and \(T\) are the frequency and time index, and then fed into the inter-band module to get the rough enhanced result, \(X_{r,i}\). Similarly to general speech enhancement models, this module learns the common characteristics of the different sub-bands. Specifically, it adopts the U-Net structure, as shown in Fig. 2(a). The encoder block consists of a conv block and multiple stacked time-axis dilation blocks (TDB), which help to expand the receptive field. A TDB consists of a convolution layer with time-axis dilation, a batch norm layer, and a PReLU layer, as shown in Fig. 2(c). The output of the stacked encoder blocks, \(Z\in\mathbb{C}^{N\times K\times T}\), contains the full-band information and thus will be used in the subsequent intra-band module and personalized enhancement
Figure 1: The overall network structure of MBTFNet.
Figure 2: The design of the inter-band module (a), the dual-path convolution block (b), the encoder block (c), and the SNR module (d).
module, where \(N\) and \(K\) are transformed from \(C\) and \(F\) by the encoder blocks. The decoder block has a similar structure to the encoder, with the convolution layer in the conv block replaced by a transposed convolution layer. Among the stacked TDBs, higher blocks use larger dilation factors so that higher layers have larger receptive fields. Specifically, the dilation factor is \(2^{n}\), where \(n\) is the layer index beginning from 1.
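As an illustration, a TDB and its dilation pattern can be sketched as follows, assuming a PyTorch implementation with inputs laid out as (batch, channels, frequency, time); causal padding for streaming inference is omitted from this sketch.

```python
import torch.nn as nn

class TDB(nn.Module):
    """Time-axis dilation block: Conv2d (dilated along time) -> BatchNorm -> PReLU."""
    def __init__(self, channels, dilation, kernel_size=(3, 3)):
        super().__init__()
        pad_f = kernel_size[0] // 2                   # frequency padding
        pad_t = dilation * (kernel_size[1] // 2)      # time padding grows with dilation
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size,
                      padding=(pad_f, pad_t), dilation=(1, dilation)),
            nn.BatchNorm2d(channels),
            nn.PReLU(),
        )

    def forward(self, x):      # x: (batch, channels, freq, time)
        return self.net(x)

# Stack of 6 TDBs with dilation 2^n, n = 1..6, as in the encoder block.
tdb_stack = nn.Sequential(*[TDB(channels=64, dilation=2 ** n) for n in range(1, 7)])
```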
In order to fully utilize the full-band information, each rough enhanced sub-band \(X_{r,i}\) is concatenated with the full-band feature \(Z\) as the input of the dual-path convolution blocks (DPCB). The intra-band module aims to learn the unique characteristics of each band, and thus consists of \(C\) DPCBs. Each DPCB focuses on the harmonic components of musical instruments in its corresponding sub-band, which eases the learning process and improves the elimination ability. A DPCB is a stack of two types of layers, namely time-axis dilation convolution layers and frequency-axis dilation convolution layers, which help to extract features of each sub-band adequately, as shown in Fig. 2(b). Higher dilation blocks use larger dilation factors, following the same \(2^{n}\) setup as above.
After removing the background music and noise in each band, another DPCB is used to merge the enhanced sub-bands as \(X_{s,i}\), as shown in Fig. 1.
### Implicit Personalized Enhancement
To further eliminate the backing vocals and the residual harmonics of background music, we design an implicit personalized enhancement (IPE) stage to extract the voice of the lead vocalist. It consists of an SNR module [28], a speaker encoding module (SEM) and a personalized enhancement module (PEM). The SNR module is responsible for evaluating the cleanliness of \(Y(t,f)\), and the SEM extracts the speaker embedding from the previous enhancement result \(X_{s,i}\). The PEM further removes the backing vocals and the residual harmonics of background music according to the cleanliness and the speaker information above.
The SNR module is shown in Fig. 2(d), which is composed of a GRU layer, a convolution layer, and a sigmoid layer. The SEM is a pre-trained ECAPA-TDNN [29]. The PEM has roughly the same structure as the intra-band module, and an additional convolution layer is added to map the output \(a\in\mathbb{R}^{192}\) of SEM to \(A\in\mathbb{R}^{N\times K}\). \(A\) is multiplied by \(Z\) and combined with \(X_{\text{s}}\) as the input of PEM to get \(X_{\text{p}}\).
```
Input:  enhanced chunks X_s (with noisy inputs Y), SNR threshold λ
Result: temporary speaker embedding E

E ← 0
for each chunk in chunks do
    S̄ ← (1/T) · Σ_t SNR(X_s,t, Y_t)      // mean estimated SNR over frames t
    if S̄ ≥ λ then
        E ← α · E + (1 − α) · SEM(X_s)    // exponential moving average update
    end if
end for
```
**Algorithm 1** Update temporary speaker embedding
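In plain Python, the update rule of Algorithm 1 amounts to an SNR-gated exponential moving average; in the sketch below, `snr_module` and `sem` stand in for the trained SNR estimator and the ECAPA-TDNN encoder, and the smoothing factor `alpha` is an assumed value.

```python
import numpy as np

def update_speaker_embedding(chunks, snr_module, sem, lam, alpha=0.9):
    """SNR-gated EMA of the speaker embedding (Algorithm 1). `chunks` yields
    (enhanced_chunk, noisy_chunk) pairs; `alpha` is an assumed smoothing factor."""
    E = None
    for x_s, y in chunks:
        snr = snr_module(x_s, y)            # per-frame SNR estimates
        if float(np.mean(snr)) >= lam:      # chunk is clean enough to enroll
            e = sem(x_s)                    # temporary speaker embedding
            E = e if E is None else alpha * E + (1.0 - alpha) * e
    return E
```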
### Datasets

Each vocal track of the MUSDB18HQ training set is mixed with an accompaniment mixture and noise at a random SNR ranging from -5 to 15 dB. To simulate the test set, we apply the same rules to each vocal track of the MUSDB18HQ test set. Each test vocal track is used to simulate 5 mixture audio, resulting in a total of 250 audio tracks.
MUSDB18HQ is a classic music separation dataset, but some of its vocal tracks contain both lead vocals and backing vocals. This can cause issues during the training and testing of the IPE stage. Therefore, we use the M4Singer [31] dataset for the IPE stage experiments. M4Singer is an unaccompanied singing dataset that includes 10 male and 10 female singers, with a total of 700 vocals. Vocals from 8 males and 7 females are used for training, while the remaining vocals are used for testing. We apply the same rules as the MUSDB18HQ test set to generate the without-backing test set. We then mix the without-backing test set with vocals to generate the random-backing test sets. The random-backing test sets select random vocals as the backing vocals, which can leave a gap from real recordings. To address this, we refer to [32] and propose a selection method to simulate the selected-backing test set. We randomly select 10 vocal tracks and choose the one with the highest cross-correlation of chroma features with the lead vocal, then randomly raise or lower the backing vocal by two semitones, and then mix them with a random SNR ranging from 5 to 10 dB to generate the desired audio.
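The SNR-controlled mixing used throughout these simulations can be sketched as below; the function scales the interference so that the vocal-to-interference power ratio matches the target SNR (a standard formulation, not the authors' exact tooling).

```python
import numpy as np

def mix_at_snr(vocal, interference, snr_db):
    """Scale `interference` (music, noise, or a backing vocal) so that the
    vocal-to-interference power ratio equals `snr_db`, then mix."""
    p_v = np.mean(vocal ** 2)
    p_i = np.mean(interference ** 2) + 1e-12       # avoid division by zero
    gain = np.sqrt(p_v / (p_i * 10 ** (snr_db / 10.0)))
    return vocal + gain * interference

# e.g., one training mixture at a random SNR in [-5, 15] dB:
# mixture = mix_at_snr(vocal, music + noise, np.random.uniform(-5, 15))
```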
### Training Setup and Baselines
All training and test data are at a 44.1kHz sampling rate. A frame of 20 ms with a shift of 10 ms is used for STFT computation. We vary the learning rate during training as
\[lr=d^{-0.5}\cdot\min(\text{step}^{-0.5},\text{step}\cdot\text{warmup\_steps}^ {-1.5}) \tag{3}\]
where \(d=1\text{e}{-3}\) and warmup_steps=5000. All models in Table 1 were trained using the Adam optimizer under identical conditions to suppress both noise and music until no further improvement was observed. The detailed configuration of MBTFNet is as follows, and the other models for comparative experiments on MUSDB18HQ are configured according to their papers. The SEM uses a pre-trained ECAPA-TDNN model and fixes the weights in experiments.
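For reference, Eq. (3) can be written as a one-liner; the sketch below implements the stated schedule verbatim (linear warm-up for `warmup_steps`, then decay as step\(^{-0.5}\)).

```python
def lr_at(step, d=1e-3, warmup_steps=5000):
    """Learning rate from Eq. (3): linear warm-up, then step**-0.5 decay."""
    step = max(step, 1)          # guard against step 0
    return d ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```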
The PQMF splits the signal into 4 sub-band signals. The inter-band module has 6 encoder blocks and 6 decoder blocks; the channels of the encoder blocks are [8, 64, 64, 64, 128, 128], and their kernel size is (5, 2). Each encoder block contains 6 TDBs, and all kernel sizes in the TDBs are (3, 3). The decoder blocks mirror the encoder blocks. The frequency-axis and time-axis RNNs each have 2 layers with 256 units, followed by 1 STCM layer. The intra-band module consists of 4 DPCBs. Each DPCB has 5 FDBs and 5 TDBs with kernel size (3, 3).
The ECAPA-TDNN is configured with 1024 channels. In the SNR module, the number of GRU layers is 2 with 256 units, followed by a convolution layer with an input channel of 1, an output channel of 2, and a kernel size of (3, 3). The DPCB in the PEM is the same as in the intra-band module.
When conducting experiments on the M4Singer dataset, we first train the SVE part for 100 epochs, then freeze it to train the IPE part for 50 epochs. When training the IPE part, the mixture has a backing vocal with a probability of 0.5, and corresponding enrollment audio is provided.
### Experimental Results and Discussion
We first conduct model comparison and ablation experiments on the MUSDB18HQ simulation test set, using SI-SNR and PESQ as objective metrics; the experimental results are shown in Table 1. MBTFNet (causal) achieves the highest metrics among the evaluated speech enhancement models, with SI-SNR and PESQ 1.39 dB and 0.14 higher than the second-best S-DCCRN, respectively. MBTFNet (non-causal) also outperforms the music separation model ResUNetDecouple, with SI-SNR and PESQ higher by 1.63 dB and 0.32, respectively.
MBTFNet-A, MBTFNet-B and MBTFNet-C are ablation experiments, where MBTFNet-A means that the full-band signal is directly modeled without PQMF, MBTFNet-B means that all sub-bands are modeled in one Intra-band Module, and MBTFNet-C means that the Intra-band Module is removed. We keep the parameters of MBTFNet-A and MBTFNet-B consistent with MBTFNet. Ablation experiments show that modeling each sub-band individually improves the performance of MBTFNet.
After verifying the model performance of MBTFNet on the MUSDB18HQ simulation test set, we continue to experiment with the IPE on the M4Singer simulation test set. The experiment tests three values of \(\lambda\): 0, \(\lambda_{t}\), and 1. When \(\lambda\)=0, every \(X_{s}\) is accepted to update the speaker embedding. When \(\lambda\)=1, \(X_{s}\) is never accepted to update the speaker embedding, making the two-stage MBTFNet degenerate into the one-stage MBTFNet. \(\lambda_{t}\) is obtained during training. The experiments are conducted on three types of test sets: without-backing, selected-backing, and random-backing. The results are shown in Table 2, where Noisy, IPE, and PE denote
| Model | Causal | Param. (M) | SI-SNR (dB) | PESQ |
| --- | --- | --- | --- | --- |
| Noisy | - | - | 0.49 | 1.79 |
| HGCN+ [9] | ✓ | 7.03 | 7.25 | 2.49 |
| S-DCCRN [13] | ✓ | 2.04 | 7.43 | 2.60 |
| MBTFNet | ✓ | 4.08 | **8.82** | **2.74** |
| ResUNetDecouple [21] | ✗ | 103 | 7.97 | 2.63 |
| MBTFNet-A | ✗ | 8.57 | 8.62 | 2.80 |
| MBTFNet-B | ✗ | 8.54 | 8.49 | 2.86 |
| MBTFNet-C | ✗ | 8.43 | 9.11 | 2.94 |
| MBTFNet | ✗ | 8.54 | **9.60** | **2.95** |

Table 1: Comparison with various models on the MUSDB18HQ simulation test set.
no enhancement, implicit personalized enhancement, and personalized enhancement, respectively. First, we compare the IPE at different values of \(\lambda\). Among all \(\lambda\) values tested, \(\lambda_{t}\) achieves the highest metrics on all three test sets. Compared to \(\lambda\)=1, \(\lambda\)=\(\lambda_{t}\) achieves a further improvement on the test set, which demonstrates that the IPE can further remove residual noise left by the SVE stage. \(\lambda\)=0 yields the lowest metrics, which indicates that the SNR estimator contributes to the IPE stage.
Second, we compare the performance of the temporary speaker embedding with directly provided speaker enrollments obtained from the same singer's other songs, which we refer to as personalized enhancement (PE). The explicit speaker enrollment is extracted from a random 20-second segment by the same singer in the test set. As shown in Table 2, IPE outperforms PE. The reason is that the speaker module, designed for speech, becomes less effective when applied to singing. In the case of PE, there may be a mismatch between the speaker characteristics extracted from the enrollment and the test audio, as they come from different songs, degrading the personalized enhancement. In our proposed IPE module, the temporary speaker embedding is extracted from the earlier part of the test audio itself, so the speaker characteristics suffer less mismatch.
Fig. 3 shows the spectra for \(\lambda=1\) and \(\lambda=\lambda_{t}\) on the without-backing (above) and selected-backing (below) test sets. In the without-backing samples, some piano accompaniment residuals remain when only the SVE stage is used, but these residuals are further eliminated in the IPE stage. In the second group of samples (below), the backing vocal is eliminated by the IPE stage.
## 5 Conclusions
This paper has focused on singing voice enhancement, designing a novel Multi-Band Temporal-Frequency Neural Network (MBTFNet) to particularly address the challenges in the singing scene. By further introducing an implicit personalized enhancement (IPE) stage with an automatic speaker enrollment strategy, MBTFNet gets a stronger ability to distinguish the target singer from background music and backing vocals. Our proposed model achieves the best metrics in the MUSDB18HQ simulation test set as compared with SOTA SE and MSS models and obtains significant improvement on the M4singer simulation test set by using the IPE stage.
|
2310.16745 | Design Space Exploration of Sparsity-Aware Application-Specific Spiking
Neural Network Accelerators | Spiking Neural Networks (SNNs) offer a promising alternative to Artificial
Neural Networks (ANNs) for deep learning applications, particularly in
resource-constrained systems. This is largely due to their inherent sparsity,
influenced by factors such as the input dataset, the length of the spike train,
and the network topology. While a few prior works have demonstrated the
advantages of incorporating sparsity into the hardware design, especially in
terms of reducing energy consumption, the impact on hardware resources has not
yet been explored. This is where design space exploration (DSE) becomes
crucial, as it allows for the optimization of hardware performance by tailoring
both the hardware and model parameters to suit specific application needs.
However, DSE can be extremely challenging given the potentially large design
space and the interplay of hardware architecture design choices and
application-specific model parameters.
In this paper, we propose a flexible hardware design that leverages the
sparsity of SNNs to identify highly efficient, application-specific accelerator
designs. We develop a high-level, cycle-accurate simulation framework for this
hardware and demonstrate the framework's benefits in enabling detailed and
fine-grained exploration of SNN design choices, such as the layer-wise
logical-to-hardware ratio (LHR). Our experimental results show that our design
can (i) achieve up to $76\%$ reduction in hardware resources and (ii) deliver a
speed increase of up to $31.25\times$, while requiring $27\%$ fewer hardware
resources compared to sparsity-oblivious designs. We further showcase the
robustness of our framework by varying spike train lengths with different
neuron population sizes to find the optimal trade-off points between accuracy
and hardware latency. | Ilkin Aliyev. Kama Svoboda, Tosiron Adegbija | 2023-10-25T16:22:03Z | http://arxiv.org/abs/2310.16745v1 | # Design Space Exploration of Sparsity-Aware Application-Specific Spiking Neural Network Accelerators
###### Abstract
Spiking Neural Networks (SNNs) offer a promising alternative to Artificial Neural Networks (ANNs) for deep learning applications, particularly in resource-constrained systems. This is largely due to their inherent sparsity, influenced by factors such as the input dataset, the length of the spike train, and the network topology. While a few prior works have demonstrated the advantages of incorporating sparsity into the hardware design, especially in terms of reducing energy consumption, the impact on hardware resources has not yet been explored. This is where design space exploration (DSE) becomes crucial, as it allows for the optimization of hardware performance by tailoring both the hardware and model parameters to suit specific application needs. However, DSE can be extremely challenging given the potentially large design space and the interplay of hardware architecture design choices and application-specific model parameters.
In this paper, we propose a flexible hardware design that leverages the sparsity of SNNs to identify highly efficient, application-specific accelerator designs. We develop a high-level, cycle-accurate simulation framework for this hardware and demonstrate the framework's benefits in enabling detailed and fine-grained exploration of SNN design choices, such as the layer-wise logical-to-hardware ratio (LHR). Our experimental results show that our design can (i) achieve up to \(76\%\) reduction in hardware resources and (ii) deliver a speed increase of up to \(31.25\times\), while requiring \(27\%\) fewer hardware resources compared to sparsity-oblivious designs. We further showcase the robustness of our framework by varying spike train lengths with different neuron population sizes to find the optimal trade-off points between accuracy and hardware latency.
Spiking neural networks, design space exploration, resource-efficient machine learning, TLM modeling, neural network sparsity.
## I Introduction
Artificial Neural Networks (ANNs) have grown exponentially in popularity, as machine learning (ML)-based methods become applicable to an increasing number of new application domains. Ever-increasing workload demands in edge computing require ANN accelerators to further reduce inference latency and energy consumption. However, ANNs are not always viable in resource-constrained systems, as they are extremely compute-intensive and can be prohibitive for edge computing applications despite their high prediction accuracy.
Spiking Neural Networks (SNNs) are gaining a lot of attention as an efficient alternative to ANNs for machine learning in resource-constrained systems [1]. SNNs are a special kind of neural networks that differ from ANNs in their communication and computation schemes. Neurons in SNNs transmit discrete binary events (or _spikes_) to communicate with each other, rather than continuous variables as in ANNs. Whereas neurons in ANNs require complex multiply-and-accumulation (MAC) operations, SNNs only require simple addition operations.
Furthermore, SNNs reflect biological neural networks by implementing sparse coding [2] and sparse connectivity [3]. With sparse coding, only a fraction of neurons are activated at a time; with sparse connectivity, each neuron connects with only a subset of other neurons. This sparsity can further reduce computational complexity and decrease energy consumption compared to ANNs, particularly when handling high-dimensional data, and it can be leveraged to design efficient hardware accelerators for ML that are ideal for power-constrained devices such as Internet of Things (IoT) systems and edge computing devices. Therefore, SNNs not only provide a better analog of the biological neuronal communication and computation mechanisms but also offer an excellent opportunity for hardware implementation of highly efficient machine learning accelerators.
**Major issue with current SNN accelerators**: The design of SNN architectures has been an active area of research due to the benefits of SNNs for low-overhead ML. Both industry and academia have proposed various SNN accelerators, such as IBM's Truenorth [4], Intel's Loihi [5], Spinnaker [6], Miniatur [7], S2N2 [8], etc. However, prior studies (e.g., [9, 10]) have demonstrated the complexity of training SNNs and shown that while SNNs can be trained to achieve similar accuracy as ANNs, this is usually at the expense of energy efficiency due to the processing time steps intrinsic to SNNs. In order to close the energy efficiency gap, SNN hardware must be carefully designed to match the application behavior and exploit such characteristics as the intensity of firing activity. Therefore, early design space exploration methodologies are needed to investigate the application-driven hardware performance and to provide opportunities for model updates before hardware synthesis and deployment on edge devices.
**Limited work on SNN design space exploration**: Prior work on SNN design space exploration (DSE) studied the hardware efficiency implications of model parameters, but these DSE methods are limited to a small number of parameters such as spike encoding mechanisms, degree of parallelism [11], and spike train length [12, 13]. In contrast, we adopt an expanded view of the neuronal dynamics of SNNs and how they affect and are affected by hardware designs, especially considering
the network's sparsity. We propose a cycle-accurate simulation approach for exploring various neural parameters, including the degree of parallelism, and the ratio of logical neurons to physical hardware neurons. Importantly, our approach can study the impacts of these model parameters at a fine, layer-wise granularity.
**Sparsity-aware SNN hardware**: A neuron's workload in an SNN is primarily determined by the spiking intensity (particularly the pre-synaptic layer's spikes), which is influenced by factors like the dataset/application and input encoding mechanism. A higher spiking activity results in a larger accumulation delay for post-synaptic neurons, as more neurons are activated in the pre-synaptic layer. We argue that hardware resources (i.e., neuron processor, memory blocks) can be allocated based on a layer's sparsity level, alleviating high resource demands and enabling optimal performance of the SNN models. For example, recent studies [14, 15] have shown reductions in both hardware resources and inference time by simply considering sparsity in input layer in a two layer network (e.g., only input and output layers).
We present a highly flexible sparsity-aware and cycle-accurate simulation framework for rapidly exploring the design space of application-specific SNN accelerators. The framework leverages the _Transaction-Level Modeling (TLM)_[16] formalism, which can model complex digital systems that involve complex data communication. TLM abstracts away the communication details from those of the functional units and communication architecture. This enables an abstraction that enhances modularity, composability, reusability, and interoperability of design. We implement our framework in SystemC and validate it extensively against both software and hardware implementations of SNNs.
Through detailed experiments and analysis using our framework, we draw two key insights that may elude state-of-the-art SNN DSE methods. First, the implications of an SNN's neural dynamics on the hardware implementations vary for different layers within a network. This insight requires exploring parameters such as the total number of memory blocks and the number of physical neuron processors per layer to improve overall network efficiency. Second, increasing the logical-to-hardware neuron ratio for the deeper layers in a deep network can reduce the hardware footprint substantially without degrading the inference latency. This insight enables deploying larger and more accurate models on hardware-limited systems. To our knowledge, we are the first to perform rapid experimentation through various model configurations for an application dataset to find the sweet spot across hardware area, latency, and model accuracy. Moreover, this is also the first time that the layer-wise dynamics and sparsity of the SNN are taken into account in the design of SNN accelerators.
In summary, this work makes the following important contributions:
* We propose a modular hardware design that enables the flexibility to easily adjust the allocation of hardware neurons according to layer-specific sparsity. The proposed hardware architecture takes advantage of SNN's binary communication scheme and implements it using simple hardware primitives, like shift register, priority encoder, and concatenation.
* We implement a cycle-accurate simulation framework for this hardware with a high degree of automation and introduce a logical-to-hardware neuron ratio (LHR) knob which controls the total number of hardware neurons allocated to each network layer.
* Using three different datasets, MNIST, FashionMNIST, and DVSGesture, we analyze the sparsity and show area-efficient hardware with a trade-off in inference delay.
* Our experiments show that compared to prior works with fixed hardware configurations, our design can achieve (i) up to \(76\%\) reduction in hardware resources with similar latency for MNIST, (ii) up to \(31.25\times\) speed up, while requiring \(27\%\) fewer hardware resources for FashionMNIST, and (iii) \(2.34\times\) speed up for DVSGesture by simply tuning the layer-wise LHR knob.
* Furthermore, we employ a population of neurons for the classification layer and conduct a trade-off analysis between spike train length and population size and their impact on classification accuracy and hardware performance.
* Finally, we open-source our code to foster research in this area: [https://github.com/githubofalivev/SNN-DSE](https://github.com/githubofalivev/SNN-DSE)
## II Background and Related Work
In this section, we briefly describe SNNs and discuss some related work on SNN design space. We then motivate and describe TLM, which we leverage in our work. We also explain its abstraction levels and their associated characteristics and review prior work on TLM-based architecture modeling.
### _Overview of Spiking Neural Networks_
SNNs are inspired by how neurons in the brain communicate via sparse, discrete electrical signals, or spikes [17]. Modeled after the structure and functionality of biological neurons, the neurons in a typical SNN operate as simple integrate-and-fire units. This means that they accumulate incoming spikes over time and emit an outgoing spike when the integrated value reaches a certain threshold [18]. A sequence of spikes forms a spike train. Information is relayed through these spike trains via various coding schemes: rate coding, which concerns the frequency of transmitted spikes [19]; temporal coding or TTFS coding, which focuses on the timing of the spikes, often in relation to the time-to-first-spike [20]; burst coding, which counts the number of spikes and the inter-spike interval within a burst of spikes [21]; or phase coding, which encodes information in the spike times relative to an oscillatory background activity [22].
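To make the integrate-and-fire dynamics and rate coding concrete, the sketch below gives a minimal NumPy simulation; the threshold, reset behavior, and Bernoulli encoding are common textbook choices, not the specific neuron model of any accelerator discussed here.

```python
import numpy as np

def rate_encode(x, T, rng=np.random.default_rng(0)):
    """Rate coding: intensity in [0, 1] -> Bernoulli spike per time step."""
    return (rng.random((T,) + x.shape) < x).astype(np.uint8)

def integrate_and_fire(in_spikes, weights, threshold=1.0):
    """Accumulate weighted input spikes over T steps; fire and reset when the
    membrane potential crosses the threshold. in_spikes: (T, n_in) binary."""
    T = in_spikes.shape[0]
    v = np.zeros(weights.shape[1])                 # membrane potentials
    out = np.zeros((T, weights.shape[1]), dtype=np.uint8)
    for t in range(T):
        v += in_spikes[t] @ weights                # additions only, no MACs on spikes
        fired = v >= threshold
        out[t] = fired
        v[fired] = 0.0                             # reset fired neurons
    return out
```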
A major advantage of using SNNs over traditional ANNs is their event-driven communication: they only communicate when necessary rather than continuously. This means that the neurons in SNNs only activate when a spike is present [23]. Moreover, unlike ANNs that require hardware-expensive Multiplier and Accumulate (MAC) operations for regular computations, SNNs can rely solely on accumulate operations [24]. Therefore, SNNs have lower power requirements and
computational costs than ANNs, which makes them ideal for edge computing applications on resource-constrained devices [25].
### _Prior work on SNN Hardware Design Space Exploration_
Despite the computational simplicity of spiking neurons, a few recent studies have argued that SNNs require higher energy and longer inference latency to achieve _similar classification accuracy_ to ANNs [9, 10, 13]. While these studies may highlight the challenge of designing SNNs with comparable accuracy to ANNs, it is critical to note that these findings underscore the need for more in-depth and targeted exploration of SNNs' design space with a focus on hardware-software co-design. Given the complexity and breadth of SNN accelerators' design space--encompassing factors like network topologies, memory configurations, parallelization of computation resources, neuronal dynamics--it is clear that innovative approaches are essential for efficient DSE in SNNs, to foster the creation of effective and highly-tailored accelerators.
Li et al. [9] compared convolutional neural network (CNN) accelerators with their spike-coding equivalents (SNNs) in terms of processing and energy efficiency using high-level synthesis (HLS) to generate CNNs and SNNs on field-programmable gate arrays (FPGAs). The study used three types of deep neural network accelerators: CNN hardware generated with HLS, SNN hardware generated with HLS, and SNN RTL hardware manually developed in VHDL. They evaluated all three accelerator configurations with the same layer-based architecture across three benchmark datasets: MNIST, GTSRB, and CIFAR-10.
The authors found that SNNs offer comparable accuracy but may be less efficient than CNNs in terms of execution time due to the spike encoding scheme and the lack of parallelism. They used rate coding as the spike encoding scheme, which results in relatively larger spike trains and higher activity, leading to long execution times. However, the authors do not measure how many time steps are needed for SNNs to match ANN accuracy. The authors of [13] study the spiking activity per layer to determine (i) the spike train length and (ii) the hardware resources needed. For example, they propose fully parallel and flat serial hardware. They show that parallel SNNs use less energy than parallel ANNs for a complex dataset like Spoken MNIST, whereas serial SNNs use more energy than serial ANNs.
Another work by the authors [11] examines two main parameters in SNN design: input data encoding and parallelism degree. They propose three configurations: fully parallel, time-multiplexed, and hybrid. In the fully parallel one, each logical neuron has a physical neuron. In the time-multiplexed one, one hardware neuron serves a whole layer. In the hybrid one, the first hidden layer is fully parallel, but the rest are time-multiplexed. However, both works only consider flat parallelization or serialization of layers without exploring sparsity per layer. Unlike these prior works, our framework enables better flexibility and allows users to explore the layer-wise resource allocation scheme at a finer granularity, providing more control over the trade-offs in the exploration process and the output design efficiency. Moreover, our approach streamlines the design process by letting users specify parameters such as the total number of neurons and memory blocks per layer. The framework then automatically performs the mapping of the corresponding hardware neurons. Therefore, we can change the architectural configuration easily, allowing rapid pruning of optimal hardware that matches the neural structure for the target application.
The authors in [15] and [14] investigate how sparsity affects FPGA hardware resources and inference time. Both works use a Selective input Sparsity approach [15] on a two-layer MLP network and present quantitative analyses. Across different datasets, the results show that, at the cost of lower accuracy, a sparse connection reduces hardware area and inference time compared to a full connection. In our work, we do not use any selection mechanism, but rather simple hardware logic to compress spike trains and remove non-spiking outputs from pre-synaptic neurons. As such, our approach does not change network accuracy. In addition, unlike prior work, our approach also applies to hidden layers.
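The effect of this compression is easy to see in software; the sketch below mimics the shift-register and priority-encoder behavior by reducing a binary spike vector to the indices of firing neurons, so post-synaptic accumulation skips silent inputs entirely (which is also why accuracy is unaffected). It is a functional analogue, not the RTL itself.

```python
import numpy as np

def compress_spike_train(spikes):
    """Reduce a binary spike vector to the indices of firing neurons,
    analogous to the shift-register + priority-encoder compression."""
    return np.flatnonzero(spikes)

def accumulate_sparse(active_idx, weights, potentials):
    """Post-synaptic accumulation visits only active pre-synaptic neurons;
    silent neurons contribute nothing, so the result is unchanged."""
    for i in active_idx:
        potentials += weights[i]
    return potentials
```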
### _Overview of Transaction-Level Modeling_
Transaction-level models (TLMs) [16] model the hardware system components at a high level of abstraction in which the details of communication among computation units are separated from the details of the computation units. Channels model communication. Transaction requests call interface functions of these channel models. The fundamental purpose of TLM is to abstract away the unnecessary details of communication and computation to speed up the simulation and enable the exploration and validation of design alternatives at a higher level of abstraction. The TLM formalism is especially suitable for simulating SNNs because of the complexity of the event-driven communications between their components. The separation and abstraction enhance modularity, composability, reusability, and interoperability of design. That is, atomic computation and communication components of an SNN (e.g., neuron, synaptic connections) can be individually simulated and validated. The component designs can be coupled to form complex systems. These designs can also be reused or coupled with designs from different vendors or designers to create new application- or domain-specific system designs. TLM also supports simulation at different levels of abstraction to allow hardware designers to explore hardware at a range of granularities. Supported abstraction levels range from the _specification model_, which focuses on event ordering similar to dataflow computation without delving into computation or communication component specifics, to the _component-assembly_ and _bus-functional_ models, which introduce details of processing elements and connecting buses, respectively. Lastly, the _implementation model_, which we use in our work, is the least abstract and delivers cycle-accurate modeling for both computations and communications, detailing computation tasks at the RTL granularity.
### _TLM Architecture Modeling_
Embedded systems are one of the major application areas of TLMs because they contain multiple processor cores,
memory/cache subsystems, and various I/O peripheral units. By enabling the rapid simulation of different models, TLM provides a quick and iterative design scheme during the early design stage of the embedded system development. The simulation time required for TLM models varies from around 1/1000th to 1/100th of the execution time of RTL design [26]. TLM has the significant advantage of having multiple abstraction levels. Once the architectural specification is defined, software developers can start building their TLM models without waiting for RTL development kick-off. Consequently, TLM models can save orders of magnitude in man-hours and development costs compared to the traditional development cycle. For example, STMicroelectronics' System Architecture group (CR&D) used TLM models for developing MPEG4 IVT six months before the top-level netlist was made available [27]. Besides providing fast simulation, the fidelity of the TLM models has also been investigated. The study in [27] compares TLM and RTL implementations of a dual-core processor and found that the TLM model had less than a 15% error margin for interrupt latency and bus utilization. Although we are the first to employ TLM in modeling and simulating SNN accelerators, we envision that this approach will become a mainstay in designing and developing application-specific SNN accelerators in both industry and academia because of its numerous benefits to the design process.
## III Motivation for SNN DSE
To motivate our approach, we start by studying the synaptic traffic or activity in the individual layers of the SNN. Section VI-A details our experimental setup for this analysis. This analysis aims to find variabilities in the number of spiking neurons across layers. This fine-grained variability can be exploited to significantly improve the design of efficient hardware accelerators that satisfy application-specific latency, energy consumption, and area constraints. Figure 1 shows layer-wise variability using a fully-connected model with three hidden layers for the MNIST [28] and FashionMNIST (FMNIST) [29] datasets. The model achieved 96.2% accuracy for MNIST and 90.7% accuracy for FMNIST. We used consistent layer sizes across the three hidden layers to monitor variabilities in the spiking activity independent of the number of neurons within each layer.
Figure 1 shows that the number of firing neurons (averaged for five randomly selected time steps) declines exponentially as the layers get deeper. For example, in layer 0, the ratio of static neurons to firing neurons is 2.4. It increases to 3.4 and 10 for layer 1 and layer 2 respectively. We did not perform the analysis on deeper layers because they did not improve the accuracy of results for the datasets. **The key takeaway**: sparse firing traffic in deeper layers reduces the workload (i.e., accumulation of spikes) for post-synaptic layers. Consequently, this provides the opportunity to allocate fewer hardware neurons for those post-synaptic layers.
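To make the analysis above concrete, the following sketch (our own illustration in PyTorch, not the authors' script; the random tensor is a placeholder for real recorded spike dumps) computes the per-layer fraction of neurons that fire at least once over a sample window:

```python
import torch

def firing_ratio(spikes: torch.Tensor) -> float:
    """spikes: [time_steps, batch, layer_size] boolean spike recordings."""
    fired = spikes.any(dim=0)            # did each neuron fire in any time step?
    return fired.float().mean().item()   # fraction, averaged over batch and neurons

# toy usage: three hidden layers of 600 neurons, 25 time steps, batch of 128
for layer_idx in range(3):
    spikes = torch.rand(25, 128, 600) < 0.05   # placeholder for dumped recordings
    print(f"layer {layer_idx}: firing ratio = {firing_ratio(spikes):.3f}")
```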
Deep networks might require prohibitive hardware resources for resource-constrained systems. However, based on a layer-wise variation analysis, resource allocation can be efficiently managed. As a result, DSE is imperative for evaluating the parameters of high-performing deep learning models. Hardware designers may also want to evaluate these models in comparison to each other (e.g., both ResNet and Lenet-5 perform with similar accuracy but ResNet occupies less hardware area) to enhance the design outcomes. Note that this experiment only shows spiking activity, but we also observed similar layer-wise variability for other model parameters, like weight quantization size, which significantly affects the system's memory requirements. Overall, an effective DSE approach will enable designers to explore the trade-off points of their SNN accelerator designs and provide feedback to their network models. This will result in a highly efficient hardware-software co-design process in terms of both model accuracy and hardware efficiency.
## IV Design Space Exploration Methodology
Figure 2 depicts an overview of our framework and outlines the key functional components of the rapid DSE process. Since our work's main goal is to design efficient application-specific SNN accelerators, the starting point for an SNN DSE is a system specification that describes the network model for the
Fig. 1: Ratio of firing neurons to layer size for a four-layer network model (784-600-600-600). The model uses population coding (detailed in Section VI). The model's accuracy is 96.2% and 90.7% for MNIST and FMNIST, respectively.
Fig. 2: Overview of our framework outlining the key steps for rapid design space exploration
target application. The following are the essential phases of the DSE methodology in our framework.
**Training Phase**: First, one or more candidate network topologies are selected and initially trained using a model simulation tool, like _snntorch_1. For clarity, we use _snntorch_ as a proxy for software machine learning libraries due to its native support for SNN simulations. Our framework includes a training script that orchestrates the training process given multiple models and selects the model with the best accuracy within the desired accuracy range. It then extracts the input and output spikes and the associated model parameters of the selected topology. Note that although the selection of the candidate topologies is mainly driven by the state-of-the-art network models (e.g., commonly used topologies for a certain dataset), we also experimented with models with random parameters to explore a larger design space toward a more accurate model. However, the initial network architecture search process is beyond the scope of this paper.
Footnote 1: available online at [https://github.com/jeshraghian/snntorch](https://github.com/jeshraghian/snntorch)
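A minimal sketch of this training-phase orchestration is given below; `train_candidate` and the artifact-dumping step are hypothetical helpers standing in for the framework's actual training script, which wraps an _snntorch_ training loop:

```python
# Hedged sketch of the training phase: train each candidate topology, keep the
# most accurate model that falls inside the desired accuracy range, then dump
# its spikes and parameters. `train_candidate` is an assumed helper.
def select_best_model(candidates, train_candidate, acc_range=(0.90, 1.00)):
    best_model, best_acc = None, -1.0
    for topology in candidates:
        model, acc = train_candidate(topology)   # e.g., an snntorch training loop
        if acc_range[0] <= acc <= acc_range[1] and acc > best_acc:
            best_model, best_acc = model, acc
    return best_model, best_acc   # caller then dumps spikes and model parameters
```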
**Configuration Phase**: After training the target model and dumping its associated data, the data obtained from _snntorch_ is inserted into the configuration file (shown in the upper left corner in Figure 2). The model-related data include the number of hidden layers, the number of logical neurons in each corresponding layer, spike train length, and beta and threshold constants. In addition, the framework also sets the number of neurons per layer to define the logical-to-physical neuron ratio. This is an important hardware knob since realistic neural network models typically have too many neurons to be implemented or scaled in hardware. Moreover, unlike ANNs, SNN models naturally exhibit sparse spiking behavior, which leaves most of the neurons in an idle state. Our framework allows architectures (see Section V) to exploit the sparsity of SNNs and explore this parameter in determining the mapping ratio. To enable an estimate of resource costs (e.g., lookup tables (LUT), registers, Block RAM (BRAM) primitives, etc.), our framework also features a library of hardware component costs that were obtained by synthesizing the individual hardware components. Additionally, the verbosity level of the simulation can be set for debugging and tracing purposes.
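For illustration, a configuration of the kind described above might look as follows; the field names are assumptions mirroring the description, not the framework's exact schema:

```python
config = {
    "hidden_layers": 3,
    "logical_neurons_per_layer": [600, 600, 600],
    "hardware_neurons_per_layer": [150, 75, 60],  # sets logical-to-physical ratio
    "spike_train_length": 25,
    "beta": 0.9,                                  # LIF leak constant
    "threshold": 1.0,
    "memory_blocks_per_layer": [4, 4, 2],
    "verbosity": 1,                               # debugging/tracing level
}
```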
**Architecture Generation Phase**: Next, the _hardware generator_ takes the configuration file and generates the corresponding detailed RTL architecture (bottom left corner in Figure 2). Adhering to the TLM guidelines, this script builds the target hardware architecture using the memory unit, neural unit, and event control from the hardware component libraries. In this process, the individual components are first modified to better suit each layer's model and hardware-specific constraints. For instance, the event control unit for hidden layer 0 will have a different state machine behavior than the other layers, depending on the total number of neurons. Similarly, the memory size will vary depending on the neural activity within each layer. We will describe the details of this architecture enhancement in Section V. Given the component-level modifications, the framework also generates the top-level wrapper that couples the components together. In this process, it creates individual instances of the hardware components and connects their ports and exports.
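The sketch below gives a rough flavor of this generation step, reusing the illustrative configuration above; the emitted component names and syntax are placeholders rather than the generator's real output:

```python
def generate_wrapper(config: dict) -> str:
    """Emit illustrative top-level instantiation lines, one ECU/NU pair per layer."""
    lines = []
    for i, (logical, physical) in enumerate(zip(
            config["logical_neurons_per_layer"],
            config["hardware_neurons_per_layer"])):
        lhr = logical // physical   # logical neurons mapped per Neural Unit
        lines.append(f"ecu_{i} = EventControlUnit(layer={i}, neurons={logical})")
        lines.append(f"nu_{i} = NeuralUnit(count={physical}, lhr={lhr})")
        lines.append(f"connect(ecu_{i}, nu_{i})")
    return "\n".join(lines)

print(generate_wrapper(config))
```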
**Simulation & Validation Phase**: After generating the RTL architecture, the framework estimates the hardware resources for the target topology using the included component library (details in Section VI-A). Then, it dumps the resource information into a text file that is used as input to a cycle-accurate SystemC simulation. At this stage, the simulator reads the model's input spikes along with the weight and bias data (from _snntorch_) and simulates the inferred architecture. During simulation, it records the number of clock cycles as latency data for the SNN topology. Our framework also allows for the collection and recording of other peripheral execution data that might be useful for more detailed analysis. The data include the number and labels of spiking neurons in each layer and memory access counts. To verify the functionality of the generated architecture, the framework also performs a _spike-to-spike validation_ wherein the simulated output spikes are validated against the reference spikes from the trained input model.
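The spike-to-spike validation step can be pictured as a bit-for-bit comparison of the simulated and reference spike dumps, as in the following hedged sketch (array shapes and names are our assumptions):

```python
import numpy as np

def validate_spikes(sim_spikes: np.ndarray, ref_spikes: np.ndarray) -> bool:
    """Both arrays: [time_steps, output_neurons] binary spike trains."""
    assert sim_spikes.shape == ref_spikes.shape, "spike dump shape mismatch"
    mismatches = int(np.sum(sim_spikes != ref_spikes))
    print(f"{mismatches} mismatching spike bits out of {sim_spikes.size}")
    return mismatches == 0
```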
**Evaluation Phase**: In this phase, both the model's performance (accuracy) and the hardware performance (latency and area cost) are evaluated. Depending on the evaluation result, modifications can be made to the hardware configuration (e.g., increase the neuron ratio, or reduce the memory blocks), after which further evaluation iterations would take place. Our framework can also automate the compilation and running of various configurations, which is a substantial advantage when the design space is large (this feature is omitted from Figure 2 for brevity). Overall, utilizing a single Makefile, our framework is capable of conducting SNN DSE experiments with minimal user intervention, which would otherwise not be possible through RTL implementation.
## V Implementation of the Framework
We implemented our hardware using SystemC [30], a C++ library for system-level modeling and hardware/software co-design. SystemC inherits all C++ features such as object-oriented programming (OOP) patterns and template-based meta-programming paradigms. These features are highly useful for defining an abstraction of a parametric processing element (PE) or any other hardware component in the TLM design (see Figure 3). In addition to C++ features, the SystemC library defines a set of enhanced features that makes it especially suitable for our work. For example, PE constructs can be modeled by Module entities of the SystemC library, which are themselves classes in the OOP sense. For PEs to communicate with each other, SystemC defines primitive channels and ports/exports (see Figure 3). Moreover, it also provides custom data types such as bit vectors (sc_bv), unsigned integers of configurable width (sc_uint), etc.
### _Parametric Hardware Platform_
Figure 3 depicts our generic TLM platform that represents a single layer of a network. Since our modeling utilizes the _RTL/implementation-level_ abstraction of TLM, wrappers are the parent class of the units that interact with the interface
classes. These interface instances are the main "communication channels" through which computation components interact with each other. For all computation components, we use the clocked thread feature of SystemC to simulate cycle-accurate behavior. In this platform, a control wrapper and a neural wrapper form a single neural layer. Before moving on to the description of these basic components, we discuss our parallelization strategy for the SNN inference flow.
**Mapping Strategy**: The main challenge for a parallelization strategy is the ability to keep hardware units busy at all times. For a Fully Connected (FC) layer with \(n\) neurons, our approach is straightforward: we partition the layer into \(m\) groups (a design parameter): each group contains \(n/m\) neurons, and each group is assigned to a Neural Unit (NU) during hardware synthesis. For example, in Figure 3, a layer is mapped to four neural units. For a Convolutional (CONV) layer, we parallelize output channel-wise, meaning that, for instance, each NU in Figure 3 is responsible for \(m\) output channels. Given this structure, we now define the processing flow of the spike trains. For this, we begin with a discussion of the Event Control Unit (ECU), which manages the spike-based processing flow. Note that the behavior of the FC and CONV ECUs is similar, with minor distinctions.
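The FC partitioning rule can be summarized in a few lines; the sketch below (our illustration, not the framework's code) derives the (base address, neural size) pair that parameterizes each NU:

```python
def map_fc_layer(n_neurons: int, m_groups: int) -> list[dict]:
    """Partition n logical neurons into m equal groups, one per Neural Unit."""
    size = n_neurons // m_groups          # assumes m divides n evenly
    return [{"nu": g, "base_address": g * size, "neural_size": size}
            for g in range(m_groups)]

print(map_fc_layer(600, 4))
# [{'nu': 0, 'base_address': 0, 'neural_size': 150}, ...]
```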
### _Event Control Unit_
To provide a time-step-based processing flow, an ECU communicates with the pre- and post-synaptic layer ECUs to keep track of time steps and stay synchronized. Basically, it receives a spike train when the pre-synaptic layer has one ready. Likewise, it notifies the post-synaptic layer once its own spike train is ready. Intuitively, our simulator employs layer-wise pipelining: instead of having to wait for the post-synaptic layer, the ECU loads the spike train into a buffer and moves on to the next spike train from the pre-synaptic layer.
Within the ECU, a state machine orchestrates the spiking activity for the assigned neurons as depicted in Figure 4. When it receives a spike train, it applies a compression mechanism to eliminate the non-spiking (e.g., reset) bits. With this mechanism, an \(n\)-bit spike train is translated into a shift register array (see Figure 4). The process is as follows: in each cycle, the Priority Encoder (PENC) takes in \(n\) bits of data and outputs the address of the first set bit, which gets written into the shift register array. The bit reset component of the ECU then clears the bit at this address (which holds the value 1) in the one-cycle-earlier version of the \(n\)-bit spike data. Despite the inherent 2D \((row,col)\) nature of spikes in the CONV layer, we store addresses in a 1D fashion for the following reasons: (1) both the PENC and the accumulation phase operate more efficiently on a 1D structure, and (2) conversion between 1D and 2D is relatively low-cost in hardware, e.g., subtraction and addition (see Section V-C for details). From an FPGA hardware perspective, the PENC would ideally handle up to 100-bit inputs, beyond which the resource overhead would likely be prohibitive due to the FPGA routing overhead. Hence, the PENC handles large inputs in chunks, meaning it compresses a subset of spikes at a time to construct the full address set of the input spike train.
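A software analogue of this compression mechanism is sketched below (our illustration): a priority encoder repeatedly emits the address of the first set bit, and chunking mimics the bounded PENC input width discussed above:

```python
def compress_spike_train(bits: list[int], chunk: int = 100) -> list[int]:
    """Turn an n-bit spike train into the list of spiking-neuron addresses."""
    addresses = []
    for start in range(0, len(bits), chunk):      # bounded-width PENC chunks
        for offset, bit in enumerate(bits[start:start + chunk]):
            if bit:                                # first-set-bit priority order
                addresses.append(start + offset)
    return addresses

print(compress_spike_train([0, 1, 0, 0, 1, 1, 0, 0]))   # -> [1, 4, 5]
```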
### _Neural Unit_
To provide fully-automated model mapping, we initialize each NU with "_base address_" and "_neural size_" module parameters. In the context of an FC layer, this indicates that the NU is responsible for logical neurons from (_base address_) to (_base address + neural size_). The provided shift address also serves as the weight address for the synapse memory. The NU iterates through its neurons and serially calculates their accumulator values. Once the ECU transitions from the accumulation to the activation phase, using the Leaky Integrate and Fire (LIF) neuron model, the NU calculates the membrane potential for the neurons. For this, it adds three components together: (i) the leaky potential value (the potential from the previous time step multiplied by the beta constant), (ii) the accumulated value from the shifting phase, and (iii) the neuron bias. Then, the NU checks whether the new membrane value exceeds the threshold and assigns a spike based on the result.
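The per-neuron LIF update amounts to a few arithmetic operations; the sketch below mirrors the three summed components, with the reset-to-zero on spiking being our assumption rather than a detail stated here:

```python
def lif_step(prev_mem: float, accum: float, bias: float,
             beta: float = 0.9, threshold: float = 1.0):
    """One LIF update: (i) leak + (ii) accumulated synaptic value + (iii) bias."""
    mem = beta * prev_mem + accum + bias
    spike = mem > threshold
    if spike:
        mem = 0.0        # reset scheme is an assumption; the text does not fix it
    return mem, spike
```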
In the case of the CONV layer, the NU is responsible for the output channels ranging from (_base address_) to (_base address_ + _neural size_). For each output channel assigned, an NU serially processes spikes from each input feature map (fmap). Spike-based convolution of this kind was first proposed in [31]. As Figure 5 illustrates, for a given input spike address, the NU calculates the addresses for all affected neurons, which also depends on the filter size (a design parameter). For the filter size of
Fig. 4: Event Control Unit (ECU) design
Fig. 3: TLM-based Hardware Platform
three, there are nine neurons impacted by this spike (provided the addresses do not fall outside the frame). The NU serially reads the membrane potential values for the affected neuron addresses and adds the corresponding filter coefficients to the potential values. Note that [31] employs input channel-wise parallelization (a spike from each input fmap is processed in parallel), whereas our design parallelizes output channel-wise. Therefore, the NU serially iterates through all input channels, and then it performs the activation/spiking operation. Finally, to implement max-pooling in hardware, we OR-gate the generated spike train with a \(2\times 2\) window in a non-overlapping fashion [32].
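The address arithmetic for spike-based convolution and the OR-based pooling can be captured compactly, as in this illustrative sketch (frame handling and naming are ours):

```python
def affected_addresses(spike_addr: int, width: int, height: int, k: int = 3):
    """1D addresses of output neurons touched by one input spike (k x k filter)."""
    row, col = divmod(spike_addr, width)            # cheap 1D -> 2D conversion
    addrs = []
    for dr in range(-(k // 2), k // 2 + 1):
        for dc in range(-(k // 2), k // 2 + 1):
            r, c = row + dr, col + dc
            if 0 <= r < height and 0 <= c < width:  # stay within the frame
                addrs.append(r * width + c)         # 2D -> 1D
    return addrs

def maxpool2x2_or(spikes: list[list[int]]) -> list[list[int]]:
    """Non-overlapping 2x2 max-pooling of binary spikes via OR-gating."""
    return [[spikes[2*r][2*c] | spikes[2*r][2*c+1] |
             spikes[2*r+1][2*c] | spikes[2*r+1][2*c+1]
             for c in range(len(spikes[0]) // 2)]
            for r in range(len(spikes) // 2)]
```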
### _Memory Unit_
This unit has memory blocks that store synapse weight information and mapping logic that manages the access of multiple hardware neurons to a single memory block. Our platform lets users set the depth and count of the memory blocks. The depth of the blocks can be configured to \(M\times SIZE\) where \(M\) is the number of neurons assigned per memory block and \(SIZE\) is the size of the pre-synaptic layer. As discussed in Section V-C, the memory unit uses the _memory interface_ to respond to the weight read requests.
**Memory Interface**: The communication between the neural unit and memory unit is established via the _memory interface_ class which is a virtual class whose behavior is purely implemented in the calling class. In TLM terminology, the calling class is named the _export_ class since information is being exported into it. Therefore, the implementation of the interface class consists of a set of methods to be invoked by the export class. In this platform, the memory interface has a _Read_ method for reading a specific synapse weight. The method utilizes two signals (e.g., signal labels 16 and 17 in Figure 3): a 32-bit read_data bus that carries weight information and a 1-bit read_en line to enable data reads.
**Neural Interface**: The neural interface for the event control unit is more sophisticated than the memory interface. It contains both _Read_ and _Write_ methods to communicate the following signals (e.g., signal labels 1 to 8 in Figure 3): accumulation_en (1-bit) and activation_en (1-bit) are the enable signals used to allow all neurons within the neural unit to perform accumulation and activation operations. shifted_spike_addr (N-bit) represents the address of the individual neuron from the pre-synaptic layer whose weight is to be integrated, where N is the size of the pre-synaptic layer. The spike_out signal shows whether the neuron spiked, and done indicates that its associated neuron has completed accumulation or activation. Note that the number of buses for the last three signals depends on the number of neurons in the neural unit, which can be specified by the user in the configuration file.
## VI Experimental Results
### _Experimental Setup_
We use C++ and SystemC 2.0 to implement the framework's software (which simulates the SNN hardware). The hardware components are developed in SystemVerilog RTL, and the generated hardware instances were synthesized using Xilinx Vivado onto a Xilinx Virtex UltraScale+ FPGA with a 100MHz clock frequency to obtain precise FPGA area reports. We provide resource utilization results in Table I. As we mentioned in Section V, we utilize the _snntorch_ library for training. Within _snntorch_, two primary methods are typically employed: Surrogate Gradient Descent (SGD) and Backpropagation Through Time (BPTT) [36]. SGD is a technique that addresses the non-differentiable nature of the spiking mechanism in SNNs. It substitutes the original non-differentiable function with a smooth surrogate derivative that allows the usage of conventional gradient descent methods for optimization. On the other hand, BPTT is a temporal variant of the traditional backpropagation algorithm, which considers the recurrent nature of SNNs. We employ SGD for our models as it captures precise spike timings.
We use the static (MNIST and FMNIST) and dynamic (DVSGesture) datasets as driving applications. The static datasets contain 28\(\times\)28 grayscale image samples. DVSGesture contains 128\(\times\)128 frames for hand gesture recognition. Each frame captures changes in pixel intensity using Dynamic Vision Sensor cameras. To evaluate our framework, we compare it with existing state-of-the-art SNN inference accelerators, as there are no dedicated simulators for SNN hardware. We rigorously evaluate our framework's latency in clock cycles and compare it to the results of five prior SNN accelerators [11, 12, 33, 34, 35]. The second column in Table I summarizes the SNN model topologies for which these previous accelerators were designed. Net-1 to net-4 are fully connected (FC) networks with different numbers of hidden layers. Net-5 is 32C3-P2-32C3-P2-512-256-11 where 32C3 stands for 32 filters with size of \(3\times 3\) and P2 for maxpooling with size of \(2\times 2\) followed by three fully connected layers.
### _Impact of Logical-to-Hardware Neuron Ratio_
Table I shows how different layer-wise logical-to-hardware ratios (\(LHR\)) affect the latency per inference and the resource utilization of our flexible hardware design. \(LHR\) is a parameter that controls the mapping ratio of model hyperparameters into hardware. For fully connected layers, \(LHR\) indicates the number of logical neurons per physical hardware neuron (i.e., Neural Unit) in each layer of the network. For convolutional
Fig. 5: Illustration of the spike-to-neuron address extraction and weight accumulation flow.
layers, \(LHR\) indicates the number of logical output channels per Neural Unit. For example, (\(LHR-1,2,4,1\)) for net-5 means that the network has four hidden layers. Each neural unit in the first and second layers handles one and two output channels, respectively, and each neural unit handles four logical neurons in the third layer and one neuron in the fourth layer. See Section V for more details.
We use LUT-Latency improvement (depicted as **LUT-Latency Impr.** in Table I) as a metric to measure the improvement in FPGA area (LUT) and inference latency (i.e., total clock cycles for inferring a single test sample) over the prior works.
We vary \(LHR\) for each layer (by powers of two) for each network topology to explore the trade-offs between LUT and latency across datasets and topologies. In some cases, our baseline design may have worse latency or LUT than the prior works, mainly because the prior works are optimized for their specific fixed hardware configurations. However, by tuning \(LHR\), we can achieve similar or better efficiency in terms of either latency or LUT. For example, for the MNIST dataset, our design with \((LHR-4,8,8)\) for topology (1) reduces LUT by \(76\%\) and maintains the same latency as [12], and our design with \((LHR-4,4,16,8)\) for topology (2) achieves 0.21\(\times\) latency with similar LUT as [11]. Note that Fang et al. [12] do not report the PE size (which determines the parallelism of neuronal operations) for their synthesis results, although they claim that their PE size is parametric. On the other hand, Abderrahman et al. [11] state that their design executes the first hidden layer in fully-parallel mode and the rest of the layers in serial mode with only one hardware neuron per layer. Similarly, for the Fashion MNIST dataset, our design with \((LHR-32,32,8)\) for topology (3) outperforms the baseline by \(4.1\times\) at the expense of 13% more hardware resources compared to [33] and \((LHR-32,16,8,16,64)\) scheme for topology (4) outperforms the prior work by \(31.25\times\) with 27% less hardware than [34]. Unlike latency, which scales more steeply, energy serves as a more balanced metric that takes both latency and area into consideration. Additionally, it's worth mentioning that in a fully realized hardware implementation, after area optimization, energy efficiency can be further enhanced through clock gating.
Our baseline (highest resource allocated) mapping scheme for DVSGesture performs 2.5\(\times\) better in terms of cycles but 87.6\(\times\) higher energy compared to a prior ASIC implementation [35] (which is also sparsity-aware). Despite the high sparsity characteristic of the data (see Table I caption), the long latency can be attributed to the lengthy time steps required to achieve close to state-of-the-art (SoA) accuracy in snntorch. Yet, the highest attainable accuracy was \(71.23\%\) with 124 time steps (and beta set to 0.23). In comparison, the prior work manages to achieve higher accuracy while applying maxpooling to the input layer, directly reducing the input
Table I: Comparison of our \(LHR\) configurations against prior SNN accelerators [12, 11, 33, 34, 35] on the MNIST, FMNIST and DVSGesture datasets, reporting the network topology, target device, LUT/register utilization, cycles per image, LUT-latency improvement, energy per image, population coding and accuracy.
frame size from 128 down to 32. We were unable to apply maxpooling due to low accuracy. Furthermore, while an ASIC implementation might offer significant energy advantages due to its tailored design, our approach provides a valuable balance of performance improvement, combined with the benefits of flexibility inherent to FPGA-based implementations.
Our layer-wise analysis of the network showed that the majority of processing time is consumed by the second convolutional layer followed by the first fully connected layer, which also has a high input spike activity as shown in Table I caption. Therefore, in the configurations \((LHR-1,1,8,32)\), \((LHR-1,1,16,16)\), \((LHR-16,1,16,256)\), latency remains consistent largely because the second convolutional layer alone overshadows other layers' latencies in the pipeline. For \((LHR-1,1,32,32)\), latency increases due to the increased workload in the first fully connected layer's neural unit. Based on this analysis, we conclude that the \((LHR-16,1,16,256)\) configuration is the best mapping for this use case due to the reduction in the hardware area, which translates into lower inference energy. Importantly, our approach enables rapid exploration of the design space to achieve a 64% reduction in the inference energy compared to the sparsity-oblivious baseline scheme, while maintaining the same latency.
Figure 6 captures the high-level view of the Latency-LUT trend for the same topologies as in Table I. Some trends have irregular patterns (i.e., lower latency despite reducing LUT) because of the layer-wise allocation of hardware neurons. For instance, in net-3, hidden layer 1 and hidden layer 2 have the highest spike events (see Table I footnotes) and hence dominate the network latency. Therefore, a slight reduction in resources leads to a significant performance degradation for the network. In general, we observed that the spike event counts in Table I follow a ratio of \(1/3\) of the layer size for the first layer and about \(2/7\) for the second hidden layer. This is consistent with existing works [10] that suggest that the sparsity increases as the network gets deeper.
### _Spike Train Length vs. Population Coding Ratio_
A major, yet under-explored, hyperparameter in the SNN design space is spike train length. The spike train length specifies the length of the encoding window required to transform real-valued images (pixel resolution) into spikes. In general, a short spike train length leads to poor accuracy but fast computation time due to low precision during conversion and inadequate time for the neuron to complete the accumulation, e.g., there are not enough time steps to produce spikes. This drawback can be mitigated by employing a coding scheme known as "population coding" over the output layer of the network [37]. Indeed, a study published in _Current Opinion in Neurobiology_ [38] has shown that the brain extensively employs population coding in certain regions for efficient information representation. With this coding scheme applied to the SNN's classification layer, each class or category is represented by a pool of neurons, e.g., 10 neurons per class of the 10 categories in the MNIST dataset. Hence, we define the population coding ratio (PCR) as a parameter that controls how many logical neurons are assigned per class.
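Functionally, population-coded classification reduces to pooling spike counts per class, as in the following sketch (our illustration; the contiguous pool-per-class layout is an assumption):

```python
import numpy as np

def population_decode(out_spikes: np.ndarray, n_classes: int, pcr: int) -> int:
    """out_spikes: [time_steps, n_classes * pcr] binary output-layer spikes."""
    counts = out_spikes.sum(axis=0)                      # spikes per output neuron
    pooled = counts.reshape(n_classes, pcr).sum(axis=1)  # sum within each class pool
    return int(np.argmax(pooled))                        # winning class
```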
We investigate the combined impact of spike encoding window length and PCR on model accuracy and hardware latency. We vary the spike train length from four to 25 time steps in Figure 7 with three different PCRs (_TW_pop_1_ for one neuron per class, _TW_pop_10_ for 10 neurons per class, _TW_pop_30_ for 30 neurons per class) and show the scaling of the model's (a) accuracy vs. (b) latency (in clock cycles) for an MNIST image. We also compare our results with a previous work that performed a similar experiment on the same network topology (784-500-500-10). As the spike train length increases from four to 20, we observe a significant improvement in performance for _TW_pop_1_, as shown in Figure 7a; we observe no improvement beyond 25 (i.e., the best attainable accuracy at 25 time steps is 94.07%). In contrast, we notice the immediate effect of population coding in _TW_pop_10_ and _TW_pop_30_, where the accuracy starts at 96%, even for short spike train lengths, and continues to increase slightly. For _TW_pop_30_, we achieve 97.68% accuracy at 15 time steps, after which we observe a slight drop due to potential model over-fitting. Therefore, we can conclude that 15 time steps are sufficient to represent MNIST images with high resolution, although most of the existing works use 50 or more time steps [39].
In comparison, Fang et al. [12] outperform our accuracy by 1.85% and achieve the highest accuracy of 98.96% at 25 time steps. The superiority of this prior work can be attributed to its optimized spike encoding schemes, as opposed to the standard rate coding utilized in this work. In terms of latency, however, our clock cycle result for the best accuracy (29008 cycles) outperforms the prior work by more than \(2\times\), as shown in Figure 7b. While both the prior work and ours employ similar hardware design strategies and execute in a layer-wise pipelined manner, the latency savings can be attributed to their PE size (e.g., hardware neurons) which, as noted before, is not disclosed in their discussion.
Fig. 6: Overview of Latency-LUT trend for the topologies tabulated in Table I. Although clock cycles increase as the area decreases, in some cases the same (or even less) resource with different \(LHR\) combinations leads to lower clock cycles.
Higher PCR ratios lead to longer latency since more shifting iterations are required to propagate spikes from the pre-synaptic neurons to the output layer. For instance, latency leaps by \(2\times\) for _TW_pop_30_ when we change time steps from six to eight, whereas we observe a substantially lower pace of scaling in _TW_pop_10_ by \(1.43\times\). Another major drawback of the neural population coding is the increase in total neuron count. However, we argue that our design and the design space exploration enabled by our work can help to mitigate both drawbacks. In terms of latency, the design executes in a layer-wise pipelined manner. Moreover, the output layer is typically the smallest across the network and is inherently highly sparse (i.e., a lower number of spike events). Hence the increased execution cycles in the output layer do not directly translate into overall latency since that time would otherwise be spent by stalling the layer while waiting for the next spike train from the pre-synaptic layer. Overall, the key goal of this experiment is to demonstrate the ability of population coding to project temporal information into the spatial domain, thus favoring the inference latency of the network.
## VII Conclusion and Future Work
This article presented and demonstrated the effectiveness of sparsity-aware design space exploration for SNN hardware accelerators. Specifically, we have shown the benefits of utilizing layer-wise sparsity in SNNs, which we argue is a grand challenge for SNNs and a crucial consideration toward achieving brain-like hardware efficiency. We present a sparsity-driven hardware neuron allocation approach that can achieve up to \(76\%\) savings in hardware resources while maintaining a similar latency to prior SNN accelerators that do not consider sparsity. We also investigated the effects of two important model hyperparameters--spike train length and neuron population size--on SNN acceleration. Both hyperparameters have a significant impact on the trade-off between hardware performance and model accuracy. We further showed that the population coding technique is particularly advantageous for our design compared to previous work, since sparsity occurs least at the output layer thereby leading to minimal hardware overhead for our design.
For future research, we aim to implement a dynamic (runtime) scheme of sparsity-aware neuron allocation directly in hardware and explore the deployment of FPGA-based SNN accelerators for a wider variety of SNN models and datasets. Moreover, we plan to conduct detailed comparative analyses of SNNs against ANNs with a heavy focus on sparsity. We aim to delve deeply into the question of how to exploit the inherent potential efficiency benefits of SNNs, characterized by their simpler computations, to maintain a competitive edge over traditional ANNs in terms of computational efficiency.
|
2301.12262 | Arrhenius Crossover Temperature of Glass-Forming Liquids Predicted by an
Artificial Neural Network | The Arrhenius crossover temperature, $T_{A}$, corresponds to a thermodynamic
state wherein the atomistic dynamics of a liquid becomes heterogeneous and
cooperative; and the activation barrier of diffusion dynamics becomes
temperature-dependent at temperatures below $T_{A}$. The theoretical estimation
of this temperature is difficult for some types of materials, especially
silicates and borates. In these materials, self-diffusion as a function of the
temperature $T$ is reproduced by the Arrhenius law, where the activation
barrier is practically independent of the temperature $T$. The purpose of the
present work was to establish the relationship between the Arrhenius crossover
temperature $T_{A}$ and the physical properties of liquids directly related to
their glass-forming ability. Using a machine learning model, the crossover
temperature $T_{A}$ was calculated for silicates, borates, organic compounds
and metal melts of various compositions. The empirical values of the glass
transition temperature $T_{g}$, the melting temperature $T_{m}$, the ratio of
these temperatures $T_{g}/T_{m}$ and the fragility index $m$ were applied as
input parameters. It has been established that the temperatures $T_{g}$ and
$T_{m}$ are significant parameters, whereas their ratio $T_{g}/T_{m}$ and the
fragility index $m$ do not correlate much with the temperature $T_{A}$. An
important result of the present work is the analytical equation relating the
temperatures $T_{g}$, $T_{m}$ and $T_{A}$, and that, from the algebraic point
of view, is the equation for a second-order curved surface. It was shown that
this equation allows one to correctly estimate the temperature $T_{A}$ for a
large class of materials, regardless of their compositions and glass-forming
abilities. | Bulat N. Galimzyanov, Maria A. Doronina, Anatolii V. Mokshin | 2023-01-28T18:11:55Z | http://arxiv.org/abs/2301.12262v1 | # Arrhenius Crossover Temperature of Glass-Forming Liquids Predicted by an Artificial Neural Network
###### Abstract
The Arrhenius crossover temperature, \(T_{A}\), corresponds to a thermodynamic state wherein the atomistic dynamics of a liquid becomes heterogeneous and cooperative; and the activation barrier of diffusion dynamics becomes temperature-dependent at temperatures below \(T_{A}\). The theoretical estimation of this temperature is difficult for some types of materials, especially silicates and borates. In these materials, self-diffusion as a function of the temperature \(T\) is reproduced by the Arrhenius law, where the activation barrier is practically independent of the temperature \(T\). The purpose of the present work was to establish the relationship between the Arrhenius crossover temperature \(T_{A}\) and the physical properties of liquids directly related to their glass-forming ability. Using a machine learning model, the crossover temperature \(T_{A}\) was calculated for silicates, borates, organic compounds and metal melts of various compositions. The empirical values of the glass transition temperature \(T_{g}\), the melting temperature \(T_{m}\), the ratio of these temperatures \(T_{g}/T_{m}\) and the fragility index \(m\) were applied as input parameters. It has been established that the temperatures \(T_{g}\) and \(T_{m}\) are significant parameters, whereas their ratio \(T_{g}/T_{m}\) and the fragility index \(m\) do not correlate much with the temperature \(T_{A}\). An important result of the present work is the analytical equation relating the temperatures \(T_{g}\), \(T_{m}\) and \(T_{A}\), which, from the algebraic point of view, is the equation of a second-order curved surface. It was shown that this equation allows one to correctly estimate the temperature \(T_{A}\) for a large class of materials, regardless of their compositions and glass-forming abilities.
keywords: machine learning; physical properties; organic compounds; metallic alloys; silicates;
borates
## 1 Introduction
In the last decade, interest in the study of phase transformations in glass-forming liquids has increased significantly [1; 2; 3]. There is increasing evidence that such transformations can be related to the ability of a liquid to form a glassy state [4; 5; 6]. The results of recent studies show that the glass-forming ability of a liquid depends on the specifics of changes in its atomistic structure and collective dynamics near the melting temperature \(T_{m}\)[7; 8; 9]. The beginning of such changes in the dynamics of a liquid corresponds to the Arrhenius crossover temperature \(T_{A}\)[10; 11; 12; 13]. It is generally accepted that the atoms of a liquid do not form any bound structures above \(T_{A}\). In this case, the dependence of the logarithm of viscosity on the reverse temperature obeys a linear law (so-called high-temperature Arrhenius behavior). Below \(T_{A}\), individual groups of atoms become less mobile, which manifests as a deviation of the viscosity from the Arrhenius behavior typical for equilibrium liquids [14; 15; 16].
The existing empirical and theoretical methods for estimating \(T_{A}\) are mainly based on analysis of the temperature-dependence of liquid viscosity (or the structural relaxation time) and on determining the high-temperature linear regime in this relationship [17; 18; 19; 20]. As a rule, linear approximation methods most accurately characterize this linear regime. Such approximations are applicable only if the viscosity of the liquid is determined for a wide temperature range covering temperatures above and below the melting temperature (\(T_{m}\)). For organic (molecular) compounds and polymers belonging to the class of the so-called _fragile glass formers_, viscosity increases rapidly with decreasing temperature, which makes it possible to determine the deviation from the high-temperature Arrhenius behavior. For the so-called _strong glass formers_, including most metal melts, silicates and borates, the Arrhenius behavior practically does not change even when passing through the melting temperature and entering the region of supercooled melt. This manifests as a blurring of the region of transition from the high-temperature Arrhenius behavior to the low-temperature non-linear regime. Therefore, the accuracy of the temperature estimation can be low, and the estimated values of \(T_{A}\) practically do not correlate with the other physical characteristics of liquids. For example, an expression was proposed by A. Jaiswal et al. that relates the fragility index \(m\) to the \(T_{A}\) values of various glass-forming liquids. This expression takes into account the temperature dependence of the transport properties (mainly self-diffusion) and the dynamics of atoms near the
glass transition [8]. This expression gives a correct correspondence between \(m\) and \(T_{A}\) in the case of molecular glasses, though the results of calculations can differ greatly from empirical data in the case of metallic and optical glasses. Further, an analytical expression was proposed by T. Wen et al., according to which the glass-forming ability of a liquid is related to the reverse temperature \(1/T_{A}\): i.e., the higher the \(T_{A}\), the worse the liquid forms a stable glassy state [21]. However, this rule is valid only for a narrow class of glass formers that are similar in composition (mainly for metallic glasses). Therefore, obtaining an analytical expression that allows one to determine \(T_{A}\) based on the known key physical characteristics of glass-forming liquids remains an unsolved task. It is obvious that the correct solution of this task is possible using machine learning methods, which will allow us to reveal hidden relationships between physical characteristics and determine the most significant factors in estimating \(T_{A}\) [22, 23, 24, 25, 26].
The purpose of the present study was to determine how physical characteristics associated with the _overall kinetics_ of supercooled liquids correlate with each other. These characteristics are primarily
* the glass transition temperature (\(T_{g}\)), at which liquid becomes amorphous upon rapid cooling,
* the melting temperature (\(T_{m}\)),
* the Arrhenius crossover temperature (\(T_{A}\)),
* the Kauzmann temperature (\(T_{K}\)),
* the high-temperature limit (\(T_{\infty}\)), at which the viscosity tends to zero,
* the temperature (\(T_{0}\)) associated with the transition to a non-ergodic phase (for example, in the mode-coupling theory),
* the temperature ratio of \(T_{g}/T_{m}\), which is considered as one of the criteria for the glass-forming ability of liquids and
* the fragility index \(m\), which determines the rate of change in viscosity with temperature.
Some of these characteristics come to the fore for several reasons. First of all, these characteristics are available for experimental measurements. In addition, they are presented in various models that reproduce the kinetics and transport properties of supercooled melts. Model equations for the shear viscosity--such as the equations of the Vogel-Fulcher-Tammann-Hesse [12], Mauro et al. [27], Avramov-Milchev [28] and the equation obtained in the framework of the mode-coupling theory [29]--contain three or even more parameters to reproduce the viscosity over a range of
temperatures. This indicates that it is necessary to consider some temperature pairs associated with the supercooled melt phase. It is important to note that these temperature pairs occur in arbitrary combinations, which indirectly indicates the presence of correlations between "critical" temperatures in some way related to the glass transition. Moreover, this fact is directly supported by previous results relating to the description of the temperature dependence of the viscosity and crystallization rate characteristics of supercooled melts by the scale relations [9; 30; 3], where only the melting and glass transition temperatures, \(T_{m}\) and \(T_{g}\), appear as input parameters. Thus, the determination of specific correlation relationships between the "critical" temperatures of the kinetics of viscous melts is an important task, the solution of which will contribute to a deeper understanding of the solidification processes (glass transition and crystallization).
In the present work, the Arrhenius crossover temperature \(T_{A}\) is predicted for various types of glass-forming liquids, including silicates, borates, metal melts and organic compounds, using a machine learning method. The most significant factors among the physical characteristics of these glass-forming liquids are determined. Taking into account these factors, an analytical equation is obtained that allows one to accurately relate the temperature \(T_{A}\) to the physical properties of glass-forming liquids.
## 2 Data Set and Machine Learning Model
Using an appropriate set of physical properties as the neural network input parameters is crucial for correctly predicting the Arrhenius crossover temperature. These physical properties must uniquely characterize the nature of the material and must be determined with high accuracy by experimental or simulation methods. Here, it is quite reasonable to choose the fragility index (\(m\)), the melting temperature (\(T_{m}\)), the glass transition temperature (\(T_{g}\)) and the so-called reduced glass transition temperature (\(T_{g}/T_{m}\)), whose values are known for almost all types of glass-forming liquids and can be found in the scientific literature. Moreover, for some organic and metallic glass formers, the phenomenological relation between \(T_{g}\) and \(T_{A}\) is known [5; 15]. For most silicates and borates, there is no known correlation between these two temperatures. At the same time, there can be hidden relationships, which are usually revealed using machine learning methods.
The initial data set for machine learning included experimental and calculated data as well as information from databases (e.g., ITPhyMS (Information Technologies in Physical Materials Science) and the Materials Project) [8; 12; 31]. For our purpose, different glass-forming materials were selected,
among which were silicates, borates, organic compounds and metallic alloys (Cu, Zr, Ti, Ni, Pd-based) (see Table S1 in Supplementary Materials). We chose systems for which the melting temperature, the glass transition temperature and the fragility index are known. This data set was divided into the sets corresponding to _training_, _validation_ and _test_ regimes. The _training_ and _validation_ sets included all organic compounds and metallic alloys, along with several silicates and borates, for which \(T_{A}\) is known. The machine learning model was created on the basis of the training data set. The accuracy of the neural network was checked using the validation data set. The _test_ set included only silicates and borates, for which \(T_{A}\) was predicted. Note that to create an artificial neural network, we used instances for which all parameters are known. Predictions were made only for those systems for which the temperature \(T_{A}\) is unknown. The reliability of the obtained results is quite expected, since the formation of the neural network was performed using the data for systems of all categories, including those for which further predictions were made.
In the present work, the machine learning model was a feedforward artificial neural network (see Figure 1). This model has one input layer with four neurons, for which the values of the melting temperature, the glass transition temperature, the ratio \(T_{g}/T_{m}\) and the fragility index were taken from the data set. The values of these physical characteristics were renormalized and presented in the range [0, 1]. The next two layers of the neural network were hidden and consisted of 20 neurons each. The output layer contained only one neuron, which determined \(T_{A}\). To initialize the neural network, the values of all neurons and their weight coefficients were assigned randomly from the range [0, 1]. Subsequently, calculation of the values of all neurons was carried out as follows [32]:
\[n_{i}^{(k)}=f\left(\sum_{j=1}^{N_{k-1}}w_{ij}^{(k-1)}n_{j}^{(k-1)}+b_{i}^{(k)} \right). \tag{1}\]
Here, \(n_{i}^{(k)}\) is the value of the \(i\)th neuron in the \(k\)th layer (\(k=2,\,3,\,4\)); \(w_{ij}^{(k-1)}\) is the value of the \((k-1)\)th layer weight going from a neuron with index \(j\) to a neuron with index \(i\) from the \(k\)th layer; \(b_{i}^{(k)}\) is the bias weight acting on a neuron with index \(i\) from the \(k\)th layer; \(N_{k-1}\) is the number of neurons in the \((k-1)\)th layer; function \(f(...)\) is the sigmoid-type logistic function:
\[f(x)=\frac{1}{1+\exp(-x)}. \tag{2}\]
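For concreteness, the forward pass of Equations (1) and (2) for the 4-20-20-1 network can be sketched as follows; the random weights are placeholders for the trained values, and the input vector stands for the rescaled \(T_{m}\), \(T_{g}\), \(T_{g}/T_{m}\) and \(m\):

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 20, 20, 1]                       # input, two hidden layers, output
weights = [rng.random((sizes[k + 1], sizes[k])) for k in range(3)]
biases = [rng.random(sizes[k + 1]) for k in range(3)]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))          # Equation (2)

def forward(x):
    for W, b in zip(weights, biases):
        x = sigmoid(W @ x + b)               # Equation (1), layer by layer
    return x                                 # normalized estimate of T_A

print(forward(np.array([0.6, 0.3, 0.5, 0.4])))   # placeholder rescaled inputs
```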
The neural network was trained using the backpropagation algorithm, according to which the
value of the weight coefficient was adjusted as follows [33]:
\[w_{ij}^{(k),\,new}=w_{ij}^{(k)}-\gamma\frac{\partial\chi(s)}{\partial w_{ij}^{(k) }}. \tag{3}\]
\(\gamma\) is the training rate, the value of which is usually chosen in the range [0, 1]. In the present work, we took the rate of \(\gamma=0.3\) as optimal for the considered machine learning model. The value of the loss function \(\chi(s)\) is determined as
\[\chi(s)=\frac{1}{2}\left[n_{1}^{(4)}(s,l)-n(l)\right]^{2}, \tag{4}\]
where \(s\) is the training iteration number (i.e., epoch number); \(n_{1}^{(4)}(s,l)\) is the value of the output neuron at the \(s\)th epoch for the \(l\)th element from the training data set; \(n(l)\) is the required value of the output neuron for the \(l\)th element. To train the machine learning model, 2400 epochs were used. The gradient of the loss function with respect to each weight was computed by the chain rule, according to which Equation (3) can be represented in the following form:
\[w_{ij}^{(k),\,new}=w_{ij}^{(k)}-\gamma\delta_{i}n_{i}^{(k)}\frac{e^{-W_{i}^{(k )}}}{\left[1+e^{-W_{i}^{(k)}}\right]^{2}}, \tag{5}\]
where
\[\delta_{i}=\begin{cases}n_{1}^{(4)}(l)-n(l)&\text{if $i$ is the output layer neuron}\\ \sum_{j}w_{ij}\delta_{j}&\text{if $i$ is a neuron of the hidden layers} \end{cases},\]
\[W_{i}^{(k)}=\sum_{j=1}^{N_{k-1}}w_{ij}^{(k-1)}n_{j}^{(k-1)}. \tag{6}\]
This backpropagation algorithm allows one to control the training procedure. The criterion for finishing this procedure is the minimal error between the results of the output neuron and the required values from the validation data set.
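Continuing the sketch above, one backpropagation update of Equations (3)-(6) for a single training example can be written as follows (a compact illustration rather than the authors' code; the sigmoid derivative of Equation (5) is expressed as \(a(1-a)\) in terms of the cached activations):

```python
def train_step(x, target, gamma=0.3):
    """One gradient-descent update with training rate gamma, as in the text."""
    acts = [x]
    for W, b in zip(weights, biases):        # forward pass, caching activations
        acts.append(sigmoid(W @ acts[-1] + b))
    # output-layer delta from the loss of Equation (4), then propagate backwards
    delta = (acts[-1] - target) * acts[-1] * (1.0 - acts[-1])
    for k in reversed(range(len(weights))):
        grad_W = np.outer(delta, acts[k])
        if k > 0:
            # compute the next delta with the pre-update weights
            delta_prev = (weights[k].T @ delta) * acts[k] * (1.0 - acts[k])
        weights[k] -= gamma * grad_W
        biases[k] -= gamma * delta
        if k > 0:
            delta = delta_prev
    return acts[-1]

train_step(np.array([0.6, 0.3, 0.5, 0.4]), target=0.7)  # one sample of one epoch
```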
## 3 Identification of Significant Physical Properties
To identify the physical characteristics that are most significant for estimating the temperature \(T_{A}\), calculations were carried out for various combinations of the neural network's input parameters. As shown in Figure 2a, retraining of the machine learning model was performed for various combinations of \(T_{m}\), \(T_{g}\), \(T_{g}/T_{m}\) and \(m\) using the training and validation data sets. For each considered
combination, the root mean square error was calculated:
\[\xi=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\Big{(}T_{A}^{(pred)}-T_{A}^{(emp)}\Big{)}^{2}}. \tag{7}\]
Obviously, the smaller the value of \(\xi\), the more accurately \(T_{A}\) is determined. In Equation (7), \(T_{A}^{(pred)}\) is the Arrhenius crossover temperature predicted by the machine learning model; \(T_{A}^{(emp)}\) is the empirical Arrhenius crossover temperature; \(N\) is the number of elements in the data set. The obtained results reveal that the most significant physical quantities correlating with \(T_{A}\) are the glass transition temperature \(T_{g}\) and the melting temperature \(T_{m}\). This is confirmed by the relatively small value of the root mean square error, which does not exceed \(\xi=11.4\,\)K. The quantities \(T_{g}/T_{m}\) and \(m\) are less significant in the estimation of the temperature \(T_{A}\), which is clearly manifested in the relatively large \(\xi\), with values of up to \(25.8\,\)K. The smallest error, \(\xi\approx 10.5\,\)K, is obtained by taking into account all four physical quantities, at which the best agreement between the empirical values of \(T_{A}\) and the result of the machine learning model is observed.
Figure 2b shows that the empirical and predicted temperatures \(T_{A}\) correlate well with each other. Regarding organic compounds, an insignificant variation between the empirical and predicted \(T_{A}\) can be observed for saccharides (for example, fructose, trehalose). This was mainly due
Figure 1: Scheme of the machine learning model based on the feedforward artificial neural network.
to insufficient data in the training set for this class of materials. For metal melts, the variation in the values of \(T_{A}\) can be observed for alloys based on rare earth elements (for example, Pr\({}_{60}\)Ni\({}_{10}\)Cu\({}_{20}\)Al\({}_{10}\)). The empirical and predicted values of \(T_{A}\) have a minimum variation for silicates and borates. This result indicates that artificial neural networks have good trainability with respect to these materials. The reason for this could be that the change in viscosity of silicate and borate melts occurs in a similar way over a wide temperature range, including near the melting temperature [12]. Such universality in the temperature dependencies of viscosity is preserved when the composition of the melts changes, for example, by adding alkali oxides (Li\({}_{2}\)O, Na\({}_{2}\)O, K\({}_{2}\)O, etc.) or metal oxides (Al\({}_{2}\)O\({}_{3}\), MgO, PbO, etc.).
## 4 Regression Model for Arrhenius Crossover Temperature
Figure 3a shows the correspondence between the glass transition temperature \(T_{g}\) and the predicted temperature \(T_{A}\) for various glass-forming liquids. For organic compounds, the correspondence between \(T_{A}\) and \(T_{g}\) is reproduced according to the linear law \(T_{A}\simeq k\cdot T_{g}\) with \(k=1.4\). It
Figure 2: (a) Diagram of the root mean square error \(\xi\) of estimation of the Arrhenius crossover temperature \(T_{A}\) calculated for various combinations of the quantities \(T_{m}\), \(T_{g}\), \(T_{g}/T_{m}\) and \(m\), which were the inputs of the machine learning model. Inset: \(T_{A}^{(pred)}\) and \(T_{A}^{(emp)}\) are the predicted and empirical Arrhenius crossover temperatures, respectively. (b) Correspondence between the empirical \(T_{A}\) and the \(T_{A}\) predicted by the machine learning model using the validation data set.
is noteworthy that this correspondence between \(T_{A}\) and \(T_{g}\) was predicted earlier (for example, see Refs. [8, 34]). For metallic glass formers, there is a relationship between \(T_{A}\) and \(T_{g}\) of the form \(T_{A}=k\cdot T_{g}\), where \(k=2.0\pm 0.2\). As a rule, such a relationship between the temperatures \(T_{A}\) and \(T_{g}\) is universal for metal alloys containing two to five different components [8]. For silicates and borates, there is no clear correlation between \(T_{A}\) and \(T_{g}\): the known laws do not reproduce the correspondence between \(T_{A}\) and \(T_{g}\). The results given in Figure 3b reveal a clear correlation between \(T_{A}\) and \(T_{m}\) for silicates and borates, whereas the variation in the values of these temperatures is more pronounced for organic and metallic glass formers. Despite this, the correspondence between \(T_{A}\) and \(T_{m}\) is reproduced by the linear law
\[T_{A}=k\cdot T_{m}\;\;\mbox{(where $k=1.1\pm 0.15$)} \tag{8}\]
regardless of the type of glass-forming liquid. It is noteworthy that this result agrees with the results of Refs. [35, 36].
Relationship (8) is an empirical result that has no theoretical explanation and is only _an approximation_. The error of this relationship depends both on the specific type of material and on the category to which this material belongs (i.e., organic, metallic, silicate). As shown in Figure 3b, relationship (8) only qualitatively reproduces the empirical data for a large data set. At the same time, one can be convinced that for certain categories of materials, this relationship yields very accurate results. Thus, for example, the available data for organic materials and metallic systems are more correctly reproduced by the quadratic polynomials than by the linear relationship (see Figure 3b). On the other hand, the results for silicates and borates reveal a general trend of increasing \(T_{A}\) with \(T_{m}\), which can be described by the linear relationship \(T_{A}=aT_{m}+b\), where the parameters \(a\) and \(b\) take different values for materials from different categories. In this regard, it is quite natural to expect that the _overall correlation_ between \(T_{A}\) and \(T_{m}\) is not as so simple as prescribed by relationship (8), and it requires taking into account other physical characteristics.
For _implicit_ ("hidden") correlations between different parameters, it is quite natural, and frequently the case, that the parameters do not appear in the resulting correlation relation as single additive terms, but in the form of combinations (products or ratios). For example, in contrast to the methodology of artificial neural networks, this is most clearly manifested in the method of joint
accounting for arguments using the Kolmogorov-Gabor polynomial [37; 38; 39]:
\[y(x_{1},\ldots,x_{n})=a_{0}+\sum_{i=1}^{n}a_{i}x_{i}+\sum_{i=1}^{n}\sum_{j=i}^{n}a_{ij}x_{i}x_{j}+\sum_{i=1}^{n}\sum_{j=i}^{n}\sum_{k=j}^{n}a_{ijk}x_{i}x_{j}x_{k}+\dots, \tag{9}\]
which determines the relationship of a parameter \(y\) with the parameters \(x_{1}\), \(x_{2}\),... \(x_{i}\),... In the obtained model of the artificial neural network, the appearance of the parameter \(T_{g}/T_{m}\), together with the individual parameters \(T_{g}\) and \(T_{m}\), directly indicates that the Arrhenius crossover temperature \(T_{A}\) correlates not only with the absolute values of the melting and glass transition temperatures for different systems, but also with their ratio. This result is fully consistent with the theoretical description of crystallization rate characteristics of supercooled melts within the reduced temperature scale \(\widetilde{T}_{MG}\) and universal scaled relations [30; 9]. This point is discussed in detail in Ref. [30] (see text on page 104502-2).
To obtain a general expression relating the temperatures \(T_{g}\), \(T_{m}\) and \(T_{A}\), the reproducibility of
Figure 3: (a) Correspondence between the glass transition temperature \(T_{g}\) and the predicted value of the Arrhenius crossover temperature \(T_{A}\) for different types of glass formers. (b) Correlation between the melting temperature \(T_{m}\) and the predicted temperature \(T_{A}\). The dashed and dotted lines show the interpolation by the quadratic polynomials: \(T_{A}=409-1.23T_{m}+0.003T_{m}^{2}\) in the case of organic materials and \(T_{A}=-2161+4.6T_{m}-0.0014T_{m}^{2}\) for metallic systems. The dot-dash lines show the linear fit by equations \(T_{A}=318+0.9T_{m}\) (for silicates) and \(T_{A}=465+0.71T_{m}\) (for borates).
these temperatures was tested in the framework of the nonlinear regression model:
\[T_{A}(T_{g},T_{m})=\sum_{i=1}^{k}\left(a_{i}T_{g}^{i}+b_{i}T_{m}^{i}+c_{i}T_{g}^{ i}T_{m}^{i}\right). \tag{10}\]
The temperatures \(T_{g}\) and \(T_{m}\) are input parameters determined by experimental values; the temperature \(T_{A}\) is the resulting factor; \(k\) is an integer value chosen during the regression analysis; \(a_{i}\), \(b_{i}\) and \(c_{i}\) are the fitting coefficients, whose values are determined by enumeration to obtain the best agreement between the empirical temperature \(T_{A}\) and the result of Equation (10).
The values of the fitting coefficients were determined by regression analysis: \(a_{1}=b_{1}=0.7016\), \(a_{2}=-7.52\times 10^{-4}\,\mathrm{K}^{-1}\), \(c_{1}=4.43\times 10^{-4}\,\mathrm{K}^{-1}\). All other fitting coefficients were found to be zero. With these values of the fitting coefficients, we obtained the minimum error between the empirical \(T_{A}\) and the result of Equation (10) for the considered glass-forming liquids. Thus, the temperatures \(T_{g}\), \(T_{m}\) and \(T_{A}\) can be related by the nonlinear regression model:
\[T_{A}(T_{g},T_{m})=a_{1}T_{g}+a_{2}T_{g}^{2}+b_{1}T_{m}+c_{1}T_{g}T_{m}. \tag{11}\]
In algebra, an equation of this type is known as the equation of a second-order curved surface. Figure 4 shows that Equation (11) correctly determines the correspondence between the temperatures \(T_{g}\), \(T_{m}\) and \(T_{A}\) for all considered glass formers. The average error between the empirical data and the result of Equation (11) is \(\sim\)10%. The plane surface corresponds to the data for organic compounds and metal melts. The deviation from this surface and its transformation into a curved surface occurs due to taking into account the data for silicates and borates (see Figure 4b). Therefore, Equation (11) can be applied to determine \(T_{A}\) for various types of materials, regardless of composition. Note that Equation (11) is an empirical result, the rigorous physical meaning of which has not yet been established. This is also true for relationship (8), which likewise has no clear physical meaning. On the other hand, Equation (11) shows that the three key temperatures associated with a change in kinetic regime (as in the case of \(T_{A}\)) and with a change in thermodynamic phase (as for \(T_{m}\) and \(T_{g}\)) correlate in some universal way with each other for melts that are different in physical nature. The necessity of the quadratic contribution in Equation (11) to reproduce the empirical data becomes obvious if these data are represented in the space of three parameters--temperatures \(T_{A}\), \(T_{m}\) and \(T_{g}\)--as shown in Figure 4b. As can be clearly seen in this representation, the empirical data form a second-order curved surface, for whose analytical reproduction the quadratic contributions \(T_{g}^{2}\) and \(T_{g}T_{m}\) are necessary. Moreover, since the curvature of this
surface is significant, its projection onto the coordinate plane (\(T_{A}\), \(T_{m}\)) gives a certain curve that can be reproduced by a straight line only _approximately_ (for example, as prescribed by relationship (8)). It should be noted that such a representation of the empirical data in (\(T_{A}\), \(T_{m}\), \(T_{g}\))-space was not expected or originally planned; Equation (11) is a direct result of the regression analysis.
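For readers who wish to apply Equation (11) directly, the following is a minimal Python sketch using the fitted coefficients reported above; the example temperatures are hypothetical and merely illustrate that the output stays close to the \(T_{A}\simeq 1.1T_{m}\) trend.

```python
import numpy as np

# Fitted coefficients of Equation (11); a2 and c1 carry units of 1/K.
A1 = 0.7016
B1 = 0.7016          # a1 = b1
A2 = -7.52e-4        # K^-1
C1 = 4.43e-4         # K^-1

def arrhenius_crossover_temperature(T_g, T_m):
    """Estimate T_A (in K) from T_g and T_m (in K) via Equation (11)."""
    T_g, T_m = np.asarray(T_g, float), np.asarray(T_m, float)
    return A1 * T_g + A2 * T_g**2 + B1 * T_m + C1 * T_g * T_m

# Hypothetical metallic glass former: T_g = 700 K, T_m = 1200 K.
print(arrhenius_crossover_temperature(700.0, 1200.0))  # ~1337 K, close to 1.1*T_m
```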
To determine the error in estimating \(T_{A}\) for materials of various categories, the root mean square relative error (RMSRE) was calculated:
\[\text{RMSRE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{T_{A}^{(emp)}-T_{A}^ {(calc)}}{T_{A}^{(emp)}}\right)^{2}}, \tag{12}\]
where \(T_{A}^{(emp)}\) is the empirical value of \(T_{A}\); \(T_{A}^{(calc)}\) is the \(T_{A}\) computed by the various methods: the machine learning model, relationship (8), and Equation (11). Figure 5 shows that the accuracy of the estimation of \(T_{A}\) depends on the applied method and the category of material. Thus, for silicates and borates, the error of the machine learning prediction is lower than that of the other methods. In this case, Equation (11) is more accurate than relationship (8). For metallic systems, the errors of all
Figure 4: (a) Correspondence between the Arrhenius crossover temperature (\(T_{A}\)), the melting temperature (\(T_{m}\)) and the glass transition temperature (\(T_{g}\)). Circle and square markers denote predicted and empirical data, respectively. These data are compared with the results of Equation (11), which are presented as a curved surface. (b) The same data viewed from a different angle, which makes the curvature of the surface apparent.
methods are comparable, although Equation (11) produces the smallest error. For organic materials, the machine learning prediction is more accurate than other methods. In this case, the error of Equation (11) is higher than the error of relationship (8). This is due to the fact that for materials with complex structures, such as organic materials, the glass transition temperature is determined ambiguously. Namely, for this category of materials, the temperature \(T_{g}\) in relation to the melting temperature \(T_{m}\) can vary widely compared to silicate, borate and metallic systems. For example, for organic materials, the variation in \(T_{g}/T_{m}\) exceeds 0.5, whereas for borate, silicate and metallic systems this variation is usually less than 0.4.
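As a complement to Equation (12), the following is a minimal sketch of the RMSRE computation; the arrays in the usage example are hypothetical placeholders, not data from this study.

```python
import numpy as np

def rmsre(T_emp, T_calc):
    """Root mean square relative error between empirical and calculated T_A, Eq. (12)."""
    T_emp, T_calc = np.asarray(T_emp, float), np.asarray(T_calc, float)
    return np.sqrt(np.mean(((T_emp - T_calc) / T_emp) ** 2))

# Hypothetical empirical values and predictions for one category of materials:
T_emp = np.array([1250.0, 980.0, 1410.0])
T_pred = np.array([1195.0, 1020.0, 1388.0])
print(rmsre(T_emp, T_pred))  # per-method error, as compared in Figure 5
```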
## 5 Conclusions
Using a machine learning model, the physical characteristics of various types of glass-forming liquids that are most significant for the correct prediction/estimation of the Arrhenius crossover temperature were determined. Such significant factors are the glass transition temperature and the melting temperature. It has been established that the fragility index and the reduced glass transition temperature (\(T_{g}/T_{m}\)), which is directly related to the glass-forming ability of a liquid,
Figure 5: Root mean square relative error between the empirical values of \(T_{A}\) and the values of \(T_{A}\) computed by different methods for silicates, borates, metallic systems and organic materials.
are insignificant factors. These factors do not affect the accuracy of \(T_{A}\) estimation. The correctness of the obtained results was confirmed by the presence of a good correlation between the empirical values of \(T_{A}\) and the \(T_{A}\) predicted by a machine learning model. Moreover, the result of the machine learning model gives the correct relationships between the temperatures \(T_{A}\), \(T_{g}\) and \(T_{m}\), which agree with the previously established empirical rules \(T_{A}\simeq 1.1T_{m}\) (for all types of liquids), \(T_{A}\simeq 1.4T_{g}\) (for organic compounds) and \(T_{A}\simeq 2.0T_{g}\) (for metallic systems). Based on the results of nonlinear regression analysis, an equation was obtained that allows one to determine the temperature \(T_{A}\) by using known temperatures \(T_{g}\) and \(T_{m}\). It was shown that this equation gives the correct values of \(T_{A}\) for various types of liquids, including silicates and borates, for which direct estimation of \(T_{A}\) can be difficult.
## Acknowledgment
This research was funded by the Russian Science Foundation (project no. 19-12-00022).
|
2304.14973 | Evolutionary Multi-Objective Aerodynamic Design Optimization Using CFD
Simulation Incorporating Deep Neural Network | An evolutionary multi-objective aerodynamic design optimization method using
the computational fluid dynamics (CFD) simulations incorporating deep neural
network (DNN) to reduce the required computational time is proposed. In this
approach, the DNN infers the flow field from the grid data of a design and the
CFD simulation starts from the inferred flow field to obtain the steady-state
flow field with a smaller number of time integration steps. To show the
effectiveness of the proposed method, a multi-objective aerodynamic airfoil
design optimization is demonstrated. The results indicate that the
computational time for design optimization is suppressed to 57.9% under 96
cores processor conditions. | Yukito Tsunoda, Akira Oyama | 2023-04-28T16:55:21Z | http://arxiv.org/abs/2304.14973v1 | Evolutionary Multi-Objective Aerodynamic Design Optimization Using CFD Simulation Incorporating Deep Neural Network
###### Abstract
An evolutionary multi-objective aerodynamic design optimization method using the computational fluid dynamics (CFD) simulations incorporating deep neural network (DNN) to reduce the required computational time is proposed. In this approach, the DNN infers the flow field from the grid data of a design and the CFD simulation starts from the inferred flow field to obtain the steady-state flow field with a smaller number of time integration steps. To show the effectiveness of the proposed method, a multi-objective aerodynamic airfoil design optimization is demonstrated. The results indicate that the computational time for design optimization is suppressed to 57.9% under 96 cores processor conditions.
## I.Nomenclature
\(AoA\) = angle of attack
\(M\) = Mach number
\(C_{L}\) = lift coefficient
\(C_{D}\) = drag coefficient
\(C_{p}\) = pressure coefficient
\(e\) = total energy nondimensionalized by density and sound speed of the ambient condition
\(Re\) = Reynolds number based on chord length
\(r_{LE}\) = leading-edge radius
\(x\) = horizontal coordinate
\(y\) = vertical coordinate
\(L\) = distance from the surface of the airfoil
\(u\) = velocity in \(x\)-direction nondimensionalized by the speed of sound of ambient conditions
\(v\) = velocity in \(y\)-direction nondimensionalized by the speed of sound of ambient conditions
\(X_{LO}\) = lower crest abscissa
\(X_{UP}\) = upper crest abscissa
\(Z_{LO}\) = lower crest ordinate
\(Z_{TE}\) = trailing edge ordinate
\(Z_{UP}\) = upper crest ordinate
\(Z_{XXLO}\) = lower crest curvature
\(Z_{XXUP}\) = upper crest curvature
\(\alpha_{TE}\) = trailing edge direction
\(\beta_{TE}\) = trailing edge wedge angle
\(\Delta Z_{TE}\) = trailing edge thickness
\(\rho\) = density nondimensionalized by the density of ambient conditions
## II.Introduction
Multi-objective evolutionary algorithms (MOEAs) have been applied in various fields of aerospace engineering because MOEAs have excellent features (e.g., the capability of identifying Pareto-optimal solutions of a multi-objective design optimization problem in a single run). For example, MOEAs have been applied to optimize spacecraft trajectory design [1, 2], earth observation satellite mission planning [3], and rocket engine design [4, 5, 6, 7]. They have been applied to aerodynamic design problems such as airfoil shape designs [8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. However, an issue is that the large number of design candidates created by MOEAs need to be evaluated using computational fluid dynamics (CFD) simulation, which requires a long computational time [12]. For example, in the flame deflector optimization problem [18], the computational time required to evaluate each design candidate by CFD simulation was up to 7 h with 130 processors (1040 cores) of the K supercomputer, and 2,500 design candidates were evaluated. Consequently, this problem took more than two weeks to solve using the K supercomputer. Therefore, a problem is that aerodynamic design optimization using an MOEA can only be realized under abundant computing resources. Many approaches have been proposed to address this problem [10, 16, 17, 20, 21, 22].
One method for reducing the computational time is to replace the CFD simulation with a surrogate model [10, 16, 17]. The surrogate model comprises a computationally cheap function such as a neural network [10, 19]. In these studies, first, a certain number of CFD simulations were performed to acquire the sampling data, and a surrogate model was constructed based on these sampling points. Then, the model was used to evaluate the performance of design candidates. In this method, CFD simulations are only needed to construct the model. However, issues remain in terms of the use of surrogate models for aerodynamic design optimization. The optimization of general aerodynamic design problems requires a large number of design parameters and thus a high-dimensional design space [20, 22]. The number of sample data required to create an accurate surrogate model grows exponentially with the number of design parameters. When the number of sample data is insufficient, the inaccuracy caused by the surrogate model becomes an issue [20, 21]. Moreover, in a high-dimensional design space, the selection of the sampling points is an important problem. The inaccuracy also tends to be more significant when the model infers a value outside the distribution of the sample data set [23, 24].
Recently, a technique that incorporates a deep neural network (DNN) into CFD simulations has been proposed [25, 26]. In this approach, the DNN infers the flow field, and the CFD simulation starts from the inferred flow field to obtain the steady-state flow field. Using this method, the number of time steps required to reach the steady-state flow is reduced, while the accuracy of the result is not compromised because the performance of the design is still evaluated by CFD simulations. This method is suitable for use in the CFD simulation component of MOEAs, wherein various design candidates must be evaluated. However, the effectiveness of this method for multi-objective aerodynamic design optimization has not been clarified. In particular, when using the DNN, the process of preparing the training data and training the DNN is indispensable, yet no report has discussed the total computational cost including this training process. Thus, an MOEA method implementing this process needs to be developed. In addition, the inference accuracy of the DNN depends on the training data used to train it. Therefore, the method of gathering the training data is a critical issue.
The purpose of this study is to propose an evolutionary multi-objective aerodynamic design optimization method using the CFD simulation incorporating the DNN for the steady-state simulation. To realize this, an MOEA method implementing the process of training the DNN is developed. To avoid additional time for preparing the training data, the grid data and flow field data obtained during the MOEA process are used as the training data. The design candidates in the 1st and 2nd generations of the MOEA are evaluated using conventional CFD simulations, and the DNN is trained using these results. In addition, the results obtained using the CFD simulation incorporating the DNN are equal to those of the conventional method; therefore, these results can also be used as training data. To further improve the inference accuracy, the DNN is retrained using these results during the MOEA. Moreover, to suppress the time increase caused by training the DNN, the DNN training processes are performed in parallel with the evolution process of the MOEA. This MOEA method is evaluated on a sample problem of 2D airfoil shape design optimization. The advantage that the computational time can be suppressed using this method is shown first.
It is then shown that the result obtained by the proposed design optimization method is equivalent to that of the conventional method.
## III.Proposed Approach
To reduce the computational time of an aerodynamic design optimization using an MOEA, we propose to use the CFD simulation method incorporating a DNN. In this chapter, first, the MOEA method and aerodynamic design optimization using an MOEA are described. Then, the CFD simulation method incorporating a DNN is described. After that, the overall flow of the proposed MOEA for this CFD simulation method using the DNN is presented.
### _Evolutionary Aerodynamic Design Optimization Method_
MOEAs are optimization methods based on the mechanism of evolution. Figure 1 shows the procedure of the design optimization. Parameter sets representing designs are generated as the initial population, and the performance of each parameter set is evaluated. The performance values obtained by the evaluation are treated as fitness. The population of the next generation is reproduced by selection and pairing based on this fitness. By repeating this procedure, the design optimization progresses like an evolution. With this procedure, Pareto solutions and optimal designs can be obtained automatically.
In the case of evolutionary aerodynamic design optimization, the evaluation of the fitness is performed by CFD simulation. Figure 1 shows an example of the optimization of the design of a 2-dimensional airfoil shape. The airfoil shapes are generated depending on the parameter sets. The grid used for the CFD simulation is created for each airfoil shape, and the CFD simulation is performed to derive the steady-state flow field for each airfoil. The performances of each airfoil shape, such as the lift coefficient (\(C_{L}\)) and drag coefficient (\(C_{D}\)), are calculated from the derived flow field, and these values serve as the fitness of the corresponding parameter sets.
### _CFD Simulation Incorporating a DNN for a Steady-State Flow Simulation_
Figure 2 shows the CFD simulation methods used to evaluate the performances of designs. As shown in Fig. 2(a), the conventional method for steady-state flow simulation often uses a uniform flow field as the initial state, and the CFD simulation is performed until a steady state is reached. In this method, there is a large difference between the initial state and the steady state. Therefore, the number of time steps of the CFD simulation is large in this case. To overcome this issue, a method using DNN inference was reported [25, 26]. The procedure of this method is shown in Fig. 2(b). In the reported method, the CFD simulation uses a flow field inferred by the DNN as its initial state. Because the simulation starts from an inferred flow field close to the steady state, the number of time steps can be reduced. Since the computational time of DNN inference is considerably smaller than that of CFD simulation, the total computational time can be reduced.
Fig. 1: Flowchart of evolutionary multi-objective aerodynamic design optimization using CFD simulation.
The DNN uses the position information of the structured grid for the CFD simulation as input information. It can infer the flow field regardless of the positions of the grid points, although these positions vary depending on the shape of the design candidates [25, 26]. Therefore, this DNN is suitable for use in the CFD simulation part, which is used to evaluate the various shapes of the design candidates of the MOEA. The output information of the DNN is the values of the flow variables on the grid points, comprising physical or conserved quantities such as the velocity \(v\) or momentum \(\rho v\). These flow variables can be used as the flow field data of the CFD simulation. Thus, the proposed DNN is suitable for inferring the flow field to be applied to the CFD simulation.
In addition, the steady-state flow field is calculated by CFD simulation. Therefore, the flow field obtained is equivalent to that obtained using the conventional CFD simulation, even if the accuracy of the DNN inference degrades. As a result, the calculated performance values of \(C_{L}\) and \(C_{D}\) are accurate, and the result of the proposed design optimization method is also equivalent to that of the conventional method.
### _Evolutionary Aerodynamic Design Optimization Method Including the Process of Training the DNN_
To use this CFD simulation incorporating the DNN, the process of preparing the training data and training the DNN is required. Therefore, an MOEA including the process of training the DNN is developed in this work. The procedure of this MOEA is shown in Fig. 3. The designs of the 1st and 2nd generations in the MOEA are evaluated using the conventional CFD simulation. Then, the DNN is trained using the grid and flow field data obtained in these generations. After the DNN is trained, the CFD simulation part is replaced with the CFD simulation incorporating the DNN.
In the MOEA, new design candidates of the latest generation are often generated outside of the data distribution of the design candidates of previous generations. This is caused by an important property of the MOEA: unknown superior designs can be discovered by generating certain design candidates outside of the data distribution of previous generations [27]. However, at the initial training stage of the DNN, these design candidates have not yet been generated and are not included in the training dataset, and the inference accuracy of the DNN degrades for such candidates [23, 24]. The number of time steps of the CFD simulation increases when the inference accuracy of the DNN degrades. However, the flow field derived by the CFD simulation is accurate regardless of whether a DNN is used. Therefore, the results obtained during the MOEA process can also be used as training data. The inference accuracy can be further improved by training the DNN using data that include these latest generations of the MOEA. Thus, the DNN is retrained using the grid and flow field data including the latest generations. After the DNN is retrained, the DNN part is replaced with the updated one. In other words, because it is assumed that the DNN
Fig. 2: Procedures of the conventional CFD simulation (a) and the CFD simulation incorporating the DNN (b). Source: Adapted from [26]. In the CFD simulation incorporating the DNN, a steady-state flow field is inferred by the DNN and the CFD simulation is performed from this inferred flow field.
will be updated during the design optimization process in this method, a DNN trained with an insufficient number of training data can still be applied at the initial stage.
The increase in the total computational time caused by the process of training the DNN should be suppressed. Therefore, the DNN training processes are performed in parallel with the evolution process of the MOEA. At the initial stage, while the DNN is being trained, the MOEA continues using the conventional CFD simulation. After the DNN is trained, the CFD simulation part is replaced with the CFD simulation incorporating the DNN. Similarly, the DNN retraining processes are performed in parallel with the evolution process. After the retraining process is completed, the DNN part is replaced with the updated one.
Here, in the case of retraining the DNN, two different training data sets can be created: a dataset comprising the designs of the latest generations (e.g., design candidates of the 11th and 12th generations), and a dataset comprising all designs up to that point (e.g., design candidates from the 1st to 12th generations). Therefore, both cases are evaluated here: the method of "_CFD simulation incorporating an evolution update-type DNN using latest designs_" (Method 1) and the method of "_CFD simulation incorporating an evolution update-type DNN using all designs_" (Method 2). A high-level sketch of the resulting optimization loop is given below.
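The following Python sketch illustrates the procedure of Fig. 3; it is not the authors' implementation. `run_cfd`, `train_dnn`, and `evolve` are placeholder stubs, and the exact trigger generations for training and retraining are assumptions that only approximate the schedule described above.

```python
from concurrent.futures import ThreadPoolExecutor

def run_cfd(design, dnn=None):
    # Placeholder: CFD from a uniform initial field, or from a DNN-inferred field.
    return {"grid": design, "flow": None, "CL": 0.0, "CD": 1.0}

def train_dnn(samples):
    # Placeholder: (re)train the flow-field DNN on (grid, flow field) pairs.
    return object()

def evolve(population, results):
    # Placeholder: EBT mate selection, MOEA/D-M2M decomposition, SBX crossover.
    return population

population = [[0.0] * 10 for _ in range(96)]       # 96 PARSEC parameter sets
dnn, pending, archive = None, None, []
executor = ThreadPoolExecutor(max_workers=1)       # DNN training runs in parallel

for gen in range(1, 101):
    results = [run_cfd(d, dnn) for d in population]    # fitness evaluation
    archive.extend(results)
    if gen == 2:                                       # initial training data gathered
        pending = executor.submit(train_dnn, archive)
    elif gen > 2 and gen % 10 == 0:                    # periodic retraining
        data = results                                 # Method 1 (Method 2: archive)
        pending = executor.submit(train_dnn, data)
    if pending is not None and pending.done():         # swap in the updated DNN
        dnn, pending = pending.result(), None
    population = evolve(population, results)
```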
## IV.Experimental Setup
In this section, first, the sample problem used to confirm the effect of our proposed evolutionary aerodynamic design optimization method is described. Then, the architecture and libraries used for the evaluation are shown.
### Sample Problem of an Aerodynamic Design Optimization
The optimization of a 2D airfoil shape in a steady-state flow field is used as the sample problem. The objective is to find the airfoil shapes that maximize \(C_{L}\) and minimize \(C_{D}\). The evaluation is performed with a Reynolds number of 1,500,000, a Mach number of 0.15, and an AoA of 1.1\({}^{\circ}\), based on the conditions reported in [28]. The PARSEC parameters [10; 16; 29] shown in Fig. 4 are used as parameters to create an airfoil shape for design optimization. Table 1 presents the search space of the design parameters.
Fig. 3: MOEA method using the CFD simulation incorporating the DNN including the process of gathering the training data and training the DNN.
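As an illustration of the search space, a short sketch that samples random PARSEC parameter sets within the bounds of Table 1 (e.g., to seed an MOEA population); uniform sampling is an assumption for illustration, not a detail stated here.

```python
import numpy as np

# (name, min, max) bounds of the PARSEC parameters from Table 1.
BOUNDS = [
    ("r_LE",      0.0055,  0.0215),
    ("X_UP",      0.25,    0.6043),
    ("Z_UP",      0.048,   0.1194),
    ("Z_XXUP",   -1.0294, -0.418),
    ("X_LO",      0.25,    0.5376),
    ("Z_LO",     -0.071,   0.00),
    ("Z_XXLO",   -0.0686,  0.8204),
    ("Z_TE",     -0.02,    0.02),
    ("alpha_TE", -0.3580,  0.02304),
    ("beta_TE",   0.0201,  0.2571),
]

def initial_population(n_pop=96, seed=0):
    """Sample n_pop design candidates uniformly inside the search space."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[1] for b in BOUNDS])
    hi = np.array([b[2] for b in BOUNDS])
    return lo + rng.random((n_pop, lo.size)) * (hi - lo)

print(initial_population().shape)  # (96, 10)
```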
### Computational Method
The simulator calculates the flow field by solving the compressible Navier-Stokes equations. In this study, LANS3D is employed as the CFD simulator [30]. Upwind-biased 2nd-order differencing [31], the alternative direction implicit-symmetric Gauss-Seidel (ADI-SGS) scheme [32], and the turbulence model of Baldwin and Lomax [33] are used. The structured grid is created following the method described in a previous paper [34]. The flow field is regarded as having reached a steady state when the variation of the \(C_{D}\) value is below 2e-4 over 1,000 steps, which corresponds to an error in the \(C_{D}\) value of approximately 1 count or below.
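This convergence criterion can be expressed as a small helper; a sketch, assuming the \(C_{D}\) history is recorded once per time step.

```python
def reached_steady_state(cd_history, window=1000, tol=2e-4):
    """True when C_D varied by less than tol over the last `window` time steps,
    i.e., the error in C_D is roughly 1 drag count or below."""
    if len(cd_history) < window:
        return False
    recent = cd_history[-window:]
    return max(recent) - min(recent) < tol
```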
A residual neural network (ResNet) is employed as the DNN architecture to infer the flow field [26, 35, 36]. The DNN comprises an encoder and a decoder [26]. The architecture of our DNN is shown in Fig. 5. In the encoder part, the features of the airfoil shape are extracted. In the decoder part, the flow field is inferred based on the extracted features. Both the encoder and decoder comprise four units, and each unit includes convolutional and shortcut connections. This DNN is implemented using Keras [37] and a TensorFlow GPU 1.13 backend.
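A minimal tf.keras sketch of such an encoder-decoder ResNet is shown below. The four encoder and four decoder units with convolutional and shortcut connections follow the description above, while the grid resolution, channel widths, kernel sizes, and number of output flow variables are assumptions made purely for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def res_unit(x, filters, stride=1):
    """One unit: two convolutions plus a shortcut connection."""
    shortcut = x
    y = layers.Conv2D(filters, 3, strides=stride, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    if stride != 1 or x.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, strides=stride, padding="same")(x)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))

def build_flow_dnn(grid_shape=(128, 128, 2), n_flow_vars=4):
    inp = layers.Input(shape=grid_shape)       # grid-point (x, y) coordinates
    x = inp
    for f in (32, 64, 128, 256):               # encoder: extract shape features
        x = res_unit(x, f, stride=2)
    for f in (128, 64, 32, 16):                # decoder: infer the flow field
        x = layers.UpSampling2D()(x)
        x = res_unit(x, f)
    out = layers.Conv2D(n_flow_vars, 1)(x)     # e.g. rho, rho*u, rho*v, e per point
    return tf.keras.Model(inp, out)

model = build_flow_dnn()
model.compile(optimizer="adam", loss="mse")
```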
The MOEA is implemented based on a previous study. The elitist mate selection based on the binary tournament (termed EBT) is used as the mate selection scheme [38]. The MOEA/D-multi-objective to multi-objective (MOEA/D-M2M) decomposition is applied [39]. Simulated binary crossover (SBX) is used as the crossover operator [27]. The MOEA was performed with 96 populations and 100 generations.
The CFD simulation is run on the Intel Xeon Gold 5218 (2.30 GHz) while DNN training is performed on the dual NVIDIA Tesla P100 GPU.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline
 & \(r_{LE}\) & \(X_{UP}\) & \(Z_{UP}\) & \(Z_{XXUP}\) & \(X_{LO}\) & \(Z_{LO}\) & \(Z_{XXLO}\) & \(Z_{TE}\) & \(\alpha_{TE}\) (rad) & \(\beta_{TE}\) (rad) \\ \hline
Minimum limit & 0.0055 & 0.25 & 0.048 & \(-1.0294\) & 0.25 & \(-0.071\) & \(-0.0686\) & \(-0.02\) & \(-0.3580\) & 0.0201 \\
Maximum limit & 0.0215 & 0.6043 & 0.1194 & \(-0.418\) & 0.5376 & 0.00 & 0.8204 & 0.02 & 0.02304 & 0.2571 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Parameter ranges of the design space
Figure 4: PARSEC airfoil parameters.
Figure 5: The architecture of DNN used to infer the flow field
## V.Results
The advantage of the proposed method is that it reduces computational costs while maintaining the same quality of optimization results as the conventional method. Therefore, the design optimization by the MOEA was performed both with the conventional method (without DNN) and with the methods using the DNN, and the results were compared. In addition, to confirm the effect of updating the DNN, the method in which the DNN trained with the 1st and 2nd generations is used throughout the whole MOEA process was also evaluated as the method of "_CFD simulation incorporating a fixed-type DNN_". The property that the computational time can be suppressed is shown by comparing the methods using the DNN with the conventional method. Then, the property that the quality of the optimization results is equal is shown by comparing the obtained designs and Pareto solutions of each method.
### Evaluation of Total Computational Time of the Evolutionary Aerodynamic Design Optimization
_1. Execution Time of Each Process_
To clarify the execution procedure of the MOEA method shown in Fig. 3, the computational time of each process was measured. The computational time of the CFD simulation is proportional to the number of time steps; therefore, the computational time of the CFD simulation part was evaluated using this number of time steps. The CFD simulation execution of the MOEA is performed for each generation, so the computational time was evaluated for each generation of the MOEA. When the aerodynamic evaluation is performed with 96 processor cores in total, all the CFD simulations of one generation are performed in parallel. In this case, the computational time of one generation depends on the design with the worst computational time in that generation. On the other hand, when the aerodynamic evaluation is performed with a single-core processor, the CFD simulations are performed serially, and the computational time of one generation depends on the average simulation time. Therefore, both the worst and the average numbers of time steps in each generation were evaluated.
As an example of the computational time of the CFD simulation, the computational time of the 30th generation is presented in Table 2. The required number of time steps of each CFD simulation method for the worst design candidate was 14,800 steps with the conventional method, 11,000 steps with the fixed-type DNN, 9,600 steps with the evolution update-type DNN using the latest designs, and 9,600 steps with the evolution update-type DNN using all designs. Each 10,000 CFD simulation steps take 0.92 h, so these correspond to computational times of 1.36, 1.01, 0.88, and 0.88 h, respectively. The average number of required time steps per design candidate was 10,460 steps with the conventional method, 7,058 steps with the fixed-type DNN, 5,269 steps with the evolution update-type DNN using the latest designs (Method 1), and 5,800 steps with the evolution update-type DNN using all designs (Method 2). These correspond to computational times of 0.96, 0.65, 0.48, and 0.53 h, respectively. Training the DNN took 8.5 h with the GPU. For the evolution update-type DNN, the DNN must be retrained to be updated; in this case, a transfer learning technique was applied, so retraining the DNN with the GPU takes only 1.6 h. Moreover, inferring the flow field with the trained DNN on a 1-core CPU took less than 1 min. Therefore, the total execution time of the design optimization is regarded as the execution time of the CFD simulations and the DNN training. A quick arithmetic check of the step-to-hour conversion is given after Table 2.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
\textbf{Method of CFD simulation part} & \textbf{Time for DNN training} & \textbf{CFD time/1 sample, 1-core CPU (Gen. 30, average)} & \textbf{CFD time/1 sample, 1-core CPU (Gen. 30, worst)} \\ \hline
Conventional CFD simulation (without DNN) & -- & 0.96 h & 1.36 h \\
CFD simulation with fixed-type DNN & 8.5 h & 0.65 h & 1.01 h \\
CFD simulation with evolution update-type DNN using the latest designs (Method 1) & 8.5 h (initial), 1.6 h (update) & 0.48 h & 0.88 h \\
CFD simulation with evolution update-type DNN using all designs (Method 2) & 8.5 h (initial), 1.6 h (update) & 0.53 h & 0.88 h \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Computational time of each method and each process
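The hour figures in Table 2 follow directly from the reported cost of 0.92 h per 10,000 CFD time steps; a minimal check:

```python
HOURS_PER_10K_STEPS = 0.92

def cfd_hours(steps):
    return steps / 10_000 * HOURS_PER_10K_STEPS

# Worst design candidate of generation 30, per method:
for name, steps in [("conventional", 14_800), ("fixed-type DNN", 11_000),
                    ("update-type, latest (Method 1)", 9_600),
                    ("update-type, all (Method 2)", 9_600)]:
    print(f"{name}: {cfd_hours(steps):.2f} h")   # 1.36, 1.01, 0.88, 0.88 h
```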
_2. Required Time of the Entire Design Optimization_
The total computational time of the MOEA method shown in Fig. 3 was calculated using these results. In the 1st and 2nd generations, whose data were used as the training data, the designs were evaluated with the conventional CFD simulation. The conventional CFD simulation was also used for evaluating the designs in the period during which the DNN was being trained in parallel. The training time of the DNN was 8.5 h with the GPU. This duration was equivalent to that of 7 generations in the MOEA process under the 96-core CPU condition. Therefore, the 1st and 2nd generations were used to gather the training data; from the 3rd to the 9th generation, while the DNN was being trained in parallel, the evaluation was performed with the conventional CFD simulation; and from the 10th to the last generation, the evaluation was performed with the CFD simulation incorporating the DNN.
To reduce the computational time further by improving the inference accuracy of the DNN, the DNN was retrained and updated as shown in Fig. 3. The retraining of the DNN was performed every 10 generations in this research. The computational time to retrain the DNN was 1.6 h. This duration was equivalent to that of 2 generations in the MOEA process under the 96-core CPU condition. During the retraining of the DNN, which is equivalent to 2 generations of the MOEA process, the CFD simulation was performed with the DNN from before the update; after the DNN was retrained, the CFD simulation was performed using the updated DNN. On the other hand, with the method of "_CFD simulation incorporating a fixed-type DNN_", the DNN trained using the results of the 1st and 2nd generations was used throughout, from the 10th to the last generation.
To evaluate the reduction of the computational time, the result of the method utilizing the conventional CFD simulation was used as a reference. Figure 6 shows the number of time steps depending on the generation of the MOEA. Figures 6(a) and 6(b) show the worst number of steps among the designs of each generation and the average number of steps of each generation, respectively. The horizontal axis shows the generation number of the MOEA, and the vertical axis shows the worst or average number of time steps required for each generation, respectively. Because the CFD simulation incorporating the DNN was used from the 10th generation onward, the required number of time steps was reduced from the 10th generation. The DNN was updated every 10 generations, and at these points a further reduction of the required time steps was observed. This was likely caused by the improvement of the inference accuracy achieved by retraining the DNN including the latest designs. The number of time steps was further reduced with the method of "_CFD simulation incorporating the evolution update-type DNN using the latest designs_" (Method 1) compared to "_using all designs_" (Method 2). The likely reason is that the DNN became specialized to the optimal designs created by the MOEA, improving the inference accuracy for these designs.
First, the total execution time of the design optimization performed with a 96-core CPU was evaluated. In this case, the evaluation of the design candidates using CFD simulations is performed in parallel, and the execution time of each generation depends on the largest execution time in that generation. The results are shown in Fig. 7(a). The execution time of the design optimization using the method of "_CFD simulation incorporating the evolution update-type DNN using the latest designs_" was 78.3 h, which is 57.9% of that of the conventional method. The total execution time using the method of "_CFD simulation incorporating the fixed-type DNN_" was 104.2 h, which is 76.9% of that of the conventional method. This difference indicates the effect of updating the DNN along with the evolution. The execution time of the design optimization using the method of "_CFD simulation incorporating the evolution update-type DNN using all designs_" was 84.3 h, which is 62.3% of that of the conventional method. Therefore, the latest designs are the more suitable choice when updating the evolution update-type DNN.
Then, we evaluated the total execution time in the case of the design optimization performed with a 1-core CPU. The result is shown in Fig. 7(b). As with the results using a 96-core CPU, the computational time was suppressed most with the method of "_CFD simulation incorporating the evolution update-type DNN using the latest designs_". The execution time of the design optimization using this method was 4,152 h, which is 43.5% of that of the conventional method.
### Comparison of the Results of the Design Optimization
The property that the quality of the optimization results of the proposed method is equal to that of the conventional method was confirmed by comparison. As a preliminary step, the design optimization using the conventional MOEA method was performed to confirm that the design optimization works correctly under the evaluated conditions. The distribution of design candidates during the process of evolution was evaluated first. Figure 8 shows the distribution of the obtained solutions. Figures 8(a) and (b) show the distribution of the evaluation results of the design candidates of the 1st and 2nd and the 11th and 12th generations, respectively. As shown by these results, superior designs were confirmed to be generated as the evolution progressed. In particular, design candidates with higher \(C_{L}\) values were confirmed to be newly generated by the evolution.
All the design candidates and Pareto solutions obtained with 100 generations of the MOEA are shown in Fig. 9(a). The airfoil shape with \(C_{L}=0.8\) on the Pareto front was observed. The airfoil shape and the pressure coefficient field around it are shown in Fig. 10(a). The obtained airfoil shape has the feature of increased camber, as reported in the previous research [28]. With these results, the optimization was confirmed to be performed properly under this condition.
Each design optimization method using the CFD simulation incorporating the DNN was also performed under the same conditions. All the design candidates obtained by each method are compared in Fig. 9(b)-(d). No significant difference in the distribution of Pareto-optimal solutions from that obtained using the conventional method was observed. The airfoil with minimum \(C_{D}\) at \(C_{L}=0.8\) was also identified and compared with that of the previous research.
Figure 7: Comparison of the results of the total computational time
The airfoil shape and the pressure coefficient field around it are shown in Fig. 10(b)-(d). The airfoil shapes obtained with all the methods were found to be almost the same, and their features also matched those of the shape reported in the previous research [28]. With these results, the property that the optimization results of the methods using the DNN are equal to those of the conventional method was confirmed.
Figure 8: Distribution of the solutions.
Figure 9: Results of design optimization. Distribution of the Pareto-optimal solutions and other design candidates.
## VI.Conclusions
An evolutionary multi-objective aerodynamic design optimization method using the CFD simulation incorporating a DNN to reduce the required computational time was proposed. In this approach, the DNN inferred the flow field from the grid data, and the CFD simulation started from the inferred flow field to obtain the steady-state flow field with a smaller number of time integration steps. In the proposed optimization approach, the design candidates in the 1st and 2nd generations were evaluated using the conventional CFD simulation, and then the DNN was trained using the grid data and flow field data of these design candidates. The DNN was also updated after a certain number of generations using the CFD simulation results of the design candidates of the latest generations. To suppress the time increase caused by the process of training the DNN, the DNN training was performed in parallel with the evolution process of the MOEA, and once trained, the DNN was put into use.
To show the effectiveness of the proposed method, a multi-objective aerodynamic airfoil design optimization was demonstrated. The computational time was suppressed to 57.9% when the aerodynamic evaluation using CFD simulation was parallelized over 96 processor cores. The results also showed that the computational time was suppressed to 43.5% when a single processor core was used for the optimization.
|
2302.06035 | Variational Bayesian Neural Networks via Resolution of Singularities | In this work, we advocate for the importance of singular learning theory
(SLT) as it pertains to the theory and practice of variational inference in
Bayesian neural networks (BNNs). To begin, using SLT, we lay to rest some of
the confusion surrounding discrepancies between downstream predictive
performance measured via e.g., the test log predictive density, and the
variational objective. Next, we use the SLT-corrected asymptotic form for
singular posterior distributions to inform the design of the variational family
itself. Specifically, we build upon the idealized variational family introduced
in \citet{bhattacharya_evidence_2020} which is theoretically appealing but
practically intractable. Our proposal takes shape as a normalizing flow where
the base distribution is a carefully-initialized generalized gamma. We conduct
experiments comparing this to the canonical Gaussian base distribution and show
improvements in terms of variational free energy and variational generalization
error. | Susan Wei, Edmund Lau | 2023-02-13T00:32:49Z | http://arxiv.org/abs/2302.06035v1 | # Variational Bayesian Neural Networks via Resolution of Singularities
###### Abstract
In this work, we advocate for the importance of singular learning theory (SLT) as it pertains to the theory and practice of variational inference in Bayesian neural networks (BNNs). To begin, using SLT, we lay to rest some of the confusion surrounding discrepancies between downstream predictive performance measured via e.g., the test log predictive density, and the variational objective. Next, we use the SLT-corrected asymptotic form for singular posterior distributions to inform the design of the variational family itself. Specifically, we build upon the idealized variational family introduced in Bhattacharya et al. (2020) which is theoretically appealing but practically intractable. Our proposal takes shape as a normalizing flow where the base distribution is a carefully-initialized generalized gamma. We conduct experiments comparing this to the canonical Gaussian base distribution and show improvements in terms of variational free energy and variational generalization error.
Keywords: Normalizing Flow; Real Log Canonical Threshold; Singular Learning Theory; Singular Models; Test Log-likelihood; Variational Free Energy; Variational Inference; Variational Generalization Error
## 1 Introduction
A Bayesian neural network (BNN) Mackay (1995) is a neural network endowed with a prior distribution \(\varphi\) on its weights \(w\). Despite their theoretical appeal Lampinen and Vehtari (2001); Wang and Yeung (2020), applying BNNs in practice is not without significant challenges. MCMC and its variants, while widely considered the gold standard, can be prohibitively expensive in terms of computation. On the other hand, fast alternatives such as variational inference may result in _uncontrolled_ approximations.
In this work, we mine insights from **singular learning theory** (SLT) Watanabe (2009) to explain and improve upon certain aspects of BNNs. Roughly speaking, a model is (strictly) **singular** if the parameter-to-model mapping is not one-to-one and the likelihood function does not look Gaussian1. That neural networks are singular is well documented Sussmann (1992); Watanabe (2000, 2001); Fukumizu (2003); Watanabe (2007). We refer the readers to Wei et al. (2022) for a detailed proof in the case of a standard feedforward network. The singular nature of BNNs has interesting implications for the posterior distribution, see Figure 1.
Footnote 1: These features should not be viewed as pathological, see “Deep learning is singular and that’s good” by Wei et al. (2022).
Let \((x,y)\) denote the input-target pair modeled jointly as \(p(x,y|w)=p(y|x,w)p(x)\) where \(w\in\mathbb{R}^{d}\) is the model parameter. Let \(p(y|x,w)\) be a neural network model with functional model \(f\), by which we mean \(y=f(x,w)+\epsilon\) where \(\epsilon\) is some random variable. For example, if we have Gaussian additive noise \(\epsilon\), the conditional distribution could be modelled as \(\mathcal{N}(y|f(x,w),\sigma^{2}I)\) where \(f\) is a feedforward ReLU network with weights \(w\).
The central quantity of interest in BNNs is the intractable posterior distribution over the neural network weights,
\[p(w|\mathcal{D}_{n})=\frac{\prod_{i=1}^{n}p(y_{i}|x_{i},w)\varphi(w)}{Z(n)},\]
where \(\mathcal{D}_{n}=\{(x_{i},y_{i})\}_{i=1}^{n}\) is a dataset of \(n\) input-output pairs. The normalizing constant,
\[Z(n)=\int\prod_{i=1}^{n}p(y_{i}|x_{i},w)\varphi(w)\,dw,\]
is variously known as the **model evidence** and the **marginal likelihood**. Define the **empirical entropy** of the training data,
\[S_{n}=-\frac{1}{n}\sum_{i=1}^{n}\log p_{0}(y_{i}|x_{i}).\]
We shall call
\[\bar{Z}(n)=Z(n)\exp(nS_{n})\]
the **normalized evidence**. Let us call \(F(n):=-\log Z(n)\) the **Bayes free energy** and \(\bar{F}(n):=-\log\bar{Z}(n)\) its normalized version.
Unlike prediction in traditional neural networks, prediction in BNNs proceeds by marginalization, i.e., averaging over all possible values of the network weights. Namely, prediction in BNNs makes use of the **Bayes posterior predictive distribution**,
\[p(y|x,\mathcal{D}_{n}):=\int p(y|x,w)p(w|\mathcal{D}_{n})\,dw. \tag{1}\]
With (1), we can calculate prediction uncertainties as well as obtain better calibrated predictions Heek (2018); Osawa et al. (2019); Maddox et al. (2019).
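In practice the integral in (1) is approximated by Monte Carlo over posterior weight samples. The following is a minimal sketch, where `weight_samples` (from MCMC or a variational approximation) and `log_lik` (the network's conditional log-density) are assumed to be supplied by the user.

```python
import numpy as np

def posterior_predictive(y, x, weight_samples, log_lik):
    """Monte Carlo estimate of (1): p(y|x, D_n) ~ (1/K) sum_k p(y|x, w_k)."""
    log_p = np.array([log_lik(y, x, w) for w in weight_samples])
    m = log_p.max()                      # log-sum-exp trick for numerical stability
    return np.exp(m) * np.mean(np.exp(log_p - m))
```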
In Section 3, we recapitulate from the perspective of SLT the predictive advantages of BNNs over traditional neural networks. Specifically, SLT shows that the Bayes posterior predictive distribution in (1) has lower generalization error compared to MLE or MAP point estimates.
Despite compelling arguments for employing BNNs, we must reckon with the fact that they can only ever be applied _approximately_. Among approximate techniques, a major class is represented by scaling classic MCMC to modern settings of large datasets and deep neural networks Welling and Teh (2011); Chen et al. (2014); Zhang et al. (2020). In this paper, we instead turn our focus to variational inference, which is particularly suited to scaling BNNs to large datasets.
All variational inference techniques are characterized by two ingredients. First, a family of densities \(\mathcal{Q}\), often called the variational family, is posited. Second, some \(q^{*}\in\mathcal{Q}\) is found via optimization according to some criterion that measures
Figure 1: Posterior density contour plot for a 2D \(\tanh\)-regression model, \(p(y|x,a,b)\propto\exp\left(-\tfrac{1}{2}(y-a\tanh(bx))^{2}\right)\). The white diamond marks the true parameter \((a_{0},b_{0})\) used to generate the dataset \(\mathcal{D}_{n}\). Each row shows a different true distribution, while each column shows a different sample size \(n\). When \(a_{0}b_{0}=0\) as in the second row, the set of true parameters \(W_{0}\) is not a singleton and contains a singularity at the origin. It is worth noticing that, for a singular model, even when the truth is not at a singularity (first row), the posterior is still far from being locally Gaussian even at sample size \(n=5000\).
closeness to the desired target density. In this work, we seek to approximate the posterior density and we will employ the conventional Kullback-Leibler divergence. This leads to the optimization problem,
\[\min_{q\in\mathcal{Q}}\mathrm{KL}(q(w)\parallel p(w|\mathcal{D}_{n})). \tag{2}\]
This is equivalent to minimizing the so-called **normalized2 variational free energy (VFE)**,
Footnote 2: Throughout this paper, we work with normalized quantities for ease of exposition. The asymptotics presented hold equally for the unnormalized counterparts.
\[\bar{F}_{vb}(n):=\mathbb{E}_{q}nK_{n}(w)+\mathrm{KL}(q(w)\parallel\varphi(w)).\]
It is easy to see that \(\bar{F}_{vb}(n)\geq\bar{F}(n)\) with equality if and only if the variational distribution is exactly equal to the posterior. Readers are likely more familiar with the variational objective of maximizing the so-called **evidence lower bound (ELBO)** which is simply related to the (normalized) VFE via \(\mathrm{ELBO}=-\bar{F}_{vb}(n)\).
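A minimal sketch of estimating the unnormalized VFE, i.e., the negative ELBO, by simple Monte Carlo is given below; normalizing it would subtract \(nS_{n}\), which requires the unknown truth \(p_{0}\). The callables `q_sample`, `q_logpdf`, `prior_logpdf`, and `loglik` are assumed to be supplied by the user.

```python
import numpy as np

def neg_elbo(q_sample, q_logpdf, prior_logpdf, loglik, data, K=64):
    """E_q[-sum_i log p(y_i|x_i,w)] + KL(q || prior), both terms by sampling."""
    ws = [q_sample() for _ in range(K)]
    nll = np.mean([-sum(loglik(y, x, w) for x, y in data) for w in ws])
    kl = np.mean([q_logpdf(w) - prior_logpdf(w) for w in ws])
    return nll + kl
```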
Let \(q^{*}\in\mathcal{Q}\) be a minimizer of (2). Let us call the variational approximation to (1) given by
\[p_{vb}(y|x,\mathcal{D}_{n}):=\int p(y|x,w)q^{*}(w)\,dw, \tag{3}\]
the **induced predictive distribution**. We can measure the predictive accuracy of \(p_{vb}\) using once again the KL divergence, i.e.,
\[G_{n}(p_{vb}(y|x,\mathcal{D}_{n})):=\mathrm{KL}(p_{0}(y|x)\parallel p_{vb}(y| x,\mathcal{D}_{n})),\]
which we shall call the **variational generalization error (VGE)**. Per the discussion in Section 3, this is, up to a constant and a sign flip, nothing more than the typical **test log predictive density**Gelman et al. (2014) commonly employed in variational inference evaluation.
We shall see in Section 4 that, surprisingly, the VGE may be arbitrarily high even for a variational family whose minimum VFE is close to optimality. In other words, it is not guaranteed that minimizing (2) results in good downstream predictive performance. The outlook is not entirely bleak. Depending on the relationship between two critical quantities of variational inference - the **MVFE coefficient**\(\lambda_{\textbf{vfe}}\) and the **VGE coefficient**\(\lambda_{\textbf{vge}}\) - the generalization error of the induced predictive distribution may be controllable via minimizing the VFE.
Clarification of the relationship between the two variational coefficients for most common variational learning problems is an open problem, which we leave aside for future work. We will assume the variational coefficients are related _favorably_, in a manner which will be made clear in Section 4, and proceed to design a variational family whose **variational approximation gap** is small. The proposal is predicated on an important SLT result which states that, roughly speaking, the posterior distribution over the parameters of a singular model is not asymptotically Gaussian, but can still be put into an explicit standard form via the **resolution of singularities**.
## 2 Singular learning theory
In this section, we give a succinct overview of key concepts from SLT. We focus in particular on what SLT has to say about the behavior of the posterior distribution in strictly singular models. Let us assume the parameter space \(W\) is a compact set in \(\mathbb{R}^{d}\) and \(p_{0}(x,y)=p_{0}(y|x)p(x)\) is the true data-generating mechanism. Throughout, we suppose there exists \(w_{0}\in W\) such that \(p_{0}(y|x)=p(y|x,w_{0})\). In the parlance of SLT, this condition is known as **realizability**. Let \(\varphi(w)\) be a compactly-supported prior. We shall refer to \((p(\cdot,\cdot),p_{0}(\cdot,\cdot),\varphi(\cdot))\) as a **model-truth-prior triplet**. The roles played by compactness and realizability in singular learning theory are discussed in Appendix A.
Define \(K(w)\) to be the Kullback-Leibler divergence between the truth and the model, i.e.,
\[K(w):=\mathrm{KL}(p_{0}(x,y)\parallel p(x,y|w)).\]
Following Watanabe (2009), we say a model is **regular** if 1) it is identifiable, i.e., the map \(w\mapsto p(\cdot,\cdot|w)\) from parameter to model is one-to-one and 2) its Fisher information matrix \(I(w)\) is positive definite for arbitrary \(w\in W\). We call a model **strictly singular** if it is not regular. The term singular will refer to either regular or strictly singular models. See Figure 1 for an example of a strictly singular model with two truth settings. This figure illustrates an important lesson: for strictly singular models, even when the true parameter set \(W_{0}:=\{w:K(w)=0\}\) does not contain singularities, the posterior distribution is still far from Gaussian.
The following theorem from Watanabe (2009), adapted for notational consistency, gives precise conditions for the existence of **resolution maps**, algebraic-geometrical transformations which enable \(K(w)\) to be locally written as a
monomial_, i.e., a product of powers of variables such as in the right-hand-side of (4). The result is itself based on Hironaka's resolution of singularities, a celebrated result in modern algebraic geometry.
To prepare, let \(W_{\epsilon}=\{w\in W:K(w)\leq\epsilon\}\) for some small positive constant \(\epsilon\) and \(W_{\epsilon}^{(R)}\) be some real open set such that \(W_{\epsilon}\subset W_{\epsilon}^{(R)}\). The theorem below will make use of the multi-index notation: for a given \(w=(w_{1},\ldots,w_{d})\in\mathbb{R}^{d}\), define \(w^{\mathbf{k}}:=w_{1}^{k_{1}}\cdots w_{d}^{k_{d}}\), where the multi-index \(\mathbf{k}=(k_{1},\ldots,k_{d})\) has nonnegative integer entries \(k_{j}\). Due to space constraints, Fundamental Conditions I and II required below are stated and discussed in Appendix A.
**Theorem 2.1** (Theorem 6.5 of Watanabe [2009]).: _Suppose the model-truth-prior triplet \((p,p_{0},\varphi)\) satisfies Fundamental Conditions I and II with \(s=2\). We can find a real analytic manifold \(M^{(R)}\) and a proper and real analytic map \(g:M^{(R)}\to W_{\epsilon}^{(R)}\) such that_
1. \(M=g^{-1}(W_{\epsilon})\) _is covered by a finite set_ \(M=\cup_{\alpha}M_{\alpha}\) _where_ \(M_{\alpha}=[0,b]^{d}\)_._
2. _In each_ \(M_{\alpha}\)_,_ \[K(g(\xi))=\xi^{\mathbf{2k}}=\xi_{1}^{2k_{1}}\cdots\xi_{d}^{2k_{d}},\] (4) _where_ \(k_{j}\in\mathbb{N},j=1,\ldots,d\) _are such that not all_ \(k_{j}\) _are zero._
3. _There exists a_ \(C^{\infty}\) _function_ \(b(\xi)\) _such that_ \[\varphi(g(\xi))|g^{\prime}(\xi)|=\xi^{\mathbf{h}}b(\xi)=\xi_{1}^{h_{1}}\cdots\xi_{ d}^{h_{d}}b(\xi),\] (5) _where_ \(h_{j}\in\mathbb{N},j=1,\ldots,d\)_,_ \(|g^{\prime}(\xi)|\) _is the absolute value of the determinant of the Jacobian and_ \(b(\xi)>c>0\) _for_ \(\xi\in[0,b]^{d}\)_._
In Theorem 2.1 we have suppressed the dependency on the manifold chart index \(\alpha\), but the reader should keep in mind that the maps \(g\) and the multi-indices are all indexed by \(\alpha\). It is also important to recognize that none of these said quantities are unique for a given triplet \((p,p_{0},\varphi)\).
A crucial quantity that appears in SLT is a rational number in \((0,d/2]\) known as the **real log canonical threshold** (RLCT). Let \(\{M_{\alpha}:\alpha\}\) be as in Theorem 2.1 and define
\[\lambda_{j}=\frac{h_{j}+1}{2k_{j}},j=1,\ldots,d\]
where \(h_{j}\) and \(k_{j}\) are the entries of the multi-indices \(\mathbf{h}\) and \(\mathbf{k}\) in a local coordinate \(M_{\alpha}\). When \(k_{j}=0\), \(\lambda_{j}\) is taken to be infinity.
Uniquely associated to a triplet \((p,p_{0},\varphi)\) are its real log canonical threshold (RLCT) and its multiplicity defined, respectively, as
\[\lambda=\min_{\alpha}\min_{j\in 1,\ldots,d}\lambda_{j},\quad m=\max_{\alpha} \#\{j:\lambda_{j}=\lambda\}. \tag{6}\]
Let \(\{\alpha^{*}\}\) be the set of those local coordinates in which both the \(\min\) and \(\max\) in (6) are attained. Watanabe [2009] calls this set the **essential coordinates** and the corresponding collection \(\{M_{\alpha}\}\) the **essential charts**.
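To make (6) concrete, the following sketch computes the RLCT and multiplicity from per-chart multi-indices \((\mathbf{h},\mathbf{k})\); the chart data below are illustrative placeholders, not the resolution of any particular model.

```python
import math

def rlct_and_multiplicity(charts):
    """charts: list of (h, k) multi-index pairs, one per chart M_alpha.

    Implements (6): lambda_j = (h_j + 1) / (2 k_j), with lambda_j = inf
    when k_j = 0; the RLCT is the min over charts and coordinates, and
    the multiplicity is the max (over charts) count of attaining lambda_j.
    """
    local = [
        [(h + 1) / (2 * k) if k > 0 else math.inf for h, k in zip(hs, ks)]
        for hs, ks in charts
    ]
    lam = min(min(ls) for ls in local)
    m = max(sum(1 for lj in ls if lj == lam) for ls in local)
    return lam, m

# Two hypothetical charts of a d = 2 model.
charts = [((0, 1), (1, 2)), ((1, 0), (2, 1))]
print(rlct_and_multiplicity(charts))  # -> (0.5, 2): both coordinates attain 0.5
```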
If \(\{w:K(w)=0,\varphi(w)>0\}\) is not the empty set, the RLCT of a model-truth-prior triplet is _at most_ \(d/2\) [Watanabe, 2009, Theorem 7.2]. When the model is regular, the RLCT is _exactly equal_ to \(d/2\) and the multiplicity \(m=1\) [Watanabe, 2009, Remark 1.15]. In fact, (twice the) RLCT may be regarded as the effective degrees of freedom in strictly singular models [Wei et al., 2022]. The RLCT also shows up in important asymptotic results, see (10) and (11).
Henceforth, to make clear that the RLCT and multiplicity are invariants of the model-truth-prior triplet, we shall write \(\lambda(p,p_{0},\varphi)\) and \(m(p,p_{0},\varphi)\) to mark this dependence. In Appendix B, we recall a simple toy network, a two-parameter \(\tanh\) network, where the resolution map, the RLCT, and the multiplicity can be calculated explicitly.
### Posterior distribution in singular models
The posterior distribution in strictly singular models is decidedly not Gaussian. The correct asymptotic form can be derived using SLT. For a particular manifold chart index \(\alpha\), let us apply the transformation \(g_{\alpha}(\xi)=w\) and rewrite the posterior distribution in the new coordinate \(\xi\),
\[p(\xi|\mathcal{D}_{n})=\frac{\exp(-nK_{n}(g_{\alpha}(\xi)))\varphi(g_{\alpha}( \xi))|g^{\prime}_{\alpha}(\xi)|}{\bar{Z}(n)}, \tag{7}\]
with
\[K_{n}(w)=\frac{1}{n}\sum_{i=1}^{n}\log\frac{p_{0}(y_{i}|x_{i})}{p(y_{i}|x_{i},w)}\]
denoting the sample average log likelihood ratio. Note \(K_{n}(w)\) is the empirical counterpart to \(K(w)\).
By (cheekily) substituting (4) and (5) into \(p(\xi|\mathcal{D}_{n})\) in (7), we obtain that the posterior distribution for large \(n\), in the chart \(M_{\alpha}\), is described by a so-called **standard form** (Watanabe, 2018):
\[\exp(-n\xi_{1}^{2k_{1}}\xi_{2}^{2k_{2}}\cdots\xi_{d}^{2k_{d}})|\xi_{1}^{h_{1}} \cdots\xi_{d}^{h_{d}}|b(\xi).\]
In other words, the posterior distribution over the parameters of a singular model can be transformed into a mixture of standard forms, asymptotically. In Figure 1 we display the singular posterior density contour plot for a toy 2D \(\tanh\)-neural network in two settings of the true distribution.
## 3 The Bayes posterior predictive distribution
Let the generalization error of some predictive distribution \(\hat{p}_{n}(y|x)\), estimated from a training set \(\mathcal{D}_{n}\), be measured using the KL divergence:
\[G_{n}(\hat{p}_{n}(y|x)):=\mathrm{KL}(p_{0}(y|x)p(x)\parallel\hat{p}_{n}(y|x)p (x)) \tag{8}\]
In the machine learning community, this goes by another name: \(G_{n}(\cdot)\) is, up to a constant and a sign flip, the population counterpart to the commonly reported test log-likelihood, aka the predictive log-likelihood or **test log-predictive density**. This can be seen by writing
\[\hat{G}_{n}=-\frac{1}{n^{\prime}}\sum_{(x,y)\in\mathcal{D}_{n^{\prime}}}(\log p_{0}(y|x)-\log\hat{p}_{n}(y|x)) \tag{9}\]
where \(\mathcal{D}_{n^{\prime}}\) is an independent dataset.\({}^{1}\) According to Theorems 1.2 and 7.2 in Watanabe (2009), we have, for the Bayes posterior predictive distribution (1),
\[\mathbb{E}G_{n}(p(y|x,\mathcal{D}_{n}))=\lambda(p,p_{0},\varphi)/n+o(1/n) \tag{10}\]
where the expectation is taken with respect to \(\mathcal{D}_{n}\). We will call the left hand side of (10) the expected **Bayes generalization error**. This can be contrasted to the expected generalization error of the MLE (and similarly of the MAP), which Theorem 6.4 of Watanabe (2009) shows to be \(\mathbb{E}G_{n}(p(y|x,\hat{w}_{\text{mle}}))=S/n+o(1/n)\), where \(S\), the maximum of a Gaussian process, can be much larger than \(\lambda(p,p_{0},\varphi)\). The situation is markedly different for regular models, where differences between the three estimators become negligible in the large-\(n\) regime.
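For concreteness, here is a minimal sketch of the estimator in (9), assuming we have the log-densities of the truth and of the fitted predictive distribution on a held-out set; all numbers are placeholders.

```python
import numpy as np

def test_log_predictive_gap(log_p0, log_phat):
    """Empirical estimate as printed in (9): -(1/n') * sum(log p0 - log phat).

    log_p0, log_phat: arrays of log-densities of the truth and of the fitted
    predictive distribution on an independent test set D_{n'}. Up to the
    constant involving log p0, this is the test log-predictive density.
    """
    return -np.mean(log_p0 - log_phat)

# Hypothetical held-out log-densities for n' = 4 test points.
log_p0 = np.array([-1.2, -0.7, -2.1, -1.5])
log_phat = np.array([-1.4, -0.9, -2.0, -1.8])
print(test_log_predictive_gap(log_p0, log_phat))  # about -0.15
```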
We briefly outline the derivation of (10) as it will inform the narrative on the VGE in the next section. First, for the normalized Bayes free energy, under the Fundamental Conditions I and II discussed in Appendix A, it was proven in (Watanabe, 2009, Main Theorem 6.2) that the following asymptotic expansion holds
\[\bar{F}(n)=\lambda(p,p_{0},\varphi)\log n+(m-1)\log\log n+O_{P}(1). \tag{11}\]
The result in (10) is then proven using the above expansion together with the well known relationship between the Bayes generalization error and the (normalized) Bayes free energy (Watanabe, 2009, Theorem 1.2):
\[\mathbb{E}G_{n}(p(y|x,\mathcal{D}_{n}))=\mathbb{E}\bar{F}(n+1)-\mathbb{E}\bar{F}(n). \tag{12}\]
where on the right-hand side, the first expectation is with respect to dataset \(\mathcal{D}_{n+1}\) and the second \(\mathcal{D}_{n}\). Due to this relationship, the Bayes free energy shares the _same_ coefficient as the Bayes generalization error.
## 4 A tale of two variational coefficients
Most applications of variational inference in BNNs labor under the following implicit assumptions: 1) optimizers of the variational objective in (2) have good induced predictive distributions, and 2) two variational families can be compared according to the performance of their induced predictive distributions. A look at the experimental sections of various works on variational BNNs reveals that these assumptions underlie standard practice (Blundell et al., 2015; Rezende and Mohamed, 2015; Louizos and Welling, 2016, 2017; Osawa et al., 2019; Swiatkowski et al., 2020). We shall see in this section that these two assumptions do not always hold.
Let us associate to a variational family \(\mathcal{Q}\) its **normalized minimum variational free energy (MVFE)**,
\[\bar{F}^{*}_{vb}(n):=\min_{q\in\mathcal{Q}}\bar{F}_{vb}(n).\]
Asymptotics for the MVFE have so far been addressed on a case-by-case basis for certain models and certain variational families, e.g., Gaussian mean-field variational families for reduced rank regression Nakajima and Watanabe (2007), nonnegative matrix factorization Kohijima and Watanabe (2017), Hayashi (2020), normal mixture model Watanabe and Watanabe (2006), hidden Markov model Hosino et al. (2005). In all the cited instances above, the asymptotic expansion of the **average normalized MVFE** takes the form
\[\mathbb{E}\bar{F}_{vb}^{*}(n)=\lambda_{\text{vfe}}\log n+o(\log n) \tag{13}\]
where the expectation is taken over datasets \(\mathcal{D}_{n}\). Note that \(\lambda_{\text{vfe}}\geq\lambda(p,p_{0},\varphi)\) necessarily (Nakajima and Watanabe, 2007). Because the **variational approximation gap**,
\[\mathcal{G}:=\bar{F}_{vb}^{*}(n)-\bar{F}(n), \tag{14}\]
is the difference of the (normalized) MVFE and the (normalized) Bayes free energy, the gap boils down to the difference between two coefficients:
\[\mathcal{G}\approx(\lambda_{\text{vfe}}-\lambda(p,p_{0},\varphi))\log n.\]
Now, under some natural conditions\({}^{3}\), the VGE admits the asymptotic expansion,
Footnote 3: The predictive distribution should be consistent as \(n\) goes to infinity; see the discussion in Chapter 13 of Nakajima et al. (2019).
\[\mathbb{E}G_{n}(p_{vb}(y|x,\mathcal{D}_{n}))=\lambda_{\text{vge}}/n+o(1/n). \tag{15}\]
Importantly, \(\lambda_{\text{vge}}\neq\lambda_{\text{vfe}}\) in general, e.g., Nakajima and Watanabe (2007). This is in contrast to the Bayesian posterior predictive distribution in (1), where the coefficient of the leading \(O(1/n)\) term is precisely the RLCT, \(\lambda(p,p_{0},\varphi)\). That \(\lambda_{\text{vge}}\neq\lambda_{\text{vfe}}\) results from the fact that the relationship (12) is not valid when a variational approximation to the posterior is employed.
In Figure 2a, we illustrate the three possible configurations of the coefficients \(\lambda(p,p_{0},\varphi),\lambda_{\text{vfe}},\lambda_{\text{vge}}\) for a given variational family \(\mathcal{Q}\) and a model-truth-prior triplet. When \(\lambda_{\text{vfe}}>\lambda_{\text{vge}}\), we call the setting **favorable** since minimizing the VFE offers control over the VGE. When \(\lambda_{\text{vfe}}<\lambda_{\text{vge}}\), we call the setting **unfavorable** since achieving even a small variational approximation gap could result in an induced predictive distribution with high generalization error. The distribution of favorable versus unfavorable settings in practice is unclear, as the exact relationship between \(\lambda_{\text{vfe}}\) and \(\lambda_{\text{vge}}\) has been derived in only a limited number of works. The results in Nakajima and Watanabe (2007) on linear neural networks, aka reduced rank regression, show there are both favorable and unfavorable settings depending on the input and output dimensions, the number of hidden units, and a rank measurement on the truth.
Note that even in favorable settings, we must be careful when comparing two variational families \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\). Figure 2b illustrates a scenario where the family \(\mathcal{Q}_{1}\) incurs a smaller variational approximation gap than \(\mathcal{Q}_{2}\), but the induced predictive distribution of \(\mathcal{Q}_{1}\) has a higher \(\lambda_{\text{vge}}\) than that of \(\mathcal{Q}_{2}\). This shows that comparing different variational approximations by their test log predictive density is fraught with potential misinterpretations. In order to control the downstream predictive performance, it is thus important to find a variational family with a small approximation gap, so that we can inherit (and sometimes even beat!) the predictive advantages of the exact Bayes posterior predictive distribution (1), i.e., achieve \(\lambda_{\text{vge}}<\lambda(p,p_{0},\varphi)\).
## 5 Related work
Although the perspective on offer here - that the discrepancy between test log predictive density and the variational objective _amounts to the relationship between two variational coefficients_ - is novel, we are not the first to point out this general phenomenon in variational inference Yao et al. (2018), Huggins et al. (2020), Deshpande et al. (2022), Dhaka et al. (2020). This phenomenon is also documented in the specific setting of variational inference for BNNs Heek (2018), Yao et al. (2019), Krishnan and Tickoo (2020), Foong et al. (2020). For instance, Foong et al. (2020) demonstrated in experiments that optimizing the ELBO may not lead to accurate predictive means or variances.
Another area of active research in variational BNNs is the design of the variational family itself. For the large part, the mean-field family of fully factorized Gaussian distributions is still predominant in the general practice of variational inference (Graves, 2011; Blundell et al., 2015; Hernandez-Lobato et al., 2016; Li and Turner, 2016; Khan et al., 2018; Sun et al., 2019). The mean-field assumption is mostly adopted for computational ease, though the limitations are well known (MacKay, 1992; Coker et al., 2022). Moving beyond mean-field Gaussian, we can find works that make use of more realistic covariance structures (Louizos and Welling, 2016; Zhang et al., 2018) or more expressive approximating families, e.g., via normalizing flows (Louizos and Welling, 2017; Papamakarios et al., 2021).
Finally, we note there have been a few recent works that recognize the non-identifiability of deep learning models Moore (2016), Pourzanjani et al. (2017), Kurle et al. (2022). These works however seem to treat the non-identifiability as an issue to be fixed.
## 6 Methodology
To achieve a good variational approximation, conventional wisdom says to make \(\mathcal{Q}\) as "expressive" as possible. We will approach the design of the variational family in a more principled manner using SLT. To this end, we rely on recent work in Bhattacharya et al. (2020) which leveraged SLT to produce an _idealized_ variational family as follows. Let \(\mathcal{Q}_{0}\) be a family consisting of generalized gamma distributions in \(\mathbb{R}^{d}\):
\[\mathcal{Q}_{0}=\{q_{0}(\xi|\boldsymbol{\lambda},\boldsymbol{k},\boldsymbol{ \beta})=\prod_{j=1}^{d}q_{0}^{j}(\xi_{j}|\lambda_{j},k_{j},\beta_{j})\} \tag{16}\]
where
\[q_{0}^{j}(\xi_{j}|\lambda_{j},k_{j},\beta_{j})\propto\xi_{j}^{2k_{j}\lambda_{j }-1}\exp(-\beta_{j}\xi_{j}^{2k_{j}})1_{[0,1]}(\xi_{j})\]
for \(\boldsymbol{\lambda}\in\mathbb{R}_{>0}^{d},\boldsymbol{k}\in\mathbb{R}_{>0}^{d},\boldsymbol{\beta}\in(0,\infty)^{d}\). **Henceforth, let \(g:=g_{\alpha}\) where \(\alpha\) is such that \(M_{\alpha}\) is an essential chart.** In other words, we are fixing a resolution map \(g\), working in a fixed essential chart domain, and a coordinate \(\xi\) on that domain that makes \(K(g(\xi))\) a monomial as a function from \(\mathbb{R}^{d}\) to \(\mathbb{R}\). The idealized variational family of Bhattacharya et al. (2020) is given as the pushforward of base distributions \(q_{0}\in\mathcal{Q}_{0}\) by said map \(g\):
\[\mathcal{Q}=\{g\sharp q_{0}:q_{0}\in\mathcal{Q}_{0}\}. \tag{17}\]
We refer to this as an _idealized_ variational family for the simple fact that the resolution map \(g\), though its existence is guaranteed, is almost never tractable except in the simplest model-truth-prior triplets. Also note that although the family \(\mathcal{Q}_{0}\) is mean-field, (17) is _not_.
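For intuition, here is a small sketch of the base family (16): the unnormalized per-coordinate log-density on \([0,1]\), plus a sampler exploiting the fact (a short change of variables shows this) that \(X=\xi^{2k}\) follows a \(\mathrm{Gamma}(\lambda,\beta)\) distribution truncated to \([0,1]\); parameter values are illustrative.

```python
import numpy as np
from scipy.stats import gamma

def log_q0_unnorm(xi, lam, k, beta):
    """Unnormalized log-density of one coordinate of (16) on [0, 1]:
    q0(xi) ~ xi^{2 k lam - 1} * exp(-beta * xi^{2k})."""
    return (2 * k * lam - 1) * np.log(xi) - beta * xi ** (2 * k)

def sample_q0(n_samples, lam, k, beta, rng):
    # If xi ~ q0, then x = xi^{2k} ~ Gamma(shape=lam, rate=beta) truncated
    # to [0, 1]; sample x by inverse-CDF on [0, F(1)], then map back.
    cdf_at_1 = gamma.cdf(1.0, a=lam, scale=1.0 / beta)
    u = rng.uniform(0.0, cdf_at_1, size=n_samples)
    x = gamma.ppf(u, a=lam, scale=1.0 / beta)
    return x ** (1.0 / (2.0 * k))

rng = np.random.default_rng(0)
xi = sample_q0(5, lam=0.75, k=2.0, beta=100.0, rng=rng)  # beta ~ sample size n
print(xi, log_q0_unnorm(xi, 0.75, 2.0, 100.0))
```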
To study the variational approximation gap incurred by the idealized family (17), we will first introduce some definitions to help us rewrite the gap \(\mathcal{G}\) in notation that is consistent with Bhattacharya et al. (2020). Define
\[\Psi_{n}(q_{0})=-\mathbb{E}_{q_{0}}nK_{n}(g(\xi))-\mathrm{KL}(q_{0}(\xi)\parallel \varphi(g(\xi))|g^{\prime}(\xi)|) \tag{18}\]
See Appendix C for the derivation that the variational approximation gap in (14) is equivalent to
\[\mathcal{G}=\log\bar{Z}(n)-\sup_{q_{0}\in\mathcal{Q}_{0}}\Psi_{n}(q_{0}). \tag{19}\]
Following Bhattacharya et al. (2020), we consider the deterministic approximation gap corresponding to (19). This is accomplished by replacing \(K_{n}\) with \(K\), leading to
\[\Psi(q_{0}):=-\mathbb{E}_{q_{0}}nK(g(\xi))-\mathrm{KL}(q_{0}(\xi)\parallel\varphi(g(\xi))|g^{\prime}(\xi)|) \tag{20}\]
and
\[\bar{Z}_{K}(n):=\int_{W}e^{-nK(w)}\varphi(w)\,dw.\]
Figure 2: We show in these schematics that evaluating variational approximations to BNNs according to their induced predictive distribution is fraught with potential misinterpretations.
For our theoretical investigation, we shall concern ourselves with the _deterministic_ variational approximation gap,
\[\mathcal{G}_{K}:=\log\bar{Z}_{K}(n)-\sup_{q_{0}\in\mathcal{Q}_{0}}\Psi(q_{0}). \tag{21}\]
Techniques for generalizing the main result, Theorem 6.1, which concerns \(\mathcal{G}_{K}\), to the stochastic setting can be found in Plummer (2021, Section 5.3.3).
We will appeal to large-\(n\) asymptotics to study the behavior of (21). Note that the study and deployment of BNNs is no stranger to large-\(n\) asymptotics, both in early MacKay (1992) and recent Ritter et al. (2018) works. We proceed under this tradition, but deviate from the crude (and incorrect) Laplace approximation that is often employed and instead use the correct asymptotics provided by SLT.
### Model evidence in singular models
To study the gap in (21), we begin by examining the asymptotic behavior of \(\bar{Z}_{K}(n)\). When the model is regular, we need not bother with SLT and may find, to leading order, \(\bar{Z}_{K}(n)=\varphi(w_{0})\sqrt{\frac{(2\pi)^{d}}{\det H(w_{0})}}n^{-d/2}\) via the Laplace approximation, where \(w_{0}\) is the true parameter and \(H(w_{0})\) is the Hessian of \(K\) at \(w_{0}\). This approximation, however, is egregiously inappropriate for strictly singular models, in particular neural networks (Wei et al., 2022). Nonetheless, perhaps due to a sense that no tractable alternatives exist, the Laplace approximation is seeing a resurgence of application in Bayesian deep learning (Ritter et al., 2018; Immer et al., 2021).
For strictly singular models, the quantities \(Z(n),\bar{Z}(n)\) and \(\bar{Z}_{K}(n)\) manifest as singular integrals, i.e., integrals of the form \(\int_{W}e^{-nf(w)}\varphi(w)\,dw\) where \(W\subset\mathbb{R}^{d}\) is a compact semi-analytic subset, and \(f\) and \(\varphi\) are real analytic functions. The behavior of a singular integral depends critically on the zeros of \(f\). According to Theorem 6.7 in Watanabe (2009), we find to leading order:
\[\bar{Z}_{K}(n)=C(p,p_{0},\varphi)n^{-\lambda(p,p_{0},\varphi)}(\log n)^{m(p,p_ {0},\varphi)-1}, \tag{22}\]
where \(C(p,p_{0},\varphi)\) is a constant independent of \(n\) that we shall call the **leading coefficient** following the terminology of Lin (2011). Note that since \(\lambda(p,p_{0},\varphi)=d/2\) and \(m(p,p_{0},\varphi)=1\) in regular models, (22) is a true generalization of the Laplace approximation, holding for both regular and strictly singular models.
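To see what (22) implies numerically, here is a small sketch tabulating the leading-order \(\log\bar{Z}_{K}(n)\); the leading coefficient, RLCT, and multiplicity values below are hypothetical.

```python
import numpy as np

def log_evidence_leading_order(n, log_C, lam, m=1):
    """Leading-order log of (22): log C - lam*log(n) + (m-1)*log(log(n))."""
    return log_C - lam * np.log(n) + (m - 1) * np.log(np.log(n))

n = np.array([1e2, 1e3, 1e4])
d = 10
# Hypothetical singular model: RLCT well below d/2, multiplicity 2.
print(log_evidence_leading_order(n, log_C=0.0, lam=1.5, m=2))
# Regular model of the same dimension: lam = d/2, m = 1 (Laplace case).
print(log_evidence_leading_order(n, log_C=0.0, lam=d / 2, m=1))
```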
### Bounding \(\mathcal{G}_{K}\)
We show in Lemma D.2 in Appendix D, that for large \(n\), the following bound holds
\[\sup_{q_{0}\in\mathcal{Q}_{0}}\Psi(q_{0})\geq-\lambda(p,p_{0},\varphi)\log n+C \tag{23}\]
where \(C\) is the constant free of \(n\) in Lemma D.2. This result is in the same spirit as (Bhattacharya et al., 2020, Theorem 3.1), except that we have improved on the tightness of their lower bound, which in turn allows us to devise better initialization of the variational parameters. With Lemma D.2, we are now in a position to characterize the (deterministic) variational approximation gap, \(\mathcal{G}_{K}\).
**Theorem 6.1** (Deterministic variational approximation gap).: _Suppose the model-truth-prior triplet \((p,p_{0},\varphi)\) is such that Theorem 2.1 holds. Let \(g=g_{\alpha}\) where \(\alpha\) is such that \(M_{\alpha}\) is an essential chart. On this essential chart, write the local RLCTs \(\tilde{\lambda}_{j}=\frac{\tilde{h}_{j}+1}{2\tilde{k}_{j}},j=1,\ldots,d\) in ascending order so that \(\tilde{\lambda}_{1}\) is the RLCT of the triplet \((p,p_{0},\varphi)\), i.e., \(\tilde{\lambda}_{1}=\lambda(p,p_{0},\varphi)\). If the multiplicity of the triplet is 1, we have, for \(n\) large, \(\mathcal{G}_{K}\leq\log C(p,p_{0},\varphi)-C+o(1),\) where the constant \(C\) is as given in Lemma D.2._
All that is needed for the proof of Theorem 6.1 is to put together the lower bound in Lemma D.2 with the fact that \(\bar{Z}_{K}(n)\) admits the asymptotic expansion in (22). Even when \(m(p,p_{0},\varphi)\neq 1\), there may be finite \(n\) situations when the two terms \((m(p,p_{0},\varphi)-1)\log\log n\) and \(\log C(p,p_{0},\varphi)-C\) are comparable. In such settings, the idealized variational family \(\mathcal{Q}\) in (17) could still perform well.
### Learning to desingularize
In the preceding section, we studied the deterministic variational approximation gap of an idealized variational family. Although Hironaka proved the existence of a resolution map and showed that it can be found by recursive blow-ups, known algorithms for finding such resolutions, other than in a few exceptional cases (such as toric resolutions), have complexity that vastly exceeds existing computational capabilities. Thus we are precluded from directly applying the idealized variational family.
This leads us to consider _learning_ the resolution map \(g\) using an invertible architecture \(G_{\theta}\) resulting in the variational family
\[\hat{\mathcal{Q}}=\{G_{\theta}\sharp q_{0}(\boldsymbol{\lambda},\boldsymbol{k}, \boldsymbol{\beta}):\boldsymbol{\beta}=(n,\beta_{2},\ldots,\beta_{d})\}. \tag{24}\]
If the network is expressive enough, we can hope that \(g\in\{G_{\theta}:\theta\}\), which would lead \(\hat{\mathcal{Q}}\) to enjoy the theoretical guarantee provided in Theorem 6.1. Note in (24) the first coordinate of \(\boldsymbol{\beta}\) has been set to the sample size \(n\). The proof of Lemma D.2 reveals why we do so. Specifically, it is shown that the following parameters in \(q_{0}\) can achieve \(\Psi(q_{0})=-\lambda(p,p_{0},\varphi)\log n+C\):
\[\lambda_{1}=\lambda(p,p_{0},\varphi),\quad k_{1}=\tilde{k}_{1},\quad\beta_{1}=n\]
where \(\tilde{k}_{1}\) is as in Theorem 6.1. Note that \(\lambda(p,p_{0},\varphi)\) and \(\tilde{k}_{1}\) are unknown, but \(n\) is certainly known.
It might be readily apparent at this point that we have in \(\hat{\mathcal{Q}}\) a standard normalizing flow, albeit with the base distribution given by the generalized gamma distribution. To ease the computational cost, we fix the variational parameters \(\boldsymbol{\lambda},\boldsymbol{k}\), and \(\boldsymbol{\beta}_{-1}\) (all entries of \(\boldsymbol{\beta}\) except the first) and absorb the learning of their optimal values into the invertible transformation \(G_{\theta}\). Note that this is in line with standard practice, whereby normalizing flows adopt parameter-less base distributions.
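As a minimal sketch of sampling from \(\hat{\mathcal{Q}}\) in (24): draw \(\xi\) from the frozen base distribution (with \(\beta_{1}\) set to the sample size \(n\)) and push it through a stack of invertible couplings. The `AffineCoupling` below is a generic stand-in, not the authors' exact implementation from Appendix E.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """A generic affine coupling layer: transforms the second half of the
    coordinates conditioned on the first half; invertible by construction."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, xi):
        a, b = xi[:, :self.half], xi[:, self.half:]
        scale, shift = self.net(a).chunk(2, dim=1)
        return torch.cat([a, b * torch.exp(scale) + shift], dim=1)

def sample_variational(base_sampler, couplings, n_samples):
    # base_sampler draws xi from q0 with beta_1 frozen at the sample size n,
    # e.g., the truncated-gamma sampler sketched earlier.
    xi = base_sampler(n_samples)          # shape (n_samples, d)
    for c in couplings:
        xi = c(xi)                        # w = G_theta(xi)
    return xi

d, hidden = 4, 16
couplings = [AffineCoupling(d, hidden) for _ in range(2)]  # a "2_16"-style config
base = lambda m: torch.rand(m, d) * 0.1   # placeholder for the q0 sampler
print(sample_variational(base, couplings, 3).shape)
```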
To summarize, recognizing that the variational approximation gap can be theoretically studied using SLT allowed for the design of a principled variational family which incurs a variational approximation gap that is independent of sample size \(n\), to leading order. To the best of our knowledge, no existing works on normalizing flows for BNNs theoretically address the variational approximation gap. Furthermore, our results offer a new perspective on the benefits of using normalizing flows for variational inference in BNNs.
## 7 Experiments
In the following set of experiments\({}^{4}\), we will isolate and examine the effect of the base distribution. Specifically, we compare the _generalized gamma base distribution_ to the commonly-adopted _Gaussian base distribution_, holding the architecture of \(G_{\theta}\) fixed when we do so. At the outset, we expect that when \(G_{\theta}\) is expressive enough, the effect of the base distribution will be small. However, when \(G_{\theta}\) is more limited (and thus less computationally expensive), we conjecture the generalized gamma base distribution can "pick up the slack" and outperform the Gaussian base distribution.
Footnote 4: The code to reproduce our results is available at https://github.com/suswei/BNN_via_SLT.
In line with our earlier discussion, the parameters of the base distributions are frozen throughout training; see Appendix E for the initialization used. The invertible network \(G_{\theta}\) is implemented as a sequence of affine coupling transformations. We denote by base_numcouplingpairs_numhidden the variational family that results from pushing forward the base distribution through \(G_{\theta}\) with the said configuration; see Appendix E for a complete description of the implementation. We consider a total of four different expressivity levels of \(G_{\theta}\), from least to most: 2_4, 2_16, 4_4, 4_16.
The expression for the ELBO objective corresponding to each of the base distributions is given in (29) and (30) in Appendix E. Details of the training procedure such as epochs, learning rate, and optimizer are also given there.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline model & \(H\) & \(d\) & \(\lambda(p,p_{0},\varphi)\) & input dim & output dim \\ \hline relu & 3 & 42 & - & 13 & 1 \\ & 7 & 98 & - & 13 & 1 \\ & 16 & 224 & - & 13 & 1 \\ & 40 & 560 & - & 13 & 1 \\ rr & 2 & 144 & 5.0 & 1 & 2 \\ & 7 & 119 & 35.0 & 10 & 7 \\ & 10 & 202 & 65.0 & 13 & 10 \\ & 16 & 560 & 152.0 & 19 & 16 \\ tanh & 15 & 30 & - & 1 & 1 \\ & 50 & 100 & - & 1 & 1 \\ & 115 & 220 & - & 1 & 1 \\ & 280 & 560 & - & 1 & 1 \\ tanh (zero mean) & 15 & 30 & 1.93 & 1 & 1 \\ & 50 & 100 & 3.53 & 1 & 1 \\ & 115 & 220 & 5.36 & 1 & 1 \\ & 280 & 560 & 8.36 & 1 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The various model-truth-prior triplets considered in experiments. The truth is realizable. The prior over network weights is standard Gaussian. The RLCT is only known in some of the cases.
Let \(\hat{q}^{*}\) be the variational distribution obtained at the end of training. Comparison of the base distributions, and hence of the two different normalizing flows, will be made according to the normalized MVFE, \(\bar{F}_{vb}^{*}(n)\), and the VGE, \(G_{n}(p_{vb}(y|x,\mathcal{D}_{n}))\). (For both, the lower, the better.) We will also estimate the coefficients \(\lambda_{\text{vfe}}\) in (13) and \(\lambda_{\text{vge}}\) in (15); see Appendix E.
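A minimal sketch of one way such coefficients can be estimated, assuming MVFE values recorded across several sample sizes: regress \(\bar{F}_{vb}^{*}(n)\) on \(\log n\), so the slope estimates \(\lambda_{\text{vfe}}\) in (13) (analogously, regress VGE on \(1/n\) for \(\lambda_{\text{vge}}\) in (15)); the numbers are placeholders.

```python
import numpy as np

# Hypothetical (n, MVFE) pairs recorded at the end of training.
ns = np.array([100, 200, 400, 800, 1600])
mvfe = np.array([9.5, 11.0, 12.3, 13.8, 15.2])

# Fit MVFE ~ lambda_vfe * log(n) + c; the slope estimates lambda_vfe in (13).
lam_vfe, intercept = np.polyfit(np.log(ns), mvfe, deg=1)
print(lam_vfe, intercept)
```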
We consider four model-truth-prior triplets, summarized in Table 1, in which the truth is always realizable. In all four triplets, the prior over the neural network weights is chosen to be the standard Gaussian, following conventional practice in BNNs (Neal, 1996; Bishop, 2006). Note that priors for BNNs are notoriously difficult to design and remain an area of active research (Sun et al., 2019; Nalisnick et al., 2021).
### Results
Due to space constraints, we only show a subset of the results in Figure 3; complete results can be found in Appendix E. In the first column of Figure 3, we plot \(\log n\) versus the normalized MVFE. First, we observe that when \(G_{\theta}\) is not very expressive, the generalized gamma resoundingly outperforms the Gaussian base distribution for the reduced rank and ReLU experiments across all values of \(H\) in terms of achieving lower MVFE. (This can be better seen in Figure 8 in Appendix E.) On the other hand, as conjectured, when \(G_{\theta}\) is most expressive at the 4_16 configuration, the distinction in MVFE between the base distributions is still discernible but less dramatic, see Figure 10. Interestingly, for the \(\tanh\) triplet, the Gaussian base distribution sometimes achieves lower MVFE depending on the configuration of \(G_{\theta}\).
In the second column of Figure 3, we plot \(1/n\) versus the VGE. The results empirically verify the issues we highlighted in Section 4. In terms of VGE, the generalized gamma is not uniformly better than the Gaussian base distribution for the ReLU experiment, contrary to what the corresponding MVFE plots suggest. Only for the reduced rank experiment do we see a one-to-one correspondence between MVFE and VGE. Note that the VGE fit is particularly poor for the Gaussian 2_4 and 2_16 configurations because these variational approximations are themselves poor. Next, note that the scenario in Figure 2b is borne out by some of the \(\tanh\) experiments. Take, for instance, \(\tanh\) at \(H=115\) for the 2_4 configuration. Judging by MVFE alone, the generalized gamma base is worse than the Gaussian base, but the corresponding VGE curves show the opposite; see the (3,3) subplot in Figures 8 and 9.
Figure 3: MVFE versus \(\log n\) is displayed in the first column and VGE versus \(1/n\) in the second. Each row corresponds to a different model-truth-prior triplet. Line color indicates the expressiveness of the network \(G_{\theta}\), darker being more expressive. Error bars represent the mean, min, and max over 30 draws of the training set \(\mathcal{D}_{n}\). The dashed line is the least squares fit, with the \(\lambda_{\text{vfe}}\) and \(\lambda_{\text{vge}}\) coefficients and their \(R^{2}\) values displayed in the legend.
## 8 Discussion
We conclude by discussing some limitations of the current work. On the empirical front, the reader may have noticed that our experiments did not involve truly deep BNNs. Strictly speaking, this is not a limitation of the proposed method but rather a limitation of the scalability of normalizing flows for approximating deep BNNs. We expect the proposed methodology to benefit from orthogonal research advances in normalizing flow architectures.
On the theoretical side, it may be of interest to flesh out the magnitude of \(\log C(p,p_{0},\varphi)-C\) in Theorem 6.1. The general expression for \(C(p,p_{0},\varphi)\), although known in special cases (Lin, 2011, Corollary 5.9), has a complex dependency on \(K(w)\) and the prior. However, we do expect that the leading coefficient can be bounded with some effort. Relatedly, it is important to recognize that Theorem 6.1 only concerns the variational approximation gap of the idealized family in (17). Deriving an analogous result for the Gaussian base distribution would make for interesting future work.
We are optimistic that natural conditions on the model-truth-prior triplet and the variational family should allow for general statements about MVFE asymptotic expansions. Further efforts into studying the asymptotics of the MVFE will also advance knowledge of the relationship between \(\lambda_{\text{vfe}}\) and \(\lambda_{\text{vge}}\). In the meantime, our results here show that it is all the more important to pay attention to the variational approximation gap if we wish to have useful downstream predictions.
## Acknowledgements
We thank Daniel Murfet for helpful discussions. SW was supported by the ARC Discovery Early Career Researcher Award (DE200101253). This material is also based on work that is partially funded by an unrestricted gift from Google. |
2303.13077 | An Efficient Knowledge Transfer Strategy for Spiking Neural Networks
from Static to Event Domain | Spiking neural networks (SNNs) are rich in spatio-temporal dynamics and are
suitable for processing event-based neuromorphic data. However, event-based
datasets are usually less annotated than static datasets. This small data scale
makes SNNs prone to overfitting and limits their performance. In order to
improve the generalization ability of SNNs on event-based datasets, we use
static images to assist SNN training on event data. In this paper, we first
discuss the domain mismatch problem encountered when directly transferring
networks trained on static datasets to event data. We argue that the
inconsistency of feature distributions becomes a major factor hindering the
effective transfer of knowledge from static images to event data. To address
this problem, we propose solutions in terms of two aspects: feature
distribution and training strategy. Firstly, we propose a knowledge transfer
loss, which consists of domain alignment loss and spatio-temporal
regularization. The domain alignment loss learns domain-invariant spatial
features by reducing the marginal distribution distance between the static
image and the event data. Spatio-temporal regularization provides dynamically
learnable coefficients for domain alignment loss by using the output features
of the event data at each time step as a regularization term. In addition, we
propose a sliding training strategy, which gradually replaces static image
inputs probabilistically with event data, resulting in a smoother and more
stable training for the network. We validate our method on neuromorphic
datasets, including N-Caltech101, CEP-DVS, and N-Omniglot. The experimental
results show that our proposed method achieves better performance on all
datasets compared to the current state-of-the-art methods. Code is available at
https://github.com/Brain-Cog-Lab/Transfer-for-DVS. | Xiang He, Dongcheng Zhao, Yang Li, Guobin Shen, Qingqun Kong, Yi Zeng | 2023-03-23T07:14:48Z | http://arxiv.org/abs/2303.13077v2 | # Improving the Performance of Spiking Neural Networks
###### Abstract
Spiking neural networks (SNNs) have rich spatial-temporal dynamics, which make them suitable for processing neuromorphic, event-based data. However, event-based datasets are usually less annotated than the static datasets used in traditional deep learning. This small data scale makes SNNs prone to overfitting and limits their performance. To enhance the generalizability of SNNs on event-based datasets, we propose a knowledge-transfer framework that leverages static images to assist in the training on neuromorphic datasets. Our method introduces a domain loss and a semantic loss to exploit both the domain-invariant and the unique features of these two domains, providing SNNs with more generalized knowledge for subsequent targeted training on neuromorphic data. Specifically, the domain loss aligns the feature space and aims to capture common features between static and event-based images, while the semantic loss emphasizes that the differences between samples from different categories should be as large as possible. Experimental results demonstrate that our method outperforms existing methods on all mainstream neuromorphic vision datasets. In particular, we achieve significant performance improvements of 2.7% and 9.8% when using only 10% of the training data of the CIFAR10-DVS and N-Caltech101 datasets, respectively.
## 1 Introduction
Compared to DVS data, RGB data are relatively easy to obtain. When exploring a new environment, humans use their learned knowledge and compare it with the new environment in order to adapt better. This inspires us to leverage static images to assist SNN training on neuromorphic datasets.
In this paper, we propose a knowledge transfer framework for SNNs. Since the DVS responds to variations in light intensity, we convert the widely available RGB static images into HSV color space and take the luminance (value) channel to help SNNs learn on DVS data. To extract information useful for DVS data, we propose a domain loss and a semantic loss. Specifically, the domain loss learns domain-invariant features by reducing the joint distribution distance between the source domain (static images) and the target domain (DVS data), while the semantic loss learns class-specific features by enlarging the distance between samples of different classes. As shown in Fig. 1, by introducing these two losses, the locations that matter most for the prediction shift to object features, such as the body of a bird or the head of a dog or cat. Fig. 1 shows that our method learns the common features and obtains DVS features that facilitate correct classification.
The main contributions of this paper can be summarized as follows:
1. We propose a single-stage knowledge transfer framework based on annotated static datasets. To the best of our knowledge, we are the first to transfer the knowledge of spiking neural networks on static datasets to DVS datasets, which enhances the generalization ability of SNNs.
2. We propose a domain loss and a semantic loss to enhance the learning of invariant and unique features between static and event data. Our results show that the learned features facilitate the training of SNNs on DVS data, and that the models trained with domain loss and semantic loss are flatter around local minima.
3. We conduct experiments on the commonly used neuromorphic datasets CIFAR10-DVS, N-Caltech101, and N-Omniglot to verify the effectiveness of our method. The experimental results show that the proposed method outperforms the state-of-the-art methods on all datasets. Notably, when we gradually reduce the number of training samples in the neuromorphic datasets, our method brings even larger performance improvements to SNNs.
## 2 Related Work
In order to solve the problem of limited labeled DVS data, previous works have tried domain adaptation, data augmentation and efficient training methods.
**Domain adaptation using static data.** Messikommer et al. (2022) use a generative event model to classify event features into content and motion features, enabling efficient matching between the latent spaces of events and images, and training the model directly with labeled images and unlabeled event data. Li et al. (2018) combine temporal coding and deep representation: the deep representation learned by an initially optimized CNN is effectively transferred to the event stream classification task, and the performance is further improved by fine-tuning the CNN with a certain number of training samples. Zhao et al. (2022) train a convolutional transformer network for event-based classification tasks using large-scale labeled image data via a passive unsupervised domain adaptation (UDA) algorithm. These works are related to ours; in contrast, we exploit the invariant and unique features between static and event data through cross-domain losses. Such features provide generalized prior knowledge for the SNN, thus facilitating further training of the network. This enhances the original SNN structure instead of pre-training a new network model with more parameters.
Figure 1: Class Activation Mapping of CIFAR10 and CIFAR10-DVS. Three categories are selected for display; the first two rows under each category show static pictures, and the last row shows neuromorphic data integrated into frames. The three columns, from left to right, show the results of the original picture, the baseline, and our method, respectively.
**Event-based augmentation.** Li et al. (2022) propose neuromorphic data augmentation to stabilize the training of SNNs and improve their generalization. Shen et al. (2022) design an augmentation strategy for event stream data, performing the mixing of different event streams with a Gaussian mixture model while assigning labels to the mixed samples by calculating the relative distances of the event streams.
**Efficient SNN training.** Zhan et al. (2021) analyze the plausibility of centered kernel alignment (CKA) as a domain distance measure, relative to the maximum mean discrepancy (MMD), in deep SNNs. Deng et al. (2022) introduce the time-efficient training (TET) method, which allows the surrogate gradient method for SNNs to converge to a flatter minimum.
## 3 Preliminaries
**Neuron model.** We choose the Leaky Integrate-and-Fire (LIF) model (Dayan and Abbott, 2005), the most commonly used neuron model. The update of the membrane potential \(\mathbf{u}\) can be written in the following discrete form:
\[\mathbf{u}^{t+1,l}=\tau\mathbf{u}^{t,l}+\mathbf{W}^{l}\mathbf{s}^{t,l-1}, \tag{1}\]
where \(\tau\) is the leaky factor and \(\mathbf{u}^{t,l}\) denotes the membrane potential of the neurons in layer \(l\) at time step \(t\). \(\mathbf{W}^{l}\) and \(\mathbf{s}^{l}\) represent the weight parameters of layer \(l\) and the spikes fired in layer \(l\), respectively.
The membrane potential accumulates with the input until a given threshold \(V_{th}\) is exceeded; the neuron then fires a spike, and the membrane potential \(\mathbf{u}^{t,l}\) is reset to zero. This can be expressed as
\[\mathbf{s}^{t,l}=H\left(\mathbf{u}^{t,l}-V_{th}\right) \tag{2}\] \[\mathbf{u}^{t+1,l}=\tau\mathbf{u}^{t,l}\cdot\left(1-\mathbf{s}^{t,l}\right)+ \mathbf{W}^{l}\mathbf{s}^{t+1,l-1}, \tag{3}\]
where \(H\) denotes the Heaviside step function. In this paper, the leaky factor \(\tau\) is set to 0.5 and the threshold \(V_{th}\) to 0.5.
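To make the update rule concrete, the following is a minimal PyTorch sketch of the discrete LIF dynamics in Eqs. (1)-(3); the function name and the input layout (T, batch, features) are our own assumptions, and the non-differentiable Heaviside firing would be replaced by the surrogate gradient of Sec. 4.5 during training.

```python
import torch

def lif_forward(x, weight, tau=0.5, v_th=0.5):
    """Discrete LIF dynamics of Eqs. (1)-(3) for one linear layer.

    x: input spikes of shape (T, batch, in_features).
    weight: (out_features, in_features) weight matrix W^l.
    Returns output spikes of shape (T, batch, out_features).
    """
    T, batch = x.shape[0], x.shape[1]
    u = torch.zeros(batch, weight.shape[0])  # membrane potential u^{0,l}
    s = torch.zeros_like(u)                  # spikes s^{0,l}
    out = []
    for t in range(T):
        # Eq. (3): leak with hard reset, plus new presynaptic input
        u = tau * u * (1.0 - s) + x[t] @ weight.t()
        # Eq. (2): Heaviside firing against the threshold V_th
        s = (u >= v_th).float()
        out.append(s)
    return torch.stack(out)
```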
**Processing of neuromorphic data.** The Dynamic Vision Sensor (DVS) triggers an event at a specific pixel when it detects a significant change in brightness. Formally, this can be expressed as
\[L(x,y,t)-L(x,y,t-\Delta t)\geq pC, \tag{4}\]
where \(x\) and \(y\) denote the pixel location and \(\Delta t\) is the time since the last triggered event at \((x,y)\). \(p\) is the polarity of the brightness change and \(C\) is a constant contrast threshold. In this way, the DVS triggers a number of events \(\varepsilon\) during a time interval in the form \(\varepsilon=\left\{\left(x_{i},y_{i},t_{i},p_{i}\right)\right\}_{i=1}^{N}\). Due to the large number of events, we integrate them into frames to facilitate processing, as in previous works (Wu et al., 2019; He et al., 2020; Fang et al., 2021; Shen et al., 2022). Specifically, the events are divided into \(T\) slices, and all events in each slice are accumulated. The \(j\)-th (\(0\leq j\leq T-1\)) integrated slice, \(E(j,x,y,p)\), is defined as
\[E(j,x,y,p)=\sum_{i=j_{s}}^{j_{e}-1}\mathbf{1}_{x,y,p}\left(x_{i},y_{i},p_{i}\right) \tag{5}\]
\[j_{s}=\lfloor\frac{N}{T}\rfloor\cdot j,\quad j_{e}=\lfloor\frac{N}{T}\rfloor \cdot(j+1), \tag{6}\]
where \(\mathbf{1}_{x,y,p}\left(x_{i},y_{i},p_{i}\right)\) is an indicator function, and \(j_{s}\) and \(j_{e}\) are the start and end indices of the events in the \(j\)-th slice.
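As a reference, here is a minimal NumPy sketch of the frame integration in Eqs. (5)-(6); it assumes the events are sorted by timestamp, polarities are encoded as {0, 1}, and pixel coordinates fit in an H x W grid.

```python
import numpy as np

def events_to_frames(events, T, H, W):
    """Integrate an event stream into T frames (Eqs. 5-6).

    events: array of shape (N, 4) with columns (x, y, t, p),
            assumed sorted by timestamp; polarity p in {0, 1}.
    Returns frames of shape (T, 2, H, W).
    """
    N = len(events)
    frames = np.zeros((T, 2, H, W), dtype=np.float32)
    step = N // T                                # floor(N / T)
    for j in range(T):
        js, je = step * j, step * (j + 1)        # slice boundaries (Eq. 6)
        chunk = events[js:je].astype(int)
        # accumulate event counts per (polarity, y, x) cell (Eq. 5)
        np.add.at(frames[j], (chunk[:, 3], chunk[:, 1], chunk[:, 0]), 1.0)
    return frames
```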
**Feature similarity measurement.** To measure the difference between static images and DVS data, we need to calculate the distance and similarity between them. The Hilbert-Schmidt Independence Criterion (HSIC) (Gretton et al., 2005) can be used to measure whether two sets of data are independent. For a Reproducing Kernel Hilbert Space \(\mathcal{H}\), all eigenfunctions of the kernel function constitute a set of orthogonal bases for the space, so we obtain the matrix form of HSIC under finite samples as follows
\[\mathrm{HSIC}(K,L)=\frac{1}{(n-1)^{2}}\operatorname{tr}(KJLJ), \tag{7}\]
where \(J\) is the centering matrix \(J_{n}=I_{n}-\frac{1}{n}11^{\mathrm{T}}\), \(I_{n}\) is the \(n\)-order identity matrix, and \(\operatorname{tr}\) denotes the trace of a matrix. A normalization of HSIC, i.e., centered kernel alignment (CKA) (Cortes et al., 2012; Cristianini et al., 2001), makes HSIC invariant to isotropic scaling. The formula can be written as:
\[\mathrm{CKA}(K,L)=\frac{\mathrm{HSIC}(K,L)}{\sqrt{\mathrm{HSIC}(K,K)\, \mathrm{HSIC}(L,L)}}. \tag{8}\]
Kornblith et al. (2019) introduced CKA as a similarity index to better measure the similarity of neural network representations. Let \(K_{ij}=k\left(\mathbf{x}_{i},\mathbf{x}_{j}\right)\) and \(L_{ij}=l\left(\mathbf{y}_{i},\mathbf{y}_{j}\right)\), where \(k\) and \(l\) are kernels and can simply be chosen as linear kernels. Zhan et al. (2021) demonstrate the feasibility of using CKA as a distance metric in deep SNNs. In this paper, we use CKA as the distance criterion between the static image domain and the DVS data domain.
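For concreteness, a minimal PyTorch sketch of linear-kernel CKA via Eqs. (7)-(8) follows; the function name is ours, and in practice a small epsilon could be added inside the square root for numerical stability.

```python
import torch

def linear_cka(X, Y):
    """Linear-kernel CKA between two feature matrices (Eqs. 7-8).

    X: (n, d1), Y: (n, d2) -- n paired samples from the two domains.
    """
    n = X.shape[0]
    K = X @ X.t()                              # linear-kernel Gram matrices
    L = Y @ Y.t()
    J = torch.eye(n) - torch.ones(n, n) / n    # centering matrix J_n
    hsic_kl = torch.trace(K @ J @ L @ J) / (n - 1) ** 2   # Eq. (7)
    hsic_kk = torch.trace(K @ J @ K @ J) / (n - 1) ** 2
    hsic_ll = torch.trace(L @ J @ L @ J) / (n - 1) ** 2
    return hsic_kl / torch.sqrt(hsic_kk * hsic_ll)        # Eq. (8)
```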
## 4 Methods
We expect to use the source-domain data (static images) to help learn a better SNN model for the target domain (DVS data). This is a supervised domain adaptation problem, since we can make use of labeled target-domain data even though they are few. In this section, we first formalize the method in Sec. 4.1 and then introduce the color space transformation and dimension alignment in Sec. 4.2. Sec. 4.3 presents the design of the loss function, and Secs. 4.4 and 4.5 elaborate the pipeline and training strategy of the proposed method.
### Methodology formulation
Consider a labeled source domain \(\mathcal{D}_{s}=\left\{x_{s}^{i},y_{s}^{i}\right\}_{i=1}^{N}\) and a small labeled target domain \(\mathcal{D}_{t}=\left\{x_{t}^{i},y_{t}^{i}\right\}_{i=1}^{M}\) with feature spaces \(\mathcal{X}_{s}\) and \(\mathcal{X}_{t}\), respectively, where \(x_{s}^{i}\in\mathcal{X}_{s}\) and \(x_{t}^{i}\in\mathcal{X}_{t}\). In our task, the two domains have the same category space and conditional probability distribution, i.e., \(\mathcal{Y}_{s}=\mathcal{Y}_{t}\) and \(Q_{s}\left(y_{s}\mid\mathbf{x}_{s}\right)=Q_{t}\left(y_{t}\mid\mathbf{x}_{t}\right)\). Let us use \(\mathbf{x}\) as a vector representation of a domain sample and \(\mathbf{X}\) as the whole domain data. For the static image domain and the DVS data domain, the feature spaces and marginal distributions of the two domains are different due to the difference in sensor type, i.e., \(\mathcal{X}_{s}\neq\mathcal{X}_{t}\) and \(P_{s}\left(X_{s}\right)\neq P_{t}\left(X_{t}\right)\).
We aim to leverage \(\mathcal{D}_{s}\) to assist in learning a better classifier \(f:\mathbf{x}_{t}\mapsto\mathbf{y}_{t}\) that predicts the \(\mathcal{D}_{t}\) labels \(\mathbf{y}_{t}\in\mathcal{Y}_{t}\). The model for the function \(f\) is a composition of two functions, i.e., \(f=h\circ g\). Here \(g:\mathcal{X}\rightarrow\mathcal{Z}\) represents an embedding of the input space \(\mathcal{X}\) into a feature space \(\mathcal{Z}\), and \(h:\mathcal{Z}\rightarrow\mathcal{Y}\) is a function that predicts outputs from the feature space. With this notation we have \(f_{s}=h_{s}\circ g_{s}\) and \(f_{t}=h_{t}\circ g_{t}\). In this paper we utilize the final classification head of the original model as \(h_{t}\); this function is learned solely through gradients from the supervised signal. Since a large amount of labeled image data is directly available for static images, \(h_{s}\) is not the object of this study. Critically, we want to provide a generalized \(g_{t}\), which paves the way for learning \(h_{t}\) and improves the generalizability of the SNN.
As mentioned before, due to the different sensory devices, the two data domains have different feature spaces with different dimensions. This type of domain adaptation (DA) is heterogeneous; unlike for homogeneous DA, not much work has focused on heterogeneous DA as far as deep approaches are concerned, and the solutions for heterogeneous deep DA are still similar to some homogeneous DA approaches (Wang and Deng, 2018).
In this paper we use a discrepancy-based approach. The embedding function \(g\) is modeled by network sharing between the source and target domains, using all layers before the last classification layer. With the shared \(g_{t}=g_{s}=g\), the objective of optimization is to find a satisfactory \(g\) in its hypothesis space \(\mathcal{G}\):
\[\operatorname*{arg\,min}_{g\in\mathcal{G}}\left(d\left(p\left(g\left(X_{s}^{ a}\right)\right),p\left(g\left(X_{t}^{a}\right)\right)\right)-d\left(p\left(g \left(X_{s}^{a}\right)\right),p\left(g\left(X_{t}^{c}\right)\right)\right) \right), \tag{9}\]
where \(X_{s}^{a}\) and \(X_{t}^{a}\) refer to data from the same classes in the source and target domains, while \(X_{s}^{a}\) and \(X_{t}^{c}\) denote data from different classes. Here, \(d\) is a metric judging the similarity between the two domains; we choose CKA. One purpose of Eq. 9 is to align the distributions of the features in the embedding space \(\mathcal{Z}\), whose features are assumed to be domain-invariant. The other is to learn the unique features of the DVS data. These two aspects facilitate finding a more generalized \(g\).

Figure 2: Proposed knowledge transfer framework for spiking neural networks. Static images and DVS data are input simultaneously and share the network weights except for the last layer. The membrane potential of the neurons in the second-to-last layer is used to calculate the domain loss and semantic loss. Static data is replaced with DVS data with a certain probability.
### HSV color space
The event data can be generated by rich local intensity changes in continuous time. One way is to use the relative motion of the image and the camera: e.g., Orchard et al. (2015) employed saccadic camera movements, Li et al. (2017) employed image movement instead of camera movement, and Li et al. (2022) reconstruct the written record of strokes into a video of writing tracks. All these approaches use a DVS to sense the change in brightness of each pixel in the image in an asynchronous manner and output a stream of events. The static images we use are in RGB color space, and all three components (red, green, blue) are sensitive to luminance, i.e., whenever the luminance changes, all three components change accordingly. Therefore, using RGB to reflect light intensity is not intuitive. For this reason, we convert the static RGB image to the HSV color space with components (hue, saturation, value), as expressed in the following equations.
\[H=\begin{cases}0^{\circ},&\text{if}\ \ \Delta=0\\ 60^{\circ}\times\frac{G^{\prime}-B^{\prime}}{\Delta}+0^{\circ},&\text{if}\ \ C_{ \text{max}}=R^{\prime}\\ 60^{\circ}\times\frac{B^{\prime}-R^{\prime}}{\Delta}+120^{\circ},&\text{if}\ \ C_{ \text{max}}=G^{\prime}\\ 60^{\circ}\times\frac{R^{\prime}-G^{\prime}}{\Delta}+240^{\circ},&\text{if}\ \ C_{ \text{max}}=B^{\prime}\end{cases} \tag{10}\]
\[S=\begin{cases}0,&\text{if}\ \ C_{\text{max}}=0\\ \frac{\Delta}{C_{\text{max}}},&\text{if}\ \ C_{\text{max}}\neq 0\end{cases} \tag{11}\]
\[V=C_{\text{max}} \tag{12}\]
where \(R^{\prime},G^{\prime},B^{\prime}\) are the normalized values of \(R,G,B\), \(C_{\text{max}}=\max\{R^{\prime},G^{\prime},B^{\prime}\}\), \(C_{\text{min}}=\min\{R^{\prime},G^{\prime},B^{\prime}\}\), and \(\Delta=C_{\text{max}}-C_{\text{min}}\). \(V\) indicates the brightness of the color; for a light-source color, the value \(V\) is related to the luminosity of the emitter. To match the 2-channel (positive and negative polarity) DVS data, we duplicate the V channel and then input it to the network simultaneously with the DVS data.
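Since only the V channel is fed to the network, the full H and S computations of Eqs. (10)-(11) are not needed at the input stage; a simplified PyTorch sketch of the value-channel preparation, assuming normalized RGB input, is shown below.

```python
import torch

def rgb_to_dual_value(img):
    """Extract the HSV value channel V = max(R', G', B') (Eq. 12)
    and duplicate it to match the 2-channel (polarity) DVS input.

    img: normalized RGB tensor of shape (batch, 3, H, W) in [0, 1].
    Returns a tensor of shape (batch, 2, H, W).
    """
    v = img.max(dim=1, keepdim=True).values   # value channel (Eq. 12)
    return v.repeat(1, 2, 1, 1)               # duplicate for two polarities
```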
### Loss function
Our loss contains three parts: a classification loss for optimizing \(h_{t}\), and a domain loss and a semantic loss for optimizing the shared \(g\). We describe them in detail as follows.
**Classification loss.** For the classification loss, we choose the TET loss (Deng et al., 2022), which is proven to compensate for the momentum loss of the surrogate gradient and gives the SNN better generalizability. The formula is described as follows
\[\mathcal{L}_{TET}=\frac{1}{T}\sum_{t=1}^{T}\left((1-\lambda)\ell_{ce}\left( \mathbf{O}(t),\mathbf{y}\right)+\lambda\operatorname{MSE}\left(\mathbf{O}(t ),V_{th}\right)\right), \tag{13}\]
where \(\mathbf{O}(t)\) represents the presynaptic input current to the output layer. The two terms are the cross-entropy loss and the mean-squared error, combined with coefficients \(1-\lambda\) and \(\lambda\), respectively.
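A compact PyTorch sketch of the TET loss in Eq. (13) follows; the default value of lambda shown here is an assumption, not the paper's tuned setting.

```python
import torch
import torch.nn.functional as F

def tet_loss(outputs, labels, v_th=0.5, lam=5e-3):
    """TET loss (Eq. 13): per-time-step cross-entropy plus an MSE
    regularizer pulling the presynaptic outputs toward V_th.

    outputs: (T, batch, num_classes) presynaptic inputs to the output layer.
    lam: the trade-off coefficient lambda (assumed default).
    """
    T = outputs.shape[0]
    ce = sum(F.cross_entropy(outputs[t], labels) for t in range(T)) / T
    mse = F.mse_loss(outputs, torch.full_like(outputs, v_th))
    return (1 - lam) * ce + lam * mse
```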
**Domain loss.** The inputs to the dual streams come from static images and DVS data, respectively. A CKA value closer to 1 indicates that the two representations are more correlated. For this reason, we subtract the CKA from 1, so that minimizing the loss maximizes the correlation of the two inputs. Denoting samples \(\mathbf{x}_{s},\mathbf{x}_{t}\) drawn from the whole data \(\mathbf{X}_{s},\mathbf{X}_{t}\), the domain loss can be expressed as
\[\mathcal{L}_{d}(g)=1-\frac{1}{T}\sum_{t=1}^{T}\underset{y_{i}=y_{j},y\in \mathcal{Y}}{CKA}\left(g\left(\mathbf{x}_{s}^{i},t\right),g\left(\mathbf{x}_{ t}^{j},t\right)\right). \tag{14}\]
Here \(g\left(\mathbf{x}_{s}^{i},t\right)\) denotes the output of the shared-parameter function \(g\) at time step \(t\). The two samples \(\mathbf{x}_{s}^{i},\mathbf{x}_{t}^{j}\) are drawn from the same class, expressed by the constraint \(y_{i}=y_{j}\).
**Semantic loss.** The semantic loss is the opposite of the domain loss: it equals the similarity between samples belonging to different categories in the two domains, so minimizing the semantic loss maximizes the difference between samples of different classes. The formula is expressed as
\[\mathcal{L}_{s}(g)=max\left(0,\frac{1}{T}\sum_{t=1}^{T}\underset{y_{i}\neq y_{j },y\in\mathcal{Y}}{CKA}\left(g\left(\mathbf{x}_{s}^{i},t\right),g\left(\mathbf{ x}_{t}^{j},t\right)\right)-m\right), \tag{15}\]
where \(m\) is the similarity margin: the loss is considered zero when the similarity is below \(m\), and no further optimization occurs. The margin \(m\) lets us control the similarity of two inputs from different domains. For example, when the differences are already well characterized, i.e., the samples are far enough apart to have low similarity, there is no need to waste effort reducing the similarity of that sample pair, so further training focuses on other sample pairs that are harder to separate.
**Total loss.** With the above statement, the total training loss is the sum of the classification loss, domain loss and semantic loss. That is:
\[\mathcal{L}_{all}=\mathcal{L}_{TET}+\lambda_{d}\mathcal{L}_{d}+\lambda_{s}\mathcal{L}_{s}. \tag{16}\]
Note that the domain loss and semantic loss are active only within pre-set epochs, while the classification loss works throughout the training process.
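Putting Eqs. (14)-(16) together, a minimal sketch of the two cross-domain losses follows, reusing the linear_cka helper from the sketch in Sec. 3; the batch-pairing convention (same-class DVS samples aligned row-wise with the static samples, plus a separate different-class pairing) and the tensor layout are our own simplifying assumptions.

```python
import torch

def transfer_losses(feat_s, feat_t_same, feat_t_diff, margin=0.1):
    """Domain loss (Eq. 14) and semantic loss (Eq. 15), averaged over T.

    feat_s:      (T, n, d) static-stream features g(x_s, t).
    feat_t_same: (T, n, d) DVS features whose labels match feat_s row-wise.
    feat_t_diff: (T, n, d) DVS features whose labels differ from feat_s.
    """
    T = feat_s.shape[0]
    cka_same = sum(linear_cka(feat_s[t], feat_t_same[t]) for t in range(T)) / T
    cka_diff = sum(linear_cka(feat_s[t], feat_t_diff[t]) for t in range(T)) / T
    loss_d = 1.0 - cka_same                           # Eq. (14)
    loss_s = torch.clamp(cka_diff - margin, min=0.0)  # Eq. (15)
    return loss_d, loss_s

# Total objective of Eq. (16), with coefficients lambda_d, lambda_s:
# loss = tet_loss(outputs, labels) + lambda_d * loss_d + lambda_s * loss_s
```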
### Single-stage training pipeline
During the training process, we gradually replace a portion of the static inputs with DVS data probabilistically, so that the role of the domain loss decreases smoothly and the semantic loss gradually shifts from distinguishing samples of different classes across domains to distinguishing different classes within the DVS domain. Concretely, with \(b_{i}\) denoting the index of the current training batch, \(b_{l}\) the total number of training batches per epoch, \(e_{c}\) the current epoch, and \(e_{m}\) the maximum number of training epochs, the probability of making a substitution, \(P_{replacement}\), can be expressed by the following equation
\[P_{replacement}=\left(\frac{b_{i}+e_{c}*b_{l}}{e_{s}*b_{l}}\right)^{3}, \tag{17}\]
where \(e_{s}\) is a manually set epoch at which the domain loss and semantic loss stop taking effect. The value of \(e_{s}\) is usually set to half of the total number of training epochs.
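The replacement schedule of Eq. (17) is essentially a one-liner; in the sketch below, clamping to 1 reflects our reading that all static inputs are replaced once epoch \(e_{s}\) is reached.

```python
def replacement_prob(b_i, b_l, e_c, e_s):
    """Probability of replacing a static input with DVS data (Eq. 17).

    b_i: index of the current batch; b_l: number of batches per epoch;
    e_c: current epoch; e_s: epoch at which replacement becomes certain.
    """
    return min(1.0, ((b_i + e_c * b_l) / (e_s * b_l)) ** 3)
```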
In summary, our framework first converts the static images from RGB to HSV and aligns the dimensions. Then, for the dual-stream inputs, the model shares the network parameters of all layers before the final classification layer to learn domain-invariant features via the domain loss and unique features via the semantic loss. Static image inputs are gradually replaced with DVS data inputs with a probability that increases with the number of training epochs; once the set epoch is reached, all static images have been replaced with DVS data and the feature extraction period ends. Finally, we fine-tune the SNN on the DVS data to obtain better performance. The whole process is shown in Fig. 2.
### Training strategy
We use gradient descent to directly train the SNN. For classification loss, the last layer of neurons only receives and accumulates presynaptic input currents and does not fire spikes, so the derivative of \(\mathcal{L}_{\mathrm{TET}}\) with respect to \(W\) is
\[\frac{\partial\mathcal{L}_{\mathrm{TET}}}{\partial W^{l}}=\sum_{t=1 }^{T}\frac{\partial\mathcal{L}_{\mathrm{TET}}}{\partial O(t)}\frac{\partial O (t)}{\partial W^{l}} \tag{18}\] \[\frac{\partial\mathcal{L}_{\mathrm{TET}}}{\partial O(t)}=\frac{1 }{T}\left(\left(1-\lambda\right)\left(S(O(t))-\hat{y}\right)+\lambda(O(t)-V_{ th})\right) \tag{19}\]
where \(\hat{y}\) denotes the one-hot label and \(S\) denotes the softmax function.
The partial derivative of the loss \(\mathcal{L}_{d}\) with respect to \(W\) can be expressed as
\[\frac{\partial\mathcal{L}_{d}}{\partial W^{l}}=\sum_{t=1}^{T}\frac{\partial \mathcal{L}_{d}}{\partial s^{t,l}}\frac{\partial s^{t,l}}{\partial u^{t,l}} \frac{\partial u^{t,l}}{\partial W^{l}}. \tag{20}\]
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Dataset** & **Model** & **Methods** & **Architecture** & **Simulation Length** & **Accuracy** \\ \hline \multirow{10}{*}{CIFAR10-DVS} & Zheng et al. (2021) & STBP-tdBN & ResNet-19 & 10 & 67.80 \\ & Kugele et al. (2020) & Streaming Rollout & DenseNet & 10 & 66.80 \\ & Wu et al. (2021) & LIAF & LIAF-Net & 10 & 70.40 \\ & Li et al. (2021) & Dspike & ResNet-18 & 10 & 75.4 \\ & Fang et al. (2021) & PLIF & SNN4 & 20 & 74.8 \\ & Yao et al. (2021) & TA-SNN & SNN5 & 10 & 72.0 \\ & Li et al. (2022d) & NDA & VGGSNN & 10 & 81.70 \\ & Deng et al. (2022) & TET & VGGSNN & 10 & \(83.17\pm 0.15\) \\ & Zhu et al. (2022) & TJCA-TET & VGGSNN & 10 & 83.3 \\ & Duan et al. & TEBN & VGGSNN & 10 & 84.9 \\ \cline{2-6} & **Our model** & Transfer & VGGSNN & 10 & \(84.60\pm 0.14\)\(\mathbf{(84.8)}\) \\ & **Our model** & Transfer & VGGSNN & 10 & \(84.67\pm 0.39\)\(\mathbf{(85.2)}^{2}\) \\ \hline \multirow{10}{*}{N-Caltech101} & Li et al. (2022d) & NDA & VGGSNN & 10 & 78.2 \\ & Deng et al. (2022) & TET & VGGSNN & 10 & \(79.77\pm 0.77^{1}\) \\ \cline{1-1} & Shen et al. (2022b) & EventMixer & ResNet-18 & 10 & 79.5 \\ \cline{1-1} & Zhu et al. (2022) & TJCA-TET & CombinedSNN & 14 & 82.5 \\ \cline{1-1} \cline{2-6} & **Our model** & Transfer & VGGSNN & 10 & \(81.46\pm 0.82\)\(\mathbf{(82.30)}\) \\ \cline{1-1} & **Our model** & Transfer & VGGSNN & 10 & \(81.57\pm 0.71\)\(\mathbf{(82.52)}^{2}\) \\ \hline \multirow{10}{*}{N-Omniglot} & Li et al. (2022b) & plain & SCNN & 12 & 60.0 \\ \cline{1-1} & Li et al. (2022b) & plain & SCNN & 12 & \(61.73\pm 0.41\)\(\mathbf{(62.23)}^{1}\) \\ \cline{1-1} \cline{2-6} & **Our model** & Transfer & SCNN & 12 & \(63.03\pm 0.14\)\(\mathbf{(63.22)}\) \\ \hline \multicolumn{6}{l}{\({}^{1}\) Our implementation} \\ \multicolumn{6}{l}{\({}^{2}\) Using label smoothing} \\ \end{tabular}
\end{table}
Table 1: Experimental results compared with existing works. The best accuracy is shown in parentheses.
Denote the kernel matrices of the source and target domains by \(K_{S}\), \(K_{T}\), and the membrane potential inputs from the source and target domains by \(u_{S}\), \(u_{T}\), respectively. The first term can be expanded as
\[\begin{split}\frac{\partial L_{d}}{\partial s_{S}^{t,l}}=& -\frac{1}{T}\sum_{t=1}^{T}\frac{\partial\text{CKA}\left(K_{S},K_{T} \right)}{\partial s_{S}^{t,l}}\\ =&-\frac{1}{T}\sum_{t=1}^{T}\frac{\partial\,\text{ CKA}\left(K_{S},K_{T}\right)}{\partial\,\text{HSIC}\left(K_{S},K_{T} \right)}\frac{\partial\,\text{HSIC}\left(K_{S},K_{T}\right)}{\partial s_{S}^{t,l}}\\ &-\frac{1}{T}\sum_{t=1}^{T}\frac{\partial\,\text{CKA}\left(K_{S},K _{T}\right)}{\partial\,\text{HSIC}\left(K_{S},K_{S}\right)}\frac{\partial\, \text{HSIC}\left(K_{S},K_{S}\right)}{\partial s_{S}^{t,l}}\end{split} \tag{21}\]
Taking \(\frac{\partial\,\text{CKA}\left(K_{S},K_{T}\right)}{\partial\,\text{HSIC}\left(K_{S},K_{T}\right)}\) and \(\frac{\partial\,\text{HSIC}\left(K_{S},K_{T}\right)}{\partial s_{S}^{t,l}}\) as an example, from Eq. 8
\[\begin{split}\frac{\partial\text{CKA}\left(K_{S},K_{T}\right)}{ \partial\,\text{HSIC}\left(K_{S},K_{T}\right)}=\frac{1}{\left[\text{HSIC} \left(K_{S},K_{S}\right)\text{HSIC}\left(K_{T},K_{T}\right)\right]^{1/2}} \end{split} \tag{22}\]
\(\frac{\partial\,\text{HSIC}\left(K_{S},K_{T}\right)}{\partial s_{S}^{t,l}}\) in Eq. 21 can be expressed as
\[\begin{split}&\left(\sum_{j=1}^{n}\frac{\partial k_{S}\left(u_{S }^{t,l},u_{S_{j}}^{t,l}\right)}{\partial u_{S_{i}}^{t,l}}A(k_{T})_{ji}+\sum_{j= 1}^{n}\frac{\partial k_{S}\left(u_{S_{j}}^{t,l},u_{S_{i}}^{t,l}\right)}{ \partial u_{S_{i}}^{t,l}}A(k_{T})_{ij}\right.\\ &-\left.\frac{\partial k_{S}\left(u_{S_{i}}^{t,l},u_{S_{i}}^{t,l} \right)}{\partial u_{S_{i}}^{t,l}}A(k_{T})_{ii}\right)\frac{\partial u_{S_{i}}^ {t,l}}{(n-1)^{2}\partial s_{S_{i}}^{t,l}}\end{split} \tag{23}\]
where \(A(k_{T})_{ij}\) should be
\[k_{T}\left(u_{T_{i}}^{t,l},u_{T_{j}}^{t,l}\right)+\sum_{k=1}^{n}\sum_{m=1}^{n}k_{T}\left(u_{T_{k}}^{t,l},u_{T_{m}}^{t,l}\right)-2\sum_{m=1}^{n}k_{T}\left(u_{T_{i}}^{t,l},u_{T_{m}}^{t,l}\right) \tag{24}\]
For the second term \(\frac{\partial s^{t,l}}{\partial u^{t,l}}\), we use the piecewise linear surrogate gradient (Xu et al., 2022), formulated as
\[\frac{\partial s^{t,l}}{\partial u^{t,l}}=\max(0,1-|u^{t,l}|) \tag{25}\]
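The surrogate gradient can be packaged as a custom autograd function; the minimal PyTorch sketch below fires according to Eq. (2) in the forward pass and uses Eq. (25), exactly as written, in the backward pass (some implementations instead center the triangular window at \(V_{th}\)).

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside firing (Eq. 2) with the piecewise linear surrogate
    gradient of Eq. (25) substituted in the backward pass."""

    @staticmethod
    def forward(ctx, u, v_th=0.5):
        ctx.save_for_backward(u)
        return (u >= v_th).float()

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        # Eq. (25): max(0, 1 - |u|); no gradient is needed for v_th
        sg = torch.clamp(1.0 - u.abs(), min=0.0)
        return grad_out * sg, None
```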
As for the third term in Eq. 20, from Eq. 3 it can be expressed recursively as
\[\begin{split}\frac{\partial u^{t,l}}{\partial W^{l}}=& \tau\left(1-s^{t-1,l}\right)\frac{\partial u^{t-1,l}}{\partial W^{l}}\\ &-\tau u^{t-1,l}\frac{\partial s^{t-1,l}}{\partial u^{t-1,l}} \frac{\partial u^{t-1,l}}{\partial W^{l}}+s^{t,l-1}.\end{split} \tag{26}\]
The derivation of \(\partial\mathcal{L}_{s}/\partial W^{l}\) is similar to Eqs. 21, 22, and 23. For more details, please refer to the supplementary materials.
## 5 Experiments
In this section, we conduct experiments on three mainstream neuromorphic datasets, CIFAR10-DVS (Li et al., 2017), N-Caltech101 (Orchard et al., 2015), and N-Omniglot (Li et al., 2022b), to evaluate the effectiveness of the proposed method. We choose AdamW (Loshchilov and Hutter, 2017) as the optimizer with an initial learning rate of 1e-3. We train for 600 epochs in total with 5 warmup epochs. For a fair comparison, for the first two datasets, the network model is VGGSNN (64C3-128C3-AP2-AP2-256C3-256C3-AP2-512C3-512C3-AP2-512C3-AP2-FC) with 10 time steps. For N-Omniglot, we use the network structure SCNN (15C5-AP2-40C5-AP2-FC-FC) from the original paper with 12 time steps. For more details, please refer to the supplementary materials. All experiments are based on the PyTorch framework.
### Comparison with the State-of-the-Art
We first evaluate the proposed method on the CIFAR10-DVS dataset and compare it with TdBN (Zheng et al., 2021), Streaming Rollout (Kugele et al., 2020), LIAF (Wu et al., 2021), Dspike (Li et al., 2021), PLIF (Fang et al., 2021), TA-SNN (Yao et al., 2021), NDA (Li et al., 2022d), TET (Deng et al., 2022), TJCA-TET (Zhu et al., 2022), and TEBN (Duan et al.). The results are presented in Tab. 1. For fairness, we perform three independent experiments using different random seeds and report the mean and standard deviation of the results. The best accuracy is indicated in parentheses. The experimental results demonstrate that the proposed method achieves state-of-the-art performance on the CIFAR10-DVS dataset compared with existing methods. In addition, we observe that the trick of label smoothing helps alleviate overfitting, leading to a 0.4% accuracy improvement.

Figure 3: Classification accuracy over epochs for CIFAR10-DVS on the whole test set with different amounts of training data: (a) training with 10% of the CIFAR10-DVS training data, (b) training with 100% of the CIFAR10-DVS training data.
For the N-Caltech101 dataset, fewer published results are available. We use TET, NDA, EventMixer (Shen et al., 2022b), and TJCA-TET as baselines. The results in Tab. 1 show that our method outperforms all baselines and, similarly, label smoothing brings a modest improvement in accuracy. For N-Omniglot, we reproduce the original result and compare it with the proposed method. The experimental results show that our proposed method improves accuracy by about 1% over the original method.
### Ablation study
In our approach, the first step is to extract domain-invariant features using the domain loss, followed by adding the semantic loss to better extract features from the DVS data. Accordingly, the ablation study reports results for the baseline, for the domain loss alone, and for the domain loss combined with the semantic loss. As shown in Fig. 3, the baseline overfits early, at about epochs 100-200, i.e., the accuracy on the test set does not increase in subsequent training rounds. In contrast, introducing the domain loss and semantic loss provides a good model at the end of their action period, and the green curve stays on top during the later fine-tuning phase, indicating that the best results are achieved with these two losses.
Moreover, we conduct a detailed evaluation of our proposed approach on the CIFAR10-DVS and N-Caltech101 datasets using varying amounts of training data, as presented in Tab. 2. Our results show that, regardless of the amount of training data, the domain loss alone yields higher performance than the baseline method, while the combination of domain and semantic losses outperforms the method with only the domain loss. Importantly, our proposed approach achieves a remarkable performance improvement of nearly 10% on N-Caltech101 when using only 10% of the training data. This indicates that the less data available, the more easily existing methods overfit; providing generalizable knowledge helps alleviate this problem.
### Analysis
**Loss landscape.** To evaluate the impact of the domain and semantic losses in providing generalizable weights for SNN fine-tuning on DVS data, we utilize 2D loss-landscape visualization (Li et al., 2018). For this purpose, we selected the model weights at the 300th epoch, the moment when the effect of the two losses ends, for CIFAR10-DVS and N-Caltech101 at 10% of the training data. As depicted in Fig. 4(b) and Fig. 4(d), the maximum loss decreases compared to Fig. 4(a) and Fig. 4(c), and the lowest-loss area becomes flatter, which indicates that the SNN obtains better weights with the domain loss and semantic loss.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Network & Dataset & Methods & Accuracy \\ \hline \multicolumn{4}{c}{**100\% Training Data**} \\ \hline \multirow{6}{*}{VGG\_SNN} & \multirow{3}{*}{CIFAR10-DVS} & origin & 83.60\% \\ & & domainLoss & 84.10\% \\ & & domainLoss + semanticLoss & **84.50\%** \\ \cline{2-4} & \multirow{3}{*}{N-Caltech101} & origin & 79.54\% \\ & & domainLoss & 80.46\% \\ & & domainLoss + semanticLoss & **81.72\%** \\ \hline \multicolumn{4}{c}{**70\% Training Data**} \\ \hline \multirow{6}{*}{VGG\_SNN} & \multirow{3}{*}{CIFAR10-DVS} & origin & 80.70\% \\ & & domainLoss & 81.50\% \\ & & domainLoss + semanticLoss & **82.30\%** \\ \cline{2-4} & \multirow{3}{*}{N-Caltech101} & origin & 76.78\% \\ & & domainLoss & 78.51\% \\ & & domainLoss + semanticLoss & **79.20\%** \\ \hline \multicolumn{4}{c}{**40\% Training Data**} \\ \hline \multirow{6}{*}{VGG\_SNN} & \multirow{3}{*}{CIFAR10-DVS} & origin & 76.50\% \\ & & domainLoss & 76.80\% \\ & & domainLoss + semanticLoss & **77.90\%** \\ \cline{2-4} & \multirow{3}{*}{N-Caltech101} & origin & 67.93\% \\ & & domainLoss & **71.84\%** \\ & & domainLoss + semanticLoss & 71.49\% \\ \hline \multicolumn{4}{c}{**10\% Training Data**} \\ \hline \multirow{6}{*}{VGG\_SNN} & \multirow{3}{*}{CIFAR10-DVS} & origin & 58.60\% \\ & & domainLoss & 60.50\% \\ & & domainLoss + semanticLoss & **61.30\%** \\ \cline{2-4} & \multirow{3}{*}{N-Caltech101} & origin & 45.40\% \\ & & domainLoss & 54.71\% \\ & & domainLoss + semanticLoss & **55.17\%** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation experimental results with VGGSNN.
Figure 4: The loss landscape of VGGSNN on CIFAR10-DVS and N-Caltech101 at the 300th epoch.
**Visual explanations from deep networks.** To assess whether the domain loss and semantic loss effectively learn common and unique features between static images and neuromorphic data, we employ Grad-CAM++ (Chattopadhay et al., 2018). This method visualizes the degree of similarity between each location in an image and a category by generating a heat map. Such a visualization lets us understand which local regions of an original image contributed most to the model's final classification decision. Ideally, static pictures and event data integrated into frames share object contour features when they belong to the same class, while they should be as distinct as possible between different classes. This is well illustrated in Fig. 1: by introducing the domain and semantic losses, the network attends to the contour features of the category for both static pictures and DVS data. In particular, the results on DVS show that our approach facilitates the correct classification of DVS data.
### The effect and selection of hyperparameters
In this section, we show the effect of hyperparameters in the proposed method and the principles of their selection.
The coefficient of the domain loss for learning common features, \(\lambda_{d}\), is fixed at 1. Our method is insensitive to the semantic loss coefficient and performs well within a reasonable range. As shown in Tab. 3, \(\lambda_{s}=0.0\) means only the domain loss is active; with the addition of the semantic loss, performance degrades only when its coefficient \(\lambda_{s}\) is particularly large. This means that the semantic loss should not dominate: the learning of common features should be satisfied before the learning of unique features.
As for the margin coefficient \(m\), it should be chosen neither too small, which would render it ineffective, nor too large, which would limit the role of the semantic loss. As shown in Tab. 3, \(m=0.1\) is an appropriate choice.
## 6 Conclusion
In this paper, we propose a single-stage transfer learning framework that leverages large annotated static datasets to improve the generalizability of SNNs on DVS data. We propose a domain loss and a semantic loss to learn the domain-invariant and unique features between static images and DVS data. Visualizations of the loss landscape and Class Activation Mapping show that the learned features benefit the performance of SNNs on DVS data. Experimental results show that our method achieves the best performance on mainstream DVS datasets.
|
2305.11288 | Riemannian Multinomial Logistics Regression for SPD Neural Networks | Deep neural networks for learning Symmetric Positive Definite (SPD) matrices
are gaining increasing attention in machine learning. Despite the significant
progress, most existing SPD networks use traditional Euclidean classifiers on
an approximated space rather than intrinsic classifiers that accurately capture
the geometry of SPD manifolds. Inspired by Hyperbolic Neural Networks (HNNs),
we propose Riemannian Multinomial Logistics Regression (RMLR) for the
classification layers in SPD networks. We introduce a unified framework for
building Riemannian classifiers under the metrics pulled back from the
Euclidean space, and showcase our framework under the parameterized
Log-Euclidean Metric (LEM) and Log-Cholesky Metric (LCM). Besides, our
framework offers a novel intrinsic explanation for the most popular LogEig
classifier in existing SPD networks. The effectiveness of our method is
demonstrated in three applications: radar recognition, human action
recognition, and electroencephalography (EEG) classification. The code is
available at https://github.com/GitZH-Chen/SPDMLR.git. | Ziheng Chen, Yue Song, Gaowen Liu, Ramana Rao Kompella, Xiaojun Wu, Nicu Sebe | 2023-05-18T20:12:22Z | http://arxiv.org/abs/2305.11288v2 | # Riemannian Multiclass Logistics Regression for SPD Neural Networks
###### Abstract
Deep neural networks for learning symmetric positive definite (SPD) matrices are gaining increasing attention in machine learning. Despite the significant progress, most existing SPD networks use traditional Euclidean classifiers on approximated spaces rather than intrinsic classifiers that accurately capture the geometry of SPD manifolds. Inspired by the success of hyperbolic neural networks (HNNs), we propose Riemannian multiclass logistics regression (RMLR) for SPD networks. We introduce a general unified framework for a family of Riemannian metrics on SPD manifolds and showcase the specific \(\mathrm{O}(n)\)-invariant Log-Euclidean Metrics for SPD networks. Moreover, we encompass the most popular classifier in existing SPD networks as a special case of our framework. Extensive experiments on popular SPD learning benchmarks demonstrate the superiority of our classifiers.
## 1 Introduction
Symmetric Positive Definite (SPD) matrices are commonly encountered in a diverse range of scientific fields, such as medical imaging [1; 2], signal processing [3; 4; 5; 6], elasticity [7; 8], question answering [9; 10], and computer vision [11; 12; 13; 14; 15; 14; 16; 17; 18; 19; 20]. Despite their ubiquitous presence, traditional learning algorithms are ineffective in handling the non-Euclidean geometry of SPD matrices. To address this limitation, researchers have proposed different manifold learning techniques based on Riemannian geometry, including Affine-Invariant Metric (AIM), Log-Euclidean Metric (LEM) [21], Bures-Wasserstein Metric (BWM) [22], and Log-Cholesky Metric (LCM) [23]. More recently, Chen _et al._[24] and Thanwerdas _et al._[25] developed two different generalizations of LEM and introduced Adaptive Log-Euclidean Metrics (ALEMs) and \(\mathrm{O}(n)\)-Invariant Log-Euclidean Metrics (OILEMs), respectively.
Inspired by the great success of deep learning [26; 27; 28], several deep networks have been developed on SPD manifolds, exhibiting promising performance in various machine learning applications [11; 1; 13; 6; 2; 15; 14; 17; 18; 29; 30]. However, their classification layers are usually designed in an approximated space or are constrained in a special type of manifold-valued data. For instance, the most commonly used approach is a traditional Euclidean classifier in the tangent space at the identity matrix, realized by stacking matrix logarithm, fully connected (FC) layer, and softmax layer [11; 6; 31; 32; 17; 29; 30]. Other approaches have also been explored, such as the Euclidean classifier in the local coordinate domain [1] and direct classification on SPD matrices [15]. However, all the above three types of classifiers rely on some approximation spaces or tricks that are not intrinsic. More recently, Chakraborty _et al._[2] introduced an invariant layer for manifold-valued data that mimics the invariant FC layer in CNNs which is more intrinsic compared to the previous classifiers, as no approximation spaces or tricks are required. However, it is designed for gridded
manifold-valued data, which is not the main type of data encountered in many other SPD networks. Following the convention of most existing SPD networks, we also only focus on non-gridded cases.
In this paper, we address the issue of non-intrinsic classifiers by designing Riemannian classifiers based on the geometric reinterpretation of Euclidean classifiers, which has been successful in the design of classifiers for hyperbolic neural networks (HNNs) [33]. One main obstacle is that the formulae of Riemannian classifiers vary under different Riemannian metrics. Fortunately, we find that on SPD manifolds, several Riemannian metrics can be discussed in a unified manner. We first introduce our general Riemannian multiclass logistics regression (RMLR), which incorporates different cases under several Riemannian metrics. We then proceed to discuss specific cases under OILEMs, which constitute a family of metrics and can be viewed as parameterized LEMs. Our general RMLR allows us to propose unified classifiers for all variants of OILEMs, suiting datasets with different characteristics. To learn the SPD parameters in our RMLR, we discuss in detail three kinds of Riemannian Stochastic Gradient Descent (RSGD), based on OILEMs, AIM, and BWM, respectively. Our framework also allows for an intrinsic explanation of the commonly used Euclidean classifier on SPD manifolds, which consists of successive matrix logarithm, FC, and softmax layers. Finally, extensive experiments demonstrate that our proposed Riemannian classifiers exhibit consistent performance gains across widely-used SPD benchmarks. The **contributions** of our work are summarized as follows:
* We are the **first** to introduce a general RMLR framework on SPD manifolds, design specific RMLRs under all OILEMs for SPD networks, and discuss different optimization strategies for training the SPD parameters.
* Our framework encompasses the most popular classifier in SPD networks (_i.e._,stacking matrix logarithm, FC layer, and softmax) as a special case and gives an intrinsic explanation.
* Extensive experiments on widely used SPD learning benchmarks demonstrate the superiority of our proposed classifiers over the previous baselines.
## 2 Geometry of SPD manifolds
The set of SPD matrices, denoted as \(\mathcal{S}_{++}^{n}\), constitutes a smooth manifold known as the SPD manifold [21; 34]. Endowed with a Riemannian metric, it forms a Riemannian manifold [21; 34]. Several Riemannian metrics have yielded impressive results in machine learning, namely LEM [21], AIM [34], LCM [23], BWM [22], and ALEMs [24]. Recently, the authors of [25] have generalized LEM and AIM into two-parameter \(\mathrm{O}(n)\)-invariant Log-Euclidean Metrics (OILEMs) and \((\alpha,\beta)\)-AIM. This section briefly reviews LEM and OILEMs.
Literally, OILEMs enjoy \(\mathrm{O}(n)\)-invariance and generalize LEM by two parameters,

\[g_{S}^{\mathrm{LE}(\alpha,\beta)}(V,V)=\alpha\|\mathrm{mlog}_{*,S}(V)\|_{\mathrm{F}}^{2}+\beta\operatorname{tr}\left(S^{-1}V\right)^{2}, \tag{1}\]

where \(S\in\mathcal{S}_{++}^{n}\), \(V\in T_{S}\mathcal{S}_{++}^{n}\), \(\alpha>0\), \(\alpha+n\beta>0\), \(\mathrm{mlog}_{*,S}\) denotes the differential map of the matrix logarithm at \(S\), and \(\|\cdot\|_{\mathrm{F}}\) is the standard Frobenius norm. When \(\alpha=1\) and \(\beta=0\), OILEM reduces to LEM, which is known for its fast and simple computation. Intuitively, an OILEM is a two-parameter variant of LEM. For this reason, we simply write \((\alpha,\beta)\)-LEM for OILEM.
As shown in [25], \((\alpha,\beta)\)-LEM is actually the pullback metric from the standard LEM by
\[f_{p,q}:S\in\mathcal{S}_{++}^{n}\longmapsto\mathrm{mexp}\left(F_{p,q}(\mathrm{ mlog}S)\right)=\det(S)^{\frac{p-q}{n}}S^{q}\in\mathcal{S}_{++}^{n}, \tag{2}\]
where \(\mathrm{mexp}\) is matrix exponential, and \(F_{p,q}(X)=qX+\frac{p-q}{n}\operatorname{tr}(X)I_{n}\) with \(p=\sqrt{\alpha+n\beta}\) and \(q=\sqrt{\alpha}\). With varying \(\alpha,\beta\), OILEMs constitute a family of Riemannian metrics.
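To make the deformation concrete, below is a minimal NumPy sketch of \(\phi_{p,q}=F_{p,q}\circ\mathrm{mlog}\) (the function name is ours); the matrix logarithm is computed through an eigendecomposition, which is valid because \(S\) is SPD.

```python
import numpy as np

def phi_pq(S, p, q):
    """Deformation phi_{p,q}(S) = F_{p,q}(mlog(S)) behind Eq. (2),
    computed via the eigendecomposition of the SPD matrix S.
    """
    n = S.shape[0]
    w, U = np.linalg.eigh(S)                 # S = U diag(w) U^T, w > 0
    log_S = (U * np.log(w)) @ U.T            # matrix logarithm mlog(S)
    return q * log_S + (p - q) / n * np.trace(log_S) * np.eye(n)
```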
We recall an excerpt from Theorem 4.2 of [24], which generally characterizes a large family of Riemannian metrics on SPD manifolds.
**Theorem 1** (Pullback Euclidean Metrics (PEMs)).: _Let \(S_{1},S_{2}\in\mathcal{S}_{++}^{n}\), \(\phi:\mathcal{S}_{++}^{n}\rightarrow\mathcal{S}^{n}\) is a diffeomorphism. We define the following operations,_
\[S_{1}\odot_{\phi}S_{2} =\phi^{-1}(\phi(S_{1})+\phi(S_{2})), \tag{3}\] \[g_{S}^{\phi}(V_{1},V_{2}) =\langle\phi_{*,S}(V_{1}),\phi_{*,S}(V_{2})\rangle,\forall S\in \mathcal{S}_{++}^{n},\forall V_{i}\in T_{S}\mathcal{S}_{++}^{n}, \tag{4}\]
_where \(\phi_{*,S}:T_{S}\mathcal{S}_{++}^{n}\to T_{\phi(S)}\mathcal{S}^{n}\) is the differential map of \(\phi\) at \(S\), and \(<\cdot,\cdot>\) is a Euclidean metric. Then, we have the following conclusions: \(\{\mathcal{S}_{++}^{n},\odot_{\phi}\}\) is an abelian Lie group, \(\{\mathcal{S}_{++}^{n},g^{\phi}\}\) is
a Riemannian manifold, and \(g^{\phi}\) is a bi-invariant metric, called Pullback Euclidean Metric (**PEM**). The associated geodesic distance is_
\[d^{\phi}(S_{1},S_{2})=\|\phi(S_{1})-\phi(S_{2})\|, \tag{5}\]
_where \(\|\cdot\|\) is the norm induced by \(\langle\cdot,\cdot\rangle\). The Riemannian operators are as follows_
\[\operatorname{Exp}_{S_{1}}V =\phi^{-1}(\phi(S_{1})+\phi_{*,S_{1}}V), \tag{6}\] \[\operatorname{Log}_{S_{1}}S_{2} =\phi_{*,\phi(S_{1})}^{-1}(\phi(S_{2})-\phi(S_{1})),\] (7) \[\Gamma_{S_{1}\to S_{2}}(V) =\phi_{*,\phi(S_{2})}^{-1}\circ\phi_{*,S_{1}}(V), \tag{8}\]
_where \(V\in T_{S_{1}}\mathcal{S}_{++}^{n}\), \(\operatorname{Exp}\), \(\operatorname{Log}\), and \(\Gamma\) are Riemannian exponential map, logarithmic map, and parallel transportation respectively, and \(\phi_{*}\) & \(\phi_{*}^{-1}\) are the differential maps of \(\phi\) & \(\phi^{-1}\)._
As discussed in [24], both LEM and LCM belongs to PEMs. In the following section, we will further present that all OILEMs actually belong to PEMs. Besides, Theorem 1 also allows us to exempt from various concrete computations. We present the Venn diagram in Figure 1 to illustrate the relationship of some popular Riemannian metrics on SPD manifolds.
_Remark 1_.: The identity element of the Lie group \(\{\mathcal{S}_{++}^{n},\odot_{\phi}\}\) induced by \(\phi\) is not necessarily the identity matrix by definition. However, the current Lie groups on SPD manifolds [21, 23, 35, 24] all incorporate the identity matrix as the identity element. Therefore, by abuse of notation, we use \(I\) to represent the identity matrix or identity element alternatively, according to the context. Besides, the inner product \(\langle\cdot,\cdot\rangle\) in Eq. (4) does not need to be the standard Euclidean inner product. However, without loss of generality, \(\langle\cdot,\cdot\rangle\) is assumed to be the standard one, as \(n\)-dimensional Euclidean spaces are naturally linearly isometric.
## 3 Riemannian multiclass logistic regression under general PEMs
In this section, we first reformulate the Euclidean multiclass logistic regression (MLR). Then we proceed to deal with the general RMLR under arbitrary PEM on SPD manifolds.
### Reformulation of Euclidean MLR
In HNNs [33], hyperbolic MLR is designed based on the reformulation of Euclidean MLR from the perspective of distances to margin hyperplanes. Lebanon _et al._[36] first showcase this framework on spherical geometry. We now briefly review the reformulation of Euclidean MLR and then present our RMLR on SPD manifolds.
Given \(C\) classes, each margin hyperplane in \(\mathbb{R}^{n}\) can be represented by softmax probabilities:
\[\forall k\in\{1,\dots,C\},\quad p(y=k\mid x)\propto\exp\left((\langle a_{k}, x\rangle-b_{k})\right),\quad\text{ where }b_{k}\in\mathbb{R},x,a_{k}\in\mathbb{R}^{n}. \tag{9}\]
Every hyperplane in \(\mathbb{R}^{n}\) can be parameterized by a normal vector \(a\) and a scalar shift \(b\):
\[H_{a,b}=\{x\in\mathbb{R}^{n}:\langle a,x\rangle-b=0\},\quad\text{ where }a\in\mathbb{R}^{n}\backslash\{\mathbf{0}\},\text{ and }b\in\mathbb{R}. \tag{10}\]
As in [36, Sec. 5] and [33, Sec. 3.1], by a proper \(p\in\mathbb{R}^{n}\), we have \(\langle a,x\rangle-b=\langle a,x-p\rangle\). Hyperplane \(H_{a,b}\) can be reformulated as
\[H_{a,p}=\{x\in\mathbb{R}^{n}:\langle a,x-p\rangle=0\},\quad\text{ where }a\in\mathbb{R}^{n}\backslash\{\mathbf{0}\},\text{ and }p\in\mathbb{R}^{n}. \tag{11}\]
In view of \(\langle a,x-p\rangle=\operatorname{sign}(\langle a,x-p\rangle)\|a\|d(x,H_{a,p})\), Eq. (9) can be rewritten as :
\[p(y=k\mid x)\propto\exp(\operatorname{sign}(\langle a_{k},x-p_{k}\rangle)\|a_ {k}\|d(x,H_{a_{k},p_{k}})),p_{k},x\in\mathbb{R}^{n},\text{ and }a_{k}\in\mathbb{R}^{n}\backslash\{\mathbf{0}\}. \tag{12}\]
In geometry, \(\operatorname{Log}_{p}x\) is the natural generalization of the direction vector \(x-p\) starting at \(p\) and ending at \(x\), and the inner product can be replaced by the Riemannian metric at \(p\). More details can be found in [34, Table 1]. Therefore, the hyperplane in Eq. (11) and the MLR in Eq. (12) can be readily generalized to SPD manifolds \(\{\mathcal{S}_{++}^{n},g\}\).
Figure 1: Conceptual illustration of some popular Riemannian metrics on SPD manifolds.
**Definition 3.1** (SPD hyperplanes).: Given \(P\in\mathcal{S}^{n}_{++},A\in T_{P}\mathcal{S}^{n}_{++}\backslash\{\mathbf{0}\}\), we define the SPD hyperplane as
\[\tilde{H}_{A,P}=\{S\in\mathcal{S}^{n}_{++}:g_{P}(\operatorname{Log}_{P}S,A)= \langle\operatorname{Log}_{P}S,A\rangle_{P}=0\} \tag{13}\]
**Definition 3.2** (SPD multiclass logistics regression).: SPD multiclass logistics regression is defined as
\[p(y=k\mid S)\propto\exp(\operatorname{sign}(\langle A_{k},\operatorname{Log} _{P_{k}}(S)\rangle_{P_{k}})\|A_{k}\|_{P_{k}}d(S,\tilde{H}_{A_{k},P_{k}})), \tag{14}\]
where \(P_{k}\in\mathcal{S}^{n}_{++}\), \(A_{k}\in T_{P_{k}}\mathcal{S}^{n}_{++}\backslash\{\mathbf{0}\}\), \(\langle\cdot,\cdot\rangle_{P_{k}}=g_{P_{k}}\), \(\|\cdot\|_{P_{k}}\) is the norm on \(T_{P_{k}}\mathcal{S}^{n}_{++}\) induced by \(g\) at \(P_{k}\), and \(\tilde{H}_{A_{k},P_{k}}\) is a margin hyperplane in \(\mathcal{S}^{n}_{++}\) as defined in Eq. (13). \(d(S,\tilde{H}_{A_{k},P_{k}})\) denotes the distance between \(S\) and the SPD hyperplane \(\tilde{H}_{A_{k},P_{k}}\), formulated as:
\[d(S,\tilde{H}_{A_{k},P_{k}})=\inf_{Q\in\tilde{H}_{A_{k},P_{k}}}d(S,Q), \tag{15}\]
where \(d(S,Q)\) is the geodesic distance induced by \(g\).
_Remark 2_.: Note that since \(g\) could be any of the existing Riemannian metrics on SPD manifolds, the specific formulae of Eq. (13) and Eq. (14) vary with \(g\). Also, a simple computation shows that an SPD hyperplane is actually a submanifold of \(\mathcal{S}^{n}_{++}\), but we still follow the nomenclature of [33, 36]. Lastly, Definitions 3.1 and 3.2 can also be literally applied to other matrix manifolds, as matrix manifolds are usually geodesically complete.
### General SPD MLR under PEMs
As stated in Theorem 1, PEMs are not a single Riemannian metric. Instead, they denote a family of Riemannian metrics pulled back from Euclidean space. In this subsection, we follow the notation in Theorem 1 and will conclude that RMLR under any PEM can be uniformly expressed.
Before embarking on technical details, let us first clarify why we choose PEMs as our starting metrics. Several Riemannian metrics, including LEM, LCM, and ALEMs, all end up as PEMs, and we will further show that all OILEMs are PEMs. Besides, when formulating Eq. (12) on manifolds, the core concern lies in the optimization problem of calculating the distance \(d(x,H_{a_{k},b_{k}})\). The calculation of this distance under PEMs enjoys theoretical convenience, while under other metrics like AIM, obtaining the distances to hyperplanes would be complicated.
Now, we start by calculating Eq. (15), the distance to an SPD hyperplane.
**Lemma 2**.: _The distance of \(S\in\mathcal{S}^{n}_{++}\) to the SPD hyperplane \(\tilde{H}_{A_{k},P_{k}}\) is reduced to the distance of \(\phi(S)\) to the Euclidean hyperplane \(H_{\phi_{*,P_{k}}(A_{k}),\phi(P_{k})}\) in the Euclidean space of \(T_{P_{k}}\mathcal{S}^{n}_{++}\):_
\[d(S,\tilde{H}_{A_{k},P_{k}})=d(\phi(S),H_{\phi_{*,P_{k}}(A_{k}),\phi(P_{k})})=\frac{|\langle\phi(S)-\phi(P_{k}),\phi_{*,P_{k}}(A_{k})\rangle|}{\|A_{k}\|_{P_{k}}}, \tag{16}\]
_where \(|\cdot|\) is the absolute value._
Putting Eq. (16) into Eq. (14), we obtain our SPD classifiers under any PEMs:
\[p(y=k\mid S)\propto\exp(\langle A_{k},\operatorname{Log}_{P_{k}}(S)\rangle_{P_ {k}})=\exp(\langle\phi(S)-\phi(P_{k}),\phi_{*,P_{k}}(A_{k})\rangle). \tag{17}\]
One might observe that \(A_{k}\in T_{P_{k}}\mathcal{S}^{n}_{++}\) in Eq. (17) is a non-Euclidean parameter, as \(P_{k}\) varies during training. Fortunately, there are different tricks to avoid this issue. The first solution is parallel transportation from a fixed tangent space, writing \(A_{k}=\Gamma_{Q\to P_{k}}(\tilde{A}_{k})\) with \(\tilde{A}_{k}\in T_{Q}\mathcal{S}^{n}_{++}\) as a Euclidean parameter. This is the solution adopted by HNNs [33], where the tangent point is the identity element. Alternatively, one can rely on the differential of a Lie group translation, which is widely used in differential geometry [37, SS 20]. More interestingly, under PEMs, the above two solutions are equivalent and anchor points can be chosen arbitrarily (see Appendix D for technical details). Therefore, without loss of generality, we generate \(A_{k}\) from the tangent space at the identity by parallel transportation, _i.e._, \(A_{k}=\Gamma_{I\to P_{k}}(\tilde{A}_{k})\) with \(\tilde{A}_{k}\in T_{I}\mathcal{S}^{n}_{++}\cong\mathcal{S}^{n}\). Together with Eq. (8), Eq. (17) can be further simplified.
**Theorem 3** (SPD MLR under PEMs).: _Under any PEM, the SPD multiclass logistics regression and SPD hyperplane are_
\[p(y =k\mid S)\propto\exp(\langle\phi(S)-\phi(P_{k}),\phi_{*,I}(\tilde{A} _{k})\rangle), \tag{18}\] \[\tilde{H}_{\tilde{A}_{k},P_{k}} =\{S\in\mathcal{S}^{n}_{++}:\langle\phi(S)-\phi(P_{k}),\phi_{*,I} (\tilde{A}_{k})\rangle=0\}, \tag{19}\]
One can observe that Eq. (18) and Eq. (19) are very similar to a Euclidean MLR. However, since \(\phi\) is normally non-linear and \(P_{k}\) is an SPD parameter, Eq. (18) cannot hastily be identified with a Euclidean MLR. Nevertheless, under some special circumstances, the SPD MLR can be reduced to the familiar Euclidean MLR. To show this result, we first need to present the Riemannian Stochastic Gradient Descent (RSGD) under PEMs. General RSGD [38] is formulated as
\[W_{t+1}=\exp_{W_{t}}(-\gamma_{t}\Pi_{W_{t}}(\nabla_{W}f|_{W_{t}})) \tag{20}\]
where \(\Pi_{W_{t}}\) denotes the projection map taking the Euclidean gradient \(\nabla_{W}f|_{W_{t}}\) to the Riemannian gradient. We have already obtained the formula for the Riemannian exponential map in Eq. (6). We proceed to formulate \(\Pi\).
**Lemma 4**.: _For a smooth function \(f:\mathcal{S}^{n}_{++}\rightarrow\mathbb{R}\) on \(\mathcal{S}^{n}_{++}\) endowed with any kind of PEMs, the projection map \(\Pi_{P}:\mathcal{S}^{n}\to T_{P}\mathcal{S}^{n}_{++}\) at \(P\in\mathcal{S}^{n}_{++}\) is_
\[\Pi_{P}(\nabla_{P}f)=\phi_{*,P}^{-1}(\phi_{*,P}^{-*})(\nabla_{P}f), \tag{21}\]
_where \(\phi_{*,P}^{-*}\) is the adjoint operator of \(\phi_{*,P}^{-1}\), i.e., \(\langle V_{1},\phi_{*,P}^{-1}V_{2}\rangle_{P}=\langle\phi_{*,P}^{-*}V_{1},V_{2}\rangle_{P}\) for all \(V_{i}\in T_{P}\mathcal{S}^{n}_{++}\)._
Together with the above lemma, we can describe the special case we mentioned.
**Theorem 5**.: _Supposing the differential map \(\phi_{*,I}:T_{I}\mathcal{S}^{n}_{++}\to T_{0}\mathcal{S}^{n}\) is the identity map, and \(P_{k}\) in Eq. (18) is optimized by PEM-based RSGD, then Eq. (18) can be reduced to a Euclidean MLR in the codomain of \(\phi\) updated by Euclidean SGD._
## 4 Riemannian multiclass logistic regression under OILEMs
Although Theorem 3 can deal with Riemannian classifiers under all PEMs, we focus on OILEMs to establish our SPD MLR in this paper.
### OILEMs-based SPD MLR
LEM enjoys fast and simple computation on SPD manifolds [21] and has shown many successful applications in dealing with SPD data [39; 17; 29; 10; 18]. The nascent OILEMs [25] are natural generalizations of LEM. Furthermore, as shown in the following lemma, all the OILEMs are PEMs.
**Lemma 6**.: \((\alpha,\beta)\)_-LEM is a pullback metric by \(\phi_{p,q}=F_{p,q}\circ\mathrm{mlog}\) from the standard Euclidean space \(\mathcal{S}^{n}\)._
Therefore, the previous discussion of PEMs-based MLR can be directly applied to OILEMs-based ones as well. For the above two reasons, we choose OILEMs as the Riemannian metrics to build our SPD MLR.
Calculating the differential of \(\phi_{p,q}\) at \(I\) directly gives the final formulation of SPD MLR.
**Proposition 7** (Final formulation of SPD MLR under \((\alpha,\beta)\)-LEM).: _On the SPD manifold with \((\alpha,\beta)\)-LEM, the SPD MLR and SPD hyperplane can be written as_
\[p(y =k\mid S)\propto\exp(\langle\phi_{p,q}(S)-\phi_{p,q}(P_{k}),F_{p, q}(\tilde{A_{k}})\rangle), \tag{22}\] \[\tilde{H}_{\tilde{A}_{k},P_{k}} =\{S\in\mathcal{S}^{n}_{++}:\langle\phi_{p,q}(S)-\phi_{p,q}(P_{k}),F_{p,q}(\tilde{A_{k}})\rangle=0\}, \tag{23}\]
_where \(\tilde{A}_{k}\in\mathcal{S}^{n}\) is the normal matrix, \(P_{k}\in\mathcal{S}^{n}_{++}\) is the shift matrix, and \(\phi_{p,q}=F_{p,q}\circ\mathrm{mlog}\)._
Figure 2: Conceptual illustration of SPD hyperplanes on \(\mathcal{S}^{2}_{++}\). The black dots are symmetric positive semi-definite matrices, denoting the boundary of \(\mathcal{S}^{2}_{++}\). The blue, red, and yellow dots denote three SPD hyperplanes.
Figure 2 illustrates three OILEM-based hyperplanes on SPD manifolds. As a submanifold of \(\mathbb{R}^{3}\), \(\mathcal{S}^{2}_{++}\) can be visualized in \(\mathbb{R}^{3}\), since \(P=\left(\begin{array}{cc}x&y\\ y&z\end{array}\right)\in\mathcal{S}^{2}\) is positive definite iff \(x,z>0\) and \(xz>y^{2}\).
_Remark 3_.: When \(p=1,q=1\) (\(\alpha=1,\beta=0\)), OILEM is exactly the familiar LEM. Eq. (22) and Eq. (23) are also reduced to SPD MLR under LEM.
### Understanding LogEig classifier in existing SPD networks
Many existing SPD neural networks [11; 6; 31; 32; 17; 29; 30] rely on a Euclidean MLR in the codomain of the matrix logarithm, _i.e._, a matrix logarithm followed by an FC layer and a softmax layer. For simplicity, we call this classifier the LogEig MLR. The existing explanation of the LogEig MLR is that it approximates the manifold by the tangent space. Here, we claim that it is a special case of our SPD MLR.
Although Eq. (22) is very similar to the LogEig MLR, as stated before, due to the nonlinearity of \(\mathrm{mlog}\) and the non-Euclideanness of the SPD parameter \(P_{k}\), the SPD MLR cannot hastily be viewed as equivalent to the LogEig MLR. That being said, under special circumstances, as a direct corollary of Theorem 5, Eq. (22) is equivalent to a LogEig MLR.
**Corollary 8**.: _Endowing SPD manifolds with LEM, optimizing SPD parameter \(P_{k}\) in Eq. (22) by LEM-based RSGD and Euclidean parameter \(A_{k}\) by Euclidean SGD, the LEM-based SPD MLR is equivalent to a LogEig MLR with parameters in FC layer optimized by Euclidean SGD._
The widely used LogEig MLR can be thus geometrically explained as a special case of our approach.
### Learning SPD parameters
Similar to Corollary 8, optimizing \(P_{k}\) in SPD MLR by OILEM-based RSGD would be equivalent to a Euclidean MLR in the codomain of \(\phi_{p,q}\). However, both theoretical analysis [40] and empirical experiments [41; 5] have demonstrated the benefits of AIM-based optimization. Therefore, we choose AIM to optimize the SPD parameter \(P_{k}\) in our SPD MLR. Besides, we also adopt the RSGD based on the less explored Bures-Wasserstein Metric (BWM) [22], which has shown promising performance for ill-conditioned matrices [42].
The required operators for RSGD under these two metrics are well studied in the existing literature [34; 40; 22]. We summarize them in Table 1, where \(\mathcal{L}_{P}(V)\) is the Lyapunov operator, _i.e._, the solution of \(\mathcal{L}_{P}(V)P+P\mathcal{L}_{P}(V)=V\), and \((A)_{\mathrm{sym}}=\frac{A+A^{\top}}{2}\). For fast computation of BWM-based RSGD, we further adopt the Newton-Schulz method to calculate the Lyapunov operator. More details are given in Appendix E.
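For reference, the Lyapunov operator admits a closed form in the eigenbasis of \(P\); the following PyTorch sketch is a straightforward eigendecomposition-based solver (the faster Newton-Schulz variant adopted in the paper is detailed in Appendix E).

```python
import torch

def lyapunov(P, V):
    """Solve L P + P L = V for L (the Lyapunov operator L_P(V)),
    via the eigendecomposition of the SPD matrix P. In the eigenbasis
    of P, the solution is elementwise: L_ij = V_ij / (w_i + w_j).
    """
    w, U = torch.linalg.eigh(P)
    V_hat = U.t() @ V @ U
    L_hat = V_hat / (w[:, None] + w[None, :])
    return U @ L_hat @ U.t()
```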
As for the gradient computation and backpropagation of SPD MLR, please refer to Appendix C.
### Final SPD MLR algorithm
Now we write the full algorithm of our SPD MLR. Recalling Eq. (22), for each class \(k\in\{1,\cdots,C\}\), we have a normal matrix \(\tilde{A}_{k}\in\mathcal{S}^{n}\) and a biasing matrix \(P_{k}\in\mathcal{S}^{n}_{++}\). Computationally, Eq. (22) first applies \(\phi_{p,q}\) to each \(P_{k}\) and each input SPD feature \(S_{i}\), and applies \(F_{p,q}\) to each \(\tilde{A}_{k}\). Then, the associated inner products are calculated and softmax is applied. The inner products can be carried out efficiently via matrix products. Therefore, we can concatenate all the Euclidean data \(F_{p,q}(\tilde{A}_{k})\) into \(A\) and apply a traditional linear operation with weight matrix \(A\) and bias vector \(b=(b_{1},\cdots,b_{C})\), where \(b_{k}=\langle F_{p,q}(\tilde{A}_{k}),\phi_{p,q}(P_{k})\rangle\). In practice, our MLR can be applied as a classifier to any SPD network. We present the above process in Algorithm 1.
\begin{table}
\begin{tabular}{c c c} \hline \hline Operators & AIM & BWM \\ \hline Exponential map \(\mathrm{Exp}_{P}(V)\) & \(P^{\frac{1}{2}}\mathrm{mexp}(P^{-\frac{1}{2}}SP^{-\frac{1}{2}})P^{\frac{1}{2}}\) & \(P+V+\mathcal{L}_{P}(V)P\mathcal{L}_{P}(V)\) \\ Riemannian gradient \(\Pi_{P}(\nabla_{P}f)\) & \(P(\nabla_{P}f)_{\mathrm{sym}}P\) & \(\Pi_{P}(\nabla_{P}f)=4(\nabla_{P}fP)_{\mathrm{sym}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Riemannian optimization operators for AIM and BWM.
```
Hyper-parameters: \(\alpha,\beta,\mathrm{s.t.}p=\sqrt{\alpha+n\beta}>0,q=\sqrt{\alpha}>0\); Parameters: \(n\times n\) SPD parameters \(\{P_{k}\}_{k\leq C}\), \(n\times n\) symmetric matrices \(\{\tilde{A}_{k}\}_{k\leq C}\); Inputs: a batch of \(n\times n\) SPD matrices \(\{S_{i}\}_{i<N}\); Step 1: mapping SPD features and parameters: \[\forall k\leq C,i\leq N,\bar{P}_{k}\leftarrow\phi_{p,q}(P_{k}),\bar{A}_{k} \gets F_{p,q}(\tilde{A}_{k}),\bar{S}_{i}\leftarrow\phi_{p,q}(S_{i});\] Step 2: concatenation: \[A\leftarrow\mathrm{concat}(\mathrm{vec}(\bar{A}_{1}),\cdots,\mathrm{vec}(\bar{A }_{C})),S\leftarrow\mathrm{concat}(\mathrm{vec}(\bar{S}_{1}),\cdots,\mathrm{ vec}(\bar{S}_{N}));\] Step 3: calculating multinomial probabilities: \[v\leftarrow\mathrm{softmax}(SA^{\top}-(b,\cdots,b)^{\top}),\] where bias vector \(b\in\mathbb{R}^{C}\) and \(b_{i}=\langle\bar{P}_{i},\bar{A}_{i}\rangle\); Output: \(v\in\mathbb{R}^{N\times C}\); Updating \(P_{k}\) by AIM/BWM-based RSGD, \(\tilde{A}_{k}\) by Euclidean optimization.
```
**Algorithm 1** Training \((\alpha,\beta)\)-LEM-based SPD multiclass logistics regression on SPD networks
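To ground Algorithm 1, here is a minimal PyTorch sketch of the forward pass; the class name and shapes are our own assumptions, and for readability the shift matrices \(P_{k}\) are stored as plain parameters, whereas the paper updates them with AIM/BWM-based RSGD (which also guarantees that they stay SPD) and the normal matrices with Euclidean optimization.

```python
import torch
import torch.nn as nn

class SPDMLR(nn.Module):
    """Minimal sketch of the (alpha, beta)-LEM-based SPD MLR of Algorithm 1."""

    def __init__(self, n, num_classes, p=1.0, q=1.0):
        super().__init__()
        self.p, self.q, self.n = p, q, n
        self.A = nn.Parameter(torch.randn(num_classes, n, n) * 0.1)
        # P_k initialized as the identity; must remain SPD (RSGD in the paper)
        self.P = nn.Parameter(torch.eye(n).repeat(num_classes, 1, 1))

    def phi(self, X):
        # Step 1: phi_{p,q}(X) = q mlog(X) + ((p-q)/n) tr(mlog(X)) I
        w, U = torch.linalg.eigh(X)
        log_X = U @ torch.diag_embed(torch.log(w)) @ U.transpose(-1, -2)
        tr = log_X.diagonal(dim1=-2, dim2=-1).sum(-1)
        eye = torch.eye(self.n, device=X.device)
        return self.q * log_X + (self.p - self.q) / self.n * tr[..., None, None] * eye

    def forward(self, S):
        # S: (N, n, n) batch of SPD features
        A_sym = 0.5 * (self.A + self.A.transpose(-1, -2))     # symmetric A_k
        tr_A = A_sym.diagonal(dim1=-2, dim2=-1).sum(-1)
        eye = torch.eye(self.n, device=S.device)
        A_bar = self.q * A_sym + (self.p - self.q) / self.n * tr_A[..., None, None] * eye
        S_bar = self.phi(S)                                   # (N, n, n)
        P_bar = self.phi(self.P)                              # (C, n, n)
        # Steps 2-3: inner products as matrix contractions; softmax is
        # applied by the cross-entropy loss downstream
        logits = torch.einsum('nij,cij->nc', S_bar, A_bar)
        bias = torch.einsum('cij,cij->c', P_bar, A_bar)       # b_k
        return logits - bias
```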
### Further explanations on hyper-parameters
As we have discussed, SPD MLR and the corresponding hyperplane would vary with different \((\alpha,\beta)\), respecting various OILEMs. Here, we make further explanations about the effect of \(\alpha,\beta\) or equivalently \((p,q)\).
Recalling \(F_{p,q}(X)=qX+\frac{p-q}{n}\operatorname{tr}(X)I_{n}\) in Lemma 6, the second term can be viewed as \((p-q)\frac{\operatorname{tr}(X)I_{n}}{n}\). Intuitively, \(p-q\) is related to the importance of the averaged trace. To see this effect more clearly, we present the general expression of OILEMs as follows, which is not covered in [25].
**Proposition 9** (\((\alpha,\beta)\)-LEM).: _The \((\alpha,\beta)\)-LEM is formulated as_
\[\langle V_{1},V_{2}\rangle_{S}=q^{2}\langle\tilde{V}_{1},\tilde{V}_{2}\rangle_{\mathbb{F}}+(p-q)^{2}\frac{\operatorname{tr}(\tilde{V}_{1})\operatorname{tr}(\tilde{V}_{2})}{n}+2q(p-q)\frac{\operatorname{tr}(\tilde{V}_{1})\operatorname{tr}(\tilde{V}_{2})}{n}, \tag{24}\]
_where \(S\) is an SPD matrix, \(V_{i}\in T_{S}\mathcal{S}_{++}^{n}\), \(\tilde{V}_{i}=\mathrm{mlog}_{*,S}V_{i},p=\sqrt{\alpha+n\beta}\), and \(q=\sqrt{\alpha}\)._
Now we can clearly see the effect of \(p,q\) from Eq. (24). Compared with the vanilla LEM (\(p=q=1\)), the second and third terms in Eq. (24) indicate that _the absolute value of \(p-q\) controls the magnitude of the trace in the Riemannian metric._ The third term indicates that _the sign of \(p-q\)_ (\(+,-,0\)) _determines whether the trace is amplified, suppressed, or neutralized._ Correspondingly, there are three basic ratios between \(q\) and \((p-q)\), _i.e._, \(1:1\), \(1:-0.99\), and \(1:0\). Note that since \(1:-1\) cannot ensure \(\mathrm{O}(n)\)-invariance, we lower the ratio and use \(1:-0.99\) instead. Although more fine-grained ratios between \(q\) and \((p-q)\) are possible, we focus on the above three most basic ones.
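The algebraic core of Eq. (24) can be checked numerically on the Euclidean side: for symmetric \(X,Y\), the Frobenius inner product of \(F_{p,q}(X)\) and \(F_{p,q}(Y)\) expands into exactly the three terms of Eq. (24). A short sketch reusing `F_pq` from above, with hypothetical values for \(p\) and \(q\):

```python
import torch

torch.manual_seed(0)
n, q_, p_ = 4, 1.0, 2.0                      # hypothetical choices with p, q > 0
X = torch.randn(n, n); X = (X + X.T) / 2     # random symmetric matrices
Y = torch.randn(n, n); Y = (Y + Y.T) / 2

lhs = (F_pq(X, p_, q_) * F_pq(Y, p_, q_)).sum()   # <F(X), F(Y)> in Frobenius sense
rhs = (q_**2 * (X * Y).sum()
       + (p_ - q_)**2 * X.trace() * Y.trace() / n
       + 2 * q_ * (p_ - q_) * X.trace() * Y.trace() / n)
assert torch.allclose(lhs, rhs)              # the three terms of Eq. (24)
```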
## 5 Experiments
To fairly validate the effectiveness of our methods, we adopt the two most classic SPD networks, _i.e._, SPDNet [11] and SPDNetBN [6], as backbones and validate the performance on the Radar [6], HDM05 [43], and AFEW [44] datasets. A brief review of their basic layers is presented in Appendix B.3. We implement our SPD MLR based on the official code of SPDNetBN1. We apply our methods to these two networks by substituting their LogEig MLR with our SPD MLR. We focus on three basic cases, where \(q:(p-q)\) is \(1:0\), \(1:1\), and \(1:-0.99\), respectively. We call SPDNet-RMLR-(1,0) an SPDNet with our Riemannian classifier with \(q=1,p-q=0\); SPDNet-RMLR-(1,1) and SPDNet-RMLR-(1,-0.99) are named analogously. We initialize the normal matrices \(\tilde{A}_{k}\) by the Kaiming uniform strategy [45], and \(P_{k}\) as the identity matrix. Note that, with this initialization, the initial status of our classifier under LEM (\(q=1,p-q=0\)) is exactly equivalent to the LogEig MLR in the PyTorch implementation of SPDNetBN. We denote by \(\{d_{0},d_{1},\cdots,d_{L}\}\) the dimensions of each transformation layer in the SPDNet backbone. The batch size is 30, and the optimizer is SGD. We first validate SPDNet-RMLR optimized by AIM-based RSGD. In Section 5.2, we further implement our classifier into SPDNetBN and implement our classifier by BWM-based RSGD, respectively.
**Impact of \(q\) and \(p-q\).** As stated before, different values of \(q\) and \(p-q\) correspond to different kinds of OILEMs, and the best setting depends on the characteristics of the datasets. Our general observation is that when the dimension is relatively low (\(8\times 8\) on Radar), extra attention to the trace might be beneficial, while for relatively high dimensions (\(30\times 30\) on HDM05 and \(50\times 50\) on AFEW), the ratio \(1:0\) tends to be already saturated. This matches our intuition that with growing dimension, the proportion of diagonal elements gets smaller; the trace is therefore relatively less important when calculating the Riemannian metric in Eq. (24). Below are our detailed results.
### Experiments on SPDNet
**Drone recognition.** We adopt the Radar dataset2 [6], composed of \(3,000\) synthetic radar signals, each of which is split into windows of length 20, resulting in 3,000 \(20\times 20\) SPD matrices equally distributed over 3 classes. In line with [6], we assign 50%, 25%, and 25% of the data for training, validation, and testing. We test our classifiers under the two architectures suggested in [6], {20, 16, 8} and {20, 16, 14, 12, 10, 8}, as well as different learning rates. The 10-fold results are presented in Table 2. Our classifier with ratios (1,0) and (1,1) brings consistent performance gains to SPDNet under different settings. In particular, SPDNet-RMLR-(1,1) achieves much better performance and less variance compared with the vanilla SPDNet. In contrast, suppressing the trace part, _i.e._, RMLR-\((1,-0.99)\), is harmful for this task. The accuracy curve versus epochs is presented in Figure 3. The performance gain is consistent throughout the training.
Footnote 2: [https://www.dropbox.com/s/dfnlx2bnyh3kjwy/data.zip?dl=0](https://www.dropbox.com/s/dfnlx2bnyh3kjwy/data.zip?dl=0)
**Action recognition.** HDM05 dataset [43] contains 2,273 skeleton-based motion capture sequences executed by various actors. Each frame consists of 3D coordinates of 31 joints of the subjects and each sequence can be therefore modeled by a \(93\times 93\) covariance matrix. For a fair comparison, we adopt the pre-processed covariance features3 released by SPDNet [11], where datasets are augmented
\begin{table}
\begin{tabular}{c|c c|c c} \hline \hline Learning Rate & \multicolumn{2}{c|}{\(1e^{-2}\)} & \multicolumn{2}{c}{\(2.5e^{-2}\)} \\ \hline Architecture & {20, 16, 8} & {20, 16, 14, 12, 10, 8} & {20, 16, 8} & {20, 16, 14, 12, 10, 8} \\ \hline SPDNet & 89.89\(\pm\)1.21 & 90.88\(\pm\)0.61 & 91.93\(\pm\)0.84 & 92.32\(\pm\)0.50 \\ SPDNet-RMLR-(1,0) & 92.74\(\pm\)1.01 & 93.25\(\pm\)0.98 & 94.19\(\pm\)0.91 & 93.33\(\pm\)0.79 \\ SPDNet-RMLR-(1,-0.99) & 85.15\(\pm\)1.34 & 85.02\(\pm\)0.76 & 85.65\(\pm\)1.34 & 85.84\(\pm\)0.78 \\ SPDNet-RMLR-(1,1) & **94.57\(\pm\)1.08** (\(\uparrow\)4.68) & **95.08\(\pm\)0.48** (\(\uparrow\)4.22) & **94.97\(\pm\)0.70** (\(\uparrow\)3.04) & **95.33\(\pm\)0.61** (\(\uparrow\)3.01) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of SPDNet with and without Riemannian MLR on the Radar dataset.
Figure 3: Test accuracy on Radar (left) and HDM05 (right) datasets. The architectures are {20, 16, 8} and {93, 30}. The learning rates are \(1e^{-2}\) and \(5e^{-2}\).
to 18,294 sequences for training and 1,197 for testing. Following [11; 6], we adopt the architecture {93, 30} and a learning rate of \(5e^{-2}\). The 10-fold results are presented in Table 3. For this task, RMLR-\((1,0)\) is the best classifier. Compared with the vanilla LogEig MLR, RMLR-\((1,0)\) also shows less variance. In fact, RMLR-\((1,0)\) respects the geometry induced by LEM, indicating that for this task no extra attention should be paid to the trace when computing the Riemannian metric tensor in Eq. (24). We also present the accuracy curve in Figure 3.
**Emotion recognition.** AFEW dataset [44] covers 7 kinds of emotion, consisting of 2,118 video clips with 1,747 for training and 371 for testing. Following [11; 5], each video is modeled by a \(400\times 400\) covariance matrix. Following [11], we also validate our classifier under different network architectures. The learning rate is set to be \(5e^{-2}\). The 10-fold results are presented in Table 4. Note that since SPDNet shows relatively large fluctuation on this dataset, we also present the best result among 10 folds. Similar to HDM05, SPDNet-RMLR-\((1,0)\) shows the most promising results. The best result of SPDNet-RMLR-\((1,0)\) reaches 37.2, while the best result of vanilla SPDNet is 35.8.
### Ablation studies
**Experiments on SPDNetBN.** We further apply our classifier to SPDNetBN. We adopt the best settings of \(q\) and \(p-q\) obtained in the above experiments, _i.e._, (1,1) for Radar and (1,0) for HDM05 and AFEW. The learning rates are \(1e^{-2}\), \(1e^{-2}\), and \(5e^{-2}\), respectively. The results are presented in Table 5. Our classifier brings better performance and less variance on the Radar and HDM05 datasets. Although the average performance on the AFEW dataset is slightly worse than SPDNetBN, the best performance is achieved by our approach. Nevertheless, we admit that the general performance gain of SPDNetBN-MLR over SPDNetBN might not be as obvious as that of SPDNet-MLR over SPDNet. We think the underlying reason lies in the metrics used: the Riemannian batch normalization (RBN) in SPDNetBN is based on AIM, while our classifiers are based on OILEMs. This inconsistency could undermine the effectiveness of our methods.
**BWM-based RSGD.** We focus on the SPDNet backbone. We call SPDNet-RMLR-BWM a network whose RMLR is updated by BWM-based RSGD; SPDNet-RMLR-AIM, the model tested above, is named analogously. We carry out 10-fold experiments on all three datasets, with learning rates \(1e^{-2}\), \(5e^{-2}\), and \(5e^{-2}\), respectively. We adopt the optimal values of \((q,p-q)\) obtained before. We set the maximal number of iterations in the Newton-Schulz method to 8, while other settings remain the same. The average performance and training time (s/epoch) are presented in Table 6. We observe that although BWM shows clearly better performance than AIM on the HDM05 dataset, on the other two datasets BWM is generally inferior to AIM, similar to the conclusion reached in [42]. However, BWM can be more efficient than AIM, especially with multiple SPD parameters. The main reason is the fast computation of the Lyapunov equation.
\begin{table}
\begin{tabular}{c|c c c c|c c|c c c} \hline \hline Dataset & \multicolumn{4}{c|}{Radar} & \multicolumn{2}{c|}{HDM05} & \multicolumn{3}{c}{AFEW} \\ \hline Architecture & \multicolumn{2}{c|}{[20, 16, 8]} & \multicolumn{2}{c|}{[20, 16, 14, 12, 10, 8]} & \multicolumn{2}{c|}{[93, 30]} & \multicolumn{3}{c}{[400, 200, 100, 50]} \\ \hline Measure & Mean\(\pm\)STD & Time & Mean\(\pm\)STD & Time & Mean\(\pm\)STD & Time & Mean\(\pm\)STD & Max & Time \\ \hline SPDNet & 89.89\(\pm\)1.21 & 1.21 & 90.88\(\pm\)0.61 & 2.59 & 62.80\(\pm\)1.01 & 21.98 & 34.16\(\pm\)1.04 & 35.8 & 74.18 \\ SPDNet-RMLR-AIM & **94.57\(\pm\)1.08** & 1.39 & **95.08\(\pm\)0.48** & 2.56 & 63.93\(\pm\)0.49 & 16.27 & **34.66\(\pm\)1.45** & **37.2** & 79.15 \\ SPDNet-RMLR-BWM & 93.59\(\pm\)0.99 & 1.32 & 94.03\(\pm\)0.48 & 2.51 & **65.26\(\pm\)0.25** & 44.89 & 34.35\(\pm\)0.98 & 35.99 & 76.13 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results of SPDNet-RMLR under BWM RSGD against AIM RSGD.
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline Architecture & \multicolumn{2}{c|}{[400, 50]} & \multicolumn{2}{c|}{[400, 200, 50]} & \multicolumn{2}{c}{[400, 200, 100, 50]} \\ Measure & Mean\(\pm\)STD & Max & Mean\(\pm\)STD & Max & Mean\(\pm\)STD & Max \\ \hline SPDNet & 31.86\(\pm\)0.95 & 33.05 & 33.13\(\pm\)0.55 & 34.08 & 34.16\(\pm\)1.04 & 35.8 \\ SPDNet-RMLR-(1,0) & **33.14\(\pm\)1.48** & – & – & – & **34.66\(\pm\)1.45** & **37.2** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of SPDNet with and without Riemannian MLR on the AFEW dataset.
## 6 Conclusion
In this paper, we are the first to develop Riemannian classifiers for SPD neural networks. We provide a general classifier for all kinds of metrics pulled back from Euclidean space. Furthermore, we showcase our framework under a family of \(\mathrm{O}(n)\)-invariant Riemannian metrics, called OILEMs. To the best of our knowledge, our work is the first to employ these metrics in machine learning. The consistently superior performance in extensive experiments also supports our claims. As a future avenue, our framework can be readily applied to other kinds of PEMs.
|
2305.05772 | Spiking Neural Networks in the Alexiewicz Topology: A New Perspective on
Analysis and Error Bounds | In order to ease the analysis of error propagation in neuromorphic computing
and to get a better understanding of spiking neural networks (SNN), we address
the problem of mathematical analysis of SNNs as endomorphisms that map spike
trains to spike trains. A central question is the adequate structure for a
space of spike trains and its implication for the design of error measurements
of SNNs including time delay, threshold deviations, and the design of the
reinitialization mode of the leaky-integrate-and-fire (LIF) neuron model. First
we identify the underlying topology by analyzing the closure of all
sub-threshold signals of a LIF model. For zero leakage this approach yields the
Alexiewicz topology, which we adopt to LIF neurons with arbitrary positive
leakage. As a result LIF can be understood as spike train quantization in the
corresponding norm. This way we obtain various error bounds and inequalities
such as a quasi isometry relation between incoming and outgoing spike trains.
Another result is a Lipschitz-style global upper bound for the error
propagation and a related resonance-type phenomenon. | Bernhard A. Moser, Michael Lunglmayr | 2023-05-09T21:22:57Z | http://arxiv.org/abs/2305.05772v2 | # Spiking Neural Networks in the Alexiewicz Topology: A New Perspective on Analysis and Error Bounds
###### Abstract
In order to ease the analysis of error propagation in neuromorphic computing and to get a better understanding of spiking neural networks (SNN), we address the problem of mathematical analysis of SNNs as endomorphisms that map spike trains to spike trains. A central question is the adequate structure for a space of spike trains and its implication for the design of error measurements of SNNs including time delay, threshold deviations, and the design of the reinitialization mode of the leaky-integrate-and-fire (LIF) neuron model. First we identify the underlying topology by analyzing the closure of all sub-threshold signals of a LIF model. For zero leakage this approach yields the Alexiewicz topology, which we adopt to LIF neurons with arbitrary positive leakage. As a result LIF can be understood as spike train quantization in the corresponding norm. This way we obtain various error bounds and inequalities such as a quasi isometry relation between incoming and outgoing spike trains. Another result is a Lipschitz-style global upper bound for the error propagation and a related resonance-type phenomenon.
Leaky-Integrate-and-Fire (LIF) Neuron · Spiking Neural Networks (SNN) · Re-Initialization · Quantization · Error Propagation · Alexiewicz Norm
## 1 Introduction
Spiking neural networks (SNNs) are artificial neural networks of interconnected neurons that asynchronously process and transmit spatial-temporal information based on the occurrence of spikes that come from spatially distributed sensory input neurons. The most commonly used neuron model in SNNs is the leaky-integrate-and-fire (LIF) model Gerstner et al. (2014). Despite its strong simplification of the biological way of spike generation, the LIF model has proven useful in particular when modeling the temporal spiking characteristics in the biological nervous system. For an overview see, e.g., Tavanaei et al. (2019); Nunes et al. (2022). At the interfaces, that is, from the real world to the SNN and from the SNN output back to the real world, there is in general the need to translate analogue perceived intensities into spike trains, and later on, after processing, to decode the resulting spike trains into meaningful decisions. Some approaches prefer a rate-based encoding, while others emphasize the timing, e.g., of first arriving spikes Guo et al. (2021). In the context of sampling time-varying signals, alternatives to LIF are also used to convert real analogue signals into spikes, e.g., by delta coding or synonymously used terms like send-on-delta, level crossing or threshold-based sampling Miskowicz (2006); Liu et al. (2014); Yousefzadeh and Sifalakis (2022).
Due to their particular nature of asynchronous and sparse information processing, SNNs are studied mainly for two reasons: first, as a simplified mathematical model in the context of computational neuroscience aiming at a better understanding of biological neural circuits, and second, as a further step towards more powerful but energy-efficient embedded AI edge solutions to process time-varying signals with a wide range of applications including visual processing Amir et al. (2017); Yousefzadeh and Sifalakis (2022), audio recognition Wu et al. (2018), speech
recognition Wu et al. (2020), biomedical signal processing Hassan et al. (2018) and robotic control Kabilan and Muthukumaran (2021), Yamazaki et al. (2022). New application scenarios are emerging in the context of edge AI and federated learning across a physically distributed network of resource-constrained edge devices to collaboratively train a global model while preserving privacy Yang et al. (2022). Of particular interest are applications in the emerging field of brain-computer interfaces which opens up new perspectives for the treatment of neurological diseases such as Parkinson's disease Dethier et al. (2013), Gege et al. (2021). For an overview on the performance comparison between SNNs and conventional vector-based artificial neural networks see Deng et al. (2020). However, the full potential of SNNs, in particular their energy efficiency and dynamic properties, will only become manifest when implemented on dedicated neuromorphic hardware, leading to ongoing research in this direction Bouvier et al. (2019), DeBole et al. (2019), Ostrau et al. (2022), Michaelis et al. (2022).
However, despite the great potential of SNNs and the ongoing research efforts, practical realizations are few so far. The research on SNNs is still in an early phase of maturity, particularly lacking a mathematical foundation, which becomes necessary due to the special hybrid continuous-discrete nature of SNNs and the underlying paradigm shift towards event-based signal processing.
A closer look at the different ways of sampling makes this paradigm shift apparent, as sketched in Fig. 1. In equidistant-based uniform sampling, the mathematics of information encoding and processing is based on regular Dirac combs and their embedding in Hilbert spaces with their powerful mathematical tools of convolution, orthogonal projection, and, based thereupon, spectral analysis, signal filtering and reconstruction. As sequences of uniformly distributed Dirac pulses in time, regular Dirac combs and related concepts of signal decomposition into regular wave forms can be viewed as a trick that allows time to be treated mathematically as a space variable. However, this mathematical abstraction neglects the time information that is implicitly encoded by events Miskowicz (2006). As a result, the mathematics of uniform sampling and signal processing becomes basically vector-based. In contrast, in biological information processing systems and bio-inspired neuromorphic computing, see e.g., Tavanaei et al. (2019); Nunes et al. (2022), the paradigm of information encoding somewhat flips the role of regularity w.r.t time versus amplitude. While in uniform sampling and related signal processing time is treated as a regular structure and the amplitudes of sampled values are kept flexible, in bio-inspired sampling and signal processing the amplitudes are forced into a regular structure by means of thresholding while the handling of time is kept flexible. This paradigm shift of information encoding has implications for the mathematical foundation of handling irregularity in time, that is, handling sequences of Dirac impulses beyond a regular comb structure, see Figure 1.
While regular time leads to Hilbert spaces, and inherently to the Euclidean norm, our approach is to revise the Euclidean view of geometry in this context in favor of alternative metrics, which become necessary if we postulate certain analytical properties of the topological and metric structure of the space of spike trains in combination with neuronal models and spiking neural networks as used in neuromorphic computing. Our paper is a contribution to the mathematical foundation of SNNs by elaborating on postulates on the topology of the vector space of spike trains in terms of sequences of weighted Dirac impulses. Our approach is a follow-up of Moser and Natschlager (2014); Moser (2015, 2016, 2017); Moser and Lunglmayr (2019), which discuss the discrepancy measure as a special range-based metric for spike trains, for which a quasi-isometry relation for threshold-based sampling between analog signals as input and spike trains as output of a leaky-integrate-and-fire neuron can be established. In this paper, we go beyond sampling and show that range-based metrics also play a special role for the mathematical conception of error and deviation analysis based on spike trains, and thus for understanding, analyzing, and bounding the error propagation of spiking neural networks (SNNs).
The paper is structured as follows. In Section 2, we fix notation and prepare preliminaries on SNNs based on the leaky integrate-and-fire neuron model and the mathematically idealized assumption of instantaneous realization of events due to an impulse input. In this context, we discuss proposed variants of re-initialization after the firing event and revise the re-initialization mode known as _reset-by-subtraction_ in the general setting of spike amplitudes that exceed multiples of the threshold. Such high spike amplitudes can arise in SNNs by scaling the input channels by weights. This way we introduce the _reset-to-mod_ re-initialization, which actually results from a consistent application of the assumption of instantaneous events to _reset-by-subtraction_. The resulting operation can be considered as a modulo division. _reset-to-mod_ allows us to take a broader view on the mathematics of LIF as an endomorphism that operates on the vector space of spike trains of weighted Dirac impulses. This way we study the resulting LIF operator, specify the postulates motivated above in Section 3, and derive a solution based on a topological argument in Subsection 3.1, leading to the Alexiewicz norm for integrate-and-fire (IF). We generalize this norm for leaky integrate-and-fire (LIF) and, as a main result, we show in Subsection 3.2 that the LIF operator acts as spike quantization in the grid given by the corresponding norm. The related inequality turns out to be useful when it comes to the analysis of error and signal propagation due to perturbations or added spikes in the input channels. Section 4 focuses on the effect of added spikes in the input channel, which results in a Lipschitz-style upper bound for the LIF model and feed-forward SNNs in general. This analysis also shows that there can be a principal change in the input-output characteristic along the transition from integrate-and-fire with zero leakage to arbitrarily small leakage parameters. Section 5 reports on simulations that illustrate our theoretical approach.

Figure 1: Paradigm shift in information encoding of uniform (left) versus threshold-based (right) sampling.
## 2 Preliminaries
First we recall the LIF neural model with its computational variants and fix notation. It has a long history which goes back to Lapicque (1907). For an overview on its motivation and relevance in neuroscience and neuromorphic computing see Gerstner et al. (2014); Dayan and Abbott (2001) and Eshraghian et al. (2021). Situated in the middle ground between biological plausibility and technical feasibility, the LIF model abstracts away the shape and profile of the output spike. This way spikes are mathematically represented as Dirac delta impulses.
Therefore, we start with spike trains as input signals which we assume to be given mathematically as sequences of weighted Dirac impulses, i.e.,
\[\eta(t):=([a_{i};t_{i}]_{i})(t):=\sum_{i\in\mathbb{N}_{0}}a_{i}\,\delta_{t_{i} }(t), \tag{1}\]
where \(a_{i}\in\mathbb{R}\) and \(\delta_{t_{i}}\) refers to a Dirac impulse shifted by \(t_{i}\). \(\mathbb{N}_{0}\) means that there is no bound for the number of spikes, though for convenience we assume that for each spike train the number is finite. For convenience and without loss of generality, to ease notation below we assume that \(t_{0}=0\) and \(a_{0}=0\). The empty spike train is denoted by \(\emptyset\).
\((\mathbb{S},+,\cdot)\) denotes the vector space of all spike trains (1) based on usual addition and scaling, which later on will be equipped with a metric \(d(.,.)\), resp. norm \(\|.\|\), to obtain the metric space \((\mathbb{S},d)\), respectively, the normed space \((\mathbb{S},\|.\|)\).
Mathematically, the LIF neuron model is actually an endomorphism, \(\text{LIF}_{\vartheta,\alpha}:\mathbb{S}\to\mathbb{S}\), determined by two parameters, the threshold \(\vartheta>0\) and the leakage parameter \(\alpha>0\), together with the mode for resetting the neuron after the firing, respectively charging/discharging, event. In this paper we consider three reset modes: _reset-to-zero_, _reset-by-subtraction_ and _reset-to-mod_. According to Eshraghian et al. (2021), _reset-to-zero_ means that the potential is reinitialized to zero after firing, while _reset-by-subtraction_ subtracts the \(\vartheta\)-potential \(u_{\vartheta}\) from the membrane's potential that triggers the firing event. As a third variant we use the term _reset-to-mod_, which can be understood as an instantaneously cascaded application of _reset-by-subtraction_ according to the factor by which the membrane's potential exceeds the threshold, resulting in a modulo computation. This means, in the _reset-to-mod_ case the re-initialization starts with the residue after subtracting the integral multiple of the threshold from the membrane's potential at firing time.
Setting \(t_{0}^{(1)}:=0\) (where the upper index indicates the layer, here the output of LIF) and \(\eta_{in}(t):=([a_{i};t_{i}^{(0)}]_{i})(t)\) the mapping
\[\sum_{i\in\mathbb{N}_{0}}b_{i}\,\delta_{t_{i}^{(1)}}=\eta_{out}=\text{LIF}_{ \vartheta,\alpha}(\eta_{in})\]
is recursively given by
\[t_{i+1}^{(1)}=\inf\left\{t\geq t_{i}^{(1)}:\,\left|u_{\alpha}(t_{i}^{(1)},t)\right|\geq\vartheta\right\}, \tag{2}\]
where
\[u_{\alpha}(t_{i},t):=\int_{t_{i}}^{t}e^{-\alpha(t-\tau)}\left(\eta_{in}(\tau)-\text{discharge}(t_{i},\tau)\right)d\tau \tag{3}\]
models the dynamic change of the neuron membrane's potential after an input spike event at time \(t_{i}\) (based on the assumption of instantaneous increase, resp. decrease). At the moment when the absolute value of the membrane potential touches the threshold-level, \(\vartheta>0\), an output spike is generated whose amplitude is given by \(b_{i+1}=+\vartheta\) or \(=-\vartheta\) depending on whether the membrane's potential \(u_{\alpha}\) in (2) is positive or negative.
The process of triggering an output spike is actually a charge-discharge event that is followed by the re-initialization of the membrane's potential modeled by an instantaneously acting discharge process
\[\text{discharge}(t_{i}^{(1)},\tau):=\left\{\begin{array}{lcl}u_{i}\,\delta_{t_{i}^{(1)}}(\tau)&\ldots&\text{for {\it reset-to-zero}},\\ \text{sgn}(u_{i})\,\vartheta\,\delta_{t_{i}^{(1)}}(\tau)&\ldots&\text{for {\it reset-by-subtraction}},\\ \,[u_{i}/\vartheta]\,\vartheta\,\delta_{t_{i}^{(1)}}(\tau)&\ldots&\text{for {\it reset-to-mod}},\end{array}\right. \tag{4}\]
where \(u_{i}:=u_{\alpha}(t_{i-1},t_{i})\), \(\text{sgn}(x)\in\{-1,0,1\}\) is the signum function and
\[[x]:=\text{sgn}(x)\max\{k\in\mathbb{Z}:k\leq|x|\} \tag{5}\]
realizes integer quantization by truncation.
The integration in (3) models the voltage in an RC circuit as response to current impulses. Note that the immediate reset without delay in (2) is an idealization compared to biology or hardware realizations. Since in practical realizations spikes are (or should be) sparse in time, this idealization is a justifiable approximation. _reset-to-zero_ and _reset-by-subtraction_ can show quite different behavior if the spike amplitudes are large.
The _reset-by-subtraction_ mode can be understood as compensation event so that the net voltage balance of the spiking event equals zero, i.e. in case of an output spike with amplitude \(\vartheta\) the membrane is actually discharged by this amount. Accordingly, though not always made clear in the literature, see for example Eshraghian et al. (2021), this assumption has the subtle consequence that an increase of the membrane potential \(u\) by multiples \([u/\vartheta]\) of the threshold level \(\vartheta\) results in a discharge of the membrane's potential by the same amount, i.e., \([u/\vartheta]\vartheta\).
This can also be seen virtually as sequence of \([u/\vartheta]\)-many \(\vartheta\)-discharge actions, which acting in sequence in instantaneous time produce the same result, that is an output spike with amplitude \([u/\vartheta]\vartheta\). Here we express the amplitude of the output spikes as multiple of the unit in terms of the threshold potential \(\vartheta\). For example, consider a single spike \(\eta_{in}(t):=a_{1}\delta_{t_{1}}(t)\) with large amplitude \(|a_{1}|>\vartheta\). Note that due to the idealization of instantaneous actions of charge and discharge events the discharge model in the _reset-by-subtraction_ mode implies that \(\eta_{in}=a_{1}\delta_{t_{1}}\) is mapped to
\[b_{1}\delta_{t_{1}}=\text{LIF}_{\vartheta,\alpha}(a_{1}\delta_{t_{1}}),\,\,\,b _{1}=[a_{1}/\vartheta]\,\vartheta. \tag{6}\]
Fig. 2 illustrates LIF with _reset-by-subtraction_.
Depending on the research and application context, discrete approximations of the LIF model (2) have become popular, particularly to simplify computation and to ease the application of deep learning methods to spike trains Eshraghian et al. (2021). Under the assumptions
1. continuous time \(t\in[0,\infty)\) is replaced by discrete time \(n\Delta t\in\Delta t\,\mathbb{N}_{0}\), where \(\Delta t\ll\alpha\);
2. instantaneous increase, respectively decrease of the membrane's potential (3);
we obtain a discrete computational model, where the input signal \(\eta_{in}=\sum_{i}a_{i}\delta_{t_{i}}\) in continuous time is replaced by the sequence
\[\hat{a}_{k}:=\left\{\begin{array}{ll}a_{i}&k=[t_{i}/\Delta t],\\ 0&\text{else}\end{array}\right. \tag{7}\]
Figure 2: LIF in continuous time with _reset-by-subtraction_, resp. _reset-to-mod_; at \(t_{6}\) the amplitude \(a_{6}\in(2\vartheta,3\vartheta)\) of the input spike, which causes a two times cascaded _reset-by-subtraction_ resulting in an output spike amplitude \(b_{6}=2\vartheta=[a_{6}/\vartheta]\,\vartheta\).
which is well-defined if \(\Delta t\) is chosen sufficiently small so that at most one Dirac impulse hits a time interval \(I_{k}=[k\Delta t,(k+1)\Delta t)\). The amplitudes \(\hat{b}_{n}\), \(n=0,1,\ldots\), of the output spike train are defined as for continuous time. This way, we finally get the discrete LIF model, \(\widehat{\text{LIF}}_{\vartheta,\beta,\Delta t}:\mathbb{R}^{\mathbb{N}_{0}}\to\mathbb{R}^{\mathbb{N}_{0}}\), \((\hat{b}_{k})_{k}=\widehat{\text{LIF}}_{\vartheta,\beta,\Delta t}((\hat{a}_{k})_{k})\), as outlined in Algorithm 1.
**Step 0**: Initialization: \(\hat{a}=(\hat{a}_{k})_{k}\), \(u_{0}:=0\), \(\hat{b}_{0}=0\), \(\beta:=(1-\frac{\Delta t}{\alpha})\);
**Step 1**: Update Membrane Potential: \(u_{n+1}:=\beta\,u_{n}+\hat{a}_{n}-\hat{b}_{n}\)
**Step 2**: Check Threshold: Update time \(n\mapsto n+1\) and check whether \(|u_{n}|\geq\vartheta\). If 'no', then set \(\hat{b}_{n}:=0\) and repeat **Step 1**; if 'yes', then emit an output spike at time step \(n\) and move on to **Step 3**.
**Step 3**: Discharge Event: According to the re-initialization mode set
\[\hat{b}_{n}:=\left\{\begin{array}{ll}u_{n}&\cdots&\text{for {\it reset-to-zero}},\\ \text{sgn}(u_{n})\vartheta&\cdots&\text{for {\it reset-by-subtraction}},\\ [u_{n}/\vartheta]\vartheta&\cdots&\text{for {\it reset-to-mod}}.\end{array}\right. \tag{8}\]
**Step 4**: Repeat **Steps 1, 2, 3** until all input spikes are processed.
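A minimal NumPy sketch of Algorithm 1 covering all three reset modes; the function name and the default values are ours:

```python
import numpy as np

def lif_discrete(a_hat, theta=1.0, beta=0.9, mode="reset-to-mod"):
    """Discrete LIF (Algorithm 1): a_hat is the amplitude sequence (7),
    beta = 1 - dt/alpha the per-step decay; returns the output amplitudes."""
    b_hat = np.zeros(len(a_hat))
    u, b = 0.0, 0.0                         # membrane potential, last discharge
    for n, a in enumerate(a_hat):
        u = beta * u + a - b                # Step 1: membrane update
        b = 0.0
        if abs(u) >= theta:                 # Step 2: threshold check
            if mode == "reset-to-zero":
                b = u
            elif mode == "reset-by-subtraction":
                b = np.sign(u) * theta
            else:                           # "reset-to-mod": truncation (5)
                b = np.trunc(u / theta) * theta
            b_hat[n] = b                    # Step 3: discharge event
    return b_hat
```

For instance, `lif_discrete(np.array([-1.5, 1.0, 1.5]), theta=1.0, beta=1.0)` (i.e., zero leakage) under _reset-to-mod_ yields the amplitudes \((-1,0,2)\), in agreement with Eq. (38) in Section 4.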
With this mathematical clarification of the LIF model, in continuous and discrete time, we are in the position to study integrate-and-fire as spike quantization and provide upper bounds for the quantization error in Section 3.2.
In this paper we consider feed-forward spiking neural networks, \(\text{SNN}:\mathbb{S}^{N_{0}}\to\mathbb{S}^{N_{L}}\) which are mappings given by weighted directed acyclic graphs \((V,E)\) connecting LIF units with fixed parameters \(\vartheta\) and \(\alpha\). SNN takes \(N_{0}\) spike trains as input and maps them to \(N_{L}\) output spike trains. The underlying graph can be arranged in hierarchies starting from the first layer \(1\) up to layer \(L\). We enumerate the LIF nodes in the \(k\)th layer by \((k,i_{k})\), where \(i_{k}\in\{1,\ldots,N_{k}\}\).
For convenience we consider the input spike trains as layer \(0\). The weight \(w^{(k+1)}_{i_{k+1},i_{k}}\) of an edge \([(k,i_{k}),(k+1,i_{k+1})]\in E\) connecting the \(i_{k}\)-th neuron in the \(k\)-th layer with the \(i_{k+1}\)-th neuron in the \((k+1)\)-th layer rescales the amplitudes of the spike train transmitted from the former neuron to the latter; see Fig. 3 for an illustration. This way the mapping SNN can be represented by the tuple of weight matrices \(W=[W^{(1)},\ldots,W^{(N_{L})}]\), where
\[\text{SNN} = [W^{(1)},\ldots,W^{(N_{L})}], \tag{9}\] \[W^{(k+1)} = (w^{(k+1)}_{i_{k+1},i_{k}})\in\mathbb{R}^{N_{k+1}\times N_{k}}.\]
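As a sketch of how the representation (9) translates into computation, the following propagates discrete spike sequences layer by layer, assuming the discrete-time representation (7) for all channels and reusing `lif_discrete` from Section 2:

```python
import numpy as np

def snn_forward(inputs, weights, theta=1.0, beta=0.9, mode="reset-to-mod"):
    """Feed-forward SNN (9): inputs is an (N_0, T) array of discrete spike
    sequences; weights is the list [W^(1), ..., W^(N_L)] of layer matrices."""
    x = np.asarray(inputs, dtype=float)
    for W in weights:
        z = np.asarray(W) @ x      # weighted superposition of incoming spike trains
        x = np.stack([lif_discrete(row, theta, beta, mode) for row in z])
    return x                       # (N_L, T) output spike sequences
```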
## 3 Which Topology for the Space of Spike Trains is Appropriate?
Our approach starts with two main postulates a topology for spike trains should satisfy, see Fig. 4:
_Postulate 1:_ Two spike trains that differ only by small delays of their spikes or small additive noise should be considered close where the notion of closeness should not depend on the number of spikes.
_Postulate 2:_ Small perturbations in the system's configuration parameters should result in similar input-output behavior, e.g., if the threshold deviates only by some small value.
Note that the widespread Euclidean approach based on summing up squared differences does not meet these postulates. For example, in the context of backpropagation, expressions of the type \(d_{E}(\eta,\eta^{\prime})^{2}:=\sum_{i}(t_{i}-t_{i}^{\prime})^{2}\) are used, see, e.g., Bohte et al. (2000). This definition is only well defined if there is a one-to-one correspondence between the spikes in the first and the second spike train. The ansatz may be useful for certain algorithms in a certain setting, but due to the lack of well-definedness it is not suitable for an axiomatic foundation of a generally valid theory. For example, it is not well-defined in the scenario of Postulate 2 if the first spike train is empty and the second is not. Also Postulate 1 is problematic, as the error depends on the number of spikes: a large error can result from a large delay of a single spike or from many small delays. See also Moser and Natschlager (2014); Moser (2015). In addition, the sign of the spike is not taken into account.

Figure 3: SNN as weighted directed acyclic graph.
### Alexiewicz Topology
In order to get an idea of which signals should be considered close in the topology, let us consider the set \(C\) of all sub-threshold input spike trains to a LIF neuron \(\text{LIF}_{\vartheta,\alpha}:\mathbb{S}\rightarrow\mathbb{S}\), i.e., the pre-image of the empty spike train under LIF, \(C:=\text{LIF}_{\vartheta,\alpha}^{(-1)}(\{\emptyset\})\). \(C\) is obviously not a closed set, as, e.g., all spike trains \(\eta_{k}:=(\vartheta-1/k)\delta_{t_{1}}\) are below threshold but their pointwise limit is not. Taking also all limits into account we obtain the closure \(\overline{C}\) of \(C\), which can be characterized in the following way, see A.
**Lemma 1**: _For a leaky integrate-and-fire neuron \(\text{LIF}_{\vartheta,\alpha}:\mathbb{S}\rightarrow\mathbb{S}\) with \(0\leq\alpha<\infty\) we have:_
\[\eta=\sum_{i}a_{i}\delta_{t_{i}}\in\overline{\text{LIF}_{\vartheta,\alpha}^{( -1)}(\{\emptyset\})}\Longleftrightarrow\max_{n}\left|\sum_{i=1}^{n}a_{i}e^{- \alpha(t_{n}-t_{i})}\right|\leq\vartheta. \tag{10}\]
Note that
\[\|([a_{i};t_{i}])_{i}\|_{A,\alpha}:=\max_{n}\left|\sum_{j=1}^{n}a_{j}e^{- \alpha(t_{n}-t_{j})}\right| \tag{11}\]
defines a norm on the vector space \(\mathbb{S}\), which justifies the \(\|.\|\) notation. As immediate consequences from the definition (11) we obtain
\[\|\eta\|_{A,\alpha}=\inf\left\{\vartheta>0:\text{LIF}_{\vartheta,\alpha}(\eta )=\emptyset\right\}, \tag{12}\]
and
\[\forall\alpha,\beta\in(0,\infty),\eta\in\mathbb{S}:\|\text{LIF}_{\vartheta, \alpha}(\eta(\cdot))\|_{A,\alpha}=\|\text{LIF}_{\vartheta,\beta}\left(\eta( \alpha/\beta\,\cdot)\right)\|_{A,\beta}. \tag{13}\]
This way, \(\overline{\text{LIF}_{\vartheta,\alpha}^{(-1)}(\{\emptyset\})}\) turns out to be the ball \(B_{A,\alpha}(\vartheta)\) centered at \(\emptyset\) of radius \(\vartheta\) w.r.t the norm \(\|.\|_{A,\alpha}\). For \(\alpha=0\) the lengths of the time intervals between the events do not have any effect, and we get the norm \(\|(a_{i})_{i}\|_{A,0}=\max_{n}|\sum_{i=1}^{n}a_{i}|\). By looking at \(a_{i}\) as the width of a step of a walk up and down along a line, \(\|.\|_{A,0}\) marks the maximal absolute amplitude of the walk.
Range measures are studied in the field of random walks in terms of an asymptotic distribution resulting from diffusion process Finch (2018); Jain and Orey (1968). A similar concept is given in terms of the diameter \(\|(a_{i})_{i}\|_{D}\) of a walk, i.e., \(\|(a_{i})_{i}\|_{D}:=\max_{1\leq m\leq n\leq N}|\sum_{i=m}^{n}a_{i}|\), which immediately can be generalized to \(\|(a_{i})_{i}\|_{D,\alpha}:=\max_{1\leq m\leq n\leq N}|\sum_{j=m}^{n}a_{j}e^{- \alpha(t_{n}-t_{j})}|\). Note that \(\|(a_{i})_{i}\|_{A,\alpha}\leq\|(a_{i})_{i}\|_{D,\alpha}\leq 2\|(a_{i})_{i}\|_{A,\alpha}\) stating the norm-equivalence of \(\|.\|_{A,\alpha}\) and \(\|.\|_{D,\alpha}\).
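Both norms are straightforward to compute. A NumPy sketch with our own function names, using the recursion \(s_{n}=e^{-\alpha(t_{n}-t_{n-1})}s_{n-1}+a_{n}\) for the decayed partial sums:

```python
import numpy as np

def alexiewicz_norm(a, t, alpha=0.0):
    """||.||_{A,alpha} of (11): max_n |sum_{j<=n} a_j exp(-alpha (t_n - t_j))|."""
    a, t = np.asarray(a, float), np.asarray(t, float)
    s, best = 0.0, 0.0
    for n in range(len(a)):
        if n > 0:
            s *= np.exp(-alpha * (t[n] - t[n - 1]))   # decay the running sum
        s += a[n]
        best = max(best, abs(s))
    return best

def discrepancy_norm(a, t, alpha=0.0):
    """||.||_{D,alpha}: maximum over all index windows {m, ..., n}."""
    return max(alexiewicz_norm(a[m:], t[m:], alpha) for m in range(len(a)))
```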
While the unit ball \(B_{A,0}\) w.r.t \(\|.\|_{A,0}\) can be understood by shearing the hypercube \([-1,1]^{N}\), see B, the geometric characterization of the related unit ball \(B_{D,0}\) of \(\|.\|_{D,0}\) is more tricky, see Moser (2012).
Figure 4: Postulates for an adequate metric \(d(.,.)\) for spike trains.
For an illustration of the corresponding unit balls for two spikes (2D case) see Fig. 5.
These concepts are related to the more general concept of _discrepancy_ measure, see Chazelle (2000); Moser (2011), which goes back to Hermann Weyl Weyl (1916) and is defined on the basis of a family \(\mathcal{F}\) of subsets \(F\) of the universe of discourse, i.e., \(\mu((a_{i})_{i})=\sup_{F\in\mathcal{F}}|\sum_{i\in F}a_{i}|\). For \(\|.\|_{A}\) the family \(\mathcal{F}\) consists of all index intervals \(\{0,\ldots,m\}\), while for \(\|.\|_{D}\) the family \(\mathcal{F}\) consists of all partial intervals \(\{m,\ldots,n\}\), \(m,n\in\{1,\ldots,N\}\). Therefore we refer particularly to \(\|.\|_{A}\), resp. \(\|.\|_{D}\), as example of a _discrepancy measure_.
An analogous concept, \(\|f\|:=\sup_{[a,b]}|\int_{[a,b]}fd\mu|\), can be defined for functions \(f\) and tempered distributions such as Dirac delta impulses by using integrals instead of discrete sums, which is known in the literature as the _Alexiewicz_ semi-norm Alexiewicz (1948). As spike trains live in continuous time, the topology we are looking for is in the end the Alexiewicz topology, which meets the postulates above. However, most of the reasoning and proofs in the context of this paper can be boiled down to discrete sequences, hence utilizing the discrepancy norm.
### Spike Train Quantization
Interestingly, as pointed out by Moser and Lunglmayr (2023), LIF can be understood as a \(\|.\|_{A,\alpha}\)-quantization operator satisfying
\[\|\text{LIF}_{\vartheta,\alpha}(\eta)-\eta\|_{A,\alpha}<\vartheta. \tag{14}\]
Moser and Lunglmayr (2023) provides a proof of (14) for weighted Dirac impulses as input signal to the LIF, see C. Here, we first note that (14) also applies to the discrete version of Algorithm 1; the proof is analogous. Second, we state a generalization to piecewise continuous functions. This way, we show that LIF acts like a signal-to-spike-train quantization. The generalization from Dirac pulses to more general classes of signals is especially important for a unifying theory that combines analog spike sampling and SNN-based spike-based signal processing. An extension to the general class of locally integrable functions is also possible but requires the introduction of the Henstock-Kurzweil integral Kurtz and Swartz (2004), which is postponed to future research.
**Theorem 1**: _(14) also holds for piecewise continuous functions \(\eta\), i.e., functions having at most a finite number of discontinuities._
**Proof.** The idea is to construct a \(\hat{\eta}=\sum_{i}a_{i}\delta_{t_{i}}\in\mathbb{S}\) such that \(\int_{t_{0}}^{t_{i}}\hat{\eta}(t)e^{\alpha t}dt=\int_{t_{0}}^{t_{i}}\eta(t)e^{\alpha t}dt\) for all \(t_{i}\). This can be achieved by utilizing the _mean value theorem for integrals_. First, partition the time domain into intervals \(U_{k}=(u_{k-1},u_{k})\) on which \(\eta\) is continuous. On \(U_{k}\), the mean value theorem guarantees the existence of \(s_{k}\in U_{k}\) such that \(\int_{U_{k}}\eta(t)e^{\alpha t}dt=|U_{k}|\eta(s_{k})e^{\alpha s_{k}}\). Then define the sequence \((t_{i})_{i}\) consisting of all \(s_{k}\) and \(u_{k}\). For \(t_{i}=s_{k}\) define \(a_{i}:=|U_{k}|\eta(s_{k})\), and for \(t_{i}=u_{k}\) define \(a_{i}:=\lim_{\varepsilon\to 0}\int_{u_{k}-\varepsilon}^{u_{k}+\varepsilon}\eta(t)\,dt\). Refine this partition so that also the time points \(t_{i}^{*}\) of the output spikes of \(\text{LIF}_{\vartheta,\alpha}(\eta)=\sum_{i}b_{i}\delta_{t_{i}^{*}}\) are taken into account as border points of the \(U_{i}\) intervals. This way we obtain for all \(t_{i}\):
\[\int_{t_{0}}^{t_{i}}\eta(t)e^{-\alpha(t_{i}-t)}dt=\int_{t_{0}}^{t_{i}}\hat{ \eta}(t)e^{-\alpha(t_{i}-t)}dt. \tag{15}\]
Moreover, since all time points \(t_{i}^{*}\) are contained in \((t_{i})_{i}\), we also have
\[\text{LIF}_{\vartheta,\alpha}(\eta)=\text{LIF}_{\vartheta,\alpha}(\hat{\eta}). \tag{16}\]
Putting (15) and (16) together closes the proof. \(\Box\)
Fig. 6 illustrates the quantization for different values of \(\alpha\) w.r.t \(\|.\|_{A,\alpha}\). Note that for \(\alpha\to\infty\) we obtain the standard \(\|.\|_{\infty}\)-quantization.
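The quantization property (14) can also be verified numerically: under _reset-to-mod_, the residual membrane potential stays strictly below \(\vartheta\) in absolute value at all event times, and this residual is exactly the negative decayed partial sum of \(\text{LIF}_{\vartheta,\alpha}(\eta)-\eta\). A sketch reusing `alexiewicz_norm` from above:

```python
import numpy as np

def lif_spike_train(a, t, theta=1.0, alpha=0.0):
    """reset-to-mod LIF_{theta,alpha} in continuous time; with instantaneous
    charging, output spikes can only occur at the input times t_i."""
    a, t = np.asarray(a, float), np.asarray(t, float)
    b = np.zeros_like(a)
    u, t_prev = 0.0, t[0]
    for i in range(len(a)):
        u = u * np.exp(-alpha * (t[i] - t_prev)) + a[i]   # leak, then charge
        t_prev = t[i]
        if abs(u) >= theta:
            b[i] = np.trunc(u / theta) * theta            # reset-to-mod discharge (4)
            u -= b[i]                        # residual stays in (-theta, theta)
    return b

rng = np.random.default_rng(0)
a = rng.normal(size=100)
t = np.sort(rng.uniform(0.0, 10.0, size=100))
for alpha in (0.0, 1.0, 2.0):
    b = lif_spike_train(a, t, theta=1.0, alpha=alpha)
    assert alexiewicz_norm(b - a, t, alpha) < 1.0         # quantization bound (14)
```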
Like for threshold-based sampling Moser (2017); Moser and Lunglmayr (2019) we also obtain quasi isometry in the discrete case, though the situation and the way of proving it are different. Here, we get it as a byproduct of Theorem 1.
**Corollary 1** (LIF Quasi Isometry): _For every \(\vartheta>0\), the norm \(\|.\|_{A,\alpha}\) establishes quasi isometry for the LIF neuron model, i.e.,_
\[\|\eta_{1}-\eta_{2}\|_{A,\alpha}-2\vartheta\leq\|\text{LIF}_{\vartheta,\alpha} (\eta_{1})-\text{LIF}_{\vartheta,\alpha}(\eta_{2})\|_{A,\alpha}\leq\|\eta_{1} -\eta_{2}\|_{A,\alpha}+2\vartheta \tag{17}\]
_and asymptotic isometry, i.e.,_
\[\lim_{\vartheta\to 0}\|\text{LIF}_{\vartheta,\alpha}(\eta_{1})-\text{LIF}_{\vartheta,\alpha}(\eta_{2})\|_{A,\alpha}=\|\eta_{1}-\eta_{2}\|_{A,\alpha} \tag{18}\]
_for all \(\eta_{1},\eta_{2}\in\mathbb{S}\)._
Theorem 1 together with the quasi isometry property (17) immediately gives an answer to our Postulates 1 and 2 in Section 3 in terms of Corollary 2 and Corollary 3. Because of the discontinuity of thresholding, the best we can expect is an error bound in the order of the threshold \(\vartheta\). The error bound in Corollary 2 caused by a small lag is remarkable as it asymptotically depends only on the threshold and the maximal spike amplitude and not, e.g., on the spike frequency. This property is typical for the Alexiewicz norm and related metrics such as the discrepancy measure and contrasts with the Euclidean geometry and its related concept of measuring correlation, see also Moser et al. (2011). For the proof we refer to E.
**Corollary 2** (Error Bound on Lag): _For \(\eta=\sum_{i}a_{i}\delta_{t_{i}}\in\mathbb{S}\) and sufficiently small \(\Delta t\) we get the error bound:_
\[\left\|\text{LIF}_{\vartheta,\alpha}(\eta(\cdot-\Delta t))-\text{LIF}_{\vartheta,\alpha}(\eta(\cdot))\right\|_{A,\alpha}\leq\max_{i}\left|a_{i}\right|+2\vartheta+\Delta t\,\alpha\left(\|\eta\|_{A,\alpha}+\max_{i}\left|a_{i}\right|\right)+O(\Delta t^{2}).\]
(14) together with the triangle inequality of the norm gives
\[\left\|\text{LIF}_{\vartheta+\varepsilon,\alpha}(\eta)-\text{LIF}_{\vartheta,\alpha}(\eta)\right\|_{A,\alpha}\leq\left\|\text{LIF}_{\vartheta+\varepsilon,\alpha}(\eta)-\eta\right\|_{A,\alpha}+\left\|\eta-\text{LIF}_{\vartheta,\alpha}(\eta)\right\|_{A,\alpha}\leq 2\vartheta+\varepsilon,\]
proving Corollary 3.
**Corollary 3** (Error Bound on Threshold Perturbation): _For \(\varepsilon>0\) we have_
\[\sup_{\eta\in\mathbb{S}}\left\|\text{LIF}_{\vartheta+\varepsilon,\alpha}(\eta )-\text{LIF}_{\vartheta,\alpha}(\eta)\right\|_{A,\alpha}\leq 2\vartheta+\varepsilon. \tag{19}\]
(14) can also be interpreted as a spike train decomposition into a part that consists of spikes with amplitudes that are signed multiples of the threshold and a sub-threshold residuum. It is interesting that the first part can be further decomposed into a sum of unit Alexiewicz norm spike trains. See D for an example.
**Theorem 2** (Spike Train Decomposition): _For any \(\eta\in\mathbb{S}\) there is a \(\psi\in\mathbb{S}\) with spike amplitudes that are integer multiples of the threshold and a below-threshold residuum spike train \(\rho\in\mathbb{S}\) with \(\|\rho\|_{A,\alpha}<\vartheta\), such that \(\eta=\psi+\rho\), where \(\psi=\text{LIF}_{\vartheta,\alpha}(\eta)\). Moreover, \(\psi\) can be represented as sum of \(\|.\|_{A,0}\)-unit spike trains \(\Delta\eta_{r}\), \(r\in\{1,\ldots,a\}\), \(a:=\|\psi\|_{A,0}\), i.e.,_
\[\psi=\sum_{r=1}^{a}\Delta\eta_{r}, \tag{20}\]
_where \(\|\Delta\eta_{r}\|_{A,0}=1\) for all \(r\)._
Proof. Without loss of generality let \(\vartheta=1\), so that \(\psi\) has integer spike amplitudes. Let \(\eta_{0}:=\psi\) be the initial spike train with amplitudes \(a_{i}^{(0)}\in\mathbb{Z}\), and assume that \(\|\eta_{0}\|_{A,0}>1\). For convenience we define a sum over an empty index set to be zero, i.e., \(\sum_{i\in\emptyset}a_{i}=0\). We will recursively define a sequence
\[\eta_{r}=\sum_{i}a_{i}^{(r)}\delta_{t_{i}} \tag{21}\]
of spike trains for \(r=1,\ldots,N^{(0)}\) such that \(\|\eta_{r}-\eta_{r-1}\|_{A,0}=1\) and \(\eta_{N^{(0)}}=\emptyset\), where we denote \(N^{(r)}:=\|\eta_{r}\|_{A,0}\).

Figure 6: Quantization w.r.t \(\|.\|_{A,\alpha}\), \(\alpha\in\{0,1,2,\infty\}\) for spike trains \(\eta=a_{1}\delta_{0}+a_{2}\delta_{1}\) with random \(a_{i}\in[-2,2]\); the corresponding unit balls are marked red; the arrows are connecting points with their quantization points.
If \(N^{(r)}\geq 2\), then according to Fig. 7 we consider the peaks in the walk \(S_{k}=\sum_{i=1}^{k}a_{i}^{(r)}\). Without loss of generality let us assume that the first peak is positive. For this we define recursively the corresponding top and bottom peak indexes \(\overline{m}_{k}^{(r)}\), resp. \(\underline{m}_{k}^{(r)}\), as follows.
\[\overline{m}_{1}^{(r)} := \min\{k>0:\sum_{i=1}^{k}a_{i}^{(r)}=N^{(r)}\},\qquad\overline{m}_{k+1}^{(r)} := \min\{m>\overline{m}_{k}^{(r)}:\sum_{i=1}^{m}a_{i}^{(r)}\geq N^{(r)}-1\}, \tag{22}\]
and, analogously,
\[\underline{m}_{k}^{(r)}:=\min\{m\in J=\{\overline{m}_{k}^{(r)}+1,\ldots, \overline{m}_{k+1}^{(r)}\}:\sum_{i=1}^{m}a_{i}^{(r)}=\min_{j\in J}\sum_{i=1}^ {j}a_{i}^{(r)}\leq-1\}. \tag{23}\]
Based on (22) and (23) we define the spike train \(\Delta\eta_{r+1}:=\sum_{i}d_{i}^{(r+1)}\delta_{t_{i}}\) as follows. Due to our assumption that the first peak is positive, we define (otherwise \(-1\))
\[d_{\overline{m}_{1}^{(r)}}^{(r+1)}:=1. \tag{24}\]
For the subsequent peaks we consider the down and up intervals
\[\underline{J}_{k}:=\left\{\overline{m}_{k}^{(r)}+1,\ldots,\underline{m}_{k}^{ (r)}\right\},\ \overline{J}_{k}:=\left\{\underline{m}_{k}^{(r)}+1,\ldots,\overline{m}_{k+1}^ {(r)}\right\}. \tag{25}\]
We set \(d_{i}^{(r+1)}:=0\) for all \(t_{i}\) except the following cases. There are two cases for down intervals (analogously for up intervals):
* Case A. \(\sum_{i\in\underline{J}_{k}}a_{i}^{(r)}\leq-2\) and there is an index \(i\in\underline{J}_{k}:a_{i}^{(r)}\leq-2\), then we set \[d_{i}^{(r+1)}:=-2.\] (26)
* Case B. \(\sum_{i\in\underline{J}_{k}}a_{i}^{(r)}\leq-2\) and there is no index \(i\in\underline{J}_{k}:a_{i}^{(r)}\leq-2\), then there are at least two indexes \(i_{1},i_{2}\) such that \(a_{i_{1}}^{(r)}+a_{i_{2}}^{(r)}\leq-2\). Thus, we set \[d_{i_{1}}^{(r+1)}:=-1,d_{i_{2}}^{(r+1)}:=-1.\] (27)
Analogously, we define the spikes for the up intervals, i.e., again distinguishing two cases.
* Case A. \(\sum_{i\in\overline{J}_{k}}a_{i}^{(r)}\geq 2\) and there is an index \(i\in\overline{J}_{k}:a_{i}^{(r)}\geq 2\), then we set \[d_{i}^{(r+1)}:=2.\] (28)
Figure 7: Peaks in spike decomposition algorithm. The subtraction of \(\Delta\eta_{r}\) results in shifting the peaks towards the zero line, indicated by the red arrows.
* Case B. \(\sum_{i\in\overline{J}_{k}}a_{i}^{(r)}\geq 2\) and there is no index \(i\in\overline{J}_{k}:a_{i}^{(r)}\geq 2\), then there are at least two indexes \(i_{1},i_{2}\) such that \(a_{i_{1}}^{(r)}+a_{i_{2}}^{(r)}\geq 2\). Thus, we set \[d_{i_{1}}^{(r+1)}:=1,d_{i_{2}}^{(r+1)}:=1.\] (29)
Note that \(\|\Delta\eta_{r+1}\|_{A,0}=1\) and \(\|\eta_{r}-\Delta\eta_{r+1}\|_{A,0}=\|\eta_{r}\|_{A,0}-1\), since all peaks are shifted by \(1\) towards the zero line. Since in each step the \(\|.\|_{A,0}\)-norm is reduced by \(1\), \(\|\psi\|_{A,0}\) many steps are sufficient to represent \(\psi=\sum_{r}\Delta\eta_{r}\). \(\Box\)
## 4 Additive Spike Errors and a Resonance Phenomenon
In this section we first study the effect on the output of a single LIF neuron model when perturbing an input spike train \(\eta\) by adding weighted spikes \(\nu\), as illustrated in Fig. 8.
First of all we consider the special cases of zero and infinite leakage, i.e., \(\alpha=0\), resp., \(\alpha=\infty\), to obtain Lemma 2. For the proof see F.
**Lemma 2** (Additive Error Bound for Integrate-and-Fire): _Let \(\text{LIF}_{\vartheta,\alpha}\) be a LIF neuron model with \(\alpha\in\{0,\infty\}\) and reset-to-mod re-initialization, then:_
\[\forall\vartheta>0,\eta,\nu\in\mathbb{S}:\|\nu\|_{A,\alpha}\leq\vartheta \Rightarrow\|\mbox{LIF}_{\vartheta,\alpha}(\eta+\nu)-\mbox{LIF}_{\vartheta, \alpha}(\eta)\|_{A,\alpha}\leq\vartheta. \tag{30}\]
Based on Theorem 1 on characterizing LIF as signal-to-spike-train quantization and taking into account the special cases of \(\alpha\in\{0,\infty\}\) of Lemma 2 we obtain a Lipschitz-style upper bound in terms of inequality (31) for a _reset-to-mod_ LIF neuron, resp. in terms of (40) for SNNs based on _reset-to-mod_ LIF neurons.
**Theorem 3** (Lipschitz-Style Upper Bound for the LIF Model): _For a reset-to-mod LIF neuron model with \(\vartheta>0\) and \(\alpha\in[0,\infty]\) and for all spike trains \(\nu\in\mathbb{S}\) there holds the following inequality_
\[\sup_{\eta\in\mathbb{S}}\left\|\mbox{LIF}_{\vartheta,\alpha}(\eta+\nu)-\mbox{ LIF}_{\vartheta,\alpha}(\eta)\right\|_{A,\alpha}\leq\gamma(\alpha)\left\lceil \frac{1}{\vartheta}\|\nu\|_{A,\alpha}\right\rceil\vartheta, \tag{31}\]
_where \(\gamma(0)=\gamma(\infty)=1\) and \(\gamma(\alpha)\in[2,3]\) for \(\alpha\in(0,\infty)\)._
Proof. First of all note that
\[\xi(\vartheta,\alpha):=\sup_{\eta,\nu\in\mathbb{S}}\frac{\left\|\text{LIF}_{\vartheta,\alpha}(\eta+\nu)-\text{LIF}_{\vartheta,\alpha}(\eta)\right\|_{A,\alpha}}{\left\lceil\frac{1}{\vartheta}\|\nu\|_{A,\alpha}\right\rceil\vartheta} \tag{32}\]
is independent from \(\vartheta\) although \(\vartheta\) appears in (32), as shown in the following. Indeed, for given threshold \(\vartheta>0\) let \(\eta_{i}^{(\vartheta)}\) and \(\nu_{i}^{(\vartheta)}\) be sequences for which the fraction in (32) converges to \(\xi(\vartheta,\alpha)\), then \(\widetilde{\eta}_{i}:=\eta_{i}^{(\vartheta)}/\vartheta\) and \(\widetilde{\nu}_{i}:=\nu_{i}^{(\vartheta)}/\vartheta\) yield
\[\xi(\vartheta,\alpha)=\sup_{i}\frac{\vartheta\left\|\text{LIF}_{1,\alpha}(\widetilde{\eta}_{i}+\widetilde{\nu}_{i})-\text{LIF}_{1,\alpha}(\widetilde{\eta}_{i})\right\|_{A,\alpha}}{\left\lceil\frac{1}{\vartheta}\|\nu_{i}^{(\vartheta)}\|_{A,\alpha}\right\rceil\vartheta}=\xi(1,\alpha). \tag{33}\]
Now, define \(\gamma(\alpha):=\xi(1,\alpha)\) and use (14):
\[\gamma(\alpha)=\sup_{\eta,\nu\in\mathbb{S}}\frac{\left\|\text{LIF}_{1,\alpha}(\eta+\nu)-(\eta+\nu)+\nu+\eta-\text{LIF}_{1,\alpha}(\eta)\right\|_{A,\alpha}}{\left\lceil\|\nu\|_{A,\alpha}\right\rceil}\leq\sup_{\eta,\nu\in\mathbb{S}}\frac{2+\|\nu\|_{A,\alpha}}{\left\lceil\|\nu\|_{A,\alpha}\right\rceil}\leq 3<\infty. \tag{34}\]
Now, consider \(\alpha\in(0,\infty)\) and the following example.
**Example 1**: _Let \(\eta=\sum_{k=1}^{3}a_{k}\delta_{t_{k}}\) for \(t_{k}=k\varepsilon\), \(\varepsilon>0\) and \((a_{1},a_{2},a_{3})=(-\frac{3}{2},1,\frac{3}{2})\), and \(\nu=\sum_{k=1}^{3}b_{k}\delta_{t_{k}}\) with \((b_{1},b_{2},b_{3})=(1,-1,1)\) satisfying \(\|\nu\|_{A,\alpha}=1\) for all \(\alpha\in[0,\infty]\)._
Figure 8: Scheme for additive signal error propagation through a single LIF model.
For \(\alpha\in(0,\infty)\) we obtain for this example
\[\text{LIF}_{1,\alpha}(\eta) = -1\,\delta_{t_{1}}+0\,\delta_{t_{2}}+1\,\delta_{t_{3}},\] \[\text{LIF}_{1,\alpha}(\eta+\nu) = 0\,\delta_{t_{1}}+0\,\delta_{t_{2}}+2\,\delta_{t_{3}}. \tag{35}\]
Therefore we get
\[\rho(\varepsilon,\alpha):=\|\text{LIF}_{1,\alpha}(\eta+\nu)-\text {LIF}_{1,\alpha}(\eta)\|_{A,\alpha} = \|1\,\delta_{t_{1}}+0\,\delta_{t_{2}}+1\,\delta_{t_{3}}\|_{A,\alpha} \tag{36}\] \[= \left|1+e^{-2\varepsilon\alpha}\right|.\]
From \(\lim_{\varepsilon\to 0}\rho(\varepsilon,\alpha)=2\) for all \(\alpha\in(0,\infty)\) we conclude that \(\gamma(\alpha)\geq 2\) for all \(\alpha\in(0,\infty)\).
Now, let us check the special case \(\alpha=0\). Without loss of generality we may assume that \(\vartheta=1\). For this case we apply the spike train decomposition of Theorem 2, which allows us to represent \(\nu=\sum_{k=1}^{a}\nu_{k}+\widetilde{\nu}\), where \(a:=\lfloor\|\nu\|_{A,0}\rfloor\), \(\|\nu_{k}\|_{A,0}=1\) and \(\|\widetilde{\nu}\|_{A,0}<1\). Then, taking into account Lemma 2 and applying (14) on the telescope sum
\[\|\text{LIF}_{1,0}(\eta+\nu)-\text{LIF}_{1,0}(\eta)\|_{A,0} \tag{37}\] \[= \|\text{LIF}_{1,0}(\eta+\nu_{1}+\ldots+\nu_{a}+\widetilde{\nu}) -\text{LIF}_{1,0}(\eta+\nu_{2}+\ldots+\nu_{a}+\widetilde{\nu})\] \[+\text{LIF}_{1,0}(\eta+\nu_{2}+\ldots+\nu_{a}+\widetilde{\nu})- \text{LIF}_{1,0}(\eta+\nu_{3}+\ldots+\nu_{a}+\widetilde{\nu})\] \[\ldots\] \[+\text{LIF}_{1,0}(\eta+\nu_{a}+\widetilde{\nu})-\text{LIF}_{1,0}( \eta+\widetilde{\nu})\] \[+\text{LIF}_{1,0}(\eta+\widetilde{\nu})-\text{LIF}_{1,0}(\eta)\, \|_{A,0}\] \[\leq \lceil\|\nu\|_{A,0}\rceil\]
we obtain \(\gamma(0)\leq 1\). Since Example 1 gives
\[\text{LIF}_{1,0}(\eta) = -1\,\delta_{t_{1}}+0\,\delta_{t_{2}}+2\,\delta_{t_{3}},\] \[\text{LIF}_{1,0}(\eta+\nu) = 0\,\delta_{t_{1}}+0\,\delta_{t_{2}}+2\,\delta_{t_{3}}, \tag{38}\]
hence, \(\|\text{LIF}_{1,0}(\eta+\nu)-\text{LIF}_{1,0}(\eta)\|_{A,0}=1\), we finally get \(\gamma(0)=1\). The same way of reasoning on the telescope sum applies to the case \(\alpha=\infty\), giving \(\gamma(\infty)\leq 1\). Checking again Example 1, we get
\[\text{LIF}_{1,\infty}(\eta) = -1\,\delta_{t_{1}}+1\,\delta_{t_{2}}+1\,\delta_{t_{3}},\] \[\text{LIF}_{1,\infty}(\eta+\nu) = 0\,\delta_{t_{1}}+0\,\delta_{t_{2}}+2\,\delta_{t_{3}}, \tag{39}\]
hence, \(\|\text{LIF}_{1,\infty}(\eta+\nu)-\text{LIF}_{1,\infty}(\eta)\|_{A,0}=\|1\, \delta_{t_{1}}-1\,\delta_{t_{2}}+1\,\delta_{t_{3}}\|_{A,0}=1\), showing that \(\gamma(\infty)=1\).
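Example 1 can be reproduced with the event-driven sketch from Section 3.2, reusing `lif_spike_train` and `alexiewicz_norm`; a large value of \(\alpha\) approximates the memoryless case \(\alpha=\infty\):

```python
import numpy as np

eps = 0.01
t = np.array([eps, 2 * eps, 3 * eps])
eta = np.array([-1.5, 1.0, 1.5])            # Example 1
nu = np.array([1.0, -1.0, 1.0])             # ||nu||_{A,alpha} = 1
for alpha in (0.0, 1.0, 1e6):               # 1e6 stands in for alpha = infinity
    d = (lif_spike_train(eta + nu, t, 1.0, alpha)
         - lif_spike_train(eta, t, 1.0, alpha))
    print(alpha, d, alexiewicz_norm(d, t, alpha))
# alpha in {0, infinity} gives norm 1, while small alpha > 0
# gives 1 + e^{-2 eps alpha}, i.e., close to 2, cf. (36)
```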
Theorem 3 together with the triangle inequality of the norm \(\|.\|_{A,\alpha}\) immediately yields a global upper bound on the norm difference of \(\text{LIF}(\eta)\) and its perturbed version \(\text{LIF}(\eta+\nu)\) for SNNs.
**Theorem 4** (Global Lipschitz-Style Bound for SNNs): _Let the spiking neural network SNN \(:\mathbb{S}^{N_{0}}\rightarrow\mathbb{S}^{(N_{L})}\) with reset-to-mod LIF neurons LIF\({}_{\vartheta,\alpha}\), \(\vartheta=1\), be given by \([W^{(1)},\ldots,W^{(N_{L})}]\) according to (9), and let \((\nu_{1},\ldots,\nu_{N_{0}})\) be additive error spike trains in the corresponding input spike trains \((\eta_{1},\ldots,\eta_{N_{0}})\), then for all output channels \(\eta_{j}^{(N_{L})}\), \(j\in\{1,\ldots,N_{L}\}\), we obtain the following error bound_
\[\sup_{\eta_{i}}\left\|\text{SNN}\left((\eta_{i}+\nu_{i})_{i}\right)-\text{SNN}\left((\eta_{i})_{i}\right)\right\|_{A,\alpha}\leq_{j}\Gamma_{\alpha}\left(\widetilde{W}^{(N_{L})}\,\Gamma_{\alpha}\left(\widetilde{W}^{(N_{L}-1)}\cdots\Gamma_{\alpha}\left(\widetilde{W}^{(1)}\,\Gamma_{\alpha}\left(\nu_{A,\alpha}\right)\right)\right)\right), \tag{40}\]
_where \(\|(\eta_{i})_{i}\|_{A,\alpha}:=\left(\|\eta_{i}\|_{A,\alpha}\right)_{i}\) denotes the channel-wise norm, \(\leq_{j}\) refers to the \(j\)-th output channel on the left and the right hand side of the inequality, \(\widetilde{W}^{(k+1)}:=\left(\left|w_{i_{k+1},i_{k}}^{(k+1)}\right|\right)_{i_{k+1},i_{k}}\), \(\nu_{A,\alpha}:=\left(\|\nu_{1}\|_{A,\alpha},\ldots,\|\nu_{N_{0}}\|_{A,\alpha}\right)\), \(\Gamma_{\alpha}(x):=\lceil\gamma(\alpha)\,x\rceil\), and the rounding-up function \(\lceil.\rceil\) is applied coordinate-wise._
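The right-hand side of (40) is a simple layer-wise recursion over the vector of channel norms; below is a sketch with \(\gamma(\alpha)\) set to its worst-case value \(3\) for \(\alpha\in(0,\infty)\) (use \(\gamma=1\) for \(\alpha\in\{0,\infty\}\)):

```python
import numpy as np

def snn_error_bound(weights, nu_norms, gamma=3.0):
    """Evaluates the right-hand side of (40) layer by layer for theta = 1."""
    x = np.ceil(gamma * np.asarray(nu_norms, dtype=float))  # Gamma_alpha(nu_{A,alpha})
    for W in weights:                                       # W^(1), ..., W^(N_L)
        x = np.ceil(gamma * (np.abs(np.asarray(W)) @ x))    # Gamma_alpha(W~ x)
    return x                                                # one bound per output channel
```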
## 5 Evaluation
In this section we look at numerical examples to demonstrate the main theoretical results of our paper, that is, above all, (14) on spike train quantization and its consequences in terms of quasi isometry (Corollary 1), the error bounds w.r.t. time delay (Corollary 2), and the global Lipschitz-style upper bound for additive spike trains due to Theorem 3 for the LIF model, resp. Theorem 4 for LIF-based feedforward SNNs. See [https://github.com/LunglmayrMoser/AlexSNN](https://github.com/LunglmayrMoser/AlexSNN) for Python and Mathematica code.
All the theoretical findings of this paper are based on the choice of _reset-to-mod_ as re-initialization mode. Therefore, in the subsequent evaluations we also take the other reset modes into account to get an overview of the differences in behavior. It is also instructive to look at the effect of alternative distance measures that are not equivalent to the Alexiewicz norm. We restrict this comparison to the Euclidean-based norm (41). Other metrics for spike trains such as Satuvuori and Kreuz (2018), Sihn and Kim (2019), Victor (2005) are not considered because they are motivated by purposes other than those of our approach, which aims at characterizing the topology of the vector space \(\mathbb{S}\) that meets Postulates 1 and 2 of Section 3. Therefore, a detailed discussion of potential implications of the Alexiewicz topology (and its leaky variants) in the context of other proposed metrics is postponed to future study.
### Spike Train Quantization due to (14)
Fig. 9 displays the quantization error in the Alexiewicz topology, i.e., the \(\left\|\mathrm{LIF}_{\vartheta,\alpha}(\eta)-\eta\right\|_{A,\alpha}\)-norm, for the different reset variants: (a) _reset-to-zero_, (b) _reset-by-subtraction_, and, ours, (c) _reset-to-mod_. For a large leakage parameter \(\alpha\) all three variants tend to the same error behavior. As expected according to Theorem 1, only for the _reset-to-mod_ variant can we guarantee the bound of Theorem 1. In contrast, Fig. 10 illustrates the effect of choosing the Euclidean topology as commonly used in the context of SNNs, i.e.,
\[\left\|\sum_{i=1}^{N}a_{i}\delta_{t_{i}}\right\|_{2,\alpha}:=\sqrt{\sum_{k=1}^{N} \left(\sum_{i=1}^{k}a_{i}e^{-\alpha(t_{k}-t_{i})}\right)^{2}}. \tag{41}\]
In contrast to the Alexiewicz topology with the _reset-to-mod_ re-initialization of the LIF neuron, there is no guarantee of a global upper bound for the quantization error in the Euclidean metric, see Fig. 11.
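The experiments behind Figs. 9-11 can be reproduced along the following lines. The sketch below is a minimal discrete-event implementation, assuming the firing condition \(|p|\geq\vartheta\) with instantaneous multiple threshold crossings collapsed into one output amplitude \(n\vartheta\) (_reset-to-mod_); it allows checking the quantization bound (14) empirically.

```python
import numpy as np

def alexiewicz_norm(t, a, alpha=0.0):
    """||sum_i a_i delta_{t_i}||_{A,alpha}: max over leaky partial sums."""
    acc, m = 0.0, 0.0
    for k in range(len(a)):
        if k > 0:
            acc *= np.exp(-alpha * (t[k] - t[k - 1]))
        acc += a[k]
        m = max(m, abs(acc))
    return m

def lif_reset_to_mod(t, a, theta=1.0, alpha=0.0):
    """reset-to-mod LIF: output amplitudes are multiples of theta."""
    out, p = np.zeros(len(a)), 0.0
    for k in range(len(a)):
        if k > 0:
            p *= np.exp(-alpha * (t[k] - t[k - 1]))
        p += a[k]
        n = np.trunc(p / theta)   # number of threshold crossings
        out[k] = n * theta
        p -= n * theta            # keep only the residuum ("mod")
    return out

t = np.arange(50.0)                          # equidistant grid as in Fig. 9
a = np.random.uniform(-2.0, 2.0, size=50)
err = alexiewicz_norm(t, lif_reset_to_mod(t, a) - a)
assert err < 1.0                             # quantization bound (14)
```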
### Error Bounds regarding Postulates 1 and 2 and Quasi Isometry
Postulates 1 and 2 are covered by the inequalities (19), resp. (19), regarding time delay, resp. threshold deviation. Fig. 12 shows an example including the theoretical upper bound proven for the _reset-to-mod_ variant, together with the norm errors resulting from our three re-initialization variants and different settings of the leakage parameter. For time delays the theoretical upper bound is guaranteed for sufficiently small delays (see Appendix E). In this example the other re-initialization variants, _reset-by-subtraction_ and _reset-to-zero_, show smaller errors compared to _reset-to-mod_. For threshold deviations it is the other way round: as guaranteed by (19), the _reset-to-mod_-related dashed red line is strictly below the bound (black line) for all \(\Delta\vartheta\).
As with spike train quantization, the analysis of quasi isometry (see Fig. 13) shows significant differences regarding the choice of the re-initialization mode.
Figure 10: Like in Fig. 9 with \(L_{2}\)-based norm (41).
Figure 9: Distribution of the quantization error \(\left\|\mathrm{LIF}_{\vartheta,\alpha}(\eta)-\eta\right\|_{A,\alpha}\) for \(\vartheta=1\), \(\alpha\in\{0.01,0.1,1,10,100\}\) and the reset variants: (a) _reset-to-zero_, (b) _reset-by-subtraction_, and (c) _reset-to-mod_ (ours); the spike trains with \(50\) spikes (on an equidistant grid) are generated by uniformly distributed spike amplitudes in the range \([-2,2]\); for each variant \(100\) runs are performed.
### Lipschitz-Style Upper Bound for LIF and SNNs
First we look at a single LIF neuron. Theorem 3 actually addresses two aspects: first, the global Lipschitz-style bound, and second, the observation that the upper bound constant \(\gamma(\alpha)\) behaves differently for \(\alpha\in\{0,\infty\}\) and \(\alpha\in(0,\infty)\). In Fig. 14 we evaluate this effect for different leakage parameters \(\alpha\) and scaling factors \(\lambda\) of the additive spike train \(\nu\). As these \((\alpha,\lambda)\)-plots show, the amplification factor can be quite discontinuous and jagged. Different interfering \(\nu\) signals can cause quite different shapes. It is all the more remarkable that the amplification factor is globally bounded for all input spike trains irrespective of how many spikes they contain, and that for \(\alpha=0\) (or large \(\alpha\)) we have the tight bound \(\gamma(\alpha)=1\). For \(\alpha\in(0,\infty)\) we find examples converging to \(\gamma(\alpha)=2\) as proven in the theorem. It is a conjecture that in fact \(\gamma(\alpha)=2\), though in the proof we only have evidence that \(\gamma(\alpha)\in[2,3]\). It remains an open question to analyze this resonance-type phenomenon in more detail. In contrast, the different shapes in the 3D plots comparing the re-initialization mode _reset-to-mod_ (first row of 3D plots) with that of _reset-to-zero_ can be explained more easily: a large \(\alpha\) reduces the dependence on spikes in the past, hence approximating the behavior of _reset-to-zero_.
The \((\alpha,\lambda)\)-plots of Fig. 14 look similar for SNNs. Due to Theorem 4 they are globally bounded for all input spike trains. However, the shape can be quite jagged and discontinuous, as illustrated by Fig. 15, which shows an example of a \(3\)-layered SNN with
\[W^{(1)}=\left(\begin{array}{cc}1&1\\ 1&2\end{array}\right),\,W^{(2)}=\left(\begin{array}{cc}0.5&0\\ 0.5&0.5\\ 0&-0.5\end{array}\right),\,W^{(3)}=\left(\begin{array}{cc}1&1&1\end{array} \right). \tag{42}\]
Figure 11: As in Figs. 9 and 10 for \(\alpha=1\), but with different numbers of spikes, \(N\in\{100,\ldots,500\}\). While the quantization error in the Euclidean norm (41) increases with \(N\) (right), due to Theorem 1 it remains strictly upper bounded by the threshold in the Alexiewicz topology (left). Note the concentration of measure effect in the Alexiewicz topology, see Vershynin.
Figure 12: Evaluation of the effect of time delay \(\Delta t\) (second row) and threshold deviations \(\Delta\vartheta\) (third row) for a single LIF neuron for different \(\alpha\in\{0.2,0.5,0.8\}\). The plots show the left side of the inequalities (19), resp. (19), for the three reset variants together with the bound (black line) given by the right side of the corresponding inequalities.
Figure 14: \((\alpha,\lambda)\)-plot evaluations of the left-hand side of inequality (31) for four variations of Example 1, with \(\alpha\in[0,10]\) on the x-axis and scaling factor \(\lambda\in[0,1]\) on the y-axis.
Figure 13: Evaluation of quasi isometry. Second and third row: evaluation of the \(\|.\|_{A,\alpha}\)-norm of \(\eta_{1}-\eta_{2}\) after applying LIF\({}_{\vartheta,\alpha}\) for constant \(\alpha=4\) (second row) and constant \(\vartheta=0.3\) (third row). Only _reset-to-mod_ meets the conditions of quasi isometry due to (17).
The comparison of the two examples in Fig. 15 shows the sensitivity with respect to timing. After shifting the disturbing red spike to the green position, the resulting \((\alpha,\lambda)\)-plot breaks the symmetry, causing a differently shaped characteristic. A detailed analysis of these effects is postponed to future research.
## 6 Outlook and Conclusion
Our approach starts with the well-known observation that bio-inspired signal processing leads to a paradigm shift in contrast to the well-established technique of clock-based sampling and processing. Driven by the hypothesis that this paradigm shift must also manifest itself in its mathematical foundation, we started our analysis in terms of a top-down theory development by first searching for informative postulates. For the LIF neuron model (and SNNs based thereupon) our analysis shows that there is an underlying non-Euclidean geometry that governs its input-output behavior. As the central result of this paper, it turns out that this mapping can be fully characterized as signal-to-spike-train quantization in the Alexiewicz norm, resp. its adaptation for a positive leakage parameter. While we gave a proof of this result for spike trains represented by a sequence of weighted Dirac impulses with arbitrary inter-spike intervals, we indicated in the Appendix that this quantization principle also holds for a wider class of signals. Going beyond that, our conjecture is that the quantization error inequality will hold for all signals for which the formula is well-defined, but that, to achieve this, one will have to resort to an alternative concept of integration, namely the Henstock-Kurzweil integral, which is related to the Alexiewicz topology. This remains to be worked out in follow-up research. So does the analysis of the resonance-like phenomenon in the context of the Lipschitz-style error bound. Another research direction is to explore the potential of our Alexiewicz norm-based approach for information coding and, more generally, for establishing a unified theory that incorporates low-level signal acquisition through event-based sampling and signal processing via feedback loops and learning strategies for high-level problem solving. Another thread running through the paper is the question of the choice and impact of the re-initialization variant. The quantization theorem is stated for the variant which we coined _reset-to-mod_, which results from applying _reset-by-subtraction_ instantaneously in the case of spike amplitudes that exceed the threshold by a multiple. The resulting properties, such as quasi isometry or the error bound on time delay, might be arguments for _reset-to-mod_, but in the end this study can only be seen as a starting point towards a more comprehensive theoretical foundation of bio-inspired signal processing that takes its topological peculiarities into account.
Figure 15: \((\alpha,\lambda)\) evaluation of the left-hand side of the inequality (40) for the \(2\)-\(3\)-\(1\) SNN given by the weight matrices (42) and the input spike trains \(\eta_{i}\). The outer right graph depicts the right-hand side of the inequality (40), where the cases \(\alpha=0\) and \(\alpha>0\) are distinguished. The second row shows the \((\alpha,\lambda)\) evaluation for the red additional spike, while the third row shows the evaluation with the green additional spike. That is, in the second row we disturb \(\eta_{1}\) by the red spike, and accordingly, in the third row we add the green spike; the resulting 3D plots show the measured error in the Alexiewicz norm, as in Fig. 14, for the three different re-initialization modes as indicated.
## Acknowledgements
This work was supported (1) by the 'University SAL Labs' initiative of Silicon Austria Labs (SAL) and its Austrian partner universities for applied fundamental research for electronic based systems, (2) by the Austrian ministries BMK, BMDW, and the State of Upper Austria in the frame of SCCH, part of the COMET Programme managed by FFG, and (3) by the _NeuroSoC_ project funded under the Horizon Europe Grant Agreement number 101070634.
## Appendix A Proof of Theorem 1
We show the proof for \(\alpha=0\). For \(\alpha>0\) the argumentation is analogous.
From left to right. Consider a sequence \(\eta_{n}\) of sub-threshold spike trains \(\eta_{n}=\sum_{i_{n}}a_{i_{n}}^{(n)}\delta_{t_{i_{n}}}\); then an integrate-and-fire neuron never reaches the threshold \(\vartheta>0\), i.e., for all \(n\) and \(m\) we have \(|\sum_{i_{n}=0}^{m}a_{i_{n}}^{(n)}|<\vartheta\). Consequently, taking the limit w.r.t. \(n\) we obtain the right-hand side of Equ. (10).
From right to left. Now, consider \(\eta=\sum_{i}a_{i}\delta_{t_{i}}\) satisfying the inequality of the right-hand side of Equ. (10). If there is no spike in the output, the spike train is sub-threshold, i.e., \(\eta\in C\), hence \(\eta\in\overline{C}\). Assume that there is at least one spike in the output. Without loss of generality, let us assume that the first spike is positive. Then we define \(i_{0}:=0\) and \(i_{k}\) recursively by \(i_{k+1}:=\min\{j:\sum_{i=i_{k}+1}^{j}a_{i}=(-1)^{k}2\vartheta\}\). Note that \(a_{i}^{(\varepsilon)}:=a_{i}-(-1)^{k}\varepsilon a_{i}/\vartheta\) yields a spike train \(\eta^{(\varepsilon)}\in C\) that converges to \(\eta\).
## Appendix B Unit Ball of \(\|.\|_{A,0}\)
In this section we characterize the unit ball \(B_{A}\) of \(\|.\|_{A}\) as a sheared transform of the hypercube \([-1,1]^{N}\), i.e., \(B_{A}=\{x\in\mathbb{R}^{N}:\,\|x\|_{A}\leq 1\}=\{x=Ty:\,y\in[-1,1]^{N}\}\), where
\[T=\left(\begin{array}{ccccc}1&0&\cdots&\cdots&0\\ -1&1&0&\cdots&0\\ \vdots&\cdots&\ddots&\ddots&\vdots\\ 0&\cdots&\cdots&-1&1\end{array}\right). \tag{43}\]
Proof. We are interested in characterizing all \(x=(x_{1},\ldots,x_{N})\in\mathbb{R}^{N}\) such that \(\|x\|_{A}\leq 1\), i.e., \(y_{n}:=\sum_{i=1}^{n}x_{i}\in[-1,1]\) for all \(n\in\{1,\ldots,N\}\). Expressing \(x_{i}\) in terms of \(y_{i}\) means \(y_{1}=x_{1}\), \(y_{2}-y_{1}=x_{2}\), \(\ldots\), \(y_{N}-y_{N-1}=x_{N}\), that is, \(x=Ty\) due to (43).
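This characterization is easy to verify numerically; a minimal sketch:

```python
import numpy as np

N = 6
T = np.eye(N) - np.diag(np.ones(N - 1), k=-1)   # the shear matrix (43)
y = np.random.uniform(-1.0, 1.0, size=N)        # a point of the hypercube
x = T @ y
# the partial sums of x = Ty telescope back to y, so ||x||_A = max |y_n| <= 1
assert np.max(np.abs(np.cumsum(x))) <= 1.0 + 1e-12
```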
## Appendix C Proof for Spike Train Quantization for Dirac Impulses
We recall the proof from Moser and Lunglmayr [2023].
**Theorem 5** (reset-to-mod LIF Neuron as \(\|.\|_{A}\)-Quantization): _Given a LIF neuron model with reset-to-mod, the LIF parameters \(\vartheta>0\) and \(\alpha\in[0,\infty]\) and the spike train \(\eta\in\mathbb{S}\) with amplitudes \(a_{i}\in\mathbb{R}\). Then, \(\text{LIF}_{\vartheta,\alpha}(\eta)\) is a \(\vartheta\)-quantization of \(\eta\), i.e., the resulting spike amplitudes are multiples of \(\vartheta\), where the quantization error is bounded by (14), hence \(\text{LIF}_{\vartheta,\alpha}(\text{LIF}_{\vartheta,\alpha}(\eta)-\eta)=\emptyset\) and \(\text{LIF}_{\vartheta,\alpha}(\text{LIF}_{\vartheta,\alpha}(\eta))=\text{LIF}_ {\vartheta,\alpha}(\eta)\)._
Proof. First of all, we take up an idea from Moser [2017] and introduce the following operation \(\oplus\), which is associative and can be handled like the usual addition when adjacent elements \(a_{i}\) from a spike train \(\eta=\sum_{i}a_{i}\delta_{t_{i}}\) are aggregated:
\[a_{i}\oplus a_{i+1}:=e^{-\alpha(t_{i+1}-t_{i})}a_{i}+a_{i+1}. \tag{44}\]
This way we get a simpler notation when aggregating convolutions, e.g.,
\[a_{i}\oplus\ldots\oplus a_{j}=\sum_{k=i}^{j}e^{-\alpha(t_{j}-t_{k})}a_{k}.\]
Figure 16: Illustration of Equation (45). The red arrows indicate _reset-to-mod_.
For the discrete version due to Algorithm 1 we re-define \(a_{i_{k}}\oplus a_{i_{k+1}}:=\beta^{(i_{k+1}-i_{k})}a_{i_{k}}+a_{i_{k+1}}\), if \(i_{k}\) and \(i_{k+1}\) refer to adjacent spikes at times \(i_{k}\), resp. \(i_{k+1}\).
After fixing notation, let us consider a spike train \(\eta=\sum_{i}a_{i}\delta_{t_{i}}\). Without loss of generality we may assume that \(\vartheta=1\). We have to show that \(\|\text{LIF}_{1,\alpha}(\eta)-\eta\|_{A,\alpha}<1\), which is equivalent to the discrete condition \(\max_{n}|\sum_{i=1}^{n}\hat{a}_{i}|<1\), where \(\text{LIF}_{1,\alpha}(\eta)-\eta=\sum_{i}\hat{a}_{i}\delta_{t_{i}}\). The proof is based on induction and leads the problem back to standard quantization by truncation.
Suppose that at time \(t_{i_{k-1}}\), after re-initialization by _reset-by-subtraction_, we get the residuum \(\Delta_{i_{k-1}}\) as membrane potential, which is the starting point for the integration after \(t_{i_{k-1}}\). Then, as illustrated in Fig. 16, the residuum \(\Delta_{i_{k}}\) at the next triggering event \(t_{i_{k}}\) is obtained by the equation
\[\Delta_{i_{k}}=\Delta_{i_{k-1}}\oplus a_{i_{k-1}+1}\oplus\ldots\oplus a_{i_{k }}-[\Delta_{i_{k-1}}\oplus\ldots\oplus a_{i_{k}}]. \tag{45}\]
On the other hand, note that for the differences \(\hat{a}_{i}\) we have
\[\hat{a}_{i_{k+1}}=\hat{a}_{i_{k}}\oplus a_{i_{k}+1}\oplus\ldots\oplus a_{i_{k+1}-1}\oplus\left(a_{i_{k+1}}-[\Delta_{i_{k}}\oplus a_{i_{k}+1}\oplus\ldots\oplus a_{i_{k+1}}]\right). \tag{46}\]
Note that \(\Delta_{i_{0}}=0\). For induction, we assume that up to index \(k\) we have
\[\hat{a}_{i_{k}}=\Delta_{i_{k}}. \tag{47}\]
Now, using (47), Equation (46) gives
\[\hat{a}_{i_{k+1}} = \Delta_{i_{k}}\oplus a_{i_{k}+1}\oplus\ldots\oplus a_{i_{k+1}-1}\oplus\left(a_{i_{k+1}}-[\Delta_{i_{k}}\oplus a_{i_{k}+1}\oplus\ldots\oplus a_{i_{k+1}}]\right)\] \[= \Delta_{i_{k}}\oplus a_{i_{k}+1}\oplus\ldots\oplus a_{i_{k+1}-1}\oplus a_{i_{k+1}}-[\Delta_{i_{k}}\oplus a_{i_{k}+1}\oplus\ldots\oplus a_{i_{k+1}}],\]
which proves (47) for all \(k\), showing that the differences can be expressed as differences of standard quantization by truncation, hence proving the claim of Theorem 1 for all \(\alpha\in[0,\infty]\).
## Appendix D Example for Spike Train Decomposition
Algorithm 2 summarizes the decomposition approach used in the proof of Theorem 2, and Fig. 17 demonstrates an example.
**Step 0**: Initialization: \(\eta_{0}:=\eta\), \(r=0\);
**Step 1**: Up and Down Intervals: Partition the time domain into up and down intervals \(\overline{J}_{k}\), resp. \(\underline{J}_{k}\), due to Equ. (25), based on the top and bottom peak positions \(\overline{m}_{k}\), resp. \(\underline{m}_{k}\), in the resulting walk according to Equ. (22), resp. (23).
**Step 2**: Unit Discrepancy Delta: Define \(\Delta\eta_{r+1}\) according to (24), (26), (27), (28) and (29).
**Step 3**: Subtraction: \(\eta_{r+1}:=\eta_{r}-\Delta\eta_{r+1}\);
**Step 4**: Repeat steps **1, 2** and **3** until \(r=\|\eta\|_{A,0}\) to obtain \(\eta=\sum_{k=1}^{r}\Delta\eta_{k}\).
**Algorithm 2** Spike Train Decomposition
## Appendix E Error Bound on Lag, Corollary 2
The quasi-isometry property (17) yields
\[\left\|\text{LIF}_{\vartheta,\alpha}(\eta(\cdot-\Delta t))-\text{ LIF}_{\vartheta,\alpha}(\eta)\right\|_{A,\alpha} \leq \left\|\eta(\cdot-\Delta t)-\eta\right\|_{A,\alpha}+2\vartheta. \tag{48}\]
Because of \(e^{\alpha\Delta t}=1+\alpha\Delta t+O(\Delta t^{2})\) for \(\Delta t\approx 0\), we get
\[\left\|\eta(\cdot-\Delta t)-\eta\right\|_{A,\alpha} = \max_{n}\left|\sum_{j=1}^{n-1}\left(a_{j}e^{-\alpha(t_{n}-t_{j})} -a_{j}e^{-\alpha(t_{n}-t_{j}-\Delta t)}\right)+a_{n}\right| \tag{49}\] \[= \max_{n}\left|\sum_{j=1}^{n-1}a_{j}e^{-\alpha(t_{n}-t_{j})}\left( 1-e^{\alpha\Delta t}\right)+a_{n}\right|\] \[= \max_{n}\left|(-\alpha\Delta t)\sum_{j=1}^{n-1}a_{j}e^{-\alpha(t_ {n}-t_{j})}+a_{n}\right|+O(\|\eta\|_{A,\alpha}\Delta t)\] \[\leq \alpha\left(\|\eta\|_{A,\alpha}+\max_{n}|a_{n}|\right)\Delta t+ \max_{n}|a_{n}|,\]
which together with (48) proves (19).
## Appendix F Proof of Lemma 2
First, let us check the case \(\alpha=0\). Without loss of generality we may assume that \(\vartheta=1\). Given spike trains \(\eta,\nu\in\mathbb{S}\), \(\eta=\sum_{i}a_{i}\delta_{t_{i}}\) and \(\nu=\sum_{i}b_{i}\delta_{t_{i}}\), suppose that \(\|\nu\|_{A,0}\leq 1\). Denote by \(p_{\eta}(t_{i}^{-})\) the membrane's potential in the moment before triggering a spike w.r.t. the input spike train \(\eta\).
For a proof by contradiction, let us assume that there are three subsequent spikes of the same polarity (all negative or all positive) generated by adding \(\nu\) to \(\eta\). Without loss of generality we may assume that the polarity of these three spike events is negative. Let us denote these time points by \(t_{s_{1}}<t_{s_{2}}<t_{s_{3}}\) and let us consider the time point \(t_{s_{0}}\leq t_{s_{1}}\) at which \(\nu\) contributes to the negative spike event at \(t_{s_{1}}\) for the first time. Then, the first spiking event after \(t_{s_{0}}\) is realized at \(t_{s_{1}}\), which is characterized by
\[p_{\eta}(s_{1})-\sum_{i=s_{0}}^{s_{1}}b_{i}<[p_{\eta}(s_{1})]-1. \tag{50}\]
After the spike event at \(t_{s_{1}}\) the re-initialization due to _reset-to-mod_ takes place, meaning that the membrane's potential is increased by \(1\), resulting in the addition of \((1-\sum_{i=s_{0}}^{s_{1}}b_{i})\) to the original membrane's potential \(p_{\eta}(s_{1})\). Hence, we obtain for the second subsequent spike at \(t_{s_{2}}\) the firing condition
\[p_{\eta}(s_{2})+(1-\sum_{i=s_{0}}^{s_{1}}b_{i})-\sum_{i>s_{1}}^{s_{2}}b_{i}<[p _{\eta}(s_{2})]-1, \tag{51}\]
thus,
\[p_{\eta}(s_{2})-[p_{\eta}(s_{2})]+2<\sum_{i=s_{0}}^{s_{2}}b_{i}. \tag{52}\]
Analogously, we obtain as characterizing condition for the third spiking event
\[p_{\eta}(s_{3})-[p_{\eta}(s_{3})]+3<\sum_{i=s_{0}}^{s_{3}}b_{i}. \tag{53}\]
Since \(\|\nu\|_{A,0}\leq 1\), i.e., \(|\sum_{i=1}^{k}b_{i}|\leq 1\) for all \(k\), it follows that \(|\sum_{i=k_{1}}^{k_{2}}b_{i}|=|\sum_{i=1}^{k_{2}}b_{i}-\sum_{i=1}^{k_{1}-1}b_ {i}|\leq 2\). Applied to (53), this yields the contradiction
\[p_{\eta}(s_{3})-[p_{\eta}(s_{3})]+3<\sum_{i=s_{0}}^{s_{3}}b_{i}\leq 2, \tag{54}\]
namely, \(-1<p_{\eta}(s_{3})-[p_{\eta}(s_{3})]<-1\). Therefore, there are at most \(2\) subsequently triggered additional spikes by adding \(\nu\).
Figure 17: Example of decomposing a spike train \(\eta\) with \(\|\eta\|_{A,0}=2\) into a sum \(\eta=\sum_{k=1,2}\Delta\eta_{k}\) with \(\|\Delta\eta_{k}\|_{A,0}=1\), according to Algorithm 2. Top left: spike train \(\eta\); Top right: illustration of first step with resulting walk and its top and bottom peaks due to (22) and (23). The red dashed line marks the resulting walk after subtracting \(\Delta\eta_{1}\). Bottom: Recursive steps with highlighted up and down intervals according to (25).
The same way of reasoning, now with Equ. (51), shows that the first occurrence of an added spike can only be a single event which, if any, has to be followed by a spike event of different polarity. Altogether this means that \(\|\text{LIF}_{1,0}(\eta+\nu)-\text{LIF}_{1,0}(\eta)\|_{A,0}\leq 1\).
The second case, \(\alpha=\infty\), i.e., \(\|\nu\|_{A,\infty}=\max_{i}|b_{i}|\leq 1\), reduces to the standard quantization by integer truncation, i.e., to show that \(\max_{i}|[a_{i}+b_{i}]-[a_{i}]|\leq 1\), which follows from the fact that \(-1\leq a_{i}-[a_{i}]+b_{i}<2\) for positive \(a_{i}\) and \(-2<a_{i}-[a_{i}]+b_{i}\leq 1\) for negative \(a_{i}\).
|
2306.09418 | A comprehensive review of 3D convolutional neural network-based
classification techniques of diseased and defective crops using non-UAV-based
hyperspectral images | Hyperspectral imaging (HSI) is a non-destructive and contactless technology
that provides valuable information about the structure and composition of an
object. It can capture detailed information about the chemical and physical
properties of agricultural crops. Due to its wide spectral range, compared with
multispectral- or RGB-based imaging methods, HSI can be a more effective tool
for monitoring crop health and productivity. With the advent of this imaging
tool in agrotechnology, researchers can more accurately address issues related
to the detection of diseased and defective crops in the agriculture industry.
This allows to implement the most suitable and accurate farming solutions, such
as irrigation and fertilization before crops enter a damaged and
difficult-to-recover phase of growth in the field. While HSI provides valuable
insights into the object under investigation, the limited number of HSI
datasets for crop evaluation presently poses a bottleneck. Dealing with the
curse of dimensionality presents another challenge due to the abundance of
spectral and spatial information in each hyperspectral cube. State-of-the-art
methods based on 1D- and 2D-CNNs struggle to efficiently extract spectral and
spatial information. On the other hand, 3D-CNN-based models have shown
significant promise in achieving better classification and detection results by
leveraging spectral and spatial features simultaneously. Despite the apparent
benefits of 3D-CNN-based models, their usage for classification purposes in
this area of research has remained limited. This paper seeks to address this
gap by reviewing 3D-CNN-based architectures and the typical deep learning
pipeline, including preprocessing and visualization of results, for the
classification of hyperspectral images of diseased and defective crops.
Furthermore, we discuss open research areas and challenges when utilizing
3D-CNNs with HSI data. | Nooshin Noshiri, Michael A. Beck, Christopher P. Bidinosti, Christopher J. Henry | 2023-06-15T18:02:53Z | http://arxiv.org/abs/2306.09418v1 | A comprehensive review of 3D convolutional neural network-based classification techniques of diseased and defective crops using non-UAV-based hyperspectral images
###### Abstract
Hyperspectral imaging (HSI) is a non-destructive and contactless technology that provides valuable information about the structure and composition of an object. It has the ability to capture detailed information about the chemical and physical properties of agricultural crops. Due to its wide spectral range, compared with multispectral- or RGB-based imaging methods, HSI can be a more effective tool for monitoring crop health and productivity. With the advent of this imaging tool in agrotechnology, researchers can more accurately address issues related to the detection of diseased and defective crops in the agriculture industry. This allows to implement the most suitable and accurate farming solutions, such as irrigation and fertilization, before crops enter a damaged and difficult-to-recover phase of growth in the field. While HSI provides valuable insights into the object under investigation, the limited number of HSI datasets for crop evaluation presently poses a bottleneck. Dealing with the curse of dimensionality presents another challenge due to the abundance of spectral and spatial information in each hyperspectral cube. State-of-the-art methods based on 1D and 2D convolutional neural networks (CNNs) struggle to efficiently extract spectral and spatial information. On the other hand, 3D-CNN-based models have shown significant promise in achieving better classification and detection results by leveraging spectral and spatial features simultaneously. Despite the apparent benefits of 3D-CNN-based models, their usage for classification purposes in this area of research has remained limited. This paper seeks to address this gap by reviewing 3D-CNN-based architectures and the typical deep learning pipeline, including preprocessing and visualization of results, for the classification of hyperspectral images of diseased and defective crops. Furthermore, we discuss open research areas and challenges when utilizing 3D-CNNs with HSI data.
keywords: Hyperspectral Imaging, Agriculture, Convolutional Neural Network, Crop Disease and Defect Detection, Crop Evaluation, Deep Learning +
Footnote †: journal: Computers and Electronics in Agriculture
## 1 Introduction
Plant diseases pose significant threats to global food production, with potential yield losses of up to 30% and substantial economic impact (Rizzo et al., 2021). This can have a devastating impact on farmers and communities, particularly in low-income countries where access to food is already challenging. Precision agriculture and hyperspectral imaging (HSI) offer promising solutions for preventing crop damage and losses, ultimately contributing to efforts to promote sustainability and reduce the impact of diseases on food production.
HSI, also referred to as imaging spectrometry, combines two distinct technologies, imaging and spectroscopy, to provide both spatial and spectral information simultaneously. Spectral information can provide rich insight into the biochemical and biophysical attributes of agricultural crops. This is due to the higher spectral resolution of hyperspectral sensors compared to multispectral and RGB ones. As a result, this feature can lead to better discrimination of objects of similar colors, higher accuracy in complex classifications, and the ability to predict chemical composition
and provide information about the interior of an object (Sun, 2010).
However, the interpretation of spectral data can be complex, especially when analyzing and comparing multiple samples over extended periods. One approach to simplify spectral analysis is the usage of spectral indices. Spectral indices are mathematical expressions that combine several spectral bands into a single value, providing an easier representation of the data. For example, the Normalized Difference Vegetation Index (NDVI) computes the ratio between Near Infrared (NIR) and Red (R) bands of hyperspectral channels as follows:
\[\text{NDVI}=\frac{\text{NIR}-\text{R}}{\text{NIR}+\text{R}} \tag{1}\]
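As a hypothetical illustration, NDVI can be computed per pixel from a hyperspectral cube by picking the bands closest to nominal red and NIR wavelengths; the band centers of 670 nm and 800 nm below are illustrative assumptions, not values prescribed by the cited works.

```python
import numpy as np

def ndvi(cube, wavelengths, red_nm=670.0, nir_nm=800.0):
    """Per-pixel NDVI (Equation 1) from an (M, N, bands) hypercube."""
    red = cube[:, :, np.argmin(np.abs(wavelengths - red_nm))]
    nir = cube[:, :, np.argmin(np.abs(wavelengths - nir_nm))]
    return (nir - red) / (nir + red + 1e-12)   # epsilon avoids division by zero
```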
With the help of spectral indices, one can effectively identify trends and changes in the data without requiring an in-depth comprehension of the underlying scientific principles governing spectral data, thus enabling a simpler data analysis, enhancement of features, standardization, comparability, and calibration of data.
In the agricultural industry, two common spectral indices are the already mentioned NDVI and the Green Chlorophyll Index (GCI). The former is used to monitor vegetation growth and health, and the latter quantifies the amount of chlorophyll in plants. Further indices have been defined, usually in the context of remote sensing, to support research on, for example, agriculture, soil, vegetation, water, and forestry. A comprehensive database of spectral indices that is searchable by application area and hyperspectral sensor is provided in Henrich et al. (2009).
From a data-perspective, a hyperspectral image is a stack of images, known as a hyperspectral cube or a data cube. Each image of this cube represents the response of the imager to one of the distinct hyperspectral channels (Benediktsson and Ghamisi, 2015). This is illustrated in Fig. 1. It shows a 3D data cube P with dimensions \(M\times N\times\lambda\), where \(M\) and \(N\) represent the axes of spatial information and \(\lambda\) represents the spectral dimension (Tarabalka et al., 2010). In the hyperspectral cube, each pixel, given by its spatial coordinates, is a vector of length \(\lambda\) that indicates the reflected radiation of a specific part of the object.
The high-dimensionality of this data cube poses a challenge to traditional machine learning approaches, resulting in reduced accuracy due to their inability to extract complex features. Moreover, the performance of these approaches heavily depends on manual feature engineering. Convolutional Neural Networks (CNN) have been proven to achieve high classification accuracies in image classification tasks and to work well with the high-dimensionality of HSI data.
In this paper, we present a comprehensive review of 3D-CNN-based models utilized in the classification of non-UAV-based hyperspectral images of diseased and defective crops. This review is intended to assist computer vision experts and agriculture-domain researchers seeking to address HSI classification tasks for crops under stress.
The paper is organized as follows. In the following Section, we outline the investigation protocol used in this review. In Section 3, we briefly describe the structure of CNNs and their most important concepts. Following the typical data pipeline associated with CNNs, Sections 4 to 7 review data preprocessing, band and feature selection, network architecture design, and data visualization. This provides a convenient overview of its individual steps for plant classification problems using 3D-CNNs. Finally, Section 8 highlights the research gaps and limitations associated with the application of 3D-CNNs for HSI data classification.
## 2 Search methodology
A systematic search was conducted by accessing scholarly publications through the Google Scholar search engine. To optimize the search results, specific keywords were employed in the advanced search section, resulting in a refined list of articles. The selected search terms were different combinations of "hyperspectral", "disease", "detection", "identification", "diagnosis", "plant", "crop", "stress", "3D CNN", "3 dimensional CNN", "three dimensional CNN", utilizing the Boolean operators AND and OR. Around 2,000
Figure 1: The hyperspectral cube (adapted from Tarabalka et al. (2010) with modification). It is a three-dimensional array where each pixel represents a spectrum containing a range of wavelengths. This spectrum can act as a fingerprint and provides information about biophysical and biochemical characteristics of the imaged object.
records were investigated; however, to ensure the relevance of the articles, the abstracts of the retrieved papers were thoroughly assessed to confirm coherence with the title of this research. Moreover, screening criteria were implemented, including the removal of non-English papers, to ensure an accessible and high-quality selection of articles. The results of this comprehensive study are provided in Table 1 and Table 2 (see Section 6) and contain papers from 2015 up to February of 2023.
This review examines various applications of 3D-CNN-based models in detecting and classifying diseases in agricultural crops, including charcoal rot in soybeans (Nagasubramanian et al., 2018), mold in peanuts and strawberries (Liu et al., 2020; Jung et al., 2022), bacterial leaf blight (BLB) in rice (Cao et al., 2022), grapevine vein-clearing virus (GVCV) in grapevines (Nguyen et al., 2021), and potato late blight (PLB) in potatoes (Qi et al., 2023). Moreover, the review explores the use of 3D-CNN-based architectures for identifying specific defects in crops, such as decay in blueberries (Qiao et al., 2020), bruise and brown spots in fruits (Pourdarbani et al., 2023; Jia et al., 2023), heat stress in rice (Gao et al., 2021), as well as black, fermented, shell, and broken coffee defects in beans (Chen et al., 2022). By harnessing the power of 3D-CNN-based models, we can effectively address the challenges of classifying diseased and defective crops using hyperspectral images. This can result in the preservation of product quality, prevention of yield losses, and ensuring food safety standards.
## 3 CNN structure and concepts
A CNN is composed of a series of layers, each consisting of several neurons. As shown in Fig. 2, each layer is the input for the next layer in the network. Key building blocks of a CNN are convolution layers, detector layers, and pooling layers. A convolution layer uses convolutional kernels to extract low-dimensional features from the input data while preserving the spatial relationship between the input data pixels. Fig. 3 depicts the movement directions of 1D (spectral), 2D (spatial), and 3D (spatial-spectral) kernels within the hypercube. A detector layer applies a non-linear function like the Rectified Linear Unit (ReLU) to learn non-linear representations. Pooling layers make features invariant by reducing the dimensionality of the data.
Classification methods based on 1D-CNNs (spectral feature-based) and 2D-CNNs (spatial feature-based) cannot efficiently classify hyperspectral data, as neither utilizes spatial and spectral features together. However, a 3D-CNN can extract spatial-spectral features from the volumetric data. This is due to its ability to incorporate the spectral dimension in addition to the spatial dimensions, which enables it to model and learn more complex spatial-spectral representations.
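As a minimal sketch (assuming PyTorch; the patch and kernel sizes are illustrative choices), a 3D convolution slides one kernel jointly along the spectral axis and both spatial axes, which is exactly the movement depicted in Fig. 3(c):

```python
import torch
import torch.nn as nn

# Input layout: (batch, channels=1, spectral bands, height, width)
x = torch.randn(8, 1, 240, 64, 64)                  # eight 64x64x240 patches
conv = nn.Conv3d(1, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1))
pool = nn.MaxPool3d(2)                              # halves every dimension
y = pool(torch.relu(conv(x)))                       # -> (8, 16, 120, 32, 32)
```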
In the following, a description of previous review papers that deal with the topics of CNNs, hyperspectral data, and applications in agriculture and plant research is given. The work of Signoroni et al. (2019) is aimed at domain professionals seeking comprehensive insights into the integration of hyperspectral acquisition techniques and deep learning architectures for specific tasks across diverse application domains. This resource caters to machine learning and computer vision experts, offering a nuanced understanding of how deep learning technologies are tailored to effectively process and analyze hyperspectral data, keeping them up-to-date with the latest advancements. In Jiang and Li (2020), an examination is given of how different CNN architectures have been employed in the assessment of plant stress, plant development, and postharvest quality. This review categorizes the studies according to the technical advancements achieved in terms of imaging classification, object detection, and image segmentation. As a result, it highlights cutting-edge solutions for specific phenotyping applications, offering valuable insights into the current state of the field. An overview of state-of-the-art CNN models and visualization techniques for disease diagnosis in plants is given in Joseph et al. (2022). The review given in Gill et al. (2022) delves into plant stress phenotyping, specifically examining the utilization of machine learning and deep learning methodologies. The study encompasses a wide range of high-throughput phenotyping platforms, exploring the integration of data from diverse sources. However, it does not discuss 3D-CNN architectures. Finally, a review paper that combines all three topics was provided in Wang et al. (2021). In that work, the authors provide an overview of the application of hyperspectral imaging in agriculture, encompassing areas such as ripeness and component prediction, classification themes, and plant disease detection. Additionally, the study examines recent advancements in hyperspectral image analysis specifically in the context of deep learning models. The review not only highlights the achievements in this field but also outlines the existing challenges associated with deep learning-based hyperspectral image analysis. Moreover, the study presents future prospects and potential directions for further research in this domain.
In this review, we update and complement the paper of Wang et al. (2021) with a particular focus on 3D-CNNs and the entirety of the process of creating a high-performant model, including preprocessing of
data, band selection, exploration of model architectures, and data visualization.
A deep learning pipeline for HSI classification typically consists of several stages, including data preprocessing, band and feature selection, model design, model training, testing, and evaluation (see Fig. 4). Data preprocessing involves enhancing the quality of raw HSI data, for example, through noise removal, radiometric calibration, and dimension reduction. Feature extraction transforms the raw data into a new space of features, which is expected to be more discriminative for the classification task. Band selection defines a subset of the original spectral bands that is the most relevant for the classification task. Model design is the configuration of hyperparameters of the CNN model, for example, the number and sequence of convolutional layers and dense layers, the activation functions to be used, and the usage of dropout or skip connections. During the training step, the model is optimized by iteratively adjusting its internal parameters (weights and biases) to minimize the error between its predicted output and the actual output on the input data. Finally, the model is evaluated on its ability to generalize the learned patterns and relationships to accurate predictions on new, unseen data.
## 4 Data preprocessing
Data preprocessing is a critical step in hyperspectral data analysis, aimed at optimizing the quality and quantity of the data. This step enhances the suitability of the data for downstream tasks such as classification and feature extraction. Preprocessing techniques include patch extraction, radiometric correction and calibration, smoothing, dimension reduction, and background removal. Data augmentation also falls under preprocessing and has the goal of increasing the volume of training data.
Figure 3: Movement direction of the convolution process using 1D, 2D, and 3D kernels in the hypercube. A schematic overview of the movement directions for (a) 1D, (b) 2D, and (c) 3D convolutions in CNNs over the hypercube is shown. The X and Y directions indicate the movement of the kernel across the spatial dimensions, and the Z direction shows the movement across the spectral dimension.
Figure 2: A basic conceptual CNN architecture. A CNN consists of multiple layers, including convolution, detector, and pooling layers, where each layer serves as the input for the subsequent layer, enabling the extraction of low-dimensional features, learning of non-linear representations, and dimensionality reduction in the network.
### Patch extraction
Patch extraction is a technique that involves dividing an image into smaller images or patches. In the context of HSI, patch extraction has significant advantages for efficient and targeted analysis of specific regions of interest within an image. By extracting image patches that contain pixels with similar properties, researchers can focus their analysis on the areas of the image that are most relevant to the diseased or defective regions.
To give a concrete example, Nagasubramanian et al. (2018) utilized patch extraction to analyze hyperspectral images of soybean crops. In their study, they extracted spatial patches of size 64x64x240 from an original image of size 500x1600x240, where the first two dimensions define the spatial resolution of the image and 240 denotes the number of spectral bands. By analyzing the properties of pixels within these patches, they were able to extract features that were more representative of the target disease, which ultimately improved the accuracy and efficiency of their analysis. Furthermore, patch extraction helps to expand the number of images when there is a lack of data. It also reduces computational time, as processing hyperspectral images of large sizes can be computationally demanding (Qi et al., 2023; Jia et al., 2023).
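A simple non-overlapping patch extraction can be sketched as follows; the helper name is ours, and the default size matches the soybean study above.

```python
import numpy as np

def extract_patches(cube, size=64, stride=64):
    """Cut an (M, N, bands) cube into (size, size, bands) patches."""
    M, N, _ = cube.shape
    return [cube[r:r + size, c:c + size, :]
            for r in range(0, M - size + 1, stride)
            for c in range(0, N - size + 1, stride)]

# e.g. a 500x1600x240 image yields 7 x 25 = 175 patches of size 64x64x240
patches = extract_patches(np.zeros((500, 1600, 240)))
```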
### Data augmentation
Imbalanced data is a common problem in many machine learning applications and HSI is no exception. It refers to a situation where the number of samples in each class or category of the data is not evenly distributed. This can lead to a bias towards the overrepresented classes in the analysis results, which is particularly problematic if the minority class is of interest.
To address imbalanced HSI data, one common approach is resampling (Nguyen et al., 2021), which involves either oversampling the minority class or undersampling the majority class to balance the class distribution. Resampling can be done randomly or using more advanced techniques such as the Synthetic Minority Oversampling Technique (SMOTE) (Chawla et al., 2002) or Adaptive Synthetic (ADASYN) sampling (He et al., 2008). Other widespread data augmentation approaches include transformation techniques such as mirroring (Liu et al., 2020), rotation (Liu et al., 2020; Chen et al., 2022), horizontal and vertical flipping (Chen et al., 2022; Pourdarbani et al., 2023), and color jittering (Pourdarbani et al., 2023). Along with the above-mentioned methods, patch extraction (Section 4.1) can also be used to address an imbalanced HSI dataset.
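For illustration, the geometric transformations mentioned above act only on the two spatial axes of a patch and leave the spectral axis untouched; a minimal sketch:

```python
import numpy as np

def augment(patch):
    """Flips and 90-degree rotations of an (H, W, bands) patch."""
    out = [np.flip(patch, axis=0),    # vertical flip
           np.flip(patch, axis=1)]    # horizontal flip (mirroring)
    out += [np.rot90(patch, k, axes=(0, 1)) for k in (1, 2, 3)]
    return out
```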
### Radiometric calibration and correction
Radiometric calibration is an essential step in the accurate operation of hyperspectral cameras. It aims to establish a quantitative relationship between the response of the camera sensor (the radiation sensor) and the actual reflectance (radiation level) of an object in a given
Figure 4: Typical deep learning pipeline for HSI data classification. First, the discriminative features and bands are extracted from the preprocessed HSI dataset of the crop. Then, the dataset is split into training, validation, and testing sets. The training dataset is used to train the 3D-CNN model, the validation dataset is employed to assess the model's performance and fine-tune its parameters, and the testing dataset serves to evaluate the final performance and generalization ability of the trained 3D-CNN model on unseen data.
environment. The calibration process involves assigning a "true" value for either radiation intensity or reflectance to the digital numbers given by the camera that represent recorded outputs for each pixel and spectral channel. To that end, calibrated reflectance standards are utilized, which usually consist of a matte Lambertian reflecting surface to ensure that reflected light is uniform in all directions (Shaikh et al., 2021).
The calibration process is then performed as follows: After selecting a radiation source that emits a known type and amount of radiation, the detector is placed in close proximity to the source to measure the radiation. Next, using calibration factors provided by the manufacturer or based on known mathematical formulas (see, for example, (Qi et al., 2023)) for the particular detector, the expected response of the detector is determined. The measured response of the detector is then aligned with the expected response based on the known radiation level using calibration factors. This calibration process should be repeated periodically with different radiation sources to ensure the continued accuracy and reliability of the detector's measurements.
Even with calibration performed, hyperspectral cameras are susceptible to radiometric errors that can arise from a variety of sources, including sensor drift, electronic interference, the light source, and data transmission and recording issues. For example, one of the common radiometric errors related to the camera's sensor is striping. Each sensor consists of multiple individual detectors that sometimes do not function properly due to being out of calibration. Consider a push broom (along-track) camera whose sensor has multiple detectors aligned in a row. When one of them is calibrated slightly differently from an adjacent detector, the striping effect can occur. In this case, lines predominantly consisting of varying shades of dark and bright pixels are formed. Radiometric correction can register and rectify incorrect pixel brightness. To achieve this, a series of procedures are employed, including noise correction, de-striping, line-dropout correction (Duggal, 2013), and black and white image correction (Gao et al., 2021; Cao et al., 2022; Chen et al., 2022; Jia et al., 2023). Figure 5(b) shows the calibrated hyperspectral image of lettuce using the following computation,
\[\text{Calibrated HSI Data}=\frac{\text{Raw HSI Data}-\text{Dark Ref}}{\text{ White Ref}-\text{Dark Ref}}, \tag{2}\]
where Raw HSI Data is the image taken by the hyperspectral camera without modification, Dark Ref is the image captured with the camera's lens closed, and White Ref is the imaged white reference with even and maximum reflectance across the spectral range.
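In code, Equation (2) is a direct element-wise operation on the three cubes; the small epsilon below is our addition to guard against division by zero in saturated or dead pixels.

```python
import numpy as np

def calibrate(raw, dark, white, eps=1e-9):
    """Black/white reflectance calibration of a hypercube (Equation 2)."""
    return (raw - dark) / (white - dark + eps)
```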
### Smoothing
Smoothing, also known as filtering, is a technique in HSI to improve the quality of the data by reducing noise and artifacts and enhancing the signal-to-noise ratio of the data. This technique in HSI can be broadly classified into two categories: spatial smoothing and spectral smoothing.
Spatial smoothing techniques selectively smooth out certain features in an image by applying filters that amplify or attenuate certain spatial frequencies (Duggal, 2013). Common spatial smoothing techniques include Gaussian filtering, mean filtering (Rees, 2013), median filtering, bilateral filtering (Cao et al., 2017), and anisotropic diffusion filtering (Lennon et al., 2002).
Spectral smoothing techniques operate on the spectral domain of an HSI image by smoothing the intensity values of neighboring spectral bands. As in spatial smoothing, the goal is to remove noise (Vaiphasa, 2006), reduce artifacts, and enhance features in the image. Common spectral smoothing techniques include moving average filtering (Vaiphasa, 2006), Savitzky-Golay (SG) filtering (Vidal and Amigo, 2012), Fourier filtering, wavelet filtering, and principal component analysis (PCA). For example, Cao et al. (2022) eliminated the random noise present in the spectral data of the different regions of interest (ROIs) using SG filtering. The filtering process makes it easier to identify the true signal of the sample and removes interference caused by size and structure differences between the ROIs. Similarly, Jung et al. (2022) and Jia et al. (2023) used an SG filter to smooth out the spectral data and reduce the effect of noise.
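For instance, SG smoothing along the spectral axis of a whole cube takes a single SciPy call; the window length and polynomial order below are typical illustrative choices, not the settings of the cited studies.

```python
import numpy as np
from scipy.signal import savgol_filter

cube = np.random.rand(100, 100, 240)   # stand-in (M, N, bands) hypercube
smoothed = savgol_filter(cube, window_length=11, polyorder=2, axis=2)
```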
As Vaiphasa (2006) points out, care must be taken when applying smoothing techniques. The subjective selection of smoothing filters in hyperspectral remote sensing studies can negatively impact the statistical properties of the spectral data, which can, in turn, affect subsequent analyses. To preserve the statistical properties of the HSI data, the selection of smoothing filters should be done through a comparative t-test method that identifies the filter with the least statistical disturbances. By using this approach, it is possible to mitigate the negative effects of smoothing filters and ensure the reliability of subsequent analyses based on statistical class models.
### Dimension reduction
The large number of spectral bands within hyperspectral data often makes it challenging to process and analyze HSI data. Dimensionality reduction techniques
retain relevant information, while allowing the model to work on smaller hypercubes downstream. These techniques are split into linear and non-linear ones.
#### 4.5.1 Linear techniques
Linear dimension reduction in HSI refers to a set of statistical and machine learning techniques that aim to find a lower-dimensional linear manifold in the high-dimensional space that captures the essential spectral information of the data (Khodr and Younes, 2011). By embedding the data into this lower-dimensional linear space, the dimensionality of the data is reduced while preserving as much of the spectral information as possible. PCA and Random Forest (RF) are two examples of this technique, and demonstrations of both can be seen in Cao et al. (2022).
The process of PCA involves first zero-centering the input spectral matrix (i.e., the matrix of the spectral bands) and computing its covariance matrix. Next, the eigenvectors and eigenvalues of the covariance matrix are calculated, and the eigenvalues are sorted in descending order. The top \(k\) eigenvalues are then selected and the corresponding eigenvectors space computed. The \(k\)-dimensional data is obtained by projecting the original spectral matrix into the new space using the selected eigenvectors.
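The PCA steps just described reduce to a few lines once the cube is flattened to a pixels-by-bands matrix; the number of retained components below is an arbitrary example.

```python
import numpy as np
from sklearn.decomposition import PCA

cube = np.random.rand(100, 100, 240)            # stand-in hypercube
X = cube.reshape(-1, cube.shape[2])             # pixels x spectral bands
Xk = PCA(n_components=30).fit_transform(X)      # project onto top-30 axes
reduced = Xk.reshape(cube.shape[0], cube.shape[1], -1)
```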
On the other hand, an RF evaluates the importance of each wavelength by randomly replacing it and measuring the effect on the accuracy of the trained model. To accomplish this, the RF builds many decision trees, and each tree is trained on a different subset of the data. The algorithm calculates the importance score for each wavelength by measuring the change in the prediction error rate before and after randomly replacing the wavelength in the out-of-bag data. The wavelength with the highest importance score is selected, and this process is repeated until the desired number of wavelengths is achieved. The dimensionality of the hyperspectral data is thus reduced, while retaining the most important information for accurate predictions.
Further linear algorithms for dimension reduction of HSI data are described in Firat et al. (2022) and encompass Independent Component Analysis (ICA) and PCA-based algorithms like Incremental PCA (IPCA), Sparse PCA (SPCA), and Randomized PCA (RPCA).
#### 4.5.2 Non-linear techniques
Non-linear techniques can handle data with complex and non-linear structures. Yang et al. (2009) classified these techniques into kernel-based and manifold learning methods. Kernel-based techniques, like Kernel PCA (KPCA), use non-linear mappings to first transform data into higher-dimensional feature spaces, on which linear techniques can then be applied. Manifold learning algorithms, on the other hand, aim to directly discover the intrinsic non-linear structure of the data. See (Khodr and Younes, 2011) for well-known manifold learning techniques, including Isometric Feature Mapping (Isomap), Locally Linear Embedding (LLE), Local Tangent Space Alignment (LTSA), Diffusion Maps, Sammon's Mapping (SM), and Locality Preserving Projections (LPP).
### Background removal
The data captured in hyperspectral images contains both foreground, called the ROI, and background objects.
Figure 5: Black and white calibration of HSI data. (a) The hyperspectral image of lettuce rendered before black and white image correction; (b) The image shows the rendered hyperspectral image after black and white image correction using the Equation 2.
In scenarios where the target object does not cover the entire scanning area, the signals from background objects can interfere with the data analysis, i.e., the background can contain noise that needs to be filtered out (Vidal and Amigo, 2012). This is especially true when dealing with images that exhibit color gradients. By masking the background from the data, researchers can focus on the spectral signature of the ROI. This leads to improved target detection in classification tasks. It also reduces the computational complexity of subsequent processing steps, including the training of 3D-CNN models (Qi et al., 2023).
There are various traditional techniques to extract the ROI from the hyperspectral image. These techniques can be classified into several categories based on their underlying principles and are discussed separately below.
#### 4.6.1 Spectral similarity-based methods
These methods work based on the similarity between the spectra of the pixels within an image. Examples of such methods include Spectral Angle Mapper (SAM) (Kumar et al., 2015) and Spectral Information Divergence (SID) (Qin et al., 2009). SAM computes the spectral angle
\[\alpha=\cos^{-1}\left(\frac{\sum_{i=1}^{\lambda}R_{i}\cdot T_{i}}{(\sum_{i=1 }^{\lambda}R_{i}^{2})^{\frac{1}{2}}\cdot(\sum_{i=1}^{\lambda}T_{i}^{2})^{\frac {1}{2}}}\right) \tag{3}\]
between the target reference spectrum \(R\) and each pixel spectrum \(T\) for all spectral bands \(\lambda\) in the hyperspectral image. This results in a similarity measure that is insensitive to illumination variations (Avbelj, 2012). On the other hand, SID works based on the concept of information theory (entropy). It compares the spectral information content of each pixel to a reference spectral information content. In general, SAM is better suited for well-defined spectral variations and low background noise, while SID is more robust to complex background noise and illumination variations.
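Equation (3) vectorizes over all pixels of a cube; a sketch (the ROI threshold is an arbitrary illustrative value):

```python
import numpy as np

def sam(cube, ref, eps=1e-12):
    """Spectral angle (Equation 3) between each pixel and a reference."""
    num = np.tensordot(cube, ref, axes=([2], [0]))
    den = np.linalg.norm(cube, axis=2) * np.linalg.norm(ref) + eps
    return np.arccos(np.clip(num / den, -1.0, 1.0))

angles = sam(np.random.rand(100, 100, 240), np.random.rand(240))
roi_mask = angles < 0.10   # small angle = spectrally similar to the reference
```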
#### 4.6.2 Statistical-based methods
These methods leverage statistical techniques to identify ROIs that share similar spectral properties and produce a set of uncorrelated components capturing different aspects of spectral variability within the image. For example, Minimum Noise Fraction (MNF) (Luo et al., 2016) is specifically designed to reduce the impact of noise in the data by separating the noise and signal components of the HSI data. This makes it particularly useful for operating on noisy HSI data but at the expense of more computational time. Another technique, ICA (see Section 4.5.1), aims to separate the mixed signals into their independent components, providing a more flexible approach to identify subtle spectral differences between ROIs. Finally, PCA (Chen et al., 2022) decomposes the original HSI data into orthogonal components that represent the directions of maximum variance in the data. In general, PCA is computationally more efficient than ICA and MNF, as it involves a simpler mathematical transformation that uses standard matrix operations.
#### 4.6.3 Spatial-based methods
These methods exploit the spatial correlation present in the image to differentiate between pixels within and outside the ROI. Typically, these methods apply morphological operations or spatial filtering techniques to extract features such as edges or texture, which are then used to segment the image into ROIs. The Morphological Attribute Profile (MAP) (Dalla Mura et al., 2010) and the Spatial-Spectral Endmember Extraction (SSEE) algorithm (Plaza et al., 2011) are examples of spatial-based methods that use mathematical morphology and spatial filtering, respectively.
SSEE first identifies the endmembers, or pure spectral signatures, present in the HSI data, and then uses a spatial clustering algorithm to group adjacent pixels with similar spectral properties into ROIs. On the other hand, MAP applies a series of morphological opening and closing operations to the image to identify connected regions of pixels with similar morphological attributes, such as size and shape. These connected regions can then be used as ROIs. These methods are computationally efficient and can be useful in scenarios where spectral information alone is not sufficient for accurate ROI extraction, such as in cases of low spectral contrast or high noise levels.
#### 4.6.4 Hybrid methods
These methods combine techniques to improve the accuracy and efficiency of the ROI extraction process. One approach can be combining statistical-based techniques such as PCA, ICA, or MNF with spatial-based techniques such as MAP or SSEE. For example, by combining MAP and PCA we can exploit both the spatial and spectral information (Sun et al., 2021) in the HSI data for ROI extraction. These hybrid methods can be effective in cases where neither spatial nor spectral methods alone are sufficient for accurate ROI extraction.
#### 4.6.5 Machine learning-based methods
Machine learning algorithms can also be used to extract the ROI. This process involves training a model using labeled data to identify and extract regions with specific spectral characteristics. Once trained, the model can be used to predict the presence of those characteristics in unlabeled data and extract ROIs. This procedure is computationally efficient and allows for the extraction of subtle and complex patterns that may not be easily identifiable through traditional methods. Some instances of such techniques are Support Vector Machines (SVM) (Bojeri et al., 2022), RF (Boston et al., 2022), and CNN (Li et al., 2022; Wan et al., 2023).
#### 4.6.6 Software-assisted manual annotation
Manual definition of the ROI can be assisted by software specifically built to handle HSI data. Commercial examples include ENVI (L3Harris Geospatial, 2023) and Spectronon (Resonon Inc, 2023), as well as the MATLAB Hyperspectral toolbox. Examples of open-source and free software are SeaDAS (NASA Ocean Biology Processing Group (OBPG), 2022), the Orfeo ToolBox (Centre National D'Etudes Spatiales (CNES), 2023), and RSGISLib (Bunting et al., 2014). These packages provide HSI data analysis for a wide range of the tasks discussed before. An example of software-assisted annotation is the work of Jin et al. (2018), in which the authors generated an ROI using ENVI by manually selecting the tissues or areas of interest in the false color images of HSI data.
Researchers can utilize such software in addition to the algorithms discussed above. For example, Gao et al. (2021) developed an open-source software package specifically designed for analyzing HSI data of seeds. It can remove the background of the HSI data and produce a binary mask using user-defined minimum and maximum intensity thresholds along with a component-searching algorithm. The result is an accurate segmentation of the seeds even when they overlap.
## 5 Band and feature selection
The selection of spectral bands in HSI is another crucial preprocessing step for classification tasks. It entails the identification of a subset of the most discriminative spectral bands from all available bands. This process is instrumental in reducing data dimensionality by eliminating redundant bands, thereby significantly reducing the computational overheads of downstream tasks. Furthermore, the careful selection of spectral bands can also reduce the effects of noise in the data. In general, based on the survey of Sun and Du (2019), band selection mechanisms can be categorized into six groups as follows (another classification of methods is presented in Sawant and Prabukumar (2020)).
### Ranking-based selection
Ranking-based band selection methods evaluate the significance of each spectral band based on a predetermined criterion and choose the most important bands in a sorted order. These methods can be categorized into two types: supervised and unsupervised. In supervised ranking-based methods, labeled training samples are utilized to determine the importance of each spectral band, while in unsupervised ranking-based methods, statistical properties of the data are used for the same purpose.
One example of such method is spectral differentiation. Qi et al. (2023) employed first and second order differentiation to decrease the computational complexity of HSI data containing 204 bands. The first derivative can effectively pinpoint areas in the spectrum where the rate of change is highest. This indicates the presence of sharp spectral features such as absorption or emission lines. Moreover, it allows for the selection of bands that capture these features and thus, provide critical information for classification or detection tasks. The second derivative is useful for identifying regions of the spectrum where the rate of change of the first derivative is highest, signifying the presence of spectral curvature. The selection of bands that capture the shape of the spectral signature, based on this information, can enhance the differentiating power in classification or detection tasks. Likewise, Jung et al. (2022) achieved an increase in accuracy using spectral differentiation and expansion of the input in the vertical direction of the raw data. This technique was applied in addition to SG smoothing (see Section 4.4) to further improve data quality.
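The following NumPy sketch illustrates derivative-based band ranking; the scoring rule (mean absolute derivative over all pixels) is our illustrative choice, not necessarily the exact criterion used in the cited works.

```python
import numpy as np

def rank_bands_by_derivative(cube: np.ndarray, order: int = 1,
                             top_k: int = 50) -> np.ndarray:
    """Score each band by the mean absolute first (or second) spectral
    derivative over all pixels and return the top_k band indices."""
    spectra = cube.reshape(-1, cube.shape[-1]).astype(np.float64)
    deriv = np.gradient(spectra, axis=1)          # first derivative
    if order == 2:
        deriv = np.gradient(deriv, axis=1)        # second derivative
    scores = np.abs(deriv).mean(axis=0)           # one score per band
    return np.argsort(scores)[::-1][:top_k]

selected = rank_bands_by_derivative(np.random.rand(32, 32, 204))
```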
### Searching-based selection
Searching-based band selection methods involve the creation of a criterion function such as Euclidean distance and Bhattacharyya distance (Ifarraguerri and Prairie, 2004) to evaluate the performance of each spectral band based on a specified optimization objective. The first step involves creating an initial subset of bands, followed by an assessment of the criterion function for the subset. The next step is applying a searching strategy to identify the best subset of bands that maximizes the criterion function, and evaluating the selected subset based on data classification performance. This iterative process continues until the desired level of performance is reached. Searching-based methods largely depend on the quality of the criterion function and the optimization strategy employed. Incremental searching (Wang et al., 2007; Santos et al., 2015), updated searching (Ghamisi
et al., 2014; Shi et al., 2016), and eliminating searching (Sun et al., 2014a) are among the commonly utilized strategies.
### Clustering-based selection
Clustering-based methods for hyperspectral band selection group bands into clusters and select representative bands from each cluster to create a final subset. These algorithms can be unsupervised (Imbiriba et al., 2015; Yang et al., 2017), supervised (Mojaradi et al., 2008) or semisupervised (Su et al., 2011, 2012). The selection of representative bands is typically performed using information measurements, such as mutual information or Kullback-Leibler divergence. Commonly used clustering techniques are \(K\)-means, affinity propagation, and graph clustering. \(K\)-means selects cluster centers that minimize the sum of distances from each band to its nearest center, chosen from a set of putative center candidates. Affinity propagation selects exemplars by considering the correlation or similarity among bands and the discriminative capability of each band.
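A scikit-learn sketch of clustering-based selection with \(K\)-means follows; keeping the band closest to each cluster centre as the representative is an illustrative assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_band_selection(cube: np.ndarray, n_bands: int = 30) -> np.ndarray:
    """Cluster bands by their spatial response (each band is one sample of
    length H*W) and keep the band nearest to each cluster centre."""
    bands = cube.reshape(-1, cube.shape[-1]).T        # (bands, H*W)
    km = KMeans(n_clusters=n_bands, n_init=10, random_state=0).fit(bands)
    keep = []
    for c in range(n_bands):
        members = np.flatnonzero(km.labels_ == c)
        d = np.linalg.norm(bands[members] - km.cluster_centers_[c], axis=1)
        keep.append(members[np.argmin(d)])
    return np.sort(np.asarray(keep))

subset = kmeans_band_selection(np.random.rand(32, 32, 120), n_bands=20)
```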
### Sparsity-based selection
Sparsity-based techniques for band selection rely on sparse representation or regression to identify representative bands. The most common of these methods are discussed separately below.
#### 5.4.1 Sparse Nonnegative Matrix Factorization-based methods
These methods (Li and Qian, 2011) break down the hypercube into a set of building blocks, which are both nonnegative and sparsely encoded. This promotes a feature extraction process that combines these building blocks to create a parts-based representation of the original data. The goal of this method is to identify the most informative bands of the HSI data matrix by optimizing an objective function that includes sparsity constraints.
#### 5.4.2 Sparse representation-based methods
Sparse representation-based methods (Zhai et al., 2016; Sun et al., 2017) use pre-defined or learned dictionaries to select informative bands of the HSI data matrix based on their sparse coefficients. These methods rank the bands according to the frequency of their occurrence in the sparse coefficient histograms. In some cases, sparse representation-based methods can also be designed to solve multiple tasks simultaneously, and an immune clonal strategy can be used to search for the best combinations of informative bands.
#### 5.4.3 Sparse regression-based methods
Sparse regression-based techniques (Sun et al., 2014b; Damodaran et al., 2017) transform the band selection problem into a regression problem and estimate the most representative bands by solving a sparse regression problem. These methods can also include sparsity constraints to encourage the selection of only the most informative bands for the regression model.
### Embedding-learning based selection
Embedding-learning based methods aim to learn a low-dimensional representation of the spectral data, also known as an embedding, that captures the most salient features of the data. There are several types of embedding learning-based methods that can be used for band selection, including autoencoders (Tschannerl et al., 2018), Deep Neural Networks (DNN) (Zhan et al., 2017), and CNNs (Sharma et al., 2016). Autoencoders learn a compact representation of the input data by training a neural network to encode the input into a lower-dimensional space and then decode it back into the original space. DNN, on the other hand, consists of multiple layers of interconnected neurons. This architecture enables the network to learn complex patterns and features in a hierarchical manner, empowering it to extract high-level representations from the input data. Meanwhile, CNNs can learn spatially invariant features from image data.
These techniques aim to learn a set of parameters that minimize an objective function that measures the model's performance on a particular task, such as classification or target detection. Band selection is integrated into the optimization process by constraining the learning algorithm to focus on a subset of the available bands, or by assigning weights to each band that reflect its relevance to the task at hand. The resulting model can then be utilized to predict the class label of new samples or to detect the presence of specific targets in the image. For example, Chen et al. (2022) employed a DNN based binary classification to identify the foreground and background regions of the HSI data. Subsequently, connected component labeling algorithms and edge contours were used to isolate the ROI from the image for further analysis.
Moreover, Jia et al. (2023) implemented a CNN-based band selection module that works based on a group convolution technique, which involves applying a 1\(\times\)1 one-dimensional convolution (equivalent to a scalar multiplication) to each band of the input hyperspectral image independently. This technique helps to overcome the problem of mutual interference between different
channels. The weights of the convolutional kernel are updated in the early stage of the network training using a loss function and an auxiliary classifier (see also the next Section 6). The weights represent the importance of each band, with a higher absolute value of weight indicating greater importance of the corresponding band.
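A minimal PyTorch sketch of such a grouped 1\(\times\)1 band-weighting layer is given below; the class and method names are ours, and the design only approximates the module described above.

```python
import torch
import torch.nn as nn

class BandWeightLayer(nn.Module):
    """Per-band scalar weights realised as a grouped 1x1 1-D convolution;
    |weight| serves as the band-importance score, as described above."""
    def __init__(self, n_bands: int):
        super().__init__()
        # groups=n_bands: each band is multiplied by its own scalar weight
        self.conv = nn.Conv1d(n_bands, n_bands, kernel_size=1,
                              groups=n_bands, bias=False)
        self.act = nn.ReLU()   # fast convergence without saturation

    def forward(self, x):      # x: (batch, bands, n_pixels)
        return self.act(self.conv(x))

    def band_importance(self) -> torch.Tensor:
        return self.conv.weight.detach().abs().flatten()

layer = BandWeightLayer(203)
out = layer(torch.rand(4, 203, 100))    # e.g. flattened 10x10 patches
scores = layer.band_importance()        # higher |w| -> more important band
```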
### Hybrid-scheme based selection
Hybrid-scheme based methods involve combining multiple band selection techniques to select the most appropriate bands. A popular combination is clustering and ranking (Yin et al., 2010; Datta et al., 2015), where clustering is used to group bands and ranking is used to select the most important bands within each cluster. Other hybrid methods combine clustering with searching or combine ranking with searching to further optimize band selection.
## 6 Network architecture design: Feature extraction and classification
The objective of neural network architecture design is to create a model that can effectively learn from the input data and generalize well to new, unseen data. In order to achieve this for HSI classification, a network architecture must be developed that can capture the complex spectral and spatial information present in the data.
As an integral aspect of designing a neural network, the decision of the number of layers to use is a crucial one. While the inclusion of more layers in a network has been shown (Uzair and Jamil, 2020; Josephine et al., 2021) to improve performance, it can simultaneously present several challenges. A notable challenge arises from the potential occurrence of vanishing or exploding gradients (Tan and Lim, 2019), where the gradient signal becomes too small or too large as it backpropagates through the layers during training (see also Subsection 6.2.1). Such a phenomenon makes it difficult for the network to learn and adjust its weights effectively.
Moreover, as the number of layers and parameters increases, the risk of overfitting rises, which is characterized by the network's ability to perform exceptionally well on the training data, but not generalize well on unseen data. Consequently, the network may become computationally expensive to train and use due to its high processing power and memory requirements. Additionally, gradient computation time increases, thereby impeding the training's efficiency. Hence, careful consideration of these challenges and trade-offs is imperative to designing a network that balances complexity and performance.
For this review, we classified 3D-CNNs into hybrid and non-hybrid structures. A network is hybrid if it includes either specific module(s) that improve feature extraction, accuracy, and performance or if it integrates 2D-CNN within the 3D-CNN architecture. In the following, we present 3D-CNN models for the classification of diseased and defected crop using HSI data. A summary of these architectures is also given in Tables 1 and 2.
### Non-hybrid Networks
Nagasubramanian et al. (2018) presented a 3D-CNN model to classify healthy and diseased crop. This model consists of two convolution layers with max pooling layers, and two Fully Connected (FC) layers, trained using the Adam optimizer. To prevent overfitting, dropout mechanisms were used after the first max pooling and first FC layer. A Weighted Binary Cross Entropy (WBCE) function of the form
\[L_{WBCE}(y,\hat{y})=-[\beta\cdot y\log(\hat{y})+(1-y)\log(1-\hat{y})] \tag{4}\]
was implemented to address imbalanced training data. Here \(y\) and \(\hat{y}\) represent binary variables for whether the ground truth and predicted result belong to a given class, respectively (see also Section 4.2 for a discussion on how to counteract imbalances in datasets). Using the coefficient \(\beta\), the WBCE loss function assigns a higher weight to the minority class: for example, the false-negative rate decreases if \(\beta\) is set higher than 1, while setting it smaller than 1 reduces the false-positive rate.
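A minimal PyTorch sketch of Eq. (4) is shown below; the function name and the reduction to a batch mean are our assumptions.

```python
import torch

def weighted_bce(y_true: torch.Tensor, y_pred: torch.Tensor,
                 beta: float = 1.0, eps: float = 1e-7) -> torch.Tensor:
    """Weighted binary cross-entropy of Eq. (4); beta > 1 penalises false
    negatives more heavily, beta < 1 penalises false positives."""
    y_pred = y_pred.clamp(eps, 1.0 - eps)   # avoid log(0)
    loss = -(beta * y_true * torch.log(y_pred)
             + (1.0 - y_true) * torch.log(1.0 - y_pred))
    return loss.mean()

loss = weighted_bce(torch.tensor([1.0, 0.0]), torch.tensor([0.8, 0.3]),
                    beta=2.0)
```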
Although the above technique can classify and detect diseased crop, detecting asymptomatic diseased crops at an early stage and differentiating them from healthy crops can be more challenging. In this respect, Jung et al. (2022) developed a 3D-CNN model that improves the classification accuracy for asymptomatic diseased crop without modification and preprocessing of the input HSI data. The model consists of four 3D convolution layers, in which the first and fourth layers each are followed by a 3D max pooling layer and batch normalization. The remaining convolution layers are only followed by batch normalization. The output of the last 3D convolution layer is passed through a global average pooling layer and two dense layers. In Jung et al. (2022), the results were improved further by preprocessing the input HSI data with spectral differentiation, vertical expansion, and smoothing.
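For illustration, the following PyTorch sketch assembles a small non-hybrid 3D-CNN in the spirit of the architectures above; all layer sizes are illustrative assumptions, not the published configurations.

```python
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    """Minimal non-hybrid 3D-CNN sketch: stacked 3D convolutions extract
    joint spectral-spatial features, followed by a small classifier head."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Dropout3d(0.3),                        # regularisation
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.BatchNorm3d(16),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(64), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):       # x: (batch, 1, bands, H, W)
        return self.classifier(self.features(x))

logits = Simple3DCNN()(torch.rand(2, 1, 100, 32, 32))   # -> (2, 2)
```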
Automating the classification procedure of HSI data using software can effectively mitigate the time-consuming process of implementing a deep learning pipeline. In this respect, Gao et al. (2021) developed
an open-source software package that classifies seeds at the pixel level using a 3D-CNN. Though it was first developed for a specific crop (rice), the test experiments on other seeds are promising. This software utilizes a 3D-CNN that consists of two 3D convolution layers, with two and four 3D convolution kernels for the first and second layers, respectively. The output is then flattened via one FC layer, and classification is performed using a softmax function. However, the feature extraction process of this software does not consider global feature relationships, which could improve its ability to recognize and classify seeds accurately.
Nguyen et al. (2021) implemented a 3D-CNN for classification on a small dataset. The feature extraction part is based on AlexNet (Krizhevsky et al., 2017) and comprises five convolutional layers followed by a flattening layer. The input layer takes an input of size 512\(\times\)512\(\times\)203, where 203 is the number of bands. The first two convolutional layers are each followed by a max pooling layer and batch normalization. Each subsequent convolutional layer is followed by a max pooling layer; after the last max pooling layer, batch normalization and flattening are applied before feeding the result into an RF or SVM for binary classification (healthy or diseased crop).
### Hybrid networks
#### 6.2.1 3D-CNN architectures based on ResNet
To address the challenges of vanishing or exploding gradients, Qiao et al. (2020) proposed to leverage residual convolutional blocks within a 3D deep ResNet architecture. This approach reduces the number of channels through the use of identity residual blocks and convolutional residual blocks. The identity residual block maintains the same input and output dimensions, while the convolutional residual block changes the number of channels. The use of a 1\(\times\)1\(\times\)1 convolution kernel as the shortcut in the convolutional residual block reduces the number of parameters and the computational complexity.
To further improve efficiency, Qiao et al. (2020) adopt a bottleneck structure that reduces the number of required convolution operations. Each convolutional layer is followed by a batch normalization layer to prevent vanishing gradients and enhance the convergence rate. The network uses the Exponential Linear Unit (ELU) as the non-linear activation function, which addresses the problem of dying neurons in ReLU.
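The following PyTorch sketch shows a 3D convolutional residual block with a 1\(\times\)1\(\times\)1 projection shortcut and ELU activations, under assumed channel sizes; it illustrates the design, not the authors' code.

```python
import torch
import torch.nn as nn

class ConvResidualBlock3D(nn.Module):
    """3-D convolutional residual block: two 3x3x3 convolutions on the body
    path, a 1x1x1 projection on the shortcut when channel counts differ."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm3d(out_ch), nn.ELU(),
            nn.Conv3d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm3d(out_ch),
        )
        # 1x1x1 convolution matches channel counts on the shortcut path
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv3d(in_ch, out_ch, 1, bias=False))
        self.act = nn.ELU()

    def forward(self, x):
        return self.act(self.body(x) + self.shortcut(x))

y = ConvResidualBlock3D(8, 16)(torch.rand(1, 8, 20, 16, 16))
```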
In order to identify the appropriate hyperparameters, the authors employed a Tree-structured Parzen Estimator (TPE) as an optimization algorithm. The TPE utilizes a probabilistic model to approximate the distribution of the objective function and guides the search
| CNN Model | Information | Reference |
| --- | --- | --- |
| 3D-CNN-based | **Dataset:** 111 hyperspectral images of soybean. **Type of disease:** Charcoal rot. **Imaging device:** Pika XC hyperspectral line imaging scanner. **Spectral range:** 400–1000 nm. **GPU:** NVIDIA Tesla P40 | Nagasubramanian et al. (2018) |
| 3D-CNN based on AlexNet | **Dataset:** 40 hyperspectral images of grapevine groups. **Type of disease:** GVCV. **Imaging device:** SPECIM IQ. **Spectral range:** 400–1000 nm. **GPU:** — | Nguyen et al. (2021) |
| HyperSeed | **Dataset:** 200 rice seeds (274,641 pixels). **Type of defect:** Heat stress. **Imaging device:** Micro-Hyperspec Imaging Sensors, Extended VNIR version. **Spectral range:** 600–1700 nm. **GPU:** — | Gao et al. (2021) |
| 3D-CNN-based | **Dataset:** Above 200 strawberry leaves (3,110 ROIs). **Type of disease:** Gray mold. **Imaging device:** Corning microHSI. **Spectral range:** 400–1000 nm. **GPU:** NVIDIA RTX3090 X (24 GB memory) | Jung et al. (2022) |

Table 1: Non-hybrid 3D-CNN-based architectures for detection of diseased and defected hyperspectral images of crop.
for optimal hyperparameters. For the classification, a 7\(\times\)7\(\times\)1 global pooling layer and an FC layer are utilized. This model halves the number of parameters and improves the computational time by up to 10%. Moreover, the study of Pourdarbarani et al. (2023) on the performance of well-known architectures in the detection of defective crop demonstrates that residual connections achieve higher accuracy while training faster, despite having more parameters.
#### 6.2.2 Hypernet-PRMF network
Liu et al. (2020) presented a feature pre-extraction and a multi-feature fusion block to extract peanut characteristics from hyperspectral data. The feature pre-extraction includes constructing a Peanut Recognition Index (PRI) based on two informative bands to distinguish healthy, moldy, and damaged peanuts.
The multi-feature fusion block is a technique used in image segmentation to fully extract spatial and spectral
| CNN Model | Information | Reference |
| --- | --- | --- |
| Hypernet-PRMF | **Dataset:** 16 hyperspectral images of peanut. **Type of disease:** Mold. **Imaging device:** SOC710E portable hyperspectral imager. **Spectral range:** 400–1000 nm. **GPU:** NVIDIA Tesla P100 (12G) | Liu et al. (2020) |
| Deep ResNet 3D-CNN | **Dataset:** 16,346 hyperspectral images of blueberry. **Type of defect:** Distinguishing decayed and sound blueberries. **Imaging device:** —. **Spectral range:** 400–1000 nm. **GPU:** — | — |
| SDC-3DCNN | **Dataset:** Rice leaves (number of samples not determined). **Type of disease:** BLB. **Imaging device:** Raptor EM285. **Spectral range:** 378.28–1033.05 nm. **GPU:** NVIDIA GeForce RTX 2080Ti GPU and AMD Ryzen 5-1600 Six-Core processor @ 3.20 GHz CPUs | Cao et al. (2022) |
| 2D-3D-CNN (defect detection module of RT-CBDIA) | **Dataset:** 1026 coffee beans. **Type of defect:** Black, insect-damaged, and shell. **Imaging device:** Imec XIMEA snapshot sensor. **Spectral range:** 660–980 nm. **GPU:** GEFORCE GTX1660 Ti with 16 GB RAM | Chen et al. (2022) |
| ResNet | **Dataset:** 210 lemons. **Type of defect:** Bruise. **Imaging device:** not specified (provisioned by Noor Imen Tajhiz Co.). **Spectral range:** 400–1100 nm. **GPU:** trained on Google Colab | — |
| PLB-2D-3D-A | **Dataset:** 15,360 potato leaves. **Type of disease:** PLB. **Imaging device:** Specim IQ. **Spectral range:** 400–1000 nm. **GPU:** NVIDIA Tesla V100 | Qi et al. (2023) |
| Y-Net | **Dataset:** 200 diseased corn leaves (6,264 extracted regions). **Type of disease:** Brown spot and anthracnose. **Imaging device:** HSI system provided by Head Wall. **Spectral range:** 400–1000 nm. **GPU:** RTX 3090 (24 GB) | Jia et al. (2023) |

Table 2: Hybrid 3D-CNN-based architectures for detection of diseased and defected hyperspectral images of crop.
features from HSI data. This technique involves using multiple types of convolution kernels including 2D convolution for common texture features, separable convolution for increased feature diversity, depthwise convolution for band feature extraction, and 3D convolution for spectral change information. The convolutions are concatenated after normalization and activation functions to enhance diversity and improve recognition accuracy.
Moreover, the authors employed the feature pre-extraction and multi-feature fusion block techniques in their proposed peanut recognition model, called the Hypernet-PRMF network. This model works at both peanut- and pixel-level recognition. The model consists of four parts: feature pre-extraction, down-sampling, up-sampling, and prediction. The feature pre-extraction part enhances the differentiation between different peanut features. The down-sampling part reduces the size of the image while increasing the number of convolution kernels, whereas the up-sampling part reconstructs the image while reducing the number of convolution kernels. The prediction is based on the softmax function, and the class with the maximum predicted probability is chosen as the final recognition result. The model achieves pixel-wise recognition accuracy with the use of the watershed segmentation algorithm. This technique has the potential to be employed for the detection of other crops like sorghum.
#### 6.2.3 Spectral Dilated Convolution 3D-CNN
Cao et al. (2022) proposed a spectral dilated convolution (SDC)-3D-CNN model to detect asymptomatic crop diseases at an early stage. This model consists of SDC modules along with residual blocks that prevent the vanishing gradient problem. SDC extends the idea of dilated convolution, which expands the receptive field of convolution kernels without augmenting the model's parameterization. The receptive field is the region of the input that influences the activation of a given unit in a convolutional layer. The 3D-SDC extends the receptive field of convolutional kernels to the spectral dimension. It works by applying a filter to the input at intermittent intervals, which are dictated by the spectral dilation rate.
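The spectral dilation idea can be sketched in PyTorch as a `Conv3d` whose dilation acts only along the spectral axis; the channel counts below are assumptions.

```python
import torch
import torch.nn as nn

# 3-D convolution with dilation only along the spectral (first) dimension,
# widening the spectral receptive field without adding parameters.
sdc = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3,
                dilation=(5, 1, 1),   # spectral dilation rate of 5
                padding=(5, 1, 1))    # keeps the output size unchanged

y = sdc(torch.rand(2, 1, 50, 32, 32))   # (batch, ch, bands, H, W) -> same size
```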
The network was tested with the top 50, 100, 150, and 200 significant wavelengths extracted by RF, and with Principal Components (PCs) of the same ranking obtained by PCA, along with different spectral dilation rates, to detect healthy, asymptomatic, and symptomatic crop. The experimental results show the highest detection performance when using the top 50 important features extracted by RF at a dilation rate of 5.
#### 6.2.4 Merged 2D- and 3D-CNN architectures
Chen et al. (2022) developed a 2D-3D-CNN for real-time crop defect detection. This network is the detection module of a real-time coffee-bean defect inspection algorithm (RT-CBDIA). The network consists of a 2D-CNN and a 3D-CNN; the former is responsible for extracting spatial features and the latter for extracting spectral features. Combining these two networks can boost feature extraction by providing robust and discriminative spectral-spatial features.
The 3D-CNN is comprised of two convolution blocks with the same structure as in the 2D-CNN, except that 3D convolutions and pooling layers are used. The two networks run simultaneously, and their last pooling layers are merged and fed to an FC layer and then a dropout layer to avoid overfitting. Finally, a softmax layer determines each crop's health status.
Likewise, Qi et al. (2023) fully extracted spatial-spectral features by merging 2D- and 3D-CNN architectures using AttentionBlock (Yin et al., 2020) and Squeeze-and-Excitation (SE)-ResNet (Hu et al., 2018). To accomplish this, the model first creates a neighborhood block of size 11\(\times\)11\(\times\)10 around a center pixel from the input image. Then, 2D convolution operations are used to extract spatial correlation features from the neighborhood block, and 3D convolution operations are used to capture spectral correlation features. The model uses four 2D convolutional layers and four 3D convolutional layers to capture feature maps of various spatial and spectral dimensions. By using different sizes of convolution kernels and downsampling steps, varied types of information can be captured.
Finally, the extracted feature maps are fused together to create a final set of feature maps that contain valuable and pertinent information required for effective classification. Herein, AttentionBlock and SE-ResNet play important roles, as outlined immediately below.
An AttentionBlock is used to highlight important information in the fused spectral space feature map. It works by considering the similarity between each pair of pixels in the feature map and weighting the relevant pixels with higher importance. This is achieved through a series of 2D convolutions with a kernel size of 1\(\times\)1 to transform each pixel into a \(\lambda\)-dimensional vector, where \(\lambda\) represents the number of feature channels in the input tensor. The similarity between any two pixels is then calculated using the dot-product of their transformed vectors, and the results are weighted using a softmax function.
The output of an AttentionBlock is a feature map that emphasizes the relevant information while suppressing irrelevant information. This process allows AttentionBlocks to focus on the relevance between pixels across the entire feature map, rather than just the spatial range of the convolution kernel size used in traditional convolution and pooling operations. This yields better classification results with little computational complexity.
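A hedged PyTorch sketch of such a dot-product AttentionBlock follows; the embedding size and the residual connection are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionBlock(nn.Module):
    """Dot-product pixel attention: 1x1 convolutions embed each pixel,
    pairwise similarities are softmax-normalised, and the result
    re-weights the feature map."""
    def __init__(self, channels: int, embed: int = 32):
        super().__init__()
        self.query = nn.Conv2d(channels, embed, 1)
        self.key = nn.Conv2d(channels, embed, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                               # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)    # (B, HW, E)
        k = self.key(x).flatten(2)                      # (B, E, HW)
        v = self.value(x).flatten(2).transpose(1, 2)    # (B, HW, C)
        attn = F.softmax(q @ k / q.shape[-1] ** 0.5, dim=-1)   # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return out + x      # residual keeps the original signal

y = AttentionBlock(16)(torch.rand(2, 16, 8, 8))
```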
In order to enhance the representational power of CNNs, SE modules, as a type of attention mechanism, can adaptively recalibrate the feature maps. They do so by capturing channel-wise feature dependencies through a squeeze operation, followed by an excitation operation that learns how to weight the importance of each feature map. This mechanism allows the model to pay more attention to salient channel features and disregard the less significant ones. By integrating AttentionBlocks and SE-ResNet, the network of Qi et al. (2023) can generalize better in classification and achieve higher accuracy.
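For reference, a minimal SE block sketch is given below; the reduction ratio is an assumed hyperparameter.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global average pooling squeezes each channel
    to a scalar, a bottleneck MLP learns channel weights, and the input is
    rescaled channel-wise."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze -> (B, C)
        return x * w[:, :, None, None]           # excite: rescale channels

y = SEBlock(32)(torch.rand(2, 32, 8, 8))
```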
Moreover, in the context of detecting two similar crop diseases that are indistinguishable to the naked eye, a recent study by Jia et al. (2023) developed a new network called Y-Net. The Y-Net model takes in 10\(\times\)10\(\times\)203 hyperspectral data cubes as input, and it consists of a channel attention mechanism, a band selection module with an auxiliary classifier, a 3D-2D-CNN architecture, and a classification module.
A CNN architecture is employed to conduct the band selection, where 1\(\times\)1 1D convolutions are assembled to modify the parameters of the convolutional kernel in the early phase of the network training. The magnitude of the weight of the convolution kernel is indicative of the relevance of the band, with greater absolute weight values signifying more distinctive bands. The use of group convolution helps in preventing any hindrance from nearby bands, while the ReLU activation function is adopted for fast network convergence without the problem of saturation. The output of this step will be given to an auxiliary classifier that enables early-stage weight updating in the band selection block.
The auxiliary classifier module updates the loss function of the Y-Net model:
\[L(y,\hat{y})=(1-\theta)\cdot L_{\text{Final Classifier}}(y,\hat{y})+\theta\cdot L_{\text{Auxiliary Classifier}}(y,\hat{y})+\beta\cdot\sum_{j=1}^{n}W_{j}, \tag{5}\]
where \(y\) is the ground truth label, \(\hat{y}\) is the predicted label, and \(\theta\) and \(\beta\) are hyperparameters that control the trade-off between the two losses and the sparsity of the band selection module, respectively. \(W_{j}\) is the weight of the \(j^{\text{th}}\) band in the band selection module and \(n\) is the total number of bands. The loss is a combination of the cross-entropy losses of the final classifier and the auxiliary classifier, plus the sum of the weights of the band selection module. The purpose of this combination is to control the classification accuracy while updating the weights of the band selection layer. The adjustment factor \(\theta\) gradually decreases as the number of training iterations increases. This adjustment factor enables the Y-Net model to update the weights in the band selection module in the early stages of training and gradually shift towards training the final classifier to learn a more accurate classification model. Additionally, the auxiliary classifier helps to constrain the weight sparsity of the band selection module, ensuring that the score of unimportant features is close to zero.
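A minimal PyTorch sketch of Eq. (5) follows; taking absolute values of the band weights in the sparsity term is our assumption to keep the penalty non-negative.

```python
import torch
import torch.nn.functional as F

def ynet_loss(final_logits, aux_logits, target, band_weights,
              theta: float, beta: float) -> torch.Tensor:
    """Combined loss of Eq. (5): weighted cross-entropies of the final and
    auxiliary classifiers plus a sparsity penalty on the band weights."""
    return ((1.0 - theta) * F.cross_entropy(final_logits, target)
            + theta * F.cross_entropy(aux_logits, target)
            + beta * band_weights.abs().sum())

loss = ynet_loss(torch.randn(4, 2), torch.randn(4, 2),
                 torch.randint(0, 2, (4,)), torch.randn(203),
                 theta=0.5, beta=1e-4)
```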
The results of Jia et al. (2023) show that removing nonessential and nondiscriminative bands increases the accuracy of the Y-Net model and reduces the model size and the number of parameters. Moreover, since the band selection module is integrated into the overall architecture, the training time does not increase significantly.
## 7 Visualization techniques for HSI classification decisions
Visualization techniques can be employed to observe the contribution of pixels in the classification decision. These techniques allow us to identify the pixel locations associated with the most important spectral bands that play a crucial role in the final classification results.
Saliency maps (Simonyan et al., 2013) are one of the essential and traditional visualization tools to identify the most sensitive regions (crucial pixels) in an image with respect to a model's predictions. This technique works by computing the gradient of the output class score with respect to the input image. This gradient represents how much each pixel in the input image contributes to the final classification decision. Next, the absolute values of these gradients are summed across the channels to obtain a saliency map, which highlights the most salient regions of the input image for the predicted class.
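This computation can be sketched in a few lines of PyTorch; the toy model in the usage example is purely illustrative.

```python
import torch

def saliency_map(model, x: torch.Tensor, target_class: int) -> torch.Tensor:
    """Gradient of the class score w.r.t. the input; absolute gradients are
    summed over the channel dimension to give a per-location saliency map."""
    model.eval()
    x = x.clone().requires_grad_(True)
    model(x)[0, target_class].backward()
    return x.grad.abs().sum(dim=1).squeeze(0)

# Toy usage; for an HSI input of shape (1, 1, bands, H, W) the result has
# shape (bands, H, W), i.e. a saliency value per wavelength and pixel.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 5))
smap = saliency_map(model, torch.rand(1, 3, 8, 8), target_class=2)  # (8, 8)
```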
For example, Nagasubramanian et al. (2018) discovered that saliency maps can help to locate the most sensitive pixel locations in infected crop images, which are often the severely infected areas. Conversely, both healthy and infected crop images had saliency map gradients that were primarily focused around the mid-region of the crop stem, highlighting the stem's importance in crop classification. Moreover, Cao et al. (2022) observed that the significant wavelengths extracted by RF from raw HSI data overlap with the saliency-sensitive
wavelengths. More importantly, saliency maps can determine significant wavelengths for classification which are not extracted by RF.
Another visual explanation technique for CNN decision is the Gradient-weighted Class Activation Mapping (Grad-CAM) (Selvaraju et al., 2020). It was introduced as an improvement over CAM (Zhou et al., 2016). CAM involves modifying a pre-existing CNN model by replacing the final FC layer with a global average pooling layer, which retains essential channel information while reducing the spatial dimensions. This modification enables the utilization of feature maps from the preceding layer. By applying learned weights to these feature maps through global average pooling, CAM generates a map that highlights the crucial regions associated with the predicted category. However, CAM provides a coarse localization of the important regions within an image. It highlights the regions that contribute most to the predicted class, but it does not provide precise boundaries of those regions. To address this limitation, Grad-CAM was introduced as an extension to CAM.
Grad-CAM enhances the CAM approach by incorporating gradient information. Instead of learning separate linear models for each class, Grad-CAM calculates the gradients of the predicted class score with respect to the feature maps of the last convolutional layer and applies global average pooling to these gradients to obtain an importance score for each channel. These scores are used to weigh the feature maps, which enables Grad-CAM to identify more intricate features that play a significant role in the classification decision.
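A minimal Grad-CAM sketch in PyTorch is shown below; the hook-based implementation and the toy network are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, layer, x, target_class):
    """Minimal Grad-CAM: channel weights are the spatially averaged
    gradients of the class score w.r.t. the chosen layer's feature maps."""
    feats, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    model(x)[0, target_class].backward()
    h1.remove(); h2.remove()
    w = grads[0].mean(dim=(2, 3), keepdim=True)      # GAP over the gradients
    cam = F.relu((w * feats[0]).sum(dim=1))          # weighted feature sum
    return cam / cam.max().clamp_min(1e-8)

net = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
                          torch.nn.Flatten(), torch.nn.Linear(8 * 8 * 8, 5))
heat = grad_cam(net, net[0], torch.rand(1, 3, 8, 8), target_class=1)
```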
In addition, there are further statistical techniques that help in analyzing the importance of individual features (spectral bands) of HSI data in a classification. The Light Gradient Boosting Machine (LightGBM) (Ke et al., 2017; Gao et al., 2021) is one such method that uses decision trees in calculating feature importance. It builds decision trees in a leaf-wise manner: the algorithm grows the tree by adding one leaf at a time, selecting the best split based on the maximum reduction in the loss function over all leaves, which is computationally faster than the level-wise approach. LightGBM then calculates the importance of each feature by evaluating how much it contributes to the reduction in the loss function across all trees. The feature importance score is calculated by summing up the number of times a feature is used to split the data across all trees, weighted by the improvement in accuracy achieved by each split. Features that are used more frequently and result in larger improvements in accuracy are assigned higher importance scores.
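The following sketch shows how such importance scores can be obtained with the `lightgbm` package; the data here are random placeholders.

```python
import numpy as np
import lightgbm as lgb

# Hypothetical pixel-level data: 1,000 spectra with 204 bands, binary labels.
X = np.random.rand(1000, 204)
y = np.random.randint(0, 2, size=1000)

clf = lgb.LGBMClassifier(n_estimators=200).fit(X, y)
split_imp = clf.feature_importances_                          # split counts
gain_imp = clf.booster_.feature_importance(importance_type="gain")
top_bands = np.argsort(gain_imp)[::-1][:20]   # most informative wavelengths
```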
## 8 Discussion and conclusion
In this study, we conducted a comprehensive review of 3D-CNN-based models applied in the domain of agriculture using non-UAV-based HSI data. Our analysis delved into diseased and defective crops, focusing on the structures and efficiencies of the models, the quantity of datasets utilized, and the necessary pre-processing steps. This review indicates the advantages of 3D-CNNs in capturing spatial-spectral information within hyperspectral data, enabling them to outperform 1D- and 2D-CNNs in hyperspectral image classification. With the aim of assisting computer vision experts and agriculture-domain researchers in tackling HSI classification tasks for crops experiencing stress, this comprehensive review provides valuable insights and guidance.
In general, HSI holds great potential for detecting subtle changes in crop growth and development, making it a promising technique for diagnosing crop diseases and defects. Despite this potential, our study indicates that there is still limited research that uses 3D-CNNs in this context. Furthermore, the studies that have been performed are often very application-specific, and it is unknown how well the proposed methods and models generalize to a broader range of applications. We identify three major challenges that must be overcome to achieve a broader adoption of HSI in general and the usage of 3D-CNNs for HSI classification problems: limited availability of hyperspectral data, the computational complexity of 3D-CNN models, and the costs of hyperspectral imaging hardware. In the following, we address each of these challenges in more detail and offer ideas on how to overcome or avoid them.
The limited availability of varied hyperspectral data on diseased and defective crops presents a significant challenge for researchers, farmers, and stakeholders in the agriculture industry who rely on data to make informed decisions. The lack of data in this area of research hampers efforts to understand the extent of the problem and develop effective solutions. It makes it difficult to track the progress and success of any initiatives aimed at improving crop health and reducing the prevalence of disease and defects. To address this issue, there is a need for increased investment in data collection and data sharing, as has been done in the past, for example, for RGB data (e.g., Beck et al. (2020, 2022)). Large-scale HSI data collection and publicly available datasets will allow research groups to develop the next generation of models, even if they have no access to the otherwise required hardware and plant material. Even with small datasets, there are techniques to generalize models beyond the specific application case they had been
trained on. In recent years, transfer learning and active learning techniques have been increasingly used together to tackle the challenges posed by limited data. By leveraging knowledge from pre-existing models, transfer learning can enhance the accuracy of models trained on limited HSI data. On the other hand, active learning involves selecting and annotating informative samples to improve the effectiveness of models trained on small HSI datasets. By combining the two techniques, transfer learning and active learning can address the bottleneck of limited HSI data and enable the development of robust 3D-CNN models capable of accurately classifying HSI data.
The computational complexity of 3D-CNN models is a barrier to their deployment in the field, for example, in edge computing devices or even as part of an embedded system in UAVs or agricultural equipment. Real-time diagnosis would enable farmers to quickly detect and respond to disease outbreaks, which can help to prevent the spread of disease and reduce crop losses. This requires, however, that researchers explore ways to optimize their models for speed and efficiency, particularly their memory footprint. This can be achieved through advancing the capabilities of single-board computers, particularly their GPUs and AI accelerators, on the one side, as well as developing lightweight models on the other side, for example, by reducing the number of used bands or grouping bands into indices. Identifying the spectral bands that are most informative for detecting a varied range of diseases and defects would play an important role in reducing the model's training time, decreasing the number of parameters, achieving higher accuracy, and generating a more lightweight model.
Despite the valuable insights hyperspectral imaging provides regarding the condition and health of crops, deploying this technology is relatively costly compared to RGB and multispectral imaging technologies. The higher expense is primarily attributed to the increased processing power required to analyze the HSI data and the extensive spectral range offered by hyperspectral cameras.
One approach to consider for cost reduction is limiting the number of bands in hyperspectral cameras, as not all spectral bands may be equally crucial for disease and defect detection. Therefore, instead of having an imaging device that supports the full range of wavelengths from the visible to the infrared spectrum, we can deploy imaging systems that work with a few essential wavelengths rather than hundreds of spectral bands. In this respect, some companies already provide the facility to design customized multispectral systems, which work with the bands that have been identified as the most discriminative (see Hamila et al. (2023) for a 3D-CNN model training advantage of this approach). Developing methods for identifying the most informative spectral bands would therefore also be cost-beneficial. Furthermore, by leveraging Machine Learning as a Service (MLaaS) platforms (Noshiri et al., 2021), the need for the infrastructure required to process HSI data using 3D-CNNs can be eliminated. Additionally, leading MLaaS providers can integrate pre-built 3D-CNN models into their offerings for training on HSI data, thereby reducing the time and effort required for model development, especially for individuals in the agriculture domain who may have limited machine learning expertise.
In summary, the application of 3D-CNNs with hyperspectral data for diseased and defective crop detection is a promising research area with several open research questions. Collecting and sharing HSI data at scale, identifying informative spectral bands, developing transfer and active learning techniques, and implementing lightweight architectures are some of the key research areas that can be considered for future work. The advancement of this research will lead to more accurate and efficient detection of diseased and defective crops, which can have significant impacts on the agricultural industry.
**CRediT authorship contribution statement**
**Nooshin Noshiri**: Conceptualization, Investigation, Methodology, Software, Visualization, Writing - original draft, Writing - review & editing. **Michael A. Beck**: Writing - Original Draft, Writing - review & editing. **Christopher P. Bidinosti**: Funding acquisition, Methodology, Project administration, Resources, Supervision, Writing - review & editing. **Christopher J. Henry**: Conceptualization, Resources, Writing - review & editing, Supervision, Project administration, Funding acquisition.
**Declaration of competing interest**
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Acknowledgement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. |
2304.00245 | Reusing Deep Neural Network Models through Model Re-engineering | Training deep neural network (DNN) models, which has become an important task
in today's software development, is often costly in terms of computational
resources and time. With the inspiration of software reuse, building DNN models
through reusing existing ones has gained increasing attention recently. Prior
approaches to DNN model reuse have two main limitations: 1) reusing the entire
model, while only a small part of the model's functionalities (labels) are
required, would cause much overhead (e.g., computational and time costs for
inference), and 2) model reuse would inherit the defects and weaknesses of the
reused model, and hence put the new system under threats of security attack. To
solve the above problem, we propose SeaM, a tool that re-engineers a trained
DNN model to improve its reusability. Specifically, given a target problem and
a trained model, SeaM utilizes a gradient-based search method to search for the
model's weights that are relevant to the target problem. The re-engineered
model that only retains the relevant weights is then reused to solve the target
problem. Evaluation results on widely-used models show that the re-engineered
models produced by SeaM only contain 10.11% of the original models' weights,
resulting in a 42.41% reduction in inference time. For the target problem,
the re-engineered models even outperform the original models in classification
accuracy by 5.85%. Moreover, reusing the re-engineered models inherits an
average of 57% fewer defects than reusing the entire model. We believe our
approach to reducing reuse overhead and defect inheritance is one important
step forward for practical model reuse. | Binhang Qi, Hailong Sun, Xiang Gao, Hongyu Zhang, Zhaotian Li, Xudong Liu | 2023-04-01T06:49:07Z | http://arxiv.org/abs/2304.00245v2 | # Reusing Deep Neural Network Models through Model Re-engineering
###### Abstract
Training deep neural network (DNN) models, which has become an important task in today's software development, is often costly in terms of computational resources and time. With the inspiration of software reuse, building DNN models through reusing existing ones has gained increasing attention recently. Prior approaches to DNN model reuse have two main limitations: 1) reusing the entire model, while only a small part of the model's functionalities (labels) are required, would cause much overhead (e.g., computational and time costs for inference), and 2) model reuse would inherit the defects and weaknesses of the reused model, and hence put the new system under threats of security attack. To solve the above problem, we propose SeaM, a tool that re-engineers a trained DNN model to improve its reusability. Specifically, given a target problem and a trained model, SeaM utilizes a gradient-based search method to search for the model's weights that are relevant to the target problem. The re-engineered model that only retains the relevant weights is then reused to solve the target problem. Evaluation results on widely-used models show that the re-engineered models produced by SeaM only contain 10.11% of the original models' weights, resulting in a 42.41% reduction in inference time. For the target problem, the re-engineered models even outperform the original models in classification accuracy by 5.85%. Moreover, reusing the re-engineered models inherits an average of 57% fewer defects than reusing the entire model. We believe our approach to reducing reuse overhead and defect inheritance is one important step forward for practical model reuse.
model reuse, deep neural network, re-engineering, DNN modularization
## I Introduction
Software reuse is the process of using existing software artifacts that would be otherwise created from scratch [1, 2, 3], which is widely deemed essential to improve software quality and development productivity. Instances of software reuse include the reuse of software libraries, components, APIs, etc. As today's software systems are increasingly incorporating AI techniques (e.g., deep learning), training DNN models has become an important task in the software development lifecycle. However, training DNN models is often known to be very costly, especially for models with billions of parameters and large datasets. To solve this problem, with the inspiration of software reuse, the software engineering community is paying more attention to DNN model reuse [4, 5, 6, 7, 8, 9, 10].
A trained model can be directly reused if it fits the target problem domain. However, reusing entire trained models may cause large overhead (e.g., inference time). Just like traditional software libraries which implement a large number of functions, a trained model may also have multiple functionalities (e.g., classification for multiple categories). When reusing a trained model, often only part of functionalities are required to solve the target problem. For instance, Google Vision API provides the service of multi-class classification with around 20,000 classes, but not all classes are necessary in practical scenarios. Suppose that a developer needs to build a fire alarm application [11] for determining whether a given image indicates "fire". Although only two classes ("fire" and "non-fire") are needed, if the developer directly invokes Google Vision API, all the 20,000 classes will be involved, which can incur much inference overhead caused by the unnecessary weights/neurons in the underlying DNN model.
A model trained to solve a similar problem can also be indirectly reused via transfer learning [12, 13]. Transfer learning consists of taking relevant features learned on a similar problem and optionally fine-tuning the trained model using the dataset of the target problem. Although effective in classification accuracy and training efficiency, reusing trained models may inherit their defects [14, 15, 16]. It has been shown that AI models are notoriously brittle to small perturbations on input data [15, 17], which allows attackers to craft adversarial examples for malicious attacks. When reusing a model, the weaknesses of the trained model can be inherited, putting the system under the threat of adversarial attacks.
To address the weaknesses of existing model reuse methods, one idea is to only reuse some parts of a trained model (e.g., by eliminating some weights or neurons) that are relevant to the target problem, as the weaknesses correlate with the
weights of a trained model [7, 18]. Identifying the relevant weights/neurons can be achieved with the fundamental concept of _re-engineering_ in software engineering [19, 20], which aims to improve software maintainability and reusability by enhancing or altering existing software. Borrowing the idea of software re-engineering, we propose _model re-engineering_ for DNN models, which searches for the target problem-related weights with the guidance of target problem-related metrics (e.g., classification accuracy) and removes irrelevant weights from an _original model_ (i.e., trained model), resulting in a re-engineered model. When solving a certain problem through direct or indirect reuse, the re-engineered model, which retains only relevant weights to certain functionalities (e.g., a part of classes in classification), is reused, hence reducing the reuse overhead and mitigating the defect inheritance.
Existing work, including model modularization [8, 9, 10] and model slicing [7], has preliminarily explored the idea of reusing part of trained models based on neuron activation and neuron coverage [14, 15]. For instance, relying on neuron coverage, model slicing [7] first computes the relevance between weights and the target problem, then deletes the irrelevant weights. Unfortunately, due to the lack of interpretability of DNN models, the effectiveness of using neuron coverage is still questionable [21, 22]. The neuron coverage-based work [7, 8, 9] is not accurate enough in identifying relevant weights and hence prefers to be conservative in removing weights, i.e., only a small number of weights are removed to avoid removing relevant weights. Therefore, the models obtained with the existing approaches [7, 8, 9] will retain lots of irrelevant weights or neurons, having the limitations of reuse overhead and defect inheritance. CNNSplitter [10] introduces the first search-based approach for modularizing CNNs. As CNNSplitter achieves modularization by searching for relevant convolution kernels with a genetic algorithm, this approach cannot be applied directly to other neural networks, such as the fully connected neural networks.
In this paper, we propose SeaM, a **Se**arch-based **M**odel re-engineering approach that can accurately identify relevant weights and hence removes as many irrelevant weights as possible. Different from the neuron coverage-based approaches [7, 8, 9], SeaM is directly guided by the target problem-related metrics, e.g., classification accuracy, to search for the relevant weights. Moreover, SeaM applies a gradient-based search method to identify relevant weights, which is more general and efficient than CNNSplitter [10]. Specifically, SeaM consists of three components: search space, performance estimation strategy, and search strategy. The search space consists of the masks of all candidate re-engineered models. The mask of a candidate records which weights of the original model should be retained or removed. The performance estimation strategy defines the objective function of the search as the weighted sum of the candidate's weight retention rate and its cross-entropy loss on the target problem's dataset (denoted as target dataset). The objective function is used to evaluate a candidate's performance, and the objective function value is sent to the search strategy to guide the next round of search. The search strategy applies a gradient-based search method to explore the search space efficiently. In each search round, the search strategy finds a candidate with better performance by minimizing the objective function value. SeaM performs the search and estimation processes iteratively, and stops when the objective function value converges. The candidate with the minimal objective function value will be regarded as the resultant re-engineered model. The re-engineered model can be reused directly, or indirectly via fine-tuning, which helps reduce reuse overhead and lower the risk of defect inheritance while achieving comparable performance (e.g., classification accuracy) to the original model.
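To illustrate the performance estimation strategy, the toy PyTorch sketch below combines a relaxed (sigmoid) weight mask with the weighted objective described above; the masking mechanics and all names are our assumptions, not SeaM's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Toy layer whose weights are gated by a relaxed (sigmoid) mask,
    mimicking the retain-or-remove decision recorded in a candidate's mask."""
    def __init__(self, in_f: int, out_f: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f) * 0.01)
        self.mask_logits = nn.Parameter(torch.zeros(out_f, in_f))

    def forward(self, x):
        mask = torch.sigmoid(self.mask_logits)   # ~ per-weight keep-probability
        return x @ (self.weight * mask).t()

    def retention_rate(self):
        return torch.sigmoid(self.mask_logits).mean()

def objective(layer, x, y, alpha: float = 1.0):
    """Weighted sum of cross-entropy on the target data and the weight
    retention rate, as in the performance estimation strategy above."""
    return F.cross_entropy(layer(x), y) + alpha * layer.retention_rate()

layer = MaskedLinear(32, 4)
loss = objective(layer, torch.randn(8, 32), torch.randint(0, 4, (8,)))
loss.backward()   # gradient-based search over the mask logits
```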
We evaluate SeaM using four representative CNN models on eight widely-used datasets. The experimental results first demonstrate that SeaM can accurately identify relevant weights and thus remove a large number of irrelevant weights. On average, a re-engineered model contains 89.89% fewer weights than the original model, and outperforms the original model by 5.85% in classification accuracy. Moreover, reusing a re-engineered model incurs less reuse overhead than reusing an original model, e.g., the average reduction in time cost for inference is 42.41%. Regarding defect inheritance, reusing the re-engineered model inherits an average of 57% fewer defects than reusing the original model.
The main contributions of this work are as follows:
* We propose the notion of _model re-engineering_, which re-engineers a trained deep learning model to improve its reusability.
* We propose a search-based model re-engineering approach named SeaM, which can accurately identify the weights relevant to a target problem and hence allows the re-engineered model to retain as few irrelevant weights as possible. SeaM can reduce the reuse overhead and lower the risk of defect inheritance in model reuse.
* We conduct extensive experiments using four representative CNN models on eight widely-used datasets. The results show that SeaM can remove a large number of irrelevant weights from the original models. Also, the experiments demonstrate the effectiveness of SeaM in overcoming the limitations of existing approaches.
Fig. 1: An example of direct model reuse.
## II Motivating Examples
Reusing a re-engineered model containing fewer irrelevant weights rather than an original model has several benefits. In this section, we introduce the applications and benefits of model re-engineering with two examples.
### _Reducing reuse overhead in direct reuse_
When a trained model satisfies the requirement of a target problem, a common way of reuse is to reuse the entire trained model on the target problem directly. However, there may be redundancy in the functionalities provided by the trained model [23, 8]. Redundancy in a trained model's functionality implies redundant weights, which may incur significant _reuse overhead_, including computational and time costs for inference, that is unnecessary for the target problem.
As shown in Figure 1, a simple fire alarm application [11] is used to illustrate the problem. In this example, the developer reuses a trained model (by calling the Google label_detection API) to classify an input image. An alarm will be triggered if the top-3 classification labels returned by the trained model include the keyword "fire". The requirement of the target problem is to classify an image into "fire" or "non-fire", while the reused trained model classifies an image into one of around 20,000 classes. As different weights could recognize features of different classes [24, 25], only a few relevant weights recognize the features of "fire". However, when reusing the trained model for inference, a lot of irrelevant weights are loaded into memory and involved in computation to produce intermediate results, incurring memory, computational, and time costs.
The example demonstrates that the requirement of a target problem may be only a small part of a trained model's functionality. Model re-engineering can remove part of the original model's weights irrelevant to the target problem and allows developers to reuse only the relevant weights. In this example, the weights irrelevant to the target problem are removed, resulting in a re-engineered model that only classifies "fire" and "non-fire". Compared to directly reusing the trained model, reusing the re-engineered model containing fewer weights could reduce the reuse overhead.
### _Mitigating defect inheritance in transfer learning_
When a trained model cannot satisfy the requirement of a target problem, a common form of reuse is transfer learning [26, 27, 28]. That is, a developer reuses a trained model and fine-tunes it on the target dataset to build a fine-tuned model that satisfies the requirement. This form of reuse is widely-used and effective; however, it faces the problem of defect inheritance [16, 29, 7, 30]. The example shown in Figure 2 illustrates defect inheritance and the potential attacks it enables. In this example, a public model trained on ImageNet [31] can perform classification with 1000 classes (including 59 bird classes [32]). To build a model for classifying birds with 200 classes, a developer reuses the trained model and fine-tunes it on the target dataset Caltech-UCSD Birds [33]. During fine-tuning, most of the weights in the pre-trained model are retained in the fine-tuned model. The adversarial examples that can fool the public trained model are thus still likely to be able to fool the fine-tuned model, which is called defect inheritance [16, 29, 7, 30].
The major reason for defect inheritance is indiscriminate reuse [18, 7]. Specifically, in conventional transfer learning, all the trained model's weights are reused, including both the relevant and the irrelevant ones to the target problem. As the target dataset is usually not very large, fine-tuning will not have much effect on changing the weights irrelevant to the target problem. As a result, the defects are mostly inherited in the fine-tuned model [16, 34].
Model re-engineering alters the original model by removing irrelevant weights, thus avoiding the inheritance of defects associated with these weights when the re-engineered model is reused. In this example, a re-engineered model retains only the weights relevant to the features of "bird". As a result, compared to reusing the original model, reusing the re-engineered model can reduce the defect inheritance while achieving comparable accuracy.
## III Our Approach
In this section, we introduce SeaM, a search-based approach to model re-engineering, which uses a gradient-based search method to find the target problem-related weights.
### _Overview_
As illustrated in Figure 3, the workflow of SeaM consists of three components: _search space_, _performance estimation strategy_, and _search strategy_. Given an original model (a 3-class classification in Figure 3), which consists of three neural network layers with fifteen weights, and a target dataset (binary classification in Figure 3), the model re-engineering process is summarized as follows:
(1) _Construction of Search Space_: A re-engineered model selectively removes part of the original model's weights according to a _mask_. A _mask_ is a bit vector \([0,1]^{L}\), where \(L\) is the number of weights in the original model, and each bit represents whether the corresponding weight is removed. In total, there are \(2^{L}\) candidate masks, each of which corresponds to a candidate re-engineered model. Consequently, the search space consists of \(2^{L}\) candidates. The mask is initialized with all element values as 1, representing that all weights are retained initially. The first and second steps along with the component Search Space in Figure 3 display the above process, where \(L=15\) and the search space size is \(2^{15}\).
Fig. 2: Fine-tuning a publicly available trained model. Inherited defects could be exploited by attackers.
(2) _Performance Estimation_: Given a candidate mask, the performance estimation strategy first constructs a candidate re-engineered model by removing weights according to the mask and appending a _head_ as the output layer. The _head_, which is a fully connected layer, is used to enable the candidate to adapt to the target problem, i.e., adapt the original _N_-classification model to the target _K_-classification problem. Then, the objective function is defined as the weighted sum of the _weight retention rate_ of the candidate and the _cross-entropy loss_ between the candidate's predictions and corresponding actual labels on the _target dataset_. The objective function is used to evaluate the performance of a candidate. The resulting objective function value will be fed back to the searching process to guide the next search round. The third step along with the component Performance Estimation Strategy in Figure 3 display the estimation process.
(3) _Searching Candidates_: The search strategy applies a gradient-based search method to explore the search space with the guidance of the objective function. The gradient-based search method not only efficiently explores the huge search space, but also optimizes the head at the same time. In each search round, the search strategy sends the updated mask and head as a new candidate to the performance estimation strategy. The fourth step along with the component Search Strategy in Figure 3 display the search process, where the head has two neurons as the target problem is binary classification.
SeaM iterates the search and estimation processes. When the objective function value converges, SeaM outputs the re-engineered model. In the example shown in Figure 3, the re-engineered model retains 7 out of 15 weights of the original model and performs binary classification. We present the technical details of each step in the following.
### _Construction of Search Space_
The goal of model re-engineering is to obtain a new model which retains only the target problem-related weights of the original model. Model re-engineering is formulated as a problem of searching for a new model from all candidate models, which selectively removes part of the original model's weights. If the searched model retains only the target problem-related weights, it is regarded as the re-engineered model. In this problem, the search space consists of all possible candidate re-engineered models. To facilitate a technical solution to this problem in practice, a _mask_ that records which weights are removed and retained in a candidate is used to represent a candidate, thereby omitting unnecessary details of a candidate, such as Max-pooling and Dropout layers. Consequently, in SeaM, the search space consists of all candidate masks.
Specifically, a mask is a bit vector \([0,1]^{L}\), where \(L\) is the number of weights in the original model, and 0 (or 1) represents the corresponding weight being removed (or retained). Figure 4 illustrates the use of a mask to remove weights from the original model. By multiplying the weights of the trained model with the mask, SeaM sets the values of irrelevant weights to zero and keeps the values of relevant weights. The weights with values set to zero are involved in the computation but have no effect on the prediction, thus achieving the effect of removing irrelevant weights. Note that, after model re-engineering, the computation of a re-engineered model involving the weights with zero values could be eliminated by special libraries (e.g., DeepSparse [35]), which will be discussed in Section IV-B.
After the construction of search space, a mask initialized to all element values of 1 is fed to the performance estimation strategy. That is, the starting point of the search is a candidate that retains all the original model's weights.
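To make the masking mechanism concrete, the following is a minimal sketch (not the authors' released code) of how a flat 0/1 mask can be applied to a PyTorch model's weights; the function name `apply_mask` is illustrative.

```python
import torch

def apply_mask(model: torch.nn.Module, mask: torch.Tensor) -> None:
    """Multiply each weight by its mask bit (Figure 4).

    mask: flat 0/1 tensor with one entry per weight (length L). Removed
    weights become zero but keep their positions, so the architecture
    is unchanged.
    """
    offset = 0
    for p in model.parameters():
        n = p.numel()
        p.data.mul_(mask[offset:offset + n].view_as(p))
        offset += n
```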
### _Performance Estimation_
The search aims to find the optimal mask, which corresponds to a candidate re-engineered model that retains only the target problem-related weights and can classify well on the target problem. To achieve the goal, the performance estimation strategy defines the objective function of the search as the weighted sum of _weight retention rate_ and _cross-entropy loss_. The weight retention rate can measure the number of weights retained by the candidate. The cross-entropy loss on the target dataset can measure the classification performance of the candidate on the target problem.
Fig. 3: The workflow of model re-engineering with SeaM.
Fig. 4: The construction of a re-engineered model using the mask and head.
Specifically, when evaluating a candidate's performance, SeaM first constructs the candidate, as the computation of cross-entropy loss requires running the candidate on the target dataset. Figure 4 illustrates the construction of a candidate re-engineered model. SeaM first multiplies the weights of the original model with the mask to remove part of the original model's weights, resulting in an intermediate model. As the output layer has three neurons, the intermediate model is still a model for 3-class classification. To adapt the candidate to the number of classes of the target problem, the head, a fully connected layer, is appended after the intermediate model as the output layer of the candidate. The head is randomly initialized in the first search round and will be updated along with the mask in the subsequent rounds. In this example, the head has two neurons, which transforms the 3-class prediction of the intermediate model to the binary prediction, allowing the candidate to adapt to the target problem.
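A hedged sketch of the candidate construction described above: the masked N-class model is followed by a randomly initialized fully connected head that maps its output to the K classes of the target problem. The class name `Candidate` and its interface are illustrative assumptions, not the paper's code.

```python
import torch.nn as nn

class Candidate(nn.Module):
    def __init__(self, masked_model: nn.Module, n_classes: int, k_classes: int):
        super().__init__()
        self.body = masked_model                     # intermediate N-class model
        self.head = nn.Linear(n_classes, k_classes)  # head: adapts N classes to K

    def forward(self, x):
        return self.head(self.body(x))
```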
After constructing the candidate, the cross-entropy loss \(\mathcal{L}_{ce}\) between the candidate's predictions on the target dataset and the actual labels is computed as follows:
\[\mathcal{L}_{ce}=-\sum_{i=1}^{K}t_{i}\log(P_{i}(\mathcal{M},\mathcal{H})), \tag{1}\]
where \(K\) is the number of classes in the target problem, \(\mathcal{M}\) and \(\mathcal{H}\) are the mask and head, \(P_{i}(\mathcal{M},\mathcal{H})\) is the prediction for the \(i\)-th class by a candidate constructed with \(\mathcal{M}\) and \(\mathcal{H}\), and \(t_{i}\) is the probability of the \(i\)-th class in the one-hot representation of the actual label, with a value of 0 or 1. A lower cross-entropy loss indicates that the candidate retains more target problem-related weights and hence achieves higher classification accuracy on the target dataset.
The weight retention rate \(\mathcal{L}_{wr}\) is computed directly from the mask:
\[\mathcal{L}_{wr}=\frac{1}{L}\sum_{i=1}^{L}\mathcal{M}[i], \tag{2}\]
where \(L\) is the number of weights in the original model. A lower weight retention rate indicates that the candidate retains fewer weights. Based on \(\mathcal{L}_{ce}\) and \(\mathcal{L}_{wr}\), the objective function \(\mathcal{O}\) is defined as follows:
\[\mathcal{O}=\mathcal{L}_{ce}+\alpha\times\mathcal{L}_{wr}, \tag{3}\]
where \(\alpha\) is a weighting factor and is empirically set to 1.0. To minimize \(\mathcal{O}\), SeaM tends to search for a candidate that retains only the target problem-related weights, as this candidate can achieve the highest classification accuracy while retaining as few weights as possible.
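The objective can be expressed in a few lines; this sketch assumes batched logits and integer labels and uses PyTorch's averaged cross-entropy in place of the per-sample sum in Eq. (1).

```python
import torch.nn.functional as F

def objective(logits, labels, mask, alpha=1.0):
    l_ce = F.cross_entropy(logits, labels)  # cross-entropy loss, Eq. (1)
    l_wr = mask.mean()                      # weight retention rate, Eq. (2)
    return l_ce + alpha * l_wr              # objective, Eq. (3)
```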
### _Searching Candidates_
Large models can have billions of parameters, resulting in an enormous search space. To explore this huge search space efficiently, our search strategy applies a gradient-based search method. In each search round, the search strategy finds a new candidate with a smaller objective function value by gradient descent, based on the objective function value of the candidate in the previous round. That is, the mask is updated by descending the gradient as follows:
\[\mathcal{M}^{\prime}=\mathcal{M}-\xi\times\nabla_{\mathcal{M},\mathcal{H}}\mathcal{O}, \tag{4}\]
\[\nabla_{\mathcal{M},\mathcal{H}}\mathcal{O}=\nabla_{\mathcal{M},\mathcal{H}}\mathcal{L}_{ce}+\alpha\times\nabla_{\mathcal{M}}\mathcal{L}_{wr}, \tag{5}\]
where \(\xi\) is the learning rate, and \(\mathcal{M}^{\prime}\) is the updated mask corresponding to a new candidate with a smaller objective function value.
When applying gradient descent to update a mask, it is important to note that gradient descent requires the search space to be continuous and differentiable [36], while the search space composed of masks is discrete and non-differentiable. Inspired by DARTS [36], the search strategy attaches a continuous number to each element of the mask, which can be considered as the relevance of the weight to the target problem. Then an indicator function \(\mathbb{1}_{(0,+\infty)}: X \rightarrow \{0,1\}\) is used to set the element values corresponding to the weights with relevance greater than zero to 1 and the other element values to 0. As the relevance is continuous, the search strategy uses gradient descent to update the relevance and thus can update the mask.
After satisfying the condition that the search space is continuous, another problem is that the indicator function is non-differentiable at \(x{=}0\), and its derivative equals 0 everywhere except at \(x{=}0\). This prevents standard gradient-based backpropagation from being applied directly to update the relevance [37, 38]. To address the problem, the straight-through estimator [38, 39] is used to estimate the gradient of the indicator function: during backpropagation, the indicator function is treated as the identity, so the incoming gradient is passed through unchanged.
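A minimal sketch of this trick in PyTorch, assuming one continuous relevance score per weight; `BinarizeSTE` is an illustrative name. The forward pass applies the indicator function, and the backward pass passes the gradient through unchanged.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, relevance):
        return (relevance > 0).float()  # indicator function 1_(0,+inf)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output              # straight-through: identity gradient

relevance = torch.randn(15, requires_grad=True)  # one score per weight
mask = BinarizeSTE.apply(relevance)              # differentiable 0/1 mask
```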
The head is updated along with the mask by descending the gradient \(\nabla_{\mathcal{M},\mathcal{H}}\mathcal{L}_{ce}\). After updating the mask and head, the search strategy sends them as a new candidate to the performance estimation strategy and starts the next round of search after getting the objective function value.
## IV Experiments
To evaluate the effectiveness of SeaM, in this section, we introduce the benchmarks and experimental setup as well as the experimental results. Specifically, we evaluate SeaM by answering the following research questions:
* RQ1: How effective is our model re-engineering approach in reusing trained models?
* RQ2: Does reusing a re-engineered model incur less overhead than reusing the original model?
* RQ3: Does reusing the re-engineered model mitigate the defect inheritance?
### _Experimental Setup_
**RQ1: How effective is our model re-engineering approach in reusing trained models?** Three representative CNN models are used in this research question, including VGG16 [40], ResNet20, and ResNet50 [41]. The three CNN models are trained on three public classification datasets, including CIFAR-10 [42], CIFAR-100 [42], and ImageNet [31]. In total, there are five trained CNN models in this experiment, including VGG16-CIFAR10, VGG16-CIFAR100, ResNet20-CIFAR10, ResNet20-CIFAR100, and ResNet50-ImageNet. Among these trained models, the first four models are publicly available from the third-party GitHub repositories [43], and the last model is provided by PyTorch [44].
Given a trained model for \(N\)-class classification, we perform model re-engineering to alter the trained model on two types of target problems, including binary and multi-class classification problems. For the binary classification problem, each class of the trained model corresponds to a target problem. In total, there are \(N\) target problems. A re-engineered model needs to classify whether an input belongs to the corresponding class or not. In this scenario, VGG16-CIFAR10, VGG16-CIFAR100, ResNet20-CIFAR10, and ResNet20-CIFAR100 are altered, and there are 220 re-engineered models in total. Due to the significant overhead of generating 1000 re-engineered models, ResNet50-ImageNet is not used here. We count the number of removed weights and compare the accuracy of re-engineered models and trained models on target problems to validate the effectiveness of SeaM. Also, we compare SeaM with the state-of-the-art modularization approach [8] to demonstrate the improvement achieved by our approach.
For the multi-class classification problem, a re-engineered model classifies an input into one of the concerning classes. In this scenario, we use CIFAR-100 and ImageNet as our datasets since there are publicly available schemes for dividing them into superclasses [42, 45]. A small-size model ResNet20-CIFAR100 and a large model ResNet50-ImageNet are chosen for a more comprehensive evaluation. Specifically, CIFAR-100 has divided the 100 classes into 20 superclasses, each containing 5 classes with semantically similar labels [42]. For ResNet20-CIFAR100, we follow this division; thus, there are 20 target problems, each corresponding to a superclass. For ResNet50-ImageNet, following the public division [45], the 1000 classes are divided into 67 superclasses, of which 3 superclasses are discarded because they contain only 1 class. The remaining 64 superclasses with a number of classes ranging from 2 to 119 form 64 target problems. In total, there are 84 re-engineered models. We count the number of removed weights and compare the accuracy of re-engineered models and trained models on target problems to validate the effectiveness of SeaM. Since the modularization approaches [8, 9] are designed for binary classification (i.e., each module performs binary classification) and cannot be applied to multi-class classification directly, we compare SeaM with the method of retraining from scratch.
When re-engineering an original model on a target problem, we follow the settings of our baselines [7, 43] to divide the target dataset into training and testing sets. The training set is used to search for a candidate, and the testing set is used to evaluate the candidate. The major parameters in SeaM include weighting factor \(\alpha\) (see Equation 3) and learning rate \(\xi\) (see Equation 4). The appropriate values of \(\alpha\) and \(\xi\) could vary from different trained models and are generally set to 1.0 and 0.05, respectively. The detailed settings and their impact on model re-engineering are described in the project webpage [46].
**RQ2: Does reusing a re-engineered model incur less overhead than reusing the original model?** In this experiment, the trained models and re-engineered models from RQ1 are reused, and we compare the reuse overhead of re-engineered models with that of the original models. Two metrics are used to measure the reuse overhead: the number of floating point operations (FLOPs) [47, 48] and the time cost for inference. The open-source tool fvcore [49] is used to calculate the FLOPs. Regarding inference time cost, the open-source tool DeepSparse [35] is used to run both original and re-engineered models and measure the inference time cost.
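As a hedged illustration of the FLOP measurement (the model and input resolution below are placeholders, not the exact models evaluated), fvcore counts the FLOPs of a single forward pass:

```python
import torch
import torchvision
from fvcore.nn import FlopCountAnalysis

model = torchvision.models.resnet18(num_classes=10).eval()
dummy = torch.randn(1, 3, 32, 32)               # one CIFAR-sized input image
print(FlopCountAnalysis(model, dummy).total())  # FLOPs per forward pass
```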
**RQ3: Does reusing the re-engineered model mitigate the defect inheritance?** In transfer learning, a pre-trained model generally has a large number of weights and classes, and a target problem has insufficient data. Therefore, VGG16 and ResNet20 are not suitable to be transferred, and CIFAR-10 and CIFAR-100 are unsuitable as target problems. Following the state-of-the-art approach ReMos [7], two widely-used transfer learning CNN models, ResNet18 and ResNet50, are used as trained models (i.e., the pre-trained models in transfer learning), which are trained on ImageNet and are provided by PyTorch [44]. Five popular transfer learning datasets are used as target datasets, including MIT Indoor Scenes [50], Caltech-UCSD Birds [33], 102 Category Flowers [51], Stanford 40 Actions [52], and Stanford Dogs [53].
We first apply SeaM to alter the trained model on the target dataset, resulting in a re-engineered model. Then we use the standard fine-tuning approach [26, 27] to fine-tune the re-engineered model on the target dataset, resulting in a fine-tuned model. We compare SeaM with two baselines, standard fine-tuning [26, 27] and the state-of-the-art approach ReMos [7]. Standard fine-tuning fine-tunes all of the trained model's weights on the target dataset. ReMos first sets a trained model's weights irrelevant to the target problem to zeros, and then uses standard fine-tuning to fine-tune the sliced model on the target dataset, resulting in a fine-tuned model. Following the setup of ReMos, we use accuracy (ACC) and defect inheritance rate (DIR) to measure and compare the effectiveness of SeaM and the baselines. The accuracy is computed as the correct classification rate on the target dataset \(D^{T}\):
\[ACC=\frac{1}{|D^{T}|}\sum_{(x,y)\in D^{T}}\mathbbm{1}[f(x)=y]. \tag{6}\]
The defect inheritance rate is computed as the misclassification rate on a set of malicious inputs \(S^{M}\):
\[DIR=\frac{1}{|S^{M}|}\sum_{(\hat{x},y)\in S^{M}}\mathbb{1}\left[f(\hat{x})\neq y \right]. \tag{7}\]
Same as ReMos [7], open source tool _advertorch_[54] is used to generate \(S^{M}\) based on the trained model and \(D^{T}\). We use the same parameters as ReMos when using advertorch.
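For clarity, Eqs. (6) and (7) amount to the following sketch, assuming arrays of predicted and true labels (the adversarial inputs \(\hat{x}\) are generated by advertorch as described above):

```python
import numpy as np

def accuracy(preds, labels):
    return np.mean(preds == labels)        # ACC, Eq. (6)

def defect_inheritance_rate(adv_preds, labels):
    return np.mean(adv_preds != labels)    # DIR on adversarial inputs, Eq. (7)
```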
In this experiment, we set the learning rate \(\xi{=}0.05\) and weighting factor \(\alpha{=}0.5\). Regarding the standard fine-tuning approach and ReMos, we also use the open source project [55] published by ReMos.
All the experiments are conducted on Ubuntu 20.04 server with 64 cores of 2.3GHz CPU, 128GB RAM, and NVIDIA Ampere A100 GPUs with 40 GB memory.
### _Experimental Results_
**RQ1: How effective is our model re-engineering approach in reusing trained models?**
In this research question, we present the model re-engineering results of SeaM for two types of target problems (i.e., binary and multi-class classification). Figure 5 shows the convergence process of SeaM on the two types of target problems. For instance, the left sub-figure shows the trend of weight retention rate and classification accuracy along with search rounds during re-engineering VGG16-CIFAR10 on a binary classification problem. The weight retention rate descends quickly in the first 50 rounds and then gradually converges. Although many weights are removed, the re-engineered model maintains accuracy comparable to the original model. The right sub-figure depicts the convergence process of ResNet20-CIFAR100 on a 5-class classification problem. Similar to re-engineering VGG16-CIFAR10, the weight retention rate descends quickly in the first 100 rounds and then gradually converges. The difference is that the accuracy of the re-engineered model may be lower than that of the original model at the beginning of the search. The reason is that the target dataset of the 5-class classification problem contains fewer samples than that of the binary classification problem (500 _vs._ 10,000); thus, the former requires more rounds to optimize the mask and head. As optimization rounds increase, the mask retains more related weights, and the head learns to classify better, so the re-engineered model recovers accuracy and eventually exceeds that of the original model. The time cost of the search varies with the models, target problems, and target datasets, whose sizes vary from 500 to 140,000 samples. For the binary classification problems, each round takes several seconds. For the multi-class classification problems, re-engineering ResNet20-CIFAR100 takes 2s per round; for ResNet50-ImageNet, as each superclass contains a different number of classes, the time cost varies from several seconds to a few minutes per round. In this example, re-engineering VGG16-CIFAR10 and ResNet20-CIFAR100 takes 4s and 2s per round, respectively.
Table I shows the results regarding the number of weights for the original and re-engineered models. For each trained model, we count the number of the original model's weights and the number of weights retained (i.e., non-zero weights) in the re-engineered model\({}^{1}\). For instance, VGG16-CIFAR10 is altered on 10 target problems, resulting in 10 re-engineered models. The average number of weights retained (i.e., non-zero weights) in a re-engineered model is 0.62 million. Compared to the original model having 15.25 million weights, a re-engineered model retains only 4.07% of the original model's weights, which means that SeaM achieves a 95.93% reduction in the number of weights. It is worth mentioning that, for the multi-class classification problems, although a re-engineered model must classify more classes, it still has far fewer weights than the trained model. For instance, a re-engineered model obtained by altering ResNet20-CIFAR100 can classify five classes; however, the re-engineered model has only an average of 0.04 million weights, and the reduction in the number of weights is 85.71%. The reason is that different classes may share the same features, which means that the weights needed to identify one more class may already be included in the existing weights. Consequently, for all six trained models, the number of weights retained in re-engineered models is significantly smaller than the number of weights in the original models. On average, over the six trained models, SeaM achieves an 89.89% reduction in the number of weights.
Footnote 1: As a head contains a negligible number of weights (e.g., 0.43% at most) compared to the original model, the head weight count is omitted in the experiment.
Fig. 5: The convergence process of SeaM on binary (left sub-figure) and multi-class (right sub-figure) classification problems.
Table II shows the averaged accuracy of original and re-engineered models, evaluated on the corresponding target problems. Again using VGG16-CIFAR10 as an example, the average accuracy of the 10 re-engineered models on the 10 target problems is 97.12%. The original model is also evaluated on the 10 target problems, and the average accuracy is 96.50%. Compared to the original model, the re-engineered models achieve comparable accuracy on the target problems, and the averaged accuracy increases by 0.62%. The reason for the improvement may be that model re-engineering enables the re-engineered model to fit the target problem while the original model is altered. Note that the fitting is mainly achieved by removing irrelevant weights instead of training the weights of the original trained model. On both binary and multi-class classification problems, for all six trained models, re-engineered models achieve comparable accuracy to the original models, and the averaged accuracy increases by 5.85%. Due to space limitations, the detailed results regarding the number of weights and accuracy are available at the project webpage [46].
When comparing SeaM with the existing modularization approach [8], we directly use the open source project [56] published by [8], which decomposes a trained model into modules, each for a binary classification problem. Since the tool of [8] and SeaM are implemented on Keras and PyTorch, respectively, they cannot directly alter each other's trained models. We attempted to convert the PyTorch and Keras trained models to each other; however, the conversion incurred a substantial loss of accuracy (5% to 10%) due to the differences in the underlying computation of PyTorch and Keras. To make the comparison as fair as possible, we run the modules and trained models published by [8] and compare SeaM to [8] based on the results of ResNet20-CIFAR10 and ResNet20-CIFAR100, as the two models are also used in [8]. Specifically, we analyzed the accuracy and the number of neurons of the original models and modules (the re-engineered models of SeaM). As modularization [8] decomposes a CNN model mainly by removing neurons (i.e., setting neurons to zero but retaining all weights) from convolutional layers, we analyzed the number of neurons rather than the number of weights.
As shown in Table III, for both ResNet20-CIFAR10 and ResNet20-CIFAR100, a module retains fewer neurons than the original model; however, the number of neurons in a module is reduced by only 18.59% on average. In addition, a module retains all the weights of the convolutional layers. Regarding accuracy, modules achieve a lower accuracy than the trained models on target problems, and the accuracy of a module reduces by 7.25% on average. Compared to modularization [8], model re-engineering can remove a large number of weights without impairing the accuracy. A major reason for the improvement of SeaM over [8] is that SeaM identifies the target problem-related weights more accurately. SeaM is a search-based approach that identifies the target problem-related weights directly based on the classification accuracy, while [8] identifies the target problem-related weights and neurons based on neuron activation, which only indirectly correlates with the accuracy.
We also compare SeaM to model retraining on multi-class classification problems. Model retraining reuses the architecture and hyperparameters of the trained model to retrain a new model from scratch on the target dataset. As both model re-engineering and retraining alter/train the same model (architecture) on the same target problem, but the latter may converge more slowly and even need to be run several times, the time cost of retraining would be higher than that of re-engineering. Regarding accuracy, as shown in Table IV, re-engineered models outperform retrained models for both ResNet20-CIFAR100 and ResNet50-ImageNet, and the average improvement is 7.84%. The reason for the improvement of model re-engineering may be the difference in the amount of data. The original model is trained on a large-scale dataset, while the retrained model is trained on a small-scale target dataset. Model re-engineering alters the original model to fit the target problem; thus, the re-engineered model achieves higher accuracy than the retrained model.
On average, a re-engineered model contains 89.89% fewer weights than the original model but outperforms the original model in accuracy by 5.85%.
**RQ2: Does reusing a re-engineered model incur less overhead than reusing the original model?**
One of the benefits of model re-engineering is to reduce the reuse overhead. As mentioned in Section IV-A, the number of FLOPs and inference time cost are used to measure the reuse overhead. We evaluated the original and re-engineered models from RQ1 on the two metrics to answer this research question.
Table V shows the number of FLOPs required by the original and re-engineered models to classify an image with resolution \(32\times 32\). Note that, following the related work [57, 58], when computing the number of FLOPs required by a re-engineered model with a sparse weight matrix, only the computations involving non-zero weights are considered. For instance, despite having the same number of weights as the original model, a re-engineered model obtained by altering VGG16-CIFAR10 has 95.93% (see Table I) of its weights set to zero. As the calculations associated with these zero weights can be eliminated by special libraries [59], they are not considered when calculating FLOPs. VGG16-CIFAR10 requires 314.28 million FLOPs, while the average number of FLOPs required by a re-engineered model is 75.53 million. SeaM achieves a 75.97% reduction in terms of FLOPs. On average, for the six trained models, SeaM reduces the FLOPs by 74.71%.
To verify that the reduction in the number of FLOPs reduces the inference time cost, the open-source library DeepSparse [35] is used to deploy and run the original and re-engineered models. Given an input with batch size 16, each re-engineered model or original model classifies the input 200 times, and the average time cost of classification is used to measure the inference time cost. Table VI shows the average inference time cost of each trained model and its corresponding re-engineered models. For instance, the inference time cost of VGG16-CIFAR10 is 6.82ms/batch, which means that VGG16-CIFAR10 requires 6.82ms to classify an input with batch size 16. The re-engineered models obtained by altering VGG16-CIFAR10 incur an average of 3.79ms/batch inference time cost. The reduction in inference time cost is 44.43% (calculated by \((1-3.79/6.82)*100\)). For all six trained models, SeaM achieves an average of 42.41% reduction in inference time cost, which demonstrates that the reduction in the number of weights and FLOPs reduces the inference time cost.
FLOPs measure the computation of the neural network layers that contain weights. However, the time cost for inference also involves other operations, such as activation functions, dropout, and tensor reshaping. Therefore, the reductions in the number of FLOPs and in time cost differ.
Reusing a re-engineered model incurs less reuse overhead than reusing an original model, while achieving even higher accuracy in inference than the original model.
Removing the weights that are not relevant to the target dataset can reduce DIRs and improve the robustness of the fine-tuned model. Overall, for the two models on five datasets, the averaged DIRs for fine-tuning the re-engineered model and fine-tuning the original model (i.e., the standard fine-tuning approach) are 16% and 73%, respectively. The reduction in DIR is 57%, demonstrating the effectiveness of SeaM in reducing defect inheritance.
Compared to ReMos, SeaM achieves lower DIRs and higher ACC. For instance, for ResNet18, the average DIRs achieved by SeaM and ReMos are 19% and 40%, respectively. The DIR achieved by SeaM is roughly half of that achieved by ReMos. Regarding ACC, the average ACC achieved by SeaM and ReMos is 79% and 74%, respectively. Overall, for the two models on five datasets, the average DIRs and ACC for SeaM and ReMos are (16%, 82%) and (29%, 78%), respectively. SeaM is 13% lower and 4% higher than ReMos in terms of DIR and ACC, respectively. The reason for the improvement in DIR achieved by SeaM is the considerable reduction in the number of weights. ReMos removes only 10% and 3% of the weights for ResNet18 and ResNet50, respectively. Compared to ReMos, SeaM can remove more irrelevant weights; the reduction in the number of weights is about 50% for both ResNet18 and ResNet50.
It is worth mentioning that there are some differences between the results shown in Figure 6 and Figure 7 and the results shown in ReMos [7], especially in terms of DIR. For instance, for ResNet18, the average DIRs achieved by ReMos shown in [7] and in our work are 15% and 40%, respectively. The reason for the differences is that ReMos uses additional Dropout layers for fine-tuning while ours does not. To make a more comprehensive comparison of ReMos and SeaM, we follow the experimental setup of ReMos [7] and plot the results on ResNet18 in Figure 8. As shown in Figure 8, after adding Dropout layers, both SeaM and the baselines achieve better results, as Dropout layers help increase the robustness of models. The average DIRs achieved by the standard fine-tuning approach, ReMos, and SeaM are 44%, 20%, and 12%, respectively. Consistent with the above conclusion, our approach outperforms ReMos. Moreover, we observe that the DIRs of ResNet50 are lower than those of ResNet18. The reason for this could be that the increased number of weights helps increase the robustness. This observation aligns with prior works [7, 60].
Overall, SeaM inherits far fewer defects compared to standard fine-tuning and the state-of-the-art approach.
## V Threats to Validity
**External validity:** Threats to external validity relate to the generalizability of our results. While the notion of re-engineering a trained model to improve its reusability is general, we have only evaluated our approach on CNN models in this paper. The effectiveness on other types of DNNs, such as LSTM and transformer, remains to be evaluated. However, during the search, the objects removed are weights, not CNN-specific structures such as convolutional kernels. Also, the search is guided by the classification accuracy and the number of retained weights. Therefore, the principles of our proposed approach are not specific to CNN and are applicable to other types of DNNs as well. We will further investigate it in our future work.
**Internal validity:** An internal threat comes from the choice of trained models and datasets. To mitigate this threat, we use four representative trained CNN models and evaluate SeaM on eight well-organized and widely-used datasets.
**Construct validity:** A threat relates to the suitability of our evaluation metrics. Evaluating the quality of DNN models remains an open problem. Measuring only the misclassification rate of the adversarial samples may not be comprehensive enough. However, the misclassification rate of adversarial samples is a representative metric and has also been widely used in related work [7, 30].
Fig. 8: The accuracy (ACC) and defect inheritance rate (DIR) on ResNet18 with Dropout layers.
Fig. 7: The accuracy (ACC) and defect inheritance rate (DIR) on ResNet50.
## VI Related Work
_Reusing trained DNN models:_ Our work is related to reusing DNN models, including direct reuse [4, 5] and transfer learning [12, 61]. The work related to direct reuse recommends a trained model for developers and allows developers to reuse the model on the target problem directly. For instance, SDS [4] evaluates trained models using a few efficient test data that can discriminate multiple trained models and then recommends the best one to reuse. Transfer learning techniques reuse a model trained to solve a similar problem and fine-tune the reused model on the target problem. For instance, ResNet [41] trained on ImageNet for 1000-class classification is widely reused to develop new models for various target problems by fine-tuning its weights on the target datasets [61, 62]. The techniques mentioned above support model reuse; however, they reuse the entire trained model or the vast majority of the model's weights. In contrast, this work allows developers to reuse only the target problem-related weights, thus reducing reuse overhead and defect inheritance.
_DNN modularization and slicing:_ Similar to our work, DNN modularization [8, 9] and slicing [7] attempt to reuse parts of trained models. For instance, DNN modularization [8, 9] decomposes a trained model into modules based on neuron activation [21, 22]. A module retains part of the trained model's neurons and can be reused to solve a binary classification problem. Relying on neuron coverage [21, 22], DNN slicing [7] removes irrelevant weights and reuses the slice with relevant weights for fine-tuning. Compared to DNN modularization and slicing, our work is search-based model re-engineering, which can remove many more irrelevant weights and hence reduce more reuse overhead and defect inheritance. Our previous work CNNSplitter [10] concerns the modularization of CNN models through searching with genetic algorithms and fixes the weakness of a model by replacing the corresponding part with a better module. In contrast, this work can realize the modularization of general neural network models, and the search algorithm is more efficient.
_DNN pruning:_ Iterative magnitude pruning [57, 63, 64] is one of the mainstream network pruning techniques; it prunes the weights that are unimportant for the original problem to reduce the computational overhead of inference on the original problem. Our work removes the weights that are irrelevant to a target problem to reduce reuse overhead and defect inheritance on the target problem. Apart from their differences in objectives, iterative magnitude pruning compresses a model by repeatedly removing unimportant weights and retraining the retained weights over several rounds, while SeaM removes irrelevant weights without changing the retained weights.
## VII Conclusion
In this work, we propose the notion of _model re-engineering_, which re-engineers a trained DNN model to improve its reusability. Based on the notion, we propose a search-based model re-engineering approach named SeaM, which can re-engineer a trained model by removing many irrelevant weights. Extensive experiments with four representative CNN models on eight widely-used datasets demonstrate the effectiveness of SeaM in reusing trained models as well as reducing reuse overhead and defect inheritance.
Our source code and experimental data are available at: **[https://github.com/qibinhang/SeaM](https://github.com/qibinhang/SeaM)**.
## Acknowledgement
This work was supported partly by National Natural Science Foundation of China under Grant Nos.(61932007, 61972013, 62141209, 62202026) and Australian Research Council (ARC) Discovery Project DP200102940 and sponsored by Huawei Innovation Research Plan.
|
2308.06293 | Target Detection on Hyperspectral Images Using MCMC and VI Trained
Bayesian Neural Networks | Neural networks (NN) have become almost ubiquitous with image classification,
but in their standard form produce point estimates, with no measure of
confidence. Bayesian neural networks (BNN) provide uncertainty quantification
(UQ) for NN predictions and estimates through the posterior distribution. As NN
are applied in more high-consequence applications, UQ is becoming a
requirement. BNN provide a solution to this problem by not only giving accurate
predictions and estimates, but also an interval that includes reasonable values
within a desired probability. Despite their positive attributes, BNN are
notoriously difficult and time consuming to train. Traditional Bayesian methods
use Markov Chain Monte Carlo (MCMC), but this is often brushed aside as being
too slow. The most common method is variational inference (VI) due to its fast
computation, but there are multiple concerns with its efficacy. We apply and
compare MCMC- and VI-trained BNN in the context of target detection in
hyperspectral imagery (HSI), where materials of interest can be identified by
their unique spectral signature. This is a challenging field, due to the
numerous permuting effects practical collection of HSI has on measured spectra.
Both models are trained using out-of-the-box tools on a high fidelity HSI
target detection scene. Both MCMC- and VI-trained BNN perform well overall at
target detection on a simulated HSI scene. This paper provides an example of
how to utilize the benefits of UQ, but also to increase awareness that
different training methods can give different results for the same model. If
sufficient computational resources are available, the best approach rather than
the fastest or most efficient should be used, especially for high consequence
problems. | Daniel Ries, Jason Adams, Joshua Zollweg | 2023-08-11T01:35:54Z | http://arxiv.org/abs/2308.06293v1 | # Target Detection on Hyperspectral Images Using MCMC and VI Trained Bayesian Neural Networks
###### Abstract
Neural networks (NN) have become almost ubiquitous with image classification, but in their standard form produce point estimates, with no measure of confidence. Bayesian neural networks (BNN) provide uncertainty quantification (UQ) for NN predictions and estimates through the posterior distribution. As NN are applied in more high-consequence applications, UQ is becoming a requirement. Automating systems can save time and money, but only if the operator can trust the system outputs. BNN provide a solution to this problem by not only giving accurate predictions and estimates, but also an interval that includes reasonable values within a desired probability. Despite their positive attributes, BNN are notoriously difficult and time consuming to train. Traditional Bayesian methods use Markov Chain Monte Carlo (MCMC), but this is often brushed aside as being too slow. The most common method is variational inference (VI) due to its fast computation, but there are multiple concerns with its efficacy. MCMC is the gold standard and, given enough time, will produce the correct result. VI, alternatively, is an approximation that converges asymptotically. Unfortunately (or fortunately), high consequence problems often do not live in the land of asymptopia, so solutions like MCMC are preferable to approximations.
We apply and compare MCMC- and VI-trained BNN in the context of target detection in hyperspectral imagery (HSI), where materials of interest can be identified by their unique spectral signature. This is a challenging field, due to the numerous permuting effects practical collection of HSI has on measured spectra. Both models are trained using out-of-the-box tools on a high fidelity HSI target detection scene. Both MCMC- and VI-trained BNN perform well overall at target detection on a simulated HSI scene. Splitting the test set predictions into two classes, high confidence and low confidence predictions, presents a path to automation. For the MCMC-trained BNN, the high confidence predictions have a 0.95 probability of detection with a false alarm rate of 0.05 when considering pixels with target abundance of 0.2. VI-trained BNN have a 0.25 probability of detection for the same, but their performance on high confidence sets matched MCMC for abundances >0.4. However, the VI-trained BNN on this scene required significant expert tuning to get these results while MCMC worked immediately. On neither scene was MCMC prohibitively time consuming, as is often assumed, but the networks we used were relatively small. This paper provides an example of how to utilize the benefits of UQ, but also to increase awareness that different training methods can give different results for the same model. If sufficient computational resources are available, the best approach rather than the fastest or most efficient should be used, especially for high consequence problems.
## 1 Introduction
Aerial and aerospace assets collect imaging data in a variety of forms, and the remote detection of trace, sub-pixel targets in that image data is an important topic for a variety of applications. Hyperspectral imagery (HSI) contains hundreds of contiguous spectral bands which provide powerful information for detecting materials that would otherwise be nearly impossible to find. Aerial sensors with HSI measuring capabilities collect data that looks like Figure 1, a three-dimensional cube containing both spatial information and spectral information. Target detection using HSI is a research area which has received significant attention in recent years [1, 2, 3], and results have shown it is effective at finding rare targets [3]. Uncertainty quantification (UQ) of model predictions is becoming a necessity in high consequence problems [4, 5].
Aerospace sensors searching for targets of interest often use automated algorithms to make detections.
Figure 1: Example of a hyperspectral image cube. Spatial coordinates are shown in the X/Y plane while the spectral coordinate is the Z plane. Image credit: [https://en.wikipedia.org/wiki/Hyperspectral_imaging](https://en.wikipedia.org/wiki/Hyperspectral_imaging).
Detectors such as the adaptive cosine estimator have proved time and again to give strong results [6], but in the era of machine learning (ML) and artificial intelligence (AI), the common thought is that more advanced algorithms and detectors ought to provide better performance and generalization. However, traditional ML and AI methods only provide a best estimate, and do not provide an estimate of the model's confidence in itself. This can be problematic for many high-risk aerospace target detection applications.
Bayesian neural networks (BNN) were first popularized by David MacKay [7], [8] and his student Radford Neal [9], [10]. Neal's dissertation introduced Hamiltonian Monte Carlo (HMC) to sample the posterior distribution of a BNN, providing a practical way of training. To this day, HMC is considered the gold standard for BNN training due to its theoretical backing and lack of approximations. Before HMC, Gaussian approximations were typically used [7], [8]. [11] followed this up with alternative Markov Chain Monte Carlo (MCMC) methods for fixed architectures and [12] proposed an approach which treated the model architecture as unknown and estimated its posterior distribution with reversible jump MCMC (RJMCMC). [13] extended this work on RJMCMC.
There were early applications of BNN in the statistics literature, including in time series [14], medicine [15], [16], and with count data [17]. [18] provides a review of BNN and their common estimation methods at the time, MCMC, Gaussian approximation, and early variational inference (VI), from a statistical perspective.
Due to the increasing size of network architectures and the associated computational costs, faster sampling or approximation methods to obtain posterior distributions were explored. [19] introduced stochastic gradient HMC which uses a noisy estimate of the gradient from a subset of the data instead of the exact computation using all the data. [20] extends this by applying variance reduction tricks which help speed convergence.
Variational inference (VI) is the most popular method of Bayesian inference for NN [21]. [22] gives an extensive review of VI methods. [23] introduced Bayes by Backprop which is a practical stochastic VI algorithm to train a BNN. A common criticism of standard implementations of VI is the mean-field assumption, or assuming posterior independence of all parameters. [24], [25] each proposed new approaches to VI which allowed for training of full covariance variational distributions. [26] introduced probabilistic backpropagation for scalable learning. Wang and Blei (2019) [27] established the frequentist consistency properties of VI, including asymptotic posterior Normality and consistency and asymptotic Normality of the posterior VI expectation, establishing VI as a serious large-sample alternative to MCMC.
Although the introduction of new methods to provide UQ in deep learning is popular, there is less focus on ensuring the UQ provided by these methods is useful and transparent. [28] compared UQ performance using various BNN training methods and various metrics. The authors concluded that a new metric for assessing predictive uncertainty is needed. [29] argue, using various performance metrics, that standard BNN can perform poorly with respect to UQ and propose using temperature scaling, otherwise known as weighted likelihood, to make training adjustments.
In this paper, we explore the performance of two different estimation methods for Bayesian inference and prediction. Although both methods will give the same results asymptotically under mild conditions, it is not always clear how fast asymptopia arrives, nor do many applications in aerospace typically have large numbers of (labeled) observations of targets of interest. The quality of approximation for MCMC is determined by computer run time, or how long the MCMC sampler is run, while the quality of approximation for VI is determined by the data sample size. In a data poor environment, the ability of VI to produce similar results to MCMC needs to be assessed. By estimating the same BNN with MCMC and VI on the same training data, we evaluate the relative performance of each. We compare MCMC and VI on two data sets, a simple simulated regression problem and a high fidelity simulated HSI target detection problem.
This paper is organized as follows. In Section 2, the high fidelity simulated HSI scene, Megascene, is described as well as what the targets are and how they were added to the scene. In Section 3, the model and model fitting details are explained. Results are presented in Section 4 regarding predictive power and its intersection with UQ. Section 5 summarizes our conclusions and discusses implications and future research directions.
## 2 Data
In order to have a scene for which we know ground truth and that represents our problem, we opted to create a synthetic dataset from DIRSIG Megascene [30]. Megascene is modeled after a section of Rochester, NY and contains manmade objects such as houses and roads as well as natural features such as trees and grass. The simulator uses an AVIRIS-like sensor measuring 211 spectral bands ranging from 0.4 to 2.5 \(\mu m\), creating a datacube similar to Figure 1. The images were created over the scene at an elevation of 4 km, which gives a pixel size of 1 m\({}^{2}\). A total of nine images were generated across three atmospheres (mid-latitude summer (MLS), sub-arctic summer (SAS), tropical (TROP)) and three times of day (1200, 1430, 1545). Figure 2 shows a pseudo color rendering of MLS 1200.
To serve as targets, we manually inserted green discs randomly through each scene. Each scene had 125 discs ranging in size from 0.1 to 4m radii, meaning some targets filled multiple pixels while others filled a small fraction of a pixel. A subset of the discs was made such that they were partially hidden beneath foliage, so not all the targets were complete circles. Figure 3 (Figure 6 in [3]), shows an example of several different sized green target discs placed in Megascene. Some of the target discs were placed under foliage, as shown on the right image.
Figure 4 shows the spectra for several different green objects used to create Megascene; these are common confusers for our green paint target. Most pixels in the dataset will be a combination, or mixture, of several materials' spectra since, with a pixel size of 1 m\({}^{2}\), there is often more than one material in the area.
The left half of MLS-1200 was used for training the BNN models. The right halves of all nine scenes were used as test sets. By only training on one scene at a particular time and atmosphere, we are able to understand the model's ability to detect targets in scenes it has never seen before. This is particularly important for our application since we cannot expect to have training data for all atmospheres and times of day due to expense and practicality reasons. Even though
aerospace sensors might be able to collect data at many different atmospheres and times of day, it is costly to have labeled data at all these combinations.
## 3 Methods
The BNN contained 3 hidden layers, each with 10 neurons activated with a sigmoid function. Although ReLU tends to be more computationally efficient, we found sigmoid to give better results. The priors on the weights were all Normal with mean 0 and standard deviation 10. The standard deviation was selected partially to optimize performance, making this a quasi-Empirical Bayes approach. Although we do not believe this is the best approach, others are working on BNN priors [31] and this serves as a proof of concept.
Because the features are spectra, they are functional data by nature and contain a correlated structure. The structure has physical meaning itself and can be used for model explainability [32]. To account for this dependence and help reduce the dimensionality of the inputs, we employ functional principal component analysis (fPCA) on the feature functions. We then use the first 25 functional principal components (fPC) for each pixel as features. The first 25 fPCs explain 99.999% of the variability, and later fPCs did not add to the predictive power of the model.
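Because the spectra are densely and regularly sampled, fPCA in this setting reduces to ordinary PCA on the discretized curves; the following is a hedged sketch with a random stand-in for the (pixels x 211 bands) spectra matrix, and a dedicated fPCA library could be substituted.

```python
import numpy as np
from sklearn.decomposition import PCA

spectra = np.random.rand(5000, 211)          # stand-in for measured spectra
fpca = PCA(n_components=25)
X_fpc = fpca.fit_transform(spectra)          # first 25 fPC scores per pixel
print(fpca.explained_variance_ratio_.sum())  # fraction of variability explained
```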
Formally, the model is written as:
\[Y_{i}\stackrel{iid}{\sim}Bernoulli(\pi_{i}),\quad i=1,2,...,n \tag{1}\]
\[\pi_{i}=f(\mathbf{\theta},\mathbf{x}_{i}) \tag{2}\]
\[\theta_{j}\stackrel{iid}{\sim}N(0,10),\quad j=1,2,...,J \tag{3}\]
where \(Y_{i}\) is a binary random variable for whether pixel \(i\) contains target with parameter \(\pi_{i}\equiv P(Y_{i}=1)\), the probability that pixel \(i\) contains target. This is itself a deterministic function of the 25 fPCs for pixel \(i\), \(\mathbf{x}_{i}\), the NN model \(f(\cdot,\cdot)\), and the NN's parameters \(\mathbf{\theta}\). Note \(\pi_{i}=\pi_{i}(\mathbf{x}_{i},\mathbf{\theta})\), but we drop the dependence for brevity. Because the probability \(\pi_{i}\) is an unknown parameter, it is treated as a distribution in Bayesian statistics, with its prior distribution inferred by the prior on \(\mathbf{\theta}\). Denote the vectors \(\mathbf{Y}=(Y_{1},Y_{2},...,Y_{n})\) and \(\mathbf{\theta}=(\theta_{1},\theta_{2},...,\theta_{J})\).
All pixels with a target abundance greater than zero were used for training, along with a random sample of about ten times as many pixels with target abundance zero. This subsetting sped up training significantly due to the sparse nature of the targets in the scene; reducing the number of non-targets did not affect the model's performance. We use numpyro and pyro to fit the BNN via MCMC and VI, respectively [33, 34].
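To make the model concrete, a minimal numpyro sketch is given below. The layer sizes, sigmoid activations, and \(N(0,10)\) weight priors follow the description above; the function name `bnn_model` and the exact parameterization are our own illustrative choices, not the authors' code.

```python
import jax
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def bnn_model(x, y=None):
    # x: (n, 25) matrix of fPC features; three hidden layers of 10 sigmoid units
    sizes = [x.shape[1], 10, 10, 10, 1]
    h = x
    for i in range(len(sizes) - 1):
        w = numpyro.sample(f"w{i}", dist.Normal(0.0, 10.0)
                           .expand([sizes[i], sizes[i + 1]]).to_event(2))
        b = numpyro.sample(f"b{i}", dist.Normal(0.0, 10.0)
                           .expand([sizes[i + 1]]).to_event(1))
        h = h @ w + b
        if i < len(sizes) - 2:
            h = jax.nn.sigmoid(h)
    # Bernoulli likelihood; pi_i = sigmoid(logits_i)
    numpyro.sample("y", dist.Bernoulli(logits=h.squeeze(-1)), obs=y)

# Two chains, 500 warm-up iterations, 2500 posterior draws (Section 4)
mcmc = MCMC(NUTS(bnn_model), num_warmup=500, num_samples=2500, num_chains=2)
```

Calling `mcmc.run(jax.random.PRNGKey(0), x, y)` then yields draws of the weights, from which posterior samples of \(\pi_{i}\) follow by pushing \(\mathbf{x}_{i}\) through the sampled networks.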
BNNs output a posterior distribution for \(\pi_{i}\), denoted \(p(\pi_{i}|\mathbf{Y})\). A posterior mean can then be used as a point estimate by taking \(E(p(\pi_{i}|\mathbf{Y}))\). Uncertainty around \(\pi_{i}\) can be quantified using confidence intervals (sometimes called credible intervals). These intervals are constructed by taking the \(\alpha/2\)th and \((1-\alpha/2)\)th quantiles of \(p(\pi_{i}|\mathbf{Y})\), denoted as \(L_{\alpha,i},U_{\alpha,i}\), respectively, to create a \(1-\alpha\) confidence interval.
Figure 4: Spectra of different green objects used in the creation of Megascene.
Figure 3: Close up of several different sized target green discs in Megascene. On the right is a further zoomed in picture of one of the target discs that is partially hidden by foliage. Image credit to [3].
Figure 2: Pseudo color render of Megascene MLS 1200.
Based on Bayesian probability, \(P(L_{\alpha,i}<\pi_{i}<U_{\alpha,i}|{\bf Y})=1-\alpha\). The value of \(\alpha\) is chosen depending on the risk desired.
One way to incorporate the UQ provided by the BNN is using high confidence (HC) sets. An HC set contains predictions that are either close to 0 or 1, indicating a high probability of either no-target or target, and with a corresponding confidence interval that spans no more than a specified width. More formally, pixel \(i\) is included in the HC set \(\Omega\):
\[i\in\Omega\iff P(\pi_{i}<{\cal L}|{\bf Y})>1-\alpha\mbox{ OR }P(\pi_{i}>{\cal U}|{\bf Y})>1-\alpha \tag{4}\]
where \({\cal L}\) (\({\cal U}\)) is the value the estimated target probability for pixel \(i\) (\(\pi_{i}\)) needs to be less (greater) than, and \(1-\alpha\) is the desired confidence that \(\pi_{i}\) is less (greater) than \({\cal L}\) (\({\cal U}\)). Therefore, if pixel \(i\) is in the high confidence set, we can say there is at least a \(1-\alpha\) probability that \(\pi_{i}\) is less than \({\cal L}\) (greater than \({\cal U}\)). This ensures two things: (i) the estimated probability of pixel \(i\) containing target is either close to 0 or close to 1, as defined by the chosen \({\cal L}\) and \({\cal U}\), and (ii) we are confident in that estimated probability, since there is at least a \(1-\alpha\) chance that \(\pi_{i}\) is less than \({\cal L}\) (greater than \({\cal U}\)). For this application we choose \({\cal L}=0.2\), \({\cal U}=0.8\), \(\alpha=0.2\). This set contains predictions which are strongly target or non-target and in which the model has high confidence.
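Given posterior draws of \(\pi_{i}\), membership in \(\Omega\) reduces to estimating two tail probabilities per pixel. A minimal sketch follows; the function name and the array layout of `post_samples` are our own assumptions.

```python
import numpy as np

def high_confidence_set(post_samples, L=0.2, U=0.8, alpha=0.2):
    # post_samples: (num_draws, n_pixels) posterior draws of pi_i
    p_below = (post_samples < L).mean(axis=0)  # estimates P(pi_i < L | Y)
    p_above = (post_samples > U).mean(axis=0)  # estimates P(pi_i > U | Y)
    # Eq. (4): include pixel i if either tail probability exceeds 1 - alpha
    return (p_below > 1 - alpha) | (p_above > 1 - alpha)
```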
## 4 Results
The MCMC model ran 2 chains for 2500 iterations with a 500-iteration burn-in period. The chains were run in parallel, and total MCMC training time was about 21 minutes on an Intel(R) Xeon(R) CPU E5-2650 v4 2.20GHz. Posteriors estimated from MCMC must be checked for proper convergence, but overparameterized BNN weights may not be identifiable; checking convergence of the predictions across chains is the alternative that ensures the MCMC behaves accordingly. We checked many pixels' prediction traces and found no signs of non-convergence, meaning the MCMC is behaving as expected. VI was optimized using Adam with a learning rate of 0.01 for 450 epochs, monitoring the validation loss for overfitting. The VI training was much faster, taking only about 4 seconds on the same CPU. The VI model could have been trained using a GPU, which often trains faster than a CPU, but given the relatively simple architecture it was unnecessary to do so. In this situation, 21 minutes is not cost prohibitive for a real-life target detection algorithm, so the difference in computation time is of minor concern compared to model performance.
Figure 5 shows the proportion of data in the HC set for each scene for both the MCMC- and VI-trained models. Overall, the MCMC model creates larger HC sets. For the MLS and SAS scenes, the MCMC-trained model's HC set contains over twice as many pixels as VI's. The MCMC model has a large drop in HC set pixels for TROP, but still more than VI, while VI is fairly constant across scenes. This is interesting since theory tells us uncertainties from mean-field VI should underrepresent the true uncertainties due to independence assumptions on the posterior; although this result is not exactly a test of that theory, it is unexpected that the VI model appears to have refrained from being overly confident. However, note that the proportion of data within the HC set is not an evaluation of the model but rather an outcome: pixels are included or excluded based on the UQ given by the model, so an overly conservative model may not include all pixels that are truly high confidence, and an overly optimistic model may include pixels that have no business being called highly confident. Future work needs to address the quality of UQ given by models to ensure membership in HC sets retains its quantitative meaning.
Figures 6 and 7 show ROC curves for the MCMC- and VI-trained BNNs on the full SAS 1430 scene, respectively. We show ROC curves from only one of the nine scenes, but the trends are the same for all scenes. The lines denote ROC scores for the sets of pixels containing target in proportions up to the denoted fraction. This allows evaluation at different abundance levels to see how good the models are at finding different sized targets, which is important because, depending on the resolution of the remote sensing device, a pixel could represent a relatively large area, and sub-pixel detection is necessary. Overall, the MCMC BNN has much better performance, often having area under the ROC curve about 10 percentage points higher. Additionally, the MCMC model tends to have much higher detection rates at low false alarm rates, which is important in high consequence national security problems.
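As a sketch of this abundance-stratified evaluation, the function below computes AUC over the background pixels together with target pixels up to a given abundance fraction; the function name and fraction grid are our own assumptions, not the authors' code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_by_abundance(abundance, scores, fractions=(0.05, 0.1, 0.2, 0.5, 1.0)):
    # AUC over background pixels plus target pixels with abundance <= frac
    background = abundance == 0
    aucs = {}
    for frac in fractions:
        target = (abundance > 0) & (abundance <= frac)
        mask = background | target
        aucs[frac] = roc_auc_score(target[mask].astype(int), scores[mask])
    return aucs
```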
Figures 8 and 9 show ROC curves for the MCMC- and VI-trained BNNs on the HC set only on SAS 1430, respectively. These ROC curves show the performance when we only consider pixels for which the model is confident in its prediction. In this case, there is a slight bump in performance for the MCMC model for abundances down to 20%, and then actually a degradation in performance for abundances \(<\)10%. This is not surprising, since the model is more likely to be confident for pixels which contain a high proportion of target than for pixels containing a very small amount, where the target signature can be mixed with the background. The VI-BNN sees a large boost in performance for abundances \(>\)50%, and slight degradations for abundances \(<\)50%, likely for the same reasons.
Figure 10 shows detection probabilities at a constant false alarm rate of 5%, averaged over all nine scenes, for both the HC sets and the full test sets, for the MCMC- and VI-trained BNN. The MCMC BNN performs much better at low pixel target abundances compared to VI; this effect diminishes somewhat at an abundance level of 40%. HC sets also provide a boost for both MCMC and VI methods, but at different abundance levels. Although the BNN model is trained in two different ways, we should still expect results from the same model to differ only slightly based on training approach, and these prediction results show otherwise. One consideration is that more tuning could be done with the VI algorithm to improve optimization and thus prediction, whereas tuning the MCMC was fairly straightforward for this application.
Another common misconception is that the predicted probability of target, the output of a standard NN classifier, fully quantifies the uncertainty of the prediction. This predicted probability only contains the aleatoric uncertainty, as determined by the variance of a Bernoulli random variable (in the case of target detection). It does not contain the epistemic uncertainty, part of which is the modeling and sampling uncertainty. A predicted target probability of 0.99 does not automatically imply we should be confident in the prediction itself; rather, that is the model's best guess if it were forced to predict. The confidence interval from the posterior determines the model's confidence in the prediction, and intervals are not always symmetric. It is possible for a prediction of 0.99 to have a 90% confidence interval ranging from 0.01 to 0.999.
Figure 8: ROC evaluated on HIGH CONFIDENCE SAS 1430 test scene for MCMC-trained BNN. Lines denote ROC scores for the sets of pixels containing target in proportions up to the denoted fraction. Area under the curves are given next to each fraction.
Figure 6: ROC evaluated on FULL SAS 1430 test scene for MCMC-trained BNN. Lines denote ROC scores for the sets of pixels containing target in proportions up to the denoted fraction. Area under the curves are given next to each fraction.
Figure 7: ROC evaluated on FULL SAS 1430 test scene for VI-trained BNN. Lines denote ROC scores for the sets of pixels containing target in proportions up to the denoted fraction. Area under the curves are given next to each fraction.
Figure 9: ROC evaluated on HIGH CONFIDENCE SAS 1430 test scene for VI-trained BNN. Lines denote ROC scores for the sets of pixels containing target in proportions up to the denoted fraction. Area under the curves are given next to each fraction.
In high consequence national security problems, decisions should not be made solely with the point estimate when we can know the model's confidence in the prediction. It turns out this is not an unrealistic scenario.
Figures 11 and 12 show the distribution of low confidence (LC) predictions on the nine combined test sets for pixels containing target, for MCMC and VI, respectively. The LC set is the opposite of the HC set; that is, it contains predictions whose lower confidence bound is less than 0.2 and whose upper confidence bound is greater than 0.8. As expected, most point predictions are close to 0.5, but many are towards 0 and 1. Although these represent a relatively small number of the overall predictions, they are not negligible in high consequence situations where every false positive or false negative can be extremely costly. The fact that there are target probability estimates close to 0 or 1 that fall in the low confidence set further confirms that uncertainty quantification on estimates is imperative to avoid a false degree of confidence. Predictions which fall in the low confidence set should receive further review, because the model is incapable of providing information on them, and simply having an estimate close to 0 or 1 is not evidence enough to make assumptions about them.
Figure 13 shows the MLS 1200 test set mean prediction, interval width, absolute prediction error, and RGB image, respectively, for the MCMC-trained BNN. These plots are useful for understanding where the model predicts targets, where the model is confident in its predictions, and where the model makes mistakes. Looking at the RGB image, the context and spatial surroundings of the problem can be identified, and further examination of the spectra can indicate what types of materials confuse the model or cause it to be uncertain. Knowledge of these shortcomings can then be used to efficiently design future data collection campaigns and can be communicated to operators so they understand the model's limitations.
BNNs are able to perform target detection on this scene and utilize their uncertainties to reduce false alarms. Although the underlying BNN model is the same, the training method differs and gives surprisingly different results. As the gold standard, the MCMC results show the true power of the BNN, especially when considering high confidence sets. The VI results show promise for the posterior approximation method and present a computationally efficient alternative, although the out-of-the-box approach for VI requires expert knowledge to understand, set up the model, and tune the hyperparameters. Both models show how UQ can be used in practice to handle high consequence problems, and both showed the limitations of using estimated class probabilities themselves as model uncertainty.
## 5 Discussion
In this paper we compared the predictive performance and uncertainty quantification for target detection in an HSI problem using BNNs trained with both MCMC and VI. MCMC is generally considered the gold standard against which to compare Bayesian model results, and that held in this experiment as well. Results from the VI model were slightly worse than the MCMC model and required more tuning effort. Given that NNs are already commonly used "off-the-shelf" by practitioners, it is only a matter of time before BNNs are an off-the-shelf tool to provide uncertainty quantification, and VI is the obvious computationally efficient tool to provide that training. This paper is meant to provide an example of how to utilize the benefits of UQ, but also to increase awareness that the optimization method matters; if sufficient computational resources are available, the best approach rather than the fastest or most efficient should be used, especially for high consequence problems.
This paper shows there is still work to be done with all-purpose, generic VI algorithms before they can be used by non-deep learning experts. Furthermore, we gave examples of why uncertainty quantification of estimates and predictions is important in practice, specifically in classification problems when researchers and practitioners alike often interpret the estimated class probability as the uncertainty in that same prediction.
These results and insights are relevant to the aerospace community.
Figure 11: Distribution of low confidence set predictions from MCMC-BNN on the nine combined test sets for pixels containing target.
Figure 10: Probability of detection as a function of target pixel abundance for a constant false alarm rate of 0.05. MCMC-trained BNN results are solid lines, VI-trained BNN results are in dashed lines. Full data test results are in blue, and high confidence set results are in orange.
Remote sensing assets collect a wealth of HSI information that often needs to be analyzed quickly and accurately. For remote sensing in the national security space, having a reliable model that is confident in its predictions is imperative due to the high consequence nature of the decisions. BNNs show promise for providing this capability, and while MCMC provides quality predictions and uncertainty, it is often computationally infeasible for most problems. Progress is being made on more efficient, off-the-shelf VI, but limitations still exist for this method to be utilized without a deep learning expert.
## Acknowledgments
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. SAND2021-15821 C.
|
2304.13205 | Splitting physics-informed neural networks for inferring the dynamics of
integer- and fractional-order neuron models | We introduce a new approach for solving forward systems of differential
equations using a combination of splitting methods and physics-informed neural
networks (PINNs). The proposed method, splitting PINN, effectively addresses
the challenge of applying PINNs to forward dynamical systems and demonstrates
improved accuracy through its application to neuron models. Specifically, we
apply operator splitting to decompose the original neuron model into
sub-problems that are then solved using PINNs. Moreover, we develop an $L^1$
scheme for discretizing fractional derivatives in fractional neuron models,
leading to improved accuracy and efficiency. The results of this study
highlight the potential of splitting PINNs in solving both integer- and
fractional-order neuron models, as well as other similar systems in
computational science and engineering. | Simin Shekarpaz, Fanhai Zeng, George Karniadakis | 2023-04-26T00:11:00Z | http://arxiv.org/abs/2304.13205v1 | Splitting physics-informed neural networks for inferring the dynamics of integer- and fractional-order neuron models
###### Abstract
We introduce a new approach for solving forward systems of differential equations using a combination of splitting methods and physics-informed neural networks (PINNs). The proposed method, splitting PINN, effectively addresses the challenge of applying PINNs to forward dynamical systems and demonstrates improved accuracy through its application to neuron models. Specifically, we apply operator splitting to decompose the original neuron model into sub-problems that are then solved using PINNs. Moreover, we develop an \(L^{1}\) scheme for discretizing fractional derivatives in fractional neuron models, leading to improved accuracy and efficiency. The results of this study highlight the potential of splitting PINNs in solving both integer- and fractional-order neuron models, as well as other similar systems in computational science and engineering.
**Keywords:** operator splitting, neuron models, fractional calculus.
## 1 Introduction
The human brain is a complex system that involves the interactions of billions of neurons. Mathematical models can be used to simulate the neuronal activity in the brain as a system of differential equations, allowing researchers to better understand how the brain works. Studies related to spiking neurons are performed numerically or biophysically. In numerical studies, the main goal is to solve neural equations and investigate how the dynamic behavior changes for different inputs. Biophysical approaches focus on interpreting the dynamic behavior of spiking neurons according to available experimental observations [2, 58].
Another interesting aspect of spiking neuron models is that they can be formulated as fractional-order equations, which take into account long-term memory. The order of the derivative in these equations can affect the neuron's response [54, 59, 63], making this an important area of research. Recent works in both integer- and fractional-order neuron models are discussed in Section 3.3.
In this work we introduce a new approach for solving neuron models that combines operator splitting methods with physics-informed neural networks (PINNs). Operator splitting methods have been successfully applied in various fields of physics and engineering [6, 11, 15, 18, 23, 34, 51, 52], while PINNs provide a powerful tool for approximating the solution of differential equations. A general introduction to the splitting method can be found in [25, 43].
PINNs were first introduced by Raissi et al. [50]. In this method, the solution of a differential equation is approximated using a neural network, and the parameters of the network are determined by solving a minimization problem that includes residual functions at collocation points, as well as initial and boundary conditions.
PINNs have been applied successfully to a broad range of ordinary and partial differential equations, including fractional equations [48], integro-differential equations, stochastic partial differential equations [68], and inverse problems [44]. There have also been several extensions of the original PINN, such as fractional PINN (FPINN) [48], physics-constrained neural networks (PCNN) [36, 69], hp-VPINN [30], conservative PINN (CPINN) [29], Bayesian PINN [66], parallel PINN [55], self-adaptive PINN [42], and physics-informed adversarial training (PIAT) [53]. Innovations in activation functions, gradient optimization techniques, neural network structures, and loss function structures have driven recent advances in the field. Despite these advances, improvements are still possible, especially concerning unresolved theoretical and practical issues.
Our study makes two important contributions to the field of neural modeling. First, we propose a new method, called the splitting PINN, that employs the operator splitting technique to decompose the original spiking neuron model into sub-problems, which are then solved using PINNs. We demonstrate the effectiveness and accuracy of this method by applying it to integer- and fractional-order neuron models with oscillatory responses, for which vanilla PINN and FPINN formulations fail to predict the solutions. Second, we introduce a novel \(L^{1}\)-scheme for discretizing fractional derivatives in fractional neuron models, which leads to improved accuracy and efficiency in solving these complex models. Our results show that the combination of the splitting PINN method and the \(L^{1}\)-scheme accurately solves fractional neuron models and provides valuable insights into the underlying mechanisms of neural activity.
This paper is organized as follows: Section 2 provides an overview of the proposed method for a given system of differential equations. Section 3 introduces various neuron models and their properties, including the Leaky Integrate-and-Fire (LIF), Izhikevich, Hodgkin-Huxley (HH), and the fractional order Hodgkin-Huxley (FO-HH) models. The efficiency and accuracy of splitting PINNs and FPINNs are demonstrated in Section 4
by applying our algorithm to different neuron models. Finally, we present a discussion of the main results in Section 5. The results and conclusion of this paper will provide valuable insights into the potential of this new approach for solving neuron models and other similar systems in computational science and engineering.
## 2 Problem setup and solution methodology
Let us consider the general form of a nonlinear system of differential equations as follows,
\[\frac{dx_{i}}{dt}=f_{i}(t,x),\qquad i=1,\cdots,n,\qquad t\in[0,T]. \tag{2.1}\]
Before introducing the splitting PINN for solving the given systems of differential equations, we briefly review what splitting methods are, in general.
### Splitting method
For solving the given system (2.1), let us rewrite it as follows,
\[\frac{dx}{dt}=f(t,x(t)),\qquad x(0)=x_{0}\qquad x\in\mathbb{R}^{n},\]
then by using the splitting method, \(x\) is decomposed into \((x^{*},x^{**})\), where \(x^{*},x^{**}\in\mathbb{R}^{d}\) \((d<n)\), and \(f\) is decomposed accordingly into \(f^{*}\) and \(f^{**}\). So, we have

\[\frac{dx^{*}}{dt}=f^{*}(t,x^{*}(t)),\qquad x^{*}(0)=x_{0}^{*}, \tag{2.2}\]

\[\frac{dx^{**}}{dt}=f^{**}(t,x^{**}(t)),\qquad x^{**}(0)=x_{0}^{**}, \tag{2.3}\]
and \(x_{0}^{*}\) and \(x_{0}^{**}\) are the given vectors of initial conditions. After solving the above sub-systems by the given practical algorithms, we denote the solutions of (2.2) and (2.3) as
\[\phi_{\Delta t}^{(*)}x_{0}^{*}\qquad\text{and}\qquad\phi_{\Delta t}^{(**)}x_{ 0}^{**},\]
where \(\phi_{\Delta t}^{(*)}\) and \(\phi_{\Delta t}^{(**)}\) are \(x^{*}\)-flow and \(x^{**}\)-flow, respectively, and \(\Delta t\) is the step size. Then, by combining the solutions of sub-systems, the approximation operator in \(x\)-flow can be written as follows
\[\phi_{\Delta t}^{(*)}\circ\phi_{\Delta t}^{(**)}\qquad\text{or}\qquad\phi_{ \Delta t/2}^{(**)}\circ\phi_{\Delta t}^{(*)}\circ\phi_{\Delta t/2}^{(**)} \tag{2.4}\]
and \(\phi_{\Delta t}^{(*)}\) and \(\phi_{\Delta t}^{(**)}\) are interchangeable. The first splitting is the Lie splitting method [62], which is first-order accurate, and the second is the Strang splitting method [56], which is second-order accurate.
On the interval \([0,T]\), we first split the original system into sub-systems (sub-problems) for each sub-interval \([t^{j},t^{j+1}]\) \((j=0,1,\cdots,J-1,\ t^{J}=T)\). Then the solution of the original system at time \(t^{j+1}\) can be approximated as follows
\[x(t^{j+1})=\phi^{(*)}_{\Delta t}\circ\phi^{(**)}_{\Delta t}x(t^{j})\]
where \(x(t^{j})\) is the accurate solution at \(t=t^{j}\).
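As a concrete sketch of these compositions, the driver loop below applies the Lie and Strang splittings, assuming each sub-flow is available as a one-step solver (in this paper, a PINN solve per sub-interval); the function names are illustrative.

```python
import numpy as np

def lie_splitting(phi_star, phi_star2, x0, T, J):
    # phi_star, phi_star2: one-step flow maps phi(dt, x) of the two sub-problems
    dt = T / J
    x = np.asarray(x0, dtype=float)
    for _ in range(J):
        x = phi_star(dt, x)    # advance with the x*-flow
        x = phi_star2(dt, x)   # then with the x**-flow
    return x

def strang_splitting(phi_star, phi_star2, x0, T, J):
    # Second-order composition phi**_{dt/2} o phi*_{dt} o phi**_{dt/2} of (2.4)
    dt = T / J
    x = np.asarray(x0, dtype=float)
    for _ in range(J):
        x = phi_star2(dt / 2, x)
        x = phi_star(dt, x)
        x = phi_star2(dt / 2, x)
    return x
```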
### Physics informed neural network
Consider an initial value problem as follows,
\[\begin{split}\frac{dx}{dt}&=f(t,x(t)),\qquad x\in \Omega\subseteq\mathbb{R}^{n},\ t\in[0,T],\\ x(0)&=x_{0},\end{split} \tag{2.5}\]
where \(f\) is a nonlinear differential operator, and \(x\) is the unknown solution with known initial condition.
By using the PINN framework, the solution of the above equation is approximated by a fully connected neural network \(\mathcal{N}^{L}\) with \(L\) layers and \(N\) neurons, where the output of \(l\)-th hidden layer is defined as follows
\[\mathcal{N}^{l}(t)=W^{l}\sigma(\mathcal{N}^{l-1}(t))+b^{l},\qquad 2\leq l \leq L, \tag{2.6}\]
\(t\in\mathbb{R}\) is the input vector, \(\sigma(\cdot)\) is the activation function, \(\sigma(\mathcal{N}^{l-1}(t))\in\mathbb{R}^{N}\), and \(\{W^{l}\in\mathbb{R}^{N\times N},b^{l}\in\mathbb{R}^{N}\}\) are the network parameters; \(\mathcal{N}^{1}(t)=W^{1}t+b^{1}\), and \(\mathcal{N}^{L}(t)\) is the output of the last layer, which is used to approximate the solution. The unknown parameters can be learned by solving a minimization problem that consists of the residual error terms as follows
\[\min_{w,b}\qquad\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}|\frac{dx}{dt}(t^{i})-f(t^{i },x(t^{i}))|^{2}+|x(0)-x_{0}|^{2}, \tag{2.7}\]
where \(N_{r}\) is the number of collocation points. The parameters are randomly initialized and optimized, and then the approximate solution is obtained. The framework uses automatic differentiation to calculate the derivatives of the solution, which eliminates the need for manual calculation or numerical discretization [5]. This capability is available in popular deep-learning frameworks such as TensorFlow and PyTorch.
Figure 1: **Overview of the Splitting PINN :** We first split the original system into subsystems (sub-problems) for each sub-interval \([t^{j},t^{j+1}]\). For each sub-interval, \(x(t^{j})\) is known, the sub-systems are solved using PINN and then the solutions are combined to obtain the approximate solution \(x(t^{j+1})\). To evaluate the error, we obtain the reference solution, \(x_{exact}(t^{j+1})\), by using a high-order numerical solver (for more details, see Appendix B). The algorithm proceeds until arriving at a given accuracy for each sub-interval.
## 3 Neuron models
To date, various neuron models have been presented for biological simulations of different parts of the brain, among them the leaky integrate-and-fire (LIF) model, the Izhikevich model, the Hodgkin-Huxley (HH) model, and the FitzHugh-Nagumo (FHN) model, which describe the membrane behavior [2]. These models can be classified into integer- and fractional-order models. Integer-order models can capture complex phenomena in the neuron system; however, they represent only one type of firing characteristic for constant model parameters. On the other hand, fractional-order models can exhibit different dynamic behaviors of neurons for constant parameters [58]. This makes fractional-order models more versatile and capable of capturing a wider range of neuron behavior.
### Neuron models: integer order
#### 3.1.1 IF and LIF models
The integrate-and-fire (IF) model is one of the most widely used neuron models due to its computational simplicity and its closeness to human biological conditions. This model is a simplified version of the HH model and is described by one equation and one assumption. Unlike the HH model, the IF model does not automatically generate an action potential. The model can be written as
\[C_{m}\frac{dV}{dt}=I(t), \tag{3.1}\]
with the following spike condition: if \(V=V_{th}\), a spike at \(t_{spike}\) is generated and the membrane potential \(V(t)\) is set to \(V_{rest}\) for a refractory period \(\tau_{ref}\)[20, 21]. \(C_{m}\) is the membrane capacitance, \(V_{th}\) is the voltage threshold and \(V_{rest}\) is the resting membrane potential.
A generalized type of IF model is the leaky integrate-and-fire (LIF) model, which adds a leak to the membrane potential. This model is defined by the following equation,
\[\tau\frac{dV}{dt}=-(V-V_{rest})+RI(t), \tag{3.2}\]
where \(\tau=RC_{m}\) is the membrane time constant and \(R\) is the membrane resistance. Because of the important properties of this model, such as its computational simplicity [27], accuracy in terms of the spiking behavior and spike times of neurons, and simulation speed [10, 39], it has become one of the most popular and advantageous neuron models in neuromorphic computing [1, 7, 39, 45]. The characteristic decay of the membrane potential over time can also be seen in the LIF model.
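For reference, a minimal forward-Euler simulation of the LIF dynamics (3.2) with the threshold/reset rule is sketched below; the parameter values are illustrative defaults, not those used in the experiments.

```python
import numpy as np

def simulate_lif(I, dt=0.01, tau=10.0, R=1.0,
                 V_rest=-65.0, V_th=-50.0, t_ref=2.0):
    # Forward-Euler integration of tau dV/dt = -(V - V_rest) + R I(t),
    # with reset to V_rest for a refractory period t_ref after each spike
    V = np.full(len(I), V_rest)
    refractory = 0.0
    for j in range(1, len(I)):
        if refractory > 0.0:
            refractory -= dt            # hold V at V_rest while refractory
            continue
        V[j] = V[j - 1] + dt * (-(V[j - 1] - V_rest) + R * I[j - 1]) / tau
        if V[j] >= V_th:                # spike condition
            V[j] = V_rest
            refractory = t_ref
    return V
```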
More complex types of IF models include exponential integrate-and-fire, quadratic integrate-and-fire, and adaptive exponential integrate-and-fire [8].
#### 3.1.2 Izhikevich model
Another neuron model for simulating the membrane behavior is the Izhikevich model. Two important features of this model are computational efficiency and biological plausibility. It reduces the more complex Hodgkin-Huxley model to a 2D system of ordinary differential equations of the form
\[\begin{split}\frac{dv}{dt}&=0.04v^{2}+5v+140-u+I(t), \\ \frac{du}{dt}&=a(bv-u),\end{split} \tag{3.3}\]
with the auxiliary condition
\[\text{if }v\geq v_{th},\text{ then}\left\{\begin{array}{l}v\gets c\\ u\gets u+d\end{array}\right\} \tag{3.4}\]
with \(u\) and \(v\) being dimensionless variables. The variable \(v\) is the membrane potential of the neuron, and \(u\) is the membrane recovery variable, which accounts for the activation of the \(K^{+}\) ion current and the inactivation of the \(Na^{+}\) ion current and provides negative feedback to the membrane potential. The dimensionless parameters \(a\), \(b\), \(c\), and \(d\) regulate the behavior of the neuron.
The auxiliary condition (3.4) triggers a reset of the neuron when the membrane potential surpasses the threshold, simulating a spike. This model is capable of reproducing the spiking and bursting behavior of neurons in real-time, making it a widely used model in simulations of large-scale neural networks.
#### 3.1.3 Hodgkin-Huxley model
From a biophysical perspective, the nerve cell's action potential is generated by the flow of ions through the cell membrane's ion channels. Hodgkin and Huxley described the dynamics of these membrane currents through a set of coupled differential equations based on their experiments on the giant squid axon [24].
The mechanism of the action potential can be understood with reference to Figure 2. A capacitor, resistors, and batteries were used to simulate the equivalent circuit. Changes in the action potential were observed by applying the current \(I(t)\) and adjusting the capacitance and the leakage resistances of the sodium and potassium channels.
The circuit of the Hodgkin-Huxley (HH) model consists of four parallel branches: integrative branch, leaky branch, \(K^{+}\) channel, and \(Na^{+}\) channel. A system of four coupled differential equations was used to describe the membrane potential of a giant squid axon as follows
\[\begin{split}\frac{dV_{m}}{dt}=& F_{1}(t,V_{m},n,m,h), \qquad\frac{dn}{dt}=F_{2}(t,V_{m},n,m,h),\\ \frac{dm}{dt}=& F_{3}(t,V_{m},n,m,h),\qquad\frac{dh}{dt }=F_{4}(t,V_{m},n,m,h),\end{split} \tag{3.5}\]
where
\[F_{1} =\frac{1}{C_{m}}(-g_{L}(V_{m}-E_{L})-g_{K}n^{4}(V_{m}-E_{K})-g_{Na}m^ {3}h(V_{m}-E_{Na})+I(t)),\] \[F_{2} =\alpha_{n}(V_{m}(t))(1-n(t))-\beta_{n}(V_{m}(t))n, \tag{3.6}\] \[F_{3} =\alpha_{m}(V_{m}(t))(1-m(t))-\beta_{m}(V_{m}(t))m,\] \[F_{4} =\alpha_{h}(V_{m}(t))(1-h(t))-\beta_{h}(V_{m}(t))h,\]
and \(g_{Na}\), \(g_{K}\) and \(g_{L}\) are the maximum conductances of the \(Na^{+}\), \(K^{+}\) and leak currents. The rate variables \(\alpha_{x}\) and \(\beta_{x}\) of the channel conductances are voltage-dependent functions of \(V_{m}(t)\), as follows:
\[\alpha_{n}(V_{m}) =\frac{0.1-0.01(V_{m}-V_{0})}{e^{1-0.1(V_{m}-V_{0})}-1}, \beta_{n}(V_{m}) =0.125e^{-(V_{m}-V_{0})/80},\] \[\alpha_{m}(V_{m}) =\frac{2.5-0.1(V_{m}-V_{0})}{e^{2.5-0.1(V_{m}-V_{0})}-1}, \beta_{m}(V_{m}) =4.0e^{-(V_{m}-V_{0})/80}, \tag{3.7}\] \[\alpha_{h}(V_{m}) =0.07e^{-(V_{m}-V_{0})/20}, \beta_{h}(V_{m}) =\frac{1}{1+e^{3-0.1(V_{m}-V_{0})}}.\]
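For reference, a sketch transcribing the rate functions (3.7) as printed is given below; note that the classical HH parameterization uses \(e^{-u/18}\) in \(\beta_{m}\), so the \(/80\) here follows the text as written.

```python
import numpy as np

def hh_rates(V_m, V0=-65.0):
    # Rate functions of Eq. (3.7); u = V_m - V0. Note the removable
    # singularities of alpha_n at u = 10 and alpha_m at u = 25.
    u = V_m - V0
    alpha_n = (0.1 - 0.01 * u) / (np.exp(1.0 - 0.1 * u) - 1.0)
    beta_n = 0.125 * np.exp(-u / 80.0)
    alpha_m = (2.5 - 0.1 * u) / (np.exp(2.5 - 0.1 * u) - 1.0)
    beta_m = 4.0 * np.exp(-u / 80.0)   # as printed; classical HH uses exp(-u/18)
    alpha_h = 0.07 * np.exp(-u / 20.0)
    beta_h = 1.0 / (1.0 + np.exp(3.0 - 0.1 * u))
    return alpha_n, beta_n, alpha_m, beta_m, alpha_h, beta_h
```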
### Neuron models: fractional order
In recent years, fractional differential equations have been developed to improve the modeling of many biological phenomena, including mechanical properties of viscoelastic tissue [40], the tissue-electrode interface [41], pharmacokinetics of drug delivery and absorption [13, 14, 49], and anomalous calcium sub-diffusion in micro-domains [57].
The main characteristic of using fractional derivatives is their non-locality. This means that the next state of the system depends on the current state and all historical states before it. This advantage makes the study of fractional order systems an active area of research.
Figure 2: Schematic diagram for the Hodgkin-Huxley model [24]. Left: ion channels on the membrane of the neuron. Right: simulated circuit with \(I\) denoting the input current.
#### 3.2.1 Fractional derivative definitions
In this Section, we present the definitions of fractional derivatives. There are different methods for defining fractional derivatives, among which we can mention the Grunwald-Letnikov derivative, Riemann-Liouville derivative, and Caputo derivative [33]. The models in this paper are defined using the Caputo fractional derivative.
**Definition 1**.: The Caputo fractional derivative of the function \(f(t)\) with order \(\alpha>0\) is defined as
\[{}_{C}D_{a,t}^{\alpha}f(t)=\frac{1}{\Gamma(n-\alpha)}\int_{a}^{t}(t-s)^{n- \alpha-1}f^{(n)}(s)ds, \tag{3.8}\]
where \(n-1<\alpha<n\) and \(n\) is a positive integer.
If \(a=0\), then we can use \(\frac{d^{\alpha}}{dt^{\alpha}}f(t)={}_{C}D_{0,t}^{\alpha}f(t)\).
#### 3.2.2 \(L^{1}\) scheme to approximate the fractional derivatives
An efficient method for approximating the Caputo derivative of order \(\alpha\)\((0<\alpha<1)\) is the \(L^{1}\) scheme, which was introduced by Oldham and Spanier [47]. Using this method, the Caputo derivative can be approximated by using the following formula
\[\frac{d^{\alpha}}{dt^{\alpha}}f(t^{j})\approx\delta_{t}^{\alpha}f^{j}=\sum_{k =0}^{j-1}b_{k}^{\alpha}\left[f(t^{j-k})-f(t^{j-1-k})\right], \tag{3.9}\]
where for the uniform time mesh \(t^{j}=j\Delta t,j\geq 0\), \(\Delta t\) is the step size. Then, \(b_{k}^{\alpha}\) is given by \(b_{k}^{\alpha}=\frac{(\Delta t)^{-\alpha}}{\Gamma(2-\alpha)}[(k+1)^{1-\alpha} -k^{1-\alpha}]\).
In [35], for a smooth \(f\), the error estimate of the above \(L^{1}\) scheme is
\[|\delta_{t}^{\alpha}f(t^{j})-\frac{d^{\alpha}}{dt^{\alpha}}f(t^{j})|\leq C \Delta t^{2-\alpha}, \tag{3.10}\]
where \(C=C(\alpha,f)\). This approximation has been used in many papers discussing fractional-order spiking neurons; the interested reader is referred to [46, 60, 2, 4]. Here, we develop this method to discretize the fractional derivative, and then FPINN is used.
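A direct implementation of the \(L^{1}\) scheme (3.9) on a uniform mesh is sketched below; the function name is illustrative.

```python
import numpy as np
from math import gamma

def l1_caputo(f_vals, alpha, dt):
    # L1 approximation (3.9) of the Caputo derivative of order alpha in (0,1)
    # f_vals: samples f(t^0), ..., f(t^J) on the uniform mesh t^j = j*dt
    J = len(f_vals) - 1
    k = np.arange(J)
    b = dt ** (-alpha) / gamma(2.0 - alpha) \
        * ((k + 1) ** (1 - alpha) - k ** (1 - alpha))
    df = np.diff(f_vals)                  # f(t^{k+1}) - f(t^k)
    out = np.zeros(J + 1)
    for j in range(1, J + 1):
        # sum_{k=0}^{j-1} b_k [f(t^{j-k}) - f(t^{j-1-k})]
        out[j] = np.dot(b[:j], df[j - 1::-1])
    return out
```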
#### 3.2.3 Fractional Hodgkin-Huxley model
In fractional-order neural models, the neuron's dynamics depends on the order of the derivative, which can create different types of memory-dependent dynamics. The fractional order Hodgkin-Huxley (FO-HH) model is one of the neuron models that has attracted much attention.
The HH model has two basic problems: the first is that the dielectric losses in the membrane are neglected; the second is that the membrane capacitance is considered ideal. To overcome these problems, a fractional model is proposed. The idea of fractional capacitance is taken from Curie's empirical law [65], which can be written as follows
\[I_{c}(t)=C_{m}\frac{d^{q_{1}}V_{m}(t)}{dt^{q_{1}}}, \tag{3.11}\]
where \(V_{m}(t)\) is the excitation voltage, \(I_{c}(t)\) is the current in the capacitor, \(q_{1}\) is the order of differentiation, and \(C_{m}\) is the fractional capacitance [64]. The fractional order model provides a more accurate description based on long-term memory behavior. Motivated by the above discussion, we propose the following FO-HH model
\[\begin{split}\frac{d^{q_{1}}V_{m}}{dt^{q_{1}}}=& F_{1}(t,V_{m},n,m,h),\qquad\frac{d^{q_{2}}n}{dt^{q_{2}}}=F_{2}(t,V_{m},n,m,h), \\ \frac{d^{q_{3}}m}{dt^{q_{3}}}=& F_{3}(t,V_{m},n,m,h),\qquad\frac{d^{q_{4}}h}{dt^{q_{4}}}=F_{4}(t,V_{m},n,m,h), \end{split} \tag{3.12}\]
where \(q=(q_{1},q_{2},q_{3},q_{4})\) is the order of differentiation, and the other parameters are as in (3.5).
By defining the fractional model, we account for the dielectric loss in the membrane, and as a result, we will see a change in the refractory period for the same value of input current as in the integer-order case. The refractory period is the time when the membrane is hyperpolarized and, hence, requires a stronger stimulus to produce a smaller action potential.
The modified HH model can be very effective in biomedical applications such as heart health analysis. For example, in an ECG waveform, the PR interval represents a refractory period. PR interval estimation is very important for cardiac diagnosis [12, 22].
### Prior works in neuron models
In [37], the authors compared the spiking rate patterns of five single neuron models, including LIF, Izhikevich, and Hodgkin-Huxley (HH) models, under different sustained current inputs. Numerical stability and accuracy were also considered. The multi-step methods for neuronal modeling, including the HH model, were proposed in [32]. In [31], the modified Khater (mK) method and B-spline scheme were proposed to find numerical solutions of the FitzHugh-Nagumo (FHN) equation, with a focus on finding different types of soliton wave solutions, studying their stability properties, and using them to obtain numerical solutions of the model.
In [19], the Hybrid Functions (HF) method was proposed as a solution for the HH model. The HF method was compared comprehensively with other algorithms, evaluating computational speed, absolute error, and integral time squared error. The finite difference scheme was used in [67] to solve the stochastic FHN model, including stability analysis and the calculation of explicit optimal a priori estimates for the existence of solutions. In [3], numerical solutions for the FHN and Izhikevich neuron models were
obtained using a non-standard finite difference scheme and GL discretization technique. The models were compared, and their behavior was analyzed in different fractional orders.
Fractional order modeling in neural systems is a relatively new area of research. The non-local definitions of fractional calculus used in these models provide a more realistic representation of neural systems and offer a deeper understanding of their behavior. In a study reported in [2], four numerical methods were applied to two fractional-order spiking neuron models, the FO-LIF and FO-HH models. The authors used a finite memory window version of the \(L^{1}\) approximation for comparison with well-known techniques such as the GL-based method, product integration approximation, and the Z-transform approach. In all four methods, a uniform mesh is used, and low-accuracy solutions are obtained due to the singularity of the solution of fractional equations. In that paper, the spiking patterns, inter-spike interval adaptation, and steady-state spiking frequency were analyzed for each numerical method under varying memory lengths. In a related study reported in [58], the authors used an \(L^{1}\) scheme (linear interpolation based on a uniform mesh) to discretize the Caputo fractional derivative. First-order extrapolation is used to derive the linearized scheme, where the global error is \(O(\Delta t^{\alpha})\) and the error far from the origin is \(O(\Delta t)\). In that paper, they investigated the effects of non-Markovian power-law voltage-dependent conductances on the generation of action potentials and spiking patterns in a Hodgkin-Huxley model. They used fractional derivatives to implement the slow-adapting power-law dynamics of the potassium and sodium conductance gating variables. The results showed that, with different input currents and derivative orders, a wide range of spiking patterns can be generated, such as square wave bursting, mixed mode oscillations, and pseudo-plateau potentials. These findings suggest that power-law conductances increase the number of spiking patterns a neuron can produce.
In [9], the dynamics and numerical simulations of a fractional-order coupled FHN neuronal model were discussed, and the stability properties of its equilibrium states were analyzed based on theoretical results. In [61], a non-standard finite difference scheme was used to solve the fractional Izhikevich neuron model, and a general formula for the synchronization of different Izhikevich neurons was proposed.
## 4 Results
In this Section, we use splitting PINN to solve the neuron models presented in Section 3, using the network architectures and hyperparameters specified in Table 1. The optimization algorithm used is Adam with a learning rate of 0.0001 and a scheduler. In addition, techniques such as adaptive activation functions and feature expansion proposed in [28, 38] were used to improve the computational efficiency of the proposed method.
The relative \(L^{2}\) norm of the errors of the inferred solutions is also computed as follows,
\[\text{relative L}^{2}\text{ error}=\frac{\sqrt{\sum_{j=1}^{l}(V_{exact}(t^{j})-V_{app}(t^{j}))^{2}}}{ \sqrt{\sum_{j=1}^{l}(V_{exact}(t^{j}))^{2}}}.\]
Further details about the training procedure and hyperparameters can be found in the relevant Section.
**Data Validation:** In all of the examples, the reference solutions are obtained by using the adaptive time-stepping spectral collocation method, which is presented in Appendix B.
### LIF model
**PINN implementation:** The approximate solutions of the LIF model can be obtained by using PINNs. For solving this model, we discretize the time domain \([0,T]\) into sub-domains \([t^{j},t^{j+1}]\), where \(j=0,1,\cdots,J-1\) and \(t^{J}=T\), and then PINNs are used to solve the model in each sub-interval. More detailed visual evaluations and the numerical results of this model are provided in Appendix A.
| Problems | Depth | Width | Activation | Optimizer | Iterations |
| --- | --- | --- | --- | --- | --- |
| LIF model | 5 | 40 | tanh | Adam | 50000 |
| LIF model with threshold voltage | 7 | 60 | tanh | Adam | 10000 |
| Izhikevich model | 6, 6 | 40, 40 | tanh, tanh | Adam, Adamax | 20000 |
| HH model (step current function) | 6, 10 | 20, 20 | tanh, sin | Adam, Adamax | 20000 |
| HH model (constant current) | 6, 10 | 20, 20 | tanh, sin | Adam | 20000 |
| FO-HH model \((q_{i}=0.8)\) | 10, 6 | 100, 100 | tanh, sin | Adam | 70000 |
| FO-HH model \((q_{i}=0.6)\) | 10, 6 | 100, 100 | tanh, sin | Adam | 50000 |
| FO-HH model \((q_{i}=0.4)\) | 10, 6 | 100, 100 | tanh, sin | Adam | 20000 |

Table 1: PINN architectures and hyperparameters used for training. Where two entries appear under “Depth”, “Width”, “Activation”, and “Optimizer”, the first and second correspond to the first and second sub-problems, respectively.
### Izhikevich model
**Splitting PINN implementation:** Let \((u^{j},v^{j})\) be the numerical solution at \(t=t^{j}=j\Delta t\) with step size \(\Delta t\); then, by using the splitting method, the following sub-problems on each sub-interval \([t^{j},t^{j+1}]\) \((j=0,1,\cdots,J-1,\ t^{J}=T)\) are obtained
\[\frac{du}{dt}=a(bv^{j}-u),\qquad u(t^{j})=u^{j}, \tag{4.1}\]
and
\[\frac{dv}{dt}=0.04v^{2}+5v+140-u^{j+1}+I(t),\qquad v(t^{j})=v^{j}. \tag{4.2}\]
This model is solved on the time interval \([0,100\ ms]\), with 2000 sub-intervals and 20 points per sub-interval. To solve the sub-problems with PINNs, fully connected neural networks are used to approximate the solutions, with the architectures described in Table 1, \(v_{th}=30mV\),
\[I(t)=\begin{cases}0&\text{for }0<t<2,\\ 15&\text{for }t\geq 2,\end{cases}\]
and the choice of parameters given in [26], which can be described as follows:
* \(\mathbf{a}\sim 0.02\ \frac{1}{ms}\) : Time scale of the recovery variable \(u\). Smaller values result in slower recovery.
* \(\mathbf{b}\sim\ 0.2\ [dimensionless]\) : Sensitivity of the recovery variable \(u\) to the sub-threshold fluctuations of the membrane potential \(v\). Greater values couple \(v\) and \(u\) more strongly resulting in possible sub-threshold oscillations and low-threshold spiking dynamics. The case \(b<a\ (b>a)\) corresponds to saddle-node (Andronov-Hopf) bifurcation of the resting state.
* \(\mathbf{c}\sim-50\ mV\) : The after-spike reset value of the membrane potential \(v\) caused by the fast high-threshold \(K^{+}\) conductances.
* \(\mathbf{d}\sim 2\ mV\) : After-spike reset of the recovery variable \(u\) caused by slow high-threshold \(Na^{+}\) and \(K^{+}\) conductances.
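To make the splitting loop concrete, the following sketch performs one step of (4.1) and (4.2) with these parameter values; simple forward-Euler sub-steps stand in for the per-sub-interval PINN solves, and the function name and sub-step count are our own illustrative choices.

```python
import numpy as np

def izhikevich_splitting_step(v, u, t, dt, I, a=0.02, b=0.2, c=-50.0, d=2.0,
                              v_th=30.0, n_sub=20):
    # One Lie splitting step on [t, t + dt]; forward-Euler sub-steps stand in
    # for the PINN solves of sub-problems (4.1) and (4.2)
    h = dt / n_sub
    v_j = v
    for _ in range(n_sub):            # (4.1): du/dt = a(b v^j - u), v frozen
        u += h * a * (b * v_j - u)
    for k in range(n_sub):            # (4.2): dv/dt with u^{j+1} fixed
        v += h * (0.04 * v ** 2 + 5.0 * v + 140.0 - u + I(t + k * h))
    if v >= v_th:                     # auxiliary reset condition (3.4)
        v, u = c, u + d
    return v, u
```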
| | training error | testing error |
| --- | --- | --- |
| membrane potential | \(0.115719\pm 0.008516\) | \(0.120458\pm 0.008221\) |
| \(u\) | \(0.041201\pm 0.005980\) | \(0.043186\pm 0.005070\) |

Table 2: Mean relative \(L^{2}\) norm of errors for the Izhikevich model.
### Hodgkin-Huxley model
**Splitting PINN implementation:**
For solving the HH model by applying Splitting PINN on interval \([0,T]\), we have used the choice of parameters given in Table 3. Consider \((V^{j},n^{j},m^{j},h^{j})\) to be the solution at \(t^{j}=j\Delta t\) with step size \(\Delta t\). Then, the following sub-systems on each sub-interval \([t^{j},t^{j+1}]\) are obtained using the Lie splitting method. The first part is,
\[\frac{dn}{dt}=F_{2}(t,V^{j}_{m},n,m,h),\qquad n(t^{j})=n^{j},\qquad t \in(t^{j},t^{j+1}],\] \[\frac{dm}{dt}=F_{3}(t,V^{j}_{m},n,m,h),\qquad m(t^{j})=m^{j}, \qquad t\in(t^{j},t^{j+1}], \tag{4.3}\] \[\frac{dh}{dt}=F_{4}(t,V^{j}_{m},n,m,h),\qquad h(t^{j})=h^{j}\qquad t \in(t^{j},t^{j+1}].\]
where \(V^{j}_{m}\) is the known solution at time \(t=t^{j}\). The second part is given by
\[\frac{dV_{m}}{dt}=F_{1}(t,V_{m},n^{j+1},m^{j+1},h^{j+1}),\qquad V_{m}(t^{j})=V ^{j}_{m},\qquad t\in(t^{j},t^{j+1}], \tag{4.4}\]
Figure 4: Loss function of the Izhikevich model.
Figure 3: Izhikevich model: Left: membrane potential versus time. Right: recovery variable versus time.
and \((n^{j+1},m^{j+1},h^{j+1})\) is the solution obtained from the first part. The splitting approximation of the solution at \(t=t^{j+1}\) is \((V_{m}^{j+1},n^{j+1},m^{j+1},h^{j+1})\), and the solution for the whole domain is obtained by repeating this process.
Different kinds of input currents, including step current function and constant current, are considered for solving the HH model.
**Step current function:** For solving the HH model with the given step current in Figure 5(a) at the time interval \([0,20ms]\), we have used 800 sub-intervals in the splitting procedure with 40 training points for solving each sub-problem. The hyperparameters and architecture of PINNs for solving the sub-problems can be found in Table 1.
**Constant current:** This system is solved at time interval \([0,100ms]\), with 3000 sub-intervals and 30 points at each sub-interval. With the given hyperparameters and architectures in Table 1, PINN is used for solving each part, and the solutions are combined to obtain the approximate solutions.
The relative \(L^{2}\) norm of errors of approximate solutions are shown in Table 5.
| | training error | testing error |
| --- | --- | --- |
| membrane potential | \(0.001155\pm 0.000111\) | \(0.000886\pm 0.000149\) |
| \(n\) | \(0.002720\pm 5.3\times 10^{-5}\) | \(0.002712\pm 5.3\times 10^{-5}\) |
| \(m\) | \(0.014316\pm 0.000447\) | \(0.014071\pm 0.000451\) |
| \(h\) | \(0.002685\pm 7.3\times 10^{-5}\) | \(0.002680\pm 7.2\times 10^{-5}\) |

Table 4: Mean relative \(L^{2}\) norm of errors for the HH model with the current step function.
| Parameter | Value | Description |
| --- | --- | --- |
| \(g_{Na}\) | \(120\ mS/cm^{2}\) | Maximum \(Na^{+}\) current conductance |
| \(g_{K}\) | \(36\ mS/cm^{2}\) | Maximum \(K^{+}\) current conductance |
| \(g_{L}\) | \(0.3\ mS/cm^{2}\) | Maximum leak current conductance |
| \(E_{Na}\) | \(50\ mV\) | \(Na^{+}\) current reversal potential |
| \(E_{K}\) | \(-77\ mV\) | \(K^{+}\) current reversal potential |
| \(E_{L}\) | \(-54\ mV\) | Leak current reversal potential |
| \(C_{m}\) | \(1\ \mu F\cdot s^{q_{1}-1}/cm^{2}\) | Membrane capacitance |
| \(V_{0}\) | \(-65\) | Initial membrane potential |
| \(m_{0}\) | \(0.0529\) | Initial \(Na^{+}\) current activation |
| \(n_{0}\) | \(0.3177\) | Initial \(K^{+}\) current activation |
| \(h_{0}\) | \(0.5960\) | Initial \(Na^{+}\) current inactivation |

Table 3: Parameter values and their descriptions for the HH neuron model [58].
Figure 5: HH model: comparison of splitting PINN results with the reference solution. (a) membrane potential; (b) activation variable of potassium channel; (c) activation variable of the sodium channel; (d) deactivation variable of the sodium channel. The inset plot shows the step function input current.
Figure 6: HH model: absolute error of the normalized solutions using splitting PINN with a current step function. (a) membrane potential; (b) activation variable of potassium channel; (c) activation variable of the sodium channel; (d) deactivation variable of the sodium channel.
Figure 7: Loss function of the HH model with a current step function.
| | training error | testing error |
| --- | --- | --- |
| membrane potential | \(0.009925\pm 0.001957\) | \(0.009434\pm 0.001877\) |
| \(n\) | \(0.005240\pm 0.000666\) | \(0.005229\pm 0.000661\) |
| \(m\) | \(0.030731\pm 0.003931\) | \(0.030364\pm 0.003939\) |
| \(h\) | \(0.007077\pm 0.000899\) | \(0.007066\pm 0.000898\) |

Table 5: Mean relative \(L^{2}\) norm of errors for the HH model with constant current.
Figure 8: HH model: comparison of splitting PINN results with the reference solution for constant current input (\(I=10\)\(nA/cm^{2}\)). (a) membrane potential; (b) activation variable of potassium channel; (c) activation variable of the sodium channel; (d) deactivation variable of the sodium channel. Note that PINN fails to predict the solution.
### Fractional Hodgkin-Huxley model
**Splitting FPINN implementation:**
This problem is solved using the splitting fractional physics-informed neural network (FPINN) approach [48].
Figure 10: Loss function of the HH model for constant current.
Figure 9: HH model: absolute error of the normalized solutions using splitting PINN with a constant current (\(I=10\)\(nA/cm^{2}\)). (a) membrane potential; (b) activation variable of potassium channel; (c) activation variable of the sodium channel; (d) deactivation variable of the sodium channel.
In this approach, the memory of the fractional derivative at each sub-interval encompasses the entire domain. So, for each sub-interval \([t^{j},t^{j+1}]\) \((0\leq j\leq N_{1}-1)\), by using the developed \(L^{1}\) scheme, we have the following sub-problems:
\[\delta_{t}^{q_{1}}V_{m}^{N_{2}j+l+1}=F_{1}(t^{N_{2}j+l+1},V_{m}^{N_{2}j+l+1},n, m,h), \tag{4.5}\]
where
\[\delta_{t}^{q_{1}}V_{m}^{N_{2}j+l+1}=\sum_{k=0}^{N_{2}j+l}b_{k}^{q_{1}}\left[V_ {m}(t^{N_{2}j+l+1-k})-V_{m}(t^{N_{2}j+l-k})\right], \tag{4.6}\]
and \(n\), \(m\) and \(h\) are the known solutions at \(t=t^{N_{2}j+l}\). The second subproblem is
\[\delta_{t}^{q_{2}}n^{N_{2}j+l+1} = F_{2}(t^{N_{2}j+l+1},V_{m},n^{N_{2}j+l+1},m,h),\] \[\delta_{t}^{q_{3}}m^{N_{2}j+l+1} = F_{3}(t^{N_{2}j+l+1},V_{m},n,m^{N_{2}j+l+1},h), \tag{4.7}\] \[\delta_{t}^{q_{4}}h^{N_{2}j+l+1} = F_{4}(t^{N_{2}j+l+1},V_{m},n,m,h^{N_{2}j+l+1}).\]
where \(V_{m}\) is the solution of the first sub-problem at \(t=t^{N_{2}j+l+1}\), \(l=0,1,\cdots,N_{2}-1\), and \(N_{2}\) is the number of residual points for each sub-interval.
Then, the solutions of sub-systems are combined to obtain the solution of the original system.
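To illustrate how the full-domain memory enters the training loss, the sketch below evaluates the developed \(L^{1}\) operator (4.6) at the newest time level and returns its residual against the right-hand side; `V_hist`, `V_new`, and `rhs` are illustrative placeholders for the stored history, the network prediction, and \(F_{1}\) evaluated at the new point.

```python
import numpy as np
from math import gamma

def caputo_l1_residual(V_hist, V_new, alpha, dt, rhs):
    # Residual of delta_t^alpha V(t^{j+1}) - F1(...) at the newest time level,
    # where V_hist = [V^0, ..., V^j] spans the entire domain so far (full memory)
    j1 = len(V_hist)                    # index of the new time level t^{j+1}
    k = np.arange(j1)
    b = dt ** (-alpha) / gamma(2.0 - alpha) \
        * ((k + 1) ** (1 - alpha) - k ** (1 - alpha))
    V = np.append(V_hist, V_new)
    diffs = V[j1 - k] - V[j1 - 1 - k]   # V(t^{j+1-k}) - V(t^{j-k})
    return np.dot(b, diffs) - rhs
```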
This model is solved with the proposed method for \(q_{i}=0.8,0.6,0.4\) on the time interval \([0,100ms]\), with 2000 sub-intervals and 40 residual points in each sub-interval. Each part is solved via FPINN, with the hyperparameters and network architectures given in Table 1.
Figure 11: FO-HH model: comparison of splitting FPINN results with the reference solution for constant current input (\(I=20\ nA/cm^{2}\)) and fractional order \(q_{1}=q_{2}=q_{3}=q_{4}=0.8\). (a) membrane potential; (b) activation variable of potassium channel; (c) activation variable of the sodium channel; (d) deactivation variable of the sodium channel. Note that FPINN fails to predict the solution.
Figure 12: Loss function of the FO-HH model for fractional order \(q_{1}=q_{2}=q_{3}=q_{4}=0.8\).
Figure 14: Loss function of the FO-HH model for fractional order \(q_{1}=q_{2}=q_{3}=q_{4}=0.6\).
Figure 13: FO-HH model: comparison of splitting FPINN results with the reference solution for constant current input (\(I=20\)\(nA/cm^{2}\)) and fractional order \(q_{1}=q_{2}=q_{3}=q_{4}=0.6\). (a) membrane potential; (b) activation variable of potassium channel; (c) activation variable of the sodium channel; (d) deactivation variable of the sodium channel. Note that FPINN fails to predict the solution.
Figure 16: Loss function of the FO-HH model for fractional order \(q_{1}=q_{2}=q_{3}=q_{4}=0.4\).
Figure 15: FO-HH model: comparison of splitting FPINN results with the reference solution for constant current input (\(I=20\)\(nA/cm^{2}\)) and fractional order \(q_{1}=q_{2}=q_{3}=q_{4}=0.4\). (a) membrane potential; (b) activation variable of potassium channel; (c) activation variable of the sodium channel; (d) deactivation variable of the sodium channel. Note that FPINN fails to predict the solution.
| | \(q_{i}=0.8\) | \(q_{i}=0.6\) | \(q_{i}=0.4\) |
| --- | --- | --- | --- |
| membrane potential (train) | \(0.084252\pm 0.011138\) | \(0.098086\pm 0.027659\) | \(0.043236\pm 0.001514\) |
| membrane potential (test) | \(0.084460\pm 0.012576\) | \(0.096949\pm 0.028145\) | \(0.043241\pm 0.003771\) |
| \(n\) (train) | \(0.024353\pm 0.005469\) | \(0.023854\pm 0.008417\) | \(0.001909\pm 8.8\times 10^{-5}\) |
| \(n\) (test) | \(0.024360\pm 0.005527\) | \(0.023776\pm 0.008435\) | \(0.001933\pm 0.000177\) |
| \(m\) (train) | \(0.129660\pm 0.035021\) | \(0.093863\pm 0.031778\) | \(0.006009\pm 9.4\times 10^{-5}\) |
| \(m\) (test) | \(0.129083\pm 0.035903\) | \(0.093619\pm 0.032427\) | \(0.006063\pm 0.000214\) |
| \(h\) (train) | \(0.036292\pm 0.009040\) | \(0.040947\pm 0.013980\) | \(0.007536\pm 0.000374\) |
| \(h\) (test) | \(0.036336\pm 0.009121\) | \(0.040921\pm 0.013963\) | \(0.007592\pm 0.000649\) |

Table 6: Mean relative \(L^{2}\) norm of errors for the FO-HH model for different orders of fractional derivatives; the "(train)" and "(test)" rows give the training and testing errors, respectively.
Figure 17: Plots of the membrane potential and the corresponding activation and deactivation variables of the FO-HH model with constant current \(I=20\ nA/cm^{2}\) for different orders of fractional derivatives, obtained using the splitting FPINN.
## 5 Discussion
We have presented a deep learning approach for solving nonlinear systems of differential equations. The performance and accuracy of splitting physics-informed neural networks (PINNs) are studied in the context of solving neuron models. In Section 4.2, the proposed method was used to solve the Izhikevich model. We showed the reference and learned solutions in Figure 3 for equi-distant training and random test datasets. It can be seen that the solutions learned by the splitting PINN match the reference solutions well. The relative \(L^{2}\) norms of error for the solutions are presented in Table 2, which shows the method's capability for this model. The convergence of the loss functions is shown in Figure 4; they rapidly drop below \(10^{-5}\) and \(10^{-4}\) for the first and second sub-problems, respectively.
After solving the Izhikevich model, we focus on using the Splitting PINN method for the Hodgkin-Huxley model. In this model, applying input current and changes in voltage resulting from the opening and closing of ion channels lead to the generation of spikes. The amount of input current needed to generate a spike is at least \(2.7nA/cm^{2}\), and with increasing input current, the number of spikes increases. In Section 4.3, the voltage is obtained for two states of input current: step function current and constant input current. In the first case, when the current is applied over a given time interval, the voltage increases and produces an action potential (positive peak). After the spike, the potassium channel opens, and the sodium channel closes, causing the voltage to decrease and the neuron to enter a refractory period where the potential is below the resting potential of \(V_{rest}=-65mV\). The voltage then slowly returns to \(-65mV\). The splitting PINN method was used to calculate approximate solutions and compare them with reference solutions. The results are shown in Figure 5, and the absolute errors are given in Figure 6, which demonstrate the accuracy of the method. The relative \(L^{2}\) norm of errors for the current step function is presented in Table 4. The convergence of the loss functions is also shown in Figure 7 and tends to smaller values of \(10^{-6}\) and \(10^{-5}\) for the first and second sub-problems, respectively.
In the case of a constant input current, the approximate and reference solutions are shown in Figure 8. The proposed method's effectiveness is demonstrated by comparing the approximate results with the reference solutions and with the numerical solutions obtained using the vanilla PINN. The absolute errors of the solutions are shown in Figure 9, and the plots of the loss functions are displayed in Figure 10; they converge to values around \(10^{-6}\) for both sub-problems. The relative \(L^{2}\) norm of the errors of the solutions is presented in Table 5. These results demonstrate the capability and accuracy of the proposed algorithm in solving this model.
The fractional-order Hodgkin-Huxley model was also studied in Section 4.4. The system was modeled using Caputo's fractional derivative and simulated using the proposed method. The effect of memory, associated with the fractional order, on firing activity was also examined. To address this, the \(L^{1}\) scheme was improved for solving the fractional-order model by using full-domain memory, rather than sub-domain memory, to calculate the fractional derivative. The numerical results obtained using the improved
method for \(q_{i}=0.8,0.6,0.4\) are shown in Figures 11, 13 and 15, where the approximate results are compared with the reference solutions. The plots of the loss functions are also shown in Figures 12, 14, and 16, demonstrating the efficiency and convergence of the current method. The relative \(L^{2}\) norm of errors of the solutions is presented in Table 6, and as shown in the Section, the results are accurate and have converged to the reference solutions.
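For completeness, the Caputo fractional derivative of order \(0<q_{i}<1\) underlying the FO-HH model has the standard definition below; the full-domain-memory variant of the \(L^{1}\) scheme discretizes this integral over the entire interval \([0,t]\) rather than over a sub-domain:

\[{}^{C}_{0}D^{q_{i}}_{t}u(t)=\frac{1}{\Gamma(1-q_{i})}\int_{0}^{t}\frac{u^{\prime}(s)}{(t-s)^{q_{i}}}\,ds,\qquad 0<q_{i}<1.\]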
The voltage responses for the different orders of fractional derivatives under the constant input current \(I=20\) are also displayed in Figure 17. The figure demonstrates that various spike patterns can be produced. As \(q_{i}\) approaches 1, the interspike interval decreases, resulting in an increase in the number of spikes in the same time period; furthermore, the first spike occurs at a later time. Following the cessation of the injected current, the memory-dependent spiking activity can also be observed by applying a step-function current. Additionally, Figure 17 shows that the regularity of the solutions depends on the order of the fractional derivatives: as \(q_{i}\) decreases, the regularity increases. Table 6 shows that irregular solutions are more sensitive to parameter initialization, leading to a greater standard deviation. To address this issue, the network architecture can be improved.
The results in this section demonstrate high accuracy, and the solutions have converged to the reference values.
## 6 Conclusion
In this study, we introduced a novel method for solving neuron models represented as systems of differential equations. The Splitting PINN algorithm was demonstrated to be effective and accurate through comparisons with reference solutions and the mean relative \(L^{2}\) norm of errors. In addition, the results of the fractional-order Hodgkin-Huxley model highlighted the effect of memory on firing activity and voltage responses, showing that as \(q_{i}\) approaches 1, the interspike interval decreases while the number of spikes in the same time period increases.
This research provides valuable insights into the behavior of neuron membranes and the various spike patterns that can be generated. The performance of the proposed method demonstrates its superiority over the vanilla PINN algorithm in solving complex neuron models represented as systems of differential equations. This study contributes to the development of tools for investigating the behavior of neurons and the underlying mechanisms of neural activity.
In conclusion, the proposed Splitting PINN algorithm is a promising method for solving systems of differential equations and provides valuable insights into the behavior of neurons and their underlying mechanisms.
## Acknowledgments
We would like to thank Dr. Khemraj Shukla and Dr. Ehsan Kharazmi for helpful discussions. This work was supported by AFOSR MURI funding (FA9550-20-1-0358) and Graphs and Spikes for Earth and Embedded Systems (SEA-CROGS) project. Fanhai Zeng is supported by the National Natural Science Foundation of China (12171283), the National Key R&D Program of China (2021YFA1000202, 2021YFA1000200), the Science Foundation Program for Distinguished Young Scholars of Shandong (Overseas) (2022HWYQ-045).
|
2302.06586 | Stitchable Neural Networks | The public model zoo containing enormous powerful pretrained model families
(e.g., ResNet/DeiT) has reached an unprecedented scope, which
significantly contributes to the success of deep learning. As each model family
consists of pretrained models with diverse scales (e.g., DeiT-Ti/S/B), a
fundamental question naturally arises of how to efficiently assemble these
readily available models in a family for dynamic accuracy-efficiency trade-offs
at runtime. To this end, we present Stitchable Neural Networks (SN-Net), a
novel scalable and efficient framework for model deployment. It cheaply
produces numerous networks with different complexity and performance trade-offs
given a family of pretrained neural networks, which we call anchors.
Specifically, SN-Net splits the anchors across the blocks/layers and then
stitches them together with simple stitching layers to map the activations from
one anchor to another. With only a few epochs of training, SN-Net effectively
interpolates between the performance of anchors with varying scales. At
runtime, SN-Net can instantly adapt to dynamic resource constraints by
switching the stitching positions. Extensive experiments on ImageNet
classification demonstrate that SN-Net can obtain on-par or even better
performance than many individually trained networks while supporting diverse
deployment scenarios. For example, by stitching Swin Transformers, we challenge
hundreds of models in Timm model zoo with a single network. We believe this new
elastic model framework can serve as a strong baseline for further research in
wider communities. | Zizheng Pan, Jianfei Cai, Bohan Zhuang | 2023-02-13T18:37:37Z | http://arxiv.org/abs/2302.06586v3 | # Stitchable Neural Networks
###### Abstract
The public model zoo containing enormous powerful pretrained model families (e.g., ResNet/DeiT) has reached an unprecedented scope, which significantly contributes to the success of deep learning. As each model family consists of pretrained models with diverse scales (e.g., DeiT-Ti/S/B), a fundamental question naturally arises: how to efficiently assemble these readily available models in a family for dynamic accuracy-efficiency trade-offs at runtime? To this end, we present Stitchable Neural Networks (SN-Net), a novel scalable and efficient framework for model deployment. It cheaply produces numerous networks with different complexity and performance trade-offs given a family of pretrained neural networks, which we call anchors. Specifically, SN-Net splits the anchors across the blocks/layers and then stitches them together with simple stitching layers to map the activations from one anchor to another. With only a few epochs of training, SN-Net effectively interpolates between the performance of anchors with varying scales. At runtime, SN-Net can instantly adapt to dynamic resource constraints by switching the stitching positions. Extensive experiments on ImageNet classification demonstrate that SN-Net can obtain on-par or even better performance than many individually trained networks while supporting diverse deployment scenarios. For example, by stitching Swin Transformers, we challenge hundreds of models in the Timm model zoo with a single network. We believe this new elastic model framework can serve as a strong baseline for further research in wider communities.
## 1 Introduction
The vast computational resources available and large amount of data have driven researchers to build tens of thousands of powerful deep neural networks with strong performance, which have largely underpinned the most recent breakthroughs in machine learning and much broader artificial intelligence. Up to now, there are \(\sim\)81k models on HuggingFace [57] and \(\sim\)800 models on Timm [56] that are ready to be downloaded and executed without the overhead of reproducing them. Despite the large model zoo, a model family (_e.g._, DeiT-Ti/S/B [52]) that contains pretrained models with functionally similar architectures but different scales only covers a coarse-grained level of model complexity/performance, where each model only targets a specific resource budget (_e.g._, FLOPs). Moreover, a model family is not flexible enough to adapt to dynamic resource constraints since each individual model is not re-configurable due to its fixed computational graph. In reality, we usually need to deploy models to diverse platforms with different resource
constraints (_e.g._, energy, latency, on-chip memory). For instance, a mobile app in Google Play has to support tens of thousands of unique Android devices, from a high-end Samsung Galaxy S22 to a low-end Nokia X5. Therefore, given a family of pretrained models in the model zoo, a fundamental research question naturally arises: _how to effectively utilise these off-the-shelf pretrained models to handle diverse deployment scenarios for Green AI [49]?_
To answer this question, a naive solution is to train individual models with different accuracy-efficiency trade-offs from scratch. However, such a method incurs training time and cost that grow linearly with the number of possible cases. Therefore, one may consider the existing scalable deep learning frameworks, such as model compression and neural architecture search (NAS), to obtain models at different scales for diverse deployment requirements. Specifically, network compression approaches such as pruning [22, 25, 28], quantization [35, 46, 66] and knowledge distillation [7, 47, 51] aim to obtain a small model from a large and well-trained network. However, these approaches only target one specific resource budget (see Figure 1 (a)) and are thus not flexible enough to meet the requirements of real-world deployment scenarios. On the other hand, one-shot NAS [31, 40], a typical NAS framework that decouples the training and specialization stages, seeks to train an over-parameterized supernet that supports many sub-networks for run-time dynamics (see Figure 1 (b)), but training the supernet is extremely time-consuming and computationally expensive (_e.g._, 1,200 GPU hours on 32 V100 GPUs in OFA [4]). To summarize, the existing scalable deep learning frameworks are still limited to a single model design space, and thus cannot inherit the rich knowledge from pretrained model families in a model zoo for better flexibility and accuracy. Besides, they also require complicated training strategies to guarantee good model performance.
In this work, we present Stitchable Neural Network (SN-Net), a novel scalable deep learning framework for efficient model design and deployment which quickly stitches an off-the-shelf pretrained model family with much less training effort to cover a fine-grained level of model complexity/performance for a wide range of deployment scenarios (see Figure 1 (c)). Specifically, SN-Net is motivated by the previous observations [2, 23, 10] that the typical minima reached by SGD can be stitched to each other with low loss penalty, which implies architectures of the same model family pretrained on the same task can be stitched. Based on this insight, SN-Net directly selects the well-performed pretrained models in a model family as "anchors", and then inserts a few simple stitching layers at different positions to transform the activations from one anchor to its nearest anchor in terms of complexity. In this way, SN-Net naturally interpolates a path between neighbouring anchors of different accuracy-efficiency trade-offs, and thus can handle dynamic resource constraints _with a single neural network at runtime_. An example is shown in Figure 2, where a single Swin-based SN-Net is able to do what hundreds of models can do with only 50 epochs training on ImageNet-1K.
We systematically study the design principles for SN-Net, including the choice of anchors, the design of stitching layers, the stitching direction and strategy, along with a sufficiently simple but effective training strategy. With comprehensive experiments, we show that SN-Net demonstrates promising advantages: 1) Compared to the existing prevalent scalable deep learning frameworks (Figure 1), SN-Net is a new universal paradigm which breaks the limit of a single pretrained model or supernet design by extending the design space into a large number of model families in the model zoo, forming a "many-to-many" pipeline. 2) Different from NAS training that requires complex optimization techniques [4, 65], training SN-Net is as easy as training individual models while getting rid of the huge computational cost of training from scratch. 3) The final performance of stitches is almost predictable due to the interpolation-like performance curve between anchors, which implies that we can selectively train a number of stitches prior to training based on different deployment scenarios.
In a nutshell, we summarize our contributions as follows:
* We introduce Stitchable Neural Networks, a new universal framework for elastic deep learning by directly utilising the pretrained model families in model zoo via model stitching.
* We provide practical principles to design and train SN-Net, laying down the foundations for future research.
* Extensive experiments demonstrate that compared to training individual networks from scratch, _e.g._, a single DeiT-based [52] SN-Net can achieve flexible accuracy-efficiency trade-offs at runtime while reducing \(22\times\) training cost and local disk storage.

Figure 2: **One** Stitchable Neural Network _vs._**200** models in Timm model zoo [56]. It shows an example of SN-Net by stitching ImageNet-22K pretrained Swin-Ti/S/B. Compared to each individual network, SN-Net is able to instantly switch network topology at runtime and covers a wide range of computing resource budgets. Larger and darker dots indicate a larger model with more parameters and higher complexity.
## 2 Related Work
Model stitching.Model stitching was initially proposed by Lenc [23] to study the equivalence of representations. Specifically, they showed that the early portion of a trained network can be connected with the last portion of another trained network by a \(1\times 1\) convolution stitching layer without significant performance drop. Most recently, Yamini [2] revealed that neural networks, even with different architectures or trained with different strategies, can also be stitched together with only a small effect on performance. As a concurrent work to [2], Adrian [10] studied using model stitching as an experimental tool to match neural network representations. They demonstrated that common similarity indices (_e.g._, CKA [21], CCA [14], SVCCA [44]) are not correlated with the performance of the stitched model. Unlike these previous works, which view model stitching as a tool to measure neural network representations, this paper unleashes the power of model stitching as a general approach for utilising the pretrained model families in the large-scale model zoo to obtain a single scalable neural network at a low cost that can instantly adapt to diverse deployment scenarios. More recently, Yang proposed DeRy [62] to dissect and reassemble arbitrary pretrained models into a new network for a certain resource constraint (_e.g._, FLOPs) one at a time. Unlike DeRy, the proposed SN-Net supports numerous sub-networks by stitching the off-the-shelf model families, being capable of handling diverse resource budgets at deployment time.
Neural architecture search.Neural architecture search (NAS) [68] aims to automatically search for well-performing network architectures in a pre-defined search space under different resource constraints. In the early attempts [68, 69], NAS consumed prohibitive computational cost (_e.g._, 500 GPUs across 4 days in [69]) due to the requirement of training individual sub-networks until convergence for accurate performance estimation. To address this problem, one-shot NAS [31, 33, 5, 60] has been proposed to improve NAS efficiency by weight sharing, where multiple subnets share the same weights with the supernet. However, training a supernet still requires intensive computing resources. Most recently, zero-shot NAS [34, 1, 8, 20] has been proposed to identify good architectures prior to training. However, obtaining the final model still requires training from scratch. Compared to NAS, our method builds upon the off-the-shelf families of pretrained models in the model zoo, which exploits the large model design space and is able to assemble the existing rich knowledge from heterogeneous models for flexible and diverse model deployments.
Vision Transformers.Vision Transformers [11] are emerging deep neural networks which have challenged the de-facto standard of convolutional neural networks on vision tasks. The majority of the existing efforts focus on improving the performance of ViT as a single general vision backbone [32, 37, 53, 54, 61] or adopting ViT as a strong module for modeling global relationships to address downstream tasks [58, 26, 50]. Another line of works focuses on improving ViT efficiency via token pruning [38, 45], quantization [27, 30] and dynamic inference [45, 55], _etc_. Most recently, large-scale self-supervised pretraining has helped ViTs achieve promising results on ImageNet, including contrastive learning [6, 9] and masked image modeling [15, 19, 67, 3]. However, these models are designed to be over-parameterized and have a fixed computational cost, which is inflexible at the inference stage and cannot adapt to diverse and dynamic deployment environments. Instead of proposing or pretraining a new ViT architecture, we utilize different pretrained ViTs or even CNNs [17, 59] to show that the proposed SN-Net is a general framework to assemble the existing model families.
## 3 Method
In this section, we first introduce the preliminary of model stitching at Section 3.1. Next, we describe the details of our proposed stitchable neural networks at Section 3.2.
### Preliminaries of Model Stitching
Let \(\theta\) be the model parameters of a pretrained neural network and \(f_{i}\) represent the function of the \(i\)-th layer. A typical feed-forward neural network with \(L\) layers can be defined as a composition of functions: \(f_{\theta}=f_{L}\circ\cdots\circ f_{1}\), where \(\circ\) indicates the composition, and \(f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}\) maps inputs from an input space \(\mathcal{X}\) to the output space \(\mathcal{Y}\). Let \(\mathbf{X}\in\mathcal{X}\) be an input to the network. The basic idea of model stitching involves splitting a neural network into two portions of functions at a layer index \(l\). The first portion of layers composes the front part that maps the input \(\mathbf{X}\) into the activation space of the \(l\)-th layer \(\mathcal{A}_{\theta,l}\), which can be formulated as
\[H_{\theta,l}(\mathbf{X})=f_{l}\circ\cdots\circ f_{1}(\mathbf{X})=\mathbf{X}_{l}, \tag{1}\]
where \(\mathbf{X}_{l}\in\mathcal{A}_{\theta,l}\) denotes the output feature map at the \(l\)-th layer. Next, the last portion of layers maps \(\mathbf{X}_{l}\) into the final output
\[T_{\theta,l}(\mathbf{X}_{l})=f_{L}\circ\cdots\circ f_{l+1}(\mathbf{X}_{l}). \tag{2}\]
In this case, the original neural network function \(f_{\theta}\) can be defined as a composition of the above functions \(f_{\theta}=T_{\theta,l}\circ H_{\theta,l}\) for all layer indexes \(l=1,...,L-1\).
Now suppose we have another pretrained neural network \(f_{\phi}\). Let \(\mathcal{S}:\mathcal{A}_{\theta,l}\rightarrow\mathcal{A}_{\phi,m}\) be a stitching layer which implements a transformation from the activation space of the \(l\)-th layer of \(f_{\theta}\) to the activation space of the \(m\)-th layer
of \(f_{\phi}\). The basic idea of model stitching is then to obtain a new network defined by \(\mathcal{S}\), which can be expressed as
\[F_{S}(\mathbf{X})=T_{\phi,m}\circ\mathcal{S}\circ H_{\theta,l}(\mathbf{X}). \tag{3}\]
By controlling the stitched layer indexes \(l\) and \(m\), model stitching can produce a sequence of stitched networks. It has been observed by [23] that models of the same architecture but with different initializations (_i.e_., random seeds) can be stitched together with low loss penalty. Further experiments by [2, 10] have demonstrated that different architectures (_e.g_., ViTs and CNNs) may also be stitched without significant performance drop, regardless of whether they are trained in different ways, such as self-supervised or supervised learning.
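To make the composition in Eq. (3) concrete, the following is a minimal PyTorch sketch of a stitched network. Here `front` and `back` are hypothetical modules holding \(H_{\theta,l}\) and \(T_{\phi,m}\), and the token-sequence shapes are illustrative assumptions rather than the paper's exact implementation.

```python
import torch.nn as nn

class StitchedNet(nn.Module):
    """F_S(X) = T_phi,m( S( H_theta,l(X) ) ), cf. Eq. (3)."""
    def __init__(self, front: nn.Module, back: nn.Module, d1: int, d2: int):
        super().__init__()
        self.front = front                              # H_theta,l: first l layers of f_theta
        self.stitch = nn.Conv1d(d1, d2, kernel_size=1)  # 1x1 conv between activation spaces
        self.back = back                                # T_phi,m: last L-m layers of f_phi

    def forward(self, x):
        x = self.front(x)                               # (B, N, D1) token sequence
        x = self.stitch(x.transpose(1, 2)).transpose(1, 2)  # map D1 -> D2 per token
        return self.back(x)
```

A 1x1 convolution applied to the transposed token sequence is equivalent to a linear projection shared across all tokens, which is exactly the per-position transformation the stitching layer is meant to learn.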
### Stitchable Neural Networks
Based on the insight of model stitching, we propose Stitchable Neural Networks (SN-Net), a new "many-to-many" elastic model paradigm. SN-Net is motivated by an increasing number of pretrained models in the publicly available model zoo [56], where most of the individually trained models are not directly adjustable to dynamic resource constraints. To this end, SN-Net inserts a few stitching layers to smoothly connect a family of pretrained models to form diverse stitched networks permitting run-time network selection. The framework of SN-Net is illustrated in Figure 3 by taking plain ViTs [11] as an example. For brevity, we will refer to the models to be stitched as "**anchors**" and the models derived by stitching anchors as "**stitches**". In the following, we describe the concrete approach in detail, including what, how and where to stitch, the stitching strategy and space, as well as an effective and efficient training strategy for SN-Net.
What to stitch: the choice of anchors.In general, the large-scale model zoo determines the powerful representation capability of SN-Net as it is a universal framework for assembling the prevalent families of architectures. As shown in Section 4, SN-Net works for stitching representative ViTs and CNNs. However, intuitively, anchors that are pretrained on different tasks can learn very different representations (_e.g_., ImageNet [48] and COCO [29]) due to the large distribution gap of different domains [36], thus making it difficult for stitching layers to learn to transform activations among anchors. Therefore, the selected anchors should be consistent in terms of the pretrained domain.
How to stitch: the stitching layer and its initialization.Conceptually, the stitching layer should be as simple as possible since its aim is not to improve the model performance, but to transform the feature maps from one activation space to another [2]. To this end, the stitching layers in SN-Net are simply \(1\times 1\) convolutional layers. By default in PyTorch [39], these layers are initialized based on Kaiming initialization [16].
However, different from training a network from scratch as in most works [3, 54, 32, 53], SN-Net is built upon pretrained models. In this case, the anchors have already learned good representations, which allows us to directly obtain an accurate transformation matrix by solving the following least-squares problem
\[\|\mathbf{AM}_{o}-\mathbf{B}\|_{F}=\min\|\mathbf{AM}-\mathbf{B}\|_{F}, \tag{4}\]
where \(\mathbf{A}\in\mathbb{R}^{N\times D_{1}}\) and \(\mathbf{B}\in\mathbb{R}^{N\times D_{2}}\) are two feature maps of the same spatial size but with different numbers of hidden dimensions, \(N\) denotes the length of the input sequence, \(D_{1},D_{2}\) refer to the numbers of hidden dimensions, and \(\mathbf{M}\in\mathbb{R}^{D_{1}\times D_{2}}\) is the targeted transformation matrix.

Figure 3: Illustration of the proposed **Stitchable Neural Network**, where three pretrained variants of DeiTs are connected with simple stitching layers (\(1\times 1\) convolutions). We share the same stitching layer among neighboring blocks (_e.g_., 2 blocks with a stride of 2 in this example) between two models. Apart from the basic anchor models, we obtain many sub-networks (stitches) by stitching the nearest pairs of anchors in complexity, _e.g_., DeiT-Ti and DeiT-S (the blue line), DeiT-S and DeiT-B (the green line). Best viewed in color.
One can observe that Eq. (4) admits a closed-form solution based on singular value decomposition, in which case the optimal solution can be obtained through an orthogonal projection in the space of matrices,
\[\mathbf{M}_{o}=\mathbf{A}^{\dagger}\mathbf{B}, \tag{5}\]
where \(\mathbf{A}^{\dagger}\) denotes the Moore-Penrose pseudoinverse of \(\mathbf{A}\). Obtaining \(\mathbf{M}_{o}\) requires only a few seconds on one CPU with hundreds of samples. However, we will show in Section 4.2 that directly using the least-squares solution achieves unstable performance for stitches, though it provides a good initialization for learning stitching layers with SGD. Therefore, the least-squares solution serves as the default initialization approach for the stitching layers in SN-Net.
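As a sketch of how Eqs. (4)-(5) can initialize the 1x1 convolution from the previous snippet, assume `A` and `B` are activation matrices of shape (N, D1) and (N, D2) gathered at the two stitched positions from roughly 100 samples; the function name and interface are illustrative.

```python
import torch

@torch.no_grad()
def ls_init(stitch: torch.nn.Conv1d, A: torch.Tensor, B: torch.Tensor) -> None:
    """Set the stitching layer to M_o = A^+ B, the least-squares solution of Eq. (4)."""
    M = torch.linalg.pinv(A) @ B              # (D1, D2) via Moore-Penrose pseudoinverse
    stitch.weight.copy_(M.t().unsqueeze(-1))  # Conv1d weight has shape (D2, D1, 1)
    stitch.bias.zero_()                       # start from a purely linear transformation
```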
Where to stitch: the stitching directions.Given anchors with different scales and complexities, there are two options to stitch them together: **Fast-to-Slow** and **Slow-to-Fast**. Taking two anchors as an example (Figure 4), Fast-to-Slow takes the first portion of layers (_i.e._, Eq. (1)) from a smaller and faster model, and the last portion of layers (_i.e._, Eq. (2)) from a larger and slower model, while Slow-to-Fast goes in the reverse direction. However, as Fast-to-Slow is more aligned with the existing model design principle (_i.e._, increasing the network width as it goes deeper), we will show in Section 4.2 that it achieves more stable and better performance than Slow-to-Fast. In this case, we take Fast-to-Slow as the default stitching direction in SN-Net. Besides, as different anchors may reach very different minima, we propose a **nearest stitching** strategy that limits stitching to two anchors of the nearest model complexity/performance. Thus, each stitch in SN-Net assembles a pair of neighbouring anchors. We will show in Section 4.2 that stitching across anchors without the nearest stitching constraint achieves inferior performance.
Way to stitch: stitching as sliding windows.Our stitching strategy is inspired by the main observation that neighboring layers dealing with feature maps of the same scale share similar representations [21]. To this end, we propose to stitch anchors as sliding windows, where the same window shares a common stitching layer, as shown in Figure 5. Let \(L_{1}\) and \(L_{2}\) be the depths of two anchors. Then, intuitively, there are two cases when stitching layers/blocks between the two anchors: **paired stitching** (\(L_{1}=L_{2}\)) and **unpaired stitching** (\(L_{1}\neq L_{2}\)). In the case of \(L_{1}=L_{2}\), the stitching can be controlled by sliding windows with a window size \(k\) and a stride \(s\). Figure 5 left shows an example with \(k=2,s=1\). However, in most cases the depths are unequal, as different model architectures have different scales. Even so, matching \(L_{1}\) layers to \(L_{2}\) layers can easily be done by nearest interpolation, where each layer from the shallower anchor can be stitched with more than one layer of the deeper anchor, as shown in Figure 5 right.
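The nearest-interpolation matching for unpaired stitching can be sketched as below; the exact index arithmetic is our assumption, since the text only specifies that each layer of the shallower anchor may be stitched with more than one layer of the deeper anchor.

```python
def match_layers(l1: int, l2: int) -> dict[int, list[int]]:
    """Assign each of the l2 deeper-anchor layers to its nearest of the
    l1 shallower-anchor layers (l1 <= l2), so that one shallow layer can
    be stitched with several deep layers."""
    mapping: dict[int, list[int]] = {i: [] for i in range(l1)}
    for j in range(l2):
        mapping[round(j * (l1 - 1) / (l2 - 1))].append(j)
    return mapping

print(match_layers(3, 6))  # {0: [0, 1], 1: [2, 3], 2: [4, 5]}
```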
Stitching space.In SN-Net, we first split the anchors along their internal layers/blocks at each stage and then apply our stitching strategy within each stage. As different anchors have different architectural configurations, the size of the stitching space varies with the depth of the selected anchors and the stitching settings (_i.e._, the kernel size \(k\) and stride \(s\)). For example, with \(k=2\) and \(s=1\), DeiT-based SN-Net can have 71 stitches under the constraint of our nearest stitching principle, or 732 stitches without this constraint, as shown in Figure 9 (b). We provide detailed illustrations for this figure in the supplementary material. More stitches can be obtained by choosing anchors with larger scales or by configuring the sliding windows with a larger window size or smaller stride. Overall, compared to one-shot NAS, which can support more than \(10^{20}\) sub-networks, SN-Net has a relatively smaller space (up to hundreds or thousands). However, we point out that even though NAS has a much larger architecture space, during deployment it only focuses on the sub-networks on the Pareto frontier of performance and resource consumption [64]; thus the vast majority of sub-networks are ignored. In contrast, we will show in Section 4.2 that the stitches in SN-Net distribute smoothly among the anchors, which indicates that the analogous performance curve can almost be estimated without much searching cost, permitting fast deployment.
Figure 4: Stitching direction: Fast-to-Slow _vs_. Slow-to-Fast.
Figure 5: Stitching as sliding windows, where paired stitching is proposed for stitching models with equal depth and unpaired stitching is utilised for models with unequal depth.
Training strategy.Given the anchors with different accuracy-efficiency trade-offs from the model zoo, our aim is to train an elastic joint network that covers a large number of stitches in a highly efficient way so that it can fit diverse resource constraints at low energy cost. The detailed training algorithm is provided in Algorithm 1 in PyTorch style, where we first define a configuration set that contains all possible stitches and initialize all stitching layers with least-squares matching by solving Eq. (4). Next, at each training iteration, we randomly sample a stitch and follow the standard training process as in common practices [32, 54]. To further improve the performance of stitches, we also adopt knowledge distillation with RegNetY-160 [43] as the teacher model. The overall training process requires only a few epochs (_e.g_., 50) on ImageNet, which is far less than the supernet training in NAS [4, 60, 5] and other techniques [63, 65] that train networks from scratch. Moreover, as the anchors are already well-trained, we do not observe significant interference [4] among stitches, as shown in the experiments.
```
1:\(M\) pretrained anchors to be stitched. Configuration set \(E=\{e_{1},...,e_{Q}\}\) with \(Q\) stitching positions.
2:Initialize all stitching layers by least-squares matching
3:for\(i=1,...,n_{iters}\)do
4: Get next mini-batch of data \(\mathbf{X}\) and label \(\mathbf{Y}\).
5: Clear gradients, \(optimizer.zero\_grad()\).
6: Randomly sample a stitching \(e_{q}\) from set \(E\).
7: Execute the current stitch, \(\mathbf{\hat{Y}}=F_{e_{q}}(\mathbf{X})\).
8: Compute loss, \(loss=criterion(\mathbf{\hat{Y}},\mathbf{Y})\).
9: Compute gradients, \(loss.backward()\).
10: Update weights, \(optimizer.step()\).
11:endfor
```
**Algorithm 1** Training Stitchable Neural Networks
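For readers who prefer runnable code, Algorithm 1 might be rendered in PyTorch roughly as follows; `snnet(x, cfg)`, which executes the stitch selected by `cfg`, is an assumed interface, and the RegNetY-160 distillation term mentioned above is omitted for brevity.

```python
import random
import torch

def train_snnet(snnet, configs, loader, epochs=50, lr=1e-4, device="cuda"):
    """Jointly train all stitches by sampling one per iteration (cf. Algorithm 1)."""
    optimizer = torch.optim.AdamW(snnet.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    snnet.to(device).train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            cfg = random.choice(configs)   # randomly sample a stitch e_q from E
            loss = criterion(snnet(x, cfg), y)
            loss.backward()
            optimizer.step()
```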
## 4 Experiment
Implementation details.We conduct all experiments on ImageNet-1K [48], a large-scale image dataset which contains \(\sim\)1.2M training images and 50K validation images from 1K categories. Model performance is measured by Top-1 accuracy. Furthermore, we report the FLOPs and throughput as indicators of theoretical complexity and real speed on hardware, respectively. We study stitching plain ViTs, hierarchical ViTs, CNNs, and CNN with ViT. We choose the representative model families as anchors: DeiT [52], Swin Transformer [32] and ResNet [17]. By default, we randomly sample 100 images from the training set to initialize the stitching layers. For paired stitching, we set the default sliding kernel size as 2 and the stride as 1. For unpaired stitching, we match layers by nearest interpolation. Unless otherwise specified, all experiments adopt a total batch size of 1,024 on 8 V100 GPUs. We train DeiT/Swin with 50 epochs with an initial learning rate of \(1\times 10^{-4}\). For the experiments with ResNet, we train with 30 epochs based on the training scripts from timm [56] with an initial learning rate of \(0.05\). All other hyperparameters adopt the default setting as in [52, 56, 32]. For hierarchical models, we scale the learning rate of the anchor parameters by \(1/10\) compared to that of stitching layers.
### Main Results
Stitching plain ViTs.Based on Algorithm 1, we first generate a stitching configuration set by assembling ImageNet-1K pretrained DeiT-Ti/S/B, which contains 71 stitches including the 3 anchors. Then we jointly train the stitches in DeiT-based SN-Net on ImageNet for 50 epochs. The whole training and evaluation process takes only around 110 and 3 GPU hours on V100 GPUs, respectively. In Figure 6 left, we visualize the performance of all 71 stitches, including the anchors DeiT-Ti/S/B (highlighted as yellow stars). In general, SN-Net achieves a wide range of successful stitches, which attain smoothly increasing performance when stitching more blocks from a larger anchor. We also observe a phenomenon of **model-level interpolation** between two anchors: _as the architecture of a stitch becomes more similar to the nearest larger anchor, its performance also gradually gets closer to it._
Moreover, we compare individually trained models from scratch with selected stitches from our jointly optimized SN-Net. For brevity, we denote by "Ti-S" the stitches with DeiT-Ti/S as anchors and by "S-B" the stitches with DeiT-S/B as anchors. The results are shown in Table 1. As the table indicates, compared to individually trained "S-B" stitches, SN-Net achieves even better performance. It is worth noting that some stitches fail to converge when trained from scratch. However, since all anchors in SN-Net have been well-trained, the stitches can be easily interpolated among them. Also note that "Ti-S" stitches achieve inferior performance compared to the individually trained ones. We speculate that due to a slightly larger performance gap between DeiT-Ti/S compared to DeiT-S/B, training Ti-S stitches from scratch may help to find a better local optimum. We also notice a performance drop for the anchor DeiT-Ti, for which we assume a more intelligent stitch sampling strategy could help in future work. Overall, a single SN-Net can cover a wide range of accuracy-efficiency trade-offs while achieving performance competitive with models trained from scratch. To be emphasized, SN-Net reduces around \(22\times\) training cost (\(71\times 300\) epochs _vs_. \(3\times 300+50\) epochs) and local disk storage (2,630M _vs_. 118M) compared to training and saving all individual networks.
Stitching hierarchical ViTs.Furthermore, we conduct experiments by stitching hierarchical ViTs. In particular, we assemble Swin-Ti/S/B trained on ImageNet-22K by stitching the blocks at the first three stages. Note that we do
not choose ImageNet-1K pretrained Swin models due to the minor performance gap (83.1% _vs._ 83.5%) but large difference in FLOPs (8.7G _vs._ 15.4G) between Swin-S/B. We visualize the results in Figure 6 right. It shows that the Swin-based SN-Net also achieves flexible accuracy-efficiency trade-offs among the three anchors. This strongly demonstrates that the proposed SN-Net is a general solution for both plain and hierarchical models.
**Stitching CNNs and CNN-ViT.** We show that SN-Net also works for stitching CNN models and even for connecting CNNs with ViTs. As Figure 7 shows, with only 30 epochs of training, the stitches assembled from ResNet-18 [17] to ResNet-50/Swin-Ti perform favourably, which again emphasizes that SN-Net is general for both CNNs and ViTs. Also note that ResNet-18/50 and Swin-Ti are shallow models, so we obtain a small number of stitches.
### Ablation Study
In this section, we ablate the design principles for SN-Net. Unless otherwise specified, our experiments are based on stitching DeiT-Ti/S/B with \(k=2,s=1\) and knowledge distillation with RegNetY-160. By default, the training strategy is the same as in Section 4.1, _e.g_., 50 epochs on ImageNet. We provide more ablation studies in the supplementary material, such as the effect of kernel size and stride for controlling the sliding windows during stitching, _etc_.
**Effect of different stitching layer learning strategies.** To study the effect of different learning strategies for stitching layers, we consider 4 cases: 1) **Kaiming Init**, the default initialization method in PyTorch. 2) **Least-squares (LS) Init**, the LS solution obtained by solving Eq. (4). 3) **Kaiming Init + SGD**, learning with gradient updates on ImageNet after Kaiming Init. 4) **LS Init + SGD**, learning with gradient updates on ImageNet after LS Init. We report the experimental results in Figure 8. Overall, we find that LS Init serves as a better starting point for learning stitching layers than the default Kaiming Init. Interestingly, we observe that some stitches obtained by directly matching with the LS solution perform quite well compared to Kaiming Init, as shown in Figure 8 right. However, in general, directly matching with the LS solution results in an unstable performance curve. This indicates that LS Init is not fully aware of the final performance of the stitches and that updating the stitching layers is essential.

\begin{table}
\begin{tabular}{c c c|c c|c c|c c} \# **Ti Blocks** & **\# S Blocks** & **\# B Blocks** & **FLOPs** & **Throughput** & \multicolumn{2}{c|}{**Individually Trained**} & \multicolumn{2}{c}{**SN-Net**} \\ \cline{6-9} & & & **(G)** & **(images/s)** & **Params (M)** & **Top-1 (\%)** & **Params (M)** & **Top-1 (\%)** \\ \hline
12 & 0 & 0 & 1.3 & 2,839 & 5.7 & 72.1 & & 70.6 \\
9 & 3 & 0 & 2.1 & 2,352 & 10.0 & 75.9 & & 72.6 \\
6 & 6 & 0 & 2.9 & 1,963 & 14.0 & 78.2 & & 76.5 \\
3 & 9 & 0 & 3.8 & 1,673 & 18.0 & 79.4 & & 78.2 \\
0 & 12 & 0 & 4.6 & 1,458 & 22.1 & 79.8 & 118.4 & 79.5 \\
0 & 9 & 3 & 7.9 & 1,060 & 38.7 & 79.4 & & 80.0 \\
0 & 6 & 6 & 11.2 & 828 & 54.6 & failed & & 81.5 \\
0 & 3 & 9 & 14.3 & 679 & 70.6 & 80.3 & & 82.0 \\
0 & 0 & 12 & 17.6 & 577 & 86.6 & 81.8 & & 81.9 \\ \end{tabular}
\end{table}
Table 1: Performance comparisons on ImageNet-1K between individually trained models from scratch with **300 epochs** and stitches selected from our proposed SN-Net trained with **50 epochs**. A single SN-Net with 118.4M parameters can include all possible stitches. We denote “# Ti/S/B Blocks” as the number of stitched blocks chosen from DeiT-Ti/S/B, respectively. “failed” means training such a stitched model from scratch fails to converge and incurs “loss is nan”. Throughput is measured on one RTX 3090 and averaged over 30 runs, with a batch size of 64 and input resolution of \(224\times 224\).

Figure 8: Different learning strategies for stitching layers.

Figure 6: Performance of SN-Net by stitching DeiT-Ti/S/B and Swin-Ti/S/B.

Figure 7: Effect of stitching CNNs and CNN-ViT.
**Effect of stitching directions.** In Figure 9 (a), we compare the stitching directions of Fast-to-Slow and Slow-to-Fast based on DeiT. In general, Fast-to-Slow helps to ensure better performance for most stitches. On the other hand, Slow-to-Fast yields a more unstable performance curve, especially when stitching DeiT-S/B. Compared to Fast-to-Slow, which increases the feature representation capacity by expanding the hidden dimension of activations from a narrower model to a wider one, Slow-to-Fast shrinks the hidden dimension, which contradicts the existing model design principle [17, 18, 43] of gradually expanding the hidden dimension to encode rich semantics as the network goes deeper. Therefore, the resulting information loss of Slow-to-Fast may increase the optimization difficulty.
**Effect of nearest stitching.** In SN-Net, we adopt the nearest stitching strategy, which limits a stitch to connecting a pair of anchors that have the nearest model complexity/performance. However, it is possible to simultaneously stitch more than two anchors (_e.g_., stitching all of DeiT-Ti/S/B sequentially) or to stitch anchors with a large gap in complexity/performance (_e.g_., stitching DeiT-Ti with DeiT-B). With the same 50 epochs of training, this approach produces 10\(\times\) more stitches than our default settings (732 _vs_. 71). However, as shown in Figure 9 (b), even though Ti-B and Ti-S-B achieve good interpolated performance among the anchors (_i.e_., they are stitchable), most of them cannot outperform the Ti-S and S-B stitches. In the case of Ti-B, we speculate that without a better minimum as a guide in the middle (_e.g_., DeiT-S), the local minima found by the stitching layers can be sub-optimal due to the large complexity/performance gap between the two anchors. Besides, stitching more than two anchors simultaneously does not bring obvious gains at this stage, which we leave for future work.
**Effect of tuning the full model vs. stitching layers only.** In SN-Net, the role of the stitching layers is to map feature maps from one activation space to another. However, since the anchors have been well-trained, one question is how the performance changes if we only update the stitching layers during training. In Figure 9 (c), we show that tuning stitching layers alone is only promising for some stitches. In contrast, we observe that the performance of stitches can be improved by tuning the full model. Therefore, we update the full SN-Net during training by default.
## 5 Conclusion
We have introduced Stitchable Neural Networks, a novel general framework for developing elastic neural networks that directly inherit the rich knowledge from pretrained model families in the large-scale model zoo. Extensive experiments have shown that SN-Net can deliver fast and flexible accuracy-efficiency trade-offs at runtime with low cost, fostering the massive deployment of deep models for real-world applications. With the rapid growth of the number of large-scale pretrained models [15, 42], we believe our work paves a new way for efficient model development and deployment, yielding a significant step towards Green AI. In future works, SN-Net can be extended into more tasks, such as natural language processing, dense prediction and transfer learning.
**Limitations and societal impact.** Our current training strategy randomly samples a stitch at each training iteration, which implies that with a much larger stitching space, the stitches may not be sufficiently trained unless more training epochs are used. We leave this for future work.
Figure 9: From left to right, Figure (a) shows the effect of different stitching directions. Figure (b) presents the effect of nearest stitching based on DeiT, where “Ti”, “S”, “B” denote the stitched anchors. For example, “Ti-S-B” refers to a stitch that defined by connecting the tiny, small and base variants of DeiT, sequentially. Figure (c) shows the comparison of full model tuning vs. tuning stitching layers only.
**Appendix**
We organize our supplementary material as follows.
* In Section A, we provide further explanation of the proposed nearest stitching strategy.
* In Section B, we study the effect of different sizes and strides of sliding windows for stitching.
* In Section C, we study the effect of different training epochs.
* In Section D, we show the effectiveness of our training strategy by comparing with sandwich sampling rule and inplace distillation [63].
* In Section E, we discuss the effect of training without the pretrained weights of anchors.
* In Section F, we experiment with different numbers of samples for initializing stitching layers.
* In Section G, we provide additional discussion with One-shot NAS.
* In Section H, we compare SN-Net with LayerDrop [12] at inference time.
## Appendix A Detailed Illustration of Nearest Stitching Strategy
In the proposed SN-Net, we introduce a nearest stitching strategy which limits the stitching to two anchors of the nearest complexity/performance. In Figure 10, we describe this approach in more detail based on DeiT [52]. Under nearest stitching, we limit the stitches to two types: Ti-S and S-B, which connect DeiT-Ti/S and DeiT-S/B, respectively. Experiments in the main manuscript have shown that stitching anchors with a larger complexity/performance gap or sequentially stitching more than two anchors achieves inferior performance.
## Appendix B Effect of Different Sizes and Strides of Sliding Windows
We explore different settings of sliding windows in SN-Net. In Figure 11, we visualize the results of using different kernel sizes and strides when stitching DeiT models. Overall, different settings produce different numbers of stitches but achieve similarly good performance. However, it is worth noting that within a larger window, a stitching layer needs to map activations with more dissimilar representations, which potentially results in some poorly performing stitches, as shown in the case of \(k=4,s=4\) in Figure 11.
Since obtaining the least-squares solution with more samples can increase the memory cost at the beginning of training, we set the default number of samples for initializing stitching layers to 100 to avoid a potential "out of memory" issue.
## Appendix G Compared with One-shot NAS
As discussed earlier, SN-Net is fundamentally different from one-shot NAS. Specifically, one-shot NAS trains a supernet from scratch and searches for an optimal sub-network during deployment to meet a specific resource constraint, using complicated techniques (_e.g_., evolutionary search) at expensive cost (_e.g_., \(>2K\) GPU hours in [64]). In contrast, SN-Net aims to cheaply and quickly assemble pretrained model families (_e.g_., \(\sim\)110 GPU hours) to obtain a scalable network, and to instantly select optimal stitches thanks to the interpolation effect. In our experiments, we use DeiTs and Swins as two examples to show that SN-Net is a universal framework. Besides, we show in Figure 15 that we easily achieve performance comparable with BigNASModel-XL [64] (80.7% _vs_. 80.9%) at lower FLOPs (977M _vs_. 1040M) by stitching LeViTs [13].

Figure 11: Effect of different sizes of sliding windows. \(k\) and \(s\) refer to the kernel size and stride for controlling the sliding windows. From left to right, the kernel sizes and strides of 2, 3 and 4 produce 51, 75 and 99 stitches, respectively.

Figure 12: Effect of different training epochs.

Figure 10: Four types of stitches based on DeiT-Ti/S/B. Under the proposed nearest stitching strategy, we limit the stitching between two anchors of the nearest model complexity/performance, _i.e_., Figure (a) and (b), while excluding stitching anchors with a larger complexity/performance gap (Figure (c)) or sequentially stitching more than two anchors (Figure (d)).

Figure 13: Comparison between our training strategy and common supernet training strategy in NAS (_i.e_., sandwich sampling rule and inplace distillation [63]).
## Appendix H Compared with LayerDrop at Inference Time
LayerDrop [12] is a form of structured dropout which randomly drops Transformer layers during training for regularization. It also facilitates efficient pruning by dropping some layers at inference time. In DeiT-based SN-Net, the anchors are already pretrained with a drop rate of \(0.1\). To show the advantage of our method, we train DeiT-B (_i.e_., the largest model in the DeiT family) with a more aggressive path drop rate (0.5) and achieve 81.4% Top-1 accuracy on ImageNet. However, dropping some layers of this trained network at test time performs badly, _e.g_., dropping the first 6 blocks (0.2%), the last 6 blocks (52.7%), or every other block (72.7% with 8.9G FLOPs), while our method achieves 72.6% with only 2.1G FLOPs.
|
2306.12010 | Spiking Neural Network for Ultra-low-latency and High-accurate Object
Detection | Spiking Neural Networks (SNNs) have garnered widespread interest for their
energy efficiency and brain-inspired event-driven properties. While recent
methods like Spiking-YOLO have expanded the SNNs to more challenging object
detection tasks, they often suffer from high latency and low detection
accuracy, making them difficult to deploy on latency sensitive mobile
platforms. Furthermore, the conversion method from Artificial Neural Networks
(ANNs) to SNNs struggles to maintain the complete structure of the ANNs,
resulting in poor feature representation and high conversion errors. To address
these challenges, we propose two methods: timesteps compression and
spike-time-dependent integrated (STDI) coding. The former reduces the timesteps
required in ANN-SNN conversion by compressing information, while the latter
sets a time-varying threshold to expand the information holding capacity. We
also present a SNN-based ultra-low latency and high accurate object detection
model (SUHD) that achieves state-of-the-art performance on nontrivial datasets
like PASCAL VOC and MS COCO, with about remarkable 750x fewer timesteps and 30%
mean average precision (mAP) improvement, compared to the Spiking-YOLO on MS
COCO datasets. To the best of our knowledge, SUHD is the deepest spike-based
object detection model to date that achieves ultra low timesteps to complete
the lossless conversion. | Jinye Qu, Zeyu Gao, Tielin Zhang, Yanfeng Lu, Huajin Tang, Hong Qiao | 2023-06-21T04:21:40Z | http://arxiv.org/abs/2306.12010v2 | # Spiking Neural Network for Ultra-low-latency and High-accurate Object Detection
###### Abstract
Spiking Neural Networks (SNNs) have garnered widespread interest for their energy efficiency and brain-inspired event-driven properties. While recent methods like Spiking-YOLO have extended SNNs to more challenging object detection tasks, they often suffer from high latency and low detection accuracy, making them difficult to deploy on latency-sensitive mobile platforms. Furthermore, the conversion method from Artificial Neural Networks (ANNs) to SNNs struggles to maintain the complete structure of the ANNs, resulting in poor feature representation and high conversion errors. To address these challenges, we propose two methods: timesteps compression and spike-time-dependent integrated (STDI) coding. The former reduces the timesteps required in ANN-SNN conversion by compressing information, while the latter sets a time-varying threshold to expand the information-holding capacity. We also present an SNN-based ultra-low-latency and highly accurate object detection model (SUHD) that achieves state-of-the-art performance on nontrivial datasets like PASCAL VOC and MS COCO, with a remarkable roughly 750x reduction in timesteps and a 30% mean average precision (mAP) improvement compared to Spiking-YOLO on the MS COCO dataset. To the best of our knowledge, SUHD is the deepest spike-based object detection model to date that achieves lossless conversion with ultra-low timesteps.
Spiking neural network, Object detection, Low latency, Timesteps compression
## I Introduction
With the development of high-performance computing devices, artificial neural networks (ANNs) have in recent years achieved success in many artificial intelligence tasks such as image classification [1], object detection [2], and sequential decision-making [3]. However, ANNs have huge energy consumption, which makes them difficult to deploy on mobile devices. Spiking Neural Networks (SNNs) are the third generation of artificial neural networks [4]. Inspired by biological neurons [5], they replace the complex multiplication operations in ANNs with simple accumulation operations and transmit information through spike sequences [6][7]. Due to the sparsity of spike events and the characteristics of event-driven computing, SNNs have remarkable energy efficiency and are the neural networks of choice for neuromorphic chips [8, 9, 10, 11]. It is generally believed that SNNs have greater development potential and bionic value.
Because of the non-differentiability of SNNs, gradients cannot be computed directly during backpropagation, which makes it difficult to train SNNs directly. There are currently two main methods of obtaining SNNs: one is conversion [12, 13, 14], i.e., processing trained ANN weights to obtain usable SNNs; the other is learning SNN weights from scratch [15][16] via spike-time-dependent plasticity (STDP) [17, 18, 19] or spike-time-dependent backpropagation (STDB) [20, 21]. On this basis, a variety of related optimization algorithms have been derived [22, 23, 24]. Learning from scratch requires a significant amount of time and computational resources relative to the conversion approach. The conversion approach takes full advantage of the ease of training ANNs and can promptly obtain usable weights from trained ANNs [25]. Both methods are widely used in shallow SNNs with good results. Q. Yu et al. used a double-threshold scheme and an augmented spike scheme to achieve lossless conversion on MNIST, FashionMNIST and CIFAR10 [14]. C. Hong et al. proposed a modified SpikeProp learning algorithm, which ensures better learning stability for SNNs [26]. N. Rathi et al. proposed using the first convolutional layer as the coding layer together with a gradient-descent-based training method to bring SNN accuracy close to that of an ANN with the same structure [27]. At the same time, many works have been devoted to using SNNs in deeper networks. J. Ding et al. achieved an ANN-to-SNN conversion with a loss of 0.8% using a PreActResNet-34 network on the CIFAR-100 dataset [28]. Y. Li et al. achieved an ANN-to-SNN conversion with 0.23% accuracy loss on the MS COCO dataset using ResNet-50 [29]. Y. Hu et al. attempted the conversion on deep ResNet networks and achieved an accuracy loss of about 1.16% with 50 layers on the ImageNet dataset [30]. These works pushed the development of SNNs toward deeper networks.
In the past, SNNs were mainly applied to simple tasks such as image classification [13, 14, 31, 32]. In recent years, some works have tried to extend SNNs to more challenging tasks, such as multi-sensory integration learning [33], object detection [12], and reinforcement learning [34]. One of the directions attracting the most attention is SNN-based object detection. Spiking-YOLO [12] is the first SNN-based
object detection model; it pushed the boundaries of the field by achieving a near lossless ANN-to-SNN conversion at timesteps = 8000 based on a YOLOv3-tiny backbone network. Nevertheless, the practicality and deployment of this network are hampered by its excessive timesteps requirement, and its depth of 23 layers limits its detection performance. FSHNN [35] combines STDP, STBP, and Monte Carlo Dropout methods, enabling it to exceed the accuracy of its heterogeneous ANN-based RetinaNet counterpart. It reduces the timesteps from thousands to 300, significantly improving energy efficiency. However, 300 timesteps is still not sufficient for the model to run on mobile robot platforms, and the model's object detection accuracy on the COCO dataset still does not match the performance of current mainstream ANN models, such as YOLOv5.
In summary, most previous works [12, 36, 37, 38, 39, 24] require tremendous timesteps to reach lossless conversion, which makes it difficult to deploy SNNs on latency-sensitive mobile devices. The excessively slow processing speed is also hardly acceptable for real-time object detection. In addition, the current ANN-SNN conversion methods do not apply to all ANN structures, which may partly damage the structure of the ANN during conversion, reducing the accuracy of the ANN and, equivalently, the accuracy of the converted SNN. Therefore, converting the complete structure of a deep neural network into an SNN with both ultra-low timesteps and high accuracy remains a significant challenge.
To overcome the challenges mentioned above, we introduce two novel methods to reduce time latency while maintaining comparable accuracy and low energy cost: timesteps compression and spike-time-dependent integrated coding. Further, we present an low-latency and high-accurate SNN based object detection model called SUHD. Our contributions can be summarized as follows:
* A timesteps compression method is proposed, which compresses multi-timesteps into one timestep, reducing timesteps requirements for ANN to SNN conversion and inference, providing the possibility of SNNs deployment for engineering applications. Compared to Spiking-YOLO, we are able to reduce the timesteps requirement by more than 750 times with comparable accuracy.
* We propose a spike-time-dependent integrated coding (STDI) method and implement it using a time-varying threshold neuron model. This approach further reduces the inference time of SNN by approximately 38%, which is mainly caused by the increased information capacity of individual spikes.
* We implement the conversion of the Spatial Pyramid Pooling-Fast (SPPF) structure, achieving lossless conversion of the Maxpool layer with any stride and realizing a lossless conversion of SPPF.
* Based on the methods mentioned above, we propose an ultra-low-latency and highly accurate SNN-based object detection model, called SUHD. SUHD has demonstrated excellent performance on two challenging datasets (PASCAL VOC and MS COCO), achieving state-of-the-art results with 4 timesteps.
## II Related Works
### _Neuronal Coding_
Frequency coding is a widely used coding method that relies on spike firing rates within a given number of timesteps to convey information. However, due to the binary nature of the spikes, the spike firing ratio does not transmit information very efficiently. As a result, when encountering complex information, frequency coding must ensure accurate transmission at the cost of large timesteps.
Temporal coding is an advanced coding method that embeds time information into the spike train. The combination of time information and frequency information allows the spike train to carry more information. It includes time-to-first-spike [39, 40], rank-order coding [41], and phase coding [42]. It has achieved remarkable results in deep SNNs.
### _Conversion Methods_
The ANN to SNN conversion has been one of the hottest issues in the last few years. Many methods have been proposed, and significant developments have been achieved based on conversion methods. The subtractive reset [36] discards the fixed reset potential and retains the spike intensity information. Temporal separation (TS) [31], proposed in 2022, eliminates errors caused by the incorrect firing order of negative spikes; TS separates the accumulation and firing phases, achieving lossless conversion in simple models. T. Bu et al. proposed the theory of membrane potential initialization [13] and pointed out that the membrane potential can be initialized to half of the threshold value, which can achieve lossless conversion within sufficiently large timesteps. The conversion of deep ResNet [30] was proposed by Y. Hu et al. to solve the conversion problem of bottleneck structures.
### _Spiking-YOLO_
Spiking-YOLO [12] is the first attempt to convert an ANN to an SNN for object detection, achieving relatively high accuracy only with very large timesteps. It uses channel-wise normalization and signed neurons featuring an imbalanced threshold to reduce the conversion error, and is based on YOLOv3-tiny. At timesteps = 3000 on the MS COCO dataset, it achieves close to lossless performance with an mAP of 25%. However, we found during implementation that the large timesteps require significant computational resources, making the model impossible to use on a common PC or a mobile robotics platform. Additionally, the actual computing speed is extremely slow.
## III Methods
In this section, we present our systematic approach to converting ANNs to SNNs with competitive accuracy and reduced timesteps. In subsection A, we conduct a detailed analysis of the mathematical process of ANN to SNN conversion, identifying the main sources of error. In subsections B and C, we introduce our novel methods of timesteps compression and spike-time-dependent integrated coding, which help eliminate these errors. Finally, in subsection D, we implement the conversion of the SPPF structure and apply our approaches to construct the state-of-the-art SUHD object detection model.
### _ANN to SNN Conversion Algorithm and Error Analysis_
First of all, we formulate the ANN model for a single neuron as follows:
\[x^{l}=\sum\limits_{i=1}^{n}w_{i}^{l-1}x_{i}^{l-1}+b_{i}^{l}, \tag{1}\]
where \(x^{l}\) denotes the output value of the neuron in layer \(l\), \(w_{i}^{l-1}\) denotes \(i\)-th weight of layer \(l-1\) to layer \(l\), \(x_{i}^{l-1}\) denotes the output value of \(i\)-th neuron in layer \(l-1\), and \(b_{i}^{l}\) denotes the bias of the output neuron.
The LIF and IF models are two basic biological neuronal models. The former has a continuous leaky current in the absence of stimulation, allowing the membrane potential to gradually return to resting potential, which may lead to information loss. In order to accurately represent the recurrent relationships between neurons, we adopted the IF neuron model as the basic neuron model.
Meanwhile, we use the spiking ratio for the SNN neuron model to substitute the simulated value \(x\) in Eq. 1. To establish the corresponding equivalence, we begin with the input potential of the SNN neuron and analyze its dynamics.
\[z^{l}(t)=\sum\limits_{i=1}^{n}w_{i}^{l-1}s_{i}^{l-1}(t)+b_{i}^{l}. \tag{2}\]
where \(z^{l}(t)\) represents input of that neuron in layer \(l\) at time \(t\), \(s_{i}^{l-1}(t)\) represents the spike of \(i\)-th neuron of layer \(l-1\), it can be formulated as follows:
\[s=\begin{cases}1,&V_{mem}\geq V_{thr},\\ 0,&V_{mem}<V_{thr},\end{cases} \tag{3}\]
the threshold \(V_{thr}\) was set to 1. Then we can denote the sum of the input voltage of that neuron as follows:
\[U_{in}^{l}=\sum\limits_{t=1}^{T}\sum\limits_{i=1}^{n}w_{i}^{l-1}s_{i}^{l-1}(t) +\sum\limits_{t=1}^{T}b_{i}^{l}, \tag{4}\]
then we denote the spiking ratio as \(r^{l}\) and the output potential of the neuron in layer \(l\) as \(U_{out}^{l}\); their numerical relationship can be represented as:
\[r^{l}=\frac{\sum\limits_{t=1}^{T}s^{l}(t)}{T}=\frac{U_{out}^{l}}{T}, \tag{5}\]
to limit the spiking ratio to the reasonable range \([0,1]\), we perform channel-wise weight and bias normalization [12] to scale the activation values:
\[w^{\prime l}_{\ i}=\frac{w_{i}^{l}\times\max_{in}}{\max_{out}},\quad b^{\prime l}_{\ i}=\frac{b_{i}^{l}\times\max_{in}}{\max_{out}}, \tag{6}\]
where \(\text{max}_{in}\) and \(\text{max}_{out}\) denote the maximum input and output activation values of the current channel. Ideally, the voltage input to the neuron and the voltage output from the neuron should be the same, i.e. \(U_{in}=U_{out}\)[31]. Thus combining the Eq. 4 and Eq. 5, we will get:
\[r^{l}=\sum\limits_{i=1}^{n}w^{\prime l-1}_{\ i}r_{i}^{l-1}+b^{l}_{\ i}, \tag{7}\]
it can be seen from Eq. 7 that in an ideal state, the SNN replaces the analog value in the ANN with spiking ratio for lossless conversion and information transfer. For convenience, we use \(w\) and \(b\) to denote the normalized weights and biases, as shown in Eq. 6. Thus, we can obtain the relationship between the membrane potential at time \(T\) and \(0\):
\[\begin{split} V_{mem,i}^{l}(T)=V_{mem,i}^{l}(0)+\sum\limits_{t= 1}^{T}\sum\limits_{i=1}^{n}w_{i}^{l-1}s_{i}^{l-1}(t)+\\ \sum\limits_{t=1}^{T}b_{i}^{l}-\sum\limits_{t=1}^{T}s_{i}^{l}(t), \end{split} \tag{8}\]
where \(V_{mem,i}^{l}(t)\) represent the membrane potential of the \(i\)-th neuron of layer \(l\) in time \(t\).
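To make the recursion of Eqs. (2)-(8) concrete, the following minimal Python sketch simulates a single IF neuron with subtractive reset and reports its spiking ratio; the weights, presynaptic rates, and seed are illustrative values of ours, not taken from the paper.

```python
import numpy as np

def if_neuron_rate(weights, bias, in_spikes, T, v_thr=1.0):
    """Integrate weighted input spikes (Eq. 2), fire binary spikes at
    threshold (Eq. 3) with subtractive reset, return spiking ratio (Eq. 5)."""
    v_mem = 0.0
    out_spikes = np.zeros(T)
    for t in range(T):
        v_mem += float(np.dot(weights, in_spikes[:, t])) + bias
        if v_mem >= v_thr:
            out_spikes[t] = 1.0
            v_mem -= v_thr  # subtractive reset keeps the residual (Eq. 8)
    return out_spikes.sum() / T

rng = np.random.default_rng(0)
T = 64
rates = np.array([[0.5], [0.2], [0.8]])             # presynaptic firing rates
in_spikes = (rng.random((3, T)) < rates).astype(float)
print(if_neuron_rate(np.array([0.4, 0.3, 0.3]), 0.0, in_spikes, T))
```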
#### III-A1 Tremendous timesteps demand
Combining Eq. 5 and Eq. 8, we get:
\[r_{i}^{l}=\frac{(V_{mem,i}^{l}(0)-V_{mem,i}^{l}(T))}{T}+\sum\limits_{i=1}^{n}w _{i}r_{i}^{l-1}+b_{i}^{l}, \tag{9}\]
ideally, in order to satisfy \(U_{in}=U_{out}\) and the equivalence of Eq. 7, \(V_{mem,i}^{l}(0)-V_{mem,i}^{l}(T)\) should be 0. However, from Eq. 8 we can see that \(\sum\limits_{t=1}^{T}s_{i}^{l}(t)\) is a step function. For \(V_{mem,i}^{l}(0)-V_{mem,i}^{l}(T)=0\) to hold, \(\sum\limits_{t=1}^{T}\sum\limits_{i=1}^{n}w_{i}^{l-1}s_{i}^{l-1}(t)+\sum\limits_{t=1}^{T}b_{i}^{l}\) would have to be the same step function, which is certainly not always the case. Therefore:
\[V_{mem,i}^{l}(0)=V_{mem,i}^{l}(T)-\varepsilon,(\varepsilon<V_{thr}), \tag{10}\]
the \(\varepsilon\) is the residual membrane potential and the \(\frac{\varepsilon}{T}\) is called quantization error, as shown in Fig. 1. Referring to the Eq. 9 and Fig. 1(b), previous works have mainly focused on scaling down the quantization error by increasing T [12, 35, 43], as shown in Fig. 1(c). Since most of the current SNN-based
Fig. 1: Quantization errors. Assuming timesteps (\(T\)) is 5. In (a), the membrane potential does not reach the spiking threshold at the final \(t=T\) moment, so the potential remains in the cell membrane without being transmitted. Equivalently, in (b), only 6 values can be represented in this process: 0, 0.2, 0.4, 0.6, 0.8, 1.0; therefore, many values cannot be expressed precisely. In (c), we improve the density of expressed values by increasing \(T\), resulting in a more accurate expression of the spiking ratio.
works focus on image classification tasks, this approach is feasible. However, the object detection task requires more accurate position regression, which requires the spike sequence to have a very accurate numerical representation. Besides, most such tasks are based on deep SNNs, which exacerbates quantization errors. Thus, the timesteps \(T\) often must be increased greatly to reach the required numerical precision, so conversion takes huge timesteps (256-8000) [12, 35, 43]. However, the timesteps cannot be extended indefinitely, because doing so brings huge energy consumption and running time demands, defeating the original purpose of real-time object detection. We believe that delivering more information within limited timesteps is an important direction for SNN development.
#### III-A2 Truncation error
In Eq. 6, to restrict the spiking ratio to a reasonable range, we scale the weights based on the maximum activation value [12]. The maximum activation value is taken from samples, often a subset of the conversion dataset. In most cases, this maximum activation value is applicable. However, in actual detection, the activation value may still be greater than the maximum value in the samples, as shown in Fig. 2. Therefore, in real object detection applications, \(r^{l}\) may exceed the upper limit of the spiking ratio; the portion above 1 is then truncated by default. This error reduces the SNN's accuracy.
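A small numeric illustration of the truncation (the activation values below are hypothetical and chosen only to show the effect):

```python
# Hypothetical values for illustration only.
max_sample = 4.0    # max activation observed on the conversion samples
activation = 5.2    # larger activation encountered on a real test image
r_target = activation / max_sample   # normalized target ratio: 1.3
r_encoded = min(r_target, 1.0)       # binary spikes cap the ratio at 1
print(r_target - r_encoded)          # 0.3 of the normalized signal is lost
```

Burst spikes or the STDI coding of Sec. III-C can lift this cap by packing more than one spike's worth of information into a single timestep.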
### _Timesteps Compression_
The large timesteps can reduce the quantization error, unevenness error, and thus the conversion loss. However, larger timesteps also bring an increase in time latency. We propose the timesteps compression method to alleviate this problem. Timesteps compression means compressing information from multiple timesteps into one timestep and delivering the information using burst or binary spikes. Burst spikes [44, 45, 46] are a set of short inter-spike interval (ISI) spikes that can issue multiple spikes at one timestep.
As shown in Fig. 3, we represent the fully equivalent uncompressed and compressed processes. Timesteps compression consists of i) input compression, ii) layer compression, and iii) output decompression. We assume that the timesteps have a compression scale of \(f_{c}\). Input compression compresses information from \(f_{c}\) timesteps into one timestep and reduces the timesteps to \(\frac{1}{f_{c}}\) of the original. This operation keeps the input information the same as in the uncompressed state. In layer compression, the compressed and uncompressed inputs within a single timestep are related as follows:
\[z_{c}=\sum_{t=1}^{f_{c}}z(t)=\sum_{i=1}^{n}\sum_{t=1}^{f_{c}}w_{i}^{l-1}s_{i}^{ l-1}(t)+f_{c}b, \tag{11}\]
where \(z_{c}\) represents the compressed input, \(s_{i}^{l-1}\) denotes the output spikes of \(i\)-th neuron in previous layer. Multiple inputs lead to changes in the spike firing as follows:
\[s_{c}=\begin{cases}min(k,f_{c}),&V_{mem}\geq k*V_{thr},\\ 0,&V_{mem}<V_{thr},\end{cases} \tag{12}\]
\(s_{c}\) represents the spike issued, and when the \(k>1\), the \(s_{c}\) is burst spike. Layer compression allows multiple spikes to be issued in one timestep, thereby increasing the density of information within a timestep. Ideal mathematical relationships can be presented as follows:
\[U_{in}^{l}=\sum_{i=1}^{n}\sum_{t=1}^{T_{c}}w_{i}^{l-1}s_{c,i}^{l-1}(t)+T_{c}f_ {c}b_{i}^{l}, \tag{13}\]
\[r_{c}^{l}=\frac{U_{in}^{l}}{T_{c}}=\sum_{i=1}^{n}w_{i}^{l-1}\frac{\sum\limits {{}_{l=1}^{T_{c}}s_{c,i}^{l-1}(t)}}{T_{c}}+f_{c}b_{i}^{l}, \tag{14}\]
input compression and layer compression are equivalent to compressing multiple timesteps into one timestep, which is the core of the timesteps compression. This relationship is illustrated in the layer compression stage and layer transfer stage in Fig. 3. By input compression and layer compression, the firing ratio is mapped from \(r\in[0,1]\) to \(r_{c}\in[0,f_{c}]\), and the oversized firing ratio does not match the firing ratio at \(f_{c}\) times uncompressed timesteps. Output decompression solves this problem by remapping the firing ratio from \(r_{c}\in[0,f_{c}]\) to \(r\in[0,1]\) as shown in Eq. 15.
\[r^{l}=\frac{r_{c}^{l}}{f_{c}}=\sum_{i=1}^{n}w_{i}^{l-1}\frac{\sum \limits_{t=1}^{T_{c}}s_{c,i}^{l-1}(t)}{T}+b_{i}^{l} \tag{15}\] \[=\sum_{i=1}^{n}w_{i}^{l-1}\frac{\sum\limits_{t=1}^{T}s_{i}^{l-1}(t )}{T}+b_{i}^{l},\]
as shown above, timesteps compression improves the information-carrying capacity of a single timestep and reduces the quantization error. The experiments in Sec. IV-C confirm this inference.
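A minimal Python sketch of the three stages, assuming binary inputs already aggregated into per-step input potentials \(z(t)\); the random inputs and scale are our illustrative choices:

```python
import numpy as np

def compress_inputs(z, f_c):
    """Input compression (Eq. 11): sum every f_c consecutive per-step
    inputs z(t) into one compressed timestep (assumes f_c divides T)."""
    return z.reshape(len(z) // f_c, f_c).sum(axis=1)

def if_neuron_burst(z_c, f_c, v_thr=1.0):
    """Layer compression (Eq. 12): up to f_c burst spikes per compressed
    timestep; output decompression remaps the ratio to [0, 1] (Eq. 15)."""
    v_mem, total = 0.0, 0
    for zc_t in z_c:
        v_mem += zc_t
        s_c = min(max(int(v_mem // v_thr), 0), f_c)  # burst spike
        v_mem -= s_c * v_thr                         # subtractive reset
        total += s_c
    return total / (len(z_c) * f_c)                  # r = r_c / f_c

rng = np.random.default_rng(1)
z = rng.uniform(0.0, 0.02, size=64)   # per-step inputs over T = 64
print(if_neuron_burst(compress_inputs(z, f_c=16), f_c=16))
```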
Fig. 2: Truncation errors. The image on the left illustrates the generation of truncation errors, where messages exceeding the upper limit of the firing ratio are truncated by default. In the image on the right, we randomly selected 100 images from the COCO dataset, measured their maximum activation value at each layer in the model, and represented that activation value on the horizontal axis. At the same time, we measured the maximum activation value at each layer using the ANN to SNN conversion sample dataset, and represent this activation value on the vertical axis. The blue slash indicates the ideal situation when the actual maximum activation is equal to the maximum activation of the sample data. The red dots are the relative positions of the activation values for the layer under different samples.
### _Spike-Time-Dependent Integrated Coding_
Recent object detection research has increasingly emphasized real-time performance and energy efficiency. Although we used timesteps compression in the previous section to reduce the timesteps required during model inference, the use of frequency coding keeps the model inefficient. At the same time, it has been reported that neurons coordinate action potentials in different ways even when spike firing rates are the same. These reports suggest that intercellular communication results from a combination of various coding methods.
In particular, we use TS [31] in our work to avoid the damage to accuracy caused by the negative spikes of misordered firing. The use of TS concentrates the spikes at the beginning of the timesteps, which gives us the opportunity to embed temporal information in the spike train. Therefore, we propose an encoding method called the spike-time-dependent integrated (STDI) coding method, which further improves the inference speed and energy efficiency of the model.
#### III-C1 Weighted Spikes
We first attach weights to the spikes based on their firing time. The weights are defined as follows:
\[\tau(t)=T-t+1, \tag{16}\]
where \(t\) denotes the time of spike firing over whole timesteps. After the weights are defined, the input value represented by a firing spike becomes \(s(t)*\tau\). Fig. 4(a) shows the correspondence between the weighted spikes and the input values. Fig. 4(a) also expresses that STDI can make the spike firing ratio within \(t\in[0,T]\) much greater than 1, which resolves the truncation error. When faced with excessive truncation errors, STDI can eliminate such errors by integrating multiple spikes (burst spikes) in one timestep, as shown in Fig. 4(a) when the input value is 12. Weighted spikes bring about a change in the way input information is encoded. In previous SNN object detection works, we needed to keep constant value information input throughout the timesteps and then encode it using the first layer of the SNN model. With spikes being weighted, the information only needs to be fed into the model at the first timestep. This reduces energy consumption to a certain extent.
#### III-C2 Model Adaptation for STDI
Due to the weighted spikes, we need to make appropriate adjustments to the IF model. Firstly, the threshold \(V_{thr}\) was adjusted for weighted spikes as Eq. 17.
\[V_{thr}=\tau(t)*v_{thr}, \tag{17}\]
the \(v_{thr}\) denotes the initial threshold, often set to 1. Then we adjust the input as follows:
\[z^{l}=\sum_{i=1}^{n}w_{i}^{l-1}s_{i}^{l-1}(t)*\tau(t)+b_{i}^{l}, \tag{18}\]
therefore Eq. 4 and Eq. 5 must be changed to:
\[U_{in}^{l}=\sum_{t=1}^{T}\sum_{i=1}^{n}w_{i}^{l-1}s_{i}^{l-1}(t)*\tau(t)+\sum _{t=1}^{T}b_{i}^{l}, \tag{19}\]
\[r^{l}=\sum_{i=1}^{n}w_{i}^{l-1}\frac{\sum\limits_{t=1}^{T}s_{i}^{l-1}(t)*\tau (t)}{T}+b_{i}^{l}, \tag{20}\]
The TS scheme is employed in the present study, so STDI exhibits two distinct phases: the accumulation phase and the firing phase. In the accumulation phase, the input information is accumulated to the membrane
Fig. 3: Process comparison between the proposed timesteps compression and traditional methods. We first denote the compression scale as \(f_{c}\), which indicates that each compressed timestep contains information from \(f_{c}\) uncompressed timesteps. For the sake of example, \(f_{c}\) is set to 3. We then denote the compressed timesteps and firing ratio as \(T_{c}\) and \(r_{c}\).
potential according to Eq. 19. During the firing phase, the threshold decreases over time, and the accumulated membrane potential searches for the right firing time to release the spikes. Specifically, when the membrane potential is less than the spike threshold at the current moment, the spike is not fired and waits for the next moment. When the membrane potential is greater than the spike threshold at that moment, the spike fires and the membrane potential decreases according to the subtractive reset [36]. This process cycles until the membrane potential is lower than \(V_{thr}\). The specific algorithm is shown in Algorithm 1. In the output layer, we decode the spike train to the spiking ratio according to Eq. 20. Fig. 4(b) illustrates the whole process.
```
Input: T, x
Output: s
Initialize V_mem to 0
Initialize s to 0
Phase 1 (accumulation):
for t = 1 to T do
    τ(t) ← T - t + 1
    V_mem ← V_mem + x[t-1] * τ(t)
end for
Phase 2 (firing):
if V_mem ≤ 0 then
    V_mem ← 0
end if
for t = 1 to T do
    τ(t) ← T - t + 1
    s[t-1] ← s[t-1] + ⌊V_mem / τ(t)⌋
    V_mem ← V_mem - s[t-1] * τ(t)
end for
return s
```
**Algorithm 1** Algorithm for STDI
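A direct Python transcription of Algorithm 1, which may help clarify the two phases (our transcription, not the authors' code):

```python
import numpy as np

def stdi_encode(x, T):
    """Phase 1 accumulates the time-weighted input; phase 2 fires weighted
    spikes against the decaying threshold tau(t) = T - t + 1 with
    subtractive reset, as in Algorithm 1."""
    v_mem = 0.0
    s = np.zeros(T, dtype=int)
    for t in range(1, T + 1):            # phase 1: accumulation
        v_mem += x[t - 1] * (T - t + 1)
    v_mem = max(v_mem, 0.0)
    for t in range(1, T + 1):            # phase 2: firing
        tau = T - t + 1
        s[t - 1] += int(np.floor(v_mem / tau))
        v_mem -= s[t - 1] * tau
    return s

print(stdi_encode(np.array([0.25, 0.25, 0.25, 0.25]), T=4))  # -> [0 0 1 0]
```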
The advantage of applying STDI is that intercellular messaging can be accomplished with very few spikes. In combination with timesteps compression, we can express and convey information with fewer spikes and much lower timesteps. An example of this characteristic is given in Fig. 5, which demonstrates that with STDI and timesteps compression, information that would otherwise require six spikes can be expressed with only one binary or burst spike, with correspondingly lower timesteps. Interestingly, STDI makes the spikes conform to the Poisson distribution with or without the use of TS.
### _SPPF Structure Conversion_
After comparing the YOLO series models, we chose YOLOv5s as the backbone for our ANN to SNN conversion. Compared to the YOLOv3-tiny used by Spiking-YOLO, YOLOv5s has a deeper network and optimized feature extraction structures such as FPN+PAN and SPPF, making it a more attractive target for development.
We first use ReLU as the activation function and then replace the Upsampling layer with a ConvTranspose layer to form the
Fig. 4: Spike-time-dependent integrated (STDI) coding. (a) shows the correspondence between the input value and the moment of spike issuance. (b) illustrates the main process of information transfer after using STDI. Where the accumulation phase is in \(t\in[-T,0]\). In the firing phase, the red dashed line is the variable threshold over time.
Fig. 5: Timesteps compression and STDI. Here we give an example of the effect of applying the timesteps compression and STDI. Assuming timesteps \(T\) is 6, frequency coding, timesteps compression, and STDI are used to express \(spiking\ ratio=1\), respectively. Where the black vertical line is the binary spike and the red vertical line is the burst spike.
ANN version of SUHD. Compared to previous ANN to SNN works, the SUHD conversion additionally requires converting the SPPF structure. The SPPF structure includes three Maxpool layers with \(stride=1\), as shown in the SPPF panel of Fig. 6. Following the conversion method of YOLOv3-tiny in Spiking-YOLO [12], we first replaced the SPPF layer directly with a CBR layer (Conv. + BN + ReLU).
This substitution already yields significant performance improvements; the results are shown in Sec. IV-B. However, the SPPF structure of the ANN version of SUHD contributes an increase of approximately 3% mAP, and the conversion method for the SPPF structure can be extended to the whole YOLO series and even more SNN-based deep learning models, thus facilitating the conversion of more efficient and complex structures. Therefore, we perform the conversion of the SPPF structure. To achieve fast Maxpool operations with any stride in deep SNN models, we propose a Spike-Maxpooling mechanism based on membrane potentials.
#### III-D1 Spike-Maxpooling According to Membrane Potential
Existing Maxpool methods in SNNs mainly fall into two kinds. One is "winner take all" [38, 47], where the pooling neuron accepts all preceding spikes and adds them; the pooling value obtained this way is often greater than the real value. The other is to calculate, directly or indirectly, the number of spikes of the neuron with the maximum firing rate. These methods are complicated, especially in the case of compressed timesteps and STDI. We therefore propose a Spike-Maxpooling method based on membrane potential, which performs the Maxpool of SNNs at any stride in a simple way.
As shown in Spike-Maxpooling Based on Membrane Potential in Fig. 6, the main process has five phases:
1. **Accumulation phase**: The input spiking train is decoded to obtain the input voltage \(U_{in}\) and accumulated to the membrane.
2. **Restore Normalization**: Restore the activation values by Eq. 21. \[V_{r}=max_{in}*U_{in},\] (21) where \(max_{in}\) denotes the maximum sample activation value of the input of this channel in ANN, and the \(V_{r}\) denotes the activation value of this channel after restoring normalization in SNN. The purpose of this phase is to eliminate errors due to integer and floating point calculations.
3. **Pooling phase**: We call the region containing the neurons involved in the \(V_{r}\) competition a candidate region, as shown within the circle in Fig. 6. Neurons in candidate regions undergo \(V_{r}\) competition, and the winner's \(V_{r}\) is passed on.
4. **Re-normalization phase**: Normalize \(V_{r}\) value to output voltage as \(U_{out}\) by Eq. 22. \[U_{out}=\frac{V_{r}}{max_{out}},\] (22) where \(max_{out}\) denotes the sample maximum activation value of the output of this channel in ANN.
5. **Firing phase**: Encode \(U_{out}\) into a spike train and release it.
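The following sketch condenses phases 2-4 for a single candidate region, assuming the accumulation and firing phases (spike decoding/encoding) are handled outside; the region values and channel maxima are hypothetical:

```python
import numpy as np

def spike_maxpool_region(u_in, max_in, max_out):
    """Membrane-potential Maxpool for one candidate region.
    u_in: decoded input voltages U_in of the region's neurons."""
    v_r = max_in * u_in         # restore normalization (Eq. 21)
    winner = v_r.max()          # V_r competition: the winner is passed on
    return winner / max_out     # re-normalization to U_out (Eq. 22)

region = np.array([0.3, 0.7, 0.5, 0.2])       # hypothetical 2x2 region
print(spike_maxpool_region(region, max_in=4.0, max_out=3.5))
```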
#### III-D2 Spike-SPPF
Based on the above work, we completed the conversion of the Spike-SPPF structure, the exact structure is shown in Spike-SPPF in Fig. 6. The results, as shown in Sec. IV-B, demonstrate that the accuracy of the SNN model improves by about 3% after using Spike-SPPF.
Fig. 6: The comparison of SPPF and Spike-SPPF. The traditional SPPF (left figure) contains two CBR layers and three Maxpool layers with stride=1 and connects these layers through a concat operation. As a comparison, Spike-SPPF uses Spike-Conv. layer and Spike-Maxpooling based on membrane potential to achieve the same operation on the spike train. The extended diagram of the circle in the right figure shows the specific method of Spike-Maxpooling based on the membrane potential, which includes five stages.
## IV Experiments
### _Experiments Setup_
Different comparative experiments were set up to verify the effectiveness of the different structures and methods. The model is the ANN/SNN version of the SUHD model. The initial conversion method (SNN base code) is frequency coding + TS. The membrane potential was initialized to 0.5 following [13]. The whole experiment is based on an Intel Core i7-8700K CPU or an NVIDIA RTX2080Ti GPU with CUDA 10.1. The datasets used in this work are PASCAL VOC [48] and MS COCO [49]. The PASCAL VOC dataset consists of three parts, train (2007+2012), val (2007+2012), and test (2007), including 8218, 8333, and 4952 images respectively; we used train (2007+2012) as the training set and test (2007) as the validation set. The COCO dataset consists of two parts, train2017 and val2017, with train2017 including 118,287 images and val2017 including 5,000 images. We used train2017 as the training set and val2017 as the evaluation set. Unless explicitly specified, the metric utilized to assess accuracy is mAP@0.5.
### _Ablation Study of Spike-SPPF_
In order to demonstrate the impact of Spike-SPPF on improving the upper bound of accuracy, an ablation study was conducted with and without Spike-SPPF. To ensure impartiality and proper operation of the model, the SPPF component was replaced with a CBR (Conv. + BN + ReLU) layer in the ANN of the model without Spike-SPPF. The results of the ablation study are shown in Tab. I.
The results demonstrate that the Spike-SPPF can improve accuracy by approximately 2.3%. In particular, the accuracy loss of the ANN to SNN conversion here does not exceed 0.2%, which verifies that our proposed Spike-Maxpooling has the ability to losslessly convert the Maxpool layer with \(stride=1\).
### _Optimization of Accuracy and Speed Ablation Experiments_
Our proposed methods aim to achieve a faster and more precise SNN-based object detection model within limited timesteps. In order to evaluate the effectiveness of these methods, we conducted ablation experiments to measure the impact on speed and accuracy improvements.
The results are presented in Tab. II and III. In these experiments, we used a base code with uncompressed timesteps and frequency encoding, then gradually increased the compression scale and applied STDI. The initial timesteps were set to 64, and the model achieved an mAP of 72.5% and 53.6% on the PASCAL VOC and MS COCO datasets, respectively, with a processing speed of over 6500 (CPU)/720 (GPU) ms per frame.
Increasing the compression scale to 16 and reducing the compressed timesteps to 4 significantly improved the processing speed without compromising accuracy, indicating the effectiveness of timesteps compression.
Changing the encoding method to STDI further improved the mAP by 2.8% and 1% on the PASCAL VOC and MS COCO datasets, respectively, and reduced the inference time by about 38% (CPU)/17% (GPU), confirming the effectiveness of STDI in reducing the conversion error and improving the speed of object detection.
Finally, we increased the compression scale to 64 with compressed timesteps of 1, without reducing detection accuracy. This allowed us to achieve the fastest detection speeds of 189.4ms/frame and 189.5ms/frame (CPU) on PASCAL VOC and MS COCO datasets, respectively. We also conducted the same experiment on the GPU and achieved a speed of 90.1ms/frame at 64x compression.
### _Comparison with the State-of-the-Art_
We compared the performance of SUHD with that of other methods on the PASCAL VOC and MS COCO datasets. The results are shown in Tab. IV, Tab. V, and Fig. 7, where Burst refers to applying burst spikes to prevent the harm caused by truncation errors [43]. The data in the tables are taken from the corresponding papers [12, 43, 50]. In comparison with current advanced spiking object detectors, our method achieves the best results in terms of speed, precision, and timesteps. As shown in Tab. IV, on the PASCAL VOC dataset our proposed method achieves an accuracy improvement of about 23% using 2000x fewer timesteps compared to Spiking-YOLO. Compared to the Burst + MLIpooling + SpiCalib method, we achieve almost the same accuracy using 128x fewer timesteps. The results on the MS COCO dataset are shown in Tab. V. Compared to the Spiking-YOLO and Burst + MLIpooling + SpiCalib methods, our method achieves about 30% and 9% improvement in accuracy using 750x and 128x fewer timesteps, respectively. Compared to FSHNN, we achieve a 12% accuracy improvement using 75x fewer timesteps.
Fig. 7: Detection results using different methods. Our work achieved the best results in detection.
TABLE II: Accuracy and speed improvement by different methods on PASCAL VOC

| Methods | Timesteps | mAP@0.5 | Speed, CPU (ms/frame) | Speed, GPU (ms/frame) |
| --- | --- | --- | --- | --- |
| ANN | - | 75.3 | - | - |
| Base code | 64 | 72.5 | 6639.2 | 720 |
| 16x Compression | 4 | 72.5 | 867.3 | 185.8 |
| 16x Compression + STDI | **4** | **75.3** | 541 | **152.3** |
TABLE III: Accuracy and speed improvement by different methods on MS COCO
| Methods | Timesteps | mAP@0.5 | Speed, CPU (ms/frame) | Speed, GPU (ms/frame) |
| --- | --- | --- | --- | --- |
| ANN | - | 54.8 | - | - |
| Base code | 64 | 53.6 | 6562 | 770 |
| 16x Compression | 4 | 53.6 | 882.6 | 185.9 |
| 16x Compression + STDI | **4** | **54.6** | 552 | **155.9** |
TABLE IV: Comparison with other works under the PASCAL VOC dataset
### _Model Robustness_
The noise immunity of the model is also one of the main indicators of model performance. To verify the stability of the model, we evaluate its performance under adverse conditions using standard additive Gaussian white noise with signal-to-noise ratios (SNR) of 15 dB and 30 dB, respectively. The FSHNN [35] model was added for comparison. The results are shown in Tab. VI.
When SNR = 30 dB, our model has almost no loss (\(\leq 2.2\%\)). When SNR = 15 dB, accuracy suffers a more serious decline; however, the model still holds an advantage of approximately 6.9% over FSHNN. At the same time, the noise immunity of the SUHD model is not inferior to that of the ANN version of SUHD at high noise levels (loss \(\leq 0.5\%\)), as shown in Tab. VI and Fig. 8. This suggests that our proposed methods do not have an excessive impact on the robustness of the model. Thus, our proposed model has satisfactory performance in terms of robustness.
### _Energy Efficiency_
Energy consumption is an important metric for measuring the cost of model inference. Following other works [12, 35, 51], the generic index \(FLOPs\) is used to measure the complexity of the model, and \(energy\) denotes the energy cost of the models. As proposed by Horowitz et al. [12, 52], the energy cost of one operation is 4.6 pJ (FLOAT32 MAC) / 0.9 pJ (FLOAT32 AC). To measure the SNN's energy consumption more rationally, we use the average FLOPs based on the conversion sample dataset. Specifically, we define SNN FLOPs as:
\[FLOPs=\sum_{l=1}^{n}\sum_{t=1}^{T}s^{l}(t)\ +\ \sum p_{in}, \tag{23}\]
where \(n\) denotes the number of layers of the model, and \(\sum p_{in}\) represents the total number of pixels of the input images. Considering that the model accepts analog inputs, we count the input layer operations as MAC operations; the remaining operations, caused by spikes, are counted as floating-point AC operations. In contrast to Spiking-YOLO, the SUHD model uses YOLOv5s as its backbone; therefore, our evaluation compares YOLOv5s and SUHD. The energy efficiency profile of YOLOv5s is derived from official data. Based on this method, we compared the energy costs of YOLOv5s and SUHD.
The results are shown in Tab. VII and illustrate that the SUHD model is at least 200 times more energy efficient than the YOLOv5s model.
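As a rough illustration of this energy model (a simplified sketch; the per-layer spike counts below are hypothetical placeholders, not measured values):

```python
E_MAC, E_AC = 4.6e-12, 0.9e-12   # joules per FLOAT32 MAC / AC operation [52]

def snn_energy(spikes_per_layer, n_input_pixels):
    """Eq. 23: operations are the total emitted spikes plus the input
    pixels; input-layer ops count as MACs (analog input), the rest as ACs."""
    ac_ops = sum(spikes_per_layer)
    mac_ops = n_input_pixels
    return mac_ops * E_MAC + ac_ops * E_AC

print(snn_energy([1.2e7, 8.5e6, 6.1e6], 640 * 640 * 3))  # energy in joules
```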
### _Algorithm Deployment and Evaluation in Robot Platform_
To evaluate the ability of the model to detect objects in dynamic scenarios and the performance of the model on mobile platforms, in this section we deploy the algorithm to a mobile robot platform and test it.
#### IV-G1 Algorithm Deployment
We use the Summit-XL mobile robot, as shown in Fig. 9. Summit-XL is equipped with an Intel Core i3-9100 CPU, 7.16 GB RAM, four independently driven wheels, and an Axis gimbal camera. The deployment process consists of three main steps. We first built the Python algorithm runtime environment based on Ubuntu 16.04 on an x86 architecture and deployed the algorithm to the robot. The video signal is then acquired using the gimbal camera; the signal is in H.265 format and is hardware-decoded. Finally, the decoded video image stream is fed into the model in real time for object detection.
Fig. 8: Performance of the models at different SNR. The first three rows show the detection results using the SUHD model and the last row shows the detection results using the ANN model.
Fig. 9: Summit-xl mobile robot platform.
#### IV-G2 Performance Evaluation in Dynamic Scenarios
We produced two video datasets. Both datasets contain one video each.
**Dataset A** was sampled from the robot gimbal camera at 6 fps. Three motion speeds, fast, medium, and slow were included throughout the sampling process. At the same time, both the robot and the object can move. This dataset is therefore comprehensive and can accurately assess the performance of the algorithm in dynamic scenes. The entire dataset contains 920 frames with a total of 4140 labels in 9 categories, with an average of 4.5 labels per frame.
**Dataset B** was sampled from a fixed viewpoint and contains 50 frames in 3 categories, with a total of 219 labels, averaging 4.38 labels per frame. The overall speed of object movement within the frames is slow.
First, we train the ANN model using the COCO dataset and then convert it into an SNN model. Second, we deploy the SNN models to the PC and the mobile robot described at the beginning of this section, respectively. Third, we use datasets A and B to compute the mAP, obtaining the object detection capability of the model in dynamic scenes. Fourth, we test the object recognition capability of the model deployed on the robot using real-time robot video streaming. At the same time, we record video data in the same scene and then move the video to the PC to simulate object detection in a real-time dynamic scene. The results are shown in Tab. VIII, where the evaluation indicator for Dataset A and Dataset B is mAP@0.5 for detection accuracy, and the evaluation indicator for the real-time robot video streaming is recognition accuracy, which is defined as follows:
\[Acc=\frac{TP+TN}{P+N}, \tag{24}\]
where \(Acc\) is the recognition accuracy, \(TP+TN\) is the number of objects correctly classified, and \(P+N\) is the total number of objects. The results on Datasets A and B show that the model deployed on the robot produced no additional losses, indicating that our algorithm is highly repeatable. Because Dataset A uses moving observation points, some frames are significantly blurred, which results in an mAP of 69.8%; Dataset B uses a fixed viewpoint and therefore achieves an mAP of 92.5%. Fig. 10(a)(b) show the difference in performance between the model on the PC and the model deployed on the mobile robot. In addition, we examined the object detection of the model on the robot under real-time video streaming and compared its performance with that of the PC-side model on the same dynamic scene video. The results are shown in Fig. 10(c) and the real-time row of Tab. VIII. The two results differ by only 2.7%. The error arises because the robot has less computing power than the PC, leading to more dropped frames and blurring in the live video signal it receives, thus reducing recognition accuracy. This result suggests that the detection performance of the model is not overly compromised by deployment on the robot.
Fig. 10: Performance evaluation in dynamic scenarios. (a) and (b) show detection results for Dataset A and Dataset B, respectively. The top row shows PC results and the bottom row shows robot results. (c) presents detection results for video streaming, with the top row showing the results for the simulated real-time video streaming on the PC and the bottom row showing the results for the real-time robot video streaming on the robot.
## V Conclusion
This paper presents three main methods for optimizing the conversion accuracy and running speed of SNNs. The first method is timesteps compression, which significantly reduces the required timesteps, down to a single timestep, during lossless conversion. The second method is STDI, which improves the efficiency of SNN inference by an average of 38% on the CPU and 17% on the GPU by increasing the information capacity of a single spike. The third is a Spike-Maxpooling mechanism based on membrane potential, which facilitates the lossless conversion of complex and efficient structures in ANNs.
By combining timesteps compression and STDI, we achieved results that are 34 times faster on the CPU than the initial conversion method (frequency coding + TS). We further built the spike-based high-performance object detector SUHD, which can run on mobile platforms with overall performance reaching the state-of-the-art. In particular, SUHD is currently the deepest object detection model that achieves a lossless ANN to SNN conversion with ultra-low timesteps. Our methods show competitive results and have significant potential for improving the efficiency and accuracy of SNNs, while offering a way to address the problem of deploying SNN models on mobile devices. We believe that our methods can contribute to the widespread adoption of SNN applications in the future.
|
2307.15456 | Worrisome Properties of Neural Network Controllers and Their Symbolic
Representations | We raise concerns about controllers' robustness in simple reinforcement
learning benchmark problems. We focus on neural network controllers and their
low neuron and symbolic abstractions. A typical controller reaching high mean
return values still generates an abundance of persistent low-return solutions,
which is a highly undesirable property, easily exploitable by an adversary. We
find that the simpler controllers admit more persistent bad solutions. We
provide an algorithm for a systematic robustness study and prove existence of
persistent solutions and, in some cases, periodic orbits, using a
computer-assisted proof methodology. | Jacek Cyranka, Kevin E M Church, Jean-Philippe Lessard | 2023-07-28T10:20:08Z | http://arxiv.org/abs/2307.15456v1 | # Worrisome Properties of Neural Network Controllers
###### Abstract
We raise concerns about controllers' robustness in simple reinforcement learning benchmark problems. We focus on neural network controllers and their low neuron and symbolic abstractions. A typical controller reaching high mean return values still generates an abundance of persistent low-return solutions, which is a highly undesirable property, easily exploitable by an adversary. We find that the simpler controllers admit more persistent bad solutions. We provide an algorithm for a systematic robustness study and prove existence of persistent solutions and, in some cases, periodic orbits, using a computer-assisted proof methodology.
## 1 Introduction
The study of neural network (NN) robustness properties has a long history in research on artificial intelligence (AI). Since the existence of so-called adversarial examples in deep NNs was established in [14], it is well known that an NN can output unexpected results under slight perturbations of the inputs and hence can be exploited by an adversary. Since then, the robustness of other NN architectures has been studied [44]. In the context of control design using reinforcement learning (RL), the robustness of NN controllers has been studied from the adversarial viewpoint [29, 42]. Due to limited interpretability and transparency, deep NN controllers are not suitable for deployment in critical applications. Practitioners prefer abstractions of deep NN controllers that are simpler and human-interpretable. Several classes of deep NN abstractions exist, including single-layer or linear nets, programs, tree-like structures, and symbolic formulas. It is hoped that such abstractions maintain or improve a few key features: generalizability - the ability of the controller to achieve high performance in similar setups (e.g., a slightly modified native simulator used in training); deployability - deployment of the controller in the physical world on a machine, where, e.g., an exact dynamical model is not specified and the time horizon becomes undefined; verifiability - one can verify a purported controller behavior (e.g., asymptotic stability) in a strict sense; performance - the controller reaches an average return very close to that of a deep NN controller.
In this work, we study the robustness properties of some symbolic controllers derived in [24] as well as deep NNs with their few-neuron and symbolic abstractions derived using our methods. By robustness, we mean that a controller maintains its average return values when the simulator configuration (scheme/time-step) is changed at test time while having been trained on some specific configuration. Moreover, a robust controller does not admit open sets of simulator solutions with extremely poor return relative to the average. In this regard, we found that NNs are more robust than simple symbolic abstractions, while still achieving comparable average return values. To confirm our findings, we implement a workflow of symbolic controller derivation: regression of a trained deep NN and further fine-tuning. For the simplest benchmark problems, we find that despite the controllers reaching the performance of deep NNs measured in terms of mean return, there exist singular solutions that behave unexpectedly and are persistent for a long time. In some cases, the singular solutions are persistent forever (periodic orbits). The found solutions are stable, and an adversary with access to the simulation setup, knowing of the existence of persistent solutions and periodic orbits (POs) for specific setups and initial conditions, may reconfigure the controlled system and bias it towards the bad persistent solutions, resulting in a significant performance drop; if the controller is deployed in practice, this may even lead to damage to the robot/machine. This concern is critical in the context of symbolic controllers, which are simple abstractions more likely to be deployed on hardware than deep NNs. Two systems support the observed issues: first, the standard pendulum benchmark from OpenAI gym [5], and second, the cartpole swing-up problem.
Each instance of a persistent solution we identify is verified mathematically using computer-assisted proof (CAP) techniques based on interval arithmetic [27, 38] implemented in Julia [4]. Doing so, we verify that the solution truly exists and is not a spurious object resulting, e.g., from finite arithmetic precision. Moreover, we prove the adversarial exploitability of a wide class of controllers. The existence of persistent solutions is most visible in the case of symbolic controllers. For deep NNs, persistent solutions are less prevalent, and we checked that small NN abstractions of deep NN controllers (involving few neurons) somewhat alleviate the issue observed for symbolic controllers, strongly suggesting that robustness grows with the number of parameters, in stark contrast with common beliefs and examples in other domains.
**Main Contributions.** Let us summarize the main novel contributions of our work to AI community below.
_Systematic controller robustness study._ In light of the average return metric sometimes being deceptive, we introduce a method for investigating controller robustness by designing a persistent-solution search and a penalty metric.
_Identification and proofs of abundant persistent solutions._ We systematically find and prove the existence of a concerning number of persistent orbits for symbolic controllers in simple benchmark problems. Moreover, we carried out a proof of a periodic orbit for a deep NN controller, which is of independent interest. To our knowledge, this is the first instance of such a proof in the literature.
_NN controllers are more robust than symbolic._ We find that the symbolic controllers admit significantly more bad persistent solutions than the deep NN and small distilled NN controllers.
### Related Work
_(Continuous) RL._ A review of the RL literature is beyond the scope of this paper (see [34] for an overview). In this work, we use the state-of-the-art TD3 algorithm [12], dedicated to continuous state/action spaces and based on DDPG [25]. Another related algorithm is SAC [16].
_Symbolic Controllers._ Symbolic regression as a way of obtaining explainable controllers appeared in [22, 20, 24]. Other representations include programs [39, 37] or decision trees [26]. For a broad review of explainable RL see [41].
_Falsification of Cyber-Physical Systems (CPS)._ The research on falsification [3, 10, 40, 43] utilizes similar techniques for demonstrating the violation of a temporal logic formula, e.g., for finding solutions that never approach the desired equilibrium. We are interested in solutions that do not reach the equilibrium, but also, in particular, in solutions that reach minimal returns.
_Verification of NN robustness using SMT._ Work on SMT solvers like Reluplex [6, 11, 21] is used to construct interval robustness bounds for NNs only. In our approach, we construct interval bounds for solutions of a controller (an NN) coupled with a dynamical system and also provide existence proofs.
_Controllers Robustness._ The design of robust NN controllers has focused on adversarial defence methods [29, 42].
_CAPs._ Computer-assisted proofs for ordinary differential equations (ODEs) in AI are not common yet. Examples include validation of NN dynamics [23] and proofs of spurious local minima [32].
### Structure of the Paper
Section 2 provides background on numerical schemes and RL framework used in this paper. Section 3 describes the training workflow for the neural network and symbolic controllers. The class of problems we consider is presented in Section 4. We describe the computer-assisted proof methodology in Section 5. Results on persistent periodic orbits appear in Section 6, and we describe the process by which we search for these and related singular solutions in Section 7.
## 2 Preliminaries
### Continuous Dynamics Simulators for AI
Usually, there is an underlying continuous dynamical system with control input that models the studied problem, \(s^{\prime}(t)=f(s(t),a(t))\), where \(s(t)\) is the state, \(a(t)\) is the control input at time \(t\), and \(f\) is a vector field. For instance, the general rigid-body equations of motion in continuous time implemented in robotic simulators like MuJoCo [36] are \(Mv^{\prime}+c=\tau+J^{T}f\), where \(J\) and \(f\) are the constraint Jacobian and force, \(\tau\) is the applied force, \(M\) is the inertia matrix, and \(c\) are the bias forces. For training RL algorithms, episodes of simulated rollouts \((s_{0},a_{0},r_{1},s_{1},\dots)\) are generated; the continuous dynamical system needs to be discretized using one of the available numerical schemes like the Euler or Runge-Kutta schemes [17]. After generating a state rollout, rewards are computed as \(r_{k+1}=r(s_{k},a_{k})\). The numerical schemes are characterized by the approximation order, time-step, and explicit/implicit update. In this work, we consider the explicit Euler (E) scheme \(s_{k+1}=s_{k}+hf(s_{k},a_{k})\); this is a first-order scheme with approximation error proportional to the time-step \(h\) (a hyperparameter). Another related scheme is the so-called semi-implicit Euler (SI) scheme, a two-step scheme in which the velocities are updated first and then the positions are updated using the computed velocities. Refer to the appendix for the exact form of the schemes.
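A minimal sketch of the two schemes in Python, using the gym-style pendulum vector field as the example \(f\); the constants and the exact form of \(f\) should be treated as illustrative assumptions (the precise schemes are in the appendix):

```python
import numpy as np

def f_pendulum(s, a, g=10.0, l=1.0, m=1.0):
    theta, omega = s
    return np.array([omega, 3 * g / (2 * l) * np.sin(theta)
                     + 3.0 / (m * l ** 2) * a])

def step_E(s, a, h):
    """Explicit Euler: s_{k+1} = s_k + h f(s_k, a_k)."""
    return s + h * f_pendulum(s, a)

def step_SI(s, a, h):
    """Semi-implicit Euler: update the velocity first, then advance the
    position with the new velocity."""
    omega_new = s[1] + h * f_pendulum(s, a)[1]
    theta_new = s[0] + h * omega_new
    return np.array([theta_new, omega_new])
```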
In the research on AI for control, the numerical scheme and time-resolution1 of observations \(h\) are usually fixed while simulating episodes. Assume we are given a controller that was trained on simulated data generated by a particular scheme and \(h\); we are interested in studying the controller robustness and properties after the zero-shot transfer to a simulator utilizing a different scheme or \(h\), e.g., explicit to semi-implicit or using smaller \(h\)'s.
Footnote 1: While in general time-resolution may not be equal to the time step, in this work we set them to be equal.
### Reinforcement Learning Framework
Following the standard setting used in RL, we work with the Markov decision process (MDP) formalism \((\mathcal{S},\mathcal{A},F,r,\rho_{0},\gamma)\), where \(\mathcal{S}\) is a state space, \(\mathcal{A}\) is an action space, \(F\colon\mathcal{S}\times\mathcal{A}\to\mathcal{S}\) is a deterministic transition function, \(r\colon\mathcal{S}\times\mathcal{A}\to\mathbb{R}\) is a reward function, \(\rho_{0}\) is an initial state distribution, and \(\gamma\in(0,1)\) is a discount factor used in training. \(\mathcal{S}\) may be equipped with an equivalence relation, e.g., for an angle variable \(\theta\), we have \(\theta\equiv\theta+k2\pi\) for all \(k\in\mathbb{Z}\). In RL, the agent (policy) interacts with the environment in discrete steps by selecting an action \(a_{t}\) for the state \(s_{t}\) at time \(t\), causing the state transition \(s_{t+1}=F(s_{t},a_{t})\); as a result, the agent collects a scalar reward \(r_{t+1}=r(s_{t},a_{t})\). The (undiscounted) return is defined as the sum of future rewards \(R_{t}=\sum_{i=t}^{T}r(s_{i},a_{i})\), with \(T>0\) being the fixed episode length of the environment. RL aims to learn a policy that maximizes the expected return over the starting state distribution.
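In code, a rollout and its undiscounted return under a deterministic transition \(F\) amount to a few lines (a generic sketch; the function names are ours):

```python
def episode_return(policy, F, r, s0, T):
    """Roll out the deterministic MDP for T steps; R is the
    undiscounted return R_0, i.e. the sum of collected rewards."""
    s, R = s0, 0.0
    for _ in range(T):
        a = policy(s)
        R += r(s, a)
        s = F(s, a)   # e.g. one explicit or semi-implicit Euler step
    return R
```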
In this work, we consider the family of MDPs in which the transition function is a particular numerical scheme. We study robustness w.r.t. the scheme; to distinguish the _transition function used for training (also called native)_ from the _transition function used for testing_, we introduce the notation \(F_{train}\) and \(F_{test}\) resp. e.g. explicit Euler with time-step \(h\) is denoted \(F_{*}(\mathds{E},h)\), where \(*\in\{test,train\}\).
## 3 Algorithm for Training of Symbolic Controllers and Small NNs
Carrying out the robustness study of symbolic and small NN controllers requires that the controllers are first constructed (trained). We designed a three-step deep learning algorithm for constructing symbolic and small NN controllers. Inspired by preceding work in this area, the controllers are derived from a deep RL NN controller. The overall algorithm is summarized in Alg. 1.
### RL Training
First, we train a deep NN controller using the state-of-the-art model-free RL algorithm TD3 [25, 12] in the SB3 implementation [30]. We choose TD3, as it utilizes a replay buffer and constructs deterministic policies (NNs). Plots of the evaluation along the training procedure for the studied systems can be found in App. C.
### Symbolic Regression
A random sample of states is selected from the TD3 training replay buffer. Symbolic abstractions of the deep NN deterministic policies are constructed by symbolic regression over the replay buffer samples. Following earlier work [22, 20, 24], the search is performed by an evolutionary algorithm; for this purpose, we employ the PySR Python library [7, 8]. The main hyperparameter of this step is the complexity limit (number of unary/binary operators) of the formulas (\(k\) in Alg. 1). This procedure outputs a collection of symbolic representations of varying complexity. Another important hyperparameter is the list of operators used to define the basis for the formulas. We use only the basic algebraic operators (addition, multiplication, division, and multiplication by a scalar). We also tried a search involving nonlinear functions like \(\tanh\), but the returns were comparable at larger complexity.
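A sketch of this step with PySR (the data files are hypothetical placeholders for states sampled from the replay buffer and the corresponding deep NN actions):

```python
import numpy as np
from pysr import PySRRegressor

X = np.load("replay_states.npy")    # hypothetical: sampled states
y = np.load("nn_actions.npy")       # hypothetical: deep NN policy outputs

model = PySRRegressor(
    niterations=100,
    binary_operators=["+", "*", "/"],  # basic algebraic operators only
    maxsize=20,                        # complexity limit (k in Alg. 1)
)
model.fit(X, y)       # evolutionary search over symbolic formulas
print(model.sympy())  # best formula found, as a sympy expression
```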
### Distilling Simple Neural Nets
Using a random sample of states from the TD3 training replay buffer, we find the parameters of the small NN representation via mean-squared error (MSE) regression.
### Controller Parameter Fine-tuning
Regression over the replay buffer alone is insufficient to construct controllers that achieve expected returns comparable with deep NN controllers, as noted in previous works. The regressed symbolic controllers are therefore subject to further parameter fine-tuning to maximize the rewards. Various fine-tuning strategies exist. In this work, we use the gradient-free stochastic optimization algorithm covariance matrix adaptation evolution strategy (CMA-ES) [19, 18]. We also implemented analytic gradient optimization, which takes advantage of the simple environment implementation and performs parameter optimization directly by gradient descent on model rollouts from the differentiable environment time-stepping implementation in PyTorch.
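A sketch of the CMA-ES fine-tuning loop using the `cma` package; `rollout_return` (the return of the symbolic controller with given coefficients over one simulated episode) is a placeholder, and `x0` stands for the coefficients produced by the regression step:

```python
import cma
import numpy as np

def objective(params, n_episodes=5):
    # CMA-ES minimizes, so we negate the mean return of the controller
    return -np.mean([rollout_return(params) for _ in range(n_episodes)])

x0 = np.zeros(6)                         # placeholder: regressed coefficients
es = cma.CMAEvolutionStrategy(x0, 0.5)   # initial step-size sigma = 0.5
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [objective(c) for c in candidates])
x_best = es.result.xbest                 # fine-tuned coefficients
```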
## 4 Studied Problems
We perform our experimental investigation and CAP support in the setting of two control problems belonging to the set of standard benchmarks for continuous optimization. First, the pendulum problem is part of the most commonly used benchmark suite for RL, OpenAI gym [5]. Second, the cartpole swing-up problem is part of the DeepMind control suite [35]; following earlier work [13], we used a closed-form implementation of it. While these problems are of relatively modest dimension compared to problems in the MuJoCo suite, we find them most suitable to convey our message. The low system dimension makes a self-contained cross-platform implementation easier and eventually allows us to provide certificates for our claims using interval arithmetic and CAPs.
### Pendulum
The pendulum dynamics is described by a 1d \(2^{nd}\)-order nonlinear ODE. We followed the implementation in OpenAI gym, where the ODE is discretized with the semi-implicit (SI) Euler method with \(h=0.05\). For training, we use \(F_{train}(\mathrm{SI},0.05)\). The velocity \(\omega\) is clipped to the range \([-8,8]\) and the control input \(a\) to \([-2,2]\). Several constants are involved: gravity, pendulum length, and mass \((g,l,m)\), which we set to the defaults. See App. A.1 for the details. The goal of the control is to stabilize the up position \(\theta=0\mod 2\pi\) with zero angular velocity \(\omega\). The problem uses a quadratic reward for training and evaluation, \(r=-\lfloor\theta\rfloor^{2}-0.1\omega^{2}-0.001a^{2}\), where \(\lfloor\theta\rfloor=\arccos(\cos(\theta))\) at a given time \(t\) and action \(a\). The episode length is \(200\) steps. The max reward is \(0\), and large negative rewards might indicate long-term simulated dynamics that are not controlled.
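For reference, the reward and its angle normalization in code (a direct transcription of the formula above):

```python
import numpy as np

def pendulum_reward(theta, omega, a):
    theta_n = np.arccos(np.cos(theta))   # folds the angle into [0, pi]
    return -theta_n ** 2 - 0.1 * omega ** 2 - 0.001 * a ** 2

print(pendulum_reward(np.pi, 0.0, 0.0))  # hanging down: worst angle term
```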
### Cartpole Swing-up
The cartpole dynamics is described by a system of two \(2^{nd}\)-order nonlinear ODEs with two position variables: the movement of the cart along a line (\(x,x^{\prime}\)) and the angle of a pole attached to the cart \((\theta,\theta^{\prime})\). We followed the implementation given in [15]. The ODEs are discretized by the explicit Euler (E) scheme with \(h=0.01\). As with the pendulum, we use clipping on some system states, and several constants are involved, which we set to the defaults; see App. B for details. The goal of the control is to stabilize the pole upwards (\(\theta=0\mod 2\pi\)) while keeping the cart \(x\) within fixed boundaries. The problem uses a simple reward \(r=\cos\theta\), plus an episode termination condition if \(|x|\) is above a threshold. The episode length is set to \(500\); hence the return is within \([-500,500]\). A large negative return is usually indicative of undesirable behaviour, with the pole continuously oscillating, the cart constantly moving, and the cart escaping the boundaries fairly quickly.
## 5 Rigorous Proof Methodology
All of our theorems presented in the sequel are supported by a computer-assisted proof, guaranteeing that they are fully rigorous in a mathematical sense. Based on the existing body of results and the algorithm we developed in Julia, we can carry out the proofs for different abstractions and problems as long as the set of points of non-differentiability is small; this covers almost all practical applications: ReLU nets, decision trees, and all sorts of problems involving dynamical systems in closed form. The input to our persistent-solution prover is a function in Julia defining the controlled problem, the only requirement being that the function can be automatically differentiated. To constitute a proof, this part needs to be carried out rigorously with interval arithmetic. Our CAPs are automatic; once our searcher finds a candidate for a persistent solution/PO, a CAP program attempts to verify the existence of the solution/PO by checking the assumptions of Theorem 1. If the prover succeeds, this concludes the proof.
### Interval Arithmetic
Interval arithmetic is a method of tracking rounding error in numerical computation. Operations on floating-point numbers are instead done on _intervals_ whose boundaries are floating-point numbers. Functions \(f\) of real numbers are _extended_ to functions \(\overline{f}\) defined on intervals, with the property that \(\overline{f}(X)\) necessarily contains \(\{f(x):x\in X\}\). The result is that if \(y\) is a real number and \(Y\) is a thin interval containing \(y\), then \(f(y)\in\overline{f}(Y)\). For background, the reader may consult the books [27, 38]. Function iteration on intervals leads to the _wrapping effect_, where the radius of an interval increases with composition depth. See Figure 1 for a visual.
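The wrapping effect can be observed directly with IntervalArithmetic.jl (the package our CAPs use, Sec. 8); the iterated map below is a toy stand-in, not one of the controlled systems.

```julia
# Wrapping effect: the radius of an interval enclosure grows as a
# nonlinear map is iterated, even when the initial interval is thin.
using IntervalArithmetic

f(x) = 3.75 * x * (1 - x)            # toy nonlinear map, extended to intervals

function iterate_enclosure(X, n)
    for _ in 1:n
        X = f(X)                     # guaranteed to contain the true iterate
    end
    return X
end

X = iterate_enclosure(interval(0.3, 0.3 + 1e-12), 30)
@show radius(X)                      # radius grows with composition depth
```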
### Computer-assisted Proofs of Periodic Orbits
For \(x=(x_{1},\ldots,x_{n})\), let \(||x||=\max\{|x_{1}|,\ldots,|x_{n}|\}\). The following is the core of our CAPs.
**Theorem 1**: _Let \(G:U\rightarrow\mathbb{R}^{n}\) be continuously differentiable, for \(U\) an open subset of \(\mathbb{R}^{n}\). Let \(\overline{x}\in\mathbb{R}^{n}\) and \(r^{*}\geq 0\). Let \(A\) be an \(n\times n\) matrix of full rank. Suppose there exist real numbers \(Y\), \(Z_{0}\) and \(Z_{2}\) such that_
\[||AG(\overline{x})||\leq Y,\tag{1}\]
\[||I-ADG(\overline{x})||\leq Z_{0},\tag{2}\]
\[\sup_{||\delta||\leq r^{*}}||A(DG(\overline{x}+\delta)-DG(\overline{x}))||\leq Z_{2},\tag{3}\]
_where \(DG(x)\) denotes the Jacobian of \(G\) at \(x\), and the norm on matrices is the induced matrix norm. If \(Z_{0}+Z_{2}<1\) and \(Y/(1-Z_{0}-Z_{2})\leq r^{*}\), the map \(G\) has a unique zero \(x\) satisfying \(||x-\overline{x}||\leq r\) for any \(r\in(Y/(1-Z_{0}-Z_{2}),r^{*}]\)._
A proof can be completed by following Thm. 2.1 in [9]. In Sec. 5.3, we identify a map \(G\) whose zeroes correspond to POs. Conditions (1)-(3) imply that the Newton-like operator \(T(x)=x-AG(x)\) is a contraction on the closed ball centered at the _approximate zero_ \(\overline{x}\) with radius \(r>0\). Being a contraction, it has a unique fixed point (\(x\) such that \(x=T(x)\)) by the Banach fixed point theorem. As \(A\) is of full rank, \(G(x)=0\), hence an orbit exists. The radius \(r\) measures how close the approximate orbit \(\overline{x}\) is to the exact orbit \(x\). The contraction property is rigorously verified by performing all necessary numerical computations in interval arithmetic. The technical details appear in App. D.2.
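The following Julia sketch shows the shape of such a verification: given \(G\), its Jacobian \(DG\), the approximate zero \(\overline{x}\), and \(A\approx DG(\overline{x})^{-1}\), it computes rigorous upper bounds for \(Y\), \(Z_0\), \(Z_2\) and checks the conditions of Theorem 1. The function and helper names are illustrative, not the repository's API.

```julia
# Sketch of the CAP kernel: rigorously checking the bounds (1)–(3) of
# Theorem 1 with interval arithmetic. G, its Jacobian DG, the approximate
# zero x̄, and A ≈ DG(x̄)⁻¹ are assumed given.
using IntervalArithmetic, LinearAlgebra

sup_norm(v::AbstractVector) = maximum(sup.(abs.(v)))                   # ‖·‖ on vectors
sup_opnorm(M::AbstractMatrix) = maximum(sup.(sum(abs.(M); dims = 2)))  # induced ∞-norm

function verify_zero(G, DG, x̄, A; rstar = 1e-6)
    X̄ = interval.(x̄)                          # thin enclosure of x̄
    Y  = sup_norm(A * G(X̄))                    # bound (1)
    Z₀ = sup_opnorm(I - A * DG(X̄))             # bound (2)
    B  = X̄ .+ interval(-rstar, rstar)          # ball of radius r* around x̄
    Z₂ = sup_opnorm(A * (DG(B) - DG(X̄)))       # bound (3)
    # final comparison in floats for readability; a fully rigorous check
    # performs it with outward rounding as well
    ok = Z₀ + Z₂ < 1 && Y / (1 - Z₀ - Z₂) ≤ rstar
    return ok ? Y / (1 - Z₀ - Z₂) : nothing    # radius r on success
end
```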
### Set-up of the Nonlinear Map
A PO is a finite MDP trajectory. Let the step size be \(h\), and let the period of the orbit be \(m\). We present a nonlinear map that encodes POs as its zeroes when \(h\) is fixed. However, for technical reasons (see App. E), it is possible for such a proof to fail. If Alg. 2 fails to prove the existence of an orbit with a fixed step size \(h\), we fall back to a formulation where the step size is not fixed, which is more likely to yield a successful proof. This alternative encoding map \(G_{2}\) is presented in App. D.1. Given \(h\), pick \(g(h,\cdot)\in\{g_{\mathrm{E}},g_{\mathrm{SI}}\}\), one of the discrete dynamical systems used for numerically integrating the ODE. Let \(p\) be the dimension of the state space, so \(g(h,\cdot):\mathbb{R}^{p}\rightarrow\mathbb{R}^{p}\). We interpret the first dimension of \(\mathbb{R}^{p}\) as the angular component, so that a periodic orbit requires a shift by a multiple of \(2\pi\) in this variable. Given \(h\), the number of steps \(m\) (i.e., the period of the orbit) and the number of signed rotations \(j\) in the angular variable, POs correspond exactly to zeroes of the map \(G_{1}:\mathbb{R}^{pm}\rightarrow\mathbb{R}^{pm}\), defined by
\[G_{1}(X)=\begin{pmatrix}x_{1}-g(h,x_{m})+(j2\pi,\mathbf{0})\\ x_{2}-g(h,x_{1})\\ x_{3}-g(h,x_{2})\\ \vdots\\ x_{m}-g(h,x_{m-1})\end{pmatrix},\]
where \(\mathbf{0}\) is the zero vector in \(\mathbb{R}^{p-1}\), \(X=(x_{1},\ldots,x_{m})\) for \(x_{i}\in\mathbb{R}^{p}\), and \(x_{1},\ldots,x_{m}\) are the time-ordered states.
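A direct transcription of \(G_1\) in Julia might look as follows; the state matrix layout (rows \(x_1,\dots,x_m\)) and the signature of the integrator step \(g\) are our assumptions.

```julia
# Sketch of the encoding map G₁; an orbit is stored as an m×p matrix X
# whose rows x₁, …, xₘ are the time-ordered states, and g(h, x) is one
# step of the chosen integrator (E or SI).
function G1(g, h, X::AbstractMatrix, j::Integer)
    m, p = size(X)
    shift = [j * 2π; zeros(p - 1)]                   # (j2π, 0): unwind j rotations
    out = similar(X)
    out[1, :] = X[1, :] .- g(h, X[m, :]) .+ shift    # periodicity condition
    for i in 2:m
        out[i, :] = X[i, :] .- g(h, X[i-1, :])       # consecutive integrator steps
    end
    return out
end
```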
## 6 Persistent Orbits in Controlled Pendulum
When constructing controllers using machine learning or statistical methods, the criterion most often used for measuring their quality is the mean return over many test episodes. The mean return may be a deceptive metric for constructing robust controllers. More strongly, our findings suggest that mean return is not correlated with the presence of periodic orbits or robustness. One would typically expect a policy with high mean return to promote convergence toward states that maximize the return for any initial condition (IC) and also for other numerical schemes. Our experiments revealed reasons to believe this may be true for deep NN controllers. However, in the case of simple symbolic controllers, singular persistent solutions exist that accumulate large negative returns at a fast pace. By persistent solutions we mean periodic orbits that remain \(\varepsilon\) away from the desired equilibrium. We formalize this notion in Sec. 7.1. We emphasize that all of the periodic orbits that we prove are necessarily stable in the usual Lyapunov sense, i.e., solutions that start out near an equilibrium stay near it forever, and hence are feasible in numerical simulations. We find such solutions for controllers provided in the literature and constructed by ourselves employing Alg. 1. Our findings are not merely numerical: we support them with (computer-assisted) mathematical proofs of existence.
### Landajuela et al. [24] Controller
First, we consider the symbolic low-complexity controller for the pendulum \(a=-7.08s_{2}-(13.39s_{2}+3.12s_{3})/s_{1}+0.27\), derived in [24] (with the model given in App. A.1), where \(s_{1}=\cos\theta\), \(s_{2}=\sin\theta\), \(s_{3}=\omega=\theta^{\prime}\), and \(a\) is the control input. While this controller looks more desirable than a deep NN with hundreds of thousands of parameters, its performance changes dramatically when using a slightly different transition function at test time, i.e., halved \(h\) (\(F_{test}(\mathrm{SI},0.025)\)) or the explicit Euler scheme (\(F_{test}(\mathrm{E},0.05)\)). Trajectories in Fig. 2 illustrate that some orbits oscillate instead of stabilizing at the equilibrium \(\hat{s}=\hat{\theta}=0\bmod 2\pi\). The average return significantly deteriorates for the modified schemes and the same ICs compared to \(F_{train}(\mathrm{SI},0.05)\); see Tab. 1. Such issues are present in deep NN controllers and small distilled NNs to a significantly lower extent.
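For concreteness, this controller transcribes to a one-line Julia function; note the division by \(s_1=\cos\theta\), which Sec. 6.2 returns to.

```julia
# The symbolic controller of [24]; s₁ = cos θ, s₂ = sin θ, s₃ = ω.
# The division by cos θ makes the control input blow up near θ = ±π/2.
landajuela_controller(θ, ω) =
    -7.08 * sin(θ) - (13.39 * sin(θ) + 3.12 * ω) / cos(θ) + 0.27
```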
Figure 1: Left: midpoint of the interval enclosure of a proven persistent solution (see Tab. 23 in the Appendix). Right: log-scale radius of the interval enclosure. Calculations were done at 163-bit precision, the minimum possible for this solution at episode length 1000.
We associate the cause of the return deterioration with the existence of 'bad' solutions - persistent periodic orbits (POs) (formal Def. 1). Using CAPs (cf. Sec. 5) we obtain:
**Theorem 2**: _For \(h\in H=\{0.01,0.005,0.0025,0.001\}\), the nonlinear pendulum system with the controller \(a\) from [24] described in the opening paragraph of Sec. 6.1 has a periodic orbit (PO) under the following numerical schemes: 1) (SI) with step size \(h\in H\); 2) (E) at \(h=0.05\) (native), and for all \(h\in H\)._
_The identified periodic orbits are persistent (see Def. 2) and generate minus infinity return for infinite episode length, with each episode decreasing the reward by at least 0.198._
### Our Controllers
The issues with robustness and performance of the controllers of Sec. 6.1 may be an artefact of a particular controller construction rather than a general property. Indeed, that controller had a division by \(s_{1}\). To investigate this further, we apply Alg. 1 to construct symbolic controllers of various complexities (without divisions). Using Alg. 1 we also distill a small NN (a single hidden layer with \(10\) neurons) for comparison. In step 2 we use fine-tuning based on either the analytic gradient or CMA-ES, each leading to different controllers. The studied controllers were trained using the default transition \(F_{train}(\mathrm{SI},0.05)\) and tested using \(F_{test}(\mathrm{E},0.05)\), \(F_{test}(\mathrm{E},0.025)\), \(F_{test}(\mathrm{SI},0.05)\), and \(F_{test}(\mathrm{SI},0.025)\).
Tab. 1 reveals that the average returns deteriorate when using other numerical schemes for the symbolic controllers obtained using Alg. 1, analogous to the controller from [24]. The average return discrepancies are very large as well. We emphasize that all of the studied metrics for the symbolic controllers are far from the metrics achieved by the deep NN controller. Terminating Alg. 1 at step 2 results in a very bad controller achieving a mean return of only \(-1061\); i.e., as observed in previous works, symbolic regression over a dataset sampled from a trained NN is not enough to construct a good controller. Analogous to Theorem 2, we are able to prove the following theorems on persistent periodic orbits (Def. 1) for the controllers displayed in Tab. 1.
**Theorem 3**: _For \(h\in H=\{0.025,0.0125\}\), the nonlinear pendulum system with controller generated by analytic gradient refinement in Tab. 1 has POs under 1) (SI) with \(h\in H\) and at the native step size \(h=0.05\), 2) (E) with \(h\in H\)._
_The identified periodic orbits are persistent (see Def. 2) and generate minus infinity return for infinite episode length, with each episode decreasing the reward by at least \(0.18\)._
**Theorem 4**: _For \(h=0.0125\) and \(h=0.05\) (native) with scheme (E), the nonlinear pendulum system with controller generated by CMA-ES refinement in Tab. 1 has POs which generate minus infinity return for infinite episode length, with each episode decreasing the reward by at least 0.20._
## 7 Systematic Robustness Study
We consider a controller to be _robust_ when it has "good" return statistics at the native simulator and step size, and these persist when we change the simulator and/or decrease the step size. If the return statistics degrade when the integrator or step size is varied, we wish to identify the source.
### Background on Persistent Solutions and Orbits
Consider an MDP tuple \((\mathcal{S},\mathcal{A},F,r,\rho_{0},\gamma)\), a precision parameter \(\varepsilon>0\), a policy \(\pi\colon\mathcal{S}\to\mathcal{A}\) (trained using \(F_{train}\) and tested using \(F_{test}\)), a desired equilibrium \(\hat{s}\) (corresponding to the maximized reward \(r\)), and an episode length \(N\).
**Definition 1**: _We call a persistent periodic orbit (PO) (of period n) an infinite MDP trajectory \((s_{0},a_{0},r_{1},s_{1},a_{1},\dots)\), such that \(s_{kn}=s_{0}\) for some \(n>1\) and all \(k\in\mathbb{N}\), and such that \(\|\hat{s}-s_{j}\|>\varepsilon\) for all \(j\geq 0\)._
**Definition 2**: _A finite MDP trajectory of episode length \(N\), \((s_{0},a_{0},r_{1},s_{1},a_{1},\dots,s_{N})\), such that \(\|\hat{s}-s_{j}\|>\varepsilon\) for all \(0\leq j\leq N\) is called a persistent solution._
Locating the objects in the dynamics responsible for degradation of the reward is not an easy task, as they may be singular or local minima of a non-convex landscape. For locating such objects we experimented with different strategies and found evolutionary search for _penalty-maximizing solutions_ the most suitable. The solutions identified by such a procedure are necessarily stable. We introduce a measure of 'badness' of persistent solutions and use it as a search criterion.
**Definition 3**: _We call a penalty value, a function \(p\colon\mathcal{S}\times\mathcal{A}\to\mathbb{R}_{+}\), such that for a persistent solution/orbit the accumulated penalty value is bounded from below by a set threshold \(M\gg 0\), that is \(\sum_{i=0}^{N-1}p(s_{i},a_{i})\geq M\)._
**Remark 4**: _The choice of the particular penalty in Def. 3 depends on the studied example. We choose the following penalties for the studied problems._
_1. \(p(s,a)=-r(s,a)\) for pendulum._
_2. \(p(s,a)=-r(s)+0.5(\theta^{\prime})^{2}+0.5(x^{\prime})^{2}\) for cartpole swing-up: we subtract from the native reward value \(r(s)=\cos\theta\) the scaled sum of squared velocities (of the cart and the pole) and turn off the episode termination condition. This allows capturing orbits that manage to stabilize the pole but are unstable and keep the cart moving. The threshold \(M\) in Def. 3 can be set by propagating a number of trajectories with random ICs and taking the maximal penalty as \(M\) (see the sketch below)._
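A minimal sketch of accumulating the shaped cartpole penalty along a trajectory, as used by the search in Alg. 2; the state layout \((\theta,\theta^{\prime},x,x^{\prime})\) and the `step` transition (one step of \(F_{test}\) with termination disabled) are assumptions.

```julia
# Accumulated penalty (Def. 3) with the cartpole shaping of Remark 4.
penalty(θ, θ′, x, x′) = -cos(θ) + 0.5 * θ′^2 + 0.5 * x′^2

function accumulated_penalty(step, policy, s0, N)
    s, total = s0, 0.0
    for _ in 1:N
        total += penalty(s...)      # s is a (θ, θ′, x, x′) tuple
        s = step(s, policy(s))      # one transition of F_test, no termination
    end
    return total                    # a candidate is kept when total ≥ threshold M
end
```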
**Remark 5**: _For a PO, the accumulated penalty admits a linear lower bound, i.e. \(\sum_{m=0}^{n-1}p(s_{m},a_{m})\geq Cn\) for some \(C>0\). Thm. 2 implies \(C=0.14\) for the POs in Tab. 6 in the Appendix._
Figure 2: \(100\) numerical simulations with IC \(\omega=0\) and \(\theta\) sampled uniformly; time horizon set to \(T=6\); the \(x\)-axis shows the (unnormalized) \(\omega\), and the \(y\)-axis \(\theta\). In (a), all ICs are attracted by an equilibrium at \(\theta=0\bmod 2\pi\), \(\omega=0\). When applying a different \(F_{test}\), (b) and (c) show the existence of attracting periodic solutions (they can be continued infinitely, as our theorems demonstrate).
### Searching for and Proving Persistent Orbits
We designed a pipeline for automated persistent/periodic orbit search together with interval proof certificates. By an interval proof certificate of a PO we mean interval bounds within which a CAP of the orbit's existence was carried out via the Newton scheme (see Sec. 5.2); by a proof certificate of a persistent solution (which may or may not be a PO) we mean interval bounds for the solution at each step, together with a bound on the reward value, showing that the solution does not stabilize by verifying the lower bound \(\|\hat{s}-s_{t}\|>\varepsilon\). The search procedure is implemented in Python, while the CAP part is in Julia; refer to Sec. 5 for further details.
```
Require: \(F_{test}\); control policy \(\pi\); hyperparameters of the evolutionary search; penalty function \(p\); trajectory length; search domain;
Ensure: interval certificates of persistent/periodic orbits;
 1: for each MDP do
 2:     for number of searches do
 3:         initialize CMA-ES search within specified bounds;
 4:         search for a candidate maximizing penalty \(p\) over the fixed episode length;
 5:     end for
 6:     order found candidates w.r.t. their \(p\) value;
 7: end for
 8: for each candidate do
 9:     search for a nearby periodic orbit with Newton's method correction applied to a suitable sub-trajectory;
10:     if a potential periodic orbit is found then
11:         attempt to prove existence of the orbit with Thm. 1;
12:         if the proof succeeds then
13:             return an interval certificate of the orbit;
14:         else
15:             return proof failure;
16:         end if
17:     else
18:         return periodic orbit not found;
19:     end if
20:     produce and return an interval certificate of the persistent (uncontrolled) solution;
21: end for
```
**Algorithm 2** Persistent Solutions/Orbits Search & Prove
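The Newton correction in Alg. 2 (candidate refinement before the CAP stage) can be sketched as follows; here \(G\) is the flattened encoding map of Sec. 5.3 and ForwardDiff.jl (Sec. 8) supplies the Jacobian. This step uses plain floating point and is not rigorous; rigor enters only in the subsequent CAP.

```julia
# Polish a candidate trajectory X0 into an approximate zero of the
# encoding map G (flattened to a vector-to-vector function).
using ForwardDiff

function newton_polish(G, X0; iters = 20, tol = 1e-12)
    x = copy(X0)
    for _ in 1:iters
        Fx = G(x)
        maximum(abs, Fx) < tol && break       # converged to an approximate zero
        x -= ForwardDiff.jacobian(G, x) \ Fx  # Newton step
    end
    return x                                  # candidate passed on to the CAP stage
end
```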
### Findings: Pendulum
Changing the simulator or step size resulted in substantial mean return loss (see Tab. 1), and simulation revealed stable POs (see Fig. 2). We proved the existence of POs using the methods of Secs. 5.2-5.3. Proven POs and persistent solutions are presented in the tables in App. F. See also Fig. 3, where a persistent solution shadows an unstable PO before converging to the stable equilibrium.
Comparing the mean returns in Tab. 1, we immediately see that the deep NN controller's performance does not deteriorate as much as that of the symbolic controllers, whereas the small net lies between the two extremes. This observation is confirmed after running Alg. 2 for the symbolic controllers and the NNs. In particular, for the deep NN we did not identify any stable periodic orbits or especially long persistent solutions. However, the deep NN controller is not entirely robust, admitting singular persistent solutions achieving returns far from the mean; refer to Tab. 4. On the other hand, the small \(10\)-neuron NN also seems to be considerably more robust than the symbolic controllers. For the case \(F_{test}(\mathrm{E},0.05)\) its average returns are two times larger than for the symbolic controllers, but still two times smaller than for the deep NN. However, in the case \(F_{test}(\mathrm{E},0.05)\), the average returns are close to those of the deep NN, contrary to the symbolic controllers. The small NN compares favorably to the symbolic controllers in terms of the E/SI return discrepancy metrics, still not reaching the level of the deep NN. This supports our earlier conjecture (Sec. 1) that controller robustness is proportional to parametric complexity.
### Findings: Cartpole Swing-up

As with the pendulum, the small NN sits between the symbolic and deep NN in terms of the studied metrics. We computed the mean accumulated shaped penalty \(p(s,a)=-r(s)+0.5(\theta^{\prime})^{2}+0.5(x^{\prime})^{2}\) for the selected controllers in Tab. 5. The contrast between the deep NN and the symbolic controller is clear, with the small NN lying between those two extremes. The mean penalty is a measure of the prevalence of persistent solutions. However, we emphasize that the deep NN controller is not entirely robust and also admits singular persistent solutions with bad returns; refer to Tab. 4. Rigorously proving the returns for the deep NN was not possible in this case; see Rem. 6.
## 8 Codebase
Our full codebase is written in Python and Julia and shared in a GitHub repository [2]. The second part of our codebase is written in Julia due to the lack of a suitable interval arithmetic library in Python. The Python part of the codebase consists of four independent scripts: deep NN policy training, symbolic/small NN controller regression, regressed controller fine-tuning, and periodic orbit/persistent solution search. All controllers that we use are implemented in PyTorch [28]. For the deep NN policy training we use the Stable-Baselines3 library [30], which outputs a trained policy (the one achieving the best return during training) and the training replay buffer. For the symbolic regression we employ the PySR library [7]. For the regressed controller fine-tuning we employ the pycma CMA-ES implementation [18]. Our implementation in Julia uses two external packages: IntervalArithmetic.jl [33] (for interval arithmetic) and ForwardDiff.jl [31] (for forward-mode automatic differentiation). These packages are used together to perform the necessary calculations for the CAPs.
## 9 Conclusion and Future Work
Our work is a first step towards a comprehensive robustness study of deep NN controllers and their symbolic abstractions, which are desirable for deployment and trustworthiness reasons. Studying the controllers' performance on simple benchmarks, we identify and prove the existence of an abundance of persistent solutions and periodic orbits. Persistent solutions are undesirable and can be exploited by an adversary. Future work will apply the developed methods to study higher-dimensional problems often used as benchmarks for continuous control.
## 10 Acknowledgements
The project is financed by the Polish National Agency for Academic Exchange. The first author has been supported by the Polish National Agency for Academic Exchange Polish Returns grant no. PPN/PPO/2018/1/00029 and the University of Warsaw IDUB New Ideas grant. This research was supported in part by PL-Grid Infrastructure.